"Science, Faculty of"@en . "Statistics, Department of"@en . "DSpace"@en . "UBCV"@en . "Romann, Alexandra"@en . "2009-02-27T19:45:37Z"@en . "2008"@en . "Master of Science - MSc"@en . "University of British Columbia"@en . "Measurement error is a frequent issue in many research areas. For instance,\nin health research it is often of interest to understand the relationship be\ntween an outcome and an exposure, which is often mismeasured if the study\nis observational or a gold standard is costly or absent. Measurement error\nin the explanatory variable can have serious effects, such as biased parame\nter estimation, loss of power, and masking of the features of the data. The\nstructure of the measurement error is usually not known to the investigators,\nleading to many difficulties in finding solutions for its correction.\nIn this thesis, we consider problems involving a correctly measured con\ntinuous or binary response, a mismeasured continuous exposure variable,\nalong with another correctly measured covariate. We compare our proposed\nBayesian approach to the commonly used simulation extrapolation (SIMEX)\nmethod. The Bayesian model incorporates the uncertainty of the measure\nment error variance and the posterior distribution is generated by using the\nGibbs sampler as well as the random walk Metropolis algorithm. The com\nparison between the Bayesian and SIMEX approaches is conducted using\ndifferent cases of a simulated data including validation data, as well as the\nFramingham Heart Study data which provides replicates but no validation\ndata. The Bayesian approach is more robust to changes in the measurement\nerror variance or validation sample size, and consistently produces wider\ncredible intervals as it incorporates more uncertainty.\nThe underlying theme of this thesis is the uncertainty involved in the es\ntimation of the measurement error variance. We investigate how accurately\nthis parameter has to be estimated and how confident one has to be about\nthis estimate in order to produce better results by choosing the Bayesian\nmeasurement error correction over the naive analysis where measurement\nerror is ignored."@en . "https://circle.library.ubc.ca/rest/handle/2429/5236?expand=metadata"@en . "1003343 bytes"@en . "application/pdf"@en . "Evaluating the Performance of Simulation Extrapolation and Bayesian Adjustments for Measurement Error by Alexandra Romann B.Sc., Thompson Rivers University, 2005 A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in The Faculty of Graduate Studies (Statistics) The University of British Columbia (Vancouver) August, 2008 \u00C2\u00A9 Alexandra Romann 2008 Abstract Measurement error is a frequent issue in many research areas. For instance, in health research it is often of interest to understand the relationship be tween an outcome and an exposure, which is often mismeasured if the study is observational or a gold standard is costly or absent. Measurement error in the explanatory variable can have serious effects, such as biased parame ter estimation, loss of power, and masking of the features of the data. The structure of the measurement error is usually not known to the investigators, leading to many difficulties in finding solutions for its correction. In this thesis, we consider problems involving a correctly measured con tinuous or binary response, a mismeasured continuous exposure variable, along with another correctly measured covariate. 
We compare our proposed Bayesian approach to the commonly used simulation extrapolation (SIMEX) method. The Bayesian model incorporates the uncertainty of the measurement error variance, and the posterior distribution is generated by using the Gibbs sampler as well as the random walk Metropolis algorithm. The comparison between the Bayesian and SIMEX approaches is conducted using different cases of simulated data including validation data, as well as the Framingham Heart Study data, which provides replicates but no validation data. The Bayesian approach is more robust to changes in the measurement error variance or validation sample size, and consistently produces wider credible intervals as it incorporates more uncertainty.

The underlying theme of this thesis is the uncertainty involved in the estimation of the measurement error variance. We investigate how accurately this parameter has to be estimated, and how confident one has to be about this estimate, in order to produce better results by choosing the Bayesian measurement error correction over the naive analysis where measurement error is ignored.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication
1 Introduction
  1.1 Overview of Some Currently Available Methods
    1.1.1 Functional Methods
    1.1.2 Structural Methods
2 Simulation Extrapolation
3 Bayesian Estimation
  3.1 Bayes Rule
  3.2 Choice of Prior Distributions
  3.3 Markov Chain Monte Carlo Algorithms
    3.3.1 The Metropolis-Hastings Algorithm
    3.3.2 Random Walk Metropolis Algorithm
    3.3.3 The Gibbs Sampler
4 Simulation Study
  4.1 Simulation Extrapolation Approach
  4.2 Bayes-MCMC Approach
  4.3 Comparison: SIMEX versus Bayes-MCMC
5 Exploring the Bayes-MCMC Correction
6 Framingham Study Example
  6.1 SIMEX Analysis
  6.2 Bayes-MCMC Analysis
    6.2.1 Analysis Using R
    6.2.2 Analysis Using WinBUGS
  6.3 Results
7 Conclusion and Future Work
Bibliography

List of Tables

4.1 MSEs of SIMEX and Bayesian estimates
4.2 Average widths of CIs using SIMEX and Bayesian analyses
4.3 CI coverage of the true values using SIMEX and Bayesian analyses
5.1 Results of the prior knowledge investigation
6.1 Framingham example results

List of Figures

4.1 SIMEX extrapolation plot when $\tau^2$ is moderate
4.2 SIMEX extrapolation plot when $\tau^2$ is large
4.3 SIMEX plot when $\tau^2$ is moderate
4.4 SIMEX plot when $\tau^2$ is large
4.5 Traceplots and posterior densities of $\beta$ and $\tau$ when $\tau^2$ is moderate
4.6 Traceplots and posterior densities of $\beta$ and $\tau$ when $\tau^2$ is large
5.1 Inverse Gamma plots
5.2 Inverse Gamma (50, 12.75) plot
5.3 Graphical representation of the results of the prior knowledge investigation
6.1 SIMEX extrapolation plot for the Framingham example
6.2 Traceplots and posterior density plots of the Bayesian analysis of the Framingham example
6.3 Traceplots and posterior density plots of the WinBUGS analysis of the Framingham example

Acknowledgements

First of all, I thank my supervisor, Professor Paul Gustafson, and my co-supervisor, Dr. Nhu Le, for their support throughout my study period in the Department of Statistics. It was an honour to work with both of them, and their guidance helped me complete this thesis.
I express sincere gratitude to Professors Lang Wu, Harry Joe, Arnaud Doucet, Raphael Gottardo, Matías Salibián-Barrera, and Michael Schulzer for their excellent teaching and mentorship. I also thank Mike Marin for his friendship and encouragement, as well as the office staff for their help.

I am grateful to all the graduate students for making the department a great place to study. In particular, I thank Michael Regier, Justin Harrington, Aline Tabet, Kimberly Fernandes, and Luke Bornn for their friendship and advice.

On a personal note, I am eternally grateful to my parents, Ursula and Max Romann, who have provided me with an abundance of kind support all of my life, and to Neal Dustin for his love and patience.

Alexandra Romann
The University of British Columbia
August 2008

To My Parents

Chapter 1 Introduction

In many research areas the measurement accuracy of a variable is a frequent issue. For instance, in health research it is often of interest to understand the relationship between an outcome Y and an exposure X. When an instrument is available that measures X correctly, such a tool is called a gold standard. Suppose that X can only be measured imprecisely due to the absence or high cost of a gold standard. Examples of such a scenario are intakes of foods or drugs, and exposure to airborne pollutants or radiation. If the covariate is a categorical variable, the problem with such imprecise measurement is called misclassification, whereas it is referred to as measurement error for a continuous variable. In this thesis, we will only concern ourselves with the latter case. There exists a vast literature on misclassification; for example, see Gustafson (2004).

Covariate measurement error problems are concerned with inference on regression coefficients where one or more of the covariates are measured imperfectly. It has been documented extensively that treating this surrogate variable as if it were the true exposure can lead to very poor results. We illustrate this with an example in Chapter 4. Carroll, Ruppert, Stefanski and Crainiceanu (2006) call the effect of ignoring imprecision in mismeasured covariates the Triple Whammy of measurement error, because it causes bias in parameter estimation, leads to a loss of power, and masks the features of the data. Many methods have been proposed for countering these issues. We will focus on the first problem of how to offset the bias caused by measurement error. It is interesting that the magnitude of a regression coefficient of the incorrectly measured variable is often biased towards zero, a phenomenon called attenuation. In other words, it will underestimate the true value.

The actual measurement that is made for X is the surrogate and incorrect exposure variable X*. Further, we denote the correctly measured response by Y and the other correctly observed covariates by Z. The set of observed covariates is thus {X*, Z}. Measurement error problems can be classified into nondifferential and differential problems, the first of which occurs when X* contains no information about Y other than what is available in X and Z. In this case, the distribution of the mismeasured surrogate depends only on the true explanatory variable and not on the response. Otherwise, the measurement error is considered differential. For a given problem it is important to know how the error is arising in the covariate.
In the classical additive error model, $X^* = X + U$, where U is the measurement error, $U \perp (X, Y, Z)$, and $E(U \mid X) = 0$. This model is used when the error-prone covariate is measured uniquely for an individual, which is especially true when the measurement can be replicated. Examples of classical additive error arise in nutrition surveys and blood pressure measurements. However, if all subjects in a small group are given the same value of the error-prone covariate, and thus the independence assumption between U and X is not satisfied, then the Berkson error model is better suited. In this model, $X = X^* + U$, where $U \perp (X^*, Y, Z)$. This applies to dust exposure data of miners, for example. Employees having worked in the mine for a fixed number of years are assigned the same exposure to dust, even though the true exposure is almost certainly unique to each individual.

The models discussed above are additive measurement error models; however, the error can also arise in a multiplicative fashion. Examples are problems where the exposure X is positive and skewed, which occurs frequently in epidemiological problems. In that case, the observed exposure might be assumed to equal the product of the true exposure and the measurement error, such that $X^* = XU$. Log-exposure can be used to convert such a case to an additive measurement error model. It then follows that $\log X^* = \log X + \log U$.

1.1 Overview of Some Currently Available Methods

Many methods have been proposed to deal with measurement error. They can be broadly grouped into functional and structural models (Carroll, Ruppert, Stefanski and Crainiceanu, 2006). In functional modelling, the Xs are regarded as fixed or random with no or minimal distributional assumptions. In structural models the Xs are regarded as random variables, thus the potential for exposure model misspecification arises.

1.1.1 Functional Methods

In order to use functional methods, the size of the measurement error needs to be estimated. In some cases, a gold standard test is available and one is able to observe X directly for some subset of the data. This is the validation data from which the measurement error variance is estimated. In other cases, replication data or measurements from another instrument may be available.

A simple and quite general method called regression calibration (Pierce and Kellerer, 2004) is appropriate when such a gold standard or replicates are available and a linear measurement error with constant variance applies. In this method, $E(X \mid X^*, Z)$ is estimated and the unobserved Xs are replaced by these estimates. Then a standard analysis is run to obtain the parameter estimates. The resulting standard errors are adjusted to account for the estimation of the parameters, using the bootstrap, for example. This method has been shown to be very useful for generalized linear models, but performs poorly for highly nonlinear models (Carroll, Ruppert, Stefanski and Crainiceanu, 2006).

Another simple method that is also potentially applicable to any regression model, but is more computationally intensive than regression calibration, is the simulation extrapolation approach (SIMEX). For SIMEX, the bias function can be expressed in closed form for linear regression, and thus having more data leads to better estimates. However, when the bias function is not known and has to be approximated, having more data does not improve this approximation.
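To make the error structures introduced earlier in this section concrete, the following short R sketch (not from the thesis; all numeric values are illustrative choices) generates data under the classical additive, Berkson, and multiplicative models, and shows that taking logarithms turns the multiplicative case into an additive one.

## A small illustrative sketch (not from the thesis) of the three error structures.
set.seed(1)
n <- 1000
x <- rexp(n)                               # a positive, skewed true exposure

## Classical additive error: X* = X + U, with U independent of X
xs_classical <- x + rnorm(n, sd = 0.3)

## Berkson error: the true value scatters around an assigned value X*
xs_berkson <- rep(quantile(x, c(0.25, 0.5, 0.75)), length.out = n)  # shared assigned exposures
x_berkson  <- xs_berkson + rnorm(n, sd = 0.3)

## Multiplicative error, made additive on the log scale
u <- rlnorm(n, sdlog = 0.2)
xs_mult <- x * u
all.equal(log(xs_mult), log(x) + log(u))   # log X* = log X + log U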
We will discuss the SIMEX method in detail in the next chapter. Although regression calibration and SIMEX are quite general methods for eliminating or reducing measurement error bias, they result in estimators that are consistent only in important special cases such as linear regression (Carroll, Ruppert, Stefanski and Crainiceanu, 2006). In addition, when the measurement error variance is large, both regression calibration and SIMEX may not be useful in reducing the bias induced by the mismeasurement.

Score function methods are almost as widely applicable, but result in fully consistent estimators more generally. In these methods, consistency of the estimators is due to the fact that they are M-estimators whose score functions are unbiased even if there is measurement error. The conditional-score method (Stefanski and Carroll, 1987; Nakamura, 1990) and the corrected-score method (Stefanski, 1989) differ in their underlying assumptions and ease of computation. When the assumptions for both are satisfied, the conditional-score method will usually be more efficient than the corrected-score method. However, sometimes (for example in Poisson regression) the conditional-score estimator requires numerical integration, while the corrected score has a closed form expression. Both score function methods are not straightforward to implement, and the problem of multiple roots is often serious (Hossain, 2007). SIMEX and regression calibration are simple to implement because they are "add-on" packages to existing software, whereas other methods such as the score function approaches require a completely different analysis set-up.

1.1.2 Structural Methods

To use structural models, the best possible specifications of the measurement $(X^* \mid Y, X, Z)$, outcome $(Y \mid X, Z)$, and exposure $(X \mid Z)$ models are needed. Bayesian methods fall into this category, as they are based on a likelihood approach followed by direct analytic calculations if possible, or otherwise by estimation methods such as Markov chain Monte Carlo (MCMC) sampling techniques. We will discuss such approaches further in this thesis. Alternatively, one could use likelihood estimation, which also requires fully specified measurement, outcome, and exposure models. In this case, the Expectation-Maximization algorithm could be applied instead of the Bayesian MCMC techniques. Bayesian methods have the advantage that they incorporate parameter uncertainty, whereas frequentist approaches are less able to incorporate such uncertainty. However, they require fully specified exposure models, while functional approaches, such as SIMEX, do not. This eliminates the risk of misspecification in the use of functional approaches.

Chapter 2 Simulation Extrapolation

The simulation extrapolation (SIMEX) method was first proposed by Cook and Stefanski (1994) and is a simulation-based means of estimating and reducing bias due to additive measurement error (Carroll, Ruppert, Stefanski and Crainiceanu, 2006). It can be used for inference in parametric measurement error models when the measurement error variance is known or can be estimated (Cook and Stefanski, 1994). Additional measurement error is added to the already mismeasured covariate and the corresponding parameter estimates are calculated. The relationship between the parameter estimate and the amount of added measurement error is then extrapolated back to the case of no measurement error.
Cook and Stefanski (1994) show that this approach is asymptotically equivalent to method-of-moments estimation in linear measurement error models. The authors make a very simplified comparison between SIMEX and method-of-moments estimation using Monte Carlo-derived estimating equations. SIMEX is intuitive, simple to implement, and can be applied to just about any regression model, as it creates new datasets with added measurement error. SIMEX is applicable to general estimation methods, such as least squares, maximum likelihood, or quasilikelihood, and can also be extended to nonadditive models (Carroll, Ruppert, Stefanski and Crainiceanu, 2006). One way to deal with multiplicative measurement error would be to convert it to additive error by taking logarithms, as discussed in the last chapter.

To implement this method we use the simex package for covariate measurement error in R, written by Lederer and Küchenhoff (2008). We assume a simple linear regression of the form $Y = \beta_0 + \beta_1 X + \epsilon$ with additive and nondifferential measurement error $X^* = X + U$, where $U \perp (Y, X)$, $E(U) = 0$, and $\mathrm{Var}(U) = \sigma_u^2$. Typically, normality of the measurement error is assumed, although this is not completely critical in practice. It is also assumed that the measurement error variance, $\sigma_u^2$, is known or well estimated. Now, if $\sigma_u^2 > 0$, then the ordinary least squares estimate of $\beta_1$ from regressing Y on $X^*$, denoted by $\hat{\beta}_{x,\mathrm{naive}}$, is biased. Note that SIMEX is usually not used for simple linear regression, as simple method-of-moments bias corrections can be used.

We add further measurement error to the already noisy predictor and create a new data set in which the measurement error variance is greater. Suppose that this process is repeated until we have a total of M data sets, including the naive one. Then each of these data sets has a successively larger measurement error variance, $(1 + \zeta_m)\sigma_u^2$, where $0 = \zeta_1 < \zeta_2 < \cdots < \zeta_M$ are set by the user. The least squares estimate of the slope from the m-th data set, $\hat{\beta}_{x,m}$, $m = 1, 2, \ldots, M$, consistently estimates $\beta_x \sigma_x^2 / \{\sigma_x^2 + (1 + \zeta_m)\sigma_u^2\}$. If we regress $\hat{\beta}_{x,m}$ on $\zeta_m$, we get the following mean function:
\[
G(\zeta) = \frac{\beta_x \sigma_x^2}{\sigma_x^2 + (1 + \zeta)\sigma_u^2}, \qquad \zeta \ge 0.
\]
We can extrapolate back to $\zeta = -1$ to get to the case of no measurement error. From the equation above it is evident that this is the case because $G(-1) = \beta_x$.

We now describe these steps in detail (Carroll, Ruppert, Stefanski and Crainiceanu, 2006). First, simulated data sets with increasingly larger measurement error variance have to be created. Define
\[
X^*_{m,i}(\zeta) = X^*_i + \sqrt{\zeta}\, U_{m,i}, \qquad i = 1, \ldots, n, \; m = 1, \ldots, M,
\]
where the $\{U_{m,i}\} \sim N(0, \sigma_u^2)$ are mutually independent and identically distributed. They are also independent of all observed data. Because the error variance in the simulated data has been inflated by a multiplicative factor of $(1 + \zeta)$, the error variance is zero, in theory, when $\zeta = -1$. Mathematically,
\[
\mathrm{Var}\{X^*_{m,i}(\zeta) \mid X_i\} = (1 + \zeta)\sigma_u^2 = (1 + \zeta)\,\mathrm{Var}(X^* \mid X),
\]
which is equal to 0 when $\zeta = -1$, though we cannot actually generate data for negative values of $\zeta$.

Now that the data sets have been simulated, the naive estimates based on these predictors have to be computed. Let $\hat{\beta}_m(\zeta)$ be the estimator when the m-th additional error is used; the average of these estimators is then
\[
\hat{\beta}(\zeta) = \frac{1}{M} \sum_{m=1}^{M} \hat{\beta}_m(\zeta).
\]
$\hat{\beta}(\zeta)$ is the mean from many simulations with the same measurement error.
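As an illustration of the simulation step just described, and of the quadratic extrapolation discussed next, the following R sketch codes a bare-bones SIMEX by hand for a simple linear regression. It is not the thesis code; the sample size, coefficients, and the assumed-known error variance su2 are arbitrary choices.

## A minimal hand-rolled SIMEX sketch for simple linear regression,
## assuming sigma_u^2 (su2) is known; an illustration only, not the thesis code.
set.seed(1)
n   <- 250
x   <- rnorm(n)                      # true (unobserved) exposure
y   <- 0.5 * x + rnorm(n)            # outcome model
su2 <- 0.25                          # assumed measurement error variance
xs  <- x + rnorm(n, sd = sqrt(su2))  # observed surrogate X*

zeta <- c(0, 0.5, 1, 1.5, 2)         # added-error multipliers (zeta_1 = 0 is the naive fit)
B    <- 100                          # simulations per zeta
beta_hat <- sapply(zeta, function(z) {
  if (z == 0) return(coef(lm(y ~ xs))[2])
  mean(replicate(B, {
    x_m <- xs + rnorm(n, sd = sqrt(z * su2))   # X*_{m,i}(zeta)
    coef(lm(y ~ x_m))[2]
  }))
})

## Quadratic extrapolation of beta_hat(zeta) back to zeta = -1
quad      <- lm(beta_hat ~ zeta + I(zeta^2))
simex_est <- predict(quad, newdata = data.frame(zeta = -1))
c(naive = unname(beta_hat[1]), simex = unname(simex_est))

Running the sketch shows the naive slope attenuated towards zero and the extrapolated value pulled back towards the true slope of 0.5, which is exactly the behaviour the extrapolation step below exploits.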
In this way, we can precisely estimate the additional bias due to increasing measurement error. In the extrapolation step, $\hat{\beta}(\zeta)$ is modeled as a function of $\zeta \ge 0$ and the fitted model is then extrapolated back to $\zeta = -1$. This extrapolated value is denoted by $\hat{\beta}_{\mathrm{simex}}$. The choice of the extrapolation function is important, as the wrong choice could give misleading results and therefore not help in correcting the bias caused by measurement error. The simex function in R requires the user to specify an extrapolation function; otherwise the default, the quadratic function, is used. We apply the quadratic extrapolation function because it is numerically stable and seems to work best in our scenarios.

The SIMEX method is very popular because it is intuitive and simple to implement. We note, however, that this method does not completely correct the bias in practice, as the functional form of the bias is generally unknown and needs to be approximated. SIMEX works well for popular special cases, but does not always produce consistent estimators in general.

Chapter 3 Bayesian Estimation

Bayesian inference implemented with Markov chain Monte Carlo (MCMC) algorithms is a mismeasurement-correcting approach that has become increasingly popular with the growing availability of MCMC algorithms.

3.1 Bayes Rule

Assume we have an independent and identically distributed (iid) sample $w = (w_1, \ldots, w_n)$ from a density $f_\theta$, where $\theta \in \Theta$ is an unknown parameter. Then the likelihood function is
\[
f(\theta \mid w) = \prod_{i=1}^{n} f_\theta(w_i).
\]
Note that in the Bayesian framework $\theta$ is assumed to be random. This is the major difference between Bayesian and frequentist methods, where the parameters are assumed to be fixed and probabilities refer to limiting frequencies (Doucet, 2007; Casella and Berger, 2002). Because $\theta$ is assumed to be random, we can set a prior distribution $\pi(\theta)$ on it that expresses our belief about the parameter before having seen the data. Given these distributions and using Bayes' theorem, we can construct a posterior distribution
\[
\pi(\theta \mid w) = \frac{f(\theta \mid w)\,\pi(\theta)}{\int f(\theta \mid w)\,\pi(\theta)\,d\theta}.
\]
Note that this implies $\pi(\theta \mid w) \propto f(\theta \mid w)\,\pi(\theta)$, where the proportionality constant can be obtained by normalization. The evaluation of this normalizing constant becomes an issue when $\theta$ is very high dimensional and the integral cannot be evaluated in closed form. For example, suppose we want to compare estimators $\hat{\theta}$ of $\theta$ by using the quadratic loss function $\|\hat{\theta} - \theta\|^2$. In the Bayesian framework, one would calculate the expected value of $\theta$ under the posterior (Marin and Robert, 2007):
\[
\hat{\theta} = \int \theta\, \pi(\theta \mid w)\, d\theta = \frac{\int \theta\, f(\theta \mid w)\,\pi(\theta)\, d\theta}{\int f(\theta \mid w)\,\pi(\theta)\, d\theta}.
\]
Markov chain Monte Carlo (MCMC) techniques or other numerical methods are needed to complete such calculations. MCMC methods have, since they were introduced to statistical practice around 1990, revolutionized Bayesian statistics (Doucet, 2007).

The posterior distribution is an updated version of the prior combined with the observed data, as represented by the likelihood. We can then consider the posterior distribution the new prior and repeat the procedure to get another posterior. MCMC generates multiple dependent samples by running a Markov chain a set number of times, whereas sequential Monte Carlo (SMC) generates multiple independent samples simultaneously.
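As a small self-contained illustration of these ideas (not taken from the thesis), the following R snippet works out a posterior that is available in closed form: normal data with known mean and an Inverse Gamma prior on the variance, the same prior family used for the variance components later in this thesis. The posterior mean under quadratic loss is computed exactly and checked by Monte Carlo.

## Illustration only: IG prior on a normal variance, posterior mean exact vs Monte Carlo.
set.seed(7)
w <- rnorm(30, mean = 0, sd = 1.5)
a <- 0.5; b <- 0.5                      # IG(0.5, 0.5) prior on sigma^2

## Conjugacy: sigma^2 | w ~ IG(a + n/2, b + sum(w^2)/2)
a_post <- a + length(w) / 2
b_post <- b + sum(w^2) / 2

## Posterior mean of sigma^2 under quadratic loss, exactly and by Monte Carlo
exact <- b_post / (a_post - 1)
mc    <- mean(1 / rgamma(1e5, shape = a_post, rate = b_post))
c(exact = exact, monte_carlo = mc)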
3.2 Choice of Prior Distributions

The prior distribution $\pi(\theta)$ expresses the researcher's knowledge and belief about the parameter(s) of interest, based on previous experience and subject area knowledge, but without having actually seen the current data. When such information is not available, the impact of the prior on the inference must be minimized. A "flat", so-called noninformative, distribution is assigned to such priors, therefore not preferring any values. This process is not trivial, as there is no single agreed-upon notion of "flat" (Gustafson, 2004; Marin and Robert, 2007).

3.3 Markov Chain Monte Carlo Algorithms

In Bayesian problems the posterior distributions are often very complicated, and therefore it is difficult to compute probabilities. The integrals that have to be evaluated are frequently of high dimension, so that numerical integration techniques become useless. These problems made Bayesian analysis very limited until around 1990, when MCMC algorithms were first applied to statistical analyses. The MCMC techniques are based on the idea that it is sufficient to produce a Markov chain $t_n$, where n is a natural number, whose stationary distribution is f, for instance the posterior distribution (Marin and Robert, 2007). If the marginal distribution of $t_n$ is f, then f is also the marginal distribution of $t_{n+1}$ (Grimmett and Stirzaker, 2001). For large n, $t_n$ is approximately distributed according to f.

The following three sections briefly discuss the very general Metropolis-Hastings algorithm (Metropolis, Rosenbluth, Rosenbluth, Teller, and Teller, 1953; Hastings, 1970), the random walk Metropolis algorithm (Metropolis, Rosenbluth, Rosenbluth, Teller, and Teller, 1953), and the Gibbs sampler (Casella and George, 1992). Marin and Robert (2007), among many other sources, give a good overview of these and additional algorithms.

3.3.1 The Metropolis-Hastings Algorithm

The Metropolis-Hastings (MH) algorithm is a general iterative method to sample from any probability distribution function $f(\theta)$ known up to a normalizing constant. The first step in the MH algorithm is to set a proposal probability distribution $q(\theta' \mid \theta)$. Based on the current state of the Markov chain, a new candidate for $\theta$ is proposed as follows at the i-th iteration, $i \ge 1$:

1. Set $\theta^{(0)}$. This can be done randomly or deterministically.
2. Sample $\theta^* \sim q(\theta^* \mid \theta^{(i-1)})$.
3. Compute
\[
\alpha(\theta^{(i-1)}, \theta^*) = \min\left(1, \; \frac{f(\theta^*)\, q(\theta^{(i-1)} \mid \theta^*)}{f(\theta^{(i-1)})\, q(\theta^* \mid \theta^{(i-1)})}\right).
\]
4. With probability $\alpha(\theta^{(i-1)}, \theta^*)$, set $\theta^{(i)} = \theta^*$. Otherwise set $\theta^{(i)} = \theta^{(i-1)}$.

3.3.2 Random Walk Metropolis Algorithm

The original Metropolis algorithm incorporates a random walk proposal, and in this thesis we use the following algorithm when the model in question is a logistic regression. A new candidate for $\theta$ is proposed as follows at the i-th iteration, $i \ge 1$:

1. Skip this step if $i > 1$. Compute the maximum likelihood estimate $\hat{\theta}$ and the covariance matrix $\hat{\Sigma}$ corresponding to the asymptotic (Fisher) covariance of $\hat{\theta}$. Set $\theta^{(0)} = \hat{\theta}$.
2. Generate $\theta^* \sim N_k(\theta^{(i-1)}, s^2 \tilde{\Sigma})$, where $s^2$ is the scale factor. Typically, $\tilde{\Sigma} = \hat{\Sigma}$ or the identity matrix I is used.
3. Compute
\[
\alpha(\theta^{(i-1)}, \theta^*) = \min\left(1, \; \frac{f(\theta^*)}{f(\theta^{(i-1)})}\right).
\]
4. With probability $\alpha(\theta^{(i-1)}, \theta^*)$, set $\theta^{(i)} = \theta^*$. Otherwise set $\theta^{(i)} = \theta^{(i-1)}$.
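The following R sketch illustrates the random walk Metropolis steps above for a toy logistic regression; it is not the thesis code. The proposal covariance here is a scaled version of the asymptotic covariance computed in step 1, which is one possible choice, and the scale factor s is set to a commonly used value.

## Random walk Metropolis sketch for a toy logistic regression (flat priors),
## illustration only.
set.seed(3)
n <- 200
x <- rnorm(n)
y <- rbinom(n, 1, plogis(-0.5 + x))

log_post <- function(theta) {             # log-likelihood = log-posterior under flat priors
  eta <- theta[1] + theta[2] * x
  sum(y * eta - log(1 + exp(eta)))
}

mle   <- glm(y ~ x, family = binomial)
theta <- coef(mle)                        # step 1: start at the MLE
Sigma <- vcov(mle)                        # asymptotic (Fisher) covariance
L     <- chol(Sigma)
s     <- 2.4 / sqrt(2)                    # a common scale factor choice

draws <- matrix(NA, 5000, 2)
for (i in 1:5000) {
  prop  <- theta + s * drop(rnorm(2) %*% L)      # step 2: random walk proposal
  ratio <- exp(log_post(prop) - log_post(theta)) # step 3: acceptance ratio
  if (runif(1) < ratio) theta <- prop            # step 4: accept or stay
  draws[i, ] <- theta
}
colMeans(draws[-(1:1000), ])              # posterior means after burn-in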
3.3.3 The Gibbs Sampler

The Gibbs sampler is a special case of the MH algorithm with an acceptance probability of one, which applies when the full conditional distributions are available. It is a simple and popular iterative method to sample from high dimensional probability distributions. Suppose again that we want to sample from $f(\theta)$, where $\theta = (\theta_1, \ldots, \theta_p)$. Then the Gibbs sampler generates the Markov chain at the i-th iteration, $(\theta_1^{(i)}, \ldots, \theta_p^{(i)})$, as follows:

1. Set $(\theta_1^{(0)}, \ldots, \theta_p^{(0)})$. This can be done randomly or deterministically.
2. For $j = 1, \ldots, p$, sample $\theta_j^{(i)} \sim f(\theta_j \mid \theta_{-j}^{(i)})$, where $\theta_{-j}^{(i)} = (\theta_1^{(i)}, \ldots, \theta_{j-1}^{(i)}, \theta_{j+1}^{(i-1)}, \ldots, \theta_p^{(i-1)})$.

These MCMC algorithms are simple and general methods for sampling from any target distribution; however, the technical details are usually very tedious. We will apply the Gibbs sampler and the random walk Metropolis algorithm. These will be discussed in more detail in the simulation application in the next chapter and in the Framingham example in Chapter 6, respectively.

Chapter 4 Simulation Study

In this example we consider the case of a linear regression involving two covariates, such that $Y = \beta_0 + \beta_1 X + \beta_2 Z + \epsilon$, where $\epsilon$ is normally distributed with mean zero. We suppose that only a mismeasured surrogate $X^*$ is available for X, but we observe Z and Y correctly. For simplicity, we assume the variables are normally distributed and the measurement model is nondifferential. Then
\[
X^* \mid X, Z, Y \sim N(X, \tau^2),
\]
where $\tau^2$ is an unknown parameter. The response model, parameterized by $\beta_0$, $\beta_1$, $\beta_2$, and $\sigma^2$, is
\[
Y \mid X, Z \sim N(\beta_0 + \beta_1 X + \beta_2 Z, \; \sigma^2).
\]
The exposure model, conditioned on the covariate Z and parameterized by $\alpha_0$, $\alpha_1$, and $\lambda^2$, is
\[
X \mid Z \sim N(\alpha_0 + \alpha_1 Z, \; \lambda^2).
\]
We consider a study with n = 250 subjects, where $X^*$, Z, and Y are observed for all subjects. For a validation sample of m = 10 randomly chosen subjects, X is also measured. This is applicable to situations where a gold standard test does exist, but where it is not feasible to administer it to all subjects due to high costs or time constraints. Let the subscript C denote the complete data available for the validation sample, and R the incomplete or reduced data. For instance, $X^*_C$ denotes the incorrectly measured variable for the subjects in the validation sample.

Following Gustafson (2004), we simulate a data set with (X, Z) having a bivariate normal distribution, each marginal distribution being standard normal, and a correlation coefficient of 0.75. This set-up yields $(\alpha_0, \alpha_1) = (0, 0.75)$ and $\lambda^2 = 0.4375$. We initially set $\tau = 0.5$ (Scenario 1), and increase it to $\tau = 1.5$ (Scenario 2) later. We set $(\beta_0, \beta_1, \beta_2) = (0, 0.5, 0.25)$ and $\sigma^2 = 1$. Firstly, we analyze this hypothetical study using the frequentist SIMEX approach, secondly from a Bayesian perspective, and finally we compare the results from both methods.

4.1 Simulation Extrapolation Approach

We use the simex package in R (Lederer and Küchenhoff, 2008) for this analysis. The simex() function in this package takes an estimate of $\tau$ as input; therefore it needs to be estimated from the validation data:
\[
\hat{\tau} = \sqrt{\frac{\sum_{i=1}^{m}\left(x^*_{i,C} - x_{i,C}\right)^2}{m}}.
\]
We assume this value to be correct for its use in the SIMEX algorithm. This may be too strong an assumption, as our validation sample size is small, and we will investigate its validity further later.
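A minimal R sketch of this simulated set-up (Scenario 1), including the estimate of $\tau$ from the validation subsample, might look as follows. It is not the thesis code, and the seed and object names are arbitrary.

## Sketch of the Scenario 1 simulation set-up; illustration only.
library(MASS)
set.seed(42)
n <- 250; m <- 10
Sigma <- matrix(c(1, 0.75, 0.75, 1), 2, 2)        # corr(X, Z) = 0.75, standard normal marginals
xz <- mvrnorm(n, mu = c(0, 0), Sigma = Sigma)
x  <- xz[, 1]; z <- xz[, 2]
y  <- 0 + 0.5 * x + 0.25 * z + rnorm(n, sd = 1)   # (beta0, beta1, beta2) = (0, 0.5, 0.25), sigma = 1
xs <- x + rnorm(n, sd = 0.5)                      # surrogate X*, tau = 0.5

## Validation subsample: X observed for m randomly chosen subjects
val <- sample(n, m)
tau_hat <- sqrt(sum((xs[val] - x[val])^2) / m)
tau_hat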
We use the quadratic extrapolation function because it seems to perform best for this set-up. This function is usually used when the true relationship is not known; here we use it because of its numerical stability, even though we know the relationship. The additional data sets with successively larger measurement error are computed as described in Chapter 2. The extrapolation for this example is illustrated in Figure 4.1 for Scenario 1 with moderate measurement error variance, and in Figure 4.2 for Scenario 2 with large measurement error variance. We will later elaborate on these two scenarios and the performance of SIMEX in each case. Figures 4.3 and 4.4 show how well the SIMEX approach works in Scenario 1 and Scenario 2, respectively. SIMEX seems to perform very well in the case of moderate measurement error variance, but has its shortcomings when the measurement error variance is increased to 1.5 from 0.5.

Figure 4.1: SIMEX extrapolation plot for Scenario 1, where $\tau = 0.5$, illustrating the effect of adding consecutively more measurement error to the original naive estimate at $\zeta = 0$. The SIMEX estimate is the extrapolation to $\zeta = -1$.

Figure 4.2: SIMEX extrapolation plot for Scenario 2, where $\tau = 1.5$, illustrating the effect of adding consecutively more measurement error to the original naive estimate at $\zeta = 0$. The SIMEX estimate is the extrapolation to $\zeta = -1$.

Figure 4.3: In the case of Scenario 1 ($\tau = 0.5$), the solid line shows the least-squares line had the data been measured correctly, while the dashed line illustrates the naive analysis where measurement error is not taken into account. The dotted line illustrates the correction the SIMEX algorithm provides for this particular example.

Figure 4.4: In the case of Scenario 2 ($\tau = 1.5$), the solid line shows the least-squares line had the data been measured correctly, while the dashed line illustrates the naive analysis where measurement error is not taken into account. The dotted line illustrates the correction the SIMEX algorithm provides for this particular example.

To make the SIMEX approach more comparable to the Bayes-MCMC method, it seems sensible to add uncertainty to the estimate of $\mathrm{Var}(X^* \mid X) = \tau^2$. We achieve this by using the bootstrap, where the pairs $(X^*, X)$ are resampled and the estimate of $\tau^2$ is obtained from each bootstrap sample. This process is repeated 1000 times and 95% confidence limits are obtained for $\tau$. We then perform the SIMEX analysis three times: using the estimated $\hat{\tau}$, as well as the lower and upper bounds of its 95% CI. For the three estimates, it is possible to get "added-variance confidence intervals" in addition to the regular confidence intervals. We obtain the former as follows: we take the 2.5th and the 97.5th percentiles of the SIMEX $\hat{\beta}$ estimates using the lower and upper bound of $\tau$, respectively. These "added-variance confidence intervals" are wider than the regular confidence intervals on the estimated coefficients.
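Continuing the sketch above, the following hedged example shows how such an analysis could be assembled with the simex package, and how bootstrap limits for $\hat{\tau}$ could feed into it. It assumes the simex() interface of that package (a naive model fitted with x = TRUE, y = TRUE, and measurement.error giving the error standard deviation) and is not the thesis code.

## SIMEX fit with bootstrapped uncertainty in tau_hat; a sketch under the
## assumptions stated in the text above, not the thesis code.
library(simex)

naive <- lm(y ~ xs + z, x = TRUE, y = TRUE)
fit   <- simex(naive, SIMEXvariable = "xs",
               measurement.error = tau_hat,
               fitting.method = "quadratic")
summary(fit)

## Bootstrap the validation pairs (X*, X) to get 95% limits for tau_hat,
## then rerun SIMEX at the lower and upper limits ("added-variance" intervals).
taus <- replicate(1000, {
  b <- sample(val, m, replace = TRUE)
  sqrt(sum((xs[b] - x[b])^2) / m)
})
tau_lims <- quantile(taus, c(0.025, 0.975))
fits <- lapply(tau_lims, function(t)
  simex(naive, SIMEXvariable = "xs", measurement.error = t,
        fitting.method = "quadratic"))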
4.2 Bayes-MCMC Approach

We assume prior independence of all unknown parameters, so
\[
f(\alpha, \beta, \lambda^2, \sigma^2, \tau^2) = f(\alpha)\, f(\beta)\, f(\lambda^2)\, f(\sigma^2)\, f(\tau^2).
\]
We assign improper locally uniform priors to the $\alpha$s and $\beta$s, such that $f(\alpha) \propto 1$ and $f(\beta) \propto 1$, although alternatively we could use proper, but very flat, priors. We use such uninformative priors because they do not favour any one value of the coefficients. Partly due to their conjugacy, Inverse Gamma distributions seem sensible priors for the variances of normal distributions (Gustafson, 2004; Doucet, 2007), so we assign IG(0.5, 0.5) priors to $\lambda^2$, $\sigma^2$, and $\tau^2$. The posterior distribution is
\[
f(x_R, \beta, \alpha, \tau^2, \sigma^2, \lambda^2 \mid x^*, x_C, y, z) \propto \prod_{i=1}^{n} f(x^*_i \mid x_i, \tau^2) \times \prod_{i=1}^{n} f(y_i \mid x_i, z_i, \beta, \sigma^2) \times \prod_{i=1}^{n} f(x_i \mid z_i, \alpha, \lambda^2) \times f(\tau^2, \sigma^2, \lambda^2),
\]
which expands to
\[
\left(\frac{1}{\tau^2}\right)^{n/2} \exp\left(-\frac{\sum_{i=1}^{n}(x^*_i - x_i)^2}{2\tau^2}\right)
\times \left(\frac{1}{\sigma^2}\right)^{n/2} \exp\left(-\frac{\sum_{i=1}^{n}(y_i - \beta_0 - \beta_1 x_i - \beta_2 z_i)^2}{2\sigma^2}\right)
\times \left(\frac{1}{\lambda^2}\right)^{n/2} \exp\left(-\frac{\sum_{i=1}^{n}(x_i - \alpha_0 - \alpha_1 z_i)^2}{2\lambda^2}\right)
\times \left(\frac{1}{\tau^2}\right)^{0.5+1} \exp\left(-\frac{0.5}{\tau^2}\right)
\times \left(\frac{1}{\sigma^2}\right)^{0.5+1} \exp\left(-\frac{0.5}{\sigma^2}\right)
\times \left(\frac{1}{\lambda^2}\right)^{0.5+1} \exp\left(-\frac{0.5}{\lambda^2}\right).
\]
From this posterior we can obtain the full conditional distributions as follows. Note that the superscript C denotes the complement.
\[
\alpha \mid \alpha^{C} \sim N\left((A'A)^{-1}A'x, \; \lambda^2 (A'A)^{-1}\right),
\qquad
\beta \mid \beta^{C} \sim N\left((B'B)^{-1}B'y, \; \sigma^2 (B'B)^{-1}\right),
\]
where
\[
A = \begin{pmatrix} 1 & z_1 \\ 1 & z_2 \\ \vdots & \vdots \\ 1 & z_n \end{pmatrix}
\quad \text{and} \quad
B = \begin{pmatrix} 1 & x_1 & z_1 \\ 1 & x_2 & z_2 \\ \vdots & \vdots & \vdots \\ 1 & x_n & z_n \end{pmatrix}.
\]
We also have
\[
f(\tau^2 \mid \tau^{2\,C}) \propto \left(\frac{1}{\tau^2}\right)^{\frac{n+1}{2}+1} \exp\left(-\frac{\sum_{i=1}^{n}(x^*_i - x_i)^2 + 1}{2\tau^2}\right),
\]
which implies that
\[
\tau^2 \mid \tau^{2\,C} \sim \mathrm{IG}\left(\frac{n+1}{2}, \; \frac{\sum_{i=1}^{n}(x^*_i - x_i)^2 + 1}{2}\right).
\]
The full conditional distributions for the other two variance components can be obtained in a very similar fashion:
\[
\lambda^2 \mid \lambda^{2\,C} \sim \mathrm{IG}\left(\frac{n+1}{2}, \; \frac{\|x - A\alpha\|^2 + 1}{2}\right),
\qquad
\sigma^2 \mid \sigma^{2\,C} \sim \mathrm{IG}\left(\frac{n+1}{2}, \; \frac{\|y - B\beta\|^2 + 1}{2}\right).
\]
Considering only the cases where no gold standard measurement is made, the full conditional distribution for $x_R$ is normal with the following mean and variance:
\[
E(x_i \mid x_i^{C}) = \frac{(1/\tau^2)\, x^*_i + (\beta_1^2/\sigma^2)\,\big((y_i - \beta_0 - \beta_2 z_i)/\beta_1\big) + (1/\lambda^2)\,(\alpha_0 + \alpha_1 z_i)}{(1/\tau^2) + (\beta_1^2/\sigma^2) + (1/\lambda^2)},
\qquad
\mathrm{Var}(x_i \mid x_i^{C}) = \frac{1}{(1/\tau^2) + (\beta_1^2/\sigma^2) + (1/\lambda^2)}.
\]
We are now ready to use the Gibbs sampler; to do so, we need to repeatedly sample from the full conditional distributions above. We use 10,000 iterations after 1000 burn-in iterations. The traceplots and posterior densities of $\beta$ and $\tau$ for Scenarios 1 and 2 are shown in Figures 4.5 and 4.6, respectively. From these we see that there are no mixing problems.

Figure 4.5: Traceplots and posterior densities of $\beta$ and $\tau$ from the Gibbs sampler for Scenario 1, where the measurement error variance is moderate. The traceplots show the 10,000 iterations after the 1000 burn-in iterations.

Figure 4.6: Traceplots and posterior densities of $\beta$ and $\tau$ from the Gibbs sampler for Scenario 2, where the measurement error variance is large. The traceplots show the 10,000 iterations after the 1000 burn-in iterations.
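For concreteness, the following R function sketches one cycle of the Gibbs updates derived above. It is not the thesis code; it reuses the data objects from the earlier sketches (y, z, xs, x, val), and the list `cur`, which holds the current parameter values, is a hypothetical container introduced here. The validated X values should be held fixed at their gold-standard measurements.

## One cycle of the Gibbs updates described above; a sketch only.
library(MASS)

rinvgamma <- function(shape, rate) 1 / rgamma(1, shape = shape, rate = rate)

gibbs_cycle <- function(cur, y, z, xs, val) {
  n <- length(y)
  x <- cur$x                      # current imputation of the true exposure
                                  # (cur$x[val] initialized at the gold-standard values)
  A <- cbind(1, z)                # exposure-model design matrix
  B <- cbind(1, x, z)             # outcome-model design matrix

  ## alpha | rest ~ N((A'A)^{-1} A'x, lambda^2 (A'A)^{-1})
  AtA_inv <- solve(crossprod(A))
  cur$alpha <- mvrnorm(1, drop(AtA_inv %*% crossprod(A, x)), cur$lambda2 * AtA_inv)

  ## beta | rest ~ N((B'B)^{-1} B'y, sigma^2 (B'B)^{-1})
  BtB_inv <- solve(crossprod(B))
  cur$beta <- mvrnorm(1, drop(BtB_inv %*% crossprod(B, y)), cur$sigma2 * BtB_inv)

  ## variance components | rest ~ Inverse Gamma (from the IG(0.5, 0.5) priors)
  cur$tau2    <- rinvgamma((n + 1) / 2, (sum((xs - x)^2) + 1) / 2)
  cur$sigma2  <- rinvgamma((n + 1) / 2, (sum((y - B %*% cur$beta)^2) + 1) / 2)
  cur$lambda2 <- rinvgamma((n + 1) / 2, (sum((x - A %*% cur$alpha)^2) + 1) / 2)

  ## true exposure for the non-validated subjects
  b0 <- cur$beta[1]; b1 <- cur$beta[2]; b2 <- cur$beta[3]
  prec <- 1 / cur$tau2 + b1^2 / cur$sigma2 + 1 / cur$lambda2
  mu   <- (xs / cur$tau2 +
           (b1^2 / cur$sigma2) * ((y - b0 - b2 * z) / b1) +
           (cur$alpha[1] + cur$alpha[2] * z) / cur$lambda2) / prec
  draw <- rnorm(n, mu, sqrt(1 / prec))
  cur$x[-val] <- draw[-val]       # validated subjects keep their gold-standard X
  cur
}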
4.3 Comparison: SIMEX versus Bayes-MCMC

It makes sense to compare the performances of the Bayes-MCMC and the SIMEX approaches by looking at the mean squared errors of the relevant parameters under both methods. For the Bayesian analysis, both the mean and the median of the 10,000 MCMC draws were used for comparison. Table 4.1 shows the MSEs of $\hat{\tau}$ and $\hat{\beta}_1$ based on one hundred simulated data sets.

Table 4.1: Mean squared errors of SIMEX and Bayesian estimates of $\tau$ and $\beta_1$ with varying validation sample size (m) and true $\tau$. Both the posterior median and mean are used in the Bayesian case. The MSEs are based on 100 comparisons between the SIMEX and Bayesian methods.

                       MSE of tau-hat                     MSE of beta1-hat
                       SIMEX   Bayes    Bayes             SIMEX   Bayes    Bayes
                               median   mean                      median   mean
m = 10, tau = 0.5      0.0089  0.0054   0.0050            0.0175  0.0369   0.0436
m = 10, tau = 1.5      0.0802  0.0083   0.0090            0.1311  0.0718   0.0700
m = 50, tau = 0.5      0.0025  0.0020   0.0020            0.0159  0.0200   0.0203
m = 50, tau = 1.5      0.0223  0.0067   0.0067            0.1280  0.0303   0.0298

When the measurement error and the validation sample are relatively small, SIMEX performs better than the Bayesian method in estimating the $\beta_1$ coefficient. However, the Bayes-MCMC method outperforms the SIMEX algorithm in estimating $\tau$ in all performed scenarios. As soon as the measurement error variance is increased, the Bayes-MCMC method works better than SIMEX in the estimation of $\beta_1$. SIMEX seems to deal badly with large measurement error variance ($\tau^2 = 1.5^2$), producing an MSE over three times larger than the Bayes method in the $\tau$ estimation and four times the MSE of the Bayes method in the $\beta_1$ estimation, even when the validation sample size is large (m = 50). When the validation sample size, m, is ten and the measurement error variance is $1.5^2$, SIMEX produces an MSE of nine times that of the Bayes-MCMC correction for $\tau$, and almost twice the MSE of the Bayesian approach for $\beta_1$. This confirms the intuition that SIMEX should be vulnerable in the large measurement error case. In summary, SIMEX seems to perform well in simple scenarios with reasonably low measurement error variance, and the Bayes-MCMC method is more robust to changes in validation sample size and measurement error variance.

The average widths, across the one hundred runs, of the 95% confidence (SIMEX) and credible (Bayes-MCMC) intervals of $\hat{\tau}$ and $\hat{\beta}_1$ are shown in Table 4.2. The added-variance (bootstrap) confidence bounds are also provided. The Bayes-MCMC CIs for $\hat{\tau}$ are narrower than the SIMEX ones obtained via bootstrapping. Since it is expected that the Bayesian approach creates wider CIs than the SIMEX method for $\beta_1$, the values we get from the simulation study are reassuring. It is notable that the added-variance SIMEX confidence bounds are, although wider than the regular SIMEX CIs, consistently narrower than the Bayesian ones.
Table 4.2: Average widths of confidence and credible intervals (CI) for $\tau$ and $\beta_1$ using the SIMEX and Bayesian analyses, with varying validation sample size (m) and true $\tau$. The averages are based on 100 simulated data sets.

                       Average CI width, tau              Average CI width, beta1
                       SIMEX (boot)  Bayes                SIMEX   SIMEX (boot)  Bayes
m = 10, tau = 0.5      0.3802        0.2922               0.4010  0.5322        0.7742
m = 10, tau = 1.5      1.1406        0.4189               0.2362  0.2823        1.0734
m = 50, tau = 0.5      0.1846        0.1755               0.3978  0.4735        0.4921
m = 50, tau = 1.5      0.5537        0.3143               0.2368  0.2649        0.6669

Another way to compare the SIMEX and the Bayes-MCMC approaches is to count how many times the confidence intervals and the credible intervals of $\hat{\tau}$ and $\hat{\beta}_1$ contain the true values. We use the 95% CIs mentioned above for this measure of coverage. Table 4.3 gives these values out of 100 runs for the SIMEX, the added-variance (bootstrap) SIMEX, and the Bayes-MCMC corrections. The SIMEX (bootstrap) confidence bands included the true $\tau$ 84 times when the validation sample size was 10, and 94 times when the validation sample size increased to 50, regardless of the true $\tau$. When the validation sample size is large, the SIMEX and the Bayesian approaches work about equally well in estimating $\tau$, while the Bayesian approach seems to be the better choice when the validation sample size is small. However, when it comes to the coverage of the true $\beta_1$ by the CIs, it becomes evident that the Bayes-MCMC method performs much better, especially when the measurement error variance is large. If the measurement error variance is small, it may be more economical to use SIMEX, and perhaps account for more variance by bootstrapping, than to use the time- and cost-consuming Bayesian approach.

Table 4.3: Confidence and credible interval (CI) coverage of the true values of $\tau$ and $\beta_1$ using the SIMEX and Bayesian analyses, with varying validation sample size (m) and true $\tau$. The values denote how many times, out of the 100 runs, the CIs contained the true values. We use the SIMEX and Bayesian methods, as well as the SIMEX-bootstrap method.

                       CI coverage, tau (of 100)          CI coverage, beta1 (of 100)
                       SIMEX (boot)  Bayes                SIMEX   SIMEX (boot)  Bayes
m = 10, tau = 0.5      84            97                   83      96            96
m = 10, tau = 1.5      84            99                   0       1             97
m = 50, tau = 0.5      94            93                   89      96            90
m = 50, tau = 1.5      94            95                   0       0             95

From this study, it seems evident that the Bayes-MCMC method generally produces more accurate results than the SIMEX approach. When the measurement error variance is large, SIMEX does a poor job of correcting for the mismeasurement. Bayesian approaches are more expensive than conventional methods, due to computational difficulties as well as the scarcity of pre-packaged functions, and may not be worth using if the measurement error variance is small and the study set-up is simple.

Chapter 5 Exploring the Bayes-MCMC Correction

It is well known that the more one knows about the size of the measurement error, the better the results of the techniques mentioned in this thesis. The estimate of the measurement error can stem from a validation sample, another instrument, or subject area knowledge. But how correctly do we have to estimate this measurement error in order to improve our analysis? Moreover, how certain do we have to be that this estimate is correct? In a Bayesian framework the first question refers to the location of a chosen prior, and the second to the prior's width. It seems intuitive that there is a trade-off on both scales. If our estimate is correct, a very narrow prior should be best suited.
What if our estimate is wrong and we choose a narrow prior distribution, or our estimate is correct but we have little faith in it and assign it a flat prior? Is there a threshold beyond which we end up with better results by not correcting for measurement error at all? We investigate these points further using the simulation example set-up from Chapter 4, but now assume that we have no validation data and thus n = 250 incomplete observations.

The question of how much one needs to know about the measurement error in a given problem could be approached from many directions. We choose to use five different inverse gamma prior distributions, with shape parameter $\alpha$ and scale $\beta$, for $\tau^2$. All these priors have different widths but the same mode $\beta/(\alpha + 1) = 0.25$, and are shown in Figure 5.1 below.

Figure 5.1: Inverse Gamma densities f(x) = IG($\alpha$, $\beta$) with mode 0.25, where $\alpha$ is the shape and $\beta$ is the scale of the distribution: ($\alpha$, $\beta$) = (50, 12.75), (20, 5.25), (10, 2.75), (5, 1.5), and (3, 1).

We place the, realistically unknown, true value of $\tau^2$ at several different percentiles of these prior distributions. The idea is to investigate what effect arises from placing these priors on $\tau^2$ when the true value of the measurement error variance is located towards the center or either tail of each distribution. We examine what happens when we estimate $\tau^2$ to be around 0.25, at the prior's mode, but the true value is at a particular percentile of the distribution. The width of the prior indicates how certain we are about the estimate. For instance, if we estimate the measurement error variance to be around 0.25 and are certain, due to prior knowledge, that it is larger than 0.15 but no larger than 0.45, say, we would choose the IG(50, 12.75) prior out of the five distributions mentioned. This example is illustrated in Figure 5.2.

Figure 5.2: Inverse Gamma density f(x) = IG(50, 12.75) with mode 0.25, and a rough approximation of where its support is effectively greater than zero (x = 0.15 and x = 0.45 are marked).

Table 5.1 below shows the mean squared errors for the $\beta_1$ parameter, calculated by performing the analysis with different inverse gamma priors on $\tau^2$, with the true $\tau^2$ values located at several different percentiles of these prior distributions. For each prior, the mean squared errors are calculated for the naive analysis, where no measurement error correction is performed, and for the Bayesian measurement error correction described above. It follows that when the naive mean squared error is larger than the corrected one, the measurement error should be taken into account. However, if the naive mean squared error is smaller than the corrected one, it would be counter-productive to try to correct for measurement error, as we get a similar or better result by performing the cheaper and by far simpler naive analysis. To see this relationship better, the ratio of the naive MSE to the corrected MSE is given for each case. When this ratio is greater than one, a correction for measurement error is needed.
Of course, this is not a matter of absolute certainty, as different problems will have different thresholds, and there is always some "gray area" around the threshold.

Table 5.1: Results of the prior knowledge investigation. For the five prior distributions below, the mode is 0.25. There are eight cases for each model: the true $\tau^2$ is at the 5th, 10th, 25th, 50th, 75th, 90th, or 95th percentile, or at the mode of the distribution. Mean squared errors of the $\beta_1$ estimates are based on 50 runs. For each model, the naive MSE estimates (ignoring measurement error), the corrected MSE estimates, and their ratios are given.

Prior           MSE         5th      10th     25th     50th     75th     90th     95th     mode
IG(50, 12.75)   naive       0.0338   0.0362   0.0361   0.0416   0.0447   0.0495   0.0521   0.0401
                corrected   0.0333   0.0285   0.0228   0.0201   0.0160   0.0173   0.0181   0.0204
                ratio       1.0163   1.2728   1.5870   2.0690   2.7917   2.8572   2.8787   1.9697
IG(20, 5.25)    naive       0.0270   0.0294   0.0335   0.0395   0.0453   0.0529   0.0598   0.0379
                corrected   0.0741   0.0542   0.0480   0.0260   0.0205   0.0168   0.0207   0.0319
                ratio       0.3638   0.5430   0.6966   1.5215   2.2106   3.1408   2.8914   1.1886
IG(10, 2.75)    naive       0.0257   0.0280   0.0363   0.0446   0.0540   0.0664   0.0760   0.0378
                corrected   0.1397   0.1197   0.0803   0.0456   0.0243   0.0184   0.0246   0.0633
                ratio       0.1838   0.2341   0.4521   0.9779   2.2194   3.6096   3.0899   0.5965
IG(5, 1.5)      naive       0.0246   0.0296   0.0375   0.0512   0.0697   0.0907   0.1056   0.0390
                corrected   0.2074   0.1783   0.1302   0.0630   0.0324   0.0269   0.0346   0.1097
                ratio       0.1186   0.1658   0.2884   0.8117   2.1494   3.3761   3.3761   0.3551
IG(3, 1)        naive       0.0240   0.0284   0.0399   0.0857   0.0857   0.1147   0.1386   0.0367
                corrected   0.2556   0.2384   0.1641   0.0859   0.0393   0.0303   0.0425   0.1739
                ratio       0.0938   0.1191   0.2430   0.9983   2.1786   3.7915   3.2613   0.2112

Figure 5.3 shows the ratios from Table 5.1 graphically. The true $\tau^2$ values and the ratios are represented on the horizontal and vertical axes, respectively. Each line illustrates the behaviour of the ratios of one of the five prior distributions across the different scenarios. When the line of the chosen prior distribution in the figure is well above one, a measurement error correction is needed. However, if it is below one, it may be useless to perform the measurement error correction because of a lack of prior knowledge. We note that this "rule" is not black and white. When the ratio is close to one, one may want to investigate measurement error correction further, taking into account subject area knowledge, the cost of a potential correction, and its value. Priors much wider than the ones shown lead to greater biases in the parameter estimation than the naive analysis.

Figure 5.3: The ratios of the naive over the corrected mean squared errors from Table 5.1, shown graphically for the five prior distributions IG(50, 12.75), IG(20, 5.25), IG(10, 2.75), IG(5, 1.5), and IG(3, 1).
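As a concrete companion to this chapter, the short R sketch below (not from the thesis) constructs the five Inverse Gamma priors with common mode 0.25, using the relation scale = 0.25 × (shape + 1), and evaluates the percentile at which a given true $\tau^2$ falls under each prior.

## Inverse Gamma priors with a common mode of 0.25; illustration only.
dinvgamma <- function(x, shape, scale)
  scale^shape / gamma(shape) * x^(-shape - 1) * exp(-scale / x)
pinvgamma <- function(q, shape, scale)
  pgamma(1 / q, shape = shape, rate = scale, lower.tail = FALSE)

shapes <- c(50, 20, 10, 5, 3)
scales <- 0.25 * (shapes + 1)          # gives (12.75, 5.25, 2.75, 1.5, 1)

curve(dinvgamma(x, shapes[1], scales[1]), 0.01, 1, ylab = "density", xlab = "tau^2")
for (k in 2:5) curve(dinvgamma(x, shapes[k], scales[k]), add = TRUE, lty = k)

## e.g. the percentile at which tau^2 = 0.25 (the common mode) sits under each prior
sapply(seq_along(shapes), function(k) pinvgamma(0.25, shapes[k], scales[k]))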
Chapter 6 Framingham Study Example

To illustrate the use of the SIMEX and Bayes-MCMC methods in a real-world application, we use a subset of the Framingham Heart Study data. This still ongoing, large cohort study was started in 1948 under the direction of the National Heart, Lung, and Blood Institute in the United States, with the objective of identifying some of the characteristics that contribute to cardiovascular disease. We use only complete data on male adults aged between 31 and 65 at the first exam, which yields a subset of n = 1615 subjects (Carroll, Ruppert, Stefanski and Crainiceanu, 2006). The study consists of a series of medical exams, about two years apart, at which a number of variables were recorded. The response is the indicator variable Y, taking the value 1 if the individual developed coronary heart disease within eight years of the third exam, and 0 otherwise. It has become well known, primarily through this study, that high blood pressure is one of the leading causes of cardiovascular disease. We assume that the systolic blood pressure (SBP) measurements may be mismeasured and will try to account for the bias caused by this. We adopt a transformation of SBP introduced by Carroll, Ruppert, Stefanski and Crainiceanu (2006), setting X = log(SBP − 50). We do not have any validation data available, but SBP is measured at the second and third exams. We treat these measurements as replicates for our practical purposes, even though technically they are not. Therefore, we use their mean, for simplicity denoted by $X^*_i$, for each individual i. The nondifferential measurement error assumption seems plausible because we would like to be able to measure long-term systolic blood pressure ($X_i$), but what we observe in reality is blood pressure on a single day ($X^*_i$). It is possible that this single-day measurement carries little information about the long-term systolic blood pressure (Carroll, Ruppert, Stefanski and Crainiceanu, 2006). The measurement model is $X^*_i = X_i + U_i$, where $E(U_i) = 0$ and $\mathrm{Var}(U_i) = \tau^2$. We also have the correctly recorded age, $Z_i$, at the second exam for each subject. Since we are dealing with a binary response, we choose to use the logit link on Y to make sure its domain covers the real numbers. Thus the response model is
\[
\mathrm{logit}\big(P(Y = 1 \mid X, Z)\big) = \log\left(\frac{P(Y = 1 \mid X, Z)}{1 - P(Y = 1 \mid X, Z)}\right) = \beta_0 + \beta_1 X + \beta_2 Z.
\]
The measurement and exposure models are defined as in the simulation study above. Under the nondifferentiality assumption, the measurement model is $X^* \mid X, Z, Y \sim N(X, \tau^2)$. The exposure model is $X \mid Z \sim N(\alpha_0 + \alpha_1 Z, \lambda^2)$. We will use this set-up to conduct a SIMEX and a Bayes-MCMC analysis.

6.1 SIMEX Analysis

As before with the simulated dataset, we use the simex package in R and apply the quadratic fitting method because of its numerical stability. The replicate measurements allow us to compute the components-of-variance estimator as follows:
\[
\hat{\sigma}_u^2 = \frac{\sum_{i=1}^{n}\sum_{j=1}^{2}\left(X^*_{ij} - \bar{X}^*_i\right)^2}{n} = 0.01278,
\]
where $\bar{X}^*_i$ is the mean of the replicates (Carroll, Ruppert, Stefanski and Crainiceanu, 2006). We thus estimate the measurement error variance $\hat{\tau}^2 = \hat{\sigma}_u^2 / 2 = 0.00639$ from the data, as well as $\mathrm{Var}(X^*) = 0.04543$. We use the glm function in R for the logistic regression model. The results from the SIMEX analysis are shown in Section 6.3, and the extrapolation plot appears in Figure 6.1.
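A sketch of the replicate-based calculation described above might look as follows in R. It is not the thesis code; the data frame fram with columns sbp2, sbp3, age, and chd is hypothetical, and the stand-in data are generated only so that the snippet runs.

## Hypothetical stand-in data so the sketch runs; a real analysis would use the
## Framingham subset described in the text.
set.seed(9)
N <- 1615
long_sbp <- 50 + exp(rnorm(N, mean = log(80), sd = 0.2))   # "true" long-term SBP
fram <- data.frame(
  sbp2 = long_sbp + rnorm(N, sd = 8),    # replicate at exam 2
  sbp3 = long_sbp + rnorm(N, sd = 8),    # replicate at exam 3
  age  = sample(31:65, N, replace = TRUE)
)
fram$chd <- rbinom(N, 1, plogis(-12 + 1.7 * log(long_sbp - 50) + 0.05 * fram$age))

## Replicate-based components-of-variance estimate on the log(SBP - 50) scale
x1   <- log(fram$sbp2 - 50)
x2   <- log(fram$sbp3 - 50)
xbar <- (x1 + x2) / 2                              # mean of the two replicates
sigma2_u <- sum((x1 - xbar)^2 + (x2 - xbar)^2) / N
tau2_hat <- sigma2_u / 2                           # error variance of the replicate mean
c(sigma2_u = sigma2_u, tau2_hat = tau2_hat, var_xbar = var(xbar))

## Naive logistic fit treating the replicate mean as the true exposure,
## kept with x = TRUE, y = TRUE so it could be handed on to simex()
d <- data.frame(chd = fram$chd, xbar = xbar, age = fram$age)
naive <- glm(chd ~ xbar + age, family = binomial, data = d, x = TRUE, y = TRUE)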
Figure 6.1: The extrapolation performed by the simex function for the mismeasured systolic blood pressure in the Framingham data.

6.2 Bayes-MCMC Analysis

Unlike in the SIMEX analysis, we can use the replicate measurements to incorporate the uncertainty about $\tau^2$ in the Bayesian framework. As earlier in the simulation example, we assign improper locally uniform priors to the $\alpha$s and $\beta$s, such that $f(\alpha) \propto 1$ and $f(\beta) \propto 1$, and IG(0.5, 0.5) priors to $\lambda^2$ and $\tau^2$. We first perform the analysis by coding the complete MCMC algorithm in R, and secondly by using the WinBUGS software as part of the R program.

6.2.1 Analysis Using R

The posterior distribution is
\[
f(x, \alpha, \beta, \lambda^2, \tau^2 \mid x^*, y, z) \propto \prod_{i=1}^{n} f(x^*_i \mid x_i, \tau^2) \times \prod_{i=1}^{n} f(y_i \mid x_i, z_i, \beta) \times \prod_{i=1}^{n} f(x_i \mid z_i, \alpha, \lambda^2) \times f(\alpha, \beta, \lambda^2, \tau^2),
\]
which expands to
\[
f(x, \alpha, \beta, \lambda^2, \tau^2 \mid x^*, y, z) \propto \left(\frac{1}{\tau^2}\right)^{n} \exp\left(-\frac{1}{2\tau^2}\sum_{i=1}^{n}\Big((x^*_{i1} - x_i)^2 + (x^*_{i2} - x_i)^2\Big)\right) \times \prod_{i=1}^{n} \exp\Big(y_i(\beta_0 + \beta_1 x_i + \beta_2 z_i)\Big) \cdots
\]