Evaluating the Performance of Simulation Extrapolation and Bayesian Adjustments for Measurement Error

by

Alexandra Romann
B.Sc., Thompson Rivers University, 2005

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in The Faculty of Graduate Studies (Statistics)

The University of British Columbia (Vancouver)
August 2008
© Alexandra Romann 2008

Abstract

Measurement error is a frequent issue in many research areas. For instance, in health research it is often of interest to understand the relationship between an outcome and an exposure, which is often mismeasured if the study is observational or a gold standard is costly or absent. Measurement error in the explanatory variable can have serious effects, such as biased parameter estimation, loss of power, and masking of the features of the data. The structure of the measurement error is usually not known to the investigators, leading to many difficulties in finding solutions for its correction.

In this thesis, we consider problems involving a correctly measured continuous or binary response, a mismeasured continuous exposure variable, and another correctly measured covariate. We compare our proposed Bayesian approach to the commonly used simulation extrapolation (SIMEX) method. The Bayesian model incorporates the uncertainty of the measurement error variance, and the posterior distribution is generated by using the Gibbs sampler as well as the random walk Metropolis algorithm. The comparison between the Bayesian and SIMEX approaches is conducted using different cases of simulated data including validation data, as well as the Framingham Heart Study data, which provides replicates but no validation data. The Bayesian approach is more robust to changes in the measurement error variance or validation sample size, and consistently produces wider credible intervals as it incorporates more uncertainty.

The underlying theme of this thesis is the uncertainty involved in the estimation of the measurement error variance. We investigate how accurately this parameter has to be estimated, and how confident one has to be about this estimate, in order to produce better results by choosing the Bayesian measurement error correction over the naive analysis where measurement error is ignored.
Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication
1 Introduction
  1.1 Overview of Some Currently Available Methods
    1.1.1 Functional Methods
    1.1.2 Structural Methods
2 Simulation Extrapolation
3 Bayesian Estimation
  3.1 Bayes Rule
  3.2 Choice of Prior Distributions
  3.3 Markov Chain Monte Carlo Algorithms
    3.3.1 The Metropolis-Hastings Algorithm
    3.3.2 Random Walk Metropolis Algorithm
    3.3.3 The Gibbs Sampler
4 Simulation Study
  4.1 Simulation Extrapolation Approach
  4.2 Bayes-MCMC Approach
  4.3 Comparison: SIMEX versus Bayes-MCMC
5 Exploring the Bayes-MCMC Correction
6 Framingham Study Example
  6.1 SIMEX Analysis
  6.2 Bayes-MCMC Analysis
    6.2.1 Analysis Using R
    6.2.2 Analysis Using WinBUGS
  6.3 Results
7 Conclusion and Future Work
Bibliography

List of Tables

4.1 MSEs of SIMEX and Bayesian estimates
4.2 Average widths of CIs using SIMEX and Bayesian analyses
4.3 CI coverage of the true values using SIMEX and Bayesian analyses
5.1 Results of the prior knowledge investigation
6.1 Framingham example results

List of Figures

4.1 SIMEX extrapolation plot when τ² is moderate
4.2 SIMEX extrapolation plot when τ² is large
4.3 SIMEX plot when τ² is moderate
4.4 SIMEX plot when τ² is large
4.5 Traceplots and posterior densities of β and τ when τ² is moderate
4.6 Traceplots and posterior densities of β and τ when τ² is large
5.1 Inverse Gamma plots
5.2 Inverse Gamma(50, 12.75) plot
5.3 Graphical representation of the results of the prior knowledge investigation
6.1 SIMEX extrapolation plot for the Framingham example
6.2 Traceplots and posterior density plots of the Bayesian analysis of the Framingham example
6.3 Traceplots and posterior density plots of the WinBUGS analysis of the Framingham example

Acknowledgements

First of all, I thank my supervisor, Professor Paul Gustafson, and my co-supervisor, Dr. Nhu Le, for their support throughout my study period in the Department of Statistics. It was an honour to work with both of them, and their guidance helped me complete this thesis. I express sincere gratitude to Professors Lang Wu, Harry Joe, Arnaud Doucet, Raphael Gottardo, Matias Salibián-Barrera, and Michael Schulzer for their excellent teaching and mentorship. I also thank Mike Mann for his friendship and encouragement, as well as the office staff for their help. I am grateful to all the graduate students for making the department a great place to study. In particular, I thank Michael Regier, Justin Harrington, Aline Tabet, Kimberly Fernandes, and Luke Bornn for their friendship and advice. On a personal note, I am eternally grateful to my parents, Ursula and Max Romann, who have provided me with an abundance of kind support all of my life, and to Neal Dustin for his love and patience.

Alexandra Romann
The University of British Columbia
August 2008

To My Parents

Chapter 1
Introduction

In many research areas the measurement accuracy of a variable is a frequent issue. For instance, in health research it is often of interest to understand the relationship between an outcome Y and an exposure X. When an instrument is available that measures X correctly, such a tool is called a gold standard.
Suppose that X can only be measured imprecisely due to the absence or high cost of a gold standard. Examples of such a scenario are intakes of foods or drugs, and exposure to airborne pollutants or radiation. If the covariate is a categorical variable, the problem with such imprecise measurement is called misclassification, whereas it is referred to as measurement error for a continuous variable. In this thesis, we will only concern ourselves with the latter case. There exists a vast literature on misclassification; see, for example, Gustafson (2004). Covariate measurement error problems are concerned with inference on regression coefficients where one or more of the covariates are measured imperfectly. It has been documented extensively that treating this surrogate variable as if it were the true exposure can lead to very poor results. We illustrate this with an example in Chapter 4. Carroll, Ruppert, Stefanski and Crainiceanu (2006) call the effect of ignoring imprecision in mismeasured covariates the Triple Whammy of measurement error, because it causes bias in parameter estimation, leads to a loss of power, and masks the features of the data.

Many methods have been proposed for countering these issues. We will focus on the first problem, how to offset the bias caused by measurement error. It is interesting that the magnitude of a regression coefficient of the incorrectly measured variable is often biased towards zero, a phenomenon called attenuation. In other words, it will underestimate the true value. The actual measurement that is made for X is the surrogate and incorrect exposure variable $X^*$. Further, we denote the correctly measured response by Y and the other correctly observed covariates by Z. The set of observed covariates is thus $\{X^*, Z\}$.

Measurement error problems can be classified into nondifferential and differential problems, the first of which occurs when $X^*$ contains no information about Y other than what is available in X and Z. In this case, the distribution of the mismeasured surrogate depends only on the true explanatory variable and not on the response. Otherwise, the measurement error is considered differential.

For a given problem it is important to know how the error arises in the covariate. In the classical additive error model $X^* = X + U$, where U is the measurement error, $U \perp (X, Y, Z)$, and $E(U \mid X) = 0$. This model is used when the error-prone covariate is measured uniquely for an individual, which is especially true when the measurement can be replicated. Examples of classical additive error arise in nutrition surveys and blood pressure measurements. However, if all subjects in a small group are given the same value of the error-prone covariate, so that the independence assumption between U and X is not satisfied, then the Berkson error model is better suited. In this model $X = X^* + U$, where $U \perp (X^*, Y, Z)$. This applies to dust exposure data of miners, for example: employees having worked in the mine for a fixed number of years are assigned the same exposure to dust, even though the true exposure is almost certainly unique to each individual.
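To make the attenuation effect described above concrete, the short simulation below generates data from the classical additive error model and compares the naive least-squares slope with the slope obtained from the correctly measured covariate. This is our own illustrative sketch; the variable names and parameter values are assumptions, not taken from the thesis.

```r
set.seed(1)
n     <- 5000
beta1 <- 1.0          # true slope
tau   <- 0.5          # measurement error standard deviation

x     <- rnorm(n)                  # true exposure
y     <- beta1 * x + rnorm(n)      # outcome model
xstar <- x + rnorm(n, sd = tau)    # classical additive error: X* = X + U

coef(lm(y ~ x))["x"]          # close to the true value 1
coef(lm(y ~ xstar))["xstar"]  # attenuated towards zero, roughly 1/(1 + tau^2) = 0.8
```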
The models discussed above are additive measurement error models; however, the error can also arise in a multiplicative fashion. Examples are problems where the exposure X is positive and skewed, which occurs frequently in epidemiological problems. In that case, the observed exposure might be assumed to equal the product of the true exposure and the measurement error, such that $X^* = XU$. Log-exposure can be used to convert such a case to an additive measurement error model: it then follows that $\log X^* = \log X + \log U$.

1.1 Overview of Some Currently Available Methods

Many methods have been proposed to deal with measurement error. They can be broadly grouped into functional and structural models (Carroll, Ruppert, Stefanski and Crainiceanu, 2006). In functional modelling, the Xs are regarded as fixed or random with no or minimal distributional assumptions. In structural models the Xs are regarded as random variables, so the potential for exposure model misspecification arises.

1.1.1 Functional Methods

In order to use functional methods, the size of the measurement error needs to be estimated. In some cases, a gold standard test is available and one is able to observe X directly for some subset of the data. This is the validation data, from which the measurement error variance is estimated. In other cases replication data or measurements from another instrument may be available.

A simple and quite general method called regression calibration (Pierce and Kellerer, 2004) is appropriate when such a gold standard or replicates are available and a linear measurement error with constant variance applies. In this method $E(X \mid X^*, Z)$ is estimated and the unobserved Xs are replaced by these estimates; then a standard analysis is run to obtain the parameter estimates. The resulting standard errors are adjusted to account for the estimation of the parameters, using the bootstrap, for example. This method has been shown to be very useful for generalized linear models, but performs poorly for highly nonlinear models (Carroll, Ruppert, Stefanski and Crainiceanu, 2006). A small sketch of the idea appears below.
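The following is a minimal sketch of regression calibration on a simulated validation-study data set. All names, the simulated data, and the parameter values (which loosely mirror the Chapter 4 set-up) are our own assumptions for illustration, not the thesis' code.

```r
set.seed(2)
n <- 250; m <- 50                                  # main study and validation sizes
z     <- rnorm(n)
x     <- 0.75 * z + rnorm(n, sd = sqrt(0.4375))    # exposure model
y     <- 1 + x + z + rnorm(n)                      # outcome model
xstar <- x + rnorm(n, sd = 0.5)                    # mismeasured exposure
val   <- 1:m                                       # subjects with a gold-standard X

# Step 1: estimate E(X | X*, Z) from the validation data
cal_fit <- lm(x[val] ~ xstar[val] + z[val])
xhat    <- drop(cbind(1, xstar, z) %*% coef(cal_fit))   # calibrated X for everyone

# Step 2: run the standard analysis with X replaced by its calibrated value
coef(lm(y ~ xhat + z))
# Standard errors would still need adjustment, e.g. by bootstrapping.
```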
Another simple method that is also potentially applicable to any regression model, but is more computationally intensive than regression calibration, is the simulation extrapolation approach (SIMEX). For SIMEX, the bias function can be expressed in closed form for linear regression, and thus having more data leads to better estimates. However, when the bias function is not known and has to be approximated, having more data does not improve this approximation. We will discuss the SIMEX method in detail in the next chapter.

Although regression calibration and SIMEX are quite general methods for eliminating or reducing measurement error bias, they result in estimators that are consistent only in important special cases such as linear regression (Carroll, Ruppert, Stefanski and Crainiceanu, 2006). In addition, when the measurement error variance is large, both regression calibration and SIMEX may not be useful in reducing the bias induced by the mismeasurement.

Score function methods are almost as widely applicable, but result in fully consistent estimators more generally. In these methods, consistency of the estimators is due to the fact that they are M-estimators whose score functions are unbiased even if there is measurement error. The conditional-score method (Stefanski and Carroll, 1987; Nakamura, 1990) and the corrected-score method (Stefanski, 1989) differ in their underlying assumptions and ease of computation. When the assumptions for both are satisfied, the conditional-score method will usually be more efficient than the corrected-score method. However, sometimes (for example in Poisson regression) the conditional-score estimator requires numerical integration, while the corrected score has a closed-form expression. Both score function methods are not straightforward to implement, and the problem of multiple roots is often serious (Hossain, 2007). SIMEX and regression calibration are simple to implement because they are "add-on" packages to existing software, whereas other methods such as the score function approaches require a completely different analysis set-up.

1.1.2 Structural Methods

To use structural models, the best possible specifications of the measurement $(X^* \mid Y, X, Z)$, outcome $(Y \mid X, Z)$, and exposure $(X \mid Z)$ models are needed. Bayesian methods fall into this category, as they are based on a likelihood approach followed by direct analytic calculations if possible, or otherwise by estimation methods such as Markov chain Monte Carlo (MCMC) sampling techniques. We will discuss such approaches further in this thesis. Alternatively, one could use likelihood estimation, which also requires fully specified measurement, outcome, and exposure models; in this case, the Expectation-Maximization algorithm could be applied instead of the Bayesian MCMC techniques. Bayesian methods have the advantage that they incorporate parameter uncertainty, whereas frequentist approaches are less able to incorporate such uncertainty. However, they require fully specified exposure models, while functional approaches, such as SIMEX, do not. This eliminates the risk of misspecification in the use of functional approaches.

Chapter 2
Simulation Extrapolation

The simulation extrapolation (SIMEX) method was first proposed by Cook and Stefanski (1994) and is a simulation-based means of estimating and reducing bias due to additive measurement error (Carroll, Ruppert, Stefanski and Crainiceanu, 2006). It can be used for inference in parametric measurement error models when the measurement error variance is known or can be estimated (Cook and Stefanski, 1994). Additional measurement error is added to the already mismeasured covariate and the corresponding parameter estimates are calculated. The relationship between the parameter estimate and the amount of added measurement error is established and then extrapolated back to the case of no measurement error. Cook and Stefanski (1994) show that this approach is asymptotically equivalent to method-of-moments estimation in linear measurement error models. The authors make a very simplified comparison between SIMEX and method-of-moments estimation using Monte Carlo-derived estimating equations. SIMEX is intuitive, simple to implement, and can be applied to just about any regression model, as it creates new data sets with added measurement error. SIMEX is applicable to general estimation methods, such as least squares, maximum likelihood, or quasilikelihood, and can also be extended to nonadditive models (Carroll, Ruppert, Stefanski and Crainiceanu, 2006). One way to deal with multiplicative measurement error would be to convert it to additive error by taking logarithms, as discussed in the last chapter. To implement this method we use the simex package for covariate measurement error in R, written by Lederer and Küchenhoff (2008).

We assume a simple linear regression of the form $Y = \beta_0 + \beta_x X + \epsilon$, with additive and nondifferential measurement error $X^* = X + U$, where $U \perp (Y, X)$, $E(U) = 0$, and $\mathrm{Var}(U) = \sigma_u^2$.
Typically, normality of the measurement error is assumed, although it is not completely critical in practice. It is also assumed that the measurement error variance, $\sigma_u^2$, is known or well estimated. Now, if $\sigma_u^2 > 0$, then the ordinary least squares estimate of $\beta_x$ from regressing Y on $X^*$, denoted by $\hat\beta_{x,\mathrm{naive}}$, is biased. Note that SIMEX is usually not used for simple linear regression, as simple method-of-moments bias corrections can be used. We add further measurement error to the already noisy predictor and create a new data set in which the measurement error variance is greater. Suppose that this process is repeated until we have a total of M data sets, including the naive one. Then each of these data sets has a successively larger measurement error variance, $(1 + \zeta_m)\sigma_u^2$, where $0 = \zeta_1 < \zeta_2 < \cdots < \zeta_M$ are set by the user. If we regress Y on the error-inflated predictor in the m-th data set, the least squares estimate of the slope, $\hat\beta_{x,m}$, $m = 1, 2, \ldots, M$, consistently estimates $\beta_x \sigma_x^2 / \{\sigma_x^2 + (1 + \zeta_m)\sigma_u^2\}$. If we regress $\hat\beta_{x,m}$ on $\zeta_m$, we get the following mean function:
\[
G(\zeta) = \frac{\beta_x \sigma_x^2}{\sigma_x^2 + (1 + \zeta)\sigma_u^2}, \qquad \zeta \ge 0.
\]
We can extrapolate back to $\zeta = -1$ to get to the case of no measurement error. From the equation above it is evident that this is the case, because $G(-1) = \beta_x$.

We now describe these steps in detail (Carroll, Ruppert, Stefanski and Crainiceanu, 2006). First, simulated data sets with increasingly larger measurement error variance have to be created. Define
\[
X^*_{m,i}(\zeta_m) = X^*_i + \sqrt{\zeta_m}\, U_{m,i}, \qquad i = 1, \ldots, n, \; m = 1, \ldots, M,
\]
where the $U_{m,i} \sim N(0, \sigma_u^2)$ are mutually independent and identically distributed. They are also independent of all observed data. Because the error variance in the simulated data has been inflated by a multiplicative factor of $(1 + \zeta)$, the error variance is zero, in theory, when $\zeta = -1$. Mathematically, $\mathrm{Var}\{X^*_{m,i}(\zeta) \mid X_i\} = (1 + \zeta)\sigma_u^2 = (1 + \zeta)\,\mathrm{Var}(X^* \mid X)$, which is equal to 0 when $\zeta = -1$, though we cannot actually generate data for negative values of $\zeta$.

Now that the data sets have been simulated, the naive estimates based on these predictors have to be computed. Let $\hat\beta_m(\zeta)$ be the estimator when the m-th set of additional error is used; the average of these estimators,
\[
\hat\beta(\zeta) = \frac{1}{M} \sum_{m=1}^{M} \hat\beta_m(\zeta),
\]
is the mean from many simulations with the same amount of added measurement error. This way, we can precisely estimate the additional bias due to increasing measurement error. In the extrapolation step, $\hat\beta(\zeta)$ is modeled as a function of $\zeta \ge 0$, and the fitted model is then extrapolated back to $\zeta = -1$. This extrapolated value is denoted by $\hat\beta_{\mathrm{simex}}$.

The choice of the extrapolation function is important, as the wrong choice could give misleading results and therefore not help in correcting the bias caused by measurement error. The simex function in R requires the user to specify an extrapolation function; otherwise the default, the quadratic function, is used. We apply the quadratic extrapolation function because it is numerically stable and seems to work best in our scenarios.

The SIMEX method is very popular because it is intuitive and simple to implement. We note, however, that this method does not completely correct the bias in practice, as the functional form of the bias is generally unknown and needs to be approximated. SIMEX works well for popular special cases, but does not always produce consistent estimators in general.
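To show what the method looks like in practice, here is a minimal sketch using the simex package the thesis relies on (Lederer and Küchenhoff, 2008). The data are simulated here purely for illustration, and the call signature follows the package documentation as we understand it, so ?simex should be checked before use.

```r
library(simex)

set.seed(3)
n     <- 250
x     <- rnorm(n)
xstar <- x + rnorm(n, sd = 0.5)        # measurement error SD assumed known (0.5)
y     <- 1 + x + rnorm(n)

# Naive fit; simex() needs the design matrix and response stored in the fit
naive_fit <- lm(y ~ xstar, x = TRUE, y = TRUE)

simex_fit <- simex(naive_fit,
                   SIMEXvariable     = "xstar",
                   measurement.error = 0.5,             # SD of U
                   lambda            = c(0.5, 1, 1.5, 2),
                   B                 = 100,
                   fitting.method    = "quadratic")
summary(simex_fit)
plot(simex_fit)   # extrapolation plot analogous to Figures 4.1 and 4.2
```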
Chapter 3
Bayesian Estimation

Bayesian inference implemented with Markov chain Monte Carlo (MCMC) algorithms is a mismeasurement-correcting approach that has become increasingly popular with the growing availability of MCMC software.

3.1 Bayes Rule

Assume we have an independent and identically distributed (iid) sample $w = (w_1, \ldots, w_n)$ from a density $f_\theta$, where $\theta \in \Theta$ is an unknown parameter. Then the likelihood function is
\[
\ell(\theta \mid w) = \prod_{i=1}^{n} f_\theta(w_i).
\]
Note that in the Bayesian framework $\theta$ is assumed to be random. This is the major difference from frequentist methods, where the parameters are assumed to be fixed and probabilities refer to limiting frequencies (Doucet, 2007; Casella and Berger, 2002). Because $\theta$ is assumed to be random, we can place a prior distribution $\pi(\theta)$ on it that expresses our belief about the parameter before having seen the data. Given these distributions and using Bayes' theorem, we can construct a posterior distribution
\[
\pi(\theta \mid w) = \frac{\ell(\theta \mid w)\,\pi(\theta)}{\int \ell(\theta \mid w)\,\pi(\theta)\,d\theta}.
\]
Note that this implies $\pi(\theta \mid w) \propto \ell(\theta \mid w)\,\pi(\theta)$, where the proportionality constant can be obtained by normalization. The evaluation of this normalizing constant becomes an issue when $\theta$ is very high dimensional and the integral cannot be evaluated in closed form. For example, suppose we want to compare estimators $\hat\theta$ of $\theta$ using the quadratic loss function $\|\hat\theta - \theta\|^2$. In the Bayesian framework, one would calculate the expected value of $\theta$ under the posterior (Marin and Robert, 2007):
\[
\hat\theta = \int \theta\, \pi(\theta \mid w)\, d\theta
          = \frac{\int \theta\, \ell(\theta \mid w)\,\pi(\theta)\, d\theta}{\int \ell(\theta \mid w)\,\pi(\theta)\, d\theta}.
\]
Markov chain Monte Carlo (MCMC) techniques or other numerical methods are needed to complete such calculations. MCMC methods have, since they were introduced around 1990, revolutionized Bayesian statistics (Doucet, 2007). The posterior distribution is an updated version of the prior combined with the observed data, as represented by the likelihood. We can then treat the posterior distribution as a new prior and repeat the procedure to get another posterior. MCMC generates multiple dependent samples by running a Markov chain a set number of times, whereas sequential Monte Carlo (SMC) generates multiple independent samples simultaneously.

3.2 Choice of Prior Distributions

The prior distribution $\pi(\theta)$ expresses the researcher's knowledge and belief about the parameter(s) of interest based on previous experience and subject area knowledge, but without having seen the current data. When such information is not available, the impact of the prior on the inference must be minimized. A "flat", or noninformative, distribution is assigned in such cases, so that no values are preferred. This process is not trivial, as there is no single agreed-upon notion of "flat" (Gustafson, 2004; Marin and Robert, 2007).
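Before turning to the MCMC algorithms, here is a toy numerical illustration (our own, not the thesis') of the posterior-mean formula from Section 3.1 under a flat prior, computed by brute-force grid integration. In more than a few dimensions exactly this kind of integration becomes infeasible, which is what motivates the samplers discussed next.

```r
# Posterior mean of a Bernoulli success probability under a flat prior,
# computed by numerical integration over a grid of parameter values.
w     <- c(1, 0, 1, 1, 0, 1, 1, 1)            # iid Bernoulli(theta) sample
theta <- seq(0.001, 0.999, by = 0.001)        # grid over the parameter space
lik   <- sapply(theta, function(t) prod(dbinom(w, 1, t)))
prior <- dunif(theta)                         # "flat" prior

post_unnorm <- lik * prior
post_mean   <- sum(theta * post_unnorm) / sum(post_unnorm)
post_mean   # close to (sum(w) + 1) / (length(w) + 2) = 0.7 by Beta conjugacy
```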
3.3 Markov Chain Monte Carlo Algorithms

In Bayesian problems the posterior distributions are often very complicated, so it is difficult to compute probabilities. The integrals that have to be evaluated are frequently of high dimension, so that numerical integration techniques become useless. These problems kept Bayesian analysis very limited until around 1990, when MCMC algorithms were first applied to statistical analyses. The MCMC techniques are based on the idea that it is sufficient to produce a Markov chain $t_n$, where n is a natural number, whose stationary distribution is $f$, for instance the posterior distribution (Marin and Robert, 2007). If the marginal distribution of $t_n$ is $f$, then $f$ is also the marginal distribution of $t_{n+1}$ (Grimmett and Stirzaker, 2001). For large n, $t_n$ is approximately distributed from $f$.

The following three sections briefly discuss the very general Metropolis-Hastings algorithm (Metropolis, Rosenbluth, Rosenbluth, Teller, and Teller, 1953; Hastings, 1970), the random walk Metropolis algorithm (Metropolis, Rosenbluth, Rosenbluth, Teller, and Teller, 1953), and the Gibbs sampler (Casella and George, 1992). Marin and Robert (2007), among many other sources, give a good overview of these and additional algorithms.

3.3.1 The Metropolis-Hastings Algorithm

The Metropolis-Hastings (MH) algorithm is a general iterative method to sample from any probability distribution $f(\theta)$ known up to a normalizing constant. The first step in the MH algorithm is to set a proposal distribution $q(\theta^* \mid \theta)$. Based on the current state of the Markov chain, a new candidate $\theta^*$ is proposed as follows at the i-th iteration, $i \ge 1$:

1. Set $\theta^{(0)}$. This can be done randomly or deterministically.
2. Sample $\theta^* \sim q(\theta^* \mid \theta^{(i-1)})$.
3. Compute
\[
\alpha(\theta^{(i-1)}, \theta^*) = \min\!\left(1,\; \frac{f(\theta^*)\, q(\theta^{(i-1)} \mid \theta^*)}{f(\theta^{(i-1)})\, q(\theta^* \mid \theta^{(i-1)})}\right).
\]
4. With probability $\alpha(\theta^{(i-1)}, \theta^*)$, set $\theta^{(i)} = \theta^*$; otherwise set $\theta^{(i)} = \theta^{(i-1)}$.

3.3.2 Random Walk Metropolis Algorithm

The original Metropolis algorithm incorporates a random walk proposal, and in this thesis we use the following algorithm when the model in question is a logistic regression. A new candidate for $\theta$ is proposed as follows at the i-th iteration, $i \ge 1$:

1. Skip this step if $i > 1$. Compute the maximum likelihood estimate $\hat\theta$ and the matrix $\hat\Sigma$ corresponding to the asymptotic (Fisher) covariance of $\hat\theta$. Set $\theta^{(0)} = \hat\theta$.
2. Generate $\theta^* \sim N_k(\theta^{(i-1)}, s^2 \hat\Sigma)$, where $s^2$ is the scale factor. Typically, $\Sigma = I$ is used, where I is the identity matrix, if no covariance estimate is available.
3. Compute
\[
\alpha(\theta^{(i-1)}, \theta^*) = \min\!\left(1,\; \frac{f(\theta^*)}{f(\theta^{(i-1)})}\right).
\]
4. With probability $\alpha(\theta^{(i-1)}, \theta^*)$, set $\theta^{(i)} = \theta^*$; otherwise set $\theta^{(i)} = \theta^{(i-1)}$.

3.3.3 The Gibbs Sampler

The Gibbs sampler is a special case of the MH algorithm with an acceptance probability of one, applicable when full conditional distributions are available. It is a simple and popular iterative method to sample from high dimensional probability distributions. Suppose again that we want to sample from $f(\theta)$, where $\theta = (\theta_1, \ldots, \theta_p)$. Then the Gibbs sampler generates a Markov chain at the i-th iteration as follows:

1. Set $\theta^{(0)} = (\theta_1^{(0)}, \ldots, \theta_p^{(0)})$. This can be done randomly or deterministically.
2. For $j = 1, \ldots, p$, sample
\[
\theta_j^{(i)} \sim f\!\left(\theta_j \mid \theta_1^{(i)}, \ldots, \theta_{j-1}^{(i)}, \theta_{j+1}^{(i-1)}, \ldots, \theta_p^{(i-1)}\right).
\]

These MCMC algorithms are simple and general ways to sample from any target distribution; however, the technical details are usually very tedious. We will apply the Gibbs sampler and the random walk Metropolis algorithm. These will be discussed in more detail in the simulation application in the next chapter and the Framingham example in Chapter 6, respectively. A short sketch of the random walk Metropolis step for a logistic regression is given below.
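Below is a minimal sketch of the random walk Metropolis algorithm of Section 3.3.2 for a logistic regression, using the maximum likelihood fit to centre and scale the proposal. The data are simulated here, a flat prior on the coefficients is assumed, and all names and tuning values are our own illustrative choices rather than the thesis' code.

```r
set.seed(4)
n <- 200
x <- rnorm(n); z <- rnorm(n)
y <- rbinom(n, 1, plogis(-1 + 1.5 * x + 0.5 * z))

X <- cbind(1, x, z)
log_post <- function(beta) {            # log-likelihood = log-posterior under a flat prior
  eta <- X %*% beta
  sum(y * eta - log(1 + exp(eta)))
}

fit   <- glm(y ~ x + z, family = binomial)
Sigma <- summary(fit)$cov.unscaled      # asymptotic (Fisher) covariance of the MLE
s     <- 0.5                            # scale factor, as used in Chapter 6
library(MASS)                           # for mvrnorm()

n_iter <- 5000
draws  <- matrix(NA, n_iter, 3)
beta   <- coef(fit)                     # start the chain at the MLE
for (i in 1:n_iter) {
  prop <- MASS::mvrnorm(1, beta, s^2 * Sigma)
  if (log(runif(1)) < log_post(prop) - log_post(beta)) beta <- prop
  draws[i, ] <- beta
}
colMeans(draws[-(1:1000), ])            # posterior means after burn-in
```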
Chapter 4
Simulation Study

In this example we consider the case of a linear regression involving two covariates, such that $Y = \beta_0 + \beta_1 X + \beta_2 Z + \epsilon$, where $\epsilon$ is normally distributed with mean zero. We suppose that only a mismeasured surrogate $X^*$ is available for X, but we observe Z and Y correctly. For simplicity, we assume the variables are distributed normally and the measurement model is nondifferential. Then
\[
X^* \mid X, Z, Y \sim N(X, \tau^2),
\]
where $\tau^2$ is an unknown parameter. The response model, parameterized by $\beta_0$, $\beta_1$, $\beta_2$, and $\sigma^2$, is
\[
Y \mid X, Z \sim N(\beta_0 + \beta_1 X + \beta_2 Z, \sigma^2).
\]
The exposure model, conditioned on the covariate Z and parameterized by $\alpha_0$, $\alpha_1$, and $\lambda^2$, is
\[
X \mid Z \sim N(\alpha_0 + \alpha_1 Z, \lambda^2).
\]

We consider a study with n = 250 subjects, where $X^*$, Z, and Y are observed for all subjects. For a validation sample of m = 10 randomly chosen subjects, X is also measured. This is applicable to situations where a gold standard test does exist, but it is not feasible to administer it to all subjects due to high costs or time constraints. Let the subscript C denote the complete data available for the validation sample, and R the incomplete or reduced data. For instance, $X^*_C$ denotes the incorrectly measured variable for the subjects in the validation sample. Following Gustafson (2004), we simulate a data set with (X, Z) having a bivariate normal distribution, each marginal distribution being standard normal, and a correlation coefficient of 0.75. This set-up yields $(\alpha_0, \alpha_1) = (0, 0.75)$ and $\lambda^2 = 0.4375$. We initially set $\tau = 0.5$ (Scenario 1), and increase it to $\tau = 1.5$ (Scenario 2) later. We set $(\beta_0, \beta_1, \beta_2) = (0, 0.5, 0.25)$ and $\sigma = 1$. Firstly, we analyze this hypothetical study using the frequentist SIMEX approach, secondly from a Bayesian perspective, and finally we compare the results from both methods.

4.1 Simulation Extrapolation Approach

We use the simex package in R (Lederer and Küchenhoff, 2008) for this analysis. The simex() function in this package takes an estimate of $\tau$ as input, so it needs to be estimated from the validation data:
\[
\hat\tau_{\mathrm{val}} = \sqrt{\frac{\sum_{i=1}^{m} (x^*_{iC} - x_{iC})^2}{m}}.
\]
We assume this value to be correct for its use in the SIMEX algorithm. This may be too strong an assumption, as our validation sample size is small, and we will investigate its validity further later. We use the quadratic extrapolation function because it seems to perform best for this set-up. This function is usually used when the relationship is not known; we use it because of its numerical stability, even though we know the relationship. The additional data sets with successively larger measurement error are computed as described in Chapter 2. The extrapolation for this example is illustrated in Figure 4.1 for Scenario 1 with moderate measurement error variance, and in Figure 4.2 for Scenario 2 with large measurement error variance. We will later elaborate on these two scenarios and the performance of SIMEX in each case. Figures 4.3 and 4.4 show how well the SIMEX approach works in Scenario 1 and Scenario 2, respectively. SIMEX seems to perform very well in the case of moderate measurement error variance, but has its shortcomings when the measurement error variance is increased to 1.5 from 0.5.
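As an illustration of this set-up (our own sketch, with hypothetical variable names), the code below simulates one Scenario 1 data set and computes $\hat\tau_{\mathrm{val}}$ from the m = 10 validation subjects.

```r
set.seed(5)
n <- 250; m <- 10
library(MASS)

# (X, Z) bivariate standard normal with correlation 0.75
xz <- MASS::mvrnorm(n, mu = c(0, 0), Sigma = matrix(c(1, 0.75, 0.75, 1), 2))
x  <- xz[, 1]; z <- xz[, 2]

beta  <- c(0, 0.5, 0.25)                       # (beta0, beta1, beta2)
y     <- beta[1] + beta[2] * x + beta[3] * z + rnorm(n)
tau   <- 0.5                                   # Scenario 1
xstar <- x + rnorm(n, sd = tau)                # nondifferential additive error

val     <- sample(n, m)                        # validation subsample
tau_hat <- sqrt(mean((xstar[val] - x[val])^2))
tau_hat                                        # plug-in estimate passed to simex()
```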
Figure 4.1: SIMEX extrapolation plot for Scenario 1, where τ = 0.5, illustrating the effect of adding consecutively more measurement error to the original naive estimate at ζ = 0. The SIMEX estimate is the extrapolation to ζ = −1.

Figure 4.2: SIMEX extrapolation plot for Scenario 2, where τ = 1.5, illustrating the effect of adding consecutively more measurement error to the original naive estimate at ζ = 0. The SIMEX estimate is the extrapolation to ζ = −1.

Figure 4.3: In the case of Scenario 1 (τ = 0.5), the solid line shows the least-squares line had the data been measured correctly, while the dashed line illustrates the naive analysis where measurement error is not taken into account. The dotted line illustrates the correction the SIMEX algorithm provides for this particular example.

Figure 4.4: In the case of Scenario 2 (τ = 1.5), the solid line shows the least-squares line had the data been measured correctly, while the dashed line illustrates the naive analysis where measurement error is not taken into account. The dotted line illustrates the correction the SIMEX algorithm provides for this particular example.

To make the SIMEX approach more comparable to the Bayes-MCMC method, it seems sensible to add uncertainty to the estimate of $\mathrm{Var}(X^* \mid X) = \tau^2$. We achieve this by using the bootstrap method, where the pairs $(X^*, X)$ are resampled and the estimate of $\tau^2$ is obtained from each bootstrap sample. This process is repeated 1000 times and 95% confidence limits are obtained for $\tau$. We then perform the SIMEX analysis three times: using the estimated $\tau$, as well as the lower and upper bounds of its 95% CI. For the three sets of estimates, it is possible to get "added-variance confidence intervals" in addition to the regular confidence intervals. We obtain the former as follows: we take the 2.5th and the 97.5th percentiles of the SIMEX β estimates obtained using the lower and upper bound of $\tau$, respectively. These "added-variance confidence intervals" are wider than the regular confidence intervals on the estimated coefficients.

4.2 Bayes-MCMC Approach

We assume prior independence of all unknown parameters, so
\[
f(\alpha, \beta, \lambda^2, \sigma^2, \tau^2) = f(\alpha)\, f(\beta)\, f(\lambda^2)\, f(\sigma^2)\, f(\tau^2).
\]
We assign improper, locally uniform priors to the α's and β's, such that $f(\alpha) \propto 1$ and $f(\beta) \propto 1$, although alternatively we could use proper but very flat priors. We use such uninformative priors because they do not favour any one value for the coefficients. Partly due to their conjugacy, Inverse Gamma distributions seem sensible priors for the variances of normal distributions (Gustafson, 2004; Doucet, 2007), so we assign IG(0.5, 0.5) priors to $\lambda^2$, $\sigma^2$, and $\tau^2$. The posterior distribution is
\[
f(x_R, \alpha, \beta, \lambda^2, \sigma^2, \tau^2 \mid x^*, x_C, y, z) \propto
\prod_{i=1}^{n} f(x^*_i \mid x_i, \tau^2)
\prod_{i=1}^{n} f(y_i \mid x_i, z_i, \beta, \sigma^2)
\prod_{i=1}^{n} f(x_i \mid z_i, \alpha, \lambda^2)\,
f(\tau^2)\, f(\sigma^2)\, f(\lambda^2),
\]
which expands to
\[
\begin{aligned}
f(x_R, \alpha, \beta, \lambda^2, \sigma^2, \tau^2 \mid x^*, x_C, y, z) \propto{}&
\left(\tfrac{1}{\tau^2}\right)^{n/2} \exp\!\left(-\tfrac{\sum_{i=1}^{n}(x^*_i - x_i)^2}{2\tau^2}\right)
\left(\tfrac{1}{\sigma^2}\right)^{n/2} \exp\!\left(-\tfrac{\sum_{i=1}^{n}(y_i - \beta_0 - \beta_1 x_i - \beta_2 z_i)^2}{2\sigma^2}\right) \times \\
& \left(\tfrac{1}{\lambda^2}\right)^{n/2} \exp\!\left(-\tfrac{\sum_{i=1}^{n}(x_i - \alpha_0 - \alpha_1 z_i)^2}{2\lambda^2}\right) \times \\
& \left(\tfrac{1}{\tau^2}\right)^{0.5+1} \exp\!\left(-\tfrac{0.5}{\tau^2}\right)
\left(\tfrac{1}{\sigma^2}\right)^{0.5+1} \exp\!\left(-\tfrac{0.5}{\sigma^2}\right)
\left(\tfrac{1}{\lambda^2}\right)^{0.5+1} \exp\!\left(-\tfrac{0.5}{\lambda^2}\right).
\end{aligned}
\]
From this posterior we can obtain the full conditional distributions as follows. Note that the superscript C denotes the complement, that is, all other unknowns and the data. For the exposure model coefficients,
\[
\alpha \mid \alpha^C \sim N\!\left((A'A)^{-1}A'x,\; \lambda^2 (A'A)^{-1}\right),
\]
and for the response model coefficients,
\[
\beta \mid \beta^C \sim N\!\left((B'B)^{-1}B'y,\; \sigma^2 (B'B)^{-1}\right),
\]
where
\[
A = \begin{pmatrix} 1 & z_1 \\ 1 & z_2 \\ \vdots & \vdots \\ 1 & z_n \end{pmatrix}
\quad \text{and} \quad
B = \begin{pmatrix} 1 & x_1 & z_1 \\ 1 & x_2 & z_2 \\ \vdots & \vdots & \vdots \\ 1 & x_n & z_n \end{pmatrix}.
\]
We also have
\[
f(\tau^2 \mid \tau^{2C}) \propto \left(\frac{1}{\tau^2}\right)^{n/2 + 0.5 + 1} \exp\!\left(-\frac{1}{\tau^2}\left(\frac{\sum_{i=1}^{n}(x^*_i - x_i)^2}{2} + 0.5\right)\right),
\]
which implies that
\[
\tau^2 \mid \tau^{2C} \sim IG\!\left(\frac{n}{2} + 0.5,\; \frac{\sum_{i=1}^{n}(x^*_i - x_i)^2}{2} + 0.5\right).
\]
The full conditional distributions for the other two variance components are obtained in a very similar fashion:
\[
\lambda^2 \mid \lambda^{2C} \sim IG\!\left(\frac{n}{2} + 0.5,\; \frac{\|x - A\alpha\|^2}{2} + 0.5\right),
\qquad
\sigma^2 \mid \sigma^{2C} \sim IG\!\left(\frac{n}{2} + 0.5,\; \frac{\|y - B\beta\|^2}{2} + 0.5\right).
\]
Considering only the cases where no gold standard measurement is made, the full conditional distribution for $x_R$ is normal, with mean and variance
\[
E(x_i \mid x_i^C) = \frac{(1/\tau^2)\, x^*_i + (\beta_1^2/\sigma^2)\,\big((y_i - \beta_0 - \beta_2 z_i)/\beta_1\big) + (1/\lambda^2)(\alpha_0 + \alpha_1 z_i)}
{(1/\tau^2) + (\beta_1^2/\sigma^2) + (1/\lambda^2)},
\qquad
\mathrm{Var}(x_i \mid x_i^C) = \frac{1}{(1/\tau^2) + (\beta_1^2/\sigma^2) + (1/\lambda^2)}.
\]

We are now ready to use the Gibbs sampler, and to do so we need to repeatedly sample from the full conditional distributions above. We use 10,000 iterations after 1000 burn-in iterations. The traceplots and posterior densities of β and τ in the case of Scenarios 1 and 2 are shown in Figures 4.5 and 4.6, respectively. From these we see that there are no mixing problems. A compressed R sketch of such a sampler is given at the end of this section.

Figure 4.5: Traceplots and posterior densities of β and τ from the Gibbs sampler for Scenario 1, where the measurement error variance is moderate. The traceplots show the 10,000 iterations after the 1000 burn-in iterations.

Figure 4.6: Traceplots and posterior densities of β and τ from the Gibbs sampler for Scenario 2, where the measurement error variance is large. The traceplots show the 10,000 iterations after the 1000 burn-in iterations.
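The following is a compressed sketch, written by us for illustration rather than taken from the thesis, of a Gibbs sampler built from the full conditionals above on a freshly simulated Scenario 1 data set. Only β1 and τ are stored here; a real analysis would keep all parameters and add convergence diagnostics.

```r
set.seed(6)
n <- 250; m <- 10
library(MASS)
xz     <- MASS::mvrnorm(n, c(0, 0), matrix(c(1, 0.75, 0.75, 1), 2))
x_true <- xz[, 1]; z <- xz[, 2]
y      <- 0.5 * x_true + 0.25 * z + rnorm(n)
xstar  <- x_true + rnorm(n, sd = 0.5)
obs    <- 1:m                                   # validation subjects (X observed)

rinvgamma <- function(shape, rate) 1 / rgamma(1, shape, rate)   # Inverse Gamma draw

x <- xstar                                      # crude starting values
tau2 <- sig2 <- lam2 <- 1
n_iter <- 2000
keep <- matrix(NA, n_iter, 2, dimnames = list(NULL, c("beta1", "tau")))

for (it in 1:n_iter) {
  A <- cbind(1, z); B <- cbind(1, x, z)
  alpha <- MASS::mvrnorm(1, drop(solve(crossprod(A), crossprod(A, x))),
                         lam2 * solve(crossprod(A)))
  beta  <- MASS::mvrnorm(1, drop(solve(crossprod(B), crossprod(B, y))),
                         sig2 * solve(crossprod(B)))

  tau2 <- rinvgamma(n/2 + 0.5, sum((xstar - x)^2) / 2 + 0.5)
  lam2 <- rinvgamma(n/2 + 0.5, sum((x - A %*% alpha)^2) / 2 + 0.5)
  sig2 <- rinvgamma(n/2 + 0.5, sum((y - B %*% beta)^2) / 2 + 0.5)

  # Impute the unobserved X's from their normal full conditional
  prec <- 1/tau2 + beta[2]^2/sig2 + 1/lam2
  mu   <- (xstar/tau2 + beta[2] * (y - beta[1] - beta[3] * z)/sig2 +
           (alpha[1] + alpha[2] * z)/lam2) / prec
  x    <- ifelse(seq_len(n) %in% obs, x_true, rnorm(n, mu, sqrt(1/prec)))

  keep[it, ] <- c(beta[2], sqrt(tau2))
}
colMeans(keep[-(1:500), ])                      # posterior means after burn-in
```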
4.3 Comparison: SIMEX versus Bayes-MCMC

It makes sense to compare the performances of the Bayes-MCMC and the SIMEX approaches by looking at the mean squared errors of the relevant parameters under both methods. For the Bayesian analysis, both the mean and the median of the 10,000 MCMC draws were used for comparison. Table 4.1 shows the MSEs of τ̂ and β̂1 based on one hundred simulated data sets.

Table 4.1: Mean squared errors of the SIMEX and Bayesian estimates of τ and β1, with varying validation sample size (m) and true τ. We use both the mean and the median in the Bayesian case. The MSEs are based on 100 comparisons between the SIMEX and Bayesian methods.

                        MSE of τ̂                              MSE of β̂1
                  SIMEX   Bayes (median)  Bayes (mean)   SIMEX   Bayes (median)  Bayes (mean)
m = 10, τ = 0.5   0.0089      0.0054         0.0050      0.0175      0.0369         0.0436
m = 10, τ = 1.5   0.0802      0.0083         0.0090      0.1311      0.0718         0.0700
m = 50, τ = 0.5   0.0025      0.0020         0.0020      0.0159      0.0200         0.0203
m = 50, τ = 1.5   0.0223      0.0067         0.0067      0.1280      0.0303         0.0298

When the measurement error and the validation sample are relatively small, SIMEX performs better than the Bayesian method in estimating the β1 coefficient. However, the Bayes-MCMC method outperforms the SIMEX algorithm in estimating τ in all scenarios considered. As soon as the measurement error variance is increased, the Bayes-MCMC method works better than SIMEX in the estimation of β1. SIMEX seems to deal badly with large measurement error variance (τ² = 1.5²), producing an MSE over three times larger than the Bayes method in the τ estimation and four times the MSE of the Bayes method in the β1 estimation, even when the validation sample size is large (m = 50). When the validation sample size m is ten and the measurement error variance is 1.5², SIMEX produces an MSE of nine times that of the Bayes-MCMC correction for τ and almost twice the MSE of the Bayesian approach for β1. This confirms the intuition that SIMEX should be vulnerable in the large measurement error case. In summary, SIMEX seems to perform well in simple scenarios with reasonably low measurement error variance, while the Bayes-MCMC method is more robust to changes in validation sample size and measurement error variance.

The average widths, across the one hundred runs, of the 95% confidence (SIMEX) and credible (Bayes-MCMC) intervals of τ̂ and β̂1 are shown in Table 4.2. The added-variance (bootstrap) confidence bounds are also provided. The Bayes-MCMC CIs for τ̂ are narrower than the SIMEX ones obtained via bootstrapping. Since it is expected that the Bayesian approach creates wider CIs than the SIMEX method for β1, the values we get from the simulation study are reassuring. It is notable that the added-variance SIMEX confidence bounds, although wider than the regular SIMEX CIs, are consistently narrower than the Bayesian ones.

Table 4.2: Average widths of confidence and credible intervals (CI) for τ̂ and β̂1 using the SIMEX and Bayesian analyses, with varying validation sample size (m) and true τ. The averages are based on 100 simulated data sets.

                     Average CI width for τ̂          Average CI width for β̂1
                  SIMEX (boot)   Bayes          SIMEX    SIMEX (boot)   Bayes
m = 10, τ = 0.5      0.3802      0.2922         0.4010      0.5322      0.7742
m = 10, τ = 1.5      1.1406      0.2362         0.2823      1.0734      0.4189
m = 50, τ = 0.5      0.1846      0.1755         0.3978      0.4735      0.4921
m = 50, τ = 1.5      0.5537      0.2368         0.2649      0.6669      0.3143

Another way to compare the SIMEX and the Bayes-MCMC approaches is to count how many times the confidence intervals and the credible intervals of τ̂ and β̂1 contain the true values. We use the 95% CIs mentioned above for this measure of coverage. Table 4.3 gives these counts out of 100 runs for the SIMEX, the added-variance (bootstrap) SIMEX, and the Bayes-MCMC corrections. The SIMEX (bootstrap) confidence bands included the true τ 84 times when the validation sample size was 10 and 94 times when the validation sample size increased to 50, regardless of the true τ. When the validation sample size is large, the SIMEX and the Bayesian approaches work about equally well in estimating τ, while the Bayesian approach seems to be the better choice when the validation sample size is small.
However, when it comes to the coverage of the true β1 by the CIs, it becomes evident that the Bayes-MCMC method performs much better, especially when the measurement error variance is large. If the measurement error variance is small, it may be more economical to use SIMEX, and perhaps account for more variance by bootstrapping, rather than the time- and cost-consuming Bayesian approach.

Table 4.3: Confidence and credible interval (CI) coverage of the true values of τ and β1, with varying validation sample size (m) and true τ. The values denote how many times, out of the 100 runs, the CIs contained the true values. We use the SIMEX and Bayesian methods, as well as the SIMEX-bootstrap method.

                     τ̂ CI coverage (of 100)         β̂1 CI coverage (of 100)
                  SIMEX (boot)   Bayes          SIMEX    SIMEX (boot)   Bayes
m = 10, τ = 0.5        84          97             83          96          96
m = 10, τ = 1.5        84          99              0           1          97
m = 50, τ = 0.5        94          93             89          96          90
m = 50, τ = 1.5        94          95              0           0          95

From this study, it seems evident that the Bayes-MCMC method generally produces more accurate results than the SIMEX approach. When the measurement error variance is large, SIMEX does a poor job of correcting for the mismeasurement. Bayesian approaches are more expensive than conventional methods, due to computational difficulties as well as the scarcity of pre-packaged functions, and may not be worth using if the measurement error variance is small and the study set-up is simple.

Chapter 5
Exploring the Bayes-MCMC Correction

It is well known that the more one knows about the size of the measurement error, the better the results of the techniques mentioned in this thesis. The estimate of the measurement error can stem from a validation sample, another instrument, or subject area knowledge. But how correctly do we have to estimate this measurement error in order to improve our analysis? Moreover, how certain do we have to be that this estimate is correct? In a Bayesian framework the first question refers to the location of a chosen prior, and the second to the prior's width. It seems intuitive that there is a trade-off on both scales. If our estimate is correct, a very narrow prior should be best suited. What if our estimate is wrong and we choose a narrow prior distribution, or our estimate is correct but we have little faith in it and assign it a flat prior? Is there a threshold where we end up with better results by not correcting for measurement error at all? We investigate these points further using the simulation example set-up from Chapter 4, but now assume that we have no validation data and thus n = 250 incomplete observations.

The question about how much one needs to know about the measurement error in a given problem could be approached from many directions. We choose to use five different inverse gamma prior distributions, with shape parameter α and scale β, for τ². All these priors have different widths but the same mode, β/(α + 1) = 0.25, and are shown in Figure 5.1 below.

Figure 5.1: Inverse Gamma IG(α, β) density plots with mode = 0.25, where α is the shape and β is the scale of the distribution: (α, β) = (50, 12.75), (20, 5.25), (10, 2.75), (5, 1.5), and (3, 1).
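The five prior densities and the percentiles used below can be reproduced with base R alone, using the fact that X follows an IG(α, β) distribution exactly when 1/X follows a Gamma(α, rate = β) distribution. This is our own sketch, with hypothetical helper names.

```r
# Inverse Gamma density and quantiles via the Gamma distribution:
# X ~ IG(shape, scale)  <=>  1/X ~ Gamma(shape, rate = scale)
dinvgamma <- function(x, shape, scale) dgamma(1/x, shape, rate = scale) / x^2
qinvgamma <- function(p, shape, scale) 1 / qgamma(1 - p, shape, rate = scale)

pars <- rbind(c(50, 12.75), c(20, 5.25), c(10, 2.75), c(5, 1.5), c(3, 1))
grid <- seq(0.01, 2, length.out = 500)

matplot(grid, sapply(1:5, function(i) dinvgamma(grid, pars[i, 1], pars[i, 2])),
        type = "l", xlab = "tau^2", ylab = "density")   # all five modes sit at 0.25

# Percentiles at which the true tau^2 is placed in Table 5.1, e.g. for IG(50, 12.75)
round(qinvgamma(c(0.05, 0.10, 0.25, 0.50, 0.75, 0.90, 0.95), 50, 12.75), 3)
```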
We place the realistically unknown true value of τ² at several different percentiles of these prior distributions. The idea is to investigate what effect arises from placing these priors on τ² when the true value of the measurement error variance is located towards the centre or either tail of each distribution. We examine what happens when we estimate τ² to be around 0.25, at the prior's mode, but the true value is at a particular percentile of the distribution. The width of the prior indicates how certain we are about the estimate. For instance, if we estimate the measurement error variance to be around 0.25 and are certain, due to prior knowledge, that it is larger than 0.15 but no larger than 0.45, say, we would choose the IG(50, 12.75) prior out of the five distributions mentioned. This example is illustrated in Figure 5.2.

Figure 5.2: Inverse Gamma IG(50, 12.75) density plot with mode = 0.25, and a rough approximation (x = 0.15 to x = 0.45) of where its density is appreciably greater than zero.

Table 5.1 below shows the mean squared errors for the β1 parameter, calculated by performing the analysis with different inverse gamma priors on τ², with the true τ² value located at several different percentiles of each prior distribution. For each prior, the mean squared errors are calculated for the naive analysis, where no measurement error correction is performed, and for the Bayesian measurement error correction described above. It follows that when the naive mean squared error is larger than the corrected one, the measurement error should be taken into account. However, if the naive mean squared error is smaller than the corrected one, it would be counter-productive to try to correct for measurement error, as we get a similar or better result by performing the cheaper and far simpler naive analysis. To see this relationship better, the ratio naive MSE / corrected MSE is given for each case. When this ratio is greater than one, a correction for measurement error is needed. Of course, this is not a matter of absolute certainty, as different problems will have different thresholds, and there is always some "gray area" around the threshold.

Table 5.1: For the five prior distributions below, the mode is 0.25. There are eight cases for each prior: the true τ² is at the 5th, 10th, 25th, 50th, 75th, 90th, or 95th percentile, or at the mode of the distribution. Mean squared errors of the β1 estimates are based on 50 runs. For each prior, the naive MSE estimates (ignoring measurement error), the corrected MSE estimates, and their ratios are given.

Prior                       5th     10th    25th    50th    75th    90th    95th    mode
IG(50, 12.75)  naive       0.0338  0.0362  0.0361  0.0416  0.0447  0.0495  0.0521  0.0401
               corrected   0.0333  0.0285  0.0228  0.0201  0.0160  0.0173  0.0181  0.0204
               ratio       1.0163  1.2728  1.5870  2.0690  2.7917  2.8572  2.8787  1.9697
IG(20, 5.25)   naive       0.0270  0.0294  0.0335  0.0395  0.0453  0.0529  0.0598  0.0379
               corrected   0.0741  0.0542  0.0480  0.0260  0.0205  0.0168  0.0207  0.0319
               ratio       0.3638  0.5430  0.6966  1.5215  2.2106  3.1408  2.8914  1.1886
IG(10, 2.75)   naive       0.0257  0.0280  0.0363  0.0446  0.0540  0.0664  0.0760  0.0378
               corrected   0.1397  0.1197  0.0803  0.0456  0.0243  0.0184  0.0246  0.0633
               ratio       0.1838  0.2341  0.4521  0.9779  2.2194  3.6096  3.0899  0.5965
IG(5, 1.5)     naive       0.0246  0.0296  0.0375  0.0512  0.0697  0.0907  0.1056  0.0390
               corrected   0.2074  0.1783  0.1302  0.0630  0.0324  0.0269  0.0346  0.1097
               ratio       0.1186  0.1658  0.2884  0.8117  2.1494  3.3761  3.3761  0.3551
IG(3, 1)       naive       0.0240  0.0284  0.0399  0.0857  0.0857  0.1147  0.1386  0.0367
               corrected   0.2556  0.2384  0.1641  0.0859  0.0393  0.0303  0.0425  0.1739
               ratio       0.0938  0.1191  0.2430  0.9983  2.1786  3.7915  3.2613  0.2112

Figure 5.3 shows the ratios from Table 5.1 graphically. The true τ² values and the ratios are represented on the horizontal and vertical axes, respectively. Each line illustrates the behaviour of the ratios for one of the five prior distributions across the different scenarios. When the line of the chosen prior distribution in the figure is well above one, a measurement error correction is needed. However, if it is below one, it may be useless to perform the measurement error correction because of the lack of prior knowledge. We note that this "rule" is not black and white. When the ratio is close to one, one may want to investigate measurement error correction further, taking into account subject area knowledge and the cost and value of a potential correction. Priors much wider than the ones shown lead to greater biases in the parameter estimation than the naive analysis.

Figure 5.3: The ratios of the naive over the corrected mean squared errors from Table 5.1, shown graphically for the five prior distributions.

Chapter 6
Framingham Study Example

To illustrate the use of SIMEX and the Bayes-MCMC methods in a real-world application, we use a subset of the Framingham Heart Study data. This still ongoing, large cohort study was started in 1948 under the direction of the National Heart, Lung, and Blood Institute in the United States with the objective of identifying some of the characteristics that contribute to cardiovascular disease. We use only complete data on male adults aged between 31 and 65 at the first exam, which yields a subset of n = 1615 subjects (Carroll, Ruppert, Stefanski and Crainiceanu, 2006). The study consists of a series of medical exams, about two years apart, at which a number of variables were recorded. The response is the indicator variable Y, taking the value 1 if the individual developed coronary heart disease within eight years of the third exam, and 0 otherwise. It has become well known, primarily through this study, that high blood pressure is one of the leading causes of cardiovascular disease. We assume that the systolic blood pressure (SBP) measurements may be mismeasured and will try to account for the bias caused by this. We adopt a transformation of SBP introduced by Carroll, Ruppert, Stefanski and Crainiceanu (2006), setting X = log(SBP − 50).
We adopt a transformation of SBP introduced by Carroll, Ruppert, Stefanski and Crainiceanu (2006), setting X  log(SBP  —  50).  We do not have any validation data available, but SBP is measured at the  43  Chapter 6. Framingham Study Example  second and third exams. We treat these measurements as replicates for our practical purposes, even though technically they are not. Therefore, we use the mean, for simplicity denoted by X, for each individual i. The nondif ferential measurement error assumption seems plausible because we would like to be able to measure long-term systolic blood pressure (Xi), but what we observe in reality is blood pressure on a single day (X). It is possible that this actual measurement indicates little information about the longterm systolic blood pressure (Carroll, Ruppert, Stefanski and Crainiceanu, 2006). The model is X  where E(U)  =  =  0 and Var(U)  =  . We 2 r  also have the correctly recorded age, Z, at the second exam for each subject.  Since we are dealing with a binary response, we choose to use the logit link on Y to make sure its domain covers the real numbers.  Thus the  response model is  logit(P(Y  =  lIX, Z))  =  log  (1  p(Yxz))  =  +  +  The measurement and exposure models are defined as in the simulation study above.  Under the nondifferentiality assumption, the measurement  model is XX,Z,Y  N(X,r ) 2 .  The exposure model is  XZ  0 +c N(c Z, 1  2)  44  Chapter 6. Framingham Study Example We will use this set-up to conduct a SIMEX and a Bayes-MCMC analysis.  6.1  SIMEX Analysis  As before with the simulated dataset, we use the simex package in R and apply the quadratic fitting method because of its numerical stability. The replicate measurements allow us to estimate the components of variance estimator V as follows:  —  =  =  0.01278,  i=1 j=1  where X  is the mean of the replicates (Carroll, Ruppert, Stefanski and  Crainiceanu, 2006). We thus estimate the measurement error variance /2 = 0.00639 from the data as well as Var(X*)  =  =  0.04543. We use the gim  function in R for the logistic regression model. The results from the SIMEX analysis are shown in section 6.2 and the extrapolation plot appears in Fig ure 6.1.  45  Chapter 6. Framingham Study Example  C)  Q  CD 1  CD  U)  0.0  0.5  1.0  I  I  I  1.5  2.0  2.5  3.0  (i÷)  Figure 6.1: The extrapolation performed by the simex function is shown for  the mismeasured systolic blood pressure in the Framingham data.  46  Chapter 6. Framingham Study Example  6.2  Bayes-MCMC Analysis  Unlike in the SIMEX analysis, we can use the replicate measurements to  2 in the Bayesian framework. As earlier incorporate the uncertainty about r in the simulation example, we assign improper locally uniform priors to the as and 3s such that f(a) ,\2  1 and  fC8)  1, and IG(O.5,O.5) priors to  . We first perform the analysis by coding the complete MCMC 2 and r  algorithm in R, and secondly using W1nBUGS software as part of the R program.  6.2.1  Analysis Using R  The posterior distribution is  r *,y,z) , 2 f(x,a,, x  x flf(xx,r ) 2  f(iiIx z, j9) x  TI f(xIz, a, f(a,  43  2  , r ) 2  which expands to  47  Chapter 6. Framingham Study Example  f(xa/3A2T2Ix*yz)(±)flexp(((xi —x) +(4 _x)2)) x 2  exp (y(i o+/ 3 lXj + /32Z)) 3 iJ 1 + exp (/3o + /3x + / 2Zj) 3  / 1 jj  (—1 (x llexPi\__  —  <  —  2 1Zj)  2 A  1 1 N° /—0.5 51 exP_)x i) /—0.5 / 1 (O.5+1) expj— i—I \TJ  We cannot obtain full conditional distributions for x and 3, and thus re quire the use of the random-walk MH algorithm to update /3. 
6.2 Bayes-MCMC Analysis

Unlike in the SIMEX analysis, we can use the replicate measurements to incorporate the uncertainty about τ² in the Bayesian framework. As in the simulation example, we assign improper, locally uniform priors to the α's and β's, such that $f(\alpha) \propto 1$ and $f(\beta) \propto 1$, and IG(0.5, 0.5) priors to $\lambda^2$ and $\tau^2$. We first perform the analysis by coding the complete MCMC algorithm in R, and secondly by using the WinBUGS software as part of the R program.

6.2.1 Analysis Using R

The posterior distribution is
\[
f(x, \alpha, \beta, \lambda^2, \tau^2 \mid x^*, y, z) \propto
\prod_{i=1}^{n} f(x^*_i \mid x_i, \tau^2)
\prod_{i=1}^{n} f(y_i \mid x_i, z_i, \beta)
\prod_{i=1}^{n} f(x_i \mid z_i, \alpha, \lambda^2)\,
f(\tau^2)\, f(\lambda^2),
\]
which expands to
\[
\begin{aligned}
f(x, \alpha, \beta, \lambda^2, \tau^2 \mid x^*, y, z) \propto{}&
\left(\tfrac{1}{\tau^2}\right)^{n} \exp\!\left(-\tfrac{\sum_{i=1}^{n}\big((x^*_{i1} - x_i)^2 + (x^*_{i2} - x_i)^2\big)}{2\tau^2}\right)
\prod_{i=1}^{n} \frac{\exp\!\big(y_i(\beta_0 + \beta_1 x_i + \beta_2 z_i)\big)}{1 + \exp(\beta_0 + \beta_1 x_i + \beta_2 z_i)} \times \\
& \left(\tfrac{1}{\lambda^2}\right)^{n/2} \exp\!\left(-\tfrac{\sum_{i=1}^{n}(x_i - \alpha_0 - \alpha_1 z_i)^2}{2\lambda^2}\right)
\left(\tfrac{1}{\tau^2}\right)^{0.5+1} \exp\!\left(-\tfrac{0.5}{\tau^2}\right)
\left(\tfrac{1}{\lambda^2}\right)^{0.5+1} \exp\!\left(-\tfrac{0.5}{\lambda^2}\right).
\end{aligned}
\]
We cannot obtain closed-form full conditional distributions for x and β, and thus require the random walk Metropolis algorithm to update β. We use the algorithm for logistic regression given in Section 3.3.2 with scale s = 0.5. We use the glm function in R, such that model <- summary(glm(y ~ x + z, family = binomial)). Then model$coef provides the maximum likelihood estimates $\hat\beta$, while model$cov.unscaled gives $\hat\Sigma$. The other parameters are updated via the Gibbs sampler, as in the simulation study in Chapter 4. Figure 6.2 shows that there are no apparent mixing problems using 300,000 iterations after 5,000 burn-in iterations. The results are presented and discussed in the next section.

Figure 6.2: Traceplots and posterior density plots of the three β parameters from 300,000 iterations (after 5,000 burn-in iterations) of the random walk Metropolis sampler coded in R.

6.2.2 Analysis Using WinBUGS

In this second analysis we use WinBUGS (Bayesian inference Using Gibbs Sampling for Windows) for the execution of the MCMC steps. WinBUGS is a computational tool for MCMC that eliminates much of the hard work involved in coding MCMC algorithms (Haneuse, 2008). This way, we need not worry about the random walk Metropolis-Hastings algorithm, and thus need not specify a tuning parameter and the like. Full conditional distributions for the Gibbs sampler updates are also not required. We only provide the likelihood, the priors, and initial values; if the user does not provide initial values, WinBUGS will assign them. Although WinBUGS can be used by itself, we set up the problem in R and run WinBUGS using the R2WinBUGS package. The bugs function in this package allows us to call WinBUGS from R and then carry out any further analyses of the posterior in R. Traceplots and posterior densities are shown in Figure 6.3, and it is evident that there is no cause for concern.

Figure 6.3: Traceplots and posterior density plots of the three β parameters from 300,000 iterations (after 5,000 burn-in iterations) of the WinBUGS output.
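A minimal template for how such a call can be set up with R2WinBUGS is sketched below. The model code, file name, and data list are our own illustrative assumptions rather than the thesis' actual script, and WinBUGS parameterizes normal distributions by precision, so very flat proper normal priors stand in for the improper flat priors used above. Running it requires WinBUGS to be installed and a data list to be prepared.

```r
library(R2WinBUGS)

# BUGS model written to a file; dnorm() takes a precision, not a variance.
model_txt <- "
model {
  for (i in 1:n) {
    y[i] ~ dbern(p[i])
    logit(p[i]) <- b0 + b1 * x[i] + b2 * z[i]
    xstar1[i] ~ dnorm(x[i], prec.tau)      # replicate measurements from exams 2 and 3
    xstar2[i] ~ dnorm(x[i], prec.tau)
    x[i] ~ dnorm(mu.x[i], prec.lam)
    mu.x[i] <- a0 + a1 * z[i]
  }
  b0 ~ dnorm(0, 1.0E-6); b1 ~ dnorm(0, 1.0E-6); b2 ~ dnorm(0, 1.0E-6)
  a0 ~ dnorm(0, 1.0E-6); a1 ~ dnorm(0, 1.0E-6)
  prec.tau ~ dgamma(0.5, 0.5)              # equivalent to an IG(0.5, 0.5) prior on tau^2
  prec.lam ~ dgamma(0.5, 0.5)
  tau <- 1 / sqrt(prec.tau)
}"
writeLines(model_txt, "me_model.txt")

# 'dat' is a hypothetical named list holding n, y, z, xstar1, and xstar2.
fit <- bugs(data = dat, inits = NULL,
            parameters.to.save = c("b0", "b1", "b2", "tau"),
            model.file = "me_model.txt",
            n.chains = 1, n.iter = 305000, n.burnin = 5000)
print(fit)
```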
Traceplots and posterior densities are shown in Figure 6.3, and it is evident that there is no need for concern.

Figure 6.3: Traceplots and posterior density plots of 300,000 iterations, after 5,000 burn-in iterations, of the WinBUGS output for the three β parameters.

6.3 Results

The results for the β estimates from the SIMEX and Bayesian analyses are recorded in Table 6.1 below. (Note that the mean of the 300,000 realizations is shown in the Bayesian results.)

Table 6.1: β parameter estimates with 95% confidence, or credible, intervals using the naive, SIMEX, and Bayesian analyses.

Parameter   Method       Estimate    95% CI                 CI width
β₀          naive        -12.4066    (-15.8394, -8.9737)    6.8657
            SIMEX        -13.1492    (-16.9464, -9.3520)    7.5944
            Bayes (R)    -13.7247    (-17.8030, -9.8861)    7.9169
            WinBUGS      -13.6909    (-17.7200, -9.7190)    8.001
β₁          naive          1.7080    (0.9079, 2.5080)       1.6001
            SIMEX          1.8926    (0.9961, 2.7890)       1.7929
            Bayes (R)      2.0295    (1.1083, 2.9485)       1.8402
            WinBUGS        2.0184    (1.0930, 2.9510)       1.8580
β₂          naive          0.0507    (0.0282, 0.0733)       0.0451
            SIMEX          0.0493    (0.0264, 0.0721)       0.0457
            Bayes (R)      0.0490    (0.0263, 0.0720)       0.0457
            WinBUGS        0.0488    (0.0262, 0.0720)       0.0458

If we choose the Bayesian analysis using R, the resulting model for explaining the probability of whether or not an individual has developed coronary heart disease within eight years of the third exam is:

\[
\log\left( \frac{P(Y = 1 \mid X, Z)}{1 - P(Y = 1 \mid X, Z)} \right) = -13.72 + 2.03X + 0.05Z,
\]

\[
P(Y = 1 \mid X, Z) = \frac{\exp(-13.72 + 2.03X + 0.05Z)}{1 + \exp(-13.72 + 2.03X + 0.05Z)}.
\]

In light of the simulation study in Chapter 4, the results for this subset of the Framingham data behave as we would expect. The Bayesian analyses produce wider credible intervals than the SIMEX or the naive analyses. Recall that we estimated a value for τ² and then used it as if it were known in the SIMEX analysis, whereas we treated it as an unknown parameter in both Bayesian analyses, thus accounting for more variability. The naive analysis produces the narrowest confidence intervals, and is thus overconfident. It is comforting that the results of both Bayesian analyses are very close. SIMEX seems to give improved results over the naive analysis, with its parameter estimates moving towards the corresponding Bayesian estimates.
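As a purely illustrative reading of this fitted model (not part of the thesis analysis, and treating a single measured SBP value as if it were the long-term value X), the estimated probability for a 50-year-old with a systolic blood pressure of 140 mmHg can be computed in R as follows:

# Illustrative only: plug X = log(SBP - 50) and age Z into the fitted model.
sbp <- 140
age <- 50
x <- log(sbp - 50)                        # the transformation X = log(SBP - 50)
plogis(-13.72 + 2.03 * x + 0.05 * age)    # roughly 0.11

That is, such an individual would have an estimated probability of roughly 0.11 of developing coronary heart disease within eight years under this model.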
Chapter 7

Conclusion and Future Work

Measurement error, when ignored, can have serious effects on statistical analyses, such as biased parameter estimation, loss of power, and masking of the features of the data. Nevertheless, the possible presence of measurement error often fails to be investigated outside of academic research. Many methods have been proposed to deal with measurement error, and they can be broadly grouped into functional and structural models (Carroll, Ruppert, Stefanski and Crainiceanu, 2006). Simple "black-box" functional methods, such as simulation extrapolation (SIMEX) and regression calibration, have been introduced to make the solving of such problems more accessible. However, such approaches are rarely appropriate when the problem is complicated or the measurement error is large. We show that SIMEX, a simulation-based method for which the measurement error variance has to be known or well estimated, fails when the measurement error variance is large. Bayesian measurement error adjustment, a structural method, allows for more accurate bias correction more generally, while integrating more variability. The risk with structural methods is that of exposure model misspecification. Ignoring any philosophical issues associated with Bayesian methods for the sake of this discussion, the only other drawback concerning the Bayesian framework is that the posterior distributions are often very complicated and numerical integration techniques become useless. Computationally intensive sampling methods such as Markov chain Monte Carlo algorithms often take a long time to code as well as implement (Robert and Casella, 2004).

In Chapter 4 we show that when the measurement error variance and the validation sample are relatively small, SIMEX performs better than the Bayes-MCMC method in estimating the coefficient of X*. If either the measurement error variance, the validation sample, or both are increased, the Bayes-MCMC correction outperforms SIMEX. The Bayesian method is robust to changes in validation sample size or measurement error variance. SIMEX incorporates less variance in its results than the Bayesian approach, and thus produces narrower confidence intervals even when forced to take into account more variance by using the bootstrap method. The Bayesian credible intervals contain the true values more often than the SIMEX confidence intervals. However, when the measurement error variance is small, it may be more economical to use SIMEX and account for more variance by bootstrapping.

To implement Bayesian methods, a thorough understanding of statistics as well as statistical software is needed, and thus non-statisticians often shy away from using Bayesian methods due to time or financial constraints. WinBUGS is a useful tool for the computational part of a Bayesian analysis. Knowledge of the Bayesian framework is still needed to use it; however, WinBUGS takes care of many complicated details. As shown in the Framingham Heart Study example, the results it produces are very similar to the "custom" results. This example also illustrates another source of information when validation data are not available: replication data. The replicates are used to estimate and update the measurement error variance. Again, the Bayesian measurement error correction method produces wider credible intervals than the SIMEX analysis. However, the SIMEX analysis does seem to improve the results over the naive method as well.

The Bayesian adjustment presented in this thesis can be applied and extended to a variety of problems in many different research areas. As an obvious extension, one could consider adding more correctly measured covariates, which would be fairly straightforward when using WinBUGS; the current model could also be extended quite simply without it.

One crucial part of Bayesian statistics is the choice of priors for the unknown parameters. In this thesis, we investigate the effects of different choices of priors for the measurement error variance, τ². As shown in Chapter 5, it may so happen that correcting for measurement error is of no value due to lack of knowledge (or a wrong assumption) about the measurement error variance. We investigate when measurement error correction is worth the trouble. Our findings indicate that the accuracy of the measurement error estimation is less important than the width of the prior we assign to it. Our investigation concerning these issues only considers mean squared errors, so an immediate extension would be to consider the width of credible intervals as well. Another relevant extension would be to investigate the Bayes-MCMC correction further. It would be useful to develop "rules" for when to correct for measurement error in the case of more general models. If validation or replication data are available, incorporating such information into the decision of whether or not it is worth correcting for measurement error would be very useful for subject-area researchers.

Bibliography

Carroll, R.J., Ruppert, D., Stefanski, L.A., and Crainiceanu, C.M. (2006).
Measurement Error in Nonlinear Models: A Modern Perspective, Vol. 105 of Monographs on Statistics and Applied Probability, second edn, Chapman & Hall/CRC, Boca Raton.

Casella, G. and Berger, R.L. (2002). Statistical Inference, Vol. 6 of Duxbury Advanced Series, second edn, Duxbury.

Casella, G. and George, E.I. (1992). Explaining the Gibbs sampler, The American Statistician 46: 167-174.

Cook, J.R. and Stefanski, L.A. (1994). Simulation-extrapolation estimation in parametric measurement error models, Journal of the American Statistical Association 89: 1314-1328.

Doucet, A. (2007). Statistical computing and Monte Carlo methods course notes, The University of British Columbia.

Grimmett, G.R. and Stirzaker, D.A. (2001). Probability and Random Processes, third edn, Oxford University Press Inc., New York.

Gustafson, P. (2004). Measurement Error and Misclassification in Statistics and Epidemiology: Impacts and Bayesian Adjustments, Vol. 13 of Interdisciplinary Statistics, Chapman & Hall/CRC, Boca Raton.

Haneuse, S. (2008). An introduction to WinBUGS, Summer school on Bayesian Modeling and Computation at the University of British Columbia, PIMS collaborative research group.

Hastings, W.K. (1970). Monte Carlo sampling methods using Markov chains and their applications, Biometrika 57: 97-109.

Hossain, S. (2007). Dealing with Measurement Error in Covariates with Special Reference to Logistic Regression Model: A Flexible Parametric Approach, Doctor of Philosophy Thesis, The University of British Columbia.

Lederer, W. and Küchenhoff, H. (2008). The simex Package, The R Project for Statistical Computing.

Marin, J.-M. and Robert, C.P. (2007). Bayesian Core: A Practical Approach to Computational Bayesian Statistics, Springer Science+Business Media, LLC, New York.

Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A.H., and Teller, E. (1953). Equation of state calculations by fast computing machines, Journal of Chemical Physics 21: 1087-1092.

Nakamura, T. (1990). Corrected score functions for errors-in-variables models: methodology and application to generalized linear models, Biometrika 77: 127-137.

Pierce, D.A. and Kellerer, A.M. (2004). Adjusting for covariate errors with nonparametric assessment of the true covariate distribution, Biometrika 91: 863-876.

Robert, C.P. and Casella, G. (2004). Monte Carlo Statistical Methods, second edn, Springer Science+Business Media, LLC, New York.

Stefanski, L.A. (1987). Unbiased estimation of a nonlinear function of a normal mean with application to measurement error models, Communications in Statistics, Series A 18: 4335-4358.

Stefanski, L.A. and Carroll, R.J. (1987). Conditional scores and optimal scores in generalized linear measurement error models, Biometrika 74: 703-716.

