Simultaneous Inference for Generalized Linear Mixed Models with Informative Dropout and Missing Covariates

by

Kunling Wu

M.Sc., Beijing Polytechnic University, China, 1999

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in THE FACULTY OF GRADUATE STUDIES (Department of Statistics)

We accept this thesis as conforming to the required standard

The University of British Columbia
December 2003
© Kunling Wu, 2003

Library Authorization

In presenting this thesis in partial fulfillment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Kunling Wu (Name of Author), 19/12/2003 (dd/mm/yyyy)

Title of Thesis: Simultaneous Inference for Generalized Linear Mixed Models with Informative Dropout and Missing Covariates
Degree: Master of Science
Department of Statistics, The University of British Columbia, Vancouver, BC, Canada
Year: 2003

Abstract

Generalized linear mixed effects models (GLMMs) are popular in many longitudinal studies. In these studies, however, missing data problems arise frequently, which makes statistical analyses more complicated. In this thesis, we propose an exact method and an approximate method for GLMMs with informative dropout and missing covariates, and provide a unified approach for simultaneous inference. Both methods are implemented by Monte Carlo EM algorithms. The approximate method is based on a Taylor series expansion, and it avoids sampling the random effects in the E-step. Thus, the approximate method may be computationally more efficient when the dimension of the random effects is not small. We also briefly discuss other methods for accelerating the EM algorithms. To illustrate the proposed methods, we analyze two real datasets, an AIDS dataset (ACTG 315) and a dataset from a parent bereavement project. A simulation study is conducted to evaluate the performance of the proposed methods under various situations.

Contents

Abstract
Contents
List of Tables
List of Figures
Acknowledgements
Dedication

1 Introduction
1.1 Generalized Linear Mixed Effect Models
1.2 Missing Data Problems
1.3 Motivating Examples
1.3.1 Example 1
1.3.2 Example 2
1.4 Objectives and Outline

2 Generalized Linear Mixed Models and Missing Data
2.1 Introduction
2.2 Generalized Linear Models
2.2.1 Model Specification
2.2.2 Maximum Likelihood Estimation in GLMs
2.2.3 Quasi-Likelihood Approach
2.3 Generalized Linear Mixed Models
2.3.1 Generalized Linear Mixed Models
2.3.2 Maximum Likelihood Estimation
2.3.3 Literature for Generalized Linear Mixed Models
2.4 Literature for Missing Data
2.4.1 Literature of Informative Dropout
2.4.2 Literature of Missing Covariates

3 Exact Inference for GLMMs with Informative Dropout and Missing Covariates
3.1 Introduction
3.2 Models and Likelihood
3.3 Monte Carlo EM Algorithm
3.3.1 E-step
3.3.2 M-step
3.3.3 Variance Estimation
3.4 Sampling Methods
3.4.1 Gibbs Sampler
3.4.2 Adaptive Rejection Algorithm
3.4.3 Rejection Sampling
3.4.4 Sampling Method for Binary Variables
3.5 PX-EM Algorithm
3.6 Convergence

4 Approximate Inference for GLMMs with Informative Dropout and Missing Covariates
4.1 Introduction
4.2 Approximate Inference without Missing Values
4.3 Approximate Inference with Missing Values
4.4 Strategies for Sampling the Missing Values
4.5 PX-EM

5 Covariate Models and Dropout Models
5.1 Introduction
5.2 Dropout Models
5.3 Covariate Models
5.4 Sensitivity Analyses

6 Real Data Examples
6.1 Introduction
6.2 Example 1
6.2.1 Data Description
6.2.2 Models
6.2.3 Analysis and Results
6.2.4 Sensitivity Analysis
6.2.5 Conclusion
6.3 Example 2
6.3.1 Data Description
6.3.2 Models
6.3.3 Analysis and Results
6.3.4 Sensitivity Analysis
6.3.5 Conclusion
6.4 Computation Issues

7 Simulation Study
7.1 Introduction
7.2 Description of the Simulation Study
7.2.1 Models
7.2.2 Bias and Mean-Squared Error
7.3 Simulation Results
7.3.1 Comparison of Methods with Varying Missing Rates
7.3.2 Comparison of Methods with Different Variances
7.3.3 Comparison of Methods with Different Sample Sizes
7.3.4 Comparison of Methods with Varying Intra-individual Measurements
7.3.5 Conclusion

8 Conclusion and Discussion

Bibliography

List of Tables

6.1 Summary statistics
6.2 Estimates for the AIDS data
6.3 Sensitivity analysis for covariate models
6.4 Sensitivity analysis for dropout models
6.5 Summary statistics
6.6 Estimates for the Parent Bereavement data
6.7 Sensitivity analysis for covariate models
6.8 Sensitivity analysis for dropout models
7.1 Simulation results with varying missing rates
7.2 Simulation results with varying variances
7.3 Simulation results with varying sample sizes
7.4 Simulation results with varying intra-individual measurements

List of Figures

6.1 Viral loads ($\log_{10}$ scale) for six randomly selected patients. The open dots are the observed values and the dashed line indicates the detection limit of viral loads. The viral loads below the detection limit are substituted with $\log_{10}(50)$.
6.2 GSI scores for six randomly selected parents. The open dots are the observed values and the GSI scores at time 0 are the baseline values.
6.3 (a) Time series and (b) autocorrelation function plots for CH50.
6.4 (a) Time series and (b) autocorrelation function plots for $b_{46}$ associated with patient 46.
7.1 (a) Time series and (b) autocorrelation function plots for $z_{18}$.
7.2 (a) Time series and (b) autocorrelation function plots for $b_{18}$ associated with individual 18.

Acknowledgements

First and foremost, I would like to thank my supervisor, Dr. Lang Wu, for his excellent guidance and immense help during my study at UBC. Without his support, expertise and patience, this thesis would not have been completed. Also, I would like to thank my second reader, Dr. Paul Gustafson, for his invaluable comments and suggestions on this thesis. Furthermore, I thank Dr. Nancy Heckman and Dr.
Bertrand Clarke for their invaluable advice on my consulting projects, which will benefit me very much in the future. I thank all the faculty and staff in the Department of Statistics at UBC for providing such a nice academic environment. I also thank all the graduate students in the Department of Statistics for making my study at UBC so enjoyable. Most importantly, I would like to thank my parents for loving me and believing in me. My biggest thanks go to my beloved husband, Weiliang Qiu, for his love and his constant support and encouragement, which push me to be the best at everything I do.

Kunling Wu
The University of British Columbia
December 2003

To my parents and husband.

Chapter 1

Introduction

1.1 Generalized Linear Mixed Effect Models

Longitudinal data, or repeated measurement data, occur frequently in many applications where repeated measurements are obtained for each individual. Statistical analysis of longitudinal data is reviewed in Diggle, Liang and Zeger (1994). A key advantage of a longitudinal study over a cross-sectional study is that it can separate variation over time within an individual from differences among individuals; a cross-sectional study cannot do this because it records only one measurement per individual. The analysis of cross-sectional data may therefore confound time effects and may give misleading results. For longitudinal data, it is important to recognize two sources of variation: intra-individual variation, produced by different measurements within a given individual, and inter-individual variation, among different individuals.

Generalized linear models (GLMs), such as logistic regression models, extend normal linear models to allow non-normal error distributions in the natural exponential family, such as Poisson and binomial distributions. GLMs can handle not only continuous variables but also discrete variables, as long as the distribution of the variable belongs to the natural exponential family. Therefore, GLMs provide a unified approach for continuous and discrete responses and have wide applicability in practice. For example, in Agresti (1990), a sample of male residents of Framingham, Massachusetts, was classified according to their blood pressures. During a follow-up period, whether or not each resident developed coronary heart disease was recorded and viewed as the response, so the response variable is binary. To investigate the relationship between blood pressure and coronary heart disease, we can build a logistic regression model and then make statistical inferences based on this GLM.

Generalized linear mixed models (GLMMs) are useful for longitudinal studies; they extend GLMs by introducing random effects to account for correlation within the repeated measurements for a given individual. Such models can separate the two kinds of variation, borrow information across individuals, and allow both discrete and continuous responses. Therefore, GLMMs are very popular in the analysis of longitudinal data. A GLMM may be written as a hierarchical two-stage model. At the first stage, intra-individual variation is characterized by a GLM. At the second stage, inter-individual variation is represented through individual-specific regression parameters. Covariates are often introduced in the second stage to partially explain systematic variation.
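To make the two-stage structure concrete, the following is a minimal simulation sketch of a random-intercept logistic GLMM; the model, parameter values, and sample sizes are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_i = 50, 6                       # individuals and measurements per individual

# Stage 2 (inter-individual): individual-specific effects vary around zero
D = 1.0                              # variance of the random intercepts
b = rng.normal(0.0, np.sqrt(D), N)   # b_i ~ N(0, D)

# Stage 1 (intra-individual): a GLM conditional on b_i
beta = np.array([-0.5, 1.0])         # fixed effects (intercept, time slope); hypothetical
t = np.tile(np.linspace(0, 1, n_i), (N, 1))
eta = beta[0] + beta[1] * t + b[:, None]   # linear predictor eta_ij
p = 1.0 / (1.0 + np.exp(-eta))             # logit link: mu_ij = g^{-1}(eta_ij)
y = rng.binomial(1, p)                     # y_ij | b_i ~ Bernoulli(mu_ij)

# marginally, measurements within an individual are correlated through b_i
print(y.shape)  # (50, 6)
```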
There are two main approaches to estimating parameters in GLMMs: (i) exact likelihood inference based on numerical integration (Booth and Hobert, 1999), and (ii) approximate inference based on linearization procedures via Taylor series expansion (Breslow and Clayton, 1993; Vonesh et al., 2002). In the exact inference, the marginal likelihood is obtained by integrating the random effects out of the joint distribution of the response and the random effects; maximizing the marginal likelihood gives estimates of the parameters of interest. However, the integration is usually intractable, so one may use Monte Carlo approximations to evaluate it (Wei and Tanner, 1990). Exact likelihood inference works well when the dimension of the random effects is small, but computation may become quite demanding or unstable as the dimension of the random effects increases. In such cases, we may consider approximate inference, which avoids this computational difficulty by integrating out the random effects. The strategy of approximate inference is to iteratively solve LME models based on a second-order Taylor series expansion around the current estimates. If the number of measurements for each individual is large enough, approximate methods may give reasonable parameter estimates; otherwise, approximate maximum likelihood estimates may be inconsistent.

1.2 Missing Data Problems

Missing data are a serious problem in many applications and arise frequently in longitudinal studies. Two kinds of missing data often occur in a longitudinal study: (i) missing covariates, due to different measurement schedules for covariates and response or other problems; and (ii) missing responses, due to dropout or missed visits. For example, individuals may withdraw or die before the end of the study, or may not come to the study center for measurements at scheduled times for various reasons. Missing data problems make statistical analysis in longitudinal studies much more complicated, since standard complete-data methods are not directly applicable.

Commonly used naive methods for missing data include the complete-case method, which deletes all incomplete observations; the mean imputation method, which substitutes missing values with the mean values of the observed data; and the last-value-carried-forward method, which imputes a missing value by the immediately preceding observed value. In the presence of missing data, the missing data mechanism must be taken into account in order to obtain valid statistical inference. Little and Rubin (1987) define three types of missing data mechanisms. Missing data are missing completely at random (MCAR) if the probability of missingness is independent of both observed and unobserved data; for example, patients do not come to the study center for reasons irrelevant to the treatment, such as simply forgetting the appointment. Missing data are missing at random (MAR) if the probability of missingness depends only on observed data, but not on unobserved data; for example, a patient may occasionally fail to visit the clinic because he/she is too old. Missing data are nonignorable or informative (NIM) if the probability of missingness depends on unobserved data; for example, a patient fails to visit the clinic because he/she is too sick. If missing values are MCAR, the complete-case method will give unbiased, but inefficient, estimates.
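The distinction between the three mechanisms can be illustrated with a small hypothetical simulation (all settings below are illustrative assumptions): under NIM, the probability that the current response is missing depends on that unobserved response itself, so the observed values form a biased subsample.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
y_prev = rng.normal(0, 1, n)   # an observed earlier response
y_curr = rng.normal(0, 1, n)   # the current response, possibly missing

def expit(u):
    return 1.0 / (1.0 + np.exp(-u))

# MCAR: missingness ignores all data
r_mcar = rng.binomial(1, 0.2, n)

# MAR: missingness depends only on the *observed* earlier response
r_mar = rng.binomial(1, expit(-1.5 + 1.0 * y_prev))

# NIM: missingness depends on the *unobserved* current response
r_nim = rng.binomial(1, expit(-1.5 + 1.0 * y_curr))

# observed-data means (r == 0 marks observed values)
print(y_curr[r_mcar == 0].mean())  # close to the true mean 0
print(y_curr[r_mar == 0].mean())   # also close to 0 here
print(y_curr[r_nim == 0].mean())   # noticeably below 0: a biased subsample
```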
If the missing data are not MCAR, the naive methods may give biased, even misleading, results because they do not take the missing data mechanism into consideration. MCAR and MAR are together called ignorably missing; we can ignore the missing data mechanism in likelihood inference when missing values are ignorably missing (Little and Rubin, 1987). Little (1992, 1995) gave reviews of missing covariates in regression and of dropout in repeated-measures studies. Ibrahim, Lipsitz, and Chen (1999) proposed a Monte Carlo EM method for estimating parameters in GLMs with nonignorable missing covariates. Wu and Wu (2001) estimated parameters in nonlinear mixed effects models with missing covariates (MAR) by a three-step multiple imputation method. Wu and Carroll (1988) considered linear mixed effects models with informative dropout and assumed missingness depending on the random effects. Ibrahim, Chen and Lipsitz (2001) developed a Monte Carlo EM algorithm for estimating parameters in GLMMs with informative dropout. However, little of the literature considers parameter estimation in GLMMs with informative dropout and missing covariates simultaneously.

1.3 Motivating Examples

1.3.1 Example 1

Our research is motivated by a longitudinal study from the AIDS Clinical Trials Group (ACTG) Protocol 315 (Wu and Ding, 1999). In this study, 46 HIV-infected patients were treated with a potent antiviral drug. Plasma HIV-1 RNA (viral load) was repeatedly quantified on days 2, 7, 10, 14, 21, and 28, and at weeks 8, 12, and 24 after initiation of treatment. After the antiviral treatment, the patients' viral loads decay, and the decay rate may reflect the efficacy of the treatment. The Nucleic Acid Sequence-Based Amplification assay used to measure the viral load has a detection limit. If the viral load drops below the detection limit after treatment, it cannot be measured, which may indicate that the treatment is successful. To investigate the treatment effect, one approach is to define the response as whether or not the viral load is below the detection limit, which is thus a binary variable. In this study, patients dropped out before the end of the study, and the dropout may be informative; thus, the response contains nonignorable missing values. Preliminary studies show that some baseline covariates, such as CD4 cell counts, tumor necrosis factor (measured by TNF levels) and total complement levels (measured by CH50), may partially explain variation in the viral load trajectory. However, some of these covariates are also missing. Our objectives are to model the viral load trajectory and to identify covariates that may partially predict changes in viral load, in the presence of informative dropout and missing covariates.

1.3.2 Example 2

Our second example involves a longitudinal study from a parent bereavement project, which investigates the long-term mental outcomes of parents whose children died by accident, suicide, or homicide. After their children's deaths, the parents usually experience a high level of mental distress. In this study, the mental distress of 239 parents was measured at baseline (i.e., 4 to 6 weeks after the child's death), and then at 4, 12, 24 and 60 months post-death. The Global Severity Index (GSI), which is the most sensitive indicator of mental distress, is used to measure the parents' distress levels. A high GSI score indicates a high level of mental distress.
If the parents' adjustment to their children's deaths goes well, their GSI scores will decrease over time, at least to below their baseline GSI scores. To examine how the parents' mental distress changes over time after their children's deaths, we treat as the response whether or not a parent's GSI score after baseline is lower than his/her baseline value. Several other relevant factors were also obtained at baseline, including the parents' gender, marital status, age, education, annual income, the cause of death, and the age and gender of the deceased child. These baseline factors may be important predictors of the parents' distress and thus are viewed as covariates. Note that some baseline covariates, such as income, contain missing values, and some responses are also missing. Our objectives are to investigate the change in parents' distress levels over time and to determine which covariates affect the parents' mental distress.

1.4 Objectives and Outline

In this thesis, we develop an exact inference method, implemented by a Monte Carlo EM algorithm, to make simultaneous inferences for GLMMs with informative dropout and missing covariates. To avoid computational difficulties when the dimension of the random effects is not small, we propose an approximate inference method, which integrates out the random effects for more efficient computation.

The remainder of this thesis is organized as follows. Chapter 2 introduces GLMs and GLMMs and reviews the literature on informative dropout and missing covariates. Chapter 3 discusses the exact inference method for estimation of GLMMs with informative dropout and missing covariates. The approximate inference method based on linearization is presented in Chapter 4. We discuss dropout models and covariate models in Chapter 5. In Chapter 6, we apply our methods to two real data examples. Chapter 7 presents our simulation study. We conclude the thesis with a discussion in Chapter 8.

Chapter 2

Generalized Linear Mixed Models and Missing Data

2.1 Introduction

Before we present our methods for estimating parameters in GLMMs with informative dropout and missing covariates, we give in this chapter a brief introduction to GLMs, GLMMs, and methods for missing data problems. In Section 2.2, we introduce GLMs and methods of estimation for parameters in GLMs. Section 2.3 describes GLMMs, briefly discusses the two main methods for estimating parameters in GLMMs, and reviews the literature on GLMMs. In Section 2.4, we review the literature on methods for handling informative dropout and missing covariates, respectively.

2.2 Generalized Linear Models

A classical linear model is useful for modeling a continuous response under the assumptions that the response follows a normal distribution and that a linear relationship exists between the mean of the response and the covariates. However, in practice, some non-normal distributions, such as the binomial and Poisson, may be better assumptions for some response variables, such as discrete variables. For example, we may want to study whether developing heart disease is related to blood pressure level. Here, we treat the status of a patient's heart as our response. The response is thus a binary variable taking values 0 or 1, where 0 means that a patient has heart disease and 1 means that a patient does not. Obviously, the assumption of normality is completely unrealistic here. Moreover, frequently the mean of the response cannot be expressed as a linear form of the covariates.
In those situations, we cannot use standard linear models. Generalized linear models (GLMs), which are an extension of classical linear models, can not only deal with variables whose distributions come from the exponential family but also allow nonlinear forms relating the mean of the response to the covariates. Variables in the exponential family include continuous variables, such as normal and exponential, and discrete variables, such as binomial and Poisson. Owing to this ability to handle continuous as well as discrete data, GLMs unify different methodologies and thus have wide applicability in practice.

2.2.1 Model Specification

GLMs are specified by three components: a random component, a systematic component, and a link function.

Let $y = (y_1, y_2, \cdots, y_N)^T$ be a vector of independent and identically distributed (i.i.d.) observations whose distribution belongs to the natural exponential family. Then the density function of each observation $y_i$ can be expressed in the form

$f(y_i; \theta_i, \phi) = \exp\{[y_i\theta_i - \varphi(\theta_i)]/a(\phi) + c(y_i, \phi)\}$,   (2.1)

where $a(\cdot)$, $\varphi(\cdot)$ and $c(\cdot)$ are specific functions, $\theta = (\theta_1, \theta_2, \cdots, \theta_N)^T$ contains the natural parameters, and $\phi$ is called the dispersion parameter. The random component of a GLM is specified by the above density function of the response variable.

The systematic component relates the covariates $x_i$ to the linear predictor $\eta_i$ by the linear form

$\eta_i = x_i^T\beta$,  $i = 1, 2, \cdots, N$,   (2.2)

where $\beta$ is a vector of regression parameters.

The mean $\mu_i = E(y_i)$ is related to the linear predictor through the link component of the GLM,

$\eta_i = x_i^T\beta = g(\mu_i)$,  $i = 1, 2, \cdots, N$,   (2.3)

or

$\mu_i = g^{-1}(\eta_i) = g^{-1}(x_i^T\beta)$,  $i = 1, 2, \cdots, N$,   (2.4)

where $g(\cdot)$ is a monotone and differentiable function called the link function. In the exponential family, if a link function satisfies $g(\mu_i) = \theta_i(\mu_i)$, then the link is called the canonical or natural link. Binomial, Poisson and normal variables all have canonical link functions. The function $g(\mu_i) = \mu_i$ gives the identity link; for example, normal variables have the identity link function.

In summary, GLMs allow for linear as well as nonlinear models under a single framework. Moreover, GLMs make it possible to fit models where the underlying data are normal, Poisson, binomial, etc., by a suitable choice of the link function.

2.2.2 Maximum Likelihood Estimation in GLMs

The principal method of estimation used in GLMs is maximum likelihood. In this section, we briefly describe how to obtain the maximum likelihood estimates of the parameters in GLMs. We assume $\phi$ is known, so that $c(y_i, \phi)$ is a constant with respect to $\theta$; it is retained in the log-likelihood below. For $N$ independent observations, the log-likelihood function is

$l(\theta|y) = \sum_{i=1}^{N}\log f(y_i;\theta_i,\phi) = \sum_{i=1}^{N}\{[y_i\theta_i - \varphi(\theta_i)]/a(\phi) + c(y_i,\phi)\}$.   (2.5)

Some Useful Equations

We now derive some identities used in maximizing the likelihood function. Differentiating (2.5) with respect to $\theta_i$ gives

$\dfrac{\partial l}{\partial \theta_i} = \dfrac{1}{a(\phi)}\left(y_i - \dfrac{\partial\varphi(\theta_i)}{\partial\theta_i}\right)$,   (2.6)

$\dfrac{\partial^2 l}{\partial \theta_i^2} = -\dfrac{1}{a(\phi)}\dfrac{\partial^2\varphi(\theta_i)}{\partial\theta_i^2}$.   (2.7)

The following are two well-known likelihood identities that we use here:

$E\left(\dfrac{\partial l}{\partial \theta_i}\right) = 0$,   (2.8)

$E\left(\dfrac{\partial^2 l}{\partial \theta_i^2}\right) = -E\left[\left(\dfrac{\partial l}{\partial \theta_i}\right)^2\right]$.   (2.9)

Substituting (2.6) and (2.7) into (2.8) and (2.9), respectively, gives

$E(y_i) = \mu_i(\theta_i) = \dfrac{\partial\varphi(\theta_i)}{\partial\theta_i}$,   (2.10)

and

$\mathrm{var}(y_i) = a(\phi)\dfrac{\partial^2\varphi(\theta_i)}{\partial\theta_i^2} = a(\phi)\dfrac{\partial\mu_i(\theta_i)}{\partial\theta_i} = a(\phi)V(\mu_i)$,   (2.11)

where $V(\mu_i) = \partial\mu_i(\theta_i)/\partial\theta_i$ is often called the variance function. Equation (2.11) indicates that the variance of the response depends on its mean.
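As a quick numerical check of (2.10) and (2.11), consider the Poisson case, where $\theta = \log\mu$, $\varphi(\theta) = e^{\theta}$ and $a(\phi) = 1$, so that $E(y) = \varphi'(\theta) = e^{\theta}$ and $\mathrm{var}(y) = \varphi''(\theta) = e^{\theta} = V(\mu)$. A small simulation (illustrative only) confirms this:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.7                 # natural parameter; for Poisson, varphi(theta) = exp(theta)
mu = np.exp(theta)          # (2.10): E(y) = d varphi / d theta = exp(theta)
y = rng.poisson(mu, size=200_000)

print(y.mean(), mu)         # sample mean  ~ exp(theta)
print(y.var(), mu)          # (2.11): var(y) = a(phi) V(mu) = mu, since a(phi) = 1
```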
We differentiate both sides of equation (2.3) with respect to $\beta$ and obtain

$\dfrac{\partial g(\mu_i)}{\partial\mu_i}\dfrac{\partial\mu_i}{\partial\theta_i}\dfrac{\partial\theta_i}{\partial\beta} = x_i$.

Upon rearrangement, using $\partial\mu_i/\partial\theta_i = V(\mu_i)$, this can be written as

$\dfrac{\partial\theta_i}{\partial\beta} = \dfrac{1}{V(\mu_i)\,\partial g(\mu_i)/\partial\mu_i}\,x_i$.   (2.12)

Maximum Likelihood Estimation

To obtain the maximum likelihood estimates (MLEs) of $\beta$, we differentiate (2.5) with respect to $\beta$, and then apply (2.10), (2.11) and (2.12) to get the score function

$S(\beta) = \dfrac{\partial l(\beta|y)}{\partial\beta} = \sum_{i=1}^{N}\dfrac{1}{a(\phi)}\left(y_i - \dfrac{\partial\varphi(\theta_i)}{\partial\theta_i}\right)\dfrac{\partial\theta_i}{\partial\beta} = \sum_{i=1}^{N}\dfrac{y_i - \mu_i}{a(\phi)V(\mu_i)\,\partial g(\mu_i)/\partial\mu_i}\,x_i$.   (2.13)

Let

$W = \mathrm{diag}\{[V(\mu_1)(\partial g(\mu_1)/\partial\mu_1)^2]^{-1}, \cdots, [V(\mu_N)(\partial g(\mu_N)/\partial\mu_N)^2]^{-1}\}$,
$X = (x_1, x_2, \cdots, x_N)$,
$\Delta = \mathrm{diag}\{\partial g(\mu_1)/\partial\mu_1, \cdots, \partial g(\mu_N)/\partial\mu_N\}$.

Then (2.13) can be rewritten as

$S(\beta) = \dfrac{1}{a(\phi)}XW\Delta(y - \mu)$.   (2.14)

The MLEs can be obtained by solving the score equation

$S(\beta) = \dfrac{1}{a(\phi)}XW\Delta(y - \mu(\beta)) = 0$.   (2.15)

The solution of equation (2.15) can be obtained by the Fisher scoring algorithm or the Gauss-Newton algorithm. In the case of canonical links, both Fisher scoring and Newton-Raphson reduce to the iteratively re-weighted least squares algorithm. Under regularity conditions, the MLEs of the parameters in GLMs have the asymptotic normality property

$\hat{\beta} \sim N(\beta,\; a(\phi)(XWX^T)^{-1})$.

As we see, the asymptotic covariance matrix of $\hat{\beta}$ equals the inverse of the expected Fisher information matrix, which is

$F(\beta) = -E\left(\dfrac{\partial^2 l(\beta|y)}{\partial\beta\,\partial\beta^T}\right) = \dfrac{1}{a(\phi)}XWX^T$.   (2.16)

With a large sample size, we can apply this property to make inferences about $\beta$.

2.2.3 Quasi-Likelihood Approach

Based on the fact that only the first two moments of the variables are involved in the score function, Wedderburn (1974) proposed the quasi-likelihood method for estimating parameters in GLMs. The advantages of this method are that we do not need to make specific distributional assumptions, and that its estimators have similar asymptotic properties to MLEs.

2.3 Generalized Linear Mixed Models

2.3.1 Generalized Linear Mixed Models

A GLMM is an extension of a GLM to longitudinal data, obtained by introducing random effects to account for correlation within the repeated measurements for a given individual. It can separate the inter-individual variation from the intra-individual variation and borrow strength across individuals. Thus, GLMMs are very popular in the analysis of longitudinal data. A GLMM may be written as a hierarchical two-stage model. In the first stage, the intra-individual variation is specified by a generalized linear regression model. In the second stage, the inter-individual variation is represented through individual-specific regression parameters.

Let $y_{ij}$ denote the $j$th observation on individual $i$, $i = 1, 2, \cdots, N$; $j = 1, 2, \cdots, n_i$. Then there are a total of $\sum_{i=1}^{N}n_i$ observations.

• Stage 1 (intra-individual variation). Let $b_i$ be the random effects associated with individual $i$. We assume that, conditional on $b_i$, the observations $y_{i1}, y_{i2}, \cdots, y_{in_i}$ are independent and each has a density function from the natural exponential family:

$f(y_{ij}|\beta, b_i) = \exp\{[y_{ij}\theta_{ij} - \varphi(\theta_{ij})]/a(\phi) + c(y_{ij}, \phi)\}$,   (2.17)

$E(y_{ij}|\beta, b_i) = \mu_{ij}$, with $g(\mu_{ij}) = x_{ij}^T\beta + z_{ij}^T b_i$,   (2.18)

where $\phi$ is a dispersion parameter (here we assume that $\phi$ is known). The function $g(\cdot)$ is the link function, $\eta_{ij} = x_{ij}^T\beta + z_{ij}^T b_i$ is the linear predictor, and $x_{ij}$ and $z_{ij}$ are two vectors of covariates, such as time, baseline values, etc.
• Stage 2 (inter-individual variation).

$\eta_i = x_i^T\beta + z_i^T b_i$,   (2.19)

$b_i \sim N(0, D)$,   (2.20)

where $\eta_i = (\eta_{i1}, \cdots, \eta_{in_i})^T$, $x_i = (x_{i1}, \cdots, x_{in_i})$, $z_i = (z_{i1}, \cdots, z_{in_i})$, and $\beta$ is a vector of fixed parameters. We assume that the random effects $b_i$ are i.i.d. The covariance matrix $D$ in (2.20) quantifies the random inter-individual variation.

2.3.2 Maximum Likelihood Estimation

Let $y_i = (y_{i1}, \cdots, y_{in_i})^T$. From the preceding section, the joint density of $y = (y_1, \cdots, y_N)$ and $b = (b_1, \cdots, b_N)$ can be written as

$f(y, b|\beta, D) = \prod_{i=1}^{N}\prod_{j=1}^{n_i} f(y_{ij}|\beta, b_i)f(b_i|D)$.   (2.21)

Since the random effects $b$ are unobservable, we integrate them out and obtain the marginal distribution of $y$,

$f(y|\beta, D) = \prod_{i=1}^{N}\int\prod_{j=1}^{n_i}\{f(y_{ij}|\beta, b_i)f(b_i|D)\}\,db_i$.   (2.22)

Thus, the corresponding log-likelihood is

$l(\beta, D|y) = \sum_{i=1}^{N}\log\int\prod_{j=1}^{n_i}\{f(y_{ij}|\beta, b_i)f(b_i|D)\}\,db_i$.   (2.23)

If the above log-likelihood had a closed form, we could obtain the MLEs of the parameters in GLMMs by solving the score equation as usual. Usually, however, the integration with respect to the random effects is intractable, so we cannot get a closed form for the log-likelihood. This problem leads to the two main approaches for estimating parameters in GLMMs: an exact likelihood inference method based on numerical integration, and an approximate inference method based on linearization procedures via Taylor series expansion. In the exact inference, when the integration becomes intractable for moderate to large random effects, one may solve this problem by implementing the Monte Carlo EM algorithm. The exact inference method works very well when the dimension of the random effects is small; however, the computation may become quite demanding or unstable as the dimension of the random effects increases, whereas the approximate inference method avoids this computational problem by integrating out the random effects. The strategy of the approximate inference method is to iteratively solve LME models based on a first-order or second-order Taylor series expansion around the current estimates. If the number of intra-individual measurements (measurements per individual) is large enough, the approximate method may give reasonable parameter estimates; otherwise, approximate MLEs may be inconsistent.

2.3.3 Literature for Generalized Linear Mixed Models

McCulloch (1997) derived a Monte Carlo Newton-Raphson algorithm and combined it with a simulated maximum likelihood method to produce a hybrid method for GLMMs. His simulation study showed that the Monte Carlo EM algorithm, the Monte Carlo Newton-Raphson algorithm and the hybrid method all worked well in calculating MLEs for GLMMs, with the hybrid method giving more precise estimators. Booth and Hobert (1999) proposed two new implementations of maximum likelihood fitting in GLMMs, both carried out by the Monte Carlo EM algorithm; the main difference is that the first method uses rejection sampling to generate samples from the exact conditional distribution of the random effects, while the second uses multivariate t importance sampling. Breslow and Clayton (1993) proposed a penalized quasi-likelihood (PQL) method and a marginal quasi-likelihood (MQL) method, and demonstrated their suitability for inference in GLMMs by simulation and application to several examples. In the simulation study, PQL and MQL made correct inferences about regression coefficients, but somewhat underestimated the parameters (in absolute value).
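To illustrate the idea behind the Monte Carlo methods cited above, the following minimal sketch approximates the intractable integral in (2.23) by averaging the conditional likelihood over simulated random effects, for a hypothetical random-intercept logistic GLMM. It is an illustration of the principle only, not the thesis's implementation, and all data and settings are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_marginal_loglik(beta, sigma_b, y, x, n_mc=5000):
    """Monte Carlo approximation of the marginal log-likelihood (2.23) for
        logit P(y_ij = 1 | b_i) = beta0 + beta1 * x_ij + b_i,
        b_i ~ N(0, sigma_b^2).
    y, x: lists of per-individual arrays."""
    total = 0.0
    for yi, xi in zip(y, x):
        b = rng.normal(0.0, sigma_b, size=n_mc)             # draws from f(b_i | D)
        eta = beta[0] + beta[1] * xi[None, :] + b[:, None]  # n_mc x n_i
        p = 1.0 / (1.0 + np.exp(-eta))
        lik = np.prod(p**yi * (1 - p)**(1 - yi), axis=1)    # f(y_i | b) per draw
        total += np.log(lik.mean())                         # integrate b_i out
    return total

# toy data: 10 individuals, 5 binary measurements each (hypothetical)
x = [np.linspace(0, 1, 5) for _ in range(10)]
b_true = rng.normal(0, 1.0, size=10)
y = [rng.binomial(1, 1/(1 + np.exp(-(-0.5 + 1.2*xi + bi)))) for xi, bi in zip(x, b_true)]

print(mc_marginal_loglik(np.array([-0.5, 1.2]), 1.0, y, x))
```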
Vonesh et al. (2002) proposed conditional second-order generalized estimating equations (CGEE2) for estimating the parameters in GLMMs and also showed that the efficiency of the estimators is improved by involving the second-order moments.

2.4 Literature for Missing Data

We frequently encounter missing data (in the response and/or the covariates) in practice. Ignoring missing data or using overly simple methods to handle them often leads to invalid inference. Thus, it is very important to find appropriate approaches for dealing with the missing data at hand. Various strategies for modeling the missing data mechanism have been proposed in the recent literature.

2.4.1 Literature of Informative Dropout

Wu and Carroll (1988) considered linear mixed effects models with informative dropout under the assumption that the informative dropout can be modeled by a probit model that includes the random effects as covariates. Diggle and Kenward (1994) derived a likelihood method for obtaining MLEs in a multivariate linear model with informative dropout, modeled by a logistic regression model that includes the response as a covariate; computation of the likelihood was sped up by using the probit approximation to the logit transformation. Their simulation work showed that incorporating the informative dropout mechanism into the statistical inference reduces the bias incurred by the ordinary least squares (OLS) estimator or by treating the informative dropout as merely MAR. Little (1995) gave a review of modeling the dropout mechanism in repeated-measures studies. According to how the dropout mechanism is factored, models handling dropout were classified into selection models and pattern-mixture models; the main difference between the two types is that the form of the missing data mechanism must be specified in selection models, while pattern-mixture models do not require this. He classified NIM into nonignorable outcome-based missing data, where the dropout depends on the missing values, and random-effect-based missing data, where the dropout depends on the underlying random effects. He also suggested examining the sensitivity of results to the choice of missing data mechanism when we know almost nothing about that mechanism. Ibrahim, Chen and Lipsitz (2001) developed a Monte Carlo EM algorithm to obtain MLEs in GLMMs with informative dropout and nonmonotone missing data patterns. Moreover, they proposed that the missing data mechanism may be modeled by a logistic regression or by a sequence of one-dimensional conditional distributions, which may reduce the number of nuisance parameters.

2.4.2 Literature of Missing Covariates

Little (1992) defined three special types of missing covariate patterns: (i) univariate missing data, where only one covariate's values are missing; (ii) monotone or nested missing data, where the $(j+1)$th covariate $x_{j+1}$ is observed for every case in which the $j$th covariate $x_j$ ($j = 1, 2, \cdots, p$) is observed; and (iii) a special pattern in which two covariates can never be observed at the same time. He reviewed methods of estimation in regression models with missing covariates. Six statistical methods for dealing with missing covariates are compared in this paper: complete-case methods, available-case methods, least squares on imputed data, maximum likelihood, Bayesian methods and multiple imputation.
He suggested that maximum likelihood, Bayesian methods and multiple imputation are the better choices for dealing with missing covariate problems; moreover, he preferred maximum likelihood in large samples and Bayesian methods or multiple imputation in small samples. Ibrahim (1990) analyzed the problem of missing covariates (MAR) in GLMs with discrete covariates and applied the EM algorithm to obtain MLEs under the assumption that the missing covariates come from a discrete distribution; the asymptotic variance of the MLEs was estimated by computing the observed information matrix via Louis's method. Ibrahim, Lipsitz, and Chen (1999) proposed a Monte Carlo EM algorithm for estimating parameters in GLMs with nonignorable missing covariates; they assumed a multinomial model for the missing data mechanism and a sequence of one-dimensional conditional distributions for the unobserved covariates. Wu and Wu (2001) estimated parameters in nonlinear mixed effects models with missing covariates (MAR) by a three-step multiple imputation method. In the first step, they fitted a hierarchical model without covariates. In the second step, they imputed the missing covariates based on a multivariate linear model implemented by the Gibbs sampler, creating B independent complete datasets. In the last step, they used standard complete-data methods to analyze each dataset, and thus obtained an overall inference based on the B analysis results.

Chapter 3

Exact Inference for GLMMs with Informative Dropout and Missing Covariates

3.1 Introduction

In this chapter, we develop an exact inference method based on numerical integration to obtain MLEs for parameters in GLMMs with informative dropout and missing covariates. The proposed exact method is implemented by a Monte Carlo EM algorithm, which needs to generate samples of the missing values and random effects by the Gibbs sampler in each EM step. In Section 3.2, we describe the GLMMs with informative dropout and missing covariates considered in this thesis. Section 3.3 describes a Monte Carlo EM algorithm for implementing the exact inference method. A detailed description of our sampling methods is provided in Section 3.4. In Section 3.5, we present a PX-EM algorithm, which may boost the convergence rate of the standard EM algorithm. Computation issues regarding our algorithm are discussed in Section 3.6.

3.2 Models and Likelihood

We assume that data are collected from $N$ individuals. Let $y_i = (y_{i1}, \cdots, y_{in_i})^T$, where $y_{ij}$ is the outcome for individual $i$ at time $t_{ij}$, $j = 1, 2, \cdots, n_i$, $i = 1, 2, \cdots, N$. The response $y_i$ may contain missing values due to dropout, so we write $y_i = (y_{obs,i}, y_{mis,i})$, where $y_{obs,i}$ corresponds to the observed components of $y_i$ and $y_{mis,i}$ contains the missing components of $y_i$. Let $r_i = (r_{i1}, \cdots, r_{in_i})^T$ be a vector of missing response indicators such that $r_{ij} = 1$ if $y_{ij}$ is missing for individual $i$ at time $t_{ij}$, and $r_{ij} = 0$ if $y_{ij}$ is observed. Let $x_i = (x_{i1}, \cdots, x_{if})^T$ be an $(f \times 1)$ vector of time-independent covariates for individual $i$. Since the time-independent covariates may also contain missing values, we write $x_i = (x_{obs,i}, x_{mis,i})$, where $x_{obs,i}$ is the observed part of $x_i$ and $x_{mis,i}$ is the missing part. Let $s_i = (s_{i1}, \cdots, s_{if})^T$ be a vector of missing covariate indicators such that $s_{ik} = 1$ if $x_{ik}$ is missing and $s_{ik} = 0$ if $x_{ik}$ is observed, $k = 1, \cdots, f$.
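In code, the decomposition of $y_i$ and $x_i$ into observed and missing parts, together with the indicators $r_i$ and $s_i$, might be represented with boolean masks, as in this hypothetical illustration (the values are made up):

```python
import numpy as np

# responses for one individual, with NaN marking dropout (hypothetical values)
y_i = np.array([1.0, 0.0, 1.0, np.nan, np.nan])
x_i = np.array([35.0, np.nan, 1.0])     # time-independent covariates, one missing

r_i = np.isnan(y_i).astype(int)         # missing response indicators r_ij
s_i = np.isnan(x_i).astype(int)         # missing covariate indicators s_ik

y_obs_i = y_i[r_i == 0]                 # y_obs,i
y_mis_idx = np.flatnonzero(r_i)         # positions of y_mis,i (to be imputed)
x_obs_i = x_i[s_i == 0]                 # x_obs,i
x_mis_idx = np.flatnonzero(s_i)         # positions of x_mis,i

print(r_i, s_i, y_obs_i, x_obs_i)
```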
If the response and all covariates are completely observed for each individual, the corresponding G L M M can be written as a hierarchical two-stage model as follows.  IMP,  i) =  b  e x  P [{ViAjM  - vtfijMfiM)  + (Vij, 4>)], c  (3.1)  Vij =9(lMj) = Al/3 + zJjbi,  (3.2) bi ^ N(0,D), l  d  j = l,---,m,  i =  l,---,N,  where £ ( y | 6 j ) = pij and (j) is the dispersion parameter (here we assume that <> / is ,  iJ  known). The function g(.) is a link function,  is the linear predictor, (3 = (/3i, • • • , (5 )  T  P  is a vector of fixed effects, and 6j = (bn, ••• • ,b )  T  iq  covariate Afj = (xf,tfj)  is a (1 x p) vector, where  is a vector of random effects. The is a vector of time-dependent  covariates. Usually, the covariate vector z^- is a subset of Aij. 21  The q x q matrix D  quantifies the random inter-individual covariance. By integrating out the unobservable random effects b we obtain the following complete-data marginal distribution i:  /(!/!••• ,y \ --N  ,x ,f3,D)  Xl  = l[  N  i=l  / Y[{f( \p,bi,Xi)f(bi\D)}dbi.  (3.3)  yij  j=l  J  In the presence of missing values in the response and covariates, the completedata marginal distribution becomes more complicated.  When the missing responses  are informative, we have to take into account the missing data mechanism, i.e., the distribution of the missing data indicators TV Otherwise, the estimates of parameters may be biased. In this thesis, we make the following assumptions: (i) The missing covariates are MAR, i.e, the missing covariate mechanism does not depend on any unobserved values, but may depend on observed values. In other words, the density function for the missing covariate indicator  Sj  satisfies  f(si\y x ,S) u  = f(si\y  t  o b s i  ,x  o b S t i  ,S),  where  is a  S  vector of parameters, (ii) The missing responses are informative, i.e, the missing response mechanism may depend on the unobserved values.  We denote  as the  fir^y^Xi,^})  density function of the missing response indicator, where tf) is a vector of parameters, (iii) Let  f(Xi\a)  to be the density function for covariates  Xj,  where  a is a vector of parameters.  Modeling strategies for specifying the missing data mechanism covariate model  f(xi\a)  are explored in Chapter 5. By integrating out  we obtain the marginal distribution for the observed data N  f(y bs, obs,r,s\(3,D,ip,d,at.)  n  =]J t=i  x  0  fiTilVi, y  o b s  =  (si, • • • , SN).  (y  ,  obSil  ••• , y , ), obs N  x  o b s  =  (y  o b s  ,x  o b s  ,r,  y  and  m i s i  x  m i S j i  ,  s).  /* Tli  / J J  J  where  p  and the  fir^y^Xi,^)  x  Y[{f(y \P,b ,x )f(b \D)f(x \a) ij  i  i  i  (3.4)  )f{Si\ ,  u  (x  i  i  j=  Xi  ,  obSil  ••• , x  yi,  o b s > N  6)}dbidx idy i , miSt  ),r  =  (n, • • •  m  Si  ,r ) N  and  s  =  Rubin (1976) showed that the missing data mechanism can be ignored from  likelihood inference if the data are MAR. Since we assume that the missing covariates  22  are M A R , ignoring the missing covariates mechanism leads to the the following observed data log-likelihood: l((3,D,ij>,cx\y ,x ,r,s obs  obs  (3.5)  Maximizing the above log-likelihood gives us the MLEs for parameters in the G L M M . However, the intractable integration in (3.5) makes the observed data log-likelihood difficult to maximize..In this thesis, we propose an exact inference via Markov Chain Monte Carlo techniques and an approximate inference method via Taylor series expansion. In next section, we describe the Monte Carlo E M algorithm in details, which implements the exact inference method. 
The approximate inference method will be illustrated in Chapter 4.  3.3  Monte Carlo E M Algorithm  The E M algorithm (Dempster, Laid, and Rubin, 1977) is a very useful and powerful algorithm to compute MLEs in a wide variety of situations such as missing data and random effect models. Each iteration of a E M algorithm consists of an E-step that evaluates the expectation of "complete data" log-likelihood conditional on the observed data and previous parameter estimates, and a M-step that updates the parameter estimates by maximizing the expectation of the conditional log-likelihood. This iterative computation between the E-step and M-step till convergence leads to the MLEs. If we treat (y  , y ^ x  obSti  mia  , x ,  obsA  mis4  b , T\) = (y x , fy, n) as the "complete data", t  23  i}  t  the complete data density for individual i is given by  f(y x b ,r \f3,D,ip,a) i)  i)  i  i  = f{Vi\P, bi, ^ )/(x |o;)/(6 | D)/(r |t/ , x i  i  i  J  i  i  u  ip).  This leads to the complete data log-likelihood JV  i=l N  = £  [log{/d/il)3,6i,Xi)} + l o g g i a ) }  ( 3  -  6 )  i=l  + l o g { / ( 6 | £ > ) } + logifinlVi, i  where 7 = (j3, a, ip, D) and ^(7; y , x t  iy  x iP)}], u  7-j) is the contribution to the complete data log-  likelihood from the ith individual. Note that we are mainly interested in estimating the parameters (/3,D), and treat (cx,ip) as nuisance parameters. Ibrahim et al. (2001) proposed a Monte Carlo E M algorithm for estimating parameters in GLMMs with informative dropout without missing covariates. Here we extend their method to GLMMs with informative dropout and missing covariates for simultaneous inference.  3.3.1 Let 7^  E-step be the current parameter estimates. Then the conditional expectation of the  complete-data log-likelihood given the observed data for individual % at the (t + l)st E M  24  iteration is given by Qi{i\i ) (t)  E! (jiilf > Vii i i Ti)\yobs,ii Xobs,ii fii  ^)  x  = / / /[log{/(j/ |iS,6i a; )} + log{/(aj .|a)} i  )  i  i  (3.7) + l o g l / ^ D ) } + iog{f(n\ ,  Xi,ib)}  yi  f{ymis,ii rnis,iibi\y bs,i> obs,ii ii7^)dbidy ^ X  X  r  o  m  =h +I2 + I3 + h.  In general, the above integration is intractable and does not have a closed form expression. However, this integral can be evaluated by using Monte Carlo approximations (Wei and Tanner (1990)). Specifically, a sample of size b\ )} mi)  can be drawn from  f(y  m i a i i  , x ,u mia  bf^),  {(y^ ,x^ i3i  bi\y  ,  obS:i  x ^, obs  u  t  7 ) via Gibbs (i)  r  u  sampler along with the adaptive rejection algorithm (Gilks and Wild, 1992). Then we may approximate QtOrl*/^) by 1 rrii  +  m  log{/(aJo6»,i,  a^mL.iI")} (3.8)  + -E °g{/(^ i^)} 1  1  )  i=i  For simplicity, we may take m; constant in each iteration. However, increasing m with ;  each iteration may speed up the E M convergence (Booth and Hobert, 1999). The E-step  25  for all individuals at the (t + l)st iteration can be written as . C?( |7  )=£ft(7l7 ) W  ( i )  7  i=i  =  E  E — °g{f(yobs,i, j=l  i=l  +  Vmis,i\P> ^ \  l  E  E  ~  *=i  i=i  JV  J7lj  l o  b  S  t  l  ,  X^i)}  X  (3.9)  * _  i ^ ) }  ,  JV TOj v  £  £  ~  t=l j = l  m,-  l  0  S { / (  ^ 0 b  r  S  , i .  =Q (/3|7 ) + g ( « | 7 ) (1)  3.3.2  o  g{/'( <^,i>VmisM))  + £ £ - i ° g { / ( * ?  +  X  1  W  (2)  W  J/mL,i. </>)}  +Q  (3)  (^l7 ) + W  Q  [ i )  W\l  [ t )  ).  
3.3.2 M-step

We obtain the updated estimates $\gamma^{(t+1)}$ at the $(t+1)$st iteration by maximizing $Q(\gamma|\gamma^{(t)})$. Assuming that the parameters $\beta$, $\alpha$, $D$ and $\psi$ are all distinct, we can update them by maximizing $Q^{(1)}$, $Q^{(2)}$, $Q^{(3)}$ and $Q^{(4)}$ separately in the M-step. The maximizer $\beta^{(t+1)}$ may be computed via iteratively re-weighted least squares, with the missing values replaced by their simulated values $\{y_{mis,i}^{(j)}, x_{mis,i}^{(j)}, b_i^{(j)}\}$:

$\beta^{(t+1)} = \arg\max_{\beta}\{Q^{(1)}(\beta|\gamma^{(t)})\} = \arg\max_{\beta}\sum_{i=1}^{N}\sum_{j=1}^{m_i}\frac{1}{m_i}\log f(y_{obs,i}, y_{mis,i}^{(j)}|\beta, b_i^{(j)}, x_{obs,i}, x_{mis,i}^{(j)})$.   (3.10)

The maximizer $D^{(t+1)}$ of $Q^{(3)}$ can be written as

$D^{(t+1)} = \arg\max_{D}\{Q^{(3)}(D|\gamma^{(t)})\} = \arg\max_{D}\sum_{i=1}^{N}\sum_{j=1}^{m_i}\frac{1}{m_i}\log f(b_i^{(j)}|D)$.   (3.11)

To update $\alpha$ and $\psi$, one can use standard methods for commonly used models, such as multivariate normal models and logistic regression models:

$\alpha^{(t+1)} = \arg\max_{\alpha}\{Q^{(2)}(\alpha|\gamma^{(t)})\} = \arg\max_{\alpha}\sum_{i=1}^{N}\sum_{j=1}^{m_i}\frac{1}{m_i}\log f(x_{obs,i}, x_{mis,i}^{(j)}|\alpha)$,   (3.12)

$\psi^{(t+1)} = \arg\max_{\psi}\{Q^{(4)}(\psi|\gamma^{(t)})\} = \arg\max_{\psi}\sum_{i=1}^{N}\sum_{j=1}^{m_i}\frac{1}{m_i}\log f(r_i|y_{obs,i}, y_{mis,i}^{(j)}, \psi)$.   (3.13)

To obtain the MLEs $\hat{\gamma}$, we may start with any reasonable starting values for $\gamma$, which can be obtained by the complete-case method or other naive methods, and then iterate between the E-step and M-step until convergence is reached.

3.3.3 Variance Estimation

The asymptotic covariance matrix of $\hat{\gamma}$ can be obtained by the method of Louis (1982). Specifically, the observed information matrix equals the conditional expected complete-data information minus the missing information; that is,

$I_{obs}(\gamma) = I_{com}(\gamma) - I_{mis|obs}(\gamma)$.   (3.14)

Let

$\dot{Q}(\gamma|\hat{\gamma}) = \sum_{i=1}^{N}\dot{Q}_i(\gamma|\hat{\gamma})$, with $\dot{Q}_i(\gamma|\hat{\gamma}) = \sum_{k=1}^{m_i}\frac{1}{m_i}s_i^{(k)}(\gamma)$,

where $s_i^{(k)}(\gamma) = \partial l_i(\gamma; y_{obs,i}, y_{mis,i}^{(k)}, x_{obs,i}, x_{mis,i}^{(k)}, b_i^{(k)}, r_i)/\partial\gamma$ is the complete-data score evaluated at the $k$th imputed sample. Since $\beta$, $\alpha$, $\psi$ and $D$ are distinct, the matrices $\dot{Q}(\gamma|\hat{\gamma})$, $\ddot{Q}(\gamma|\hat{\gamma})$ and $I_{obs}(\gamma)$ are block diagonal. Then, based on (3.14), the asymptotic observed information matrix can be approximated by

$I_{obs}(\hat{\gamma}) \approx \sum_{i=1}^{N}\sum_{j=1}^{m_i}\frac{1}{m_i}s_i^{(j)}(\hat{\gamma})\,s_i^{(j)T}(\hat{\gamma}) - \sum_{i=1}^{N}\dot{Q}_i(\hat{\gamma}|\hat{\gamma})\dot{Q}_i^T(\hat{\gamma}|\hat{\gamma})$.   (3.15)

Thus, the asymptotic covariance matrix of $\hat{\gamma}$ can be approximated by

$\mathrm{cov}(\hat{\gamma}) = I_{obs}^{-1}(\hat{\gamma})$.   (3.16)

3.4 Sampling Methods

3.4.1 Gibbs Sampler

As we can see from the preceding section, generating samples from the conditional distribution $f(y_{mis,i}, x_{mis,i}, b_i|y_{obs,i}, x_{obs,i}, r_i, \gamma^{(t)})$ is crucial for implementing the E-step of the Monte Carlo EM algorithm. The Gibbs sampler is a popular method for generating samples from a complicated multidimensional distribution by sampling from each of its full conditional distributions in turn. Here we use the Gibbs sampler to simulate the "missing values" as follows. Set initial values $y_{mis,i}^{(0)}$, $x_{mis,i}^{(0)}$ and $b_i^{(0)}$, and suppose that the current generated values are $y_{mis,i}^{(k)}$, $x_{mis,i}^{(k)}$ and $b_i^{(k)}$.

Step 1. Draw a sample of the missing responses, $y_{mis,i}^{(k+1)}$, from $f(y_{mis,i}|y_{obs,i}, x_{obs,i}, x_{mis,i}^{(k)}, b_i^{(k)}, r_i, \gamma^{(t)})$;

Step 2. Draw a sample of the missing covariates, $x_{mis,i}^{(k+1)}$, from $f(x_{mis,i}|y_{obs,i}, y_{mis,i}^{(k+1)}, x_{obs,i}, b_i^{(k)}, r_i, \gamma^{(t)})$;

Step 3. Draw a sample $b_i^{(k+1)}$ from $f(b_i|y_{obs,i}, y_{mis,i}^{(k+1)}, x_{obs,i}, x_{mis,i}^{(k+1)}, r_i, \gamma^{(t)})$.

After a sufficiently large burn-in of $d$ iterations, the sampled values will achieve a steady state; that is, $\{(y_{mis,i}^{(k)}, x_{mis,i}^{(k)}, b_i^{(k)}),\ k = d+1, \cdots, d+B\}$ can be treated as samples from the multidimensional density function $f(y_{mis,i}, x_{mis,i}, b_i|y_{obs,i}, x_{obs,i}, r_i, \gamma^{(t)})$.

3.4.2 Adaptive Rejection Algorithm

Gilks and Wild (1992) proposed an adaptive rejection algorithm for effectively sampling from any univariate log-concave density function.
In the current situation, we can write

$f(y_{mis,i}|y_{obs,i}, x_i, b_i, r_i, \gamma^{(t)}) \propto f(y_i|b_i, x_i, \beta^{(t)})f(r_i|y_i, x_i, \psi^{(t)})$,
$f(x_{mis,i}|y_i, x_{obs,i}, b_i, r_i, \gamma^{(t)}) \propto f(y_i|b_i, x_i, \beta^{(t)})f(x_i|\alpha^{(t)})f(r_i|y_i, x_i, \psi^{(t)})$,
$f(b_i|y_i, x_i, r_i, \gamma^{(t)}) \propto f(y_i|b_i, x_i, \beta^{(t)})f(b_i|D^{(t)})$.

The density functions $f(y_i|b_i, x_i, \beta^{(t)})$, $f(x_i|\alpha^{(t)})$ and $f(b_i|D^{(t)})$ often come from the exponential family, and thus are log-concave in each component of $y_{mis,i}$, $x_{mis,i}$ and $b_i$, respectively. If $f(r_i|y_i, x_i, \psi^{(t)})$ is log-concave in each component of $y_{mis,i}$ and $x_{mis,i}$, then the products of log-concave functions — $f(y_{mis,i}|y_{obs,i}, x_i, b_i, r_i, \gamma^{(t)})$, $f(x_{mis,i}|y_i, x_{obs,i}, b_i, r_i, \gamma^{(t)})$ and $f(b_i|y_i, x_i, r_i, \gamma^{(t)})$ — are log-concave. So we can use the adaptive rejection algorithm to generate samples from these full conditionals. Note that adaptive rejection sampling can only be applied in the univariate case, but $y_{mis,i}$, $x_{mis,i}$ and $b_i$ are often multidimensional. Thus, to implement the Gibbs sampler described earlier, we need to modify the sampling scheme to incorporate multidimensional variables, as described below.

For example, suppose that $y_{mis,i}$ is multivariate of dimension $l$, that is, $y_{mis,i} = (y_{mis,1,i}, \cdots, y_{mis,l,i})^T$. Since the function $f(y_{mis,i}|y_{obs,i}, x_i, b_i, r_i, \gamma^{(t)})$ is log-concave with respect to each component of $y_{mis,i}$, the function

$f(y_{mis,k,i}|y_{obs,i}, \{y_{mis,h,i}, h \neq k\}, x_i, b_i, r_i, \gamma^{(t)}) \propto f(y_{mis,i}|y_{obs,i}, x_i, b_i, r_i, \gamma^{(t)})$,  $k = 1, \cdots, l$,

is log-concave with respect to $y_{mis,k,i}$. Thus, another Gibbs sampler, together with adaptive rejection sampling, can be used to generate a sample from $f(y_{mis,i}|y_{obs,i}, x_i, b_i, r_i, \gamma^{(t)})$. Specifically, we can proceed as follows.

Step 1. Use the adaptive rejection sampling method to generate $y_{mis,1,i}^{(u+1)}$ from $f(y_{mis,1,i}|y_{obs,i}, \{y_{mis,h,i}^{(u)}, h \neq 1\}, x_i, b_i, r_i, \gamma^{(t)})$;

Step 2. Use the adaptive rejection sampling method to generate $y_{mis,2,i}^{(u+1)}$ from $f(y_{mis,2,i}|y_{obs,i}, y_{mis,1,i}^{(u+1)}, \{y_{mis,h,i}^{(u)}, h > 2\}, x_i, b_i, r_i, \gamma^{(t)})$;
...
Step $l$. Use the adaptive rejection sampling method to generate $y_{mis,l,i}^{(u+1)}$ from $f(y_{mis,l,i}|y_{obs,i}, \{y_{mis,h,i}^{(u+1)}, h < l\}, x_i, b_i, r_i, \gamma^{(t)})$.

After a burn-in period, the vector $(y_{mis,1,i}^{(u)}, \cdots, y_{mis,l,i}^{(u)})$ can be treated as a sample from $f(y_{mis,i}|y_{obs,i}, x_i, b_i, r_i, \gamma^{(t)})$. Samples from $f(x_{mis,i}|y_i, x_{obs,i}, b_i, r_i, \gamma^{(t)})$ and $f(b_i|y_i, x_i, r_i, \gamma^{(t)})$ can be obtained in a similar way.

3.4.3 Rejection Sampling

When the density functions do not satisfy the log-concavity property, the usual rejection sampling method can be used for generating the desired samples. For example, suppose that we want to sample from $f(b_i|y_i, x_i, r_i, \gamma^{(t)})$, which can be written as $f(b_i|y_i, x_i, r_i, \gamma^{(t)}) = c\,f(b_i|D^{(t)})f(y_i|b_i, x_i, \beta^{(t)})$, where $c$ is a constant. Then the usual rejection sampling method can be described as follows.

Step 1. Generate a random value $b^*$ from $f(b_i|D^{(t)})$, and draw a sample $U$ from the Uniform(0,1) distribution;

Step 2. Accept $b^*$ as a sample from $f(b_i|y_i, x_i, r_i, \gamma^{(t)})$ if $U < f(y_i|b^*, x_i, \beta^{(t)})/T$, where $T = \sup_u\{f(y_i|u, x_i, \beta^{(t)})\}$. Otherwise, reject $b^*$ and go to Step 1.
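A minimal runnable version of the two steps above, for a toy model with a $N(0, D)$ random intercept and Bernoulli responses, is sketched below; the model, fixed-effect part, and constants are illustrative assumptions, not the thesis's code.

```python
import numpy as np

rng = np.random.default_rng(4)

def loglik_y_given_b(b, y, eta0=0.0):
    """log f(y_i | b_i, x_i, beta) for a Bernoulli model with logit link;
    eta0 stands in for the fixed-effect part A_ij^T beta (held fixed here)."""
    p = 1.0 / (1.0 + np.exp(-(eta0 + b)))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def sample_b_rejection(y, D=1.0, max_tries=100_000):
    """Rejection sampling from f(b_i | y_i, ...) proportional to
    f(b_i | D) f(y_i | b_i): propose from the 'prior' f(b_i | D), accept
    with probability f(y_i | b*) / T, where T bounds sup_b f(y_i | b)."""
    log_T = 0.0   # Bernoulli likelihoods satisfy f(y | b) <= 1, so T = 1 works
    for _ in range(max_tries):
        b_star = rng.normal(0.0, np.sqrt(D))   # Step 1: propose from f(b | D) ...
        u = rng.uniform()                      # ... and draw U ~ Uniform(0, 1)
        if np.log(u) < loglik_y_given_b(b_star, y) - log_T:
            return b_star                      # Step 2: accept
    raise RuntimeError("no acceptance within max_tries")

y = np.array([1, 1, 0, 1, 1])
draws = np.array([sample_b_rejection(y) for _ in range(2000)])
print(draws.mean())   # shifted above 0, since these data favor larger b_i
```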
Samples from $f(y_{mis,i}|y_{obs,i}, x_i, b_i, r_i, \gamma^{(t)})$ and $f(x_{mis,i}|y_i, x_{obs,i}, b_i, r_i, \gamma^{(t)})$ can be obtained in a similar way.

3.4.4 Sampling Method for Binary Variables

If the missing variables are binary, then we may use an easier way to generate the desired samples. Here, we take the missing response $y_{mis,i}$ as an example. Suppose that the response is binary and we want to draw samples from $f(y_{mis,i}|y_{obs,i}, x_i, b_i, r_i, \gamma^{(t)})$; for simplicity, we assume here that $y_{mis,i}$ is univariate. The corresponding sampling procedure is as follows.

Step 1. Draw a sample $U$ from the Uniform(0,1) distribution;

Step 2. Take 0 as a sample from $f(y_{mis,i}|y_{obs,i}, x_i, b_i, r_i, \gamma^{(t)})$ if $U < f(0|y_{obs,i}, x_i, b_i, r_i, \gamma^{(t)})$; otherwise, take 1 as the sample.

3.5 PX-EM Algorithm

Although the EM algorithm is a very popular estimation tool due to its easy implementation and stable convergence, it may converge quite slowly in some applications such as ours. To speed up the convergence, many acceleration methods have been proposed (e.g., Liu and Rubin, 1994a; Meng and van Dyk, 1997). A particularly useful method is
M-step:  By the same standard maximization procedures as the M-step in Section  3.2.2, we maximize Q*^\ Q*^ \ Q*^ and Q*^ separately to update the estimates of the 2  expanded parameters to (3< \ t+l  a  <  t + 1  \ ip< \ t+1  D< ^ t+  and A ( ) . The only difference t+1  in this step between the P X - E M and the E M is that the P X - E M maximizes Q*^ over (3* and A, while the E M does this only over /3. The reduction to the original parameters in the models (3.1) — (3.2) gives p{t+l)  _  )  g»(t+i)  )  a  (t+l)  =  *(*+i)  a  >  =  £)(*+!) =  \(t+l)Jj*(t+l)j{(t+l) _ T  This iterative calculation between the E-step and M-step until convergence leads to the MLEs of parameters in the original models (3.1) — (3.2).  33  3.6  Convergence  When carrying out the Monte Carlo E M algorithm, Monte Carlo samples for the "missing data" are drawn at each iteration to approximate true values. Consequently, Monte Carlo errors are introduced. One way to reduce the Monte Carlo errors is to increase the Monte Carlo sample size m*. However, the computation is intensive for a large rrij. Because the estimate 7^ in the initial E M steps is often far from the true values of the parameters, Monte Carlo samples of a large size at initial iterations, and then increase  may be wasted. Thus, we usually use a small m, with the iteration, as suggested by Booth and  Hobert (1999). After an initial burn-in period, the Gibbs sampler converges to a stationary state and thus produces draws from the conditional density function &obs,i, i,7^)r  f(y ,x ^,bi\y , misi  mis  obSti  Obviously, the determination of the burn-in period is very important. We  way use common diagnostic methods to determine the burn-in period, such as time series plots. The proposed Monte Carlo E M algorithm often works well for the models with a small dimension of random effects. When the dimension of the random effects is not small, however, the proposed E M algorithm and Gibbs sampler may converge very slowly or even not converge. Therefore, in next chapter, we propose an approximate inference method which may avoid these convergence difficulties and may be much more efficient.  34  Chapter 4 Approximate Inference for G L M M s with Informative Dropout and Missing Covariates 4.1  Introduction  In Chapter 3, we have described the exact inference method implemented by the Monte Carlo E M algorithm. However, the exact method may be computationally intensive and may even offer potentially computational difficulties such as slow or non-convergence. Moreover, when the dimension of random effects is not small, sampling random effects may result in inefficient and computationally unstable Gibbs samplers, which may lead to a high degree of autocorrelation and a lack of convergence. In the presence of missing response and missing covariates, these problems become more serious. In this chapter, we propose an approximate inference method which is not only much more efficient, but also avoids potential computational difficulties. This approximate method is obtained by Taylor series expansion and it avoids sampling the random effects in the E-step by 35  integrating them out. Pinheiro et al. (2001), in a different context, have showed that the convergence rate of the E M algorithm can be greatly improved by integrating out the random effects in the E-step. The outline of this chapter is as follows. 
Chapter 4

Approximate Inference for GLMMs with Informative Dropout and Missing Covariates

4.1 Introduction

In Chapter 3, we described the exact inference method implemented by the Monte Carlo EM algorithm. However, the exact method may be computationally intensive and may even present computational difficulties such as slow or non-convergence. Moreover, when the dimension of the random effects is not small, sampling the random effects may result in inefficient and computationally unstable Gibbs samplers, which may lead to a high degree of autocorrelation and a lack of convergence. In the presence of missing responses and missing covariates, these problems become more serious. In this chapter, we propose an approximate inference method which is not only much more efficient, but also avoids these potential computational difficulties. The approximate method is based on a Taylor series expansion, and it avoids sampling the random effects in the E-step by integrating them out. Pinheiro et al. (2001), in a different context, showed that the convergence rate of the EM algorithm can be greatly improved by integrating out the random effects in the E-step.

The outline of this chapter is as follows. In Section 4.2, we present the approximate inference method for GLMMs without missing values, and in Section 4.3 we extend this method to GLMMs with informative dropout and missing covariates, implemented by a Monte Carlo EM algorithm. In Section 4.4, we briefly describe the sampling methods used in Section 4.3. We conclude this chapter with a discussion of the PX-EM algorithm, an extension of the standard EM algorithm.

4.2 Approximate Inference without Missing Values

As described in Chapter 3, a GLMM can be written as

f(y_ij | β, b_i) = exp[ {y_ij θ_ij(μ_ij) − φ(θ_ij(μ_ij))}/a(φ) + c(y_ij, φ) ],   (4.1)

η_ij = g(μ_ij) = A_ij^T β + z_ij^T b_i,   b_i ~ N(0, D),   j = 1, …, n_i,  i = 1, …, N,   (4.2)

where the notation is the same as in (3.1)-(3.2) of Chapter 3. Denoting the observation vector by y_i = (y_i1, …, y_in_i)^T and the design matrices by A_i = (A_i1, …, A_in_i) and z_i = (z_i1, …, z_in_i), the conditional mean and covariance of y_i satisfy E(y_i | b_i) = μ_i = g^{-1}(A_i^T β + z_i^T b_i) and cov(y_i | b_i) = C_i = diag{a(φ) V(μ_ij), j = 1, …, n_i}, respectively.

The above GLMM yields a marginal log-likelihood function by integrating out the random effects:

l(β, D | y_1, …, y_N) = Σ_{i=1}^N log { ∫ [ Π_{j=1}^{n_i} f(y_ij | β, b_i) ] f(b_i | D) db_i } = Σ_{i=1}^N log ∫ exp{ k_i(b_i) } db_i,   (4.3)

where k_i(b_i) = Σ_{j=1}^{n_i} log f(y_ij | β, b_i) + log f(b_i | D).

To estimate the parameters in the GLMM, we need to maximize the log-likelihood function (4.3). However, in most cases, the integral in (4.3) is intractable. Evaluating the integral in (4.3) by Monte Carlo methods, as in the exact inference method, may present computational problems, as noted earlier. Here, we consider a much more efficient approximate inference method based on a Taylor series expansion. The following approximation is based on a second-order Taylor series expansion about the current parameter estimates, which is equivalent to Laplace's approximation (see [20], [26]):

∫ e^{k(θ)} dθ ≈ (2π)^{q/2} | −∂²k(θ̂)/∂θ∂θ^T |^{−1/2} e^{k(θ̂)},   (4.4)

where θ is a q × 1 vector, k(θ) is a function of θ, and θ̂ is a maximizer of k(θ). Applying Laplace's approximation to the log-likelihood (4.3) yields

l(β, D | y_1, …, y_N) ≈ (Nq/2) log(2π) − (1/2) Σ_{i=1}^N log | −∂²k_i(b_i)/∂b_i∂b_i^T |_{b_i = b_i^0} + Σ_{i=1}^N k_i(b_i^0),   (4.5)

where b_i^0 maximizes k_i(b_i).

Maximizing the approximate log-likelihood function (4.5) with respect to β and D, and maximizing k_i(b_i) with respect to b_i, are equivalent to jointly solving the following score equations (see [20], [26]):

Σ_{i=1}^N A_i W_i^{-1} B_i (y_i − μ_i) = 0,
z_i W_i^{-1} B_i (y_i − μ_i) = D^{-1} b_i,   i = 1, …, N,   (4.6)

where B_i is the n_i × n_i diagonal matrix with diagonal terms ∂g(μ_ij)/∂μ_ij, and W_i = B_i C_i B_i. It can be shown that the solution to (4.6) via Fisher scoring is equivalent to iteratively solving the following linear equations (see [20], [26]):

[ A^T W^{-1} A      A^T W^{-1} Z           ] [ β ]   [ A^T W^{-1} ỹ ]
[ Z^T W^{-1} A      Z^T W^{-1} Z + D̃^{-1} ] [ b ] = [ Z^T W^{-1} ỹ ],   (4.7)

where ỹ_i = A_i^T β̂ + z_i^T b̂_i + B_i( y_i − g^{-1}(A_i^T β̂ + z_i^T b̂_i) ), with β̂ and b̂_i the current estimates. The stacked vectors and matrices are ỹ^T = (ỹ_1^T, …, ỹ_N^T), b^T = (b_1^T, …, b_N^T), A^T = (A_1, …, A_N), Z = diag(z_1^T, …, z_N^T), D̃ = diag{D, …, D}, and W = diag{W_1, …, W_N}.
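A minimal numerical sketch of one such Fisher-scoring step is given below; it assembles and solves (4.7) densely with numpy, purely for illustration. Here X denotes the stacked fixed-effects design (rows A_ij^T), and all names are ours, not the thesis's.

    import numpy as np

    def solve_mme(X, Z, W, Dt, y_tilde):
        """One step of (4.7): X is n x p, Z is n x Nq (block-diagonal),
        W = diag{W_1,...,W_N} is n x n, Dt = diag{D,...,D} is Nq x Nq."""
        Wi = np.linalg.inv(W)
        lhs = np.block([[X.T @ Wi @ X, X.T @ Wi @ Z],
                        [Z.T @ Wi @ X, Z.T @ Wi @ Z + np.linalg.inv(Dt)]])
        rhs = np.concatenate([X.T @ Wi @ y_tilde, Z.T @ Wi @ y_tilde])
        sol = np.linalg.solve(lhs, rhs)
        p = X.shape[1]
        return sol[:p], sol[p:]   # fixed effects, stacked random effects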
Solving the linear equations (4.7) is equivalent to solving the following linear mixed-effects (LME) model (see [3], [26]):

ỹ_i = A_i^T β + z_i^T b_i + ε_i,   i = 1, …, N,   (4.8)

where the ε_i are independent with normal distribution N(0, W_i), the b_i are independent with common normal distribution N(0, D), and ε_i and b_i are independent. From (4.8) we can derive

b_i | ỹ_i ~ N( Σ_i z_i W_i^{-1}(ỹ_i − A_i^T β), Σ_i ),   where Σ_i = (z_i W_i^{-1} z_i^T + D^{-1})^{-1},   (4.9)

and

ỹ_i ~ N( A_i^T β, z_i^T D z_i + W_i ).   (4.10)

In summary, approximate estimates for GLMMs can be obtained by iteratively solving the linear mixed-effects model (4.8), which can be easily handled by standard software packages such as S-Plus and SAS.

4.3 Approximate Inference with Missing Values

In the previous section, we discussed an approximate inference method for the GLMM without missing values. In our GLMM (4.1)-(4.2), however, the response y_i is non-ignorably missing and the covariate x_i is ignorably missing. In this section we consider a similar method for GLMMs with missing values. Note that missing values in the GLMM (4.1)-(4.2) correspond to missing responses in ỹ_i and missing covariates in x_i in the LME model (4.8). Here we write ỹ_i = (ỹ_mis,i, ỹ_obs,i), where ỹ_obs,i contains the observed components of ỹ_i and ỹ_mis,i contains the missing components of ỹ_i. Note that ỹ_mis,i and ỹ_obs,i are appropriate functions of the missing and observed components of y_i, respectively, so the missing response indicator for ỹ_i is the same as the missing response indicator for y_i. For LME models with non-ignorable missing responses, Ibrahim et al. (2001) derived a much more efficient Monte Carlo EM algorithm by integrating out the random effects in the E-step. Here we extend their approach to the GLMM with informative dropout and missing covariates by iteratively solving the LME model (4.8) with non-ignorable missing responses and ignorable missing covariates. Since sampling of the random effects is avoided in the E-step, the rate of convergence of the EM algorithm may be greatly improved. The E-step and M-step are described in detail as follows.

E-step: Let γ^(t) = (β^(t), α^(t), ψ^(t), D^(t)) be the current parameter estimates. The response in the LME model (4.8) can be written as

ỹ_i^(t) = A_i^T β^(t) + z_i^T b_i^(t) + B_i^(t)( y_i − g^{-1}(A_i^T β^(t) + z_i^T b_i^(t)) ),

where b_i^(t) = Σ_i^(t) z_i W_i^{(t)-1}( ỹ_i^(t) − A_i^T β^(t) ), and B_i^(t) and W_i^(t) = B_i^(t) C_i^(t) B_i^(t) are B_i and W_i evaluated at the current estimates.

As in the previous section, the contribution of individual i in the (t+1)st iteration is given by

Q_i(γ | γ^(t)) = ∫∫∫ { log f(ỹ_i | β, b_i, x_i) + log f(b_i | D) + log f(x_i | α) + log f(r_i | ỹ_i, x_i, ψ) }
  × f(b_i | ỹ_i, x_i, γ^(t)) f(ỹ_mis,i, x_mis,i | ỹ_obs,i, x_obs,i, r_i, γ^(t)) db_i dỹ_mis,i dx_mis,i
  = I_1 + I_2 + I_3 + I_4,   (4.11)

where f(ỹ_i | β, b_i, x_i) is the normal density with mean A_i^T β + z_i^T b_i and covariance W_i^(t). Equation (4.9) implies b_i | ỹ_i, x_i, γ^(t) ~ N(b_i^(t), Σ_i^(t)), where Σ_i^(t) = (z_i W_i^{(t)-1} z_i^T + D^{(t)-1})^{-1}.
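Before carrying out the integration below, note that these posterior moments are cheap to compute. A small numpy sketch (ours, illustrative) is given here, with z_i stored as an n_i × q matrix so that z_i.T @ ... matches z_i W_i^{-1} z_i^T in our notation.

    import numpy as np

    def ranef_posterior(X_i, z_i, W_i, D, beta, y_tilde_i):
        """Posterior mean and covariance of b_i in (4.9) under the working
        LME model (4.8). X_i is n_i x p (rows A_ij^T), z_i is n_i x q."""
        Wi = np.linalg.inv(W_i)
        Sigma_i = np.linalg.inv(z_i.T @ Wi @ z_i + np.linalg.inv(D))
        b_i = Sigma_i @ z_i.T @ Wi @ (y_tilde_i - X_i @ beta)
        return b_i, Sigma_i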
After some algebra, we can integrate out the random effects b_i from (4.11) and obtain the following results:

I_1 = −(1/2) log|W_i^(t)| − (1/2) ∫∫ { ∫ (ỹ_i − A_i^T β − z_i^T b_i)^T W_i^{(t)-1} (ỹ_i − A_i^T β − z_i^T b_i) f(b_i | ỹ_i, x_i, γ^(t)) db_i } f(ỹ_mis,i, x_mis,i | ỹ_obs,i, x_obs,i, r_i, γ^(t)) dỹ_mis,i dx_mis,i
  = −(1/2) log|W_i^(t)| − (1/2) tr( W_i^{(t)-1} z_i^T Σ_i^(t) z_i ) − (1/2) ∫∫ (ỹ_i − A_i^T β − z_i^T b_i^(t))^T W_i^{(t)-1} (ỹ_i − A_i^T β − z_i^T b_i^(t)) f(ỹ_mis,i, x_mis,i | ỹ_obs,i, x_obs,i, r_i, γ^(t)) dỹ_mis,i dx_mis,i,

I_2 = −(1/2) log|D| − (1/2) ∫∫ { ∫ b_i^T D^{-1} b_i f(b_i | ỹ_i, x_i, γ^(t)) db_i } f(ỹ_mis,i, x_mis,i | ỹ_obs,i, x_obs,i, r_i, γ^(t)) dỹ_mis,i dx_mis,i
  = −(1/2) log|D| − (1/2) tr( D^{-1} Σ_i^(t) ) − (1/2) ∫∫ b_i^{(t)T} D^{-1} b_i^(t) f(ỹ_mis,i, x_mis,i | ỹ_obs,i, x_obs,i, r_i, γ^(t)) dỹ_mis,i dx_mis,i.

Since f(x_i | α) and f(r_i | ỹ_i, x_i, ψ) do not depend on b_i, we have

I_3 = ∫∫ log f(x_i | α) f(ỹ_mis,i, x_mis,i | ỹ_obs,i, x_obs,i, r_i, γ^(t)) dỹ_mis,i dx_mis,i,

and

I_4 = ∫∫ log f(r_i | ỹ_i, x_i, ψ) f(ỹ_mis,i, x_mis,i | ỹ_obs,i, x_obs,i, r_i, γ^(t)) dỹ_mis,i dx_mis,i.

As we can see, the integrals I_1, I_2, I_3 and I_4 do not involve the random effects b_i. Thus we only need to generate random samples from f(ỹ_mis,i, x_mis,i | ỹ_obs,i, x_obs,i, r_i, γ^(t)). This leads to a much more efficient EM than that for the exact method.

Suppose that {(ỹ_mis,i^(1), x_mis,i^(1)), …, (ỹ_mis,i^(m_i), x_mis,i^(m_i))} is a sample of size m_i generated from f(ỹ_mis,i, x_mis,i | ỹ_obs,i, x_obs,i, r_i, γ^(t)). Let x_i^(k) = (x_obs,i, x_mis,i^(k)), ỹ_i^(k) = (ỹ_obs,i, ỹ_mis,i^(k)), and b_i^{(t,k)} = Σ_i^(t) z_i W_i^{(t)-1}( ỹ_i^(k) − A_i^T β^(t) ), k = 1, 2, …, m_i. Then Q_i(γ | γ^(t)) can be approximated as

Q_i(γ | γ^(t)) ≈ [ −(1/2) log|W_i^(t)| − (1/2) tr( W_i^{(t)-1} z_i^T Σ_i^(t) z_i ) − (1/(2m_i)) Σ_{k=1}^{m_i} (ỹ_i^(k) − A_i^T β − z_i^T b_i^{(t,k)})^T W_i^{(t)-1} (ỹ_i^(k) − A_i^T β − z_i^T b_i^{(t,k)}) ]
  + [ −(1/2) log|D| − (1/2) tr( D^{-1} Σ_i^(t) ) − (1/(2m_i)) Σ_{k=1}^{m_i} b_i^{(t,k)T} D^{-1} b_i^{(t,k)} ]
  + (1/m_i) Σ_{k=1}^{m_i} log f(x_i^(k) | α) + (1/m_i) Σ_{k=1}^{m_i} log f(r_i | ỹ_i^(k), x_i^(k), ψ).   (4.12)

Therefore, the E-step for all individuals at the (t+1)st iteration can be written as

Q(γ | γ^(t)) = Σ_{i=1}^N Q_i(γ | γ^(t)) = Q^(1)(β | γ^(t)) + Q^(2)(D | γ^(t)) + Q^(3)(α | γ^(t)) + Q^(4)(ψ | γ^(t)).   (4.13)

M-step: Since the parameters in γ are all distinct, we can maximize Q(γ | γ^(t)) by maximizing Q^(1), Q^(2), Q^(3) and Q^(4) separately, leading to the updated estimate γ^(t+1). These maximizations can be accomplished by standard complete-data optimization methods. The covariance matrix for the parameter estimates γ̂ can again be obtained using Louis's method (1982), as in Chapter 3.

4.4 Strategies for Sampling the Missing Values

To implement the E-step of the EM algorithm, we need to generate random samples for the missing responses ỹ_mis,i and missing covariates x_mis,i from the joint density function f(ỹ_mis,i, x_mis,i | ỹ_obs,i, x_obs,i, r_i, γ^(t)). As in Chapter 3, we can use the Gibbs sampler to draw the desired samples. The procedure is described as follows. Suppose that the current samples for the missing values are ỹ_mis,i^(s) and x_mis,i^(s).

Step 1. Draw a sample {ỹ_mis,i^(s+1)} for the missing responses from f(ỹ_mis,i | ỹ_obs,i, x_obs,i, x_mis,i^(s), r_i, γ^(t)).

Step 2. Draw a sample {x_mis,i^(s+1)} for the missing covariates from f(x_mis,i | ỹ_obs,i, ỹ_mis,i^(s+1), x_obs,i, r_i, γ^(t)).

After a burn-in period, the sampled values (ỹ_mis,i^(s), x_mis,i^(s)) can be treated as a true sample from the density function f(ỹ_mis,i, x_mis,i | ỹ_obs,i, x_obs,i, r_i, γ^(t)). Note that

f(ỹ_mis,i | ỹ_obs,i, x_i, r_i, γ^(t)) ∝ f(ỹ_i | x_i, γ^(t)) f(r_i | ỹ_i, x_i, γ^(t)),

where f(ỹ_i | x_i, γ^(t)) is a normal density function with mean A_i^T β^(t) and covariance z_i^T D^(t) z_i + W_i^(t), as in (4.10). If the density function f(r_i | ỹ_i, x_i, ψ^(t)) is log-concave, we can use the adaptive rejection algorithm to draw the sample in Step 1; otherwise, we may consider the general rejection sampling method.
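A minimal sketch of such a general rejection step is given below. Because f(r_i | ỹ_i, x_i, ψ^(t)) is a probability and hence bounded by 1, proposals from the normal conditional f(ỹ_mis,i | ỹ_obs,i, x_i, γ^(t)) can simply be accepted with that probability. All function and variable names here are illustrative, not taken from the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    def conditional_normal(mu, S, mis, obs, y_obs):
        """Mean and covariance of the missing block of a N(mu, S) vector
        given the observed block (standard Gaussian conditioning);
        `mis` and `obs` are index arrays."""
        K = S[np.ix_(mis, obs)] @ np.linalg.inv(S[np.ix_(obs, obs)])
        m = mu[mis] + K @ (y_obs - mu[obs])
        C = S[np.ix_(mis, mis)] - K @ S[np.ix_(mis, obs)].T
        return m, C

    def draw_y_mis(mu_i, S_i, mis, obs, y_obs, accept_prob):
        """Rejection sampler for Step 1: target density is proportional to
        the normal conditional times accept_prob(y), where accept_prob
        stands for f(r_i | ., x_i, psi) and is at most 1."""
        m, C = conditional_normal(mu_i, S_i, mis, obs, y_obs)
        while True:
            y_mis = rng.multivariate_normal(m, C)
            if rng.uniform() < accept_prob(y_mis):
                return y_mis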
Similarly, since

f(x_mis,i | ỹ_i, x_obs,i, r_i, γ^(t)) ∝ f(ỹ_i | x_i, γ^(t)) f(x_i | γ^(t)) f(r_i | ỹ_i, x_i, γ^(t)),

samples from f(x_mis,i | ỹ_i, x_obs,i, r_i, γ^(t)) in Step 2 can be obtained using the adaptive rejection algorithm or the rejection sampling method, depending on whether both f(x_i | γ^(t)) and f(r_i | ỹ_i, x_i, γ^(t)) are log-concave.

4.5 PX-EM

The EM algorithm described in Section 4.3 may still be quite slow. To improve the speed of the EM algorithm, in this section we again consider the PX-EM for the approximate method, which is obtained by applying the standard EM algorithm to an expanded model. Specifically, we introduce a q × q working parameter matrix V in the LME model (4.8) and obtain the following expanded LME model:

ỹ_i = A_i^T β* + z_i^T V b_i + ε_i,   i = 1, …, N,   (4.14)

where the ε_i are independent error terms with common distribution N(0, W_i), b_i ~ N(0, D*), and ε_i and b_i are independent. Let Θ = (β*, α*, ψ*, D*, V), where ψ* is the vector of parameters for the dropout model and α* is the vector of parameters for the covariate model. Note that model (4.14) reduces to model (4.8) when V = I_{q×q}.

E-step: Let Θ^(t) = (β^(t), α^(t), ψ^(t), D^(t), I_{q×q}) be the current parameter estimates. Then, in the E-step, the conditional expectation of the complete-data log-likelihood given the observed data for the expanded model (4.14) can be written as

Q*(Θ | Θ^(t)) = Σ_{i=1}^N Q_i*(Θ | Θ^(t))
  = Σ_{i=1}^N [ −(1/2) log|W_i^(t)| − (1/2) tr( W_i^{(t)-1} z_i^T V Σ_i^(t) V^T z_i ) − (1/(2m_i)) Σ_{k=1}^{m_i} (ỹ_i^(k) − A_i^T β* − z_i^T V b_i^{(t,k)})^T W_i^{(t)-1} (ỹ_i^(k) − A_i^T β* − z_i^T V b_i^{(t,k)}) ]
  + Σ_{i=1}^N [ −(1/2) log|D*| − (1/2) tr( D*^{-1} Σ_i^(t) ) − (1/(2m_i)) Σ_{k=1}^{m_i} b_i^{(t,k)T} D*^{-1} b_i^{(t,k)} ]
  + Σ_{i=1}^N (1/m_i) Σ_{k=1}^{m_i} log f(x_i^(k) | α*) + Σ_{i=1}^N (1/m_i) Σ_{k=1}^{m_i} log f(r_i | ỹ_i^(k), x_i^(k), ψ*)
  = Q*^(1)(β*, V | Θ^(t)) + Q*^(2)(D* | Θ^(t)) + Q*^(3)(α* | Θ^(t)) + Q*^(4)(ψ* | Θ^(t)),   (4.15)

where Σ_i^(t) = (z_i W_i^{(t)-1} z_i^T + D^{(t)-1})^{-1}, x_i^(k) = (x_obs,i, x_mis,i^(k)), ỹ_i^(k) = (ỹ_obs,i, ỹ_mis,i^(k)), and b_i^{(t,k)} = Σ_i^(t) z_i W_i^{(t)-1}( ỹ_i^(k) − A_i^T β^(t) ). The sample {(ỹ_mis,i^(1), x_mis,i^(1)), …, (ỹ_mis,i^(m_i), x_mis,i^(m_i))} of size m_i is drawn from f(ỹ_mis,i, x_mis,i | ỹ_obs,i, x_obs,i, r_i, Θ^(t)) by the Gibbs sampler along with the adaptive rejection algorithm. Again, everything in this E-step is the same as in the E-step of the standard EM in Section 4.3, except for the extra working parameter matrix V in (4.15).

M-step: In the M-step, we maximize Q*^(1), Q*^(2), Q*^(3) and Q*^(4) separately to update the parameter estimates to β*^(t+1), α*^(t+1), ψ*^(t+1), D*^(t+1) and V^(t+1). The estimates of the original parameters are then given by

β^(t+1) = β*^(t+1),   α^(t+1) = α*^(t+1),   ψ^(t+1) = ψ*^(t+1),   D^(t+1) = V^(t+1) D*^(t+1) V^{(t+1)T}.
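In code, this reduction step is one line per parameter. The sketch below (with illustrative shapes and names) folds V into the random-effects covariance and resets V to the identity for the next iteration, as in the E-step above.

    import numpy as np

    def px_reduce(beta_star, alpha_star, psi_star, D_star, V):
        # beta, alpha, psi are unchanged by the reduction; D absorbs V.
        D_new = V @ D_star @ V.T          # D^(t+1) = V^(t+1) D*^(t+1) V^(t+1)T
        V_next = np.eye(V.shape[0])       # working parameter restarts at I_qxq
        return beta_star, alpha_star, psi_star, D_new, V_next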
Chapter 5

Covariate Models and Dropout Models

5.1 Introduction

In the previous chapters, we have discussed how to estimate the parameters in GLMMs with informative dropout and missing covariates. As we noted earlier, to provide valid inference, we need to specify a dropout model for the missing response and a covariate model for the time-independent covariates, and then incorporate them into our analyses. In this chapter, we describe how to specify these models. Sections 5.2 and 5.3 introduce dropout models and covariate models, respectively. In Section 5.4, we discuss sensitivity analyses for the dropout model and the covariate model.

5.2 Dropout Models

A dropout model is a distribution for the missing response indicators r_ij. The parameters in the dropout model are treated as nuisance parameters and are usually not of inferential interest. Thus, we should try to reduce the number of nuisance parameters to make the estimation of β more efficient. Moreover, too many nuisance parameters may even make the GLMM unidentifiable. Therefore, one should be very cautious about adding extra nuisance parameters.

Since the missing response indicators r_ij are binary, a natural model for the r_ij's is a logistic regression model, as follows. We may assume

f(r_i | y_i, x_i, ψ) = Π_{j=1}^{n_i} f(r_ij | y_i, x_i, ψ),   (5.1)

i.e., an independence assumption for the r_ij's, and

logit(π_ij) = h(ψ; y_i, x_i, t_ij),   (5.2)

where π_ij = Pr(r_ij = 1) and h(·) is an often linear function of y_i, x_i and t_ij. To determine a suitable function h(·), one can consider standard model selection techniques, such as the likelihood ratio test or AIC/BIC, or consider simple and reasonable linear functions. For example, if we believe that the current missing response indicator only depends on the current or previous response values, then it may be reasonable to assume h(ψ; y_i, x_i, t_ij) = ψ_0 + ψ_1 y_ij + ψ_2 y_{i,j−1}. Note that the independence model (5.1) is simple and may not contain too many nuisance parameters, but it fails to incorporate possible correlation among the r_ij's.

To incorporate possible correlation among the r_ij's, we may adapt the model considered in Ibrahim et al. (2001):

f(r_i | y_i, x_i, t_i, ψ) = f(r_i1 | y_i, x_i, t_i, ψ_1) × f(r_i2 | r_i1, y_i, x_i, t_i, ψ_2) × ⋯ × f(r_in_i | r_i1, …, r_i(n_i−1), y_i, x_i, t_i, ψ_n_i),   (5.3)

where ψ_j is the parameter vector for the jth one-dimensional conditional distribution, ψ = (ψ_1, ψ_2, …, ψ_M), and M = max_i{n_i}. We assume that the ψ_j's are distinct. The one-dimensional conditional distributions in the product (5.3) may be chosen to be logistic regression models. Again, one can choose parsimonious one-dimensional distributions in (5.3) by standard model selection techniques. Lipsitz and Ibrahim (1996) noted that model (5.3) approximates a joint log-linear model, a natural model for binary variables.

5.3 Covariate Models

When some covariates are missing, we need a distributional assumption for the covariates. The parameters in the covariate model are also viewed as nuisance parameters. Ibrahim (1990) proposed a saturated multinomial model for categorical covariates with missing values. A drawback of his method is that the saturated model greatly increases the number of nuisance parameters, which increases the computational burden and may make the model unidentifiable. When the missing covariates are all continuous, we may assume a multivariate normal distribution for the covariates (see [15]). To allow both continuous and categorical covariates, we may write the covariate distribution as a product of one-dimensional conditional distributions, as in Ibrahim and Lipsitz (1999):

f(x_i | α) = f(x_ic | x_i1, …, x_i,c−1, α_c) × f(x_i,c−1 | x_i1, …, x_i,c−2, α_{c−1}) × ⋯ × f(x_i1 | α_1),   (5.4)

where α = (α_1, α_2, …, α_c) and α_1, …, α_c are distinct. The index c is the number of covariates that contain missing values. Note that we do not need to make distributional assumptions for the completely observed covariates, which are conditioned on and are suppressed in the expressions. Note also that this modeling scheme allows the missing covariates to be continuous, categorical, or mixed. For example, suppose that x_1 is continuous and x_2 is binary. By the above modeling strategy, we may specify a normal distribution for x_1 and a logistic regression model for x_2 conditional on x_1.
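A small sketch of the resulting log covariate density for this two-covariate example is given below; the parameterization (a1, a2, a3, a4) is illustrative.

    import numpy as np
    from scipy.stats import norm

    def covariate_loglik(a1, a2, a3, a4, x1, x2):
        """Log density under the factorization (5.4) with x1 ~ N(a1, a2^2)
        and logit Pr(x2 = 1 | x1) = a3 + a4 * x1 (x2 binary)."""
        l1 = norm.logpdf(x1, loc=a1, scale=a2)
        p = 1.0 / (1.0 + np.exp(-(a3 + a4 * x1)))
        l2 = x2 * np.log(p) + (1 - x2) * np.log(1.0 - p)
        return l1 + l2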
5.4 Sensitivity Analyses

Since both the dropout model and the covariate model are not verifiable based on the observed data, it is important to conduct sensitivity analyses. That is, we should try other plausible dropout models and covariate models, and then assess the sensitivity of the results to those different models. If there is not much difference between the results based on different models, we can draw a relatively reliable conclusion. Otherwise, the results may depend on the assumed models, and the conclusions may not be reliable.

Chapter 6

Real Data Examples

6.1 Introduction

In the previous chapters, we have described an exact method and an approximate method for GLMMs with informative dropouts and missing covariates. In this chapter, we discuss the application of these methods to two real datasets. In Section 6.2, we consider a dataset from the AIDS Clinical Trial Group (ACTG) Protocol 315 and investigate the viral load trajectory after an antiviral treatment. In Section 6.3, we consider a dataset from a parent bereavement project to study the pattern of change of parents' mental distress over time after their children's death. In Section 6.4, we discuss computational issues in the analyses of our examples.

6.2 Example 1

6.2.1 Data Description

Our research is motivated by a longitudinal study from the AIDS Clinical Trial Group (ACTG) Protocol 315 (Wu and Ding, 1999). In this study, 46 HIV-infected patients were treated with a potent antiviral drug, a combination of ritonavir, 3TC, and AZT. Plasma HIV-1 RNA (viral load) was repeatedly quantified on days 2, 7, 10, 14, 21, 28, and weeks 8, 12, and 24 after initiation of treatment. After the antiviral treatment, the patients' viral loads will decay, and the decay rate may reflect the efficacy of the treatment. Throughout the time course, due to individual characteristics, the viral load may continue to decay, fluctuate, or start to rebound (rise). The Nucleic Acid Sequence-Based Amplification assay used to measure the viral load has a detection limit of 100 RNA copies per ml of plasma. If the viral load drops below the detection limit after the treatment, the viral load cannot be measured, which may indicate that the treatment is successful. Note that patients with a viral load below the detection limit at an early stage may have viral rebound and may have a viral load dropping again after the rebound. Figure 6.1 shows the viral load trajectories for six randomly selected patients.

To investigate the treatment effect, one approach is to define the response as whether or not the viral load is below the detection limit, which is thus a binary variable. In this study, some patients drop out before the end of the study, and the dropout may be informative. Thus, the response contains non-ignorable missing values. We summarize our data in Table 6.1. As we see from Table 6.1, 8.9% of the responses are missing due to patients' dropout. Preliminary studies show that some baseline covariates, such as CD4 cell counts, tumor necrosis factor (measured by TNF levels) and total complement levels (measured by CH50), may partially explain variation in the viral load trajectory.
However, some of these covariates are also missing, since in the multi-center study some baseline covariates may not be measured at some centers. As indicated in Table 6.1, the baseline CH50 contains approximately 15.2% missing values, the TNF level contains roughly 8.7% missing values, and the CD4 cell count is completely observed. Our objectives are to model the viral load trajectory and to identify covariates that may partially predict changes of viral loads, in the presence of informative dropouts and missing covariates.

Table 6.1: Summary statistics

Variable    Sample mean   Sample standard deviation   Percentage of missing values
Response         0.1              0.3                        8.9%
CD4            175.4             87.5                          0%
CH50           242.3             49.6                        15.2%
TNF             60.0             29.0                         8.7%

# of patients: N = 46. # of observations per patient: n_i = 7 or 8.

6.2.2 Models

Let y_ij be the viral load status for patient i at the jth visit, i = 1, 2, …, N, j = 1, 2, …, n_i, where N = 46 and n_i = 7 or 8. If the viral load for patient i at the jth visit is below the detection limit, y_ij = 1; otherwise, y_ij = 0. Naturally, we consider a logistic regression model for the binary response. To take into account the inter-patient variation and the intra-patient correlation, we add a random effect b_i to the logistic regression model and obtain the following GLMM:

logit{Pr(y_ij = 1 | β, b_i)} = log[ Pr(y_ij = 1 | β, b_i) / (1 − Pr(y_ij = 1 | β, b_i)) ] = β_0 + β_1 x_i1 + β_2 x_i2 + β_3 x_i3 + β_4 t_ij + b_i,   (6.1)

where β = (β_0, β_1, β_2, β_3, β_4), x_i1 is the baseline CD4 cell count for patient i, x_i2 is the baseline CH50 for patient i, x_i3 is the baseline TNF for patient i, and t_ij is the jth measurement time for patient i. The regression coefficients β_1, β_2, β_3 and β_4 represent the fixed effects associated with the baseline CD4, CH50, TNF, and time, respectively, and b_i represents the random effect associated with each patient. We assume that b_1, …, b_N are independent and follow an identical normal distribution N(0, σ²) with σ² unknown.

In this study, the baseline CH50 and TNF contain missing values and the CD4 cell count is completely observed. For this example, it appears reasonable to assume that the missingness of the baseline covariates is MAR, i.e., the missingness may depend on the observed values but not on the missing values. To make a valid likelihood inference, we need to specify a model for the covariates which contain missing values. Since CH50 and TNF are continuous and each approximately has a normal marginal distribution, the joint distribution of CH50 and TNF (i.e., x_i2 and x_i3) may be written as a product of two one-dimensional normal distributions:

f(x_i2, x_i3 | x_i1, α) = f(x_i3 | x_i1, x_i2, α) f(x_i2 | x_i1, α),   (6.2)

where α = (α_1, …, α_7)^T, f(x_i2 | x_i1) is the density function of N(α_1 + α_2 x_i1, α_3), and f(x_i3 | x_i1, x_i2) is the density function of N(α_4 + α_5 x_i1 + α_6 x_i2, α_7).

As noted earlier, 8.9% of the responses y_ij are missing due to patients' dropout. The dropout may be due to drug intolerance or drug resistance, so we assume that the response is non-ignorably missing, or that the dropout is informative, i.e., the missingness of the responses may depend on the missing values. When the missing data (responses) are non-ignorable, we must model the missing data mechanism in order to obtain valid statistical results. To incorporate the missing data mechanism, we need to specify a distribution for the missing response indicator, defined as r_ij = 1 if y_ij is missing and r_ij = 0 if y_ij is observed. Here, we use a logistic regression model for the missing data indicator, which includes the current response y_ij, the CD4 cell count x_i1 and the time t_ij as covariates and is chosen based on the likelihood ratio test:

logit{Pr(r_ij = 1 | φ)} = log[ Pr(r_ij = 1 | φ) / (1 − Pr(r_ij = 1 | φ)) ] = φ_0 + φ_1 y_ij + φ_2 x_i1 + φ_3 t_ij,   (6.3)

where φ = (φ_0, φ_1, φ_2, φ_3)^T. Thus, in model (6.3), we link the missingness of the response to the values being missing and therefore allow the response to be non-ignorably missing. For simplicity, we focus on the following independence model:

f(r | φ) = Π_{i=1}^N Π_{j=1}^{n_i} [Pr(r_ij = 1 | φ)]^{r_ij} [1 − Pr(r_ij = 1 | φ)]^{1−r_ij}.

More complicated models, which do not assume the r_ij's to be independent, are possible as well; they contain more nuisance parameters and may be unidentifiable.
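To make the full specification concrete, the sketch below simulates one patient's responses and missing-response indicators from models (6.1) and (6.3) at hypothetical parameter values; it is a convenient way to sanity-check estimation code, not part of the original analysis.

    import numpy as np

    rng = np.random.default_rng(1)

    def expit(u):
        return 1.0 / (1.0 + np.exp(-u))

    def simulate_patient(beta, sigma2, phi, cd4, ch50, tnf, times):
        t = np.asarray(times, dtype=float)
        b = rng.normal(0.0, np.sqrt(sigma2))          # random effect in (6.1)
        eta = (beta[0] + beta[1]*cd4 + beta[2]*ch50
               + beta[3]*tnf + beta[4]*t + b)
        y = rng.binomial(1, expit(eta))               # viral-load status
        eta_r = phi[0] + phi[1]*y + phi[2]*cd4 + phi[3]*t   # dropout model (6.3)
        r = rng.binomial(1, expit(eta_r))             # 1 = response missing
        return y, r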
Here, we use a logistic regression model for the 53  missing data indicator, which includes the current response j/y, CD4 cell count Xn and time tij as covariates and is chosen based on the likelihood ratio test. logit { P r ( r = 1\4>)} = log y  where  <f>  =  |TT^^^y  (<f>o, 4>i, 4>2, 4>3) T  }=  ^ +  <t>m +  +Mj,  Thus, in model (6.3), we link the missingness of the response  to the values being missing and therefore allow the response to be non-ignorably missing. For simplicity, we focus on the following independent model TV  f(r\4>) =  ni  nil (^  i=l  P r  = W  y  { l - P'(ry = l ^ ) } ^ 1  j=l  More complicated models without assuming r^-'s being independent are possible as well, which contain more nuisance parameters and may be unidentifiable.  6.2.3  (6.3)  Analysis and Results  In this section, we analyze the A C T G protocol 315 dataset using our proposed methods. Note that before our analysis, covariates CD4, CH50 and T N F were standardized to avoid extremely small estimates. We consider the following methods to estimate the parameters in model (6.1) — (6.3) with informative dropouts and missing covariates: (i) the exact method using the Monte Carlo E M algorithm, (ii) the approximate method using the Monte Carlo E M algorithm. Table 6.2 shows maximum likelihood estimates of  (3 =  (/?o,Pi,P2,@3,  At), along  with their standard errors and p-values, based on the above two methods. Compared with the approximate method, the exact method sometimes gave somewhat larger estimates and smaller standard errors. For example, CD4 cell count is marginally significant (pvalue 0.107) based on the approximate method, but highly significant (p-value 0.025) 54  Table 6.2: Estimates for the AIDS data Methods  Parameters Po  Exact method  Estimate -4.811 SE 0.998 p- value < 0.001 Approximate Estimate -3.879 method SE 0.789 p- value < 0.001 * SE refers to the standard error.  Pi  Pi  Pz  PA  0.898 0.400 0.025 0.861 0.534 0.107  0.634 0.456 0.164 0.395 0.598 0.508  -0.745 0.538 0.166 -0.291 0.600 0.627  5.869 1.361 < 0.001 6.240 0.880 < 0.001  based on the exact method. Since the approximate method integrates out the random effects instead of sampling the random effects, it should be faster than the exact method. However, in this example, the E M convergence for the exact method is obtained in 21 iterations, while the E M convergence for the approximate method is reached in 55 iterations. This is probably because only one random effect is included in model (6.1). From Table 6.2, both methods indicate that the time covariate is highly significant, suggesting a strong relationship between the viral load trajectory and time. The estimated coefficient for the time covariate, fa, is 5.869 based on the exact method. This means that the estimated odds of patients' viral loads dropping below the detection limit is exp(5.869) = 354 times higher when time increases by one unit (6 months). The exact method suggests that CD4 cell count may be associated with patients' viral loads. The estimated coefficient for CD4, fa, is 0.898. This means that the estimated odds of patients' viral loads dropping below the detection limit, is exp(0.898) = 2.45 times higher when CD4 increases by one unit (262 cell counts). Based on the p-values, the baseline CH50 and T N F do not appear to have significant effects on patients' viral loads, using either of the two methods of estimation.  55  As discussed in previous chapters, the P X - E M algorithm should have a higher convergence rate than E M . 
As discussed in previous chapters, the PX-EM algorithm should have a higher convergence rate than the EM. This is confirmed in this example: the number of iterations to convergence for the exact method is 21 by the EM algorithm, while it is 10 by the PX-EM algorithm.

6.2.4 Sensitivity Analysis

To check the sensitivity of our results to the choice of the covariate model, we re-analyze the dataset using the following alternative covariate models, which are obtained based on a standard model selection method, the likelihood ratio test.

(i) Alternative Covariate Model 1 (CM1): Model (6.2) with α_2 = α_5 = 0. The two conditional distributions in the covariate model become x_i2 | x_i1 ~ N(α_1, α_3) and x_i3 | x_i1, x_i2 ~ N(α_4 + α_6 x_i2, α_7), i.e., x_i2 and x_i3 are independent of x_i1.

(ii) Alternative Covariate Model 2 (CM2): Model (6.2) with α_2 = α_5 = α_6 = 0. The two conditional distributions in the covariate model become x_i2 | x_i1 ~ N(α_1, α_3) and x_i3 | x_i1, x_i2 ~ N(α_4, α_7), i.e., x_i1, x_i2 and x_i3 are independent.

The estimates based on the original covariate model (6.2) and the alternative covariate models CM1 and CM2 are shown in Table 6.3. As we can see from Table 6.3, the three covariate models gave similar results. This suggests that the parameter estimates and their standard errors may not depend on the covariate models.

Table 6.3: Sensitivity analysis for covariate models

Covariate model                β_0      β_1     β_2     β_3      β_4
Original     Estimate       -4.811    0.898   0.634  -0.745    5.869
model (6.2)  SE              0.998    0.400   0.456   0.538    1.361
             p-value       < 0.001    0.025   0.164   0.166  < 0.001
CM1          Estimate       -4.731    0.906   0.594  -0.767    5.754
             SE              1.007    0.387   0.433   0.520    1.375
             p-value       < 0.001    0.019   0.170   0.140  < 0.001
CM2          Estimate       -4.893    0.921   0.676  -0.763    5.920
             SE              0.999    0.424   0.455   0.543    1.356
             p-value       < 0.001    0.030   0.137   0.160  < 0.001

* SE refers to the standard error.

Similarly, we need to assess the sensitivity of our results to the dropout models. We performed sensitivity analyses based on the following choices of dropout models:

(i) Alternative Dropout Model 1 (DM1): logit{Pr(r_ij = 1 | φ)} = φ_0 + φ_1 y_{i,j−1} + φ_2 y_ij + φ_3 x_i1 + φ_4 t_ij;

(ii) Alternative Dropout Model 2 (DM2): logit{Pr(r_ij = 1 | φ)} = φ_0 + φ_1 y_{i,j−1} + φ_2 y_ij;

(iii) Alternative Dropout Model 3 (DM3): logit{Pr(r_ij = 1 | φ)} = φ_0 + φ_1 x_i1 + φ_2 t_ij.

Note that DM3 corresponds to an ignorable missing data mechanism (i.e., MAR). Table 6.4 gives the estimates, standard errors, and p-values based on the original dropout model and the alternative dropout models DM1, DM2 and DM3, respectively. As we can see from Table 6.4, all these dropout models, except dropout model DM2, gave similar results. Dropout model DM2, which excludes CD4 and time, produced slightly smaller absolute values of estimates and standard errors, but this does not affect our conclusion. Thus, our results are not very sensitive to the choice of the dropout models.

Table 6.4: Sensitivity analysis for dropout models

Dropout model                  β_0      β_1     β_2     β_3      β_4
Original     Estimate       -4.811    0.898   0.634  -0.745    5.869
model (6.3)  SE              0.998    0.400   0.456   0.538    1.361
             p-value       < 0.001    0.025   0.164   0.166  < 0.001
DM1          Estimate       -4.911    1.047   0.611  -0.735    6.656
             SE              1.027    0.417   0.500   0.613    1.341
             p-value       < 0.001    0.012   0.221   0.230  < 0.001
DM2          Estimate       -3.761    0.772   0.428  -0.323    6.011
             SE              0.571    0.299   0.311   0.324    0.898
             p-value       < 0.001    0.010   0.168   0.318  < 0.001
DM3          Estimate       -4.867    0.949   0.630  -0.753    6.153
             SE              1.001    0.410   0.466   0.549    1.345
             p-value      < 0.0001    0.021   0.177   0.171  < 0.001

* SE refers to the standard error.
6.2.5 Conclusion

Based on our analyses, we conclude that a patient's viral load is more likely to drop below the detection limit the longer he/she stays in the study, and that a patient with a higher baseline CD4 cell count is more likely to have his/her viral load below the detection limit over the time course. In this example, the exact method converged much faster than the approximate method and gave smaller standard errors. Also, different covariate models and different dropout models lead to essentially the same results, so our conclusions may be robust.

6.3 Example 2

6.3.1 Data Description

Our second example involves a longitudinal study from a parent bereavement project, which investigates long-term mental outcomes of parents whose children died by accident, suicide, or homicide. After their children's death, the parents usually experience a high level of mental distress. In this study, the mental distress of 239 parents was calculated based on their answers to a questionnaire, at baseline (i.e., 4 to 6 weeks after their children's death), and then at 4, 12, 24 and 60 months post-death. The Global Severity Index (GSI), which is the most sensitive indicator of mental distress, is used to measure the parents' distress levels. A high GSI score indicates a high level of mental distress. If the parents' adjustment to their children's death goes well, their GSI scores will decrease over time, or at least drop below their baseline GSI scores. Figure 6.2 shows the profiles of GSI scores for six randomly selected parents from the 239 parents enrolled in this study.

To investigate how the parents' mental distress changes over time after their children's death, we treat as the response whether or not a parent's GSI score after baseline is lower than his/her baseline value. Several other relevant factors were also obtained at baseline, including the parents' gender, marital status, age, education and annual income, the cause of death, and the age and gender of the deceased child. These baseline factors may be important predictors of parents' distress and thus are viewed as covariates. Summary statistics for the response, education and income are given in Table 6.5. Since the response is binary, we consider a GLMM for the analysis. Note that some baseline covariates such as income contain missing values, and some responses are also missing, since some parents may not be in a good mood at the scheduled interview times. As indicated in Table 6.5, 4.2% of the incomes are missing and 19.7% of the responses are missing.

Table 6.5: Summary statistics

Variable     Sample mean   Sample standard deviation   Percentage of missing values
Response          0.7              0.5                       19.7%
Education        13.7              2.3                          0%
Income           4.67              1.9                        4.2%

# of parents: N = 239. # of observations per parent: n_i = 4.

Our objectives are to investigate the change of parents' distress levels over time and to determine which covariates affect the parents' mental distress. We will use the methods developed here for GLMMs with informative dropouts and missing covariates.
6.3.2 Models

To get a parsimonious model, we used standard model selection techniques, such as the likelihood ratio test, to determine which covariates should be included in our model. Since some covariates and responses contain missing values, model selection was carried out based on the complete-case method. Based on these analyses, we include income, education, and time as covariates in the model. Note that education does not have missing values, while income contains 4.2% missing values.

We denote by y_ij the response for parent i at the jth time after baseline, i = 1, 2, …, N, j = 1, 2, …, n_i, where N = 239 and n_i = 4. If the GSI for parent i at the jth time is lower than his/her baseline GSI, y_ij = 1; otherwise, y_ij = 0. We consider the following GLMM to investigate the effects of the covariates on mental distress:

logit{Pr(y_ij = 1 | β, b_i)} = log[ Pr(y_ij = 1 | β, b_i) / (1 − Pr(y_ij = 1 | β, b_i)) ] = β_0 + β_1 x_i1 + β_2 x_i2 + β_3 t_ij + b_i,   (6.4)

where β = (β_0, β_1, β_2, β_3), x_i1 is the education level for parent i, x_i2 is the annual family income for parent i, and t_ij is the jth scheduled time for parent i. The regression coefficients β_1, β_2 and β_3 represent the fixed effects associated with the parents' education level, income, and time, respectively, and b_i represents the random effect associated with each parent. Here, we assume that b_1, …, b_N are independent and have an identical normal distribution N(0, σ²) with σ² unknown.

Note that income x_i2 contains approximately 4.2% missing values. Here, we assume that the missing income is MAR. To incorporate the missing covariates, we need to specify a model for income. We assume the following covariate distribution:

x_i2 | x_i1 ~ N(α_1 + α_2 x_i1, α_3).   (6.5)

Note also that 19.7% of the responses y_ij are missing. The response (i.e., GSI status) is missing probably due to the parents' high stress, so we assume that our response is non-ignorably missing, i.e., the missingness of the responses may depend on the missing values. To incorporate the missing data mechanism in our analysis, we specify a distribution for the missing response indicator. Recall that the missing response indicator is defined as r_ij = 1 if y_ij is missing and r_ij = 0 if y_ij is observed. Here, we use a logistic model for the missing response mechanism, which includes the current response value y_ij, the immediately previous response value y_{i,j−1}, education x_i1, and time t_ij as covariates:

logit{Pr(r_ij = 1 | φ)} = log[ Pr(r_ij = 1 | φ) / (1 − Pr(r_ij = 1 | φ)) ] = φ_0 + φ_1 y_{i,j−1} + φ_2 y_ij + φ_3 x_i1 + φ_4 t_ij,   (6.6)

where φ = (φ_0, φ_1, φ_2, φ_3, φ_4)^T. We assume that the r_ij's are independent for all i and j. Note that the covariates in model (6.6) are selected based on the likelihood ratio test.
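For concreteness, the log-likelihood contribution of dropout model (6.6) for one parent can be written in a few lines, given a completed response vector (missing y's filled in by E-step draws); all names are illustrative.

    import numpy as np

    def dropout_loglik(phi, r, y, y_prev, educ, t):
        """phi = (phi0,...,phi4); r, y, y_prev, t are length-n_i arrays and
        educ is the parent's (scalar) education level."""
        eta = phi[0] + phi[1]*y_prev + phi[2]*y + phi[3]*educ + phi[4]*t
        pi = 1.0 / (1.0 + np.exp(-eta))
        return np.sum(r*np.log(pi) + (1.0 - r)*np.log(1.0 - pi))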
6.3.3 Analysis and Results

We consider the following methods to estimate the parameters in models (6.4)-(6.6): (i) the exact method using the Monte Carlo EM algorithm, and (ii) the approximate method using the Monte Carlo EM algorithm. Estimates of β, along with their standard errors and p-values, are shown in Table 6.6.

Table 6.6: Estimates for the Parent Bereavement data

Method                         β_0      β_1     β_2     β_3
Exact        Estimate       -1.882    0.182   0.083   0.345
method       SE              0.966    0.070   0.165   0.258
             p-value         0.051    0.010   0.612   0.181
Approximate  Estimate       -1.579    0.139   0.058  -0.239
method       SE              0.898    0.065   0.152   0.193
             p-value         0.079    0.033   0.704   0.216

* SE refers to the standard error.

Compared with the exact method, the approximate method resulted in smaller absolute values of estimates and smaller standard errors. In particular, for the estimate of β_3 the exact and approximate methods gave opposite signs. As discussed in previous chapters, the approximate method should have a faster convergence rate, since it avoids sampling the random effects in each EM iteration. However, for this example, the number of iterations to convergence for the approximate method is 24, larger than the number of iterations to convergence for the exact method, which is 13. The PX-EM algorithm improved the convergence speed a bit in this example: the number of iterations to convergence for the exact method based on the PX-EM is 9, smaller than 13.

Table 6.6 shows that education is significant based on both the exact method and the approximate method. The estimate for education, β_1, based on the exact method is 0.182, which suggests that the estimated odds of having a lower distress level than the baseline value is exp(0.182) = 1.2 times higher when parents' education level increases by one unit. Based on both the exact method and the approximate method, income and time do not have significant effects on the change of parents' mental distress.

6.3.4 Sensitivity Analysis

To check the sensitivity of the above results to the covariate model, we consider the following alternative covariate model:

(i) Alternative Covariate Model 1 (CM1): Model (6.5) with α_2 = 0. That is, x_i2 | x_i1 ~ N(α_1, α_3), i.e., x_i2 is independent of x_i1.

Table 6.7 shows that the results based on the original covariate model and the alternative covariate model are quite similar. This suggests that the results may be robust to the covariate models.

Table 6.7: Sensitivity analysis for covariate models

Covariate model                β_0      β_1     β_2     β_3
Original     Estimate       -1.882    0.182   0.083   0.345
model (6.5)  SE              0.966    0.070   0.165   0.258
             p-value         0.051    0.010   0.612   0.181
CM1          Estimate       -1.969    0.189   0.107   0.312
             SE              1.043    0.076   0.175   0.255
             p-value         0.059    0.013   0.542   0.222

* SE refers to the standard error.

We also check the sensitivity of our results to the dropout models. We consider the following alternative dropout models:

(i) Alternative Dropout Model 1 (DM1): logit{Pr(r_ij = 1 | φ)} = φ_0 + φ_1 y_ij + φ_2 x_i1 + φ_3 t_ij;

(ii) Alternative Dropout Model 2 (DM2): logit{Pr(r_ij = 1 | φ)} = φ_0 + φ_1 y_{i,j−1} + φ_2 y_ij;

(iii) Alternative Dropout Model 3 (DM3): logit{Pr(r_ij = 1 | φ)} = φ_0 + φ_1 x_i1 + φ_2 t_ij.

Note that DM3 suggests that the missing responses may be MAR. The comparison of estimates based on the original dropout model and the above alternative dropout models is given in Table 6.8.

Table 6.8: Sensitivity analysis for dropout models

Dropout model                  β_0      β_1     β_2     β_3
Original     Estimate       -1.882    0.182   0.083   0.345
model (6.6)  SE              0.966    0.070   0.165   0.258
             p-value         0.051    0.010   0.612   0.181
DM1          Estimate       -1.592    0.161   0.063   0.545
             SE              0.808    0.059   0.136   0.273
             p-value         0.049    0.006   0.644   0.046
DM2          Estimate       -1.958    0.193   0.087   0.239
             SE              1.019    0.074   0.169   0.254
             p-value         0.055    0.010   0.604   0.347
DM3          Estimate       -1.460    0.167   0.094   0.939
             SE              1.006    0.073   0.172   0.282
             p-value         0.146    0.022   0.585 < 0.001

* SE refers to the standard error.

As we can see from Table 6.8, whether y_ij and y_{i,j−1} are included in the dropout model affects our inference on the coefficient of the time covariate (i.e., β_3). For dropout model DM3, which excludes y_ij and y_{i,j−1} as covariates, we obtain a highly significant p-value (< 0.001) for β_3. For dropout model DM1, which excludes y_{i,j−1}, we get a marginally significant β_3. The other dropout models lead to an insignificant β_3. That is, the conclusion about β_3 is sensitive to the choice of the dropout models.
However, the estimates of the other parameters and their standard errors are quite robust to the different dropout models.

6.3.5 Conclusion

Our analyses suggest that parents with a higher education level are more likely to have a lower level of mental distress, i.e., they may adjust better to their children's death. Possibly due to the low dimension of the random effects and the small number of intra-parent measurements, the approximate method in this example did not improve the convergence rate. Unlike in Example 1, the approximate method in this example gave smaller standard errors than the exact method. For this example, the sensitivity analyses suggest that our conclusions about the time covariate may not be reliable, i.e., they may depend on the choice of the dropout models, but our conclusions about the other covariates are reliable.

6.4 Computation Issues

Starting values. For the EM algorithms in our examples, the starting values for β were obtained from logistic regressions using the completely observed cases, the starting values for α were obtained from linear regression models using the completely observed cases, and the starting values for φ were obtained from logistic regressions using the last-value-carried-forward method.

Convergence of the Gibbs sampler. We checked the convergence of the Gibbs sampler used in each Monte Carlo EM by examining time series and autocorrelation function plots. For example, Figure 6.3 shows the time series and autocorrelation function plots for generating the missing CH50 values in the first example. From Figure 6.3, we notice that the Gibbs sampler converged quickly and that the autocorrelations between successively generated samples are negligible. We also drew the time series and autocorrelation function plots for the random effect b_46 associated with patient 46 in the first example, shown in Figure 6.4. They show that the Markov chain converged quickly, although the autocorrelations become negligible only after lag 6. Time series and autocorrelation function plots for other random effects and other missing covariates show similar behavior. Therefore, for each EM iteration, we discarded the first 200 samples as the burn-in, and then took one sample from every 10 simulated samples until 500 samples were obtained.

Stopping rule. The stopping rule for the EM and PX-EM algorithms in our examples is that the relative change in the parameter values between successive iterations is smaller than a given tolerance level (e.g., 0.01). Due to the Monte Carlo errors induced by the Gibbs sampler, however, the algorithms may converge very slowly, or fail to converge, if the tolerance level is chosen extremely small.

Figure 6.1: Viral loads (log10 scale) for six randomly selected patients, plotted against day (0 to 200). The open dots are the observed values and the dashed line indicates the detection limit of viral loads. Viral loads below the detection limit are substituted with log10(50).

Figure 6.2: GSI scores for six randomly selected parents, plotted against month. The open dots are the observed values and the GSI scores at time 0 are the baseline values.

Figure 6.3: (a) Time series and (b) autocorrelation function plots for CH50.

Figure 6.4: (a) Time series and (b) autocorrelation function plots for b_46 associated with patient 46.
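The diagnostics in Figures 6.3 and 6.4, and the burn-in/thinning scheme of Section 6.4, can be reproduced from raw Gibbs output along the following lines (an illustrative sketch, not thesis code).

    import numpy as np

    def thin(chain, burn_in=200, step=10, keep=500):
        """Discard the first `burn_in` draws, then keep every `step`-th draw
        until `keep` samples are collected, as described in Section 6.4."""
        return np.asarray(chain)[burn_in::step][:keep]

    def acf(x, max_lag=25):
        """Sample autocorrelation function, as plotted in Figures 6.3-6.4."""
        x = np.asarray(x, dtype=float) - np.mean(x)
        d = np.dot(x, x)
        return np.array([np.dot(x[:x.size - k], x[k:]) / d
                         for k in range(max_lag + 1)])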
Chapter 7

Simulation Study

7.1 Introduction

To evaluate the performance of the two proposed methods, the exact method (EX) and the approximate method (AP), we conduct a simulation study in this chapter. In our simulations, we compare EX and AP in terms of the biases and mean-squared errors of their estimates. Section 7.2 gives a description of the data generation models in our simulations. In Section 7.3, we compare the two methods of estimation in four different situations, and examine the effects of the missing rate, the variance of the random effects, the sample size, and the number of intra-individual measurements. We conclude this chapter in Section 7.4.

7.2 Description of the Simulation Study

7.2.1 Models

We generate the response variable y_ij from the following GLMM:

logit{Pr(y_ij = 1 | β, b_i)} = β_0 + β_1 x_i1 + β_2 x_i2 + β_3 t_ij + b_i,   i = 1, …, N,  j = 1, …, n_i,   (7.1)

where β = (β_0, β_1, β_2, β_3) and the random effects b_i are assumed to be i.i.d. with a normal distribution N(0, σ²). The true values of β and σ² are β = (−3, 0.5, −0.3, 4) and σ² = 0.3. The number of individuals (sample size) is N = 50, and the number of intra-individual measurements is n_i = 10. The n_i time points for each individual are 2, 7, 9, 14, 20, 28, 40, 56, 70, and 84.

The covariates x_i1 and x_i2 are continuous variables. The covariate x_i1 is generated from N(1, 0.1), and the covariate x_i2 is generated from the following model:

x_i2 | x_i1 ~ N(α_1 + α_2 x_i1, α_3),   (7.2)

where α = (α_1, α_2, α_3), and the true value of α is α = (−1.5, 1, 0.2). In our simulation study, the missing covariate mechanism is assumed to be MAR. For each generated dataset, we keep x_i1 completely observed and delete, with probability 0.8, those values of x_i2 which correspond to the largest values of x_i1.

To evaluate the proposed methods, we also generate missing values for the responses y_ij as follows. The model for the missingness of the response is

logit{Pr(r_ij = 1 | φ)} = φ_0 + φ_1 y_ij,   (7.3)

where φ = (φ_0, φ_1) and the missing response indicator r_ij is a binary variable. The above model implies that the missingness of the response depends on the values being missing, and thus the response is non-ignorably missing (NIM). We generate missing responses based on model (7.3): if r_ij = 1, then y_ij is deleted; if r_ij = 0, y_ij is retained. Note that different values of φ lead to different missing rates for the responses. We discuss EX and AP under two different values of φ in Section 7.3.1.
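The data-generation scheme of this section can be sketched as follows. Note two assumptions of ours where the thesis's exact conventions are not shown: time is rescaled to [0, 1] in this sketch so that the stated true β_3 stays on a plausible scale, and the second arguments of the normal distributions (0.1 and α_3) are taken to be variances.

    import numpy as np

    rng = np.random.default_rng(2)

    def expit(u):
        return 1.0 / (1.0 + np.exp(-u))

    def generate_data(N=50, times=(2, 7, 9, 14, 20, 28, 40, 56, 70, 84),
                      beta=(-3.0, 0.5, -0.3, 4.0), sigma2=0.3,
                      alpha=(-1.5, 1.0, 0.2), phi=(-1.8, 1.0)):
        t = np.asarray(times, dtype=float) / max(times)   # assumed rescaling
        x1 = rng.normal(1.0, np.sqrt(0.1), size=N)        # fully observed
        x2 = rng.normal(alpha[0] + alpha[1]*x1, np.sqrt(alpha[2]))  # model (7.2)
        b = rng.normal(0.0, np.sqrt(sigma2), size=N)      # random effects
        eta = (beta[0] + beta[1]*x1[:, None] + beta[2]*x2[:, None]
               + beta[3]*t[None, :] + b[:, None])
        y = rng.binomial(1, expit(eta))                   # model (7.1)
        r = rng.binomial(1, expit(phi[0] + phi[1]*y))     # model (7.3): 1 = missing
        return x1, x2, y, r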
We run B = 100 replicates in each simulation, and compare E X and A P in terms of biases and mean square errors (MSEs). Here, bias and MSE are reported in terms of percent relative bias and percent relative root mean-squared error. The bias for P , the j t h component of /3, is defined as 3  bias.,- = J3j — Pj, where Pj is the estimate of Pj. The mean-squared error for @ is defined as 3  MSEj = bias + s , 2  2  where s is the simulated standard error of P . Then, the percent relative bias of J3 3  3  (%bias) is biasj/Pj  x  73  100%,  3  and the percent relative VMSE  (%  / V  is  MSE)  y/MSEj/lPj]  x 100%.  In our simulations, we consider (i) two missing rates: 2 0 % and 4 0 % , (ii) two different variances of bp. a = 0 . 3 and a = 1, (iii) two different sample sizes: N = 5 0 2  2  and N = 1 0 0 , (iv) three different numbers of intra-individual measurements: n, = 5 , n, = 1 0 and rij = 2 0 . In the above situations, we compare estimates based on E X and AP, and investigate how the missing rate, the variance of bi, the sample size, and the number of intra-individual measurements affect estimation of the parameters.  7.3 7.3.1  Simulation Results Comparison of Methods with Varying Missing Rates  To see the impact of the missing rates on estimation by E X and AP, we estimate the parameters based on two missing rates respectively. A 2 0 % missing rate and a 4 0 % missing rate are considered. In our case, if the true values of <f> are (p  =  (—1.8,1),  the missing response mechanism ( 7 . 3 ) leads to an average of 2 0 % missing rate for the response; if <f> =  (—0.8,1),  the missing response mechanism  (7.3)  leads to an average  of 4 0 % missing rate. Regarding the covariate x with missing values, we take the same 2  missing rate as the response. Table 7 . 1 shows average simulation results from 1 0 0 simulated data sets based on methods E X and AP. E X and A P yield comparable results for the two missing rates. In the 2 0 % missing rate case, compared with AP, E X gives smaller biases, but slightly larger mean-squared errors. In the 4 0 % missing rate case, E X produces slightly larger biases and mean-squared errors than AP. As we can see from Table 7 . 1 , the missing rate 74  Table 7.1: Simulation results with varying missing rates Missingness rate (%) 20  Parameter (true values)  %bias EX 6 -6 -3 2 22 44 50 3  A> = - 3  0.5 & = -0.3 ft = 4 A) = - 3 A = 0.5 ft = -0.3 & =4 P\ =  40  %VMSE AP 1 -8 -5 -2 14 39 48 -4  EX 29 115 144 12 46 164 153 17  AP 28 112 141 11 40 148 148 15  greatly affects biases and mean-squared errors of estimates from two methods, especially estimates from E X , that is, the absolute values of biases and the mean-squared errors increase with the missing rate.  7.3.2  Comparison of Methods with Different Variances  To investigate how the variability of 6* affects the estimates from two methods, we consider two sets of values of cr : a small variance a = 0.3 and a moderate variance a = 1 2  1  2  at the same missing rate 20%. We summarize the simulation results of estimation from E X and A P in Table 7.2. E X produces slightly lager mean-squared errors of estimates than A P in both cases. However, the performance of E X is still quite close to AP. We also note that the meansquared errors of estimates based on E X and A P increase as a increases. That is, the 2  variability of random effects affects estimation of E X and AP.  
Table 7.2: Simulation results with varying variances

Variance           Parameter          %bias          %√MSE
                   (true value)     EX     AP      EX     AP
Small variance     β_0 = −3          6      1      29     28
σ² = 0.3           β_1 = 0.5        −6     −8     115    112
                   β_2 = −0.3       −3     −5     144    141
                   β_3 = 4           2     −2      12     11
Moderate variance  β_0 = −3          4     −5      36     36
σ² = 1             β_1 = 0.5        −3     −7     156    150
                   β_2 = −0.3        9      7     176    169
                   β_3 = 4           2     −6      12     12

7.3.3 Comparison of Methods with Different Sample Sizes

To examine the effect of the sample size on estimation, we estimate the parameters based on EX and AP with two different sample sizes, N = 50 and N = 100, with a 20% missing rate. The average simulation results from EX and AP are shown in Table 7.3. We note that, as the sample size increases from 50 to 100, AP becomes more reliable in the sense that it provides somewhat smaller biases and mean-squared errors than EX. However, AP does not outperform EX by much. Moreover, both AP and EX yield smaller mean-squared errors for larger sample sizes.

Table 7.3: Simulation results with varying sample sizes

Number of          Parameter          %bias          %√MSE
individuals        (true value)     EX     AP      EX     AP
N = 50             β_0 = −3          6      1      29     28
                   β_1 = 0.5        −6     −8     115    112
                   β_2 = −0.3       −3     −5     144    141
                   β_3 = 4           2     −2      12     11
N = 100            β_0 = −3          9      4      21     20
                   β_1 = 0.5         8      4      82     79
                   β_2 = −0.3       11      9     101     98
                   β_3 = 4           2     −2       9      8

7.3.4 Comparison of Methods with Varying Intra-individual Measurements

To see how the number of intra-individual measurements affects our estimates, we consider the two methods of estimation under three different numbers of intra-individual measurements: n_i = 5, n_i = 10 and n_i = 20. If n_i = 5, the time points for each individual are 2, 9, 20, 40, 70; if n_i = 10, the time points are 2, 7, 9, 14, 20, 28, 40, 56, 70 and 84; if n_i = 20, the time points are 2, 4, 7, 9, 12, 14, 17, 20, 24, 28, 33, 40, 46, 53, 56, 60, 66, 70, 76 and 84. The simulation results are given in Table 7.4. Both EX and AP produce smaller mean-squared errors as the number of intra-individual measurements increases (i.e., as n_i increases). Compared with EX, AP provides slightly smaller mean-squared errors in the three cases. But the results from EX and AP are still quite close and become closer as n_i gets larger.

Table 7.4: Simulation results with varying intra-individual measurements

Number of intra-    Parameter          %bias          %√MSE
individual          (true value)     EX     AP      EX     AP
measurements
n_i = 5             β_0 = −3         11      4      40     37
                    β_1 = 0.5         9      1     150    148
                    β_2 = −0.3     −0.1     −7     200    193
                    β_3 = 4           7      2      19     16
n_i = 10            β_0 = −3          6      1      29     28
                    β_1 = 0.5        −6     −8     115    112
                    β_2 = −0.3       −3     −5     144    141
                    β_3 = 4           2     −2      12     11
n_i = 20            β_0 = −3          5      1      22     21
                    β_1 = 0.5        −4     −4      91     90
                    β_2 = −0.3        2      4     117    118
                    β_3 = 4           1     −3       8      8

7.3.5 Conclusion

Based on the simulation results in the preceding sections, we may draw the following conclusions.

• Estimates based on EX and AP get worse, in terms of biases and mean-squared errors, as the missing rate gets larger.

• The mean-squared errors of estimates from both EX and AP increase as the variability of the random effects, σ², increases.

• Increasing the sample size reduces the mean-squared errors of estimates for both EX and AP.

• Increasing the number of intra-individual measurements reduces the biases and mean-squared errors of estimates for both EX and AP.

• AP yields somewhat smaller mean-squared errors than EX and thus provides more stable results. This is probably because sampling the random effects in EX may lead to unstable Gibbs samplers and thus induce more Monte Carlo errors.

Note that the convergence rate of EX is approximately as fast as that of AP in our simulations, probably because only one random effect is included in our GLMMs.
Figure 7.1: (a) Time series and (b) autocorrelation function plots for x_2.

Figure 7.2: (a) Time series and (b) autocorrelation function plots for b_18 associated with individual 18.

Chapter 8

Conclusion and Discussion

In this thesis, we have proposed two methods to estimate the parameters of GLMMs with informative dropouts and missing covariates: an exact method and an approximate method, both implemented by Monte Carlo EM algorithms. For the exact method, the conditional expectation in the E-step of the Monte Carlo EM is evaluated by Monte Carlo approximations (Wei and Tanner, 1990), which generate random samples for the unobservable random effects, missing covariates, and missing responses. However, sampling the random effects may create computational difficulties, such as very slow convergence or non-convergence, especially when the dimension of the random effects is not small. To overcome this difficulty, the more efficient approximate method integrates out the random effects in the E-step and thus avoids sampling them in the Monte Carlo EM. Pinheiro et al. (2001) have shown that the convergence rate of the EM algorithm can be greatly improved by integrating out the random effects. To further speed up the Monte Carlo EM, we also applied a PX-EM algorithm, which accelerates the EM algorithm by introducing additional working parameters into the model. In our two examples, the PX-EM algorithm is much faster than the standard EM algorithm.

We conducted a simulation study to compare the performance of the exact method and the approximate method. In our simulations, the approximate method generally gives somewhat more stable results than the exact method, in the sense that it provides smaller mean-squared errors. As the number of intra-individual measurements or the sample size increases, the performance of the approximate method and the exact method becomes similar. Our simulations also suggest that the proportion of missing values, the variance of the random effects, the sample size, and the number of intra-individual measurements may all affect the performance of the exact and approximate methods.

The proposed methods were applied to an AIDS dataset to evaluate an antiviral treatment. The results of our analyses based on the exact and approximate methods suggest that the viral loads of HIV patients tend to decrease with time, and that patients with higher CD4 cell counts are more likely to have their viral loads suppressed below the detection limit. We also applied our methods to a dataset from a parent bereavement project to investigate the change of parents' mental distress after their children's death and to determine which factors influence parents' mental distress. We conclude that parents with a higher education level are more likely to have a better adjustment to their children's death.

Note that we have assumed parametric models for the missing covariates and the missing response indicators.
We conducted a simulation study to compare the performance of the exact method and the approximate method. In our simulations, the approximate method generally gives somewhat more stable results than the exact method, in the sense that it provides smaller mean-squared errors. As the number of intra-individual measurements or the sample size increases, the performance of the two methods becomes similar. Our simulations also suggest that the proportion of missing values, the variance of the random effects, the sample size, and the number of intra-individual measurements may all affect the performance of the exact and approximate methods.

The proposed methods were applied to an AIDS dataset to evaluate an antiviral treatment. The results of our analyses based on the exact and approximate methods suggest that the viral loads of HIV patients tend to decrease with time, and that patients with higher CD4 cell counts are more likely to have their viral loads suppressed below the detection limit. We also applied our methods to a dataset from a parent bereavement project to investigate the change in parents' mental distress after their children's death and to determine which factors influence that distress. We conclude that parents with a higher education level are more likely to adjust well to their children's death.

Note that we have assumed parametric models for the missing covariates and the missing-response indicators, so it is important to conduct sensitivity analyses of our results with respect to these parametric models. Based on our sensitivity analyses, the results of the two examples are quite robust to the choices of the covariate model and the dropout model, except for β₃ in the second example. Thus these results, apart from β₃ in the second example, may be reliable.

Finally, we give an outline of possible future work.

(i) For simplicity, in our examples and simulations we include only one random effect in the GLMMs to demonstrate our methods. In the future, we will study models with more random effects and further investigate the computational advantages and disadvantages of the proposed methods via simulations.

(ii) In our examples and simulations, we consider only mixed-effects logistic regression models with informative dropouts and missing covariates. In general, our proposed methods can be applied to other GLMMs, such as mixed-effects Poisson models, and to nonlinear mixed-effects models with informative dropouts and missing covariates.

(iii) In this thesis, we assume that the covariates with missing values are time-independent. When some covariates with missing values are time-dependent, similar methods can be developed.

(iv) We have assumed that the missingness of the responses depends on the values being missing. We could also apply our methods to shared-parameter models, in which the missingness of the responses is assumed to depend on the unobservable random effects; a sketch of the distinction follows this list.
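To make the distinction in item (iv) concrete, the two dropout mechanisms can be written side by side. The logistic forms and the parameters phi below are illustrative assumptions, not the models fitted in the thesis.

```python
import numpy as np

def p_dropout_nonignorable(y_missing, phi):
    """Dropout probability of the kind assumed in the thesis: missingness
    depends on the response value that would have been observed.
    phi = (phi0, phi1) and the logistic link are hypothetical choices."""
    return 1.0 / (1.0 + np.exp(-(phi[0] + phi[1] * y_missing)))

def p_dropout_shared_parameter(b_i, phi):
    """Shared-parameter alternative of item (iv): missingness depends on
    the individual's random effect b_i, which is 'shared' between the
    response model and the dropout model."""
    return 1.0 / (1.0 + np.exp(-(phi[0] + phi[1] * b_i)))
```

Under the shared-parameter form, the dropout model contributes a term that depends on b_i rather than on the missing responses themselves, which changes what must be sampled in the E-step.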
Bibliography

[1] Agresti, A. Categorical Data Analysis. New York: John Wiley, 1990.

[2] Booth, J. G. and Hobert, J. P. Maximizing generalized linear mixed model likelihoods with an automated Monte Carlo EM algorithm. Journal of the Royal Statistical Society, Series B, 61:265-285, 1999.

[3] Breslow, N. E. and Clayton, D. G. Approximate inference in generalized linear mixed models. Journal of the American Statistical Association, 88:9-25, 1993.

[4] Dempster, A. P., Laird, N. M., and Rubin, D. B. Maximum likelihood estimation from incomplete data via the EM algorithm (with discussion). Journal of the Royal Statistical Society, Series B, 39:1-38, 1977.

[5] Diggle, P. and Kenward, M. G. Informative drop-out in longitudinal data analysis. Applied Statistics, 43:49-93, 1994.

[6] Diggle, P. J., Liang, K. Y., and Zeger, S. L. Analysis of Longitudinal Data. Oxford: Oxford University Press, 1994.

[7] Gilks, W. R. and Wild, P. Adaptive rejection sampling for Gibbs sampling. Applied Statistics, 41:337-348, 1992.

[8] Ibrahim, J. G. Incomplete data in generalized linear models. Journal of the American Statistical Association, 85:765-769, 1990.

[9] Ibrahim, J. G., Chen, M. H., and Lipsitz, S. R. Missing responses in generalized linear mixed models when the missing data mechanism is nonignorable. Biometrika, 88:551-564, 2001.

[10] Ibrahim, J. G. and Lipsitz, S. R. Missing covariates in generalized linear models when the missing data mechanism is non-ignorable. Journal of the Royal Statistical Society, Series B, 61:173-190, 1999.

[11] Lipsitz, S. R. and Ibrahim, J. G. A conditional model for incomplete covariates in parametric regression models. Biometrika, 83:916-922, 1996.

[12] Little, R. J. A. Regression with missing X's: a review. Journal of the American Statistical Association, 87:1227-1237, 1992.

[13] Little, R. J. A. Modeling the drop-out mechanism in repeated-measures studies. Journal of the American Statistical Association, 90:1112-1121, 1995.

[14] Little, R. J. A. and Rubin, D. B. Statistical Analysis with Missing Data. New York: John Wiley, 1987.

[15] Little, R. J. A. and Schluchter, M. D. Maximum likelihood estimation for mixed continuous and categorical data with missing values. Biometrika, 72:497-512, 1985.

[16] Liu, C. and Rubin, D. B. The ECME algorithm: a simple extension of EM and ECM with faster monotone convergence. Biometrika, 81:633-648, 1994a.

[17] Liu, C., Rubin, D. B., and Wu, Y. N. Parameter expansion to accelerate EM: the PX-EM algorithm. Biometrika, 85:755-770, 1998.

[18] Louis, T. A. Finding the observed information matrix when using the EM algorithm. Journal of the Royal Statistical Society, Series B, 44:226-233, 1982.

[19] McCulloch, C. E. Maximum likelihood algorithms for generalized linear mixed models. Journal of the American Statistical Association, 92:162-170, 1997.

[20] McCulloch, C. E. and Searle, S. R. Generalized, Linear, and Mixed Models. New York: Wiley, 2001.

[21] Meng, X. L. and van Dyk, D. The EM algorithm - an old folk song sung to a fast new tune (with discussion). Journal of the Royal Statistical Society, Series B, 59:511-567, 1997.

[22] Pinheiro, J. C. and Wu, Y. Efficient algorithms for robust estimation in linear mixed-effects models using the multivariate t-distribution. Journal of Computational and Graphical Statistics, 10:249-276, 2001.

[23] Vonesh, E. F., Wang, H., Nie, L., and Majumdar, D. Conditional second-order generalized estimating equations for generalized linear and nonlinear mixed-effects models. Journal of the American Statistical Association, 97:271-283, 2002.

[24] Wedderburn, R. W. M. Quasi-likelihood functions, generalized linear models, and the Gauss-Newton method. Biometrika, 61:439-447, 1974.

[25] Wei, G. C. and Tanner, M. A. A Monte Carlo implementation of the EM algorithm and the poor man's data augmentation algorithms. Journal of the American Statistical Association, 85:699-704, 1990.

[26] Wolfinger, R. Laplace's approximation for nonlinear mixed models. Biometrika, 80:791-795, 1993.

[27] Wu, H. and Ding, A. Population HIV-1 dynamics in vivo: applicable models and inferential tools for virological data from AIDS clinical trials. Biometrics, 55:410-418, 1999.

[28] Wu, H. and Wu, L. A multiple imputation method for missing covariates in nonlinear mixed-effects models with application to HIV dynamics. Statistics in Medicine, 20:1755-1769, 2001.

[29] Wu, M. C. and Carroll, R. J. Estimation and comparison of changes in the presence of informative right censoring by modeling the censoring process. Biometrics, 44:175-188, 1988.