LOG HAZARD REGRESSION

by

Huiying Sun

Ph.D., Harbin Institute of Technology, Harbin, China, 1991

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES, Department of Statistics

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
September, 1999
© Huiying Sun, 1999

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Statistics, The University of British Columbia, Vancouver, Canada

Abstract

We propose using regression splines to estimate the two log marginal hazard functions of bivariate survival times, where each time could be censored. The method is a modified version of Kooperberg, Stone, and Truong's (JASA, 1995) hazard regression for a univariate survival time. We derive an approach to finding standard errors for estimates of the difference of the log hazard functions; the approach is inspired by Wei, Lin, and Weissfeld (JASA, 1989). We also propose procedures for testing the four hypotheses that the marginals follow an exponential or a Weibull distribution and that the two failure times have the same distribution or have proportional hazards. A simulation study is conducted to assess the performance of our estimates and test procedures; we study the effects of the censoring rates, correlation levels, and number of knots. The regression is applied to the data set of the Diabetic Retinopathy Study (Diabetic Retinopathy Study Research Group, 1981). Our analysis of the data set matches the study results of Huster, Brookmeyer, and Self (1989).

Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgments
1 Introduction
2 Hazard Regression
  2.1 Definitions and Notation
    2.1.1 Failure Time Distribution
    2.1.2 Censored Survival Data
  2.2 Log Hazard Regression
    2.2.1 Model and Estimates
    2.2.2 Conditions on $b$ and $\beta$
  2.3 Consistency and Normality of $\hat\beta$
  2.4 Proofs of the Lemmas
  2.5 Log Hazard Regression of Paired Failure Times
    2.5.1 Model and Notation
    2.5.2 Consistency and Normality of $\hat\beta$
3 Regression Space
  3.1 Cubic Splines
  3.2 The Regression Space $B$
  3.3 Numerical Implementation
  3.4 Knot Selection
  3.5 Hypothesis Testing
    3.5.1 Univariate Case
    3.5.2 Bivariate Case
4 Application to the Diabetic Retinopathy Study
  4.1 Data Description
  4.2 Data Analysis
    4.2.1 Model 1
    4.2.2 Model 2
  4.3 Other Models Used to Fit the Eye Data
    4.3.1 Exponential Model
    4.3.2 Weibull Model
    4.3.3 Cox Proportional Hazards Model
5 Simulation
  5.1 Description of the Simulation Study
  5.2 The Univariate Case
    5.2.1 Exponential Model
    5.2.2 Weibull Model
    5.2.3 The B-spline Model
  5.3 The Bivariate Case
    5.3.1 Generation of Paired Dependent Data
    5.3.2 Proportional Hazards Model
    5.3.3 Non-proportional Hazards Model
    5.3.4 Effect of the Number of Knots
  5.4 Summary of Simulations
6 Conclusion
Bibliography
Appendix
A Simulation Results for Bivariate Data with 50% Censoring

List of Tables

4.1 Test statistic and p-value for testing that the failure times of the treated eyes and the untreated eyes have the same distribution.
4.2 $\chi^2$ test statistics and p-values for testing that failure times are exponentially distributed.
4.3 z statistics and p-values for testing that failure times have Weibull distributions.
4.4 Test statistic and p-value for testing that the two hazards of the treated eyes and the untreated eyes are proportional.
4.5 Test results from the Cox proportional hazards model for the eye data.

List of Figures

4.1 Histograms of observed times of the eye data.
4.2 Scatter plots of the observations of the eye data.
4.3 The estimated density functions of the eye data for Model 1.
4.4 The estimated survival functions of the eye data for Model 1.
4.5 The estimated hazard functions of the eye data for Model 1 with pointwise 95% confidence intervals.
4.6 The estimated log ratio of the hazard functions of the eye data with pointwise 95% confidence intervals for Model 1.
4.7 The estimated density functions of the eye data for Model 2.
4.8 The estimated survival functions of the eye data for Model 2.
4.9 The estimated hazard function of the untreated eye for Model 2.
4.10 The estimated log ratio of the hazard functions of the eye data with pointwise 95% confidence intervals for Model 2.
4.11 The estimated density functions of the failure times of the untreated eyes.
4.12 The estimated survival functions of the untreated eyes.
4.13 The estimated hazard functions of the failure times of the untreated eyes.
4.14 The estimated survival curves of the treated eye.
4.15 The estimated survival curves of the untreated eye.
4.16 The estimated log hazards ratio from Model 1 and the Cox proportional hazards model with the pointwise 95% confidence intervals from the Cox proportional hazards model.
4.17 The estimated log hazards ratio from Model 1 and the Cox proportional hazards model with the pointwise 95% confidence intervals from Model 1.
4.18 The estimated log hazards ratio from Model 2 and the Cox proportional hazards model with the pointwise 95% confidence intervals from the Cox proportional hazards model.
4.19 The estimated log hazards ratio from Model 2 and the Cox proportional hazards model with the pointwise 95% confidence intervals from Model 2.
5.1 The histograms of the simulated failure times, censoring times, non-censored failure times, and observed censoring times for exponential data as in model (5.2).
5.2 Log hazard of the exponential distribution (5.2) and the pointwise quartiles and empirical mean of the 500 estimated log hazards.
5.3 The pointwise standard deviations of the estimated log hazard for the exponential model (5.2) and the pointwise quartiles and empirical mean of the 500 estimated standard errors.
5.4 The quantiles of the pointwise z values of the estimated log hazards for the exponential distribution (5.2).
5.5 Histograms and qq-plots of the normalized estimate of $\beta$ from the Quantile Knots method with three knots for the exponential model (5.2).
5.6 Empirical distribution functions of the p-values for testing that the failure times are exponentially distributed.
5.7 The histograms of the simulated failure times, censoring times, non-censored failure times, and observed censoring times for Weibull data as in model (5.3).
5.8 Log hazard of the Weibull distribution (5.3) and the pointwise quartiles and empirical mean of the estimated log hazards.
5.9 The pointwise standard deviations of the estimated log hazards for the Weibull distribution (5.3) and the pointwise quartiles and empirical mean of the 500 estimated standard errors.
5.10 The quantiles of the pointwise z values of the estimated log hazards for the Weibull model (5.3).
5.11 The histograms and the qq-plots of the standardized estimates of the parameter $\beta$ from the method of Section 5.1 with six knots for the Weibull model (5.3).
5.12 Empirical distribution functions of the p-values for testing the hypothesis that the failure time distribution is exponential.
5.13 Empirical distribution functions of the p-values for testing that the failure times have a Weibull distribution.
5.14 The histograms of the simulated failure times, censoring times, non-censored failure times, and observed censoring times for the data as in the B-spline model (5.4).
5.15 Log hazard of the B-spline data as in model (5.4) and the pointwise quartiles and empirical mean of the estimated log hazards.
5.16 The pointwise standard deviations of the estimated log hazards for the B-spline model (5.4) and the pointwise quartiles and empirical mean of the estimated standard errors.
5.17 The quantiles of the pointwise z values of the estimated log hazards for the B-spline model (5.4).
5.18 The histograms and the qq-plots of the standardized estimates of the parameter $\beta$ from the True Knots method for the B-spline model (5.4).
5.19 Empirical distribution functions of the p-values for testing the hypothesis that the failure times have a Weibull distribution.
5.20 The plots of the correlation coefficients versus $\theta$ for the data as in the proportional hazards and non-proportional hazards models.
5.21 The histograms of the marginal failure times and non-censored marginal failure times for the data generated according to the proportional hazards model in Section 5.1. The censoring rate for the "treatment" is 25%.
5.22 The true log hazards ratio of the data from the proportional hazards model defined in Section 5.1 and the pointwise quartiles and empirical mean of the 500 estimated log hazards ratios.
5.23 The standard deviations of the estimated log hazards ratio for data generated according to the proportional hazards model.
    The censoring rate is 25% for the "treatment".
5.24 The quantiles of the pointwise z values of the estimated log hazards ratio for the data generated according to the proportional hazards model. The censoring rate is 25% for the "treatment".
5.25 Empirical distribution functions of the p-values for testing the hypothesis that the two failure times of the data generated as in the proportional hazards model in Section 5.1 have proportional hazards.
5.26 The histograms of the marginal failure times and the non-censored marginal failure times for the data from the non-proportional hazards model in Section 5.1. The censoring rate for the "treatment" is 25%.
5.27 The true log hazards ratio of the data from the non-proportional hazards model defined in Section 5.1 and the pointwise quartiles and empirical mean of the estimated log hazards ratios.
5.28 The standard deviations of the estimated log hazards ratio for data generated according to the non-proportional hazards model. The censoring rate is 25% for the "treatment".
5.29 The quantiles of the pointwise z values of the estimated log hazards ratio for the data generated according to the non-proportional hazards model. The censoring rate is 25% for the "treatment".
5.30 Empirical distribution functions of the p-values for testing the hypothesis that the two failure times of the data generated as in the non-proportional hazards model in Section 5.1 have proportional hazards.
5.31 The true log hazards ratio of the data from the proportional hazards model defined in Section 5.1 and the pointwise quartiles and empirical mean of the estimated log hazards ratios by the Same Knots method.
5.32 The true log hazards ratio of the data from the proportional hazards model defined in Section 5.1 and the pointwise quartiles and empirical mean of the estimated log hazards ratios by the Different Knots method.
5.33 The true log hazards ratio of the data from the non-proportional hazards model defined in Section 5.1 and the pointwise quartiles and empirical mean of the estimated log hazards ratios by the Same Knots method.
5.34 The true log hazards ratio of the data from the non-proportional hazards model defined in Section 5.1 and the pointwise quartiles and empirical mean of the estimated log hazards ratios by the Different Knots method.
5.35 The standard deviations of the estimated log hazards ratio by the Same Knots method for data generated according to the proportional hazards model. The censoring rate is 25% for the "treatment".
5.36 The standard deviations of the estimated log hazards ratio by the Different Knots method for data generated according to the proportional hazards model. The censoring rate is 25% for the "treatment".
5.37 The standard deviations of the estimated log hazards ratio by the Same Knots method for data generated according to the non-proportional hazards model. The censoring rate is 25% for the "treatment".
5.38 The standard deviations of the estimated log hazards ratio by the Different Knots method for data generated according to the non-proportional hazards model. The censoring rate is 25% for the "treatment".
5.39 The quantiles of the pointwise z values of the estimated log hazards ratio by the Same Knots method for the data generated according to the proportional hazards model. The censoring rate is 25% for the "treatment".
5.40 The quantiles of the pointwise z values of the estimated log hazards ratio by the Different Knots method for the data generated according to the proportional hazards model. The censoring rate is 25% for the "treatment".
5.41 The quantiles of the pointwise z values of the estimated log hazards ratio by the Same Knots method for the data generated according to the non-proportional hazards model. The censoring rate is 25% for the "treatment".
5.42 The quantiles of the pointwise z values of the estimated log hazards ratio by the Different Knots method for the data generated according to the non-proportional hazards model. The censoring rate is 25% for the "treatment".
5.43 Empirical distribution functions, by the Same Knots method, of the p-values for testing the hypothesis that the two failure times of the data generated as in the proportional hazards model in Section 5.1 have proportional hazards.
5.44 Empirical distribution functions, by the Same Knots method, of the p-values for testing the hypothesis that the two failure times of the data generated as in the non-proportional hazards model in Section 5.1 have proportional hazards.
A.1 The histograms of the marginal failure times and non-censored marginal failure times for the data generated according to the proportional hazards model in Section 5.1. The censoring rate for the "treatment" is 50%.
A.2 The true log hazards ratio of the data as in the proportional hazards model defined in Section 5.1 and the pointwise quartiles and empirical mean of the 500 estimated log hazards ratios.
A.3 The standard deviations of the estimated log hazards ratio for data generated according to the proportional hazards model. The censoring rate is 50% for the "treatment".
A.4 The quantiles of the pointwise z values of the estimated log hazards ratio for the data generated according to the proportional hazards model. The censoring rate is 50% for the "treatment".
A.5 Empirical distribution functions of the p-values for testing the hypothesis that the two failure times of the data generated as in the proportional hazards model in Section 5.1 have proportional hazards.
A.6 The histograms of the marginal failure times and the non-censored marginal failure times for the data from the non-proportional hazards model in Section 5.1. The censoring rate for the "treatment" is 50%.
A.7 The true log hazards ratio of the data as in the non-proportional hazards model defined in Section 5.1 and the pointwise quartiles and empirical mean of the 500 estimated log hazards ratios.
A.8 The standard deviations of the estimated log hazards ratio for data generated according to the non-proportional hazards model. The censoring rate is 50% for the "treatment".
A.9 The quantiles of the pointwise z values of the estimated log hazards ratio for the data generated according to the non-proportional hazards model. The censoring rate is 50% for the "treatment".
A.10 Empirical distribution functions of the p-values for testing the hypothesis that the two failure times of the data generated as in the non-proportional hazards model in Section 5.1 have proportional hazards.

Acknowledgments

Foremost, I would like to express my sincere thanks to my supervisor, Nancy Heckman, for her guidance, support, and patience throughout the development of this thesis. Her numerous pieces of advice will be eternally appreciated. I would also like to thank James Zidek for his thoughtful comments on this manuscript.
Additionally, I am grateful to all faculty members and office staff for being kind and helpful throughout my Master's program. Finally, I would like to thank all graduate students for letting me share their knowledge and expertise and for making my two years at UBC such a pleasant experience.

Chapter 1

Introduction

Bivariate failure time data arise when study units are paired. Examples of paired units include eyes or ears from the same person, twins, and father and son. Since there exist natural relationships between the two subjects in one pair, the two failure times within the same pair might be correlated. Moreover, in such studies, either or both failure times might not be observed because of censoring. A well-known study involving paired failure times is the Diabetic Retinopathy Study, Diabetic Retinopathy Study Research Group (1981). The dependence, along with the censoring, greatly complicates the analysis of the data. This data set will be discussed in Chapter 4.

In univariate survival analysis, classical methods such as the Kaplan-Meier estimator and the Cox proportional hazards regression model are based on the hazard function

$$h(t) = \lim_{\Delta t \to 0} \frac{P(t \le T < t + \Delta t \mid T \ge t)}{\Delta t} = \frac{f(t)}{S(t)},$$

where $S$ is the survivor function and $f$ is the density function; see Section 2.1. If $T$ is a discrete random variable and $T$ can take on values $0 \le t_1 < t_2 < \cdots$, then the hazard function is

$$h(t_j) = P(T = t_j \mid T \ge t_j) = \frac{P(T = t_j)}{P(T \ge t_j)}, \qquad j = 1, 2, \ldots,$$

and thus the survival function can be written as

$$S(t) = \prod_{j \,:\, t_j < t} \bigl(1 - h(t_j)\bigr),$$

see Lawless (1982). An analogous formula relating $S$ and $h$ exists for $T$ continuous; see Gill (1992). But in the bivariate case, as Gill discussed, we do not have such a nice formula relating the hazard and survivor functions, since there is no canonical way to define past, present, and future at "time" $t$.

Traditionally, there are two main approaches to analyzing censored paired data. One approach is to try non-parametric estimation of the bivariate survivor function. The typical works on this approach are Dabrowska (1988) and Prentice and Cai (1992). Dabrowska extended the univariate Kaplan-Meier approach by defining a bivariate hazard and using it to estimate the joint survival function. The marginals of Dabrowska's estimate are given by the univariate Kaplan-Meier estimates. She proved the consistency of the estimates but did not give estimates for the covariance of the estimates. Prentice and Cai (1992) did not define a joint hazard. Instead, they gave a representation of the bivariate survivor function in terms of the marginal survivor functions and a covariance function of the paired failure times $T_1$ and $T_2$, involving the hazard $h_k$ of $T_k$, $k = 1, 2$. They proposed an estimate of this bivariate survivor function and proved the consistency of their estimate, but they did not give estimates of standard errors. Lin and Ying (1993) presented a simple estimator of the bivariate survival function and also estimated the covariance function of their estimates. They assumed univariate censoring, that is, that there is one censoring time and it affects both failure times.

The other approach to bivariate survival analysis is to extend the Cox proportional hazards model, Cox (1972), to paired data. Recall that the Cox proportional hazards model has hazard function

$$h(t \mid z) = h_0(t) \exp(\beta^T z),$$

where $h_0$ is the baseline hazard, $z$ is a covariate vector, and $\beta$ is unknown. The Cox proportional hazards model assumes that all observations are independent.
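Returning to the univariate relationships above, they can be made concrete numerically. The following sketch is ours, not part of the thesis, and needs only numpy; the Weibull hazard is an arbitrary illustrative choice. It recovers $S$ and $f$ from a continuous hazard by approximating $S(t) = \exp(-\int_0^t h(u)\,du)$, and evaluates the discrete-time product formula directly.

```python
import numpy as np

# Continuous case: S(t) = exp(-int_0^t h(u) du) and f(t) = h(t) S(t),
# with the integral approximated by the cumulative trapezoidal rule.
t = np.linspace(0.0, 5.0, 501)
gam, lam = 1.5, 0.8
h = gam * lam * (lam * t) ** (gam - 1.0)   # a Weibull hazard, for illustration
H = np.concatenate([[0.0], np.cumsum(0.5 * (h[1:] + h[:-1]) * np.diff(t))])
S = np.exp(-H)                             # survivor function
f = h * S                                  # density

# Discrete case: S(t) = prod over {j : t_j < t} of (1 - h(t_j)).
tj = np.array([1.0, 2.0, 3.0])             # support points of a discrete T
hj = np.array([0.2, 0.5, 1.0])             # discrete hazards h(t_j)
S_discrete = lambda s: np.prod(1.0 - hj[tj < s])
print(S_discrete(2.5))                     # (1 - 0.2)(1 - 0.5) = 0.4
```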
Holt and Prentice (1974) assume that, within a pair, hazards are proportional according to a covariate, but the baseline hazard can depend on the pair. The effect of the covariate, that is, the value of $\beta$, is the same across pairs.

Clayton (1978) and Oakes (1982) presented a fully parametric model for paired survival data. This model assumes that each marginal follows the Cox proportional hazards model with respect to some covariates; they used an additional parameter to describe the association within a pair. Huster, Brookmeyer, and Self (1989) extended the Clayton-Oakes model to allow censoring. They also discussed another approach for inducing the correlation within a pair: they obtained the parameter estimates from an independence working model, that is, univariate estimates are used in the model to get the parameter estimates, and then they estimated the variance robustly.

Wei, Lin, and Weissfeld (1989) proposed a model for multivariate failure times. They assumed each marginal distribution of the failure times follows a Cox proportional hazards model with respect to some covariates. They showed that the resulting estimators of the parameters for covariate effects are asymptotically jointly normal and gave a consistent estimate of the covariance matrix.

Lee, Wei, and Amato (1991) used a different way to extend the Cox proportional hazards model to paired data. For paired data $(T_{1i}, T_{2i})$ with treatment indicator $Z_i$, they showed the consistency and asymptotic normality of the estimated regression coefficient in the Cox proportional hazards model; in this case the $Z_i$'s might be dependent. But the corresponding variance-covariance estimate may no longer be valid due to the dependence between the members in the pairs, so they proposed a corrected variance-covariance estimate taking account of the correlation within a pair.

None of the proposed models estimates the baseline hazard function, treating it instead as a nuisance parameter.

With the development of smoothing theory and methods, many people have used splines in survival analysis, that is, to approximate density functions, survival functions, hazard functions, or baseline hazard functions, in the presence of censored data. For example, Abrahamowicz, Ciampi, and Ramsay (1992) used B-splines to estimate the density, and O'Sullivan (1988) used smoothing splines to estimate log hazard functions.

Kooperberg, Stone, and Truong (1995) used linear B-splines and their tensor products to estimate the conditional log hazard function as a function of $t$ and a covariate vector $z$. Their model contains the Cox proportional hazards model as a submodel. Since cubic splines can provide a better approximation than linear splines, Kooperberg et al. also proposed a model in which the log hazard function is estimated with cubic B-splines. Unfortunately, because of the complications in estimation and model selection, they did not consider covariates in this model. Nor did they provide standard errors for the estimated log hazard function.

In this thesis, we use cubic B-splines to estimate log hazards for univariate survival data and log hazard ratios for bivariate data. We use Kooperberg's approach to get estimates, but we provide estimates of standard errors. In Chapter 2, we define the log hazard regression model and the estimates of the log hazard functions and ratios, and prove the asymptotic properties of the estimates. In Chapter 3, we discuss the properties of cubic B-spline functions and how to choose the knots which define these functions.
In Chapter 4, we use our model to analyze the data set from the Diabetic Retinopathy Study. We present our simulation study in Chapter 5.

Chapter 2

Hazard Regression

In this chapter we define a parametric regression model for log hazard functions of censored failure times. We start with introducing definitions and notation in the first section. The regression model and the estimates are defined in Section 2.2. We show the consistency and normality of the estimates in Section 2.3 and leave the proofs of two required lemmas to Section 2.4. Finally, in Section 2.5, we define the regression model for bivariate failure times and discuss inference based on the estimates.

2.1 Definitions and Notation

2.1.1 Failure Time Distribution

All functions, unless stated otherwise, are defined over the interval $R_+ = [0, \infty)$. Let $T$ be a nonnegative continuous random variable representing the failure time of individuals in some population. Let $f$ denote the density function of $T$ and let the distribution function be

$$F(t) = P(T \le t) = \int_0^t f(u)\,du.$$

The probability of an individual surviving until time $t$ is given by the survivor function

$$S(t) = P(T > t) = \int_t^\infty f(u)\,du = 1 - F(t).$$

The hazard function is defined as

$$h(t) = \lim_{\Delta t \to 0} \frac{P(t \le T < t + \Delta t \mid T \ge t)}{\Delta t} = \frac{f(t)}{S(t)},$$

which specifies the instantaneous rate of failure at time $t$, given that the individual survives until at least time $t$.

The functions $f$, $F$, $S$, and $h$ give mathematically equivalent specifications of the distribution of $T$. Since $f(t) = -S'(t)$, by the definition of $h$,

$$h(t) = -\frac{d}{dt} \log S(t).$$

Thus $\log S(t) - \log S(0) = -\int_0^t h(u)\,du$ and, since $S(0) = 1$,

$$S(t) = \exp\Bigl(-\int_0^t h(u)\,du\Bigr) \qquad (2.1)$$

and

$$f(t) = h(t) \exp\Bigl(-\int_0^t h(u)\,du\Bigr). \qquad (2.2)$$

For our log hazard regression, we denote the log hazard function by $\alpha$, that is,

$$\alpha(t) = \log h(t). \qquad (2.3)$$

It is easy to derive expressions for $S(t)$ and $f(t)$ in terms of $\alpha(t)$.

2.1.2 Censored Survival Data

Consider $n$ individuals in some population. Let $T_i$ be the failure time of individual $i$, and $C_i$ a censoring time, that is, a time beyond which individual $i$ cannot be observed. The variable $T_i$ will be observed whenever $T_i \le C_i$. Let $X_i = \min(T_i, C_i)$ and

$$\delta_i = \begin{cases} 1 & \text{if } T_i \le C_i \\ 0 & \text{if } T_i > C_i. \end{cases}$$

That is, $\delta_i$ indicates whether the failure time $T_i$ is censored or not, and $X_i$ is equal to $T_i$ if $T_i$ is observed, and to $C_i$ if it is not. The data from observations on $n$ individuals consist of the pairs $(X_i, \delta_i)$, $i = 1, \ldots, n$. For the data we first make the following assumptions.

Assumptions

(1) The $(X_i, \delta_i)$ are independent random vectors with an identical distribution;

(2) the failure times $T_i$ and the censoring times $C_i$ are independent.

We denote the survivor functions and the density functions of the failure times $T_i$ and the censoring times $C_i$ by $S, S_c, f$, and $f_c$, respectively. By Lawless (1982), under Assumptions (1) and (2), the distribution of $(X_i, \delta_i)$ is a measure defined on $[0, \infty) \times \{0, 1\}$; we denote it by $\nu$. Then the density function of $(X_i, \delta_i)$ is

$$f(t, \delta) = f(t)^\delta S(t)^{1-\delta} f_c(t)^{1-\delta} S_c(t)^\delta. \qquad (2.4)$$

It is easy to check that $f$ is the Radon-Nikodym derivative of $\nu$ with respect to the $\sigma$-finite cross-product measure $\mu$ defined on $[0, \infty) \times \{0, 1\}$ such that the marginal of $\mu$ on $[0, \infty)$ is the Lebesgue measure and the marginal on $\{0, 1\}$ is counting measure.

2.2 Log Hazard Regression

In this section we fit a parametric model for the log hazard function $\alpha$ of the failure time $T$. First we define the model in a general form and determine the estimates that we will use. Then, in Section 2.2.2, we discuss the conditions on the model.
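As a concrete illustration of the censored-data structure $(X_i, \delta_i) = (\min(T_i, C_i),\, 1\{T_i \le C_i\})$ of Section 2.1.2, here is a minimal simulation sketch. It is ours, not from the thesis; the exponential failure and censoring distributions are arbitrary choices satisfying Assumptions (1) and (2).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
T = rng.exponential(scale=2.0, size=n)   # failure times T_i
C = rng.exponential(scale=4.0, size=n)   # censoring times C_i, independent of T_i
X = np.minimum(T, C)                     # observed times X_i = min(T_i, C_i)
delta = (T <= C).astype(int)             # delta_i = 1 if the failure is observed
print("observed censoring rate:", 1.0 - delta.mean())
```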
2.2.1 Model and Estimates

Let $G$ be a $p$-dimensional linear space of real continuous functions defined on $[0, \infty)$, and let $\{b_j, j = 1, \ldots, p\}$ be a basis of $G$. We call $G$ the regression space. The log hazard regression is a model for the log hazard function, that is,

$$\alpha(t \mid \beta) = \sum_{j=1}^p \beta_j b_j(t) = \beta^T b(t), \qquad (2.5)$$

where $\beta = (\beta_1, \ldots, \beta_p)^T$ is unknown and $b(t) = (b_1(t), \ldots, b_p(t))^T$. Given $\beta$, by Equations (2.3), (2.1), and (2.2), we have the hazard function, survivor function, and density function

$$h(t \mid \beta) = \exp\Bigl(\sum_{j=1}^p \beta_j b_j(t)\Bigr), \qquad (2.6)$$

$$S(t \mid \beta) = \exp\Bigl(-\int_0^t \exp\Bigl(\sum_{j=1}^p \beta_j b_j(u)\Bigr)\,du\Bigr), \qquad (2.7)$$

$$f(t \mid \beta) = \exp\Bigl(\sum_{j=1}^p \beta_j b_j(t)\Bigr) \exp\Bigl\{-\int_0^t \exp\Bigl(\sum_{j=1}^p \beta_j b_j(u)\Bigr)\,du\Bigr\}, \qquad (2.8)$$

respectively. Note that to ensure that (2.7) defines a non-degenerate survivor function, it is necessary and sufficient that

$$\int_0^t \exp\Bigl(\sum_{j=1}^p \beta_j b_j(u)\Bigr)\,du < \infty \qquad (2.9)$$

for some $t > 0$ and

$$\int_0^\infty \exp\Bigl(\sum_{j=1}^p \beta_j b_j(u)\Bigr)\,du = +\infty. \qquad (2.10)$$

So we need some conditions on $b$ and $\beta$. For now we assume that (2.9) and (2.10) hold, and we will postpone discussing the conditions until Section 2.2.2.

For $n$ observations $(X_i, \delta_i)$, $i = 1, \ldots, n$, we make the following assumptions, in addition to Assumptions (1) and (2).

Assumptions

(3) The log hazard function $\alpha$ of $T_i$ satisfies model (2.5);

(4) the distribution of the censoring times $C_i$ does not involve $\beta$ and the density $f_c$ of $C_i$ is bounded.

Then by (2.4), the density function of $(X_i, \delta_i)$ is

$$f(t, \delta \mid \beta) = f(t \mid \beta)^\delta S(t \mid \beta)^{1-\delta} f_c(t)^{1-\delta} S_c(t)^\delta. \qquad (2.11)$$

We denote the likelihood of the observations by $L(\beta \mid X) = \prod_{i=1}^n f(X_i, \delta_i \mid \beta)$, and we choose $\beta$ to maximize $L$. Let

$$f_q(t, \delta \mid \beta) = f(t \mid \beta)^\delta S(t \mid \beta)^{1-\delta} \qquad (2.12)$$

and $L_q(\beta \mid X) = \prod_{i=1}^n f_q(X_i, \delta_i \mid \beta)$. Then

$$L(\beta \mid X) = \prod_{i=1}^n f_q(X_i, \delta_i \mid \beta)\, f_c(X_i)^{1-\delta_i} S_c(X_i)^{\delta_i} = L_q(\beta \mid X) \prod_{i=1}^n f_c(X_i)^{1-\delta_i} S_c(X_i)^{\delta_i}.$$

Since $S_c$ and $f_c$ do not involve $\beta$, we estimate $\beta$ by $\hat\beta$, the maximizer of the function $L_q(\beta \mid X)$.

We call the partial derivative of $\log L_q$ with respect to $\beta$ the score function, denoted by $U^{(n)}(\beta \mid X)$, which is a vector with $p$ components

$$U^{(n)}(\beta \mid X)_m = \frac{\partial}{\partial \beta_m} \log L_q(\beta \mid X) = \sum_{i=1}^n \frac{\partial}{\partial \beta_m} \log f_q(X_i, \delta_i \mid \beta), \qquad (2.13)$$

$m = 1, 2, \ldots, p$. The observed information matrix, denoted by $I^{(n)}(\beta \mid X)$, is a $p \times p$ matrix with $(ml)$-th entry

$$I^{(n)}(\beta \mid X)_{ml} = -\frac{\partial^2}{\partial \beta_m \partial \beta_l} \log L_q(\beta \mid X) = -\sum_{i=1}^n \frac{\partial^2}{\partial \beta_m \partial \beta_l} \log f_q(X_i, \delta_i \mid \beta). \qquad (2.14)$$

We denote the information matrix, in the usual way, by $I(\beta)$, which is a $p \times p$ matrix with $(ml)$-th entry

$$I(\beta)_{ml} = -E\Bigl\{\frac{\partial^2}{\partial \beta_m \partial \beta_l} \log f_q(X_1, \delta_1 \mid \beta)\Bigr\}. \qquad (2.15)$$

By the definition of $f_q$, and Equations (2.8) and (2.7), we can write $f_q$, $\log L_q$, $U^{(n)}$, $I^{(n)}$, and $I(\beta)$ in terms of the basis functions $b_1, \ldots, b_p$. We have

$$f_q(t, \delta \mid \beta) = \Bigl[\exp\Bigl(\sum_{j=1}^p \beta_j b_j(t)\Bigr)\Bigr]^\delta \exp\Bigl\{-\int_0^t \exp\Bigl(\sum_{j=1}^p \beta_j b_j(u)\Bigr)\,du\Bigr\} \qquad (2.16)$$

and

$$\log f_q(t, \delta \mid \beta) = \delta \sum_{j=1}^p \beta_j b_j(t) - \int_0^t \exp\Bigl(\sum_{j=1}^p \beta_j b_j(u)\Bigr)\,du. \qquad (2.17)$$

Thus, under regularity conditions,

$$\frac{\partial}{\partial \beta_m} \log f_q(X_i, \delta_i \mid \beta) = \delta_i b_m(X_i) - \int_0^{X_i} b_m(u) \exp\Bigl(\sum_{j=1}^p \beta_j b_j(u)\Bigr)\,du$$

and

$$-\frac{\partial^2}{\partial \beta_m \partial \beta_l} \log f_q(X_i, \delta_i \mid \beta) = \int_0^{X_i} b_m(u) b_l(u) \exp\Bigl(\sum_{j=1}^p \beta_j b_j(u)\Bigr)\,du.$$

Hence

$$U^{(n)}(\beta \mid X)_m = \sum_{i=1}^n \Bigl[\delta_i b_m(X_i) - \int_0^{X_i} b_m(u) \exp\Bigl(\sum_{j=1}^p \beta_j b_j(u)\Bigr)\,du\Bigr] \qquad (2.18)$$

and

$$I^{(n)}(\beta \mid X)_{ml} = \sum_{i=1}^n \int_0^{X_i} b_m(u) b_l(u) \exp\Bigl(\sum_{j=1}^p \beta_j b_j(u)\Bigr)\,du. \qquad (2.19)$$

Since for any vector $v = (v_1, \ldots, v_p)^T \ne 0$,

$$v^T I^{(n)}(\beta \mid X) v = \sum_{i=1}^n \int_0^{X_i} \bigl(v^T b(u)\bigr)^2 \exp\Bigl(\sum_{j=1}^p \beta_j b_j(u)\Bigr)\,du > 0,$$

$I^{(n)}(\beta \mid X)$ is positive definite (i.e., the matrix of second partial derivatives of $\log L_q$ is negative definite). Therefore, if $\hat\beta$ exists, it is the unique solution of the $p$ equations

$$U^{(n)}(\beta \mid X) = 0. \qquad (2.20)$$

In the next chapter, we will show how to numerically solve (2.20) for $\hat\beta$.

Note: By the relationship between $L_q$ and $L$,

$$\frac{\partial}{\partial \beta} \log L(\beta \mid X) = \frac{\partial}{\partial \beta} \log L_q(\beta \mid X).$$
Therefore,

$$U^{(n)}(\beta \mid X)_m = \sum_{i=1}^n \frac{\partial}{\partial \beta_m} \log f(X_i, \delta_i \mid \beta)$$

and

$$I^{(n)}(\beta \mid X)_{ml} = -\sum_{i=1}^n \frac{\partial^2}{\partial \beta_m \partial \beta_l} \log f(X_i, \delta_i \mid \beta),$$

which are the usual definitions of the score function and the observed information matrix.

2.2.2 Conditions on $b$ and $\beta$

As we mentioned in the last section, we need some conditions on the basis $b$ or on $\beta$ such that (2.9) and (2.10) are true. A sufficient condition is that $b$ be bounded. We say a function $u : R_+ \to R^p$ is bounded if there exists $0 < M < \infty$ such that

$$\|u(t)\|^2 = \sum_{j=1}^p |u_j(t)|^2 \le M \quad \text{for all } t \in R_+.$$

Proposition 2.2.1 If $b$ is bounded, then (i) (2.9) and (2.10) hold for all $t$ and $\beta$, and (ii) there exists a number $M(\beta, p)$ such that

$$f(t \mid \beta) \le M(\beta, p) \exp\Bigl(-\frac{t}{M(\beta, p)}\Bigr). \qquad (2.21)$$

Proof. Since $b$ is bounded, there exists a number $0 < M < \infty$, dependent on $\beta$ and $p$, such that $|\sum_{j=1}^p \beta_j b_j(u)| \le M$. Hence

$$\exp(-M) \le \exp\Bigl(\sum_{j=1}^p \beta_j b_j(u)\Bigr) \le \exp(M). \qquad (2.22)$$

It follows that for any $t$,

$$\int_0^t \exp\Bigl(\sum_{j=1}^p \beta_j b_j(u)\Bigr)\,du \le \int_0^t \exp(M)\,du < \infty$$

and

$$\int_0^\infty \exp\Bigl(\sum_{j=1}^p \beta_j b_j(u)\Bigr)\,du \ge \int_0^\infty \frac{1}{\exp(M)}\,du = \infty.$$

So we have proved that (2.9) and (2.10) are true. Next, let $M(\beta, p) = \exp(M)$; then, by the definition of $f$ from (2.8) and from (2.22),

$$f(t \mid \beta) \le M(\beta, p) \exp\Bigl(-\frac{t}{M(\beta, p)}\Bigr),$$

which completes the proof of (ii). □

We would like the regression space $G$ to contain the log hazard of the commonly used Weibull distribution. Recall that the density function of a Weibull distribution is

$$f_w(t) = \gamma \lambda^\gamma t^{\gamma - 1} \exp\bigl(-(\lambda t)^\gamma\bigr), \qquad t > 0,\ \lambda > 0,\ \gamma \ge 1.$$

The corresponding log hazard function is

$$\alpha_w(t) = \log(\gamma \lambda^\gamma) + (\gamma - 1) \log(t).$$

Thus, if $b_{p-1} \equiv 1$ and $b_p(t) = \log(t)$, then model (2.5) includes the Weibull distributions. More generally, we consider the basis $b_1, \ldots, b_{p-1}$ bounded and $b_p(t) = \log(t + c)$, where $c \ge 0$ is a constant. We call $b_p$ a log tail and $\{b_j, j = 1, \ldots, p\}$ bounded plus log tail. This basis doesn't satisfy the assumption of Proposition 2.2.1, but we have the following result.

Proposition 2.2.2 If $b$ is bounded plus log tail and $\beta_p > -1$, then (i) (2.9) and (2.10) hold for all $t$, and (ii) there exists a number $M(\beta, p)$ such that

$$f(t \mid \beta) \le M(\beta, p)\,(t + c)^{\beta_p} \exp\Bigl(-\frac{(t + c)^{\beta_p + 1}}{M(\beta, p)}\Bigr). \qquad (2.23)$$

Proof. Since $b$ is bounded plus log tail, there exists $M$ such that $|\sum_{j=1}^{p-1} \beta_j b_j(u)| \le \log(M)$. Then

$$\frac{1}{M}(u + c)^{\beta_p} \le \exp\Bigl(\sum_{j=1}^p \beta_j b_j(u)\Bigr) \le M (u + c)^{\beta_p}. \qquad (2.24)$$

Therefore,

$$\int_0^t \exp\Bigl(\sum_{j=1}^p \beta_j b_j(u)\Bigr)\,du \le M \int_0^t (u + c)^{\beta_p}\,du < \infty$$

for any $t$, and

$$\int_0^\infty \exp\Bigl(\sum_{j=1}^p \beta_j b_j(u)\Bigr)\,du \ge \int_0^\infty \frac{(u + c)^{\beta_p}}{M}\,du = \infty,$$

as $\beta_p > -1$. Then (i) is proved. To prove (ii), using the definition of $f$ from (2.8) and using (2.24), we have

$$f(t \mid \beta) \le M (t + c)^{\beta_p} \exp\Bigl(-\frac{1}{M}\int_0^t (u + c)^{\beta_p}\,du\Bigr) = M (t + c)^{\beta_p} \exp\Bigl(-\frac{(t + c)^{\beta_p + 1} - c^{\beta_p + 1}}{M(\beta_p + 1)}\Bigr),$$

so (2.23) holds with a suitable choice of $M(\beta, p)$, and (ii) is true. □

From now on, unless stated otherwise, we assume the $n$ observations $(X_i, \delta_i)$, $i = 1, \ldots, n$, satisfy Assumptions (1)-(4) defined in the previous sections and either

Assumption (5a) $b$ is bounded and $\beta \in R^p$, or

Assumption (5b) $b$ is bounded plus log tail and $\beta \in R^{p-1} \times (-1, \infty)$.

We call $R^p$ the parameter space if (5a) holds, and $R^{p-1} \times (-1, \infty)$ the parameter space if (5b) holds.

Note 1 Since (2.9) holds for all $t > 0$, $f(t) > 0$ for all $t > 0$. So the support of $f$ is $R_+$, which will be used in the proof of Theorem 2.3.1 in the next section.

Note 2 It is easy to see that if $b$ is bounded, then for any compact set $B$ in the parameter space there exists $M$, dependent on $B$ and $p$, such that

$$f(t \mid \beta) \le M \exp\Bigl(-\frac{t}{M}\Bigr)$$

for all $\beta \in B$. If $b$ is bounded plus log tail, then for any compact set $B$ in the parameter space there exist $M$ and $\beta_p' > -1$, dependent on $B$ and $p$, such that

$$f(t \mid \beta) \le M (t + c)^{\beta_p'} \exp\Bigl(-\frac{(t + c)^{\beta_p' + 1}}{M}\Bigr)$$

for all $\beta \in B$. That is, on a compact set $B$ in the parameter space, the density function $f(t \mid \beta)$ of $T$ can be dominated by a function of $t$.
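Before turning to the asymptotic theory, a numerical transcription of the estimation just described may help. The sketch below is ours, not the heft implementation: it evaluates $\log L_q$, $U^{(n)}$, and $I^{(n)}$ of (2.17)-(2.19) by the trapezoidal rule on each $[0, X_i]$ and runs the Newton-Raphson iteration (the method Section 3.3 uses) to solve (2.20). The two-function basis $\{1, \log(t+1)\}$, a bounded-plus-log-tail space satisfying Assumption (5b), and the simulated data are placeholders.

```python
import numpy as np

def loglik_score_info(beta, X, delta, basis, m=201):
    """log L_q as in (2.17), score U as in (2.18), observed information
    I as in (2.19); each integral over [0, X_i] uses the trapezoidal rule."""
    p = len(beta)
    ll, U, I = 0.0, np.zeros(p), np.zeros((p, p))
    for xi, di in zip(X, delta):
        u = np.linspace(0.0, xi, m)
        wt = np.full(m, xi / (m - 1))
        wt[0] = wt[-1] = wt[0] / 2.0                  # trapezoid weights
        B = basis(u)                                  # (m, p) basis values
        ew = np.exp(B @ beta)                         # exp(sum_j beta_j b_j(u))
        bx = basis(np.array([xi]))[0]                 # b(X_i)
        ll += di * (bx @ beta) - wt @ ew
        U += di * bx - B.T @ (wt * ew)
        I += (B * (wt * ew)[:, None]).T @ B
    return ll, U, I

# Placeholder data and a bounded-plus-log-tail basis {1, log(t + 1)}.
rng = np.random.default_rng(1)
T, C = rng.weibull(1.5, size=100), rng.exponential(2.0, size=100)
X, delta = np.minimum(T, C), (T <= C).astype(int)
basis = lambda t: np.column_stack([np.ones_like(t), np.log(t + 1.0)])

beta = np.zeros(2)
for _ in range(25):                                   # Newton-Raphson for (2.20)
    ll, U, I = loglik_score_info(beta, X, delta, basis)
    step = np.linalg.solve(I, U)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-8:
        break
print("beta hat:", beta)
```

The inverse of the final observed information then estimates the covariance of $\hat\beta$, in the spirit of Corollary 2.3.3 below.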
2.3 Consistency and Normality of $\hat\beta$

In this section we use Theorem 5.1 in Lehmann and Casella (1998) to show the following theorem.

Theorem 2.3.1 Let $(X_1, \delta_1), \ldots, (X_n, \delta_n)$ satisfy Assumptions (1)-(5). Then, for $\beta_0$ the true parameter,

(i) $\hat\beta \to \beta_0$ in probability;

(ii) $\sqrt{n}(\hat\beta - \beta_0) \Rightarrow N\bigl(0, [I(\beta_0)]^{-1}\bigr)$;

(iii) $\frac{1}{n} I^{(n)}(\tilde\beta \mid X) \to I(\beta_0)$ in probability, provided $\tilde\beta \to \beta_0$ in probability.

We first state the conditions of their theorem.

Conditions

A. For different values of $\beta$, the distributions $P_\beta$ of the observations are distinct.

B. The distributions $P_\beta$ have common support.

C. The observations $(X_1, \delta_1), \ldots, (X_n, \delta_n)$ are i.i.d. with probability density $f(t, \delta \mid \beta)$ with respect to a $\sigma$-finite measure $\mu$.

D. There exists an open set $\omega$ of the parameter space $\Omega$ containing the true parameter point $\beta_0$ such that for almost all $(X, \delta)$, the density $f(X, \delta \mid \beta)$ admits all third derivatives $(\partial^3 / \partial \beta_j \partial \beta_k \partial \beta_l) f(X, \delta \mid \beta)$ for all $\beta \in \omega$.

E. The first and second logarithmic derivatives of $f$ satisfy the equations

$$E\Bigl\{\frac{\partial}{\partial \beta_j} \log f(X, \delta \mid \beta)\Bigr\} = 0, \qquad j = 1, \ldots, p,$$

and

$$I(\beta)_{lm} = E\Bigl\{-\frac{\partial^2}{\partial \beta_l \partial \beta_m} \log f(X, \delta \mid \beta)\Bigr\} = E\Bigl\{\frac{\partial}{\partial \beta_l} \log f(X, \delta \mid \beta) \cdot \frac{\partial}{\partial \beta_m} \log f(X, \delta \mid \beta)\Bigr\}, \quad l, m \in \{1, \ldots, p\}.$$

F. $I(\beta)_{jk}$ is finite and the matrix $I(\beta)$, the $p \times p$ matrix with $(ml)$-th entry $I(\beta)_{ml}$, is positive definite for all $\beta \in \omega$.

G. There exist functions $M_{jkl}$ such that

$$\Bigl|\frac{\partial^3}{\partial \beta_j \partial \beta_k \partial \beta_l} \log f(x, \delta \mid \beta)\Bigr| \le M_{jkl}(x)$$

for all $\beta \in \omega$, where $E_{\beta_0}\bigl(M_{jkl}(X)\bigr) < \infty$ for all $j, k, l \in \{1, \ldots, p\}$.

Theorem 2.3.2 (Theorem 5.1 in Lehmann, E.L. and Casella, G., 1998) Let $(X_1, \delta_1), \ldots, (X_n, \delta_n)$ be i.i.d., each with a density $f(t, \delta \mid \beta)$ which satisfies the above conditions. Then, with probability tending to 1 as $n \to \infty$, there exists a solution $\hat\beta$ of the likelihood equations

$$\sum_{i=1}^n \frac{\partial \log f(X_i, \delta_i \mid \beta)}{\partial \beta_j} = 0, \qquad j = 1, \ldots, p,$$

such that

(a) $\hat\beta_j$ is consistent for estimating $\beta_j$,

(b) $\sqrt{n}(\hat\beta - \beta)$ is asymptotically normal with mean zero and covariance matrix $[I(\beta)]^{-1}$, and

(c) $\hat\beta_j$ is asymptotically efficient in the sense that $\sqrt{n}(\hat\beta_j - \beta_j)$ converges in distribution to a normal random variable with mean zero and variance $\bigl([I(\beta)]^{-1}\bigr)_{jj}$.

Proof of Theorem 2.3.1. Suppose $\hat\beta$ is the solution of the equations

$$0 = \sum_{i=1}^n \frac{\partial \log f_q(X_i, \delta_i \mid \beta)}{\partial \beta} = \sum_{i=1}^n \frac{\partial \log f(X_i, \delta_i \mid \beta)}{\partial \beta}.$$

To show (i) and (ii), we need only check the conditions of Theorem 2.3.2. First, by Assumptions 1 and 2 and the definition of $f(\cdot, \cdot \mid \beta)$ from (2.11), $(X_1, \delta_1), \ldots, (X_n, \delta_n)$ and $f(\cdot, \cdot \mid \beta)$ satisfy condition C. Second, since $\{b_1, \ldots, b_p\}$ is a basis, from (2.16), the definition of $f_q$ in terms of the basis functions, $f_q$ is distinct for different values of $\beta$. From Note 1, after Propositions 2.2.1 and 2.2.2, the support of $f_q$ is $[0, \infty)$. By the relationship between $f(\cdot, \cdot \mid \beta)$ and $f_q$ from (2.11) and (2.12), conditions A and B hold. Then, from (2.11) and (2.12),

$$\frac{\partial \log f(X_1, \delta_1 \mid \beta)}{\partial \beta_l} = \frac{\partial \log f_q(X_1, \delta_1 \mid \beta)}{\partial \beta_l}, \qquad l = 1, \ldots, p, \qquad (2.25)$$

since $S_c$ and $f_c$ do not involve $\beta$. Hence condition D follows from a direct calculation of the derivatives of $f_q$ from the definition (2.16). Next, for condition F: that $I(\beta)_{jk}$ is finite follows from Lemma 2.3.4 below, and, as shown in the calculations after (2.19), $I(\beta)$ is positive definite. Finally, conditions E and G follow from Lemmas 2.3.5 and 2.3.4 below, respectively. Therefore, (a) and (b) in Theorem 2.3.2 yield (i) and (ii), respectively.

To prove (iii), we need to use (i) and the bounds in Lemma 2.3.4 below. The proof is as in Theorem 3.10 of Lehmann and Casella (1998). Thus we have finished proving Theorem 2.3.1. □

Using (ii) and (iii) in the above theorem and Slutsky's theorem, we have the following corollary.
Then [n&\f3\X)}^{f) - (3) => N{QJ), where I is the p x p identity matrix. The following two lemmas are used in the proof of Theorem 2.3.1. Lemma 2.3.4 Suppose that b satisfies Assumption (5a) (or Assumption (5b)) and B is a compact set in RP (or in Rp~l x (—1, 00)). Then there exists 0 < M < 00, dependent on B andp (or on B,p, and c), such that for any (3 G B, sk G {1, • • • k = 1, • • •, m; m = 0,1,2,3, \d™ log fq(Xu 8^(3) < MK(XX), (2.26) where K{X() 1 + Xi if b is bounded 1 + I log(A^! + c)| + (Xi + c)^ p + 4 if b is bounded plus log tail. Furthermore, E{K(Xi)} < 00. (2.27) Lemma 2.3.5 Let (30 be the true values of the parameters in model (2.5). Then d l o g / ^ , ^ ) , i) E ii) E d(3 l/30 0; 5 log / , ( * ! , 8i\f3)\ dlogfq(Xx,8i\(3) OP, 00 OP 0o = -E d2logfq(Xi,8i\(3) QPmdPi 0o l We will give the proofs of the above lemmas in the next section. 2.4 Proofs of the Lemmas Now we prove Lemmas 2.3.4 and 2.3.5. 18 .9" l O g 1 ) 3 ) d(3Sl---d/3Sm = < For convenience, we first calculate the partial derivatives of log fq with respect to (3 in terms of the basis functions. By the definition of log/ g from (2.17), for sk G {l,---,p};k = l,---,m, $i YPibjixi) ~ j e x P ( Y/3jbj{u))du, m = 0; j=i J o 3=1 fXi ,P . 5ibSl{X]) - I bSl(u)exv{YPjbj(u))du> rn = 1; (2. J o j=i - / bSl(u) • • -bSm(u)exp [YPjbj(u))du^ m = 2,3. . J o j=i Proof of Lemma 2.3.4 v If b is bounded, since B is a compact set, Y, \Pjbj{u)\ 1S bounded on B, and so is 3=1 V exp (YJ Pjbj(u)j. Hence, by (2.28), we can find 0 < M < oo, dependent on B and p, 3 = 1 such that (2.26) holds. If b is bounded plus log tail, then &i, • • •, fep—i are bounded. Hence there exists 1 < M i < oo such that for (3 G B { p-i p-i EftMu) Y \ b j ( u ) \ \ < M i , 3=1 3=1 which implies (2.29) (2.30) exp ( Y hb3 («)) < e M l (u + c)p". 3=1 Then for m = 0, by (2.28) and (2.30), (2.26) is true. To prove (2.26) for m = 1, 2,3, it suffices to show that there exists M < oo such that rXi [ 1 I M " ) • + tf'du < M i l + (xi + c ) ^ 4 ) , (2-31) J 0 Sk G {1, • • • ,p}; k = 1, • • • ,m. This clearly holds for sk G {1, • • • ,p — 1}, since bj is bounded, j = 1, • • • ,p — 1. But the log tail \og(t + c) is unbounded, which makes the proof of (2.31) more complicated. So we need to investigate the properties of the integral fXl \ \og(u + c)\m(u + c)^du, Jo m = 1,2,3. Noting /3p > — 1 and c > 0, we have the following easily proven facts. 19 Fact 1 for any £ > 0 and any compact set B C (—l,oo), there exists M x < oo such that sup | log(u + c)I (u + c) 2 < M L ; ue(0,l],/3pGB Fact 2 for any compact set B 6 (—1, oo), sup / (u + c) 2 du < 00; /3peB Jo Fact 3 log(u + c) < it + c for u G [1, 00), Fact 4 for any /?' > 0, B" > - 1 , and M' > 0, J (u + c)^" exp { - (u +c)0'/M'}du < 00. We will use Facts 1 - 3 to prove that i) there exists M 2 < 00 such that for (3 G B, f1 I log(u + c)\m{u + cfpdu < M 2 , Jo m = 1,2, 3; ii) there exists M 3 < 00 such that for X\ > 1 and (3 E B, j*1 I log(« + c)|m(u + cfrdu < M3{X1 + c ) ^ + 4 , m — 1, 2, 3. If i) and ii) hold, then, when X\ < 1, /o M 2 , |log(w + c ) | M (M + c ) ^ ^ ./o < [ \\og(u + c)\m(u + c)^du < Jo 10 and when Xi > 1, / A l |log(u + c)r(u + c)^du 1/ 0 = f11 log(u + c)\m(u + c)A>d« + /"*' I Iog(w + c)\m{u + c)*1*du J 0 */1 < M2 + M 3 ( X ! + c ) ^+ 4 <M(l + (Xi + c)^ + 4 ) 20 for M = max{M 2 ,M 3 }. So (2.31) would hold, which implies that (2.26) would be true. 
Proof of i): Since $B$ is compact, using Facts 1 and 2 we can find $M_2', M_2'' < \infty$ such that for $\beta \in B$,

$$|\log(u + c)|^m (u + c)^\varepsilon \le M_2', \qquad m = 1, 2, 3,\ u \in (0, 1),$$

and $\int_0^1 (u + c)^{\beta_p - \varepsilon}\,du \le M_2''$. Then for $\beta \in B$,

$$\int_0^1 |\log(u + c)|^m (u + c)^{\beta_p}\,du \le \int_0^1 |\log(u + c)|^m (u + c)^\varepsilon (u + c)^{\beta_p - \varepsilon}\,du \le M_2' \int_0^1 (u + c)^{\beta_p - \varepsilon}\,du \le M_2' M_2'' = M_2 < \infty,$$

$m = 1, 2, 3$.

Proof of ii): By Fact 3, $|\log(u + c)|^m \le (u + c)^3$ for $u \ge 1$ and $m \le 3$, so

$$\int_1^{X_1} |\log(u + c)|^m (u + c)^{\beta_p}\,du \le \int_1^{X_1} (u + c)^3 (u + c)^{\beta_p}\,du = \frac{(X_1 + c)^{\beta_p + 4} - (1 + c)^{\beta_p + 4}}{\beta_p + 4} \le M_3 (X_1 + c)^{\beta_p + 4},$$

$m = 1, 2, 3$, for some $M_3 > 0$.

Now that we have proven that (2.26) is true, we show that $E\{K(X_1)\} < \infty$. First suppose that $b$ is bounded. Then $E\{K(X_1)\} = E\{1 + X_1\} \le E\{1 + T_1\}$, since, by definition, $X_1 \le T_1$. By Proposition 2.4.1 below, $E\{T_1\} < \infty$.

Now consider $b$ bounded plus log tail. Write $K(X_1) = K^*(X_1) + K^{**}(X_1)$, where

$$K^*(X_1) = 1 + (X_1 + c)^{\beta_p + 4} + |\log(X_1 + c)|\, I\{X_1 + c \ge 1\}$$

and

$$K^{**}(X_1) = |\log(X_1 + c)|\, I\{X_1 + c < 1\}.$$

Since $K^*$ is a non-decreasing function, $E\{K^*(X_1)\} \le E\{K^*(T_1)\} \le E\{K(T_1)\}$, which is finite by Proposition 2.4.2 below. To show $E(K^{**})$ is bounded, note that, by Assumption 4, $f_c$ is bounded, and by (ii) in Proposition 2.2.2, $f(t \mid \beta)$ is bounded on $[0, 1]$. Thus, the marginal density of $X_1$,

$$f_{X_1}(x) = f(x \mid \beta) S_c(x) + S(x \mid \beta) f_c(x),$$

is bounded. Then we have

$$E\{K^{**}(X_1)\} \le \int_0^1 |\log(x + c)|\, f_{X_1}(x)\,dx < \infty,$$

since $\int_0^1 |\log(x + c)|\,dx < \infty$. Therefore,

$$E\{K(X_1)\} = E\{K^*(X_1)\} + E\{K^{**}(X_1)\} < \infty,$$

which completes the proof of Lemma 2.3.4. □

The following propositions were used in the proof of Lemma 2.3.4.

Proposition 2.4.1 If $b$ is bounded, then for any $a > 0$, $E(T^a) < \infty$.

Proof. Since $b$ is bounded, by (ii) in Proposition 2.2.1 there exists $0 < M(\beta, p) < \infty$ such that

$$f(t \mid \beta) \le M(\beta, p) \exp\Bigl(-\frac{t}{M(\beta, p)}\Bigr).$$

From this and Fact 4 in the proof of Lemma 2.3.4, we have

$$E(T^a) = \int_0^\infty t^a f(t \mid \beta)\,dt \le \int_0^\infty M(\beta, p)\, t^a \exp\Bigl(-\frac{t}{M(\beta, p)}\Bigr)\,dt < \infty. \qquad \Box$$

Proposition 2.4.2 If $b$ is bounded plus log tail and $\beta_p > -1$, then (i) for any $a > 0$, $E(T^a) < \infty$, and (ii) $E(|\log(T + c)|) < \infty$.

Proof. Since $b_1, \ldots, b_{p-1}$ are bounded, by (ii) in Proposition 2.2.2 there exists $0 < M(\beta, p) < \infty$ such that (2.23) holds. Let

$$g(t, \beta) = \exp\Bigl(-\frac{(t + c)^{\beta_p + 1}}{M(\beta, p)}\Bigr).$$

Then from (2.23) and Fact 4 in the proof of Lemma 2.3.4,

$$E(T^a) = \int_0^\infty t^a f(t \mid \beta)\,dt \le M(\beta, p) \int_0^\infty t^a (t + c)^{\beta_p} g(t, \beta)\,dt < \infty,$$

as $\beta_p > -1$. Thus (i) is proved. To prove (ii), using (2.23) and Facts 1-4 in the proof of Lemma 2.3.4, we have

$$E(|\log(T + c)|) = \int_0^\infty |\log(t + c)|\, f(t \mid \beta)\,dt \le M(\beta, p) \int_0^\infty |\log(t + c)| (t + c)^{\beta_p} g(t, \beta)\,dt$$
$$\le M(\beta, p) \Bigl\{\int_0^1 |\log(t + c)| (t + c)^\varepsilon (t + c)^{\beta_p - \varepsilon} g(t, \beta)\,dt + \int_1^\infty (t + c)^{\beta_p + 1} g(t, \beta)\,dt\Bigr\}$$
$$\le M(\beta, p) \Bigl\{M_1' \int_0^1 (t + c)^{\beta_p - \varepsilon} g(t, \beta)\,dt + \int_1^\infty (t + c)^{\beta_p + 1} g(t, \beta)\,dt\Bigr\} < \infty,$$

which finishes the proof of (ii). □

Remark: The following proposition will be used in the proof of Lemma 2.3.5 and in Section 2.5. The proof is omitted since it uses the same procedures and arguments as in the proofs of Propositions 2.4.1 and 2.4.2.

Proposition 2.4.3 Suppose that $b$ satisfies the assumptions of Lemma 2.3.4. Then $E\{[K(X_1)]^2\} < \infty$.

Proof of Lemma 2.3.5. To prove i), write

$$E\Bigl\{\frac{\partial \log f(X_1, \delta_1 \mid \beta)}{\partial \beta}\Big|_{\beta_0}\Bigr\} = \int \frac{\partial f(x, \delta \mid \beta)}{\partial \beta}\Big|_{\beta_0}\,d\mu = \frac{\partial}{\partial \beta} \int f(x, \delta \mid \beta)\,d\mu\,\Big|_{\beta_0} = \frac{\partial E(1)}{\partial \beta} = 0.$$

To justify the interchange of differentiation and integration, we must prove that there exists a neighborhood $B$ of $\beta_0$ such that, uniformly on $B$, the partial derivatives of $f(X_1, \delta_1 \mid \beta)$ with respect to $\beta$ can be dominated by an integrable function of $(X_1, \delta_1)$. Then, by the Dominated Convergence Theorem, we can exchange the order of derivative and integral. To do this, we choose $a > 0$ such that $B = \{\beta : \|\beta - \beta_0\| \le a\}$ is compact in the parameter space.
By Lemma 2.3.4, there exists $M$ such that for $\beta \in B$,

$$\Bigl|\frac{\partial \log f_q(X_1, \delta_1 \mid \beta)}{\partial \beta_l}\Bigr| \le M K(X_1), \qquad (2.32)$$

$l = 1, \ldots, p$, and $E(K(X_1)) < \infty$. Hence, from (2.25) and (2.32), for $\beta \in B$,

$$\Bigl|\frac{\partial f(X_1, \delta_1 \mid \beta)}{\partial \beta_l}\Bigr| = f(X_1, \delta_1 \mid \beta)\, \Bigl|\frac{\partial \log f(X_1, \delta_1 \mid \beta)}{\partial \beta_l}\Bigr| \le M K(X_1) f(X_1, \delta_1 \mid \beta).$$

Thus it suffices to find an integrable function of $(X_1, \delta)$ to dominate $K(X_1) f(X_1, \delta \mid \beta)$. By the definition of $f(\cdot, \cdot \mid \beta)$ from (2.11),

$$K(t) f(t, 1 \mid \beta) = K(t) f(t \mid \beta) S_c(t) \le K(t) f(t \mid \beta) \le K(t) f^*(t),$$

where $f^*$ is the bound on $f$ given in Note 2 after Proposition 2.2.2. It is easy to show that $K f^*$ is integrable. To bound $K(X_1) f(X_1, 0 \mid \beta) = K(X_1) S(X_1 \mid \beta) f_c(X_1)$, it suffices to bound $K(X_1) S(X_1 \mid \beta)$, since, by Assumption 4, $f_c$ is bounded. By the definition of $S(t \mid \beta)$ from (2.7) and by the proofs of Propositions 2.2.1 and 2.2.2, there exists $M > 0$ such that, for $\beta \in B$,

$$S(t \mid \beta) \le \exp\Bigl(-\frac{t}{M}\Bigr) = S^*(t)$$

if $b$ is bounded, and

$$S(t \mid \beta) \le M \exp\Bigl(-\frac{(t + c)^{\beta_p' + 1}}{M}\Bigr) = S^*(t)$$

if $b$ is bounded plus log tail, where $\beta_p'$ is the lower bound of $\beta_p$ for $\beta \in B$. One easily shows that $K S^*$ is integrable.

To prove ii), provided integration and differentiation can be interchanged, we can write

$$0 = \frac{\partial}{\partial \beta_m} E\Bigl\{\frac{\partial \log f(X_1, \delta_1 \mid \beta)}{\partial \beta_l}\Bigr\} = \frac{\partial}{\partial \beta_m} \int \frac{\partial \log f(x, \delta \mid \beta)}{\partial \beta_l}\, f(x, \delta \mid \beta)\,d\mu$$
$$= \int \Bigl\{\frac{\partial \log f(x, \delta \mid \beta)}{\partial \beta_m}\, \frac{\partial \log f(x, \delta \mid \beta)}{\partial \beta_l} + \frac{\partial^2 \log f(x, \delta \mid \beta)}{\partial \beta_m \partial \beta_l}\Bigr\} f(x, \delta \mid \beta)\,d\mu$$
$$= E\Bigl\{\frac{\partial \log f_q(X_1, \delta_1 \mid \beta)}{\partial \beta_m}\, \frac{\partial \log f_q(X_1, \delta_1 \mid \beta)}{\partial \beta_l}\Bigr\} + E\Bigl\{\frac{\partial^2 \log f_q(X_1, \delta_1 \mid \beta)}{\partial \beta_m \partial \beta_l}\Bigr\}.$$

Therefore,

$$E\Bigl\{\frac{\partial \log f_q(X_1, \delta_1 \mid \beta)}{\partial \beta_m}\, \frac{\partial \log f_q(X_1, \delta_1 \mid \beta)}{\partial \beta_l}\Bigr\} = -E\Bigl\{\frac{\partial^2 \log f_q(X_1, \delta_1 \mid \beta)}{\partial \beta_m \partial \beta_l}\Bigr\}.$$

To exchange integration and differentiation, we need to bound $\partial^2 \log f(X_1, \delta_1 \mid \beta) / \partial \beta_m \partial \beta_l$. By i) and Proposition 2.4.3, this can be done with an argument similar to that in the proof of i). We omit the calculations here. □

2.5 Log Hazard Regression of Paired Failure Times

Now we consider individuals with paired failure times. We first introduce the notation and define the log hazard regression model and estimates for paired failure data. Then, in Section 2.5.2, we show the consistency and normality of our estimates and give a consistent estimate of the covariance matrix of the estimates.

2.5.1 Model and Notation

Consider $n$ subjects with paired failure times for each. For $k = 1, 2$ and $i = 1, \ldots, n$, let $T_{ki}$ and $C_{ki}$ be the $k$th failure time and censoring time of the $i$th individual, respectively. Let $X_{ki} = \min(T_{ki}, C_{ki})$ and

$$\delta_{ki} = \begin{cases} 1 & \text{if } T_{ki} \le C_{ki} \\ 0 & \text{if } T_{ki} > C_{ki}. \end{cases}$$

Then $(X_{ki}, \delta_{ki})$ is the observation of the $k$th failure of the $i$th individual. Note: the censoring times may be the same, i.e., $C_{1i} = C_{2i}$. We refer to $f_k$, $F_k$, $S_k$, $h_k$, and $\alpha_k$ as the corresponding density, distribution, survivor, hazard, and log hazard functions, respectively.

We fit a log hazard regression model for the two log hazard functions separately. That is, we suppose that

$$\alpha_1(t \mid \beta_1) = \sum_j \beta_{1j} b_{1j}(t) \quad \text{and} \quad \alpha_2(t \mid \beta_2) = \sum_j \beta_{2j} b_{2j}(t).$$

For $k = 1, 2$, we estimate $\beta_k$ by $\hat\beta_k$ as defined in Section 2.2.1. Referring to the univariate case, we denote the regression basis by $b_k$. As in (2.16), we let

$$f_{kq}(t, \delta \mid \beta_k) = f_k(t \mid \beta_k)^\delta S_k(t \mid \beta_k)^{1-\delta}.$$

We also let $U_k^{(n)}(\beta_k \mid X_k)$, $I_k^{(n)}(\beta_k \mid X_k)$, and $I_k(\beta_k)$ be the corresponding score functions, observed information matrices, and information matrices, respectively; see (2.13), (2.14), and (2.15). Similarly, we denote by $K_k$ the corresponding function $K$ of Lemma 2.3.4; see (2.27). Let

$$\beta = \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix}, \qquad \hat\beta = \begin{pmatrix} \hat\beta_1 \\ \hat\beta_2 \end{pmatrix}. \qquad (2.33)$$

We denote the $i$th summand in $U_k^{(n)}$ by $u_{ki}(\beta_k)$, that is,

$$u_{ki}(\beta_k) = \frac{\partial}{\partial \beta_k} \log f_{kq}(X_{ki}, \delta_{ki} \mid \beta_k), \qquad k = 1, 2.$$

Let

$$U^{(n)}(\beta_1, \beta_2 \mid X_1, X_2) = \begin{pmatrix} U_1^{(n)}(\beta_1 \mid X_1) \\ U_2^{(n)}(\beta_2 \mid X_2) \end{pmatrix} = \sum_{i=1}^n \begin{pmatrix} u_{1i}(\beta_1) \\ u_{2i}(\beta_2) \end{pmatrix},$$

and define

$$\Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix} = \begin{pmatrix} \mathrm{cov}\bigl(u_{11}(\beta_1), u_{11}(\beta_1)\bigr) & \mathrm{cov}\bigl(u_{11}(\beta_1), u_{21}(\beta_2)\bigr) \\ \mathrm{cov}\bigl(u_{21}(\beta_2), u_{11}(\beta_1)\bigr) & \mathrm{cov}\bigl(u_{21}(\beta_2), u_{21}(\beta_2)\bigr) \end{pmatrix}. \qquad (2.34)$$
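The matrix $\Sigma$ in (2.34) can be estimated by the empirical second moments of the per-subject score contributions, which is how the estimate $\hat Q$ of the next subsection uses it. Here is a minimal sketch (ours), assuming the contributions $u_{ki}(\hat\beta_k)$ have already been computed, for example as per-$i$ versions of the score in the earlier numerical sketch.

```python
import numpy as np

def sigma_hat(u1, u2):
    """Empirical estimate of Sigma in (2.34).

    u1: (n, p1) array whose rows are u_{1i}(beta1_hat);
    u2: (n, p2) array whose rows are u_{2i}(beta2_hat).
    Stacking per subject keeps the within-pair dependence."""
    u = np.hstack([u1, u2])           # (n, p1 + p2) stacked score contributions
    return u.T @ u / u.shape[0]       # average outer product; E(u) = 0 at the truth

# Toy check with synthetic correlated "scores" (placeholders, not real data):
rng = np.random.default_rng(2)
z = rng.standard_normal((500, 2))
S = sigma_hat(z, 0.5 * z + rng.standard_normal((500, 2)))
# The off-diagonal block S[:2, 2:] estimates Sigma_12.
```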
2.5.2 Consistency and Normality of $\hat\beta$

In this section we show the consistency and the normality of the estimates $\hat\beta$. The main result, given in Theorem 2.5.2, relies on the asymptotic normality of $U^{(n)}(\beta_1, \beta_2 \mid X_1, X_2)$, given in the following proposition.

Proposition 2.5.1 Suppose, marginally, each of the failure times satisfies Assumptions (1) through (5). Then

$$\frac{1}{\sqrt{n}} U^{(n)}(\beta_1, \beta_2 \mid X_1, X_2) \Rightarrow N(0, \Sigma).$$

Proof. Since the $u_{ki}(\beta_k)$, $i = 1, \ldots, n$, are i.i.d. and, by i) in Lemma 2.3.5, $E(u_{ki}(\beta_k)) = 0$, using the multivariate central limit theorem we have

$$\frac{1}{\sqrt{n}} \sum_{i=1}^n \begin{pmatrix} u_{1i}(\beta_1) \\ u_{2i}(\beta_2) \end{pmatrix} \Rightarrow N(0, \Sigma). \qquad \Box$$

Theorem 2.5.2 Suppose, marginally, each of the failure times satisfies Assumptions (1) through (5). Then

(i) $\hat\beta \to \beta$ in probability;

(ii) $\sqrt{n}(\hat\beta - \beta) \Rightarrow N(0, Q)$, where

$$Q = \begin{pmatrix} Q_{11} & Q_{12} \\ Q_{21} & Q_{22} \end{pmatrix} = \begin{pmatrix} I_1(\beta_1)^{-1} \Sigma_{11} I_1(\beta_1)^{-1} & I_1(\beta_1)^{-1} \Sigma_{12} I_2(\beta_2)^{-1} \\ I_2(\beta_2)^{-1} \Sigma_{21} I_1(\beta_1)^{-1} & I_2(\beta_2)^{-1} \Sigma_{22} I_2(\beta_2)^{-1} \end{pmatrix}.$$

Proof. (i) For any $\varepsilon > 0$, by Theorem 2.3.1,

$$P(\|\hat\beta - \beta\| > \varepsilon) \le P(\|\hat\beta_1 - \beta_1\| > \varepsilon/2) + P(\|\hat\beta_2 - \beta_2\| > \varepsilon/2) \to 0$$

as $n \to \infty$.

(ii) By expanding $U^{(n)}(\hat\beta_1, \hat\beta_2 \mid X_1, X_2)$ in a Taylor series about $\beta$, we have

$$U^{(n)}(\hat\beta_1, \hat\beta_2 \mid X_1, X_2) = U^{(n)}(\beta_1, \beta_2 \mid X_1, X_2) - \begin{pmatrix} I_1^{(n)}(\beta_1^* \mid X_1)(\hat\beta_1 - \beta_1) \\ I_2^{(n)}(\beta_2^* \mid X_2)(\hat\beta_2 - \beta_2) \end{pmatrix}, \qquad (2.35)$$

where $\beta^* = (\beta_1^*, \beta_2^*)^T$ is on a line segment between $\hat\beta$ and $\beta$. Since $U^{(n)}(\hat\beta_1, \hat\beta_2 \mid X_1, X_2) = 0$,

$$\sqrt{n}(\hat\beta - \beta) = \begin{pmatrix} \frac{1}{n} I_1^{(n)}(\beta_1^* \mid X_1) & 0 \\ 0 & \frac{1}{n} I_2^{(n)}(\beta_2^* \mid X_2) \end{pmatrix}^{-1} \frac{1}{\sqrt{n}}\, U^{(n)}(\beta_1, \beta_2 \mid X_1, X_2).$$

By (iii) in Theorem 2.3.1, $[\frac{1}{n} I_k^{(n)}(\beta_k^* \mid X_k)]^{-1} \to I_k(\beta_k)^{-1}$ in probability, since $\beta_k^* \to \beta_k$ in probability, $k = 1, 2$. Using this and Proposition 2.5.1, by Slutsky's theorem, we have

$$\sqrt{n}(\hat\beta - \beta) \Rightarrow N(0, Q). \qquad \Box$$

To estimate $Q$, let

$$\hat Q_{kl} = \Bigl[\frac{1}{n} I_k^{(n)}(\hat\beta_k \mid X_k)\Bigr]^{-1} \Bigl(\frac{1}{n} \sum_{i=1}^n u_{ki}(\hat\beta_k)\, u_{li}(\hat\beta_l)^T\Bigr) \Bigl[\frac{1}{n} I_l^{(n)}(\hat\beta_l \mid X_l)\Bigr]^{-1}, \qquad (2.36)$$

$k, l = 1, 2$, and

$$\hat Q = \begin{pmatrix} \hat Q_{11} & \hat Q_{12} \\ \hat Q_{21} & \hat Q_{22} \end{pmatrix}.$$

In the next theorem we show that $\hat Q$ is a consistent estimate of $Q$.

Theorem 2.5.3 Suppose, marginally, each of the failure times satisfies Assumptions (1)-(5) for $k = 1, 2$. Then $\hat Q$ is a consistent estimate of $Q$.

Proof. Let $\beta_0$ be the true value of the parameter. From Theorem 2.3.1,

$$\frac{1}{n} I_k^{(n)}(\hat\beta_k \mid X_k) \to I_k(\beta_{k0})$$

in probability, $k = 1, 2$. Therefore, we need only prove that

$$\frac{1}{n} \sum_{i=1}^n u_{ki}(\hat\beta_k)\, u_{li}(\hat\beta_l)^T \to \Sigma_{kl} \qquad (2.37)$$

in probability, $k, l = 1, 2$. By the law of large numbers,

$$\frac{1}{n} \sum_{i=1}^n u_{ki}(\beta_{k0})\, u_{li}(\beta_{l0})^T \to E\bigl(u_{k1}(\beta_{k0})\, u_{l1}(\beta_{l0})^T\bigr) = \Sigma_{kl}$$

in probability. Hence, to prove (2.37), it is sufficient to prove that

$$\frac{1}{n} \sum_{i=1}^n u_{ki}(\hat\beta_k)\, u_{li}(\hat\beta_l)^T - \frac{1}{n} \sum_{i=1}^n u_{ki}(\beta_{k0})\, u_{li}(\beta_{l0})^T \to 0$$

in probability. Let $u_{kis}(\beta_k)$ be the $s$th component of $u_{ki}(\beta_k)$, i.e.,

$$u_{kis}(\beta_k) = \frac{\partial}{\partial \beta_{ks}} \log f_{kq}(X_{ki}, \delta_{ki} \mid \beta_k),$$

$s = 1, \ldots, p_k$; $k = 1, 2$; $i = 1, \ldots, n$. We need to prove

$$\frac{1}{n} \sum_{i=1}^n u_{kis}(\hat\beta_k)\, u_{lij}(\hat\beta_l) - \frac{1}{n} \sum_{i=1}^n u_{kis}(\beta_{k0})\, u_{lij}(\beta_{l0}) \to 0 \qquad (2.38)$$

in probability, for $s = 1, \ldots, p_k$; $j = 1, \ldots, p_l$; $k, l = 1, 2$. Using the mean value theorem applied to $\sum_i u_{kis}(\hat\beta_k) u_{lij}(\hat\beta_l)$ and $\sum_i u_{kis}(\beta_{k0}) u_{lij}(\beta_{l0})$, we have

$$\Bigl|\frac{1}{n} \sum_{i=1}^n u_{kis}(\hat\beta_k)\, u_{lij}(\hat\beta_l) - \frac{1}{n} \sum_{i=1}^n u_{kis}(\beta_{k0})\, u_{lij}(\beta_{l0})\Bigr|$$
$$\le \Bigl\{\frac{1}{n} \sum_{i=1}^n \sum_{m=1}^{p_k} \Bigl|\frac{\partial}{\partial \beta_{km}} \bigl(u_{kis}(\beta_k)\, u_{lij}(\beta_l)\bigr)\Big|_{\beta^*}\Bigr|\Bigr\} \sum_{m=1}^{p_k} |\hat\beta_{km} - \beta_{k0m}|$$
$$+ \Bigl\{\frac{1}{n} \sum_{i=1}^n \sum_{m=1}^{p_l} \Bigl|\frac{\partial}{\partial \beta_{lm}} \bigl(u_{kis}(\beta_k)\, u_{lij}(\beta_l)\bigr)\Big|_{\beta^*}\Bigr|\Bigr\} \sum_{m=1}^{p_l} |\hat\beta_{lm} - \beta_{l0m}|,$$

where $\beta^*$ is on a line segment between $\hat\beta$ and $\beta_0$. Thus, since $\hat\beta \to \beta_0$ in probability, to prove (2.38) it suffices to show that $\frac{1}{n} \sum_i \sum_m |\partial(u_{kis} u_{lij}) / \partial \beta_{km}|$ is bounded in probability around $\beta_0$. That means we need to find $M < \infty$ and a neighborhood $B$ of $\beta_0$ such that

$$P\Bigl(\sup_{\beta \in B} \frac{1}{n} \sum_{i=1}^n \sum_{m=1}^{p_k} \Bigl|\frac{\partial}{\partial \beta_{km}} \bigl(u_{kis}(\beta_k)\, u_{lij}(\beta_l)\bigr)\Bigr| \le M\Bigr) \to 1$$
We choose a number a > 0 such that B = {/3 : \\f3 — (3 \\ < a} is a compact set in the parameter space. By Lemma 2.3.4, there exist Mk and M; such that for 3 e B,k,l = 1,2, k ^ l,m = 1, • • • ,pk, d d(3 -(Ukis(Pk) uUj(0l)) km uH]iPi)^—ukis{(3k) °Pkm d d2 l0Sfiq(Xu, 5li\dl)————\ogfkq(Xki, 5ki\(3k)) < MMK^X^KtiXu). For k = l, Thus, d d(3 km -(Ukis(Pk)Ulij(Pl)) < 2M2k[Kk(Xki)}2. { 1 n Pk SUP - E E d d(3, km •(ukUl3k)uuj((3i)) < M > P^MkM^j^KkiX^KtiX^KM^ which will converge to 1 for M sufficiently large, since -> EiK^X^K^Xu)) < oo n t=i by Proposition 2.4.3. As in the univariate case, we have the following Corollary. Corollary 2.5.4 If b\ and b2 satisfy assumptions 1 through 5, then V^Q-"0-l3)=>N(O,I), • where I is (pi + p2) x (p± + p2) identity matrix. 31 Chapter 3 Regression Space In the previous chapter we discussed the regression model for log hazard functions for a general regression space G. Here, we will use a family of extremely flexible functions, cubic splines, as the regression space. This family was used by Kooperberg, Stone, and Truong in 1995 for univariate log hazard regression. In this chapter we first give a brief introduction to cubic splines. Then we give the definition of a restricted cubic spline regression space B and a method to construct a cubic B-spline basis of B. In Section 3.3 we introduce the numerical computation method that we use to calculate the estimates of the log hazard regression model for paired failure data. In Section 3.4 we explain our methods for choosing knots for the log regression model. Finally, in Section 3.5, we show that the log hazard regression model with the re- gression space B can be used for hypothesis testing. In the univariate case, we can test the hypotheses that the failure times have an exponential or a Weibull distribution. In the bivariate case, we can test the hypotheses that the two failure times have the same distribution or that the two failures have proportional hazards. 32 3.1 Cubic Splines Definition 3.1.1 The function (j) is a cubic spline on [a, b] with knots ti, - • • ,tx (a < ti < • • • < tj( < b) if <f> is a cubic polynomial on the subintervals [a, ti], [ii, £2]; • • •, \PK, b] and (f) has 2 continuous derivatives on [a, b]. Denote the collection of these splines by Sp(ti, • • • ,tK). By De Boor (1978), Sp(ti, • • •, tK) is a linear space with dimension K + A. The power basis {pk, k = —3, • • •, K} of Sp(ti, • • •, tK) is defined as: Po(t) = l , p-i(t) = t, p-2 = t2, p-3 = t3, Pi(t) = (t-ti)l, pK(t) = (t - tK)3+, where f (t - t')3 for t > if (t ~ t% = U for t < If. Therefore, a power basis representation of a cubic spline <j> € Sp(ti, • • •, t^) is Ht) = E ekPk(t) = E + E °k{t - tkf+. (3.1) k=-3 k=0 k=l The power basis is very easy to understand, but isn't used in computation since it has bad numerical properties (see De Boor, 1978). A numerically much better basis is the B-spline basis, which consists of functions that are zero except on some sequential subintervals. For more details about the B-spline basis, see De Boor (1978) and Shikin (1995)' We will use a restricted B-spline basis, defined in Section 3.2, for our regression. 3.2 The Regression Space B In this section we will introduce the cubic spline regression spaces defined by Kooperberg et al. (1995) and we will give an algorithm to construct restricted B-spline bases in those spaces. 
First, we assume the log hazard function $\alpha(t \mid \beta)$ of a failure time to be a cubic spline from $Sp(t_1, \ldots, t_K)$ satisfying

(**) $\alpha$ is linear on $[0, t_1]$ and is constant on $[t_K, \infty)$.

If we use the power basis representation $\alpha = \sum_{k=-3}^K \theta_k p_k$, then (**) implies $\theta_{-3}$ and $\theta_{-2}$ equal 0, and it places three constraints on $\theta_1, \ldots, \theta_K$. This leaves a $K + 4 - 2 - 3 = K - 1$ dimensional space. More precisely, define

$$V = \mathrm{span}\{p_{-1}, p_0, p_1, \ldots, p_K\}$$

and

$$B = \{\phi \in V : \phi \text{ is linear on } [0, t_1] \text{ and constant on } [t_K, \infty)\}.$$

Then $\dim(V) = K + 2$ and $\dim(B) = K - 1$.

Next, we give the definition of a restricted B-spline basis of $B$. We assume that $K \ge 3$, and place restrictions on $b_1$, $b_{K-1}$ and, if $K > 3$, on $b_2, \ldots, b_{K-2}$.

Definition 3.2.1 The set of functions $\{b_1, \ldots, b_{K-1}\}$ in $B$ is called a restricted B-spline basis of $B$ if it has the following properties:

1. $b_1$ is linear but not constant on $[0, t_1]$ and is zero on $[t_3, \infty)$;

2. if $K > 3$, for $1 < j < K - 2$, $b_j$ is zero on $[0, t_{j-1})$ and $[t_{j+3}, \infty)$ but not zero on $(t_{j-1}, t_j)$;

3. if $K > 3$, $b_{K-2}$ is zero on $[0, t_{K-3})$, not zero on $(t_{K-3}, t_{K-2})$, and a non-zero constant on $[t_K, \infty)$;

4. $b_{K-1}$ is a non-zero constant on $[0, \infty)$.

Clearly, any $b_j$'s satisfying 1-4 are bounded, a condition required in Section 2.2.1. Now we need to show that the restricted B-spline basis is well defined. That means that we need to prove that $b_j$'s with the above properties exist and that they are a basis of $B$. We will give an algorithm for constructing the $b_j$'s (this implies the existence), but first we will show that any $b_j$'s satisfying 1-4 are a basis.

Theorem 3.2.2 If $b_1, \ldots, b_{K-1}$, $K \ge 3$, in $B$ have properties 1-4 of Definition 3.2.1, then they are a basis of $B$.

Proof. Since $B$ is a $(K-1)$-dimensional space, to show that $\{b_1, \ldots, b_{K-1}\}$ is a basis of $B$, we need only prove that $b_1, \ldots, b_{K-1}$ are linearly independent. Let

$$\phi(t) = \sum_{j=1}^{K-1} \alpha_j b_j(t).$$

Suppose that $\phi(t) \equiv 0$. We are going to show that this implies that $\alpha_j = 0$, $j = 1, \ldots, K-1$.

Consider $t \in [0, t_1)$. If $K > 3$, by properties 2 and 3, $b_j(t) = 0$ for $j = 2, \ldots, K-2$. If $K = 3$, then $\phi(t) = \alpha_1 b_1(t) + \alpha_2 b_2(t)$. Therefore, for $K \ge 3$ and $t \in [0, t_1)$,

$$\phi(t) = \alpha_1 b_1(t) + \alpha_{K-1} b_{K-1}(t) = 0.$$

By property 4, $b_{K-1}$ is a non-zero constant, and by property 1, $b_1$ is linear but not a constant. Hence we have

$$\alpha_1 = \alpha_{K-1} = 0. \qquad (3.2)$$

Hence, if $K = 3$, then $b_1$ and $b_2$ are a basis of $B$. Now we assume that $K > 3$. Consider $t \in (t_1, t_2)$. If $K > 4$, by properties 2 and 3, $b_j(t) = 0$, $j = 3, \ldots, K-2$. If $K = 4$, then $\phi(t) = \alpha_1 b_1(t) + \alpha_2 b_2(t) + \alpha_3 b_3(t)$. Hence, for $K \ge 4$ and $t \in (t_1, t_2)$, using (3.2), we have

$$\phi(t) = \alpha_1 b_1(t) + \alpha_2 b_2(t) + \alpha_{K-1} b_{K-1}(t) = \alpha_2 b_2(t) = 0.$$
Proof. First regroup the terms of $b(t)$ by powers of $t$, so that for $t \ge t_{J+2}$,

$$b(t) = a_0 + a_1 t + a_2 t^2 + a_3 t^3,$$

with the $a$'s depending on the $\theta_k$'s and the $t_j$'s (for smaller $t$ the coefficients also involve the indicator functions in the truncated powers). We see that $b = c$ on $[t_{J+2}, \infty)$ is equivalent to $a_0 = c$, $a_1 = a_2 = a_3 = 0$, which, on expanding $(t - t_k)^3 = t^3 - 3t_k t^2 + 3t_k^2 t - t_k^3$, is equivalent to the above equations. □

Now we start our construction of a restricted B-spline basis of $B$. Let $b_j = \sum_{k=-1}^{K} \theta_{kj} p_k$, $j = 1, \cdots, K - 1$. We will find $\theta_{kj}$'s so that the $b_j$'s satisfy properties 1-4 of Definition 3.2.1.

Step 1: Let $b_{K-1} = p_0$, i.e., $\theta_{0(K-1)} = 1$ and $\theta_{k(K-1)} = 0$ for $k = -1, 1, \cdots, K$.

Step 2: To define $b_1 = \sum_{k=-1}^{K} \theta_{k1} p_k$, let $\theta_{k1} = 0$ if $k > 3$. As in the proof of Lemma 3.2.3, we regroup the terms of $b_1$ by powers of $t$. Then $b_1 = 0$ on $[t_3, \infty)$ implies that $\theta_{01}, \theta_{(-1)1}, \theta_{11}, \theta_{21}, \theta_{31}$ solve the following linear system:

$$\begin{aligned}
\theta_{11} + \theta_{21} + \theta_{31} &= 0 \\
t_1\theta_{11} + t_2\theta_{21} + t_3\theta_{31} &= 0 \\
\theta_{(-1)1} + 3\big(t_1^2\theta_{11} + t_2^2\theta_{21} + t_3^2\theta_{31}\big) &= 0 \\
\theta_{01} - \big(t_1^3\theta_{11} + t_2^3\theta_{21} + t_3^3\theta_{31}\big) &= 0.
\end{aligned}$$

Since the $t_k$'s are not equal to each other, there is a unique solution of the linear system once one of the $\theta_{k1}$, $k = -1, \cdots, 3$, is fixed. We let $\theta_{11} = 1$ and solve the linear system. Note that $\theta_{(-1)1} \ne 0$: otherwise the linear system would have only the zero solution, which would contradict $\theta_{11} = 1$. Thus $b_1$ is linear but not constant on $[0, t_1)$ and $b_1 = 0$ on $(t_3, \infty)$.

To construct $b_j$, $1 < j < K - 1$, we first construct $b_j^*$, which is zero on $[0, t_{j-1})$ and a non-zero constant on $[t_{j+2}, \infty)$. We will then define the $b_j$'s as linear combinations of the $b_j^*$'s such that $b_j$ is zero on $[t_{j+3}, \infty)$, $1 < j \le K - 2$.

Step 3: For $1 < j < K - 1$, let

$$b_j^* = \theta'_{(j-1)j}\, p_{j-1} + \theta'_{jj}\, p_j + \theta'_{(j+1)j}\, p_{j+1} + \theta'_{(j+2)j}\, p_{j+2}.$$

Then for any constant $c$, by Lemma 3.2.3, $b_j^* = c$ on $[t_{j+2}, \infty)$ and $\theta_{sj} = 0$, $s = -3, -2, -1, 0$, imply that $\theta'_{(j-1)j}, \theta'_{jj}, \theta'_{(j+1)j}, \theta'_{(j+2)j}$ satisfy the linear system

$$\begin{aligned}
\theta'_{(j-1)j} + \theta'_{jj} + \theta'_{(j+1)j} + \theta'_{(j+2)j} &= 0 \\
t_{j-1}\theta'_{(j-1)j} + t_j\theta'_{jj} + t_{j+1}\theta'_{(j+1)j} + t_{j+2}\theta'_{(j+2)j} &= 0 \\
t_{j-1}^2\theta'_{(j-1)j} + t_j^2\theta'_{jj} + t_{j+1}^2\theta'_{(j+1)j} + t_{j+2}^2\theta'_{(j+2)j} &= 0 \\
t_{j-1}^3\theta'_{(j-1)j} + t_j^3\theta'_{jj} + t_{j+1}^3\theta'_{(j+1)j} + t_{j+2}^3\theta'_{(j+2)j} &= c.
\end{aligned}$$

Let $\theta'_{(j-1)j} = 1$. Then we can solve the first three equations for $\theta'_{jj}, \theta'_{(j+1)j}, \theta'_{(j+2)j}$ and then set $c$ to satisfy the fourth equation. We must show that $c \ne 0$. It is easy to see that, if $c = 0$, then the linear system in Lemma 3.2.3 has only the zero solution. Hence $\theta'_{(j-1)j} = 1$ implies that $c \ne 0$. Then $b_j^*$ is zero on $[0, t_{j-1})$, non-zero on $(t_{j-1}, t_j)$, and a non-zero constant on $(t_{j+2}, \infty)$. (A numerical sketch of the Step 3 system appears at the end of this section.)

Step 4: Let $b_{K-2} = b^*_{K-2}$. Then $b_{K-2}$ satisfies property 3.

Step 5: Recall that $b_j^*$ is a non-zero constant on $[t_{j+2}, \infty)$. For $1 < j < K - 1$, let $d_j$ be that constant. For $1 < j < K - 2$, if we let

$$b_j = b_j^* - \frac{d_j}{d_{j+1}}\, b_{j+1}^*,$$

then $b_j$ is zero on $[t_{j+3}, \infty)$ and also on $[0, t_{j-1})$, since $b_j^*$ and $b_{j+1}^*$ are zero on $[0, t_{j-1})$. Also, $b_j$ is not zero on $(t_{j-1}, t_j)$, since $b_{j+1}^*$ is zero but $b_j^*$ is not zero on $(t_{j-1}, t_j)$. Thus $b_1, \cdots, b_{K-1}$ satisfy properties 1-4 and are a B-spline basis of $B$.

From now on, for given knots, we use $B$ as our regression space with the restricted B-spline basis $b_1, \cdots, b_{K-1}$ defined as above. By a bounded basis plus log tail, we mean that the basis is $\{b_1, \cdots, b_{K-1}, b_K\}$ and the regression space is $\mathrm{span}\{b_1, \cdots, b_K\}$, where $b_K(t) = \log(t + c)$, with $c \ge 0$ a constant. Since the basis is bounded or bounded plus a log tail, all relevant results discussed in Chapter 2 can be used. To simplify notation, we will denote both regression spaces by $B$.
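The Step 3 system is small enough to solve directly. The following is a minimal numerical sketch, under the convention that `knots` is a 0-indexed array holding $t_1 < \cdots < t_K$ and `j` is the 1-based index used in the text; it illustrates the construction and is not the heft implementation.

```python
import numpy as np

def bstar_coefficients(knots, j):
    """Solve the Step 3 system with theta'_{(j-1)j} = 1.

    Returns the coefficients (theta'_{(j-1)j}, ..., theta'_{(j+2)j})
    of b_j* on p_{j-1}, ..., p_{j+2}, and the tail constant c.
    Assumes knots[0] holds t_1, so knots[j-2:j+2] is t_{j-1}, ..., t_{j+2}.
    """
    t = np.asarray(knots, dtype=float)[j - 2 : j + 2]
    # First three equations, with the theta'_{(j-1)j} = 1 terms moved
    # to the right-hand side.
    A = np.array([[1.0, 1.0, 1.0],
                  [t[1], t[2], t[3]],
                  [t[1] ** 2, t[2] ** 2, t[3] ** 2]])
    rest = np.linalg.solve(A, -np.array([1.0, t[0], t[0] ** 2]))
    theta = np.concatenate(([1.0], rest))
    c = float(np.dot(t ** 3, theta))  # fourth equation fixes c; c != 0
    return theta, c
```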
3.3 Numerical Implementation

In this section we introduce the algorithm used to calculate the estimates $\hat\beta$ and $\hat Q$ in the log hazard regression model of paired failure times. By the definition of $\hat\beta$ from (2.33), the estimates $\hat\beta_1$ and $\hat\beta_2$ are calculated separately, so we need only discuss the calculation of $\hat\beta_1$. We can use Kooperberg's heft algorithm (see Kooperberg, Stone, and Truong, 1995) to find $\hat\beta_1$. They use the Newton-Raphson method for computing $\hat\beta_1$, since $\hat\beta_1$ is the unique solution of the equations (2.20). Specifically, they start with an initial guess $\hat\beta_1^{(0)}$, then iteratively determine $\hat\beta_1^{(h+1)}$ from $\hat\beta_1^{(h)}$ according to the formula

$$\hat\beta_1^{(h+1)} = \hat\beta_1^{(h)} + \big[I^{(1)}(\hat\beta_1^{(h)}|X_1)\big]^{-1}\, U^{(1)}(\hat\beta_1^{(h)}),$$

and stop the iterations when $\log L_q(\hat\beta_1^{(h+1)}|X_1) - \log L_q(\hat\beta_1^{(h)}|X_1) < \epsilon$, where $\epsilon$ is a given positive number chosen so that estimates with the desired accuracy can be obtained.

Thus the main numerical task in calculating $\hat\beta_1$ is the computation of the log likelihood $\log L_q(\beta_1|X_1)$, the score function $U^{(1)}(\beta_1)$, and the observed information matrix $I^{(1)}(\beta_1|X_1)$ for various values of $\beta_1$. By the definitions of these quantities, see (2.17), (2.18), and (2.19), this computation involves the numerical approximation of

$$\sum_{i=1}^n \int_0^{X_{1i}} \psi(t)\,dt \qquad (3.3)$$

for $\psi$ of the form

$$\psi(t) = b_{1l}(t)\, b_{1m}(t)\, \exp\Big(\sum_j \beta_{1j} b_{1j}(t)\Big), \qquad l, m \in \{1, \cdots, p_1\}.$$

Kooperberg et al. do not calculate $\int_0^{X_{1i}} \psi(t)\,dt$ for each $i$. Rather, they rewrite (3.3) as

$$\int_0^\infty N(t)\,\psi(t)\,dt, \qquad (3.4)$$

where $N(t)$ is the number of $i$ satisfying $X_{1i} > t$. The function $N(\cdot)$ is piecewise constant, has jumps at the observations $X_{11}, \cdots, X_{1n}$, and equals zero to the right of the maximum observation. They then rewrite (3.4) as

$$\sum_\nu \int_{a_{\nu-1}}^{a_\nu} N(t)\,\psi(t)\,dt,$$

where $\{a_\nu\}$ is a finite grid of points containing all knots, and calculate each $\int N(t)\psi(t)\,dt$ numerically.

Now consider the calculation of $\hat Q$. By the definition of $\hat Q$ from (2.36), the calculation of $\hat Q$ involves the calculation of $\hat\beta_k$, $I^{(k)}(\hat\beta_k|X_k)$, and the summands $u_{ki}(\hat\beta_k)$, $k = 1, 2$, $i = 1, \cdots, n$. As mentioned above, we can get $\hat\beta_k$ from the heft algorithm directly. By slightly modifying the heft code, we can obtain $I^{(k)}(\hat\beta_k|X_k)$ from heft as well. But to evaluate $u_{ki}(\hat\beta_k)$, we need to calculate $\int_0^{X_{ki}} \psi(t)\,dt$ separately. We cannot get these integrals from heft without rewriting its entire integration program. Instead, we use Gaussian quadrature (see Abramowitz and Stegun, 1964, p. 916) for the integrals $\int_0^{X_{ki}} \psi(t)\,dt$ needed to calculate $u_{ki}(\hat\beta_k)$, $k = 1, 2$.
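To make the integration strategy concrete, here is a minimal sketch of approximating (3.4). Using the observation points themselves as the integration grid is a simplification of heft's knot-based grid $\{a_\nu\}$, and `psi` stands for any integrand of the form in (3.3), assumed vectorised.

```python
import numpy as np

def integral_N_psi(x_obs, psi, deg=16):
    """Approximate (3.4): the integral of N(t) * psi(t) dt, where N(t)
    is the number of observations x_obs[i] > t."""
    x = np.sort(np.asarray(x_obs, dtype=float))
    breaks = np.concatenate(([0.0], x))      # N(t) is constant between breaks
    nodes, weights = np.polynomial.legendre.leggauss(deg)
    n, total = len(x), 0.0
    for nu in range(len(breaks) - 1):
        lo, hi = breaks[nu], breaks[nu + 1]
        if hi > lo:                          # skip tied observations
            t = 0.5 * (hi - lo) * nodes + 0.5 * (hi + lo)
            # N(t) = n - nu on (lo, hi)
            total += (n - nu) * 0.5 * (hi - lo) * np.sum(weights * psi(t))
    return total
```

Because $N(t)$ is constant between consecutive order statistics, the only approximation here is the Gauss-Legendre rule applied to the smooth factor $\psi$ on each subinterval.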
3.4 Knot Selection

In Section 3.2 we gave the method to define the regression space $B$ for given knots. In this section we introduce the methods that we use to choose the knots. For the univariate case, we use the following two methods to select the knots.

• Choose knots by Kooperberg's heft algorithm. For a given data set, Kooperberg's heft algorithm can choose the knots for the model fit. The algorithm chooses knots by a stepwise addition and stepwise deletion procedure; see Kooperberg et al. (1995) for details. We hoped that heft would select knots well, but our simulation study revealed numerical problems with the knots chosen by heft: if the ratio of the maximum knot to the minimum knot is too big, then computing the log hazard estimate and the estimated standard errors may be impossible (see Section 5.2.2). So we should not use the knots from heft if we receive a warning message from the heft code or if the calculation of the estimated standard errors is not possible.

• Choose the knots by quantiles. We choose quantiles of the non-censored observations as the knots that define the regression space $B$. This procedure is based on ideas in Kooperberg et al.'s knot selection for their stepwise addition method and on the knot selection method of Abrahamowicz, Ciampi, and Ramsay (1992). Choosing knots equal to quantiles may also result in a large ratio of the maximum knot to the minimum knot; as noted above, this causes numerical problems. We solve this problem by truncating the data: if there are any warnings, we use the quantiles of a truncated data set. In our simulations, we truncate the observations which are greater than 80.

For paired data we use the following methods:

• Use different knots for modeling the two log hazard functions. Choose knots for each failure time by the above two methods, then use the two sets of knots to define the two regression spaces and fit the marginal log hazard regression models for the paired data.

• Use the same knots for modeling the two log hazard functions. In this method we need to choose one set of knots, defining one regression space, for both log hazard functions. If we denote the ranges of the non-censored observations $\{X_{ki} : \delta_{ki} = 1\}$ by $R_k$, $k = 1, 2$, then our knots must lie in $R_1 \cap R_2$. There are two ways to choose this set of knots:
a) use the quantiles of the non-censored observations which lie in $R_1 \cap R_2$; or
b) use the union of the knots selected for each marginal log hazard estimate: denoting the sets of knots selected for the two log hazards by $\mathcal{K}_1$ and $\mathcal{K}_2$, use $(\mathcal{K}_1 \cup \mathcal{K}_2) \cap (R_1 \cap R_2)$ as the set of knots for the regression space.

3.5 Hypothesis Testing

In this section we show how to use the restricted B-spline basis, the regression space $B$, and given data to test hypotheses.

3.5.1 Univariate Case

In the univariate case, we can test:

• $H_0^E$: the failure time has an exponential distribution;
• $H_0^W$: the failure time has a Weibull distribution.

By the definition of the log hazard regression model in (2.5), the log hazard function is

$$\alpha(t|\beta) = \beta_1 b_1(t) + \cdots + \beta_p b_p(t).$$

If the failure time is exponentially distributed, then $\alpha(t|\beta)$ is constant. Therefore, for the restricted B-spline basis defined in Section 3.2 ($\{b_j, j = 1, \cdots, K-1\}$ if $b$ is bounded, or $\{b_j, j = 1, \cdots, K\}$ if $b$ is bounded plus log tail, where $K$ is the number of knots), we can rewrite $H_0^E$ as

$$H_0^E:\ \beta_j = 0 \text{ for } j \ne K - 1.$$

Then we can write $H_0^E$ as $H_0^E: X_E^T\beta = 0$ for an appropriate matrix $X_E$. By Corollary 2.3.3,

$$\big[n\, I^{(1)}(\hat\beta|X)\big]^{1/2}(\hat\beta - \beta) \Rightarrow N(0, I).$$

Hence, under $H_0^E$,

$$n\,(X_E^T\hat\beta)^T\big(X_E^T[I^{(1)}(\hat\beta|X)]^{-1}X_E\big)^{-1}(X_E^T\hat\beta) \Rightarrow \begin{cases} \chi^2_{K-2} & \text{if } b \text{ is bounded;} \\ \chi^2_{K-1} & \text{if } b \text{ is bounded plus log tail.} \end{cases}$$

Therefore, $n(X_E^T\hat\beta)^T(X_E^T[I^{(1)}(\hat\beta|X)]^{-1}X_E)^{-1}(X_E^T\hat\beta)$ can be used as a test statistic.

To test $H_0^W$, we need to use the restricted B-spline basis plus the log tail $b_K(t) = \log(t)$. Thus, if the failure time has a Weibull distribution, then

$$\alpha(t|\beta) = \beta_{K-1} b_{K-1}(t) + \beta_K b_K(t).$$

Hence, testing $H_0^W$ is equivalent to testing $\beta_j = 0$ for $j < K - 1$. By a procedure similar to that for testing $H_0^E$, we can find a matrix $X_W$ such that

$$n\,(X_W^T\hat\beta)^T\big(X_W^T[I^{(1)}(\hat\beta|X)]^{-1}X_W\big)^{-1}(X_W^T\hat\beta) \Rightarrow \chi^2_{K-2},$$

and we can use $n(X_W^T\hat\beta)^T(X_W^T[I^{(1)}(\hat\beta|X)]^{-1}X_W)^{-1}(X_W^T\hat\beta)$ as our test statistic.
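The quadratic forms above are easy to compute once $\hat\beta$ and the information matrix are available. A minimal sketch, with the contrast matrix $X$ and the sample size supplied by the caller; the function name and argument layout are our own.

```python
import numpy as np
from scipy import stats

def wald_chi2_pvalue(beta_hat, info_hat, X, n):
    """Wald test of H0: X^T beta = 0 as in Section 3.5.1.

    beta_hat : estimated coefficients (length p)
    info_hat : estimated information matrix I(beta_hat | X) (p x p)
    X        : p x r contrast matrix selecting the tested combinations
    n        : sample size
    """
    est = X.T @ beta_hat                        # X^T beta_hat
    cov = X.T @ np.linalg.solve(info_hat, X)    # X^T I^{-1} X
    chi2 = n * est @ np.linalg.solve(cov, est)  # the quadratic form
    df = X.shape[1]                             # K-2 or K-1 in the text
    return chi2, stats.chi2.sf(chi2, df)
```

For $H_0^E$ with a bounded basis, for example, $X_E$ has $K - 2$ columns, each picking out one coefficient $\beta_j$, $j \ne K - 1$.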
3.5.2 Bivariate Case

In the bivariate case, we can test:

• $H_0$: the two failure times have the same distribution;
• $H_0^P$: the two failure times have proportional hazards.

For given paired data $(X_{1i}, \delta_{1i}, X_{2i}, \delta_{2i})$, $i = 1, \cdots, n$, to test $H_0$ and $H_0^P$ we model the log hazard functions of the two failure times with the same regression space $B$. Then the log hazard regressions are

$$\alpha_1(t|\beta_1) = \sum_{j=1}^p \beta_{1j} b_j(t), \qquad \alpha_2(t|\beta_2) = \sum_{j=1}^p \beta_{2j} b_j(t).$$

If the two failure times have the same distribution, then

$$\alpha_1(t|\beta_1) - \alpha_2(t|\beta_2) = \sum_{j=1}^p (\beta_{1j} - \beta_{2j})\, b_j(t) = 0.$$

Therefore, testing $H_0$ is equivalent to testing

$$H_0^*:\ \beta_{1j} - \beta_{2j} = 0 \text{ for } j = 1, \cdots, p.$$

Then we can rewrite $H_0^*$ as $H_0^*: X^T\beta = 0$ for an appropriate matrix $X$. For the bases described in Section 3.2, by Corollary 2.5.4,

$$\sqrt{n}\,\hat Q^{-1/2}(\hat\beta - \beta) \Rightarrow N(0, I).$$

Then, under $H_0$,

$$n\,(X^T\hat\beta)^T(X^T\hat Q X)^{-1}(X^T\hat\beta) \Rightarrow \begin{cases} \chi^2_{K-1} & \text{if } b \text{ is bounded;} \\ \chi^2_K & \text{if } b \text{ is bounded plus log tail,} \end{cases}$$

so we use $n(X^T\hat\beta)^T(X^T\hat Q X)^{-1}(X^T\hat\beta)$ as the test statistic.

If the two hazards are proportional, then

$$\alpha_1(t|\beta_1) - \alpha_2(t|\beta_2) = \sum_{j=1}^p (\beta_{1j} - \beta_{2j})\, b_j(t) = \text{constant}.$$

Hence, for the regression spaces and $b_j$'s defined in Section 3.2, the test of $H_0^P$ is equivalent to the test of $\beta_{1j} - \beta_{2j} = 0$ for $j \ne K - 1$. Then we can rewrite $H_0^P$ as $H_0^P: X^T\beta = 0$ for an appropriate matrix $X$. By Corollary 2.5.4, under $H_0^P$,

$$n\,(X^T\hat\beta)^T(X^T\hat Q X)^{-1}(X^T\hat\beta) \Rightarrow \begin{cases} \chi^2_{K-2} & \text{if } b \text{ is bounded;} \\ \chi^2_{K-1} & \text{if } b \text{ is bounded plus log tail.} \end{cases}$$

Therefore, $n(X^T\hat\beta)^T(X^T\hat Q X)^{-1}(X^T\hat\beta)$ can be used as the test statistic.

Note: When the $T_{1i}$'s and $T_{2i}$'s are independent, the Cox proportional hazards model

$$h_c(t) = h_0(t)\exp(\gamma) \qquad (3.5)$$

provides an estimate of the relative risk $\gamma$. Thus, under the independence assumption, we can also test $H_0$ by testing $H_0^C: \gamma = 0$. To our knowledge, there is no non-parametric test for proportional hazards for dependent paired data as defined in Chapter 2. We might use the usual Cox proportional hazards model, assuming the two failure times are independent; however, while the usual estimate of $\gamma$ may be good, the standard errors are probably biased.

Chapter 4

Application to the Diabetic Retinopathy Study

In this chapter we apply the log hazard regression model to data collected to study the effect of laser treatment on diabetic retinopathy (see Diabetic Retinopathy Study Research Group, 1981). We give a description of this study in Section 4.1 and analyze the data in Section 4.2. Finally, in Section 4.3, we discuss analysis of this data set using other models.

4.1 Data Description

Diabetic retinopathy is a complication associated with diabetes mellitus, which consists of abnormalities in the microvasculature within the retina of the eye. It is the major cause of visual loss in many industrialized countries (Murphy and Patz, 1978).

The Diabetic Retinopathy Study (DRS) was funded by the National Eye Institute in 1971 to investigate the effectiveness of laser photocoagulation in delaying the onset of blindness for diabetic retinopathy patients. One eye of each patient was randomly chosen to receive photocoagulation and the other eye was observed without treatment. A total of 1,742 patients was followed over several years. The endpoint used to assess the treatment effect was the occurrence of visual acuity less than 5/200 at two consecutively completed 4-month follow-ups. The only data available are from the 197 patients defined as high-risk by DRS criteria. Of the 197 pairs of observations, approximately one-half (101/197) of the untreated eyes and one-quarter (54/197) of the treated eyes achieved the outcome after 5 years of follow-up.

The histograms of the censored and uncensored data for treatment and control (Figure 4.1) show that more untreated eyes than treated eyes failed during the study and that many patients left the study about 3 years (36 months) or more after the start time. The correlation between the uncensored observations of the treatment group and the control group is 0.28, which indicates possible dependence between the two failure times. We show the scatter plots in Figure 4.2. For this data set, the two censoring times are identical, that is, $C_{Ti} = C_{Ci}$.

The primary goal of the DRS study was to assess the effectiveness of the laser photocoagulation treatment. A secondary goal was to assess the relative risk of blindness of the untreated and the treated eyes as a function of time.
4.2 Data Analysis

In this section we will address the following questions:

1. Do the failure times of the treated and the untreated eyes have the same distribution? That is, is there a treatment effect?

2. What is the log hazards ratio? That is, what is the relative risk of blindness of the untreated eyes versus the treated eyes?

Figure 4.1: Histograms of observed times of the eye data.

Figure 4.2: Scatter plots of the observations of the eye data.

We carry out two analyses of the eye data. First, as described in Section 3.4 on knot selection, we let Kooperberg's heft algorithm choose the knots for the log hazard regression model (Model 1). This allows different knots to be used for the log hazard estimates of the failure times of the treated and the untreated eyes. In our second analysis (Model 2), we use the same regression space for the log hazards of the two failure times; the set of knots used is the union of the knots chosen by the heft algorithm for each failure time. With both models we can get an estimate of the log hazards ratio with pointwise 95% confidence intervals. We then test whether there is a treatment effect, and whether the two hazards are proportional, with Model 2, as described in Section 3.5.

4.2.1 Model 1

To estimate each failure time's log hazard, we use Kooperberg's heft algorithm with the log tail $b_p(t) = \log(t)$ (i.e., $c = 0$) and without specifying knots. The heft algorithm selects 1.5, 6.17, and 63.33 as knots for the regression model of the treated eyes and chooses no knots for the untreated eyes, which means that heft chooses a Weibull model for the failure time of the untreated eye. Then the log hazard regression model, Model 1, is:

$$\alpha_T(t|\beta_T) = \beta_{T1} b_{T1}(t) + \beta_{T2} b_{T2}(t) + \beta_{T3} b_{T3}(t), \qquad (4.1)$$

$$\alpha_C(t|\beta_C) = \beta_{C1} b_{C1}(t) + \beta_{C2} b_{C2}(t). \qquad (4.2)$$

As defined in Section 3.2, $b_{T1}$ is linear on $[0, 1.5)$, cubic on $[1.5, 6.17)$ and $[6.17, 63.33)$, and constant on $[63.33, \infty)$; $b_{T2}$ and $b_{C1}$ are constant functions; and $b_{T3}(t) = b_{C2}(t) = \log(t)$. The estimates of $\beta_T$ and $\beta_C$ from heft are

$$\hat\beta_T = (0.00017, -8.13, 0.69)^T \quad\text{and}\quad \hat\beta_C = (-3.68, -0.18)^T,$$

respectively. The estimates of the marginal densities, survival functions, hazard functions, and the log hazards ratio are in Figures 4.3 to 4.6.

From the plot in Figure 4.3, we find that both estimated densities have high values in the first twenty months, but the density corresponding to the untreated eye is higher than that of the treated eye throughout the observed time period. The estimated density of the failure time of the untreated eyes has its maximum at time zero, while the other density achieves its maximum at about month six. Since the regression model is for the log hazard, we calculate the estimated densities from the estimated log hazards as in equation (2.8); a short numerical sketch is given after Figure 4.4's caption below. A pointwise confidence interval for the estimated densities would require integration of the upper and lower bounds of the confidence intervals for the hazard function. This is difficult, so these confidence intervals have not been calculated.

Figure 4.3: The estimated density functions of the eye data for Model 1.

Figure 4.4: The estimated survival functions of the eye data for Model 1.
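Equation (2.8) expresses the density through the hazard, $f(t) = h(t)\exp(-\int_0^t h(u)\,du)$, so the plotted densities can be obtained from the fitted log hazard by a single numerical integration. A minimal sketch, assuming `log_hazard` is a vectorised function such as the fitted $\alpha_T(\cdot|\hat\beta_T)$:

```python
import numpy as np

def density_from_log_hazard(log_hazard, t_grid):
    """Compute f(t) = h(t) * exp(-int_0^t h(u) du) on a grid, as in (2.8).

    t_grid must start at 0; the cumulative hazard is approximated by
    the trapezoidal rule.
    """
    h = np.exp(log_hazard(t_grid))
    H = np.concatenate(
        ([0.0], np.cumsum(np.diff(t_grid) * 0.5 * (h[1:] + h[:-1]))))
    return h * np.exp(-H)
```

The survival function is obtained from the same cumulative hazard as $S(t) = \exp(-H(t))$, which is how the curves in Figure 4.4 can be produced from the fitted log hazards.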
The estimated survival curves in Figure 4.4 show that the estimated survival function of the treated eyes is always greater than that of the untreated eyes. Therefore, there appears to be a large treatment effect.

From the plots of the estimated hazard functions in Figure 4.5, we see that the risk of blindness in the untreated eye is much higher than that in the treated eye. The risk of blindness in the untreated eyes decreases with time, while the treated eyes may have a maximal risk at $t = 6$. Based on the estimated hazard functions, the treatment may have delayed the onset of blindness for the first couple of months after the operation.

Figure 4.5: The estimated hazard functions of the eye data for Model 1 with pointwise 95% confidence intervals.

Figure 4.6 shows the estimated log hazards ratio with pointwise 95% confidence intervals. Values greater than zero indicate a higher risk for the control group, so it seems that the treatment has a beneficial effect. The estimate of the log relative risk of blindness of the untreated eye to the treated eye equals 1.15 at $t = 1$, decreases to 0.34 at about month six, then increases smoothly. After attaining its maximum value of 1.186 at month 42, it decreases slowly. Thus, based on Figure 4.6, the estimated log ratio of the two hazards gives an answer to questions 1 and 2 posed at the beginning of Section 4.2.

Figure 4.6: The estimated log ratio of the hazard functions of the eye data with pointwise 95% confidence intervals for Model 1.

4.2.2 Model 2

To test whether there is a treatment effect, we assume the log hazard functions of the treated eye and the untreated eye are from the same regression space $B$, so we use the same restricted B-spline basis plus a log tail for the two regressions. To choose the knots that define $B$, we refer to the knots used for Model 1. As heft uses 1.5, 6.17, and 63.33 for the treatment group and no knots for the controls, we use 1.5, 6.17, and 63.33, the union of the knots used for the two groups, as the knots to define our regression space $B$. Then $B$ is a three-dimensional space and the log hazard regression, Model 2, is:

$$\alpha_T(t|\beta_T) = \beta_{T1} b_1(t) + \beta_{T2} b_2(t) + \beta_{T3} b_3(t), \qquad (4.3)$$

$$\alpha_C(t|\beta_C) = \beta_{C1} b_1(t) + \beta_{C2} b_2(t) + \beta_{C3} b_3(t), \qquad (4.4)$$

where $b_1 = b_{T1}$, $b_2 = b_{T2}$, and $b_3 = b_{T3}$ as defined in Section 4.2.1. Using the heft algorithm with fixed knots, we get the estimates

$$\hat\beta_T = (0.00017, -8.13, 0.69)^T, \qquad \hat\beta_C = (0.00007, -5.15, 0.16)^T.$$

As in Section 4.2.1, we calculate the estimates of the densities, survival functions, hazard functions, and the log hazards ratio. We give the plots in Figures 4.7 to 4.10. The results are similar to those from Model 1.

Figure 4.7: The estimated density functions of the eye data for Model 2.

Figure 4.8: The estimated survival functions of the eye data for Model 2.

Figure 4.9: The estimated hazard function of the untreated eye for Model 2.
We compare the estimates from Model 1 and Model 2 for the untreated eyes in Figures 4.11 to 4.13. From Figures 4.11 to 4.13 we see that Model 1 and Model 2 give almost the same estimates of the survival function of the untreated eyes. But the estimated densities and hazards look a little different. Next we use Model 2 to test the hypothesis that the failure times of the treated eye and the untreated eye have the same distribution. By the definition of Model 2 from (4.3) and (4.4), this hypothesis is equivalent to Ho : PTJ - P C J = 0, j = 1,2,3. 57 M o d e l 2 T i m e t o B l i n d n e s s ( M o n t h s ) Figure 4.11: The estimated density functions of the failure times of the untreated eyes. M o d a l 1 M o d e l 2 T i m e t o B l i n d n e s s ( M o n t h s ) Figure 4.12: The estimated survival functions of the untreated eyes. 58 M o d e l 1 M o d e l 2 T i m e t o B l i n d n e s s ( M o n t h s ) Figure 4.13: The estimated hazard functions of the failure times of the untreated eyes. Using the test described in Section 3.5, we obtain the test statistics and p-values shown in Table 4.1. Table 4.1: Test statistic and p-value for testing that the failure times of the treated eyes and the untreated eyes have the same distribution. Test xl p-value P T J - P C J = 0,j = 1,2,3 48.56 0 Hence we may reject H0 and conclude that the two distributions are different. From our plots of the log hazards ratios in Figures 4.6 and 4.10, we conclude that the laser photocoagulation treatment had a significant effect in delaying the onset of blindness in patients with diabetic retinopathy. 59 4.3 Other Models Used to fit the Eye Data Huster et. al. (1989), considered modeling the marginal distributions for the eye data as exponential and as Weibull or modeling the data via the Cox proportional hazards. We show the estimated marginal survival curves from fitting different models in Figures 4.14 and 4.15. The Kaplan-Meier estimate is also given. Note that for the treatment Model 1 and Model 2 are the same and for the control Model 1 is the Weibull. We see that all fit inside the confidence intervals of the Kaplan-Meier estimates except the estimates from the exponential model. But the estimated survival functions from our log hazard regres- sion models are closer to the Kaplan-Meier estimates than those from the parametric models. In this section we use Model 2, defined in Section 4.2.2, to decide which model(s) are appropriate to the eye data. Then, based on the test results, if we can choose a standard parametric model, say exponential, Weibull, or proportional hazards, we will use the selected model to test if there is a treatment effect. 4.3.1 Exponential Model Since Model 2 from (4.3) and (4.4) includes the exponential distribution, we can use the model to test if the marginal distributions are exponential. As we discussed in Section 3.5, we test the two null hypotheses: fr3 = 0J = l,3 and Pcj = 0,j = l,3. Table 4.2 presents our test results. Hence we reject the hypotheses that the failure times of the treated eye and the untreated eye are exponentially distributed. 60 K-M Model 1 = Model 2 Exponential Weibull 95% Cl of K-M 0 20 40 60 Time to Blindness (Months) Figure 4.14: The estimated survival curves of the treated eye. 61 K-M Model 2 Exponential Weibull = Model 1 95% Cl of K-M 0 20 40 60 Time to Blindness (Months) Figure 4.15: The estimated survival curves of the untreated eye. 62 Table 4.2: x2 test statistics and p-values for testing that failure times are exponentially distributed. 
Hence we reject the hypotheses that the failure times of the treated eye and the untreated eye are exponentially distributed.

4.3.2 Weibull Model

To test the hypotheses that the marginal distributions are Weibull, we test the two hypotheses

$$\beta_{T1} = 0 \qquad\text{and}\qquad \beta_{C1} = 0.$$

We show the test results in Table 4.3.

Table 4.3: z statistics and p-values for testing that the failure times have Weibull distributions.

    Test                   Estimate               se               z      p-value
    $\beta_{T1} = 0$    $1.73 \times 10^{-4}$  $6.28 \times 10^{-5}$   2.75    0.006
    $\beta_{C1} = 0$    $6.94 \times 10^{-5}$  $3.59 \times 10^{-5}$   1.93    0.053

Thus we reject the hypothesis that the failure time of the treated eye has a Weibull distribution, and we cannot accept the hypothesis that the failure time of the untreated eye has a Weibull distribution.

4.3.3 Cox Proportional Hazards Model

Now we use Model 2 to test the hypothesis that the two hazards are proportional. As mentioned in Section 3.5, we need to test

$$H_0:\ \beta_{Tj} - \beta_{Cj} = 0, \quad j = 1, 3,$$

since $b_2$ is constant. The test statistic and p-value are shown in Table 4.4.

Table 4.4: Test statistic and p-value for testing that the two hazards of the treated eyes and the untreated eyes are proportional.

    Test                                         $\chi^2_2$    p-value
    $\beta_{Tj} - \beta_{Cj} = 0,\ j = 1, 3$        2.81         0.25

Thus we may assume that the two groups have proportional hazards, that is, we may assume that the relative risk of blindness of the untreated eye versus the treated eye is constant.

Now we assume that the two failure times are independent and fit the Cox proportional hazards model (3.5) to the eye data. To test the treatment effect, we test $\gamma = 0$ (a minimal sketch of the equivalent score test is given at the end of this chapter). The results are given in Table 4.5.

Table 4.5: Test results from the Cox proportional hazards model for the eye data.

    Test            $\hat\gamma$     se      z-value     p-value
    $\gamma = 0$       0.777       0.169       4.6      $4.2 \times 10^{-6}$

The Cox proportional hazards model also indicates that there is a treatment effect. Figures 4.16 to 4.19 compare the pointwise 95% confidence intervals of the log hazards ratio from Models 1 and 2 and from the Cox proportional hazards model, which assumes the two failure times are independent. We can see that the confidence interval from Model 2 is narrower than the confidence interval from the Cox proportional hazards model during months 4 through 23, when most of the failures in both groups occurred.

Figure 4.16: The estimated log hazards ratio from Model 1 and the Cox proportional hazards model with the pointwise 95% confidence intervals from the Cox proportional hazards model.

Figure 4.17: The estimated log hazards ratio from Model 1 and the Cox proportional hazards model with the pointwise 95% confidence intervals from Model 1.

Figure 4.18: The estimated log hazards ratio from Model 2 and the Cox proportional hazards model with the pointwise 95% confidence intervals from the Cox proportional hazards model.

Figure 4.19: The estimated log hazards ratio from Model 2 and the Cox proportional hazards model with the pointwise 95% confidence intervals from Model 2.
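For reference, under the independence assumption, the score test of $\gamma = 0$ in the Cox model (3.5) with a single binary covariate is the two-sample log-rank test. A minimal sketch follows; note that Table 4.5 reports the Wald statistic $\hat\gamma/\mathrm{se}(\hat\gamma)$, so this score version agrees with it only approximately.

```python
import numpy as np

def logrank_test(time, event, group):
    """Two-sample log-rank statistic, the score test of gamma = 0 in
    the Cox model (3.5) when the pairs are independent.
    group is 0/1 (e.g. untreated / treated eye); returns a N(0,1) z."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    group = np.asarray(group, dtype=int)
    obs_minus_exp, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n = at_risk.sum()                       # total at risk at t
        n1 = (at_risk & (group == 1)).sum()     # group-1 at risk at t
        d = ((time == t) & (event == 1)).sum()  # failures at t
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        obs_minus_exp += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return obs_minus_exp / np.sqrt(var)
```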
Chapter 5

Simulation

This chapter contains a simulation study of the estimates in the log hazard regression model. Our main aim is to check the bias and variability of our estimates for the log hazard regression model. We consider different censoring rates and different correlation levels of the paired failure times. We also investigate the three test procedures: for exponential or Weibull marginal distributions and for proportional hazards (see Section 3.5). Our study consists of two parts: the univariate case and the bivariate case.

In the univariate case, presented in Section 5.2, we investigate the estimates of the marginal log hazard functions and the pointwise standard errors. We also test that the marginals follow exponential or Weibull distributions. In the bivariate case, presented in Section 5.3, we examine the estimated log hazards ratios and their estimated standard errors. We also test that the paired failure times follow the proportional hazards model. In Section 5.1 we give a brief description of our data generation and model fitting. We present the simulation results in Sections 5.2 and 5.3 for the marginal and bivariate data, respectively.

In this simulation study we find that, in general, the estimates of the log hazards perform well except in the tails of the failure time distribution or when the censoring is high. The estimated standard errors slightly underestimate the true variability, which is fairly small. The estimated standard errors for the log hazards ratios do not depend strongly on the correlation level. All of the estimates depend on the censoring rate: the lower the censoring, the better the results. We give a summary of this study in Section 5.4.

5.1 Description of the Simulation Study

In this study, for each distribution of $T$ considered, we generate 200 pairs of $T$'s and $C$'s in each simulation, where the $T$'s are failure times, the $C$'s are censoring times, and the $T$'s and $C$'s are independent. We choose two censoring rates, $c = 0.25$ or $0.50$, i.e., $P(T > C) = 0.25$ or $P(T > C) = 0.50$. We generate the failure time $T$ from distributions with log hazards of the form

$$\alpha(t|\beta) = \beta_1 b_1(t) + \cdots + \beta_p b_p(t), \qquad (5.1)$$

where $\{b_1, \cdots, b_{p-1}\}$ is a B-spline basis and $b_p(t) = \log(t)$. We use three distributions for the marginal failure time $T$:

Exponential:

$$\alpha_T(t) = -\log(20). \qquad (5.2)$$

In this case, we choose the censoring time $C$ with $\alpha_C(t) = -\log(20)$ for a 50% censoring rate, or $\alpha_C(t) = -\log(60)$ for 25% censoring.

Weibull:

$$\alpha_T(t) = -3.68 - 0.18\log(t). \qquad (5.3)$$

The corresponding censoring time $C$ has the log hazard $\alpha_C(t) = -3.68 - 0.18\log(t)$ for 50% censoring, or $\alpha_C(t) = -4.69 - 0.18\log(t)$ for 25% censoring.

B-spline:

$$\alpha_T(t) = 0.00017\, b_1(t) - 8.13 + 0.69\log(t), \qquad (5.4)$$

where $b_1$ is linear on $[0, 1.5)$, cubic on $[1.5, 6.17)$ and $[6.17, 63.33)$, and zero on $[63.33, \infty)$; see Sections 3.2 and 4.2 for the exact definition of $b_1$. The censoring time follows a Weibull with log hazard $\alpha_C(t) = -13.39 + 2\log(t)$ for 50% censoring, or $\alpha_C(t) = -25.24 + 4\log(t)$ for a 25% censoring rate.

In all three cases, we choose the parameters of $T$'s distribution to fit the eye data of Chapter 4: the exponential distribution has mean close to the empirical mean of the non-censored failure times of the treated eyes; the Weibull is the estimated distribution of the failure time of the untreated eye based on the given data; and the B-spline model (5.4) is the estimated distribution of the failure times of the treated eye as calculated in Section 4.2.

We use S-plus to generate data from the exponential and Weibull distributions. For data from the B-spline model (5.4), we first generate a random variable $u \sim U(0, 1)$ and then solve $S(t) = u$ for $t$ numerically. Since $t$ is the $(1 - u)$th quantile of $T$'s distribution, $t$ can be calculated via heft's quantile function.
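As an illustration of the exponential design (5.2), the following sketch generates one simulated sample; the seed and function name are ours.

```python
import numpy as np

rng = np.random.default_rng(1999)

def simulate_exponential_sample(n=200, censoring=0.25):
    """One simulated sample under model (5.2): T ~ Exp(mean 20) with an
    independent exponential censoring time giving P(T > C) = 0.25
    (C with mean 60) or P(T > C) = 0.50 (C with mean 20)."""
    T = rng.exponential(scale=20.0, size=n)
    c_mean = 60.0 if censoring == 0.25 else 20.0
    C = rng.exponential(scale=c_mean, size=n)
    X = np.minimum(T, C)          # observed time
    delta = (T <= C).astype(int)  # 1 = failure observed, 0 = censored
    return X, delta
```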
We use the following three methods to estimate the log hazard function.

True Model: Fit the data assuming they follow the true model; that is, estimate the parameters in (5.1) using the true $b_j$'s. In this case neither the number nor the locations of the knots depend on the data set.

Quantile Knots: Use quantiles of the non-censored observations as the knots that define the $b_j$'s in (5.1). We consider two cases: three knots and six knots. In the three knots case, the knots are the quartiles; in the six knots case, the knots are the $1/7, \cdots, 6/7$ quantiles. Thus the locations of the knots depend on the data set, but the number of knots does not. (A sketch of this knot rule appears at the end of this section.)

Flexible Knots: Choose the knots that define the $b_j$'s in (5.1) by Kooperberg's heft algorithm. In this case both the number and the locations of the knots depend on the data set.

Thus we have six models for $(T, C)$, from the three distributions for the failure times and the two censoring rates for each distribution. For each of these six models we run 500 simulations and calculate the above four estimates of the log hazard functions (the Quantile Knots method contributes two estimates, with three and with six knots).

In the bivariate case, we use the Clayton method to generate $(T_1, T_2)$ from given marginals for $T_1$ and $T_2$ and a dependence parameter $\theta$ (see Section 5.3.1). We consider two types of paired data:

Proportional Hazards: The failure time $T_1$ and censoring time $C_1$, say for the treatment, are as in the univariate B-spline model, so the censoring rates for $T_1$ are 0.5 and 0.25. The failure time $T_2$, say for the control, is from the distribution with log hazard

$$\alpha_2(t) = 0.00017\, b_1(t) - 7.13 + 0.69\log(t). \qquad (5.5)$$

Then $T_1$ and $T_2$ have proportional hazards. We assume that the treatment and control have the same censoring time $C = C_1 = C_2$.

Non-proportional Hazards: The failure time $T_1$ and censoring time $C_1$ of the treatment are as in the univariate B-spline model. The failure time $T_2$ of the control is from the exponential model (5.2), so $T_1$ and $T_2$ do not have proportional hazards. Again we assume that the treatment and the control have the same censoring time $C$, so the censoring rates of $T_1$ are 0.5 and 0.25.

The correlation level of $T_1$ and $T_2$ is determined by a parameter $\theta$; we give more details about $\theta$ in Section 5.3.1. In this study we consider three correlation levels between $T_1$ and $T_2$: $\theta = 1$, in which case $T_1$ and $T_2$ are independent, $\theta = 1.5$, and $\theta = 2.5$. The bigger $\theta$, the more $T_1$ and $T_2$ are correlated.

We use the following three methods to estimate the log hazards ratios of the paired data.

Cox: Assume $T_1$ and $T_2$ are independent and have proportional hazards. Use the Cox proportional hazards model, as implemented in Splus's coxph.

Same Knots: Use the Quantile Knots method with the same six knots for $T_1$ and $T_2$ to estimate each marginal log hazard. The knots are the $1/7, 2/7, 3/7, 4/7, 5/7, 6/7$ quantiles of the union of the non-censored observations of $T_1$ and $T_2$ which are less than 80; see the explanation in Section 3.4.

Different Knots: Use the Quantile Knots method with six knots for estimating $T_1$'s log hazard, and with six knots for $T_2$'s when $T_1$ and $T_2$ have proportional hazards or three knots for $T_2$'s when they do not. The knots then differ between $T_1$ and $T_2$ in each simulation.

Hence in the bivariate case we have 12 models for $(T_1, T_2, C)$: two pairs of marginals for $T_1, T_2$, two censoring rates, and three correlation levels. For each of the 12 models we run 500 simulations and use the above three methods to estimate the log hazards ratios.
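The Quantile Knots rule, including the truncation at 80 used when numerical warnings occur, can be sketched as follows. The equal spacing $k/(m+1)$, $k = 1, \cdots, m$, reproduces the quartiles for three knots and the $1/7, \cdots, 6/7$ quantiles for six; the function name and defaults are ours.

```python
import numpy as np

def quantile_knots(x, delta, n_knots=6, truncate_at=80.0):
    """Knots for the Quantile Knots method: equally spaced quantiles
    of the uncensored observations, truncated at 80 as in Section 3.4
    when numerical warnings occur."""
    obs = np.asarray(x, dtype=float)[np.asarray(delta) == 1]
    obs = obs[obs < truncate_at]
    probs = np.arange(1, n_knots + 1) / (n_knots + 1)
    return np.quantile(obs, probs)
```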
In all of these simulations, we look at:

• plots of the pointwise averages and quantiles of the estimates, to study the bias and variability of the estimates;

• plots of the pointwise standard deviations of the estimates, to study the variability of the estimates;

• plots of the quantiles of the pointwise z values of the estimates, to assess the reliability of the estimates, where the pointwise z values are constructed as

$$z = \frac{\text{estimate} - \text{true value}}{\text{estimated SE}};$$

• plots of the empirical distribution of the p-values for testing that the failure times have exponential or Weibull marginal distributions and for testing that the two failure times have proportional hazards.

Plots appear at the end of this chapter and in the appendix.

5.2 The Univariate Case

We present our simulation results for the models (5.2), (5.3), and (5.4) below in Sections 5.2.1, 5.2.2, and 5.2.3, respectively.

5.2.1 Exponential Model

In this section we apply the True Model, Quantile Knots, and Flexible Knots estimation methods to the data generated from the exponential distribution (5.2). Figure 5.1 shows the histograms of the failure times, the censoring times, the non-censored data, and the censored data for those distributions. Since 60 is the 98th percentile of the non-censored observations under 25% censoring and the 99.7th percentile under 50% censoring, we only plot estimates for values of $t$ between 0 and 60; see Figures 5.2 to 5.4.

Note that in the Flexible Knots estimation method, if no knots are chosen and the coefficient of the log tail is zero, then an exponential model is fit. This occurred in 468 out of 500 simulations for 25% censoring and in 481 out of 500 for 50% censoring. Hence the estimates from the Flexible Knots method are usually the same as those from the True Model, so we do not include the results from the Flexible Knots method in these figures.

Figure 5.2 shows the quartiles and empirical mean of the estimated log hazards, together with the true log hazard function for the exponential model (5.2). As expected, we find that the estimates from the True Model method are the closest to the true log hazard. For the Quantile Knots method, using three knots seems better than using six knots, which is somewhat surprising: in general, we expect that the more knots are used, the smaller the bias of the estimates but the larger the variability. Here, the six knots estimates are more variable but they are also more biased.
We give the 97.5 and 2.5 percentiles and the quartiles of the pointwise z-values of the 500 estimated log hazards in Figure 5.4. We expect that they are between —2 and 2. We find that they are not out of this range too much for all estimates. Thus, pointwise confidence intervals based on the estimates would be reliable. Figure 5.5 presents the histograms and the qq-plots of Pi- Pi Zi = — , sef t i = 1,2,3, the standardized estimates of 3 with the Quantile method using three knots. Those figures show that the estimated 3s are approximately normal. For the method using six knots, the graphs look the same. We do not show them here. We also check three test procedures that the failure times follow an exponential distribution. Each procedure involves fitting the log hazard by same regression model and then using a x2 test that some regression parameters are equal to zero. First we fit 75 the data from model 5.2 to a Weibull model, that is, a{t\d) = falogit) + fo. Then we use (Pi / se(Px))2 as the test statistic to test Pi = 0. This is the usual para- metric test of exponential versus Weibull. When using the Quantile Knots method, equation (5.1) can be written as a{t\B) = p\h (<) + fohit) + PM*) when using three knots and a(t\0) = PMt) + ••• + PM*) + Peh(t) when using six knots. As discussed in Section 3.5.1, the test is equivalent to test that Pi = P3 = 0 when using three knots and Pi = • • • = P4 = Pe = 0 when using six knots. Using the x2 statistics discussed in Section 3.5.1, we can calculate p-values of the test for each simulation. In Figure 5.6, we give plots of the empirical distributions of the p-value to test the hypothesis that the failure times have an exponential distribution. The plots in the top row in Figure 5.6 show the p-values for the test of exponential versus Weibull, and those in the second and third rows are the p-values for the test of exponential versus B-spline model as in (5.4) when using three and six knots, respectively. Since the null hypothesis is true, the p-value should be uniformly distributed. Thus we expect that each plot would be a straight line. We find that the test works well and it does not depend on the knot selection or the censoring rates too much. However, the test rejects a little too often when the censoring rate is high. 76 5.2.2 Weibull Model As in Section 5.2.1, we first present the histograms of the data in Figure 5.7. From the histograms we can see that, for both censoring rates, most non-censored failure times were less than 150. In fact 150 is about the 98th percentile for 50% censoring and the 95th percentile for 25% censoring of the observations. So in Figures 5.8 to 5.10, we end our plots at time 150. As we mentioned in Section 3.4, the large range of the distribution of non-censored failure times may cause numerical problems for the Quantiles Knots and the Flexible Knots methods. This happens in our simulations for model (5.3) when we use the Flexible Knots method. Using the knots chosen by the heft algorithm we get four warning messages under 25% censoring and ten warning messages under 50% censoring. Moreover, we can not carry out the calculation for the estimated standard error for any simulated data sets due to the big ranges of the knots. Hence with the Flexible Knots methods we can only get the results of the estimated log hazards for simulated data sets that did not produce warning messages. We can not get any results related to the estimated standard errors. 
However, the Quantile Knots Method works well for the data as in model (5.3). Figure 5.8 shows summary plots of the estimated log hazards compared to the true log hazard of the model (5.3). We find that under 25% censoring the True Model and Quantile Knots estimation methods perform well. The estimates with the Flexible Knots method are highly biased when time is large, especially for high censoring. When we look at the pointwise standard deviations in Figure 5.9, as expected, the True Model method gives the smallest standard deviations. The Quantile Knots method using six knots has the largest standard deviations. When we compare the pointwise quartiles of our estimated standard error with the pointwise standard deviations of the estimated log hazards, we find that all the estimates 77 look fine, except that the estimates are a little bit too small after time 40. The censoring rate effect is the same as in the exponential model: the higher the censoring rate, the bigger the bias and variability of the estimates of the standard errors, particularly for small values of time. Figure 5.10 shows the pointwise 2.5 and 97.5 percentiles and quartiles of the empirical z values of the estimated log hazards. They are as expected for standard normal random variables. In Figure 5.11, we give the histograms and qq-plots of the estimates of 3 from the Quantile Knots method with six knots under 50% censoring. From the plots we can see that the estimates of 3 are approximately normal. For the True Model and three knots estimates, the graphs look the same. We do not present them here. For test procedures, we check not only the tests that data follow an exponential distribution as in Section 5.2.1, but also test that data follow a Weibull distribution. For the Weibull test we fit the data using bfs from the Quantile Knots estimation method. We use the test statistics discussed in Section 3.5.1 to test the hypothesis that the data follow a Weibull distribution. The test is equivalent to testing that /?i = 0 when using three knots and that (5\ = • • • = (3$ = 0 when using six knots. Figures 5.12 and 5.13 show the plots of the empirical distribution functions of the p-values for testing the null hypotheses that the distribution is exponential and that the distribution is Weibull, respectively. We expect a high proportion of small p-values in Figure 5.12 and a straight line in Figure 5.13. From Figure 5.12, we see that under 25% censoring the True Model method and the Quantile Knots method with six knots has the lowest power. Increasing censoring to 50% results in lower power for all methods. From Figure 5.13, we find that the test for Weibull works very well and it depends on neither the number of the knots nor the censoring rates. 78 5.2.3 The B-spline Model We first look at the histograms of the failure and censoring distributions, the censored failure times, and the non-censored failure times presented in Figure 5.14. From the histograms for the non-censored failure times, we see a high proportion of times on the interval [0,20], with the remaining times almost uniformly distributed on [20,150] for censoring rate 25% and on [20,100] for censoring rate 50%. As in the Weibull model (5.3) case, the large range of the distribution of non-censored failure times causes some numerical problems for the B-spline model when we use the Quantiles Knots and the Flexible Knots methods. 
Using the Flexible Knots method, that is, using the knots chosen by the heft algorithm, we can calculate only 113 esti- mated standard errors out of 500 simulations under 25% censoring and 306 under 50% censoring. The same problem occurs when we use six knots with the Quantile Knots method. To avoid those problems, when we use six knots we choose the knots which are the quantiles of the non-censored failure times less than 80. We present the simulation results in Figures 5.2.15 to 5.2.19. The results for the Flexible Knots method are based on the estimates that we can get. By comparing the estimated log hazards with the true hazard function in Figure 5.15, we fined that the biases of all estimates are fairly small when time is less than 70 under 25% censoring and when time is less than 40 under 50% censoring. We also find that those two numbers, 70 and 40, are close to the medians, 70.82 and 36.89, of the non- censored failure times in the two models. The estimates with the Flexible method perform well until time 100 under 25% censoring. We expect that using larger knots would get better estimates at tail part if there were not numerical calculation problems. Figure 5.16 gives the pointwise standard deviations of the estimated log hazards and the quantiles and the empirical mean of estimated standard errors. From those plots we find that the True Model method and using three knots give the similar pointwise 79 standard deviations which are smaller than the pointwise standard deviations with using six knots. But the the difference between using six knots and using three knots is no as big as with data generated according to the Exponential and Weibull models. Al l of the estimates have the biggest variability at time zero. The censoring rate effect is as the same as for data generated as in models (5.2) and (5.3). The biases of the estimated standard error are very small except with the Flexible Knots method. Next we look at the plot of the 2.5 and 97.5 quantiles and the quartiles of the pointwise z values of the estimated log hazards from the True Model and Quantile Knots methods in Figure 5.17. We find that under 25% censoring the estimates look reliable for time is less than 70. Under 50% censoring the reliability of the estimates is not too bad as the time is less than 40, which match what we see from Figure 5.2.14. Different from in the Exponential model and Weibull model cases, for the B-spline model, the estimates with using six knots look more reliable than those with three knots. It does not surprise us since we expect using six knots would obtain better estimates than using three knots. To see if we can rely on the number of knots used by heft algorithm, we also check the numbers of knots used by heft algorithm. The average of the numbers of knots used by heft in the 500 simulations is 5.18, which is close to 6. The histograms and the qq-plots of the estimated parameters fa, fa and fa, in Fig- ure 5.18, show that these estimates look normally distributed but not with mean zero. We also test the hypothesis that the data follow a Weibull distribution. For the Quantile Knots estimation method, the test procedures are the same to those we used for the data from the Weibull distribution (5.3) in Section 5.2.2. For the True Model method, we test fa = 0. For the Flexible Knots method we test fa = • • • = (3K_2 = 0, where K is the number of the knots chosen by the heft for each simulation. The test statistics are the same as we discussed in Section 3.5.1. 
Figure 5.19 shows that, under 25% censoring, by all the four estimation methods the test procedures for that the failure 80 times have a Weibull distribution have a high power except that by using six knots. 5.3 The Bivariate Case In this section we generate the paired data (T\, C\, T2, C 2 ) . T\ and T 2 are from (5.2), (5.3) or (5.4). We assume C\ = C2, and the censoring times are defined according to the distribution of Ti from (5.2), (5.3) or (5.4). We are interested in investigating our estimated log hazards ratios and the estimates of their standard errors. We will also test for proportional hazards. In Section 5.3.1 we introduce the method we use to generate dependent data for given marginal distributions. Then we present the simulation results in Sections 5.3.2 to 5.3.4. 5.3.1 Generation of Paired Dependent Data Clayton (1978) proposed a family of bivariate distributions for survival times. Let S\ and S2 denote the marginal survivor functions for each member of a pair of failure times (Ti,T 2). The joint survivor function for the Clayton model with parameter 9 is S(tut2,B) = { i " e-i tut2 > 0, 9 > 1 ti,t2 > 0.9 = 1 (5.6) Si(*i)S2(t2), When 9 = 1, the failure times are independent. The parameter 9 can be written in terms of two conditional hazard functions. If we denote the hazard for the conditional distribution of T\ given T2 = t2 and the hazard for Ti given T 2 > t2 by hTl\T2=lt2(ti) and hTl\T2>t2(ti), respectively, then in this model hTl\T2=t2{ti)/hTl\T2>M = 9. See Clayton (1978). 81 For given Si,S2 and 9 we generate bivariate data (Ti,T 2) satisfying the Clayton model using the following: (A) Generate Ti = ti according to Si, (B) Generate T 2 = t2 according to the conditional distribution of T 2 given Ti = tx. (A) can be carried out as in the univariate case. For (B), when 9 > 1, the conditional probability is (abusing notation slightly), pfrp ^ , , x P(T 2 > t2 and Tj = fr) dS(tu t2, 9)/dh P(Ti>t2\Ti-ti) - - ^ -eh [Sijti)1-9 + S2(t2y~d - l ] " ^ " 1 (1 - 9)Si(ti)-eS[(ti) S[{ti) = [Siiti)1'9 + s2(t2)l-e - l ] 1 ^ Si(ti)-e = 1 - FT2\Tl=tl(t2). We will generate U = u, a uniform [0,1], and solve u = i*T 2 | 7 i = t i ( £ 2 ) f° r ^2: u = l - P ( T 1 > t 2 | T i = i 1 ) = 1 - [5 1 ( t 1 ) 1 - e + 5 2 ( t 2 ) 1 - e - l ] T ^ 5 1 ( t 1 ) - e . Solving for S2(t2), we have 5 2(i 2) = {[(1 - u)1^ - l]Si{ti)l-e + l } 1 ^ . (5.7) Now solve Equation 5.7 for t2. Let p2 denote 1 minus the right part of Equation 5.7 equal p2. Then t2 is the p 2th quantile of the distribution function 1 — 5 2 . If S2 has a standard distribution say mean one exponential, then t2 =qexp (p2) in Splus. For nonstandard distributions, solve S2(t2) = 1 — p2 for t2 numerically. Thus, for given marginal survivor functions Si and S2 and the parameter 9, we can generate a random bivariate variable (Ti,T 2) = (ti,t2) satisfying the Clayton model by the following procedure. 82 1. Generate a random variable Ti = t\ according to Si, 2. Calculate Si(ti); 3. Generate a random variable U = it ~ f/(0,1) ; 4. Calculate P2 = 1 - {[(1 - U)1-^ - l]Si{tif-6 + l } ^ ' j 5. Solve S2(t2) = 1 — p2 for t2. Our choices of marginal distributions Si and S2 follow the proportional hazards and non-proportional hazards model as defined in Section 5.1. Figure 5.20 presents the plots of the correlation coefficients of Ti and T2 vs 9, where Tk has survivor function Sk, k = 1,2 In our simulation studies, we consider these models with 9 = 1,9 = 1.5, and 2.5. When 9 = 1, Ti and T2 are independent. 
5.3.2 Proportional Hazards Model We present our simulation results for the proportional hazards model in this section. We address the effect of the correlation level and the comparisons of the Cox, Same Knots, and Different Knots estimates of the log hazards ratio. Recall that the Cox estimate assumes that 9 = 1. Since we had studied the effect of the censoring rate on the estimates of log hazards in the univariate case, we only present the results for 25% censoring here and give the results for 50% censoring in the Appendix. Figure 5.21 shows the histograms of the two marginals and the non-censored failure times. We can see that there are very few non-censored observations larger than 150. For the same reason as in the univariate case, we only plot results in the the Proportional Hazards model up to time 150. 83 First we look at the plots of the pointwise quartiles and average of the 500 estimated log hazards ratios in Figure 5.22. Comparing them with the true log ratio, we find that the estimates from the Cox method have the smallest bias, as we expected. The plots show that at time values within the range of knots, the biases of the estimates from the Same Knots method and the Different Knots method are almost the same except that the estimates from the Different Knots method have bigger bias at time zero. It looks like the correlation level does not affect the Same Knots and Different Knots estimates too much. But it is interesting to see that the estimates from the Cox method have the smallest bias when 9 = 1.5. Then we look at the plots of the pointwise standard deviations of the 500 estimated log hazards ratios in Figure 5.23. We see that the estimates of the log hazards ratio from the Cox method have the smallest variability. There is no big difference between the variabilities of the estimated log hazards ratio from the other two methods. We compare Figure 5.23 with Figure 5.16, which shows the variability of the marginal log hazards. It seems that near time 0 the variabilities of the estimated log hazards ratio with the Same Knots and Different Knots methods are similar to the variabilities of the estimated marginal log hazards. When we compare the estimated standard errors with the pointwise standard de- viations in Figure 5.23, we find that the estimates of the standard error from the Cox methods have the smallest bias when 9 is 1 and 1.5, that is when the paired data are independent or only slightly correlated. But when 9 is 2.5, the estimated standard errors from the other two methods have smaller bias than the Cox method. The reason for the increasing bias of the estimated standard errors from the Cox method might be the violation of the assumption of independence of the two failure times. Over all three cor- relation levels, the biases of the estimated standard errors from the Same Knots method and the Different Knots method are very close to each other. 84 When we look at the plots of the quantiles of the pointwise z values of the estimated log hazards ratios, Figure 5.24, we note that the estimated log hazards ratios from the Cox method are more reliable than the estimates from the other two methods, and the best case for the Cox method is when 9 = 1.5. For the Same Knots and Different Knots methods, the reliability of the estimates of the log hazards ratio is fairly good when time t is between 5 and 70, which is about the range of the knots. 
Once more the reliability of the estimates from the Same Knots and Different Knots methods does not depend on the correlation levels, which does not surprise us because our estimated standard errors take the correlation into account. Finally, we test the hypothesis that the failure times have proportional hazards by first estimating the two log hazards using the Same Knots method. We then test Pij ~ 02j = 0, j = 1, 2, 3,4, 6, as indicated in Section 3.5.2. Using the x 2 test statistics discussed in Section 3.5.2, we can calculate p-values for the test. We look at Figure 5.25, the plots of the empirical distributions of the p-value for testing that the failure times have proportional hazards. The null hypothesis is rejected far more than we expected. We expected each plot would be a straight line since the null hypothesis is true and the p-value should be uniformly distributed. An interesting result is that the distribution curve is closest to the line y = x when the data are highly correlated (9 = 2.5). Through the simulation results, we would like to say that for the data as in the proportional hazards model, the Cox method works well. The performance of the Same Knots and Different Knots methods is reasonable. The estimated standard errors from all three methods are good but those from the Cox model deteriorate as 9 increases. 85 5.3.3 Non-proportional Hazards Model Figure 5.26 shows the histograms of the two marginals and the non-censored failure times for the data from the non-proportional hazards model. As with the proportional hazard model, we only plot results in the the non-proportional hazards model up to time 150. First we look at Figure 5.27, the plots of the true log ratio and the pointwise quantiles and average of the 500 estimated log hazards ratios. As we expected, the Cox method does not work. The estimates from the other two methods are very close. These es- timates have smallest bias when the data are independent. But it is hard to see any difference between 9 = 1.5 and 9 = 2.5. Next we look at Figure 5.28, the plot of the pointwise standard deviations of the 500 estimated log hazards ratios and the pointwise quartiles and mean of the 500 estimated standard errors. Since the Cox method does not work for estimating log hazards ratios for the non-proportional hazards data, we only discuss the results from the other two methods here. We find that the pointwise standard deviations from the Same Knots method are a little bit bigger than those from the Different Knots method. But the standard deviation from the Different Knots method has a bigger jump at t — 0. When we look at the variability of the estimated standard errors, we find that the bias difference between the Same Knots method and the Different Knots method is very small within the range of knots. The standard errors from the Same Knots and Different Knots methods are too small for t > 70. The Different Knots method performs slightly better in this range. The bias and the variability of the estimated standard errors do not depend on the correlation level of the paired data. We also check the plots of the quantiles of the pointwise z values of the estimated log hazards ratios from the three methods, Figure 5.29. Those plots show that the quantile curves for the Same Knots and Different Knots methods mainly lie in the range (—2, 2) 86 with the best performance when the data are independent. It is not surprising that the Cox method performs poorly since the failure times do not have proportional hazards. 
Now we consider testing the hypothesis that the failure times have proportional hazards. The test procedures are the same as those we used for the data sets from the proportional model in Section 5.3.2. When we look at the plot of the empirical distributions of the p-values, Figure 5.30, we find that the test power is not high, and the power is lowest when θ = 1.

5.3.4 Effect of the Number of Knots

In this section we study how the number of knots used in each simulation affects the estimates of the log hazards ratios and the standard errors. The data sets are generated as in the proportional hazards model and the non-proportional hazards model. For the data from the proportional hazards model, we use three, six, and nine knots to estimate the marginal hazards of T1 and T2. For the data generated from the non-proportional hazards model, we use three, six, and nine knots for failure time T1 but only three knots for T2. For the data from the proportional hazards model, the true hazard of T2 has three knots; see (5.5). In contrast, for the data from the non-proportional hazards model, T2 is exponentially distributed. As we found in the univariate simulation (Section 5.2), heft will rarely choose more than three knots to fit this exponential. Therefore, we do not need to study fits with more than three knots.

The simulation results are as we expected: the more knots, the smaller the bias of the estimates but the bigger their variability. We show the results in Figures 5.31 to 5.37. From those graphs we find that the estimates of the log hazards ratio using three knots have the smallest variability but the biggest bias. There is no big difference in bias between using six knots and using nine knots, but the estimates using nine knots have the biggest variability.

When we look at the estimated standard errors, we find that the estimates using six knots have the smallest bias and variability. Figures 5.39 to 5.42 show the plots of the quantiles of the pointwise z values of the estimated log hazards ratios from the three methods. It can be seen that the performance of the estimates using six knots is similar to those using nine knots and superior to those using three knots.

Figures 5.43 and 5.44 present the p-values for testing that the two failure times have proportional hazards for the data as in the proportional model and the non-proportional model, respectively. We find that, for both data sets, the number of knots does not affect the test too much, but the test performs better when T1 and T2 are dependent than when T1 and T2 are independent.

Overall, for the data generated as in the proportional hazards and non-proportional hazards models, using six knots seems better than using three or nine. Of course, the value of six may also depend on the sample size and the true data distribution.

5.4 Summary of Simulations

Through this simulation study, we find the following.

In the univariate case:

1. With all the estimation methods, the estimates of the log hazard perform well within the range of knots used. However, if the range of knots is large, the Flexible Knots method might cause some numerical calculation problems in estimating the log hazards and in calculating standard errors.

2. With the Quantile Knots method, the estimates of standard errors perform well except that they slightly underestimate the variability of the estimated log hazards when the censoring rate is high.
3. The censoring rate affects the bias and variability of the estimated log hazards. The smaller the censoring rate, the better the estimates.

4. For all marginal models, the True Model method gives the best estimates of the log hazards.

5. The estimates using three knots look a little better than those using six knots for the data generated as in the exponential distribution (5.2) and the Weibull distribution (5.3). But for the data generated as in the B-spline model (5.4), the estimates with six knots look more reliable.

6. The Flexible Knots method gives better estimated log hazards than the other methods when the ratio of the largest knot to the smallest knot is not too big. Otherwise the calculation of the estimated standard errors and the estimated log hazards causes numerical problems.

7. The test procedures for testing that the failure times follow an exponential or Weibull distribution perform well.

In the bivariate case:

1. With the Same Knots and Different Knots estimation methods, the estimates of the log hazards ratio perform well within the range of knots used. There is not a big difference between the estimates from the two methods.

2. For the data with proportional hazards, the Cox method gives the least-biased estimates of the log hazards ratio when the failure times are independent. For the data from the non-proportional hazards model, the Cox proportional hazards model does not work at all.

3. The test that the failure times have proportional hazards does not perform well. We expect a bigger sample size might improve the test procedure.

4. The correlation levels do not affect the estimated log hazards ratios too much, but they do affect the test that the failure times have proportional hazards. The test has the lowest power when the two failure times are independent. Further simulations are required to better understand the effects of correlation.

In this simulation we also checked the test procedure for the hypothesis that the two failure times have the same distribution, but there were some numerical problems when we calculated p-values, so we do not present those results here.

One more thing we would like to point out is that our formulae for the estimated standard errors assume non-random knots. The procedures used in our simulations have random knots, except for the True Model method in the univariate case. This might be a reason that the estimated standard errors are a bit small.

Figure 5.1: The histograms of the simulated failure times, censoring times, non-censored failure times, and observed censoring times for exponential data as in model (5.2).

Figure 5.2: Log hazard of the exponential distribution (5.2) and the pointwise quartiles and empirical mean of the 500 estimated log hazards.
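The data summarized in Figure 5.1 are right-censored exponential failure times. The exact specification of model (5.2) is given in Section 5.1 and is not repeated here, but a minimal sketch of this kind of simulation, under the assumption of an independent exponential censoring time tuned to a target censoring rate c, is:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def censored_exponential_sample(n, lam, c):
    """Right-censored exponential data with expected censoring rate c.

    T ~ Exp(lam); C ~ Exp(mu), with mu chosen so that P(T > C) = c.
    Returns observed times X = min(T, C) and indicators delta = 1{T <= C}.
    """
    mu = lam * c / (1.0 - c)                 # target censoring rate c
    T = rng.exponential(1.0 / lam, size=n)   # failure times
    C = rng.exponential(1.0 / mu, size=n)    # censoring times
    X = np.minimum(T, C)                     # observed times
    delta = (T <= C).astype(int)             # 1 = failure observed
    return X, delta
```

The choice mu = lam * c / (1 - c) uses the fact that, for independent exponentials with rates λ and μ, P(T > C) = μ/(λ + μ); whether model (5.2) actually used exponential censoring is an assumption of this sketch.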
Figure 5.3: The pointwise standard deviations of the estimated log hazard for the exponential model (5.2) and the pointwise quartiles and empirical mean of the 500 estimated standard errors.

Figure 5.4: The quantiles of the pointwise z values of the estimated log hazards for the exponential distribution (5.2).

Figure 5.5: Histograms and qq-plots of the normalized estimates of β from the Quantile Knots method with three knots for the exponential model (5.2).

Figure 5.6: Empirical distribution functions of the p-values for testing that the failure times are exponentially distributed.

Figure 5.7: The histograms of the simulated failure times, censoring times, non-censored failure times, and observed censoring times for Weibull data as in model (5.3).

Figure 5.8: Log hazard of the Weibull distribution (5.3) and the pointwise quartiles and empirical mean of the estimated log hazards.

Figure 5.9: The pointwise standard deviations of the estimated log hazards for the Weibull distribution (5.3) and the pointwise quartiles and empirical mean of the 500 estimated standard errors.

Figure 5.10: The quantiles of the pointwise z values of the estimated log hazards for the Weibull model (5.3).

Figure 5.11: The histograms and the qq-plots of the standardized estimates of the parameter β, with six knots, for the Weibull model (5.3).
Figure 5.12: Empirical distribution functions of the p-values for testing the hypothesis that the failure time distribution is exponential.

Figure 5.13: Empirical distribution functions of the p-values for testing that the failure times have a Weibull distribution.

Figure 5.14: The histograms of the simulated failure times, censoring times, non-censored failure times, and observed censoring times for the data as in the B-spline model (5.4).

Figure 5.15: Log hazard of the B-spline data as in model (5.4) and the pointwise quartiles and empirical mean of the estimated log hazards.

Figure 5.16: The pointwise standard deviations of the estimated log hazards for the B-spline model (5.4) and the pointwise quartiles and empirical mean of the estimated standard errors.

Figure 5.17: The quantiles of the pointwise z values of the estimated log hazards for the B-spline model (5.4).

Figure 5.19: Empirical distribution functions of the p-values for testing the hypothesis that the failure times have a Weibull distribution.

Figure 5.20: The plots of the correlation coefficients versus θ for the data as in the proportional hazards and non-proportional hazards models.

Figure 5.21: The histograms of the marginal failure times and non-censored marginal failure times for the data generated according to the proportional hazards model in Section 5.1. The censoring rate for the "treatment" is 25%.
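Figure 5.20 relates the association parameter θ to the correlation of the simulated pairs, with θ = 1 corresponding to independence. The generating mechanism of Section 5.1 is not reproduced in this chapter; one standard mechanism consistent with θ = 1 meaning independence is Clayton's (1978) bivariate model, so the sketch below should be read as an illustration under that assumption rather than as the thesis's exact recipe. It draws a dependent pair of uniforms by the conditional-distribution method and transforms them to exponential margins; the marginal rates are placeholders.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def clayton_exponential_pairs(n, theta, lam1=1.0, lam2=1.0):
    """Pairs (T1, T2) with exponential margins and Clayton dependence.

    theta = 1 gives independence; larger theta gives stronger positive
    association (Clayton 1978).  alpha = theta - 1 is the parameter of
    the copula C(u, v) = (u^-alpha + v^-alpha - 1)^(-1/alpha).
    """
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    alpha = theta - 1.0
    if alpha == 0.0:                         # independence
        v = w
    else:                                    # invert the conditional cdf of V given U = u
        v = (u ** (-alpha) * (w ** (-alpha / (alpha + 1.0)) - 1.0) + 1.0) ** (-1.0 / alpha)
    t1 = -np.log(1.0 - u) / lam1             # transform to exponential margins
    t2 = -np.log(1.0 - v) / lam2
    return t1, t2
```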
Figure 5.22: The true log hazards ratio of the data from the proportional hazards model defined in Section 5.1 and the pointwise quartiles and empirical mean of the 500 estimated log hazards ratios.

Figure 5.23: The standard deviations of the estimated log hazards ratio for data generated according to the proportional hazards model. The censoring rate is 25% for the "treatment".

Figure 5.24: The quantiles of the pointwise z values of the estimated log hazards ratio for the data generated according to the proportional hazards model. The censoring rate is 25% for the "treatment".

Figure 5.25: Empirical distribution functions of the p-values for testing the hypothesis that the two failure times of the data generated as in the proportional hazards model in Section 5.1 have proportional hazards.

Figure 5.26: The histograms of the marginal failure times and the non-censored marginal failure times for the data from the non-proportional hazards model in Section 5.1. The censoring rate for the "treatment" is 25%.

Figure 5.27: The true log hazards ratio of the data from the non-proportional hazards model defined in Section 5.1 and the pointwise quartiles and empirical mean of the estimated log hazards ratios.

Figure 5.28: The standard deviations of the estimated log hazards ratio for data generated according to the non-proportional hazards model. The censoring rate is 25% for the "treatment".

Figure 5.29: The quantiles of the pointwise z values of the estimated log hazards ratio for the data generated according to the non-proportional hazards model. The censoring rate is 25% for the "treatment".
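Plots such as Figures 5.25 and 5.30 assess a test by comparing the empirical distribution function of the simulated p-values with the uniform reference line y = x: under a true null hypothesis the p-values should be Uniform(0, 1), a curve above the line indicates over-rejection, and under an alternative, mass near zero indicates power. A minimal sketch of this check:

```python
import numpy as np

def pvalue_ecdf(pvals, grid=None):
    """Empirical CDF of simulated p-values, evaluated on a grid in [0, 1].

    Under a true null the curve should track y = x; points above the
    line indicate over-rejection at that significance level.
    """
    pvals = np.sort(np.asarray(pvals))
    if grid is None:
        grid = np.linspace(0.0, 0.2, 201)   # the plots here focus on [0, 0.2]
    ecdf = np.searchsorted(pvals, grid, side="right") / len(pvals)
    return grid, ecdf
```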
Figure 5.30: Empirical distribution functions of the p-values for testing the hypothesis that the two failure times of the data generated as in the non-proportional hazards model in Section 5.1 have proportional hazards.

Figure 5.31: The true log hazards ratio of the data from the proportional hazards model defined in Section 5.1 and the pointwise quartiles and empirical mean of the estimated log hazards ratios by the Same Knots method.

Figure 5.32: The true log hazards ratio of the data from the proportional hazards model defined in Section 5.1 and the pointwise quartiles and empirical mean of the estimated log hazards ratios by the Different Knots method.

Figure 5.33: The true log hazards ratio of the data from the non-proportional hazards model defined in Section 5.1 and the pointwise quartiles and empirical mean of the estimated log hazards ratios by the Same Knots method.

Figure 5.34: The true log hazards ratio of the data from the non-proportional hazards model defined in Section 5.1 and the pointwise quartiles and empirical mean of the estimated log hazards ratios by the Different Knots method.

Figure 5.35: The standard deviations of the estimated log hazards ratio by the Same Knots method for data generated according to the proportional hazards model. The censoring rate is 25% for the "treatment".

Figure 5.36: The standard deviations of the estimated log hazards ratio by the Different Knots method for data generated according to the proportional hazards model. The censoring rate is 25% for the "treatment".
Figure 5.37: The standard deviations of the estimated log hazards ratio by the Same Knots method for data generated according to the non-proportional hazards model. The censoring rate is 25% for the "treatment".

Figure 5.38: The standard deviations of the estimated log hazards ratio by the Different Knots method for data generated according to the non-proportional hazards model. The censoring rate is 25% for the "treatment".

Figure 5.39: The quantiles of the pointwise z values of the estimated log hazards ratio by the Same Knots method for the data generated according to the proportional hazards model. The censoring rate is 25% for the "treatment".

Figure 5.40: The quantiles of the pointwise z values of the estimated log hazards ratio by the Different Knots method for the data generated according to the proportional hazards model. The censoring rate is 25% for the "treatment".

Figure 5.41: The quantiles of the pointwise z values of the estimated log hazards ratio by the Same Knots method for the data generated according to the non-proportional hazards model. The censoring rate is 25% for the "treatment".

Figure 5.42: The quantiles of the pointwise z values of the estimated log hazards ratio by the Different Knots method for the data generated according to the non-proportional hazards model. The censoring rate is 25% for the "treatment".

Figure 5.43: Empirical distribution functions, by the Same Knots method, of the p-values for testing the hypothesis that the two failure times of the data generated as in the proportional hazards model in Section 5.1 have proportional hazards.

Figure 5.44: Empirical distribution functions, by the Same Knots method, of the p-values for testing the hypothesis that the two failure times of the data generated as in the non-proportional hazards model in Section 5.1 have proportional hazards.

Chapter 6

Conclusion

In this thesis a flexible parametric model, the log hazard regression model for paired censored failure times, is proposed. In this model, B-splines are used to estimate the log hazards of the marginal failure times and the log hazards ratio of the two failure times.
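Concretely, each marginal log hazard is modeled as a linear combination of B-spline basis functions, log h(t) = Σj βj Bj(t), so the estimated hazard exp{Σj βj Bj(t)} is automatically positive. The following sketch evaluates such a fit; it ignores the tail restrictions imposed on the regression space in Chapter 3, and the knots and coefficients are illustrative, not taken from the thesis.

```python
import numpy as np
from scipy.interpolate import BSpline

# Illustrative cubic B-spline log hazard: log h(t) = sum_j beta_j B_j(t).
degree = 3
interior = np.array([5.0, 20.0, 70.0])        # hypothetical interior knots
# Full knot vector: repeat each boundary knot degree + 1 times.
knots = np.concatenate(([0.0] * (degree + 1), interior, [150.0] * (degree + 1)))
n_basis = len(knots) - degree - 1             # number of basis functions
beta = np.linspace(-4.5, -3.5, n_basis)       # hypothetical coefficients

log_hazard = BSpline(knots, beta, degree)     # callable spline function
t = np.linspace(0.0, 150.0, 301)
hazard = np.exp(log_hazard(t))                # hazard is the exponential of the spline
```

Numerically integrating `hazard` over `t` then gives the cumulative hazard, and hence the survival function.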
Consistent estimates of the standard errors for the estimated marginal log hazards and the log hazards ratio are presented as well. Based on this model we propose test procedures for the four hypotheses that the marginals follow an exponential or Weibull distribution and that the two failure times have the same distribution or have proportional hazards.

A simulation study indicates that when the censoring rate is not too high, the estimates of the log hazards and the log hazards ratio perform well within the range of the knots used, and the estimates of standard errors for the estimated log hazards ratio are good, though a little small. This underestimation is bigger at time zero.

The simulation study also shows that the tests that the marginal failure times follow an exponential or Weibull distribution perform very well. But the tests of the hypotheses that the two failure times have proportional hazards or have the same distribution tend to over-reject the null hypothesis when it is true. Also, the test is not powerful.

The model was applied to the Diabetic Retinopathy Study data to assess the effectiveness of laser photocoagulation in delaying the onset of blindness for diabetic retinopathy patients. The conclusion of the analysis with the log hazard regression model agrees with those from the standard parametric models fitted by Huster et al. (1989): there is a significant laser photocoagulation treatment effect. We used the log hazard model to test the hypotheses that the failure times of the treated eye and the untreated eye have an exponential or a Weibull distribution. The test results indicate that there is significant evidence to reject the hypothesis that the failure time of the treated eye follows either an exponential or a Weibull distribution. The test results also show that there is significant evidence to reject the hypothesis that the failure time of the untreated eye follows an exponential distribution. But the test that the failure time of the untreated eye follows a Weibull distribution yields a marginal result; the p-value is 0.053.

The p-value for testing the hypothesis that the failure times of the treated eye and the untreated eye have proportional hazards is very high, 0.25. This, along with the tendency of the test to over-reject (see Section 5.3), indicates that it is reasonable to assume that the failure times of the treated eye and the untreated eye have proportional hazards. This result supports the analysis of Huster et al. (1989).

We note that knot selection is a basic step in using the log hazard regression model. The simulation study shows that our Quantile Knots procedure performs well when the censoring rate is not too high. Even when we restrict the range of the knots, the estimates are fairly reliable within the range of the knots used.

We cannot always choose knots using Kooperberg et al.'s stepwise addition and stepwise deletion method, since the chosen knots often cause numerical problems (see Section 5.2). However, the knots they select give some indication of the number and the location of the knots that should be used. From the experience of our simulation, we give the following suggestions for using the log hazard regression models to analyze paired data:

1. Use the heft algorithm to choose a set of knots for each failure time, as we mentioned in Section 3.4.
If there is no warning message from the heft code and the calculation of estimated standard errors can be performed, then use the Different Knots method to estimate the log hazards ratio with the two sets of knots. If there is any warning message, or the calculation of the estimated standard errors is impossible for one or both failure times, choose the knots for those failure times by the quantile method discussed in Section 3.4, using the number of knots selected by heft.

2. Test for exponential or Weibull. Fit the data with the log hazards model determined by the selected knots above. Then use the methods mentioned in Section 3.5 to test whether the marginals have exponential or Weibull distributions. If they do have those standard parametric distributions, use those distributions for modeling the marginal failure times.

3. To test proportional hazards, choose one set of knots for the two failure times. Use the quantiles of the non-censored failure times as discussed in Section 3.4. The number of knots should be between max{K1, K2} and K1 + K2, where K1 and K2 are the numbers of knots selected by heft for T1 and T2, respectively. Then use the method given in Section 3.5 to test whether the two failure times have proportional hazards.

From the definition of our log hazard regression model, we can generalize the model to multivariate censored survival data easily. We can also include covariates in this model. If we assume that the effects of the covariates are independent of time, then the calculation of the estimates is straightforward based on the heft algorithm: we can model the log hazard as a function of time plus a function of the covariate. If the effects of the covariates are dependent on time, then the log hazard would be modeled as a bivariate function of time and the covariate; however, the calculations of the estimates would be more complicated. To simplify calculations somewhat, Kooperberg et al. modeled this bivariate function using linear B-splines and their tensor products.

There are some unanswered questions. First, why is the variability of the estimate of the log hazard or log hazards ratio big at time zero? We might reduce the problem if we restrict the B-spline functions to be constant between zero and the smallest knot. Second, how do we choose the range of knots? As we see from our simulations, the estimates perform better within the range of the knots than outside of it. We need more simulations to find a better way to determine the range of knots. Third, why do the test procedures for testing that the failure times have proportional hazards or have the same distribution work poorly, while the test procedures for testing that the marginal failure times follow an exponential or Weibull distribution work well? We expect that a big sample size might improve the behavior of the test procedures. Finally, why does a high correlation level yield a better test for testing that the failure times have proportional hazards? We need more simulations to understand the effect of the correlation level.

Bibliography

[1] Abrahamowicz, M., Ciampi, A., and Ramsay, J. O. (1992). Nonparametric Density Estimation for Censored Survival Data: Regression-Spline Approach. The Canadian Journal of Statistics 20, 171 - 185.

[2] Andersen, P. K. and Gill, R. D. (1982). Cox's Regression Model for Counting Processes: a Large Sample Study. The Annals of Statistics 10, 1100 - 1120.

[3] De Boor, C. (1978). A Practical Guide to Splines. Applied Mathematical Sciences 27, Springer-Verlag, New York.
[4] Clayton, D. G. (1978). A Model for Association in Bivariate Life Tables and Its Application in Epidemiological Studies of Familial Tendency in Chronic Disease Incidence. Biometrika 65, 141 - 151.

[5] Dabrowska, D. M. (1988). Kaplan-Meier Estimate on the Plane. The Annals of Statistics 16, 1475 - 1489.

[6] Diabetic Retinopathy Study Research Group (1981). Diabetic retinopathy study. Investigative Ophthalmology and Visual Science 21, 149 - 226.

[7] Diggle, P. J., Liang, K. Y., and Zeger, S. L. (1996). Analysis of Longitudinal Data. Clarendon Press, Oxford.

[8] Eilers, P. H. C. and Marx, B. D. (1996). Flexible Smoothing with B-splines and Penalties. Statistical Science 11, 89 - 102.

[9] Eubank, R. L. (1988). Spline Smoothing and Nonparametric Regression. M. Dekker, New York.

[10] Ferguson, T. S. (1996). A Course in Large Sample Theory. Chapman and Hall, London.

[11] Fleming, T. R. and Harrington, D. P. (1991). Counting Processes and Survival Analysis. John Wiley & Sons, New York.

[12] Gill, R. D. (1992). Multivariate Survival Analysis. Theory Probab. Appl. 37, 18 - 31.

[13] Holt, J. D. and Prentice, R. L. (1974). Survival Analysis in Twin Studies and Matched Pair Experiments. Biometrika 61, 17 - 30.

[14] Huster, W. J., Brookmeyer, R., and Self, S. G. (1989). Modeling Paired Survival Data with Covariates. Biometrics 45, 145 - 156.

[15] Kooperberg, C., Stone, C. J., and Truong, Y. K. (1995). Hazard Regression. Journal of the American Statistical Association 90, 78 - 94.

[16] Kooperberg, C. (1998). Bivariate Density Estimation with an Application to Survival Analysis. Journal of Computational and Graphical Statistics 7, 322 - 341.

[17] Lawless, J. F. (1982). Statistical Models and Methods for Lifetime Data. Wiley, New York.

[18] Murphy, R. P. and Patz, A. (1978). New Concepts in Management of Retinal Vascular Disorder. Ophthalmology Update, International Congress Series No. 508, K. Mizuno and Y. Mitsui (eds), 115 - 125. Amsterdam: Excerpta Medica.

[19] Oakes, D. (1982). A Model for Association in Bivariate Survival Data. Journal of the Royal Statistical Society, Series B 44, 414 - 428.

[20] Sen, P. K. and Singer, J. M. (1993). Large Sample Methods in Statistics. Chapman and Hall, New York.

[21] Shikin, E. V. (1995). Handbook on Splines for the User. CRC Press, Boca Raton.

[22] Stone, C. J., Hansen, M. H., Kooperberg, C., and Truong, Y. K. (1997). Polynomial Splines and their Tensor Products in Extended Linear Modeling. The Annals of Statistics 25, 1371 - 1470.

[23] Wei, L. J., Lin, D. Y., and Weissfeld, L. (1989). Regression Analysis of Multivariate Incomplete Failure Time Data by Modeling Marginal Distributions. Journal of the American Statistical Association 84, 1065 - 1073.

[24] Wei, L. J. and Lachin, J. M. (1984). Two-Sample Asymptotically Distribution-Free Tests for Incomplete Multivariate Observations. Journal of the American Statistical Association 79, 653 - 661.

Appendix A

Simulation Results for Bivariate Data with 50% Censoring

Figures A.1 to A.10 show the simulation results for the proportional and non-proportional data generated as in Section 5.1 with 50% censoring.
Figure A.1: The histograms of the marginal failure times and non-censored marginal failure times for the data generated according to the proportional hazards model in Section 5.1. The censoring rate for the "treatment" is 50%.

Figure A.2: The true log hazards ratio of the data as in the proportional hazards model defined in Section 5.1 and the pointwise quartiles and empirical mean of the 500 estimated log hazards ratios.

Figure A.3: The standard deviations of the estimated log hazards ratio for data generated according to the proportional hazards model. The censoring rate is 50% for the "treatment".

Figure A.4: The quantiles of the pointwise z values of the estimated log hazards ratio for the data generated according to the proportional hazards model. The censoring rate is 50% for the "treatment".

Figure A.5: Empirical distribution functions of the p-values for testing the hypothesis that the two failure times of the data generated as in the proportional hazards model in Section 5.1 have proportional hazards.

Figure A.6: The histograms of the marginal failure times and the non-censored marginal failure times for the data from the non-proportional hazards model in Section 5.1. The censoring rate for the "treatment" is 50%.

Figure A.7: The true log hazards ratio of the data as in the non-proportional hazards model defined in Section 5.1 and the pointwise quartiles and empirical mean of the 500 estimated log hazards ratios.

Figure A.8: The standard deviations of the estimated log hazards ratio for data generated according to the non-proportional hazards model. The censoring rate is 50% for the "treatment".
Figure A.9: The quantiles of the pointwise z values of the estimated log hazards ratio for the data generated according to the non-proportional hazards model. The censoring rate is 50% for the "treatment".

Figure A.10: Empirical distribution functions of the p-values for testing the hypothesis that the two failure times of the data generated as in the non-proportional hazards model in Section 5.1 have proportional hazards.