Case-control Studies with Misclassified Exposure: A Bayesian Approach

by

Refik Saskin

B.Sc., Brock University, 1998

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in THE FACULTY OF GRADUATE STUDIES (Department of Statistics)

We accept this thesis as conforming to the required standard

The University of British Columbia

August 2000

© Refik Saskin, 2000

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Abstract

When dealing with case-control data, it is often the case that the exposure to a risk factor of interest is subject to misclassification. Methods for correcting the odds-ratio are available when the misclassification probabilities are known. In practice, however, good guesses rather than the exact values are available for these probabilities. We show that when these guesses are treated as exact, even the smallest differences between the true and guessed values can lead to very erroneous odds-ratio estimates. This problem is alleviated by a Bayesian analysis which incorporates the uncertainty about the misclassification probabilities as prior information. In practice, data on the exposure variable are quite often available from more than one source. We review three methods for improving the odds-ratio estimates that combine information from two sources.
We then develop a Bayesian approach which is based on latent class analysis, and apply it to the sudden infant death syndrome data. The inference required the use of the Metropolis-Hastings algorithm and/or the Gibbs sampler.

Contents

Abstract
Contents
List of Tables
List of Figures
Acknowledgements
Dedication
1 Introduction
2 One Test Case
  2.1 Definitions and Terminology
    2.1.1 Data and Setup
  2.2 Known Sensitivity and Specificity
  2.3 The Gibbs Sampler
    2.3.1 Examples
  2.4 The Metropolis-Hastings Algorithm
    2.4.1 Large Sample Case
    2.4.2 Finite Sample Case
    2.4.3 Examples
  2.5 Discussion
3 Two Test Case
  3.1 Data and Setup
  3.2 Correction Methods
    3.2.1 The Marshall and Graham Method
    3.2.2 The Drews, Flanders and Kosinski Method
    3.2.3 The Kaldor and Clayton Method
  3.3 Our Method
    3.3.1 Examples
  3.4 An Application to Real Data
  3.5 Discussion
4 Conclusion
Bibliography

List of Tables

2.1 Distribution of subjects in a case-control study by disease status and an imperfect measurement of a dichotomous exposure. Symbol + denotes apparently exposed, and - denotes apparently unexposed.
2.2 (a) True distribution of exposure among cases and controls. (b) Observed distribution of exposure, given 10% misclassification among cases and controls. (c) Estimated distribution of exposure assuming 14% misclassification among cases and controls.
2.3 Distribution of observed and latent data when one imperfect classification procedure is used. Symbol + denotes exposed, and - denotes unexposed.
3.1 Distribution of subjects in a case-control study when two tests are used to assess the exposure status. Symbol + denotes exposed and - denotes unexposed.
3.2 Distribution of observed and latent data when two imperfect tests are used, together with the contribution to the likelihood each combination of the observed and latent data makes.
Symbol + denotes exposed, and - denotes unexposed.
3.3 True parameter values and the empirical coverage of the 80% HPD credible intervals for log φ for the sample sizes Ni = 200, Ni = 800, and Ni = 3200, i = 0, 1. Numbers in brackets represent mean interval length. For the empirical coverage and the mean length, 1000 data sets were simulated.
3.4 A pseudorandom sample of 226 SIDS cases and 226 controls from the NICHD study. Data was classified using medical record (MR) and interview (Int) data.
3.5 Median estimates of the sensitivities, specificities and the logarithm of the odds-ratio, and 95% HPD credible intervals for the logarithm of the odds-ratio, using the Gibbs sampler.
3.6 95% HPD credible intervals for the sensitivities and specificities of the interview data and the medical records.
3.7 Estimates of the sensitivities, specificities and the logarithm of the odds-ratio, and 95% confidence intervals for the logarithm of the odds-ratio, using the EM algorithm (Drews et al. (1993)).

List of Figures

2.1 Post burn-in output of the five independent chains of the Gibbs sampler for sensitivity, specificity, prevalence of exposure in controls and prevalence of exposure in cases in Example 1. Data was simulated for N0 = N1 = 200.
2.2 Post burn-in output of the five independent chains of the Gibbs sampler for sensitivity, specificity, prevalence of exposure in controls and prevalence of exposure in cases in Example 1. Data was simulated for N0 = N1 = 800.
2.3 Post burn-in output of the five independent chains of the Gibbs sampler for sensitivity, specificity, prevalence of exposure in controls and prevalence of exposure in cases in Example 1. Data was simulated for N0 = N1 = 3200.
2.4 Post burn-in output of the five independent chains of the Gibbs sampler for sensitivity, specificity, prevalence of exposure in controls and prevalence of exposure in cases in Example 2.
Data was simulated for N0 = N1 = 200.
2.5 Post burn-in output of the five independent chains of the Gibbs sampler for sensitivity, specificity, prevalence of exposure in controls and prevalence of exposure in cases in Example 2. Data was simulated for N0 = N1 = 800.
2.6 Post burn-in output of the five independent chains of the Gibbs sampler for sensitivity, specificity, prevalence of exposure in controls and prevalence of exposure in cases in Example 2. Data was simulated for N0 = N1 = 3200.
2.7 The support of p(α, β | θ0, θ1). The shaded rectangles comprise A(θ0, θ1).
2.8 Posterior distribution of log φ in Example 1. The first column gives the M-COR posterior, the second column gives the U-COR posterior, and the third column gives the E-COR posterior. The rows correspond to sample sizes N0 = N1 = 200, N0 = N1 = 800, N0 = N1 = 3200 and N0 = N1 = ∞.
2.9 80% highest posterior density credible intervals for log φ in Example 1. The solid vertical lines represent credible intervals for forty data sets with sample sizes N0 = N1 = 3200, the solid horizontal line indicates the true value of log φ and the dashed horizontal line indicates log φ'. The first panel gives the M-COR intervals, the second panel gives the U-COR intervals, and the third panel gives the E-COR intervals.
2.10 Prior and posterior samples of the sensitivity and specificity for the datasets in Example 1.
2.11 Posterior distribution of log φ in Example 2. The first column gives the M-COR posterior, the second column gives the U-COR posterior, and the third column gives the E-COR posterior. The rows correspond to sample sizes N0 = N1 = 200, N0 = N1 = 800, N0 = N1 = 3200 and N0 = N1 = ∞.
2.12 80% highest posterior density credible intervals for log φ in Example 2.
The solid vertical lines represent credible intervals for forty data sets with sample sizes N0 = N1 = 3200, the solid horizontal line indicates the true value of log φ and the dashed horizontal line indicates log φ'. The first panel gives the M-COR intervals, the second panel gives the U-COR intervals, and the third panel gives the E-COR intervals.
2.13 Posterior distribution of log φ in Example 3. The first column gives the M-COR posterior, the second column gives the U-COR posterior, and the third column gives the E-COR posterior. The rows correspond to sample sizes N0 = N1 = 200, N0 = N1 = 800, N0 = N1 = 3200 and N0 = N1 = ∞.
2.14 80% highest posterior density credible intervals for log φ in Example 3. The solid vertical lines represent credible intervals for forty data sets with sample sizes N0 = N1 = 3200, the solid horizontal line indicates the true value of log φ and the dashed horizontal line indicates log φ'. The first panel gives the M-COR intervals, the second panel gives the U-COR intervals, and the third panel gives the E-COR intervals.
3.1 Post burn-in output of the five independent chains of the Gibbs sampler for sensitivities, specificities, prevalence of exposure in controls and prevalence of exposure in cases in Example 1. Data was simulated for N0 = N1 = 200.
3.2 Posterior samples of the logarithm of the odds-ratio for the datasets in Example 1.
3.3 80% highest posterior density credible intervals for log φ in Example 1. The solid vertical lines represent credible intervals for forty data sets and the solid horizontal line indicates the true value of log φ. The panel (a) gives the intervals for the sample size N0 = N1 = 200, the panel (b) gives the intervals for the sample size N0 = N1 = 800, and the panel (c) gives the intervals for the sample size N0 = N1 = 3200.
3.4 Posterior samples of the logarithm of the odds-ratio for the sample size N0 = N1 = 800 in Example 1.
(a) Beta(1, 1) prior used for all six parameters. (b) Beta(7.55, 2.64) prior used for αj and βj, j = 1, 2, while a Beta(1, 1) prior was used for π0 and π1 (the histogram is identical to that shown in Figure 3.2 (b)). (c) Beta prior for each parameter was chosen so that the mode γ is centred at the true value and 0.95 probability is assigned to the interval γ ± 0.05.

Acknowledgements

I would like to thank Dr. Paul Gustafson for his consistent guidance, help and support. His patience and expert advice made the completion of my research possible. I would also like to thank Dr. Nhu Lee for his generous input and willingness to share his expertise with me. Many thanks to the entire UBC Statistics Department for making these last two years most pleasurable. Also, many thanks to my family for all their love, support and encouragement. Finally, I would like to thank my girlfriend Marnie for her continued support and patience.

REFIK SASKIN

The University of British Columbia
August 2000

To my mother, God rest her soul.

Chapter 1

Introduction

Misclassification of exposure is one of the most serious problems in epidemiology. Even on the smallest scale, exposure misclassification can substantially bias estimates of the relative risk. In particular, nondifferential misclassification of a dichotomous exposure variable always tends to attenuate observed exposure-disease relationships. This attenuation can be particularly large for rare exposures and imperfect specificity, or highly prevalent exposure and imperfect sensitivity of the test used to assess the exposure status. The corrected estimates can be obtained easily if the sensitivity and specificity of the classification procedure are known, as was illustrated by Barron (1977) and Greenland and Kleinbaum (1983).
In most cases, however, these quantities are very difficult to estimate, because no measure of true exposure is available, and even if available, such "gold standard" measures are often very expensive and/or invasive. A more common scenario is when good guesses rather than exact values are available for the sensitivity and specificity. If one treats these guesses as exact, Marshall (1989) has shown that even small discrepancies between the true and guessed values of the sensitivity and specificity can lead to very erroneous odds-ratio estimates. The effects of ignoring misclassification and the methods which correct for it have received considerable attention in the literature. Thomas, Stram and Dwyer (1993) and Walter and Irwig (1988) provide reviews.

Although one can rarely obtain completely accurate exposure data for epidemiologic studies, data on the exposure variable are often available from more than one source. For instance, exposure data are often available from both medical records and interviews, or from two different diagnostic tests performed on a study subject. A variety of methods have been suggested for improving the odds-ratio estimates by combining data from two information sources. One simple approach, as suggested by Marshall and Graham (1984), is to restrict the analysis to subjects for whom the two sources agree about the exposure. As pointed out by Walter (1984), however, this method may result in a substantial loss of precision due to the exclusion of a potentially large proportion of study subjects. Hui and Walter (1980) developed a method for estimating the sensitivity and specificity of two classification schemes in two groups (such as controls and cases) when, given the true exposure status, misclassification is nondifferential and the two schemes classify study subjects independently. More recent work by Drews, Flanders and Kosinski (1993) has extended their method to more complicated settings.
They perform numeric maximization rather than providing a closed-form solution. The approach of Drews et al. (1993) requires that the analyst specify the degree of dependency between the two classification schemes. Since, in practice, a gold standard may be unavailable, impractical, or itself measured with error, this method could have limited applicability. In their discussion of latent class analysis, Kaldor and Clayton (1985) give an example of data where replicate measurements are available for some or all of the cases and/or controls. They demonstrate that obtaining replicate measurements on even a modest proportion of subjects leads to substantially improved estimation of the odds-ratio.

In the following chapter we will consider a case-control setting where one imperfect classification scheme is used to assess the exposure status. In this context, we will introduce a simple approach to Bayesian inference about the odds-ratio when the misclassification probabilities are known. Then, we will outline a Bayesian approach to odds-ratio inference with only partial knowledge of the classification probabilities. Two computational methods will be presented that enable the inference on the odds-ratio. Finally, examples will be presented illustrating the use of these methods.

In Chapter 3 we will look at a case-control setting where the exposure status is measured by two imperfect classification schemes. We will briefly review three existing methods to correct the odds-ratio estimates. Then, we will introduce a Bayesian approach. It is an extension of the Joseph, Gyorkos and Coupal (1995) method for inference on the sensitivities and specificities of two classification procedures applied to one population. We will examine the validity of our method by applying it to nine hypothetical case-control studies, where the true odds-ratios are known.
Finally, we will apply our method to a case-control study of sudden infant death syndrome, and compare the results with those obtained using the method of Drews et al. (1993).

Chapter 2

One Test Case

2.1 Definitions and Terminology

In this section we present some key definitions and terminology, making the subsequent sections easier to follow.

In a "case-control study" the investigator selects "cases" of a disease, and a comparison group, called "controls". The cases and controls are then compared with respect to a specific risk factor, often referred to as "exposure". The samples of the cases and controls are usually regarded as independent.

"Misclassification" or "measurement error" refers to any difference between the true value of a variable and its measured value. The term misclassification is almost always used in the context of categorical variables, and measurement error when one refers to continuous variables. Errors can be "random" or "systematic". If the error is not randomly distributed around its true value, we say that it is systematic. Overestimating everyone's exposure level by a factor of two would be an example of a systematic error.

Systematic and random errors can be either "differential" or "nondifferential". If the misclassification in the exposure depends upon the disease status, then the misclassification is differential. If, on the other hand, the misclassification in the exposure does not depend on the disease status, we say that the misclassification is nondifferential. A more rigorous definition of nondifferential misclassification is that the risk of disease depends only on the true exposure, and given this true exposure, the measured exposure does not add any additional information.

In clinical medicine and epidemiology, tests are often used to determine the presence or absence of a disease, or whether or not someone was exposed to a risk factor of interest.
Ideally, those who are exposed (or with disease) should be classified as being exposed (or with disease), and those who are not exposed (or without disease) should be classified as unexposed (or without disease). The "sensitivity" and "specificity" of a test consider how often such correct classification occurs. The sensitivity of a test is the percentage of exposed study subjects (or with disease) classified as exposed (or with disease). The specificity of a test is the percentage of unexposed study subjects (or without disease) classified as unexposed (or without disease). Moreover, the "false positive rate" (FPR) of a test is the percentage of unexposed study subjects (or without disease) who are classified as exposed (or with disease). The "false negative rate" (FNR) of a test is the percentage of exposed study subjects (or with disease) who are classified as unexposed (or without disease). Therefore, FPR = 1 - specificity and FNR = 1 - sensitivity.

As the methodology we develop in subsequent chapters is of a Bayesian nature, we will very briefly introduce some of its key features. In a nutshell, the Bayesian approach uses probability to describe both model parameters and random variables. In the context of the usual statistical model with a random variable X having possible distributions indexed by a parameter θ, the data x becomes known to the statistician and the object is to make inference about the unknown parameter. The information about θ that is available to the statistician prior to observing the data, and his/her belief about the parameter, are reflected in a "prior distribution". In the Bayesian approach, the statistician will wish to calculate the probability distribution of θ given X = x, called the "posterior distribution". Once this is accomplished, point and interval estimates of θ can be calculated, significance tests can be performed, etc.
With the recent development of computer techniques has come an increase in the popularity of Bayesian inference.

2.1.1 Data and Setup

Suppose we have a case-control study attempting to assess the relationship between disease status and a dichotomous exposure (usually labeled as 0 for unexposed and 1 for exposed). Suppose further that the measurement procedure used to assess the exposure status is not perfect, i.e., the sensitivity and specificity of that procedure are less than one (so that there is a non-negligible number of false positives and false negatives). The study, therefore, consists of measuring the apparent exposure status for a random sample of N0 controls and N1 cases. We will assume that the random samples are independent and that the disease status is known exactly, without error. These assumptions are not particularly unreasonable or limiting. First, the independence between cases and controls is a usual assumption one makes in a case-control study. Second, in order to determine the potential treatment, clinicians concentrate on correctly identifying the disease. For a disease that is difficult to diagnose, multiple tests, sometimes even invasive procedures, are applied. Moreover, the diseased subjects chosen to participate in a case-control study have usually had the condition for quite some time before the start of a study, increasing the chance of correct diagnosis. As a result, the disease status is (usually) very well determined. Throughout this thesis we will assume that the misclassification is nondifferential, i.e., independent of disease status.

In the case-control setting, the data comes in a two-by-two table, as shown in Table 2.1. Here, xi and yi are the observed number of positive and negative test results, respectively, in the sample of xi + yi = Ni study subjects. The subscript i = 0 denotes controls, i = 1 cases.
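Under these assumptions the observed table can be simulated directly: a subject appears exposed either because a true exposure is correctly detected (with probability equal to the sensitivity) or because a true non-exposure is misclassified (one minus the specificity). The following sketch is ours, not part of the thesis, and the parameter values are purely illustrative.

```python
import numpy as np

def simulate_table(N0, N1, pi0, pi1, alpha, beta, rng):
    """Simulate the apparent-exposure counts (x_i, y_i) of Table 2.1."""
    counts = {}
    for i, (N, pi) in enumerate([(N0, pi0), (N1, pi1)]):
        # A subject tests positive if truly exposed and detected (pi * alpha)
        # or truly unexposed and misclassified ((1 - pi) * (1 - beta)).
        theta = pi * alpha + (1 - pi) * (1 - beta)
        x = rng.binomial(N, theta)      # apparently exposed
        counts[i] = (x, N - x)          # (x_i, y_i); i = 0 controls, i = 1 cases
    return counts

rng = np.random.default_rng(2000)
table = simulate_table(300, 300, pi0=0.10, pi1=0.30, alpha=0.9, beta=0.9, rng=rng)
```

Samples generated this way are used throughout the simulated examples in this chapter.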
Furthermore, let D, E and E' represent disease status, actual exposure status and apparent exposure status, respectively. In each case, suppose that one represents presence, and zero represents absence. As nondifferential misclassification is assumed, the relevant parameters are

π0 = Pr(E = 1 | D = 0),
π1 = Pr(E = 1 | D = 1),
α = Pr(E' = 1 | E = 1),
β = Pr(E' = 0 | E = 0).

           Test
           +     -
Controls   x0    y0    N0
Cases      x1    y1    N1
                       N

Table 2.1: Distribution of subjects in a case-control study by disease status and an imperfect measurement of a dichotomous exposure. Symbol + denotes apparently exposed, and - denotes apparently unexposed.

Here, α is the sensitivity of the classification procedure, β is the specificity of the classification procedure, and π0 and π1 are the exposure prevalences among the controls and cases, respectively. The false positive and false negative rates are 1 - β and 1 - α, respectively. The odds-ratio, denoted as φ, is defined in terms of the exposure prevalences as

φ = [π1/(1 - π1)] / [π0/(1 - π0)].

In general, suppose that R tests are applied to S populations and that misclassification is nondifferential. As R tests are used to determine the exposure status, there are 2^R possible classification outcomes. Furthermore, since the number of subjects is fixed for each population, there are 2^R - 1 degrees of freedom per population, giving a total of (2^R - 1)S degrees of freedom available. On the other hand, if the misclassification is nondifferential, there are 2R + S parameters to estimate, the sensitivity and specificity of each test and S exposure prevalences. Therefore, in the case of one test and two populations (R = 1, S = 2), the likelihood with no constraint placed on the parameters is overparameterized. Hence, if (α, β, π0, π1) are unknown, the resulting likelihood function from a case-control study where one classification scheme is used to assess the exposure is nonidentifiable (Hui and Walter (1980)).
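To make the nonidentifiability concrete, the following sketch (ours, with made-up parameter values) exhibits two different parameter vectors (π0, π1, α, β) that produce exactly the same apparent exposure prevalences, hence the same distribution for the observed table, while implying different odds-ratios:

```python
def apparent_prevalence(pi, alpha, beta):
    # Pr(E' = 1 | D = i): truly exposed and detected, or unexposed and misclassified
    return pi * alpha + (1 - pi) * (1 - beta)

def odds_ratio(pi0, pi1):
    return (pi1 / (1 - pi1)) / (pi0 / (1 - pi0))

# Two distinct (pi0, pi1, alpha, beta) vectors, chosen by hand
set_a = (0.20, 0.50, 0.90, 0.90)
set_b = (0.28, 0.60, 0.80, 0.95)

theta_a = [apparent_prevalence(pi, set_a[2], set_a[3]) for pi in set_a[:2]]
theta_b = [apparent_prevalence(pi, set_b[2], set_b[3]) for pi in set_b[:2]]

# Identical apparent prevalences (0.26 and 0.50 in both cases) ...
assert all(abs(a - b) < 1e-12 for a, b in zip(theta_a, theta_b))
# ... yet different odds-ratios (4.0 versus roughly 3.86)
assert abs(odds_ratio(set_a[0], set_a[1]) - odds_ratio(set_b[0], set_b[1])) > 0.1
```

No amount of data can distinguish these two parameter vectors, which is exactly what nonidentifiability of the likelihood means here.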
2.2 Known Sensitivity and Specificity

If the misclassification probabilities α and β are known, Barron (1977) and Greenland and Kleinbaum (1983) present a relatively straightforward procedure for the adjustment of odds-ratio estimates. To illustrate their methodology, suppose that α and β are the sensitivity and specificity of a test used to assess the exposure, while α* and β* are the sensitivity and specificity of a test used to assess the disease in a case-control study. Note that their method does not require the assumption that the disease status is known exactly. Suppose further that (xTi, yTi) and (xMi, yMi) are, respectively, the true and misclassified cell counts in a 2 x 2 table (see Table 2.1). When there is no misclassification, we estimate the odds-ratio by

φ_T = (xT1 yT0) / (xT0 yT1),        (2.1)

and if the exposure is subject to misclassification, we estimate the odds-ratio by

φ_M = (xM1 yM0) / (xM0 yM1).        (2.2)

The misclassified cell counts are related to the true cell counts by the set of four equations m = Ct, where

m = (xM0, yM0, xM1, yM1)',  t = (xT0, yT0, xT1, yT1)'

and

C = | αβ*              (1 - β)β*        α(1 - α*)        (1 - β)(1 - α*) |
    | (1 - α)β*        ββ*              (1 - α)(1 - α*)  β(1 - α*)       |
    | α(1 - β*)        (1 - β)(1 - β*)  αα*              (1 - β)α*       |
    | (1 - α)(1 - β*)  β(1 - β*)        (1 - α)α*        βα*             |

Therefore, if (α, β, α*, β*) are known and C is invertible, a correction formula is easily derived, since t = C^{-1} m. This, in turn, enables us to compute the estimate of the true odds-ratio φ_T.

We now present a simple approach to Bayesian inference about the odds-ratio when the sensitivity and specificity are known. Let X0 be the number of apparently exposed controls and X1 be the number of apparently exposed cases. A control can be truly exposed and correctly classified as exposed with probability π0 α, or truly unexposed and incorrectly classified as exposed with probability (1 - π0)(1 - β). The same is true for the cases. Therefore, X0 and X1 are distributed as independent Binomial(Ni, θi) random variables, where

θi = πi α + (1 - πi)(1 - β), i = 0, 1.
(2.3)

Here, θ0 and θ1 are the apparent exposure prevalences among controls and cases, respectively. Suppose now that α and β are known exactly, without error. Further, note that an estimate of φ can be obtained from estimates of θ0 and θ1, as

φ = [π1/(1 - π1)] / [π0/(1 - π0)] = [(θ1 + β - 1)/(α - θ1)] / [(θ0 + β - 1)/(α - θ0)].        (2.4)

Thus, inference about the odds-ratio φ is simple. To illustrate this, suppose that independent Uniform[0, 1] priors are chosen for π0 and π1, i.e., no prior knowledge is assumed about the prevalence of exposure in controls and cases. This choice of prior implies Uniform[min(1 - β, α), max(1 - β, α)] priors for θ0 and θ1, since (2.3) implies θi ∈ [min(1 - β, α), max(1 - β, α)], i = 0, 1. Consequently, θi | x0, x1 follows a Beta(xi + 1, Ni - xi + 1) distribution truncated to the interval [min(1 - β, α), max(1 - β, α)], yielding the posterior distribution of φ | x0, x1. Therefore, sampling from the posterior distribution of the odds-ratio is straightforward. Examples illustrating this approach will be shown in Section 2.4.3.

We now examine a more realistic scenario, when good guesses rather than exact values are available for the sensitivity and specificity. These guesses could be available, for instance, via previous studies. If these guesses are treated as exact, Marshall (1989) has demonstrated that even small differences between the guessed and true values of the misclassification probabilities can lead to very erroneous odds-ratio estimates. More specifically, consider the following example. Table 2.2(a) below shows the true distribution of exposure among 300 cases and 300 controls. In this case, log φ = 1.35, with a 95% confidence interval of (0.90, 1.80). This confidence interval was computed assuming the asymptotic normality of log φ. Table 2.2(b) presents the effect that 10% misclassification has on the odds-ratio. Here, log φ = 0.85, with a 95% confidence interval of (0.47, 1.23), thus lessening the apparent effect of exposure.
Of the 30 exposed controls, only 90% are correctly identified as exposed and 10% are mistakenly identified as not exposed. Ignoring the sampling error, of the 270 unexposed controls, 90% are correctly classified, with 10% classified as exposed. Hence, the 54 controls classified as exposed in Table 2.2(b) include 90% of 30, or 27, who are exposed and 10% of 270, or 27, who are unexposed. The same process applies to cases.

The investigator who knows that the misclassification is 10% can easily adjust the data in Table 2.2(b) and correctly estimate the odds-ratio. A slight miscorrection can yield an erroneous estimate of the odds-ratio. To illustrate this, suppose that the investigator guesses, on the basis of previous studies, that the misclassification is 14%. Table 2.2(c) displays the result of adjusting the data in Table 2.2(b) using the guessed value.

(a)           Test
              +      -
    Controls  30     270    300
    Cases     90     210    300
                            600

(b)           Test
              +      -
    Controls  54     246    300
    Cases     102    198    300
                            600

(c)           Test
              +      -
    Controls  17     283    300
    Cases     83     217    300
                            600

Table 2.2: (a) True distribution of exposure among cases and controls. (b) Observed distribution of exposure, given 10% misclassification among cases and controls. (c) Estimated distribution of exposure assuming 14% misclassification among cases and controls.

Here, log φ = 1.85, with a 95% confidence interval of (1.30, 2.40), thus producing an exaggerated estimate of the effect of exposure.

As shown in the above example, odds-ratio estimates can be very sensitive to small discrepancies between the actual and assumed values of α and β. We investigate this further by looking at the asymptotic bias of the odds-ratio that arises when incorrect values of the sensitivity and specificity are assumed. To that end, let α' and β' be the assumed values of α and β used to correct the odds-ratio estimate. We say that "miscorrection" occurs when (α', β') ≠ (α, β).
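The numbers in Table 2.2 can be reproduced with the back-correction π = (θ + β - 1)/(α + β - 1) applied cell by cell. The short sketch below is ours; it shows how substituting a guessed 14% for the true 10% misclassification turns log φ = 1.35 into roughly 1.85.

```python
import math

def correct_count(x, n, alpha, beta):
    """Back-correct an apparent exposed count x out of n, assuming
    sensitivity alpha and specificity beta: pi = (theta + beta - 1)/(alpha + beta - 1)."""
    theta = x / n
    pi = (theta + beta - 1) / (alpha + beta - 1)
    return round(n * pi)

# Observed counts under 10% misclassification (Table 2.2(b))
x0_obs, x1_obs, n = 54, 102, 300

# Correcting with the true 10% rates recovers Table 2.2(a)
assert correct_count(x0_obs, n, 0.90, 0.90) == 30
assert correct_count(x1_obs, n, 0.90, 0.90) == 90

# Correcting with a guessed 14% instead gives Table 2.2(c)
x0_c = correct_count(x0_obs, n, 0.86, 0.86)   # 17
x1_c = correct_count(x1_obs, n, 0.86, 0.86)   # 83
log_or = math.log((x1_c * (n - x0_c)) / ((n - x1_c) * x0_c))  # about 1.85
```

A four-percentage-point error in the assumed misclassification rate thus moves the corrected log odds-ratio from 1.35 to about 1.85.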
Moreover, we say that the miscorrection is "asymptotically undetectable" if both θ0 and θ1 lie in the interval [min(1 - β', α'), max(1 - β', α')]. On the other hand, we say that the miscorrection is "asymptotically detectable" if α' and β' are such that one or both of θ0 and θ1 lie outside the interval [min(1 - β', α'), max(1 - β', α')]. This distinction comes from the fact that, in the large sample case, the values of θ0 and θ1 are effectively known exactly. So, no amount of data could detect that miscorrection occurs if θi ∈ [min(1 - β', α'), max(1 - β', α')], i = 0, 1. Therefore, as N0 and N1 increase, the exposure prevalences converge to πi' = (θi + β' - 1)/(α' + β' - 1). Consequently, the posterior distribution of φ concentrates at

φ' = [(θ1 + β' - 1)/(α' - θ1)] / [(θ0 + β' - 1)/(α' - θ0)].        (2.5)

The difference |φ' - φ| is the asymptotic bias. The following theorem illustrates the potentially very dangerous consequences of a very small miscorrection.

Theorem: Suppose α and β are fixed, with α + β > 1. Let εα > 0, εβ > 0 and R > 0 be arbitrary. Then, there exist α', β', π0 and π1 such that (i) there is asymptotically undetectable miscorrection, (ii) |α' - α| < εα, |β' - β| < εβ and (iii) φ'/φ = R.

Proof: Note first that φ' = Rφ, where

R = {[1 + c/π1] / [1 + d/(1 - π1)]} / {[1 + c/π0] / [1 + d/(1 - π0)]},        (2.6)

c = (β' - β)/(α + β - 1) and d = (α' - α)/(α + β - 1). Note also that the requirement for asymptotic undetectability is equivalent to -c < πi < 1 + d, i = 0, 1. Now, choose α' so that |α' - α| < εα and d > 0, and choose β' such that |β' - β| < εβ and c ∈ (-1, 0). Therefore, asymptotic undetectability holds for |c| < πi < 1, i = 0, 1. So now equation (2.6) is given by

R = {[1 - |c|/π1] / [1 + d/(1 - π1)]} / {[1 - |c|/π0] / [1 + d/(1 - π0)]}.        (2.7)

Hence, R increases to infinity as π0 decreases to |c| (or π0 increases to one) and decreases to zero as π1 decreases to |c| (or π1 increases to one), as required. Here we considered the case where α' > α and β' < β.
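As a quick numerical sanity check on the identity φ' = Rφ used in the proof, the sketch below (ours, with arbitrary illustrative values) computes the limiting value (2.5) from the apparent prevalences and compares it with the product of the true odds-ratio and the factor R built from c and d:

```python
alpha, beta = 0.90, 0.90        # true classification probabilities
alpha_p, beta_p = 0.95, 0.85    # assumed (miscorrected) values
pi0, pi1 = 0.20, 0.50           # true exposure prevalences

theta0 = pi0 * alpha + (1 - pi0) * (1 - beta)   # apparent prevalences
theta1 = pi1 * alpha + (1 - pi1) * (1 - beta)

# Limiting corrected odds-ratio, equation (2.5)
phi_p = ((theta1 + beta_p - 1) / (alpha_p - theta1)) / \
        ((theta0 + beta_p - 1) / (alpha_p - theta0))

# The same quantity via phi' = R * phi, with c and d as in the proof
c = (beta_p - beta) / (alpha + beta - 1)
d = (alpha_p - alpha) / (alpha + beta - 1)
phi = (pi1 / (1 - pi1)) / (pi0 / (1 - pi0))     # true odds-ratio, here 4.0
R = ((1 + c / pi1) / (1 + d / (1 - pi1))) / ((1 + c / pi0) / (1 + d / (1 - pi0)))

assert abs(phi_p - R * phi) < 1e-9
```

With these values a miscorrection of only 0.05 in each of α and β inflates the limiting odds-ratio from 4.0 to about 4.88, and the theorem shows the ratio can be driven arbitrarily far in either direction.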
It is easy to establish the claim for the other three cases.

It is worth noting that R can go to either zero or infinity without π0 or π1 going to either zero or one, which are rather unrealistic cases. Secondly, the assumption that α + β > 1 is not unreasonable, given that no case-control study would be carried out if the false positive and false negative rates of the classification procedure were very high (higher than 0.5, say).

The previous work, coupled with the result of the above theorem, suggests that it seems reasonable to include the uncertainty in the available guesses in the analysis. Bayesian methods make this feasible, as the lack of exact knowledge of α and β is easily incorporated via the appropriate choice of priors for α and β. This methodology was introduced by Joseph, Gyorkos and Coupal (1995) in the context of applying R different diagnostic tests to S populations. Specifically, they focused their attention on one population and one test (R = 1, S = 1) and one population and two tests (R = 2, S = 1). Consequently, their inference was on the sensitivity, specificity and disease prevalence of the population of interest. We will attempt to extend their approach to the case-control setting. The Gibbs sampler will be used to draw the posterior samples of α, β, π0 and π1, and consequently of φ.

Let Ai and Bi be the information that is missing when the test used to assess the exposure is imperfect, that is, the number of truly exposed subjects out of xi and yi, respectively. Thus, Ai is the number of true positives and Bi is the number of false negatives, i = 0, 1. This missing information is called "latent data" and analysis of such data, called "latent class analysis", has been done by Kaldor and Clayton (1985) and Walter and Irwig (1988). Incorporating this latent information into Table 2.1, we have Table 2.3.
2.3 The Gibbs Sampler

Using the independence between cases and controls, the likelihood function of the data in Table 2.3 is given by

    l(A₀, B₀, A₁, B₁, x₀, y₀, x₁, y₁ | π₀, π₁, α, β)
        = ∏_{i=0,1} [Nᵢ! / (Aᵢ!(xᵢ - Aᵢ)!Bᵢ!(yᵢ - Bᵢ)!)] (πᵢα)^Aᵢ [(1 - πᵢ)(1 - β)]^(xᵢ - Aᵢ) [πᵢ(1 - α)]^Bᵢ [(1 - πᵢ)β]^(yᵢ - Bᵢ).

                             Controls                        Cases
                          True exposure                  True exposure
    Apparent                +         -                    +         -
    exposure
        +                  A₀      x₀ - A₀      x₀        A₁      x₁ - A₁      x₁
        -                  B₀      y₀ - B₀      y₀        B₁      y₁ - B₁      y₁
                                                N₀                             N₁

Table 2.3: Distribution of observed and latent data when one imperfect classification procedure is used. Symbol + denotes exposed, and - denotes unexposed.

By gathering like terms, the likelihood becomes

    l(A₀, B₀, A₁, B₁, x₀, y₀, x₁, y₁ | π₀, π₁, α, β)
        = ∏_{i=0,1} [Nᵢ! / (Aᵢ!(xᵢ - Aᵢ)!Bᵢ!(yᵢ - Bᵢ)!)] πᵢ^(Aᵢ + Bᵢ) (1 - πᵢ)^(Nᵢ - Aᵢ - Bᵢ)
          × α^(A₀ + A₁) (1 - α)^(B₀ + B₁) β^(y₀ + y₁ - B₀ - B₁) (1 - β)^(x₀ + x₁ - A₀ - A₁).    (2.8)

The likelihood (2.8) is nonidentifiable if (α, β, π₀, π₁) are unknown. We now state a more formal definition of nonidentifiability, introduced by Dawid (1979). Suppose that the Bayesian model is denoted by likelihood l(x | λ) and prior p(λ), where λ = (λ₁, λ₂). We say that λ₂ is nonidentifiable if p(λ₂ | λ₁, x) = p(λ₂ | λ₁). In other words, λ₂ is not identified by the data if observing data x does not increase our knowledge about λ₂ given λ₁. However, nonidentifiability does not imply that there is no Bayesian updating, i.e., it does not imply that p(λ₂ | x) = p(λ₂). Furthermore, since p(λ₂ | λ₁, x) ∝ l(x | λ) p(λ₂ | λ₁) p(λ₁), λ₂ is nonidentifiable if and only if l(x | λ) is free of λ₂. Hence, the definition of nonidentifiability, as introduced by Dawid (1979), is equivalent to nonidentifiability in the likelihood.

We now turn our attention to the choice of priors for α, β, π₀ and π₁. We will choose beta densities to represent the prior information available for α, β, π₀ and π₁. The reason for this choice of prior is three-fold. First, the beta density is positive on the interval [0, 1], which coincides with the range of all parameters of interest.
Secondly, the family of beta densities is flexible, in the sense that a variety of shapes can be obtained by selecting different values of the hyperparameters. Finally, it is the conjugate prior distribution for the binomial likelihood, significantly easing the derivation of the posterior distribution. Therefore, suppose that the four parameters are independent a priori, with

    α ~ Beta(a_α, μ_α),  β ~ Beta(a_β, μ_β),  π₀ ~ Beta(a_π₀, μ_π₀),  π₁ ~ Beta(a_π₁, μ_π₁),

giving a joint prior density

    p(α, β, π₀, π₁) = p_α(α) p_β(β) ∏_{i=0,1} p_πᵢ(πᵢ).    (2.9)

Since a posterior density is proportional to a likelihood multiplied by a prior, the posterior density p(π₀, π₁, α, β | A₀, B₀, A₁, B₁, x₀, y₀, x₁, y₁) is proportional to

    ∏_{i=0,1} [Nᵢ! / (Aᵢ!(xᵢ - Aᵢ)!Bᵢ!(yᵢ - Bᵢ)!)] πᵢ^(Aᵢ + Bᵢ + a_πᵢ - 1) (1 - πᵢ)^(Nᵢ - Aᵢ - Bᵢ + μ_πᵢ - 1)
      × α^(A₀ + A₁ + a_α - 1) (1 - α)^(B₀ + B₁ + μ_α - 1) β^(y₀ + y₁ - B₀ - B₁ + a_β - 1) (1 - β)^(x₀ + x₁ - A₀ - A₁ + μ_β - 1).    (2.10)

Note here that the latent data Aᵢ and Bᵢ, i = 0, 1, are not observed, hindering the use of (2.10) in calculating the marginal posterior densities of α, β, π₀ and π₁. However, inference is made possible by using a Gibbs sampler. This is a very useful technique for sampling from a p-dimensional distribution. Here is a brief review.

Suppose q_X is the joint density function of a p-dimensional random variable X = (X₁, X₂, …, X_p), with univariate conditional densities q_{X₁|X₂,X₃,…,X_p}, q_{X₂|X₁,X₃,…,X_p}, and so on. To implement the Gibbs sampler, we start with initial guesses X₁^(0), X₂^(0), …, X_p^(0) and simulate

    X₁^(1) | X₂^(0), …, X_p^(0)   from q_{X₁|X₂,…,X_p},
    X₂^(1) | X₁^(1), X₃^(0), …, X_p^(0)   from q_{X₂|X₁,X₃,…,X_p},
    ⋮
    X_p^(1) | X₁^(1), …, X_{p-1}^(1)   from q_{X_p|X₁,…,X_{p-1}}.

This is repeated k times, generating the sample X^(k) = (X₁^(k), …, X_p^(k)). At each stage the conditional distribution uses the most recent values of all the other components of X. It can be shown that, as k → ∞, the density of the samples approaches q_X. In practice, the convergence is usually quite rapid.
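As a generic illustration of the scheme just described (a toy example of ours, not the thesis's model), here is a two-component Gibbs sampler for a standard bivariate normal with correlation ρ, whose full conditionals are the known normals N(ρy, 1 - ρ²) and N(ρx, 1 - ρ²):

```python
import numpy as np

rng = np.random.default_rng(1)
rho, k = 0.8, 20000
x = y = 0.0                      # initial guesses
draws = np.empty((k, 2))
for t in range(k):
    # alternately draw each component from its full conditional,
    # always using the most recent value of the other component
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))
    draws[t] = x, y
kept = draws[2000:]              # discard burn-in
print(np.corrcoef(kept.T)[0, 1])   # close to rho = 0.8
```

The retained draws reproduce the target correlation, illustrating the convergence claim above.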
Once convergence has occurred, subsequent samples can be generated either by restarting the algorithm with new guesses, or by continuing the algorithm from the current value X^(k).

We can now use the Gibbs sampler to sample from the posterior distribution (2.10). We have the following conditional densities:

    α | A₀, B₀, A₁, B₁ ~ Beta(A₀ + A₁ + a_α, B₀ + B₁ + μ_α),
    β | x₀, y₀, x₁, y₁, A₀, B₀, A₁, B₁ ~ Beta(y₀ + y₁ - B₀ - B₁ + a_β, x₀ + x₁ - A₀ - A₁ + μ_β),
    πᵢ | Aᵢ, Bᵢ, xᵢ, yᵢ ~ Beta(Aᵢ + Bᵢ + a_πᵢ, xᵢ + yᵢ - Aᵢ - Bᵢ + μ_πᵢ),  i = 0, 1,
    Aᵢ | πᵢ, α, β, xᵢ ~ Binomial(xᵢ, πᵢα / [πᵢα + (1 - πᵢ)(1 - β)]),  i = 0, 1,
    Bᵢ | πᵢ, α, β, yᵢ ~ Binomial(yᵢ, πᵢ(1 - α) / [πᵢ(1 - α) + (1 - πᵢ)β]),  i = 0, 1.

Hence, conditional on knowing the exact values of α, β, π₀ and π₁, we can easily sample from the posterior distributions of the latent variables Aᵢ and Bᵢ. Conversely, conditional on Aᵢ and Bᵢ, sampling from the posterior distributions of α, β, π₀ and π₁, and consequently from φ, is straightforward.

2.3.1 Examples

Example 1. To illustrate the use of the Gibbs sampler, consider a scenario
The infor-mation of the form 7 ± S could be translated into a Beta prior with mode at 7 and 0.95 probability on the interval 7 ± 6. This implies 5^(2(183.50, 38.38) prior for sensitivity and £eta(128.50,13.61) prior for specificity. Five indepen-dent chains of the Gibbs sampler were run with different starting values. The output is shown in Figure 2.1, Figure 2.2 and Figure 2.3. The plots shown in these figures clearly indicate drifiting behaviour by the Gibbs sampler and a lack of adequte mixing, even after the first 10000 observations were discarded. For example, as the plots in Figure 2.1 indicate, convergence was not achieved for none of the parameters. This may not be surprising because, as Gelfand and Sahu (1999) note, drifting behaviour may arise when the Gibbs sampler is applied to nonidentifiable models. The drifting 22 behaviour also seems to be the case for intermediate and large sample sizes, however, it is not as pronounced. Even though more formal diagnostic methods were not used in assessing the convergence of the Gibbs sampler, we believe that the plots in Figure 2.1 through Figure 2.3 present strong enough evidence of the inadequacy of the Gibbs sampler in this case. Example 2 Consider now a different scenario where the true value of sensitivity and specificity is a = (5 = 0.95 and prevalence of exposure in controls and cases is 7r0 = 0.06 and TTX — 0.15, respectively. These values imply 4> = 2.76 or logc/> = 1.02. Using these true values, we simulated data (x0,xi) = (13,44), (x 0 ,xi) = (68,141) and (x0,xi) = (340,595) for N0 = Nx = 200, N0 = Nx = 800 and N0 = Ni = 3200 respectively. As before, Beta(l, 1) was used as prior for both the prevalence of exposure in controls and the prevalence of exposure in cases. As for the sensitivity and specificity, we will assume that our guess at both the sensitivity and specificity is 0.95, and that it is accurate to within ±0.05. 
This translates into a Beta(99.70, 6.19) prior for both the sensitivity and the specificity. Five independent chains were run with different starting values. The output is shown in Figures 2.4, 2.5 and 2.6.

We observe behaviour of the Gibbs sampler similar to that in Example 1, even though our guesses at the true values of the sensitivity and specificity were exact. For instance, note the behaviour of the fourth chain in Figure 2.4. It exhibits completely different behaviour from the other four chains, in the sense that it drifts towards a completely different value in the parameter space. In particular, the mean value of the sensitivity obtained from the fourth chain is 0.03, whereas the mean value obtained from the other four chains is 0.57, a very big discrepancy. Moreover, no point estimates were close to the true value of 0.95 for the sensitivity. The 95% HPD credible intervals for the sensitivity and the prevalence of exposure in cases did not contain the true values for any sample size considered.

2.4 The Metropolis-Hastings Algorithm

The behaviour observed above may not be surprising. Gelfand and Sahu (1999) have noted that drifting behaviour in the Gibbs sampler can arise if it is applied to nonidentifiable models. Therefore, the extension of the methodology of Joseph, Gyorkos and Coupal (1995) to the case-control setting does not seem to be appropriate. Instead, we use the Metropolis-Hastings algorithm in the reparameterization (2.3). This reparameterization separates the identifiable parameters (θ₀, θ₁) from the nonidentifiable parameters (α, β).
In this new parameterization, (2.9) becomes

    p(α, β, θ₀, θ₁) = p_α(α) p_β(β) ∏_{i=0,1} p_πᵢ((θᵢ + β - 1)/(α + β - 1)) × 1/(α + β - 1)²,    (2.11)

since the Jacobian of the transformation (θ₀, θ₁) → (π₀, π₁), with ∂πᵢ/∂θᵢ = 1/(α + β - 1) and zero off-diagonal entries, is given by

    J(θ₀, θ₁) = 1/(α + β - 1)².

Figure 2.1: Post burn-in output of the five independent chains of the Gibbs sampler for sensitivity, specificity, prevalence of exposure in controls and prevalence of exposure in cases in Example 1. Data were simulated for N₀ = N₁ = 200.

Figure 2.2: Post burn-in output of the five independent chains of the Gibbs sampler for sensitivity, specificity, prevalence of exposure in controls and prevalence of exposure in cases in Example 1. Data were simulated for N₀ = N₁ = 800.

Figure 2.3: Post burn-in output of the five independent chains of the Gibbs sampler for sensitivity, specificity, prevalence of exposure in controls and prevalence of exposure in cases in Example 1. Data were simulated for N₀ = N₁ = 3200.

Figure 2.4: Post burn-in output of the five independent chains of the Gibbs sampler for sensitivity, specificity, prevalence of exposure in controls and prevalence of exposure in cases in Example 2. Data were simulated for N₀ = N₁ = 200.
Figure 2.5: Post burn-in output of the five independent chains of the Gibbs sampler for sensitivity, specificity, prevalence of exposure in controls and prevalence of exposure in cases in Example 2. Data were simulated for N₀ = N₁ = 800.

Figure 2.6: Post burn-in output of the five independent chains of the Gibbs sampler for sensitivity, specificity, prevalence of exposure in controls and prevalence of exposure in cases in Example 2. Data were simulated for N₀ = N₁ = 3200.

Therefore, the joint posterior distribution of (α, β, θ₀, θ₁) given the data is

    p(α, β, θ₀, θ₁ | x₀, x₁) = p(x₀, x₁ | θ₀, θ₁, α, β) p(α, β, θ₀, θ₁) / p(x₀, x₁)
                             = p(x₀, x₁ | θ₀, θ₁) p(α, β, θ₀, θ₁) / p(x₀, x₁)
                             = p(θ₀, θ₁ | x₀, x₁) p(α, β | θ₀, θ₁).    (2.12)

Here, the prior and posterior conditional distributions of α, β | θ₀, θ₁ are identical, and hence, according to Dawid (1979), (α, β) are nonidentifiable parameters. However, as a result of learning about (θ₀, θ₁), the marginal prior and posterior distributions of the sensitivity and specificity are not equal. Thus, one can learn about (α, β) indirectly, through the updating of (θ₀, θ₁).

2.4.1 Large Sample Case

Let us examine (2.12) further. Since p(θ₀, θ₁ | x₀, x₁) and p(θ₀, θ₁) constitute a regular model and prior, the posterior distribution of θ₀, θ₁ | x₀, x₁ will converge to a point mass at the true value of (θ₀, θ₁) as N₀ → ∞ and N₁ → ∞. The uncertainty in p(α, β | θ₀, θ₁), however, remains unchanged by any amount of data. Note that we can express it as

    p(α, β | θ₀, θ₁) ∝ p(θ₀, θ₁ | α, β) p_α(α) p_β(β)
                     ∝ p(θ₀ | α, β) p(θ₁ | α, β) p_α(α) p_β(β)
                     ∝ (α + β - 1)⁻² 1{(α, β) ∈ A(θ₀, θ₁)} p_α(α) p_β(β),    (2.13)

where the last line takes the uniform Beta(1, 1) priors for π₀ and π₁, and where

    A(θ₀, θ₁) = {(α, β) : θ_max < α ≤ 1, 1 - θ_min < β ≤ 1} ∪ {(α, β) : 0 ≤ α < θ_min, 0 ≤ β < 1 - θ_max},

with θ_max = max{θ₀, θ₁} and θ_min = min{θ₀, θ₁}. This region is shown in Figure 2.7.
Figure 2.7: The support of p(α, β | θ₀, θ₁). The shaded rectangles comprise A(θ₀, θ₁); the line α + β = 1 separates them.

From (2.13) we see that p(α, β) differs from p(α, β | θ₀, θ₁) in two respects. First, the support of p(α, β) is the whole unit square, whereas the support of p(α, β | θ₀, θ₁) is A(θ₀, θ₁). Blettner and Wahrendorf (1984) observed this in the context of finite samples. They describe how to calculate the range of the possible underlying true effects, measured by the odds-ratio, and of the possible misclassification probabilities, given the observed data. Second, the density p(α, β | θ₀, θ₁) has (α + β - 1)⁻² as a factor. It is, of course, infinite along the line α = 1 - β. The rectangles that make up the support A(θ₀, θ₁) lie on either side of this line, implying that (2.13) is bounded. In particular,

    (α + β - 1)⁻² 1{(α, β) ∈ A(θ₀, θ₁)} p_α(α) p_β(β) ≤ (θ_max - θ_min)⁻² 1{(α, β) ∈ A(θ₀, θ₁)} p_α(α) p_β(β).    (2.14)

Therefore, we can sample from (2.13) using the "acceptance/rejection algorithm". To sample a random variable X, this algorithm makes use of samples of another random variable, say Y, whose probability density function g_Y is similar to the probability density function of X, p_X. The random variable Y is chosen so that we can easily generate samples of it and so that its density g_Y can be scaled to majorize p_X using some constant c; that is, so that c·g_Y(x) ≥ p_X(x) for all x. The density g_Y is called the "majorizing density" and c·g_Y is called the "majorizing function". The closer c·g_Y(x) is to p_X(x), the faster the acceptance/rejection algorithm will be. In our case, the majorizing density is p_α(α)p_β(β) truncated to the region A(θ₀, θ₁). For a more detailed description, see Gentle (1998). However, we have not found the acceptance/rejection algorithm to be very efficient, and instead used the "Metropolis-Hastings algorithm".
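Here is a generic sketch of the acceptance/rejection algorithm with an arbitrary illustrative target of ours (a Beta(2, 4) density majorized by a scaled uniform density), not the posterior (2.13):

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(2)
target = beta(2, 4)
c = target.pdf(0.25)       # Beta(2,4) has its mode at 1/4, so c * Uniform(0,1) majorizes p_X

y = rng.random(100_000)    # candidates from the majorizing density g_Y = Uniform(0, 1)
u = rng.random(100_000)
samples = y[u * c <= target.pdf(y)]   # accept y with probability p_X(y) / (c * g_Y(y))
print(len(samples), samples.mean())   # sample mean close to 2/(2+4) = 1/3
```

The acceptance rate is 1/c, which is exactly why a loose majorizing function, like the truncated prior here when the posterior is sharply peaked, makes the algorithm inefficient.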
The Metropolis-Hastings algorithm helps when it is difficult to sample from the target distribution, say π(x), but there is an easy way to sample from another density, say q(x, y), called the "candidate-generating density". In our case, the candidate-generating density is p_α(α)p_β(β) truncated to the region A(θ₀, θ₁), the same as the majorizing density above. The Metropolis-Hastings algorithm proceeds by constructing a Markov chain which has π(x) as its stationary distribution. We present a brief illustration of how the Metropolis-Hastings algorithm works. For a complete description, see Chib and Greenberg (1995) or Gentle (1998).

Let X be an irreducible, recurrent Markov chain with stationary distribution π(x), and let x be its current state. The Metropolis-Hastings algorithm proceeds by sampling a candidate value y from q(x, y), and choosing the next state of the Markov chain to be either the candidate value y or the current state x, according to the acceptance probability

    a(x, y) = min{ π(y)q(y, x) / [π(x)q(x, y)], 1 }   if π(x)q(x, y) > 0,
            = 1                                        otherwise.    (2.15)

The process is repeated k times. Of course, the draws are regarded as a sample from the target density π(x) only as k gets large.

The above algorithm has a drawback in comparison with the Gibbs sampler: the degree of serial correlation is increased due to the possibility of rejecting a move. If q(x, y) is such that the probability of a move is very small, the chain will be characterized by long sequences of repeated outcomes. However, a very convenient feature of the Metropolis-Hastings algorithm, apart from the free selection of the candidate-generating density q(x, y), is that we only need to know π(x) up to a normalizing constant, as the constant cancels out in the ratio π(y)/π(x). Different choices of q(x, y) give rise to different variants of the Metropolis-Hastings algorithm. If q(x, y) is symmetric, i.e., if q(x, y) = q(y, x), the probability of a move reduces to min{π(y)/π(x), 1}. This is known as the "Metropolis algorithm".
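A minimal sketch of the Metropolis variant just described (the target and proposal are illustrative choices of ours): the target is a Beta(3, 7) kernel known only up to its normalizing constant, and the symmetric normal proposal makes q cancel from (2.15):

```python
import numpy as np

rng = np.random.default_rng(3)

def kernel(x):
    # unnormalized target: a Beta(3, 7) kernel; the normalizing constant never enters
    return x**2 * (1 - x)**6 if 0.0 < x < 1.0 else 0.0

x, k = 0.5, 50000
draws = np.empty(k)
for t in range(k):
    y = x + rng.normal(0.0, 0.2)                    # symmetric candidate: q(x, y) = q(y, x)
    if rng.random() < min(kernel(y) / kernel(x), 1.0):
        x = y                                       # move accepted
    draws[t] = x                                    # on rejection the chain repeats x
print(draws[5000:].mean())   # close to the Beta(3, 7) mean 3/10
```

The repeated values produced by rejected moves are visible in the chain, illustrating the serial-correlation drawback noted above.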
Another family of candidate-generating densities is given by q(x, y) = q(y - x), characterizing the "random walk" chain. The candidate value y is given by y = x + z, where z is a random variable drawn from q(z). Our choice of q(x, y) yields what is known as the "independence" chain. It arises when the candidate-generating density is not a function of the current state, i.e., when q(x, y) = q(y) for some density q(·). Other choices of the candidate-generating density are possible.

2.4.2 Finite Sample Case

In the finite sample case, we have demonstrated that the Gibbs sampler does not work very well. Instead, we try to sample from (2.12) using the Metropolis-Hastings algorithm. Note that (2.12) can be expressed as

    p(α, β, θ₀, θ₁ | x₀, x₁) ∝ θ₀^x₀ (1 - θ₀)^(N₀ - x₀) θ₁^x₁ (1 - θ₁)^(N₁ - x₁) × (α + β - 1)⁻² 1{(α, β) ∈ A(θ₀, θ₁)} p_α(α) p_β(β).    (2.16)

The candidate-generating density that approximates (2.16), and from which we can sample to drive the Metropolis-Hastings algorithm, is

    f(α, β, θ₀, θ₁) = [1/k(θ₀, θ₁)] f₀(θ₀) f₁(θ₁) 1{(α, β) ∈ A(θ₀, θ₁)} p_α(α) p_β(β).    (2.17)

Here, k(θ₀, θ₁) = Pr{(α, β) ∈ A(θ₀, θ₁)}, so that f(α, β | θ₀, θ₁) is the prior p_α(α)p_β(β) truncated to A(θ₀, θ₁). Furthermore, fᵢ is the Beta(xᵢ + 1, Nᵢ - xᵢ + 1) density, so that p(θ₀, θ₁ | x₀, x₁) in (2.12) is approximated by f(θ₀, θ₁) = f₀(θ₀)f₁(θ₁), obtained by replacing the intractable prior for (θ₀, θ₁) with a uniform prior on the unit square. Therefore, we can sample from (2.17) by first sampling from f(θ₀, θ₁) and then from f(α, β | θ₀, θ₁). We now present three examples illustrating the use of the Metropolis-Hastings algorithm.

2.4.3 Examples

Example 1. Consider a scenario in which the true values of the sensitivity and specificity are α = β = 0.84, and the prevalences of exposure in controls and cases are, respectively, π₀ = 0.061 and π₁ = 0.15, implying that θ₀ = 0.20148, θ₁ = 0.262 and log φ = 1.00. Furthermore, suppose that our best guesses at the sensitivity and specificity are (α′, β′) = (0.81, 0.81) and that these guesses are accurate to within ±0.05.
This leads to a Beta(194.00, 46.27) prior for both α and β. Under the guessed values, log φ′ = 1.94. Data are simulated under this scenario for three different sample sizes, namely N₀ = N₁ = 200, N₀ = N₁ = 800 and N₀ = N₁ = 3200, with respective simulated values of (x₀, x₁) = (41, 54), (x₀, x₁) = (162, 214) and (x₀, x₁) = (661, 861). Three different posterior distributions were used to generate the samples of log φ for each of the data sets. The first, named the "miscorrected" (M-COR) posterior, originates from assuming (α, β) = (α′, β′). The second, named the "uncertainty-corrected" (U-COR) posterior, originates from assuming a prior distribution on α and β. The third, named the "exactly-corrected" (E-COR) posterior, arises from knowledge of the exact values of the sensitivity and specificity. Sampling from M-COR and E-COR proceeds as described in section 2.2, whereas sampling from U-COR proceeds as per section 2.4.2. Furthermore, we consider the large sample case, where the M-COR and E-COR posteriors concentrate at φ′ and φ respectively, and sampling from U-COR proceeds as per section 2.4.1.

All posterior distributions are shown in Figure 2.8. There is very little difference between the three posteriors for N₀ = N₁ = 200. At the intermediate sample size, the M-COR and U-COR posteriors are similar, with the E-COR posterior being more peaked. A greater difference between the M-COR and U-COR posteriors appears for the large sample size. The U-COR posterior covers both the true value log φ and the wrong value log φ′, whereas the M-COR posterior misses the true value and is drawn towards log φ′. Even in the large-n limit, the U-COR posterior is quite wide, covering both values. However, this should still be preferable to the M-COR posterior which, asymptotically, concentrates at the wrong answer. Convergence is very slow in the present scenario.
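The coverage study that follows summarizes each posterior by a highest posterior density (HPD) credible interval. One standard Monte Carlo approximation (a sketch of ours; the thesis does not spell out its implementation) takes the shortest window containing the target mass among the sorted posterior draws:

```python
import numpy as np

def hpd_interval(draws, mass=0.80):
    """Shortest interval containing `mass` of the sorted draws (approximate HPD)."""
    d = np.sort(np.asarray(draws))
    m = int(np.ceil(mass * d.size))           # number of draws the window must contain
    widths = d[m - 1:] - d[: d.size - m + 1]  # width of every candidate window
    j = int(np.argmin(widths))
    return d[j], d[j + m - 1]

# sanity check: for N(0, 1) draws the 80% HPD interval is roughly (-1.28, 1.28)
lo, hi = hpd_interval(np.random.default_rng(4).normal(size=100_000), 0.80)
print(round(lo, 2), round(hi, 2))
```

For symmetric posteriors the HPD interval coincides with the equal-tailed interval; for skewed ones, such as some of the log φ posteriors here, it is shorter.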
To assess the coverage of the credible intervals under each posterior, we simulate forty data sets under the same scenario for the large sample size N₀ = N₁ = 3200 and compute 80% HPD credible intervals for log φ under the three posteriors. These intervals appear in Figure 2.9. The empirical coverage rates for the intervals are 22%, 90% and 90%, and the average interval widths are 1.76, 1.64 and 0.56, for the M-COR, U-COR and E-COR intervals respectively. Note the very poor empirical coverage exhibited by the M-COR credible intervals, as many of the intervals are drawn toward log φ′. On the other hand, the U-COR intervals show overcoverage. However, the cost of admitting uncertainty seems to be high, as the average length of the U-COR intervals is 1.64 on the log scale.

As mentioned in section 2.4.1, the prior and the posterior distribution for (α, β) differ in two respects, one of which is the support. Figure 2.10 shows the effects of truncation to A(θ₀, θ₁). We see that this effect is most evident for the case N₀ = N₁ = ∞. Furthermore, note that the truncation affects β but not α. This will usually be true for small exposure prevalences. When π₀ and π₁ are small, 1 - θ₀ and 1 - θ₁ will lie in the region where draws from p_β(β) are likely, whereas θ₀ and θ₁ will lie in the region where draws from p_α(α) are not likely.

Example 2. Consider now a scenario in which the true values of the sensitivity and specificity are α = 0.84 and β = 0.78, and all other quantities are the same as in Example 1. This implies θ₀ = 0.25782, θ₁ = 0.313 and log φ′ = 0.70. Under these conditions, we simulate data (x₀, x₁) = (55, 67), (x₀, x₁) = (190, 256) and (x₀, x₁) = (830, 966) for N₀ = N₁ = 200, N₀ = N₁ = 800 and N₀ = N₁ = 3200, respectively. Posterior distributions for these data are shown in Figure 2.11. For the small sample size, there is very little difference between the M-COR and U-COR posteriors.
For the intermediate and large sample sizes, the U-COR posterior performs better in the sense that it is closer to the E-COR posterior. The M-COR posterior again misses the true value of the log odds-ratio.

Forty data sets are again considered to assess the coverage of the credible intervals for the large sample size under each posterior. The 80% HPD credible intervals are shown in Figure 2.12. The empirical coverage rates for the intervals are 37%, 87% and 82%, and the average interval widths are 0.40, 0.77 and 0.67, for the M-COR, U-COR and E-COR intervals respectively. Again, we see the undercoverage of the M-COR intervals, and the slight overcoverage of the U-COR intervals. In terms of the average widths of the intervals, the cost of admitting uncertainty in this example does not seem to be as high as in Example 1.

Example 3. Finally, suppose that our best guesses at the sensitivity and specificity are (α′, β′) = (1, 1), corresponding to an assumption of no misclassification. Further, suppose that we feel there could be a small amount of misclassification, so that a Beta(58.41, 1) prior was assigned to both α and β. This prior density is a strictly increasing function with a finite maximum at 1, and 95% probability assigned to the interval [0.95, 1]. Suppose that the prevalences are the same as in Example 1, but the sensitivity and specificity are α = β = 0.97. These values imply θ₀ = 0.08734, θ₁ = 0.171 and log φ′ = 0.77. Again, we simulate data under this scenario for N₀ = N₁ = 200, N₀ = N₁ = 800 and N₀ = N₁ = 3200, with respective simulated values (x₀, x₁) = (23, 31), (x₀, x₁) = (63, 140) and (x₀, x₁) = (285, 575). All posteriors appear in Figure 2.13. Again, for the small sample size, there is very little difference between the three posteriors. However, for the intermediate and large sample sizes, the U-COR posterior is closer to the E-COR posterior than is the M-COR posterior, showing once more the value of admitting uncertainty about the sensitivity and specificity.
Again, we simulate forty data sets under this scenario for N₀ = N₁ = 3200. The resulting 80% HPD credible intervals are shown in Figure 2.14. As in Examples 1 and 2, the empirical coverage of the M-COR intervals is very low (3%), while 85% of the U-COR intervals cover the true value. The average interval widths are 0.20, 0.41 and 0.28 for the M-COR, U-COR and E-COR intervals respectively. Hence, the cost of admitting uncertainty is not nearly as severe as in Example 1.

2.5 Discussion

So far, we have focused our attention on a case-control setting where one imperfect classification scheme is used to assess the exposure status. The odds-ratio estimates obtained from the observed data, if not corrected for misclassification, can be very imprecise. Relatively straightforward correction methods are available if the classification probabilities are known. However, odds-ratio estimates which are corrected using slightly inaccurate classification probabilities may still be quite erroneous. In particular, we have demonstrated that even arbitrarily small differences between the true and assumed classification probabilities can, asymptotically, lead to an arbitrarily large difference between the actual odds-ratio and the odds-ratio obtained using the assumed classification probabilities. Admitting uncertainty about the classification probabilities appears to alleviate this problem. Two computational approaches to this analysis were suggested: the Gibbs sampler and the Metropolis-Hastings algorithm. The Gibbs sampler did not seem to work particularly well. This was not surprising since, as Gelfand and Sahu (1999) noted, drifting behaviour of the Gibbs sampler may occur if it is applied to a nonidentifiable model. On the other hand, the Metropolis-Hastings algorithm applied to the model that separates the identifiable and nonidentifiable parts seemed to perform well.
Despite nonidentifiability, there was considerable learning about the odds-ratio from the data. The U-COR posterior was quite concentrated relative to the prior. However, the cost of admitting uncertainty is evident in the increase in variability of the U-COR posterior relative to the E-COR posterior. As a result, the U-COR credible intervals were about twice as wide as the E-COR credible intervals. On the other hand, there seemed to be an improvement of the U-COR posterior over the M-COR posterior. This improvement appeared to arise from marginal learning about the specificity, but not about the sensitivity.

Figure 2.8: Posterior distribution of log φ in Example 1. The first column gives the M-COR posterior, the second column gives the U-COR posterior, and the third column gives the E-COR posterior. The rows correspond to sample sizes N₀ = N₁ = 200, N₀ = N₁ = 800, N₀ = N₁ = 3200 and N₀ = N₁ = ∞.

Figure 2.9: 80% highest posterior density credible intervals for log φ in Example 1. The solid vertical lines represent credible intervals for forty data sets with sample sizes N₀ = N₁ = 3200; the solid horizontal line indicates the true value of log φ and the dashed horizontal line indicates log φ′. The first panel gives the M-COR intervals, the second panel gives the U-COR intervals, and the third panel gives the E-COR intervals.

Figure 2.10: Prior and posterior samples of the sensitivity and specificity for the datasets in Example 1. (a) Prior. (b) Sample size N₀ = N₁ = 200. (c) Sample size N₀ = N₁ = 3200. (d) Sample size N₀ = N₁ = ∞.

Figure 2.11: Posterior distribution of log φ in Example 2. The first column gives the M-COR posterior, the second column gives the U-COR posterior, and the third column gives the E-COR posterior. The rows correspond to sample sizes
N₀ = N₁ = 200, N₀ = N₁ = 800, N₀ = N₁ = 3200 and N₀ = N₁ = ∞.

Figure 2.12: 80% highest posterior density credible intervals for log φ in Example 2. The solid vertical lines represent credible intervals for forty data sets with sample sizes N₀ = N₁ = 3200; the solid horizontal line indicates the true value of log φ and the dashed horizontal line indicates log φ′. The first panel gives the M-COR intervals, the second panel gives the U-COR intervals, and the third panel gives the E-COR intervals.

Figure 2.13: Posterior distribution of log φ in Example 3. The first column gives the M-COR posterior, the second column gives the U-COR posterior, and the third column gives the E-COR posterior. The rows correspond to sample sizes N₀ = N₁ = 200, N₀ = N₁ = 800, N₀ = N₁ = 3200 and N₀ = N₁ = ∞.

Figure 2.14: 80% highest posterior density credible intervals for log φ in Example 3. The solid vertical lines represent credible intervals for forty data sets with sample sizes N₀ = N₁ = 3200; the solid horizontal line indicates the true value of log φ and the dashed horizontal line indicates log φ′. The first panel gives the M-COR intervals, the second panel gives the U-COR intervals, and the third panel gives the E-COR intervals.

Chapter 3

Two Test Case

In the previous chapter we examined a case-control setting where exposure is measured with one imperfect test. We now turn our attention to the situation where the exposure status is determined by two imperfect classification schemes. We will illustrate the existing methods for correcting the odds-ratio estimates, and examine our method via a simulation study and an analysis of data on sudden infant death syndrome.
3.1 Data and Setup

As in Chapter 2, we define the relevant parameters and introduce the general setup. Consider the data from a case-control study in which each subject has an underlying true, but unobserved, exposure (E), coded as 1 for exposed and 0 for unexposed. This exposure is assessed by applying two measures or tests (say, T₁ and T₂) to each subject. These tests are coded as 1 for a positive and 0 for a negative outcome. We will assume that the classification procedures misclassify the subjects nondifferentially, and that the disease status is known exactly, without error. When two tests are used to determine the exposure status, the data usually come in the following form (Table 3.1).

                          Controls                         Cases
                           Test 2                          Test 2
                         +        -                      +        -
    Test 1    +         a₀       b₀     a₀ + b₀         a₁       b₁     a₁ + b₁
              -         c₀       d₀     c₀ + d₀         c₁       d₁     c₁ + d₁
                                        N₀                              N₁

Table 3.1: Distribution of subjects in a case-control study when two tests are used to assess the exposure status. Symbol + denotes exposed and - denotes unexposed.

We define the following parameters to model the probabilities of the different possible outcomes in controls and cases:

    α₁ = Pr(T₁ = 1 | E = 1),    α₂ = Pr(T₂ = 1 | E = 1),
    β₁ = Pr(T₁ = 0 | E = 0),    β₂ = Pr(T₂ = 0 | E = 0),

with the exposure prevalences π₀ and π₁ and the odds-ratio φ defined as before. Here, α₁ and α₂ are the sensitivities of test 1 and test 2, respectively, and β₁ and β₂ are the specificities of test 1 and test 2, respectively.

3.2 Correction Methods

In this section we illustrate three different methods for correcting the odds-ratio estimates using dual measurements. The first method uses only the concordant data (i.e., the data for which the two measurements are in agreement). The other two methods use the EM algorithm to obtain the estimates. This algorithm is a useful way of obtaining maximum likelihood estimates when some data are missing. Each iteration of the EM algorithm involves two steps: the expectation step (E-step) and the maximization step (M-step).
For precise definitions of these steps and a full description of the EM algorithm, see Dempster, Laird and Rubin (1977).

3.2.1 The Marshall and Graham Method

Marshall and Graham (1984) consider the problem of exposure to a risk factor where the "true" exposure is unknown. They proposed a simple way to decrease the bias in the odds-ratio estimates caused by misclassification, using two independent imperfect tests to gather information on exposure status. The method is based on restricting the analysis to the data for which two independent assessments of exposure are concordant, either positive or negative. Subjects for which both tests are positive are considered exposed, and those for which both tests are negative are considered unexposed.

They demonstrated that the use of two tests in this fashion can provide a less biased estimate of the odds-ratio. Even if the second test is less accurate than the first, the contrast between subjects about whom there is test agreement yields a better approximation of the odds-ratio than if only the more accurate report were used.

There are, of course, obvious drawbacks to this method. The Marshall-Graham procedure uses only a subset of the data, the concordant observations. Clearly, by using only part of the data, the statistical efficiency of the estimate is decreased. Furthermore, the procedure does not completely remove the bias caused by misclassification. Both the sensitivity and the specificity of both assessment procedures must be relatively high in order to closely approximate the relative risk. For example, if the sensitivity and specificity are about 0.9, the prevalence of exposure in cases is 0.3 and the prevalence of exposure in controls is about 0.097, the true odds-ratio of 4.0 will be approximated by about 3.7.
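The attenuation figures just quoted can be reproduced directly. The sketch below is our own Python illustration (the function names are ours, not Marshall and Graham's); it computes the large-sample odds-ratio obtained when the analysis is restricted to test-concordant subjects, under two tests that share a common sensitivity and specificity and that err independently given true exposure.

```python
def concordant_odds_ratio(pi0, pi1, sens, spec):
    """Large-sample odds-ratio computed from concordant cells only.

    pi0, pi1: true exposure prevalences in controls and cases.
    Both tests have the given sensitivity and specificity and
    misclassify independently given true exposure status.
    """
    def cells(pi):
        both_pos = pi * sens ** 2 + (1 - pi) * (1 - spec) ** 2
        both_neg = pi * (1 - sens) ** 2 + (1 - pi) * spec ** 2
        return both_pos, both_neg

    pos1, neg1 = cells(pi1)  # cases
    pos0, neg0 = cells(pi0)  # controls
    return (pos1 / neg1) / (pos0 / neg0)


def true_odds_ratio(pi0, pi1):
    """Odds-ratio based on the true exposure prevalences."""
    return (pi1 / (1 - pi1)) / (pi0 / (1 - pi0))
```

With sens = spec = 0.9, pi1 = 0.3 and pi0 = 0.097, concordant_odds_ratio returns roughly 3.67 while true_odds_ratio returns roughly 3.99, matching the "about 3.7 versus 4.0" comparison above.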
3.2.2 The Drews, Flanders and Kosinski Method

Drews, Flanders and Kosinski (1993) examine methods that use two classification schemes to improve the odds-ratio estimates in case-control studies. Their assumptions differ slightly from ours in that they do not assume that errors in one test are independent of errors in the second test. This is reflected in the following parameters used by these authors, in addition to the sensitivity and specificity of test 1, to model the probabilities of the different possible outcomes in cases and controls:

    α2μ1 = Pr(T2 = 1 | E = 1, T1 = 1),
    α2   = Pr(T2 = 1 | E = 1, T1 = 0),
    β2μ0 = Pr(T2 = 0 | E = 0, T1 = 1),
    β2   = Pr(T2 = 0 | E = 0, T1 = 0).

Under this model, μ1 and μ0 represent deviations from a model in which, given true exposure status, errors in one test are independent of errors in the second test. Note also that unless the misclassification probabilities of T2 are independent of the misclassification probabilities of T1 (that is, unless μ1 = μ0 = 1), α2 and β2 are not the sensitivity and specificity of T2. Since there are two tests applied to two populations, there are six degrees of freedom with which to estimate six parameters. Hence, the likelihood is identifiable, provided that we treat μ1 and μ0 as known, which is precisely what the authors do. This might be a drawback because, in practice, the values of μ1 and μ0 are not easily available.

To derive the likelihood, consider an individual who was positive on both tests. This person could have been truly exposed and correctly tested positive on both tests [P = π1α1α2μ1], or the person could have been truly unexposed and falsely tested positive on both tests [P = (1 - π1)(1 - β1)(1 - β2μ0)]. The net contribution to the likelihood is then [π1α1α2μ1 + (1 - π1)(1 - β1)(1 - β2μ0)]^{a1}, where a1 is the number of individuals who tested positive on both test 1 and test 2 and are cases.
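The same bookkeeping applies to the other three cells. As an illustration (our own Python sketch, not code from Drews et al.; the placement of mu1 and mu0 follows the conditional definitions given above), the four cell probabilities for one group and the resulting multinomial log-likelihood contribution can be written as:

```python
import math

def drews_cell_probs(pi, a1, b1, a2, b2, mu1=1.0, mu0=1.0):
    """Cell probabilities for (T1,T2) = (+,+), (+,-), (-,+), (-,-).

    pi: true exposure prevalence in the group; a1, b1: sensitivity and
    specificity of test 1; a2, b2, mu1, mu0 as defined in the text.
    Setting mu1 = mu0 = 1 recovers conditionally independent errors.
    """
    p_pp = pi * a1 * a2 * mu1 + (1 - pi) * (1 - b1) * (1 - b2 * mu0)
    p_pm = pi * a1 * (1 - a2 * mu1) + (1 - pi) * (1 - b1) * b2 * mu0
    p_mp = pi * (1 - a1) * a2 + (1 - pi) * b1 * (1 - b2)
    p_mm = pi * (1 - a1) * (1 - a2) + (1 - pi) * b1 * b2
    return p_pp, p_pm, p_mp, p_mm

def drews_loglik(counts, pi, a1, b1, a2, b2, mu1=1.0, mu0=1.0):
    """Multinomial log-likelihood of one group, up to an additive constant."""
    probs = drews_cell_probs(pi, a1, b1, a2, b2, mu1, mu0)
    return sum(n * math.log(p) for n, p in zip(counts, probs))
```

Note that the four cell probabilities sum to one for any mu1 and mu0, since the dependence parameters only redistribute probability between the two T2 outcomes within each T1 outcome.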
Similar contributions from the other cells in Table 3.1 give the following likelihood:

L = ∏_{k=0,1} [πk α1 α2 μ1 + (1 - πk)(1 - β1)(1 - β2 μ0)]^{a_k}
    × [πk α1 (1 - α2 μ1) + (1 - πk)(1 - β1) β2 μ0]^{b_k}
    × [πk (1 - α1) α2 + (1 - πk) β1 (1 - β2)]^{c_k}
    × [πk (1 - α1)(1 - α2) + (1 - πk) β1 β2]^{d_k}.                  (3.1)

To estimate the parameters that maximize this likelihood, the authors use the EM algorithm. In this case, the missing data are the true number of exposed and the true number of unexposed subjects in each cell of Table 3.1.

A very similar methodology was developed by Hui and Walter (1980). It is a careful application of the work done on the problem of evaluating the accuracy of a new diagnostic test against a standard test with unknown error rates. This method yields a likelihood similar to (3.1). Unlike Drews et al., however, they do not introduce μ1 and μ0 into their analysis, and the maximization of the likelihood is done analytically. A drawback of this approach is that the estimates can sometimes lie outside the parameter space (i.e., outside the interval [0, 1]).

3.2.3 The Kaldor and Clayton Method

The methodology developed by Kaldor and Clayton (1985) uses latent class analysis to correct the odds-ratio estimate. To understand how their approach extends to a two-test case, we first introduce the general setup.

Suppose that all measurements made are categorical, and that the disease status D is measured without error. Suppose also that some variables V1, ..., Vr are measured without error, but that each of the variables W1, ..., Ws is a measurement of one of t latent classes C1, ..., Ct (the unobservable true exposure), where t < s, and is subject to misclassification error. Conditional on Ck, the variables Wj are mutually independent, and independent of D and Vi.
Each possible outcome in the (1 + r + s + t)-dimensional table resulting from a cross-classification by all these variables can be represented by the vector x = (d, v, w, c), where d is the disease status, and v, w and c are respectively r-, s- and t-dimensional vectors representing categories of Vi, Wj and Ck. Further, suppose that mx represents the number of individuals in category x. Of course, this variable is unobservable, since the true values of the latent classes are unknown. One can only observe m_{dvw·}, where the dot indicates summation over all categories of the t latent classes. The authors assume, however, that mx can be generated by a log-linear model for μx = E(mx).

In terms of the definitions above, for a model with two measurements per subject, we have r = 0, since there are no variables measured without error, and t = 1, because there is only one latent variable underlying the repeat measurements. Each of the variables D, C and Wj is dichotomous, taking the values 0 and 1. The latent class/logistic model is then of the form

log μx = θ + θ^D_d + θ^C_c + θ^{DC}_{dc} + Σ_{j=1}^{s} (θ^{Wj}_{wj} + θ^{CWj}_{cwj}),

where s is the number of repeat measurements made of the risk factor C. The authors utilize the EM algorithm to estimate the relevant parameters. Here, the missing data are the unobservable true exposures (C1, ..., Ct).

3.3 Our Method

Our method is an extension of the Joseph, Gyorkos and Coupal (1995) method for inference on the sensitivities and specificities of two classification procedures applied to one population. We modify their method to apply to the two-test, two-population scenario.

Let the unobserved latent data Ai, Bi, Ci and Di, i = 0, 1, represent the numbers of truly exposed subjects out of the observed cell counts ai, bi, ci and di, i = 0, 1, respectively, in the 2 × 2 × 2 table (see Table 3.1). In addition to the assumptions and definitions made in section 3.1, we will assume that the two tests determine the exposure status independently.
We believe that this assumption is as reasonable as the assumption made by Drews, Flanders and Kosinski (1993), whereby the degree of dependence between the tests is assumed known.

Let us now examine the possible outcomes involving the observed and the latent data. Since any subject, whether truly exposed or not, can test positively or negatively on each test, there are eight possible combinations for both cases and controls. An individual can truly be exposed and correctly classified as exposed by both tests [πi α1 α2], or can truly be exposed and correctly classified by one of the tests and misclassified by the other [πi α1 (1 - α2) or πi (1 - α1) α2], and so on. Similarly, an individual can truly be unexposed and misclassified by both tests [(1 - πi)(1 - β1)(1 - β2)], or can truly be unexposed and correctly classified by both tests [(1 - πi) β1 β2], and so forth. All eight combinations are summarized in Table 3.2.

Number of    True        Test 1   Test 2   Contribution to
subjects     exposure    (T1)     (T2)     likelihood
Ai           +           +        +        πi α1 α2
Bi           +           +        -        πi α1 (1 - α2)
Ci           +           -        +        πi (1 - α1) α2
Di           +           -        -        πi (1 - α1)(1 - α2)
ai - Ai      -           +        +        (1 - πi)(1 - β1)(1 - β2)
bi - Bi      -           +        -        (1 - πi)(1 - β1) β2
ci - Ci      -           -        +        (1 - πi) β1 (1 - β2)
di - Di      -           -        -        (1 - πi) β1 β2

Table 3.2: Distribution of observed and latent data when two imperfect tests are used, together with the contribution to the likelihood each combination of the observed and latent data makes. Symbol + denotes exposed, and - denotes unexposed.

Under the assumption of independence between controls and cases, the derivation of the likelihood function of the latent and observed data in Table 3.2 is straightforward:

l(Ai, Bi, Ci, Di, ai, bi, ci, di | πi, αj, βj)
  = ∏_{i=0,1} {Ni! / [Ai! ··· (di - Di)!]}
    × [πi α1 α2]^{Ai} [πi α1 (1 - α2)]^{Bi}
    × [πi (1 - α1) α2]^{Ci} [πi (1 - α1)(1 - α2)]^{Di}
    × [(1 - πi)(1 - β1)(1 - β2)]^{ai - Ai} [(1 - πi)(1 - β1) β2]^{bi - Bi}
    × [(1 - πi) β1 (1 - β2)]^{ci - Ci} [(1 - πi) β1 β2]^{di - Di}.

By gathering the proper exponents and collecting like terms, the likelihood can be rewritten as

∏_{i=0,1} {Ni! / [Ai! ··· (di - Di)!]} πi^{Ai+Bi+Ci+Di} (1 - πi)^{Ni-(Ai+Bi+Ci+Di)}
  × α1^{A0+B0+A1+B1} (1 - α1)^{C0+D0+C1+D1} α2^{A0+C0+A1+C1} (1 - α2)^{B0+D0+B1+D1}
  × β1^{c0+d0+c1+d1-(C0+D0+C1+D1)} (1 - β1)^{a0+b0+a1+b1-(A0+B0+A1+B1)}
  × β2^{b0+d0+b1+d1-(B0+D0+B1+D1)} (1 - β2)^{a0+c0+a1+c1-(A0+C0+A1+C1)}.          (3.2)

As in chapter 2, we will choose beta densities to represent the prior information available for αi, βi and πi, i = 0, 1. Moreover, we will assume that the six parameters are independent a priori, with

    αi ~ Beta(σ_{αi}, μ_{αi}), i = 1, 2,
    βi ~ Beta(σ_{βi}, μ_{βi}), i = 1, 2,
    π0 ~ Beta(σ_{π0}, μ_{π0}),
    π1 ~ Beta(σ_{π1}, μ_{π1}),

giving a joint prior density

p(α1, β1, α2, β2, π0, π1) = ∏_{i=0,1} p_{πi}(πi) ∏_{j=1,2} p_{αj}(αj) p_{βj}(βj).          (3.3)

Since the posterior density is proportional to the product of the likelihood function (3.2) and the prior distribution (3.3), it is easily seen that the posterior density is proportional to

∏_{i=0,1} {Ni! / [Ai! ··· (di - Di)!]} πi^{Ai+Bi+Ci+Di+σ_{πi}-1} (1 - πi)^{Ni-(Ai+Bi+Ci+Di)+μ_{πi}-1}
  × α1^{A0+B0+A1+B1+σ_{α1}-1} (1 - α1)^{C0+D0+C1+D1+μ_{α1}-1}
  × α2^{A0+C0+A1+C1+σ_{α2}-1} (1 - α2)^{B0+D0+B1+D1+μ_{α2}-1}
  × β1^{c0+d0+c1+d1-(C0+D0+C1+D1)+σ_{β1}-1} (1 - β1)^{a0+b0+a1+b1-(A0+B0+A1+B1)+μ_{β1}-1}
  × β2^{b0+d0+b1+d1-(B0+D0+B1+D1)+σ_{β2}-1} (1 - β2)^{a0+c0+a1+c1-(A0+C0+A1+C1)+μ_{β2}-1}.          (3.4)

We plan to use the Gibbs sampler to sample from (3.4).
The implementation of this algorithm should be relatively straightforward, as we have the following univariate conditional densities:

α1 | A0, B0, C0, D0, A1, B1, C1, D1 ~ Beta(A0 + B0 + A1 + B1 + σ_{α1}, C0 + D0 + C1 + D1 + μ_{α1}),

β1 | ai, bi, ci, di, Ai, Bi, Ci, Di ~ Beta(c0 + d0 + c1 + d1 - (C0 + D0 + C1 + D1) + σ_{β1}, a0 + b0 + a1 + b1 - (A0 + B0 + A1 + B1) + μ_{β1}),

α2 | A0, B0, C0, D0, A1, B1, C1, D1 ~ Beta(A0 + C0 + A1 + C1 + σ_{α2}, B0 + D0 + B1 + D1 + μ_{α2}),

β2 | ai, bi, ci, di, Ai, Bi, Ci, Di ~ Beta(b0 + d0 + b1 + d1 - (B0 + D0 + B1 + D1) + σ_{β2}, a0 + c0 + a1 + c1 - (A0 + C0 + A1 + C1) + μ_{β2}),

πi | Ai, Bi, Ci, Di ~ Beta(Ai + Bi + Ci + Di + σ_{πi}, Ni - (Ai + Bi + Ci + Di) + μ_{πi}), i = 0, 1,

Ai | πi, αj, βj, ai ~ Binomial(ai, πi α1 α2 / [πi α1 α2 + (1 - πi)(1 - β1)(1 - β2)]), i = 0, 1,

Bi | πi, αj, βj, bi ~ Binomial(bi, πi α1 (1 - α2) / [πi α1 (1 - α2) + (1 - πi)(1 - β1) β2]), i = 0, 1,

Ci | πi, αj, βj, ci ~ Binomial(ci, πi (1 - α1) α2 / [πi (1 - α1) α2 + (1 - πi) β1 (1 - β2)]), i = 0, 1,

Di | πi, αj, βj, di ~ Binomial(di, πi (1 - α1)(1 - α2) / [πi (1 - α1)(1 - α2) + (1 - πi) β1 β2]), i = 0, 1.

We see that the Gibbs sampler enables us to draw inference not only on the prevalences of exposure (and consequently on the odds-ratio), but also on the sensitivities and specificities of the two classification schemes.

3.3.1 Examples

Here, we investigate the validity of our method by applying it to eight hypothetical case-control studies where the true values of the sensitivities, specificities, prevalence of exposure in controls and prevalence of exposure in cases are known. Data will be simulated using these true values for three different sample sizes, N0 = N1 = 200, N0 = N1 = 800 and N0 = N1 = 3200. One example will be discussed in more detail, and the results for all eight scenarios will be summarized in a table.

Example 1. Consider a scenario where the true values of the six parameters are (α1, β1, α2, β2, π0, π1) = (0.85, 0.90, 0.90, 0.88, 0.07, 0.18). The values of π0 and π1 imply φ = 2.92, or log φ = 1.07. Data was simulated under this scenario for three different sample sizes.
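How such data can be simulated is straightforward; the following sketch is our own Python illustration (the function and variable names are ours) of drawing Table 3.1 counts for one group under the nondifferential, conditionally independent misclassification model of section 3.1.

```python
import random

def simulate_two_test_group(n, pi, sens1, spec1, sens2, spec2, seed=0):
    """Simulate one group (cases or controls) of Table 3.1.

    Each subject is truly exposed with probability pi; given exposure,
    test j is positive with probability sens_j, and given non-exposure
    with probability 1 - spec_j, independently across the two tests.
    Returns (a, b, c, d): counts for (T1,T2) = (+,+), (+,-), (-,+), (-,-).
    """
    rng = random.Random(seed)
    a = b = c = d = 0
    for _ in range(n):
        exposed = rng.random() < pi
        t1 = rng.random() < (sens1 if exposed else 1 - spec1)
        t2 = rng.random() < (sens2 if exposed else 1 - spec2)
        if t1 and t2:
            a += 1
        elif t1:
            b += 1
        elif t2:
            c += 1
        else:
            d += 1
    return a, b, c, d
```

For instance, with the Example 1 control parameters (pi = 0.07, sensitivities 0.85 and 0.90, specificities 0.90 and 0.88) the expected proportion of subjects positive on both tests is π0 α1 α2 + (1 - π0)(1 - β1)(1 - β2) ≈ 0.065.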
For the sample size N0 = N1 = 200, the simulated values were (a0, b0, c0, d0) = (11, 9, 21, 159) and (a1, b1, c1, d1) = (33, 8, 13, 146). For the sample size N0 = N1 = 800, the simulated values were (a0, b0, c0, d0) = (47, 83, 90, 580) and (a1, b1, c1, d1) = (113, 78, 90, 519). Finally, for the sample size N0 = N1 = 3200, the simulated values were (a0, b0, c0, d0) = (181, 295, 348, 2376) and (a1, b1, c1, d1) = (453, 297, 338, 2112).

As in chapter 2, the prior for the prevalence of exposure in controls and the prevalence of exposure in cases was chosen to be Beta(1, 1). Furthermore, suppose that the investigator does not have very good prior knowledge about the sensitivities and specificities of the two tests, but is only willing to assume that the probabilities of correct classification are greater than chance (i.e., that they lie in the interval [0.5, 1]). We will translate this information into a beta prior which assigns 0.95 probability to this interval and has its mode at a value in this interval. In this example, we will set the mode at 0.8 for all four classification probabilities. This implies a Beta(7.55, 2.64) prior for the sensitivities and specificities.

Five independent chains of the Gibbs sampler were run using different starting values. The sequential output is shown in Figure 3.1. Before attempting any inference, we first examine the adequacy of the Gibbs sampler. The sequential plots of the post burn-in period (the first 500 observations from each chain were discarded) in Figure 3.1 indicate that convergence has occurred. The five independent chains appear to consistently converge to the same region of the parameter space, with no instances of slow mixing or drifting behaviour. Hence, we will regard Figure 3.1 as informal evidence that our posterior sampling is adequate. The statistical inference will be based on 5 × (2500 - 500) = 10000 sampled pairs of the prevalence of exposure in controls and the prevalence of exposure in cases.
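A single chain of the sampler just described can be sketched in code. The following is our own simplified Python illustration (standard library only); for brevity it hard-codes Beta(1, 1) priors on all six parameters (all σ and μ hyperparameters equal to 1) rather than the Beta(7.55, 2.64) priors used in this example, and it draws binomials as Bernoulli sums.

```python
import random

def gibbs_two_test(counts0, counts1, n_iter=1500, burn=300, seed=1):
    """Gibbs sampler for the two-test latent class model.

    counts_i = (a_i, b_i, c_i, d_i): observed cells for (T1,T2) =
    (+,+), (+,-), (-,+), (-,-) in controls (i=0) and cases (i=1).
    Beta(1,1) priors throughout (a simplification for illustration).
    Returns post burn-in draws of (pi0, pi1, alpha1, beta1, alpha2, beta2).
    """
    rng = random.Random(seed)
    obs = [counts0, counts1]
    N = [sum(c) for c in obs]

    def rbinom(n, p):  # binomial via Bernoulli sum; fine for modest n
        return sum(rng.random() < p for _ in range(n))

    al1 = al2 = be1 = be2 = 0.8            # starting values
    pi = [0.2, 0.2]
    A, B, C, D = [0, 0], [0, 0], [0, 0], [0, 0]
    draws = []
    for it in range(n_iter):
        # 1) sample latent truly-exposed counts within each observed cell
        for i in (0, 1):
            a, b, c, d = obs[i]
            p = pi[i]
            A[i] = rbinom(a, p*al1*al2 / (p*al1*al2 + (1-p)*(1-be1)*(1-be2)))
            B[i] = rbinom(b, p*al1*(1-al2) / (p*al1*(1-al2) + (1-p)*(1-be1)*be2))
            C[i] = rbinom(c, p*(1-al1)*al2 / (p*(1-al1)*al2 + (1-p)*be1*(1-be2)))
            D[i] = rbinom(d, p*(1-al1)*(1-al2) / (p*(1-al1)*(1-al2) + (1-p)*be1*be2))
        # 2) sample the six parameters from their beta full conditionals
        TP = [A[i] + B[i] + C[i] + D[i] for i in (0, 1)]  # truly exposed
        pi = [rng.betavariate(TP[i] + 1, N[i] - TP[i] + 1) for i in (0, 1)]
        sA, sB = A[0] + A[1], B[0] + B[1]
        sC, sD = C[0] + C[1], D[0] + D[1]
        ta = obs[0][0] + obs[1][0]; tb = obs[0][1] + obs[1][1]
        tc = obs[0][2] + obs[1][2]; td = obs[0][3] + obs[1][3]
        al1 = rng.betavariate(sA + sB + 1, sC + sD + 1)
        al2 = rng.betavariate(sA + sC + 1, sB + sD + 1)
        be1 = rng.betavariate(tc + td - (sC + sD) + 1, ta + tb - (sA + sB) + 1)
        be2 = rng.betavariate(tb + td - (sB + sD) + 1, ta + tc - (sA + sC) + 1)
        if it >= burn:
            draws.append((pi[0], pi[1], al1, be1, al2, be2))
    return draws
```

Running this on the N0 = N1 = 800 data of Example 1 gives posterior draws whose implied odds-ratio concentrates near the true value φ = 2.92.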
Figure 3.1: Post burn-in output of the five independent chains of the Gibbs sampler for the sensitivities, specificities, prevalence of exposure in controls and prevalence of exposure in cases in Example 1. Data was simulated for N0 = N1 = 200.

Figure 3.2 shows the histograms of the posterior log odds-ratio for the three sample sizes considered, together with the true log odds-ratio, which is obtained when Ni → ∞, i = 0, 1. We see that the convergence toward the true value of the log odds-ratio is much more rapid than in the case when one test is used. Also, as there is no nonidentifiability in the likelihood, the estimation error is O(n^{-1/2}), which is witnessed in the widths of the posterior distributions. As the sample size increases from 200 to 800, and from 800 to 3200, the width of the posterior distribution decreases roughly by a factor of 2.

The posteriors in Figure 3.2 are, of course, based on single datasets. To assess the generality of the findings and the coverage of the credible intervals, we simulate 1000 datasets under the same scenario for the three sample sizes. For each dataset and each sample size we compute 80% HPD credible intervals for log φ. The first forty intervals appear in Figure 3.3. The average interval widths are 1.01, 0.51 and 0.28, and the empirical coverage rates for the intervals are 85.7%, 83.8% and 84.1% for the sample sizes N0 = N1 = 200, N0 = N1 = 800 and N0 = N1 = 3200, respectively. As the standard error of the empirical coverage rates is approximately 1.26%, it appears that the 80% HPD credible intervals exhibit slight overcoverage for all three sample sizes.

When doing Bayesian inference, it is important to assess the sensitivity of the estimates. As we have already seen in Figure 3.1, the Gibbs sampler is not at all sensitive to the choice of starting values; all five chains converged to the same region of the parameter space.
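Recall the Beta(7.55, 2.64) prior used above for the classification probabilities. Its stated properties (mode 0.8 and probability 0.95 on the interval [0.5, 1]) can be verified numerically; the sketch below is our own illustration, using Simpson's rule rather than any particular statistical library.

```python
import math

def beta_mode(a, b):
    """Mode of a Beta(a, b) density, for a, b > 1."""
    return (a - 1) / (a + b - 2)

def beta_prob_interval(lo, hi, a, b, n=2000):
    """P(lo <= X <= hi) for X ~ Beta(a, b), by Simpson's rule (n even)."""
    log_norm = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

    def pdf(x):
        if x <= 0.0 or x >= 1.0:
            return 0.0  # density vanishes at the endpoints when a, b > 1
        return math.exp((a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - log_norm)

    h = (hi - lo) / n
    total = pdf(lo) + pdf(hi)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * pdf(lo + k * h)
    return total * h / 3
```

Here beta_mode(7.55, 2.64) is about 0.80 and beta_prob_interval(0.5, 1.0, 7.55, 2.64) is about 0.95, consistent with the elicitation described in the text.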
We now investigate whether the estimates of log φ are overly sensitive to different choices of the hyperparameters. We re-run the Gibbs sampler for the intermediate sample size, N0 = N1 = 800, with different values of the hyperparameters. We consider two diametrically opposed scenarios. First, we examine the case where a Beta(1, 1) prior is used for all six parameters. This scenario could potentially arise when an investigator has no prior knowledge of any of the six parameters, or is not certain of the validity of the available information. Second, we consider the

Figure 3.2: Posterior samples of the logarithm of the odds-ratio for the datasets in Example 1. (a) Sample size N0 = N1 = 200; (b) sample size N0 = N1 = 800; (c) sample size N0 = N1 = 3200; (d) sample size N0 = N1 = ∞.

Figure 3.3: 80% highest posterior density credible intervals for log φ in Example 1. The solid vertical lines represent credible intervals for forty data sets and the solid horizontal line indicates the true value of log φ. Panel (a) gives the intervals for the sample size N0 = N1 = 200, panel (b) gives the intervals for the sample size N0 = N1 = 800, and panel (c) gives the intervals for the sample size N0 = N1 = 3200.

Figure 3.4: Posterior samples of the logarithm of the odds-ratio for the sample size N0 = N1 = 800 in Example 1. (a) Beta(1, 1) prior used for all six parameters. (b) Beta(7.55, 2.64) prior used for αi and βi, i = 1, 2, while a Beta(1, 1) prior is used for π0 and π1 (the histogram is identical to that shown in Figure 3.2 (b)).
(c) Beta prior for each parameter chosen so that the mode γ is centred at the true value and 0.95 probability is assigned to the interval γ ± 0.05.

case where very good prior knowledge is available for all six parameters. The hyperparameters σ and μ of the Beta(σ, μ) priors were chosen so that the mode γ is centred at the true value of each parameter and 0.95 probability is assigned to the interval γ ± 0.05. The histograms of log φ are shown in Figure 3.4, together with a histogram identical to that shown in Figure 3.2 (b). As evidenced in Figure 3.4, the posterior distribution of log φ is very insensitive to the choice of hyperparameter values in this example, suggesting that precise knowledge of the hyperparameter values is not very important to the analysis.

The results for all eight examples are shown in Table 3.3. Under each scenario, 1000 data sets were simulated to assess the coverage of the 80% HPD credible intervals and the mean interval length. It appears that for each scenario and each sample size, the 80% HPD credible intervals exhibit slight overcoverage.

Parameters                              Empirical coverage of 80% HPD CI
α1     β1     α2     β2     log φ       Ni = 200       Ni = 800       Ni = 3200
0.96   0.96   0.96   0.96   0.75        83.3% (0.94)   82.8% (0.48)   81.9% (0.25)
0.95   0.90   0.90   0.95   0.75        83.2% (0.96)   84.3% (0.47)   83.1% (0.27)
0.90   0.90   0.90   0.87   0.86        84.4% (0.98)   84.7% (0.50)   84.2% (0.28)
0.85   0.85   0.87   0.87   0.86        83.8% (1.03)   83.6% (0.51)   82.1% (0.24)
0.85   0.83   0.87   0.85   1.54        81.9% (1.02)   84.9% (0.52)   83.0% (0.22)
0.82   0.82   0.82   0.82   1.54        81.8% (1.03)   82.6% (0.52)   82.5% (0.29)
0.85   0.90   0.90   0.88   1.07        83.2% (1.01)   82.5% (0.47)   82.6% (0.28)
0.65   0.65   0.65   0.65   1.11        81.2% (1.04)   82.8% (0.49)   81.6% (0.28)

Table 3.3: True parameter values and the empirical coverage of the 80% HPD credible intervals for log φ for the sample sizes Ni = 200, Ni = 800 and Ni = 3200, i = 0, 1. Numbers in brackets are the mean interval lengths. For the empirical coverage and the mean length, 1000 data sets were simulated under each scenario.
From the above table, we see that the empirical coverage rates do not appear to depend on the values of the six parameters. In particular, the empirical coverage rates for the scenario where relatively high values were chosen for the sensitivities and specificities (the first row in Table 3.3) are not significantly different from the empirical coverage rates for the scenario where relatively low values were chosen (the last row in Table 3.3).

3.4 An Application to Real Data

We will now compare our method to the method of Drews et al. (1993) by analyzing data from a case-control study of sudden infant death syndrome (SIDS). The exposure data were obtained from maternal interviews and medical records. The data appear in Drews et al. (1993), as taken from Hoffman et al. (1988), and come from the National Institute of Child Health and Human Development's (NICHD) case-control study of SIDS. A total of 844 SIDS victims, each with two age-matched living infants as controls, were included in the case-control study. A first control was individually age-matched to the case in such a way that, at the time of the interview, the control would be the same age as the case had been when he or she died. A second control was matched to the case on birth weight and race. Drews et al. (1993) were given access to data on a pseudorandom sample (a systematic sample with a random start) of 226 of the 844 SIDS cases and one of the two age-matched controls for each of the 226 cases. See Hoffman et al. (1988) or Drews, Kraus and Greenland (1990) for a full description. Mothers of cases and controls were interviewed, within five weeks of the death of the cases, regarding events which had occurred during their pregnancies, their labours and deliveries, and their infants' lives.
We looked at six dichotomous exposure variables: maternal anemia during pregnancy (ANEM), maternal urinary tract infection during pregnancy (UTI), previous spontaneous abortion (PSA), low pregnancy weight gain, defined as a weight gain of less than 15 pounds (LPWG), maternal antibiotic use during pregnancy (PAU), and polio vaccination (PV) before death or interview. Data for these variables were available from both medical records and maternal interviews. Subjects for which information was missing from either source were not included in the analysis. Further, since the matching of cases and controls was fairly weak, it was ignored. Table 3.4 shows the distribution of controls and cases according to both interview and medical record data for the six variables.

                    Controls              Cases
                    MR                    MR
                    +      -              +      -
ANEM   Int +        20     15             24     15
       Int -        43     147            49     125
UTI    Int +        14     4              13     14
       Int -        10     190            14     174
PSA    Int +        21     11             23     15
       Int -        13     175            12     169
PV     Int +        91     6              69     9
       Int -        12     66             9      78
LPWG   Int +        6      4              11     5
       Int -        3      86             13     73
PAU    Int +        21     16             29     17
       Int -        12     168            22     143

Table 3.4: A pseudorandom sample of 226 SIDS cases and 226 controls from the NICHD study. Data were classified using medical record (MR) and interview (Int) data.

Interview data were arbitrarily taken to represent test 1 and medical records to represent test 2. This distinction has no ill effect, since we assumed that the errors of the two tests are independent. As no reliable prior information about the sensitivity and specificity of the interview data and medical records was available, we assigned a Beta(1, 1) prior to all six parameters. The results of our analysis are shown in Table 3.5 and Table 3.6. Table 3.5 shows median estimates of the sensitivities, specificities and log φ, together with 95% HPD credible intervals for log φ. Table 3.6 shows 95% HPD credible intervals for the sensitivities and specificities of the interview data and medical records.
Five independent chains of the Gibbs sampler were run with different starting values. The inference was based on 5 × (5500 - 500) = 25000 draws.

As mentioned before, Drews et al. (1993) used the EM algorithm to obtain estimates of the sensitivity and specificity of the interview data and medical records, and of the odds-ratio. The results of their analysis appear in Table 3.7. Note that for each data set, at least one of the parameters estimated using the Drews et al. (1993) method lies on the boundary of the parameter space (i.e., 1.00). This is, of course, not true when the Gibbs sampler is used. Except for this distinction, the EM algorithm and the Gibbs sampler produce similar estimates of the sensitivities and specificities for the ANEM, PV, LPWG and PAU data sets. Greater discrepancies occur for the UTI and PSA data sets. For instance, in the UTI data set, the estimate of the sensitivity of the medical records obtained by the Gibbs sampler is 0.67, whereas the estimate obtained using the EM algorithm is 0.56, a difference of 0.11. Also, the estimate of the sensitivity of the interview data using the Gibbs sampler is 0.70, while the one obtained via the EM algorithm is 0.60. A similar finding applies to the PSA data set.

The point estimates of log φ are similar in magnitude, though the ones obtained via the Gibbs sampler are smaller than the EM estimates for all but the ANEM data set. However, the credible intervals are narrower than the confidence intervals for all six data sets. In particular, in five out of six data sets, the credible intervals for log φ are completely contained in the confidence intervals. This finding is somewhat surprising, since our credible intervals take into account the variability associated with the sensitivities and specificities of the two tests, which is not true of the confidence intervals. The variance used to calculate the confidence interval for log φ is estimated using the delta method, as follows:

Var(log φ) = Var(π̂0) / [π̂0(1 - π̂0)]² + Var(π̂1) / [π̂1(1 - π̂1)]² - 2 Cov(π̂0, π̂1) / {[π̂0(1 - π̂0)][π̂1(1 - π̂1)]}.

The values of Var(π̂0), Var(π̂1) and Cov(π̂0, π̂1) can be obtained by substituting the parameter estimates into the expected information matrix and inverting this matrix to obtain the estimated covariance matrix. Therefore, Var(log φ) depends only on the point estimates of the sensitivities and specificities, not on the estimates of their standard errors. Explicit formulae for the information matrix when the errors of the two classification schemes are independent (μ1 = μ0 = 1) are given in Hui and Walter (1980). Since we have established the coverage of the credible intervals through the simulation studies, and their length via the analysis of a real data set, we can conclude that they perform better than the confidence intervals.

3.5 Discussion

In this chapter we have examined the situation where the exposure status is determined by two imperfect classification schemes. We have seen that the Gibbs sampler works very well in this case, presumably because the likelihood is identifiable. Our method performed well both in the simulation studies and in the analysis of the real data, where it appeared to outperform the EM algorithm approach of Drews et al. (1993).

           Interview Data      Medical Record                95% HPD Cr. Int.
Variable   α       β           α       β          log φ      for log φ
ANEM       0.75    0.91        0.42    0.94       0.52       (-0.07, 1.15)
UTI        0.70    0.97        0.67    0.98       0.45       (-0.39, 1.07)
PSA        0.75    0.96        0.77    0.96       0.15       (-0.42, 0.74)
PV         0.96    0.94        0.92    0.94       -0.43      (-0.89, 0.01)
LPWG       0.78    0.98        0.56    0.97       1.08       (0.15, 2.01)
PAU        0.77    0.96        0.67    0.94       0.58       (0.02, 1.17)

Table 3.5: Median estimates of the sensitivities, specificities and the logarithm of the odds-ratio, and 95% HPD credible intervals for the logarithm of the odds-ratio, using the Gibbs sampler.
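For comparison, the delta-method variance used for the EM-based confidence intervals is simple to compute once the covariance matrix of (π̂0, π̂1) is in hand. The sketch below is our own illustration of that formula; the numbers in the usage example are made up for demonstration.

```python
def var_log_odds_ratio(p0, p1, var_p0, var_p1, cov_p0p1):
    """Delta-method variance of log(odds-ratio) from the exposure prevalences.

    Uses d logit(p)/dp = 1 / (p (1 - p)), so
    Var(log phi) = g0^2 Var(p0) + g1^2 Var(p1) - 2 g0 g1 Cov(p0, p1).
    """
    g0 = 1.0 / (p0 * (1.0 - p0))
    g1 = 1.0 / (p1 * (1.0 - p1))
    return var_p0 * g0 ** 2 + var_p1 * g1 ** 2 - 2.0 * cov_p0p1 * g0 * g1
```

For instance, with p0 = p1 = 0.5, equal variances of 0.01 and zero covariance, the formula gives 0.32; a positive covariance between the two prevalence estimates reduces the variance, since log φ is their logit difference.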
           Interview Data                   Medical Record
Variable   α              β                 α              β
ANEM       (0.61, 0.90)   (0.83, 0.96)      (0.30, 0.54)   (0.88, 0.97)
UTI        (0.56, 0.88)   (0.91, 1.00)      (0.49, 0.87)   (0.91, 1.00)
PSA        (0.63, 0.89)   (0.89, 0.99)      (0.64, 0.92)   (0.90, 0.99)
PV         (0.89, 0.99)   (0.87, 0.98)      (0.84, 0.97)   (0.88, 0.98)
LPWG       (0.60, 0.91)   (0.93, 1.00)      (0.41, 0.74)   (0.93, 1.00)
PAU        (0.61, 0.91)   (0.89, 1.00)      (0.49, 0.87)   (0.89, 0.98)

Table 3.6: 95% HPD credible intervals for the sensitivities and specificities of the interview data and the medical records.

           Interview Data      Medical Record                95% Conf. Int.
Variable   α       β           α       β          log φ      for log φ
ANEM       0.78    1.00        0.35    0.93       0.51       (-0.23, 1.24)
UTI        0.60    0.99        0.56    1.00       0.51       (-0.27, 1.30)
PSA        0.63    0.93        1.00    1.00       0.21       (-0.44, 0.86)
PV         1.00    1.00        0.88    0.91       -0.46      (-0.94, 0.02)
LPWG       0.84    1.00        0.52    0.97       1.17       (0.2, 2.31)
PAU        0.79    1.00        0.60    0.94       0.62       (-0.01, 1.24)

Table 3.7: Estimates of the sensitivities, specificities and the logarithm of the odds-ratio, and 95% confidence intervals for the logarithm of the odds-ratio, using the EM algorithm (Drews et al. (1993)).

Chapter 4

Conclusion

As we have seen, misclassification of exposure poses serious problems in statistical analysis. The effects of ignoring misclassification, and the methods which correct for it, have received considerable attention in the literature. More often than not, exposure misclassification substantially biases estimates of the relative risk, even when the misclassification rates are very small. Nondifferential misclassification, for example, tends to attenuate observed exposure-disease relationships (the odds-ratio, for instance).

Corrected estimates of the odds-ratio φ can easily be obtained if the misclassification probabilities (the sensitivity and specificity) are known. Barron (1977) and Greenland and Kleinbaum (1983) provide classical methods for correcting the odds-ratio, which rely on the invertibility of matrices.
We introduced a Bayesian approach to correcting the odds-ratio when the sensitivity and specificity are known. Sampling from the posterior distribution of the prevalence of exposure among cases and controls (and hence of the odds-ratio) proved to be quite simple.

However, having the exact values of the sensitivity and specificity is not very common in practice. Instead, good guesses of these values might be available (through previous studies or by comparing the diagnostic test to a gold standard). It might be very tempting to carry out the above analysis by pretending that these guesses are the true values of the sensitivity and specificity. We showed, however, that even the smallest differences between the true and guessed values can result in very large differences between the miscorrected and true odds-ratios.

This result, together with the previous work of Marshall (1989), suggested that it is reasonable to incorporate the uncertainty into the analysis. Bayesian methods make this feasible, as partial knowledge about the sensitivity and specificity is easily represented with an appropriate prior distribution. We suggested two algorithms to sample from the posterior distribution: the Gibbs sampler and the Metropolis-Hastings algorithm. The Gibbs sampler did not work particularly well. A possible explanation is that drifting behaviour of the Gibbs sampler may occur when it is applied to a nonidentifiable model. On the other hand, the Metropolis-Hastings algorithm applied to a parameterization that separates the identifiable and nonidentifiable parts seemed to perform well. We were able to demonstrate that the approach which admits the uncertainty in the guesses of the sensitivity and specificity performs better than the method that ignores it. We also showed that, despite the nonidentifiability, the marginal prior and posterior distributions of (α, β) were not equal. As a result of learning about the apparent exposure prevalences (θ1, θ2), one could learn about (α, β).
In Chapter 3 we examined the scenario where the exposure data are available from two imperfect sources. Various methods for improving the odds-ratio estimates by combining data from two imperfect classification schemes have been suggested, of which we outlined three. The method of Marshall and Graham (1984) uses only the concordant observations to correct the odds-ratio estimates. The methods of Drews et al. (1993) and Kaldor and Clayton (1985) are both classical approaches that incorporate the underlying true exposure into their analysis. We developed a Bayesian latent class approach, where the latent class was the true exposure, and used the Gibbs sampler to sample from the posterior distributions of the six parameters. Both the adequacy of the Gibbs sampler and the validity of our method were established through a simulation study. The analysis of case-control data on the sudden infant death syndrome showed that our method can easily be applied in practice, and also suggested that it outperforms the method of Drews et al. (1993).

The following suggestions could be considered for further development of the methodology presented here. One would be to extend the method to the situation where three or more imperfect tests are used to assess the exposure; this should be a very simple extension of our methodology. Another would be to modify the methods to apply to matched-pair case-control studies. Finally, it seems useful to explore the situation where, in addition to the misclassified dichotomous exposure, other covariates are measured, possibly without error. This does not appear to be a simple extension of our method, because it would require a link function relating the probability of disease to the covariates in question.

Bibliography

[1] Barron, B.A. "The effects of misclassification on the estimation of relative risk." Biometrics, 33, 414-418 (1977).

[2] Blettner, M., Wahrendorf, J. "What does an observed relative risk convey about possible misclassification?" Methods of Information in Medicine, 41, 923-937 (1984).

[3] Chib, S., Greenberg, E. "Understanding the Metropolis-Hastings algorithm." The American Statistician, 49, 327-335 (1995).

[4] Dawid, A.P. "Conditional independence in statistical theory (with discussion)." Journal of the Royal Statistical Society B, 41, 1-31 (1979).

[5] Dempster, A.P., Laird, N.M., Rubin, D.B. "Maximum likelihood from incomplete data via the EM algorithm (with discussion)." Journal of the Royal Statistical Society B, 39, 1-38 (1977).

[6] Drews, C.D., Flanders, W.D., Kosinski, A.S. "Use of two data sources to estimate odds ratios in case-control studies." Epidemiology, 4, 327-335 (1993).

[7] Drews, C.D., Kraus, J.R., Greenland, S. "Recall bias in a case-control study of sudden infant death syndrome." International Journal of Epidemiology, 19, 405-411 (1990).

[8] Gelfand, A.E., Sahu, S.K. "Identifiability, improper priors, and Gibbs sampling for generalized linear models." Journal of the American Statistical Association, 94, 247-253 (1999).

[9] Gentle, J.E. Random Number Generation and Monte Carlo Methods. New York, Springer-Verlag, 1998.

[10] Greenland, S., Kleinbaum, D.G. "Correcting for misclassification in two-way tables and matched-pair studies." International Journal of Epidemiology, 12, 93-97 (1983).

[11] Hoffman, H.J., Hunter, J.C., Ellish, N.J., Janerich, D.T., Goldberg, J. "Adverse reproductive factors and the sudden infant death syndrome." In: Harper, C.M.R., Hoffman, H.J., eds. Sudden Infant Death Syndrome: Risk Factors and Basic Mechanisms. New York, PMA Publishing, 1988.

[12] Hui, S.L., Walter, S.D. "Estimating the error rates of diagnostic tests." Biometrics, 36, 167-171 (1980).

[13] Joseph, L., Gyorkos, T.W., Coupal, L. "Bayesian estimation of disease prevalence and the parameters of diagnostic tests in the absence of a gold standard." American Journal of Epidemiology, 141, 263-272 (1995).

[14] Kaldor, J., Clayton, D. "Latent class analysis in chronic disease epidemiology." Statistics in Medicine, 4, 327-335 (1985).

[15] Marshall, J.R. "The use of dual or multiple reports in epidemiological studies." Statistics in Medicine, 8, 1041-1049 (1989).

[16] Marshall, J.R., Graham, S. "Use of dual responses to increase validity of case-control studies." Journal of Chronic Diseases, 37, 125-136 (1984).

[17] Thomas, D., Stram, D., Dwyer, J. "Exposure measurement error: influence on exposure-disease relationships and methods of correction." Annual Review of Public Health, 14, 69-93 (1993).

[18] Walter, S.D. "Commentary on 'Use of dual responses to increase validity of case-control studies'." Journal of Chronic Diseases, 37, 137-139 (1984).

[19] Walter, S.D., Irwig, L.M. "Estimation of test error rates, disease prevalence and relative risk from misclassified data: a review." Journal of Clinical Epidemiology, 9, 923-937 (1988).