THE DESIGN OF QUANTAL RESPONSE EXPERIMENTS AND THE MODELLING OF QUANTAL RESPONSE EXPERIMENTS OVER TIME

by

RANDY RUDOLF SITTER
B.Sc., The University of British Columbia, 1984

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES, Statistics Department

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
August 1986
© Randy Rudolf Sitter, 1986

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of _________
The University of British Columbia
1956 Main Mall
Vancouver, Canada V6T 1Y3

ABSTRACT

The problem of designing a quantal response experiment when estimation of the median effective dose (ED50) is of main interest is examined. The asymptotic variances of the maximum likelihood estimators of the ED50 for various 3 and 5 point designs, using the logit model, are compared to the minimum possible, which is achieved with an inadvisable 1 point design (Chernoff[5]). Alternate criteria for choosing a design that attempt to incorporate goodness-of-fit of the model are then examined.

The modelling of quantal response experiments observed over time is also considered. A growth-curve approach to this problem was suggested by Carter and Hubert[3], and applied to a data set. The feasibility of this approach is discussed, and a simpler, more direct approach is proposed. The two models are applied to the presented data set, and the resulting fits are compared. The model proposed here appears to fit the data better. Inference about the ED50 using the two models is also compared.

TABLE OF CONTENTS

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgement
I. Designs For Quantal Response Experiments Based On Minimum Variance
   1. Description of Quantal Response Models
   2. Optimal Design for Estimation of ED50
   3. Alternate Multi-point Designs
   4. An Example
II. Alternate Criteria For Design Of Quantal Response Experiments
   1. Introduction to the Problem
   2. Alternate Criterion 1
   3. Alternate Criterion 2
III. Quantal Response Experiments Over Time
   1. Description of Problem
   2. Proposed Model
   3. Estimation and Confidence Intervals for the ED50
   4. Goodness-Of-Fit and Model Simplification
   5. Application to an Example
References

LIST OF TABLES

I. 3pt Designs, Symmetric About ED50
II. 5pt Designs, Symmetric About ED50
III. 3pt Design, P(x) = 0.2, Incorrect Value of ED50
IV. 5pt Design, P(x_1) = 0.2, P(x_2) = 0.4, Incorrect Value of ED50
V. Regional Comparison of Study Design to 5pt Design
VI. Comparison of 5pt Equal Allocation Design to Study Design for Various Values of μ
VII. Observed and Expected Cumulative Mortality Counts; Carter and Hubert
VIII. Testing Nested Models
IX. Observed and Expected Cumulative Mortality Counts; Model 4
X. Point Estimates and Confidence Intervals for ED50

LIST OF FIGURES

Figures 1 through 8.

ACKNOWLEDGEMENT

I thank Dr. A. J. Petkau for his guidance and encouragement in producing this thesis. I also thank Dr. Harry Joe for his suggestions and comments. This work was supported by the Natural Sciences and Engineering Research Council of Canada, as well as by teaching and research assistantships from the Department of Statistics, University of British Columbia.

I. Designs For Quantal Response Experiments Based On Minimum Variance

1. Description of Quantal Response Models

A typical quantal response problem consists of an observation y at a dose level x classified into two categories, response and non-response, with probabilities P(x) and 1 − P(x), respectively. Take n_i independent observations of this type at k dose levels, observing the number of responses r_i, i = 1, ..., k. If we then assume P(x) = F(x | θ), where F(x | θ) is a distribution function and θ is a vector of parameters, the log likelihood function is given by

$$ L(\theta) = \sum_{i=1}^{k} \left\{ r_i \log F(x_i \mid \theta) + (n_i - r_i)\log\left[1 - F(x_i \mid \theta)\right] \right\}. $$

The maximum likelihood estimator (MLE) θ̂ can then be found for θ. Depending on the choice of F(x | θ), the maximum likelihood equations, obtained by setting the derivative of L(θ) with respect to θ to 0 and solving for θ, may not have an explicit solution. In this case the MLE must be found numerically using an iterative method such as Newton-Raphson.

The most commonly used models for quantal response problems are the probit model (Finney[7]) and the logit model (Berkson[1]). The probit model is obtained by assuming

$$ F(x \mid \theta) = \Phi(\alpha + \beta x), $$

and the logit by assuming

$$ F(x \mid \theta) = \Psi(\alpha + \beta x), $$

where Φ(·) denotes the standard normal distribution function,

$$ \Psi(t) = \left[1 + e^{-t}\right]^{-1}, \qquad -\infty < t < +\infty, $$

and −∞ < α < +∞, 0 < β < +∞ are the unknown parameters corresponding to θ.

Often the main interest in modelling quantal response curves is the estimation of the particular dose level at which the probability of response is 50%. Generally the dose level effecting a response in ϑ% of the test subjects is denoted by EDϑ; thus the 50% response dose is denoted by ED50. For "reasonable designs" (design points not too distant from the ED50), estimation of the ED50 using the probit and logit models produces similar results. Chapter I and Chapter II will deal with the logit model, but a similar development could be carried out for the probit model.

For estimation of the ED50 using the logit model a simple reparameterization is convenient. Letting μ = −α/β and σ = 1/β yields Ψ(α + βx) = Ψ((x − μ)/σ), and μ is the ED50. The information matrix for the MLE θ̂ = (μ̂, σ̂)ᵀ is

$$ I(\theta) = \frac{n}{\sigma^2}\begin{pmatrix} \sum_{i=1}^{k}\lambda_i\psi(z_i) & \sum_{i=1}^{k}\lambda_i z_i\psi(z_i) \\ \sum_{i=1}^{k}\lambda_i z_i\psi(z_i) & \sum_{i=1}^{k}\lambda_i z_i^2\psi(z_i) \end{pmatrix}, \qquad (1.1) $$

where z_i = (x_i − μ)/σ, n = Σ_i n_i, λ_i = n_i/n, and ψ(t) = e^t/(1 + e^t)², and thus the asymptotic variance of μ̂ is given by V(μ̂) = I⁻¹₁₁(θ).

A question which immediately comes to mind when considering such a quantal response experiment is the optimal experimental design in terms of what dose levels to use, the number of dose levels to use, and how to allocate subjects to these dose levels.

2. Optimal Design for Estimation of ED50

In considering this problem it is useful to first look at a linear regression model with

$$ Y = \beta_1 x_1 + \cdots + \beta_k x_k + \epsilon, $$

where ε is a random variable with mean 0 and variance v², and for different observations Y_i the corresponding ε_i's are independent.
One formulation of the optimal design problem is the following: for x = (x_1, ..., x_k) in a specified set S, select n points of S, (x_1, ..., x_n), so as to yield a minimum variance unbiased estimator of

$$ \phi = \sum_{i=1}^{k} a_i\beta_i = a^T\beta, $$

where a_1, ..., a_k are known constants. Elfving[6] gave the following graphical solution to this problem (see also Chernoff[5]). Let S* be the convex set generated by the points of S and S⁻, the reflection of S about the origin. Then z, the point at which the ray from the origin through a intersects the boundary of S*, represents the solution in the following way. If z is a convex combination of points x_i of S or −x_i of S⁻ with weights λ_i, assign nλ_i observations to experimental level x_i. The variance of the resulting estimate of φ will then be (v²/n)(‖a‖/‖z‖)². If the nλ_i are not integers, this is not an exact solution and slightly understates the achievable variance. Chernoff[5] uses this method to solve a more general problem, which includes the optimal design problem of interest here.

In the simple regression problem with k = 2, the contribution to the inverse of the covariance matrix of β̂ based on a single observation at (x_1, x_2) is given by

$$ \frac{1}{v^2}\begin{pmatrix} x_1^2 & x_1x_2 \\ x_1x_2 & x_2^2 \end{pmatrix}. $$

With the identification

$$ x_1 = \frac{1}{\sigma}\psi^{1/2}(z), \qquad x_2 = \frac{z}{\sigma}\psi^{1/2}(z), $$

where z = (x − μ)/σ, I(θ) in (1.1) is of exactly this form, and the design problem of minimizing the asymptotic variance of the MLE of φ = a_1μ + a_2σ can be viewed as the regression problem with

$$ S = \left\{ x = (x_1, x_2) : -\infty < z < +\infty \right\}. $$

To estimate μ = ED50, let aᵀ = (1, 0). The solution, shown in Figure 1, is to place all observations at the point z = 0 (i.e. x = μ); the corresponding asymptotic variance of μ̂ would be

$$ V(\hat{\mu}) = \frac{4\sigma^2}{n}. $$

For estimation of the ED50, exactly the same design is obtained for the probit function; see Chernoff[5].

Of course μ is unknown, but the situation might arise where there is some idea of the value of μ. If this is the case, the above solution would suggest putting all the observations at this suspected value of μ. This design would not be used in practice, however, since even if the main interest is the estimation of the ED50, examination of the fit of the model would also be of importance and this design does not allow for any such test. However, it does give an optimal design with which to compare more realistic designs.

3. Alternate Multi-point Designs

In view of the form of the optimal design it seems reasonable to add more dose levels while keeping a large number of observations in the vicinity of the suspected value of μ. Smith, Savin, and Robertson[13] looked at the maximum likelihood estimates of the ED50 for the logit model, and their rate of convergence to normality, for some 5 and 8 point designs. Their main conclusion was: when inference about the ED50 is of main interest, symmetric (about μ) designs are advisable and extreme response probabilities should be avoided.

In view of this work, this section considers some symmetric (about μ) multi-point designs and compares them to the optimal design for estimation of the ED50. Table I shows a number of such 3 point designs and the resulting asymptotic variances of μ̂ for the logit model. P(x) = Ψ((x − μ)/σ) is assumed known at 3 points, and λ is the fraction of observations at each of dose levels x_1 and x_3, where P(x_1) = 1 − P(x_3) is given at the top of the table. The remaining observations are assumed to be taken at x_2 = μ, where P(x_2) = Ψ(0) = 0.5.*

* Depending on n, the total number of observations, these designs could lead to non-integer allocations.
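The entries of Table I follow directly from the information matrix (1.1). The sketch below (Python; my code, not part of the original thesis, with function names of my choosing) evaluates the scaled variance (n/σ²)V(μ̂) and the efficiency 4 / [(n/σ²)V(μ̂)] for symmetric 3 point logit designs of the kind tabulated.

```python
import numpy as np

def psi(z):
    """Logistic density psi(t) = e^t / (1 + e^t)^2."""
    return np.exp(z) / (1.0 + np.exp(z))**2

def scaled_var_mu(z, lam):
    """(n/sigma^2) * V(mu-hat) for a logit design with standardized
    doses z_i = (x_i - mu)/sigma and allocation fractions lam_i."""
    z, lam = np.asarray(z, float), np.asarray(lam, float)
    w = lam * psi(z)
    info = np.array([[w.sum(), (w * z).sum()],
                     [(w * z).sum(), (w * z**2).sum()]])   # I(theta) divided by n/sigma^2
    return np.linalg.inv(info)[0, 0]

# Symmetric 3-point design: fraction lam at each of the two outer doses
# (response probabilities p1 and 1 - p1), the remainder at the guessed ED50 (z = 0).
for p1 in (0.05, 0.1, 0.2, 0.3):
    z1 = np.log(p1 / (1 - p1))
    for lam in (0.1, 0.2, 0.3, 0.4):
        v = scaled_var_mu([z1, 0.0, -z1], [lam, 1 - 2 * lam, lam])
        print(f"P(x1)={p1:.2f}  lam={lam:.1f}  (n/s^2)V={v:6.3f}  eff={4/v:.2f}")
```

For example, P(x_1) = 0.2 with λ = 0.1 gives (n/σ²)V(μ̂) ≈ 4.31 and efficiency 0.93, matching the corresponding entry of Table I.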
From Table I it can be seen that, depending upon the range of P(x) around P(μ) = 0.5 in which one is interested, relatively high efficiency can be achieved. These 3 point designs yield at least a heuristic check on the fit of the model in this range (one could at least check the assumed symmetry, for example). With an increased range of interest, the fraction of observations one must put at μ to achieve the same efficiency increases: 1) for fixed P(x), efficiency decreases as λ increases; and 2) for fixed λ, efficiency decreases as the range widens, that is, as P(x_1) moves farther from 0.5.

Table II shows a similar relationship for 5 point symmetric designs. Here λ_1 is the fraction of the n observations put at both x_1 and x_5, with P(x_1) = 1 − P(x_5). Similarly, λ_2 is the fraction of the n observations put at both x_2 and x_4, with P(x_2) = 1 − P(x_4). P(x_1) and P(x_2) are given at the top of each sub-table. The remaining observations are placed at x_3 = μ, where P(x_3) = Ψ(0) = 0.5.

From Table I and Table II it seems reasonable to assume that, if the main interest is estimation of the ED50, and goodness-of-fit of the model is only of interest in a moderate region of P(x) about P(μ) = 0.5 (say 0.2 ≤ P(x) ≤ 0.8), then given a good initial guess of μ and σ, a pyramid type design, symmetric about μ and within the region of interest, will yield high efficiency. This type of design also allows some assessment of goodness-of-fit of the model.

Of course the guessed values of μ and σ used to design the experiment could be quite poor. Of interest then is the robustness of this type of design to poor initial guesses of μ and σ. To address this question, assume that σ is known but our value of μ is incorrect. Suppose the experiment is designed assuming P(x_0) = 0.5, but P(x_0) actually equals p*. This implies

$$ \frac{x_0 - \mu}{\sigma} = \Psi^{-1}(p^*), \qquad \text{or} \qquad \mu = x_0 - \sigma\,\Psi^{-1}(p^*) = x_0 - \sigma\log\frac{p^*}{1-p^*}. $$

So for any dose level x the actual value of P(x) is

$$ P(x) = \Psi\!\left(\frac{x - x_0}{\sigma} + \gamma\right), \qquad \gamma = \log\frac{p^*}{1-p^*}, $$

but the design assumes P(x) = Ψ((x − x_0)/σ). From this the actual asymptotic variances of the designs in Table I and Table II can be calculated given any specific guessed value of μ.

Table III gives the resulting efficiency (relative to the optimal) of the 3 point design from Table I, with P(x_1) = 0.2, for incorrect values of μ. The experiment was designed assuming P(x_0) = 0.5; the actual value of P(x_0) is given at the top of each sub-table. Note that as λ approaches 0, V(μ̂) approaches +∞ for any incorrect guess of μ. Table IV gives the same results for the 5 point design from Table II, with P(x_1) = 0.2 and P(x_2) = 0.4. These two designs could be thought of as competitors in the situation where goodness-of-fit is only of interest in the range 0.2 ≤ P(x) ≤ 0.8. The tabulations show that the pyramid designs have higher efficiency if the guessed value of μ is reasonably good, but the equal allocation designs are more robust to poor guesses of μ.

The question of robustness can also be addressed without assuming σ is known, but the evaluations become less decipherable. It should be noted that incorrect guesses of μ affect the symmetry of the design, while incorrect guesses of σ affect the distance from μ at which the design points are taken. So, though all-encompassing statements are difficult, some idea of the relative importance of the accuracy of guesses of μ and σ can be obtained by comparing the effect of poor guesses of μ, outlined in Table III, with the effect of decreasing P(x_1) in Table I.
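The robustness calculation just described amounts to shifting the standardized design points by γ before evaluating (1.1). A short sketch (Python; my code, not from the thesis) recomputes the first sub-table of Table III, where the design is built assuming P(x_0) = 0.5 but the true centre-point probability is p* = 0.3.

```python
import numpy as np

def psi(z):
    return np.exp(z) / (1.0 + np.exp(z))**2

def scaled_var_mu(z, lam):
    """(n/sigma^2) * V(mu-hat) from the 2x2 logit information matrix."""
    z, lam = np.asarray(z, float), np.asarray(lam, float)
    w = lam * psi(z)
    info = np.array([[w.sum(), (w * z).sum()],
                     [(w * z).sum(), (w * z**2).sum()]])
    return np.linalg.inv(info)[0, 0]

p1, p_star = 0.2, 0.3                        # design assumes P(x1) = 0.2; truth has P(x0) = 0.3
gamma = np.log(p_star / (1 - p_star))        # shift of the true standardized doses
z_design = np.array([np.log(p1 / (1 - p1)), 0.0, -np.log(p1 / (1 - p1))])
for lam in (0.1, 0.2, 0.3, 0.4):
    v = scaled_var_mu(z_design + gamma, [lam, 1 - 2 * lam, lam])
    print(f"lam={lam:.1f}  (n/s^2)V={v:6.2f}  eff={4/v:.2f}")   # compare Table III, P(x0)=0.3
```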
4. An Example

The Department of Fisheries and Oceans, Vancouver BC recently sponsored a survey of sport fishing in British Columbia.* As part of the analysis of the data collected, the logit quantal response model was used to estimate the economic value of sport fishing in BC tidal waters. Along with some background information, four questions were asked of fishermen returning to docks in four major fishing areas on Vancouver Island:

1) How many days do you plan to go fishing between now and the end of the next month? D = ___ days

2) Suppose you were offered D (days) × y = Dy dollars to give up fishing in tidal waters until the end of next month. Would you accept the offer? No ___ Yes ___

3) How much did you spend on your fishing trip today? (Include costs such as bait, gasoline, boat rentals. Do not include equipment costing more than $100.) $___

4) Now imagine that the cost of fishing in BC tidal waters increased. If the cost of your fishing trip had been z dollars higher today, would you still have gone fishing? No ___ Yes ___

* Economic Valuation of the BC Tidal Sport Fishery. DPA Group Inc., Vancouver BC, February 1985.

In question 2 the amount offered, y, can be viewed as the dose level, with the answer a binary response. Similarly in question 4, the increased fishing cost, z, can be viewed as a dose level with the answer a binary response. Of main interest in this study was the ED50 for each of the four geographical areas, since the ED50 represents an estimate of the net "average value" per angler day of the sport fishing experience.

The Department of Fisheries and Oceans set 30 dose levels for each question, ranging from $2 to $80 per day for question 2 and $1 to $50 for question 4, and specified a target of an equal number of observations at each dose level. As the survey continued it became apparent that the dose levels chosen were a bit low in terms of symmetry about the ED50; hence, the range of dose levels was extended for the second half of the study. For question 2 the dose range was extended to include $2 to $200 and for question 4 to include $1 to $100, with equal allocation at each dose level from this point on. Thus overall there were approximately twice as many observations on the dose levels in the initial range as on the added ones.

Question 4 for the Sechelt area will be used for comparisons of this design to the designs of the previous sections. The maximum likelihood estimates, μ̂ and σ̂, for μ and σ, and the variance of μ̂ obtained in this study, based on the n = 382 observations, are as follows:

$$ \hat{\mu} = 41.34, \qquad \hat{\sigma} = -13.05, \qquad V(\hat{\mu}) = 7.40. $$

Due to the nature of the question, the higher the dose the fewer the number of positive responses, and thus the negative value of σ̂. If we assume the estimated values are in fact the exact values of μ and σ, then comparisons can be made. If the 5 point design of Table IV (P(x_1) = 0.2, P(x_2) = 0.4) had been used with λ_1 = 0.05 and λ_2 = 0.2, the estimate of μ obtained would have had asymptotic variance less than the above, as long as the initial guess μ_0 of μ was in the range 31 < μ_0 < 51.5, assuming σ had been guessed correctly. If the correct values of μ and σ had been guessed, the attainable V(μ̂) with this 5 point design would have been (see Table II)

$$ V(\hat{\mu}) = 4.22 \cdot (13.05)^2 / 382 = 1.88, $$

and n = 98 would have been sufficient to attain the asymptotic variance of 7.40 achieved in the study. Figure 2, a graph of V(μ̂) vs μ_0, shows explicitly the relationship between μ_0 and the achievable variance using this 5 point design, and the variance obtained in the study (7.40).
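The arithmetic behind these two numbers is short; a minimal sketch (Python; mine, not from the thesis), using the Table II entry 4.22 for the (P(x_1), P(x_2), λ_1, λ_2) = (0.2, 0.4, 0.05, 0.2) design and the Sechelt question 4 estimates:

```python
# Scaled variance (n/sigma^2)V(mu-hat) = 4.22 for the 5-point design of Table II.
scaled_v = 4.22
sigma_hat, n_study, v_study = 13.05, 382, 7.40    # Sechelt, question 4

v_5pt = scaled_v * sigma_hat**2 / n_study          # attainable variance, about 1.88
n_equiv = scaled_v * sigma_hat**2 / v_study        # about 97, so n = 98 would suffice
print(round(v_5pt, 2), int(n_equiv) + 1)
```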
Table V gives μ̂, σ̂, and V(μ̂) obtained in the study, as well as the attainable asymptotic V(μ̂) using the above 5 point design with correct guesses of μ and σ, for questions 2 and 4 in each of the geographical regions considered.

In this survey it is apparent that no accurate knowledge of μ and σ was available prior to designing the experiment. However, the use of such a large number of dose levels does not seem to be warranted. In addition to the loss of efficiency which has been demonstrated, the use of a large number of dose levels must have greatly complicated the study due to the need for randomization and balancing over both time and regions. It should also be noted that in this study some of the dose levels used obviously implied a priori extreme probabilities of response. The extreme dose levels close to zero provided essentially no information about the ED50. It would seem that more realistic lower bounds could have been possible. As noted in the previous section, numerical work suggests that this could affect the asymptotics used in inference about the ED50.

The initial design of equal allocation to 30 dose levels between $2 and $80 (for question 2) seems to indicate a willingness to assume that $2 to $80 encompasses a reasonable range of P(x) about the ED50, but with no commitment to any specific point within this range as an initial guess of μ. A 5 point design with equal allocation at dose levels equally spaced between $2 and $80 would have competitive variance for any value of μ, and represents relatively the same amount of prior knowledge while greatly simplifying the survey. To illustrate this, Table VI compares the attainable variance of μ̂ using equal allocation to the 30 dose levels used initially in the study with the attainable variance of μ̂ using equal allocation to the 5 points ($8, $24, $40, $56, $72), for various values of μ and assuming σ = −13.05, the value obtained for Sechelt question 4.

It should also be noted that, when it became apparent in the study that the choice of dose levels was too low and that more were to be added, a sequential approach could have been used to choose the next set of dose levels. A reasonable approach would be to estimate μ and σ using the information already obtained, and design the second half of the experiment assuming these are the true values of μ and σ (see Wetherill[14], for example).

II. Alternate Criteria For Design Of Quantal Response Experiments

1. Introduction to the Problem

When designing a quantal response experiment where estimation of μ, the ED50, is of main interest, minimizing the asymptotic variance of μ̂, the MLE of μ, with respect to dose allocation and number of doses, given a good initial guess θ_0 = (μ_0, σ_0) of θ = (μ, σ), leads to an inadvisable design, as described in Chapter I. One would like some alternate criterion which will assure some ability to test the goodness-of-fit of the presumed underlying model. Finney[7] proposed the criterion of minimizing the square of the half-length of a fiducial interval for μ, given a good initial guess μ_0 of μ. Kalish and Rosenberger[8] consider 2 point designs symmetric about x = μ, and determine D-optimal, G-optimal, A-optimal, and E-optimal designs. A design which minimizes the determinant of I⁻¹(θ), where I(θ) is the total information matrix for θ, is called D-optimal, and a design which minimizes the maximum variance of a predicted response over a specific region of the explanatory variables is called G-optimal.
A-optimality refers to minimizing the trace of I⁻¹(θ), and E-optimality to minimizing the maximum latent root. All of these criteria need a good initial guess of θ, since I(θ) depends on θ.

2. Alternate Criterion 1

When modelling a quantal response experiment as described in Chapter I, the probability of response p(x) at dose level x is assumed to be F(x | θ) for a specified cumulative distribution function F. The maximum likelihood estimate θ̂ of θ can then be obtained, as well as an estimate of Σ, the covariance matrix for θ̂. Using θ̂, the MLE for p(x) = F(x | θ) for a given x is p̂(x) = F(x | θ̂). A possible criterion for designing such an experiment under the model assumption is to choose the design D to minimize the expected overall distance between F(x | θ̂) and F(x | θ):

$$ C(D) = E_{\theta}\left\{ \int_{-\infty}^{+\infty} \left[ F(x \mid \hat{\theta}) - F(x \mid \theta) \right]^2 dx \right\}. $$

The integral over dose gives an overall measure of the distance between the p̂(x) curve and the true p(x) curve; Kuo[9] uses (a weighted version of) this distance measure as the loss function in a Bayesian nonparametric approach to the same problem. The criterion then suggests minimizing the expected value of this distance over all possible designs D.

First C(D) must be evaluated. The asymptotic value can be obtained as a function of Σ, the covariance matrix of θ̂. Expanding F(x | θ̂) in a Taylor series about F(x | θ) yields

$$ F(x \mid \hat{\theta}) \approx F(x \mid \theta) + (\hat{\theta} - \theta)^T \frac{\partial}{\partial\theta} F(x \mid \theta), $$

which implies

$$ C(D) = E\left\{ \int_{-\infty}^{+\infty} \left[ (\hat{\theta} - \theta)^T \frac{\partial}{\partial\theta} F(x \mid \theta) \right]^2 dx \right\}. $$

In matrix notation this becomes

$$ C(D) = E\left\{ \int_{-\infty}^{+\infty} \left( \frac{\partial}{\partial\theta} F(x \mid \theta) \right)^T (\hat{\theta} - \theta)(\hat{\theta} - \theta)^T \left( \frac{\partial}{\partial\theta} F(x \mid \theta) \right) dx \right\}, $$

which implies

$$ C(D) = \int_{-\infty}^{+\infty} \left( \frac{\partial}{\partial\theta} F(x \mid \theta) \right)^T \Sigma \left( \frac{\partial}{\partial\theta} F(x \mid \theta) \right) dx, \qquad (2.1) $$

since E(θ̂ − θ) = 0, so that E[(θ̂ − θ)(θ̂ − θ)ᵀ] = Σ.

For a location-scale model, θ = (μ, σ)ᵀ, where F(x | θ) = H((x − μ)/σ) for some cumulative distribution function H with density h. In this situation

$$ \frac{\partial}{\partial\mu} F(x \mid \theta) = -\frac{1}{\sigma}\, h\!\left(\frac{x-\mu}{\sigma}\right) \qquad \text{and} \qquad \frac{\partial}{\partial\sigma} F(x \mid \theta) = -\frac{x-\mu}{\sigma^2}\, h\!\left(\frac{x-\mu}{\sigma}\right). $$

Substituting these into equation (2.1) yields

$$ C(D) = \frac{1}{\sigma}\left\{ V(\hat{\mu})\int_{-\infty}^{+\infty} h^2(t)\,dt + V(\hat{\sigma})\int_{-\infty}^{+\infty} t^2 h^2(t)\,dt + 2\,\mathrm{Cov}(\hat{\mu},\hat{\sigma})\int_{-\infty}^{+\infty} t\, h^2(t)\,dt \right\}. \qquad (2.2) $$

Further, if h(·) is symmetric about 0, then ∫ t h²(t) dt = 0, in which case (2.2) reduces to

$$ C(D) = \frac{1}{\sigma}\left\{ V(\hat{\mu})\int_{-\infty}^{+\infty} h^2(t)\,dt + V(\hat{\sigma})\int_{-\infty}^{+\infty} t^2 h^2(t)\,dt \right\}. \qquad (2.3) $$

Finally, for the logit model (H = Ψ, h = ψ),

$$ \int_{-\infty}^{+\infty} \psi^2(t)\,dt = \frac{1}{6} \qquad \text{and} \qquad \int_{-\infty}^{+\infty} t^2\psi^2(t)\,dt = \frac{\pi^2 - 6}{18}. $$

Thus for the logit model (2.3) reduces to

$$ C(D) = \frac{1}{\sigma}\left\{ \frac{1}{6}\,V(\hat{\mu}) + \left(\frac{\pi^2 - 6}{18}\right) V(\hat{\sigma}) \right\}, \qquad (2.4) $$

where V(μ̂) = I⁻¹₁₁(θ) and V(σ̂) = I⁻¹₂₂(θ) are obtained from I(θ) in (1.1).

The proposal is to minimize C(D) as approximated in (2.4) with respect to λ_i = n_i/n, x_i, and k, where i = 1, ..., k. This was done numerically (using a successively refined grid search) for fixed values of k, assuming good initial guesses μ_0 and σ_0 of μ and σ. For k = 2 the design obtained consisted of x_1 and x_2 chosen such that P(x_1) = 1 − P(x_2) = 0.2, with λ_1 = λ_2 = 1/2. Setting k = 3 and k = 5 did not change the design obtained.

As described in Chapter I, Figure 1 provides the basis for determining the optimal design for the minimization of the asymptotic variance of any linear combination of μ̂ and σ̂, V(aμ̂ + bσ̂), where a and b are specified constants; these designs are 1 or 2 point designs depending on the particular linear combination of interest. Here we wish to minimize aV(μ̂) + bV(σ̂), and the covariance term which plays an important role in the above problem does not enter. Nevertheless, the situation is similar and it is not surprising that a 2 point design is advised.

Another way of obtaining a design using the C(D) criterion would be to choose the design to minimize the asymptotic variance of μ̂ subject to nσC(D) ≤ Q, where Q is some specified constant. There does not appear to be any natural way of specifying this constant, however, and this approach is not pursued further.
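A quick numerical check of the k = 2 result is easy to set up. The sketch below (Python; my code, not from the thesis) evaluates the design-dependent part of (2.4), restricting attention to 2 point designs symmetric about the guessed ED50 with equal allocation; the full grid search described above ranged over general designs, so this is only a confirmation within that restricted family.

```python
import numpy as np

def psi(z):
    return np.exp(z) / (1.0 + np.exp(z))**2

def objective(z, lam):
    """Design-dependent part of C(D) in (2.4):
    (1/6)*(n/s^2)V(mu-hat) + ((pi^2 - 6)/18)*(n/s^2)V(sigma-hat)."""
    z, lam = np.asarray(z, float), np.asarray(lam, float)
    w = lam * psi(z)
    info = np.array([[w.sum(), (w * z).sum()],
                     [(w * z).sum(), (w * z**2).sum()]])
    cov = np.linalg.inv(info)
    return cov[0, 0] / 6.0 + (np.pi**2 - 6.0) / 18.0 * cov[1, 1]

# Two-point designs symmetric about the guessed ED50, indexed by the outer
# response probability p, with half the observations at each point.
ps = np.linspace(0.05, 0.45, 81)
vals = [objective([np.log(p / (1 - p)), -np.log(p / (1 - p))], [0.5, 0.5]) for p in ps]
print(f"minimizing outer probability p = {ps[int(np.argmin(vals))]:.3f}")   # approximately 0.2
```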
3. Alternate Criterion 2

Criterion 1, proposed in the previous section, is based on minimizing a measure of the expected distance between the actual function, p(x), and the MLE, p̂(x), of p(x), assuming the model is correct. The design problem can be viewed in an alternate way. The experiment may be designed to ensure a powerful test against some specific deviation from the assumed model. Chapman and Nam[4] discuss this criterion for the case p(x) = α + βx. The discussion is based on the Pearson chi-square test associated with the hypothesis of interest. A formulation of some asymptotic results used by Chapman and Nam[4], and general enough for this situation, follows.

An experiment consists of k sequences of n_i trials (Σ_i n_i = n), where each trial may result in an event E or its complement Ē. Let p_i = p_i(θ) be the probability of event E on the iᵗʰ sequence, where θ = (θ_1, ..., θ_m)ᵀ belongs to a set M in m-dimensional Euclidean space (assume m < k). Let p_i⁰ = p_i(θ⁰), where θ⁰ is an interior point of M. Let u_i be the number of occurrences of E on the iᵗʰ sequence. With suitable conditions on the set M and the functions p_i(θ), under the sequence (n → +∞) of alternatives

$$ H_A : p_i = p_i^0 + c_i/\sqrt{n}, $$

the statistic

$$ X^2 = \sum_{i=1}^{k} \frac{\left[u_i - n_i p_i(\hat{\theta})\right]^2}{n_i\, p_i(\hat{\theta})\left[1 - p_i(\hat{\theta})\right]}, $$

where θ̂ is an asymptotically efficient estimator, is asymptotically, for λ_i = n_i/n fixed and as n → +∞, distributed as χ²_{[k−m]}(Δ); the noncentrality parameter Δ is given by (see Mitra[10])

$$ \Delta = \delta^T\left[ I - B(B^TB)^{-1}B^T \right]\delta, \qquad (2.5) $$

where

$$ \delta = \left( \frac{c_1\sqrt{\lambda_1}}{\sqrt{p_1^0(1-p_1^0)}}, \ldots, \frac{c_k\sqrt{\lambda_k}}{\sqrt{p_k^0(1-p_k^0)}} \right)^T \qquad (2.6) $$

and B is the k × m matrix with (i, j) element

$$ B_{ij} = \frac{\sqrt{\lambda_i}}{\sqrt{p_i^0(1-p_i^0)}}\,\frac{\partial p_i(\theta^0)}{\partial\theta_j}. \qquad (2.7) $$

In the dose response problem E represents a response, and Ē represents no response. The experiment can be constructed to ensure a powerful test of the hypothesis

$$ H_0 : p_i = F(x_i \mid \theta^0) \qquad \text{against} \qquad H_A : p_i = F(x_i \mid \theta^0) + c_i/\sqrt{n}. $$

The unknown vector of parameters θ can be estimated using maximum likelihood estimation. The choice of the c_i's depends upon the deviation from the model which is of most interest. For the logit model, F(x | θ) = Ψ((x − μ)/σ) and

$$ X^2 = \sum_{i=1}^{k} \frac{\left[u_i - n_i\Psi(\hat{z}_i)\right]^2}{n_i\,\Psi(\hat{z}_i)\left[1 - \Psi(\hat{z}_i)\right]}, $$

where ẑ_i = (x_i − μ̂)/σ̂, Ψ(t) = e^t/(1 + e^t), and ψ(t) = e^t/(1 + e^t)² as in Chapter I. The general theory implies that, in the limit, X² will have a chi-square distribution with k − 2 degrees of freedom under H_0, and a non-central chi-square distribution with k − 2 degrees of freedom and non-centrality parameter Δ under H_A. From (2.6) and (2.7),

$$ \delta^T B = -\frac{1}{\sigma_0}\left( \sum_{i=1}^{k} c_i\lambda_i,\; \sum_{i=1}^{k} c_i\lambda_i z_i^0 \right) \qquad (2.8) $$

and

$$ B^T B = \frac{1}{\sigma_0^2}\begin{pmatrix} A & C \\ C & D \end{pmatrix}, \qquad (2.9) $$

where A = Σ_i λ_iψ(z_i⁰), C = Σ_i λ_i z_i⁰ψ(z_i⁰), D = Σ_i λ_i (z_i⁰)²ψ(z_i⁰), and z_i⁰ = (x_i − μ_0)/σ_0. Thus in this case BᵀB = (1/n)I(θ⁰), where I(·) is given in (1.1), and Δ can then be written in terms of (2.8) and (2.9) as follows:

$$ \Delta = \delta^T\delta - (\delta^TB)(B^TB)^{-1}(\delta^TB)^T. $$

Choosing the design to minimize the asymptotic variance of μ̂ under the restriction Δ ≥ Δ_0, where Δ_0 is some specified constant, might appear to be a desirable way to proceed. But the above development is for a particular sequence of alternatives. The noncentrality parameter, Δ, is a function of the particular value of the parameter θ⁰ through z_i⁰ = (x_i − μ_0)/σ_0, and we only have the noncentrality parameter for all p "close enough" to F(x_i | θ⁰). This is the usual type of situation: if we want to discuss power, we have to be willing (able) to specify the magnitude of the departures of interest.
But the situation in our design problem is more complex; the objective is to ensure that the designs prescribed will be reasonably sensitive for detecting departures from the assumed underlying model, logit p(x) = (x − μ)/σ, where the values of the parameters are not specified. Also note that the c_i's which appear in the noncentrality parameter are deviations at particular x_i's, but these x_i's are part of what is determined in the design problem. Given {x_i}, θ⁰, and {c_i}, one could determine how the observations should be allocated to maximize Δ, and also what allocations yield Δ ≥ Δ_0, but this does not really address the design problem of interest.

III. Quantal Response Experiments Over Time

1. Description of Problem

In Chapter I a typical quantal response problem and the standard type of analysis for such problems were described. In this chapter a generalization of this problem is examined, and a method of analysis is proposed. In the generalized problem subjects are assigned to one of k fixed dose levels. At each of m fixed time points the subjects are classified into one of two categories, has responded or has not responded, and the number of subjects that have responded is noted. This is done for each dose level separately. A number of replications of this basic experiment could be performed.

Carter and Hubert[3] proposed the following growth-curve model approach to such problems.

Notation:
d_i — the iᵗʰ dose level, i = 1, ..., k; x_i = log₁₀(d_i);
t_j — the jᵗʰ time point, j = 1, ..., m;
n_{il} — the number of subjects at dose level d_i in replication l, l = 1, ..., L;
r_{ijl} — the number of subjects at dose level d_i in replication l that respond prior to time t_j.

First a transformation of the raw data was carried out:

$$ Z_{ijl} = \arcsin\sqrt{r_{ijl}/n_{il}}. $$

Then the response variables Z_{ijl} were modelled using a polynomial of order q − 1 in time and linear in log₁₀(dose) as follows:

$$ Z_{ijl} = \sum_{s=1}^{q}\left( \beta_{s1} + \beta_{s2}x_i + P_{sl} \right)t_j^{\,s-1} + \epsilon_{ijl}, $$

where Σ_l P_{sl} = 0 for each s. The error vectors are assumed to be independent and normally distributed with mean 0 and completely arbitrary (unknown) covariance matrix Σ. With the obvious vector notation, this model can be expressed as: the Z_{il} are independent random vectors with

$$ Z_{il} \sim N_m(\mu_{il}, \Sigma), \qquad \mu_{ilj} = \sum_{s=1}^{q}\left( \beta_{s1} + \beta_{s2}x_i + P_{sl} \right)t_j^{\,s-1}. $$

Under the stated assumptions, this model can then be analyzed using the growth curve methodology of Potthoff and Roy[12]. This has the advantage that the maximum likelihood estimators of β = (β_{s1}, β_{s2} : s = 1, ..., q) and P = (P_{sl} : s = 1, ..., q; l = 1, ..., L) have closed form solutions, and under the model assumptions all the necessary distribution theory is available. This allows straightforward determination of confidence intervals for the ED50 at any fixed time point, as well as simultaneous confidence bands for the ED50 as a function of time, without appealing to asymptotic results. Carter and Hubert[3] present the results of an application of this model to a data set consisting of L = 2 replications of a basic experiment involving n_{il} = 10 fish in each of k = 7 dose groups, all observed at the same m = 3 time points. A particular feature of the experiment is that the 10 fish in a particular dose group are all contained in a single tank to which a specified concentration of a toxic copper substance was administered; thus the experimental units are the tanks, while the sampling units are the fish.

There appear to be some potential difficulties with the use of this model in the present context. The first potential difficulty involves the normality assumption.
For fixed i and l, the possible values of R_{ijl} (j = 1, ..., m) are 0, 1, ..., n_{il}, but these response variables are further restricted by the relationship R_{i1l} ≤ R_{i2l} ≤ ··· ≤ R_{iml}. Since arcsin√(·) is a monotonic function, the same restriction applies to the transformed response variables Z_{ijl} (j = 1, ..., m). This immediately raises the issue of the adequacy of the multivariate normal approximation for the distribution of Z_{il}. While any parametric analysis will involve some distributional approximation, the use of the multivariate normal seems particularly questionable in the application presented by Carter and Hubert[3], where n_{il} = 10 (and m = 3). Here a trivariate distribution restricted as described above, and with each univariate marginal distribution supported on the set of points { arcsin√(r/10) : r = 0, 1, ..., 10 }, is being approximated by a trivariate normal distribution; the adequacy of this approximation would appear doubtful, to say the least.

The second potential difficulty involves the assumption concerning the covariance structure; the Z_{il} are assumed to be independent and normally distributed with unknown covariance matrix Σ_{il} = Σ, where Σ is of completely general structure and does not vary with dose or replication. It is unclear how such an assumption can be justified; the nature of the response variables provides some information concerning the approximate structure of Σ_{il}, and clearly suggests that Σ_{il} should be allowed to vary from dose to dose within each replication. To see this, consider the data for an individual subject. After the dose is administered to the subject at time t_0 = 0, the subject remains in the no-response category until his (random) time of response T, when he becomes a member of the response category, where he remains from time T onward. Each subject is observed at the same m time points t_1, ..., t_m, thereby identifying the interval in which he responded. Suppose the response times for the subjects in dose group i and replication l are assumed to be independent and identically distributed according to the response time distribution F_{il}:

$$ F_{il}(t) = P(T_{il} \le t), $$

where T_{il} represents the response time for any one of these subjects. Let

$$ p_{ijl} = P(t_{j-1} < T_{il} \le t_j) = F_{il}(t_j) - F_{il}(t_{j-1}) $$

for j = 1, ..., m + 1, with t_{m+1} = +∞, and note that Σ_{v=1}^{j} p_{ivl} = F_{il}(t_j). If U_{ijl} denotes the number of subjects at dose level i and replication l who respond between t_{j-1} and t_j, then the random vector U_{il} = (U_{i1l}, ..., U_{i,m+1,l})ᵀ is multinomially distributed with index n_{il} and cell probabilities p_{ijl} (j = 1, ..., m + 1), for each i = 1, ..., k and l = 1, ..., L. The data under consideration, r_{ijl}, is an observation of the cumulative count R_{ijl} = Σ_{v=1}^{j} U_{ivl}. It follows that

$$ \mathrm{Cov}(R_{ijl}, R_{ij'l}) = n_{il}\left(\sum_{v=1}^{j} p_{ivl}\right)\left(1 - \sum_{v=1}^{j'} p_{ivl}\right) = n_{il}\,F_{il}(t_j)\left[1 - F_{il}(t_{j'})\right], \qquad \text{for } j' \ge j. $$

As the first step of their approach, Carter and Hubert[3] perform what they refer to as a variance-stabilizing transformation: Z_{ijl} = arcsin√(R_{ijl}/n_{il}). Under the above assumptions, the delta method yields the approximate covariance structure of Z_{il} as

$$ \mathrm{Cov}(Z_{ijl}, Z_{ij'l}) = \frac{1}{4 n_{il}}\left[ \frac{\left(\sum_{v=1}^{j} p_{ivl}\right)\left(1 - \sum_{v=1}^{j'} p_{ivl}\right)}{\left(\sum_{v=1}^{j'} p_{ivl}\right)\left(1 - \sum_{v=1}^{j} p_{ivl}\right)} \right]^{1/2} = \frac{1}{4 n_{il}}\left[ \frac{F_{il}(t_j)\left[1 - F_{il}(t_{j'})\right]}{F_{il}(t_{j'})\left[1 - F_{il}(t_j)\right]} \right]^{1/2}, \qquad \text{for } j' \ge j. $$

When j' = j this becomes V(Z_{ijl}) = 1/(4 n_{il}). It can be seen that at a given dose and replication,

$$ \Sigma_{il} = \left( \mathrm{Cov}(Z_{ijl}, Z_{ij'l}) : j = 1, \ldots, m;\; j' = 1, \ldots, m \right)_{m\times m} $$
Also £,7 depends upon both = (p.u, ...,p,-,m+i,t) and n.j, so even in the case where the n,j are all equal, E.j will vary across doses and replications if, as is anticipated, the pit do. Carter and Hubert[3] emphasize that their approach is intended to dif-ferentiate between sampling units and experimental units. In the context of their application, they want to allow for the possibility that there may be a cause of mortality other than the toxic copper substance that affects all r of the fish in a tank simultaneously. In this case, there are two sources of variation in the mortality counts corresponding to the different time inter-vals; multinomial variation affecting each fish and tank variation affecting all of the fish in a tank. Their assumption of a common unknown covariance matrix S for the vectors Z_a of transformed cumulative counts appears to be an attempt to account for the unknown tank variation. While the above discussion does not differentiate between sampling and experimental units, this additional source of variation could easily and ex-plicitly be incorporated by a slight extension of the model. Possibly the simplest way to do so would be to assume that the vector of cell probabili-ties pit corresponding to the Ith dose group in replication / is itself randomly distributed across tanks. If this distribution is taken to be the Dirichlet 27 distribution denned by the density m + l w i r , - y i - l where w > 0, Tr.y/ > 0 and ]CyLV ~ 1> * n e n * n e unconditional distribu-tion of U^i, the vector of mortality counts, is the Dirichlet-multinomial; see Moismann[ll]. Unconditionally, we have E(£tf) = na**, and V(I^i) = n i 7 C« ^diagfo-,) - T T , 7 ^ where C,/ = (n,j + w)/(l + w). Thus the covariance matrix is a constant, C,i, times the corresponding multinomial covariance matrix based on ?rl7. It follows that except for multiplication by the variance inflation factor Cu, the covariance structure of Zit is of the same form as described above. There are alternate methods of incorporating this additional source of variation, but explicit assumptions leading to the result that the Z_it have common (completely arbitrary) unknown covariance matrix £ seem difficult to identify. It appears this assumption has been made as a matter of conve-nience so that the observed data exhibits the probabilistic structure required for the application of the Potthoff-Roy growth-curve methodology. An alternate, more direct approach, which would not require any as-sumptions which are clearly incorrect at the outset, would involve an anal-ysis based on the multinomial structure of the vector of mortality counts 28 • together with a parametric specification of the underlying response time distribution Fu. Such analyses do not incorporate any tank variation which may be present; whether such a generalization is required can be subse-quently examined via the use of the Dirichlet-multinomial model in place of the multinomial model. Various aspects of statistical inference for the Dirichlet-multinomial model, which are relevant to such an undertaking, are considered in Brier[2] and Wilson[15]. An alternate analysis along these lines, of the data set presented by Carter and Hubert[3], will be pursued in the remainder of this chapter. 2. 
2. Proposed Model

To motivate a more direct method for analyzing this type of problem, consider the data corresponding to a single dose level within a particular replication: n subjects are treated at dose level d and the cumulative numbers of responses r_j are observed at times t_j, j = 1, ..., m. Assume the subjects respond independently with P(T ≤ t) = F(t), where T is the time to response. If u_j equals the number of responses between t_{j−1} and t_j (u_j = r_j − r_{j−1}), then

$$ f(u_1, \ldots, u_{m+1}) = \frac{n!}{\prod_{j=1}^{m+1} u_j!}\,\prod_{j=1}^{m+1}\left[ F(t_j) - F(t_{j-1}) \right]^{u_j}, $$

where t_0 = 0, t_{m+1} = +∞, and Σ_{j=1}^{m+1} u_j = n.

Randomization of subjects to dose levels reduces the general problem to k independent experiments of this type within each of L independent replications. The dose and replication effects could then be incorporated by allowing F(t) to be different for different dose levels and replications. This is usually done by keeping the form of F(t) fixed, and allowing the parameters in F(t) to depend upon dose and replication. In general, let F(t) be of the form F(t; θ_l), where θ_l is a vector of parameters for each l; the dependence on dose can then be incorporated by letting the components of θ_l be functions of dose, F(t | d, l) = F(t; θ_l(d)).

To motivate the choice of F and θ_l(d), some preliminary data analysis was performed on the experimental data presented by Carter and Hubert[3]. Initially an exponential F(t) = 1 − exp(−λt) was fit at each dose level within each replication, and a reasonable fit was attained (X² = 25.02, G² = 22.81, df = 28). The relationship between the λ̂'s and the dose levels was then examined within each replication. The plots of λ̂ vs dose, λ̂ vs log(dose), log(λ̂) vs dose, and log(λ̂) vs log(dose) all appeared reasonably linear, with those involving log(dose) appearing most nearly so.

This preliminary analysis suggested the exponential distribution might provide an adequate model for the time to response. A generalization is provided by the Weibull distribution, which has been widely used in time-to-response problems. Thus the simple model proposed is

$$ F(t) = 1 - \exp(-\lambda t^{\gamma}), $$

where λ > 0 and γ > 0, with the dependence on dose and replication incorporated as

$$ F(t \mid d, l) = 1 - \exp\!\left( -\lambda_l(d)\, t^{\gamma_l(d)} \right), $$

where λ_l(d) = exp[α_l + β_l log(d)] and γ_l(d) = γ_l, with −∞ < α_l < +∞, β_l > 0, and γ_l > 0. Viewed as a function of x = log_e(d) for fixed t and l, this F(t | x, l) has the form of a cumulative distribution function, as is usually desired for a dose response problem. This model has a 3L component vector of parameters θ = (θ_l : l = 1, ..., L)ᵀ, where θ_l = (α_l, β_l, γ_l)ᵀ, and, except for an additive constant which does not depend upon θ, the log-likelihood function is

$$ L(\theta) = \sum_{i,j,l} u_{ijl}\log\left[ F(t_j \mid x_i, l) - F(t_{j-1} \mid x_i, l) \right], $$

where u_{ijl} is the observed number of responses between t_{j−1} and t_j at dose level i in replication l, and i = 1, ..., k, j = 1, ..., m + 1, l = 1, ..., L.

3. Estimation and Confidence Intervals for the ED50

The maximum likelihood estimator θ̂ for θ can be obtained using an iterative method such as the Newton-Raphson method. Provided n_l = Σ_{i=1}^{k} n_{il} is reasonably large for each l, I⁻¹(θ), the inverse of the (total) information matrix for θ, will be a good estimate of the covariance matrix for θ̂. Although I(θ) depends upon θ, which is unknown, I(θ̂) can be used as an estimate of I(θ). The observed information matrix can also be used to estimate I(θ), and is more convenient to use since it is evaluated in the course of the Newton-Raphson iteration for θ̂.
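As an illustration of this fitting step, the sketch below (Python with NumPy/SciPy; my code, not part of the thesis) maximizes the multinomial log-likelihood for the pooled 3 parameter version of the model (the "Model 4" of Section 5, with common α, β, γ across replications), using the observed cumulative counts reported in Table VII; a general-purpose bounded optimizer stands in for the Newton-Raphson iteration described here.

```python
import numpy as np
from scipy.optimize import minimize

dose = np.array([0.10, 0.20, 0.30, 0.50, 1.00, 2.00, 2.50])
x = np.log(dose)                        # x = log_e(dose)
t = np.array([2.0, 3.0, 4.0])           # observation times (days)
n_il = 10                               # fish per tank

# Observed cumulative mortality counts r[l, j, i] (Table VII), replications l = 1, 2
r = np.array([[[0, 1, 1, 2, 3, 3, 4],
               [1, 1, 2, 3, 5, 6, 7],
               [1, 1, 3, 4, 8, 7, 9]],
              [[0, 1, 1, 2, 3, 4, 4],
               [1, 1, 2, 3, 5, 6, 7],
               [1, 1, 2, 4, 7, 8, 8]]], dtype=float)

# Interval counts u_j = r_j - r_{j-1}, plus the final "no response by t_m" cell
u = np.concatenate([np.diff(r, axis=1, prepend=0.0), n_il - r[:, -1:, :]], axis=1)

def negloglik(theta):
    a, b, g = theta
    lam = np.exp(a + b * x)                         # lambda(d) = exp(alpha + beta * log d)
    F = 1.0 - np.exp(-lam * t[:, None] ** g)        # F(t_j | x_i), shape (m, k)
    cell = np.concatenate([np.diff(F, axis=0, prepend=0.0), 1.0 - F[-1:, :]], axis=0)
    return -np.sum(u * np.log(cell))                # same cell probabilities for both replications

fit = minimize(negloglik, x0=np.array([-2.0, 1.0, 1.5]), method="L-BFGS-B",
               bounds=[(None, None), (1e-6, None), (1e-6, None)])
print(fit.x)   # should be close to (alpha, beta, gamma) = (-2.398, 0.889, 1.630) of Section 5
```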
Then the asymptotic normality of maximum likelihood estimators can be used to carry out inference for any of the parameters of interest. Of particular interest in the present context is inference for the ED50 at a given time point, within a particular replication if replication effects are present. Corresponding to the usual situation for quantal response problems, for fixed t within replication l, the log ED50 is the point x_{0l}(t) which satisfies

$$ F(t \mid x_{0l}(t), l) = \tfrac{1}{2}. $$

Under the presumed model, this becomes

$$ x_{0l}(t) = \frac{1}{\beta_l}\left[ \log\log(2) - \gamma_l\log(t) - \alpha_l \right], $$

and the maximum likelihood estimator of x_{0l}(t) is

$$ \hat{x}_{0l}(t) = \frac{1}{\hat{\beta}_l}\left[ \log\log(2) - \hat{\gamma}_l\log(t) - \hat{\alpha}_l \right]. $$

Confidence intervals for x_{0l} (the dependence on t is suppressed from here on) can be obtained using one of the following two methods. Method 1 assumes that

$$ \hat{x}_{0l} \sim N\!\left( x_{0l},\, V(\hat{x}_{0l}) \right), $$

where the variance of x̂_{0l} is approximated using the delta method. This yields

$$ V(\hat{x}_{0l}) = x'_{0l}(\theta)^T\, I^{-1}(\theta)\, x'_{0l}(\theta), $$

where x'_{0l}(θ) is the vector of derivatives of x_{0l} with respect to θ. Then I⁻¹(θ̂) and x'_{0l}(θ̂) are used in place of I⁻¹(θ) and x'_{0l}(θ), and confidence intervals for x_{0l} at a given time within a particular replication can be obtained.

Method 2 (Fieller intervals) assumes that

$$ \left[ \tilde{a}_l - x_{0l}\hat{\beta}_l \right] \sim N\!\left( 0,\, V(\tilde{a}_l - x_{0l}\hat{\beta}_l) \right), $$

where ã_l = log log(2) − γ̂_l log(t) − α̂_l. Forming a probability statement about ã_l − x_{0l}β̂_l and solving the resulting quadratic in x_{0l} yields a confidence interval for x_{0l}. These confidence intervals for x_{0l}(t), the log ED50, obtained by either method, can then be transformed to obtain confidence intervals for the ED50 at a given t for a particular l.

4. Goodness-Of-Fit and Model Simplification

The observed data have a multinomial likelihood. Thus, under the model assumptions, expected cell counts estimated via maximum likelihood lead directly to goodness-of-fit tests. Letting

$$ \hat{u}_{ijl} = n_{il}\left[ F(t_j \mid x_i, \hat{\theta}_l) - F(t_{j-1} \mid x_i, \hat{\theta}_l) \right], $$

and providing n is large, we have

$$ X^2 = \sum_{i,j,l} \frac{(u_{ijl} - \hat{u}_{ijl})^2}{\hat{u}_{ijl}} \qquad \text{and} \qquad G^2 = 2\sum_{i,j,l} u_{ijl}\log\!\left(\frac{u_{ijl}}{\hat{u}_{ijl}}\right), $$

each asymptotically distributed as χ²_{L(km−3)}, where i = 1, ..., k, j = 1, ..., m + 1, l = 1, ..., L. Under the model assumptions the limiting chi-square distributions for both X² and G² will be good approximations provided not too many of the expected cell counts are small.

The G² statistic also gives a method for testing whether eliminating parameters, or representing a group of parameters by a single one, significantly affects the fit of the model. An example would be to compare the model with parameters θ_1 = (α_l, β_l, γ_l : l = 1, ..., L)ᵀ to the reduced model with parameters θ_2 = (α_l, β_l, γ : l = 1, ..., L)ᵀ, thus setting γ_l = γ for l = 1, ..., L. In general, if Model 2 is a reduced version of Model 1, then

$$ G^2_2 - G^2_1 \;\text{is asymptotically}\; \chi^2_{[df_{\mathrm{Model\,2}} - df_{\mathrm{Model\,1}}]}. $$

In this example,

$$ df_{\mathrm{Model\,2}} - df_{\mathrm{Model\,1}} = [Lkm - 2L - 1] - [Lkm - 3L] = L - 1. $$

This yields a method for determining the most parsimonious model.
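The goodness-of-fit and nested-model computations are equally short. A minimal sketch (Python; my code, not from the thesis, with a function name of my choosing) assuming arrays of observed and fitted cell counts are available:

```python
import numpy as np
from scipy.stats import chi2

def gof(u_obs, u_fit, df):
    """Pearson X^2 and deviance G^2 for fitted multinomial cell counts, with
    p-values from a chi-square distribution on `df` degrees of freedom
    (df = L*k*m minus the number of fitted parameters for the models above)."""
    u_obs, u_fit = np.asarray(u_obs, float), np.asarray(u_fit, float)
    X2 = float(np.sum((u_obs - u_fit) ** 2 / u_fit))
    ratio = np.where(u_obs > 0, u_obs, 1.0) / u_fit      # zero cells contribute 0 to G^2
    G2 = float(2.0 * np.sum(u_obs * np.log(ratio)))
    return X2, G2, chi2.sf(X2, df), chi2.sf(G2, df)

# A reduced model (2) nested in a fuller model (1) is tested by the deviance drop:
# delta_G2 = G2_model2 - G2_model1, referred to a chi-square distribution on
# df_model2 - df_model1 degrees of freedom, e.g. chi2.sf(delta_G2, 1).
```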
Table VII gives the cumulative counts reported in the experiment, and the resulting estimates using the above model (shown in parentheses). The proposed model was fit to this data with 5X = (ax, /?i, 71, c*2,P2,72) and seemed to give a very good fit (X 2 = 14.49, G2 = 18.01, df = 36). Nested models, with parameters combined as in the previous example, were tested to see if the combining of the parameters significantly affected the fit, using G2. It seemed reasonable to reduce the model first to £ 2 = (ari,/?!, 0:2,#2.7) by combining 71 and 72 into one parameter. If this re-duction did not significantly affect the fit, it would indicate that the rela-tionship between time and the probability of response is the same on the 2 replications. Further reducing the model to £ 3 = (or 1,a 2,^, 7) would simi-larily examine the relationship between dose and the probability of response over the 2 replications. Finally reducing the model to 9^ = (a,/?,7) would indicate whether there are any replication effects. The parameters being es-timated, the X2 and G2 values for each model, and the difference of the G2,s between nested models, for all of the models described above, are summa-35 rized in Table VIII. Table VIII also examines the model reductions obtained when the 7 parameter is fixed at 7 = 1, both at the 92 = (cn,/?i,c*2>#2>7) stage and at the 94 = (a,/3,7) stage, to reduce the model from Weibull to exponential. The most parsimonious model was found to be the 3 param-eter model with 9^ = (a,/3,7), which suggests no replication effect. Table VIII also shows the significance of the 7 parameter, and thus the Weibull generalization.* The fitted cumulative mortality counts under Model 4 are given by Rijl = " i l 1 - exp(-A(ii)<i) with A(z,) = exp(d + f3xi), n« = 10, a = -2.398, 0 = 0-889, and 7 = 1.630. A A The estimated covariance matrix for 9 = (a, {3,7) is (0.1103 -0.0082 -0.0764 \ -0.0082 0.0188 0.0053 . -0.0764 0.0053 0.0620 / Table IX gives the observed cumulative counts and the cumulative counts estimated using Model 4 (shown in parentheses) which agree closely with the observed counts. Figure 3 shows the fitted dose response curves under Model 4 at various fixed time points, and illustrates the general shape of the fitted model. Figures 4, 5, and 6 show the fitted dose response curves * It should be noted that, due to the small observed cell counts, the p-values should not be considered as exact probabilities; nevertheless, the summary provided in Table VIII is very clear. 36 at t = 2.0, 3.0, and 4.0 for both Carter and Hubert's model and Model 4, and illustrate the difference in shape of the two models. Comparing the fits as summarized in Tables 7 and 9 is difficult, but neither model seems to clearly fit the data better. One method for comparing the fit of the 2 models is to look at X2 and G2 calculated from the expected cell probabilities for each model. Though X2 and G2 for Carter and Hubert's model cannot be compared to a chi-squared distribution, they can be used as a crude measure of the extent of departure from the observed data. For Carter and Hubert's model X2 = 17.21 and G2 = 18.99, while for Model 4 the analogous values are X2 = 14.76 and G2 = 18.42. Not only does Model 4 yield smaller X2 and G2 values, but the model has only 3 parameters as compared to Carter and Hubert's 6 parameters; overall Model 4 seems to fit the data somewhat better than Carter and Hubert's model. The ultimate objective of these analyses is inference for the ED50. 
The ultimate objective of these analyses is inference for the ED50. Table X gives point estimates and 95% confidence intervals for the ED50 at some specific time points under Carter and Hubert's fitted model and our fitted Model 4; the intervals are obtained using Fieller's method. Figure 7 shows pointwise confidence bands for the log ED50 over time for Model 4 using the two methods mentioned, the delta method and Fieller's method. Figure 8 shows confidence bands for the ED50 over time for both models, using Fieller's method. It can be seen that the confidence intervals obtained using Model 4 are substantially wider than those obtained by Carter and Hubert. Model 4 seems a reasonable model and fits the data very well. There is no reason to believe that the confidence intervals obtained are grossly incorrect in their coverage probabilities; yet if these coverage probabilities are accurate, then Carter and Hubert's model is badly overestimating the accuracy of its estimate of the ED50 at a fixed time point.

In general it would not seem advisable to use the model proposed by Carter and Hubert[3] in the example they presented. There are some major doubts as to the adequacy of the trivariate normal approximation being used in this situation. Also, the confidence intervals obtained using their model in this example appear to be misleading. The approach illustrated in this chapter is a simpler and more direct approach to a problem of this type.

Table I. 3pt Designs, Symmetric About ED50

P(x) = 0.05
  λ      (n/σ²)V(μ̂)   Efficiency
  0.0      4.000        1.00
  0.1      4.773        0.84
  0.2      5.917        0.68
  0.3      7.781        0.51
  0.4     11.362        0.35
  0.5     21.044        0.19

P(x) = 0.1
  λ      (n/σ²)V(μ̂)   Efficiency
  0.0      4.000        1.00
  0.1      4.587        0.87
  0.2      5.376        0.74
  0.3      6.493        0.62
  0.4      8.196        0.48
  0.5     11.109        0.36

P(x) = 0.2
  λ      (n/σ²)V(μ̂)   Efficiency
  0.0      4.000        1.00
  0.1      4.310        0.93
  0.2      4.673        0.86
  0.3      5.102        0.78
  0.4      5.617        0.71
  0.5      6.249        0.64

P(x) = 0.3
  λ      (n/σ²)V(μ̂)   Efficiency
  0.0      4.000        1.00
  0.1      4.132        0.97
  0.2      4.273        0.93
  0.3      4.424        0.90
  0.4      4.587        0.87
  0.5      4.761        0.84

Table II. 5pt Designs, Symmetric About ED50

P(x_1) = 0.1, P(x_2) = 0.2
  λ_1    λ_2    (n/σ²)V(μ̂)   Efficiency
  0.05   0.20     5.05         0.79
  0.10   0.20     5.49         0.73
  0.15   0.20     6.02         0.66
  0.20   0.20     6.67         0.60

P(x_1) = 0.1, P(x_2) = 0.3
  λ_1    λ_2    (n/σ²)V(μ̂)   Efficiency
  0.05   0.20     4.59         0.87
  0.10   0.20     4.95         0.81
  0.15   0.20     5.38         0.74
  0.20   0.20     5.88         0.68

P(x_1) = 0.2, P(x_2) = 0.3
  λ_1    λ_2    (n/σ²)V(μ̂)   Efficiency
  0.05   0.20     4.44         0.90
  0.10   0.20     4.63         0.86
  0.15   0.20     4.83         0.83
  0.20   0.20     5.05         0.79

P(x_1) = 0.2, P(x_2) = 0.4
  λ_1    λ_2    (n/σ²)V(μ̂)   Efficiency
  0.05   0.20     4.22         0.95
  0.10   0.20     4.39         0.91
  0.15   0.20     4.57         0.88
  0.20   0.20     4.76         0.84

Table III. 3pt Design, P(x) = 0.2, Incorrect Value Of ED50

P(x_0) = 0.3
  λ      (n/σ²)V(μ̂)   Efficiency
  0.1     14.37        0.28
  0.2      8.79        0.46
  0.3      7.15        0.56
  0.4      6.55        0.61

P(x_0) = 0.4
  λ      (n/σ²)V(μ̂)   Efficiency
  0.1      6.56        0.61
  0.2      5.57        0.72
  0.3      5.53        0.72
  0.4      5.79        0.69

P(x_0) = 0.42
  λ      (n/σ²)V(μ̂)   Efficiency
  0.1      5.74        0.70
  0.2      5.24        0.76
  0.3      5.37        0.75
  0.4      5.72        0.70

P(x_0) = 0.44
  λ      (n/σ²)V(μ̂)   Efficiency
  0.1      5.11        0.78
  0.2      4.99        0.80
  0.3      5.25        0.76
  0.4      5.68        0.70

P(x_0) = 0.46
  λ      (n/σ²)V(μ̂)   Efficiency
  0.1      4.66        0.86
  0.2      4.81        0.83
  0.3      5.17        0.77
  0.4      5.64        0.71

Table IV. 5pt Design, P(x_1) = 0.2, P(x_2) = 0.4, Incorrect Value Of ED50

P(x_0) = 0.3
  λ_1    λ_2    (n/σ²)V(μ̂)   Efficiency
  0.05   0.20    18.73        0.21
  0.10   0.20    12.25        0.33
  0.15   0.20     9.61        0.41
  0.20   0.20     8.23        0.49

P(x_0) = 0.4
  λ_1    λ_2    (n/σ²)V(μ̂)   Efficiency
  0.05   0.20     7.36        0.54
  0.10   0.20     6.09        0.66
  0.15   0.20     5.65        0.71
  0.20   0.20     5.50        0.73

P(x_0) = 0.42
  λ_1    λ_2    (n/σ²)V(μ̂)   Efficiency
  0.05   0.20     6.19        0.65
  0.10   0.20     5.46        0.73
  0.15   0.20     5.25        0.76
  0.20   0.20     5.22        0.77

P(x_0) = 0.44
  λ_1    λ_2    (n/σ²)V(μ̂)   Efficiency
  0.05   0.20     5.32        0.75
  0.10   0.20     4.98        0.80
  0.15   0.20     4.95        0.81
  0.20   0.20     5.02        0.80
P(x_0) = 0.46
  λ_1    λ_2    (n/σ²)V(μ̂)   Efficiency
  0.05   0.20     4.70        0.85
  0.10   0.20     4.65        0.86
  0.15   0.20     4.73        0.85
  0.20   0.20     4.87        0.82

Table V. Regional Comparison Of Study Design To 5pt Design

  Question   Region          μ̂        σ̂       V(μ̂), study   V(μ̂), 5pt
  2          Victoria        60.50   -30.72      45.43          5.00
             Port Alberni   138.14   -59.06      49.56          8.38
             Campbell R.     79.72   -68.89      94.87         37.72
             Sechelt         60.94   -26.85      13.47          8.62
  4          Victoria        21.11    -8.19       0.59          0.34
             Port Alberni    73.18   -24.07       8.18          1.25
             Campbell R.     35.60   -13.63       3.24          1.21
             Sechelt         41.34   -13.05       7.40          1.88

Table VI. Comparison Of 5pt Equal Allocation Design To Study Design For Various Values Of μ

  μ      V(μ̂), 5pt   V(μ̂), study design
  10       8.02          8.12
  20       4.92          4.85
  30       4.01          3.86
  40       3.80          3.68
  50       4.01          4.05
  60       4.92          5.32
  70       8.02          9.23

Table VII. Observed And Expected Cumulative Mortality Counts; Carter And Hubert

  Block   Time                          Concentration (µg/l)
  #       (days)   0.10     0.20     0.30     0.50     1.00     2.00     2.50
  1       2        0(0.49)  1(0.98)  1(1.33)  2(1.84)  3(2.62)  3(3.49)  4(3.79)
          3        1(0.85)  1(1.79)  2(2.47)  3(3.42)  5(4.81)  6(6.21)  7(6.65)
          4        1(1.06)  1(2.28)  3(3.13)  4(4.30)  8(5.94)  7(7.48)  9(7.93)
  2       2        0(0.60)  1(1.13)  1(1.50)  2(2.03)  3(2.84)  4(3.73)  4(4.03)
          3        1(0.83)  1(1.77)  2(2.44)  3(3.39)  5(4.77)  6(6.18)  7(6.61)
          4        1(0.96)  1(2.13)  2(2.97)  4(4.13)  7(5.77)  8(7.33)  8(7.79)

Table VIII. Testing Nested Models

  Model #   Parameters                X²      G²      df    P       ΔG²    df   P
  1         (α_1,β_1,γ_1,α_2,β_2,γ_2) 14.49   18.01   36    large   —      —    —
  2         (α_1,β_1,α_2,β_2,γ)       14.77   18.33   37    large   0.32   1    >0.1
  2*        (α_1,β_1,α_2,β_2,1)       27.25   26.72   38    large   8.39   1    <0.01
  3         (α_1,α_2,β,γ)             14.75   18.37   38    large   0.04   1    >0.1
  4         (α,β,γ)                   14.76   18.42   39    large   0.05   1    >0.1
  4*        (α,β,1)                   27.29   26.81   40    large   8.39   1    <0.01

Table IX. Observed And Expected Cumulative Mortality Counts; Model 4

  Block   Time                          Concentration (µg/l)
  #       (days)   0.10     0.20     0.30     0.50     1.00     2.00     2.50
  1       2        0(0.36)  1(0.65)  1(0.92)  2(1.41)  3(2.45)  3(4.06)  4(4.70)
          3        1(0.68)  1(1.22)  2(1.70)  3(2.55)  5(4.20)  6(6.35)  7(7.08)
          4        1(1.06)  1(1.88)  3(2.58)  4(3.75)  8(5.81)  7(8.01)  9(8.60)
  2       2        0(0.36)  1(0.65)  1(0.92)  2(1.41)  3(2.45)  4(4.06)  4(4.70)
          3        1(0.68)  1(1.22)  2(1.70)  3(2.55)  5(4.20)  6(6.35)  7(7.08)
          4        1(1.06)  1(1.88)  2(2.58)  4(3.75)  7(5.81)  8(8.01)  8(8.60)

Table X. Point Estimates And Confidence Intervals For ED50

  Time    x̂_0 (Model 4)   95% CI          x̂_0 (C & H)   95% CI
  2.00       2.76          (1.82, 4.88)      5.60         (3.99, 8.76)
  2.50       1.83          (1.32, 2.77)      1.83         (1.61, 2.11)
  3.00       1.31          (0.98, 1.81)      1.02         (0.93, 1.38)
  3.50       0.99          (0.73, 1.32)      0.84         (0.68, 1.07)
  4.00       0.77          (0.54, 1.04)      0.70         (0.56, 0.91)
  6.00       0.37          (0.20, 0.57)      0.49         (0.37, 0.66)

[Figures 1 to 8 are not reproduced in this copy. Figure 1 illustrates the Elfving construction for the logit design locus (extreme points at z = 2.40 and z = -2.40); Figure 2 plots V(μ̂) against μ_0 for the 5 point design of Chapter I, Section 4; Figure 3 shows fitted dose response curves under Model 4 at fixed time points; Figures 4 to 6 compare the fitted dose response curves of Carter and Hubert's model and Model 4 at t = 2.0, 3.0 and 4.0; Figure 7 shows pointwise confidence bands for the log ED50 over time under Model 4 (delta and Fieller methods); Figure 8 shows Fieller confidence bands for the ED50 over time under both models.]

References

[1] Berkson, J. (1953). "A Statistically Precise and Relatively Simple Method of Estimating the Bioassay With Quantal Response, Based on the Logistic Function". Journal of the American Statistical Association, 48, 565-599.
[2] Brier, S. S. (1980). "Analysis of Contingency Tables Under Cluster Sampling". Biometrika, 67, 591-596.
[3] Carter, E. M. & Hubert, J. J. (1984). "A Growth-Curve Model Approach to Multivariate Quantal Bioassay". Biometrics, 40, 699-706.
[4] Chapman, D. G. & Nam, J. (1968). "Asymptotic Power of Chi Square Tests for Linear Trends in Proportions". Biometrics, 24, 315-328. Corrections: Biometrics, 25, 777.
[5] Chernoff, H. (1972). Sequential Analysis and Optimal Designs. Philadelphia: Society for Industrial and Applied Mathematics.
[6] Elfving, G. (1952). "Optimum Allocation in Linear Regression Theory". Annals of Mathematical Statistics, 23, 255-262.
[7] Finney, D. J. (1971). Probit Analysis (3rd ed.). Cambridge: Cambridge University Press.
[8] Kalish, L. A. & Rosenberger, J. L. (1978). "Optimal Designs for the Estimation of the Logistic Function". Technical Report 33, Department of Statistics, Pennsylvania State University.
[9] Kuo, L. (1983). "Bayesian Bioassay Design". Annals of Statistics, 11, 886-895.
[10] Mitra, S. K. (1958). "On the Limiting Power Function of the Frequency Chi-Square Test". Annals of Mathematical Statistics, 29, 1221-1233.
[11] Mosimann, J. E. (1962). "On the Compound Multinomial Distribution, the Multivariate β-Distribution, and Correlation Among Proportions". Biometrika, 49, 65-82.
[12] Potthoff, R. F. & Roy, S. N. (1964). "A Generalized Multivariate Analysis of Variance Model Useful Especially for Growth Curve Problems". Biometrika, 51, 313-326.
[13] Smith, K. C., Savin, N. E., & Robertson, J. L. (1984). "A Monte Carlo Comparison of Maximum Likelihood and Minimum Chi Square Sampling Distributions in Logit Analysis". Biometrics, 40, 471-482.
[14] Wetherill, G. B. (1962). "Sequential Estimation of Quantal Response Curves". Journal of the Royal Statistical Society, Ser. B, 25, 1-48.
[15] Wilson, J. R. (1986). "Approximate Distribution and Test of Fit for the Clustering Effect in the Dirichlet Multinomial Model". Communications in Statistics: Theory and Methods, 15, 1235-1249.
