UBC Theses and Dissertations

UBC Theses Logo

UBC Theses and Dissertations

Parameter estimation in some multivariate compound distributions Smith, George E. J. 1965

You don't seem to have a PDF reader installed, try download the pdf

Item Metadata

Download

Media
[if-you-see-this-DO-NOT-CLICK]
UBC_1966_A8 S55.pdf [ 5.07MB ]
[if-you-see-this-DO-NOT-CLICK]
Metadata
JSON: 1.0080602.json
JSON-LD: 1.0080602+ld.json
RDF/XML (Pretty): 1.0080602.xml
RDF/JSON: 1.0080602+rdf.json
Turtle: 1.0080602+rdf-turtle.txt
N-Triples: 1.0080602+rdf-ntriples.txt
Original Record: 1.0080602 +original-record.json
Full Text
1.0080602.txt
Citation
1.0080602.ris

Full Text

PARAMETER ESTIMATION IN SOME MULTIVARIATE COMPOUND DISTRIBUTIONS GEORGE E. J . SMITH B . S c , U n i v e r s i t y of B r i t i s h Columbia, 1962 A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ARTS i n the Department of Mathematics We accept t h i s t h e s i s as conforming t o the req u i r e d standard THE UNIVERSITY OF BRITISH COLUMBIA August, 1965 In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that per mission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my Department or by his representatives* It is understood that copying or publi cation of this thesis for financial gain shall not be allowed without my written permission. Department of ^J^til^Ag^U^^> The University of British Columbia Vancouver 8, Canada Date i i ABSTRACT During the past three decades or so there has been much work done concerning contagious p r o b a b i l i t y d i s t r i b u t i o n s i n an attempt t o e x p l a i n the behavior of c e r t a i n types of b i o l o g i c a l p o p u l a t i o n s . The d i s t r i b u t i o n s most widely discussed have been the P o i s s o n - b i n o m i a l , the Poisson P a s c a l or Poisson-negative binomial,' and the Poisson-Poisson or Neyman Type A. Many ge n e r a l  i z a t i o n s of the above d i s t r i b u t i o n s have a l s o been discussed. The purpose of t h i s work i s t o d i s c u s s the m u l t i v a r i a t e analogues of the above three d i s t r i b u t i o n s , i . e . the Po i s s o n - m u l t i n o m i a l , Poisson-negative m u l t i n o m i a l , and P o l s s o n - m u l t i  v a r i a t e P o i s s o n , r e s p e c t i v e l y . I n chapter one the f i r s t of these d i s t r i b u t i o n s i s discussed. I n i t i a l l y a b i o l o g i c a l model i s suggested which leads us t o a p r o b a b i l i t y generating function. Prom t h i s a r e c u r s i o n formula f o r the p r o b a b i l i t i e s i s found. Parameter e s t i m a t i o n by the methods of moments and maximum l i k e l i h o o d i s discussed i n some d e t a i l and an approximation f o r the asymptotic e f f i c i e n c y of the former method i s found. The l a t t e r method i s a s y m p t o t i c a l l y e f f i c i e n t . F i n a l l y sample zero and u n i t sample frequency e s t i  mators are b r i e f l y d i s c u s s e d . I n chapter two, e x a c t l y the same procedure i s fol l o w e d f o r the Poisson-negative m u l t i n o m i a l d i s t r i b u t i o n . Many clo s e s i m i l a r i t i e s are obvious between the two d i s t r i b u t i o n s . i i i The l a s t chapter i s devoted t o a p a r t i c u l a r common l i m i t i n g case of the f i r s t two d i s t r i b u t i o n s . T h i s i s the P o l s s o n - m u l t i  v a r i a t e Poisson. I n t h i s case the d e s i r e d r e s u l t s are obtained by c a r e f u l l y c o n s i d e r i n g appropriate l i m i t s i n e i t h e r of the previous two cases. i v TABLE OF CONTENTS page INTRODUCTION 1 CHAPTER I The Poisson M u l t i n o m i a l D i s t r i b u t i o n 1-1. A B i o l o g i c a l Model 2 1-2. P r o b a b i l i t y Generating Function and Recursion Formula f o r P r o b a b i l i t i e s . . 4 1-3. E s t i m a t i o n of Parameters by the Method of Moments 6 1-4. Maximum L i k e l i h o o d Estimators . . 10 1-5- Covariance M a t r i x of Maximum L i k e l i h o o d E s t i mators A. Method of C a l c u l a t i o n . . . . 16 B. C a l c u l a t i o n of the elements of J . . 2 2 1-6. E f f i c i e n c y of Method of Moments A. Method of C a l c u l a t i o n . . . . 26 B. C a l c u l a t i o n of 'det M' i n Terms of the Parameters 28 C. Determination of the Jacobian 'det (J' 32 1-7. Sample Zero Frequency and U n i t Sample Frequency Estimators A. Sample Zero Frequency and F i r s t Moments 3^ B. U n i t Sample Frequency E s t i m a t i o n 36 page CHAPTER I I The Poisson Negative-Multinomial  D i s t r i b u t i o n 2-1. A B i o l o g i c a l Model .. . . . . . 38 2-2. P r o b a b i l i t y Generating Function and Recursion Formula f o r P r o b a b i l i t i e s . . 39 2-3. E s t i m a t i o n of Parameters by the Method of Moments 4"3 2-4. Maximum L i k e l i h o o d Estimators . . . 46 2-5. Covariance M a t r i x of Maximum L i k e l i h o o d Estimators A. Method of C a l c u l a t i o n . . . . '.. 52 B. C a l c u l a t i o n of the Elements of J * . 57 2-6. E f f i c i e n c y of Method of Moments A. Method of C a l c u l a t i o n . . . . 6 1 B. C a l c u l a t i o n of 'det M' i n Terms of the Parameters . . . . . . 62 C. Determination of the Jacobian 'det (}*' 64 1-7. Sample Zero Frequency and U n i t Sample Frequency Estimators A. Sample Zero Frequency and F i r s t Moments 66 B. U n i t Sample Frequency Estimators 68 v l page CHAPTER I I I L i m i t i n g D i s t r i b u t i o n s of the Poiss o n - M u l t i n o m i a l and Po i s s o n - Negative M u l t i n o m i a l D i s t r i b u t i o n s 3-1. I n t r o d u c t i o n 70 3-2. The Poisson-Poisson D i s t r i b u t i o n . . 70 3-3- The Information M a t r i x 73 3-4. E f f i c i e n c y of Method of Moments . . 75 3-5. Sample Zero Frequency and F i r s t Moments 78 3-6. U n i t Sample Frequency E s t i m a t i o n . . 79 APPENDIX 1A Obtaining an E x p l i c i t E xpression f o r the P r o b a b i l i t y F u n c t i o n . . 82 APPENDIX IB C a l c u l a t i o n of Moments from F a c t o r i a l Cumulant Generating Function . . 86 APPENDIX 1C C a l c u l a t i o n of the E n t r i e s of the Information M a t r i x " J " . . . 92 APPENDIX ID C a l c u l a t i o n of the Inverse of the M a t r i x J/£ 95 APPENDIX I E A Lemma Showing fl = CfTM0) . . . 100 APPENDIX I F C a l c u l a t i o n of (1-6.12) and (2-6.5) 102 APPENDIX 1G C a l c u l a t i o n of (1-6.16) and (2-6.6) 103 v i i page APPENDIX 1H C a l c u l a t i o n of "det M" . . . . 105 APPENDIX 2A Obtaining the P r o b a b i l i t y Generating Function g*(s) 106 APPENDIX 2B C a l c u l a t i o n of the E n t r i e s of the Information M a t r i x " J * " . . . 107 ) APPENDIX 2C C a l c u l a t i o n of det (}* . . . . HQ APPENDIX 3A C a l c u l a t i o n of E f f i c i e n c y of Method of Moments f o r the P o i s s o n - M u l t i  v a r i a t e P o i s s o n D i s t r i b u t i o n . . 112 BIBLIOGRAPHY 115 i i v i i i ACKNOWLEDGMENT i The author would l i k e to express h i s sincere thanks to Dr. Stanley w. Nash whose many hours of guidance and constructive c r i t i c i s m made t h i s thesis possible. He would also l i k e to thank Dr. R.A. Restrepo f o r reading the f i n a l form, and Miss Carol Lambert f o r typing i t . F i n a l l y , he wishes to thank the National Research Council of Canada and the Un i v e r s i t y of B r i t i s h Columbia for t h e i r f i n a n c i a l assistance. 1 INTRODUCTION I n recent years there have been many attempts t o i n v e s t i g a t e s t a t i s t i c a l l y the behavior of va r i o u s i n s e c t and p l a n t p o p u l a t i o n s . I t has been found t h a t models u s i n g a simple normal, P o i s s o n , or bi n o m i a l d i s t r i b u t i o n are g e n e r a l l y inadequate. The negative b i n o m i a l d i s t r i b u t i o n has been used somewhat s u c c e s s f u l l y by F i s h e r [1Q41], Anscombe [1950], B l i s s [1953], and o t h e r s . More r e c e n t l y , compound o r 'contagious' d i s t r i b u t i o n s have been a p p l i e d t o b i o l o g i c a l models w i t h somewhat g r e a t e r success. The three which are most commonly used are ( 1 ) , the P o i s s o n - b i n o m i a l - McGuire, B r i n d l e y , and Bancroft [1957], S p r o t t [1958], Shumway and Gurland [ i 9 6 0 ] ; (2) the Poisson-Pascal (or Poisson- negative b i n o m i a l ) - K a t t i and Gurland [ 1 9 6 l ] | and (3) the Poi s s o n - Poisson (or Neyman Type A) - Neyman [1938], Douglas [1955]. Models based on these d i s t r i b u t i o n s , however, must assume homo geneity i n the c h a r a c t e r i s t i c s of the experimental p l o t . These might i n c l u d e s o i l type, amount of moisture present, type of ve g e t a t i o n p r e s e n t , e t c . Attempts t o r e l a x t h i s assumption or to g e n e r a l i z e i n other ways have been made by Neyman [1938], F e l l e r [19^3], Thomas [19^9], B e a l l and R e s c i a [1953], and Gurland [1958]. So f a r , o n l y the u n i v a r i a t e case has been considered f o r the above compound d i s t r i b u t i o n s and t h e i r g e n e r a l i z a t i o n s . The object of t h i s t r e a t i s e i s t o extend some of the r e s u l t s of the three compound d i s t r i b u t i o n s mentioned above t o the m u l t i v a r i a t e case. 2 CHAPTER I THE POISSON-MULTINOMIAL DISTRIBUTION 1-1. A B i o l o g i c a l Model I n t h i s model we assume there i s a l a r g e f i e l d A of area S^, and homogeneous throughout, where batches of i n s e c t eggs are l a i d . Homogeneity i m p l i e s t h a t the p r o b a b i l i t y d e n s i t y of the batches f o l l o w s a uniform d i s t r i b u t i o n over the f i e l d . We w i l l a l s o assume the p o s i t i o n of a p a r t i c u l a r batch i s independent of the p o s i t i o n s of the o t h e r s . T h i s seems t o be a reasonable assumption as l o n g as the average d i s t a n c e between the batches i s much g r e a t e r than t h e i r s i z e . Next, l e t us choose a r e g i o n B of A which i s f a r enough from the boundary of A so t h a t boundary e f f e c t s w i l l be n e g l i g a b l e . L e t us d i v i d e B i n t o many small p l o t s or quadrats, B^, Bg,' , a l l having the same shape and area Sg , which i s much smaller than S^. L e t Z be a random v a r i a b l e denoting the number of batches l a i d i n a p a r t i c u l a r quadrat. I f M i s the t o t a l number of eggs l a i d i n A, then p B o l z 1 a ° SA \ J Mz I f M i s assumed to be l a r g e , since Sg << S A, t h i s i s a p p r o x i - o mately the Poisson d i s t r i b u t i o n P(Z = z) = e" X - X where X = M z l . "Ho SA (1-1.1) Having outlined the breeding ground, l e t us consider the insects themselves. We suppose the insects coming from those eggs that' hatch can be divided i n t o n-1 classes on the basis of some dis t i n g u i s h i n g c h a r a c t e r i s t i c (e.g. colour, s i z e , type of insect, etc. ). For each integer i , 1 _< i jC n-1, l e t be a random variable denoting the number of insects i n a quadrat that are born i n t o the i c l a s s . We assume the p r o b a b i l i t y of an insect being born i n t o the i class i s and i s independent of what happens to any other i n s e c t . The p r o b a b i l i t y p n that an insect does not hatch i s therefore n-1 i = l n - 1 " I P i 1 " 1 - 2 ) We now make the a r b i t r a r y assumption that exactly N eggs are l a i d i n each batch. Assume a l l the eggs hatch about the same time, and sometime l a t e r we count the number of insects i n a quadrat, noting how many belong to each c l a s s . I f we assume the e f f e c t of insects migrating into and out of the quadrat i s negligable, we have that the conditional j o i n t density of X^, X 2, • • • > -^n-l """S ( $\z*z) = P^ (x = xjz - z) = (|z) T T P ± X I (1-1.3) 4 where we def i n e X - (X^, Xg, ) I J- * n i = l 0 otherwise n-1 where x n = Kz - ^  x^ i = l 0 (1-1.4) J Thus we have a m u l t i n o m i a l d i s t r i b u t i o n . We can combine (1-1.1) and ( l - l . J ) t o get the j o i n t d e n s i t y P(X=x) = P-(x) = £ P(X=x | Z=z) P(Z=z) z=o n .-X 1$ d2)TTp x i (1-1.5) z=o i = l 1-2. P r o b a b i l i t y Generating Function and Recursion Formula  f o r P r o b a b i l i t i e s G e n e r a l l y i t i s much e a s i e r t o c a l c u l a t e i n d i v i d u a l p r o b a b i l i t i e s u s i n g a r e c u r s i o n formula r a t h e r than the d e n s i t y f u n c t i o n . The f i r s t step i n t h i s d i r e c t i o n i s t o f i n d the prob a b i l i t y generating f u n c t i o n g ( s ) where t = ( s ^ s^9 — , s n - l ^ g(S) = B ( S l X l s / 2 ... s ^ * " " 1 ) z=o x n =o x_ -. =o i = l 5 We may assume the upper l i m i t of the sums t o he » "because of the d e f i n i t i o n of (|z) given i n (1-1.4). To s i m p l i f y the above expr e s s i o n , we simply note t h a t the n-1 summations on the r i g h t are the m u l t i n o m i a l expansion of n ^ Mz 1=1 Then » z n-1 z=o z ' i = l and the r e s u l t of summing t h i s i s n-1 g ( s ) = e x p | x [ ^ s ± v ± + p n ] N - x | (1-2.1) 1=1 j Prom (1-2.1) and the d e f i n i t i o n of a p r o b a b i l i t y generating f u n c t i o n , we can c a l c u l a t e the i n d i v i d u a l p r o b a b i l i t i e s by means of n-1 _ (1-2.2) 1=1 •4* V* where i s the (n-1 )-vector w i t h 1 i n the k p o s i t i o n and zeros elsewhere, and means the p a r t i a l d e r i v a t i v e w i t h respect t o . Using the L e i b n i t z r u l e f o r m u l t i p l e d i f f e r e n t i a t i o n , we o b t a i n the f o l l o w i n g r e s u l t s which are e x p l i c i t e l y c a l c u l a t e d i n Appendix 1A. 6 P^(x +e f e) = P x ( 0 ) = g(6) = exp [ X ( p n H - l ) l (1-2.3) 1=1 n-1 p iJTT (-) 1=1 1 ] P*(y) 1-3. E s t i m a t i o n of Parameters by the Method of Moments The f i r s t step i n t h i s method i s n e c e s s a r i l y t o f i n d the moments of the d i s t r i b u t i o n . T h i s can probably be best accomp l i s h e d i f we r e a l i z e t h a t g ( s ) i s the f a c t o r i a l moment generating f u n c t i o n i f we set s = (1, 1, , 1) s i i n s t e a d of s=0. This i s the f a c t o r i a l cumulant generating f u n c t i o n from which we can e a s i l y c a l c u l a t e the f a c t o r i a l cumulants. Then, u s i n g t a b l e s r e l a t i n g moments and cumulants such as the one i n David and Barton [1962], pages 142-3, we may f i n d the f a c t o r i a l moments and f i n a l l y the moments about the o r i g i n . Since the above mentioned t a b l e r e l a t e s cumulants and moments about the o r i g i n , i t a l s o r e l a t e s f a c t o r i a l moments and cumulanbs since both have the same r e l a t i o n s h i p s . (David and Barton [1962], page 51). Now set c ( s ) = l o g g ( s ) = X j [ Y s t P i + P n l W - l i = l (1-3.1) 7 Using the above procedure, we f i r s t d e f i n e G, = N(X+1) - 1 Grt = G, = N 2(X 2+3X+1) - 5N(X+1) + 2 N5(x3+6x+7X+l) - 6N2(X2+3X+1) . + 11N(\+1) - 6 Then the moments are, according t o Appendix IB, E(X ±) = NXP ± E ( X ± 2 ) = NXp i(p iG 1 +l) = NXp^PjNCx+l) - p±+l] E f X ^ j ) = NXp 1P JG 1 - NXP^jtNCX+l) - 1] E ( X i 5 ) = Nxp1(pi2G2+3PiG1+l) E ( X i 2 X j ) . HXP 1P J(P 1Q 2+G 1) E(\Xj\) = N X P l P d P k G 2 E ( X ± 4 ) = lXP i(p i 3G 3+6p i 2G 2+7p iG 1+l) E i X ^ X j ) = NXp 1P J(p i 2G 3 43p iG 2+G 1) E ( X 1 2 X J 2 ) = Nxp iPj[p 1P JG 3+(p i+p J)G 2+G 1] = K ^ i * W m G 3 E f X ^ X . ^ ) = Nxp iPjP k(p iG 5+G 2) (1-3.2) (1-5.3) (1-3.4) Only the moments given i n (1-3.3) are needed t o estimate the 8 parameters. The remaining ones, however, w i l l be needed l a t e r t o c a l c u l a t e the e f f i c i e n c y . L e t us def i n e a new random v a r i a b l e ¥, n-1 ¥ = Y X, (1-3.5) i = l 1 n-1 Then E(¥) = £ E ( X ± ) = N X ( l - p n ) (1-3-6) i = l n-1 n - l and E(W2') = E( V V X.X.) 1=1 j = l 1 3 n-1 n-1 -I E ( X I 2 ) + I Y E < x i V i = l 1=1 J4I I f we s u b s t i t u t e f o r the expectations from (1-3.3) and sum, E(W 2) = N X ( l - P n ) [ l + [ H ( X + l ) - l ] ( l - P n ) J (1-3.7) S u b s t i t u t i n g (1-3.6) i n t o (1 -3.7i™ e o b t a i n E(¥ 2) = E(¥) [E(¥) + ( N - l ) ( l - P n ) + l ] (1-3.8) Now we can solve (1-3.6) f o r 1-Pn> s u b s t i t u t e i n t o (1-3.8)> and solve the r e s u l t i n g equation f o r X. ¥e o b t a i n ,2j X = I z l (1.3.9) N E(W 2)-E 2(¥)-E(¥) From (1-3.3) P i - E ( X ± ) E(X j L)[E(W 2)-E 2(W)-E(W)] NX (N-1) E*(w) n-1 (1-3.10) i = l * K (1-3-9) and (1-3.10) give the moment estimators X* and p^* f o r the paramters X and p^ r e s p e c t i v e l y , i f the. p o p u l a t i o n moments are rep l a c e d by the corresponding sample moments. Before proceeding, l e t us de f i n e some n o t a t i o n . L e t p be the number of samples we observe. Next d e f i n e where X j Q i s the obs e r v a t i o n of the J** 1 c h a r a c t e r i s t i c from the a t h sample. F i n a l l y , d e f i n e P n-1 0 j = i j = i a=i n-1 w = ) x. a L ja 3=1 ( l n 3 . l l ) t h So, i s the mean of the number of i n s e c t s w i t h the i c h a r a c t e r i s t i c per sample and x.. i s the mean number of i n s e c t s per sample. Thus, from (1-3.9), we may w r i t e 2 r - i 3C« • x* = T H (1/0) Y w a 2-x.. 2-x.. a=l 10 A l s o , p^^ i s estimated by observed no. of i n s e c t s w i t h i property no. of i n s e c t s observed + estimated no. of unhatched eggs i . e . p ± * = x^/NX* and p n i s estimated by estimated no. of unhatched eggs no. of i n s e c t s observed + estimated no. of unhatched eggs n-1 p n * = 1 - Y P i * = (Nx-* " x..)/NX* i = l 1-4. Maximum L i k e l i h o o d E s t imators The l i k e l i h o o d f u n c t i o n , L, i s given by L =TT Px^J (1-4.1) a=l To f i n d the maximum l i k e l i h o o d e stimators \ and p^ of X and p^, we must solve the f o l l o w i n g system of equations. dL/aX = 0 (1-4.2) S L / ^ = 0 1 = 1, 2, n-1 Equation (1-4.1) can be w r i t t e n as f o l l o w s l o g t - J l o g ( X Q ) a=l 11 Because " l o g L" i s a mcnotone i n c r e a s i n g f u n c t i o n of L, (1-4.2) i s e q u i v a l e n t t o s o l v i n g the system 3 — l o g L = y i — 1- P_,(£ ) = 0 o (1-4.3) l o g L = T i 2- P-^x* ) = 0 Using (1-1.5) we can see t h a t - a - P X ( K) = e-^ d ^ f f p ^ 1 £ - ^ ] * J Z = o z ! 1=1 Pn Let us expand t h i s expression i n t o two terms. I n the second term r e p l a c e ( S z ) by C j J ^ H x j + 1 ) since these two expressions are equal. Then i t i s c l e a r t h a t x f ... _ a r~i i ^  . M"» P< .-rV x<. x-r^ 1 an X n X u 7.» X + e 1 n 1 , 1 , -n (1-4.4) P j z i o 2 1 J Pn i = l ' P j X, x.+l = -1 P->(x) - - i — P^x+e,) P j P j * ' Equation (1-4.4) must ho l d f o r each o b s e r v a t i o n , i . e . i t holds f o r x=x a and x j = x j o ; • S u b s t i t u t i n g t h i s i n t o (1-4.3) we o b t a i n it l o g L = £ p-TTT c? W-^p*<W ]-° B P j a=l P j , P j M u l t i p l y i n g by p^ and u s i n g (1-3.11), t h i s becomes 12 . £, f x , +1 )P^(x +e.) $ _L. i o g L = px, - 7 { 3a x K a J = o (1-4.5) S P j J ' ak P ^ ( x a ) Considering again the p r o b a b i l i t y f u n c t i o n (1-115), we can d i f f e r e n t i a t e w i t h respect t o X and o b t a i n oo „ n n-1 9A- z=o z* i = l Equation (1-1.4) i m p l i e s z = x n/N + (1/N) £ x k . k=l I f we s u b s t i t u t e f o r the f i r s t z i n (1-4.6) u s i n g t h i s expression, then SX _~_ N z! x ' 1 1 z=o i = i + * - x i i ("r " k ) ^ ? d ^ f f p ^ 1 - ^ ) z=o k=l ' i = l P m N X z=o 2 1 m P n i = l N X k=l where m may be any i n t e g e r between 1 and n-1 i n c l u s i v e . Thus n-1 9X - p mNX - - N X k = 1 l _ P _ , ( x ) = P n ( V " 1 ) P x(x+r m ) + (— £ x k - l ) P x(5c) (1-4.7) We should n o t i c e at t h i s p o i n t t h a t we a c t u a l l y have n-1 expres sions f o r 3/3X(P^(x)). These are a l l equal, however, and i t 13 does not matter which one we choose. Equation (1-4.7) holds f o r each o b s e r v a t i o n , i . e . when x = x a , a x ^ a . Then, s u b s t i t u t i n g (1-4.7) w i t h t h i s m o d i f i  c a t i o n i n t o the second equation of (1-4.3), we o b t a i n I P«(W + I T , . !] = 0 (1-4.8) M u l t i p l y i n g t h i s by i t p m , s u b s t i t u t i n g from (1-4.5) f o r the f i r s t term, and u s i n g the s i m p l i f y i n g n o t a t i o n d e f i n e d i n (1-3-11), I f we sum over m, t h i s y i e l d s x.. - N X ( l - P n ) = 0 Hence P n = (NX - x.. ) / NX (1-4.10) F i n a l l y , l e t us s u b s t i t u t e t h i s i n t o (1-4.9) P m = / NX m = 1, n-1 (1-4.11) Equations (1-4.1Q) and (1-4.11) give the maximum l i k e l i h o o d e s t i m a t o r s f o r the p^ i f we know the corresponding estimator f o r X. We could conceivably solve f o r % by s u b s t i t u t i n g (1-4.10) and (1-4.11) i n t o (1-4.5) or (1-4.7) w i t h the expression set equal t o zero. However, i t i s not hard t o see t h a t i t would be impo s s i b l e t o solve t h i s d i r e c t l y f o r X. Thus i t i s necessary t o use a numerical procedure. 14 The following c a l c u l a t i o n i s "based on Newton's method, which says that i f ffx) = 0, then where X^ i s the n t n i t e r a t i o n of X and /\ Q i s the i n i t i a l estimate, which might be the moment estimator of X. To f i n d a suitable f("5i) i n our case, note from (1-4.9) that V&m - (** - (1-^.13) 6 n-1 I f we substitute f o r P n/^ m and £ £ x k a i n (1-4. 8), using a=l'k=l (1-4.13) and (1-3-11) respectively, and multiply the r e s u l t i n g equation by NX/(N"^-x..), we obtain an expression which i s zero. We may use t h i s as our f ( l ) . • Hence 5, (x +1) P-?(x „+e m) f ( t ) = Y ! - P - 0 (1-4.14) a=l xm. P x ^ a ) I t remains to f i n d D-^f(x) f o r substitution i n t o (1-4.12), From (1-4.14), 6 *__+! a=l "'Sa. n-1 3 5 2 ( X « J (1-4.15) Now D ^ x J . £ J j - P ^ x J - D # k P 4 (2 a ) (1-^.16) k=l 9 P k d * and from (1-4.11), D^pm = - x m / N l 2 = - p f f l A (1-4.17) Let us substitute expressions f o r the derivatives i n (1-4.16). 15 Use (1-4.4) t o replac e 9 / a p k [ P - ( x a ) ] , (1-4.7) t o replace d/dXtP^x* ) j and (1-4.17) t o replace D<?p . Then we may use AO, A. m (1-4.13) t o replace f n / P m and (1-3-H) t o replac e the r e s u l t i n g e xpression. n-1 D * - <V*) I ( W 1 ) " K(N-Dw a/NX> + 1 ] k=l n\ * (1-4.18) in. w x I f we replace x\ by x +e . and hence x by x. +1, and a " , a m' ma ma n-j use (1-3.11) t o replace y x ka k=l n-1 k=l - [ 1 + ( w a + l ) ] P x ( x a + ^ m ) + ^ + N % ' X ' - 1 ' ^ 3 NX x a m ^ + 1 ^ • ^ ( V 2 ^ ) (1-4.19) The (r+1) i t e r a t e d values of X, p^, P n_]_ C a n be c a l c u l a t e d from the r i t e r a t e d values by the f o l l o w i n g procedure. F i r s t s u b s t i t u t e the r t n i t e r a t e d values i n t o (1-4.18) and (1-4.19) t o o b t a i n D*P--.(xV) and D«P^(x +eL). Then s u b s t i t u t e these X x x a ' X x v a m' i n t o (1-4.15) t o f i n d D ^ f ( t ) which i s f i n a l l y s u b s t i t u t e d i n t o (1-4.12) f o r X r + 1 - Then the ( r + l ) s t i t e r a t e d values of the p m can be found from (1-4.10) and (1-4.11). 16 1-5. Cover!ance M a t r i x of Maximum L i k e l i h o o d Estimators A. Method of C a l c u l a t i o n /\ D i r e c t c a l c u l a t i o n of the covariance m a t r i x , fl , of the maximum l i k e l i h o o d estimators i s p r a c t i c a l l y i m p o s s i b l e . Under c e r t a i n c o n d i t i o n s , however, we may f i n d the asymptotic fl as P ->• * by f i r s t c a l c u l a t i n g the i n f o r m a t i o n m a t r i x J = XX I xp 1 * * * I * p n _ 1 I , I P XX P 1 P 1 O 0 o I, PnP l ^ n - l I p n - l x I p n - l X I p n - l p n - l : (1-5.1) where l f l t = E ( | - l o g L • l o g L ) = - E ( ^ l o g L ) (1-5.2) where L i s the l i k e l i h o o d f u n c t i o n . The second e q u a l i t y i s tr u e by the argument presented i n K e n d a l l and St u a r t [1961] PP. 52-53. Prom the remarks at the beginning of cj 1-3 we conclude t h a t the f a c t o r i a l moment generating f u n c t i o n i s gi v e n by (1-2.1) where s* i s set equal t o 1 Instead of 0. Since g ( s ) i s c l e a r l y i n f i n i t e l y d i f f e r e n t i a b l e , a l l the f a c t o r i a l moments are f i n i t e . Because each moment about the o r i g i n i s a f i n i t e l i n e a r combination of f a c t o r i a l moments, these moments are a l s o f i n i t e . Lemma 1-1. Let X,Y, and Z be non-negative, i d e n t i c a l l y d i s t r i b u t e d 17 and mutually independent random v a r i a b l e s w i t h E(X^) < ». Then 0 < E(XYZ) < EiX2!) < E ( X 5 ) . Proof; Consider 0 < E(XY^ - Y^Z)2 = E(X 2Y) - 2E(XYZ) + E ( X Y 2 ) . Because of the mutual independence and i d e n t i c a l d i s t r i b u t i o n of the random v a r i a b l e s , E f X 2 ! ) = E ( X 2 ) E(Y) = E ( Z 2 ) E(Y) = E ( Y Z 2 ) . Thus 0 < E(X 2Y) - E(XYZ). Now, n o t i n g t h a t XYZ > 0 always, we have 0 _< E(XYZ) _< E(X 2Y) Now consider 0 < E ( X 5 / 2 - X ^ ^ l i f ^ E ( X 3 ) - 2E(X 2Y) + E ( X Y 2 ) . But E (X 2Y) a E(XY 2) since d i s t r i b u t i o n s are i d e n t i c a l . There f o r e 0 < E ( X 5 ) - EiX2!) and so E(X 2Y) < E ( X 5 ) . Combining t h i s w i t h the previous r e s u l t we have 0 < E(XYZ) < E(X2Y) < E ( X 5 ) . Q.E.D. Lemma 1-2. For the Poisson-multinomial d i s t r i b u t i o n n-1 n - l P x ^ + % i ) < (Pn/Pn>( I X i + N + X N ( I x i + N ) N } P x ( x ) <• 1=1 i = l 3 Proof; From (1-1.5), P x ( x + r m ) . e" x J ( x V * D (gw ) ( T T P i X i K P m / P n ) d " " ) z=o m 1=1 Let 1T be an i n t e g e r such t h a t 18 n-1 N ( T - 1 ) < £ x± < NT (1-5.4) 1=1 T depends on x. Prom the d e f i n i t i o n of the mu l t i n o m i a l coef f i c i e n t , we see tha t the f i r s t T terms of the sum i n (1-5.3) are zero. Thus, i f we w r i t e the mul t i n o m i a l c o e f f i c i e n t i n a s l i g h t l y d i f f e r e n t way, 0 0 z „ _JL x. Nz-x -...-x.. >*<**w> - I 77 (iz> ( T T P i 1 ) ( p m / p n ) z=T z ' i = l V*" 1 oo n < (P„/P N) I (xz/z:) (f) (TTPI ) " z=T n i = l n » - ( P m / P n ) { m ( x T / T 0 ( f ) ( T T P i " 1 ) * • tj^JT (f) (TT P ^ 1 ) ) 1 = 1 z=T+l 1 = 1 (1-5.5) Note t h a t the f i r s t term i n the brackets i s NT times the 'z=T" term i n the expansion of P-j(x) and, since each term i s p o s i t i v e , n n-1 NT(X T/T!)(| r) ( T T P i i ) < NTP^x) < ( £ Xj4-N) P-(S) (1-5-6) 1 = 1 1 = 1 n-1 Now consider the second term i n (1-5.5). Because £ x i < NT 1=1 and z 2 T + l * ro* (Nz-N)! 5 l i (Nz-k) ( | Z ) = T ^ H ^—5=1 JJ L-n=t (1-5.7) ( J ] " x ± l ) ( N z - N - £ x±)l k=o (Nz-k- £ x ± ) 1=1 1=1 1=1 Now since k<N and z-l>T we have 19 Nz-k- £ x ± Nz-N- 2, x ± NT- £ x ± 1=1 1=1 1=1 1=1 Thus (1-5.7) hecomes i = i Hence the second term on the right side of (1-5-5) i s l e s s than or equal to z=T+l^ z" x ;- „ , i = l 1=1 n - l This expression i s XN( ^  x^+N)^ times part of the expansion i = l n-1 of P^(x) and thus i s l e s s than XN( £ x ±+N) N. i = l Using t h i s f a c t along with (1-5.6) i n (1-5-5) we have n-1 n-1 . P ^ x + e J < (P m / P n ) [ £ x i + N + X N ( £ x 1 + N ) N |p 3 e(x) i = l i = l Q.E.D. Lemma 1-3. For the Poisson-multinomial d i s t r i b u t i o n , E[io/oX(log L)P]<«P Proof i Using (1-4.1) and d i f f e r e n t i a t i n g the logarithm, E ( | i L l o g L p ) = E ( | I T j r l ^ l x P * ( ^ ) l 3 > OA i X Gt a=l P P P a/BX[P-(* a)] o/dXfP^x )] a/oXCP^Xg)] < } } ) e( X — 2 L * Y % " a = l Y = 1 6 = l W P x ( * V p*<*6) 20 Because the observations are mutually independent, the expressions o/SX[P^(x )] — are independent f o r a = l , 2, , f3. Hence we may apply lemma 1-1 t o the above i n e q u a l i t y . X a=l PX<X> Since the observations are i d e n t i c a l l y d i s t r i b u t e d , the above expression i s independent of a, hence E( \L. l o g L | 5 ) < ^  E( | -2± p ) (1-5.8) ax " P^(x) and s u b s t i t u t i n g f o r a/aX[P^(x)] from (1-4.7), ^ , , p f x + l ) P^(x+eV) , E(|-L- l o g L p ) < (J* E ( | n ^ . x V _ A m + (1/NX) V x . - l p ) 1 9 X " % N X **<x> i = i Replacing the absolute value of the sum by the sum of the absolute values and u s i n g lemma 1-2, i Jr- £ x i + ( v 1 ) [ 1 A + < I! X i + W ) N ] +1) <• W A- i = l i = l When the above expression i s expanded, i t w i l l y i e l d a f i n i t e sum of terms of the f o l l o w i n g type - n-1 constant • E ( j [ x^ ^ ) d=l ? (1-5.9) where the n^ are non-negative i n t e g e r s These terms are a l l f i n i t e s ince we know a l l the moments are f i n i t e Hence the r e s u l t f o l l o w s . Q.E.D. 21 Lemma 1-4. For the Poisson-multinomial d i s t r i b u t i o n E( 13/3 P ± ( l o g L ) P ) < oo , i = 1, 2, ..., n-1. Proof; By the same argument as i n lemma 1-3 but u s i n g i n s t e a d of X , we w i l l o b t a i n (1-5.3) w i t h p.^  r e p l a c i n g X , i . e . E ( | ^ l o g L p ) < p 3 E ( , ^ i H x f £ l l p ) BPi P S ( x ) S u b s t i t u t i n g f o r the d e r i v a t i v e u s i n g (1-4.4), we f i n d * ^ x. x .+1 P-*(x+e.) , l o g L p ) < p 5 E ( U - J ^ i — J - P ) s p ± " p d p d P X(X) Replacing the absolute value of the sum by the sum of the absolute values and u s i n g lemma 1-2. < e ( * i - [ y x. + H + XN ( V" x, + T$f]\ ( P j p n i = l i = i > Upon expansion, the above expression becomes a f i n i t e sum of terms of the type described i n (1-5.9 )> and by the same reasoning as was used t h e r e , E[ Ja/ap.^ ( l o g L ) p ] i s f i n i t e . Q.E.D. Let us appeal t o theorem 2, page 282 i n Rao [1947]. This theorem says the f o l l o w i n g - Let ii be the covariance m a t r i x of the maximum l i k e l i h o o d e s t i m a t o r s O^g, and L be the l i k e l i h o o d f u n c t i o n . 22 Then, i f E D a / d ^ ( l o g L ) | 2 + n ] < i = 1, 2, n f o r some n > 0, J " 1 " j£! ^ (1-5.10) where J i s the i n f o r m a t i o n m a t r i x and 0 i s the number of samples observed. Lemmas (1-3) and (1-4) show t h a t the Poisson-multinomial d i s t r i b u t i o n s a t i s f i e s the c o n d i t i o n s of Rao's theorem i f we choose n=l. Hence (1-5.10) h o l d s , and thus f o r samples of reasonable s i z e we can make the approximation J " 1 = A (1-5.11) B. C a l c u l a t i o n of the Elements of J Before proceeding l e t us f i r s t prove the f o l l o w i n g r e s u l t . Lemma 1-5- £ £ (x + l ) ( x +l)P^(x+e )P x(x+^,) Define A ± . = -1 + £ ... 2, — ~ — ~ — I T — *L- x l " v An-1 „=o x_ , - o ^  p i P j P x ( x ) (1-5-12) Then A i j = = A, say, f o r i n , i , J , k = 1 , 2, n-1. Proof; A i j ~ ^ k = A i J ' A i m + A i m ' Amk C D W - l - l x, =o . =o 1 n-1 ( x d + l ) P x ( x - f e ; j ) ( x m + l ) P x ( g + e a )  P j Pm 00 + 23 xl=° ^ - i - o N 2 * 3 ? ^ * ) P j L P k R e c a l l now t h a t (1-4.7) i s tru e f o r a l l values of m from 1 t o n-1. T h i s i s p o s s i b l e only i f P . ( x + e ; ) = i ^ ^ ( x + e k ) . Using t h i s f a c t , the terms i n the brackets of the above equation are zero and hence the whole expression i s zero. Q.E.D. Let us consider 1 ^ . Prom (1-5.2) and (1-4.1) P I u = E [ ( * _ l o g L ) 2 ] = E([ I 1- l o g P ^ ( x a ) ] 2 ) 3X a = i d*- P P a=l Y=1 Because the observations are independent of each other and a l s o have i d e n t i c a l d i s t r i b u t i o n s , I - p E ( [ l - l o g P - ( x ) ] 2 ) + p ( p - l ) E 2 [ ^ - l o g P - ( x ) ] (1-5.13) K K 3X x 3X x At t h i s p o i n t l e t us observe t h a t E [ l - l o g P-(x)] = E [ - J ^ - 1- P^(x)] ax x P-(X) ax x 00 00 00 00 - I - I 77 p x ^ = 7 7 £ - I xl=° ^-1=° xl=° xn-l=° - l - ( l ) - 0 (1-5-14) ax 24 U L A P^(x) p m - x V m ' Thus (1-5.13) becomes, a f t e r d i f f e r e n t i a t i n g the l o g a r i t h m , S u b s t i t u t i n g f o r the d e r i v a t i v e from (1-4.7) and u s i n g the d e f i n i t i o n of e x p e c t a t i o n , x 1=o x n_ 1=o x v ' V *m n-1 -n 2 + [(1/NX) £ x^-1] P-(x) \ k=l J I f we now square, replace the r e s u l t i n g sums by the moments they represent and use (1-5.12) along w i t h lemma 1-5, n-1 i I x x = p n 2 (A+l) + (1/N 2X 2) B[( £ \ ) 2 ] + 1 n-1 " (2P n/Pm H x) E t V Z \ ' 1 ) ] + ( ^ m ^ 2 ) E( Xm> _ 1 k=l n-1 - (2/NX) B( £ x k ) k=l We can evaluate the various moments by expansion and the use of (1-3.3). A f t e r s i m p l i f y i n g the r e s u l t i n g e x p r e s s i o n , we o b t a i n (1/0) I u = p n 2 A + ( l - p n ) ( N X + Fp n-P n)/NX (1-5.16) By a s i m i l a r procedure the other e n t r i e s of the i n f o r m a t i o n m a t r i x may be c a l c u M e d . The r e s u l t s are 25 (VP) I XP. h i \ = " P n H X A + W p n + 1 " p n u r2. 2 (VP) Ip p = N V A + N a ( 1 / P i + 1 - N ) (VP) I N 2X 2A + N\(l-N) f o r i ^ j P i P d (1-5.17) The c a l c u l a t i o n s of the above r e s u l t s are done i n Appendix 1C. Let us def i n e B P P = ( 1 / P ) I P I P J J 4 i Thus + NX/p, = ( V P) I _ _ PP i P . ^ (1-5.18) (1-5.19) J = 3 I f we s u b s t i t u t e (1-5.18) and (1-5.19) i n t o (1-5-1), B XX B B XP XP B p p+NX/ P l- B XP B_ PP B XP B. PP B p p + H X / P n - l (1-5.20) By (1-5.11) the i n v e r s e of t h i s m a t r i x i s fl . I n Appendix ID the i n v e r s e of such a m a t r i x i s c a l c u l a t e d i n d e t a i l . By making the appropriate a s s o c i a t i o n of v a r i a b l e s we have n-1 det J = p n f B u + ( B u B p p - B X p 2 ) ( l - P n ) / N X J J\ (HVPj.) (1-5.21) L i = l 26 var \ = J KX + V l - P n ) p B xx^ + (BXXBPP"BXP ) ( 1- pn> B X P P i p ' ; : BXX N A +( BXX BPP - % Ki^)' •• 1 (B, - B.„ ) p.p. [ BXX H X + <BXXBPP ' B ^ ) ( l - P n ) ] H X ~ 1 RP± ^ BXX BPP- BXP 2 ) P i 2 , var p. = i [ — — — ± - « J * ®x [ BxxN X + ^ BXX BPP" BXP 2)(i-P n)"" (1-5-22) C o r o l l a r i e s 2.1 and 2.2 i n Rao [19^7] s t a t e t h a t i f the d i s t r i b u t i o n s a t i s f i e s lemmas 1-3 and 1-4, then the maximum l i k e l i h o o d e stimators are minimum variance estimators f o r l a r g e samples and i n terms of the g e n e r a l i z e d v a r i a n c e , det S~l , are a s y m p t o t i c a l l y e f f i c i e n t . 1-6. E f f i c i e n c y of the Method of Moments A. Method of C a l c u l a t i o n The e f f i c i e n c y of a method of parameter e s t i m a t i o n f o r a m u l t i v a r i a t e d i s t r i b u t i o n i s defined t o be E f f = d e t CM (1-6.1) det C where C i s the covariance m a t r i x of the estimators f o r the method i n question and C^ i s the covariance m a t r i x of the minimum variance e s t i m a t o r s . Because the Poisson-multinomial d i s t r i b u t i o n s a t i s f i e s 27 the c o n d i t i o n s of lemmas 1-3 and 1-4, c o r o l l a r y 2-2 i n Rao [1947] s t a t e s t h a t the maximum l i k e l i h o o d estimators have minimum variance, /~* Thus, i n our case, C M = JTi and C = _T1 , the covariance m a t r i x of the moment es t i m a t o r s . Hence E f f = ( d e t / I )/(det ft ) and by (1-5.11), E f f (1-6.2) det n • det J To c a l c u l a t e i l , we w i l l f i r s t f i n d the covariance m a t r i x , M, of the moment es t i m a t o r s . L e t us d e f i n e W and "X^  as f o l l o w s p n-1 N (VP) I ( E \ a r a=l k=l P \ - ( V P) £ X i a a=l (1-6.3) where X k a i s the random v a r i a b l e denoting the number of i n s e c t s observed w i t h the k c h a r a c t e r i s t i c on the a o b s e r v a t i o n . W2 estimates E(W 2) and 1^ estimates E ( X 1 ) . By d e f i n i t i o n var W c o v f ? , ^ ) eov^W 2,^) ... cov(iP,Z Q_ 1) 3 cov(¥ var "X^  covCx^/X^) ... c o v ( T 1 , X n _ 1 ) cov(¥ ,X 2) cov(!T 1,Y 2) var "X"2 .. c o v O ^ / X ^ ) cov(¥ 2,T n_ 1) c o v ( 7 1 , T n _ 1 ) c o v C X g , ^ ^ ). .. var T n _ 1 J ( l - 6 . t ) 28 By Appendix IE we can approximate det fl by det A « (det (}) 2 det M (1-6.5) where det (} i s the Jacobian 3[X, p-,, ..., p„ T ] det 0] = _ 1 S=l (1-6.6) ^ ( W 2 ) , E O ^ ) , E ^ ^ ) ] B. C a l c u l a t i o n of det M i n Terms of the Parameters F i r s t we must express the elements of M i n terms of P x , p 2 , ..., P n - 1 ^ X. Consider P var "3^ = var [(1/p) £ X ± a ] a=l Because the observations are independent and i d e n t i c a l l y d i s t r i  buted f o r each o b s e r v a t i o n , P var ^ m ( 1 / p 2 ) £ var X i a = (1/p) var X ± a = 1 (1-6.7) = (1/p) [ E ( X ± 2 ) - E 2 ( X ± ) ] I f we now r e p l a c e the expectations by the expressions given i n (1-3.3), we w i l l have var X± = (1/p) NX V± [ p ± ( N - l ) + 1] (1-6.8) Now consider cov (T^, X j ) I 4 3 • p P cov ( X ^ X , ) = cov [(1/p) £ X i a , (1/p) £ X J y ] a=l Y=1 P P . - (1 / P 2 ) E [ l x ± a - p E ( X i ) H lx^ - p E ( X j ) 3 i a«l Y=l ; 29 The l a s t e q u a l i t y i s t r u e since E ( X i a ) = E(X^) f o r a l l a, A l s o , since the observations are independent, p covOC^Tj) - (1/P 2) YE{[X±a ~ E( Xi> 3 [ Xja " E ( X j ) ] ] p a=l + ( V P 2 ) I I E^±a ~ E < X i ) 5 ~ E<V3 a=lY4° But E [ X ± a - E ( X ± ) ] = E ( X i a ) - E ( X i ) = 0. Hence, since'the X±a are i d e n t i c a l l y d i s t r i b u t e d i n a, we'can w r i t e c o v f X ^ Z j ) = (1/p) E [ [ X 1 - E ( X i ) ] [ X J - E ( X J ) ] ] (1-6.9) - (1/P) [ E ( X 1 X J ) ^ - E ( X ± ) E ( X j ) ] S u b s t i t u t i n g (1-3.3) i n t o the above equation we o b t a i n c o v C x ^ X j ) = (1/p) N(H-1)X p ± p j (1-6.10) Now consider cov(¥ ,X^). By the same reasoning as before __ f §. n-1 n-1 p -) cov(W 2,^) = cov (1/p) I XiaV* ^/P) I \a ( a=l i = l 3=1 k=l J n-1 n-1 n-1 n-1 - w*& 1 1 E W j ) - EL\) I" I E<xiV} l i = l 3=1 i = l 3=1 ^ - (1/P ) { E ( X k 5 ) + 2 £ E ( X k 2 X j ) + £ £ E ( X k X i X J ) i^ k J4k,i n-1 n-1 + I s(\\2) - *(\) [ I E(x±2) + I Z E ( x i x 3 ) ] ( i 4 k i = l 1=1 3 4 i J (1-6.11) 30 By s u b s t i t u t i n g (1-3.3) and (1-3.4) i n t o the above expression, we o b t a i n an equation i n terms of the parameters which u l t i m a t e l y reduces t o cov(W 2,'X k) = ( l / 0 ) H X P k | G 2 ( l - p n ) 2 + 3 0 ^ 1 ^ ) - NXCl-p^Ea^l-p^ + (1-6.12) The steps between (1-6.11) and (1-6.12) a r e - o u t l i n e d i n Appendix I F . For s i m p l i c i t y , l e t us de f i n e E1 = G 2 ( l - P n ) 2 + 3G 1(1-P n) ~ N \ ( l - p n ) [ G 1 ( l - p n ) + 1] + 1 (1-6.13) Then cov(W 2,X k) - (1/p) N X p ^ (1-6.14) F i n a l l y consider var Using again the arguments of i d e n t i t y and independence, we o b t a i n var W2 = var j (1/p) £ ( £ X i a ) 2 f n - i ( 1 = 1 } = (i/p) E ( I x±)4 - E 2 ( I x±y I i = l i = l , n-1 n-1 n-1 n-1 n-1 n-1 - <v*» I I I I B<WA> - c I Z w^ri ( i = l t1=l k=l T3=l i = l j = l n-1 ( i = l J 4 l k 4 i , j m4i,j,k n-1 n-1 i=i J4I k 4 i , j 1=1 j+i 31 n-1 n-1 n-1 1=1 j 4 i 1=1 1=1 n-1 \ + r r E w n ( i- 6- i5) How s u b s t i t u t e (1-3.3) and (1-3.4) i n t o the above equation and s i m p l i f y as i s done i n Appendix 1G. Then Var Tn£ = ( l / p j H X ( l - p n ) U 5 ( l - p n ) ^ + 6 G 2 ( l - p n r + 7 ( ^ ( 1 ^ ) + 1 - N X ( l - p n ) [ G 1 ( l - p n ) + l ] 2 ^ (1-6.16) For s i m p l i c i t y we may de f i n e H, 2 = G 5 ( l - p n ) 3 + 6 G 2 ( l - p n ) 2 + 7G x(l-p n) + 1 - Hx(l-P n)CG 1(l-p n) + l ] 2 (1-6.17) Then var W2 = (1/p )Nx(l-P n )Hg (1-6.18) Thus, s u b s t i t u t i n g (1-6.8), (1-6.10), (1-6.14), and (1-6.18) i n t o (1-6.4), HX(l-p n)H 2 NXp-J^  ... ^P n-1 H1 Nxp^ NXP 1[p 1(N-l)+l] ... N(N-1 JXp-jP^ N X P 2 H 1 N(N-l)Xp1P2 N(N-1 )XP 2P n_ 1 * • • • • • • • • • • N X p n - l H l N(W-1 )Xp1Pn_1 ... 'HXP^CP^CN-D+I] (1-6.19) 4 32 The determinant of t h i s m a t r i x i s c a l c u l a t e d e x p l i c i t l y ^ i n Appendix 1H. We need only set N' = N-1 and R = NX i n the ma t r i x i n the appendix and we have (1-6.19). Then n-1 det M = ( N X / p ) n ( l - p n ) ( " [ " [ p 1)[H 2(N-Np n+p n)-H 1 2] 1=1 (1-6.20) C. Determination of the Jacobian, det (fr To evaluate det 4 as defined i n (1-6.6) we must evaluate the determinant of the f o l l o w i n g m a t r i x - oX oE(W 2) 3P 1  0E(W 2) 9 p n - l aE(w^) oX 9E(X X) 3P 9P n-1 9E(X 1 ) dp- 9 E(X 1) 3E(X 2) ax SPX s p n - i (1-6.21) To f i n d the above p a r t i a l d e r i v a t i v e s we appeal t o (1-3*9) and (1-3-10), n o t i n g t h a t n-1 E(W) = Y E ( X i ) ' i = l A f t e r d i f f e r e n t i a t i n g we o b t a i n 33 oX 0E(¥ 2) N-1 E(W)[2E(frr )-E(W)3 N [E(¥^)-E2(¥)-E(¥)]2 E(X.) (N-1 )E^(¥) SP. 3E(X j ) 9P ± SE(X i) E ^ ± \ E(W)-2E(¥ 2) N-1 ' E^(¥) E ( X ± ) E(W)-2E(¥ 2) N-1 E^(¥) i f E(¥ 2)-E 2(¥)-E(¥) (N-1 ) E 2 ( W ) I t w i l l be more convenient i f we are able t o express the e n t r i e s of (f i n terms of the parameters. To t h i s end we may- s u b s t i t u t e f o r the expectations i n the above set of equations u s i n g (1-3.3), (1-3.6), and (1-3-7). A f t e r s i m p l i f i c a t i o n we w i l l f i n d oX • 3E(¥ii) ' oX 3E(X, ) N ( N - l ) ( l - p n ) ' 2 [ p n + N ( X + l ) ( l - P n ) 3 - l N ( N - l ) ( l - p n ) 2 ap± aE(¥ y) oP ± BE(X J ) oP ± o E ^ ) p i N(N - l )x( l-P n)^ P i [ l - 2 [ p n + N ( X + l ) ( l . p n ) ] } N(N-1 )X(1-P n)' i f ±45 P i f l - 2 [ p n + N ( X + l ) ( l - P n ) ] ] 1 72. N(N-1 )X(1-P n) NX (1-6.22) 34 The s u b s t i t u t i o n of (1-6.22) i n t o (1-6.21) gives an e x p l i c i t e xpression f o r (J. I t s determinant may he found by m u l t i p l y i n g the f i r s t row by p ±/X and adding i t t o the ( i + l ) s t row f o r 1 = 1 , 2, . .., n-1. T h i s w i l l give an upper t r i a n g u l a r m a t r i x which may be expanded by the f i r s t column t o give det Cj = - _ 1 (1-6.23) (NX ) n - 1 N ( N - l ) ( l ^ p n r I f we now s u b s t i t u t e (1-6.20) and (1-6.23) i n t o (1-6.5), d e t a = N V i ) V - P n > V r V > P l At t h i s p o i n t we can f i n d the e f f i c i e n c y by s u b s t i t u t i n g the above equation along w i t h (1-5.21) i n t o (1-6.2). Hence N 2 ( N - l ) 2 ( l - p n ) 5 E f f = 5 5- (1-6.24) [ B U N X + ( 1 - P n ) ( B u B p p - B X p * ) ] [H 2(N-Np n +p n )-H^ ] 1-7- Sample Zero Frequency and U n i t Sample Frequency Estimators A. Sample Zero Frequency and F i r s t Moments Sample zero frequency e s t i m a t i o n i s u s e f u l i f the zero sample, ( i . e . X = (5), occurs q u i t e f r e q u e n t l y . From (1-2.3), i f we set X=0 P^(0) = e x p [ x ( p n N - l ) ] (1-7.1) L e t us de f i n e F ( a ) t o be the frequency w i t h which 35 X = a = ( a ^ , a 2 , a ^) occurs i n p observations. Consider the estimator F ( 0 ) f o r E^O) E [ | F ( 0 ) ] = ^ 0 ) = e x p [ X ( p n N - l ) ] (1-7.2) Hence (1/p) F(o) i s an unbiased estimator f o r EJ(0). We may o b t a i n the sample zero frequency estimators ~ and p^ f o r \ and p^ by u s i n g the moment estimators f o r the f i r s t moments given by (1-3.3) and (1-6.3) f f . together w i t h the estimator j u s t defined i n (1-7.2) t o o b t a i n the equations X. = NXp^ (1-7.3) (1/p) F ( 6) = e x p [ x ( p n N - l ) ] To solve f o r X, p^, l e t us f i r s t add the top equation of (1-7-3) f o r i = l , 2, — , n-1. n-1 N X ( l - P n ) = I ^ i = l S o l v i n g f o r p n and s u b s t i t u t i n g i n t o the bottom equation of (1-7.3), n-1 l o g M i = f [ ( l - - i V 7, f-1 ] (1-7.4) P ^ 1 = 1 We can use a numerical method t o f i n d x, and then from (1-7.2), P ± = \/NX (1-7.5) 36 B. U n i t Sample Frequency E s t i m a t i o n I f the u n i t samples ( i . e . X = e k , k = l , 23 n-1). occur" f a i r l y f r e q u e n t l y i t may he advantageous t o use t h i s e s t i m a t o r . From (1-2.1) we can see t h a t * ( e k ) = M ! 2 = [exp [ \ ( p / - l ) ] ) l P n M P ^ (1-7.6) s=0 Consider the estimator (1/p) F(®j c) f o r ^ s ^ ) ' We. n o t i c e 1 —* —* that E[j| F ( e k ) ] = ^(© k).- Hence the estimator i s unbiased. Thus we may solve the equations (1/P) F ( e k ) = { e x p [ X ( p n N - l ) ) } N p n N _ 1 p k X k = l , 2, n-1 (1-7.7) along w i t h (1-7.3) f o r p^ and X t o o b t a i n t h e i r u n i t sample estimators p^ and X. To solve these equations l e t us d i v i d e (1-7.7) by (1-7.7) w i t h k = l . Then m * ( 1 . 7 . 8 ) P ( e x ) P X D i v i d i n g (1-7-7) w i t h k=l by (1-7-3) n ~ l v N-1 Fie^/FiO) = N ( l - I p k ) p xX k=l I f we s u b s t i t u t e f o r p k from (1-7-8) and solve f o r X , v p 1NF(0) p. n r _^  N-1 1/X = 1 . \ [1 - - i — I (1-7.9) F ( e x ) F ( ^ ) £ k 37 Prom (1-7.3) and the f a c t t h a t the sum of the i s one, we o b t a i n n-1 e k=l Upon s u b s t i t u t i o n f o r p k from (1-7-8) and d i v i s i o n by X, we f i n d 1 l o g I l 9 l + i . [ i i — V P ( e k ) ] (1-7-10) X B F ( e ] L ) ^ I f we now s u b s t i t u t e f o r 1/H from (1-7-9), equation (1-7-10) becomes p,NF(0) p, _ N-1 w ? r x 1 [1 - — i — V P ( e t ) ] l o g H P J . + i P i V * = t l " p 7 h £ F ( * * ) 3 F ( e l ' k=l T h i s may be r e w r i t t e n as ^ n-1 , , )* ^ n-1 P l - - i _ £ P ( e k ) ^ f - ^ r r CHP(O) l o g + I F ( e k ) ] - l | + 1 = 0 (1-7.11) We can now use a numerical method t o f i n d p^. We can then c a l c u l a t e X from (1-7-9) and f i n a l l y p^, 1=2s 3, ..., n-1 from (1-7.8). 38 CHAPTER I I THE POISSON-NEGATIVE MULTINOMIAL DISTRIBUTION 2-1. A B i o l o g i c a l Model This d i s t r i b u t i o n a r i s e s from a model very s i m i l a r t o the one given i n § 1.1 f o r the Poisson-multinomial d i s t r i b u t i o n . The only v a r i a t i o n s are the f o l l o w i n g . L e t N represent, i n s t e a d of the t o t a l number of eggs l a i d i n each batch, the mean number of eggs t h a t do not hatch i n each batch. Let Z be a random v a r i a b l e denoting the number of batches of eggs l a i d i n a p a r t i c u l a r quadrat and assume the egg l a y i n g stops as soon as the (Nz) egg i s l a i d t h a t w i l l not hatch. Hence i f we d e f i n e p 1 ? p 2 , Pn_-j_ the same as i n §1.1 and p n by (1-1.2), then ( x 1 + . . .+x n_ 1+Nz-l)! x ± (2.1.1) •d rsw * \ i n - i ^ NZ-TT ^ P-(x|Z = z) = p n I | P i x 1 ! x 2 ! . . . x n _ 1 ! ( N z - l ) ! i = 1 i f z>0 P-(x|Z = 0) = ( 1 i f x=0 0 otherwise I f Z again has a Poisson (X) d i s t r i b u t i o n as i n § 1.1, then 00 P^(x) = £P x(*iZ = z ) P(Z=z) z=o (2-1.2) = e 2, ( x ' 1' — ; ~ p n 11 ?! 39 where we adopt the convention t h a t (Xj+.. .+^^+12-1)1 f l i f x = 0, z = 0 x ' .. .x„ , !(KTz-l)! \ 1 0 i f x 4 6, z = 0 (2-1.3) 2-2. P r o b a b i l i t y Generating Function and Recursion Formula f o r  P r o b a b i l i t i e s The p r o b a b i l i t y generating f u n c t i o n i s defined by g*(s) = E ( S l X l s 2 * 2 ... B n_ 1* n- 1) (2-2.1) Thus from (2-1.2), (2-1.3), and (2-2.1) n-1 g* x.=o x , =o z=l x l ^ n - i ' V " 2 - 1 ^ ' i=l 1 n - l x,o ] + 6- where 6-± -> = | 1 i f x = 0 x,o 0 i f x 4 0 L e t us sum the terms i n 6^ se p a r a t e l y and rearrange the order of the sums of the other terms. Then g * ( s ) = £ . . . £ i T r ( s l P l ) z=l x 1 = o x n _ 1 = o x l x n - l * ^ z ± ) m ±=1 + e~ x (2-2.2) To evaulate the above expression we use the i d e n t i t y (x 1+...+x n_ 1+Nz-l): x l ! ' * ' ^ - I 1 ^ 2 " * 1 ) 1 n-1 n-1 / Z 3^+Nz-l^ X J 4o (2-2.3) Prom F e l l e r [1950], page 61, (12.4), we have the i d e n t i t y 03 00 Hence £ ("a) (-p) x = £ ( X + * r l ) ( - l ) X ( - P f = (1-p ) " a (2-2.5) x=o x=o The l a s t e q u a l i t y i s t r u e because the middle term i s simply the bi n o m i a l expansion of (1-p) • We may use (2-2.4) t o replace the c o m b i n a t o r i a l expression on the r i g h t side of (2-2.3) and then use the r e s u l t t o repla c e the f a c t o r i a l s i n (2-2.2). We o b t a i n an expression i n v o l v i n g negative b i n o m i a l c o e f f i c i e n t s z=l xn-l =° x i L = 0 3=1 n-1 / ~Y ^-Nz k=j+l (2-2.5i) n-1 • TT C-s±Pl f 1 + e i = l -X A f t e r c a r r y i n g out the i n d i c a t e d summations u s i n g (2-2.5) as i s done i n Appendix 2A, g z=o n-1 i = l -Nz 41 and hence N g*(s) = exp ) X *n •I 1=1 s,, P - X (2-2.6) L e t us consider the f o l l o w i n g change of v a r i a b l e s *n ' n p ± = - t o ± A n i = 1, . . . , n-1 N = - V > 0 (2-2.7) Then (2-2.6) becomes g*(s) = exp [ x [ b n + £ - \ n-1 i = l (2-2.8) Notice t h i s formula i s e x a c t l y the same as (1-2.1) where V corresponds t o N, and b t o p. Hence whatever we say about X and p^ i n the P o l s son-multinomial d i s t r i b u t i o n , we may say the same t h i n g about X and b^ r e s p e c t i v e l y i n the Poi s s o n - negative m u l t i n o m i a l d i s t r i b u t i o n by v i r t u e of (2-2.8). I n many r e s u l t s obtained i n t h i s chapter t h i s f a c t w i l l g r e a t l y reduce the l e n g t h of c a l c u l a t i o n s , while i n other s , e s p e c i a l l y those i n v o l v i n g d e r i v a t i v e s w i t h respect t o p^, i t i s b e t t e r t o c a l c u l a t e d i r e c t l y . To o b t a i n an expression f o r the p r o b a b i l i t i e s we must d i f f e r e n t i a t e g*(s*) an appropriate number of times. S t a r t i n g 42 w i t h (2-2.8) we can e x a c t l y f o l l o w the procedure i n Appendix 1A w i t h the obvious change i n symbols and o b t a i n (1A-4) which w i l l be w r i t t e n as n-1 P x ( x + e k ) = n-1 X V n V - l x l x n - l Y ... Y v(v-i) ... [v- Y (x±-y±)i i = l V 1 y x=o y ^ - o r -r-r b i x xi " y i i = l V P x ( y ) Prom (2-2.7) i t i s c l e a r t h a t (2-2.9) Upon s u b s t i t u t i o n f o r the b's and V i n the above equation from (2-2.7) and (2-2.9), we o b t a i n P*(x+eV) = - X ( P k / P n ) ( l / P n ) x k + 1 -N-1 x l x n - l Y ... Y (-BO(-N-l) .. • ^1=° yn-l=° ( x i - y i ) H T T ( - P i ) 1 \ x v ) t i = l i = l v x i - y i ; - ' 1 P*(y) I f we now f a c t o r the "minus one" out of each term immediately f o l l o w i n g the m u l t i p l e sum and each p^, and note t h a t N i s an i n t e g e r , we o b t a i n the second equation of (2-2.10). The f i r s t comes from (2-2.6). 43 P x ( x + e k ) = P^O) = g*(0) = e ^ n " 1 ) n _ 1 x P k p n N I1 V ^ h\-y±)v. y l = o y n - l = ° 1=1 (N-1)! (2-2.10) n-1 TT 1=1 T f P i X l " 7 1 / ( x i - y i ) ! ] P-(y) 2-3. E s t i m a t i o n of Parameters by the Method of Moments To o b t a i n the moments of t h i s d i s t r i b u t i o n , we use the same method as i n § 1-3. Prom (2-2.8) we may form the cumulant generating f u n c t i o n n-1 c ( s ) = x f [ b n + Y *±\f ~ l ] (2-3.1) 1=1 By f o l l o w i n g the c a l c u l a t i o n s of Appendix IB, but r e p l a c i n g N w i t h V, and p^ w i t h b^, 1 = 1, 2, n, we w i l l get (1-3.2), (1-3.3), and (1-3.4) w i t h the above replacement. L e t us c a l l these m o d i f i e d equations (1-3.2)', (1-3.3)', and (1-3.4)'. I f we apply the t r a n s f o r m a t i o n given by (2-2.8) and (2-2.9) so as t o express (1-3.2)', ( 1 - 3 . 3 ) % and (1-3.4)' i n terms of N and p ± and then d e f i n e G 1» = N(X+l) + 1 ^ 0 * = N 2(X 2+3X+1) + 3N(X+1) + 2 •x -z o I 3 o ; (2-3*2) P(X^+6\N-7X+1) + 6N^(X>3X+1) / G * = ^ ^ x +  ^(X +3X+1) + 11N(X+1) + 6 44 the moments w i l l he E(X±) = E ( X ± 2 ) = N\p ±/p n NXp, p, 1 (-± G * + 1) p n p n NXP4 p.H p. 1 ( X + 1 ) + - i + i] P, p n Pn n E(X±X^ = NXp ±Pj P. n NXp.p, A [N(X+1) + 1] n (2-3.3) E ( X ± 2 X J 2 ) E ( X i X / k ) NXp, p E ( X ± 5 ) = i ( i + G * + l ) p n P n n E ( X i X . ) . E ( X i X J X k ) - E ( X ± 4 ) NXp.p, p, g 3 (— G 2* + G x*) p n Pn ( G * 2 NXp, p n ^ 3 p 2 1 G *+6-L- G 9*+7— G *+l) Pn Pn' P n Pn 2 , 3 x NXP,p, p, p. E ( X i % j ) = — | l i (-^ G 3* + 3 G 2* + G 1*) n n n v n ^n p n 1 NXp, p.p. p. ±f-^ ( _ i G3* + G2*) Pn' P N x p i p j P k P i m (2-3.4) n 45 Let us now define the random v a r i a b l e , ¥, by n-1 W = £ X ± (2-3.5) i = l TiZ^ K \ ( l - p ^ ) Then E(W) = £ E ( X i ) = ~-^» (2-3-6) i = l p n n-1 n-1 and E(¥ 2) = E( £ £ X ±X^) i = l J=l n-1 n-1 = 1 E ( X ± 2 ) + £ I E ( X i X j ) i = l i = l 3 4 i I f we s u b s t i t u t e f o r the expectations from (2-3.2) and sum, E(¥ 2) = M ( l - p n ) ( 1 + [N(*+l)+l] ( 1 " P n ) | (2-3.7) P n I P n J Then, the s u b s t i t u t i o n of (2-3.6) i n t o t h i s y i e l d s E(¥ 2) = E(¥) JE(¥) + ( N + l ) ( l - p n ) + l j (2-3-8) " ~ P r T Consequently we can solve (2-3.6) f o r p n , s u b s t i t u t e i n t o (2-3.8), and solve f o r \ t o o b t a i n X . f i i » 2 ( V ) (2-J.9) N E(JT )-E*(W)-E(W) Prom (2-3.3), P i=P nE(X ± )/NX and from (2-3-6) pn=NX/[E(¥)+NX] Thus we may e l i m i n a t e p n from these two equations and s u b s t i t u t e f o r X from (2-3.9). A f t e r s i m p l i f y i n g 46 E(X±) E(Xi)[E(W2)-E2(¥)-E(W)3 ? i E(W)+NX E ( ¥) [E ( ¥ 2 )-E 2 ( W )+NE (¥) ] (2-3.10) P n = H\/[E(¥) + NX] To o b t a i n the moment estimators ' X* and p^* of X and r e s p e c t i v e l y , we simply replace the p o p u l a t i o n moments by t h e i r corresponding sample moments. Using the n o t a t i o n defined i n (1-3.11), (2-3-9) and (2-3.10) y i e l d x . . 2 H+l (1/3) £ w a 2-x.. 2-x.. a=l p i * = x ± / " [ x ' ' + n-1 p * = i - I p * = NX*/[x.. + NX*] n 1=1 1 2-4 Maximum L i k e l i h o o d Estimators I n t h l s s e c t i o n we w i l l see t h a t the d e r i v a t i o n of the maximum l i k e l i h o o d e stimators c l o s e l y p a r a l l e l s t h a t f o r the Poisson-multinomial d i s t r i b u t i o n . L e t us define 8, x^, and x ^ a as at the end of § 1.3. Then we may def i n e the l i k e l i h o o d . f u n c t i o n , L, as i n (1-4.1) and o b t a i n (1-4.3). For convenience we w i l l r e c o r d t h i s set of equations again 47 B log L = £ I _ A_ P-(x a) = 0 (2-4.1) a =1 P x ( x a > s p i Here, of course, denotes the Poisson-negative m u l t i n o m i a l r a t h e r than the Poisson-multinomial d e n s i t y . I t i s p o s s i b l e t o f i n d the d e r i v a t i v e s of P^(x) by d i f f e r e n t i a t i n g (2-1.2) a s , £ ,z (x, + ...+x„ ,+ H z - l ) l s p i zto 2 1 V-'.^.x^Nz-l)! n-1 'TTV° ( V p i - N z / p n ) 3=1 Consider the f o l l o w i n g i d e n t i t y n-1 n-1 z = (l/N)[(Wz + £ x j ) - £ X j ] (2-4.2) 3=1 3=1 I f we use t h i s i d e n t i t y t o s u b s t i t u t e f o r the l a s t z i n the above equation, then P-(x) - ( x . ^ ) P-(x) - e" x V n-1 x n-1 n-1 3=1 k=l k=l n-1 = [ x ^ + (1/P n)y x ^ P x ( x ) k=l 48 _ ( x i + 1 > e ~ X y £ {TV-...+xn_1+Vz)l p N z ^ n-1 ^ P i p n ' z=o z l x 1 : . . . x n _ 1 ! ( x i + l ) ( N z - l ) ! n ? i T T 3 =1 C l e a r l y t h i s reduces t o n-1 a - = + d/Pn) I ^ ] F S ( i ) - J £ P s ( X + ^ ) op^ K=X P n p i (2-4.3) How l e t us d i f f e r e n t i a t e (2-1.2) w i t h respect t o X. i . P.<*) = e" X y « £ i ( y - ^ i - i ^ ) ' p Nz 53 x d 9X X z ^ z! x 1 ! . . . x n _ 1 ! ( N z - l ) l n J i. ^ 3=1 " P x ( x ) The use of (2-4.2) t o s u b s t i t u t e f o r the f i r s t z i n the numerator of the above expression r e s u l t s i n = (e'VHx) I ^ [(Hz + £ x ± ) - £ x±]-± - f i l l „ _ z! i _ i i = i x.^ .. . x ^ ^ N z - l ) ' z=o 'n* IT Pj^ - 3=1 Therefore we s i m p l i f y t o W m j = 1 m=l, 2, n-1 (2-4.4) 49 Equations (2-4.3) and (2-4.4) w i l l h o l d f o r each o b s e r v a t i o n , i . e . when x = ~x a* ^ = xma, a = 1, 2, 6. With t h i s i n mind we may s u b s t i t u t e (2-4.4) i n t o the top equation of (2-4.1) t o o b t a i n V P*(y°m> . V t l + (I/Bx/V 1 - ] = o (2-4.5) Using the same i d e a we s u b s t i t u t e (2-4.3) i n t o the bottom equation of (2-4.1) a=l k=l a=l ^ ± I t i s p o s s i b l e t o replace the l a s t sum by s u b s t i t u t i n g from (2-4.5) and then s i m p l i f y i n g the n o t a t i o n by means of (1-3.11) to get x i y p i + x../p n - (Nx/£n)(B+x. ./NX) = 0. S o l v i n g f o r p^, t h i s becomes P ± = P n x i / l d i = 1, 2, n-1 (2-4.6) I f we now m u l t i p l y t h i s equation by Nx and add f o r 1=1, 2, n-1, we w i l l o b t a i n n-1 n-1 SIX P ± - F X ( l - P n ) = $n I * i . =^ nx.. i = l i = l Hence p n = NX/(NX + x..) (2-4.7) and upon s u b s t i t u t i o n of t h i s i n t o (2-4.6) we have the estimator f o r p ±. p ± = x± /(NX + x.. ) i = 1, 2, ..., n-1 (2-4.8) 50 I t s t i l l remains t o f i n d the estimator f o r X, tha t i s , X. As i s the ease w i t h the Poisson-multinomial d i s t r i b u t i o n , i t i s almost impossible t o solve f o r X d i r e c t l y , and hence we must use a numerical method. The f o l l o w i n g c a l c u l a t i o n i s based on Newton's formula which i s given by (1-4.12). W r i t i n g i t again, where t(t) = 0. (2-4.10) I f we use (2-4.9) t o s u b s t i t u t e f o r p m i n (2-4.5), the l a t t e r equation reduces t o an express i o n which i s a candidate f o r f ( X ) since i t s a t i s f i e s (1-4.13), i . e . The f i n a l step i n our procedure i s t o f i n d D ^ f ( l ) f o r s u b s t i t u t i o n i n t o (2-4.9). D i f f e r e n t i a t i n g (2-4.10) w i t h respect t o 1 gi v e s a=l *x v x a ' - P * < V V ^ P*<*a> ] (2-4.12) But we know m=l B pm B A l and from (2-4.7), Ik p. = - x, /(Nt + x . . ) 2 (2-4.14) A. J- 1 • 5 1 Now l e t us s u b s t i t u t e f o r the d e r i v a t i v e s i n ( 2 - 4 . 1 2 ) . Using ( 2 - 4 . 1 3 ) t o s u b s t i t u t e f o r the p a r t i a l s w i t h respect t o and 'i r e s p e c t i v e l y n - 1 n - 1 3=1 ^ i = l Next, s u b s t i t u t i n g f o r p n and p^ u s i n g ( 2 - 4 . 7 ) and ( 2 - 4 ^ 8 ^ r e s p e c t i v e l y , and u s i n g ( 1 - 3 . 1 1 ) t o repla c e the expression £ x$a9 we f i n a l l y have J = 1 n - 1 + 1 ] P x ( ^ a ) + < » ^ ) ( S s £ ) P x ( x o + e m ) ( 2 - 4 . 1 5 ) xm. ™x If we now repla c e x*a by x^+e^ and hence x m a by x t a a+^» w e o b t a i n n - 1 3 = 1 - [ ^ ( 1 + 1 / N X ) + 1 ] P x ( x a + e m ) + [ 1 / x + N X + X - + 1 . - ^ ~ ] . P ^ ( x a + 2 e m ) ( 2 - 4 . 1 6 ) If we know the r i t e r a t e d values of X, P l, P n_jL J t t i e 52 s u b s t i t u t i o n of (2-4.15) and (2-4.16) i n t o (2-4.12) and the r e s u l t i n t o (2-4.9) y i e l d s Then the ( r + l ) s t i t e r a t e d values of ..., p n may he found from (2-4.7) and (2-4.8). One suggestion f o r i n i t i a l estimates of \, p^, Pn_-j_ i s the moment e s t i m a t o r s . 2-5 Covariance M a t r i x of the Maximum L i k e l i h o o d Estimators  A. Method of C a l c u l a t i o n As w i t h the Poisson-multinomial d i s t r i b u t i o n , d i r e c t c a l c u l a t i o n of -Q i s n e a r l y i m p o s s i b l e . We wish t o show however, th a t i t may be c a l c u l a t e d i n d i r e c t l y by the same method as i n § 1-5A, I.e. u s i n g Rao's theorem which i s s t a t e d i n t h a t s e c t i o n . To prove t h i s d i s t r i b u t i o n s a t i s f i e s Rao's theorem, the same procedure as i n § 1-5A i s f o l l o w e d . From the remarks at the beginning of 1-3 we conclude t h a t the f a c t o r i a l moment generating f u n c t i o n i s given by (2-2.6) where s i s set equal t o 1 i n s t e a d of 0. Since 1 - ^  P i = P n > 0 , i t i s c l e a r t h a t ^ i = l g*("s) i s i n f i n i t e l y d i f f e r e n t i a b l e . Hence a l l the f a c t o r i a l moments are f i n i t e . Because each moment about the o r i g i n i s a f i n i t e l i n e a r combination of f a c t o r i a l moments, these moments are a l s o f i n i t e . Lemma 2-1. For the Poisson-negative m u l t i n o m i a l d i s t r i b u t i o n , n-1 n-1 1 1=1 1=1 53 Proof: Using the convention defined i n (2-1.3), P->(x+S ) = e'x V l ! ( X 1 + -' ' + X n - 1 + H z - 1 ) ! ( V " - - + X n - 1 + N z )  X m z l l z l ^'•.•.xn.1!(NZ-l)« x ^ l n " 1 x, p 1=1 n-1 ( fi . - i r« r (^i + '-'+X- T+HZ-1)! ) V x, P->(x) + We-X y ( x B / a I ) — — ( i = l 1 Z = l x 1 ! . . . x n _ 1 ! ( N Z - l ) : V 1 ! ' P n N Z ( T T P ^ 1 ) z ] (2-5.D 1=1 ) Let us observe t h a t f o r z _> 2 ( x ^ . . )J x 1 ! . . . X j ^ ! (Nz-1 )l (x x+.. .+x n_ 1+Nz-N-l)! _ J L (x x+.. .+x n_ 1+U Z-k) x 1 ! . . . x n _ 1 ! ( N Z - N - l ) l J J_ (Nz-k) n-1 < (^I+...+^I.I+NZ-N-I)I / r x ± + 1\N - x 1 ! . . . x n _ 1 J ( N Z - N - l ) ! { £ l We may use the above i n e q u a l i t y t o s u b s t i t u t e f o r the f a c t o r i a l s i n (2-5.1) f o r z=2, 3, ... . Thus P- f n ^ . (x., + . . .+X„ .,+N-l). P x ^ m ) < — 1 x i P 5 ( x ) + N\e - X - 1 S l l - V - M i t i ^ . . . . x ^ K N - l ) ! 54 i = l 1=1 z ^ ( z - l ) ! x 1 ! . . . X n . 1 ! ( W - l ) ! n-1 -) l ( " i , " n " » i ^ i i = i The second t e r n i n the "braces i s the "zasl" term i n the expansion of P-»(x). The t h i r d term i n the braces i s equal t o P^>(x) minus the "z«o" term i n i t s expansion, a f a c t which i s easily- seen i f we re p l a c e z by z-1. Since each term i n the expansion of P^(x) i s non-negative, we can conclude px<x+v) < i f i * i * * ( * > + ^ V 1 ( i = i n-1 + *n* ( 1 X i + 1 ) N N x PxWJ 1=1 I f we now note t h a t e~ x, p m , p n are p o s i t i v e and l e s s than one and x i J> 0 f o r a l l i , we can w r i t e n-1 n-1 v P x(3c+e m) <[ £ x i +HX [1 + ( Y X j + l f ] | k 1=1 i = l ' Q.E.D. Lemma 2-2. For the Poisson-negative m u l t i n o m i a l d i s t r i b u t i o n , E( |a/5X ( l o g L ) l 5 ) < ». 55 Proof; By the same argument as i n lemma 1-3, we are ahle t o o b t a i n (1-5-3) i . e . E(|£_ l o g L p ) < B^E (| -5 p ) ax P^ OO S u b s t i t u t i n g from (2-4.3) f o r the d e r i v a t i v e A f t e r r e p l a c i n g the absolute value of the sum by the sum of the absolute values and u s i n g the r e s u l t of lemma 2-1, we have E ( l o g L p ) < B 3 E ( I Y V N U l +( Y x i + 1 ) N } f A X \ N X P M ( 1=1 1=1 J n-1 5 + (1/Nx) Y 3=1 Upon expansion t h i s w i l l be a f i n i t e sum of terms of the f o l l o w i n g type - n-1 constant . E ( x j ^) 3=1 i (2-5.2) where the n^ are non-negative Integers These terms are a l l f:Lnite since we know a l l the moments are f i n i t e . Hence the r e s u l t f o l l o w s . Q.E.D. 56 Lemma 2-3- For the Poisson-negative m u l t i n o m i a l d i s t r i b u t i o n , E(WhV± ( l o g L ) p ) < « f o r i = 1, 2, ..., n-1. Proof; By the same argument as i n lemma 1-3, hut r e p l a c i n g X by P i we get (1-5.3) w i t h p^ ^ r e p l a c i n g x. Thus ' * -z % d/9p* [ P ^ ( x ) ] , E ( | l - l o g L p ) < B?E(] 1 x V P ) 3P ± P$(x) Le t us s u b s t i t u t e from (2-4.2) f o r the d e r i v a t i v e . rr nZ^ x.+l B-*(x+e. ) , < ^ , X i / P i + ( 1 / P n ) y ^ . ^ _ - | i - ^ p ) k=l p n p i p 3 ( x ) By the manipu l a t i o n of absolute value s i g n and use of lemma 2-1, we a r r i v e a t n-1 E( |A_ l o g L | 5 ) < B 5E ( x±/v± + ( l / p n ) £ ^ 9 p i k=l p n P ± ( 3=1 3=1 ; 1 I f t h i s expression i s expanded, a sum of terms l i k e those i n (2-5.2) i s obtained and by the same reasoning as t h e r e , the r e s u l t i s obtained. Q.E.D. 57 I f we consider Rao's theorem which i s s t a t e d near the end of §1-5A, we n o t i c e t h a t lemmas 2-2 and 2-3 show t h a t the Poisson- negative m u l t i n o m i a l d i s t r i b u t i o n s a t i s f i e s i t s c o n d i t i o n s by choosing h = 1. Hence (1-5-5) w i l l h o l d , and f o r samples of reasonable s i z e we may use the approximation where J * = I * I * XPT 1 * 1 * Pn X p ^ p n - l X P«_iPi ' n - l ^ l (2-5-3) I * ] X p n - 1 p l p n - l I * p n - l p n - l i s the i n f o r m a t i o n m a t r i x , i . e . (2-5.4) I s t * = E ( l o g L • ^ ~ l o g L ) (2-5.5) as at where L i s the l i k e l i h o o d f u n c t i o n f o r the Poisson-negative m u l t i n o m i a l d i s t r i b u t i o n . B. C a l c u l a t i o n of the Elements of J * . Before proceeding w i t h the c a l c u l a t i o n , l e t us f i r s t prove the f o l l o w i n g lemma which w i l l be of use t o s i m p l i f y the n o t a t i o n . 58 Lemma 2-4. S * (x + l ) ( x + l ) P x ( x + e JP-Cx+e ) Define A. . = -1 + ) ... } — i J,,, » x — - i - xl=° ^ - 1 = ° p i p j Px-W (2-5.6) Then A ^ = A ^ = A, say i,j,k,m = 1, 2, ..., n-1 Proof; The p r o o f i s i d e n t i c a l t o t h a t of lemma 1-5 except t h a t i n p l a c e o f equation (1-4.7) we r e f e r t o (2-4.4). We must a l s o note t h a t P^ now r e f e r s t o the p r o b a b i l i t y f u n c t i o n of the Polsson-negative m u l t i n o m i a l whereas i n lemma 1-5 i t r e f e r r e d t o tha t of the Poisson-multinomial. Q.E.D. Consider I x x * . I f we s u b s t i t u t e (1-4.1) i n t o (2-5.5) and use the same argument as was used f o r o b t a i n i n g (1 -5.1$), we get a formula f o r I x x * which i s e x a c t l y the same as t h a t f o r 1^^ i n the l a s t mentioned equation. Now s u b s t i t u t e f o r the d e r i v a t i v e u s i n g (2-4.H). Then (1/P ) I . * - f ... f — ( P-(x+eV ) A« v L ^ P^ (x) ] H\p x m 1 ^ t l - 1 n-1 n - i i - [1 + (1/HX) I x±] P x(5c) i = l j Expansion of t h i s expression u s i n g the d e f i n i t i o n of ex p e c t a t i o n and lemma 2-4 y i e l d s n-1 2 ( 1 / P f c u * = A + 2 + (1/N 2X 2) E [ ( Y X±) ] 1=1 n-1 - (2/N 2X 2p m) E[( Y X ± - l ) X m ] - (2/NXpm) E f X j 1=1 n-1 + (2/NX) Y E ( x i ) L e t us make use of equations (2-3.; ) t o s u b s t i t u t e f o r the e x p e c t a t i o n s . A f t e r s i m p l i f i c a t i o n , we f i n d ( 1 / 6 ) I X X * - A - ( l - p n ) [ H ( X + l ) ( l + P n ) + 1] / HXP n 2 (2-5.7) S i m i l a r l y we may o b t a i n the other e n t r i e s of the i n f o r m a t i o n m a t r i x P j x \ NXA/p n + [(NX+N+l)p n^ - l / p n - 1X]/P n (l/p)I p * = N2X2A/pn2 + (NX/pn2)[-(Srx+N+l)/pn2 i + l/p n + NX + 1] f o r 1 4 J (1/P frplP * = K2X2A/pn2 + (NX/pn2 )[ - (HX+H+1 )/pn2 + 1/Pn + NX+ 1] + HX/pnPj_ (2-5.8) The c a l c u l a t i o n s of the above r e s u l t s are o u t l i n e i n Appendix 2B. Now l e t us d e f i n e 60 Bxx* = I * XX BxP* - (1/p) IXP 1 BPP* - (1/p) I p i p 3 4 i (2-5-9) Hence B * + Nx/p„p, pp n I = (1/P) I * (2-5.10) J * m B Bxx* BXP* B. * Xp Xp B *+N\/p P., B * PP XP B * PP B *+NX/p P , pp T A / p n F n - l Now, from (2-5.3) we have t h a t f\ - J - 1 , and u s i n g Appendix ID, formulas (1D-1) and (1D-6) w i t h the appropriate a s s o c i a t i o n o f v a r i a b l e s , n-1 det J* > B u * + (B u .B p p * - B,p.2)pn(l-Pn)/Hx} f t (-»-) i = l p n p i (2-5.11) 61 ! NX + B p p * P n ^ n ) var X sr — — ^ " P BXX # N X +( BXX* Bpp*- BXp*'' i ^ n t ^ P n ) B * p p„ * 0 BXX* N X +( BXX* BPP*- BXP* 2to B(i-p n) c o , , ^ , ^ , . - 1 ^ X x ' V - ^ X P ^ n V , S [ B u * H X + ( B x x * B p p * - B x p . 2 ) p n ( l - P n ) ] H X ~ l ( , ? n p i ^ X X ^ P P * - ^ * 2 ^ ! 2 var p. = £1 [:^--- ——=^ ^- ^—= 61 NX C B u * N X + ( B x x * B p p * - B X p * 2 ) p n ( l - . p n ) ] N X ( (2-5.12) C o r o l l a r i e s 2.1 and 2.2 i n Rao [19^7] s t a t e t h a t i f the d i s t r i b u t i o n s a t i s f i e s lemmas 2-2 and 2-3, then the maximum l i k e l i  hood estimators are minimum variance estimators f o r l a r g e samples and i n terms of the g e n e r a l i z e d v a r i a n c e , det f l * , are asym- t o t i c a l l y e f f i c i e n t . 2-6 E f f i c i e n c y of Method of Moments f A. Method of C a l c u l a t i o n The method used i s i d e n t i c a l t o the one described i n 1-6A. To d i s t i n g u i s h c e r t a i n q u a n t i t i e s such as the Information m a t r i x , covarianee m a t r i x , e t c . f o r the d i s t r i b u t i o n now under consider a t i o n from those f o r the dis-tribution described i n chapter 1, s u p e r s c r i p t s t a r s w i l l toe w r i t t e n a f t e r the symbols (e.g. fi *, J * , M*, e t c . ) . Thus the e f f i c i e n c y i s given by 62 E f f = (2-6.1) det n * det J * where PL * I s the covariance m a t r i x of the moment es t i m a t o r s . As I s shown i n Appendix I E , det A * = (det ( j * ) 2 det M* (2-6.2) where det (f* i s given by (1-6.6) and M* i s given by (1-6.4) except t h a t the q u a n t i t i e s now r e f e r t o the Poisson-negative m u l t i n o m i a l d i s t r i b u t i o n . B. C a l c u l a t i o n of det M* i n Terms of the Parameters We may repeat the argument t h a t l e d t o (1-6.7) f o r the present d i s t r i b u t i o n and^obtain an i d e n t i c a l r e s u l t , namely var X± = ( l / p ) [ E ( X ± 2 ) - E 2 ( X ± ) ] S u b s t i t u t i n g f o r the expectations u s i n g (2-3.3), we o b t a i n var X ± - ( l / P ) ( N X p i / p n ) [ ( p i / p n ) ( N + l ) + l ] (2-6.3) I f we repeat the argument t h a t l e d t o (1-6.9) we w i l l o b t a i n cov ( X ^ X j ) = ( l / p ) [ E ( X ± X j ) - E ( X ± ) E(X^)3 and s u b s t i t u t i o n from (2-3-3) y i e l d s c o v p C ^ ) = (l/p)N(N+l)X P i P j / P n 2 (2-6.4) S i m i l a r l y , r e p e t i t i o n of the arguments l e a d i n g t o (1-6.11) and (1-6.15) l e a d t o i d e n t i c a l equations f o r the Poisson-negative m u l t i n o m i a l d i s t r i b u t i o n . By s u b s t i t u t i n g (2-3.3) i n t o (1-6.11) 63 and (1-6.15) and u s i n g Appendices I F and 1G w i t h G ±* r e p l a c i n g Gj^ and P k / P n r e p l a c i n g p f c, we have r e s p e c t i v e l y cov (W 2,Y k) = (1/0)(N\ P l / p n ) H X * 1-P„ f 1-Pr, ") where H * = 2 I G 2* ( 2) + 3G 1* • p n I p n i 1-p^ _ NX[G * ( £) + i ] + 1 p n (2-6.5) and var = ( l / P ) [ N X ( l - p n ) / p n ] H 2 * l - p _ 3 l-p„ 2 1-p where H 2* = G^* ( 2) + 6G 2* ( 2) + 7 G l * ( S) r n * n 1-P„ l - p _ 2 + l-NX ( 2) [ G ] L* ( E) + i ] n n n (2-6.6) Now l e t us s t u h s t l t u t e (2-6.3) through (2-6.6) i n t o the expression f o r M* which i s given by (1-6.4). Then N X ( l - P n ) H * N2 NXP, n NXpn NXp. n-1 'n -RJ* n NXpn p, ±[_J: (N+l)+l] p n p n P l P2 H(N+1 )X-^-7y p. n p l p n - l N(N+1 )x-^-£ n NXp n-1 H * n PT P i N(N+1 )X 1 g n NXP W T P„ T _J^ll[^zi(H+l)+l] p n p n y (2-6.7) 64 The determinant of the above m a t r i x i s found i n Appendix 1H i f we re p l a c e H ±, N», and R by H ±*, (N+l)/p n, and NX/p n r e s p e c t i v e l y i n the r e s u l t g iven i n the appendix. Thus det M = (NX/BpJ ( l ^ K f l P t J l H ^ C l + C l ^ K N + l J / p J i = l \ * 2 \ (2-6.8) C. Determination of the Jacoblan, det 4* The expression f o r (}* i s the same as the one given f o r (J i n (1-6.21) w i t h the exception t h a t the q u a n t i t i e s r e f e r now to the Poisson-negative m u l t i n o m i a l d i s t r i b u t i o n . The p a r t i a l d e r i v a t i v e s i n (1-6.21) can be obtained i n a s t r a i g h t f o r w a r d manner from (2-3.9) and (2-3.10). Hence, by d i f f e r e n t i a t i n g , ,2, ax _ N+l E c^¥) 9E(¥5) H [E(W2)-E2(¥)-E(¥)]2 aX m N+l . E(W)[2E(W2)-E(W) j aE(X±) N [E(W2)-E2(¥)-E(W)]2 ap± (N+l)E(X±) aE(¥2) [E(W 2 ) -E 2(¥)+NE(¥)] 2 ap* E(X . ) r o o <? 1 2 ^-^T 2 -E(¥^)[E(¥^)-E^(¥)+NE(¥)] 3E(Xj) E^(¥)[E(¥^)-E^(¥)+NE(¥)] + E(¥)[E(¥2)-E(¥)][E(¥)-N] - E5(W)| i f i=fj 3P4 E(X. ) ( o o o 1 - ^ - i - ^ 2 ) -E(¥^)[E(¥^)-E^(¥)+NE(¥)] aE(X±) E';(¥)[E(¥'::)-E';(¥)+NE(¥)]i 65 + E ( W ) [ E ( W 2 ) - E ( W ) ] [ E ( W ) - N] - + E ( W 2 ) - E 2 ( W ) - E ( W ) E ( W ) [ E ( W 2 ) - E 2 ( W ) + N E ( W ) ] E 5 ( W ) | Let us use (2-3.3), (2-3.6), and (2-3.7) t o s u b s t i t u t e f o r the expectations and then s i m p l i f y the r e s u l t s t o o b t a i n p n S E ( W ^ ) N(N+l)(l-p n) 2 ax FPn 9E ( X I ) N(N+l)(l-p n) 2 P i p n 3 aE(w 2) N(H+l)x(l-p n) 2 ap± Dp ±P n aE(xd) N(N+l)x(l-P n) 2 aPi ^ i P n B E ( X 1 ) N(N+!)X(1-Pn)2 f o r i4j IB. NX (2-6.9) where P and D are de f i n e d by P = 2(l-p n)[N(x+l)+l] + p n D = - N(l-p 2 ) - 1 - - i - [ 2 N 2 X p n ( l - P n ) n N+l n n - N 2 X 2 ( l - P n ) 2 + N X ( l - P n 2 ) ] (2-6.10) We may now s u b s t i t u t e these values i n t o (1-6.21) and o b t a i n an e x p l i c i t e xpression f o r G;*. The determinant of (f* i s c a l c -66 u l a t e d i n Appendix 2C i f we replace Q i n the appendix by N ( N + l ) ( l - p n ) 2 / p n . The r e s u l t i s , + ri* P n n + 1 t ( N + 1 ) ( 1 - P n ) + D + F P n ] det Q> = - -2 _ £ Cp-6 11) Wow we are able t o s u b s t i t u t e (2-6.8) and (2-6.11) i n t o (2-6.2) t o get aet A. . C + 2^(« +i)(i-P n)^ a1 2 r _ 8 ( N + i r N ( N X ) n - - L ( l - p n ) V 1 2 1 n-1 (2-6.12) + ( l - P n ) H 2 * ( N + l ) / p n | 7 j " P i ) 1=1 By the same argument t h a t l e d t o (1-6.2), i t i s easy t o see tha t 1 E f f = d e t i \ * det J * I f we repla c e the determinants by t h e i r e x p l i c i t -expressions given i n (2-5.11) and (2-6.12), we f i n d t h a t N 2 ( U + l ) 4 ( l - p )5 E f f = — ^ — — p n ^ X X ^ P n ^ P n X BXX* Bpp*- BXp* >] (2-6.13) A . 1 [ (N+l ) ( l - p n )+D+Fp n]* [ (H^-B^*'"' ) p n + ( l - P n )(N+1 )H 2* ] 2-7 Sample Zero Frequency and U n i t Sample Frequency Estimators A. Sample Zero Frequency and F i r s t Moments This type of e s t i m a t i o n i s u s e f u l under the same c o n d i t i o n s as o u t l i n e d i n ^ 1-7A, i . e . the zero sample occurs f a i r l y f r e q u -67 e n t l y . S e t t i n g X = 0 i n (2-2.10), we o b t a i n P^(0) = exp [ X ( p n N - l ) ] (2-7.1) I f we d e f i n e P ( a ) i n the same manner as i n ^ 1-7A, then E [ | F ( 3 ) ] = P-("C) = e x p [ x ( p n N - l ) ] and hence ( l / p ) P ( 0 ) i s an unbiased estimator f o r P_*(6). Thus, t o o b t a i n the sample zero frequency estimators X and p^ of X and p^, we solve the equation (1/B )F(0) = exp [T(p ^-1)3 along w i t h the equations of (2-3.3) which i n v o l v e the f i r s t moments. Hence we solve the set (1 / P)P(0) = e x p t ^ p Z - l ) ] T± = EX V ± / v n i = 1, 2, n-1 (2-7.2) f o r X and p^ . To do t h i s we add the second equation of (2-7.2) f o r 1 = 1, 2, , n-1. Then we have i = l NX and hence p n = n-1 (2-7.3) + NX i = l We may now s u b s t i t u t e t h i s q u a n t i t y i n t o the l o g a r i t h m of the f i r s t equation of (2-7.2) and f i n d N B / N X \ n-1 y x,+NX \ i = l 1 I - 1 68 -We must now solve f o r x\ I t i s best t o use some numerical procedure. Once having done t h i s we may f i n d p n from (2-7-3) and f i n a l l y p^ from (2-7.2). B. U n i t Sample Frequency E s t i m a t i o n For cases where the u n i t samples occur f r e q u e n t l y , the u n i t sample estimators are sometimes u s e f u l . From (2-2.7) P x ( r k ) = i i K H BS k = exp [ x ( p n N - l ) ] N p n N p k X (2-7.4). s=o Now E[ (1/6 )F(e* k)] = P - ( e k ) and thus the u n i t frequency estimator i s unbiased f o r p ^ ( e k ) . To f i n d the u n i t sample estimators x and p^ f o r x and p^ r e s p e c t i v e l y , we must solve (l/p)F(eT k) = e x p [ x ( ? n H - l ) ] ^ n \ X (2-7.5) together w i t h the zero sample estimator (1/6)F(0) = exp [ x ( p n K - l ) ] (2-7.6) A f t e r d i v i d i n g (2-7-5) by (2-7-5) w i t h k = l , we get P(®v) P k (2-7-7) F ( % ) P X A l s o , a f t e r d i v i d i n g (2-7-5) w i t h k=l by the f i r s t equation of (2-7.2) and n o t i n g the d e f i n i t i o n of p n n-1 N F ^ J / F ^ ) = H ( l - £ P k ) P i * (2-7.8) k=l 69 I f we now take the l o g a r i t h m of (2-7.6) and s u b s t i t u t e f o r the p k from (2-7.7) ^ n-1 l l o g l & l + 1 = [1 - - 4 - Y F(&_)] (2-7.9) X P p < e l > M L L e t us d i v i d e (2-7.8) by X and s u b s t i t u t e f o r p k , k = 2, 3, . . n - 1 from (2-7-7) 1 / X = J ~ _ [ 1 - — L _ V P ( e k ) ] (2-7.10) F ( e x ) F ( e - ) k ^ * We may now s u b s t i t u t e t h i s i n t o (2-7.9) and o b t a i n p,NF(0) - p n nZ^ ^ N 1 v [1 - — k - V F(e, )] l o g l i ^ i + l F ( e l > F ( e l > k=l k 6 P i V " - N = Cl - — 7 F(e )] F ( e l ) k = l k T h i s may be r e w r i t t e n as P i ' ^ N PiNF(O) t?(n) [1 - — ± — 7 F ( e k ) ] [ 1 V l o g li21 - 1] + 1 = 0 (2-7.11) Probably the best way t o solve t h i s f o r p^ i s t o use v a s u i t a b l e numerical procedure. A f t e r doing t h i s , X may be obtained from (2-7.10) and p k , k=2, 3, n-1 from (2-7-7). 70 CHAPTER I I I LIMITING DISTRIBUTIONS OF THE POISSOSi-MULTINOMIAL AND POISSON-NEGATIVE MULTINOMIAL DISTRIBUTIONS 3-1 I n t r o d u c t i o n I n many a p p l i c a t i o n s c e r t a i n parameters may "be known already t o "be very l a r g e , t o be a p a r t i c u l a r value, or t o be almost n e g l i g a b l e . U s u a l l y , I f circumstances permit, i t i s much e a s i e r t o consider the l i m i t i n g d i s t r i b u t i o n s as the parameters approach t h e i r r e s p e c t i v e l i m i t s . I f a p a r t i c u l a r p^ i s allowed t o approach zero i n e i t h e r of the above d i s t r i b u t i o n s , %fee- form e£ %fee d i o t r i b u t l o n c , the form of the d i s t r i b u t i o n w i l l remain unchanged except t h a t the I* 1 v a r i a t e w i l l be completely ignored. 3-2 The Poisson-Polsson D i s t r i b u t i o n The most i n t e r e s t i n g l i m i t i n g d i s t r i b u t i o n i s the one i n which N -» « and p^-» 0 f o r i = l , 2, n-1 ±n such a way th a t Np^=a^ = constant. Then, by u s i n g the f a c t t h a t n-1 we have from (1-2.1) 1=1 n-1 N ' a±( s i - l )+l] -X 1=1 I f we now apply the mathematical i d e n t i t y l i m (1+Q/N) = l i m (1+l/ST) (3-2.1) N* oo N->oo 71 we w i l l have g L ( s ) = exp 1 X exp [ £ a 1 ( s 1 - l ) ] - x| 1=1 I f , i n s t e a d , we s t a r t from (2-2.6), we get P g * ( s ) = l i r a g*(s) = exp ( X l i m (3-2.2) n nT[ 1 " I s i ? i 1=1 By u s i n g the d e f i n i t i o n of the and a f a m i l i a r theorem about l i m i t s , = exp f X l i m [1-(1/N) V a ± ] l i m [1-(1/N) ^ a ^ ] - Xf F i n a l l y , l e t us "apply i d e n t i t y (3-2.1) to tne above l i m i t s . A f t e r s l i g h t s i m p l i f i c a t i o n we get n-1 S L * ( s ) = exp i x exp[ £ a i ( s i - l ) ] - x| 1=1 (3-2.3) Equations (3-2.2) and (3-2.3) show t h a t S L ( s ) = g L * ( s ) . Hence both the Poissson-multinomial and Poisson-negative multinomial have the same l i m i t i n g d i s t r i b u t i o n . From the form of the p r o b a b i l i t y generating f u n c t i o n we see tha t t h i s i s a Poisson- m u l t i v a r i a t e P o i s s o n , or the m u l t i v a r i a t e analogue of the Neyman Type A d i s t r i b u t i o n . From what has been done i n cnapters I and I I i t i s , f o r the most p a r t , an easy matter to o b t a i n the same r e s u l t s f o r the l i m i t i n g d i s t r i b u t i o n as we obtained f o r the two previous d i s t r i  b u t i o n s . We simply a l l o w the parameters to approach t h e i r l i m i t s i n the formulas which give the q u a n t i t i e s we wisn to f i n d . The moments of the d i s t r i b u t i o n may be found from (1-3.3), 72 E(Xj.) = l i m NX p ± = X a ± , or from (2-3.3), N-* E ( X ± ) = l i m N X p i / p n = X a± S i m i l a r l y , E(X±2) = x ( X + l ) o ± 2 + Xa ± ^ (3-2.4) E ( X ± X j ) = X(X+l)a j.a^ Here we may consider e s t i m a t i n g the parameters. We must he aware t h a t these are now X, a,, — , CL, n . From e i t h e r x n—x (1-3.9) and (1-3.10), or (2-3-9) and (2-3-10), we can de r i v e the moment estimators X = ^ E(W 2) .2/ E(W*)-E*(W)-E(W) o o t (3-2.5) E(X. )[E(W*)-E*(W)-E(W)] ' a. = l i m Np. - — — — — — 1 N^ co 1 E^W) A l s o , the maximum l i k e l i h o o d e stimators f o r the may he found from (1-4.11) or (2-4.8) by t a k i n g the l i m i t of Hp, as N^oo. Thus we have The maximum l i k e l i h o o d estimator f o r X i s given by e i t h e r (1-4.14) or (2-4.11) since both of these remain unchanged as N-^oo. We must r e a l i z e , however, thax now r e f e r s t o the -ft d e n s i t y of the l i m i t i n g d i s t r i b u t i o n . We s h a l l henceforth denote the l i m i t i n g d e n s i t y by . 3-3 The Information M a t r i x Prom e i t h e r (1-5.12) or (2-5.6) I t i s c l e a r that i n the l i m i t i n g case • * (x±+l)(x +l)lx(x+e±)Zt(x+e ) A = -1 + ) . . . ) * • »— L e t J denote the i n f o r m a t i o n m a t r i x and I ^  denote i t s e n t r i e s . Then I . . = l i m I = l i m I * Prom (1-5.16) and (1-5.18), or (2-5-7) and (2-5-9) (1/P) I = l i m B.. = l i m B * = A (3-3-1) '~ A A _ A. A. TVT v „ A A I n c a l c u l a t i n g the other e n t r i e s we must he c a r e f u l since corresponds t o Np^ ^ r a t h e r than p^. Thus from the d e f i n i  t i o n of the i n f o r m a t i o n m a t r i x , (1-5.2) xal Brf (x) aX 9 a i x mmmmJ\. = l i m E [ — , i r - P-(x) — 2 P^(x)3 N * * P-"(X) ax a(N P j L) x = l i m (1/N) E[ % 1- P-(*)£- P - ( x ) ] N * « P - d{x) ax = l i m ( I _ /N) = l i m ( I . _ */N) Now from (1-5.17) and (I-5.18), or (2-5-8) and (2-5-9) ( 1 / 3 ) 1 = l i m (B /I) = l i m (B */N) = -XA+1 ~ x a i &+• A p W-^oo x p (3-3 .2) S i m i l a r l y , u s i n g the same equations, ( 1 / p £ a a " < 1 / F 3 ) L L M D A 2 ) = ( V 3 ) l i m (1L » / N 2 ) ^ i 3 i>« p i p j N->» p i p j = l i m (B/N 2) = l i m ( B * / N 2 ) PP PP = X(\A - 1 ) ( 1 / 3 = X ( X A - l + l / a i ) A l s o , Hence we may summarize J as J = -XX -Xp. I I i x p 1 % l P l I x p n - 1 ~ p l p n - l I XP, n-1 " p l p n - l ~ p n - l p n - l ( 3 - 3 . 3 ) ( 3 - 3 . 4 ) where ( 1 / 3 ) 1 2xx - A = B. XP, -XP ( 1 / 3 ) 1 ^ . , = 1 l p i p 3 •PP - XA+1 X(XA-l) i f 143 ( l / p ) I p = B p p + \/a± = x(xA-l+l/ a i) (3-3-5) 75 3-4 E f f i c i e n c y of Method of Moments By u s i n g the same arguments as are used t o o b t a i n (1-6.2) and (1-6.5)> we can show t h a t E f f « J: (3-4.1) (det (j) det M det J where M i s the covariance m a t r i x of the moment estimators of \, a^, c&n_x and det (f i s the Jacobian 9[E(¥if),E(X1),...,E(Xn_1)3 F i r s t l e t us consider M. M may be found by t a k i n g l i m i t s i n e i t h e r (1-6.19) or (2-6.7). This r e s u l t s i n cov O ^ X j ) = (1/3) \a±a.j f o r ±4j \ var X± = (1/3) Xa i(a i+1) _ ' (3-4.2) cov (¥ 2,X i) = (1/3) X a ^ var W2 = (1/3) Xa H 2 where n-1 a = Z aJ "1 i = l 76 Si l i m EL = N->« 1 l i m H 1* = (2X+l)a + (4x+3)a + 1 l i m H 2 = l i m H 2* (3-4.3) N-»o N->< a3(4x2+6x+l )+2a2(2x2+3x+3)+a(6X+7)+l Because each entry of M and M* converges t o the corresponding entry of M as N->», det M = l i m det = l i m det M* (3-4.4) Now consider det 9 [ X $ ct-i $ • • • 3 Q5 •! 3 det <| = l i m x n"-L N->» a[E(W^ ),E(X X ) s . . . ,2(3^^)3 = l i m N->» _a_x 3E(W ) q " 9E(¥c:) aa 'n-1 ax aE(xx) aa 'n-1 aE(tr) aE(xx) acu aE(x1) aE(x2) ax ao^ aa. n-1 9 E ( X n - l ) By v i r t u e of (1-6.21) and the f a c t t h a t a i=Np i, 77 det (£ = l i m N 1 1 det (| = l i m i f 1 " 1 det ^ (3-4.5) From ^ 3-3 we see t h a t det J = l i m N»» XX I XP A n-1 "P 1P 1 A ' P l p n - 1 i.„ A 7 i„ „ A ' X pn - 1 P n - l p l ^ n - l P n - i l i m H " 2 ( n _ 1 ) det J (3-4.6) The same r e l a t i o n holds i f we consider J * . Now i f we s u b s t i t u t e (3-4.4), (3-4.5) and (3-4.6) i n t o (3-4.1) we have E f f m 5 ( l i m N n _ 1 d e t (|) 2(lim det M) I N->» »••?>» . ( l i m j j - 2 ( n - l ) d e t J ) ) N*« ) = l i m j (det (J) det M det J N^ co ( = l i m ( E f f ) N-*» -1 (3-4.7) E x a c t l y the same r e s u l t w i l l h o ld i f we consider (}*, M*, and J * i n s t e a d of (f, M, and J. By t a k i n g l i m i t s e i t h e r i n (1-6.24) or In (2-6.13) and then s u b s t i t u t i n g f o r the B's from (3-3.5), 78 E f f = 2 — (3-4.8) [XA(a+l)-a][H 2(a+l J-H^ ] where and Hg are given by (3-4.3). This c a l c u l a t i o n i s o u t l i n e d i n Appendix 3A. 3-5 Sample Zero Frequency and F i r s t Moment Estimators From e i t h e r (1-7.1) or (2-7.1) we can deduce t h a t the p r o b a b i l i t y of the zero sample i s P-(tf) = l i m exp [ X ( p r N - l ) 3 = exp [ X ( l i m (p N ) -1)]. Because a = N ( l - p n ) , we have l i m p / = l i m ( l - a / N ) N = e" a (3-5-1) Thus P-(0) = exp [X(e" a-1)3 D e f i n i n g F ( a ) t o be the frequency w i t h which "t = a = ( a ^ , ..., a n - i ) occurs i n B obs e r v a t i o n s , we have E(|F(0)) = P x(0) = exp [X(e" a-1)3 (3-5-2) Thus (l/p)F(0) i s an unbiased estimator f o r P (0). The sample zero frequency estimators X and p^ f o r X and p^ r e s p e c t i v e l y may be found by s o l v i n g the above equations simultaneously w i t h the f i r s t moment equations of (3-2.4), i . e . 79 (1/6)F(0) = e x p [ t ( e - a ' - l ) ] (3-5.3) Adding the second equation of (3-5-3) f o r 1=1, 2, ..., n-1, n-1 n-1 1=1 i = l Thi s equation may he solved f o r a and used t o replace a i n the f i r s t equation of (3-5-3)- Hence (1/B)F(0) = exp X e x p [ ( - l / X ) £ T± - X \ 1 ^ i = l This may be solved f o r X u s i n g a s u i t a b l e numerical method. Once having done t h i s we can solve f o r the 3^  u s i n g the second equation of (3-5-2), i . e . - X^/T. 3-6 U n i t Sample Frequency E s t i m a t i o n 'Taking l i m i t s i n e i t h e r (1-7-7) or (2-7.4) w i t h the help of (3-5.1) y i e l d s P - ( e k ) = e x p [ x ( e ~ a - l ) ] a^Xe'** k = l , 2, ..., n-1 (3-6.1) Le t us note t h a t E [ ( l / 8 ) F ( x ) j = P - ( x ) , hence ( l / B ) F ( e k ) i s an unbiased estimator f o r £ x ( e ^ ) - Thus, i f we l e t X and be the u n i t sample estimators f o r X and r e s p e c t i v e l y , then we may solve (3-6.1) w i t h e k ) replaced, by i t s estimator, and (3-5.2) w i t h P^(0) replaced by i t s estimator f o r X and 80 c^, k = l , 2, ..., n-1, i . e . we must solve the system (1/P)P(0) = exp [x(e-«-l)] ( l / B ) P ( t k ) = exp Me-"-!)]^* ( (3-6.2) k = l , 2,..., n-1 Upon d i v i s i o n of the second equation hy the second equation w i t h k = l , we o b t a i n ^ 2 - °£ (5-6.,) and upon d i v i s i o n of the second equation by the f i r s t and s e t t i n g k = l , Ffe., ) , * = ct-i Xe" a (3-6.4) F(3) 1 By t a k i n g the l o g a r i t h m of the f i r s t equation of (3-6.2) and s u b s t i t u t i n g f o r the i n a = c ^ f ... + from (3-6.3) we o b t a i n (1/X) l o g - exp \ - - L - £ P ( e k ) | -1 (3-6.5) 6 1 F ( e l k ) ¥e may solve (3-6.4) f o r 1/X and then replace the o^ . i n a usi n g (3-6.3) t o o b t a i n v a,F(0) (- a, „ , 1/X = -± exp ) A- V P(e. 81 Let us now s u b s t i t u t e t h i s equation f o r 1/x i n (3-6.5). Then \hz& i o g H O I . 1 ) « p (. A . T F ( e k ) \ + i = o 1 p ( * i > s i 1 F ( g i ) k = i k j Because t h i s equation does not lend i t s e l f e a s i l y t o exact s o l u  t i o n , i t i s best t o t r y a numerical procedure t o f i n d a^. Then ctp, a , may be found from (3-6.3) and X from (3-6.4). 82 APPENDIX 1A OBTAINING AN EXPLICIT EXPRESSION FOR THE PROBABILITY FUNCTION We w i l l s t a r t w i t h the p r o b a b i l i t y generating f u n c t i o n given by (1-2.1). g ( t ) = e X ( T N - l ) n-1 where T = ^ s i p ± + p n 1=1 Then D k g ( s ) = Xg(s) D j j T ^ ) (1A-1) For s i m p l i c i t y l e t Xg(s) = A, D k(T^) = B. Then ( 1A-1) becomes \&(b) = AB*. To f i n d the higher order d e r i v a t i v e s , l e t us r e s o r t t o L e i b n i t z Rule which s t a t e s t h a t x n - l x n - l , f l T 3v r / X n - l x ^ y n-l„ ^ ^ - l ' ^ n - l , Dn-1 <*» - I ( y 1 1 ^ V l " " ^ ' D n - 1 yn-i-° n" Now, we may d i f f e r e n t i a t e termwlse w i t h respect t o s 2 9 1 1 ( 1 o b t a i n Use L e i b n i t z Rule on each term. yn-l =° yn-2=0 . r n n-2 y n - 2 n x n - l y n - l p l 83 The sums are independent of each other and consequently we may reverse t h e i r order 3?n""2 "^ n 1 r* f7 A ^ W ^ - I N „ yn-2T 4 y n - l y n-2=° y n - i = o n-2 'n-1 n " 2 V l D  x n-2" y n-2 D x n - l " y n - l E n-2 n-1 Continuing t h i s process we f i n d t h a t , a f t e r r e p l a c i n g A and B by the expressions they represent, x l x n - l yl=° yn-l=° . n y n - l n y l r , f f / - U i i x n - l " y n - l „ x l " y l n / J ^ n-1 [ x g ( s ) j D n - 1 ...D1 D k(T ; (1A-2) The remaining problem i s t o f i n d the above m u l t i p l e d e r i v a t i v e of T N. Not i c e D k ( T K ) = NT 5 1" 1 DfcT = NT 1" 1 p k Continuing, D x X l " y i ( \ T H ) = N p k D 1 X l " y i ( T H - 1 ) N-(x, -y, ) - l x..-y, = p k N ( N - l ) ( l - 2 ) ... [ N - ( x 1 - y 1 ) ] T X p i We may proceed i n the same manner through the d e r i v a t i v e s w i t h respect t o a l l the s^. The r e s u l t i n g expression i s 84 D ^ - l ^ n - l n x l - y l n-1 x _ y n-1 ^ pk ( T T P i 1 1 ) ^ ( N - 1 ) ... [N- £ ( x i - y i ) ] T i i i = l i = l n-1 n _ r ^ x i " y i ) _ i (1A-3) From the d e f i n i t i o n of a p r o b a b i l i t y generating f u n c t i o n , P x ^ k ) 5=1 ( V i ) T T V i = l s=0 a n d , s u b s t i t u t i n g from (lA.j-2), * n - l x n - l n-1 (x f e+l) I T *A • yx-o yn-1-o i = i i = i n-1 x n - l " y n - l _ x l " y l I f we use (1A-3) t o evaluate the second square bracket X "n=T c l * n - l n-1 „ n-1 r IT (y^ )t"fT y±! ^(y)3 ( V 1 ) T T x i ! ^1=° y n - i = ° 1 = 1 1 1 = 1 i = l n-1 x . - y . ^ N- V ( x . - y . ) - l i = i 1=1 F i n a l l y , we can express the combinations as f a c t o r i a l s and, a f t e r rearranging the terms, we a r r i v e at 85 V y,=o y« .=o i = l N-1 x l x n - l n-1 yl=° yn-l=° n-1 1=1 p n (1A-4) ( x i - y i ) 1 I f , as i n the case of the Poisson-multinomial d i s t r i  b u t i o n , N i s a p o s i t i v e i n t e g e r , then P^(x+e k) = /  X p k p n r - i x i B - l Nl u /-> n^T V 1 ^1=° y n - i = ° [N- £ ( x i - y i ) - i ] i i = l n " 1 p 4 X i " y i n-1 [ T T ) " 1 . T J Pg(y) i f N> £ ( x ± - y i ) 1=1 p n (X±-Y±)' 1=1 0 otherwise 86 APPENDIX IB CALCULATION OP MOMENTS PROM FACTORIAL CUMULANT GENERATING FUNCTION From (1-3.1), the cumulant generating f u n c t i o n f o r the Poisson-multinomial d i s t r i b u t i o n i s n " 1 N c ( s ) = x [ [ I s i P i + p n ] - l ] i = l For s i m p l i c i t y of n o t a t i o n , i n the f o l l o w i n g c a l c u l a t i o n s the expectations of products of X 1, X g, X^, X^ w i l l be c a l c u l a t e d e x p l i c i t e l y . I t i s c l e a r , however t h a t these r e s u l t s may be g e n e r a l i z e d t o products of any X^'s. Let us now c a l c u l a t e a l l the p a r t i a l d e r i v a t i v e s of c(s') w i t h respect t o the s.^  of order l e s s than or equal t o f o u r . These are n ~ 1 N-1 1' TT = ^ I s i p i + p n > p l B s l i = l 2 n-1 N-2 2L£ 2 - XN(N-I)( I s i P l + P n ) P l 2  s s l 1=1 9 S 1 3 i = l XN(N-l)(N-2)( £ s l P l + p n ) P l 5 4 n _ 1 N-4 h 4. = XN(N-l)(N-2)(N-3)( £ s i P i + p n ) P l 4 * s l i = l 2 n ~ l u_2 5- Trh" X M ( H - 1 ) ( 2, s i p i + p n ) P i p 2 d s 2 S s l 1=1 87 6. d g = XN(H-l)(N-2)( ^ s i P i + P n ) P i P 2  a V s l 1=1 7. 5 ° 3 = XN(N-l)(N - 2)(N-3)( £ s l P i + p n ) p x 5 p 2  B s 2 3 s l 1=1 -3 P n p 1 N-3 8. L-S \N(N-l)(N-2)( ^ s i P l + p n ) p l P 2 p 3 BS^QSgdS^ 1—1 4 n _ 1 N-4 9 . L-2 2= XN(N-l)(N - 2)(N-3)( £ s i P i + P n ) P i 2 P 2 P 3 a s 3 9 s 2 9 s l 1=1 4 1 1 - 1 N-4 10. = XN(N-l)(N-2)(N-3)( £ s l P i + p n ) P X 2 P 2 2 4 n _ 1 u_4 11. L-£ = XH(N-l)(N-2)(N-3)( £ s i P i + p n ) P 1 P 2 P 3 P 4 as^as^as 2 as x i = i To o b t a i n the f a c t o r i a l cumulants corresponding t o the above d e r i v a t i v e s we set s=l. We denote the f a c t o r i a l cumulants by K i j k m w n e r e t h i s symbol represents the i t h cumulant w i t h respect t o X.^ -3th w i t h respect X 2, k t h w i t h respect t o Xy m t h w i t h respect t o X^. I f the l a s t s u b s c r i p t s are zero, they may be omitted (e.g. ~ ^ 110 ~ E n )• Thus formulas 1-11 become r e s p e c t i v e l y 1. K x = NXp^ ^ 2. K 2 = N(N-l) Xp ]_ 2 3. K 3 = N(N-l)(N - 2 ) X p x 3 88 4. = N(N-l)(N-2)(N-3) X P 1 4 5. K 1 ± = H(H-l) \ p x p 2 6. K 2 1 « N(N-l)(N-2) X P A 2 p 2 7. K 3 ± = N(N-l)(N-2)(N-3) X P A 5 P 2 8. K l x l = N(N-l)(N-2) X P l P 2 P 3 9- K 2 1 ]_= N(N-l)(N-2)(N-3) X P j S g P ^ 10. K 2 2 = N(N-l)(N-2)(N-3) X p x 2 p 2 2 11. N(N-l)(N-2)(H-3) X p - j P ^ p ^ ( r ) ( r ) Now de f i n e E ^ 1 ... X t * ) t o be the r ^ * 1 f a c t o r i a l moment w i t h respect t o X^, , and the r ^ f a c t o r i a l moment w i t h respect t o X^. Then, u s i n g t a b l e s converting f a c t o r i a l cumulants t o f a c t o r i a l moments, we have i f we def i n e the G^ by (1-3.2) 1. E ( X X ) = K x = NXP X 2. E(X^2h m K2+K±2 - N X p 1 2 [ N ( X + l ) - l ] = NXP 1 2G 1 3. E ( X X X 2 ) = K i ; L + K 1 0 K 0 1 = NX p 1 P 2 [ N ( X + l ) - l ] - N X p ^ p ^ 4. E ( X X ( 5 ^ ) = + 3K 2K 1 + K x 5 m N X p x 5 [ N 2 ( X2+3X+1 )-3N( X+l )+2 ] = N X P l 3 G 2 89' 5. E ( X l ( 2 ) x 2 ) = K ^ ^ + S K ^ K ^ o ^ N X p 1 2 p 2 G 2 6. E ( X 1 X 2 X 3 ) « K l l l + K H O K 0 0 1 + K 1 0 1 K 0 1 0 + K 0 1 1 K 1 0 0 + K 1 0 0K 0 1 0K 0 0 1 4 = NXpjP^PjOg 7. E ^ 4 ) ) = + 4-K^ + 3 K 2 2 + o K ^ 2 + K x a NXp x 4[N 5(X 5+6x 2+7X+l)-6u 2(X 2+?X+1) + llN(x+l)-6] 8. E ( X l ( 5 ) X 2 ) = K 3 1 + 3 K 2 1 K 1 Q + K 5 0 K 0 1 + 3 K n K 2 0 = N X p 1 5 P 2 G 3 9. E ( X ( 2 ) x 2 2 ) ) = K 2 2 + 2 K 1 2 K 1 0 + 2K21KQ1 + ZK^2 + + S A o 2 + 4 K 1 1 K 1 0 K 0 1 + K 2 0 K 0 1 2 + ^icfoi = N X P 1 2 P 2 2 G ? 10. E f X ^ ^ X j ) = K 2 1 1 + SK^i^oo + K210 K001 + K201 K010 2 + K011 K200 + ^ H O ^ O l + K011 K100 + 2 K 1 1 0 K 1 0 0 K o o : L + SKioAooSlO + K200 K010 K001 + ^OO^OIO^OI - N X P x 2 p 2 P 5 G^ 90 11. E(X1X2X5X^) = K l x l l + K 1 1 1 0 K 0 0 0 1 + K 1 1 0 1 K 0 0 1 0 + K 1 0 1 1 K 0 1 0 0 + K0111 K1000 + K1100 K0011 + K1010 K0101 + K1001 K0110 + K1100 K0010 K0001 + K1010 K0100 K0001 + K1001 K0100 K0010 + K0110 K1000 K0001 + K0101 K1000 K0010 + ^ o i A o o o ^ o i o o + ^ OOC^ OIOC^ OOIC^ OOOI Now i t i s an easy matter to f i n d the moments about the o r i g i n . Equations 1, 3> 6, and 11 need no change. The others need s l i g h t modification which i s done as follows - 2'. E ( X x 2 ) m E ( X x ( 2 ) ) + E(X x ) = NXPl { [N(X+l)-l]p x + l } = NXPX ( P XG X + 1) 4'. E ( X x 5 ) = E ( X 1 ^ ^ ) + 3 E ( X x ( 2 ) ) + E(X x ) = NX P l(p x 2G 2 + 3P XG X + 1) 5«. E ( X X 2 X 2 ) = E(X x^ 2)x 2) + E(X x X 2 ) = KXP XP 2(P XG 2 + Gx) 7'. E ( X X 4 ) = E ( X x ( 4 ) ) + 6 E ( X X ( 5 ) ) + 7E ( X X ^ 2 ^ ) + E(X x) - Nxp x( P l 5G 5 + 6 p x 2 G 2 + 7P xG x + 1) 8'. E ( X x 5 X 2 ) m E ( X x ^ 3 ) X 2 ) + 3E(X x( 2)x 2) + E(X x X 2 ) = NXPXP2 (P x 2& 5 + 3p xG 2 + Gx) 91 9*. E ( X 1 2 X 2 2 ) = E f X ^ 2 ^ 2 * ) + E{X^2\) + EiX^k^+EiX^) = N\p xp 2 [ p X P 2 G 5 + ( p X + P 2 ) G 2 + G x] 10'. E ( X x 2 X 2 X 3 ) = E ( X 1 ( 2 ^ X 2 X 5 ) + E ( X 1 X 2 X 3 ) - NXPTLP^-JO?^ + G 2 ) Formulas 1, 3* 6 and 11 together w i t h the primed formulas give expressions f o r the moments of order f o u r or l e s s . These, i n a s l i g h t l y g e n e r a l i z e d form are summarized i n (1-3.4). 92 APPENDIX 1C CALCULATION OP THE ENTRIES OP THE INFORMATION MATRIX " J " The c a l c u l a t i o n of I i s o u t l i n e d i n §1.5B. Consider I . and I . X P j P j X I,- = I . = E( l o g L l o g L ) toy d e f i n i t i o n . x p J p r ax apd D i f f e r e n t i a t i n g a f t e r s u b s t i t u t i n g from (1-4.1) y i e l d s 6, P 6 6 « y y E[ ' 1 i - p-(5?) . 1 • i - p-»(x )] A Y = l W ^ X a W 9 P J Y Because the observations are independent and i d e n t i c a l l y d i s t r i b u t e d I,„ = 0 E ( i £- P-(x) . -2- P->(x)) X P J U P - ( X ) ] 2 ax X ^ a P j x ^ * + 0(0-1) E [ — 3 - P^(x )] E [ - k - P^(x )] (1C-1) P-(X) ax x a P-(X) a P j x a The second term i s zero by the same reasoning as i n (1-5.8). From the d e f i n i t i o n of e x p e c t a t i o n , and s u b s t i t u t i n g from (1-4.4) and (1-4.7) f o r the d e r i v a t i v e s , 93 (i/e) i . = I... Y ( p " ( y 1 } P s (? +e m) n-1 x + ^ + (1/NX) £ ^ P ^ x ) - P^(x)) ( ( x , / P J . ) P - ( x ) - i l l - P^x+e^)] k=l P j As I s seen from (1-4.7) t h i s h olds f o r a l l m. I f we choose m=j, m u l t i p l y the expression out and replac e the sums by corres ponding expectations and use lemma 1-5 where necessary, ( 1 / P ) l x p =-!-§— E f X ^ - l ) ] - p nNX(A+l)+(l/p JNX)EfX J) d P^j NX I f we choose m4j and f o l l o w the same procedure, (1/3) I x p = P n E ( X j X j - p nNx(A+l)+(l/p ; jNX)E(X j) J P^PjnNX I n e i t h e r case, we may s u b s t i t u t e f o r the expectations u s i n g ( l - 3 « 3 ) . Both w i l l l e a d t o the same r e s u l t which i s (1/p) I = - P rNXA + P nN + 1 - p n (1C-2) Now consider I . By d e f i n i t i o n I ^ = p i p d p i p j E[a/ap., ( l o g L ) . a/ap . ( l o g L ) ] . L e t us now s u b s t i t u t e from (1-4.1) and r e a l i z e t h a t the observations are independent and i d e n t i c a l l y d i s t r i b u t e d . Then p i p j [P^(X)]2 a P ± x a P j x + B(B-I) E M ~ ~ p5?( x)3 E [ — L - . P-(x)] P^(X) ap± x P-(X) a P j Reasoning as i n (1-5.8), we see tha t the second term i s zero. The s u b s t i t u t i o n f o r the d e r i v a t i v e s from (1-4.4) y i e l d s CO CO ^ x.+l [ ( ^ / P i ) P^(x) + - i — p l ( x + e . ) ] p i I f i = j , we use the d e f i n i t i o n of ex p e c t a t i o n and expand the expression. Then (1 /P) Ip p - ( l / p i 2 ) [ E ( X i 2 ) - 2 E ( X i ( X i - l ) ) + E2\%±2(A+1)] and the use of (1-3.3) and s i m p l i f i c a t i o n g i v e s (1/p) I p . N 2\ 2A + NX ( l / p ± + 1-N) (1G-3) I f ±4i> w e m a v u s e the same procedure t o get (1/P) 1 ^ = ( l / p i p J ) [ - E ( X i X j ) + N ^ S j P j C A + l ) ] = N 2X 2A +NX(1-N) (1C-4) Equations 1C-1, 2, 3> and 4 give the remaining e n t r i e s of the in f o r m a t i o n m a t r i x . 95 APPENDIX ID CALCULATION OP THE INVERSE OF THE MATRIX J/B = Q R R S+W/p1 R S • * R S Step 1. Fi n d det (J/B) R S S+W/p, R S S 3+W/p, n-1 Let us perform the f o l l o w i n g elementary row operations on J/p. 1. Subtract column 2 from columns 3, 4, n-1. 2. Subtract (S/R) (column 1) from column 2. 3- Add (pj^/p-j^) [row ( k + l ) ] t o row 2 f o r k=2, 3, n-1. 4. Add' (p.j/W)(QS/R-R) (row 2) to row 1. These operations leave the determinant unchanged, w i l l have the r e s u l t Hence we det (±) = B Q + — (— - R)(R + R V ^ ) W R .n p. n-1 Pj k=2 p l W/Pl n - 1 W/p, n-1 det (J/B) = [Q+(Q3-R2)(l-pn)/¥3"n" (W/p± ) 1=1 (1D-1) Step 2. F i n d minors of m a t r i x elements Let K be the minor of the (ct,y) entry of J/B. T K S + ¥/p x S + W/p, S S + W/p. n-1 Carry out the f o l l o w i n g elementary row o p e r a t i o n s . 1. Subtract column 1 from columns 23 J>, ..., n-1. 2. Add (Pfc/Pi) (row k ) t o row 1 f o r k=2, 3> n-1. The r e s u l t i s a lower t r i a n g u l a r m a t r i x whose determinant i s n-1 K u - [1 + S(l-p n)/W] ~ y j (W/p ±) (1D-2) i = l K X PJ R P l S + s w_ p i - 1 S + JL 97 Do the f o l l o w i n g o p e r a t i o n s . 1. Subtract row i from a l l other rows. 2. Expand by column 1. The r e s u l t i s a diagonal m a t r i x whose determinant i s column i+1 i ^ J 98 Do the f o l l o w i n g operations 1. Subtract row J from a l l other rows except row 1. 2. Subtract (R/S) (row J ) from row 1. 3. Expand by column (1+1) and then by row 1. K_ _ = ( - l ^ + ^ Q S - R 2 ) I \ (W/p ) P i P 0 K P*P Q R R S + | - p l S+ w 5 i - l R p i + i s+ w >n-l (1D-4) This w i l l be the same as det ( J/B) but w i t h the expressions i n p^ m i s s i n g . Thus K = [Q+(QS-R 2)(l-p B- P i)/W] J T (¥/pfc) p i p i k 4 i (1D-5) Prom e i t h e r (1-5.6) or (2-5.3) we know tha t f\ = J , i . e . fl = (1/B)(J/S )""*". By a w e l l known theorem i n m a t r i x theory, the elements of II can be expressed as the c o f a c t o r s T of corresponding elements of J d i v i d e d by det J . Hence from (1D-1), (1D-5), v a r x - 1 ^ - 1 W+s(l-p p) \ p d e t ( j / 0 ) p WQ+(QS-R^)(l-p n) r n v ,* - * 1 K p i X 1 R P j COV (X,p. ) = p- ~ ~ a — 5 — 1 p d e t f J / 0 ) P WQ+(QS-R-)(l-p n) cov ( M j ) = | . ^  . - 1 1 3 P d e t ( J / 0 ) p [WQ+(QS-R*)(l-p n)]W var p. = J . P i P l = - I — ^ 1 1 P o ^ j T ^ P [WQ +(QS-R 2)(l-p n)]W (ID 100 APPENDIX IE Lemma; Le t m = (ou±, ..., i»n) and £i » (u^, ..., u^) be two ..sets of random v a r i a b l e s r e l a t e d by the equations UD^ = (^(M), i = 1, 2, ..., n. L e t and ~\x^ denote the expected values of u)x and r e s p e c t i v e l y , and A = (-H^) and M = ( M ^ ) denote the covariance m a t r i c e s of w and # r e s p e c t i v e l y . A l s o , denote by 0, the Jacobian A L i l = du)4 <M=W' Then i f higher order d e r i v a t i v e s of w i t h respect t o are n e g l i g a b l e compared t o the f i r s t order, we have f l = (}^ M(|, Proof: Using T a y l o r ' s theorem, n k=l 9 ( ik UO=UD (^-1-^) + second order terms Thus n n Z v-i a«)-. _ _ k=l m=l 9 | Jk 9*m where the d e r i v a t i v e s are evaluated at uu n i t i o n s of fl and M, - w Prom the d e f i - ±3 n n = 1 I k=l m=l 9 l ik B|Jm M. - ( 4 T M(J) i d Thus each entry of . f l agrees w i t h each entry of <JT M(J. Q.E.D. By a w e l l known theorem from m a t r i x theory, the lemma i m p l i e s det fl = det ( J T . det M . det (J, o r det Pi = (det 4 ) 2 det M. APPENDIX I F CALCULATION OF (1-6.12) AND (2-6.5) Let us s u b s t i t u t e (1-3.3) and (1-3.4) i n t o (1-6.11), cov (W2,\) = ( l / 6 ) ^ P k N X ( p f e 2 G 2 + 3P kG x + 1) + 2 I N X P k P j ( P k G 2 - C l ) + T 1 ^ V k P i P j j=fk i 4 k d4k,i +A N X P A ( p i G 2 + G l } " W X P k I N X P i ( P i G l + X ) i 5 f k i = l n-1 " N X p k I I N\G l P ip. i = l j 4 i A f t e r c a r r y i n g out the summations we o b t a i n = ) { P kNX(p k 2G 2+3P kG 1+l )+2N\p k(p kG 2+G 1 )(1-P f c-P n ) + N X p k G 2 [ ( l - p k - p n ) 2 - I P ± 2]+NXP k[G 2 E P i 2 + G i ( l - P k - P n ) i 4 k ±4^ n-1 n-1 - N 2 X 2 p k ( G x I P i 2 4- l - P n ) - H 2 X 2 G l P k C ( l - p n ) 2 - I Pl2] i = l i = l Upon s i m p l i f i c a t i o n , t h i s y i e l d s cov (V2,\J = ( l / p ) N \ p k { ( l - P n ) [ G 2 ( l - p n ) + 3G X - N X < G 1 ( l - p n ) + l ) ] + l } This equation i s (1-6.12). APPENDIX 1G CALCULATION OP (1-6.16) AND (2-6.6) Let us s u b s t i t u t e (1-3-3) and (1-3.4) I n t o (1-6.15) _ n-1 w W2 - ( l / p ) ( I I I I ^ j P i P j V m 1-1 j 4 i k 4 i , j m 4 i 5 ^ k n-1 n-1 + 6I I I NXfGjPjSjPfc. + O ^ p ^ ) + 3 £ £ NX 1=1 J4I k 4 i 5 j i = i J4I n-1 [ G 3 P i 2 p / + r i 2 ( p i 2 p J + p i p j 2 ) + G l p i p ^ + * I I NX i = i J4I n-1 ( G 5 P i 5 P j + ^ G g P ^ j + C l p i p ^ + I Kx(G 3P i 2 r+6G 2P i 2+7G 1P i+p i) 1=1 n-1 n-1 2 .[ i N X P ^ P ^ + I H ^ r ^ w ^ " } 1=1 i = i d4i A f t e r c a r r y i n g out the summations we ob t a i n n-1 n-1 n-1 ^/^{m^ia-pj - 6 ( i - P r / l p k s + x l P k 2 ) 2 + 8 ( i - p n ) £ P l k=l k=l 1=1 n-1 n-1 n-1 n-1 -6 I p ± * ] + 6 f f X [ 0 5 < ( l - p n ) ^ P l 2 - 2 ( l - p n ) £ p / ) 2 1=1 1=1 1=1 J=l n-1 n-1 n-1 +2 I P i 4 ) * » 8 ( ( 1 - P n ) 5 - 3 ( l - P n ) I p / + 2 I p ±5 ) ] 1=1 'J=l i = l n-1 n-1 ( n-1 n-1 +3NX[G3<( ^ p . 2 ) 2 - 1 ^ ) ^ ( 2 ( 1 ^ ) lv*-2 lv±3) 1=1 1=1 'j=i 1=1 n-1 11' n-1 n-1 +0 1 < ( l - p n ) 2 - I p , 2 ) ] +4HX[G 3 ( ( l - p n ) £ P l ' - X Pl4> 1=1 1=1 1=1 i o 4 + 3 G 2 ( ( 1 - P n ) n i P ^ - T P I 3 > «i < ( l - P n ) 2 - ^ 2 > 1 i=i i = l n-1 1=1 n-1 n-1 n-1 +NX(G 3 I P i 4 + 6 G 2 I 4+7^1 P ^ + l - p J 1=1 1=1 1=1 n-1 n-1 ~ -[IU<G x £ p ± 2 + i - P a > +NXGx < ( i - P n ) 2 - £ P ± 2 ) ] 2 } i = i i = i Upon s i m p l i f i c a t i o n , t h i s y i e l d s var W2 = (l/0 ) N X(l-p n) { G ? ( l - p n ) 5 + 6 G 2 ( l - p n ) 2 + 7 G x ( l - p n ) + l - N x C l - p ^ t G ^ l - p ^ + l ] 2 ] This equation i s (1-6.16) 105 APPENDIX 1H CALCULATION OF det M WHERE M IS GIVEN BY R ( i - P n ) H 2 Rp 2H x R P n - l H l B P A Rp 1(p 1N»+l) RN'p 1P 2 RN' P lP 2 RN'p,p „ F l F n - l R P n - l H l RN 'Pl p n - 1 Let us perform the f o l l o w i n g elementary o p e r a t i o n s . 2. 3. Take the common f a c t o r R out of each row, the common f a c t o r H^ out of row 1 and column 1, and the common f a c t o r p^ out of row 1+1, i = l , 2, n-1. M u l t i p l y row 1 by -N' and add t o rows 2, J>» n. M u l t i p l y row i+1 by -p^ and add t o row 1, 1*1, 2, n-1. We now have n n-1 det M - (£.)" H12(TTP1) i=l n-1 = ( R / B ) n ( 7 f P i ) ( l - p n ) { H 2 - H 1 2 ( l - p n ) H 2 H ' ] 1=1 „ [1 ^ J ( l - P n ) H, 0 •. 106 APPENDIX 2A OBTAINING THE PROBABILITY GENERATING FUNCTION g*(s ) Let us s t a r t w i t h (2-2.5^"). This expression i s equal to ee x 2=o -X-,- — -Nz 5 -x„-...-Nz ) ( - S 2 P 2 ) 2 Z ( 2 x' ) ( " V l ) 1 + e -X x^=o 1 We can now apply (2-2.5) t o the l a s t sum and o b t a i n n-l z=l xn-l=° x 2=o -x,-.;.-Nz x c -x_-...-Nz , 6 0 -x-,-...-Nz - s 0 p 0 X 2 -x 0-...-Nz . x 2=o X, l - s l P l A p p l y i n g (2-2.5) again and co n t i n u i n g i n t h i s manner 00 £ (X ZA!)p n I , Z(l- S lP 1-...-VlP„-ir I , ZH- e" X z=l oo =e" X £ U z A ! ) p n N Z ( l - s 1 P 1 - . . . - s n _ 1 P n _ 1 ) -Nz z=o The sum i s simply the power s e r i e s expansion of N exp ( X / \ - X 1- £ S i P i * 1=1 from which (2-2.6) f o l l o w s 107 APPENDIX 2B CALCULATION OP THE ENTRIES OF THE INFORMATION MATRIX J * The c a l c u l a t i o n of 1 ^ * i s done I n §2.5B. Consider I . * and I *. By the same argument as i n xp.j PjA Appendix 1C, we may o b t a i n the same equation as (1C-1), and as i n (1-5.8), second term w i l l be zero, i . e . XV3 V6X 1 [ P x ( x ) ] 2 SX X 5Pj > S u b s t i t u t i o n of (2-4.2) and (2-4.3) i n t o the above equation r e s u l t s I n 1 x,=o x ,=o r x W t «*p m 1=1 w l " " ~ n - l " n-1 x. +1 4 - l l P ^ x ) ] • { [ x . / V d / p J ^ x k ] P , ( 5 c ) - - i — P x ( x + t i ) } C k=l p n p i (2B-1) From (2-4.3), we see t h i s must ho l d f o r any m, m=l, 2, n-1. I f we choose m=I, expand t h i s expression, and make use of lemma 2-4, we can write the above equation i n ex p e c t a t i o n n o t a t i o n as n-1 ( l / B ) l X p * = ( l / P i ) [ ( - l + l / P n ) E ( X 1 ) - ( l / N X ) E ( X i Y \ ) 1 k=l n-1 n-1 +(l/NXP n) ( ( 2 / P i ) E [ X i ( £ X k - 1 ) ] - E [ ( £ X k ) 2 ] ] k=l k=l n-1 - ( l / p n ) I E(X k ) - ( W x/p n)(A +l)+(l/NX)[E(X i 2)-E(X 1)] k=l S i m i l a r l y , i f we choose m=)&, we o b t a i n n-1 ( l / 6 ) l x p * = ( l / p 1 ) [ ( - l + l / p n ) E ( X 1 ) - ( l / K x ) E ( X 1 I X^) 1 k=l n-1 n-1 + ( l / H X p m ) f ( 2 / p m ) E [ X f f i ( I X k~l)]-E(£ X k ) 2 ) k=l k=l } n-1 -(l/p n)£ E ( X k ) - ( N X / p n ) ( A + l ) + ( l / N X P 1 P m ) E ( X 1 X m ) k=l Upon s i m p l i f i c a t i o n by means of s u b s t i t u t i o n from (2-3.3) of e i t h e r of the above expressions, we f i n d t h a t (1/P )lXPi*=NxA/pn+[ (NX+N+1 ) p n 2 - l / p n - NX]/pn (2B-2) Next consider I p . Prom (2-5-5) and same procedure as above, P i P J [ P ^ ( x ) j - BP ± X BX X Let us s u b s t i t u t e f o r the d e r i v a t i v e s from (2-4.2) and expand oo » n-1 1 J X t=O x n=o ^ x ^ ' k=l " P x ( x + ^ i ) } { ^ j / P j + C ^ P n ) I P x ( x ) p n p i k=l x +1 p n p j 109 I f I = j , expansion and the d e f i n i t i o n of exp e c t a t i o n y i e l d 2 ( 1 / P ) I P i p i * = ( 1 / P i ^ 1 " 2 / p n ) E ( X i M s / P n P i H d / P i n-1 n-1 + l / p n ) E ( X 1 ) + ( l - l / p n ) E ( X i £ X k ) + ( l / p n 2 ) E ( Y \ ) k=l k=l + ( N 2 X 2 / p n 2 ) ( A + l ) S u b s t i t u t i o n f o r the expectations u s i n g (2-3.3) w i l l g ive us (1/0)1 • = N 2 X 2 A / p n 2 + (NX/p n 2)[-(NX+N+l)/p n 2 + l / p n +NX+1] + NX/pNPI (2B-4) I f 14J, (2B-4) y i e l d s P i p / " ( 1 - 2 / p ^ L , , , ^ n-1 + ( i/ P j)E(x d Y \ ) i + ( ^ n 2 ) { e ( E \y k=l k=l n-1 n-1 - (i/p d)ECx d( Y V 1 ^ • (V P 1 ) E [ X 1( J X^.-!)] n-1 ( l / P ) l p , P , * - ( l - 2 / p n ) E ( X 1 X j ) + ( l / p n ) [ ( l / p 1 ) E ( X 1 I X ^ k=l n-1 n-1 k=l " k=l n-1 n-1 k=l k=l + ( N 2 X 2 / p n 2 ) ( A + l ) = N 2X 2A/p n 2+(HX/p n 2 )[-(NX+N+l)/P n 2+l/p n+NX+l] (2B-5) Equations 2B-1, 2, 4 and 5 give the remaining e n t r i e s of the i n f o r m a t i o n m a t r i x . l i p det (J* = APPENDIX 2C CALCULATION OP det (J* WHERE -P n Q p n p l XQ XQ Pn^Pn-1 XQ F "Q P l D -Pn XQ NX PgD XQ XQ P XD xo~ PoD n XQ NX 'rill XQ D F P XD XQ XQ P i D Fn-1 . XQ I E NX To f i n d the determinant we must perform the f o l l o w i n g elementary operations. 1. Fac t o r Q out of every row of the m a t r i x , p R out of column 1, and f i n a l l y p x/X out of row i+1 f o r i = l , 2^  • • • £ x^"*l • 2. M u l t i p l y column 1 "by F and add t o the other columns. 3. Subtract row 2 from rows"3* 4, n. 4. M u l t i p l y column i+1 by P^/P^ and add t o column 2 f o r 5. Expand by row 1 and then by column 1. n r The r e s u l t I s a constant times the determinant of a diagonal m a t r i x , and so -Hp n _ 1 Q (NX) n n 112 APPENDIX 3A CALCULATION OF EFFICIENCY OF METHOD OF MOMENTS FOR THE POISSON-MOLTIVARIATE POISSON DISTRIBUTION Equation (3-^.7) f f . suggests two ways t o f i n d the e f f i c i e n c y . We may take the l i m i t of the e f f i c i e n c y of e i t h e r the Poisson M u l t i n o m i a l or the Poisson-Negative M u l t i n o m i a l d i s t r i  b u t i o n as N->•«>, i . e . we may take l i m i t s i n e i t h e r (1-6.24) or (2-6.13). Method 1. Take l i m i t i n g value of (1-6.24). A f t e r d i v i d i n g numerator and denominator by N and rearranging the expression s l i g h t l y , Use (3-3.1), (3-3.2), and (3-3-3) t o f i n d the l i m i t of the f i r s t * f a c t o r i n the denominator and (3-4-3) t o f i n d the l i m i t of the second. A l s o use the f a c t t h a t a = N(l-p ). Then 1 ' [ H 2 < N ( l - p n ) + p n > -H^] E f f s a This s i m p l i f i e s t o (3A-1) 113 Method 2. Take l i m i t i n g value of (2-6.13). A f t e r d i v i d i n g numerator and denominator hy N and rearr a n g i n g the terms s l i g h t l y , ( l + l / N ) 4 N 5 ( l - p n ) 5 E f f = l i m <! — * xx ' v "xx V~PP }~(\v"™S I p n ^ X B X X * + < B X X ^ B P P * / N ) - ( B X p V N ) O P n N ( l - P n ) l 1 [ (1+1/N)N(l-pn )+D+Pp nJ 2[(H 2*-H 1* 2 )p n+(l-p n)N(1+1/H)H 2*] ^ L e t us note t h a t from (2-6.10), l i m Fp n = l i m ( 2 ( l - p n ) [ N ( X + l ) + l ] p n + P n 2 ) = 2(X+l)a + l l i m D = l i m f-K ( l - p )(l+p ) - 1 - 2(-^L) XP_N(l-p ) + ( — ) X 2 N 2 ( l - p ) 2 + (-*-) X ( l - P n ) 2 ) = -2(x+l)a - 1 N+l n N+l J Hence we see tha t l i m (D+Fp ) = 0 I f we use t h i s f a c t and equations (3-3.1), (3-3.2), and (3-3-3) t o f i n d the l i m i t of the second f a c t o r i n the denominator and (3-4-3) t o f i n d the l i m i t of the l a s t f a c t o r ,3, E f f = c r y |1«[XA+<A\ (XA-1) - (-XA+1T > «].a' . [ H 2 - H 1 2 + a H 2 ] ] This s i m p l i f i e s t o E f f = a?/ [ [xA(a+l) - a][Hg(a+l) This i s e x a c t l y the same as ( J A - l ) . 115 BIBLIOGRAPHY [I] Anscombe, F. J . , (1950), "Sampling Theory of the Negative Binomial and Logarithmic S e r i e s D i s t r i b u t i o n s " . B i o m e t r i k a , V o l . 37, pp. 358-382. [2] B a r r e t t , M.E., (1962), " S t o c h a s t i c Processes i n P o p u l a t i o n S t u d i e s . " An unpublished M'.A. t h e s i s at the U n i v e r s i t y of B r i t i s h Columbia. [3] B e a l l , G. and Rescia, R.R., (1953), "A G e n e r a l i z a t i o n of Neyman's Contagious D i s t r i b u t i o n s . " B i o m e t r i c s , V o l . 9, pp. 354-386. [4] B l i s s , C.I., (1953), " F i t t i n g the Negative Binomial D i s t r i  b u t i o n t o B i o l o g i c a l Data." B i o m e t r i c s , V o l . 9, pp. 176-200. [5] David,F.N. and Barton,D.E., (1962), Combinatorial Chance. Hafner P u b l i s h i n g Company, New York. [6] Douglas, J.B., (1955)5 " F i t t i n g the Neyman Type A (two parameter) Contagious D i s t r i b u t i o n . " B i o m e t r i c s , V o l . 11, pp. 149-173- [7] F e l l e r , W., (1943), "On a General C l a s s of Contagious D i s t r i b u t i o n s . " Ann. Math. S t a t . , V o l . 14, pp. 3*9-400. [8] F e l l e r , W., (1950), An I n t r o d u c t i o n t o P r o b a b i l i t y Theory  and i t s A p p l i c a t i o n s , V o l . 1, John Wiley and Sons, I n c . , New York, London. [9] F i s h e r , R.A., (1941-2), "The Negative B i n o m i a l D i s t r i b u t i o n . " Ann1. Eug., V o l . 11, pp. 182-I87. [10] Gurland, J . , (1958), "A Generalized C l a s s of Contagious D i s t r i b u t i o n s . " B i o m e t r i c s , V o l . 14, pp. 229-249. [II] K a t t i , S.K. and Gurland, J . , (1961), "The Po i s s o n P a s c a l D i s t r i b u t i o n . " B i o m e t r i c s V o l . 17, pp. 527-538. [12] Kendall,M.G. and Stuart,A., (1958), The Advanced Theory of S t a t i s t i c s , V o l I . Hafner P u b l i s h i n g Company, New York. [13] Kendall,M.G. and Stuart,A., (I96I), The Advanced Theory of S t a t i s t i c s , V o l I I . Hafner P u b l i s h i n g Company, New York. [14] McGuire, J.U., B r i n d l e y , T.A., and Bancroft T.A., (1957), "The D i s t r i b u t i o n of European Corn Borer Larvae Pyrausta N u b i l a l i s (HBN) i n F i e l d Corn." B i o m e t r i c s , V o l . 13, pp. 65-78. 116 [15] Neyman, j . , (1938), "On a New C l a s s of 'Contagious' D i s t r i b u t i o n s . " Ann. Math. S t a t . , V o l . 9, pp. 35-57. [16] Shumway, R., and Gurland, J . , ( i 9 6 0 ) , " P i t t i n g the Poisson Binomial D i s t r i b u t i o n s . " B i o m e t r i c s , V o l . 16, pp. 522-533. [17] Skellam, J.G., (1952), "Studies i n S t a t i s t i c a l Ecology I . S p a t i a l P a t t e r n . " B i o m e t r i k a , V o l . 39, pp. 346-362. [18] S p r o t t , D.A., (1958), "Method of Maximum L i k e l i h o o d A p p l i e d to the Poisson Binomial D i s t r i b u t i o n . " B i o m e t r i c s , V o l . 14, pp. 97-106. [19] Thomas, M., (1949), "A G e n e r a l i z a t i o n of Poisson's Binomial L i m i t f o r Use i n Ecology." B i o m e t r i k a , V o l . 36, pp. 18-25. [20] Rao, C.R., (1945), "Information and the Accuracy A t t a i n  able i n the E s t i m a t i o n of S t a t i s t i c a l Parameters." C a l c u t t a Math. Soc. B u l l . , V o l . 37* pp.' 8 I - 9 I . [21 ] Rao, C .R., (1947) "Minimum Variance and the E s t i m a t i o n of Several Parameters." Proc. Camb. P h i l . S o c , V o l . 43, pp. 280-283. 

Cite

Citation Scheme:

    

Usage Statistics

Country Views Downloads
Unknown 18 0
China 8 4
United States 7 0
Greece 5 1
Japan 3 0
City Views Downloads
Unknown 18 2
Piraeus 5 1
Beijing 5 0
Ashburn 4 0
Shenzhen 3 4
Tokyo 3 0
Mountain View 2 0
Seattle 1 0

{[{ mDataHeader[type] }]} {[{ month[type] }]} {[{ tData[type] }]}
Download Stats

Share

Embed

Customize your widget with the following options, then copy and paste the code below into the HTML of your page to embed this item in your website.
                        
                            <div id="ubcOpenCollectionsWidgetDisplay">
                            <script id="ubcOpenCollectionsWidget"
                            src="{[{embed.src}]}"
                            data-item="{[{embed.item}]}"
                            data-collection="{[{embed.collection}]}"
                            data-metadata="{[{embed.showMetadata}]}"
                            data-width="{[{embed.width}]}"
                            async >
                            </script>
                            </div>
                        
                    
IIIF logo Our image viewer uses the IIIF 2.0 standard. To load this item in other compatible viewers, use this url:
http://iiif.library.ubc.ca/presentation/dsp.831.1-0080602/manifest

Comment

Related Items