UBC Theses and Dissertations
Testing for and Selecting Strong Players in a Tournament. Humphries, Dick, 1971.


TESTING FOR AND SELECTING STRONG PLAYERS IN A TOURNAMENT

by

DICK HUMPHRIES
B.Sc., University of British Columbia, Vancouver, B.C., 1969

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE IN THE DEPARTMENT OF MATHEMATICS

We accept this thesis as conforming to the required standard

The University of British Columbia
December, 1971

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my Department or by his representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Mathematics
The University of British Columbia
Vancouver 8, Canada

ABSTRACT

In this thesis we derive some testing and selection procedures for a tournament involving a set of n players that has two complementary subsets of players, each consisting of approximately equally matched players. The background to this work is found in the work of Narayana and Zidek who, in "Contributions to the theory of tournaments, II" [8], considered the case involving one strong player with n - 1 equally matched, weaker opponents. Their results are extended in this paper to include the possibility of k, 1 ≤ k < n, strong players.
Moreover, we supply results which prove several of the unproved assertions in [8]. It becomes apparent that, the larger k is, the more restrictive the design must be and, also, the more inadequate the selection procedure is. Although no numerical and computational work has been done, asymptotic results have been obtained which are easily adapted for such purposes.

ACKNOWLEDGMENTS

The author wishes to thank Dr. J.V. Zidek for suggesting the topic of this thesis and for his generous time and assistance given during its writing. The author is also indebted to Dr. R. Shorrock for his careful reading and criticisms of this thesis. The financial support of the National Research Council of Canada and the University of British Columbia is also gratefully acknowledged. Finally, a special thanks is due to Mrs. Y.S. Chia Choo for her conscientious typing of this thesis.

TABLE OF CONTENTS

INTRODUCTION
CHAPTER I - NOTATION AND PRELIMINARY RESULTS
1.1 Statistical Preliminaries
1.2 Probabilistic Preliminaries
1.3 Notation and Definitions
1.4 The Basic Probability Distribution of a Tournament with Given Design
CHAPTER II - REVIEW OF THE RESULTS FOR THE ONE STRONG PLAYER CASE
2.1 Basic Statistical Properties of the Model
2.2 Testing the Hypothesis of Player Equality
2.3 Selecting the Strongest Player
CHAPTER III - THE TWO STRONG PLAYER CASE
3.1 The Likelihood Function and a Sufficient Statistic
3.2 The Likelihood Ratio Test
3.3 Asymptotic Results for the Likelihood Ratio Test
3.4 Relevance of the Invariance Principle
3.5 Bayes Invariant Tests
3.6 The Asymptotic Form of a Certain Bayes Invariant Test
3.7 Selecting the Strongest Pair
CHAPTER IV - EXTENSIONS TO THE k-STRONG PLAYER CASE
4.1 Extension of the Basic Statistical Properties
4.2 Testing Procedures
4.3 Selection Procedures
CHAPTER V - CONCLUDING REMARKS
REFERENCES

INTRODUCTION

In an attempt to rank, in order of quality, members of a collection of objects, the method of paired comparisons is often employed. In this method the objects or "players" are compared in pairs and, on each comparison, a "winner" is selected. An example of a situation where this method would be applicable is the following. Various types of razor blades are to be tested. Two blades of different types would be compared on opposite sides of a man's face, after which one of the blades would be selected as the "winner". Other applications are found in taste testing, consumer preference tests and personnel ratings.

The basic experimental unit of a tournament is the paired comparison. The structure of a tournament depends upon its design, two of the better known designs being the round robin and random knock-out.
In "Contributions to the theory of tournaments, II" [8], Narayana and Zidek consider the situation where one strong player competes against n - 1 opponents who are supposed to be of equal strength. As it turns out, the form of the answers obtained to the statistical questions investigated is independent of the tournament's design. Thus these answers pertain to the general tournament, defined in [8].

In this paper our efforts are directed towards two goals. First, rigorous proofs are supplied for several unproved assertions given in [8]. Secondly, an attempt is made to extend the results to the case where k equally strong players compete against n - k equally weak players. This case may be viewed as an approximation to the situation where k players of approximately equal strength compete against n - k players who are also of approximately equal strength.

Chapter I contains the basic statistical and probabilistic preliminaries which form the basis for subsequent analysis. Also included is the definition of a general tournament, as posed by Narayana and Zidek, and, finally, using this definition, the basic probability model for a tournament with a given design is derived.

In Chapter II we summarize, without proof, the results in [8] for the case where one strong player competes against n - 1 weaker players of equal strength. Although this chapter isn't necessary to subsequent results in this thesis, it is included for completeness and because it provides a natural background for the present work. Some of the results of [8] are concerned with the problem of testing the null hypothesis of equality of the n players against the alternative that there is one strong player. The remainder pertain to the question of selecting the strongest player, assuming one exists. In each case it is supposed that data from m, m ≥ 1, independent repetitions of the tournament are available. For the testing problem two solutions are obtained.
The likelihood ratio test and also an approximate critical region of size α are derived. The usual procedure for determining the critical region uses the well-known result concerning the limiting distribution of -2 log L, where L denotes the likelihood ratio. But this result does not apply here because of the irregular character of the parameter space; so, in order to find an approximate critical region, the limiting form of -2 log L is found. Another solution to the testing problem is given. It utilizes the fact that the probability density of the sufficient statistic is invariant under the group of permutations of the observations corresponding to the n players. Attention is restricted to the class of all invariant tests, and the class of all Bayes procedures within this class is derived. The limiting behavior, under the null hypothesis, of members of this class is studied when their prior densities satisfy a certain regularity condition. These results simplify the task of calculating an approximate critical region.

Finally, two modes of selection are considered. The first calls for the selection of a random subset of players which contains the strongest with probability greater than some prescribed constant. In the second, a subset of given size is chosen. Both of the procedures obtained are Bayes among invariant procedures.

In Chapter III we consider the case where two equally strong players compete against n - 2 weak players who are assumed to be equal in strength. Once again, the questions of testing and selection are investigated and the results tend to parallel those of the one strong player case. The sufficient statistic in this case is invariant under the group of permutations of observations corresponding to pairs of objects. In the problem of testing, it is reasonable to restrict attention to the class of all invariant tests.
In the selection problem, however, it is apparent that the invariance principle becomes unduly restrictive, since it requires that we choose subsets of pairs instead of players. In terms of the loss function, this implies that including a pair consisting of one strong and one weak player is just as bad as including a pair of two weak players. This point of view might be defended, when appropriate, by arguing that unjust harm is being done by excluding one of the strong players.

It is clear that the proofs of the results for Chapter III apply to the corresponding assertions for the one strong player case. Likewise, with the appropriate adjustments for the initial assumptions, they can be readily extended to the k strong player case. Consequently, Chapter IV briefly summarizes the main results for this case. Contained in Chapter V are some concluding remarks.

CHAPTER I
NOTATION AND PRELIMINARY RESULTS

1.1 Statistical Preliminaries

Let 𝒳 be a given space and 𝒜(𝒳) a σ-algebra of subsets of 𝒳. Suppose a random variable X is observed and takes its values in 𝒳. We assume X is distributed according to a unique (but unknown) member of the family, M = {P_θ | θ ∈ Θ}, of probability distributions on 𝒜(𝒳), which is indexed by a set Θ, called the parameter space. The elements of Θ are often referred to as the possible states of nature.

After observing X an element or "action" is to be chosen from a set, A, of actions. If the action chosen is a ∈ A and the true state of nature is θ ∈ Θ, then a non-negative loss, L(a,θ), is incurred. If randomized decision rules are used, a more satisfactory theory is achieved. A randomized decision rule is a procedure that uses a random device to make the actual choice of action, after X is observed. For example, a coin may be tossed to choose, finally, between two actions. This may result in a lower average loss than that incurred using any non-randomized rule (see Ferguson [5], page 22).
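The coin-tossing idea above can be made concrete. The following is a minimal sketch, not from the thesis (all function names are invented), of a randomized decision rule as a map from an observation x to a probability distribution over a two-element action set, with a nonrandomized rule as the degenerate special case:

```python
import random

def nonrandomized_rule(x):
    # Degenerate case: probability one on a single action a(x).
    return {"accept": 1.0} if x >= 0 else {"reject": 1.0}

def randomized_rule(x):
    # A randomized rule delta(., x): a probability distribution
    # over the action set A = {"accept", "reject"}; here a fair
    # coin is tossed only in the boundary case x == 0.
    p = 0.5 if x == 0 else (1.0 if x > 0 else 0.0)
    return {"accept": p, "reject": 1.0 - p}

def choose_action(rule, x, rng=random.random):
    # Use an auxiliary random device to draw the action
    # according to the distribution delta(., x).
    dist = rule(x)
    u, cum = rng(), 0.0
    for action, prob in dist.items():
        cum += prob
        if u < cum:
            return action
    return action
```

The sketch mirrors the text: the nonrandomized rule is recovered by assigning probability one to a(x) and zero to every other action.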
Nonrandomized decision rules are readily seen to be particular cases of randomized decision rules: the nonrandomized decision rule which chooses, after X is observed, the action a(X) ∈ A is simply the randomized decision rule which assigns, after X is observed, probability one to the action a(X) and probability zero to all other actions in A.

We summarize the foregoing in the following definition. Let 𝒜(A) be a given σ-algebra of subsets of A that contains all singletons consisting of individual points of A. Let D represent the class of decision procedures available for the problem. Any element, δ(·,·), of D is a mapping of 𝒜(A) × 𝒳 into the interval [0,1] which satisfies the condition that, for each x ∈ 𝒳, δ(·,x) is a probability distribution on 𝒜(A). If X = x is observed, the action selected is a random variable distributed according to δ(·,x).

If a randomized decision rule is used, then, for any particular value of x, the loss incurred is random. The expected loss with respect to the measure δ(·,x) on 𝒜(A) is used as a basis for selecting a procedure. It is defined through the equation

    E_x{L(a,θ)} = ∫_A L(a,θ) δ(da,x) ,

where δ(da,x) represents, intuitively, the weight of the probability measure δ(·,x) located at the action a. For any measurable function f : 𝒳 → (-∞,∞), let

    E_θ{f(X)} = ∫ f(x) dP_θ(x) ,

whenever the latter quantity is defined. The risk, r(θ,δ), of a procedure δ ∈ D, when X is distributed by P_θ, θ ∈ Θ, is defined by

    r(θ,δ) = E_θ{ ∫_A L(a,θ) δ(da;X) } .

Suppose ψ is a σ-finite measure on (𝒳,𝒜(𝒳)) which dominates the family M. For each θ ∈ Θ, let p(·|θ) denote the Radon-Nikodym derivative of P_θ with respect to ψ. In particular, when X is a discrete random variable, p(x|θ) represents the probability of observing the value X = x when θ ∈ Θ is the true state of nature.
Let Π be a probability measure on a σ-algebra of subsets of Θ. A Bayes procedure with respect to Π is defined to be any member of D which minimizes and makes finite

    ∫ r(θ,δ) dΠ(θ)

as a function of δ. If a Bayes procedure exists, it is a member of D which, at X = x, minimizes, as a function of δ, the quantity

    ∫_A ∫_Θ L(a,θ) p(x|θ) dΠ(θ) δ(da;x) .

Suppose G is a group of measurable bijections from 𝒳 into 𝒳. Then G is said to be homomorphic to a group H if there exists a mapping φ : G → H such that φ(g₁g₂) = φ(g₁)φ(g₂) for all g₁, g₂ ∈ G. Assume there exist measurable transformation groups Ḡ and G̃, each homomorphic to G, such that every element of Ḡ and G̃ is a measurable bijection of Θ and A, respectively, onto itself. Under these homomorphic mappings, which are assumed to be measurable, let ḡ ∈ Ḡ and g̃ ∈ G̃ correspond to the element g ∈ G, and assume

    P_ḡθ(gB) = P_θ(B) ,  θ ∈ Θ , B ∈ 𝒜(𝒳)

and

    L(g̃a,ḡθ) = L(a,θ) ,  a ∈ A , θ ∈ Θ .

The statistical problem is then said to be invariant under the group G. An element δ ∈ D is said to be an invariant procedure under the group G if

    δ(g̃⁻¹A; g⁻¹x) = δ(A;x) ,  x ∈ 𝒳 , A ∈ 𝒜(A) .

The subclass of all those procedures which are invariant under G will be denoted by D_I ⊂ D. A Bayes procedure belonging to D_I will be referred to as a Bayes invariant procedure.

1.2 Probabilistic Preliminaries

To assist in the calculation of approximate critical regions for hypothesis testing, asymptotic results are obtained using some convergence theorems for sequences of random variables. With this in mind, two basic definitions of convergence for a sequence of random variables are now introduced, along with some of the related convergence theorems.

DEFINITION 1.2.1. A sequence of random variables {X_n}, n = 1,2,···, is said to converge to a constant c in probability (written X_n →^P c) if, for every ε > 0, P(|X_n - c| > ε) → 0 as n → ∞.
In measure theory, this definition corresponds to convergence in measure, and so measure-theoretic properties which hold for this type of convergence will, of course, carry over into this context.

DEFINITION 1.2.2. Denote by {F_n}, n = 1,2,···, the sequence of distribution functions of the random variables {X_n}, n = 1,2,···. The sequence of random variables is said to converge in law to a random variable X with distribution function F if F_n → F as n → ∞ at all continuity points of F. Such convergence is expressed as X_n →^L X. The function F is called the limiting distribution function of {X_n}. Its role is important in problems where the random variable X_n represents a statistic, computed from a sample of size n, whose actual distribution is difficult to find. In such cases the distribution function of X_n may be approximated, for large values of n, by its limiting distribution.

The following results will be important in determining limiting distributions in this paper.

THEOREM 1.2.1 (Chebyshev's Inequality). For any random variable X with mean μ and variance σ² < ∞,

    P( |X - μ| ≥ cσ ) ≤ 1/c² ,  c > 0 .

PROOF: See Rao [9], page 77.

THEOREM 1.2.2 (Khinchine's Theorem). Let {X_i}, i = 1,2,···, be independent and identically distributed and write X̄_n = (X_1 + ··· + X_n)/n. Then

    E(X_i) = μ < ∞  =>  X̄_n →^P μ .

PROOF: See Rao [9], page 92.

THEOREM 1.2.3. Let {X_n,Y_n}, n = 1,2,···, be a sequence of pairs of random variables such that X_n →^L X and Y_n →^P c, c ≠ 0. Then

    X_n Y_n →^L cX  and  X_n / Y_n →^L X/c .

PROOF: See Rao [9], page 102.

THEOREM 1.2.4. If g is a continuous function, X_n →^L X and Y_n →^P Y, then g(X_n) →^L g(X) and g(Y_n) →^P g(Y).

PROOF: See Rao [9], page 104.

Finally, a form of the multivariate central limit theorem, as given in Rao [9], page 108, will be required.

THEOREM 1.2.5.
Consider independent and identically distributed k-dimensional random variables U_n = (U_1n,···,U_kn)′, n = 1,2,···, admitting first and second order moments,

    E(U_n) = μ  and  D(U_n) = E{[U_n - E(U_n)][U_n - E(U_n)]′} = Σ .

Define the sequence of random variables

    Ū_n = (Ū_1n,···,Ū_kn)′ ,  Ū_in = (1/n) Σ_{j=1}^n U_ij ,  n = 1,2,··· .

Then the limiting distribution of √n(Ū_n - μ) is N_k(0,Σ), that is, k-variate normal with zero mean and dispersion matrix Σ.

This concludes the necessary background involved in our asymptotic computations. We turn now to the definition of a general tournament as formulated by Narayana and Zidek [8] and introduce the notation for random variables which will be used throughout this paper.

1.3 Notation and Definitions

Suppose we are given a collection of n objects (n > 1), say P = {P_1,···,P_n}. A random tournament among these objects will consist of a sequence of M (a fixed integer) comparisons (called trials) between pairs of objects in P and a rule, R, which determines the pair of objects to be compared at each of the M trials. It is assumed that at trial α, R determines a unique probability distribution on the set of all pairs of P. This distribution may depend on any observations taken during the previous α - 1 trials and possibly on the observed value of an auxiliary random variable (taken, perhaps, from a table of random numbers). It cannot, however, depend upon the future of the tournament or on unobservable characteristics of the objects in P, for in these cases it would not be determined at trial α. Together M and R constitute the design, 𝒟, of the tournament.
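As an illustration of this definition, here is a minimal sketch, not from the thesis (all names are invented), of two admissible rules R: a fixed round-robin schedule, and an adaptive rule that pairs the two current leaders. Both depend only on the results of the previous trials, as the definition requires:

```python
import itertools
import random

def round_robin_rule(history, n):
    # R for a round robin: the schedule is fixed in advance, so the
    # next pair depends only on how many trials have elapsed.
    schedule = list(itertools.combinations(range(n), 2))
    return schedule[len(history) % len(schedule)]

def leader_pairing_rule(history, n):
    # An adaptive R: pair the two players with the most wins so far.
    # It uses only observations from the previous trials.
    wins = [0] * n
    for (i, j), winner in history:
        wins[winner] += 1
    ranked = sorted(range(n), key=lambda p: -wins[p])
    return ranked[0], ranked[1]

def play(rule, n, M, prob_win=lambda i, j: 0.5, rng=random.Random(0)):
    # Run M trials; comparisons are conditionally independent given
    # the pair involved, with P(i beats j) = prob_win(i, j).
    history = []
    for _ in range(M):
        i, j = rule(history, n)
        winner = i if rng.random() < prob_win(i, j) else j
        history.append(((i, j), winner))
    return history
```

With n = 4 and M = 6, the round-robin rule compares each of the six pairs exactly once; the leader-pairing rule instead adapts its schedule to the outcomes observed so far.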
It is assumed that whenever objects P_i and P_j of P (i ≠ j) are compared, P_i is selected with probability π_ij, and that comparisons are conditionally, given the pairs involved, independent events.

Suppose we are given a tournament with a fixed design. Define the random variables N_ij and W_ij, i,j = 1,···,n, as follows:

    N_ij = the number of times i meets j in the tournament, if i ≠ j; 0, otherwise;
    W_ij = the number of times i beats j in the tournament, if i ≠ j; 0, otherwise.   (1.3.1)

In section 1.4 we calculate, up to a constant, the probability that X = x, where X′ = (N_12,W_12,N_13,W_13,···,N_ij,W_ij,···), i,j = 1,···,n, i < j. For the sake of definiteness we take the paired components (N_ij,W_ij) of X′ to be ordered lexicographically.

In practice, a sufficient statistic, Y, of X, rather than X itself, will be observed. Because the distribution of X is from the exponential family, the random variables T_i and S_i, i = 1,···,n, will be of importance, where

    T_i = Σ_{j=1}^n N_ij  and  S_i = Σ_{j=1}^n W_ij .   (1.3.2)

The tournament is independently repeated m times; that is, m observations are made of the sufficient statistic Y. Let N_ij^(ν), T_i^(ν) and S_i^(ν) denote, respectively, the independent copies of N_ij, i,j = 1,···,n, i < j, T_i and S_i, i = 1,···,n, determined by the νth replication. Define the random variables N_ijm, T_im and S_im, m ≥ 1, through the following equations:

    N_ijm = Σ_{ν=1}^m N_ij^(ν) ,  T_im = Σ_{ν=1}^m T_i^(ν) ,  S_im = Σ_{ν=1}^m S_i^(ν) .   (1.3.3)

These random variables will, in turn, constitute a sufficient statistic, T, of Y, on which all procedures derived for the case in question are based.
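The statistics in (1.3.1) and (1.3.2) are simple tallies. A minimal sketch, not from the thesis (the function name is invented), computing them from a list of trial results, each recorded as a pair ((i, j), winner):

```python
def tournament_statistics(history, n):
    # N[i][j]: times i meets j; W[i][j]: times i beats j  (1.3.1)
    N = [[0] * n for _ in range(n)]
    W = [[0] * n for _ in range(n)]
    for (i, j), winner in history:
        N[i][j] += 1
        N[j][i] += 1
        W[winner][j if winner == i else i] += 1
    # T[i] = sum_j N[i][j] and S[i] = sum_j W[i][j]  (1.3.2)
    T = [sum(N[i]) for i in range(n)]
    S = [sum(W[i]) for i in range(n)]
    return N, W, T, S
```

The pooled statistics of (1.3.3) are obtained by applying this to each of the m replications and summing the results componentwise.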
1.4 The Basic Probability Distribution for a Tournament with Given Design

Suppose we are given a tournament with design 𝒟. Then, for any given trial α, if E_ijα represents the event "i beats j on α" and F_ijα the event "i plays j on α", it is clear that E_ijα ⊂ F_ijα and so E_ijα = E_ijα ∩ F_ijα. Therefore

    P(E_ijα) = P(F_ijα) P(E_ijα ∩ F_ijα)/P(F_ijα) = P(F_ijα) P(E_ijα | F_ijα) ,

with π_ij = P(E_ijα | F_ijα). Thus

    P(E_ijα) = π_ij ξ_ijα ,  i,j = 1,···,n , i ≠ j , α = 1,···,M ,   (1.4.1)

where ξ_ijα = P(F_ijα). Let

    V_ijα = 1 if i meets j on trial α and wins; 0 otherwise,

and V_α = (V_12α,V_13α,···,V_ijα,···)′, i,j = 1,···,n, i ≠ j, α = 1,···,M, where, once again, we order the n(n - 1) components, V_ijα, lexicographically.

Let ξ_ijα be the probability that i and j meet on trial α. By the definition of a tournament given in section 1.3, ξ_ijα can depend only on observations taken from the previous trials and possibly on the observed value of an auxiliary random variable, r_α say. Explicitly, then,

    ξ_ijα = ξ_ijα(V_1,V_2,···,V_{α-1},r_α) .

Define the random variable U_ijα, i,j = 1,···,n, i < j, as follows:

    U_ijα = 1 if i meets j on trial α; 0 otherwise.

Then U_ijα = V_ijα + V_jiα and, making use of equation (1.4.1) and the fact that π_ji = 1 - π_ij, we obtain the probability of observing the value v_α on trial α:

    prob(V_α = v_α) = Π_{i<j} ξ_ijα^{u_ijα} π_ij^{v_ijα} (1 - π_ij)^{u_ijα - v_ijα} ,  α = 1,···,M ,   (1.4.2)

where v_ijα and u_ijα are the observed values of V_ijα and U_ijα, respectively. The M trials are independent and so the probability of observing the outcome V = v, where V = (V_1′,···,V_M′)′, is
    prob(V = v) = Π_{α=1}^M Π_{i<j} ξ_ijα^{u_ijα} π_ij^{v_ijα} (1 - π_ij)^{u_ijα - v_ijα}
                = ξ(v_1,···,v_M) Π_{i<j} π_ij^{w_ij} (1 - π_ij)^{n_ij - w_ij} ,   (1.4.3)

where

    w_ij = Σ_{α=1}^M v_ijα , the number of times i wins against j,
    n_ij = Σ_{α=1}^M u_ijα , the number of times i meets j, and
    ξ(v_1,···,v_M) = Π_{α=1}^M Π_{i<j} ξ_ijα^{u_ijα} .

Recall that X = (N_12,W_12,···,N_ij,W_ij,···)′. In Theorem 1.4.1 the probability distribution of X is given.

THEOREM 1.4.1. For a tournament of given design, 𝒟,

    prob(X = x) = C(x) Π_{i<j} π_ij^{w_ij} (1 - π_ij)^{n_ij - w_ij} ,

where x = (n_12,w_12,···,n_ij,w_ij,···)′.

PROOF: The outcome, X = x, of a given tournament is a function of the random variable V = (V_1′,···,V_M′)′. From equation (1.4.3) we obtain

    prob(X = x) = Σ_{v: x(v)=x} ξ(v) Π_{i<j} π_ij^{w_ij} (1 - π_ij)^{n_ij - w_ij}
                = Π_{i<j} π_ij^{w_ij} (1 - π_ij)^{n_ij - w_ij} Σ_{v: x(v)=x} ξ(v)
                = C(x) Π_{i<j} π_ij^{w_ij} (1 - π_ij)^{n_ij - w_ij} .

It should be observed that the value of C(x) depends implicitly on the design, 𝒟, through ξ.

CHAPTER II
REVIEW OF THE RESULTS FOR THE ONE STRONG PLAYER CASE

In this chapter we review the basic results of Narayana and Zidek [8]. Their work is concerned with the case where, for some unknown i and p, p ∈ [½,1], π_ij = p, j ≠ i, and π_jk = ½ for j, k ≠ i, j ≠ k. This is the case of one strong player.

Two natural statistical problems arise in connection with the one strong player case. The first is that of finding a test of the null hypothesis p = ½ against the alternative hypothesis p > ½. The second is that of determining a procedure for selecting the integer i which represents the strong player.

2.1 Basic Statistical Properties of the Model

Recall that T_i and S_i, i = 1,···,n, denote the number of trials in which object i is involved and in which object i wins, respectively.
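Up to the design-dependent constant C(x), the distribution in Theorem 1.4.1 is straightforward to evaluate. A minimal sketch, not from the thesis (the function name is invented), computing the product over pairs for given matrices of meetings n_ij, wins w_ij and winning probabilities π_ij:

```python
def unnormalized_likelihood(n_ij, w_ij, pi, n):
    # Product over i < j of pi_ij^w_ij * (1 - pi_ij)^(n_ij - w_ij),
    # i.e. Theorem 1.4.1 without the design constant C(x).
    prob = 1.0
    for i in range(n):
        for j in range(i + 1, n):
            prob *= (pi[i][j] ** w_ij[i][j]
                     * (1.0 - pi[i][j]) ** (n_ij[i][j] - w_ij[i][j]))
    return prob
```

Under player equality (all π_ij = ½) every comparison contributes a factor ½, so three comparisons give ½³ = 0.125 regardless of who won.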
If π_ij = ½ for i,j = 1,···,n, i ≠ j, we make the stipulation that the random variables (T_i,S_i), i = 1,···,n, and, likewise, that the random variables (2S_i - T_i), i = 1,···,n, are exchangeable. As a consequence, the random variables T_i, i = 1,···,n, are exchangeable if player equality is assumed and therefore have, under this assumption, a common mean, denoted by E(T_i) = μ. Finally, we also assume that T_i ≥ 1 for i = 1,···,n.

Let I = {1,···,n} index the set of possible strong players. Then for some unknown i ∈ I and p ∈ [½,1], π_ij = p for j = 1,···,n, j ≠ i, while π_kj = ½ for k, j ≠ i and k ≠ j. Under this assumption, Theorem 1.4.1 reduces to

    prob(X = x) = C(x) (½)^{M - t_i} p^{s_i} (1 - p)^{t_i - s_i} .   (2.1.1)

Since the invariance principle is to be invoked, the following representation of the parameter space, Θ, seems to be most convenient. Define Θ_i as the set of n-tuples whose co-ordinates are all ½ except for the ith, which is p, where p varies over the closed interval [½,1]. Then Θ = ∪_{i∈I} Θ_i.

Suppose the tournament is played through once and the random vector Y = y is observed, where Y = (T_1,S_1,···,T_n,S_n)′ and y = (t_1,s_1,···,t_n,s_n)′. If θ = (θ_1,···,θ_n)′ ∈ Θ, it follows from equation (2.1.1) that

    p(y|θ) = C(y) 2^{M(n-1)} Π_{k∈I} (½)^{M - t_k} θ_k^{s_k} (1 - θ_k)^{t_k - s_k} .

The tournament is independently repeated m times and the likelihood function obtained is

    L(θ) = C 2^{Mm(n-1)} Π_{k∈I} (½)^{Mm - T_km} θ_k^{S_km} (1 - θ_k)^{T_km - S_km} ,
where C is a constant depending upon the observed values of the random vectors {Y^(ν)}, ν = 1,···,m. Here Y^(ν) denotes the copy of Y determined by the νth replication. From the Factorization Theorem (see, for example, Lindgren [7], page 194) it follows that the statistic T = t(Y) = (T_1m,S_1m,···,T_nm,S_nm)′ is sufficient for the family of underlying distributions. The probability distribution of the sufficient statistic T is given by

    q(t|θ) = c(t) 2^{Mm(n-1)} Π_{i∈I} (½)^{Mm - t_i} θ_i^{s_i} (1 - θ_i)^{t_i - s_i} .

This distribution is invariant under a transformation group G isomorphic to S_n, the group consisting of the n! permutations on I. More precisely, for X = (X_1,···,X_n) ∈ Rⁿ, define the action of S_n on X in the following manner. For σ ∈ S_n write

    σ = ( 1, 2, ···, n ; σ(1), σ(2), ···, σ(n) ) .

By the notation σ(a_1,···,a_n) we mean that the object in position k, k = 1,···,n, moves, under σ, into position σ(k). Now, for each i ∈ I there exists k ∈ I such that σ(k) = i. So the ith coordinate of σ(X_1,···,X_n), which becomes X_k, can be represented as X_{σ⁻¹(i)}. Hence, we write

    σ(X_1,···,X_n) = (X_{ρ(1)},···,X_{ρ(n)}) ,  ρ = σ⁻¹ .
We r e s t r i c t ourselves to loss functions which are i n v a r i a n t ; that i s l o s s functions L such that L(ga,g9) = L(a,6) , where g i s the homomorphic image of g . Under t h i s condition, the problem w i l l be i n v a r i a n t under G and we may r e s t r i c t our a t t e n t i o n to the class of i n v a r i a n t procedures. - 18 -2.2. Testing the Hypothesis of Player E q u a l i t y Let ® * = (%,•••,%) e R n and ® * to test the n u l l hypothesis ® - ® . We wish then o H : 9 e ® * o o against the a l t e r n a t i v e n± : 0 e ® * . The f i r s t t e s t constructed i s the l i k e l i h o o d r a t i o t e s t . The l i k e l i h o o d r a t i o , L, i s defined to be L = max L(0) 0e ® * o  max L(9) 0e ® * The n u l l hypothesis i s then rejected i f and only i f L <_ c(a) , where the constant c(a) i s chosen so that the s i z e of the te s t i s a . Now, where max L(0) 0e ®„ Mm-T. S. T. -S. C(Y) max (%) (p ±) (1 - p ^ i e l , .- im , % i f - — < & im  Cim T. im otherwise - 19 -I t follows that we r e j e c t H Q i f and only i f { max ( 2 P . ) S i m ( 2 [ l - v^1^^ } _ 1 < c(a) i e l The usual procedure f o r determining c(a) i s to consider -2 log L. Under f a i r l y general conditions t h i s has, asymptotically, under 2 H Q , a chi-square d i s t r i b u t i o n . H Q i s then rejected i f -2 log L >_ 55^ ^ , where r represents the number of r e s t r i c t i o n s to which G i s subjected by the n u l l hypothesis. In t h i s case, the parameter space does not s a t i s f y the necessary conditions - i n p a r t i c u l a r , the p a r t i a l d e r i v a t i v e s -^z-— p (y 19) i do not e x i s t - and so a d i f f e r e n t method i s o u t l i n e d here f o r approximating c(a) . Observe that - l o g L = max{S. log(2p.) + (T. -S. )log(2[1-p.])} i e l P and that, under the n u l l hypothesis, E(2S_^ - T^) = 0, 2p_^  — > 1 , and P 2[1 - p_^ ] — > 1 . I t follows (although not immediately) that p ? 1 -2 log L —> max {T. (2p. 
- 1) } i e l Thus, asymptotically as m -* °°, the l i k e l i h o o d r a t i o test i s equivalent [8] contains an e r r o r at t h i s point. There i t i s asserted that the r e s u l t i s true without the f a c t o r 2 i n -2 log L. The e r r o r r e s u l t s because only one (and not the required two) terms are retained from the s e r i e s expansions of log (2p.) and log(2[1-p.]). Our proof of Theorem (3.3.4) w i l l c l a r i f y the r o l e of t h i s second term. - 20 -to the test which rejects the hypothesis H Q : 6 e ® Q i f max { / l . (2p. - 1) } > c(ct): J T im l — i e l where c(ct)* = /-2 log c(a) . It seems d i f f i c u l t to determine the exact distribution of max { /F. (2p. - 1) } under H_. : however, the random . T i m r i ° i e l vector [2S, -T- ] [2S - T ] lm lm nm nm /T lm / T nm converges in law under the n u l l hypothesis to Z' = (Z^,«««,Z n), where Z has a multivariate normal distribution N n(u,£), where u = (0,**-,0)' and £ = 0 . . = \ 11 n - 1 i f i = j i f i ^ j (2.2.1) Denoting max ([2S. - T. ],0) by [2S. - T. ] + , the random variable im im im im + /T. (2p. - 1) can be written as [2S. - T. ] im im im l St. and so, under the n u l l im hypothesis, A Helmert transformation of Z yields n - 1 independent and identically distributed normal random variables Y, with mean 0 and variance — n — r 1 n - 1 - 21 -An approximate value for c* can then be expressed as the solution of an equation involving an (n - 1) fold integral which is possible to evaluate by numerical means. This procedure is outlined in detail in Chapter III. The sufficient s t a t i s t i c , T, has been observed in section 2.1 to be invariant under a group, G, of transformations on T , the domain of T. The second method exployed for testing the n u l l hypothesis is to construct the Bayes procedures within the class of invariant test procedures. 
For this problem, the action space, $A$, consists of two elements:

$a_0$ : accept the null hypothesis, $\theta \in \Theta_0^*$;
$a_1$ : accept the alternative hypothesis, $\theta \in \Theta_1^*$.

The three transformation groups involved are

$G = \{g_\sigma : \mathcal{T} \to \mathcal{T} \mid g_\sigma$ is defined by equation (2.1.2)$\}$,
$\bar{G} = \{\bar{g}_\sigma : \Theta \to \Theta \mid \bar{g}_\sigma$ is defined by equation (2.1.3)$\}$,
$\tilde{G} = \{\tilde{g} : A \to A \mid \tilde{g}$ is the identity mapping on $A\}$.

Since this method is also used later in the selection problem, the following general result is first obtained.

LEMMA 2.2.1. For $i \in I$ let $L_i(a,p) = L(a,\theta)$ when $\theta \in \Theta_i$. A Bayes invariant procedure with respect to a prior measure $\Pi$ on the Borel subsets of $[1/2,1]$, evaluated at $t = (t_{1m},s_{1m},\dots,t_{nm},s_{nm})$, is any probability distribution which minimizes

$\sum_{i \in I} \int_{1/2}^{1} \int_A L_i(a,p)\,dP(a)\,(2p)^{s_{im}}(2[1-p])^{t_{im}-s_{im}}\,d\Pi(p)$

as a function of $P$.

Lemma 2.2.1 is now easily applied to the problem of testing the null hypothesis. As a particular example, Narayana and Zidek [8] construct the class of Bayes invariant tests under zero-one loss. That is,

$L(a_0,\theta) = 1$ if $\theta \in \Theta_1^*$, and $= 0$ if $\theta \in \Theta_0^*$,

and

$L(a_1,\theta) = 0$ if $\theta \in \Theta_1^*$, and $= 1$ if $\theta \in \Theta_0^*$.

Note that

$L(a_0,\theta) = 1$ for all $\theta \in \Theta - \Theta_0^*$ and $= 0$ for $\theta \in \Theta_0^*$
$\iff L(a_0,\theta) = 1$ for all $\theta \in \bigcup_{i \in I}(\Theta_i - \Theta_0^*)$ and $= 0$ for $\theta \in \Theta_0^*$
$\iff L(a_0,\theta) = 1$ for all $\theta \in \Theta_i - \Theta_0^*$ and $= 0$ for $\theta \in \Theta_0^*$, for all $i \in I$.

A similar comment holds for $L(a_1,\theta)$. Consequently, the zero-one loss on $(A \times \Theta)$ induces a zero-one loss on the sets $(A \times \Theta_i)$, $i \in I$, and so the loss functions $L_i(a,\theta)$, $i \in I$, are zero-one loss functions. If $F$ denotes the conditional distribution function of $p$ given $p > 1/2$, Lemma 2.2.1 implies the following result.

THEOREM 2.2.1. For zero-one loss the class of all Bayes invariant tests of $\Theta_0^*$ against $\Theta_1^*$ consists of all tests of the form: reject $\Theta_0^*$ if and only if
$\sum_{i \in I} \int_{1/2}^{1} (2p)^{s_{im}}(2[1-p])^{t_{im}-s_{im}}\,dF(p) \ge c$  (2.2.2)

when $t = (t_{1m},s_{1m},\dots,t_{nm},s_{nm})'$, for some constant $c \ge 0$.

For a given prior measure $\Pi$ on the Borel subsets of $[1/2,1]$, the constant $c$ can be evaluated if the prior probabilities of $\Theta_0$ and $\Theta_1$ are known. However, it may be preferable to place the emphasis on the size of the test and thus choose the value $c$ to depend upon $\alpha$. In this case, the limiting form of the test when $\theta \in \Theta_0$ is of assistance in approximating the value of $c$.

Suppose $F$ has a monotonically decreasing density $f$ which is continuous at $p = 1/2$. With this restriction it can be shown that, as $m \to \infty$,

$\sqrt{m} \int_{1/2}^{1} (2p)^{S_{im}}(2[1-p])^{T_{im}-S_{im}} f(p)\,dp$

converges in law, under the null hypothesis, to

$\frac{f(1/2)}{2\sqrt{\mu}} \int_0^\infty e^{-t^2/2 + tZ_i}\,dt$  $(i \in I)$,

where $\mu = E(N_i)$ and the distribution of $Z_i$ is that associated with equation (2.2.1). Let $M(z)$ denote Mill's ratio, that is,

$M(z) = \int_z^\infty \phi(t)\,dt \,\Big/\, \phi(z)$,

where $\phi(z)$ denotes the standard normal density function. As a consequence of the above result, any test of the form (2.2.2) is asymptotically equivalent to one which rejects $\Theta_0^*$ if and only if

$\sum_{i \in I} \frac{M(-Z_{im})}{\sqrt{T_{im}}} \ge c^*$,

where $Z_{im} = (2S_{im}-T_{im})/\sqrt{T_{im}}$. The constant $c^*$ can be chosen by simulation, with the aid of tables for $M(z)$, in order to make the level of the test $\alpha$. An approximate test of $\theta \in \Theta_0^*$, then, rejects if and only if this sum is at least $c^*$.

2.3. Selecting the Strongest Player

Two possible methods of selecting the strongest player are considered in [8], and for both of these the Bayes invariant procedure is derived. In the first case a random subset of $I$ is selected in such a way that it includes the strongest player with probability at least as large as a prescribed constant. For the second method, a subset of prescribed size is selected.
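Returning briefly to the approximate test of Section 2.2: it turns on Mill's ratio, which is readily computed from the complementary error function. The sketch below (a minimal illustration with hypothetical totals, not the thesis's own computation) evaluates $M(z) = (1-\Phi(z))/\phi(z)$ and the test statistic $\sum_i M(-Z_{im})/\sqrt{T_{im}}$:

```python
import math

def mills(z):
    # M(z) = (1 - Phi(z)) / phi(z);  1 - Phi(z) = erfc(z / sqrt(2)) / 2
    tail = 0.5 * math.erfc(z / math.sqrt(2.0))
    return tail / (math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi))

def bayes_test_statistic(T, S):
    """sum_i M(-Z_im) / sqrt(T_im), with Z_im = (2 S_im - T_im) / sqrt(T_im)."""
    total = 0.0
    for t, s in zip(T, S):
        z = (2.0 * s - t) / math.sqrt(t)
        total += mills(-z) / math.sqrt(t)
    return total
```

Since $M(-z) = \Phi(z)/\phi(z)$ is increasing in $z$, players with large standardized win counts dominate the sum; the null hypothesis is rejected when the statistic exceeds the simulated constant $c^*$.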
In finding the Bayes invariant procedure for the first method, the action space, $A$, consists of all $2^n$ subsets of $I$, and $\tilde{G} = \{\tilde{g}_\sigma \mid \sigma \in S_n\}$, where $\tilde{g}_\sigma(a) = \{\sigma(i) \mid i \in a\}$. A loss of $L_i(\theta)$ is incurred if player $i$, $i \in I$, is included in the subset when $\theta$ is the true state of nature. In addition, $L$ units are lost if the strong player is not included. Recall that $\Theta_n = \{(1/2,\dots,1/2,p)' \mid p \in [1/2,1]\}$. For $\theta \in \Theta$ there exist $\theta_n \in \Theta_n$ and $\sigma \in S_n$ such that $\bar{g}_\sigma\theta_n = \theta$, and so the loss function is given by

$L(a, \bar{g}\theta_n) = \sum_{j \in a} L_j(\bar{g}\theta_n) + L[1 - \chi_a(\sigma(n))]$,

where $\chi_a$ is the characteristic function of $a \in A$ and $\bar{g}\,(=\bar{g}_\sigma)$ is any permutation under which $\sigma(n) = i$, the strong player. Assuming we consider only loss functions $L_i(\theta)$ such that $L_i(\theta) = L_{\sigma(i)}(\bar{g}_\sigma\theta)$, then $L(\theta,a) = L(\bar{g}\theta,\tilde{g}a)$ and the problem remains invariant under the group, $S_n$, of permutations on $n$ elements.

Let $t = (t_{1m},s_{1m},\dots,t_{nm},s_{nm})$ and

$T_i(t) = \sum_{j \ne i} \int_{1/2}^{1} L_1(p)\,(2p)^{s_{jm}}(2[1-p])^{t_{jm}-s_{jm}}\,dF(p) + (n-1)\int_{1/2}^{1} L_n(p)\,(2p)^{s_{im}}(2[1-p])^{t_{im}-s_{im}}\,dF(p) - (n-1)L\int_{1/2}^{1} (2p)^{s_{im}}(2[1-p])^{t_{im}-s_{im}}\,dF(p)$,

where $L_i(p)$ is an abbreviation for $L_i((1/2,\dots,1/2,p))$ (note that, by the invariance assumption on the loss, $L_1(p) = \dots = L_{n-1}(p)$, the common loss of including a weak player, while $L_n(p)$ is the loss of including the strong player) and $F$ is the prior measure on the Borel subsets of $(1/2,1]$. From Lemma 2.2.1 it follows that the Bayes invariant random subset selection procedure with respect to $F$ consists of including object $i$ if and only if $T_i(t) < 0$, $i \in I$. The value of $L$ can be chosen to ensure that the probability of including the strongest player is greater than or equal to any prescribed constant $\beta$, $0 < \beta \le 1$.

The second method of selecting the strongest player consists of choosing a subset of fixed size, say $s$. In this case $A$ consists of all subsets of $I$ of size $s$, and $\tilde{G}$ is defined as in the first case.
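For this fixed-size method, the rule derived below from Lemma 2.2.1 keeps the objects with the $s$ largest posterior weights $V_i = \int_{1/2}^{1} (2p)^{s_{im}}(2[1-p])^{t_{im}-s_{im}}\,dF(p)$. A sketch, assuming a uniform prior $F$ on $[1/2,1]$ and a crude midpoint rule (the constant prior density cancels from the ranking):

```python
def v_weight(t, s, steps=2000):
    """Midpoint-rule approximation of the integrand of V_i,
    (2p)^s (2(1-p))^(t-s), over [1/2, 1]; constant factors from the
    uniform prior density are irrelevant for ranking and are dropped."""
    h = 0.5 / steps
    total = 0.0
    for k in range(steps):
        p = 0.5 + (k + 0.5) * h
        total += (2.0 * p) ** s * (2.0 * (1.0 - p)) ** (t - s)
    return total * h

def select_fixed_size(T, S, size):
    # keep the indices of the `size` largest posterior weights V_i
    v = [v_weight(t, s) for t, s in zip(T, S)]
    return sorted(sorted(range(len(v)), key=lambda i: v[i])[-size:])
```

With equal totals $t_{im}$, the integrand grows pointwise in $s_{im}$ on $(1/2,1)$, so the rule simply keeps the biggest winners; the integral form matters when the totals differ.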
$L_i(a,p)$ is taken to be $0$ if $i \in a$ and $1$ otherwise. Lemma 2.2.1 implies that the Bayes invariant procedure with respect to a prior measure $F$ consists of choosing those objects corresponding to the $s$ largest values of $V_i$, where

$V_i = \int_{1/2}^{1} (2p)^{s_{im}}(2[1-p])^{t_{im}-s_{im}}\,dF(p)$.

CHAPTER III

THE TWO STRONG PLAYER CASE

A natural extension of the case treated in Chapter II is that involving a tournament which may have two strong players. The null hypothesis to be tested is that, amongst the $n$ players in a tournament, all are of equal strength. The alternative hypothesis is that two players are equally strong while the $n-2$ remaining players are equally weak. In this chapter we derive results for the problem of testing this hypothesis and, later, for that of selection. The likelihood ratio test is similar to that for the one strong player case and, indeed, is invariant under the group of permutations of the observations corresponding to the $\binom{n}{2}$ pairs of players. This fact suggests a representation of the parameter space similar to that in Chapter II. We exploit the invariance of the problem to obtain results similar in form to those of the one strong player case.

3.1. The Likelihood Function and a Sufficient Statistic

In keeping with the previous notation, $N_{ij}$, $W_{ij}$, $T_i$ and $S_i$ are as defined in equations (1.3.1) and (1.3.2). Let $I_2 = \{(i,j) \mid i < j;\ i,j = 1,\dots,n\}$ index the set of possible pairs of strong players. In order to satisfy certain conditions necessary for some of the statistical analysis, two stipulations must be made.
First, it is assumed that if $\pi_{ij} = 1/2$ for $i,j = 1,\dots,n$, $i \ne j$, the random variables $(T_{ij},S_{ij})$, $(i,j) \in I_2$, are exchangeable, where

$T_{ij} = T_i + T_j - 2N_{ij}$,
$S_{ij} = S_i + S_j - N_{ij}$.  (3.1.1)

The same is assumed for the $\bar{n} = \binom{n}{2}$ random variables $(2S_{ij} - T_{ij})$, $(i,j) \in I_2$. Consequently, the random variables $T_{ij}$, $(i,j) \in I_2$, are exchangeable under player equality and therefore have, under this assumption, a common mean, denoted by $E(T_{ij}) = \mu_2$. Secondly, we assume that $T_{ij} \ge 1$ for $(i,j) \in I_2$; that is, for any pair $(i,j) \in I_2$, either $i$ plays someone else other than $j$ or $j$ plays someone else other than $i$.

Finally, for some unknown pair $(i,j) \in I_2$ and $p \in [1/2,1]$,

$\pi_{k\ell} = p$ if $k = i$ and $\ell \ne j$, or $k = j$;
$\pi_{k\ell} = 1-p$ if $k \ne i$ and $\ell = j$, or $\ell = i$;
$\pi_{k\ell} = 1/2$ otherwise,  (3.1.2)

for any pair $(k,\ell) \in I_2$, $(k,\ell) \ne (i,j)$. (Note that $(k,\ell) \in I_2$ implies $k < \ell$.)

Recall that $X = (N_{12},W_{12},\dots,N_{ij},W_{ij},\dots)'$ is the random vector denoting the outcome of a tournament. Let $x = (n_{12},w_{12},\dots,n_{ij},w_{ij},\dots)'$. In Theorem 3.1.1, $\mathrm{prob}(X = x)$ is determined under the conditions imposed by equations (3.1.2).

THEOREM 3.1.1. For the two strong player case,

$\mathrm{prob}(X = x) = C(x)\,(1/2)^{M-t_{ij}}\,p^{s_{ij}}(1-p)^{t_{ij}-s_{ij}}$,

where $t_{ij}$ and $s_{ij}$ are the observed values of $T_{ij}$ and $S_{ij}$, respectively, and $(i,j)$ is the (unknown) strong pair.

PROOF: From Theorem 1.4.1 we have

$\mathrm{prob}(X = x) = C(x) \prod_{(k,\ell) \in J} \pi_{k\ell}^{w_{k\ell}}(1-\pi_{k\ell})^{n_{k\ell}-w_{k\ell}}$,

where $J = \{(k,\ell) \mid k < \ell;\ k,\ell = 1,\dots,n\}$. For simplicity of notation let

$J_1 = \{(k,\ell) \in J \mid k = i$ and $\ell \ne j\} \cup \{(k,\ell) \in J \mid k = j\}$,
$J_2 = \{(k,\ell) \in J \mid k \ne i$ and $\ell = j\} \cup \{(k,\ell) \in J \mid \ell = i\}$,
$J_3 = J - (J_1 \cup J_2)$.

Then $J_1$, $J_2$ and $J_3$ are disjoint and $J_1 \cup J_2 \cup J_3 = J$. Furthermore, equations (3.1.2) imply that
$\pi_{k\ell} = p$ if $(k,\ell) \in J_1$, $= 1-p$ if $(k,\ell) \in J_2$, and $= 1/2$ if $(k,\ell) \in J_3$,

and, hence,

$\mathrm{prob}(X = x) = C(x) \prod_{J_1} p^{w_{k\ell}}(1-p)^{n_{k\ell}-w_{k\ell}} \prod_{J_2} (1-p)^{w_{k\ell}}\,p^{n_{k\ell}-w_{k\ell}} \prod_{J_3} (1/2)^{n_{k\ell}}$
$= C(x)\,[p^{B_1}(1-p)^{B_2}]\,[(1-p)^{B_3} p^{B_4}]\,[(1/2)^{B_5}]$
$= C(x)\,(1/2)^{B_5}\,p^{B_1+B_4}(1-p)^{B_2+B_3}$,  (3.1.3)

where

$B_1 = \sum_{J_1} w_{k\ell}$, $B_2 = \sum_{J_1} (n_{k\ell}-w_{k\ell})$, $B_3 = \sum_{J_2} w_{k\ell}$, $B_4 = \sum_{J_2} (n_{k\ell}-w_{k\ell})$, $B_5 = \sum_{J_3} n_{k\ell}$.

Keeping in mind that, for any $(k,\ell) \in J$, $k < \ell$, we obtain

$B_1 + B_4 = \sum_{J_1} w_{k\ell} + \sum_{J_2} (n_{k\ell} - w_{k\ell})$.

In the extended notation $w_{\ell k} = n_{k\ell} - w_{k\ell}$, these two sums together count every win by $i$ against a player other than $j$ and every win by $j$ against a player other than $i$, so that

$B_1 + B_4 = (s_i - w_{ij}) + (s_j - w_{ji}) = s_i + s_j - n_{ij} = s_{ij}$.

Similarly,

$B_2 + B_3 = t_{ij} - s_{ij}$ and $B_5 = M - t_{ij}$.

Substituting these values back into equation (3.1.3) gives the desired result.

We now devise a representation of the parameter space. By an $\binom{n}{2}$-vector we shall mean a vector with $\bar{n} = \binom{n}{2}$ components which are labelled lexicographically by the index set $I_2$. Define $\Theta_{ij}$ to be the $\bar{n}$-vector whose components are all $1/2$ except for the $(i,j)$th component, which is $p$, where $p$ varies over the closed interval $[1/2,1]$. The parameter space $\Theta$ can then be expressed as

$\Theta = \bigcup_{(i,j) \in I_2} \Theta_{ij}$.

Let $Y = (T_{12},S_{12},\dots,T_{ij},S_{ij},\dots)'$ and $y = (t_{12},s_{12},\dots,t_{ij},s_{ij},\dots)'$. We state, using this notation, the following corollary, which is a consequence of Theorem 3.1.1.

COROLLARY 3.1.1. If $\theta = (\theta_{12},\dots,\theta_{ij},\dots)' \in \Theta$, then

$p(y \mid \theta) = C(y)\,2^{M(\bar{n}-1)} \prod_{(k,\ell) \in I_2} (1/2)^{M-t_{k\ell}}\,\theta_{k\ell}^{s_{k\ell}}(1-\theta_{k\ell})^{t_{k\ell}-s_{k\ell}}$,

where $\bar{n} = \binom{n}{2}$.

PROOF: Note that, for $p = 1/2$, $(1/2)^{M-t_{ij}}\,p^{s_{ij}}(1-p)^{t_{ij}-s_{ij}} = (1/2)^M$, and that for $\theta \in \Theta$, $\bar{n}-1$ of the co-ordinates are $1/2$ while one co-ordinate is $p$. Thus, from Theorem 3.1.1, we have
$\mathrm{prob}(X = x) = C(x)\,(1/2)^{M-t_{ij}}\,p^{s_{ij}}(1-p)^{t_{ij}-s_{ij}} = C(x)\,2^{M(\bar{n}-1)} \prod_{(k,\ell) \in I_2} (1/2)^{M-t_{k\ell}}\,\theta_{k\ell}^{s_{k\ell}}(1-\theta_{k\ell})^{t_{k\ell}-s_{k\ell}}$.

Now $Y$ is a function of $X$, and it follows that

$p(y \mid \theta) = \sum_{x:\,y(x)=y} \mathrm{prob}(X = x)$
$= \sum_{x:\,y(x)=y} C(x)\,2^{M(\bar{n}-1)} \prod_{(k,\ell) \in I_2} (1/2)^{M-t_{k\ell}}\,\theta_{k\ell}^{s_{k\ell}}(1-\theta_{k\ell})^{t_{k\ell}-s_{k\ell}}$
$= C(y)\,2^{M(\bar{n}-1)} \prod_{(k,\ell) \in I_2} (1/2)^{M-t_{k\ell}}\,\theta_{k\ell}^{s_{k\ell}}(1-\theta_{k\ell})^{t_{k\ell}-s_{k\ell}}$,

where $C(y) = \sum_{x:\,y(x)=y} C(x)$, the desired conclusion.

It is understood that the function $C(y)$ is not the same as the function $C(x)$; however, in an attempt to keep the notation relatively uncomplicated, no differentiation is made between the two, since these functions play no role in subsequent analysis.

Suppose $m$ observations are made of the random variable $Y$; that is, the tournament is independently repeated $m$ times. Let $Y^{(v)} = (T_{12}^{(v)},S_{12}^{(v)},\dots,T_{ij}^{(v)},S_{ij}^{(v)},\dots)'$, $v = 1,\dots,m$, denote the random vectors corresponding to these $m$ observations and write

$T_{ijm} = \sum_{v=1}^m T_{ij}^{(v)}$, $S_{ijm} = \sum_{v=1}^m S_{ij}^{(v)}$.  (3.1.4)

Then the likelihood function, $L(\theta)$, is easily computed and we have

THEOREM 3.1.2. If $\theta = (\theta_{12},\dots,\theta_{ij},\dots)' \in \Theta$, the likelihood function for the random sample $(Y^{(1)},\dots,Y^{(m)})$ is

$L(\theta) = C\,2^{Mm(\bar{n}-1)} \prod_{(i,j) \in I_2} (1/2)^{Mm-T_{ijm}}\,\theta_{ij}^{S_{ijm}}(1-\theta_{ij})^{T_{ijm}-S_{ijm}}$,

where $T_{ijm}$ and $S_{ijm}$ are as defined in equation (3.1.4), and $C = \prod_{v=1}^m C(Y^{(v)})$.

The Factorization Theorem implies that $T = t(Y) = (T_{12m},S_{12m},\dots,T_{ijm},S_{ijm},\dots)'$ is sufficient for the family of underlying distributions.

3.2. The Likelihood Ratio Test

Let $\Theta_0^* = \{(1/2,\dots,1/2)\}$, an $\binom{n}{2}$-vector, and $\Theta_1^* = \Theta - \Theta_0^*$. The null hypothesis, that the $n$ players are all equal, can formally be written $\theta \in \Theta_0^*$.
To test this hypothesis against the alternative of two equally strong players and $n-2$ equally weak players (i.e. $\theta \in \Theta_1^*$), it is natural to consider the likelihood ratio test.

THEOREM 3.2.1. The likelihood ratio test rejects the null hypothesis if

$\left\{ \max_{(i,j) \in I_2} (2\hat{p}_{ijm})^{S_{ijm}} (2[1-\hat{p}_{ijm}])^{T_{ijm}-S_{ijm}} \right\}^{-1} \le c$,

where

$\hat{p}_{ijm} = 1/2$ if $S_{ijm}/T_{ijm} < 1/2$, and $\hat{p}_{ijm} = S_{ijm}/T_{ijm}$ otherwise.  (3.2.1)

PROOF: Let $f(p) = p^a(1-p)^b$, where $a+b > 0$ and $b \ge 0$. Then, if $p \in (0,1)$, $f'(p) > 0$, $= 0$, or $< 0$ according as $p <$, $=$, or $> \frac{a}{a+b}$. Furthermore, $f(0) = f(1) = 0$, and so the maximum of $f(p)$ over the interval $[0,1]$ occurs at $p = \frac{a}{a+b}$. (The conditions $a+b > 0$ and $b \ge 0$ ensure that $0 \le \frac{a}{a+b} \le 1$.) Over the interval $[1/2,1]$ the maximum occurs at $1/2$ if $\frac{a}{a+b} < 1/2$, and at $\frac{a}{a+b}$ otherwise. Now,

$\max_{\theta \in \Theta} L(\theta) = \max_{(i,j) \in I_2}\ \max_{p \in [1/2,1]} C\,2^{Mm(\bar{n}-1)}\,(1/2)^{Mm-T_{ijm}}\,p^{S_{ijm}}(1-p)^{T_{ijm}-S_{ijm}}$
$= C\,2^{Mm(\bar{n}-1)} \max_{(i,j) \in I_2} (1/2)^{Mm-T_{ijm}}\,(\hat{p}_{ijm})^{S_{ijm}}(1-\hat{p}_{ijm})^{T_{ijm}-S_{ijm}}$.

So the likelihood ratio, $L$, is

$L = \frac{\max_{\theta \in \Theta_0^*} L(\theta)}{\max_{\theta \in \Theta} L(\theta)} = \left\{ \max_{(i,j) \in I_2} (2\hat{p}_{ijm})^{S_{ijm}} (2[1-\hat{p}_{ijm}])^{T_{ijm}-S_{ijm}} \right\}^{-1}$.

Since the likelihood ratio test rejects the null hypothesis if $L \le c$, the proof is complete.

3.3. Asymptotic Results for the Likelihood Ratio Test

As noted before, the parameter space, $\Theta$, does not satisfy the conditions which imply that $-2\log L$ has, asymptotically under $H_0$, a chi-square distribution. In order to calculate an approximate value of $c$ for an $\alpha$-level likelihood ratio test we follow, instead, the procedure outlined in Chapter II. First, with the help of a few preliminary results, an asymptotically equivalent form of the likelihood ratio test is obtained.

THEOREM 3.3.1.
Under $H_0$, $E(2S_{ij} - T_{ij}) = 0$ for any $(i,j) \in I_2$.

PROOF: Note that

$\sum_{(i,j) \in I_2} (2S_{ij} - T_{ij}) = \sum_{(i,j) \in I_2} \{(2S_i - T_i) + (2S_j - T_j)\} = (n-1)\sum_{i=1}^n (2S_i - T_i) = 0$,

since each player appears in $n-1$ pairs and every game produces exactly one win, so that $\sum_i S_i = M$ and $\sum_i T_i = 2M$. Therefore $\sum_{(i,j)} E(2S_{ij} - T_{ij}) = E(\sum_{(i,j)} (2S_{ij} - T_{ij})) = E(0) = 0$. Under the null hypothesis, the random variables $\{2S_{ij} - T_{ij}\}_{(i,j) \in I_2}$ are exchangeable and thus have common expectation. Therefore $\bar{n}\,E(2S_{ij} - T_{ij}) = 0$, which implies $E(2S_{ij} - T_{ij}) = 0$ for any $(i,j) \in I_2$.

THEOREM 3.3.2. Under $H_0$,

(a) $\mathrm{var}(2S_{ij} - T_{ij}) = E(T_{ij}) = \mu_2$ for any $(i,j) \in I_2$;

(b) $\mathrm{cov}(2S_{ij} - T_{ij},\,2S_{k\ell} - T_{k\ell}) = -\mu_2/(\bar{n}-1)$ for any $(i,j),(k,\ell) \in I_2$, $(i,j) \ne (k,\ell)$, where $\bar{n} = \binom{n}{2}$.

PROOF: (a) The random variable $(2S_{ij} - T_{ij}) = (2S_i - T_i) + (2S_j - T_j)$ is the number of successes less failures achieved by $i$ and $j$ together, after the games in which $i$ plays $j$ are neglected. It can be decomposed into the sum of $M$ random variables $W_{ija}$, $a = 1,\dots,M$, where

$W_{ija} = 1$ if $i$ or $j$ (but not both) plays on trial $a$ and wins;
$W_{ija} = 0$ if neither $i$ nor $j$ plays on trial $a$, or both $i$ and $j$ play on trial $a$;
$W_{ija} = -1$ if $i$ or $j$ (but not both) plays on trial $a$ and loses.

Under $H_0$ we have player equality, and so the probability of $i$ winning on trial $a$, given that $i$ plays on trial $a$, is the same as the probability that $i$ loses on trial $a$, given that $i$ plays on trial $a$. Define the events $D_{ia}$, $E_{ia}$ and $F_{ija}$ as follows:

$D_{ia}$ = the event "$i$ plays and loses on trial $a$";
$E_{ia}$ = the event "$i$ plays and wins on trial $a$";
$F_{ija}$ = the event "$i$ plays $j$ on trial $a$".
Using this notation, the above remarks imply

$\mathrm{prob}(E_{ia}) = \mathrm{prob}(D_{ia})$.  (3.3.1)

Let $p_{ija} = \mathrm{prob}(W_{ija} = 1)$. Then

$p_{ija} = \mathrm{prob}(E_{ia}) + \mathrm{prob}(E_{ja}) - \mathrm{prob}(F_{ija})$.  (3.3.2)

Equations (3.3.1) and (3.3.2) together imply that

$\mathrm{prob}(W_{ija} = 1) = p_{ija}$, $\mathrm{prob}(W_{ija} = -1) = p_{ija}$, $\mathrm{prob}(W_{ija} = 0) = 1 - 2p_{ija}$.  (3.3.3)

Consequently, $E(W_{ija}) = 0$ and $\mathrm{var}(W_{ija}) = 2p_{ija}$. Now, let $p_{ija\beta}(\cdot,\cdot)$ be short for $\mathrm{prob}(W_{ija} = \cdot,\ W_{ij\beta} = \cdot)$. By an argument similar to that used in establishing (3.3.3), it follows that, under the null hypothesis, $p_{ija\beta}(1,1) = p_{ija\beta}(-1,1) = p_{ija\beta}(1,-1) = p_{ija\beta}(-1,-1)$. Therefore $\mathrm{cov}(W_{ija},W_{ij\beta}) = 0$, and hence

$\mathrm{var}(2S_{ij} - T_{ij}) = \sum_{a=1}^M \mathrm{var}(W_{ija}) + 2\sum_{a<\beta} \mathrm{cov}(W_{ija},W_{ij\beta}) = \sum_{a=1}^M 2p_{ija}$.

The random variable $T_{ij} = (T_i - N_{ij}) + (T_j - N_{ij})$ can be expressed as the sum of $M$ random variables $V_{ija}$, $a = 1,\dots,M$, where

$V_{ija} = 1$ if $i$ or $j$ (but not both) plays on trial $a$, and $V_{ija} = 0$ if neither or both play on trial $a$.

Equation (3.3.3) implies that $\mathrm{prob}(V_{ija} = 1) = 2p_{ija}$ and $\mathrm{prob}(V_{ija} = 0) = 1 - 2p_{ija}$. Therefore $E(V_{ija}) = 2p_{ija}$ and

$E(T_{ij}) = \sum_{a=1}^M E(V_{ija}) = \sum_{a=1}^M 2p_{ija} = \mathrm{var}(2S_{ij} - T_{ij})$.

This completes the proof of part (a).

(b) Because of exchangeability under the null hypothesis, we write $E(T_{ij}) = \mu_2$. Now, $\sum_{(i,j) \in I_2} (2S_{ij} - T_{ij}) = 0$ implies

$\mathrm{var}\Big(\sum_{(i,j) \in I_2} (2S_{ij} - T_{ij})\Big) = 0$,

from which we deduce, successively,

$\sum_{(i,j) \in I_2} \mathrm{var}(2S_{ij}-T_{ij}) + \sum\sum_{(i,j) \ne (k,\ell)} \mathrm{cov}(2S_{ij}-T_{ij},\,2S_{k\ell}-T_{k\ell}) = 0$,

$\bar{n}\mu_2 + \bar{n}(\bar{n}-1)\,\mathrm{cov}(2S_{ij}-T_{ij},\,2S_{k\ell}-T_{k\ell}) = 0$,

and

$\mathrm{cov}(2S_{ij}-T_{ij},\,2S_{k\ell}-T_{k\ell}) = -\frac{\mu_2}{\bar{n}-1}$.

Let $(i,j) \in I_2$ be fixed. Since we are concerned in this section with asymptotic results, we write $\hat{p}_{ijm}$ instead of $\hat{p}_{ij}$ to emphasize the fact that $\hat{p}$ is a random variable depending upon $m$.

LEMMA 3.3.1. If $|f_n| \le |g_n|$ and $g_n \xrightarrow{P} 0$, then $f_n \xrightarrow{P} 0$.
PROOF: Note that $|f_n(x)| \ge \varepsilon \Rightarrow |g_n(x)| \ge \varepsilon$. Therefore

$\{x \mid |f_n(x) - 0| \ge \varepsilon\} \subseteq \{x \mid |g_n(x) - 0| \ge \varepsilon\}$,

which implies $P(\{x \mid |f_n(x) - 0| \ge \varepsilon\}) \le P(\{x \mid |g_n(x) - 0| \ge \varepsilon\})$. As $n \to \infty$, the right hand side of the inequality approaches $0$, giving the desired result.

THEOREM 3.3.3. For any $(i,j) \in I_2$ the following results hold as $m \to \infty$, if the null hypothesis is true:

(a) $2\hat{p}_{ijm} \xrightarrow{P} 1$;
(b) $2(1 - \hat{p}_{ijm}) \xrightarrow{P} 1$;
(c) $\sqrt{T_{ijm}/m} \xrightarrow{P} \sqrt{\mu_2}$.

PROOF: (a) By definition, either $|2\hat{p}_{ijm} - 1| = 0$ or $|2\hat{p}_{ijm} - 1| = |2S_{ijm} - T_{ijm}|/T_{ijm}$. In any case, since $T_{ijm} \ge m$,

$|2\hat{p}_{ijm} - 1| \le \frac{|2S_{ijm} - T_{ijm}|}{T_{ijm}} \le \frac{1}{m}\left| \sum_{v=1}^m (2S_{ij}^{(v)} - T_{ij}^{(v)}) \right|$.

But the random variables $(2S_{ij}^{(v)} - T_{ij}^{(v)})$, $v = 1,2,\dots$, are independent and identically distributed with mean $0$ under the null hypothesis, and so, by Theorem 1.2.2,

$\frac{1}{m}\sum_{v=1}^m (2S_{ij}^{(v)} - T_{ij}^{(v)}) \xrightarrow{P} 0$.

By Lemma 3.3.1, $2\hat{p}_{ijm} - 1 \xrightarrow{P} 0$.

(b) $2(1 - \hat{p}_{ijm}) = 1 - (2\hat{p}_{ijm} - 1)$, and $(2\hat{p}_{ijm} - 1)$ converges to $0$ in probability by (a).

(c) Note that $T_{ijm} = \sum_{v=1}^m T_{ij}^{(v)}$ and that the random variables $T_{ij}^{(v)}$, $v = 1,2,\dots$, are independent and identically distributed with mean $\mu_2$. Thus, by Theorem 1.2.2, $T_{ijm}/m \xrightarrow{P} \mu_2$. Since $g(x) = \sqrt{x}$ is a continuous function, Theorem 1.2.4 implies that $\sqrt{T_{ijm}/m} \xrightarrow{P} \sqrt{\mu_2}$.

LEMMA 3.3.2. Let $\{X_i\}$, $i = 1,2,\dots$, be independent and identically distributed random variables with mean $0$ and variance $\sigma^2 < \infty$. Then for $\delta < 1/2$, $m^\delta \bar{X} \xrightarrow{P} 0$ as $m \to \infty$, where $\bar{X} = \frac{1}{m}\sum_{i=1}^m X_i$.

PROOF: $\bar{X}$ has mean $0$ and variance $\sigma^2/m$. By Theorem 1.2.1,

$P(|m^\delta \bar{X}| \ge \varepsilon) \le \frac{\sigma^2}{\varepsilon^2}\,m^{2\delta - 1}$,

and so $\lim_{m \to \infty} P(|m^\delta \bar{X}| \ge \varepsilon) = 0$ provided $\delta < 1/2$.

THEOREM 3.3.4. Under $H_0$,

(a) $S_{ijm}\log(2\hat{p}_{ijm}) - S_{ijm}(2\hat{p}_{ijm} - 1) + S_{ijm}\dfrac{(2\hat{p}_{ijm}-1)^2}{2} \xrightarrow{P} 0$ as $m \to \infty$;

(b) $(T_{ijm} - S_{ijm})\log(2[1-\hat{p}_{ijm}]) + (T_{ijm} - S_{ijm})(2\hat{p}_{ijm} - 1) + (T_{ijm} - S_{ijm})\dfrac{(2\hat{p}_{ijm}-1)^2}{2} \xrightarrow{P} 0$ as $m \to \infty$.
PROOF: (a) Let

$f_m = S_{ijm}\left[ \log(2\hat{p}_{ijm}) - (2\hat{p}_{ijm}-1) + \frac{(2\hat{p}_{ijm}-1)^2}{2} \right]$.

If $\hat{p}_{ijm} = 1/2$ for some $m$, then $f_m = 0$. Otherwise, note that, since $0 \le 2\hat{p}_{ijm} - 1 \le 1$, we can write

$\log(2\hat{p}_{ijm}) = \log(1 + (2\hat{p}_{ijm}-1)) = (2\hat{p}_{ijm}-1) - \frac{(2\hat{p}_{ijm}-1)^2}{2} + \frac{(2\hat{p}_{ijm}-1)^3}{3} - \frac{(2\hat{p}_{ijm}-1)^4}{4} + \cdots$.

Therefore,

$f_m = S_{ijm}\left[ \frac{(2\hat{p}_{ijm}-1)^3}{3} - \frac{(2\hat{p}_{ijm}-1)^4}{4} + \cdots \right] = S_{ijm}(2\hat{p}_{ijm}-1)^3 \left[ \frac{1}{3} - \frac{2\hat{p}_{ijm}-1}{4} + \cdots \right] = g_m h_m$,

where $g_m = S_{ijm}(2\hat{p}_{ijm}-1)^3$ and $h_m = \frac{1}{3} - \frac{2\hat{p}_{ijm}-1}{4} + \cdots$. From Theorem 3.3.3, Theorem 1.2.4 and standard results for convergence in measure, it follows that $h_m \xrightarrow{P} \frac{1}{3}$ as $m \to \infty$. Also, since here $\hat{p}_{ijm} = S_{ijm}/T_{ijm}$, so that $S_{ijm} = \frac{2\hat{p}_{ijm}}{2}\,T_{ijm}$ and $2\hat{p}_{ijm}-1 = (2S_{ijm}-T_{ijm})/T_{ijm}$, and since $T_{ijm} \ge m$,

$|g_m| = \frac{2\hat{p}_{ijm}}{2}\cdot\frac{|2S_{ijm}-T_{ijm}|^3}{T_{ijm}^2} \le \frac{2\hat{p}_{ijm}}{2}\left( \frac{|2S_{ijm}-T_{ijm}|}{m^{2/3}} \right)^3$.

By Lemma 3.3.2 with $\delta = 1/3$,

$\frac{2S_{ijm}-T_{ijm}}{m^{2/3}} = m^{1/3}\cdot\frac{1}{m}\sum_{v=1}^m (2S_{ij}^{(v)} - T_{ij}^{(v)})$

converges to zero in probability. Also, by Theorem 3.3.3, $2\hat{p}_{ijm} \xrightarrow{P} 1$, and thus the product converges to zero in probability. By Lemma 3.3.1, $g_m \xrightarrow{P} 0$ and thus, finally, $f_m = g_m h_m \xrightarrow{P} 0$.

(b) The proof is similar to that of (a).

The preceding theorems of this section lead to an asymptotically equivalent form of the likelihood ratio test for which the calculation of the critical region is comparatively simple. The next theorem gives this asymptotically equivalent test, while the rest of this section deals with the problem of evaluating (approximately) its critical region.

THEOREM 3.3.5. Asymptotically, as $m \to \infty$, the likelihood ratio test is equivalent to the test which rejects $H_0$ if

$\max_{(i,j) \in I_2} \{ \sqrt{T_{ijm}}\,(2\hat{p}_{ijm} - 1) \} \ge c(\alpha)^*$.

PROOF: By adding parts (a) and (b) of Theorem 3.3.4 we obtain

$\{ S_{ijm}\log(2\hat{p}_{ijm}) + (T_{ijm}-S_{ijm})\log(2[1-\hat{p}_{ijm}]) \} - S_{ijm}(2\hat{p}_{ijm}-1) + (T_{ijm}-S_{ijm})(2\hat{p}_{ijm}-1) + S_{ijm}\frac{(2\hat{p}_{ijm}-1)^2}{2} + (T_{ijm}-S_{ijm})\frac{(2\hat{p}_{ijm}-1)^2}{2} \xrightarrow{P} 0$,
or, equivalently,

$\{ S_{ijm}\log(2\hat{p}_{ijm}) + (T_{ijm}-S_{ijm})\log(2[1-\hat{p}_{ijm}]) \} - T_{ijm}(2\hat{p}_{ijm}-1)^2 + T_{ijm}\frac{(2\hat{p}_{ijm}-1)^2}{2} \xrightarrow{P} 0$.

Therefore,

$2\{ S_{ijm}\log(2\hat{p}_{ijm}) + (T_{ijm}-S_{ijm})\log(2[1-\hat{p}_{ijm}]) \} - T_{ijm}(2\hat{p}_{ijm}-1)^2 \xrightarrow{P} 0$

for any pair $(i,j) \in I_2$, and hence

$-2\log L - \max_{(i,j) \in I_2} T_{ijm}(2\hat{p}_{ijm}-1)^2 \xrightarrow{P} 0$,

so that

$\sqrt{-2\log L} - \max_{(i,j) \in I_2} \sqrt{T_{ijm}}\,(2\hat{p}_{ijm}-1) \xrightarrow{P} 0$,  (3.3.4)

since $f(x) = \sqrt{x}$ is continuous and strictly increasing. The null hypothesis is rejected if $L \le c(\alpha)$, or

$\sqrt{-2\log L} \ge \sqrt{-2\log c(\alpha)}$.  (3.3.5)

Letting $c(\alpha)^*$ denote $\sqrt{-2\log c(\alpha)}$, equation (3.3.5) combined with (3.3.4) gives the desired result.

In order to obtain an approximate value of $c(\alpha)$, consider the $\binom{n}{2}$-vector $U^{(v)}$ defined by

$U^{(v)} = (U_{12}^{(v)},\dots,U_{ij}^{(v)},\dots)' = ([2S_{12}^{(v)} - T_{12}^{(v)}],\dots,[2S_{ij}^{(v)} - T_{ij}^{(v)}],\dots)'$, $v = 1,2,\dots$.

These are independent and identically distributed random vectors admitting, under the null hypothesis, first and second order moments

$E(U^{(v)}) = 0$ (Theorem 3.3.1) and $D(U^{(v)}) = \Gamma$,

where the $\bar{n} \times \bar{n}$ dispersion matrix $\Gamma$ consists of elements

$\gamma_{ij,k\ell} = \mu_2$ if $(i,j) = (k,\ell)$, and $= -\mu_2/(\bar{n}-1)$ if $(i,j) \ne (k,\ell)$ (Theorem 3.3.2).

Define the sequence of random vectors $U_m$, $m = 1,2,\dots$, by $U_m = (U_{12m},\dots,U_{ijm},\dots)'$, where

$U_{ijm} = \frac{1}{\sqrt{m}}\sum_{v=1}^m U_{ij}^{(v)} = \frac{2S_{ijm} - T_{ijm}}{\sqrt{m}}$, $(i,j) \in I_2$.
An application of the multivariate central limit theorem (Theorem 1.2.5) yields that, under the null hypothesis, $U_m \xrightarrow{L} U$, where the distribution of $U$ is an $\binom{n}{2}$-variate normal with mean $0$ and dispersion matrix $\Gamma$. Recall that, under $H_0$, $\sqrt{T_{ijm}/m} \xrightarrow{P} \sqrt{\mu_2}$; as a consequence of Theorem 1.2.3, this implies

$\left( \frac{U_{12m}}{\sqrt{T_{12m}/m}},\dots,\frac{U_{ijm}}{\sqrt{T_{ijm}/m}},\dots \right)' = \left( \frac{2S_{12m}-T_{12m}}{\sqrt{T_{12m}}},\dots,\frac{2S_{ijm}-T_{ijm}}{\sqrt{T_{ijm}}},\dots \right)' \xrightarrow{L} \frac{1}{\sqrt{\mu_2}}\,U$.

Let $Z = \frac{1}{\sqrt{\mu_2}}U$. Since $Z$ is a linear function of $U$, it has an $\binom{n}{2}$-variate normal distribution with mean $\mu$ and dispersion matrix $\Sigma = (\sigma_{ij,k\ell})$, say. However, $E(\frac{1}{\sqrt{\mu_2}}U) = \frac{1}{\sqrt{\mu_2}}E(U) = 0$ and

$E(ZZ') = E\left[ \left(\frac{1}{\sqrt{\mu_2}}U\right)\left(\frac{1}{\sqrt{\mu_2}}U\right)' \right] = \frac{1}{\mu_2}E(UU') = \frac{1}{\mu_2}\Gamma$.

Therefore

$\sigma_{ij,k\ell} = 1$ if $(i,j) = (k,\ell)$, and $= -1/(\bar{n}-1)$ otherwise.

Now, note that

$\left( \frac{[2S_{12m}-T_{12m}]^+}{\sqrt{T_{12m}}},\dots,\frac{[2S_{ijm}-T_{ijm}]^+}{\sqrt{T_{ijm}}},\dots \right)' \xrightarrow{L} (Z_{12}^+,\dots,Z_{ij}^+,\dots)'$,  (3.3.6)

where, in general, $X^+ = \max(X,0)$. The approximate test of Theorem 3.3.5 requires that $c(\alpha)^*$ be chosen so that

$P\left( \max_{(i,j) \in I_2} \sqrt{T_{ijm}}\,(2\hat{p}_{ijm}-1) \ge c(\alpha)^* \right) = \alpha$.

By equation (3.3.6), $c(\alpha)^*$ is approximately that number $c(\alpha)^{**}$ such that

$P\left( \max_{(i,j) \in I_2} Z_{ij}^+ \le c(\alpha)^{**} \right) = 1 - \alpha$.

However, for at least one pair $(i,j) \in I_2$, $Z_{ij} \ge 0$, and so

$\max_{(i,j) \in I_2} Z_{ij}^+ = \max_{(i,j) \in I_2} Z_{ij}$.

Let $Z_{(i)}$ represent the $i$th smallest value of the random variables $\{Z_{ij}\}_{(i,j) \in I_2}$ and

$R = \{ z = (z_{12},\dots,z_{ij},\dots) \mid z_{12} \le \cdots \le z_{ij} \le \cdots \}$.

Since the random variables $\{Z_{ij}\}_{(i,j) \in I_2}$ are exchangeable, the joint distribution function, $F_1$, of $(Z_{12},\dots,Z_{ij},\dots)$ is symmetrical with respect to the co-ordinates of $z = (z_{12},\dots,z_{ij},\dots)$; letting $\bar{n} = \binom{n}{2}$, it follows that the joint distribution function, $F_2$, of the order statistic $(Z_{(1)},\dots,Z_{(\bar{n})})$ is

$F_2(z_{(1)},\dots,z_{(\bar{n})}) = \bar{n}!\,F_1(z_{(1)},\dots,z_{(\bar{n})})$ if $(z_{(1)},\dots,z_{(\bar{n})}) \in R$, and $= 0$ otherwise.
Expressed in terms of $F_2$, $c^{**}$ is that number such that $F_2(\infty,\dots,\infty,c^{**}) = 1 - \alpha$. Since $\Sigma$ is not of full rank (it has rank $\bar{n}-1$) there is no explicit determination of the density function of $F_2$; however, a suitable Helmert transformation which diagonalizes $\Sigma$ allows us to circumvent this difficulty by examining $\bar{n}-1$ independent random variables which are functions of $Z_{(1)},\dots,Z_{(\bar{n})}$ and which do have a joint density. Let $H'$ be the $\bar{n} \times \bar{n}$ Helmert matrix

$H' = \begin{pmatrix} 1/\sqrt{\bar{n}} & 1/\sqrt{\bar{n}} & \cdots & & 1/\sqrt{\bar{n}} \\ 1/\sqrt{2} & -1/\sqrt{2} & 0 & \cdots & 0 \\ 1/\sqrt{6} & 1/\sqrt{6} & -2/\sqrt{6} & \cdots & 0 \\ \vdots & & & & \vdots \\ 1/\sqrt{\bar{n}(\bar{n}-1)} & \cdots & & 1/\sqrt{\bar{n}(\bar{n}-1)} & -(\bar{n}-1)/\sqrt{\bar{n}(\bar{n}-1)} \end{pmatrix}$

and define

$Z^* = (Z_{(1)},\dots,Z_{(\bar{n})})'$, $V = H'Z^*$.

Since $\sum_{(i,j)} Z_{ij} = 0$, the first co-ordinate satisfies $V_1 = 0$. The inverse transformation is given by $Z^* = HV$. Note that, since $V$ is a linear transformation of $Z^*$ and

$H'\Sigma H = \mathrm{diag}\left( 0,\ \frac{\bar{n}}{\bar{n}-1},\ \dots,\ \frac{\bar{n}}{\bar{n}-1} \right)$,

the random variables $V_2,\dots,V_{\bar{n}}$ are independent and identically distributed with a univariate normal distribution admitting mean $0$ and variance $\frac{\bar{n}}{\bar{n}-1}$. Furthermore, the conditions

$-\infty < Z_{(1)} \le Z_{(2)} \le \cdots \le Z_{(\bar{n})} \le c^{**}$  (3.3.7)

are equivalent to

$-\infty < \sqrt{2}\,V_2 = Z_{(1)} - Z_{(2)} \le 0$,
$-\infty < \sqrt{i(i-1)}\,V_i - \sqrt{(i-1)(i-2)}\,V_{i-1} = (i-1)(Z_{(i-1)} - Z_{(i)}) \le 0$, $i = 3,\dots,\bar{n}$,

which in turn yield

$V_{\bar{n}} = -\sqrt{\frac{\bar{n}}{\bar{n}-1}}\,Z_{(\bar{n})} \ge -\sqrt{\frac{\bar{n}}{\bar{n}-1}}\,c^{**}$.  (3.3.8)
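In practice the critical value $c^{**}$ defined above can also be approximated by direct Monte Carlo. The $\bar{n}$ variables $Z_{ij}$ (unit variances, common correlation $-1/(\bar{n}-1)$) are realized exactly as $\sqrt{\bar{n}/(\bar{n}-1)}$ times centred i.i.d. standard normals, and $c^{**}$ is the empirical $(1-\alpha)$-quantile of $\max Z_{ij}$. A sketch (the choice of $n$, $\alpha$ and replication count is illustrative only):

```python
import math
import random

def critical_value(n, alpha=0.05, reps=40000, seed=7):
    """Monte Carlo estimate of c**: the (1 - alpha)-quantile of max Z_ij,
    where the nbar = C(n, 2) components of Z have unit variance and
    pairwise correlation -1/(nbar - 1).  The vector
    sqrt(nbar/(nbar-1)) * (X_k - Xbar), X_k iid N(0, 1),
    has exactly this covariance structure."""
    rng = random.Random(seed)
    nbar = n * (n - 1) // 2
    scale = math.sqrt(nbar / (nbar - 1.0))
    maxima = []
    for _ in range(reps):
        x = [rng.gauss(0.0, 1.0) for _ in range(nbar)]
        xbar = sum(x) / nbar
        maxima.append(scale * max(xk - xbar for xk in x))
    maxima.sort()
    return maxima[int((1.0 - alpha) * reps)]
```

The test of Theorem 3.3.5 then rejects $H_0$ when $\max_{(i,j)} \sqrt{T_{ijm}}(2\hat{p}_{ijm}-1)$ exceeds the returned value.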
Let $R^* \subset \mathbb{R}^{\bar{n}}$ and $S^* \subset \mathbb{R}^{\bar{n}}$ represent the regions defined by equations (3.3.7) and (3.3.8), respectively, and let $F$ be the joint distribution function of the random variables $V_2,\dots,V_{\bar{n}}$. Applying the transformation $H'$ to $Z$ and integrating out $V_1$, we obtain

$1 - \alpha = \int_{R^*} dF_2(z_{(1)},\dots,z_{(\bar{n})}) = \bar{n}! \int_{R^*} dF_1(z_{(1)},\dots,z_{(\bar{n})}) = \bar{n}! \int_{S^*} dF(v_2,\dots,v_{\bar{n}})$

$= \bar{n}! \int_{S^*} \left( \frac{\bar{n}-1}{2\pi\bar{n}} \right)^{(\bar{n}-1)/2} \exp\left\{ -\frac{\bar{n}-1}{2\bar{n}}\sum_{i=2}^{\bar{n}} v_i^2 \right\} dv_2 \cdots dv_{\bar{n}}$.

This equation can be used to obtain the value $c^{**}$ which determines an approximate $\alpha$-sized critical region for the test of Theorem 3.2.1. An application of these results is facilitated by the use of tables prepared by Grubbs [6], who has tabulated values of $F_n(u)$, where $F_n(u) = \mathrm{prob}(u_n \le u)$ and $u_n = (x_n - \bar{x})/\sigma$. For our purposes, $\sigma = 1$, $x_n = z_{(\bar{n})}$ and $\bar{x} = 0$.

3.4. Relevance of the Invariance Principle

In keeping with previous notation, we let $Y^{(v)} = (T_{12}^{(v)},S_{12}^{(v)},\dots,T_{ij}^{(v)},S_{ij}^{(v)},\dots)'$ denote the random vector corresponding to the observed values on the $v$th replication of the tournament. Recall that

$p(y^{(v)} \mid \theta) = C(y^{(v)})\,2^{M(\bar{n}-1)} \prod_{(i,j) \in I_2} (1/2)^{M-t_{ij}}\,\theta_{ij}^{s_{ij}}(1-\theta_{ij})^{t_{ij}-s_{ij}}$

and that a sufficient statistic for the family $f((y^{(1)},\dots,y^{(m)}) \mid \theta)$ is

$T = t(Y^{(1)},\dots,Y^{(m)}) = (T_{12m},S_{12m},\dots,T_{ijm},S_{ijm},\dots)'$,

where $T_{ijm}$ and $S_{ijm}$ are as defined in equation (3.1.4). The distribution of $T$ has probability function

$q(t \mid \theta) = \sum f((y^{(1)},\dots,y^{(m)}) \mid \theta)$,
with the summation extending over all values of $(y^{(1)},\dots,y^{(m)})$ such that $t(y^{(1)},\dots,y^{(m)}) = t$. Writing $y = (y^{(1)},\dots,y^{(m)})$, $\bar{n} = \binom{n}{2}$, and keeping in mind that the random variables $Y^{(v)}$, $v = 1,\dots,m$, are independent, we obtain

$q(t \mid \theta) = \sum_{y:\,t(y)=t} \prod_{v=1}^m p(y^{(v)} \mid \theta)$
$= \sum_{y:\,t(y)=t} \left[ \prod_{v=1}^m C(y^{(v)}) \right] 2^{Mm(\bar{n}-1)} \prod_{(i,j) \in I_2} (1/2)^{Mm-t_{ijm}}\,\theta_{ij}^{s_{ijm}}(1-\theta_{ij})^{t_{ijm}-s_{ijm}}$
$= C(t)\,2^{Mm(\bar{n}-1)} \prod_{(i,j) \in I_2} (1/2)^{Mm-t_{ijm}}\,\theta_{ij}^{s_{ijm}}(1-\theta_{ij})^{t_{ijm}-s_{ijm}}$,

where

$C(t) = \sum_{y:\,t(y)=t} \prod_{v=1}^m C(y^{(v)})$.

Let $S_{\bar{n}}$ represent the group consisting of the $\bar{n}!$ permutations on $I_2$ and $\mathcal{T}$ the domain of $T$. For $\sigma \in S_{\bar{n}}$, $t \in \mathcal{T}$ and $\theta \in \Theta$ define

$g_\sigma(t) = (t_{\rho(12)},s_{\rho(12)},\dots,t_{\rho(ij)},s_{\rho(ij)},\dots)'$,
$\bar{g}_\sigma(\theta) = (\theta_{\rho(12)},\dots,\theta_{\rho(ij)},\dots)'$,  (3.4.1)

where $\rho = \sigma^{-1}$. Let the transformation groups $G$ and $\bar{G}$ be defined by

$G = \{g_\sigma : \mathcal{T} \to \mathcal{T} \mid \sigma \in S_{\bar{n}}\}$, $\bar{G} = \{\bar{g}_\sigma : \Theta \to \Theta \mid \sigma \in S_{\bar{n}}\}$.  (3.4.2)

We claim now that the distribution of $T$ is invariant under $G$; that is, for any $\sigma \in S_{\bar{n}}$, $q(t \mid \theta) = q(gt \mid \bar{g}\theta)$, where $g$ and $\bar{g}$ are the homomorphic images of $\sigma$. In order to see this, note that it is sufficient to show that $C(t) = C(gt)$, since the remaining factor in $q$ is obviously invariant. Also, recall that, under the null hypothesis, the random variables $(T_{ij}^{(v)},S_{ij}^{(v)})$, $(i,j) \in I_2$, are exchangeable for any $v$, $v = 1,\dots,m$. Thus, if $F_{\theta_0}$ denotes the distribution function of $(T_{12},S_{12},\dots,T_{ij},S_{ij},\dots)$ under the null hypothesis, then

$F_{\theta_0}[(T_{12}^{(v)},S_{12}^{(v)},\dots,T_{ij}^{(v)},S_{ij}^{(v)},\dots)'] = F_{\theta_0}[g(T_{12}^{(v)},S_{12}^{(v)},\dots,T_{ij}^{(v)},S_{ij}^{(v)},\dots)']$.

It follows that $p(y^{(v)} \mid \theta_0) = p(g(y^{(v)}) \mid \theta_0)$. This, in turn, implies that $C(y^{(v)}) = C(g(y^{(v)}))$. Taking the product of both sides of this equality over all $v$, $v = 1,\dots,m$, and then summing over all $y$ such that $t(y) = t$ establishes the claim.
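The invariance just established can be checked numerically: relabelling the players permutes the components of the sufficient statistic but leaves any symmetric function of them, such as the test statistic of Theorem 3.3.5, unchanged. A sketch with hypothetical pair totals:

```python
import math

def pair_statistic(T, S):
    """max over pairs of sqrt(T_ijm) * (2*phat_ijm - 1), phat truncated
    at 1/2.  T and S map each pair (i, j), i < j, to T_ijm and S_ijm."""
    return max(math.sqrt(t) * (2.0 * max(0.5, S[pair] / t) - 1.0)
               for pair, t in T.items())

def relabel(stats, sigma):
    # apply the player permutation sigma, re-sorting each pair so i < j
    return {tuple(sorted((sigma[i], sigma[j]))): v
            for (i, j), v in stats.items()}
```

Relabelling via any permutation `sigma` of the player indices leaves `pair_statistic` unchanged, mirroring $q(t \mid \theta) = q(gt \mid \bar{g}\theta)$.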
First, note that in this case the orbits of $\Theta$ induced by $\bar G$ are labelled by $\{\Theta_p\}$, $p\in[\tfrac12,1]$, where

$$\Theta_p = \{(p,\tfrac12,\ldots,\tfrac12)',\ (\tfrac12,p,\tfrac12,\ldots,\tfrac12)',\ \ldots,\ (\tfrac12,\ldots,\tfrac12,p)'\}\,.$$

Since the risk of an invariant decision procedure is constant on each orbit (for a proof of this result see Ferguson [5], pages 149-151), it follows that, for any $\delta \in \mathcal D_I$, the class of invariant procedures, $r(\theta,\delta)$ depends on $\theta$ only through $p$. Consequently the Bayes procedures relative to $\mathcal D_I$ are obtained from prior distributions on $[\tfrac12,1]$, the range of $p$.

Now, suppose $\Pi$ is a given prior measure on the Borel subsets of $[\tfrac12,1]$. The measure $\Pi$ induces a measure on each $\Theta_{ij}$, $(i,j)\in I_2$, in an obvious manner. For $B_{ij}\subset\Theta_{ij}$, the induced measure of $B_{ij}$ is $\Pi(B'_{ij})$, where $B'_{ij}$ is the projection of the $(i,j)$th co-ordinate of $B_{ij}$ into $[\tfrac12,1]$. For the sake of simplicity we write $\Pi(B_{ij})$ to denote $\Pi(B'_{ij})$. Let $\mathcal B(\Theta)$ be the Borel subsets of $\Theta$ and for $B\in\mathcal B(\Theta)$ write $B = \bigcup_{(i,j)\in I_2} B_{ij}$, where $B_{ij} = B\cap\Theta_{ij}$. Let $\lambda$ denote the measure defined on $(\Theta,\mathcal B(\Theta))$ which is given by

$$\lambda(B) = \frac{1}{\hat n!}\sum_{(i,j)\in I_2}\Pi(B_{ij})\,,\qquad B\in\mathcal B(\Theta)\,.$$

Then the Bayes procedure with respect to $\lambda$ evaluated at $t = (t_{12m}, s_{12m}, \ldots, t_{ijm}, s_{ijm}, \ldots)'$ is any probability distribution which minimizes, as a function of $P$,

$$\int_\Theta\int_A L(a,\theta)\,dP(a)\,q(t\mid\theta)\,d\lambda(\theta)\,. \tag{3.5.1}$$

Let $L_{ij}(a,p) = L(a,\theta)$, $\theta\in\Theta_{ij}$, $(i,j)\in I_2$. Expression (3.5.1) is equivalent to

$$\int_\Theta\int_A L(a,\theta)\,dP(a)\;c(t)\prod_{(k,\ell)\in I_2}(2\theta_{k\ell})^{s_{k\ell m}}(2[1-\theta_{k\ell}])^{t_{k\ell m}-s_{k\ell m}}\,d\lambda(\theta)\,,$$

where $c(t)$ collects the factors of $q$ not depending on $\theta$, or

$$\frac{c(t)}{\hat n!}\sum_{(i,j)\in I_2}\int_{1/2}^1\int_A L_{ij}(a,p)\,dP(a)\,(2p)^{s_{ijm}}(2[1-p])^{t_{ijm}-s_{ijm}}\,d\Pi(p)\,.$$

Thus, if a Bayes procedure with respect to $\lambda$ exists, it is a member of $\mathcal D$ which, at $T = t$, minimizes, as a function of $P$, the quantity
$$\sum_{(i,j)\in I_2}\int_{1/2}^1\int_A L_{ij}(a,p)\,dP(a)\,(2p)^{s_{ijm}}(2[1-p])^{t_{ijm}-s_{ijm}}\,d\Pi(p)\,. \tag{3.5.2}$$

The following lemma shows that this member of $\mathcal D$ (assuming it exists) is, in fact, the Bayes procedure with respect to $\Pi$ amongst invariant procedures.

LEMMA 3.5.1. The Bayes invariant procedure with respect to a prior measure $\Pi$ on the Borel subsets of $[\tfrac12,1]$ evaluated at $t = (t_{12m}, s_{12m}, \ldots, t_{ijm}, s_{ijm}, \ldots)$ is any probability distribution which minimizes, as a function of $P$, the quantity

$$\sum_{(i,j)\in I_2}\int_{1/2}^1\int_A L_{ij}(a,p)\,dP(a)\,(2p)^{s_{ijm}}(2[1-p])^{t_{ijm}-s_{ijm}}\,d\Pi(p)\,.$$

PROOF : Suppose the infimum of expression (3.5.2) is attained (and finite) at a point $P$ which we denote by $\delta(\cdot,t)$. Observe that

$$\sum_{(i,j)\in I_2}\int_{1/2}^1\int_A L_{ij}(a,p)\,dP(\tilde g a)\,(2p)^{s_{ijm}}(2[1-p])^{t_{ijm}-s_{ijm}}\,d\Pi(p)$$
$$= \sum_{(i,j)\in I_2}\int_{1/2}^1\int_A L_{ij}(\tilde g^{-1}(a),p)\,dP(a)\,(2p)^{s_{ijm}}(2[1-p])^{t_{ijm}-s_{ijm}}\,d\Pi(p)$$
$$= \sum_{(i,j)\in I_2}\int_{1/2}^1\int_A L_{ij}(\tilde g^{-1}(a),p)\,dP(a)\,(2p)^{s_{\sigma(ij)m}}(2[1-p])^{t_{\sigma(ij)m}-s_{\sigma(ij)m}}\,d\Pi(p)\,.$$

The infimum of the last expression is attained at $\delta(\tilde g^{-1}(\cdot), g^{-1}(t))$. Consequently, $\delta(\tilde g^{-1}(\cdot), g^{-1}(t)) = \delta(\cdot,t)$ and so $\delta \in \mathcal D_I$.

It remains to be shown that $\delta$ is Bayes within $\mathcal D_I$. Let

$$\mathcal T^* = \{t = (t_{12m}, s_{12m}, \ldots, t_{ijm}, s_{ijm}, \ldots)\in\mathcal T \mid t_{12m}\le t_{13m}\le\cdots\}$$

and let $\mu$ be the measure on $\mathcal T$ which assigns a weight of one to each point $t^*\in\mathcal T^*$. The homomorphic image of $\sigma$, $\sigma\in S_{\hat n}$, in $G$, $\bar G$ and $\tilde G$ is denoted by $g$, $\bar g$ and $\tilde g$, respectively, and the label $p$ is used as an abbreviation for $(\tfrac12,\ldots,\tfrac12,p)$. In terms of this notation, the expression (3.5.2) can be written equivalently as

$$\sum_{\sigma\in S_{\hat n}}\int_{1/2}^1\int_A L(a,\bar gp)\,dP(a)\,q(g_0t^*\mid\bar gp)\,d\Pi(p)\,,$$

for some $g_0\in G$.
As noted in the beginning of this section, the Bayes procedures with respect to $\mathcal D_I$ for this problem are obtained from prior distributions on $\mathcal B[\tfrac12,1]$, the Borel subsets of $[\tfrac12,1]$. Consequently, the Bayes risk of an invariant procedure, $\delta'$, with respect to the prior, $\Pi$, on $([\tfrac12,1],\mathcal B[\tfrac12,1])$ is given by

$$R(\delta',\Pi) = \int_{1/2}^1 r(\delta',p)\,d\Pi(p)
= \int_{\mathcal T^*}\sum_{\sigma\in S_{\hat n}}\int_{1/2}^1\int_A L(a,p)\,\delta'(da,gt^*)\,q(gt^*\mid p)\,d\Pi(p)\,d\mu(t^*)\,.$$

Now, for any element $g_0\in G$,

$$R(\delta',\Pi) = \int_{\mathcal T^*}\sum_{\sigma\in S_{\hat n}}\int_{1/2}^1\int_A L(a,p)\,\delta'(da,(g^{-1}g_0)t^*)\,q((g^{-1}g_0)t^*\mid p)\,d\Pi(p)\,d\mu(t^*)$$
$$= \int_{\mathcal T^*}\sum_{\sigma\in S_{\hat n}}\int_{1/2}^1\int_A L(\tilde ga,\bar gp)\,\delta'(d(\tilde ga),g_0t^*)\,q(g_0t^*\mid\bar gp)\,d\Pi(p)\,d\mu(t^*)$$
$$= \int_{\mathcal T^*}\sum_{\sigma\in S_{\hat n}}\int_{1/2}^1\int_A L(a,\bar gp)\,\delta'(da,g_0t^*)\,q(g_0t^*\mid\bar gp)\,d\Pi(p)\,d\mu(t^*)$$
$$\ge \int_{\mathcal T^*}\sum_{\sigma\in S_{\hat n}}\int_{1/2}^1\int_A L(a,\bar gp)\,\delta(da,g_0t^*)\,q(g_0t^*\mid\bar gp)\,d\Pi(p)\,d\mu(t^*) = R(\delta,\Pi)\,,$$

the equalities following from the invariance of $L$ and $\delta'$. Therefore, $\delta$ is Bayes amongst the class of invariant procedures.

Lemma 3.5.1 can now be applied to the problem of testing the null hypothesis. The action space consists of two points :

$a_0$ : accept the null hypothesis, $\theta\in\Theta_0^*$ ;
$a_1$ : accept the alternative hypothesis, $\theta\in\Theta_1^*$ .

The third transformation group involved (the first two are defined in equation (3.4.2)), $\tilde G$, consists of the identity mapping, $\tilde g$, on $A$. All three are clearly homomorphic to $S_{\hat n}$. Consider the loss functions specified in the following tables, which give the values of $L_{ij}(a,p)$, $(i,j)\in I_2$ :

    General Loss                0 - 1 Loss               Linear Loss
          Theta_0*  Theta_1*        Theta_0*  Theta_1*        Theta_0*  Theta_1*
    a_0      0      L_0(p)    a_0      0        1       a_0      0      2p - 1
    a_1   L_1(1/2)    0       a_1      1        0       a_1      1        0

Since these loss functions depend only on $p$, and not on the pair $(i,j)$, they are invariant. By an argument similar to that in Chapter II, section 2.2, it is clear that a 0 - 1 loss defined on each $\Theta_{ij}$, $(i,j)\in I_2$, induces a 0 - 1 loss on $\Theta$.
And, finally, it should be noted that the linear loss function penalizes the statistician less if he incorrectly chooses $a_0$ when $p$ is close to $\tfrac12$ than when it is further away, and so it may be argued that the linear loss is more realistic than 0 - 1 loss. If $F$ denotes, for a given prior $\Pi$, the conditional distribution of $p$ given $p > \tfrac12$, Lemma 3.5.1 implies, for the general loss function,

THEOREM 3.5.1. The class of all Bayes invariant tests of $\Theta_0^*$ against $\Theta_1^*$ consists of all tests of the form : reject $\Theta_0^*$ if and only if

$$\sum_{(i,j)\in I_2}\int_{1/2}^1 L_0(p)\,(2p)^{s_{ijm}}(2[1-p])^{t_{ijm}-s_{ijm}}\,dF(p)\ \ge\ c_1$$

when $t = (t_{12m}, s_{12m}, \ldots, t_{ijm}, s_{ijm}, \ldots)$, for some constant $c_1 \ge 0$.

PROOF : Let

$$\eta_0 = \mathrm{prob}(p = \tfrac12) = \Pi(\{\tfrac12\})
\qquad\text{and}\qquad
\eta_1 = \mathrm{prob}(p > \tfrac12) = \Pi((\tfrac12,1])\,.$$

The conditional distribution of $p$ given $p > \tfrac12$, $F(p)$, can thus be written

$$F(p) = \frac{\Pi([\tfrac12,p]) - \eta_0}{\eta_1}\,.$$

For any probability measure, $P$, on the power set of $A$ let $\xi$ represent $P(\{a_1\})$, so that $P(\{a_0\}) = 1-\xi$. Applying Lemma 3.5.1 we obtain

$$\inf_P\ \sum_{(i,j)\in I_2}\int_{1/2}^1\int_A L_{ij}(a,p)\,dP(a)\,(2p)^{s_{ijm}}(2[1-p])^{t_{ijm}-s_{ijm}}\,d\Pi(p)$$
$$= \inf_{0\le\xi\le1}\ \sum_{(i,j)\in I_2}\int_{1/2}^1\{L_{ij}(a_1,p)\,\xi + L_{ij}(a_0,p)(1-\xi)\}\,(2p)^{s_{ijm}}(2[1-p])^{t_{ijm}-s_{ijm}}\,d\Pi(p)$$
$$= \inf_{0\le\xi\le1}\ \{\xi T_0 + (1-\xi)T_1\}\,,$$
where

$$T_0 = \hat n\,\eta_0\,L_1(\tfrac12)
\qquad\text{and}\qquad
T_1 = \eta_1\sum_{(i,j)\in I_2}\int_{1/2}^1 L_0(p)\,(2p)^{s_{ijm}}(2[1-p])^{t_{ijm}-s_{ijm}}\,dF(p)\,.$$

Hence, the infimum is attained by setting $\xi = 0$ or $1$ according as $T_0 > T_1$ or $T_0 < T_1$. When $T_0 = T_1$ the infimum is attained at any point $\xi\in[0,1]$. For the sake of definiteness, we choose $\delta$ as

$$\delta(a_1,t) = \begin{cases} 1 & \text{if } T_1 \ge T_0\\ 0 & \text{if } T_1 < T_0\,.\end{cases}$$

That is, we reject the null hypothesis with probability 1 if and only if

$$\sum_{(i,j)\in I_2}\int_{1/2}^1 L_0(p)\,(2p)^{s_{ijm}}(2[1-p])^{t_{ijm}-s_{ijm}}\,dF(p)\ \ge\ \frac{\eta_0}{\eta_1}\,\hat n\,L_1(\tfrac12)\,.$$

The proof of Theorem 3.5.2 is similar to that of Theorem 3.5.1 and so is omitted.

THEOREM 3.5.2. The class of all Bayes invariant tests of $\Theta_0^*$ against $\Theta_1^*$ consists of all tests of the form : reject $\Theta_0^*$ if and only if

(a) $\displaystyle\sum_{(i,j)\in I_2}\int_{1/2}^1 (2p)^{s_{ijm}}(2[1-p])^{t_{ijm}-s_{ijm}}\,dF(p)\ \ge\ c_2$  (for 0 - 1 loss) ;

(b) $\displaystyle\sum_{(i,j)\in I_2}\int_{1/2}^1 (2p-1)(2p)^{s_{ijm}}(2[1-p])^{t_{ijm}-s_{ijm}}\,dF(p)\ \ge\ c_3$  (for linear loss)

when $t = (t_{12m}, s_{12m}, \ldots, t_{ijm}, s_{ijm}, \ldots)$, for certain constants $c_2, c_3 \ge 0$.

3.6 The Asymptotic Form of a Certain Bayes Invariant Test

Given $F$, an $\alpha$-level Bayes invariant test can be constructed for any of the three tests in section 3.5 by suitably choosing the constants $c_1$, $c_2$ or $c_3$. The limiting form of these tests when $\theta\in\Theta_0^*$ is of assistance in approximating the values of the constants when the tests are required to be of size $\alpha$. In this section we consider the case of zero-one loss and examine the distribution of

$$\int_{1/2}^1 (2p)^{S_{ijm}}(2[1-p])^{T_{ijm}-S_{ijm}}\,dF(p)$$

under the null hypothesis. We assume that $F$ has a density, $f$, which is monotonically decreasing and continuous at $p = \tfrac12$. Note that by making the substitution $p = \tfrac12(z/\sqrt m + 1)$ in the last integral, we obtain

$$\int_{1/2}^1 (2p)^{S_{ijm}}(2[1-p])^{T_{ijm}-S_{ijm}} f(p)\,dp
= \frac{1}{2\sqrt m}\int_0^{\sqrt m}\Big(1+\frac{z}{\sqrt m}\Big)^{S_{ijm}}\Big(1-\frac{z}{\sqrt m}\Big)^{T_{ijm}-S_{ijm}} f\Big(\tfrac12\Big(\frac{z}{\sqrt m}+1\Big)\Big)\,dz\,.$$

For the most part, our efforts will be directed to showing that this latter integral, multiplied by $2\sqrt m$, converges in law (under the null hypothesis) to

$$\frac{f(\tfrac12)}{\sqrt{\mu_2}}\int_0^\infty e^{-\frac{t^2}{2}+tZ_{ij}}\,dt\,,\qquad (i,j)\in I_2\,,$$

where $\mu_2 = E(T_{ij})$ and $Z_{ij}$ is as defined in equation (3.3.6).
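The integrals appearing in Theorems 3.5.1 and 3.5.2 rarely have closed forms, but for a given prior density they are one-dimensional and easy to evaluate numerically. A sketch follows; the trapezoidal rule and the example density $f(p) = 8(1-p)$ on $[\tfrac12,1]$ used below are illustrative choices, not part of the thesis.

```python
def bayes_term(s, t, f, grid=4000):
    """One (i,j) term of the statistic in Theorem 3.5.2(a): the integral
    of (2p)^s (2[1-p])^(t-s) f(p) over [1/2, 1], computed by the
    trapezoidal rule for a prior density f on [1/2, 1]."""
    h = 0.5 / grid
    total = 0.0
    for k in range(grid + 1):
        p = 0.5 + k * h
        w = 0.5 if k in (0, grid) else 1.0
        total += w * (2 * p) ** s * (2 * (1 - p)) ** (t - s) * f(p)
    return total * h

def reject_zero_one(stats, f, c2):
    """Theorem 3.5.2(a): reject the null iff the summed integrals reach
    the critical constant c2.  `stats` is a list of (t_ijm, s_ijm)
    values, one entry per pair in I_2."""
    return sum(bayes_term(s, t, f) for (t, s) in stats) >= c2
```

With this decreasing density, a pair that wins all of its games against outsiders contributes a much larger term than one that wins only half of them, which is the direction of evidence the test expects.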
In proving this result, the following theorem will be used (see, for example, Rao [9], Exercise 6.11, page 271).

LEMMA 3.6.1. Suppose $X_m$, $m = 1,2,\ldots$, and $X$ are random variables with range $\mathcal X$ and $G$ is a function defined on $\mathcal X$. If

(1) $X_m \xrightarrow{L} X$ ;
(2) $G$ is continuous ;
(3) there exist functions $G_m$, $m = 1,2,\ldots$, defined on $\mathcal X$ such that $G_m \to G$ uniformly on compacts of $\mathcal X$,

then $G_m(X_m) \xrightarrow{L} G(X)$.

We apply this theorem by setting

$$X_m = \big([2S_{ijm}-T_{ijm}]/(2\sqrt m),\ T_{ijm}/(2m)\big)\,,\qquad \mathcal X = \bigcup_{m=1}^\infty \mathcal X_m\,,$$

where $\mathcal X_m$ is the range of $X_m$,

$$G_m(a,c) = \int_0^{\sqrt m}\Big(1+\frac{z}{\sqrt m}\Big)^{a\sqrt m}\Big(1-\frac{z}{\sqrt m}\Big)^{-a\sqrt m}\Big(1-\frac{z^2}{m}\Big)^{cm} f\Big(\tfrac12\Big(\frac{z}{\sqrt m}+1\Big)\Big)\,dz \tag{3.6.1}$$

and

$$G(a,c) = f(\tfrac12)\int_0^\infty \exp(2az - cz^2)\,dz\,.$$

In showing that the conditions of Lemma 3.6.1 are satisfied in this case it will be important to note that, since $T_{ij} \ge 1$ for each $(i,j)\in I_2$ (so that $T_{ijm} \ge m$), we need only consider $c \ge \tfrac12$.

THEOREM 3.6.1. Under the null hypothesis, $([2S_{ijm}-T_{ijm}]/(2\sqrt m),\ T_{ijm}/(2m))$ converges in law to $(\tfrac12\sqrt{\mu_2}\,Z_{ij},\ \tfrac12\mu_2)$, where $\mu_2 = E(T_{ij})$ and $Z_{ij}$ is as defined in equation (3.3.6).

PROOF : Since

$$\frac{[2S_{ijm}-T_{ijm}]/\sqrt m}{\sqrt{T_{ijm}/m}} \xrightarrow{\ L\ } Z_{ij}
\qquad\text{and}\qquad
\tfrac12\sqrt{T_{ijm}/m} \xrightarrow{\ P\ } \tfrac12\sqrt{\mu_2}\,,$$

it follows from Theorem 1.2.3 that

$$\frac{[2S_{ijm}-T_{ijm}]}{2\sqrt m} = \tfrac12\sqrt{T_{ijm}/m}\cdot\frac{[2S_{ijm}-T_{ijm}]/\sqrt m}{\sqrt{T_{ijm}/m}} \xrightarrow{\ L\ } \tfrac12\sqrt{\mu_2}\,Z_{ij}\,.$$

Also, convergence in probability implies convergence in law, and so $\tfrac12(T_{ijm}/m) \to \tfrac12\mu_2$.

THEOREM 3.6.2. $G(a,c) = f(\tfrac12)\int_0^\infty \exp(2az-cz^2)\,dz$ is a continuous function of $(a,c)$.

PROOF : Observe that, substituting $w = \sqrt{2c}\,z$,

$$\int_0^\infty \exp(2az - cz^2)\,dz = \frac{e^{a^2/c}}{\sqrt{2c}}\int_0^\infty \exp\Big(-\tfrac12\Big(w - \frac{2a}{\sqrt{2c}}\Big)^2\Big)\,dw\,.$$
Letting $\Phi(u)$ denote $\int_{-\infty}^u (2\pi)^{-1/2}e^{-w^2/2}\,dw$, we can therefore write

$$G(a,c) = \frac{f(\tfrac12)\sqrt{2\pi}}{\sqrt{2c}}\,e^{a^2/c}\,\Phi\Big(\frac{2a}{\sqrt{2c}}\Big)\,. \tag{3.6.2}$$

Then for any sequence $(a_n,c_n)\to(a,c)$, $G(a_n,c_n)\to G(a,c)$ as a consequence of the continuity of $\Phi$ and the Lebesgue Dominated Convergence Theorem.

To show that condition (3) of Lemma 3.6.1 is satisfied, we let $B = \{(x,y)\mid y \ge \tfrac12\}$ and note that $\mathcal X \subset B$. It remains to be shown that the functions $G_m$ as defined in equation (3.6.1) approach $f(\tfrac12)\int_0^\infty \exp(2az-cz^2)\,dz$ uniformly on compact subsets of $B$. In the subsequent lemmas leading to this result frequent use will be made of the fact that

$$\lim_{m\to\infty}\Big(1+\frac{z_m}{m}\Big)^m = \exp(z)\qquad\text{whenever } z_m\to z\,. \tag{3.6.3}$$

Also useful will be the result of the following lemma, whose proof is immediate.

LEMMA 3.6.2. Suppose $\{f_m\}$ and $\{h_m\}$, $m = 1,2,\ldots$, are two monotone increasing sequences of positive functions. Then the sequence $\{h_m f_m\}$ is monotone increasing.

LEMMA 3.6.3. For fixed $(a,c)$ let

$$h_m(z) = \Big(1+\frac{z}{\sqrt m}\Big)^{a\sqrt m+cm}\Big(1-\frac{z}{\sqrt m}\Big)^{-a\sqrt m+cm}$$

for $0\le z<\sqrt m$ and $c\ge\tfrac12$. Then $\{h_m\}$, $m = 1,2,\ldots$, is a monotone increasing sequence of positive functions if $z > 2a/c$.

PROOF : Fix $z$ and let $h(x) = (1+z/x)^{ax+cx^2}(1-z/x)^{-ax+cx^2}$ for $x > z$, so that $h_m(z) = h(\sqrt m)$. We show that $h(x)$ or, equivalently, $\ell(x) = \log[h(x)]$ is an increasing function of $x$ if $z > 2a/c$. Observe that

$$\ell(x) = (ax+cx^2)\log\Big(1+\frac zx\Big) + (-ax+cx^2)\log\Big(1-\frac zx\Big)\,,$$
$$\ell'(x) = (a+2cx)\log\Big(1+\frac zx\Big) + (-a+2cx)\log\Big(1-\frac zx\Big) + \frac{2xz(cz-a)}{x^2-z^2}$$

and

$$\ell''(x) = 2c\log\Big(1-\frac{z^2}{x^2}\Big) + \frac{2z(czx^2 - 3cz^3 + 2az^2)}{(x^2-z^2)^2}\,.$$

Since $\log(1-\alpha) < -\alpha$ for $0<\alpha<1$, we have

$$\ell''(x) < -\frac{2cz^2}{x^2} + \frac{2z(czx^2 - 3cz^3 + 2az^2)}{(x^2-z^2)^2} = \frac{2z^3}{x^2(x^2-z^2)^2}\,\{(2a-cz)x^2 - cz^3\}\,.$$

But $-cz^3 < 0$, and the parabola $(2a-cz)x^2 - cz^3$ will lie below the $x$-axis provided $z > 2a/c$. Under this condition $(2a-cz)x^2 - cz^3 < 0$ (and thus $\ell''(x) < 0$) for any value of $x$ and, in particular, for $x > z$. It follows that $\ell'(x)$ is a decreasing function of $x$.
By making a change of variables and applying l'Hopital's rule, we obtain

$$\lim_{x\to\infty} x\log\Big(1-\frac{z^2}{x^2}\Big) = \lim_{y\to0^+}\frac{\log(1-z^2y^2)}{y} = \lim_{y\to0^+}\frac{-2z^2y}{1-z^2y^2} = 0\,,$$

which, in turn, yields

$$\lim_{x\to\infty}\ell'(x) = \lim_{x\to\infty}\Big\{a\log\frac{1+z/x}{1-z/x} + 2cx\log\Big(1-\frac{z^2}{x^2}\Big) + \frac{2xz}{x^2-z^2}(cz-a)\Big\} = 0\,.$$

But $\ell'(x)$ is decreasing, so $\ell'(x)\ge0$ and, consequently, $\ell(x)$ is increasing. Thus, if $z > 2a/c$, $h_m$ is increasing in $m$ and, since $h_m$ is obviously positive, the result follows.

Let

$$f_m(z) = f\Big(\tfrac12\Big(\frac{z}{\sqrt m}+1\Big)\Big)\,,\qquad
h_m(z) = \Big(1+\frac{z}{\sqrt m}\Big)^{a\sqrt m+cm}\Big(1-\frac{z}{\sqrt m}\Big)^{-a\sqrt m+cm}\,,$$
$$g_m(z) = h_m(z)f_m(z) = \Big(1+\frac{z}{\sqrt m}\Big)^{a\sqrt m}\Big(1-\frac{z}{\sqrt m}\Big)^{-a\sqrt m}\Big(1-\frac{z^2}{m}\Big)^{cm} f\Big(\tfrac12\Big(\frac{z}{\sqrt m}+1\Big)\Big)\,,$$

$m = 1,2,\ldots$ . Because $f$ is positive, measurable, and monotone decreasing, the sequence $\{f_m\}$, $m = 1,2,\ldots$, is a monotone increasing sequence of positive measurable functions. Furthermore, $h_m(z)$ is continuous on $(0,\sqrt m)$ and therefore measurable, so the product, $g_m = h_m f_m$, is measurable.

Let us assume $\sqrt m > 2a/c$ and write

$$G_m(a,c) = H_m(a,c) + J_m(a,c)\,,$$

where

$$H_m(a,c) = \begin{cases} 0 & \text{if } a\le0\\[2pt] \displaystyle\int_0^{2a/c} g_m(z)\,dz & \text{if } a>0\,,\end{cases}
\qquad
J_m(a,c) = \begin{cases} \displaystyle\int_0^{\sqrt m} g_m(z)\,dz & \text{if } a\le0\\[2pt] \displaystyle\int_{2a/c}^{\sqrt m} g_m(z)\,dz & \text{if } a>0\,.\end{cases} \tag{3.6.4}$$

We will verify in the sequel that condition (3) of Lemma 3.6.1 holds by first showing that $J_m(a,c)$ is continuous in $(a,c)$ for each $m$. Then, by Lemma 3.6.3 and the fact that $f_m(z)$ is, for each $z$, increasing in $m$, it will follow from the Lebesgue Monotone Convergence Theorem and Dini's Theorem that $J_m(a,c)$ converges uniformly on compact subsets of $B$ to $J(a,c)$, where

$$J(a,c) = \begin{cases} f(\tfrac12)\displaystyle\int_0^\infty \exp(2az-cz^2)\,dz & \text{if } a\le0\\[2pt] f(\tfrac12)\displaystyle\int_{2a/c}^\infty \exp(2az-cz^2)\,dz & \text{if } a>0\,.\end{cases}$$

Finally, by direct calculation, we show that $H_m(a,c)$ converges uniformly on compact subsets of $B$ to $H(a,c)$, where

$$H(a,c) = \begin{cases} 0 & \text{if } a\le0\\[2pt] f(\tfrac12)\displaystyle\int_0^{2a/c} \exp(2az-cz^2)\,dz & \text{if } a>0\,.\end{cases}$$

Suppose $f(a,c,z)$ is a continuous function of $(a,c,z)$ for $(a,c)\in B$ and $z\in D$, where $D$ is compact.
The following lemma proves, in essence, that $\int_D f(a,c,z)\,dz$ is continuous in $(a,c)$ ; therefore $H(a,c)$ is continuous and, consequently, so is $J(a,c)$.

LEMMA 3.6.4. Let $J_m(a,c)$, $m = 1,2,\ldots$, be as defined in equation (3.6.4). Then $J_m(a,c)$ is a continuous function of $(a,c)$.

PROOF : Let us consider the case where $a > 0$ (the proof for the other being similar). Define $h_m$ and $f_m$ by

$$h_m(a,c,z) = h_m(z) = \Big(1+\frac{z}{\sqrt m}\Big)^{a\sqrt m+cm}\Big(1-\frac{z}{\sqrt m}\Big)^{-a\sqrt m+cm}
\qquad\text{and}\qquad
f_m(z) = f\Big(\tfrac12\Big(\frac{z}{\sqrt m}+1\Big)\Big)\,,$$

$m = 1,2,\ldots$ . Then

$$|J_m(a,c) - J_m(a_0,c_0)| \le \int_{2a/c}^{\sqrt m} |h_m(a,c,z) - h_m(a_0,c_0,z)|\,f_m(z)\,dz
\le f(\tfrac12)\int_{2a/c}^{\sqrt m} |h_m(a,c,z) - h_m(a_0,c_0,z)|\,dz\,.$$

Note that if $a\le0$ then $-a\sqrt m + cm > 0$. On the other hand, if $a>0$ then, since $\sqrt m > 2a/c > a/c$, $-a\sqrt m + cm$ is, again, positive. In both cases, then, $h_m(a,c,z)$, for $2a/c\le z\le\sqrt m$, is a continuous function of $a$, $c$ and $z$. Thus, for any $z_i\in[2a/c,\sqrt m]$, there exists $\delta_{z_i}>0$ such that

$$|a-a_0|<\delta_{z_i}\,,\quad |c-c_0|<\delta_{z_i}\,,\quad |z-z_i|<\delta_{z_i}
\ \Longrightarrow\ |h_m(a,c,z)-h_m(a_0,c_0,z_i)| < \frac{\varepsilon}{3f(\tfrac12)\sqrt m}\,.$$

Since $[2a/c,\sqrt m]$ is compact, we can find a finite set, $\{z_1,\ldots,z_n\}$, such that

$$\Big[\frac{2a}{c},\sqrt m\Big] \subset \bigcup_{i=1}^n \{z : |z-z_i|<\delta_{z_i}\}\,.$$

Let $\delta = \min\{\delta_{z_1},\ldots,\delta_{z_n}\}$. Then for $|a-a_0|<\delta$ and $|c-c_0|<\delta$ we have, for any $z\in[2a/c,\sqrt m]$, some $i\in\{1,\ldots,n\}$ such that

$$|h_m(a,c,z)-h_m(a_0,c_0,z)| \le |h_m(a,c,z)-h_m(a,c,z_i)| + |h_m(a,c,z_i)-h_m(a_0,c_0,z_i)| + |h_m(a_0,c_0,z_i)-h_m(a_0,c_0,z)|$$
$$< \frac{\varepsilon}{3f(\tfrac12)\sqrt m} + \frac{\varepsilon}{3f(\tfrac12)\sqrt m} + \frac{\varepsilon}{3f(\tfrac12)\sqrt m}\,.$$

This, in turn, implies that $|J_m(a,c)-J_m(a_0,c_0)| < \varepsilon$.

THEOREM 3.6.3(a). $J_m(a,c)$ converges to $J(a,c)$ uniformly on compact subsets of $B$.

PROOF : By equation (3.6.3) and the continuity of $f(p)$ at $p = \tfrac12$, $g_m(a,c,z)$ converges to $f(\tfrac12)\exp(2az-cz^2)$. It follows by Lemma 3.6.3 and the Lebesgue Monotone Convergence Theorem that $J_m(a,c)\to J(a,c)$.
Moreover, the fact that $g_m(a,c,z)$ is increasing in $m$ for fixed $a$, $c$ and $z$ implies (for the case $a>0$)

$$J_{m+1}(a,c) = \int_{2a/c}^{\sqrt{m+1}} g_{m+1}(a,c,z)\,dz \ \ge\ \int_{2a/c}^{\sqrt m} g_{m+1}(a,c,z)\,dz \ \ge\ \int_{2a/c}^{\sqrt m} g_m(a,c,z)\,dz = J_m(a,c)\,.$$

Consequently, $\{J-J_m\}$, $m = 1,2,\ldots$, is a decreasing sequence of continuous functions converging to 0, so the result follows by Dini's Theorem.

LEMMA 3.6.5. For fixed $a>0$ let

$$k_m(z) = \Big(1+\frac{z}{\sqrt m}\Big)^{a\sqrt m}\Big(1-\frac{z}{\sqrt m}\Big)^{-a\sqrt m}\,,\qquad 0\le z<\sqrt m\,.$$

Then the sequence $\{k_m(z)\}$, $m = 1,2,\ldots$, is monotone decreasing in $m$ for each fixed $z\in[0,\sqrt m)$.

PROOF : Fix $z$ and let $k(x) = (1+z/x)^{ax}(1-z/x)^{-ax}$. We show that $\ell(x) = \log[k(x)]$ is a decreasing function of $x$ for $a>0$ and $x>z$. Observe that

$$\ell(x) = ax\log(x+z) - ax\log(x-z)\,,$$
$$\ell'(x) = a\log(x+z) - a\log(x-z) - \frac{2axz}{x^2-z^2}$$

and

$$\ell''(x) = \frac{4az^3}{(x^2-z^2)^2} > 0\,.$$

Furthermore, $\lim_{x\to+\infty}\ell'(x) = 0$. It follows that $\ell'(x) < 0$ and the sequence $\{k_m(z)\}$ is monotone decreasing.

LEMMA 3.6.6. The sequence $\{p_m(z)\} = \{(1-\frac{z^2}{m})^{cm}\}$, $m = 1,2,\ldots$, is monotone increasing in $m$ for fixed $z<\sqrt m$, $c\ge\tfrac12$.

PROOF : Fix $z$ and let $p(x) = (1-z^2/x)^{cx}$. Then

$$\ell(x) = \log[p(x)] = cx\log\Big(1-\frac{z^2}{x}\Big)$$

and

$$\ell'(x) = c\log\Big(1-\frac{z^2}{x}\Big) + \frac{cz^2}{x-z^2} > 0\,,$$

since $\log(1-\alpha) > -\alpha/(1-\alpha)$ for $0<\alpha<1$ (here $\alpha = z^2/x$). Therefore, $\ell(x)$ is an increasing function of $x$.

THEOREM 3.6.3(b). $H_m(a,c)$ converges uniformly to $H(a,c)$ on compact subsets of $B$.

PROOF : Let $E$ be a compact subset of $B$. On $E$ the function $2a/c$ of $(a,c)$ is bounded, say $0\le 2a/c\le M_1$, for some $M_1>0$. Similarly, $1\le \exp(4a^2/c)\le M_2$ for some $M_2>0$. By Lemma 3.6.5, if $a>0$,

$$k_m(a,z) = \Big(\frac{\sqrt m+z}{\sqrt m-z}\Big)^{a\sqrt m}$$

decreases monotonically to $\exp(2az)$. For $0\le z\le 2a/c$, $k_m(a,z)$ is a continuous function of $a$ and $z$ and therefore, by Dini's Theorem, there exists $N_1\in\mathbb N$ such that $m\ge N_1$ implies

$$\Big|\Big(\frac{\sqrt m+z}{\sqrt m-z}\Big)^{a\sqrt m} - \exp(2az)\Big| < \frac{\varepsilon}{3f(\tfrac12)M_1M_2}
\qquad\text{for all } (a,c)\in E \text{ and } z\in\Big[0,\frac{2a}{c}\Big]\,.$$
Similarly, using Lemma 3.6.6, there exists an integer $N_2$ such that $m\ge N_2$ implies

$$\Big|\Big(1-\frac{z^2}{m}\Big)^{cm} - \exp(-cz^2)\Big| < \frac{\varepsilon}{3f(\tfrac12)M_1M_2}
\qquad\text{for all } (a,c)\in E \text{ and } z\in\Big[0,\frac{2a}{c}\Big]\,.$$

Finally, choose $N_3$ so that $m\ge N_3$ implies

$$\Big|f\Big(\tfrac12\Big(\frac{z}{\sqrt m}+1\Big)\Big) - f(\tfrac12)\Big| < \frac{\varepsilon}{3M_1M_2}\,.$$

Let $N = \max(N_1,N_2,N_3)$. For $m\ge N$, repeated use of the triangle inequality yields, for all $(a,c)\in E$ and $z\in[0,2a/c]$,

$$|g_m(z)-g(z)| < f(\tfrac12)\cdot\frac{\varepsilon}{3f(\tfrac12)M_1M_2} + f(\tfrac12)M_2\cdot\frac{\varepsilon}{3f(\tfrac12)M_1M_2} + M_2\cdot\frac{\varepsilon}{3M_1M_2} \ \le\ \frac{\varepsilon}{M_1}\,.$$

Hence,

$$|H_m(a,c)-H(a,c)| \le \int_0^{2a/c} |g_m(z)-g(z)|\,dz < \frac{\varepsilon}{M_1}\cdot M_1 = \varepsilon
\qquad\text{for all } (a,c)\in E\,.$$

Theorems 3.6.3(a) and 3.6.3(b) enable us to deduce immediately that $G_m$ converges uniformly to $G$ on compact subsets of $B$. Observe that, by making the substitution $t = \sqrt{2c}\,z$, $G(a,c)$ can be expressed as

$$G(a,c) = \frac{f(\tfrac12)}{\sqrt{2c}}\int_0^\infty e^{-\frac{t^2}{2}+\frac{2a}{\sqrt{2c}}t}\,dt\,.$$

Using Lemma 3.6.1 we have established that

$$\int_0^{\sqrt m}\Big(1+\frac{z}{\sqrt m}\Big)^{S_{ijm}-\frac12T_{ijm}}\Big(1-\frac{z}{\sqrt m}\Big)^{\frac12T_{ijm}-S_{ijm}}\Big(1-\frac{z^2}{m}\Big)^{\frac12T_{ijm}} f\Big(\tfrac12\Big(\frac{z}{\sqrt m}+1\Big)\Big)\,dz$$

converges in law, under the null hypothesis, to

$$\frac{f(\tfrac12)}{\sqrt{\mu_2}}\int_0^\infty e^{-\frac{t^2}{2}+tZ_{ij}}\,dt\,.$$

Equivalently,

$$2\sqrt m\int_{1/2}^1 (2p)^{S_{ijm}}(2[1-p])^{T_{ijm}-S_{ijm}} f(p)\,dp \ \xrightarrow{\ L\ }\ \frac{f(\tfrac12)}{\sqrt{\mu_2}}\int_0^\infty e^{-\frac{t^2}{2}+tZ_{ij}}\,dt\,.$$

Since $\sqrt{T_{ijm}/m}\xrightarrow{P}\sqrt{\mu_2}$, this implies, by Theorem 1.2.3, that

$$2\sqrt{T_{ijm}}\int_{1/2}^1 (2p)^{S_{ijm}}(2[1-p])^{T_{ijm}-S_{ijm}} f(p)\,dp \ \xrightarrow{\ L\ }\ f(\tfrac12)\int_0^\infty e^{-\frac{t^2}{2}+tZ_{ij}}\,dt\,. \tag{3.6.5}$$

Mill's Ratio, $M(z)$, is defined by

$$M(z) = \frac{\displaystyle\int_z^\infty \frac{1}{\sqrt{2\pi}}\,e^{-u^2/2}\,du}{\displaystyle\frac{1}{\sqrt{2\pi}}\,e^{-z^2/2}} = e^{z^2/2}\int_z^\infty e^{-u^2/2}\,du\,,$$

which, upon substituting $t = u-z$, reduces to

$$M(z) = \int_0^\infty e^{-\frac{t^2}{2}-tz}\,dt\,. \tag{3.6.6}$$

As a consequence of (3.6.5) and (3.6.6), any test of the form given in Theorem 3.5.2(a) is asymptotically equivalent (when $F(p)$ has a monotonically decreasing density $f(p)$ which is continuous at $p = \tfrac12$) to one which rejects $\Theta_0^*$ if and only if

$$\sum_{(i,j)\in I_2}\frac{1}{\sqrt{\mu_2}}\,M(-Z_{ij}) \ \ge\ c^*\,.$$
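Mill's ratio is available in closed form through the complementary error function, so the approximate test statistic can be computed directly rather than read from tables. A minimal sketch, assuming the tournament data arrive as a list of $(T_{ijm}, S_{ijm})$ pairs:

```python
import math

def mills_ratio(z):
    """M(z) = e^(z^2/2) * integral_z^inf e^(-u^2/2) du, equation (3.6.6),
    expressed through the complementary error function erfc."""
    return math.exp(z * z / 2) * math.sqrt(math.pi / 2) * math.erfc(z / math.sqrt(2))

def approx_statistic(stats, m):
    """Sum over pairs of sqrt(m/T_ijm) * M((T_ijm - 2*S_ijm)/sqrt(T_ijm)),
    the statistic of the approximate size-alpha test; `stats` is a list
    of (T_ijm, S_ijm) values, one per pair in I_2."""
    return sum(math.sqrt(m / t) * mills_ratio((t - 2 * s) / math.sqrt(t))
               for (t, s) in stats)
```

Because $M$ is decreasing, a pair winning far more than half of its outside games makes $(T-2S)/\sqrt T$ very negative and its $M$ value very large, pushing the statistic past $c^*$; an exactly even pair contributes only $M(0) = \sqrt{\pi/2}$.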
The constant $c^*$ can be chosen by simulation, with the aid of tables for $M(z)$ (prepared by Birnbaum et al [2]), to make the size of the test $\alpha$. An approximate test of $\theta\in\Theta_0^*$, then, rejects if and only if

$$\sum_{(i,j)\in I_2}\sqrt{\frac{m}{T_{ijm}}}\;M\!\left(\frac{T_{ijm}-2S_{ijm}}{\sqrt{T_{ijm}}}\right) \ \ge\ c_2^*\,.$$

3.7 Selecting the Strongest Pair

In the case where there is evidence to suggest that a tournament may have two (approximately equally) strong players, the problem of selection may be regarded in either of two ways. One possibility is to let the action space be a subset of $P(I_2)$, the power set of $I_2$, and, after observing the sufficient statistic, $T$, attempt to pick a subset containing the strongest pair. The other possibility is to let the action space consist of all subsets of $\{1,\ldots,n\}$ and attempt to pick a subset containing one (or both) of the strongest players. In this section we consider the former possibility. We do so since it is then possible to define a suitable transformation group on $A$ and apply Lemma 3.5.1.

Let $\hat n = \binom n2$ and $S_{\hat n}$ be the group consisting of the $\hat n!$ permutations on $I_2$. The transformation groups, $G$ and $\bar G$, are as defined in equation (3.4.2), while $\tilde G$ is defined by

$$\tilde G = \{\tilde g_\sigma \mid \tilde g_\sigma : A\to A,\ \sigma\in S_{\hat n}\}\,,\qquad\text{where}\qquad \tilde g_\sigma(a) = \{\sigma(i,j)\mid(i,j)\in a\}\,.$$

$G$ and $\bar G$ are, as previously noted, homomorphic to $S_{\hat n}$. Furthermore, for $\sigma\in S_{\hat n}$ we define $\phi : S_{\hat n}\to\tilde G$ by $\phi(\sigma) = \tilde g_\sigma$. Then $\phi$ is a homomorphism since

$$\phi(\sigma_1\sigma_2)(a) = \tilde g_{\sigma_1\sigma_2}(a) = \{\sigma_1\sigma_2(i,j)\mid(i,j)\in a\} = \tilde g_{\sigma_1}\{\sigma_2(i,j)\mid(i,j)\in a\} = \phi(\sigma_1)\phi(\sigma_2)(a)\,,$$

and so $\tilde G$ is also homomorphic to $S_{\hat n}$.

We derive first the Bayes invariant procedure for the problem of selecting an element of $P(I_2)$ so that it contains the strongest pair with probability at least as large as some prescribed constant $\beta$, $0\le\beta\le1$.
In this case $A = P(I_2)$. For convenience of notation, let $\underline i = (i_1,i_2)$, $\underline j = (j_1,j_2)$ and $\underline\ell = (\ell_1,\ell_2)$ represent pairs from $I_2$, and let $\underline n = (n-1,n)$ be the $\hat n$th pair. A loss of $L_{\underline i}(\theta)$ is incurred if the pair $\underline i$, $\underline i\in I_2$, is included in the subset when $\theta$ is the true state of nature. In addition, $L$ units are lost if the strong pair is not included. Recall that $\Theta_{\underline i}$ represents the set of $\hat n$-tuples (lexicographically ordered) whose components are all $\tfrac12$ except for the $\underline i$ component, which is $p$, where $p$ varies over the closed interval $[\tfrac12,1]$. For $\theta\in\Theta_{\underline i}$ there exist $\theta_{\underline n}\in\Theta_{\underline n}$ and $\sigma\in S_{\hat n}$ such that $\sigma(\underline n) = \underline i$ and $\bar g_\sigma(\theta_{\underline n}) = \theta$. The loss function $L_{\underline i}(a,p)$ is therefore given by

$$L_{\underline i}(a,p) = L(a,\bar g\theta_{\underline n}) = \sum_{\underline j\in a} L_{\underline j}(\bar g\theta_{\underline n}) + L\{1-\chi_a[\sigma(\underline n)]\}\,,$$

where $\chi_a$ is the characteristic function of $a$ in $A$, $\sigma$ is any permutation under which $\sigma(\underline n) = \underline i$, and $\bar g$ is the homomorphic image of $\sigma$. Note that

$$\sum_{\underline j\in\tilde g(a)} L_{\underline j}(\bar g\theta) = \sum_{\underline j\in a} L_{\tilde g(\underline j)}(\bar g\theta)
\qquad\text{and}\qquad
\chi_{\tilde g(a)}[\tilde g\sigma_0(\underline n)] = \chi_a[\sigma_0(\underline n)]\,,$$

where $\tilde g$ and $\bar g$ are the homomorphic images of $\sigma$. Thus, by requiring of $L_{\underline i}(\theta)$ that $L_{\underline i}(\theta) = L_{\tilde g(\underline i)}(\bar g\theta)$, we ensure that $L(a,\theta) = L(\tilde ga,\bar g\theta)$, and so the problem remains invariant under the group, $S_{\hat n}$, of permutations on $I_2$.

Let $t = (\ldots,t_{\underline im},s_{\underline im},\ldots)$ and

$$T_{\underline i}(t) = \sum_{\substack{\underline j\in I_2\\ \underline j\ne\underline n}}\ \sum_{\substack{\underline\ell\in I_2\\ \underline\ell\ne\underline i}}\int_{1/2}^1 L_{\underline j}(p)\,(2p)^{s_{\underline\ell m}}(2[1-p])^{t_{\underline\ell m}-s_{\underline\ell m}}\,dF(p)$$
$$\quad+\ (\hat n-1)\int_{1/2}^1 L_{\underline n}(p)\,(2p)^{s_{\underline im}}(2[1-p])^{t_{\underline im}-s_{\underline im}}\,dF(p)
\ -\ (\hat n-1)L\int_{1/2}^1 (2p)^{s_{\underline im}}(2[1-p])^{t_{\underline im}-s_{\underline im}}\,dF(p)\,. \tag{3.7.1}$$

Here $L_{\underline j}(p)$ is an abbreviation for $L_{\underline j}((\tfrac12,\ldots,\tfrac12,p)')$ and $F$ is the prior measure on the Borel subsets of $[\tfrac12,1]$. Then in terms of this notation we have

THEOREM 3.7.1.
The Bayes invariant random subset selection procedure with respect to a prior measure $F$ on the Borel subsets of $[\tfrac12,1]$ consists of including the pair $\underline i$, $\underline i\in I_2$, if and only if $T_{\underline i}(t) < 0$, where $T_{\underline i}(t)$ is as defined in equation (3.7.1).

PROOF : Let $\xi$ be a probability measure defined on the power set of $A$ with density $\xi$, let

$$\psi_{\underline i} = \sum_{a\in A:\ \underline i\in a}\xi(a)$$

be the probability of choosing the pair $\underline i$, and let $g$, $\bar g$, $\tilde g$ be the homomorphic images of $\sigma\in S_{\hat n}$. For simplicity, the abbreviation $p$ is used for $(\tfrac12,\ldots,\tfrac12,p)'\in\Theta_{\underline n}$ and $q(t\mid\bar gp)$ for $(2p)^{s_{\sigma(\underline n)m}}(2[1-p])^{t_{\sigma(\underline n)m}-s_{\sigma(\underline n)m}}$. Note that the invariance of the loss function implies

$$L_{\sigma(\underline n)}(a,p) = \sum_{\underline i\in a}\{L_{\sigma^{-1}(\underline i)}(p) - L\,\chi_{\{\sigma^{-1}(\underline i)\}}[\underline n]\} + L\,.$$

We therefore have, apart from a term which does not depend on $\xi$,

$$\inf_\xi\ \sum_{\sigma\in S_{\hat n}}\sum_{a\in A}\int_{1/2}^1 \xi(a)\,L_{\sigma(\underline n)}(a,p)\,q(t\mid\bar gp)\,dF(p)$$
$$= \inf_\xi\ \sum_{a\in A}\sum_{\underline i\in a}\sum_{\sigma\in S_{\hat n}}\int_{1/2}^1 \xi(a)\{L_{\sigma^{-1}(\underline i)}(p) - L\,\chi_{\{\sigma^{-1}(\underline i)\}}[\underline n]\}\,q(t\mid\bar gp)\,dF(p)$$
$$= \inf_\psi\ \sum_{\underline i\in I_2}\psi_{\underline i}\int_{1/2}^1\Big[\sum_{\underline j\in I_2} L_{\underline j}(p)\,q_{\underline i\underline j}(t\mid p) - L\,q_{\underline i\underline n}(t\mid p)\Big]\,dF(p)\,,$$

where

$$q_{\underline i\underline j}(t\mid p) = \sum_{\sigma\in S_{\hat n}:\ \sigma^{-1}(\underline i)=\underline j} q(t\mid\bar gp)\,.$$

By Lemma 3.5.1 the Bayes invariant procedure is that probability measure which minimizes, as a function of $\psi = (\ldots,\psi_{\underline i},\ldots)$, the infimum obtained above. Thus

$$\psi_{\underline i}(t) = \begin{cases} 1 & \text{if } \displaystyle\sum_{\underline j\in I_2}\int_{1/2}^1 L_{\underline j}(p)\,q_{\underline i\underline j}(t\mid p)\,dF(p) < L\int_{1/2}^1 q_{\underline i\underline n}(t\mid p)\,dF(p)\\[4pt] 0 & \text{otherwise.}\end{cases} \tag{3.7.2}$$

Let $G_{\underline i\underline j} = \{\sigma\in S_{\hat n}\mid\sigma^{-1}(\underline i) = \underline j\}$. Suppose $\underline j = \underline n$. Then $G_{\underline i\underline n}$ consists of the $(\hat n-1)!$ permutations which carry $\underline n$ into $\underline i$. Since $\sigma(\underline n) = \underline i$ for all $\sigma\in G_{\underline i\underline n}$,
it follows that

$$\sum_{\sigma\in G_{\underline i\underline n}}\int_{1/2}^1 L_{\underline n}(p)\,(2p)^{s_{\sigma(\underline n)m}}(2[1-p])^{t_{\sigma(\underline n)m}-s_{\sigma(\underline n)m}}\,dF(p)
= (\hat n-1)!\int_{1/2}^1 L_{\underline n}(p)\,(2p)^{s_{\underline im}}(2[1-p])^{t_{\underline im}-s_{\underline im}}\,dF(p)\,. \tag{3.7.3}$$

Suppose, on the other hand, that $\underline j\ne\underline n$. In this case $G_{\underline i\underline j}$ consists of all permutations carrying $\underline j$ into $\underline i$ and $\underline n$ into some pair $\underline\ell$, where $\underline\ell$ is an arbitrary element of $I_2$ for which $\underline\ell\ne\underline i$. There are $(\hat n-2)!$ permutations such that $\sigma^{-1}(\underline i) = \underline j$ and $\sigma^{-1}(\underline\ell) = \underline n$, and there are $\hat n-1$ possible values that $\underline\ell$ can be. Therefore, for $\underline j\ne\underline n$,

$$\sum_{\sigma\in G_{\underline i\underline j}}\int_{1/2}^1 L_{\underline j}(p)\,(2p)^{s_{\sigma(\underline n)m}}(2[1-p])^{t_{\sigma(\underline n)m}-s_{\sigma(\underline n)m}}\,dF(p)
= (\hat n-2)!\sum_{\substack{\underline\ell\in I_2\\ \underline\ell\ne\underline i}}\int_{1/2}^1 L_{\underline j}(p)\,(2p)^{s_{\underline\ell m}}(2[1-p])^{t_{\underline\ell m}-s_{\underline\ell m}}\,dF(p)\,. \tag{3.7.4}$$

The procedure given by (3.7.2) can be expressed as follows : include the pair $\underline i$ if and only if

$$\sum_{\substack{\underline j\in I_2\\ \underline j\ne\underline n}}\int_{1/2}^1 L_{\underline j}(p)\,q_{\underline i\underline j}(t\mid p)\,dF(p) + \int_{1/2}^1 L_{\underline n}(p)\,q_{\underline i\underline n}(t\mid p)\,dF(p) - L\int_{1/2}^1 q_{\underline i\underline n}(t\mid p)\,dF(p) < 0\,,$$

which, with the aid of (3.7.3) and (3.7.4), reduces (after division by $(\hat n-2)!$) to $T_{\underline i}(t) < 0$.

The value of $L$ is then chosen so that the probability of including the strongest pair is greater than or equal to $\beta$, $0\le\beta\le1$ (see Narayana and Zidek [8]).

The second method of selection considered consists of choosing a subset of fixed size, say $s$. In this case $A$ consists of all subsets of $I_2$ of size $s$. We take the loss to be 0 if the strong pair is in the subset and 1 otherwise ; that is,

$$L_{\underline i}(a,p) = \begin{cases} 0 & \text{if } \underline i\in a\\ 1 & \text{otherwise.}\end{cases}$$

This loss function is invariant since $L(a,\theta) = 0$ implies $\theta\in\Theta_{\underline i}$ for some $\underline i\in a$, which implies successively $\bar g\theta\in\Theta_{\tilde g(\underline i)}$ for some $\underline i\in a$, $\bar g\theta\in\Theta_{\underline i}$ for some $\underline i\in\tilde ga$, and $L(\tilde ga,\bar g\theta) = 0$. Likewise, if $L(a,\theta) = 1$ then $L(\tilde ga,\bar g\theta) = 1$. The Bayes invariant procedure for this method of selection is summarized in the following theorem.

THEOREM 3.7.2. Let $t = (\ldots,t_{\underline im},s_{\underline im},\ldots)$.
The Bayes invariant procedure with respect to a prior measure $F$ on the Borel subsets of $[\tfrac12,1]$ for the above method of selection consists of choosing the $s$ largest values of $V_{\underline i}(t)$, where

$$V_{\underline i}(t) = \int_{1/2}^1 (2p)^{s_{\underline im}}(2[1-p])^{t_{\underline im}-s_{\underline im}}\,dF(p)\,. \tag{3.7.5}$$

PROOF : Note that

$$L_{\sigma(\underline n)}(a,p) = 1 - \sum_{\underline i\in a}\chi_{\{\underline i\}}(\sigma(\underline n))\,.$$

Proceeding as in the proof of Theorem 3.7.1, we obtain, apart from a term which does not depend on $\xi$,

$$\inf_\xi\ \sum_{\sigma\in S_{\hat n}}\sum_{a\in A}\int_{1/2}^1 \xi(a)\,L_{\sigma(\underline n)}(a,p)\,q(t\mid\bar gp)\,dF(p)
= \inf_\psi\ \Big\{-\sum_{\underline i\in I_2}\psi_{\underline i}\int_{1/2}^1\Big[\sum_{\sigma\in S_{\hat n}}\chi_{\{\underline i\}}(\sigma(\underline n))\,q(t\mid\bar gp)\Big]\,dF(p)\Big\}$$
$$= \inf_\psi\ \Big\{-\sum_{\underline i\in I_2}\psi_{\underline i}\,(\hat n-1)!\int_{1/2}^1 (2p)^{s_{\underline im}}(2[1-p])^{t_{\underline im}-s_{\underline im}}\,dF(p)\Big\}\,.$$

The Bayes invariant selection procedure therefore consists of choosing the $s$ largest values of $V_{\underline i}$, where $V_{\underline i}$ is as defined by equation (3.7.5).

CHAPTER IV

EXTENSIONS TO THE K-STRONG PLAYER CASE

For a tournament consisting of n players, the results of Chapter III can be readily extended to the case where we wish to test for the existence of (and then select) k players, $1\le k<n$, who are stronger than the others and who are, themselves, (approximately) equal in strength. Although the analysis may be slightly more involved in some places, it is evident that the methods of proof employed for the previous case carry over to this case. In this chapter we present, without proof, results for the k-strong-player case which are suggested by our earlier analysis.

4.1 Extension of the Basic Statistical Properties

Let $\underline i = (i_1,\ldots,i_k)$ represent a k-tuple of integers and let $I_k$ index the set of all $\binom nk$ possible k-tuples such that $i_1 < \cdots < i_k$. We denote by $T_{\underline i}$ and $S_{\underline i}$ the number of times a player in $\underline i$ plays someone not in $\underline i$ and the number of times a player in $\underline i$ beats someone not in $\underline i$, respectively. That is,

$$T_{\underline i} = \sum_{i\in\underline i} T_i - 2\sum_{\substack{i,j\in\underline i\\ i<j}} N_{ij}
\qquad\text{and}\qquad
S_{\underline i} = \sum_{i\in\underline i} S_i - \sum_{\substack{i,j\in\underline i\\ i<j}} N_{ij}\,,$$

where $T_i$, $S_i$ and $N_{ij}$ are as defined by equation (1.3.2).
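The identities above let $T_{\underline i}$ and $S_{\underline i}$ be computed from per-player totals without re-scanning the game records. A sketch follows; the dictionary encoding of $N_{ij}$ is an assumption made for illustration.

```python
from itertools import combinations

def tuple_stats(players, T, S, N):
    """Compute (T_i, S_i) for a k-tuple via section 4.1's identities:
    subtract intra-tuple games twice from the summed game counts (both
    members' totals include each such game) and once from the summed
    win counts (each intra-tuple game produces exactly one win inside
    the tuple).  T and S map player -> totals; N maps
    frozenset({i, j}) -> games played between i and j."""
    inside = sum(N.get(frozenset(p), 0) for p in combinations(players, 2))
    t = sum(T[i] for i in players) - 2 * inside
    s = sum(S[i] for i in players) - inside
    return t, s
```

In a three-player round robin where player 1 beats 2 and 3, and 2 beats 3, the triple (1,2,3) has no games against outsiders, so both of its statistics vanish, while each pair gets the values found directly in Chapter III.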
Under the hypothesis that all players are equal we make the following two assumptions about the design of the tournament :

(1) the $\binom nk$ random vectors $(T_{\underline i}, S_{\underline i})$, $\underline i\in I_k$, are exchangeable, as are the random variables $(2S_{\underline i}-T_{\underline i})$ ;

(2) the tournament is k-connected in the sense that, if $P = \{1,\ldots,n\}$ and $P'$ is any subset of k members of $P$, then at least one player of $P'$ must play at least one player of $P-P'$. In particular, this implies $T_{\underline i}\ge1$ for each $\underline i\in I_k$.

The parameter space, $\Theta$, is represented as follows. Let $\eta = \binom nk$ and let $\Theta_{\underline i}$ be the set of $\eta$-tuples whose components are ordered lexicographically by $I_k$ and whose $\underline i$ component is $p$, $p\in[\tfrac12,1]$, while all other components are $\tfrac12$. Then $\Theta = \bigcup_{\underline i\in I_k}\Theta_{\underline i}$.

Suppose the tournament is independently repeated m times. Let $T_{\underline i}^{(v)}$ and $S_{\underline i}^{(v)}$ denote the independent copies of $T_{\underline i}$ and $S_{\underline i}$, $\underline i\in I_k$, determined by the $v$th replication. Define $T_{\underline im}$ and $S_{\underline im}$, $\underline i\in I_k$, $m\ge1$, by

$$T_{\underline im} = \sum_{v=1}^m T_{\underline i}^{(v)}\,,\qquad S_{\underline im} = \sum_{v=1}^m S_{\underline i}^{(v)}\,.$$

For $\theta = (\ldots,\theta_{\underline i},\ldots)'\in\Theta$ the likelihood function is

$$L(\theta) = c\,2^{Mm(n-1)}\prod_{\underline i\in I_k}\left(\tfrac12\right)^{Mm-T_{\underline im}}\theta_{\underline i}^{\,S_{\underline im}}(1-\theta_{\underline i})^{\,T_{\underline im}-S_{\underline im}}$$

and, therefore, $T = (\ldots,T_{\underline im},S_{\underline im},\ldots)'$ is sufficient for the family of underlying distributions.

Let $S_\eta$ be the group consisting of the $\eta!$ permutations on $I_k$, and define the functions $g_\sigma$ and $\bar g_\sigma$, $\sigma\in S_\eta$, by

$$g_\sigma(\ldots,t_{\underline i},s_{\underline i},\ldots)' = (\ldots,t_{\rho(\underline i)},s_{\rho(\underline i)},\ldots)'\,,\qquad
\bar g_\sigma\theta = \bar g_\sigma(\ldots,\theta_{\underline i},\ldots)' = (\ldots,\theta_{\rho(\underline i)},\ldots)'\,,\qquad \rho = \sigma^{-1}\,.$$

The probability of observing the value $T = t$ is given by

$$q(t\mid\theta) = C(t)\,2^{Mm(n-1)}\prod_{\underline i\in I_k}\left(\tfrac12\right)^{Mm-t_{\underline i}}\theta_{\underline i}^{\,s_{\underline i}}(1-\theta_{\underline i})^{\,t_{\underline i}-s_{\underline i}}$$

and this density function is invariant under the group $G = \{g_\sigma\mid\sigma\in S_\eta\}$. In the problems of testing and selection we consider only invariant loss functions and thereby restrict ourselves, when considering Bayes procedures, to the class of invariant procedures.
4.2. Testing Procedures

For the problem of testing the null hypothesis of player equality, let $\Theta_0^* = \{(\tfrac12,\ldots,\tfrac12)'\}$ and $\Theta_1^* = \Theta-\Theta_0^*$. The likelihood ratio test rejects the null hypothesis if and only if

$$\max_{\underline i\in I_k}\ (2\hat p_{\underline i})^{S_{\underline im}}(2[1-\hat p_{\underline i}])^{T_{\underline im}-S_{\underline im}} \ \ge\ C(\alpha)\,,
\qquad\text{where}\qquad
\hat p_{\underline i} = \begin{cases} S_{\underline im}/T_{\underline im} & \text{if } S_{\underline im}/T_{\underline im}\ge\tfrac12\\[2pt] \tfrac12 & \text{otherwise.}\end{cases}$$

This test is asymptotically equivalent, as $m\to\infty$, to that test which rejects the null hypothesis if and only if

$$\max_{\underline i\in I_k}\{\sqrt{T_{\underline im}}\,(2\hat p_{\underline i}-1)\} \ \ge\ c(\alpha)^*\,,$$

and $c^*$ is approximately that number $c^{**}$ for which

$$\alpha = 1 - \int_{-\infty}^{c^{**}}\!\!\cdots\int_{-\infty}^{c^{**}} \sqrt{\eta}\left(\frac{\eta-1}{2\pi\eta}\right)^{(\eta-1)/2} \exp\left\{-\frac{\eta-1}{2\eta}\left[\sum_{i=2}^{\eta} v_i^2 + \Big(\sum_{i=2}^{\eta} v_i\Big)^2\right]\right\}\, dv_2\cdots dv_\eta\,.$$

The class of all Bayes invariant test procedures is also similar in form to that derived previously. The action space, $A$, consists of the two elements $a_0$ : accept the null hypothesis, $\theta\in\Theta_0^*$, and $a_1$ : accept the alternative hypothesis, $\theta\in\Theta_1^*$. We let $\tilde G = \{e\}$, the identity transformation on $A$, and consider only those loss functions $L(a,\theta)$ such that $L(\tilde ga,\bar g\theta) = L(a,\theta)$. Suppose $\Pi$ is a prior measure on the Borel subsets of $[\tfrac12,1]$, $F$ is the conditional distribution of $p$ given $p>\tfrac12$, and $t = (\ldots,t_{\underline i},s_{\underline i},\ldots)$. Then the Bayes invariant procedure with respect to $\Pi$ evaluated at $t$ consists of all tests of the form, reject $\Theta_0^*$ if and only if

$$\sum_{\underline i\in I_k}\int_{1/2}^1 L_0(p)\,(2p)^{s_{\underline im}}(2[1-p])^{t_{\underline im}-s_{\underline im}}\,dF(p) \ \ge\ c\,, \tag{4.2.1}$$

for some constant $c\ge0$.

If $F$ has a monotonically decreasing density $f$ which is continuous at $p = \tfrac12$ then, under the null hypothesis,

$$2\sqrt m\int_{1/2}^1 (2p)^{S_{\underline im}}(2[1-p])^{T_{\underline im}-S_{\underline im}} f(p)\,dp \ \xrightarrow{\ L\ }\ \frac{f(\tfrac12)}{\sqrt\mu}\int_0^\infty e^{-\frac{t^2}{2}+tZ_{\underline i}}\,dt\,,$$

where $\mu$ is the mean of $T_{\underline i}$, $\underline i\in I_k$, under $H_0$ and $Z_{\underline i}$ is the limiting random variable, as $m\to\infty$, under $H_0$ of $[2S_{\underline im}-T_{\underline im}]/\sqrt{T_{\underline im}}$, $\underline i\in I_k$. Therefore, for zero-one loss, an approximate test of that given in equation (4.2.1) is the one which rejects $\Theta_0^*$ if and only if

$$\sum_{\underline i\in I_k}\sqrt{\frac{m}{T_{\underline im}}}\;M\!\left(\frac{T_{\underline im}-2S_{\underline im}}{\sqrt{T_{\underline im}}}\right) \ \ge\ c^*\,,$$
— im — -4c/ im , — • where M(z) denotes Mill's ratio. The value of c* can be chosen by simulation to give an a-level test. - .93 -4.3 S e l e c t i o n Procedures In order to u t i l i z e the invariance of q (t|G) under the group G for the problem of s e l e c t i o n , we must s e l e c t subsets of I rather than subsets of P = {l,''',n} . For the f i r s t method of s e l e c t i o n we take A to be the power set of 1^ and we wish to s e l e c t an element, a, of A so that i t contains the strongest k-tuple with p r o b a b i l i t y at l e a s t as large as some prescribed constant. A loss of L.(0) i s incurred i f the k-tuple i , i e I, , i s included i n the subset when 6 i s the true s t a t e of nature. In addition, L units are l o s t i f the strong k-tuple i s not included. We consider only los s functions L^(0) such that ^ ( i ) ® ^ = ^ ( 0 ) , a e and ~g = "ga . Let F be a p r i o r measure on the Borel subsets of [%,1] and l e t t = ^ " ' ^ i m ^ i m ' " 0 ' - = ( r l - k + 1 > n - k + 2 » '' ' »n-l,n) £ I f c , L ^ p ) = = L i ( % , - - - , % ,p) . Define T ±(t) by T.(t) I i £ l k ££l k J % (p)(2p) ^ m ( 2 [ l ,* lm lm , >. pj) - - dF(p) + (n - 1) I L ( p ) ( 2 p ) S — m ( 2 [ 1 - p ] ) t A m ' S i m dF(p) - (n - l ) L s. t. -s . (2p) - m ( 2 [ l - p]) - m - m dF( P) (4.3.1) - 94 -as Then the Bayes i n v a r i a n t random subset s e l e c t i o n procedure with respect to a p r i o r measure F on the Borel subsets of [%,1] consists of i n c l u d i n g the k-tuple i , i e I 2 , i f and only i f T ^ t ) < 0 where T^Ct) i s defined i n equation (4.3.1) . For the second method of s e l e c t i o n , we pick a subset of f i x e d s i z e , say s, so A consists of a l l subsets of 1^ of s i z e s. Let the loss be 0 i f the strong k-tuple i s i n a and 1 otherwise. For a p r i o r measure F on the Borel subsets of [%,1] , define V. , _ i e I, , by v. = i J s. t . -s . 
(2p) i m ( 2 [ l - P ] ) ± m i m dF(p) (4.3.2) Then the Bayes i n v a r i a n t procedure with respect to a p r i o r measure F on the Borel subsets of [%,1] consists of choosing the s l a r g e s t values of V\ , where i s as defined by equation (4.3.2) . - 95 -CHAPTER V CONCLUDING REMARKS The stipulation that, under the nu l l hypothesis, the random variables (T.,S.) , i e l , , and, likewise, the random variables 1 1 - k (2S. - T.) , i e l , are exchangeable may appear contrived. (In the case k = 1 i t seems reasonable to assume, under player equality, that the random variables T_^  , i = 1, • • • ,n , are exchangeable and in [8] i t is asserted that the rest of the hypothesis follows from this one assumption.) We make these assumptions since the exchangeability of the random variables {(2S^ - T_^ )} , i = l,'*-,n, is necessary in obtaining asymptotic results, while that of the random variables {(T^,S^)} , i = l,-'-,n, is necessary to ensure that the probability density of the sufficient s t a t i s t i c is invariant. The restriction to invariant procedures for k > 1 becomes -increasingly more stringent as k increases. But the invariance principle compels us to select a set of k-tuples rather than of players. Moreover, because the loss functions must be invariant, choosing to include a k-tuple with k - 1 strong players results in the same loss as the choice of one with no strong players. This is somewhat unrealistic. Indeed the only context in which the assumed loss is reasonable is that in which i t is important to avoid unjustly excluding some strong players when others are chosen. - 96 -An alternative approach exists when k > 2 . It requires that a "finer" group action be defined. Observe that where T. . = T. + T. - 2N. . . -We l e t ® . be the the set of i j i 3 IJ 2L n 2 -tuples (with the co-ordinates lexicographically ordered by I^) whose ( i , j ) t n co-ordinate is p, p e [%,1] , i f i and j are in i. , and % otherwise. 
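The fixed-size selection rule of Section 4.3 is straightforward to approximate numerically. In the sketch below, F is taken to be the uniform distribution on [½, 1] purely for concreteness (the thesis leaves F as an arbitrary prior measure), and the replicated records (t_i, s_i) are hypothetical; the statistic V_i of equation (4.3.2) is evaluated by a midpoint rule and the s largest values are retained.

```python
# Numerical sketch of the fixed-size Bayes selection rule (Section 4.3).
# Hypothetical data; F is assumed uniform on [1/2, 1] for illustration only.

def V(s_i, t_i, grid=10_000):
    """Midpoint-rule approximation of the integral of
    (2p)^s_i (2(1-p))^(t_i - s_i) dF(p) with F uniform on [1/2, 1]."""
    total = 0.0
    for j in range(grid):
        p = 0.5 + (j + 0.5) * 0.5 / grid
        total += (2 * p) ** s_i * (2 * (1 - p)) ** (t_i - s_i)
    # dF(p) = 2 dp on [1/2, 1]; any constant factor is irrelevant to the ranking.
    return total / grid

# Hypothetical replicated records (t_i, s_i) for five k-tuples.
records = {"a": (10, 9), "b": (10, 7), "c": (10, 5), "d": (10, 4), "e": (10, 8)}
scores = {i: V(s, t) for i, (t, s) in records.items()}

s_size = 2
selected = sorted(scores, key=scores.get, reverse=True)[:s_size]
print(selected)
```

Since (2p)^s (2[1-p])^(t-s) = (2[1-p])^t (p/(1-p))^s and p/(1-p) ≥ 1 on [½, 1], V_i is increasing in the win count s_i for a fixed t_i, so the rule favours the k-tuples with the most wins, as one would expect.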
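The pair statistic introduced above can be computed directly from a schedule of pairwise game counts. In the sketch below the matrix N is hypothetical, with N[i][j] read as the number of games between players i and j; under that reading, T_ij counts the games in which exactly one of i and j takes part.

```python
# Sketch (hypothetical schedule): the pair statistic T_ij = T_i + T_j - 2*N_ij.
# N[i][j] is the number of games between players i and j, so subtracting
# 2*N_ij removes the i-versus-j games from both totals, leaving the games
# in which exactly one of i, j plays.

n = 4
# Hypothetical symmetric round-robin: one game between every pair of players.
N = [[0 if i == j else 1 for j in range(n)] for i in range(n)]

T = [sum(N[i]) for i in range(n)]        # T_i: games played by player i

def T_pair(i, j):
    return T[i] + T[j] - 2 * N[i][j]

print(T_pair(0, 1))   # prints 4: each plays 3 games, one of them mutual
```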
We would then be selecting a set consisting of pairs instead of k-tuples and could repose the selection problem accordingly. We now need only assume that the \binom{n}{2} random variables (T_ij, S_ij), (i, j) ∈ I₂, are exchangeable under the null hypothesis. But we must still have, under the null hypothesis, that the n random variables (2S_i - T_i), i = 1, ···, n, are exchangeable. Ideally we would like to set Θ_i equal to the set of n-tuples with the i-th co-ordinate p, p ∈ [½, 1], if i ∈ i, and ½ otherwise, and thus define the group action on the elements 1, 2, ···, n. However, it does not seem possible to break T_i + T_j - 2N_ij into two components, one relating only to i and the other only to j.

Finally, we point out that in the paper by Narayana and Zidek [8] the statement is made (page 1571) that, since {([2S_im - T_im]/2√m, T_im/2m)}_{m=1}^∞ is bounded away from the origin in probability, the desired result is obtained by applying the Mann-Wald theory. The result is, indeed, true and the proof has been given in Chapter III, but under the observation that 𝒳 ⊂ {(x, y) | y ≥ ½}.

The following counterexample, which was privately communicated to Dr. Zidek by J. Bjerring, shows that condition (3) of Theorem (3.6.1) cannot be replaced by

(3') G_m → G uniformly on compact subsets of B', where card(B) = 1 and P(X ∈ B) = 0.

Let Ω = [0, 1] with Lebesgue measure, 𝒳 = [0, 1] with the usual metric and 𝒴 = ℝ with the usual metric. Random variables X_n, n = 1, 2, ···, and X, mapping Ω into 𝒳, are defined as follows: X = 1 and X_n = 1 - 1/n, n = 1, 2, ···. Also, functions G_n, n = 1, 2, ···, and G, mapping 𝒳 into 𝒴, are defined through the following equations: G(x) = 0 and

    G_n(x) = \frac{1}{(x - \tfrac{1}{2})^n} \ \text{for} \ x > \tfrac{1}{2} , \qquad G_n(x) = 0 \ \text{for} \ x \le \tfrac{1}{2} .

Now, X_n → X and G is continuous. Let B = {½}. Then G_n → G uniformly on compact subsets of B' and P(X ∈ B) = 0.
However, G(X) = 0 but

    G_n(X_n) = \frac{1}{(1 - 1/n - \tfrac{1}{2})^n} = \frac{1}{(\tfrac{1}{2} - 1/n)^n} \ge \frac{1}{(\tfrac{1}{2})^n} = 2^n \quad \text{for} \ n > 2 .

Therefore, G_n(X_n) does not converge in probability to G(X).

REFERENCES

[1]. Berberian, S.K., Measure and Integration. New York, The MacMillan Co., 1965.

[2]. Birnbaum et al., The normal probability function: tables of certain area-ordinate ratios and of their reciprocals. Biometrika, 1955, 42, 217-222.

[3]. Blackwell, D.A., and Girshick, M.A., Theory of Games and Statistical Decisions. New York, John Wiley and Sons, Inc., 1954.

[4]. David, H.A., The Method of Paired Comparisons. London, Charles Griffin and Co. Ltd., 1963.

[5]. Ferguson, T.S., Mathematical Statistics: A Decision Theoretic Approach. New York, Academic Press, 1967.

[6]. Grubbs, F.E., Sample criteria for testing outlying observations. Ann. Math. Statist., 1950, 21, 27-58.

[7]. Lindgren, B.W., Statistical Theory. New York, The MacMillan Co., 1963.

[8]. Narayana, T.V., and Zidek, J.V., Contributions to the theory of tournaments, II. Rev. Roum. Math. Pures et Appl., Tome XIV, No. 10, 1563-1576, Bucarest, 1969.

[9]. Rao, C.R., Linear Statistical Inference and Its Applications. New York, John Wiley and Sons, Inc., 1965.

[10]. Royden, H.L., Real Analysis. New York, The MacMillan Co., 1968.

[11]. Studden, W.J., On selecting a subset of k populations containing the best. Ann. Math. Statist., 1967, 38, 1072-1078.

[12]. Zidek, J.V., A representation of Bayes invariant procedures in terms of Haar measure. Ann. Inst. Statist. Math., 1969, 21, 291-308.
