"Science, Faculty of"@en . "Statistics, Department of"@en . "DSpace"@en . "UBCV"@en . "Wang, Steven Xiaogang"@en . "2009-10-09T20:15:00Z"@en . "2001"@en . "Doctor of Philosophy - PhD"@en . "University of British Columbia"@en . "A maximum weighted likelihood method is proposed to combine all the relevant\r\ndata from different sources to improve the quality of statistical inference especially\r\nwhen the sample sizes are moderate or small.\r\nThe linear weighted likelihood estimator (WLE), is studied in depth. The weak\r\nconsistency, strong consistency and the asymptotic normality of the WLE are proved.\r\nThe asymptotic properties of the WLE using adaptive weights are also established.\r\nA procedure for adaptively choosing the weights by using cross-validation is proposed\r\nin the thesis. The analytical forms of the \"adaptive weights\" are derived when the\r\nWLE is a linear combination of the MLE's. The weak consistency and asymptotic normality\r\nof the WLE with weights chosen by cross-validation criterion are established.\r\nThe connection between WLE and theoretical information theory is discovered. The\r\nderivation of the weighted likelihood by using the maximum entropy principle is presented.\r\nThe approximations of the distributions of the WLE by using saddlepoint\r\napproximation for small sample sizes are derived. The results of the application to\r\nthe disease mapping are shown in the last chapter of this thesis."@en . "https://circle.library.ubc.ca/rest/handle/2429/13844?expand=metadata"@en . "5580559 bytes"@en . "application/pdf"@en . "Maximum Weighted Likelihood Estimation by Steven Xiaogang Wang B.Sc , Beijing Polytechnic University, P.R. China, 1991. M.S., University of California at Riverside, U.S.A., 1996. \u00E2\u0080\u009E A THESIS S U B M I T T E D IN P A R T I A L F U L F I L L M E N T OF T H E R E Q U I R E M E N T S F O R T H E D E G R E E OF Doctor of Philosophy in T H E F A C U L T Y OF G R A D U A T E STUDIES Department of Statistics We accept this thesis as conforming to the required standard T H E U N I V E R S I T Y OF BRITISH C O L U M B I A June 21, 2001 \u00C2\u00A9Steven Xiaogang Wang, 2001 In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission. Department The University of British Columbia Vancouver, Canada DE-6 (2/88) Maximum Weighted Likelihood Estimation Steven X. Wang A b s t r a c t A maximum weighted likelihood method is proposed to combine all the relevant data from different sources to improve the quality of statistical inference especially when the sample sizes are moderate or small. The linear weighted likelihood estimator (WLE), is studied in depth. The weak consistency, strong consistency and the asymptotic normality of the WLE are proved. The asymptotic properties of the WLE using adaptive weights are also established. A procedure for adaptively choosing the weights by using cross-validation is proposed in the thesis. The analytical forms of the \"adaptive weights\" are derived when the WLE is a linear combination of the MLE's. 
The weak consistency and asymptotic nor-mality of the WLE with weights chosen by cross-validation criterion are established. The connection between WLE and theoretical information theory is discovered. The derivation of the weighted likelihood by using the maximum entropy principle is pre-sented. The approximations of the distributions of the WLE by using saddlepoint approximation for small sample sizes are derived. The results of the application to the disease mapping are shown in the last chapter of this thesis. ii Contents Abs t rac t i i i Table of Contents i i i L is t of Figures v i L i s t of Tables v i i Acknowledgements v i i i 1 In t roduct ion 1 1.1 Introduction 1 1.2 Local Likelihood and Related Methods 2 1.3 Relevance Weighted Likelihood Method 6 1.4 Weighted Likelihood Method 6 1.5 A Simple Example 8 1.6 The Scope of the Thesis 11 2 M o t i v a t i n g Example : N o r m a l Populat ions 14 2.1 A Motivating Example 15 2.2 Weighted Likelihood Estimation 16 2.3 A Criterion for Assessing Relevance 18 iii 2.4 The Optimum WLE 22 2.5 Results for Bivariate Normal Populations 24 3 M a x i m u m Weighted L ike l ihood Es t ima t ion 28 3.1 Weighted Likelihood Estimation 28 3.2 Results for One-Parameter Exponential Families 29 3.3 WLE On Restricted Parameter Spaces 32 3.4 Limits of Optimum Weights 38 4 A s y m p t o t i c Proper t ies of the W L E 48 4.1 Asymptotic Results for the WLE 49 4.1.1 Weak Consistency 50 4.1.2 Asymptotic Normality 60 4.1.3 Strong Consistency 70 4.2 Asymptotic Properties of Adaptive Weights 74 4.2.1 Weak Consistency and Asymptotic Normality 74 4.2.2 Strong Consistency by Using Adaptive Weights 77 4.3 Examples 78 4.3.1 Estimating a Univariate normal Mean 78 4.3.2 Restricted Normal Means 79 4.3.3 Multivariate Normal Means 81 4.4 Concluding Remarks 83 5 Choos ing Weights by Cross-Val ida t ion 85 5.1 Introduction 85 5.2 Linear WLE for Equal Sample Sizes 87 5.2.1 Two Population Case 88 iv 5.2.2 Alternative Matrix Representation of A e and be 93 5.2.3 Optimum Weights X\u00C2\u00B0f By Cross-validation 95 5.3 Linear WLE for Unequal Sample Sizes 96 5.3.1 Two Population Case 97 5.3.2 Optimum Weights By Cross-Validation 99 5.4 Asymptotic Properties of the Weights 102 5.5 Simulation Studies 107 6 Derivat ions of the Weighted L ike l ihood Functions 112 6.1 Introduction 112 6.2 Existence of the Optimal Density 114 6.3 Solution to the Isoperimetric Problem 116 6.4 Derivation of the WL Functions 117 7 Saddlepoint A p p r o x i m a t i o n of the W L E 125 7.1 Introduction 125 7.2 Review of the Saddlepoint Approximation 125 7.3 Results for Exponential Family 128 7.4 Approximation for General WL Estimation 134 8 A p p l i c a t i o n to Disease M a p p i n g 137 8.1 Introduction 137 8.2 Weighted Likelihood Estimation 138 8.3 Results of the Analysis 141 8.4 Discussion 143 Bib l iography 146 v List of Figures 2.1 A special case: the solid line is the max\p-a\(D\9), depending continuously on an unknown parameter vector 9 E Er. Suppose that one suspects the unknown parameter 9 of belonging to a given Borel set H C ET. Let H denote a Borel measurable alternative such that H(~)H = 0 with P(H) + P(H) = 1. The key feature of their proposal is to use weighted likelihood ratio defined as follows: $(D\H) where $(D\H) = J (D\9)dP(9\H). The reason that they called it weighted likelihood ratio is because the quantity &(D\H) is the summary of the evidence in D for H. A modern name for their weighted likelihood ratio might be odds ratio. 
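As a rough numerical illustration of this quantity, the sketch below estimates Phi(D|H) and its counterpart for the alternative by Monte Carlo, averaging the likelihood over prior draws that fall in H and in its complement; the Bernoulli data, the uniform prior and the choice H = {theta <= 0.5} are hypothetical and serve only to make the ratio concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: D is 7 successes in 10 Bernoulli trials, the prior on theta
# is Uniform(0, 1), H is "theta <= 0.5" and H-bar is its complement.
n_trials, successes = 10, 7

def likelihood(theta):
    return theta**successes * (1.0 - theta)**(n_trials - successes)

theta_draws = rng.uniform(0.0, 1.0, size=200_000)
in_H = theta_draws <= 0.5

# Phi(D|H) = integral of f(D|theta) dP(theta|H); averaging the likelihood over the
# prior draws that land in H (or in H-bar) gives a simple Monte Carlo estimate.
phi_H = likelihood(theta_draws[in_H]).mean()
phi_H_bar = likelihood(theta_draws[~in_H]).mean()

print("weighted likelihood ratio:", phi_H / phi_H_bar)
```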
Markatou, Basu and Lindsay (1997,1998) propose a method based on the weighted likelihood equation in the context of robust estimation. Their approach can be de-1.2. L O C A L L I K E L I H O O D A N D R E L A T E D M E T H O D S 4 scribed as follows: Suppose that {X\u00C2\u00B1, X2, \u00E2\u0080\u00A2 Xn} is a random sample with distribution f(x;9). The weighted likelihood equation is defined as n Q ^2w(xi,F)\u00E2\u0080\u0094logf{xi;6) i=l where F is the empirical cumulative distribution function. The weight function w(Xi, F) is selected such that it has value close to 1 if there is no evidence of model violation at x from the empirical distribution function. The weight function will be very close to 0 or exactly 0 at Xi if the empirical cumulative distribution func-tions indicates lack of fit at or near X{. Thus, the role of the weight function is to down-weight points in the sample that are inconsistent with the assumed model. Hunsberger (1994) also uses the term \"weighted likelihood\" to arrive at kernel estimators for the parametric and non-parametric components of semi-parametric regression models. Consider the model with Xi\(Yi,Ti = ti) having the distribution f(Xi;Xi) where A; = yiB0 + g(ti). Furthermore let / , the conditional density of X\(Y,T), be arbitrary but known. Then xB0 is the parametric portion, Bo being the unknown parameter to be estimated that relates the covariate y to the response. Here g is the non-parametric portion of the model, the only assumption on g being that it is a smooth function of t. Assume yi = r(i;) + r/i where r is a smooth funtion and the rji are independent random error terms with E(r]i) = 0 and En2 \u00E2\u0080\u0094 a2. Now Xi can be rewritten using the model for the y's to obtain Aj = r/j/30 + h(ti), where h(ti) =\u00E2\u0080\u00A2 r(ti)P0 + g(ti) is the portion that depends on t. The main purpose is to estimate /?o and Qi = h(ti),i = 1, 2 , n in the semi-parametric model by maximizing a weighted likelihood function WW 0) = J2 Y,wC-^Y\u00C2\u00B09f{Xf, P, 9i)/n2b i 3 with respect to B and 6, where 0 = (6\, 9 2 , 9 n ) 1 . In the weighted likelihood function, 1.2. L O C A L L I K E L I H O O D A N D R E L A T E D M E T H O D S 5 w is a kernel that assigns zero weights to the observations Xj that correspond to tj outside a neighborhood of tj. Besides these \"weighted likelihood\" approaches, it should be noted that the term weighted likelihood has been used in other contexts as well. Newton and Raftery (1994) introduce what they called weighted likelihood bootstrap as a way to simu-late approximately from a posterior distribution. The weighted likelihood function is defined as n i = l where the random weight vector w = (wn>i, wnj2,wn>n) has some probability dis-tribution determined by the statistician. The function L is not a likelihood function in the usual sense. It is considered by Newton and Raftery to be good approximation to the posterior. Rao (1991) introduces his definition of the weighted maximum likelihood to deal with irregularities of the observation times in the longitudinal studies of the growth rate. To be more specific, he defines the weighted likelihood as Ln(P) = l[f{xl,n,tl,n,d\{xji), / 2 ( . ; 6 > 2 ) , / m ( . ; 6>m), where X i = (Xil,Xi2,...,Xini)t. The joint distribution of ( X i , X 2 , . . . , X m ) is not assumed. We are interested in the probability density function / i ( . ; 9i) : 9\ \u00E2\u0082\u00AC O of a study variable or vector of variables X , 9\ being an unknown parameter or vector of parameters. 
At least in some qualitative sense, the / 2 ( . ; 62), /m(-; 9m) are thought to be \"similar to\" / i ( . ; 9i). For fixed X = x, the weighted likelihood (WL) is defined as: m WL(^ 1 ) = n/ 1(x i;c9 1)AS i=i where A = (A 1 ; A 2 , A m ) ' is the \"weight vector\" which must be specified by the analyst. We say that 9~x is a maximum weighted likelihood estimator (WLE) for 9\ if h = arg sup WL(0i) . The uniqueness of the maximizer is not assumed. In this thesis, we assume that Xn,X{2, ...,Xirii, i=l,2,...,m, are independent and identically distributed random variables. The W L then becomes m n; wL(0o=nn^^)A i-i=i 3=1 Hu (1997) proposes a paradigm which abstracts that of non-parametric regression and function estimation. There information about 9\ builds up because the number of populations grows with increasingly many in close proximity to that of B\. This is the paradigm commonly invoked in the context of non-parametric regression but it is not always the most natural one. In contrast we postulate a fixed number of 1.5. A SIMPLE E X A M P L E 8 populations with an increasingly large number of samples from each population. Our paradigm may be more natural in situations like that in which James-Stein estimator obtained, where a specific set of populations is under study. 1.5 A Simple Example The advantages of using WLE might be illustrated by the following example. A coin is tossed twice. Let 6\ = P(H) for this coin. Let 6\ denote the MLE of 6\. It then follows that 6X = Si/2 where Si =Xi + X2 and the X's are independent Bernoulli random variables. If unknown to the statistician, 6 = 1/2, then The probability for the MLE to conclude that either the fair coin has no head or no tail is 50%. It is clear that the probability of making a nonsensical decision about the fair coin in this case is extremely high due to the small sample size. Suppose that another coin, not necessarily a fair one, is tossed twice as well. The question here is whether we can use the result from the second experiment to derive a better estimate for 6\. The answer is affirmative if we combine the information obtained from the two experiments. Suppose for definiteness 92 = P(H) = 0.6 for the second coin. Let 92 denote the MLE of 02\u00E2\u0080\u00A2 Thus, 92 = S 2 /2 = (Y1 + Y2)/2 where Yx and Y2 are independent Bernoulli P (0i = 1/2) P ({X, = 0; X2 = l}D{X1 = l;X2 = 0}) = 1/2; 1.5. A SIMPLE E X A M P L E 9 random variables. It then follows that P (l2 = fj) = P(YX = Y2 = 0) = 0.16; P (\u00C2\u00A72 = 1/2) = P ({Yx = 0; Y2 = 1} n {Y1 = 1; Y2 = 0}) = 0.48; P(92 = l) = P (Fi = Y2 = 1) = 0.36. Consider a new estimate which is a linear combination of 6\ and 02: 9\ = Ai#i + A202, where Ai and A2 are relevant weights. The optimum weights will be discussed in later chapters. Pretend that we do not know how to choose the best weights. We might just set each of the weights to be 1/2. It follows that P (0i = 1/4 P (0X - 1/2 P (0i = 3/4 P = P (Sl = 0; S2 = 0) = 0.04; = >({Si = l ; 5 2 = 0}n{Si=0;S 2 = l}) = 0.20; = P ({Si = 1; S2 = 1} n {Si = 2; S 2 = 0} n {Si = 0; S 2 = 1}) = 0.37; = P (Si = 1; S 2 = 2} n {Si = 2; S 2 = 1}) = 0.30; = P (Si = 2; S 2 = 2) = 0.09. Note that the probability of making a nonsensical decision has been greatly reduced to 0.13 (0.04+0.09) through the introduction of the information obtained from the second coin. Furthermore, it can be verified that MSE{9X) = 1/8; MSE{9X) = 1-02 x 1/16 \u00C2\u00AB 1/16. 1.5. A S I M P L E E X A M P L E 10 Thus, MSE(9l)/MSE(9l) \u00C2\u00AB 0.5. 
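The probabilities and mean squared errors quoted in this example can be checked by exact enumeration. A minimal sketch, assuming as above two tosses of each coin, theta_1 = 0.5, theta_2 = 0.6 and both weights fixed at 1/2:

```python
from itertools import product
from math import comb

# Exact enumeration for the two-coin example: theta1 = 0.5 is the coin of interest,
# theta2 = 0.6 is the related coin, two tosses of each, weights fixed at 1/2.
theta1, theta2, n = 0.5, 0.6, 2

def pmf(s, theta):
    # Binomial(n, theta) probability of s heads
    return comb(n, s) * theta**s * (1 - theta)**(n - s)

dist = {}
mse_mle = mse_wle = 0.0
for s1, s2 in product(range(n + 1), repeat=2):
    p = pmf(s1, theta1) * pmf(s2, theta2)
    mle = s1 / n                              # MLE from the first coin alone
    wle = 0.5 * s1 / n + 0.5 * s2 / n         # equally weighted WLE
    dist[wle] = dist.get(wle, 0.0) + p
    mse_mle += p * (mle - theta1) ** 2
    mse_wle += p * (wle - theta1) ** 2

print(sorted(dist.items()))                  # 0: .04, 1/4: .20, 1/2: .37, 3/4: .30, 1: .09
print(mse_mle, mse_wle, mse_wle / mse_mle)   # 0.125, 0.06375, about 0.51
```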
We see that the M S E of the W L E is only about 50% of that of the M L E . Due to the small sample size of the first experiment, for arbitrary 9X, the prob-ability of making a nonsensical decision is 9\ + (1 \u00E2\u0080\u0094 0i)2 with a minimum value of 50%. By incorporating the relevant information in a very simple way, that proba-bility is greatly reduced. In particular, if the second coin is indeed a fair coin and 9i is arbitrary, the probability of making a nonsensical decision is then reduced to \[9\ + (1 - 0i)2] which is less or equal to 12.5%. We would like to consider the reduction of MSE by using the simple average of the two M L E ' s in this case. Let 0i \u00E2\u0080\u0094 \9X + \92. For arbitrary 9\ and 02 with sample size of 2, we have MSE(9X) = Var(d1) = ^61(l-dl) MSE(9X) = Var(9x) +. Bias(Bx)2 = =\0i(i-el) + l-92(i-92) + -4(e1-92)2. It follows that, for 0i 7^ 0 or 1, MSE(BX) = | 0 i ( l - 0 i ) + | 0 2 ( l - 0 2 ) + f (0 i -0 2 ) 2 1 192(1 - 92) + l(9x - 92)2 4 | 0 i ( l - 0 i ) Assume that 0i, 92 \u00C2\u00A3 [0.35,0.65], it then follows that 0.35*0.65 < ( l -0 i )0 i ; (0i-0 2 ) 2 < 0.09; 02(1-02) < 0.25. 1.6. T H E S C O P E O F T H E T H E S I S 11 We then have ^ < 0.72. MSE(91) ~ Therefore, for a wide range of values of 9X and 92, the simple average of 9\ and 92 will produce at least 28% reduction in the MSE compared with the traditional M L E , 6\. The maximum reduction is achieved if these two coins happen to be of the same type, i.e. 9\ = 92. We remark that the upper bound on the bias in this case is 0.15. Notice that the weights are chosen to be 0.5. However they are not the optimum weights which minimize the MSE of a weighted average of 9\ and B2. The optimum weights will be studied in later chapters. 1.6 The Scope of the Thesis In Chapter 2 we will show that certain linear combinations of the MLE ' s derived from two possibly different normal populations achieves smaller M S E than that of the traditional one sample M L E for finite sample sizes. A criterion for assessing the relevance of two related samples is proposed to control the magnitude of possible bias introduced by the combination of data. Results for two normal populations are shown in this chapter. The weighted likelihood function which uses all the relevant information is formally proposed in Chapter 3. Our weighted likelihood generalizes the R E W L of Hu (1994). Results for exponential families are presented in this chapter. The advantages of using a linear W L E on restricted parameter spaces are demonstrated. A set of optimum weights for the linear W L E is proposed and the non-stochastic limits of the proposed optimum weights when the sample size of the first population goes to infinity are found. Chapter 4 is concerned with the asymptotic properties of the W L E . The weak 1.6. T H E S C O P E O F T H E T H E S I S 12 consistency, strong consistency and asymptotic normality of the W L E are proved when the parameter space is a subset of Rp,p > 1. The asymptotic results proved here differ from those of Hu (1997) because a different asymptotic paradigm is used. Hu's paradigm abstracts that of non-parametric regression and function estimation. There information about 9\ builds up because the number of populations grows with increasingly many in close proximity to that of 6\. This is the paradigm commonly invoked in the context of non-parametric regression but it is not always the most nat-ural one. 
In contrast we postulate a fixed number of populations with an increasingly large number samples from each. Asymptotically, the procedure can rely on just the data from the population of interest alone. The asymptotic properties of the W L E using adaptive weights, i.e. weights determined from the sample, are also established in this chapter. These results offer guidance on the difficult problem of specifying A. In Chapter 5 we address the of choosing the adaptive weights by using the cross-validation criterion. Stone (1974) introduces and studies in detail the cross-validatory choice and assessment of statistical predictions. The K-group estimators in Stone (1974) and Geisser (1975) are closely related to the linear W L E . Breiman and Fried-man (1997) also demonstrate the benefit of using cross-validation to obtain the linear combination of predictions that achieve better estimation in the context of multi-variate regression. Although there are many ways of dividing the entire sample into subsamples such as a random selection of the validation sample, we use the simplest leave-one-out approach in this chapter since the analytic forms ofthe optimum weights are tractable for the linear W L E . The weak consistency and asymptotic normality of the W L E based on cross-validated weights are established in this chapter. We develop a theoretical foundation for the W L E in Chapter 6. Akaike (1985) reviewed the historical development of entropy and discussed the importance of the maximum entropy principle. Hu and Zidek (1997) discovered the connection between 1.6. T H E S C O P E O F T H E T H E S I S 13 relevance weighted likelihood and maximum entropy principle for the discrete case. We shall show that the weighted likelihood function can be derived from the maximum entropy principle for the continuous case. In the context of weighted likelihood estimation, the i.i.d. assumption is no longer valid. Observations from different samples follow different distributions. The saddle-point approximation technique in Daniels (1954) is then generalized for the non i.i.d. case to derive very accurate approximate distributions of the W L E for exponential families in Chapter 7. The saddlepoint approximation for estimating equations pro-posed in Daniels (1983) is also generalized to derive the approximate density of the W L E derived from estimating equations. The last chapter of this thesis applies the W L E approach to disease mapping data. Weekly hospital admission data are analyzed. The data from a particular site and neighboring sites are combined to yield a more reliable estimate to the average weekly hospital admissions compared with the traditional M L E . C h a p t e r 2 M o t i v a t i n g E x a m p l e : N o r m a l Popu la t ions Combining information from disparate sources is a fundamental activity in both sci-entific research and policy decision making. The process of learning, for example, is one of combining information: we are constantly called upon to update our beliefs in the light of new evidence, which may come in various forms. In some cases, the nature of the similarity among different populations is revealed through some geo-metrical structure in the parameter space, e.g. the means of several populations are all points on a circular helix. From the value of relevant variables it might then be possible to obtain a great deal of information about the parameter of primary infer-ential interest. 
Therefore, we should be able to construct a better estimate of the parameter of primary interest by combining data derived from different sources. How might one combine the data in a sensible way? To illustrate some of the fundamental characteristics of our research, it is useful to consider a simple example and examine the inferences that can be made. 14 2.1. A MOTIVATING E X A M P L E 15 2.1 A Motivating Example In this subsection we consider the following simple example: Xi = ati + a, ei~ N(0,(T1), i = l,...,n (2.1) Yi = Bti + e'ii e'i^N(0,a22), i = l,...,n, (2.2) where the { i i } \" = 1 are fixed. The {ei}'s are i.i.d.. So are the {e^j's. While Covfa, e'j) = 0 if i ^ j; Cov(ei,e'i) = po\0~2 for all i, and for the purpose of our demonstration we assume p, o\ and a2 are known although that would rarely be the case in practice. Note that a bivariate normal distribution is not assumed in the above model. In fact, only the marginal distributions are specified; no joint distribution is assumed although we do assume the correlation structure in this case. The parameters, a and 8, are of primary interest. They are thought or expected to be not too different due to the \"similarity\" of the two experiments. The error terms are assumed to be i.i.d. within each sample. The joint distributions of the (Xi,Yi),i = l , . . . , n are not specified. Only the correlations between samples are assumed to be known. The objective is to get reasonably good estimates for the parameters without assuming a functional form of the joint distribution of (Xi, Yi) which may be unknown to the investigator. Assuming marginal normality, the marginal likelihoods for a and 8 are \" (x,~a U)2 L1(x1,x2,...,xn; a) cc T[exp( ' 2 * ), i= i 1 \u00C2\u00B0 l L2{yi,y2,yn; 8) oc ]~[exp(-^1 ). Therefore, ignoring the constants, , r / x ^ ( X j - a tj)2 In U(a) = Y ~ 2 , 2.2. W E I G H T E D L I K E L I H O O D E S T I M A T I O N 16 i r ta\ (Vi-P U)2 In LM = -}Z- 2 2 \u00E2\u0080\u00A2 \u00C2\u00BB=i 2 The MLE ' s based on the Xi and respectively for a and /? are n i = i \u00C2\u00AB = . \u00E2\u0080\u0094 \u00E2\u0080\u0094 > i = l n E*i J/i i = i 2.2 W e i g h t e d L i k e l i h o o d E s t i m a t i o n If we know that \a \u00E2\u0080\u0094 B\ < C, where C is a known constant according to past studies or expert opinions, this information (\"direct information\" or \"prior information\") might be used to yield a better estimate of the parameter. An extremely important aspect of the problem of combining information that we have just described is that we can incorporate the relevant information into the likelihood function by assigning possibly different weights to different samples. We next show how it is done. The weighted likelihood (WL) for inference about a is defined as: WL(a) = L1(x1,x2,...,xn;a)XlL1(y1,y2, ...,yn;a)X2, (2.3) where A i and A 2 are weights selected according to the relevance of the likelihood to which they attached. The non-negativeness of the weights is not assume although the optimum weights are actually non-negative. Note that y 2 , y n ; a) instead of L2((yi, y 2 , y n \ B) is used to define WL(a) since a is of our primary interest at this stage and the marginals distri-butions of the F 's are thought to resemble the marginal distributions of the X ' s . Note that the W L depends on the distributions of the X ' s . But it does not depend 2.2. W E I G H T E D L I K E L I H O O D E S T I M A T I O N 17 on the distribution of Y's. 
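To make the definition concrete, the sketch below evaluates (2.3) on a grid for the model (2.1)-(2.2). The design points, parameter values and weights (lambda_1 = 0.7, lambda_2 = 0.3) are illustrative choices only; sigma_1 = sigma_2 = 1 is treated as known and, purely for simplicity of the illustration, the two error series are drawn independently.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulation sketch for the weighted likelihood (2.3) under model (2.1)-(2.2).
# alpha, beta, the design points t_i and the weights are illustrative only.
n = 20
t = np.linspace(1.0, 2.0, n)
alpha, beta = 1.0, 1.2
lam1, lam2 = 0.7, 0.3                        # lambda_1 + lambda_2 = 1

x = alpha * t + rng.normal(size=n)           # sample relevant to alpha
y = beta * t + rng.normal(size=n)            # related sample

alpha_mle = np.sum(t * x) / np.sum(t ** 2)   # MLE of alpha from the x's
beta_mle = np.sum(t * y) / np.sum(t ** 2)    # MLE of beta from the y's

# log WL(a) = lam1 * log L1(x; a) + lam2 * log L1(y; a), maximized over a grid.
grid = np.linspace(0.0, 3.0, 20001)
res_x = x[:, None] - grid[None, :] * t[:, None]
res_y = y[:, None] - grid[None, :] * t[:, None]
log_wl = -0.5 * (lam1 * np.sum(res_x ** 2, axis=0) + lam2 * np.sum(res_y ** 2, axis=0))
wle = grid[np.argmax(log_wl)]

print(alpha_mle, beta_mle, wle, lam1 * alpha_mle + lam2 * beta_mle)
```

Up to the grid resolution, the maximizer coincides with lambda_1 * alpha-hat + lambda_2 * beta-hat, anticipating the closed form derived next.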
Since X and Y are not independent, the weights of the likelihood functions are designed to reflect the dependence that is not expressed in the marginals. Notice that the joint distribution of the X's and y ' s does not appear in the W L and no assumptions are made about it. The maximum weighted likelihood estimator (WLE) is obtained by maximizing the weighted likelihood function for given weights Ax and A 2. From (2.3) we get In WL(Q;) = Ai In Li{xx, x2, xn; a) + A2 In Ll(y1,y2, ...,yn; a), din W L ( a ) Ai , A2 ^ , \ 1 i=i 1 t=i So the W L E for a is Al \u00E2\u0080\u009E , ^2 n a = \u00E2\u0080\u0094a + \u00E2\u0080\u0094B. A i + A 2 A i + A 2 Without loss of generality, we can write a = Ai a + A2 ft, where Ai + A2 = 1. The W L E is a linear combination of the M L E for a and B, a and ft under the over-simplified model. The weights Ai and A2 reflect the importance of a and J3. In-tuitively, the inequality A2 < Ai should be satisfied because direct sample information from {Xj}\"=1 should be more reliable than relevant sample information from {Fi}\"=1. Obviously, the W L E is the M L E obtained from the marginal distribution if the weight for the second marginal likelihood function is set to be zero. This may happen when evidence suggests that the seemingly relevant sample information is actually totally irrelevant. In that case, we would not want to include that information and thereby accrue too much bias into our estimator. 2.3. A C R I T E R I O N F O R A S S E S S I N G R E L E V A N C E 18 We call the estimator derived from the weighted likelihood function the W L E in line with the terminology found in the literature although the estimator described here differs from others proposed in those published papers. In particular, we work with the problem in which the number of samples are fixed in advance while the authors of the R E W L E were interested in the problem where the number of populations goes to infinity. The distinction here should be clear. 2.3 A Criterion for Assessing Relevance The weighted likelihood estimator is a linear combination of the MLE ' s derived from the likelihood function under the condition of marginal normality. We would like to find a good W L E under our model since there is no guarantee that any W L E will outperform the M L E in the sense of achieving a smaller MSE. Obviously, the W L E will be determined by the weights assigned to the likelihood function and the MLE ' s obtained from the marginals. The value of any estimator will depend on our choice of a loss function. The most commonly used criterion, the mean squared error (MSE), is selected as the measure of the performance of the W L E . The next proposition will give a lower bound for the ratio ^ such that the W L E will outperform the M L E obtained from the marginal distribution. For simplicity, we make the assumption of equal variances, i.e. o\ = o\. Propos i t ion 2.1 Let a be the WLE and a, the MLE of a. If \a \u00E2\u0080\u0094 8\ < C, then E(a - a) = A2 (8 - a) \E(a-a)\ < A 2 C Var(a) < Var(a). 2.3. A CRITERION FOR ASSESSING R E L E V A N C E 19 If p < 1 and A 2 > 0, then Ai C1 S; 2 c 2 4 max MSE(d) < MSE(a) iff -\u00C2\u00B1 > , <*-P\ MSE(a) with equality iff Xi = 1 and A 2 = 0. |a\u00E2\u0080\u00946| MSE(a) = Var(a) . Equality is achieved only when Ai is set to 1 and A 2 to 0. (ii) If p < 1, it follows that Var(a) < Var(a). Furthermore, 2 2 max E(a - a)2 < E(& - a)2 ^2 XL X2 p + X\C2 <2 Xx A 2 . 
\P\u00E2\u0080\u0094a\ ' ?-a| 2(i-P)a2 \u00E2\u0080\u00A2 \u00C2\u00B0 , The optimum weights which achieve the minimum of the MSE in this case will be positive as will be shown in the next section. Thus, we will not consider the case when A 2 < 0. We remark that the reduction in the MSE is independent of the assumption that of equals o\. In general, it can be verified that , if p < 0\jo2, then max MSE(a) < MSE(a) iff ^ > ^ f + i ^ - ^ i ) ( 2 4 ) \P-a\ |a-/J| oo and p < 1. Here we have little information about the upper bound for the distance between the two parameters, or we do not have any prior information at all. 2) Sf \u00E2\u0080\u0094>\u00E2\u0080\u00A2 oo and p < 1. We already have enough data to make inference and the addi-tional relevant data has little value in terms of estimating the parameter of primary interest. 3) af \u00E2\u0080\u0094> 0 and p < 1. This means that the precision ofthe data from the first sample is good enough already. p 0.0 0.2 0.4 0.6 0.8 1.0 Weight Function Figure 2.1: A special case: the solid line is the max\p_a\ y because ^ = K + 1 > y. This implies that with the optimum weights given above the WLE will achieve a smaller MSE than the MLE according to Proposition 2.1 Thus, the optimum weights for estimating a are \* _ I+K 1 2+ftT' A * \u00E2\u0080\u0094 1 A2 \u00E2\u0080\u0094 2+K' The optimum WLE's for a and 8 are a = l\u00C2\u00B1K _L_ R \" 2+K ~ 2+K lJ-> r 2+K r ' 2+K where i f = ^_ p ^ 2 . Note that the optimum WLE for ^ is obtained by the argument of symmetry, o Notice that A2 < \ for all K > 0. This implies that, for the purpose of estimating a, a should never get a weight which is less than that of b if we want to obtain the optimum WLE. This is consistent with our intuition since relevant sample information (the {Fjj's) should never get larger weight than direct sample information (the {-^}'s) when the sample sizes and variances are all equal. It should be noted that the WLE under a marginal normality assumption is a linear combination of the MLE's. Furthermore, the optimum WLE is the best linear combination of MLE's in the sense of achieving the smallest upper bound for MSE compared to other linear combination of MLE's. The optimization procedure is used to obtain the minimax solution in the sense that basically we are minimizing the upper bound of the MSE function over the set {{a, 8) :\a-B\< C}. 2.5. R E S U L T S F O R B I V A R I A T E N O R M A L P O P U L A T I O N S 24 2.5 R e s u l t s for B i v a r i a t e N o r m a l P o p u l a t i o n s This subsection contains some results for bivariate normal distributions. The follow-ing results should be considered as immediate corollaries of Propositions 2.1 and 2.2 established in the previous section. Coro l l a ry 2.1 Let Xl Yi If \Px \u00E2\u0080\u0094 PY\ < C and p < 1, then the optimum WLE's for estimating the marginal means are r, v \u00E2\u0080\u0094 l+M y +- 1 v / 2 2 N Px a po \ po2 o2 l , . . , n . 2+M ^ 1 2+M ^ - 2+M Y + 2+M A ' where M = 2 (l-p) a 2 \u00E2\u0080\u00A2 Furthermore, ^ax\itx-iiY\ Cov(X,Y). n _ Proof: Let ti = 1 in Proposition 2.1. Then Sf \u00E2\u0080\u0094 1 = n. By letting a \u00E2\u0080\u0094 X and b \u00E2\u0080\u0094 Y, we can apply Proposition 2.2 to get the optimum W L E . Therefore the optimum W L E will have smaller MSE and variance than the M L E , X, obtained from the marginal normal distribution. . Observe that fiX + PY = X + Y. If we take Var on both sides, we then have the following Var (fix) + Var fay) + 2 Cov(fiX, fiy) = Var(X) + Var(Y) + 2 Cov(X, Y). 
2.5. RESULTS FOR BIVARIATE N O R M A L POPULATIONS 25 It follows that Cov(p,x,p,Y) > Cov(X,Y). o From Corollary 2.1, we draw the conclusion that the optimum WLE out-performs the MLE when \JJLx \u00E2\u0080\u0094 p,y\ < C is true. Corollary 2.2 Under the conditions of Corollary 2.1, - p fix \u00E2\u0080\u0094 X \u00E2\u0080\u0094> 0 as n \u00E2\u0080\u0094\u00C2\u00BB oo. Proof: Let M = 2 g 2 \u00E2\u0080\u00A2 Thus, M -> oo as n -4 oo. Therefore, -> 0 as n \u00E2\u0080\u0094> co. By Markov's inequality, we have, for all e > 0, P ( \ p x - X \ > e ) = p ( ^ \ X - Y \ > e ^ j < ^ E ^ ~ Y ^ _> 0 n ^ oo.o Corollary 2.3 77ie optimum WLE is strongly consistent under the conditions of Corollary 2.1. Proof: Consider P(\fix-px\>e) = \P(\(jix-X)-{nx-X)\>e) where M = 2 ( f l p \" g 2 as before. But '(^-^>i)-Klf^l>5)^ 5Ts),*(*-p),?<^ where Mi is a constant. Furthermore p 0- Y -\"- i > 2)^\u00E2\u0080\u00941^ ^w = ^ -where M 2 is a constant. Thus, oo P(\jix-Hx\ > e) < oo. n=l 2.5. R E S U L T S F O R B I V A R I A T E N O R M A L P O P U L A T I O N S 26 By the Borel-Cantelli Lemma (Chung 1968), we conclude that fix nx- o It follows from Corollary 2.1 that the optimum W L E is preferable to the M L E . How-ever the relevant information will play a decreasingly important role as the direct sample size increases. Motivated by our previous results, it seems that the linear combination of M L E ' s is a convenient way to combine information if the marginal distributions are assumed to be normal. A more general result is given as follows when the normality condition is relaxed. Theorem 2.1 Let Xi, X 2 , X n be i.i.d. random variables with E(Xi) = a and Var(Xi) = o \ < oo, and let Yi,Y2, ...,Yn be i.i.d. random variables with E(Yi) = 6 and Var(Yi) < oo. Let a and B denote some estimates of a and ft respectively. Assume E(a) = a, E0) = B, and \B \u00E2\u0080\u0094 a\ < C. Let a = Ai a + A 2 $, where = 2+7?' ^ 2 = 2+K a n d K = {i-P)Var{a) \u00E2\u0080\u00A2 ^ s 0 s u P P o s e P \u00E2\u0080\u0094 cor(a,J3) < 1 while Var(a) is assumed known. IfVar0) < Var(a), then \E(a - a)\ < A 2 C and Var(a) < I^ar(o;), max\a-p\ 0 if Var(a) \u00E2\u0080\u0094> 0 as n \u00E2\u0080\u0094>\u00E2\u0080\u00A2 oo. Proof: We consider the variance of weighted estimate d of a: Var(a) = Var{X1a + X2P) = A 2 Var(a) + X22 Var0) + 2 A : A 2 Cov(aJ) < A 2 Var(a) + X22 Var{a) + 2 Xx X2 p ^/Var(a) \]var(B) < X\ Var(a) + X22 Var(&) + 2 A x A 2 Var(a) = Var(a). 2.5. R E S U L T S F O R B I V A R I A T E N O R M A L P O P U L A T I O N S 27 Thus, we have Var(6t) < Var(a). Furthermore, MSE\p-a\ h)-i=l j=l We say that 9X is a maximum weighted likelihood estimator (WLE) for 9\ if 6i = arg sup WL(0i). In many cases the W L E may be obtained by solving the estimating equation: (d/ddi) log WL(di) = 0. Note that the uniqueness of the W L E is not assumed. Throughout this thesis, 9\ is the parameter of primary inferential interest. We will use 9\ and 9\ to denote the W L E and the true value of 9\ in the sequel. 3.2 R e s u l t s for O n e - P a r a m e t e r E x p o n e n t i a l F a m -i l ies Exponential family models play a central role in classical statistical theory for inde-pendent observations. Many models used in statistical practice are from exponential families and they are analytically tractable. Assume that X n , X i 2 , X l n ; are independent random variables which follow the same distribution from the exponential family with one parameter, #i, that is, h(x; 9l) = exp (A^S^) + B^) + Ci{x)). 3.2. 
RESULTS FOR O N E - P A R A M E T E R E X P O N E N T I A L FAMILIES 30 The likelihood function for nx observations which are i.i.d from the above distri-bution family can be written as ^ ( J n , ^ , . . , ^ ; ^ ) = exp iA^J^Sixu) + nx B^,) + Y/C1(xu) ) . V i=i i=i It follows that In L^Xn, X 1 2 , X l n i ; 9i) = Ai(6i) ^ s(xv) + n i B ^ ) + Constant. 3=1 It then follows that i=l m (i'^OTlx1) + B;(0 1)) rn _ 1 where Tlx 1) = ^ E5K). j'=i It is known (Lehmann 1983, p 123) that for the exponential family, the necessary and sufficient condition for an unbiased estimator to achieve the Cramer-Rao lower bound is that there exists D(9i) such that <91n Li(Xn,Xl2, Xini;0i) 001 nlD1(91)(T(x1)-9l), V0x. Theorem 3.1 Assume that, for any given i, Xij %'~\" f(x;9i),j = l,2,...,nj. The WLE of 6\ is a linear combination of the MLE's obtained from the marginals if o9\ i.e., the WLE of 9\ will be the linear combination of the MLE's if the Cramer-Rao lower bound can be achieved by the MLE's derived from the marginals. Proof: Under the condition din Li(xn,Xi2,...,xini;8i) d61 = niD1{e1) (T(^) - 0X), 3.2. R E S U L T S F O R O N E - P A R A M E T E R E X P O N E N T I A L F A M I L I E S 31 it can be seen that T(x}) is the traditional M L E for 9\ which achieves the Cramer-Rao lower bound and is unbiased as well. Then we have din W L 09, =J2X>ni D1{d1)(T(xi)-B1) Thus the W L E is given by m m where U = rii/ ]P Aj/i;. i=i Therefore the W L E of 9\ is a linear combination of the MLE ' s obtained from the marginals. This completes the proof, o Thus, for normal distributions, Bernoulli, exponential and Poisson distributions the W L E is a linear combination of the MLE' s obtained from the marginal distribu-tions. Theorem 3.2 For distributions of the exponential family form, suppose the MLE of 9\ has the form of g(T(x1)) where T(x}) is the sufficient statistic. Then the WLE of ( m \ m 2~2 U\ T{xl) I, where U = n^/ ]T \ni. i=i ) i=i Proof: As seen above dlnLi(si i ,3;i2, ...,xlni\9i) dQx Consequently, the W L E satisfies = n i (A'MTix^ + B ' M ) . ^Xim (X (0 1 )T(^) + B ' M ) =o i=i which implies that A[(9i) (f>A, T ^ ) ) + B ' M = 0 ,i=i m where *i = n*/ Ajnj. i=i 3.3. W L E O N R E S T R I C T E D P A R A M E T E R S P A C E S 32 Therefore the W L E of 9\ takes the form ( m 5 > A , T V ) i=i m where U = rii/ E ^ \" i - This completes the proof, o. Therefore the W L E has the same functional form as the M L E obtained from the marginals if we confine our attention to the one-parameter exponential family. The only modification made by the W L E is to use the linear combination of sufficient statistics from the two samples instead of using the sufficient statistic from a single sample. The advantage of doing this is that it may be a better estimator in terms of variance and MSE. 3.3 W L E O n R e s t r i c t e d P a r a m e t e r Spaces The estimation of parameters in restricted parameter spaces is an old and difficult problem. A n overview of the history and some recent developments can be found in van Eeden (1996). van Eeden and Zidek (1998) consider the problem of combin-ing sample information in estimating ordered normal means, van Eeden and Zidek (2001) also consider estimating one of two normal means when their difference is bounded. We are concerned with combining the sample information when a number of populations in question are known to be related. Often, prior to gathering of the current data in a given investigation, relevant information in some form is available from past studies or expert opinions. 
A l l statis-ticians must make use of such information in their analysis although they might differ in the degree of combining information. A linear combination of the estimates from each population is straightforward to use. In this section, we assume that the W L E is a linear combination of the individual M L E . We remark that the results of this sec-3.3. W L E ON RESTRICTED P A R A M E T E R SPACES 33 tion hold for the general case where the new estimator is a linear combination of the estimator derived from each sample. We need the following Lemma on optimization to prove the major theorem of this section. We emphasize that we do not require the Aj's to be non-negative. n Lemma 3.1 Let A = (Ai, A 2 , A m ) ' and Aj = 1. Let A be a symmetric i = i invertible m x m matrix. The weight function, X, which minimizes the objective function, A*^4A, is given by the following formula: A* = - ^ i -provided llA 1 > 0. Proof: Using the Lagrange method for maximizing an objective function under con-straints, we only need to maximize G = XlA X - c(l*A - 1), m subject to 2~2 Xi = 1. i = l Differentiating the function G gives Setting | f = 0, it follows that ^ = 2AX-cl. oX X = -A~ll. 2 Now 1 = 1* A = - l ^ - 1 ! . So c = 2 2 l M - U ' Therefore, VA-1!' 3.3. W L E ON RESTRICTED P A R A M E T E R SPACES 34 Since XtA A is a quadratic function of A;, therefore, A A A has its global minimum at the stationary point. This completes the proof.o As before, we would assume that the parameters are not too far apart. The following theorem will show that some benefit could be gained if we take advantage of such information, namely that, sufficient precision may be gained at the expense of bias so as to reduce the MSE. However maximizing that gain may entail negative weights. Theorem 3.3 Assume that Xij, i \u00E2\u0080\u0094 1 , 2 , m , j = 1, 2 , n ; are random variables with E(Xij) = Qi, and m is the the number of related data sources and rii is th>e sample size for each source. The marginal distributions are assumed to be known. The joint distribution across those related sources is not assumed. Instead, the covariance structure of the joint distribution is assumed to be known. Furthermore, Q\,Q2, ...,Qm are all finite and \9i \u00E2\u0080\u0094 6i\ < Ci where Ci is a known constant. Let 0 = (Q\,Q2, \u00E2\u0080\u00A2\u00E2\u0080\u00A2\u00E2\u0080\u00A2\u00E2\u0080\u00A2,9m)t where Qi \u00E2\u0080\u0094 Qi(xn,Xi2, Xini) is the MLE for the parameter Qi derived the distribution of the X^. Suppose E(9i) = Qi and V = cov(0) are known and (V + BB1)'1 is inveriible. Then the minimax linear WLE for Qi is: 0~i = ^ ^ ' i = l where: A* = (Aj, A j , A ^ J , s t (V + B B ^ l V(V + BB1)-1!' V = cov(d)mxm; B = (0, C2, C 3 , C m ) * . Proof: We are seeking the best linear combination of these MLE's derived from the marginal distributions. As before, let us consider the MSE of the WLE. 3.3. W L E O N R E S T R I C T E D P A R A M E T E R S P A C E S 35 it Writing #1 = A 9 = J2 Xi 9iy we can calculate i=i MSE(9~i) = E[Y^ X & - Or)}2 (since E A, = 1) i=i = \u00C2\u00A3 E E xx3(el-el)(93-el)\ i=l j=l m m = E E *i *3 - 0i) & - Qi)} i=l j=l m m = E E A ' Xi E\.& -\u00C2\u00B0r + ei- Oi)0j ~ 8j + Oj - Oi)) i = l j=l m m = E E X i Xi (cov0u Oj) + (di - 9X)(93- - 9,)) i=l j=l m m = E E A' Xi cov(9i,0j) + E E A* Xi & - dMdi ~ 0i) i=l j=l - i=l j=l < A'cou(0)A + \ t B B t \ (by the assumptions) = ' \'(V + BB*)\. Let A = V + BBt. 
By applying Lemma 3.1, we conclude that the optimal linear WLE is: m Ol = E Xi ^' i=l where A' = (A;, A S , X m Y = ^ B ^ V o ' It can be seen that the max MSE function is a quadratic form in Ai ,A2 , . . .A m and involves only the first and the second moments of the marginal distributions and the joint distributions. The whole procedure consists of two stages. The first step is to work out the functional form of the M S E . The second step is to work out the optimum weights to construct the optimum WLE. Note that the optimum weights are functions of the matrices V and B. Let's consider some special cases: 3.3. W L E O N R E S T R I C T E D P A R A M E T E R S P A C E S 36 (i) B = 0 and V = a21. This implies that all the data can be pooled together in such a way that equal weights should be attached to the M L E derived from each marginal distribution since now we have 6i = 92 = ... = 9m. That is, A* = ^ for all i for i = 1 , 2 , m . This is consistent with our intuition for the i.i.d. normal case. (ii) B = 0 and V = diag(o\, a 2 , a 2 ) . The optimum W L E is then given by: 5 if 0 1 + ^ \u00C2\u00B0 2 + \"\" + A \u00C2\u00B0 m 01 = m \u00E2\u0080\u00A2 k=l k Note that most current pooling methods estimate the population parameter 6 is a weighted average of the estimates obtained from each data set: m 9= i = 1 m i = l * provided that the Var(9i) = crt2 are known. The optimal weights under the assumption are proportional to the precision in the ith data set, that is, to the inverse of the variance. This is a simple example of a weighted least squares estimator. Therefore the optimum W L E coincides with the weighted least squares estimator under the assumption B = 0 and V. = diag(a2, a 2 , a 2 ) -Furthermore, let us apply Theorem 3.3 to our motivating example since it is also a special case of the theorem. To be consistent with the notation used in Section 2, we still use a and b to denote the parameters of interest. We need to work out the matrix V and BB1 before applying the theorem since both are assumed to be known. As before, we have O\"2 2 2 Var(a) = Var(b) = ^ a n d cov(a, b) = 3.3. W L E O N R E S T R I C T E D P A R A M E T E R S P A C E S 37 To write those in a matrix form, we have V = cov{{a,bY) As for the matrix BB1, we have BB1 = J_ / \u00C2\u00B0 2 pa2 St \ pa2 o2 0 0 0 c2 Adding the above two matrices together gives V + BB* = S2t o po* pa2 o2 + C2Sf Therefore, \u00E2\u0080\u00A2 {V + BB1)-1 -i - i S2 s2t o* po* po2 o2 + C2S2 o2 + C2S2 -po2 W + BBH -po* o2 + C2S2 a 4 - p 2 a 4 + C2S2to2 -po* -po* o2 Thus, we have (V + BB1)'1! = S2 o 4 - p2oA + C2S2o2 a 4 - p2o4 + C2S2o2 Furthermore, si o2 + C2S2 -po2 | ^ 1 1 2 2 \u00E2\u0080\u0094poz O* (1 - p)o2 + C2S2 (1 - p)o2 l t ( K + B B t ) ' H = o^-p2o^C2S2o2 ^ - ^ + ^ 3.4. LIMITS OF O P T I M U M WEIGHTS 38 Therefore the optimum weight function, A* = (A*, A^)', is where K = c2s? (l-p)a2+C2S2 \ 2(1-p) 0. LetV = cov{0), where 0 = (\u00C2\u00A7u 0 2 , B m y . Re-write V = o\W, where the of is a function ofni andW is a function ofn,, n2, ...nm. Assume lim o~2(ni) = 0 and lim W(ni,n2, ...,nm)~l = W0~l. n\\u00E2\u0080\u0094too ni\u00E2\u0080\u0094>oo Then where S\u00E2\u0080\u009E {V + B B t y 1 ! JmxmJ lim V^* lim = WrT1 -WQ1BBtWg Proof: From the matrix identity (Rao 1965), 3.4. L I M I T S O F O P T I M U M W E I G H T S 39 it follows that Therefore, lim A* -1 ^W^BB^W'1 1 + ^ B W - ' B \W-lBBlW w-1 - -1 - l = w -1 1 + \BW~XB W^BB'W'1 al + BlW-lB' = lim (a2W + BB1)-1! nX^oo V(afW + BB*)-n Q-jjo-fW + BB1)-1! n i ^ o o al!l(a2W -rBB^-1! 
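A small numerical check of this two-population limit, using the weight formula of Lemma 3.1 with A = V + BB^t: we take cov of the two MLE's to be (sigma^2/n_1) V_0 with the AR(1) correlation above and equal sample sizes, C = 1, and hypothetical values sigma^2 = 1 and rho = 0.5 (with t_i identically 1, so that S_t^2 = n_1); the computed weights move toward (1, 0) as n_1 grows, in agreement with Proposition 2.2.

```python
import numpy as np

# Minimax weights lambda* = A^{-1} 1 / (1' A^{-1} 1), with A = V + B B' (Lemma 3.1),
# for two populations with cov = (sigma^2 / n1) * [[1, rho], [rho, 1]] and B = (0, C)'.
# sigma^2, rho and C are hypothetical values chosen only for illustration.
sigma2, rho, C = 1.0, 0.5, 1.0
B = np.array([0.0, C])

for n1 in (2, 10, 100, 1000):
    V = (sigma2 / n1) * np.array([[1.0, rho], [rho, 1.0]])
    A = V + np.outer(B, B)
    w = np.linalg.solve(A, np.ones(2))       # A^{-1} 1
    lam = w / w.sum()
    print(n1, lam)                           # weights approach (1, 0)
```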
fw~l \u00E2\u0080\u0094 WzlMB^Wzl\ i \ V V a2+BtW-iB ) 1 = lim m-yoo -, t (xxr-i _ W~1BBtW-1 \ A \ V V al+BW-lB ) (W-l _ W 0 ^ B B ^ \ V 0 B'W^B ) where Sn _1 Wg1BBtW^ BtWg^B t ( w i _ w0->BBtw-i\ V 0 BW^B ) 1 LJmxmJ-l -. This completes the proof, o Coro l la ry 3.1 Under the conditions of Theorem 3.4, assume that V = Diag(a\, o f , a , m If a2(ni) \u00E2\u0080\u0094> 0 and a2(ni)/a2(ni) \u00E2\u0080\u0094> ji > 0, for i > 2, as nx \u00E2\u0080\u0094)\u00E2\u0080\u00A2 oo and ]C 7i > 0, then i=2 lim A* = (1,0,0,...,())'. 3.4. LIMITS OF O P T I M U M WEIGHTS 40 Proof: Consider the following Next we will consider the case where the covariance matrix is not a diagonal matrix but has a special form. Suppose that cov(6) = ^-V0 where / 1 P P 1 P' P \u00E2\u0080\u009Em\u00E2\u0080\u00941 \ \u00E2\u0080\u009E m - 2 y pm\u00E2\u0080\u0094l pm~2 pm~^ (3.2) where /) / 0 and p2 < 1. This kind of covariance structure is found in first order autoregressive model with lage 1 effect, namely AR(1) model, in time series analysis. If the goal of inference is to predict the average response at the next time point given observations from current and a fixed number of previous time points, then this type of covariance structure is of our interest. By Proposition 2.2, the optimum weights will go to (1,0) if m = 2. This is because K = ,x_ ? ^ / n x goes to infinity. As a result, the optimum weights A = (^f-, 2+K) ~~^ (1>0)-3.4. LIMITS OF O P T I M U M WEIGHTS 41 A 2 Corollary 3.2 For m > 2, if cov(0) = ^-Vb, where V0 takes the form as in (3.2), then u m A I L , P + D 0 - p 2 ' A ) - p 2 ' \" \" ' D 0 - p * ' D 0 - p * J ' where D0 = 1 + (m - 2)(1 - p)2. Proof: It can be verified that / i n n n \ WV1 = l l - p2 1 -p 0 0 -p 1 + p2 -p 0 0 0 0 0 0 0 -p 1 + p2 -p It follows that B1WQ11 B ^ ^ B C ( - P , i - p + p2, (i - p ) 2 , ( i - p)2, i - Py, 1 - p 2 1 - p 2 We then have Do-P (1 - P 2 ) A) (-p, 1 - p + p2, (1 - p ) 2 , ( 1 - p)2,1 - pf. 3.4. LIMITS OF O P T I M U M WEIGHTS 42 Therefore, SI 1 - p ( 1 - pf ( 1 - pf ( 1 - pf 1 -- p ( Do-P (1 - P2)D0 \ i-p + pz (1 - Pf 1 D0 -p + (1 - pf 1-p P ( I - P + P 2 ) p{i-p)2 p(i-P)2 pii-py* V 1- p2 \ 0' ' D0 D0 D0 D0 It can be verified that lim 1*51 =-^-^(1 \u00E2\u0080\u0094 j^). This completes the proof, o Note that the limits of A2 and A^ take different form than that of A 3 , A ^ _ x . The reason is that B = (0, C, C,C) ignores the first row or column of W 0 _ 1 when multiplied with it and the last row of W0~l is different than the other rows in that matrix. Positive Weights If the estimators are uncorrelated, that is, p = 0, then we show below that asymp-totically all weights are positive. Furthermore, all the weights are always positive in this case. Theorem 3.5 As before, let V be the covariance matrix of (9\, 92, \u00E2\u0080\u00A2 9m) and let B = (0, C, C, ...,Cy. Furthermore, assume that V = diag(o\,o\, ...,ofJ, where the of, i = 1 , 2 , m , are known. Then A* > 0 for all i = 1 , 2 , m and for all n. Proof: Observe that V~l = diag ( - \ , - \ - ) . The matrix (V + BB*)-1 can be written as (V + BB')-1 = V-1 -V ' l + o. J_ + _ J _ V - L 3=2 3 Also, for 2 < i < m, I I A : = * l + \ > o. This completes the proof, o Negative Weights The above theorem gives conditions that insure positive weights. However, weights will not always be non-negative. In Corollary 3.2 , we have lim A * = ( i , -f>D\u00C2\u00B0 + P(l~P + P2), PJ1 ~ Pf ? . Pi1 - Pf Pi1 ~ P) V ? 
\" l ^ o o y ' DQ- p2 D0 - p2 ' '\"' DQ - p2 ' DQ - p2 J where p2 < 1 and D 0 = 1 + (m \u00E2\u0080\u0094 2)(1 \u00E2\u0080\u0094 p) 2. It follows that D0 > I provided m > 2. Hence the sign of those asymptotic weights taking the form ^1^,2 are m determined by the sign of p. Notice that hm A* = 1 and lim Ai = 1*. Hence m lim A2 = \u00E2\u0080\u0094 2^2 bm A*. Therefore, we have the following. Til\u00E2\u0080\u0094S-OO j _ g n i \u00E2\u0080\u0094 \u00E2\u0080\u00A2 o o 1) If p > 0, then lim A* > 0, i = 3 , m , and A2 < 0. J l l \u00E2\u0080\u0094 \u00C2\u00BB o o 2) If p < 0, then lim A* < 0, i = 3,...,m, and A2 > 0. A general result on negative weights is shown below. Theorem 3.6 Assume B = (0, C, C , C ) and m > 3. Lei kFo\"1 = (aij)mxm. Let m 2 e i = YI aij- Assume a u \u00E2\u0080\u0094 -^r\u00E2\u0080\u0094 > 0. Then the asymptotic weight, lim A*, is negative 3=2 h if ^ < for some i > 1. 6 1 E et 3.4. LIMITS OF O P T I M U M WEIGHTS 45 Proof: By Theorem 3.4, we have (V + BB*)-1! S m x m l lim A* = lim , \u00E2\u0080\u0094 n ^ o o n^oo V(V + BB*)-1! l * 5 m x m l where 5 m x m = W0 1 - \u00C2\u00B0 B t i y - i B \u00C2\u00B0 \u00E2\u0080\u00A2 We then have W0 lB = ( C ^ a i i , C ^ a 2 i , . . . , C ^ a m i ) i = C ( e i , e 2 , e m ) * . . J=2 j=2 j=2 Therefore, W Q - ^ B ' W Q - 1 , B~*wFrB = ( a n + e i , a i 2 + e 2 , . . . , a m i + e m ) ' C ( e i , e 2 , \u00E2\u0080\u00A2 \u00E2\u0080\u00A2 . , e m ) f C(e i , e2 , - , e m ) (0,C,C,...,C)C(e1,e2,...,emy C 2 ( e i , e 2, e m ) ' ( e i , e 2, \u00E2\u0080\u00A2 e m ) . m i=2 ^ a n + ei ^ ai2 + e2 ( m i=2 e ie 2 e 2e! e 2 e2e m y c m e i C77je2 ... e m y V 1 / 3.4. L I M I T S O F O P T I M U M W E I G H T S 46 It follows that 9 1 \u00E2\u0080\u0094 / o n + ei ^ ai2 + e 2 \ am X + em) J ( m E e i i=2 ei Ee* i=l \u00E2\u0080\u00A2 m e 2 E e i i=i \ a n + ei a 2 i + e 2 a m i + ei(ei+E e; j=2 m i = 2 m e2(ei+ E ev) i=2 m E e, i = 2 (e i+E e 0 i=2 m i=2 I 6 m J^J I \ 1 = 1 / = (an e\ e2 ei e m e : t , a 2 i - \u00E2\u0080\u0094 \u00E2\u0080\u0094 , a m i \u00E2\u0080\u0094 ) . m Ee* i=2 i=2 i=2 Note that W0 is symmetric which implies that the matrix W0 1 is also symmetric. Thus, we have m m 3=2 3=2 By the assumption of the theorem, we have l ^ m x m l \u00E2\u0080\u0094 \u00C2\u00B0 1 1 m^~~ > 0, E e i i=2 Therefore, lim A* < 0 if an < ^r1- for any i such that i > 2. o ni->oo ti i = 2 We can construct a simple example where the non-asymptotic weights can actually be negative. Suppose that information from three populations is available and one single observation is obtained from each population. Further, we assume that the three random variables, X\, X 2 , and X3, say, follow a multivariate normal distribution 3.4. LIMITS OF O P T I M U M WEIGHTS 47 with covariance matrix as follows: / 1.0 0.7 0.3 V = I 0.7 1.0 0.7 0.3 0.7 1.0 Also, assume that C2 = C 3 = 1. Thus B = (0,1,1)'. It follows that V \ V + BB1 = ( 1.0 0.7 0.3 0.7 1.0 0.7 0.3 0.7 1.0 Hence, approximately (V + BB1)'1 = \ ( + 0 0 0 0 1.0 1.0 0 1.0 1.0 ^ ( 1.0 0.7 0.3 X 0.7 2.0 1.7 0.3 1.7 2.0 ( 1.67 -1.34 0.89 ^ -1.34 2.88 -2.24 0.89 -2.24 2.27 We then have and (V + BB) 1 = (1.22, -0.71, 0.92)*, 1\V + BB1)'1! = 1.43 It follows that A* = (V + BB*)-1! (0.85,-0.49,0.64). V(V + BB')-1! Thus, A 2 is negative in this example. The negative weights in this example might be due to the collinearity. If we replace 0.7 by 0.3 in the covariance matrix, all the weights will be positive. 
Chapter 4

Asymptotic Properties of the WLE

Throughout this chapter, theta_1 is the parameter of primary inferential interest, although in their extension of the REWL, Hu and Zidek (1997) consider simultaneous inference for all the theta's. Recall that for fixed X = x, the weighted likelihood (WL) is defined as:

WL(x, \theta_1) = \prod_{i=1}^{m} \prod_{j=1}^{n_i} f_1(x_{ij}; \theta_1)^{\lambda_i},

where \lambda = (\lambda_1, \lambda_2, ..., \lambda_m)^t is the "weight vector". It follows that

\log WL(x, \theta_1) = \sum_{i=1}^{m} \sum_{j=1}^{n_i} \lambda_i \log f_1(x_{ij}; \theta_1).

We say that \hat{\theta}_1 is a maximum weighted likelihood estimator (WLE) for theta_1 if \hat{\theta}_1 = \arg\sup WL(x, \theta_1). We will use \theta_1^0 to denote the true value of theta_1 in the sequel.

The asymptotic results proved here differ from those of Hu (1997) because a different asymptotic paradigm is used. Hu's paradigm abstracts that of non-parametric regression and function estimation. There, information about theta_1 builds up because the number of populations grows, with increasingly many in close proximity to that of theta_1. This is the paradigm commonly invoked in the context of non-parametric regression, but it is not always the most natural one. In contrast, we postulate a fixed number of populations with an increasingly large number of samples from each. Asymptotically, the procedure can rely on just the data from the population of interest alone. These results offer guidance on the difficult problem of specifying \lambda.

We also consider the general version of the adaptively weighted likelihood in which the weights are allowed to depend on the data. Such a likelihood arises naturally when the responses are measured on a sequence of independent draws on discrete random variables. In that case the likelihood factors into powers of the common probability mass function at successive discrete points in the sample space. (The multinomial likelihood arises in precisely this way, for example.) The factors in the likelihood may well depend on a vector of parameters deemed to be approximately fixed during the sampling period. The sample itself now "self-weights" the likelihood factors according to their degree of relevance in estimating the unknown parameter.

In Section 4.1, we present our extension of the classical large sample theory for the maximum likelihood estimator. Both consistency and asymptotic normality of the WLE for fixed weights are shown under appropriate assumptions. The weights may be "adaptive", that is, allowed to depend on the data. In Section 4.2 we present the asymptotic properties of the WLE using adaptive weights.

4.1 Asymptotic Results for the WLE

In this section, we establish the consistency and asymptotic normality of the WLE under appropriate conditions.

4.1.1 Weak Consistency

Consistency is a minimal requirement for any good estimate of the parameter of interest. In this and the next sub-section, we will give general conditions that ensure the consistency of the WLE's. Our analysis concerns sigma-finite probability spaces (X, F, \mu_i), i = 1, 2, under suitable regularity conditions. We assume that the probability measures \mu_i are absolutely continuous with respect to one another; that is, suppose there exists no set (event) E \in F for which \mu_i(E) = 0 and \mu_j(E) \neq 0 for i \neq j. Let \nu be a measure that dominates the \mu_i, i = 1, 2, for example (\mu_1 + \mu_2)/2. By the Radon-Nikodym theorem (Royden 1988, p.
276), there exist measurable functions fi(x), i = 1,2, called probability densities, unique up to sets of (probability) measure zero in v, 0 < fi(x) < oo (a.e. v),i = 1,2, ...,m such that p.i{E)= j fi(x)du(x), i = l,2. for all \u00C2\u00A3 6 ^ . Define the Kullback-Leibler information number as: K(fllf2) = E 1 ^ l o g ^ ^ = J logfj^h{x)du{x). In this expression, log(fi(X)/f2(X)) is defined as +oo if fi(x) > 0 and f2(x) = 0, so the expectation could be +oo. Although log(fi(x)/f2(x)) is defined as \u00E2\u0080\u0094 oo when fi{x) = 0 and f2(x) > 0, the integrand, log(fi(x)/f2{x))fi(x) is defined as zero in this case. The next lemma gives well known result. L e m m a 4.1 (Shannon-Kolmogorov Information Inequality) Let f\(x) and f2(x) be densities with respect to v. Then K(f1,f2) = E1 (log^^j = j logj&hWM*) > 0, with equality if and only if f\(x) = f2(x) (a.e. v). 4.1. A S Y M P T O T I C RESULTS FOR T H E W L E 51 Proof: (See for example, Ferguson 1996, p. 113). Let 9\ denote the true value of 9i and 9\u00C2\u00B0 \u00E2\u0080\u0094 (9\, B 2 , 9 m ) , for ^ e 9 , i = 2,3, ...,m. Throughout this chapter, the following assumptions are assumed to hold except where otherwise stated. Assumption 4.1 The parameter space O is compact and separable. Assumption 4.2 For each i = l,..,m, assume {Xij : j = 1,2,rij.} are i.i.d. random variables having common probability density functions with respect to v. Assumption 4.3 Assume fi(x,9\) = fi(x,9[)(a.e.) v) implies that 9\ = 9[ for any 6\, 9[ G \u00C2\u00A9 and the densities fi(x, 9) have the same support for all 9 G O. Assumption 4.4 For any 8\ G 0 and for any open set O C 0 , assume sup \ l o g ( h 0 ? i n f IM/i(x; 90x)/h(x; 6>x))| sup \log(h{x^)/h{x; d,))\ inf |/op(/i(a;;0f)//i(a;;0i))| are each measurable in x and 112 ^ [ s u p i i o g y y ; 1 ; ! ] ' 0 is a constant independent of 9{, i = 1, 2,.., m. Assumption 4.5 Let n = (m, n2,nm). Assume A ( n ) = (A x n ), A 2 n ) , A ^ ) ' satisfies \ W ^ W = (w1,W2,...,Wm)t=(l,0,...,0)t while max nl max ItUj \u00E2\u0080\u0094 A - ^ l 2 < O (n)~6) as m \u00E2\u0080\u0094> oo, l 0. Assumption 4.5 will be satisfied if nif i = 2 , m are in the same order of nx and also \wi-\f]\ = O (n^)/ 2 ) . 4.1. A S Y M P T O T I C RESULTS FOR T H E W L E 52 In this chapter, we require the density functions to be upper semi-continuous. Let |.|| be defined as Euclidean norm; that is W 2 = CE\u00C2\u00B0Z)1,2> 1=1 for any x = ( x i , ^2, . . . j X g ) ' . Definition 4.1 A real-valued function, g(0), defined on the parameter space, Q, is , said to be upper semi-continuous on O, if for all 9 G \u00C2\u00A9 and for any sequence 6n G Q such that lim \\9n \u00E2\u0080\u0094 9\ \ = 0, we have g(9) > lim s u p A function is called lower semi-continuous if g(9) < liminfg(#n) whenever lim \\9n \u00E2\u0080\u0094 9\ \ = 0. We need to show that sup U(x] 9i) = 1\u00C2\u00B09JJ0^ for some open set O is measurable if fi(x; 9i) is upper semi-continuous. Proposition 4.1 If U(x; 9,) is lower semi-continuous in 9\ for all x, then sup U(x \\6i-e0\\ 0, there exist 91(e) such that \9l(e) - 9\u00C2\u00B0\ < R and U(x,9{(e)) >s-e/2. (4.3) Since D is a dense subset of {9\ : \6\\u00E2\u0080\u00949\\ < i?}, then there exist a sequence 9^(6) G D such that lim 9^\e) = 91(e). Since U(x;9i) is lower semi-continuous, then, for fixed n\u00E2\u0080\u0094foo e and some S > 0, there exist 0** G D such that \u00E2\u0080\u0094 0JI < 5 and [7(x; 0J) - e/2 < {/(a; 0**)- (4.4) 4.1. 
A S Y M P T O T I C R E S U L T S F O R T H E W L E 53 Thus, combining equation (4.3) and (4.4), it follows that for any given e > 0, there exist 9{* e D such that U(x\9\*) > s - e. Equation (4.2) is then established. We then have {x : sup U(x; #i) < a} |0i-0?| = n?=1{x : U(x; 9?) < a, 6? \u00C2\u00A3 D}. Since {x : U(x;9i) < a} is measurable. Therefore the set {x : sup U(x; 9\u00C2\u00B1) < a} \ei-e\u00C2\u00B0\ 0}, then the Assumption 4.4 is automatically satisfied because loq^'6A is lower semi-continuous and 3 /i(x;0i) (0 sup 9i-e\u00C2\u00B0\ oo. by Assumption 4.5. It then follows that sup 8ie& \u00C2\u00B1\u00C2\u00A3i>-A, (\"'M\u00C2\u00AB < ^ \u00C2\u00A3 \u00C2\u00A3 h ->3,0m, 9{e&,i = 2,3,m. Proof: With Pgo measure 1, m rii m nt | W T T T T , , ^ s if and only if where Observe that i=l j=l i = l j=l W\u00E2\u0080\u009E,(X,9 1)>0. n i ~ l ~ { ji\Xj,Vi ^n),nJl{Xj3\9l) y n l ~ l ~ { Jl{Xj,Vl) 1 / / i ( ^ i j , # i ) o / a N By equation (4.5) we have 1 -Sni (X, 9\) 1 ~ Peot H i for any #i \u00E2\u0082\u00AC 0 and any 92, 9 5 , 9 m By the weak law of large numbers, i , / ipfij-;f l?) h{x^e\) n i / i ( ^ i j ; 0 i ) / I ( A I J - ; 0 I ) 4.1. A S Y M P T O T I C RESULTS FOR T H E W L E 56 when 6\ ^ 9\ by Lemma 4.1 and Assumption 4.3. Therefore lim P0O (Wni > 0) = 1 for all 6X ^ 0?, 92,03,9m. o ni->oo For any open set O, let Zi3(0) = m^log^^3'6^. We are now in a position to prove the weak consistency of the WLE. Theorem 4.2 Suppose that logf\(x;9) is upper semi-continuous in 6 for all x. As-sume that for every 9\ ^ 9\ there is an open set Ng1 such that 9\ \u00E2\u0082\u00AC Ngx C O. Then for any sequence of maximum weighted likelihood estimates 0(\u00E2\u0084\u00A2^ of'9\, and for all e > 0, lim Pgo ( | |0 i n i ) - 0 \u00C2\u00B0 | | > e) =0, ni\u00E2\u0080\u0094>oo \ / for any 92,93, ...,9m,9i e Q,i = 2,3, ...,m. Proof: The proof of this theorem given below resembles the proof of weak consistency ofthe MLE in Schervish (1995). For each 9\ ^ 9\, let NJfi\k = 1, 2,... be a sequence of open balls centered at 0i and of radius at most l/k such that for all k, N: (*+i) C Ng^ c e. It follows that Hfcli NJ,? = { i^}- Thus, for fixed Xij, Zij(Ng?) increases with k and therefore has a limit as k \u00E2\u0080\u0094>\u00E2\u0080\u00A2 oo. Note that loq{l^,9A is lower semi-continuous in 0i for each x. So, lim ZIJW) > log J The limit in the last expression is not required to be finite. Observe that E = Ea inf log fi(Xij-,9^) fi(Xij',9[) < EgO SUP 1 e[ee log fi{Xif, 9\) f\(Xij;9[) < oo, 4.1. A S Y M P T O T I C RESULTS FOR T H E W L E 57 by Assumption 4.4. This implies that Z\j(Ng') are integrable. Using the monotone convergence theorem, we then have lim EeoZ^N^) = E* lim Z l j ( < ) ) > E*(logfffi''\u00C2\u00AE\) > 0. Thus, we can choose k* = k*(0i) so that EgoZij(N^k ^) > 0. Let be the interior of 'NJfi ^ for each Q\ G 0. Let e > 0 and N0 be the open ball of radius e around d\. Now, 0 \ A^o is a compact set since 0 is compact. Also, {JV\u00C2\u00A3:0 i\"ee \JV o } is an open cover of 0 \ A^. Therefore, there exist a finite sub-cover, A 7^, Ng*2, NQP such that EeoZ^iN*,) > 0, I = 1, 2, ...,p. We then have ^ ( l l * i B , ) - 0 i l l > e ) = Peo (flf0 e for some ;=i ;=i V 1 i= i j = i / p / 1 m n; _. m = E - E E \" M N k ) + - EE( A * ( n ) - ^ )^- (^) < o 1=1 \ 1 i=l j=l 1 i = l j = l P / -. ni ^ m rij \ \u00E2\u0080\u00A2 = E ^ - E ^ ( ^ ) + - E E ^ n ) - \u00C2\u00AB*) w . ) < o . 
(=1 \ 1 j=l 1 i = l j=l / If we show the last expression goes to zero as n\ goes to infinity, then P*> ( i f r 0 ~ei\\ >e) \u00E2\u0080\u0094>0 as m'^oo.. 4.1. A S Y M P T O T I C RESULTS FOR T H E W L E 58 Since EgoZ^N*,)2 < E9i (sup llog^f^]l) < K < oo by Assumption 4.4, it 1 ' / follows from Lemma 4.2 that 1 m Hi Also, n-i 1 i=i j=i 1 n i p - ZXj(N*0[) -A EgoZXj(N*g[) > 0, for any 6[ e 9 \ N0, by the Weak Law of Large Numbers and the construction of NZ. Thus, for any 6[eO\N0, - X ^ ( A r ; , ) + - ^ ^ ( A S n ) - ^ ) ^ ( i V ; , ) < 0 \u00E2\u0080\u0094>0 as m ' - x x ) . 1 J = l 1 i = l j = 1 / This implies that P / n i ^ m rii \ E P*\u00C2\u00B0 - E ^ Wx) + - E E ( A ^ - ^)^(^) < 0 U 0 a a m 4 o o . 1=1 \ 1 j = l 1 i=\ j=l J Thus the assertion follows, o In the next theorem we drop Assumption 4.1 which assumes the compactness of the parameter space and replace it with a slightly different condition. At the same time we keep Assumption 4.2-4.5. Theorem 4.3 Suppose logfi(x;9) is upper semi-continuous in 9 for all x. Assume that for every 9X ^ 9X there is an open set Ngx such that 9X e Ng1 C O . In addition, assume that there is a compact subset C of Q such that 9X G C and 0 < Ego I inf l o g J ; ) ^ a [ < K G < oo, (4.6) \e[\u00C2\u00A3ccn@ fx(Xi:j-9'x) J where K c is a constant independent of 92, \u00E2\u0080\u00A2\u00E2\u0080\u00A2\u00E2\u0080\u00A2,9m. Then for any sequence of maximum weighted likelihood estimates of 9X and for all e > 0 lim P*>(| |0i B l )-0 1 | |>e) =0, ni\u00E2\u0080\u0094>oo \ / 4.1. A S Y M P T O T I C RESULTS FOR T H E W L E 59 for any 9 2 , 9 3 , 9 m , 9{eQ,i = 2 , 3 , r a . Proof: Let and e be as in the proof of Theorem 4.2, and let N*i,N*2,N*p be an open cover of C \ NQ with EgoZij(N^) > 0. Then P* (\\%ni) - %\\ > *) P < Y,pe\u00C2\u00B0 ( ^ i n i ) e Nk) + (^ n i ) e cc n e) *=i k=l (1 _ n l _ 1 _ m _ \ It follows from the proof of Theorem 4.2 that the first term of last expression goes to zero as n goes to infinity. By the Weak Law of Large Numbers, we have I E *\u00C2\u00AB 0, by equation (4.6). m I \ P 0 If we show that ^ ^ 2~^ (X \u00E2\u0080\u0094 Wi)Zij(Cc fl 6) \u00E2\u0080\u0094 0 , then the second expression n i i=ij=i goes to zero. Consequently, the result of the theorem will follow. Observe that m rii ^ E E ( A S n ) - ^ ) % ( G C n 0 ) = E ^ ( A i n ) - ^ E ^ n e ) . n i i=l i=\ i=l U l \u00E2\u0080\u00A2 U i j=l By the Weak Law of Large Numbers, it follows that inf^l\u00E2\u0082\u00ACccne l\u00C2\u00B0Q f\x3--9i)) IS a ^n^e number by the condition of this theorem. By Assumption 4.5, it follows that Tl' ( \ \u00E2\u0080\u0094 {\r - wt) \u00E2\u0080\u0094> 0, as m ->\u00E2\u0080\u00A2 oo. (4.8) n\ 4.1. A S Y M P T O T I C RESULTS FOR T H E W L E 60 Combining equation (4.7) and (4.8), we then have , m rii 1 1=1 j=l 4.1.2 Asymptotic Normality To obtain asymptotic normality of the WLE, more restrictive conditions are needed. In particular, some conditions will be imposed upon the first and second derivative of the likelihood function. For each fixed n, there may be many solutions to the likelihood equation even if the WLE is unique. However, as will be seen in the next theorem, there generally exist a sequence of solutions of this equation that are asymptotically normal. Assume that 0i is a vector defined in RP with p, a positive integer, i. e. 0i = (011, 012, \u00E2\u0080\u00A2 0\P) and the true value of the parameter is 0\u00C2\u00B0 = (0 n , 0 \u00C2\u00B0 2 , 6 \ p ) . 
Write ip(x; 6i) = \u00E2\u0080\u0094\u00E2\u0080\u0094logfi(x; 0i), a p column vector, (701 and tp =\u00E2\u0080\u0094ip(x;9x), a p by p matrix. Then, for any j, the Fisher Information matrix is defined as Assuming that the partial derivatives can be passed under the integral sign in f fi(x;6x)di/(x) = 1, we find that, for any j,, E^{Xlj^) = J ^^^jf^x^duix)^ J ^f1(x;e\u00C2\u00B01)du(x)=0, (4.9) so that I(0i) is in fact the covariance matrix of tp, I(e\u00C2\u00B01) = coveoiJj(Xlj;601). 4.1. A S Y M P T O T I C RESULTS FOR T H E W L E 61 If the second partial derivatives with respect to 9\ can also be passed under the integral sign, then f(d2/d92)fi(x; 9\)dv(x) = 0, and I(91) = -Eeoi;(Xl;90l). To simplify the notation, let WLni(x;91) = \u00E2\u0080\u0094logWL(x;81) and WLni(x; 0\u00C2\u00B0) = \u00E2\u0080\u0094logWL(x; 0i)|\u00E2\u0080\u009E1=flo In the next theorem we assume that the parameter space is an open subset of RP. Theorem 4.4 Suppose: (1) for almost all x the first and second partial derivatives of fi(x;9) with respect to 9 exist, are continuous in 9 \u00C2\u00A3 Q, and may be passed through the integral sign in ff1(x;9)du{x) = l; (2) there exist three functions G\(x), G^O )^ and Gz(x) such that for all 9<1,---,9m, Ego\Gi(Xij)\2 < Ki < oo,l = 1,2,3,, i = l , . . . ,m , and in some neighborhood of 9\ each component of ip(x) (respectively ip(x)) \u00C2\u00ABs bounded in absolute value by G\(x) (respectively G2(x)) uniformly in 9\ G 6. Further, &logh{x;e,) d9ikld9ik2d9ik^ ki,k,2,kz = 1, ..,p, is bounded by Gs(x) uniformly in 9\ G 0; (3) I(9\) is positive definite. Then there exists a sequence of roots 9^^ of the weighted likelihood equation that is weakly consistent and ^ ( f ' - ^ A i V ^ ^ ) ) - 1 ) ) , as m->oo. Proof: 1. Existence of consistent roots.The proof of existence of consistent roots resembles the proof in Lehmann (1983, p 430-432). Let d be small enough so that 4.1. A S Y M P T O T I C RESULTS FOR T H E W L E 62 Sa = {9l : ||0! - 0\u00C2\u00B0|| < a} C 6 and let 4(a) = {x : log WL(x; 0?) > log WL(x, 0?) for all boundary points 9\ of Sa} = {x : log WL(x; 0\u00C2\u00B0) > sup log WL(x, 0*)}. e^eSa The set In(a) is measurable since log WL(x.; 9\) is measurable and sup 06 e 5 a logWL(x, 9\) is measurable by Proposition 4.1. We will show that Peo(X. 6 7n(a)) \u00E2\u0080\u0094>\u00E2\u0080\u00A2 1 for all sufficiently small enough a. That is, for any given e, there exist N\u00E2\u0082\u00AC such that, for any n > Ne, we have Peo(X. e /n(a)) > 1 \u00E2\u0080\u0094 e. This implies that In(a) is not an empty set when n > Ne. To prove the claim, we expand the log weighted likelihood on the boundary of Sa about the true value 9\ and divide it by to find \u00E2\u0080\u0094log WL(x; 9\) - \u00E2\u0080\u0094log WL(x; 0?) = Si + S2 + S3 ni rii where K l = l 2 P P 2ni 1 Jfel=l*2=1 j v v p \" ( E E E ( ^ i ~ 01*1) (0i*2 - 0?*2)(0i*3 - 0?*3) 6ni *1=1 *2 = 1 *3 = 1 i=i j=i and 4 W = E E A (^dlogfiixij-^i) 39 i*i 10i=0?> i=i j=i = 1 j = 1 l 8 , = , t ' 4.1. A S Y M P T O T I C RESULTS FOR T H E W L E 63 and |Cfcifc2fc3(a;ij)l < 1 by assumption. By the Weak Law of Large Numbers and (4.9) \u00E2\u0080\u0094 2, \u00E2\u0080\u0094 k = e ? \u00E2\u0080\u0094 ^ \u00C2\u00B0 as ru-too, (4.10) 1 ^ d2logfl(Xlj;9l) pe0 q 0 \u00E2\u0080\u0094 Z_^ M M k=\u00C2\u00AB? \u00E2\u0080\u0094>\"V^i) flsm->oo (4.11) where Ikik2{6i) is the (fci,fc2) element of the information matrix 1(9\u00C2\u00AE). By Lemma 4.2, we then have 1 v~>v>/,(n) sdlogfi{Xij\6\). pe0 , . 
\u00E2\u0080\u0094 2^ Z J A ' ~ Wi> 7ti~^ l*i=\u00C2\u00AB? ~ ^ 0 a S n i ^ \u00C2\u00B0 \u00C2\u00B0 - ( 4 1 2 ) 3 2 ; - L D a 1 - ^ ) - I ^ - - I M ^ 0 m - > \u00C2\u00AB > \u00E2\u0080\u00A2 . ( 4 - 1 3 ) i=i j=\ ^ m rii ^ E ( A ! n ) - w ' ) W ^ 0 flsni^\u00C2\u00B0\u00C2\u00B0' (4-14) n i i=i j=i To prove that the maximum difference -^log WL(x; 9\) \u00E2\u0080\u0094 ^log WL(x; 9\u00C2\u00AE) over all the boundary points 9\ of Sa is negative with Peo probability tending to 1 for sufficiently small a, we will show that, with Pgo probability tending to 1, the maximum of S2 for all the boundary points 9\ of Sa is negative while and I S 3 I are small compared to |5 2 | . We begin with S\. Observe that \u00E2\u0080\u0094Akl{x) = \u00E2\u0080\u0094 2^ M\u00E2\u0080\u0094\u00E2\u0080\u00941*1=*? + zr.l^Z^(xi ~Wi>\u00E2\u0080\u0094m k=*?-j=i 1 1=1 j=i By (4.10) and (4.12), it then follows that, for any 92, \u00E2\u0080\u00A2\u00E2\u0080\u00A2\u00E2\u0080\u00A2,9m, - 4 ( X ) - ^ 0 as 00. (4.15) Further, for any boundary point #J such that \u00E2\u0080\u0094 = a, we have i S i i < ^ E k w x ) i . Bi =1 4.1. A S Y M P T O T I C RESULTS FOR T H E W L E 64 For any given a, it follows from (4.15) that with Pgo probability tending to 1 - V \Akl(X)\ \u00E2\u0080\u0094\u00E2\u0084\u00A2\u00E2\u0080\u0094\u00E2\u0084\u00A2 e1=ey + Jfcifc2lyi) ni ni ^ oViklc>Vlk2 1 (4.18) + ^ E 2 > - \u00C2\u00AB 0 l\u00C2\u00ABirff-i= i j = i By (4.11) and (4.13), it then follows that, for any ki and k2, \u00E2\u0080\u0094Bklk2 + IklkM) ^ 0 as m oo. (4.19) Thus, for any boundary points such that \\9\ \u00E2\u0080\u0094 9\u00C2\u00B0\\ = a, we have I E X>* i \" -\u00E2\u0080\u00A2*?*,) I < A 2 - (4.20) A;i=lfc 2 = l By equation (4.19) and (4.20), it follows that, for given a v v . (0ik, \u00E2\u0080\u0094 ) ( ni I E X > ^ - d k ) ( ^ B ^ + W * i ))(*!* - 0l2)\ < p2a3 (4.21) \u00E2\u0080\u00A2 fe1=ifc2=i with Pgo probability tending to 1. 4.1. A S Y M P T O T I C RESULTS FOR T H E W L E 65 Let us examine the first term in (4.18). Since 1(8\u00C2\u00AE) is a symmetric and positive definite matrix, there exits a matrix B p x p such that BlB = Diag(l, 1 , 1 ) the identity matrix and \u00E2\u0080\u0094I(8\) = BtDiag(5i, 5 2 , 5 P ) B where Si < 0,i = 1,2, ...,m. Let \u00C2\u00A3 = (6,6, ...,\u00C2\u00A3\u00E2\u0080\u009E)* = B(8bn - 8\u00C2\u00B0n,db12 - 0\u00C2\u00B02, ..,0>p - 0\u00C2\u00B0,)'. Then we have ||\u00C2\u00A3|| 2 = ||0} - 0 \u00C2\u00B0 | | 2 = a2. It follows that V V rn - E E o & i - w t f x c - *W = E<^2 ^ 5 V (4-22) fc1=lfc2 = l (=1 where S* \u00E2\u0080\u0094 max{5i, S2,Sp} < 0. We see that there exist a0 such that 5* +p2ao < 0. Combining (4.21) and (4.22), it follows that with Peo probability tending to 1 there exists c > 0 such that for a < a0 S2 < \u00E2\u0080\u0094ca2. (4.23) Note that lOtii^fca^)! < 1- Thus for 53 we only consider m m Tlx 1 i=i j=i - . r i i m rii = - E G 3 ( * y ) + - E E(A* ( n ) - \" O ^ O y ) . J'=l i = l 3=1 By the Weak Law of Large Numbers with probability tending to 1, m Tii ^ 1 J = l <2(i + #3) (4.24) where we use the inequality \EZ\ < E\Z\ < 1 + E\Z\2 for any random variable Z. By (4.14), it follows that with Peo probability tending to 1 1 m nt ' J i . _ i 1 =i i = i (4.25) Hence by (4.24) and (4.25) , for any given a and'0f such that ||0j - 0?|| = a, \S3\<^(1 + K3)a\ (4.26) 4.1. A S Y M P T O T I C RESULTS FOR T H E W L E 66 Finally combining (4.17), (4.23) and (4.26), we then have, with Peo probability tending to 1, for a < ao, max(51 + S2 + S3) < -ca2 + (p + ~(1 + Kz))a? 
e\\u00E2\u0082\u00ACSa 2 which is less than zero if a < c/[p + ^(1 + K3)). This completes the proof of our claim that for any sufficiently small a the Pgo probability tends to 1 that max log WL(x; 0f) < log WL{x, 0\u00C2\u00B0). e'leSa For any such a and x G /71(a), it follows that there exists at least one point 9^ with ||0(ni^ \u00E2\u0080\u0094 9\\\ < a at which WL(x; 9\) has a local maximum, -^-WLn.(x; 0i)| _s(ni; 0. Since Pgo (X G J\u00E2\u0080\u009E(a)) \u00E2\u0080\u0094> 1 as n x \u00E2\u0080\u0094> 00 for all sufficiently small a, it then fol-lows that for such fixed a > 0, there exists a sequence of roots 9^{x\a) such that Peo ( l l^V) - 0 \u00C2\u00B0 | | < a) -\u00C2\u00BB 1. It remains to show that we can determine such a sequence of roots, which does not depend on a. Let 0 ^ be the closest root to 9\ among all the roots to the likelihood equation for every fixed n\. This closest root always exists. The reason for this can be seen as follows. If there is a finite number of roots within the closed ball ||0i \u00E2\u0080\u0094 Q\u00C2\u00B1\\ < a, we can always find such a root. If there are infinitely many roots in that sphere which is compact, then there exists a convergent sequence of roots inside the sphere 0~f^ such that lim ||0^ \u00E2\u0080\u0094 0\u00C2\u00B0|| = inf ||0i \u00E2\u0080\u0094 0i||, where Vni is the set of all the roots to the likelihood equation. Then the closest root exists since the limit of this sequence of roots 0^ is again a root by the assumed continuity of the ^WLni(91). Thus 9{xlY does not depend on a and Peo (||0ini)* - 0?|| < a) -> 1. ' 2. Asymptotic Normality. Expand -^-log WL(x; 0x)as \u00C2\u00BB1 m rii WLni(x-91) = WLni(x;901)+ / ^ ^ A l { n V ( ^ ; 0 ? + i(01-0?))a't(01-0?), J o i=i j=i 4.1. A S Y M P T O T I C RESULTS FOR T H E W L E 67 m rii , > where WLni(x;9\) = EE A^VO^?)-i=ij=i Now let #i = #~[ni\ where 9^ is any weakly consistent sequence of roots satisfying WLni(x; (j^) = 0, and divide by y/n[ to get: l\u00E2\u0080\u0094WLni{x-X) = B n i V ^T(0S n i ) - 0?), (4.27) where Note that Bni 1 r1 _m. \u00E2\u0080\u009E- / \u00C2\u00A3 \u00C2\u00A3 A * n ) ^ ; e ?+*$ n i ) -m rii m nt WLni(x;9\u00C2\u00B0) = E \u00C2\u00A3 w ^ ( X ^ ) + \u00C2\u00A3 \u00C2\u00A3 ( \ ( n ) - \u00C2\u00AB ; 0 ^ \u00C2\u00AB ; O i = l j = l i = l _;'=1 j = l i = l j=l By (4.27), it follows that ^ *tl lit IL-i V 1 3=1 V 1 i = i j=i From the Central Limit Theorem, because Eeoip(Xij-,9\u00C2\u00B0) = 0 and coveoil>(Xij]9l) = 1(9\u00C2\u00B0), we find that l \"* V n i i = 1 If we show -\u00C2\u00B1= \u00C2\u00A3 E(A-n) - w^X^ 9\u00C2\u00B0) % 0 and Bni ^ 1(9\u00C2\u00B0), then by the i=lj=l multivariate version of Slutsky's theorem (see for example, Sen and Singer (1993), p. 130.) we have \u00C2\u00B1=(\u00C2\u00A7\u00E2\u0084\u00A2 - 91) = B~l^=WLlni A my^Z* ~ JV(0,/(6I?)- 1) 77.1 V * 1 ! 4.1. A S Y M P T O T I C RESULTS FOR T H E W L E 68 Now we prove i=ij=i Let Kx = E E(A! - wOV'^ ijI^ i)-1=1 J'=l We then have (-1=1114,11 > e ) < g - E E E E i A . ( \" > - \u00C2\u00AB ' . i i ^ ) - ' \u00C2\u00BB - i ^ * ' i = l i ' = l j = l j ' = l < 0 ( J \u00E2\u0080\u0094>\u00E2\u0080\u00A2 0, as \u00E2\u0080\u0094>\u00E2\u0080\u00A2 co, \\u00E2\u0084\u00A2i/ by hypothesis (2) of this theorem. (ii) Bni \u00E2\u0080\u0094 ^ 1(9\u00C2\u00AE) as n\ \u00E2\u0080\u0094oo. Let Bni = < + BH, where i / - l \u00C2\u00AB i < = --/ /^m^Oi + t ^ - e ^ d t , 1 -70 j=i 711 7 0 i=l j=l First, we prove B!ni 1(9\u00C2\u00AE) as n\ \u00E2\u0080\u0094> oo. Let Sp = {9i : ||0i - 9\\\ < p}. 
Note that Eeoip(Xij,0i) is continuous in 0], by condition (1), so there is a p > 0 such that < P=> lEeo^Xij-A) + T(9\u00C2\u00B0)\< e. (4.28) For any t \u00E2\u0082\u00AC TZ such that 0 < t < 1, then l l ^ ^ - ^ I K P ^ I I ^ + ^ i\"0-^) - 0?|| 1. We then have Peo(\Ego7p(X1f,e0 + t(9^l)-9\u00C2\u00B0l)) + I(e0Y Nr => sup\\u00E2\u0080\u0094f]TJ>{Xij,el)-E9oi>(Xu,6i)\ A 7/ => ||0[ni^ \u00E2\u0080\u0094 0j|| < p, then 'nx-W)\ < / -^(xir,91+t(9(r]-^))+m) dt j \u00C2\u00B0 n i j=i \u00E2\u0080\u00A2 . = / - \u00C2\u00A3 ^ fe: *? + - 0?)) - Egoip (Xir, 9\u00C2\u00B0 + t(9^ - 9 Jo 1 n i - = 1 v 7 v +Eeo^(Xlj; 0? + i(0i n i ) - 0\u00C2\u00B0)) + 7(0?)left r 1 / 1 n i < / S U p -^(X^ej-Ego^Xij-A) JO V S i G S p \"1 ^7 * W 0? + i(0~!m) - 0?)) + 7(0?) )dt + sup 0 oo by equation (4.30), (4.31) and (4.32). p Secondly, we prove B\u00E2\u0084\u00A2 \u00E2\u0080\u0094e-> Q as rii \u00E2\u0080\u0094> oo. By Lemma 4.2, every component of Bj/ goes to 0 in probability. Thus \B\"\ < ^ - E E h - t ] m x l 3 - 0 ? + 1 ( 9 ^ ] - 0?)) dt 0 as ni oo. This completes the proof, o 4.1. A S Y M P T O T I C RESULTS FOR T H E W L E 70 R E M A R K : If there is a unique root of the weighted likelihood equation for every n, as in many applications, this sequence of roots will be consistent and asymptotically normal. Small et. al. (2000) discuss the multiple root problems in estimation and propose various methods for selecting among the roots if the solution to the estimating equation is not unique. 4.1.3 Strong Consistency We prove strong consistency of the WLE in this subsection. Recall that \u00C2\u00B1Sni (X, 8X) = 1 \u00C2\u00A3 f > - t W j ^ ^ y To prove the strong consistency, we prove the following lemma: Lemma 4.3 Under Assumptions 4-1- 4-5 sup \u00E2\u0080\u0094Sni(x,0i) (4.33) for any d2, 03, ...,8m, 8{ G 6, i = 2, 3 , m . Proof: By Lemma 4.2, we then have / i ( * \u00C2\u00AB j , g ? ) / i ( * y , \u00C2\u00AB i ) ' It follows by the Borel-Cantelli Lemma that, m rii 1 1=1 7 = 1 0, a.s. [Poo]. By the definition of Aij, it follows that 4.1. A S Y M P T O T I C R E S U L T S F O R T H E W L E 71 Therefore, we have sup \u00E2\u0080\u0094Sni(x;0i) \u00E2\u0080\u0094)\u00E2\u0080\u00A2 0, a.s. [Peo] for any 0j ^ 0\u00C2\u00B0, 02, 0 3 , 0 m , 0, G 6, i = 1, 2 , m . o Theorem 4.5 Suppose: (1) Q is compact; (2) logfi(x; 0) is upper semi-continuous in 0 for all x; (3) there exists a function K(x) such that Ego\K(X\)\ < oo and /og^ 1 ^, '^ < K{x), for all x and 0 \u00E2\u0082\u00AC O; Then for any sequence of maximum weighted likelihood estimates 9^ of 9\, 0\u00E2\u0084\u00A2 \u00E2\u0080\u0094\u00E2\u0080\u00A2 0? a.s. [P,?] / o r any 0 2 , 0 3 , 9 m , 0, G O, i = 2, 3 , r a . Proof: Let 9\ be the parameter of interest and let 1 . , 1 Wni = \u00E2\u0080\u0094logWL(91) -\u00E2\u0080\u0094logWL(9li) .. ni m = - E E A-( W / I ( A ^ - ; 00 - '\u00C2\u00B0s/i(*y; 0?)) U l j=l i=l 1 n l 1 = -Ec /(^'^) + -5\"1(x'^) 3=1 ^hereU(Xlj,91) = l o g ^ ^ ) . Let p > 0 and I> = (0i G 0 : ||0i - 0\u00C2\u00B0|| > p). Then D is compact by condition (1). We then have, (c.f. Ferguson 1996, p. 109) Peo (limsup sup \u00E2\u0080\u0094 V r / p f y ^ ) < sup p(0:) 1=1, (4.34) y ni->oo 8xeD ni ~( dieD J where p(0i) = / log^^h{x;9\)du{x) < 0 for 0i G D by Lemma 4.1. 4 .1 . A S Y M P T O T I C R E S U L T S F O R T H E W L E 72 Furthermore, p,(9i) is upper semi-continuous since Hid,) > f \imsuplogf)^ei ]f^e^duix) > limsup / l 0 g f - ^ p ^ U x - 9 \u00C2\u00AE ) d u { x ) . Hence p(9i) achieves its maximum value on D. 
Let 5 = sup p(9i); then 5 < 0 by Lemma 4.1 and Assumption 4.3. Then by Lemma 4.3, with Pdo measure 1, there exists an Ni such that, for all nx > Ni, 1 sup 0ieD ni Sni(X.,9i) < -5/2. Observe that (1 7 1 1 1 \ 1 n i - E ^ i ^ i ) + -5 r i l (X )e 1) < sup - J2u(Xlj;el)+suV n l ~ [ n l J B x t D U x ^ dleD Sni (X, #1) ni It follows that, with Peo measure 1, there exists an ./V such that for all ni > N, sup 6i&D But, for all ni > AT , sup ( \u00E2\u0080\u0094 jhu{Xlj;91) + \u00E2\u0080\u0094Sni(XA)) <5/2 0. * i p 7 ni 0iee \n\ ^ \" i J nsince 1 n i ni J2U(Xl];9\u00C2\u00B01) + -Sni(X;90) = 0 j=i U l This implies that the W L E , 0~jni) e Dc for m > N; that is | | ^ n i ) - 0X\\ < p. Since p is arbitrary, the theorem follows, o . The proof of the above theorem resembles the famous proof given by Wald (1949) which established the strong consistency of M L E except that we have to deal with the extra term, ^Sni(X.,8i). Again, a slightly different condition is required if 6 is not compact. 4.1. A S Y M P T O T I C R E S U L T S F O R T H E W L E 73 Theorem 4.6 Suppose: (1) there is a compact subset C of 0 such that 9\ e C and Ego sup log J 0 < 0; 0],eccne J i v A i j ; y i J f,2) i/iere exzsi a function K(x) such that Eeo\K(X)\ < oo and loff^jgoj < K(x), for all x and 9 \u00E2\u0082\u00AC C; (3) logfi(x; 9) is upper semi-continuous in 9 for all x; Then for any sequence of maximum weighted likelihood estimates 9^^ of 9\, fifO _ > a.s. [Pgo] for any 92,93, ...,0m,9i \u00E2\u0082\u00AC Q,i = 2,3, ...,m. Proof: Let D = (9i : \\9i - 9\u00C2\u00B0\\ > p) as in the proof of Theorem 4.5 such that C n D \u00C2\u00B1 . It follows that C fl D is also compact. It follows from the proof of Theorem 4.5 that, with PQO measure 1, there exits an Ni such that, for all n x > Ni, sup (-JTu(Xir,9i) + -Sni(X,9i))<5/2<0, where U(X^ 0X) = Also, with P e o measure 1, there exits an N2 such that, for all n i > N2, 1 . W l . \u00E2\u0080\u0094 V sup t/(Xi , - ;0i) < 5 (4.35) by the Strong Law of Large Numbers and the fact that EgoeCcnQU(Xij; 9{) < 0. As in the proof of Lemma 4.3, it can be shown that \u00E2\u0080\u0094 sup S n i(X;0i) \u00E2\u0080\u0094\u00E2\u0080\u00A2(), a.s. [Pgo]. ni 0iec cne 4.2. A S Y M P T O T I C PROPERTIES OF A D A P T I V E WEIGHTS 74 It follows that, with Pgo measure 1, there exist an A^, such that, for all n\ > N3, 1 n i 1 \u00E2\u0080\u0094 V sup U(Xlj;91) + \u00E2\u0080\u0094 sup 5 n i (X;0 x ) < 5/2 < 0. ni~~^eieccne 0ieccne It implies that, with Pgo measure 1, for all rii > N3, 1 n i 1 sup \u00E2\u0080\u0094J\"U(Xlj;91)+ sup \u00E2\u0080\u0094 S n i(X;0i) < 0. (4.36) flieone ~ ^ e i\u00E2\u0082\u00ACC c ne \u00E2\u0084\u00A2i Therefore, it follows that, with Pgo measure 1, there exist an N* = max(N2, N3), such that for all n > N*, sup 0i es \\u00E2\u0080\u0094f]u(xlj;e1) + \u00E2\u0080\u0094sni(x-,e1)) 0. m j^t n i 0iee \ni nx J since the sum is equal to 0 for 9\ = 9\. This implies that the WLE, 0\ G Dc for rii > N*; that is, H ^ \" 1 ^ ~ 9X\\ < p. Since p is arbitrary, the theorem follows, o . 4.2 Asymptotic Properties of Adaptive Weights At the practical level, we might want to let the relevant weight vector to be a function of the data. This section is concerned with the asymptotic properties of the WLE using adaptive weights. Assumption 4.1-4.4 are assumed to hold in this section. 4.2.1 Weak Consistency and Asymptotic Normality In this subsection, we adopt the following additional condition: Assumption 4.6 (Weak Convergence Condition). Assume: (i) lim ^- < oo, for i = 1,2, ...,m; 4.2. 
A S Y M P T O T I C PROPERTIES OF A D A P T I V E WEIGHTS 75 (ii) the adaptive relevant weight vector A^n^(X) = (A[n^(X), A 2 \" ^ ( X ) , A m ^ ( X ) ) * sat-isfies, for any e > 0, A^ (X) \u00E2\u0080\u0094> Wi, as n\ \u00E2\u0080\u0094> oo, where (wi,w2, wmy = (1,0,0)*. Let ^ ( X , W = J r g g ( A } \" , ( X ) - \u00C2\u00AB , ) ^ ^ g g . We then have the following lemma. Lemma 4.4 If the adaptive relevance weight vector satisfies Assumption 4-6, then (X, 0X) % 0, as nx -\u00C2\u00BB\u00E2\u0080\u00A2 oo for any 92,93, ...,0m,0 i G 0,z = 2,3, ...,m. Proof: Let Tj = fl lo9ff\\f-fe% for i = 1, 2 , m . Then 1 i m ^ = ^ ( n x ) - , ) r , m 1 By the weak law of large numbers, for any i = 1, 2 , m , ru nt ^ fi [Xij; 0!) / i (A%-; 0X) It then follows that, for any i = 1, 2 , m ni V / n,: for any #2,#3, 9m,^ 6 0,J = 2 , 3 , m , by Assumption 4.6. o We then have the following theorems: 4.2. A S Y M P T O T I C PROPERTIES OF A D A P T I V E WEIGHTS 76 Theorem 4.7 For each 9\, the true value of 6i, and each 9i ^ 9\u00C2\u00AE i> ( m rii m ni \ nn/i(^^?)A , (\" ) ( x ) > nnA ( ^ i \u00C2\u00AB i ) t ) ( x ) = i . t=lj=l i=lj=l J for any 92, 9 3 , 9 m , 9{ \u00E2\u0082\u00AC 6, i = 2, 3 , m . Theorem 4.8 Suppose that the conditions of Theorem 4-2 are satisfied. Then for any sequence of maximum weighted likelihood estimates 9^ of 9\ constructed with adaptive weights X\ (X), and for all e > 0, for any 92, 9 3 , 9 m , Qi e O, % = 2, 3 , m . Theorem 4.9 Suppose that the conditions of Theorem 4-3 are satisfied. Then for any sequence of maximum weighted likelihood estimates 9^^ of 9\ constructed with adaptive weights Aj(X), and for all e > 0 for any 92, 93,9m, 9iG&,i = 2, 3 , m . We remark that the proofs of Theorem 4.7 - 4.9 are identical to the proofs of Theorem 4.1 - 4.3 except that the fixed weights are replaced by adaptive weights and the utilization of Lemma 4.2 is replaced everywhere by Lemma 4.4. We are now in a position to establish the asymptotic normality for the WLE constructed by adaptive weights. We assume that the parameter space is an open subset of RP. Theorem 4.10 (Multi-dimensional) Suppose that the conditions of Theorem 4-4 a r e satisfied. Then there exists a sequence of roots of the weighted likelihood function based on adaptive weights 9^ that is weakly consistent and lim P e o ( | | ^ l ) - ^ | | > e ) = 0 , as ni \u00E2\u0080\u0094> oo. 4.2. A S Y M P T O T I C P R O P E R T I E S O F A D A P T I V E W E I G H T S 77 4.2.2 Strong Consistency by Using Adaptive Weights To establish the strong consistency of the WLE constructed by the adaptive weights, we need a condition that is stronger than Assumption 4.6. We hence assume the following condition: Assumpt ion 4.7 (Strong Convergence Condition) Assume that: (i) lim ?i < oo, for i = 1,2, ...,m; ni->-oo U 1 (ii) the adaptive relevant weight vector \W(X) = ( A ( 1 N ) ( X ) , A 2 N ) ( X ) , A \u00C2\u00A3 } ( X ) ) * sat-where ( w 1 , w 2 , w m ) ' = (1, 0 ,0)* . L e m m a 4.5 If the adaptive relevance weight vector satisfies Assumption 4-7, then isfies AJB)(X) wt, a.s. [P0O], 0, a.s. [Pgo], for any 92,83, ...,9m,.0i eO,i = 2,3,..., m. By the Strong Law of Large Numbers, for any i = 1, 2 , m , where Eg0An = Eg0 sup log 0i ee /i(*u;0i) < oo. This implies that, for any i = 1, 2 , m , 0, a.s. [Peo] by Assumption 4.7. Since 4.3. E X A M P L E S . 78 it then follows that sup 0 i G 0 0, a.s. [Peo]. This completes the proof.o We then have the following theorems: Theorem 4.11 Suppose the conditions of Theorem 4-5 are satisfied. 
Then for any sequence of maximum weighted likelihood estimates 9^^ of 6\ constructed by adaptive weights A^(X), for any 92,93, ...,9m,9i G Q,i = 2,3, ...,m. Theorem 4.12 Suppose the conditions of Theorem 4-6 are satisfied. Then for any sequence of maximum weighted likelihood estimates 9^^ of 9\ constructed by adaptive weights Aj(X), \u00E2\u0080\u00A2 \u00C2\u00A7M el a.s. [Pgo], for any 92,93,...,9m,9i e Q,i = 2,3,...,m. 4.3 E x a m p l e s . In this section we demonstrate the use of our theory in some examples. 4.3.1 Estimating a Univariate normal Mean. Suppose Xij are independent random variables that follow a normal distribution with mean 9i and variance 1. Assume 0 = (\u00E2\u0080\u009400,00) and C = [\u00E2\u0080\u0094M,M]. We need to verify the condition that, for 9\ G C, 0 < Ego ^inf f l ' i 6 Cc n 0 ^^/iffi'a'1) ) \u00E2\u0080\u0094 Kc < 00 for some constants M and Kc. 4.3. E X A M P L E S . 79 We then have It then follows that -\{x-e\f if \ X \ > M , -\{x - 9lf + \{x - M ) 2 if 0 M M 1 2 1 = / \" + - M)2) - ^ e X P \" { X ~ 9 i ) 2 / 2 d X ' 0 0 / [~(x - 0?)2 + \{x + M ) 2 ) - i ^ c x p - ^ ) 8 ^ . 73i = \u00E2\u0080\u0094M The first term ^ goes to zero as M goes to infinity. It can be verified that hi + hi = M2 + o(M2). It then follows that there exist M 0 > 0 such that Iu + hi + hi > 0 for M > MQ. If we choose K C = 2M02, it then follows that, for i = 1,2, ...,m,j = 1, 2, ...,Tli, /i(-Xij-;0?) \. 2 0 < \u00C2\u00A30o inf logJ;\'J\)[ ) <2MZ < oo, /or a\u00C2\u00AB M > M0. 4.3.2 Restricted Normal Means. A simple but important example is presented in this subsection. That problem is treated by van Eeden and Zidek (2001). Let Xn, Xini be i.i.d. normal random variables each with mean 9\ and variance a2. We now introduce a second random sample drawn independently of the first one from a second population: X2\, ...,X2n2, 4.3. E X A M P L E S . 80 i.i.d. normal random variables each with mean 62 and variance a2. Population 1 is of inferential interest while Population 2 is the relevant population. However, 102 \u00E2\u0080\u0094 I < C for a known constant C > 0. Assumptions 4.2 and 4.3 are obviously satisfied for this example. The condition (4.6) in Theorem 4.3 is satisfied as shown in the previous example. If we show that Assumption 4.5 is also satisfied, then all the conditions assumed will be satisfied for this example. To verify the final assumption, an explicit expression for the weight vector is needed. Let mXi. = xij, i = l,...,m,V = Cov{{Xl.,X2)t) and B = (0, C)f. It follows that It can be shown that the \"optimum\" WLE in this case, the one that minimizes the maximum MSE over the restricted parameter space, takes the following form \" l \u00E2\u0080\u0094 ^ I ^ - I . ~r /\ 2 ^V2.) where (A;,A;)* (V + BB1)-1! l^V + BB^-n' We find that It follows that 1\V + BBl)-ll 1 1 u2/ni + a2/n2 + C Thus, we have A; = a2/n2+C 4.3. E X A M P L E S . 81 Finally K = 1 \u00E2\u0080\u0094 ' \a* n2 ni/ Estimators of this type are considered by van Eeden and Zidek (2000). It follows that |A| n i^ \u00E2\u0080\u0094 Wi\ = O(^), i \u00E2\u0080\u0094 1, 2. If we have n2 = 0(n2~s), then Assump-tion 4.5 will b e satisfied. Therefore, we do not require that the two sample sizes approach to infinity at the same rate for this example in order to obtain consistency and asymptotic normality. The sample size of the relevant sample might go to infinity at a much higher rate. Under the assumptions made in the subsection, it can be shown that the conditions of Theorem 4.4 are satisfied. 
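For concreteness, the weight formula just derived can be evaluated directly. The following is a minimal sketch (the sample sizes, the value of C, the variance, and the function name are illustrative only) of the optimum weights (λ₁*, λ₂*)ᵗ = (V + BBᵗ)⁻¹1 / 1ᵗ(V + BBᵗ)⁻¹1 and the resulting linear WLE; note that the bound C enters through BBᵗ, i.e. squared.

```python
import numpy as np

def restricted_normal_wle(x1, x2, sigma2, C):
    """Linear WLE of theta1 for two normal samples with |theta2 - theta1| <= C.

    Uses the optimum weights of Section 4.3.2:
        lambda2 = (sigma2/n1) / (sigma2/n1 + sigma2/n2 + C**2),
        lambda1 = 1 - lambda2,
    so that theta1_hat = lambda1 * mean(x1) + lambda2 * mean(x2).
    """
    n1, n2 = len(x1), len(x2)
    lam2 = (sigma2 / n1) / (sigma2 / n1 + sigma2 / n2 + C ** 2)
    lam1 = 1.0 - lam2
    return lam1 * x1.mean() + lam2 * x2.mean(), (lam1, lam2)

# Illustration (arbitrary values): theta1 = 0, theta2 = 0.4, C = 0.5.
rng = np.random.default_rng(0)
x1 = rng.normal(0.0, 1.0, size=20)
x2 = rng.normal(0.4, 1.0, size=200)
est, (lam1, lam2) = restricted_normal_wle(x1, x2, sigma2=1.0, C=0.5)
print(lam1, lam2, est)
```

The weight on the second sample shrinks at rate O(1/n₁) as n₁ grows, which is the behaviour required by Assumption 4.5.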
The maximizer of the weighted likelihood in this example is unique for any fixed sample size. Therefore, we have

√n₁ (θ̂₁^{(n₁)} − θ₁⁰) → N(0, I(θ₁⁰)⁻¹) in distribution, as n₁ → ∞.

4.3.3 Multivariate Normal Means

Let X̄ = (X̄₁., ..., X̄_m.), where X̄_i. denotes the sample mean from population i, so that X̄_i. ~ N(θ_i, 1/n_i), i = 1, ..., m. Assume that the θ_i are "close" to each other. The objective is to obtain a reasonably good estimate of θ₁. If the sample size from the first population is relatively small, we choose the WLE as the estimator. In the normal case, the WLE, θ̂₁, takes the following form:

θ̂₁ = Σ_{i=1}^{m} λ_i X̄_i..

Note that the James–Stein estimator of the parameter θ = (θ₁, ..., θ_m) is given by ζ(X̄) = (ζ₁(X̄), ..., ζ_m(X̄)), where

ζ_i(X̄) = (1 − (m − 2)/Σ_{k=1}^{m} X̄_k.²) X̄_i..

The quantity 1 − (m − 2)/Σ_{i=1}^{m} X̄_i.² can be viewed as a weight function derived from the weight in the James–Stein estimator. Consider the following choice of weights of James–Stein type:

λ₁(X̄) = 1 − (1/n₁^{1+δ}) (m − 2)/(Σ_{i=1}^{m} X̄_i.² + c),
λ_i(X̄) = (1/(m − 1)) (1/n₁^{1+δ}) (m − 2)/(Σ_{i=1}^{m} X̄_i.² + c),  i = 2, 3, ..., m,

for some δ ≥ 0 and c > 0. It can be verified that Σ_{i=1}^{m} λ_i = 1 and λ_i > 0, i = 2, 3, ..., m. It follows that

P_{θ⁰}( (1/n₁^{1+δ}) (m − 2)/(Σ_{i=1}^{m} X̄_i.² + c) > ε ) ≤ (1/ε) E_{θ⁰}[ (1/n₁^{1+δ}) (m − 2)/(Σ_{i=1}^{m} X̄_i.² + c) ] ≤ (m − 2)/(n₁^{1+δ} ε c)   (since X̄_i.² ≥ 0),

so that

P_{θ⁰}( |λ₁(X̄) − 1| > ε ) = O(1/n₁^{1+δ}),      (4.37)
P_{θ⁰}( λ₁(X̄) < 0 ) = O(1/n₁^{1+δ}).           (4.38)

We consider the following two scenarios.

(i) If we set δ = 0, it follows that λ_i(X̄) − w_i → 0 in P_{θ⁰}-probability for i = 1, 2, ..., m. Assumption 4.6 is then satisfied. Therefore, the weak consistency and asymptotic normality of the WLE constructed with this set of weights follow.

(ii) If δ > 0, then λ_i(X̄) − w_i → 0, a.s. [P_{θ⁰}], for i = 1, 2, ..., m, and this leads to strong consistency. Since strong consistency implies weak consistency, asymptotic normality of the WLE using adaptive weights also follows in this case.

4.4 Concluding Remarks

In this chapter we have shown how classical large sample theory for the maximum likelihood estimator can be extended to the adaptively weighted likelihood estimator. In particular, we have proved the weak consistency of the latter and, under more restrictive conditions, of the roots of the weighted likelihood equation. The asymptotic normality of the WLE is also proved. Observations from the same population are assumed to be independent, although observations from different populations obtained at the same time can be dependent. In practice, weights will sometimes need to be estimated. Assumption 4.6 states conditions that ensure the large sample results obtain. In particular, they obtain as long as the samples drawn from populations other than the one of inferential interest are of the same order as the sample drawn from the latter.

This finding could have useful practical implications, since often there will be a differential cost of drawing samples from the various populations. The overall cost of sampling may be reduced by judiciously collecting a relatively larger amount of inexpensive data that, although biased, nevertheless increases the accuracy of the estimator. Our theory suggests that as long as the amount of that other data is about the same as that obtained from the population of direct interest (and the weights are chosen appropriately), the asymptotic theory will hold.
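As a numerical companion to the example of Section 4.3.3, the following minimal sketch evaluates the James–Stein-type weights and the resulting linear WLE of θ₁ for a few sample sizes. The population means, the constants c and δ, and the function name are illustrative only, and each population is taken to have unit variance and a common sample size.

```python
import numpy as np

def js_type_weights(xbar, n1, delta=0.0, c=1.0):
    """James-Stein-type adaptive weights (sketch):
    lambda_1 = 1 - (m - 2) / (n1**(1 + delta) * (sum(xbar**2) + c)),
    with the remaining mass split equally over populations 2,...,m."""
    m = len(xbar)
    shrink = (m - 2) / (n1 ** (1 + delta) * (np.sum(xbar ** 2) + c))
    lam = np.full(m, shrink / (m - 1))
    lam[0] = 1.0 - shrink
    return lam

# Illustrative means that are "close" to each other; theta_1 = 0.5.
rng = np.random.default_rng(1)
theta = np.array([0.5, 0.6, 0.4, 0.55, 0.45])
for n in (10, 100, 1000, 10000):
    xbar = np.array([rng.normal(t, 1.0, size=n).mean() for t in theta])
    lam = js_type_weights(xbar, n1=n, delta=0.0, c=1.0)
    wle = np.dot(lam, xbar)                    # linear WLE of theta_1
    print(n, lam[0].round(4), wle.round(4))    # lam[0] -> 1 and wle -> 0.5 as n grows
```

The printed weights approach (1, 0, ..., 0) as the sample size grows, in line with the consistency results of this chapter.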
C h a p t e r 5 C h o o s i n g Weights by C r o s s - V a l i d a t i o n 5.1 I n t r o d u c t i o n This chapter is concerned with the application of the cross-validation criterion to the choice of optimum weights. This concept is an old one. In its most primitive but nevertheless useful form, it consists of controlled and uncontrolled division of the data sample into two subsamples. For example, the subsample can be selected by deleting one or a few observations or it can be a random sample from the dataset. Stone (1974) conducts a complete study of the cross-validatory choice and assessment of statistical predictions. Stone (1974) and Geisser (1975) discuss the application of cross-validation to the so-called K-group problem which uses a linear combinations of the sample means from different groups to estimate a common mean. Breiman and Friedman (1997) also demonstrate the benefit of using cross-validation to obtain the linear combination of predictions to achieve better estimation in the context of multivariate regression. Although there are many ways of dividing the entire sample into subsamples such 85 / 5.1. INTRODUCTION . 86 as a random selection technique, we use the simplest leave-one-out approach in this chapter since the analytic forms of the optimum weights are then completely tractable for the linear WLE. We will denote the vector of parameters and the weight vector by 0 = (di, 9 2 , 9 m ) and A \u00E2\u0080\u0094 (Ai, A 2 , A m ) respectively. Assume that ||0||.< oo for i = 1,2, ...,m. Let X\u00C2\u00B0ept and A \u00C2\u00B0 p t be the optimum weight vector for samples with m equal and unequal sizes. We require that E \ = 1 in this chapter. i=i Suppose that we have m populations which might be related to each other. The probability density functions or probability mass functions are of the form fi(x;9i) with 6i as the parameter for population i. Assume that -Xii) X12, X13, Xini ~ fi(x;9i) X21, X22, X23, X2n2 ~ f2(x]02) Xml, Xm2, Xm3, Xmrim ~ fm(x',9m) where, for fixed i, the {Xij} are observations obtained from population i and so on. Assume that observations obtained from each population are independent of those from other populations and E(Xij) = 4>(9i),j = 1, 2 , r i j . The population parameter of the first population, 9X, is of inferential interest. Taking the usual path, we predict Xij by (9[~j)), the WLE of its mean without using the X\j. Note that (9~[ is a function of the weight vector A by the construction of the WLE. A natural measure for the discrepancy of the WLE is the following: D(X) = f2(xl3-d>(9[^))2. (5.1) The optimum weights are derived such that the minimum of D(\) is achieved for m fixed sample sizes n\, n 2 , n m and E A* = 1. j=i We will study the linear WLE by using cross-validation when E(Xij) = 9i,j = 1.2, ...nj for any fixed i. The asymptotic properties of the WLE are established in 5.2. L I N E A R W L E FOR E Q U A L S A M P L E SIZES 87 this chapter. The results of simulation studies are shown later in this chapter. 5.2 L i n e a r W L E for E q u a l Samp le Sizes Stone (1974) and Geisser (1975) discuss the application of the the cross-validation approach to the so-called K-group problem. Suppose that the data set S consists of n observations in each of K groups. The prediction of the mean for the ith group is constructed as: (ii = aXi. + (1 \u00E2\u0080\u0094 a)X_. m n n where Y.. = E ^ ij a n d X^ = ^ ]T) -^0'- ^ w e a r e interested in group 1, then i=lj=l j=l the prediction for group 1 becomes A 1 = ( i - ^ Q ) x 1 . 
+ g ^ \u00E2\u0080\u009E . We remark that the above formula is a special form of linear combination of the sample means. The cross-validation procedure is used by Stone (1974) to derive the value of ct. We consider general linear combinations. Let 0^ denote the WLE by using the cross-validation rule when the sample sizes are equal. If tf1, A 2. \u00E2\u0080\u0094 t / 2 , j = i Therefore, b\ \u00E2\u0080\u0094 cov \u00E2\u0080\u0094\u00E2\u0080\u00A2 o\ \u00E2\u0080\u0094 p0~\O2. Thus condition p < a\ja2 implies that of > cov for sufficiently large n. Thus, A^* eventually will be positive, o We remark that the condition p < a\/a2 is satisfied if a2 < o\ or p < 0. If the condition p < o~i/o~2 is not satisfied, then A ^ will have negative sign for suffi-ciently large n. However, the value of A^' will converge to zero as shown in the next Proposition. Proposition 5.2 If 6\ ^ Q\, then, for any given e > 0, Peo{\\T - 1| < e) \u00E2\u0080\u0094\u00E2\u0080\u00A2 1 and Peo{\\\u00C2\u00B02pt\ < e).\u00E2\u0080\u0094>\u00E2\u0080\u00A2 1. Proof: From Lemma 5.1, it follows that the second term of S\ goes to zero in prob-ability as n goes to infinity while the first term converges to (9\ \u00E2\u0080\u0094 0\u00C2\u00B0)2 in probability. 5.2. L INEAR W L E FOR E Q U A L S A M P L E SIZES 93 Therefore we have St ^ (9\u00C2\u00B0 - 8\u00C2\u00B02)2 as n oo, where (6\ \u00E2\u0080\u0094 90,)2 / 0 by assumption. Moreover, we see that Sf = 0 P ( i ) . By definition of A f , it follows that , \u00E2\u0080\u009E ,5 f . P<>0 |A21 = | \u00E2\u0080\u0094 | \u00E2\u0080\u0094> 0 as n \u00E2\u0080\u0094>\u00E2\u0080\u00A2 oo. This completes the proof, o The asymptotic limit of the weights will not exist if 9\ equals 99,. This is because the cross-validation procedure will not be able to detect the difference of the two populations involved since there is none. This can be rectified by defining A f = where c > 0. We remark that the knowledge of the variances and covariances is not assumed. 5.2.2 Alternative Ma t r ix Representation of A e and be To study the case of more than two populations, it is necessary to derive an alternative matrix representation of Xopt. It can be verified that -(-j)-(-j) _ . , _ _ 1 w _ _ 1 X n\u00E2\u0080\u00941- n \u00E2\u0080\u0094 1 2\u00E2\u0080\u0094 \u00E2\u0080\u0094 ^n \u00E2\u0080\u0094 ^n \u00E2\u0080\u0094 i f ^ ~\2 \u00E2\u0080\u0094 6 n X j . X / j _ ~7%ij%k. TXkjxi. ~ r \ 7) %ijxkj 77 -L J. n X where e\u00E2\u0080\u009E \u00E2\u0080\u0094 - J L T . \" 71\u00E2\u0080\u00941 5.2. L INEAR W L E FOR EQUAL S A M P L E SIZES 94 Thus, we have n n , 1 ' v (\u00E2\u0080\u0094j) {\u00E2\u0080\u0094j) v > / 2 ^7i / I \2 \ \u00E2\u0080\u00A2^i. %k. / j I &nXi.%k. ~ ~^%ij-Ek. ~ ^\u00E2\u0080\u00A2^fcj-^'i. \"F ( \u00E2\u0080\u0094 ~j XijXkj J j=l j = l \ / n n 1 71 _ n _ lXk- E ~ n_\Xi- E ^ + (n _ ]_) E X * J ' a ' * J j = l j = l J = l 71 2 2 2 77. 1 ^ > [n \u00E2\u0080\u0094 l)z n n \u00E2\u0080\u0094 177 en(n - 2)a;i xfc. + - - Y](xij - Xi)(xkj - xk) + -XiXk. 77 \u00E2\u0080\u0094 1 77 ^ \u00E2\u0080\u0094 ' 77\u00E2\u0080\u00941 (e*(n - 2) + 070, + ^ covl, where ^ l \" C 0 7 J i f c = -^2(Xij -Xi.)(xkj -Xk.). Recall that, for 1 < i < m and 1 < k < m, Ae(ik) = E^ .7=1 ( - J ) ^ - J ) It follows that A ^ ^ E + (el(n-2) + - ^ J 00* (5.3) where E i j t = covik and 0 = (xh,xm.). 5.2. L I N E A R W L E F O R E Q U A L S A M P L E S I Z E S 95 We also have X ^ 3=1 \S - \ ( __L_ \ An + / J (Cn-^lj Cn-^lJ I Cn^i. 
-^^ij j j=l ^ ' e \" n - 1 ^\u00E2\u0080\u0094' It then follows that be(x) = A 1 - e % . (5.4) where ^4i is the first column of A E and E i is the first column of the sample covariance matrix E. 5.2.3 Optimum Weights Af* B y Cross-validation We are now in a position to derive the optimum weights when sample sizes are equal. Propos i t ion 5.3 The optimum weight vector which minimizes Dem^ takes the fol-lowing form XT = ( 1 , 0 , 0 , 0 ) * - el ( A ; % - ^ ^ A - e l i \ . Proof: By differentiating Dem^ \u00E2\u0080\u0094 u ( l ' A \u00E2\u0080\u0094 1) and setting the result to zero, it follows that 5.3. L I N E A R W L E FOR U N E Q U A L S A M P L E SIZES 96 It then follows that A f = A:1 ( We then have 1 = i * A f = Thus, 2 (1 - l 'A^&e). Therefore, Af* = A-el be + Since is a quadratic function of A and A > 0, the minimum is achieved at the point A f . Furthermore, by equation (5.3) and (5.4), we have This completes the proof, o We remark that A e is invertible since E is invertible. We remark that the ex-pression of the weight vector in the two population case can also be derived by using the matrix representation given as above. The detailed calculation is quite similar to that given in the previous subsection. In the previous section, we discussed choosing the optimum weights when the samples sizes are equal. In this section, we propose cross-validation methods for choosing A - % = A ; 1 (AX - e2Ex) = (1, 0, 0 , 0 ) * - e^\" 1 ^ . Denote the optimum weight vector by \ \u00C2\u00B0 p t . It follows that 5.3 L i n e a r W L E for U n e q u a l Samp le Sizes 5.3. L I N E A R W L E FOR U N E Q U A L S A M P L E SIZES 97 adaptive weights for unequal sample sizes. If the sample sizes are not equal, it is not clear whether the delete-one-column approach is a reasonable one. For example, suppose that there are 10 observations in first sample and there are 5 observations in the second. Then there is no observation to delete for the second sample for half of the cross-validation steps. Furthermore, we might lose accuracy in prediction deleting one column for small sample sizes. Therefore we propose alternative method which delete only one data point from the first sample and keep all the data points from the rest of samples if the sample sizes are not equal. 5.3.1 Two Population Case Let us again consider the two population case in which only two populations are considered. The optimum weights Xopt are obtained by minimizing the following objective function: m D^\X) = YJ{Xi]-^x[-J) - \ 2 X 2 ) , 3 = 1 where YJ Aj = 1 and Xlm = J2 Xik- We 'remark that the major difference i = l k^j between and is that only the j th data point of the first sample is left out for the j t h term in Du^\u00E2\u0080\u00A2 Under the condition that Ax + A2 = 1, we can rewrite as a function of Ai: \u00C2\u00AB i . . .. . 2 3=1 ni 2 = J ^ ( ( x l j - x 2 ) + \ 1 ( x 2 . - x < - i ) j ) \u00E2\u0080\u00A2 3=1 (2) We differentiate D\ with respect to Ai- It then follows that - + - xi~j))) - xii~j)) \u00E2\u0080\u00A2 3=1 5.3. L INEAR W L E FOR U N E Q U A L S A M P L E SIZES 98 We then have E ( ^ 2 . - X | J V ^2 Consider ni / \" V \"v r ( - J ') ' \ .7=1 m E (*2- - - ir^ rr**1- - (*2- - x ^ 3=1 V 1 7 E ( A 2 . - A \ ) ( A \ _ x y ) - \u00E2\u0080\u0094 - E ( A i . - X y ) ( X 2 . - Xl3) 3=1 1 J'=l 1 \"l nifXx. - A \ ) 2 + \u00E2\u0080\u0094 - E (Ax . 
- XX])Xl3 nx _ 77-1 \u00E2\u0080\u0094 1 and m su2 = E ^ 2 - - ^ ) 2 i=i 1 = E(^i- - * 2 - ) 2 + 2^-rr(^- - *2-) E ^ - - + ( ^ - Z T ) 2 E ^ - - x ^ n i \u00E2\u0080\u0094 1 A\u00E2\u0080\u0094' n i 3=1 3=1 3=1 We then have nAXx -X2f -^Ten A f * % 1 and A f ^ 0. Proof: From equation (5.5), it follows that 5.3. L I N E A R W L E FOR U N E Q U A L S A M P L E SIZES 99 By the Weak Law of Large Numbers, we have 2 pe\u00C2\u00B0. 2 ol \u00E2\u0080\u0094> a (X L -x 2 . ) 2 % (0?-02\u00C2\u00B0)Vo. It then follows that 2 2^ n 1 ( X 1 . - X 2 . ) 2 + _ L _ a 2 We then have A f ' ^ 1 . 0. The last assertion of the theorem follows by the fact that Ai + A2 = 1. o 5.3.2 Optimum Weights B y Cross-Validation We derive the general formula for the optimum weights by cross-validation when the sample sizes are not all equal. The objective function is defined as follows: = E U ^ - A ^ - E ^ 3=1 \ i=2 n i . n i / -. m \ \u00C2\u00A3 X\3 - 2 \u00C2\u00A3 X y A, ( X L + \u00E2\u0080\u0094 j - C * ! . - *y)) + \u00C2\u00A3 XpCi. 3 = 1 3 = 1 \ 1 t=2 / +E (A i (^ i - + ^ - T i ^ - - *y) j + E A ^-j = l \ ^ ' i=2 = c{X)-2b{X)\u + \tuA{X)\u where 6i = V X y (xL + (Xx. - Ay)) = n x X 2 - - ^ - a 2 -f-f V ni - 1 / ni - 1 3=1 bi = niXiXi,, i = 2,...,m; 5.3. L I N E A R W L E FOR U N E Q U A L S A M P L E SIZES 100 and = E ( * . . + ^ ( * . - - - M ) > = '0?. + ( ^ * ? \u00C2\u00ABij = fi\XiXj,, j 7^ 1 or j ^ 1. It then follows that A = nx ( where dij = 0 , i 7^ 1 or j / 1. By the elementary rank inequality, it follows rank (A) < rank(9t\u00C2\u00A7) + rank(D) = 2. Therefore, we have rank(A) < m if m > 2. It then follows that A is not invertible for m > 2. Thus the Lagrange method will not work in this case since it involves the inversion of the matrix A . To solve this problem, we can rewrite the objective function in terms of m A2, A 3 , . . . ,A m only, that is, we replace Ai by 1 \u00E2\u0080\u0094 E - V The original minimization i=2 problem is then transformed into a minimization problem without any constraint. As we will see in the following derivation, the new objective function is a quadratic function of A2, A 3 , A m . Thus, the minimum of the new objective function exists and is unique. 5.3. L I N E A R W L E FOR U N E Q U A L S A M P L E SIZES 101 By replacing Ai by 1 - J2 A*, we then have i=2 m b(Xy\u = 6iAi + ^ M i i=2 ( m \ ro i \u00E2\u0080\u0094 A \u00C2\u00BB ) bi + J2biXi i=2 / t=2 m i=2 m 2 = b i + n 1 E ( X 1 . X i . - ^ i . + ^ - r y a ? ) A i i=2 ^ = fei+nx^^^-^ + ^ - ^ - ^ A i . i=2 Since 4^ is symmetric, we also have A*J4A u = A 2 a u + 2Ax E AjOii + Aja^Afc i=2 i=2 k=2 / . m \ 2 / m \ m mm = i - E A 0 a u + 2 ( 1 _ E A < E A*aii + E E A*a^ A i=2 / \ i=2 / i=2 i=2 Jfc=2 an - 2an E + E E A*anAfc ) ^ i=2 i=2 k=2 / ( m mm \ m m E A*ai* - E E Ai\u00C2\u00B0iiAfc) + E E AiOyAfc i=2 i=2 A:=2 / i=2 k=2 m mm = an ~ 2 E( a n ~~ \u00C2\u00B0 i * ) A i + E E A ^ a ^ ' + \u00C2\u00B0 u ~~ 2 a i i ) A f c -i=2 i=2 k=2 5.4. A S Y M P T O T I C PROPERTIES OF T H E WEIGHTS 102 We then have = c(X)-2bl-2n1f2(el(9l-\u00C2\u00A71) + ^ - a f ) x i 1=1 \ rn-1 J m mm - 2 E(Q u ~ an)^i + E \u00C2\u00A3 Ai(flij + an - 2au)Xk i=2 i=2 k=2 m ( 1 1 \ = a u - 2b, + c{X) - 2nx V 0 ^ - 6X) + -a\ + \u00E2\u0080\u0094(a n - a u) A, + E X] Aj(au + a n - 2ail)Afc i=2 k=2 m \u00E2\u0080\u0094 O n \u00E2\u0080\u0094 11 - 26i + c(X) - 2nx J2 n i ff^ m m / 1 \ + n i A 3 > . . . J A m ) ' = ^ 2 i ? 
C 7 - 1 l where C is a m - 1 by m - 1 matrix, and for i = 1, 2, ...m \u00E2\u0080\u0094 1, j = 1 , 2 , m \u00E2\u0080\u0094 1, Cij = 0j+i 0j+i + 9\ \u00E2\u0080\u0094 29i+i\u00C2\u00A7i + ^ 1)2<^2' We then have Af* = ( A 1 > ( ^ - 1 ) ) t ) t . where Af* = 1 \u00E2\u0080\u0094 1*A\u00C2\u00B0P*^-1 .^ We remark that C is indeed invertible for m = 2 and m = 3. It is not clear whether C is invertible for m > 3. Therefore, the g-inverse of the matrix A should be considered in order find the optimum weight vector. 5.4 A s y m p t o t i c P r o p e r t i e s of the W e i g h t s In this section, we derive the asymptotic properties ofthe cross-validated weights. Let 0 ^ be the MLE based on the first sample of size n\. Let 9\~^ and respectively 5.4. A S Y M P T O T I C PROPERTIES OF T H E WEIGHTS 103 be the MLE and WLE based on m samples without the jth data point from the first sample. This generalizes the two cases where either only the jth data point is deleted from the first sample or jth data point from each sample is deleted. Note that 9[~^ is a function of the weight function A. Let ^Dni be the average discrepancy in the cross-validation which is defined as 1 1 7 1 1 2 ni ni . 3=1 Let A ^ be the optimum weights chosen by using the cross-validation. Let 9\u00C2\u00B0 = (0\u00C2\u00B0, 02, 9m)) where 0? is the true values for 9\. We then have the following theorem. Theorem 5.1 Assume that (1) ^ D n i has a unique minimum for any fixed n\; (2) \u00C2\u00A3 E U{e\r3)) - mi) ^ O a s n , ^ oo; 3=1 (3) Pgo ^ E (Xij ~ (0{rJ)))2 < 1 for s o m e constant 0 < K < oo; (4) Pgo ( 0(0^) - (9^) > Af) = o(^) for some constant 0 < M < oo; then A ^ ) ^ ^ 0 = (l,0,0,...,0) t. (5.6) Proof: Consider 3 = 1 1 E - ^ (^ \"J'))) + - tt&r*))) 1 n i 2 1 7 1 1 \ 2 i 2 (*, - >) + \u00C2\u00A3 E (*(*\u00E2\u0080\u00A2') - *$\"\">) =i j=i ni 2 n i + \u00E2\u0080\u00A2 5.4. A S Y M P T O T I C P R O P E R T I E S O F T H E W E I G H T S 104 Note that ^ E N - Wi\"'\"')) - {e[-]))) 3 = 1 1 . n i . 3 = 1 +-E O?) - <^ i~J))) 0(*i~J)) - <^TJ))) ni 1 j=i. = S i + S 2 where * = ^ E ( ^ - ^ ? ) ) ^ ni 1 3=1 We first show that S i Consider 0. < M for all j) Pf lo(|5i|>e) = Peo (e < | S i | and (b{6[-j)) - (f>(6{~j))+Peo (e < | S i | and \(\u00C2\u00A7[~l)) - (6~[~l)) \ > M for some f) < M e < | S i | < - E l * U - ^ ) | + ^ >M) V U l 3=1 J 1=1 1 7 1 1 \ - - m)) + miV\u00C2\u00BB ( l^) - i-1})l > M) 7 1 1 3=1 J i 1 n i 1 \ The first term goes to zero by the Weak Law of Large Numbers. The second term also goes to zero by assumption (4). We then have P0o(|Si| > e) \u00E2\u0080\u0094> 0 as n x \u00E2\u0080\u0094)\u00E2\u0080\u00A2 oo. (5.7) 5.4. A S Y M P T O T I C P R O P E R T I E S O F T H E W E I G H T S 105 We next show that S2 \u00E2\u0080\u0094^ 0 as nx \u00E2\u0080\u0094> oo. Consider Peo(\S2\>e) Pgo f^ e < j ^ l and +Pgo (e < | 5 2 | and L $ ~ 0 ) -X-Jh < Pgo\e< \S2\ < < Poo | -U < = Peo M ni 7 1 1 < M /or a// j) > M /or some l^j 1 3=1 / The first term goes to zero by assumption (2). The second term also goes to zero by assumption (4). We then have Pe\u00C2\u00B0(\S2\ > e) \u00E2\u0080\u0094> 0 as % \u00E2\u0080\u0094>\u00E2\u0080\u00A2 oo. (5.8) It then follows that 1 1 _ n i _ / 2 1 n i 2 \u00E2\u0080\u0094Dni(X)-= - E (*y - 0(^))j + - E ~ H0[~J))) + Rn (5.9) 1 1 3=1 1 3=1 where Rn \u00E2\u0080\u0094> 0. Observe that the first term is independent of A. Therefore the second term must be minimized with respect to A to obtain the minimum of ^Dni(\). We see that the second term is always non-negative. 
It then follows that, with probability tending to 1, 1 1 n i 2 1 \u00E2\u0080\u0094 A n (A ) > - E (*y - Wi~3))) = -Dni(w), 3 = 1 since (f>(9[-j)) = (f>(9{-j)) for A ( c u ) = w0 = (1,0,0,0)* for fixed m. 5.4. A S Y M P T O T I C PROPERTIES OF T H E WEIGHTS 106 Finally, we will show that ,(cv) P 0 \u00C2\u00B0 ^ ; \u00E2\u0080\u0094> w0, as rii \u00E2\u0080\u0094> oo. Suppose that A ^ \u00E2\u0080\u0094^ w0 + d where ol is a non-zero vector. Then there exist n 0 such that for n i > no, - D n i (\{cv)) >-Dni(w). Ul V / Til This is a contradiction because A ^ is the vector which minimizes ^Dni for any fixed rii and the minimum of ^ D m ( A ) is unique by assumption, o To check the assumptions of the above theorem, let us consider the linear WLE for two samples with equal sample sizes. Assumption (1) is satisfied since ^Dni(\) is a quadratic form in A and its minimum is indeed unique for each fixed n i . To check Assumption (2), consider 1 i : f - J - r E ^ - ^ n-i f^\u00E2\u0080\u0094' \ n i \u00E2\u0080\u0094 1 ^\u00E2\u0080\u0094' J=l ]=1 T i l \u00C2\u00B1 E ( ^ T ^ - ^ T ^ ) - \" f nx ^ \ ni \u00E2\u0080\u0094 1 n i \u00E2\u0080\u0094 1 ' Next we consider n i 1 4 \u00E2\u0080\u0094' \ / n i 1 i=i 1 i=i ni \u00E2\u0080\u0094 E (^y _ (\u00E2\u0080\u0094 l ~^ x \ . -Xij ni~[\ \ n i - l ni - 1 ( n \ 2 1 Ul 2 p -J \u00E2\u0080\u0094 ^ E {Xij \u00E2\u0080\u0094 A i . ) \u00E2\u0080\u0094 ^ var(Xn) < oo as ni \u00E2\u0080\u0094> oo. 5.5. SIMULATION STUDIES 107 For the last assumption of the previous theorem, consider ni ) i X L - ( A ^ X j . + \ r x 2 ) (CTJ) (cu) X L - X , It then follows from Lemma 5.1 that ' \u00E2\u0080\u00A2 o ( | ^ i B l ) ) - ^ \u00C2\u00B0 ) | > e ) ='Peo ( n - 2 < < (al-covnX^-X.y (n - 2) 2e 2 1 Ego Ego [ ( X i . - x ^ + ^ E t X y - x ^ p i = l (al-covfiX.-X^ > ( x i : - x 2 y a\ \u00E2\u0080\u0094 cov X\. \u00E2\u0080\u0094 X2 = o{n) ( n - 2) 2e 2 ' since X x . - X 2 . 0\u00C2\u00B0 - 0\u00C2\u00B0 ^ 0 and d 2 - cow a\ - cov(Xu,X2l). Thus the assumptions of the theorem are all satisfied. We then have (cu) i ; {cv) 0. This is consistent with the result of Proposition 5.2. Since the cross-validated weight function converges in probability to w0. There-fore the asymptotic normality of 0X of using cross-validated weights follows by Theo-rem 4.10. 5.5 S i m u l a t i o n Stud ies To demonstrate and verify the benefits of using cross-validation procedures described in previous sections, we perform simulations according to the following algorithm which deletes j th point from each sample, i.e. delete-one-column approach. Step 1: Draw random samples of size n from fi(x; 0j) and f2(x; i '2l> 5.5. SIMULATION STUDIES 108 n MSE(MLE) SD of (MLE-0?) 2 MSE(WLE) SD of (WLE-00) 2 MSE(WLE) MSE(MLE) 5 0.203 0.451 0.130 0.360 0.638 10 0.100 0.317 0.075 0.274 0.751 15 0.069 0.262 0.057 0.238 0.826 20 0.051 0.227 0.042 0.204 0.809 25 0.041 0.203 0.035 0.187 0.843 30 0.035 0.187 0.031 0.177 0.895 35 0.030. 0.173 0.028 0.166 0.931 40 0.025 0.159 0.023 0.153 0.932 45 0.023 0.151 0.023 0.151 0.997 50 0.020 0.141 0.020 0.141 1.007 55 0.018 0.135 0.019 0.139 1.066 60 0.017 0.129 0.018 0.133 1.057 Table 5.1: MSE of the M L E and the W L E and standard deviations of the squared errors for samples with equal sizes for N(0,1) and Af(0.3,1). Step 2: Calculate the cross-validated optimum weights by using (5.2); Step 3. Calculate the (MLE- 0\u00C2\u00B0) 2 and (WLE- 0\u00C2\u00B0) 2 ; Repeat Step 1 - 3 for 1000 times. 
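Before the averaging step described next, here is a minimal sketch of this simulation loop (Steps 1–3, repeated 1000 times). For the linear WLE with equal sample sizes and the delete-one-column rule, the criterion D(λ) is a quadratic in λ₁ once λ₂ = 1 − λ₁ is imposed, so the sketch obtains the optimum weight by direct least squares rather than by quoting equation (5.2); the N(0,1) and N(0.3,1) setting matches Table 5.1, and all function and variable names are illustrative.

```python
import numpy as np

def cv_weight_equal(x1, x2):
    """Leave-one-column-out CV weight lambda1 for the linear WLE (lambda2 = 1 - lambda1).

    Minimizes D(lam) = sum_j (x1[j] - lam*mean(x1 w/o j) - (1-lam)*mean(x2 w/o j))**2,
    a quadratic in lam, so the minimizer has a closed form."""
    n = len(x1)
    idx = np.arange(n)
    m1 = np.array([x1[idx != j].mean() for j in range(n)])  # leave-one-out means
    m2 = np.array([x2[idx != j].mean() for j in range(n)])
    a, b = x1 - m2, m1 - m2
    return float(np.dot(a, b) / np.dot(b, b))

rng = np.random.default_rng(2)
n, reps, theta1 = 20, 1000, 0.0
se_mle, se_wle = [], []
for _ in range(reps):
    x1 = rng.normal(theta1, 1.0, size=n)          # Step 1
    x2 = rng.normal(0.3, 1.0, size=n)
    lam1 = cv_weight_equal(x1, x2)                # Step 2
    wle = lam1 * x1.mean() + (1 - lam1) * x2.mean()
    se_mle.append((x1.mean() - theta1) ** 2)      # Step 3
    se_wle.append((wle - theta1) ** 2)
print(np.mean(se_mle), np.mean(se_wle))           # Monte Carlo MSE's, cf. Table 5.1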
Then calculate the averages and standard deviations of the squared differences of the MLE and the WLE from the true parameter value θ₁⁰, respectively, and the averages and standard deviations of the optimum weights.

We generate random samples from N(0, σ₁²) and N(c, σ₂²), and also from a pair of Poisson distributions; for the Poisson case we set (θ₁, θ₂) = (3, 3.6) for simplicity. Some of the results are shown in Table 5.3 and Table 5.4. The result for the Poisson distributions is somewhat different from that for the normal. The striking difference can be seen in the ratio of the average MSE of the WLE to the average MSE of the MLE. The WLE achieves a smaller MSE on average when the sample sizes are less than 30; once the sample size is over 30, it seems that we should not combine the two samples. This is not the case for the normal until the sample size reaches 45. We remark that the reduction in MSE disappears if we set c = 1.5 in the above case. Thus the cross-validation procedure will not combine two samples if the second sample does not help to predict the behavior of the first. We also emphasize that the value of c in both cases is not assumed to be known to the cross-validation procedure.

n    MSE(MLE)  SD     MSE(WLE)  SD     MSE(WLE)/MSE(MLE)
5    0.312     0.558  0.235     0.484  0.753
15   0.142     0.377  0.127     0.357  0.896
25   0.120     0.347  0.114     0.338  0.950
30   0.104     0.323  0.104     0.323  1.000
35   0.077     0.277  0.081     0.284  1.054
40   0.074     0.272  0.076     0.275  1.025
45   0.072     0.268  0.075     0.274  1.045
50   0.057     0.238  0.065     0.255  1.141
55   0.054     0.233  0.060     0.245  1.098
60   0.046     0.215  0.052     0.229  1.132

Table 5.3: MSE of the MLE and the WLE and their standard deviations for samples with equal sizes from P(3) and P(3.6).

n    AVE. of λ₁  AVE. of λ₂  SD of λ₁ and λ₂
5    0.710       0.289       0.027
10   0.729       0.270       0.057
15   0.738       0.261       0.065
20   0.754       0.245       0.077
25   0.754       0.245       0.078
30   0.768       0.231       0.086
35   0.777       0.222       0.091
40   0.789       0.210       0.093
45   0.797       0.202       0.097
50   0.799       0.200       0.095
55   0.812       0.187       0.097
60   0.820       0.179       0.096

Table 5.4: Optimum weights and their standard deviations for samples with equal sizes from P(3) and P(3.6).

We remark that simulations using the delete-one-point approach have also been carried out; they give quite similar results.

Chapter 6

Derivations of the Weighted Likelihood Functions

In this chapter, we develop the theoretical foundation for using the weighted likelihood. Hu and Zidek (1997) discuss the connection between the relevance weighted likelihood and the maximum entropy principle for the discrete case. We show that the weighted likelihood function can also be derived from the maximum entropy principle advocated by Akaike in the continuous case.

6.1 Introduction

Akaike (1985) reviewed the historical development of entropy and discussed the importance of the maximum entropy principle. Hu and Zidek (1997) discovered the connection between the relevance weighted likelihood and the maximum entropy principle for the discrete case. We offer a proof for the continuous case.

We first state the maximum entropy principle: all statistical activities are directed to maximizing the expected entropy of the predictive distribution in each particular application. When a parametric model {p(x;θ); θ ∈ Θ} of the distribution of a future observation x is given, the goodness of a particular member p(x;θ) as the predictive distribution of x is evaluated by the relative entropy

B(f; p(·;θ)) = −I(f, p(·;θ)), where I(f, p(·;θ)) = ∫ f(x) log [f(x)/p(x;θ)] dx

and f(x) is the true distribution.
In this expression, logf(x)/p(x;9) is defined as -f-oo if f(x) > 0 and p(x; 9), so the expectation could be +oo. Although logf(x)/p(x; 9) is defined as \u00E2\u0080\u0094 oo when f(x) \u00E2\u0080\u0094 0 and p(x; 9) > 0, the integrand, f(x)logf(x)/p(x; 9) is defined as zero in this case. The relative entropy is a natural measure of discrepancy between two probability functions. Hence, maximizing B(f;p(.;9)) is equivalent to minimizing I(f,p(.;9)) with re-spect to 9. Without any restrictions, the desired density function is f(x) itself. However, the true density function f(x) is indeed unknown. We therefore use a set of density functions, fi(x), f2(x),fm(x), say, to represent the true density function. The density function fi (x} represents the density function which is thought to be the \"closest\" to the true density function f(x). This resembles the idea of compromised MLE proposed by Easton (1991). To be consistent with our use of relative entropy, we use it in interpreting \"re-semblance\" of any density function g(x) to the fi(x) and define that term to mean The fa reflects the maximum allowable deviation from the density function fi(x). If ai is set to be zero, then g(x) takes exact the same functional form of fi(x). For a given set of density functions, we seek a probability density function which minimizes I(fi,g) = J f\(x)log ^rdx over all probability densities g satisfying dx. p(x;9) = 1, 2, 3..., ra. (6.1) 6.2. EXISTENCE OF T H E OPTIMAL DENSITY 114 where a, are finite non-negative constants. This idea of minimizing the relative entropy under certain constraint is also similar to the approach outlined in Kull-back(1959, Chapter 3) for the hypothesis testing. 6.2 E x i s t e n c e of the O p t i m a l D e n s i t y Let V be a reflexive Banach space. Let \u00C2\u00A3 be a non-empty closed convex subset of V. Define a function 1(g) on \u00C2\u00A3 into 7Z where g is a continuous function. We are concerned with minimization problem: mi 1(g). gee To avoid trivial cases, we assume that the function 1(g) is proper, i.e. it does not take the value \u00E2\u0080\u0094oo and is not identically equal to -t-oo. We then have the following theorem. Theorem 6.1 Assume that 1(g) is convex, lower semi-continuous and proper with respect to g. In addition, assume that the set V is bounded, i.e. there exist a constant M, say, such that sup 1(g) < M. gev Then the minimization problem has at least one solution. The solution is unique if the function 1(g) is strictly convex on T>. Proof: (See, for example, Ekeland 1976, p 35. ) o Consider IJ spaces (1 < p < oo). It is known that IP (1 < p < oo) is reflexive (Royden 1988, 227). Define 1(g) = / fx(x)log^dx for some given density h(x). It can be seen that 1(g) is a proper convex function and also continuous in g on LP. We also assume that 1(g) < oo. 6.2. EXISTENCE OF T H E OPTIMAL DENSITY 115 Define \u00C2\u00A3i = and {9 \u00E2\u0080\u00A2 J fi{x)log-^-^dx < cii, J g(x)dx = 1, g(x) >0,ge Lp}, i = 2 , m , \u00C2\u00A3 = Lemma 6.1 The following inequality holds: V2(f1J2)/2 0. Define ^(x,g) = fi(x)log^ + l0g(x) + f: lkfk(x)log^. By Theorem 6.3, it follows fc=i that the necessary condition for g* to be the optimal solution is that it has to satisfy the Euler-Lagrange equation, i.e. dip d dip . . ^-^W)=0' (6-9) where g = ||. Notice that ip{x,g) is not a function of g . It implies that | 4 = 0. The Euler equation then becomes dip It follows that We then have .AM + iQ f JtW = o fa) 0 x=\ k g(x) g*(x)=J2fkfk(x), k=l where t{ = l/l0, t* = lk/lo, k = 2 , m . 
Since we seek a density function, it then follows that the sum of the ti must m m be 1 since Jg*(x)dx = YJ % = ^ also follows that g*(x) = YJ Kfk(x) > 0. fe=i fc=i 6.4. DERIVATION OF T H E WL FUNCTIONS 119 Since if g*(x) = YJ t*kfk{x) < 0 for all x e K with Pi(K) > 0, the constraints fc=i f fi(x)log(fi(x)/g*(x))dx = h > 0 then will not be satisfied. This completes the proof, o Consider the minimization problem without any constraint. We then seek the optimal density function g* such that it minimizes I(f\,g) for any given f\(x). Ac-cording to Theorem 6.4, the necessary condition of the optimal function g* is that g*{x) = t\f1(x). Since t* = 0,i = 2,3, ..,m, then t{ = 1. It then follows that g*(x) = f\(x), a.e.. Furthermore, we have I(f\,g) > I(fi,g*) \u00E2\u0080\u0094 I{fi, /i) = 0 for any density function g. This result is also known as the Shannon-Kolmogorov Information Inequality. We establish the uniqueness of the optimal solution in the next theorem. m Theorem 6.5 (Uniqueness) Suppose g* = Y2tf.fi(x) w ^ the t* chosen so that'g* i=i m satisfies the constraints (6.1) and J^t* = 1,0 < t* < 1, i = l,2,...,m. Then i=l g* uniquely minimizes I(fi,g) over all probability densities g satisfying constraints (6.1). . Proof: Suppose that there exist a probability density function g0 such that f fi(x) l o g ^ ~ dx < [ fi{x) log dx, J go{x) J g*(x) while It follows that / fi(x)log ^ \ dx < ai: i = 2,...,m. while J fi{x) log g*{x) dx < j fx(x) log g0(x) dx J fi(x) log g*(x) dx< J fi(x) log g0(x) dx, i = 2, . m. 6.4. DERIVATION OF T H E WL FUNCTIONS 120 We then have 771 \u00E2\u0080\u009E 771 \u00E2\u0080\u009E E*i / fiix) l o 9 9*(x) dx < / h(x) l o 9 9o(x) dx i=l J i=l J /m p. m E*.*/i(a;) l\u00C2\u00B09 9*(x) dx < Y^ifi^) lo9 9o(x) dx i=l J i=l J g*{x) log g*(x) dx < j g*(x) log g0(x) dx It follows that I(g*,g0) < 0. However, we know that I(g*,ga) > 0. Therefore, g*{x) = go{x) for all x. This completes the proof, o The weights t* are functions of ai, a 2 , a m . To describe the relationships between t* and a,, we have the next theorem. Theorem 6.6 Suppose there exists a0 = (ai , a 2 , a m ) * and5\u00C2\u00B0 = (Si, S2,<5m)' such 771 that there exists go = ^2Ufi(x) with ti chosen so that go achieves the equalities in 7 = 1 771 the constraints (6.1) and U = 1, 0 < U < 1, i = 1,2, ...,m for any a suc/i that i=i \a,i \u00E2\u0080\u0094 a\u00C2\u00B0\ < Sf. Then U are monotone functions of ai. Moreover, < 0, i = 2,...,m, da{ 0 , 1 k^i Proof: Let i(x) = fi(x) \u00E2\u0080\u0094 fi(x). Therefore, 771 g*(x) = fx(x) + Y^^kix) k=2 1 and j i(x)dx = 0, i = 2,...,m. It also follows that m fi(x) = g*(x) + i(x) - ^k4>k{x) > 0. fc=2 6.4. DERIVATION OF T H E WL FUNCTIONS 121 This implies that m -[i(x)-Y,tkk{x)]<9*(x)- (6-10) fc=2 Since g* satisfies the constraints (6.1), it follows that, for 2 < i < m In = [M*) log dx] dti dtilJ J i y J y g*(x) J J-[/<(*) ^ g Jl{x) dx] jfc=i = - f [g*(x) + Hx) - f ] W * ( i ) ] ^ dx V ^ 9 [X) = ~ f 9*(x) ^ f \ d x - [ fciz) -f^tkMx)] d x J 9 (x) J 9*{x) < - J i(x)dx + Jg*(x)^j^dx by (6.10) = 0. Therefore, it follows that, for i = 2, ...,m, dti 1 o - >\u00C2\u00BB < 0. r9o \u00E2\u0080\u0094 \u00C2\u00B0a% du It also follows that da; d since rjx + rj2 + ... + tm = 1. This completes the proof, o Theorem 6.7 The weights ti are all between 0 and 1. Proof: Note that if we set a; = 0, then ti \u00E2\u0080\u0094 1; if a, = oo, then rjj = 0. Since rjj is a monotone function of at for any fixed a3,i ^ j , it follows that 0 < ti < 1. 
o The distributions functions fi, /2,fm are, in fact, unknown. We have to derive the optimum function by using samples from different populations. The derivation of 6.4. DERIVATION OF T H E WL FUNCTIONS 122 the weighted likelihood function for the discrete case is given in Hu and Zidek (1994). We now generalize their derivation of the weighted likelihood function. Theorem 6.8 Assume that the optimal distribution takes the functional form f(x). Given Xi3,i = 1,2, ...,m,j = l,2,...,n;, the optimization problem considered in the this section is equivalent to optimizing the weighted likelihood function. Proof: By the proof of Theorem 6.4 and the Lagrange theorem, we need to choose the optimal density function g* which minimizes The minimization problem considered is then equivalent to maximizing first term in the above equation, i.e However distributions fi(x), f2{x),fm(x) are unknown. Their natural estimate in non-parametric context would replace by its empirical counterpart. Assume that the the optimal density function takes the same functional form of f\. The estimate of the parameter of the optimal distribution would be found as m maxY^rJi / fi(x)logf(x)dx m max E k ! log f(x; 6)dFi(x). 6.4. DERIVATION OF T H E WL FUNCTIONS 123 This implies that the estimate of parameter of the optimal density is equivalent to finding the WLE derived from the weighted likelihood function if the functional form of the optimal density function is known. This completes the proof, o We have shown that the optimal function takes the form m 9*(x) = ^2tkfk(x). k=l However the density function g* does not exist if the constraints define an empty set. Let us consider a relative simple situation where three populations are involved. Recall that the ti need to satisfy the following condition: h + t2 + h = i, 0 < U < 1, i = 1,2,3. The above condition is equivalent to the following: 0<* 2 + * 3 < l ; 0 < t2 < 1;0 < t3 <1. If we set a2 = a3 = 0, then there is no probability distribution satisfying the con-straints since a probability density function can not take the functional form of f2 and / 3 at the same time if f2izfz- The reason is as follows. In order to satisfy the con-dition a2 = 0, the weight t2 must be set to 1. We must also have t3 = 1 for the same reason. Clearly, this set of weights no longer satisfies the condition t\ + t2 + t3 = 1. Lemma 6.3 The following inequality holds: m m I{fi,J2tkfk)/*) = [ l o g m h { x ) Mx)dx k=l m < / [logfi(x) - Yjklog'fk(x)]fj(x)dx \"* k=l /m m [Y,tklogfi(x) - y^Jklogfk(x)]fi{x)dx k=l k=l m dij \u00E2\u0080\u0094 k=l This completes the proof, o Let D = (d^) where ( 1 if i = l J(fiJj) if i = 2,3, ...m. and o m x i = (l,a 2,....,am) t. Theorem 6.9 (Existence) The optimal solution does not exist if rank(D) < rank(B) (6.12) where \u00C2\u00A3 m x ( m + 1 ) = [Dmxm : a]. m m Proof: Note that / ( / i , YJ hfk) is bounded by YJ \u00C2\u00A3<./(/;,//;) by Lemma 6.3. Set k=l k=l m m ' ai = YJ tkI(fi, fk), i = 2, 3 , m . Note that Y_) tk = 1. We then have the following k=l k=l simultaneous linear equations in tf. Dt = a. By a result from elementary linear algebra, the assertion of the Theorem follows, o C h a p t e r 7 Saddlepoin t A p p r o x i m a t i o n of the W L E 7.1 Introduction In the context of weighted likelihood estimation, the i.i.d. assumption is no longer valid. Furthermore, the sample sizes are usually moderate or even very small. The powerful saddlepoint technique is applied to derive the approximate distribution of WLE from exponential family. 
The saddlepoint approximation for estimating equa-tions proposed in Daniels (1983) is further generalized to derive the approximate density of the WLE derived from an estimating equation. 7.2 Review of the Saddlepoint Approximation In a pioneering paper, Daniels (1954) introduced a new idea into statistics by applying saddlepoint techniques to derive a very accurate approximation to the distribution of the sample mean for the i.i.d. case. It is a general technique which allows us to 125 7.2. R E V I E W O F T H E S A D D L E P O I N T A P P R O X I M A T I O N 126 compute asymptotic expansions of integrals of the form / evw{z){z)dz (7.1) Jv when the real parameter v is large and positive. Here w and are analytic functions of z in a domain of the complex plane which contains the path of integration V. This technique is called the method of steepest descent and is used to derive saddlepoint approximations to density function of a general statistic. Consider the integral (7.1). In order to find the approximation we deform arbi-trarily the path of integration V provided we remain the domain where w and (j) are analytic. We deform the path V such that (i) the new path of integration passes through a zero of the derivative w (z); (ii) the imaginary part of w, lw(z) is constant on the new path. If we write z = x + iy, z0 = x0 + iy0, w(z) = u(x,y) + iv(x,y), w'(z0) = 0, and denote by S the surface (x,y) \u00E2\u0080\u0094> u(x,y), then by Cauchy-Riemann differential equations du dv du dv dx dy' dy dx' it then follows that the point (x0,y0) can not be a maximum or minimum but a saddlepoint on the surface S. Moreover, the orthogonal trajectories to the level curves u(x,y) = constant are given the the curves v(x,y) \u00E2\u0080\u0094 constant. It follows that the paths on S corresponding to the orthogonal trajectories of the level curves are paths of steepest descent. We will truncate the integration at certain point on the paths of steepest descent. The major part of the integration is then used to approximate 7.2. R E V I E W O F T H E S A D D L E P O I N T A P P R O X I M A T I O N 127 the integration on the complex plane. Detailed discussions can be found in Daniels (1954). Suppose that Xi, X 2 , X n are i.i.d. real-valued random variables with den-sity / and Tn(Xi, X 2 , X n ) is real-valued statistic with density /\u00E2\u0080\u009E. Let Mn(a) = f eatfn{t)dt be the moment-generating function, and Kn(a) = logMn(a) be the cumulant-generating function. Further suppose that the moment generating func-tion Mn(a) exists for real a in some non-vanishing interval that contains the origin. By Fourier inversion, fn(x) = - ! - I\" Mn(zr)e-irxdr 2TT 7_OO = 2 ^ / M - ( n z ) e ~ n Z X d z rr+ioo / \ - / \u00E2\u0080\u00A2 exp( n(Rn(z) \u00E2\u0080\u0094 zx) J dz, 1 Jr-ioo ^ ' n 2iri where I is the imaginary axis in the complex plane, r is any real number in the interval where the moment generating exists, and Rn(z) = Kn(nz)/n. Applying the saddlepoint approximation to the last integral gives the approxima-tion of /\u00E2\u0080\u009E with uniform error of order n~1: 9n{x) = y 2 i r g , ^ a exp(n[Rn{a0)-a0x}), (7.2) where \u00C2\u00ABo is the saddlepoint determined by the equation K ( \u00C2\u00AB o ) = * , where R!n and R'^ denote the first two derivatives of R^. Detail discussions of the saddlepoint can be found in Daniels (1954) and Field and Ronchetti (1990). 7.3. RESULTS FOR E X P O N E N T I A L FAMILY 128 n n! Stirling Saddlepoint r.e. of Stirling r.e. of saddlepoint 1 1 . 
0.92 0.99 0.07786 ' 0.00102 2 2 1.92 1.99 0.04050 0.00052 3 6 5.83 5.99 0.02730 0.00028 4 24 23.50 24.00 0.02058 0.00017 5 120 118.02 119.99 0.01651 0.00016 6 720 710.08 719.94 0.01378 0.00008 7 5040 4980.40 5039.69 0.01183 0.00006 8 40320 39902.87 40318.05 0.01036 0.00005 9 362880 359537.62 362865.91 0.00921 0.00004 Table 7.1: Saddlepoint Approximations of r(?7, + 1) = n\ It is known that the Stirling.formula serves as a very good approximation to the Gamma function. The comparison of accuracies between the Stirling formula and the saddlepoint approximation based on the the first two terms is given in the above table with the last two columns for the relative error for using Stirling formula and saddlepoint approximation respectively. The saddlepoint approximation is given by \/2~/Trj\" + 1/ 2e~ n (l + j ^ ) - Notice that the expression before (l + ^ ) is exactly the Stirling Formula. 7.3 R e s u l t s for E x p o n e n t i a l F a m i l y The saddlepoint approximation stated in the last section is for a general statistic constructed by a series of i.i.d. random' variables. In this section, we derive the saddlepoint approximation to the distribution of a sum of a finite number of random variables that are independent but not identically distributed. 7.3. RESULTS FOR E X P O N E N T I A L FAMILY 129 Suppose that we consider the distribution of the following statistic: m S(X) = Tj(Xji, Xj2, Xjni), where X{3- are i.i.d for any given i. But (Xij) and (Xjj) with i ^ % do not follow the same distribution in general. Theorem 7.1 The saddlepoint approximation to the density function of the random variable defined by the convolution is given by: / \ 1 / 2 fs(x) , 27rgi^ ' (n ia5) /n i \ i=l exp\^ii^^Ri(nial)lnx-alx)j (7.3) where OJQ satisfying the following equation m 1=1 Proof: The moment generating function of 5(X) is Mi x M 2 . . . x M m , where Mm is the moment generating function of Ti(Xn, Xi2, ...,Xini). By Fourier inversion, /S( . i . - a . . . . . m , (^) = ^ J^M^ir) M2(ir)...Mm(ir)e-^dr = ^ [' Mx(nxz) M2{nlz)...Mm{n1z)e-^zxdz 2m Jz rT+ioo / . m . \ - / exp I ni ( Ri[n-\_z) Ini \u00E2\u0080\u0094 zx) ) dz, Wr-ioo V Kl=1 ') 2m where Ri is the cumulant generating funtion of Tj. Applying the saddlepoint technique we derive the approximate density for 5(X): / \ 1 / 2 fs, dx) = J ( n i , H 2 \"m) v ' n 2 7 r E ^ i ' ( n i a o ) / n i i=i exp ( n i ( Rdniao)lni \u00E2\u0080\u0094 OLQX^ V i=i where a:^ i s the root of the equation YJ -Rj(^i\u00C2\u00ABo)/ni = x - \u00C2\u00B0 t=i 7.3. RESULTS FOR E X P O N E N T I A L FAMILY 130 Example 7.1 (The Sum of Sample Means) Let us examine the distribution of Wn where Wn = \u00E2\u0080\u0094(Xu + X12 + ... + Xlni) + - ( X 2 1 + X22 + ... + X2n2). ni n2 The moment generating function of Wn is Mi(^-) n i x M2(~)n2 where M\ and M2 are the moment generating functions of X n and X 2 i respectively. Let K\ and K2 be the cumulant generating function of X n and X 2 i respectively. It then follows Ri(niz)/ni = riiKi [ \u00E2\u0080\u0094 ] /rii = \u00E2\u0080\u0094K{ f \u00E2\u0080\u0094 z ) , i = l,2. \ rii J m \m J The saddlepoint in this case is then a root of the equation \u00E2\u0080\u0094Kx (z +\u00E2\u0080\u0094TT K* \\u00E2\u0080\u0094z )=x. oz rii oz \ n2 ) The saddlepoint approximate density of Wn is \ 1/2 f I \ I N I 2TT (j^Kx(al) + ^K2(al)) exp (ni[Ki(ao) + \u00E2\u0080\u0094K2(\u00E2\u0080\u0094a*0) - a*0t) ] . V V Til Tlo / I ni n2 Assume that n\ = n2 = n and A^1) = = i i ' . We then have fw\u00C2\u00BB(x) = o o 3 2 R , , exp (n(2K(a*0) - a*0x)) . 
\27r2-^K(a0) J The sample average of the combined sample is Wn/2. It then follows that 1/2 1/2 where CHQ is the root of the equation 2n 2\u00E2\u0080\u0094AT(z) = x = 2y. 7.3. RESULTS FOR E X P O N E N T I A L FAMILY 131 It then implies that aig indeed satisfies the following equation This is exactly the saddlepoint a0 for an i.i.d. sample with size 2n. Thus, the saddlepoint approximation of the density of Wn/2 by Theorem 7.1 when the random variables from both samples are indeed i.i.d. is exactly the saddlepoint approximation of the sample mean of a i.i.d. sample with size 2n. Example 7.2 (Spread Density for the Exponential) If Yi, Y 2 , Y m + \ are independent, exponentially distributed random variables with common mean 1, then the spread, V(m+i) \u00E2\u0080\u0094 the difference between maxima and minima of the sample, has the density This is also the density of the sum Xi + X2 + ... + Xm of independent, exponentially distributed random variables Yj with respective means 1,1/2,1/m. A proof of this claim is sketched in Problem 1.13.13 in Feller (1971). It follows that the cumulant generating function of the sum \u00C2\u00A3>(X) = X\ + X2... + Xm m m . m \u00E2\u0080\u00A2 is YJ Ri(z) = \u00E2\u0080\u0094 YJ ^ n ( l \u00E2\u0080\u0094 zlJ)- The equation YJ R\{z) \u00E2\u0080\u0094 x m Theorem 7.1 becomes i=l j=l i=l YJ 1/ (j< \u00E2\u0080\u0094 z) = x which needs to be solved numerically. Due to the unequal variances of Yj, the normal approximation does not work well. Lange (1999) calculates the saddlepoint approximation for this particular example. Note that the following table is part of Table 17.2 in Lange (1999). The last column is the difference between the exact density and the saddlepoint approximation. It can be seen that saddlepoint approximation gives a very accurate approximation. Lange (1999) also shows that the saddlepoint approximation out-performs the Edge-worth expansion in this example. 7.3. RESULTS FOR E X P O N E N T I A L FAMILY 132 X Exact Error 0.5 .00137 -.00001 1.0 .05928 -.00027 1.5 .22998 .00008 2.0 .36563 .00244 2.5 .37874 .00496 3.0 .31442 .00550 3.5 .22915 .00423 4.0 .15508 .00242 4.5 .10046 .00098 5.0 .06340 .00012 5.5 .03939 -.00026 6.0 .02424 -.00037 6.5 .01483 -.00035 7.0 .00904 -.00028 7.5 .00550 -.00021 8.0 .00334 -.00014 Table 7.2: Saddlepoint Approximation of the Spread Density for m = 10. We now consider the saddlepoint approximation to the distribution of the WLE in the general exponential family. It has been shown that the WLE takes the form V m \ 9 ( Zj A J T J ( X J I , Xini) j for some smooth function g under fairly general conditions. Theorem 7.2 Assume that g is a smooth function. The saddlepoint approximation 7.3. RESULTS FOR E X P O N E N T I A L FAMILY 133 to the density of g (j\u00C2\u00A3 X ^ X ^ , X ^ ^ j is ( SwLEiy) \ 1/2 ni J 2 7 r \u00C2\u00A3 X ( A i m O / n 1 i=i / m '.exp I ni( J^i2i(Ajniao)M - ao9~l(y)) \ t=i where satisfies the following equation m 1 i = l Proof: We first derive the approximate density function for S = YJ AjIi(Yjx, Y i n i ) . i=i The cumulant generating function of AjTj is Ri(Xiz) where Ri(z) is the cumulant generating function of Tj. By Theorem 7.1, the approximate density function of S = E AjTj is given by t=i 1/2 nx 27rY:f l ; ' (A i n 1 <)/n 1 i=i exp [ nx (^2 R i i ^ i ^ o )/ni - a^x) i=i where is the root of the equation m ^i?-(AjnxQ;^)/nx = x. i=l ( m Y2, A j T j ( Y i x , X i n i i=i is given by IWLE{V) = \ 1/2 ni . 
2 7 r E ^ ' ( A l n i O / m \ i=i xexp I ni(^Ri(\iniao)/ni - a*0g 1(y)) i=i g\9-l{y)) 7.4. A P P R O X I M A T I O N F O R G E N E R A L W L E S T I M A T I O N 134 where OJQ satisfies the following equation m YJ^iniz){aw0)/n1 = g-\y). i = i This completes the proof. 7.4 A p p r o x i m a t i o n for G e n e r a l W L E s t i m a t i o n The saddlepoint approximation to estimating equations for the i.i.d. case is devel-oped by Daniels (1983). We generalize the techniques to the W L E derived from the estimation equation constructed by the Weighted Likelihood Function. Recall that the W L E is the root of the following estimating equation: m n; E ^ E J - W ; \u00C2\u00AB I ) = 0 - (7.4) i=l j=l 1 For simplicity, let tp(Xij\6i) = \u00E2\u0080\u00A2J^logf(Xij\6i). The estimating equation for W L E can be written as m rii Y t ^ ^ ( X i j ; 9 1 ) = 0. (7.5) t=l 3=1 Assume that ib(Xij-,6) is a monotone decreasing function of 0 and Aj > 0, i = l , 2 , . . . ,m . Write m rii S(a) = ^ 2 \ l ^ ( X l J ; a ) , i=l j=\ where a is a fixed number. Let aig be the root of the equation m ^ a y^ n j A j \u00E2\u0080\u0094 K i ( n i \ j Z , a) = x. 2=1 We have the following theorem: Theorem 7.3 Let 0\ be the WLE derived from the weighted likelihood equation. Then poo P0l (0! > a) = P(S > 0) ~ / fs(x, a)dx (7.6) Jo 7.4. A P P R O X I M A T I O N F O R G E N E R A L W L E S T I M A T I O N 135 and fs(x) = \ 1/2 7li 2 ? r E nihiX^g-sK^nxXiZ, a)\z=Qs i = i (7.7) ( m i = i and exp (K~i(t, a)) = Eg{exp (tip(Xij, a)) ,i = 1, 2,m. Proof: The moment generating function of S(a) is Ms(t,a) = exp(Ks(t,a)) m (7 7 ( ni \ = II /'\u00E2\u0080\u00A2\u00E2\u0080\u00A2\u00E2\u0080\u00A2/ e x p \ tXiY,^(Xii>a} ) Ylf(xij^i)dxn...dxini i = l V o o -oo V J = l / ^ m = HexpikiiXAa))11' i = l (m y^ nji^ A^i^ a) i = i / where exp (K~i(t, a)) = Egiexp(ttp(Xij,a)) ,i = 1,2, ...,m. The function Ms(t,a) is assumed to converge for real i in a non-vanishing interval containing the origin. It then folows that i 7 (m \ fs(x) = ^\u00E2\u0080\u0094 / exp 1 Y niKjjirXj, a) j exp (-irx) dr -oo T+ioo ni 2ni niKi(n\XiZ,a) \u00E2\u0080\u0094 n\xz dz T \u00E2\u0080\u0094lOO T+ZOO 2TTZ 7 ( m n- \ j exp yni \u00E2\u0080\u0094 A\"i ( r i iAj2 ; , a) \u00E2\u0080\u0094 xz) J dz The saddlepoint OJQ is the root of the equation d En\u00C2\u00BBAj\u00E2\u0080\u0094ifi(n]AiZ,a) = x. i = l 7.4. A P P R O X I M A T I O N FOR G E N E R A L WL ESTIMATION 136 It then follows that the saddlepoint approximation to the density fs(x) is given by / \ 1 / 2 fs(x) = ( 111 \ ^Wi*i-^Ki(ni\iZ,a)\z=as) ( m n l ( ^ ^ n i ^ i K i { n l ^ i z i a ) \ z = a s 0 ~ x a 0 ) ) \u00E2\u0080\u00A2 We can then deploy the device used in Field and Hampel (1982) and Daniels (1983) We then have Pg1(91>a) = P01(S(a)>O). /\u00E2\u0080\u00A2OO P9l (6X > a) = P(S > 0) ~ / fs(x, a)dx. Jo (7.8) In general, the saddlepoint approximation is very computational intensive since the normalizing constant needs to be calculated by numerical integration. We remark that the saddlepoint approximations to the WLE proposed in this chapter are for fixed weights. The saddlepoint approximation to the WLE with adaptive weights needs further study. C h a p t e r 8 A p p l i c a t i o n to Disease M a p p i n g 8.1 Introduction In this chapter, we present the results of the application of the maximum weighted likelihood estimation to parallel time series of hospital-based health data. 
Specifically, the weighted likelihood method is illustrated on daily hospital admissions for respiratory disease obtained from 733 census subdivisions (CSDs) in Southern Ontario over the May-to-August periods from 1983 to 1988. Our main interest is in estimating the rate of weekly hospital admissions for certain densely populated areas. The association between air pollutants and respiratory morbidity is studied in Zidek et al. (1998) and Burnett (1994). For the purpose of our demonstration, we consider the estimation of the rate of weekly hospital admissions for CSD #380, which has the largest yearly hospital admissions total among all CSDs from 1983 to 1988.

Estimating the rate of weekly admissions is a challenging task because of the sparseness of the data. The original data set contains many zeros: on most days of the summer there are no hospital admissions at all. On certain days, however, quite a number of people with respiratory disease went to hospital to seek treatment, perhaps because of the high level of pollution in the region. Even for CSD #380, which has the largest number of hospital admissions among all the CSDs, there are no hospital admissions on 112 of the 123 days in the summer of 1983, yet there were 17 hospital admissions on day 51. The daily counts for this CSD are shown in Figure 8.1. The problems of data sparseness and high variability are quite obvious. We therefore consider the estimation of the rate of weekly admissions instead of the daily counts. There are 17 weeks in total. For simplicity, the data obtained in the last few days of each year are dropped from the analysis since they do not constitute a whole week.

Figure 8.1: Daily hospital admissions for CSD #380 in the summer of 1983.

8.2 Weighted Likelihood Estimation

We assume that the total number of hospital admissions in a week for a particular CSD follows a Poisson distribution; that is, for year $q$, CSD $i$ and week $j$,
$$Y_{ij}^{q} \sim \mathcal{P}(\theta_{ij}^{q}), \qquad j = 1, 2, \ldots, 17;\; i = 1, 2, \ldots, 733;\; q = 1, 2, \ldots, 6.$$

Figure 8.2: Hospital admissions for CSDs 380, 362, 366 and 367 in 1983 (weekly counts, weeks 1 to 17).

The raw estimate of $\theta_{ij}^{q}$, namely $Y_{ij}^{q}$ itself, is highly unreliable: the sample size is only 1 in this case. Moreover, each CSD may contain only a small group of people whose lung conditions are susceptible to high levels of air pollution. It is therefore desirable to combine neighboring CSDs in order to produce a more reliable estimate. For any given CSD, the neighboring CSDs are defined to be those in close proximity to the CSD of interest, CSD #380 in our analysis. To study the rate of weekly hospital admissions in a particular CSD, we would expect the neighboring subdivisions to contain relevant information that might help us derive a better estimate than the traditional sample average. The importance of combining disease and exposure data is discussed in Waller et al. (1997). The Euclidean distance between the target CSD and any other CSD in the data set is calculated from their longitudes and latitudes, as sketched below.
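A minimal Python sketch of this neighbor-selection step follows. The coordinates shown are hypothetical placeholders (the real longitudes and latitudes come from the Southern Ontario data set); the 0.2 cutoff is the one used in the analysis below.

```python
import numpy as np

# Hypothetical (longitude, latitude) pairs keyed by CSD number; the real
# values come from the Southern Ontario hospital-admissions data set.
coords = {
    380: (-79.40, 43.70),
    362: (-79.50, 43.65),
    366: (-79.35, 43.78),
    367: (-79.30, 43.60),
    512: (-80.90, 44.50),   # a distant CSD that should be excluded
}

def neighbors(target, coords, cutoff=0.2):
    """CSDs whose Euclidean distance (in degrees) from the target is below
    the cutoff, sorted by distance; the target itself is excluded."""
    x0, y0 = coords[target]
    dists = [(csd, float(np.hypot(x - x0, y - y0)))
             for csd, (x, y) in coords.items() if csd != target]
    return sorted([(csd, round(d, 3)) for csd, d in dists if d < cutoff],
                  key=lambda pair: pair[1])

print(neighbors(380, coords))   # -> [(366, 0.094), (362, 0.112), (367, 0.141)]
```

With these placeholder coordinates the function returns CSDs 362, 366 and 367, matching the neighborhood used in the analysis, but the actual selection of course depends on the true coordinates.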
CSD's whose Euclidean distances are less than 0.2 from the target CSD are selected as the relevant CSD. For CSD # 380, neighboring CSD's are CSD # 362, 366 and 367. The time series plots of weekly hospital admissions for those CSD's in 1983 are shown in Figure 8.2. It seems that the hospital admissions of these CSD's at a given week might be related since the major peaks in the time series plot occurred at roughly the same time point. However, including data from other CSD's might introduce bias. The weight function defined in the W L E can control the degree of bias introduced by the combination of data from other CSD's. Ideally, we would assume that 0?- = Q\ for j = 1,2, ...,17. But this assumption does not hold due to seasonality. For example, week 8 always has the largest hospital admissions for CSD 380. By examining the data more closely, we realize that the 8th week for each year is a week with more than normal hospital admissions. In 1983, there are 21 admissions in week 8 while the second largest weekly count is only 7 in week 15. In fact, week 8 is an unusual week through 1983 to 1988. The air pollution level might further explain this phenomenon by using the method proposed in Zidek et al. (1998). Thus, the assumption is violated at week 8. One alternative is to perform the analysis on a window of a few weeks and repeat the analysis while we move the window one week forward. This is equivalent of assuming that 0?- take the same value for a period of a few weeks instead of the entire summer. In this chapter, we will simply exclude the observations from week 8 and proceed with the assumption that 0?- = Q\ for j = 1,2, ...7,9, ..17. The fact that the sample means and sample variances of the weekly hospital admissions for those 16 weeks of CSD 380 are quite close to each other supports our assumption. 8.3. RESULTS OF T H E ANALYSIS 141 Let YL to be the overall sample average for a particular CSD i for a given year. For Poisson distributions, the MLE of 6\ is the sample average of the weekly admissions of CSD #380 and the WLE is a linear combination of the sample average for each CSD according to Theorem 3.1. Thus, the weighted likelihood estimate ofthe average weekly hospital admissions for a CSD, 6\, is For our analysis, the weights are selected by the cross-validation procedure proposed in Chapter 5. Recall that the cross-validated weights for equal sample sizes can be calculated as follows: where bq{y) = \u00C2\u00A3 Y^vf j \ and Aq(y)lk = \u00C2\u00A3 Y ^ Y 9 ^ , i = 1, 2, 3,4; k = 1, 2, 3, 4. j=i \u00E2\u0080\u00A2 j=\ 8.3 R e s u l t s of the A n a l y s i s We assess the performance ofthe MLE and the WLE by comparing their MSE's. The MSE of the MLE and the WLE are defined as, for q = 1,2,.., 6, In fact, the 9\ are unknown. We then estimate the MSEM and MSEw by replacing 9\ by the MLE. Under the assumption of Poisson distributions, the estimated MSE for the MLE is given by: 4 WLE\" = ^2X^1 a = 1,2.., 6. i=l MSEUei) = Eel(Ygh-ei)2 MSEqM = var(Yu)/16, o=l,2,...6. 8.3. RESULTS OF T H E ANALYSIS 142 The estimated MSE for the W L E is give as following: ( m i=i 4 4 / m i=l k=l \ t = l The estimated M S E for the M L E and the W L E are given in the following table. It can be seen from the table that the MSE for the W L E is much smaller than that of the M L E . In fact, the average reduction of the M S E by using W L E is about 25%. 
Year 7 M L E 7 W L E 16 MSEqM 16 MSE9W M^Eqw/MSEqM 1 .185 .174 .101 .084 0.80 2 .328 .282 .241 .131 0.87 3 .227 .257 .286 .143 0.54 4 .151 .224 .159 .084 0.96 5 .303 .322 .298 .130 0.80 6 .378 .412 .410 .244 0.54 Table 8.1: Estimated MSE for the M L E and the W L E . Combining information across these CSD's might also help us in the prediction since the patterns exhibited in one neighboring location in a particular year might manifest itself at the location of interest the next year. To assess the performance of the W L E , we also use the W L E derived from one particular year to predict the overall weekly average of the next year. The overall prediction error is defined as the average of those prediction errors. To be more specific, the overall prediction errors 8.4. DISCUSSION 143 for the WLE and the traditional sample average are defined as follows: PRED V 1 \u00C2\u00A3 ( n -FT) 2 ; PRED 1 J2{WLE^-Yl+Iy The average prediction error for the MLE, PredM, is 0.065 while the Predw, the average prediction error for the WLE, is 0.047 which is about 72% of that of the MLE. 8.4 D i s c u s s i o n Bayes methods are popular choices in the area of disease mapping. Manton et al. (1989) discuss the Empirical Bayes procedures for stabilizing maps of cancer mortality rates. Hierarchical Bayes generalized linear models are proposed for the analysis of disease mapping in Ghosh et al. (1999). But it is not obvious how one would specify a neighborhood which needs to be defined in these approaches. The numerical values of the weight functions can be used as a criterion to appropriately define the neighborhood in the Bayesian analysis. We will use the following example to demonstrate how a neighborhood can be defined by using the weight functions derived from the cross-validation procedure for the WLE. From Table 8.3, we see that there is strong linear association between CSD 380 and CSD 366. However, the weight assigned to CSD 366 is the smallest one, It shows that CSD's with higher correlation contain less information for the prediction since they might have too similar a pattern to the target CSD for a given year to be helpful in the prediction for the next year. Thus CSD 366 which has the smallest weight should not be included the analysis. Therefore, the \"neighborhood\" of CSD 380 in 8.4. DISCUSSION 144 CSD 380 CSD 362 CSD 366 CSD 367 Weights CSD 380 1.000 0.421 0.906 0.572 0.455 CSD 362 0.421 1.000 0.400 0.634 0.202 CSD 366 0.906 0.400 1.000 0.553 0.128 CSD 367 0.572 0.634 0.553 1.000 0.215 Table 8.2: Correlation matrix and the weight function for 1984. the analysis should only include CSD 362 and CSD 367. In general, we might examine those CSD which are in close proximity to the target CSD. We can calculate the weight for each CSD selected by using the cross-validation procedure. The CSD with small weights should be dropped from the analysis since they are not deemed to be helpful or relevant to our analysis according to the cross-validation procedure. We remark that the weight function can also be helpful in selecting an appropriate distribution that takes into account the spatial structure. Ghosh et al. (1999) propose a very general hierarchical Bayes spatial generalized model that is considered broad enough to cover a large number of situations where a spatial structure needs to be incorporated. 
In particular, they propose the following: 0i = qi = z-b + Ui + Vi, i = 1, 2 , m where the qi are known constants, Xi are covariates, Ui and Vi are mutually indepen-dent with Vi J~ d ' N(0, a2) and the Ui have joint pdf ( m -E&-\u00C2\u00AB , )H/ (^) i=l j^i where uiij > 0 for all 1 < i ^ j < m. The above distribution is designed to take into account the spatial structure. In their paper, they propose to use uii = 1 if location % and j are neighbors. They also mention the possibility of using the inverse 8.4. DISCUSSION 145 of the correlation matrix as the weight function. The weights function derived from the cross-validation procedure might be a better choice since it takes account of the spatial structure without any specific model assumptions. The predictive distribution for the weekly total will be Poisson (WLE). We can then derive the 95% predictive interval for the weekly average hospital admissions. This might be criticized as failing to take into account the the uncertainty of the unknown parameter. Smith (1998) argues that the traditional plug-in method has a small MSE compared to the posterior mean under certain circumstances. In particu-lar, it has a smaller M S E when the true value of the parameter is not large. Let CIw and CIM be the 95% predictive intervals of the weekly averages calculated from the W L E and the M L E respectively. The results are shown in the following table. Year CIM CIw 1983 [0,3] [0, 3] 1984 [0, 5] [0,4] 1985 [0, 4] [0, 4] 1986 [0, 3] [0, 4] 1987 [0, 4] [0, 5] 1988 [0, 5] [0, 6] Table 8.3: M S E of the M L E and the W L E for CSD 380. We remark that this chapter is merely a demonstration of the weighted likelihood method. Further analysis is needed if one wants to compare the performances of the W L E , the M L E and the Bayesian estimator in disease mapping. B i b l i o g r a p h y [1] Akaike, H. (1985). Prediction and entropy, In: A Celebration of Statistics 1-24, Edited by Atkinson, A. C. and Fienberg, S. E., Springer-Verlag, New York. [2] Bliss, G. A. (1930). The problem of lagrange in the calculus of variations. The American Journal of Mathematics 52 673-744. [3] Breiman, L. and Friedman, H. J. (1997). Predicting multivariate responses in multiple regression, Journal of Royal Statistical Society: Series B 36 111-147. [4] Burnett, R. and Krewski, D. (1994). Air pollution effects on hospital admission rates: A random effects modeling approach. The Canadian Journal of Statistics 22 441-458. [5] Cox. D. R. (1981). Combination of data. Encyclopedia of Statistical Sciences 2 45-52, John Wiley k Sons, Inc., New York. [6] Csiszar, I. (1975) I-divergence geometry of probability distributions and mini-mization problems. The Annals of Probability 3 146-158. [7] Daniels, H. E. (1954). Saddlepoint approximation in statistics. The Annals of Mathematical Statistics 25 59-63. [8] Daniels, H. E. (1983). Saddlepoint approximation for estimating equations. Biometrika 70 89-96. 146 [9] Dickey, J. M. (1971) The weighted likelihood ratio, linear hypotheses on normal location parameters. The Annals of Mathematical Statistics 42 204-223. [10] Dickey, J. M and Lientz, B. P. (1970) The weighted likelihood ratio, sharp hy-potheses about chances, the order of a Markov chain. The Annals of Mathematical Statistics 41 214-226. [11] Easton, G. (1991). Compromised maximum likelihood estimators for location. Journal of the American Statistical Association 86 1051-1064. [12] Eguchi, S. and Copas, J. (1998). A class of local likelihood methods and near-parametric asymptotics. 
Journal of the Royal Statistical Society, Series B 60 709-724.

[13] Ekeland, I. and Temam, R. (1976). Convex Analysis and Variational Problems. American Elsevier Publishing Company Inc., New York.

[14] Feller, W. (1971). An Introduction to Probability Theory and Its Applications, Vol. 2. John Wiley & Sons, Inc., New York.

[15] Ferguson, T. S. (1996). A Course in Large Sample Theory. Chapman and Hall, New York.

[16] Field, C. A. and Hampel, F. R. (1982). Small sample asymptotic distributions of M-estimators of location. Biometrika 69 29-46.

[17] Field, C. A. and Ronchetti, E. (1990). Small Sample Asymptotics. Institute of Mathematical Statistics, Hayward.

[18] Geisser, S. (1975). The predictive sample reuse method with applications. Journal of the American Statistical Association 70 320-328.

[19] Genest, C. and Zidek, J. V. (1986). Combining probability distributions: a critique and an annotated bibliography. Statistical Science 1 114-148.

[20] Ghosh, M., Natarajan, K., Waller, L. A. and Kim, D. (1999). Hierarchical Bayes GLMs for the analysis of spatial data: an application to disease mapping. Journal of Statistical Planning and Inference 75 305-318.

[21] Giaquinta, M. and Hildebrandt, S. (1996). Calculus of Variations. Springer-Verlag, New York.

[22] Hardle, W. and Gasser, T. (1984). Robust nonparametric function fitting. Journal of the Royal Statistical Society, Series B 46 42-51.

[23] Hu, F. (1994). Relevance Weighted Smoothing and A New Bootstrap Method. Ph.D. Dissertation, Department of Statistics, University of British Columbia, Canada.

[24] Hu, F. (1997). The asymptotic properties of the maximum-relevance weighted likelihood estimators. The Canadian Journal of Statistics 25 45-59.

[25] Hu, F., Rosenberger, W. F. and Zidek, J. V. (2000). Relevance weighted likelihood for dependent data. Metrika 51 223-243.

[26] Hu, F. and Zidek, J. V. (1997). The relevance weighted likelihood with applications. In: Empirical Bayes and Likelihood Inference 211-235, edited by Ahmed, S. E. and Reid, N., Springer, New York.

[27] Hunsberger, S. (1994). Semiparametric regression in likelihood-based models. Journal of the American Statistical Association 89 1354-1365.

[28] Kullback, S. (1954). Certain inequality in information theory and the Cramer-Rao inequality. The Annals of Mathematical Statistics 25 745-751.

[29] Kullback, S. (1959). Information Theory and Statistics. Lecture Notes-Monograph Series Volume 21, Institute of Mathematical Statistics.

[30] Lange, K. (1999). Numerical Analysis for Statisticians. Springer-Verlag, New York.

[31] Lehmann, E. L. (1983). Theory of Point Estimation. John Wiley & Sons, Inc., New York.

[32] Markatou, M., Basu, A. and Lindsay, B. (1997). Weighted likelihood estimating equations: the discrete case with applications to logistic regression. Journal of Statistical Planning and Inference 92 215-232.

[33] Markatou, M., Basu, A. and Lindsay, B. (1998). Weighted likelihood equations with bootstrap root search. Journal of the American Statistical Association 93 740-750.

[34] Manton, K. G., Woodbury, M. A., Stallard, E., Riggan, W. B., Creason, J. P. and Pellon, A. C. (1989). Empirical Bayes procedures for stabilizing maps of U.S. cancer mortality rates. Journal of the American Statistical Association 84 637-650.

[35] National Research Council (1992). Combining Information: Statistical Issues and Opportunities for Research. National Academy Press, Washington, D.C.

[36] Newton, M. A. and Raftery, A. E. (1994).
Approximate Bayesian inference with the weighted likelihood bootstrap. Journal of the Royal Statistical Society, Series B 56 3-48.

[37] Rao, B. L. S. (1991). Asymptotic theory of weighted maximum likelihood estimation for growth models. In: Statistical Inference in Stochastic Processes 183-208, edited by Prabhu, N. U. and Basawa, I. V., Marcel Dekker, Inc., New York.

[38] Rao, C. R. (1965). Linear Statistical Inference and Its Applications. John Wiley & Sons, Inc., New York.

[39] Royden, H. L. (1988). Real Analysis. Prentice Hall, New York.

[40] Savage, L. J. (1954). The Foundations of Statistics. Springer-Verlag, New York.

[41] Schervish, M. J. (1995). Theory of Statistics. Springer-Verlag, New York.

[42] Small, C. G., Wang, J. and Yang, Z. (2000). Eliminating multiple root problems in estimation. Statistical Science 15 313-341.

[43] Smith, R. L. (1998). Bayesian and frequentist approaches to parametric predictive inference. Bayesian Statistics 6 589-612.

[44] Staniswalis, J. G. (1989). The kernel estimate of a regression function in likelihood-based models. Journal of the American Statistical Association 89 276-283.

[45] Stone, M. (1974). Cross-validation choice and assessment of statistical predictions. Journal of the Royal Statistical Society, Series B 36 111-147.

[46] Tibshirani, R. and Hastie, T. (1987). Local likelihood estimation. Journal of the American Statistical Association 82 559-567.

[47] van Eeden, C. (1996). Estimation in restricted parameter spaces: some history and some recent developments. Statistics & Decisions 17 1-30.

[48] van Eeden, C. and Zidek, J. V. (1998). Combining sample information in estimating ordered normal means. Technical Report #182, Department of Statistics, University of British Columbia.

[49] van Eeden, C. and Zidek, J. V. (2001). Estimating one of two normal means when their difference is bounded. Statistics & Probability Letters 51 277-284.

[50] Wald, A. (1949). Note on the consistency of the maximum likelihood estimate. The Annals of Mathematical Statistics 15 358-372.

[51] Waller, L. A., Louis, T. A. and Carlin, B. P. (1997). Bayes methods for combining disease and exposure data in assessing environmental justice. Environmental and Ecological Statistics 4 267-281.

[52] Warm, T. A. (1987). Weighted likelihood estimation of ability in item response theory. Psychometrika 54 427-450.

[53] Zidek, J. V., White, R. and Le, N. D. (1998). Using spatial data in assessing the association between air pollution episodes and respiratory morbidity. Statistics for the Environment 4: Pollution Assessment and Control 111-136.