UBC Theses and Dissertations

The Estimation of a Characteristic Function and Its Derivatives. Chen, Laurence Wo-Cheong, 1974.



Full Text

THE ESTIMATION OF A CHARACTERISTIC FUNCTION AND ITS DERIVATIVES

By LAURENCE WO-CHEONG CHEN
B.Sc., Notre Dame University, 1970

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ARTS in the Department of MATHEMATICS

We accept this thesis as conforming to the required standard.

THE UNIVERSITY OF BRITISH COLUMBIA
April, 1974

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my Department or by his representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Mathematics
The University of British Columbia
Vancouver 8, Canada
Date: April 8, 1974

ABSTRACT

In this thesis, we discuss the problem of estimating a characteristic function and its derivatives. We obtain estimates which are consistent and asymptotically normal, and uniformly consistent with probability one. The methods employed here are similar to the methods used in estimating a probability density function and its derivatives (see [7], [9] for references).

ACKNOWLEDGEMENT

I wish to thank Professor S.
Nash for suggesting the above investigation and for his guidance as my thesis advisor at the University of British Columbia. I also wish to thank Professor J. V. Zidek for reading my thesis and for his several valuable suggestions.

TABLE OF CONTENTS

0. INTRODUCTION
I. THE ESTIMATE $\phi_n(t)$ OF $\phi(t)$
  I-1. Asymptotic Unbiasedness
  I-2. Quadratic Consistency
  I-3. Asymptotic Normality
  I-4. Limits of the Bias and the Mean Square Error
  I-5. Uniform Consistency of $\phi_n(t)$ with Probability One
II. THE ESTIMATE $\phi_n^{(p)}(t)$ OF $\phi^{(p)}(t)$
  II-1. Asymptotic Unbiasedness
  II-2. Quadratic Consistency
  II-3. Asymptotic Normality
  II-4. Uniform Consistency of $\phi_n^{(p)}(t)$ with Probability One
BIBLIOGRAPHY

0. INTRODUCTION

Suppose we are given a sequence of independent identically distributed (iid) random variables $X_1, X_2, \ldots, X_n, \ldots$ with a common characteristic function $\phi(t)$. The problem of estimating a characteristic function is interesting for many reasons. One possible application is to determine the components of a corresponding mixture distribution function (see [4]).

In this thesis, we construct an estimate $\phi_n(t)$ of $\phi(t)$, based on the random sample $X_1, X_2, \ldots, X_n$, such that $\phi_n(t)$ will have some nice asymptotic properties and converges uniformly to $\phi(t)$ with probability one. In addition, if $E|X|^{2q}$ is finite for some integer $q > 0$, we are able to obtain an estimate $\phi_n^{(p)}(t)$ of the $p$-th derivative of $\phi(t)$ for $0 < p \le q$, such that $\phi_n^{(p)}(t)$ will have asymptotic properties parallel to those of $\phi_n(t)$. Furthermore, if $\sup E|X|^{2q} = M$ for some constant $M$, then for $0 < p \le q$ we are able to show that $\phi_n^{(p)}(t)$ converges uniformly to $\phi^{(p)}(t)$ with probability one.

I.
THE ESTIMATE $\phi_n(t)$ OF $\phi(t)$

Let $X_1, \ldots, X_n$ be iid as the random variable $X$ whose distribution $F(x) = P[X \le x]$ is absolutely continuous; that is, $F(x) = \int_{-\infty}^{x} f(u)\,du$ with density $f(x)$. Let $\phi(t)$ be the characteristic function of $F(x)$. Then each estimate $\phi_n(t)$ of $\phi(t)$, based on the empirical data, will be of the form

(1.1)  $\phi_n(t) = \int_{-\infty}^{\infty} e^{itx} f_n(x)\,dx$,

where

$f_n(x) = \frac{1}{nh} \sum_{j=1}^{n} k\!\left(\frac{x - X_j}{h}\right)$

is the kernel estimate of $f(x)$ as given in [8]; $h = h(n)$ is a function of $n$ which converges to zero as $n \to \infty$, and $k(y)$ is some symmetric density function whose moments of all orders exist. (1.1) can also be expressed as

(1.2)  $\phi_n(t) = \frac{1}{nh} \sum_{j=1}^{n} \int_{-\infty}^{\infty} e^{itx} k\!\left(\frac{x - X_j}{h}\right) dx = \frac{1}{n} \sum_{j=1}^{n} e^{itX_j} \int_{-\infty}^{\infty} e^{ithy} k(y)\,dy = \left( \frac{1}{n} \sum_{j=1}^{n} e^{itX_j} \right) \psi(th)$,

where

(1.3)  $\psi(th) = \int_{-\infty}^{\infty} e^{ithy} k(y)\,dy$.

Since $k(y)$ is a symmetric density function, $\psi(th)$ is by definition a characteristic function, and is real and even.

I-1. Asymptotic Unbiasedness of $\phi_n(t)$.

$\phi_n(t)$ would be unbiased if $E\phi_n(t)$ were equal to $\phi(t)$. But

(1.4)  $E\phi_n(t) = \psi(th) \int_{-\infty}^{\infty} e^{itx} f(x)\,dx = \psi(th)\,\phi(t)$,

and $\psi(th)$ equals one only when $th = 0$. However, as $n \to \infty$, $th \to 0$, so $\psi(th) \to \psi(0) = 1$. It follows immediately that

$\lim_{n\to\infty} E\phi_n(t) = \phi(t) \lim_{n\to\infty} \psi(th) = \phi(t)\,\psi(0) = \phi(t)$.

Hence $\phi_n(t)$ is asymptotically unbiased as $n \to \infty$ and $h \to 0$.

I-2. Quadratic Consistency.

The mean square error of $\phi_n(t)$ converges to zero as $n \to \infty$ and $h \to 0$; in notation,

(1.5)  $E|\phi_n(t) - \phi(t)|^2 \to 0$ as $n \to \infty$, $h \to 0$.

(1.5) can be rewritten as

(1.6)  $E|\phi_n(t) - \phi(t)|^2 = E[\mathrm{Re}(\phi_n(t) - \phi(t))]^2 + E[\mathrm{Im}(\phi_n(t) - \phi(t))]^2 = \mathrm{Var}[\mathrm{Re}\,\phi_n(t)] + [b(\mathrm{Re}\,\phi_n(t))]^2 + \mathrm{Var}[\mathrm{Im}\,\phi_n(t)] + [b(\mathrm{Im}\,\phi_n(t))]^2 = \mathrm{Var}[\phi_n(t)] + |b[\phi_n(t)]|^2$,
where $\mathrm{Var}[\phi_n(t)]$ is the variance of $\phi_n(t)$, and $b[\phi_n(t)] = E\phi_n(t) - \phi(t)$ is the bias of $\phi_n(t)$. The quadratic consistency can be shown if $\mathrm{Var}[\phi_n(t)]$ and $|b[\phi_n(t)]|^2$ vanish as $n \to \infty$ and $h \to 0$. But

$|b[\phi_n(t)]|^2 = |E\phi_n(t) - \phi(t)|^2 = |\psi(th) - 1|^2 |\phi(t)|^2 \to 0$

as $n \to \infty$ and $\psi(th) \to \psi(0) = 1$. It remains to show that the variance of $\phi_n(t)$ vanishes as $n \to \infty$. As we know, any complex random variable, say $Z = U + iV$, has expectation and variance of the forms

$E[Z] = E[U] + iE[V]$,  $\mathrm{Var}[Z] = \mathrm{Var}[U] + \mathrm{Var}[V]$,

where $U$ and $V$ are the real and imaginary parts of $Z$. Observe that

(1.7)  $\phi_n(t) = \left( \frac{1}{n} \sum_{j=1}^{n} e^{itX_j} \right) \psi(th) = \left( \frac{1}{n} \sum_{j=1}^{n} (\cos tX_j + i \sin tX_j) \right) \psi(th)$.

Hence

(1.8)  $\mathrm{Var}[\phi_n(t)] = \frac{[\psi(th)]^2}{n} \left\{ \mathrm{Var}(\cos tX) + \mathrm{Var}(\sin tX) \right\}$.

Since $\cos^2\theta = \frac{1}{2}(1 + \cos 2\theta)$ and $\sin^2\theta = \frac{1}{2}(1 - \cos 2\theta)$, it follows that

$E\{\cos^2 tX\} = \frac{1}{2}\{1 + E(\cos 2tX)\} = \frac{1}{2}\{1 + \mathrm{Re}\,\phi(2t)\}$, and
$E\{\sin^2 tX\} = \frac{1}{2}\{1 - E(\cos 2tX)\} = \frac{1}{2}\{1 - \mathrm{Re}\,\phi(2t)\}$;

and since $\sin\theta \cos\theta = \frac{1}{2} \sin 2\theta$, it follows that

$E\{\sin tX \cos tX\} = \frac{1}{2} E\{\sin 2tX\} = \frac{1}{2} \mathrm{Im}\,\phi(2t)$.

From the above computations, one gets

(1.9)  $\mathrm{Var}\{\cos tX\} = \frac{1}{2}\{1 + \mathrm{Re}\,\phi(2t)\} - \{\mathrm{Re}\,\phi(t)\}^2$,
$\mathrm{Var}\{\sin tX\} = \frac{1}{2}\{1 - \mathrm{Re}\,\phi(2t)\} - \{\mathrm{Im}\,\phi(t)\}^2$,
$\mathrm{Cov}\{\cos tX, \sin tX\} = \frac{1}{2}\mathrm{Im}\,\phi(2t) - [\mathrm{Re}\,\phi(t)][\mathrm{Im}\,\phi(t)]$.

It follows immediately that (1.8) can be replaced by

(1.10)  $\mathrm{Var}[\phi_n(t)] = \frac{[\psi(th)]^2}{n} \left\{ 1 - [\mathrm{Re}\,\phi(t)]^2 - [\mathrm{Im}\,\phi(t)]^2 \right\} = \frac{[\psi(th)]^2}{n} \{1 - |\phi(t)|^2\}$.

Since $\psi(th) \to 1$ as $n \to \infty$, it follows from (1.10) that $\mathrm{Var}[\phi_n(t)] \to 0$ as $n \to \infty$. Meanwhile we know that, for any given real random variable $Y$ with absolutely continuous distribution, $E[Y^2] > \{E[Y]\}^2$ when $\mathrm{Var}[Y] > 0$.
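The estimate (1.2) is simple to compute in practice. The following is a minimal numerical sketch, not from the thesis: it uses a standard normal kernel, for which $\psi(u) = e^{-u^2/2}$, and all names (`phi_n`, the sample, the bandwidth) are invented for the example.

```python
import numpy as np

def phi_n(t, sample, h):
    """Kernel estimate (1.2): phi_n(t) = psi(t*h) * (1/n) * sum_j exp(i t X_j),
    with a standard normal kernel, so psi(u) = exp(-u**2 / 2)."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    psi = np.exp(-0.5 * (t * h) ** 2)                    # psi(th), real and even
    ecf = np.exp(1j * np.outer(t, sample)).mean(axis=1)  # empirical c.f.
    return psi * ecf

rng = np.random.default_rng(0)
x = rng.normal(size=5000)          # N(0,1) sample, so phi(t) = exp(-t**2/2)
t = np.linspace(-3.0, 3.0, 61)
err = np.max(np.abs(phi_n(t, x, h=0.1) - np.exp(-0.5 * t ** 2)))
print(err)   # small: bias is O(h**2) by (1.4), noise is O(n**-1/2) by (1.10)
```

Note that $\phi_n(0) = \psi(0) \cdot 1 = 1$ exactly, matching the remark below that $\phi_n(0) \equiv 1$.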
Similarly, when $t \ne 0$, and $\mathrm{Var}[\cos tX] > 0$, $\mathrm{Var}[\sin tX] > 0$, one gets

$|\phi(t)|^2 = [E(\cos tX)]^2 + [E(\sin tX)]^2 < E[\cos^2 tX] + E[\sin^2 tX] = 1$.

Then it follows that

(1.11)  $\frac{1}{n} > \mathrm{Var}[\phi_n(t)] > 0$.

From the above, we know that the variance of $\phi_n(t)$ satisfies

$\mathrm{Var}[\phi_n(t)] \sim \frac{1}{n}\{1 - |\phi(t)|^2\}$, for $t \ne 0$, as $n \to \infty$;

but for $t = 0$, $\phi_n(0) \equiv 1$ and $\mathrm{Var}[\phi_n(0)] = 0$.

I-3. Asymptotic Normality of $\phi_n(t)$.

From (1.9) one sees that $\mathrm{Var}[\mathrm{Re}\,\phi_n(t)] \ne \mathrm{Var}[\mathrm{Im}\,\phi_n(t)]$, and $\mathrm{Cov}[\mathrm{Re}\,\phi_n(t), \mathrm{Im}\,\phi_n(t)] \ne 0$. It follows that $\phi_n(t)$ is not distributed according to the special kind of univariate complex normal distribution of R. A. Wooding [11]. However, since $|\phi_n(t)| \le 1$, $|\mathrm{Re}\,\phi_n(t)| \le 1$ and $|\mathrm{Im}\,\phi_n(t)| \le 1$, one may expect that $\mathrm{Re}\,\phi_n(t)$ and $\mathrm{Im}\,\phi_n(t)$ will be asymptotically bivariately normally distributed.

Consider $\phi_n(t)$ as an average of iid complex random variables $\theta_{n1}, \theta_{n2}, \ldots, \theta_{nn}$ with common distribution. In notation, one writes

(1.12)  $\phi_n(t) = \frac{1}{n} \sum_{j=1}^{n} \theta_j$,

where $\theta_{nj} = \theta_j = \psi(th)\, e^{itX_j} = \psi(th)\{\cos tX_j + i \sin tX_j\}$. Each $\theta_j$, a univariate complex random variable, is considered as a bivariate random vector in $R^2$. One form of the $k$-variate central limit theorem is stated in S. S. Wilks [10], as follows: "Suppose $(X_{1j}, \ldots, X_{kj};\ j = 1, 2, \ldots, n)$ is a sample of size $n$ from a $k$-variate distribution having finite means $\mu_i$, $i = 1, 2, \ldots, k$, and (positive definite) covariance matrix $\|\sigma_{im}\|$, $i, m = 1, 2, \ldots, k$; then [the vector of sample means is asymptotically $k$-variate normally distributed]." We apply this result to the case $k = 2$, with sample means

$\mathrm{Re}\,\phi_n(t) = \frac{1}{n} \sum_{j=1}^{n} \mathrm{Re}\,\theta_j = \frac{1}{n} \sum_{j=1}^{n} [\psi(th) \cos tX_j]$, and
$\mathrm{Im}\,\phi_n(t) = \frac{1}{n} \sum_{j=1}^{n} \mathrm{Im}\,\theta_j = \frac{1}{n} \sum_{j=1}^{n} [\psi(th) \sin tX_j]$.

Let

$\psi(th)\,\mu_1 = E[\psi(th) \cos tX] = \psi(th)\,\mathrm{Re}\,\phi(t)$, and
$\psi(th)\,\mu_2 = E[\psi(th) \sin tX] = \psi(th)\,\mathrm{Im}\,\phi(t)$

be the expected means, which are both bounded by one. Let

$[\psi(th)]^2 \sigma_{11} = \mathrm{Var}[\psi(th)\cos tX] = [\psi(th)]^2 \left\{ \tfrac{1}{2}[1 + \mathrm{Re}\,\phi(2t)] - [\mathrm{Re}\,\phi(t)]^2 \right\}$,

$[\psi(th)]^2 \sigma_{22} = \mathrm{Var}[\psi(th)\sin tX] = [\psi(th)]^2 \left\{ \tfrac{1}{2}[1 - \mathrm{Re}\,\phi(2t)] - [\mathrm{Im}\,\phi(t)]^2 \right\}$,

$[\psi(th)]^2 \sigma_{21} = \mathrm{Cov}[\psi(th)\cos tX, \psi(th)\sin tX] = [\psi(th)]^2 \left\{ \tfrac{1}{2}\mathrm{Im}\,\phi(2t) - [\mathrm{Re}\,\phi(t)][\mathrm{Im}\,\phi(t)] \right\} = [\psi(th)]^2 \sigma_{12}$

be the variances and the covariance of $\mathrm{Re}\,\phi_n(t)$ and $\mathrm{Im}\,\phi_n(t)$. It is clear that all of $\sigma_{11}$, $\sigma_{22}$ and $\sigma_{21} = \sigma_{12}$ are bounded at most by 2, and that

$\det \begin{pmatrix} \sigma_{11} & \sigma_{12} \\ \sigma_{21} & \sigma_{22} \end{pmatrix} > 0$.

The mean vector and the covariance matrix will have limits

$(\psi(th)\mu_1,\ \psi(th)\mu_2) \to (\mu_1, \mu_2)$, and

$\frac{[\psi(th)]^2}{n} \begin{pmatrix} \sigma_{11} & \sigma_{12} \\ \sigma_{21} & \sigma_{22} \end{pmatrix} \to \frac{1}{n} \begin{pmatrix} \sigma_{11} & \sigma_{12} \\ \sigma_{21} & \sigma_{22} \end{pmatrix}$

as $n \to \infty$, $\psi(th) \to 1$. It follows that

$(\mathrm{Re}\,\phi_n(t), \mathrm{Im}\,\phi_n(t)) \sim N\!\left( \psi(th)\mu_i,\ \frac{[\psi(th)]^2 \sigma_{im}}{n} \right)$, for $i, m = 1, 2$.

Since $\psi(th)$ is just a number for any constant $t$, and $\psi(th) \to 1$ as $n \to \infty$, $\phi_n(t)$ is asymptotically normal.

I-4. Limits of the Bias and the Mean Square Error.

(a) The bias of $\phi_n(t)$ satisfies

(1.13)  $|b[\phi_n(t)]| = |\psi(th) - 1|\,|\phi(t)|$.

This expression shows that the bias of $\phi_n(t)$ depends on the properties of $\psi(th)$, which are in turn based on the function $k(y)$ and on $h$. Since $\psi(th)$ is real and even, $\{\psi(th) - 1\}$ is always real and negative for $t \ne 0$, and tends to zero as $n \to \infty$ and $h \to 0$. Suppose $k(y)$ has moments of all orders; then the odd moments of $k(y)$ are zero, since $k(y)$ is symmetrically distributed. Hence, for any $t$ held constant, and $n$ sufficiently large, we can put $\psi(th)$ in the following form:
(1.14)  $\psi(th) = \sum_{m=0}^{k} \frac{a_m}{m!} (ith)^m + o(th)^k$, as $th \to 0$,

where $a_m$ is the moment of order $m$ of $k(y)$, and is assumed to be finite. If $a_2$ is finite and also non-zero, (1.14) can be written as

(1.15)  $\psi(th) = 1 - \frac{a_2}{2}(th)^2 + o(th)^2$, as $h \to 0$,

since $t$ is constant and $th \to 0$ implies $h \to 0$. It follows that

(1.16)  $|b[\phi_n(t)]| = \frac{a_2}{2}(th)^2 |\phi(t)| + o(th)^2$, as $th \to 0$,

or, as $th \to 0$, $|b[\phi_n(t)]| / (th)^2 \to \frac{a_2}{2} |\phi(t)|$.

The expression (1.16) shows that for any real $t \ne 0$, the bias $|b[\phi_n(t)]| \to 0$ at the same rate as $h^2 \to 0$. In fact, (1.16) can still be true even when $t$ is not held constant, but only required to increase slowly enough that $th$ stays small or approaches zero as $n \to \infty$. But even if $t$ increases so fast that $th$ becomes large or approaches infinity as $n \to \infty$, the bias of $\phi_n(t)$ still vanishes as $n \to \infty$ and $|t| \to \infty$: given any $\varepsilon > 0$, there exists a $T_\varepsilon$ so large that $|\phi(t)| < \varepsilon/2$, and

$|b[\phi_n(t)]| = |\psi(th) - 1|\,|\phi(t)| < 2 \cdot \frac{\varepsilon}{2} = \varepsilon$

whenever $|t| > T_\varepsilon$. Hence it follows that $\lim_{n\to\infty} |b[\phi_n(t)]| = 0$ uniformly for all real $t$.

(b) Limits for the mean square error. The mean square error satisfies

(1.17)  $Q = E|\phi_n(t) - \phi(t)|^2 = \mathrm{Var}[\phi_n(t)] + |b[\phi_n(t)]|^2 = \frac{\{\psi(th)\}^2}{n}\{1 - |\phi(t)|^2\} + \{\psi(th) - 1\}^2 |\phi(t)|^2$.

When $t$ is held constant, one can, with the proper choice of $\psi(th)$, minimize the mean square error. Setting $\partial Q / \partial \psi(th) = 0$, that is,

$\frac{2}{n}\psi(th)\{1 - |\phi(t)|^2\} + 2\{\psi(th) - 1\}|\phi(t)|^2 = 0$,

one easily gets

$\psi_{\min}(th) = \frac{|\phi(t)|^2}{\frac{1}{n} + \left(1 - \frac{1}{n}\right)|\phi(t)|^2}$.

Formally, just looking at the last equation, one would say that $\psi_{\min}(th)$ is not defined when $|\phi(t)|^2 = \frac{1}{1-n}$. However, $\psi_{\min}(th) = |\phi(t)|^2$ when $n = 1$.
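The minimizing value above can be checked by brute force. The following sketch is not from the thesis; the names are invented, and it uses the equivalent rewriting $\psi_{\min} = nA / (1 + (n-1)A)$ with $A = |\phi(t)|^2$ held fixed.

```python
import numpy as np

def Q(psi, A, n):
    # Mean square error (1.17) as a function of psi, with A = |phi(t)|**2
    return psi ** 2 * (1 - A) / n + (psi - 1) ** 2 * A

n, A = 50, 0.3
grid = np.linspace(0.0, 1.0, 100_001)
psi_brute = grid[np.argmin(Q(grid, A, n))]     # grid-search minimizer
psi_closed = n * A / (1 + (n - 1) * A)         # closed form derived above
print(psi_brute, psi_closed)                   # agree to grid resolution
```

The brute-force and closed-form minimizers agree to the grid resolution, and the minimized value matches the expression $|\phi|^2(1-|\phi|^2)/(1+(n-1)|\phi|^2)$ derived in the text.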
Since $\frac{1}{1-n} < 0$ for $n = 2, 3, \ldots$, while $0 \le |\phi(t)|^2 \le 1$, the exceptional case with $|\phi(t)|^2 = \frac{1}{1-n}$ cannot arise when $n > 1$. Thus we see that $\psi_{\min}(th)$ is well defined for every whole number $n$. It is not difficult to check that $\psi_{\min}(th)$ is indeed a characteristic function (see [3]). Then

$1 - \psi_{\min}(th) = \frac{1 - |\phi(t)|^2}{1 + (n-1)|\phi(t)|^2}$.

For fixed $t \ne 0$ and $n$, a minimized mean square error, $Q_{\min}$, can be obtained. It is

$Q_{\min} = \frac{1}{n}\{\psi_{\min}(th)\}^2 \{1 - |\phi(t)|^2\} + \{\psi_{\min}(th) - 1\}^2 |\phi(t)|^2 = \frac{|\phi(t)|^2 \{1 - |\phi(t)|^2\}}{1 + (n-1)|\phi(t)|^2} \to 0$ as $n \to \infty$.

I-5. Uniform Consistency of $\phi_n(t)$ with Probability One.

To prove this uniform consistency of $\phi_n(t)$, it is enough to show that

$\lim_{n\to\infty}\ \sup_{-\infty<t<\infty} |\phi_n(t) - \phi(t)| = 0$

with probability one. An approximation $F_n(x)$ of $F(x)$, based on the empirical data, is given in [5]. It is

$F_n(x) = \int_{-\infty}^{x} f_n(u)\,du$,

where $f_n(x)$ is the kernel estimate of $f(x)$ as given in [8]. Since $k(y)$ is assumed to be a density function, $F_n(x)$ is some distribution function; in fact, $F_n(x)$ is absolutely continuous. Assume $f(x) = F'(x)$ and $f_n(x) = F_n'(x)$ are defined for every $x$. Then from E. A. Nadaraya [5] it follows that if $f(x)$ is uniformly continuous, if the sequence $h$ is such that the series $\sum_n e^{-\gamma n h^2}$ is finite for every positive value of $\gamma$, and if $k(y)$ is a function of bounded variation over $(-\infty, \infty)$, then

$\lim_{n\to\infty}\ \sup_{-\infty<x<\infty} |f_n(x) - f(x)| = 0$

with probability one. Hence $\lim_{n\to\infty} f_n(x) = f(x)$ for every $x$. The sequence $F_n(x)$ of absolutely continuous distributions converges uniformly to $F(x)$ for all $x$ as $n \to \infty$ and $h \to 0$ (see [4]).
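This uniform convergence of the kernel density estimate can be observed empirically. The following rough sketch is not from the thesis: it evaluates $\sup_x |f_n(x) - f(x)|$ on a grid only, with invented names, a normal kernel, and the convenient bandwidth $h = n^{-1/5}$, which satisfies the summability condition $\sum_n e^{-\gamma n h^2} < \infty$.

```python
import numpy as np

def f_n(grid, sample, h):
    # Kernel density estimate (1/(n h)) * sum_j k((x - X_j)/h), normal kernel
    u = (grid[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(sample) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(4)
grid = np.linspace(-4.0, 4.0, 401)
true = np.exp(-0.5 * grid ** 2) / np.sqrt(2 * np.pi)   # N(0,1) density
errs = [np.max(np.abs(f_n(grid, rng.normal(size=n), n ** -0.2) - true))
        for n in (200, 2000, 20_000)]
print(errs)   # the grid supremum error typically shrinks as n grows
```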
Hence the sequence $\{\phi_n(t)\}$ of the corresponding characteristic functions converges to $\phi(t)$ for every $t$; that is, $\lim_{n\to\infty} \phi_n(t) = \phi(t)$ for every $t$. Since $F(x)$ is absolutely continuous, we know that $\lim_{|t|\to\infty} \phi(t) = 0$. Similarly, we have $\lim_{|t|\to\infty} \phi_n(t) = 0$ for every $n$. Moreover, for any real $t$,

$|\phi_n(t) - \phi(t)| = \left| \int_{-\infty}^{\infty} e^{itx} f_n(x)\,dx - \int_{-\infty}^{\infty} e^{itx} f(x)\,dx \right| \le \int_{-\infty}^{\infty} |f_n(x) - f(x)|\,dx$.

Since the last expression is independent of $t$, it follows that

(1.18)  $\sup_{-\infty<t<\infty} |\phi_n(t) - \phi(t)| \le \int_{-\infty}^{\infty} |f_n(x) - f(x)|\,dx$.

To see the uniform consistency of $\phi_n(t)$, we only have to show that the right side of (1.18) tends to 0 as $n \to \infty$ and $h \to 0$. Let

$\{f - f_n\}^+(x) = \max\{0,\ f(x) - f_n(x)\}$, and
$\{f - f_n\}^-(x) = -\min\{0,\ f(x) - f_n(x)\}$.

Then

(1.19)  $\{f(x) - f_n(x)\} = \{f(x) - f_n(x)\}^+ - \{f(x) - f_n(x)\}^-$, and
$|f_n(x) - f(x)| = \{f(x) - f_n(x)\}^+ + \{f(x) - f_n(x)\}^-$.

Since $f(x)$ and $f_n(x)$ are densities, we have

$\int_{-\infty}^{\infty} f(x)\,dx = 1$ and $\int_{-\infty}^{\infty} f_n(x)\,dx = 1$

for every $n$. It follows that

$\int_{-\infty}^{\infty} \{f(x) - f_n(x)\}\,dx = 0 = \int_{-\infty}^{\infty} \{f(x) - f_n(x)\}^+\,dx - \int_{-\infty}^{\infty} \{f(x) - f_n(x)\}^-\,dx$,

and

(1.20)  $\int_{-\infty}^{\infty} \{f(x) - f_n(x)\}^+\,dx = \int_{-\infty}^{\infty} \{f(x) - f_n(x)\}^-\,dx$.
Since $\{f(x) - f_n(x)\}^+ \le f(x)$, and $\lim_{n\to\infty} f_n(x) = f(x)$ for every $x$, then by the Lebesgue Dominated Convergence Theorem one gets

(1.21)  $\lim_{n\to\infty} \int_{-\infty}^{\infty} \{f(x) - f_n(x)\}^+\,dx = 0$.

From (1.20),

(1.22)  $\lim_{n\to\infty} \int_{-\infty}^{\infty} \{f(x) - f_n(x)\}^-\,dx = \lim_{n\to\infty} \int_{-\infty}^{\infty} \{f(x) - f_n(x)\}^+\,dx = 0$.

Substituting (1.19) in (1.18), we have

$\sup_{-\infty<t<\infty} |\phi_n(t) - \phi(t)| \le \int_{-\infty}^{\infty} \{f(x) - f_n(x)\}^+\,dx + \int_{-\infty}^{\infty} \{f(x) - f_n(x)\}^-\,dx$.

From (1.21) and (1.22), we obtain finally that

$0 \le \lim_{n\to\infty}\ \sup_{-\infty<t<\infty} |\phi_n(t) - \phi(t)| \le \lim_{n\to\infty} \int_{-\infty}^{\infty} \{f(x) - f_n(x)\}^+\,dx + \lim_{n\to\infty} \int_{-\infty}^{\infty} \{f(x) - f_n(x)\}^-\,dx = 0$.

This shows that the convergence is uniform with respect to $t$ as $n \to \infty$. From [5], we know $P\{\lim_{n\to\infty} \sup_{-\infty<x<\infty} |f_n(x) - f(x)| = 0\} = 1$. This indicates that the probability of getting exceptional random sequences $X_1(\omega), X_2(\omega), \ldots, X_n(\omega), \ldots$ is zero. These exceptional sequences are the only ones for which it may not be true that $\lim_{n\to\infty} \sup_{-\infty<t<\infty} |\phi_n(t) - \phi(t)| = 0$. Hence $\phi_n(t) \to \phi(t)$ uniformly with probability one. We obtain the following theorems.

Theorem 1.5.1. Suppose $F(x)$ is absolutely continuous, and $f(x)$ is uniformly continuous. If $f_n(x)$ converges uniformly to $f(x)$ with probability one, then $\phi_n(t)$ converges uniformly to $\phi(t)$, the characteristic function of $F(x)$, with probability one.

Theorem 1.5.2. Suppose $\phi(t)$ and $\psi(th)$ are absolutely integrable over $(-\infty, \infty)$, for any $\delta > 0$ $k(y)$ satisfies the Lipschitz condition of order $\alpha$ ($0 < \alpha \le 1$), and $\phi_n(t)$ converges uniformly to $\phi(t)$ with probability one. Then $f_n(x)$ converges uniformly to $f(x)$, the uniformly continuous density of $F(x)$, with probability one.

Proof: Since $\phi(t)$ is absolutely integrable over $(-\infty, \infty)$, and $F(x)$ is absolutely continuous, then by the Inversion Theorem the density is

$f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-itx} \phi(t)\,dt$,

where $f(x)$ is continuous everywhere.
In fact, it is uniformly continuous. To see this, observe that we have, for any $A > 0$, $c > 0$,

(1.23)  $|f(x+c) - f(x)| \le \frac{1}{2\pi} \int_{-\infty}^{\infty} |e^{-itc} - 1|\,|\phi(t)|\,dt \le \frac{1}{\pi} \int_{|t|\le A} \left| \sin \frac{tc}{2} \right| |\phi(t)|\,dt + \frac{1}{\pi} \int_{|t|>A} \left| \sin \frac{tc}{2} \right| |\phi(t)|\,dt$.

For any $\varepsilon > 0$, we may choose $c$ sufficiently small that

$\frac{1}{\pi} \int_{|t|\le A} \left| \sin \frac{tc}{2} \right| |\phi(t)|\,dt < \frac{\varepsilon}{2}$,

and choose $A$ sufficiently large that

$\frac{1}{\pi} \int_{|t|>A} \left| \sin \frac{tc}{2} \right| |\phi(t)|\,dt \le \frac{1}{\pi} \int_{|t|>A} |\phi(t)|\,dt < \frac{\varepsilon}{2}$.

Hence $f(x)$ is continuous, and since the bounds above are independent of $x$, it is also uniformly continuous. Similarly, $\psi$ is absolutely integrable over $(-\infty, \infty)$, so by the Inversion Theorem applied to the kernel,

$k(y) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-iuy}\, \psi(u)\,du$.

It follows that

(1.24)  $f_n(x) = \frac{1}{nh} \sum_{j=1}^{n} k\!\left(\frac{x - X_j}{h}\right) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-itx} \left\{ \frac{1}{n} \sum_{j=1}^{n} e^{itX_j} \psi(th) \right\} dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-itx} \phi_n(t)\,dt$.

Since $f(x) = F'(x)$ and $f_n(x) = F_n'(x)$ are assumed to be defined everywhere, for any $\Delta > 0$ we may let

$\omega(x, \Delta) = \frac{F(x+\Delta) - F(x-\Delta)}{2\Delta}$ if $\Delta \ne 0$, and $\omega(x, 0) = f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-itx} \phi(t)\,dt$;

and similarly

$\omega_n(x, \Delta) = \frac{F_n(x+\Delta) - F_n(x-\Delta)}{2\Delta}$ if $\Delta \ne 0$, and $\omega_n(x, 0) = f_n(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-itx} \phi_n(t)\,dt$.

Now, for any real $x$, we can write

(1.25)  $|f_n(x) - f(x)| \le |f_n(x) - \omega_n(x, \Delta)| + |\omega_n(x, \Delta) - \omega(x, \Delta)| + |\omega(x, \Delta) - f(x)|$.

By the Mean Value Theorem, $\omega(x, \Delta) = f(\xi)$ and $\omega_n(x, \Delta) = f_n(\zeta)$ with $x - \Delta \le \xi, \zeta \le x + \Delta$; there is strict inequality if $\Delta \ne 0$. Consider the first term on the right side of (1.25): for $\Delta \ne 0$,

(1.26)  $|f_n(x) - \omega_n(x, \Delta)| = |f_n(x) - f_n(\zeta)| \le \frac{1}{nh} \sum_{j=1}^{n} \left| k\!\left(\frac{x - X_j}{h}\right) - k\!\left(\frac{\zeta - X_j}{h}\right) \right|$.

By hypothesis $k(y) \in \mathrm{Lip}(\alpha)$, $0 < \alpha \le 1$; that is, for $\delta > 0$,
-> 0 as For any e > 0, there e x i s t s an i n t e g e r N^ > 0 such t h a t , f o r n > N^, one has, f o r A 4 0, | f n ( x ) - " n ( x , A)| < e/3 . We assume that <j>n(t) -> <(i(t) uniformly f o r a l l t w i t h p r o b a b i l i t y one as n -> <*> . By the c o n t i n u i t y theorem, i t f o l l o w s that F n ( x ) F(x) u n i f o r m l y as n -*• 0 0. Given £ > 0, there e x i s t s an i n t e g e r N 2 > 0 not dependent on x such that IF (X) - F(x)I < e/6 whenever n > N„. One takes N = max{N 1, N„}, n 1 *• x t and f o r n > N, one has, f o r A 4 0 that |w n(x, A ) . - w(x, A)| F (x + A) - F (x - A) n 2A F(x + A) - F(x - A) 2A - 19 -F (x + A) - F(x + A) F (x - A) - F(x - A) n n 2A 2A - 2A { ' F n ( x + A ) " F ( x + A ) I + ' F n ( x " A ) " F ( x ~ A ) I } < — + — = — 6 6 3 For the last term of (1.25) |o)(x, A) - f(x)| < e/3 , when A i s chosen small enough. Hence (1.25) is |f n(x) - f(x)| < E/3 + e/3 + e/3 = e , and f (x) -* f (x) as n °° . We obtain that n <}>n(t) ->- <f>(t) ==> f n(x) -»• f(x) as n -»• « . Thus li m i t f (x) = f(x). It follows that n l i m i t n-*» 2TT e ~ I t x <f> (t) dt Tn _1 2TT e "i t x <},(t) dt for every x. It certainly holds for x = 0, so l i m i t n-*» 2^ j $n J —CO ( t ) dt = <Kt) dt , or (1.27) l i m i t j U n ( t ) - <Kt)} d t = 0 J^ ->CO . * —CO Since ^ ( t ) a n c* . •(t) a r e both complex, from (1.27) one gets 2ir I J —a .limit Re{<j>n(t) - <j)(t)} dt = 0 , l i m i t n-*=° 2T; Im{<|) (t) - (j)(t)} dt = 0 For any e > 0, and n >_ N, some constant N , J —a 2 i t ! 
$\left| \frac{1}{2\pi} \int_{-\infty}^{\infty} \mathrm{Re}\{\phi_n(t) - \phi(t)\}\,dt \right| < \frac{\varepsilon}{2}$, and $\left| \frac{1}{2\pi} \int_{-\infty}^{\infty} \mathrm{Im}\{\phi_n(t) - \phi(t)\}\,dt \right| < \frac{\varepsilon}{2}$.

As a result of the Riemann-Lebesgue Lemma, we know $\lim_{|x|\to\infty} f(x) = 0$. For any real $x$,

$|f_n(x) - f(x)| = \left| \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-itx} \{\phi_n(t) - \phi(t)\}\,dt \right| \le \frac{1}{2\pi} \int_{-\infty}^{\infty} |\phi_n(t) - \phi(t)|\,dt \le \frac{1}{2\pi} \int_{-\infty}^{\infty} |\mathrm{Re}\{\phi_n(t) - \phi(t)\}|\,dt + \frac{1}{2\pi} \int_{-\infty}^{\infty} |\mathrm{Im}\{\phi_n(t) - \phi(t)\}|\,dt$.

The last expression is independent of $x$, so

(1.28)  $\sup_{-\infty<x<\infty} |f_n(x) - f(x)| \le \frac{1}{2\pi} \int_{-\infty}^{\infty} |\mathrm{Re}\{\phi_n(t) - \phi(t)\}|\,dt + \frac{1}{2\pi} \int_{-\infty}^{\infty} |\mathrm{Im}\{\phi_n(t) - \phi(t)\}|\,dt$.

Let

(1.29)  $R_n^+(t) = \{\mathrm{Re}[\phi(t) - \phi_n(t)]\}^+$, $R_n^-(t) = \{\mathrm{Re}[\phi(t) - \phi_n(t)]\}^-$;
$I_n^+(t) = \{\mathrm{Im}[\phi(t) - \phi_n(t)]\}^+$, $I_n^-(t) = \{\mathrm{Im}[\phi(t) - \phi_n(t)]\}^-$.

It follows immediately that

(1.30)  $\mathrm{Re}[\phi(t) - \phi_n(t)] = R_n^+(t) - R_n^-(t)$, $|\mathrm{Re}[\phi(t) - \phi_n(t)]| = R_n^+(t) + R_n^-(t)$,
$\mathrm{Im}[\phi(t) - \phi_n(t)] = I_n^+(t) - I_n^-(t)$, $|\mathrm{Im}[\phi(t) - \phi_n(t)]| = I_n^+(t) + I_n^-(t)$.

From (1.29) and (1.30), one has, for any $\varepsilon > 0$ and $n \ge N$, some constant $N$, that

$\left| \frac{1}{2\pi} \int_{-\infty}^{\infty} \mathrm{Re}[\phi(t) - \phi_n(t)]\,dt \right| = \left| \frac{1}{2\pi} \int_{-\infty}^{\infty} R_n^+(t)\,dt - \frac{1}{2\pi} \int_{-\infty}^{\infty} R_n^-(t)\,dt \right| < \frac{\varepsilon}{2}$,

or

$0 \le \frac{1}{2\pi} \int_{-\infty}^{\infty} R_n^-(t)\,dt < \frac{1}{2\pi} \int_{-\infty}^{\infty} R_n^+(t)\,dt + \frac{\varepsilon}{2}$.

Similarly, for the imaginary part, one gets

$0 \le \frac{1}{2\pi} \int_{-\infty}^{\infty} I_n^-(t)\,dt < \frac{1}{2\pi} \int_{-\infty}^{\infty} I_n^+(t)\,dt + \frac{\varepsilon}{2}$;

hence (1.28) can be put into the form

$\sup_{-\infty<x<\infty} |f_n(x) - f(x)| \le \frac{1}{2\pi} \left\{ \int_{-\infty}^{\infty} R_n^+(t)\,dt + \int_{-\infty}^{\infty} R_n^-(t)\,dt + \int_{-\infty}^{\infty} I_n^+(t)\,dt + \int_{-\infty}^{\infty} I_n^-(t)\,dt \right\} \le \frac{1}{\pi} \left\{ \int_{-\infty}^{\infty} R_n^+(t)\,dt + \int_{-\infty}^{\infty} I_n^+(t)\,dt \right\} + \varepsilon$,

where $R_n^+(t) \le \mathrm{Re}\,\phi(t)$ and $I_n^+(t) \le \mathrm{Im}\,\phi(t)$ for all $t$. By the Lebesgue Dominated Convergence Theorem,

$\int_{-\infty}^{\infty} R_n^+(t)\,dt \to 0$ and $\int_{-\infty}^{\infty} I_n^+(t)\,dt \to 0$ as $n \to \infty$.

Hence

$\lim_{n\to\infty}\ \sup_{-\infty<x<\infty} |f_n(x) - f(x)| = 0$,

and this convergence is with probability one.

II. THE ESTIMATE $\phi_n^{(p)}(t)$ OF $\phi^{(p)}(t)$.

In this section, we assume that $E|X|^{2q}$ is finite for some positive integer $q$. If $r$ is any positive integer less than or equal to $2q$, then $E|X|^r$ exists and is also finite, and the characteristic function is $r$ times differentiable.
Let $0 < p \le q$; then the $p$-th derivative of $\phi(t)$, denoted by $\phi^{(p)}(t)$, can be put into the following form:

$\phi^{(p)}(t) = \frac{d^p}{dt^p}\,\phi(t) = \int_{-\infty}^{\infty} (ix)^p e^{itx} f(x)\,dx$.

Naturally, we may choose the estimate of $\phi^{(p)}(t)$ as

(2.1)  $\phi_n^{(p)}(t) = \int_{-\infty}^{\infty} (ix)^p e^{itx} f_n(x)\,dx$,

where

$f_n(x) = \frac{1}{nh} \sum_{j=1}^{n} k\!\left(\frac{x - X_j}{h}\right)$

is the kernel estimate given in [8]; $k(y)$ is some symmetric density which is assumed to have moments of all orders. This ensures that $\psi(th)$ is any number of times differentiable. Hence (2.1) can also be replaced by

(2.2)  $\phi_n^{(p)}(t) = \frac{1}{nh} \sum_{j=1}^{n} \int_{-\infty}^{\infty} (ix)^p e^{itx} k\!\left(\frac{x - X_j}{h}\right) dx = \frac{1}{n} \sum_{j=1}^{n} \sum_{\ell=0}^{p-1} \binom{p}{\ell} h^{p-\ell}\, \psi^{(p-\ell)}(th)\, (iX_j)^\ell e^{itX_j} + \frac{1}{n} \sum_{j=1}^{n} \psi(th)\,(iX_j)^p e^{itX_j}$,

where $0 < p \le q$, and

$\psi^{(p-\ell)}(th) = \int_{-\infty}^{\infty} (iy)^{p-\ell} e^{ithy} k(y)\,dy$.

For any real $t$, as $n \to \infty$ and $h \to 0$,

$\psi^{(p-\ell)}(th) \to \psi^{(p-\ell)}(0)$, which is zero if $(p-\ell)$ is odd and finite if $(p-\ell)$ is even,

since the odd moments of the symmetric density $k(y)$ vanish. It follows that, as $n \to \infty$ and $h \to 0$,

(2.3)  $\phi_n^{(p)}(t) \approx \frac{1}{n} \sum_{j=1}^{n} \left\{ h^p \psi^{(p)}(0) + \binom{p}{2} h^{p-2} \psi^{(p-2)}(0)(iX_j)^2 + \cdots + \binom{p}{p-2} h^2 \psi^{(2)}(0)(iX_j)^{p-2} \right\} e^{itX_j} + \frac{1}{n} \sum_{j=1}^{n} (iX_j)^p e^{itX_j}$, when $p$ is even;

$\phi_n^{(p)}(t) \approx \frac{1}{n} \sum_{j=1}^{n} \left\{ \binom{p}{1} h^{p-1} \psi^{(p-1)}(0)(iX_j) + \binom{p}{3} h^{p-3} \psi^{(p-3)}(0)(iX_j)^3 + \cdots + \binom{p}{p-2} h^2 \psi^{(2)}(0)(iX_j)^{p-2} \right\} e^{itX_j} + \frac{1}{n} \sum_{j=1}^{n} (iX_j)^p e^{itX_j}$, when $p$ is odd.

Whether $p$ is even or odd, one can see that the first partial sum in (2.2) becomes small and approaches zero as $n \to \infty$ and $h \to 0$.
Pe 2 V , which approaches to - £ ( i X . ) P e 2 3 ) n j = 1 J { — i as n ->• 0 0 and i(j(th) -»• 1 . I I - l . Assymptotic Unbiasedness. f ) 1 n itX. Let V ( t ) = n J ^ ( t h ) ( i X j ) P e 2 . Since the expectations of a l l terms i n (2.2) are f i n i t e , the term involving h goes to zero at the same rate as h P ^ does. The term of h° or 1 is the one involved in calculating E [°<fi^p^'Ct:)1 , Var (t) ] arid etc. |^p^ (t) is the term of h° in (2.2). For the asymptotic properties, one may just study those of <j>^p^(t) ; (t) should have the same asymptotic properties as <j>^P^(t), since ( 2 . 4 ) E { ^ P ) ( t ) - <D ( p )(t)} = E { ^ P ) ( t ) - ^ P ) ( t ) + E <t) - * ( p ) ( t ) } P-l where E r P 1 Since n -»• » , h ->• 0 , and |<f>^(t)| _< E|x|£ is f i n i t e , and since ,J,(P-*) ( t h ) + ^P-z)(0) <_ |,j, ( p~ £ )(0) | i s f i n i t e , for I = 0 , 1 , p, i t follows immediately that E j<t>£P^ (t) - (t) j -»• 0 as n -»• °° . (2.5) E | * n P ) ( t ) - <j>(p)(t)j = E ^ ^(th)(iX j) itX. , . P e J - 4>(p)(t) Kth) - 1 \ <j>(p)(t) - 0 , since, as n -»• °°, ijj(th) --v i^ (O) = .1 (in fact {^(th) - 1} = 0(h 2)) . Hence E {^^(^j -»•' <f>^P\t) as n ->• <*> . From (2.5), we know that 4>^ P^  (t) i s also asymptotically unbiased. II-2. Quadratic Consistency. The mean square error of $^ P^(t) can be written as (2.6) E | ^ p ) ( t ) - * ( p ) ( t ) | 2 - V a r [ ^ p ) ( t ) ] + | b [ ^ p ) ( t ) ] | 2 , where |b [^ p ) (t) ] | = | E { ^ p ) (t) } - <j> ( p ) (t) | and V a r U ( p ) ( t ) ] - V a r t ( i X ) P e i t X ] . n n Since E | ^ P ^ ( t ) - <f>^ (iyj^0 as n -> °° , as shown above, M * < p ) ( t ) ] | = |E{^ p ) (t)} - , < D ( p ) ( t ) | - 0 as n - - . 
For $\mathrm{Var}[\tilde{\phi}_n^{(p)}(t)]$, we need to compute the variance of $(iX)^p e^{itX}$.

(a) When $p$ is even,

(2.7)  $\mathrm{Re}\{(iX)^p e^{itX}\} = (-1)^{p/2} X^p \cos tX$, $\mathrm{Im}\{(iX)^p e^{itX}\} = (-1)^{p/2} X^p \sin tX$;

(b) when $p$ is odd,

(2.8)  $\mathrm{Re}\{(iX)^p e^{itX}\} = (-1)^{\frac{p+1}{2}} X^p \sin tX$, $\mathrm{Im}\{(iX)^p e^{itX}\} = (-1)^{\frac{p-1}{2}} X^p \cos tX$.

Hence in case (a), where $p > 0$ is even,

(2.9)  $\mathrm{Re}\,\phi^{(p)}(t) = (-1)^{p/2} E\{X^p \cos tX\}$, $\mathrm{Im}\,\phi^{(p)}(t) = (-1)^{p/2} E\{X^p \sin tX\}$,
$\mathrm{Var}[(iX)^p e^{itX}] = \mathrm{Var}[X^p \cos tX] + \mathrm{Var}[X^p \sin tX]$;

in case (b), where $p$ is odd,

$\mathrm{Re}\,\phi^{(p)}(t) = (-1)^{\frac{p+1}{2}} E\{X^p \sin tX\}$, $\mathrm{Im}\,\phi^{(p)}(t) = (-1)^{\frac{p-1}{2}} E\{X^p \cos tX\}$,
$\mathrm{Var}[(iX)^p e^{itX}] = (-1)^{p+1} \mathrm{Var}[X^p \sin tX] + (-1)^{p-1} \mathrm{Var}[X^p \cos tX]$.

Since $(p+1)$ and $(p-1)$ are both even if $p$ is odd,

(2.10)  $\mathrm{Var}[(iX)^p e^{itX}] = \mathrm{Var}[X^p \sin tX] + \mathrm{Var}[X^p \cos tX]$.

From (2.9) and (2.10), one sees that the variance of $(iX)^p e^{itX}$ is the same in either case ($p$ even or odd). For $p \le q$, note that $\phi^{(2p)}(2t) = \int_{-\infty}^{\infty} (ix)^{2p} e^{2itx} f(x)\,dx = (-1)^p E\{X^{2p} e^{2itX}\}$, so

$E\{X^{2p} \cos^2 tX\} = \frac{1}{2}\left\{ E(X^{2p}) + (-1)^p\, \mathrm{Re}\,\phi^{(2p)}(2t) \right\}$,
$E\{X^{2p} \sin^2 tX\} = \frac{1}{2}\left\{ E(X^{2p}) - (-1)^p\, \mathrm{Re}\,\phi^{(2p)}(2t) \right\}$,
$E\{X^{2p} \sin tX \cos tX\} = \frac{(-1)^p}{2}\, \mathrm{Im}\,\phi^{(2p)}(2t)$.

Thus

(2.11)  $\mathrm{Var}[X^p \cos tX] = \frac{1}{2}\left\{ E(X^{2p}) + (-1)^p \mathrm{Re}\,\phi^{(2p)}(2t) \right\} - \{E(X^p \cos tX)\}^2$,
$\mathrm{Var}[X^p \sin tX] = \frac{1}{2}\left\{ E(X^{2p}) - (-1)^p \mathrm{Re}\,\phi^{(2p)}(2t) \right\} - \{E(X^p \sin tX)\}^2$,
$\mathrm{Cov}[X^p \cos tX, X^p \sin tX] = \frac{(-1)^p}{2} \mathrm{Im}\,\phi^{(2p)}(2t) - E\{X^p \cos tX\}\, E\{X^p \sin tX\}$.

From the above computations, and since $\{E(X^p \cos tX)\}^2 + \{E(X^p \sin tX)\}^2 = |\phi^{(p)}(t)|^2$ by (2.9), we obtain

(2.12)  $\mathrm{Var}[(iX)^p e^{itX}] = E|X|^{2p} - |\phi^{(p)}(t)|^2$,

where $E|X|^{2p}$ and $|\phi^{(p)}(t)|^2$ are both finite. The variance of $\tilde{\phi}_n^{(p)}(t)$, i.e.

$\mathrm{Var}[\tilde{\phi}_n^{(p)}(t)] = \frac{[\psi(th)]^2}{n}\, \mathrm{Var}[(iX)^p e^{itX}] = \frac{[\psi(th)]^2}{n} \left\{ E|X|^{2p} - |\phi^{(p)}(t)|^2 \right\}$,

approaches zero as $n \to \infty$ and $\psi(th) \to \psi(0) = 1$. It follows that $E|\tilde{\phi}_n^{(p)}(t) - \phi^{(p)}(t)|^2 \to 0$ as $n \to \infty$, so $\tilde{\phi}_n^{(p)}(t)$ is quadratically consistent.
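Identity (2.12) can be checked by Monte Carlo. This sketch is not from the thesis (names invented); it takes $p = 1$ and standard normal $X$, for which $E|X|^2 = 1$ and $\phi'(t) = -t\,e^{-t^2/2}$:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=200_000)
t, p = 1.3, 1
z = (1j * x) ** p * np.exp(1j * t * x)              # (iX)^p e^{itX}
sample_var = np.var(z.real) + np.var(z.imag)        # Var of a complex r.v.
predicted = 1.0 - (t * np.exp(-0.5 * t ** 2)) ** 2  # E X^2 - |phi'(t)|^2
print(sample_var, predicted)                        # close for large samples
```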
To see that $\phi_n^{(p)}(t)$ is quadratically consistent, we need to show $E|\phi_n^{(p)}(t) - \phi^{(p)}(t)|^2 \to 0$ as $n \to \infty$. Now,

(2.13)  $E|\phi_n^{(p)}(t) - \phi^{(p)}(t)|^2 \le E|\phi_n^{(p)}(t) - \tilde{\phi}_n^{(p)}(t)|^2 + E|\tilde{\phi}_n^{(p)}(t) - \phi^{(p)}(t)|^2$,

and

$E|\phi_n^{(p)}(t) - \tilde{\phi}_n^{(p)}(t)|^2 = \mathrm{Var}[\phi_n^{(p)}(t) - \tilde{\phi}_n^{(p)}(t)] + |E\{\phi_n^{(p)}(t) - \tilde{\phi}_n^{(p)}(t)\}|^2$.

Here we have

$\phi_n^{(p)}(t) - \tilde{\phi}_n^{(p)}(t) = \frac{1}{n} \sum_{j=1}^{n} \sum_{\ell=0}^{p-1} \binom{p}{\ell} h^{p-\ell}\, \psi^{(p-\ell)}(th)\, (iX_j)^\ell e^{itX_j}$.

Then

$E\{\phi_n^{(p)}(t) - \tilde{\phi}_n^{(p)}(t)\} = \sum_{\ell=0}^{p-1} \binom{p}{\ell} h^{p-\ell}\, \psi^{(p-\ell)}(th)\, \phi^{(\ell)}(t) = o(h)$,

so

(2.14)  $|E\{\phi_n^{(p)}(t) - \tilde{\phi}_n^{(p)}(t)\}|^2 = o(h^2)$,

as $n \to \infty$ and $h \to 0$. For the variance of $[\phi_n^{(p)}(t) - \tilde{\phi}_n^{(p)}(t)]$, we have

(2.15)  $\mathrm{Var}[\phi_n^{(p)}(t) - \tilde{\phi}_n^{(p)}(t)] = \frac{1}{n^2} \mathrm{Var}\left[ \sum_{j=1}^{n} \sum_{\ell=0}^{p-1} \binom{p}{\ell} h^{p-\ell}\, \psi^{(p-\ell)}(th)\, (iX_j)^\ell e^{itX_j} \right]$.

Since $X_1, X_2, \ldots, X_n$ are independent identically distributed, (2.15) can be written as

$\mathrm{Var}[\phi_n^{(p)}(t) - \tilde{\phi}_n^{(p)}(t)] = \frac{1}{n} \mathrm{Var}\left[ \sum_{\ell=0}^{p-1} \binom{p}{\ell} h^{p-\ell}\, \psi^{(p-\ell)}(th)\, (iX)^\ell e^{itX} \right] = o(h^2)$ as $n \to \infty$, $h \to 0$.

The exact calculations will be given in the following section II-3. As a result,

$E|\phi_n^{(p)}(t) - \tilde{\phi}_n^{(p)}(t)|^2 = o(h^2)$ as $n \to \infty$, $h \to 0$.

Hence

$E|\phi_n^{(p)}(t) - \phi^{(p)}(t)|^2 = \frac{1}{n}\left\{ E|X|^{2p} - |\phi^{(p)}(t)|^2 \right\} + o(h^2) \to 0$

as $n \to \infty$ and $h \to 0$. Clearly $\phi_n^{(p)}(t)$ is also quadratically consistent.

II-3. Asymptotic Normality of $\phi_n^{(p)}(t)$.

Consider $\tilde{\phi}_n^{(p)}(t)$ as the average of the independent identically distributed complex random variables $Z_{n1}, Z_{n2}, \ldots, Z_{nn}$; that is,

$\tilde{\phi}_n^{(p)}(t) = \frac{1}{n} \sum_{j=1}^{n} Z_{nj}$, where $Z_{nj} = Z_j = (iX_j)^p\, \psi(th)\, e^{itX_j}$, for $j = 1, 2, \ldots, n$.

Treat each complex random variable as a two-dimensional random vector, namely

$(-1)^{p/2} \left( X_j^p\, \psi(th) \cos tX_j,\ X_j^p\, \psi(th) \sin tX_j \right)$, if $p$ is even;

and

$(-1)^{\frac{p+1}{2}} \left( X_j^p\, \psi(th) \sin tX_j,\ -X_j^p\, \psi(th) \cos tX_j \right)$, if $p$ is odd.

Hence, in the case $p$ is even,

(2.16)  $\psi^2(th)\,\sigma_{11} = \psi^2(th)\, \mathrm{Var}[X^p \cos tX] \to \sigma_{11}$,
j ; 2 ( t h ) a 2 2 = ijj 2(th) V a r [ X P s i n tX] + a 2 2 , (2.16) . i |> 2 ( th)a 1 2 = ip 2 (th) Cov[X P cos tX , X P s i n tX] -> a 1 2 , ip(th) v1 = i{)(th) E{X P cos tX} -> y 1 , _i|)(th) y 2 = T};j(th> E{X P s i n tXj -^.y^ „, as n -> « . In the c a s e , p i s odd, ^ ( t + O S ^ = ip (th) V a r [ X P s i n tX] a 2 2 , 4 , 2 ( t h)5 2 2 = ^ 2 ( t h ) V a r [ X P cos tX] * o , (2.17) . ^ ( t h ) 5 1 2 = (-D (th) Cov[X P s i n t X , X P cos tX] -v -a2± , P+1 p+1 tj;(th) {I = (-1) 2 ip(th) E{X P s i n tX} ->• (-1) 2 y 2 , ij>(th) y 2 = (-D 2 i K t h ) E{X P cos tX} -*• (-1) 2 u x , as n -* A l l y^, y 2 > c f ^ , a 2 2 and a ^ 2 are f i n i t e . Then the covariance m a t r i c e s , - 32 -ifr2(th) n ( cr 11 ° 1 2 ^ ' 2 1 u 2 2 1 % — n 11 " 1 2 a i 9 1 21 22 , when p is even; i n the case, when p is odd, ^ 2 ( th ) < CT11 CT12 a 2 1 ° 2 2 n r a 2 2 -a 2 1 , - 0 1 2 a l l as n ->• 0 0 .. Clearly <f>nP^(t) i s asymptotically normal. Similarly, we consider <£^P^ (t) as an average of the independent identically distributed complex variables, '"' ^ nn' s u c ^ a s <j,(p)(t) = i I .co . ., n n ni 3 = 1 awhere co 113 £=0 P itX. shp^p-*Wtix..)^ j ( P ) for j = 1 , 2 , n. We want to show that ^ Ct) is also asymptotically normal. For each co . i s considered i n a two-dimensional random vector, nj such as (Re u ., Im u .) . nj ' nj The expectations of Re <J>^(t) and Im <(/ P^(t), when p is even n n ( 2 . 1 8 ) (a) Re co .} = E{Re co } = V a0 Re <f> '(t) , njI n . u n £ E-f lm <j>(p)(t)l = E-f- Y Im co A = E{Im co } = \ a Im <f>(£)(t) , I n J t n -5=1 n^J n £=Q where a - [ P |h p % ( p _ £ ) ( t h ) , and a ^ tp(th) . C £ J P From ( 2 . 1 6 ) , we can put - 3 3 -p-1 £=0 Re <|) a )(t) = Hth)Re <fr ( p )(t) + £ a £ Re <j> ( 0(t) , £ = 0 p-1 and for a Re <|> ( £ ) (t) = £. £=0 Z £ = 0 p-1 r P 1 h p - y p - ° ( t h ) R e ^ ( 4 ) ( t ) = Q ( h ) as n -»- °° . 
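The claim that the lower-order Leibniz terms contribute only $o(h)$ can be illustrated numerically by evaluating the bias sum $\sum_{\ell=0}^{p-1}\binom{p}{\ell} h^{p-\ell}\psi^{(p-\ell)}(th)\,\phi^{(\ell)}(t)$ as $h \to 0$. The sketch below takes $p = 2$ with the illustrative, assumed choices $\psi(u) = e^{-u^2/2}$ and $\phi(t) = e^{-t^2/2}$ (a Gaussian transform and a standard normal characteristic function, not prescribed by the thesis); because a symmetric $\psi$ has $\psi'(0) = 0$, the sum is in fact $O(h^2)$, so the ratio to $h$ vanishes.

```python
import math

def bias_sum(h, t=0.5, p=2):
    """Sum over l = 0..p-1 of C(p,l) h^(p-l) psi^(p-l)(th) phi^(l)(t),
    with psi(u) = phi(u) = exp(-u^2/2) (an illustrative assumption)."""
    psi1 = lambda u: -u * math.exp(-u * u / 2)           # psi'
    psi2 = lambda u: (u * u - 1) * math.exp(-u * u / 2)  # psi''
    phi0 = math.exp(-t * t / 2)                          # phi
    phi1 = -t * math.exp(-t * t / 2)                     # phi'
    # For p = 2: the l = 0 term uses psi'' and phi, the l = 1 term psi' and phi'.
    return (math.comb(p, 0) * h ** 2 * psi2(t * h) * phi0
            + math.comb(p, 1) * h * psi1(t * h) * phi1)

# The bias divided by h shrinks with h, i.e. the sum is o(h).
for h in (0.4, 0.2, 0.1, 0.05):
    print(h, bias_sum(h) / h)
```

Halving $h$ roughly halves `bias_sum(h) / h`, consistent with the $O(h^2)$ behavior expected when $\psi$ is symmetric; for an asymmetric $\psi$ the $o(h)$ statement would instead rest on whatever conditions the thesis places on $\psi$.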
It follows immediately from (2.16) and (2.18) that

$$E\{\mathrm{Re}\,\hat\phi_n^{(p)}(t)\} = E\{\mathrm{Re}\,\tilde\phi_n^{(p)}(t)\} + o(h), \qquad E\{\mathrm{Im}\,\hat\phi_n^{(p)}(t)\} = E\{\mathrm{Im}\,\tilde\phi_n^{(p)}(t)\} + o(h),$$

as $n \to \infty$. The variances of $\mathrm{Re}\,\hat\phi_n^{(p)}(t)$ and $\mathrm{Im}\,\hat\phi_n^{(p)}(t)$ are

$$\mathrm{Var}[\mathrm{Re}\,\hat\phi_n^{(p)}(t)] = \frac1n\,\mathrm{Var}\Big[\sum_{\ell=0}^{p} a_\ell U_\ell\Big], \qquad \mathrm{Var}[\mathrm{Im}\,\hat\phi_n^{(p)}(t)] = \frac1n\,\mathrm{Var}\Big[\sum_{\ell=0}^{p} a_\ell V_\ell\Big],$$

where $U_\ell = \mathrm{Re}\,(iX)^\ell e^{itX}$ and $V_\ell = \mathrm{Im}\,(iX)^\ell e^{itX}$ for $\ell = 0, 1, 2, \ldots, p$. Since

$$\Big(\sum_{\ell=0}^{p} a_\ell U_\ell\Big)^2 = \sum_{\ell=0}^{p} a_\ell^2 U_\ell^2 + 2\sum_{0\le\ell<m\le p} a_\ell a_m U_\ell U_m,$$

and similarly with $U_\ell$, $U_m$ replaced by $V_\ell$, $V_m$, we have

$$\mathrm{Var}[\mathrm{Re}\,\hat\phi_n^{(p)}(t)] = \frac1n\Big\{\sum_{\ell=0}^{p} a_\ell^2\,\mathrm{Var}[U_\ell] + 2\sum_{0\le\ell<m\le p} a_\ell a_m\,\mathrm{Cov}[U_\ell, U_m]\Big\}$$

$$= \frac1n\Big\{a_p^2\,\mathrm{Var}[U_p] + \sum_{\ell=0}^{p-1} a_\ell^2\,\mathrm{Var}[U_\ell] + 2\sum_{0\le\ell<m\le p} a_\ell a_m\,\mathrm{Cov}[U_\ell, U_m]\Big\},$$

where, for $p$ even,

$$a_p^2\,\mathrm{Var}[U_p] = \psi^2(th)\,\mathrm{Var}[X^p\cos tX],$$

$$\sum_{\ell=0}^{p-1} a_\ell^2\,\mathrm{Var}[U_\ell] = \sum_{\ell=0}^{p-1} \binom{p}{\ell}^2 h^{2(p-\ell)}\,\big[\psi^{(p-\ell)}(th)\big]^2\,\mathrm{Var}[U_\ell] = o(h^2) \quad \text{as } n \to \infty,$$

$$2\sum_{0\le\ell<m\le p} a_\ell a_m\,\mathrm{Cov}[U_\ell, U_m] = 2\sum_{0\le\ell<m\le p} \binom{p}{\ell}\binom{p}{m} h^{2p-(\ell+m)}\,\psi^{(p-\ell)}(th)\,\psi^{(p-m)}(th)\,\mathrm{Cov}[U_\ell, U_m] = o(h)$$

as $n \to \infty$. Finally, we have, as $n \to \infty$,

$$\mathrm{Var}[\mathrm{Re}\,\hat\phi_n^{(p)}(t)] = \frac1n\Big\{\psi^2(th)\,\mathrm{Var}[X^p\cos tX] + o(h)\Big\} = \mathrm{Var}[\mathrm{Re}\,\tilde\phi_n^{(p)}(t)] + o(h).$$

Similarly, the variance of $\mathrm{Im}\,\hat\phi_n^{(p)}(t)$ is

$$\mathrm{Var}[\mathrm{Im}\,\hat\phi_n^{(p)}(t)] = \mathrm{Var}[\mathrm{Im}\,\tilde\phi_n^{(p)}(t)] + o(h).$$

For the covariance of $\mathrm{Re}\,\hat\phi_n^{(p)}(t)$ and $\mathrm{Im}\,\hat\phi_n^{(p)}(t)$,

$$\mathrm{Cov}[\mathrm{Re}\,\hat\phi_n^{(p)}(t),\ \mathrm{Im}\,\hat\phi_n^{(p)}(t)] = \frac1n\,\mathrm{Cov}\Big[\sum_{\ell=0}^{p} a_\ell U_\ell,\ \sum_{\ell=0}^{p} a_\ell V_\ell\Big]$$

$$= \frac1n\Big\{a_p^2\,\mathrm{Cov}[U_p, V_p] + \sum_{\ell=0}^{p-1} a_\ell^2\,\mathrm{Cov}[U_\ell, V_\ell] + \sum_{\ell\ne m} a_\ell a_m\,\mathrm{Cov}[U_\ell, V_m]\Big\},$$

where

$$a_p^2\,\mathrm{Cov}[U_p, V_p] = \psi^2(th)\,\mathrm{Cov}[X^p\cos tX,\ X^p\sin tX],$$

$$\sum_{\ell=0}^{p-1} a_\ell^2\,\mathrm{Cov}[U_\ell, V_\ell] = o(h^2), \qquad \sum_{\ell\ne m} a_\ell a_m\,\mathrm{Cov}[U_\ell, V_m] = o(h),$$

as $n \to \infty$. It follows that, as $n \to \infty$,

$$\mathrm{Cov}[\mathrm{Re}\,\hat\phi_n^{(p)}(t),\ \mathrm{Im}\,\hat\phi_n^{(p)}(t)] = \mathrm{Cov}[\mathrm{Re}\,\tilde\phi_n^{(p)}(t),\ \mathrm{Im}\,\tilde\phi_n^{(p)}(t)] + o(h).$$

The covariance matrix of $(\mathrm{Re}\,\hat\phi_n^{(p)}(t),\ \mathrm{Im}\,\hat\phi_n^{(p)}(t))$ therefore differs from that of $\tilde\phi_n^{(p)}(t)$ only by $o(h)$ terms, and hence converges to the same limit:

$$\frac1n\begin{pmatrix} \sigma_{11} & \sigma_{12} \\ \sigma_{21} & \sigma_{22} \end{pmatrix} \quad \text{when } p \text{ is even}; \qquad \frac1n\begin{pmatrix} \sigma_{22} & -\sigma_{21} \\ -\sigma_{12} & \sigma_{11} \end{pmatrix} \quad \text{when } p \text{ is odd},$$

as $n \to \infty$. Since $\tilde\phi_n^{(p)}(t)$ is asymptotically normal, so is $\hat\phi_n^{(p)}(t)$.

II-4. Uniform Consistency of $\hat\phi_n^{(p)}(t)$ with Probability One

The absolute moment of order $q$ of $F(x)$ is defined as

$$E|X|^q = \int_{-\infty}^{\infty} |x|^q f(x)\,dx, \qquad q > 0,$$

and is assumed to be finite. We approximate $E|X|^q$ by

$$E_n|X|^q = \int_{-\infty}^{\infty} |x|^q\,dF_n(x) = \int_{-\infty}^{\infty} |x|^q f_n(x)\,dx. \tag{2.19}$$

Assume that $\sup_n E_n|X|^q = M < \infty$ for some constant $M$. Since $F_n(x) \to F(x)$ uniformly for all $x$ as $n \to \infty$, then according to Theorem 4.5.2 in Chung's book [1], for each $r < q$,

$$\lim_{n\to\infty} E_n|X|^r = E|X|^r.$$

In this section, we choose only $p < q$; then

$$\lim_{n\to\infty} E_n|X|^p = E|X|^p, \tag{2.20}$$

or

$$\lim_{n\to\infty} \int_{-\infty}^{\infty} |x|^p\,dF_n(x) = \lim_{n\to\infty} \int_{-\infty}^{\infty} |x|^p f_n(x)\,dx = \int_{-\infty}^{\infty} |x|^p\,dF(x) = \int_{-\infty}^{\infty} |x|^p f(x)\,dx. \tag{2.21}$$

In order to see that $\hat\phi_n^{(p)}(t) \to \phi^{(p)}(t)$ uniformly for all $t$ with probability one as $n \to \infty$, we need to show that $|x|^p f_n(x) \to |x|^p f(x)$ uniformly for all $x$ with probability one. Let $E|X|^p = M_o$ and $E_n|X|^p = M_n$ for every $n$; then $\lim_{n} M_n = M_o$ for some non-zero constants $M_n$ and $M_o$. Consider the density function

$$g(x) = \frac{|x|^p f(x)}{E|X|^p}$$

and its estimate

$$g_n(x) = \frac{|x|^p f_n(x)}{M_n} = \frac{1}{M_n\,nh} \sum_{j=1}^n |x|^p\, k\Big(\frac{x - X_j}{h}\Big).$$

They are continuous everywhere. Let $|x|^p f(x)$ be uniformly continuous over $(-\infty, \infty)$, and let $|x|^p\,k\big(\frac{x - X_j}{h}\big)$ be of bounded variation over $(-\infty, \infty)$. Clearly, $g(x)$ is uniformly continuous. We claim the following Lemma 2.4.1.
Suppose $|x|^p\,k\big(\frac{x - u}{h}\big)$ is of bounded variation in $x$ over $(-\infty, \infty)$, that $|x|^p f(x)$ is uniformly continuous, and that the series $\sum_{n=1}^{\infty} e^{-\gamma n h^2}$ converges for every positive value of $\gamma$. Then

$$\lim_{n\to\infty}\ \sup_{-\infty<x<\infty} |g_n(x) - g(x)| = 0 \tag{2.22}$$

with probability one.

Proof: With the above assumptions, the proof follows from a theorem of E. A. Nadaraya [6].

Now, with $\lim_n M_n = M_o$, one can easily obtain that

$$\lim_{n\to\infty}\ \sup_{-\infty<x<\infty} |x|^p\,|f_n(x) - f(x)| = 0 \tag{2.23}$$

with probability one.

Theorem 2.4.1. Suppose each $F_n(x)$ and $F(x)$ are absolutely continuous. Then, for $0 < p < q$, $\lim_{n\to\infty} \sup_{-\infty<x<\infty} |x|^p\,|f_n(x) - f(x)| = 0$ with probability one implies $\lim_{n\to\infty} \sup_{-\infty<t<\infty} |\hat\phi_n^{(p)}(t) - \phi^{(p)}(t)| = 0$ with probability one.

Proof: For any real $t$, we have

$$|\hat\phi_n^{(p)}(t) - \phi^{(p)}(t)| = \Big|\int_{-\infty}^{\infty} (ix)^p e^{itx}\,\{f_n(x) - f(x)\}\,dx\Big| \le \int_{-\infty}^{\infty} |x|^p\,|f_n(x) - f(x)|\,dx;$$

this expression on the right is independent of $t$, so

$$\sup_{-\infty<t<\infty} |\hat\phi_n^{(p)}(t) - \phi^{(p)}(t)| \le \int_{-\infty}^{\infty} |x|^p\,|f_n(x) - f(x)|\,dx. \tag{2.24}$$

We use the same method as given in Section I. Let

$$D_n^+(x) = \max\big\{|x|^p\,(f(x) - f_n(x)),\ 0\big\}, \qquad D_n^-(x) = \max\big\{-|x|^p\,(f(x) - f_n(x)),\ 0\big\}.$$

Clearly, $D_n^+(x) \le |x|^p f(x)$ for every $x$.
By Lebesgue's Dominated Convergence Theorem, we have

$$\lim_{n\to\infty} \int_{-\infty}^{\infty} D_n^+(x)\,dx = 0,$$

and by (2.21),

$$\lim_{n\to\infty} \int_{-\infty}^{\infty} |x|^p\,\{f(x) - f_n(x)\}\,dx = 0 = \lim_{n\to\infty} \int_{-\infty}^{\infty} \{D_n^+ - D_n^-\}(x)\,dx.$$

Hence

$$\lim_{n\to\infty} \int_{-\infty}^{\infty} D_n^+(x)\,dx = \lim_{n\to\infty} \int_{-\infty}^{\infty} D_n^-(x)\,dx = 0.$$

Finally, (2.24) can be put as

$$\sup_{-\infty<t<\infty} |\hat\phi_n^{(p)}(t) - \phi^{(p)}(t)| \le \int_{-\infty}^{\infty} |x|^p\,|f_n(x) - f(x)|\,dx = \int_{-\infty}^{\infty} D_n^+(x)\,dx + \int_{-\infty}^{\infty} D_n^-(x)\,dx \to 0$$

as $n \to \infty$. This shows that the convergence is uniform for all $t$, and furthermore, it is with probability one as $n \to \infty$.

BIBLIOGRAPHY

[1] Chung, Kai-Lai (1968), "A Course in Probability Theory", Harcourt, Brace and World, Inc., New York.

[2] Kawata, Tatsuo (1972), "Fourier Analysis in Probability Theory", Academic Press, New York and London.

[3] Lukacs, Eugene (1970), "Characteristic Functions", 2nd Edition, Griffin, London.

[4] Medgyessy, Pál (1961), "Decomposition of Superpositions of Distribution Functions", Publishing House of the Hungarian Academy of Sciences, Budapest.

[5] Nadaraya, E.A. (1964), "Some New Estimates for Distribution Functions", Theory of Probability and its Applications 9, pp. 497-500.

[6] Nadaraya, E.A. (1965), "On Non-parametric Estimates of Density Functions and Regression Curves", Theory of Probability and its Applications 10, pp. 186-190.

[7] Parzen, Emanuel (1962), "On Estimation of a Probability Density Function and Mode", Ann. Math. Statist. 33, pp. 1065-1076.

[8] Rosenblatt, Murray (1956), "Remarks on Some Non-parametric Estimates of a Density Function", Ann. Math. Statist. 27, pp. 832-837.

[9] Schuster, Eugene F. (1969), "Estimation of a Probability Density Function and its Derivatives", Ann. Math. Statist. 40, No. 4, pp. 1187-1195.

[10] Wilks, Samuel S. (1962), "Mathematical Statistics", John Wiley and Sons, Inc., New York.

[11] Wooding, R.A. (1956), "The Multivariate Distribution of Complex Normal Variables", Biometrika 43, pp. 212-215.
