ESTIMATING OF THE I N T E N S I T Y FUNCTION THE NONSTATIONARY POISSON PROCESS by D A V I D WILSON .Sc., University A THESIS THE of Science FYNN and Technology SUBMITTED IN PARTIAL REQUIREMENTS MASTER Ghana, FULFILLMENT 19 OF FOR T H E D E G R E E OF OF SCIENCE in THE (INSTITUTE We OF A P P L I E D accept to THE DEPARTMENT this OF MATHEMATICS thesis the required UNIVERSITY MATHEMATICS as STATISTICS) conforming standard OF B R I T I S H August, AND 1976 (c) D a v i d W i l s o n Fynn COLUMBIA In p r e s e n t i n g t h i s t h e s i s in p a r t i a l f u 1 f i l m e n t o f the requirements f o r an advanced degree at the U n i v e r s i t y of B r i t i s h Columbia, I agree that the L i b r a r y s h a l l make i t f r e e l y a v a i l a b l e f o r r e f e r e n c e and I f u r t h e r agree t h a t p e r m i s s i o n for e x t e n s i v e copying of t h i s study. thesis f o r s c h o l a r l y purposes may be g r a n t e d by the Head of my Department or by h i s representatives. It i s understood that copying o r p u b l i c a t i o n o f t h i s t h e s i s f o r f i n a n c i a l g a i n s h a l l not be a l l o w e d without my written permission. Department o f The U n i v e r s i t y o f B r i t i s h Columbia 2075 W e s b r o o k P l a c e V a n c o u v e r , Canada V6T 1W5 i ABSTRACT Let'{N(t), intensity -T<t<T} be a n o n s t a t i o n a r y P o i s s o n p r o c e s s X ( t ) > 0 , assumed i n t e g r a b l e function, o n [-T,T] . optimal l i n e a r estimator, X , of the i n t e n s i t y function Li sidered with The i s con- i n this thesis. Chapter 1 d i s c u s s e s X as a f u n c t i o n L of h ( t ; s ) , which i s the unique s o l u t i o n o f the Fredholm i n t e g r a l equation second k i n d , •> m ( s ) h . (s) + / K ( s ; u ) h . ( u ) d u = K ( t ; s ) , t a t a<s<b. b Chapters of the 2 and 3 a r e r e s p e c t i v e l y devoted to a discussion o f some o f t h e e x a c t a n d a p p r o x i m a t e m e t h o d s f o r s o l v i n g t h e above i n t e g r a l e q u a t i o n . To illustrate numerical oilwell the use o f t h e techniques examples a r e t r e a t e d . discoveries t o d a t a on t r a f f i c Finally, counts also Britain. presented. i n C h a p t e r 5, on t h e L i o n s Gate B r i d g e , V a n c o u v e r , and t o d a t a on c o a l - m i n i n g i n Great three C h a p t e r 4 d e a l s w i t h d a t a on i n A l b e r t a , Canada. the model i s a p p l i e d devised, disasters C o m p u t e r p r o g r a m s a n d numerous d i a g r a m s a r e i i TABLE CHAPTER CHAPTER CONTENTS 1 ON T H E E S T I M A T I O N OF 1.0 INTRODUCTION 1.1 OPTIMAL LINEAR ESTIMATORS THE I N T E N S I T Y FUNCTION THE INTENSITY FUNCTION 1 1.2 THE B E S T L I N E A R E S T I M A T O R 2 SURVEY OF OF E X A C T METHODS FOR FREDHOLM INTEGRAL EQUATIONS 2 5 SOLVING (OF T Y P E I I ) 2.0 SUMMARY 2.1 FINITE 2.2.1 Hadamard's I n e q u a l i t y 14 2.2.2 Hilbert 14 2.3 THE C L A S S I C A L 2.4 HILBERT-SCHMIDT, AND CHAPTER OF 9 D I F F E R E N C E APPROXIMATIONS Spaces GRANDELL FREDHOLM TECHNIQUES 12 16 KARHUNEN-LOEVE TYPE SOLUTIONS 2.5 GRANDELL 2.6 SOME OTHER 3 A 3.1 INTRODUCTION 46 3.2 QUADRATURE 46 3.3 G E N E R A L I Z E D QUADRATURE 52 3.4 C O L L O C A T I O N METHOD 54 3.5 ERROR A N A L Y S I S 58 3.6 SUMMARY 61 SURVEY TYPE SOLUTION 26 METHODS OF A P P R O X I M A T E AND 36 RULES CONCLUSION 41 METHODS i i i CHAPTER 4 COMPARISON: APPROXIMATE 4.1 CASE O U T L I N E OF W H I T T L E ' S D E R I V A T I O N OF CHAPTER AN E X A C T V E R S U S AN SOLUTION IN A S P E C I A L (1.2.6) . 63 4.2 THE EXACT SOLUTION 65 4.3 AN 69 4.4 COMPUTATION: 5 APPLICATIONS APPROXIMATE O I L WELLS 5.1 INTRODUCTION 5.2 E S T I M A T I O N OF AT SOLUTION TRAFFIC DISCOVERY DATA 71 * 73 DENSITIES THE L I O N S G A T E B R I D G E 5.3 COAL-MINING 5.4 CONCLUDING DISASTERS 79 REMARKS 82 BIBLIOGRAPHY APPENDIX 75 93 COMPUTER PROGRAMS AND OUTPUT 1 THE OILWELL DISCOVERY PROCESS 95 2 THE. L I O N S G A T E B R I D G E P R O C E S S 104 3 THE COAL-MINING 112 D I S A S T E R PROCESS iv ACKNOWLEDGEMENT I would like t o express for g u i d a n c e and numerous the manuscript. Dr. S. W. For expert Hardy Nash events. comments acknowledge like grateful my Zidek t o W. of indebtedness to f o rhelpful t o thank Aseefa and encouragement t o D r . J i m V. throughout the writing of d i f f i c u l t material, I am a l s o I would understanding gratitude a n d D r . J . M. V a r a h typing Bunn. Finally, I also my communication. I am m o s t Samaradasa Merali i n the face grateful to a n d R. Brun. f o r patience, o f many series of CHAPTER 'ON 1 THE E S T I M A T I O N OF THE OF THE N O N S T A T I O N A R Y INTRODUCTION, INTENSITY FUNCTION POISSON PROCESS: PRELIMINARIES INTRODUCTION In sity this chapter we consider function of a nonstationary formulation given by' C l e v e n s o n linear Poisson and Zidek estimators process. ( 7 ) , we of the intenUsing consider the a point process { N ( t ) , - T $ t $ T } , 0<T^°° with independent increments and w i t h p[N(b)-N(a)=n]=^[A(a,b)] exp[-A R (a,b) ] where A(a,b)=/ A(t)dt, a b and the intensity integrable form. on [-T,TJ. For details consider function linear we -T$a<b$T, A(t)>0, i s assumed t o be Riemann The unknown A ( t ) d o e s n o t h a v e refer estimators to Grandell parametric ( 1 0 ) . I n s e c t i o n 1 . 1 , we of the intensity function, A(t), i n 2 general, and the and two natural moving-average. imposed and A ( t ) , the particular form. a integral Fredholm solution of estimator 1.1 OPTIMUM Let LINEAR as of square integrable tation with respect of an A(t), i s determined in of Clevenson (7), second i n order OF THE assumed Clevenson the is of to kind and Zidek i s obtained; obtain INTENSITY nonstationary the best a the linear process i n t e g r a b l e on [-T,T]. the and (7 Zidek [-T,TJ. A, FUNCTION Poisson Following joint estimator, approach ), A(t) Denoting distribution i s measured by of N of with We seek Grandell i s assumed E the and to expec- A, the by 2 the the present estimate record leads estimators to with . however, and work as well as i s constrained The numerical histogram tage, a A(t)>0, to criterion form. f u n c t i o n on counting the given the Grandell's histogram T In tice of ESTIMATORS by work the E/^ [A(t)-A(t)J dt (1.1.1) above, the {A(t);-T$t$T}. extended performance estimator i s required i n the in particular, s e c t i o n 1.2, equation function, estimate (10), best {N ( t ) ,-T^t$T.} b e intensity an (t) In Following which A estimators, motivation that are i s that moving to for be Also, average, f o r many a this relatively algorithms. the i n the references linear are function of the constraint i s that i t easy two cited to well compute known linear. a p p l i c a t i o n s of A in prac- estimators, disadvan- interest, linear be 3 estimates consider Here, the the In of may major must (1.1.2) be sufficiently estimation problem difficulty general, A(t) for not by have the the (1.1.3) For example, a(t)=0 this is analytic wish linearity to constraint. intractability. the estimate A(t) function a(t) let and h(t,s)=r' 0, t otherwise. r < s < t choice, This moving-average, )/r. r is a process. Such advantage of an histogram estimate requiring beyond that needed common practice to evaluate times; for values. to no f i t an used the of about assumed the the intensity time estimate m u l t i p l e s of time intensity i t has averaging moving-average integral the because knowledge select the instance, at least-squares curve estimate i s widely almost process sampled may h(t,s)d(N(s)) x(t)=pf£_ dN(s)= (N(t)-N(t-r) to the constraint, (1.1.3) crete One form some d e t e r m i n i s t i c an without linearity A(t)=a(t)+ As accurate. r, function to r. It is at dis- and then the 4 Clevenson is representable sense by the function DEFINITION: wide-sense a Zidek (7) assumed A prior condition that stationary stochastic process covariance is and with not In (1.1.4) where The s t o c h a s t i c process independent (7) s The Clevenson {x ( t ) ;-°°<t <°°} of t and i s said y and to be f u n c t i o n m ( t ) =E-{x ( t ) } i t s covariance and Zidek function function only define X (t)=[N(t )-N(t _ )] [ 2 A ] H of (t-s) i i =T i 1 i s any estimator, - 1 the , estimator t _ <t<t , ± partition A histogram 1 of , i s defined i [-T,T] a n d 2A^=t^-t^_ by A ( t ) = [ N ( t + A ) - N ( t - A ) ] [2 A ] " , 1 M -T+ A <t optimal optimal <T-A. class here. width window w i d t h also obtained. details mean wide- separately. moving-average provided were and -T=tp<t^<•..<t (1.1.5) the t constant A(t) K. K ( t , s ) = E { x ( t ) x ( s ) } - E { x ( t ) }E{x ( s ) } i s a and about {A(t);-T<t<T} i s a s t a t i o n a r y i f i t s mean-value constant knowledge We do f o r the f o r the not, histogram moving-average however, intend estimator and estimator to discuss the THE 1.2 It i s at performance mators the can least intuitively superior be X to the designed intensity Let BEST LINEAR process denote (1.1.2) when b o t h error is the consider of Grandell's and histogram about the esti- statistics of available. linear and estimate h(s,t) E[(A (t)- A (t)) i s given follows the technique. L H e r e , we X(t) of are that selected to results minimize in the by ]. This linear Grandell (10). Clevenson-Zidek T h u s we seek an minimum The (7) mean- method generalization estimator of the form T drop the assumptions s t a t i o n a r y and The problem now that in section m(t)=EX(t)Ey. i s to determine the 1.1 We to minimize take resulting these w i l l optimal i n turn choices are, minimize Zidek, say, show, a f t e r wide- M(s) ^g (t)dt. = functions (t)-X (t)] a (t) 0 (1.1.1). is m a(t) and 2 f u n c t i o n a l E [x^ the X(t) that ~ •h(t;.) having X (t)=a(t)+/^ h(t,s)d(N(s)-M(s)). (1.2.1) sense here estimators " 2 estimate we that moving-average r square evident i f more k n o w l e d g e a(t) mean-square e r r o r ESTIMATOR and . If h°(t;.), Grandell, and then subsequently Clevenson and (1.2.2) E [ X ( t ) - X ( t ) ] = ( a ( t ) - m ( t ) ) + ( h ( t ; .) , h ( t ; .) ) - 2 L ( h ( t ; . ) ) 2 L some m a n i p u l a t i o n , the that 2 t 6 where (.,.) (1.2.3) for i s defined ( x ( . ) ,y a l lfunctions nonnegative for T (1.2.4) Assume that T b y L x (. ) =/_ f c (x,x)<«>. product. K(.,.) i s The linear x (s) K ( t ; s ) d s , I t i s clear 1 from (1.2.2) that the f o ra(t)i s a°(t)=m(t) that T (x,x)<»; a n d b e c a u s e (x,x)<°° i s a n i n n e r i s defined choice T x,y f o r which a l lx such t h a t optimal of (.))=/^ x(s)y(s)m(s)ds+/^ /^ x(s)y(s)K(s,u)dsdu definite, functional by fora l l t. 0<infm(s) s and that i s a continuous linear m and K a r e bounded, functional i t follows on t h e H i l b e r t space functions H={x:(x,x)<co}. It i n turn follows by t h e R i e s z L x ( . ) = (x(.) t for a l lxeH where (1.2.5) g representation theorem that ,g (.)) t (.)eH. h°(t;s)=g. ( s ) , Thus the optimal choice f o r h, 7 is the unique solution i s the w e l l kind which occurs such The equation equation, work and this the linear are given covariance (1.2.6). and numerical and of the second a p p l i e d mathetheory. studied extensively problem for f o r e x a m p l e , H. (1.2.6) methods as well as observations Van Trees approximate to design main concern approximation emphasizing functions are thus i s our i n subsequent I t i s perhaps worth equation been filtering (see, of Information (1.2.6) has to equation exact problem mean and equation (19), 6). solution REMARK: the i n C o m m u n i c a t i o n and with 4 and integral f r e q u e n t l y i n many a r e a s contain additive noise The this known F r e d h o l m integral chapters for as connection that integral T This in the x(s)m(s)+/2 x(u)K(s,u)du=K(t;s). (1.2.6) matics i n H of s o l u t i o n s the techniques chapters. that theoretically needed the in to solve the estimator. data only integral However, i n hand is for very useful. In the special (1.2.7) case sequel (e.g., i n chapter where m(t)=u, and K(t;s)= K(t-s). 4 ) , we shall consider the 8 Thus (1.2.6) (1.2.8) It becomes , y x ( s ) + / j X ( u ) K ( s - u ) d u = K ( t - s ) , -T<s<T. f o l l o w s t h a t o u r l i n e a r e s t i m a t o r now takes the s p e c i a l form i (1.2.9) A (t)=y+/^ H(t;s)d(N(s)-ys), L o b t a i n e d from T (1.2.1) by s e t t i n g a ( t ) = m ( t ) = y . i n which our model w i l l and 5. 0 This i s the form be u s e d i n t h e a p p l i c a t i o n s i n c h a p t e r s 4 9 CHAPTER 2 SURVEY OF E X A C T METHODS FOR S O L V I N G FREDHOLM I N T E G R A L EQUATIONS (OF T Y P E I I ) SUMMARY This methods kind), chapter f o r solving theory some t h e o r e t i c a l and methodology ( o f t h e second background, The n e c e s s a r y background a n d some a s p e c t s sented. Subsequently, on s u i t a b l e of Hilbert a l l integral Hilbert 1. be g i v e n underlying these sketched a n d a number o f By means o f f o rthedifferences i n methods and t h e i r i n linear space algebra w i l l theory w i l l operators w i l l invesbe be p r e - be viewed as spaces. INTRODUCTION An function integral appears Integral a Equations a r e a l s o d i s c u s s e d and compared. some m o t i v a t i o n w i l l tigation. acting Integral T h e b a s i c m e t h o d s a r e t r e a t e d i n some d e t a i l , a n d developments examples, t o a d i s c u s s i o n o f some o f t h e e x a c t Fredholm together with applications. recent i s devoted equation under equations number o f y e a r s , The actual i s an e q u a t i o n the integral originally t h e unknown sign. have been encountered i n the theory development o f t h e theory however, o n l y i n which i n mathematics f o r of Fourier of integral Integrals. equations a t t h e end o f t h e n i n e t e e n t h century began, due t o t h e 10 works in of the Italian the year published let 1900, when have From been V. V o l t e r r a , t h e Swedish h i s famous work problem*. tions mathematician then mathematician o n a new m e t h o d of research principally Ivar of solving o n , up t o t h e p r e s e n t , the subject and Fredholm the Dirich- integral f o r numerous equa- mathema- ticians. The many theory of integral different areas applied mathematics tions. T o make impossible. applied equations It o f mathematics. c a n be s t a t e d a list Suffice mathematics linear integral vector spaces, o f such applications and m a t h e m a t i c a l a Indeed, i n t h e form i t to say that do n o t p l a y i s worth e q u a t i o n s has c l o s e there contacts with many problems of integral would be i s almost physics where of equa- almost no a r e a o f integral role. mentioning, equations at this stage, the fundamental that i n dealing concepts e i g e n v a l u e s and e i g e n f u n c t i o n s play of a with linear significant role. *"'Sur u n e n o u v e l l e m e t h o d e p o u r l a r e s o l u t i o n d u p r o b l e m e d e Dirichlet". O f v e r s , a f Kungl. Vetensk. Akad. FOrh., Stockholm, "57, n r . 1 (10 J a n . 1 9 0 0 ) , 39-46. * " S u r une c l a s s e d ' e q u a t i o n s f o n c t i o n n e l l e s " . S t o c k h o l m , 27 ( 1 9 0 3 ) , 3 6 5 - 3 9 0 . Acta Mathematica, 11 The m o s t f r e q u e n t l y studied i n t e g r a l equations arethe following: / K(t,s)f(s)ds (2.1.1) b 3. f ( t ) + X / K ( t , s ) f ( s ) d s =g(t) (2.1.2) b (2.1.3) m(t)f(t) + X / K ( t / s ) f (s)ds b The a b o v e e q u a t i o n s a r e g e n e r a l l y tions of the f i r s t , interval second, known a s F r e d h o l m e q u a - (a,b) may i n g e n e r a l be a f i n i t e 00 may d i v i d e 00 The f u n c t i o n K ( . , . ) , w h i c h We n o t e t h a t equations i s generally known as t h e k e r n e l , b 1 stated 1 2 f ( . ) enters the earlier, (s)] d s = C / K ( t , s ) f ( s ) d s + C f K ( t , s ) f b 2 and o f t e n b 1 the equations many s i t u a t i o n s ; i n s t a t i s t i c a l symmetric the function A l l t h e above i n a l i n e a r manner s o t h a t / K(t,s) [c f (s)+C f As we i t t o (2.1.2). g ( . ) a r e assumed known a n d f ( . ) i s s o u g h t . that.is, The i n t e r v a l o r (-°°,b], are f i n i t e . (2.1.3) by m ( t ) t o r e d u c e equations are l i n e a r ; -g(t) and t h i r d k i n d , r e s p e c t i v e l y . [a,°°), o r (- , ) , w h e r e a a n d b and =g(t) 2 J (2.1.1)-(2.1.3) problems t h e k e r n e l s a l s o nonnegative definite. 2 (s)ds. arise i n are usually 12 In the next results, Hilbert cal theory, Section We including Slepian shall techniques shall consider (7) t y p e discuss will (18), Although appropriate and mathematical background o f The i n Section Hilbert-Schmidt, Karhunen- s o l u t i o n s and t h e r e s u l t i n g some o t h e r methods we not only are dealing with stage consider because of great insight 2.1 space into finite finite practical the nature exact m e t h o d s , we difference difference utility, of integral find i t approxima- approximations but also provide equations. theory. (2.2.1) i n the (19) , TOOLS F I N I T E DIFFERENCE APPROXIMATIONS If, expan- i n the literature s t a t e H a d a m a r d ' s i n e q u a l i t y a s w e l l a s some a s p e c t s Hilbert classi- others. to briefly at this certain some be c o n s i d e r e d SOME U S E F U L M A T H E M A T I C A L tions state t h e d e t e c t i o n t h e o r e t i c approach by Vantrees 2. also 4 we and G r a n d e l l sions. briefly as u s e f u l t o o l s i n t h e sequel. Fredholm expansion Loeve, shall i n c l u d i n g a d i s c u s s i o n of the necessary space In are s e c t i o n we equation f ( t ) - X / j K ( t , s ) f (s)ds = g ( t ) We of shall 3. 13 we r e p l a c e t h e i n t e g r a l by a s u i t a b l e n , f (t)-A L _ ^ K ( t ,£)f (i) i=l (2.2.2) for l a r g e n and a c o n t i n u o u s the sum at n discrete (2.2.3) we i n (2.2.1). (2.2.4) we evaluate =g(l), ^ n K (2.2.3) w h i c h may (I-AH)F where t h e i - j t h equation J be r e w r i t t e n i n m a t r i x form = G element i n the matrix H i s (—)K(i,—), n n n f (^-) and G has components To s o l v e (2.2.4) we invert the matrix and and F i s g {^) . find F=(I-AH) G. - 1 The above i n v e r s e w i l l most only (2.2.1) by t h e a l g e b r a i c components at (2.2.2) j=l,2,...,n, a vector with (2.2.5) to the points n , . . f(i)-A^ i (i,-)f(i) n 4-—= n n n n i=l J and c o n t i n u o u s f ( t ) , a c l o s e approximation I f , furthermore, have r e p l a c e d t h e i n t e g r a l system = g(t) kernel K(t,s) i n (2.2.2) r e p r e s e n t s integral sum: n values. determinantal exist f o r a l l A, w i t h These a r e the r o o t s o f the equation j the exception of characteristic 14 In which (2.2.5) no 0. we solution eigenvalues. (2.2.3) = I-AH (2.2.6) We see exists. also note n e c e s s a r i l y - has equation (2.2.1) need 2.2.1 there THEOREM state this 2.2.1 upper Let estimate H be values that every eigenvalues, not as may Such have special are commonly finite even values of X for known algebraic though the as system integral eigenvalues. Hadairiard's We An that Inequality a theorem. be a matrix with for i t s determinant the general is given element h... ID by (2.2.7) 2.2.2 Hilbert So f a r we question of Basically, the it what of equation additional should purposes Hilbert have we not mean by course, to an a i t will spaces. to a a prove addressed solution solution identity. restrictions belong really Spaces on But the be of any often, class convenient an of to the as equation. must however, such to integral equation solution, particular to of ourselves we reduce may demanding functions. work impose i n the For that these so-called 15 DEFINITION product inner 2.3.1 space A linear i f an i n n e r p r o d u c t product assigns number d e n o t e d by (f,g) w i t h (af+0g,h) = a(f,h)+3(g,h) 3. (f,f)£0 a n d ( f , f ) = 0 i f and o n l y We l e t (f,f) DEFINITION Cauchy an i n t e g r a l to an element If (2.2.1) Then H i s s a i d sequence converges of an complex of integral operator equations, i s fundamental. satisfies (1) ( f , f ) = space (f,f). a n d i^-^ t o be a H i l b e r t t o an element f i n H a new e l e m e n t , the operator 0. i t t h e Norm o f f . L e t H be an i n n e r p r o d u c t s e q u e n c e i n H. In the study i ff = s i n c e by p r o p e r t y | | f | | and c a l l 2.3.2 Cauchy Such = (g,f) (f,f) i s real, = inner the following properties: 2. that on i t . p a i r f and g i n X a (f,g) note we t o every t o be an i s defined 1. We every space X i s s a i d we a space i f i n H. find that the notion Such an o p e r a t o r assigns s a y K f i n H. the condition K ( a f + 6 g ) = ctKf+BKg say that K i s a l i n e a r operator i s operator. An example o f a linear 16 Kf where may = /jK(t,s)f(s)ds, f eL2[0,l] o r may Consider L [ a , b ] , where 2 Such space an operator H. the interval [ a , b ] may be If / / |K(t,s)| dtds a a' b b 2 1 then i s continuous. n o t be d e f i n e d on t h e whole THEOREM 2 . 3 . 1 infinite. and K ( t , s ) the operator Kf = f = M <~ 2 K(t,s)f(s)ds i s bounded. a The stage i s now s e t f o r o u r m a i n concern i n the sections ahead. 2.3 THE C L A S S I C A L FREDHOLM To the begin Fredholm (2.3.1) with with, i n order meter in consider the solution o f equa- b (2.3.1) three shall f(t). = g ( t ) + A / K ( t , s ) f ( s ) d s a a Riemann A. t o f i x o u r i d e a s , we equation integral i n a given F r e d h o l m was t h e f i r s t tion TECHNIQUES i n the general The r e s u l t s person form to give (a,b). f o r a l l values o f Fredholm's theorems which interval of the para- investigations are contained a r e among t h e m o s t i m p o r t a n t and b e a u t i f u l 17 mathematical in discoveries. replacing the integral this equation t o a system The method u s e d accordance interval c i — t i / tn r 1 and with of linear —ID f •••/ t h. 1 1 parts infinity. by t h e p o i n t s -™t• l + l l equation to and l e t t i n g t h e 1 n replace equations of F r e d h o l m s m e t h o d , we p a r t i t i o n t h e (a,b), i n t o n e q u a l 2 consisted i n ( 2 . 3 . 1 ) b y a sum, t h e r e d u c t i o n n u m b e r o f t e r m s o f t h e sum t e n d In by Fredholm n (2.3.1) by n (2.3.2) f (t)=g(t)+Ah5 K ( t , t . ) f (t. ) . i=l 1 Let f ( t . ) = f . , K(t.,t.)=K..; I I ' J I J t h e n we may I 1 equations as (2.3.3) (1-AhK, , ) f,-AhK, _ f 11 1 12 2 -AhK The form teristic rewrite the system of . .-AhK. f ' = g ( t , ) i n n ^ 1 , - A h K „-...+ ( l - A h K ) f =g ( t .) . nlf1 n2 nn n ^ n 1 4 = solutions f , 1 the 1 of ratios , 2 f f o f (2.3.3) c a n be e x p r e s s e d i n n of c e r t a i n determinants determinant b y t h e common charac- 18 l-xhK (2.3.4) -AhK D (A) = n - 1 1 An that this expansion 1 2 1- AhK nn i s not equal t o zero. (-Ah) K i j L + YZ1 i»j=l 21 K . . K .. Di 33 K. 1 (-Ah) n n l K. 1 i l f i , • • - i 2 n t h e sake K(t ,S ) 1 1 . 1 1 K. . 1 2 • 2 1 K. . 2 2 " 1 1 X 1 n . l , 1 K. l i„ n 2 K(t ,S ) 1 2 ... The and K(t,s) K(t ,S ) determinant t h e above also . K, in 1 n, K(t ,S ) 1 n K 1 i~i | 2 n of simplicity, l e t (2.3.6) n K 1 : K(t ,S ) K. . 1 n ' X 1 K. l For i n t h e form K. . K. . n (2.3.5) +...+ In - AhK„ 2n 22 o f ( 2 . 3 , 4 ) may b e e x p r e s s e d = 1-Ah> i=l n -AhK K -AhK ~ n2 determinant n D (A) n 1-Ahk 2 1 •AhK , nl provided A symbol n 2 ... (2.3.6) i s taken l # t 2 * ' " n\ t 1 2 n K(t ,S ) n i s called n Fredholm's determinant, t o be d e f i n e d f o r e v e r y i n a m u l t i - d i m e n s i o n a l domain. kernel 19 The if any fundamental property pair of transposed, Using the arguments the value the the lower (2.3.6), we may write that, sequence the is sign. the expansion i n form (2.3.7) (-Ah) D ( A ) = l - A h Yi~= l K ( t " , t ") + n i ,(-Ah) Now sum suppose , (2.3.7) Thus arises (2.3.8) h+0 and 2 shown by this \ l 1/ rJ of the terms double, t r i p l e integral, of etc. a b -' 1 \s a ds,ds 2 s / 1 # 2 2 /S-^,S2,S2i / a (s ,S ,S ) K Fredholm, f o r every value vein b 2T f f converge Vt. , t ./ ^ n->-°°, t h e n e a c h b ~JT l l d s l d S 2 on the basis of 2 d s 3 + o f Hadamard's theorem, A. 1 series 3 i the Fredholm s a c o n v e r g e n t power form y i ' 3 D(A)=l-A/ K(s,s)ds+A / / K was K series 3 In Ii , j = l + single, a which 1 1 K t e n d s t o some the It. , t ft • , t . , t \ ^ r\ 3 that n 2 21 ± i,D,r=l in or the d e t e r m i n a n t changes n to determinant i s 1 i n the upper of symbol of Fredholm s function (Fredholm's D(A) first may be series) expanded of the 20 (2.3.9) D(A)=l+5 i=l ^ 1' ^ 2 (-A) K , i ! .S l f S ' " " * ' ^ 1 S . l ds ds ...ds 2 1 2 i i n a n a r b i t r a r y d o m a i n Q. We (2.3.10) now s e e k a s o l u t i o n o f t h e form f (t)=g(t)+A j N ( t , s , X ) g ( s ) d s where t h e r e s o l v e n t k e r n e l N ( t , s , X ) i s t h e p r o d u c t (2.3.11) N(t,s,X)=D(t,s,X D(X) ' D ( t , s , X ) i s t h e sum of a c e r t a i n j_ CO (2.3.12) sequence, D(t,s,X)=C (t,s)+') i=l ( Q ~* C (t,s) } i and (2.3.13) C (t,s)= *'s S ± f i f S This l e a d s to the Fredholm (2.3.14) i series. > ( •, r w h i c h has t h e same c o n v e r g e n c e d S l ds ...d 2 S i series: - D(t,s,X)=K(t s)+5 i=l S.J 2 x ' I ... ) / t , S , . . . ,S K J n 1 \S,S, , • • . ,S, p r o p e r t i e s as Fredholm's I ds ...ds 1 first 21 We a r e now theorems. THEOREM 2.3.1 Fredholm Fredholm's that equation o f the second the functions kind, under t h e g ( t ) and K ( t , s ) a r e i n t e g r a b l e , D(A)=|=0 a u n i q u e solution, which i s of the (2.3.10). THEOREM the the three x i n the case form to state - assumption has i n a position 2.3.2- If A homogeneous (2.3.15) Q i s a zero q of D(x), of multiplicity then equation f(t)=A /K(t,s)f(s)ds 0 n possesses at least one, and a t most q, l i n e a r l y independent .solutions j "^2. ^2' " " * ' ^ j — 1 f ** " ' ^ i f '. for not j=l , 2 , . . . , i ; identically nation zero, of these l^i$q, and any o t h e r solutions. S _. solution D. d e n o t e s i s a linear a Fredholm minor combiof I order i relative THEOREM tion the 2.3.3 solutions equation K(t,s). F o r t h e nonhomogeneous i n the case given to the kernel D(AQ)=0, function ^^(t) equation to possess i t i s n e c e s s a r y and s u f f i c i e n t a that g ( t ) be o r t h o g o n a l t o a l l t h e c h a r a c t e r i s t i c (j =l , 2 , . . . , i ) of the associated homogeneous c o r r e s p o n d i n g t o t h e e i g e n v a l u e AQ, and f o r m i n g t h e fundamental system. solu- The g e n e r a l solution then has t h e form 22 (2.3.16) f(t)=g(t) + A „ I 1 D +) g(s)ds r \s ,...,s p 1 p p C.(j)..(t), where $•(t)= J J 3 L products S r^. f ^ r q Fredholm's Equation V7e c a l l - l' t D pl'"-' i-l' i' i+l' j=i 2.3.2 q V with the kernel K(t,s) r ,s. .t_ , \ r \^*1' ' Degenerate Kernel d e g e n e r a t e i f i t i s t h e sum o f o f f u n c t i o n s o f one v a r i a b l e n (2.3.17) K(t,s) = > N (t)L (S) ± i i+1 Substituting (2.3.17) into f(t)=g(t)+AJK(t,s)f(s)ds we n o t i c e g(t) immediately that and o f a c e r t a i n a solution linear i s t h e sum o f t h e f u n c t i o n combination o f t h e f u n c t i o n s IsL ( t ) : n f(t)=g(t)+2 (2.3.18) A N (t), i i A i being constant i=l In order to determine expression (2.3.18) this example: by an EXAMPLE 2.3.1 the constants i n the integral S u p p o s e we are given f ( t ) = g ( t ) + X / J ( t + s ) f (s)-ds A^, we s u b s t i t u t e equation. We the integral illustrate equation The solution should have t h e form •f ( t ) = g ( t ) + A t + A , 1 2 w h e n c e we h a v e t h e i d e n t i t y g (t) + A t + A = g ( t ) + A / J (t+s) [g (s) + A 1 and 2 the system o f A x S+A ]ds 2 equations ( 1 - ^ - A ) - A A= A / J g ( s ) d s , 2 —jA-^ A+A ( l — ^ A ) 2 H e n c e we o b t a i n A equation ; L and A 1 = A / J s g (s) d s 2 and t h e s o l u t i o n of the given i n t h e form f (t)=g(t) 2A/l 6AtS 3(2-A)(t l) 2A + + U provided roots 12-12A-A + of the equation 2 hence X =-6 + 4 / 3 , A =-6-4/3 2 + g ( s ) d g Z A i s n o t one o f t h e e i g e n v a l u e s . 12-12A-A =0 and .integral The e i g e n v a l u e s a r e The corresponding characteristic •f (t)=c(x t+i-|x ), 1 where C i s an a r b i t r a r y REMARK The Fredholm important role tions, since f (t)=c(x t+i-|x ) 1 1 solutions are 2 2 2 constant. equation i n the theory i tcan e a s i l y with degenerate and a p p l i c a t i o n s kernel p l a y s an of integral be s o l v e d by a f i n i t e equa- number o f integrations. 2.3.3 Existence Equations Suppose the that Fredholm of Solutions. o f t h e second the range kind have of integration Alternative Either: The F r e d h o l m Alternative an e x i s t e n c e i s finite, theory: then we have as f o l l o w s : I f X i s a regular value, then f ( t ) =g ( t ) + X / K (t', s) f ( s ) d s h a s a u n i q u e a b the equation solution f o r any arbitrary g(t); Or: If X i s a characteristic value, then t h e homogeneous equation (2.3.19) has <j>i f (t)=X/ K(t,s)f(s)ds a a finite b number ( t ) , . . . , <t>^ ( t ) . (p, say) o f l i n e a r l y In t h i s case independent the transposed solutions homogeneous equation 25 (2.3.20) ¥(t)=A/ K(t,s)¥(s)ds b cl also has p s o l u t i o n s - ^ ( t ) V ( t ) ; -and t h e n o n h o m o g e n e o u s P equation theV^,; has a s o l u t i o n i f and o n l y that i fg(t) i s orthogonal to a l l i s , i f and o n l y i f • f k g ( t ) V. ( t ) d t = 0 , a i=l,...,p.' 1 This any solution i s clearly linear combinations Stated the i n another homogeneous not unique, s i n c e we o f t h e ^ ' s , and o b t a i n form, equation (2.3.19) has the unique t h e nonhomogeneous e q u a t i o n (and t h e s o l u t i o n must e v i d e n t l y be u n i q u e ) . implies Before we observation REMARK and so f a r found integral turn t o the next regarding wearisome numerical, i s solvable solvability analysis. apart s o l u t i o n f=0, forarbitrary In other g words, t h e above. of eigenvalues, only I f s e c t i o n we m a k e t h e f o l l o w i n g The F r e d h o l m c l a s s i c a l distribution solution. existence. concerning information another the Fredholm A l t e r n a t i v e says: then uniqueness can add t oi t techniques yield of equations but often Consequently require rather and e x i s t e n c e rather Fredholm's equations. and tedious formulae a few a p p l i c a t i o n s , e i t h e r a n a l y t i c a l from p r o v i d i n g a foundation precise have or f o r the theory of 26 2.4 H I L B E R T - S C H M I D T , KARHUNEN-LOEVE, AND GRANDELL T Y P E S O L U T I O N S 2.4.1 Hilbert-Schmidt In called the f i r s t selfadjoint compact regarding their expansion theorems. integral operators. As the part of this before, we shall s e c t i o n we operators eigenvalues, These Theory and o b t a i n eigenfunctions results define with information and t h e a s s o c i a t e d a r e then be c o n c e r n e d some the so- a p p l i e d t o compact integral equations of type f(t)=g(t)+X/ K(t,s)f(s)ds (2.4.1) b in the Hilbert to belong to L |a,b] 2 integrable, (2.4.2) space The and t h e k e r n e l function g(t) will will be assumed be assumed t o be square- / / |K(t,s)I dtds<«. a a b b 2 2.4.1 H. L e t K be a bounded, Hilbert space if the sequence that 2 so t h a t DEFINITION from L [a,bJ. i s a Cauchy {f„}, i n ' H . Then K will { K f } we n sequence, be said linear operator t o be a compact can extract a f o r any u n i f o r m l y on a operator subsequence {Kf } n bounded R sequence, 27 Now l e t {<j>^}'be a n o r t h o n o r m a l s e t i n H, a n d l e t n K where n f = K f / i y ( f , < > i ' * i n-1,2,... ) {y^} i s t h e sequence o f e i g e n v a l u e s o f K o r d e r e d p y have t h e f o l l o w i n g THEOREM 2 . 4 . 1 } i s a eigenvalues, these convergent sequence so - ' 4 We thus results: i n H. Then f c a n be r e p - Q i ± i s a suitable, element i n the nullspace of K L e t {<j>^} b e t h e c o r r e s p o n d i n g w i t h K, a n d s u p p o s e H i s L ~ [ a , b ] , 4) l i m ,b ,-b Pa al ; K ( t 2 ' -/_ s , W *.(t)*.(s: dtds=0 i . i = l that K(t,s)=^) y ( | ) ( t ) (j) (s) ± i i i=l converges i n t h e mean ( i n t h e sense of (i.e., eigenvectors then n (2 i n H. accumulate (f , * ) * +f o THEOREM 2.4.2 ciated n f L e t f be an element f=> f K o f nonzero i n t h e form (2.4.3) where that 1> t h e o r i g i n , and ( resented such l t> I 2 " Then i f K h a s an i n f i n i t y at t (2.4.4)) asso- 28 It then (2.4.5) follows / / |K(t,s)| dtds=\ b We b note that 2 oo sum <• L_ i n t h e above y. i s vital, need 1 n o t be i=l We kernel state v 2 square-integrable the that 2 ± results since the fact for arbitrary K(t,s) compact i s operators finite. . also recall that operators f o r which a r e r e f e r r e d t o as H i l b e r t - S c h m i d t the that K(t,s) i s an operators. We now important HILBERT-SCHMIDT THEOREM 2.4.3 Every function f(t) of the form (2.4.6) f(t)= is almost everywhere to the orthonormal metric kernel The (2.4.7) above /K(t,s)h(s)ds t h e sum system o f i t sF o u r i e r series with <j>^(t) o f e i g e n f u n c t i o n s respect o f t h e sym- K. theorem implies f(t) = \f.<j>.(t) / 1 that converges, 1 i=l where the c o e f f i c i e n t s function f(t) with f ^ are the Fourier respect t o the system coefficients (<|>^(t)}, of the that i s (2.4.8) f , = / f ( t ) <j>. ( t ) d fi 1 and t ^ i 1 the h^ are the Fourier with respect (2.4.9) • to coefficients t o the system of the given function h (<|>^(t) }: h = / h ( t ) <t> (t) d t i i Consequently, t h e sum the function of i t s absolutely f ( t ) i s almost and u n i f o r m l y everywhere convergent equal Fourier series: oo (2.4.10) 2.4.2 f (t)= > Application of the Hilbert-Schmidt Using a series the Hilbert-Schmidt expansion (2.4.11) with Theorem, of the solution Theorem i t i s possible to obtain f ( t ) of the i n t e g r a l equation f (t)=g(t) + A/K(t,s) f (s)ds respect metric the ^ij). (t) . t o the system kernel K(t,s). eigenvalues, of eigenfunctions Assuming A=|=A^, t h e r e that exists (<j>^(t) ) o f a A i s not equal a unique sym- t o any o f solution f(t) in L (n). 2 Specifically equation (2.4.11) oo (2.4.12) f ( t ) ' g A ( t l - \ ~ C L / <!>• ( t ) 1 1 may be expanded i n the form 30 w h i c h may term be to substituted i n (2.4.11) and integrated term by obtain oo X C. (1-X ) (j> . ( t ) = T K ( t , s ) g ( s ) d s ) (2.4.13) x=l But, (2.4.14) according to the Hilbert-Schmidt J"K(t,s)g(s)ds=\ t h e o r e m , we have again T^-jtt), i=l where From the are the Fourier (2.4.13) and X (2.4.15) i g i i t follows that ' (t)=0. ± i i=l Multiplying because (2.4.14) c o e f f i c i e n t s of the function g(t) both i sides i n turn of the orthogonality (2.4.16) we €. (1-r—)-Y^=0 1 by < | > , <J> , . . . a n d i n t e g r a t i n g , 1 2 obtain f o r every i . 1 i C.= i X . -X l g Consequently Substituting the required an absolutely these expansion and values i n the series ( 2 . 4 . 1 2 ) , we of the s o l u t i o n of equation uniformly convergent functions of the (2.4.17) f (t)=g(t)-xy~ -^-<() (t) series kernel: x i=l i (X=|=X ) i obtain (2.4.11) i n terms of as eigen- 31 REMARK I f A were e q u a l t o one o f t h e e i g e n v a l u e s A . « = < = A-. . ~ n w i t h r a n k pp=t=2 r - ' p +p q + q==il * A i4 S + = x w i t h r q, then e q u a t i o n n } * * p = = (2.4.16) w o u l d A p + 1 be § a t i § f i § S i f §nd o n l y i f = / g C t ) <j, (2.4.18) g Condition (2.4.18) p + i (t) dt=0 p+i ( i = 0 , 1 , 2 , . . . , q-1) i s i n accordance Theorem and i th a s t o be e x t e n d e d corresponding series the to the value a s many t i m e s solution A p w i t h the Fredholm Third t o a l l thee i g e n f u n c t i o n s which i s r e p e a t e d i n t h e above a s t h e number o f i t s r a n k . In that case takes t h e form oo * f (t)=g(t)-A^) X^Y7 i=l (2.4.19) ( | > i ( t ) + C l , f , p ( t ) + x + C 2*p+l ( t ) + ••• + C q * P + q - l ( t ) * where the > denotes that v a l u e s o f i e q u a l t o p, p+1, X p - X p+l~ p+2 X where q i s t h e rank By vent i n t h e s u m m a t i o n we h a v e e x c l u d e d a l l _ X p+q-l' of that eigenvalue. a s i m i l a r m e t h o d we may kernel (2.4.20) '•• p+q-1, f o r w h i c h find the expansion o f ther e s N u s i n g t h e i n t e g r a l e q u a t i o n s a t i s f i e d b y N: N(t,s, A)=K(t,s)+A/K(t>y)N(y,s A)dy. / We obtain in this case (2.4.21) 4> (t)=0, i i=l whence, it as follows before, account of the orthogonality (<j>^(t)}, of that (s) b.(s)= i X . ( A.-X) (2.4.22) v o / 1 The on required obtained 1 expansion i n the of the resolvent kernel is therefore form oo (X=A ) , i —<i> (t)<j).(s: N ( t , s , X ) = K ( t , s ) - X )i = l _ _ _ x (2.4.23) the x convergence absolute and of which uniform REMARK The seem more convenient functional titative ticular apply results provided approach kind of } by series. be to This the and than the series may not the similar \ <t> . ( t ) t y . (s) i l x.l are clearer i^T (2.4.23) classical expand of Fredholm's often KARHUNEN-LOEVE will i n view of (2.4.17) a n a l y t i c techniques THE The convergence to x i s evident expansions 2.4' (A formulae, lead to the but and the quan- techniques. EXPANSION the function i s analogous to a f(t) in Fourier a par- series expansion i n terms of sine weighting coefficients; and a good cosine functions with reference appropriate i s Helstrom (126 pp.124- 133) . To set of be more orthonormal coefficients the we d e s i r e an functions expansion <f>^ ( t ) w i t h i n terms uncorrelated of a weighting r^. Recall over precise, that interval a set of (0,T) functions {<j> i s orthonormal m f l , ( t ) : i = l , 2 , . •. . ) d e f i n e d i on this interval i f i=j / *. (t) ** . (t)dt=. 0 If the function (2.4.1)' The of 1 lo, 3 orthonormal f ( t ) ,defined f set on i s complete, (0,T) i (2.4.1)' by zero r ^ may 4>*j ( t ) and functions are except (2.4.2)' square-integrable represented as i be determined integrating / J f (t) <fr*j ( t ) d t = ) the be a (t)=2_r * (t) coefficients Since may then f o r j = i , and r =/Qf ± (t) * * ( t ) d t . k multiplying over the each interval side (0,T), /Q r «|) (t)+* (t)dt. i orthonormal, so by i j the right-hand side is 34 For illustrative integral where purposes i s the kernel e i g e n f u n c t i o n , . and X^ a n d values know that t h e homogeneous we assume that 4>^(t) i s an eigenvalue. positive. the eigenfunctions set of functions function and where for a positive definite are s t r i c t l y definite, the consider equation K(t-s) We we kernel, Furthermore, form <}>^(t) i s s a i d the i f K(t,s) a complete set. t o be c o m p l e t e By eigeni s positive definition, i f the only g(t) satisfying T / - g ( t ) cf, (t)dt=0 i for a a l l i i s the function function of tions the g(t) not identically functions The <j>^(t), t h e n In essence zero, g(t) i s also to f ( t ) . n i=l means that i f to theset an e i g e n f u n c t i o n . the series expansion Convergence this i s orthogonal s i g n i f i c a n c e o f the completeness <f>^(t) i s t h a t mean g(t)=0. property (2.4.1) i n t h e mean means of the func- converges i n 35 The cj)^(t) series chosen expansion as ( 2 . 4 . 3 )' i s t h e it can Karhunen-Loeve facts theorem be equation the eigenf unctions Some o t h e r Mercer's of will which expanded be states i n terms of of (2.4.1)' w i t h t h e the integral functions equation expansion. of eventual interest. i f K(t,s) i s positive One is semidefinite e i g e n v a l u e s and e i g e n f u n c t i o n s as oo ( 2 . 4 . 4 )' K ( t , s )= > L 1 i=l. • 1 I n some c i r c u m s t a n c e s kernel K (2.4.5/ The X <j) (t)<f>* . ( s ) . 3 "*"(u,v) , 0<t, 1 kernel has t o use the inverse by /j!JK~ (t,u)K(u,v)du=6 (t-v) inverse given defined i t i s convenient an expansion v<T. i n terms of <!>^(t) a n d X^ by 00 (2.4.6)' K _ 1 4» (t) <|)* (u)- ( t , u ) = ^ i I=T The usefulness of analytical. In of the a random p r o c e s s series 1 inverse In p r a c t i c e , s u m m a r y , we may kernel I t may be represent a over of orthonormal i a finite functions. i s , however, difficult to mainly determine. square-integrable function observation interval in a The be coefficients may made 36 to have the useful property REMARK Homogeneous tant i n communication are a roi© used is that Fredholm i n determining random process. A of being integral theory. the uncorrelated. As equations a theoretical Karhunen-Loeve difficult i t i s often difficult aspect to of find play to they theory theory, solutions impor- tool, expansion this an of . however, the equations involved. 2.5 GRANDELL In chapter we (10), G r a n d e l l used 1, obtain briefly to review the TYPE SOLUTION a method best linear G r a n d e l l s method 1 similar to that estimator. and make In some used this in section comments thereon. With Grandell the the assumption adopts as q u a d r a t i c mean. (2.5.1) where the determined only criterion Thus he the f o r the seeks covariance choice estimates of of i s known, the the estimate, type X*-(t)=a(t)+/Je (s)d(N(s)-s) . t functions a(t) so minimized. and that E{A*(t) is a that - X(t) } 2 3 (s) (as i n chapter 1) are 37 By virtue theorem can o f t h e Karhunen-Loeve of the preceding be r e p r e s e n t e d section, K(s,t) = and M e r c e r ' s the covariance kernel K(s,t) by (t) (2.52) expansion . (s) x i=l the a n d <j>^ b e i n g eigenvalues and e i g e n f u n c t i o n s , respectively, 2 By assumption exists, a ( t ) a n d 3 (.s) t are finite a n d E{ X * ( t ) - X ( t ) } so 3 ( s ) has t h e form t (2.5.3) 3 (s)=^) t B <j) (s)+b (s) i i t i=l where b ( s ) i s orthogonal t oty^ , cf> ^ t • • • • t We now (2.5.4) proceed t o minimize the expression E(A*(t)-X(t)} =E{(a(t)-l)+/g 2 B f c (s)dN(s)-s)-(Nt)-1)} oo oo + \ — Since (2.5.4) i t i=l. _ 3 . + .(t) i A - " i " i=l i s being minimized, a(t)=l 00 EU* 1 -2 y i=l ^ 2 oo —*. (t) 2 2 i=l O O e +/Qb (s)ds+^> =d(a(t)-l) +^) 2 oo g 2 ( t ) - X (t) } =^ 2 (B,-*, ( t ) ) {3. +-i—2 2 Z } and b ( s ) = 0 . t Thus 38 so that &E{A»(t)-Ut) > - ,« 2 38. " 2 i 3 2(@ -^ (t)) 1 1 V- + x setting this x equal i t o zero we obtain . 1+y. 1 which corresponds t o a minimum 3 E { A * ( t ) - A (t) } 2 •30. It follows and w (2.5.1) 2+ 2 U that 8 t^ ( s ) = )/ (2.5.5) / i n turn 2_ 2 = since > Q i (substitution i n <i>, (t)4> . (s) - V r1+y. r-^ , becomes -<f>, (t)<j>, ( s ) (2.5.6) (2.5.3)), A*(t)=l+/g^> d(N(s)-s); 1 X 1 + y and 2 2 \ T ^ i E { A * (t) - A ( t ) } = > - i 1+y i=l 00 ( (2.5.7) If we integrate now over multiply t ) i expressions the i n t e r v a l (0,T), (2.5.2) we obtain and (2.5.5), and 39 00 CO rpy < .. (u) vJ p) .. ( V t W) < v| p> V " / v <f) . (u) |>. ( s ) v p -< vp. /j3 (u)K(u,s)du=/J^_ > d t i=i <t> ( t ) D=l 1 u 3 (s) i (l+y )P i ± i=l (JK (t) <p i (s) ^-— y. Thus 6 (s) i s a i (s) 1+y. t B (S)+/QK(U,S)B (u)du REMARK Although T = T equation o f the second K(t,s) (2.5.8) kind, equation i s a Fredholm Grandell fails integral to note this fact (10) . Grandell in p ( K(s,t)-6 (s) (2.5.8) in (t) solution, of the integral t equation i / i=l i=l = <p the case unbiased estimate of interested To EXAMPLE that 3 ( t ) i s a unique of X(t). i s infinite, X(t) which reader may illustrate (2.5). K(s,t) ' refer Suppose = p covariance On kernel, the other hand, i t i s impossible i s useful this solution. t of a degenerate eigenfunctions estimate shows f o r every to Grandell's theory, t there Further, exists i f t h e number to find an i n (0,T). paper. we consider an the kernel i s given as example. an of unbiased The 40 As was noted i n the previous (t) are positive. (2.5.9) the only = solution <|> ( t ) = The T so i n this example 2 get the equation being C, = that * (t) = we yi/Jcj) ( s ) d s , a constant. o r t h o g o n a l i t y requirement / J c T ( t ) a l leigenvalues, y / g K ( t , s ) (f)(s)ds Thus • (t) = section, — Tp 2 = 1, of the <J>^(t) implies y Substitution i n equation /T so (2.5.9) gives P that Now to get the best linear estimator, A * , we use equation (2.5.6) , thus X * ( t ) = .1 + 1/T 1+(P/T) 0 from w h i c h we obtain (2.5.10) X* - REMARK mate We + (2.5.10), the given N ( T Helstrom ( s , , ) ~ s } estimate as: example on t h e covariance i s the best to other the engineering linear i n t h e above covariance now t u r n N ) SOME We { P+T that 2 .6 in P i s dependent only estimate, with note the best d linear esti- f u n c t i o n and so t h e estimate f o revery process function. OTHER METHODS solution literature (12b) a n d o t h e r s . linear the best techniques as d e s c r i b e d which appear mainly by Van Trees ( 1 9 ) , Integral of signal tered equations are frequently encountered d e t e c t i o n and e s t i m a t i o n . i n connection with Two such i n the equations the detection of signals theory encoun- i n nonwhite noise are: (2.6.1) where f ( t ) and (2.6,2) where Af(t) /j!JF(t-s)f (s)ds i s t h e sum noise, 0<t<T A a r e t o be d e t e r m i n e d , and = g(t) of parts By assumption corresponding the covariance t o white and non- so N R(t-s) and = f ( t ) i s t o be d e t e r m i n e d . function white /jF(t-s)f(s)ds substituting 0 = (t-s)+K(t-s) this into equation (2,6.2) produces the integral equation N 0 (2.6.3) which of we immediately t h e second As (2.6.3) T (t)+/jK(t-s) f (s)ds = g ( t ) as a Fredholm integral equation kind. mentioned will recognize 0<t<T i n sections generally exist (2.3) a n d unless (2.4), (-N /2) n a i s an solution to eigenvalue 4.3 of a t h e homogeneous integral positive-definite negative of equation kernel, eigenvalue, and (2.6.1). the integral there i s no Since equation trouble K(t-s) cannot about the i s have existence a solution. 2.6.1 Applications In numerous of Fourier Transforms applications, integral equations of the type oo (2.6.4) are If /_ O O K(t-s) f (s)ds encountered. The = g(t) integral K ( T ) , g ( x ) e I>2 I" / ] ' -0 0 sides to 00 w e c a n on u s e the l e f t i s a convolution. the F o u r i e r transform of both obtain (2.6.5) /2nG(K)G(f) (2.6.6) G(f) = = G(g), and so , ./2JIG(K) provided If finally ( 2 . .7) 6 G(K) does the right not side, vanish. as a obtain f . -i-GMljlf) • function o f u, i s i n L2[- O o a , c o ] we EXAMPLE Consider 2.6.2. (2.6.0) f(t)-A/ By a d i r e c t G £ M I 1 S lf(s)ds integration ( e - j t | {VA = l+u so =g ( t ) 2 that G(f)-A/2n and '-^<3(f) l+u so , (2.6.9): G(f)= _ 2 l + u -2A w h e r e we r e q u i r e ^ \ K< for ; > i < w e -2A 1,°° - p-iut , 2 l + u -2A 0 1 obtain the solution (2.6.10) ' f = ^ REMARK Techniques Mellin . / + 'j' G(g) Then % _ ,v 2/n l+u and = G(g) transforms w h e r e y = /1-2A and p r o p e r t i e s o f L a p l a c e , Hankel, and (Helstrom (12b)) c a n be d e r i v e d by relating t h e m t o F o u r i e r t r a n s f o r m s ; we d o n o t , h o w e v e r , i n t e n d t o d i s cuss these transforms i n the present work. Equations 2.6.3 In this case we with may Separable expand the Kernels kernel i n the form n (2.6.11) where K A and (t,s). (2.5) to REMARK worthy T h u s we solve of methods i s the leads to are can the omission cesses, y Aj ^ ( t ) ^ ( s ) , j= l = <f>j ( t ) Lack numerous which K(t,s) the use given space which for eigenvalues the prevents finite found complete eigenf unctions of sections of (2.4) and equation. us from i n the observation state-variable a techniques integral are and d i s c u s s i n g a l l the literature. and formulation solution. A nonstationary of Kalman and noteproBucy (6b) 46 CHAPTER A FOR THE SURVEY OF APPROXIMATE SOLUTION OF FREDHOLM OF THE 3.1 SECOND METHODS INTEGRAL EQUATIONS KIND INTRODUCTION Since usually here, exact s o l u t i o n s of difficult the private of 3 to obtain, possibility of communication, quadrature methods the i t is natural finding Professor of Fredholm the integral to equation consider, as approximate solutions. James suggested Varah we In the are do a use form n (3.11) i=l i.e., the we the integral integrand explore at this a methods for we In in particular consider then can section suggested other are the to we (3.5) the the we Trapezium tackle improvements of the on of a weighted points deal and proof with sum of In this values of their (6), simple (12) , We the validity, quadrature rules. shall (14) . rules; briefly rules in section (3.3), and collocation in section (3.4). In question errors the methods of discussed and the thus of chapter in general, literature Simpson's these t^. methods; without cited shall method by approximate explained refer (3.2) number generalizations to consider section finite and numerical which i s represented subsequent far. Finally, 47 in section ( 3 . 6 ) we shall discussed i n detail here. a a n d some summary mention T h e n we concluding 3.2 other their summarize error the p r i n c i p a l terms. shall are not end t h e c h a p t e r with o f type with remarks. QUADRATURE We methods which RULES formulae L e t us w r i t e (3.1.1) I(f) f o r the integral / f(t)dt. cl For n is the repeated equal steps applied order that mulae the interval o f l e n g t h h, so t h a t over sub-Intervals. derivative this forms, i s continuous. use e q u a l l y spaced h = (b-a)/n, The e r r o r of f(t) at a point derivative [a,bj i s d i v i d e d points, and the depends The f o l l o w i n g l e t The M i d - P o i n t t^=a+ih. Rule n I (f) = h ^ f (t _j )-+R i s 1 where R-^ = (3.2.2) 1 2 II 2-^(b-a)h f " (£), i s the remainder The T r a p e z i u m Rule n-1 1(f) = |h{f(a)+2^> i=l f ( t ) + f (b) } + R i 2 we three i ~ 1^ 2/• ••/ n • (3.2.1) formula o n some £ i n [a,bj , where s o we into term high assume for- 48 where R - ™ ( b - a ) h 2 (3.2.3) 2 f " (5) Simpson's 1 2 1(f) Rule 1 -i 2 ^ } n = |h{f (a)+4^~f (t 2 i _ )+2^> 1 i=l f(t 2 i ) + f (b) } + R 3 i=l where R REMARK all 3 - y i o In formula ( " b type important may a h 4 f l V ( 5 n must points, be d e r i v e d . ) be even. The above and h i g h e r - o r d e r However, t h e above formulae formulae three of the a r e t h e most i n practice. (3.2.4) Formulation Fredholm equations case i s now of Discrete o f the second a s t r a i g h t f o r w a r d way general ) (3.2.3), use e q u a l l y spaced same in = b y means clear. We Equations kind c a n be of quadrature choose approximated formulae. any q u a d r a t i v e The formula n / f(t)dt b = ^> w f(t ; i i I=T involving general the n points t ^ and t h e c o r r e s p o n d i n g Fredholm'integral equation o f the second weights kind w^. The 4.9 (3.3.1) is / K(t,s)f(s)ds+g(t) then the replaced unknowns tion to the by f(t^) i n matrix (3.3.2) (I-KD)f the a matrix system (to integral Written where = b cL K of n indicate equation form, = f(t) linear that this f ( t ) has this algebraic system i s only been of resents elements the We equations EXAMPLE the the = elements = t^ ^2 Then K 0, = K j ± t = by f (t)) . becomes K..=K(t.,t.) s o l u t i o n of values above of these f(t) at methods and D has the 1 3 by equations, the two points simple f^, rep- t=t^. examples t + / j K ( t , s ) f (s)ds K(t,s) Take i.e., the i s of (13) ±± The kernel (Hilderbrand = approxima- Consider f(t) where w^. approximate 3.2.1 for g, has illustrate an replaced 13 diagonal equations = s ( 1 notes \' t ( l - t j ) , i form t(l-s) { _ (1-t^ , K ± the i f t ) i f that = i<j, t ~ t > s this ^5 ±2 t<s 1 ^ a kernel n d n (l-t ) , i , j = 2 ~ i s weakly \ (trapezium ..., K 1,...,5. 3 5 = singular.) rule) t j(l-t ) 5 ... 50 so that K 0 0 0 0 0 0 3/id 1/8 i/16 0 0 1/8 1/4 1/8 0 0 1/1-6 1/8 3/16 0 0 0 0 .0 0 and D = diag Hence the from h(l/2, (I-KD)f=g system of with 1, 1, g= [o 1 1, 1/2) 1/4 1/2 3/4 l ] , we obtain, equations = 0 61/6 4 f - l / 3 2 f - l / 6 4 f 2 3 = 1/4 4 -l/32f +15/16f -l/32f 4 = -l/64f -l/32f +61/64f 4 = 3/4 2 3 2 3 V 2 = 1 The solution to this f = set of equation i s K f 0.0000 0.2943 2 0.5702 *3 0.8104 *4 f EXAMPLE solution 3.2.2 1.0000 5 Suppose - we of the integral are required equation to find an approximate 51 /JK(+.,S) f ( s ) d s + g ( t ) We = f (t) . u s e Simpson's r u l e t o a p p r o x i m a t e /Ju(t)dt where E this = represents 2 remainder the i n t e g r a l l/6{u(o)+4u(l/2)+u(l)}+E the e r r o r or remainder f o r t h e m o m e n t , we obtain Neglecting relation l / 6 ( K ( t , o ) f ( o ) + 4 K ( t , | ) f ( | ) + K ( t , l ) f (1) } + g ( t ) In this equation t we form 2 term. the i n the = f (t) . write = 0, j a n d 1 s u c c e s s f u l l y , a n d obtain l / 6 { K ( o , o ) f ( o ) + 4 K ( o , | ) f ( | ) + K ( o , l ) f (1) } + g ( o ) = f (o) l / 6 { K ( | , o ) f (o)+4K(|,|)f ( | ) + K ( | ) , l ) f (1) } + g ( l / 2 ) l / 6 { K ( l , o ) f ( o ) + 4 K ( l , | ) f ( l / 2 ) + K ( l , l ) f (1) }+g(D w h i c h may be w r i t t e n (I-KD)f with We = f(l/2) = f(l) as g D = d i a g ( l / 6 , 4/6, can, therefore, = solve 1/6). f o r t h e unknown values f , , f_ and f_ 52 3.3 GENERALIZED Assume Consider [a,bj. to develop Simpson's i s Lebesque here n the nodal (3.3.1) fa,bj. i n t e g r a t i n g f ( t ) <\> ( t ) over polynomial inter- (b-a)/n and d e f i n e t t^ = linear , t-^, Q Rule section, l e t the piecewise points Trapezoidal a+ih, i = 0, 1, interpolation function t ± ± off ( t ) ; i.e., f ( t ) = i { ( t - t ) f ( t _ ) + . ( t - t _ ) f ( t ) }, n n. 1 i 1 t _ <t$t ± i 1 i=l,...,n. Substituting (3.3.1) into / f b n ( t ) <j> ( t ) d t , we obtain n (3.3.2) / f ( t ) <j)(t)dt = y ia f (t _ ) + 6 f (t ) } b n ± i 1 i ± i=l where a. and x (3.3.3) rule rule. = f ( t ) be on the generalizations of the trapezoidal i n the preceding h integrable i s to use piecewise The G e n e r a l i z e d As at <j> ( t ) of numerically approach 3.3.1 Let and the problem The polation and fec[a,b] QUADRATURE 3. a r e g i v e n 1 - by: 3 i ^ i _ 2 i ( v t ) t ( t , a t i 6 . . ( t _ t • ) m ) d t ± 53 REMARKS ate the Since as a For this form i n t e g r a l s of usually simple the of <j> ( t ) quadrature, and singularity function, these i t i s necessary tty ( t ) over of integrand an arbitrary can to evalu- intervals. be isolated i n t e g r a t i o n s may, in general, o> ( t ) =1, the not be difficult. We also trapezoidal note rule a. = 1 = we quadratic when in this ordinary case Generalized Simpson's n>l, interpolation function quadrature obtain 2 l e t h=(b-a)/2n, quadratic we £ l The Here The since B. 3.3.2 being that on each formula and to f subinterval Rule define f on t^, t^, ^2i-2' t as the piecewise t 2 ; n 2 i ^ ' """ ^' = ^n n beomces: n (3.3.4) / f (t).<f) ( t ) d t = ^ t a b n i f (t _ )+B f i 2 i (t 2 i _ )+Y f 1 i=l where a (3.3.5) i i & = Z = zVt , 2 2h ^l f h Y. = — 2h 1 2i-2 ( t - ^ . X t - t ^ J M t J d t , 2i (t-t 2i-2 2 i _ ) 2 /^2i (t-t . 2i-2 (t-t 2 i )*(t)dt ) (t-t . ) <j> ( t ) d t i (t 2 i ) } * We we are shall see dealing the with error The above concealing need special t o be analysis form 1 sider methods modified Green independent part the of an integral (3.4.1) f(t) = assume an these generalizations i n section In have a sense, i n order to take particular the (3.5). are advantage e q u a t i o n may $ 2 on disadvantage they ( 1 2 ) , l e t <j>^, t)> , functions when METHOD often errors. features'which the linearly <JK s direct anomalous Following and of COLLOCATION 3.4 and usefulness [a,b]. orthonormal basis We for L 2 n form too of of general, any posses. a set of assume that the [a.,b] . Now con- equation g(t)+ A / K ( t , s ) f (s)ds, b approximate solution of the form n (3.4.2) f (n) i=l where the C.'s x Substituting where e are undetermined (3.4.2) into (3.4.1) n n i=l i=l denotes the error constant coefficients. yields involved as a result of assuming the 55 solution as ( f n ) - W to minimize In this e a i the vein error we determine the vanish each of b . = g(t.) at to m choose e coefficients i n such a way n choose coefficients these the a by set of the points t^, t requirement 2 , that t e^ f i (t) , and should points. Let 3 3 and a.. xj = substituting of n linear 4>. ( t . ) - A / K ( t . ,s) <!>. ( s ) d s , ± j a j x b these into the expression ( 3 . 4 . 3 ) , we obtain a system equations: n 2 i j i=l (3.4.4) from a which Now to numerical of the solve (i) The (ii) At j j = are t o be ( 3 . 4 . 4 ) , we and 1, 2, n determined, choose the the numbers points a.. are t j from the o b t a i n e d by use formula. solvability: determinant least b a t hand, a quadrature for = C^'s data Conditions i c one of of the the system b.'s must (3.4.4) be must be non-zero. non-zero. 3 If f these i n conditions the form of are satisfied, the solution a polynomial approximation.' of We (3.4.4) now gives give an example. \ 56 EXAMPLE 3.3.1 (3.4.5) Consider f(t)= the equation t+/j!jK(t,s)f (s)ds with s K S u p p o s e we ( ' t ± s / us assume f with t-^=0, ) = have . • t Let s ( 3 s< t t { s>t the points ' _ n i 1 , u, 4 = - ) ( t ) t =l. : 2 2 simplify = c The We 3 the 1 + 4 3 , , i . of the c 2 t + c 3 t form 2 the error should use the a n a l y t i c a, . ID a.. are given l-/oJ 0 . - t.-/^j 2] D 0 n by sds-/^ t.ds t_. j j j s ds-/?~ t .s d s t. ij 2 D a j 3 2 t = t J~-/"QD and b . = t. D D 3 1 S ds-/ t vanish form computation. coefficients a 1 , a solution the c o n d i t i o n that t 2 ~> tjS'ds at the points of the kernel to 57 Thus l ll l 12 W^sds-O/Jds = 1 , W = Hence C.j + 52 f ( C 2 1,1 , i / 2 ; d = s 2 + assumes = 0 3 = 96 9 C 3 = 12 approximation: (t) 1.988t-0.434t' 3 ) = f^=0, f = 0 . 8 3 6 , 2 the points best f o r t h e chosen 3 f^=1.554, = 0.470; f t 2 5/8 t h e form to the where mate _ C i s automatically at S (3.4.4) which f , d etc. 2 ± leads s C +17 C+ 6 which u^, t h e system 120 .1/2 0 and gives p o i n t s t=0, 1, the solutions = 1.247 4 = ^- a n d t ^ = 3 / 4 , solution to equation respectively. (3.4.5) Thus the approxi- i s : 0.000 (3.4.6) 0.470 0 . 836 1.247 1.554 REMARKS that not The c o l l o c a t i o n an exact control matching the size suffers from solutions all points the disadvantage of the solution at certain points o f t h e d e v i a t i o n between approximating the given method a t other which may points the exact (unless and t h e we w a n t n o t be c o m p u t a t i o n a l l y does t o choose feasible). 58 At because used) any at rate, least methods the collocation i t i s an of method improvement on i s worth the successive approximations considering direct and (and the widely quadrature methods. 3.5 ERROR ANALYSIS 3.5.1 We the now result consider of discussed. the In the the problem calculation sequel we of e s t i m a t i n g the f o r any shall of denote the accuracy quadrature the norm of methods any function 4- ( t ) b y | U| | = / |<j, ( t ) | d t b d Returning function (3.5.1) to the problem f ( t ) i s r e q u i r e d to J K ( t , s ) f (s)ds+g(t) b of e s t i m a t i o n we satisfy = the integral f (t) a. The computed (3.5.2) where function f(t) actually / K ( t , s ) f (s)ds+g(t) b cl E (t) d e n o t e s the error = satisfies f(t)-E(t) term. know that of the equation 5? If we write e(t) for the error (3.5.3) = f(t)-f(t) i n our r e s u l t , we / K(t,s)e(s)ds = b find by subtraction that e(t)+E(t) . ci In the usual way, K(f) the error writing = e(t) (3.5.5) || e|| <|| in the terms of = Under can examining It (3.5.6) _ 1 E (t) , estimate remains operator: and so | | ||E||. estimated. or perhaps assumptions finite [6] , [9] , s t a t e s be case, suitable t e r m s we (1-K) - 1 derivative, either nonsingular integral by -(1-K) can a the b i s given ||E|| for / K(t,s)f(s)ds, (3.5.4) Now K the E(t) will usually of the of f ( t ) only about the complete the expressed integrand in a smoothness of magnitudes of be singular the derivatives in case. various by differences. to that estimate || ( 1 - K ) || . A well known result 60 where K i s any bounded vided that interval | |K | | <1. [a,b], operator I f we c h o o s e then t h e norm | | K | |= (3.5.7) linear i n a Banach t h e norm space, pro- | | f | | =max | f | , o v e r t h e of Ki s / |K(t,s)|ds, b and (3.5.6) holds provided II KI We therefore have a rigorous 3.5.2 bound Generalized Turning quadrature to the corresponding rules, and u s i n g for the error i n the result. Error error the notation term o f t h e g e n e r a l i z e d of section (3.3), we have (3.5.8) E (f)= / { f ( t ) - f (3.5.9) | E ( f ) | < | U| | || f - f j | • b n a. has that f o r Lagrange Atkinson f " ^ i s continuous, and that 2 |f 1 ! L | | |U||. the error subinterval the generalized bound | E ( f ) | <^h | n and u s i n g i n t e r p o l a t i o n on each (3) h a s s h o w n the error (3.5.10) (t)}4> ( t ) d t , n Assuming mula n for- [t_._^,tj] , trapezoidal rule 61 REMARK that We note of the ordinary below, this quadrature will order of convergence trapezoidal rule, n o t be t r u e f o r t h e i s t h e same b u t , a s we shall generalization as find of a l l Simpson's rule, the corresponding bound f o r error i s : | E (f) I < ^ | h | | f n 27 (3.5.11) As the rules. Considering the that 3 hinted earlier, we 1 find 1 1 ! I ' I I* II . that, unlike the case of the trape3 zoidal of rule, the generalized convergence order whereas at least, the generalized on t h e o r d i n a r y 3.6 SUMMARY We have rules methods Fredholm on Simpson's only has an h order i s of a higher rule 4 improvement the the regular rule h . Thus, son's Simpson's considered plus their rule provides an method. AND CONCLUSION i n some detail the trapezium and generalizations as convenient quadrature f o r the numerical i n t e g r a l equation method Simpson's approximation o f the second o f c o l l o c a t i o n may the r e s u l t s o f the former yield to the solution of the kind. We a significant methods. Simp- found that improvement 62 We the do n o t c l a i m t o have a v a i l a b l e methods. methods go by v a r i o u s Eighths" rule, mentioned Indeed, names Newton-Cotes a l l , o r even f o r quadrature as: Simple rule, Radau Gauss most, o f rules, rule, some other "Three- quadrature, Lobatto rule, etc. Although methods ties. here, found conclusion, are expansion we cases wish equations do n o t f e e l , however, way too inferior t o one g i v i n g may bearing considered as a model f o r some place. such possibilias t h e a l l of which out that methods that certainly necessary, an approximate methods s o l u t i o n s be and n u m e r i c a l absolutely only further i s i n general We form some techniques, to point can such approximate i n closed to discuss a l l may be literature. of integral exceptional tion there exploring and t h e R a y l e i g h - R i t z solutions various an o p p o r t u n i t y y e t i t i s worth i n the cited In any do n o t have F o r example, Galerkin in we found. have an e x a c t real system, used. method solution. A i si n solu- but i s rarely the integral i s almost o f the system Only Generally, t o be be c o n v e n i e n t , that explicit difficult. an approximate i n mind representation finding equation, certainly i n the first 63 CHAPTER COMPARISON: AN EXACT IN VERSUS A 4 AN SPECIAL APPROXIMATE SOLUTION CASE INTRODUCTION Whittle t i o n ' (1.2.6) a method (22) d o e s obtain for certain special techniques. Whittle's method to obtain equation (1.2.6) integral estimator example of this i s treated. linear estimator solution and c o n s e q u e n t l y equa- suggests using use a s l i g h t l y modified an e x a c t practical problem Estimate A . consists by w i l d c a t t o 1971. estimator A BRIEF The d a t a discovered 1953 approximate 4.1 and W h i t t l e ' s work linear form the linear We compare of successive a numerical 30-day totals exploration i n Alberta for Clevenson and compute devised, and Zidek (7) c o n s i d e r the approximate our exact and optimal Clevenson-Zidek's results. O U T L I N E OF W H I T T L E ' S A ( t ) by a linear DERIVATION smoothing OF formulae (1.2.6) so t h a t A (t) L will of t o the Fredholm obtain i l l u s t r a t e the use o f the techniques period linear shall estimators (1.2.1). o i l wells the We linear cases. of obtaining the optimal smoothing To Bayes be e s t i m a t e d by s t a t i s t i c s of the type 64 (4.1.1) for X (t) = / w (s)dN(s), b L which, (4.1.2) t conditionally, EX (t) = / w. a X (s)X(s)ds, b j_i given L. and (4.1.3) varX T (t) = / w (s)X(s)ds. a t b J_i Assume a population of priori will then where such be of values. obtained C*[t;w] = If by The the optimum X ( t ) ' s have linear an of a estimator minimizing E[/w Xds+(/wXds) -2X(t)/wXds+X (t)], 2 2 2 respect to prior distribution X(t) s. 1 EX(t)=p(t) respect (4.1.5) f u n c t i o n , X ( t ) , i s a member f u n c t i o n s , so t h a t E denotes expectation with the with the p a r t i c u l a r distribution (4.1.4) of that 2 and E [x(t)X(s)]=y(t,s) t o w^_(s) y i e l d s y(s)w f c (s)+/p(s,u)w the integral (u)du = then minimizing equation u(t,s). (4.1. 65 By a normalization C (s) t o f the form w ( s ) /{y ( § ) / y ( t ) s t r(t,s) = (4.1.6) y(t,s) = r(t,s)/y(t)y(s), (4.1.7) w (s) y ( t , s ) / / y (t) y (s) we g e t t Substituting = these C ( s ) /y ( t ) / y ( s ) ' t into (4.1.5) we obtain, after some tion , (4.1.8) which C (s)+/ r(s;u)? (u)du b t t i s obviously THE (4.2.1) where seek the optimal X (t) = (4.2.2) EXACT linear form. SOLUTION estimator o f the form y+/^ h(t;s)d(N(s)-ys) L h(t;s) r(t;s), of the required 4.2 We = T i s the solution of the integral yh (s)+/^ K(s;u)h (u)du t T t = K(t;s), equation -T<s<T. manipula- 66 Consider upon the ( t - s ) so (4.2.3) where, and special that case where (4.2.2) may yh(s)+/ K(s-u)h(u)du a b f o r convenience, we the kernel be written depends only as = K(t-s), have dropped the subscript, t, of h [a,b]=[-T,T]. We further specialize Cov(A(t),A(s)) our = results by assuming EA(t)A(s)-EA(t)EA(s) = y(t,s)-y 2 /. s a p(t-s) = so K(t;s) 2 = 2 -a a e 1 t-s \ that /+. \ = a 26 ^ , - a'l t - n+y s L 2-. y(t,s) Since (t,s) = K(t,s) = K we U ' , /y(t)y(s) ( t S ) obtain (4.2.4) Whittle K(t-s) treats the special form K ( t ) = a( +ee~ ' ' ) a Y H+y = ^—e t case where the kernel i s of the 67 which i n o u r example ay Following be = (4.2.5) we find by r e p e a t e d h" that and ag = y Whittle converted means (s)-0 h(s) 2 a / 2 that y the integral = -2^— 6(t-s)+ya f/ h(u)du-ll, 2 b L 2 0 = 2 (4.2.3) c a n differentiation to y with equation a. 2 2 ^ - a + a * y The 6-function derivative hold the of e ~ without tives We (4.2.6) ( 4 . 2 . 7 ) t - s l at s-t. a n d —6 magnitude. Equation (4.2.6) l a here because the 6-function 2 - o f h(s) same arises a Whittle f o rs ^ t , however , ~ l must s has a general = ^- A into e-® + a 2 / Q /3-a e step ( s ( £ s-t + Q l - a | s-u| e + - G ( b - u ) I a I . + R ) ] d u e" solution o f t h e form P, Q a n d R b y u } u ( b e " s ) substituting +R - 0 | s-u| + p e must discontinuities of ^-C^^+^^KQ&^^-^+R. +pe"® ~ > br (4.2.5) f o rs=t t h e d e r i v a - (.4.2.3) t o g e t 1 s _ t 1 + have that of the ^ determine the quantities back asserts y (4.2.5) h(s) ' of the discontinuity - 0 (u-a) 68 2 where A Recalling that involved, (4.2.7) (4.2.8) h (s) If yields = t (4.2.8) (4.2.9) L x[e the integration the solution, -0|s-t| -0(t-a)-8(s-a) -®(b-t)-0(b-s)-| +e +e i s substituted ( t ) = A , [ a , b ] = [-T,T] , a n d p e r f o r m i n g into ( 4 . 2 . 1 ) we > obtain y+i +i +i , 1 2 3 where x 2 I 3 . f , r with The = / the intensity to obtain N ( s b -©(-t-s+2b) a e _ ) d ( N y ( ) S s ) _ u S ) (4.2.9) ( N ( s ) _ y S ) i s the exact optimal linear estimator A ( t ) , and i t i s t h e form which i s used solution (b) d i s p l a y s to the oilwell the exact f o r comparison. i s given d 2 „2 2aa . 2 Q = +a y function, the exact graphically ( p expression Figure d b -®(t+s-2a) 2 aa -; yG of (4.2.9) p a x A b -©|s-t| discovery and t h e approximate A Computer Program i n Appendix ( 1 ) . problem. results t o compute 69 4.3 AN This by presentation h (t;s) follows Clevenson yx(s)+/'^ x ( u ) K ( s - u ) d u (4.3.1) l e th S O L U T I O N OF 1.2.6 and Zidek (7). Denote the solution of T and APPROXIMATE = K(t-s), T O T (t;s) be o u r a p p r o x i m a t i o n -T<s<T, (t;s); to h that i s , ( t ; s ) i s the solution of h oo CO (4.3.2) yx(s)+/ co with / • x — Using show 2 CO Fourier -co< <oo, = K(t-s), s transform techniques, Clevenson and Zidek (7) that = T Trvco f o r e a c h Their x'(u)K(s-u)du (s) ds<co. e (t;s) as CO — argument h (t;s)-h T fixed also (t,s) gives o o (t;s)^0 under a bound suitable for e T regularity i n t h e form, 1 (4.3.3) |e ( t ; s ) | < A y " { / | | 1 T u > T with A = Is Ik {/°° — CO 1 I 2 (s)ds} 2 h 2 (t;u)du} {T 2 D -1 2 +By _ 1 }, conditions. 70 and B = {/°° K Corresponding (4.3.4) 2 (s)ds} 2 to equations X (t) = (4.3.1) and (4.3.2), l e t u+/^ h (t;s)d(N(s)- s) T T T M and X (4.3.5) (t) = respectively be tion Then a to bound A . y +/ — 00 the optimal by (t)- x estimator E|A|<4TAy~ {/ 1 1 when t and i s not Zidek [-T,T]. X the boundaries near lent to our Thus remark Further, approximation Clevenson that this 2 2 the boundaries X^ w i l l after approxima(4.3.3), form: }. _ 1 oo (7) o b s e r v e of _1 +By 2 > i i n the {T Ji (t;u)du} too near period, T , U 1 Clevenson only | i the 1 1 (4.3.6) and i n inequality (t) I i s o b t a i n e d oo 1 1 linear a p p l y i n g t h e bound f o r E I A I=EI X 1 h (t;s)d(N(s)-ys] OO n o t be that a period. bound i s small of the observation good approximation Note that this i s equiva- (4.2.8). and of the optimal Zidek give linear the large estimator X T Li time, (t) as to T, 71 (4.3.7) X j t ) U+e/^e- ^ 7 - S 'd(N(s)-ys) , where ^ 2 Q and To in - l . , 2 -1 -1,"2 (l+2a y j ) n 3 = a Y = ct(l+2a y determine given y "^a 2 the accuracy inequality (4.3.8) of (4.3.6) 4.4 = '4AT{T ~2 +By -1 COMPUTATION: From the data the Cov(X(t),A(s)) where p(u) using = e the mean, process; on Also [0,°°). y=0.70, the as Time an = a l a u l , data. regressive 108 evaluated, (4.3.7) and found the to bound be 2 -1 -2vT }3 Y 6 • OIL 2 data, was approximation ^. P citly the Pcosh(syt), where used -T<t<T, we WELLS y, i DISCOVERY was DATA estimated as y-0.70. i p(|t-s|), with The a=0.05, form of believe that p was chosen i s that p of without a i s symmetric and to measured observations i n the in 30-day period intervals [-T,TJ. For and there this expli- one-step express our u n c e r t a i n t y about the *2 value a =0.25 was chosen, with support 2 estimate of o . was We auto- decreasing choice from the were particular 72 case the T=110.5, and following A values - 3 = The bound 0.01 son the the linear to the constants 0.0913, y = /5/2, 0.195. in inequality ( 4 . 3 . 6 ) was earlier, we estimator and the The graphs i n Figure (b) support i s , the X found display i n Figure linear that (4.3), obtained: exact that in section t o be less than jt|<87. boundaries cates were B = mentioned REMARK tions; reference 5//2, given provided As with m of the will estimator, bound not X T i s small only observation be a good , near the (B) l a r g e time the when t period, boundaries i s not of to that compari- approximation. above [-T,T]. approximation for calculatoo This the near indi- optimal period. 73 CHAPTER 5 APPLICATIONS 5.1 INTRODUCTION The main purpose o f t h i s chapter i s t o apply the techniques developed thus f a rto practical different from t h e s p e c i a l case c o n s i d e r e d i n c h a p t e r t h a t we h a v e b e e n c o n c e r n e d A , of the intensity process. s i t u a t i o n s which may be slightly 4. Recall with the optimal l i n e a r estimator, function, A (t), of a nonstationary poisson I t h a s b e e n shown t h a t X ( t ) i s a f u n c t i o n o f h ( t ; s ) L which i s the solution of the integral (5.1.1) equation m(s)h. ( s ) + / K ( s ; u ) h . (u)du = K ( t ; s ) L. a t a<s<b, b where m(s) = y i s a c o n s t a n t I n many c a s e s (in chapter 4). the assumption t h a t m(t) i s a c o n s t a n t over the e n t i r e p e r i o d o f o b s e r v a t i o n [a,b] i s u n r e a l i s t i c . It is t h e r e f o r e o f i n t e r e s t t o study other s p e c i a l where we d r o p t h a t a s s u m p t i o n . To f a c i l i t a t e g e n e r a l model t o such p r a c t i c a l equation scribed is situations the a p p l i c a t i o n of our situations, consider the i n t e g r a l (5.1.1) w h e r e m ( t ) i s n o t a c o n s t a n t b u t i s a p r e function of t . An i n t e g r a l e q u a t i o n w i t h t h i s property sometimes c a l l e d a Fredholm i n t e g r a l e q u a t i o n o f t h e t h i r d type. H o w e v e r , b y s u i t a b l y r e d e f i n i n g t h e unknown f u n c t i o n , h (s) , 74 and/or the k e r n e l , K ( t ; s ) , i t i s always p o s s i b l e an e q u a t i o n i n t h e f o r m o f t h e s e c o n d I n p a r t i c u l a r , when m ( t ) [a,b], Hilderbrand i n the (5.1.2) /mT^)h (s)+/ such type. i s p o s i t i v e throughout the i n t e r v a l ( 1 3 ) , shows t h a t rewritten to rewrite t h e e q u a t i o n (5.1.1) can be form b K ( S ; U ) t ^ M h t i u ) d U / m (s) m (u) = K j t ^ /m (s) f or (5.1.3) x (s)+/ r(s;u)x (u)du - g ( s ) . b "c Thus i n t h i s • a. f o r m one must x h "c (s) = u recompute (s) r /m(s) a f t e r x ( s ) has been f o u n d . t H a v i n g shown t h a t by a p p r o p r i a t e l y i n v o l v e d , we can r e w r i t e t y p e e q u a t i o n , we give (5.1.1) r e d e f i n i n g the functions i n a s u i t a b l e f o r m as a two p r a c t i c a l e x a m p l e s second of such s i t u a t i o n s . 75 5.2 E S T I M A T I O N OF T R A F F I C D E N S I T I E S AT THE L I O N S G A T E B R I D G E Volumes of cars on the five-minute crossing of the this of of a data have been Lions Gate Bridge counts intensity function and other effects are useful another lane look that, from day traffic at to of appears traffic Lions be over i n an counts one to from treat the realization observed. niques lying at Gate an an a of effects and which to the i n the process. us to previous weather, time about these should detector are convinces highly reproducible one cars which pass the time every the same i s large time, should apply least, ones? be underlying schedule enables the days arrival next say working nearly gives estimator example, underlying schedule. time An of (£) (southbound) knowledge For a nonstationary Poisson developed To counts on Figure i n 1974. individual Bridge, individual day This intensity the distribution traffic sought. existing composed apparent of the detector at uncertainty day variables "In p a r t i c u l a r , to counts to counts mainly the X ( t ) was total i n decision-making. the day. f o r the reflects f o r the i n Vancouver. "typical" exogenous added the t o be location there be on a function day, one traffic d e t e c t o r on intensity A of collected our I f one compared the quite S p e c i f i c a l l y , we particular day; were with to thus to make the small. In mathematically, model expect variance in process chapters would i s assumed and study seek a the the order a to techunder- linear be 76 estimator, (5.2.1) X , of the intensity X ( t ) = y +/^ L t where h ( t ; s ) subscripted assumption chapter y^ t o i n d i c a t e t h a t t h a t m(t)=y X ( t ) , i n the form h(t;s)d(N(s)-H(s) ) ; i s a s o l u t i o n o f (5.1.1). 4 about (5.2.2) T function, i n this i s constant. t h e k e r n e l , we 2 a that h e r e we write e x a m p l e we d r o p t h e With require C o v ( X ( t ) ,X ( ) ) = a (t ^ Note the assumptions i n that S S and .2 (5.2.3) K(t;s) = K(t-s) - °y(\ t - s | ) +y y, f c U We a r e now faced with t the problem of f i n d i n g estimates 2 for y^'s. This spection and a , and a l s o i s where, comes i n . essentially any o the constants following the Bayesian The m o t i v a t i o n the desire t o base available information, information a method o f o b t a i n x n g o f some o t h e r recipe, the intro- f o r B a y e s i a n methods i s c a l c u l a t i o n s and d e c i s i o n s whether nature, i t i s sample such as t h a t on information based on or past experience. We Barnett values shall use e m p i r i c a l Bayes (4, pp. 189-200), published collected from 1 methods as suggested t h e 1974 d a t a b y L e a , N.D. a t hand, and a l s o and A s s o c i a t e s t h e same l o c a t i o n w a s by used. (16b) where some data 77 CALCULATIONS: Consider t h e data i n T a b l e 5.1. TABLE HOURLY V A R I A T I O N : Time u t appealing and Day 10 11 1,240 11 - 12 1,140 12 - 13 950 13 - 14 1,400 14 - 15 1,150 15 - 16 16 - •17 1,300 17 - 18 1,300 • 1,175 Traffic t o use t h i s data, with Unit, traffic have of Vancouver to obtain i t i s convenient y^. v a r i e s would be increased counts. to divide expect 1966. One we intuitively figures But t h i s that mean hourly, 12th t since t h e 1966 as t h e ^ ' s . since the p r i o r t o l e t y^ change f o r every vehicle s a t i s f a c t o r y ; one would might City information five-minute possibility 1966 Volume use the r e s u l t i n g values entirely BRIDGE, 1,550 i n our c a l c u l a t i o n s dealing GATE - 10 f o r t h e 1974 thus are order LIONS 9 Source: In of 5.1 cannot t h e volume Calculations by 12 be of show t h e 78 h o u r l y v o l u m e f o r 1974 o f 1966 was t o s t e p up was, 1,050; t h u s t h e 1966 on t h e a v e r a g e , i n t h e r a t i o o f 11:7. was f l u c t u a t i o n s and o t h e r c o n s i d e r a t i o n s , the R e c a l l t h a t o u r c h o i c e o f p(u)=<S autoregressive process. in arriving at a value was used H o w e v e r , t o make room f o r used i n the computer program p r e s e n t e d one-step that Hence, i n o r d e r v a l u e s , t h e c o r r e c t i o n f a c t o r 11/7 i n t h e c a l c u l a t i o n o f t h e y^_. sampling 1,650, w h i l e f o r a, and 01 factor i n Appendix 3/2 (B). I I i s i n the form of u T h i s k n o w l e d g e may a l s o i n choosing a be e x p l o i t e d an e s t i m a t e for 2 a . Computer programs were r u n f o r the f o l l o w i n g v a l u e s o f a and a: The shown by REMARK 0 . 01 1.0 0.01 2.0 0.05 2.0 0.2 2.0 r e s u l t i n g estimates of the i n t e n s i t y the graph of F i g u r e s The (D, f u n c t i o n , X ( t ) , are E). g r a p h s i n d i c a t e t h a t , as i n t h e o i l w e l l f i x e d a, as a i n c r e a s e s , t h e e s t i m a t o r X d a t a - s e n s i t i v e , and hence irregular. becomes example f o r increasingly 79 We is a now consider prescribed nonconstant 5.3, The disaster or Cox are i s defined and preceding the this Lewis this that sort this of s e c t i o n may s e c t i o n where and as Bayes an illustrate discussed empirical period pp. formal as periods, 1875 to 2-6). A death of statistical alternative model. will regarded the ( 8, here our 400-day i n v o l v i n g the more data; apply for Lewis accident example be EA(t)=m(t) t. Britain of problem mean successive discuss set a n a l y s i s , we in Cox a mining and analyzing i s hoped ticular, as from of the DISASTERS numbers, taken Cox Lewis detail the where function d i s a s t e r s i n Great data for It more gives m o r e men. methods to (F) coal-mining 1951. 10 example COAL-MINING Figure of another in an this in a work. extension ideas were little In to cited par- the as 2 justification tions now of section require Fig. (5.3.1) f o r our m(t) (F) (5.2) s t i l l to a be suggests Cosht = choice ^(e^e of a and hold with continuous ) , CO Suppose the exception function consideration t a. of <t<°°. the of the t. function assump- that we' 80 Since, of as F i g . disasters the function m(t) t o have (5.3.2) where (F) i n d i c a t e s , the average i s decreasing with (5.3.1) time, i s unsatisfactory. side of L e t us choose, instead t h e form m(t) = | ( e 0<a,b$l a +e" t b t ), -l$t<l and b>a. restriction The r e s t r i c t i o n s on a and b a r e imposed that o f occurrence the p o s i t i v e The Note rate with on t i s t o prevent these m(t) from for similar r e s t r i c t i o n s m(t) takes k becoming 1 1\ l ~l ~ i -1 1 ~l ~ 1 0 1 reasons. the following 1 l\ too large t ^ form 81 The s a m p l e mean o f knowledge, and 3=0.5 and also to b-1.0. the given simplify Using the data the 2; calculations, techniques "2 o i s about of with we section this choose (5.2), 2 =0.25 i s c h o s e n a s a=0.05 i s found an t o be estimate a good COMPUTATION: As has and of calculation the method ceding with section. the compute the Note a l r e a d y been on t in function m(t), that i n this . Also, are said, the form essentially to note (5.3.2), each value time i n the value of those t the of that i n te[-30,30j , w h i l e example the choice. It i s interesting restriction a of kernel the pre- conformity subprogram to i s d i v i d e d by f o r m(t) we 30. require te[-i,l] . Another curiously, interesting the values feature of a=1.75 and the b=1.0 present example were used w i t h i s that, the same ~ 2 values of produced which of are a a and values of as above. the estimator of p r e t t y much t h e computation as given same. by the b=1.0 a = 0 . 7 5 ; b=1.0 Thus one i s persuaded and i s w h a t was this to done the out that both intensity Nevertheless, computer Time Values a=0.5; I t turned are (sec.) as the Cost 1. 34 2.791 1. 39 here. to the values function, A (t), cost and time follows: 2.784 stick programs ($) a=0.5 and b=1.0, 82 Figures and so a again (G) that i t is estimator and one can seen to (H) see that become display the for more A_ effects a, fixed Fredholm for of the a integral optimal linear intended are as available occur agree Zidek (7) of solving with Grandell that only respect is g e n e r a l l y a p p l i c a b l e when obtaining the numerical solution In chosen the to optimal numerical reflect calculations of our 4 the a and linear of applied of (22), of the emphasis estimator, examples, a A A special l the i n chapters some of equations and , are the which Clevenson linear order A ij estimation, placed , which involves form the of which process. was covariance the quest function,A estimator, class main i n our the mathematics. second about solving These A i s any obtain the approximation type linear (1.2.6). required to 3. Fredholm equation belief Here versa. intensity incomplete, restricted 5 of Several optimal Bayes 1, to the (10), W h i t t l e the a. a causes encountered i n chapter areas in vice methods II of necessarily i n many in chapters type process. is Thus, with of presented survey, methods frequently We a increase exact estimator Poisson also an o and varying a of REMARKS various equation nonstationary techniques and 2 presents of d a t a - s e n s i t i v e and CONCLUDING Chapter for various values on the kernel structure. numerical example was The where 83 the assumption of constant more e x t e n s i v e t h a n the assumption To here the had chapter the been i n the 4, exact r e q u i r e d f o r the knowledge, the exact applied to r e a l - l i f e started, suggested i s dropped m(t)=y, simple are case much where upheld. author's not w o r k was is those mean, even though optimal of linear situations the methods had literature. a comparison techniques approximate estimator, A before this f r e q u e n t l y been It is gratifying the considered to note that estimator, in and shows t h a t t h e bound, i given case by of Clevenson the and oilwell Zidek ( 7 , p. discovery Another interesting , the 21), is satisfied in the example. observation i s that, in choosing '2 y, a a and Bayesian may use It other approach empirical i s worth constants which i s a very our i t appears is of i n order is careful usually safely At least done t o assume t h a t t h e t h a t such here about choice examination helpful. maintain the the an of the likely data, I n many assumption at assumption i t i s much special to y i e l d cases, t h a t m(t)=y, one may more since, cause A word of t h e mean f u n c t i o n , least the i t is i f mean i s c o n s t a n t choice of i s very model, here. A t o become more d a t a - s e n s i t i v e . "off-the-mark" A i n our n o t i n g t h a t i n some s i t u a t i o n s not estimators tool. B a y e s m e t h o d s a s was satisfactory results, useful occur from the caution m(t); intangible an results. in graphical form, h o w e v e r , one may a constant. Also one 84 may use the optimal yield the fairly function, large linear good A(t). time approximate estimator, results for estimator, A , since the estimation T these A , instead methods of the of often intensity CO A. Histogram estimators of the intensity of wildcat oil well discoveries using class widths of (a)30days ,(b)90days and (c) 360 days . 00 B. A comparison of the exact optimal linear estimator and the approximate solution of the intensity function of the oilwell discovery process. minute v e h i c l e counts — — ro co ro o~> O 5 o •£» o o o o o ro o CO CO 70 V 60 -o o \ °- I 50 c £ a = 0.01 , cr = 2 . 0 \ 140 X CL X in ° 130 \ a = 0 . 0 1 , a- = l . o E \ 120 I 0 00& \ 1 -54-5© 9AM 10AM 1 I 1 -40 HAM ! -30 1PM 12N00N I -20 1 -10 1.30PM 1 0 Time of PP^ I 10 day 3Pk 20 4 PM 1 30 i 5 , 1 40 D. The optimal linear estimator of the intensity function of the Lions Gate bridge process using various prior parameters. P M i l 50 54 6PM 89 190 180 h I 70 160 -a \ " \ \ \ \ 150 a = 0.05, cr = 2.0 o S. 140 a> | in v_ \ 130 /—• \ .•1 1 / / \ /: \ 120 <D CL \ 0 V a a o a; % 2 1 0 \ 0 v 90 a = 0 . 0 2 , cr = 2.0 80 70 10AM HAM 60 54-50-40 -30 9AM 1 1 I2N00N IPM 1.30PM 2PM 3,PM 4PM 6 30 -20 -10 10 20 T i m e of day 5PM 40 50 54 6PM E. The optimal linear estimator of the intensity function of the Lions Gate bridge process using various prior parameters. o •o o 9 V— <D CL >» O TD o o 8 7 6 cu CL (/> l. co to o to TO 5 4 3 ^_ o 2 co E 2 I 0 0 10 20 40 30 T i m e (in 4 0 0 d a y 50 units) F. Cool- mining disasters in Great Britain for the.period 1875 -1951 Numbers in successive 400-day periods. 60 4.5 4.0 po 3.5 CU D. >» 3.0 o X3 O o 2.5 I— c u CL in 2.0 cu i sa CO o cu E. 3 2 1 .5 1.0 0.5 0 -30 1875 -20 - 10 0 Year 10 20 30 1951 G. The optimal linear estimator of the intensity function of the mining disaster process using various prior parameters. CN 4.Of T3 O \_ CD CL a X3 O O cu 3.5 a = 0.05 ,<r = 0.5 / \ 3.0 \ \ a = 0.01, cr = 0.5 2.5 .\- CL £ V) 2.0 D W o 1.5 CD E 1.01— 0.5 -30 1875 20 10 0 Year 10 20 H. The optima! linear estimator of the intensity function of the mining disaster process using various prior parameters. 30 1951 93 BIBLIOGRAPHY A n d e r s o n , B.D.O., " K a r h u n e n - L o e v e e x p a n s i o n f o r a c l a s s o f c o v a r i a n c e s " ; P r o c . I n t e r n a t i o n a l C o n f e r e n c e on Systems S c i e n c e , H o n o l u l u (1969), 779-782. A n s e l o n e , P.M. a n d R.H. M o o r e , " A p p r o x and o p e r a t o r e q u a t i o n s " ; J . Math. 268-277. solutions of integral A n a l . Appl., 9(1964), A t k i n s o n , K., " T h e n u m e r i c a l s o l u t i o n o f t h e F r e d h o l m I n t e g r a l e q u a t i o n s o f t h e s e c o n d k i n d " ; S.I.A.M. J . N u m e r . A n a l . V o l . 4, No. 3 ( 1 9 6 7 ) . B a r n e t t , V., C o m p a r a t i v e S t a t i s t i c a l Sons (1973), London. I n f e r e n c e : Clowes and B a g g e r o e r , A . B . , "A s t a t e - v a r i a b l e a p p r o a c h t o t h e s o l u t i o n of Fredholm i n t e g r a l e q u a t i o n s " ; I.E.E. Trans. V o l . ITNo. 5 ( 1 9 6 9 ) . B u c k n e r , H.F., " N u m e r i c a l M e t h o d s o f I n t e g r a l E q u a t i o n s " ; S u r v e y o f N u m e r i c a l M e t h o d s ( T o d d ) , McGraw H i l l (1962), N.Y., 4 3 9 - 4 6 7 . K a l m a n , R . E . a n d R. B u c y , " R e s u l t s i n l i n e a r f i l t e r i n g a n d p r e d i c t i o n t h e o r y " ; T r a n s . ASME, J . B a s i c E n g . ( 1 9 6 1 ) , 83, 9 5 - 1 0 8 . C l e v e n s o n , M.L. a n d J . V . Z i d e k , " B a y e s L i n e a r e s t i m a t o r s of the I n t e n s i t y f u n c t i o n o f the n o n s t a t i o n a r y P o i s s o n p r o c e s s " ; t o a p p e a r i n J o u r . Amer. S t a t i s t . Assoc. Cox, D.R. a n d P.A.W. L e w i s , " T h e s t a t i s t i c a l a n a l y s i s o f a s e r i e s o f e v e n t s (1966); Methuen & Co., London. Davis, P.J., "Errors of numerical approximation f o r a n a l y t i c f u n c t i o n s " ; Survey o f N u m e r i c a l Methods ( T o d d ) , McGraw H i l l ( 1 9 6 2 ) , N.Y., 4 6 8 - 4 8 4 . G r a n d e l l , J . , "On s t o c h a s t i c p r o c e s s e s g e n e r a t e d b y s t o c h a s t i c i n t e n s i t y f u n c t i o n " ; Skand. A k t u r . , Uppsala, 54(1971), 204-40. Grandell, J . , "Statistical inference Poisson processes"; Stochastic (1972), 90-120. Green, C D . , London. Integral a f o r doubly stochastic Point Processes (Lewis) E q u a t i o n Methods; Nelson (1969), 94 12b. H e l s t r o m , C.W., S t a t i s t i c a l theory of Pergamon (1968), 436-47. 13. H i l d e r b r a n d , F.B., (1952), N.J., 14. I s a a c s o n , E. a n d H.B. Keller, John Wiley (1966), N;Y. 15. Johnston, J . , Econometric 243-65. 16. Kopal, 16b, Lea, 16c. L o e v e , M., P r o b a b i l i t y 464-93. 17. N e w e l l , G.F. a n d G.A. Sparks, " S t a t i s t i c a l p r o p e r t i e s of t r a f f i c , counts"; Stochastic Point Processes (Lewis); Wiley I n t e r s c i e n c e (1972), N.Y. 18. S l e p i a n , D. a n d T.T. K a d o t a , " F o u r I n t e g r a l e q u a t i o n s d e t e c t i o n t h e o r y " ; S.I.A.M. J . A p p l . M a t h . , V o l . No. 6 ( 1 9 6 9 ) , 1 1 0 2 - 1 7 . 19. Van 20. Varah, 21. W i n k l e r , R.L. a n d W.C. Hays, S t a t i s t i c s , I n f e r e n c e a n d D e c i s i o n ( 1 9 7 0 ) ; N.Y., 22. Whittle, Z., signal Methods of A p p l . Math; 381-461. Numerical A n a l y s i s of detection; Prentice-Hall Numerical Methods; McGraw-Hill Analysis; 2nd Edn., Methods; (1963), Wiley, N.Y., N.Y. N.D., and A s s o c i a t e s , "Measures t o i m p r o v e bus transit and t r a f f i c f l o w a c r o s s t h e f i r s t n a r r o w s b r i d g e " , (May, 1967). Theory; D. Van Nostrand (1955), T r e e s , H.D., D e t e c t i o n , E s t i m a t i o n and M o d u l a t i o n P a r t I (Chaps. 4 and 6 ) , W i l e y ( 1 9 6 8 ) , N.Y. J., Private P., J . R . S . S . "On B, N.Y., of 17, Theory, communication. Probability, 444-504. smoothing of p r o b a b i l i t y d e n s i t y 2 0 , No. 2 ( 1 9 5 8 ) , 3 3 4 - 4 3 . functions"; 95 APPENDIX COMPUTER PROGRAM: OF THE 1 THE O P T I M A L L I N E A R E S T I M A T O R I N T E N S I T Y F U N C T I O N OF THE O I L W E L L D I S C O V E R Y C *** OILWELL DISCOVERY C *** EXACT REAL *** SOLUTION USING WHITTLE'S DS(120), EXACT(225) DIMENSION T M ( 2 2 5 ) , C E X ( 2 2 5 ) , R E A D 2 , MU, 2 SIGMA, A L F A FORMAT(3F4.2) N=108 READ4, 4 (P(I), DS(I), = A L F A * S I G M A * SIGMA/MU T T A = SQRT(2.*WK + ALFA*ALFA) VK = WK/TTA PRINT6, 6 FORMAT(' WK,TTA,VK ',3F10.4) UT1 = 1 1 0 . 5 UT = 221. T= - 1 1 0 . K=0 8 IF(T.GT.110.) SUM= 0. J 10 1=1,N) FORMAT(F5.0, F3.0) WK SUGGESTION MU,SIGMA,ALFA,EXACT DIMENSION P ( 1 2 0 ) , = 0 J = J+l GOTO 20 PROCESS CAT(225) *** 96 IF(J.GT.N) GOTO 12 R = ABS(T-P(J)) S=T+P(J) PA1=-TTA*R I F ( P A 1 . L T . - 1 0 0 . ) A=0. A=EXP(PA1) PA2=-TTA*(UT+S) I F ( P A 2 . L T . - 1 0 0 . ) B=0. B=EXP(PA2) PA3=-TTA*(UT-S) IF(PA3.LT.-100.) C= C=0 EXP(PA3) ABC = A+ B + C IF(ABC.EQ.O.) GOTO 10 SUM=SUM+D'S ( J ) *ABC GOTO 10 12 CVA = VALU(TTA,UT1,T) ADVA = M U * V K * ( 1 . - CVA) K = K + 1 EXACT(K) = VK*SUM + ADVA P R I N T 1 4 , T, E X A C T ( K ) 14 FORMAT( 1 T .= T + 1 GOTO 8 20 NT=221 M=NT - 1 N=NT ', F 6 . 0 , F 1 0 . 2 ) 97 TM(1)=1. DO 22 1=1,M •TM(I+1)=TM(I)+1. CONTINUE DO 2 4 1=1,NT CAT(I)=TM(I) CEX(I)=EXACT(I) CONTINUE *** NOW CALL CALL SUBROUTINE AJOA TO DO THE PLOTTING AJOA(CAT,CEX,NT) TERMINATE PLOTTING CALL AND" STOP ** PLOTND STOP END . S U B R O U T I N E A J O A ( X , Y, DIMENSION N) X ( N ) , Y(N) CALL SCALE(X,N,10.0,XMIN,DX,1) CALL SCALE(Y,N,10.0,YMIN,DY,1) CALL A X I S ( 0 . , 0 . , CALL A X I S ( 0 . , ' T I M E * , - 4 , 1 0 . , 0., 0., 'EXACT', CALL LINE(X,Y,N,1) C A L L P L O T ( 1 2 . 0 , 0., RETURN END -3) XMIN, DX) 5, 1 0 . , 9 0 . , Y M I N , DY) 98 FUNCTION VALU(X,Y,Z) REAL X,Y,Z Al= -X*ABS(-Y-Z) Bl= -X*ABS(Y-Z) I F ( A 1 . L T . - 1 0 0 . . A N D . B 1 . L T . - 1 0 0 . ) P F = 0. I F ( A 1 . G E . - 1 0 0 . . A N D . B 1 . L T . - 1 0 0.) P F = E X P ( A l ) IF(Al.LT.-lOO..AND.Bl.GE.-lOO.) PF=-EXP(Bl) IF(Al.GE.-lOO..AND.B1.GE.-100.) PF=EXP(Al)-EXP(Bl) A2= -X*(Z+Y) B2 = - X * ( 3 * Y +Z) I F ( A 2 . L T . - 1 0 0 . . A N D . B 2 . L T . - 1 0 0 . ) QF=0. IF(A2.GE.-100..AND.B2.LT.-100.) QF=EXP(A2) I F ( A 2 . L T . - 1 0 0. .AND.B 2 . G E . - 1 0 0 . ) QF= - E X P ( B 2 ) IF(A2.GE.-100..AND.B2.GE.-100.) QF=EXP(A2) - EXP(B2) A3 - - X * ( Y - Z ) B3 = - X * ( 3 * Y - Z ) I F ( A 3 . L T . - 1 0 0 . . A N D . B 3 . L T . - 1 0 0 . ) RF = 0 . IF(A3.GE.-100..AND.B3.LT.-100.) RF= EXP(A3) IF(A3.LT.-100..AND.B3.GE.-100.) RF= -EXP(B3) I F ( A 3 . G E . - 1 0 0 . .AND.B 3 . G E . - 1 0 0.) R F = E X P ( A 3 ) VALU = ,(PF +.QF RETURN END + RF)/X + X - EXP(B3) 99 COMPUTER OUTPUT: THE A P P R O X I M A T E AND THE OILWELL DISCOVERIES; EXACT L I N E A R ESTIMATORS, X AND X oo R E S U L T S FOR T . L o=0.5, a=0.05. A (T) T L T A (T L -110. •1.21 0. 50 -89. 0 .88 0.88 -109 . 1.16 0.58 -88 . 0 . 91 0 . 91 -108. 1. 08 0.60 -87. 0.94 0 .94 -107. 1.03 0. 64 -86. 0 .93 0.93 -106 . 0.96 0. 63 -85. 0 .92 0.92 -105. 0. 88 0. 61 -84 . 0.91 0.91 -104 . 0.83 0.62 -83. 0. 86 0.86 -103. 0. 81 0. 64 -82 . 0 .84 0.84 -102. 0.83 0. 68 -81. 0 . 81 0 . 82 -101. . 0. 83 0. 72 -80. 0.82 0. 82 -100. 0.87 0. 78 -79. 0.81 0 .82 -99. 0.83 0.76 -78. 0.77 0.77 -98. 0.79 0.73 -77. 0. 75 0.76 -97. 0. 77 0.72 -76. 0.73 0.73 -96. 0.78 0. 74 -75. 0.69 0.69 -95. 0. 82 0 . 79 -74 . 0.68 0.68 -94. 0. 79 0. 76 -73. 0. 69 0.70 -93. 0. 78 0.76- -72. 0 .73 0.73 -92. 0.80 0. 78 -71. 0.79 0. 80 -91. 0.81 0.80 -70 . 0. 89 0.8 9 -90. 0. 85 0.84 -69 . 0.94 0 . 94 100 T A (T) A (T) CO T A (T) A T (T) -65. 0.93 0.94 -40. 0.60 0.61 -64. 0.81 0.81 -39.' 0.67 0.68 -63. 0.71 0.71 -38. 0.77 0.77 -62. 0.64 0.64 -37. 0.86 0.86 -61. 0.59 0.59 -36. 0.94 0.94 -60. 0.56 0.56 -35. 0.98 0.99 -59. 0.55 0.55 -34. 0.96 0.96 -5-8. 0.56 0.56 -33. 0.93 0.93 -57. 0.59 0.59 -32. 0.90 0.90 -56. 0.60 0.60 -31. 0.86 0.87 -55. 0.53 0.53 -30. 0.86 0.87 -54. 0.47 0.48 -29. 0.86 0.86 -53. 0.43 0.44 -28. 0.85 0.85 -52. 0.41 0.42 -27. 0.86 0.87 0.37 0.37 -26. 0.88 0.88 -50. 0.33 0.34 -25. 0.93 0.93 -49. 0.31 0.32 -24. 0.90 "0.90 -48. 0.30 0.31 -23. 0.87 0.87 -47. 0.30 0.30 -22. 0.87 0.87 -46. 0.31 0.31 -21. 0.87 0.87 -45. 0.32 0.33 -20. 0.89 0.90 -44. 0.35 0.36 -19. 0.88 0.89 -43. 0.39 0.40 -18. 0.90 0.91 -42. 0.44 0.45 -17. 0.96 0.96 -41. 0.51 0.52 -16. 1.01 1.01 -51. . 101 T A (T) °o (T) A T A (T) a> L (T) A L -15. 0.99 0.99 10. 0.77 0.78 -14. 0.97 0.98 11. 0.83 0.83 -13. 0.99 0.99 12. 0.88 0.88 -12. 0.93 0.94 13. 0.92 0.93 -11. 0.88 0.88 14. 0.89 0.90 -10. 0.85 0.86 15. 0.90 0.90 -9. 0.86 0.86 16. 0.9 3 0.94 -8. 0.86 0.86 17. 0.93 0.94 -7. 0.85 0.86 18. 0.89 0.90 -6. 0.85 0.85 19. 0.89 0.89 -5. 0.87 0.87 20. 0.87 0.88 -4. 0.92 0.93 21. 0.90 0.90 -3. 0.94 0.94 22. 0.91 0.92 -2. 0.95 0.96 23. 0.93 0.93 -1. 0.93 0.93 24. 0.91 0.91 0. 0.86 0.87 25. 0.84 0.85 1. 0.83 0.84 26. 0.81 0.82 2. 0.80 0.80 27. 0.81 0.82 3. 0.76 0.76 28. 0.84 0.84 4. 0.70 0.71 29. 0.89 0.90 5. 0.68 0.68 30. 0.98 0.99 6. 0.67 0.68 31. 1.04 1.04 7. 0.7 0 0.70 32. 1.06 1.07 8. 0.74 0.75 33. 1.08 1.09 9. 0.74 0.75 34. 1.15 1.15 102 T A (T) CO A (T) J_, T A (T) CO A (T) J_, 35. 1.18 1.19 60. 0.42 0.42 36. 1.18 1.19 61. 0.42 0.43 37. 1.23 1.24 62. 0.40 0.41 38. 1.26 1.26 63. 0.37 0.37 39. 1.15 1.15 64. 0.34 0.34 40. 1.04 1.05 65. 0.32 0.33 41. 0.94 0.94 66. 0.32 0.32 42. 0.87 0.88 67. 0.32 0.33 43. 0.83 0.84 68. 0.34 0.34 44. 0.79 0.80 69. 0.37 0.37 45. 0.78 0.78 70. 0.41 0.41 46. 0.79 0.80 71. 0.42 0.43 47. 0.80 0.80 72. 0.42 0.42 48. 0.76 0.77 73. 0.43 0.44 49. 0.71 0.72 74. 0.45 0.46 50. 0.70 0.70 75. 0.46 0.46 51. .0.67 0.67 76. 0.48 0.49 52. 0.62 0, 63 77. 0.52 0.52 53. 0.57 0.57 78. 0.57 0.58 54. 0.53 0.53 79. 0.65 0.65 55. 0.51 0.52 80. 0.67 0.68 56. 0.48 0.48 81. 0.69 0.69 57. 0.46 0.46 82. 0.66 0.66 58. 0.45 0.46 83. 0.61 0.62 59. 0.43 0.43 84. 0.59 0.59 103 T X (T) X T (T) T X (T) X T (T) 85. 0.55 0.55 98. 0.34 0.31 86. 0.53 0.54 99. 0.33 0.29 87. 0.53 0.53 100. 0.34 0.29 88. 0.55 0.55 101. 0.35 0.29 89. 0.59 0.59 102. 0.38 0.30 90. 0.61 0.61 103. 0.42 0.32 91. 0.62 0.62 104. 0.47 0.36 92. 0.58 0.57 105. 0.54 0.40 93. 0.52 0.52 106. 0.63 0.46 94. 0.4 9 0.4 7 107. 0.71 0.50 95. 0.43 0.41 108. 0.78 0.52 96. 0.39 0.37 109. 0.83 0.52 97. 0.36 0.33 110. 0.89 0.50 104 APPENDIX COMPUTER PROGRAM: OF THE 2 THE O P T I M A L L I N E A R I N T E N S I T Y FUNCTION OF THE L I O N S GATE BRIDGE PROCESS ** L I O N S GATE B R I D G E E X A C T S O L U T I O N DATA FROM 9A.M. REAL ESTIMATOR TO 6 ** P.M. MU,SIGMA,ALFA,FACT F A C T I S A M U L T I P L Y I N G FACTOR TO S C A L E UP M(T) DIMENSION P(120),DS(120),SK(10),EXACT(120) DIMENSION TM(120), READ 2, CEX(109), CAT(109) SIGMA,ALFA,FACT FORMAT(3F6.3) N=109 M=9 T=-54 . UT=109. UT1=54.5 READ 4 , ( P ( I ) , DS(I), FORMAT(F6.0,F6.0) READ5, (SK(I), FORMAT(F6.01) L=0 K=0 KT=0 L=L+1 Y=SK(L) 1=1,M) 1 = 1,N) 105 C A L L F U N C T I O N UTVAR TO COMPUTE NEW M ( T ) MU=UTVAR(FACT,Y) RMT=SQRT(MU) WK=ALFA*SIGMA*SIGMA/MU TTA=SQRT(2.*WK + A L F A * A L F A ) VK=WK/TTA 8 IF(T.GT.54) GOTO 20 ** CHANGE M ( T ) A F T E R IF(KT.GT.12) SUM EVERY 12TH INTERVAL GOTO 7 =0. J=0 10 J=J+1 IF(J.GT.N) GOTO 1 2 - R=ABH;(T-P ( J ) ) S=T+P(J) PA1=-TTA*R IF(PA1.LT.-100.) A=0 A=EXP(PA1) PA2=-TTA*(UT+S) IF(PA2.LT.-100.) B=0. B=EXP(PA2) PA3p-TTA*(UT-S) I F ( P A 3 . L T . - 1 0 0 . ) C=0. C=EXP(PA3) ABC= A+B+C IF(ABC.EQ.O-) SUM = SUM + GOTO 10 DS(J)*ABC 106 GOTO 10 ** 12 ** TO COMPUTE X T ( S ) C V A = V A L U ( T T A , U T 1 , T) TO COMPUTE H T ( S ) A F T E R C A L C U L A T I N G X T ( S ) . ** HAVE = CVA/RMT ADVA = M U * V K * ( 1 . - HAVE) K=K+1 E X A C T ( K ) = V K * S U M + ADVA + MU P R I N T 1 4 , T, E X A C T ( K ) 14 FORMAT(' ' , F 6 . 0 , F 1 0 , 2) KT=KT+1 T=T+1 GOTO 8 20 NT=109 TM(1)=1. DO 2 1 1=1,N TM(I+1)=TM(I)+1. 21 CONTINUE DO 22 1=1,NT CAT(I)=TM(I) CEX(I)=EXACT(I) 22 ** CONTINUE NOW C A L L S U B R O U T I N E CALL * AJOA(CAT,CEX,NT) TERMINATE CALL STOP A J O A TO DO THE P L O T T I N G ** P L O T T I N G AND STOP ** PLOTND 107 END C S U B R O U T I N E A J O A ( X , Y, DIMENSION N) X ( N ) , Y(N) CALL SCALE(X,N,10.0,XMIN,DX,1) CALL SCALE(Y,N,10.0,YMIN,DY,1) CALL A X I S ( 0 . , 0 . , CALLAXIS(0., ' T I M E ' , - 4 , 1 0 . , 0., 0., 'EXACT', XMIN, 5, 1 0 . , 9 0 . , Y M I N , CALL LINE(X,Y,N,1) C A L L P L O T ( 1 2 . 0 , 0., -3) RETURN END C C FUNCTION REAL UTVAR(X,Y) X,Y PRO=X*Y UTVAR=SQRT(PRO) RETURN END C FUNCTION REAL VALU(X,Y,Z) X,Y,Z Al= -X*ABS(-Y-Z) Bl= -X*ABS(Y-Z) DX) IF(A1.LT.-100..AND.B1.LT.-100.) PF = IF(A1.GE.-100..AND.B1.LT.-100.) 0. PF=EXP(Al) DY) 108 I F ( A 1 . L T . - 1 0 0..AND.B1.GE.-100.) P F = - E X P ( B I ) IF(A1.GE.-100..AND.B1.GE.-100.) P F = E X P ( A l ) - E X P ( B I ) A2= B2 -X*(Z+Y) = - X * ( 3 * Y +Z) IF(A2.LT.-100..AND.B2.LT.-100.) QF=0. IF(A2.GE.-100..AND.B2.LT.-100.) QF=EXP(A2) I F ( A 2 . L T . - 1 0 0 . .AND•B 2 . G E . - 1 0 0 . ) QF= -EXP(B2) I F ( A 2 . G E . - 1 0 0 . .AND.B 2 . G E . - 1 0 0 . ) Q F = E X P ( A 2 ) - E X P ( B 2 ) A3 = -X*(Y-Z) B3 = - X * ( 3 * Y - Z ) IF(A3.LT.-100..AND.B3.LT.-100.) RF = 0. I F ( A 3 . G E . - 1 0 0 . . A N D . B 3 . L T . - 1 0 0.) RF= EXP(A3) I F ( A 3 . L T . - 1 0 0 . .AND.B 3 . G E . - 1 0 0 . ) RF= -EXP(B3) IF(A3.GE.-100..AND.B3.GE.-100.) VALU = RETURN END ( P F + OF + R F ) / X + X RF=EXP(A3) - EXP(B3) 109 COMPUTER OUTPUT: THE LIONS GATE B R I D G E OPTIMAL LINEAR ESTIMATOR, R E S U L T S FOR a=2.0, PROCESS A . T a=0.2. A (T) T A (T) 54. 167.99 -33. 118.86 53. 167.06 -32. 119.81 52. 164.14 -31. 118.14 51. 160.44 -30. 116.04 50. 154.74 -29. 117.08 49. 148.83 -28. 117.80 48. 145.01 -27. 117.38 47. 140.18 -26. 119.11 46. 136.26 "25. 120.07 45. 131.25 -24. 120.30 4-4. 127.12 -23. 121.23 43. 121.96 -22. 118.29 42. 116.97 -21. 111.63 41. 114.64 -20. 107.17 40. 111.57 "19. 106.10 39. 111.05 -18- 103.75 38. 111.39 "17. 102.03 37. 113.03 "16. 100.53 36. 114.99 "15. 99.50 35. 117.48 -14. 97.59 34. 117.46 -13. 96.70 L L 110 A (T) T L T A (T) L 98.48 13, 95.62 -11. 100.00 14, 95.71 -10. 102.80 15, 96 . 28 -9. 104.89 16, 96.85 -8 . 108.44 17 , 96.80 -7 . 110.80 18 , 96 . 85 -6 . 112.50 19 , 96.94 -5. 113.69 20, 97.86 -4. 114.57 21, 98.90 -3. 115.16 22 , 1 0 0 .45 -2 . 113.98 23, 101.10 -1. 114.13 24 , 105.68 0. 114.05 25. 108.08 1. 113.75 26, 109.82 2. 113.05 27 , 111.08 3. 111.85 28, 112.17 4. 110.20 29, 113.41 . 5. 108.18 30. 114.50 6, 106.06 31, 115.23 7. 10 3.85 32. 118.75 8. 101.46 33. 121.37 9. 98.84 34. 122.75 10. 9 5 . 49 35. 124.38 11. 95.90 36. 123.76 12. 95.41 37. 119.45 -12. Ill T A Li (T) T A (T) T J-i .38. 116.63 47. 88.13 39. 111.52 48. 89.70 40. 105.69 49. 91.04 41. 102.30 50. 89.87 42. 97.44 51. 89.08 43. 91.68 52. 85.83 44. 90.67 53. 82.93 45. 89.47 54. 80.64 46. 88.36 112 APPENDIX 3 COMPUTER PROGRAM: OF THE THE O P T I M A L L I N E A R E S T I M A T O R I N T E N S I T Y FUNCTION OF THE C O A L - M I N I N G D I S A S T E R P R O C E S S C *** COAL M I N I N G D I S A S T E R S * EXACT SOLUTION *** C R E A L MU, SIGMA, A L F A , D I V , FACT DIMENSION P ( 1 0 0 ) , DS(IOO), DIMENSION T M ( I O O ) , C E X ( 6 0 ) , SK(IOO), EXACT(IOO) CAT(60) READ2, SIGMA, A L F A , D I V , FACT, AA, 2 FORMAT(6F6.2) N = 60 M = 60 T = -30. UT = 6 1 . UT1 = 30.5 READ 4, 4 DS(I), I = FORMAT(F6.0, F6.0) DO 5 I = SK(I) 5 (P(I), = 1,N P(I)/DIV CONTINUE L = 0 K = 0 8 IF(T.GE.30.) L = L + 1 Y=AA*SK(L) GOTO 20 1,N) BB 113 Z=BB*SK(L) C *** C A L L F U N C T I O N UTVAR TO COMPUTE NEW MU=UTVAR(FACT,Y,Z) QMT=SQRT(MU) WK = ALFA*SIGMA*SIGMA/MU TTA = SQRT(2.*WK.+ ALFA*ALFA) VK = WK/TTA SUM= 0. J 10 = 0 J = J+l IF(J.GT.N) R = GOTO 12 ABS(T-P(J)) S=T+P(J) PA1=-TTA*R IF(PA1.LT.-100.) A=0. A=EXP(PAl) PA2=-TTA*(UT+S) IF(PA2.LT.-100.) B=0. B=EXP(PA2) PA3=-TTA*(UT-S) IF(PA3.LT.-100.) C=0. C= E X P ( P A 3 ) A B C = A+ B + C IF(ABC.EQ.O.) GOTO 10 SUM=SUM+DS(J)*ABC GOTO 1 0 C TO COMPUTE X T ( S ) M(T) *** 114 12 CVA TO = VALU(TTA,UT1,T) COMPUTE H T ( S ) QCVA - CVA/QMT ADVA = M U * V K * ( 1 . K = K + 1 EXACT(K) = VK*SUM + ADVA + P R I N T 1 4 , T, 14 FORMAT(' T = T + - QCVA) MU EXACT(K) ', F 6 . 0 , F 1 0 . 2 ) 1 GOTO 8 20 NT = TM(1) DO 60 = 1. 21 I = 1,N T M ( I + 1) .= T M ( I ) + 21 1. CONTINUE DO 22 I = 1,NT CAT(I)=TM(I) CEX(I)=EXACT(I) 22 CONTINUE *** NOW CALL * CALL SUBROUTINE AJOA AJOA(CAT,CEX,NT) TERMINATE PLOTTING CALL STOP END TO PLOTND AND STOP ** DO THE PLOTTING *** 115 S U B R O U T I N E A J O A ( X , Y, DIMENSION N) X ( N ) , Y(N) CALL SCALE(X,N,10.0,XMIN,DX,1) CALL SCALE(Y,N,10.0,YMIN,DY,1) CALL A X I S ( 0 . , 0 . , CALL A X I S ( 0 . , ' T I M E ' , - 4 , 1 0 . , 0., 0., 'EXACT', CALL LINE(X,Y,N,1) C A L L P L O T ( 1 2 . 0 , 0., -3) RETURN END C C FUNCTION UTVAR(X, REAL X,Y,Z A = EXP(Y) B= E X P ( - Z ) C = A + B CASH = X*C UTVAR = C A S H RETURN END C - Y, Z) XMIN, DX) 5, 1 0 . , 9 0 . , Y M I N , DY) 116 A (T) L T A (T) L •30. 3.05 -5. 2 . 32 -29, 3.25 -4 . 2.31 •28, 3.40 -3. 2 . 31 •27. 3.47 -2. 2 .29 •26, 3.53 -1. 2.26 •25, 3.56 0. 2.21 •24, 3.54 1. 2 .17 •23, 3.53 2. 2 .13 •22, 3.52 3. 2 .10 •21, 3.48 4. 2.05 •20, 3.42 5. 1.97 •19 , 3. 33 6. 1.92 •18, 3.25 7. 1.87 •17, 3.15 8. 1.85 •16, 3 . 03 9. 1.85 •15, 2.93 10. 1.88 •14, 2 . 84 11. 1.87 •13 1.71 12. 1.87 •12, 2 . 59 13. 1.89 •11, 2.50 14. 1.94 •10, 2 .44 15. 2 . 02 -9, 2.38 16. 2.10 -8, 2 . 34 17. 2.22 -7, 2 . 30 18 . 2.28 -6, 1.20 19 . 2 . 31 117 T X T (T) T X T (T) 20. 2.32 26. 2.42 21. 2.34 27. 2.36 22. 2.38 28. 2.27 23. 2.40 29. 2.22 24. 2.41 30. 2.15 25. 2.43
- Library Home /
- Search Collections /
- Open Collections /
- Browse Collections /
- UBC Theses and Dissertations /
- Estimating the intensity function of the nonstationary...
Open Collections
UBC Theses and Dissertations
Featured Collection
UBC Theses and Dissertations
Estimating the intensity function of the nonstationary poisson process Flynn, David Wilson 1976
pdf
Page Metadata
Item Metadata
Title | Estimating the intensity function of the nonstationary poisson process |
Creator |
Flynn, David Wilson |
Date Issued | 1976 |
Description | Let{N(t), -T |
Genre |
Thesis/Dissertation |
Type |
Text |
Language | eng |
Date Available | 2010-02-11 |
Provider | Vancouver : University of British Columbia Library |
Rights | For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use. |
DOI | 10.14288/1.0080093 |
URI | http://hdl.handle.net/2429/20061 |
Degree |
Master of Science - MSc |
Program |
Mathematics |
Affiliation |
Science, Faculty of Mathematics, Department of |
Degree Grantor | University of British Columbia |
Campus |
UBCV |
Scholarly Level | Graduate |
Aggregated Source Repository | DSpace |
Download
- Media
- 831-UBC_1976_A6_7 F95.pdf [ 4.3MB ]
- Metadata
- JSON: 831-1.0080093.json
- JSON-LD: 831-1.0080093-ld.json
- RDF/XML (Pretty): 831-1.0080093-rdf.xml
- RDF/JSON: 831-1.0080093-rdf.json
- Turtle: 831-1.0080093-turtle.txt
- N-Triples: 831-1.0080093-rdf-ntriples.txt
- Original Record: 831-1.0080093-source.json
- Full Text
- 831-1.0080093-fulltext.txt
- Citation
- 831-1.0080093.ris
Full Text
Cite
Citation Scheme:
Usage Statistics
Share
Embed
Customize your widget with the following options, then copy and paste the code below into the HTML
of your page to embed this item in your website.
<div id="ubcOpenCollectionsWidgetDisplay">
<script id="ubcOpenCollectionsWidget"
src="{[{embed.src}]}"
data-item="{[{embed.item}]}"
data-collection="{[{embed.collection}]}"
data-metadata="{[{embed.showMetadata}]}"
data-width="{[{embed.width}]}"
async >
</script>
</div>
Our image viewer uses the IIIF 2.0 standard.
To load this item in other compatible viewers, use this url:
http://iiif.library.ubc.ca/presentation/dsp.831.1-0080093/manifest