Science, Faculty of
Mathematics, Department of
DSpace
UBCV
López, Miguel Martin
2009-03-17T18:46:16Z
1996
Doctor of Philosophy - PhD
University of British Columbia
Dawson-Watanabe superprocesses are stochastic models for populations undergoing spatial migration
and random reproduction. Recently E. Perkins (1993, 1995) introduced an infinite
dimensional stochastic calculus in order to characterize superprocesses in which both the reproduction
mechanism and the spatial motion of each individual are allowed to depend on the
state of the entire population, i.e. superprocesses with interactions.
This work consists of three independent chapters. In the first chapter we show that interactive
superprocesses arise as diffusion approximations of interacting particle systems. We construct
an approximating system of interacting particles and show that it converges (weakly) to a limit
which is exactly the superprocess with interactions. This result depends very intimately upon
the structure of the particle systems.
In the second chapter we study some path properties of a class of one-dimensional interactive
superprocesses. These are random measures on the real line that evolve in time. We employ
the aforementioned stochastic calculus to show that they have a density with respect to the
Lebesgue measure. We also show that this density function is jointly continuous in space and
time and compute its modulus of continuity. Along with the proof we develop a technique that
can be used to solve some related problems. As an application we investigate path properties
of a one-dimensional super-Brownian motion in a random environment.
In the third chapter we investigate the local time of a very general class of one-dimensional
interactive superprocesses. We apply Perkins' stochastic calculus to show that the local time
exists and possesses a jointly continuous version.
https://circle.library.ubc.ca/rest/handle/2429/6160?expand=metadata
5340672 bytes
application/pdf
PATH PROPERTIES AND CONVERGENCE OF INTERACTING SUPERPROCESSES

by

MIGUEL MARTIN LOPEZ
M.Sc. (Mathematics), The University of British Columbia

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES, Department of Mathematics

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
July 1996
© Miguel M. Lopez, 1996

In presenting this thesis in partial fulfillment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Mathematics
The University of British Columbia
Vancouver, Canada

Abstract

Dawson-Watanabe superprocesses are stochastic models for populations undergoing spatial migration and random reproduction. Recently E. Perkins (1993, 1995) introduced an infinite-dimensional stochastic calculus in order to characterize superprocesses in which both the reproduction mechanism and the spatial motion of each individual are allowed to depend on the state of the entire population, i.e. superprocesses with interactions.

This work consists of three independent chapters. In the first chapter we show that interactive superprocesses arise as diffusion approximations of interacting particle systems. We construct an approximating system of interacting particles and show that it converges (weakly) to a limit which is exactly the superprocess with interactions. This result depends very intimately upon the structure of the particle systems.
In the second chapter we study some path properties of a class of one-dimensional interactive superprocesses. These are random measures on the real line that evolve in time. We employ the aforementioned stochastic calculus to show that they have a density with respect to the Lebesgue measure. We also show that this density function is jointly continuous in space and time, and we compute its modulus of continuity. Along with the proof we develop a technique that can be used to solve some related problems. As an application we investigate path properties of a one-dimensional super-Brownian motion in a random environment.

In the third chapter we investigate the local time of a very general class of one-dimensional interactive superprocesses. We apply Perkins' stochastic calculus to show that the local time exists and possesses a jointly continuous version.

Table of Contents

Abstract ii
Table of Contents iii
Acknowledgement iv

Chapter 0. Introduction 1
  0.1 Review 2
  0.2 Summary of the Main Results 8

Chapter 1. Weak Convergence of Interacting Branching Particle Systems 11
  1.1 Introduction 11
  1.2 Interactive Branching Particle Systems 14
    1.2.1 The Particle Picture 14
    1.2.2 An Equation for K^N 23
  1.3 Tightness of the Normalized Branching Particle Systems 29
    1.3.1 Convergence of the Projections 30
    1.3.2 Compact Containment Condition 33
    1.3.3 Relative Compactness of (K^N) 36
  1.4 Identification and Uniqueness of the Limit 36

Chapter 2. Path Properties of a One-Dimensional Superdiffusion with Interactions 41
  2.1 Introduction and Statement of Results 41
    2.1.1 Main Result 41
    2.1.2 Historical Stochastic Calculus 46
  2.2 Some Auxiliary Processes 50
  2.3 A Generalized Green's Function Representation for X 61
  2.4 Proof of the Main Result 63
  2.5 Examples: Super Brownian Motions with Singular Interactions 75

Chapter 3. Local Times for One-Dimensional Interacting Superprocesses 80
  3.1 Introduction and Statement of Results 80
  3.2 Existence and Regularity of Local Times 84

Bibliography 100

Acknowledgement

I would like to thank all the individuals and institutions that made this journey possible. My most sincere thanks go to my advisor, Professor Ed Perkins. It has been a privilege to work under his direction. It is also a pleasure to thank the probability group at UBC. They and their numerous visitors were a constant source of inspiration and high quality mathematics. In particular I acknowledge some useful conversations with Professor D. Dawson. I should like to express my gratitude to the Mathematics Department at Universidad de los Andes, especially to Professor S. Fajardo. I am indebted to them for showing me the beauty of mathematics and for teaching me the concept of proof. During these last years I have received much encouragement from my family and friends. Thanks to mom and dad for that and for the rest. I thank also the hospitality of Professors R. Adler (Chapel Hill), J-F. Le Gall (Paris), T. Lindstrøm (Oslo) and B. Rozovskii (Los Angeles). My best thanks to Anders Svensson, LaTeXpert extraordinaire, for showing me the path to computer guruhood. I gratefully acknowledge the financial support of Dr. E. Perkins and The University of British Columbia. Last but not least I thank Laura for her love, support and for putting up with me.

Chapter 0

Introduction

This work is devoted to the study of superprocesses with interactions. Superprocesses (or measure-valued branching diffusions) are measure-valued processes that model populations undergoing random branching and spatial motion. By a population we mean a system involving a number of similar particles. We are interested in the approximations that are possible when the number of particles is large.
Consider, for example, a large population of goats. Individual goats reproduce and die (branching) and wander around (spatial motion). They also interact with each other in many ways. For example, they live in clusters or clans. They also have memory and tend not to return to fields they have grazed until some time has passed. Suppose we are asked to implement a computer simulation of the evolution in time of the population. We are interested not only in the total number of goats, but in their geographical locations as well. To this end, we would place a square grid over the region of interest and assign to each square (or pixel) a height (or color) proportional to the number of goats within. Due to the births and deaths (i.e. branching) and the spatial motion of each animal, the color of each pixel would change as time goes on. We are looking at the evolution in time of the density of goats. Note that the value of such a random process at any given instant is not a number but a colored map. Therefore we must give a mathematical interpretation to "colored map"-valued processes. One way to do it is to regard each map as a measure. This example allows us to see why measure-valued processes may arise naturally and are not a mere technical device. We wish to know if, after appropriate rescaling, the density of goats can be well approximated by some diffusion process. That is, we are after a limit theorem. We quote the great mathematician A. N. Kolmogorov:

The epistemological value of the theory of probability is revealed only by limit theorems. Moreover, without limit theorems it is impossible to understand the real content of the primary concept of all of our sciences - the concept of probability.

Suppose that we have established the fact that as the number of goats goes to infinity a limit process does exist. We call it the goat superprocess. It is then natural to ask: what does it look like?
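The "colored map as measure" idea above can be sketched in a few lines. The grid size and population below are hypothetical; the point is only that binning particle positions onto a pixel grid produces a discrete measure whose total mass equals the population size:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical goat positions in the unit square.
positions = rng.uniform(0.0, 1.0, size=(500, 2))

# Discretize the empirical measure onto a 20x20 pixel grid:
# each pixel's "color" is the mass (number of goats) inside it.
pixels, _, _ = np.histogram2d(positions[:, 0], positions[:, 1],
                              bins=20, range=[[0, 1], [0, 1]])

# The grid is a discrete measure: binning preserves the total mass.
assert pixels.sum() == 500
```

As the grid is refined and the population rescaled, this pixel picture is exactly the finite-resolution view of the measure-valued state described in the text.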
More specifically, do the measures corresponding to the values of the goat superprocess have density functions? If so, are they continuous? Since continuous functions can oscillate wildly, how continuous? Of course, we require a rigorous formulation of all these questions. The subject of superprocesses is a rapidly developing field. It has been stimulated from several different directions, including population genetics, branching processes, interacting particle systems and stochastic partial differential equations. For example, the so-called Fleming-Viot superprocess is a generalization of the Wright-Fisher model (for random replication with errors) and has been studied in connection with population genetics (Hartl and Clark 1989). Rogers and Williams (1986) wrote in the preface of their book:

Here are some guidelines on what you might move on to when your reading of our book is done. (vi) Measure-valued diffusions, random media, etc. Durrett (1985) and Dawson and Gartner (1987) can be your 'open sesame' to what is sure to be one of the richest of Aladdin's caves.

We believe that the monograph of Dawson (1993) proves that Rogers and Williams were correct. In this thesis we will give a rigorous description of a general class of models that hopefully includes what our intuition says the goat superprocess should be. We will then answer some specific questions about certain subsets of the class of models. In Section 0.1 we survey some of the theory of superprocesses. In Section 0.2 we explain our results.

0.1 Review

In this Section we review the most basic concepts needed to understand both the meaning and relevance of our results. Most of the background information is very recent and known only by a handful of experts. The ideal preparation is furnished by a thorough reading of the papers Perkins (1993, 1995). These in turn can be well understood after reading either of our favorites, Walsh (1986) or Dawson (1993).
Our notation is consistent with that of Perkins (1995), and a reader familiar with this material can safely skip the rest of this Section. We have made an effort to give an intuitive understanding of the ideas presented below. However many of them are accompanied by heavy technical baggage, and these technicalities are important for a careful examination of this thesis. They cannot be avoided. Some of these ideas may seem obscure at first glance, but they contain large amounts of valuable information. This is hardly a surprise, since we will be studying some fairly complex objects. We encourage the reader to refer to the excellent references when in doubt or need. We assume a basic knowledge of Probability theory on the part of the reader. We expect the reader to be acquainted with

1. Martingales, Brownian motion, weak convergence of probability measures on metric spaces, critical branching processes (the standard Galton-Watson process will suffice), the Poisson process.

2. Martingale inequalities (Doob's, Burkholder's and maximal), stopping times, stochastic integration (Ito's lemma), local times (Tanaka's formula).

Most of these topics (certainly those in 1) are covered in a first graduate course in Probability theory. Those not familiar with the topics listed in 2 should still be able to understand the statement and meaning of the most important theorems. They should also be able to read the remainder of this Section. The subjects listed in 1 and 2 are the absolute minimum required to understand most of the proofs. Familiarity with superprocesses and/or stochastic partial differential equations (abbreviated SPDE) is highly recommended. By a measure-valued process we mean a random process whose state space is M_F(E), the space of finite measures on some complete, separable, metrizable topological space (E, B(E)). As usual, B(E) denotes the Borel σ-field, and M_F(E) is endowed with the topology of weak convergence.
Superprocesses are related to branching processes, population genetics models, stochastic partial differential equations and interactive particle systems. The canonical example is d-dimensional super-Brownian motion, which we now describe. Fix a positive integer N, a positive real number γ and a finite measure m on R^d. At time t = 0, N particles are located in R^d with law m(·)/m(R^d). They move independently according to d-dimensional Brownian motions. If a particle is located at position x at time t, then let the probability that it dies before time t + dt be γN dt + o(dt). If it dies, a fair coin is tossed and the particle is replaced by 0 or 2 identical particles situated at the position of death of their parent. The new particles then start undergoing independent Brownian motions and the process continues in the same fashion, with particles moving, dying and branching ad infinitum. In this model we want to keep track of the number of particles as well as their locations. Let I(N,t) be the total number of particles alive at time t, and let Z_t^i, i = 1, ..., I(N,t), label their locations. Consider the rescaled measure-valued process

X_t^N(\cdot) = \frac{1}{N} \sum_{i=1}^{I(N,t)} \delta_{Z_t^i}(\cdot), \qquad (0.1)

where δ_e denotes a unit mass at e. When the number of particles is large, i.e. as N → ∞, the particle system X^N is approximated by a measure-valued diffusion X which we call super-Brownian motion with branching rate γ. In fact, the sequence of probability measures (P(X^N ∈ ·))_N converges weakly to a law P^m on C([0,∞), M_F(R^d)) (Dawson 1993). Following the usual convention, we endow C([0,∞), E), the set of continuous paths t ↦ x_t ∈ E, with the compact-open topology and D([0,∞), E), the space of cadlag paths t ↦ x_t ∈ E, with Skorohod's J_1-topology. Super-Brownian motion X can be characterized through a martingale problem (Dawson 1993). A typical example of a martingale problem is Lévy's characterization of Brownian motion.
It says that if an R^d-valued random process B is a continuous martingale with square function ⟨B^i, B^j⟩_t = t δ_{ij} (here δ_{ij} denotes Kronecker's delta), then B is a d-dimensional Brownian motion. The following is a martingale problem that uniquely characterizes the law P^m. If μ is a measure we write μ(φ) = ∫ φ dμ.

(MP)^m: There is a continuous M_F(R^d)-valued, adapted process X_t defined on a filtered probability space (Ω, F, F_t, P) such that

(i) P(X_0 = m) = 1;

(ii) if φ ∈ C_b^2(R^d), then

X_t(\phi) = X_0(\phi) + \int_0^t X_s\big(\tfrac{\Delta}{2}\phi\big)\,ds + Z_t(\phi), \qquad (0.2)

where Z_t(φ) is a continuous square-integrable (F_t)-martingale null at zero with square function

\langle Z(\phi)\rangle_t = \int_0^t X_s(\gamma \phi^2)\,ds.

Moreover the law Q^m of the canonical process X_t(ω) = ω_t on Ω = C([0,∞), M_F(R^d)) is uniquely determined by (i) and (ii) (Dawson 1993).

Recall that there are two independent sources of noise, namely the Brownian noise and the branching noise. For each test function φ ∈ C_b^2(R^d), (0.2) provides the semimartingale decomposition of (X_t(φ), t ≥ 0). The martingale part Z_t(φ) comes entirely from the branching, while the drift part ∫_0^t X_s((Δ/2)φ) ds comes exclusively from the spatial motions. Note that if there is no branching (i.e. γ = 0), then (0.2) becomes the deterministic equation

X_t(\phi) = X_0(\phi) + \int_0^t X_s\big(\tfrac{\Delta}{2}\phi\big)\,ds,

a weak form of the heat equation. This is in fact a version of the law of large numbers. On the other hand, it can be shown that if we set φ = 1, that is, if we disregard the spatial motions and look only at the total mass, (MP)^m gives a continuous-time, continuous-state-space branching process studied by Feller (1951). This process is a diffusion approximation for the classical Bienaymé-Galton-Watson critical branching process. The construction of super-Brownian motion can be greatly generalized to yield more general superprocesses. One can tinker with the branching mechanisms or the space motions.
For example, it can be proved that the recipe used to obtain super-Brownian motion may be adjusted to obtain super-Feller processes. Just replace the Brownian motions by a Feller process with infinitesimal generator A and locally compact state space. The result will be a limit superprocess characterized by a martingale problem analogous to (MP)^m but with (A, Domain(A)) instead of (Δ/2, C_b^2(R^d)) (Dawson 1993).

When d = 1 we can recast (MP)^m in stochastic partial differential equation (SPDE) form: X_t(dx) = u(t,x) dx, P^m-a.s., where u is the unique (in law) solution of

\frac{\partial u}{\partial t} = \frac{1}{2}\Delta u + \sqrt{\gamma u}\,\dot W, \qquad (0.3)

and Ẇ is a space-time white noise (Reimers 1989, Konno and Shiga 1988). We interpret equation (0.3) in a weak sense, as is usually done in the theory of Partial Differential Equations. That is, we multiply (0.3) by a smooth test function φ with compact support and integrate over space-time:

\int \phi(x) u(t,x)\,dx = \int_0^t\!\!\int \frac{1}{2}\Delta u(s,x)\,\phi(x)\,dx\,ds + \int_0^t\!\!\int \sqrt{\gamma u(s,x)}\,\phi(x)\,W(x,s)\,dx\,ds.

Integrate by parts twice on the r.h.s. to obtain

X_t(\phi) = X_0(\phi) + \int_0^t X_s\big(\tfrac{\Delta}{2}\phi\big)\,ds + \int_0^t\!\!\int \sqrt{\gamma u(s,x)}\,\phi(x)\,W(x,s)\,dx\,ds = X_0(\phi) + \int_0^t X_s\big(\tfrac{\Delta}{2}\phi\big)\,ds + Z_t(\phi).

Note that

\langle Z(\phi)\rangle_t = \Big\langle \int_0^\cdot\!\!\int \sqrt{\gamma u(s,x)}\,\phi(x)\,W(x,s)\,dx\,ds \Big\rangle_t = \int_0^t\!\!\int \gamma u(s,x)\,\phi(x)^2\,dx\,ds = \int_0^t X_s(\gamma\phi^2)\,ds.

By analogy with the PDE (partial differential equation) setting in which Ẇ is replaced by a smooth function, it is possible to write a Green's function (or inverse Laplacian) representation of the solution of (0.3):

u(t,x) = \int p(t, x-z)\,u(0,z)\,dz + \int_0^t\!\!\int p(t-s, x-z)\,\sqrt{\gamma u(s,z)}\,W(z,s)\,dz\,ds, \qquad (0.4)

where p(t,x) is the one-dimensional Brownian transition density. Equation (0.4) has proved to be extremely useful for computing the moments of u (Konno and Shiga 1988). The study of super-Brownian motion (among other things) revealed the need to introduce a related object called the historical process, or historical super-Brownian motion.
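A crude way to see (0.3) in action is an explicit finite-difference scheme in which the space-time white noise is approximated by independent normals of variance 1/(dx·dt) per space-time cell. The grid parameters below are hypothetical, and the clipping at 0 is a numerical device (keeping the square root defined), not part of the theory:

```python
import numpy as np

# Explicit scheme for du = (1/2) u_xx dt + sqrt(gamma * u) dW
# on a periodic spatial grid of J cells of width dx.
rng = np.random.default_rng(2)
J, dx, dt, gamma = 100, 0.01, 2e-5, 1.0   # dt/dx**2 = 0.2 < 1/2 for stability
u = np.ones(J)                            # flat initial density u(0, x) = 1
for _ in range(500):
    # discrete Laplacian with periodic boundary conditions
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    # cell average of space-time white noise: N(0, 1/(dx*dt))
    xi = rng.normal(size=J) / np.sqrt(dx * dt)
    u = u + 0.5 * lap * dt + np.sqrt(gamma * np.maximum(u, 0.0)) * xi * dt
    u = np.maximum(u, 0.0)                # numerical clipping only
```

Such schemes are illustrative rather than rigorous: the convergence of discretizations of (0.3) is a separate (and delicate) question.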
While analyzing fine path properties, Dawson and Perkins (1991) realized that the genealogy (or family structure) of the particles is of great importance. We give an informal example to motivate this last statement. Pick a "typical" particle in the population at time t = 1. That is, choose a point x in the set Supp(X_1) (the support of X_1) according to the measure X_1. Suppose we are investigating the asymptotic behaviour of the total mass in a ball of radius r about x as r → 0 (very useful when studying path properties). To estimate this quantity, trace the trajectory of our particle backwards in time until time t = 0. It turns out that in two or more dimensions "most" of the mass in the ball will come from particles that branched off this trajectory between t = 0 and t = 1. This property stems from the fact that, due to the criticality of the branching, only a finite number of particles alive at time t = 0 have (an infinite number of) descendants alive at time t = 1. This type of argument can be made rigorous, leading to very precise computations of Hausdorff functions, etc. (Perkins 1988). To describe the historical process, consider the tree of branching Brownian motions described above. Let H_t^N be the random measure on the space C([0,∞), R^d) which assigns mass N^{-1} to the trajectory of each particle alive at time t. If x^t denotes the path x stopped at time t (x^t(·) = x(t ∧ ·)), then

H_t^N = \frac{1}{N} \sum_{i=1}^{I(N,t)} \delta_{(Z^i)^t}. \qquad (0.5)

Equation (0.5) generalizes (0.1). As N → ∞, P(H^N ∈ ·) converges weakly to a law Q^m on C([0,∞), M_F(C([0,∞), R^d))) (Dawson 1993). The historical process H is the canonical process H_t(ω) = ω_t on the probability space Ω = C([0,∞), M_F(C([0,∞), R^d))). It can also be characterized by a martingale problem which generalizes (MP)^m.

From a modelling perspective, it is natural to try to introduce measure-valued processes in which the particles interact.
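Returning to equation (0.5): the stopping operation x^t(·) = x(t ∧ ·) is straightforward to realize on discretized paths. A small sketch (branching omitted, so the measure below simply puts mass N^{-1} on each of N independent discretized Brownian paths; all parameters are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
N, dt, steps = 50, 0.01, 100

# Each row is one particle's full trajectory on the grid 0, dt, ..., steps*dt.
increments = rng.normal(0.0, np.sqrt(dt), size=(N, steps))
paths = np.concatenate([np.zeros((N, 1)),
                        np.cumsum(increments, axis=1)], axis=1)

def stopped(path, r, dt=dt):
    """The path stopped at time r: x^r(t) = x(r ∧ t)."""
    k = int(round(r / dt))
    out = path.copy()
    out[k:] = path[k]          # frozen at its value at time r
    return out

y = stopped(paths[0], 0.5)     # constant after index 50, Brownian before
```

The historical measure at time t is then the empirical measure (1/N)·Σ δ over the rows `stopped(paths[i], t)`, the discrete analogue of (0.5).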
Three of the most important tools to analyze finite-dimensional diffusions are the change of measure (Cameron-Martin-Girsanov theorem), the martingale problem of Stroock and Varadhan, and Ito's theory of stochastic differential equations. These three approaches admit highly non-trivial generalizations to the infinite-dimensional setting of superprocesses, thus allowing the introduction of interactions (Dawson (1993) and Perkins (1993, 1995)). Perkins (1993, 1995) carried out a program to construct interactive historical superprocesses K_t(dy) in which a trajectory y in a population K_t is subject to a drift b(t, K_t, y), diffusion matrix σ(t, K_t, y) and branching rate γ(t, K_t, y). As an application one can generalize (MP)^m to (MP)^m_{b,γ,σ}. That is, the Laplacian (infinitesimal generator of Brownian motion) can be replaced by an elliptic operator of the form

A(t, X_t, x) = \tfrac{1}{2}(\sigma\sigma^*)_{ij}(t, X_t, x)\,\partial_i\partial_j + b_i(t, X_t, x)\,\partial_i

(sum over repeated indices), and the constant γ by a function γ(t, X_t, x). These coefficients depend on time, the state of the population and the position of the particle. Of course some constraints are required on these coefficients. A very interesting example of an interactive historical process, due to Adler (Perkins 1995), is the following.

Example. Let p_ε(x) = (2πε)^{-1/2} exp(-x²/(2ε)), ε > 0 (small), and suppose b(t,K,y) and γ(t,K,y) are given by

b(t,K,y) = \int_0^t\!\!\int \nabla p_\varepsilon(y'_s - y_t)\,K_s(dy')\,e^{-\lambda(t-s)}\,ds,

\gamma(t,K,y) = \exp\Big(-\int_0^t\!\!\int p_\varepsilon(y'_s - y_t)\,K_s(dy')\,e^{-\lambda(t-s)}\,ds\Big).

Here the goat-like particles tend to drift away from regions where the population has already grazed (and consumed the resources). They also reproduce at a lower rate on those regions. The parameter λ represents the rate of recovery of the environment.
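The Example's coefficients are concrete enough to discretize directly. The sketch below is purely illustrative: the values of ε, λ and the time step are hypothetical, each K_s is represented by an empirical measure on particle positions, and the time integrals become Riemann sums over the grazing history:

```python
import numpy as np

def p_eps(x, eps):
    # heat kernel p_eps(x) = (2*pi*eps)**(-1/2) * exp(-x**2 / (2*eps))
    return np.exp(-x**2 / (2.0 * eps)) / np.sqrt(2.0 * np.pi * eps)

def grad_p_eps(x, eps):
    return -(x / eps) * p_eps(x, eps)

def adler_coefficients(history, y_t, t, dt, eps=0.1, lam=1.0):
    """history[k] = positions (atoms of K_s, s = k*dt) of the past population.
    Returns discretized drift b(t,K,y) and branching rate gamma(t,K,y)."""
    b, graze = 0.0, 0.0
    for k, atoms in enumerate(history):
        s = k * dt
        if s >= t:
            break
        w = np.exp(-lam * (t - s)) * dt / len(atoms)   # e^{-lam(t-s)} ds, mass 1/n per atom
        b += w * grad_p_eps(atoms - y_t, eps).sum()
        graze += w * p_eps(atoms - y_t, eps).sum()
    return b, np.exp(-graze)

# All past mass sitting at the origin: zero drift there, depressed branching rate.
history = [np.zeros(10)] * 5
b, g = adler_coefficients(history, y_t=0.0, t=0.5, dt=0.1)
```

By symmetry the drift vanishes at the centre of the grazed region, while γ < 1 there; a particle slightly off-centre would feel a push away from it.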
Perkins has developed a theory of stochastic integration along Brownian trees (Perkins 1993, 1995) and has used it to characterize a broad class of interactive measure-valued branching diffusions as the unique solution of a stochastic equation driven by a historical Brownian motion. We shall explain Perkins' idea shortly. As a prerequisite let us recall some facts about stochastic differential equations (SDEs). Write C^d = C([0,∞), R^d) and C_t^d = σ(y_s : s ≤ t), let S^d be the set of d × d symmetric positive definite matrices, and let

b(t,y) : [0,∞) × C^d → R^d,
a(t,y) : [0,∞) × C^d → S^d,

be predictable path functionals. The intent is to give meaning to the notion of a locally Gaussian d-dimensional process Y satisfying

E[Y^i_{t+dt} - Y^i_t \mid C_t^d] = b_i(t,Y)\,dt + o(dt),
E[(Y^i_{t+dt} - Y^i_t)(Y^j_{t+dt} - Y^j_t) \mid C_t^d] = a_{ij}(t,Y)\,dt + o(dt). \qquad (0.6)

This models the motion of a particle in a velocity field b which is subject to a random thermal motion of covariance a. Ito's brilliant idea was to give meaning to (0.6) by rewriting it in the form

dY_t^i = b_i(t,Y)\,dt + \sum_{j=1}^d \sigma_{ij}(t,Y)\,dB_t^j, \qquad i = 1, \ldots, d,

or for short

dY_t = b(t,Y)\,dt + \sigma(t,Y)\,dB_t,

where B is a d-dimensional Brownian motion and σ is the square root of a. The last equation should be interpreted as an integral equation. Ito gave meaning to the integral ∫_0^t σ(s,Y) dB_s, which is named after him. This is not straightforward since B is of unbounded variation; therefore ∫_0^t σ(s,Y) dB_s cannot be a Stieltjes integral. A strong solution of the SDE

Y_t = Y_0 + \int_0^t b(s,Y)\,ds + \int_0^t \sigma(s,Y)\,dB_s, \qquad (0.7)

on a given probability space (Ω, F, P) and with respect to a fixed Brownian motion B = (B_t, (F_t)) and initial condition ξ, is a process Y = {Y_t : 0 ≤ t < ∞} with continuous sample paths which satisfies the following properties:

• Y is adapted to the filtration (F_t).
• P[Y_0 = ξ] = 1.
• P[∫_0^t (‖b(s,Y)‖ + ‖σσ*(s,Y)‖) ds < ∞] = 1 for all t > 0.
(‖·‖ denotes the Euclidean norm in R^d or R^{d×d}.)

• Y_t = Y_0 + ∫_0^t b(s,Y) ds + ∫_0^t σ(s,Y) dB_s holds P-a.s. The second integral on the r.h.s. must be interpreted in the sense of Ito.

If Y is a strong solution of (0.7) there is a (measurable) map h : R^d × C^d → C^d such that Y = h(ξ, B), P-a.s. We are now in a position to elucidate Perkins' idea. Let

b : [0,∞) × C([0,∞), M_F(C^d)) × C^d → R^d,
σ : [0,∞) × C([0,∞), M_F(C^d)) × C^d → R^{d×d},
γ : [0,∞) × C([0,∞), M_F(C^d)) × C^d → (0,∞).

Recall that H denotes a historical Brownian motion. Intuitively, a typical path y chosen according to H_t is a Brownian path stopped at time t. This is certainly true if H is replaced by H^N. Perkins (1993) found a way to prove it. This allowed him to define an essentially unique version of the Ito integral ∫_0^t σ(s,K,y) dy(s). He then considered the following simultaneous equations:

Y_t = y_0 + \int_0^t b(s,K,Y)\,ds + \int_0^t \sigma(s,K,Y)\,dy(s), \qquad t \ge 0, \qquad (0.8)

K_t(A) = \int 1(Y_t(y) \in A)\,\gamma(t,K,Y)\,H_t(dy), \qquad t \ge 0, \ A \ \text{a Borel subset of } C^d, \qquad (0.9)

(Perkins (1993, 1995)). By (0.8), Y solves an Ito equation along the branch y with drift b and diffusion σ. If the coefficients σ, b, γ are Lipschitz in an appropriate sense then (0.8) has a unique strong solution of the form Y = h(y_0, y), and we write Y = Y(y) for short. Hence, in principle at least, (0.9) makes sense. γ can still be interpreted as a branching rate, although some additional work is needed (Perkins 1995). If γ = 1, then Y_t is a typical path in the population K_t. Y is an auxiliary process and K is the desired interactive historical process. Other intuitive explanations are found in the introductions to the papers Perkins (1993) and Perkins (1995). With suitable hypotheses the system (0.8)-(0.9) has a pathwise unique solution (Perkins 1995); these hypotheses are satisfied by the coefficients b, γ in the previous Example and by σ = Id.
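In discrete time, the Ito equation (0.7) underlying (0.8) is usually approximated by the Euler-Maruyama scheme Y_{t+dt} ≈ Y_t + b(t,Y) dt + σ(t,Y) ΔB with ΔB ~ N(0, dt·I). A minimal sketch (illustrative only: the coefficients here depend on the current state rather than on the whole path as in the text, and the mean-reverting drift is a hypothetical choice):

```python
import numpy as np

def euler_maruyama(b, sigma, y0, T=1.0, dt=1e-3, seed=3):
    """Euler-Maruyama scheme for dY = b(t,Y) dt + sigma(t,Y) dB."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y0, dtype=float)
    path = [y]
    for k in range(int(T / dt)):
        dB = rng.normal(0.0, np.sqrt(dt), size=y.shape)   # Brownian increment
        y = y + b(k * dt, y) * dt + sigma(k * dt, y) @ dB
        path.append(y)
    return np.array(path)

# Hypothetical example: drift -y pulling toward the origin, identity diffusion.
path = euler_maruyama(lambda t, y: -y, lambda t, y: np.eye(2), [1.0, 1.0])
```

In Perkins' setting the driving noise dB is replaced by the increment dy(s) of a branch of the historical Brownian motion, which is what makes the construction tree-indexed rather than a single SDE.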
If b : R^d → R^d and a : R^d → R^{d×d} are Lipschitz continuous (with respect to the Euclidean norm), γ = 1, and b(t,K,y) and σ(t,K,y) are defined by

b(t,K,y) := \int b(y_t - y'_t)\,K_t(dy') \quad\text{and}\quad \sigma(t,K,y) := \int a(y_t - y'_t)\,K_t(dy'),

respectively, then (0.8)-(0.9) has a unique strong solution. Moreover, the process X_t(A) = K_t(y : y_t ∈ A) for A ∈ B(R^d) solves the following martingale problem:

(MP)_{b,γ,σ}: if φ ∈ C_b^2(R^d), then

X_t(\phi) = m(\phi) + \int_0^t\!\!\int \Big(\tfrac{1}{2}(\sigma\sigma^*)_{ij}(s,X_s,x)\,\partial_i\partial_j\phi(x) + b_i(s,X_s,x)\,\partial_i\phi(x)\Big)\,X_s(dx)\,ds + Z_t(\phi),

where Z_t(φ) is a continuous square-integrable (F_t)-martingale null at zero with square function

\langle Z(\phi)\rangle_t = \int_0^t X_s(\gamma\phi^2)\,ds.

Here b(t,X_t,x) = ∫ b(x-z) X_t(dz) and σ(t,X_t,x) = ∫ a(x-z) X_t(dz). As usual, integrals of vector-valued functions are computed componentwise.

0.2 Summary of the Main Results

After pummelling the reader with a lengthy review, let us finally enunciate our results. The exposition is divided into three independent chapters that can be read separately. However, we suggest that the reader not familiar with the subject read them in order. The thesis is organized as follows. Chapter 1 presents the weak convergence of interacting particle systems to superprocesses with interactions. The interactive historical process K, solution of (0.8)-(0.9), is expected to arise as a diffusion approximation for a sequence (K^N) of properly rescaled interacting branching particle systems. Dawson & Perkins (1996) claim that this fact is proven in this thesis, so we are compelled to demonstrate it. The proof requires a good understanding of the particle systems and is not a corollary of some well-known general convergence theorem.

In Chapter 2 we study interactive measure-valued diffusions of the form

Y_t = y_t + \int_0^t b(s, X_s, Y_s)\,ds,

X_t(A) = \int 1(Y_t(y) \in A)\,\gamma(t,X,Y)\,H_t(dy), \qquad A \in B(R).

Here d = 1. We show that X_t(dx) = u(t,x) dx, where (t,x) ↦ u(t,x) is Hölder continuous. The proof is an application of historical stochastic calculus.
This result was proved for super-Brownian motion relatively recently by Reimers (1989) and independently by Konno & Shiga (1988). As an example, we show that the above result leads to the existence of approximate solutions to the following version of Burgers' equation with noise. Let Ẇ be a space-time white noise and consider

\frac{\partial u}{\partial t} = \frac{1}{2}\Delta u - \frac{\partial}{\partial x}\big(u\,u\big) + \sqrt{u}\,\dot W. \qquad (0.10)

In fact, for any arbitrary ε > 0 we solve a smoothed version of (0.10), namely

\frac{\partial u}{\partial t} = \frac{1}{2}\Delta u - \frac{\partial}{\partial x}\big((p_\varepsilon * u)\,u\big) + \sqrt{u}\,\dot W,

where p_ε is the heat kernel and the convolution is in the x variable. We also show that the method of proof of the main theorem can be employed to investigate some related problems. In particular, we study a super-Brownian motion in a random environment.

In Chapter 3 we look at the local times of one-dimensional interacting superprocesses which arise as solutions of

Y_t = y_0 + \int_0^t \sigma(X_s, Y_s)\,dy(s) + \int_0^t b(X_s, Y_s)\,ds,

X_t(A) = \int 1(Y_t(y) \in A)\,H_t(dy), \qquad A \in B(R).

Once again d = 1 and the notation is the same as above. We show that there exists a jointly continuous process (a,t) ↦ L_t^a(X) such that

\int_0^t X_s(\phi)\,ds = \int_{-\infty}^{\infty} \phi(a)\,L_t^a(X)\,da

for any positive Borel function φ. The process L_t^a(X) is called the local time of X. The proof of this result relies on a Tanaka-like formula of Perkins (1995). The proof uses many of the tools of the infinite-dimensional stochastic analysis developed by Perkins, as well as some abstract tools like the predictable section theorem, to obtain concrete estimates. The mere existence of the local time is a path property of X (Geman and Horowitz 1980). We can also see directly that the local time is an interesting quantity. As Adler (1992) has pointed out, the superprocess discussed in the Example of Section 0.1 exhibits waves of local time. This is also evident in computer simulations. Most of the topics treated in this work have been studied in the non-interacting case, and sharper answers have been given in this case.
However, none of the methods of proof can be translated to the interactive case, as they rely very heavily on the independence assumptions. In this work we introduce new methodologies. However, by considering rather general coefficients we renounce the possibility of carrying out very precise computations such as those in Perkins (1988) or Perkins-Le Gall-Taylor (1995).

Chapter 1

Weak Convergence of Interacting Branching Particle Systems

1.1 Introduction

Super-Brownian motion arises as a diffusion approximation for Bienaymé-Galton-Watson (BGW) trees of branching Brownian motions (Walsh 1986). A BGW tree of branching Brownian motions with parameters (N, γ) can be described as follows. Fix γ > 0. At time t = 0, N particles are located in R^d according to some fixed initial distribution. To be concrete, let us place all of them at the origin. The particles start moving independently, undergoing Brownian motions. During a time interval (t, t + dt) any given particle has a probability γN dt of dying. If it does die, it either splits into 2 identical particles or it goes instantaneously to an isolated point ∂ (a cemetery), with probability 1/2 each. If it branches, the two offspring start their lifetimes at the point of death of their parent. They continue moving independently according to d-dimensional Brownian motions until they either die or branch, and so on. In this model we want to keep track of the locations of the particles as well as their numbers. One way to describe the state of the system of particles is to identify the positions of the particles with the positions of the atoms of a purely atomic measure, say X_t^N. If each particle is assigned mass 1 then X_t^N(A) is the number of particles in the set A at time t. We are interested in the high-density limit as N → ∞. Of course some renormalization is required. It turns out that the properly renormalized object to study is N^{-1} X_t^N.
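The BGW tree and its rescaling are easy to simulate directly. A rough sketch (hypothetical parameters; one Euler step per time increment, with the per-step death probability γN·dt kept well below 1 so it approximates the exponential clock):

```python
import numpy as np

def branching_bm(N=100, gamma=1.0, T=1.0, dt=1e-3, d=2, seed=1):
    """Branching Brownian motions: each particle dies in (t, t+dt] with
    probability ~ gamma*N*dt, leaving 0 or 2 copies with probability 1/2.
    Returns the particle locations at time T (the atoms of X_T^N)."""
    rng = np.random.default_rng(seed)
    Z = np.zeros((N, d))                                # all start at the origin
    for _ in range(int(T / dt)):
        if len(Z) == 0:
            break                                       # population extinct
        Z = Z + rng.normal(0.0, np.sqrt(dt), Z.shape)   # Brownian step
        die = rng.random(len(Z)) < gamma * N * dt       # critical branching clock
        two = rng.random(len(Z)) < 0.5                  # fair coin: 0 or 2 offspring
        kids = Z[die & two]
        Z = np.concatenate([Z[~die], kids, kids])
    return Z

Z = branching_bm()
mass = len(Z) / 100.0      # total mass of the rescaled measure N^{-1} X_T^N
```

The total mass process is a nonnegative martingale: averaging `mass` over many seeds should return values near 1, while individual runs fluctuate or die out, which is exactly the branching noise captured by Z_t(φ) in the limit.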
In fact N^{−1} X^N ⇒ X, where X is a measure-valued process which we call super-Brownian motion with branching rate γ. X can be characterized by a martingale problem. Recall that if μ is a measure, μ(φ) = ∫ φ dμ, and that M_F(E) denotes the set of finite Borel measures on a metric space E, endowed with the weak topology.

Theorem 1.1 (Super-Brownian Motion with Branching Rate γ). Let δ_0 denote a unit mass at 0. There is a continuous M_F(ℝ^d)-valued, adapted process X_t defined on a filtered probability space (Ω, F, (F_t), P) such that

1. P(X_0 = δ_0) = 1.

2. If φ ∈ C_b²(ℝ^d), then

    X_t(φ) = X_0(φ) + ∫_0^t X_s(Δφ/2) ds + Z_t(φ),    (MP)_{Δ,γ}

where Z_t(φ) is a continuous square integrable (F_t)-martingale, null at zero, with square function

    ⟨Z(φ)⟩_t = ∫_0^t X_s(γ φ²) ds.

Moreover the law, Q^{δ_0}, of X on C([0,∞), M_F(ℝ^d)) is uniquely determined by 1 and 2.

Observe that (MP)_{Δ,γ} gives a semimartingale decomposition for X_·(φ) when φ ∈ C_b²(ℝ^d). As we shall see later, the drift part comes from the spatial motions while the martingale part comes from the branching. More generally one can replace the space motions (i.e. the Brownian motions) by a Feller process ξ taking values in a locally compact space. Call L its infinitesimal generator and D(L) its domain. ξ could be, for example, a stable process. In this more general setting Theorem 1.1 holds if Δ/2 is replaced by L and C_b²(ℝ^d) by D(L). That is, the martingale problem (MP)_{L,γ} is well posed (Dawson 1993).

The branching particle systems contain genealogical information that is not evident in the limiting super-Brownian motion. This information has proven to be extremely valuable when analyzing path properties of super-Brownian motion (Perkins 1988, Dawson and Perkins 1991). A process called historical super-Brownian motion, which records the past histories of all individuals in the population, can be defined as follows.
Let C^d = C([0,∞), ℝ^d) and C_t^d = σ(y_s : s ≤ t), the canonical filtration of C^d. Let H_t^N be the random measure on the space C^d of ℝ^d-valued continuous paths which assigns mass N^{−1} to the trajectory of each particle alive (in a super-BM) at time t. It can be shown (Dawson and Perkins 1991) that H^N converges weakly to an M_F(C^d)-valued continuous process H. This limit process is called the historical process, and its law may also be characterized by a martingale problem.

Theorem 1.2 (Historical Super-Brownian Motion). Let D_0 denote the set of f ∈ bC^d such that f(y) = g(y(t_1), …, y(t_n)) for some 0 ≤ t_1 ≤ … ≤ t_n and a smooth function g which is constant outside some compact set. For f ∈ D_0 let

    Δ̄f(y, t) = Σ_{i=1}^d Σ_{k=0}^{n−1} Σ_{l=0}^{n−1} 1(t < t_{k+1} ∧ t_{l+1}) g_{kd+i, ld+i}(y(t ∧ t_1), …, y(t ∧ t_n)).

The law P^{δ_0} of historical Brownian motion on C([0,∞), M_F(C^d)) is uniquely determined by the following martingale problem: for all f ∈ D_0,

    H_t(f) = δ_0(f) + ∫_0^t H_s( ½ Δ̄f(·, s) ) ds + Z_t(f),

where Z_t(f) is a continuous square integrable martingale null at zero with square function

    ⟨Z(f)⟩_t = ∫_0^t H_s(f²) ds.

Note that super-Brownian motion can be obtained as a projection of historical Brownian motion: X_t(·) = H_t({y ∈ C^d : y_t ∈ ·}).

If one is interested in modeling, it is natural to try to introduce some sort of dependence into the superprocess. For example one may want to consider space motions in which individuals are attracted to each other, or branching rates that reflect the fact that lonely individuals tend to die faster. With this in mind, one is tempted to replace the Laplacian in Theorem 1.1 by an elliptic operator of the form

    A(t, X, x) = ½ Σ_{i,j=1}^d a_{ij}(t, X, x) ∂²/∂x_i ∂x_j + Σ_{i=1}^d b_i(t, X, x) ∂/∂x_i.    (1.1)

Similarly, one may want to consider γ = γ(t, X, x). These coefficients depend on time, the state and past history of the population, and the current position of a particle.
For example (in d = 1), if b̃ : [0,∞) × ℝ → ℝ, then we could define b(t, X, x) = ∫ b̃(t, z − x) X_t(dz). In this case, a particle located at x at time t would feel a drift (i.e. pull or push) of magnitude b̃(t, z − x) X_t(dz) coming from the particles located in a cube of side dz centered at z. With appropriate assumptions on the coefficients a, b, γ the martingale problem (MP)_{A,γ} has a solution (Perkins 1995, Roelly and Méléard 1990). Roelly and Méléard showed that a solution of the martingale problem can be obtained as a limit of a renormalized system of interacting particles.

More generally, one is led to consider interactions in the enriched historical setting. This can be very useful, for example, in the modeling of non-Markovian superprocesses (such as the goat superprocess introduced in Chapter 0). We shall need a generalization of (1.1). To this end we present some notation. Fix 0 ≤ t_1 ≤ t_2 ≤ … ≤ t_n and ψ ∈ C_b²(ℝ^{nd}). Let

    ψ(y) = ψ(t_1, t_2, …, t_n)(y) = ψ(y(t_1), …, y(t_n)).

ψ_i and ψ_ij denote the first and second order partials of ψ. ∇ψ(t, y) : [0,∞) × C^d → ℝ^d is the (C_t^d)-predictable process whose j-th component at (t, y) is

    Σ_{i=0}^{n−1} 1(t < t_{i+1}) ψ_{id+j}(y(t ∧ t_1), …, y(t ∧ t_n)).

If 1 ≤ i, j ≤ d, ψ̄_ij : [0,∞) × C^d → ℝ is the (C_t^d)-predictable process defined by

    ψ̄_ij(t, y) = Σ_{k=0}^{n−1} Σ_{l=0}^{n−1} 1(t < t_{k+1} ∧ t_{l+1}) ψ_{kd+i, ld+j}(y(t ∧ t_1), …, y(t ∧ t_n)).

Let

    D_0 = ∪_{m=1}^{∞} { ψ(t_1, t_2, …, t_n) : 0 ≤ t_1 ≤ t_2 ≤ … ≤ t_n, ψ ∈ C_0^∞(ℝ^m) } ∪ {1}.

We will work on a filtered probability space Ω = (Ω, F, (F_t), P). The following definition is motivated by Theorem 1.2 and by the preceding discussion.

Definition 1.3 (Historical Super-Brownian Motion with Interactions). Let S^d denote the space of symmetric positive definite d × d matrices.
Suppose

    a : [0,∞) × D([0,∞), M_F(C^d)) × C^d → S^d,
    b : [0,∞) × D([0,∞), M_F(C^d)) × C^d → ℝ^d,
    γ : [0,∞) × D([0,∞), M_F(C^d)) × C^d → (0,∞).

A predictable process K ∈ C([0,∞), M_F(C^d)) on Ω satisfies (MP)_{a,b,γ} (with initial condition δ_0) if for all ψ ∈ D_0

    K_t(ψ) = δ_0(ψ) + Z_t(ψ) + ∫_0^t ∫ [ (∇ψ(s, y), b(s, K, y)) + ½ Σ_{i=1}^d Σ_{j=1}^d ψ̄_ij(s, y) a_{ij}(s, K, y) ] K_s(dy) ds    ∀t ≥ 0 a.s.,

where Z_t(ψ) is a continuous square integrable martingale with square function

    ⟨Z(ψ)⟩_t = ∫_0^t ∫ γ(s, K, y) ψ(y)² K_s(dy) ds.

Perkins (1995) has shown that (under suitable hypotheses) the above martingale problem is well posed. We expect the solution to be the limit point of a sequence of renormalized interacting BGW branching Brownian motions, just as in the non-interacting case. In this Chapter we introduce such a sequence of systems of particles. We also show that they are tight and that their limit points satisfy the martingale problem (MP)_{a,b,γ}. Our results are a non-trivial extension of Méléard and Roelly (1990). The main difference is that by working in the historical setting the state space for the particle motions becomes C^d. Therefore the convergence of an approximating sequence K^N is no longer completely determined by the convergence of the projections f ↦ ∫ f dK_t^N.

1.2 Interactive Branching Particle Systems

1.2.1 The Particle Picture

In this Subsection we define a sequence of processes K^N which converges weakly to a solution of the martingale problem (MP)_{a,b,γ} mentioned in the introduction (see Definition 1.3).

We begin by introducing a set of labels I. Let I := ∪_{k=0}^{∞} ℕ × {1,2}^k, where by convention {1,2}^0 = ∅. For any α = (α_0, …, α_k) ∈ I we write |α| = k and α|i = (α_0, …, α_i) for i ≤ k. If α = (α_0, …, α_j) ∈ I we denote αi = (α_0, …, α_j, i), i = 1, 2, and π(α) = (α_0, …, α_{j−1}), j ≥ 1.

Let {ξ_α : α ∈ I} be i.i.d. random variables with P[ξ_α = 0] = P[ξ_α = 2] = 1/2.
We say that a subset A of I is a Bienaymé-Galton-Watson tree (or BGW tree) with N roots iff

(i) {α ∈ I : |α| = 0, α_0 = i for some i ≤ N} ⊂ A, and if α_0 > N then α ∉ A;

(ii) for any α ∈ A such that |α| ≥ 1, ∏_{i=0}^{|α|−1} ξ_{α|i} > 0.

The family {ξ_α : α ∈ I} induces a unique probability distribution Π_N on the set of all trees with N roots.

Remark 1.4. The random variable card(A) is the total number of individuals that ever lived in a critical Bienaymé-Galton-Watson process. Therefore card(A) < ∞ a.s.

The next ingredient we need is a collection {B^α : α ∈ I} of independent d-dimensional Brownian motions. Define also a family {e_α : α ∈ I} of i.i.d. exponential(1) random variables. We assume that the three collections of Bernoullis, Brownian motions and exponentials are mutually independent. We also assume that they are carried by the same probability space (Ω, F, P). We shall also need drift, diffusion and branching rate coefficients. For the sake of simplicity, throughout the chapter we shall assume that at time t = 0 all the particles are located at the origin. We could have distributed the initial particles according to any probability distribution, but we want to keep the notation as simple as possible. All the proofs can be easily modified to cover such a general initial condition.

Notation 1.5. If E is a topological space and x ∈ C([0,∞), E), x^t denotes the path x stopped at t: x^t = x(t ∧ ·). If {X(t, ω) : t ≥ 0} is a process taking values in a normed linear space (L, ‖·‖) then X_t^* = sup{‖X_s‖ : 0 ≤ s ≤ t}. If (M, 𝓜) is a measurable space, b𝓜 denotes the space of bounded real-valued 𝓜-measurable functions, and 𝓜^* denotes the universal completion of 𝓜. C^d = C([0,∞), ℝ^d) is endowed with the sup metric ρ, (C_t^d) is its canonical filtration, and M_F(C^d) is the space of finite Borel measures on C^d with the weak topology. It will also be convenient to append an isolated point ∂ to ℝ^d.
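The tree A of the definition above can be sampled directly from the Bernoulli family {ξ_α}. A small sketch (our own naming; we truncate at a maximal generation, since the a.s. finite tree can still be deep):

```python
import random

def sample_bgw_tree(N, seed=0, max_gen=30):
    """Sample (a truncation of) a BGW tree A with N roots.  A label is a
    tuple alpha = (a0, a1, ..., ak) with root index a0 in {1, ..., N} and
    offspring choices ai in {1, 2}; alpha belongs to A iff every ancestor
    alpha|i drew xi = 2 under the critical law P(0) = P(2) = 1/2."""
    rng = random.Random(seed)
    tree = []
    frontier = [(i,) for i in range(1, N + 1)]   # the N roots
    gen = 0
    while frontier and gen < max_gen:
        tree.extend(frontier)
        nxt = []
        for alpha in frontier:
            if rng.choice((0, 2)) == 2:          # xi_alpha = 2: two children
                nxt += [alpha + (1,), alpha + (2,)]
        frontier = nxt
        gen += 1
    return tree
```

Here `len(tree)` plays the role of card(A), the total number of individuals that ever lived.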
Let

    Lip(C^d) := { φ : C^d → ℝ : ‖φ‖_∞ ≤ 1, |φ(x) − φ(y)| ≤ ρ(x, y) ∀x, y ∈ C^d }.

The Vasershtein metric d = d_ρ on M_F(C^d) is given by

    d(μ, μ') = sup{ |μ(φ) − μ'(φ)| : φ ∈ Lip(C^d) }.

This metric induces the weak topology on M_F(C^d) (Ethier and Kurtz 1986, p. 150, Ex. 2). Suppose that c > 0. Let

    σ : [0,∞) × M_F(C^d) × C^d → ℝ^{d×d},
    b : [0,∞) × M_F(C^d) × C^d → ℝ^d,
    γ : [0,∞) × D([0,∞), M_F(C^d)) × C^d → [c, ∞).

Suppose that v, Γ̄ : [0,∞) → [1,∞) are non-decreasing functions and that p is an arbitrary but otherwise fixed positive integer. Γ : ℕ × [0,∞) × ℝ → ℝ is defined by (p, t, x) ↦ v(t) x^p. We will assume that the maps σ, b, γ have the following properties:

Boundedness by the total mass.

    sup_{y ∈ C^d} ‖b(t, K_t, y)‖ + sup_{y ∈ C^d} ‖σ(t, K_t, y)‖ ≤ Γ(p, t, K_t(1))    ∀K ∈ D([0,∞), M_F(C^d)),    (1.2)

    sup_{y ∈ C^d} γ(t, K, y) ≤ Γ̄(t) (1 + ∫_0^t K_s(1) ds)    ∀K ∈ D([0,∞), M_F(C^d)).    (1.3)

Lipschitz condition.

    ‖σ(t, μ, y) − σ(t, μ', y')‖ + ‖b(t, μ, y) − b(t, μ', y')‖ ≤ Γ(p, t, μ(1) ∨ μ'(1)) ( ρ(y, y') + d(μ, μ') )    ∀μ, μ' ∈ M_F(C^d).    (1.4)

Finally, for any T > 0 there is a finite constant c_T such that ‖b(s, 0, 0)‖ + ‖σ(s, 0, 0)‖ ≤ c_T for all s ≤ T. Note that if y ∈ C^d, K ∈ D([0,∞), M_F(C^d)) then

    ‖b(s, K_s, y^s)‖ ≤ ‖b(s, 0, 0)‖ + Γ(p, s, K_s(1)) ( ‖y^s‖_∞ + K_s(1) )
                   ≤ ‖b(s, 0, 0)‖ + (K_s(1) ∨ 1) Γ(p, s, K_s(1)) (1 + ‖y^s‖_∞).

Similarly,

    ‖σ(s, K_s, y^s)‖ ≤ ‖σ(s, 0, 0)‖ + (K_s(1) ∨ 1) Γ(p, s, K_s(1)) (1 + ‖y^s‖_∞).

Remark 1.6. (a) Notice that the second argument of b and σ is less general than the second argument of γ. This stems from the fact that we wanted to keep the notation as simple as possible, but also wanted to show the reader how to generalize the proof and to cover some interesting examples. It will be easy for the reader to modify our proof to cover more general coefficients of the form

    σ : [0,∞) × D([0,∞), M_F(C^d)) × C^d → ℝ^{d×d},
    b : [0,∞) × D([0,∞), M_F(C^d)) × C^d → ℝ^d.
(b) The condition that γ be bounded away from 0 can be weakened to

    ∫_0^∞ γ(t, K, y) dt = ∞    ∀y ∈ C^d, ∀K ∈ D([0,∞), M_F(C^d)).

Unfortunately any further generalization is beyond the scope of the techniques employed in this chapter.

Examples. The reader will find many interesting examples in Perkins (1995). Here is a sample of several types of coefficients that satisfy our hypotheses. See Perkins (1995) for more details.

(a) Assume λ, δ, ε > 0. Let p_ε(x) be the d-dimensional heat kernel. Set

    γ(t, K, y) = δ + exp( −∫_0^t ∫ p_ε(y'_s − y_t) K_s(dy') e^{−λ(t−s)} ds ).

We can interpret a superprocess as a biological population of, say, goat-like particles. The branching rate of a particle located at y_t becomes smaller if many of the particles (goats) have spent too much time near y(t) in the past, thus depleting the food supplies and making local conditions less favorable for reproduction. The parameter λ represents the rate of replenishment of the environment. We would like to consider ε = δ = 0, but for technical reasons we are obliged to assume that both parameters are strictly positive. Note that γ is bounded and bounded away from 0, so it satisfies the hypotheses trivially.

(b) Let p, ε be as in Example (a). Define

    b(t, K_t, y) = ∫_0^t ∫ p_ε(y'_s − y_t) K_t(dy') ds.

In this case the particles drift away from the places they or their living fellow particles have visited in the past.

(c) Let d_e be the Vasershtein metric on M_F(ℝ^d) associated with the Euclidean metric on ℝ^d. Assume σ̄ : M_F(ℝ^d) × ℝ^d → ℝ^{d×n} and b̄ : M_F(ℝ^d) × ℝ^d → ℝ^d satisfy

    ‖σ̄(μ, x) − σ̄(μ', x')‖ + ‖b̄(μ, x) − b̄(μ', x')‖ ≤ const.(1 + μ(1) ∨ μ'(1)) [ d_e(μ, μ') + ‖x − x'‖ ]

and

    ‖σ̄(μ, x)‖ + ‖b̄(μ, x)‖ ≤ const.(1 + μ(1))    ∀x, x' ∈ ℝ^d, ∀μ, μ' ∈ M_F(ℝ^d).

Define Π_t : M_F(C^d) → M_F(ℝ^d) by Π_t(μ)(A) = μ({y : y_t ∈ A}), and σ, b by

    σ(t, K_t, y) = σ̄(Π_t(K_t), y_t),    b(t, K_t, y) = b̄(Π_t(K_t), y_t).
This example shows that the case of M_F(ℝ^d)-valued interacting superprocesses follows as an instance of the historical superprocesses.

(d) This is a continuation of Example (c). Let a : ℝ^d → ℝ^{d×n} and b : ℝ^d → ℝ^d be bounded Lipschitz continuous functions, and define σ̄ and b̄ by

    σ̄(μ, x) = ∫ a(x − y) μ(dy),    b̄(μ, x) = ∫ b(x − y) μ(dy).

More generally, if b_k : ℝ^{d(k+1)} → ℝ^d and a_k : ℝ^{d(k+1)} → ℝ^{d×n}, k = 1, 2, …, are bounded Lipschitz functions then we can define coefficients

    σ̄(μ, x) = Σ_{k=1}^{∞} ∫ ⋯ ∫ a_k(x, x_1, …, x_k) μ(dx_1) … μ(dx_k),
    b̄(μ, x) = Σ_{k=1}^{∞} ∫ ⋯ ∫ b_k(x, x_1, …, x_k) μ(dx_1) … μ(dx_k).

We proceed to define the particle systems. For each N we define a system of age-dependent branching processes as follows.

Step 0. Pick a BGW tree A with N roots at random, i.e. according to the law Π_N.

Step 1. Define predictable path functionals b^N : ℕ × [0,∞) × C^{Nd} → ℝ^{Nd} and σ^N : ℕ × [0,∞) × C^{Nd} → ℝ^{Nd×Nn} by

    b_i^N(s, y^1, …, y^N) := b( s, (1/N) Σ_{j=1}^N δ_{(y^j)^s}, y^i ),
    σ_i^N(s, y^1, …, y^N) := σ( s, (1/N) Σ_{j=1}^N δ_{(y^j)^s}, y^i ).

Here each y^i ∈ C^d. Let b^N := (b_1^N, …, b_N^N) and let σ^N be the Nd × Nn matrix whose j-th diagonal d × n block is σ_j^N, j = 1, …, N, and any other entry is equal to 0. Note that by the assumptions on b and σ,

    ‖b_i^N(s, y^1, …, y^N)‖ + ‖σ_i^N(s, y^1, …, y^N)‖ ≤ ‖b_i^N(s, 0, …, 0)‖ + ‖σ_i^N(s, 0, …, 0)‖ + Γ(p, s, card(A)/N)(1 + (y^i)_s^*).    (1.5)

Moreover, both b^N and σ^N are Lipschitz:

    ‖b^N(s, y^1, …, y^N) − b^N(s, x^1, …, x^N)‖ ≤ Γ(p, s, card(A)/N) ((y^1, …, y^N) − (x^1, …, x^N))_s^*,    (1.6)

and similarly

    ‖σ^N(s, y^1, …, y^N) − σ^N(s, x^1, …, x^N)‖ ≤ Γ(p, s, card(A)/N) ((y^1, …, y^N) − (x^1, …, x^N))_s^*.    (1.7)

Note that once A has been chosen, the term Γ(p, s, card(A)/N) can be regarded as a function of s, bounded on bounded intervals.
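For an atomic empirical measure, a Step-1-style coefficient with the averaging drift of Example (d) reduces to a plain sum. A minimal sketch in d = 1 (function names and the particular kernel are ours; paths are reduced to their current positions):

```python
import math

def empirical_drift(kernel, positions, i):
    """Averaged drift of Example (d) evaluated at the empirical measure
    mu = (1/N) sum_j delta_{y^j} (d = 1):
        b_i(s, y^1, ..., y^N) = (1/N) * sum_j kernel(y^i - y^j)."""
    N = len(positions)
    x = positions[i]
    return sum(kernel(x - y) for y in positions) / N

# A bounded Lipschitz kernel under which each atom attracts the particle.
kernel = lambda z: -math.tanh(z)

positions = [-1.0, 0.0, 2.0]                  # three particles, mass 1/3 each
drift = empirical_drift(kernel, positions, 1)  # drift felt at the origin
# drift > 0: since tanh saturates, the atom at 2.0 outweighs the one at -1.0
```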
We shall also need an index set for the particles "alive" at a given time. For this purpose we define an auxiliary set

    J(1, t) := { α ∈ A : |α| = 0 } ⊂ A,    for all t ≥ 0.

Note that J(1, t) does not depend on t. We shall see later the reason for this notation. For each α ∈ J(1, t) set β(α) = 0. This is the birth time of α.

Consider the following system of N d-dimensional SDE's:

    Y_t^α = ∫_0^t b_α(s, Y^1, …, Y^N) ds + ∫_0^t σ_α(s, Y^1, …, Y^N) · dB_s^α,    α ∈ J(1, t).    (1.8)

By (1.5), (1.6), (1.7) and the well known theorem on existence and uniqueness for SDE's with Lipschitz coefficients of linear growth (e.g. Rogers & Williams, p. 128), the system (1.8) possesses a unique, global (i.e. non-exploding), strong solution. Define stopping times

    T_1^α := inf{ t ≥ 0 : N ∫_0^t γ( s, (1/N) Σ_{θ∈J(1,·)} δ_{(Y^θ)^s}, (Y^α)^s ) ds ≥ e_α },    α ∈ J(1, t).

Recall that (1/N) Σ_{θ∈J(1,·)} δ_{(Y^θ)^s} denotes the path u ↦ (1/N) Σ_{θ∈J(1,u)} δ_{(Y^θ)^u} stopped at time s. All of these stopping times are a.s. finite since γ is bounded away from zero, and a.s. different from each other since the exponentials are independent and independent of the Y's. Now define

    T_1 = min{ T_1^α : α ∈ J(1, t) },    α^1 = arg min{ T_1^α : α ∈ J(1, t) },    δ(α^1) = T_1 (the death time of α^1),

and

    K_t^N = (1/N) Σ_{θ∈J(1,t)} δ_{(Y^θ)^t},    0 ≤ t < T_1.

Step 2. α^1 has just died. There are two possibilities: it had either zero or two offspring. Let us examine each possibility separately.

Case 1. ξ_{α^1} = 0. Define

    J(2, t) = J(1, t) if t < T_1,    J(2, t) = J(1, t) − {α^1} if t ≥ T_1.

Set Y_t^{α^1} = ∂ for t ≥ T_1. Now solve the system of card(J(2, T_1)) d-dimensional SDE's

    Y_t^α = Y_{T_1}^α + ∫_{T_1}^t b( s, (1/N) Σ_{θ∈J(2,s)} δ_{(Y^θ)^s}, (Y^α)^s ) ds + ∫_{T_1}^t σ( s, (1/N) Σ_{θ∈J(2,s)} δ_{(Y^θ)^s}, (Y^α)^s ) · dB_s^α,    α ∈ J(2, t), t ≥ T_1.    (1.9)

Just as before, we interpret the coefficients b, σ in (1.9) as predictable Lipschitz path functionals with linear growth. Therefore (1.9) has a unique global strong solution for all times t ≥ T_1. Proceed as before. Define stopping times

    T_2^α := inf{ t ≥ T_1 : N ∫_0^t γ( s, (1/N) Σ_{θ∈J(2,·)} δ_{(Y^θ)^s}, (Y^α)^s ) ds ≥ e_α },    α ∈ J(2, t).

These are also a.s. finite and different. Set

    T_2 = min{ T_2^α : α ∈ J(2, t) },    α^2 = arg min{ T_2^α : α ∈ J(2, t) },    δ(α^2) = T_2,

and

    K_t^N = (1/N) Σ_{θ∈J(2,t)} δ_{(Y^θ)^t},    T_1 ≤ t < T_2.

Case 2. ξ_{α^1} = 2.
Define

    J(2, t) = J(1, t) if t < T_1,    J(2, t) = J(1, t) ∪ {α^1 1, α^1 2} − {α^1} if t ≥ T_1.

We set T_1 to be the birth time of α^1 1 and α^1 2:

    β(α^1 1) = β(α^1 2) := T_1.

Set Y_t^{α^1} = ∂ for all t ≥ T_1 and define (Y^{α^1 1})^{T_1} = (Y^{α^1 2})^{T_1} := (Y^{α^1})^{T_1 −}. Now solve the system of card(J(2, T_1)) d-dimensional SDE's

    Y_t^α = Y_{T_1}^α + ∫_{T_1}^t b( s, (1/N) Σ_{θ∈J(2,s)} δ_{(Y^θ)^s}, (Y^α)^s ) ds + ∫_{T_1}^t σ( s, (1/N) Σ_{θ∈J(2,s)} δ_{(Y^θ)^s}, (Y^α)^s ) · dB_s^α,    α ∈ J(2, t), t ≥ T_1.    (1.10)

Once more we interpret the coefficients b, σ in (1.10) as predictable Lipschitz path functionals with linear growth. Notice that even though the b, σ in this step are possibly different from those in Step 1, the Lipschitz "constant" is the same: Γ(p, s, card(A)/N). Hence (1.10) has a unique global strong solution for all times t ≥ T_1. Define stopping times

    T_2^α := inf{ t ≥ T_1 : N ∫_{β(α)}^t γ( s, (1/N) Σ_{θ∈J(2,·)} δ_{(Y^θ)^s}, (Y^α)^s ) ds ≥ e_α },    α ∈ J(2, t).

These are a.s. finite and different. Set

    T_2 = min{ T_2^α : α ∈ J(2, t) },    α^2 = arg min{ T_2^α : α ∈ J(2, t) },    δ(α^2) = T_2,

and

    K_t^N = (1/N) Σ_{θ∈J(2,t)} δ_{(Y^θ)^t},    T_1 ≤ t < T_2.

Step (n+1). Suppose that we have already defined J(n, t) and T_1 < … < T_n, and that α^n died at time T_n.

Case 1. ξ_{α^n} = 0. Define

    J(n+1, t) = J(n, t) if t < T_n,    J(n+1, t) = J(n, t) − {α^n} if t ≥ T_n.

Stop if J(n+1, T_n) is the empty set, and in that case define K_t^N = 0 for t ≥ T_n. Otherwise, define Y_t^{α^n} = ∂ for t ≥ T_n and solve the system of card(J(n+1, T_n)) d-dimensional SDE's

    Y_t^α = Y_{T_n}^α + ∫_{T_n}^t b( s, (1/N) Σ_{θ∈J(n+1,s)} δ_{(Y^θ)^s}, (Y^α)^s ) ds + ∫_{T_n}^t σ( s, (1/N) Σ_{θ∈J(n+1,s)} δ_{(Y^θ)^s}, (Y^α)^s ) · dB_s^α,    α ∈ J(n+1, t), t ≥ T_n.    (1.11)

As before (1.11) has a unique global strong solution for all times t ≥ T_n. Define stopping times

    T_{n+1}^α := inf{ t ≥ T_n : N ∫_{β(α)}^t γ( s, (1/N) Σ_{θ∈J(n+1,·)} δ_{(Y^θ)^s}, (Y^α)^s ) ds ≥ e_α },    α ∈ J(n+1, t).

These are a.s. finite and different. Set

    T_{n+1} = min{ T_{n+1}^α : α ∈ J(n+1, t) },    α^{n+1} = arg min{ T_{n+1}^α : α ∈ J(n+1, t) },    δ(α^{n+1}) = T_{n+1},

and

    K_t^N = (1/N) Σ_{θ∈J(n+1,t)} δ_{(Y^θ)^t},    T_n ≤ t < T_{n+1}.

Case 2. ξ_{α^n} = 2. Define

    J(n+1, t) = J(n, t) if t < T_n,    J(n+1, t) = J(n, t) ∪ {α^n 1, α^n 2} − {α^n} if t ≥ T_n.
We also set T_n to be the birth time of α^n 1 and α^n 2:

    β(α^n 1) = β(α^n 2) := T_n.

Set Y_t^{α^n} = ∂ for all t ≥ T_n and define (Y^{α^n 1})^{T_n} = (Y^{α^n 2})^{T_n} := (Y^{α^n})^{T_n −}. Now solve the system of card(J(n+1, T_n)) d-dimensional SDE's

    Y_t^α = Y_{T_n}^α + ∫_{T_n}^t b( s, (1/N) Σ_{θ∈J(n+1,s)} δ_{(Y^θ)^s}, (Y^α)^s ) ds + ∫_{T_n}^t σ( s, (1/N) Σ_{θ∈J(n+1,s)} δ_{(Y^θ)^s}, (Y^α)^s ) · dB_s^α,    α ∈ J(n+1, t), t ≥ T_n.    (1.12)

(1.12) has a unique global strong solution for all times t ≥ T_n. Define stopping times

    T_{n+1}^α := inf{ t ≥ T_n : N ∫_{β(α)}^t γ( s, (1/N) Σ_{θ∈J(n+1,·)} δ_{(Y^θ)^s}, (Y^α)^s ) ds ≥ e_α },    α ∈ J(n+1, t).

These are a.s. finite and different. Set

    T_{n+1} = min{ T_{n+1}^α : α ∈ J(n+1, t) },    α^{n+1} = arg min{ T_{n+1}^α : α ∈ J(n+1, t) },    δ(α^{n+1}) = T_{n+1},

and

    K_t^N = (1/N) Σ_{θ∈J(n+1,t)} δ_{(Y^θ)^t},    T_n ≤ t < T_{n+1}.

Remark 1.7. This procedure ends with step card(A). Notice that with probability one it takes only a finite amount of time for the procedure to finish. This is due to the fact that the branching rate is bounded below. In fact, T_k − T_{k−1} ≤ e_{α^k}/(Nc).

We shall need some terminology.

Definition 1.8. Remark 1.7 implies that A = {α^n : n = 1, …, card(A)}. Therefore we assign a birth and a death time to each particle in A. By convention Y^α = ∂ and β(α) = δ(α) = ∞ if α ∉ A. Set h_α(s) = 1(β(α) ≤ s < δ(α)), the indicator of the lifespan of α, and write

    α ∼ t  ⇔  h_α(t) = 1,    α < t  ⇔  δ(α) ≤ t,    t < α  ⇔  t < β(α).

We shall adopt the convention that for any φ : ℝ^k → ℝ and any 1 ≤ j ≤ k, (x_1, …, x_k) ∈ ℝ^k, we have φ(x_1, …, x_{j−1}, ∂, x_{j+1}, …, x_k) = 0. Finally, we define some very important σ-fields. Let

    F_t^N := σ( J(n, s), {B_s^α : α ∈ J(n, s)}, {e_α : α ∈ J(n, s), δ(α) ≤ s}, {ξ_α : α ∈ J(n, s), δ(α) ≤ s} ; n = 1, …, ∞ ; s ≤ t ),    t ≥ 0.

Remark 1.9. (a) K^N is right continuous.

(b) From the definition of δ(α) and since γ > 0 it follows that

    δ(α) = inf{ t ≥ β(α) : N ∫_{β(α)}^t γ(s, (K^N)^s, (Y^α)^s) ds ≥ e_α }.    (1.13)

In fact one easily sees that

    N ∫_{β(α)}^{δ(α)} γ(s, (K^N)^s, (Y^α)^s) ds = e_α.    (1.14)

Moreover,

    Y_t^α = Y_{T_n}^α + ∫_{T_n}^t b(s, K_{s−}^N, (Y^α)^s) ds + ∫_{T_n}^t σ(s, K_{s−}^N, (Y^α)^s) · dB_s^α,    α ∈ J(n+1, t),    T_n ≤ t < T_{n+1}.    (1.15)
(c) At this point a discussion of what we have accomplished so far will be helpful. The procedure to construct the interacting particle system K^N is the following. First, a BGW tree is chosen at random. This tree contains the genealogical information and nothing else. N particles are placed at the origin and start moving according to some SDE in which the drift and diffusion depend on their own trajectories and (in a symmetric way) on the trajectories of all other particles. This SDE possesses a unique strong solution because of the Lipschitz nature of the coefficients. Each particle carries an alarm clock which is a Poisson point process with a random adapted intensity (also called a doubly stochastic Poisson process). This intensity is strictly increasing until the first arrival and then it is zero (so it is a degenerate Poisson point process with only one arrival). When the first bell rings we decide whether to kill that particle or to replace it with two offspring. At this instant of time, say t_0, we also update K^N, which will then be concentrated on the paths of the N − 1 or N + 1 particles alive. We also know the value of the exponential random variable driving the Poisson process and the number of offspring. This information is also included in the filtration F_t^N. Observe that the driving Brownian motions are adapted to this filtration and are in fact still Brownian motions, since the information contained in (F_t^N) and not contained in their own filtration concerns some other independent Brownian motions, or some independent exponentials, or some independent Bernoulli random variables. The fact that the intensities of the clocks are adapted is therefore essential, and it is embodied in equation (1.13). Note also that since the system of SDE's has a strong solution, the trajectories Y^α are (F_t^N)-adapted, and therefore K^N itself is (F_t^N)-adapted.
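The adapted alarm clock of (1.13) is easy to mimic on a time grid: accumulate the intensity N γ until it first exceeds the exponential variable e_α. A sketch (our own naming and discretization):

```python
def death_time(gamma_path, dt, N, e_alpha, birth=0.0):
    """First grid time t > birth at which
        N * integral_{birth}^{t} gamma(s) ds >= e_alpha,
    where gamma_path[k] is the branching rate on the k-th grid interval
    after birth.  Returns None if the accumulated intensity never reaches
    e_alpha on the grid."""
    acc = 0.0
    for k, g in enumerate(gamma_path):
        acc += N * g * dt            # increment of the adapted intensity
        if acc >= e_alpha:
            return birth + (k + 1) * dt
    return None
```

With a constant rate γ ≡ c the clock rings at about e_α/(Nc) after birth, so lifetimes shrink like 1/N, in line with Remark 1.7.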
The process continues in the same fashion until the next alarm clock goes off, and so on.

1.2.2 An Equation for K^N

Observe that as a consequence of (1.15),

    Y_t^α = Y_{β(α)}^α + ∫_{β(α)}^t b(s, K_{s−}^N, (Y^α)^s) ds + ∫_{β(α)}^t σ(s, K_{s−}^N, (Y^α)^s) · dB_s^α,    β(α) ≤ t < δ(α).    (1.16)

Indeed, if β(α) = T_k and δ(α) = T_{k+j}, then for t ∈ [T_k, T_{k+j}) equation (1.15), applied on each interval [T_i, T_{i+1}) with k ≤ i < k + j and summed telescopically, gives (1.16).

For any t ∈ [β(α), δ(α)), r ∈ ℕ, 0 ≤ t_1 ≤ … ≤ t_r and ψ ∈ C_b^∞(ℝ^{rd}), Itô's lemma and (1.16) yield

    ψ(t_1, …, t_r)(t, Y^α) = ψ(t_1, …, t_r)(β(α), Y^α) + ∫_0^t ( ∇ψ(s, Y^α), b(s, K_{s−}^N, (Y^α)^s) ) h_α(s) ds
        + ½ Σ_{i,j=1}^d ∫_0^t ψ̄_ij(s, Y^α) a_{ij}(s, K_{s−}^N, (Y^α)^s) h_α(s) ds
        + ∫_0^t ( ∇ψ(s, Y^α) σ(s, K_{s−}^N, (Y^α)^s), h_α(s) dB_s^α ),    (1.17)

where ∇ψ(s, Y^α) is considered as a row vector for matrix multiplication and (·,·) denotes the usual inner product in ℝ^d.

Notation 1.10. Let t ≥ 0, y ∈ C^d, r ∈ ℕ, 0 ≤ t_1 ≤ … ≤ t_r, Ψ ∈ C_b²(ℝ^{rd}) and K ∈ D([0,∞), M_F(C^d)). We write

    A(K)(Ψ)(t, y) = ( ∇Ψ(t, y), b(t, K_t, y) ) + ½ Σ_{i,j=1}^d Ψ̄_ij(t, y) a_{ij}(t, K_t, y),

    Z_t^N(Ψ) = (1/N) Σ_α Ψ(δ(α)−, Y^α) 1(δ(α) ≤ t) (ξ_α − 1),

    W_t^N(Ψ) = (1/N) Σ_α ∫_0^t ( ∇Ψ(s, Y^α) σ(s, K_{s−}^N, (Y^α)^s), h_α(s) dB_s^α ).

The following is the main result of this Subsection.

Proposition 1.11. For any Ψ ∈ D_0,

    K_t^N(Ψ) = K_0^N(Ψ) + ∫_0^t ∫ A(K^N)(Ψ)(s, y) K_s^N(dy) ds + W_t^N(Ψ) + Z_t^N(Ψ).    (1.18)

In particular, K_t^N(1) = 1 + Z_t^N(1).

Proof. Apply (1.17) to each particle and sum over α, noting that the term inside the square brackets is 0 if δ(α) ≤ t:

    K_t^N(Ψ) = (1/N) Σ_{α ∼ t} Ψ(t, Y^α)
             = (1/N) Σ_α [ Ψ(β(α), Y^α) 1(β(α) ≤ t) + ∫_0^t A(K_{s−}^N)(Ψ)(s, Y^α) h_α(s) ds + ∫_0^t ( ∇Ψ(s, Y^α) σ(s, K_{s−}^N, (Y^α)^s), h_α(s) dB_s^α ) ]
               − (1/N) Σ_α Ψ(δ(α)−, Y^α) 1(δ(α) ≤ t).

Since each death at time δ(α) removes Ψ(δ(α)−, Y^α) and creates ξ_α copies of it (the offspring start from the stopped path of their parent),

    (1/N) Σ_α Ψ(β(α), Y^α) 1(β(α) ≤ t) = K_0^N(Ψ) + (1/N) Σ_α ξ_α Ψ(δ(α)−, Y^α) 1(δ(α) ≤ t).

Combining the last two displays and recalling that K_s^N = (1/N) Σ_α h_α(s) δ_{(Y^α)^s} gives

    K_t^N(Ψ) = K_0^N(Ψ) + ∫_0^t ∫ A(K^N)(Ψ)(s, y) K_s^N(dy) ds + W_t^N(Ψ) + Z_t^N(Ψ).

This proves (1.18). To finish the proof of the proposition observe that if Ψ = 1 then ∇Ψ = 0 and Ψ̄_ij = 0, so the drift term and W^N(1) vanish and K_t^N(1) = 1 + Z_t^N(1). ∎

Lemma 1.12. For any Ψ ∈ D_0 and α ∈ I, let

    M_t^α = M_t^α(Ψ) := (ξ_α − 1) 1(δ(α) ≤ t) Ψ(δ(α)−, Y^α).

Then

(a) M^α is an (F_t^N)-martingale.

(b) ⟨M^α⟩_t = N ∫_0^t h_α(s) γ(s, (K^N)^s, (Y^α)^s) Ψ(s, Y^α)² ds.

(c) If α^1 ≠ α^2 then ⟨M^{α^1}, M^{α^2}⟩_t = 0.
Proof. (a) By stochastic integration it suffices to prove the result for Ψ = 1 (see the proof of part (b) below for a similar argument). Let J_t := σ(1(δ(α) ≤ s) : s ≤ t). The only information contained in J_∞ is the death time of α. Therefore ξ_α is independent of J_∞. We must show that

    E[ (ξ_α − 1)( 1(δ(α) ≤ t) − 1(δ(α) ≤ s) ) | F_s^N ] = E[ (ξ_α − 1) 1(s < δ(α) ≤ t) | F_s^N ] = 0.    (1.19)

Let b, q be real numbers, r ∈ {0, 2} and

    A_1(η, u, b) = { B_u^η ≤ b : u ≤ s, η ∈ J(n, s) for some n },
    A_2(ν, q) = { e_ν ≤ q : ν ∈ J(n, s) for some n, δ(ν) ≤ s },
    A_3(θ, r) = { ξ_θ = r : θ ∈ J(n, s) for some n, δ(θ) ≤ s }.

The σ-field F_s^N is generated by sets of the form

    A = A_1(η_1, u_1, b_1) ∩ … ∩ A_1(η_i, u_i, b_i) ∩ A_2(ν_1, q_1) ∩ … ∩ A_2(ν_j, q_j) ∩ A_3(θ_1, r_1) ∩ … ∩ A_3(θ_k, r_k),

where δ(ν_l), δ(θ_m) ≤ s, l = 1, …, j, m = 1, …, k. Hence ξ_α is conditionally independent of A given δ(α) > s. Therefore

    P[ ξ_α = r, A, δ(α) > s, δ(α) ≤ t ] = P[ ξ_α = r ] P[ A, δ(α) > s, δ(α) ≤ t ].

Since (ξ_α − 1) has mean zero, the above equation implies (1.19).

(b) We claim that it suffices to show that

    ⟨1(δ(α) ≤ ·)⟩_t = N ∫_0^t h_α(s) γ(s, (K^N)^s, (Y^α)^s) ds,    (1.20)

where ⟨1(δ(α) ≤ ·)⟩_t denotes the dual predictable projection of 1(δ(α) ≤ t). The claim follows from the theory of stochastic integration. Put H_t := Ψ((δ(α) ∧ t)−, Y^α). H is a bounded predictable process. Moreover X_t := (ξ_α − 1) 1(δ(α) ≤ t) is a semimartingale. Clearly

    (H · X)_t = (ξ_α − 1) Ψ(δ(α)−, Y^α) 1(δ(α) ≤ t) = M_t^α.

Hence, if (1.20) holds then

    ⟨M^α⟩_t = ⟨H · X⟩_t = ∫_0^t H_s² d⟨X⟩_s
            = ∫_0^t Ψ((δ(α) ∧ s)−, Y^α)² E[(ξ_α − 1)²] d⟨1(δ(α) ≤ ·)⟩_s    (by the independence of ξ_α)
            = N ∫_0^t Ψ(s, Y^α)² h_α(s) γ(s, (K^N)^s, (Y^α)^s) ds    (by (1.20)).

To prove (1.20) we will need the following fact from the theory of semimartingales. Let (A_t, F_t) be a locally integrable, strictly increasing, adapted process such that lim_{t→∞} A_t = ∞ a.s. Let u ↦ C_u be the inverse function of t ↦ A_t.
Note that for every u, C_u is an (F_t)-stopping time. Set G_t = F_{C_t}. For any (progressively measurable) process X we can define the time changed process X_t^A = X(C_t). If (X, (F_t)) is a semimartingale with decomposition X_t = X_0 + M_t + V_t, then (X^A, (G_t)) is also a semimartingale and a semimartingale decomposition is given by X_t^A = X_0 + M_t^A + V_t^A.

Let A_t := N ∫_0^t γ(s, (K^N)^s, (Y^α)^s) ds. With this notation Remark 1.9(b) implies δ(α) = inf{ t ≥ β(α) : A_t − A_{β(α)} ≥ e_α }. We now apply the time change result to the process (A_t, F_t^N). Let C be the inverse of A; that is, C_t is defined by

    N ∫_0^{C_t} γ(s, (K^N)^s, (Y^α)^s) ds = t.

Let G_t = F_{C_t}^N. Set X_t = 1(δ(α) ≤ t). Then X is an (F_t^N)-semimartingale and X_t^A = 1(δ(α) ≤ C_t). We compute the compensator of the increasing process (X^A, (G_t)). One easily guesses that it is A_{δ(α)} ∧ t. We must check that

    E[ 1(δ(α) ≤ C_t) − 1(δ(α) ≤ C_s) | G_s ] = E[ A_{δ(α)} ∧ t − A_{δ(α)} ∧ s | G_s ].    (1.21)

To avoid trivialities we assume t > s. On the set {δ(α) ≤ C_s} equation (1.21) is trivially true. So let us assume that δ(α) > C_s. We have then

    E[ 1(δ(α) ≤ C_t) | G_s, δ(α) > C_s ] = E[ 1(A_{δ(α)} ≤ t) | G_s, δ(α) > C_s ]
        = E[ 1(e_α ≤ t) | G_s, e_α > s ]    (by the definition of A and C)
        = 1 − exp(−(t − s)).

In the last line we have used the fact that e_α is an exponential r.v., independent of G_s given e_α > s, and the memoryless property. We also compute

    E[ A_{δ(α)} ∧ t − s | G_s, δ(α) > C_s ] = E[ ∫_s^t 1(τ ≤ A_{δ(α)}) dτ | G_s, δ(α) > C_s ]
        = E[ ∫_s^t 1(τ ≤ e_α) dτ | G_s, e_α > s ]
        = ∫_0^{t−s} P[τ ≤ e_α] dτ
        = 1 − exp(−(t − s)).

Therefore the compensator of X^A is A_{δ(α)} ∧ t, and hence

    ⟨X⟩_t = ⟨X^A⟩_{A_t} = A_{δ(α)} ∧ A_t = A_{δ(α) ∧ t}    (since A is increasing),

which is (1.20).

(c) Arguing as in part (b) we can assume Ψ = 1. Since M^{α^1} and M^{α^2} are jump martingales whose jumps cannot happen simultaneously a.s.,

    [M^{α^1}, M^{α^2}]_t = Σ_{s ≤ t} ΔM_s^{α^1} ΔM_s^{α^2} = 0.

The desired conclusion follows since the angle bracket is the dual predictable projection of the square bracket. ∎

Corollary 1.13. For any Ψ ∈ D_0, Z_·^N(Ψ)
is a right continuous (F_t^N)-martingale with square function

    ⟨Z^N(Ψ)⟩_t = ∫_0^t ∫ Ψ(s, y)² γ(s, (K^N)^s, y^s) K_s^N(dy) ds.

Moreover |ΔZ^N(Ψ)| ≤ (1/N) ‖Ψ‖_∞.

Proof. Clearly Z^N(Ψ) is an r.c.l.l. martingale, since each of the M^α (M^α was defined in Lemma 1.12) is. Its square function is easily calculated:

    ⟨Z^N(Ψ)⟩_t = (1/N²) Σ_α ⟨M^α⟩_t    (by Lemma 1.12(c))
               = (1/N²) Σ_α N ∫_0^t h_α(s) γ(s, (K^N)^s, (Y^α)^s) Ψ(s, Y^α)² ds    (by Lemma 1.12(b))
               = ∫_0^t ∫ Ψ(s, y)² γ(s, (K^N)^s, y^s) K_s^N(dy) ds.

The deterministic bound on the jumps of Z^N(Ψ) comes from the fact that |ΔM^α| ≤ ‖Ψ‖_∞ and two jumps cannot happen at the same time a.s. ∎

Our next step is to show that Z_t^N(Ψ) is in L^p for p ≥ 1. In fact, we shall prove a stronger result. We will require the following well known lemma.

Lemma 1.14 (Gronwall's lemma). Assume f, g : [0,∞) → [0,∞), where g is non-decreasing, f is bounded on compacts, and

    f(t) ≤ c ( g(t) + ∫_0^t f(s) ds ).

Then f(t) ≤ c e^{ct} g(t) for all t ≥ 0.

Proof. See for example (Perkins 1993, Lemma 4.6). ∎

Lemma 1.15. There is a function Γ : ℕ × [0,∞) → [0,∞) such that

    E[ K_t^{N,*}(1)^p + ⟨Z^N(1)⟩_t^{p/2} ] ≤ Γ_p(t) = Γ(p, t).

Proof. We replicate the proof of Lemma 2.1 of Perkins (1995). By Proposition 1.11 and Corollary 1.13, K_t^N(1) = 1 + Z_t^N(1) is a non-negative martingale. Therefore

    E[K_t^N(1)] = 1.    (1.22)

Define a sequence of stopping times by τ_n := inf{ t ≥ 0 : K_t^N(1) ≥ n } ∧ n. (1.22) implies that τ_n → ∞ a.s. Now,

    ⟨Z^N(1)⟩_{t∧τ_n} = ∫_0^{t∧τ_n} ∫ γ(s, (K^N)^s, y^s) K_s^N(dy) ds
                     ≤ ∫_0^{t∧τ_n} Γ̄(s) (1 + ∫_0^s K_r^N(1) dr) K_s^N(1) ds    (by the assumption on γ)
                     ≤ Γ̄(t) (1 + ∫_0^{t∧τ_n} K_s^{N,*}(1) ds)².

Burkholder's inequality yields

    E[ K_{t∧τ_n}^{N,*}(1)^p ] ≤ c_1(p) [ 1 + Γ̄(t)^{p/2} E( 1 + ∫_0^{t∧τ_n} K_s^{N,*}(1) ds )^p ].

Notice that E[K_{t∧τ_n}^{N,*}(1)^p] < ∞. Therefore we can apply Gronwall's lemma to obtain

    E[ K_{t∧τ_n}^{N,*}(1)^p ] ≤ Γ_p(t) = c_2(p, t) e^{c_2(p,t) t}.    (1.23)

Hence

    E[ K_t^{N,*}(1)^p ] ≤ liminf_n E[ K_{t∧τ_n}^{N,*}(1)^p ]    (by Fatou's lemma)
                       ≤ Γ_p(t),

and the bound on E⟨Z^N(1)⟩_t^{p/2} follows from the estimate on ⟨Z^N(1)⟩ above. Note that Γ_p(t) is independent of N, to complete the proof of the lemma. ∎

Corollary 1.16. For any ψ ∈ D_0, Z^N(ψ) is a square integrable martingale.

Remark 1.17.
Both W^N and Z^N can be shown to be orthogonal martingale measures. We won't prove this result because we won't need it.

We can recast (1.18) in SPDE form. We shall restrict ourselves to d = 1 and to the Markovian case; that is, we consider only M_F(ℝ)-valued superprocesses. For simplicity we assume γ = 1. Then a solution of (1.18) corresponds formally to

    ∂u^N/∂t = ½ ∂²/∂x²( σ²(t, u(t,·), x) u(t, x) ) − ∂/∂x( b(t, u(t,·), x) u(t, x) ) + Ż^N + Ẇ^N.

We shall see that W^N → 0 in a strong sense. Taking into account Corollary 1.13 and letting N → ∞, we see that this equation becomes (formally)

    ∂u/∂t = ½ ∂²/∂x²( σ²(t, u(t,·), x) u(t, x) ) − ∂/∂x( b(t, u(t,·), x) u(t, x) ) + √(u(t, x)) Ẇ(t, x),

where Ẇ is a space-time white noise.

1.3 Tightness of the Normalized Branching Particle Systems

In this section we show that the sequence {K^N} defined in Section 1.2 possesses at least one limit point. To this end we shall employ the following theorem (Dawson 1993, p. 48).

Theorem 1.18 (Jakubowski's Criterion). Let (E, d) be a Polish space. Let F be a family of real continuous functions on E that separates points in E and is closed under addition. Given f ∈ F, define f̃ : D_E → D([0,∞), ℝ) by (f̃ x)(t) := f(x(t)). A sequence {P_N} of probability measures on D_E is tight iff the following two conditions hold:

(a) (Convergence of projections.) The family {P_N} is F-weakly tight, i.e. for each f ∈ F the sequence {P_N ∘ (f̃)^{−1}} of probability measures on D([0,∞), ℝ) is tight.

(b) (Compact containment condition.) For each T > 0 and ε > 0 there is a compact set K_{T,ε} ⊂ E such that

    inf_N P_N( D([0, T], K_{T,ε}) ) ≥ 1 − ε.

If the sequence {P_N} is tight, then it is relatively compact in the weak topology on M_1(D_E).

We will apply Jakubowski's criterion to the sequence {Law(K^N)}. Here E = M_F(C^d). We tackle conditions (a) and (b) in the next two Subsections.
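Returning for a moment to the formal SPDE above: in the simplest case σ ≡ 1, b ≡ 0 it reduces to the classical equation ∂u/∂t = ½ u_xx + √u Ẇ for the density of one-dimensional super-Brownian motion. A naive explicit scheme (our own discretization, offered only as an illustration: white noise on a space-time cell of size dx × dt contributes a Gaussian of variance dt/dx) looks like this:

```python
import math
import random

def sbm_density_step(u, dx, dt, rng):
    """One explicit Euler step for du = (1/2) u_xx dt + sqrt(u) dW, the
    formal SPDE for the density of 1-d super-Brownian motion (sigma = 1,
    b = 0, gamma = 1).  Dirichlet boundary; the solution is kept >= 0."""
    n = len(u)
    out = [0.0] * n
    for i in range(1, n - 1):
        lap = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / (dx * dx)
        noise = math.sqrt(max(u[i], 0.0)) * rng.gauss(0.0, math.sqrt(dt / dx))
        out[i] = max(u[i] + 0.5 * dt * lap + noise, 0.0)
    return out

rng = random.Random(3)
dx, dt = 0.1, 0.001          # dt <= dx*dx keeps the deterministic part stable
u = [0.0] * 41
u[20] = 1.0 / dx             # approximate unit mass at the center
for _ in range(50):
    u = sbm_density_step(u, dx, dt, rng)
```

The clipping at 0 is a crude surrogate for the fact that the true solution is nonnegative; convergence of such schemes is not addressed here.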
1.3.1 Convergence of the Projections

We begin this Subsection by quoting some useful theorems.

Theorem 1.19. Suppose that $(X^n)$ is a sequence of locally square integrable martingales. Then for $(X^n)$ to be tight with continuous limit points it is sufficient that:

1. The sequence $(X^n_0)$ is tight in $\mathbb{R}$.
2. The sequence $(\langle X^n\rangle)$ is tight and its limit points are continuous.
3. For all $M \in \mathbb{N}$, $\varepsilon > 0$,
$$\lim_n \mathbb{P}\Big[\sup_{t\le M}|\Delta X^n_t| > \varepsilon\Big] = 0.$$

Proof. See Theorems VI.3.26 and VI.4.13 of (Jacod & Shiryaev 1987). $\square$

Theorem 1.20. (a) Let $(M^n)$, $(Y^n)$ be sequences of real-valued processes. If the sequence $(M^n)$ is such that

$$\forall T > 0,\ \forall \varepsilon > 0, \qquad \lim_{n\to\infty} \mathbb{P}\Big[\sup_{t\le T}|M^n_t| > \varepsilon\Big] = 0,$$

and if the sequence $(Y^n)$ is tight (resp. converges in law to $Y$), then the sequence $(Y^n + M^n)$ is tight (resp. converges in law to $Y$).

(b) Let $(M^n)$ be a tight sequence of processes with values in a Polish space $E_1$, and let $(Y^n)$ be a tight sequence of processes with values in a (possibly different) Polish space $E_2$. Suppose that the limit points of both sequences are continuous. Then the sequence $(Y^n,M^n)$ is tight and its limit points are continuous.

Proof. See Lemma VI.3.31 and Corollary VI.3.33 of (Jacod & Shiryaev 1987). $\square$

Notation 1.21. Let's write $A(\delta,T,y)$ for the oscillation of the path $y$:

$$A(\delta,T,y) := \sup\big\{|y(t)-y(s)| : s,t \in [0,T],\ |s-t| \le \delta\big\}.$$

Proposition 1.22. We keep the notation of the preceding Section. Fix $\Phi \in D_0$.

(a) For any $t > 0$, $\lim_N \mathbb{E}\big[W^{N*}_t(\Phi)^2\big] = 0$.

(b) The sequence $(\langle Z^N(\Phi)\rangle)$ is tight and its limit points are continuous.

(c) The sequence $(Z^N(\Phi))$ is tight and its limit points are continuous.

(d) The sequence $\big(\int_0^\cdot\!\int A(K^N)(\Phi)(s,y)\,K^N_s(dy)\,ds\big)$ is tight and its limit points are continuous.

Proof. (a) Let $V^\alpha$ denote the stochastic-integral contribution of the particle $\alpha$ to $W^N(\Phi)$,

$$V^\alpha_t = \int_0^t 1_\alpha(s)\,\sigma\big(s,K^N_{s-},Y^\alpha\big)\,h_\Phi(s,Y^\alpha)\,dB^\alpha_s,$$

where $1_\alpha(s)$ indicates that $\alpha$ is alive at time $s$. This is a continuous local $(\mathcal{F}^N_t)$-martingale (a martingale indeed!), since $B^\alpha$ is an $(\mathcal{F}^N_t)$-Brownian motion (see Remark 1.9(c)).
Note that $\langle V^\alpha, V^{\alpha'}\rangle = 0$ if $\alpha \ne \alpha'$ because of the independence of the Brownian motions $B^\alpha$ and $B^{\alpha'}$. Moreover

$$\langle V^\alpha\rangle_t \le \int_0^t \big\|\sigma\big(s,K^N_{s-},Y^\alpha\big)\big\|^2\,\|h\|_\infty^2\,ds \le \|h\|_\infty^2 \int_0^t \Gamma\big(p,s,K^{N*}_s(1)\big)^2\,ds \quad \text{(by the hypothesis on } \sigma\text{)}.$$

Hence, since the cross-covariations are $0$,

$$\mathbb{E}\big[(W^{N*}_t(\Psi))^2\big] \le \frac{c}{N}\,\|\Psi\|_\infty^2\,\mathbb{E}\Big[\int_0^t K^N_s(1)\,\Gamma\big(p+1,s,K^{N*}_s(1)\big)^2\,ds\Big] \longrightarrow 0$$

as $N \to \infty$, by Lemma 1.15. This concludes the proof of (a).

(b) Fix $T > 0$ and choose $0 \le s \le t \le T$. By Corollary 1.13,

$$\langle Z^N(\Psi)\rangle_t - \langle Z^N(\Psi)\rangle_s = \int_s^t\!\!\int \gamma\big(u,(K^N)^u,y^u\big)\,\Psi(u,y)^2\,K^N_u(dy)\,du.$$

Recall that our hypotheses imply that $\gamma(u,(K^N)^u,y^u) \le \Gamma(t)\big(1 + tK^{N*}_t(1)\big)$ for $u \le t$. Thus

$$\langle Z^N(\Psi)\rangle_t - \langle Z^N(\Psi)\rangle_s \le (t-s)\,\|\Psi\|_\infty^2\,\Gamma(T)\big(1 + TK^{N*}_T(1)\big)K^{N*}_T(1).$$

By Lemma 1.15, there is a finite constant $A = A(T)$ such that

$$\sup_N \mathbb{E}\Big[\Big(\big(1 + TK^{N*}_T(1)\big)K^{N*}_T(1)\Big)^2\Big] \le A.$$

Hence, for any $\varepsilon > 0$,

$$\mathbb{P}\big[A\big(\delta,T,\langle Z^N(\Psi)\rangle\big) > \varepsilon\big] \le \varepsilon^{-2}\,\mathbb{E}\big[A\big(\delta,T,\langle Z^N(\Psi)\rangle\big)^2\big] \le \Big(\frac{\delta}{\varepsilon}\Big)^2\,\|\Psi\|_\infty^4\,\Gamma(T)^2\,A.$$

Therefore

$$\lim_{\delta\to0}\limsup_N \mathbb{P}\big[A\big(\delta,T,\langle Z^N(\Psi)\rangle\big) > \varepsilon\big] = 0.$$

This shows that $(\langle Z^N(\Psi)\rangle)$ is tight with continuous limit points, which completes the proof of (b).

(c) We shall use Theorem 1.19. Use part (b) and note that condition 3 in Theorem 1.19 is satisfied, since $|\Delta Z^N(\Psi)| \le \frac{2}{N}\|\Psi\|_\infty$ by Corollary 1.13.

(d) The proof is very similar to that of part (b). Let's call

$$S^N_t := \int_0^t\!\!\int A(K^N)(\Phi)(s,y)\,K^N_s(dy)\,ds. \qquad (1.24)$$

From the assumptions on $b$, $\sigma$ and $\Psi$ it follows that

$$\big|A(K^N)(\Phi)(s,y)\big| \le \|\Psi'\|_\infty\,\Gamma\big(p,s,K^{N*}_s(1)\big) + \|\Psi''\|_\infty\,\Gamma\big(p,s,K^{N*}_s(1)\big)^2.$$

Therefore, for any $T > 0$ and $0 \le s \le t \le T$,

$$S^N_t - S^N_s \le \text{constant}\cdot(t-s)\,\Gamma\big(p+1,s,K^{N*}_T(1)\big)^2.$$

We know that there is a finite constant $A = A(T)$ such that $\sup_N \mathbb{E}\big[\Gamma\big(p+1,s,K^{N*}_T(1)\big)^4\big] \le A$. Use Arzelà-Ascoli as in (b) to finish the proof of (d). $\square$

Corollary 1.23. For any $\Phi \in D_0$ the sequence $(K^N(\Phi))_N$ is tight and its limit points are continuous.

Proof.
The Corollary follows immediately from Propositions 1.11, 1.22 and Theorem 1.20. $\square$

1.3.2 Compact Containment Condition

Proposition 1.24. Given $T, \varepsilon > 0$ there exists a compact set $F_{T,\varepsilon} \subset M_F(C^d)$ such that

$$\inf_N \mathbb{P}\big\{K^N_\cdot \in D([0,T],F_{T,\varepsilon})\big\} \ge 1 - \varepsilon. \qquad (1.25)$$

Proof. Suppose that for all $n$, $T > 0$ there is a compact subset $G_{T,n}$ of $C^d$ such that

$$\sup_N \mathbb{P}\Big\{\sup_{0\le t\le T} K^N_t\big(G_{T,n}^c\big) > 2^{-n}\Big\} \le 2^{-n}. \qquad (1.26)$$

We claim that (1.26) implies (1.25). Indeed,

$$\sup_N \sum_n \mathbb{P}\Big\{\sup_{0\le t\le T} K^N_t\big(G_{T,n}^c\big) > 2^{-n}\Big\} \le 1.$$

The Borel-Cantelli lemma implies that

$$\inf_N \mathbb{P}\Big\{\sup_{0\le t\le T} K^N_t\big(G_{T,n}^c\big) \le 2^{-n} \text{ eventually}\Big\} = 1.$$

That is,

$$\lim_{m\to\infty}\inf_N \mathbb{P}\Big\{\sup_{0\le t\le T} K^N_t\big(G_{T,n}^c\big) \le 2^{-n}\ \forall n \ge m\Big\} = 1.$$

Hence there is an $m = m(\varepsilon)$ such that

$$\inf_N \mathbb{P}\Big\{\sup_{0\le t\le T} K^N_t\big(G_{T,n}^c\big) \le 2^{-n}\ \forall n \ge m\Big\} \ge 1 - \varepsilon/2.$$

By relabeling we can assume $m = 1$. Thus (1.26) implies

$$\inf_N \mathbb{P}\Big\{\forall n:\ \sup_{0\le t\le T} K^N_t\big(G_{T,n}^c\big) \le 2^{-n}\Big\} \ge 1 - \varepsilon/2. \qquad (1.27)$$

Recall Prohorov's theorem: a set $F_{T,\varepsilon} \subset M_F(C^d)$ is compact if it is closed and the conditions below are met.

(P1) For all $n$ there exists a compact set $G_n \subset C^d$ such that $\sup_{\mu\in F_{T,\varepsilon}} \mu(G_n^c) \le 2^{-n}$.

(P2) $\sup_{\mu\in F_{T,\varepsilon}} \mu(1) < \infty$.

Let us suppose that there is an $M = M(\varepsilon)$ such that

$$\sup_N \mathbb{P}\big\{K^{N*}_T(1) > M(\varepsilon)\big\} \le \varepsilon/2. \qquad (1.28)$$

Set

$$F_{T,\varepsilon} := \big\{\mu \in M_F(C^d) : \mu(1) \le M(\varepsilon),\ \mu\big(G_{T,n}^c\big) \le 2^{-n}\ \forall n\big\}.$$

With this choice of $F_{T,\varepsilon}$, we see that (1.27) and (1.28), together with (P1)-(P2), imply (1.25).

We now show that (1.28) holds. Note that since $K^N_t(1)$ is a martingale,

$$\mathbb{P}\big\{K^{N*}_T(1) > M\big\} \le \frac{\mathbb{E}\big[K^N_T(1)\big]}{M}. \qquad (1.29)$$

By Lemma 1.15 there is a function $\Gamma_1$, independent of $N$, such that $\mathbb{E}[K^N_T(1)] \le \Gamma_1(T)$. Put $M = 2\Gamma_1(T)/\varepsilon$ to see that (1.29) implies (1.28).

Our next task is to exhibit a set $G_{T,n}$ such that (1.26) holds. For this purpose we shall need the following two lemmas.

Lemma 1.25.
For any $\eta > 0$ there is a compact set $F = F(\eta,T) \subset C^d$ such that

$$\sup_N \mathbb{E}\big[K^N_T(F^c)\big] \le \eta. \qquad (1.30)$$

Proof. Recall from Section 1.2 that, conditional on $\mathcal{A}$, the law of a path $Y^\alpha$ is obtained as the unique solution of a system of SDE's with coefficients uniformly bounded by $\Gamma\big(p,T,K^{N*}_T(1)\big)$. So for any $Y^\alpha$ in the support of $K^N$, a standard argument (e.g. Rogers and Williams (1987), p. 166) shows that there is a constant $c > 0$ such that for any $\varepsilon > 0$, $k \in \mathbb{N}$, $T > 0$ ($k$ is allowed to depend on $\varepsilon$),

$$\mathbb{P}\big[A(k^{-1},T,Y^\alpha) > \varepsilon \mid \mathcal{A}\big] \le \frac{cT}{\varepsilon^4}\Big(\frac{\Gamma\big(p,T,K^{N*}_T(1)\big)^2}{k} + \frac{\Gamma\big(p,T,K^{N*}_T(1)\big)^4}{k^3}\Big).$$

Define $F_k := \{y \in C^d : A(k^{-1},T,y) \le \varepsilon\}$. We compute, with $F = F_k$,

$$\mathbb{E}\big[K^N_T(F^c)\big] = \mathbb{E}\Big[\frac{1}{N}\sum_\alpha 1\big(Y^\alpha \in F^c\big)\Big] \le \frac{cT}{\varepsilon^4}\,\mathbb{E}\Big[K^{N*}_T(1)\Big(\frac{\Gamma\big(p,T,K^{N*}_T(1)\big)^2}{k} + \frac{\Gamma\big(p,T,K^{N*}_T(1)\big)^4}{k^3}\Big)\Big].$$

Lemma 1.15 implies that there is a constant $A$ bounding the expectation on the right, since the random terms inside the expectations involve only polynomials in $K^{N*}_T(1)$. To finish the proof set $\varepsilon = k^{-1/8}$ and choose $k$ large enough so that the resulting bound $cTA\big(k^{-1/2} + k^{-5/2}\big)$ is at most $\eta$. $\square$

Lemma 1.26. Let $\phi \in b\mathcal{C}^d$ and let $\tau$ be a bounded $(\mathcal{F}^N_t)$-stopping time. We write $\phi^\tau(y) := \phi(y^\tau)$. Then there exists an $(\mathcal{F}^N_t)$-martingale $m = m^{\phi,\tau}$ such that $m_t = 0$ if $t \le \tau$ and

$$K^N_t\big(\phi^\tau\big) = K^N_\tau(\phi) + m^{\phi,\tau}_t \quad \forall t \ge \tau \quad \text{a.s.} \qquad (1.31)$$

Proof. For any $\phi$, $\tau$ satisfying the hypotheses of the lemma, define $m^{\phi,\tau}_t$ for $t > \tau$ as $\frac{1}{N}$ times the sum of the branching increments $\phi\big((Y^\alpha)^\tau\big)(\mathcal{N}^\alpha - 1)$ over the particles $\alpha$ branching in $(\tau,t]$, where $\mathcal{N}^\alpha$ is the number of offspring of $\alpha$; set $m^{\phi,\tau}_t = 0$ otherwise.

Let's assume first that $\phi \in b\mathcal{C}^d$ is of the form $\phi = \Psi(t_1,t_2,\ldots,t_r) \in D_0$; that is, for some $r \in \mathbb{N}$, $0 \le t_1 \le t_2 \le \cdots \le t_r \le u$ and $\psi \in C_0^\infty(\mathbb{R}^r)$, $\phi = \psi(t_1,\ldots,t_r)$. Assume further that $\tau = u$. Then $\dot\psi(s,y) = 0$ and $\psi_{ij} = 0$ for $s \ge u$. Therefore Proposition 1.11 implies

$$K^N_t(\phi) - K^N_u(\phi) = Z^N_t(\phi) - Z^N_u(\phi) = \frac{1}{N}\sum_{u < \zeta(\alpha) \le t} \phi\big((Y^\alpha)^u\big)\big(\mathcal{N}^\alpha - 1\big) = m^{\phi,\tau}_t \quad \forall t \ge \tau \quad \text{a.s.,} \qquad (1.32)$$

where $\zeta(\alpha)$ denotes the branch time of $\alpha$. Clearly $m^{\phi,\tau}$ is a martingale. Assume next that $\phi \in b\mathcal{C}^d$ and $\tau = u$.
Any such function $\phi$ is the bp-limit of a sequence $(\phi_n)$ where each $\phi_n \in b\mathcal{C}^d \cap D_0$. The bounded convergence theorem and (1.32) imply that

$$K^N_t\big(\phi^\tau\big) - K^N_\tau(\phi) = m^{\phi,\tau}_t \quad \forall t \ge \tau \quad \text{a.s.} \qquad (1.33)$$

Now suppose that $\phi \in b\mathcal{C}^d$ and $\tau$ takes on only finitely many values, say $\tau \in \{u_1,\ldots,u_n\}$. Then $\phi(y^\tau) = \sum_{i=1}^n 1(\tau = u_i)\phi(y^{u_i})$, and we see that $K^N_t(\phi^\tau) = K^N_\tau(\phi) + m^{\phi,\tau}_t$ for all $t \ge \tau$ a.s. as a consequence of (1.33). Finally, we consider a general bounded $\tau$ and $\phi \in b\mathcal{C}^d$. Take a decreasing sequence of stopping times $\{\tau_n\}$ taking only finitely many values and converging a.s. to $\tau$. Note that $\phi^{\tau_n} \to \phi^\tau$ a.s. We know that $K^N_t(\phi^{\tau_n}) = K^N_{\tau_n}(\phi) + m^{\phi,\tau_n}_t$. Now $K^N_t(\phi^{\tau_n}) \to K^N_t(\phi^\tau)$ a.s. by bounded convergence, and $K^N_{\tau_n}(\phi) \to K^N_\tau(\phi)$ a.s. by the right continuity of $K^N$. Moreover it is clear from the definition of $m$ that $m^{\phi,\tau_n}_t \to m^{\phi,\tau}_t$ a.s. This shows that (1.31) holds. Finally note that we always took a.s. limits of martingales bounded in $L^2$, so that $m^{\phi,\tau}$ is indeed a martingale. $\square$

Let's continue the proof of Proposition 1.24. By Lemma 1.25 we can find a compact set $F = F_{T,n}$ such that $\sup_N \mathbb{E}\big[K^N_T(F^c)\big] \le 2^{-2n}$. Define $G := \bigcup_{s\le T} F^s$. Note that by the theorem of Arzelà-Ascoli $G$ is also compact. We now show that this is in fact the desired set satisfying (1.26). Let

$$T_N := \inf\big\{s \ge 0 : K^N_s(G^c) > 2^{-n}\big\} \wedge T.$$

Set $\phi(y) := 1(y \in G^c)$. From Lemma 1.26 and the right continuity of $K^N(G^c)$ we see that

$$K^N_t\big(\phi^{T_N}\big) = 2^{-n} + m^{\phi,T_N}_t \qquad (1.34)$$

on the set $\{T_N < T\}$. The second term on the r.h.s. of (1.34) is a martingale, null at zero and on the set $\{T_N \ge T\}$. Thus

$$1(T_N < T)\,2^{-n} + m^{\phi,T_N}_T = 1(T_N < T)\,K^N_T\big[\big\{y \in C^d : y^{T_N} \notin G\big\}\big].$$

Take expectations to obtain

$$2^{-n}\,\mathbb{P}[T_N < T] = \mathbb{E}\Big[1(T_N < T)\,K^N_T\big[\big\{y \in C^d : y^{T_N} \notin G\big\}\big]\Big] \le \mathbb{E}\big[K^N_T(F^c)\big] \le 2^{-2n},$$

since $\{y : y^{T_N} \notin G\} \subset \{y : y \notin F\}$ by the definition of the set $G$. Therefore

$$\sup_N \mathbb{P}[T_N < T] \le 2^{-n}. \qquad (1.35)$$

Since $\{\sup_{0\le t\le T} K^N_t(G^c) > 2^{-n}\} \subset \{T_N < T\}$, (1.26) follows from (1.35). $\square$

1.3.3 Relative Compactness of $(K^N)$

We are now ready to begin reaping the rewards of our effort.

Theorem 1.27. The sequence $(\mathbb{P}[K^N \in \cdot\,])_{N\in\mathbb{N}}$ is relatively compact in $M_1\big(D([0,\infty),M_F(C^d))\big)$.

Proof. Corollary 1.23 and Proposition 1.24 show that the conditions of Jakubowski's Theorem 1.18 are satisfied. $\square$

1.4 Identification and Uniqueness of the Limit

Typically, it is more difficult to prove uniqueness than existence; this is yet another instance. Fortunately, the uniqueness of the limit points has already been studied (Perkins 1995). In this Section we find a martingale problem satisfied by the limit points of the sequence $(K^N)$. Uniqueness, with some restrictions on the branching rate $\gamma$, follows from results of (Perkins 1995). In light of Propositions 1.11, 1.22 and Corollary 1.13 it is easy to guess that the limit points satisfy a martingale problem of the form 1.3. To prove it we will need a number of results.

In addition to our hypothesis (1.3) we will assume that $\gamma$ is Lipschitz continuous. Recall that $\rho$ denotes the sup metric on $C^d$ and $d = d_\rho$ denotes the Vasershtein metric on $M_F(C^d)$. Let $d'$ be the metric on the space $D([0,\infty),M_F(C^d))$ defined in equation (5.2), p. 153, of Ethier and Kurtz (1986). This metric induces the usual Skorohod $J_1$ topology. Recall that, as a subspace of $D([0,\infty),M_F(C^d))$, the space $(C([0,\infty),E),d')$ is closed and is in fact the usual space of continuous paths with the compact-open topology.

The following hypothesis will be in force throughout this Section:

$$\big|\gamma(t,K,y) - \gamma(t,K',y')\big| \le \Gamma\big(p,t,K^*_t(1)\vee K'^*_t(1)\big)\big(\rho(y,y') + d'(K,K')\big). \qquad (\mathrm{Lip}_\gamma)$$

Fix $\phi \in D_0$. Let $S^N$ be as in (1.24). Let's assume that there are continuous real-valued random processes $S(\phi)$, $Z(\phi)$, $V(\phi)$ and an r.c.l.l. $M_F(C^d)$-valued process $K$ such that

$$Z^N(\phi) \underset{N\to\infty}{\Longrightarrow} Z(\phi), \qquad \langle Z^N(\phi)\rangle \underset{N\to\infty}{\Longrightarrow} V(\phi), \qquad S^N(\phi) \underset{N\to\infty}{\Longrightarrow} S(\phi), \qquad K^N \underset{N\to\infty}{\Longrightarrow} K.$$
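The convergence $K^N \Rightarrow K$ can be watched directly for the total-mass functional $\phi = 1$: the mass $K^N_t(1)$ of a critical branching system is a non-negative martingale (compare $K^N_t(1) = 1 + Z^N_t(1)$ above) that converges to Feller's branching diffusion. The sketch below is an illustrative caricature only (critical binary branching at every generation, particle mass $1/N$, all sizes hypothetical).

```python
import random

# Total mass of a critical binary branching particle system: start with
# N particles of mass 1/N; each generation every particle dies or splits
# into two, each with probability 1/2. The path K_0(1), K_1(1), ... is a
# non-negative martingale started at 1.

def total_mass_path(N=10_000, generations=10, seed=42):
    random.seed(seed)
    particles = N
    path = [1.0]                                  # K_0(1) = N * (1/N) = 1
    for _ in range(generations):
        survivors = sum(1 for _ in range(particles) if random.random() < 0.5)
        particles = 2 * survivors                 # 0 or 2 offspring each
        path.append(particles / N)
    return path

path = total_mass_path()
```

For large $N$ the path fluctuates around its initial value $1$ on this time scale, with variance of order $\text{generations}/N$, consistent with the martingale property.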
The results of Section 1.3 ensure that these are valid assumptions upon taking an appropriate subsequence. By Skorohod's representation theorem, and taking a further subsequence if necessary, we may assume that the convergence above takes place simultaneously and almost surely. That is,

$$\big(K^N,\,Z^N(\phi),\,\langle Z^N(\phi)\rangle,\,S^N(\phi)\big) \longrightarrow \big(K,\,Z(\phi),\,V(\phi),\,S(\phi)\big) \quad \text{a.s.} \qquad (1.36)$$

All the 4-tuples in (1.36) are considered as $D([0,\infty),M_F(C^d)) \times D([0,\infty),\mathbb{R})^3$-valued random variables defined on a common probability space (possibly different from the original one). Let's call $\mathcal{F}^N$ (resp. $\mathcal{F}$) the filtration generated by $K^N$ (resp. $K$).

Proposition 1.28. $Z(\phi)$ is a continuous square integrable $(\mathcal{F}_t)$-martingale with square function

$$\langle Z(\phi)\rangle_t = V(\phi)_t = \lim_{N\to\infty} \langle Z^N(\phi)\rangle_t.$$

Proof. We already know that $Z(\phi)$ is continuous. To prove that it is a martingale it suffices to show that for $s < t$, $r \in \mathbb{N}$, $0 \le s_1 < \cdots < s_r \le s$ (the $s_i$'s may be restricted to belong to some dense set), and $f : M_F(C^d)^r \to \mathbb{R}$ bounded continuous,

$$\mathbb{E}\big[f(K_{s_1},\ldots,K_{s_r})\,Z_s(\phi)\big] = \mathbb{E}\big[f(K_{s_1},\ldots,K_{s_r})\,Z_t(\phi)\big]. \qquad (1.37)$$

Note that by Lemma 1.15

$$\sup_N \mathbb{E}\Big[\big(f(K^N_{s_1},\ldots,K^N_{s_r})\,Z^N_t(\phi)\big)^2\Big] < \infty.$$

Hence the families $\{f(K^N_{s_1},\ldots,K^N_{s_r})Z^N_s(\phi) : N \in \mathbb{N}\}$ and $\{f(K^N_{s_1},\ldots,K^N_{s_r})Z^N_t(\phi) : N \in \mathbb{N}\}$ are both uniformly integrable. Moreover $f(K^N_{s_1},\ldots,K^N_{s_r}) \to f(K_{s_1},\ldots,K_{s_r})$ a.s., since almost sure convergence in the Skorohod space $D([0,\infty),M_F(C^d))$ implies almost sure convergence at the points of continuity of $t \mapsto K_t$, i.e. everywhere with the possible exception of a countable set, and we are allowed to choose the $s_i$'s outside such an exceptional set. Also, $Z^N_s(\phi) \to Z_s(\phi)$ a.s. and $Z^N_t(\phi) \to Z_t(\phi)$ a.s. Thus a.s. convergence plus uniform integrability imply

$$\mathbb{E}\big[f(K_{s_1},\ldots,K_{s_r})\,Z_s(\phi)\big] = \lim_N \mathbb{E}\big[f(K^N_{s_1},\ldots,K^N_{s_r})\,Z^N_s(\phi)\big],$$

$$\mathbb{E}\big[f(K_{s_1},\ldots,K_{s_r})\,Z_t(\phi)\big] = \lim_N \mathbb{E}\big[f(K^N_{s_1},\ldots,K^N_{s_r})\,Z^N_t(\phi)\big].$$

But $Z^N(\phi)$ is an $(\mathcal{F}^N_t)$-martingale. Therefore

$$\mathbb{E}\big[f(K^N_{s_1},\ldots,K^N_{s_r})\,Z^N_s(\phi)\big] = \mathbb{E}\big[f(K^N_{s_1},\ldots,K^N_{s_r})\,Z^N_t(\phi)\big].$$

The desired result (1.37) follows.
We must now compute the square function of $Z(\phi)$. We know that $\langle Z^N(\phi)\rangle_t$ converges almost surely. Note that by Lemma 1.15 the sequences

$$\big\{f(K^N_{s_1},\ldots,K^N_{s_r})\big(Z^N_t(\phi)^2 - \langle Z^N(\phi)\rangle_t\big) : N \in \mathbb{N}\big\}, \qquad \big\{f(K^N_{s_1},\ldots,K^N_{s_r})\big(Z^N_s(\phi)^2 - \langle Z^N(\phi)\rangle_s\big) : N \in \mathbb{N}\big\}$$

are uniformly integrable. Argue as before to prove that $\big(Z_t(\phi)^2 - V(\phi)_t\big)_{t\ge0}$ is an $(\mathcal{F}_t)$-martingale. This concludes the proof of the Proposition. $\square$

Proposition 1.29.

$$\langle Z(\phi)\rangle_t = \int_0^t\!\!\int \gamma(s,K,y)\,\phi(y)^2\,K_s(dy)\,ds; \qquad (1.38)$$

$$S(\phi)_t = \int_0^t\!\!\int A(K)(\phi)(s,y)\,K_s(dy)\,ds \qquad \forall t \ge 0 \quad \text{a.s.} \qquad (1.39)$$

($S^N$ and $S$ were defined at the beginning of the Section.)

Proof. Let

$$T_j := \inf\Big\{s \ge 0 : \sup_N K^N_s(1) \wedge \int_0^s \big(1 + K^{N*}_u(1)\big)\,du > j\Big\}.$$

Note that by Lemma 1.15, $\lim_{j\to\infty} T_j = \infty$ a.s. Moreover $\gamma(\cdot,\cdot,\cdot)1(\cdot \le T_j)$ is bounded. Therefore, for any $m,m',n,n' \in \mathbb{N}$,

$$\int_0^{t\wedge T_j} \Big|K^m_s\big(\gamma(s,K^n,\cdot)\phi(\cdot)^2\big) - K^{m'}_s\big(\gamma(s,K^{n'},\cdot)\phi(\cdot)^2\big)\Big|\,ds$$

$$\le \int_0^{t\wedge T_j} K^m_s\Big(\big|\gamma(s,K^n,\cdot) - \gamma(s,K^{n'},\cdot)\big|\,\phi(\cdot)^2\Big)\,ds + \int_0^{t\wedge T_j} \Big|K^m_s\big(\gamma(s,K^{n'},\cdot)\phi(\cdot)^2\big) - K^{m'}_s\big(\gamma(s,K^{n'},\cdot)\phi(\cdot)^2\big)\Big|\,ds.$$

We estimate the first integral using $(\mathrm{Lip}_\gamma)$:

$$\int_0^{t\wedge T_j} K^m_s\Big(\big|\gamma(s,K^n,\cdot) - \gamma(s,K^{n'},\cdot)\big|\,\phi(\cdot)^2\Big)\,ds \le c_1(j)\,d'\big((K^n),(K^{n'})\big).$$

The second integral is bounded, again using the Lipschitz hypothesis and the boundedness of $\gamma$ before $T_j$, by

$$c_1(j)\,d'\big((K^m),(K^{m'})\big) + c_2(j)\int_0^{t\wedge T_j} \big|K^m_s(1) - K^{m'}_s(1)\big|\,ds.$$

Let $n' \to \infty$; use the continuity of $\gamma$ and dominated convergence to obtain

$$\int_0^{t\wedge T_j} \Big|K^m_s\big(\gamma(s,K^n,\cdot)\phi(\cdot)^2\big) - K^{m'}_s\big(\gamma(s,K,\cdot)\phi(\cdot)^2\big)\Big|\,ds \le c_1(j)\,d'\big((K^n),K\big) + c_2(j)\int_0^{t\wedge T_j} \big|K^m_s(1) - K^{m'}_s(1)\big|\,ds \quad \text{a.s.}$$

Now let $m' \to \infty$; use the fact that $y \mapsto \gamma(s,K,y)\phi(y)^2$ is bounded continuous and dominated convergence to get

$$\int_0^{t\wedge T_j} \Big|K^m_s\big(\gamma(s,K^n,\cdot)\phi(\cdot)^2\big) - K_s\big(\gamma(s,K,\cdot)\phi(\cdot)^2\big)\Big|\,ds \le c_1(j)\,d'\big((K^n),K\big) + c_2(j)\int_0^{t\wedge T_j} \big|K^m_s(1) - K_s(1)\big|\,ds \quad \text{a.s.}$$

Now put $n = m$. Dominated convergence yields

$$\int_0^{t\wedge T_j} \Big|K^n_s\big(\gamma(s,K^n,\cdot)\phi(\cdot)^2\big) - K_s\big(\gamma(s,K,\cdot)\phi(\cdot)^2\big)\Big|\,ds \underset{n\to\infty}{\longrightarrow} 0 \quad \text{a.s.} \qquad (1.40)$$

Since $T_j \uparrow \infty$ a.s., for every $t \ge 0$ there is (a.s.) a $j = j(t)$ such that $t \wedge T_j = t$. Moreover, the integral in (1.40) is increasing as a function of $t$.
Therefore

$$\int_0^t \Big|K^n_s\big(\gamma(s,K^n,\cdot)\phi(\cdot)^2\big) - K_s\big(\gamma(s,K,\cdot)\phi(\cdot)^2\big)\Big|\,ds \longrightarrow 0 \quad \forall t \ge 0 \quad \text{a.s.}$$

The validity of (1.38) follows from the above and Proposition 1.28. The proof of (1.39) is almost identical and will be omitted. $\square$

We end this chapter by introducing some notation and stating the main theorem of the chapter.

Notation 1.30. Let

$$M_F^t(C^d) = \big\{\mu \in M_F(C^d) : y = y^t\ \mu\text{-a.a. } y\big\}, \quad t \ge 0,$$

$$\Omega_H = \big\{K \in C([0,\infty),M_F(C^d)) : K_t \in M_F^t(C^d)\ \forall t \ge 0\big\},$$

$H_t(\omega) = \omega(t)$ for $\omega \in \Omega_H$, $\mathcal{H}_t = \sigma(H_s : 0 \le s \le t)$, $\mathcal{H} = \sigma(H_s : s \ge 0)$, and $(\Omega'_H,\mathcal{H}'_t) = (\Omega_H \times C^d,\ \mathcal{H}_t \times \mathcal{C}^d_t)$.

Theorem 1.31. Any limit point $K$ of the sequence $(K^N)$ satisfies the martingale problem: for all $\phi \in D_0$,

$$Z_t(\phi) = K_t(\phi) - \phi(0) - \int_0^t\!\!\int_{C^d} A(K)(\phi)(s,y)\,K_s(dy)\,ds, \quad t \ge 0,$$

is a continuous square integrable $(\mathcal{F}_t)$-martingale such that

$$\langle Z(\phi)\rangle_t = \int_0^t\!\!\int_{C^d} \gamma(s,K,y)\,\phi(y)^2\,K_s(dy)\,ds \quad \forall t \ge 0 \quad \text{a.s.}$$

If in addition we assume that the branching rate $\gamma$ satisfies the following condition:

• There are $h : [0,\infty)\times\Omega'_H \to \mathbb{R}^d$ and $f : [0,\infty)\times\Omega'_H \to \mathbb{R}$ which are $(\mathcal{H}'_t)$-predictable, satisfy the same Lipschitz condition (1.4) as $b$ and $\sigma$, and obey

$$\|h(t,K,y)\| + |f(t,K,y)| \le \Gamma\big(p,K^*_t(1)\big);$$

moreover, if $\Omega' = (\Omega',\mathcal{F}',(\mathcal{F}'_t),Q)$ is a filtered probability space carrying
However it is satisfied by a large number of interesting examples (Perkins 1995, p. 50). In particular it holds for the branching rate given in Example (a) (page 16). (b) In Theorem 1.31 we don't need the full strength of condition (Lip,) to prove that the limit points of the sequence (K ) satisfy the martingale problem. Continuity (as opposed to Lipschitz continuity) and boundedness by some power p > 1 of the total mass suffice. • N 40 Chapter 2 Path Properties of a One-Dimensional Superdiffusion with Interactions 2.1 Introduction and Statement of Results In this Chapter we study some path properties of the solution of the historical stochastic equation (Perkins 1995, p. 47) (2.1) in dimension d = 1. H is a one-dimensional historical Brownian motion and X is a superprocess w i t h interactions. (2.1) shows that X and H have the same family structure. However, the path y (a Brownian path) is replaced by a path Y(y) which is a Brownian motion with a drift depending on X. The term 7 is a mass factor, but under suitable hypothesis (see condition (Ay) in page 44) it can be interpreted as a branching rate (Remark 5.2 of Perkins 1995). A n intuitive explanation of (2.1) is given in Chapter 0. The present chapter is organized as follows. In this Section we review some basic results concerning historical stochastic analysis (Perkins 1993, 1995). We define what we mean by a solution of (2.1) and state Theorem 2.9, the main result of the chapter. The reader is referred to Chapter 0 for additional comments and motivation. In Section 2.2 we introduce and examine some auxiliary processes. These results, as well as their proofs, will be needed in the proof of the main theorem. In Section 2.3 we find a generalized Green's function representation of a localized version of the process X analog to (0.4). W i t h this formula we shall be able to estimate the moments of X. 
In Section 2.4 we prove Theorem 2.9 using the results of the previous sections together with Kolmogorov's criterion. Finally* in Section 2.5 we give a non-trivial example in which the techniques developed in this Chapter can be successfully applied to discover some path properties of a one-dimensional superdiffusion in a random environment. 2.1.1 Main Result We begin by introducing some basic notation and recalling the definition of historical Brownian motion. 41 If \i is a measure, /j,((j>) = f <f> dfi. Throughout this chapter C = C = C([0, oo),lR) and Ct = C\ = a(y : s < t), the canonical a-field of C. If y G C we write y* for the path y stopped at t, i.e. y = y(t A •). Let 1 s l MAC) = {ixeM {C):y 1 / i - a.a. =y t F y}, fiH = { ^ e C ( [ 0 , o o ) , M ( C ) ) : i i : G M F ( C ) ' F t ift(u;) = u>(t) t>0. . , V*>0}, for w G fi//, n = a(H : •H = o(H :0<s<t), t , s s>0). s fi = (fi,.F, (Tt),TP) will denote a filtered probability space satisfying the usual hypotheses. PifFt) denotes the cr-field of (.^-predictable sets in [0, co) xfiand (fi,:F, (T )) = (fi x C,T x C, (T x t t F i x 0 < h < t < ... < t and # € ^ ( M * ) . If V e C , let 2 k *(y) = and *-(*i,*2,'...,*fc)(y)=.*(y(ti);...,y(**))- denote the first and second order partials of fc-i V*(t y) = j El(i<«t+i)*.i r(y(tA«i),...,y(tAt + f c ));.. i=0 ^ fc -#(t,y) = - l fc-i . E 53l(«< W iA'tj + i)* m + u + i ( y ( t Ati),;..,y(< A« f c )). m=0 i=0 Let oo L>0 = ( J {*(tl,*2,-,-*m) : 0 < ro=l ti < t < 2 - < .< m > * G C °°(IR )} U {1}. 0 m D e f i n i t i o n 2.1 ( O n e - d i m e n s i o n a l H i s t o r i c a l B r o w n i a n M o t i o n ) . Let m be a finite Borel measure on JR. A predictable process (H : t > 0) on fi with sample paths on fi# is a onedimensional historical Brownian motion starting at m iff (Ht) satisfies the following martingale problem: t (MP)T o AA is a continuous square integrable ^i-martingale such that (Z?$)) = fi j' $(y) H (dy)ds 2 t s Vi > 0 a.s. 
There exists a process H satisfying the conditions of Definition 2.1 and it is unique in law. (Dawson & Perkins 1991). We picture H as an infinitesimal tree of branching one-dimensional Brownian motions. 42 D e f i n i t i o n 2 . 2 . Let (K : t > 0) be a process o n f i with paths on fi#. A set A C [0, oo) x fi is (K,JP) evanescent (or K evanescent) iff A C A i where A i is (JF * )-predictable and t t IAJ {U,UI, sup y) = 0 Kt y — a.e. Vt > 0 P — a.s. 0<u<t A property holds (K,JP) — a.e. (or K — a.e.) if it holds off a if-evanescent set. o D e f i n i t i o n 2 . 3 . Let (K : t > 0) b e a process on fi with paths on fi//. A map b : [0,oo)xfi —»• JR is if-integrable (respectively K-locally integrable) iff it is (Pf )-predictable and / ' K (\b(s)\)ds < oo (respectively, f£ \b(s)\ds < oo Kt — a.a. y) Vt > O P — a.s. a t 0 s For technical reasons that will become apparent later we shall not look directly at (2.1) but rather at a historical version of it. We shall call such version (HSE)b^ and it is defined below. D e f i n i t i o n 2.4. Suppose that H is a one-dimensional historical Brownian motion starting at m € M (JR). Let F b : [0,oo) x M (JR) x IR—> JR, F 7 : [0,oo) x M (SR) x C—> (0, oc). ' F Define the projection U : fi/f -> M {Wi) by Ui(K)(A) = K ({y : y(t) € A}). We say that the pair (K, Y) solves the historical stochastic equation F t (a) .(b).' Y = y+ t t = KM) t f b{s,fl (K),Y )ds, ;° j s t > 0, s 0(y*) (t,n.(A'),r)fr (dj/)i 7 t (HSE) bn <j>ebc. iff • y : [0, co) x fi -¥ JR is (^ *)-predictable and K : [0, oo) x fi ->• M ( R ) is (Ji)-predictable. t F • The map |6(s,u;,j/)| = \b(s,U. (K)(u),Y (<j},y))\ is if-locally integrable. s s • (HSE)t y{a) holds up to an i7-evanescent set in [0, oo) x fi, (HSE)b y{b) holds for all cf> G bC Vt > 0 P - a.s. , >! t Pathwise uniqueness holds in (HSE)b y if whenever (K, Y) and [X,Y) are solutions of (HSE) then y* = Y except on an if-evanescent set in [0, oo) x fi and K = K except on a P-evanescent set i n [0, oo) x fi. 
t l To each K solution of (HSE)},^ we can associate its one dimensional projection X defined by X. = fl.(A'). ' o R e m a r k 2 . 5 . It is easy to modify the above definition in order to give a rigorous description of (2.1). It is then straightforward to verify that if K solves (HSE)b then its projection X. = fl.(K) solves (2.1). " n The existence and uniqueness of solutions to (HSE)y b requires some Lipschitz conditions on the coefficients b, 7. Since their arguments include measures we need to introduce an appropriate metric. t 43 D e f i n i t i o n 2.6 ( V a s e r s h t e i n M e t r i c ) . If (S, 5) is a metric spacelet - < 1, Lip(S) = {(p : S -> JR : The Vasershtein metric - 4>{z)\ < 6(x,z) V x , z € 5}. is the metric oh M ( 5 ) defined by F ds(u,y) = snp{\u(cp)-i>{(f))\: cpeLip(S)}. This metric induces the weak topology on Mp(S) (e.g. Ethier and Kurtz 1986, p. 150 Exercise , 2). -. .. • • , ,. " •". The following conditions on the coefficients appearing in (HSE)b will be frequently required. n (IC) m(dx) = p(x)dx, where p£Cj<(JR) is H61der-d for any a < ^ , / p ( x ) o ? x = 1. R There,is a nondecreasing function Y : :[0, oo) . [1, oo) such that (Lip) For any u£ M ( B ) F |&(t,/z,x)| < T ( t V/x(l)) Vx€R. -"" (2.2) * Let d denote the Vasershtein metric on M ( 1 R ) . e F / \b(L/z, x) - b(t, v., ) | < Y(t V p ( l ) V i/(l))(d (/i,u).+ \x - z\). 2 e (By) For any X € C([0, oc), Mp(JR.)) and any Vy € C • 7 (t,Xy)- <T(tVA7(l)). 1 Moreover, there is a nondecreasing functionT : [0, oo)• —> [1, oo) such that • l(t,X,y) < T{t) (l + pX;(\)ds^ , VA' € C([0,oc),M (JR)) Vy € C . F . • The following Lipschitz condition on 7 is more restrictive than (Lip). (Ry) Suppose that there are two functions h, f : [0,oo) x M ( M ) x C —¥ JR such that F / !/ (<,//,y)| + | / ( t , / / , y ) i < T ( t V / i ( l ) ) VyGC,:.' t '(2.3)' |/ (/,/z,y) - />(t.,/,y')| < T ( t V p ( l ) V p'(l))(d (/i,/i') + sup |y(s) - y'(a)|). 
( e |/(t,j/,y) - / ( t , / / , y ' ) | < T ( t V p ( l ) V ' ( l ) ) ( ^ ( / i , / i ' ) + sup |y(.s) - y'(.s)|). M (2.4) ' ' (2.5) and with the following property. Recall that fit : —> M ( 1 R ) be given by JJt(K)(A)=K ({y:y(t)eA}). . F t If a s p a c e d ' — (fi^.F', (.?>), P ' ) carries an ( ^ - p r e d i c t a b l e process (Kt) with sample paths i n f2# a.s. and an JR-valued (^"^-predictable process (Yt) such that M(t) = Y(t)-y(o)- / 6( ,n,(^),y )d , s tion then is a Brownian motion 7 s S t>o . , (t,fl(A'),Y) = 1+ f h(s,U(K),Y)dY(s) Jo 44 + [* f(sM(K),Y)ds Jo Vi > 0 P'-a.s R e m a r k 2.7. Condition (R^) seems strange at a first glance. It ensures that 7 can be interpreted as a branching rate (Remark 5.2 of Perkins 1995). • The next Proposition is fundamental. P r o p o s i t i o n 2.8. Assume b satisfies (Lip) and 7 satisfies (B,), (R-f). Then (HSE^^) has a pathwise unique solution. P r o o f . See Theorem 4.10 of Perkins (1995). • Before stating the main result of this Chapter we give some examples of coefficients 6 and 7 satisfying the basic hypothesis. E x a m p l e , (a) 7 = 1 satisfies (fly) and (JB ). 7 (b) Assume 7 € C^' ([0,00) is a non-negative function such that f^(s, •) and §^(s,'-) are Lipschitz continuous with a uniform Lipschitz constant for s i n compacts. Then j(t, X,y) = j(t,y(t)) satisfies (R,) and (f? ). This is proved i n Example 4.4 Perkins (1995). In this case the mass of each particle depends only on time and the position of the particle, and not on the rest of the population. 2 7 (c) Let p (x) be the one dimensional heat kernel. F i x e,a > 0 and set s 7(t, X, y) = exp ( - j ' Jp (y e t - x)X (dx) - ^ds). a s e It is not obvious that 7 satisfies (R-,)- This is proved in p. 51 of Perkins(1995). (d) Let b : IR —> IR be bounded Lipschitz continuous. Then b(p,x) — Jb(x — z)fj,(dz) satisfies (Lip). See Example 4.2 Perkins (1995) for a proof. In this model a particle at z in a population p, exerts a drift b(x — z)fi(dz) on a particle at x. 
(e) More examples can be found in Perkins (1995, p.49). •'" ' • The following is the main result of this Chapter. T h e o r e m 2.9. Let H be a one dimensional historical Brownian motion.starting dt m. Assume m satisfies (IC), b satisfies (2.2) and7 satisfies (B ) and (Ay). Let K be a solution of (HSE)^^ and suppose that X is its one-dimensional projection, i.e. X. = fl.(K). (See Definition 2.4-) Then 7 X (dx) = u(t,x)dx t for all t > 0 IP - a.s. (2.6) where u(-,-) is an a.s. jointly continuous (adapted) function. In addition, u is Holder continuous in t with any exponent a < 1/4 and in x with any exponent a < 1/2 a.s. R e m a r k 2.10. (a) The case 6 = 0 and 7 = 1, i.e. when X is one-dimensional super Brownian motion, of Theorem 2.9 was proved independently by Reimers (1989) and Konno and Shiga (1988). (b) Note that b is not assumed to satisfy (Lip) (we only assume the boundedness condition (2.2)) so pathwise uniqueness may not hold. • 45 2.1.2. Historical Stochastic Calculus The results in this Subsection will be used repeatedly. The basic reference is Chapter 2 of Perkins (1995). D e f i n i t i o n 2.11. Let (E, \\ \\) be a normed linear space. Let / : [0,oo) x fi —> E. A bounded (Jt)-stopping time T is a reducing time for / iff l(t < T)\\f(t, u>,y)\\ is uniformly bounded. The sequence {T } reduces / iff each T reduces / and T | oo P — a.s. If such a sequence exists, we say that / is locally bounded. • n n n D e f i n i t i o n 2.12. Let m be a finite Borel measure on TR and let 7 : [0, oo) x fi ->• (0, oo), b : [0, oo) x fi -¥ P , g : [0,• oo) x f l TR, be (jF *)-predictable. Assume that A = ( 7 , 6 , g 7 l ( g 7^ 0)). is locally bounded. A predictable process (K :t> 0) satisfies (MP)™ j . pn if and only if K .<= M (C) for all t > 0 a.s. and _1 f 1 t V ^ G J D Q t Z?$) = F K $)-m($) t is a continuous square integrable ^-martingale such that (Z«$)) Vt > 0 = fi Jy(s,u>,y)i>(y) K (dy)ds 2 t s Assume that K satisfies (MP)™ ~ . Let (T ) a.s. 
a be a reducing sequence for A . If </> £ Do, n k 4> (f) (bounded pointwise convergence) then dominated convergence and local boundedness of 7 imply n < Z * ( 0 - <£ )) —> 0, n as m t n,m —> 00 Vt > 0 a.s. Therefore there is an a.s. continuous adapted process (Z/^ (<£) : t > 0) such that sup |Zf(^)-Zf(<?)|Ao 0<s<fc as n -f 00 V/c > 0. and sup | Z f ( ^ ) - Z f ( 0 ) | - ^ O o<s<r fc as n — > • 00 46 Vfc G N . See Perkins (1993, 1995) for details. ( Z / ^ ) ) ^ is a continuous local (.T^-martingale with square function f KsM )ds. , (2.7) Jo. Since DQ is ftp-dense i n bC, can be extended to an orthogonal martingale measure {Z*(</>) : t > 0, <f> € bC} (Perkins 1993, 1995). This implies that if xb : [0, oo) x ft x C -4 IR is V(!Ft) C-measurable and satisfies (Z {ib)) = K 2 t x . P [ ^ j\(s,u:,y)iP(s,u;,y) K (dy)ds] < oo V* > 0 2 a then we may define the stochastic integral Z?W)= f f Jo Jc iP(s,u;,y)Z (ds,dy). K Z^(il)) is a continuous square-suminable martingale with square function (2.7). More generally, if / D(Z) = {ip : [0, oo) x fl x C -> IR : ip is VLT ) x Crineasurable and t J J\(s,u,y)xp(s,u,y) K (dy)ds < oo Vt > 0 2 s P-a.s.j then Z *(V>) can still be defined for ip € D(Z). In this case it is a continuous local martingale with square function (2.7). For a definition of martingale measures and the construction of the associated infinite dimensional stochastic integrals the reader is referred to the excellent book Walsh (1986). A word of warning to the reader familiar with the applications of stochastic calculus i n finance. What we call martingale measures are L -valued measures on C. They are totally different from the "martingale measures" usually employed i n finance (these are just Girsanov transformations). The fact to be remembered while reading this chapter is that Z (</>) is a (local) martingale for a large class of integrands 4> = 4>(s,uj,y). t 2 K t D e f i n i t i o n 2.13. If (K : t > 0) is a r.c.1.1. 
(.^-adapted M f ( C ) - v a l u e d process such that (Kt(l) : t > 0) is an (^)-martingale, and T is a bounded (J )-stopping time, define a probability P on fl by t r t R T [AXB\T P [ W ) , ] Aer, Bee. We call JP the Campbell measure associated with K and T , or simply Campbell measure when K and T are clear from the context. o T P r o p o s i t i o n 2.14. Let K be a solution of (MP) - . (Note that g = 0J. Then m (a) K eflH P — o.s. i Q and K is (Ft)-adapted. (b) IfT reduces (7,6), then under the Campbell measure P rt/\T ^ n(t,u,y) = y(t) - y(0) - b(s,u,y)ds, Jo is a (Ti)-Brownian motion stopped at T. 47 T , t > 0, P r o o f . See Theorem 2.6 of Perkins (1995). K Let K, T be as i n Proposition 2.14. Under P ^ , y is an (.F *)-semimartingale. Therefore t / a{s,u),y)dy(s) Jo may be defined for the class D(I,K) = |CT : [0,oo) x fi x C -> P , : is ( ^ - p r e d i c t a b l e , a /" a(s,u;,y) ds < oo ift — a.a. 2 JO y Vt > 0 as. j P r o p o s i t i o n 2.15. Let K be a solution of (MP)™ - . (a) i / ( T € D(I,K) there is an JR-valued (T?)-predictable process lK{a,t,u,y) such that for all reducing (Tt)-stopping times T l {o,t,io,y) = / o{s,io,y)dy(s) Jq K Vt > 0 P r - a..s. (<j, t, u;, y). We sometimes write I(o,t) instead of (b) If I(o~, s, LO, y) is another such process, then I(o,s,LO,y) = lK(o~,s,u,y) Vs < t K — a.a. Vt > 0 y t P r o o f . This is a special case of Theorem 2.12 (Perkins 1995). P — a.s. • N o t a t i o n 2.16. If b : [0, oo) x fi —• R, is universally measurable, let ;i v b 4 ) - / / o b{s,u,y)ds 10 = if fc\b(si'u},y)\ds < oo. otherwise. • P r o p o s i t i o n 2.17 (Ito's l e m m a ) . Assume K satisfies (MP) m . , o : [0, oo) x fi -> P and 'Y,1,0,1) 6 : [0,oo) x fi -4 P are (Pf)-predictable, \o\ + \b\ is K-locally integrable and YQ : fi — > P is TQmeasurable. Let Yt = YQ + I(o, t) + V(b, t) (using the notation of Proposition 2.15 and Notation 2.16) and.Yt = {t,Y ). 
If $\psi \in C_b^{1,2}([0,\infty) \times \mathbb{R})$, the space of bounded continuous functions $\psi(t,x)$ with a bounded continuous derivative in $t$ and two bounded continuous derivatives in $x$, then
\[ \psi(\hat{Y}_t) = \psi(0,Y_0) + \int_0^t \frac{\partial\psi}{\partial x}(\hat{Y}_s)\, \sigma(s)\, dy(s) + \int_0^t \Big[ \frac{\partial\psi}{\partial s}(\hat{Y}_s) + \frac{\partial\psi}{\partial x}(\hat{Y}_s)\, b(s) + \frac12 \frac{\partial^2\psi}{\partial x^2}(\hat{Y}_s)\, \sigma(s)^2 \Big] ds \]
up to a $K$-evanescent set of $(t,\omega,y)$.

Proof. This proposition follows from the usual Itô lemma and Proposition 2.15. See Lemma 3.18 of Perkins (1993). $\square$

Proposition 2.18 (Historical Itô's lemma). Assume $K$ satisfies $(MP)^m$. Suppose $\sigma \in D(I,K)$ and $b$ are $(\mathcal{F}_t)$-predictable and $|\sigma| + |b|$ is $K$-locally integrable. Let $Y_0 : \Omega \to \mathbb{R}$ be $\mathcal{F}_0$-measurable, $Y_t = Y_0 + I(\sigma,t) + V(b,t)$ (using the notation of Proposition 2.15 and Notation 2.16) and $\hat{Y}_t = (t,Y_t)$. If $\psi \in C_b^{1,2}([0,\infty)\times\mathbb{R})$, then $K_t(\psi(\hat{Y}_t))$ is an a.s. continuous $(\mathcal{F}_t)$-semimartingale and satisfies
\[ K_t(\psi(\hat{Y}_t)) = K_0(\psi(\hat{Y}_0)) + \int_0^t \!\! \int \psi(\hat{Y}_s)\, Z^K(ds,dy) + \int_0^t \!\! \int \Big[ \frac{\partial\psi}{\partial s}(\hat{Y}_s) + \frac{\partial\psi}{\partial x}(\hat{Y}_s)\, b(s) + \frac12 \frac{\partial^2\psi}{\partial x^2}(\hat{Y}_s)\, \sigma(s)^2 \Big] K_s(dy)\, ds \quad \forall t \ge 0 \text{ a.s.} \]

Proof. This is a special case of Theorem 2.14 in Perkins (1995). $\square$

We shall also need the following version of Proposition 2.18.

Proposition 2.19. Suppose $K$ satisfies $(MP)^m$. Let $\sigma \in D(I,K)$ and $b$ be $(\mathcal{F}_t)$-predictable, and assume $|\sigma| + |b|$ is $K$-locally integrable. Assume also
\[ \sup_{0 \le s \le t} K_s\big( V(b,s)^2 \big) < \infty \quad \forall t > 0 \text{ a.s.} \]
Then $I(\sigma,t) \in D(Z)$ and
\[ K_t\big( V(b,t) + I(\sigma,t) \big) = \int_0^t \!\! \int \big( V(b,s) + I(\sigma,s) \big)\, Z^K(ds,dy) + \int_0^t \!\! \int \big( \sigma(s)\, b(s) + b(s) \big)\, K_s(dy)\, ds \quad \forall t \ge 0 \text{ a.s.} \]

Proof. See Theorem 2.8 and Proposition 2.13 of Perkins (1995) for the proof. $\square$

Proposition 2.20 (Lévy's modulus of continuity). Let $h(t) = \sqrt{t \log^+(1/t)}$. Define
\[ L(\delta,c) = \big\{ y \in C : |y(t) - y(s)| \le c\, h(t-s) \ \ \forall s,t \ge 0 \text{ satisfying } 0 < t-s \le \delta \big\}. \]
Assume $K$ solves $(MP)^m$. Let $\sigma \in D(I,K)$ be $(\mathcal{F}_t)$-predictable and locally bounded. Define $\tilde{K}_t(\omega) \in M_F(C)$ by $\tilde{K}_t(\cdot) = K_t(\{ y : I(\sigma,\cdot,\omega,y) \in \cdot \})$. Assume $T$ reduces $(\gamma, b, \sigma)$ and let
\[ \Theta = \sup\{ \sigma^2(s,\omega,y) : 0 \le s \le T, \ (\omega,y) \in \tilde{\Omega} \}. \]
(a) If $c > 2\sqrt{\Theta}$, there is a $\delta(\omega,c) > 0$ a.s. such that $\tilde{K}_t(L(\delta,c)^c) = 0$ for all $t \in [0,T]$.
(b) For each $\gamma_0, \Theta_0, b_0, q \in (0,\infty)$ and $c > 2\sqrt{\Theta_0}$ there is a non-decreasing function $\rho : [0,\infty) \to [0,1]$ such that $\lim_{\lambda\to0}\rho(\lambda) = 0$ and if
\[ \gamma_0 \le \inf\{ \gamma(t,\omega,y) : 0 \le t \le T(\omega),\ (\omega,y)\in\tilde\Omega \}, \quad \sup\{ |b(t,\omega,y)| : 0 \le t \le T(\omega),\ (\omega,y)\in\tilde\Omega \} \le b_0, \quad \Theta \le \Theta_0, \quad m(1) \le q, \]
then we may choose $\delta(\omega,c)$ in (a) so that $\mathbb{P}(\delta \le \lambda) \le \rho(\lambda)$.

Proof. This is a special case of Corollary 3.3 of Perkins (1995). $\square$

We end this subsection by stating some useful estimates.

Proposition 2.21. Recall that $H$ denotes a one-dimensional historical Brownian motion starting at $m$. There is a one-dimensional Brownian motion $(B_t)$ such that
\[ H_t(1) = m(1) + \int_0^t \sqrt{\gamma\, H_s(1)}\, dB_s. \]
$(H_t(1) : t \ge 0)$ is the continuous-time, continuous-state-space branching process studied by Feller (1951). Hence, for any $p \in \mathbb{N}$,
\[ \mathbb{P}[H_t(1)^p] < \infty \quad \forall t \ge 0. \]
More generally, assume that $g$ is bounded above and let $K$ be a solution of $(MP)^m$. Then, for any $p \ge 1$,
\[ \mathbb{P}[K_t(1)^p] < \infty \quad \forall t \ge 0. \]

Proof. See Lemma 2.1 of Perkins (1995). $\square$

2.2 Some Auxiliary Processes

Proposition 2.22. Let $b$ and $\gamma$ be as in Theorem 2.9, and $h$, $f$ as in $(R_\gamma)$. If we define
\[ \bar b(t,\omega,y) = b(t,\Pi_\cdot(K)(\omega),y) + \frac{h(t,\Pi_\cdot(K)(\omega),y)}{f(t,\Pi_\cdot(K)(\omega),y)}, \qquad \bar g(t,\omega,y) = \frac{h(t,\Pi_\cdot(K)(\omega),y)\, b(t,\Pi_\cdot(K)(\omega),y)}{f(t,\Pi_\cdot(K)(\omega),y)}, \qquad \bar\gamma(t,\omega,y) = \gamma(t,\Pi_\cdot(K)(\omega),y), \]
then the unique solution of the historical strong equation $(HSE)_{b,\gamma}$ solves the martingale problem $(MP)^m$ with coefficients $(\bar\gamma, \bar b, \bar g)$ defined on page 46.

Proof. The proof is essentially a lengthy application of the historical Itô lemma. See Theorem 5.1 of Perkins (1995) for the details. $\square$

Remark 2.23. In the remainder of the chapter we will only assume that $\bar K$ solves the martingale problem $(MP)^m$ with coefficients $(\bar\gamma, \bar b, \bar g)$. In fact we shall abuse the notation and suppose that $\bar K$ is a solution of the martingale problem (not necessarily obtained as a solution of the strong equation $(HSE)_{b,\gamma}$).

Proposition 2.24.
If we define
\[ \hat K_t(\phi) = \int \frac{\phi(y)}{\bar\gamma(t,\omega,y)}\, \bar K_t(dy), \quad t \ge 0, \ \phi \in bC, \]
then $\hat K$ satisfies $(MP)^m$ with $g = 0$ and drift $\hat b(s,\omega,y) = \bar b(s,\Pi_s(K)(\omega),y)$.

Proof. This follows from Proposition 2.22 and is shown during the proof of Theorem 5.6 on p. 70 of Perkins (1995). $\square$

The following hypothesis will be in force throughout Sections 2.2 and 2.3.

(UB) There is a constant $c_{UB} > 0$ such that $|\hat b(s,\omega,y)| \le c_{UB}$ for all $s \ge 0$ and $y \in C$, $\mathbb{P}$-a.s.

We need to introduce yet one more historical process. By Proposition 2.24 the results of Section 2.1 concerning stochastic integration are at our disposal. In particular we may define the stochastic integrals $I^{\hat K}$.

Definition 2.25. Note that (UB) implies $\hat b \in D(I,\hat K)$. Let
\[ \hat{\mathcal E}_t(1,\omega,y) = \exp\Big( -I^{\hat K}(\hat b,t,\omega,y) + \frac12 \int_0^t \hat b(s,\omega,y)^2\, ds \Big) = \exp\Big( -\int_0^t \hat b(s,\omega,y)\, dy(s) + \frac12 \int_0^t \hat b(s,\omega,y)^2\, ds \Big) \]
(observe that $\hat{\mathcal E}_t(1,\omega,y)$ is unique up to $(\hat K,\mathbb{P})$-evanescent sets) and define
\[ J_t(\phi) = \int \phi(y)\, \hat{\mathcal E}_t(1,\omega,y)\, \hat K_t(\omega)(dy), \quad t \ge 0, \ \phi \in bC, \tag{2.8} \]
\[ j_t(\phi) = \int \phi(y_t)\, \hat{\mathcal E}_t(1,\omega,y)\, \hat K_t(\omega)(dy), \quad t \ge 0, \ \phi \in b\mathcal{B}(\mathbb{R}). \tag{2.9} \]
The usefulness of these auxiliary processes depends in part on the following lemmas. The reader may want to look ahead at Proposition 2.42 to see why the process $J$ is relevant.

Lemma 2.26. There is an $(\mathcal{F}_t)$-predictable $\hat K$-version of $\hat{\mathcal E}_t(1,\omega,y)$ which is locally bounded. We call such a version again $\hat{\mathcal E}_t(1,\omega,y)$.

Proof. Apply Proposition 2.20 with $(K,\sigma) = (\hat K, \hat b)$. Since $\hat K$ solves $(MP)^m$ with $g = 0$ and $\hat b$ is uniformly bounded, the hypotheses of Proposition 2.20 are satisfied with $\Theta = 1$, $T = \infty$, $c = 3$. Hence
\[ \hat K_t\big( \{ y : I^{\hat K}(\hat b, \cdot) \in L(\delta,3)^c \} \big) = 0 \quad \forall t \ge 0. \]
Therefore $I^{\hat K}(\hat b,\cdot) \in L(\delta,3)$ for $\hat K_t$-a.e. $y$, $\forall t \ge 0$. Define
\[ V_k = \inf\big\{ s \ge 0 : \hat K_s\big( \{ y : I^{\hat K}(\hat b,\cdot) \in L(k^{-1},3)^c \} \big) > 0 \big\} \wedge k. \tag{2.10} \]
Note that $V_k \uparrow \infty$ $\mathbb{P}$-a.s. Indeed, since $\delta > 0$ a.s., $V_k(\omega) = k$ for all $k > \delta(\omega)^{-1}$. Hence (2.10) implies
\[ 1(t \le V_k)\, \hat{\mathcal E}_t(1,\omega,y) \le \exp\Big( 3(kt+1)\sqrt{k^{-1}\log^+ k} + (kt+1)\frac{c_{UB}^2}{2k} \Big) \quad \hat K_t\text{-a.e. } y \ \forall t \ge 0 \text{ a.s.} \tag{2.11} \]
Recall that evanescent sets are predictable and that we work with the universally completed $\sigma$-fields $(\mathcal{F}_t)$. Modify $\hat{\mathcal E}_t(1,\omega,y)$ on a $(\hat K,\mathbb{P})$-evanescent set to obtain the desired result. $\square$

Remark 2.27. Since $\hat K$ and $J$ are equivalent, (2.11) also gives a $J$-version of $\hat{\mathcal E}_t(1,\omega,y)$ which is $(\mathcal{F}_t)$-predictable and locally bounded.

We are now able to give a martingale problem characterization of $J$.

Proposition 2.28. The historical process $J$ defined in (2.8) satisfies the driftless martingale problem: for every cylinder function $\Psi = \prod_{i=1}^k \psi_i(y_{t_i})$ with $\psi_1,\dots,\psi_k \in C_b^\infty(\mathbb{R})$ and $0 \le t_1 < t_2 < \dots < t_k$,
\[ Z^J_t(\Psi) = J_t(\Psi) - m(\Psi) - \int_0^t \!\! \int \hat A\Psi(s,y)\, J_s(dy)\, ds \]
is a continuous square-integrable $(\mathcal{F}_t)$-martingale starting at zero with
\[ \langle Z^J(\Psi)\rangle_t = \int_0^t \!\! \int \hat{\mathcal E}_s(1,\omega,y)\, \Psi(y^s)^2\, J_s(dy)\, ds \quad \forall t \ge 0 \ \mathbb{P}\text{-a.s.} \]

Proof. We sometimes write $\hat{\mathcal E}_t(1) = \hat{\mathcal E}_t(1,\omega,y)$ and $\hat b(s) = \hat b(s,\omega,y)$. Set
\[ D\Psi(s,y) = \sum_{i=1}^k \prod_{j\ne i}\psi_j(y(s\wedge t_j))\, \psi_i'(y(s\wedge t_i))\, 1(s \le t_i), \qquad \hat A\Psi = \tfrac12 D(D\Psi). \]
By Itô's lemma 2.17, and since $y_s - y_0 - \int_0^s \hat b(r,\omega,y)\, dr$ is a Brownian motion under $\bar{\mathbb{P}}_t$,
\[ \Psi(y^t) = \Psi(y^0) + \int_0^t D\Psi(s,y)\, dy(s) + \int_0^t \hat A\Psi(s,y)\, ds \quad (\hat K,\mathbb{P})\text{-a.e.}, \]
and
\[ \hat{\mathcal E}_t(1) = 1 - \int_0^t \hat b(s)\,\hat{\mathcal E}_s(1)\, dy(s) + \int_0^t \hat b(s)^2\,\hat{\mathcal E}_s(1)\, ds \quad (\hat K,\mathbb{P})\text{-a.e.} \]
Integration by parts (also justified by Itô's lemma 2.17) yields
\[ \hat{\mathcal E}_t(1)\,\Psi(y^t) = \Psi(y^0) + \int_0^t \big( D\Psi(s,y) - \Psi(y^s)\,\hat b(s) \big)\,\hat{\mathcal E}_s(1)\, dy(s) + \int_0^t \big( \hat A\Psi(s,y) + \Psi(y^s)\,\hat b(s)^2 - D\Psi(s,y)\,\hat b(s) \big)\,\hat{\mathcal E}_s(1)\, ds \]
$(\hat K,\mathbb{P})$-a.e. Hence, by the historical Itô lemma 2.19, the drift terms generated by $\hat b$ cancel and
\[ J_t(\Psi) = \hat K_t(\hat{\mathcal E}_t(1)\Psi) = J_0(\Psi) + Z^J_t(\Psi) + \int_0^t \!\! \int \hat A\Psi(s,y)\, J_s(dy)\, ds \quad \forall t \ge 0 \text{ a.s.}, \]
where
\[ Z^J_t(\Psi) = \int_0^t \!\! \int \hat{\mathcal E}_s(1,\omega,y)\,\Psi(y^s)\, Z^{\hat K}(ds,dy) \]
is a square-integrable martingale with
\[ \langle Z^J(\Psi)\rangle_t = \int_0^t \!\! \int \hat{\mathcal E}_s(1,\omega,y)^2\,\Psi(y^s)^2\, \hat K_s(dy)\, ds = \int_0^t \!\! \int \hat{\mathcal E}_s(1,\omega,y)\,\Psi(y^s)^2\, J_s(dy)\, ds. \qquad \square \]

Remark 2.29. Proposition 2.28 allows us to use $J$-historical stochastic calculus. In particular, Proposition 2.14 implies that $y_{\cdot\wedge t}$ is an $(\mathcal{F}_t)$-Brownian motion under the Campbell law $\bar{\mathbb{P}}^J_t$.

Lemma 2.30. There is a $J$-version of $\hat{\mathcal E}_t(1,\omega,y)$ such that
\[ \hat{\mathcal E}_t(1,\omega,y) = \exp\Big( -\int_0^t \hat b(s,\omega,y)\, dy(s) + \frac12 \int_0^t \hat b(s,\omega,y)^2\, ds \Big) \quad (J,\mathbb{P})\text{-a.e.} \]
We abuse the notation and also call that version $\hat{\mathcal E}_t(1,\omega,y)$.

Proof. Let $T$ be a bounded predictable $(\mathcal{F}_t)$-stopping time. By the invariance of the stochastic integral under a change of law (Dellacherie and Meyer, Theorem VIII.12), the stochastic integral $\int_0^t \hat b(s,\omega,y)\, dy(s)$ calculated under the law $\bar{\mathbb{P}}^J_T$ is indistinguishable from the same integral calculated under the law $\bar{\mathbb{P}}^{\hat K}_T$. Therefore
\[ I^{\hat K}(\hat b,t,\omega,y) = I^J(\hat b,t,\omega,y) \quad \forall t \ge 0 \ \bar{\mathbb{P}}^J_T\text{-a.s.} \]
The desired conclusion now follows from Proposition 2.15(b). $\square$

The last result in this section is the proof that the random measure $j$ defined by (2.9) possesses a density with respect to Lebesgue measure. We also estimate its moments. Some of the key ideas and techniques needed to prove Theorem 2.9 are introduced along with the proofs.

Notation 2.31. $p_t(x)$ denotes the one-dimensional heat kernel and $P_t\phi(x) = \int p_t(x-y)\,\phi(y)\, dy$. We sometimes write $p^x_t(y) = p_t(x-y) = p(t,x-y)$.

We begin by giving a Green's function representation for $j$.

Proposition 2.32. For any $\phi \in C_b(\mathbb{R})$,
\[ j_t(\phi) = \int P_t\phi(x)\,\rho(x)\, dx + \int_0^t \!\! \int P_{t-s}\phi(y_s)\,\hat{\mathcal E}_s(1,\omega,y)\, Z^{\hat K}(ds,dy) \tag{2.12} \]
for all $t \in [0,1]$ $\mathbb{P}$-a.s.

Proof. Fix $t \le 1$ and set $\psi(s,x) = P_{t-s}\phi(x)$. Applying Itô's lemma 2.17 with $Y_t = I(1,t) = y_t$ we get
\[ \psi(t,y_t) = \psi(0,y_0) + \int_0^t \frac{\partial\psi}{\partial x}(s,y_s)\, dy(s) + \int_0^t \Big( \frac{\partial\psi}{\partial s}(s,y_s) + \frac12 \frac{\partial^2\psi}{\partial x^2}(s,y_s) \Big) ds = \psi(0,y_0) + \int_0^t \frac{\partial\psi}{\partial x}(s,y_s)\, dy(s) \]
for $\hat K_t$-a.e. $y$, $\forall t \in [0,1]$ a.s. (since $\frac{\partial\psi}{\partial s}(s,x) = -\frac12\frac{\partial^2\psi}{\partial x^2}(s,x)$). Moreover, by the same lemma,
\[ \hat{\mathcal E}_t(1,\omega,y) = 1 - \int_0^t \hat{\mathcal E}_s(1,\omega,y)\,\hat b(s,\omega,y)\, dy(s) + \int_0^t \hat b(s,\omega,y)^2\,\hat{\mathcal E}_s(1,\omega,y)\, ds \]
for $\hat K_t$-a.e. $y$, $\forall t \in [0,1]$ a.s. Integration by parts, justified once more by Itô's lemma 2.17, yields
\[ \hat{\mathcal E}_t(1,\omega,y)\,\psi(t,y_t) = \psi(0,y_0) + \int_0^t \Big( \hat{\mathcal E}_s(1)\frac{\partial\psi}{\partial x}(s,y_s) - \psi(s,y_s)\,\hat{\mathcal E}_s(1)\,\hat b(s) \Big) dy(s) + \int_0^t \Big( \hat b(s)^2\,\hat{\mathcal E}_s(1)\,\psi(s,y_s) - \hat b(s)\,\hat{\mathcal E}_s(1)\frac{\partial\psi}{\partial x}(s,y_s) \Big) ds \]
for $\hat K_t$-a.e. $y$, $\forall t \in [0,1]$ a.s. Now apply the historical Itô lemma 2.18 to obtain
\[ \int \hat{\mathcal E}_t(1)\,\psi(t,y_t)\,\hat K_t(dy) = \int \psi(0,x)\,\rho(x)\, dx + \int_0^t \!\! \int \hat{\mathcal E}_s(1,\omega,y)\,\psi(s,y_s)\, Z^{\hat K}(ds,dy) \quad \forall t \in [0,1] \text{ a.s.} \]
Recalling the definitions of $\psi$ and $j$, we see that this last equation is exactly (2.12). $\square$

Remark 2.33. If $\hat b = 0$ and $\bar\gamma = 1$, then Proposition 2.32 provides a very simple proof of the usual Green's function representation of super-Brownian motion (compare with that of Konno and Shiga 1988, p. 212).

Proposition 2.34. (a) For each $\theta > 0$ let
\[ \hat{\mathcal E}_t(\theta,\omega,y) = \exp\Big( -\theta\, I^{\hat K}(\hat b,t,\omega,y) + \Big( \theta - \frac{\theta^2}{2} \Big) \int_0^t \hat b(s,\omega,y)^2\, ds \Big). \]
Then for any $\theta > 0$, $\hat{\mathcal E}_{s\wedge\cdot}(\theta)$ is a $\bar{\mathbb{P}}^{\hat K}_s$-martingale starting at 1.
(b) There is a function $\kappa : [1,\infty) \times [0,\infty) \to \mathbb{R}_+$, non-decreasing in each variable, such that
\[ \hat{\mathcal E}_s(1)^p \le \kappa(p,s)\,\hat{\mathcal E}_s(p). \]
(c) The second term on the r.h.s. of (2.12) is a square-integrable martingale null at zero.

Proof. (a) Recall that by Proposition 2.14, $n_t = y_t - y_0 - \int_0^t \hat b(s,\omega,y)\, ds$ is a $\bar{\mathbb{P}}^{\hat K}_s$-Brownian motion. Since $\hat b$ is bounded, $\hat{\mathcal E}_{s\wedge\cdot}(\theta) = \exp\big( -\theta\int_0^\cdot \hat b\, dn - \frac{\theta^2}{2}\int_0^\cdot \hat b^2\, du \big)$ is an exponential martingale.
(b) We estimate
\[ \hat{\mathcal E}_s(1)^p = \hat{\mathcal E}_s(p)\, e^{\frac{p^2-p}{2}\int_0^s \hat b(u)^2\, du} \le \hat{\mathcal E}_s(p)\, e^{\frac{p^2-p}{2}\, c_{UB}^2\, s} =: \kappa(p,s)\,\hat{\mathcal E}_s(p). \]
(c) Note that $\|P_{t-s}\phi\|_\infty < \infty$. Moreover
\[ \int_0^t \bar{\mathbb{P}}^{\hat K}_s\big[ \hat{\mathcal E}_s(1)^2 \big]\, ds \le \int_0^t \bar{\mathbb{P}}^{\hat K}_s\big[ \kappa(2,s)\,\hat{\mathcal E}_s(2) \big]\, ds \le t\,\kappa(2,t) \]
(by parts (a) and (b) of this proposition). This implies that the stochastic integral on the r.h.s. of (2.12) is an $L^2$-martingale null at 0. $\square$

We now employ equation (2.12) to show that $j$ has a density. We state first a technical lemma.

Lemma 2.35. There exists a constant $C > 0$ such that
\[ \int_0^{t\vee t'} \!\! \int_{-\infty}^{\infty} \big( p(t-s,x-z) - p(t'-s,x'-z) \big)^2\, dz\, ds \le C\big( \sqrt{|t-t'|} + |x-x'| \big) \quad \forall t,t' > 0, \ x,x' \in \mathbb{R}. \]

Proof. See for example Lemma 6.2 of Shiga (1994). $\square$

Proposition 2.36. There exists a (measurable) function $g = g(\omega,s,x)$ such that for any $\Phi \in C_K(\mathbb{R})$ and any $t \in [0,1]$,
\[ \int_0^t \!\! \int \Phi(x)\, g(s,x)\, dx\, ds = \int_0^t \!\! \int \Phi(x)\, j_s(dx)\, ds \quad \mathbb{P}\text{-a.s.} \tag{2.13} \]

Proof. First we prove that for any $\varepsilon \in (0,1)$,
\[ \int_0^1 \!\! \int_{-\infty}^{\infty} \mathbb{P}\big[ j_s(p^x_\varepsilon)^2 \big]\, dx\, ds < A, \tag{2.14} \]
where $A < \infty$ is a constant independent of $\varepsilon$. By Proposition 2.32, Chapman-Kolmogorov ($P_{s-r}\,p^x_\varepsilon = p^x_{s-r+\varepsilon}$), Fubini's theorem and Proposition 2.34(b),
\[ \mathbb{P}[ j_s(p^x_\varepsilon)^2 ] = \Big( \int p^x_{s+\varepsilon}(z)\,\rho(z)\, dz \Big)^2 + \int_0^s \bar{\mathbb{P}}^{\hat K}_r\big[ p^x_{s-r+\varepsilon}(y_r)^2\,\hat{\mathcal E}_r(1)^2 \big]\, dr \le \Big( \int p^x_{s+\varepsilon}(z)\,\rho(z)\, dz \Big)^2 + \kappa(2,s) \int_0^s \bar{\mathbb{P}}^{\hat K}_r\big[ p^x_{s-r+\varepsilon}(y_r)^2\,\hat{\mathcal E}_r(2) \big]\, dr. \]
Now integrate over space-time. For the first term,
\[ \int_0^1 \!\! \int \Big( \int p^x_{s+\varepsilon}(z)\rho(z)dz \Big)^2 dx\, ds = \int_0^1 \!\! \int \!\! \int \!\! \int p^x_{s+\varepsilon}(z_1)\, p^x_{s+\varepsilon}(z_2)\,\rho(z_1)\rho(z_2)\, dz_1\, dz_2\, dx\, ds \le \int_0^1 \frac{ds}{\sqrt{2\pi(s+\varepsilon)}} \le \sqrt{\frac{2}{\pi}}. \]
Furthermore, by Fubini and $\int p_u(a)^2\, da = (4\pi u)^{-1/2}$,
\[ \int_0^1 \!\! \int \!\! \int_0^s \bar{\mathbb{P}}^{\hat K}_r\big[ p^x_{s-r+\varepsilon}(y_r)^2\,\hat{\mathcal E}_r(2) \big]\, dr\, dx\, ds = \int_0^1 \!\! \int_0^s \frac{\bar{\mathbb{P}}^{\hat K}_r[\hat{\mathcal E}_r(2)]}{\sqrt{4\pi(s-r+\varepsilon)}}\, dr\, ds \le \frac{1}{\sqrt{\pi}}, \]
since $\bar{\mathbb{P}}^{\hat K}_r[\hat{\mathcal E}_r(2)] = 1$. Take $A = \sqrt{2/\pi} + \kappa(2,1)/\sqrt{\pi}$ to conclude the proof of (2.14).

Our next step is to prove that
\[ \lim_{\varepsilon,\delta\to0} \int_0^1 \!\! \int_{-\infty}^{\infty} \mathbb{P}\big[ j_s(p^x_\varepsilon - p^x_\delta)^2 \big]\, dx\, ds = 0. \tag{2.15} \]
Apply Proposition 2.32 again. Just as before,
\[ \mathbb{P}[ j_s(p^x_\varepsilon - p^x_\delta)^2 ] \le \Big( \int (p^x_{s+\varepsilon}(z) - p^x_{s+\delta}(z))\,\rho(z)\, dz \Big)^2 + \kappa(2,1) \int_0^s \bar{\mathbb{P}}^{\hat K}_r\big[ (p^x_{s-r+\varepsilon}(y_r) - p^x_{s-r+\delta}(y_r))^2\,\hat{\mathcal E}_r(2) \big]\, dr. \]
Integrate over space-time. By Jensen's inequality and Lemma 2.35, the first term satisfies
\[ \int_0^1 \!\! \int \Big( \int (p^x_{s+\varepsilon}(z) - p^x_{s+\delta}(z))\rho(z)dz \Big)^2 dx\, ds \le \int \Big( \int_0^1 \!\! \int (p^x_{s+\varepsilon}(z) - p^x_{s+\delta}(z))^2\, dx\, ds \Big)\rho(z)\, dz \le c_{2.36.1}\sqrt{|\varepsilon-\delta|}. \]
To estimate the second term note that for any $\eta > 0$,
\[ \int (p^x_{s+\eta}(z) - p^x_s(z))^2\, dx = \frac{1}{\sqrt{2\pi(2s+2\eta)}} + \frac{1}{\sqrt{2\pi\cdot 2s}} - \frac{2}{\sqrt{2\pi(2s+\eta)}} \le \frac{\eta}{(s+\eta)\sqrt{4\pi s}}. \]
Hence, assuming w.l.o.g. $\varepsilon > \delta$,
\[ \int_0^1 \!\! \int \!\! \int_0^s \bar{\mathbb{P}}^{\hat K}_r\big[ (p^x_{s-r+\varepsilon}(y_r) - p^x_{s-r+\delta}(y_r))^2\,\hat{\mathcal E}_r(2) \big]\, dr\, dx\, ds \le \int_0^1 \!\! \int_0^s \frac{(\varepsilon-\delta)\,\bar{\mathbb{P}}^{\hat K}_r[\hat{\mathcal E}_r(2)]}{(s-r+(\varepsilon-\delta))\sqrt{4\pi(s-r)}}\, dr\, ds \le c_{2.36.2}\sqrt{\varepsilon-\delta} \]
(since $\hat{\mathcal E}_{r\wedge\cdot}(2)$ is a $\bar{\mathbb{P}}^{\hat K}_r$-martingale). Therefore
\[ \int_0^1 \!\! \int \mathbb{P}[ j_s(p^x_\varepsilon - p^x_\delta)^2 ]\, dx\, ds \le \big( c_{2.36.1} + c_{2.36.2}\,\kappa(2,1) \big)\sqrt{|\varepsilon-\delta|} = c_{2.36.3}\sqrt{|\varepsilon-\delta|}, \]
which proves (2.15). Thus (2.14) and (2.15) show the existence of an $L^2(\Omega\times[0,1]\times\mathbb{R})$ density function $g = g(\omega,s,x)$ for $j$, defined by
\[ \lim_{\varepsilon\to0} \int_0^1 \!\! \int_{-\infty}^{\infty} \mathbb{P}\big[ (j_s(p^x_\varepsilon) - g(s,x))^2 \big]\, dx\, ds = 0. \tag{2.16} \]
To finish the proof of the proposition we verify that $g$ is in fact the desired density. Pick a nonnegative, not identically zero $\Phi \in C_K(\mathbb{R})$ and $t \in [0,1]$, and compute
\[ \mathbb{P}\Big[ \Big| \int_0^t \!\! \int \Phi(x)\, j_s(p^x_\varepsilon)\, dx\, ds - \int_0^t \!\! \int \Phi(x)\, g(s,x)\, dx\, ds \Big| \Big] \le \Big( \|\Phi\|_\infty \int \Phi(x)\, dx \Big)^{1/2} \Big( \int_0^1 \!\! \int \mathbb{P}\big[ (j_s(p^x_\varepsilon) - g(s,x))^2 \big]\, dx\, ds \Big)^{1/2} \to 0 \]
as $\varepsilon \to 0$ (by Jensen's inequality and (2.16)), while $\int_0^t \int \Phi(x)\, j_s(p^x_\varepsilon)\, dx\, ds \to \int_0^t j_s(\Phi)\, ds$ by bounded convergence. This implies (2.13) and concludes the proof of the proposition. $\square$
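The moment computations above lean repeatedly on two heat-kernel identities: Chapman-Kolmogorov, $\int p_t(x-z)\,p_s(z-w)\,dz = p_{t+s}(x-w)$, and the $L^2$-norm identity $\int p_u(a)^2\,da = (4\pi u)^{-1/2}$. A minimal numerical sanity check of both (the grid and the particular values of $t$, $s$, $x$, $w$, $u$ are our own illustrative choices, not taken from the text):

```python
import math

def p(t, x):
    """One-dimensional heat kernel p_t(x)."""
    return math.exp(-x * x / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)

# Midpoint-rule grid on [-8, 8]; Gaussian tails beyond are negligible here.
n = 8000
dz = 16.0 / n
grid = [-8.0 + (i + 0.5) * dz for i in range(n)]

t, s, x, w = 0.3, 0.2, 0.4, -0.1

# Chapman-Kolmogorov: the convolution of p_t and p_s equals p_{t+s}.
conv = sum(p(t, x - z) * p(s, z - w) for z in grid) * dz
ck_err = abs(conv - p(t + s, x - w))

# Squared L2 norm: integral of p_u(a)^2 da equals (4*pi*u)^(-1/2).
u = 0.25
l2 = sum(p(u, z) ** 2 for z in grid) * dz
l2_err = abs(l2 - 1.0 / math.sqrt(4.0 * math.pi * u))
```

Both errors are of the order of the quadrature error of the midpoint rule, far below the analytic bounds actually used in the proof.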
Remark 2.37. By considering functions of the form $1_{[a,t]}(s)\,\Phi(x)$, a monotone class argument shows that (2.13) is equivalent to
\[ \int_0^t \!\! \int F(s,x)\, j_s(dx)\, ds = \int_0^t \!\! \int F(s,x)\, g(s,x)\, dx\, ds \quad \mathbb{P}\text{-a.s.} \]
for (integrable) nonrandom functions $F = F(s,x)$. This fact will be needed in the proof of the main theorem.

Having proved the existence of $g$, we proceed to estimate its moments. These bounds will also be needed in the proof of the main theorem of this chapter.

Proposition 2.38. For any $k \ge 2$, $t \in [0,1]$ there is a constant $c_{2.38}$ depending on $k$ and $\rho$ (but not on $x$) such that
\[ \mathbb{P}[g(t,x)^k] \le c_{2.38}. \tag{2.17} \]

Proof. Write $g(\varepsilon,t,x) = j_t(p^x_\varepsilon)$. Using Proposition 2.32 we estimate
\[ \mathbb{P}[g(\varepsilon,t,x)^k] \le 2^k \Big( \int p^x_{t+\varepsilon}(z)\rho(z)\, dz \Big)^k + 2^k\, \mathbb{P}\Big[ \Big| \int_0^t \!\! \int p^x_{t-s+\varepsilon}(y_s)\,\hat{\mathcal E}_s(1)\, Z^{\hat K}(ds,dy) \Big|^k \Big] \]
(since $(a+b)^k \le 2^k(a^k+b^k)$). The first term on the r.h.s. is easily estimated:
\[ 2^k \Big( \int p^x_{t+\varepsilon}(z)\rho(z)\, dz \Big)^k \le 2^k\, \|\rho\|_\infty^k. \tag{2.18} \]
For the second term, Burkholder's inequality applied to the quadratic variation, followed by Jensen's and Schwarz's inequalities, gives
\[ \mathbb{P}\Big[ \Big| \int_0^t \!\! \int p^x_{t-s+\varepsilon}(y_s)\,\hat{\mathcal E}_s(1)\, Z^{\hat K}(ds,dy) \Big|^k \Big] \le c_{2.38.2}(k)\, \sqrt{I_1}\, \sqrt{I_2}, \tag{2.19} \]
where
\[ I_1 = \int_0^t \mathbb{P}\,\bar{\mathbb{P}}^{\hat K}_s\big[ p^x_{t-s+\varepsilon}(y_s)^2\,\hat{\mathcal E}_s(1)\,\hat K_s(1)^{2k-2} \big]\, ds, \qquad I_2 = \int_0^t \mathbb{P}\,\bar{\mathbb{P}}^{\hat K}_s\big[ p^x_{t-s+\varepsilon}(y_s)^2\,\hat{\mathcal E}_s(1)^{2k-1} \big]\, ds. \]
Notice that since $y_\cdot - y_0 - \int_0^\cdot \hat b(s,\omega,y)\, ds$ is a Brownian motion under $\bar{\mathbb{P}}^{\hat K}_s$, the definition of $\hat{\mathcal E}(1)$ and Girsanov's theorem imply that $y$ is a Brownian motion (with initial distribution $\rho$) under the law $Q_s[\,\cdot\,] = \bar{\mathbb{P}}^{\hat K}_s[\hat{\mathcal E}_s(1)\,\cdot\,]$. We use this fact to estimate $I_1$: by Jensen's and Schwarz's inequalities, since $\mathbb{P}[\hat K_t(1)^{2k-2}] \le c_{2.38.4}$ by Proposition 2.21, and since
\[ \int \!\! \int p_{t-s+\varepsilon}(x-y)\, p_s(y-z)\, dy\,\rho(z)\, dz = \int p_{t+\varepsilon}(x-z)\,\rho(z)\, dz \le \|\rho\|_\infty, \]
it follows that
\[ I_1 \le c_{2.38.3}(k)\, c_{2.38.4}\, \|\rho\|_\infty =: c_{2.38.5}(k,\rho). \tag{2.20} \]
Finally we estimate $I_2$. An application of Hölder's inequality (with $p = 5/4$, $q = 5$) together with Proposition 2.34(b) (with index $(20k-13)/2$) separates the power of $\hat{\mathcal E}_s(1)$ from the heat kernel; the heat-kernel factor contributes $(2\pi(t-s))^{-3/5}$, which is integrable in $s$ over $[0,t]$, and one obtains
\[ I_2 \le c_{2.38.7}(k). \tag{2.21} \]
If we define $c_{2.38} = 2^k\big( \|\rho\|_\infty^k + c_{2.38.2}(k)\sqrt{c_{2.38.5}(k,\rho)\, c_{2.38.7}(k)} \big)$, then
\[ \mathbb{P}[g(\varepsilon,t,x)^k] \le c_{2.38} \tag{2.22} \]
follows from (2.18), (2.19), (2.20) and (2.21). Since $L^2$ convergence implies pointwise convergence along some subsequence, by choosing an appropriate sequence $\varepsilon_n \to 0$ and using Fatou's lemma we obtain (2.17) from (2.22). $\square$

2.3 A Generalized Green's Function Representation for X

In this section we give a new representation of $X$ as a $J$-integral. This is the main ingredient in the proof of Theorem 2.9. Recall that we assume (UB) throughout this section.

Definition 2.39. For any $\theta > 0$ let
\[ \tilde{\mathcal E}_t(\theta,\omega,y) = \exp\Big( \theta \int_0^t \hat b(s,\omega,y)\, dy(s) - \frac{\theta^2}{2} \int_0^t \hat b(s,\omega,y)^2\, ds \Big) \quad (J,\mathbb{P})\text{-a.e.} \]
Moreover we denote $p'(t,x) = D_x\, p(t,x)$.

Remark 2.40. Arguing as in the proof of Proposition 2.34 we see that for any $\theta > 0$, $\tilde{\mathcal E}(\theta)$ is a $\bar{\mathbb{P}}^J$-martingale, and that there is a function $\tilde\kappa : [1,\infty) \times [0,\infty) \to \mathbb{R}_+$, non-decreasing in each variable, such that $\tilde{\mathcal E}_t(1)^p \le \tilde\kappa(p,t)\,\tilde{\mathcal E}_t(p)$.
Definition 2.41. For any $\varepsilon > 0$ set $u(t,x,\varepsilon) = X_t(p^x_\varepsilon)$.

Proposition 2.42. Let $u(t,x,\varepsilon)$ be as in the above definition and $p'(t,x) = D_x\, p(t,x)$. We write
\[ \bar\gamma(s,\omega,y) = \gamma(s,\Pi_s(K)(\omega),y), \qquad \bar f(s,\omega,y) = f(s,\Pi_s(K)(\omega),y), \qquad \bar h(s,\omega,y) = h(s,\Pi_s(K)(\omega),y). \]
Then the following representation holds for any $\varepsilon \in (0,1]$:
\[ u(t,x,\varepsilon) = \int p^x_{t+\varepsilon}(z)\,\rho(z)\, dz + \int_0^t \!\! \int p^x_{t-s+\varepsilon}(y_s)\,\tilde{\mathcal E}_s(1)\,\bar\gamma_s\, Z^J(ds,dy) \]
\[ \qquad + \int_0^t \!\! \int \Big[ p^x_{t-s+\varepsilon}(y_s)\big( \bar f_s + \bar h_s\,\hat b_s \big) + p'_{t-s+\varepsilon}(y_s - x)\big( \bar h_s + \bar\gamma_s\,\hat b_s \big) \Big]\,\tilde{\mathcal E}_s(1)\, J_s(dy)\, ds \quad \forall t \in [0,1] \ \mathbb{P}\text{-a.s.} \tag{2.23} \]

Proof. Recall that $y_{\cdot\wedge t}$ is a Brownian motion under $\bar{\mathbb{P}}^J_t$. Since $D_x\, p^x_{t-s+\varepsilon} = -p'_{t-s+\varepsilon}(\cdot - x)$, it follows from Itô's lemma 2.17 applied to $B^1_s = p^x_{t-s+\varepsilon}(y(s))$, $0 \le s \le t$, that $B^1$ is a martingale and
\[ B^1_s = B^1_0 + \int_0^s p'_{t-r+\varepsilon}(y_r - x)\, dy(r) \quad (J,\mathbb{P})\text{-a.e.} \]
The familiar section-theorem argument and $(R_\gamma)$ yield
\[ \tilde{\mathcal E}_t(1) = 1 + \int_0^t \tilde{\mathcal E}_s(1)\,\hat b_s\, dy(s), \qquad \bar\gamma_t = \bar\gamma_0 + \int_0^t \bar h_s\, dy(s) + \int_0^t \bar f_s\, ds \quad J\text{-a.e.} \]
Integrating by parts (justified by Proposition 2.17),
\[ p^x_{t-s+\varepsilon}(y_s)\,\bar\gamma_s\,\tilde{\mathcal E}_s(1) = p^x_{t+\varepsilon}(y_0)\,\bar\gamma_0 + \int_0^s \Big( \bar\gamma_r\, p'_{t-r+\varepsilon}(y_r - x) + p^x_{t-r+\varepsilon}(y_r)\big( \bar h_r + \bar\gamma_r\,\hat b_r \big) \Big)\,\tilde{\mathcal E}_r(1)\, dy(r) \]
\[ \qquad + \int_0^s \Big( p^x_{t-r+\varepsilon}(y_r)\big( \bar f_r + \bar h_r\,\hat b_r \big) + p'_{t-r+\varepsilon}(y_r - x)\big( \bar h_r + \bar\gamma_r\,\hat b_r \big) \Big)\,\tilde{\mathcal E}_r(1)\, dr \quad (J,\mathbb{P})\text{-a.e.} \tag{2.24} \]
Moreover, for any $\phi \in C_b(\mathbb{R})$ and $t \in [0,1]$,
\[ X_t(\phi) = \int \phi(y_t)\,\bar K_t(dy) = \int \phi(y_t)\,\bar\gamma_t\,\hat K_t(dy) = \int \phi(y_t)\,\bar\gamma_t\,\tilde{\mathcal E}_t(1)\, J_t(dy) \quad \mathbb{P}\text{-a.s.} \tag{2.25} \]
Using (2.24), (2.25) and the Itô lemma for historical integrals 2.18, we obtain (2.23). $\square$

2.4 Proof of the Main Result

In this section we put together the results from Sections 2.2 and 2.3 to prove Theorem 2.9. We begin with some technical lemmas.

Lemma 2.43 (Kolmogorov). (a) Let $(B(x) : x \in \mathbb{R}^d)$ be a family of random variables indexed by $x \in \mathbb{R}^d$. Suppose that there exist a real $p > 0$ and two constants $C_0, \beta > 0$ such that
\[ E\big[ |B(x) - B(\bar x)|^p \big] \le C_0\, \|x - \bar x\|^{d+\beta} \quad \forall x, \bar x \in \mathbb{R}^d. \]
Then the process $(B(x) : x \in \mathbb{R}^d)$ has a continuous version which is globally Hölder with exponent $\alpha$, for any $\alpha < \beta/p$.
(b) Let $I \subset \mathbb{R}^3$ be the product of three intervals (either closed, open or semi-open) and let $(B(x) : x \in I)$ be a three-dimensional random field. Suppose that for any $k \ge 13$ there is a constant $C_0 > 0$ such that
\[ E\big[ |B(x_1,x_2,x_3) - B(\bar x_1,\bar x_2,\bar x_3)|^k \big] \le C_0\Big( |x_1 - \bar x_1|^{\frac{k-1}{4}} \vee |x_2 - \bar x_2|^{\frac{k-1}{2}} \vee |x_3 - \bar x_3|^{\frac{k-1}{4}} \Big) \]
for any $(x_1,x_2,x_3), (\bar x_1,\bar x_2,\bar x_3) \in I$ such that $0 < |x_1 - \bar x_1|, |x_2 - \bar x_2|, |x_3 - \bar x_3| < 1$. Then the process $(B(x) : x \in I)$ has a continuous version which is Hölder with exponent $\alpha$, for any $\alpha < 1/4$. Moreover, for any $x_2, x_3$ fixed, the map $x_1 \mapsto B(x_1,x_2,x_3)$ is Hölder-$\alpha$ for any $\alpha < 1/4$, and the map $x_2 \mapsto B(x_1,x_2,x_3)$ is Hölder-$\alpha$ for any $\alpha < 1/2$.
If we know that the process $(B(x) : x \in I)$ is continuous to begin with, then there is no need to take a version.

Proof. Although Kolmogorov's theorem is not usually stated in this form, the standard proof (Revuz and Yor 1991) works equally well. $\square$

Lemma 2.44. (a) Denote $p'(t,x) = D_x\, p(t,x)$. For any $0 < \varepsilon \le 1$, $t \in [0,1]$,
\[ \int_0^t \!\! \int_{-\infty}^{\infty} |p'(t-s,x+\varepsilon) - p'(t-s,x)|\, dx\, ds \le c_{2.44.1}\sqrt{\varepsilon}. \]
(b) Let $0 < s < t \le 1$. Then
\[ \int_0^s \!\! \int_{-\infty}^{\infty} |p'(t-r,x) - p'(s-r,x)|\, dx\, dr \le c_{2.44.2}\sqrt{t-s}. \]
(c) Let $T(n) := \inf\{ t \ge 0 : H_t(1) > n \} \wedge 1$. There is a function $e : \mathbb{N} \times [0,\infty) \to \mathbb{R}_+$, non-decreasing in each variable, such that $X_s(1) \le e(n,s)$ on $\{T(n) > s\}$. Notice also that $\bar K_s(1) = X_s(1)$.

Proof. Estimates (a) and (b) should be well known; we prove them since we do not know a reference. We need the following elementary estimate (e.g. Ladyzenskaja 1968, p. 274):
\[ |D_t^n D_x^m\, p(t,x)| \le C_{n,m}\, t^{-(2n+m+1)/2} \exp\Big( -\frac{x^2}{C_{n,m}\, t} \Big). \tag{2.26} \]
(a) Let $0 < \delta < t$ and split the time integral at $t - \delta$, writing the left side as $I_1 + I_2$ with $I_2$ the integral over $s \in (t-\delta, t]$. Estimate (2.26) gives
\[ I_2 \le 2 \int_{t-\delta}^t \!\! \int |p'(t-s,x)|\, dx\, ds \le C \int_{t-\delta}^t \frac{ds}{\sqrt{t-s}} \le C_1\sqrt{\delta}. \]
To estimate $I_1$ we use the fundamental theorem of calculus followed by a linear change of variables:
\[ I_1 = \int_0^{t-\delta} \!\! \int \Big| \varepsilon \int_0^1 D_x^2\, p(t-s, x+z\varepsilon)\, dz \Big|\, dx\, ds \le \varepsilon \int_0^{t-\delta} \!\! \int_0^1 \!\! \int |D_x^2\, p(t-s, x+z\varepsilon)|\, dx\, dz\, ds \le \varepsilon \int_0^{t-\delta} \frac{C\, ds}{t-s} \le C_2\, \varepsilon \log(1/\delta) \]
(by Fubini and (2.26)). If $t \le \delta$, estimate (2.26) directly yields a bound $C_3\sqrt{\delta}$. Thus for any $\delta \in (0,1)$ the left side is at most $C_1\sqrt{\delta} + C_2\,\varepsilon\log(1/\delta) + C_3\sqrt{\delta}$; put $\delta = \varepsilon$ to obtain the desired result.
(b) We compute, using estimate (2.26) with $n = m = 1$,
\[ \int_0^s \!\! \int |p'(t-r,x) - p'(s-r,x)|\, dx\, dr \le \int_0^s \!\! \int \!\! \int_0^{t-s} |D_\tau D_x\, p(\tau + s - r, x)|\, d\tau\, dx\, dr \le C \int_0^s \!\! \int_0^{t-s} (\tau + s - r)^{-3/2}\, d\tau\, dr \]
\[ = 4C\big( \sqrt{t-s} - (\sqrt{t} - \sqrt{s}) \big) \le c_{2.44.2}\sqrt{t-s}. \]
(c) A more general result is proved on p. 61 of Perkins (1995). $\square$

Proof of Theorem 2.9. We claim that it suffices to consider $t \in [0,1]$.
Exercise. We do not wish to rob the reader of the pleasure of doing some things for herself, so we leave the proof of the claim as an exercise.
The proof rests on the following lemma.

Lemma 2.45. Let $u$ be as in Definition 2.41. Suppose that there is a constant $c_{UB}$ such that
\[ |\bar\gamma(s,\omega,y)| \le c_{UB}, \quad |\bar h(s,\omega,y)| \le c_{UB}, \quad |\bar f(s,\omega,y)| \le c_{UB}, \quad |\hat b(s,\omega,y)| \le c_{UB} \quad \forall s \ge 0, \ \forall y \in C, \ \mathbb{P}\text{-a.s.} \]
Then for each $k > 1$ there is a constant $c_{2.45}(k)$ such that for any $\varepsilon,\delta \in (0,1]$, $t,s \in [0,1]$, $x,\bar x \in \mathbb{R}$ with $|x - \bar x| \le 1$,
\[ \mathbb{P}\big[ |u(t,x,\varepsilon) - u(s,\bar x,\delta)|^k \big] \le c_{2.45}(k)\Big( |t-s|^{\frac{k-1}{4}} + |x - \bar x|^{\frac{k-1}{2}} + |\varepsilon - \delta|^{\frac{k-1}{4}} \Big). \tag{2.27} \]
We defer the proof of Lemma 2.45 and show how to prove the theorem.

Case 1. Suppose that $\hat b$, $\bar\gamma$, $\bar f$, $\bar h$ are uniformly bounded. Set $R > 0$ and consider the function $u : [0,1] \times [-R,R] \times (0,1] \to \mathbb{R}$. Lemmas 2.45 and 2.43 show that this function is Hölder-$\alpha$ for any $\alpha < 1/4$ and therefore uniformly continuous. It follows that it has a unique continuous extension to $[0,1] \times [-R,R] \times [0,1]$. Define
\[ u(t,x) = u(t,x,0) = \lim_{\varepsilon\to0} u(t,x,\varepsilon). \]
By considering a sequence $R_n \uparrow \infty$ this definition is extended to any $x \in \mathbb{R}$. Note that the function $(t,x) \mapsto u(t,x)$ is Hölder-$\alpha$ for $\alpha < 1/4$. This gives the joint continuity of $u$, and Lemma 2.45 also gives the desired exponents in $x$ and in $t$. It remains to show that $u$ is indeed the density of $X$. Let $\phi \in C_K(\mathbb{R})$, $\phi \ge 0$. We compute
\[ X_t(\phi) = \lim_{\varepsilon\to0} \int \phi * p_\varepsilon(z)\, X_t(dz) = \lim_{\varepsilon\to0} \int \!\! \int \phi(x)\, p_\varepsilon(x-z)\, dx\, X_t(dz) = \lim_{\varepsilon\to0} \int \Big( \int p^x_\varepsilon(z)\, X_t(dz) \Big)\phi(x)\, dx \quad \text{(by Fubini)} \]
\[ = \lim_{\varepsilon\to0} \int u(t,x,\varepsilon)\,\phi(x)\, dx = \int u(t,x)\,\phi(x)\, dx \quad \text{(by dominated convergence).} \]

Case 2. Suppose that the coefficients are not uniformly bounded. Recall that $T(n)$ was defined in Lemma 2.44(c). Note that Lemma 2.44(c), $(R_\gamma)$, (2.2) and (2.3) show that for each $n \in \mathbb{N}$ the stopped coefficients
\[ (\bar\gamma_n, \hat b_n, \bar f_n, \bar h_n) = (\bar\gamma, \hat b, \bar f, \bar h)\, 1(0 \le t \le T(n)) \]
are uniformly bounded. Hence $K_{\cdot\wedge T(n)}$ solves $(MP)^m$ with $(b_n, g_n) = (b,g)\, 1(0 \le t \le T(n))$ (see Remark 2.4 of Perkins (1995)). Therefore $X^{T(n)}$ satisfies the conditions of Case 1 (see Remark 2.23). To finish the proof note that Proposition 2.21 implies that for $\mathbb{P}$-almost every $\omega$ there is an $n_0 = n_0(\omega)$ such that $T(m) = 1$ for any $m \ge n_0$. $\square$

Proof of Lemma 2.45. Note that since $(A+B+C)^k \le 3^k(A^k + B^k + C^k)$ it suffices to estimate $\mathbb{P}[|u(t,x,\varepsilon) - u(s,x,\varepsilon)|^k]$, $\mathbb{P}[|u(t,x,\varepsilon) - u(t,\bar x,\varepsilon)|^k]$ and $\mathbb{P}[|u(t,x,\varepsilon) - u(t,x,\delta)|^k]$ separately. Throughout this proof we assume w.l.o.g. that $t > s$ and that $k$ is an even integer.
We start with the first term. By Proposition 2.42 (the generalized Green's function representation) and Jensen's inequality,
\[ \mathbb{P}\big[ |u(t,x,\varepsilon) - u(s,x,\varepsilon)|^k \big] \le 6^{k-1} \sum_{i=1}^{6} I_i, \tag{2.28} \]
where $I_1$ comes from the deterministic term of (2.23), $I_2$ from the $Z^J$-integral and $I_3,\dots,I_6$ from the remaining $J_r(dy)\,dr$ integrals. In (2.28) we have adopted the (bizarre) convention $p_r(x) = 0$ for $r < \varepsilon$. We compute
\[ I_1 = \big( P_{t+\varepsilon}\rho(x) - P_{s+\varepsilon}\rho(x) \big)^k \le \| (P_{t-s} - 1)\rho \|_\infty^k \quad \text{(since $(P_t)$ is a contractive semigroup)} \]
\[ = \sup_z \big| E^z[\rho(B_{t-s})] - \rho(z) \big|^k \le c_{2.45.1}\, \sup_z E^z\big[ |B_{t-s} - z|^{k/2} \big] = c_{2.45.2}\, (t-s)^{k/4}, \tag{2.29} \]
where $(E^z, B)$ denotes a Brownian motion started at $z \in \mathbb{R}$ and we used the Hölder hypothesis (JC) on $\rho$. The term $I_2$ is estimated by Burkholder's inequality followed by Jensen's and Schwarz's inequalities, Remark 2.37 (which replaces $j_r$-integrals by integrals against the density $g$), Proposition 2.34(b), Lemma 2.35 and Proposition 2.38; the intermediate bounds (2.30) and (2.31) combine to give
\[ I_2 \le c_{2.45.8}(k)\, (t-s)^{\frac{k-1}{4}}. \tag{2.32} \]
The estimation of $I_3$ is entirely analogous and yields
\[ I_3 \le c_{2.45.10}(k)\, (t-s)^{\frac{k-1}{4}}. \tag{2.33} \]
The terms $I_4$ and $I_5$, which involve $p'$ rather than $p$, are handled in the same way with Lemma 2.44(b) in place of Lemma 2.35; after the intermediate bounds (2.34) and (2.35) one obtains
\[ I_4 \le c_{2.45.17}(k)\, (t-s)^{\frac{k-1}{4}}, \tag{2.36} \qquad I_5 \le c_{2.45.18}(k)\, (t-s)^{\frac{k-1}{4}}. \tag{2.37} \]
Finally, $I_6$ is the contribution of the time integral over $[s,t]$; Jensen's inequality, (UB) and Remark 2.40 give
\[ I_6 \le c_{2.45.19}(k)\, (t-s)^{\frac{k-1}{4}}. \tag{2.38} \]
Combining the estimates (2.29), (2.32), (2.33), (2.36), (2.37) and (2.38) with (2.28) we see that for $0 < t-s$, $t \in [0,1]$,
\[ \mathbb{P}\big[ |u(t,x,\varepsilon) - u(s,x,\varepsilon)|^k \big] \le c_{2.45.20}(k)\, (t-s)^{\frac{k-1}{4}}. \tag{2.39} \]
Next we estimate $\mathbb{P}[|u(t,x,\varepsilon) - u(t,\bar x,\varepsilon)|^k]$. By Proposition 2.42 again,
\[ \mathbb{P}\big[ |u(t,x,\varepsilon) - u(t,\bar x,\varepsilon)|^k \big] \le 5^{k-1} \sum_{i=1}^{5} L_i, \tag{2.40} \]
where now the spatial increments $p^x - p^{\bar x}$ and $(p^x)' - (p^{\bar x})'$ appear in place of the time increments. We compute
\[ L_1 = \big( P_{t+\varepsilon}\rho(x) - P_{t+\varepsilon}\rho(\bar x) \big)^k \le c_{2.45.21}\, |x - \bar x|^{k/2} \tag{2.41} \]
(by (JC)). The terms $L_2$ and $L_3$ are estimated exactly as $I_2$ and $I_3$, now using the spatial part of Lemma 2.35; after the intermediate bound (2.42) one obtains
\[ L_2 \le c_{2.45.25}(k)\, |x - \bar x|^{\frac{k-1}{2}}, \tag{2.43} \qquad L_3 \le c_{2.45.26}(k)\, |x - \bar x|^{\frac{k-1}{2}}. \tag{2.44} \]
The terms $L_4$ and $L_5$ are estimated as $I_4$ and $I_5$, with Lemma 2.44(a) in place of Lemma 2.44(b); after the intermediate bound (2.45) one obtains
\[ L_4 \le c_{2.45.28}(k)\, |x - \bar x|^{\frac{k-1}{2}}, \tag{2.46} \qquad L_5 \le c_{2.45.29}(k)\, |x - \bar x|^{\frac{k-1}{2}}. \tag{2.47} \]
Combining the estimates (2.41), (2.43), (2.44), (2.46) and (2.47) with (2.40) we see that for any $x, \bar x$,
\[ \mathbb{P}\big[ |u(t,x,\varepsilon) - u(t,\bar x,\varepsilon)|^k \big] \le c_{2.45.30}(k)\, |x - \bar x|^{\frac{k-1}{2}}. \tag{2.48} \]
To estimate $\mathbb{P}[|u(t,x,\varepsilon) - u(t,x,\delta)|^k]$ we follow the procedure used for $\mathbb{P}[|u(t,x,\varepsilon) - u(s,x,\varepsilon)|^k]$; only a few trivial changes are required. We obtain
\[ \mathbb{P}\big[ |u(t,x,\varepsilon) - u(t,x,\delta)|^k \big] \le c_{2.45.31}\, |\varepsilon - \delta|^{\frac{k-1}{4}}. \tag{2.49} \]
Estimates (2.39), (2.48) and (2.49) show that the hypotheses of Lemma 2.43(b) hold. $\square$
{y t r+e r - x)\E (l) *^*J {dy)dr | 2 T T (by Schwarz) 1/2 < / f \p't- +e(a-x) Jo Jn -p't_ (a-x)\dadr k r 1 r+e x / [ bU+^a-^-pU+^a-xJIPfe^o) *- ]^ / Jo Jn 2 • c K(2k, l ) / k UB 1 2 x^ P 7 r 1 1 2 . [| ;_ (y - x) . - p j . _ ^ ( y r - 5)|5 (2fc)] d r / P r+e r r r 1 : 2 (by Jensen) . = C2.45.27(fc)|x J P Q r [ b U + e f o r " *) " p i - r + c f o r ~ 5)|^-(2fe)] d r ' 1 2 (by Lemma 2.44(a)) jt—i = C2.45.27(A;) |x - x j ~ L . i . 4 (2.45) Arguing as in the estimation of /4.x we see that L4.1 is bounded by a Constant. Therefore £ 4 < C2. .28(fc)|x-X\^ (2.46) 45 The estimation of L5 is identical to that of L . We obtain 4 fc — L5 < C2.45.29(A:)|x 1 - x\~ (2.47) x, x Combining the estimates (2.41), (2.43), (2.44), (2.46), (2.47) with (2.40) we see that for any TP[\u(t,x,e) -u{t,x,e)\ ] < c . 5.3o(A;)|x - x| 2 (2.48) k 2 4 To estimate P[|u(i,x,e) - u{t,x,6)\ ] we follow the procedure used to estimate JP[\ii(t,x,e) ' u(s,x,e)\ ]. Only a few trivial changes are required. We obtain k k P[\u{t, X , e) - U{t, X , 5)\ ] < C .45:31 | e k 2 fc-i - S\ 4 Estimates (2.39), (2.48) and (2.49) show that the hypothesis of Lemma 2.43 (b) hold. 74 (2.49) R e m a r k 2.46. (a) In this chapter we focused on the projection A ' of a solution K of the strong equation (HSE)^. (See Definition 2.4.) However, we could have chosen the projection X. — f l . ( I f ) ' o f a solution i f of the following martingale problem as initial data. Let m b, 7 be as in Theorem 2.9. Suppose that for all <p E DQ K (4>) = m(<t>)+£ J t A0(s,y) + b(s, fl (K),y) V0(.9,y))K (dy)ds s + Z {<j>), s t where Zt(<j>) is a continuous square integrable martingale null at. zero with square function. <Z(#)),= f " . J K { 4> )ds. 2 s ls o W i t h this definition of X the conclusions of Theorem 2.9 hold. The starting point of the proof would be a slight variation of Proposition 2.24, where we would need to employ Dawson's Girsanov theorem as in Remark 2.4 of Perkins (1995). 
The remainder of the proof would be essentially unchanged. If $\gamma = 1$ then our proof works without any changes (see Remark 2.23), although some statements, such as the conclusion of Proposition 2.24, become trivial. □

2.5 Examples: Super-Brownian Motions with Singular Interactions

This section is an application of the foregoing. We begin by recalling an example of Sznitman (1989). Roughly speaking, the model is the following: $N$ particles are placed at random (independently) on the real line at time $t = 0$. They evolve in time according to the dynamics

$$dX^i_t = dw^i_t + \frac1N \sum_{j=1}^N \delta_0\big(X^i_t - X^j_t\big)\,dt, \qquad i = 1,\dots,N, \tag{2.50}$$

where $\delta_0$ is Dirac's delta function, $(w^i)_{i=1}^N$ are independent one-dimensional Brownian motions and $X^i_t$ represents the position of the $i$-th particle at time $t$. Thus the particles feel a "push" in the positive direction when they collide; between collisions they follow independent Brownian motions. Sznitman showed (among other things) that when $N \to \infty$ a law of large numbers phenomenon occurs and the empirical distribution of the particles,

$$X^N_t = \frac1N \sum_{i=1}^N \delta_{X^i_t}, \tag{2.51}$$

converges to a non-random limit $X^\infty_t$. In the limit each particle follows an independent copy of the "nonlinear" process $X$:

$$dX_t = dw_t + u(t, X_t)\,dt, \tag{2.52}$$

where $w$ is a one-dimensional Brownian motion and $u(t,x)$ is the density of the random variable $X_t$. Moreover $X^\infty_t(dx) = u(t,x)\,dx$. Notice that the nonlinear process has marginals which satisfy, in a weak sense, Burgers' equation

$$\frac{\partial u}{\partial t} = \frac12 \frac{\partial^2 u}{\partial x^2} - \frac{\partial}{\partial x}\big(u^2\big). \tag{2.53}$$

Indeed, for any $\phi \in C^2_b(\mathbb R)$, (the classical version of) Itô's lemma and (2.52) yield

$$\phi(X_t) = \phi(X_0) + \int_0^t \phi'(X_s)\,dB_s + \int_0^t \Big(\tfrac12 \phi''(X_s) + u(s,X_s)\,\phi'(X_s)\Big)\,ds.$$

Now take expectations. Recall that $u(s,\cdot)$ is the density of $X_s$, so that for example $E[\phi(X_t)] = \int \phi(x)\,u(t,x)\,dx$. Hence

$$\int \phi(x)\big(u(t,x) - u(0,x)\big)\,dx = \int_0^t\!\!\int \Big(\tfrac12 \phi''(x)\,u(s,x) + \phi'(x)\,u(s,x)^2\Big)\,dx\,ds.$$

Integrate by parts on the right-hand side:

$$\int \phi(x)\big(u(t,x) - u(0,x)\big)\,dx = \int_0^t\!\!\int \Big(\tfrac12 \phi''(x)\,u(s,x) - \phi(x)\,\frac{\partial}{\partial x}\big(u(s,x)^2\big)\Big)\,dx\,ds.$$
This last equation can also be obtained by multiplying (2.53) by $\phi$ and integrating over space-time. Sznitman (1989) proved that $u(t,x)$ is in fact a classical solution of Burgers' equation (2.53). We refer the reader to Sznitman (1989) for the proof of this and many more very interesting results.

Sznitman's work motivated us to look at the following examples.

Example 1. Let $X^N$ be the critical Bienaymé–Galton–Watson tree of one-dimensional Brownian motions defined by (0.1) with $\gamma = 1$ (see page 3). Recall that $I(N,t)$ labels the particles alive at time $t$ and $Z^i_t$, $i = 1,\dots,I(N,t)$, labels their locations. Consider the following interacting particle system $U^N$. The family structure of $U^N$ is identical to that of $X^N$. The particles belonging to $U^N$ are labeled $U^{N,i}$ and obey the dynamics

$$dU^{N,j}_t = dZ^j_t + \frac1N \sum_{i=1}^{I(N,t)} \delta_0\big(U^{N,j}_t - U^{N,i}_t\big)\,dt, \qquad j = 1,\dots,I(N,t). \tag{2.54}$$

(Note the similarity between equations (2.50) and (2.54).) $U^N$ is defined by

$$U^N_t = \frac1N \sum_{i=1}^{I(N,t)} \delta_{U^{N,i}_t}.$$

This model is very similar to Sznitman's, but it has one extra ingredient: instead of $N$ independent Brownian motions, the driving noise is now a system of branching Brownian motions. Several natural questions arise. Does the sequence $(U^N)$ converge weakly to an interacting superprocess? If so, is there a unique limit? What martingale problem is solved by the limit points? Note that since the interactions are singular, this model is not covered by the results of Chapter 1. We are currently working on these questions. We conjecture that the sequence $(U^N)$ possesses at least one limit point which solves the stochastic partial differential equation

$$\frac{\partial u}{\partial t} = \frac12 \frac{\partial^2 u}{\partial x^2} - \frac{\partial}{\partial x}\big(u^2\big) + \sqrt{u}\,\dot W.$$

Here $W$ is a space-time white noise.
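The dynamics (2.50) and (2.54) are singular, but their mollified versions are straightforward to simulate. Below is a minimal Euler–Maruyama sketch of Sznitman's system with $\delta_0$ replaced by a Gaussian kernel $p_\varepsilon$; the particle number, step size and kernel width are illustrative choices of ours, not taken from the text.

```python
import numpy as np

def simulate_mollified_sznitman(n_particles=200, t_max=1.0, dt=1e-3,
                                eps=0.05, seed=0):
    """Euler-Maruyama for dX^i = dw^i + (1/N) sum_j p_eps(X^i - X^j) dt,
    a mollified version of the singular interaction in (2.50)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_particles)      # N particles placed at random
    for _ in range(round(t_max / dt)):
        diff = x[:, None] - x[None, :]        # pairwise differences X^i - X^j
        p_eps = np.exp(-diff**2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)
        drift = p_eps.mean(axis=1)            # (1/N) sum_j p_eps(X^i - X^j) >= 0
        x = x + drift * dt + np.sqrt(dt) * rng.standard_normal(n_particles)
    return x

positions = simulate_mollified_sznitman()
```

Since $p_\varepsilon \ge 0$, every particle receives a nonnegative push, mirroring the "push in the positive direction" of the singular model; for simplicity the self-interaction term $p_\varepsilon(0)/N$ is retained here.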
Recall that a white noise $W$ on $[0,\infty)\times\mathbb R$ is a stochastic process $(W(A) : A \in \mathcal B_f)$, where $\mathcal B_f$ denotes the Borel subsets of $[0,\infty)\times\mathbb R$ of finite Lebesgue measure, such that if $A$ and $B$ are disjoint sets in $\mathcal B_f$, then $W(A)$ and $W(B)$ are independent mean-zero Gaussian random variables with variances $|A|$ and $|B|$, respectively.

However, if we replace the delta function by a smooth approximation then the model is in fact covered by our results. Let $p_t(x)$ be the one-dimensional Brownian transition density. Fix $\varepsilon > 0$ and consider the system $U^{\varepsilon,N}$ defined by

$$dU^{\varepsilon,N,j}_t = dZ^j_t + \frac1N \sum_{i=1}^{I(N,t)} p_\varepsilon\big(U^{\varepsilon,N,j}_t - U^{\varepsilon,N,i}_t\big)\,dt, \qquad j = 1,\dots,I(N,t),$$

and

$$U^{\varepsilon,N}_t = \frac1N \sum_{i=1}^{I(N,t)} \delta_{U^{\varepsilon,N,i}_t}.$$

By Theorem 1.27 the sequence $(U^{\varepsilon,N})_N$ ($\varepsilon$ is fixed) converges weakly to a unique (in law) superprocess $U^\varepsilon$ characterized by the martingale problem $(MP)_\varepsilon$: for all $\psi \in C^2_b(\mathbb R)$,

$$Z^\varepsilon_t(\psi) = U^\varepsilon_t(\psi) - U^\varepsilon_0(\psi) - \int_0^t\!\!\int \Big[\psi'(x)\,U^\varepsilon_s\big(p_\varepsilon(x-\cdot)\big) + \tfrac12 \psi''(x)\Big]\,U^\varepsilon_s(dx)\,ds$$

is a continuous square-integrable martingale such that

$$\langle Z^\varepsilon(\psi)\rangle_t = \int_0^t U^\varepsilon_s(\psi^2)\,ds \qquad \forall t \ge 0 \text{ a.s.}$$

Theorem 5.6 of Perkins (1995) implies that $U^\varepsilon$ solves the historical strong equation

$$Y_t = y_t + \int_0^t\!\!\int p_\varepsilon(Y_s - x)\,U^\varepsilon_s(dx)\,ds, \qquad U^\varepsilon_t(\cdot) = \int 1(Y_t \in \cdot)\,H_t(dy).$$

Theorem 2.9 shows that there is a jointly continuous function $u^\varepsilon(t,x)$ such that $U^\varepsilon_t(dx) = u^\varepsilon(t,x)\,dx$. After integration by parts (see page 3 for a similar computation) we see that $u^\varepsilon$ solves the stochastic partial differential equation

$$\frac{\partial u^\varepsilon}{\partial t} = \frac12 \frac{\partial^2 u^\varepsilon}{\partial x^2} - \frac{\partial}{\partial x}\big(u^\varepsilon\,(p_\varepsilon * u^\varepsilon)\big) + \sqrt{u^\varepsilon}\,\dot W,$$

where $W$ is a space-time white noise and $*$ denotes convolution in the space variable $x$. □

Example 2. In this example we look at a super-Brownian motion in a super-Brownian field. We work on a filtered space $(\Omega, \mathcal F, (\mathcal F_t), \mathbb P)$ rich enough to carry all the random processes defined below. Let $X^N_1$ and $X^N_2$ be two independent trees of critically branching one-dimensional Brownian motions (see page 3). The notation is as in the previous example: $I_1(N,t)$ (resp. $I_2(N,t)$) labels the set of particles belonging to $X^N_1$ (resp. $X^N_2$) alive at time $t$, and $Z^i_1$ (resp. $Z^i_2$) labels their locations.
We define a process $U^N$ as follows. The family structure of $U^N$ is identical to that of $X^N_1$. The particles belonging to $U^N$ are labeled $U^{N,i}$ and obey the dynamics

$$dU^{N,i}_t = dZ^i_1(t) + \frac1N \sum_{j=1}^{I_2(N,t)} \delta_0\big(U^{N,i}_t - Z^j_2(t)\big)\,dt, \qquad i = 1,\dots,I_1(N,t). \tag{2.55}$$

(Note the similarity between equations (2.50), (2.54) and (2.55).) $U^N$ is defined by

$$U^N_t = \frac1N \sum_{i=1}^{I_1(N,t)} \delta_{U^{N,i}_t}. \tag{2.56}$$

When $N \to \infty$, $X^N_1$ converges to a super-Brownian motion $X_1$, while $X^N_2$ converges to a super-Brownian motion $X_2$. Call $H_1$ and $H_2$ the historical processes corresponding to $X_1$ and $X_2$, respectively. We expect that the sequence $(U^N)$ will converge to a superprocess $U$. In fact the system of equations (2.55)–(2.56) should converge to the historical strong equation

$$dY_t = dy_1(t) + \int \delta\big(Y_t - y_2(t)\big)\,H_2(t)(dy_2)\,dt, \qquad U_t(\cdot) = \int 1(Y_t \in \cdot)\,H_1(t)(dy_1). \tag{2.57}$$

Note that (2.55)–(2.56) and (2.57) are equivalent if $(U, H_1, H_2)$ are replaced by $(U^N, H^N_1, H^N_2)$. The convergence does not follow from the results of Chapter 1, and we do not prove it here (although we hope to do so in a future work). Instead we work directly with the limiting system (2.57), which we call super-Brownian motion in a super-Brownian field.

Recall that one-dimensional super-Brownian motion has a jointly continuous density a.s. Therefore we may write $X_2(t)(dx) = v(t,x)\,dx$. We now condition on $X_2$; we therefore consider $v$ as a fixed function, and so (2.2) holds. We may recast (2.57) in the form

$$Y_t = y_t + \int_0^t v(s, Y_s)\,ds, \qquad U_t(\cdot) = \int 1(Y_t \in \cdot)\,H_1(t)(dy). \tag{2.58}$$

We now show that (2.58) has a solution. Since $v$ is uniformly bounded and measurable, the method of Zvonkin (1974) shows that for any bounded $(\mathcal F_t)$-stopping time the equation has a unique non-exploding strong solution on $(\bar\Omega, \bar{\mathbb P}_T)$. We can use the section theorem and the procedure in the proof of Theorem 2.12 of Perkins (1995) to show that there is a universal version of $Y_t(\omega, y)$ such that (2.58) holds $H_1$-a.e. We then use the second equation in (2.58) to define $U$ ($\mathbb P$-a.s.).
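Given $X_2$, the first equation in (2.58) is a one-dimensional SDE whose drift $v$ is bounded and measurable but possibly irregular, which is precisely the setting of Zvonkin (1974). Below is a minimal Euler sketch with a discontinuous bounded drift standing in for the density $v$; the drift, step size and horizon are hypothetical choices for illustration only.

```python
import numpy as np

def euler_bounded_drift(v, y0=0.0, t_max=1.0, dt=1e-3, seed=1):
    """Euler scheme for dY = v(t, Y) dt + dB, with v bounded and measurable."""
    rng = np.random.default_rng(seed)
    n = round(t_max / dt)
    y = np.empty(n + 1)
    y[0] = y0
    for i in range(n):
        y[i + 1] = (y[i] + v(i * dt, y[i]) * dt
                    + np.sqrt(dt) * rng.standard_normal())
    return y

# A bounded, discontinuous stand-in for the (fixed) density v(s, x):
v = lambda s, x: 1.0 if x < 0.0 else -0.5
path = euler_bounded_drift(v)
```

The scheme itself makes no smoothness demands on $v$; it is Zvonkin's space transformation, which removes the drift, that guarantees the SDE has a unique strong solution in this low-regularity setting.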
Applying Theorem 2.9 to the system (2.58) we conclude that, given $X_2$, there is an a.s. jointly continuous function $u(t,x)$ such that

$$U_t(dx) = u(t,x)\,dx \quad \text{a.s.} \qquad \square$$

Remark 2.47. In fact it is not necessary to condition on $X_2$, but then we must deal with some lengthy technical details. We have decided to leave them out, but hope to include them in some future work. □

Chapter 3

Local Times for One-Dimensional Interacting Superprocesses

3.1 Introduction and Statement of Results

In this chapter we study the local times of interacting superprocesses. We begin by describing the notions of superprocess occupation and local times, considering first the case of super-Brownian motion $W$. Throughout this chapter we restrict ourselves to the case where the underlying particle motions are one-dimensional. The occupation time process $Y_t(\cdot)$ is defined by

$$Y_t(\phi) = \int_0^t W_s(\phi)\,ds \tag{3.1}$$

for any Borel $\phi : \mathbb R \to \mathbb R_+$. Formally, the local time $L^a_t(W)$ of $W$ is obtained by replacing $\phi$ by $\delta_a$ (Dirac's delta) in (3.1). Thus, if $u = u(t,x)$ is the jointly continuous density of super-Brownian motion with respect to Lebesgue measure, then

$$L^a_t(W) = \int_0^t u(s,a)\,ds.$$

This implies the existence of a jointly continuous version of $L^a_t(W)$ which is continuously differentiable in $t$. (See Sugitani (1988) for more information on regularity properties of $L^a_t(W)$.) Note that $L^a_t(W)$ satisfies the fundamental density-of-occupation formula

$$\int_0^t W_s(\phi)\,ds = \int_{-\infty}^\infty \phi(a)\,L^a_t(W)\,da.$$

Indeed,

$$\int_0^t W_s(\phi)\,ds = \int_0^t\!\!\int_{-\infty}^\infty \phi(a)\,u(s,a)\,da\,ds = \int_{-\infty}^\infty\!\!\int_0^t \phi(a)\,u(s,a)\,ds\,da = \int_{-\infty}^\infty \phi(a)\,L^a_t(W)\,da.$$

In general we define local times for superprocesses to be occupation densities. We have the following:

Definition 3.1. Let $(X, \Omega, (\mathcal F_t), \mathbb P)$ be an $M_F(\mathbb R)$-valued process, i.e. for each $t$, $X_t$ is a finite random measure on $(\mathbb R, \mathcal B(\mathbb R))$. We say that $L^a_t(X)$ is the local time of $X$ if the following conditions are satisfied:

(i) The map $(a,t) \mapsto L^a_t(X)$ is $\mathbb P$-a.s. continuous.
(ii) For any nonnegative Borel function $\phi$,

$$\int_0^t X_s(\phi)\,ds = \int_{-\infty}^\infty \phi(a)\,L^a_t(X)\,da. \tag{3.2}$$

□

As an immediate consequence of Theorem 2.9 we obtain

Corollary 3.2. The interacting superprocess $X$ described in Chapter 2 possesses a local time $L^a_t(X)$ which is jointly continuous and continuously differentiable in $t$.

Our goal in this chapter is to show that more general interacting one-dimensional superprocesses have local times. The measure-valued processes we consider arise as solutions of strong historical equations (see the introduction of Chapter 2). We must therefore introduce diffusion, drift and mass coefficients. We also need some notation.

In this chapter $\Omega = (\Omega, \mathcal F, (\mathcal F_t), \mathbb P)$ will denote a filtered probability space satisfying the usual hypotheses. We write $C = C([0,\infty), \mathbb R)$ and call $(\mathcal C_t)$ (resp. $\mathcal C$) its canonical filtration (resp. its Borel $\sigma$-field). $H$ is a one-dimensional historical Brownian motion on $\Omega$ starting at $m \in M_F(C)$; $(\bar\Omega, \bar{\mathcal F}, (\bar{\mathcal F}_t)) = (\Omega\times C, \mathcal F\times\mathcal C, (\mathcal F_t\times\mathcal C_t))$, and $Z^H$ denotes the martingale measure associated with $H$ (see p. 11 of Perkins 1995). A set $A \subset [0,\infty)\times\bar\Omega$ is $(H,\mathbb P)$-evanescent (or $H$-evanescent) iff $A \subset A_1$, where $A_1$ is $(\bar{\mathcal F}_t)$-predictable and

$$\sup_{0\le u\le t} 1_{A_1}(u,\omega,y) = 0 \quad \text{for } H_t\text{-a.e. } y,\ \forall t > 0,\ \mathbb P\text{-a.s.}$$

A property holds $(H,\mathbb P)$-a.e. (or $H$-a.e.) if it holds off an $H$-evanescent set. If $T$ is a bounded $(\mathcal F_t)$-stopping time, $\bar{\mathbb P}_T$ denotes the Campbell measure on $\bar\Omega$ defined by

$$\bar{\mathbb P}_T(A\times B) := \mathbb P\big(1_A\,H_T(B)\big)\big/\mathbb P\big(H_0(1)\big) \qquad \text{for } A \in \mathcal F,\ B \in \mathcal C.$$

Suppose that $\varepsilon > 0$. Let

$$\sigma : M_F(\mathbb R)\times\mathbb R \to [\varepsilon,\infty), \qquad b : M_F(\mathbb R)\times\mathbb R \to \mathbb R, \qquad \gamma : [0,\infty)\times C\big([0,\infty), M_F(\mathbb R)\big)\times C \to (0,\infty).$$

The hypotheses on these coefficients are the following. Here $d = d_{|\cdot|}$ is the Vasershtein metric on $M_F(\mathbb R)$ defined by

$$d(\mu,\nu) = \sup\big\{ |\mu(f) - \nu(f)| : f : \mathbb R\to\mathbb R,\ \|f\|_\infty \le 1,\ |f(x)-f(y)| \le |x-y|\ \forall x,y\in\mathbb R \big\}.$$

Boundedness by the total mass.
There are non-decreasing functions $\Gamma, \bar\Gamma : [0,\infty) \to [1,\infty)$ such that

$$\sup_x |b(\mu,x)| + |\sigma(\mu,x)| \le \Gamma(\mu(1)) \qquad \forall \mu \in M_F(\mathbb R), \tag{3.3}$$

$$\sup_{y\in C} \gamma(t,X,y) \le \bar\Gamma(t)\Big(1 + \int_0^t X_s(1)\,ds\Big) \qquad \forall X \in C\big([0,\infty), M_F(\mathbb R)\big). \tag{3.4}$$

Lipschitz condition.

$$|\sigma(\mu,x) - \sigma(\nu,z)| + |b(\mu,x) - b(\nu,z)| \le \Gamma\big(\mu(1) + \nu(1)\big)\big(d(\mu,\nu) + |x-z|\big). \tag{3.5}$$

The reader will find many examples of coefficients $b$, $\sigma$, $\gamma$ satisfying the conditions above in Chapter 1 and in Chapter 4 of Perkins (1995).

We say that $(X,Y)$ is a solution of $(HSE)_{\sigma,b,\gamma}$ iff

$$\text{(i)}\quad Y_t = Y_0 + \int_0^t \sigma(X_s,Y_s)\,dy(s) + \int_0^t b(X_s,Y_s)\,ds, \qquad \text{(ii)}\quad X_t(\phi) = \int \phi(Y_t)\,H_t(dy),$$

where

• $Y : [0,\infty)\times\bar\Omega \to \mathbb R$ is $(\bar{\mathcal F}_t)$-predictable, $X : [0,\infty)\times\Omega \to M_F(\mathbb R)$ is $(\mathcal F_t)$-predictable, and $X \in C\big([0,\infty), M_F(\mathbb R)\big)$;

• $(HSE)_{\sigma,b,\gamma}$(ii) holds for all bounded measurable $\phi : \mathbb R \to \mathbb R$ for all $t \ge 0$ $\mathbb P$-a.s., and $(HSE)_{\sigma,b,\gamma}$(i) holds $H$-a.e.

The stochastic integral on the right-hand side of $(HSE)_{\sigma,b,\gamma}$(i) was defined in Proposition 2.15. Here is the main result of this chapter:

Theorem 3.3. Let $X$ be a solution of $(HSE)_{\sigma,b,\gamma}$. Then $X$ has a local time $L^a_t(X)$.

The basic tool needed in the proof is a Tanaka-like formula of Perkins (1995). We shall not need its most general form; we will employ the following version.

Theorem 3.4. Assume $L : [0,\infty)\times\bar\Omega \to [0,\infty)$ is $(\bar{\mathcal F}_t)$-predictable and $L(\cdot,\omega,y)$ is non-decreasing and left continuous for $H_t$-a.e. $y$, for all $t \ge 0$, $\mathbb P$-a.s. Also assume

$$\int_0^t H_s(L_s^2)\,ds < \infty \qquad \forall t > 0\ \mathbb P\text{-a.s.}$$

Then there is an a.s. non-decreasing, left-continuous, $[0,\infty)$-valued, $(\mathcal F_t)$-predictable process $A_t(L) = \int_0^t H_s(dL_s)$ such that $A_0(L) = 0$ and

$$H_t(L_t) = H_0(L_0) + \int_0^t\!\!\int L(s,\omega,y)\,Z^H(ds,dy) + \int_0^t H_s(dL_s) \qquad \forall t \ge 0 \text{ a.s.}$$

Moreover, if $L$ is continuous $H$-a.e. then $A_t(L)$ is a.s. continuous.

Proof. This is a special case of Theorem 2.24 of Perkins (1995). □

We illustrate the idea of the proof of Theorem 3.3 with the particular example of super-Brownian motion. Recall that if $T$ is a bounded $(\mathcal F_t)$-stopping time, $B_t(\omega,y) = y_t - y_0$ is a Brownian motion stopped at $T$ on $(\Omega\times C, \mathcal F\times\mathcal C, \bar{\mathbb P}_T)$ (see Proposition 2.14).
Let $\ell^a_t(\omega,y)$ be its local time, normalized so that $\int_0^t \phi(B_s)\,ds = \int_{\mathbb R} \phi(a)\,\ell^a_t\,da$. Applying Theorem 3.4 with $L = \ell^a$ we get

$$H_t(\ell^a_t) = \int_0^t\!\!\int \ell^a_s\,Z^H(ds,dy) + \int_0^t H_s(d\ell^a_s). \tag{3.6}$$

The second term on the right-hand side of (3.6) is precisely the local time of super-Brownian motion; this is intuitively clear from the particle picture. A straightforward computation shows that $L^a_t(W) := \int_0^t H_s(d\ell^a_s)$ satisfies (3.2). Using the representation of the local time furnished by (3.6) and Kolmogorov's criterion, we are able to prove that the local time is indeed jointly (Hölder) continuous.

Remark 3.5. (a) Another way to verify that $L^a_t(W)$ as defined above is in fact the density of occupation is the following. For simplicity we consider only the local time at $0$. Let $R_\lambda$ be the $\lambda$-Green's function

$$R_\lambda(x) = \int_0^\infty e^{-\lambda t}\,\frac{e^{-x^2/2t}}{\sqrt{2\pi t}}\,dt.$$

By Itô's lemma, the local time of Brownian motion satisfies the Tanaka-type formula (see e.g. p. 14 of Adler 1992)

$$\ell^0_t = R_\lambda(B_0) - R_\lambda(B_t) + \int_0^t (R_\lambda)'(B_s)\,dB_s + \lambda \int_0^t R_\lambda(B_s)\,ds. \tag{3.7}$$

Now $H_t(\ell^0_t)$ may be computed using the representation (3.7) together with Itô's lemma for historical integrals (Proposition 2.18). Substituting (3.7) and collecting terms, all the $R_\lambda$ contributions formally reorganize into

$$H_t(\ell^0_t) = \int_0^t\!\!\int \ell^0_s\,Z^H(ds,dy) + \int_0^t H_s\Big(\lambda R_\lambda(y_s) - \tfrac12 (R_\lambda)''(y_s)\Big)\,ds,$$

and hence

$$H_t(\ell^0_t) = \int_0^t\!\!\int \ell^0_s\,Z^H(ds,dy) + \int_0^t H_s(\delta_0)\,ds, \tag{3.8}$$

since $-\tfrac12 (R_\lambda)'' + \lambda R_\lambda = \delta_0$. Comparing (3.6) and (3.8), we see that $\int_0^t H_s(d\ell^0_s) = \int_0^t H_s(\delta_0)\,ds$.
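For the record, the $\lambda$-Green's function can be evaluated in closed form, and the distributional identity $-\tfrac12 (R_\lambda)'' + \lambda R_\lambda = \delta_0$ invoked in (3.8) can be checked directly:

```latex
R_\lambda(x) = \int_0^\infty e^{-\lambda t}\,\frac{e^{-x^2/2t}}{\sqrt{2\pi t}}\,dt
             = \frac{1}{\sqrt{2\lambda}}\,e^{-\sqrt{2\lambda}\,|x|},
\qquad
(R_\lambda)'(x) = -\operatorname{sgn}(x)\,e^{-\sqrt{2\lambda}\,|x|}.
```

For $x \neq 0$ one has $(R_\lambda)'' = 2\lambda R_\lambda$, while $(R_\lambda)'$ jumps by $-2$ at the origin; hence in the sense of distributions $(R_\lambda)'' = 2\lambda R_\lambda - 2\delta_0$, which is exactly the identity used to pass from (3.7) to (3.8).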
(b) All the arguments in this subsection concerning super-Brownian motion are easily made rigorous. However, all these results are elementary and their proofs may be found elsewhere.

(c) Local time for super-Brownian motion is known to exist in dimensions $d \le 3$. The analogous result for interacting superprocesses is currently under investigation. The cases $d = 2, 3$ are harder than the case $d = 1$, since super-Brownian motion does not have a density in dimensions greater than one. □

This chapter is organized as follows. Section 3.2 is devoted to the proof of Theorem 3.3. We define a family of random variables $\{L^a_t(X) : a \in \mathbb R,\ t \in \mathbb R_+\}$ and show that it satisfies the conditions of Definition 3.1.

3.2 Existence and Regularity of Local Times

Recall that for any bounded $(\mathcal F_t)$-stopping time $T$ the process $Y$ defined by $(HSE)_{\sigma,b,\gamma}$ is a semimartingale on the space $\bar\Omega_T = (\Omega\times C, (\mathcal F_t\times\mathcal C_t), \bar{\mathbb P}_T)$. We wish to define an $(\mathcal F_t\times\mathcal C_t)$-predictable process $\ell^a_t(\omega,y)$ which agrees with the semimartingale local time of $Y$ on $\bar\Omega_T$, $\bar{\mathbb P}_T$-a.s., for all bounded $(\mathcal F_t)$-stopping times $T$. We use Tanaka's formula for semimartingales.

Definition 3.6. For any $a \in \mathbb R$ and $t \ge 0$ let

$$\ell^a_t = |Y_t - a| - |Y_0 - a| - \int_0^t \mathrm{sgn}(Y_s - a)\,\sigma(X_s,Y_s)\,dy(s) - \int_0^t \mathrm{sgn}(Y_s - a)\,b(X_s,Y_s)\,ds. \tag{3.9}$$

Note that all the terms on the right-hand side of (3.9) are defined up to $(H,\mathbb P)$-evanescent sets in $[0,\infty)\times\bar\Omega$. Also, $\int_0^s \phi(Y_r)\,d\langle Y\rangle_r = \int \phi(a)\,\ell^a_s\,da$ $H$-a.e. in $[0,\infty)\times\bar\Omega$ (see Section 1 of Chapter VI of Revuz and Yor (1991)). □

We shall need the following technical results.

Lemma 3.7. Let

$$T(n) = \inf\{ s \ge 0 : H_s(1) > n \}. \tag{3.10}$$

Then there is a function $\Theta : \mathbb N\times[0,\infty) \to \mathbb R_+$, non-decreasing in each variable, such that

$$X^*_s(1) \le \Theta(n,s) \quad \text{on } \{T(n) > s\}. \tag{3.11}$$

Proof. This statement is proved on p. 61 of Perkins (1995); in the notation therein, $\Theta(n,s) = \Gamma_1(n)e^{\Gamma_1(n)s}$. The hypothesis (3.4) on $\gamma$ is needed here. □

Remark 3.8. In our present setting it is not necessarily true that $E\big[X^*_t(1)^p\big] < \infty$ for $p \in \mathbb N$. □

Lemma 3.9. Assume $T$ is a bounded $(\mathcal F_t)$-stopping time and $\psi \in b\bar{\mathcal F}_{T\wedge s}$. Then

$$H_{t\wedge T}(\psi) = H_{s\wedge T}(\psi) + \int_{s\wedge T}^{t\wedge T}\!\!\int \psi\,Z^H(dr,dy) \qquad \forall t \ge s \text{ a.s.}$$

Proof. This is a particular case of Proposition 2.7 of Perkins (1995). □

Lemma 3.10. Let $L : [0,\infty)\times\Omega \to [0,\infty)$ be $(\mathcal F_t)$-optional, and let $S$ be an $(\mathcal F_t)$-stopping time. Then

$$\mathbb P\big[L^*(S) > \varepsilon\big] = \sup_{T\le S} \mathbb P\big[L(T) > \varepsilon\big] \qquad \forall \varepsilon > 0,$$

where the sup on the right is taken over all $(\mathcal F_t)$-stopping times $T$ bounded by $S$.

Proof. This result is a consequence of the optional section theorem; see Lemma 3.7 of Perkins (1993). □

Suppose we could apply Theorem 3.4 to the process $L = \ell^a$. We would obtain

$$\int \ell^a_t\,H_t(dy) = \int_0^t\!\!\int \ell^a_s\,Z^H(ds,dy) + \int_0^t H_s(d\ell^a_s) \qquad \forall t\ \mathbb P\text{-a.s.} \tag{3.12}$$

The next argument shows that $\ell^a$ satisfies the hypotheses of Theorem 3.4, so that (3.12) holds. Indeed, for any bounded $(\mathcal F_t)$-stopping time $T$, $\ell^a$ is non-decreasing on $[0,T]$ $\bar{\mathbb P}_T$-a.s., and the section theorem then shows that $\ell^a$ is continuous, non-negative and non-decreasing $H$-a.e. on $[0,\infty)\times\bar\Omega$. To finish, we need to check that $\int_0^t H_s\big((\ell^a_s)^2\big)\,ds < \infty$ $\mathbb P$-a.s. Define stopping times $T(n)$ as in (3.10). Using (3.9), Burkholder's inequality, the bound (3.3) and Lemma 3.7 we compute

$$\bar{\mathbb P}_{s\wedge T(n)}\big[\,|Y_{s\wedge T(n)} - a|^2\,\big] \le 3\Big(\bar{\mathbb P}_{s\wedge T(n)}\big[|Y_0 - a|^2\big] + c_2\,s\,\Gamma(\Theta(n,s))^2 + s^2\,\Gamma(\Theta(n,s))^2\Big) =: c_{3.12.1}(s,n). \tag{3.13}$$

Similarly,

$$\bar{\mathbb P}_{s\wedge T(n)}\Big[\Big(\int_0^{s\wedge T(n)} \mathrm{sgn}(Y_r - a)\,\sigma(X_r,Y_r)\,dy(r)\Big)^2\Big] \le \bar{\mathbb P}_{s\wedge T(n)}\Big[\int_0^{s\wedge T(n)} \sigma(X_r,Y_r)^2\,dr\Big] =: c_{3.12.2}(s,n). \tag{3.14}$$
The same reasoning also shows that

$$\bar{\mathbb P}_{s\wedge T(n)}\Big[\Big(\int_0^{s\wedge T(n)} \mathrm{sgn}(Y_r - a)\,b(X_r,Y_r)\,dr\Big)^2\Big] \le c_{3.12.3}(s,n). \tag{3.15}$$

As a consequence of (3.9), (3.13), (3.14) and (3.15) we obtain

$$\sup_{0\le s\le T(n)} \bar{\mathbb P}_{s\wedge T(n)}\big[(\ell^a_{s\wedge T(n)})^2\big] < \infty. \tag{3.16}$$

Hence $\int_0^t H_s\big((\ell^a_s)^2\big)\,ds < \infty$ $\mathbb P$-a.s., and (3.12) holds.

Definition 3.11. We set

$$\tilde L^a_t(X) := \int_0^t H_s(d\ell^a_s) = \int \ell^a_t\,H_t(dy) - \int_0^t\!\!\int \ell^a_s\,Z^H(ds,dy) \tag{3.17}$$

(see equation (3.12)). By Theorem 3.4 this process is a.s. increasing and continuous in $t$; it therefore defines a measure $\tilde L^a_{ds}(X)$ on $\mathbb R_+$. Let

$$L^a_t(X) := \int_0^t \sigma^{-2}(X_s,a)\,\tilde L^a_{ds}(X). \tag{3.18}$$

As the notation indicates, the random process $L^a_t(X)$ defined above is indeed the desired local time of $X$; we must now show that it fulfills the conditions of Definition 3.1. Note that the Lipschitz condition (3.5) implies that the mapping $s \mapsto \sigma^{-2}(X_s,a)$ is continuous and therefore integrable with respect to the measure $\tilde L^a_{ds}(X)$.

First, we study the continuity of $\tilde L^a_t(X)$. Without loss of generality we fix $N > 0$ and restrict ourselves to the time interval $[0,N]$. As usual, some localization procedure is needed: we stop the processes $H$, $X$, $Y$ and $\ell^a$ at $T(n)$. Note that $\ell^a_{\cdot\wedge T(n)}$ is the local time of $Y_{\cdot\wedge T(n)}$ (in Campbell space). Also, $Z^{H^{T(n)}}$ is the orthogonal martingale measure associated with $H^{T(n)}$. We shall need the following estimates.

Lemma 3.12. For any $p \ge 1$ there are constants $c_{3.12.1}(n,N,p)$, $c_{3.12.2}(n,N,p)$, $c_{3.12.3}(n,N,p)$ such that for any $(\mathcal F_t)$-stopping time $T \le T(n)\wedge N$ and any $x,z \in \mathbb R$, $s,t \in [0,N]$,

$$\sup_{a\in\mathbb R} \bar{\mathbb P}_T\big[(\ell^a_T)^p\big] \le c_{3.12.1}(n,N,p), \tag{3.19}$$

$$\bar{\mathbb P}_T\big[\,|\ell^x_T - \ell^z_T|^p\,\big] \le c_{3.12.2}(n,N,p)\,|x-z|^{p/2}, \tag{3.20}$$

and

$$\bar{\mathbb P}\big[\,|\ell^x_{T\wedge t} - \ell^x_{T\wedge s}|^p\,\big] \le c_{3.12.3}(n,N,p)\,|t-s|^{p/2}, \tag{3.21}$$

provided $|x-z| \le 1$, $|t-s| \le 1$.

Proof. Barlow and Yor (1981) show that there is a constant $C_p$ such that $\sup_a \|\ell^a_T\|_{L^p} \le (C_p)^{1/p}\,\|Y^*_T\|_{L^p}$. That is,

$$\sup_{a\in\mathbb R} \bar{\mathbb P}_T\big[(\ell^a_T)^p\big] \le C_p\,\bar{\mathbb P}_T\Big[\Big(|Y_0| + \Big(\int_0^T \sigma(X_s,Y_s)^2\,ds\Big)^{1/2} + \int_0^T |b(X_s,Y_s)|\,ds\Big)^p\Big].$$

The quantity on the right-hand side has already been estimated in the special case $p = 2$ (see the discussion preceding Definition 3.11); the case of arbitrary $p$ is similar: by (3.3), Lemma 3.7 and $T \le T(n)\wedge N$,

$$\sup_{a\in\mathbb R} \bar{\mathbb P}_T\big[(\ell^a_T)^p\big] \le 3^{p-1} C_p\Big(\bar{\mathbb P}_T\big[|Y_0|^p\big] + N^{p/2}\,\Gamma(\Theta(n,N))^p + N^p\,\Gamma(\Theta(n,N))^p\Big) =: c_{3.12.1}(n,N,p).$$

This concludes the proof of (3.19). We now show the validity of (3.20). Assume, w.l.o.g., $x < z$. By (3.9) and Burkholder's inequality (note that $\mathrm{sgn}(\cdot - x) - \mathrm{sgn}(\cdot - z) = 2\cdot 1_{(x,z]}$),

$$\bar{\mathbb P}_T\big[\,|\ell^x_T - \ell^z_T|^p\,\big] \le 3^{p-1}\Big(|x-z|^p + c_p\,\bar{\mathbb P}_T\Big[\Big(\int_0^T 1_{(x,z]}(Y_s)\,\sigma(X_s,Y_s)^2\,ds\Big)^{p/2}\Big] + \bar{\mathbb P}_T\Big[\Big(\int_0^T 1_{(x,z]}(Y_s)\,|b(X_s,Y_s)|\,ds\Big)^p\Big]\Big) =: 3^{p-1}\big(|x-z|^p + I_1 + I_2\big). \tag{3.22}$$

We estimate $I_1$ by the density-of-occupation formula and (3.19):

$$I_1 = c_p\,\bar{\mathbb P}_T\Big[\Big(\int_x^z \ell^a_T\,da\Big)^{p/2}\Big] \le c_p\,|x-z|^{p/2-1}\int_x^z \bar{\mathbb P}_T\big[(\ell^a_T)^{p/2}\big]\,da \le c_p\,c_{3.12.1}(n,N,p/2)\,|x-z|^{p/2}. \tag{3.23}$$

For $I_2$ we use $\sigma \ge \varepsilon$ together with (3.3) and Lemma 3.7, which give $|b(X_s,Y_s)| \le \varepsilon^{-2}\,\Gamma(\Theta(n,N))\,\sigma(X_s,Y_s)^2$ on $[0,T]$, so that by the density-of-occupation formula and (3.19) again,

$$I_2 \le \varepsilon^{-2p}\,\Gamma(\Theta(n,N))^p\,\bar{\mathbb P}_T\Big[\Big(\int_x^z \ell^a_T\,da\Big)^p\Big] \le \varepsilon^{-2p}\,\Gamma(\Theta(n,N))^p\,c_{3.12.1}(n,N,p)\,|x-z|^p. \tag{3.24}$$

(3.20) follows from (3.22), (3.23) and (3.24). The proof of (3.21) is easier. Without loss of generality assume $t > s$. By Burkholder's inequality, (3.3) and Lemma 3.7,

$$\bar{\mathbb P}\big[\,|\ell^x_{T\wedge t} - \ell^x_{T\wedge s}|^p\,\big] \le 2^{p-1}\Big(\Gamma(\Theta(n,N))^p\,|t-s|^{p/2} + \Gamma(\Theta(n,N))^p\,|t-s|^p\Big). \tag{3.25}$$

Set $c_{3.12.3} = 2^p\,\Gamma(\Theta(n,N))^p$. This proves (3.21). □

Lemma 3.13. Fix $p \ge 1$ and assume $|x-z| \le 1$, $|t-s| \le 1$. There exist constants $c_{3.13.1}(N,n,p)$, $c_{3.13.2}(N,n,p)$, $c_{3.13.3}(N,n,p)$ such that for any $s,t \in [0,N]$,

$$\mathbb P\big[\,|\tilde L^x_{t\wedge T(n)}(X)|^p\,\big] \le c_{3.13.1}(N,n,p), \tag{3.26}$$

$$\mathbb P\Big[\sup_{s\le T(n)} |\tilde L^x_s(X) - \tilde L^z_s(X)|^p\Big] \le c_{3.13.2}(N,n,p)\,|x-z|^{p/2}, \tag{3.27}$$

and

$$\mathbb P\big[\,|\tilde L^x_{t\wedge T(n)}(X) - \tilde L^x_{s\wedge T(n)}(X)|^p\,\big] \le c_{3.13.3}(N,n,p)\,|t-s|^{p/2}. \tag{3.28}$$

Proof. By (3.12),

$$\mathbb P\big[\,|\tilde L^x_{t\wedge T(n)}(X)|^p\,\big] \le 2^{p-1}\,\mathbb P\big[H_{t\wedge T(n)}(\ell^x_{t\wedge T(n)})^p\big] + 2^{p-1}\,\mathbb P\Big[\Big|\int_0^{t\wedge T(n)}\!\!\int \ell^x_s\,Z^H(ds,dy)\Big|^p\Big] =: I_1 + I_2.$$

By Jensen's inequality (with respect to the normalized measure $H_{t\wedge T(n)}/H_{t\wedge T(n)}(1)$, whose total mass is at most $n$) and Lemma 3.12,

$$I_1 \le 2^{p-1}\,n^{p-1}\,c_{3.12.1}(n,N,p),$$

while Burkholder's and Jensen's inequalities together with Lemma 3.12 give

$$I_2 \le 2^{p-1}\,c_p\,n^{p/2}\,N^{p/2}\,c_{3.12.1}(n,N,p).$$

Put $c_{3.13.1}(N,n,p) = 2^{p-1} n^{p-1} c_{3.12.1}(n,N,p) + 2^{p-1} c_p\,n^{p/2} N^{p/2}\,c_{3.12.1}(n,N,p)$. This proves (3.26). Next we prove (3.27). As above,

$$\mathbb P\Big[\sup_{s\le T(n)} |\tilde L^x_s(X) - \tilde L^z_s(X)|^p\Big] \le 2^{p-1}\,\mathbb P\Big[\sup_{s\le T(n)} H_s\big(|\ell^x_s - \ell^z_s|\big)^p\Big] + 2^{p-1}\,\mathbb P\Big[\sup_{s\le T(n)} \Big|\int_0^s\!\!\int (\ell^x_r - \ell^z_r)\,Z^H(dr,dy)\Big|^p\Big] =: J_1 + J_2.$$

Consider any $(\mathcal F_t)$-stopping time $S \le T(n)\wedge N$. By Jensen's inequality and Lemma 3.12,

$$\mathbb P\big[H_S\big(|\ell^x_S - \ell^z_S|\big)^p\big] \le n^p\,c_{3.12.2}(n,N,p)\,|x-z|^{p/2} =: k_p\,|x-z|^{p/2},$$

where the bound is uniform over all stopping times $S$ bounded by $T(n)\wedge N$. Combining Chebyshev's inequality, Lemma 3.10 and the layer-cake formula $\mathbb P[\xi^p] = p\int_0^\infty \theta^{p-1}\,\mathbb P[\xi > \theta]\,d\theta$ then gives

$$J_1 \le 2^{p-1}\,(p+1)\,k_{p+1}\,|x-z|^{p/2}.$$

Moreover, by Burkholder's and Jensen's inequalities and Lemma 3.12,

$$J_2 \le 2^{p-1}\,c_p\,n^{p/2}\,N^{p/2}\,c_{3.12.2}(n,N,p)\,|x-z|^{p/2}.$$

Define $c_{3.13.2}(N,n,p)$ as the sum of the two constants above. This proves (3.27). To finish the proof we estimate, again by (3.12),

$$\mathbb P\big[\,|\tilde L^x_{t\wedge T(n)}(X) - \tilde L^x_{s\wedge T(n)}(X)|^p\,\big] \le 2^{p-1}\,\mathbb P\big[\,|H_{t\wedge T(n)}(\ell^x_{t\wedge T(n)}) - H_{s\wedge T(n)}(\ell^x_{s\wedge T(n)})|^p\,\big] + 2^{p-1}\,\mathbb P\Big[\Big|\int_{s\wedge T(n)}^{t\wedge T(n)}\!\!\int \ell^x_r\,Z^H(dr,dy)\Big|^p\Big] =: A_1 + A_2.$$

By Lemma 3.9,

$$H_{t\wedge T(n)}\big(\ell^x_{s\wedge T(n)}\big) = H_{s\wedge T(n)}\big(\ell^x_{s\wedge T(n)}\big) + \int_{s\wedge T(n)}^{t\wedge T(n)}\!\!\int \ell^x_{s\wedge T(n)}\,Z^H(dr,dy),$$

so $A_1$ splits into a term controlled by the time increment $|\ell^x_{t\wedge T(n)} - \ell^x_{s\wedge T(n)}|$, which Jensen's inequality and Lemma 3.12 bound by $2^{2p-1} n^{p-1}\,c_{3.12.3}(n,N,p)\,|t-s|^{p/2}$, and a stochastic-integral term which Burkholder's and Jensen's inequalities and Lemma 3.12 bound by $2^{2p-1} c_p\,(t-s)^{p/2}\,n^{p/2}\,c_{3.12.1}(n,N,p)$. The same computations bound $A_2$ by $2^{p-1} c_p\,(t-s)^{p/2}\,n^{p/2}\,c_{3.12.1}(n,N,p)$. Setting $c_{3.13.3}(N,n,p)$ equal to the sum of these constants proves (3.28). □

Corollary 3.14. The map $(s,x) \mapsto \tilde L^x_s(X)$ has a continuous version.

Proof.
The estimate (3.27) and Kolmogorov's criterion (Theorem 2.43) show that $(s,x) \mapsto \tilde L^x_s(X)$ has a continuous version on $[0,T(n)]\times\mathbb R$. The desired result follows from the fact that $T(n) \to \infty$ a.s. □

The following result will be needed in the proof of the next proposition.

Lemma 3.15. There exists a non-decreasing function $\Xi : [0,\infty) \to [0,\infty)$ such that for any measure $\mu$ and any $x,z \in \mathbb R$,

$$|\sigma^{-2}(\mu,x) - \sigma^{-2}(\mu,z)| \le \Xi(\mu(1))\,|x-z|. \tag{3.30}$$

Proof. Recall that by (3.3)–(3.5),

$$|\sigma(\mu,x) - \sigma(\mu,z)| \le \Gamma(\mu(1))\,|x-z| \qquad \text{and} \qquad \sigma(\mu,x) \le \Gamma(\mu(1)).$$

Therefore

$$|\sigma(\mu,x)^{-2} - \sigma(\mu,z)^{-2}| = \frac{|\sigma(\mu,x)^2 - \sigma(\mu,z)^2|}{\sigma(\mu,x)^2\,\sigma(\mu,z)^2} \le \varepsilon^{-4}\,|\sigma(\mu,x) + \sigma(\mu,z)|\,|\sigma(\mu,x) - \sigma(\mu,z)| \le 2\,\varepsilon^{-4}\,\Gamma(\mu(1))^2\,|x-z|,$$

where we used $\sigma \ge \varepsilon$. Setting $\Xi(\cdot) = 2\,\Gamma(\cdot)^2\,\varepsilon^{-4}$ we obtain (3.30). □

Proposition 3.16. (a) The map $(t,a) \mapsto L^a_t(X)$ is $\mathbb P$-a.s. continuous.

(b) Suppose that for each $z \in \mathbb R$, $s \mapsto \sigma^{-2}(X_s,z)$ is a semimartingale with canonical decomposition $M^z + A^z$ such that for any $t \ge 0$ and any $k \ge 1$,

$$\sup_z \Big\{ \mathbb P\big[\langle M^z\rangle_t^{2k}\big] + \mathbb P\Big[\Big(\int_0^t |dA^z_s|\Big)^{2k}\Big] \Big\} < \infty. \tag{3.31}$$

Then there exists a version of $(t,a,\omega) \mapsto L^a_t(X)$ which is (jointly) $\xi$-Hölder continuous in $(t,a)$ for any $\xi < 1/2$, $\mathbb P$-a.s.

Remark 3.17. Condition (3.31) holds for all the examples in Chapter 4 of Perkins (1995, p. 49), given some additional smoothness of the coefficients. We now show one method to check it; for simplicity we concentrate on a particular example. Suppose $\gamma = 1$. Suppose also that there are positive constants $K$ and $l$ such that $b(\mu,x) \le K(1 + \mu(1)^l)$, and let $a : \mathbb R \to \mathbb R_+$ be bounded and continuous with two bounded continuous derivatives. Assume that $K$ also bounds $a$ and its derivatives. Fix $\varepsilon > 0$ and define

$$\sigma(\mu,x) = \varepsilon + \int a(x-z)\,\mu(dz), \qquad \forall \mu \in M_F(\mathbb R),\ \forall x \in \mathbb R.$$

We claim that with this choice of $(\sigma, b, \gamma)$ the hypothesis of Proposition 3.16(b) is satisfied. We showed in Chapter 1 that $\sigma(\mu,x) - \varepsilon$ satisfies (3.5) and (3.3); it follows immediately that $\sigma(\mu,x)$ also satisfies (3.5) and (3.3).
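The Lipschitz bound of Lemma 3.15 for coefficients of the form $\sigma(\mu,x) = \varepsilon + \int a(x-z)\,\mu(dz)$ can be checked numerically for an atomic measure $\mu$. A small sketch with the illustrative kernel $a(y) = e^{-y^2}$ (bounded, with two bounded derivatives and Lipschitz constant at most $1$); all parameter values here are ours, not the text's.

```python
import numpy as np

def sigma(atoms, weights, x, eps=0.5):
    """sigma(mu, x) = eps + sum_i w_i * a(x - z_i) for mu = sum_i w_i delta_{z_i},
    with the kernel a(y) = exp(-y**2)."""
    return eps + np.sum(weights * np.exp(-(x - atoms) ** 2))

rng = np.random.default_rng(2)
atoms = rng.standard_normal(50)      # atom locations z_i
weights = rng.random(50) / 50        # masses w_i; total mass mu(1) = weights.sum()
eps = 0.5
Gamma = eps + weights.sum()          # bounds sigma and its spatial Lipschitz constant
Xi = 2 * Gamma ** 2 / eps ** 4       # the constant from the proof of Lemma 3.15

xs = np.linspace(-3.0, 3.0, 200)
inv_sq = np.array([sigma(atoms, weights, x, eps) ** -2 for x in xs])
# Check |sigma^{-2}(mu, x) - sigma^{-2}(mu, z)| <= Xi(mu(1)) |x - z| on the grid:
lipschitz_ok = bool(np.all(np.abs(np.diff(inv_sq)) <= Xi * np.diff(xs)))
```

Since $\sigma \ge \varepsilon$, the values of $\sigma^{-2}$ stay below $\varepsilon^{-2}$, and the grid increments respect the bound (3.30) with $\Xi(m) = 2(\varepsilon + m)^2 \varepsilon^{-4}$.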
Theorem 5.6 of Perkins (1995) implies that $X$ solves the martingale problem $(MP)$: for all $\psi\in C_b^2(\mathbb{R})$,
$$Z_t(\psi)=X_t(\psi)-X_0(\psi)-\int_0^t\int\Big[\psi'(x)\,b(X_s,x)+\tfrac12\,\sigma(X_s,x)^2\,\psi''(x)\Big]\,X_s(dx)\,ds$$
is a continuous square-integrable martingale such that
$$\langle Z(\psi)\rangle_t=\int_0^t X_s(\psi^2)\,ds\qquad\forall\,t\ge 0\ \text{a.s.}$$
We write $a^z(x)=a(x-z)$. From the above martingale problem we see that
$$X_t(a^z)=X_0(a^z)+\int_0^t\int\Big[a^z_x(x)\,b(X_s,x)+\tfrac12\,\sigma(X_s,x)^2\,a^z_{xx}(x)\Big]\,X_s(dx)\,ds+Z_t(a^z).$$
Applying Itô's lemma we obtain
$$\sigma^{-2}(X_t,z)=\frac{1}{(\varepsilon+X_t(a^z))^{2}}=\frac{1}{(\varepsilon+X_0(a^z))^{2}}-\int_0^t\frac{2\,dZ_s(a^z)}{(\varepsilon+X_s(a^z))^{3}}$$
$$\qquad-\int_0^t\frac{2}{(\varepsilon+X_s(a^z))^{3}}\int\Big[a^z_x(x)\,b(X_s,x)+\tfrac12\,\sigma(X_s,x)^2\,a^z_{xx}(x)\Big]\,X_s(dx)\,ds+\int_0^t\frac{3\,X_s\big((a^z)^2\big)}{(\varepsilon+X_s(a^z))^{4}}\,ds$$
$$=:\sigma^{-2}(X_0,z)+M^z_t+A^z_t.$$
We compute
$$\sup_z P\big[\langle M^z\rangle_t^{\,k}\big]=\sup_z P\Big[\Big(\int_0^t\frac{4\,X_s\big((a^z)^2\big)}{(\varepsilon+X_s(a^z))^{6}}\,ds\Big)^{k}\Big]\le 4^{k}\varepsilon^{-6k}K^{2k}\,t^{k-1}\int_0^t P\big[X_s(1)^{k}\big]\,ds<\infty$$
(by Proposition 2.21). Similarly,
$$P\Big[\Big(\int_0^t|dA^z_s|\Big)^{2k}\Big]\le 2^{2k}P\Big[\Big(\int_0^t\varepsilon^{-3}K X_s(1)\big(2K(1+X_s(1)^{q})+2\varepsilon^2+2K^{2}X_s(1)^{2}\big)\,ds\Big)^{2k}\Big]+2^{2k}P\Big[\Big(\int_0^t 3\,\varepsilon^{-4}K^{2}X_s(1)\,ds\Big)^{2k}\Big]$$
$$\le C(K,k,\varepsilon,t)<\infty\qquad\text{(by Proposition 2.21)}.$$
This concludes the proof of the claim. The same method can be employed to show that more general coefficients $\sigma$ satisfy (3.31). $\square$

Proof of Proposition 3.16. (a) Notice that the Lipschitz condition (3.5) and the continuity of $s\mapsto X_s$ ensure the continuity of $(r,x)\mapsto\sigma(X_r,x)$. Therefore we can fix $\omega$ in a set of probability one such that the maps $(t,x,\omega)\mapsto\ell^x_t$ and $(r,x,\omega)\mapsto\sigma^{-2}(X_r,x)$ are continuous. Note that
$$\big|L^x_s(X)-L^z_s(X)\big|=\Big|\int_0^s\sigma^{-2}(X_r,x)\,d\ell^x_r-\int_0^s\sigma^{-2}(X_r,z)\,d\ell^z_r\Big|$$
$$\le \int_0^s\big|\sigma^{-2}(X_r,x)-\sigma^{-2}(X_r,z)\big|\,d\ell^x_r+\Big|\int_0^s\sigma^{-2}(X_r,z)\,d\ell^x_r-\int_0^s\sigma^{-2}(X_r,z)\,d\ell^z_r\Big|=:J_1+J_2.$$
We estimate
$$J_1\le \Xi\big(X^*_s(1)\big)\,|x-z|\,\ell^x_s\qquad\text{(by Lemma 3.15)}.$$
Observe that $\ell^x$ converges weakly to $\ell^z$ as $x\to z$ (these are finite Borel measures on $[0,s]$). Recall that the map $r\mapsto\sigma^{-2}(X_r,z)$ is continuous.
Therefore $\int_0^s\sigma^{-2}(X_r,z)\,d\ell^x_r\to\int_0^s\sigma^{-2}(X_r,z)\,d\ell^z_r$ as $x\to z$. Hence
$$\lim_{x\to z}\big|L^x_s(X)-L^z_s(X)\big|=0.$$
(b) Fix $p\ge 2$. We compute
$$P\Big[\big|L^x_{t\wedge T(n)}(X)-L^z_{t\wedge T(n)}(X)\big|^p\Big]=P\Big[\Big|\int_0^{t\wedge T(n)}\sigma^{-2}(X_s,x)\,d\ell^x_s-\int_0^{t\wedge T(n)}\sigma^{-2}(X_s,z)\,d\ell^z_s\Big|^p\Big]$$
$$\le 2^{p-1}P\Big[\Big|\int_0^{t\wedge T(n)}\big(\sigma^{-2}(X_s,x)-\sigma^{-2}(X_s,z)\big)\,d\ell^x_s\Big|^p\Big]+2^{p-1}P\Big[\Big|\int_0^{t\wedge T(n)}\sigma^{-2}(X_s,z)\,d\big(\ell^x_s-\ell^z_s\big)\Big|^p\Big]=:I_1+I_2.$$
Now,
$$I_1\le 2^{p-1}|x-z|^p\,P\Big[\Xi\big(X^*_{t\wedge T(n)}(1)\big)^p\big(\ell^x_{t\wedge T(n)}\big)^p\Big]\qquad\text{(by Lemma 3.15)}$$
$$\le 2^{p-1}|x-z|^p\,\Xi\big(\theta(n,N)\big)^p\,P\Big[\big(\ell^x_{t\wedge T(n)}\big)^p\Big]\qquad\text{(by Lemma 3.7)}$$
$$=2^{p-1}|x-z|^p\,\Xi\big(\theta(n,N)\big)^p\,c_{3.13.1}(N,n,p)\qquad\text{(by Lemma 3.13)}$$
$$=:c_{3.16.1}(n,N,p)\,|x-z|^{p}. \tag{3.32}$$
To estimate $I_2$ we need the fact that $\sigma^{-2}(X_s,z)=M^z_s+A^z_s$ is a semimartingale. Integrating by parts (and since $\ell^x_0=\ell^z_0=0$ and $\ell^x-\ell^z$ has finite variation, so that $\langle M^z,\ell^x-\ell^z\rangle=0$),
$$I_2=2^{p-1}P\Big[\Big|\sigma^{-2}\big(X_{t\wedge T(n)},z\big)\big(\ell^x_{t\wedge T(n)}-\ell^z_{t\wedge T(n)}\big)-\int_0^{t\wedge T(n)}\big(\ell^x_s-\ell^z_s\big)\,dM^z_s-\int_0^{t\wedge T(n)}\big(\ell^x_s-\ell^z_s\big)\,dA^z_s\Big|^p\Big]$$
$$\le 6^{p-1}\varepsilon^{-2p}\,P\Big[\sup_{s\le t\wedge T(n)}\big|\ell^x_s-\ell^z_s\big|^p\Big]+6^{p-1}c_p\,P\Big[\Big(\int_0^{t\wedge T(n)}\big(\ell^x_s-\ell^z_s\big)^2\,d\langle M^z\rangle_s\Big)^{p/2}\Big]$$
$$\qquad+6^{p-1}P\Big[\sup_{s\le t\wedge T(n)}\big|\ell^x_s-\ell^z_s\big|^p\Big(\int_0^{t\wedge T(n)}|dA^z_s|\Big)^{p}\Big]\qquad\text{(by Burkholder)}$$
$$\le 6^{p-1}\varepsilon^{-2p}c_{3.13.2}(N,n,p)\,|x-z|^{p/2}+6^{p-1}(c_p+1)\,P\Big[\sup_{s\le t\wedge T(n)}\big|\ell^x_s-\ell^z_s\big|^{2p}\Big]^{1/2}c_{3.31.1}(p,N)^{1/2} \tag{3.33}$$
$$\le 6^{p-1}\varepsilon^{-2p}c_{3.13.2}(N,n,p)\,|x-z|^{p/2}+6^{p-1}(c_p+1)\,c_{3.13.2}(N,n,2p)^{1/2}\,c_{3.31.1}(p,N)^{1/2}\,|x-z|^{p/2}\quad\text{(by Schwarz, Lemma 3.13 and (3.31))}$$
$$=:c_{3.16.2}(n,N,p)\,|x-z|^{p/2}. \tag{3.34}$$
From (3.32) and (3.33) it follows that there is a constant $c_{3.16.3}(n,N,p)$ such that
$$P\Big[\big|L^x_{t\wedge T(n)}(X)-L^z_{t\wedge T(n)}(X)\big|^p\Big]\le c_{3.16.3}(n,N,p)\,|x-z|^{p/2}. \tag{3.35}$$
Without loss of generality assume $s\le t$. We estimate
$$P\Big[\big|L^x_{t\wedge T(n)}(X)-L^x_{s\wedge T(n)}(X)\big|^p\Big]=P\Big[\Big|\int_{s\wedge T(n)}^{t\wedge T(n)}\sigma^{-2}(X_r,x)\,d\ell^x_r\Big|^p\Big]\le \varepsilon^{-2p}\,P\Big[\big|\ell^x_{t\wedge T(n)}-\ell^x_{s\wedge T(n)}\big|^p\Big]$$
$$\le \varepsilon^{-2p}\,c_{3.13.3}(n,N,p)\,|t-s|^{p/2}\qquad\text{(by Lemma 3.13)}\ =:\ c_{3.16.4}(n,N,p)\,|t-s|^{p/2}. \tag{3.36}$$
We shall prove the existence of a continuous modification of $L^{\cdot}_{\cdot\wedge T(n)}(X)$ by using Lemma 2.43 (Kolmogorov's criterion). According to the latter, it suffices to show that
$$P\Big[\big|L^x_{t\wedge T(n)}-L^z_{s\wedge T(n)}\big|^{\alpha}\Big]\le C\big[(t-s)^{1+\beta}+|x-z|^{1+\beta}\big] \tag{3.37}$$
for some positive constants $\alpha,\beta,C$. Set $C=2^{\alpha}\big(c_{3.16.3}(n,N,p)\vee c_{3.16.4}(n,N,p)\big)$. Observe that
$$P\Big[\big|L^x_{t\wedge T(n)}-L^z_{s\wedge T(n)}\big|^{\alpha}\Big]\le 2^{\alpha-1}P\Big[\big|L^x_{t\wedge T(n)}-L^z_{t\wedge T(n)}\big|^{\alpha}\Big]+2^{\alpha-1}P\Big[\big|L^z_{t\wedge T(n)}-L^z_{s\wedge T(n)}\big|^{\alpha}\Big]\le C\big[(t-s)^{\alpha/2}+|x-z|^{\alpha/2}\big] \tag{3.38}$$
(by (3.35) and (3.36)). Realizing that (3.38) is of the type (3.37) when $\alpha>4$, we obtain a continuous version of $L^{\cdot}_{\cdot\wedge T(n)}(X)$. To finish the demonstration note that $L^{\cdot}_{\cdot}(X)=L^{\cdot}_{\cdot\wedge T(n)}(X)$ on $\{T(n)>N\}$ and $T(n)\uparrow\infty$ a.s. Therefore $L^{\cdot}_{\cdot}(X)$ also has a jointly Hölder continuous version of order $\xi$ for any $\xi<1/2$. $\square$

Our next step is to verify the occupation times formula (3.2).

Lemma 3.18. For any nonnegative Borel function $\phi$,
$$\int\phi(a)\,\ell^a_t\,da=\int_0^t\int\phi(a)\,\sigma^2(X_s,a)\,X_s(da)\,ds\qquad\text{a.s.} \tag{3.39}$$

Proof. To establish (3.39) it suffices to show that for every $n\ge 0$ and any nonnegative Borel function $\phi$ in a countable family of functions with compact support,
$$\int\phi(a)\,\ell^a_{t\wedge T(n)}\,da=\int_0^{t\wedge T(n)}\int\phi(a)\,\sigma^2(X_s,a)\,X_s(da)\,ds\qquad\text{a.s.} \tag{3.40}$$
The null set in (3.40) depends only on $\phi$ and $t$. But there are only countably many $\phi$'s and by Corollary 3.14 $\ell$ is continuous, so we may take the null set in (3.40) independent of $t$ and $\phi$.

Recall that
$$\ell^a_{t\wedge T(n)}=\int \ell^a_{t\wedge T(n)}(y)\,H_{t\wedge T(n)}(dy)-\int_0^{t\wedge T(n)}\int \ell^a_s(y)\,Z(ds,dy). \tag{3.41}$$
We multiply both sides of (3.41) by $\phi(a)$ and integrate:
$$\int\phi(a)\,\ell^a_{t\wedge T(n)}\,da=\int\phi(a)\int \ell^a_{t\wedge T(n)}(y)\,H_{t\wedge T(n)}(dy)\,da-\int\phi(a)\int_0^{t\wedge T(n)}\int \ell^a_s(y)\,Z(ds,dy)\,da. \tag{3.42}$$
Now,
$$\int\phi(a)\int \ell^a_{t\wedge T(n)}(y)\,H_{t\wedge T(n)}(dy)\,da=\int\int\phi(a)\,\ell^a_{t\wedge T(n)}(y)\,da\,H_{t\wedge T(n)}(dy)\qquad\text{(by Fubini)}$$
$$=\int\int_0^{t\wedge T(n)}\phi(Y_s)\,\sigma^2(X_s,Y_s)\,ds\,H_{t\wedge T(n)}(dy)\qquad\text{(by the density of occupation formula)}.$$
We can apply Itô's lemma for historical integrals to this last $H$-integral to obtain
$$\int\int_0^{t\wedge T(n)}\phi(Y_s)\,\sigma^2(X_s,Y_s)\,ds\,H_{t\wedge T(n)}(dy)=\int_0^{t\wedge T(n)}\int\int_0^{s}\phi(Y_r)\,\sigma^2(X_r,Y_r)\,dr\,Z(ds,dy)+\int_0^{t\wedge T(n)}\int\phi(Y_s)\,\sigma^2(X_s,Y_s)\,H_s(dy)\,ds.$$
We claim that we can use Fubini's theorem for stochastic integrals (Theorem 2.6 of Walsh (1986)) to interchange the order of integration in
$$\int_0^{t\wedge T(n)}\int\int\phi(a)\,\ell^a_s(y)\,da\,Z(ds,dy)=\int\phi(a)\int_0^{t\wedge T(n)}\int \ell^a_s(y)\,Z(ds,dy)\,da.$$
Hence
$$\int\phi(a)\int \ell^a_{t\wedge T(n)}(y)\,H_{t\wedge T(n)}(dy)\,da=\int\phi(a)\int_0^{t\wedge T(n)}\int \ell^a_{s}(y)\,Z(ds,dy)\,da+\int_0^{t\wedge T(n)}\int\phi(Y_s)\,\sigma^2(X_s,Y_s)\,H_s(dy)\,ds. \tag{3.43}$$
We obtain (3.40) by substituting (3.43) into (3.42) and using (HSE). It remains to show that Fubini's theorem may be applied. We set $\mu(da):=\phi(a)\,da$. We need to show that
$$P\Big[\int\int_0^{t\wedge T(n)}\int\big(\ell^a_{s\wedge T(n)}\big)^2\,H_{s\wedge T(n)}(dy)\,ds\,\mu(da)\Big]<\infty.$$
This is easy to check:
$$P\Big[\int\int_0^{t\wedge T(n)}\int\big(\ell^a_{s\wedge T(n)}\big)^2\,H_{s\wedge T(n)}(dy)\,ds\,\mu(da)\Big]\le \int\int_0^{t}P\Big[H_{s\wedge T(n)}\big(\big(\ell^a_{s\wedge T(n)}\big)^2\big)\Big]\,ds\,\mu(da)\le t\,\mu(\mathbb{R})\,c_{3.12.1}(n,N,2)<\infty$$
(by Lemma 3.12). Thus (3.39) holds. $\square$

Proposition 3.19. $L^a_t(X)$ satisfies the following density of occupation formula: for any nonnegative Borel function $\phi$,
$$\int_0^t X_s(\phi)\,ds=\int\phi(a)\,L^a_t(X)\,da\qquad\text{a.s.} \tag{3.44}$$

Proof. By considering product functions of the form $\phi(a)\,\mathbf{1}_{A}(s)\,\mathbf{1}_{F}(\omega)$, it follows easily from a monotone class argument and (3.39) that
$$\int\int\psi(s,a,\omega)\,d\ell^a_s\,da=\int_0^t\int\sigma^2(X_s,a)\,\psi(s,a,\omega)\,X_s(da)\,ds \tag{3.45}$$
for any measurable $\psi\colon\mathbb{R}_+\times\mathbb{R}\times\Omega\to\mathbb{R}_+$.
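In the constant-coefficient case $\sigma\equiv 1$, the identity (3.44) says that $L^a_t$ is simply the occupation density of the measure-valued path, and for a discretized path it reduces to Fubini's theorem plus binning error. The following sketch (the particle discretization, grid, and test function are assumptions made here for illustration, not from the thesis) checks this numerically.

```python
import numpy as np

# Toy check (not from the thesis) of the occupation identity
#   int_0^t X_s(phi) ds  ==  int phi(a) L_t^a da   (sigma == 1),
# where X_s is the empirical measure of hypothetical random walkers and
# L_t^a is approximated by time-summed histograms on a grid.
rng = np.random.default_rng(1)
n_particles, n_steps, dt = 400, 200, 0.01
steps = rng.normal(scale=np.sqrt(dt), size=(n_steps, n_particles))
paths = np.cumsum(steps, axis=0)      # particle positions at times dt, 2*dt, ...

edges = np.linspace(-5.0, 5.0, 201)   # spatial grid for the occupation density
da = edges[1] - edges[0]
centers = 0.5 * (edges[:-1] + edges[1:])

L = np.zeros(len(centers))            # approximate a -> L_t^a
for s in range(n_steps):
    hist, _ = np.histogram(paths[s], bins=edges)
    L += dt * hist / (n_particles * da)    # each X_s is normalized to mass 1

phi = lambda a: np.exp(-a ** 2)
lhs = sum(dt * phi(paths[s]).mean() for s in range(n_steps))   # int X_s(phi) ds
rhs = np.sum(phi(centers) * L * da)                            # int phi(a) L_t^a da
assert abs(lhs - rhs) < 1e-2
print("occupation identity holds up to discretization error")
```

The interacting case only rescales this picture by $\sigma^2(X_s,a)$, which is exactly the content of the substitution $\psi(s,a,\omega)=\sigma^{-2}(X_s,a)\phi(a)$ used to complete the proof.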
Substituting $\psi(s,a,\omega)=\sigma^{-2}(X_s,a)\,\phi(a)$ into (3.45) we get
$$\int_0^t\int\phi(a)\,X_s(da)\,ds=\int\phi(a)\int_0^t\sigma^{-2}(X_s,a)\,d\ell^a_s\,da=\int\phi(a)\,L^a_t(X)\,da\qquad\text{(by (3.18))}.\ \square$$

Proof of Theorem 3.4. Propositions 3.16 and 3.19 show that the family of random variables $(L^a_t(X))$ satisfies conditions (i) and (ii) of Definition 3.1. $\square$

Bibliography

[1] Adler, R.J. (1992). Superprocess local and intersection local times and their corresponding particle pictures, in Seminar on Stochastic Processes 1992, Birkhäuser, Boston.
[2] Barlow, M.T., Yor, M. (1981). Semimartingale inequalities and local times, Z. Wahrscheinlichkeitstheorie verw. Gebiete 55, 237-254.
[3] Dawson, D.A. (1993). Measure-valued Markov processes, École d'été de Probabilités de Saint-Flour 1991, Lect. Notes in Math. 1541, Springer, Berlin.
[4] Dawson, D.A., Gärtner, J. (1987). Large deviations from the McKean-Vlasov limit for weakly interacting diffusions, Stochastics 20, 247-308.
[5] Dawson, D.A., Perkins, E.A. (1991). Historical processes, Mem. Amer. Math. Soc. 454.
[6] Dawson, D.A., Perkins, E.A. (1996). Measure-valued processes and stochastic partial differential equations. Preprint.
[7] Dellacherie, C., Meyer, P.A. (1982). Probabilities and Potential B, North-Holland Mathematical Studies No. 72, North-Holland, Amsterdam.
[8] Durrett, R. (1985). Particle systems, random media, large deviations, Contemporary Math. 41, Amer. Math. Soc., Providence, R.I.
[9] Ethier, S.N., Kurtz, T.G. (1986). Markov Processes: Characterization and Convergence, Wiley, New York.
[10] Hartl, D.L., Clark, A.G. (1989). Principles of Population Genetics, second edition, Sinauer Associates, Sunderland, Massachusetts.
[11] Jacod, J., Shiryaev, A.N. (1987). Limit Theorems for Stochastic Processes, Springer-Verlag, New York.
[12] Konno, N., Shiga, T. (1988). Stochastic partial differential equations for some measure-valued diffusions, Probab. Th. Rel. Fields 79, 201-225.
[13] Ladyženskaja, O.A., Solonnikov, V.A., Ural'ceva, N.N. (1968). Linear and Quasilinear Equations of Parabolic Type, Transl. Math. Monographs Vol. 23, Amer. Math. Soc.
[14] Le Gall, J.F., Perkins, E.A., Taylor, S.J. (1995). The packing measure of the support of super-Brownian motion, to appear in Stoch. Process. Appl.
[15] Méléard, S., Roelly, S. (1990). Interacting measure branching processes and the associated partial differential equations, Stochastics and Stochastic Reports.
[16] Perkins, E.A. (1988). A space-time property of a class of measure-valued branching diffusions, Trans. Amer. Math. Soc. 305, 743-795.
[17] Perkins, E.A. (1993). Measure-valued branching diffusions with spatial interactions, Probab. Th. Rel. Fields 94, 189-245.
[18] Perkins, E.A. (1995). On the martingale problem for interactive measure-valued branching diffusions, Mem. Amer. Math. Soc. No. 549, 1-89.
[19] Reimers, M. (1989). One-dimensional stochastic partial differential equations and the branching measure diffusion, Probab. Th. Rel. Fields 81, 319-340.
[20] Revuz, D., Yor, M. (1991). Continuous Martingales and Brownian Motion, Springer-Verlag, New York.
[21] Rogers, L.C.G., Williams, D. (1986). Diffusions, Markov Processes and Martingales, Vol. 2, Wiley, New York.
[22] Shiga, T. (1994). Two contrasting properties of solutions for one-dimensional stochastic partial differential equations, Canad. J. Math. 46, no. 2, 415-437.
[23] Sugitani, S. (1988). Some properties for the measure-valued branching diffusion process, J. Math. Soc. Japan 41, 437-462.
[24] Sznitman, A.-S. (1991). Topics in the propagation of chaos, École d'été de Probabilités de Saint-Flour, Lect. Notes in Math. 1464.
[25] Walsh, J.B. (1986). An introduction to stochastic partial differential equations, École d'été de Probabilités de Saint-Flour, Lect. Notes in Math. 1180.
[26] Yor, M. (1978). Sur la continuité des temps locaux associés à certaines semi-martingales, in Temps Locaux, Astérisque 52-53.
[27] Zvonkin, A.K. (1974). A transformation of the phase space of a diffusion process that removes the drift, Math. USSR Sbornik 22, no. 1, 129-149.
Thesis/Dissertation (Text), 1996-11. DOI: 10.14288/1.0079976. Language: eng.
Path properties and convergence of interacting superprocesses. Graduate, Mathematics.
Vancouver : University of British Columbia Library. http://hdl.handle.net/2429/6160
For non-commercial purposes only, such as research, private study and education. Additional conditions apply; see Terms of Use: https://open.library.ubc.ca/terms_of_use