UBC Theses and Dissertations

Path properties and convergence of interacting superprocesses (1996)

PATH PROPERTIES AND CONVERGENCE OF INTERACTING SUPERPROCESSES

by

MIGUEL MARTIN LOPEZ
M.Sc. (Mathematics), The University of British Columbia

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES, Department of Mathematics

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
July 1996
© Miguel M. Lopez, 1996

In presenting this thesis in partial fulfillment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Mathematics
The University of British Columbia
Vancouver, Canada

Abstract

Dawson-Watanabe superprocesses are stochastic models for populations undergoing spatial migration and random reproduction. Recently E. Perkins (1993, 1995) introduced an infinite-dimensional stochastic calculus in order to characterize superprocesses in which both the reproduction mechanism and the spatial motion of each individual are allowed to depend on the state of the entire population, i.e. superprocesses with interactions. This work consists of three independent chapters. In the first chapter we show that interactive superprocesses arise as diffusion approximations of interacting particle systems. We construct an approximating system of interacting particles and show that it converges (weakly) to a limit which is exactly the superprocess with interactions. This result depends very intimately upon the structure of the particle systems.
In the second chapter we study some path properties of a class of one-dimensional interactive superprocesses. These are random measures on the real line that evolve in time. We employ the aforementioned stochastic calculus to show that they have a density with respect to Lebesgue measure. We also show that this density function is jointly continuous in space and time, and we compute its modulus of continuity. Along the way we develop a technique that can be used to solve some related problems. As an application we investigate path properties of a one-dimensional super-Brownian motion in a random environment. In the third chapter we investigate the local time of a very general class of one-dimensional interactive superprocesses. We apply Perkins' stochastic calculus to show that the local time exists and possesses a jointly continuous version.

Table of Contents

Abstract ii
Table of Contents iii
Acknowledgement iv
Chapter 0. Introduction 1
  0.1 Review 2
  0.2 Summary of the Main Results 8
Chapter 1. Weak Convergence of Interacting Branching Particle Systems 11
  1.1 Introduction 11
  1.2 Interactive Branching Particle Systems 14
    1.2.1 The Particle Picture 14
    1.2.2 An Equation for K^N 23
  1.3 Tightness of the Normalized Branching Particle Systems 29
    1.3.1 Convergence of the Projections 30
    1.3.2 Compact Containment Condition 33
    1.3.3 Relative Compactness of (K^N) 36
  1.4 Identification and Uniqueness of the Limit 36
Chapter 2. Path Properties of a One-Dimensional Superdiffusion with Interactions 41
  2.1 Introduction and Statement of Results 41
    2.1.1 Main Result 41
    2.1.2 Historical Stochastic Calculus 46
  2.2 Some Auxiliary Processes 50
  2.3 A Generalized Green's Function Representation for X 61
  2.4 Proof of the Main Result 63
  2.5 Examples: Super-Brownian Motions with Singular Interactions 75
Chapter 3. Local Times for One-Dimensional Interacting Superprocesses 80
  3.1 Introduction and Statement of Results 80
  3.2 Existence and Regularity of Local Times 84
Bibliography 100

Acknowledgement

I would like to thank all the individuals and institutions that made this journey possible. My most sincere thanks go to my advisor, Professor Ed Perkins. It has been a privilege to work under his direction. It is also a pleasure to thank the probability group at UBC. They and their numerous visitors were a constant source of inspiration and high-quality mathematics. In particular I acknowledge some useful conversations with Professor D. Dawson. I should like to express my gratitude to the Mathematics Department at Universidad de los Andes, especially to Professor S. Fajardo. I am indebted to them for showing me the beauty of mathematics and for teaching me the concept of proof. During these last years I have received much encouragement from my family and friends. Thanks to mom and dad for that and for the rest. I also thank professors R. Adler (Chapel Hill), J-F. Le Gall (Paris), T. Lindstrøm (Oslo) and B. Rozovskii (Los Angeles) for their hospitality. My best thanks to Anders Svensson, TeXpert extraordinaire, for showing me the path to computer guruhood. I gratefully acknowledge the financial support of Dr. E. Perkins and The University of British Columbia. Last but not least I thank Laura for her love, support and for putting up with me.

Chapter 0
Introduction

This work is devoted to the study of superprocesses with interactions. Superprocesses (or measure-valued branching diffusions) are measure-valued processes that model populations undergoing random branching and spatial motion. By a population we mean a system involving a number of similar particles. We are interested in the approximations that are possible when the number of particles is large. Consider, for example, a large population of goats.
Individual goats reproduce, die (branching) and wander around (spatial motion). They also interact with each other in many ways. For example, they live in clusters or clans. They also have memory and tend not to return to fields they have grazed until some time has passed. Suppose we are asked to implement a computer simulation of the evolution in time of the population. We are interested not only in the total number of goats, but in their geographical locations as well. To this end, we would place a square grid over the region of interest and assign to each square (or pixel) a height (or color) proportional to the number of goats within. Due to the births and deaths (i.e. branching) and the spatial motion of each animal, the color of each pixel would change as time goes on. We are looking at the evolution in time of the density of goats. Note that the value of such a random process at any given instant is not a number but a colored map. Therefore we must give a mathematical interpretation to "colored map"-valued processes. One way to do so is to regard each map as a measure. This example allows us to see why measure-valued processes may arise naturally and are not a mere technical device. We wish to know if, after appropriate rescaling, the density of goats can be well approximated by some diffusion process. That is, we are after a limit theorem. We quote the great mathematician A. N. Kolmogorov:

The epistemological value of the theory of probability is revealed only by limit theorems. Moreover, without limit theorems it is impossible to understand the real content of the primary concept of all of our sciences - the concept of probability.

Suppose that we have established the fact that as the number of goats goes to infinity a limit process does exist. We call it the goat superprocess. It is then natural to ask, what does it look like? More specifically, do the measures corresponding to the values of the goat superprocess have density functions? If so, are they continuous? Since continuous functions can oscillate wildly, how continuous? Of course, we require a rigorous formulation of all these questions.

The subject of superprocesses is a rapidly developing field. It has been stimulated from several different directions, including population genetics, branching processes, interacting particle systems and stochastic partial differential equations. For example, the so-called Fleming-Viot superprocess is a generalization of the Wright-Fisher model (for random replication with errors) and has been studied in connection with population genetics (Hartl and Clark 1989). Rogers and Williams (1986) wrote in the preface of their book:

Here are some guidelines on what you might move on to when your reading of our book is done. (vi) Measure-valued diffusions, random media, etc. Durrett (1985) and Dawson and Gartner (1987) can be your 'open sesame' to what is sure to be one of the richest of Aladdin's caves.

We believe that the monograph of Dawson (1993) proves that Rogers and Williams were correct. In this thesis we will give a rigorous description of a general class of models that hopefully includes what our intuition says the goat superprocess should be. We will then answer some specific questions about certain subsets of the class of models. In Section 0.1 we survey some of the theory of superprocesses. In Section 0.2 we explain our results.

0.1 Review

In this Section we review the most basic concepts needed to understand both the meaning and relevance of our results. Most of the background information is very recent and known only to a handful of experts. The ideal preparation is furnished by a thorough reading of the papers Perkins (1993, 1995). These in turn can be well understood after reading either of our favorites, Walsh (1986) or Dawson (1993). Our notation is consistent with that of Perkins (1995), and a reader familiar with this material can safely skip the rest of this Section.
We have made an effort to give an intuitive understanding of the ideas presented below. However, many of them are accompanied by heavy technical baggage, and these technicalities are important for a careful examination of this thesis. They cannot be avoided. Some of these ideas may seem obscure at first glance, but they contain large amounts of valuable information. This is hardly a surprise, since we will be studying some fairly complex objects. We encourage the reader to refer to the excellent references when in doubt or need. We assume a basic knowledge of probability theory on the part of the reader. We expect the reader to be acquainted with

1. Martingales, Brownian motion, weak convergence of probability measures on metric spaces, critical branching processes (the standard Galton-Watson process will suffice), the Poisson process.

2. Martingale inequalities (Doob's, Burkholder's and maximal), stopping times, stochastic integration (Ito's lemma), local times (Tanaka's formula).

Most of these topics (certainly those in 1) are covered in a first graduate course in probability theory. Those not familiar with the topics listed in 2 should still be able to understand the statement and meaning of the most important theorems. They should also be able to read the remainder of this Section. The subjects listed in 1 and 2 are the absolute minimum required to understand most of the proofs. Familiarity with superprocesses and/or stochastic partial differential equations (abbreviated SPDE) is highly recommended.

By a measure-valued process we mean a random process whose state space is M_F(E), the space of finite measures on some complete, separable, metrizable topological space (E, B(E)). As usual, B(E) denotes the Borel σ-field, and M_F(E) is endowed with the topology of weak convergence. Superprocesses are related to branching processes, population genetics models, stochastic partial differential equations and interacting particle systems.
The canonical example is d-dimensional super-Brownian motion, which we now describe. Fix a positive integer N, a positive real number γ and a finite measure m on R^d. At time t = 0, N particles are located in R^d with law m(·)/m(R^d). They move independently according to d-dimensional Brownian motions. If a particle is located at position x at time t, then let the probability that it dies before time t + dt be γN dt + o(dt). If it dies, a fair coin is tossed and the particle is replaced by 0 or 2 identical particles situated at the position of death of their parent. The new particles then start undergoing independent Brownian motions and the process continues in the same fashion, with particles moving, dying and branching ad infinitum. In this model we want to keep track of the number of particles as well as their locations. Let I(N,t) be the total number of particles alive at time t, and let Z^i_t, i = 1, ..., I(N,t), label their locations. Consider the rescaled measure-valued process

    X^N(t) = (1/N) Σ_{i=1}^{I(N,t)} δ_{Z^i_t},    (0.1)

where δ_e denotes a unit mass at e. When the number of particles is large, i.e. as N → ∞, the particle system X^N is approximated by a measure-valued diffusion X which we call super-Brownian motion with branching rate γ. In fact, the sequence of probability measures (P(X^N ∈ ·))_N converges weakly to a law P^m on C([0,∞), M_F(R^d)) (Dawson 1993). Following the usual convention, we endow C([0,∞), E), the set of continuous paths t ↦ x_t ∈ E, with the compact-open topology and D([0,∞), E), the space of cadlag paths t ↦ x_t ∈ E, with Skorohod's J_1-topology. Super-Brownian motion X can be characterized through a martingale problem (Dawson 1993). A typical example of a martingale problem is Levy's characterization of Brownian motion. It says that if an R^d-valued random process B is a continuous martingale with square function ⟨B^i, B^j⟩_t = t δ_ij (here δ_ij denotes Kronecker's delta), then B is a d-dimensional Brownian motion.
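The particle construction behind (0.1) is straightforward to simulate. The sketch below is a toy one-dimensional, discrete-time version, not part of the thesis; all function names and parameter choices are our own. Each particle takes a Gaussian step of variance dt, and in each step dies with probability γN dt, leaving 0 or 2 offspring with equal probability.

```python
import math
import random

def simulate_branching_bm(n_init=200, gamma=1.0, t_max=1.0, dt=0.001, seed=1):
    """Toy version of the rescaled particle system behind (0.1): particles
    perform 1-d Brownian increments and branch critically at rate gamma*N.
    Returns the list of particle positions alive at time t_max."""
    rng = random.Random(seed)
    particles = [0.0] * n_init            # all particles start at the origin
    for _ in range(int(round(t_max / dt))):
        next_gen = []
        for x in particles:
            x += math.sqrt(dt) * rng.gauss(0.0, 1.0)   # Brownian increment
            if rng.random() < gamma * n_init * dt:     # a branching event
                if rng.random() < 0.5:
                    next_gen.extend([x, x])            # two offspring
                # else: the particle dies without offspring
            else:
                next_gen.append(x)
        particles = next_gen
        if not particles:                              # population extinct
            break
    return particles

positions = simulate_branching_bm()
rescaled_total_mass = len(positions) / 200.0           # X^N_t(R) in (0.1)
```

Because the branching is critical (mean one offspring), the rescaled total mass behaves approximately as a nonnegative martingale, consistent with the limiting super-Brownian motion.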
The following is a martingale problem that uniquely characterizes the law P^m. If μ is a measure we write μ(φ) = ∫ φ dμ.

(MP)^m_{Δ,γ}: There is a continuous M_F(R^d)-valued, adapted process X_t defined on a filtered probability space (Ω, F, F_t, P) such that

(i) P(X_0 = m) = 1;

(ii) if φ ∈ C_b^2(R^d), then

    X_t(φ) = X_0(φ) + ∫_0^t X_s(Δφ/2) ds + Z_t(φ),    (0.2)

where Z_t(φ) is a continuous square-integrable (F_t)-martingale null at zero with square function

    ⟨Z(φ)⟩_t = ∫_0^t X_s(γφ^2) ds.

Moreover the law Q^m of the canonical process X_t(ω) = ω_t on Ω = C([0,∞), M_F(R^d)) is uniquely determined by (i) and (ii) (Dawson 1993).

Recall that there are two independent sources of noise, namely the Brownian noise and the branching noise. For each test function φ ∈ C_b^2(R^d), (0.2) provides the semimartingale decomposition of (X_t(φ), t ≥ 0). The martingale part Z_t(φ) comes entirely from the branching, while the drift part ∫_0^t X_s(Δφ/2) ds comes exclusively from the spatial motions. Note that if there is no branching (i.e. γ = 0), then (0.2) becomes the deterministic equation

    X_t(φ) = X_0(φ) + ∫_0^t X_s(Δφ/2) ds,

a weak form of the heat equation. This is in fact a version of the law of large numbers. On the other hand, it can be shown that if we set φ = 1, that is, if we disregard the spatial motions and look only at the total mass, (MP)^m_{Δ,γ} gives a continuous-time, continuous-state-space branching process studied by Feller (1951). This process is a diffusion approximation for the classical Bienayme-Galton-Watson critical branching process. The construction of super-Brownian motion can be greatly generalized to yield more general superprocesses. One can tinker with the branching mechanisms or the space motions. For example, it can be proved that the recipe used to obtain super-Brownian motion may be adjusted to obtain super-Feller processes: just replace the Brownian motions by a Feller process with infinitesimal generator A and locally compact state space.
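Setting φ = 1 in (0.2) removes the spatial term, and the total mass M_t = X_t(1) becomes Feller's branching diffusion dM_t = √(γM_t) dB_t. A naive Euler discretisation (our own illustrative sketch, not the thesis' construction) exhibits its two basic features: nonnegativity and absorption at 0.

```python
import math
import random

def feller_mass_path(m0=1.0, gamma=1.0, t_max=2.0, dt=0.001, seed=7):
    """Euler scheme for dM_t = sqrt(gamma * M_t) dB_t, the total-mass
    diffusion obtained by taking phi = 1 in the martingale problem.
    The state 0 (extinction) is absorbing; the scheme's small negative
    overshoots are clipped back to 0."""
    rng = random.Random(seed)
    m = m0
    path = [m]
    for _ in range(int(round(t_max / dt))):
        if m > 0.0:
            m += math.sqrt(gamma * m * dt) * rng.gauss(0.0, 1.0)
            m = max(m, 0.0)
        path.append(m)
    return path

mass_path = feller_mass_path()
```

Since the mean offspring number is one, E[M_t] = M_0 for all t, even though each path eventually hits 0 and stays there.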
The result will be a limit superprocess characterized by a martingale problem analogous to (MP)^m_{Δ,γ} but with (A, Domain(A)) instead of (Δ/2, C_b^2(R^d)) (Dawson 1993). When d = 1 we can recast (MP)^m_{Δ,γ} in stochastic partial differential equation (SPDE) form: X_t(dx) = u(t,x) dx, P^m-a.s., where u is the unique (in law) solution of

    ∂u/∂t = (Δ/2)u + √(γu) Ẇ,    (0.3)

and W is a space-time white noise (Reimers 1989, Konno and Shiga 1988). We interpret equation (0.3) in a weak sense, as is usually done in the theory of partial differential equations. That is, we multiply (0.3) by a smooth test function φ with compact support and integrate over space-time:

    ∫ φ(x)u(t,x) dx = ∫ φ(x)u(0,x) dx + ∫_0^t ∫ (Δ/2)u(s,x) φ(x) dx ds + ∫_0^t ∫ √(γ u(s,x)) φ(x) W(dx,ds).

Integrate by parts twice on the r.h.s. to obtain

    X_t(φ) = X_0(φ) + ∫_0^t X_s(Δφ/2) ds + ∫_0^t ∫ √(γ u(s,x)) φ(x) W(dx,ds) = X_0(φ) + ∫_0^t X_s(Δφ/2) ds + Z_t(φ).

Note that

    ⟨Z(φ)⟩_t = ⟨∫_0^· ∫ √(γ u(s,x)) φ(x) W(dx,ds)⟩_t = ∫_0^t ∫ γ u(s,x) φ(x)^2 dx ds = ∫_0^t X_s(γφ^2) ds.

By analogy with the PDE (partial differential equation) setting, in which Ẇ is replaced by a smooth function, it is possible to write a Green's function (or inverse Laplacian) representation of the solution of (0.3):

    u(t,x) = ∫ p(t, x−z) u(0,z) dz + ∫_0^t ∫ p(t−s, x−z) √(γ u(s,z)) W(dz,ds),    (0.4)

where p(t,x) is the one-dimensional Brownian transition density. Equation (0.4) has proved to be extremely useful for computing the moments of u (Konno and Shiga 1988).

The study of super-Brownian motion (among other things) revealed the need to introduce a related object called the historical process or historical super-Brownian motion. While analyzing fine path properties, Dawson and Perkins (1991) realized that the genealogy (or family structure) of the particles is of great importance. We give an informal example to motivate this last statement. Pick a "typical" particle in the population at time t = 1. That is, choose a point x in the set Supp(X_1) (the support of X_1) according to the measure X_1.
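Equation (0.3) can be explored with a naive explicit finite-difference scheme. The sketch below is purely illustrative (the grid, the Dirichlet boundary and the clipping of u at 0 are our own choices, not part of the theory):

```python
import math
import random

def spde_step(u, dx, dt, gamma, rng):
    """One explicit Euler step of du = (1/2) u_xx dt + sqrt(gamma*u) dW.
    The white noise dW is approximated cell by cell with N(0, dt/dx),
    the average of space-time white noise over a cell of width dx."""
    n = len(u)
    new = [0.0] * n                       # Dirichlet 0 boundary conditions
    for i in range(1, n - 1):
        lap = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / dx ** 2
        noise = math.sqrt(gamma * max(u[i], 0.0) * dt / dx) * rng.gauss(0.0, 1.0)
        new[i] = max(u[i] + 0.5 * lap * dt + noise, 0.0)   # densities stay >= 0
    return new

def run_spde(n=50, gamma=1.0, dt=1e-5, steps=200, seed=3):
    dx = 1.0 / n                          # dt/dx**2 = 0.025 keeps Euler stable
    rng = random.Random(seed)
    u = [1.0] * n
    u[0] = u[-1] = 0.0
    for _ in range(steps):
        u = spde_step(u, dx, dt, gamma, rng)
    return u

u_final = run_spde()
```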
Suppose we are investigating the asymptotic behaviour of the total mass in a ball of radius r about x as r → 0 (very useful when studying path properties). To estimate this quantity, trace the trajectory of our particle backwards in time until time t = 0. It turns out that in two or more dimensions "most" of the mass in the ball will come from particles that branched off this trajectory between t = 0 and t = 1. This property stems from the fact that, due to the criticality of the branching, only a finite number of particles alive at time t = 0 have (an infinite number of) descendants alive at time t = 1. This type of argument can be made rigorous, leading to very precise computations of Hausdorff functions, etc. (Perkins 1988).

To describe the historical process, consider the tree of branching Brownian motions described above. Let H^N(t) be the random measure on the space C([0,∞), R^d) which assigns mass N^{-1} to the trajectory of each particle alive at time t. If x^t denotes a path x stopped at time t (x^t(·) = x(t ∧ ·)), then

    H^N(t) = (1/N) Σ_{i=1}^{I(N,t)} δ_{(Z^i)^t}.    (0.5)

Equation (0.5) generalizes (0.1). As N → ∞, P(H^N ∈ ·) converges weakly to a law Q^m on C([0,∞), M_F(C([0,∞), R^d))) (Dawson 1993). The historical process H is the canonical process H_t(ω) = ω_t on Ω = C([0,∞), M_F(C([0,∞), R^d))) under Q^m. It can also be characterized by a martingale problem which generalizes (MP)^m_{Δ,γ}.

From a modelling perspective, it is natural to try to introduce measure-valued processes in which the particles interact. Three of the most important tools for analyzing finite-dimensional diffusions are the change of measure (Cameron-Martin-Girsanov theorem), the martingale problem of Stroock and Varadhan, and Ito's theory of stochastic differential equations.
These three approaches admit highly non-trivial generalizations to the infinite-dimensional setting of superprocesses, thus allowing the introduction of interactions (Dawson (1993) and Perkins (1993, 1995)). Perkins (1993, 1995) carried out a program to construct interactive historical superprocesses K_t(dy) in which a trajectory y^t in a population K_t is subject to a drift b(t, K_t, y), diffusion matrix σ(t, K_t, y) and branching rate γ(t, K_t, y). As an application one can generalize (MP)^m_{Δ,γ} to (MP)_{γ,σ,b}. That is, the Laplacian (infinitesimal generator of Brownian motion) can be replaced by an elliptic operator of the form

    A(t, X_t, x) = ½(σσ*)_{ij}(t, X_t, x) ∂_i∂_j + b_i(t, X_t, x) ∂_i    (sum over repeated indices),

and the constant γ by a function γ(t, X_t, x). These coefficients depend on time, the state of the population and the position of the particle. Of course some constraints are required on these coefficients. A very interesting example of an interactive historical process, due to Adler (Perkins 1995), is the following.

Example. Let p_ε(x) = (2πε)^{-1/2} exp(−x²/(2ε)), ε > 0 (small), and suppose b(t,K,y) and γ(t,K,y) are given by

    b(t,K,y) = ∫_0^t ∫ ∇p_ε(y'_s − y_t) K_s(dy') e^{−λ(t−s)} ds,

    γ(t,K,y) = exp( − ∫_0^t ∫ p_ε(y'_s − y_t) K_s(dy') e^{−λ(t−s)} ds ).

Here the goat-like particles tend to drift away from regions where the population has already grazed (and consumed the resources). They also reproduce at a lower rate in those regions. The parameter λ represents the rate of recovery of the environment. □

Perkins has developed a theory of stochastic integration along Brownian trees (Perkins 1993, 1995) and has used it to characterize a broad class of interactive measure-valued branching diffusions as the unique solution of a stochastic equation driven by a historical Brownian motion. We shall explain Perkins' idea shortly. As a prerequisite, let us recall some facts about stochastic differential equations (SDEs).
Write C^d = C([0,∞), R^d) and C^d_t = σ(y_s : s ≤ t), let S_d be the set of d × d symmetric positive definite matrices, and let

    b(t,y) : [0,∞) × C^d → R^d,    a(t,y) : [0,∞) × C^d → S_d,

be predictable path functionals. The intent is to give meaning to the notion of a locally Gaussian d-dimensional process Y satisfying

    E[Y_{t+dt} − Y_t | Y_s, s ≤ t] = b(t,Y) dt + o(dt),
    E[(Y_{t+dt} − Y_t)^i (Y_{t+dt} − Y_t)^j | Y_s, s ≤ t] = a_{ij}(t,Y) dt + o(dt).    (0.6)

This models the motion of a particle in a velocity field b which is subject to a random thermal motion of covariance a. Ito's brilliant idea was to give meaning to (0.6) by rewriting it in the form

    dY^i_t = b_i(t,Y) dt + Σ_{j=1}^d σ_{ij}(t,Y) dB^j_t,    i = 1, ..., d,

or, for short, dY_t = b(t,Y) dt + σ(t,Y) dB_t, where B is a d-dimensional Brownian motion and σ is the square root of a. The last equation should be interpreted as an integral equation. Ito gave meaning to the integral ∫_0^t σ(s,Y) dB_s, which is named after him. This is not straightforward, since B is of unbounded variation; therefore ∫_0^t σ(s,Y) dB_s cannot be a Stieltjes integral.

A strong solution of the SDE

    Y_t = Y_0 + ∫_0^t b(s,Y) ds + ∫_0^t σ(s,Y) dB_s,    (0.7)

on a given probability space (Ω, F, P), with respect to a fixed Brownian motion B = (B, (F_t)) and initial condition ξ, is a process Y = {Y_t : 0 ≤ t < ∞} with continuous sample paths which satisfies the following properties:

- Y is adapted to the filtration (F_t).
- P[Y_0 = ξ] = 1.
- P[∫_0^t (‖b(s,Y)‖ + ‖σσ*(s,Y)‖) ds < ∞] = 1 for all t > 0 (‖·‖ denotes the Euclidean norm in R^d or R^{d×d}).
- Y_t = Y_0 + ∫_0^t b(s,Y) ds + ∫_0^t σ(s,Y) dB_s holds P-a.s. The second integral on the r.h.s. must be interpreted in the sense of Ito.

If Y is a strong solution of (0.7) there is a (measurable) map h : R^d × C^d → C^d such that Y = h(ξ, B) P-a.s.

We are now in a position to elucidate Perkins' idea. Let

    b : [0,∞) × C([0,∞), M_F(C^d)) × C^d → R^d,
    σ : [0,∞) × C([0,∞), M_F(C^d)) × C^d → R^{d×d},
    γ : [0,∞) × C([0,∞), M_F(C^d)) × C^d → (0,∞).

Recall that H denotes a historical Brownian motion. Intuitively, a typical path y chosen according to H_t is a Brownian path stopped at time t. This is certainly true if H is replaced by H^N; Perkins (1993) found a way to prove it. This allowed him to define an essentially unique version of the Ito integral ∫_0^t σ(s,K,y) dy(s). He then considered the following simultaneous equations

    Y_t = y_0 + ∫_0^t b(s,K,Y) ds + ∫_0^t σ(s,K,Y) dy(s),    t ≥ 0,    (0.8)

    K_t(A) = ∫ 1(Y^t(y) ∈ A) γ(t,K,Y) H_t(dy),    t ≥ 0, A a Borel subset of C^d    (0.9)

(Perkins (1993, 1995)). By (0.8), Y solves an Ito equation along the branch y with drift b and diffusion σ. If the coefficients σ, b, γ are Lipschitz in an appropriate sense, then (0.8) has a unique strong solution of the form Y = h(y_0, y), and we write Y = Y(y) for short. Hence, in principle at least, (0.9) makes sense. γ can still be interpreted as a branching rate, although some additional work is needed (Perkins 1995). If γ = 1, then Y^t is a typical path in the population K_t. Y is an auxiliary process and K is the desired interactive historical process. Other intuitive explanations are found in the introductions to the papers Perkins (1993) and Perkins (1995).

With suitable hypotheses the system (0.8)-(0.9) has a pathwise unique solution (Perkins 1995); these hypotheses are satisfied by the coefficients b, γ in the previous Example and by σ = Id. If b : R^d → R^d and σ : R^d → R^{d×d} are Lipschitz continuous (with respect to the Euclidean norm), γ = 1, and b(t,K,y) and σ(t,K,y) are defined by

    b(t,K,y) := ∫ b(y_t − y'_t) K_t(dy')    and    σ(t,K,y) := ∫ σ(y_t − y'_t) K_t(dy'),

respectively, then (0.8)-(0.9) has a unique strong solution.
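The strong solution Y = h(ξ, B) of (0.7) is exactly what the classical Euler-Maruyama scheme approximates. A minimal one-dimensional sketch (our own illustration; here b and σ depend only on (t, Y_t), a special case of the path functionals above):

```python
import math
import random

def euler_maruyama(b, sigma, y0, t_max=1.0, dt=0.001, seed=11):
    """Euler-Maruyama discretisation of the SDE (0.7) in one dimension:
    Y_{k+1} = Y_k + b(t_k, Y_k) dt + sigma(t_k, Y_k) dB_k,
    where the dB_k are independent N(0, dt) increments."""
    rng = random.Random(seed)
    y, t = y0, 0.0
    path = [y]
    for _ in range(int(round(t_max / dt))):
        db = math.sqrt(dt) * rng.gauss(0.0, 1.0)
        y = y + b(t, y) * dt + sigma(t, y) * db
        t += dt
        path.append(y)
    return path

# an Ornstein-Uhlenbeck-type example: the drift pulls the path back to 0
ou_path = euler_maruyama(b=lambda t, y: -y, sigma=lambda t, y: 1.0, y0=5.0)
```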
Moreover, the process X_t(A) = K_t({y : y_t ∈ A}) for A ∈ B(R^d) solves the following martingale problem:

(MP)_{γ,σ,b}: if φ ∈ C_b^2(R^d), then

    X_t(φ) = m(φ) + ∫_0^t ∫ ( ½(σσ*)_{ij}(s,X_s,x) ∂_i∂_j φ(x) + b_i(s,X_s,x) ∂_i φ(x) ) X_s(dx) ds + Z_t(φ),

where Z_t(φ) is a continuous square-integrable (F_t)-martingale null at zero with square function

    ⟨Z(φ)⟩_t = ∫_0^t X_s(φ^2) ds.

Here b(t,X_t,x) = ∫ b(x−z) X_t(dz) and σ(t,X_t,x) = ∫ σ(x−z) X_t(dz). As usual, integrals of vector-valued functions are computed componentwise.

0.2 Summary of the Main Results

After pummelling the reader with a lengthy review, let us finally enunciate our results. The exposition is divided into three independent chapters that can be read separately. However, we suggest that the reader not familiar with the subject read them in order. The thesis is organized as follows.

Chapter 1 presents the weak convergence of interacting particle systems to superprocesses with interactions. The interactive historical process K solving (0.8)-(0.9) is expected to arise as a diffusion approximation for a sequence (K^N) of properly rescaled interacting branching particle systems. Dawson & Perkins (1996) claim that this fact is proven in this thesis, so we are compelled to demonstrate it. The proof requires a good understanding of the particle systems and is not a corollary of some well-known general convergence theorem.

In Chapter 2 we study interactive measure-valued diffusions of the form

    Y_t = y_t + ∫_0^t b(s, X_s, Y_s) ds,
    X_t(A) = ∫ 1(Y_t(y) ∈ A) γ(t, X, Y) H_t(dy),    A ∈ B(R).

Here d = 1. We show that X_t(dx) = u(t,x) dx, where (t,x) ↦ u(t,x) is Holder continuous. The proof is an application of historical stochastic calculus. This result was proved for super-Brownian motion relatively recently by Reimers (1989) and independently by Konno & Shiga (1988). As an example we show that the above result leads to the existence of approximate solutions to the following version of Burgers' equation with noise.
Let W be a space-time white noise and consider

    ∂u/∂t = (1/2) ∂²u/∂x² − ∂/∂x (u²) + √u Ẇ.    (0.10)

In fact, for any arbitrary ε > 0 we solve a smoothed version of (0.10), namely

    ∂u/∂t = (1/2) ∂²u/∂x² − ∂/∂x ( u (p_ε ∗ u) ) + √u Ẇ,

where p_ε is the heat kernel and the convolution is in the x variable. We also show that the method of proof of the main theorem can be employed to investigate some related problems. In particular, we study a super-Brownian motion in a random environment.

In Chapter 3 we look at the local times of one-dimensional interacting superprocesses which arise as solutions of

    Y_t = y_0 + ∫_0^t σ(X_s, Y_s) dy(s) + ∫_0^t b(X_s, Y_s) ds,
    X_t(A) = ∫ 1(Y_t(y) ∈ A) H_t(dy),    A ∈ B(R).

Once again d = 1 and the notation is the same as above. We show that there exists a jointly continuous process (a,t) ↦ L^a_t(X) such that

    ∫_0^t X_s(φ) ds = ∫_{−∞}^{∞} φ(a) L^a_t(X) da

for any positive Borel function φ. The process L^a_t(X) is called the local time of X. The proof of this result relies on a Tanaka-like formula of Perkins (1995). The proof uses many of the tools of the infinite-dimensional stochastic analysis developed by Perkins, as well as some abstract tools, like the predictable section theorem, to obtain concrete estimates. The mere existence of the local time is a path property of X (Geman and Horowitz 1980). We can also see directly that the local time is an interesting quantity. As Adler (1992) has pointed out, the superprocess discussed in the Example of Section 0.1 exhibits waves of local time. This is also evident in computer simulations.

Most of the topics treated in this work have been studied in the non-interacting case, and sharper answers have been given in that case. However, none of those methods of proof can be translated to the interactive case, as they rely very heavily on the independence assumptions. In this work we introduce new methodologies.
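The defining identity ∫_0^t X_s(φ) ds = ∫ φ(a) L^a_t(X) da can be mimicked numerically for a discretised measure path by binning occupation over a spatial grid. The sketch below is purely illustrative; the discretisation and all names are our own, not the thesis' construction.

```python
def occupation_density(measure_path, dt, grid_min, grid_max, n_bins):
    """Histogram estimate of the local time a -> L_t^a: each measure X_s is
    given as a list of (position, mass) atoms, and each atom contributes
    mass*dt spread over its bin of width da, so that
    sum_k phi(a_k) * L[k] * da approximates int_0^t X_s(phi) ds."""
    da = (grid_max - grid_min) / n_bins
    L = [0.0] * n_bins
    for atoms in measure_path:
        for pos, mass in atoms:
            k = int((pos - grid_min) / da)
            if 0 <= k < n_bins:
                L[k] += mass * dt / da
    return L

# toy path: X_s is a single unit atom fixed at 0.25 for 100 steps of dt = 0.01
toy_path = [[(0.25, 1.0)] for _ in range(100)]
L = occupation_density(toy_path, dt=0.01, grid_min=0.0, grid_max=1.0, n_bins=10)
```

Taking φ = 1, the left-hand side of the identity is the total occupation time (here 1.0), and sum(L) * da recovers it up to rounding.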
However, by considering rather general coefficients we renounce the possibility of carrying out very precise computations such as those in Perkins (1988) or Perkins-Le Gall-Taylor (1995).

Chapter 1
Weak Convergence of Interacting Branching Particle Systems

1.1 Introduction

Super-Brownian motion arises as a diffusion approximation for Bienayme-Galton-Watson (BGW) trees of branching Brownian motions (Walsh 1986). A BGW tree of branching Brownian motions with parameters (N, γ) can be described as follows. Fix γ > 0. At time t = 0, N particles are located in R^d according to some fixed initial distribution; to be concrete, let us place all of them at the origin. The particles start moving independently, undergoing Brownian motions. During a time interval (t, t + dt) any given particle has a probability γN dt of dying. If it does die, it either splits into 2 identical particles or it goes instantaneously to an isolated point ∂ (a cemetery), with probability 1/2 each. If it branches, the two offspring start their lifetimes at the point of death of their parent. They continue moving independently according to d-dimensional Brownian motions until they either die or branch, and so on. In this model we want to keep track of the locations of the particles as well as their numbers. One way to describe the state of the system of particles is to identify the positions of the particles with the positions of the atoms of a purely atomic measure, say X^N_t. If each particle is assigned mass 1, then X^N_t(A) is the number of particles in the set A at time t. We are interested in the high-density limit as N → ∞. Of course some renormalization is required. It turns out that the properly renormalized object to study is N^{-1}X^N_t. In fact N^{-1}X^N ⇒ X, where X is a measure-valued process which we call super-Brownian motion with branching rate γ. X can be characterized by a martingale problem.
Recall that if μ is a measure, μ(φ) = ∫ φ dμ, and that M_F(E) denotes the set of finite Borel measures on a metric space E, endowed with the weak topology.

Theorem 1.1 (Super-Brownian Motion with Branching Rate γ). Let δ_0 denote a unit mass at 0. There is a continuous M_F(R^d)-valued, adapted process X_t defined on a filtered probability space (Ω, F, (F_t), P) such that

1. P(X_0 = δ_0) = 1;
2. if φ ∈ C_b^2(R^d), then

    X_t(φ) = X_0(φ) + ∫_0^t X_s(Δφ/2) ds + Z_t(φ),    (MP)_{Δ,γ}

where Z_t(φ) is a continuous square-integrable (F_t)-martingale null at zero with square function

    ⟨Z(φ)⟩_t = ∫_0^t X_s(γφ^2) ds.

Moreover the law, Q^{δ_0}, of X on C([0,∞), M_F(R^d)) is uniquely determined by 1 and 2.

Observe that (MP)_{Δ,γ} gives a semimartingale decomposition for X_t(φ) when φ ∈ C_b^2(R^d). As we shall see later, the drift part comes from the spatial motions while the martingale part comes from the branching. More generally, one can replace the space motions (i.e. the Brownian motions) by a Feller process ξ taking values in a locally compact space. Call L its infinitesimal generator and D(L) its domain. ξ could be, for example, a stable process. In this more general setting Theorem 1.1 holds if Δ/2 is replaced by L and C_b^2(R^d) by D(L). That is, the martingale problem (MP)_{L,γ} is well posed (Dawson 1993).

The branching particle systems contain genealogical information that is not evident in the limiting super-Brownian motion. This information has proven to be extremely valuable when analyzing path properties of super-Brownian motion (Perkins 1988, Dawson and Perkins 1991). A process called historical super-Brownian motion, which records the past histories of all individuals in the population, can be defined as follows. Let C^d = C([0,∞), R^d) and C^d_t = σ(y_s : s ≤ t), the canonical σ-fields of C^d.
Let H_t^N be the random measure on the space C^d of ℝ^d-valued continuous paths which assigns mass N^{-1} to the trajectory of each particle alive at time t (in the branching system approximating super-Brownian motion). It can be shown (Dawson and Perkins 1991) that H^N converges weakly to an M_F(C^d)-valued continuous process H. This limit process is called the historical process, and its law may also be characterized by a martingale problem.

Theorem 1.2 (Historical Super-Brownian Motion). Let D_0 denote the set of functions f on C^d such that f(y) = g(y(t_1), …, y(t_n)) for some 0 ≤ t_1 < … < t_n and a smooth function g which is constant outside some compact set. For f ∈ D_0 let

  Δ̄f(y, t) = Σ_{i=1}^{d} Σ_{k=0}^{n−1} Σ_{l=0}^{n−1} 1(t < t_{k+1} ∧ t_{l+1}) g_{kd+i, ld+i}(y(t ∧ t_1), …, y(t ∧ t_n)).

The law P^{δ_0} of historical Brownian motion on C([0,∞), M_F(C^d)) is uniquely determined by the following martingale problem: ∀ f ∈ D_0,

  H_t(f) = δ_0(f) + ∫_0^t H_s(½ Δ̄f(·, s)) ds + Z_t(f),

where Z_t(f) is a continuous square integrable martingale, null at zero, with square function

  ⟨Z(f)⟩_t = ∫_0^t H_s(f²) ds.

Note that super-Brownian motion can be obtained as a projection of historical Brownian motion: X_t(·) = H_t({y ∈ C^d : y_t ∈ ·}).

If one is interested in modeling, it is natural to try to introduce some sort of dependence into the superprocess. For example, one may want to consider spatial motions in which individuals are attracted to each other, or branching rates that reflect the fact that lonely individuals tend to die faster. With this in mind, one is tempted to replace the Laplacian in Theorem 1.1 by an elliptic operator of the form

  A(t, X, x) = ½ Σ_{i,j=1}^{d} a^{ij}(t, X, x) ∂²/(∂x_i ∂x_j) + Σ_{i=1}^{d} b^i(t, X, x) ∂/∂x_i.   (1.1)

Similarly, one may want to consider γ = γ(t, X, x). These coefficients depend on time, the state and past history of the population, and the current position of a particle. For example (in d = 1), if b : [0,∞) × ℝ → ℝ, then we could define b(t, X, x) = ∫_ℝ b(t, z − x) X_t(dz). In this case, a particle located at x at time t would feel a drift (i.e.
pull or push) of magnitude b(t, z − x)X_t(dz) coming from the particles located in a cube of side dz centered at z. With appropriate assumptions on the coefficients a, b, γ, the martingale problem (MP)_{A,γ} has a solution (Perkins 1995, Roelly and Meleard 1990). Roelly and Meleard showed that a solution of the martingale problem can be obtained as a limit of a renormalized system of interacting particles. More generally, one is led to consider interactions in the enriched historical setting. This can be very useful, for example, in the modeling of non-Markovian superprocesses (such as the goat superprocess introduced in Chapter 0).

We shall need a generalization of (1.1). To this end we present some notation. Fix 0 ≤ t_1 < t_2 < … < t_n and ψ ∈ C_b²(ℝ^{nd}). Let

  ψ(y) = ψ(t_1, …, t_n)(y) = ψ(y(t_1), …, y(t_n)).

ψ_i and ψ_ij denote the first and second order partials of ψ. ∇ψ(t, y) : [0,∞) × C^d → ℝ^d is the (C_t)-predictable process whose j-th component at (t, y) is

  Σ_{i=0}^{n−1} 1(t < t_{i+1}) ψ_{id+j}(y(t ∧ t_1), …, y(t ∧ t_n)).

If 1 ≤ i, j ≤ d, ψ̄_ij : [0,∞) × C^d → ℝ is the (C_t)-predictable process defined by

  ψ̄_ij(t, y) = Σ_{k=0}^{n−1} Σ_{l=0}^{n−1} 1(t < t_{k+1} ∧ t_{l+1}) ψ_{kd+i, ld+j}(y(t ∧ t_1), …, y(t ∧ t_n)).

Let

  D_0 = ∪_{m=1}^{∞} {ψ(t_1, t_2, …, t_n) : 0 ≤ t_1 < t_2 < … < t_n, ψ ∈ C_0^∞(ℝ^m)} ∪ {1}.

We will work on a filtered probability space Ω = (Ω, F, (F_t), P). The following definition is motivated by Theorem 1.2 and by the preceding discussion.

Definition 1.3 (Historical Super-Brownian Motion with Interactions). Let S_d denote the space of symmetric positive definite d × d matrices. Suppose

  a : [0,∞) × D([0,∞), M_F(C^d)) × C^d → S_d,
  b : [0,∞) × D([0,∞), M_F(C^d)) × C^d → ℝ^d,
  γ : [0,∞) × D([0,∞), M_F(C^d)) × C^d → (0,∞).

A predictable process K ∈ C([0,∞), M_F(C^d)) on Ω satisfies (MP)_{a,b,γ} (with initial condition δ_0) if ∀ ψ ∈ D_0

  K_t(ψ) = δ_0(ψ) + Z_t(ψ) + ∫_0^t ∫ [⟨∇ψ(s, y), b(s, K, y)⟩ + ½ Σ_{i=1}^{d} Σ_{j=1}^{d} ψ̄_ij(s, y) a_ij(s, K, y)] K_s(dy) ds,
where Z_t(ψ) is a continuous square integrable martingale with square function

  ⟨Z(ψ)⟩_t = ∫_0^t ∫ γ(s, K, y) ψ(y)² K_s(dy) ds  ∀ t ≥ 0 a.s.

Perkins (1995) has shown that (under suitable hypotheses) the above martingale problem is well posed. We expect the solution to be the limit point of a sequence of renormalized interacting BGW branching Brownian motions, just as in the non-interacting case. In this chapter we introduce such a sequence of particle systems. We also show that they are tight and that their limit points satisfy the martingale problem (MP)_{a,b,γ}. Our results are a non-trivial extension of Meleard and Roelly (1990). The main difference is that by working in the historical setting the state space for the particle motions becomes C^d. Therefore the convergence of an approximating sequence K^N is no longer completely determined by the convergence of the projections f ↦ ∫ f dK_t^N.

1.2 Interactive Branching Particle Systems

1.2.1 The Particle Picture

In this subsection we define a sequence of processes K^N which converges weakly to a solution of the martingale problem (MP)_{a,b,γ} mentioned in the introduction (see Definition 1.3). We begin by introducing a set of labels I. Let I := ∪_{k≥0} ℕ × {1,2}^k, where by convention {1,2}^0 = ∅. For any α = (α_0, …, α_k) ∈ I we write |α| = k and α|i = (α_0, …, α_i) for i ≤ k. If α = (α_0, …, α_j) ∈ I we denote αi = (α_0, …, α_j, i), i = 1, 2, and π(α) = (α_0, …, α_{j−1}), j ≥ 1. Let {ξ_α : α ∈ I} be i.i.d. random variables with P[ξ_α = 0] = P[ξ_α = 2] = 1/2. We say that a subset A of I is a Bienaymé–Galton–Watson tree (or BGW tree) with N roots iff

(i) {α ∈ I : |α| = 0, α|0 = i for some i ≤ N} ⊂ A, and if α|0 > N then α ∉ A;

(ii) for any α ∈ A such that |α| ≥ 1, ∏_{i=0}^{|α|−1} ξ_{α|i} > 0.

The family {ξ_α : α ∈ I} induces a unique probability distribution Π_N on the set of all trees with N roots.

Remark 1.4.
The random variable card(A) is the total number of individuals that ever lived in a critical Bienaymé–Galton–Watson process. Therefore card(A) < ∞ a.s. •

The next ingredient we need is a collection {B^α : α ∈ I} of independent d-dimensional Brownian motions. Define also a family {e_α : α ∈ I} of i.i.d. exponential(1) random variables. We assume that the three collections of Bernoulli, Brownian motion and exponential random variables are mutually independent. We also assume that they are carried by the same probability space (Ω, F, P). We shall also need drift, diffusion and branching rate coefficients. For the sake of simplicity, throughout the chapter we shall assume that at time t = 0 all the particles are located at the origin. We could have distributed the initial particles according to any probability distribution, but we want to keep the notation as simple as possible. All the proofs can easily be modified to cover such a general initial condition.

Notation 1.5. If E is a topological space and x ∈ C([0,∞), E), then x^t denotes the path x stopped at t: x^t = x(t ∧ ·). If {X(t, ω) : t ≥ 0} is a process taking values in a normed linear space (L, ‖·‖), then X_t^* = sup{‖X_s‖ : 0 ≤ s ≤ t}. If (M, ℳ) is a measurable space, bℳ denotes the space of bounded real-valued ℳ-measurable functions, and ℳ^* denotes the universal completion of ℳ. C^d = C([0,∞), ℝ^d) is endowed with the sup metric ρ, (C_t^d) is its canonical filtration, and M_F(C^d) is the space of finite Borel measures on C^d with the weak topology. It will also be convenient to append an isolated point ∂ to ℝ^d. •

Let

  Lip(C^d) := {φ : C^d → ℝ : ‖φ‖_∞ ≤ 1, |φ(x) − φ(y)| ≤ ρ(x, y) ∀ x, y ∈ C^d}.

The Vasershtein metric d = d_ρ on M_F(C^d) is given by

  d(μ, μ') = sup{|μ(φ) − μ'(φ)| : φ ∈ Lip(C^d)}.

This metric induces the weak topology on M_F(C^d) (Ethier and Kurtz 1986, p. 150, Ex. 2). Suppose that q > 0.
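For intuition about the Vasershtein metric, its supremum can be computed in closed form in a toy setting: for two equal-mass empirical measures on ℝ (rather than on C^d), the supremum over 1-Lipschitz test functions is the Kantorovich–Rubinstein value obtained by matching sorted atoms, (1/n)Σ_i |x_(i) − y_(i)|; when all atoms lie in an interval of length at most 2, the extra constraint ‖φ‖_∞ ≤ 1 is not binding (recenter the optimal potential) and this value equals d(μ, μ'). A sketch of ours, not from the thesis:

```python
def vasershtein_1d(xs, ys):
    """Kantorovich-Rubinstein value sup |mu(phi) - mu'(phi)| over 1-Lipschitz phi,
    for mu = (1/n) sum_i delta_{x_i} and mu' = (1/n) sum_i delta_{y_i} on the line:
    the optimal coupling matches sorted atoms."""
    assert len(xs) == len(ys), "equal total mass is assumed in this toy version"
    n = len(xs)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / n

# nu is mu translated by 0.25; all atoms lie in [0, 0.75], so this is d(mu, nu)
mu, nu = [0.0, 0.25, 0.5], [0.25, 0.5, 0.75]
print(vasershtein_1d(mu, nu))   # -> 0.25: translating by c moves the measure by exactly c
print(vasershtein_1d(mu, mu))   # -> 0.0
```

On C^d the same duality holds with ρ in place of the Euclidean distance, but no comparable closed form is available; only the defining supremum is used in the sequel.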
Let

  σ : [0,∞) × M_F(C^d) × C^d → ℝ^{d×d},
  b : [0,∞) × M_F(C^d) × C^d → ℝ^d,
  γ : [0,∞) × D([0,∞), M_F(C^d)) × C^d → [q, ∞).

Suppose that v, F : [0,∞) → [1,∞) are non-decreasing functions and that p is an arbitrary but otherwise fixed positive integer. Γ : ℕ × [0,∞) × ℝ → ℝ is defined by (p, t, x) ↦ v(t)x^p. We will assume that the maps σ, b, γ have the following properties.

Boundedness by the total mass:

  sup_{y∈C^d} ‖b(t, K_t, y)‖ + sup_{y∈C^d} ‖σ(t, K_t, y)‖ ≤ Γ(p, t, K_t^*(1))  ∀ K ∈ D([0,∞), M_F(C^d)),   (1.2)

  sup_{y∈C^d} γ(t, K, y) ≤ F(t)(1 + ∫_0^t K_s^*(1) ds)  ∀ K ∈ D([0,∞), M_F(C^d)).   (1.3)

Lipschitz condition:

  ‖σ(t, μ, y) − σ(t, μ', y')‖ + ‖b(t, μ, y) − b(t, μ', y')‖ ≤ Γ(p, t, μ(1) ∨ μ'(1))(ρ(y, y') + d(μ, μ'))  ∀ μ, μ' ∈ M_F(C^d).   (1.4)

Finally, for any T > 0 there is a finite constant C_T such that ‖b(s, 0, 0)‖ + ‖σ(s, 0, 0)‖ ≤ C_T for all s ≤ T. Note that if y ∈ C^d and K ∈ D([0,∞), M_F(C^d)), then (1.4) (applied with μ' = 0, y' = 0) gives

  ‖b(s, K_s, y^s)‖ ≤ ‖b(s, 0, 0)‖ + Γ(p, s, K_s^*(1))(‖y^s‖_∞ + K_s^*(1)) ≤ ‖b(s, 0, 0)‖ + Γ(p, s, K_s^*(1))(1 + K_s^*(1))(1 + ‖y^s‖_∞).

Similarly,

  ‖σ(s, K_s, y^s)‖ ≤ ‖σ(s, 0, 0)‖ + Γ(p, s, K_s^*(1))(1 + K_s^*(1))(1 + ‖y^s‖_∞).

Remark 1.6. (a) Notice that the second argument of b and σ is less general than the second argument of γ. This stems from the fact that we wanted to keep the notation as simple as possible, while still showing the reader how to generalize the proof so as to cover some interesting examples. It will be easy for the reader to modify our proof to handle more general coefficients of the form

  σ : [0,∞) × D([0,∞), M_F(C^d)) × C^d → ℝ^{d×d},
  b : [0,∞) × D([0,∞), M_F(C^d)) × C^d → ℝ^d.

(b) The condition that γ be bounded away from 0 can be weakened to

  ∫_0^∞ γ(t, K, y) dt = ∞  ∀ K ∈ D([0,∞), M_F(C^d)), ∀ y ∈ C^d.

Unfortunately, any further generalization is beyond the scope of the techniques employed in this chapter. •

Examples. The reader will find many interesting examples in Perkins (1995).
Here is a sample of several types of coefficients that satisfy our hypotheses. See Perkins (1995) for more details. (a) Assume A, 6, e > 0. Let ps{x) be the d-dimensional heat kernel. Set l(t,K,y)= 6 + exp ( - £ f PeM ~ yt)Ks(dy')e-x^ds). We can interpret a superprocess as a biological population of say goat-like particles. The branching rate of a particle located at yt becomes smaller if many of the particles (goats) have spent too much time near y(t) in the past, thus depleting the food supplies and making local conditions less favorable for reproduction. The parameter A represents the rate of replenishment of the environment. We would like to consider e = 6 — 0, but for technical reasons are obliged to assume that both parameters are strictly positive. Note that 7 is bounded and bounded away from 0, so it satisfies trivially the hypothesis. (b) Let p, e be as in Example 1. Define b(t,Kt,y) = j%j\pe{y's-yt)Kt{dy')ds. In this case the particles drift away from the places they or their living fellow particles have visited in the past. 16 ( c ) Let de be the Vasershtein metric on MF(Eld) associated with the Euclidean metric on IRd. Assume a : MF(Rd) x R ^ TRdxn and b : MF(SRd) x IRd ->• H d satisfy \\a(u,x) - <y{n',x')\\ + ||6(u,x) - 6(/i',x')ll < const.(l + M(1) V M ' ( l))[de(M,M') + Ik - and \\a(u,x)\\ + \\b(u,x)\\ < const.(l + u(l)) \/fi,u' E MF(JRd), V x , x ' e l R d . Define n f : MF(Cd) -> M F (R d ) by n t ( M )(A) = u({y : y t € A}), and CT, 6 by a(t,Kt,y) = a(r., Il t(ii 't)) y)» b(t,Kt,y) = 6(<,IIt(i<'(),y). This example shows that the case of M F (Re- valued interacting superprocesses follows as an instance of the historical superprocesses. (d) This is a continuation of Example (c). Let a : JRd -> M d x n and b : M d -> R. d be bounded Lipschitz continuous functions, and define a and b by a(u,x) = j a(x-y)n(dy), b(u,x) = j b(x-y)u(dy). More generally if bk : JRd(k+1) H d and crfc : R ^ 1 ) ] R d x n , A; = 1 , . . . 
are bounded Lipschitz functions then we can define coefficients ' a(u,x) = ak(x,xi,---,xk)u(dxi)...u(dxk), k=lJ , [ 3 p b{u,x) = Y^bk{x,xi,...,xk)u{dxi)...u{dxk). • t— 1 J We proceed to define the particle systems. For each N we define a system of age dependent branching processes as follows. Step 0. Pick a B G W tree A with N roots at random, i.e. according to the law IIN- Step 1. Define predictable path functionals b : N x [0, oo) x CNd —• TRd and a : N x [0, oo) x CNd —> H n x d by 1 N bl(s,y\..:,yN):=b{s,^J26(v^^yiy) i=i i N al(s,y\...,yN):=a(s,-y£S{yJ)s,(yir). Here each y l E Cd. Let b := (hi,... , 6^) and let a be the Nd x Nn matrix whose j-t-h diagonal d x n block is Oj, j = 1 , . . . ,N, and any other entry is equal to 0. Note that by the assumptions on b and a \\bi(8,y\... ,yN)\\ + \\ai{s,y\..,. ,yN)\\ < \Ms,0,... ,0)11 + 11^,0,... ,0)|| + r(p,s,card(A)/N)(l+y*). (1.5) 17 Moreover, both b and a are Lipschitz: \\b(s,yl,...,yN)-b(s,x\,...,xN)\\ < y/\\bi(s,v\... ,yN)-b1(s,x\... ,x")\\2 +.. .+ ]\bN(s,y\... ... ,^)||2 < T (p ,« , card(^ ) / iV)^ (y i + ..-. + ( yN . - ^JV )^ . . . <T (p , 5 , C ardM ) / iV) ( ( ? / , . . . ^ ^ - ( x 1 , . . . , ^ ) ) : . • (1.6) Similarly H a ^y 1 , . . . y j - a K x 1 , , . . ,xd)\\ < T(p, 's,card{A)/N){(y\ • • • , y d ) - (ar1;... ,xd));. (1.7) Note that once i4 has been chosen the term Y(p,s,card(A)/N) can be regarded as a function of s, bounded on bounded intervals. We shall need also an index set for the particles "alive" at a given time. For this purpose need to define an auxiliary set .7(1, i) :={aeA: \a\ = 0} C A, for all t > 0. Note that J(l ' ,t) does not depend oh t. We shall see later the reason for this notation. For . each a € J ( l , i) set B(a) = 0. • This is the :birth time of a . ' . . . Consider the following system of N d-dimensional SDE's: Yta= ViafrY1,.'.. ,YN)ds+ i aa{s,Y\...\YN)-dB«, (1.8) Jo Jo a € J(l,t). 
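Between branch times, (1.8) is a coupled SDE system in which each particle's drift and diffusion are evaluated at the empirical measure of the stopped paths. The Euler–Maruyama sketch below is ours, not the thesis' construction: it takes d = 1, σ ≡ 1, omits the branching clocks, and uses a hypothetical attraction kernel b̄(z) = z standing in for b. It illustrates only the mean-field structure, where particle i feels drift N^{-1} Σ_j b̄(Y^j − Y^i), i.e. a pull toward the empirical mean.

```python
import math
import random

def bbar(z):
    # hypothetical interaction kernel (an assumption, not from the text):
    # linear attraction toward the other particles
    return z

def euler_maruyama_system(N=50, T=1.0, dt=1e-2, rng=None):
    """Euler-Maruyama sketch of the coupled system (1.8) with d = 1 and sigma = 1:
    the drift of particle i averages bbar over the empirical measure, mirroring
    b_i(s, y^1, ..., y^N) = b(s, N^{-1} sum_j delta_{(y^j)^s}, (y^i)^s)."""
    rng = rng or random.Random(1)
    Y = [0.0] * N                                  # all particles start at the origin
    for _ in range(int(T / dt)):
        drift = [sum(bbar(Y[j] - Y[i]) for j in range(N)) / N for i in range(N)]
        Y = [Y[i] + drift[i] * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
             for i in range(N)]
    return Y

Y = euler_maruyama_system()
center = sum(Y) / len(Y)
print(round(center, 3))   # the cloud's center performs a slowed-down random walk
```

With b̄ ≡ 0 the particles are independent Brownian motions and one recovers the setting of Section 1.1; the rate-Nγ exponential clocks would then be superimposed on this skeleton to produce K^N.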
• ; > By (1.5), (1-6), (1.7) and the well known theorem for existence and uniqueness of SDE's with Lipschitz coefficients that grow linearly (e.g. Rogers & Williams p. 128), the system (1.8) possesses a unique, global (i.e. non-exploding), strong solution. Define stopping times eeJ(i,-) a G .7(1, t ) . Recall that ^ ^2eeJ(i,) denotes the path u *4 Y^9ej(i,u) ^(y9)* stopped at time s. A l l of these stopping times are a.s. finite since 7 is bounded away from zero, and a.s. different from each other since the exponentials are independent and independent of the Y's. Now define T\ = m i n { T 1 a : a € J(l,<)} a 1 = arg min{Tf : a G J ( l , i ) } , <5(a1) = T 1 (the death time of a 1 ) , and 1 N N. 18 Step 2. a1 has just died. There are two possibilities. It had either zero Or two offspring. Let's examine each possibility separately. Case 1. £ ai = 0. Define K • \j(l,t)-{a1} i f<>7i. Set Yt&1 =dfort> Tv. Now solve the system of card(J(2,Ti)) d-dimensional SDE's = b(s± £ ;S{Yey,(Yar)ds+ f a(s, 1 £ \ Y . y , • dB?, (1.9) 1/7,1 eeJ(2,s) . 1 / 7 1 eej(2,s) a € J ( 2 , < ) , r.>Ti. Just as before, we interpret the coefficients b, a in (1.9) as being some predictable Lipschitz path functional with linear growth. Therefore (1.9) has a unique global strong solution for all times t>T\. Proceed as before. Define stopping times I? =- inf { « > 71 : A T ( S , . ( I g^yf^y > £o aGJ ( 2,J). These are also a.s. finite and different. Set T 2 = min{T 2 Q :a€ J(2,t)} a? = arg min{r2 Q : a € J(2,t)}, and K ^ = jj £ V " ) 1 ' T i < t < T 2 . o€J(2,t) ' Cose 2. £ 6 i = 2. Define J ( 2 n = / J ( M ) i f i < T l i . • \ J ( 1 , < ) U { Q 1 1 , O 1 2 } - { Q 1 } i f f > T ! . . We set T i to be the.birth time of c ^ l , a X 2 : ./?(a\l) = / 9 ( Q 1 2 ) : = T i . ' /, Set Yf = d for all t > T i and define (Y*yi)Tl = (Y^2)Ti := (y a l ) r i ~. Now solve the system of card(J(2,Ti)) d-dimensionalSDE's . yta = yfi + f Ks,^ £ 6{YeYy{Yay)ds+ f o{S,± . £ ^ . . ( r D - d i ? ? 
, 1 / 7 1 1 eej(2,s) 1 / 7 , 1 eej(2,s). ' (1.10) a e J(i,t), t > T i . 19 Once more we interpret the coefficients b, a in (1.10) as being some predictable Lipschitz path functionals with linear growth. Notice that even though the b, a in this step are possibly different from those in step 1, the Lipschitz "constant" is the same: T(p,s,card(A)/N). Hence (1.10) has a unique global strong solution for all times i > T\. Define stopping times T 2 Q :.= inf { rt //?(«, y '• eeJ(2,-) a e J{2,t). These are a.s. finite and different. Set T 2 = min{T 2 a : a <E J(2,r)} . . I a? = arg min{T 2 Q : a € J(2,t)}, 6{o?) = T2 and aeJ(2,t) Step (n+1). Suppose that we have already defined J(n,t), T i < .... <Tn. a " died at time Tn. Case 1. £ a n =0. Define 'j(n,t) i f * < T„, J(n + l,t) = J(n,t) - {a71} if t > Tn. Stop if J ( n + 1,T„) is the empty set. Define 2 ^ = 0 for t > Tn. Otherwise, define =.d for t > Tn and solve the system of card(J(n + l , T n ) ) d-dimensional SDE's %"-=Yfn + f b(s,± X . 6(Ye)„(YQy)ds+ f-<r(8,± E S ^ Y ^ - d B f , JT" 0eJ(n+l,s) JT" 0£J(n+l,s) (1-11) a e J(n + 1,/.), i > T„. As before (1.11) has a unique global strong solution for all times t > Tn. Define stopping times JPW \ eej(n+i,.) ds > ea 9€J( l,-) a € J (n + l , i ) . These are a.s. 'finite arid different. Set . T n + 1 =min{T n Q + 1 : a € J (n + l , i ) } >d n + 1 = arg min { r^ + 1 : a € J (n + 1,*)}, . • • 6(an+1)=Tn+1 20 and Kt —Jj E V")1' r n <t < Tn+l. aeJ(n+l,t) '. . Case 2. £ a n = 2. Define • \ j ( n , < ) U { a " l , a N 2 } - { a n } i f < > T n . y We also set Tn to be the birth time of a n l , a N 2 : /?(a n l ) = /3(an2) := T n . Set Yt&n = d for all t > Tn and define {Y&ni)T« = (Y&n2)T* := (ya)T--. Now solve the system of card(J(n+ l,Tn)) d-dimensional SDE's Yta = Y% + f.bi^jf . E 6{Yey,(Ya)s)ds+ f E ' ' (y ) ' v ( D ' ) , / T n 0€J(n+l,s) JTN .. eeJ(n+l,s) • (1.12) OJ G J ( n + l,t), t > Tn. i (1.12) has a unique global strong solution for all times t > Tn. 
Define stopping times T ° + 1 : = inf< t>Tn:N f l\sA±- E V))S>ya)ds>e4> a G J (n + l , i ) . ' These a.s. finite and different. Set Tn+1=mm{T^+l:aeJ{n + l,t)} a n + 1 = arg m i n l T ^ ! : a G J (n + l , i ) } , $ ( a n + 1 ) = T n + 1 . • • • and i aeJ(n+l,t) R e m a r k 1.7. This procedure ends with step card(A). Notice that with probability one it takes only a finite amount of time for the procedure to finish. This is due to the fact that the branching rate is bounded below. In fact, — Tk-\ < e&k/(N<;). • We shall need some terminology. 21 Def in i t i on 1.8. Remark 1.7 implies that A = {an, n = 1 , . . . , card(A)}. Therefore we assign a birth and a death time to each particle in A. By convention Ya = d, 8(a) = 6(a) = oo if a ,4.\Set ha(s) = l(8(a) < s < 6(a)), the indicator of the lifespan of a, and a~t& hQ(t) = 1, a<t& 6(a) < t, t <a t < 8(a). We shall adopt the convention that for any (f>: R f c —• IR and any 1 < j < k, (x\,... , Xk) G JRk we have 4>(x\,... , Xj-\,d,Xj+\,..., x^) = 0. Finally, we define some very important cr-fields. Let := a(^J(n,s),{Bf : a € J(n,s)},{ea : a € J(n,s), 6(a) < s}, {£a : a € J(n, s), 6(a) < s}; n = 1 , . . . , ooj -: s < , t>0. and u>t R e m a r k 1.9. (a) KN is right continuous. (b) From the definition of 6(a) and since 7 > 0 it follows that 5(a)=m{{t> B{a):N f 7(s, (KN)S, (Ya)s)ds > ea). . In fact one easily sees that N fS{a\(s,(KNy,(Ya)s)ds = ea. (1.13) Moreover, Yt=Y?n+ f b(s,K?_,(Ya)s)ds+ f o(s,K?_,(Ya)s)-dB?, (1.14) JTn JTn aeJ(n + l,t), Tn<t<Tn+1. (1.15) (c) At this point a discussion of what we have accomplished so far wil l be helpful. The procedure to construct the interacting particle system KN is the following. First, a B G W tree is chosen at random. This tree contains the genealogical information and nothing else. 
iV particles are placed at the origin and start moving according to some SDE in which the drift and diffusion depend in their own trajectories and (in a symmetric way) in the trajectories of al l other particles. This SDE possesses a unique strong solution because of the Lipschitz nature of 2 2 the coefficients. Each particle carries an alarm clock which is a Poisson point process with a random adapted intensity (also called doubly stochastic Poisson process). This intensity is strictly increasing until the first arrival and then it is zero (so it's a degenerate Poisson point process with only one arrival). When the first bell rings we decide whether to ki l l that particle or to replace it with two offspring. A t this instant of time, say to, we also update K^, which wi l l then be concentrated in the paths of the N — 1 or N- + 1 particles alive. We also know the value of the exponential random variable driving the Poisson process and the number of offspring. This information is also included in the filtration T^. Observe that the driving Brownian motions are adapted to this filtration and are in fact still Brownian motions, since the information contained in (Tj^) and not contained in their own filtration concerns some other independent Brownian motions or some independent exponentials or some independent Bernoulli random variables. The fact that the intensities of the clocks are adapted is therefore essential, and it is embodied in equation (1.13). Note also that since the system of SDE's has a strong solution, the trajectories Ya are (^ r/v)-adapted and therefore KN itself is (^ r/v)-adapted. The process continues in the same fashion until the next alarm clock goes off and so on. • 1.2.2 A n Equation for KN Observe that as a consequence of (1.14) YtQ = + f Hs, K?_, (Ya)s)ds + f a(s, K?_, (Ya)s) • dBf, /3(a) < t < 6(a). 
(1-16) Indeed, if 0(a) = Tk, 8(a) = Tk+j and t G [Tk,Tk+j) then (1.14) implies yt° = % + E / b(s,K?_,(Y")s)ds + / *(s,K*L, (Y°)s) • dBf This isequivalent to (1.16). For any t € [0(a), 6(a)), r G N , 0 < h < .. ..< tr and * € C^(Rdr), Ito's lemma and (1.16) yield <b(tu...,tT)(t,Ya) = <l>(tu...,tr)(0(a),Ya)+ [\w(S,Ya);b(s,K?_,(Ya)s))dha(s)ds Jo ' n E E r * i , ( ^ Q ) M * , ^ + [\vns,Y«)a(s,K?_,(Ya)%ha(s)dB?)n, (1.17) Jo where V ^ ( s , Ya) is considered as a row vector for matrix multiplication and (-,-)k denotes the usual inner product in JRk. 23 N o t a t i o n 1 .10. L e f t >' 0, y e Cd, r € N , 0 < ti < ... < tr and * € C 6 2 ( IR d r ) , K € MF(Cd). We write d d AtK)mt,y) = (VW,y)Mt,Kt,y% + = A ? E [ * ( * W - n i ( * ( a ) < *)(*« - 1 ) ] • (̂*)=̂ E[r<v '̂y>̂ ^̂ The following is the main result of this Subsection. P r o p o s i t i o n 1 .11 . For any \& € Do = + £ J A(Ky_)(*)(s,y)K?(dy)te In particular, K^(l) = 1 + Z^(l). P r o o f . We compute = ^ E [ * ( ^ A ) ' Y Q ) + f A(K)ms,Ya)h«(s)dS .. (by 1.17) f A(K)($)(S,Ya)ha(s)dS + ^ ( v * ( s , y > ( 5 , i ^ f _ , ( y T ) , / i Q ( 5 W ) r j (since the term inside : the square brackets is 0 if 5(a) < t) N tit u» i J (1.18) + A? E [[\™(s,Y<*)a(s,K?_, (Yay),h«(S)dB?)T + 4 - E [*(P(«),Ya) - W(^)-,Ya)l(5(a) < t)] - J* j A(K)(H!)(s,y)K»(dy)ds + WtN(*) 24 = tf0"(¥).+ f\f A(K)(<t>)(s,y)K? (dy)ds + WtN(V) + Zf(t>). This proves (1.18). To finish the proof of the proposition observe that if ̂  = 1 then V # = 0, Lemma 1.12. For any l e ^ a S I, let M t a = Mf (#) := '(&» - l)l(*(a) < t)V{6{a)-,Ya). Then (a) Ma is an (T^)-martingale. , . . (b) (Ma)t = tiha(sh(s, (KNy,(Yay)y(s,Y<*)2ds. (c) Ifa1^a2then(Mai,Ma2)t = 0: Proof (a) By stochastic integration it suffices to prove the result for ̂  = 1 (see the proof of part (b).below for a similar argument). Let Jt := o(l(6(a) < s) : s < t). The only information contained in Joo is the death time of a: Therefore £ Q is independent of Joo. 
We must show that E[(Za - l)(l(<5(a) < t) - l(6(a) < s))\f?} = E[(^ - l){l{s < 6(a) < t))\F»] = 0. , (1.19) Let 6, q, be real numbers, r € {0,2} and A\(n, u, b) = {B^ < b : u < s, n € J ( n , s) for some n} A2(v,q) = {ev < q : v € J ( n , s) for some n , <5(i/) < s} A$(6,r) = {ie = r,6 £ J(n,s) for some n , <5(0) < s}. The cr-field J " ^ is generated by sets of the form A = Ai(r/i,ui,6i) n . . . Ai(r)i,Ui,bi) f\A2(vx, qi) n . . . n A 2 ( i ^ , 9j) n A3(ei,ri)... n A 3(e f c ,r f c), where <5(i>7), <5(0m) < «• Z = 1,... , j , m = 1 . . . , fc. Hence £ Q is conditionally independent of A given 6(a) > s. Therefore P [ £ a = r, A , <5(a) > s, <5(a) < tj - P [£ Q = r ]P[A, (5(a) > a, (5(a) < r l Since (£ — 1) has mean zero, the above equation implies (1.19). 25 (b) We claim that it suffices to show that Um°)<-))i = '[tha(sh(a,{KNy,(Yay)d8- (1.20) Jo where (l(8(a) < -))t denotes the dual predictable projection of l(8(a) < t). The claim follows from the theory of stochastic integration. Put Ht := ^f((8(a) At)—,Ya). H is a bounded predictable process. Moreover Xt := (£ a — l)l(8(a) < t) is a semimartingale. Clearly (H • X)t = (£« - l)^!(8(a)-,Ya)l(8(a) < t). Hence, if (1:20) holds then (Ma)t = (H-X)t " ' = fH2sd(X)s • Jo = / ^ ( ( % ) A i ) - , r ) J a ( ( f a - i ) i ( * ( a ! < . ) ) s • Jo = N((8(a) As)-,Ya)2(Za -l)2d(l(8(a) < •)>,' Jo (by the independence of £ Q ) s = {%jS-ya)2ha(sh(8AKN)'AYtt)')ds.. Jo (by (1-20)). To prove (1.20) we will need the following fact from the theory of semimartingales: Let (At,Tt) be a locally integrable strictly increasing adapted process such that l i m ^ o o At = 0 0 a.s. Let u i-> Cu be the inverse function of t ^ At. Note that for every u, Cu is a (Tt)-stopping time. Set Qt — Tct • F ° r any (progressively measurable) process X we can define the time changed process XA — X(Ct). 
If .{X, (Ft)) is a semimartingale and its semimartingale decomposition is given by Xt = Xo +.Mt +'Vt then (XA, (Qt)) is also a semimartingale and a semimartingale • decomposition is given by Xf = X0 + MtA + VtA. Let At := N f*(a) 7 ( 5 , (KN)S, (Ya)s)ds. Wi th this notation Remark 1.9(b) implies 8(a) — inf{r > 8(a) : AT > ea}. We now apply the time, change result to the process (At^t*)- Let Cu be the inverse of At (on {t > 8(a)}). That is, Ct is defined by fQCt 7 ( s , (KN)S\ (Ya)s)ds = t. Let Qt = Tgt. Set Xt = l(5(a) < t). Then XA is a (^{)-semimartingale and (X)t = (XA)At- We compute the compensator of the increasing process (XA,(Qt)). One easily guesses (XA)t = AS^ At. We must check that E[l(8(a)<Ct)-l(8(a) <Cs)\gs] = E[As{a)At-A6{a)As\gs]. (1.21) To avoid trivialities we assume t > s > 8(a). On the set {8(a) < Cs} (1.21) is trivially true. So let's assume that 5(a) > Cs. We have then E[m*) <Ct)\gs,8(a)>Cs} = E[l(AS{a)<t)\gs,8(a)>Cs] = E[l(eQ <t)\Qs,ea> s] (by the definition of A, C). 26 = 1 - exp(—(t — s)). In the last line we have used.the fact that ea is an exponential r.v. independent of Qs given eQ > s. We also compute , E[AS(Q) A * - s\Qs, 8(a) > Cs] = E ' =E J l ( r <As{a))dT\gs,6(a) > Cs / 1(T < ea)dT\Gs,ea > s = E ^ l ( r < ea)dT|eQ > s = / P[T < ea]dr Jo . = 1 — exp(—(< — s)). (X)t = (XA)At = AS{a) A At ^(5(a)At (since vl is increasing) (c) Arguing as in part (b) we can assume = 1. Since Mai and M " ! are jump martingales whose jumps can't happen simultaneously a.s., [ M Q 1 , Mai)t = £ A M f 1 A M f 2 = 0 The desired conclusion follows since the angle bracket is the dual predictable projection of the square bracket. • C o r o l l a r y 1.13. For.any ̂  G Do, Z.A,(^r.) is a right continuous (T^)-martingale with square function K (ZN(nt = f JHs,yf1(s,(KN)\ys)K^(dy)ds. Therefore Moreover | A Z N ( $ ) | < ^||*| P r o o f . Clearly Z^(^) is an r.c.1.1. martingale, since each of the M t Q is. 
Its square function is easily calculated: (MQ was defined in Lemma 1.12) a<t 27 (by Lemma 1.12 (c)) (by Lemma 1.12 (b)) The deterministic bound on the jumps of ZN(ty) comes from the fact that | A M Q | < ^ | | ^ | | o o and two jumps can't happen at the same time a.s. • Our next step is to show that Z^(^) is in IP, for p > 1. In fact, we shall proof a stronger result. We wil l require the following well known lemma. Lemma 1.14 (Gronwall's lemma). Assume f,g: [0,oo) —> [0,oo) whereg is non-decreasing, f is bounded on compacts and f{t)<c(g(t) + ff(*)ds Then f(t) <cectg(t) for all t>0. Proof. See for example (Perkins 1993, Lemma 4.6.) • Lemma 1.15. There is a function T : N x [0, oo) —> [0, oo) such that SuPnKtN*(l)p + (ZN(l))pt/2)<Tp(t)=T{p,t). N . Proof We replicate the proof of Lemma 2.1 (Perkins 1995). By Proposition 1.11 and Corollary 1.13 KN(l) = 1 + ZN{1) is a non-negative martingale. Therefore 4 (1.22) Define a sequence of stopping times by f n : = mf{t > 0 : K^(l) > n} An. (1.22) implies that T n —> oo a.s. Now, (ZN(l))tATn= Jo J 1(s,(KNy,y)K^(dy)ds ftArn / rs \ < JT t(s)(l + J.K>"(l))K?{l)da. (by the assumption on 7) <f(t)(i + y K^{i)ds2y 28 Burkholder's inequality yields E [ A : ^ ( I ) ? A T J <ci(p) [1 + f ( t ) p / 2 E ( 1 + j ( tf**(i)<k"J Notice that E[ATA r*(l)fA T n]-< n p . Therefore we can apply Gronwall's lemma to obtain nKN*(l)ptATn}<Tp(t)=c2{p,t)ec^ (L23) Hence mKN*(l)pt] < l i m i n f E [ X ^ * ( l ) ? A T J (by Fatou's lemma) <rp(t). Note that Tp{t) is independent of N to complete the proof of the lemma. • C o r o l l a r y 1.16. For any ip € Do, ZN(ip) is a square integrable martingale. R e m a r k 1.17. Both WN and ZN can be shown to be orthogonal martingale measures. We won't prove this result because we won't need it. We can recast (1.18) is S P D E form. We shall restrict ourselves to d = 1 and to the Markovian case. That is, we consider only Mp (fft)-valued superprocesses. For simplicity we assume 7 = 1 . 
Then solution of (1.18) corresponds formally to = \-^{o-2(tMt,-U)u(t,x)) - ^(b(tMtr),x)u(t,x)) +ZN + v -WN. We shall see that WN -> 0 in a strong sense. Taking into account Corollary (1.13) and letting N —>• oo we see that this equation becomes (formally) d u ^ x ) = ±-^L(a2(t,u{t,-),x)u(t,x)) - ^(b(t,u{t,-),x)u(t,x)) + y/u{t,x)W(t,x), where W is a space time white noise. • 1.3 Tightness of the Normalized Branching Particle Systems In this section we show that the sequence {KN} defined in Section 1.2 possesses at least one limit point. To this end we shall employ the following theorem (Dawson 1993, p. 48). T h e o r e m 1.18 (Jakubowsk i ' s C r i t e r i on ) . Let (E,d) be a Polish space. Let F be a family of real continuous functions on E that separates points in E and is closed under addition. Given f e F , define f : Dp —> 7J([0, oo),!R) by (fx)(t) := f(x(t)). A sequence {Pn} of probability measures on Dp is tight iff the following two conditions hold: 29 (a) ( C o n v e r g e n c e o f p r o j e c t i o n s . ) The family {PN} is F-weakly tight, i.e. for each / € F the sequence {PN o ( / ) - 1 } of probability measures in D([0, oo),IR) is tight. (b ) ( C o m p a c t c o n t a i n m e n t c o n d i t i o n . ) . For each T > 0 and e > 0 there is a compact set Kr,e C E such that . • ' • m{PN(D([0,T],KT,e)) >l-e. n If the sequence {PN} is tight, then it is relatively compact in the weak topology on M\(DE). We wil l apply Jakubowski's criterion to the sequence {Law(KN)}. Here E = Mp(Cd). We tackle steps (i) and (ii) in the next two Subsections. 1.3.1 Convergence of the Projections We begin this Subsection by quoting some useful theorems. T h e o r e m 1.19. Suppose that (Xn) is a sequence of locally square integrable martingales. Then for (Xn) to be.tight with continuous limit points it is sufficient that 1. The sequence (Xft) is tight in IR. 2. The sequence (Xn) is tight and its limit'points are continuous. 3. For all M € N , e > 0 . . .. . 
• lira P n [ s u p | A X n | > e l 0. P r o o f . See Theorems VI.3.26 and VI.4.13 of (Jacod & Shiryaev 1987). • T h e o r e m 1.20. (a) Let (Mn), (Yn) be sequences of real-valued processes. If the sequence (Mn) is such that VT > 0, . Ve > 0,. ' l im PN n—too sup|M t n | > e l = 0, t<T . . -1 and if the sequence (Yn) is tight (resp. converges in law to Y), then the sequence (Yn + Mn) is tight (resp. converges in law toY). (b) Let (Mn) be a tight sequence of processes with values in Polish space E\, and let (Yn) be a tight sequence of processes with values in a (possibly different) Polish space E2- Suppose that the limit points of both sequences are continuous. Then the sequence (Yn,Mn) is tight and its limit points are continuous. ; .. P r o o f . See Lemma VI.3.31 and Corollary VL3.33 of (Jacod & Shiryaev 1987). • N o t a t i o n 1 .21. Let's write A(6, T, y) for the oscillation of the path y: A{6,T,y):=sup{\y(t)-y(s)\:s,t€[0,T};\s-t\<6}. 30 P r o p o s i t i o n 1.22. We keep the 'notation' of. the preceding 'Section. Fix <JhE'Z>o- (a) For any t > 0, l im -E[WtN*(§)] = 0: • , . (b) The sequence ((ZN($))) is tight and its limit points are continuous. " ' *•. (c) The sequence (ZN.(^)) is tight and its limit points are continuous. (d) The sequence ^f0 f A(KN)(^)(s,y)K^(dy)ds^j is tight and its limit points are continuous. P r o o f (a) Let \? = [\™(s,Ya)o(s,K?_,Y°),ha(s)dB?)n Jo This is a continuous local (jF/^-martingale (a martingale indeed!), since Bd is a (.7^')-Brownian motion (see Remark 1.9(c)). Note that (Va,Va')t = 0 if a '^ a' because of the independence of the Brownian motions Ba and Ba'. Moreover (Va)i < f || V^(s,Ya)o(S\:K^,Ya)\\2hfds Jo <ll liL fr^s.K^atfhys Jo (by the hypothesis on o). Hence TE[(WN*(y))t] = E (since the cross-covariations are 0)- < — E ~ N N V * "°°-E -:7V < ,:7V r. || V*' E i i v * ni fr(p,s,K^(i))2hys ^ A f a ^ . s A f ^ l ) ) 2 ) ^ rE[Af (l)T(p,5,/,f*(l))2]d. J O 112 ^E[T(p+l, s,Af*(l)) 2] TV 0 as N -> ex. 
This concludes the proof of (a).

(b) Fix $T > 0$ and choose $0 \le s < t \le T$. By Corollary 1.13,
$$\langle Z^N(\Psi)\rangle_t - \langle Z^N(\Psi)\rangle_s = \int_s^t\!\!\int \Psi(u,y)^2\,\gamma\big(u,(K^N)^u,y^u\big)\,K_u^N(dy)\,du.$$
Recall that our hypotheses imply that $\gamma(u,(K^N)^u,y^u) \le \Gamma(t)(1 + tK_t^{N*}(1))$ for $u \le t$. Thus
$$\langle Z^N(\Psi)\rangle_t - \langle Z^N(\Psi)\rangle_s \le \|\Psi\|_\infty^2\,\Gamma(t)\big(1 + tK_t^{N*}(1)\big)\int_s^t K_u^N(1)\,du \le (t-s)\,\|\Psi\|_\infty^2\,\Gamma(T)\big(1 + TK_T^{N*}(1)\big)K_T^{N*}(1).$$
By Lemma 1.15, there is a finite constant $A = A(T)$ such that
$$\sup_N \mathbb E\Big[\big(\big(1 + TK_T^{N*}(1)\big)K_T^{N*}(1)\big)^2\Big] \le A.$$
Hence, for any $\varepsilon > 0$,
$$P\big[\Delta\big(\delta,T,\langle Z^N(\Psi)\rangle\big) > \varepsilon\big] \le \varepsilon^{-2}\,\mathbb E\big[\Delta\big(\delta,T,\langle Z^N(\Psi)\rangle\big)^2\big] \le \varepsilon^{-2}\,\delta^2\,\|\Psi\|_\infty^4\,\Gamma(T)^2 A.$$
Therefore
$$\lim_{\delta\to 0}\,\limsup_N P\big[\Delta\big(\delta,T,\langle Z^N(\Psi)\rangle\big) > \varepsilon\big] = 0.$$
This shows that $(\langle Z^N(\Psi)\rangle)$ is tight with continuous limit points, and completes the proof of (b).

(c) We shall use Theorem 1.19. Use part (b) and note that condition 3 in Theorem 1.19 is satisfied since $|\Delta Z^N(\Psi)| \le \|\Psi\|_\infty/N$ by Corollary 1.13.

(d) The proof is very similar to that of part (b). Let us write
$$S_t^N := \int_0^t\!\!\int A(K^N)(\Psi)(s,y)\,K_s^N(dy)\,ds. \quad (1.24)$$
From the assumptions on $b$, $\sigma$ and $\Psi$ it follows that
$$\big|A(K^N)(\Psi)(s,y)\big| \le \|\nabla\Psi\|_\infty\,\Gamma\big(p,s,K_s^{N*}(1)\big) + \tfrac12\|\Delta\Psi\|_\infty\,\Gamma\big(p,s,K_s^{N*}(1)\big)^2.$$
Therefore, for any $T > 0$, $0 \le s < t \le T$,
$$|S_t^N - S_s^N| \le c_T\,(t-s)\,\Gamma\big(p+1,T,K_T^{N*}(1)\big)^2.$$
We know that there is a finite constant $A = A(T)$ such that
$$\sup_N \mathbb E\big[\Gamma\big(p+1,T,K_T^{N*}(1)\big)^4\big] \le A.$$
Use Arzelà–Ascoli as in (b) to finish the proof of (d). □

Corollary 1.23. For any $\Psi \in D_0$ the sequence $(K^N(\Psi))_N$ is tight and its limit points are continuous.

Proof. The Corollary follows immediately from Propositions 1.11, 1.22 and Theorem 1.20. □

1.3.2 Compact Containment Condition

Proposition 1.24. Given $T, \varepsilon > 0$ there exists a compact set $F_{T,\varepsilon} \subset M_F(C^d)$ such that
$$\inf_N P\big\{K^N \in D([0,T],F_{T,\varepsilon})\big\} \ge 1 - \varepsilon. \quad (1.25)$$

Proof. Suppose that for all $n$, $T > 0$ there is a compact subset $G_{T,n}$ of $C^d$ such that
$$\sup_N P\Big\{\sup_{0\le t\le T} K_t^N(G_{T,n}^c) > 2^{-n}\Big\} \le \eta_n, \quad (1.26)$$
where $\sum_n \eta_n < \infty$. We claim that (1.26) implies (1.25). Indeed,
$$\sup_N \sum_n P\Big\{\sup_{0\le t\le T} K_t^N(G_{T,n}^c) > 2^{-n}\Big\} \le \sum_n \eta_n < \infty.$$
For each $N$, the Borel–Cantelli lemma implies that
$$P\Big\{\sup_{0\le t\le T} K_t^N(G_{T,n}^c) \le 2^{-n} \text{ eventually in } n\Big\} = 1.$$
That is,
$$\lim_{m\to\infty}\,\inf_N P\Big\{\sup_{0\le t\le T} K_t^N(G_{T,n}^c) \le 2^{-n}\ \forall n \ge m\Big\} = 1,$$
the limit being uniform in $N$ because the bound $\sum_{n\ge m}\eta_n$ does not depend on $N$. Hence there is an $m = m(\varepsilon)$ such that
$$\inf_N P\Big\{\sup_{0\le t\le T} K_t^N(G_{T,n}^c) \le 2^{-n}\ \forall n \ge m\Big\} \ge 1 - \varepsilon/2.$$
By relabeling we can assume $m = 1$. Thus (1.26) implies
$$\inf_N P\Big\{\forall n\ \sup_{0\le t\le T} K_t^N(G_{T,n}^c) \le 2^{-n}\Big\} \ge 1 - \varepsilon/2. \quad (1.27)$$
Recall Prohorov's theorem: a set $F_{T,\varepsilon} \subset M_F(C^d)$ is compact if it is closed and the conditions below are met.

(P1) For all $n$ there exists a compact set $G_n \subset C^d$ such that $\sup_{\mu\in F_{T,\varepsilon}}\mu(G_n^c) \le 2^{-n}$.

(P2) $\sup_{\mu\in F_{T,\varepsilon}}\mu(1) < \infty$.

Let us suppose that there is an $M = M(\varepsilon)$ such that
$$\sup_N P\big\{K_T^{N*}(1) > M(\varepsilon)\big\} \le \varepsilon/2. \quad (1.28)$$
Set
$$F_{T,\varepsilon} = \big\{\mu \in M_F(C^d) : \mu(1) \le M(\varepsilon),\ \mu(G_{T,n}^c) \le 2^{-n}\ \forall n\big\}.$$
With this choice of $F_{T,\varepsilon}$ we see that (1.27), (1.28) together with (P1)–(P2) imply (1.25).

We now show that (1.28) holds. Since $K_t^{N*}(1)$ is a martingale, the maximal inequality gives
$$P\big\{K_T^{N*}(1) > M\big\} \le \mathbb E\big[K_T^{N*}(1)\big]/M. \quad (1.29)$$
By Lemma 1.15 there is a function $\Gamma_1$ independent of $N$ such that $\mathbb E[K_T^{N*}(1)] \le \Gamma_1(T)$. Put $M = 2\Gamma_1(T)/\varepsilon$ to see that (1.29) implies (1.28).

Our next task is to exhibit a set $G_{T,n}$ such that (1.26) holds. For this purpose we shall need the following two lemmas.

Lemma 1.25. For any $\eta > 0$ there is a compact set $F = F(\eta,T) \subset C^d$ such that
$$\sup_N \mathbb E\big[K_T^N(F^c)\big] \le \eta. \quad (1.30)$$

Proof. Recall from Section 1.2 that, conditional on $\mathcal A$, the law of a path $(Y^\alpha)^T$ is obtained as the unique solution of a system of SDE's with coefficients uniformly bounded by $\Gamma(p,T,K_T^{N*}(1))$. So for any $Y^\alpha$ in the support of $K_T^N$, a standard argument (e.g. Rogers and Williams (1987), p. 166) shows that there is a constant $c > 0$ such that for any $\varepsilon > 0$, $k \in \mathbb N$, $T > 0$ ($k$ is allowed to depend on $\varepsilon$),
$$P\big[\Delta(k^{-1},T,Y^\alpha) > \varepsilon \mid \mathcal A\big] \le \frac{cT}{\varepsilon^4}\Big(\frac{\Gamma(p,T,K_T^{N*}(1))^2}{k} + \frac{\Gamma(p,T,K_T^{N*}(1))^4}{k^3}\Big).$$
Define $F_{k,\varepsilon} = \{y \in C^d : \Delta(k^{-1},T,y) \le \varepsilon\}$.
We compute, with $F = F_{k,\varepsilon}$:
$$\mathbb E\big[K_T^N(F^c)\big] = \mathbb E\Big[\frac1N\sum_\alpha 1\big((Y^\alpha)^T \notin F\big)\Big] = \mathbb E\Big[\frac1N\sum_\alpha P\big[\Delta(k^{-1},T,Y^\alpha) > \varepsilon \mid \mathcal A\big]\Big] \le \frac{cT}{\varepsilon^4}\,\mathbb E\Big[K_T^{N*}(1)\Big(\frac{\Gamma(p,T,K_T^{N*}(1))^2}{k} + \frac{\Gamma(p,T,K_T^{N*}(1))^4}{k^3}\Big)\Big].$$
Lemma 1.15 implies that there is a constant $A$ bounding the expectation on the right, since the random terms inside the expectation involve only polynomials in $K_T^{N*}(1)$. To finish the proof set $\varepsilon = k^{-1/8}$ and choose $k$ large enough so that $cTA(k^{-1/2} + k^{-5/2}) \le \eta$. This gives $\mathbb E[K_T^N(F^c)] \le \eta$. □

Lemma 1.26. Let $\phi \in b\mathcal C^d$, $\tau$ a bounded $(\mathcal F_t^N)$-stopping time. We write $\phi^\tau(y) := \phi(y^\tau)$. Then there exists an $(\mathcal F_t^N)$-martingale $m = m^{\phi,\tau}$ such that $m_t = 0$ if $t \le \tau$ and
$$K_t^N(\phi^\tau) = K_\tau^N(\phi) + m_t \quad \forall t \ge \tau \text{ a.s.}$$

Proof. For any $\phi$, $\tau$ satisfying the hypotheses of the lemma we define
$$m_t^{\phi,\tau} = \begin{cases} \dfrac1N\displaystyle\sum_\alpha \phi\big((Y^\alpha)^\tau\big)\,1\big(\tau < \zeta(\alpha) \le t\big)\,(\xi_\alpha - 1), & \text{if } t > \tau;\\[4pt] 0, & \text{otherwise},\end{cases} \quad (1.31)$$
where $\zeta(\alpha)$ is the branching time of the particle $\alpha$ and $\xi_\alpha$ its number of offspring. Let us assume first that $\phi \in b\mathcal C_u$ and that $\phi$ is of the form $\phi = \Psi(t_1,\ldots,t_r) \in D_0$: that is, for some $r \in \mathbb N$, $0 \le t_1 < t_2 < \cdots < t_r \le u$, $\Psi \in C_0^\infty(\mathbb R^{rd})$, $\phi = \Psi(t_1,\ldots,t_r)$. Assume further that $\tau = u$. Then $\nabla\Psi(s,y) = 0$ and $\Psi_{ij} = 0$ for $s \ge u$. Therefore Proposition 1.11 implies
$$K_t^N(\phi^\tau) - K_\tau^N(\phi) = Z_t^N(\phi) - Z_\tau^N(\phi) = m_t^{\phi,\tau} \quad \forall t \ge \tau \text{ a.s.} \quad (1.32)$$
Clearly $m^{\phi,\tau}$ is a martingale. Assume next that $\phi \in b\mathcal C_u$ and $\tau = u$. Any such function $\phi$ is the bounded pointwise limit of a sequence $(\phi_n)$ where each $\phi_n \in b\mathcal C_u \cap D_0$. The bounded convergence theorem and (1.32) imply that
$$K_t^N(\phi^\tau) - K_\tau^N(\phi) = m_t^{\phi,\tau} \quad \forall t \ge \tau \text{ a.s.} \quad (1.33)$$
Now suppose that $\phi \in b\mathcal C$ and $\tau$ takes on only finitely many values, say $\tau \in \{u_1,\ldots,u_m\}$. Then $\phi(y^\tau) = \sum_{i=1}^m 1(\tau = u_i)\,\phi(y^{u_i})$, and we see that $K_t^N(\phi^\tau) = K_\tau^N(\phi) + m_t^{\phi,\tau}$ $\forall t \ge \tau$ a.s. as a consequence of (1.33). Finally, we consider a general bounded $\tau$ and $\phi \in b\mathcal C$. Take a decreasing sequence of stopping times $\{\tau_n\}$ taking only finitely many values and converging a.s. to $\tau$. Note that $\phi^{\tau_n} \to \phi^\tau$ a.s. We know that $K_t^N(\phi^{\tau_n}) = K_{\tau_n}^N(\phi) + m_t^{\phi,\tau_n}$. Note that $K_t^N(\phi^{\tau_n}) \to K_t^N(\phi^\tau)$ a.s.
by bounded convergence, and $K_{\tau_n}^N(\phi) \to K_\tau^N(\phi)$ a.s. by the right continuity of $K^N$. Moreover it is clear from the definition of $m$ that $m^{\phi,\tau_n} \to m^{\phi,\tau}$ a.s. This shows that (1.31) defines the desired martingale in general. Finally note that we always took a.s. limits of martingales bounded in $L^2$, so that $m^{\phi,\tau}$ is indeed a martingale. □

Let us continue the proof of Proposition 1.24. By Lemma 1.25 we can find a compact set $F = F_{T,n}$ such that $\sup_N \mathbb E[K_T^N(F^c)] \le 2^{-2n}$. Define $G := \bigcup_{s\le T} F^s$, where $F^s = \{y^s : y \in F\}$. Note that by the theorem of Arzelà–Ascoli $G$ is also compact. We now show that this is in fact the desired set that satisfies (1.26). Let
$$\tau_N := \inf\{s \ge 0 : K_s^N(G^c) > 2^{-n}\} \wedge T.$$
Set $\phi(y) := 1(y \in G^c)$. From Lemma 1.26 and the right continuity of $K^N(G^c)$ we see that
$$K_T^N\big(\{y : y^{\tau_N} \in G^c\}\big) \ge 2^{-n} + m_T^{\phi,\tau_N} \quad (1.34)$$
on the set $\{\tau_N < T\}$. The second term on the r.h.s. of (1.34) is a martingale null at zero and on the set $\{\tau_N \ge T\}$. Thus
$$1(\tau_N < T)\,2^{-n} + m_T^{\phi,\tau_N} \le 1(\tau_N < T)\,K_T^N\big[\{y \in C^d : y^{\tau_N} \notin G\}\big].$$
Take expectations to obtain
$$2^{-n}\,P[\tau_N < T] \le \mathbb E\Big[1(\tau_N < T)\,K_T^N\big[\{y \in C^d : y^{\tau_N} \notin G\}\big]\Big] \le \mathbb E\big[K_T^N(F^c)\big] \le 2^{-2n},$$
where the middle inequality holds since $\{y : y^{\tau_N} \notin G\} \subset \{y : y \notin F\}$ (because $\tau_N \le T$), and the last one by the definition of the set $F$. Therefore
$$\sup_N P[\tau_N < T] \le 2^{-n}. \quad (1.35)$$
Since $\{\sup_{0\le t\le T} K_t^N(G^c) > 2^{-n}\} \subset \{\tau_N < T\}$, (1.26) follows from (1.35). □

1.3.3 Relative Compactness of $(K^N)$

We are now ready to begin reaping the rewards of our effort.

Theorem 1.27. The sequence $(P[K^N \in \cdot\,])_{N\in\mathbb N}$ is relatively compact in $M_1\big(D([0,\infty),M_F(C^d))\big)$.

Proof. Corollary 1.23 and Proposition 1.24 show that the conditions of Jakubowski's Theorem 1.18 are satisfied. □

1.4 Identification and Uniqueness of the Limit

Typically, it is more difficult to prove uniqueness than existence; this is yet another instance. Fortunately, the uniqueness of the limit points has already been studied (Perkins 1995). In this section we find a martingale problem satisfied by the limit points of the sequence $(K^N)$.
Uniqueness, with some restrictions on the branching rate $\gamma$, follows from results of (Perkins 1995). In light of Propositions 1.11, 1.22 and Corollary 1.13 it is easy to guess that the limit points satisfy a martingale problem of the form 1.3. To prove it we will need a number of results. In addition to our hypothesis (1.3) we will assume that $\gamma$ is Lipschitz continuous. Recall that $\rho$ denotes the sup metric on $C^d$ and $d = d_\rho$ denotes the Vasershtein metric on $M_F(C^d)$. Let $d'$ be the metric on the space $D([0,\infty),M_F(C^d))$ defined in equation (5.2), p. 153, of Ethier and Kurtz (1986). This metric induces the usual Skorohod $J_1$ topology. Recall that as a subspace of $D([0,\infty),M_F(C^d))$ the space $(C([0,\infty),M_F(C^d)),d')$ is closed and is in fact the usual space of continuous paths with the compact open topology. The following hypothesis will be in force throughout this section:
$$|\gamma(t,K,y) - \gamma(t,K',y')| \le \Gamma\big(p,t,K_t^*(1) \vee K_t'^{\,*}(1)\big)\big(\rho(y^t,y'^{\,t}) + d'(K^t,K'^{\,t})\big). \quad (\mathrm{Lip}_\gamma)$$

Fix $\phi \in D_0$. Let $S^N$ be as in (1.24). Let us assume that there are continuous real-valued random processes $S(\phi)$, $Z(\phi)$, $V(\phi)$ and an r.c.l.l. $M_F(C^d)$-valued process $K$ such that
$$Z^N(\phi) \underset{N\to\infty}{\Longrightarrow} Z(\phi), \qquad S^N(\phi) \underset{N\to\infty}{\Longrightarrow} S(\phi), \qquad K^N \underset{N\to\infty}{\Longrightarrow} K.$$
The results of Section 1.3 ensure that these are valid assumptions upon taking an appropriate subsequence. By Skorohod's representation theorem, and taking a further subsequence if necessary, we may assume that the convergence above takes place simultaneously and almost surely. That is,
$$\big(K^N, Z^N(\phi), \langle Z^N(\phi)\rangle, S^N(\phi)\big) \longrightarrow \big(K, Z(\phi), V(\phi), S(\phi)\big) \quad \text{a.s.} \quad (1.36)$$
All the 4-tuples in (1.36) are considered as $D([0,\infty),M_F(C^d)) \times D([0,\infty),\mathbb R)^3$-valued random variables defined on a common probability space (possibly different from the original one). Let us call $\mathcal F^N$ (resp. $\mathcal F$) the filtration generated by $K^N$ (resp. $K$).

Proposition 1.28. $Z_\cdot(\phi)$ is a continuous square integrable $(\mathcal F_t)$-martingale with square function
$$\langle Z(\phi)\rangle_t = V(\phi)_t = \lim_{N\to\infty}\langle Z^N(\phi)\rangle_t.$$

Proof. We already know that $Z_\cdot(\phi)$ is continuous. To prove that it is a martingale it suffices to show that for $s < t$, $r \in \mathbb N$, $0 \le s_1 < \ldots < s_r \le s$ (the $s_i$'s may be restricted to belong to some dense set), and $f : M_F(C^d)^r \to \mathbb R$ bounded continuous,
$$\mathbb E\big[f(K_{s_1},\ldots,K_{s_r})Z_s(\phi)\big] = \mathbb E\big[f(K_{s_1},\ldots,K_{s_r})Z_t(\phi)\big]. \quad (1.37)$$
Note that by Lemma 1.15
$$\sup_N \mathbb E\Big[\big(f(K_{s_1}^N,\ldots,K_{s_r}^N)Z_t^N(\phi)\big)^2\Big] < \infty.$$
Hence the families $\{f(K_{s_1}^N,\ldots,K_{s_r}^N)Z_t^N(\phi) : N \in \mathbb N\}$ and $\{f(K_{s_1}^N,\ldots,K_{s_r}^N)Z_s^N(\phi) : N \in \mathbb N\}$ are both uniformly integrable. Moreover $f(K_{s_1}^N,\ldots,K_{s_r}^N) \to f(K_{s_1},\ldots,K_{s_r})$ a.s., since almost sure convergence in the Skorohod space $D([0,\infty),M_F(C^d))$ implies almost sure convergence at the points of continuity of $t \mapsto K_t$, i.e. everywhere with the possible exception of a countable set, and we are allowed to choose the $s_i$'s outside such an exceptional set. Also, $Z_t^N(\phi) \to Z_t(\phi)$ a.s. and $Z_s^N(\phi) \to Z_s(\phi)$ a.s. Thus a.s. convergence plus uniform integrability imply
$$\lim_{N\to\infty}\mathbb E\big[f(K_{s_1}^N,\ldots,K_{s_r}^N)Z_t^N(\phi)\big] = \mathbb E\big[f(K_{s_1},\ldots,K_{s_r})Z_t(\phi)\big],$$
$$\lim_{N\to\infty}\mathbb E\big[f(K_{s_1}^N,\ldots,K_{s_r}^N)Z_s^N(\phi)\big] = \mathbb E\big[f(K_{s_1},\ldots,K_{s_r})Z_s(\phi)\big].$$
But $Z^N(\phi)$ is an $(\mathcal F_t^N)$-martingale, so
$$\mathbb E\big[f(K_{s_1}^N,\ldots,K_{s_r}^N)Z_t^N(\phi)\big] = \mathbb E\big[f(K_{s_1}^N,\ldots,K_{s_r}^N)Z_s^N(\phi)\big].$$
The desired result (1.37) follows. We must now compute the square function of $Z_t(\phi)$. We know that $\langle Z^N(\phi)\rangle_t$ converges almost surely. Note that by Lemma 1.15 the sequences
$$\big\{f(K_{s_1}^N,\ldots,K_{s_r}^N)\big(Z_t^N(\phi)^2 - \langle Z^N(\phi)\rangle_t\big) : N \in \mathbb N\big\}, \qquad \big\{f(K_{s_1}^N,\ldots,K_{s_r}^N)\big(Z_s^N(\phi)^2 - \langle Z^N(\phi)\rangle_s\big) : N \in \mathbb N\big\}$$
are uniformly integrable. Argue as before to prove that $(Z_t(\phi)^2 - \langle Z(\phi)\rangle_t)_{t\ge 0}$ is an $(\mathcal F_t)$-martingale. This concludes the proof of the Proposition. □

Proposition 1.29.
$$\langle Z(\phi)\rangle_t = \int_0^t\!\!\int \gamma(s,K,y)\,\phi(y)^2\,K_s(dy)\,ds; \quad (1.38)$$
$$S_t(\phi) = \int_0^t\!\!\int A(K)(\phi)(s,y)\,K_s(dy)\,ds \quad \forall t \ge 0 \text{ a.s.} \quad (1.39)$$
($S^N$, $S$ were defined at the beginning of the section.)

Proof. Let
$$T_j := \inf\Big\{s \ge 0 : \sup_N K_s^{N*}(1) \vee \int_0^s \big(1 + K_u^{N*}(1)\big)\,du > j\Big\}.$$
Note that by Lemma 1.15, $\lim_{j\to\infty} T_j = \infty$ a.s. Moreover $\gamma(\cdot,\cdot,\cdot)\,1(\cdot < T_j)$ is bounded. Therefore, for any $m,m',n,n' \in \mathbb N$,
$$\int_0^{t\wedge T_j}\Big|K_s^m\big(\gamma(s,K^n,\cdot)\phi(\cdot)^2\big) - K_s^{m'}\big(\gamma(s,K^{n'},\cdot)\phi(\cdot)^2\big)\Big|\,ds \le \int_0^{t\wedge T_j}\Big|K_s^m\big(\big(\gamma(s,K^n,\cdot)-\gamma(s,K^{n'},\cdot)\big)\phi(\cdot)^2\big)\Big|\,ds + \int_0^{t\wedge T_j}\Big|K_s^m\big(\gamma(s,K^{n'},\cdot)\phi(\cdot)^2\big) - K_s^{m'}\big(\gamma(s,K^{n'},\cdot)\phi(\cdot)^2\big)\Big|\,ds.$$
We estimate the first term using $(\mathrm{Lip}_\gamma)$:
$$\int_0^{t\wedge T_j}\Big|K_s^m\big(\big(\gamma(s,K^n,\cdot)-\gamma(s,K^{n'},\cdot)\big)\phi(\cdot)^2\big)\Big|\,ds \le \|\phi\|_\infty^2\,\Gamma(p,t,j)\,d'\big((K^n)^t,(K^{n'})^t\big)\int_0^{t\wedge T_j} K_s^m(1)\,ds = c_1(j)\,d'\big((K^n)^t,(K^{n'})^t\big).$$
For the second term, since $y \mapsto \gamma(s,K^{n'},y)\phi(y)^2$ is bounded on $\{s < T_j\}$ by a constant depending only on $j$,
$$\int_0^{t\wedge T_j}\Big|K_s^m\big(\gamma(s,K^{n'},\cdot)\phi(\cdot)^2\big) - K_s^{m'}\big(\gamma(s,K^{n'},\cdot)\phi(\cdot)^2\big)\Big|\,ds \le c_2(j)\int_0^{T_j}\big|K_s^m(1) - K_s^{m'}(1)\big|\,ds.$$
Let $n' \to \infty$. Use the continuity of $\gamma$ and dominated convergence to obtain
$$\int_0^{t\wedge T_j}\Big|K_s^m\big(\gamma(s,K^n,\cdot)\phi(\cdot)^2\big) - K_s^{m'}\big(\gamma(s,K,\cdot)\phi(\cdot)^2\big)\Big|\,ds \le c_1(j)\,d'\big((K^n)^t,K^t\big) + c_2(j)\int_0^{T_j}\big|K_s^m(1) - K_s^{m'}(1)\big|\,ds \quad \text{a.s.}$$
Now let $m' \to \infty$. Use the fact that $y \mapsto \gamma(s,K,y)\phi(y)^2$ is bounded continuous, together with dominated convergence, to get
$$\int_0^{t\wedge T_j}\Big|K_s^m\big(\gamma(s,K^n,\cdot)\phi(\cdot)^2\big) - K_s\big(\gamma(s,K,\cdot)\phi(\cdot)^2\big)\Big|\,ds \le c_1(j)\,d'\big((K^n)^t,K^t\big) + c_2(j)\int_0^{T_j}\big|K_s^m(1) - K_s(1)\big|\,ds \quad \text{a.s.}$$
Now put $n = m$. Dominated convergence yields
$$\int_0^{t\wedge T_j}\Big|K_s^n\big(\gamma(s,K^n,\cdot)\phi(\cdot)^2\big) - K_s\big(\gamma(s,K,\cdot)\phi(\cdot)^2\big)\Big|\,ds \underset{n\to\infty}{\longrightarrow} 0 \quad \text{a.s.} \quad (1.40)$$
For every $t$ there is a $j = j(t)$ such that $t \wedge T_j = t$. Moreover, the integral in (1.40) is increasing as a function of $t$. Therefore
$$\int_0^t\Big|K_s^n\big(\gamma(s,K^n,\cdot)\phi(\cdot)^2\big) - K_s\big(\gamma(s,K,\cdot)\phi(\cdot)^2\big)\Big|\,ds \to 0 \quad \forall t > 0 \text{ a.s.}$$
The validity of (1.38) follows from the above and Proposition 1.28. The proof of (1.39) is almost identical and will be omitted. □

We end this chapter introducing some notation and stating the main theorem of the chapter.

Notation 1.30. Let
$$M_F(C^d)^t = \{\mu \in M_F(C^d) : y = y^t\ \mu\text{-a.a. } y\}, \quad t \ge 0,$$
$$\Omega_H = \{K \in C([0,\infty),M_F(C^d)) : K_t \in M_F(C^d)^t\ \forall t \ge 0\},$$
$H_t(\omega) = \omega(t)$ for $\omega \in \Omega_H$, $\mathcal H_t = \sigma(H_s : 0 \le s \le t)$, $\mathcal H = \sigma(H_s : s \ge 0)$, and $(\bar\Omega_H,\bar{\mathcal H}_t) = (\Omega_H \times C^d,\ \mathcal H_t \times \mathcal C_t^d)$.

Theorem 1.31.
There- fore, for any m,m',n,n' € N rtATj Jo K?(~f(s,Kn, •)0(-)2) - - K f h(s\kn\ •)<!>(• f) tATj ds < / k S M ( ( 7 ^ ^ N , - ) - 7 ( 5 , ^ , - ) ) 0 ( - ) 2 ) Jo 1 , + / K?Ms\ Kn\ .)0(-)2). - K?(7(s, Kn', -)0(-)2) Jo ds We estimate rtATj / . K™((y(s,K»,.)-7(s,Kn',.))<j>(;)2) Jo • , ptATj ^ I I ^ I I ^ ^ ^ ^ ^ V ) ^ ^ . ) ^ K?{l)ds ds = cl(j)d{{K%(Kn')t). In addition, rtATj Jo ^r(7 (^^ n ' , - )0(-) 2)-^ S M ' (7 (^^ n ' , - )0( - ) 2 ) tATi ds ^Mlr^K^f -\K?(l)-K?(l)\ds JJo = c 2(j) rT3\K?(l)-K?'(l)\ds. Jo Let n' —y oo. Use the continuity of 7 and dominated convergence to obtain rtATj rtATj / ^ r (7 ( s , ^ n , - M - ) 2 ) - ^ S M ' ( 7 ( 5 , X , - M - ) 2 ) ds Jo rtATj , ' • <C l(i)d((^") i, Jft: t)+.C 2(i) / \K™{l)-K?{\)\ds a.s. Jo 38 Now let rn! —> oo. Use the fact that y H - j(s,K,y)<p(y)2 is bounded continuous and dominated convergence to get / |/vr(7 (5 , iv" , -M-) 2 ) -^(7(s ,^ , - )^( - ) 2 )|^ Jo rtATs KciUWK^K^+ciiJ) \K?(1) - K.{l)\ds a.s.- Jo Now put n = m. Dominated convergence yields / | ^ ( 7 ( s , ^ O 0 ( O 2 ) - ^ ( 7 ( s , ^ , - M - ) 2 ) | ^ - >0 a.s. (1:40) Jo . . n-»oo For every integer t there is a j = j(t) such that t A Tj — t. Moreover, the integral in.(1.40) is increasing as a function of t. Therefore \K?fr{s, Kn, •)</>(-)%- Ks(7(s,Kr)<j>(.)2)\ ds -> 0 V O 0 a.s. The validity of (1.38) follows from the above and Proposition 1.28. The proof of (1.39) is almost identical and wil l be omitted. • We end this chapter introducing some notation and stating the main theorem of the chapter. N o t a t i o n 1.30. Let MF(Cd)t = {ueMF(Cd):y = yt u^a.a. y}, t>0. QH = {K € C([0, oo), MF{Cd)) : Kt e MF(Cd)1 Vt > 0}, Ht{oj) — uj{t) for u) € f2#, nt = o-.(Hu:0<s<t), U = o(Hs : s > 0). and (QH,nt) = (nHxcd,ntxcd). T h e o r e m 1.31. 
Any limit point K of the sequence (KN) satisfies the martingale problem: \/4>eD0 Zt(<f>) = Kt(cj>) - <f>(0) - f [ A(K)(4>)(S,y)Ks(dy)ds, t > 0, Jo Jc is a continuous square summable (Tf) martingale such that , (Z(</>))t= f [ j(s,K,y)<f>(y)2ds Vt > 0 a.s. Jo Jcd If in addition we assume that the branching rate 7 satisfies the following condition: • There are h : [0,oo) x ilfj —> IR^ and f : [0, 00) x f2# —> IR which are (Tit)-predictable and satisfy the same Lipschitz condition (1-4) as b, o and \\h(t,K,y)\\ + \f(t,K,y)\ < Y(p,Kf(l)). Moreover, if Q! = (ft! ,T' ,(T[),Q) is a filtered probability space carrying on 39 (Tl)-predictable processes (Kt : t > 0) with sample paths in a.s. and an JR.d-valued (ft)-predictable process (Yt:t> 0) such that fort > 0 Yt = Y0 + Mt + ^Ms^K^ds where M is a continuous local martingale such that (Mx,M^)t = JQt(oa*)(s,K,Y)ijds • a.s., then ^(t,K,Y)=-y(0,K,Y) + f h(s, K,Y)dY(s) + f f(s,K,Y)ds Vt > 0 Q - a.s: Jo Jo then the martingale problem is well posed. . P r o o f . The first part of the theorem follows from the fact that Z?{4>) = '(4>) - (f)(0)-'((f)) together with Propositions 1.28, 1.29 and equation (1.36). The uniqueness part is a consequence of Theorem 5.6 (Perkins 1995.) • R e m a r k 1.32. (a) The additional technical condition needed in Theorem 1.31 to ensure that the martingale problem is well posed is restrictive. However it is satisfied by a large number of interesting examples (Perkins 1995, p. 50). In particular it holds for the branching rate given in Example (a) (page 16). (b) In Theorem 1.31 we don't need the full strength of condition (Lip,) to prove that the limit points of the sequence (KN) satisfy the martingale problem. Continuity (as opposed to Lipschitz continuity) and boundedness by some power p > 1 of the total mass suffice. 
• 40 Chapter 2 Path Properties of a One-Dimensional Superdiffusion with Interactions 2.1 Introduction and Statement of Results In this Chapter we study some path properties of the solution of the historical stochastic equation (Perkins 1995, p. 47) in dimension d = 1. H is a one-dimensional historical Brownian motion and X is a superprocess with interactions. (2.1) shows that X and H have the same family structure. However, the path y (a Brownian path) is replaced by a path Y(y) which is a Brownian motion with a drift depending on X. The term 7 is a mass factor, but under suitable hypothesis (see condition (Ay) in page 44) it can be interpreted as a branching rate (Remark 5.2 of Perkins 1995). A n intuitive explanation of (2.1) is given in Chapter 0. The present chapter is organized as follows. In this Section we review some basic results concerning historical stochastic analysis (Perkins 1993, 1995). We define what we mean by a solution of (2.1) and state Theorem 2.9, the main result of the chapter. The reader is referred to Chapter 0 for additional comments and motivation. In Section 2.2 we introduce and examine some auxiliary processes. These results, as well as their proofs, wi l l be needed in the proof of the main theorem. In Section 2.3 we find a generalized Green's function representation of a localized version of the process X analog to (0.4). W i t h this formula we shall be able to estimate the moments of X. In Section 2.4 we prove Theorem 2.9 using the results of the previous sections together with Kolmogorov's criterion. Finally* in Section 2.5 we give a non-trivial example in which the techniques developed in this Chapter can be successfully applied to discover some path properties of a one-dimensional superdiffusion in a random environment. 2.1.1 Main Result We begin by introducing some basic notation and recalling the definition of historical Brownian motion. (2.1) 41 If \i is a measure, /j,((j>) = f <f> dfi. 
Throughout this chapter C = C1 = C([0, oo),lR) and Ct = C\ = a(ys : s < t), the canonical a-field of C. If y G C we write y* for the path y stopped at t, i.e. yl = y(t A •). Let MAC)1 = {ixeMF{C):y = yt / i - a.a. y}, t>0. . , fiH = { ^ e C ( [ 0 , o o ) , M F ( C ) ) : i i : t G M F ( C , ) ' V*>0}, ift(u;) = u>(t) for w G fi//, •Ht = o(Hs:0<s<t), n = a(Hs: s>0). fi = ( f i , .F , (Tt),TP) wil l denote a filtered probability space satisfying the usual hypotheses. PifFt) denotes the cr-field of (.^-predictable sets in [0, co) x fi and (fi , :F, (Tt)) = (fi x C,T x C, (Tt x F ix 0 < h < t2 < ... < tk and # € ^ ( M * ) . If V e C, let * ( y ) = * - ( * i , * 2 , ' . . . , * f c ) ( y ) = . * ( y ( t i ) ; . . . , y ( * * ) ) - and denote the first and second order partials of fc-i V * ( t j y ) = E l ( i < « t + i ) * . i + r ( y ( t A « i ) , . . . , y ( t A t f c ) ) ; . . i=0 ^ fc - l fc-i . - # ( t , y ) = E 53l(«< W i A ' t j + i ) * m + u + i ( y ( t A t i ) , ; . . , y ( < A « f c ) ) . m=0 i=0 Let oo L>0 = ( J { * ( t l , * 2 , - , - * m ) : 0 < t i < t2 < - < . < m > * G C 0°°(IRm)} U {1}. ro=l D e f i n i t i o n 2.1 (One-dimens ional H i s t o r i c a l B r o w n i a n M o t i o n ) . Let m be a finite Borel measure on JR. A predictable process (Ht : t > 0) on fi with sample paths on fi# is a one- dimensional historical Brownian motion starting at m iff (Ht) satisfies the following martingale problem: (MP)TAAo is a continuous square integrable ^i-martingale such that (Z?$))t = fi j' $(y)2Hs(dy)ds Vi > 0 a.s. There exists a process H satisfying the conditions of Definition 2.1 and it is unique in law. (Dawson & Perkins 1991). We picture H as an infinitesimal tree of branching one-dimensional Brownian motions. 42 D e f i n i t i o n 2 .2 . Let (Kt : t > 0) be a process o n f i with paths on fi#. A set A C [0, oo) x fi is (K,JP) evanescent (or K evanescent) iff A C A i where A i is (JFt* )-predictable and sup IAJ {U,UI, y) = 0 Kt — a.e. y Vt > 0 P — a.s. 
A property holds $(K,P)$-a.e. (or $K$-a.e.) if it holds off a $K$-evanescent set. □

Definition 2.3. Let $(K_t : t \ge 0)$ be a process on $\Omega$ with paths in $\Omega_H$. A map $b : [0,\infty) \times \bar\Omega \to \mathbb R$ is $K$-integrable (respectively $K$-locally integrable) iff it is $(\mathcal P_t^*)$-predictable and $\int_0^t K_s(|b(s)|)\,ds < \infty$ (respectively, $\int_0^t |b(s)|\,ds < \infty$ $K_t$-a.a. $y$) $\forall t \ge 0$ $P$-a.s. □

For technical reasons that will become apparent later we shall not look directly at (2.1) but rather at a historical version of it. We shall call this version $(\mathrm{HSE})_{b,\gamma}$; it is defined below.

Definition 2.4. Suppose that $H$ is a one-dimensional historical Brownian motion starting at $m \in M_F(\mathbb R)$. Let $b : [0,\infty) \times M_F(\mathbb R) \times \mathbb R \to \mathbb R$ and $\gamma : [0,\infty) \times M_F(\mathbb R) \times C \to (0,\infty)$. Define the projection $\hat\Pi_t : \Omega_H \to M_F(\mathbb R)$ by $\hat\Pi_t(K)(A) = K_t(\{y : y(t) \in A\})$. We say that the pair $(K,Y)$ solves the historical stochastic equation
$$\text{(a)} \quad Y_t = y_t + \int_0^t b\big(s,\hat\Pi_s(K),Y_s\big)\,ds, \quad t \ge 0, \qquad (\mathrm{HSE})_{b,\gamma}$$
$$\text{(b)} \quad K_t(\phi) = \int \phi(Y^t)\,\gamma\big(t,\hat\Pi_\cdot(K),Y\big)\,H_t(dy), \quad \phi \in b\mathcal C,$$
iff

• $Y : [0,\infty) \times \bar\Omega \to \mathbb R$ is $(\mathcal F_t^*)$-predictable and $K : [0,\infty) \times \Omega \to M_F(C)$ is $(\mathcal F_t)$-predictable;

• the map $|\bar b(s,\omega,y)| = |b(s,\hat\Pi_s(K)(\omega),Y_s(\omega,y))|$ is $H$-locally integrable;

• $(\mathrm{HSE})_{b,\gamma}$(a) holds up to an $H$-evanescent set in $[0,\infty) \times \bar\Omega$, and $(\mathrm{HSE})_{b,\gamma}$(b) holds for all $\phi \in b\mathcal C$, $\forall t \ge 0$ $P$-a.s.

Pathwise uniqueness holds in $(\mathrm{HSE})_{b,\gamma}$ if whenever $(K,Y)$ and $(\tilde K,\tilde Y)$ are solutions of (HSE) then $Y^t = \tilde Y^t$ except on an $H$-evanescent set in $[0,\infty) \times \bar\Omega$ and $K = \tilde K$ except on a $P$-evanescent set in $[0,\infty) \times \Omega$. To each solution $K$ of $(\mathrm{HSE})_{b,\gamma}$ we associate its one-dimensional projection $X$ defined by $X_\cdot = \hat\Pi_\cdot(K)$. □

Remark 2.5. It is easy to modify the above definition in order to give a rigorous description of (2.1). It is then straightforward to verify that if $K$ solves $(\mathrm{HSE})_{b,\gamma}$ then its projection $X_\cdot = \hat\Pi_\cdot(K)$ solves (2.1).

The existence and uniqueness of solutions to $(\mathrm{HSE})_{b,\gamma}$ requires some Lipschitz conditions on the coefficients $b$, $\gamma$. Since their arguments include measures we need to introduce an appropriate metric.

Definition 2.6 (Vasershtein Metric). If $(S,\delta)$ is a metric space, let
$$\mathrm{Lip}(S) = \big\{\phi : S \to \mathbb R : \|\phi\|_\infty \le 1,\ |\phi(x) - \phi(z)| \le \delta(x,z)\ \forall x,z \in S\big\}.$$
The Vasershtein metric is the metric on $M_F(S)$ defined by
$$d_S(\mu,\nu) = \sup\big\{|\mu(\phi) - \nu(\phi)| : \phi \in \mathrm{Lip}(S)\big\}.$$
This metric induces the weak topology on $M_F(S)$ (e.g. Ethier and Kurtz 1986, p. 150, Exercise 2).

The following conditions on the coefficients appearing in $(\mathrm{HSE})_{b,\gamma}$ will be frequently required.

(IC) $m(dx) = \rho(x)\,dx$, where $\rho \in C_K(\mathbb R)$ is Hölder-$\alpha$ for any $\alpha < \tfrac12$ and $\int_{\mathbb R}\rho(x)\,dx = 1$.

There is a nondecreasing function $\Gamma : [0,\infty) \to [1,\infty)$ such that:

(Lip) For any $\mu \in M_F(\mathbb R)$,
$$|b(t,\mu,x)| \le \Gamma\big(t \vee \mu(1)\big) \quad \forall x \in \mathbb R. \quad (2.2)$$
Let $d_e$ denote the Vasershtein metric on $M_F(\mathbb R)$. Then
$$|b(t,\mu,x) - b(t,\nu,z)| \le \Gamma\big(t \vee \mu(1) \vee \nu(1)\big)\big(d_e(\mu,\nu) + |x - z|\big).$$

$(B_\gamma)$ For any $X \in C([0,\infty),M_F(\mathbb R))$ and any $y \in C$,
$$\gamma(t,X,y)^{-1} \le \Gamma\big(t \vee X_t(1)\big).$$
Moreover, there is a nondecreasing function $\Gamma : [0,\infty) \to [1,\infty)$ such that
$$\gamma(t,X,y) \le \Gamma(t)\Big(1 + \int_0^t X_s(1)\,ds\Big) \quad \forall X \in C([0,\infty),M_F(\mathbb R))\ \forall y \in C.$$

The following Lipschitz condition on $\gamma$ is more restrictive than (Lip).

$(R_\gamma)$ Suppose that there are two functions $h, f : [0,\infty) \times M_F(\mathbb R) \times C \to \mathbb R$ such that
$$|h(t,\mu,y)| + |f(t,\mu,y)| \le \Gamma\big(t \vee \mu(1)\big) \quad \forall y \in C, \quad (2.3)$$
$$|h(t,\mu,y) - h(t,\mu',y')| \le \Gamma\big(t \vee \mu(1) \vee \mu'(1)\big)\Big(d_e(\mu,\mu') + \sup_{s\le t}|y(s) - y'(s)|\Big), \quad (2.4)$$
$$|f(t,\mu,y) - f(t,\mu',y')| \le \Gamma\big(t \vee \mu(1) \vee \mu'(1)\big)\Big(d_e(\mu,\mu') + \sup_{s\le t}|y(s) - y'(s)|\Big), \quad (2.5)$$
and with the following property. Recall that $\hat\Pi_t : \Omega_H \to M_F(\mathbb R)$ is given by $\hat\Pi_t(K)(A) = K_t(\{y : y(t) \in A\})$. If a space $\Omega' = (\Omega',\mathcal F',(\mathcal F_t'),P')$ carries an $(\mathcal F_t')$-predictable process $(K_t)$ with sample paths in $\Omega_H$ a.s. and an $\mathbb R$-valued $(\mathcal F_t')$-predictable process $(Y_t)$ such that
$$M(t) = Y(t) - Y(0) - \int_0^t b\big(s,\hat\Pi_s(K),Y_s\big)\,ds, \quad t \ge 0,$$
is a Brownian motion, then
$$\gamma\big(t,\hat\Pi(K),Y\big) = 1 + \int_0^t h\big(s,\hat\Pi(K),Y\big)\,dY(s) + \int_0^t f\big(s,\hat\Pi(K),Y\big)\,ds \quad \forall t \ge 0 \quad P'\text{-a.s.}$$

Remark 2.7. Condition $(R_\gamma)$ seems strange at first glance. It ensures that $\gamma$ can be interpreted as a branching rate (Remark 5.2 of Perkins 1995). □

The next proposition is fundamental.

Proposition 2.8. Assume $b$ satisfies (Lip) and $\gamma$ satisfies $(B_\gamma)$, $(R_\gamma)$. Then $(\mathrm{HSE})_{b,\gamma}$ has a pathwise unique solution.

Proof. See Theorem 4.10 of Perkins (1995). □

Before stating the main result of this chapter we give some examples of coefficients $b$ and $\gamma$ satisfying the basic hypotheses.

Example. (a) $\gamma = 1$ satisfies $(R_\gamma)$ and $(B_\gamma)$.

(b) Assume $\gamma \in C_b^{1,2}([0,\infty) \times \mathbb R)$ is a non-negative function such that $\frac{\partial\gamma}{\partial t}(s,\cdot)$ and $\frac{\partial\gamma}{\partial x}(s,\cdot)$ are Lipschitz continuous with a uniform Lipschitz constant for $s$ in compacts. Then $\gamma(t,X,y) = \gamma(t,y(t))$ satisfies $(R_\gamma)$ and $(B_\gamma)$. This is proved in Example 4.4 of Perkins (1995). In this case the mass of each particle depends only on time and the position of the particle, and not on the rest of the population.

(c) Let $p_s(x)$ be the one-dimensional heat kernel. Fix $\varepsilon, \alpha > 0$ and set
$$\gamma(t,X,y) = \exp\Big(-\int_0^t\!\!\int p_\varepsilon(y_s - x)\,X_s(dx)\,e^{-\alpha(t-s)}\,ds\Big).$$
It is not obvious that $\gamma$ satisfies $(R_\gamma)$; this is proved on p. 51 of Perkins (1995).

(d) Let $b : \mathbb R \to \mathbb R$ be bounded and Lipschitz continuous. Then $\bar b(\mu,x) = \int b(x-z)\,\mu(dz)$ satisfies (Lip). See Example 4.2 of Perkins (1995) for a proof. In this model a particle at $z$ in a population $\mu$ exerts a drift $b(x-z)\,\mu(dz)$ on a particle at $x$.

(e) More examples can be found in Perkins (1995, p. 49). □

The following is the main result of this chapter.

Theorem 2.9. Let $H$ be a one-dimensional historical Brownian motion starting at $m$. Assume $m$ satisfies (IC), $b$ satisfies (2.2), and $\gamma$ satisfies $(B_\gamma)$ and $(R_\gamma)$. Let $K$ be a solution of $(\mathrm{HSE})_{b,\gamma}$ and suppose that $X$ is its one-dimensional projection, i.e. $X_\cdot = \hat\Pi_\cdot(K)$ (see Definition 2.4). Then
$$X_t(dx) = u(t,x)\,dx \quad \text{for all } t > 0 \quad P\text{-a.s.}, \quad (2.6)$$
where $u(\cdot,\cdot)$ is an a.s.
jointly continuous (adapted) function. In addition, $u$ is Hölder continuous in $t$ with any exponent $\alpha < 1/4$ and in $x$ with any exponent $\alpha < 1/2$ a.s.

Remark 2.10. (a) The case $b = 0$ and $\gamma = 1$ of Theorem 2.9, i.e. when $X$ is one-dimensional super-Brownian motion, was proved independently by Reimers (1989) and Konno and Shiga (1988).

(b) Note that $b$ is not assumed to satisfy (Lip) (we only assume the boundedness condition (2.2)), so pathwise uniqueness may not hold. □

2.1.2 Historical Stochastic Calculus

The results in this subsection will be used repeatedly. The basic reference is Chapter 2 of Perkins (1995).

Definition 2.11. Let $(E,\|\cdot\|)$ be a normed linear space and let $f : [0,\infty) \times \bar\Omega \to E$. A bounded $(\mathcal F_t)$-stopping time $T$ is a reducing time for $f$ iff $1(t \le T)\,\|f(t,\omega,y)\|$ is uniformly bounded. The sequence $\{T_n\}$ reduces $f$ iff each $T_n$ reduces $f$ and $T_n \uparrow \infty$ $P$-a.s. If such a sequence exists, we say that $f$ is locally bounded. □

Definition 2.12. Let $m$ be a finite Borel measure on $\mathbb R$ and let $\gamma : [0,\infty) \times \bar\Omega \to (0,\infty)$, $b : [0,\infty) \times \bar\Omega \to \mathbb R$, $g : [0,\infty) \times \bar\Omega \to \mathbb R$ be $(\mathcal F_t^*)$-predictable. Assume that $\Lambda = (\gamma,\,b,\,g\gamma^{-1}1(g \ne 0))$ is locally bounded. A predictable process $(K_t : t \ge 0)$ satisfies $(\mathrm{MP})_{\gamma,b,g}^m$ if and only if $K_t \in M_F(C)^t$ for all $t \ge 0$ a.s. and, for every $\Psi \in D_0$,
$$Z_t^K(\Psi) = K_t(\Psi) - m(\Psi) - \int_0^t K_s\big(\tfrac12\Delta\Psi(s,\cdot) + b(s)\nabla\Psi(s,\cdot) + g(s)\Psi\big)\,ds$$
is a continuous square integrable $(\mathcal F_t)$-martingale such that
$$\langle Z^K(\Psi)\rangle_t = \int_0^t\!\!\int \gamma(s,\omega,y)\,\Psi(y)^2\,K_s(dy)\,ds \quad \forall t \ge 0 \text{ a.s.} \qquad \square$$

Assume that $K$ satisfies $(\mathrm{MP})_{\gamma,b,g}^m$. Let $(T_k)$ be a reducing sequence for $\Lambda$. If $\phi_n \in D_0$ and $\phi_n \to \phi$ (bounded pointwise convergence), then dominated convergence and the local boundedness of $\gamma$ imply
$$\big\langle Z^K(\phi_n - \phi_m)\big\rangle_t \longrightarrow 0 \quad \text{as } n,m \to \infty\ \forall t \ge 0 \text{ a.s.}$$
Therefore there is an a.s. continuous adapted process $(Z_t^K(\phi) : t \ge 0)$ such that
$$\sup_{0\le s\le T_k}\big|Z_s^K(\phi_n) - Z_s^K(\phi)\big| \wedge 1 \longrightarrow 0 \quad \text{in probability as } n \to \infty\ \forall k \in \mathbb N.$$
See Perkins (1993, 1995) for details.
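Taking $\Psi \equiv 1$ in Definition 2.12 with $\gamma \equiv 1$ and $b = g = 0$ (so the drift integral vanishes) shows that the total mass $K_t(1) = m(1) + Z_t^K(1)$ is a continuous martingale with $\langle K(1)\rangle_t = \int_0^t K_s(1)\,ds$, i.e. Feller's branching diffusion $dZ = \sqrt{Z}\,dB$. The Euler–Maruyama sketch below is our own illustration (step size, seed and tolerances are arbitrary choices): it checks the martingale property $\mathbb E[Z_T] = Z_0$ numerically and shows that a sizable fraction of paths is absorbed at $0$, i.e. dies out.

```python
import math
import random

def feller_path(z0, t_max, dt, rng):
    """Euler-Maruyama for dZ = sqrt(Z) dB; 0 is absorbing (extinction)."""
    z = z0
    for _ in range(int(t_max / dt)):
        if z <= 0.0:
            return 0.0                       # already extinct
        z += math.sqrt(z * dt) * rng.gauss(0.0, 1.0)
        z = max(z, 0.0)                      # clip the Euler overshoot below 0
    return z

rng = random.Random(1)
Z0, T, DT, RUNS = 1.0, 1.0, 0.005, 4000
vals = [feller_path(Z0, T, DT, rng) for _ in range(RUNS)]
mean = sum(vals) / RUNS
extinct = sum(1 for v in vals if v == 0.0) / RUNS
print(round(mean, 3), round(extinct, 3))
# mean stays near Z0 = 1.0 (martingale); a sizable fraction of paths dies out
```

The clipping at $0$ is the standard crude fix for the square-root diffusion, whose Euler increments can overshoot below zero; for small step sizes the resulting bias in the mean is negligible compared with the Monte Carlo error.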
$(Z_t^K(\psi))_{t\ge 0}$ is a continuous local $(\mathcal F_t)$-martingale with square function
$$\langle Z^K(\psi)\rangle_t = \int_0^t K_s\big(\gamma\,\psi^2\big)\,ds. \quad (2.7)$$
Since $D_0$ is bp-dense in $b\mathcal C$, $Z^K$ can be extended to an orthogonal martingale measure $\{Z_t^K(\phi) : t \ge 0,\ \phi \in b\mathcal C\}$ (Perkins 1993, 1995). This implies that if $\psi : [0,\infty) \times \Omega \times C \to \mathbb R$ is $\mathcal P(\mathcal F_t) \times \mathcal C$-measurable and satisfies
$$\mathbb E\Big[\int_0^t\!\!\int \gamma(s,\omega,y)\,\psi(s,\omega,y)^2\,K_s(dy)\,ds\Big] < \infty \quad \forall t \ge 0,$$
then we may define the stochastic integral
$$Z_t^K(\psi) = \int_0^t\!\!\int_C \psi(s,\omega,y)\,Z^K(ds,dy).$$
$Z_t^K(\psi)$ is a continuous square integrable martingale with square function (2.7). More generally, if
$$D(Z) = \Big\{\psi : [0,\infty) \times \Omega \times C \to \mathbb R : \psi \text{ is } \mathcal P(\mathcal F_t) \times \mathcal C\text{-measurable and } \int_0^t\!\!\int \gamma(s,\omega,y)\,\psi(s,\omega,y)^2\,K_s(dy)\,ds < \infty\ \forall t \ge 0\ P\text{-a.s.}\Big\},$$
then $Z_t^K(\psi)$ can still be defined for $\psi \in D(Z)$. In this case it is a continuous local martingale with square function (2.7). For a definition of martingale measures and the construction of the associated infinite dimensional stochastic integrals the reader is referred to the excellent book of Walsh (1986). A word of warning to the reader familiar with the applications of stochastic calculus in finance: what we call martingale measures are $L^2$-valued measures on $C$; they are totally different from the "martingale measures" usually employed in finance (those are just Girsanov transformations). The fact to be remembered while reading this chapter is that $Z_t^K(\phi)$ is a (local) martingale for a large class of integrands $\phi = \phi(s,\omega,y)$.

Definition 2.13. If $(K_t : t \ge 0)$ is an r.c.l.l. $(\mathcal F_t)$-adapted $M_F(C)$-valued process such that $(K_t(1) : t \ge 0)$ is an $(\mathcal F_t)$-martingale, and $T$ is a bounded $(\mathcal F_t)$-stopping time, define a probability $\bar P_T$ on $\bar\Omega$ by
$$\bar P_T[A \times B] = \frac{\mathbb E\big[1_A\,K_T(B)\big]}{\mathbb E\big[K_T(1)\big]}, \quad A \in \mathcal F,\ B \in \mathcal C.$$
We call $\bar P_T$ the Campbell measure associated with $K$ and $T$, or simply the Campbell measure when $K$ and $T$ are clear from the context. □

Proposition 2.14. Let $K$ be a solution of $(\mathrm{MP})_{\gamma,b,0}^m$ (note that $g = 0$). Then

(a) $K \in \Omega_H$ $P$-a.s. and $K$ is $(\mathcal F_t)$-adapted.

(b) If $T$ reduces $(\gamma,b)$, then under the Campbell measure $\bar P_T$,
$$n(t,\omega,y) = y(t) - y(0) - \int_0^{t\wedge T} b(s,\omega,y)\,ds, \quad t \ge 0,$$
is an $(\bar{\mathcal F}_t)$-Brownian motion stopped at $T$.
(b) IfT reduces (7,6), then under the Campbell measure P T , rt/\T ^ n(t,u,y) = y(t) - y(0) - b(s,u,y)ds, t > 0, Jo is a (Ti)-Brownian motion stopped at T. 47 Proo f . See Theorem 2.6 of Perkins (1995). K Let K, T be as in Proposition 2.14. Under P ^ , y is an (.Ft*)-semimartingale. Therefore / a{s,u),y)dy(s) Jo may be defined for the class D(I,K) = |CT : [0,oo) x fi x C -> P , : a is (^ -predic table , ds < oo ift — a.a. y Vt > 0 a s . j P r o p o s i t i o n 2.15. Let K be a solution of (MP)™ - . /" a(s,u; ,y) 2 J O (a) i / ( T € D(I,K) there is an JR-valued (T?)-predictable process lK{a,t,u,y) such that for all reducing (Tt)-stopping times T .s. lK{o,t,io,y) = / o{s,io,y)dy(s) Vt > 0 P r - a. Jq We sometimes write I(o,t) instead of (<j, t, u;, y). (b) If I(o~, s, LO, y) is another such process, then I(o,s,LO,y) = lK(o~,s,u,y) Vs < t Kt — a.a. y Vt > 0 P — a.s. Proo f . This is a special case of Theorem 2.12 (Perkins 1995). • N o t a t i o n 2.16. If b : [0, oo) x fi —• R, is universally measurable, let ;vib 4) = - / / o b{s,u,y)ds if fc\b(si'u},y)\ds < oo. 10 otherwise. • P r o p o s i t i o n 2.17 (Ito's lemma) . Assume K satisfies (MP)m . , o : [0, oo) x fi -> P and 'Y,1,0,1) 6 : [0,oo) x fi -4 P are (Pf)-predictable, \o\2 + \b\ is K-locally integrable and YQ : fi —> P is TQ- measurable. Let Yt = YQ + I(o, t) + V(b, t) (using the notation of Proposition 2.15 and Notation 2.16) and.Yt = {t,Yt). If £ C t ([0,oo) x IR), the space of bounded continuous functions ip{t, x) with a bounded continuous derivative in t and two bounded continuous derivatives in x, then . up to a K-evanescent set of (t, to, y). Proo f . This proposition follows from the usual Ito's lemma and Proposition 2.15. See Lemma 3.18 of Perkins (1993). • 48 P r o p o s i t i o n 2 .18 ( H i s t o r i c a l I t o ' s l e m m a ) . Suppose K satisfies (MP)m^. . Assume a G D(I,K), b G IR are (;Ft*)-predictable and \o\2 + |6| is K-locally integrable. 
Let $Y_0 : \bar\Omega \to \mathbb R$ be $\bar{\mathcal F}_0$-measurable, $Y_t = Y_0 + I(\sigma,t) + V(b,t)$ (using the notation of Proposition 2.15 and Notation 2.16) and $\hat Y_t = (t,Y_t)$. If $\psi \in C_b^{1,2}([0,\infty) \times \mathbb R)$, then $K_t(\psi(\hat Y_t))$ is an a.s. continuous $(\mathcal F_t)$-semimartingale and satisfies
$$K_t\big(\psi(\hat Y_t)\big) = K_0\big(\psi(\hat Y_0)\big) + \int_0^t\!\!\int \psi(\hat Y_s)\,Z^K(ds,dy) + \int_0^t\!\!\int\Big[\frac{\partial\psi}{\partial s}(\hat Y_s) + \frac{\partial\psi}{\partial x}(\hat Y_s)\,b(s) + \frac12\frac{\partial^2\psi}{\partial x^2}(\hat Y_s)\,\sigma(s)^2\Big]\,K_s(dy)\,ds \quad \forall t \ge 0 \text{ a.s.}$$

Proof. This is a special case of Theorem 2.14 in Perkins (1995). □

We shall also need the following version of Proposition 2.18.

Proposition 2.19. Suppose $K$ satisfies $(\mathrm{MP})_{\gamma,\bar b,0}^m$. Let $\sigma \in D(I,K)$, let $b : [0,\infty) \times \bar\Omega \to \mathbb R$ be $(\mathcal F_t^*)$-predictable, and assume $|\sigma|^2 + |b|$ is $K$-locally integrable. Assume also
$$\sup_{0\le s\le t} K_s\big(V(|b|,s)^2\big) < \infty \quad \forall t \ge 0 \text{ a.s.}$$
Then $I(\sigma,t) \in D(Z)$ and
$$K_t\big(V(b,t) + I(\sigma,t)\big) = \int_0^t\!\!\int\big(V(b,s) + I(\sigma,s)\big)\,Z^K(ds,dy) + \int_0^t\!\!\int\big(\sigma(s)\,\bar b(s) + b(s)\big)\,K_s(dy)\,ds \quad \forall t \ge 0 \text{ a.s.}$$

Proof. See Theorem 2.8 and Proposition 2.13 of Perkins (1995) for the proof. □

Proposition 2.20 (Lévy's modulus of continuity). Let $h(t) = \sqrt{t\log^+(1/t)}$. Define
$$L(\delta,c) = \big\{y \in C : |y(t) - y(s)| \le c\,h(t-s)\ \forall s,t \ge 0 \text{ satisfying } 0 < t-s < \delta\big\}.$$
Assume $K$ solves $(\mathrm{MP})_{\gamma,b,0}^m$. Let $\sigma \in D(I,K)$ be $(\mathcal F_t^*)$-predictable and locally bounded. Define $\hat K_t(\omega) \in M_F(C)$ by $\hat K_t(\cdot) = K_t\big(\{y : I(\sigma,\cdot,\omega,y)^t \in \cdot\,\}\big)$. Assume $T$ reduces $(\gamma,\gamma^{-1},b,\sigma)$ and let $\Theta = \sup\{\sigma^2(s,\omega,y) : 0 \le s \le T,\ (\omega,y) \in \bar\Omega\}$.

(a) If $c > 2\sqrt\Theta$, there is a $\delta(\omega,c) > 0$ a.s. such that $\hat K_t\big(L(\delta,c)^c\big) = 0$ $\forall t \in [0,T]$.

(b) For each $\bar\gamma$, $\Theta_0$, $\bar b$, $q \in (0,\infty)$ and $c > 2\sqrt{\Theta_0}$ there is a non-decreasing function $\rho : [0,\infty) \to [0,1]$ such that $\lim_{\lambda\to 0}\rho(\lambda) = 0$ and, if
$$\bar\gamma \le \inf\{\gamma(t,\omega,y) : 0 \le t \le T(\omega),\ (\omega,y) \in \bar\Omega\}, \quad m(1) \le q, \quad \sup\{|\sigma\,b(t,\omega,y)| : 0 \le t \le T(\omega),\ (\omega,y) \in \bar\Omega\} \le \bar b, \quad \Theta \le \Theta_0,$$
then we may choose $\delta(\omega,c)$ in (a) so that $P(\delta \le \lambda) \le \rho(\lambda)$.

Proof. This is a special case of Corollary 3.3 of Perkins (1995). □

We end this subsection stating some useful estimates.

Proposition 2.21. Recall that $H$ denotes a one-dimensional historical Brownian motion starting at $m$. There is a one-dimensional Brownian motion $(B_t)$ such that
$$H_t(1) = m(1) + \int_0^t \sqrt{H_s(1)}\,dB_s.$$
$(H_t(1) : t \ge 0)$ is the continuous time, continuous state space branching process studied by Feller (1951). Hence, for any $p \in \mathbb N$,
$$\mathbb E\big[H_t^*(1)^p\big] < \infty \quad \forall t \ge 0.$$
More generally, assume that $g$ is bounded above and let $K$ be a solution of $(\mathrm{MP})_{\gamma,b,g}^m$. Then, for any $p \ge 1$,
$$\mathbb E\big[K_t^*(1)^p\big] < \infty \quad \forall t \ge 0.$$

Proof. See Lemma 2.1 of Perkins (1995). □

2.2 Some Auxiliary Processes

Proposition 2.22. Let $b$ and $\gamma$ be as in Theorem 2.9, and $h$, $f$ as in $(R_\gamma)$. If we define
$$\bar b(t,\omega,y) = b\big(t,\hat\Pi_t(K)(\omega),y\big) + \frac{h\big(t,\hat\Pi_\cdot(K)(\omega),y\big)}{\gamma\big(t,\hat\Pi_\cdot(K)(\omega),y\big)},$$
$$\bar g(t,\omega,y) = \frac{f\big(t,\hat\Pi_\cdot(K)(\omega),Y(\omega,y)\big) + h\big(t,\hat\Pi_\cdot(K)(\omega),y\big)\,b\big(t,\hat\Pi_t(K)(\omega),y\big)}{\gamma\big(t,\hat\Pi_\cdot(K)(\omega),y\big)},$$
$$\bar\gamma(t,\omega,y) = \gamma\big(t,\hat\Pi_\cdot(K)(\omega),y\big),$$
then the unique solution of the historical strong equation $(\mathrm{HSE})_{b,\gamma}$ solves the martingale problem $(\mathrm{MP})_{\bar\gamma,\bar b,\bar g}^m$ defined on page 46.

Proof. The proof is essentially a lengthy application of historical Itô's lemma. See Theorem 5.1 of Perkins (1995) for the details. □

Remark 2.23. In the remainder of the chapter we will only assume that $K$ solves the martingale problem $(\mathrm{MP})_{\bar\gamma,\bar b,\bar g}^m$. In fact we shall abuse the notation and suppose that $K$ is a solution of the martingale problem (not necessarily obtained as a solution of the strong equation $(\mathrm{HSE})_{b,\gamma}$).

Proposition 2.24. If we define
$$\bar K_t(\phi) = \int \frac{\phi(y)}{\gamma(t,\omega,y)}\,K_t(dy), \quad t \ge 0,\ \phi \in b\mathcal C,$$
then $\bar K$ satisfies a martingale problem $(\mathrm{MP})_{\tilde\gamma,\hat b,0}^{\,m}$ with $g = 0$, where $\hat b(s,\omega,y) = b\big(s,\hat\Pi_s(K)(\omega),y\big)$.

Proof. This follows from Proposition 2.22 and is shown during the proof of Theorem 5.6 on p. 70 of Perkins (1995). □

The following hypothesis will be in force through Sections 2.2 and 2.3.

(UB) There is a constant $c_{UB} > 0$ such that $|\hat b(s,\omega,y)| \le c_{UB}$ $\forall s \ge 0$ $\forall y \in C$ $P$-a.s. □

We need to introduce yet one more historical process. By Proposition 2.24 the results of Section 2.1 concerning stochastic integration are at our disposal. In particular we may define the stochastic integrals $I^{\bar K}$.

Definition 2.25. Note that (UB) implies $\hat b \in D(I,\bar K)$. Let
$$\mathcal E_t(1,\omega,y) = \exp\Big(-I^{\bar K}(\hat b,t,\omega,y) + \frac12\int_0^t \hat b(s,\omega,y)^2\,ds\Big) = \exp\Big(-\int_0^t \hat b(s,\omega,y)\,dy(s) + \frac12\int_0^t \hat b(s,\omega,y)^2\,ds\Big)$$
(observe that $\mathcal E_t(1,\omega,y)$ is unique up to $(\bar K,P)$-evanescent sets), and define
$$J_t(\phi) = \int \phi(y)\,\mathcal E_t(1,\omega,y)\,\bar K_t(\omega)(dy), \quad t \ge 0,\ \phi \in b\mathcal C, \quad (2.8)$$
$$\bar J_t(\psi) = \int \psi(y_t)\,\mathcal E_t(1,\omega,y)\,\bar K_t(\omega)(dy), \quad t \ge 0,\ \psi \in b\mathcal B, \quad (2.9)$$
so that $\bar J_t(\psi) = J_t(\psi(y_t))$. The usefulness of these auxiliary processes depends in part on the following lemmas; the reader may want to look ahead at Proposition 2.42 to see why the process $J$ is relevant.

Lemma 2.26. There is a $(\mathcal P_t^*)$-predictable $\bar K$-version of $\mathcal E_t(1,\omega,y)$ which is locally bounded.

Proof. Apply Proposition 2.20 with $(K,\sigma) = (\bar K,\hat b)$. Note that since $\bar K$ solves $(\mathrm{MP})_{\tilde\gamma,\hat b,0}^{\,m}$ and $\hat b$ is uniformly bounded, the hypotheses of Proposition 2.20 are satisfied with $\Theta = 1$, $T = \infty$, $c = 3$. Hence
$$\bar K_t\big(\{y : I^{\bar K}(\hat b,\cdot)^t \in L(\delta,3)^c\}\big) = 0 \quad \forall t \ge 0.$$
Therefore
$$I^{\bar K}(\hat b,\cdot)^t \in L(\delta,3) \quad \bar K_t\text{-a.e. } y\ \forall t \ge 0. \quad (2.10)$$
Define
$$V_k = \inf\big\{s \ge 0 : \bar K_s\big(\{y : I^{\bar K}(\hat b,\cdot)^s \in L(k^{-1},3)^c\}\big) > 0\big\} \wedge k.$$
Note that $V_k \uparrow \infty$ $P$-a.s.; indeed, since $\delta > 0$ a.s., $V_k(\omega) = k$ for all $k \ge \delta(\omega)^{-1}$. Hence (2.10) implies
$$1(t \le V_k)\,\mathcal E_t(1,\omega,y) \le \exp\Big(3(kt+1)\,h(k^{-1}) + \tfrac12\,c_{UB}^2\,t\Big) \quad \bar K_t\text{-a.e. } y\ \forall t \ge 0 \text{ a.s.} \quad (2.11)$$
Recall that evanescent sets are predictable and that we work with the universally completed $\sigma$-field $(\mathcal F_t^*)$. Modify $\mathcal E_t(1,\omega,y)$ on a $(\bar K,P)$-evanescent set to obtain the desired result. □

Remark 2.27. Since $\bar K$ and $J$ are equivalent, (2.11) also gives a $J$-version of $\mathcal E_t(1,\omega,y)$ which is $(\mathcal F_t^*)$-predictable and locally bounded. □

We are now able to give a martingale problem characterization of $J$.

Proposition 2.28. The historical process $J$ defined in (2.8) satisfies the martingale problem $(\mathrm{MP})_{\tilde\gamma,0,0}^{\,m}$. That is, $\forall\,\Psi \in$
A , ZtJ(^) = Jt(^)-m(^)-fijj^(s,y)Js(dy)ds is a continuous square integrable (T"t)-martingale starting at zero with (ZJ(^))t = fi J£s(l,oj,y)-1^(yfjs(dy)ds Vt > 0 P - a.s. P r o o f . Pick ipu...,'ipk •€ C^{JR), 0 < h < t 2 . < ... < tk and set * = Ui-^i^i- We sometimes write £t( l ) = £t(l,u;,y) and b(s) = b(s,u>,y). By Ito's lemma 2.17 and since Vs — Vo — fo b(TiU,y)dr is a Brownian motion on ( f i , P t ), il>MiAU)) =il>i(yQ).+ l |$(y(s A ^))l(.s < U)dy(s) + ̂ {y(s A t,-))l(s < t{) (K,P) - a.e. , and £t(l) = 1 - f b(s)£s(\)dy(s) + t b(s)2£s(l)ds (K,W) - a.e. Jo Jo Integration by parts (also justified by Ito's lemma 2.17) yields • - f t ( l ,w,y)*(y*) = * ( y ° ) - ['^(ys)b(s,u,s)£s(l,u,y)dy(s) Jo + I y(ys)b(s,u,s)2£s(l,u,y)ds Jo k -ft k + J2 ]\^j{y(sAtj))rp'i(y(sAti))l(s<ti)£s(l,uj,y)dy(s) .1=1 J° j = l J- 52 i=i /=i - 7 0 j = i k ft k - 52 / A hWM8 A < U)Hstu,y)£B(l;u>,y)ds .- i »/ 0 i = V 0 j = 1 <-t = ¥(y° ) + jf [v*(s,y) - *(y')S(a,_u;,y)]g,(l,w,y)dy(a) + ^ j^(y s )b(s,u; ,y) 2 - .V*(y,s)b(a,w,_y)]5,( l ,w,y)ds + p^{s,y)£s{\,u),y)ds (K,F)-a.e._ Hence, by historical Ito's lemma 2.19 Jt($) = Kt(£t(l)9) = KoW + Jo J£s{i,u,y)y{ys)ZR{ds,dy) + y [ ( V ^ ( s , y ) - * ( y s ) 6 ( s , u ; , y ) ) 6 ( s , v , y ) + ( ^ ( y s ) 6 ( s , o ; , y ) 2 - VV(y,s)b(s,u;,y))}£s(l,uj,y)Ks{dy)ds + J j £s{l,u,y)^{s,y)Ks(dy)ds = J 0 ( * ) + Z t J ( t f ) + ^ ' J jV(s,y)Js(dy)ds Vt > 0 a.s. where ZJtm = j j.£s{l^y)y{ys)Z«{ds,dy) is a square integrable martingale with (ZJmt = ^Joj£s(l,u;,y)^(ys)ZR(dS,dy) = £ J£s(l,u,y)2y(ys)2Ks(dy)ds = j\ J£s{l^y)-l^{ys)2Js{dy)ds. 53 R e m a r k 2;29. Proposition 2.28 allows us to use J-historical stochastic calculus. In particular, Proposition 2.14 implies that y.At is a (Tf)-Brownian motion under the law P f . o L e m m a 2.30. There is a J-version of £t(l,u>,y) such that £t(l,u:,y) = exp f^jf b{s,u,y)dy(s) - | j f b(s,u},y)2ds^ ( J , P ) - a.e. We abuse the notation and call also that version £t(l,u,y). P r o o f . 
Let T be a bounded predictable (^f)-stopping time. By the invariance of the stochastic integral under change of law (Dellacherie and Meyer, Theorem VIII. 12) the stochastic inte- gral J 0 b(s,u},y)dy(s) calculated under the law P T is P T indistinguishable from the stochastic integral jjj b(s,u;,y)dy(s) calculated under the law P ^ . Therefore IR(b,t,u,y) = Ij(b,t,uf,y). Vt > 0 P ^ - as. The desired conclusion now follows from Proposition 2.15(b). • The last result in this Section is the proof that the random measure j defined by (2.9) possesses a density with respect to Lebesgue measure. We also estimate its moments. Some of the key ideas and techniques needed to prove Theorem 2.9.are introduced along with the proofs. N o t a t i o n 2.31. pt(x) denotes the one-dimensional heat kernel and Pt4>(x) = f pt(x—y)<p(y)dy. We sometimes write pf(y) = pt(x — y) = p{t, x — y). We begin by giving a Green's function representation for j- . P r o p o s i t i o n 2.32. For any <j> e C%(1R.) jt{4>)' = f Pt<j>(x)p(x)dx + j j Pt.s(j>(ys)£s(l,u,y)ZR(ds,dy) (2.12) for all t e [0,1] P - a.s. P r o o f . F ix t < 1 .and set ip(s,x) = Pt-S<p{x). Applying Ito's lemma 2.17 with Yt = 1(1, t) = yt we get i>(t,yt) = ip(P,Y0) + J i-£(s,ys) + -^(s,ya)j d s + J .-^(s,ys)dy(s) = rb(0,Y0) + fi ?jt(s,ys)dy(s) for -Kj-a.e. y Vt G [0,1] a.s., (since —(s,x) = --^(s,x)). Moreover, by the same lemma £t(l,u,y) = 1 - £s(l,u),y)b(s,uj,y)dy(s) + b(s,uj,y)2£s(l,u,y)ds 54 for Kt-a..e.-y Vt € [0,1] a.s. Integration by parts, justified once more by Ito's lemma 2.17 yields £T{\,LO,y)^{t,yt) = ^(0,y 0 ) + J (£s(l)^(s,ys) - ^{s,ys)£s(\)b(s))dy(s) + ( 6 ( 5 )2 ^ ( 1 )^ (5 ,^ ) - 6(5)^(1)1^(5,^))^ for Kt-&.e. y Vt € [0,1] a.s. Now apply historical Ito's lemma 2.18 to obtain J £t(l)y>(t,yt)Kt(dy) = j 1>(0,x)p(x)dx + J j5,(1,L0,y)^{s,ys)ZK{ds,dy) Vte'[0,1] a.s. Recalling the definition of ip and jn, we see that this last equation is exactly (2.12). • R e m a r k 2.33. 
If b = 0 and 7 = 1 then Proposition 2.32 provides a very simple proof for the usual Green's function representation of super Brownian motion (compare with that of Konno and Shiga 1988, p 212). P r o p o s i t i o n 2.34. (a) For each 0 > 0 let £t(6,LO,y) = exp (-6IR(b,t,u,y) + (o - y ) j f 0(5,LO,y) 2 ds) . Then for any 6 > 0, £sA.(6) is a P s -martingale starting at 1. (b) There is a function K : [l,oo) x [0,00) —> R.+ , non-decreasing in each variable such that Ss(l)p <*(p,s)£s(p) • (c) The second term on the r.h.s. of (2.12) is a square integrable martingale null at zero. P r o o f , (a) Recall that by Proposition 2.14 nt = yt — yQ — flb(s,LO,y) is a P^ -Brownian motion. Since b is bounded, £sA.{0) is an exponential martingale. (b) We estimate £s(l)p = £s{p)e{^-p)I°ku)2du <£s{p)es^-p)cBu = K(p,s)£s(p). (c) Note that HPt-s^lloo < 00. Moreover P p Ks(£s(l)2)ds^ < pP*[£s(2M2,s)}ds <m{2,t) (by parts (a) and (b) of this Proposition). This implies that the stochastic integral on the l.h.s. of (2.12) is an L 2 martingale null at 0. • We now employ equation (2.12) to show that j has a density. We state first a technical lemma. 55 L e m m a 2 .35 . There exists a constant C > 0 such that - t V t ' roo rtvt roo . • 2 J J {p(t-s,x-z)-p{t'-s,x'-z))dzds<C(^/\t--t'\ + \x-x'\), Vt, t! > 0, x, x' e JR. P r o o f . See for example Lemma 6.2 of Shiga (1994). • P r o p o s i t i o n 2 .36 . There exists a (measurable) function g = g(w,s,x) suck that for any 4? € C A ' ( H ) and any t € [0,1] j j $(x)js(dx)ds = jC$(x)g(s,x)dxds TP-a.s. (2.13) P r o o f . First, we prove that for any e 6 (0,1) rl roo / / TP[js(px()2)dxds < A JO J-oo where A < oo is a constant independent of e. By Proposition 2.32 P[j,(p?)2] = j PsPf{z)p{z)dz2 + V^f°fps-Tp*{yr^^^^ = f Pxs+t{z)°{z)dz2 + F \^fi jP*-r+e{yr)2£r{l)2Kr{dy)dr (by Chapman-Kolmogorov) = / P ? + ( ( ^ p ( ^ ) ^ 2 + £ F f [ p f _ r + e ( y r ) 2 ^ ( l ) 2 ] d r (by Fubini's theorem) < Jpx+€(z)p(z)d. 
z2 + K(2,s) [STP?\pxs_r+£(yr)2£r(2,u,y))dr Jo (by Proposition 2.34(b)). Now integrate over space-time fl roo rl rl roo roo / TP\j»{px)2]dxds< / / pxs+e(z)p(z)dz2dxds . JO J-oo Jo J-oo J-oo + K(2, S ) f fSTPr\pX-r+£(yr)2£r(2)]drdxds. JO' J-oo Jo ' • We estimate "1 roo roo noo roo / pxs+t{z)p{z)dz2dxds •oo J —oo (2.14) 56 rl roc roo roo = / / •/'• / P*+e(zi)pUe(z2)p{zi)p{z2)dzidz2dxds J O J—oo J—oo J —oo noo roo j roo / : —7===p{z\)dzx I Pxs+e(z2)p{z2)dz2dxds •oo7-oo y27r(s + e) . 7-oo / • l j /-oo r o o roo Jo • v 27r(s -f e) 7-oo 7-00 7-oo = / M ] , T d s ' ( s i n c e / ^ = ^ 7o V 2 7 r( s/+ e) 7 Furthermore, / Y / P f [ p f _ r + £ ( y r ) 2 5 r ( 2 ) ] d r d x d s V O J-ooJo = / / F r / p j _ r + e ( y r ) 2 d x £ r ( 2 ) J O ' J O U - o o drds (by Fubini) f l r s ;> < f1 f wf[£r{2)}-= 1 drds Jo Jo V27r(5 — r + e) < ~^=, since P f [£r{2)} = 1. ^ Take A = ^ § + N ( 2 , 1 ) ^ to conclude, the proof of (2.14). Our next step is to prove that . rl roo l im / / TP{js{px€-PxS)2}dxds = 0: (2.15) s,6-40JQ J_O0 Apply Proposition 2.32 again. Just as before / OO :• (Ps+e(z)-pXs+5^))p(z)dz2+ • •OO ; [SV?[<3>U+e(yr)-pU+6(yr))2£r(l)2]dr. Jo Integrate over space-time to obtain - rl roo rl roo roo \ I / V\js{p*-PX)2]dxds< / / (pxs+£(z)-pxs+8(z))p(z)dz2dxds J O J —bo 7 o J — oo J — oo - +K(2,1) f 1/ 0 0 rpf[(pf: r + e (y r )-pj_ r + , (y r )) 2 5r(2)]drdxa^ J O J - o o J O The two terms on the r.h.s. are readily estimated: r l r o c r o c '., . " / 7 / - px,.s(z))p{z)dz2dxds Jo J —00 J —oc . r 57 rl roo' roo < / / {pX+e(z)-pX+s(z))2p(z)dzdxds JO J —oo J —oc (by Jensen) roo rl roo / oo rl roo / / (Pxs+e(z) -pX+S{z))2dxdsp(z)dZi •oo JO J —oo / oo ' • ':• p{z)dz = c2.36.i\/|e - S\ , •oo (by Lemma 2.35). To estimate the second term note that for any 77 > 0 . /(Ps^M) ~ Ps(z))2dx = J(pUni*)2 + PXS(Z)2 - 2pX+r,(z)pX(z))dx .-; ... 1 + < y/27rj2sT2rfj V2ir2s y/2it{2s + rj) (s + n)y/4ns Hence, assuming w.l.o.g. e > 6, rl roo rs / • / ' / K[(pXs-r+e(yr)-pXs-r+S(yr))2£r(2)}drdxds . 
JO J-00 JO j • = JQ Jo [J_JpX-r+e{yr) ~ PU^r)? dx£r{2) =- f I". • M ( { £ ~ ^ / - r - ^ ^ [ £ r { 2 ) ] d r d s Jo Jo (s - r + (e - 5))y/4it(s — r) = fl r -v Jo Jo (s — - f + (e - 8))y/4ir(s-r) ^ K (since £T/\.(2) is a P r -martingale) 7o V47T • V£ —'5 < C2.36.2V /k-~^[. Therefore rl roo / 7 • JP[js(px -px)2]dxds < c2.36.i>/|e - <5| + c2.36.2N(2, ^v/F7^! JO J - 0 0 = f ' 2 . 3 6 . 3 \ / | £ - <5|, which proves (2.15). Thus (2.14) and (2.15) show the existence of an L2(Cl x [0,1] x R.) density function g = g(u, s, x) for js defined by l im / / v\(js(px)-g(s,x))2]dxds = 0 (2.16) ' : JO J - O O 1 " J • ' .. ' ' 58 To finish the proof of the proposition we verify that g is in fact the desired density. Pick a nonnegative, nonidentically zero $ € CK(TR.), t € [0,1] and compute ds ds jfV J^(x)g(s,x)dxj = l i m ^ P (^j ${x)js(px)dx - J $(x)g(s,x)dx^ (by bounded convergence) < ( JQWdxy1 Km J*JP [(js(px)-g(s,x))2$(x)dx]ds (by Jensen) < (l^(x)dx)~l || $ Hoc l im jf* y°  TP[(js(px) - 5 ( S , x ) ) 2 ] ^ S = 0 (by (2.16)). This implies (2.13) and concludes the proof of the proposition. • R e m a r k 2.37. By considering functions of the form l [ a f))(s)^(x) a monotone class argument shows that (2.13) is equivalent to J JF(s,x)js(dx)ds = j JF(s,x)g(s,x)dxds P-a.s. for (integrable) nonrandom functions F = F(s,x). This fact will be needed in the proof of the main theorem. • Having proved the existence of g, we proceed to estimate its moments. These bounds wi l l also be needed in the proof of the main theorem of this chapter. P r o p o s i t i o n 2.38. For any k> 2, t € [0,1] there is a constant Ci38 depending on k and p (but not on x) such that - : n9(t,x)k]<c2.38- P r o o f . Write g(e,t,x) = jt{px)- Using Proposition 2.32 we estimate V[g(e,t,x)k] <2k f pf+£(z)p(z)dzk+2kJPp Jpx-s+£(ys)£s(l)ZH'(ds,dy)k (since (a + b)k <2k(ak + bk)). The first term on the r.h.s. 
is easily estimated: 2" J px+£(z)p(z)dzk < 2k || p H * , jpx+£(z)dzk : (2.i7) = 2k II o \\k Z H P MOO (2.18) 59 The other term is estimated as follows: rt &[f fpf-s+e(ys)£s(i)zK(ds,dy)k] < C2MA(k)T[pKs[px_s+£(ys)2£s(l)2] dsk'2] • (by Burkholder) • • < C!l.38.l(A!)P [ ^ . [ p f - . + ^ y ^^sd ) 1 7 ^ - 2 ) ] ^^ - 1 ) / 2 Jo (by Jensen's inequality) .1/2 < C2.38.2(k)J]P y tX s[p?_ s + E(^) 2^(l)l/(2fc-2) ] d sfe- P f ks\pts+e(ys)2£s(i)2k-^]ds Jo (by Schwarz) = C 2 . 3 8 . 2 ( A : ) \ / ^ \ / i 2 - (2.19) Notice that since y. — yo — J0 b(s, LO, y) is a Brownian motion under P t , then the definition of 5(1) arid Girsanov's theorem imply that y is a Brownian motion (with initial distribution p) under the law Q [•] = P [5(1)-]. We use this fact to estimate Ix. 'ft /i-=.p / ^ .bf - 8 + e (y . J^ E f pf- . + e ( i / . ) 5 I ^^( i ) 5 ^rfa* - - 1 . ] .Jo •ft < p / ksiit-s + ey^pU+eiys^Ssii^ds2^-^2} Jo <v\[\t-S + e)-^Ks(l)ds(2k-W2 f ks\pls+e(ys)£s(l))d Jo Jo (by Jensen) .1/2 1 r fl ~ 1 P / Ks\p*_s+£(ys)£s(l)}ds Jo < y P Jyt-s + e)~^Ks{l)ds2k-3 (by Schwarz) . = c 2 . 3 8 . 3 ( f c ) ^ ^TP[K*(l)2k-^ j jpt-s+e{x - y)Ps(y - z)dyp(z)dz Since W[K{(l)2*"3] < c 2 . 3 8.4 by Proposition 2.21 and / f pt-s+£{x - y)ps(y - z)dyp{z)dz = 60 /Pt+e{x — z)p(z)dz <\\ p Hoc, it follows that by defining C2.38.5(k, N,p) = C2.38.3 .38.4 II P l l o o we obtain h < c2.zs.5{k,p). (2.20) Finally we estimate J 2 : h = fo pf [pf-s+Ays)2£sW2k-^] ds < (by Holder with p = 5/4, g = 5) « ( ( 2 0 f c - 1 3 ) / 2 , l ) ^ Q f [p£_, + e (y . )$]*pf [ ^ . ( ? \ ^ ) ds (by Proposition 2.34(b)) •t /"OO /•OO y t /"oo roc = N((20fc-13)/2, l ) /• / / pt_, + e (x-z)ap,(z-u;)dzp(u>)rfti>sds ' J 0 J—oo J —oo r t /-oo roo = K((20A;-13)/2,1) / (27r ( t - s ) ) - 3 / 5 / / p 2 f t_, + e )(x - z)ps{z - w)dzp(w)dw*ds JO J-oo J-oo 5 , • ft roc = N((20fc-- 13)/2,1) / (2TT(* - s ) ) " 3 / 5 / p a t + a . 
+ 2 £ ( a ; - w)p(w)dw*ds Jo J-oo 5 5 5 < C2:38.6(fc) / (< " «))_3/5r2/5 / p(w)dWsds Jo J-oo 2 - rC2.38.6(fc) 5 ~ C2.38.7(fc) That is, / 2 < C2.38.7(fc) (2.21) If we define C2.38 = 2fe || p ||oo +2fcc2.38.2(fc)c2.38.5(fc,p)2C2.38.7(fc)2 then P[s(e,t,x)*] < C 2 . 3 8 (2.22) follows from (2.18), (2.19), (2.20) and (2.21). Since I? convergence implies pointwise conver- gence along some subsequence, by choosing appropriately a sequence en —• 0 and using Fatou's lemma we obtain (2.17) from (2.22). • 2.3 A Generalized Green's Function Representation for X In this section we give a new representation of X as a J-integral. This is the main ingredient in the proof of Theorem 2.9. Recall that we assume (UB) throughout this Section. 61 D e f i n i t i o n 2 .39 . For any 0 > 0 let : £t (0 ,w,y) =exp(e b(s,u,y)dy{s) ~Y f ks,u,y)2dsj ( J , P ) - a . e . • Moreover we denote p'(t,x) = Dxp(t,x). R e m a r k 2 .40 . Arguing as in the proof of Proposition 2.34 we see that for any 9 > 0, £ ( A (#) is a P ( - m a r t i n g a l e and that there is a function N : [0,oo) x P + , nondecreasing in each variable such that £t{l)P < H(p,t)£t(p). D e f i n i t i o n 2 . 4 1 . For any e > 0 set U{t,X,£) =Xt(pX). P r o p o s i t i o n 2 .42 . Let u(t,x,e) be as in the above definition and p'(t,x) = Dxp(t,x). We write 7(s,w,y) = ~?(s,ns{K)(uj),y) • f(s,u,y) = f(s,Us(K)(u),y) h(s,u,y) = h{s,U.a(K)(ui),y). Then the following representation holds for any £ G (0,1] u{t,x,e)= [pxt+e{z)p{z)dz+[ [px-s+Ms))£s(l)%ZJ{ds,dy) Jn Jo J + IJ ( p f - i + e ( y ( « ) ) £ ( ^ ( 2- 2 3) + %p't-s+e(ys ~ x)£s{l)bsyjs(dy)ds yt G [0,1] P - a.s. P r o o f . Recall that yt/\. is a Brownian motion under P ^ . Since Dspf_s+£ = —D2pf_s+E it follows from Ito's lemma 2.17 applied to B\ = px-s+e{y{s)), 0 < s <t, that B1 is a martingale, B\ = Bl+ f p't_r^£(yr - x)dy{r) : ( J , P ) - a.e. Jo The familiar section theorem argument and (fly) yield £t(l) = 1 + r^(l)6 sdy(s) Jo Jt = 1 + / hsdy(s) + / fsds J-a.e. 
Jo Jo 62 Integrate by parts (justified by Proposition 2.17). Px{yt)£t{l)it =pf+s{yo) +-jf' (£s(l)%p't-s+e(ys -x) , - .. ' + Px-s+e(y*Ws{l)bs + px-s+e{ys)£s(l)hs) dy(s) ,t , • ; / • (2-24) + Jo v^1^3*1*+ ZsWp't-s+eiys - x)hs , + %p't_s+e(ys - x)£s(l)bs +P?-s+e(ys)£s{l)fs) ds ( J , P ) - a.e. Moreover, for any cp € C'2(M), t € [0,1] Xt{<P) = JcP{yt)Kt{dy) . = y' 4>{yt)ltkt{dy) = f <p{yt)lt£~t{l)Jt{dy) V<>-0 P - a.s, (2.25) Using (2.24), (2.25) and Ito's lemma.for historical integrals 2.18 we obtain (2.23) •. 2.4 Proof of the Main Result In this section we put together the results from Sections 2.2, and 2.3 to prove Theorem 2.9. We begin with some technical lemmas. - L e m m a 2.43 ( K o l m o g o r o v ) . (a) Let [B(x) : x G P d ) be a family of random variables in- dexed by x € P d . Suppose thai there exists a real p > 0 and two constants Co, $ > 0 swc/i that •-'•r . V x , x € P d , E[jB(x) - £(x)|p] < C0|jx - x\\d+3.. ' Then the process (B(x) : .x € P d ) has,a continuous version which is globally Holder with exponent a, for any a < (3/p. • (b) Let I C P . 3 be the product of 3 intervals (either closed, open or semi-open). Let [B(x) : x £ I) be a 3-dimensional random field. Suppose that forany k > 13 there is a, constant Co > 0 . such that " ' ' E[|B(xi ,x 2 ,x 3 ) - B(xi,x3,xi)\k] < Co(|x i - x i | | ^ V | x 2 - x 2 | ^ + |x3 - x 3 | V ) . for any. ( x i , x 2 , x 3 ) , (x i ,X2 , x 3 ) G I such that 0 < |xi — xi|,|x2 —x2|,|x3 — x3| < 1. Then the process (B(x) : x € I) has a continuous version, which is Holder, with exponent a, for any a < 1/4. Moreover, for any X2,x3 fixed, the map x\ i-4 i? (x i ,x 2 , x 3 ) is Holder-a for any a< 1/4 and the map x 2 >-» l ? (x i ,x 2 ,x 3 ) is Holder-a for any a < 1/2. If we know that the process (-B(x) • x € P d ) is continuous to begin with then there is no need to take a version. 63 P r o o f . Although Kolmogorov's theorem is not generally stated in this form, the standard proof (Revuz and Yor 1991) works equally well. 
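To illustrate part (a) of the Kolmogorov lemma, note that for a one-dimensional Brownian motion B one has E[|B_t − B_s|^{2m}] = (2m−1)!! |t − s|^m, so (a) applies with d = 1, p = 2m and β = m − 1 and yields Hölder continuity of the paths for every exponent α < (m−1)/(2m), hence for every α < 1/2 upon letting m → ∞. The following Monte Carlo computation is a sketch of the case m = 2 (the sample size, seed and tolerances are our own choices and not part of the thesis); it checks the fourth-moment identity E[|B_t − B_s|^4] = 3(t − s)^2 that drives the argument.

```python
import numpy as np

rng = np.random.default_rng(0)

def increment_moment(t, s, power=4, n_samples=200_000):
    """Monte Carlo estimate of E|B_t - B_s|^power for a standard
    one-dimensional Brownian motion B.  The increment B_t - B_s is
    exactly N(0, t - s), so we sample it directly."""
    inc = rng.normal(0.0, np.sqrt(t - s), size=n_samples)
    return float(np.mean(np.abs(inc) ** power))

# Fourth-moment identity E|B_t - B_s|^4 = 3 (t - s)^2: the moment bound of
# the Kolmogorov lemma with d = 1, p = 4, beta = 1 (Holder exponents < 1/4).
estimate = increment_moment(0.7, 0.3)
exact = 3 * (0.7 - 0.3) ** 2
```

Taking p = 2m large pushes the guaranteed exponent to any α < 1/2, which is consistent with Lévy's classical modulus and the reason Proposition 2.20 is stated with the modulus h(t) = (t log⁺(1/t))^{1/2}.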
• L e m m a 2.44. (a) Denote p'(t,x) = Dxp(t,x). For any 0 < e < 1, t € [0,1] rt roo I \p'(t — s,x + e) — p'(t — s,x)\dxds < C2AA.iy/s. JO J-oo '(b) Let 0 < s < t < 1. Then rs roo I \p'(t~ r i x ) —p'(s ~ r,x)\dxdr < c2.44.2Vt — s. Jo J-oo (c) Let T{n) := mf{t > 0 : Ht(l) > n} A 1. There is a function 0 : N x [0,oo) —• 1R+, non-decreasing in each variable such that X;(l) < e{n,s) on {T(n)>s}. Notice also that K* (1) =X*(1). P r o o f , (a) Estimates (a) and (b) should be well known. We prove them since we don't know a reference. We need the following elementary estimate (e.g. Ladyzenskaja 1968 p. 274) ' \D?D?p{x,t)\.^Cn^-^-^exp^-Cn^y) . (2.26) Let 8 > 0, 8 < t. We estimate rt roo ft-S roo / / \p'(t - s,x + e) - p'{t - s,x)\dxds = / / \p'(t-s,x + e)-p'{t-s,x)\dxds Jo J-00 ' Jo J-00 rt roo + \p'[t - s,x + e) — p'(t - s,x)\dxds • Jt-S J-00 • ' rt—S roo < / \p'(t - s,x +.e) -p'(t - s,x)\dxds -. Jo J-00 rt roo + 2 / \p'(t- s,x)\dxds Jt-S J-00 = Il+h. Use estimate (2.26) to check that rt. ^ -«$ y/t - S < Ci\T8. J 2 < C ft ~T^=ds 64 To estimate I\ we use the fundamental theorem of calculus followed by a linear change of variables, rt—6 roo . rl h = / / £ / D2p(t - s; x + ze)dz JO J-oo' Jo dxds ft—6 rl roo . <£ / / \D2p(t-s,x + JO JO J-oo ze)\dxdzds (by Fubini) ft-8 rl < e I / dzds Jo Jo t-s (using estimate (2.26)) ^ e l o g Q ) . <c2£iog(i). : • If t < S the estimate (2.26) yields rt roo rt roo I I \p'{t — s,x + £) — p'(t — s,x)\dxds < 2 / / \p'(t — s,x)\dxds J.O J-oo Jo J-oo <C3V6. Thus, for any 6 > 0, fof \p'(t-s,x + £) -p'(t-s,x)\dxds < Ci\Al + C 2 £ l o g Q ) +C3V6.: Put 6 = £ to obtain the desired result, (b) We compute \p'(t — r, x) — p'(s — r, x)\dxdr -00 rs roo rt—s < \DTDxp(T + s — r,x)\dTdxdr' Jo J-00 Jo < rs rt—s roo Ci,i / / / ( Jo Jo J-00 T + s — r) 2 exp ( — cit T -f s — r dxdrdr (estimate (2.26) nt—s {T + S - r)-z'2dTdr = 4C(VT=s'-{Vi-y/s~)) < c2.44.2Vt — s. (c) A more general result is proved in p.61 of Perkins (1995). 65 P r o o f o f T h e o r e m 2.9. 
We claim that it suffices to consider t € [0,1]. . Exerc i se . We do not wish to rob the reader of the pleasure of doing some things for herself, so we leave the proof of the claim as an exercise. The proof rests on the following lemma. L e m m a 2.45. Let u be as in Definition 2.^1. Suppose that there is a constant CUB such that \i(s,v,y)\ < CUB IM*)W,2/)| < cUB \f(s,v,y)\ < cUB \b(s,u,y)\ < cUB V s > 0 Vy € C P - a.s. Then for each k > 1 there is a constant C2Ab[k) such that for any e,6 € (0,1], t, s € [0,1], x,x € P , |a; — x\ < 1, P[|u(t,x,e) - u{s,x,8)\k] < C2A5{\t - \+ \x - x\^ + \e - <5|^). - (2.27) We defer the proof of Lemma 2.45 and show how to prove the theorem. Case 1. Suppose that 6, 7, / , h are uniformly bounded. Set R > 0 and consider the function u : [0,1] x [-R, R] x (0,1]—> P . Lemmas 2.45 and 2.43 show that this function is H61der-a for any a < 1/4 and therefore uniformly continuous. It follows that it has a unique continuous extension to [0,1] x [-R, R] x [0,1]. Define . u(t,x) = u(t,x, 0) = l imu(t , x,e). £->0 By considering a sequence Rn f 00 this definition is extended to any x G P . Note that the function (t,x) 1—> u(t, x) is H61der-a for a < 1/4. This gives the joint continuity o f u . Lemma 2.45 also gives the desired exponents in x and in t. It remains to show that u is indeed the. density of X. Let <j> e C/<(P), <j> > 0. We compute X t (0 ) = lim / 4>*p£(z)X(dz) = l im / / 4>(x)pe(x — z)dxXt{dz) J = l i m y jpx(z)Xt{dz)(j){x)dx (by Fubini) = lira J u(x,t,e)(j)(x)dx = J u(t,x)4>(x)dx (by dominated convergence) 66 Case 2. Suppose that the coefficients are not uniformly bounded. Recall that T(n) was defined in Lemma 2.44(c). Note that Lemma 2.44(c), (J5)7, (2.2), and (2.3). show that for each n € N {yn,in\bn,fn;hn) = (7,7-\bj, h)l(0 <t< T(n)) is uniformly bounded. Hence K.^Tn solves (MP)m r . (see Remark 2.4 of Perkins (1995)), where • (bn,9n) = {b,9)H0 <t<T{n)) = {bn + hn%1,(fn + hnbn)Af-1). 
Therefore X A r ( n ) satisfies the conditions of Case 1. (see Remark 2.23). To finish the proof note that Proposition 2.21 implies that for P-almost every OJ there is an no = UQ(OJ) such that T(m) - 1 for any m > UQ. • P r o o f o f L e m m a 2 .45 . Note that since (A+B + C)k < 3k(Ak+Bk + Ck) it suffices to estimate TP[\u(t,x,e) - u(s,x, e)\k], JP[\u(t,x,e) - u(t,x,e)\k] and lP[\u(t,x, e) — u(s,x,6)\k] separately. Throughout this proof we assume w.l.o.g. that t > s and k is an even integer. We start with the first term. By Proposition 2.42 (generalized Green's function representation) 6-hTP[\u(t,x,e) - u(s,x,e)\k] < f (pxt+£(z) -px+e(z))p(z)dzk + ^yyj(pU+e(yr)-pU+e(yr))ir(l)%ZJ(dr,dy)k + F[/< J(PX-r+e(yr)-pXs-r+e(yr))£r(l)frJr(dy)drk (2.28) + P ^ J(p't_r+£(yr - x) - p U + e ( y r - x))£r(l)hr Jr(dy)drk P [L I^_r+£^r ~ X>> ~ p ' s - ^ V r ~ x))£r(l)lrbrJr(dy)drk j £r{l)A/rbrkrJr(dy)dr + . + P i=l In (2.28) we have adopted the (bizarre) convention pr(x) = 0 for r < e. We compute h = (Pt+eP(x) - Ps+eP(x))k <\\Pt+£p-Pt+eP\\^ • <\\ Ps+£(Pt_s - ljpll*,' <|| (Pt_8 - l)p II*, (since P is a contractive semigroup) = sup\JEz[p(Bt-s)-p(z)}\k 67 (Here (E* , i? ) is a Brownian motion started at z eJR) <'c2.45.isupE?[|(Bt_, -z)\^r]k z (by (JC)). < c 2 . 4 5 . i E ° [ ( x / r ^ | 5 1 | ) ^ 1 ] f c . . . = C2A5.2(t - S) * We estimate .fc/2 1/2 1/2 h < F |jfy(pf_r+e(|fr) - p ? - r + £ ( y r ) ) 2 5 r ( l ) 2 7 r 2 5 r ( l ) - J J r ( d 2 / ) d r ' (by Burkholder) (by Jensen) [ rl f T 1 / 2 < P ^ J (P X_r+s(yr) - pX-T+e(yr)?Jr{dy)drk-1 X P ftf-r+eiVr)-PU+e(yr))%(l)ktfkMdy)dr (by Schwarz) . r rt r . -i 1 / 2 = P [Jo JR {p*- r+c {a) -PX-r+e{*)?3r{da)dTk-' x P pi(p?_r+£(yr) - p ? _ r + £ ( y r ) ) 2 f r ( l ) f c 7 2 f c J r ( d y ) r i r (by Definition 2.25) = P - / / ( P f - r + e ( « ) -P? 
- r+e( a ) ) 2 5 (^a )dadr .JO JR X P f{plr+M -pU+e(yr))2Sr(l)ktfkJ, (by Remark 2.37 ) < P \fl j &i-T+M -Px-r+e(a)fg{r\a)dadrk x H ( A : , l ) 1 / 2 c f c / B P ^y(p?_r+e(yr) - p ^ r + e ( y r ) ) 2 5 r ( f c ) J r ( . d y ) d r (by. (t/B) and Proposition 2.34) f j (pU+e(a)-px.r+e(a))2dadrk-2 Jo Jn 1/2 1/2 r{dy)dr 1 1/2 1 / 2 < P 1 / 2 (2.29) 68 X L ^ ^ - r + £ ( ° ) - ^ - r + e ( a ) ) 2 5 ( r , a ) f c " 1 d a d r j (by Jensen) <C2A5.3(k)(t-s)^ f [ i p t r ^ l - P s - r + A ^ ) 2 ^ Jo JR X j f p ^ [ ( p f _ r + e ^ (by Lemma 2.35) rt ' < C2.45.4(*)(< " s)*? J P r J [ ( p X _ r + £ ( y r ) -pXs-r+Ayr)?£r{k)] dr1'2 (by Proposition 2.38 and Lemma 2.35) • = C2A5.5(k){t-s)hrlI2.i- • (2.30) I2.i is easily estimated (recall that y.At is.a Brownian motion under P ^ ) : h.x < 2 fvl\pU+M2Sr{k)]dr'l2 Jo <2/*P r JIpf_ r^(y r)VY/8P r J^(fc)8]V« (i ri^ J O (by Holder) < C2.45.6(&) / / / Pt-r+e(z- 2 ) 5 / 2 p r (2 - ^ )^p(u ; )^ 4 / 5 P^[5 r (5 J t ) ] 1 / 5 dr 1 / 2 Jo J R J R = C2.45.6(*0 / / /" P t - r + e ( a ; - z ) 5 / V ( ^ - ^ ) ^ p ( ^ ) d u ; 4 / 5 d r . 1 / 2 J O J R J R < c2.45.7(A:) (2.31) (the integral was estimated during the proof of Proposition 2.38). (2.30) and (2.31) yield 2̂ < c 2 .45.8( '*) .(*-s)^ i . , (2.32) ^3 can be estimated in a similar manner. rt h < P '0 (p f_ r + e (y r ) - p*_ r + e(y r)) 2 Jr{dy)dr 2fc-l X (P?-r + e (yr) L ^ - r + £ ( y r ) ) 2 ^ ( l ) 2 f c / r 2 ; C J r ( d y ) d r ) (by Jensen) 1/2' < P (pf_ r + £ (y r ) - Ps (yr))2Jr(dy)dr 69 2A:-1 1 1/2 X IP [jf/(rf-r+JVr) - P ? - r + £ ( y r ) ) 2 5 r ( l ) 2 f c / r 2 f c J r ( ^ ) d r (by Schwarz) - l 1/2 1/2 =  P [fo fjPX-r+e(a)-pU+^))29(r,a)dadr2k- X / < F " [(Pf-r+efor) ~pX-r+e(yr))2£r(l)2kf?k] dr^ <[ I\{pU+e(*)-pxlr+e{a)?dadrk-1 «/0 «/R • x / / ( p f _ r + e ( a ) - r f _ r + e ( o ) J 2 P [ ^ ( r , a ) ^ - 1 ] A » d r 1 / 2 : (by Jensen) < C 2 . 4 5 . 9 ( f c ) ( t - s ) 2 V i 7 2 . 
1 (by (2.31), Lemma 2.35 and Proposition 2.38) < C2.45.10(fc)(< ~ S ) 2 ^ 1 B y the same token A ~ P / Vt-r+eiVr-x) - p's.r+Ayr - x)\JT(dy)dr2k^ (by Jensen) . ~ P / / M - ^ ^ ~  X) -p*-r+e(Vr ~ x)\JT(dy)dr2k-' l l ^+e(yr-x) p'a.r+e(Vr ~ x)\£r(l)2kh2kJr(dy)dr (by Schwarz) = F l l L | p ' - r + e ( a "  X) -P'-r+efr ~ x)\g{r,a)dadr2k-A ^ ; x I [\Pt-r+eiVr - x) - . p ' s _ r + £ ( y r - x)\£r(l)2kh2k] dr1'2 <[ I \p't-r.+e{a-x)-p's_r+e{a-x)\dadrk-1 ** 0 J R X / / I p U + . ^ - ^ - p . - r + ^ a - ^ I P ^ G ) 2 * - 1 ] ^ 1 ^ J 0 »/ R • 4 BK(2fc,l) 1/ 2 x y P r J [|pj_r+4f(yr -x) - p U + £ ( 2 / r -*)|&(2*)] dr1/2 (2.33) 70 (by Jensen) . . = C2-.45.il (fc)(* - s ) ^ J*W>Jr-\\p't_r+£(yr - x) - p's_r+e(yr -x)\£r(2k)] dr1'2 (by Lemma (2.44) (b)) = C2At:i2(n,k,N)(t-s)*hf1I4.1 (2.34). We readily estimate h.x < 2^PJr [ ip ' t _ r + £ (y r -x)\£r(2k)] dr"2 < 2JJJT [\p't_r+E(yr-x)\^}4/5iPrJ [Sr{2kf]l,bdr^2 (by Holder) < 2c2.45.i3(A:) J J f \Pt-r+e(z - x)\*Pr(z - w)dzp(w)dw'tpJr [fr(10A)] * dr2 < 2c2.45.13(fc)||p||2/5 ft! Wt-r+ei* ~ x)\5/4Pr(z - W)dzdw^drll2 \ . Jo JnJn =<2c2.45.i3(A:)||p||2i5 f [ \p't_r+e(z-x)\5/4 f pr(z-w)dwdz4/5dr1/2 Jo JR Jn (by Fubini) = 2c2.45.13(A :)||p||2/5 / ' / \p't_r+e(z-x)\^dz^dr'l2 Jo Jn <C2A,.u{k)f L x p l - ^ ^ ^ d z ^ d r " 2 Jo t — r + £ J \. • ' t — r + £/ (by estimate (2.26) rt i = c2.45.15W / - - T S T T A - 1 / 2 . • Jo (t-r + syv-' • t • < c2.45.i6(A:)- (2.35) From (2.34) and (2.35) it follows that h < c2A5.n{k)(t-s)2^.. . (2.36) The estimation of J 5 is identical to that of 74. We obtain 2k~ 1 h <c2A5.n(n,k,N){t-s) — . (2.37) We now estimate Ie . J 6 <4 * » P jjr{\)Jr{dy)drk]^ ^ J r ( l ) d r » - 1 ^ y " ^ a ) 2 f c j r ( d y ) r f r ^ 1 / 2 < c ^ P 71 ) (by Jensen) < < # B N . ( 2 M ) 1 / 2 P < C2.45.19(&)(* ~ S)^ ft "I 1/2 r f t J!(l)2k~1 J dr2k~l P J £r(2k)Jr(dy)d\ 1/2 2fc-l (2.38) Combining the estimates (2.29), (2.32), (2.33), (2.36), (2.37), (2.38) with (2.28) we see that for 0 < t - s, t G [0,1] JP[\u(t,x,e) - u(s,x,e)\k] < C 2 . 
4 5 . 2 0 ( n , k ) ( t - s) <\ (2.39) Next we estimate 5-kTP[\u(t,x,s) - u(t,x,e)\k] < [ (px+£(z) - pf+£(z))p(z)dzk + P [l l^-r+e(yr)-plr+e(yr))SrmrZJ(dr,dy)k ' Jitf-r+-e(Vr) ~ P?-r + e(Vr)).& M frJr{dy)drk \L l^P't-T+£<<Vr -P't-r+e(y^-x))£r{l)hrJr{dy)dr ^ J(p't_r+£(yr - x) -p't_r+£(yr - x))Er(l)%brJr{dy)drk + P + P (2.40) i=l We compute Li = (PtP(x) - PtP(x))k <\\ Pt(px - px) \t (here px = p(x - •)) <\\ Px - Px \t < c2.45.21ja; — i ) ^ 2 (by (JC)). (2.41) L 2 is estimated as follows L 2 < P J J(Pf_r+e(yr) -pf -r+e(yr) ) 2 5r( l ) 2 7r^r( l ) - 1 J r (dy)dr* / 2 (by Burkholder) < P (f^(pX-r+e(yr)-plr+Ayr))2Mdy)drk-1 72 \ 1/2 1/2 X ill^-r^\-Pt-r+e(yr))%(l)k^ (by Jensen) " P [ll'l^+eiVr)-Pt-r+e(yr))2Mdy)drk-1Y/2 ' X F IU l {PU+M-Plr+e(yr))2er(l)2kl2kjr(dy)dr\ (by Schwarz) < P [fi fn(PX-r+e(*) -PU+e(o))29{r,a)dadrk-' x c * B N ( * , l ) 1 / 2 P [fi J(pf_r+e(Vr) ~plr+Ayr))2£r(k)Jr(dy)dr - ^ III l n P t r + e { a ) -Pt-r^)?dadrk-2 ft r - 1 1 /2 x (pf- r + e(o) " Px-r+e{a))29{r,a)k-ldadr Jo Jn X CkBX(k,l)1/2 fiv'r [ (p?- r + e («r ) • - p f - r + e ( y r ) ) 2 ^ ( * ) ] d r 1 / 2 (by Jensen) < c 2 . 4 5 . 2 2 ( A ; ) | x - i | i f a T / f e r + e ( a ) - p f - r + e ( a ) ) 2 P b ( ' - , a ) * - 1 ] d a d r 1 / 2 Jo Jn , x ^ P r J [ ( p f _ r + £ ( y r ) - p f _ r + £ ( y r ) ) 2 £ r ( A ) ] dr1/2 (by Lemma 2.35) < c2.45.23(fc)|x - x l 4 * 1 T p r J [(pf_r+e(yr) - p f _ r + e ( y r ) ) 2 ^ ( A : ) l dr1/2 Jo L J (by Proposition 2.38 and Lemma 2.35) = C2.45.24(K)|X - X\ 2 L2.\- Note that in fact L 2 . 1 < / 2 . I : L2 < C2.45.25(^)|a; - The estimation of L 3 is analogous to that of I3 and we obtain ^ 3 < C 2 . 4 5 . 2 6 F ~ x| 4 • We now proceed to estimate L4. 
U < IP I b U + e f e r - 4-p{- r + e-(i/r - £ ) |J r (d^dr2*"1 73 (2.42) (2.43) (2.44) 1 / 2 ' l l / 2 1/2 X y y ' | p ; _ r + £ ( y r - x ) - p U + e ( y r - x ) | 5 r ( l ) 2 f c ^ (by Jensen) x IP [J' f \p't-r+e(yr - x) - p't.r+e{yr - x)\ET(l)2*^*JT{dy)dr | (by Schwarz) < / f \p't-r+e(a-x) -p't_r+e(a-x)\dadrk-1 Jo Jn x / [ b U + ^ a - ^ - p U + ^ a - x J I P f e ^ o ) 2 * - 1 ] ^ 1 / 2 . Jo Jn • ckUBK(2k, l ) 1 / 2 x ^ P r 7 [|P;_ r + e(y r - x) . -p j . _ r ^(yr - 5)|5r(2fc)] d r 1 / 2 : (by Jensen) . = C2.45.27(fc)|x JQ P r [bU+efor " *) " pi-r+cfor ~ 5)|^-(2fe)] d r 1 ' 2 (by Lemma 2.44(a)) jt—i = C2.45.27(A;) |x - x j ~ L 4 . i . (2.45) Arguing as in the estimation of /4 .x we see that L 4 . 1 is bounded by a Constant. Therefore £ 4 < C2. 4 5 .28 ( fc)|x -X\^ (2.46) The estimation of L5 is identical to that of L 4 . We obtain fc — 1 L5 < C2.45.29(A:)|x - x\~ (2.47) Combining the estimates (2.41), (2.43), (2.44), (2.46), (2.47) with (2.40) we see that for any x, x TP[\u(t,x,e) -u{t,x,e)\k] < c2.45.3o(A;)|x - x| 2 (2.48) To estimate P[|u(i,x,e) - u{t,x,6)\k] we follow the procedure used to estimate JP[\ii(t,x,e) ' u(s,x,e)\k]. Only a few trivial changes are required. We obtain fc-i P[\u{t, X , e ) - U{t, X , 5)\k] < C 2.45:31 |e - S\ 4 Estimates (2.39), (2.48) and (2.49) show that the hypothesis of Lemma 2.43 (b) hold. (2.49) 74 R e m a r k 2.46. (a) In this chapter we focused on the projection A' of a solution K of the strong equation (HSE)^. (See Definition 2.4.) However, we could have chosen the projection X. — f l . ( I f ) ' o f a solution i f of the following martingale problem as initial data. Let m b, 7 be as in Theorem 2.9. Suppose that for all <p E DQ Kt(4>) = m(<t>)+£ J A0(s,y) + b(s, fls(K),y) V0(.9,y))Ks(dy)ds + Zt{<j>), where Zt(<j>) is a continuous square integrable martingale null at. zero with square function. <Z(#)),= f Ks{ls4>2)ds. " . J o W i t h this definition of X the conclusions of Theorem 2.9 hold. 
The starting point of the proof would be a slight variation of Proposition 2.24, where we would need to employ Dawson's Girsanov theorem as in Remark 2.4 of Perkins (1995). The remainder of the proof would be essentially unchanged. If γ ≡ 1 then our proof works without any changes (see Remark 2.23), although some statements, like the conclusion of Proposition 2.24, are trivial. □

2.5 Examples: Super Brownian Motions with Singular Interactions

This section is an application of the foregoing.

We begin by recalling an example of Sznitman (1989). Roughly speaking the model is the following: N particles are placed at random (independently) on the real line at time t = 0. They evolve in time according to the dynamics

dX^i_t = dw^i_t + (1/N) Σ_{j=1}^N δ_0(X^i_t − X^j_t) dt,   i = 1, ..., N,   (2.50)

where δ_0 is Dirac's delta function, (w^i)_{i=1}^N are independent one-dimensional Brownian motions and X^i_t represents the position of the i-th particle at time t. Thus the particles feel a "push" in the positive direction when they collide; between collisions they follow independent Brownian motions. Sznitman showed (among other things) that when N → ∞ a law of large numbers phenomenon occurs and the empirical distribution of the particles

X^N_t = (1/N) Σ_{i=1}^N δ_{X^i_t}   (2.51)

converges to a non-random limit X^∞. In the limit each particle follows an independent copy of the "nonlinear" process X:

dX_t = dw_t + u(t, X_t) dt,   (2.52)

where w is a one-dimensional Brownian motion and u(t, x) is the density of the random variable X_t. Moreover X^∞_t(dx) = u(t, x) dx. Notice that the nonlinear process has marginals which satisfy, in a weak sense, Burgers' equation

∂u/∂t = (1/2) ∂²u/∂x² − (∂/∂x)(u²).   (2.53)

Indeed, for any φ ∈ C²_b(ℝ), (the classical version of) Ito's lemma and (2.52) yield

φ(X_t) = φ(X_0) + ∫_0^t φ′(X_s) dB_s + ∫_0^t ( (1/2) φ″(X_s) + u(s, X_s) φ′(X_s) ) ds.

Now take expectations to obtain

E[φ(X_t)] − E[φ(X_0)] = ∫_0^t E[ (1/2) φ″(X_s) + u(s, X_s) φ′(X_s) ] ds.

Recall that u(s, ·) is the density of X_s, so that for example E[φ(X_t)] = ∫ φ(x) u(t, x) dx.
Hence

∫ φ(x) (u(t, x) − u(0, x)) dx = ∫_0^t ∫ ( (1/2) φ″(x) u(s, x) + φ′(x) u(s, x)² ) dx ds.

Integrate by parts on the r.h.s.:

∫ φ(x) (u(t, x) − u(0, x)) dx = ∫_0^t ∫ ( (1/2) φ(x) (∂²u/∂x²)(s, x) − φ(x) (∂/∂x)(u(s, x)²) ) dx ds.

This last equation can also be obtained by multiplying (2.53) by φ and integrating over space-time. Sznitman (1989) proved that u(t, x) is in fact a classical solution of Burgers' equation (2.53). We refer the reader to the paper Sznitman (1989) for the proof of this and many more very interesting results.

Sznitman's work motivated us to look at the following examples.

Example 1. Let X^N be the critical Bienaymé-Galton-Watson tree of one-dimensional Brownian motions defined by (0.1) with γ ≡ 1 (see page 3). Recall that I(N, t) labels the particles alive at time t and Z^i_t, i = 1, ..., I(N, t), labels their locations. Consider the following interacting particle system U^N. The family structure of U^N is identical to that of X^N. The particles belonging to U^N are labeled U^{N,i} and obey the following dynamics:

dU^{N,i}_t = dZ^i_t + (1/N) Σ_{j=1}^{I(N,t)} δ_0(U^{N,i}_t − U^{N,j}_t) dt,   i = 1, ..., I(N, t).   (2.54)

(Note the similarity between equations (2.50) and (2.54).) U^N is defined by

U^N_t = (1/N) Σ_{i=1}^{I(N,t)} δ_{U^{N,i}_t}.

This model is very similar to Sznitman's, but it has one extra ingredient. Instead of N independent Brownian motions, the driving noise is now a system of branching Brownian motions. Several natural questions arise. Does the sequence (U^N) converge weakly to an interacting superprocess? If so, is there a unique limit? What martingale problem is solved by the limit points? Note that since the interactions are singular, this model is not covered by the results of Chapter 1. We are currently working on these questions. We conjecture that the sequence (U^N) possesses at least one limit point which solves the stochastic partial differential equation

∂u/∂t = (Δ/2) u − (∂/∂x)(u²) + √u Ẇ.

Here W is a space-time white noise.
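A crude particle experiment (our construction; the Gaussian kernel standing in for δ_0 and all parameters are illustrative, anticipating the p_ε regularization used rigorously below) shows the qualitative effect of the interaction in (2.54): coupling an interacting and a non-interacting branching run through identical Brownian and branching randomness, the nonnegative pairwise push can only move every particle to the right.

```python
import numpy as np

def run(interacting, n0=100, t_max=1.0, dt=0.01, eps=0.25, seed=42):
    """Critical binary branching Brownian particles.  If `interacting`,
    each particle also receives the nonnegative drift
    (1/N) sum_j k_eps(U^i - U^j), where k_eps is a Gaussian kernel
    standing in for the singular delta interaction in (2.54)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n0)
    t = 0.0
    while t < t_max - 1e-12 and x.size:
        if interacting:
            d = x[:, None] - x[None, :]
            kern = np.exp(-d ** 2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)
            x = x + kern.mean(axis=1) * dt       # strictly positive push
        # noise and branching draws are identical in both runs
        # (same seed, same draw order), which couples the two systems
        x = x + np.sqrt(dt) * rng.standard_normal(x.size)
        branch = rng.random(x.size) < 1.0 * dt   # critical branching, rate 1
        fate = rng.random(x.size) < 0.5
        keep, split = x[~branch], x[branch & fate]
        x = np.concatenate([keep, split, split])
        t += dt
    return x

free = run(False)
pushed = run(True)
```

Because the branching decisions depend only on the shared random draws, both runs carry the same family tree, and the pushed configuration dominates the free one particle by particle.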
Recall that a white noise W on [0, ∞) × ℝ is a stochastic process (W(A) : A ∈ B_f), where B_f denotes the Borel subsets of [0, ∞) × ℝ of finite Lebesgue measure (|A| < ∞), such that if A and B are disjoint sets in B_f, then W(A) and W(B) are independent mean-zero Gaussian random variables with variances |A| and |B|, respectively.

However, if we replace the delta function by a smooth approximation then the model will in fact be covered by our results. Let p_t(x) be the one-dimensional Brownian transition density. Fix ε > 0 and consider the system U^{ε,N} defined by

dU^{ε,N,i}_t = dZ^i_t + (1/N) Σ_{j=1}^{I(N,t)} p_ε(U^{ε,N,i}_t − U^{ε,N,j}_t) dt,   i = 1, ..., I(N, t),

and

U^{ε,N}_t = (1/N) Σ_{i=1}^{I(N,t)} δ_{U^{ε,N,i}_t}.

By Theorem 1.27 the sequence (U^{ε,N})_N (ε is fixed) converges weakly to a unique (in law) superprocess U^ε characterized by the martingale problem

(MP): ∀ψ ∈ C²_b(ℝ),

Z^ε_t(ψ) = U^ε_t(ψ) − U^ε_0(ψ) − ∫_0^t ∫ [ ψ′(x) U^ε_s(p_ε(x − ·)) + (1/2) ψ″(x) ] U^ε_s(dx) ds

is a continuous square-integrable martingale such that

⟨Z^ε(ψ)⟩_t = ∫_0^t U^ε_s(ψ²) ds   ∀t ≥ 0 a.s.

Theorem 5.6 of Perkins (1995) implies that U^ε solves the historical strong equation

Y_t = y_t + ∫_0^t ∫ p_ε(Y_s − x) U^ε_s(dx) ds,   U^ε_t(·) = ∫ 1(Y_t ∈ ·) H_t(dy).

Theorem 2.9 shows that there is a jointly continuous function u^ε(t, x) such that U^ε_t(dx) = u^ε(t, x) dx. After integration by parts (see page 3 for a similar computation) we see that u^ε solves the stochastic partial differential equation

∂u^ε/∂t = (Δ/2) u^ε − (∂/∂x)( u^ε (u^ε ∗ p_ε) ) + √(u^ε) Ẇ,

where W is a space-time white noise and ∗ denotes convolution in the space variable x. □

Example 2. In this example we look at a super Brownian motion in a super Brownian field. We work on a filtered space (Ω, F, (F_t), P) rich enough to carry all the random processes defined below. Let X^N_1 and X^N_2 be two independent trees of critically branching one-dimensional Brownian motions (see page 3). The notation is as in the previous example: I_1(N, t) (resp. I_2(N, t)) labels the set of particles belonging to X^N_1 (resp. X^N_2) alive at time t, and Z^i_1 (resp. Z^i_2) labels their locations, so that

X^N_{k,t} = (1/N) Σ_{i=1}^{I_k(N,t)} δ_{Z^i_k(t)},   k = 1, 2.

We define a process U^N as follows.
The family structure of U^N is identical to that of X^N_1. The particles belonging to U^N are labeled U^{N,i} and obey the dynamics

dU^{N,i}_t = dZ^i_1(t) + (1/N) Σ_{j=1}^{I_2(N,t)} δ_0(U^{N,i}_t − Z^j_2(t)) dt,   i = 1, ..., I_1(N, t).   (2.55)

(Note the similarity between equations (2.50), (2.54) and (2.55).) U^N is defined by

U^N_t = (1/N) Σ_{i=1}^{I_1(N,t)} δ_{U^{N,i}_t}.   (2.56)

When N → ∞, X^N_1 converges to a super Brownian motion X_1, while X^N_2 converges to a super Brownian motion X_2. Call H_1 and H_2 the historical processes corresponding to X_1 and X_2 respectively. We expect that the sequence (U^N) will converge to a superprocess U. In fact the system of equations (2.55)-(2.56) should converge to the historical strong equation

dY_t = dy_1(t) + ∫ δ(Y_t − y_2(t)) H_2(t)(dy_2) dt,   U_t(·) = ∫ 1(Y_t ∈ ·) H_1(t)(dy_1).   (2.57)

Note that (2.55)-(2.56) and (2.57) are equivalent if (U, H_1, H_2) are replaced by (U^N, H^N_1, H^N_2). The convergence does not follow from the results of Chapter 1. We do not prove that result here (although we hope to prove it in a future work). Instead we work directly with the limiting system (2.57), which we call super Brownian motion in a super Brownian field.

Recall that one-dimensional super Brownian motion has a jointly continuous density a.s. Therefore we may write X_2(t)(dx) = v(t, x) dx. We now condition on X_2: we consider v as a fixed function, and so (2.2) holds. We may recast (2.57) in the form

Y_t = y_t + ∫_0^t v(s, Y_s) ds,   U_t(·) = ∫ 1(Y_t ∈ ·) H_1(t)(dy).   (2.58)

We now show that (2.58) has a solution. Since v is uniformly bounded and measurable, the method of Zvonkin (1974) shows that for any bounded (F_t)-stopping time the first equation has a unique non-exploding strong solution on Ω̄. We can use the section theorem and the procedure in the proof of Theorem 2.12 of Perkins (1995) to show that there is a universal version of Y_t(ω, y) such that the first equation in (2.58) holds H_1 — a.e. We then use the second equation in (2.58) to define U (P — a.s.). Apply Theorem 2.9 to the system (2.58). Therefore, given X_2, there is an a.s.
jointly continuous function u(t, x) such that U_t(dx) = u(t, x) dx a.s. □

Remark 2.47. In fact it is not necessary to condition on X_2. But then we must deal with some lengthy technical details. We have decided to leave them out but hope to include them in some future work. □

Chapter 3

Local Times for One-Dimensional Interacting Superprocesses

3.1 Introduction and Statement of Results

In this Chapter we study the local times of interacting superprocesses. We begin by describing the notions of superprocess occupation and local times. We consider first the case of super Brownian motion W. Throughout this Chapter we shall restrict ourselves to the case where the underlying particle motions are one-dimensional. The occupation time process Y is defined by

Y_t(φ) = ∫_0^t W_s(φ) ds   (3.1)

for any Borel φ : ℝ → ℝ_+. Formally, the local time L^a_t(W) of W is obtained by replacing φ by δ_a (Dirac's delta) in (3.1). Thus, if u = u(t, x) is the jointly continuous density of super Brownian motion with respect to Lebesgue measure then

L^a_t(W) = ∫_0^t u(s, a) ds.

This implies the existence of a jointly continuous version of L^a_t(W) which is continuously differentiable in t. (See Sugitani (1988) for more information on regularity properties of L^a_t(W).) Note that L^a_t(W) satisfies the fundamental density of occupation formula

∫_0^t W_s(φ) ds = ∫_{−∞}^{∞} φ(a) L^a_t(W) da.

Indeed,

∫_0^t W_s(φ) ds = ∫_0^t ∫_{−∞}^{∞} φ(a) u(s, a) da ds = ∫_{−∞}^{∞} ∫_0^t φ(a) u(s, a) ds da = ∫_{−∞}^{∞} φ(a) L^a_t(W) da.

In general we define local times for superprocesses to be occupation densities. We have the following:

Definition 3.1. Let (X, Ω, (F_t), P) be an M_F(ℝ)-valued process, i.e. for each t, X_t is a finite random measure on (ℝ, B(ℝ)). We say that L^a_t(X) is the local time of X if the following conditions are satisfied:

(i) The map (a, t) ↦ L^a_t(X) is P — a.s. continuous.
(ii) For any nonnegative Borel function φ,

∫_0^t X_s(φ) ds = ∫_{−∞}^{∞} φ(a) L^a_t(X) da.   (3.2)

□

As an immediate consequence of Theorem 2.9 we obtain

Corollary 3.2. The interacting superprocess X described in Chapter 2 possesses a local time L^a_t(X) which is jointly continuous and continuously differentiable in t.

Our goal in this chapter is to show that more general interacting one-dimensional superprocesses have local times. The measure-valued processes we will consider arise as solutions to strong historical equations (see the Introduction of Chapter 2). We must therefore introduce diffusion, drift and mass coefficients. We will also need some notation.

In this Chapter Ω = (Ω, F, (F_t), P) will denote a filtered probability space satisfying the usual hypotheses. We denote C = C¹ = C([0, ∞), ℝ) and call (C_t) (resp. C) its canonical filtration (resp. its Borel σ-field). H is a one-dimensional historical Brownian motion on Ω starting at m ∈ M_F(ℝ). (Ω̄, F̄, (F̄_t)) = (Ω × C, F × C, (F_t × C_t)), and Z^H denotes the martingale measure associated with H (see p. 11 of Perkins 1995). A set A ⊂ [0, ∞) × Ω̄ is (H, P)-evanescent (or H-evanescent) iff A ⊂ A_1 where A_1 is (F̄_t)-predictable and

sup_{0≤u≤t} 1_{A_1}(u, ω, y) = 0   H_t — a.e. y, ∀t ≥ 0, P — a.s.

A property holds (H, P) — a.e. (or H — a.e.) if it holds off an H-evanescent set. If T is a bounded (F_t)-stopping time, P̄_T denotes the Campbell measure on Ω̄ defined by P̄_T(A × B) := P(H_T(B) 1_A) / P(H_0(1)) for A ∈ F, B ∈ C.

Suppose that ε > 0. Let

σ : M_F(ℝ) × ℝ → [ε, ∞),
b : M_F(ℝ) × ℝ → ℝ,
γ : [0, ∞) × C([0, ∞), M_F(ℝ)) × C → (0, ∞).

The hypotheses on these coefficients are the following. Here d = d_{|·|} is the Vasershtein metric on M_F(ℝ) defined by

d(μ, ν) = sup{ |μ(f) − ν(f)| : f : ℝ → ℝ, ‖f‖_∞ ≤ 1, |f(x) − f(y)| ≤ |x − y| ∀x, y ∈ ℝ }.

Boundedness by the total mass.
There are non-decreasing functions Γ, Γ̃ : [0, ∞) → [1, ∞) such that

sup_x ( |b(μ, x)| + |σ(μ, x)| ) ≤ Γ(μ(1))   ∀μ ∈ M_F(ℝ),   (3.3)

sup_{y∈C¹} γ(t, X, y) ≤ Γ̃(t) ( 1 + ∫_0^t X_s(1) ds )   ∀X ∈ C([0, ∞), M_F(ℝ)).   (3.4)

Lipschitz condition.

|σ(μ, x) − σ(ν, x̄)| + |b(μ, x) − b(ν, x̄)| ≤ Γ(μ(1) ∨ ν(1)) ( d(μ, ν) + |x − x̄| ).   (3.5)

The reader will find many examples of coefficients b, σ, γ satisfying the conditions above in Chapter 1 and in Chapter 4 of (Perkins 1995). We say that (X, Y) is a solution of (HSE)_{σ,b,γ}:

Y_t = Y_0 + ∫_0^t σ(X_s, Y_s) dy(s) + ∫_0^t b(X_s, Y_s) ds,   (i)

X_t(φ) = ∫ γ̃_t(X, y) φ(Y_t) H_t(dy),   (ii)

iff Y : [0, ∞) × Ω̄ → ℝ is (F̄_t)-predictable, X : [0, ∞) × Ω → M_F(ℝ) is (F_t)-predictable, X ∈ C([0, ∞), M_F(ℝ)), (HSE)_{σ,b,γ}(ii) holds for all bounded measurable φ : ℝ → ℝ for all t ≥ 0 P — a.s., and (HSE)_{σ,b,γ}(i) holds H — a.e. The stochastic integral on the r.h.s. of (HSE)_{σ,b,γ}(i) was defined in Proposition 2.15.

Here is the main result of this chapter:

Theorem 3.3. Let X be a solution of (HSE)_{σ,b,γ}. Then X has a local time L^a_t(X).

The basic tool needed in the proof is a Tanaka-like formula of Perkins (1995). We shall not need its most general form. We will employ the following version.

Theorem 3.4. Assume L : [0, ∞) × Ω̄ → [0, ∞) is (F̄_t)-predictable and L(·, ω, y) is non-decreasing and left continuous for H_t-a.e. y, for all t ≥ 0, P — a.s. Also assume

∫_0^t H_s(L_s²) ds < ∞   ∀t > 0   P — a.s.

Then there is an a.s. non-decreasing, left continuous, [0, ∞)-valued (F_t)-predictable process A_t(L) = ∫_0^t H_s(dL_s) such that A_0(L) = 0 and

H_t(L_t) = H_0(L_0) + ∫_0^t ∫ L(s, ω, y) Z^H(ds, dy) + ∫_0^t H_s(dL_s)   ∀t ≥ 0 a.s.

Moreover, if L is continuous H-a.e. then A_t(L) is a.s. continuous.

Proof. This is a special case of Theorem 2.24 (Perkins 1995). □

We illustrate the idea of the proof of Theorem 3.3 with the particular example of super Brownian motion. Recall that if T is a bounded (F_t)-stopping time, B_t(ω, y) = y_t − y_0 is a Brownian motion stopped at T on (Ω × C, F × C, P̄_T) (see Proposition 2.14).
Let ℓ^a_t(ω, y) be its local time; ℓ is normalized so that ∫_0^t φ(B_s) ds = ∫_ℝ φ(a) ℓ^a_t da. Applying Theorem 3.4 with L_s = ℓ^a_s we get

H_t(ℓ^a_t) = ∫_0^t ∫ ℓ^a_s Z^H(ds, dy) + ∫_0^t H_s(dℓ^a_s).   (3.6)

The second term on the right hand side of (3.6) is precisely the local time of super Brownian motion. This is intuitively clear from the particle picture. A straightforward computation shows that L^a_t(W) = ∫_0^t H_s(dℓ^a_s) satisfies (3.2). Using the representation of the local time furnished by (3.6) and Kolmogorov's criterion we are able to prove that the local time is indeed jointly (Hölder) continuous.

Remark 3.5. (a) Another way to verify that L^a_t(W) as defined above is in fact the density of occupation is the following. For simplicity we consider only the local time at 0. Let R^λ be the λ-Green's function

R^λ(x) = ∫_0^∞ e^{−λt} e^{−x²/2t} / √(2πt) dt.

By Ito's lemma, the local time of Brownian motion satisfies the Tanaka formula (see e.g. p. 14 of Adler 1992):

ℓ^0_t = R^λ(B_0) − R^λ(B_t) + ∫_0^t (R^λ)′(B_s) dB_s + λ ∫_0^t R^λ(B_s) ds.   (3.7)

Now, H_t(ℓ^0_t) may be computed using the representation (3.7) for ℓ^0 together with Ito's lemma for historical integrals (Proposition 2.18). It yields (formally)

H_t(ℓ^0_t) = −H_t( R^λ(y_t) − R^λ(y_0) ) + ∫_0^t ∫ ∫_0^s (R^λ)′(y_r) dy_r Z^H(ds, dy) + λ ∫_0^t ∫ ∫_0^s R^λ(y_r) dr Z^H(ds, dy) + λ ∫_0^t ∫ R^λ(y_s) H_s(dy) ds

= −∫_0^t ∫ ( R^λ(y_s) − R^λ(y_0) ) Z^H(ds, dy) − (1/2) ∫_0^t ∫ (R^λ)″(y_s) H_s(dy) ds + ∫_0^t ∫ ∫_0^s (R^λ)′(y_r) dy_r Z^H(ds, dy) + λ ∫_0^t ∫ ∫_0^s R^λ(y_r) dr Z^H(ds, dy) + λ ∫_0^t ∫ R^λ(y_s) H_s(dy) ds

= ∫_0^t ∫ ( R^λ(y_0) − R^λ(y_s) + ∫_0^s (R^λ)′(y_r) dy_r + λ ∫_0^s R^λ(y_r) dr ) Z^H(ds, dy) + ∫_0^t ∫ ( λ R^λ(y_s) − (1/2)(R^λ)″(y_s) ) H_s(dy) ds

= ∫_0^t ∫ ℓ^0_s Z^H(ds, dy) + ∫_0^t ∫ ( λ R^λ(y_s) − (1/2)(R^λ)″(y_s) ) H_s(dy) ds.

Hence

H_t(ℓ^0_t) = ∫_0^t ∫ ℓ^0_s Z^H(ds, dy) + ∫_0^t H_s(δ_0) ds   (3.8)

(since −(1/2)(R^λ)″ + λ R^λ = δ_0). Comparing (3.6) and (3.8) we see that ∫_0^t H_s(dℓ^0_s) = ∫_0^t H_s(δ_0) ds.

(b) All the arguments in this subsection concerning super Brownian motion are easily made rigorous. However, all these results are elementary and their proofs may be found elsewhere.
(c) Local time for super Brownian motion is known to exist in dimensions d ≤ 3. The analogous result for interacting superprocesses is currently under investigation. The cases d = 2, 3 are harder than the case d = 1, since super Brownian motion does not have a density in dimensions greater than one. □

This chapter is organized as follows. Section 3.2 is devoted to the proof of Theorem 3.3. We define a family of random variables {L^a_t(X) : a ∈ ℝ, t ∈ ℝ_+} and show that it satisfies the conditions of Definition 3.1.

3.2 Existence and Regularity of Local Times

Recall that for any bounded (F_t)-stopping time T the process Y defined by (HSE)_{σ,b,γ}(i) is a semimartingale on the space Ω̄ = (Ω × C, (F_t × C_t), P̄_T). We wish to define an (F_t × C_t)-predictable process ℒ^a_t(ω, y) which agrees with the semimartingale local time of Y on Ω̄, P̄_T — a.s., for all bounded (F_t)-stopping times T. We use Tanaka's formula for semimartingales.

Definition 3.6. For any a ∈ ℝ and t ≥ 0 let

ℒ^a_t = |Y_t − a| − |Y_0 − a| − ∫_0^t sgn(Y_s − a) σ(X_s, Y_s) dy(s) − ∫_0^t sgn(Y_s − a) b(X_s, Y_s) ds.   (3.9)

Note that all the terms on the right hand side of (3.9) are defined up to (H, P)-evanescent sets in [0, ∞) × Ω̄. Also, ∫_0^t φ(Y_s) d⟨Y⟩_s = ∫ φ(a) ℒ^a_t da, H — a.e. in [0, ∞) × Ω̄ (see Section 1 of Chapter VI of Revuz-Yor (1991)). □

We shall need the following technical results.

Lemma 3.7. Let

T(n) = inf{ s ≥ 0 : H_s(1) > n }.   (3.10)

Then there is a function Θ : ℕ × [0, ∞) → ℝ_+, non-decreasing in each variable, such that

X*_s(1) ≤ Θ(n, s) on {T(n) > s}.   (3.11)

Proof. This statement is proved on p. 61 of (Perkins 1995). In the notation therein Θ(n, s) = Γ_1(n) e^{Γ_1(n) s}. The hypothesis (3.4) on γ is needed here. □

Remark 3.8. In our present setting it is not necessarily true that E[X*_t(1)^p] < ∞ for p ∈ ℕ. □

Lemma 3.9. Assume T is a bounded (F_t)-stopping time and ψ ∈ b F̄_{T∧s}. Then

H_{t∧T}(ψ) = H_{s∧T}(ψ) + ∫_{s∧T}^{t∧T} ∫ ψ Z^H(dr, dy)   ∀t ≥ s a.s.

Proof.
This is a particular case of Proposition 2.7 of Perkins (1995). □

Lemma 3.10. Let L : [0, ∞) × Ω → [0, ∞) be (F_t)-optional, and let S be an (F_t)-stopping time. Then

P[L*(S) ≥ ε] = sup_{T≤S} P[L(T) ≥ ε]   ∀ε > 0,

where the sup on the right is over all (F_t)-stopping times T bounded by S.

Proof. This result is a consequence of the optional section theorem. See Lemma 3.7 of Perkins (1993). □

Suppose we could apply Theorem 3.4 to the process L_s = ℒ^a_s. We would obtain

∫ ℒ^a_t H_t(dy) = ∫_0^t ∫ ℒ^a_s Z^H(ds, dy) + ∫_0^t H_s(dℒ^a_s)   ∀t, P — a.s.   (3.12)

The next argument shows that ℒ^a satisfies the hypotheses of Theorem 3.4, so that (3.12) holds. Indeed, for any bounded (F_t)-stopping time T, ℒ^a is non-decreasing on [0, T] P̄_T — a.s. The section theorem shows that ℒ^a is continuous, non-negative and non-decreasing H — a.e. on [0, ∞) × Ω̄. To finish, we need to check that ∫_0^t H_s((ℒ^a_s)²) ds < ∞ P — a.s. Define stopping times T(n) as in (3.10). We compute

P̄_{s∧T(n)}[ |Y_{s∧T(n)} − a|² ] = P̄_{s∧T(n)}[ | Y_0 − a + ∫_0^{s∧T(n)} σ(X_r, Y_r) dy(r) + ∫_0^{s∧T(n)} b(X_r, Y_r) dr |² ]

≤ 3 P̄_{s∧T(n)}[ |Y_0 − a|² ] + 3c_2 P̄_{s∧T(n)}[ ∫_0^{s∧T(n)} σ(X_r, Y_r)² dr ] + 3 P̄_{s∧T(n)}[ ( ∫_0^{s∧T(n)} |b(X_r, Y_r)| dr )² ]   (by Burkholder)

≤ 3 P̄_{s∧T(n)}[ |Y_0 − a|² ] + 3c_2 P̄_{s∧T(n)}[ ∫_0^{s∧T(n)} Γ(X_r(1))² dr ]
and (3.12) holds D e f i n i t i o n 3.11. We set ft LUX) := I Hs(dCas) Jo = f CfHtidy)- J JcasZH(ds,dy). (3.13) (3-14) (3.15) (3.16) (3.17) (See equation (3.12)). By Theorem 3.4 this process is a.s. increasing and continuous in t. Therefore it defines a measure Z ^ S ( X ) on P + . Let LUX) := f o-2{Xs,a)Lads(X). JO (3.18) 86 As the notation indicates, the random process L"(X) denned above is indeed the desired local time of X. We must now show that it fulfills the conditions of Definition 3.1. Note that the Lipschitz condition (3.5) implies that the mapping s o~2(Xs, a) is continuous and therefore integrable with respect to the measure L%S(X). • First, we study the continuity of L^{X). Without loss of generality, we fix TV > 0 and restrict ourselves to the time interval [0,N]. As usual, some localization procedure is needed. We shall stop the processes H, X, Y and C?. at T(n). Note that £ r ^ A . 1S ^ o c a ^ t * m e °* Y?(n) (in Campbell space.) Also, Z^T^ is the orthogonal martingale measure associated with HT(n\ We shall need the following estimates. < L e m m a 3 . 1 2 . For any p > 1 there are constants C3. i2 . i (n ,N,p ) , 0 3 . 1 2 . 2 (n, TV, p), 03.12.3(^1,TV,p) , such that for any (Ft)-stopping time T < T{n) A TV and any x,z € IR, s, t 6 [0, TV] and sup P T [ ( £ £ ) * ] <c 3.i2 . i (n,TV,p) oeR W>T[\CT ~ £ZT\p}< c3.n.2(n,N,p)\x - z\l KMmM - C-TAs?) < C3.12.3(n,TV,p)|i - s\l. (3.19) (3.20) (3.21) providing \x — z\ < 1, \t —s\ < 1. Proof . Barlow Sz Yor (1981) show that there is a constant Cp such that sup\\CaT\\LP < (Cp)l^\\Yr]\IlP. • ' a That is, sup P ? [(££)?] < CJPT a€R IY0 + fT a(Xs,Ys)2ds^2 + [T \b(Xs,Ys I Jo Jo )\ds The quantity on the right hand side has already been estimated in the special case p — 2 (see the discussion preceeding Definition 3.11.) The case with arbitrary p is similar. sup £ ? [ ( £ £ ) " ] < y~lCp (P?[|Yb| p ] .+ P ? „ r rT + P fT o(Xs,Ys)2dsP/2 Jo "•If \b(xs,Ys)\dsr Jo. <.3 p - 1 Cp . (p?[|y 0 | p ] ' + P ? 
j ?(xs(i))2ds "\fTr(xs(i))ds^ Jo < 2P~lCp ( P ? [ | YO| P ] + P ? j\(Q{n,s))2ds^2 87 '.+'p. f JO r{e(n,s))dsp (since T < T(n)) 1 < (p?[|y0f]'.+ '/V1'/-2T(e(n),JV))P + A ^ T ( 0 ( n , A T ) ) P ) = c3A2.i(n,N,p). This concludes the proof of (3.19). We now show the validity of (3.20). Assume, w.l.o.g. x < z. $"-m - mp\ <V-1{VT\\ \YO - x\- \Y0 -z\ \p] +.P?[jf ri ( l 4(y.)a(X.,y.)dy(«) We estimate + P ? 'J l{x,z](Ys)b(Xs,Ys)dsp] (by 3.9) <3p-' (\x-z\r> + JPr\ C l{x,z]{Ys)o{Xs,Ys)2dspl2 LJo [ l{XtZ](Ys)b(Xs,Ys)dsp •Jo . (by Burkholder) = y - - 1 ( | x - ^ - r / i + / 2 ) . + P T /, .= P J X dapl2 (by the density of occupation formula) . < |x-z|(P-2)/2suppJ[(4)P/2] <c3.i2.i(n,7V,p/2)|x-z|P/2 (by (3.19)). / 2 = p ? [ / ^ ( I , 2 K y 5 ) ^ | ^ ^ , y s ) 2 ^ ] < e~2w" [J l{x<z](Ys)?(Xs(l))o(X,, Ys)2dsp] < e- 2T(e(n,iY))pp?[^Ti ( l ) 2 ](y s)a(X,y s) 2d sp] = £ - 2 T ( 6 ( n , N))p]p" [ T £$• dap < £ - 2 T ( 0 ( n , 7V))P sup P r [{C^)p)\x - z\p < £- 2 T(0 (n,N)yc 3 . i i i (n ,N i P ) \x - z\p. (3.22) (3.23) (3.24) Notice that (3.20) follows from (3.22), (3.23) and (3.24). The proof of (3.21) is easier. Without loss of generality assume t > s. ^TAMTM - £f A,Ip] < 2 p " 1 TPTM] / sgn(y r - x)a{Xr, Yr)dy(r) \ U JTAS + JPTJ \ sgn(YR - x)b(XR,Yr)dr*> +^At[xr|6{xr'Fr)|drpj) < 2P-1 ( T ( 6 ( n , iV))P|t - s|P/2 + T ( 0 ( n , A^))p|t - s|p) Set c3.i2.3 = 2 p T ( 0 ( n , i V ) ) P . This proves (3.21): (3.25) Lemma 3.13. Fixp > 1. Assume |x — z| < 1, | t - s | < 1. There exist constants c3.i3.i(N,n,p), C3.i3 .2 (N,n,p) , c 3 . i3 . 3 (A r ,n,p) suc/i £/m* P [|Z*AT(n)(X)|p] <C3.i3.l(^,n > p), P sup \L*S(X) - Ll(X)\» s<T(n) - <c3.i3.2(N,n,p)\x-z\^2 and P [|L-A T ( n )(X) - LXSAT{N)(X)\P] < c3.13.3(N,n,p)\t - s\*/2. for any t € [0,7V]. . P r o o f . We estimate P + 2 P _ 1 P (3.26) '. 
(3.27) (3.28) i^rwwrj <2 p- iP[ i^A r ( n )(^A T ( n )(y))n I , tAT(n) « yo y £^(y)z H (d S ,dy) PI Now, (by (3.12)) = h+h- h < 2 p - 1 P [ / / f A T ( n ) ( l ) p - 1 / / t A T ( n ) ( ( ^ A T ( n ) ) p ) i (by Jensen) < 2 p - 1 n p - 1 p f A T ( n ) [ ( ^ A T ( n ) ) p ] ' < 2 p - 1 n p - 1 c 3 . 1 2 . i ( n , / V , p ) 89 (by Lemma 3.12) By the same token, rtAT(n) f / 2 < 2 " - 1 c p P y : j{CxAT{n^2Hsmn)(dy)dS^2 (by Burkholder) rtAT{n) r l A l (n) rtAT(n) f < 2 " - 1 c p P | Hs(l)ds V ^ J (CxAT(n)yHsATin)(dy)ds (by Jensen) < 2"- 1c pn £ rAfVp ' /-tAT(n) /•; yo J (cxAnn)yHsAT{n)(dy)ds ds l W[f (CXAT(n))PHSAT(n)(dy) = 2P-%nE^NE^ p f A r ( n ) [ (£J A r ( B ) ) " ] ds <2f,-1cpnE^Nh3.12.1(n,N,p) (by Lemma 3.12) P u t c a . i a . x ^ n , ? ) = 2 P - 1 nP - 1 c 3 . 1 2 . 1 (n , A 7 , p ) + 2 P - 1 c p n £ ^ / v f c 3 . i 2 . i (n , N,p). This proves (3.26). Next, we prove (3.27). P sup \LX(X) - Ll{X)\ .s<T(n) < 2 P _ 1 P sup HS(\CX-CZS)\ ,8<T(n) sup f f(Cx-Czr)ZH(dr,dy) Ks<T(n) JO J = J i + J 2 . Consider any (J" t)-stopping time 5 < T(n) A TV. Then Therefore (by Jensen) .; . <nrlVs[\c%-rsy\ ' <np-lcz.l2.2{n,N,p)\x - z\v'2 (by Lemma 3.12.) sup {P[Hs(\Cxs - CZS\P}} < nr-Wite^Nrflx - zf2- S<T(n)AN = kp\x- z\V'2 (3.29) 90 where the sup is taken oyer all stopping times S bounded by T(n) A N. Note that for any such stopping time and any 9 > 0, q > 1 (Chebyshev) < kq\x-z\«l2 9i But by Lemma 3.10 P [ sup Hr(\L* - Czr\) > 0] = sup F[Hs(\£xs - £zs\) > 9]. Hence P r<TnAN sup Hr(\£xr-Cz\ ,r<TnAN S<T(n)AN roo = p 9p~llP Jo roo = p 9p~l sup JP[HS(\C% - Czs\) > 9}d9 JO S<T(n)AN sup Hr{\£?-£z\) >9 r<TnAN < Jo 0P+1 9p~1d9 + pkp+i\x — z\^~ E+l 9~2d9 = kpl1\x-z\^2+pkpl1\x-z\p/2 = {p+\)k%\x-zf2. 
Moreover, h < 2 p - 1 c p P rT(n) r JO J ' ^ * A T ( n ) ~ £ZsAT{n)\2HsAT(n)dspl2 (by Burkholder) < 2p~lcvW fnn) rl(n) r Jo H s { 1 ) d S 2 J0 J \CXAT{n)-^AT(n)\PHSAT(n)ds < 2P~%n^N^ /"VsATin) [\£XAT(n) ~ ^ATin^ds <2p-1cpnE^N^2c3.12.2(n,N,p)\x-z\p/2 (by Lemma 3.12) Define c3A3.2{t,p, N) = (p + l)kfc\ + 2p~xcvn * Np/2c3.12.2{n, N,p). This proves (3.27). 91 To finish the proof we estimate F [ i z ? A T ( r i ) ( x ) - z ^ A r ( n ) ( x ) r < 2 p - 1 F [ | / J t A r ( n ) ( £ - A r ( n ) ) - HsAT{n)(CtAT{n))\r>] •tAT(n) + 2 P - l p = A! + A2. / C*rAT{n)ZH(dr,dy) JsAT(n) • PI We now estimate A A 1 <2^-2w[\HtAT{n)(qAnn) - c*AT{n))n + 2^-2W[\HtAT{n)(CxAn^ = An + A12 A U < 2 2 p ~ 2 P [ / f t A T (n ) ( l ) p - 1 ^AT (n )^?AT (n) " ^ A T ( n ) ) | P ) < 2 2 p - 2 n " - 1 i p f A r ( n ) [ | £ ? A r ( n ) - £xAT(n)W} < 22P-2nP-1c3.12Mn,N,p)\t - s\p'2 (by Lemma 3.12). By Lemma 3.9, HtAT(n)[£XSAT(n)} =• HsAT(n)[^sAT(n)\ + / C r Z {dr,dy) JsAT{n) J Therefore Al2 = 2 2 p - 2 F < 2 2 p - 2 C p F < 2 2 p - 2 c „ F rtAT(n) r / CxrZH(dr,dy) JsAT(n) J • ' rtAT(n) r I / . (Cxr)2Hr(dy)dr^2\ JsAT(n) J • J / • J f r ( l ) d S V / \CxTYHr{dy)dr JsAT(n) JsAT(n) J s T(n) r-tAT(n) / / {CxrYHr{dy)dr JsAT(n) J < 2V-lcp(t - s ) ^ n ^ £ P f A T ( n ) [ ( ^ A T { n ) ) p ] d r <2P-1cp{t-s)p/2nE^c3.12.1{n,N,p) (by Lemma 3.12). These computations also show that A2<2p-1cp{t-s)p/2ne^c3.l2.i{n,N,p). Set c 3.i3.3(n,iV,p) = 2 P c p ( t - 5 ) p n E ^ C 3 . i 2 . i ( n , i V , p ) - t - 2 2 p - 2 n p - 1 C 3 . i 2 . 3 . This proves (3.28) 92 Corollary 3.14. The map (s,x) H > LX(X) has a continuous version. Proof. The estimate (3.27) and Kolmogorov's criterion (Theorem 2.43) show that (s,x) i - > LX(X) has a continuous version on [0,T(n)j x IR. The desired result follows from the fact that T(n) -» oo a.s. m The following result wil l be needed in the proof of the next proposition. Lemma 3.15. There exists a non-decreasing function S . 
Ξ : [0, ∞) → [0, ∞) such that for any measure μ and any x, z ∈ ℝ,

|σ^{−2}(μ, x) − σ^{−2}(μ, z)| ≤ Ξ(μ(1)) |x − z|.   (3.30)

Proof. Recall that by (3.3)-(3.5),

|σ(μ, x) − σ(μ, z)| ≤ Γ(μ(1)) |x − z|   and   σ(μ, x) ≤ Γ(μ(1)).

Therefore

|σ(μ, x)^{−2} − σ(μ, z)^{−2}| = |σ(μ, x)² − σ(μ, z)²| / ( σ(μ, x)² σ(μ, z)² )
≤ ε^{−4} |σ(μ, x)² − σ(μ, z)²|   (since σ ≥ ε)
= ε^{−4} |σ(μ, x) + σ(μ, z)| |σ(μ, x) − σ(μ, z)|
≤ ε^{−4} 2 Γ(μ(1))² |x − z|.

Setting Ξ(·) = 2Γ(·)² ε^{−4} we obtain (3.30). □

Proposition 3.16. (a) The map (t, a) ↦ L^a_t(X) is P — a.s. continuous.

(b) Suppose for each z ∈ ℝ, s ↦ σ^{−2}(X_s, z) is a semimartingale with canonical decomposition M^z + A^z such that for any t ≥ 0 and any k ≥ 1

sup_z { P[⟨M^z⟩_t^k] + P[ ( ∫_0^t |dA^z_s| )^{2k} ] } < ∞.   (3.31)

Then there exists a version of (t, a, ω) ↦ L^a_t(X) which is (jointly) ζ-Hölder continuous in (t, a) for any ζ < 1/2, P — a.s.

Remark 3.17. Condition (3.31) holds for all the examples in Chapter 4 of Perkins (1995, p. 49) with some additional smoothness on the coefficients. We now show one method to check it. For simplicity we concentrate on a particular example. Suppose γ ≡ 1. Suppose also that there are positive constants K and ℓ such that b(μ, x) ≤ K(1 + μ(1)^ℓ), and let a : ℝ → ℝ_+ be bounded and continuous with two bounded continuous derivatives. Assume that K also bounds a and its derivatives. Fix ε > 0 and define

σ(μ, x) = ε + ∫ a(x − z) μ(dz),   ∀μ ∈ M_F(ℝ), ∀x ∈ ℝ.
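As a quick numerical sanity check of the bound (3.30), one can compare finite-difference slopes of x ↦ σ^{−2}(μ, x) against the constant Ξ(μ(1)) = 2Γ(μ(1))² ε^{−4} produced in the proof of Lemma 3.15. The atomic measure μ, the Gaussian bump for the kernel a, and the crude stand-in for Γ(μ(1)) below are all our illustrative choices, not taken from the text.

```python
import numpy as np

# Illustrative atomic measure mu = sum_i w_i delta_{z_i} and kernel a
# (bounded by 1, with Lipschitz constant <= 1); these choices are ours.
eps = 0.5
a = lambda x: np.exp(-x ** 2)
z = np.array([0.0, 1.0, -0.5])
w = np.array([0.3, 0.2, 0.5])                 # total mass mu(1) = 1
sigma = lambda x: eps + np.dot(w, a(x - z))   # sigma(mu, x) >= eps

Gamma = eps + w.sum()            # dominates sigma and its Lipschitz constant
Xi = 2 * Gamma ** 2 / eps ** 4   # the constant from Lemma 3.15

xs = np.linspace(-3.0, 3.0, 1201)
vals = np.array([sigma(x) ** -2 for x in xs])
slopes = np.abs(np.diff(vals)) / np.diff(xs)  # finite-difference slopes of sigma^{-2}
```

The observed slopes stay below Ξ, as (3.30) predicts; the bound is far from tight since the proof only uses σ ≥ ε.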
Prom the above martingale problem we see that Xs{dz) = XQ(az) + Zf{az) + £ j [dz(x)b(Xs,x) + l-a{Xs,x)azxx]xs{dx)ds Applying Ito's lemma we obtain a{Xs,z) 2 = 1 - r W))2 Jo { £ + xs{o>)y dZ*{*Z) (e + X0(< _ . ,t'.2 / [a*(x)b(X„x) + \o{Xs,x)Gzxx]xs{dx) ~ Jo "• (e + Xs(oz))* ds f Jo {e + Xs{dz)f 1 " +Mz:+Azt. (e + X0(o*))* We compute ' supP [ (M 2 )£ ] = P =rlf .Jo 4WW ^ < 4 * e _ 4 f c P (e + Xs(oz))* •t •2 -v n\j„k I Jo KzXs{l)dsK <4ke-4ktkK2kJP[Xt{l)k] < oc (by Proposition 2.21). Similarly, f\dA Jo P 2 |2fc = P Jo 2 / K(x)KXs,x) + \a{Xs,x)azxx Xs(dx) zXs{{az)2) ~ {e + Xs{az)Y _ + {e + Xs(az)f ds 2k 94 < 2 2 f c P f Jo * ( 2 / f 2 ( i + xs(iy) + K*xs(i))xs(i)jJk + 22kP <C(K,k, e,t) < oo (by Proposition 2.21) This concludes the proof of the claim. The same method can be employed to show that more general coefHcents a satisfy (3131). • P r o o f of P r o p o s i t i o n 3.16. (a) Notice that the Lipschitz condition (3.5) and the continuity of s — > Xs ensure the continuity of (r,x) a(Xr,x). Therefore we can fix in a set of probability one such that the maps (t,x,w) Lf and (r,x,u>) i-v a~2(Xr,x) are continuous. Note that \L?(X) - LZ(X)\ = \-fioT2(X3,x)L% - fi (j-2(Xs,z)Lzdr < \fia-2(Xr,x)Lxdr\ + \fi(a-2(Xr,x) - o--2(Xr,z))Lxdr I fSa-2(Xs,z)Lxdr- fSa-2(Xs,z)Lzdr 1 Jo Jo + : J i + J 2 + J 3 . We estimate Jx<e-2\Lxt-L*a\, (since a > e) J2<E(X*s(l))\x-z\Lzs (by Lemma (3.15). Observe that- Ldr converges weakly to Lzdr when x ->• z (these are finite Borel measures on [0, s]). Recall that the map r i-> a~2(Xr,z) is continuous. Therefore /Q s o~2(XT, z)Lxdr -> JQS a~2(Xr, z)Ldr as x —> z. Hence Jim \.Lx(X)-Lzs(X)\=0. (b) F ix p > 2. We compute P \LtAT{n)(X) — LtAT(n)(X)\T = P < 2 P _ 1 P + 2 P - i p / a-2(Xs,x)Zxds- / a Jo . J o tAT(n) _ l p (X s , z)L^ s rtAT{n) _ / a - 2 ( ^ , x ) - a - 2 ( X S ) z ) L ^ / a - 2 ( X s , z ) ( L 5 s - Z ^ ) Jo 95 = h + h. Now, tAT{n) tAT(n) h<2p-1\x-z\pP (by Lemma 3.15.) 
<2p-1\x-.z\pF (by Lemma 3.7) < 2 p - 1 | x - ^ | p H ( 6 ( n , i V ) ) p P s(*.(i))Z3. S(0(n,. a))J4 tAT(n) fx = 2 p- 1|x- 2| pH(0(n,7V)) p p[|LfA T ( n ) = 2 p - 1 |x - 2| p5(0(n,7V)) pc 3 .i3 . i(/V,n,p) (by Lemma 3.13) = c3.i6.i(n,7V,p)|a; - z\p. To estimate I2 we need the fact that a~2(Xs,z) = M§ + Azs is a semimartingale. (3.32) J , = 2 P _ 1 P = ' 2 p _ 1 P 'tAT(n) CT-2(^,2)(L3S-L^) PI .-2 ~ ~ rtAT(n) _ _ (AW(n), ^ ) ( ^ A T ( n ) - L t A 7 » ) ~ / fer(n) ~ L 5 A T ( n ) ) d c r ( X ^ ) (integrating by parts and since (MZ,LX)T = (MZ,LZ)T = .LX = LQ = 0). r ~ ~ IPI • Tl rtAT(n) < e^-Vp [ L*AT{N) - LZTAT{N)\} + e^p jf ( L f A T ( B ) - Z : A T ( B ) ) « u f ( + 6 2 p - l p rtAT{n) _ _ yo (^sAT(n) ~~ • t , s A T ( n ) ) ^ < 6 2 p - 1 £ - 2 p C 3 . i 3 . 2 ( / V , n , p ) | x - z| p / 2 + tfP-i.CpP rtAT(n) _ _ + 6 2 p _ 1 P 2p/2 sup \LXS -.LZS\ , s<MT(n) ptAT{n) / \dAzs\2pl2 Jo (by Lemma 3:13 and Burkholder) < 6 2 p - 1 e- 2 p c 3 . 1 3.2(A r ,n,p)|x - z| p / 2 + tfp-lCpW • sup k s<tAT{n) l ^ - ^ l / d ( M 2 ) p / 2 n) ' JO 96 2p sup \LXS - Lzs\ , s<MT(n) + &p-lcp | P (by Schwarz) <&P-h-2pcZAZ.2{N,n,p)\x - z\pl2 + Q2p~lCp P \ P sup \LX-LZS\ vs<t/\T(n) + G2p-1C3M.2(N,n,p)c3.UA(p,N)\x-z\p'2 (by Schwarz, Lemma 3.13 and (3.31)) < 62*-1e-2Pc3.i3.2{N,ntp)\x - z\p'2 + 6 2 p - 1 C p c 3 . 3 i . i (p, iV )c3.i3 .2 ( /V,n,p)|x - z\p/2 + 62p-lC3.i3.2(N,n,p)c3.3i.i(p,N)\x-z\p/2 = c 3 . 1 6 . 2 ( n , / V , p ) | x - z | p / 2 . From (3.32) and (3.33) it follows that there is a constant 03.16.3(71,7V,p) such that P [\L*AT{n)(X) - LzAT(n)(X)\p] < C 3 . 1 6 . 3 ( n , JV,p)|x - zf2.- Without loss of generality assume s < t. We estimate (3.33) (3.34) (3.35) P = p rtAT(n) / o-2{Xr,x)L% JsAT(n) < e 2 p P | | L ^ A T ( n ) - L j A T ( n ) | J <e-2pC3A3.z{nyN,p)\t-s\pl2 (by Lemma 3.13) = C3.16A{n,N,p)\t-s\p/2. . (3.36) We shall prove the existence of a continuous modification of L' Aj,^(X) by_using Lemma 2.43 (Kolmogorov's criterion). 
According to the latter, it suffices to show that

$$\mathbb{P}\big[|L^x_{t\wedge T(n)} - L^z_{s\wedge T(n)}|^{\alpha}\big] \le C\big[(t-s)^{2+\beta} + |x-z|^{2+\beta}\big] \tag{3.37}$$

for some positive constants $\alpha$, $\beta$, $C$. Set $C = 2^{\alpha}\big(c_{3.16.3}(n,N,p) \vee c_{3.16.4}(n,N,p)\big)$. Observe that

$$\mathbb{P}\big[|L^x_{t\wedge T(n)} - L^z_{s\wedge T(n)}|^{\alpha}\big]
\le 2^{\alpha}\,\mathbb{P}\big[|L^x_{t\wedge T(n)} - L^x_{s\wedge T(n)}|^{\alpha}\big]
+ 2^{\alpha}\,\mathbb{P}\big[|L^x_{s\wedge T(n)} - L^z_{s\wedge T(n)}|^{\alpha}\big]
\le C\big[(t-s)^{\alpha/2} + |x-z|^{\alpha/2}\big] \tag{3.38}$$

(by (3.35) and (3.36)). Realizing that (3.38) is of the type (3.37) when $\alpha > 4$, we obtain a continuous version of $L^{\cdot}_{\cdot\wedge T(n)}(X)$. To finish the proof note that $L^{\cdot}_{\cdot}(X) = L^{\cdot}_{\cdot\wedge T(n)}(X)$ on $\{T(n) > N\}$ and $T(n)\uparrow\infty$ a.s. Therefore $L^{\cdot}_{\cdot}(X)$ also has a jointly Hölder continuous version of order $\zeta$ for any $\zeta < 1/2$. □

Our next step is to verify the occupation times formula (3.2).

Lemma 3.18. For any nonnegative Borel function $\phi$,

$$\int \phi(a)\,\tilde L^a_t(X)\,da = \int_0^t\!\int \phi(a)\,\sigma^2(X_s,a)\,X_s(da)\,ds \quad \text{a.s.} \tag{3.39}$$

Proof. To establish (3.39) it suffices to show that for every $n\ge 0$ and any nonnegative Borel function $\phi$ in a countable family of functions with compact support,

$$\int \phi(a)\,\tilde L^a_{t\wedge T(n)}(X)\,da = \int_0^{t\wedge T(n)}\!\int \phi(a)\,\sigma^2(X_s,a)\,X_s(da)\,ds \quad \text{a.s.} \tag{3.40}$$

The null set in (3.40) depends only on $\phi$ and $t$. But there are only countably many $\phi$'s and, by Corollary 3.14, $\tilde L(X)$ is continuous, so we may take the null set in (3.40) independent of $t$ and $\phi$.

Recall that

$$\tilde L^a_{t\wedge T(n)}(X) = \int \ell^a_{t\wedge T(n)}\,H_{t\wedge T(n)}(dy) - \int_0^{t\wedge T(n)}\!\int \ell^a_s\,Z_H(ds,dy). \tag{3.41}$$

We multiply both sides of (3.41) by $\phi(a)$ and integrate:

$$\int \phi(a)\,\tilde L^a_{t\wedge T(n)}\,da = \int \phi(a)\int \ell^a_{t\wedge T(n)}\,H_{t\wedge T(n)}(dy)\,da - \int \phi(a)\int_0^{t\wedge T(n)}\!\int \ell^a_s\,Z_H(ds,dy)\,da.$$

Now,

$$\int \phi(a)\int \ell^a_{t\wedge T(n)}\,H_{t\wedge T(n)}(dy)\,da
= \int\!\!\int \phi(a)\,\ell^a_{t\wedge T(n)}\,da\,H_{t\wedge T(n)}(dy) \quad(\text{by Fubini})$$
$$= \int\!\!\int_0^{t\wedge T(n)} \phi(Y_s)\,\sigma^2(X_s,Y_s)\,ds\,H_{t\wedge T(n)}(dy) \quad(\text{by the density of occupation formula}).$$
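The density of occupation step above is the standard semimartingale occupation-density formula, recalled here as a sketch (the identification of the quadratic variation along the historical paths is assumed, as the computation indicates):

```latex
% Occupation density formula: if Y is a continuous semimartingale with
% local times (\ell^a_t)_{a \in \mathbb{R}}, then for nonnegative Borel \phi
\int \phi(a)\, \ell^a_t \, da \;=\; \int_0^t \phi(Y_s)\, d\langle Y \rangle_s .
% With d\langle Y \rangle_s = \sigma^2(X_s, Y_s)\, ds along H-a.e. path y,
% this yields the inner-integral identity used above.
```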
We can apply Itô's lemma for historical integrals to this last $H$-integral to obtain:

$$\int\!\!\int_0^{t\wedge T(n)} \phi(Y_s)\,\sigma^2(X_s,Y_s)\,ds\,H_{t\wedge T(n)}(dy)
= \int_0^{t\wedge T(n)}\!\!\int\Big(\int_0^s \phi(Y_r)\,\sigma^2(X_r,Y_r)\,dr\Big)\,Z_H(ds,dy)
+ \int_0^{t\wedge T(n)}\!\!\int \phi(Y_s)\,\sigma^2(X_s,Y_s)\,H_s(dy)\,ds$$
$$= \int_0^{t\wedge T(n)}\!\!\int\Big(\int \phi(a)\,\ell^a_s\,da\Big)\,Z_H(ds,dy)
+ \int_0^{t\wedge T(n)}\!\!\int \phi(Y_s)\,\sigma^2(X_s,Y_s)\,H_s(dy)\,ds. \tag{3.42}$$

We claim that we can use Fubini's theorem for stochastic integrals (Theorem 2.6 of Walsh (1986)) to interchange the order of integration:

$$\int_0^{t\wedge T(n)}\!\!\int\Big(\int \phi(a)\,\ell^a_s\,da\Big)\,Z_H(ds,dy) = \int \phi(a)\Big(\int_0^{t\wedge T(n)}\!\!\int \ell^a_s\,Z_H(ds,dy)\Big)\,da.$$

Hence

$$\int \phi(a)\int \ell^a_{t\wedge T(n)}\,H_{t\wedge T(n)}(dy)\,da
= \int \phi(a)\Big(\int_0^{t\wedge T(n)}\!\!\int \ell^a_s\,Z_H(ds,dy)\Big)\,da
+ \int_0^{t\wedge T(n)}\!\!\int \phi(Y_s)\,\sigma^2(X_s,Y_s)\,H_s(dy)\,ds. \tag{3.43}$$

We obtain (3.40) by substituting (3.43) into (3.42) and using $(HSE)_{a,b,\gamma}$. It remains to show that Fubini's theorem may be applied. We set $\mu(da) := \phi(a)\,da$. We need to show that

$$\mathbb{P}\Big[\int\!\!\int_0^{t\wedge T(n)}\!\!\int \big(\ell^a_{s\wedge T(n)}\big)^2\,H_{s\wedge T(n)}(dy)\,ds\,\mu(da)\Big] < \infty.$$

This is easy to check:

$$\mathbb{P}\Big[\int\!\!\int_0^{t\wedge T(n)}\!\!\int \big(\ell^a_{s\wedge T(n)}\big)^2\,H_{s\wedge T(n)}(dy)\,ds\,\mu(da)\Big]
\le \int\!\!\int_0^{t} \mathbb{P}\Big[H_{s\wedge T(n)}\big[\big(\ell^a_{s\wedge T(n)}\big)^2\big]\Big]\,ds\,\mu(da)
\le \mu(\mathbb{R})\,t\,c_{3.12.1}(n,N,2) < \infty$$

(by Lemma 3.12). Thus (3.39) holds. □

Proposition 3.19. $L^a_t(X)$ satisfies the following density of occupation formula. For any nonnegative Borel function $\phi$,

$$\int_0^t X_s(\phi)\,ds = \int \phi(a)\,L^a_t(X)\,da \quad \text{a.s.} \tag{3.44}$$

Proof. By considering functions of the form $\phi(a)\,1_{[0,s]}(r)\,1_B(\omega)$, it follows easily from a monotone class argument and (3.39) that

$$\int\!\!\int \psi(s,a,\omega)\,d\tilde L^a_s(X)\,da = \int\!\!\int \sigma^2(X_s,a)\,\psi(s,a,\omega)\,X_s(da)\,ds \tag{3.45}$$

for any measurable $\psi:\mathbb{R}_+\times\mathbb{R}\times\Omega\to\mathbb{R}_+$. Substituting $\psi(s,a,\omega) = \sigma^{-2}(X_s,a)\,\phi(a)$ into (3.45) we get

$$\int_0^t\!\!\int \phi(a)\,X_s(da)\,ds = \int \phi(a)\int_0^t \sigma^{-2}(X_s,a)\,d\tilde L^a_s(X)\,da
= \int \phi(a)\,L^a_t(X)\,da \quad(\text{by (3.18)}). \qquad\square$$

Proof of Theorem 3.4. Propositions 3.16 and 3.19 show that the family of random variables $(L^a_t(X))$ satisfies conditions (i) and (ii) of Definition 3.1. □

Bibliography

[1] Adler, R.J. (1992).
Superprocess local and intersection local times and their corresponding particle pictures, in Seminar on Stochastic Processes 1992, Birkhäuser, Boston.

[2] Barlow, M.T., Yor, M. (1981). Semimartingale inequalities and local times, Z. Wahrscheinlichkeitstheorie verw. Gebiete 55, 237-254.

[3] Dawson, D.A. (1993). Measure-valued Markov processes, École d'été de probabilités de Saint-Flour, 1991, Lect. Notes in Math. 1541, Springer, Berlin.

[4] Dawson, D.A., Gärtner, J. (1987). Large deviations from the McKean-Vlasov limit for weakly interacting diffusions, Stochastics 20, 247-308.

[5] Dawson, D.A., Perkins, E.A. (1991). Historical processes, Mem. Amer. Math. Soc. 454.

[6] Dawson, D.A., Perkins, E.A. (1996). Measure-valued processes and stochastic partial differential equations, preprint.

[7] Dellacherie, C., Meyer, P.A. (1982). Probabilities and Potential B, North Holland Mathematical Studies No. 72, North Holland, Amsterdam.

[8] Durrett, R. (1985). Particle systems, random media, large deviations, Contemporary Maths. 41, Amer. Math. Soc., Providence, R.I.

[9] Ethier, S.N., Kurtz, T.G. (1986). Markov processes: characterization and convergence, Wiley, New York.

[10] Hartl, D.L., Clark, A.G. (1989). Principles of population genetics, second edition, Sinauer Associates, Inc., Sunderland, Massachusetts.

[11] Jacod, J., Shiryaev, A.N. (1987). Limit theorems for stochastic processes, Springer-Verlag, New York.

[12] Konno, N., Shiga, T. (1988). Stochastic differential equations for some measure-valued diffusions, Probab. Th. Rel. Fields 79, 201-225.

[13] Ladyzenskaja, O.A., Solonnikov, V.A., Ural'ceva, N.N. (1968). Linear and quasilinear equations of parabolic type, Transl. Math. Monographs Vol. 23, Amer. Math. Soc.

[14] Le Gall, J.F., Perkins, E.A., Taylor, S.J. (1995). The packing measure of the support of super-Brownian motion, to appear in Stoch. Process. Applications.
[15] Méléard, S., Roelly, S. (1990). Interacting measure branching processes and the associated partial differential equations, Stochastics and Stochastic Reports.

[16] Perkins, E.A. (1988). A space-time property of a class of measure-valued branching diffusions, Trans. Amer. Math. Soc. 305, 743-795.

[17] Perkins, E.A. (1993). Measure-valued branching diffusions with spatial interactions, Probab. Th. Rel. Fields 94, 189-245.

[18] Perkins, E.A. (1995). On the martingale problem for interactive measure-valued branching diffusions, Memoirs of the Amer. Math. Soc. No. 549, 1-89.

[19] Reimers, M. (1989). One-dimensional stochastic partial differential equations and the branching measure diffusion, Probab. Th. Rel. Fields 81, 319-340.

[20] Revuz, D., Yor, M. (1991). Continuous martingales and Brownian motion, Springer-Verlag, New York.

[21] Rogers, L.C.G., Williams, D. (1986). Diffusions, Markov processes and martingales, Vol. 2, Wiley, New York.

[22] Shiga, T. (1994). Two contrasting properties of solutions for one-dimensional stochastic partial differential equations, Canadian Journal of Math. 46, No. 2, 415-437.

[23] Sugitani, S. (1988). Some properties for the measure-valued branching diffusion process, J. Math. Soc. Japan 41, 437-462.

[24] Sznitman, A.-S. (1991). Topics in the propagation of chaos, École d'été de Probabilités de Saint-Flour, L.N.M. 1464.

[25] Walsh, J.B. (1986). An introduction to stochastic partial differential equations, École d'été de Probabilités de Saint-Flour, L.N.M. 1180.

[26] Yor, M. (1978). Sur la continuité des temps locaux associés à certaines semi-martingales, in Temps Locaux, Astérisque 52-53.

[27] Zvonkin, A.K. (1974). A transformation of the phase space of a diffusion process that removes the drift, Math. USSR Sbornik 22, No. 1, 129-149.
