Stochastic ODEs and PDEs for interacting multi-type populations

by

Sandra Martina Kliem

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate Studies (Mathematics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

September, 2009

© Sandra Martina Kliem, 2009

Abstract

This thesis consists of the manuscripts of three research papers studying stochastic ODEs (ordinary differential equations) and PDEs (partial differential equations) that arise in biological models of interacting multi-type populations.

In the first paper I prove uniqueness of the martingale problem for a degenerate SDE (stochastic differential equation) modelling a catalytic branching network. This work is an extension of a paper by Dawson and Perkins to arbitrary networks. The proof is based upon the semigroup perturbation method of Stroock and Varadhan. In the proof estimates on the corresponding semigroup are given in terms of weighted Hölder norms, which are equivalent to a semigroup norm in this generalized setting. An explicit representation of the semigroup is found and estimates using cluster decomposition techniques are derived.

In the second paper I investigate the long-term behaviour of a special class of the SDEs considered above, involving catalytic branching and mutation between types. I analyse the behaviour of the overall sum of masses and the relative distribution of types in the limit using stochastic analysis. For the latter existence, uniqueness and convergence to a stationary distribution are proved by the reasoning of Dawson, Greven, den Hollander, Sun and Swart. One-dimensional diffusion theory allows for a complete analysis of the two-dimensional case.

In the third paper I show that one can construct a sequence of rescaled perturbations of voter processes in d = 1 whose approximate densities are tight. This is an extension of the results of Mueller and Tribe for the voter model. We combine critical long-range and fixed kernel interactions in the perturbations. In the long-range case, the approximate densities converge to a continuous density solving a class of SPDEs (stochastic PDEs). For integrable initial conditions, weak uniqueness of the limiting SPDE is shown by a Girsanov theorem. A special case includes a class of stochastic spatial competing species models in mathematical ecology. Tightness is established via a Kolmogorov tightness criterion. Here, estimates on the moments of small increments for the approximate densities are derived via an approximate martingale problem and Green's function representation.

Table of Contents

Abstract
Table of Contents
List of Figures
Acknowledgements
1 Introduction
  1.1 Overview of the Manuscripts
    1.1.1 Degenerate stochastic differential equations for catalytic branching networks
    1.1.2 Long-term behaviour of a cyclic catalytic branching system
    1.1.3 Convergence of rescaled competing species processes to a class of SPDEs
  1.2 Concluding Remarks
  Bibliography
2 Degenerate Stochastic Differential Equations for Catalytic Branching Networks
  2.1 Introduction
    2.1.1 Catalytic branching networks
    2.1.2 Comparison with Dawson and Perkins [7]
    2.1.3 The model
    2.1.4 Statement of the main result
    2.1.5 Outline of the proof
    2.1.6 Weighted Hölder norms and semigroup norms
    2.1.7 Outline of the paper
  2.2 Properties of the Semigroup
    2.2.1 Representation of the semigroup
    2.2.2 Decomposition techniques
    2.2.3 Existence and representation of derivatives of the semigroup
    2.2.4 L∞ bounds of certain differentiation operators applied to Pt f and equivalence of norms
    2.2.5 Weighted Hölder bounds of certain differentiation operators applied to Pt f
  2.3 Proof of Uniqueness
  Bibliography
3 Long-term Behaviour of a Cyclic Catalytic Branching System
  3.1 Introduction
    3.1.1 Basics
    3.1.2 Main results and outline of the paper
  3.2 Main Results
    3.2.1 Existence and nonnegativity
    3.2.2 The overall sum and uniqueness
    3.2.3 The normalized processes
    3.2.4 Properties of a stationary distribution to the system (3.15) of normalized processes
    3.2.5 Stationary distribution
    3.2.6 Extension to arbitrary networks
    3.2.7 Complete analysis of the case d = 2
  Bibliography
4 Convergence of Rescaled Competing Species Processes to a Class of SPDEs
  4.1 Introduction
    4.1.1 The voter model and the Lotka-Volterra model
    4.1.2 Spatial versions of the Lotka-Volterra model
    4.1.3 Long-range limits
    4.1.4 Overview of results
    4.1.5 Outline of the paper
  4.2 Main Results of the Paper
    4.2.1 The model
    4.2.2 Main results
    4.2.3 Reformulation
  4.3 An Approximate Martingale Problem
  4.4 Green's Function Representation
  4.5 Tightness
  4.6 Characterizing Limit Points
  4.7 Uniqueness in Law
  Bibliography
5 Conclusion
  5.1 Overview of Results and Future Perspectives of the Manuscripts
    5.1.1 Degenerate stochastic differential equations for catalytic branching networks
    5.1.2 Long-term behaviour of a cyclic catalytic branching system
    5.1.3 Convergence of rescaled competing species processes to a class of SPDEs
  Bibliography
Appendices
A Appendix for Chapter 3
  A.1 ã is non-singular
  A.2 Proof of Proposition 3.2.18
  Bibliography
B Appendix for Chapter 4
  Bibliography

List of Figures

2.1 Decomposition from the catalyst's point of view
2.2 Definition of NR, NC and R̄i
2.3 Decomposition of the system of SDEs
3.1 The definition of t_{2n−1} and t_{2n}

Acknowledgements

I am tremendously grateful to my Ph.D. supervisor Ed Perkins for his help. He introduced me to the problems at hand and provided me with helpful comments and suggestions throughout. Particular thanks go to him for his valuable feedback after reading my manuscript. I also wish to thank Carl Mueller for a discussion on survival and coexistence questions related to the limiting SPDEs I obtained in my third manuscript. A special thanks goes to the administration and the computer support of my department that made my life easier in many ways. Finally, a big thanks goes to all the people I met during my studies at UBC, my parents and my other friends for providing me with help and encouragement throughout my studies.

Chapter 1

Introduction

In the following three Chapters I investigate degenerate stochastic ODEs (ordinary differential equations) and SPDEs (stochastic partial differential equations) that arise in biological models of interacting multi-type populations. In the first two Chapters I investigate the behaviour of the respective masses of a finite number of interacting populations in a non-spatial setting, and in the last paper I study two interacting populations only but add a spatial component. The former will result in the consideration of SDEs (stochastic differential equations), the latter in the consideration of limiting SPDEs. The former models can arise from a network of cooperating branching populations which require the presence of other types (catalysts) to reproduce, while the latter class includes scaling limits of ecological models for two types competing for resources.
When investigating biological models for the evolution of populations over time, the common questions to answer are those of survival, extinction and coexistence of types. A well-known model for the evolution of the mass of one type of population is Feller's branching diffusion with parameter γ and linear drift, i.e. the unique solution to the SDE

    dx_t = b x_t \, dt + \sqrt{2\gamma x_t} \, dB_t

with constants b ∈ R, γ ∈ R_+ and B_t a Brownian motion. Such diffusions can be obtained as the limit of a sequence of rescaled Galton-Watson branching processes at criticality (that is, branching processes with an average number of descendants approaching one). By adding a spatial component, one obtains super-Brownian motion with linear drift instead, which is the unique in law solution u(t, x) to the following SPDE

    \frac{\partial u}{\partial t} = \Delta u + b u + \sqrt{\gamma u}\, \dot{W},

where \dot{W} = \dot{W}(t, x) is space-time white noise. Here, \Delta u models the spatial motion and dispersion of the population and \sqrt{\gamma u}\, \dot{W} models the stochastic fluctuations in the population size. u_t = u(t, ·) can be interpreted as the continuous spatial density of the population at time t.

For both models, the degeneracy in the fluctuation term and its lack of Lipschitz continuity leads to difficulties in establishing uniqueness. In the above, the additivity properties inherent to both models (for example, the sum of two independent (b, γ)-Feller diffusions starting at x_1 respectively x_2 is a (b, γ)-Feller diffusion starting at x_1 + x_2) can be successfully employed to investigate the long-term behaviour of the above SDE, respectively SPDE. For extensive literature on the above, the interested reader is referred to Perkins [14].

As a next step, one can consider an equation for each type of population and introduce interactions between types by interlinking the equations. Thereby one can model competition of types for resources but also mutual help between types. As a result, the analysis of the resulting equations becomes more complicated, as additivity properties are no longer present in this context. In the first and third paper of my thesis I shall therefore employ different perturbation methods to derive results on the systems at hand from results for more accessible models. For instance, the first paper uses a perturbation method of Stroock and Varadhan to obtain the new system of SDEs as a perturbation of a system of independent Feller diffusions with constant coefficients. The last paper considers perturbations of the biased voter model, for which the long-range limit was obtained in Mueller and Tribe [12].

In what follows, I shall give a short overview of the models, objectives and underlying literature of each of the three manuscripts of this thesis.

1.1 Overview of the Manuscripts

1.1.1 Degenerate stochastic differential equations for catalytic branching networks

In the first paper I investigate weak uniqueness of solutions to the following system of SDEs: for j ∈ R ⊂ {1, . . . , d} and C_j ⊂ {1, . . . , d}\{j},

    dx_t^{(j)} = b_j(x_t)\, dt + \sqrt{2\gamma_j(x_t)\Big(\sum_{i \in C_j} x_t^{(i)}\Big) x_t^{(j)}}\, dB_t^j,    (1.1)

and for j ∉ R,

    dx_t^{(j)} = b_j(x_t)\, dt + \sqrt{2\gamma_j(x_t)\, x_t^{(j)}}\, dB_t^j.    (1.2)

Here x_t ∈ R^d_+ and b_j, γ_j, j = 1, . . . , d, are Hölder-continuous functions on R^d_+ with γ_j(x) > 0, and b_j(x) ≥ 0 if x_j = 0. The B_t^j, j ∈ {1, . . . , d}, are independent Brownian motions. This system of SDEs models catalytic branching networks, where types i ∈ C_j catalyze the replication of type j, j ∈ R (the so-called reactants). Such systems can be obtained as a limit of near-critical branching particle systems.
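To make the dynamics of (1.1) and (1.2) concrete, here is a minimal simulation sketch in Python. It assumes constant coefficients, a small hypothetical network and a truncated-at-zero Euler–Maruyama discretization; none of these choices come from the thesis, and the scheme only illustrates how the catalyst masses enter the diffusion coefficient, not the constructions used in the manuscripts.

```python
import numpy as np

def simulate_network(C, b, gamma, x0, T=1.0, dt=1e-4, seed=0):
    """Crude Euler-Maruyama sketch of the catalytic-network SDEs (1.1)-(1.2).

    C[j] is the (possibly empty) set of catalysts of type j; b and gamma are
    constants here, although the thesis allows Holder-continuous functions.
    Values are truncated at zero after each step, so this is only a rough
    illustration of the dynamics, not the construction used for uniqueness.
    """
    rng = np.random.default_rng(seed)
    d = len(x0)
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(int(T / dt)):
        dB = rng.normal(scale=np.sqrt(dt), size=d)
        new = x.copy()
        for j in range(d):
            # catalyst factor: sum of catalyst masses if j is a reactant, else 1
            cat = sum(x[i] for i in C[j]) if C[j] else 1.0
            diffusion = np.sqrt(max(2.0 * gamma[j] * cat * x[j], 0.0))
            new[j] = max(x[j] + b[j] * dt + diffusion * dB[j], 0.0)
        x = new
        path.append(x.copy())
    return np.array(path)

# Hypothetical 3-type network: type 0 catalyzes type 1, type 1 catalyzes type 2.
C = {0: set(), 1: {0}, 2: {1}}
path = simulate_network(C, b=[0.1, 0.05, 0.05], gamma=[1.0, 1.0, 1.0], x0=[1.0, 0.5, 0.2])
print(path[-1])
```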
The growth rate of types corresponds to the branching rate in this stochastic setting, i.e. type j, j ∈ R, in state x_t branches at a rate \gamma_j(x_t) \sum_{i \in C_j} x_t^{(i)}, proportional to the sum of masses of the types i, i ∈ C_j, at time t.

The degeneracies in the covariance coefficients of this system and their lack of Lipschitz continuity make the investigation of uniqueness a challenging question. The former rules out the classic Stroock-Varadhan approach of perturbing Brownian motion and the latter prevents application of Itô's pathwise uniqueness arguments. Similar results have been proven in Athreya, Barlow, Bass and Perkins [1] and Bass and Perkins [2], but without the additional singularity \sum_{i \in C_j} x_t^{(i)} in the covariance coefficients of the diffusion.

The question of uniqueness of equations with non-constant coefficients arises already in the case d = 2 in the renormalization analysis of hierarchically interacting two-type branching models treated in Dawson, Greven, den Hollander, Sun and Swart [6].

In [7], Dawson and Perkins proved weak uniqueness for the following system of SDEs: for j ∈ R and C_j = {c_j}, c_j ≠ j,

    dx_t^{(j)} = b_j(x_t)\, dt + \sqrt{2\gamma_j(x_t)\, x_t^{(c_j)} x_t^{(j)}}\, dB_t^j,

and (1.2) as above. This restriction to at most one catalyst per reactant is sufficient for the renormalization analysis for d = 2 types, but for more than 2 types one will encounter models where one type may have two catalysts. The goal of my first paper was to overcome this restriction and to allow consideration of general multi-type branching networks as envisioned in the section on future challenges in [6]. In my first paper, I extend the techniques of [7] to the setting of general catalytic networks, i.e. to (1.1) and (1.2). My work further includes natural settings such as competing hypercycles (cf. Eigen and Schuster [8], p. 55, respectively Hofbauer and Sigmund [10], p. 106). This latter work proposed an analogous system of ODEs as a model for the emergence of long polynucleotides in prebiotic evolution.

1.1.2 Long-term behaviour of a cyclic catalytic branching system

As an application of the above, I investigate the following special case in my second paper, involving cyclic catalytic branching and mutation between types. As I shall point out in this paper, the cyclic setup can easily be extended to arbitrary networks. Questions of survival and coexistence of types in the long-time limit arise. Such questions appear naturally in biological competition models. For instance, Fleischmann and Xiong [9] investigated a cyclically catalytic super-Brownian motion in one spatial dimension. They showed global segregation (noncoexistence) of neighbouring types in the limit and other results on finite-time survival and extinction, but they were not able to determine whether or not the overall sum dies out in the limit. In [10] (p. 86), multi-type branching processes with independent replication and mutation between types were rejected as a model, since typically one type would take over, contrary to the observed diversity which emerged from the primordial soup.

Let the following system of SDEs for d ≥ 2 be given:

    dx_t^i = \sqrt{2\gamma^i x_t^i x_t^{i+1}}\, dB_t^i + \sum_{j=1}^d x_t^j q_{ji}\, dt,   i ∈ {1, . . . , d},

where x_t^{d+1} ≡ x_t^1. I assume the γ^i and q_{ji}, i ≠ j, to be given positive constants and the x_0^i ≥ 0, i ∈ {1, . . . , d}, to be given initial conditions. (q_{ji}) is a Q-matrix modelling the mutations or migrations from type j to type i.
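As a quick numerical illustration of the quantities studied in the second manuscript (the overall sum of masses and the relative distribution of types), the sketch below runs a truncated Euler–Maruyama discretization of the cyclic system for d = 3. The constants, the Q-matrix and the step size are hypothetical choices of mine, and the scheme carries no guarantees from the thesis.

```python
import numpy as np

def simulate_cyclic(gamma, Q, x0, T=1.0, dt=1e-4, seed=1):
    """Euler-Maruyama sketch of the cyclic catalytic system with mutation.

    gamma[i] > 0; Q is a d x d Q-matrix (positive off-diagonal entries q_{ji},
    rows summing to 0), so the drift of x^i is sum_j x^j q_{ji}.  Values are
    truncated at zero after each step.
    """
    rng = np.random.default_rng(seed)
    d = len(x0)
    x = np.array(x0, dtype=float)
    sums, normalized = [], []
    for _ in range(int(T / dt)):
        dB = rng.normal(scale=np.sqrt(dt), size=d)
        drift = x @ Q                 # component i equals sum_j x^j q_{ji}
        nxt = np.roll(x, -1)          # x^{i+1}, cyclically (x^{d+1} = x^1)
        diff = np.sqrt(np.maximum(2.0 * gamma * x * nxt, 0.0))
        x = np.maximum(x + drift * dt + diff * dB, 0.0)
        s = x.sum()
        sums.append(s)
        normalized.append(x / s if s > 0 else np.zeros(d))
    return np.array(sums), np.array(normalized)

# Hypothetical parameters for d = 3.
gamma = np.array([1.0, 1.0, 1.0])
Q = np.array([[-0.2, 0.1, 0.1],
              [0.1, -0.2, 0.1],
              [0.1, 0.1, -0.2]])
s, y = simulate_cyclic(gamma, Q, x0=[1.0, 0.5, 0.5])
print(s[-1], y[-1])
```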
I shall investigate in particular the behaviour of the sum of types s_t = \sum_{i=1}^d x_t^i and of the normalized processes y_t^i = x_t^i / s_t in the time limit. The latter addresses the diversity issue in [10].

1.1.3 Convergence of rescaled competing species processes to a class of SPDEs

The objectives of this paper were threefold. To better understand them, I shall first introduce the three papers that provide motivation.

Firstly, in [12], Mueller and Tribe construct a sequence of rescaled competing species processes ξ_t^N ∈ {0, 1}^{Z/N} in dimension d = 1 and show that its approximate densities

    A(ξ_t^N)(x) ≡ \frac{1}{|\{y ∈ Z/N : 0 < |y − x| ≤ 1/\sqrt{N}\}|} \sum_{y ∈ Z/N,\; 0 < |y − x| ≤ 1/\sqrt{N}} ξ_t^N(y),   x ∈ N^{−1}Z,

converge in distribution to a continuous space-time density that solves an SPDE. Here, ξ_t^N ∈ {0, 1}^{Z/N} denotes the configuration at time t of a "voter process" with bias τ = θ/N. That is, each type (0 or 1) invades a randomly chosen "neighbouring site" with constant rate, where θ > 0 would slightly favour 1's by giving them a slightly larger invasion rate. ξ_t^N(x) = i if site x ∈ Z/N is occupied by type i, i = 0, 1, and hence u_t can be interpreted as the limiting continuous space-time density of type 1 and 1 − u_t as the density of type 0. [12] fix θ ≥ 0, i.e. consider the case where the opinion of type 1 is slightly dominant. They show that u_t is a solution of the following SPDE, the heat equation with drift, driven by Fisher-Wright noise, namely

    \frac{\partial u}{\partial t} = \frac{\Delta u}{6} + 2θ(1 − u)u + \sqrt{4u(1 − u)}\, \dot{W}.    (1.3)

Observe that [12] scale space by 1/N and use {y ∈ Z/N : 0 < |y − x| ≤ 1/\sqrt{N}}, the set of neighbours of x, to calculate approximate densities. Hence, the number of neighbours of x ∈ Z/N increases proportionally to 2N^{1/2}. They thus obtain long-range interactions. Finally, they also speed up time to obtain appropriate limits. After rescaling (1.3) appropriately, one obtains the Kolmogorov-Petrovskii-Piscuinov (KPP) equation driven by Fisher-Wright noise. The behaviour of this SPDE has already been investigated in detail in Mueller and Sowers [11], where the existence of travelling waves was shown.

Secondly, in Cox and Perkins [4] it was shown that stochastic spatial Lotka-Volterra models, suitably rescaled in space and time, converge weakly to super-Brownian motion with linear drift. As they choose the parameters in their models to approach 1, the models can also be interpreted as small perturbations of the voter model. [4] extended the main results of Cox, Durrett and Perkins [3], which proved similar results for long-range voter models. Both papers treat the low density regime, i.e. where only a finite number of individuals of type 1 is present. Also note that both papers use a different scaling in comparison to [12]. The scaling of [12] is at the threshold of, but not included in, the results of [3], and as a result [12] obtains a non-linear drift term in the limiting SPDE. [4] considers fixed kernel models in dimensions d ≥ 3 and long-range kernel models in arbitrary dimension separately. Finally, in Cox and Perkins [5], the results of [4] for d ≥ 3 are used to relate the limiting super-Brownian motions in the fixed kernel case to questions of coexistence and survival of a rare type in the original Lotka-Volterra model.

Thirdly, spatial versions of the Lotka-Volterra model with finite range were introduced and investigated in Neuhauser and Pacala [13]. The model from [13] incorporates a fecundity parameter and models both intra- and interspecific competition.
The paper shows that short-range interactions alter the predictions of the mean-field model. In my paper I try to extend the approach of [12] for voter models to small perturbations of voter models similar to the perturbations in [4]. I work at criticality in the hope to obtain continuous densities in the limit that solve a class of SPDEs, similar to (1.3) but with more diverse drifts. My second goal is to thereby include spatial versions of Lotka-Volterra models for competition and fecundity parameters near one as introduced in [13] as the approximating models and to determine their limits. As an additional extension to [12] I shall investigate the weak uniqueness of the limiting class of SPDEs as weak uniqueness of the solutions to the SPDE will yield in turn weak uniqueness of the limits of the approximating densities. The last objective of this paper was to combine both long-range models at criticality and fixed kernel models in the perturbations. I investigate if the additional fixed kernel perturbation impacts statements on tightness (equivalent to relative compactness in my Polish spaces) of the approximating models. Thereby, results of [4] are extended. As a special case I would then be able to consider rescaled Lotka-Volterra models with long-range dispersal and shortrange competition.  1.2  Concluding Remarks  Tightness of approximating particle systems can be used to prove existence of limiting points of the approximating particle systems. Often, all limits can be shown to have certain properties in common. For instance, if all limits satisfy an SDE or SPDE as in my third paper for the case of long-range interactions only, weak uniqueness for the limiting systems of SDEs or SPDEs then yields uniqueness of the limits. Additionally, weak uniqueness of the solutions makes available certain tools that are used to investigate the behaviour of the systems at hand. Therefore, the proof of weak uniqueness in the first paper was fundamental to the analysis of the model for cyclic catalytic branching and mutation  6 between types of the second paper. All three papers of my thesis have in common that they investigate multitype interaction models with a degeneracy in the component modelling fluctuations that stems from catalytic branching (the Fisher-Wright noise term can be seen as an application of a 2-cyclic model). Additionally all three models are parameter-dependent, where the parameters can be used to answer questions of survival and coexistence of types.  7  Bibliography [1] Athreya, S.R. and Barlow, M.T. and Bass, R.F. and Perkins, E.A. Degenerate stochastic differential equations and super-Markov chains. Probab. Theory Related Fields (2002) 123, 484–520. MR1921011 [2] Bass, R.F. and Perkins, E.A. Degenerate stochastic differential equations with H¨ older continuous coefficients and super-Markov chains. Trans. Amer. Math. Soc. (2003) 355, 373–405 (electronic). MR1928092 [3] Cox, J.T. and Durrett, R. and Perkins, E.A. Rescaled voter models converge to super-Brownian motion. Ann. Probab. (2000) 28, 185–234. MR1756003 [4] Cox, J.T. and Perkins, E.A. Rescaled Lotka-Volterra Models converge to Super-Brownian Motion. Ann. Probab. (2005) 33, 904–947. MR2135308 [5] Cox, J.T. and Perkins, E.A. Survival and coexistence in stochastic spatial Lotka-Volterra models. Probab. Theory Related Fields (2007) 139, 89– 142. MR2322693 [6] Dawson, D.A. and Greven, A. and den Hollander, F. and Sun, R. and Swart, J.M. The renormalization transformation for two-type branching models. Ann. Inst. H. 
Poincar´e Probab. Statist. (2008) 44, 1038–1077. MR2469334 [7] Dawson, D.A. and Perkins, E.A. On the uniqueness problem for catalytic branching networks and other singular diffusions. Illinois J. Math. (2006) 50, 323–383 (electronic). MR2247832 [8] Eigen, M. and Schuster, P. The Hypercycle: a principle of natural selforganization. Springer, Berlin, 1979. [9] Fleischmann, K. and Xiong, J. A cyclically catalytic super-brownian motion. Ann. Probab. (2001) 29, 820–861. MR1849179 [10] Hofbauer, J. and Sigmund, K. The Theory of Evolution and Dynamical Systems. London Math. Soc. Stud. Texts, vol. 7, Cambridge Univ. Press, Cambridge, 1988. MR1071180 [11] Mueller, C. and Sowers, R.B. Random Travelling Waves for the KPP Equation with Noise. J. Funct. Anal. (1995) 128, 439–498. MR1319963 [12] Mueller, C. and Tribe, R. Stochastic p.d.e.’s arising from the long range contact and long range voter processes. Probab. Theory Related Fields (1995) 102, 519–545. MR1346264 [13] Neuhauser, C. and Pacala, S.W. An explicitly spatial version of the Lotka-Volterra model with interspecific competition. Ann. Appl. Probab. (1999) 9, 1226–1259. MR1728561  8 [14] Perkins, E.A. Dawson-Watanabe superprocesses and measure-valued diffusions. Lectures on Probability Theory and Statistics (Saint-Flour, 1999), 125–324, Lecture Notes in Math., 1781, Springer, Berlin, 2002. MR1915445  9  Chapter 2  Degenerate Stochastic Differential Equations for Catalytic Branching Networks1 2.1 2.1.1  Introduction Catalytic branching networks  In this paper we investigate weak uniqueness of solutions to the following system of stochastic differential equations (SDEs): For j ∈ R ⊂ {1, . . . , d} and Cj ⊂ {1, . . . , d}\{j}: (j) dxt  and for j ∈ /R  = bj (xt )dt +  (j)  dxt    2γj (xt )   = bj (xt )dt +  i∈Cj    (i) (j) xt  xt dBtj  (2.1)  (j)  (2.2)  2γj (xt )xt dBtj .  Here xt ∈ Rd+ and bj , γj , j = 1, . . . , d are H¨ older-continuous functions on Rd+ with γj (x) > 0, and bj (x) ≥ 0 if xj = 0. The degeneracies in the covariance coefficients of this system make the investigation of uniqueness a challenging question. Similar results have been proven (i) in [1] and [4] but without the additional singularity i∈Cj xt in the covariance coefficients of the diffusion. Other types of singularities, for instance replacing (i) the additive form by a multiplicative form i∈Cj xt , are possible as well, under additional assumptions on the structure of the network (cf. Remark 2.1.9 at the end of Subsection 2.1.5). The given system of SDEs can be understood as a stochastic analogue to a system of ODEs for the concentrations yj , j = 1, . . . , d of a type Tj . Then 1 A version of this chapter has been accepted for publication. Kliem, S.M. (2009) Degenerate Stochastic Differential Equations for Catalytic Branching Networks. Ann. Inst. H. Poincar´e Probab. Statist.  10 yj /y˙ j corresponds to the rate of growth of type Tj and one obtains the following ODEs (see [9]): for independent replication y˙ j = bj yj , autocatalytic replication y˙ j = γj yj2 and catalytic replication y˙ j = γj i∈Cj yi yj . In the catalytic case the types Ti , i ∈ Cj catalyze the replication of type j, i.e. the growth of type j is proportional to the sum of masses of types i, i ∈ Cj present at time t. An important case of the above system of ODEs is the so-called hypercycle, firstly introduced by Eigen and Schuster (see [8]). It models hypercyclic replication, i.e. y˙ j = γj yj−1 yj and represents the simplest form of mutual help between different types. 
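For concreteness, the following math block writes out the hypercycle for d = 4 types, a worked instance of the definition just given (the choice d = 4 is mine). In the notation introduced in the next subsections, the corresponding branching network has edges E = {(1, 2), (2, 3), (3, 4), (4, 1)}, so each type is catalyzed by its cyclic predecessor.

```latex
% Hypercycle on d = 4 types: \dot y_j = \gamma_j y_{j-1} y_j with the cyclic
% convention y_0 \equiv y_4, i.e. each type is catalyzed by its predecessor.
\[
\begin{aligned}
\dot y_1 &= \gamma_1\, y_4\, y_1, \qquad & \dot y_2 &= \gamma_2\, y_1\, y_2,\\
\dot y_3 &= \gamma_3\, y_2\, y_3, \qquad & \dot y_4 &= \gamma_4\, y_3\, y_4.
\end{aligned}
\]
```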
The system of SDEs can be obtained as a limit of branching particle systems. The growth rate of types in the ODE setting now corresponds to the branching rate in the stochastic setting, i.e. type j branches at a rate proportional to the sum of masses of types i, i ∈ Cj at time t. The question of uniqueness of equations with non-constant coefficients arises already in the case d = 2 in the renormalization analysis of hierarchically interacting two-type branching models treated in [6]. The consideration of successive block averages leads to a renormalization transformation on the diffusion functions of the SDE (i)  (i)  dxt = c θi − xt  dt +  2gi (xt )dBti , i = 1, 2  with θi ≥ 0, i = 1, 2 fixed. Here g = (g1 , g2 ) with gi (x) = xi γi (x) or gi (x) = x1 x2 γi (x), i = 1, 2 for some positive continuous function γi on R2+ . The renormalization transformation acts on the diffusion coefficients g and produces a new set of diffusion coefficients for the next order block averages. To be able to iterate the renormalization transformation indefinitely a subclass of diffusion functions has to be found that is closed under the renormalization transformation. To even define the renormalization transformation one needs to show that the above SDE has a unique weak solution and to iterate it we need to establish uniqueness under minimal conditions on the coefficients. This paper is an extension of the work done in Dawson and Perkins [7]. The latter, motivated by the stochastic analogue to the hypercycle and by [6], proved weak uniqueness in the above mentioned system of SDEs (2.1) and (2.2), where (2.1) is restricted to (j)  dxt  = bj (xt )dt +  (cj ) (j) xt dBtj ,  2γj (xt )xt  i.e. Cj = {cj } and (2.2) remains unchanged. This restriction to at most one catalyst per reactant is sufficient for the renormalization analysis for d = 2 types, but for more than 2 types one will encounter models where one type may have two catalysts. The present work overcomes this restriction and allows consideration of general multi-type branching networks as envisioned in [6], including further natural settings such as competing hypercycles (cf. [8] page 55 resp. [9], p. 106). In particular, the techniques of [7] will be extended to the setting of general catalytic networks. Intuitively it is reasonable to conjecture uniqueness in the general setting as (c ) (i) there is less degeneracy in the diffusion coefficients; xt j changes to i∈Cj xt ,  11  Figure 2.1: Decomposition from the catalyst’s point of view: Arrows point from vertices i ∈ NC to vertices j ∈ Ri (for the definition of NC , Ri and N2 see Subsections 2.1.3 and 2.1.5 to follow). Separate points signify vertices j ∈ N2 . The dotted arrows signify arrows which are only allowed in the generalized setting and thus make a decomposition of the kind used in [7] inaccessible. all coordinates i ∈ Cj have to become zero at the same time to result in a singularity. For d = 2 weak uniqueness was proven for a special case of a mutually catalytic model (γ1 = γ2 = const.) via a duality argument in [10]. Unfortunately this argument does not extend to the case d > 2.  2.1.2  Comparison with Dawson and Perkins [7]  The generalization to arbitrary networks results in more involved calculations. The most significant change is the additional dependency among catalysts. In [7] the semigroup of the process under consideration could be decomposed into groups of single vertices and groups of catalysts with their corresponding reactants (see Figure 2.1). 
Hence the main part of the calculations in [7], where bounds on the semigroup are derived, i.e. Section 2 of [7] (“Properties of the basic semigroups”), could be reduced to the setting of a single vertex or a single catalyst with a finite number of reactants. In the general setting this strategy is no longer available as one reactant is now allowed to have multiple catalysts (see again Figure 2.1). As a consequence we shall treat all vertices in one step only. This results in more work in Section 2, where bounds on the given semigroup are now derived directly. We also employ a change of perspective from reactants to catalysts. In [7] every reactant j had one catalyst cj only (and every catalyst i a set of reactants Ri ). For the general setting it turns out to be more efficient to consider every catalyst i with the set Ri of its reactants. In particular, the restriction ¯ i , including only reactants whose catalysts are all zero, turns out from Ri to R to be crucial for later definitions and calculations. It plays a key role in the extension of the definition of the weighted H¨ older norms to general networks  12 (see Subsection 2.1.6). Changes in one catalyst indirectly impact other catalysts now via common reactants, resulting for instance in new mixed partial derivatives. As a first step a representation for the semigroup of the generalized process had to be found (see (2.15)). In [7], (12) the semigroup could be rewritten in a product form of semigroups of each catalyst with its reactants. Now a change in one catalyst resp. coordinate of the semigroup impacts in particular the local covariance of all its reactants. As the other catalysts of this reactant also appear in this coefficient, a decomposition becomes impossible. Instead the triangle inequality has to be often used to express resulting multi-dimensional coordinate changes of the function G, which is closely related with the semigroup representation (see (2.16)), via one-dimensional ones. As another important tool Lemma 2.2.6 was developed in this context. The ideas of the proofs in [7] often had to be extended. Major changes can be found in the critical Proposition 2.2.25 and its associated Lemmas (especially Lemma 2.2.29). The careful extension of the weighted H¨ older norms to arbitrary networks had direct impact on the proofs of Lemma 2.2.19 and Theorem 2.2.20.  2.1.3  The model  Let a branching network be given by a directed graph (V, E) with vertices V = {1, . . . , d} and a set of directed edges E = {e1 , . . . , ek }. The vertices represent the different types, whose growth is under investigation, and (i, j) ∈ E means that type i “catalyzes” the branching of type j. As in [7] we continue to assume: Hypothesis 2.1.1. (i, i) ∈ / E for all i ∈ V . Let C denote the set of catalysts, i.e. the set of vertices which appear as the 1st element of an edge and R denote the set of reactants, i.e. the set of vertices that appear as the 2nd element of an edge. For j ∈ R, let Cj = {i : (i, j) ∈ E} be the set of catalysts of j and for i ∈ C, let  Ri = {j : (i, j) ∈ E} be the set of reactants, catalyzed by i. If j ∈ / R let Cj = ∅ and if i ∈ / C, let Ri = ∅. We shall consider the following system of SDEs: For j ∈ R:   (j)  dxt  and for j ∈ /R  = bj (xt )dt +  (j)  dxt  2γj (xt )   = bj (xt )dt +  i∈Cj  (i) (j) xt  xt dBtj (j)  2γj (xt )xt dBtj .  Our goal will be to show the weak uniqueness of the given system of SDEs.  
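As a small illustration of this bookkeeping, the following Python sketch derives C, R, C_j and R_i from an edge list. The example network is a hypothetical choice of mine, used only to show how the sets fit together (it satisfies Hypothesis 2.1.1).

```python
# Sketch: build the catalyst/reactant sets of Subsection 2.1.3 from an edge list.
# An edge (i, j) means that type i catalyzes the branching of type j.

def network_sets(V, E):
    assert all(i != j for (i, j) in E), "Hypothesis 2.1.1 forbids self-catalysis"
    C = {i for (i, _) in E}                               # catalysts
    R = {j for (_, j) in E}                               # reactants
    Cj = {j: {i for (i, k) in E if k == j} for j in V}    # catalysts of j (empty if j not in R)
    Ri = {i: {j for (k, j) in E if k == i} for i in V}    # reactants catalyzed by i (empty if i not in C)
    return C, R, Cj, Ri

# Hypothetical network on V = {1, 2, 3, 4}: a 3-cycle with one extra reactant.
V = {1, 2, 3, 4}
E = {(1, 2), (2, 3), (3, 1), (3, 4)}
C, R, Cj, Ri = network_sets(V, E)
print(C, R, Cj[2], Ri[3])   # {1, 2, 3} {1, 2, 3, 4} {1} {1, 4}
```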
13  2.1.4  Statement of the main result  In what follows we shall impose additional regularity conditions on the coefficients of our diffusions, similar to the ones in Hypothesis 2 of [7], which will remain valid unless indicated to the contrary. |x| is the Euclidean length of x ∈ Rd and for i ∈ V let ei denote the unit vector in the ith direction. Hypothesis 2.1.2. For i ∈ V , γi : Rd+ → (0, ∞),  bi : Rd+ → R  are taken to be H¨ older continuous of some positive index on compact subsets of Rd+ such that |bi (x)| ≤ c(1 + |x|) on Rd+ , and bi (x) ≥ 0 if xi = 0. In addition, bi (x) > 0 if i ∈ C ∪ R and xi = 0. Definition 2.1.3. If ν is a probability on Rd+ , a probability P on C(R+ , Rd+ ) is said to solve the martingale problem MP(A,ν) if under P , the law of x0 (ω) = ω0 (xt (ω) = ω(t)) is ν and for all f ∈ Cb2 (Rd+ ), t  Mf (t) = f (xt ) − f (x0 ) −  0  Af (xs )ds  is a local martingale under P with respect to the canonical right-continuous filtration (Ft ). Remark 2.1.4. The weak uniqueness of a system of SDEs is equivalent to the uniqueness of the corresponding martingale problem (see for instance, [12], V.(19.7)). For f ∈ Cb2 (Rd+ ), the generator corresponding to our system of SDEs is Af (x) = A(b,γ) f (x)  =  j∈R  γj (x)   i∈Cj    xi  xj fjj (x) +  γj (x)xj fjj (x) + j ∈R /  Here fij is the second partial derivative of f w.r.t. xi and xj . As a state space for the generator A we shall use        xi + x j  > 0 . S = x ∈ Rd+ :   j∈R  i∈Cj  We first note that S is a natural state space for A.  bj (x)fj (x). j∈V  (2.3)  14 Lemma 2.1.5. If P is a solution to MP(A,ν), where ν is a probability on Rd+ , then xt ∈ S for all t > 0 P -a.s.  Proof. The proof follows as for Lemma 5, [7] on p. 377 via a comparison argument with a Bessel process, using Hypothesis 2.1.2. We shall now state the main theorem which, together with Remark 2.1.4 provides weak uniqueness of the given system of SDEs for a branching network. Theorem 2.1.6. Assume Hypothesis 2.1.1 and 2.1.2 hold. Then for any probability ν, on S, there is exactly one solution to MP(A,ν).  2.1.5  Outline of the proof  Our main task in proving Theorem 2.1.6 consists in establishing uniqueness of solutions to the martingale problem MP(A,ν). Existence can be proven as in Theorem 1.1 of [1]. The main idea in proving uniqueness consists in understanding our diffusion as a perturbation of a well-behaved diffusion and applying the Stroock-Varadhan perturbation method (refer to [13]) to it. This approach can be devided into three steps. Step 1: Reduction of the problem. We can assume w.l.o.g. that ν = δx0 . Furthermore it is enough to consider uniqueness for families of strong Markov solutions. Indeed, the first reduction follows by a standard conditioning argument (see p. 136 of [3]) and the second reduction follows by using Krylov’s Markov selection theorem (Theorem 12.2.4 of [13]) together with the proof of Proposition 2.1 in [1]. Next we shall use a localization argument of [13] (see e.g. the argument in the proof of Theorem 1.2 of [4]), which basically states that it is enough if for ˜ δx0 ) has a unique solution, where each x0 ∈ S the martingale problem M P (A, bi = ˜bi and γi = γ˜i agree on some B(x0 , r0 ) ∩ Rd+ . Here we used in particular that a solution never exits S as shown in Lemma 2.1.5. Finally, if the covariance matrix of the diffusion is non-degenerate, uniqueness follows by a perturbation argument as in [13] (use e.g. Theorem 6.6.1 and Theorem 7.2.1). Hence consider only singular initial points, i.e. 
where either (j)  (i)  x0 = 0 or i∈Cj  x0 = 0 for some j ∈ R  (j)  or x0 = 0 for some j ∈ / R.  Step 2: Perturbation of the generator. Fix a singular initial point x0 ∈ S and set (for an example see Figure 2.2)     x0i = 0 ; NR = j ∈ R :   i∈Cj  NC = ∪j∈NR Cj ; N2 = V \ (NR ∪ NC ) ; ¯ i = R i ∩ NR , R  PSfrag replacements 15 1 : x01 = 0 ∗1 ∈ NC 6 ∗6 ∈ NR ∗x06 > 0 ¯ i , i = 1, 2, 3 ∗6 ∈ R  2 : x02 = 0 ∗2 ∈ NC 3 : x03 = 0 ∗3 ∈ NC  7 ∗7 ∈ / NR ∗x06 ≥ 0 ∗6 ∈ R3 ¯3 ∗6 ∈ /R  4 : x04 > 0 ∗4 ∈ / NC 5 : x05 = 0 ∗5 ∈ / NC  ¯ i . The ∗’s are the implications deduced Figure 2.2: Definition of NR , NC and R from the given setting. i.e. in contrast to the setting in [7], p. 327, N2 can also include zero catalysts, but only those whose reactants have at least one more catalyst being non-zero. (i)  Let Z = Z(x0 ) = {i ∈ V : x0i = 0} (if i ∈ / Z, then x0i > 0 and so xs > 0 for 0 small s a.s. by continuity). Moreover, if x ∈ S, then NR ∩ Z = ∅ and NR ∪ N C ∪ N 2 = V is a disjoint union. Notation 2.1.7. In what follows let RA ≡ {f, f : A → R} resp. RA + ≡ {f, f : A → R+ }. for arbitrary A ⊂ V . Next we shall rewrite our system of SDEs with corresponding generator A as a perturbation of a well-understood system of SDEs with corresponding generator A0 , which has a unique solution. The state space of A0 will be S x0 = S0 = {x ∈ Rd : xi ≥ 0 for all i ∈ / NR }. First, we view  x(j)  j∈NR  , x(i)  i∈NC  , i.e. the set of vertices with zero  catalysts together with these catalysts, near its initial point  x0j  j∈NR  , x0i  i∈NC  C as a perturbation of the diffusion on RNR × RN + , which is given by the unique  16 solution to the following system of SDEs: (j)  dxt  = b0j dt +    2γj0   i∈Cj  and (i)  dxt = b0i dt +    (i) (j) xt dBtj , x0 = x0j , for j ∈ NR  (i)  (2.4)  (i)  2γi0 xt dBti , x0 = x0i , for i ∈ NC ,  where for j ∈ NR , b0j = bj (x0 ) ∈ R and γj0 = γj (x0 )x0j > 0 as x0j > 0 if its catalysts are all zero. Also, b0i = bi (x0 ) > 0 as x0i = 0 for i ∈ NC and γi0 = γi (x0 ) k∈Ci x0k > 0 if i ∈ NC ∩ R as i is a zero catalyst thus having at least one non-zero catalyst itself, or γi0 = γi (x0 ) > 0 if i ∈ NC \R. Note that the non-negativity of b0i , i ∈ NC ensures that solutions starting in {x0i ≥ 0} remain there (also see definition of S0 ). Secondly, for j ∈ N2 we view this coordinate as a perturbation of the Feller branching process (with immigration) (j)  dxt  = b0j dt +  (j)  (j)  2γj0 xt dBtj , x0 = x0j , for j ∈ N2 ,  (2.5)  where b0j = (bj (x0 ) ∨ 0) (at the end of Section 2.3 the general case bj (x0 ) ∈ R is reduced to bj (x0 ) ≥ 0 by a Girsanov transformation), γj0 = γj (x0 ) i∈Cj x0i > 0 if j ∈ R by definition of N2 , i.e. at least one of the catalysts being positive, / R. As for i ∈ NC , the non-negativity of b0j , j ∈ N2 or γj0 = γj (x0 ) > 0 if j ∈ ensures that solutions starting in {x0j ≥ 0} remain there (see again definition of S0 ). Therefore we can view A as a perturbation of the generator   ∂2 ∂2 ∂ + (2.6) γj0  γi0 xi 2 . xi  2 + A0 = b0j ∂xj ∂xj ∂xi j∈V  j∈NR  i∈Cj  i∈NC ∪N2  The coefficients b0i , γi0 found above for x0 ∈ S now satisfy  0   γj > 0 for all j, / NR , b0j ≥ 0 if j ∈   0 bj > 0 if j ∈ (R ∪ C) ∩ Z,  (2.7)  where  NR ∩ Z = ∅.  (2.8)  In the remainder of the paper we shall always assume the conditions (2.7) hold when dealing with A0 whether or not it arises from a particular x0 ∈ S as above. 
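The decomposition in Step 2 is purely combinatorial, so it can be spelled out mechanically. The sketch below computes Z, N_R, N_C, N_2 and R̄_i = R_i ∩ N_R directly from their definitions for a hypothetical network and singular initial point (both of my own choosing); it is only meant to make the definitions concrete.

```python
# Sketch: the sets of Step 2 for a singular initial point x0, following
# N_R = {j in R : sum_{i in C_j} x0_i = 0}, N_C = union of C_j over j in N_R,
# N_2 = V \ (N_R u N_C), Z = {i : x0_i = 0}, Rbar_i = R_i intersected with N_R.

def decompose(V, Cj, Ri, x0):
    Z = {i for i in V if x0[i] == 0}
    NR = {j for j in V if Cj[j] and sum(x0[i] for i in Cj[j]) == 0}
    NC = set().union(*(Cj[j] for j in NR)) if NR else set()
    N2 = V - (NR | NC)
    Rbar = {i: Ri[i] & NR for i in V}
    return Z, NR, NC, N2, Rbar

# Hypothetical network with edges (1,2), (2,3), (3,1), (3,4), as in the earlier sketch.
V = {1, 2, 3, 4}
Cj = {1: {3}, 2: {1}, 3: {2}, 4: {3}}
Ri = {1: {2}, 2: {3}, 3: {1, 4}, 4: set()}
# Singular initial point: the only catalyst of type 2 vanishes, so 2 lies in N_R.
x0 = {1: 0.0, 2: 0.5, 3: 0.7, 4: 1.0}
print(decompose(V, Cj, Ri, x0))
# ({1}, {2}, {1}, {3, 4}, {1: {2}, 2: set(), 3: set(), 4: set()})
```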
As we shall see in Subsection 2.2.1 the A0 martingale problem is then well-posed and the solution is a diffusion on S0 ≡ S x0 = {x ∈ Rd : xi ≥ 0 for all i ∈ V \NR = NC ∪ N2 }.  (2.9)  17 Notation 2.1.8. In the following we shall use the notation NC2 ≡ NC ∪ N2 . Step 3: A key estimate. Set Bf := (A − A0 )f ˜bj (x) − b0 j  = j∈V  + i∈NC2  ∂f + ∂xj  γ˜i (x) − γi0 xi  j∈NR 2  ∂ f , ∂x2i    γ˜j (x) − γj0   i∈Cj    xi   ∂2f ∂x2j  where for j ∈ V,  ˜bj (x) = bj (x),  for j ∈ NR ,  γ˜j (x) = γj (x)xj , and  for i ∈ NC2 ,  γ˜i (x) = 1{i∈R} γi (x)  k∈Ci  xk + 1{i∈R} γi (x). /  By using the continuity of the diffusion coefficients of A and the localization argument mentioned in Step 1 we may assume that the coefficients of the operator B are arbitrarily small, say less than η in absolute value. The key step (see Theorem 2.3.3) will be to find a Banach space of continuous functions with norm · , depending on x0 , so that for η small enough and λ0 > 0 large enough, BRλ f ≤  1 2  Here  ∞  Rλ f =  f , ∀ λ > λ0 .  (2.10)  e−λs Ps f ds  (2.11)  0  is the resolvent of the diffusion with generator A0 and Pt is its semigroup. The uniqueness of the resolvent of our strong Markov solution will then follow as in [13] and [4]. A sketch of the proof is given in Section 2.3. Remark 2.1.9. Under additional restrictions on the structure of the branching network our results carry over to the system of SDEs, where the additive form for the catalysts is replaced by a multiplicative form as follows. For j ∈ R we now consider   (j)  dxt  = bj (xt )dt +  instead and for j ∈ /R (j)  dxt  2γj (xt )   = bj (xt )dt +  i∈Cj  (j) (i) xt  xt dBtj  (j)  2γj (xt )xt dBtj  18 as before. Indeed, if we impose that for all j ∈ R we have either |Cj | = 1 or |Cj | ≥ 2 and for all i1 = i2 , i1 , i2 ∈ Cj : i1 ∈ Ci2 or i2 ∈ Ci1 , and if we assume that Hypothesis 2.1.2 holds, then we can show a result similar to Theorem 2.1.6. For instance, the following system of SDEs would be included. (1)  = b1 (xt )dt +  2γ1 (xt )xt xt xt dBt1 ,  (2)  = b2 (xt )dt +  2γ2 (xt )xt xt xt dBt2 ,  (3)  = b3 (xt )dt +  2γ3 (xt )xt xt xt dBt3 ,  (4)  = b4 (xt )dt +  2γ4 (xt )xt xt xt dBt4 .  dxt dxt dxt dxt  (2) (3) (1) (3) (4) (2) (4) (1) (3) (1) (2) (4)  Note in particular, that the additional assumptions on the network ensure that at most one of either the catalysts in Cj or j itself can become zero, so that we obtain the same generator A0 as in the setting of additive catalysts if we set γj0 ≡ γj (x0 ) i∈{j}∪Cj :x0 >0 x0i (cf. the derivation of (2.4)). i  Remark 2.1.10. In [5] the H¨ older condition on the coefficients was successfully removed but the restrictions on the network as stated in [7] were kept. As both [7] and [5] are based upon realizing the SDE in question as a perturbation of a well-understood SDE, one could start extending [5] to arbitrary networks by using the same generator and semigroup decomposition for the well-understood SDE as considered in this paper.  2.1.6  Weighted H¨ older norms and semigroup norms  In this section we describe the Banach space of functions which will be used in (2.10). In (2.10) we use the resolvent of the generator A0 with state space S0 = S x0 = {x ∈ Rd : xi ≥ 0 for all i ∈ NC2 }. Note in particular that the ¯ i etc. depend on x0 . state space and the realizations of the sets NR , R Next we shall define the Banach space of weighted α-H¨ older continuous funcα tions on S0 , Cw (S0 ) ⊂ Cb (S0 ), in two steps. 
It will be the Banach space we look for and is a modification of the space of weighted H¨ older norms used in [4]. Let f : S0 → R be bounded and measurable and α ∈ (0, 1). As a first step define the following seminorms for i ∈ NC : α/2  |f |α,i = sup |f (x + h) − f (x)| |h|−α xi  ∨ |h|−α/2 :  ¯ i , x, h ∈ S0 . |h| > 0, hk = 0 if k ∈ / {i} ∪ R  19  ✁   ✁     Figure 2.3: Decomposition of the system of SDEs: unfilled circles, resp. filled circles, resp. squares are elements of NR , resp. NC , resp. N2 . The definition of ¯ i (un|f |α,i , i ∈ NC allows changes in i (filled circles) and the associated j ∈ R filled circles), the definition of |f |α,j , j ∈ N2 allows changes in j ∈ N2 (squares). Hence changes in all vertices are possible. For j ∈ N2 this corresponds to setting α/2  |f |α,j = sup |f (x + h) − f (x)| |h|−α xj  ∨ |h|−α/2 :  hj > 0, hk = 0 if k = j, x ∈ S0 . This definition is an extension of the definition in [7], p. 329. In our context the definition of |f |α,i , i ∈ NC had to be extended carefully by replacing the set ¯ i ) by the set R ¯ i ⊂ Ri . Observe that the seminorms Ri (in [7] equal to the set R for i ∈ NC and j ∈ N2 taken together still allow changes in all coordinates (see Figure 2.3). The definition of |f |α,j , j ∈ N2 furthermore varies slightly from the one in [7]. We use our definition instead as it enables us to handle the coordinates i ∈ NC , j ∈ N2 without distinction. Secondly, set I = NC2 . Then let |f |Cwα = max |f |α,j , j∈I  f  α= Cw  |f |Cwα + f  ∞,  where f ∞ is the supremum norm of f . f Cwα is the norm we looked for and its corresponding Banach subspace of Cb (S0 ) is α Cw (S0 ) = {f ∈ Cb (S0 ) : f  α< Cw  ∞},  the Banach space of weighted α-H¨ older continuous functions on S0 . Note that ¯ i etc. and hence the definition of the seminorms |f |α,j , j ∈ I depends on NC , R on x0 . Thus f Cwα depends on x0 as well. The seminorms |f |α,i are weaker norms near the spatial degeneracy at xi = 0 where we expect to have less smoothing by the resolvent.  20 Some more background on the choice of the above norms can be found in [4], Section 2. Bass and Perkins ([4]) consider α/2  |f |∗α,i ≡ sup |f (x + hei ) − f (x)||h|−α xi  |f |∗α ≡ sup |f |∗α,i and  f  i≤d  ∗ α≡  |f |∗α + f  : h > 0, x ∈ Rd+ ,  ∞  instead, where ei denotes the unit vector in the i-th direction in Rd . They show that if f ∈ Cb (Rd+ ) is uniformly H¨ older of index α ∈ (0, 1], and constant outside α,∗ of a bounded set, then f ∈ Cw ≡ {f ∈ Cb (Rd+ ) : f ∗α < ∞}. On the other α,∗ hand, f ∈ Cw implies f is uniformly H¨ older of order α/2. As it will turn out later (see Theorem 2.2.20) our norm f Cwα is equivalent to another norm, the so-called semigroup norm, defined via the semigroup Pt corresponding to the generator A0 of our process. As we shall mainly investigate properties of the semigroup Pt on Cb (S0 ) in what follows, it is not surprising that this equivalence turns out to be useful in later calculations. In general one defines the semigroup norm (cf. [2]) for a Markov semigroup {Pt } on the bounded Borel functions on D where D ⊂ Rd and α ∈ (0, 1) via |f |α = sup t>0  Pt f − f tα/2  ∞  ,  f  α=  |f |α + f  ∞  .  (2.12)  The associated Banach space of functions is then S α = {f : D → R : f Borel , f  α<  ∞}.  (2.13)  Convention 2.1.11. Throughout this paper all constants appearing in statements of results and their proofs may depend on a fixed parameter α ∈ (0, 1) and {b0j , γj0 : j ∈ V } as well as on |V | = d. 
By (2.7) M 0 = M 0 (γ 0 , b0 ) ≡ max γi0 ∨ γi0  −1  i∈V  ∨ b0i  ∨  max  i∈(R∪C)∩Z  b0i  −1  < ∞. (2.14)  Given α ∈ (0, 1), d and 0 < M < ∞, we can, and shall, choose the constants to hold uniformly for all coefficients satisfying M 0 ≤ M .  2.1.7  Outline of the paper  Proofs only requiring minor adaptations from those in [7] are usually omitted. A more extensive version of the proofs appearing in Sections 2.2 and 2.3 may be found on the arXiv at arXiv:0802.0035v2. The outline of the paper is as follows. In Section 2.2 the semigroup Pt corresponding to the generator A0 on the state space S0 , as introduced in (2.6) and (2.9), will be investigated. We start with giving an explicit representation of the semigroup in Subsection 2.2.1. In Subsection 2.2.2 the canonical measure N0 is introduced which is used in Subsection 2.2.3 to prove existence and give a representation of derivatives of the semigroup. In Subsections 2.2.4 and 2.2.5  21 bounds are derived on the L∞ norms and on the weighted H¨ older norms of those differentiation operators applied to Pt f , which appear in the definition of A0 . Furthermore, at the end of Subsection 2.2.4 the equivalence of the weighted H¨ older norm and semigroup norm is shown. Finally, in Section 2.3 bounds on the resolvent Rλ of Pt are deduced from the bounds on Pt found in Section 2.2. The bounds on the resolvent will then be used to obtain the key estimate (2.10). The remainder of Section 2.3 illustrates how to prove the uniqueness of solutions to the martingale problem MP(A,ν) from this, as in [7].  2.2  Properties of the Semigroup  2.2.1  Representation of the semigroup  In this subsection we shall find an explicit representation of the semigroup Pt corresponding to the generator A0 (cf. (2.6)) on the state space S0 and further preliminary results. We assume the coefficients satisfy (2.7) and Convention 2.1.11 holds. Let us have a look at (2.4) and (2.5) again. For i ∈ NC or j ∈ N2 (i) (j) the processes xt resp. xt are Feller branching processes (with immigra(j) tion). If we condition on these processes, the processes xt , j ∈ NR become independent time-inhomogeneous Brownian motions (with drift), whose distributions are well understood. Thus if the associated process is denoted by (j) (j) , the semigroup Pt f has the explicit repre= xt xt = xt sentation  j∈NR ∪NC2  j∈V  (i)  Pt f (x) = ⊗i∈NC2 Pxi i ×  j∈NR  R  |NR |  f {zj }j∈NR , xt   i∈NC2  (2.15)  pγ 0 2I (j) zj − xj − b0j t dzj  , t  j  where Pxi i is the law of the Feller branching immigration process x(i) on C(R+ , R+ ), started at xi with generator Ai0 = b0i (j)  It and for y ∈ (0, ∞)  ∂2 ∂ + γi0 x 2 , ∂x ∂x t  = 0 i∈C j  x(i) s ds,  z2  e− 2y py (z) := . (2πy)1/2  Remark 2.2.1. This also shows that the A0 martingale problem is well-posed.  22 and xNR ≡ {xj }j∈NR , let  For (y, z) = {yj }j∈NR , {zi }i∈NC2  G(y, z) = Gt,xNR (y, z) = Gt,xNR {yj }j∈NR , {zi }i∈NC2 = R  |NR |  f {uj }j∈NR , {zi }i∈NC2  j∈NR  (2.16)  pγj0 2yj uj − xj − b0j t duj .  Notation 2.2.2. In the following we shall use the notations (j)  E NC2 = ⊗i∈NC2 Pxi i , ItNR = It  (i)  j∈NR  C2 , xN = xt t  i∈NC2  and we shall write E whenever we do not specify w.r.t. which measure we integrate. Now (2.15) can be rewritten as C2 Pt f (x) = E NC2 Gt,xNR ItNR , xN t  C2 = E NC2 G ItNR , xN t  .  (2.17)  Lemma 2.2.3. 
Let j ∈ NR , then (a)   E NC2    E NC2   i∈Cj  (i) xt  =  2  (i)  xt i∈Cj  xi + b0i t ,  i∈Cj    =  2  xi  i∈Cj  i∈Cj  + i∈Cj    E NC2   2 (i)  i∈Cj  xt − x i  (j)  (b) E NC2    =  and  E NC2 It    = E NC2  (j)  It  +  −p       2   k∈Cj  k∈Cj  i∈Cj  0 i∈C j  i∈Cj     x(i) s ds =         k∈Cj  i∈Cj      b0k + γi0  b0i  t2  xi t +  ≤ c(p)t−p min (t + xi )−p i∈Cj    b0k + γi0  xi  t  b0k + γi0  b0i  t2 ,  2γi0 xi t +  t    b0i 2 t . 2  ∀p > 0.  Note. Observe that the requirement b0i > 0 if i ∈ (R ∪ C) ∩ Z as in (2.7) is crucial for Lemma 2.2.3(b). As i ∈ Cj , j ∈ NR implies i ∈ C ∩ Z, (2.7) guarantees b0i > 0. The bound (b) cannot be applied to i ∈ N2 in general, as (2.7) only gives b0i ≥ 0 in these cases.  23 Proof of (a). The first three results follow from Lemma 7(a) in [7] together with the independence of the Feller-diffusions under consideration. Proof of (b). Proceeding as in the proof of Lemma 7(b) in [7] we obtain E NC2  −p  (j)  It  ∞  ≤ cp e  E NC2 e−u ∞  i∈Cj  t  u−p−1 du  0  ≤ cp e min (j)  −1 (j) It  (i)  0  Pxi i e−u  −1 (i) It  u−p−1 du  (i)  as It = i∈Cj 0 xs ds ≡ i∈Cj It , where the Feller-diffusions under consideration are independent. Now we can proceed as in Lemma 7(b) of [7] to obtain the desired result. Lemma 2.2.4. Let Gt,xNR be as in (2.16). Then (a) for j ∈ NR ∂Gt,xNR {yj }j∈NR , {zi }i∈NC2 ∂xj  =  ∂Gt,xNR (y, z) ≤ f ∂xj  ∞  (γj0 yj )−1/2 , (2.18)  and more generally for any k ∈ N, there is a constant ck such that ∂ k Gt,xNR (y, z) ≤ ck f ∂xkj  ∞  −k/2  yj  .  (b) For j ∈ NR ∂Gt,xNR (y, z) ≤ c1 f ∂yj  ∞  yj−1 .  (2.19)  More generally there are constants ck , k ∈ N such that for l1 , l2 , j1 , j2 ∈ NR , ∂ m1 +m2 +k1 +k2 Gt,xNR k1 k2 m2 1 ∂xm l1 ∂xl2 ∂yj1 ∂yj2  (y, z) ≤ cm1 +m2 +k1 +k2 f  ∞  −m1 /2 −m2 /2 −k1 −k2 y l2 y j1 y j2  y l1  for all m1 , m2 , k1 , k2 ∈ N. (c) Let y NR = {yj }j∈NR and z NC2 = {zi }i∈NC2 , then for all z NC2 with zi ≥ 0, i ∈ NC2 we have that xNR , y NR → Gt,xNR y NR , z NC2 is C 3 on R|NR | × (0, ∞)|NR | . Proof. This proceeds as in [7], Lemma 11, using the product form of the density. Lemma 2.2.5. If f is a bounded Borel function on S0 and t > 0, then Pt f ∈ Cb (S0 ) with |Pt f (x) − Pt f (x )| ≤ c f ∞ t−1 |x − x | .  24 Proof. The outline of the proof is as in the proof of [7], Lemma 12. We shall nevertheless show the proof in detail as it illustrates some basic notational issues, which will appear again in later theorems. Note in particular the frequent use of the triangle inequality resulting in additional sums of the form j:j∈R¯i in 0 the second part of the proof. Using (2.17), we have for x, x ∈ RNR , Pt f x, xNC2 − Pt f x , xNC2 = E  NC2  ≤ f  ∞  C2 Gt,x ItNR , xN t  |xj − xj | γj0  j∈NR  ≤c f  ∞  ≤c f  ∞  γj0  t−1 j∈NR  − Gt,x (j)  E NC2  |xj − xj | j∈NR  (2.20)  It  C2 ItNR , xN t  −1/2  (by (2.18))  t−1/2 min (t + xi )−1/2  (by Lemma 2.2.3(b))  i∈Cj  |xj − xj |.  Next we shall consider x, x = x + hei0 ∈ RNC2 where i0 ∈ NC2 is arbitrarily fixed. Assume h > 0 and let xh denote an independent copy of x(i0 ) starting at h but with b0i0 = 0. Then x(i0 ) + xh has law Pxi0i +h (additive property of Feller branching processes) and so if Ih (t) =  t 0  0  xhs ds,  P t f xN R , x − P t f xN R , x (j)  = E NC2 Gt,xNR  It  −Gt,xNR  + 1{i0 ∈Cj } Ih (t)  (j)  It  j∈NR  j∈NR  C2 , xN t  , xit + 1{i=i0 } xht  i∈NC2  .  For what follows it is important to observe that ¯ i0 , {j ∈ NR : i0 ∈ Cj } = j : j ∈ R ¯i necessary. 
Next we shall use the triangle having made the definition of R inequality to first sum up changes in the jth coordinates (where j ∈ NR such that i0 ∈ Cj ) separately in increasing order, followed by the change in the coordinate i0 . If Th = inf{t ≥ 0 : xht = 0} we thus obtain as a bound for the above (recall that ek denotes the unit vector in the kth direction): c f ¯i j:j∈R 0  =  ∞  c f ¯i j:j∈R 0  (j)  E NC2 Ih (t) It  ∞  −1  E NC2 [Ih (t)] E NC2  +2 f (j)  It  −1  ∞  E[Th > t]  +2 f  ∞  E[Th > t]  25 by (2.19) and as G ∞ ≤ f ∞ by the definition of G. Next we shall use that E[Th > t] ≤ tγh0 (for reference see equation (2.26) in Section 2.2.2). Together i0  with Lemma 2.2.3(a), (b) we may bound the above by c f  ∞  ¯i j:j∈R 0  htt−1 min (t + xi )−1 + 2 f i∈Cj  ∞  h ≤c f tγi00  ∞  ht−1 .  The case x = x + hei , i ∈ NC2 follows similarly. Note that for i ∈ N2 only the second term in the above bound is nonzero as the sum is taken over an ¯i = ∅ for i ∈ N2 ). Together with (2.20) (recall that the 1-norm and empty set (R Euclidean norm are equivalent) we obtain the result via triangle inequality. Finally, we give elementary calculus inequalities that will be used below. Lemma 2.2.6. Let g : Rd+ → R be C 2 . Then for all ∆, ∆ > 0, y ∈ Rd+ and I1 , I2 ⊂ {1, . . . , d}, |g(y + ∆  i1 ∈I1  e i1 + ∆  i2 ∈I2  (∆∆ ) −g(y + ∆ i2 ∈I2 ei2 ) + g(y)| (∆∆ )  ≤  ei2 ) − g(y + ∆  sup {y ∈  Q  i∈{1,...,d} [yi ,yi +∆+∆  ]} i ∈I i ∈I 1 1 2 2  i1 ∈I1  e i1 )  ∂2 g(y ) . ∂yi1 ∂yi2  Also let f : Rd+ → R be C 3 . Then for all ∆1 , ∆2 , ∆3 > 0, y ∈ Rd+ and I1 , I2 , I3 ⊂ {1, . . . , d}, |f (y + ∆1  ei1 + ∆2 i2 ∈I2 ei2 + ∆3 i3 ∈I3 ei3 ) (∆1 ∆2 ∆3 ) −f (y + ∆1 i1 ∈I1 ei1 + ∆3 i3 ∈I3 ei3 ) + f (y + ∆2 i2 ∈I2 ei2 ) (∆1 ∆2 ∆3 ) −f (y + ∆2 i2 ∈I2 ei2 + ∆3 i3 ∈I3 ei3 ) + f (y + ∆3 i3 ∈I3 ei3 ) (∆1 ∆2 ∆3 ) −f (y + ∆1 i1 ∈I1 ei1 + ∆2 i2 ∈I2 ei2 ) + f (y + ∆1 i1 ∈I1 ei1 ) − f (y)| (∆1 ∆2 ∆3 ) ∂3 ≤ sup f (y ) . Q ∂yi1 ∂yi2 ∂yi3 {y ∈ i∈{1,...,d} [yi ,yi +∆1 +∆2 +∆3 ]} i1 ∈I1  i1 ∈I1 i2 ∈I2 i3 ∈I3  Proof. This is an extension of [7], Lemma 13, using the triangle inequality to split the terms under consideration into sums of differences in only one coordinate at a time.  26  2.2.2  Decomposition techniques  In this subsection we cite relevant material from [7], namely Lemma 8, Proposition 9 and Lemma 10. Proofs and references can be found in [7]. Further background and motivation on the processes under consideration may be found in [11], Section II.7. Let {Px0 : x ≥ 0} denote the laws of the Feller branching process X with no immigration (equivalently, the 0-dimensional squared Bessel process) with generator L0 f (x) = γxf (x). Recall that the Feller branching process X can be constructed as the weak limit of a sequence of rescaled critical Galton-Watson branching processes. If ω ∈ C(R+ , R+ ) let ζ(ω) = inf{t > 0 : ω(t) = 0}. There is a unique σ-finite measure N0 on Cex = {ω ∈ C(R+ , R+ ) : ω(0) = 0, ζ(ω) > 0, ω(t) = 0 ∀t ≥ ζ(ω)}  (2.21)  such that for each h > 0, if Ξh is a Poisson point process on Cex with intensity hN0 , then X= Cex  νΞh (dν) has law Ph0 .  (2.22)  Citing [11], N0 can be thought of being the time evolution of a cluster given that it survives for some positive length of time. The representation (2.22) decomposes X according to the ancestors at time 0. Moreover we also have N0 [νδ > 0] = (γδ)−1  (2.23)  and for t > 0 νt dN0 (ν) = 1.  (2.24)  Cex  For t > 0 let Pt∗ denote the probability on Cex defined by Pt∗ [A] =  N0 [A ∩ {νt > 0}] . N0 [νt > 0]  (2.25)  Lemma 2.2.7. 
For all h > 0 Ph0 [ζ > t] = Ph0 [Xt > 0] = 1 − e−h/(tγ) ≤  h . tγ  (2.26)  Proposition 2.2.8. Let f : C(R+ , R+ ) → R be bounded and continuous. Then for any δ > 0, lim h−1 Eh0 [f (X)1{Xδ >0} ] = h↓0  Cex  f (ν)1{νδ >0} dN0 (ν).  The representation (2.22) leads to the following decompositions of the pro(i) (i) cesses xt , i ∈ NC2 that will be used below. Recall that xt is the Feller branching immigration process with coefficients b0i ≥ 0, γi0 > 0 starting at xi and with law Pxi i . In particular, we can make use of the additive property of Feller branching processes.  27 Lemma 2.2.9. Let 0 ≤ ρ ≤ 1. (a) We may assume x(i) = X0 + X1 , where X0 is a diffusion with generator A0 f (x) = γi0 xf (x) + b0i f (x) starting at ρxi , X1 is a diffusion with generator γi0 xf (x) starting at (1 − ρ)xi ≥ 0, and X0 , X1 are independent. In addition, we may assume Nt  νt Ξ(dν) =  X1 (t) = Cex  ej (t),  (2.27)  j=1  where Ξ is a Poisson point process on Cex with intensity (1−ρ)xi N0 , {ej , j ∈ N} is an iid sequence with common law Pt∗ , and Nt is a Poisson random variable i (independent of the {ej }) with mean (1−ρ)x . tγi0 (b) We also have t  t  X1 (s)ds = Cex  0  t  νs ds1{νt =0} Ξ(dν) +  0  Nt  ≡  Cex  0  νs ds1{νt =0} Ξ(dν)  rj (t) + I1 (t) j=1  and t 0  Nt  x(i) s ds  rj (t) + I2 (t),  =  (2.28)  j=1  t  t  where rj (t) = 0 ej (s)ds, I2 (t) = I1 (t) + 0 X0 (s)ds. (c) Let Ξh be a Poisson point process on Cex with intensity hi N0 (hi > 0), independent of the above processes. Set Ξx+h = Ξ + Ξh and Xth = νt Ξh (dν). Then (i) νt Ξx+h (dν) + X0 (t) (2.29) Xtx+h ≡ xt + X h (t) = Cex  is a diffusion with generator A0 starting at xi + hi . In addition Nt  νt Ξ  x+h  (dν) =  Cex  ej (t),  (2.30)  j=1  where Nt is a Poisson random variable with mean ((1 − ρ)xi + hi )(γi0 t)−1 , such that {ej } and (Nt , Nt ) are independent. Also t 0  where I3h (t) =  Cex  t 0  Nt  Xsx+h ds =  rj (t) + I2 (t) + I3h (t), j=1  νs ds1{νt =0} Ξh (dν).  (2.31)  28  2.2.3  Existence and representation of derivatives of the semigroup  Let A0 and Pt be as in Subsection 2.2.1. The first and second partial derivatives of Pt f w.r.t. xk , xl , k, l ∈ NC2 will be represented in terms of the canonical measure N0 . Recall that by (2.17) C2 Pt f (x) = E NC2 G ItNR , xN t  (j)  where ItNR = It  (j)  with It  j∈NR  =  t 0  ,  (i)  xs ds. i∈Cj  C2 , η, η , θ, θ ∈ Cex (for the definition of Notation 2.2.10. If X ∈ C R+ , RN +  Cex see (2.21)) and k, l ∈ NC2 , let G+k X; t,xNR  t  ηs ds, θt 0 t  ≡ Gt,xNR  0 i∈C j  Xsi ds + 1{k∈Cj }  t  ηs ds 0  j∈NR  , Xti + 1{i=k} θt  i∈NC2  and G+k,+l X; t,xNR  t  t  ηs ds, θt , 0  0 t  ≡ Gt,xNR  0 i∈C j  ηs ds, θt  Xsi + 1{k∈Cj } ηs + 1{l∈Cj } ηs ds  Xti + 1{i=k} θt + 1{i=l} θt  , j∈NR  . i∈NC2  Note that if k ∈ N2 in the above we have 1{k∈Cj } = 0 for j ∈ NR , i.e. G+k X; t,xNR  0  0  t  ηs ds, θt  = G+k,+l X; 0, θt , t,xNR  ηs ds, θt  = G+k,+l X; t,xNR  t  ηs ds, θt , 0  = G+k X; 0, θt , t,xNR  t  ηs ds, θt ,  and for l ∈ N2  ηs ds, θt 0  t  G+k,+l X; t,xNR  G+k,+l X; t,xNR  t  0  t 0  ηs ds, θt  t 0  ηs ds, θt , 0, θt . (2.33)  C2 , ν, ν ∈ Cex and k, l ∈ NC2 , let If X ∈ C R+ , RN +  (X, ν) ≡ G+k ∆G+k X; t,xNR t,xNR  t  νs ds, νt 0  (2.32)  − G+k X; 0, 0 t,xNR  29 and ∆G+k,+l (X, ν, ν ) t,xNR X; ≡ G+k,+l t,xNR  (2.34)  t  t  νs ds, νt , 0  X; − G+k,+l t,xNR  0 t 0  X; 0, 0, − G+k,+l t,xNR  νs ds, νt  t 0  νs ds, νt  X; 0, 0, 0, 0 . νs ds, νt , 0, 0 + G+k,+l t,xNR  Proposition 2.2.11. If f is a bounded Borel function on S0 and t > 0 then Pt f ∈ Cb2 (S0 ) and for k, l ∈ V = {1, . . . 
, d} (Pt f )kl  ∞≤  c  f ∞ . t2  Moreover if f is bounded and continuous on S0 , then for all k, l ∈ NC2 (Pt f )k (x) = E NC2 (Pt f )kl (x) = E NC2  ∆G+k xNC2 , ν dN0 (ν) , t,xNR xNC2 , ν, ν dN0 (ν)dN0 (ν ) . ∆G+k,+l t,xNR  (2.35) (2.36)  Proof. The outline of this proof is similar to the one for [7], Proposition 14. We shall therefore only mention some changes due to the consideration of more than one catalyst at a time. With the help of Lemma 2.2.5 and using that Pt f = Pt/2 (Pt/2 f ) one can easily show that it suffices to consider bounded continuous f . In [7], Proposition 14 one only proves the existence of (Pt f )kl (x), k, l ∈ NC2 and its representation in terms of the canonical measure as in (2.36) based on (2.35). From the methods used it should then be clear how the easier formula (2.35) may have been found. Hence, let us also assume (Pt f )k exists and is given by (2.35) for k ∈ NC2 . Let 0 < δ ≤ t. The role of δ will be explained at the end of this proof. In the first case where νδ = νt = 0, use Lemmas 2.2.6 and 2.2.4(b) to see that for  30 k, l ∈ NC xNC2 , ν, ν ∆G+k,+l t,xNR xNC2 ; = G+k,+l t,xNR  (2.37) t  δ  νs ds, 0, 0  xNC2 ; −G+k,+l t,xNR  0 t 0  xNC2 ; 0, 0, − G+k,+l t,xNR  νs ds, 0  t  0 i∈C j  x(i) s ds + 1{k∈Cj }  t  − Gt,xNR  0 i∈C j t  − Gt,xNR  0 i∈C j t  + Gt,xNR ≤  0 i∈C j  c f ¯l ¯ k j2 :j2 ∈R j1 :j1 ∈R  0  νs ds, 0  xNC2 ; 0, 0, 0, 0 νs ds, 0, 0, 0 + G+k,+l t,xNR  t  = Gt,xNR  δ  0  δ  νs ds + 1{l∈Cj }  x(i) s ds + 1{l∈Cj } x(i) s ds + 1{k∈Cj }  j∈NR  C2 , xN t  νs ds  0  0  C2 , xN t  j∈NR t  C2 , xN t  νs ds 0  j∈NR  C2 , xN t  x(i) s ds  ∞  δ  νs ds  j∈NR (j1 )  It  −1  (j2 )  It  δ  −1 0  t  νs ds  νs ds 0  (compare to (49) in [7]). For k or l ∈ N2 we obtain via (2.32) and (2.33) xNC2 , ν, ν ∆G+k,+l t,xNR  = 0.  This is consistent with (2.37) if we consider the sum over an empty set to be ¯ k = Rk ∩ NR and thus R ¯ k = ∅ if k ∈ N2 ). Hence (2.37) is a zero (recall that R bound for all k, l ∈ NC2 . The other cases are proven as in [7] (for the last case use the trivial bound (xNC2 , ν, ν ) ≤ 4 f ∞ ) with the same modifications as just observed. ∆G+k,+l t,xNR  31 Combining all the cases we conclude that ∆G+k,+l (xNC2 , ν, ν ) t,xNR   ≤  1{νδ =νt =0}   ¯ k j2 :j2 ∈R ¯l j1 :j1 ∈R    + 1{νδ =0,νt >0}     + 1{νδ >0,νt =0}  t  ≤  −1  (j ) It 1  1{νδ =νt =0}  0  (j)  It  x(k) s ds  0  −1  x(l) s ds  0 t  + 1{νδ >0,νt =0}  t  −1  ¯k j:j∈R  t  + 1{νδ =0,νt >0}  0  ¯l j:j∈R  0  x(k) s ds  t 0 −1  0    t  νs ds  0    δ  −1  (j) It  δ  −1  (j ) It 2  νs ds  νs ds    νs ds + 1{νδ >0,νt >0} c f  x(l) s ds  −1  δ  0  ∞  t  νs ds  νs ds 0  δ  νs ds  0 −1  t 0  νs ds + 1{νδ >0,νt >0} c f  ∞  ≡ g¯t,δ xNC2 , ν, ν The remainder of the proof works similar to the proof in [7]. Some minor changes are necessary in the proof of continuity from below in x2 (now to be replaced by xNC2 ) following (59) in [7], by considering every coordinate on its own. Also, new mixed partial derivatives appear, which can be treated similarly to the ones already appearing in the proof of Proposition 14 in [7]. Other necessary technical changes will reappear in later proofs where they will be worked out in detail. They are thus omitted at this point. Remark 2.2.12. The necessity for introducing δ only becomes clear in the context of a complete proof. For instance, the derivation of (2.36) starts by defining X.h , independent of x(l) and satisfying Xth = h +  t 0  2γl0 Xsh dBs , (h > 0)  (i.e. X h has law Ph0 ) so that x(l) + X h has law Pxl l +h . 
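(To recall briefly why this additive property holds, here is a short sketch using only the SDEs above; B'' below denotes a Brownian motion that is introduced solely for this remark, on an enlarged probability space if necessary. The processes x^{(l)} and X^h are independent solutions of
\[
dx^{(l)}_t = b^0_l\,dt + \sqrt{2\gamma^0_l\,x^{(l)}_t}\,dB_t,
\qquad
dX^h_t = \sqrt{2\gamma^0_l\,X^h_t}\,dB'_t,
\]
so Z = x^{(l)} + X^h satisfies dZ_t = b^0_l\,dt + dM_t with d\langle M\rangle_t = 2\gamma^0_l Z_t\,dt. Hence M_t = \int_0^t \sqrt{2\gamma^0_l Z_s}\,dB''_s, and weak uniqueness for this one-dimensional SDE identifies the law of Z as P^l_{x_l+h}.)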
Therefore (2.35) together with definition (2.34) implies 1 [(Pt f )k (x + hel ) − (Pt f )k (x)] h 1 = ∆G+k,+l xNC2 , ν, X h t,xNR h  1{Xδh =0} + 1{Xδh >0} dN0 (ν)dP NC2 dPh0 .  Now the first term can be made arbitrarily small for t fixed and δ ↓ 0 + . The second term can be further rewritten with the help of Proposition 2.2.8 and will finally yield the representation (2.36) by first taking h ↓ 0+ and then δ ↓ 0+ .  32  2.2.4  L∞ bounds of certain differentiation operators applied to Pt f and equivalence of norms  We continue to work with the semigroup Pt on the state space S0 corresponding to the generator A0 . Recall the definitions of the semigroup norm |f |α from (2.12) and of the associated Banach space of functions S α from (2.13) in what follows. Proposition 2.2.13. If f is a bounded Borel function on S0 then for j ∈ NR ∂ c f ∞ Pt f (x) ≤ √ , √ ∂xj t max{ t + xi }  (2.38)  i∈Cj  and max{xi } i∈Cj  c f ∂2 Pt f (x) ≤ ∂x2j t  ∞  .  (2.39)  If f ∈ S α , then α  1  α ∂ c|f |α t 2 − 2 √ ≤ c|f |α t 2 −1 , Pt f (x) ≤ ∂xj max{ t + xi }  (2.40)  i∈Cj  and max{xi } i∈Cj  ∂2 α Pt f (x) ≤ c|f |α t 2 −1 . ∂x2j  (2.41)  Proof. The proof proceeds as in [7], Proposition 16 except for minor changes. The estimate in (2.38) can be obtained by mimicking the calculation in (2.20). (2.39) follows from a double application of (2.38), where we use that P t ∂ and ∂x commute. j If f ∈ S α , we proceed as in [2] and write ∂ ∂ ∂ P2t f (x) − Pt f (x) = Pt (Pt f − f )(x) . ∂xj ∂xj ∂xj Applying the estimate (2.38) to g = Pt f − f and using the definition of |f |α we get c|f |α tα/2 ∂ ∂ c g ∞ ≤√ . P2t f (x) − Pt f (x) ≤ √ √ √ ∂xj ∂xj t max{ t + xi } t max{ t + xi } i∈Cj  This together with (2.38) ⇒ lim  t→∞  ∂ Pt f (x) = 0 ∂xj  i∈Cj  33 implies that ∂ Pt f (x) ≤ ∂xj  ∞ k=0  ≤ |f |α  ∂ (P k f − P2(k+1) t f ) (x) ∂xj 2 t ∞  max{ 2k t + xi }  k=0  ≤ |f |α t  c  1 α 2 −2  2k t  i∈Cj  1 α 2 −2  c √ . max{ t + xi } i∈Cj  This then immediately yields (2.40). Use (2.39) to derive (2.41) in the same way as (2.38) was used to prove (2.40). Notation 2.2.14. If w > 0, set pj (w) =  w j −w . j! e  k j=1 rj (t)  Lemma 2.2.9, let Rk = Rk (t) =  For {rj (t)} and {ej (t)} as in  and Sk = Sk (t) =  k j=1  ej (t).  C2 Notation 2.2.15. If X ∈ C R+ , RN , Y, Y , Z, Z ∈ C(R+ , R+ ), η, η , θ, θ ∈ + Cex and m, n, k, l ∈ NC2 , where m = n let  X, Yt , Zt , Yt , Zt ; Gm,n,+k,+l t,xNR t  ≡ Gt,xNR  0 i∈C \{m,n} j  t  t  ηs ds, θt , 0  0  ηs ds, θt  Xsi ds + 1{m∈Cj } Yt + 1{n∈Cj } Yt t  + 0  1{k∈Cj } ηs + 1{l∈Cj } ηs ds  , j∈NR  1{i∈{m,n}} Xti + 1{i=m} Zt + 1{i=n} Zt / + 1{i=k} θt + 1{i=l} θt  . i∈NC2  The notation indicates that the one-dimensional coordinate processes t t Xsm ds, Xtm resp. 0 Xsn ds, Xtn will be replaced by the processes Yt , Zt resp. 0 Yt , Zt (note that for m ∈ N2 this only implies a change from Xtm into Zt ). t t Additionally, we add 0 νs ds, θt , 0 νs ds and θt as before. The terms m,+k,+l m,+k,+l m,+k m,n,+l m,n m etc. Gt,x , Gt,x NR NR , Gt,xNR , Gt,xNR , Gt,xNR , ∆Gt,xNR  (2.42)  will then be defined in a similar way, where for instance Gm only refers t,xNR to replacing the processes processes.  t 0  Xsm ds, Xtm via Yt , Zt but doesn’t involve adding  34 Proposition 2.2.16. If f is a bounded Borel function on S0 , then for i ∈ NC2 ∂ c f ∞ , Pt f (x) ≤ √ √ ∂xi t t + xi and xi  c f cxi f ∞ ∂2 ≤ Pt f (x) ≤ ∂x2i t(t + xi ) t  (2.43)  ∞  .  (2.44)  If f ∈ S α , then α  1  c|f |α t 2 − 2 α ∂ Pt f (x) ≤ √ ≤ c|f |α t 2 −1 , ∂xi t + xi and xi  α ∂2 Pt f (x) ≤ c|f |α t 2 −1 . ∂x2i  Proof. 
The outline of the proof is the same as for [7], Proposition 17. Part of the proof will be presented here with its notational modifications since some care is needed when working in a multi-dimensional setting and the formulas become more involved. As in the proof of Proposition 2.2.11 we assume w.l.o.g. that f is bounded and continuous. In what follows we shall illustrate the proof of (2.44) as (2.43) is easier. Consider second derivatives in k. The representation of (Pt f )kk in Proposition 2.2.11 and symmetry allow us to write for k ∈ NC2 (i.e. l = k) (Pt f )kk (x) = E NC2 + 2E NC2 + E NC2  xNC2 , ν, ν 1{νt =0,νt =0} dN0 (ν)dN0 (ν ) ∆G+k,+k t,xNR ∆G+k,+k xNC2 , ν, ν 1{νt =0,νt >0} dN0 (ν)dN0 (ν ) t,xNR xNC2 , ν, ν 1{νt >0,νt >0} dN0 (ν)dN0 (ν ) ∆G+k,+k t,xNR  ≡ E1 + 2E2 + E3 . The idea for bounding |E1 |, |E2 | and |E3 | is similar to the one in [7]. In what follows we shall illustrate the necessary changes to bound |E3 |.  Notation 2.2.17. We have N0 [·∩{νt > 0}] = (γt)−1 Pt∗ [·] on {νt > 0}, where we used (2.25) and (2.23). Whenever we change integration w.r.t. N0 to integration (∗)  w.r.t. Pt∗ we shall denote this by = .  35 The decomposition of Lemma 2.2.9 (cf. (2.27) and (2.28)) with ρ = 0 gives (∗)  |E3 | =  c E t2  k,+k,+k xNC2 , RNt + I2 (t), SNt + X0 (t); Gt,x NR t  (2.45)  t  νs ds, νt , 0  0  νs ds, νt t  k,+k,+k xNC2 , RNt + I2 (t), SNt + X0 (t); 0, 0, − Gt,x NR k,+k,+k xNC2 , RNt + I2 (t), SNt + X0 (t); − Gt,x NR  0  νs ds, νt  t  νs ds, νt , 0, 0 0  k,+k,+k xNC2 , RNt + I2 (t), SNt + X0 (t); 0, 0, 0, 0 + Gt,x NR  × dPt∗ (ν)dPt∗ (ν ) , where for instance k,+k,+k xNC2 , RNt + I2 (t), SNt + X0 (t); Gt,x NR t  = Gt,xNR  0 i∈C \{k} j  t  t  νs ds, νt , 0  0  νs ds, νt  Xsi ds + 1{k∈Cj } (RNt + I2 (t)) t  + 0  1{k∈Cj } (νs + νs ) ds  , j∈NR  1{i=k} Xti + 1{i=k} (SNt + X0 (t)) + 1{i=k} (νt + νt )  i∈NC2  by Notation 2.2.15 and the comment following it. Recall that Rk = Rk (t) = kj=1 rj (t) and Sk = Sk (t) = kj=1 ej (t) with {rj (t)} and {ej (t)} as in Lemma 2.2.9. In particular, {ej , j ∈ N} is iid with t common law Pt∗ and rj (t) = 0 ej (s)ds. We obtain (recall the definition of Gkt,xNR from (2.42)) |E3 | =  c E Gkt,xNR xNC2 , RNt +2 + I2 (t), SNt +2 + X0 (t) t2 − 2Gkt,xNR xNC2 , RNt +1 + I2 (t), SNt +1 + X0 (t) + Gkt,xNR xNC2 , RNt + I2 (t), SNt + X0 (t)  .  Observe that in case k ∈ N2 the above notation Gkt,xNR (xNC2 , RNt + I2 (t), (k)  SNt + X0 (t)) only indicates that xt  gets changed into SNt + X0 (t); for k ∈ N2  36 t  (k)  the indicated change of 0 xs ds into RNt + I2 (t) has no impact on the term under consideration. t (i) Let w = xk /(γk0 t). The independence of Nt from ({ 0 xs ds, i ∈ Cj \{k}, j ∈ (NC2 )\{k}  NR }, xt  |E3 | =  c t2  , I2 (t), X0 (t), {el }, {rl }) yields ∞  pn (w)E Gkt,xNR xNC2 , Rn+2 + I2 (t), Sn+2 + X0 (t)  n=0  − 2Gkt,xNR xNC2 , Rn+1 + I2 (t), Sn+1 + X0 (t) + Gkt,xNR xNC2 , Rn + I2 (t), Sn + X0 (t) Sum by parts twice and use |G| ≤ f c f  ∞  1 xk t  ∞  w |pn−2 (w) − 2pn−1 (w) + pn (w)|  n=2 ∞  ≤c f  ∞  1 xk t  wp0 (w) + wp1 (w) +  ≤c f  ∞  1 xk t  2p1 (w) +  ∞  1 . xk t  ≤c f  to bound the above by  ∞  w(3p0 (w) + p1 (w)) +  pn (w)  n=2  ∞  pn (w)  n=0  .  |(w − n)2 − n| w  (w − n)2 + n w  We obtain another bound on |E3 | if we use the trivial bound |G| ≤ f (2.45). This yields |E3 | ≤ c f ∞ t−2 and so |E3 | ≤  ∞  in  c f ∞ . t(t + xk )  Combine the bounds on |E1 |, |E2 | and |E3 | to obtain (2.44). The bounds for f ∈ S α are obtained from the above just as in the proof of Proposition 2.2.13. 
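For later reference we record the elementary Poisson estimate underlying the double summation by parts above; here N_w denotes a Poisson random variable with mean w, so that E[N_w] = Var(N_w) = w:
\[
\sum_{n\ge 0} p_n(w)\,\frac{(w-n)^2+n}{w}
\;=\; \frac{E\big[(N_w-w)^2\big] + E[N_w]}{w}
\;=\; 2 .
\]
Analogous first-moment bounds, for instance
\[
\sum_{n\ge 0} p_n(u)\,\frac{|n-u|}{u} \;=\; \frac{E|N_u-u|}{u}
\;\le\; \frac{\big(E\big[(N_u-u)^2\big]\big)^{1/2}}{u} \;=\; \frac{1}{\sqrt{u}}
\]
by the Cauchy-Schwarz inequality, will be used again in the proof of Proposition 2.2.24 below.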
Recall Convention 2.1.11, as stated in (2.14), for the definition of M 0 in what follows. (j)  Notation 2.2.18. Set Jt  (j)  = γj0 2It , j ∈ NR .  Lemma 2.2.19. For each M ≥ 1, α ∈ (0, 1) and d ∈ N there is a c = c(M, α, d) > 0 such that if M 0 ≤ M , then |f g|α ≤ c|f |Cwα g  ∞  + f  ∞  |g|α  (2.46)  |g|α .  (2.47)  and fg  α≤  c  f  α Cw  g  ∞  + f  ∞  37 Proof. Compared to the proof of [7], Lemma 18, the derivation of a bound for the second error term E2 below becomes more involved. Again the triangleinequality has to be used to express multi-dimensional coordinate changes via one-dimensional ones. |N | ˜ Let xNR , xNC2 ∈ R|NR | × R+ C2 and define f(y) = f (y) − f (x). Then (2.15) gives |Pt (f g)(x) − (f g)(x)| ≤ |Pt (f˜g)(x)| + |f (x)||Pt g(x) − g(x)|  ≤ g  ∞  + f  E NC2  ∞  f˜  R|NR |  (2.48)  C2 z N R , xN t  j∈NR  |g|α tα/2 .    pJ (j) zj − xj − b0j t dzj  t  The above expectation can be bounded by three terms as follows:   C2 f˜ z NR , xN t  E NC2   j∈NR  pJ (j) zj − xj − b0j t dzj  t  (2.49)  C2 − f˜ z NR , xNC2 f˜ z NR , xN t  ≤ E NC2  + f z NR , xNC2 − f xNR + b0NR t, xNC2  + f xNR + b0NR t, xNC2 − f xNR , xNC2  ×  j∈NR  pJ (j) zj − xj − b0j t dzj t  ≡ E1 + E2 + E3 . For all three terms we shall use the triangle inequality to sum up changes in different coordinates separately. The definition of |f |α,i gives E1 ≤ ≤  i∈NC2  i∈NC2  |f |α,i E NC2 |f |α,i  (i)  xt − x i (i)  α  −α/2  xi  E NC2 xt − xi  2 α/2  (i)  ∧ xt − x i −α/2  xi  α/2  (i)  ∧ E NC2 xt − xi  2 α/4  .  We now proceed as in the derivation of a bound on E1 in the proof of Lemma 18 in [7], using Lemma 2.2.3(a) (alternatively compare with estimation of E2 below). We finally obtain E1 ≤ c  i∈NC2  |f |α,i tα/2 2α/2 ≤ c|f |Cwα tα/2 2α/2 .  38 Similarly we have E2 ≤  min  k∈NR  ¯i i:k∈R  ≤c as  k∈NR  k∈NR  −α/2  zk − (xk + b0k t)  α/2  pJ (j) zj − xj − b0j t dzj  zk − (xk + b0k t) ≤c  α  |f |α,i E NC2  j∈NR  min  |f |α,i E NC2  min  |f |α,i  ¯i i:k∈R  ¯i i:k∈R  xi  ∧  t  α/2  (k)  Jt  −α/2  α/2  (k)  E NC2 Jt  (k)  ∧ Jt  xi  −α/2  xi  α/4  α/4  (k)  ∧ E NC2 Jt  |z|β pJ (z)dz ≤ cJ β/2 for β ∈ (0, 1). Next use Lemma 2.2.3(a) which shows (k)  (k)  2 2 ≤ = γk0 2E NC2 It that E NC2 Jt l∈Ck cM (t + xl t). Put this in the above bound on E2 to see that E2 can be bounded by     α/2 α/4   −α/2   min |f |α,i  (t2 + xl t) c ∧ xi (t2 + xl t) ¯i   i:k∈R l∈Ck k∈NR l∈Ck    α/2  t2 + x l t  max xi  k∈NR  ≤ c|f |Cwα  k∈NR     ≤ c|f |Cwα t  k∈NR  α/2 α/2  2  ∧  l∈Ck i:k∈R ¯i  k∈NR  ≤ c|f |Cwα tα/2  α/4  .      2  t + t max xi  l∈Ck  α/2  t + 1 max xi  ¯i i:k∈R  ¯i i:k∈R  max xi  ∧  1+  α/4  ¯i i:k∈R          t  For the third term E3 we finally have E3 ≤  min  k∈NR  ¯i i:k∈R  ≤ c|f |Cwα  |f |α,i b0k t  b0k t  α  −α/2  xi  ∧  b0k t  α/2  α/2  k∈NR  ≤ c|f |Cwα tα/2 . Put the above bounds on E1 , E2 and E3 into (2.49) and then in (2.48) to conclude that |Pt (f g)(x) − (f g)(x)| ≤  g  ∞  c|f |Cwα + f  ∞  and so by definition of the semigroup norm |f g|α ≤ c|f |Cwα g  ∞  + f  This gives (2.46) and (2.47) is then immediate.  ∞  |g|α .  |g|α tα/2  39 Theorem 2.2.20. There exist 0 < c1 ≤ c2 such that c1 |f |Cwα ≤ |f |α ≤ c2 |f |Cwα .  (2.50)  α This implies that Cw = S α and so S α contains C 1 functions with compact support in S0 .  Proof. The idea of the proof was taken from the proof of Theorem 19 in [7]. The second inequality in (2.50) follows immediately by setting g = 1 in Lemma 2.2.19. 
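Indeed, for g \equiv 1 we have \|g\|_\infty = 1 and |g|_\alpha = 0, since the semigroup P_t is conservative and hence P_t 1 - 1 \equiv 0, so that (2.46) reduces to |f|_\alpha \le c\,|f|_{C^\alpha_w} in this case.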
For the first inequality let x, h ∈ S0 , t > 0 and use Propositions 2.2.13 and 2.2.16 to see that |f (x + h) − f (x)| (2.51) ≤ |Pt f (x + h) − f (x + h)| + |Pt f (x) − f (x)| + |Pt f (x + h) − Pt f (x)| ≤ 2|f |α tα/2 + |Pt f (x + h) − Pt f (x)|   ≤ 2|f |α tα/2 + c|f |α t  1 α 2 −2  |hj | √ + max{ t + xl } i∈N    j∈NR l∈Cj  C2   hi  √ , t + xi  where we used the triangle inequality together with hl ≥ 0, l ∈ Cj ⊂ NC2 for all j ∈ NR . √ √ −1 −1 and t + xi by By setting t = |h| and bounding maxl∈Cj { t + xl } √ −1 t we obtain as a first bound on (2.51) c|f |α |h|α/2 .  (2.52)  ¯ i such Next only consider h ∈ S0 such that there exists i ∈ NC2 and j ∈ {i} ∪ R ¯ that hj = 0 and hk = 0 if k ∈ / {i} ∪ Ri . (2.51) becomes |f (x + h) − f (x)| ≤ 2|f |α tα/2 + c|f |α t  α 1 2 −2  α  1     ≤ 2|f |α tα/2 + c|f |α t 2 − 2 √ In case xi > 0 set t = second upper bound  |h|2 xi   |hj | hi  √ +√ max{ t + xl } t + xi  ¯ i l∈Cj j:j∈R  1 |h|. t + xi  and bound −α/2  c|f |α xi  √  t + xi  |h|α .  −1  by  √ xi  −1  to get as a (2.53)  The first inequality in (2.50) is now immediate from (2.52) and (2.53) and the proof is complete. Note. Special care was needed when choosing h ∈ S0 in the last part of the proof as it only works for those h which are to be considered in the definition of | · |Cwα . Note that this was the main reason to define the weighted H¨ older norms ¯ i instead of Ri . for R  40 Remark 2.2.21. The equivalence of the two norms will prove to be crucial later in Section 2.3, where we show the uniqueness of solutions to the martingale problem MP(A,ν) as stated in Theorem 2.1.6. All the estimates of Section 2.2 are obtained in terms of the semigroup norm. In Section 2.3 we shall further need estimates on the norm of products of certain functions. At this point we shall have to rely on the result of Lemma 2.2.19 for weighted H¨ older norms. The equivalence of norms now yields a similar result in terms of the semigroup norm.  2.2.5  Weighted H¨ older bounds of certain differentiation operators applied to Pt f  The xj , j ∈ NR derivatives are much easier. Notation 2.2.22. We shall need the following slight extension of our notation for E NC2 : E NC2 = ExNNC2 = ⊗i∈NC2 Pxi i . C2 Notation 2.2.23. To ease notation let − 12  Tk  t, xNC2 ≡  min (t + xl )−1/2 , k ∈ NR ,  l∈Ck  (t + xk )−1/2 ,  k ∈ NC2 .  Proposition 2.2.24. If f is a bounded Borel function on S0 , then for all x, h ∈ S0 , j ∈ NR , i ∈ Cj and arbitrary k ∈ V , ∂ ∂ c f ∞ −1 Pt f (x + hk ek ) − Pt f (x) ≤ |hk |Tk 2 t, xNC2 3/2 ∂xj ∂xj t  (2.54)  and ∂ 2 Pt f ∂ 2 Pt f c f ∞ −1 (x + h e ) − x (x) ≤ |hk |Tk 2 t, xNC2 . k k i 2 2 3/2 ∂xj ∂xj t (2.55) If f ∈ S α , then (x + hk ek )i  ∂ α 3 ∂ −1 Pt f (x + hk ek ) − Pt f (x) ≤ c|f |α t 2 − 2 |hk |Tk 2 t, xNC2 ∂xj ∂xj  (2.56)  and (x + hk ek )i  ∂ 2 Pt f ∂ 2 Pt f α 3 −1 (x + hk ek ) − xi (x) ≤ c|f |α t 2 − 2 |hk |Tk 2 t, xNC2 . 2 2 ∂xj ∂xj (2.57)  Proof. The focus will be on proving (2.55) as (2.54) is simpler. Again, it suffices to consider f bounded and continuous. For increments in xk , k ∈ NR the statement follows as in the proof of [7], Proposition 22.  41 Consider increments in xk , k ∈ NC2 . We start with observing that for hk ≥ 0 (xi + δki hi )  ∂ 2 Pt f ∂ 2 Pt f (x + h e ) − x (x) k k i ∂x2j ∂x2j  = δki hi ExNNC2 C2 +h  k ek  + xi ExNNC2 C2 +h  ∂2 C2 G NR ItNR , xN t ∂x2j t,x k ek  − ExNNC2 C2  ∂2 C2 G NR ItNR , xN t ∂x2j t,x  ≡ E1 + E2 , by arguing as in the proof of [7], Proposition 22. The bound on E1 is derived as in that proof, using Lemmas 2.2.4(a) and 2.2.3(b). 
For E2 we use the decompositions (2.29), (2.30), (2.31) and notation from Lemma 2.2.9 with ρ = 21 . Recall the notation Gkt,xNR from (2.42) and the definition of Rk and Sk as in Notation 2.2.14. Then |E2 | = xi E  ∂2 k G N xNC2 , RNt + I2 (t) + I3h (t), SNt + X0 (t) ∂x2j t,x R −  ≤ xi E  ∂2 k G N xNC2 , RNt + I2 (t), SNt + X0 (t) ∂x2j t,x R  ∂2 k G N xNC2 , RNt + I2 (t) + I3h (t), SNt + X0 (t) ∂x2j t,x R −  + xi E  ∂2 k G N xNC2 , RNt + I2 (t), SNt + X0 (t) ∂x2j t,x R  ∂2 k G N xNC2 , RNt + I2 (t), SNt + X0 (t) ∂x2j t,x R −  ∂2 k G N xNC2 , RNt + I2 (t), SNt + X0 (t) ∂x2j t,x R  ≡ E2a + E2b . E2a can be bounded as in [7], using Lemmas 2.2.4(b) and 2.2.3(b), and the independence of xNC2 and I3h (t). Next turn to E2b . Recall that Sn = Sn (t) = n n −w k w /k!. In the first term l=1 el (t), Rn = Rn (t) = l=1 rl (t) and pk (w) = e of E2b we may condition on Nt as it is independent from the other random variables and in the second term we do the same for Nt . Thus, if w = w + γh0kt k  42 and w =  xk , 2γk0 t  then by Lemma 2.2.4(a) and Lemma 2.2.3(b),  E2b ∞  = xi  n=0 ∞  ≤ cxi  ×  w  pn (u)du  w  n=0     E NC2     min ∞  xi  −1  (j)  It  f  ∞  w  n=0  w  ∞  , t 0  E NC2  i∈Cj \{k}  ≤ c f  ∂2 k G N xNC2 , Rn + I2 (t), Sn + X0 (t) ∂x2j t,x R  (pn (w ) − pn (w)) E  t 0  X0 (s)ds  −1   , k ∈ Cj    l∈Cj  t 0  ∧E  pn (u)du t−1 min (t + xl )−1 , xk 2  where we used that X0 starts at E  −1  (i)  xs ds    k∈ / Cj ,    X0 (s)ds  −1  and thus by Lemma 2.2.3(b)  ≤ ct−1 t +  xk 2  −1  ≤ ct−1 (t + xk )−1 .  We therefore obtain with i ∈ Cj ∞  w  E2b ≤ c f  ∞  xi w  n=0 w  ≤c f ∞ n=0  where we used ∞ n=0  ∞  pn (u)  w  1 √ du u  = pn (u) |n−u| u  1 u E|N  ∞ n=0  pn (u) |n−u| u  ≤ pn (u) nu + 1 = distributed with parameter u. Hence E2b ≤ c f As  √1 xk  ∧  2 √ t  ∞  (w − w)  |n − u| dut−1 (t + xi )−1 u w  ∧  w  − u| ≤  E|N | u  t−1 ,  2du 1 u  E|N − u|2 =  √1 u  and  + 1 = 2 with N being Poisson  1 √ ∧ 2 t−1 = c f w  ∞  hk t  √  √  t ∧ 2 t−1 . xk  1 ≤ c √t+x we finally get k  E2b ≤ c f  ∞  t−3/2 hk (t + xk )−1/2 .  The bounds (2.56) and (2.57) can be derived from the first two by an argument similar to the one used in the proof of Proposition 2.2.13 (alternatively refer to the end of the proof of Proposition 22 in [7]). In what follows recall Notation 2.2.23.  43 Proposition 2.2.25. If f is a bounded Borel function on S0 , then for all x, h ∈ S0 , i ∈ NC2 and arbitrary k ∈ V , c f ∞ ∂ ∂ −1 Pt f (x + hk ek ) − Pt f (x) ≤ |hk |Tk 2 t, xNC2 3/2 ∂xi ∂xi t  (2.58)  and ∂ 2 Pt f c f ∞ ∂ 2 Pt f −1 |hk |Tk 2 t, xNC2 . (x + h e ) − x (x) ≤ k k i 2 2 3/2 ∂xi ∂xi t (2.59) If f ∈ S α , then (x + hk ek )i  3 α ∂ ∂ −1 Pt f (x + hk ek ) − Pt f (x) ≤ c|f |α t 2 − 2 |hk |Tk 2 t, xNC2 ∂xi ∂xi  and (x + hk ek )i  α 3 ∂ 2 Pt f ∂ 2 Pt f −1 (x + h e ) − x (x) ≤ c|f |α t 2 − 2 |hk |Tk 2 t, xNC2 . k k i ∂x2i ∂x2i  Proof. Proposition 2.2.25 is an extension of Proposition 23 in [7]. The last two inequalities follow from the first two by an argument similar to the one used in the proof of Proposition 2.2.13 (alternatively refer to the end of the proof of Proposition 22 in [7]). As the proof of (2.58) is similar to, but much easier than, that of (2.59), we only prove the latter. As usual we may assume f is bounded and continuous. (X, ν, ν ) from (2.34). 
Proposition 2.2.11 gives Recall the notation ∆G+i,+i t,xNR 4  E NC2 ∆n Gt,xNR xNC2  (Pt f )ii (x) =  ,  (2.60)  n=1  where ∆1 Gt,xNR (X) ≡  (X, ν, ν )1{νt =νt =0} dN0 (ν)dN0 (ν ), ∆G+i,+i t,xNR  ∆2 Gt,xNR (X) ≡  (X, ν, ν )1{νt >0,νt =0} dN0 (ν)dN0 (ν ), ∆G+i,+i t,xNR  ∆3 Gt,xNR (X) ≡  (X, ν, ν )1{νt =0,νt >0} dN0 (ν)dN0 (ν ) ∆G+i,+i t,xNR  and ∆G+i,+i (X, ν, ν )1{νt >0,νt >0} dN0 (ν)dN0 (ν ) t,xNR  ∆4 Gt,xNR (X) ≡ (∗)  =  c t2  (X, ν, ν )1{νt >0,νt >0} dPt∗ (ν)dPt∗ (ν ). ∆G+i,+i t,xNR  44 Let us consider first the increments in xk , k ∈ NC2 . Increments in xk , k ∈ NR will follow at the end of this section in Lemma 2.2.30. Let hk ≥ 0 and use (2.60) to obtain |(x + hk ek )i (Pt f )ii (x + hk ek ) − xi (Pt f )ii (x)|  (2.61)  4  ≤  xi ExNNC2 C2 +h n=1  k ek  − ExNNC2 C2  ∆n Gt,xNR xNC2  + hk |(Pt f )kk (x + hk ek )| . The last term on the right hand side can be bounded via (2.44) as follows: hk |(Pt f )kk (x + hk ek )| ≤ hk  c f ∞ ≤c f t(t + xk )  ∞  hk t−3/2 (t + xk )−1/2 ,  where we used hk ≥ 0. In the following Lemmas 2.2.26, 2.2.27 and 2.2.29 we again use the decompositions from Lemma 2.2.9 with ρ = 21 to bound the first four terms in (2.61). Lemma 2.2.26. For k ∈ NC2 (and i ∈ NC2 ) we have xi ExNNC2 C2 +h  k ek  − ExNNC2 C2  ∆1 Gt,xNR xNC2  ≤  c f ∞ hk . t3/2 (t + xk )1/2  Proof. This Lemma corresponds to Lemma 24 in [7]. In [7] one considered ∆G+i,+i (·) as a second order difference, thus obtaining terms involving (t+ xi )−2 . In our setting this method will not work for i = k as we do in fact need terms of the form (t + xi )−1 (t + xk )−1 . Instead, we shall bound the left hand side by reasoning as for the E2 -term in Proposition 22 of [7] (part of the proof can be ∂2 found in this paper in the proof of Proposition 2.2.24), but with ∂x 2 G(·), j ∈ NR j  replaced by ∆G+i,+i (·), i ∈ NC2 .  Lemma 2.2.27. For k ∈ NC2 (and i ∈ NC2 ) and n = 2, 3 we have xi ExNNC2 C2 +h  k ek  − ExNNC2 C2  ∆n Gt,xNR xNC2  ≤  c f ∞ hk . + xk )1/2  t3/2 (t  (2.62)  Proof. By symmetry we only need to consider n = 2. As before let w = n  n  xk , 2γk0 t h I3 (t)  w = w + γh0kt , Sn = l=1 el (t) and Rn = l=1 rl (t). Let Qh be the law of k as defined after (2.31). As this random variable is independent of the others appearing below we may condition on it and use (2.29), (2.30) and (2.31) to  45 conclude xi ExNNC2 C2 +h  k ek  ∆2 Gt,xNR xNC2 k,+i,+i xNC2 , I2 (t) + z + RNt , X0 (t) + SNt ; Gt,x NR  = xi E  t  t  νs ds, νt , 0 k,+i,+i xNC2 , I2 (t) + z + RNt , X0 (t) + SNt ; 0, 0, − Gt,x NR k,+i,+i xNC2 , I2 (t) + z + RNt , X0 (t) + SNt ; − Gt,x NR  νs ds, 0  0 t 0  νs ds, 0  t  νs ds, νt , 0, 0 0  k,+i,+i + Gt,x xNC2 , I2 (t) + z + RNt , X0 (t) + SNt ; 0, 0, 0, 0 NR  × 1{νt >0} 1{νt =0} dN0 (ν)dN0 (ν )dQh (z) . When working under ExNNC2 there is no I3h (t) term. Hence we obtain the same C2 formula with z replaced by 0 and Nt replaced by Nt . The difference of these terms can be bounded by a difference dealing with the change from z to 0 and the change from Nt to Nt separately. For the second term we recall that pn (u) = e−u un /n! and observe that Nt is independent of the other random variables. Hence we may condition on its value to see that the l.h.s. 
of (2.62) is at most k,+i,+i ∆Gt,x xNC2 , I2 (t) + z + RNt , X0 (t) + SNt ; NR  xi E  t  t  νs ds, 0  νs ds, νt , 0  0 k,+i,+i xNC2 , I2 (t) + RNt , X0 (t) + SNt ; − ∆Gt,x NR  t  t  νs ds, νt , 0  0  νs ds, 0  × 1{νt >0} 1{νt =0} dN0 (ν)dN0 (ν )dQh (z) + xi  ∞ n=0  (pn (w ) − pn (w))E  k,+i,+i xNC2 , ∆Gt,x NR t  I2 (t) + Rn , X0 (t) + Sn ;  t  νs ds, νt , 0  0  νs ds, 0  × 1{νt >0} 1{νt =0} dN0 (ν)dN0 (ν ) ≡ Ea + Eb . The first term can be rewritten as the sum of two second order differences t (one in z, one in 0 νs ds). Together with Lemma 2.2.6, Lemma 2.2.4(b) and  46 Lemma 2.2.3(b) we therefore obtain (terms including empty sums are again understood as being zero) Ea ≤ 2xi c f  ∞  ¯ i j2 :j2 ∈R ¯k j1 :j1 ∈R   −1 (j )   It 1 ,  E   min  t 0  m∈Cj1 \{k}  (m)  xs  t  ×  0  ≤ xi c f ≤ c f  ∞  X0 (s)ds  ds  −1  −1  t 0  ∧  X0 (s)ds  −1  t 0   k∈ / C j1 ,     , k ∈ C j1   νs dsdN0 (ν )N0 [νt > 0]  zdQh (z)  t−2 (t + xi )−1 (t + xk )−1 tt−1 hk t  hk t−3/2 (t + xk )−1/2 .  ∞  Turning to Eb observe that we have the sum of two first order differences t (both in 0 νs ds). Together with the triangle inequality, Lemma 2.2.4(b) and Lemma 2.2.3(b) we therefore obtain Eb ≤ cxi  ∞  w w  n=0  pn (u)du   −1 (j )   It 1 ,  E   min m∈Cj1 \{k}  t 0  f  (m)  xs  ∞  ds  ¯i j1 :j1 ∈R  −1  ∧  t  ×  ≤ cxi  0 ∞ n=0  νs dsdN0 (ν )N0 [νt > 0] w  w  pn (u)du  f  ∞    k∈ / C j1 ,   −1  t  , k ∈ C j1  X0 (s)ds 0  t−1 (t + xi )−1 tt−1 .  Now proceed again as in the estimation of E2b in the proof of Proposition 2.2.24 to get Eb ≤ cxi t−1/2 hk (t + xk )−1/2 f ≤c f  ∞  ∞  t−1 (t + xi )−1 tt−1  hk t−3/2 (t + xk )−1/2 .  The above bounds on Ea and Eb give the required result. Notation 2.2.28. Let m,n=m Gt,x (X, Yt , Zt , Yt , Zt ) NR  ≡  (X, Yt , Zt , Yt , Zt ) Gm,n t,xNR  if n = m  (X, Yt , Zt ) Gm t,xNR  if n = m.  X, Yt , Zt , Yt , Zt ; Expressions such as Gm,n=m,+k,+l t,xNR will be defined similarly.  t 0  ηs ds, θt ,  t 0  ηs ds, θt  47 Lemma 2.2.29. For k ∈ NC2 (and i ∈ NC2 ) we have xi ExNNC2 C2 +h  k ek  − ExNNC2 C2  ∆4 Gt,xNR xNC2  ≤  c f ∞ hk . t3/2 (t + xk )1/2  Proof. Let ExNNC2 C2 +h  E ≡ xi  k ek  − ExNNC2 C2  ∆4 Gt,xNR xNC2  .  (2.63)  We use the same setting and notation as in Lemma 2.2.27. Proceeding as in the estimation of the l.h.s. in (2.62), thereby not only decomposing x(k) but also x(i) (the respective parts of the decomposition of x(k) and x(i) are designated via upper indices k resp. i and are independent for k = i), we have xi ExNNC2 C2 +h  k ek  ∆4 Gt,xNR xNC2 (k)  xNC2 , I2 (t) + z + R ∆Gk,i=k,+i,+i t,xNR  = xi E +S  (k) Nt  (i)  (k)  , I2 (t) + R  (i)  (i) (i) Nt  , X0 (t) + S  (i) (i) Nt  (k) Nt  (k)  (k)  , X0  t  t  νs ds, νt ,  ;  (t)  0  0  νs ds, νt  × 1{νt >0} 1{νt >0} dN0 (ν)dN0 (ν )dQh (z) . Now let for k = i ˆ n (z) ≡ E Gk N xNC2 , I (k) (t) + z + Rn(k) , X (k) (t) + Sn(k) G 0 2 t,x R  ,  respectively for k = i, ˆ n z, Nt (k) ≡ E Gk,i N xNC2 , I (k) (t) + z + R(k)(k) , X (k) (t) + S (k)(k) , G 0 2 t,x R Nt  Nt  (i) I2 (t)  +  (i) Rn(i) , X0 (t)  +  Sn(i)  .  ˆ n z, Nt (k) excludes the random Note that the expectation in the definition of G variable Nt  (k)  . Use w  xk ExNNC2 C2 +h (∗)  = c  xk t2  and use w(i) = xi ExNNC2 C2 +h (∗)  =c  xi t2  =  xk 2γk0 t  +  hk γk0 t  (i.e. ρ = 1/2) to obtain for k = i  ∆4 Gt,xNR xNC2  pn w  (2.64)  ˆ n+2 − 2G ˆ n+1 + G ˆ n (z)dQh (z), G  (k)  n=0  k ek  n=0  k ek  ∞  xi 2γi0 t  ∞  (k)  to obtain for k = i ∆4 Gt,xNR xNC2  pn w(i) E  ˆ n+2 − 2G ˆ n+1 + G ˆn G  (2.65) z, Nt  (k)  dQh (z) .  
48 A similar argument holds for xi ExNNC2 ∆4 Gt,xNR xNC2 . Indeed, if k = i C2 replace z by 0 and replace w (k) by w(k) = 2γxk0 t in (2.64). If k = i replace z by k  (k)  (k)  0 and replace Nt by Nt in (2.65). Let us first investigate the case k = i. Define ˆ n (z) = G ˆ n (z) − G ˆ n (0) H to get for E as in (2.63), E≤c  ∞  xk t2  pn w  ˆ n+2 − 2H ˆ n+1 + H ˆ n (z)dQh (z) H  (k)  n=0  +c  xk t2  ∞  pn w  (k)  n=0  ˆ n+2 − 2G ˆ n+1 + G ˆ n (0) G  − pn w(k)  ≡ E1 + E2 . We can bound E1 by c  xk t2  ∞ n=0  (pn−2 − 2pn−1 + pn ) w  (k)  sup n≥0  ˆ n (z)dQh (z) , H ∞ n=0  where pn (w) ≡ 0 if n < 0. By using qn (w) = wpn (w) and qn )(w)| ≤ 2 (see [7], (109)) we obtain E1 ≤ c  xk 1 sup t2 w (k) n≥0  |(qn−2 −2qn−1 +  ˆ n (z)dQh (z) . H  ˆ n (z) is zero for k ∈ N2 (recall that for k ∈ N2 the indicated Next observe that H t (k) (k) (k) (k) (k) change from 0 xs ds into I2 (t) + z + Rn resp. I2 (t) + Rn has no impact on the terms under consideration) and is a first order difference for k ∈ NC for which we obtain as usual ˆ n (z)dQh (z) ≤ c f H  ∞  ≤c f  ∞  ≤c f  Together with w (k) =  xk 2γk0 t  and w  E1 ≤ c f  (k)  ∞  =  ∞  t−1 (t + xk )−1  zdQh (z)  t−1 (t + xk )−1 hk t hk t−1/2 (t + xk )−1/2 .  xk 2γk0 t  +  hk γk0 t  this gives  hk t−3/2 (t + xk )−1/2 .  49 ∞≤  For E2 we obtain with G E2 ≤ c f  ∞  ∞  xk t2  n=0  f  ∞  and Fubini’s theorem  (pn−2 − 2pn−1 + pn ) w  (k)  (2.66)  − (pn−2 − 2pn−1 + pn ) w(k) ≤c f  ∞  w  xk t2  ∞  (k)  w (k)  n=0  pn−2 − 2pn−1 + pn (u) du.  n  As pn (u) = e−u un! we have pn (u) = −pn (u) + pn−1 (u) and thus we obtain in case 0 < u < 1 for the integrand ∞  pn−2 − 2pn−1 + pn (u) ≤ 8.  n=0  For u ≥ 1 we obtain for the integrand as an upper bound p0 (u) + p1 (u) 3 ≤e  −u  ∞  n(n − 1)(n − 2) n(n − 1) n 1 −1 + pn (u) −3 +3 −1 3 2 u u u u n=2  1 (1 + 3 + u) + 3 u  ≤ e−u (4 + u) +  1 u3  ∞ n=2  pn (u) (n − u)3 − 3n(n − u) + 2n 2  3  E|Nu − u| + 3 ENu2 E(Nu − u) + 2ENu ,  where Nu is Poisson with mean u. Note that E|Nu − u|m ≤ cm um/2 for m ∈ N and u ≥ 1. We also have ENu = u and ENu2 = u2 + u. This yields as an upper bound for the integrand in (2.66) for u ≥ 1 3  cu− 2 +  1 3 c3 u3/2 + 3 (u2 + u) c2 u1 + 2u ≤ cu− 2 . u3  We thus get for E2 w  E2 ≤ c f  ∞  xk t2  ≤c f  ∞  xk w t2  ≤c f  ∞  xk h k t2 t  ≤c f  ∞  (k)  w (k) (k)  − w(k)  xk + t 2γk0 t  hk t−3/2 (t + xk )  −3/2  1 2γk0  u+  w(k) +  du 1 2γk0  −3/2  −3/2  −1/2  .  Together with the bound on E1 the assertion now follows for k = i.  50 Next investigate the case k = i. Define ˆ n 0, N (k) , ˆ n z, N (k) − G ˆ 1 z, N (k) = G H t t t n ˆ n 0, Nt(k) ˆ n2 Nt (k) , Nt(k) = G ˆ n 0, Nt (k) − G H to get E≤c +  ∞  xi t2  pn w(i) E  n=0 ∞ xi pn c 2 t n=0  1 1 ˆ n+2 ˆ n+1 ˆ n1 H − 2H +H  ˆ 2 − 2H ˆ2 + H ˆ2 H n+2 n+1 n  w(i) E  z, Nt Nt  (k)  (k)  dQh (z) (k)  dQh (z) .  , Nt  ˆ n z, N (k) and thus of Recall that the expectation in the definition of G t ˆ 1 z, N (k) H t n  excludes the random variable Nt  expectation w.r.t. Nt xi E≤c 2 t  ∞ n=0  (k)  (k)  . To bound E we thus take  , too. Rewriting this yields  (pn−2 − 2pn−1 + pn ) w(i)  × sup E n≥0  ˆ 1 z, N (k) dQh (z) H t n  and by using qn (w) = wpn (w) and obtain E≤c  xi 1 sup E t2 w(i) n≥0 +E  ∞ n=0  +E  ˆ 2 N (k) , N (k) dQh (z) H t t n  |(qn−2 − 2qn−1 + qn )(w)| ≤ 2 again we  ˆ n1 z, Nt (k) dQh (z) H ˆ 2 N (k) , N (k) dQh (z) H t t n  .  ˆ n1 z, Nt (k) is zero for k ∈ N2 and is a first order difference Next observe that H for k ∈ NC for which we obtain ˆ n1 z, Nt (k) dQh (z) ≤ c f H  ∞  ≤c f  ∞  t−1 (t + xk )−1  zdQh (z)  hk t−1/2 (t + xk )−1/2 .  
51 The other term can be bounded as follows: ˆ n2 Nt (k) , Nt(k) H  ≤  ∞  pN w  (k)  N =0  − pN w(k) (k)  (k)  (k)  × E Gk,i xNC2 , I2 (t) + RN , X0 t,xNR (i)  (k)  (t) + SN ,  (i)  I2 (t) + Rn(i) , X0 (t) + Sn(i) ≤  ∞  (k)  N =0  where w(k) = 2γxk0 t and w k Proposition 2.2.24 we use ∞  w  N =0  w (k)  to finally get with G  pN w (k)  =  xk 2γk0 t  − pN w(k) +  hk . γk0 t  G  ∞,  As done before in the proof of  (k)  ∞≤  pN (u)du ≤ ct−1/2 hk (t + xk )−1/2 f  ∞  ˆ 2 N (k) , N (k) dQh (z) ≤ ct−1/2 hk (t + xk )−1/2 f H n t t  ∞  .  Plugging our results into our estimate for E we get E≤c  xi t c f t 2 xi  ∞  hk t−1/2 (t + xk )−1/2 ≤ c f  ∞  hk t−3/2 (t + xk )−1/2 ,  which proves our assertion. Finally we consider the increments in xk , k ∈ NR . Lemma 2.2.30. If f is a bounded Borel function on S0 , then for all x, h ∈ S0 , i ∈ NC2 and k ∈ NR xi  ∂ 2 Pt f ∂ 2 Pt f c f ∞ (x + h e ) − x (x) ≤ |hk | min (t + xl )−1/2 . k k i l∈Ck ∂x2i ∂x2i t3/2  Proof. Except for the necessary adaptations, already used in the proofs of the preceding assertions, the proof proceeds analogously to Lemma 27 in [7]. Continuation of the proof of Proposition 2.2.25. Use Lemmas 2.2.26, 2.2.27 and 2.2.29 in (2.61) together with the calculation following (2.61) to obtain the bound for increments in xk , k ∈ NC2 . Lemma 2.2.30 gives the corresponding bound for increments in xk , k ∈ NR which completes the proof of (2.59).  52  2.3  Proof of Uniqueness  As in Section 3, [7], it is relatively straightforward to use the results from the previous sections on the semigroup Pt to prove bounds on the resolvent Rλ of Pt . We shall then use these bounds to complete the proof of uniqueness of solutions to the martingale problem MP(A,ν) satisfying Hypothesis 2.1.1 and 2.1.2, where ν is a probability on        S = x ∈ Rd+ : xi + x j  > 0   j∈R  (recall (2.3) and Lemma 2.1.5) and   Af (x) =  j∈R  γj (x)   i∈Cj  i∈Cj  xi  xj fjj (x) +  γj (x)xj fjj (x) +  bj (x)fj (x). j∈V  j ∈R /  (2.67) The proof of uniqueness is identical to the one in [7] except for minor changes such as the replacement of xcj by i∈Cj xi at the appropriate places. Note in particular the change in the definition of the state space S. In what follows we shall give a sketch of the proofs and indicate where statements have to be modified. For explicit calculations the reader is referred to [7], Sections 3 and 4. Notation 2.3.1. For i ∈ NC2 let y¯i = {yj }j∈R¯i , yi , y¯i e¯i =  ¯i j∈R  ¯ i = R|R¯ i | × R+ , yj ej + yi ei and R  (2.68)  ¯ i = ∅. For where we understand this to be y¯i = (yi ) in case i ∈ N2 , i.e. R 2 f ∈ Cb (S0 ) let ∂f = ∂x ¯i  ∂ f ∂xj  and  , ¯i j∈R  ∂f ∂x ¯i  ∂ f ∂xi  = sup ∞  ,  ∂f = ∂x ¯i  ¯i j∈R  ∂ ∂ f + f ∂xj ∂xi  ∂f (x) : x ∈ S0 , ∂x ¯i  (2.69)  (2.70)  where S0 = {x ∈ Rd : xi ≥ 0 for all i ∈ NC2 } as defined in (2.9). Also introduce   2 2 ∂ ∂ ∆i f =  xi 2 f , xi 2 f  . ∂xj ∂xi ¯ j∈Ri  Define |∆i f | and ∆i f  ∞  similarly to (2.69) and (2.70).  53 With the help of these notations A0 (see (2.6)) can be rewritten to    A0 f (x) =  b0j fj (x) +  j∈V  j∈NR  b0 i ,  = i∈NC2  γj0   ∂f (x) + ∂x ¯i  i∈Cj  xi  fjj (x) +  γi0 xi fii (x)  (2.71)  i∈NC2  γ 0 i , ∆i f (x) ,  i∈NC2  where ·, · denotes the standard scalar product in Rk , k ∈ N. To prevent over¯ i1 ∩ R ¯ i2 = ∅ for i1 = i2 , i1 , i2 ∈ NC (see also definition (2.68)) counting in case R 0 the vector b i was replaced by b0 i in the above formula, where b0 i has certain coordinates set to zero so that the above equality holds. The same applies to the vector γ 0 i . 
The details are left to the interested reader. α Theorem 2.3.2. There is a constant c such that for all f ∈ Cw (S0 ), λ ≥ 1 and k ∈ NC2 ,  (a) (b)  ∂Rλ f + ||∆k Rλ f ||∞ ≤ cλ−α/2 |f |Cwα . ∂x ¯k ∞ ∂Rλ f + |∆k Rλ f |Cwα ≤ c|f |Cwα . ∂x ¯k C α w  Note. This result is slightly weaker than the corresponding Theorem 34 in [7] as |f |α,k is replaced by |f |Cwα in (a). Proof. Firstly we obtain a result similar to Proposition 30 in [7]. This is an easy consequence of Proposition 2.2.13 and Proposition 2.2.16, using the equivalence of norms shown in Theorem 2.2.20 and states that there is a constant c such that α (a) For all f ∈ Cw (S0 ), t > 0, x ∈ S0 , and i ∈ NC2 , ∂Pt f (x) ≤ c|f |Cwα tα/2−1/2 (t + xi )−1/2 ≤ c|f |Cwα tα/2−1 , ∂x ¯i  (2.72)  and ∆ i Pt f  ∞≤  c|f |Cwα tα/2−1 .  (2.73)  (b) For all f bounded and Borel on S0 and all i ∈ NC2 , ∂Pt f ∂x ¯i  ∞  ≤c f  ∞  t−1 .  α Note in particular that Theorem 2.2.20 gave Cw = S α and that every function α in Cw (S0 ) is by definition bounded. Secondly, an easy consequence of Propositions 2.2.24, 2.2.25 and the triangle inequality, using the equivalence of norms shown in Theorem 2.2.20 and the equivalence of the maximum norm and Euclidean norm of finite dimensional  54 vectors, is a result similar to Proposition 32, [7]: There is a constant c such that α ¯i ∈ R ¯ i, for all f ∈ Cw (S0 ), i, k ∈ NC2 and h (a) ∂Pt f ∂Pt f ¯i , x+¯ hi e¯i − (x) ≤ c|f |Cwα t−3/2+α/2 (t + xi )−1/2 h ∂x ¯k ∂x ¯k  (2.74)  (b) ¯ i . (2.75) ∆k (Pt f ) x + ¯ hi e¯i − ∆k (Pt f )(x) ≤ c|f |Cwα t−3/2+α/2 (t + xi )−1/2 h ∞  Finally recall that Rλ f (x) = 0 e−λt Pt f (x)dt is the resolvent associated with Pt . Now the remainder of the proof works as in the proof of Theorem 34 in [7]: Part (a) of Theorem 2.3.2 is obtained by integrating (2.72) resp. (2.73) over time. Part (b) follows by integrating (2.72) resp. (2.73) over the timeinterval from zero to some fixed value t˜ > 0 and (2.74) resp. (2.75) over the time interval from t˜ to infinity. Appropriate choices for t˜ now yield the required bounds. Here the choices of t˜ are in fact easier due to the replacement of | · |α,i in [7] by | · |Cwα . Proof of Theorem 2.1.6. The existence of a solution to the martingale problem for MP(A, ν) follows by standard methods (a result of Skorokhod yields existence of approximating solutions, then use a tightness-argument), e.g. see the proof of Theorem 1.1 in [1]. Note in particular that Lemma 2.1.5 ensures that solutions remain in S ⊂ Rd+ . The uniform boundedness in M of the term E  i  XTM,i  that appears in the proof of Theorem 1.1 in [1] can easily be re-  M,i 2 via a Gronwallplaced by the uniform boundedness in M of E i∈V (XT ) type argument. At the end of this section we shall reduce the proof of uniqueness to the following theorem. The theorem investigates uniqueness of a perturbation of the operator A0 as defined in (2.6) (also refer to (2.71)) with coefficients satisfying (2.7) and (2.8). A0 is the generator of a unique diffusion on S x0 given by (2.9) with semigroup Pt and resolvent Rλ given by (2.11). For the definition of M 0 refer to (2.14). In what follows x0 ∈ S will be arbitrarily fixed.  Theorem 2.3.3. Assume that  ˜ (x) = Af  j∈NR  +  γ˜j (x)   i∈Cj    xi  fjj (x)  γ˜j (x)xj fjj (x) +  j∈V  j∈NC2  (2.76) ˜bj (x)fj (x), x ∈ S x0 ,  where ˜bk : S x0 → R and γ˜k : S x0 → (0, ∞), d  ˜= Γ k=1  ||˜ γk ||Cwα + ˜bk  α Cw  < ∞.  55 Let  d  γ˜k − γk0  ˜0 = k=1  ∞  + ˜bk − b0k  ∞  ,  where b0k , γk0 , k ∈ V satisfy (2.7). Let Bf = (A˜ − A0 )f . 
˜ ≥ 0 such that (a) There exists 1 = 1 (M 0 ) > 0 and λ1 = λ1 (M 0 , Γ) α α if ˜0 ≤ 1 and λ ≥ λ1 then BRλ : Cw → Cw is a bounded operator with BRλ ≤ 1/2. (b) If we assume additionally that γ˜k and ˜bk are H¨ older continuous of index α ∈ (0, 1), constant outside a compact set and ˜bk |{xk =0} ≥ 0 for all k ∈ V \NR , ˜ ν) has a unique solution for each probability then the martingale problem MP(A, ν on S x0 . ˜ λ be the associated resolvent operator of the perturbation operator Proof. Let R α ˜ Using the definition B = A˜ − A0 and recalling (2.71) we get for f ∈ Cw A. that BRλ f  α≤ Cw  i∈NC2  ˜b(x) − b0  , i  γ˜(x) − γ 0  + i∈NC2  ∂Rλ f (x) ∂x ¯i i  α Cw  .  , ∆i Rλ f (x) α Cw  Using (2.46) (recall in particular the discussion on the reasons for using two different norms from Remark 2.2.21) we obtain for instance for arbitrary i ∈ NC ¯i and j ∈ R ˜bj (x) − b0 ∂Rλ f (x) j ∂xj ≤c  ˜bj (x) − b0 j  ≤c  ˜+M Γ  0  α Cw  λ  −α/2  α Cw  ∂Rλ f (x) ∂xj  ∞  + ˜bj (x) − b0j  ∞  ∂Rλ f (x) ∂xj  α  |f |Cwα + ˜0 |f |Cwα  by Theorem 2.3.2, (2.50) and the assumptions of this theorem. By arguing similarly for the other terms we get indeed BRλ f Cwα ≤ 21 f Cwα for λ big enough thus finishing the proof of part (a). For part (b) we proceed as in the proof of [7], Theorem 37. The proof of Theorem 37 in [7] involves the proof of Lemma 38 in [7], where one shows that α for f ∈ Cw ˜ λ f = Rλ f + R ˜ λ BRλ f. R (2.77) Note that the proof of Lemma 38 relies amongst others on an estimate, derived in Corollary 33 of [7], which we now obtain for free in Proposition 2.2.11 as we treated all vertices in one step only. The proof of Theorem 37 now concludes as follows. Iteration of (2.77) yields ˜ λ f (x) = R  ∞ n=0  Rλ ((BRλ )n f )(x).  56 Using BRλ  α≤ Cw  1/2 from part (a) and f  λ Rλ ((BRλ )n f )  ∞≤  (BRλ )n f  ∞≤  ∞≤  f  α Cw  (BRλ )n f  we get α≤ Cw  2−n f  α Cw  .  Thus the series converges uniformly and the error term approaches zero. The ˜ ν) now follows from the uniqueness of its resolvents R ˜λ . uniqueness of MP(A, Continuation of the proof of Theorem 2.1.6. Recall “Step 1: Reduction of the problem”, in Subsection 2.1.5. The remainder of the proof of uniqueness of M P (A, δx0 ) works analogously to [7] (compare the proof of Theorem 4 on pp. 380-382 in [7]) except for minor changes, making again use of Lemma 2.1.5. The main step consists in using a localization argument of [13] (see e.g. the argument in the proof of Theorem 1.2 of [4]), which basically states that it is ˜ δx0 ) has a unique enough if for each x0 ∈ S the martingale problem M P (A, ˜ solution, where bi = bi and γi = γ˜i agree on some neighbourhood of x0 . By comparing the definition of A (see (2.67)) and A˜ (see (2.76)) one chooses ˜bk (x) = bk (x) for all k ∈ V, γ˜j (x) = xj γj (x) for j ∈ NR ,   γ˜j (x) =   i∈Cj  xi  γj (x) for j ∈ R\NR  γ˜j (x) = γj (x) for j ∈ / R. By setting  b0k ≡ ˜bk (x0 ) and γk0 ≡ γ˜k (x0 )  and choosing ˜bk and γ˜k in appropriate ways, the assumptions of Theorem 2.3.3(a), (b) will be satisfied in case b0k ≥ 0 for all k ∈ N2 (and hence by Hypothesis 2.1.2 for all k ∈ NC2 ). In particular the boundedness and continuity of the coefficients of A˜ will allow us to choose ˜0 arbitrarily small. In case there exists k ∈ N2 such that b0k < 0 a Girsanov argument as in the proof of Theorem 1.2 of [4] allows the reduction of the latter case to the former case.  57  Bibliography [1] Athreya, S.R. and Barlow, M.T. and Bass, R.F. and Perkins, E.A. Degenerate stochastic differential equations and super-Markov chains. Probab. 
Theory Related Fields (2002) 123, 484–520. MR1921011 [2] Athreya, S.R. and Bass, R.F. and Perkins, E.A. H¨ older norm estimates for elliptic operators on finite and infinite-dimensional spaces. Trans. Amer. Math. Soc. (2005) 357, 5001–5029 (electronic). MR2165395 [3] Bass, R.F. Diffusions and Elliptic Operators. Springer, New York, 1998. MR1483890 [4] Bass, R.F. and Perkins, E.A. Degenerate stochastic differential equations with H¨ older continuous coefficients and super-Markov chains. Trans. Amer. Math. Soc. (2003) 355, 373–405 (electronic). MR1928092 [5] Bass, R.F. and Perkins, E.A. Degenerate stochastic differential equations arising from catalytic branching networks. Electron. J. Probab. (2008) 13, 1808–1885. MR2448130 [6] Dawson, D.A. and Greven, A. and den Hollander, F. and Sun, R. and Swart, J.M. The renormalization transformation for two-type branching models. Ann. Inst. H. Poincar´e Probab. Statist. (2008) 44, 1038–1077. MR2469334 [7] Dawson, D.A. and Perkins, E.A. On the uniqueness problem for catalytic branching networks and other singular diffusions. Illinois J. Math. (2006) 50, 323–383 (electronic). MR2247832 [8] Eigen, M. and Schuster, P. The Hypercycle: a principle of natural selforganization. Springer, Berlin, 1979. [9] Hofbauer, J. and Sigmund, K. The Theory of Evolution and Dynamical Systems. London Math. Soc. Stud. Texts, vol. 7, Cambridge Univ. Press, Cambridge, 1988. MR1071180 [10] Mytnik, L. Uniqueness for a mutually catalytic branching model. Probab. Theory Related Fields (1998) 112, 245–253. MR1653845 [11] Perkins, E.A. Dawson-Watanabe superprocesses and measure-valued diffusions. Lectures on Probability Theory and Statistics (Saint-Flour, 1999), 125–324, Lecture Notes in Math., 1781, Springer, Berlin, 2002. MR1915445 [12] Rogers, L.C.G. and Williams, D. Diffusions, Markov Processes, and Martingales, vol. 2, Reprint of the second (1994) edition. Cambridge Mathematical Univ. Press, Cambridge, 2000. MR1780932 [13] Stroock, D.W. and Varadhan, S.R.S. Multidimensional Diffusion Processes. Grundlehren Math. Wiss., vol. 233, Springer, Berlin-New York, 1979. MR532498  58  Chapter 3  Long-term Behaviour of a Cyclic Catalytic Branching System1 3.1 3.1.1  Introduction Basics  In this paper we investigate the long-term behaviour of the following system of stochastic differential equations (SDEs) for d ≥ 2: d  dXti  =  2γ i Xti Xti+1 dBti  + j=1  Xtj qji dt, i ∈ {1, . . . , d},  (3.1)  where Xtd+1 ≡ Xt1 . We shall assume the γ i and qji , i = j to be given positive constants and the X0i ≥ 0, i ∈ {1, . . . , d} to be given initial conditions. (qji ) is a Q-matrix modelling mutations from type j to type i. This system involves both cyclic catalytic branching and mutation between types. The extension of the cyclic setup to arbitrary networks (see Subsection 3.2.6 at the end of this paper) is straightforward. Existence of solutions shall be shown by standard methods. To show weak uniqueness we shall employ the results of Dawson and Perkins [3] once we show that a solution does not hit 0 ∈ Rd in finite time. The given system of SDEs can be understood as a stochastic analogue to a system of ODEs for the concentrations yj , j = 1, . . . , d of a type Tj . Then yj /y˙ j corresponds to the rate of growth of type Tj and one obtains the following ODEs (see Hofbauer and Sigmund [6]): for independent replication y˙ j = bj yj , autocatalytic replication y˙ j = γj yj2 and catalytic replication y˙ j = γj i∈Cj yi yj . 
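For the cyclic networks studied in this chapter the catalyst set of type j is C_j = \{j+1\}, with the index d+1 identified with 1, so the catalytic replication equation specializes to
\[
\dot y_j \;=\; \gamma_j\, y_{j+1}\, y_j, \qquad j \in \{1,\dots,d\}, \quad y_{d+1} \equiv y_1 .
\]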
In the cyclic catalytic case type Tj+1 catalyzes the replication of type j, i.e. the growth of type j is proportional to the mass of type j + 1 present at time t. The cyclic catalytic case represents the simplest form of mutual help between different types. It was firstly introduced by Eigen and Schuster (see Eigen and Schuster [4]). 1 A version of this chapter will be submitted for publication. Kliem, S.M. (2009) Long-term Behaviour of a Cyclic Catalytic Branching System.  59 The system of SDEs can be obtained as a limit of branching particle systems. The growth rate of types in the ODE setting now corresponds to the branching rate in the stochastic setting, i.e. type j branches at a rate proportional to the mass of type j + 1 at time t. Results on weak uniqueness for catalytic branching networks can be found for instance in [3] and Kliem [9]. The former proved weak uniqueness for catalytic replication under the restriction to networks with at most one catalyst per reactant, which includes the hypercyclic case. The latter removed this restriction. Both papers allow more general diffusion- and drift- coefficients under some H¨ older-continuity conditions. These conditions were weakened in Bass and Perkins [1] to continuity only. Our main interest shall be the long-time behaviour of the above system. In particular, we shall investigate survival and coexistence of types. Such questions naturally arise in biological competition models. For instance, Fleischmann and Xiong [5] investigated a cyclically catalytic super-Brownian motion. They showed global segregation (noncoexistence) of neighbouring types in the limit and other results on the finite time survival-extinction but they were not able to determine, if the overall sum dies out in the limit or not. In this paper we shall show that in our SDE-setup the overall sum converges to zero but does not hit zero in finite time. To further analyze the relative behaviour of types while they approach zero, we turned our attention to the normalized processes Yti ≡ Xti / j Xtj - note that Xti /Xtj = Yti /Ytj - and showed weak convergence to a unique stationary distribution that does not charge the set where at least one of the coordinates is zero.  3.1.2  Main results and outline of the paper  As a first step we shall show existence and nonnegativity of solutions Xti , i ∈ {1, . . . , d} to the above SDE by standard methods in Subsection 3.2.1. As a next step we shall prove in Subsection 3.2.2 that the sum of all coordinates, d i.e. St ≡ i=1 Xti , converges to zero but does not hit zero in finite time a.s. We then establish the weak uniqueness of the system by Theorem 4 of [3] or Theorem 1.6 of [9]. Secondly, from Subsection 3.2.3 on we shall change our focus to the normalized processes, i.e. to Yti = Xti /St to get some insight on the relative behaviour of types. Existence of solutions follows again by standard methods and the weak uniqueness of solutions in [0, 1]d follows by establishing a connection between the system at hand and the original system of SDEs. In Subsection 3.2.4 we show that any stationary distribution for Yt does not charge the set where at least one of the coordinate processes becomes extinct. We shall use this result in Subsection 3.2.5 to prove weak convergence to a unique stationary distribution by adapting the proof of Theorem 2.3 of Dawson, Greven, den Hollander, Sun and Swart [2] to our setup. Finally, in Subsection 3.2.7 we shall give a complete analysis of the case d = 2 by using methods of speed and scale.  
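Although none of the arguments below relies on simulation, the qualitative behaviour just described, namely that the total mass S_t = \sum_i X_t^i fluctuates like a nonnegative martingale, stays strictly positive at all finite times and still converges to zero, can be illustrated numerically. The following Euler-Maruyama sketch is included only as an illustration; the dimension, branching rates, mutation matrix, time step, initial condition and the projection onto [0,\infty)^d used to keep the discretized state nonnegative are ad hoc choices and play no role in the proofs.

import numpy as np

# Illustrative Euler-Maruyama discretization of the system (3.1); all numerical
# choices below are arbitrary and are not taken from, or used in, this chapter.
rng = np.random.default_rng(seed=0)

d = 3                                   # number of types, d >= 2
gamma = np.array([1.0, 0.8, 1.2])       # branching rates gamma^i > 0
Q = np.full((d, d), 0.5)                # mutation rates q_{ji} > 0 for j != i
np.fill_diagonal(Q, 0.0)
np.fill_diagonal(Q, -Q.sum(axis=1))     # q_{ii} = -sum_{j != i} q_{ij}, so rows sum to zero

x = np.array([1.0, 2.0, 0.5])           # initial condition X_0^i >= 0
dt, n_steps = 1e-4, 100_000

S = np.empty(n_steps + 1)               # discretized total mass S_t = sum_i X_t^i
S[0] = x.sum()
for n in range(n_steps):
    x_next = np.roll(x, -1)             # X^{i+1}, with the convention X^{d+1} = X^1
    drift = x @ Q                       # i-th component equals sum_j X^j q_{ji}
    dB = rng.normal(scale=np.sqrt(dt), size=d)
    x = x + drift * dt + np.sqrt(2.0 * gamma * x * x_next) * dB
    x = np.maximum(x, 0.0)              # crude projection keeping the scheme nonnegative
    S[n + 1] = x.sum()

# The simulated total mass should wander without systematic drift and slowly
# approach zero, in line with the behaviour established in Subsection 3.2.2.
print(f"S_0 = {S[0]:.3f},  S_T = {S[-1]:.3f},  min_i X_T^i = {x.min():.3e}")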
60  3.2  Main Results  3.2.1  Existence and nonnegativity  Let (Ω, F, (F)t , P) be a filtered probability space that satisfies the usual conditions (cf. Rogers and Williams [10], Introduction to Chapter IV). Consider the following system of SDEs for d ≥ 2: d  dXti =  2γ i Xti Xti+1 dBti + j=1  Xtj qji dt, i ∈ {1, . . . , d},  (3.2)  where Xtd+1 ≡ Xt1 . We shall assume the γ i and qji , i = j to be given strictly positive constants and the X0i ≥ 0, i ∈ {1, . . . , d} to be given initial conditions. As the qji model mutations from type j to type i we impose qii = −  j:j=i  qij ⇐⇒  qij = 0.  (3.3)  j  Let qmax = max |qij |, qmin = min |qij | > 0 and γmax = max γi > 0. 1≤i,j≤d  1≤i,j≤d  1≤i≤d  First we shall investigate the existence of solutions to (3.2). Lemma 3.2.1. There exists a solution to the given system of SDEs (3.2). Reference for the proof. Existence follows by standard methods, see for instance Theorem V.3.10 in Ethier and Kurtz [7]. Next, we shall show that all solutions to (3.2) stay in the first quadrant (here we replaced the terms under the square root with their absolute values to be able to consider solutions on all of Rd ). For this purpose we shall first show that the local time of the coordinate processes at zero is zero. Corollary 3.2.2. Let i ∈ {1, . . . , d} be arbitrarily fixed. Then the local time l t0 at zero of the process Xti is zero. Proof. The proof proceeds along the lines of standard techniques for local times. Let i ∈ {1, . . . , d} be arbitrarily fixed. By [10], IV.(45.3) (“occupation density formula”) we have for ϕ(x) = 1[0, ] (x), > 0, t 0  ϕ(Xsi )d < X i >s =  t 0  1{0≤Xsi ≤ } 2γ i Xsi Xsi+1 ds =  0  lta da =  lta ϕ(a)da. R  Next recall from Theorem IV.(44.2) that without loss of generality lta is rightcontinuous in a. We also know that the processes under consideration are continuous. Hence, 0 ≤ lt0 = lim+ ↓0  1 0  lta da ≤ lim+ ↓0  t  1 0  1{0<Xsi ≤ } 2γ i Xsi+1 ds = 0,  the last by dominated convergence, proving the assumption.  61 Notation 3.2.3. In what follows we shall denote the martingale part of Xti by t  Mit ≡  2γ i Xsi Xsi+1 dBsi , i ∈ {1, . . . , d}.  0  Lemma 3.2.4. The processes Xti , i ∈ {1, . . . , d} are nonnegative if we start at X0i ≥ 0, ∀i ∈ {1, . . . , d}. We also obtain that the Mit , i ∈ {1, . . . , d} are martingales. Proof. To the purpose of proving this Lemma we shall use that Xti  −  t  = 0  1 −1{Xsi ≤0} dXsi + lt0 = 2  t 0  −1{Xsi ≤0} dXsi ,  (3.4)  the last by Corollary 3.2.2 t  Before we continue, observe that Mit ≡ 0 2γ i Xsi Xsi+1 dBsi is a martingale. Indeed, Mi is a continuous local martingale and so it suffices to show that E < Mi >t < ∞ for all t > 0. To show this defines a sequence of stopping times Tn ≡ inf{t ≥ 0 : maxdi=1 |Xti | ≥ n}. As Cauchy Schwarz’ inequality yields t  E < Mi >t∧Tn ≤ C  i+1 2 i E (Xs∧T ) ds < ∞, )2 E (Xs∧T n n  0  (3.5)  2  = E < Mi·∧Tn >t Mi·∧Tn is a continuous martingale. In particular, E Mit∧Tn and we obtain that   2       t∧Tn d 2   i j i 2 i 2  Xs qji ds  + E E (Xt∧Tn ) ≤ C (X0 ) + E Mt∧Tn   0   j=1 t  (3.5)  ≤ C 1+  0  d  i max E (Xs∧T )2 ds . n i=1  i )2 ≤ CeCt . As Tn → ∞ for Hence Gronwall’s lemma gives maxdi=1 E (Xt∧T n n → ∞ we can now apply the monotone convergence theorem in (3.5) to get E < Mi >t < ∞ for all t > 0. Thus Mi is indeed a continuous martingale. Taking expectations in (3.4) this implies    E Xti  Xti  −  t  ≤ E  d  0 j=1  1Xsj ≤0 (−Xsj )qji ds .  Sum both sides over i and use (3.3) to obtain 0 ≤ ≥ 0 a.s. for all i ∈ {1, . . . , d} and t ≥ 0.  
i  E Xti  −  ≤ 0. Thus  62  3.2.2  The overall sum and uniqueness  In what follows we shall investigate the behaviour of our system for t → ∞. We shall show that the sum of all coordinates converges to zero but does not hit zero at a finite time. At the end of this Subsection we shall use this result to establish the weak uniqueness of solutions to (3.2). Notation 3.2.5. Let St ≡  d i=1  Xti .  Corollary 3.2.6. St converges a.s. for t → ∞. Proof. First note that by Lemma 3.2.4, St is a nonnegative process. Using (3.2) we obtain for St d  d  d  i=1 j=1  i=1  d  (3.3)  Xtj qji dt =  2γ i Xti Xti+1 dBti +  dSt =  2γ i Xti Xti+1 dBti . i=1  Using that the Mit are martingales as shown in Lemma 3.2.4, we obtain that St is a nonnegative martingale and thus a.s. convergent. Lemma 3.2.7. St > 0 for all t ≥ 0 a.s., given that S0 > 0. Proof. First observe that t  d  < S >t = 0 i=1  2γ i Xsi Xsi+1 ds ≤ 2γmax  t 0  Ss2 ds.  Next we shall use a time-change to be able to take advantage of this inequality. Let d t Is Is ≡ 2γ i Xsi Xsi+1 and Ct ≡ ds for t ≤ τ, 2 0 2γmax Ss i=1 where τ ≡ inf{t ≥ 0 : ∃ > 0 such that Is = 0 ∀s ∈ [t, t + ]}. Note that if St0 = 0 for some t0 > 0, then St = 0 for all t ≥ t0 by the optional sampling theorem as St is a continuous nonnegative martingale. As Is ≤ 2γmax Ss2 we therefore have Ss > 0 for all s < τ . Also note that Ct < ∞ for all t < τ as 0 ≤ Is and Is ≤ 2γmax Ss2 which yields Is 0≤ ≤ 1 for all 0 ≤ s < τ. (3.6) 2γmax Ss2 In particular the definition of τ implies that Ct is a continuous strictly increasing function defining a homeomorphism between [0, τ ] and [0, ξ] for ξ < ∞ respectively [0, ∞) if ξ = ∞ (let us also denote this by [0, ξ]), where ξ ≡ Cτ . Let D : [0, ξ] → [0, τ ] be the continuous strictly increasing inverse to Ct . Under this time-change we now obtain for St , Zt ≡ SDt , t ≤ ξ,  63 where Dt  < Z >t =< S >Dt =  t  Is ds =  I Ds  0  0  2 2γmax SD s ds = 2γmax I Ds  t 0  Zs2 ds,  i.e. d < Z >t = 2γmax Zt2 dt, ∀t ≤ ξ. Thus Z is a geometric Brownian motion (see for instance Karatzas and Shreve [8], Exercise 5.5.31). In particular Zt > 0 ∀t ≤ ξ if ξ < ∞ respectively Zt > 0 ∀t < ξ if ξ = ∞ follows, given that Z0 = S0 > 0. We therefore obtain  SDt > 0 ∀t ≤ ξ ⇒ St > 0 ∀t ≤ τ, if ξ < ∞ respectively (3.7) (3.6) SDt > 0 ∀t < ξ ⇒ St > 0 ∀t < τ, if ξ = ∞ ⇒ τ =∞ .  In what follows let η = inf{t ≥ 0 : St = 0}. To finish our proof it remains to show that η = ∞ a.s. By (3.7) it can easily be seen that we have τ ≤ η and by continuity of St the infimum in the definition of η is attained given η < ∞. Finally note that ST = 0 implies St = 0 ∀t ≥ T as St is a nonnegative martingale. Indeed, T is a stopping time and so this follows from the optional sampling theorem (see for instance [7], II.2.13). Let us suppose by contradiction that η < ∞. We are left with two cases. 1. case. τ (ω) = η(ω) < ∞. This yields a contradiction as Sτ = Sη = 0 by definition of η but Sτ > 0 by (3.7). Before investigating the second case we shall need a Corollary on the behaviour of the martingales Mit . Corollary 3.2.8. We have for Mit = 0 for t¯ ≥ t → ∞ a.s.  t 0  2γ i Xsi Xsi+1 dBsi that Mit¯ − Mit →  Proof. Mit is a continuous martingale with d < Mi >t = 2γ i Xti Xti+1 dt and Mi0 = 0. By [10], Theorem IV.(34.11) we can time-change Mit such that Mit = B<Mi >t , where Bt is a Brownian motion on a suitably extended probability space. Now 0 ≤< Mi >t¯ − < Mi >t = t¯ d  ≤  t  i=1  t¯ t  2γ i Xsi Xsi+1 ds  2γ i Xsi Xsi+1 ds =< S >t¯ − < S >t → 0 for t¯ ≥ t → ∞  as St converges a.s. 
and thus lim < S >t < ∞ a.s. Hence we obtain t→∞  Mit¯ − Mit = B<Mi >t¯ − B<Mi >t → 0 for t¯ ≥ t → ∞ a.s. as required.  (3.8)  64 Continuation of the proof of Lemma 3.2.7. 2. case. τ (ω) < η(ω) < ∞. Suppose τ < η < ∞. This implies that there exists > 0 such that I|[τ,τ + ) = 0 and S|[τ,τ + ) > 0. By definition of It , St and the continuity of the processes under consideration this requires that there exists δ = δ(ω) > 0 and i = i(ω) ∈ {1, . . . , d} such that Xti |[τ,τ +δ) = 0 and Xti+1 |[τ,τ +δ) > δ.  (3.9)  Next consider increments in the ith coordinate to see that for all 0 ≤ t < δ 0 = Xτi +t − Xτi =  Miτ +t  −  Miτ  d  τ +t  + τ  j=1,j=i  Xsj qji − Xsi qij ds  ≥ Miτ +t − Miτ − (d − 1) max qij i=j  τ +t τ  Xsi ds +  τ +t τ  Xsi+1 qi+1,i ds  ≥ Miτ +t − Miτ + tqi+1,i δ. As I|[τ,τ +  )  d<S>s = 0. Similarly ds [τ,τ + ) i >s 3.2.8 this implies d<M = 0. Rewriting ds [τ,τ + ) the Corollary we get Mi |[τ,τ + ) ≡ const. and thus  = 0 and d < S >s = Is ds we get  to the proof of Corollary  Mit = B<Mi >t as done in Miτ +t − Miτ = 0. Plugging this in the above inequality we obtain 0 ≥ tqi+1,i δ > 0, a contradiction. Taking both cases together we have shown that η = ∞ a.s., i.e. inf{t ≥ 0 : St = 0} = ∞ a.s. Remark 3.2.9. We have actually shown more. As η < ∞ was not used in the proof of case 2 of Lemma 3.2.7, we have moreover shown that τ = η = ∞. Also observe that the proof of Lemma 3.2.7 only uses qij ≥ 0, j = i and qi+1,i > 0 for all i ∈ {1, . . . , d} and so in particular holds for nearest neighbour random walk on the circle, even though uniqueness for this case remains open. Lemma 3.2.10. The overall sum St of our processes converges to 0 a.s., i.e. St → 0 for t → ∞. As the processes Xti ≤ St , i ∈ {1, . . . , d} are nonnegative, they converge to 0 a.s. as well. Proof. We shall prove the assertion by constructing a contradiction. The a.s.existence of a limit S∞ was shown in Corollary 3.2.6. Suppose by contradiction that S∞ = S∞ (ω) > 0 for ω element of a set of positive measure. Choose 0 < < S∞ arbitrarily small. By (3.8) there exists T = T ( , ω) ≥ 0 such that for all t¯ > t ≥ T Mit¯ − Mit ≤ , i ∈ {1, . . . , d}, and |St − S∞ | ≤ .  (3.10)  65 Now observe that for t¯ > t Xt¯i − Xti ≥ − Mit¯ − Mit + ≥−  Mit¯ −  Mit  t¯  t j=1,j=i t¯  =−  Mit  Xsj qji − Xsi qij ds  t j=1,j=i t¯ t  t¯  d  + qmin  ≥ − Mit¯ − Mit + qmin Mit¯ −  d  Xsj ds  − (d − 1)qmax  (Ss − Xsi )ds − (d − 1)qmax  t t¯ t  Xsi ds  t¯  + qmin t  Xsi ds  t¯  Ss ds − [(d − 1)qmax + qmin ]  t  Xsi ds.  Hence we have for t¯ > t ≥ T Xt¯i − Xti ≥ − + qmin (t¯ − t) (S∞ − ) − [(d − 1)qmax + qmin ] (t¯ − t) sup Xsi . s∈[t,t¯]  This is equivalent to sup Xsi ≥  s∈[t,t¯]  − + qmin (t¯ − t) (S∞ − ) − Xt¯i − Xti . [(d − 1)qmax + qmin ] (t¯ − t)  (3.11)  In what follows fix i ∈ {1, . . . , d} arbitrary and use the following notation 0 ≤ I ≡ lim inf Xti ≤ lim sup Xti ≡ S ≤ S∞ < ∞. t→∞  t→∞  We shall prove that S∞ > 0 implies I > 0, which will provide us with the desired contradiction in the end. 1. case: S∞ > 0 and I < S. As Xti is a continuous process, there exists an increasing sequence {tn }n∈N , tn → ∞ for n → ∞, independent of the choice of , such that Xtin = I + δ(S − I), n ∈ N, where 0 < δ < 1 fixed but arbitrarily small (see Figure 3.1). Without loss of generality let sup s∈[t2n−1 ,t2n ]  Xsi = Xti2n−1 = Xti2n = I + δ(S − I), n ∈ N.  Applying (3.11) with t = t2n−1 , t¯ = t2n (choose n ∈ N such that t ≥ T ) gives I + δ(S − I) ≥  − + qmin (t2n − t2n−1 ) (S∞ − ) . 
[(d − 1)qmax + qmin ] (t2n − t2n−1 )  (3.12)  66 PSfrag replacements S I + δ(S − I)  I t1  t3  t2  t4  Figure 3.1: The definition of t2n−1 and t2n . As the choice of n may depend on we have to find an estimate for the term → 0+ . First observe that for 0 ≤ u < v to be t2n −t2n−1 before considering specified later v  Xvi  −  Xui  ≤  Miv  −  Miu  d  + u j=1  Xsj qji ds  ≤ Miv − Miu + (v − u)qmax sup Ss 0≤s<∞  and thus as St converges a.s. v−u≥  Xvi − Xui − Miv − Miu . qmax sup Ss 0≤s<∞  Now (3.8) yields that for all θ > 0 small there exists T = T (θ, ω) (note that T is independent of the choice of ) such that for all v > u ≥ T v−u≥  Xvi − Xui − θ . qmax sup Ss  (3.13)  0≤s<∞  Moreover, choose T such that for all t2n−1 ≥ T we have δ Xsi < I + (S − I), 2 s∈[t2n−1 ,t2n ] inf  which is possible by definition of I, S and δ. We get ∃u, v ∈ [t2n−1 , t2n ] s.t. |Xvi − Xui | >  δ (S − I) 2  (3.13)  ⇒  v−u≥  δ 2 (S  − I) − θ . qmax sup Ss 0≤s<∞  By choosing θ sufficiently small we obtain that there exists T = T (θ, ω, δ) such that |t2n − t2n−1 | ≥ v − u > const(ω, δ) > 0 for all t2n−1 ≥ T . Let us return to (3.12). Letting → 0+ yields I + δ(S − I) ≥  qmin S∞ . [(d − 1)qmax + qmin ]  67 Now let δ → 0+ to finally obtain I > 0. 2. case: S∞ > 0 and I = S. As I = S is equivalent to Xti being convergent, we can choose T as defined in (3.10) to additionally satisfy |Xsi − I| ≤ , ∀s ≥ T. Now (3.11) gives with t¯ > t = T , t¯ arbitrary − + qmin (t¯ − T ) (S∞ − ) − Xt¯i − XTi [(d − 1)qmax + qmin ] (t¯ − T ) − + qmin (t¯ − T ) (S∞ − ) − 2 . ≥ [(d − 1)qmax + qmin ] (t¯ − T )  I + ≥ sup Xsi ≥ s∈[T,t¯]  Taking → 0+ yields I > 0 once more. Taking both cases together we have shown lim inf Xti > 0 for all i ∈ {1, . . . , d}, t→∞  given S∞ > 0, as i was chosen arbitrary in the calculations. As the Xti are continuous processes this gives a contradiction to ∞ d i i i+1 lim < S >t = 0 i=1 2γ Xs Xs ds < ∞ a.s., as this requires that the limit t→∞ inferior of the nonnegative integrand becomes zero for t → ∞. S∞ > 0 thus gives a contradiction, i.e. we have shown S∞ = 0 a.s. Lemma 3.2.11. The solution to the given SDE (3.2) is unique in law for all d X0 ∈ Rd s.t. X0i ≥ 0, i ∈ {1, . . . , d} and i=1 X0i + X0i+1 > 0. Proof. We shall apply Theorem 4 in [3] (or alternatively Theorem 1.6 of [9]). The Hypotheses under which this Theorem is stated hold, except for condition (3) in Hypothesis 2. Here we have that bi (x) > 0 if xi = 0, except for the case where x = 0 ∈ Rd (here bi (0) = 0). Thus we have to consider this case separately. By modifying the drift coefficients in a small open neighbourhood of 0, say in B (0) with > 0 arbitrarily small, we can achieve that the drift coefficients satisfy all conditions such as H¨ older continuity on compact subsets of Rd+ and |bi (x)| ≤ c(1 + |x|) and additionally are strictly positive on B (0). For instance, let ˜bi (x) ≡ bi (x) + ( − |x|) ∨ 0. By solving the system with these modified coefficients we obtain existence and uniqueness of the modified solution. Finally take > 0 so small that the starting point x0 ∈ / B (0). As the diffusion and drift coefficients of the modified SDE are c identical with the ones of the original SDE on (B (0)) , every solution to (3.2) + is unique until it hits B (0). By taking ↓ 0 and recalling that we showed that every solution to (3.2) does not hit 0 in finite time (see Lemma 3.2.7) we obtain the assertion.  68  3.2.3  The normalized processes  Corollary 3.2.12. 
The corresponding SDEs for the normalized processes Yti ≡  Xti St  (3.14)  with 0 ≤ Yti ≤ 1 are as follows. dYti = − Yti  2γ j Ytj Ytj+1 dBtj + (1 − Yti ) 2γ i Yti Yti+1 dBti  j=i  (3.15)  d  + Yti j=i  2γ j Ytj Ytj+1 dt + (Yti − 1)2γ i Yti Yti+1 dt +  Ytj qji dt. j=1  Idea of the proof. The proof is an easy application of Itˆ o’s formula. In what follows we shall consider the above system of SDEs for the Yti without referring to their derivation via the Xti and ask for existence and uniqueness of solutions. As we have not shown nonnegativity of the Yti yet, we replace the terms under the square root with their absolute values. Proposition 3.2.13. The SDE (3.15) started at Y0 ∈ [0, 1]d \∂[0, 1]d with i i Y0 = 1 has a unique in law solution. Moreover the solution satisfies Y t ∈ d [0, 1] and i Yti = 1 for all t ≥ 0 a.s. Proof. Existence follows immediately from the existence of solutions Xt to (3.2). Indeed, let X0 ≡ Y0 , then Yti ≡ Xti / j Xtj solves (3.15) by Corollary 3.2.12 with Y0i = X0i / j X0j = Y0i / j Y0j = Y0i . As in Corollary 3.2.2 one can show that the local times at zero of the processes Yti are zero. The nonnegativity of the processes Yti can be shown as follows. Lemma 3.2.14. The processes Yti , i ∈ {1, . . . , d} are nonnegative. Proof. We have for all 1 ≤ i ≤ d and t ≤ TM = TM (ω) ≡ inf{t ≥ 0 : maxi |Yti | ≥ M }, M ∈ N fixed, Yti  −  t  = 0  −1{Ysi ≤0} dYsi t  = mart. + 0 d  + j=1    −1{Ysi ≤0} Ysi   Ysj qji  ds.  j=i  2γ j Ysj Ysj+1 + (Ysi − 1)2γ i Ysi Ysi+1  69 We can bound the above by t  mart. + 0  C(M ) Ysi t  ≤ mart. +  0  −  − 1{Ysi ≤0} −  C(M ) Ysi  Ysj qji ds j=i  Ysj  +C  −  ds.  j=i  Taking expectations on both sides and summing over all coordinates we get for all t ≥ 0, i E Yt∧T M  t  −  ≤  i  i E Ys∧T M  C(M ) 0  −  +C i  i t  ≤ C(M )  0  i E Ys∧T M  −  −  j Ys∧T M  E  ds  j=i  ds.  i  i An application of Gronwall’s lemma yields E Yt∧T M and use Fatou’s lemma to obtain the claim.  As we shall show in the following Corollary for all t ≥ 0.  i  −  = 0. Take M → ∞  Y0i = 1 implies  Corollary 3.2.15. Every solution (Yti )1≤i≤d to (3.15) with satisfy i Yti = 1 for all t ≥ 0 a.s.  i  i  Yti = 1  Y0i = 1 has to  Proof. We have from (3.15) and (3.3) that d  d  d  Yti  d  1−  =  i=1  Yti j=1  i=1  2γ j Ytj Ytj+1 dBtj − 2γ j Ytj Ytj+1 dt  d  1−  =  where Nt ≡  t 0  d  i=1  (dNt − d < N >t ) ,  2γ j Ysj Ysj+1 dBsj and Nt∧TM is a martingale starting at 0.  j=1 d  Setting Dt ≡  Yti  i=1  (Dt∧TM − 1)2  Yti and applying Itˆ o’s formula we obtain (D0 − 1 = 0) t∧TM  = mart. + 0  1 2(Ds − 1)(−1 + Ds ) + 2(1 − Ds )2 d < N >s 2  giving  t  E(Dt∧TM − 1)2 ≤ 3C(M )  0  E(Ds∧TM − 1)2 ds.  Here we used that d < N >s ≤ C(M )ds for s ≤ t ∧ TM . Now Gronwall’s lemma yields Dt − 1 ≡ 0 for all t ≤ TM a.s. Take M → ∞ to obtain the claim.  70 Continuation of the proof of Proposition 3.2.13. It remains to prove the uniqueness of solutions to (3.15). Observe that Corollary 3.2.15 implies 0 ≤ Yti ≤ 1 by the nonnegativity of the Yti . Now suppose that Yt is a solution to (3.15) with Y0 ∈ [0, 1]d \∂[0, 1]d and such that i Y0i = 1. The following Lemma gives the existence of processes Xti such that Yti = Xti / i Xti . This will enable us later to derive uniqueness of solutions to (3.15) from uniqueness of solutions to (3.2). Lemma 3.2.16. Given a process (Yti )1≤i≤d that satisfies (3.15) with Y0 ∈ [0, 1]d\∂[0, 1]d and with i Y0i = 1, we can find a system of processes (Xti )1≤i≤d that satisfies (3.2) and (3.14) with St ≡ i Xti . Proof. We start with a motivation for the proof. 
Let (Yti )1≤i≤d be given by (3.14). Definition (3.14) and St = i Xti implied in the former setting that d  d  d  2γ i Xti Xti+1 dBti = St  dXti =  dSt =  i=1  i=1  2γ i Yti Yti+1 dBti , i=1  i.e. dSt ≡ St dMt  (3.16)  being solved by St = S0 exp Mt −  1 < M >t . 2  In the given setting the above calculation may be taken as a motivation to define Rt ≡ R0 exp Mt −  1 < M >t , where Mt ≡ 2  d  t  2γ i Ysi Ysi+1 dBsi ,  0 i=1  such that (3.16) holds with St replaced by Rt and R0 > 0 to be chosen arbitrarily. Let TM ≡ inf{t ≥ 0 : maxi Yti ≥ M }. For t ≤ TM we have < M >t < ∞ a.s. This finally leads to the definition of (Xti )1≤i≤d , for t ≤ TM , by Xti ≡ Yti Rt = Yti R0 exp Mt −  1 < M >t , 2  (3.17)  where X0i ≡ Y0i R0 . Let us check that (Xti )1≤i≤d as defined in (3.17) satisfies (3.2). Indeed, we have dXti  (3.17)  =  (3.15),(3.16)  =  Rt dYti + Yti dRt + < Y i , R >t d  Rt  Ytj qji dt  2γ i Yti Yti+1 dBti + Rt j=1  (3.17)  =  d  Xtj qji dt,  2γ i Xti Xti+1 dBti + j=1  71 which proves our claim after taking M → ∞ and observing that by Corollary 3.2.15, Rt = i Yti Rt = i Xti . Conclusion of the proof of Proposition 3.2.13. The uniqueness of the solution to (3.15) follows from the uniqueness of the solution (Xti )1≤i≤d to (3.2). Indeed, observe that the uniqueness of Xt yields the uniqueness of Rt = i Yti Rt = i Xti and thus of Yti = Xti /Rt .  3.2.4  Properties of a stationary distribution to the system (3.15) of normalized processes  Recall that we are given the system of SDEs (3.15) and look for solutions satisfying i Yti = 1, where 0 ≤ Yti ≤ 1, 1 ≤ i ≤ d. In what follows we shall look for stationary distributions to this system. By Proposition IV.9.2 of [7], a measure π is a stationary distribution for our process, that is, it is a stationary distribution for A, where A denotes the generator of our system of SDEs, if and only if Agdπ = 0 for all g ∈ C 2 . Hence we shall investigate necessary properties of a measure π satisfying [0,1]d Agdπ = 0 for all g ∈ C 2 . In particular we want to show the following Proposition. Proposition 3.2.17. If π is stationary then it does not put mass on the set N ≡ {y ∈ [0, 1]d : ∃i ∈ {1, . . . , d} : yi = 0}, i.e. on the set where at least one of the coordinate processes becomes extinct. Proof. The generator A of our diffusion Y can be determined to be d  d  Ag(x) =  ∂i g(x)bi (x) + i=1  1 ∂ij g(x)aij (x), 2 i,j=1  (3.18)  where d  bi (x) = xi j=i  aii (x) = (xi )  2γ j xj xj+1 + (xi − 1)2γ i xi xi+1 +  2  xj qji ,  (3.19)  j=1  2  j=i  2γ j xj xj+1 + (1 − xi ) 2γ i xi xi+1 and for i = j  aij (x) = xi xj k∈{i,j} /  2γ k xk xk+1 − (1 − xi )xj 2γ i xi xi+1 − xi (1 − xj )2γ j xj xj+1  (see [10], V.(1.7) for the definition of A). In what follows we shall try to find a function g ∈ C 2 which leads to a contradiction to Agdπ = 0 in case π puts mass on the set N . Thereby we shall take advantage of the observation that for xi = 0 for i ∈ {1, . . . , d} arbitrarily fixed we have aii (x) = 0 and bi (x) ≥  j=i  xj qmin = (1 − xi )qmin = qmin > 0,  (3.20)  72 where we set qmin ≡ min qij . i=j  To make things easier we shall fix i ∈ {1, . . . , d} arbitrarily and only look for functions g(x) = g(xi ), g ∈ C 2 . 
Thus we obtain for (3.18) with f (xi ) = f (x) ≡ ∂i g(x) = ∂i g(xi ), f ∈ C 1 that [0,1]d Agdπ = 0 is equivalent to  [0,1]d  =−    f (xi ) xi [0,1]d    d j=i  2γ j xj xj+1 + (xi − 1)2γ i xi xi+1 +   1 2 ∂i f (xi ) (xi ) 2  Using that x ∈ [0, 1]d we get   − |f (xi )| C1 xi + f (xi ) [0,1]d   j=1  xj qji  dπ(x)   2  j=i  2γ j xj xj+1 + (1 − xi ) 2γ i xi xi+1  dπ(x).  xj qji j=i      dπ(x) ≤ C2  [0,1]d  |∂i f (xi )| xi dπ(x),  where all constants under consideration are nonnegative. Assuming that f is nonnegative and as xj qji ≥ (1 − xi )qmin (see (3.20)) we finally get j=i  [0,1]d  f (xi ) [−C1 xi + (1 − xi )qmin ] dπ(x) ≤ C2  [0,1]d  |∂i f (xi )| xi dπ(x). (3.21)  In what follows we shall try to find a nonnegative function f ∈ C 1 which gives a contradiction to the assumption that π puts mass on the set N . As we want to investigate the behaviour of π on the set N , we shall define a function f ∈ C 1 with support in [0, 1] and then “squeeze” the function, i.e. rescale it in such a way that the support of the new function f lies in [0, ]. This way we localize equation (3.21) at xi = 0. Let us make this more precise. Suppose we are given 1 f ∈ C+ with support in [0, 1] or (−∞, 1]  and let f (x) ≡ f  x  then  1 f ∈ C+ with support in [0, ] or (−∞, ] and f (x) = f  x  1  .  1 1(x ≤ 1). Plugging this into (3.21) and Choose for instance f (x) = exp − 1−x  abbreviating A ≡ [0, ] × [0, 1]d−1 we obtain for arbitrary 0 < < 1 f A  x  [−C1 + (1 − )qmin ] dπ(x, z) ≤  f A  x  x C2 dπ(x, z),  73 where we assumed without loss of generality that i = 1. This yields  A  exp −  1 1−  [−C1 + (1 − )qmin ] −  x  ≡ I1 ( ) + I 2 ( ) ≤ 0  1 1−  x x 2  C2  dπ(x, z)  (3.22)  for all > 0. For the first part I1 of the integral observe that the absolute value of the integrand is bounded for 0 < ≤ 0 via C1 0 + qmin . Hence we can apply the dominated convergence theorem to the first integral to obtain lim+ I1 ( ) = e−1 qmin π {0} × [0, 1]d−1 . ↓0  (3.23)  For the second part of the integral I2 note that for x ∈ [0, ] the absolute value of the integrand is bounded by 4e−2 x C2 ≤ 4e−2 C2 . As this result is uniform in we can apply the dominated convergence theorem again to obtain lim+ I2 ( ) = 0. ↓0  Plugging this and (3.23) into (3.22) we get with qmin > 0 e−1 qmin π {0} × [0, 1]d−1 ≤ 0 ⇒ π {0} × [0, 1]d−1 = 0,  i.e. π does not put mass on N as stated above.  3.2.5  Stationary distribution  Recall that we are given the system of SDEs (3.15) which we rewrite as dYt = σ(Yt )dBt + b(Yt )dt and look for solutions satisfying  i  Yti = 1, where 0 ≤ Yti ≤ 1, 1 ≤ i ≤ d.  Proposition 3.2.18. The above system of SDEs has a unique stationary distribution π supported by S ≡ [0, 1]d \∂[0, 1]d ∩ {y : i yi = 1}. Moreover, L(Yt |Y0 = y) ⇒ π holds for all y ∈ S. Proof. Let Yt be the unique strong Markov solution to (3.15). Recall that we already showed that every equilibrium distribution for A doesn’t put mass on N = {y : ∃i : yi = 0} in Proposition 3.2.17 and that i Yti = 1 for all t ≥ 0 in Proposition 3.2.13. Hence, if Yt has any equilibrium distributions, they are concentrated on S ≡ [0, 1]d \∂[0, 1]d ∩  y:  yi = 1 . i  In what follows we shall consider the process Y˜t ≡ (Yt1 , . . . , Ytd−1 ) ∈ [0, 1]d−1 instead. The martingale problem for the resulting SDE for Y˜ is consequently  74 well-posed as the corresponding martingale problem for Y is well-posed. 
For d−1  x ˜ ∈ S˜ ≡  [0, 1]d−1 ∩  i=1  d−1  x ˜i ≤ 1  = [0, 1]d−1 \∂[0, 1]d−1 ∩  \ x ˜ : ∃i : x ˜i = 0 or  x ˜i = 1 i=1  x˜i < 1 ,  x ˜:0< i  a ˜ij (˜ x) is non-singular by Corollary A.1.1 of the Appendix. Also observe that S˜ is an open subset of [0, 1]d−1 compact. Now the reasoning of [2], Section 3.1 can be applied to show that the system of SDEs for Y˜ has a unique stationary ˜ distribution π ˜ supported by S˜ and that L(Y˜t |Y˜0 = y˜) ⇒ π ˜ holds for all y˜ ∈ S. Note that, as in [2], the non-singularity of a ˜ij (˜ x) on S˜ is crucial. i The claim now follows from Y˜t ≡ (Yt1 , . . . , Ytd−1 ) and Ytd = 1 − d−1 i=1 Yt . A complete proof is given in the Appendix, Subsection A.2.  3.2.6  Extension to arbitrary networks  Instead of (3.1) we can consider d  dXti =  Xtj dBti +  2γ i Xti j∈Ci  j=1  Xtj qji dt, i ∈ {1, . . . , d},  (3.24)  where Ci ⊂ {1, . . . , d} with i ∈ / Ci and |Ci | ≥ 1. We can think of Ci as the set of catalysts of i. The cyclic case corresponds to Ci = {i + 1}. We shall assume as above that γ i and qji , i = j are given positive constants and the X0i ≥ 0, i ∈ {1, . . . , d} are given initial conditions. (qji ) is again a Q-matrix modelling mutations from type j to type i. For this setup the above proofs directly carry over. Observe in particular that (3.9) changes to the requirement that there exists δ = δ(ω) > 0 and i = i(ω), j = j(ω) ∈ {1, . . . , d} such that Xti |[τ,τ +δ) = 0 and Xtj |[τ,τ +δ) > δ. Also note that the restriction on the state space for the initial condition in d Lemma 3.2.11 changes to i=1 X0i + j∈Ci X0j > 0 as we now use Theorem 1.6 of [9].  3.2.7  Complete analysis of the case d = 2  Remark 3.2.19. We shall denote γ i by γi in what follows, as for instance γ 2 might easily be misunderstood. Recall that the normalized processes Yti = (3.15).  Xti St  for our given SDE satisfy  75 Corollary 3.2.20. For d = 2 we obtain the following SDE for Yt ≡ Yt1 (note that Yt2 = 1 − Yt1 ) dYt = 2(1 − Yt )Yt ((Yt )2 γ2 + (Yt − 1)2 γ1 )dBt  (3.25)  + 2(1 − Yt )Yt (Yt γ2 + (Yt − 1)γ1 ) dt + Yt (q11 − q21 ) dt + q21 dt.  Idea of the proof. We can calculate the SDE for Yt1 from (3.15), using our cyclic definition Yt2+1 = Yt1 . We get in particular that d < Y 1 >t = 2(1 − Yt1 )Yt1 (Yt1 )2 γ2 + (Yt1 − 1)2 γ1 dt.  Hence we can rewrite Yt1 on a possibly enlarged probability space in terms of a Brownian motion Bt as above (cf. [10], Theorem IV.(34.11)). In what follows we shall prove existence and pathwise uniqueness of solutions to the SDE (3.25) under the constraint Yt ∈ [0, 1]. First observe that (3.25) implies that dYt = σ(Yt )dBt + b(Yt )dt, (3.26) where Yt ∈ I ≡ [0, 1] with σ(x) =  2(1 − x)x (x2 γ2 + (x − 1)2 γ1 )  (3.27)  and b(x) = 2(1 − x)x (xγ2 + (x − 1)γ1 ) + x (q11 − q21 ) + q21 .  Lemma 3.2.21. If Y0 ∈ [0, 1], the SDE (3.26) has a pathwise unique solution, taking values in [0, 1]. Proof. Replace σ(x) and b(x) in (3.26) by continuous functions σ ˜ (x), ˜b(x) with compact support such that they coincide with σ(x), b(x) on [0, 1]. Then existence of solutions to the SDE with modified coefficients follows by Theorem V.3.10 in [7]. By reasoning as in Lemma 3.2.14 (first consider (Yt )− , then (1 − Yt )− ) we can further show that any solution to the modified SDE satisfies 0 ≤ Yt ≤ 1 a.s. and is therefore a solution to the given SDE as well. Pathwise uniqueness of a solution follows from the Yamada-Watanabe pathwise-uniqueness theorem for 1-dim. diffusions (see V.(40.1) in [10] and replace the term under the square root in (3.27) by its absolute value). 
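The two-dimensional analysis above lends itself to a quick numerical illustration. The following is a minimal Euler-Maruyama sketch of the one-dimensional diffusion (3.26) with coefficients (3.27); the parameter values gamma1, gamma2, q12, q21, the step size and the horizon are illustrative assumptions, and the clipping to [0, 1] is only a safeguard against discretisation error (by Lemma 3.2.21 the exact solution never leaves [0, 1]).

```python
import numpy as np

# Minimal Euler-Maruyama sketch of the one-dimensional diffusion (3.26),
#   dY_t = sigma(Y_t) dB_t + b(Y_t) dt,  Y_t in [0, 1],
# with sigma and b as in (3.27).  The parameters gamma1, gamma2, q12, q21,
# the step size and the horizon are illustrative assumptions; q11 = -q12 by (3.3).

rng = np.random.default_rng(0)

gamma1, gamma2 = 1.0, 2.0      # gamma_1, gamma_2 > 0
q12, q21 = 0.5, 0.3            # mutation rates q_12, q_21 > 0
q11 = -q12                     # row sums of the Q-matrix vanish

def sigma(x):
    # diffusion coefficient of (3.27); abs() guards against tiny negative
    # arguments produced by the discretisation
    return np.sqrt(abs(2.0 * (1.0 - x) * x * (x**2 * gamma2 + (x - 1.0)**2 * gamma1)))

def drift(x):
    # drift coefficient of (3.27)
    return 2.0 * (1.0 - x) * x * (x * gamma2 + (x - 1.0) * gamma1) + x * (q11 - q21) + q21

def simulate(y0=0.5, T=50.0, dt=1e-3):
    n = int(T / dt)
    y = np.empty(n + 1)
    y[0] = y0
    for k in range(n):
        y[k + 1] = y[k] + drift(y[k]) * dt + sigma(y[k]) * rng.normal(0.0, np.sqrt(dt))
        # the exact solution stays in [0, 1] (Lemma 3.2.21); clip the scheme to [0, 1]
        y[k + 1] = min(max(y[k + 1], 0.0), 1.0)
    return y

path = simulate()
print("time average of Y along the path:", path.mean())
```

A long trajectory of this scheme gives a first impression of where Y spends its time; the scale function and speed measure computed next make this precise.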
As every one-dimensional diffusion can be uniquely characterized by its scale function and speed measure, we shall calculate the scale function as a first step towards investigating the long-time behaviour of the given one-dimensional SDE for Yt = Yt1 . Lemma 3.2.22. The scale function of Y is given by (up to increasing affine transformations) s (x) = x2 γ2 + (x − 1)2 γ1 × exp −  ” “ q11 q21 − 1+ 2γ − 2γ  q11 + q21 √ γ1 γ2  2  arctan  1  q11  q21  |1 − x| γ2 |x|− γ1  (γ1 + γ2 )x − γ1 √ γ1 γ2  .  76 This yields in particular that s (0) = s (1) = ∞. We obtain moreover s(0) =  q21 γ1  const. > −∞, −∞,  < 1, and s(1) = o.w.  const. < ∞, ∞,  −q11 γ2  < 1,  o.w.  Idea of the proof. Calculate the scale function as in [10], chapter V.28. Proposition 3.2.23.  recurrent    never hit (i) 0 is recurrent    never hit  We shall show the following result on speed and scale.     recurrent  −q11 < γ2 , q21 < γ1 ,            recurrent −q11 < γ2 , q21 ≥ γ1 , and 1 is for never hit  −q11 ≥ γ2 , q21 < γ1 ,            never hit −q11 ≥ γ2 , q21 ≥ γ1 ,          where “0 is recurrent” should mean P (∃n such that Yt = 0 for all t > n) = 0 and “1 never hit” that P (∃t ≥ 0 : Yt = 1) = 0.  (ii) In all cases  ∞ 0  1{0,1} (Yt )dt = 0.  (iii) The scale function of Y is given (up to increasing affine transformations) as in Lemma 3.2.22. (iv) The speed measure of the diffusion Z ≡ s(Y ) in natural scale on [s(0), s(1)] (read this as [s(0), s(1)] = (−∞, s(1)] for −∞ = s(0) < s(1) < ∞ etc.) is m(dz) = e =  2  R s−1 (z) 1 2  1 (s σ)2  2b(u)σ(u)−2 du  σ(s−1 (z))−2 1{z∈(s(0),s(1))} dz  ◦ s−1 (z)1{z∈(s(0),s(1))} dz.  In particular, m puts no mass on the endpoints s(0) and s(1). Idea of the proof. We shall mimic the calculations of [10], V.48, p. 287f. Corollary 3.2.24. We obtain as a result on the limiting distribution the following. Let {Pt } be the transition function of our diffusion in natural scale on R. Then for each x: lim π − Pt (x, ·) = 0, t→∞  where π(dz) ≡  m(dz) . m(R)  Here · denotes the total variation norm of a measure. Idea of the proof. This follows easily from [10], Theorem V.54.5.  ,  77  Bibliography [1] Bass, R.F. and Perkins, E.A. Degenerate stochastic differential equations arising from catalytic branching networks. Electron. J. Probab. (2008) 13, 1808–1885. MR2448130 [2] Dawson, D.A. and Greven, A. and den Hollander, F. and Sun, R. and Swart, J.M. The renormalization transformation for two-type branching models. Ann. Inst. H. Poincar´e Probab. Statist. (2008) 44, 1038–1077. MR2469334 [3] Dawson, D.A. and Perkins, E.A. On the uniqueness problem for catalytic branching networks and other singular diffusions. Illinois J. Math. (2006) 50, 323–383 (electronic). MR2247832 [4] Eigen, M. and Schuster, P. The Hypercycle: a principle of natural selforganization. Springer, Berlin, 1979. [5] Fleischmann, K. and Xiong, J. A cyclically catalytic super-brownian motion. Ann. Probab. (2001) 29, 820–861. MR1849179 [6] Hofbauer, J. and Sigmund, K. The Theory of Evolution and Dynamical Systems. London Math. Soc. Stud. Texts, vol. 7, Cambridge Univ. Press, Cambridge, 1988. MR1071180 [7] Ethier, S.N. and Kurtz, T.G. Markov Processes: Characterization and Convergence. Wiley and Sons, Inc., Hoboken, New Jersey , 2005. MR0838085 [8] Karatzas, I. and Shreve, S.E. Brownian Motion and Stochastic Calculus, second edition. Springer, New York, 1991. MR1121940 [9] Kliem, S. Degenerate Stochastic Differential Equations for Catalytic Branching Networks. To appear in Ann. Inst. H. 
Poincaré Probab. Statist.

[10] Rogers, L.C.G. and Williams, D. Diffusions, Markov Processes, and Martingales, vol. 2. Reprint of the second (1994) edition, Cambridge University Press, Cambridge, 2000. MR1780932

Chapter 4

Convergence of Rescaled Competing Species Processes to a Class of SPDEs¹

¹ A version of this chapter will be submitted for publication. Kliem, S.M. (2009) Convergence of Rescaled Competing Species Processes to a Class of SPDEs.

4.1 Introduction

We investigate convergence of certain rescaled models that have their applications in biology. Such convergence results can for instance be used to relate the limits to questions of coexistence and survival of types in the original models. We start by introducing the underlying models and concepts for our later definitions in Subsections 4.1.1, 4.1.2 and 4.1.3. In Subsection 4.1.4 an overview of the results of this paper follows. Finally, in Subsection 4.1.5 we outline the remaining parts of the paper.

4.1.1 The voter model and the Lotka-Volterra model

An extensive introduction to the voter model can be found in Liggett [7], Chapter V. In short, the 1-dimensional voter model is a process ξ_t : Z → {0, 1} with the following interpretation. x ∈ Z is seen as an individual with political opinion 0 or 1. This is the common interpretation which gives the model its name. Alternatively, we can think of Z as space occupied by two populations 0 and 1. If ξ_t(x) = 0 at time t, the coordinate x is occupied by an individual of population 0. As we shall consider approximate densities later on, this interpretation will suit our purpose better in what follows. The evolution of the process in time is given via infinitesimal rates. Following the notation in [7], let c(x, ξ) denote the rate at which the coordinate ξ(x) flips from 0 to 1 or from 1 to 0 when the system is in state ξ. Then the process ξ_t will satisfy P(ξ_t(x) ≠ ξ_0(x)) = c(x, ξ_0)t + o(t) for t ↓ 0+. For the voter model, the rates can for instance be given by a random walk kernel p on Z, i.e. 0 ≤ p(x) ≤ 1 and Σ_{x∈Z} p(x) = 1, such that

0 → 1 at rate c(x, ξ) = Σ_y p(x − y)ξ(y),
1 → 0 at rate c(x, ξ) = Σ_y p(x − y)(1 − ξ(y)).

Under certain conditions on the rate or kernel, it can now be shown that the given rates determine indeed a unique, {0, 1}^Z-valued Markov process ξ_t. A possible interpretation of the kernel p(·) is that at exponential times with rate 1, the individual x ∈ Z selects a site at random according to the kernel p(x − ·) and, in case this site has the opposite opinion, changes its opinion to the opinion of the selected site. The exponential times and choices according to the random kernel are independent for all x ∈ Z. Finally, observe that a special case of this model is the case where we fix a finite set N ⊂ Z of neighbours of 0. If we choose the random walk kernel p(x) = |N|^{-1} 1(x ∈ N), then a neighbour gets chosen with equal probability. Moreover,

f_i(x, ξ) = |N|^{-1} Σ_{y∈x+N} 1(ξ(y) = i) = Σ_{y∈Z} p(x − y)1(ξ(y) = i),   i = 0, 1,   (4.1)

can be understood as the frequency of type i in the neighbourhood x + N of x in configuration ξ. In general, we can set f_i(x, ξ) = Σ_{y∈Z} p(x − y)1(ξ(y) = i), i = 0, 1, to rewrite the rates from above to

0 → 1 at rate c(x, ξ) = f_1(x, ξ),   (4.2)
1 → 0 at rate c(x, ξ) = f_0(x, ξ).
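The exponential-clock description of the kernel voter model translates directly into a simulation. The following is a minimal Gillespie-type sketch of the dynamics (4.2) on a finite torus; the torus length, the nearest-neighbour kernel and the initial condition are illustrative assumptions and are not part of the model description above.

```python
import numpy as np

# Minimal Gillespie-type sketch of the kernel voter model (4.2): every site
# carries an independent rate-1 exponential clock; when the clock at x rings,
# x chooses a site y according to p(x - .) and adopts xi(y) if the opinions
# differ.  The torus length L, the nearest-neighbour kernel and the initial
# condition are illustrative assumptions.

rng = np.random.default_rng(1)

L = 200                                    # sites 0, ..., L-1 with periodic boundary
kernel = {-1: 0.5, 1: 0.5}                 # random walk kernel p (assumed nearest-neighbour)
offsets, probs = zip(*kernel.items())

def run_voter(T=10.0):
    xi = rng.integers(0, 2, size=L)        # initial opinions
    t = 0.0
    while True:
        t += rng.exponential(1.0 / L)      # superposition of the L rate-1 clocks
        if t > T:
            return xi
        x = rng.integers(L)                            # the site whose clock rang
        y = (x + rng.choice(offsets, p=probs)) % L     # site selected via p(x - .)
        xi[x] = xi[y]                                  # adopt y's opinion (no-op if equal)

print("fraction of opinion 1 at time T:", run_voter().mean())
```

Superposing the L independent rate-1 clocks into a single rate-L clock and then choosing the ringing site uniformly is equivalent to the description above and keeps the sketch short.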
For the Lotka-Volterra model we consider rate-changes

0 → 1 at rate c(x, ξ) = f_1(x, ξ)(f_0(x, ξ) + α_01 f_1(x, ξ)) = f_1(x, ξ) + (α_01 − 1)f_1²(x, ξ),   (4.3)
1 → 0 at rate c(x, ξ) = f_0(x, ξ)(f_1(x, ξ) + α_10 f_0(x, ξ)) = f_0(x, ξ) + (α_10 − 1)f_0²(x, ξ)

instead, where we used that f_0(x, ξ) + f_1(x, ξ) = 1 by definition. The definition will become clear in the Subsection to follow (choose λ = 1). Observe in particular that if we choose α_01, α_10 close to 1, the Lotka-Volterra model can be seen as a small perturbation of the voter model. Finally, we can consider biased voter models by multiplying the rate c(x, ξ) of the change 0 → 1 by a factor of (1 + τ), i.e. (4.2) becomes

0 → 1 at rate c(x, ξ) = (1 + τ)f_1(x, ξ),   (4.4)
1 → 0 at rate c(x, ξ) = f_0(x, ξ)

instead. For τ > 0 small we thus have a slight favour for type 1 and for τ < 0 small we have a slight favour for type 0. The biased Lotka-Volterra model is constructed analogously.

4.1.2 Spatial versions of the Lotka-Volterra model

As a further example consider spatial versions of the Lotka-Volterra model with finite range as introduced in [10] (they considered ξ(x) ∈ {1, 2} instead of {0, 1}). They use rates

0 → 1 at rate c(x, ξ) = [λ f_1(x, ξ) / (λ f_1(x, ξ) + f_0(x, ξ))] (f_0(x, ξ) + α_01 f_1(x, ξ)),   (4.5)
1 → 0 at rate c(x, ξ) = [f_0(x, ξ) / (λ f_1(x, ξ) + f_0(x, ξ))] (f_1(x, ξ) + α_10 f_0(x, ξ)),

where α_01, α_10 ≥ 0, λ > 0. Here f_i is as in (4.1) and N = {y : 0 < |y| ≤ R} with R ≥ 1. We can think of R as the finite interaction range of the model. [10] use this model to obtain results on the parameter regions for coexistence, founder control and spatial segregation of types 0 and 1 in the context of a model that incorporates short-range interactions and dispersal. As a conclusion they obtain that the short-range interactions alter the predictions of the mean-field model. Following [10] we can interpret the rates as follows. The second multiplicative factor of the rate governs the density-dependent mortality of a particle, the first factor represents the strength of the instantaneous replacement by a particle of opposite type. The mortality of type 0 consists of two parts: f_0 describes the effect of intraspecific competition, α_01 f_1 the effect of interspecific competition. [10] assume that the intraspecific competition is the same for both species. The replacement of a particle of opposite type is regulated by the fecundity parameter λ. The first factors of both rate-changes added together yield 1. Thus they can be seen as weighted densities of the two species. If λ > 1, species 1 has a higher fecundity than species 0.

4.1.3 Long-range limits

In [9], Mueller and Tribe show that the approximate densities of type 1 of rescaled biased voter processes, defined as in (4.1) and (4.4) with τ = θ/N, converge to continuous space-time densities which solve the heat equation with drift, driven by Fisher-Wright noise, namely

∂u/∂t = Δu/6 + 2θ(1 − u)u + √(4u(1 − u)) Ẇ.   (4.6)

Observe that [9] scale space by 1/N and consider N = {0 < |y| ≤ N^{−1/2}}. Hence, the number of neighbours of x ∈ Z/N is increasing, namely |N| = 2c(N)N^{1/2} with c(N) → 1 as N → ∞, and we thus obtain long-range interactions. Finally, they also rescale time by speeding up the rates of change c(x, ξ) as follows.
81 Let p(N ) (x) =  1 |N | 1(x  ∈ N ), then in the N th -model we have  0 → 1 at rate cN x, ξ N = N 1/2 + θN −1/2 |N | = 2c(N )(N + θ)f1 x, ξ 1 → 0 at rate c  N  x, ξ  N  =N  1/2  |N |  p  (N )  y∈Z/N  N  y∈Z/N  p(N ) (x − y)ξ N (y)  ,  (x − y)(1 − ξ N (y))  = 2c(N )N f0 x, ξ N . They fix θ ≥ 0, i.e. consider the case where the opinion of type 1 is slightly dominant. See the introduction and Theorem 2 of [9] for more details. In [3] it was shown that stochastic spatial Lotka-Volterra models, suitably rescaled in space and time, converge weakly to super-Brownian motion with linear drift. As they choose the parameters α01 , α10 (recall (4.3)) such that (N )  N αi(1−i) − 1 → θi ∈ R,  i = 0, 1  (4.7)  (see (H3) in [3]) their models can also be interpreted as small perturbations of the voter model. [3] extended the main results of Cox, Durrett and Perkins [2], which proved similar results for long-range voter models. Both papers treat the low density regime, i.e. where only a finite number of individuals of type 1 is present. Instead of investigating limits for approximate densities, both papers define measure-valued processes XtN by XtN =  1 N  √ x∈Z/(MN N )  ξtN (x)δx ,  i.e. they assign mass 1/N , N = N (N ) to each individual of type 1 and consider weak limits in the space of finite Borel measures on R. In particular, they establish the tightness of the sequence of measures and the uniqueness of the martingale problem, solved by any limit point. Note that both papers use a different scaling in comparison to [9]. Using the √ notation in [2],√for d = 1 they take N = N and the space is scaled by MN N with MN / N → ∞ (see for instance Theorem 1.1 of [2]√for d = 1) in the long-range setup. According to this notation, [9] used MN = N , which √ is at the threshold of the results in [2], but not included. By letting MN = N in our setup several non-linear terms will arise in our √ limiting SPDE below. Also note the brief discussion of the case where MN / N → 0 in d = 1 before (H3) in [2]. Additionally, [2] and [3] consider fixed kernel models in dimensions d ≥ 2 respectively d ≥ 3 with MN = 1 and a fixed random walk kernel q √satisfying √ some additional conditions such that p(x) = q( N x) on x ∈ Z/(MN N ). Finally, in Cox and Perkins [4], the results of [3] for d ≥ 3 are used to relate the limiting super-Brownian motions to questions of coexistence and survival of a rare type in the original Lotka-Volterra model.  82  4.1.4  Overview of results  In the present paper we first prove tightness of the local densities for scaling limits of more general particle systems. The generalization includes two features. Firstly, we shall extend the model in [9] to limits of small perturbations of the long-range voter model, including the setup from [10]. As the rates in [10] (see (4.5)) include taking ratios, we extend our perturbations to a set of power series (for extensions to polynomials of degree 2 recall (4.3)), thereby including certain analytic functions. Recall in particular from (4.7) that we shall allow the coefficients of the power series to depend on N . Secondly, we shall combine both long-range interaction and fixed kernel interaction for the perturbations. As we shall see, the tightness results will carry over. As a special case we shall be able to consider rescaled Lotka-Volterra models with long-range dispersal and short-range competition, i.e. 
where (4.3) gets generalized to 0 → 1 at rate c(x, ξ) = f1 (x, ξ) (g0 (x, ξ) + α01 g1 (x, ξ)) , 1 → 0 at rate c(x, ξ) = f0 (x, ξ) (g1 (x, ξ) + α10 g0 (x, ξ)) . Here fi (x, ξ), i = 0, 1 is the density corresponding to a long-range kernel pL and gi (x, ξ), i = 0, 1 is the density corresponding to a fixed kernel pF (also recall the interpretation of both multiplicative factors in Subsection 4.1.2). Finally, in the case of long-range interactions only we show the limit points are solutions of a SPDE similar to (4.6) but with a drift depending on the choice of our perturbation and small changes in constants due to simple differences in scale factors. Hence, we obtain a class of SPDEs that can be characterized as the limit of perturbations of the long-range voter model. If the limiting initial condition u0 satisfies u0 (x)dx < ∞, we can show the weak uniqueness of solutions to the limiting SPDE and therefore show weak convergence of the rescaled particle densities to this unique law. When there exists a fixed kernel, the question of uniqueness of all limit points and of identifying the limit remains an open problem. Also, when we consider long-range interactions only with u0 (x)dx = ∞ the proof of weak uniqueness of solutions to the limiting SPDE remains open. The proof of our results generalizes the work done in [9]. In [9], limits are considered for both the long-range contact process and the long-range voter process. Full details are given for the contact process. For the voter process, once the approximate martingale problem is derived, almost all of the remaining steps are left to the reader. Many arguments of our proof are similar to [9] but as additions and adaptations are needed due to our broader setup and as they did not provide details for the long-range voter model we shall sometimes be more detailed.  83  4.1.5  Outline of the paper  In Section 4.2 to follow we shall first set up our model and give the main results. Then we shall reformulate the model so that it can be approached by the methods used in [9]. A statement of the main results in the reformulated setting follows. In Section 4.3 we shall introduce a graphical construction for each approximating model ξ·N . This allows us to write out the time-evolution of our models. By integrating it against a test function and summing over x ∈ Z/N we finally obtain an approximate martingale problem for the N th -process. We define the approximate density A(ξtN )(x) as the average√density of particles of type 1 on Z/N in an interval centered at x of length 2/ N (see (4.10) below). By choosing a specific test function, the properties of which are under investigation at the beginning of Section 4.4, an approximate Green’s function representation for the approximate densities A(ξtN )(·) is derived towards the end of Section 4.4 and bounds on error-terms appearing in it are given. Making use of the Green’s function representation, tightness of A(ξtN )(·) is proven in Section 4.5. Here the main part of the proof consists in finding estimates on pth -moment differences. In Section 4.6 the tightness of the approximate densities is used to show tightness of the measure corresponding to the sequence of configurations ξtN . Finally, in the special case with no fixed kernel, every limit is shown to solve a certain SPDE. In Section 4.7 we prove that this SPDE has a unique weak solution if < u0 , 1 >< ∞. In this case, weak uniqueness of the limits of the sequence of approximate densities follows.  
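Before turning to the precise formulation of the models, here is a small numerical sketch of the two kinds of densities entering the perturbations just described: a long-range density f_1 built from the 2√N nearest sites of Z/N and a fixed-kernel density g_1, combined into the Lotka-Volterra-type flip rates with long-range dispersal and short-range competition. The configuration, the fixed kernel pF, the competition parameters α_01, α_10 and the periodic boundary are illustrative assumptions, and the speeding up of time in the N-th model is omitted.

```python
import numpy as np

# Sketch of the two densities entering the perturbed rates above: a long-range
# density f_1 (uniform over the 2*sqrt(N) nearest sites of Z/N) and a
# fixed-kernel density g_1, combined into the Lotka-Volterra-type flip rates
# with long-range dispersal and short-range competition.  The configuration,
# the fixed kernel pF, the parameters alpha01, alpha10 and the periodic
# boundary are illustrative assumptions; the speeding up of time in the
# N-th model is not shown.

rng = np.random.default_rng(2)

N = 400                                   # scaling parameter of the N-th model
L = 4000                                  # number of sites of Z/N kept in the sketch
xi = rng.integers(0, 2, size=L)           # a configuration xi: Z/N -> {0, 1}

r_long = int(np.sqrt(N))                  # |y - x| <= N^{-1/2} on Z/N <=> at most sqrt(N) sites away
pF = {-1: 0.5, 1: 0.5}                    # fixed random walk kernel of finite range (assumed)

def f1(x):
    # long-range density of type 1 around site x
    idx = [(x + k) % L for k in range(-r_long, r_long + 1) if k != 0]
    return xi[idx].mean()

def g1(x):
    # fixed-kernel density of type 1 at site x
    return sum(p * xi[(x + k) % L] for k, p in pF.items())

def lv_rates(x, alpha01=1.2, alpha10=0.8):
    fl1, gl1 = f1(x), g1(x)
    fl0, gl0 = 1.0 - fl1, 1.0 - gl1
    return (fl1 * (gl0 + alpha01 * gl1),   # 0 -> 1
            fl0 * (gl1 + alpha10 * gl0))   # 1 -> 0

print("flip rates (0->1, 1->0) at the middle site:", lv_rates(L // 2))
```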
4.2 4.2.1  Main Results of the Paper The model  We define a sequence of rescaled competing species models ξtN in dimension d = 1, which can be described as perturbations of voter models. In the N th model the sites are indexed by x ∈ N −1 Z. We label the state of site x at time t by ξtN (x) where ξtN (x) = 0 if the site is occupied at time t by type 0 and ξtN (x) = 1 if it is occupied by type 1. In what follows we shall write x ∼ y if and only if 0 < |x − y| ≤ N −1/2 , i.e. if and only if x is a neighbour of y. Observe that each x has 2c(N )N 1/2 ,  N →∞  c(N ) → 1  neighbours. The rates of change incorporate both long-range models and fixed kernel models with finite range. The long-range interaction takes into account the densities of the neighbours of x at long-range, i.e. (N )  fi  (x, ξ) ≡  1 √ 2c(N ) N  1(ξ N (y) = i), √ 0<|y−x|≤1/ N , y∈Z/N  i = 0, 1  84 and the fixed kernel interaction considers (N )  gi  (x, ξ) ≡  p(N (x − y))1(ξ N (y) = i),  y∈Z/N  i = 0, 1,  where p(x) is a random walk kernel on Z of finite range, i.e. 0 ≤ p(x) ≤ 1, x∈Z p(x) = 1 and p(x) = 0 for all |x| ≥ Cp . In what follows we shall often (N ) (N ) (N ) (N ) abbreviate fi (x, ξ) by fi and gi (x, ξ) by gi if the context is clear. Now define the rates of change of our configurations. At site x in configuration ξ N ∈ {0, 1}Z/N the coordinate ξ N (x) makes transitions (N )  + f1  (N )  + f0  0 → 1 at rate N f1 1 → 0 at rate N f0 (N )  where Gi  (N )  , Hi  (N )  g0  (N )  g0  (N )  G0  (N )  f1  (N )  G1  (N )  f0  (N )  + g1  (N )  + g1  (N )  H0  (N )  f1  (N )  H1  (N )  f0  (N )  ,  (N )  ,  (4.8)  , i = 0, 1 are power series as follows.  Hypothesis 4.2.1. We assume that (N )  Gi  (x) ≡  ∞  (m+1,N ) m  αi  x  (N )  and Hi  m=0 (m+1,N )  with i = 0, 1, αi that  ∞  sup  N ≥N0 i=0,1 m=0  ∞  (m+1,N ) m  x , x ∈ [0, 1]  βi  m=0  (m+1,N )  ∈ R, m ≥ 0 and that there exists N0 ∈ N such  (m+1,N )  ∨ 0 + βi  , βi  αi  (m+1,N )  +m βi  (x) ≡  ∧0  (m+1,N )  (m+1,N )  ∨ 0 + m αi  ∧0  < ∞.  Remark 4.2.2. The above rates determine indeed a unique, {0, 1}Z/N -valued Markov process ξtN for N ≥ N0 with N0 as in Hypothesis 4.2.1 as we now show. See for instance Theorem B3, p.3 in Liggett [8] and note the uniform boundedness assumption on the rates from p.1 of [8]. Following the notation in [8], let c(x, ξ N ) denote the rate at which the coordinate ξ N (x) flips from 0 to 1 or from 1 to 0 when the system is in state ξ N . Then using (4.8), 0 ≤ (N ) (N ) fi , gi ≤ 1, i = 0, 1 and Hypothesis 4.2.1 yields sup  c x, ξ N  sup  x∈Z/N ξ N ∈{0,1}Z/N  ≤N+  ∞  (m+1,N )  α0  m=0  ≡ N + C0 (N ) < ∞  (m+1,N )  + β0  ∨  ∞ m=0  (m+1,N )  α1  (m+1,N )  + β1  85 and sup  sup  x∈Z/N u∈Z/N ξ N ∈{0,1}Z/N  ≤ sup  sup  x∈Z/N u∼x ξ N ∈{0,1}Z/N  c x, ξ N − c x, ξuN (N )  (N )  x, ξ N − gi  gi  sup  + sup x∈Z/N  c x, ξ N − c x, ξuN  N Z/N i=0,1 u∈Z/N ξ ∈{0,1}  ≤ 2c(N )N 1/2 2(N + C0 (N )) + sup  x∈Z/N u∈Z/N  x, ξuN  C0 (N )  2p(N (x − u))C0 (N )  ≤ 2c(N )N 1/2 2(N + C0 (N )) + 2C0 (N ) < ∞,  where ξuN (v) =  ξ N (v), 1 − ξ N (v),  v = u, v = u.  Following [8], the two conditions are sufficient to ensure the above. Additionally, the closure in the space of continuous functions on {0, 1} Z/N of the operator Ωf ξ N = x c x, ξ N f (ξxN ) − f (ξ N ) , which is defined on the space of finite cylinder functions on {0, 1}Z/N , is the Markov generator of the process ξtN . (N )  (N )  (N )  (N )  Remark 4.2.3. Observe in particular that f0 +f1 = 1 and g0 +g1 = 1. 
(N ) Hence the special case of no fixed kernel can be obtained by choosing G i ≡ (N ) Hi , i = 0, 1 and we get (N )  + f1  (N )  + f0  0 → 1 at rate N f1 1 → 0 at rate N f0 via  (N )  G0  (N )  f1  (N )  G1  (N )  f0  (N )  ,  (N )  .  (4.9)  For the configurations ξtN ∈ {0, 1}Z/N we define approximate densities A(ξtN ) A(ξtN )(x) =  1 2c(N )N 1/2  ξtN (y), y∼x  x ∈ N −1 Z  (4.10)  (N )  x, ξtN . By linearly interpolating between sites and note that A(ξtN )(x) = f1 we obtain approximate densities A(ξtN )(x) for all x ∈ R.  Notation 4.2.4. Set C1 ≡ {f : R → [0, 1] cont.} and let C1 be equipped with the topology of uniform convergence on compact sets. We obtain that t → A(ξtN ) is cadlag C1 -valued, where we used that 0 ≤ A(ξtN )(x) ≤ 1 for all x ∈ N −1 Z.  Hence, we can consider the law of A(ξ N ) on the space of cadlag C1 -valued paths with the Skorokhod topology.  86  4.2.2  Main results  Before stating our main results we need some more notation. For f, g : N −1 Z → R, we set  Notation 4.2.5.  < f, g >=  1 N  f (x)g(x). x  Let ν be a measure on N −1 Z. Then we set < ν, f >=  f dν.  Remark 4.2.6. We can rewrite every configuration ξtN in terms of its corresponding measure. Let νtN ≡  1 N  δx 1(ξtN (x) = 1), x  then < ξtN , φ >=< νtN , φ > . Definition 4.2.7. Let S be a Polish space and let D(S) denote the space of cadlag paths from R+ to S with the Skorokhod topology. Following the first definition on p.148 of Perkins [11], we shall say that a collection of processes with paths in D(S) is C-tight if and only if it is tight in D(S) and all weak limit points are a.s. continuous. Recall that for Polish spaces, tightness and weak relative compactness are equivalent. Remark 4.2.8. In what follows we shall investigate tightness of {A(ξ ·N ) : N ≥ N0 } in D(C1 ) and tightness of {νtN : N ≥ N0 } in D(M(R)), where M(R) is the space of Radon measures equipped with the vague topology (M(R) is indeed Polish, see Dawson [5], Section 3.1.3). Theorem 4.2.9. Suppose that A(ξ0N ) → u0 in C1 . Let the transition rates of (N ) (N ) ξ N (x) be as in (4.8) with Gi , Hi , i = 0, 1 satisfying Hypothesis 4.2.1. Then N A(ξt ) : t ≥ 0 are C-tight as cadlag C1 -valued processes and the νtN : t ≥ 0 are C-tight as cadlag Radon measure valued processes with the vague topology. If A ξtNk , νtNk converges to (ut , νt )t≥0 , then νt (dx) = ut (x)dx for all t≥0  t ≥ 0.  (N )  Remark 4.2.10. The above applies in particular to models where G i are finite sums (see Hypothesis 4.2.1).  (N )  , Hi  Hypothesis 4.2.11. Let us consider the special case with no fixed kernel (see Remark 4.2.3). Additionally to Hypothesis 4.2.1 we assume that (m+1,N ) N →∞  αi  (m+1)  → αi  for all i = 0, 1 and m ≥ 0  87 with (take sgn(0) = 0) (m+1,N )  sgn αi  (m+1,N )  ≥ 0 for all N ≥ N0 or sgn αi  ≤ 0 for all N ≥ N0  for all i = 0, 1, m ≥ 0 and that ∞  lim  N →∞  =  (m+1,N )  αi  i=0,1 m=0 ∞  (m+1)  αi  i=0,1 m=0  (m+1,N )  ∨ 0 + m αi (m+1)  ∨ 0 + m αi  ∧0  ∧0  (4.11)  .  Remark 4.2.12. The additional conditions of Hypothesis 4.2.11 are necessary to transform the given rates into rates with positive coefficients in a uniform way in Subsection 4.2.3 and to later characterize limit points of the approximate densities by taking limits in N → ∞ inside infinte sums in (4.55). Definition 4.2.13. Under the assumptions of Theorem 4.2.9 and Hypothesis 4.2.11 we let for x ∈ [0, 1], (N )  Gi (x) ≡ lim Gi N →∞  (x) = lim  ∞  N →∞  (m+1,N ) m  αi  x  =  ∞  (m+1) m  αi  x , i = 0, 1.  
m=0  m=0  This is well-defined by (4.11) and Royden [12], Proposition 11.18. Theorem 4.2.14. We obtain under the assumptions of Theorem 4.2.9 and Hypothesis 4.2.11 for the special case with no fixed kernel that the limit points of A(ξtN ) are continuous C1 -valued processes ut which solve ∂u ∆u = + (1 − u)u {G0 (u) − G1 (1 − u)} + ∂t 6  ˙ 2u(1 − u)W  (4.12)  with initial condition u0 . If we assume additionally < u0 , 1 >< ∞, then ut is the unique (in law) [0, 1]-valued solution to the above SPDE. Remark 4.2.15. As an example consider spatial versions of the Lotka-Volterra model as introduced in Subsection 4.1.2. In what follows we shall choose the competition and fecundity parameters near one and we shall consider the longrange case. Namely, the model exhibits the following rates: (N )  0 → 1 at rate N  λ(N ) f1 (N )  λ(N ) f1  (N )  + f0  (N )  1 → 0 at rate N  f0  (N ) λ(N ) f1  +  (N ) f0  (N )  + α01 f1  (N )  + α10 f0  f0 f1  (N ) (N )  ,  (N ) (N )  .  We suppose that λ(N ) ≡ 1 +  α01 α10 λ (N ) (N ) , α01 ≡ 1 + , α10 ≡ 1 + . N N N  88 (N )  Using f0  (N )  + f1  = 1 we can therefore rewrite the rates as (N )  0 → 1 at rate (N + λ ) f1 (N )  1 → 0 at rate N f0  1+ (N )  = N f0  1+  α01 (N ) f N 1  α10 (N ) f N 0  1+  n≥0  α10 (N ) f N 0  −  n≥0  −  λ (N ) f N 1 (N )  f0  n  λ (N ) f N 1  k  k≥0  ,  (4.13)  n  λ N  k n≥k  n k  −  λ N  n−k  .  (N )  Here we used that fi ≤ 1, i = 0, 1 and that λN → 0 for N → ∞. We can use the explicit calculations for a geometric series, in particular that we have 1 n−k n n n≥k |q| n≥0 n|q| < ∞ and k = (1−|q|)k+1 for |q| < 1 to check that Hypothesis 4.2.1 and Hypothesis 4.2.11 are satisfied. Using Theorem 4.2.14 we further obtain that the limit points of A(ξtN ) are continuous C1 -valued processes ut which solve ∂u ∆u = + (1 − u)u {(λ + u (α01 − λ )) − (−λ + (1 − u) (α10 + λ ))} ∂t 6 ˙ + 2u(1 − u)W ∆u ˙ + (1 − u)u {λ − α10 + u (α01 + α10 )} + 2u(1 − u)W = 6  by rewriting the above rates (4.13) in the form (4.9) and taking the limit for N → ∞. For < u0 , 1 >< ∞, ut is the unique weak [0, 1]-valued solution to the above SPDE.  4.2.3  Reformulation  We proceed as in [9]. Recall from Hypothesis 4.2.1 and (4.8) that the rates of change are given by 0 → 1 at rate  (N ) N f1  +  (N ) g0  ∞  (m,N ) α0  (N ) f1  m  +  (N ) g1  ∞  m=1  m=1  ∞  ∞  (m,N )  β0  (N )  f1  m  ,  (4.14) (N )  1 → 0 at rate N f0 (m,N )  (N )  + g0  m=1  (m,N )  α1  (N )  f0  m  (N )  + g1  (m,N )  β1  (N )  f0  m  ,  m=1  (m,N )  where αj , βj ∈ R for all j = 0, 1, m ∈ N. Following [9] we shall model each term in the rate-changes via independent (m,N ) families of i.i.d. Poisson processes. For instance, if α0 is non-negative, the (N ) (m,N )  (N )  m  f1 of the rate-change 0 → 1 in (4.14) can be modeled term g0 α0 via i.i.d. Poisson processes Qt (x; y1 , . . . , ym ; z) : x, y1 , . . . , ym , z ∈ N −1 Z  89 of rate  (m,N )  α0 p(N (x − z)). (2c(N ))m N m/2 At a jump of Qt (x; y1 , . . . , ym ; z) the voter at x adopts the opinion 1 provided that all of y1 , . . . , ym have opinion 1 and z has opinion 0. (m,N ) (m,N ) As we want to allow the αi , βi to be negative, too, we first rewrite (N ) (N ) (N ) (N ) (4.14) with the help of f0 + f1 = 1 and g0 + g1 = 1 in a form where all resulting coefficients are non-negative. Corollary 4.2.16. We can rewrite our transitions as follows. 0 → 1 at rate N − θ(N )  (4.15)  (N ) f1  +  (N ) f1    1 → 0 at rate N − θ(N )     (N ) f0  +  (N ) f0      (N ) (N ) gi  ai i=0,1  (N ) f0  (k,m,N )  , qij  ∈ R+ , i, j, k = 0, 1, m ≥ 2.  
(N )  (N )  Proof. We shall drop the superscripts of fi , gi , i = 0, 1 in what follows to simplify notation. (m,N ) Suppose for instance α0 < 0 for some m ≥ 1 in (4.14). Using that m−1  −xm = (1 − x)  k=1  xk − x  and recalling that 1 − f1 = f0 we obtain 2k (2k+1,N ) 2k+1 g 0 α0 f1  = g0  (2k+1,N ) −α0  (2k+1,N )  f1l + α0  f0  .  f1  l=1  Finally, we can use g0 = 1 − g1 to obtain 2k (2k+1,N ) 2k+1 f1  g 0 α0  (2k+1,N )  = g0 −α0  (2k+1,N )  f0 l=1  f1l +g1 −α0  (2k+1,N )  f1 +α0  f1 .  All terms on the r.h.s. but the last can be accommodated into an existing representation (4.15) as follows: (0,m,N )  q00  (N )  a1  (0,m,N )  → q00  (N )  → a1  (2k+1,N )  + −α0  (2k+1,N )  + −α0  .    ,  m−2     m≥2,i,j=0,1  (N )  , bi  ,    (1,m,N ) (N ) (N ) qij gi fj  +  i=0,1  (N )  f1  qij  m≥2,i,j=0,1  (N ) (N ) bi gi  with corresponding θ (N ) , ai  (N )  (0,m,N ) (N ) (N ) gi fj  +    m−2   for 2 ≤ m ≤ 2k + 1,  90 Finally, we can assimilate the last term into the first part of the rate 0 → 1, i.e. we replace (2k+1,N ) θ(N ) → θ(N ) − α0 . As we use the representation (4.15), a change in the first part of the rate 0 → 1 also impacts the rate 1 → 0 in its first term. Therefore we have to (2k+1,N ) (2k+1,N ) fix the rate 1 → 0 by adding a term of (−α0 )f0 = g0 f0 (−α0 )+ (2k+1,N ) g1 f0 (−α0 ) to the second and third term of the rate, i.e. by replacing (N )  → b0  (N )  → b1  b0 b1  (N )  + −α0  (2k+1,N )  ,  (N )  + −α0  (2k+1,N )  .  The case m = 2k, k ≥ 1 follows similarly and the general case with multiple negative α s and/or β s now follows inductively. Remark 4.2.17. The above construction yields the following non-negative coefficients: θ(N ) ≡ (0,m,N )  q00  (N )  a0  ≡  ∞ j=0,1 n=1 ∞  (1,N )  + (0,m,N )  (N )  a1  (m,N )  ≡α0  (1,N )  ≡β0  + (0,m,N )  (N )  b0  (m,N )  +  (m,N )  1 α0  (m,N )  (1,N )  ∞ n=1  (n,N )  n=1  (n,N )  −β0  ∞  <0 + (0,m,N )  ≥0 ,  q10 ∞  (n,N )  ≡  (n,N )  n=1  1 α1  −α0  ∞  <0 + (1,m,N )  ≥0 ,  ≥0 +  (n,N )  −α0  ∞  n=1  1 β0  q01 ∞ n=1  (n,N )  1 α0  (n,N )  1 βj  <0 ,  <0 ,  1 α1  ≥0 +  −α1  1 α1  (n,N )  < 0 + −βj  n=1  (n,N )  ≡β0  (1,N )  −α1  (1,N )  ∞  ≡α1  ≥0 +  1 β0  n=1  q11  (1,N )  (n,N )  n=1  q01  (n,N )  1 α0  1 α0  ∞  (n,N )  1 αj  (n,N )  −α0  n=m  ≡α0  (n,N )  −αj  ≡  (n,N )  −β1  <0 +  ∞ n=1  (n,N )  1 β0  (n,N )  −β1 ∞  <0 (n,N )  1 β1  (n,N )  −β0  n=m  (n,N )  1 α0  (n,N )  −β1 ∞  (n,N )  1 β1  (n,N )  (n,N )  1 β1  (n,N )  −β0  (n,N )  1 β0  , <0 ,  <0  −α1  n=m  <0  <0  (n,N )  1 α1  , <0 ,  <0 (n,N )  1 β0  <0  ,  91 (1,m,N ) q00  (N )  b1  (m,N ) ≡α1 1 (1,N )  ≡β1  (1,N )  (1,m,N )  q10  (m,N )  ≡β1  ≥0 ,  (n,N ) −α0  n=1  (1,m,N ) q11 ∞  ≥0 +  1 β1  ∞  +  (m,N ) α1  (m,N )  1 β1  1  −α1  ∞  <0 +  (n,N )  n=1  (n,N )  −β1  n=m  (n,N )  n=1  (n,N ) α0  ≡  ∞  (n,N )  1 α1  1 β1  <0 ,  <0 (n,N )  (n,N )  −β0  1 β0  <0  ,  ≥0 .  By Hypothesis 4.2.1 this implies in particular that there exists N 0 ∈ N such that (k,m,N )  qij  sup N ≥N0  i,j,k=0,1 m≥2  < ∞.  Remark 4.2.18. Observe that we can rewrite the transition rates in (4.15) (N ) (N ) such that ai = bi = 0, i = 0, 1, i.e. 0 → 1 at rate  (4.16) (N )  N − θ(N ) f1  (N )  m−2  (N )  (0,m,N ) (N ) (N ) gi fj  f1  (1,m,N ) (N ) (N ) gi fj  f0  qij  + f1  1 → 0 at rate (N )  N − θ(N ) f0  (N )  (N )  (N )  a0  .  
= 1 we can change for instance  (0,2,N ) (N ) (N ) g0 f0  (0,2,N )  , q01  (0,2,N )  + q00  (N )  + f1  + q00  (0,2,N )  with a0 , q00  m−2  (N )  m≥2,i,j=0,1 (N )  (N ) (N )  qij  + f0  Indeed, using that f0 a0 g0  ,  m≥2,i,j=0,1  (N )  f1  0  (0,2,N ) (N ) (N ) g0 f1  + q01  (N )  f1  0  nonnegative into  (N ) (N ) f0  g0  (N )  f1  0  (N )  + a0  (0,2,N )  + q01  (N ) (N ) f1  g0  (N )  f1  0  ,  where the new coefficients are nonnegative again. Recall Remark 4.2.17 together with Hypothesis 4.2.1. We now introduce hy(k,m,N ) potheses directly on the qij as the primary variables. Observe in particular that they will be assumed to be non-negative. Hypothesis 4.2.19. Assume that there exists N0 ∈ N such that (k,m,N )  qij  sup N ≥N0 (k,m,N )  i,j,k=0,1 m≥2  <∞  for non-negative qij , i, j, k = 0, 1 and m ≥ 2. We can use this condition as in Remark 4.2.2 to show that the rewritten rates can be used to determine a {0, 1}Z/N -valued Markov process ξtN for N ≥ N0 .  92 Hypothesis 4.2.20. In the special case with no fixed kernel, i.e. where (k,m,N )  q00  (k,m,N )  = q10  (k,m,N )  and q01  (k,m,N )  = q11  (k,m,N )  ⇐⇒ q0j  (k,m,N )  = q1j  , j = 0, 1  (see Remark 4.2.3 and Remark 4.2.17) we assume that N →∞  θ(N ) → θ,  (k,m,N ) N →∞  (k,m)  → q0j  q0j and  (k,m,N )  lim  N →∞  for all j, k = 0, 1 and m ≥ 2  q0j j,k=0,1 m≥2  (k,m)  =  q0j  .  (4.17)  j,k=0,1 m≥2 (k,m,N )  Remark 4.2.21. Observe that if we assume that the q0j (m,N ) αj ,j  , j, k = 0, 1, m ≥ 2  were obtained from = 0, 1, m ≥ 1 as described earlier in Remark 4.2.17 and Remark 4.2.18, then (4.11) implies (4.17). Indeed, use for instance [12], Proposition 11.18 together with Remark 4.2.17. Notation 4.2.22. For k = 0, 1 and a ∈ R we let Fk (a) =  1 − a, k = 0, a, k = 1.  By the above it remains to prove the following theorem. The claim of Theorem 4.2.9 will then follow immediately and Theorem 4.2.14 will follow using Corollary 4.2.24 below. Theorem 4.2.23. Suppose that A(ξ0N ) → u0 in C1 . Let the transition rates (k,m,N ) of ξ N (x) be as in (4.16) and qij satisfying Hypothesis 4.2.19. Then the N A(ξt ) : t ≥ 0 are C-tight as cadlag C1 -valued processes and the νtN : t ≥ 0 are C-tight as cadlag Radon measure valued processes with the vague topology. converges to (ut , νt )t≥0 , then νt (dx) = ut (x)dx for all If A ξtNk , νtNk t≥0  t ≥ 0. For the special case with no fixed kernel we further obtain that if Hypothesis 4.2.20 holds, then the limit points of A(ξtN ) are continuous C1 -valued processes ut which solve ∆u ∂u = + ∂t 6  (1−2k) k=0,1  (k,m)  q0j  m≥2,j=0,1  Fj (u) (F1−k (u))  m−1  ˙ Fk (u)+ 2u(1 − u)W  (4.18) with initial condition u0 . If we assume additionally < u0 , 1 >< ∞, then ut is the unique (in law) [0, 1]-valued solution to the above SPDE. (k,m)  In the next Corollary we assume there is no fixed kernel and the q0j (m)  0, 1, m ≥ 2 are defined from the αj Remark 4.2.18 without the N ’s.  , j, k =  , j = 0, 1, m ≥ 1 as in Remark 4.2.17 and  93 Corollary 4.2.24. Under the assumption above, the SPDE (4.18) may be rewritten as ut =  ∞  ∞  ∆u (m+1) (m+1) m α1 (1−u)m + α0 u −u(1−u) +(1−u)u 6 m=0 m=0  ˙ . 2u(1 − u)W  Proof. Indeed first use the definition of Fk (a) and collect terms appropriately. Then recall how we rewrote the transition rates in Corollary 4.2.16 and Remark 4.2.18 to obtain (4.16) from (4.14). Now analogously rewrite (4.12) as (4.18). Before we start proving the above we need some notation. In what follows we shall consider eλ (x) = exp(λ|x|) for λ ∈ R and we let C = {f : R → [0, ∞) cont. 
with |f (x)eλ (x)| → 0 as |x| → ∞ for all λ < 0} be the set of non-negative continuous functions with slower than exponential growth. Define f λ = sup |f (x)eλ (x)| x  and give C the topology generated by the norms ( ·  λ:  λ < 0).  Remark 4.2.25. We work on the space C instead of C1 because in Section 4.4 we shall introduce functions 0 ≤ ψtz (x) ≤ CN 1/2 and shall show in Lemma 4.5.2(b) that they converge in C to the Brownian transition density p 3t , z − x . Finally, ˆ )(z) ≡ in Section 4.5 we shall derive estimates on pth -moment differences of A(ξ t  A(ξt )(z)− < ξ0 , ψtz >, where A(ξ0 ) → u0 in C to finally establish the tightness claim for the sequence of approximate densities A(ξ N )(x).  Notation 4.2.26. For x ∈ N −1 Z, f : N −1 Z → R and δ > 0 we shall use D(f, δ)(x) = sup{|f (y) − f (x)| : |y − x| ≤ δ, y ∈ N −1 Z}, (N )  ∆(f )(x) =  N −θ 2c(N )N 1/2  y∼x  (f (y) − f (x)),  where we suppress the dependence on N .  (4.19)  94  4.3  An Approximate Martingale Problem  We shall now derive a graphical construction and evolution in time of our approximating processes ξtN . The graphical construction uses independent families of i.i.d. Poisson processes: Pt (x; y) : x, y ∈ N −1 Z i.i.d. P.p. of rate  N − θ(N ) , 2c(N )N 1/2  (4.20)  and for m ≥ 2, i, j, k = 0, 1,  Qm,i,j,k (x; y1 , . . . , ym ; z) : x, y1 , . . . , ym , z ∈ N −1 Z t (k,m,N )  i.i.d. P.p. of rate  qij  (2c(N ))m N m/2  p(N (x − z)).  Note that we suppress the dependence on N in the family of Poisson processes Pt (x; y) and Qm,i,j,k (x; y1 , . . . , ym ; z). t At a jump of Pt (x; y) the voter at x adopts the opinion of the voter at y provided that y has the opposite opinion. At a jump of Qm,i,j,k (x; y1 , . . . , ym ; z) the voter at x adopts the opinion 1−k t provided that y1 has opinion j, all of y2 . . . , ym have opinion 1 − k and z has opinion i. This yields the following SDE to describe the evolution in time of our approximating processes ξtN : t  ξtN (x) =ξ0N (x) + y∼x  0  N N N N (y) − δ1 ξs− (x) δ0 ξs− (y) (x) δ1 ξs− δ0 ξs−  (4.21) × dPs (x; y) + k=0,1  (1 − 2k)  t m≥2,i,j=0,1 y1 ,...,ym ∼x z  0  N δk ξs− (x)  m  N × δj ξs− (y1 )  N N (z) dQm,i,j,k (x; y1 , . . . , ym ; z) δ1−k ξs− (yl ) δi ξs− s l=2  for all x ∈ N −1 Z. We now explain why the above system (4.21) has a unique solution. The problem with (4.21) is that although there is a first flip time for ξtN (x) (the jump rate there is bounded as can be shown using Hypothesis 4.2.19 and as the sum of Poisson processes is a Poisson process again, we have at most one flip at a time), what to do at this time depends on the states of the finite number of “communicating sites” y, z ∈ Z/N, y ∼ x, |z − x| ≤ Cp /N . The equations for each of these sites will in turn involve its communicating sites. If we now try to go backwards in time to determine the configuration at x ∈ Z/N , starting with t > 0, we may encounter an accumulation of jumps before time zero.  95 To see that the described problem cannot occur, we shall use the uniform boundedness of the flip-rates together with the finite interaction range R ≡ √ N −1 ( N ∨ Cp ) , where x denotes the next largest integer. Remark 4.3.1. We have avoided random walk kernels p with infinite range for the fixed kernel interactions to simplify the analysis of the above jump equations (4.21). We show that up to time t, the evolution of ξ·N can be divided up into finite random “islands” that do not communicate with each other. 
Indeed, two regions of Z/N do not interact with each other up to time t, if we can find an intermediate region of length 2R where no flips occur in [0, t]. We can now partition Z/N into such regions. The sums of flips for each region up to time t are independent and can be bounded by i.i.d. Poisson random variables as follows. The flips in the region centered at Z2R, Z ∈ Z can be bounded by PtZ ≡  Pt,x , x∈Z/N, Z2R−R<x≤Z2R+R  where Pt,x ≡  Qm,i,j,k (x; y1 , . . . , ym ; z). t  Pt (x; y) + y∼x  m≥2,i,j,k=0,1 y1 ,...,ym ∼x,|z−x|≤Cp /N  Using (4.20) we obtain that each PtZ has mean t2RN 2c(N )N 1/2  N − θ(N ) 2c(N )N 1/2  2c(N )N 1/2  + m≥2,i,j,k=0,1    = t2RN N − θ(N ) +  (k,m,N )  qij  m  (2c(N ))m N m/2  (k,m,N )   qij  m≥2,i,j,k=0,1     |z−x|≤Cp /N  p(N (x − z))  .  Thus by Hypothesis 4.2.19, (PtZ )Z∈Z is a sequence of i.i.d. Poisson random variables of finite mean. Let XZ be a sequence of random variables with XZ = 1 if PtZ > 0 and XZ = 0 if PtZ = 0. Then (XZ )Z∈Z is an i.i.d. sequence of Bernoulli variables with P(X0 = 0) > 0. In particular we can show that with probability one there exists a random sequence . . . , Z−2 , Z−1 , Z0 , Z1 , Z2 , . . . such that 1 ≤ |Zi − Zi−1 | < ∞ for all i ∈ Z and XZi = 0. Hence, up until time t, we can partition Z/N = ∪i∈Z (Zi 2R, Zi+1 2R] ∩ Z/N into finite regions that do not communicate with each other up to time t. For all x of each region we can now uniquely solve (4.21) on [0, t]. As the region has finite length, we only need to consider a finite number of sites. To see this, note  96 that as Pt,x is a Poisson random variable of finite mean for each x in the region, we can have at most a finite number of flips in each region up until time t. Now iterate on successive intervals of length t to uniquely solve the entire system for all times. The interested reader is referred to the proof of Proposition 2.1(a) in [4] for how to solve such systems. Remark 4.3.2. For an alternative proof of the partition in non-communicating islands the reader is referred to Theorem 2.1 in Durrett [6]. The ideas of [6] can be applied to our setup but as we only consider dimension d = 1 the more straightforward calculation given above was possible. Having solved the equation (4.21) it remains to ensure that the solution is the spin-flip system with rates c(x, ξ N ) given by (4.16). Recall the end of Remark 4.2.2 and the end of Hypothesis 4.2.19. By Theorem I.5.2 in [7] the process ξ N constructed from the given rates is the unique in law solution to the martingale problem for Ω, where Ωf ξ N =  c x, ξ N x  f (ξxN ) − f (ξ N )  with f in the space of finite cylinder functions on {0, 1}Z/N . Hence it remains to show that for the Markov process ξ N constructed in (4.21), f (ξtN ) − f (ξ0N ) −  t 0  = f (ξtN ) − f (ξ0N ) −  Ωf (ξsN )ds t 0  x  c(x, ξsN ) f ((ξsN )x ) − f (ξsN ) ds  is a martingale for all f in the space of finite cylinder functions on {0, 1}Z/N . Since f depends on finitely many coordinates, this is an exercise in stochastic calculus for jump processes, see Remark 4.3.4 below. In what follows we shall often drop the superscripts w.r.t. N to simplify notation. We now derive the approximate martingale problem. We take a test function φ : [0, ∞)×N −1 Z → R with t → φt (x) continuously differentiable and satisfying T 0  < |φs | + φ2s + |∂s φs |, 1 > ds < ∞  (4.22)  (this condition ensures that the following integration and summation are welldefined). 
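To make the graphical construction concrete, the following is a minimal simulation sketch (Python, illustration only) of the pure long-range voter part of the flip rates on a finite torus approximating Z/N. It replaces the exact Poisson clocks of (4.20)-(4.21) by a crude tau-leaping time discretisation, omits the perturbation clocks Q^{m,i,j,k} and sets theta(N) = 0; all numerical parameters (N, the window size, the time step, the torus length) are arbitrary choices for illustration and are not taken from the text.

```python
import numpy as np

# Illustration only: the pure long-range voter part of the flip rates on a finite
# torus approximating Z/N.  The Poisson clocks of (4.20)-(4.21) are approximated by
# a tau-leaping time discretisation; perturbation terms are omitted, theta(N) = 0.

rng = np.random.default_rng(1)
N = 400                       # scaling parameter; sites are x = k/N
L = 4                         # simulate x in [0, L) with periodic boundary conditions
M = L * N                     # number of sites
w = int(np.sqrt(N))           # y ~ x  iff  0 < |y - x| <= sqrt(N)/N, i.e. at most w lattice steps
xi = (rng.random(M) < 0.5).astype(np.int8)     # initial opinions in {0, 1}

def neighbour_average(xi):
    """Average opinion over the 2w long-range neighbours of each site (periodic)."""
    c = np.cumsum(np.concatenate([xi, xi, xi]))            # cumulative sums of three copies
    idx = M + np.arange(M)                                 # positions of the middle copy
    win = c[idx + w] - c[idx - w - 1] - xi                 # sum over |y - x| <= w, minus the centre
    return win / (2 * w)

T, dt = 0.5, 1.0 / (20 * N)   # per-site flip rates are O(N), so take dt << 1/N
for _ in range(int(T / dt)):
    A = neighbour_average(xi)
    frac_disagree = np.where(xi == 1, 1.0 - A, A)          # fraction of neighbours with the other opinion
    rate = N * frac_disagree                               # per-pair rate (N - theta)/(2 c(N) N^{1/2}) times
                                                           # the number of disagreeing neighbours
    xi = np.where(rng.random(M) < rate * dt, 1 - xi, xi)   # tau-leaping step for the Poisson clocks

print("approximate density at the first few sites:", np.round(neighbour_average(xi)[:8], 3))
```

The helper neighbour_average plays the role of the approximate density A(xi)(x), i.e. the average opinion over the 2c(N)N^{1/2} long-range neighbours of x, which is the quantity whose tightness is studied in Section 4.5.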
We apply integration by parts to ξt (x)φt (x), sum over x and multiply  97 by  1 N,  to obtain for t ≤ T (recall that by Remark 4.2.6 < ξt , φ >= < νt , φ >)  < ν t , φt >  (4.23) t  =< ν0 , φ0 > +  < νs , ∂s φs > ds 0  + +  1 N 1 N  t x y∼x  0  ξs− (y) (φs (x) − φs (y)) dPs (x; y)  (4.24)  ξs− (x)φs (x) (dPs (y; x) − dPs (x; y))  (4.25)  t x y∼x  + k=0,1  0  (1 − 2k)  m≥2,i,j=0,1 m  × δj (ξs− (y1 ))  t  1 N  δk (ξs− (x)) x y1 ,...,ym ∼x z  0  δ1−k (ξs− (yl )) δi (ξs− (z)) φs (x)dQm,i,j,k (x; y1 , . . . , ym ; z). s l=2  (4.26) The main ideas for analyzing terms (4.24) and (4.25) will become clear once we analyze term (4.26) in detail. The latter is the only term where calculations changed seriously compared to [9]. Hence, we shall only summarize the results for terms (4.24) and (4.25) in what follows. We break term (4.24) into two parts, an average term and a fluctuation term and after proceeding as for term (3.1) in [9] we obtain t  (4.24) = 0  (1)  < νs− , ∆(φs ) > ds + Et (φ),  where (1)  Et (φ) ≡  1 N  t x y∼x  ξs− (y) (φs (x) − φs (y)) (dPs (x; y) − d P (x; y) s ) .  0  (1)  (1)  We have suppressed the dependence on N in Et (φ). Et (φ) is a martingale (recall that if N ∼ Pois(λ), then Nt −λt is a martingale with quadratic variation N t = λt) with predictable brackets process given by d E (1) (φ)  t  2  1 ≤ D φt , √ N  λ  < 1, e−2λ > dt.  (4.27)  Alternatively we also obtain the bound d E (1) (φ)  t  ≤ 4 φt  0<  |φt | , 1 > dt  (4.28)  with φt 0 = supx |φt (x)|. (N ) The second term (4.25) is a martingale which we shall denote by Mt (φ) (in what follows we shall drop the superscripts w.r.t. N and write Mt (φ)). It  98 can be analyzed as the martingale Zt (φ) of (3.3) in [9]. We obtain in particular that M (φ)  t  =2  t  N − θ(N ) N  < ξs− , φ2s > ds −  0  t  < A(ξs− φs ) , ξs− φs > ds . 0  (4.29)  Using that 1 2c(N )N 1/2  |A(ξs− φs )(x)| ≡  1 2c(N )N 1/2  ≤  we can further dominate M (φ)  t  ξs− (y)φs (y) y∼x  y∼x  |φs (y)| ≤ sup |φs (y)|. y∼x  by  t  M (φ)  t  ≤ C(λ)  φs 0  2 λ<  1, e−2λ > ∧ ( φs  0<  ξs− , |φs | >) ds.  (4.30)  We break the third term (4.26) into two parts, an average term and a fluctuation term. Recall Notation 4.2.22 and observe that if we only consider a ∈ {0, 1} we have Fk (a) = δk (a). (4.26) = k=0,1  (1 − 2k)  m≥2,i,j=0,1  t  1 N  δk (ξs− (x)) δj (ξs− (y1 )) (k,m,N )  m  × +  δ1−k (ξs− (yl )) δi (ξs− (z)) φs (x) l=2 (3) Et (φ)  = k=0,1  (1 − 2k)  m  ×  l=2  t  (k,m,N )  qij  0  m≥2,i,j=0,1  1 2c(N )N 1/2  qij  (2c(N ))m N m/2  1 N  = k=0,1  (1 − 2k)  k=0,1  (1 − 2k)  t 0  m≥2,i,j=0,1 m−1  1 N  Fj (A(ξs− )(x)) x (3)  t  (k,m,N )  m−1  p(N (x − z))δi (ξs− (z))  Fi ((pN ∗ ξs− )(x))δk (ξs− (x)) φs (x)ds + Et (φ)  qij m≥2,i,j=0,1  δj (ξs− (y1 )) y1 ∼x  (3)  (k,m,N )  × (F1−k ◦ A(ξs− ))  x  z  qij  × (F1−k (A(ξs− )(x)))  p(N (x − z))ds  1 2c(N )N 1/2  δ1−k (ξs− (yl )) yl ∼x  × δk (ξs− (x)) φs (x)ds + Et (φ)  =  0  x y1 ,...,ym ∼x z  0  < (Fj ◦ A(ξs− )) (3)  Fi ◦ (pN ∗ ξs− ) (δk ◦ ξs− ) , φs > ds + Et (φ),  99 where for x ∈ Z/N we set pN ∗ f (x) ≡  p(N (x − z))f (z)  z∈Z/N  (4.31)  and (3)  Et (φ) ≡  k=0,1  (1 − 2k)  m≥2,i,j=0,1 m  × δj (ξs− (y1 ))  1 N  t  δk (ξs− (x)) x y1 ,...,ym ∼x z  0  δ1−k (ξs− (yl )) δi (ξs− (z)) φs (x) l=2 (k,m,N )  dQm,i,j,k (x; y1 , . . . , ym ; z) − s  ×  qij  (2c(N ))m N m/2 (3)  p(N (x − z))ds . (3)  We have suppressed the dependence on N in Et (φ). 
Here, Et (φ) is a martingale with predictable brackets process given by E (3) (φ)  t  ≤  (k,m,N )  qij m≥2,i,j,k=0,1  m  1 N2  1 2c(N )N 1/2 y ∼x  x l=0  (4.32)  l  t  × ≤ ≤  z  p(N (x − z)) (k,m,N )  qij m≥2,i,j,k=0,1  1 N  0  1 N  φ2s (x)ds t 0 t  (k,m,N )  qij m≥2,i,j,k=0,1  < φ2s , 1 > ds φs  0  2 λ<  e−2λ , 1 > ds.  Taking the above together we obtain the following approximate semimartingale decomposition from (4.23). t  < νs , ∂s φs > ds  < νt , φt >= < ν0 , φ0 > +  (4.33)  0 t  + 0  (1)  < νs− , ∆(φs ) > ds + Et (φ) + Mt (φ)  + k=0,1  (1 − 2k)  t  (k,m,N )  qij m≥2,i,j=0,1  0  < (Fj ◦ A(ξs− ))  × (F1−k ◦ A(ξs− ))m−1 Fi ◦ (pN ∗ ξs− ) (δk ◦ ξs− ) , φs > ds (3)  + Et (φ).  100 Remark 4.3.3. Note that this approximate semimartingale decomposition provides the link between our approximate densities and the limiting SPDE in (4.18) for the case with no fixed kernel. Indeed, uniqueness of the limit ut of A(ξtN ) will be derived by proving that ut solves the martingale problem associated with the SPDE (4.18). Remark 4.3.4. For all f in the space of finite cylinder functions on {0, 1} Z/N the Markov process ξ(= ξ N ) constructed in (4.21) yields a martingale t  f (ξt )−f (ξ0 )−  t  Ωf (ξs )ds = f (ξt )−f (ξ0 )− 0  0  x  c(x, ξs ) (f ((ξs )x ) − f (ξs )) ds.  (4.34) Indeed, every finite cylinder function on {0, 1}Z/N , f (ξ) = f (ξ(x1 ), . . . , ξ(xn )), n ∈ N, ξ(xi ) ∈ {0, 1}, xi ∈ Z/N can be rewritten as a linear combination of functions of the form g(ξ) ≡ ξ(xi1 ) · · · ξ(xim ), where m ∈ N, m ≤ n and {i1 , . . . , im } ⊂ {1, . . . , n}. By linearity we only need to consider functions of this form. Now rewrite (4.21) as t  ξt (x) = ξ0 (x) + 0  c(x, ξs− ) (1 − 2ξs− (x)) ds + mart.  (4.35)  by breaking both integrals in (4.21) into an average term and a fluctuation term. Observe here that we can rewrite the sum of both average terms as in (4.35) by using for example δ0 (ξs− (x))  δ1 (ξs− (y)) y∼x  N − θ(N ) = δ0 (ξs− (x)) N − θ(N ) f1 (x, ξs− ) 2c(N )N 1/2  and z  δi (ξs− (z)) p(N (x − z)) = gi (x, ξs− ) .  Now use the representation of the rates c(x, ξ) from (4.16). Both fluctuation terms turn out to be martingales that only depend on the Poisson processes Pt (x; y) and Qm,i,j,k (x; y1 , . . . , ym ; z). Hence, for x = x the martingale terms t are orthogonal. For m = 2, that is for g(ξ) = ξ(x)ξ(x ) with x = x , we can now use the integration by parts formula (cf. Theorem VI.(38.3) in Rogers and Williams [13]), the orthogonality of the martingale terms of ξt (x) and ξt (x ) and g(ξx ) − g(ξ) = (1 − ξ(x))ξ(x ) − ξ(x)ξ(x ) = (1 − 2ξ(x))ξ(x ) to obtain that (4.34) is a martingale for f = g. Now iterate the above reasoning to obtain the claim for all m ∈ N.  101  4.4  Green’s Function Representation  Analogous to [9], define a test function ψtz (x) ≥ 0 for t ≥ 0, x, z ∈ N −1 Z as the unique solution, satisfying (4.22) and s.t. ∂ z ψ = ∆ψtz , ∂t t N 1/2 1(x ∼ z) ψ0z (x) = 2c(N ) with ∆ψtz (x) =  N − θ(N ) 2c(N )N 1/2  (ψtz (y) − ψtz (x))  (4.36)  y∼x  as in (4.19). Note that ψ0z was chosen s.t. < νt , ψ0z >= A(ξt )(z) and that we suppress the dependence on N . Next observe that ∆ is the generator of a simple random walk Xt ∈ N −1 Z, N −θ (N ) 2c(N )N 1/2 = N − θ(N ) = (1 + o(1))N with symjumping at rate 2c(N )N 1/2 N →∞  metric steps of variance N1 31 + o(1) , where we used that c(N ) → 1. Here o(1) denotes some function that satisfies o(1) → 0 for N → ∞. 
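As a quick numerical illustration of the factor 1/3 (not part of the argument), the following sketch draws the walk's steps uniformly from {j/N : 0 < |j| <= sqrt(N)}, the long-range neighbourhood used above, lets the number of jumps up to time t be Poisson with mean roughly Nt, and checks that the resulting X_t is approximately Gaussian with variance t/3, anticipating the Brownian density p(t/3, .) of Lemma 4.5.2(b). All parameter values below are arbitrary.

```python
import numpy as np
from math import erf, sqrt

# Illustration only: one step of the walk is uniform on {j/N : 0 < |j| <= sqrt(N)},
# so its variance is ~ 1/(3N); with ~ Poisson(N t) jumps, Var(X_t) ~ t/3.

rng = np.random.default_rng(0)
N, t, samples = 500, 1.0, 20_000
w = int(np.sqrt(N))
steps = np.concatenate([np.arange(-w, 0), np.arange(1, w + 1)]) / N     # support of one step

print("N * Var(step) =", round(float(N * np.mean(steps ** 2)), 4), " vs 1/3 =", round(1 / 3, 4))

K = rng.poisson(N * t, size=samples)                                    # number of jumps up to time t
X_t = np.array([rng.choice(steps, size=k).sum() for k in K])            # X_t = S_K
print("Var(X_t)      =", round(float(X_t.var()), 4), " vs t/3 =", round(t / 3, 4))

z = 0.5                                                                  # compare one value of the c.d.f.
Phi = 0.5 * (1 + erf(z / sqrt(t / 3) / sqrt(2)))                        # N(0, t/3) c.d.f. at z
print("P(X_t <= z)   =", round(float(np.mean(X_t <= z)), 4), " vs Gaussian =", round(Phi, 4))
```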
Define ψ¯tz (x) = N P(Xt = x|X0 = z) then < ψ0z , ψ¯tx > =  1 N  ψ0z (y)ψ¯tx (y) = y  1 N  y  N 1/2 1(y ∼ z)N P(Xt = y|X0 = x) 2c(N ) (4.37)  1/2  =  N P(Xt = y|X0 = x) = Ex [ψ0z (Xt )] = ψtz (x). 2c(N ) y∼z  As we shall see later in Lemma 4.5.2(b), when linearly interpolated, the functions ψtz (x) and ψ¯tz (x) converge to p 3t , z − x (the proof follows), where pt (x) = √  1 − x2t2 e is the Brownian transition density. 2πt  The next Lemma gives some information on the test functions ψ and ψ¯ from above. Later on, this will provide us with estimates necessary for establishing tightness.  102 Lemma 4.4.1. There exists N0 < ∞ s.t. for N ≥ N0 , T ≥ 0, z ∈ N −1 Z, λ ≥ 0, (a) < ψtz , 1 >=< ψ¯tz , 1 >= 1 and ψtz  0≤  CN 1/2 for all t ≥ 0.  (b) < eλ , ψtz + ψ¯tz >≤ C(λ, T )eλ (z) for all t ≤ T , (c)  ψtz  λ≤  C(λ, T ) N 1/2 ∧ t−2/3 eλ (z) for all t ≤ T ,  (d) < ψ¯tz − ψ¯sz , 1 >≤ 2N |t − s| for all s, t ≥ 0. If we further restrict ourselves to N ≥ N0 , N −3/4 ≤ s < t ≤ T, y, z ∈ N −1 Z, |y − z| ≤ 1, we get (e) ψtz − ψty  λ≤  C(λ, T )eλ (z) |z − y|1/2 t−1 + N −1/2 t−3/2 ,  (f ) ψtz − ψsz  λ≤  C(λ, T )eλ (z) |t − s|1/2 s−3/2 + N −1/2 s−3/2 ,  (g) D ψtz , N −1/2 (·)  λ  ≤ C(λ, T )eλ (z)N −1/4 t−1 .  Proof. First we shall derive an explicit description for the test functions ψtz and ψ¯tz . We proceed as at the beginning of Section 4 in [9] by using that ∆ as in (4.36) is the generator of a simple random walk. √ Let (Yi )i=1,2,... be i.i.d. and uniformly distributed on (jN −1 : 0 < |j| ≤ N ). Set k  ρ(t) = E eitY1  and Sk =  Yi .  (4.38)  i=1 4 Note that E[Y12 ] = 1+o(1) 3N , where o(1) → 0 for N → ∞. Similarly, E[Y1 ] = 1+o(1) 5N 2 , where o(1) may change from line to line. The relation between the test functions ψtz , ψ¯tz and Sk is as follows.  ψtz (x) = Ex [ψ0z (Xt )] =  ∞ k=0  ((N − θ(N ) )t)k −((N −θ(N ) )t) e N P(Sk+1 = x − z), k! (4.39)  ψ¯tz (x) = N P(Xt = x|X0 = z) =  ∞ k=0  ((N − θ(N ) )t)k −((N −θ(N ))t) e N P(Sk = x − z). k!  Now we can start proving the above Lemma. (a) follows as in the proof of Lemma 3(a), [9], using that P(Sk = x) ≤ CN −1/2 for all x ∈ N −1 Z, k ≥ 1. (b) follows as in the proof of Lemma 3(b), [9], where we shall use the bound E eµ|Y1 | ≤ exp 5µ2  1 N  103 for all µ ≥ 0 to obtain the claim. (c) Following the proof of Lemma 3(c) in [9], we first show that we have, for k ∈ N and |x| ≥ 1, P(Sk = x) ≤ N1 P(Sk ≥ |x| − 1), which we can use to obtain P(Sk = x) ≤ N1 e−µ(|x|−1) exp 5kµ2 N1 . Substituting this bound into (4.39) gives for any µ ≥ 0 ψtz (x) ≤ C(µ, T ) exp{−µ|x − z|}  (4.40)  for all t ≤ T and |x − z| ≥ 1. From (4.39) we further have for N big enough ψtz (x) ≡ E p  (1 + o(1))(Pt + 1) ,x −z 3N  + E(N, t, x − z),  where Pt ∼ Pois((N − θ(N ) )t). Using Corollary B.0.2 we get as in the proof of [9], Lemma 3(c), |E(N, t, x)| ≤ C  1 1 + t−3/2 N  for N −3/4 ≤ t.  Here we used that for P ∼ Pois(r), r > 0 we have E[(P + 1)a ] ≤ C(a)ra for all a < 0. (This is obviously true for 0 < r < 1. For r ≥ 1 fixed, prove the claim first for all a ∈ Z. Then extend this result to general a < 0 by an application of H¨ older’s inequality.) Using the trivial bound p(t, x) ≤ Ct−1/2 we get from the above ψtz (x) ≤ C(T )t−2/3  for N −3/4 ≤ t ≤ T.  Finally, we obtain ψtz  λ = sup x  |ψtz (x)| eλ|x|  (4.40)  ≤  sup  C(λ, T )e−λ|x−z| eλ|x|  {x:|x−z|≥1}  ∨ ∨  sup  C(T )t−2/3 eλ|x|  {x:|x−z|<1,N −3/4 ≤t≤T }  sup {x:|x−z|<1,0≤t≤N −3/4 }  |ψtz (x)| eλ|x|  (a)  ≤ C(λ, T )eλ|z| ∨ 1 N −3/4 ≤ t ≤ T C(T )t−2/3 e1 eλ|z| ∨ 1 0 ≤ t ≤ N −3/4 CN 1/2 e1 eλ|z| ≤C(λ, T ) N 1/2 ∧ t−2/3 eλ (z)  for all t ≤ T.  
104 This proves part (c). (d) follows along the lines of the proof of [9], Lemma 3(d). (e) For the remaining parts (e)-(g) we fix N −3/4 ≤ s < t ≤ T, y, z ∈ N −1 Z, |y − z| ≤ 1. For part (e) we follow the reasoning of the proof of [9], Lemma 3(e). The only change occurs in the derivation of the last estimate. In summary, we find as in [9] that ψtz − ψty  0≤  C(T ) |z − y|t−1 + N −1 t−3/2 .  (4.41)  Now recall (4.40) with µ = 2λ to get ψtz (x) + ψty (x) ≤ C(λ, T ) exp{−2λ|x − z|} for |x − z| ≥ 1, |x − y| ≥ 1, |y − z| ≤ 1 and thus in particular for |x − z| ≥ 2, |y − z| ≤ 1. This yields ψtz − ψty  λ≤  sup {x:|x−z|<2}  +  ψtz − ψty  sup {x:|x−z|≥2}  0  eλ (x)  C(λ, T ) ψtz − ψty  1/2 0  e−λ|x−z| eλ (x)  ≤C(λ, T )eλ (z)  ψtz − ψty  ≤C(λ, T )eλ (z)  |z − y|t−1 + N −1 t−3/2  0  + |z − y|t−1 + N −1 t−3/2  + ψtz − ψty  1/2 0  1/2  ≤C(λ, T )eλ (z) |z − y|1/2 t−1 + N −1/2 t−3/2 . This proves (e). (f ) The proof of part (f) follows analogously to the proof of part (e), with changes as suggested in the proof of [9], Lemma 3(f). (g) Finally, to prove part (g), use part (e), ψtz (y) = ψty (z) (see (4.39)) and the definition of D ψtz , N −1/2 (x) = sup |ψtz (y) − ψtz (x)| : |x − y| ≤ N −1/2 , y ∈ N −1 Z to get D ψtz , N −1/2 (·) ≤  λ  sup  sup  {x:|x−z|<2}  y:|x−y|≤N −1/2  + (4.40)  {|ψtz (y) − ψtz (x)|} eλ|x|  sup  sup  {x:|x−z|≥2}  y:|x−y|≤N −1/2  ≤ C(λ)  {|ψtz (y) − ψtz (x)|} eλ|x|  sup  sup  {x:|x−z|<2}  y:|x−y|≤N −1/2  + C(λ, T )  {|ψtz (y) − ψtz (x)|} eλ|z|  sup  sup  {x:|x−z|≥2}  y:|x−y|≤N −1/2  |ψtz (y) − ψtz (x)|  1/2  e−λ|x−z| eλ|x|  .  105 Next use that ψta (b) = ψtb (a) to get as a further upper bound sup  sup  {x:|x−z|<2}  y:|x−y|≤N −1/2  C(λ)  {|ψty (z) − ψtx (z)|} eλ|z| 1/2  sup  sup  {x:|x−z|≥2}  y:|x−y|≤N −1/2  + C(λ, T ) (4.41)  ≤ C(λ, T )eλ (z)  eλ|z|  |x − y|t−1 + N −1 t−3/2  sup x,y:|x−y|≤N −1/2  + |x − y|t−1 + N −1 t−3/2  |ψty (z) − ψtx (z)|  1/2  ≤ C(λ, T )eλ (z)N −1/4 t−1 , where we used N −3/4 < t ≤ T . This finishes the proof of (g) and it also finishes the proof of the Lemma. The following Corollary uses the results of Lemma 4.4.1 to obtain estimates that we shall need later. Corollary 4.4.2. There exists N0 < ∞ s.t. for N ≥ N0 , 0 ≤ δ ≤ u ≤ t ≤ T and y, z ∈ N −1 Z, λ ≥ 0, we have (a)  t u  z 1/3 ψt−s eλ (z) λ ds ≤ C(λ, T )(t − u) t z 2 1/4 and 0 ψt−s λ ds ≤ C(λ, T )N e2λ (z).  (b) For |y − z| ≤ 1 and δ ≤ t − N −3/4 we further have y z sup0≤s≤δ ψt−s − ψt−s λ 1/2 ≤ C(λ, T )eλ (z) |z − y| (t − δ)−1 + N −1/2 (t − δ)−3/2 . (c) We also have  t δ  y z ψt−s − ψt−s  λ  ds ≤ C(λ, T ) (eλ (z) + eλ (y)) (t − δ)1/3 .  (d) For N −3/4 ≤ u − δ we have z z sup0≤s≤δ ψt−s − ψu−s λ ≤ C(λ, T )eλ (z) (t − u)1/2 (u − δ)−3/2 + N −1/2 (u − δ)−3/2 . (e) Finally, we have  u δ  z z ψt−s − ψu−s  λ  ds ≤ C(λ, T )eλ (z)(u − δ)1/3 .  Proof. The proof is a combination of the results of Lemma 4.4.1. (a) We have for n = 1, 2 and 0 ≤ u ≤ t by Lemma 4.4.1(c) t u  z ψt−s  n λ  t  ds ≤ C(λ, T )  u  N n/2 ∧ (t − s)−2n/3 ds enλ (z).  For n = 1 further bound the integrand by (t − s)−2/3 , for n = 2 and u = 0 use the above integrand to obtain the claim. (b) follows from Lemma 4.4.1(e).  106 (c) We further have by Lemma 4.4.1(c) t δ  y z ψt−s − ψt−s  t λ  ds ≤ C(λ, T ) (eλ (z) + eλ (y))  δ  (t − s)−2/3 ds.  (d) follows from Lemma 4.4.1(f). (e) Using Lemma 4.4.1(c) once more, we get u δ  z z ψt−s − ψu−s  u λ  ds ≤ C(λ, T )eλ (z)  δ  (t − s)−2/3 + (u − s)−2/3 ds,  which concludes the proof after some basic calculations. We shall need the following technical Lemma. Lemma 4.4.3. 
For f : N −1 Z → [0, ∞) with < f, 1 >< ∞, λ ∈ R we have z z (a) < νs , ψt−s >=< A(ξs ), ψ¯t−s >,  (b) |< νt , f > − < A(ξt ), f >| ≤ C(λ) D(f, N −1/2 )  λ.  Proof. (a) follows easily from z z < νs , ψt−s > =< ξs , ψt−s >= (4.37)  =  =  =  1 N 1 N  1 = N  1 N  z ξs (x)ψt−s (x) = x  1 N  x ξs (x)ψt−s (z) x  z ξs (x) < ψ0x , ψ¯t−s > x  ξs (x) x  y  1 N  x  1 N  y  N 1/2 z 1(y ∼ x)ψ¯t−s (y) 2c(N )  1 z 1(y ∼ x)ξs (x) ψ¯t−s (y) 2c(N )N 1/2  z z A(ξs )(y)ψ¯t−s (y) =< A(ξs ), ψ¯t−s >. y  Part (b) follows as in the proof of Lemma 5(b) in [9]. Observe in particular that < νt , e−λ >≤ C(λ) as will be shown before and in (4.44) below. Taken all together this finishes the proof. Next use the test function x φs ≡ ψt−s for s ≤ t  in the semimartingale decomposition (4.33) and observe that φ satisfies (4.22). Here the initial condition is chosen so that < νt , φt >=< νt , ψ0x >= A(ξt )(x). The test function chosen in [9] at the beginning of page 526, namely φs = x eθc (t−s) ψt−s was chosen so that the drift term < νs , θc φs > ds of the semimartingale decomposition (2.9) in [9] would cancel out with the drift term  107 < νs , ∂s φs > ds. As we have multiple coefficients, this is not possible. Also, it turned out that the calculations become easier once we consider time differences in Section 4.5 to follow. With the above choice we obtain, for a fixed value of t, an approximate Green’s function representation for A(ξt ), namely t  A(ξt )(x) = < ν0 , ψtx > + t  + 0  0  x < νs , −∆ψt−s > ds (1)  x > ds + Et < νs− , ∆ ψt−s  + k=0,1  (1 − 2k)  x x + Mt ψt−· ψt−· t  (k,m,N )  qij  0  m≥2,i,j=0,1  × (F1−k ◦ A(ξs− ))  m−1  (3)  x + Et (ψt−· ).  (4.42)  < (Fj ◦ A(ξs− ))  x Fi ◦ (pN ∗ ξs− ) (δk ◦ ξs− ) , ψt−s > ds  The following Lemma is stated analogously to Lemma 4 of [9]. Parts (a) and (c) will follow easily in our setup and so the only significant statement will be part (b). Lemma 4.4.4. Suppose that the initial conditions satisfy A(ξ0 ) → u0 in C as N → ∞. Then for T ≥ 0, p ≥ 2, λ > 0, (a) E supt≤T < νt , e−λ >p ≤ C(λ, p). (b) We further have (1)  E Et  z ψt−·  p  (3)  ∨ Et  z ψt−·  p  p/2  ≤ C(λ, p, T ) 1 + CQ  N −p/16 eλp (z)  for all t ≤ T and N big enough, where we set CQ ≡ sup  (k,m,N )  N ≥N0  (c) Finally, E[A(ξt )]  −λp ≤  qij  .  (4.43)  m≥2,i,j,k=0,1  1 for all t ≤ T .  Proof. First observe that we have ξt ∈ {0, 1}Z/N and 0 ≤ A(ξt ) ≤ 1. Therefore, parts (a) and (c) follow immediately. Indeed, for (a) we only need to observe that 0 ≤< νt , e−λ >=< ξt , e−λ|·| >= ≤  2 N  ∞ j=0  e−λj/N =  1 N  2 1 N 1 − e−λ/N  x  ξt (x)e−λ|x| ≤  N →∞  →  2 . λ  1 N  e−λ|x| x:x∈Z/N  108 Note in particular that we showed that C for all λ > 0, N = N (λ) big enough, λ  < e−λ , 1 >≤  (4.44)  which will prove useful later. For (c) we further have E[A(ξt )]  −λp =  sup |E[A(ξt )(x)]| e−λp|x| ≤ sup e−λp|x| ≤ 1. x  x  It only remains to show that (b) holds. (b) First observe that CQ < ∞ by Hypothesis 4.2.19. We shall apply a Burkholder-Davis-Gundy inequality in the form E sup |Xs |p ≤ C(p)E X s≤t  p/2 t  + sup |Xs − Xs− |p  (4.45)  s≤t  for a cadlag martingale X with X0 = 0 (this inequality may be derived from its discrete time version, see Burkholder [1], Theorem 21.1). To get an upper bound on the second term of the r.h.s. of (4.45) for the martingales we consider, observe that the largest possible jumps of the martingales (1) (3) z z Et (ψt−· ) respectively Et (ψt−· ) are bounded a.s. by CN −1/2 . 
Indeed, (1)  z Et (ψt−· )=  t  1 N  0  x y∼x  z z ξs− (y) ψt−s (x) − ψt−s (y) (dPs (x; y) − d P (x; y) s )  and thus, using Lemma 4.4.1(a), the maximal jump size is bounded by 1 2 sup ψtz N t≤T  0≤  C N 1/2  (4.46)  (the maximal number of jumps at a fixed time is 1). The bound on the maximal (3) z jump size of Et (ψt−· ) follows analogously. (3) z ). By (4.45), (4.46) and Now choose t ≤ T . We shall start with Et (ψt−· (4.32) we have (3)  z ) E Et (ψt−·   ≤ C(p)   (4.44)  p  (k,m,N )  qij  m≥2,i,j,k=0,1  t  p/2  ≤ C(λ, p)CQ N −p/2  1 N  0  t 0 z ψt−s  z ψt−s  2 λ  2 λ  p/2  < e−2λ , 1 > ds + C(p)N −p/2  p/2  ds  +1 .  By Corollary 4.4.2(a) this is bounded from above by p/2  p/2  C(λ, p, T )CQ N −p/2 N p/8 eλp (z) + 1 = C(λ, p, T )CQ N −3p/8 eλp (z).  109 (1)  z It remains to investigate Et (ψt−· ). Here (4.45), (4.46), (4.27) and (4.28) yield (1)  z ) E Et (ψt−·  p  t  ≤ C(p)  0  z ψt−s 0<  z ψt−s ,1  > ∧  D  z ψt−s ,  p/2  2  1 √ N  λ  < 1, e−2λ > ds  + C(p)N −p/2 . This in turn is bounded from above by t  C(p) 0  C(T )(t − s)  −2/3  ∧  D  z ψt−s ,  1 √ N  p/2  2  C(λ) ds  +C(p)N −p/2 ,  λ  where we used Lemma 4.4.1(a), (c) and (4.44). To apply Lemma 4.4.1(g) to the second part of the integrand, we need to ensure that N −3/4 ≤ t − s. As N −3/4 ≤ N −3/8 we get as a further upper bound N −3/8  C(T )s  C(p)  −2/3  t  C(λ, T )eλ (z)N  ds +  −1/4 −1  s  2  p/2  C(λ)ds  N −3/8 ∧t  0  + C(p)N −p/2 N −3/8  ≤ C(λ, p, T )eλp (z)  1/3  + N −1/2 N −3/8  −1  p/2  + N −p/2  ≤ C(λ, p, T )N −p/16 eλp (z). This finishes the proof.  4.5  Tightness  In what follows we shall derive estimates on pth -moment differences of ˆ t )(z) ≡ A(ξt )(z)− < ν0 , ψ z > . A(ξ t Recall the assumption A(ξ0 ) → u0 in C from Theorem 4.2.9 resp. Theorem 4.2.23 from the beginning. Also note that Lemma 4.5.2(b) to come will yield that ψtz (x) converges to p 3t , z − x . The estimates of Lemma 4.5.1 and the convergence of ψtz taken together will be sufficient to show C-tightness of the approximate densities A(ξt )(z) at the end of this Section.  110 Lemma 4.5.1. For 0 ≤ s ≤ t ≤ T, y, z ∈ N −1 Z, |t − s| ≤ 1, |y − z| ≤ 1, λ > 0 and p ≥ 2 we have ˆ t )(z) − A(ξ ˆ s )(y) E A(ξ  p  p eλp (z) |t − s|p/24 + |z − y|p/24 + N −p/24 . ≤ C(λ, p, T ) 1 + CQ  Proof. Fix s, t, T, y, z, λ, p as in the statement. We decompose the increment ˆ t )(z) − A(ξ ˆ s )(y) into a space increment A(ξ ˆ t )(z) − A(ξ ˆ t )(y) and a time inA(ξ ˆ ˆ crement A(ξt )(y) − A(ξs )(y). We consider first the space differences. From the Green’s function representation (4.42), the estimates obtained in Lemma 4.4.4(b) for the error terms E (1) (1) (3) and E (3) and the linearity of Mt (φ) and Et (φ), Et (φ) in φ, we get p  ˆ t )(z) − A(ξ ˆ t )(y) E A(ξ p/2  ≤ C(λ, p, T ) 1 + CQ +E k=0,1  (1 − 2k)  y z N −p/16 eλp (z) + E Mt ψt−· − ψt−· t  (k,m,N )  qij m≥2,i,j=0,1  × (F1−k ◦ A(ξs− ))  m−1  p  0  < (Fj ◦ A(ξs− ))  y z Fi ◦ (pN ∗ ξs− ) (δk ◦ ξs− ) , ψt−s − ψt−s > ds  p  .  Recall definition (4.31) and observe that 0 ≤ pN ∗ ξs− (x) ≤ 1 follows from ξs− ∈ {0, 1}Z/N . Use this and 0 ≤ A(ξs− )(x) ≤ 1 together with the definition of Fk from Notation 4.2.22 to get ˆ t )(z) − A(ξ ˆ t )(y) E A(ξ p/2  ≤ C(λ, p, T ) 1 + CQ  p  (4.47) p  y z N −p/16 eλp (z) + E Mt ψt−· − ψt−·  (k,m,N )  qij  +E m≥2,i,j,k=0,1 t  ×  0  y z < (F1−k ◦ A(ξs− )) (δk ◦ ξs− ) , ψt−s − ψt−s > ds p/2  ≤ C(λ, p, T ) 1 + CQ p + CQ E  t  0  y z N −p/16 eλp (z) + E Mt ψt−· − ψt−·  y z < A(ξs− ) + ξs− , ψt−s − ψt−s > ds  p  p  p  .  
Note that this is the main step to see why the fixed kernel interaction does not impact our results on tightness. In what follows, we shall employ a similar strategy to the proof of Lemma 6 in [9] to obtain estimates on the above. We nevertheless give full calculations as we proceeded in a different logical order to highlight the ideas for obtaining bounds. Minor changes in the exponents of our bounds ensued, both due to the different logical order and the different setup.  111 p  y z . Using the Burkholder− ψt−· Let us first derive a bound on E Mt ψt−· Davis-Gundy inequality (4.45) from above and observing that the jumps of the x martingales Mt (ψt−· ) are bounded a.s. by CN −1/2 we have for any 0 ≤ δ ≤ t p  y z E Mt ψt−· − ψt−·  δ  (4.30)  ≤ C(λ, p)E  0  δ  z ψt−s  (4.44)  ≤ C(λ, p)E t  + δ  y z − ψt−s ψt−s  2 λ<  1, e−2λ > ds p/2  t  +  (4.48)  −  y ψt−s 0<  T sup 0≤s≤δ  z ψt−s  −  y z ψt−s − ψt−s  2 λ  ξs− ,  y ψt−s  + C(p)N −p/2  > ds  1 λ p/2  y z ψt−s − ψt−s  0<  y z > ds ξs− , ψt−s − ψt−s  + C(p)N −p/2 .  Now observe that by Lemma 4.4.3(a) and Lemma 4.4.1(a), y y z z > ≤< ξs− , ψt−s + ψt−s > < ξs− , ψt−s − ψt−s y z ¯ ¯ =< A(ξs− ), ψt−s + ψt−s > ≤< 1, ψ¯z + ψ¯y > t−s  (4.49)  t−s  = 2.  We can therefore apply the estimates from Corollary 4.4.2(b) to the first term in (4.48) and Corollary 4.4.2(c) to the second term, assuming δ ≤ t − N −3/4 ∨ 0 and using |y − z| ≤ 1 to obtain y z E Mt ψt−· − ψt−·  p  ≤ C(λ, p, T )eλp (z) |z − y|p/2 (t − δ)−p + N −p/2 (t − δ)−3p/2 + (t − δ)p/6 + C(p)N −p/2 . Now set δ =t−  |z − y|1/4 ∨ N −1/4 ∧ t  and observe that δ ≤ t − N −3/4 ∨ 0 follows. We obtain t − δ = |z − y|1/4 ∨ N −1/4 ∧ t and |z − y|1/4 ≤ N −1/4 ⇒ |z − y|p/2 (t − δ)−p + N −p/2 (t − δ)−3p/2 + (t − δ)p/6 (4.50) ≤ |z − y|p/2 |z − y|−p/4 + N −p/2 N 3p/8 + N −p/24 = |z − y|p/4 + N −p/8 + N −p/24 ,  |z − y|1/4 > N −1/4 ⇒ |z − y|p/2 (t − δ)−p + N −p/2 (t − δ)−3p/2 + (t − δ)p/6 ≤ |z − y|p/2 |z − y|−p/4 + N −p/2 N 3p/8 + |z − y|p/24 = |z − y|p/4 + N −p/8 + |z − y|p/24 .  112 Plugging this back in the above estimate we finally have y z E Mt ψt−· − ψt−·  p  ≤ C(λ, p, T )eλp (z) |z − y|p/24 + N −p/24 .  Next we shall get a bound on the last term of (4.47). Recall that < ξt , φ >= < νt , φ >. We get t  E 0  p  y z − ψt−s < A(ξs− ) + ξs− , ψt−s > ds  p  δ  ≤ C(p) E  0  < A(ξs− ) + νs− , e−λ > ds sup  0≤s≤δ  δ  −  y ψt−s λ  p  t  +E  z ψt−s  < A(ξs− ) + νs− , e−λ >  z ψt−s  −  y ψt−s λ  ds  .  Now use that (4.44)  < A(ξs− ) + νs− , e−λ >=< A(ξs− ) + ξs− , e−λ >≤< 2, e−λ > ≤ C(λ) (4.51) to obtain that the above is bounded by p  C(p)  T C(λ) sup 0≤s≤δ  y z ψt−s − ψt−s  λ  t  + δ  p y z C(λ) ψt−s − ψt−s  λ  ds  ≤ C(λ, p, T )eλp (z) |z − y|p/2 (t − δ)−p + N −p/2 (t − δ)−3p/2 +(t − δ)p/3 , where we used Corollary 4.4.2(b),(c) and |y − z| ≤ 1. Here we assumed δ ≤ t − N −3/4 ∨ 0 when we applied Corollary 4.4.2(b). Now choose δ = t − |z − y|1/4 ∨ N −1/4 ∧ t ≤ t − N −3/4 ∨ 0 as before. Reasoning as in (4.50), we get C(λ, p, T )eλp (z) N −p/8 + |z − y|p/12 as an upper bound. Now we can take all the above bounds together and plug them back into (4.47) to obtain (recall that |z − y| ≤ 1) ˆ t )(z) − A(ξ ˆ t )(y) E A(ξ p/2  ≤ C(λ, p, T ) 1 + CQ  p  N −p/16 eλp (z)  p + C(λ, p, T ) 1 + CQ eλp (z) |z − y|p/24 + N −p/24 p/2  p ≤ C(λ, p, T ) 1 + CQ + CQ eλp (z) |z − y|p/24 + N −p/24 .  113 Next we derive a similar bound on the time differences. We start by subtracting the two Green’s function representations again, this time for the time differences, using (4.42) and Lemma 4.4.4(b) for the error terms. 
ˆ t )(z) − A(ξ ˆ u )(z) E A(ξ p/2  ≤ C(λ, p, T ) 1 + CQ +E k=0,1  (1 − 2k)  p  (4.52) z z N −p/16 eλp (z) + E Mt ψt−· − Mu ψu−· (k,m,N )  qij m≥2,i,j=0,1  t  ×  < (Fj ◦ A(ξs− )) (F1−k ◦ A(ξs− ))  0  p  m−1  Fi ◦ (pN ∗ ξs− )  z × (δk ◦ ξs− ) , ψt−s > ds u  −  < (Fj ◦ A(ξs− )) (F1−k ◦ A(ξs− ))  0  p/2  Fi ◦ (pN ∗ ξs− )  p  z × (δk ◦ ξs− ) , ψu−s > ds  ≤ C(λ, p, T ) 1 + CQ  m−1  z z − Mu ψu−· N −p/16 eλp (z) + E Mt ψt−·  p  (k,m,N )  +E  qij m≥2,i,j,k=0,1 t  ×  < (Fj ◦ A(ξs− )) (F1−k ◦ A(ξs− ))  u  m−1  Fi ◦ (pN ∗ ξs− )  z × (δk ◦ ξs− ) , ψt−s > ds u  + 0  < (Fj ◦ A(ξs− )) (F1−k ◦ A(ξs− ))  z z × (δk ◦ ξs− ) , ψt−s − ψu−s > ds p/2  ≤ C(λ, p, T ) 1 + CQ  qij m≥2,i,j,k=0,1 u  + 0  p/2  u  + 0  p  t  u  t  u  p  z < (F1−k ◦ A(ξs− )) (δk ◦ ξs− ) , ψt−s > ds  z z < (F1−k ◦ A(ξs− )) (δk ◦ ξs− ) , ψt−s − ψu−s > ds  ≤ C(λ, p, T ) 1 + CQ p + CQ E  Fi ◦ (pN ∗ ξs− )  z z − Mu ψu−· N −p/16 eλp (z) + E Mt ψt−·  (k,m,N )  +E  m−1  p  z z − Mu ψu−· N −p/16 eλp (z) + E Mt ψt−·  p  z < A(ξs− ) + ξs− , ψt−s > ds  z z < A(ξs− ) + ξs− , ψt−s − ψu−s > ds  p  .  For the martingale term we further get via the Burkholder-Davis-Gundy in-  114 equality (4.45) p  z z − Mu ψt−· E Mt ψt−·  z z ≤ C(p) E Mt ψt−· − Mu ψt−· z M· ψt−·  ≤ C(p)E  t  p/2  u  p/2  + C(p)N −p/2  u  p/2  t  ≤ C(λ, p)E  z ψt−s  u  δ∧u  + C(λ, p) 0  z ξs− , ψt−s > ds p/2  −  z 2 ψu−s λ<  1, e−2λ > ds p/2  u  z z ψt−s − ψu−s  δ∧u  + C(p)N  0<  z ψt−s  + C(λ, p)E −p/2  p  z z + E Mu ψt−· − Mu ψu−·  z − M· ψt−·  z z M· ψt−· − ψu−·  + C(p)E  p  z z 0 < ξs− , ψt−s − ψu−s > ds  ,  where we used equation (4.30) to bound the first and second term. Using (4.44) and reasoning as in (4.49) the above can further be bounded by p  z z E Mt ψt−· − Mu ψu−· t  ≤ C(λ, p)  u  z ψt−s  p/2  ds  0  + C(λ, p, T )  sup 0≤s≤δ∧u  z z ψt−s − ψu−s  p λ  p/2  u  z z ψt−s − ψu−s  + C(λ, p) δ∧u  0  ds  + C(p)N −p/2 .  Under the assumption N −3/4 ∧ u ≤ u − (δ ∧ u) we obtain from Corollary 4.4.2(a), (d), (e) that z z E Mt ψt−· − Mu ψu−·  p  (4.53)  ≤ C(λ, p, T )eλp (z) (t − u)p/6 + |t − u|p/2 + N −p/2 (u − (δ ∧ u)) + (u − (δ ∧ u))  p/6  −3p/2  + N −p/2 .  Finally observe that with δ =u−  |t − u|1/4 ∨ N −1/4 ∧ u  we get N −3/4 ∧ u ≤ u − δ and by proceeding as in (4.50) we obtain z z − Mu ψu−· E Mt ψt−·  p  ≤ C(λ, p, T )eλp (z) (t − u)p/6 + |t − u|p/24 + N −p/24 + N −p/2 .  115 Finally, we can bound the last expectation of the last line of (4.52) by using z z < A(ξt−s ) + ξs− , ψt−s > ≤ < 1 + 1, ψt−s >= 2.  Here the last equality followed from Lemma 4.4.1(a). We thus obtain as an upper bound on the last expectation of the last line of (4.52), p  u  C(p) |t − u|p + E  z z > ds < A(ξs− ) + νs− , ψt−s − ψu−s  0  .  We further have for the second term u  E 0  z z < A(ξs− ) + νs− , ψt−s − ψu−s > ds  p  p  δ∧u  ≤ C(p) E  0 u  +E δ∧u  < A(ξs− ) + νs− , e−λ > ds sup  0≤s≤δ  −  z ψu−s λ  p z z < A(ξs− ) + νs− , e−λ > ψt−s − ψu−s p  (4.51)  ≤ C(λ, p, T )  z ψt−s  z z ψt−s − ψu−s  sup 0≤s≤δ∧u  ≤ C(λ, p, T )eλp (z) (t − u)  p/2  (u − (δ ∧ u))  λ  λ  ds p  u  + δ∧u  −3p/2  z z ψt−s − ψu−s  λ  ds  + N −p/2 (u − (δ ∧ u))−3p/2  +(u − (δ ∧ u))p/3 , where we assumed N −3/4 ∧ u ≤ u − (δ ∧ u) when we applied Corollary 4.4.2(d) together with Corollary 4.4.2(e) in the last line. Now reason as from (4.53) on to obtain C(λ, p, T )eλp (z) |t − u|p/24 + N −p/24 as an upper bound. 
Taking all bounds together we have for the time differences from (4.52) ˆ t )(z) − A(ξ ˆ u )(z) E A(ξ p/2  ≤ C(λ, p, T ) 1 + CQ  p  N −p/16 eλp (z)  + C(λ, p, T )eλp (z) (t − u)p/6 + |t − u|p/24 + N −p/24 + N −p/2 p + C(p)CQ |t − u|p + C(λ, p, T )eλp (z) |t − u|p/24 + N −p/24 p/2  p ≤ C(λ, p, T ) 1 + CQ + CQ eλp (z) |t − u|p/24 + N −p/24 .  The bounds on the space difference and the time difference taken together complete the proof.  116 We now show that these moment estimates imply C-tightness of the approximate densities. We shall start including dependence on N again to clarify the tightness argument. First define ˜ N )(z) = A(ξ ˆ N )(z) on the grid z ∈ N −1 Z, t ∈ N −2 N0 . A(ξ t t Linearly interpolate first in z and then in t to obtain a continuous C-valued process. Note in particular that we can use Lemma 4.5.1 to show that for 0 ≤ s ≤ t ≤ T, |t − s| ≤ 1 and y, z ∈ R, |y − z| ≤ 1, ˜ N )(z) − A(ξ ˜ N )(y) E A(ξ t s  p  p eλp (z) |t − s|p/48 + |z − y|p/24 ≤ C(λ, p, T ) 1 + CQ  for λ > 0, p ≥ 2 arbitrarily fixed. ˜ N ) and A(ξ ˆ N ) remain close. The advantage The next Lemma shows that A(ξ t t N ˜ of using A(ξt ) is that it is continuous. Using Kolmogorov’s continuity theorem (see for instance Corollary 1.2 in (i ,i ) Walsh [15]) on compacts R1 1 2 ≡ {(t, x) ∈ R+ × R : (t, x) ∈ (i1 , i2 ) + [0, 1]2 } ˜ N )(x) in the space of continfor i1 ∈ N0 , i2 ∈ Z we obtain tightness of A(ξ t (i ,i )  auous functions on (t, x) : (t, x) ∈ R1 1 2 . Indeed, we can use the Arzel` Ascoli theorem. With arbitrary high probability, part (ii) of Corollary 1.2 of [15] provides a uniform (in N ) modulus of continuity for all N ≥ N0 . Pointwise boundedness follows from the boundedness of A(ξtN )(x) together with Lemma 4.5.2(b) below. Now use a diagonalization argument to obtain tight˜ tN )(x) : t ∈ R+ , x ∈ R)N ∈N in the space of continuous functions ness of (A(ξ + from R × R to R equipped with the topology of uniform convergence on compact sets. Next observe that if we consider instead the space of continuous functions from R+ to the space of continuous functions from R to R, both equipped with the topology of uniform convergence on compact sets, tightness ˜ tN )(x) : t ∈ R+ , x ∈ R)N ∈N in the former space is equivalent to tightness of (A(ξ ˜ N )(·) : t ∈ R+ )N ∈N in the latter. of (A(ξ t Finally, tightness of (A(ξtN ) : t ∈ R+ )N ∈N as cadlag C1 -valued processes (recall that 0 ≤ A(ξtN )(x) ≤ 1 by construction) and also the continuity of all weak limit points follow from the next Lemma. Lemma 4.5.2. For any λ > 0, T < ∞ we have (a) P supt≤T  ˜ tN ) − A(ξ ˆ tN ) A(ξ  (b) supt≤T < ν0N , ψt· > −Pt/3 u0  −λ ≥  7N −1/4 → 0 as N → ∞.  −λ →  0 as N → ∞.  Proof. The proof is very similar to the proof of Lemma 7 in [9]. We shall only give some additional steps for part (a) to complement the proof of the given reference.  117 (a) For 0 ≤ s ≤ t we have < ν0N , ψt· > − < ν0N , ψs· >  −λ  = sup < A(ξ0N ), ψ¯tz − ψ¯sz > e−λ|z| z  ≤ sup < 1, ψ¯tz − ψ¯sz > z  ≤ 2N |t − s|. Here we used Lemma 4.4.3(a) in the first line, 0 ≤ A(ξ0N ) ≤ 1 in the second line and Lemma 4.4.1(d) in the last. Hence, this only changes by O(N −1 ) between the (time-)grid points in N −2 N0 . We obtain that ˜ tN ) − A(ξ ˆ tN ) P sup A(ξ t≤T  −λ ≥  7N −1/4  = P ∃t ∈ [0, T ] ∩ N −2 N0 , s ∈ [0, T ], |s − t| ≤ N −2 s.t. ˆ N ) − A(ξ ˆ N) A(ξ t s  −λ ≥  A(ξtN ) − A(ξsN )  −λ  A(ξtN ) − A(ξsN )  −λ ≥  7N −1/4  ≤ P ∃t ∈ [0, T ] ∩ N −2 N0 , s ∈ [0, T ], |s − t| ≤ N −2 s.t. 
+ < ν0N , ψt· − ψs· >  −λ  ≥ 7N −1/4  ≤ P ∃t ∈ [0, T ] ∩ N −2 N0 , s ∈ [0, T ], |s − t| ≤ N −2 s.t. 6N −1/4  for N big enough. Next note that the value of A(ξtN )(x) changes only at jump times of Pt (x; y) (x; y1 , . . . , ym ; z), i, j, k = 0, 1, m ≥ 2 for some y ∼ x respectively for or Qm,i,j.k t some y1 , . . . , ym ∼ x and arbitrary z ∈ N −1 Z and that each jump of A(ξtN ) is by definition of A(ξtN ) bounded by N −1/2 . Then, writing P(a) for a Poisson variable with mean a, we get as a further bound on the above  l∈Z  P ∃z ∈ N −1 Z ∩ (l, l + 1], ∃t ∈ [0, T ] ∩ N −2 N0 , ∃s ∈ [t, t + N −2 ] with N N A(ξtN )(z) − A(ξsN )(z) ∧ A(ξt+N −2 (z) − A(ξs )(z)  ≤  N (N 2 T )P CN −1/2  PN −2 (0; y) y∼0  l∈Z  + i,j,k=0,1,m≥2 y1 ,...,ym ∼0 u  ≤ ≤  l∈Z       ≥ N −1/4 eλ(|l|−1)  Qm,i,j,k N −2 (0; y1 , . . . , ym ; u)  C(T )N 3 P CN −1/2 P N −2 C(T )N 3 P  l∈Z  ≥ N −1/4 eλ(|l|−1)  P N −2 (N + CQ )  N − θ(N ) + CQ p  ≥ N −1/4 eλ(|l|−1)  ≥ CN p/4 eλp(|l|−1)  118 for some p > 0. Now apply Chebyshev’s inequality. Choose p > 0 such that 3 − p/4 < 0. Then the resulting sum is finite and goes to zero for N → ∞. (b) The proof of part (b) follows as the proof of Lemma 7(b) of [9].  4.6  Characterizing Limit Points  To conclude the proof of Theorem 4.2.9, Theorem 4.2.14 and Theorem 4.2.23 we can proceed as in Section 4 in [9], except for the proof of weak uniqueness of (4.12) respectively (4.18). We shall give a short overview in what follows. The interested reader is referred to [9] for complete explanations. In short, Lemma 4.4.3(b) implies for all φ ∈ Cc that sup < νtN , φ > − < A(ξtN ), φ > ≤ C(λ) D(φ, N −1/2 ) t  λ  N →∞  → 0.  (4.54)  The C-tightness of (A(ξtN ) : t ≥ 0) in C1 follows from the results of Section 4.5. This in turn implies the C-tightness of (νtN : t ≥ 0) as cadlag Radon measure valued processes with the vague topology. Indeed, let ϕk , k ∈ N be a sequence of smooth functions from R to [0, 1] such that ϕk (x) is 1 for |x| ≤ k and 0 for |x| ≥ k + 1. Then a diagonalization argument shows that C-tightness of (νtN : t ≥ 0) as cadlag Radon measure valued processes with the vague topology holds if and only if C-tightness of (ϕk dνtN : t ≥ 0) as cadlag MF ([−(k + 1), k + 1])-valued processes with the weak topology holds. Here, MF ([−(k + 1), k + 1]) denotes the space of finite measures on [−(k + 1), k + 1]. Now use Theorem II.4.1 in [11] to obtain C-tightness of (ϕk dνtN : t ≥ 0) in D(MF ([−(k + 1), k + 1])). The compact containment condition (i) in [11] is obvious. The second condition (ii) in [11] follows from (4.54) and the C-tightness of (A(ξtN ) : t ≥ 0) in C1 together with Lemma 4.4.4(a). Observe in particular, that (4.54) implies the existence of a subsequence A(ξtNk ), νtNk that converges to (ut , νt ). Hence, we can define variables with the same distributions on a different probability space such that with probability one, for all T < ∞, λ > 0, φ ∈ Cc , sup A(ξtNk ) − ut t≤T  −λ  → 0 as k → ∞,  sup < φ, νtNk > − < φ, νt > → 0 as k → ∞, t≤T  where we used 0 ≤ A(ξtNk ) ≤ 1 and thus 0 ≤ ut (x) ≤ 1 a.s. for the first limit. We obtain in particular νt (dx) = ut (x)dx for all t ≥ 0. It remains to investigate ut in the special case, i.e. with no fixed kernel, i.e. where (k,m,N ) (k,m,N ) q0j = q1j , j = 0, 1.  119 Take φt ≡ φ ∈ Cc3 in (4.33). We get (N )  Mt  (φ) =< νtN , φ > − < ν0N , φ > − −  k=0,1  (1 − 2k)  t 0  (1)  N < νs− , ∆(φ) > ds − Et (φ) (4.55) t  (k,m,N )  q0j  0  m≥2,j=0,1  N × F1−k ◦ A ξs−  m−1  N < Fj ◦ A ξs− (3)  N , φ > ds − Et (φ). 
δk ◦ ξs−  From (4.27) and (4.32) and the Burkholder-Davis-Gundy inequality (4.45) we obtain that the error terms converge to zero for all 0 ≤ t ≤ T almost surely. Taylor’s theorem further shows that (replace Nk by N for notational ease) N − θ(N ) (φ(y) − φ(xN )) 2c(N )N 1/2 y∼x N √ N − θ(N ) N (φ(y) − φ(xN )) = c(N )N 2 y∼x  ∆(φ) (xN ) =  N  ∆φ → (x) as xN → x and N → ∞ 6 on the support of φ. (N ) Using this in (4.55) we can show that Mt (φ) converges to a continuous martingale Mt (φ) satisfying t  Mt (φ) = −  φ(x)ut (x)dx −  k=0,1  (1 − 2k)  φ(x)u0 (x)dx −  m≥2,j=0,1  × (F1−k (us (x)))  m−1  (4.56)  t  (k,m)  q0j  0  ∆φ(x) us (x)dxds 6  Fj (us (x)) 0  Fk (us (x))φ(x)dxds.  To exchange the limit in N → ∞ with the infinite sum we used [12], Proposition 11.18 together with Hypothesis 4.2.20. Recall in particular, that 0 ≤ Fl (us (x)) ≤ 1 for l = 0, 1. To show that Mt (φ) is indeed a martingale we used in particular (4.30) to see that M (N ) (φ) t ≤ C(λ)t φ 2λ < 1, e−2λ > is (N ) uniformly bounded. Therefore, (Mt (φ) : N ≥ N0 ) and all its moments are uniformly integrable, using the Burkholder-Davis-Gundy inequality of the form (4.45) once more. We can further calculate its quadratic variation by making use of (4.29) for (N ) N → ∞ together with the uniform integrability of ((Mt (φ))2 : N ≥ N0 ). Use our results for φ ∈ Cc3 , note that Cc3 is dense in Cc2 with respect to the norm f ≡ f ∞ + f ∞ + ∆f ∞ , and use (4.56) to see that ut solves the martingale problem associated with the SPDE (4.18). It is now straightforward to show that, with respect to some white noise, ut is actually a solution to (4.18) (see [13], V.20 for the similar argument in the case of SDEs).  120  4.7  Uniqueness in Law  To show uniqueness of all limit points of Section 4.6 in the case with no fixed kernel and with < u0 , 1 >< ∞, we need to show uniqueness in law of [0, 1]valued solutions to either (4.12) or (4.18) (recall Corollary 4.2.24). Indeed, as 0 ≤ A(ξtN )(x) ≤ 1 by definition, any limit point has to satisfy ut (x) ∈ [0, 1]. We shall choose to prove weak uniqueness of (4.18), i.e. of ∂u ∆u = + ∂t 6  k=0,1  (1 − 2k)  ˙ + 2u(1 − u)W ∆u + u(1 − u) = 6  (k,m)  q0j  Fj (u) (F1−k (u))  m−1  Fk (u)  (4.57)  m≥2,j=0,1  k=0,1  (1 − 2k)  ˙ + 2u(1 − u)W ∆u + u(1 − u)Q(u) + ≡ 6  (k,m)  q0j  Fj (u) (F1−k (u))  m−2  m≥2,j=0,1  ˙ 2u(1 − u)W  with initial condition u0 in what follows. Observe that |Q(us (x))| ≤ CQ with CQ as in (4.43) because 0 ≤ us (x) ≤ 1. To check uniqueness in law of [0, 1]-valued solutions we shall apply a version of Dawson’s Girsanov Theorem, see Theorem IV.1.6 in [11], p. 252. Let Pu denote the law of a solution to the SPDE (4.57) and Pv denote the unique law of the [0, 1]-valued solution to the SPDE ∂v ∆v = + ∂t 6  ˙ 2v(1 − v)W  (4.58)  with v0 = u0 . Reasons for existence and uniqueness of a [0, 1]-valued solution to the latter can be found in Shiga [14], Example 5.2, p. 428. Note in particular that the solution vt takes values in C1 . To prove weak uniqueness, we shall follow the reasoning of the proof of Theorem IV.1.6(a),(b) in [11] in a univariate setup. To follow the reasoning from [11], we need to show the following Lemma first. Lemma 4.7.1. Given u0 = v0 satisfying < u0 , 1 >< ∞, we have Pu -a.s. t t < us , 1 > ds < ∞ for all t ≥ 0 and Pv -a.s. 0 < vs , 1 > ds < ∞ for all 0 t ≥ 0. Proof. We shall prove the claim for Pu . The other claim then follows by considering the special case Q ≡ 0. As a first step we shall use a generalization of the weak form of (4.57) to functions in two variables. 
In the proof of Theorem 2.1  121 2 on p. 430 of [14] it is shown that for every ψ ∈ Drap (T ) and 0 < t < T we have t  < ut , ψt >= < u0 , ψ0 > +  ∆ ∂ ψs > ds + ∂s 6  < us , 0  (4.59)  t  < us (1 − us )Q(us ), ψs > ds  + 0 t  2us (x)(1 − us (x))ψs (x)dW (x, s).  + 0  Here we have for T > 0, C(R) = {f : R → R continuous} , Crap = f ∈ C(R) s.t. sup eλ|x| |f (x)| < ∞ for all λ > 0 , x  2 Crap  2 (T ) Drap  2  = ψ ∈ C (R) s.t. ψ, ψ , ψ ∈ Crap ,  2 = ψ ∈ C 1,2 ([0, T ) × R) s.t. ψ(t, ·) is Crap -valued continuous and  ∂ψ (t, ·) is Crap -valued continuous in 0 ≤ t < T ∂t  .  Also observe that the condition (2.2) of [14] is satisfied as we have 0 ≤ us (x) ≤ 1 and therefore |Q(us (x))| ≤ CQ .  Now recall that the Brownian transition density is ps (x) = (Ps φ)(x) = p s3 (y − x)φ(y)dy with φ ∈ Cc∞ , φ ≥ 0 and let  x2  √ 1 e− 2s 2πs  . Let  2 ψ(s, x) = ψs (x) = eCQ (T −s) (PT −s φ)(x) and thus ψ ∈ Drap (T ). ∂ (PT −s φ)(x) = − ∆ Note that ∂s 6 (PT −s φ)(x), where we used that 1 ∆p (x). We obtain for the drift term in (4.59) that s 2  ∂ ∂s ps (x)  =  ∂ ∆ ψs > + < us (1 − us )Q(us ), ψs > + ∂s 6 ∆ ∆ =< us , −CQ ψs − ψs + ψs > + < us (1 − us )Q(us ), ψs > 6 6 ≤0  < us ,  using that ψ(s, x) ≥ 0 for φ ≥ 0. Additionally, the local martingale in (4.59) is a true martingale as ·  2us (x)(1 − us (x))ψs (x)dW (x, s)  0 t  = 0  < 2us (1 − us ), ψs2 > ds ≤ 2e2CQ T  t t  0  2  < 1, (PT −s φ) > ds  t  ≤ 2e2CQ T  φ  0 0  < 1, PT −s φ > ds = 2e2CQ T  φ  0<  1, φ > t < ∞.  122 Hence we obtain from (4.59) for all 0 < t < T after taking expectations E[< ut , ψt >] ≤< u0 , ψ0 >, i.e.  eCQ (T −t) E[< ut , (PT −t φ) >] ≤ eCQ T < u0 , (PT φ) > .  Now choose an increasing sequence of non-negative functions φn ∈ Cc∞ such that φn ↑ 1 for n → ∞. Using the monotone convergence theorem, we obtain from the above eCQ (T −t) E[< ut , 1 >] = lim eCQ (T −t) E[< ut , (PT −t φn ) >] n→∞  ≤ lim eCQ T < u0 , (PT φn ) >= eCQ T < u0 , 1 > . n→∞  Hence by the Fubini-Tonelli theorem, t  E 0  t  < us , 1 > ds ≤< u0 , 1 >  0  eCQ s ds < ∞  for all t ≥ 0, which proves the claim. Lemma 4.7.2. If < u0 , 1 >< ∞ the weak [0, 1]-valued solution to (4.57) is unique in law. If we let t  Rt ≡ exp −  1 2  0 t 0  then  Q(vs (x)) 2  2vs (x)(1 − vs (x))dW (x, s) 2  (1 − vs (x)) (Q(vs (x))) vs (x)dxds , 2 dPu dPv  = Rt for all t > 0,  (4.60)  Ft  where Ft is the canonical filtration of the process v(t, x). Proof. We proceed analogously to the proof of Theorem IV.1.6(a),(b) in [11]. Observe in particular that we take t  Tn = inf t ≥ 0 :  0  (1 − us (x)) (Q(us (x)))2 us (x)dx + 1 ds ≥ n . 2  Lemma 4.7.1 shows that under Pu t 0  2  (CQ ) (1 − us (x)) (Q(us (x))) us (x)dxds ≤ 2 2  2  t 0  < us , 1 > ds < ∞  for all t > 0 Pu −a.s. and so Tn ↑ ∞ Pu -a.s. As in Theorem IV.1.6(a) of [11] this gives uniqueness of the law Pu of a solution to (4.57). As in Theorem IV.1.6(b) of [11] the fact that Tn ↑ ∞ Pv -a.s. (from Lemma 4.7.1) shows that (4.60) defines a probability Pu which satisfies (4.57).  123  Bibliography [1] Burkholder, D.L. Distribution function inequalities for martingales. Ann. Probab. (1973) 1, 19–42. MR0365692 [2] Cox, J.T. and Durrett, R. and Perkins, E.A. Rescaled voter models converge to super-Brownian motion. Ann. Probab. (2000) 28, 185–234. MR1756003 [3] Cox, J.T. and Perkins, E.A. Rescaled Lotka-Volterra models converge to super-Brownian motion. Ann. Probab. (2005) 33, 904–947. MR2135308 [4] Cox, J.T. and Perkins, E.A. Survival and coexistence in stochastic spatial Lotka-Volterra models. Probab. Theory Related Fields (2007) 139, 89– 142. 
MR2322693

[5] Dawson, D.A. Measure-valued Markov processes. École d'été de probabilités de Saint-Flour, XXI (1991), 1–260, Lecture Notes in Math., 1541, Springer, Berlin, 1993. MR1242575

[6] Durrett, R. Ten lectures on particle systems. Lectures on probability theory (Saint-Flour, 1993), 97–201, Lecture Notes in Math., 1608, Springer, Berlin, 1995. MR1383122

[7] Liggett, T.M. Interacting Particle Systems, Reprint of the 1985 original. Classics in Mathematics, Springer, Berlin, 2005. MR2108619

[8] Liggett, T.M. Stochastic interacting systems: contact, voter and exclusion processes. Springer, Berlin Heidelberg New York, 1999. MR1717346

[9] Mueller, C. and Tribe, R. Stochastic p.d.e.'s arising from the long range contact and long range voter processes. Probab. Theory Related Fields (1995) 102, 519–545. MR1346264

[10] Neuhauser, C. and Pacala, S.W. An explicitly spatial version of the Lotka-Volterra model with interspecific competition. Ann. Appl. Probab. (1999) 9, 1226–1259. MR1728561

[11] Perkins, E.A. Dawson-Watanabe superprocesses and measure-valued diffusions. Lectures on Probability Theory and Statistics (Saint-Flour, 1999), 125–324, Lecture Notes in Math., 1781, Springer, Berlin, 2002. MR1915445

[12] Royden, H.L. Real Analysis, Third edition. Macmillan Publishing Company, New York, 1988. MR1013117

[13] Rogers, L.C.G. and Williams, D. Diffusions, Markov Processes, and Martingales, vol. 2, Reprint of the second (1994) edition. Cambridge Mathematical Library, Cambridge Univ. Press, Cambridge, 2000. MR1780932

[14] Shiga, T. Two contrasting properties of solutions for one-dimensional stochastic partial differential equations. Canad. J. Math. (1994) 46, 415–437. MR1271224

[15] Walsh, J.B. An Introduction to Stochastic Partial Differential Equations. École d'été de probabilités de Saint-Flour, XIV (1984), 265–439, Lecture Notes in Math., 1180, Springer, Berlin, 1986. MR0876085

Chapter 5

Conclusion

The last three Chapters investigate different models for interacting multi-type populations from biology. All models under consideration are parameter-dependent and the behaviour of the parameters determines the behaviour of the system with respect to weak uniqueness or survival and coexistence results of types.

5.1 Overview of Results and Future Perspectives of the Manuscripts

5.1.1 Degenerate stochastic differential equations for catalytic branching networks

Chapter 2 establishes weak uniqueness for systems of SDEs modelling catalytic branching networks. These networks can be obtained as limit points of branching particle systems. Weak uniqueness of the solutions to the SDE shows uniqueness of the limit points and hence implies convergence of the above approximations. It also makes certain additional tools available for analysis of the solution, as can be seen in the application of the results of Chapter 2 in Chapter 3. For instance, in the proof of existence of a stationary distribution for the normalized processes in Subsection A.2, weak uniqueness yields that the generator A satisfies the positive maximum principle. This paper is an extension of Dawson and Perkins [6] (where the networks were essentially trees or cycles) to arbitrary networks. The additional dependency among catalysts led to a change of perspective from reactants to catalysts. In [6] every reactant $j$ had one catalyst $c_j$ only, but as it turned out, for networks it is more effective to consider every catalyst $i$ with the set $R_i$ of its reactants.
In particular, the restriction from $R_i$ to $\bar R_i$, including only reactants whose catalysts are all zero, turns out to be crucial. As a result, this paper introduces new ideas on how to handle networks where there exist catalytic interlinks between vertices. As already mentioned in Subsection 1.1 of the introductory Chapter 1, the extension to networks becomes necessary, for example, in dimensions $d \ge 3$ in the renormalization analysis of hierarchically interacting multi-type branching models treated in Dawson, Greven, den Hollander, Sun and Swart [5]. The consideration of successive block averages leads to a renormalization transformation on the diffusion functions of a system of SDEs similar to (2.1), (2.2). Unfortunately, [5] only show preservation of the continuity of the coefficients of the SDE under this transformation, but not preservation of Hölder continuity. In [2], Bass and Perkins prove results similar to [6], i.e. they restrict themselves to networks with at most one catalyst per reactant, but drop the requirement of Hölder continuity of the coefficients and replace it by continuity only. A future challenge would be to investigate how the ideas of my paper can be applied to extend the results of [2] to arbitrary networks. As a first step, [2] views the system of SDEs as a perturbation of a well-behaved system of SDEs with generator $A_0$. As part of my paper I found an explicit representation of the semigroup $P_t$ corresponding to $A_0$ in the extension to the setup of [6]. This representation directly carries over to the setting of [2]. The modification of the remaining reasoning in [2] to arbitrary networks remains to be done. Here we note that the extension to general graphs in Chapter 2 led to a number of technical problems in the approach of [6], even after the structure of the generator was resolved.

5.1.2 Long-term behaviour of a cyclic catalytic branching system

The results of Chapter 2 are used in Chapter 3 to establish weak uniqueness of the system of SDEs under consideration. The system involves catalytic branching and mutation between types, and questions of survival and coexistence of types in the long-time limit arise; such questions occur naturally in biological competition models. Recall that Fleischmann and Xiong [7] investigated a cyclically catalytic super-Brownian motion. They showed global segregation (non-coexistence) of neighbouring types in the limit, as well as other results on finite-time survival and extinction, but they were not able to determine whether the overall sum dies out in the limit or not. I was able to show that in my setup the sum of all coordinates converges to zero but does not hit zero in finite time a.s. By changing my focus to the normalized processes I showed in particular that the normalized processes converge weakly to a unique stationary distribution that does not charge the set where at least one of the coordinates is zero. The weakness of this manuscript lies in the restriction to constant positive coefficients $\gamma^i$ and $q_j^i$ in the SDEs (3.1). It would be of interest to allow the coefficients to depend on the state of the system. As long as they are uniformly bounded away from zero and infinity, I conjecture a similar behaviour of the system, but it would be of particular interest to see how the failure of uniform boundedness impacts questions of survival and coexistence. If one considers coefficients satisfying Hypothesis 2.1.2, the results of Chapter 2 can be applied to obtain weak uniqueness of the new system of SDEs as a first step.
Finally, following the approach in Section III.12.2 of [8], one could consider dilution flows instead of the normalized processes; that is, one removes the excess concentration so that the total concentration remains constant.

5.1.3 Convergence of rescaled competing species processes to a class of SPDEs

In Chapter 4, the tightness results obtained yield the relative compactness of the approximating particle systems. Limits along subsequences therefore exist for combinations of long-range kernel and fixed kernel interactions in the perturbations, where a wide class of admissible perturbations was found, including analytic functions as they appear in the spatial versions of the Lotka-Volterra models of Neuhauser and Pacala [11]. It was particularly interesting to see that adding fixed kernel perturbations to the long-range case does not impact tightness. In the long-range case I obtain that all subsequential limits satisfy a certain SPDE. It remains open to find the form of the limiting equations in the case of long-range dispersal in the presence of short-range, i.e. fixed kernel, competition.

If I additionally assume finite initial mass in the long-range case, weak uniqueness of the limiting SPDE follows and, as a consequence, so does weak uniqueness of the limits of the approximating particle systems. It would be of interest to find necessary and sufficient conditions for weak uniqueness of the limiting SPDE. I conjecture $\int u_0(x)(1-u_0(x))\,dx < \infty$ to be a sufficient condition. If this condition is preserved by the dynamics, it suffices for the Girsanov Lemma 4.7.2 to hold. In the special case under investigation in Mueller and Sowers [9] (see below), $\int u_t(x)(1-u_t(x))\,dx < \infty$ for all $t > 0$ a.s. follows, and so we expect this condition to define a state space for the solutions of the equations.

The class of SPDEs that I obtain as limits of sequences of rescaled long-range perturbations of the voter model, under some conditions on the parameters $\alpha_i^{(m)} \in \mathbb{R}$, $i = 0, 1$, $m \in \mathbb{N}$, and their counterparts in the approximating models, is
$$\frac{\partial u}{\partial t} = \frac{\Delta u}{6} + (1-u)u \left( \sum_{m=0}^{\infty} \alpha_0^{(m+1)} u^m - \sum_{m=0}^{\infty} \alpha_1^{(m+1)} (1-u)^m \right) + \sqrt{2u(1-u)}\,\dot W.$$
As I work at the critical range $\sqrt{N}$, while Cox and Perkins [3] work with longer-range interactions, I obtain a wide class of non-linear drifts. This opens up the possibility to interpret the limiting SPDEs and their behaviour via their approximating long-range particle systems and vice versa. For instance, a future challenge would be to use properties of the SPDE to obtain results on the approximating particle systems, following the ideas of [3] and Cox and Perkins [4]. As an example, recall Remark 4.2.15, where I obtained the SPDE
$$\frac{\partial u}{\partial t} = \frac{\Delta u}{6} + (1-u)u \left\{ \lambda - \alpha_{10} + u(\alpha_{01} + \alpha_{10}) \right\} + \sqrt{2u(1-u)}\,\dot W$$
with parameters $\lambda, \alpha_{10}, \alpha_{01} \in \mathbb{R}$ as the limit of spatial versions of the Lotka-Volterra model with competition and fecundity parameters near one. We can rewrite this SPDE as
$$\frac{\partial u}{\partial t} = \frac{\Delta u}{6} + \theta_0 (1-u)u^2 - \theta_1 u(1-u)^2 + \sqrt{2u(1-u)}\,\dot W, \qquad u(0,x) \equiv u_0(x) \ge 0, \qquad (5.1)$$
where $\theta_i \in \mathbb{R}$, $i = 0, 1$. For $\theta_1 = -\theta_0 < 0$ one obtains, after rescaling, the Kolmogorov-Petrovskii-Piscuinov (KPP) equation driven by Fisher-Wright noise. This SPDE has already been investigated in detail in Mueller and Sowers [9], where the existence of travelling waves was shown for $\theta_0$ big enough. A major question is how the change in the drift, in particular the possible additional zero of the drift at $\theta_1/(\theta_0 + \theta_1) \in (0,1)$, impacts the set of parameters for survival
(i.e. $\limsup_{t\to\infty} \langle u_t, 1 \rangle > 0$ with positive probability), coexistence (i.e. there exists a stationary distribution giving zero mass to the configurations 0 and 1) and extinction (i.e. $\limsup_{t\to\infty} \langle u_t, 1 \rangle = 0$ with probability 1), and whether there exist phase transitions. Aronson and Weinberger [1] showed, for instance in Corollary 3.1(ii), that for $\theta_0 < 0$, $\theta_1 < 0$, the corresponding deterministic PDE converges to the intermediate zero $\theta_1/(\theta_0 + \theta_1)$ of the drift term uniformly on bounded sets, provided $u_0 \not\equiv 0, 1$. The author conjectures that there are parameter regions for (5.1) that yield survival and others that yield extinction.

To prove survival, the author envisions applying methods of Mueller and Tribe [10] to the SPDE (5.1). In [10] and Tribe [12], rescaled versions of the SPDE
$$\frac{\partial u}{\partial t} = \frac{\Delta u}{6} + \theta u - u^2 + \sqrt{u}\,\dot W, \qquad \theta > 0, \qquad (5.2)$$
were under investigation. Results on the existence of a phase transition between extinction and survival in terms of $\theta$ are obtained in the former paper, and the existence of travelling wave solutions in the latter. Unfortunately, the proof of extinction in [10] and the proof of existence of travelling waves in [12] rely on the additive properties of the fluctuation term in (5.2) (also recall the discussion of additive properties of super-Brownian motion at the beginning of the introductory Chapter 1), which makes the application of their methods to SPDEs of the form (5.1) difficult. On the other hand, [10] shows that for $\theta$ large, the drift term of the SPDE (5.2) outcompetes the fluctuation term. The proof for survival then uses a comparison of $u(t,x)$ to oriented site percolation to prove survival for $\theta$ big. It should be possible to apply their reasoning, together with the results of [1] for the corresponding deterministic PDE, to the SPDE (5.1) to show survival of types in certain parameter regions. Extinction, on the other hand, seems much more delicate to prove.

Bibliography

[1] Aronson, D.G. and Weinberger, H.F. Multidimensional Nonlinear Diffusion Arising in Population Genetics. Adv. Math. (1978) 30, 33–76. MR0511740

[2] Bass, R.F. and Perkins, E.A. Degenerate stochastic differential equations arising from catalytic branching networks. Electron. J. Probab. (2008) 13, 1808–1885. MR2448130

[3] Cox, J.T. and Perkins, E.A. Rescaled Lotka-Volterra models converge to super-Brownian motion. Ann. Probab. (2005) 33, 904–947. MR2135308

[4] Cox, J.T. and Perkins, E.A. Survival and coexistence in stochastic spatial Lotka-Volterra models. Probab. Theory Related Fields (2007) 139, 89–142. MR2322693

[5] Dawson, D.A. and Greven, A. and den Hollander, F. and Sun, R. and Swart, J.M. The renormalization transformation for two-type branching models. Ann. Inst. H. Poincaré Probab. Statist. (2008) 44, 1038–1077. MR2469334

[6] Dawson, D.A. and Perkins, E.A. On the uniqueness problem for catalytic branching networks and other singular diffusions. Illinois J. Math. (2006) 50, 323–383 (electronic). MR2247832

[7] Fleischmann, K. and Xiong, J. A cyclically catalytic super-Brownian motion. Ann. Probab. (2001) 29, 820–861. MR1849179

[8] Hofbauer, J. and Sigmund, K. The Theory of Evolution and Dynamical Systems. London Math. Soc. Stud. Texts, vol. 7, Cambridge Univ. Press, Cambridge, 1988. MR1071180

[9] Mueller, C. and Sowers, R.B. Random Travelling Waves for the KPP Equation with Noise. J. Funct. Anal. (1995) 128, 439–498. MR1319963

[10] Mueller, C. and Tribe, R. A phase transition for a stochastic PDE related to the contact process. Probab. Theory Related Fields (1994) 100, 131–156.
Bibliography

[1] Aronson, D.G. and Weinberger, H.F. Multidimensional nonlinear diffusion arising in population genetics. Adv. Math. (1978) 30, 33–76. MR0511740
[2] Bass, R.F. and Perkins, E.A. Degenerate stochastic differential equations arising from catalytic branching networks. Electron. J. Probab. (2008) 13, 1808–1885. MR2448130
[3] Cox, J.T. and Perkins, E.A. Rescaled Lotka-Volterra models converge to super-Brownian motion. Ann. Probab. (2005) 33, 904–947. MR2135308
[4] Cox, J.T. and Perkins, E.A. Survival and coexistence in stochastic spatial Lotka-Volterra models. Probab. Theory Related Fields (2007) 139, 89–142. MR2322693
[5] Dawson, D.A., Greven, A., den Hollander, F., Sun, R. and Swart, J.M. The renormalization transformation for two-type branching models. Ann. Inst. H. Poincaré Probab. Statist. (2008) 44, 1038–1077. MR2469334
[6] Dawson, D.A. and Perkins, E.A. On the uniqueness problem for catalytic branching networks and other singular diffusions. Illinois J. Math. (2006) 50, 323–383 (electronic). MR2247832
[7] Fleischmann, K. and Xiong, J. A cyclically catalytic super-Brownian motion. Ann. Probab. (2001) 29, 820–861. MR1849179
[8] Hofbauer, J. and Sigmund, K. The Theory of Evolution and Dynamical Systems. London Math. Soc. Stud. Texts, vol. 7, Cambridge Univ. Press, Cambridge, 1988. MR1071180
[9] Mueller, C. and Sowers, R.B. Random travelling waves for the KPP equation with noise. J. Funct. Anal. (1995) 128, 439–498. MR1319963
[10] Mueller, C. and Tribe, R. A phase transition for a stochastic PDE related to the contact process. Probab. Theory Related Fields (1994) 100, 131–156. MR1296425
[11] Neuhauser, C. and Pacala, S.W. An explicitly spatial version of the Lotka-Volterra model with interspecific competition. Ann. Appl. Probab. (1999) 9, 1226–1259. MR1728561
[12] Tribe, R. A travelling wave solution to the Kolmogorov equation with noise. Stochastics Stochastics Rep. (1996) 56, 317–340. MR1396765

Appendix A

Appendix for Chapter 3

A.1 $\tilde a$ is non-singular

Corollary A.1.1. The matrix $\tilde a$ is non-singular for all $x \in \tilde S$, where
\[
\tilde S = \Big([0,1]^{d-1} \cap \Big\{\sum_{i=1}^{d-1} x_i \le 1\Big\}\Big) \Big\backslash \Big\{x : \exists i : x_i = 0 \text{ or } \sum_{i=1}^{d-1} x_i = 1\Big\}.
\]

Proof. Recall that $\sigma \in M(d,d)$ (the space of $d \times d$ matrices) and $a = \sigma\sigma^T \in M(d,d)$. Let $\bar\sigma \in M(d-1,d)$ be constructed from $\sigma$ by deleting the last row of the matrix (i.e. by deleting the last equation, for $Y_t^d$, of our system of SDEs). Then $\tilde a = \bar\sigma\bar\sigma^T \in M(d-1,d-1)$. Further, let $\tilde\sigma \in M(d-1,d-1)$ be the matrix obtained from $\bar\sigma$ by deleting the last column.

We claim that if $\tilde\sigma$ is non-singular, then $\tilde a$ is non-singular as well. Indeed, let $v \in M(d-1,1)$ denote the last column of $\bar\sigma$ and suppose $\tilde\sigma$ is non-singular; then
\[
\det(\tilde a) = \det(\bar\sigma\bar\sigma^T) = \det(\tilde\sigma\tilde\sigma^T + vv^T)
= \det(\tilde\sigma\tilde\sigma^T)\big(1 + v^T(\tilde\sigma\tilde\sigma^T)^{-1}v\big)
= \det(\tilde\sigma\tilde\sigma^T)\big(1 + \|\tilde\sigma^{-1}v\|^2\big)
= \big(\det(\tilde\sigma)\big)^2\big(1 + \|\tilde\sigma^{-1}v\|^2\big) > 0.
\]
Recall that for $i,j \in \{1,\dots,d-1\}$ we have $\tilde\sigma_{ii}(x) = (1-x_i)\sqrt{2\gamma^i x_i x_{i+1}}$ and $\tilde\sigma_{ij}(x) = -x_i\sqrt{2\gamma^j x_j x_{j+1}}$ if $i \ne j$, where we set $x_d \equiv 1 - \sum_{i=1}^{d-1} x_i$. Suppose that $x_i > 0$ for all $i \in \{1,\dots,d-1\}$ and $\sum_{i=1}^{d-1} x_i < 1$. We shall show that in this case $\tilde\sigma$ is non-singular.

As a first step we divide the $i$th row of $\tilde\sigma$ by $x_i$ for $i = 1,\dots,d-1$. We obtain
\[
\frac{\det(\tilde\sigma(x))}{\prod_{i=1}^{d-1} x_i}
= \det\begin{pmatrix}
d_1 & a_2 & a_3 & \dots & a_{d-1}\\
a_1 & d_2 & a_3 & \dots & a_{d-1}\\
a_1 & a_2 & d_3 & \dots & a_{d-1}\\
\dots & \dots & \dots & \dots & \dots\\
a_1 & a_2 & a_3 & \dots & d_{d-1}
\end{pmatrix}
= \det\begin{pmatrix}
d_1-a_1 & a_2-d_2 & 0 & \dots & 0 & 0\\
0 & d_2-a_2 & a_3-d_3 & \dots & 0 & 0\\
0 & 0 & d_3-a_3 & \dots & 0 & 0\\
\dots & \dots & \dots & \dots & \dots & \dots\\
0 & 0 & 0 & \dots & d_{d-2}-a_{d-2} & a_{d-1}-d_{d-1}\\
a_1 & a_2 & a_3 & \dots & a_{d-2} & d_{d-1}
\end{pmatrix}
\equiv \det A_1,
\]
where we used the row operations $i \to i - (i+1)$ on all but the last row and set
\[
d_i - a_i = (1-x_i)\sqrt{2\gamma^i \frac{x_{i+1}}{x_i}} + \sqrt{2\gamma^i x_i x_{i+1}} = \sqrt{2\gamma^i \frac{x_{i+1}}{x_i}} = -\frac{a_i}{x_i}. \tag{A.1}
\]
Expanding along the first column we can calculate the determinant of $\tilde\sigma$:
\[
\det(A_1) = (d_1 - a_1)\det(A_2) + (-1)^d a_1 \prod_{i=2}^{d-1}(a_i - d_i), \tag{A.2}
\]
where we obtain the matrix $A_2$ by crossing out the first row and column of $A_1$. We obtain recursively, for $k = 1,\dots,d-3$, using (A.1), that
\[
\det(A_k) = -\frac{a_k}{x_k}\det(A_{k+1}) + a_k(-1)^{d-k+1}\prod_{i=k+1}^{d-1}\frac{a_i}{x_i}. \tag{A.3}
\]
By using (A.3) recursively in (A.2) we get
\[
\det(A_1) = (-1)^{d-3}\prod_{i=1}^{d-3}\frac{a_i}{x_i}\,\det(A_{d-2}) + (-1)^d \prod_{i=1}^{d-1} a_i \sum_{i=1}^{d-3}\ \prod_{j=1,\dots,d-1,\,j\ne i}\frac{1}{x_j}.
\]
Finally,
\[
\det(A_{d-2}) = \det\begin{pmatrix} d_{d-2}-a_{d-2} & a_{d-1}-d_{d-1}\\ a_{d-2} & d_{d-1}\end{pmatrix}
\overset{(A.1)}{=} -\frac{a_{d-2}\,d_{d-1}}{x_{d-2}} - \frac{a_{d-2}\,a_{d-1}}{x_{d-1}},
\]
and thus (recall that $x_i \ne 0$ for $i = 1,\dots,d$ and thus $a_i \ne 0$ for $i = 1,\dots,d-1$)
\begin{align*}
\det(\tilde\sigma(x)) = 0 &\iff \det(A_1) = 0\\
&\iff \sum_{i=1}^{d-2}\ \prod_{j=1,\dots,d-1,\,j\ne i}\frac{1}{x_j} + \frac{d_{d-1}}{a_{d-1}}\prod_{j=1,\dots,d-1,\,j\ne d-1}\frac{1}{x_j} = 0\\
&\iff \sum_{i=1}^{d-2}\ \prod_{j=1,\dots,d-1,\,j\ne i}\frac{1}{x_j} - \frac{1-x_{d-1}}{x_{d-1}}\prod_{j=1,\dots,d-1,\,j\ne d-1}\frac{1}{x_j} = 0\\
&\iff \sum_{i=1}^{d-2} x_i - (1-x_{d-1}) = 0 \iff \sum_{i=1}^{d-1} x_i = 1, \text{ i.e. } x_d = 0,
\end{align*}
which is a contradiction to $x \in \tilde S$. Hence $\det(\tilde\sigma(x)) \ne 0$ for all $x \in \tilde S$.
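The following is a small numerical sanity check, not part of the proof: it assembles $\tilde\sigma(x)$ from the entries quoted above (with arbitrary illustrative rates $\gamma^i > 0$) and confirms that its determinant stays away from zero at random interior points of $\tilde S$, while it vanishes once a coordinate hits the excluded set.

```python
import numpy as np

# Numerical sanity check (not part of the proof): assemble sigma_tilde(x) from
# the entries quoted above,
#   sigma_tilde[i, i] = (1 - x_i) * sqrt(2 * gamma^i * x_i * x_{i+1}),
#   sigma_tilde[i, j] = -x_i     * sqrt(2 * gamma^j * x_j * x_{j+1})   (i != j),
# with x_d = 1 - sum_{i < d} x_i.  The rates gamma^i > 0 are arbitrary here.

def sigma_tilde(x, gamma):
    xs = np.append(x, 1.0 - x.sum())                 # append x_d
    col = np.sqrt(2.0 * gamma * xs[:-1] * xs[1:])    # sqrt(2 gamma^j x_j x_{j+1})
    s = -np.outer(x, np.ones(len(x))) * col          # off-diagonal entries -x_i col_j
    np.fill_diagonal(s, (1.0 - x) * col)             # diagonal entries (1 - x_i) col_i
    return s

rng = np.random.default_rng(1)
d = 5
gamma = rng.uniform(0.5, 2.0, size=d - 1)
dets = [np.linalg.det(sigma_tilde(rng.dirichlet(np.ones(d))[:-1], gamma))
        for _ in range(1000)]
print("min |det sigma_tilde| over 1000 interior points:", min(abs(v) for v in dets))

x_bad = np.array([0.0, 0.3, 0.3, 0.2])               # x_1 = 0 lies outside S_tilde
print("det at an excluded point:", np.linalg.det(sigma_tilde(x_bad, gamma)))
```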
A.2 Proof of Proposition 3.2.18

Proof. The main part of the proof is taken from Dawson, Greven, den Hollander, Sun and Swart [1], Section 3.1, and adjusted to our setting.

Existence. Denote the distribution of $Y_t \in [0,1]^d$ by $\mu_t$, with $\mu_0 = \delta_y$ for some arbitrary $y \in S$. As the state space $[0,1]^d$ is compact, $\{\nu_t : \nu_t \equiv \frac{1}{t}\int_0^t \mu_s\,ds\}_{t\ge 0}$ forms a tight family of distributions. In this case Theorem III.2.2 in Ethier and Kurtz [2] implies that $\{\nu_t\}_{t\ge 0}$ is relatively compact for the Prohorov metric. As $[0,1]^d$ is Polish, Theorem III.1.7 in [2] gives the existence of a limit. Also note that the convergence of a sequence of $\{\nu_t\}_{t\ge 0}$ is equivalent to weak convergence by Theorem III.3.1 of [2]. Taking this together, we find a sequence $(t_n)$ tending to infinity such that $\nu_{t_n}$ converges weakly to a limiting distribution $\nu$. The goal will now be to apply Theorem IV.9.17 of [2], where $\hat C([0,1]^d) = C([0,1]^d)$ by definition (cf. the definition before Lemma IV.2.1) due to the compactness of our state space. To this purpose note first that the generator corresponding to (3.15) is given by
\[
Af(x) = \sum_{i=1}^d b_i(x)\,\partial_i f(x) + \frac{1}{2}\sum_{i,j=1}^d a_{ij}(x)\,\partial_{ij} f(x),
\]
where $a = \sigma\sigma^T$ and $b$ are as in (3.19) and $D(A) = C^2([0,1]^d)$. Note further that our system of SDEs has a solution that is unique in law (cf. Proposition 3.2.13). Hence Theorem V.(21.2) and Remark V.(21.9) in Rogers and Williams [3] yield that the solution is strong Markov. Hence $A$ is the generator of a strong Markov process and thus satisfies the positive maximum principle by [2], Theorem IV.2.2. Moreover, if $\nu$ is the limit of $\nu_{t_n}$, then for any $f \in C^2([0,1]^d)$ we have $Af \in C([0,1]^d)$ and
\begin{align*}
E_\nu[(Af)(Y_0)] &= \lim_{n\to\infty} E_{\nu_{t_n}}[(Af)(Y_0)]
= \lim_{n\to\infty} \frac{1}{t_n}\int_0^{t_n} E_{\mu_s}[(Af)(Y_0)]\,ds
= \lim_{n\to\infty} \frac{1}{t_n}\int_0^{t_n} E_{\mu_0}[(Af)(Y_s)]\,ds\\
&= \lim_{n\to\infty} \frac{1}{t_n}\, E_{\mu_0}\Big[\int_0^{t_n} (Af)(Y_s)\,ds\Big]
= \lim_{n\to\infty} \frac{1}{t_n}\, E_{\mu_0}\big[f(Y_{t_n}) - f(Y_0)\big]
= 0.
\end{align*}
Here we used $\nu_{t_n} \Rightarrow \nu$ together with $Af \in C([0,1]^d)$ in the first equality, the definition of $\nu_t$ in the second, $Af \in C([0,1]^d)$ in the fourth equality, that $Y$ solves the martingale problem for $A$ in the fifth equality and $f$ bounded in the last. Finally, observe that $C^2([0,1]^d)$ forms an algebra of functions dense in $C([0,1]^d)$. Taking the above together, we can apply Theorem IV.9.17 of [2] and obtain the existence of a stationary solution.
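The mechanism of this existence argument (time-averaged laws are tight and any subsequential limit annihilates the generator, hence is stationary) can be illustrated on a one-dimensional toy diffusion. The Wright-Fisher diffusion with mutation used below is only a stand-in for the multi-type process of Chapter 3; its Beta(2,2) stationary law serves as the known answer, and the Euler scheme and run length are illustrative choices.

```python
import numpy as np

# Toy illustration of the averaging step: for the Wright-Fisher diffusion with
# mutation,  dX = (a(1 - X) - bX) dt + sqrt(X(1 - X)) dW  with a = b = 1, the
# stationary law is Beta(2a, 2b) = Beta(2, 2), so E[X^2] = 0.3.  The time
# average (1/T) int_0^T X_s^2 ds settles near this value, mirroring how the
# Cesaro averages nu_t single out a stationary distribution.  This is a
# stand-in example, not the process of Chapter 3.

rng = np.random.default_rng(0)
a = b = 1.0
dt, T = 1e-3, 500.0
x, acc = 0.9, 0.0                       # start far from the stationary mean
for _ in range(int(T / dt)):
    drift = a * (1.0 - x) - b * x
    x += drift * dt + np.sqrt(max(x * (1.0 - x), 0.0) * dt) * rng.standard_normal()
    x = min(max(x, 0.0), 1.0)           # keep the crude Euler scheme inside [0, 1]
    acc += x * x * dt
print("time average of X^2:", acc / T, "(stationary value 0.3)")
```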
Uniqueness. Let $Y_t$ be the unique strong Markov solution to (3.15). Recall that we already showed in Proposition 3.2.17 that every equilibrium distribution for $A$ does not put mass on $N = \{y : \exists i : y_i = 0\}$, and in Proposition 3.2.13 that $\sum_i Y_t^i = 1$ for all $t \ge 0$. Hence, if $Y_t$ has two distinct equilibrium distributions, concentrated on $[0,1]^d$, or to be more precise, on
\[
S \equiv \big([0,1]^d \setminus \partial[0,1]^d\big) \cap \Big\{y : \sum_i y_i = 1\Big\},
\]
then we can find two extremal equilibrium distributions $\mu$ and $\nu$ that are singular with respect to each other (see for instance Exercise 6.9 in Varadhan [5]). As $\mu(B) = P_\mu(Y_t \in B) = \int p_t(x,B)\,d\mu(x)$, there have to exist $x, y \in S$ such that the transition kernels $p_t(x,dz)$ and $p_t(y,dz)$ are mutually singular for all $t > 0$ as well. Also, as $\mu$ respectively $\nu$ do not put mass on $N$, the same holds for $p_t(x,dz)$ respectively $p_t(y,dz)$.

In what follows we shall consider the process $\tilde Y_t \equiv (Y_t^1,\dots,Y_t^{d-1}) \in [0,1]^{d-1}$ with transition kernels $\tilde p_t$ instead. The martingale problem for the resulting SDE for $\tilde Y$ is consequently well-posed, as the corresponding martingale problem for $Y$ is well-posed. Let $\tilde p_t(\tilde x, d\tilde z)$ and $\tilde p_t(\tilde y, d\tilde z)$ be the resulting transition kernels corresponding to $p_t(x,dz)$ and $p_t(y,dz)$. For
\[
\tilde x \in \tilde S \equiv \Big([0,1]^{d-1} \cap \Big\{\sum_{i=1}^{d-1}\tilde x_i \le 1\Big\}\Big)\Big\backslash\Big\{\tilde x : \exists i : \tilde x_i = 0 \text{ or } \sum_{i=1}^{d-1}\tilde x_i = 1\Big\}
= \big([0,1]^{d-1}\setminus\partial[0,1]^{d-1}\big)\cap\Big\{\tilde x : 0 < \sum_{i=1}^{d-1}\tilde x_i < 1\Big\},
\]
the matrix $\tilde a(\tilde x) = (\tilde a_{ij}(\tilde x))$ is non-singular by Corollary A.1.1 of the Appendix. Now we can appeal to Theorem B.4 ("Support theorem for uniformly elliptic diffusions") of [1] with $\tilde x, \tilde y \in D$, $\bar D \subset \tilde S$, where $D$ is an arbitrarily fixed ball. Here observe that $\tilde S$ is an open subset of $[0,1]^{d-1}$. The Theorem allows us to transport the diffusions started at $\tilde x$ respectively $\tilde y$ to a common small neighbourhood with positive probability. Subsequently we can apply Corollary B.3 ("Transition density for diffusions restricted to bounded domains") of [1] to see that $\tilde p_t(\tilde x, d\tilde z)$ and $\tilde p_t(\tilde y, d\tilde z)$, and hence $p_t(x,dz)$ and $p_t(y,dz)$, cannot be mutually singular for all $t > 0$.

Convergence. Once more, we shall consider the process $\tilde Y$ instead of $Y$ and the state space $\tilde S$ instead of $S$. Let $\tilde\pi$ be the unique equilibrium distribution of $\tilde Y$ corresponding to the unique equilibrium distribution $\pi$ of $Y$. Firstly, note that by Theorem B.4 of [1] the equilibrium distribution $\tilde\pi$ assigns positive measure to every open subset of $\tilde S$. Secondly, we shall show that $\mathcal{L}(\tilde Y_t\,|\,\tilde Y_0 = x) \Rightarrow \tilde\pi$ for all $x \in \tilde S$ in three steps, namely by showing first that it holds for almost all $x \in \tilde S$ w.r.t. $\tilde\pi$. Then we shall extend this result to Lebesgue almost every $x \in \tilde S$, and finally we shall conclude that this implies that it holds for all $x \in \tilde S$.

To prove the first step, we first choose $x^* \in \tilde S$ arbitrary but fixed and let $D$ be open such that $x^* \in D$, $\bar D \subset \tilde S$. Recall that $\tilde a$ is non-singular on $\tilde S$ and $\tilde S$ is an open subset of $[0,1]^{d-1}$. Let $\tilde Y_t, \tilde Z_t$ be two independent copies of the process on $[0,1]^{d-1}$. Then the joint process $(\tilde Y_t, \tilde Z_t)$ is strong Markov and has a unique equilibrium distribution given by the product measure $\tilde\pi\otimes\tilde\pi$. By Theorem 6.9 in [5] and the following Remarks, the process $(\tilde Y_t, \tilde Z_t)$ started in equilibrium, i.e. with $\mathcal{L}((\tilde Y_t,\tilde Z_t)) = \tilde\pi\otimes\tilde\pi$, is ergodic. As the equilibrium distribution $\tilde\pi$ assigns positive measure to every open subset of $\tilde S$, $\tilde\pi\otimes\tilde\pi$ assigns positive measure to $B_\varepsilon(x^*)\times B_\varepsilon(x^*) \subset D\times D$ for $\varepsilon$ small enough. Therefore $(\tilde Y_t,\tilde Z_t)$ visits the set $B_\varepsilon(x^*)\times B_\varepsilon(x^*)$ after any finite time $T$ a.s. by the ergodic theorem. We obtain in particular that for almost all $(x,x')$ w.r.t. $\tilde\pi\otimes\tilde\pi$, $(\tilde Y_t,\tilde Z_t)$ started at $(x,x')$ visits $B_\varepsilon(x^*)\times B_\varepsilon(x^*)$ after any finite time $T$ a.s. Fix such an $(x,x')$.

In what follows we shall start two independent processes $\tilde Y_t$ and $\tilde Z_t$ with initial conditions $x$ respectively $x'$ as above and denote their laws by $P^x$ respectively $P^{x'}$. Let the first exit time from $D$ be $\tau_D(\omega) \equiv \inf\{t \ge 0 : \omega(t) \notin D\}$. By Corollary B.3, for each $\delta > 0$ and $z \in D$, the measure
\[
\mu_\delta^D(z,\cdot) \equiv P^z\big(\omega : \delta < \tau_D(\omega),\ \omega(\delta) \in \cdot\big)
\]
admits a density $p_\delta^D(z,\cdot)$ with respect to Lebesgue measure. Moreover, (B.2) of the Corollary yields that for $\varepsilon, \delta$ sufficiently small we have, uniformly for $y, y' \in B_\varepsilon(x^*)$,
\[
\int p_\delta^D(y,u)\wedge p_\delta^D(y',u)\,du \ge \frac{1}{2}, \tag{A.4}
\]
where we used that $a + b - (a\vee b - a\wedge b) = a\vee b + a\wedge b - (a\vee b - a\wedge b) \ge 1 \iff a\wedge b \ge 1/2$. We obtain in particular that for $y \in B_\varepsilon(x^*)$ (and analogously for $y' \in B_\varepsilon(x^*)$)
\[
P^y(\omega : \omega(\delta) \in du) \ge P^y\big(\omega : \delta < \tau_D(\omega),\ \omega(\delta) \in du\big) = p_\delta^D(y,u)\,du \ge p_\delta^D(y,u)\wedge p_\delta^D(y',u)\,du. \tag{A.5}
\]
For $y, y' \in B_\varepsilon(x^*)$ fixed, let $\mu^1_{(y,y')}$ be the measure on $[0,1]^{d-1}\times[0,1]^{d-1}$ defined by
\[
\mu^1_{(y,y')}(A\times B) \equiv \int_{A\times B} \big(p_\delta^D(y,u)\wedge p_\delta^D(y',u)\big)\big(p_\delta^D(y,v)\wedge p_\delta^D(y',v)\big)\,du\,dv
\]
for $A, B \in \mathcal{B}([0,1]^{d-1})$, and observe that
\[
\mu^1_{(y,y')}(A\times B) = \mu^1_{(y,y')}(B\times A). \tag{A.6}
\]
Let $\mu^2_{(y,y')}(A\times B) \equiv P^y(\omega:\omega(\delta)\in A)\,P^{y'}(\omega:\omega(\delta)\in B) - \mu^1_{(y,y')}(A\times B)$, which is non-negative by (A.5), and note that by (A.4),
\[
\mu^2_{(y,y')}\big([0,1]^{d-1}\times[0,1]^{d-1}\big) \le \frac{3}{4} \quad\text{for all } y,y' \in B_\varepsilon(x^*). \tag{A.7}
\]
In what follows we shall give the motivation for the later rigorous definitions and calculations.
We obtain that whenever $(\tilde Y_t, \tilde Z_t)$, starting at $(x,x')$, enters $B_\varepsilon(x^*)\times B_\varepsilon(x^*)$ at a random time, say $T_1$, and through some random point $(y,y') \in B_\varepsilon(x^*)\times B_\varepsilon(x^*)$, we can decompose the conditional law of $(\tilde Y_t, \tilde Z_t)$ at time $T_1+\delta$ as follows:
\[
P^y(\omega:\omega(\delta)\in A)\,P^{y'}(\omega:\omega(\delta)\in B) = \mu^1_{(y,y')}(A\times B) + \mu^2_{(y,y')}(A\times B). \tag{A.8}
\]
Now we shall successively decompose the law of the process $(\tilde Y_t,\tilde Z_t)$ at times $T_k+\delta$, $k \in \mathbb{N}$, where $T_k$ is the first time after $T_{k-1}+\delta$ that $(\tilde Y_t,\tilde Z_t)$ enters $B_\varepsilon(x^*)\times B_\varepsilon(x^*)$ (see (A.9) below). For instance, if at time $T_1$, $(\tilde Y_t,\tilde Z_t)$ enters $B_\varepsilon(x^*)\times B_\varepsilon(x^*)$ through some random point $(y,y') \in B_\varepsilon(x^*)\times B_\varepsilon(x^*)$, we decompose the law of $(\tilde Y_t,\tilde Z_t)$ at time $T_1+\delta$ into $\mu^1_{(y,y')}$, $\mu^2_{(y,y')}$ as above. Next we consider $(\tilde Y_{T_1+\delta+\cdot}, \tilde Z_{T_1+\delta+\cdot})$, starting in $\mu^1_{(y,y')}$ resp. $\mu^2_{(y,y')}$. As $\mu^1_{(y,y')}$ yields a common part, we do not decompose the law of $(\tilde Y_{T_1+\delta+\cdot}, \tilde Z_{T_1+\delta+\cdot})$ starting in $\mu^1_{(y,y')}$ any further. On the other hand, starting in $\mu^2_{(y,y')}$, we wait until $(\tilde Y_{T_1+\delta+\cdot}, \tilde Z_{T_1+\delta+\cdot})$ enters $B_\varepsilon(x^*)\times B_\varepsilon(x^*)$ again, say at time $T_1' = T_2 - (T_1+\delta)$, through the point $(u,u') \in B_\varepsilon(x^*)\times B_\varepsilon(x^*)$. $T_1'$ is finite $\mu^2_{(y,y')}$-a.s., as $\mu^2_{(y,y')}$ is absolutely continuous with respect to $P^y(\omega:\omega(\delta)\in\cdot)\otimes P^{y'}(\omega:\omega(\delta)\in\cdot)$ by (A.8). Here recall that $(y,y')$ was the first entrance point of $B_\varepsilon(x^*)\times B_\varepsilon(x^*)$ by $(\tilde Y_t,\tilde Z_t)$ under $P^x\otimes P^{x'}$ and that $(\tilde Y_t,\tilde Z_t)$ started at $(x,x')$ visits $B_\varepsilon(x^*)\times B_\varepsilon(x^*)$ after any finite time $T$ a.s. Hence $(\tilde Y_t,\tilde Z_t)$ started at $(x,x')$ visits $B_\varepsilon(x^*)\times B_\varepsilon(x^*)$ again at some finite random time $T_2 = T_1+\delta+T_1'$. Thus, starting in $\mu^2_{(y,y')}$, we can decompose $(\tilde Y_{T_1+\delta+T_1'+\delta}, \tilde Z_{T_1+\delta+T_1'+\delta})$ into $\mu^1_{(u,u')}$ and $\mu^2_{(u,u')}$. Now iterate the above.

To be more explicit, let $U_0 = 0$ and define stopping times
\[
T_k = \inf\big\{t \ge U_{k-1} : \big(\tilde Y_t, \tilde Z_t\big) \in B_\varepsilon(x^*)\times B_\varepsilon(x^*)\big\} \quad\text{and}\quad U_k = T_k + \delta
\]
for $k \in \mathbb{N}$ and $\delta > 0$. Then almost surely
\[
T_k < \infty \quad\text{for all } k \in \mathbb{N}. \tag{A.9}
\]
By the strong Markov property of the process $(\tilde Y_t,\tilde Z_t)$ we can condition on $\mathcal{F}_{U_1}$ and obtain, for $n \in \mathbb{N}$ arbitrarily fixed,
\begin{align*}
P^x\otimes P^{x'}\big(\tilde Y_t \in A, \tilde Z_t \in B\big)
&= P^x\otimes P^{x'}\big(\tilde Y_t \in A, \tilde Z_t \in B, t < U_n\big)\\
&\quad + P^x\otimes P^{x'}\Big[P^{\tilde Y_{T_1}(\omega)}\otimes P^{\tilde Z_{T_1}(\omega)}\big(\tilde Y_{t-T_1(\omega)} \in A,\ \tilde Z_{t-T_1(\omega)} \in B,\ t \ge U_1(\omega) + U_{n-1}\circ\theta_\delta\big)\,1(t \ge U_1(\omega))\Big].
\end{align*}
Here $\theta_\delta$ denotes the shift operator $\theta_\delta(\omega(\cdot)) = \omega(\delta+\cdot)$. Using (A.8) we can rewrite this as
\begin{align*}
P^x\otimes P^{x'}\big(\tilde Y_t \in A, \tilde Z_t \in B\big)
&= P^x\otimes P^{x'}\big(\tilde Y_t \in A, \tilde Z_t \in B, t < U_n\big) \tag{A.10}\\
&\quad + P^x\otimes P^{x'}\Big[\int \mu^1_{(\tilde Y_{T_1}(\omega),\tilde Z_{T_1}(\omega))}(dw,dw')\, P^w\otimes P^{w'}\big(\tilde Y_{t-U_1(\omega)} \in A,\ \tilde Z_{t-U_1(\omega)} \in B,\ t \ge U_1(\omega) + U_{n-1}\big)\,1(t \ge U_1(\omega))\Big]\\
&\quad + P^x\otimes P^{x'}\Big[\int \mu^2_{(\tilde Y_{T_1}(\omega),\tilde Z_{T_1}(\omega))}(dw,dw')\, P^w\otimes P^{w'}\big(\tilde Y_{t-U_1(\omega)} \in A,\ \tilde Z_{t-U_1(\omega)} \in B,\ t \ge U_1(\omega) + U_{n-1}\big)\,1(t \ge U_1(\omega))\Big].
\end{align*}
Using (A.6) and the symmetry of $T_k$ and $U_k$ in $(\tilde Y, \tilde Z)$ we get in particular that
\begin{align*}
\Big| P^x\otimes P^{x'}\big(\tilde Y_t \in A, \tilde Z_t \in B\big) - P^x\otimes P^{x'}\big(\tilde Y_t \in B, \tilde Z_t \in A\big)\Big|
&\le 2\,P^x\otimes P^{x'}(t < U_n)\\
&\quad + P^x\otimes P^{x'}\Big[\int \mu^2_{(\tilde Y_{T_1}(\omega),\tilde Z_{T_1}(\omega))}(dw,dw')\, P^w\otimes P^{w'}\big(\tilde Y_{t-U_1(\omega)} \in [0,1]^{d-1},\ \tilde Z_{t-U_1(\omega)} \in [0,1]^{d-1}\big)\Big]\\
&\le 2\,P^x\otimes P^{x'}(t < U_n) + \frac{3}{4},
\end{align*}
the last by (A.7).
If $n \ge 2$ we can further condition the inner probability on $\mathcal{F}_{U_1}$ and decompose the last term in (A.10) into
\begin{align*}
&P^x\otimes P^{x'}\Big[\int \mu^2_{(\tilde Y_{T_1}(\omega),\tilde Z_{T_1}(\omega))}(dw,dw')\, P^w\otimes P^{w'}\big(\tilde Y_{t-U_1(\omega)} \in A,\ \tilde Z_{t-U_1(\omega)} \in B,\ t \ge U_1(\omega) + U_{n-1}\big)\,1(t \ge U_1(\omega))\Big]\\
&= P^x\otimes P^{x'}\Big[\int \mu^2_{(\tilde Y_{T_1}(\omega),\tilde Z_{T_1}(\omega))}(dw,dw')\, P^w\otimes P^{w'}\Big(\int \mu^1_{(\tilde Y_{T_1}(\omega'),\tilde Z_{T_1}(\omega'))}(dz,dz')\, P^z\otimes P^{z'}\big(\tilde Y_{t-(U_1(\omega)+U_1(\omega'))} \in A,\ \tilde Z_{t-(U_1(\omega)+U_1(\omega'))} \in B,\\
&\qquad\qquad t \ge U_1(\omega)+U_1(\omega')+U_{n-2}\big)\,1(t \ge U_1(\omega)+U_1(\omega'))\Big)\,1(t \ge U_1(\omega))\Big]\\
&\quad + P^x\otimes P^{x'}\Big[\int \mu^2_{(\tilde Y_{T_1}(\omega),\tilde Z_{T_1}(\omega))}(dw,dw')\, P^w\otimes P^{w'}\Big(\int \mu^2_{(\tilde Y_{T_1}(\omega'),\tilde Z_{T_1}(\omega'))}(dz,dz')\, P^z\otimes P^{z'}\big(\tilde Y_{t-(U_1(\omega)+U_1(\omega'))} \in A,\ \tilde Z_{t-(U_1(\omega)+U_1(\omega'))} \in B,\\
&\qquad\qquad t \ge U_1(\omega)+U_1(\omega')+U_{n-2}\big)\,1(t \ge U_1(\omega)+U_1(\omega'))\Big)\,1(t \ge U_1(\omega))\Big].
\end{align*}
Using this in (A.10) we obtain, with (A.6),
\begin{align*}
\Big|P^x\otimes P^{x'}\big(\tilde Y_t \in A, \tilde Z_t \in B\big) - P^x\otimes P^{x'}\big(\tilde Y_t \in B, \tilde Z_t \in A\big)\Big|
&\le 2\,P^x\otimes P^{x'}(t < U_n)\\
&\quad + P^x\otimes P^{x'}\Big[\int \mu^2_{(\tilde Y_{T_1}(\omega),\tilde Z_{T_1}(\omega))}(dw,dw')\, P^w\otimes P^{w'}\Big(\int \mu^2_{(\tilde Y_{T_1}(\omega'),\tilde Z_{T_1}(\omega'))}(dz,dz')\\
&\qquad\qquad P^z\otimes P^{z'}\big(\tilde Y_{t-(U_1(\omega)+U_1(\omega'))} \in [0,1]^{d-1},\ \tilde Z_{t-(U_1(\omega)+U_1(\omega'))} \in [0,1]^{d-1}\big)\Big)\Big]\\
&\le 2\,P^x\otimes P^{x'}(t < U_n) + \Big(\frac{3}{4}\Big)^2,
\end{align*}
the last by (A.7). By iterating the above decomposition we obtain, for $n \in \mathbb{N}$ fixed,
\[
\Big|P^x\otimes P^{x'}\big(\tilde Y_t \in A, \tilde Z_t \in B\big) - P^x\otimes P^{x'}\big(\tilde Y_t \in B, \tilde Z_t \in A\big)\Big| \le 2\,P^x\otimes P^{x'}(t < U_n) + \Big(\frac{3}{4}\Big)^n.
\]
Recall that for almost all $(x,x')$ w.r.t. $\tilde\pi\otimes\tilde\pi$ we have that almost surely $T_k < \infty$ for all $k \in \mathbb{N}$. Hence, to given $\varepsilon > 0$ we can choose $n \in \mathbb{N}$ such that $(3/4)^n < \varepsilon/2$ and then choose $T > 0$ such that $P^x\otimes P^{x'}(t < U_n) < \varepsilon/4$ for all $t \ge T$. We obtain
\[
\Big|P^x\otimes P^{x'}\big(\tilde Y_t \in A, \tilde Z_t \in B\big) - P^x\otimes P^{x'}\big(\tilde Y_t \in B, \tilde Z_t \in A\big)\Big| < \varepsilon
\]
for all $t \ge T$ and thus that
\[
\lim_{t\to\infty}\ \sup_{A,B \in \mathcal{B}([0,1]^{d-1})}\ \Big|P^x\otimes P^{x'}\big(\tilde Y_t \in B, \tilde Z_t \in A\big) - P^x\otimes P^{x'}\big(\tilde Y_t \in A, \tilde Z_t \in B\big)\Big| = 0.
\]
Choosing $A = [0,1]^{d-1}$ yields
\[
\lim_{t\to\infty}\ \sup_{B \in \mathcal{B}([0,1]^{d-1})}\ \big|P^{x'}\big(\tilde Z_t \in B\big) - P^x\big(\tilde Y_t \in B\big)\big| = 0.
\]
A simple tightness argument completes the proof of our first step.

Next we shall extend our result, $\mathcal{L}(\tilde Y_t\,|\,\tilde Y_0 = x) \Rightarrow \tilde\pi$ as $t\to\infty$ for $\tilde\pi$-almost all $x \in \tilde S$, to Lebesgue almost every $x \in \tilde S$. The proof goes by contradiction. Let $A = \{x \in \tilde S : \mathcal{L}(\tilde Y_t\,|\,\tilde Y_0 = x) \not\Rightarrow \tilde\pi\}$. We first claim that $A$ is Borel-measurable. Indeed, the martingale problem for $\tilde Y$ is well-posed and thus the process $\tilde Y$ is Feller continuous (see for example Stroock and Varadhan [4], Corollary 11.1.5). By using that the corresponding semigroup is a contraction, we obtain the claim. Suppose by contradiction that $A$ has positive Lebesgue measure. In this case there exists a simply connected bounded open domain $D \subset \tilde S$ with smooth boundary such that $A \cap D$ has positive Lebesgue measure. As $\tilde\pi(A) = 0$ by the step above, $\tilde\pi(A\cap D) = 0$ follows. If $\tilde Z_t$ is the stationary solution of the SDE in $[0,1]^{d-1}$ started with initial law $\tilde\pi$, then $E\big[\int_0^T 1(\tilde Z_t \in A\cap D)\,dt\big] = 0$ for all $T > 0$. On the other hand, by Theorem B.5 ("Occupation time measure for uniformly elliptic diffusions") of [1], we have for every $x \in D$,
\[
E\Big[\int_0^{\tau_D} 1(\tilde Y_t \in A\cap D)\,dt\ \Big|\ \tilde Y_0 = x\Big] > 0,
\]
where $\tau_D = \inf\{t \ge 0 : \tilde Y_t \notin D\}$. As $\tilde\pi$ assigns positive probability to $D$, we have
\[
\int_D E\Big[\int_0^{\tau_D} 1(\tilde Y_t \in A\cap D)\,dt\ \Big|\ \tilde Y_0 = x\Big]\,\tilde\pi(dx) > 0.
\]
By the monotone convergence theorem, we can choose $T$ sufficiently large such that
\[
\int_D E\Big[\int_0^{\tau_D\wedge T} 1(\tilde Y_t \in A\cap D)\,dt\ \Big|\ \tilde Y_0 = x\Big]\,\tilde\pi(dx) > 0.
\]
But the l.h.s. is dominated by $E\big[\int_0^T 1(\tilde Z_t \in A\cap D)\,dt\big] = 0$, which is a contradiction. Therefore $A$ has Lebesgue measure zero.

It remains to show that $\mathcal{L}(\tilde Y_t\,|\,\tilde Y_0 = x) \Rightarrow \tilde\pi$ for all $x \in \tilde S$. Indeed, for $x \in \tilde S$, let $\varepsilon > 0$ be such that $B_\varepsilon(x) \subset \tilde S$. By Corollary B.3 applied to $D = B_\varepsilon(x)$, the transition kernel $\mu_t^{B_\varepsilon(x)}(x,\cdot)$ is absolutely continuous w.r.t. Lebesgue measure.
As shown above, for Lebesgue almost every $y \in B_\varepsilon(x)$, $\mathcal{L}(\tilde Y_{t+s}\,|\,\tilde Y_t = y) \Rightarrow \tilde\pi$ as $s \to \infty$. By observing that $\mu_t^{B_\varepsilon(x)}(x, B_\varepsilon(x)) \uparrow 1$ as $t \to 0$ (see (B.3)), we finally get $\mathcal{L}(\tilde Y_t\,|\,\tilde Y_0 = x) \Rightarrow \tilde\pi$ for arbitrary $x \in \tilde S$, which completes our proof.

Bibliography

[1] Dawson, D.A., Greven, A., den Hollander, F., Sun, R. and Swart, J.M. The renormalization transformation for two-type branching models. Ann. Inst. H. Poincaré Probab. Statist. (2008) 44, 1038–1077. MR2469334
[2] Ethier, S.N. and Kurtz, T.G. Markov Processes: Characterization and Convergence. Wiley and Sons, Inc., Hoboken, New Jersey, 2005. MR0838085
[3] Rogers, L.C.G. and Williams, D. Diffusions, Markov Processes, and Martingales, vol. 2. Reprint of the second (1994) edition. Cambridge Mathematical Library, Cambridge Univ. Press, Cambridge, 2000. MR1780932
[4] Stroock, D.W. and Varadhan, S.R.S. Multidimensional Diffusion Processes. Grundlehren Math. Wiss., vol. 233, Springer, Berlin-New York, 1979. MR532498
[5] Varadhan, S.R.S. Probability Theory. Courant Lect. Notes Math., 7, New York; Amer. Math. Soc., Providence, Rhode Island, 2001. MR1852999

Appendix B

Appendix for Chapter 4

The following Lemma and Corollary are necessary to prove Lemma 4.4.1 of Chapter 4.

Lemma B.0.1. There exists $N_0 < \infty$ such that for all $N \ge N_0$, $k \ge 1$,
(a) $\Big|\rho^k(t) - \exp\big(-(1+o(1))\tfrac{kt^2}{6N}\big)\Big| \le C\,k^{-1}\exp\big(-(1+o(1))\tfrac{kt^2}{12N}\big)$ for $t^2 \le (1+o(1))\tfrac{N}{3}$,
(b) $|\rho(t)| \le \exp\big(-C\tfrac{t^2}{12N}\big)$ for $t \le \big(\tfrac{6N}{1+o(1)}\big)^{1/2}$,
(c) there exists $\delta > 0$ such that $|\rho(t)| \le 1-\delta$ for $t \in \big[\big(\tfrac{6N}{1+o(1)}\big)^{1/2}, \pi N\big]$.

Proof. The proof mainly follows along the lines of the proof of Lemma 8 in Mueller and Tribe [3]. Some small changes ensued due to the different setup. Recall the definition of $\rho(t)$ from equation (4.38). For (b), we could not find the reference mentioned in [3], but the following reasoning in [3], based on applying Taylor's theorem at $t = 0$, works well without it. For (a), first observe that $\rho^k(t) = E\big[e^{itS_k}\big]$ and use Bhattacharya and Rao [1], (8.11), (8.13) and [1], Theorem 8.5, as suggested in [3]. We used that $E[Y_1] = E[Y_1^3] = 0$.

It remains to prove (c). We have to change the proof of [3], Lemma 8(c) slightly, as we used $x \nsim x$, i.e. a site is not a neighbour of itself. We get
\begin{align*}
|\rho(t)| &= \Bigg|\frac{1}{2c(N)N^{1/2}}\sum_{0<|j|\le c(N)\sqrt{N}} e^{it\frac{j}{N}}\Bigg|
= \Bigg|\frac{1}{2c(N)N^{1/2}}\sum_{0<j\le c(N)\sqrt{N}} 2\,\mathrm{Re}\Big(e^{it\frac{j}{N}}\Big)\Bigg|\\
&= \Bigg|\mathrm{Re}\bigg(\frac{1}{c(N)N^{1/2}}\,\frac{e^{i\frac{t}{N}} - e^{i\frac{t}{N}(c(N)\sqrt{N}+1)}}{1 - e^{i\frac{t}{N}}}\bigg)\Bigg|
= \Bigg|\mathrm{Re}\bigg(\frac{1}{c(N)N^{1/2}}\,\frac{e^{i\frac{t}{N}} - e^{i\frac{t}{N}(c(N)\sqrt{N}+1)}}{-2i\sin\big(\frac{t}{2N}\big)}\,e^{-i\frac{t}{2N}}\bigg)\Bigg|\\
&\le \frac{1}{c(N)N^{1/2}}\,\frac{2}{2\sin\big(\frac{t}{2N}\big)}.
\end{align*}
For $\frac{1+\varepsilon}{c(N)N^{1/2}} \le \frac{t}{2N} \le \frac{\pi}{2}$ with $\varepsilon > 0$ fixed we get as an upper bound
\[
\frac{1}{c(N)N^{1/2}\,\sin\big(\frac{1+\varepsilon}{c(N)N^{1/2}}\big)} \le (1+o(1))\,\frac{1}{1+\varepsilon} < 1,
\]
given $N$ big enough. Finally use that $2 < \sqrt{6}$ to obtain the claim.

Corollary B.0.2. For $N \ge N_0$, $y \in N^{-1}\mathbb{Z}$ we have
\[
\Big| N\,P(S_k = y) - p_{(1+o(1))\frac{k}{3N}}(y) \Big| \le C_1 N \exp\{-kC_2\} + N^{1/2}k^{-3/2},
\]
where $C_1, C_2 > 0$ are some positive constants.

Proof. This result corresponds to Corollary 9 in [3]. The proof works similarly. Instead of the reference given at the beginning of the proof of Corollary 9 in [3], we used Durrett [2], p. 95, Ex. 3.2(ii) and [2], Thm. (3.3). Note in particular that the result of Lemma B.0.1(c) can be extended to $t \in \big[\big((1+o(1))\tfrac{N}{3}\big)^{1/2}, \pi N\big]$ if we choose $\delta > 0$ small enough. Indeed, using Lemma B.0.1(b) we obtain
\[
|\rho(t)| \le e^{-C\frac{t^2}{12N}} \le e^{-C\frac{N/3}{12N}} \le 1-\delta,
\]
as claimed.
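To see the local central limit theorem of Corollary B.0.2 at work, one can compute the law of $S_k$ exactly in a toy case. The sketch below takes $c(N) = 1$, so that a single step is uniform on $\{j/N : 0 < |j| \le \sqrt{N}\}$ with mean zero and per-step variance close to $1/(3N)$, and compares $N\,P(S_k = y)$ with the Gaussian density of matching variance. The values of $N$ and $k$ are illustrative assumptions; the $(1+o(1))$ corrections and the precise $c(N)$ of Chapter 4 are not reproduced.

```python
import numpy as np

# Toy check of the local CLT behind Corollary B.0.2: with c(N) = 1, a single
# step Y_1 is uniform on {j/N : 0 < |j| <= sqrt(N)} (mean zero, variance close
# to 1/(3N)).  We compute the exact law of S_k by repeated convolution and
# compare N * P(S_k = y) with the Gaussian density of matching variance.

N = 100
M = int(np.sqrt(N))                        # = c(N) * sqrt(N) with c(N) = 1
k = 400

step = np.zeros(2 * M + 1)                 # law of N * Y_1 on {-M, ..., M}
step[:M] = step[M + 1:] = 1.0 / (2 * M)    # uniform, no mass at j = 0

law = np.array([1.0])                      # law of N * S_0 = delta_0
for _ in range(k):
    law = np.convolve(law, step)           # exact law of N * S_k after k steps

support = np.arange(-k * M, k * M + 1) / N
var1 = np.sum(step * (np.arange(-M, M + 1) / N) ** 2)   # exact per-step variance
gauss = np.exp(-support**2 / (2 * k * var1)) / np.sqrt(2 * np.pi * k * var1)

print("per-step variance:", var1, "  1/(3N) =", 1.0 / (3 * N))
print("max |N P(S_k = y) - p(y)|:", float(np.max(np.abs(N * law - gauss))))
```

The printed deviation can be compared with the error bound $C_1 N e^{-kC_2} + N^{1/2}k^{-3/2}$ from the corollary.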
Bibliography

[1] Bhattacharya, R.N. and Ranga Rao, R. Normal approximation and asymptotic expansions. Wiley and Sons, New York-London-Sydney, 1976. MR0436272
[2] Durrett, R. Probability: Theory and Examples. Third edition. Brooks/Cole-Thomson Learning, Belmont, 2005.
[3] Mueller, C. and Tribe, R. Stochastic p.d.e.'s arising from the long range contact and long range voter processes. Probab. Theory Related Fields (1995) 102, 519–545. MR1346264
