Stochastic models for spatial populations

by

Yu-Ting Chen

B.A., National Central University, Taiwan, 2004
M.Sc., National Chiao Tung University, Taiwan, 2006

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate Studies (Mathematics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

May 2013

© Yu-Ting Chen 2013

Abstract

This thesis is dedicated to the study of various spatial stochastic processes from theoretical biology.

For finite interacting particle systems from evolutionary biology, we study two of the simple rules for the evolution of cooperation on finite graphs in Ohtsuki, Hauert, Lieberman, and Nowak [Nature 441 (2006) 502-505], which were first discovered by clever, but non-rigorous, methods. We resort to the notion of voter model perturbations and give a rigorous proof, very different from the original arguments, that both of the rules of Ohtsuki et al. are valid and sharp. Moreover, the generality of our method leads to a first-order approximation for the fixation probabilities of general voter model perturbations on finite graphs in terms of the voter model fixation probabilities. This should be of independent interest for other voter model perturbations.

For spatial branching processes from population biology, we prove pathwise non-uniqueness in the stochastic partial differential equations (SPDE's) of some one-dimensional super-Brownian motions with immigration and zero initial value. In contrast to a closely related case studied in a recent work by Mueller, Mytnik, and Perkins [30], the solutions of the present SPDE's are assumed to be nonnegative and are unique in law. In proving the possible separation of solutions, we use a novel method, called continuous decomposition, to validate natural immigrant-wise semimartingale calculations for the approximating solutions; this method may be of independent interest in the study of superprocesses with immigration.

Preface

The work presented henceforth was conducted by myself, under the supervision of my advisor, Professor Edwin A. Perkins. The problems investigated were originally conceived by Professor Perkins, and due to him is an important idea used in Chapter 3. I am responsible for all of the proofs and writing.

A part of my Ph.D. thesis research appears in the following publication:

Chen, Y.-T. Sharp benefit-to-cost rules for the evolution of cooperation on regular graphs. Annals of Applied Probability, Volume 23, Number 2 (2013), 637-664.

This paper forms the basis of Chapter 2.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication
1 Introduction
  1.1 Evolutionary game theory and voter model perturbations
  1.2 One-dimensional super-Brownian motions with immigration and their SPDE's
2 Sharp benefit-to-cost rules for the evolution of cooperation on regular graphs
  2.1 Introduction
  2.2 Voter model perturbations
  2.3 Expansion
  2.4 First-order approximations
    2.4.1 Proof of Theorem 1.2 (1)
    2.4.2 Proof of Theorem 1.2 (2)
  2.5 Proofs of Proposition 2.1 and Proposition 2.2
    2.5.1 Fitnesses
    2.5.2 Proof of Proposition 2.1
    2.5.3 Proof of Proposition 2.2
3 Pathwise Non-uniqueness for the SPDE's of Some Super-Brownian Motions with Immigration
  3.1 Introduction
  3.2 A non-rigorous derivation of the SPDE of super-Brownian motions
  3.3 Super-Brownian motions with intermittent immigration
  3.4 Continuous decompositions of approximating processes
  3.5 First look at conditional separation
    3.5.1 Basic results
    3.5.2 A non-rigorous proof for conditional separation
  3.6 Conditional separation of approximating solutions
    3.6.1 Setup
    3.6.2 Auxiliary results and notation
    3.6.3 Proof of Lemma 3.14
    3.6.4 Proof of Lemma 3.15
  3.7 Uniform separation of approximating solutions
  3.8 Proof of Proposition 3.9
  3.9 Proof of Proposition 3.25
  3.10 An iterated formula for improved pointwise modulus of continuity
  3.11 Limit theorems for $C_{\mathrm{rap}}(\mathbb R)$-valued processes
  3.12 Some properties of support processes
4 Conclusion
Bibliography

List of Tables

3.1 List of frequent notation for Chapter 3

List of Figures

1.1 Pairwise payoffs
1.2 Failure of attractiveness
1.3 Failure of attractiveness
3.1 Decomposition of X along a space variable x
3.2 Parabolas $P^{X_i}_\beta(t)$, $P^{Y_j}_\beta(t)$, $P^{Y_k}_\beta(t)$ and rectangles $R^{X_i}_\beta(t)$ and $R^{X_i}_{\beta'}(t)$, for $0 < \beta' < \beta$ and $t \in [s_i, s_i + 1)$
3.3 $P^{X_i}_\beta(t)$, $R^{X_i}_\beta(t)$, and $R^{X_i}_{\beta'}(t)$ for $0 < \beta' < \beta$ and $t \in [s_i, s_i + 1]$

Acknowledgements

My advisor, Professor Edwin A. Perkins, has my deepest gratitude for his thoughtful and patient guidance throughout my years as a Ph.D. student. I am especially grateful for the challenging research problems he suggested. Our many enlightening discussions and animated conversations have been an invaluable source of inspiration for me and have always led me to the right path. I also appreciate his careful reading of the manuscripts of this thesis and my other research publications.
The experience I have gained under his guidance will be one of my most valuable assets for my career.

A number of people have helped and inspired me in various ways. I am fortunate to have had Christoph Hauert as my guide into the world of evolutionary game theory, and I found myself constantly benefiting from our thought-provoking discussions. Without the generosity of David Brydges, who gave a series of lectures on branched polymers and Mayer expansions in the UBC probability seminars, I would not have been inspired to see the final piece of the puzzle to solve the problem for voter model perturbations on finite graphs. J. Theodore Cox patiently explained to me several methods for interacting particle systems, and guided me to techniques which were also important in my investigation of SPDE's. I wish to thank warmly Jean-François Delmas, who taught me important perspectives on super-Brownian motions with immigration during the early stage of my Ph.D. studies. My special thanks go to Leonid Mytnik, as the problem on the SPDE's of super-Brownian motions with immigration was in his joint research program with Ed, and he kindly agreed to having it be part of my thesis research.

I would like to thank my defence examiners Joel Feldman, Harry Joe, Davar Khoshnevisan, and Gordon Slade (in addition to Ed and Christoph) for their suggestions and comments on my thesis.

Finally, I heartily thank all my teachers, friends, and family members, especially my wife Heng-Ching Huang, who have given me unconditional support and encouragement.

Dedication

To my beloved family.

Chapter 1  Introduction

In recent theoretical studies of populations in biology, spatial structure has received much attention, since its use allows for more realistic modelling and leads to significantly more detailed properties of populations. The goal of this thesis is to investigate the role of spatial structure for some of the associated stochastic processes.

For discrete structures, a host of spatial stochastic processes in evolutionary dynamics can be identified as Markov processes taking values in certain configuration spaces on graphs, and hence fall in the framework of interacting particle systems. In the study of interacting particle systems, one of the most effective methods is the analysis of an associated dual process. Duality, however, is a non-robust method in general, because it may impose severe restrictions on the particle system being studied.

In the continuous setting, Euclidean spaces are the most natural candidates for modelling the spatial structures of population processes. In this context, although stochastic partial differential equations (SPDE's) are convenient in formalizing biological insights, they are difficult to analyze in general because of the lack of a powerful and systematic theory like Itô's theory of stochastic differential equations (SDE's). Moreover, solutions of SPDE's are infinite-dimensional stochastic processes, and hence the investigation of basic properties which are well understood for analogous finite-dimensional stochastic processes can lead to substantial additional complications. For example, proving uniqueness in law for solutions of SPDE's, which can be obtained through the duality method in some special cases, remains difficult in general.

In this thesis, we study two different types of spatial stochastic processes. The first type is from evolutionary game theory, where the evolution of cooperation is the central issue.
We consider two evolutionary games between "cooperators" and "defectors" on finite social networks. Although the underlying spatial structures inherently complicate the game dynamics, there are precise and surprisingly simple rules for the selective advantages of game players, first discovered by insightful, but non-rigorous, methods. On the other hand, the games pose difficulties for the traditional approaches in interacting particle systems, including dual calculations. We provide a rigorous proof of these rules, introducing a method to study these particle systems and a wide range of closely related ones. In fact, our proof is very different from the original argument, and our conclusions reinforce the original discovery, so that the rules now have stronger implications and further exhibit universality on finite social networks subject to mild conditions.

The second type is from population biology, and the underlying random objects are called one-dimensional super-Brownian motions with immigration. They arise from stochastic models for populations on the real line undergoing migration and critical reproduction, and can be characterized by certain SPDE's. We disprove a uniqueness property for these SPDE's. This result sheds light on the analogous problem for super-Brownian motions, which has been open for twenty-five years.

In the following two sections, we discuss each type of these spatial stochastic processes separately.

1.1 Evolutionary game theory for finite structured populations and voter model perturbations

In biological systems, cooperation is fundamental in sustaining the survival, as well as the well-being, of species. The work [37] takes spatial structure into consideration and gives an explanation, with analytical criteria, for the ubiquity of cooperative entities observed in biological systems and human societies. (See also the references in [37] for other models on structured populations.) In particular, this provides a way to overcome one of the major difficulties in theoretical biology since Darwin. (See Hamilton [13], Axelrod and Hamilton [3], Chapter 13 in Maynard Smith [28], and many others.)

We start by describing the evolutionary games defined in [37] and set some definitions. Consider a finite, connected, and simple (i.e. undirected and without loops or parallel edges) graph G = (V, E) on N vertices. (See, e.g., [5] for the standard terminology of graph theory.) Imagine the graph as a social network where a population of N individuals occupy the vertices of G and the edges denote the links between the individuals. The population consists of cooperators and defectors, labelled by 1's and 0's, respectively. Their fitness is described through payoffs from encounters as follows. Consider a 2 × 2 payoff matrix:

\[
\Pi = \begin{pmatrix} \Pi_{11} & \Pi_{10} \\ \Pi_{01} & \Pi_{00} \end{pmatrix} = \begin{pmatrix} b - c & -c \\ b & 0 \end{pmatrix}. \tag{1.1.1}
\]

Here, while positive constants are natural candidates for both the benefit b and the cost c, we allow arbitrary reals for their possible values unless otherwise mentioned. Each entry $\Pi_{ij}$ of $\Pi$ denotes the payoff that an i-player receives from a j-player (cf. Figure 1.1).

[Figure 1.1: Pairwise payoffs.]

Hence, the payoff of a cooperator is bn − ck if n of its k neighbours are cooperators, and the payoff of a defector is bm if m of its neighbours are cooperators.
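To make the payoff bookkeeping concrete, the following is a minimal sketch (ours, not from the thesis) that computes these payoffs on a graph given as an adjacency list; all names are illustrative.

```python
# Illustrative sketch: the payoffs of (1.1.1) on a graph given as an
# adjacency list {vertex: list of neighbours}; eta maps vertices to
# {1 (cooperator), 0 (defector)}.
def payoff(x, eta, graph, b, c):
    """Payoff of vertex x in configuration eta."""
    n_coop = sum(eta[y] for y in graph[x])       # cooperating neighbours of x
    if eta[x] == 1:                               # cooperator: pays c per neighbour,
        return b * n_coop - c * len(graph[x])     # receives b per cooperating one
    return b * n_coop                             # defector: receives b, pays nothing

# Example on a 4-cycle (k = 2): vertex 0 cooperates, the rest defect.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
eta = {0: 1, 1: 0, 2: 0, 3: 0}
print(payoff(0, eta, cycle, b=3.0, c=1.0))  # -2.0 = b*0 - c*2
print(payoff(1, eta, cycle, b=3.0, c=1.0))  #  3.0 = b*1
```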
The fitness of an individual located at x is given by a convex combination of the baseline fitness, with weight 1 − w, and its payoff, with weight w, where the parameter w is interpreted as the intensity of selection. Fitness can be thought of as the reproduction rate, and the baseline fitness is normalized to 1 for convenience. See Section 2.5.1 for the mathematical formulation of fitnesses. In most applications, w is assumed to be a small positive parameter, the justification being that one trait should play only a small role in determining overall fitness. This weak selection is also convenient, as it yields tractable mathematical results in many cases.

In contrast to game theory, where strategies are decided by rational players, evolutionary game theory considers the random evolution of interacting players in which the "fitter" strategies have better chances to replicate. Throughout our work, we study two of the updating mechanisms under weak selection in [37] for the evolution of cooperation; they are meant to model the effect of natural selection. Under the death-birth updating, a random individual dies, and then its neighbours compete for the vacant vertex with success probability proportional to fitness. Under the imitation updating, a random individual updates its strategy, but now it will either adhere to its original strategy or imitate one of its neighbours' strategies, with success probability proportional to fitness. See Sections 2.5.2 and 2.5.3 for the formal definitions of the two games. In this way, each updating mechanism defines a Markov chain on the configuration space {1, 0}^V, or more specifically, a spin system in the sense of Liggett [25], where each vertex can adopt only two possible opinions, 1 and 0.

Despite the simplicity of the transition rates, the reader may observe that the spin systems pose certain difficulties for the classical approaches in interacting particle systems. For example, as a result of the asymmetry of payoffs, there is no symmetry between 1's and 0's in the two spin systems. In addition, it is not hard to see that in general the two spin systems are not attractive. (See Chapter III of [25] for the implications of attractiveness.)

Definition 1.1. A spin system on V with flip rates $a(\cdot, \cdot)$ is said to be attractive if for any two configurations η and ζ such that η ≤ ζ vertex-wise,

\[
\begin{cases} a(x, \eta) \ge a(x, \zeta), & \text{if } \eta(x) = \zeta(x) = 1,\\ a(x, \eta) \le a(x, \zeta), & \text{if } \eta(x) = \zeta(x) = 0. \end{cases}
\]

To see that attractiveness fails for both updating mechanisms, first consider the two configurations in Figure 1.2. The upper configuration η has fewer cooperators than the lower one ζ does. The flip rate at site x in ζ is larger, because the fitness of the defector at y is larger.

[Figure 1.2: Failure of attractiveness.]

Next, we consider the pair of configurations in Figure 1.3. The upper configuration ζ has more cooperators than the lower one η does, but the flip rate at x in ζ is smaller. Indeed, the defector at y in ζ has higher fitness, so at x there is a higher probability of staying at the same type after flipping.

[Figure 1.3: Failure of attractiveness.]
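Before turning to the benefit-to-cost rules, here is a minimal sketch (ours) of one death-birth update under weak selection, reusing the `payoff` function sketched above; w is assumed small enough that all fitnesses stay positive.

```python
import random

# One death-birth update: a uniform individual dies, and its neighbours
# compete for the vacancy with probability proportional to
# fitness = 1 - w + w * payoff (weak selection keeps fitness > 0).
def death_birth_step(eta, graph, b, c, w):
    x = random.choice(list(graph))        # the site whose occupant dies
    nbrs = graph[x]
    fitness = [1 - w + w * payoff(y, eta, graph, b, c) for y in nbrs]
    winner = random.choices(nbrs, weights=fitness)[0]
    eta[x] = eta[winner]                  # the winner's offspring fills x
    return eta
```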
We are now ready to describe the benefit-to-cost rules (abbreviated as b/c rules) for the two evolutionary games, which are surprisingly simple criteria for criticality in the asymptotic. The degree of a graph is defined in [37] to be the average number of neighbours per vertex. Put a single cooperative mutant on the vertices at a random location. Then the insightful, but non-rigorous, calculations in (the supplementary information of) [37], supported by several numerical simulations, lead to the following b/c rule for the death-birth updating under weak selection on certain large graphs of degree k: selection favours cooperation whenever b/c > k, and selection opposes cooperation whenever b/c < k. Here, selection favours (respectively opposes) cooperation if the probability that a single cooperative mutant converts the defecting population completely into a cooperative population is strictly higher (respectively lower) than the fixation probability 1/N of a neutral mutant. See (2.2.8) for the latter probability and also Theorem 1.2. In fact, (2.2.8) shows that if, in particular, the graph is regular, that is, every vertex has the same number of neighbours, then the fixation probability of a neutral mutant at an arbitrary location, without further randomization, is precisely 1/N. A similar b/c rule under the imitation mechanism is discussed in the supplementary information of [37], with the modification that the cutoff point k should be replaced by k + 2. We remark that the work [37] also considers the birth-death updating (in contrast to the death-birth updating) and its associated b/c rule. See [46] for a further study of these b/c rules. For more b/c rules, see [13], [35], and [45], to name but a few. The monograph [36] gives an authoritative and excellent introduction to evolutionary dynamics.

Lying at the heart of the work [37], in obtaining the selective advantage of cooperators, is the introduction of structured populations. This is manifested by the role of a fixed degree as the population size becomes large. Consider instead a naive model where only the fractions of players in a large population are concerned and the same payoff matrix (1.1.1) is in effect for evolutionary fitness. The fractions $z_C$ and $z_D$ of cooperators and defectors are modelled through the replicator equations

\[
\dot z_C = z_C\,(\rho_C - \bar\rho), \qquad \dot z_D = z_D\,(\rho_D - \bar\rho). \tag{1.1.2}
\]

Here, by the equality $z_C + z_D = 1$, the payoffs for cooperators and defectors are $\rho_C = b z_C - c$ and $\rho_D = b z_C$, and $\bar\rho$ is the average payoff, given by $z_C \rho_C + z_D \rho_D$. By (1.1.2), the fraction of cooperators satisfies the logistic differential equation

\[
\dot z_C = -c\, z_C (1 - z_C).
\]

Hence, any proper fraction of cooperators must vanish eventually whenever the cost c is positive. See, for example, Chapter 7 in [18], Chapter 4 in [36], or Section 3 in [17] for this model and more details. As discussed in more detail later on, a similar result holds in any unstructured population of finite size under the death-birth updating. Informally, a spatial structure, on one hand, promotes the formation of cliques of cooperators, which collectively have a selective advantage and, on the other hand, reduces the exploitation of cooperators by defectors.
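As a quick numerical check of the logistic decay just derived, one can integrate $\dot z_C = -c z_C(1-z_C)$ and compare with its closed form $z_C(t) = z_0 e^{-ct}/(1 - z_0 + z_0 e^{-ct})$; a sketch (ours):

```python
import math

# Euler integration of z' = -c z (1 - z) versus the closed-form solution:
# for c > 0 the cooperator fraction decays to 0 from any z0 in (0, 1).
def simulate(z0=0.9, c=1.0, T=10.0, dt=1e-3):
    z = z0
    for _ in range(int(T / dt)):
        z += -c * z * (1 - z) * dt
    return z

exact = lambda t, z0, c: z0 * math.exp(-c * t) / (1 - z0 + z0 * math.exp(-c * t))
print(simulate(), exact(10.0, 0.9, 1.0))  # both close to 0
```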
In [38], Ohtsuki and Nowak gave a rigorous proof of the b/c rules on large cycles under weak selection, in particular for the two updating mechanisms. The results in [38] exploit the fact that on cycles the fixation probabilities under each updating mechanism satisfy a system of difference equations of birth-death-process type, and exact fixation probabilities can be derived accordingly. It is easy to get the exact solvability of fixation probabilities by the same approach on complete graphs, although on each fixed complete graph, cooperators are always opposed under weak selection for the death-birth updating. (See [37] and Remark 1.5 (3). Note that the degree of a complete graph has the same order as the number of vertices.) It seems, however, harder to obtain fixation probabilities by extending this approach beyond cycles and complete graphs.

In this work, we will view each of the two spin systems as a voter model perturbation on a (finite, connected, and simple) graph of arbitrary size. Voter model perturbations are studied in Cox and Perkins [10] and further developed in generality in Cox, Durrett, and Perkins [9] on the transient integer lattices $\mathbb Z^d$ for d ≥ 3. On the infinite lattices considered in [9], (often sharp) conditions, based on a related reaction-diffusion equation, were found to ensure the coexistence of 1's and 0's, or to ensure that one type drives the other out. In particular, a rigorous proof of the b/c rule under the death-birth updating on these infinite graphs is obtained in [9]. In the context of finite graphs, the voter model perturbation associated with each of the spin systems fixates at one of the two absorbing states, all 1's or all 0's, and we give a first-order approximation to the fixation probabilities by expansion. In spite of the apparent differences in the settings, there are interesting links between the reaction functions for the reaction-diffusion equation criteria in [9] and the first-order correction terms in our fixation probability expansions (cf. (1.1.4) and (1.1.9)).

Let us now introduce voter models and voter model perturbations as spin systems. Denote by x a vertex and by η ∈ {1, 0}^V a configuration. Let c(x, η) be the flip rate of the (nearest-neighbour) voter model at x in state η. Then c(x, η) is equal to the probability of drawing a neighbour of x which has the opinion opposite to that of x. Hence, in the context of finite graphs, the voter models are simple Markov chains where, at each epoch time, one random individual in the population updates its opinion by copying that of a randomly chosen neighbour. Interpreted narrowly in the context considered in this work, the rates of a voter model perturbation are given by

\[
c_w(x, \eta) = c(x, \eta) + w\, h_{1 - \eta(x)}(x, \eta) + w^2 g_w(x, \eta) \;\ge\; 0 \tag{1.1.3}
\]

for a small perturbation parameter w > 0. Here, $h_1$, $h_0$, and $g_w$ for all small w are uniformly bounded functions. We refer the reader to Chapter V in [25], here and in the following, for the classical results on voter models, and to Section 1 in [9] for a general definition of voter model perturbations on transient integer lattices.

We discuss in more detail the aforementioned result in [9], which is closely related to the present work. A key result in [9] states that on the integer lattices $\mathbb Z^d$ for d ≥ 3, the invariant distributions of a voter model perturbation, for small enough perturbations, can be determined by the reaction function through a reaction-diffusion PDE:

\[
\frac{\partial v}{\partial t} = \frac{\sigma^2 \Delta v}{2} + f(v). \tag{1.1.4}
\]

(See Section 1.2 in [9].)
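For readers who want to experiment with equations of the form (1.1.4), here is a minimal explicit finite-difference sketch (ours, with an arbitrary reaction function f passed in; not tied to the specific reaction functions of [9]):

```python
# Explicit finite differences for v_t = (sigma^2/2) v_xx + f(v) on a
# 1-d grid with Neumann (reflecting) boundaries; dt/dx^2 <= 1/2 is
# assumed for stability of the explicit scheme.
def evolve(v, f, sigma=1.0, dx=0.01, dt=1e-5, steps=1000):
    v = list(v)
    for _ in range(steps):
        w = v[:]
        for i in range(len(v)):
            left = v[max(i - 1, 0)]
            right = v[min(i + 1, len(v) - 1)]
            lap = (left - 2 * v[i] + right) / dx**2
            w[i] = v[i] + dt * (0.5 * sigma**2 * lap + f(v[i]))
        v = w
    return v
```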
In (1.1.4), the reaction function f takes the form

\[
u \longmapsto \lim_{s \to \infty} \int D(0, \eta)\, \mathbf P_{\mu_u}(\xi_s \in d\eta), \qquad u \in [0, 1], \tag{1.1.5}
\]

where $((\xi_s), \mathbf P_{\mu_u})$ denotes the voter model starting at the Bernoulli product measure $\mu_u$ with density $\mu_u(\eta(x) = 1) = u$, and the difference kernel D is defined by

\[
D(x, \eta) = \hat\eta(x)\, h_1(x, \eta) - \eta(x)\, h_0(x, \eta) \tag{1.1.6}
\]

with $\hat\eta(x) \equiv 1 - \eta(x)$. By the duality between voter models and coalescing random walks, the reaction function defined by (1.1.5) can be expressed explicitly as a polynomial whose coefficients consist of coalescing probabilities of random walks.

The justification in [9] of the b/c rule for the death-birth updating on transient integer lattices is under a slightly different definition, for the sake of adaptation to the context of infinite graphs. Precisely, the result in [9] states that whenever b/c > k (resp. b/c < k) and there is weak selection (sufficiently small w), given infinitely many cooperators (resp. defectors) at the beginning, any given finite set of vertices will become occupied by cooperators (resp. defectors) from some time on, almost surely. Here, k refers to the number of neighbours of each vertex in the underlying lattice and is equal to 2d on $\mathbb Z^d$. The b/c rule under the imitation updating on the same integer lattices is verified in [7] by using the general result of [9] under the same definition, except that, as pointed out in [37], the cutoff point k needs to be replaced by k + 2.

We remark that the proofs of some key results in [9] rely heavily on the geometry of $\mathbb Z^d$ and hence do not allow for immediate generalizations to other spatial structures. For example, the role of reaction-diffusion PDE's remains unclear in the context of general spatial structures. Nonetheless, the work [9] truly identifies the direction in which to study equilibria of voter model perturbations, in view of the fact that its analysis underlines the importance of the difference kernel (1.1.6), which is meaningful on general spatial structures.

Our main result for voter model perturbations on finite graphs can be stated as follows. Regard the voter model perturbation with perturbation rate w as a continuous-time chain $(\xi_s)$ with rates given by (1.1.3). We assume in addition that the chain starting from any arbitrary state is eventually trapped at one of the two absorbing states, the all-1 configuration $\mathbf 1$ and the all-0 configuration $\mathbf 0$. This is a property enjoyed by both updating mechanisms under weak selection. Define $\tau_{\mathbf 1}$ to be the time to reach the absorbing state $\mathbf 1$, and set

\[
\bar H(\xi) = \sum_{x \in V} H(x, \xi)\, \pi(x) \tag{1.1.7}
\]

for any H(x, ξ). Here, π(x) is the invariant distribution of the (nearest-neighbour) random walk on G, given by

\[
\pi(x) = \frac{d(x)}{2 \cdot \# E} \tag{1.1.8}
\]

with d(x) being the degree of x, i.e., the number of neighbours of x. (See, e.g., [1] and [26] for random walks on graphs.) We now use Landau's notation $O(w^2)$ to denote a function θ(w) such that $|\theta(w)| \le C w^2$ for all small w, for some constant C depending only on the graph G and the uniform bound for $h_1$, $h_0$, and $g_w$ in the definition (1.1.3) of the voter model perturbation. Then as $w \to 0+$, we have the following approximation:

\[
\mathbf P^w(\tau_{\mathbf 1} < \infty) = \mathbf P(\tau_{\mathbf 1} < \infty) + w \int_0^\infty \mathbf E\big[\bar D(\xi_s)\big]\, ds + O(w^2) \tag{1.1.9}
\]

(see Theorem 2.11). Here, $\mathbf P^w$ and $\mathbf P$ (with expectation $\mathbf E$) denote the laws of the voter model perturbation with perturbation rate w and of the voter model, respectively, both subject to the same, but arbitrary, initial distribution, and $\bar D$ is obtained from the difference kernel D of (1.1.6) via (1.1.7).
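Numerically, the correction term in (1.1.9) can be estimated by simulation. The following sketch (ours, with illustrative names) runs the continuous-time voter model until absorption and accumulates $\bar D(\eta)$ over the holding times; it assumes, as holds for the games studied here, that $\bar D$ vanishes at the absorbing states, so the integral stops contributing after absorption.

```python
import random

# Monte Carlo estimate of the 0-potential int_0^infty E[ Dbar(xi_s) ] ds:
# simulate the voter model (each vertex rings at rate 1 and copies a
# uniform neighbour) and accumulate Dbar(eta) * (holding time).
def zero_potential(eta0, graph, Dbar, runs=1000):
    total = 0.0
    for _ in range(runs):
        eta = dict(eta0)
        while 0 < sum(eta.values()) < len(eta):   # not yet at all-1 / all-0
            hold = random.expovariate(len(eta))   # rate-N exponential clock
            total += Dbar(eta) * hold
            x = random.choice(list(graph))        # uniform vertex resamples
            eta[x] = eta[random.choice(graph[x])] # ... from a random neighbour
        # absorbed: Dbar is assumed to vanish at all-1 and all-0
    return total / runs
```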
Moreover, the integral term on the right-hand side of (1.1.9) makes sense because $\bar D(\xi_s) \in L^1(d\mathbf P \otimes ds)$ (see Theorem 2.11).

We apply the first-order approximation (1.1.9) to the two evolutionary games only on regular graphs. Under weak selection, the approximation (1.1.9) implies that we can approximate $\mathbf P^w(\tau_{\mathbf 1} < \infty)$ by $\mathbf P(\tau_{\mathbf 1} < \infty)$ and the 0-potential of $\bar D$:

\[
\int_0^\infty \mathbf E\big[\bar D(\xi_s)\big]\, ds, \tag{1.1.10}
\]

all subject to the same initial distribution. Moreover, the comparison of $\mathbf P^w(\tau_{\mathbf 1} < \infty)$ for small w with $\mathbf P(\tau_{\mathbf 1} < \infty)$ is possible whenever the 0-potential is nonzero, with the order determined in the obvious way. For notions to be introduced later on, we take as initial distribution the uniform distribution $u_n$ on the set of configurations with exactly n 1's, where 1 ≤ n ≤ N − 1. Each 0-potential in (1.1.10) starting at $u_n$ can be derived from the same 0-potentials starting at the Bernoulli product measures $\mu_u$ with density u ∈ [0, 1]. Furthermore, each 0-potential with starting measure $\mu_u$ can be expressed in terms of some (expected) coalescing times of coalescing random walks. This is in contrast to the involvement of coalescing probabilities in the reaction functions in the context of [9]. By resorting to a simple identity in [1] between meeting times and hitting times of random walks, we obtain explicit forms of the coalescing times involved. Hence, by (1.1.9), we obtain the fixation probabilities explicitly up to the first-order term; the precise result is stated in Theorem 1.2 below.

Theorem 1.2. Let G be any (finite, connected, and simple) graph on N vertices. Suppose in addition that G is k-regular, that is, every vertex of G has precisely k neighbours. Recall that we consider a payoff matrix of the form (1.1.1) (without restrictions on the signs of the parameters b and c), and w stands for the intensity of selection. Fix 1 ≤ n ≤ N − 1, and start the two evolutionary games with the uniform distribution $u_n$ on the set of configurations with n many cooperators.

(1) Under the death-birth updating, the probability of reaching the absorbing configuration where the social network is fully occupied by cooperators is given by

\[
\mathbf P^w_{u_n}(\tau_{\mathbf 1} < \infty) = \frac{n}{N} + w \left[ \frac{k\, n (N - n)}{2N(N - 1)} \right] \left[ \left( \frac{b}{k} - c \right)(N - 2) + b\left( \frac{2}{k} - 2 \right) \right] + O(w^2)
\]

for any sufficiently small w.

(2) Under the imitation updating, the analogous probability is given by

\[
\mathbf P^w_{u_n}(\tau_{\mathbf 1} < \infty) = \frac{n}{N} + w \left[ \frac{k(k + 2)\, n (N - n)}{2(k + 1)N(N - 1)} \right] \left[ \left( \frac{b}{k + 2} - c \right)(N - 1) - \frac{(2k + 1)b - ck}{k + 2} \right] + O(w^2)
\]

for any sufficiently small w.

Here, $O(w^2)$ denotes a function θ(w) such that $|\theta(w)| \le C w^2$ for all small w, for some constant C depending only on the graph G and the particular updating mechanism.

Before interpreting the result of Theorem 1.2 in terms of evolutionary games, we first introduce the following definition, which is stronger than that in [37]. We say selection strongly favours (resp. opposes) cooperation if for every nontrivial n, that is, 1 ≤ n ≤ N − 1, the following holds: the probability that n cooperative mutants, with a joint location distributed as $u_n$, convert the defecting population completely into a cooperative population is strictly higher (resp. lower) than n/N. (Here, n/N is the fixation probability of n neutral mutants, again by (2.2.8).)
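To see quantitatively what the first-order term does, one can plug numbers into Theorem 1.2 (1); this sketch (ours) evaluates the approximation with the $O(w^2)$ remainder dropped.

```python
# First-order fixation probability of Theorem 1.2(1), death-birth
# updating, on a k-regular graph with N vertices and n initial
# cooperators; the O(w^2) remainder is ignored.
def fixation_db(N, k, n, b, c, w):
    correction = (k * n * (N - n)) / (2 * N * (N - 1))
    bracket = (b / k - c) * (N - 2) + b * (2 / k - 2)
    return n / N + w * correction * bracket

# On a large 3-regular graph, the sign of the w-term flips near b/c = k = 3:
print(fixation_db(1000, 3, 1, b=3.1, c=1.0, w=0.01) > 1 / 1000)  # True
print(fixation_db(1000, 3, 1, b=2.9, c=1.0, w=0.01) > 1 / 1000)  # False
```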
Under this definition, Theorem 1.2 yields simple algebraic criteria for both evolutionary games, stated as follows.

Corollary 1.3. Suppose that the underlying social network is a k-regular graph on N vertices.

(1) For the death-birth updating, if

\[
\left( \frac{b}{k} - c \right)(N - 2) + b\left( \frac{2}{k} - 2 \right) > 0 \quad (\text{resp. } < 0),
\]

then selection strongly favours (resp. opposes) cooperation under weak selection.

(2) For the imitation updating, if

\[
\left( \frac{b}{k + 2} - c \right)(N - 1) - \frac{(2k + 1)b - ck}{k + 2} > 0 \quad (\text{resp. } < 0),
\]

then selection strongly favours (resp. opposes) cooperation under weak selection.

Applied to cycles, the algebraic criteria in Corollary 1.3, under the aforementioned stronger definition, coincide with the algebraic criteria in [38] for the respective updating mechanisms. See also Eq. (3) in [46] for the death-birth updating. As an immediate consequence of Corollary 1.3, we have the following result:

Corollary 1.4. Fix a degree k.

(1) Consider the death-birth updating. For every fixed pair (b, c) satisfying b/k > c (resp. b/k < c), there exists a positive integer $N_0$ such that on any k-regular graph G = (V, E) with $\# V \ge N_0$, selection strongly favours (resp. opposes) cooperation under weak selection.

(2) Consider the imitation updating. For every fixed pair (b, c) satisfying b/(k + 2) > c (resp. b/(k + 2) < c), there exists a positive integer $N_0$ such that on any k-regular graph G = (V, E) with $\# V \ge N_0$, selection strongly favours (resp. opposes) cooperation under weak selection.

In this way, we rigorously prove the validity of the b/c rule in [37] under each updating mechanism. It is in fact a universal rule, valid for any nontrivial number of cooperative mutants, and it holds uniformly in the number of vertices for large regular graphs with a fixed degree under weak selection.

Remark 1.5. (1) Although we only consider payoff matrices of the special form (1.1.1) in our work, interest in evolutionary game theory does cover general 2 × 2 payoff matrices with arbitrary entries. (See, e.g., [37] and [38].) In this case, a general 2 × 2 matrix $\Pi^* = (\Pi^*_{ij})_{i,j = 1,0}$ is taken to define the payoffs of players, with an obvious adaptation of the payoffs under Π. For example, the payoff of a cooperator is $(\Pi^*_{11} - \Pi^*_{10}) n + k \Pi^*_{10}$ if n of its k neighbours are cooperators. In particular, if $\Pi^*$ satisfies the equal-gains-from-switching condition (Nowak and Sigmund [34]):

\[
\Pi^*_{11} - \Pi^*_{10} = \Pi^*_{01} - \Pi^*_{00}, \tag{1.1.11}
\]

then the results in Theorem 1.2, Corollary 1.3, and Corollary 1.4 still hold for $\Pi^*$, by taking Π in their statements to be the "adjusted" payoff matrix

\[
\Pi_a := \begin{pmatrix} \Pi^*_{11} - \Pi^*_{00} & \Pi^*_{10} - \Pi^*_{00} \\ \Pi^*_{01} - \Pi^*_{00} & 0 \end{pmatrix}, \tag{1.1.12}
\]

which is of the form in (1.1.1). See Remark 2.15 for this reduction.

(2) We stress that when n = 1 or N − 1 and the graphs are vertex-transitive [5] (and hence regular), such as tori, the exact locations of the mutants become irrelevant. It follows that the randomization by $u_n$ is redundant in these cases.

(3) Let G be the complete graph on N vertices, so that the spatial structure is irrelevant. Consider the death-birth updating and the "natural case" where the benefit b and the cost c are both positive. With the degree k set equal to N − 1, Theorem 1.2 (1) gives, for any 1 ≤ n ≤ N − 1, the approximation

\[
\mathbf P^w_{u_n}(\tau_{\mathbf 1} < \infty) = \frac{n}{N} + w\, \frac{n(N - n)}{2N} \left[ -c(N - 2) - \left( 2 - \frac{N}{N - 1} \right) b \right] + O(w^2)
\]

as $w \to 0+$. Hence, cooperators are always opposed under weak selection when N ≥ 3.

1.2 One-dimensional super-Brownian motions with immigration and their SPDE's

In this section, we turn to a different type of spatial stochastic process, called one-dimensional super-Brownian motions with immigration, and consider some theoretical questions.
To characterize these random processes, we start with a discussion of how they arise from some discrete spatial branching processes which are natural generalizations of Galton-Watson chains (cf. [16]). We consider critical Galton-Watson chains, so the offspring distribution µ has mean 1. We assume in addition that µ has unit variance.

We slightly extend the classical definition of Galton-Watson chains and define a sequence $(\mathrm{GW}(n); n \in \mathbb Z_+)$ of random subsets of $\bigcup_{n=1}^\infty \mathbb N^n$ as follows. Let $(K_u; u \in \bigcup_{n=1}^\infty \mathbb N^n)$ be a sequence of i.i.d. random variables with distribution µ. By assumption, each $K_u$ has mean 1 and variance 1. Take a fixed number of individuals, labelled 1, ..., m, at time 0, and set GW(0) = {1, ..., m}. In general, at time n + 1, each individual from time n dies but gives birth to $K_u$ children, where u denotes the label of this father. These children are labelled $(u, j)$, $1 \le j \le K_u$, if $K_u \ge 1$, and GW(n + 1) consists of these labels of length n + 2. The associated critical Galton-Watson chain $(\#\mathrm{GW}(n); n \in \mathbb Z_+)$ dies out almost surely (see Theorem I.6.1 of [16]). Hence, the Markov chain $\mathrm{GW} = (\mathrm{GW}(n); n \in \mathbb Z_+)$ fixates at ∅.

The skeleton GW serves as the genealogical structure of a population of individuals moving randomly in space. We assume that the individuals of generation n move between time n and time n + 1, independently of the others and of the discrete skeleton GW, according to the law of one-dimensional Brownian motion. At time n + 1, the newly born individuals start moving from the final position of their father, and so on. If we do not stress the order of the individuals, then these spatial motions can be summarized by the measure-valued process

\[
Z_t = \sum_{u \in \mathrm{GW}(n)} \delta_{\xi^u_t} \quad \text{if } t \in [n, n + 1), \tag{1.2.1}
\]

with $\sum_{u \in \emptyset} \equiv 0$, where $\xi^u$ denotes the spatial motion associated with the individual labelled u.

The underlying critical Galton-Watson chain can be obtained from $(Z_t)$ by considering its total mass: $Z_0(\mathbf 1) = \#\mathrm{GW}(0)$ and

\[
Z_{n+1}(\mathbf 1) = \#\mathrm{GW}(n + 1) = \sum_{u \in \mathrm{GW}(n)} K_u,
\]

where $\mathbf 1(x) \equiv 1$ and $\mu(f) = \int f\, d\mu$ for a measure µ. Hence, by our assumption on the offspring distribution, $(Z_n(\mathbf 1))$ is a martingale and satisfies the branching property: the conditional variance $\mathrm{Var}_n(\cdot)$ of $Z_{n+1}(\mathbf 1)$, given information up to time n, satisfies

\[
\mathrm{Var}_n\big(Z_{n+1}(\mathbf 1)\big) = \#\mathrm{GW}(n) = Z_n(\mathbf 1). \tag{1.2.2}
\]

We can rescale a sequence of discrete spatial branching processes similar to (1.2.1) and obtain a random process in the continuous-time setting. More precisely, for every k, we consider the measure-valued process

\[
X^{(k)}_t = \frac{1}{k} \sum_{u \in \mathrm{GW}^{(k)}(\lfloor kt \rfloor)} \delta_{\xi^{u,k}_t}, \tag{1.2.3}
\]

where the Markov chain $\mathrm{GW}^{(k)}$ and the spatial motions $\xi^{u,k}$ associated with $u \in \mathrm{GW}^{(k)}(\lfloor kt \rfloor)$ are as before, except that the $\xi^{u,k}$ live on the shorter time intervals $[\lfloor kt \rfloor / k, (\lfloor kt \rfloor + 1)/k)$ ($\lfloor t \rfloor$ is the greatest integer less than or equal to t). We now assume that the initial population size in $\mathrm{GW}^{(k)}$ tends to infinity and

\[
X^{(k)}_0 \xrightarrow[k \to \infty]{} X_0,
\]

where $X_0$ is a finite measure on the real line and the above denotes weak convergence of finite measures. Then we have the following convergence in distribution of $\{X^{(k)}\}$ as a sequence of measure-valued random processes:

\[
X^{(k)} \xrightarrow[k \to \infty]{(d)} X. \tag{1.2.4}
\]

The limit X is called a one-dimensional super-Brownian motion. (See, for example, Sections II.1 and II.2 of [24] and Section II.3 of [39] for this convergence.)
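A minimal sketch (ours) of the rescaled particle system (1.2.3), taking the critical binary offspring law (0 or 2 children with probability 1/2 each, which has mean 1 and variance 1): each particle carries mass 1/k, lives for time 1/k, moves by a Brownian increment, and then branches.

```python
import random

# One generation of the rescaled branching Brownian particle system:
# move each particle by a Brownian increment over time 1/k, then
# replace it by 0 or 2 children at its final position (prob. 1/2 each).
def step(particles, k):
    out = []
    for x in particles:
        x += random.gauss(0.0, (1.0 / k) ** 0.5)
        out += [x, x] if random.random() < 0.5 else []
    return out

k = 100
particles = [0.0] * 1000          # initial mass 1000/k = 10 at the origin
for _ in range(k):                # evolve up to time t = 1
    particles = step(particles, k)
print(len(particles) / k)         # total mass X_1^(k)(1), a random quantity
```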
The density of the measure-valued process X in (1.2.4) exists, and its time evolution can be characterized by the following SPDE:

\[
\frac{\partial X}{\partial t}(x, t) = \frac{\Delta X}{2}(x, t) + X(x, t)^{1/2} \dot W(x, t), \qquad X \ge 0. \tag{1.2.5}
\]

Here, ∆ denotes the Laplacian, and $\dot W$ is a (two-parameter) space-time white noise on $\mathbb R \times \mathbb R_+$, with the following informal interpretation:

\[
\dot W(dx, dt) \ \text{are i.i.d.}\ \mathcal N(0, dx\, dt)\text{-distributed}. \tag{1.2.6}
\]

In Section 3.2, we will give a non-rigorous derivation of this SPDE for the density process and provide references for the rigorous treatment. The components of the SPDE (1.2.5) have the following non-rigorous interpretations. The drift $\frac{\Delta X}{2}$ for the time variation of X can be seen as a result of the spatial motions, since one-dimensional Brownian motions move with "speed" $\frac{\Delta}{2}$ by Itô's formula (cf. [44]). Also, the Gaussian property (1.2.6) of space-time white noise implies that

\[
X(x, t)^{1/2} \dot W(dx, t) \sim \mathcal N\big(0, X(x, t)\, dx\big), \tag{1.2.7}
\]

so the time variation of the density X within a spatial region of infinitesimal size dx has variance X(x, t) dx. In view of the additivity of variances for independent random quantities, we see that the branching property (1.2.2) of Galton-Watson chains carries over to super-Brownian motions in the form (1.2.7), and hence it is now valid in the continuum in space. See Section 3.2 for more details.

We are now ready to introduce a class of super-Brownian motions with immigration. They will be considered throughout this work unless otherwise mentioned. Imagine that, in the barren territory $\mathbb R$, clouds of independent immigrants with infinitesimal initial mass land randomly in space throughout time. The underlying immigration mechanism is time-homogeneous and gives a high intensity of arrivals, so the inter-landing times of the immigrants are infinitesimal. After landing, each of the immigrant processes evolves independently of the others as a super-Brownian motion with infinitesimal initial mass. Superposing their masses then determines a super-Brownian motion with immigration and zero initial value.

To formalize the above description, we take an immigration function ψ satisfying

\[
\psi \in C_c^+(\mathbb R) \ \text{ with } \ \psi \neq 0, \tag{1.2.8}
\]

and assume that the intensity of immigration at time t and location x is given by ψ(x) dx dt. (The function space $C_c^+(\mathbb R)$ consists of nonnegative continuous functions on $\mathbb R$ with compact support.) Then the associated super-Brownian motion with immigration satisfies the following SPDE:

\[
\frac{\partial X}{\partial t}(x, t) = \frac{\Delta X}{2}(x, t) + \psi(x) + X(x, t)^{1/2} \dot W(x, t), \qquad X \ge 0, \quad X(x, 0) = 0. \tag{1.2.9}
\]

Here, $\dot W$ is again a space-time white noise. Note that the branching property (1.2.7) of super-Brownian motions is respected in (1.2.9), since a solution is a superposition of independent super-Brownian motions (see (1.2.5)).
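For intuition about (1.2.9), here is a crude explicit Euler sketch (ours; not a substitute for the rigorous constructions referenced in Chapter 3). Integrating $\dot W$ over a space-time cell of size dx × dt gives an $\mathcal N(0, dx\, dt)$ variable; dividing by dx, so that it acts on densities, gives $\mathcal N(0, dt/dx)$ per grid point.

```python
import math, random

# One explicit Euler step for dX = (1/2) X_xx dt + psi dt + sqrt(X) dW
# on a grid of spacing dx with periodic boundaries for simplicity;
# max(..., 0) crudely enforces nonnegativity of the density.
def euler_step(X, psi, dx, dt):
    n = len(X)
    Y = X[:]
    for i in range(n):
        lap = (X[(i - 1) % n] - 2 * X[i] + X[(i + 1) % n]) / dx**2
        noise = random.gauss(0.0, math.sqrt(dt / dx))
        Y[i] = max(X[i] + dt * (0.5 * lap + psi[i])
                   + math.sqrt(max(X[i], 0.0)) * noise, 0.0)
    return Y
```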
To fix ideas, we now give the precise definition of the pair (X, W) arising in the SPDE (1.2.9) before further discussion. We need a filtration $(\mathcal G_t)$ which satisfies the usual conditions; it facilitates the following definitions of W and X. We require that W be a $(\mathcal G_t)$-space-time white noise, which has the following formal definition, regarding W as a linear operator from $L^2(\mathbb R)$ into a linear space of $(\mathcal G_t)$-Brownian motions: for any $d \in \mathbb N$, $\phi_1, \dots, \phi_d \in L^2(\mathbb R)$, and $a_1, \dots, a_d \in \mathbb R$,

\[
W\Big( \sum_{j=1}^d a_j \phi_j \Big) = \sum_{j=1}^d a_j W(\phi_j) \quad \text{a.s.},
\]

and $(W(\phi_1), \dots, W(\phi_d))$ is a d-dimensional $(\mathcal G_t)$-Brownian motion with covariance matrix $[\langle \phi_i, \phi_j \rangle_{L^2(\mathbb R)}]_{1 \le i, j \le d}$. Since a generic immigration function under consideration has compact support, it can be shown that the corresponding super-Brownian motion with immigration takes values in the space of continuous functions with compact support (cf. Section III.4 of [39]). Let $C_{\mathrm{rap}}(\mathbb R)$ denote the function space of rapidly decreasing continuous functions f:

\[
|f|_\lambda := \sup_{x \in \mathbb R} |f(x)| e^{\lambda |x|} < \infty \quad \forall\ \lambda \in (0, \infty). \tag{1.2.10}
\]

Equip $C_{\mathrm{rap}}(\mathbb R)$ with the complete separable metric

\[
\|f\|_{\mathrm{rap}} := \sum_{\lambda = 1}^\infty \frac{|f|_\lambda \wedge 1}{2^\lambda}. \tag{1.2.11}
\]

For convenience, we follow the convention in [40] and choose the larger space $C_{\mathrm{rap}}(\mathbb R)$ as the state space of the density process. Then, by saying that $X = (X_t)$ is a solution of an SPDE of the form (1.2.9), we require that X be a nonnegative $(\mathcal G_t)$-adapted continuous process with state space $C_{\mathrm{rap}}(\mathbb R)$ satisfying the following weak formulation of (1.2.9):

\[
X_t(\phi) = \int_0^t X_s\Big( \frac{\Delta \phi}{2} \Big)\, ds + t \langle \psi, \phi \rangle + \int_0^t \int_{\mathbb R} X(x, s)^{1/2} \phi(x)\, dW(x, s) \tag{1.2.12}
\]

for any test function $\phi \in C_c^\infty(\mathbb R)$. Here, we identify any $f \in L^1_{\mathrm{loc}}(\mathbb R)$ with a signed measure on $\mathcal B(\mathbb R)$ in the natural way and use the notation

\[
f(\phi) = \langle f, \phi \rangle \equiv \int_{\mathbb R} \phi(x) f(x)\, dx. \tag{1.2.13}
\]

The fundamental question for the SPDE (1.2.9) of a super-Brownian motion with immigration concerns its uniqueness theory. This calls for nontrivial investigation, since the SPDE has a non-Lipschitz diffusion coefficient. Uniqueness in law for the SPDE (1.2.9) holds and can be proved by the duality method (cf. Section 1.6 of [11]) via Laplace transforms. In fact, it holds even if we assume general nonnegative initial conditions for the SPDE's (1.2.5) and (1.2.9). Nonetheless, duality methods for more general SPDE's of the form

\[
\frac{\partial X}{\partial t}(x, t) = \frac{\Delta X}{2}(x, t) + b\big( X(x, t) \big) + \sigma\big( X(x, t) \big) \dot W(x, t) \tag{1.2.14}
\]

so far seem available only when b and σ are of rather special forms, and hence are non-robust. (See [31] for the duality method in the case b = 0 and $\sigma(x) = x^p$, where p ∈ (1/2, 1) and nonnegative solutions are assumed.) After all, duality requires exact calculations and thus can be destroyed by even slight changes of the coefficients in the context of SPDE's.

Under the classical theory of stochastic differential equations (SDE's), uniqueness in law for an SDE is a consequence of pathwise uniqueness of its solutions (cf. Theorem IX.1.7 of [44]). The strength of the classical method for pathwise uniqueness of solutions is that it places emphasis only on the ranges of the Hölder exponents of the coefficients, instead of on the particular forms of the coefficients. It is then natural to consider circumventing the duality method by proving pathwise uniqueness of solutions of (1.2.14). Here, pathwise uniqueness for these SPDE's is the property that any two solutions subject to the same space-time white noise and initial value coincide almost surely. Our objective is to study the question of pathwise uniqueness for the particular SPDE's (1.2.9).

Let us discuss some results on pathwise uniqueness for various SDE's and SPDE's which are closely related to the SPDE (1.2.9). We focus on the role of non-Lipschitz diffusion coefficients in determining pathwise uniqueness. For one-dimensional SDE's with Hölder-p diffusion coefficients, the famous Girsanov example (see Section V.26 of [42]) shows the necessity of the condition p ≥ 1/2 for pathwise uniqueness of solutions.
The sufficiency was later confirmed in the seminal work [47] by Yamada and Watanabe, as far as the cases with sufficiently regular drift coefficients are concerned. In fact, the work [47] shows that a finite-dimensional SDE defined by

\[
dX^i_t = b_i(X_t)\, dt + \sigma_i(X^i_t)\, dB^i_t, \qquad 1 \le i \le d, \tag{1.2.15}
\]

enjoys pathwise uniqueness as long as all the $b_i$'s are Lipschitz continuous and each $\sigma_i$ is Hölder-p continuous for some p ≥ 1/2.

In view of these complete results for SDE's and the strong parallels between (1.2.14) and (1.2.15), it was hoped that pathwise uniqueness would also hold in (1.2.14) if the diffusion coefficient σ is Hölder-p continuous for some p ≥ 1/2. In [32], it was shown that this is the case if σ is Hölder-p for p > 3/4, and in [6] and [30] it was shown that pathwise uniqueness for

\[
\frac{\partial X}{\partial t}(x, t) = \frac{\Delta X}{2}(x, t) + |X(x, t)|^p \dot W(x, t), \qquad X(x, 0) = 0, \tag{1.2.16}
\]

fails for p < 3/4. Here, a non-zero solution of (1.2.16) exists and, as 0 is obviously another solution, uniqueness in law fails. All these results point to the general conclusion that pathwise uniqueness of solutions holds for Hölder-p diffusion coefficients σ with p > 3/4 but can fail for p ∈ (0, 3/4). (See also [33] for the case of coloured noises.)

In this work, we confirm pathwise non-uniqueness of solutions of the SPDE's (1.2.9). We stress that, by definition, only nonnegative solutions are considered in this regard, and hence they are unique in law by the duality argument mentioned above. Our main result is given by the following theorem.

Theorem 1.6. For any nonzero immigration function $\psi \in C_c^+(\mathbb R)$, there exists a filtered probability space $(\Omega, \mathcal F, (\mathcal G_t), \mathbb P)$ which accommodates a $(\mathcal G_t)$-space-time white noise W and two solutions X and Y of the SPDE (1.2.9) with respect to $(\mathcal G_t)$ such that

\[
\mathbb P\Big( \sup_{t \in [0, 2]} \|X_t - Y_t\|_{\mathrm{rap}} > 0 \Big) > 0.
\]

A comparison of diffusion coefficients may suggest that the construction in [30] of a nonzero signed solution of the particular case

\[
\frac{\partial X}{\partial t}(x, t) = \frac{\Delta X}{2}(x, t) + |X(x, t)|^{1/2} \dot W(x, t), \qquad X(x, 0) = 0, \tag{1.2.17}
\]

of (1.2.16) should be closely related to our case (1.2.9). Indeed, many features of our construction will follow that in [30], but several new problems arise due to the assumed nonnegativity of our solutions and the fact that in our setting uniqueness in law does hold. The latter means we will be dealing with two nonzero solutions which have the same law and are nontrivially correlated through the shared white noise.

In Chapter 3, we will prove Theorem 1.6 by constructing pairs of distinct solutions of the SPDE's (1.2.9). We will construct pairs of approximating solutions of the SPDE's (1.2.9) as pairs of super-Brownian motions with intermittent immigration, subject to the same space-time white noise. With a new method called continuous decomposition, we are able to decompose each of the approximating solutions into independent super-Brownian motions, which validates the natural immigrant-wise semimartingale calculations for the approximating solutions. This allows us to follow a framework similar to that in [30] to investigate how the growth rates of the approximating solutions disagree. In this direction, we will use several asymptotic quantities associated with super-Brownian motions, including the improved modulus of continuity of their total mass processes, and obtain explicit asymptotic bounds for the local growth rates of the approximating solutions.
Finally, we state an immediate corollary for the SPDE (1.2.9) in which ψ(1) is small and the initial value is replaced by a nonzero nonnegative $C_{\mathrm{rap}}(\mathbb R)$-function. In this case, pathwise non-uniqueness remains true. This follows from the Markov property of super-Brownian motions with immigration and the recurrence of Bessel-squared processes of small dimensions. More precisely, we can run a copy of such a super-Brownian motion with immigration until its total mass first hits zero, and then concatenate this piece with the separating solutions obtained by our main theorem to construct two separating solutions.

Chapter 2  Sharp benefit-to-cost rules for the evolution of cooperation on regular graphs

2.1 Introduction

The main object of this chapter is to investigate the b/c rules for the evolution of cooperation on certain large graphs, discovered by Ohtsuki, Hauert, Lieberman, and Nowak [37]. We view the Markov chains associated with the two updating mechanisms as voter model perturbations, and first work with the more general discrete-time Markov chains of voter model perturbations on (finite, connected, and simple) graphs of arbitrary size. Recall our assumption that such a chain, starting from any state, is eventually trapped at one of the absorbing states $\mathbf 1$ and $\mathbf 0$. We proceed analytically and decompose the transition kernel $P^w$ of a voter model perturbation with perturbation rate w as the sum of the transition kernel P of the voter model and a signed kernel $K^w$. We apply an elementary expansion to finite-step transitions of the $P^w$-chain, in the spirit of Mayer's cluster expansion [27] in statistical mechanics. We then show that every linear combination of the fixation probabilities of $\mathbf 1$ and $\mathbf 0$ subject to small perturbation admits an infinite series expansion closely related to the voter model. A slight refinement of this expansion leads to our main result (1.1.9) for general voter model perturbations, on which our subsequent study of the b/c rules relies.

The second part of this chapter is concerned with the application of our result for general voter model perturbations. In this direction, we will consider calculations for the 0-potentials (1.1.10), which will be exact and thereby yield the explicit formulas in Theorem 1.2 for the two simple rules.

The chapter is organized as follows. In Section 2.2, we set up the standing assumptions on the voter model perturbations considered throughout this chapter and discuss their basic properties. The Markov chains associated with the two updating mechanisms in particular satisfy these standing assumptions, as stated in Proposition 2.1 and Proposition 2.2. In Section 2.3, we continue to work on general voter model perturbations. We develop an expansion argument to obtain an infinite series expansion of fixation probabilities under small perturbation rates (Proposition 2.5) and then refine this argument to get the first-order approximation (1.1.9) (Theorem 2.11). In Section 2.4, we return to our study of the two evolutionary games and give the proof of Theorem 1.2. The vehicle for each explicit result is a simple identity between meeting times and hitting times of random walks. Finally, the proofs of Proposition 2.1 and Proposition 2.2 are deferred to Section 2.5.

2.2 Voter model perturbations

Recall that we consider only finite, connected, and simple graphs in this chapter. Fix such a graph G = (V, E) on N = #V vertices.
Write x ∼ y if x and y are neighbours, that is, if there is an edge of G between x and y. We write d(x) for the number of neighbours of x. Introduce an auxiliary number λ ∈ (0, 1]. Take a nearest-neighbour discrete-time voter model with transition probabilities

\[
P(\eta, \eta^x) = \frac{\lambda}{N}\, c(x, \eta), \quad x \in V, \qquad P(\eta, \eta) = 1 - \frac{\lambda}{N} \sum_x c(x, \eta). \tag{2.2.1}
\]

Here, $\eta^x$ is the configuration obtained from η by replacing the opinion of η at x with $\hat\eta(x) := 1 - \eta(x)$ and holding the opinions at the other vertices fixed, and we set

\[
c(x, \eta) = \frac{\#\{y \sim x;\ \eta(y) = \hat\eta(x)\}}{d(x)}.
\]

We now define the discrete-time voter model perturbations considered throughout this chapter as follows. Suppose that we are given functions $h_i$ and $g_w$ and a constant $w_0 \in (0, 1)$ satisfying

\[
\sup_{w \in [0, w_0],\, x,\, \eta} \big( |h_1(x, \eta)| + |h_0(x, \eta)| + |g_w(x, \eta)| \big) \le C_0 < \infty, \tag{A1}
\]
\[
c_w(x, \eta) := c(x, \eta) + w\, h_{1 - \eta(x)}(x, \eta) + w^2 g_w(x, \eta) \ge 0, \tag{A2}
\]
\[
c_w(x, \mathbf 1) = c_w(x, \mathbf 0) \equiv 0 \ \text{ for each } x \in V, \tag{A3}
\]

for each $w \in [0, w_0]$. Here, $\mathbf 1$ and $\mathbf 0$ denote the all-1 configuration and the all-0 configuration, respectively. In (A2), we set up a basic perturbation of the voter model rates up to second order. In terms of the voter model perturbations defined below by $c_w(x, \eta)$, we will be able to control the higher-order terms in an expansion of fixation probabilities with the uniform bound imposed in (A1). The assumption (A3) ensures that the voter model perturbations have the same absorbing states $\mathbf 1$ and $\mathbf 0$ as the previously defined voter model. Under the assumptions (A1)-(A3), we define for each perturbation rate $w \in [0, w_0]$ a voter model perturbation with transition probabilities

\[
P^w(\eta, \eta^x) = \frac{\lambda}{N}\, c_w(x, \eta), \quad x \in V, \qquad P^w(\eta, \eta) = 1 - \sum_x \frac{\lambda}{N}\, c_w(x, \eta). \tag{2.2.2}
\]

(Here, we assume without loss of generality, by (A1), that each $P^w(\eta, \cdot)$ is truly a probability measure, in part explaining the need for the auxiliary number λ.) In particular, $P^0 \equiv P$.

Notation. We shall write $\mathbf P^w_\nu$ for the law of the voter model perturbation with perturbation rate w and initial distribution ν, and set $\mathbf P_\nu := \mathbf P^0_\nu$. In particular, we put $\mathbf P^w_\eta := \mathbf P^w_{\delta_\eta}$ and $\mathbf P_\eta := \mathbf P_{\delta_\eta}$, where $\delta_\eta$ is the Dirac measure at η. The discrete-time and continuous-time coordinate processes on {1, 0}^V are denoted by $(\xi_n; n \ge 0)$ and $(\xi_s; s \ge 0)$, respectively. Here and in what follows, we abuse notation and read 'n' and other such indices as discrete time and 's' as continuous time, whenever there is no risk of confusion.

Our last assumption, which is obviously satisfied by the P-voter model thanks to the connectivity of G, is:

\[
\mathbf P^w_\eta(\xi_n \in \{\mathbf 1, \mathbf 0\} \text{ for some } n) > 0 \ \text{ for every } \eta \in \{1, 0\}^V \tag{A4}
\]

for each $w \in (0, w_0]$. Since $\mathbf 1$ and $\mathbf 0$ are absorbing by the condition (A3), it follows from the Markov property that the condition (A4) is equivalent to the condition that, under $\mathbf P^w$ for any $w \in (0, w_0]$, the limiting state exists and can only be one of the absorbing states $\mathbf 1$ and $\mathbf 0$.

Proposition 2.1 ([9]). Suppose that the graph is k-regular. Then the Markov chain associated with the death-birth updating with small intensity of selection w is a voter model perturbation with perturbation rate w satisfying (A1)-(A4), with λ = 1 and

\[
h_1 = -(b + c)\, k f_0 f_1 + k b f_{00} + k f_0 (b f_{11} - b f_{00}), \qquad h_0 = -h_1. \tag{2.2.3}
\]

Here,

\[
f_i(x, \eta) = \frac{1}{k}\, \#\{y;\ y \sim x,\ \eta(y) = i\}, \qquad f_{ij}(x, \eta) = \frac{1}{k^2}\, \#\{(y, z);\ x \sim y \sim z,\ \eta(y) = i,\ \eta(z) = j\}. \tag{2.2.4}
\]
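The local frequencies (2.2.4) are straightforward to compute; this sketch (ours, with illustrative names) implements them and transcribes the $h_1$ of Proposition 2.1 directly.

```python
# Local frequencies of (2.2.4) on a k-regular graph given as an
# adjacency list; eta maps vertices to {0, 1}.
def f(i, x, eta, graph):
    k = len(graph[x])
    return sum(eta[y] == i for y in graph[x]) / k

def ff(i, j, x, eta, graph):
    k = len(graph[x])
    return sum(eta[y] == i and eta[z] == j
               for y in graph[x] for z in graph[y]) / k**2

# h1 of (2.2.3), death-birth updating, transcribed term by term:
def h1(x, eta, graph, b, c):
    k = len(graph[x])
    return (-(b + c) * k * f(0, x, eta, graph) * f(1, x, eta, graph)
            + k * b * ff(0, 0, x, eta, graph)
            + k * f(0, x, eta, graph)
            * (b * ff(1, 1, x, eta, graph) - b * ff(0, 0, x, eta, graph)))
```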
Proposition 2.2. Suppose that the graph is k-regular. Then the Markov chain associated with the imitation updating with small intensity of selection w is a voter model perturbation with perturbation rate w satisfying (A1)-(A4), with $\lambda = \frac{k}{k+1}$ and

\[
h_1 = k\big[(b - c) f_{11} - c f_{10}\big] - \frac{k^2}{k + 1}\, f_1\big[(b - c) f_{11} - c f_{10} + b f_{01}\big] - \frac{k}{k + 1}\, b f_1^2,
\]
\[
h_0 = k b f_{01} - \frac{k^2}{k + 1}\, f_0\big[(b - c) f_{11} - c f_{10} + b f_{01}\big] - \frac{k}{k + 1}\, f_0\big[(b - c) f_1 - c f_0\big], \tag{2.2.5}
\]

where $f_i$ and $f_{ij}$ are as in (2.2.4).

The proofs of Proposition 2.1 and Proposition 2.2 are deferred to Section 2.5. The assumptions (A1)-(A4) are in force from now on.

Let us consider some basic properties of the previously defined discrete-time chains. First, as has been observed, we know that

\[
1 = \mathbf P^w(\tau_{\mathbf 1} \wedge \tau_{\mathbf 0} < \infty) = \mathbf P^w(\tau_{\mathbf 1} < \infty) + \mathbf P^w(\tau_{\mathbf 0} < \infty),
\]

where we write $\tau_\eta$ for the first hitting time of η. Observe that $\mathbf P^w(\tau_{\mathbf 1} < \infty)$ is independent of the auxiliary number λ > 0. Indeed, the holding time at each configuration $\eta \neq \mathbf 1, \mathbf 0$ is finite, and the probability of a transition from η to $\eta^x$ at the end of the holding time is given by

\[
\frac{c_w(x, \eta)}{\sum_{y \in V} c_w(y, \eta)},
\]

which is independent of λ > 0.

We can estimate the equilibrium probability $\mathbf P^w(\tau_{\mathbf 1} < \infty)$ by a "harmonic sampler" of the voter model from finite time. Let $p_1(\eta)$ be the weighted average of 1's in the vertex set:

\[
p_1(\eta) = \sum_x \eta(x)\, \pi(x), \tag{2.2.6}
\]

where π(x) is the invariant distribution of the (nearest-neighbour) random walk on G, given by (1.1.8). Since $p_1(\mathbf 1) = 1 - p_1(\mathbf 0) = 1$ and the chain is eventually trapped at $\mathbf 1$ or $\mathbf 0$, it follows from dominated convergence that

\[
\lim_{n \to \infty} \mathbf E^w[p_1(\xi_n)] = \mathbf P^w(\tau_{\mathbf 1} < \infty). \tag{2.2.7}
\]

On the other hand, the function $p_1$ is harmonic for the voter model:

\[
\mathbf E_\eta[p_1(\xi_1)] = p_1(\eta) + \frac{\lambda}{N \cdot 2\#E} \Bigg( \sum_{\eta(x) = 0} \#\{y \sim x;\ \eta(y) = 1\} - \sum_{\eta(x) = 1} \#\{y \sim x;\ \eta(y) = 0\} \Bigg) = p_1(\eta),
\]

since both sums count the edges with disagreeing endpoints. In particular, (2.2.7) applied to w = 0 entails

\[
\mathbf P_\eta(\tau_{\mathbf 1} < \infty) = p_1(\eta) = \sum_{\eta(x) = 1} \frac{d(x)}{2 \cdot \#E}, \tag{2.2.8}
\]

where the last equality follows from the explicit form (1.1.8) of π.

Remark 2.3. Since every harmonic function f for the voter model satisfies $f(\eta) \equiv \mathbf E_\eta[f(\xi_{\tau_{\mathbf 1} \wedge \tau_{\mathbf 0}})]$, (2.2.8) implies that the vector space of harmonic functions is explicitly characterized as the span of the constant function 1 and $p_1$. Recall also that the foregoing display gives a construction of any harmonic function with preassigned values at $\mathbf 1$ and $\mathbf 0$. (See, e.g., Chapter 2 in [1].)

2.3 Expansion

We continue to study the discrete-time voter model perturbations defined in Section 2.2. For each $w \in [0, w_0]$, consider the signed kernel

\[
K^w = P^w - P,
\]

which measures the magnitude of the perturbation of the transition probabilities. We also define a nonnegative kernel $|K^w|$ by $|K^w|(\eta, \tilde\eta) = |K^w(\eta, \tilde\eta)|$.

Lemma 2.4. For any $w \in [0, w_0]$ and any $f: \{1, 0\}^V \to \mathbb R$, we have

\[
K^w f(\eta) = \frac{\lambda}{N} \sum_x \big[ w\, h_{1 - \eta(x)}(x, \eta) + w^2 g_w(x, \eta) \big] \big[ f(\eta^x) - f(\eta) \big] \tag{2.3.1}
\]

and

\[
\big\| |K^w| f \big\|_\infty \le 4 C_0 w \|f\|_\infty, \tag{2.3.2}
\]

where $C_0$ is the constant in (A1).

Proof. We notice that for any η and any x,

\[
K^w(\eta, \eta^x) = \frac{\lambda}{N} \big[ w\, h_{1 - \eta(x)}(x, \eta) + w^2 g_w(x, \eta) \big], \qquad K^w(\eta, \eta) = -\sum_x \frac{\lambda}{N} \big[ w\, h_{1 - \eta(x)}(x, \eta) + w^2 g_w(x, \eta) \big],
\]

by the definitions of $c_w$ and $P^w$. Our assertions (2.3.1) and (2.3.2) then follow at one stroke. ∎
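Before developing the expansion, here is a quick numerical sanity check (ours) of the harmonicity of $p_1$ used in (2.2.7)-(2.2.8): enumerate the voter model kernel (2.2.1) on a small graph and verify $\mathbf E_\eta[p_1(\xi_1)] = p_1(\eta)$ for every configuration.

```python
from itertools import product

# Enumerate the kernel (2.2.1) with lambda = 1 on a small (non-regular)
# graph and check that p1 of (2.2.6) is harmonic at every configuration.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
E2 = sum(len(v) for v in graph.values())              # 2 * #E
N, lam = len(graph), 1.0
p1 = lambda eta: sum(eta[x] * len(graph[x]) for x in graph) / E2

for eta in product([0, 1], repeat=N):
    mean, stay = 0.0, 1.0
    for x in graph:
        cx = sum(eta[y] != eta[x] for y in graph[x]) / len(graph[x])
        rate = lam / N * cx                           # P(eta, eta^x)
        flipped = tuple(1 - v if z == x else v for z, v in enumerate(eta))
        mean += rate * p1(flipped)
        stay -= rate
    mean += stay * p1(eta)
    assert abs(mean - p1(eta)) < 1e-12                # p1 is harmonic
```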
(2.3.3) Here, IT (n) is the set of strictly increasing n-tuples with entries in {1, · · · , T}, and for j = (j1, · · · , jn) ∈ IT (n) ∆w,jT (η0, · · · , ηT ) (2.3.4) is the signed measure of the path (η0, η1, · · · , ηT ) such that the transition from ηr to ηr+1 is determined by Kw(ηr, ηr+1) if r+ 1 is one of the (integer- valued) indices in j and is determined by P (ηr, ηr+1) otherwise. For conve- nience, we set for each j ∈ IT (n) Qw,jT (η0, ηT ) = ∑ η1,··· ,ηT−1 ∆w,jT (η0, · · · , ηT ) 25 2.3. Expansion as the T -step transition signed kernel, and we say Qw,jT has n faults (up to time T ) and j is its fault sequence. Then by (2.3.3), we can write for any f : {1, 0}V −→ R Ewη0 [f(ξT )] =Eη0 [f(ξT )] + T∑ n=1 ∑ j∈IT (n) ∑ η1,··· ,ηT ∆w,jT (η0, · · · , ηT )f(ηT ) =Eη0 [f(ξT )] + T∑ n=1 ∑ j∈IT (n) Qw,jT f(η0). (2.3.5) Write I(n) ≡ I∞(n) for the set of strictly increasing n-tuples with entries in N. We now state the key result which in particular offers an expansion of fixation probabilities. Proposition 2.5. Recall the parameter w0 > 0 in the definition of the voter model perturbations. There exists w1 ∈ (0, w0] such that for any harmonic function f for the voter model, f(1)Pwη (τ1 <∞) + f(0)Pwη (τ0 <∞) = f(η) + ∞∑ n=1 ∑ j∈I(n) Qw,jjn f(η), (2.3.6) where the series converges absolutely and uniformly in w ∈ [0, w1] and in η ∈ {1, 0}V. Remark 2.6. (i) There are alternative perspectives to state the conclusion of Proposition 2.5. Thanks to Remark 2.3 and the fact Qw,jT 1 ≡ 0, it is equivalent to the validity of the same expansion for p1 defined in (2.2.6) (for any small w). By Remark 2.3 again, it is also equivalent to an analogous expansion of any linear combination of the two fixation probabilities under Pw. (ii) The series expansion (2.3.6) has the flavour of a Taylor series expansion in w, as hinted by Lemma 2.9. The proof of Proposition 2.5 is obtained by passing T to infinity for both sides of (2.3.5). This immediately gives the left-hand side of (2.3.6) thanks to our assumption (A4). There are, however, two technical issues when we handle the right-hand sides of (2.3.5). The first one is minor and is the dependence on T of the summands Qw,jT f(η0). For this, the harmonicity of f implies that such dependence does not exist, as asserted in Lemma 2.7. As a result, the remaining problem is the absolute convergence of the series on 26 2.3. Expansion the right-hand side of (2.3.6) for any small parameter w > 0. This is resolved by a series of estimates in Lemma 2.8, Lemma 2.9, and finally Lemma 2.10. Lemma 2.7. For any harmonic function f for the voter model, any T ≥ 1, and any j ∈ IT (n), Qw,jT f(η0) ≡ Qw,jjn f(η0), where we identify j ∈ Ijn(n) in the natural way. Proof. This follows immediately from the martingale property of a harmonic function f for the voter model and the definition of the signed measures ∆w,jT in (3.7.7). Lemma 2.8. There exist C1 = C1(G) ≥ 1 and δ = δ(G) ∈ (0, 1) such that sup η 6=1,0 Pη(ξn 6= 1,0) ≤ C1δn for any n ≥ 1. (2.3.7) Proof. Recall that the voter model starting at any arbitrary state is even- tually trapped at either 1 or 0. By identifying 1 and 0, we deduce (2.3.7) from some standard results of nonnegative matrices, for suitable constants C1 = C1(G) ≥ 1 and δ ∈ (0, 1). (See, e.g., [2] Lemma I.6.1 and Proposition I.6.3. ) Lemma 2.9. Let C1 = C1(G) and δ = δ(G) be the constants in Lemma 2.8, and set C = C(G,C0) = max(4C0, C1). Then for any j ∈ I(n), any w ∈ [0, w0], and any harmonic function f for the voter model,∥∥∥Qw,jjn f∥∥∥∞ ≤ ‖f‖∞wnC2nδjn−n. 
(2.3.8) Proof. Without loss of generality, we may assume ‖f‖∞ = 1. By definition, Qw,jjn f(η0) = ∑ η1,··· ,ηjn ∆w,jjn (η0, · · · , ηjn)f(ηjn). (2.3.9) If ∆w,jjn (η0, · · · , ηjn) is nonzero, then none of the η0, · · · , ηjn−1 is 1 or 0. Indeed, if some of the ηi, 0 ≤ i ≤ jn− 1, is 1, then ηi+1, · · · , ηjn can only be 1 by (A3) so thatKwf(ηjn−1) = 0, as is a contradiction. Similarly, we cannot have ηi = 0 for some 0 ≤ i ≤ jn− 1. Hence, the non-vanishing summands of the right-hand side of (2.3.9) range over ∆w,jjn (η1, · · · , ηjn) f(ηjn) for which 27 2.3. Expansion none of the ηj1 , · · · , ηjn−1 is 1 or 0. With η0 fixed, write ∆w,j ′ U,η0 for the signed measure ∆w,j ′ U restricted to paths starting at η0. Thus we get from (2.3.9) that Qw,jjn f(η0) = ∆ w,j jn,η0 [ f(ξjn)1[ξ1,··· ,ξjn−1 6=1,0] ] . Here, our usage of the compact notation on the right-hand side is analogous to the convention in the modern theory of stochastic processes. Recall that |Kw| stands for the kernel |Kw|(η, η̃) = |Kw(η, η̃)|, and put |∆|w,j′η0,U for the measure on paths (η0, · · · , ηU ) obtained by replacing all the Kw in ∆w,j ′ U,η0 by |Kw|. Since ‖f‖∞ = 1, the foregoing display implies |Qw,jjn f(η0)| ≤|∆| w,j jn,η0 (ξ1, · · · , ξjn−1 6= 1,0) ≤|∆|w,(j1,··· ,jn−1)jn−1,η0 (ξ1, · · · , ξjn−1−1 6= 1,0) × ( sup η 6=1,0 Pη(ξjn−jn−1−1 6= 1,0) )∥∥|Kw|1∥∥∞ with j0 = 0, where supη 6=1,0 Pη(ξjn−jn−1−1 6= 1,0) bounds the measure of the yet “active” paths from jn−1 to jn − 1 and ∥∥|Kw|1∥∥∞ bounds the measure of the transition from jn − 1 to jn. Iterating the last inequality, we get |Qw,jjn f(η0)| ≤ ∥∥|Kw|1∥∥n∞ n∏ r=1 ( sup η 6=1,0 Pη(ξjr−jr−1−1 6= 1,0) ) ≤(4C0)nwn n∏ r=1 ( sup η 6=1,0 Pη(ξjr−jr−1−1 6= 1,0) ) , (2.3.10) where the last inequality follows from Lemma 2.4. Since ∑n r=1(jr − jr−1 − 1) = jn − n and C = max(4C0, C1), Lemma 2.8 applied to the right-hand side of (2.3.10) gives |Qw,jjn f(η0)| ≤ wn(C2)nδjn−n. (2.3.11) This completes the proof of (2.3.8). Lemma 2.10. Recall the constants C = C(G,C0) and δ = δ(G) in Lemma 2.9 and Lemma 2.8, respectively. There exists w1 ∈ (0, w0] such that ∞∑ n=1 ∑ j∈I(n) wn1 (C 2)nδjn−n <∞. (2.3.12) 28 2.3. Expansion Proof. Observe that every index in ⋃∞ n=1 I(n) can be identified uniquely by the time of the last fault and the fault sequence before the time of the last fault. Hence, letting S denote the time of the last fault and m the number of faults within {1, · · · , S − 1}, we can write for any w > 0 ∞∑ n=1 ∑ j∈I(n) wn(C2)nδjn−n = ∞∑ S=1 S−1∑ m=0 ( S − 1 m ) wm+1(C2)m+1δS−m−1. (2.3.13) For each S, write S−1∑ m=0 ( S − 1 m ) wm+1(C2)m+1δS−m−1 =wC2δS−1 ( 1 + wC2δ−1 )S−1 . (2.3.14) With δ ∈ (0, 1) fixed, we can choose w1 ∈ (0, w0] small such that C2(1 + w1C 2δ−1)S−1 ≤ ( 1√ δ )S−1 for any large S. Apply the foregoing inequality for large S to the right-hand side of (2.3.14) with w replaced by w1. This gives S−1∑ m=0 ( S − 1 m ) wm+11 (C 2)m+1δS−m−1 ≤ w1 (√ δ )S−1 , where the right-hand side converges exponentially fast to 0 as δ < 1. By (2.3.13), the asserted convergence of the series in (2.3.12) now follows. Proof of Proposition 2.5. We pick w1 as in the statement of Lemma 2.10. By Lemma 2.9 and the choice of w1, the series in (2.3.6) converges absolutely and uniformly in w ∈ [0, w1] and η ∈ {1, 0}V. By (2.3.5) and dominated convergence, it remains to show that lim T→∞ T∑ n=1 ∑ j∈IT (n) Qw,jT f(η0) = ∞∑ n=1 ∑ j∈I(n) Qw,jjn f(η0). To see this, note that by Lemma 2.7, we can write T∑ n=1 ∑ j∈IT (n) Qw,jT f(η0) = T∑ n=1 ∑ j∈I(n) jn≤T Qw,jjn f(η0), 29 2.3. 
Expansion where the right-hand side is a partial sum of the infinite series in (2.3.6). The validity of (2.3.6) now follows from the absolute convergence of the series in the same display. The proof is complete. For the convenience of subsequent applications, we consider from now on the continuous-time Markov chain (ξs) with rates given by (A2). We can define this chain (ξs) from the discrete-time Markov chain (ξn) by ξs = ξMs , where (Ms) is an independent Poisson process with E[Ms] = sNλ . (Recall our time scale convention: ‘n’ for the discrete time scale and ‘s’ for the continuous time scale.) Under this setup, the potential measure of (ξs) and the potential measure of (ξn) are linked by E [∫ ∞ 0 f(ξs)ds ] = λ N E [ ∞∑ n=0 f(ξn) ] , (2.3.15) for any nonnegative f . In addition, the fixation probability to 1 for this continuous-time Markov chain (ξs) is the same as that for the discrete-time chain (ξn). (See the discussion after Proposition 2.2.) We now state a first-order approximation of Pw(τ1 < ∞) by the voter model. Recall the difference kernel D defined by (1.1.6) with hi as in (A1) and the pi-expectation D defined by (1.1.7). Theorem 2.11. Let ν be an arbitrary distribution on {1, 0}V. Then as w −→ 0+, we have Pwν (τ1 <∞) = Pν(τ1 <∞) + w ∫ ∞ 0 Eν [ D(ξs) ] ds+O(w2). (2.3.16) Here, the convention for the function O(w2) is as in Theorem 1.2. Moreover, D(ξs) ∈ L1(dPν ⊗ ds). Proof. It suffices to prove the theorem for ν = δη0 for any η0 ∈ {1, 0}V. Recall that the function p1 defined by (2.2.6) is harmonic for the voter model and hence the expansion (2.3.6) applies. By (2.3.8) and Lemma 2.10, it is plain that ∞∑ n=2 ∑ j∈I(n) Qw,jjn p1(η0) = O(w 2). (2.3.17) 30 2.4. First-order approximations We identify each j ∈ I(1) as j = (j) = j and look at the summands Qw,jj p1. Write Ew,j for the expectation of the time-inhomogeneous Markov chain where the transition of each step is governed by P except that the transition from j − 1 to j is governed by Pw. Then Qw,jj p1(η0) =E w,j η0 [p1(ξj)]− Eη0 [p1(ξj)] =Eη0 [ Ewξj−1 [p1(ξ1)]− Eξj−1 [p1(ξ1)] ] =Eη0 [Kwp1(ξj−1)] =Eη0 [Kwp1(ξj−1); τ1 ∧ τ0 ≥ j], (2.3.18) where the last equality follows from the definition of Kw and the fact that 1 and 0 are both absorbing. Moreover, we deduce from Lemma 2.4 that Kwp1(η) = λ N wD(η) + λ N w2Gw(η), (2.3.19) where Gw(x, η) =gw(x, η) (1− 2η(x)) . Note that Eη0 [τ1 ∧ τ0] <∞ by Lemma 2.8. Hence by (2.3.18) and (2.3.19), we deduce that ∞∑ j=1 Qw,jj p1(η0) = λw N ∞∑ j=1 Eη0 [ D(ξj−1) ] +O(w2) =wEη0 [∫ ∞ 0 D(ξs)ds ] +O(w2), (2.3.20) where the last equality follows from (2.3.15). Moreover, D(ξs) ∈ L1(dPη0 ⊗ ds). The approximation (2.3.16) for each w ≤ w2 for some small w2 ∈ (0, w1] now follows from (2.3.17) and (2.3.20) applied to the expansion (2.3.6) for p1. This completes the proof. 2.4 First-order approximations In this section, we give the proof of Theorem 1.2. We consider only regular graphs throughout this section. (Recall that a graph is regular if all vertices have the same number of neighbours.) As a preliminary, let us introduce the convenient notion of Bernoulli transforms and discuss its properties. For each u ∈ [0, 1], let µu be the 31 2.4. First-order approximations Bernoulli product measure on {1, 0}V with density µu(ξ(x) = 1) = u. For any function f : {1, 0}V −→ R, define the Bernoulli transform of f by Bf(u) := ∫ fdµu = N∑ n=0  ∑ η:#{x;η(x)=1}=n f(η) un(1− u)N−n, u ∈ [0, 1]. (2.4.1) The Bernoulli transform of f uniquely determines the coefficients Af (n) := ∑ η:#{x;η(x)=1}=n f(η), 0 ≤ n ≤ N. 
Indeed, Bf(0) = f(0) = Af (0) and for each 1 ≤ n ≤ N , Af (n) = lim u↓0+ 1 un ( BI(u)− n−1∑ i=0 ui(1− u)N−iAf (i) ) . The Bernoulli transform Bf(u) is a polynomial ∑N i=0 αiu i of order at most N . Let us invert the coefficients Af (n) from αi by basic combinatorics. By the binomial theorem, ui = N−i∑ n=0 ( N − i n ) ui+n(1− u)N−i−n. Hence, summing over i+ n, we have N∑ i=0 αiu i = N∑ n=0 [ n∑ i=0 αi ( N − i n− i )] un(1− u)N−n, and the uniqueness of the coefficients Af implies Af (n) = n∑ i=0 αi ( N − i n− i ) , 0 ≤ n ≤ N. (2.4.2) As a corollary, we obtain∫ fdun = 1( N n )Af (n) = 1(N n ) n∑ i=0 αi ( N − i n− i ) , 1 ≤ n ≤ N − 1, (2.4.3) if we regard un, the uniform distribution on the set of configurations with precisely n many 1’s, as a measure on {1, 0}V in the natural way. 32 2.4. First-order approximations We will specialize the application of Bernoulli transforms to the function I(η) := ∫ ∞ 0 Eη [ D(ξs) ] ds. To obtain the explicit approximations (up to the first order) asserted in Theo- rem 1.2, we need to compute by Theorem 2.11 the 0-potentials ∫∞ 0 Eun [ D(ξs) ] ds for 1 ≤ n ≤ N − 1 under each updating mechanism. On the other hand, we will see that the Bernoulli transform of each 0-potential I is analytically tractable and BI(u) = Γu(1− u) (2.4.4) for some explicit constant Γ. Note that we have AI(N) = AI(0) = 0 for the updating mechanisms under consideration. Hence, the formula (2.4.3) entails ∫ ∞ 0 Eun [ D(ξs) ] ds = Γn(N − n) N(N − 1) , 1 ≤ n ≤ N − 1, (2.4.5) since ( N−1 n−1 )− (N−2n−2) = (N−2n−1) for n ≥ 2 and (N−10 ) = (N−20 ) = 1. 2.4.1 Proof of Theorem 1.2 (1) Assume that the graph G is k-regular. Recall that the death-birth updating defines a Markov chain of voter model perturbation satisfying (A1)–(A4) by Proposition 2.1 under weak selection. The functions hi in (A1) for this updating are given by (2.2.3). Hence, the difference kernel D is given by D(x, ξ) = h1(x, ξ), and for any x ∈ V, 1 k Eµu [D(x, ξs)] =− (b+ c)Eµu [f0f1(x, ξs)] + bEµu [f00(x, ξs)] + bEµu [f0f11(x, ξs)]− bEµu [f0f00(x, ξs)] =− (b+ c)Eµu [f0f1(x, ξs)] + bEµu [f0f11(x, ξs)] + bEµu [f1f00(x, ξs)]. (2.4.6) In analogy to the computations in [9] for coalescing probabilities, we re- sort to the duality between voter models and coalescing random walks for the right-hand side of (2.4.6). Let {Bx;x ∈ V} be the rate-1 coalescing random walks on G, where Bx starts at x. The random walks move independently of each other until they meet another and move together afterwards. The 33 2.4. First-order approximations duality between the voter model and the coalescing random walks is given by Pη (ξs(x) = ix, x ∈ Q) = P(η(Bxs ) = ix, x ∈ Q) for any Q ⊆ V and (ix;x ∈ Q) ∈ {1, 0}Q. (See Chapter V in [25].) Introduce two independent discrete-time random walks (Xn;n ≥ 0) and (Yn;n ≥ 0) starting at the same vertex, both independent of {Bx;x ∈ V}. Fix x and assume that the chains (Xn) and (Yn) both start at x. Recall that we write η̂ ≡ 1− η. Then by duality, we deduce from (2.4.6) that 1 k Eµu [D(x, ξs)] =− (b+ c) ∫ µu(dη)E[η̂(BX1s )η(BY1s )] + b ∫ µu(dη)E[η̂(BX1s )η(BY1s )η(BY2s )] + b ∫ µu(dη)E[η(BX1s )η̂(BY1s )η̂(BY2s )] =− c ∫ µu(dη)E[η(BY1s )] + c ∫ µu(dη)E[η(BX1s )η(BY1s )] + b ∫ µu(dη)E[η(BY1s )η(BY2s )] − b ∫ µu(dη)E[η(BX1s )η(BY2s )] + b ∫ µu(dη)E[η(BX1s )− η(BY1s )]. (2.4.7) For clarity, let us write from now on Pρ and Eρ for the probability measure and the expectation respectively under which the common initial position of (Xn) and (Yn) is distributed as ρ. 
Recall that D(ξ) is the pi-expectation of x 7−→ D(x, ξ) defined by (1.1.7). WriteMx,y = inf{t ∈ R+;Bxt = Byt } for the first meeting time of the random walks Bx and By, so Bx and By coincide after Mx,y. Then from (2.4.7), the spatial homogeneity of the Bernoulli product measures implies that 1 k Eµu [ D(ξs) ] =− cu+ c[uPpi(MX1,Y1 ≤ s) + u2Ppi(MX1,Y1 > s)] + b[uPpi(MY1,Y2 ≤ s) + u2Ppi(MY1,Y2 > s)] − b[uPpi(MX1,Y2 ≤ s) + u2Ppi(MX1,Y2 > s)] =− cu(1− u)Ppi(MX1,Y1 > s)− bu(1− u)Ppi(MY1,Y2 > s) + bu(1− u)Ppi(MX1,Y2 > s). 34 2.4. First-order approximations To obtain BI, we integrate both sides of the foregoing equality with respect to s over R+. This gives BI(u) =ku(1− u) ( − cEpi[MX1,Y1 ]− bEpi[MY1,Y2 ] + bEpi[MX1,Y2 ] ) . (2.4.8) We now turn to a simple identity between first meeting times and first hitting times. Let Ty = inf{n ≥ 0;Xn = y}, the first hitting time of y by (Xn). Observe that for the random walks on (connected) regular graphs, the invariant distribution is uniform and Ex[Ty] = Ey[Tx] for any x, y. Hence, the proof of Proposition 14.5 in [1] implies E[Mx,y] = 1 2 Ex[Ty] x, y ∈ V. (2.4.9) Write f(x, y) := Ex[Ty] = Ey[Tx], x, y ∈ V, where Ex = Eδx . Lemma 2.12. For any z ∈ V, Ez[f(X0, X1)] =Ez[f(Y1, Y2)] = N − 1, (2.4.10) Ez[f(X1, Y1)] =Ez[f(Y0, Y2)] = N − 2, (2.4.11) Ez[f(X1, Y2)] = ( 1 + 1 k ) (N − 1) + 1 k − 2. (2.4.12) Proof. The proof of the equality Ez[f(X0, X1)] = N − 1 can be found in Chapter 3 of [1] or [26]. We restate its short proof here for the convenience of readers. Let T+x = inf{n ≥ 1;Xn = x} denote the first return time to x. A standard result of Markov chains says Ex[T+x ] = pi(x)−1 = N for any x. The equalities in (2.4.10) now follow from the Markov property. Next, we prove (2.4.11). By (2.4.10) and the symmetry of f , we have N − 1 =Ez[f(X0, X1)] = ∑ x∼z 1 k Ez[Tx] = ∑ x∼z ∑ y∼z 1 k2 (Ey[Tx] + 1) = Ez[f(Y1, X1)] + 1, so Ez[f(X1, Y1)] = N − 2. Here, our summation notation ∑ x∼z means summing over indices x with z fixed, and the same convention holds in the proof of (2.4.12) and Section 2.5 below. A similar application of the Markov 35 2.4. First-order approximations property to the coordinate Y1 in Ez[f(Y0, Y1)] gives Ez[f(Y0, Y2)] = N − 2. This proves (2.4.11). Finally, we need to prove (2.4.12). We use (2.4.10) and (2.4.11) to get Ez[f(X0, X1)] =1 + Ez[f(X1, Y1)] =1 + ∑ x∼z ∑ y∼z y 6=x 1 k2 Ex[Ty] =1 + ∑ x∼z ∑ y∼z y 6=x 1 k2 (∑ w∼y 1 k Ex[Tw] + 1 ) (2.4.13) =1 + ∑ x∼z ∑ y∼z y 6=x 1 k2 + ∑ x∼z ∑ y∼z ∑ w∼y 1 k3 Ex[Tw]− ∑ x∼z ∑ w∼x 1 k3 Ex[Tw] =2− 1 k + Ez[f(X1, Y2)]− 1 k Ez[f(X1, X0)]. (2.4.14) Here, in (2.4.13) we use the symmetry of f , and the last equality follows from (2.4.10). A rearrangement of both sides of (2.4.14) and an application of (2.4.10) then lead to (2.4.12), and the proof is complete. Apply Lemma 2.12 and (2.4.9) to (2.4.8), and we obtain the following result. Proposition 2.13. For any u ∈ [0, 1], BI(u) = ku(1− u) 2 [( b k − c ) (N − 2) + b ( 2 k − 2 )] . (2.4.15) Finally, since BI(u) takes the form (2.4.4), we may apply (2.4.5) and Proposition 2.13 to obtain the explicit formula for the coefficient of w in (2.3.16), subject to each initial distribution un. This proves our assertion in Theorem 1.2 (1). 2.4.2 Proof of Theorem 1.2 (2) The proof of Theorem 1.2 (2) follows from almost the same argument for Theorem 1.2 (1) except for more complicated arithmetic. For this reason, we will only point out the main steps, leaving the detailed arithmetic to the interested readers. 
In the following, we continue to use the notations for the random walks in the proof of Theorem 1.2 (1). 36 2.4. First-order approximations Fix x ∈ V and assume the chains (Xn) and (Yn) both start at x. By Proposition 2.2, we have 1 k Eµu [D(x, ξs)] =Eµu [ (b− c)ξ̂s(x)f11(x, ξs)− cξ̂s(x)f10(x, ξs)− bξs(x)f01(x, ξs) ] − k k + 1 Eµu [( (b− c)f11(x, ξs)− cf10(x, ξs) + bf01(x, ξs) ) ( ξ̂s(x)f1(x, ξs)− ξs(x)f0(x, ξs) ) ] − 1 k + 1 Eµu [ bξ̂s(x)f 2 1 (x, ξs)− ξs(x)f0(x, ξs) ( (b− c)f1(x, ξs)− cf0(x, ξs) )] = ∫ µu(dη) ( E [ (b− c)η̂(Bxs )η(BY1s )η(BY2s )− cη̂(Bxs )η(BY1s )η̂(BY2s ) − bη(Bxs )η̂(BY1s )η(BY2s ) ] − k k + 1 E [( (b− c)η(BY1s )η(BY2s )− cη(BY1s )η̂(BY2s ) + bη̂(BY1s )η(BY2s ) ) × (η(BX1s )− η(Bxs ))] − 1 k + 1 E [ bη̂(Bxs )η(B X1 s )η(B Y1 s )− (b− c)η(Bxs )η̂(BX1s )η(BY1s ) + cη(Bxs )η̂(B X1 s )η̂(B Y1 s ) ]) , where the last equality follows again from duality. The last equality gives 1 k Eµu [D(x, ξs)] = ∫ µu(dη)E [ bη(BY1s )η(B Y2 s ) + c+ b k + 1 η(Bxs )η(B Y1 s ) − cη(BY1s )− b k + 1 η(Bxs )η(B Y2 s ) + kc− b k + 1 η(BX1s )η(B Y1 s ) − kb k + 1 η(BX1s )η(B Y2 s ) + c k + 1 η(Bxs )η(B X1 s )− c k + 1 η(Bxs ) ] . Recall that X1 (d) = Y1. Hence, by the definition of D and Ppi, the foregoing 37 2.5. Proofs of Proposition 2.1 and Proposition 2.2 implies that 1 k Eµu [ D(ξs) ] =− bu(1− u)Ppi(MY1,Y2 > s) − 2c+ b k + 1 u(1− u)Ppi(MX0,X1 > s) + b k + 1 u(1− u)Ppi(MY0,Y2 > s) − kc− b k + 1 u(1− u)Ppi(MX1,Y1 > s) + kb k + 1 u(1− u)Ppi(MX1,Y2 > s). Again, we integrate both sides of the foregoing display with respect to s and then apply (2.4.9) and Lemma 2.12 for the explicit form of BI. The result is given by the following. Proposition 2.14. For any u ∈ [0, 1], BI(u) = k(k + 2)u(1− u) 2(k + 1) [( b (k + 2) − c ) (N − 1)− (2k + 1)b− ck k + 2 ] . Our assertion for Theorem 1.2 (2) now follows from an application of Proposition 2.14 similar to that of Proposition 2.13 for Theorem 1.2 (1). The proof is now complete. 2.5 Proofs of Proposition 2.1 and Proposition 2.2 2.5.1 Fitnesses Suppose that ξ ∈ {1, 0}V is the present configuration on the graph. Let ni(x) = ni(x, ξ) be the number of neighbouring i players for an individual located at vertex x for i = 1, 0. Let w ∈ [0, 1] denote the intensity of selection. By definition, the fitness ρi(x) = ρi(x, ξ) of an i-player located at x is given by ρi(x) = (1− w) + w [ Πi1 Πi0 ] [ n1(x) n0(x) ] = (1− w) + wΠin(x) (2.5.1) Here, Πi is the payoff row of an i-player of the matrix Π and n(x) is the column vector [n1(x) n0(x)]>. Hence, there exists w0 > 0 depending only on k and Π such that ρi > 0 for every w ∈ [0, w0]; see [9]. 38 2.5. Proofs of Proposition 2.1 and Proposition 2.2 2.5.2 Proof of Proposition 2.1 The game with the death-birth updating under weak selection defines a Markov chain with transition probabilities Pw taking the form (2.2.2) and cw(x, ξ) =r1−ξ(x)(x, ξ) ≥ 0, (2.5.2) ri(x, ξ) = ∑ y∼x ρi(y)1ξ(y)=i∑ y∼x[ρ1(y)ξ(y) + ρ0(y)ξ̂(y)] . (2.5.3) It has been shown in Section 1.4 of [9] that the rates cw define voter model perturbations satisfying (A1) and (A2). Moreover, λ = 1 and the functions hi in the expansion (A2) are given by (2.2.3). Plainly, cw(x,1) ≡ r0(x,1) ≡ 0 and cw(x,0) ≡ r1(x,0) ≡ 0. Hence, (A3) is also satisfied. It remains to check that (A4) is satisfied. Since (A4) is satisfied when w = 0, it is enough to show that P (ξ, ξx) > 0⇐⇒ Pw(ξ, ξx) > 0, (2.5.4) for any ξ 6= 1,0 and any x. 
However, this is immediate from (2.5.3) if we notice that ρi(·) and the constant function 1, both regarded as measures on V in the natural way, are equivalent. Our proof of Proposition 2.1 is now complete. Remark 2.15. Suppose now that payoff is given by a general 2 × 2 payoff matrix Π∗ = (Π∗ij)i,j=1,0 subject only to the ‘equal-gains-from-switching’ condition (1.1.11). Let us explain how to reduce the games with payoff matrix Π∗ to the games with payoff matrix Πa under weak selection, where Πa is defined by (1.1.12). In this case, payoffs of players are as described in Remark 1.5, and fitness is given by ρΠ ∗ i (x) = (1− w) + wΠ∗in(x), x ∈ V. (2.5.5) Here again, Π∗i is the payoff row of an i-player. We put the superscript Π ∗ (only in this remark) to emphasize the dependence on the underlying payoff matrix Π∗, so in particular the previously defined fitness ρ in (2.5.1) is equal to ρΠ. Suppose that the graph is k-regular. The transition probabilities under the death-birth updating with payoff matrix Π∗ are defined in the same way as before through (2.5.2) and (2.5.3) with ρ replaced by ρΠ ∗ . Note that 39 2.5. Proofs of Proposition 2.1 and Proposition 2.2 n1(x) + n0(x) ≡ k. Then for all small w 1 1− (1− kΠ∗00)w ρΠ ∗ i (x) =1 + w 1− (1− kΠ∗00)w Πain(x) =1 + wa 1− waΠ a in(x) = 1 1− wa ρ Πa i (x) for some wa. Here, w and wa are defined continuously in terms of each other by wa = w 1 + kΠ∗00w and w = wa 1− kΠ∗00wa , so limwa→0w = limw→0wa = 0. Consequently, by (2.5.2) and (2.5.3), the foregoing display implies that the death-birth updating with payoff matrix Π∗ and intensity of selection w is ‘equivalent’ to the death-birth updating with payoff matrix Πa and intensity of selection wa, whenever wa or w is small. Here, ‘equivalent’ means equality of transition probabilities. A similar reduction applies to the imitation updating by using its formal definition described in the next subsection, and we omit the details. 2.5.3 Proof of Proposition 2.2 Under the imitation updating, the Markov chain of configurations has tran- sition probabilities given by Pw(ξ, ξx) = 1 N dw(x, ξ), Pw(ξ, ξ) =1− 1 N ∑ x dw(x, ξ), (2.5.6) where dw(x, ξ) =s1−ξ(x)(x, ξ) (2.5.7) si(x, ξ) = ∑ y∼x ρi(y)1ξ(y)=i∑ y∼x [ ρ1(y)ξ(y) + ρ0(y)ξ̂(y) ] + ρ1−i(x) . (2.5.8) and the fitness ρi are defined as before by (2.5.1). We assume again that the intensity of selection w is small such that ρi > 0. To simplify notations, let 40 2.5. Proofs of Proposition 2.1 and Proposition 2.2 us set the column vectors f(x) = [f1(x) f0(x)] > , f i•(x) = [fi1(x) fi0(x)] > , ni•(x) = [ni1(x) ni0(x)]> , where the functions fi and fij are defined by (2.2.4). By (2.5.1) and (2.5.8), we have si(x, ξ) = (1− w)ni(x) + wΠini•(x) (1− w)(k + 1) + w∑1j=0 Πjnj•(x) + wΠ1−in(x) = (1− w) kk+1fi(x) + k 2 k+1wΠif i•(x) (1− w) + w k2k+1 ∑1 j=0 Πjf j•(x) + w k k+1Π1−if(x) = k k+1fi(x) + w ( k2 k+1Πif i•(x)− kk+1fi(x) ) 1 + w ( k2 k+1 ∑1 j=0 Πjf j•(x) + k k+1Π1−if(x)− 1 ) . Note that the functions fi and fij are uniformly bounded. Apply Taylor’s expansion in w at 0 to the right-hand side of the foregoing display. We deduce from (2.5.7) that the transition probabilities (2.5.6) takes the form (2.2.2) with λ = kk+1 and the rates c w satisfying (A1) and (A2) for some small w0. Moreover, the functions hi are given by hi = (kΠif i• − fi)− fi  k2 k + 1 1∑ j=0 Πjf j• + k k + 1 Π1−if − 1  =kΠif i• − k2 k + 1 fi  1∑ j=0 Πjf j• − k k + 1 fiΠ1−if . By the definition of Π in (1.1.1), we get (2.2.5). 
The verifications of (A3) and (A4) follow from similar arguments for those of (A3) and (A4) under the death-birth updating, respectively. This completes the proof of Propo- sition 2.2. 41 Chapter 3 Pathwise Non-uniqueness for the SPDE’s of Some Super-Brownian Motions with Immigration 3.1 Introduction In this chapter, we construct a pair of distinct nonnegative solutions for a SPDE of the form (1.2.9) where the immigration function ψ satisfies (1.2.8). In the following, we outline our construction of the distinct solutions and the argument for their separation. The distinct solutions are obtained by approximation, and each ε-approx -imating pair, still denoted by (X,Y ) but now subject to Pε, consists of super-Brownian motions with intermittent immigration and subject to the same space-time white noise. By intermittent immigration, we mean that im- migrants land after intervals of deterministic lengths and with initial masses centered at i.i.d. targets, and then, along with their offspring, evolve inde- pendently as true super-Brownian motions. More precisely, the initial masses of the immigrant processes associated with the ε-approximating solutions are of the form ψ(1)Jxε ( · ) with x identified as the target, where Jxε (z) ≡ ε1/2J ( (x− z)ε−1/2), z ∈ R, (3.1.1) for an even C+c (R)-function J which is bounded by 1 and with topological support contained in [−1, 1], and satisfies ∫R J(z)dz = 1. In addition, the immigration events occur at alternating times si = ( i− 1 2 ) ε and ti = iε for i ∈ N, and the targets associated with the immigrants of X and Y are given by the 42 3.1. Introduction i.i.d. spatial variables xi and yi at si and ti, respectively, where Pε(xi ∈ dx) = Pε(yi ∈ dx) ≡ ψ(x)dx ψ(1) . (3.1.2) The weak existence of the approximating solutions follows from the usual Peano’s approximation argument. (See Section 3.3 for more details.) We remark that, despite the above informal descriptions, it seems difficult to refine this argument for an explicit construction of the immigration clusters, because the ε-approximating solutions X and Y , as their sums, have to be subject to the same space-time white noise. A simple heuristic argument by passing ε ↘ 0 suggests that these ap- proximating solutions converge to true solutions of the SPDE (1.2.9) (cf. (3.3.4)). The rigorous proof takes some routine, but a bit tedious, work. (See Section 3.9.) We remark that alternative constructions of super-Brownian motions with immigration can be done through Poisson point processes; see, for example, [8]. We notice that a similar construction of approximating solutions also appears in [30] for the equation (1.2.17). Each one is constructed from its “positive part” and “negative part” as two super-Brownian motions with in- termittent immigration but now subject to pairwise annihilation upon col- lision. In this case, the explicit construction of the immigrant processes for both parts is possible, and all of them are subject to independent space-time white noises and are consistently adapted, that is obeying their defining properties with respect to the same filtration. In fact, these features are at the heart of making immigrant-wise semimartingale calculations possible in [30]. For the above approximating solutions of (1.2.9), we overcome the restric- tion from Peano’s approximation and employ a method, called continuous decomposition, to elicit the immigrant processes from the approximating solutions such that they are consistently adapted. 
There seems no obvious reason why the resulting clusters should satisfy this property, if the general theory of coupling is applied to globally decompose the approximating so- lutions. In contrast, our method focuses on their local decompositions over time intervals of infinitesimal lengths but still gives the global result: X = ∞∑ i=1 Xi and Y = ∞∑ i=1 Y i. The resulting clusters associated with each approximating solution are in- dependent and are adapted to a common enlargement of the underlying fil- tration obtained by “continuously” joining σ-fields of independent variables 43 3.1. Introduction which validate the local decompositions for both approximating solutions. Moreover, the latter property makes all natural immigrant-wise semimartin- gale calculations possible. See Section 3.4 for the details of this discussion. We now explain why the approximating solutions are uniformly sepa- rated. We switch to the conditional probability measure under which the total mass process of a generic cluster, say Xi from the above continuous decomposition of the ε-approximating solution X, hits 1. Let us call this conditional probability measure Qiε from now on. The motivation of this approach is that by independence of the immigrants Xi, there must be an immigrant cluster associated with X whose total mass hits 1, so we should be able to carry the conditional separation under a generic Qiε to the separation of the approximating solutions under Pε. The readers may notice that our argument for separation under Pε is reminiscent of similar ones in the studies of SDE’s and SPDE’s on pathwise uniqueness of solutions by excursion theory (cf. [4] and [6]), except that in the present context, the “excursions” of immigrant clusters can overlap in time without waiting until the earlier excursions die out. Nonetheless, as in [30], we can still use some inclusion-exclusion arguments to establish conditional separation of our approximate solutions uniformly in . (See Section 3.7.) Under Qiε, an application of Girsanov’s theorem shows that Xi(1) is a constant multiple of a 4-dimensional Bessel squared process near its birth time and hence has a known growth rate. To obtain conditional separation of the approximating solutions under Qiε, we will show that the local growth rate of Y near (xi, si) is smaller than this known growth rate of Xi(1), which in turn will be smaller than the local growth rate of X near (xi, si). The latter step will use the known modulus of continuity of the support of super-Brownian motion. To introduce the appropriate local growth rate from the above scheme, we need to identify a growing space-time region starting at (xi, si) and then identify the sub-collection of immigrant clusters which can possibly invade this region in small time. For the former, we envelope the support processes of Y j and Xi by approximating parabolas of the form P(x,s)β (t) = { (z, r) ∈ R× [s, t]; |x− z| ≤ ε1/2 + (r − s)β } (3.1.3) for β close to 1/2 and consider the propagation of these parabolas instead of that of the support processes. The almost-sure growth rate of the support process of super-Brownian motion shows, for example, that supp(Xi) ∩ (R× [si, t]) ⊆ P(xi,si)β (t) for t− si small, 44 3.1. Introduction where supp(Xi) is the space-time support of the two-parameter random func- tion Xi. (See Section 3.6.1 and Proposition 3.51.) 
On the other hand, the Qiε-probability that one of the Y j clusters born before Xi invades the ter- ritory of Xi can be made relatively small by taking small t, which follows from an argument similar to Lemma 8.4 of [30] (see Section 3.12). These Y j clusters can henceforth be excluded from our consideration. As a re- sult, the tractable geometry of the approximating parabolas (3.1.3) yields the space-time rectangle Ri(t) = [ xi − 2 ( ε1/2 + (t− si)β ) , xi + 2 ( ε1/2 + (t− si)β )]× [si, t] so that the immigrant processes Y j landing inside Ri(t) are the only possible candidates which can invade the “territory” of Xi by time t. This identifies a family of clusters, say, {Y j ; j ∈ J i(t)}. Furthermore, we can classify these clusters Y j into critical clusters and lateral clusters. In essence, the critical clusters are born near the territory of Xi so the interactions between these clusters and Xi are significant. In contrast, the lateral clusters must evolve for relatively larger amounts of time before they start to interfere with Xi. Up to this point, the framework we set for investigating conditional sep- aration of approximating solutions is very similar to that in [30]. The inter- actions between the approximating solutions considered in both cases are, however, very different in nature. For example, the covariation between Xi and Y j under Qiε from Girsanov’s theorem is the main source of difficulty in our case, as is not in [30]. For this reason, our case calls for a new analysis in many aspects. Our result for the conditional separation can be captured quantitatively by saying: for arbitrarily small δ > 0, with high Qiε-probability, Xit(1) ≥ constant · (t− si)1+δ and∑ j∈J i(t) Y jt (1) ≤ constant · (t− si) 3 2 −δ, for t close to si+. (3.1.4) Here, the initial behavior of Xi(1) under Qiε as a constant multiple of a 4- dimensional Bessel squared process readily gives the first part of (3.1.4). (See Section 3.8.) On the other hand, the extra order, which is roughly (t−si)1/2, for the sum of the (potential) invaders Y j can be seen as the result of using spatial structure, which is not available for the SDE’s discussed above. In fact, the above framework needs to be further modified in a critical way due to a technical difficulty which arises in our setting (but not in [30]). We must consider a slightly looser definition for critical clusters, and 45 3.1. Introduction a slightly more stringent definition for lateral clusters. It will be convenient to consider this modified classification for the Y j clusters, still indexed by j ∈ J i(t) for convenience, landing inside a slightly larger rectangle in place of Ri(t). Write J i(t) = Ci(t) ∪ Li(t), where Ci(t) and Li(t) are the random index sets associated with critical clusters and lateral clusters, respectively. (See Section 3.6.1 for the precise classification.) Let us now bound the sum of the total masses Y jt (1), j ∈ J i(t), under Qiε. As in [30], this part plays a key role in this work. The treatment of the sum is through an analysis of its first-moment. The emphasis is on the covariation process between Xi and Y j under Qiε for j ∈ J i(t) resulting from Girsanov’s theorem. For the critical clusters Y j , their covariation processes with Xi have absolute bounds given by ∫ t tj [Y js (1)]1/2 [Xis(1)] 1/2 ds (3.1.5) for t sufficiently close to tj+ (cf. Lemma 3.8 below), so only the total masses of the clusters need to be handled. 
In this direction, we use an improved modulus of continuity of the total mass processes Y j(1) and the lower bound of Xi(1) in (3.1.4) to give deterministic bounds for the integrands in (3.1.5). The overall effect is a Riemann-sum bound for the sum of the total masses Y jt (1), j ∈ Ci(t), which has growth similar to that in the second part of (3.1.4). The lateral clusters pose an additional difficulty here which is not present in [30] due to the correlations between these clusters and Xi. The question is whether or not conditioning on the nearby Xi can pull along the nearby Y j ’s at a greater rate. In order to help bound the contributions of these clusters, we argue that a lateral cluster Y j is independent of Xi until these clusters collide (cf. Lemma 3.20 and Proposition 3.21). This allows us to adapt the arguments for the critical clusters and furthermore bound the growth rate of the sums of the total masses Y jt (1), j ∈ Li(t), by the desired order. See the discussion in Section 3.6.4 for more on this issue. This chapter is organized as follows. In Section 3.2, we give a non-rigorous derivation of the SPDE’s of one-dimensional super-Brownian motions. In Section 3.3, we give the precise definition of the pairs of approximating so- lutions considered throughout this chapter and state their existence in The- orem 3.1. 46 3.2. A non-rigorous derivation of the SPDE of super-Brownian motions Section 3.4–3.7 should be regarded as the heart of this work. In Sec- tion 3.4, we outline the idea behind our continuous decomposition of a super-Brownian motion with intermittent immigration and then give the rigorous proof for the continuous decompositions of the approximation solu- tions. Some properties of the resulting clusters from the decompositions are discussed at the end of Section 3.4. In Section 3.5, we set up some basic results and then proceed to an infor- mal calculation on obtaining the conditional separation of the approximating solutions. The latter should be regarded as a guide to Section 3.6 where we argue with complete rigour. Due to the complexity, the main two lemmas of Section 3.6 are proved in Section 3.6.3 and Section 3.6.4 respectively, with some preliminaries set in Section 3.6.2. Some technical details of Section 3.6 are relegated to Section 3.8 and Section 3.10. In particular, we give the proof for improved modulus of continuity of continuous functions in Section 3.10. In Section 3.7, we show the uniform separation of approximating solutions under Pε. The convergence of our approximating solutions is relegated to Section 3.9 and is supported by Section 3.11 in which some general limit theorems of Crap(R)-valued processes are discussed. For completeness, an adaptation of the result in [30] for the support processes of the immigrating super-Brownian motions to the present context is given in Section 3.12. A list of frequently used notation can be found at the end of this chapter for the convenience of readers. 3.2 A non-rigorous derivation of the SPDE of super-Brownian motions In this section, we give a non-rigorous discussion for the SPDE of super- Brownian motions via approximation by the simple class of spatial branching processes { X(k) } introduced in Section 1.2. Throughout this section, we will continue to follow the setting and notation therein. We refer the readers to [22], [41], and Section II.4 and Section III.4 of [39] for the rigorous proofs as well as the precise statements. Assume that (1.2.4) holds. 
To see how the continuous limit X should evolve, we consider the time evolution of X(k)(φ) for any φ ∈ C∞c (R) from the point of view of stochastic differential equations. By Itô’s formula and 47 3.2. A non-rigorous derivation of the SPDE of super-Brownian motions the definition of X(k), we obtain, for t ∈ [nk , n+1k ), dX (k) t (φ) = 1 k ∑ u∈GW(k)(n) dφ ( ξu,kt ) = 1 k ∑ u∈GW(k)(n) [ ∆φ 2 ( ξu,kt ) dt+ φ′ ( ξu,kt ) dξu,kt ] =X (k) t ( ∆φ 2 ) dt+ 1 k ∑ u∈GW(k)(n) φ′ ( ξu,kt ) dξu,kt . (3.2.1) We recall that ξu,k for u of the same length are independent, conditioned on the evolution of their ancestors. Hence, over [nk , n+1 k ), the square of the second term on the right-hand side of (3.2.1) is given by 1 k2 ∑ u∈GW(k)(n) [ φ′ ( ξu,kt )]2 dt = 1 k X (k) t ( (φ′)2 ) dt = O ( k−1 ) dt, where the use of O(k−1) is justified by the convergence of X(k). We also need to consider X (k) n k (φ)−X(k)n k −(φ) = 1 k ∑ u∈GW(k)(n−1) (Ku,k − 1)φ ( ξu,kn k ) . We now condition on the information of all the underlying random objects up to time nk −∆t, where ∆t > 0 is infinitesimal. Then the jump X (k) n k (φ)− X (k) n k −(φ) is distributed as the sum of independent variables: 1 k ∑ u∈A (Ku,k − 1)φ(xu), for xu = ξu,kn k , A = GW(k)(n− 1), which has mean zero and variance 1 k2 ∑ u∈A φ(xu)2 = X (k) n k −(φ 2) 1 k . (Recall that the offspring distributions are assumed to have unit mean and unit variance.) Hence, when population size is sufficiently large, the law of large numbers suggests that X (k) n k (φ)−X(k)n k −(φ) ∼ N ( 0, X (k) n k −(φ 2) 1 k ) . (3.2.2) 48 3.3. Super-Brownian motions with intermittent immigration In summary, the above observations suggest that X(φ) satisfies the fol- lowing characterization: dXt(φ) = Xt ( ∆φ 2 ) dt+ [Xt(φ 2)]1/2dBφt , (3.2.3) where Bφ is a standard Brownian motion. More precisely, the first term on the right-hand side of (3.2.1) contributes to Xt ( ∆φ 2 ) , and the terms (3.2.2) contribute to [Xt(φ2)]1/2dB φ t . Indeed, for the latter we can compare the fact [Xt(φ 2)]1/2dBφt ∼ N ( 0, Xt(φ 2)dt ) and the observation (3.2.2). To avoid using the Brownian motion Bφ depending on the test function φ, we can further write the second term on the right-hand side of (3.2.3) as [Xt(φ 2)]1/2dBφt = ∫ R X(x, t)1/2φ(x)dW (x, t) (3.2.4) by assuming the density {X(x, t)} of X, whereW is a space-time white noise and satisfies dW (x, s) · dW (y, t) = dxδx(dy)dsδs(dt). Indeed, both sides of (3.2.4) are distributed as N (0, Xt(φ2)dt) by the Gaussian property of Bφ and W . Applying the identification (3.2.4) to (3.2.3), we see that the density process of X is a weak solution to the SPDE (1.2.5). 3.3 Super-Brownian motions with intermittent immigration In this section, we describe in more detail the pairs of approximating solu- tions to the SPDE (1.2.9) considered throughout this chapter. Recall that, for f ∈ L1loc(R), we use the identification (1.2.13). We will further write f(Γ) = f(1Γ), Γ ∈ B(R), whenever the right-hand side makes sense. 49 3.3. Super-Brownian motions with intermittent immigration For ε ∈ (0, 1], the ε-approximating solution X is a nonnegative càdlàg Crap(R)-valued process and, moreover, continuous within each [si, si+1). Its time evolution is given by Xt(φ) = ∫ t 0 Xs ( ∆ 2 φ ) ds+ ∫ (0,t] ∫ R φ(x)dAX(x, s) + ∫ t 0 ∫ R X(x, s)1/2φ(x)dW (x, s), φ ∈ C∞c (R). 
(3.3.1) In (3.3.1), the nonnegative measure AX on R × R+ results from the initial masses of the immigrant processes associated with X and is defined by AX(Γ× [0, t]) , ∑ i:0<si≤t ψ(1)Jxiε (Γ) (3.3.2) (recall our notation Jxε in (3.1.1) and the i.i.d. spatial random points {xi} with individual law (3.1.2)), and W is a space-time white noise. A similar characterization applies to the other approximating solution Y . It is a nonnegative càdlàg Crap(R)-valued process satisfying Yt (φ) = ∫ t 0 Ys ( ∆ 2 φ ) ds+ ∫ (0,t] ∫ R φ(x)dAY (x, s) + ∫ t 0 ∫ R Y (x, s)1/2φ(x)dW (x, s), φ ∈ C∞c (R), (3.3.3) and is continuous over each [tj , tj+1). The nonnegative measure AY on R× R+ is now defined by AY (Γ× [0, t]) , ∑ j:0<tj≤t ψ(1)J yj ε (Γ). We stress that here X and Y are subject to the same space-time white noise W . The existence of these pairs of ε-approximation solutions follows by con- sidering the so-called mild forms of solutions of SPDE’s and using the classi- cal Peano’s existence argument as in Theorem 2.6 of [40]. The precise result is summarized in the following theorem. Here and throughout this chapter, we use the notation “G⊥⊥ξ” to mean that the σ-field G and the random element ξ are independent, and analogous notation applies to other pairs of objects which allow probabilistic independence in the usual sense. 50 3.3. Super-Brownian motions with intermittent immigration Theorem 3.1. For any ε ∈ (0, 1], we can construct a filtered probability space (Ω,F , (Ft),Pε), with a filtration (Ft) satisfying the usual conditions, on which there exist the following random elements: (i) an (Ft)-space-time white noise W , (ii) two nonnegative (Ft)-adapted Crap(R)-valued processesX and Y satis- fying (3.3.1) and (3.3.3) with respect toW with paths which are càdlàg on R+ and continuous within each [si, si+1) and [tj , tj+1), respectively, (iii) and i.i.d. random variables xi, yi with law given by (3.1.2), taking values in the topological support of ψ, and satisfying the property that ∀ i ∈ N, σ(Xs, Ys ; s < si)⊥⊥xi and σ(Xs, Ys ; s < ti)⊥⊥yi. Both X and Y are genuine approximating solutions to the SPDE (1.2.9) with respect to the same white noise. More precisely, for every sequence (εn) ⊆ (0, 1] with εn ↘ 0, the sequence of laws Pεn(X ∈ · , Y ∈ · ), n ∈ N, as probability measures on D ( R+,Crap(R) ) ×D(R+,Crap(R)) is tight, and the limit of every convergent subsequence is the joint law of a pair of solutions to (1.2.9) with respect to the same space-time white noise. This will be proved in Section 3.9. Although the proof is long, the readers should be convinced immediately upon considering the limiting behaviour of the random measures AX : for any t ∈ (0,∞), P- lim ε↓0+ ∫ (0,t] ∫ R φ(x)dAX(x, s) =P- lim ε↓0+ ψ(1)ε btε−1c∑ i=1 φ(xi) =t〈ψ, φ〉, (3.3.4) for any φ ∈ C∞c (R), by the law of large numbers. Here, P- lim denotes convergence in probability, and btc is the greatest integer less than or equal to t. Notation. The following convention will be in force throughout this chapter unless otherwise mentioned. We continue to suppress the dependence on ε as above whenever the context is clear. In this case, we only use the probability measure Pε to emphasize the dependence on ε. The subscript ε of Pε is further omitted in cases where there is no ambiguity, although in this context we will remind the readers of this practice. 51 3.4. 
Continuous decompositions of approximating processes 3.4 Continuous decompositions of approximating processes For every ε ∈ (0, 1], we have two approximating solutions X and Y , as stated in Theorem 3.1, to the SPDE (1.2.9) of a super-Brownian motion with immigration and zero initial condition. From their informal descriptions, we expect to decompose them into X = ∞∑ i=1 Xi and Y = ∞∑ i=1 Y i, (3.4.1) where the summands Xi and Y i are super-Brownian motions started at si and ti and with starting measures Jxiε and J yi ε , respectively, for each i, and each of the families {Xi} and {Y i} consists of independent random elements. Let us give an elementary discussion on how to obtain the decompositions in (3.4.1). Later on, we will require additional properties of the decomposi- tions. Now, it follows from the uniqueness in law of super-Brownian motions and the defining equation (3.3.1) of X that X is a (time-inhomogeneous) Markov process and, for each i ∈ N, (Xt)t∈[si,si+1) defines a super-Brownian motion with initial distribution Xsi . (Cf. the proof of Theorem IV.4.2 in [12] and the martingale problem characterization of super-Brownian motion in [39].) Hence, each of the equalities in (3.4.1) holds in the sense of being identical in distribution. We then recall the following general theorem; see Theorem 6.10 in [20]. Theorem 3.2. Fix any measurable space E1 and Polish space E2, and let ξ (d) = ξ̃ and η be random elements taking values in E1 and E2, respectively. Here, we only assume that ξ and η are defined on the same probability space. Then there exists a measurable function F : E1 × [0, 1] −→ E2 such that for any random variable Ũ uniformly distributed over [0, 1] with Ũ⊥⊥ξ̃, the random element η̃ = F (ξ̃, Ũ) solves (ξ, η) (d) = (ξ̃, η̃). By the preceding discussions and Theorem 3.2, we can immediately con- struct the summands Xi and Y i by introducing additional independent uni- form variables and validate the equalities (3.4.1) as almost-sure equalities. Such naive decompositions, however, are too crude because, for example, we are unable to say that all the resulting random processes perform their defin- ing properties with respect to the same filtration. This difficulty implies in particular that we cannot do semimartingale calculations for them. A finer 52 3.4. Continuous decompositions of approximating processes X(x,·) X1(x,·) X2(x,·) X3(x,·) ts1 s2 s3 Figure 3.1: Decomposition of X along a space variable x. decomposition method, however, does yield a solution to this problem, and the result is stated as follows. Theorem 3.3 (Continuous decomposition). Fix ε ∈ (0, 1], and let (X,Y,W ) be as in Theorem 3.1. By changing the underlying probability space if necessary, we can find a filtration (Gt) satisfying the usual condi- tions and two families {Xi} and {Y i} of nonnegative Crap(R)-valued pro- cesses, such that the followings are satisfied: (i) The processes {Xi} are independent. (ii) The equality in (3.4.1) involving X and Xi holds almost surely. (iii) ( Xit ) t∈[si,∞) is in C ( [si,∞),Crap(R) ) and is a (Gt)t≥si-super-Brownian motion started at time si with starting measure ψ(1)Jxiε . Also, Xit ≡ 0 for every t ∈ [0, si). (iv) The processes {Y i} satisfy properties analogous to (i)–(iii) with the roles of X and {(Xi, xi, si)} replaced by Y and {(Y i, yi, ti)}, respec- tively. (v) The conditions (i) and (ii) of Theorem 3.1 hold with (Ft) replaced by (Gt), and the condition (iii) of the same theorem is replaced by the 53 3.4. 
Continuous decompositions of approximating processes stronger independent landing property: ∀ i ∈ N, σ (Xjs , Y js ; s < si, j ∈ N)⊥⊥xi and σ ( Xjs , Y j s ; s < ti, j ∈ N )⊥⊥yi. (3.4.2) Due to the length of the proof, we first outline the informal idea for the convenience of readers. Recall that the first immigration event for X and Y occurs at s1 = ε2 . Take a grid of [ ε 2 ,∞) containing all the points si and ti for i ∈ N and with “infinitesimal” mesh size. Here, the mesh size of a grid is the supremum of the distances between consecutive grid points. The key observation in this construction is that, over any subinterval [t, t + ∆t] ⊆ [si, si+1) from this grid, (Xr; r ∈ [t, t + ∆t]) has the same distribution as the sum of i independent super-Brownian motions started at t over [t, t + ∆t], whenever the sum of the initial conditions of these independent super-Brownian motions has the same distribution as Xt. This fact allows us to inductively decompose X over the intervals of in- finitesimal lengths from this grid, such that the resulting infinitesimal pieces of super-Brownian motions can be concatenated in the natural way to obtain the desired super-Brownian motions. More precisely, we apply Theorem 3.2 by bringing in independent uniform variables as “allocators” to obtain these infinitesimal pieces. A similar method applies to continuously decompose Y into the desired independent super-Brownian motions by using another family of independent allocators. Finally, because the continuity of X, Y , and W allows us to characterize their laws over the entire time horizon R+ by their laws over [0, ε/2] and their probabilistic transitions on this grid with infinitesimal mesh size, the filtration obtained by sequentially adding the σ-fields of the independent allocators will be the desired one. From now on, we use the notation ∆Zs = Zs − Zs− with Z0− = 0 for a càdlàg process Z taking values in a Polish space. Proof of Theorem 3.3. Fix ε ∈ (0, 1] and we shall drop the subscript ε of Pε. Throughout the proof, we take for each m ≥ 1 a countable grid Dm of [ ε2 ,∞) which contains si and ti for any i ≥ 1 and satisfies # (Dm ∩K) < ∞ for any compact subset K of R+. We further assume that Dm+1 ⊆ Dm for each m, between any two points si and ti there is another point belonging to D1 and hence to each Dm, and the mesh size of Dm goes to zero as m −→ ∞. In addition, we will write for convenience {SBMt(µ, dν); t ∈ R+} for the semigroup of super-Brownian motion on R. When the density of the super- Brownian motion on R started at time s and with starting measure f(x)dx 54 3.4. Continuous decompositions of approximating processes for a nonnegative Crap(R)-function f is concerned, we write SBMf,[s,t] for the law of its C ( [s,∞),Crap(R) ) -valued density restricted to the time inter- val [s, t]. (Step 1). Fix m ∈ N and write ε2 = τ0 < τ1 < · · · as the consecutive points of Dm. Assume, by an enlargement of the underlying probability space where the triplet (X,Y,W ) lives if necessary, the existence of i.i.d. variables {UXj , UYj ; j ∈ N} with UX1 is uniformly distributed over [0, 1] and { UXj , U Y j ; j ∈ N } ⊥⊥F . (3.4.3) In this step, we will decompose X and Y based on the grid Dm into the random elements Xm = (Xm,1, Xm,2, · · · ) and Ym = (Y m,1, Y m,2, · · · ) , respectively. Here, Xm,i ∈ C([si,∞),Crap(R)) and Y m,i ∈ C([ti,∞),Crap(R)) with Xm,i ≡ 0 on [0, si) and Y m,i ≡ 0 on [0, ti), (3.4.4) so we will only need to specify Xm,i over [si,∞) and Y m,i over [ti,∞). We consider the construction of Xm first. 
The decomposition of X over [s1, s2] should be self-evident. Over this interval, set Xm,1 ≡ X on [s1, s2) with Xm,1s2 = Xs2− and Xm,2s = { 0, s ∈ [s1, s2), ψ(1)Jx2ε = ∆Xs2 , s = s2. From now on, we shall define Xm over [s2, τj ] by an induction on integers j ≥ jX∗ , where jX∗ ∈ N satisfies s2 = τjX∗ , such that( Xm,is ; 0 ≤ s ≤ τk, i ∈ N ) ∈ σ (Xs; s ≤ τk) ∨ σ (UXi , 1 ≤ i ≤ k) , ∀ k ∈ {0, · · · , j}, (3.4.5) with σ ( UXi , 1 ≤ i ≤ 0 ) understood to be the trivial σ-field {Ω,∅}, the laws of Xm,i obey (a) L ( Xm,is ; s ∈ [si, τj ] ) ∼ SBMψ(1)Jxiε ,[si,τj ] if si ≤ τj , and (b) ( Xm,is ; s ∈ [si, τj ] ) , for i satisfying si ≤ τj , are independent, (3.4.6) 55 3.4. Continuous decompositions of approximating processes and finally Xs = ∞∑ i=1 Xm,is , ∀ s ∈ [0, τj ] a.s. (3.4.7) By the foregoing identification of Xm over [s1, s2], we have obtained the case that j = jX∗ , that is, the first step of our inductive construction. Assume that Xm has been defined up to time τj for some integer j ≥ jX∗ such that (3.4.5)–(3.4.7) are all satisfied. We now aim to extend Xm over [τj , τj+1] so that all of (3.4.5)–(3.4.7) hold with j replaced by j + 1. First, consider the case that [τj , τj+1] ⊆ [sk, sk+1) (3.4.8) for some k. In this case, we only need to extend Xm,1, · · · , Xm,k. Take an auxiliary nonnegative random element ξ = ( ξ1, · · · , ξk) ∈ C ([τj ,∞), k∏ i=1 Crap(R) ) such that the coordinates ( ξis; s ∈ [τj ,∞) ) are independent processes and each of them defines a super-Brownian motion started at τj with initial law L ( ξiτj ) = L ( Xm,iτj ) , i ∈ {1, · · · , k}. (3.4.9) Now, our claim is that we can extend Xm,1, · · · , Xm,k continuously over [τj , τj+1] so that( Xm,1τj , · · · , Xm,kτj , (Xr)r∈[τj ,τj+1], ( Xm,1r ) r∈[τj ,τj+1] , · · · , ( Xm,kr ) r∈[τj ,τj+1] ) (d) = ξ1τj , · · · , ξkτj , ( k∑ i=1 ξir ) r∈[τj ,τj+1] , ( ξ1r ) r∈[τj ,τj+1] , · · · , ( ξkr ) r∈[τj ,τj+1]  . (3.4.10) In particular, the equality (3.4.10) in distribution implies that almost surely we have the following equalities: Xm,1r + · · ·+Xm,kr = Xr, ∀ r ∈ [τj , τj+1] , (3.4.11) 56 3.4. Continuous decompositions of approximating processes and L (( Xm,1r ) r∈[τj ,τj+1] , · · · , ( Xm,kr ) r∈[τj ,τj+1] ∣∣∣Xm,1τj , · · · , Xm,kτj , Xτj) =L (( Xm,1r ) r∈[τj ,τj+1] , · · · , ( Xm,kr ) r∈[τj ,τj+1] ∣∣∣Xm,1τj , · · · , Xm,kτj ) =SBM Xm,1τj ,[τj ,τj+1] ⊗ · · · ⊗ SBM Xm,kτj ,[τj ,τj+1] , (3.4.12) where the first equality of (3.4.12) follows from (3.4.11), and the second equality follows from the definition of ξ. To prove our claim (3.4.10), first we consider P ( (Xr)r∈[τj ,τj+1] ∈ Γ, Xm,1τj ∈ A1, · · · , Xm,kτj ∈ Ak ) =E [ P ( (Xr)r∈[τj ,τj+1] ∈ Γ ∣∣∣Xτj) ;Xm,1τj ∈ A1, · · · , Xm,kτj ∈ Ak] =E [ SBMXτj ,[τj ,τj+1] (Γ) ;X m,1 τj ∈ A1, · · · , Xm,kτj ∈ Ak ] =E [ SBM∑k i=1X m,i τj ,[τj ,τj+1] (Γ) ;Xm,1τj ∈ A1, · · · , Xm,kτj ∈ Ak ] , (3.4.13) where the first and the second equalities use the (time-inhomogeneous) Markov property of X and (3.4.5), and the last equality follows from the equality (3.4.7) from induction. Second, by (3.4.6) from induction and (3.4.9), we have ( Xm,1τj , · · · , Xm,kτj ) (d) = ( ξ1τj , · · · , ξkτj ) . Hence, from (3.4.13), we get P ( (Xr)r∈[τj ,τj+1] ∈ Γ, Xm,1τj ∈ A1, · · · , Xm,kτj ∈ Ak ) =E [ SBM∑k i=1 ξ i τj ,[τj ,τj+1] (Γ); ξ1τj ∈ A1, · · · , ξkτj ∈ Ak ] =E P ( k∑ i=1 ξir ) r∈[τj ,τj+1] ∈ Γ ∣∣∣∣∣ξ1τj , · · · , ξkτj  ; ξ1τj ∈ A1, · · · , ξkτj ∈ Ak  =P ( k∑ i=1 ξir ) r∈[τj ,τj+1] ∈ Γ, ξ1τj ∈ A1, · · · , ξkτj ∈ Ak  . 
(3.4.14) Here, the second equality follows from the convolution property of the laws of super-Brownian motions: SBMf1,[s,t] ? · · · ? SBMfk,[s,t] = SBM∑ki=1 fi,[s,t]. 57 3.4. Continuous decompositions of approximating processes Then (3.4.14) implies that ( Xm,1τj , · · · , Xm,kτj , (Xr)r∈[τj ,τj+1] ) (d) = ξ1τj , · · · , ξkτj , ( k∑ i=1 ξir ) r∈[τj ,τj+1]  . (3.4.15) Using the boundary condition (3.4.15) and Theorem 3.2, we can solve the stochastic equation on the left-hand side of (3.4.10) by a Borel measurable function Fmj : k∏ i=1 Crap(R)×C ( [τj , τj+1],Crap(R) )×[0, 1] −→ k∏ i=1 C ([τj , τj+1],Crap(R)) such that the desired extension of Xm over [τj , τj+1] can be defined by(( Xm,1r ) r∈[τj ,τj+1] , · · · , ( Xm,kr ) r∈[τj ,τj+1] ) = Fmj ( Xm,1τj , · · · , Xm,kτj , (Xr)r∈[τj ,τj+1], UXj+1 ) , (3.4.16) where the independent uniform variable UXj+1 now plays its role to decom- pose (Xr)r∈[τj ,τj+1]. This proves our claim on the continuous extension of Xm,1, · · · , Xm,k over [τj , τj+1] satisfying (3.4.10). By induction and (3.4.16), the extension of Xm over [τj , τj+1] satisfies (3.4.5) with j replaced by j + 1; by induction and (3.4.11), it satisfies (3.4.7) with j replaced by j + 1. Let us verify that (3.4.6) is satisfied with j replaced by j+ 1. By (3.4.5), we can write P (( Xm,ir ) r∈[τj ,τj+1] ∈ Ai, ( Xm,ir ) r∈[si,τj ] ∈ Bi, ∀ i ∈ {1, · · · , k} ) =E [ P (( Xm,ir ) r∈[τj ,τj+1] ∈ Ai, ∀ i ∈ {1, · · · , k} ∣∣∣Fτj ∨ σ (UX1 , · · · , UXj )) ;( Xm,ir ) r∈[si,τj ] ∈ Bi,∀ i ∈ {1, · · · , k} ] . (3.4.17) To reduce the conditional probability on the right-hand side to one condi- tioned on Xm,1τj , · · · , Xm,kτj , we review the defining equation (3.4.16) of Xm 58 3.4. Continuous decompositions of approximating processes over [τj , τj+1] and consider the calculation: E [ g1 ( Xm,1τj , · · · , Xm,kτj ) g2 ( (Xr)r∈[τj ,τj+1] ) g3 ( UXj+1 ) ∣∣∣Fτj ∨ σ (UX1 , · · · , UXj )] =g1 ( Xm,1τj , · · · , Xm,kτj ) E [ g2 ( (Xr)r∈[τj ,τj+1] ) ∣∣∣Fτj ∨ σ (UX1 , · · · , UXj )]E [g3 (UXj+1)] =g1 ( Xm,1τj , · · · , Xm,kτj ) E [ g2 ( (Xr)r∈[τj ,τj+1] ) ∣∣∣Xm,1τj , · · · , Xm,kτj ]E [g3 (UXj+1)] =E [ g1 ( Xm,1τj , · · · , Xm,kτj ) g2 ( (Xr)r∈[τj ,τj+1] ) g3 ( UXj+1 ) ∣∣∣Xm,1τj , · · · , Xm,kτj ] (3.4.18) where the first equality follows again from (3.4.5) and the second equality follows by using the (Ft)-Markov property of X and considering the “sand- wich” of σ-fields: σ(Xτj ) ⊆ σ ( Xm,1τj , · · · , Xm,kτj ) ∨N ⊆ Fτj ∨ σ ( UX1 , · · · , UXj ) with N being the collection of P-null sets, and the last equality (3.4.18) follows since UXj+1 is not yet used in the construction of Xm up to time τj . Hence, by (3.4.18) and (3.4.16), we can continue our calculation in (3.4.17) as follows: P (( Xm,ir ) r∈[τj ,τj+1] ∈ Ai, ( Xm,ir ) r∈[si,τj ] ∈ Bi, ∀ i ∈ {1, · · · , k} ) =E [ P (( Xm,ir ) r∈[τj ,τj+1] ∈ Ai, ∀ i ∈ {1, · · · , k} ∣∣∣Xm,1τj , · · · , Xm,kτj ) ;( Xm,ir ) r∈[si,τj ] ∈ Bi, ∀ i ∈ {1, · · · , k} ] =E [ k∏ i=1 SBM Xm,iτj ,[τj ,τj+1] (Ai); ( Xm,ir ) r∈[si,τj ] ∈ Bi, ∀ i ∈ {1, · · · , k} ] , where the second equality follows from (3.4.12). By (3.4.6) and induction, the foregoing equality implies that (3.4.6) with j replaced by j+1 still holds. This completes our inductive construction for the case (3.4.8). We also need to consider the case complementary to (3.4.8) that [τj , τj+1] ⊆ (sk, sk+1] and τj+1 = sk+1 for some k. 
In this case, the construction of $X^{m,1},\cdots,X^{m,k}$ over the time interval $[\tau_j,\tau_{j+1}]$ is the same as before, but the extra coordinate $X^{m,k+1}$ is now defined to be $\psi(1)J^{x_{k+1}}_\varepsilon$ at time $\tau_{j+1}=s_{k+1}$. The properties (3.4.5)–(3.4.7) with $j$ replaced by $j+1$ remain true in this case. This completes our inductive construction of $\mathbf X^m$.

The construction of $\mathbf Y^m$ is very similar to that of $\mathbf X^m$. We use $\{U^Y_j\}$ to validate decompositions, and the points $\{t_i;i\in\mathbb N\}$ are now taken into consideration for the construction. We omit the other details.

From the constructions of $\mathbf X^m$ and $\mathbf Y^m$, (3.4.3), and the property (iii) in Theorem 3.1, we see that the following independent landing property is satisfied by $\mathbf X^m$ and $\mathbf Y^m$:
\[
\forall\,i\in\mathbb N,\quad\sigma\big(X^{m,j}_s,Y^{m,j}_s;\,s<s_i,\,j\in\mathbb N\big)\perp\!\!\!\perp x_i\quad\text{and}\quad\sigma\big(X^{m,j}_s,Y^{m,j}_s;\,s<t_i,\,j\in\mathbb N\big)\perp\!\!\!\perp y_i.\tag{3.4.19}
\]

(Step 2). We now define a filtration $(\mathscr G^{(m)}_t)$ with respect to which the processes $X^{m,i}$, $Y^{m,i}$, and $W$ perform their defining properties on the grid $D_m$. The filtration $(\mathscr G^{(m)}_t)$ is larger than $(\mathscr F_t)$ and is defined by
\[
\mathscr G^{(m)}_t=\mathscr F_t,\ t\in[0,\tau_0];\qquad\mathscr G^{(m)}_t=\mathscr F_{\tau_{j+1}}\vee\sigma\big(U^X_k,U^Y_k;\,1\le k\le j+1\big),\ t\in(\tau_j,\tau_{j+1}],\ j\in\mathbb Z_+.
\]
In particular, it follows from (3.4.5) and its analogue for $\mathbf Y^m$ that the processes $X^{m,i}$ and $Y^{m,i}$ are all $(\mathscr G^{(m)}_t)$-adapted. Also, it is obvious that $X$, $Y$, and $W(\phi)$ for any $\phi\in L^2(\mathbb R)$ are $(\mathscr G^{(m)}_t)$-adapted.

We now observe a key feature of $\mathbf X^m$:
\[
\mathbf P\big(X^{m,i}_t\in\Gamma\,\big|\,\mathscr G^{(m)}_{\tau_j}\big)={\rm SBM}_{t-\tau_j}\big(X^{m,i}_{\tau_j},\Gamma\big),\quad\forall\,t\in(\tau_j,\tau_{j+1}]\ \text{for }s_i\le\tau_j\text{ and }i\in\mathbb N,\tag{3.4.20}
\]
for any Borel measurable subset $\Gamma$ of the space of finite measures on $\mathbb R$. To see (3.4.20), we can consider a slight generalization of the proof of (3.4.18) by adding $\sigma(U^Y_1,\cdots,U^Y_j)$ to the $\sigma$-field $\mathscr F_{\tau_j}\vee\sigma(U^X_1,\cdots,U^X_j)$ in the first line therein and then apply (3.4.6) to obtain
\[
\mathbf P\big(X^{m,i}_t\in\Gamma\,\big|\,\mathscr G^{(m)}_{\tau_j}\big)=\mathbf P\big(X^{m,i}_t\in\Gamma\,\big|\,X^{m,1}_{\tau_j},X^{m,2}_{\tau_j},\cdots\big)=\mathbf P\big(X^{m,i}_t\in\Gamma\,\big|\,X^{m,i}_{\tau_j}\big)={\rm SBM}_{t-\tau_j}\big(X^{m,i}_{\tau_j},\Gamma\big),\quad\forall\,t\in(\tau_j,\tau_{j+1}].
\]
In particular, we deduce from iteration and the semigroup property of $\{{\rm SBM}_t\}$ that the following grid Markov property is satisfied:
\[
\mathbf P\big(X^{m,i}_t\in\Gamma\,\big|\,\mathscr G^{(m)}_{\tau_j}\big)={\rm SBM}_{t-\tau_j}\big(X^{m,i}_{\tau_j},\Gamma\big),\quad\forall\,t\in(\tau_k,\tau_{k+1}]\ \text{when }s_i\le\tau_j\le\tau_k.\tag{3.4.21}
\]
We note that the foregoing display does not say that $X^{m,i}$ is a $(\mathscr G^{(m)}_s)_{s\ge s_i}$-super-Brownian motion, because the $\sigma$-fields which we can use for verifying the $(\mathscr G^{(m)}_s)_{s\ge s_i}$-Markov property are only the $\mathscr G^{(m)}_{\tau_j}$, rather than arbitrary $\sigma$-fields $\mathscr G^{(m)}_s$. With a similar argument, we also have the grid Markov property of $Y^{m,i}$:
\[
\mathbf P\big(Y^{m,i}_t\in\Gamma\,\big|\,\mathscr G^{(m)}_{\tau_j}\big)={\rm SBM}_{t-\tau_j}\big(Y^{m,i}_{\tau_j},\Gamma\big),\quad\forall\,t\in(\tau_k,\tau_{k+1}]\ \text{when }t_i\le\tau_j\le\tau_k.\tag{3.4.22}
\]
With a much simpler argument, the space-time white noise $W$ has the same grid Markov property:
\[
\mathscr L\big(W_t(\phi)\,\big|\,\mathscr G^{(m)}_{\tau_j}\big)=\mathscr N\big(W_{\tau_j}(\phi),(t-\tau_j)\|\phi\|^2_{L^2(\mathbb R)}\big),\quad\forall\,t\in(\tau_k,\tau_{k+1}]\ \text{for }\tau_j\le\tau_k\text{ and }\phi\in L^2(\mathbb R),\tag{3.4.23}
\]
where $\mathscr N(\mu,\sigma^2)$ denotes the normal distribution with mean $\mu$ and variance $\sigma^2$.

(Step 3). To facilitate our argument in the next step, we digress to discuss a general property of space-time white noises. Let $W^1$ denote a space-time white noise, and suppose that $\{W^2(\phi_n)\}$ is a family of Brownian motions indexed by a countable dense subset $\{\phi_n\}$ of $L^2(\mathbb R)$ such that $\{W^1(\phi_n)\}$ and $\{W^2(\phi_n)\}$ have the same law as random elements taking values in $\prod_{i=1}^\infty C(\mathbb R_+,\mathbb R)$.
Then, whenever $(\phi_{n_k})$ is a subsequence converging to some $\phi\in L^2(\mathbb R)$, the linearity of $W^1$ gives
\[
\mathbf E\Big[\sup_{0\le s\le T}\big|W^2_s(\phi_{n_k})-W^2_s(\phi_{n_\ell})\big|^2\Big]=\mathbf E\Big[\sup_{0\le s\le T}\big|W^1_s(\phi_{n_k}-\phi_{n_\ell})\big|^2\Big]\le4T\|\phi_{n_k}-\phi_{n_\ell}\|^2_{L^2(\mathbb R)}\longrightarrow0\ \text{as }k,\ell\to\infty,\quad\forall\,T\in(0,\infty),\tag{3.4.24}
\]
where the inequality follows from Doob's $L^2$-inequality and the fact that, for any $\phi\in L^2(\mathbb R)$, $W^1(\phi)$ is a Brownian motion with $\mathscr L(W^1_1(\phi))=\mathscr N(0,\|\phi\|^2_{L^2(\mathbb R)})$. The convergence in (3.4.24) implies that, for some continuous process, say $W^2(\phi)$, we have
\[
W^2(\phi_{n_k})\longrightarrow W^2(\phi)\quad\text{uniformly on }[0,T]\ \text{a.s.},\ \forall\,T\in(0,\infty).
\]
The same holds with $W^2$ replaced by $W^1$. Hence, making comparisons with the reference space-time white noise $W^1$, we deduce that $\{W^2(\phi);\phi\in L^2(\mathbb R)\}$ is a space-time white noise and, in fact, is uniquely determined by $\{W^2(\phi_n)\}$.

(Step 4). In this step, we formalize the infinitesimal description outlined before by shrinking the mesh size of $D_m$, that is, by passing $m\to\infty$, and then work with the limits. To use our observation in (Step 3), we fix a choice of a countable dense subset $\{\phi_n\}$ of $L^2(\mathbb R)$.

We have constructed in (Step 1) the random elements $\mathbf X^m$ and $\mathbf Y^m$ and hence determined the laws
\[
\mathscr L\big(X,Y,W,(x_i),(y_i),\mathbf X^m,\mathbf Y^m\big),\quad m\in\mathbb N,\tag{3.4.25}
\]
as probability measures on a countably infinite product of Polish spaces. More precisely, our choice of the Polish spaces is through the following identifications. We identify $X$ as a random element taking values in the closed subset of $D(\mathbb R_+,C_{\rm rap}(\mathbb R))$ consisting of paths having continuity over each interval $[s_i,s_{i+1})$ for $i\in\mathbb Z_+$, with a similar identification applied to $Y$ (cf. Proposition 5.3 and Remark 5.4 of [12]). We identify each coordinate $X^{m,i}$ of $\mathbf X^m$ as a random element taking values in $C([s_i,\infty),C_{\rm rap}(\mathbb R))$, with a similar identification applied to $\mathbf Y^m$. By (Step 3), we identify $W$ with the infinite-dimensional vector $(W(\phi_1),W(\phi_2),\cdots)$ whose coordinates are $C(\mathbb R_+,\mathbb R)$-valued random elements. Finally, the identifications of the infinite-dimensional vectors $(x_i)$ and $(y_i)$ are plain.

We make an observation for the sequence of laws in (3.4.25). Note that $\mathscr L(\mathbf X^m)$ does not depend on $m$ because, by (3.4.6), its $i$-th coordinate $X^{m,i}$ is a super-Brownian motion with starting measure $\psi(1)J^{x_i}_\varepsilon$ started at $s_i$, and the coordinates are independent. Similarly, $\mathscr L(\mathbf Y^m)$ does not depend on $m$. This implies that the sequence of laws in (3.4.25) is tight in the space of probability measures on the aforementioned infinite product of Polish spaces. Hence, by taking a subsequence if necessary, we may assume that this sequence converges. By Skorokhod's representation, we may assume the existence of the vectors of random elements in the following display, as well as the almost-sure convergence therein:
\[
\big(\widetilde X^{(m)},\widetilde Y^{(m)},\{\widetilde x^m_i\},\{\widetilde y^m_i\},\widetilde W^m,\widetilde{\mathbf X}^m,\widetilde{\mathbf Y}^m\big)\xrightarrow[m\to\infty]{\ \text{a.s.}\ }\big(\widetilde X,\widetilde Y,\{\widetilde x_i\},\{\widetilde y_i\},\widetilde W,\widetilde{\mathscr X},\widetilde{\mathscr Y}\big).\tag{3.4.26}
\]
Here,
\[
\mathscr L\big(\widetilde X^{(m)},\widetilde Y^{(m)},\{\widetilde x^m_i\},\{\widetilde y^m_i\},\widetilde W^m,\widetilde{\mathbf X}^m,\widetilde{\mathbf Y}^m\big)=\mathscr L\big(X,Y,\{x_i\},\{y_i\},W,\mathbf X^m,\mathbf Y^m\big),\quad\forall\,m\in\mathbb N.
\]

(Step 5). We take $(\widetilde{\mathscr G}_t)$ to be the minimal filtration satisfying the usual conditions to which the limiting objects $\widetilde X$, $\widetilde Y$, $\widetilde W$, $\widetilde{\mathscr X}$, $\widetilde{\mathscr Y}$ on the right-hand side of (3.4.26) are adapted. We will complete the proof in this step by verifying that, with an obvious adaptation of notation, all the limiting objects on the right-hand side of (3.4.26), along with the filtration $(\widetilde{\mathscr G}_t)$, are the required objects satisfying all of the conditions (i)–(v) of Theorem 3.3.
First, let us verify the easier properties (i) and (ii) for $\{\widetilde X^i\}$ and $\{\widetilde Y^i\}$. The statement (i) and its analogue for $\{\widetilde Y^i\}$ obviously hold, by the analogous properties of $\widetilde{\mathbf X}^m$ and $\widetilde{\mathbf Y}^m$. (See (b) of (3.4.6).) To verify the statement (ii), we use the property (3.4.7) possessed by $(\widetilde X^{(m)},\widetilde{\mathbf X}^m)$ and then pass to the limit, as is legitimate because the infinite series in (3.4.7) are always finite sums. Similarly, the analogue of (ii) holds for $(\widetilde Y,\widetilde{\mathscr Y})$.

The statement (iii) holds by the property (a) of (3.4.6) satisfied by $\widetilde{\mathbf X}^m$, except that we still need to verify that each $\widetilde X^i$ defines a $(\widetilde{\mathscr G}_t)_{t\ge s_i}$-super-Brownian motion, not just a super-Brownian motion in itself. From this point on, our arguments will rely heavily on the continuity of the underlying objects and the fact that $\bigcup_mD_m$ is dense in $[\frac\varepsilon2,\infty)$.

Let $\frac\varepsilon2\le s<t<\infty$ with $s,t\in\bigcup_mD_m$. Then $s,t\in D_m$ from some large $m$ on, by the nesting property of the sequence $\{D_m\}$. For any bounded continuous function $g$ on the path space of $(\widetilde X^{(m)},\widetilde Y^{(m)},\widetilde W^m,\widetilde{\mathbf X}^m,\widetilde{\mathbf Y}^m)$ restricted to the time interval $[0,s]$, $\phi\in C^+_c(\mathbb R)$, and index $i$ such that $s_i\le s$, the grid Markov property (3.4.21) entails that
\[
\mathbf E\Big[g\big(\widetilde X^{(m)},\widetilde Y^{(m)},\widetilde W^m,\widetilde{\mathbf X}^m,\widetilde{\mathbf Y}^m\big)e^{-\langle\widetilde X^{(m),i}_t,\phi\rangle}\Big]=\mathbf E\Big[g\big(\widetilde X^{(m)},\widetilde Y^{(m)},\widetilde W^m,\widetilde{\mathbf X}^m,\widetilde{\mathbf Y}^m\big)\int{\rm SBM}_{t-s}\big(\widetilde X^{(m),i}_s,d\nu\big)e^{-\langle\nu,\phi\rangle}\Big].\tag{3.4.27}
\]
The formula for the Laplace transforms of super-Brownian motion shows that the map
\[
f\longmapsto\int{\rm SBM}_{t-s}(f,d\nu)\,e^{-\langle\nu,\phi\rangle}
\]
has a natural extension to $C_{\rm rap}(\mathbb R)$ which is continuous. (Cf. Proposition II.5.10 of [39].) Hence, passing $m\to\infty$ on both sides of (3.4.27) implies
\[
\mathbf E\Big[g\big(\widetilde X,\widetilde Y,\widetilde W,\widetilde{\mathscr X},\widetilde{\mathscr Y}\big)e^{-\langle\widetilde X^i_t,\phi\rangle}\Big]=\mathbf E\Big[g\big(\widetilde X,\widetilde Y,\widetilde W,\widetilde{\mathscr X},\widetilde{\mathscr Y}\big)\int{\rm SBM}_{t-s}\big(\widetilde X^i_s,d\nu\big)e^{-\langle\nu,\phi\rangle}\Big].\tag{3.4.28}
\]
By the continuity of super-Brownian motion and the denseness of $\bigcup_mD_m$ in $[\frac\varepsilon2,\infty)$, the foregoing display implies that each coordinate $\widetilde X^i$ is truly a $(\widetilde{\mathscr G}_t)_{t\ge s_i}$-super-Brownian motion. A similar argument shows that each $\widetilde Y^i$ is a $(\widetilde{\mathscr G}_t)_{t\ge t_i}$-super-Brownian motion. We have proved the statement (iii) and its analogue for $\widetilde Y^i$ in (iv).

Next, we consider the statement (v). By definition,
\[
\mathscr L\big(\widetilde X^{(m)},\widetilde Y^{(m)},\{\widetilde x^m_i\},\{\widetilde y^m_i\},\widetilde W^m\big)=\mathscr L\big(X,Y,\{x_i\},\{y_i\},W\big),\quad\forall\,m\in\mathbb N,
\]
and hence this stationarity gives
\[
\mathscr L\big(\widetilde X,\widetilde Y,\{\widetilde x_i\},\{\widetilde y_i\},\widetilde W\big)=\mathscr L\big(X,Y,\{x_i\},\{y_i\},W\big).\tag{3.4.29}
\]
Now, arguing as in the proof of (3.4.27) and using the grid Markov property (3.4.23) of $\widetilde W^m$ show that each $\widetilde W(\phi_n)$ is a $(\widetilde{\mathscr G}_t)$-Brownian motion with $\mathscr L(\widetilde W_1(\phi_n))=\mathscr N(0,\|\phi_n\|^2_{L^2(\mathbb R)})$. It follows from (3.4.29) and our discussion in (Step 3) that $\widetilde W$ extends uniquely to a $(\widetilde{\mathscr G}_t)$-space-time white noise. In addition, one more application of (3.4.29) shows that the defining equations (3.3.1) and (3.3.3) of $X$ and $Y$ by $\{(x_i,y_i)\}$ and $W$ carry over to the analogous equations for $\widetilde X$ and $\widetilde Y$ by $\{(\widetilde x_i,\widetilde y_i)\}$ and $\widetilde W$, respectively. This proves that $(\widetilde X,\widetilde Y,\widetilde W)$ satisfies the analogous property described in (i) and (ii) of Theorem 3.1 with $(\mathscr F_t)$ replaced by $(\widetilde{\mathscr G}_t)$. We have obtained the statement (v).

Finally, to obtain the independent landing property (3.4.2) in the statement (v), we recall that an analogous property is satisfied by $((\widetilde x^m_i),(\widetilde y^m_i),\widetilde{\mathbf X}^m,\widetilde{\mathbf Y}^m)$ in (3.4.19). Hence, arguing in the standard way as in the proof of (3.4.27), with the use of bounded continuous functions, shows that the required independent landing property (3.4.2) is satisfied by $((\widetilde x_i),(\widetilde y_i),\widetilde{\mathscr X},\widetilde{\mathscr Y})$. This verifies the statement (v) asserted in Theorem 3.3, and the proof is complete.
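As an illustration of the decomposition just constructed (our own numerical sketch, not part of the argument), the branching property behind the convolution identity used in (3.4.14) can already be seen at the level of total masses: the total mass of each cluster is a Feller diffusion $dZ=\sqrt Z\,dB$ (cf. Section 3.6.2 below), and the sum of independent clusters agrees in law with a single cluster carrying the combined initial mass. The parameter values below are arbitrary; the exact Laplace transform $\exp(-2\lambda z/(2+\lambda t))$ is the one recorded in (3.6.20) below.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def feller(z0, t, n_steps, n_paths):
    """Euler-Maruyama paths of the Feller diffusion dZ = sqrt(Z) dB,
    Z_0 = z0 (the total-mass process of a super-Brownian cluster)."""
    dt = t / n_steps
    z = np.full(n_paths, float(z0))
    for _ in range(n_steps):
        z += np.sqrt(z * dt) * rng.standard_normal(n_paths)
        np.maximum(z, 0.0, out=z)  # 0 is absorbing; clip Euler overshoots
    return z

t, lam = 1.0, 2.0
z1, z2 = 0.3, 0.5
# Sum of two independent clusters vs. a single cluster started from the
# combined mass: both should have Laplace transform exp(-2*lam*z/(2+lam*t)).
s = feller(z1, t, 400, 50_000) + feller(z2, t, 400, 50_000)
w = feller(z1 + z2, t, 400, 50_000)
print(np.exp(-lam * s).mean())                       # Monte Carlo, sum
print(np.exp(-lam * w).mean())                       # Monte Carlo, single
print(np.exp(-2 * lam * (z1 + z2) / (2 + lam * t)))  # exact, about 0.449
\end{verbatim}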
Remark 3.4. Observe that $\Delta X^i_{s_i}=\psi(1)J^{x_i}_\varepsilon$ and, by definition, $x_i$ is the center of the topological support of $J^{x_i}_\varepsilon$. Hence, we deduce that
\[
x_i\in\mathscr G_{s_i},\quad y_i\in\mathscr G_{t_i},\quad\forall\,i\in\mathbb N.\tag{3.4.30}
\]
By (iii) of Theorem 3.3, each $(X^i_t)_{t\in[s_i,\infty)}$ is a $(\mathscr G_t)_{t\ge s_i}$-super-Brownian motion and each $(Y^i_t)_{t\in[t_i,\infty)}$ is a $(\mathscr G_t)_{t\ge t_i}$-super-Brownian motion. Hence, by a straightforward generalization of the standard proof of "appending" Brownian motions to solutions of martingale problems, we can find, by enlarging the filtered probability space if necessary, two families of $(\mathscr G_t)$-white noises $\{W^{X^i}\}$ and $\{W^{Y^i}\}$ such that each of $(X^i,W^{X^i})$ and $(Y^i,W^{Y^i})$ solves the SPDE of super-Brownian motion with an appropriate translation of time. (See Theorem III.4.2 of [39] for details.) Moreover, by (i) of Theorem 3.3, we can further assume that each of the families $\{W^{X^i}\}$ and $\{W^{Y^i}\}$ consists of independent space-time white noises.

In the remainder of this section, we present a general discussion of covariations of two $(\mathscr G_t)$-space-time white noises, say $W^1$ and $W^2$. For such a pair, we can find a random locally bounded signed measure $\mu_{W^1,W^2}$ on $\mathscr B(\mathbb R^2\times\mathbb R_+)$, called the covariation of $W^1$ and $W^2$, such that
\[
\int_{[0,t]\times\mathbb R^2}\phi_1(x)\phi_2(y)\,d\mu_{W^1,W^2}(x,y,s)=\langle W^1(\phi_1),W^2(\phi_2)\rangle_t\quad\forall\,t\in\mathbb R_+\ \text{a.s.},\ \forall\,\phi_1,\phi_2\in L^2(\mathbb R).\tag{3.4.31}
\]
Here, a locally bounded signed measure $\mu$ is one such that $\mu(\,\cdot\,\cap K)$ is a bounded signed measure for any compact subset $K$. We will rely on the following analogue of the classical Kunita–Watanabe inequality to derive a simple, but important, property of covariations.

Proposition 3.5 (Kunita–Watanabe). Let $W^1$ and $W^2$ be two $(\mathscr G_t)$-space-time white noises. Then, except outside a null set, the inequality
\[
\int_{\mathbb R_+}\int_{\mathbb R^2}|J(x,y,s)K(x,y,s)|\,\big|d\mu_{W^1,W^2}(x,y,s)\big|\le\Big(\int_{\mathbb R_+}\int_{\mathbb R}J^2(x,x,s)\,dx\,ds\Big)^{1/2}\Big(\int_{\mathbb R_+}\int_{\mathbb R}K^2(x,x,s)\,dx\,ds\Big)^{1/2}\tag{3.4.32}
\]
holds for any pair of Borel measurable functions $J$ and $K$ on $\mathbb R^2\times\mathbb R_+$.

The proof of Proposition 3.5 is in the same spirit as the proof of the classical Kunita–Watanabe inequality for local martingales and thus is omitted here. (Cf. the proof of Proposition IV.1.15 of [44].) The inequality (3.4.32) determines in particular the "worst variation" of covariations, as is made precise in the inequality (3.4.33) below.

Corollary 3.6. Let $W^1,W^2$ be two $(\mathscr G_t)$-space-time white noises. Then, except outside a null event, the inequality
\[
\int_{\mathbb R_+\times\mathbb R^2}|K(x,y,s)|\,\big|d\mu_{W^1,W^2}(x,y,s)\big|\le\int_{\mathbb R_+}\int_{\mathbb R}|K(x,x,s)|\,dx\,ds\tag{3.4.33}
\]
holds for any Borel measurable function $K$ on $\mathbb R^2\times\mathbb R_+$. In particular, for any $i,j_1,\cdots,j_n\in\mathbb N$ for $n\in\mathbb N$ with $j_1<j_2<\cdots<j_n$, except outside a null event, the inequality
\[
\Big|\int_0^tH_s\,d\Big\langle X^i(1),\sum_{\ell=1}^nY^{j_\ell}(1)\Big\rangle_s\Big|\le\int_{s_i\vee t_{j_1}}^t|H_s|\int_{\mathbb R}\Big(X^i(x,s)\cdot\sum_{\ell=1}^nY^{j_\ell}(x,s)\Big)^{1/2}dx\,ds,\quad\forall\,t\in[s_i\vee t_{j_1},\infty),\tag{3.4.34}
\]
holds for any locally bounded Borel measurable function $H$ on $\mathbb R_+$. Here, $\{X^i\}$ and $\{Y^i\}$ are obtained from the continuous decompositions of $X$ and $Y$, respectively, in Theorem 3.3.

Proof. The first assertion follows by writing $|K|=|K|^{1/2}\cdot|K|^{1/2}$ and then using (3.4.32). To obtain the second assertion, we first write
\[
\sum_{\ell=1}^nY^{j_\ell}_t(\phi)=\int_0^t\sum_{\ell=1}^nY^{j_\ell}_s\Big(\frac\Delta2\phi\Big)ds+\sum_{\substack{\ell:1\le\ell\le n\\ t_{j_\ell}\le t}}\psi(1)J^{y_{j_\ell}}_\varepsilon(\phi)+\sum_{\ell=1}^n\int_0^t\int_{\mathbb R}Y^{j_\ell}(x,s)^{1/2}\phi(x)\,dW^{Y^{j_\ell}}(x,s).
\]
Recall that the space-time white noises $W^{Y^{j_1}},\cdots,W^{Y^{j_n}}$ are independent. Hence, by enlarging the filtered probability space if necessary, we may assume
the existence of a $(\mathscr G_t)$-space-time white noise $W^{Y^{j_1},\cdots,Y^{j_n}}$ such that
\[
\sum_{\ell=1}^n\int_0^t\int_{\mathbb R}Y^{j_\ell}(x,s)^{1/2}\phi(x)\,dW^{Y^{j_\ell}}(x,s)=\int_0^t\int_{\mathbb R}\Big(\sum_{\ell=1}^nY^{j_\ell}(x,s)\Big)^{1/2}\phi(x)\,dW^{Y^{j_1},\cdots,Y^{j_n}}(x,s).
\]
From the last two displays and the analogue of the first one for $X^i$, we obtain
\[
\Big\langle X^i(1),\sum_{\ell=1}^nY^{j_\ell}(1)\Big\rangle_t-\Big\langle X^i(1),\sum_{\ell=1}^nY^{j_\ell}(1)\Big\rangle_s=\int_s^t\int_{\mathbb R^2}X^i(x,r)^{1/2}\Big(\sum_{\ell=1}^nY^{j_\ell}(y,r)\Big)^{1/2}d\mu_{W^{X^i},W^{Y^{j_1},\cdots,Y^{j_n}}}(x,y,r),\quad\forall\,s,t\in[s_i\vee t_{j_1},\infty)\ \text{with }s<t.
\]
The second assertion now follows from the foregoing equality and (3.4.33). The proof is complete.

3.5 First look at conditional separation

3.5.1 Basic results

We consider using the processes $\{X^i;i\in\mathbb N\}$ and $\{Y^i;i\in\mathbb N\}$ obtained from the continuous decompositions of $X$ and $Y$ in Theorem 3.3 to show conditional separation of the approximating solutions. More precisely, for any $\varepsilon\in(0,[8\psi(1)]^{-1}\wedge1]$, we condition on the event that the total mass of a generic cluster $X^i$ hits 1, and then the conditional separation refers to the separation of the approximating solutions under
\[
\mathbf Q^i_\varepsilon(A)\equiv\mathbf P_\varepsilon\big(A\,\big|\,T^{X^i}_1<\infty\big).\tag{3.5.1}
\]
Here, the restriction $[8\psi(1)]^{-1}$ on $\varepsilon$ is just to make sure that $X^i(1)$ stays in $(0,1)$ initially, and we set
\[
T^H_x\triangleq\inf\{t\ge0;\,H_t(1)=x\}\tag{3.5.2}
\]
for any nonnegative two-parameter process $H=(H(x,t);(x,t)\in\mathbb R\times\mathbb R_+)$. Our specific goal is to study the differences in the local growth rates of the masses of $X$ and $Y$ over the "initial part" of the space-time support of $X^i$. In the following, we prove a few basic results concerning $\mathbf Q^i_\varepsilon$.

Let us first represent $\mathbf Q^i_\varepsilon$ via its Radon–Nikodym derivative process with respect to $\mathbf P_\varepsilon$. A standard calculation on scale functions of one-dimensional diffusions gives the following characterization of $\mathbf Q^i_\varepsilon$.

Lemma 3.7. For any $i\in\mathbb N$ and $\varepsilon\in(0,[8\psi(1)]^{-1}\wedge1]$,
\[
\mathbf P_\varepsilon\big(T^{X^i}_1<T^{X^i}_0\big)=\psi(1)\varepsilon,\tag{3.5.3}
\]
and
\[
\mathbf Q^i_\varepsilon(A)=\int_A\frac{X^i_t(1)^{T^{X^i}_1}}{\psi(1)\varepsilon}\,d\mathbf P_\varepsilon,\quad\forall\,A\in\mathscr G_t\ \text{with }t\in[s_i,\infty).\tag{3.5.4}
\]

Some basic properties of the total mass processes $X^i(1)$ and $Y^j(1)$ for $t_j>s_i$ under $\mathbf Q^i_\varepsilon$ are stated in the following lemma.

Lemma 3.8. Fix $i\in\mathbb N$ and $\varepsilon\in(0,[8\psi(1)]^{-1}\wedge1]$. Then we have the following.

(1) $X^i(1)^{T^{X^i}_1}$ under $\mathbf Q^i_\varepsilon$ is a copy of $\frac14{\rm BESQ}^4(4\psi(1)\varepsilon)$ started at $s_i$ and stopped upon hitting 1.

(2) For any $j\in\mathbb N$ with $t_j>s_i$, the process $(Y^j(1)_t)_{t\ge t_j}$ is a continuous $(\mathscr G_t)_{t\ge t_j}$-semimartingale under $\mathbf Q^i_\varepsilon$ with canonical decomposition
\[
Y^j_t(1)=\psi(1)\varepsilon+I^j_t+M^j_t,\quad t\in[t_j,\infty),\tag{3.5.5}
\]
where the finite variation process $I^j$ satisfies
\[
I^j_t=\int_{t_j}^t\frac1{X^i_s(1)^{T^{X^i}_1}}\,d\big\langle X^i(1)^{T^{X^i}_1},Y^j(1)\big\rangle_s,\tag{3.5.6}
\]
\[
\big|I^j_t\big|\le\int_{t_j}^t\mathbf 1_{[0,T^{X^i}_1]}(s)\,\frac1{X^i_s(1)}\int_{\mathbb R}X^i(x,s)^{1/2}Y^j(x,s)^{1/2}\,dx\,ds,\tag{3.5.7}
\]
for $t\in[t_j,\infty)$, and $M^j$ is a true $(\mathscr G_t)_{t\ge t_j}$-martingale under $\mathbf Q^i_\varepsilon$.

(3) For any $j\in\mathbb N$ with $t_j>s_i$,
\[
x_i,\ X^i(1)[s_i,t_j],\ y_j,\ \text{and}\ Y^j(1)[t_j,\infty)\ \text{are }\mathbf P_\varepsilon\text{-independent}.\tag{3.5.8}
\]

(4) For any $j\in\mathbb N$,
\[
\mathbf Q^i_\varepsilon(|y_j-x_i|\in dx)=\mathbf P_\varepsilon(|y_j-x_i|\in dx),\ x\in\mathbb R;\qquad\mathbf P_\varepsilon(y_j\in dx)\le\frac{\|\psi\|_\infty}{\psi(1)}\,dx,\ x\in\mathbb R.\tag{3.5.9}
\]

Proof. (1). The proof is omitted, since it is a straightforward application of Girsanov's theorem by using Lemma 3.7 and can be found in the proof of Lemma 4.1 of [30].

(2). The total mass process $(Y^j_t(1))_{t\ge t_j}$ for any $j\in\mathbb N$ with $t_j>s_i$ is a $(\mathscr G_t)_{t\ge t_j}$-Feller process and hence a $(\mathscr G_t)_{t\ge t_j}$-martingale. By Girsanov's theorem (cf. Theorem VIII.1.4 of [44]), $(Y^j_t(1))_{t\ge t_j}$ for any $j\in\mathbb N$ with $t_j>s_i$ is a continuous $(\mathscr G_t)_{t\ge t_j}$-semimartingale under $\mathbf Q^i_\varepsilon$ with canonical decomposition, say, given by (3.5.5).
Here, $(M^j_t)_{t\ge t_j}$ is a continuous $(\mathscr G_t)_{t\ge t_j}$-local martingale under $\mathbf Q^i_\varepsilon$ with quadratic variation process
\[
\langle M^j\rangle_t=\int_{t_j}^tY^j_s(1)\,ds,\quad t\in[t_j,\infty),\tag{3.5.10}
\]
and by Lemma 3.7 the finite variation process $(I^j_t)_{t\ge t_j}$ is given by (3.5.6). Applying (3.4.34) to (3.5.6), we obtain (3.5.7) at once.

For the martingale property of $M^j$ under $\mathbf Q^i_\varepsilon$, we note that the one-dimensional marginals of $Y^j(1)$ have $p$-th moments which are locally bounded on compacts, for any $p\in(0,\infty)$. ($Y^j(1)$ under $\mathbf P_\varepsilon$ is a Feller diffusion.) Applying this to (3.5.10) shows that $\mathbf E^{\mathbf Q^i_\varepsilon}[\langle M^j\rangle_t]<\infty$ for every $t\in[t_j,\infty)$, and hence $M^j$ is a true martingale under $\mathbf Q^i_\varepsilon$.

(3). The assertion (3.5.8) is an immediate consequence of the independent landing property (3.4.2) and the Markov properties of $X^i(1)$ and $Y^j(1)$ (cf. Theorem 3.3 (iii) and (iv)).

(4). We consider (3.5.9). Recall that $x_i\in\mathscr G_{s_i}$ and $y_j\in\mathscr G_{t_j}$. If $t_j>s_i$, then we obtain from (3.5.4) that
\[
\mathbf Q^i_\varepsilon(|y_j-x_i|\in dx)=\frac1{\psi(1)\varepsilon}\mathbf E^{\mathbf P_\varepsilon}\big[X^i_{t_j}(1)^{T^{X^i}_1};\,|y_j-x_i|\in dx\big]=\mathbf P_\varepsilon(|y_j-x_i|\in dx),\tag{3.5.11}
\]
where the last equality follows from (3.5.8). If $t_j<s_i$, then a similar argument applies (without using (3.5.4)), since $X^i_{s_i}(1)=\psi(1)\varepsilon$. Hence, the equality in (3.5.9) holds. The inequality in (3.5.9) is obvious. The proof is complete.

3.5.2 A non-rigorous proof for conditional separation

In this section, we consider a non-rigorous calculation to obtain conditional separation. We always consider the case that $t$ is close to $s_i$ from the right, without repeatedly mentioning this convention. We write $A<_aB$ if $A\le CB$ for some constant $C\in(0,\infty)$ which depends only on $\psi$ and may vary from line to line. Also, we suppress the arithmetic in the following arguments, as they are only a guide to Section 3.6.

We begin with the setup in [30] on which our calculation is based. We have seen in Lemma 3.8 (1) that $X^i(1)^{T^{X^i}_1}$ under $\mathbf Q^i_\varepsilon$ is a copy of $\frac14{\rm BESQ}^4(4\psi(1)\varepsilon)$ stopped upon hitting 1. As in the proof of Lemma 4.1 of [30], we can apply the lower escape rate of ${\rm BESQ}^4$, in the rough form
\[
(t-s_i)<_aX^i_t(1),\quad\forall\,\varepsilon\in(0,[8\psi(1)]^{-1}\wedge1].\tag{3.5.12}
\]
On the other hand, note that $X^i$ under $\mathbf P_\varepsilon$ is a true super-Brownian motion with starting measure $J^{x_i}_\varepsilon$, where $J^{x_i}_\varepsilon$ has spatial support contained in $[x_i-\varepsilon^{1/2},x_i+\varepsilon^{1/2}]$ by definition. As in [30], we use the modulus of continuity of the support process of super-Brownian motions (see Theorem III.1.3 of [39]) and envelope the space-time support of $X^i$ over $[s_i,t]$ roughly by
\[
\mathcal P^{X^i}_{1/2}(t)\triangleq\big\{(x,s)\in\mathbb R\times[s_i,t];\,|x-x_i|\le\varepsilon^{1/2}+(s-s_i)^{1/2}\big\}.
\]
We apply the analogous enveloping to the other clusters $Y^j$ by $\mathcal P^{Y^j}_{1/2}(t)$, for any $t_j\in(0,t]$. We continue to use all of these envelopes under $\mathbf Q^i_\varepsilon$ in the following (see Section 3.12 for a justification of this application).

Now, by the foregoing envelope for the support of $X^i$, (3.5.12) implies that
\[
(t-s_i)<_aX^i_t(1)=X^i_t\big([x_i-\varepsilon^{1/2}-(t-s_i)^{1/2},\,x_i+\varepsilon^{1/2}+(t-s_i)^{1/2}]\big)\le X_t\big([x_i-\varepsilon^{1/2}-(t-s_i)^{1/2},\,x_i+\varepsilon^{1/2}+(t-s_i)^{1/2}]\big).
\]
This suggests that we can show the solutions separate by bounding the quantity
\[
Y_t\big([x_i-\varepsilon^{1/2}-(t-s_i)^{1/2},\,x_i+\varepsilon^{1/2}+(t-s_i)^{1/2}]\big),\tag{3.5.13}
\]
or, more generally, the sum of the total masses of the $Y^j$ clusters which can invade the support of $X^i$ by time $t$. For this purpose, we ignore in particular the clusters $Y^j$ born before $X^i$, because essentially they do not contribute any mass in the support of $X^i$. (See Proposition 3.52, whose proof follows from that of Lemma 8.4 in [30].)
To contribute to (3.5.13), the space-time landing locations $(y_j,t_j)$ of the $Y^j$ invaders must fall in
\[
\mathcal R^{X^i}_{1/2}(t)\triangleq\big[x_i-2\big(\varepsilon^{1/2}+(t-s_i)^{1/2}\big),\,x_i+2\big(\varepsilon^{1/2}+(t-s_i)^{1/2}\big)\big]\times[s_i,t].
\]
(See Lemma 3.53.) For convenience, we write $\mathcal J^i_{1/2}(t)$ for the set of these labels $j$. We remark that, starting from (3.5.12), the elements considered so far have counterparts in [30].

From now on, we investigate the order of $\sum_{j\in\mathcal J^i_{1/2}(t)}Y^j_t(1)$ in $(t-s_i)$. In this direction, we use the canonical decomposition of $Y^j(1)$ under $\mathbf Q^i_\varepsilon$ in (3.5.5) and calculate the $\mathbf Q^i_\varepsilon$-expectation of $\sum_{j\in\mathcal J^i_{1/2}(t)}\big(\psi(1)\varepsilon+I^j_t\big)$. Hence, unlike the case in [30], we have to deal with the correlations between $X^i(1)$ and $Y^j(1)$ through $I^j_t$. We bound the finite variation terms $I^j_t$ as follows. First, applying the Cauchy–Schwarz inequality to the space integrals in (3.5.7), we get a simple bound where only the total masses of $X^i$ and $Y^j$ are involved:
\[
I^j_t\le\int_{t_j}^t\frac{[Y^j_s(1)]^{1/2}}{[X^i_s(1)]^{1/2}}\,ds.\tag{3.5.14}
\]
Under $\mathbf P_\varepsilon$, $Y^j(1)$ is a Feller diffusion with initial value $\psi(1)\varepsilon$. Hence, by the Dambis–Dubins–Schwarz theorem (cf. Theorem V.1.6 of [44]), the martingale part of $Y^j(1)$ under $\mathbf P_\varepsilon$ is a Brownian motion run on the time scale $\int_{t_j}^\cdot Y^j_r(1)\,dr$. Also, Lévy's theorem for the modulus of continuity of Brownian motions says that Brownian paths are, roughly speaking, pointwise Hölder-$\frac12$ continuous. Hence, putting things together, we see that
\[
|Y^j_s(1)-\psi(1)\varepsilon|<_a\Big(\int_{t_j}^sY^j_r(1)\,dr\Big)^{1/2},\quad s\in(t_j,t].
\]
If we iterate the foregoing inequality as in the proof of Gronwall's lemma (cf. [44] and Section 3.10), then an elementary argument gives the approximating inequality
\[
Y^j_s(1)\lessapprox C(s-t_j)\tag{3.5.15}
\]
for some constant $C$ depending only on $\psi(1)$. Here, we discard all those terms whose coefficients are powers of $\varepsilon$ in order to get the right-hand side of (3.5.15).

Using (3.5.12) and (3.5.15) as true inequalities in (3.5.14), we get
\[
I^j_t<_a\int_{t_j}^t\frac{(s-t_j)^{1/2}}{(s-s_i)^{1/2}}\,\mathbf 1_{[Y^j_s(1)>0]}\,ds.\tag{3.5.16}
\]
For any $t_j\in(s_i,t]$, we have
\[
\begin{aligned}
\mathbf E^{\mathbf Q^i_\varepsilon}\big[\psi(1)\varepsilon+I^j_t;\,j\in\mathcal J^i_{1/2}(t)\big]&=\mathbf E^{\mathbf Q^i_\varepsilon}\big[\psi(1)\varepsilon+I^j_t;\,|y_j-x_i|\le2\big(\varepsilon^{1/2}+(t-s_i)^{1/2}\big)\big]\\
&<_a(t-s_i)^{1/2}\varepsilon+\int_{t_j}^t\frac{(s-t_j)^{1/2}}{(s-s_i)^{1/2}}\,\mathbf Q^i_\varepsilon\big(Y^j_s(1)>0,\,|y_j-x_i|\le2\big(\varepsilon^{1/2}+(t-s_i)^{1/2}\big)\big)\,ds,
\end{aligned}\tag{3.5.17}
\]
where the last $<_a$-inequality follows from (3.5.9), the fact that $t_j\ge s_i+\frac\varepsilon2$, and (3.5.16).

We now estimate the $\mathbf Q^i_\varepsilon$-probability in (3.5.17). Fix $t_j\in(s_i,t]$ and $s\in(t_j,t]$. Then, by Lemma 3.7, we can write
\[
\begin{aligned}
&\mathbf Q^i_\varepsilon\big(Y^j_s(1)>0,\,|y_j-x_i|\le2\big(\varepsilon^{1/2}+(t-s_i)^{1/2}\big)\big)\\
&=\frac1{\psi(1)\varepsilon}\mathbf E^{\mathbf P_\varepsilon}\big[X^i_s(1)^{T^{X^i}_1};\,Y^j_s(1)>0,\,|y_j-x_i|\le2\big(\varepsilon^{1/2}+(t-s_i)^{1/2}\big)\big]\\
&=\frac1{\psi(1)\varepsilon}\mathbf E^{\mathbf P_\varepsilon}\big[X^i_s(1)^{T^{X^i}_1};\,Y^j_s(1)>0,\,|y_j-x_i|\le2\big(\varepsilon^{1/2}+(t-s_i)^{1/2}\big),\,T^{X^i}_0\le t_j\big]\\
&\quad+\frac1{\psi(1)\varepsilon}\mathbf E^{\mathbf P_\varepsilon}\big[X^i_s(1)^{T^{X^i}_1};\,Y^j_s(1)>0,\,|y_j-x_i|\le2\big(\varepsilon^{1/2}+(t-s_i)^{1/2}\big),\,T^{X^i}_0>t_j\big].
\end{aligned}\tag{3.5.18}
\]
For the first term on the right-hand side of (3.5.18), we note that when $T^{X^i}_0\le t_j$ and $X^i_s(1)^{T^{X^i}_1}>0$, we must have $T^{X^i}_1<T^{X^i}_0\le t_j<s$ and hence $X^i_s(1)^{T^{X^i}_1}=1$ (0 is an absorbing state of $X^i(1)$ under $\mathbf P_\varepsilon$). Using Lemma 3.8 (3), we see that
\[
\begin{aligned}
&\frac1{\psi(1)\varepsilon}\mathbf E^{\mathbf P_\varepsilon}\big[X^i_s(1)^{T^{X^i}_1};\,Y^j_s(1)>0,\,|y_j-x_i|\le2\big(\varepsilon^{1/2}+(t-s_i)^{1/2}\big),\,T^{X^i}_0\le t_j\big]\\
&\le\frac1{\psi(1)\varepsilon}\,\mathbf P_\varepsilon\big(T^{X^i}_1<T^{X^i}_0\le t_j\big)\,\mathbf P_\varepsilon\big(|y_j-x_i|\le2\big(\varepsilon^{1/2}+(t-s_i)^{1/2}\big)\big)\,\mathbf P_\varepsilon\big(Y^j_s(1)>0\big)\\
&<_a\frac1{\psi(1)\varepsilon}\cdot\psi(1)\varepsilon\cdot(t-s_i)^{1/2}\cdot\frac{\psi(1)\varepsilon}{s-t_j}
\end{aligned}\tag{3.5.19}
\]
\[
<_a(t-s_i)^{1/2}(s-t_j)^{-1}\cdot\varepsilon.\tag{3.5.20}
\]
Here, in (3.5.19), the second factor follows from Lemma 3.7, the third factor follows from Lemma 3.8 (4), and the fourth factor follows since $Y^j(1)$ under $\mathbf P_\varepsilon$ is a Feller diffusion (cf. Section 3.6.2 below).

For the second term on the right-hand side of (3.5.18), we can use a sharper version of (3.5.15) for $X^i(1)$ to obtain the following approximating inequality:
\[
|X^i_s(1)-\psi(1)\varepsilon|\lessapprox C(s-s_i),\quad s\in(s_i,t],
\]
for some constant $C$ depending only on $\psi$, since $X^i(1)$ is again a Feller diffusion under $\mathbf P_\varepsilon$ (cf. Section 3.10). From the above approximating inequality, we get
\[
\begin{aligned}
&\frac1{\psi(1)\varepsilon}\mathbf E^{\mathbf P_\varepsilon}\big[X^i_s(1)^{T^{X^i}_1};\,Y^j_s(1)>0,\,|y_j-x_i|\le2\big(\varepsilon^{1/2}+(t-s_i)^{1/2}\big),\,T^{X^i}_0>t_j\big]\\
&<_a\frac1{\psi(1)\varepsilon}\mathbf E^{\mathbf P_\varepsilon}\big[\big|X^i_s(1)^{T^{X^i}_1}-\psi(1)\varepsilon\big|;\,Y^j_s(1)>0,\,|y_j-x_i|\le2\big(\varepsilon^{1/2}+(t-s_i)^{1/2}\big),\,T^{X^i}_0>t_j\big]\\
&\quad+\frac{\psi(1)\varepsilon}{\psi(1)\varepsilon}\,\mathbf P_\varepsilon\big(Y^j_s(1)>0,\,|y_j-x_i|\le2\big(\varepsilon^{1/2}+(t-s_i)^{1/2}\big),\,T^{X^i}_0>t_j\big)\\
&<_a\frac1{\psi(1)\varepsilon}(s-s_i)\cdot\frac{\psi(1)\varepsilon}{s-t_j}\cdot(t-s_i)^{1/2}\cdot\frac{\psi(1)\varepsilon}{t_j-s_i}+\frac{\psi(1)\varepsilon}{s-t_j}\cdot(t-s_i)^{1/2}\cdot\frac{\psi(1)\varepsilon}{t_j-s_i}\\
&<_a(t-s_i)^{1/2}(t_j-s_i)^{-1}(s-s_i)(s-t_j)^{-1}\cdot\varepsilon+(t-s_i)^{1/2}(t_j-s_i)^{-1}(s-t_j)^{-1}\cdot\varepsilon^2,
\end{aligned}\tag{3.5.21}
\]
where the next-to-last $<_a$-inequality follows for reasons similar to those for (3.5.19).

We now apply (3.5.20) and (3.5.21) to (3.5.18), but discard the second term $(t-s_i)^{1/2}(t_j-s_i)^{-1}(s-t_j)^{-1}\varepsilon^2$ of (3.5.21), as it is of order $\varepsilon^2$. Applying the result to (3.5.17), we get
\[
\begin{aligned}
\mathbf E^{\mathbf Q^i_\varepsilon}\Big[\sum_{j\in\mathcal J^i_{1/2}(t)}\big(\psi(1)\varepsilon+I^j_t\big)\Big]&<_a(t-s_i)^{1/2}\sum_{j:s_i<t_j\le t}\varepsilon+(t-s_i)^{1/2}\sum_{j:s_i<t_j\le t}\int_{t_j}^t(s-s_i)^{-1/2}(s-t_j)^{-1/2}\,ds\cdot\varepsilon\\
&\quad+(t-s_i)^{1/2}\sum_{j:s_i<t_j\le t}(t_j-s_i)^{-1}\int_{t_j}^t(s-s_i)^{1/2}(s-t_j)^{-1/2}\,ds\cdot\varepsilon,
\end{aligned}\tag{3.5.22}
\]
where on the right-hand side we see Riemann sums of integrals of the form
\[
\int_{s_i}^tdr\,(r-s_i)^a\int_r^tds\,(s-s_i)^b(s-r)^c.
\]
We now use the formal calculus
\[
\int_{s_i}^s(r-s_i)^{-1}(s-r)^{-1/2}\,dr=(s-s_i)^{-1/2}
\]
in the Riemann-sum approximation of the third sum in (3.5.22). Then (3.5.22) implies the approximating inequality
\[
\mathbf E^{\mathbf Q^i_\varepsilon}\Big[\sum_{j\in\mathcal J^i_{1/2}(t)}\big(\psi(1)\varepsilon+I^j_t\big)\Big]<_a(t-s_i)^{3/2}\quad\text{as }\varepsilon\downarrow0.
\]
This is in contrast to (3.5.12) whenever $\varepsilon$ is small. Hence, we expect that the approximating solutions $X$ and $Y$ do separate under $\mathbf Q^i_\varepsilon$. Since some cluster $X^i$ will hit 1 with reasonable $\mathbf P_\varepsilon$-probability, a relatively elementary inclusion–exclusion argument in Section 3.7 will show that this suffices.

3.6 Conditional separation of approximating solutions

3.6.1 Setup

In order to state precisely our quantifications of the local growth rates of $X$ and $Y$, we need several preliminary results which have similar counterparts in [30]. First, we choose in Proposition 3.9 below a $(\mathscr G_t)$-stopping time $\tau^i$ satisfying $\tau^i>s_i$, so that within $[s_i,\tau^i]$ we can explicitly bound from below the growth rate of $X^i(1)$. Since $X\ge X^i$, this gives a lower bound for the local growth rate of $X$ over the initial part of the space-time support of $X^i$. Our objective is to study the local growth rate of $Y$ within the initial part of this support. See Section 3.8 for the proof of the following proposition.

Proposition 3.9. For any $\varepsilon\in(0,[8\psi(1)]^{-1}\wedge1]$, parameter vector $(\eta,\alpha,L)\in(1,\infty)\times(0,\frac12)\times(0,\infty)$, and $i\in\mathbb N$, we define four $(\mathscr G_t)$-stopping times by
\[
\begin{aligned}
\tau^{i,(1)}&\triangleq\inf\Big\{t\ge s_i;\,X^i_t(1)^{T^{X^i}_1}<\frac{(t-s_i)^\eta}4\Big\}\wedge T^{X^i}_1,\\
\tau^{i,(2)}&\triangleq\inf\Big\{t\ge s_i;\,\Big|X^i_t(1)^{T^{X^i}_1}-\psi(1)\varepsilon-(t-s_i)\Big|>L\Big(\int_{s_i}^tX^i_s(1)^{T^{X^i}_1}ds\Big)^\alpha\Big\}\wedge T^{X^i}_1,\\
\tau^{i,(3)}&\triangleq\inf\Big\{t\ge s_i;\,\sum_{j:s_i<t_j\le t}Y^j_t(1)>1\Big\},\qquad\tau^i\triangleq\tau^{i,(1)}\wedge\tau^{i,(2)}\wedge\tau^{i,(3)}\wedge(s_i+1).
\end{aligned}
\]
Then for every $\rho>0$ there exists $\delta>0$ such that
\[
\sup\Big\{\mathbf Q^i_\varepsilon(\tau^i\le s_i+\delta);\,i\in\mathbb N,\ \varepsilon\in\Big(0,\frac1{8\psi(1)}\wedge1\Big]\Big\}\le\rho.\tag{3.6.1}
\]
Let us explain the meanings of the parameters $\eta,\alpha,L$ in this proposition. Since $X^i(1)$ is a Feller diffusion under $\mathbf P_\varepsilon$, a straightforward application of Girsanov's theorem (cf. Theorem VIII.1.4 of [44]) shows that $X^i(1)^{T^{X^i}_1}$ under $\mathbf Q^i_\varepsilon$ is a $\frac14{\rm BESQ}^4(4\psi(1)\varepsilon)$ stopped upon hitting 1; see Lemma 4.1 of [30] for details. As a result, in view of the lower escape rate of ${\rm BESQ}^4$ (cf. (3.8.2)) applied to $X^i(1)^{T^{X^i}_1}$ under $\mathbf Q^i_\varepsilon$, we will take the parameter $\eta$ in the definition of $\tau^{i,(1)}$ close to 1. In addition, we will take the parameter $\alpha$ in the definition of $\tau^{i,(2)}$ close to $\frac12$ by considering the local Hölder exponent of the martingale part of ${\rm BESQ}^4$ in terms of its quadratic variation. The parameter $L$ bounds the associated local Hölder coefficient.

To use the support of $X^i$ within which we observe the local growth rate of $Y$, we take a parameter $\beta\in(0,\frac12)$, which is now close to $\frac12$. We use this parameter to get a better control of the supports of $X^i$ and $Y^j$, and this means we use the parabola
\[
\mathcal P^{X^i}_\beta(t)\triangleq\big\{(x,s)\in\mathbb R\times[s_i,t];\,|x-x_i|\le\varepsilon^{1/2}+(s-s_i)^\beta\big\}\tag{3.6.2}
\]
to envelope the space-time support of $X^i[s_i,t]$, for $t\in(s_i,\infty)$, with a similar practice applied to the other clusters $Y^j$. (See the speed of support propagation of super-Brownian motions in Theorem III.1.3 of [39].) More precisely, we can use the $(\mathscr G_t)$-stopping time
\[
\sigma^{X^i}_\beta\triangleq\inf\big\{s\ge s_i;\,{\rm supp}(X^i_s)\nsubseteq\big[x_i-\varepsilon^{1/2}-(s-s_i)^\beta,\,x_i+\varepsilon^{1/2}+(s-s_i)^\beta\big]\big\}\tag{3.6.3}
\]
as well as the analogous stopping times $\sigma^{Y^j}_\beta$ for $Y^j$ to identify the duration of the foregoing enveloping.

We now specify the clusters $Y^j$ taken for computing the local growth rate of $Y$. Suppose that, at time $t$ with $t>s_i$, we can still envelope the support of $X^i$ by $\mathcal P^{X^i}_\beta(t)$, and the analogous enveloping for the support of $Y^j$ holds for any $j\in\mathbb N$ satisfying $t_j\in(s_i,t]$. Informally, we ignore the clusters $Y^j$ born before $X^i$, because the probability that they can invade the initial part of the support of $X^i$ is small for small $t$ (cf. Lemma 3.53). Under such circumstances, simple geometric arguments show that only the $Y^j$ clusters born inside the space-time rectangle
\[
\mathcal R^{X^i}_\beta(t)\triangleq\big[x_i-2\big(\varepsilon^{1/2}+(t-s_i)^\beta\big),\,x_i+2\big(\varepsilon^{1/2}+(t-s_i)^\beta\big)\big]\times[s_i,t]\tag{3.6.4}
\]
can invade the initial part of the support of $X^i$ by time $t$ (see Lemma 3.53, and the triangle-inequality sketch following (3.6.6) below). We remark that this choice of clusters $Y^j$ with $(y_j,t_j)\in\mathcal R^{X^i}_\beta(t)$ is also used in [30]. For technical reasons (cf. Section 3.6.4 below), however, we will consider the super-Brownian motions $Y^j$ born inside the slightly larger rectangle $\mathcal R^{X^i}_{\beta'}(t)$ for $t\in(s_i,s_i+1]$, where $\beta'$ is another value close to $\frac12$, has the same meaning as $\beta$, and satisfies $\beta'<\beta$. See Figure 3.2 for these rectangles, as well as an example of three parabolas $\mathcal P^{X^i}_\beta(t)$, $\mathcal P^{Y^j}_\beta(t)$, and $\mathcal P^{Y^k}_\beta(t)$ where $(y_j,t_j)\in\mathcal R^{X^i}_\beta(t)$ and $(y_k,t_k)\notin\mathcal R^{X^i}_\beta(t)$.

[Figure 3.2: Parabolas $\mathcal P^{X^i}_\beta(t)$, $\mathcal P^{Y^j}_\beta(t)$, $\mathcal P^{Y^k}_\beta(t)$ and rectangles $\mathcal R^{X^i}_\beta(t)$ and $\mathcal R^{X^i}_{\beta'}(t)$, for $0<\beta'<\beta$ and $t\in[s_i,s_i+1)$.]

The labels $j\in\mathbb N$ of the clusters $Y^j$ born inside $\mathcal R^{X^i}_{\beta'}(t)$ constitute the random index set
\[
\mathcal J^i_{\beta'}(t)\equiv\mathcal J^i_{\beta'}(t,t),\tag{3.6.5}
\]
where
\[
\mathcal J^i_{\beta'}(t,t')\triangleq\big\{j\in\mathbb N;\,|y_j-x_i|\le2\big(\varepsilon^{1/2}+(t-s_i)^{\beta'}\big),\ s_i<t_j\le t'\big\},\quad\forall\,t,t'\in(s_i,\infty).\tag{3.6.6}
\]
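For the reader's convenience, here is the simple geometric argument alluded to above, written out under the stated envelope assumption; this computation is our own gloss and replaces no part of Lemma 3.53. If a point $(x,s)$ lies in both $\mathcal P^{X^i}_\beta(t)$ and $\mathcal P^{Y^j}_\beta(t)$ for some $s\in[t_j,t]$ with $t_j\in(s_i,t]$, then the triangle inequality gives
\[
|y_j-x_i|\le|y_j-x|+|x-x_i|\le\big(\varepsilon^{1/2}+(s-t_j)^\beta\big)+\big(\varepsilon^{1/2}+(s-s_i)^\beta\big)\le2\big(\varepsilon^{1/2}+(t-s_i)^\beta\big),
\]
so any cluster $Y^j$ whose envelope meets that of $X^i$ by time $t$ must indeed satisfy $(y_j,t_j)\in\mathcal R^{X^i}_\beta(t)$.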
We now introduce the last two parameters $\xi$ and $N_0$, which are used to describe the improved modulus of continuity of $X^i(1)$ and $Y^j(1)$. Since $X^i(1)^{T^{X^i}_1}$ is bounded by 1 and satisfies the integral inequality
\[
\Big|X^i_t(1)^{T^{X^i}_1}-\psi(1)\varepsilon\Big|\le(t-s_i)+L\Big(\int_{s_i}^tX^i_s(1)^{T^{X^i}_1}ds\Big)^\alpha\quad\forall\,t\in[s_i,\tau^i],\ \mathbf Q^i_\varepsilon\text{-a.s.},\ \forall\,i\in\mathbb N,\ \varepsilon\in(0,[8\psi(1)]^{-1}\wedge1],\tag{3.6.7}
\]
by the choice of $\tau^{i,(2)}$ in Proposition 3.9, we can iterate (3.6.7) as in the classical proof of Gronwall's lemma and get a finite "power-series" upper bound on $\big|X^i_t(1)^{T^{X^i}_1}-\psi(1)\varepsilon\big|$ in $(t-s_i)$. Precisely, a routine, though a bit tedious, inductive argument (cf. Corollary 3.43, and recall $\alpha\in(0,\frac12)$) shows that, whenever $\xi\in(0,1)$ and $N_0\in\mathbb N$ satisfy
\[
\sum_{j=1}^{N_0}\alpha^j\le\xi<\sum_{j=1}^{N_0+1}\alpha^j,\tag{3.6.8}
\]
we have
\[
\Big|X^i_t(1)^{T^{X^i}_1}-\psi(1)\varepsilon\Big|\le K^X_1[\psi(1)\varepsilon]^{\alpha^{N_0}}(t-s_i)^\alpha+K^X_2(t-s_i)^\xi\quad\forall\,t\in[s_i,\tau^i]\ \mathbf Q^i_\varepsilon\text{-a.s.},\ \forall\,i\in\mathbb N,\ \varepsilon\in(0,[8\psi(1)]^{-1}\wedge1],\tag{3.6.9}
\]
where the constants $K^X_1,K^X_2\ge1$ depend only on $(\alpha,L,\xi,N_0)$. In particular, since $\alpha$ is close to $\frac12$, we can choose $N_0$ large in (3.6.8) to make $\xi$ close to 1, as is our intention in the sequel. Informally, we can then view the foregoing inequality as saying that $t\mapsto X^i_t(1)^{T^{X^i}_1}$ is Hölder-1 continuous at $s_i$ from the right. A similar derivation of the improved modulus of continuity of $Y^j(1)$ will appear in the proof of Lemma 3.17 below.

Assumption 1 (Choice of auxiliary parameters). Throughout the remainder of this section and Section 3.7, we fix a parameter vector
\[
(\eta,\alpha,L,\beta,\beta',\xi,N_0)\in(1,\infty)\times\Big(0,\frac12\Big)\times(0,\infty)\times\Big[\frac13,\frac12\Big)\times\Big[\frac13,\frac12\Big)\times(0,1)\times\mathbb N\tag{3.6.10}
\]
satisfying
\[
\text{(a) }\sum_{j=1}^{N_0}\alpha^j\le\xi<\sum_{j=1}^{N_0+1}\alpha^j,\qquad\text{(b) }\alpha<\frac{\beta'}\beta<1,\qquad\text{(c) }\beta'-\frac\eta2+\frac32\alpha>0,\qquad\text{(d) }(\beta'+1)\wedge\Big(\beta'-\frac\eta2+\frac{3\xi}2\Big)>\eta.\tag{3.6.11}
\]
(Note that we restate (3.6.8) in (a).) We insist that this parameter vector is chosen to be independent of $i\in\mathbb N$ and $\varepsilon\in(0,[8\psi(1)]^{-1}\wedge1]$. For example, we can choose these parameters in the following order: first choose $\eta,\alpha,\beta',\xi$ according to (c) and (d), choose $\beta$ according to (b), and finally choose $N_0$ according to (a) by enlarging $\xi$ if necessary; the parameter $L$, however, can be chosen arbitrarily. A concrete admissible choice is checked numerically below.

The following theorem gives our quantification of the local growth rates of $Y$ under $\mathbf Q^i_\varepsilon$.

Theorem 3.10. Under Assumption 1, set three strictly positive constants by
\[
\kappa_1=(\beta'+1)\wedge\Big(\beta'-\frac\eta2+\frac{3\xi}2\Big),\qquad\kappa_2=\frac{\alpha^{N_0}}4,\qquad\kappa_3=\beta'-\frac\eta2+\frac{3\alpha}2.\tag{3.6.12}
\]
Then there exists a constant $K^*\in(0,\infty)$, depending only on the parameter vector in (3.6.10) and the immigration function $\psi$, such that for any $\delta\in(0,\kappa_1\wedge\kappa_3)$ the following uniform bound holds:
\[
\begin{aligned}
&\mathbf Q^i_\varepsilon\Bigg(\exists\,s\in(s_i,t],\ \sum_{j\in\mathcal J^i_{\beta'}(s\wedge\tau^i\wedge\sigma^{X^i}_\beta)}Y^j_s(1)^{\tau^i\wedge\sigma^{X^i}_\beta\wedge\sigma^{Y^j}_\beta}>K^*\big[(s-s_i)^{\kappa_1-\delta}+\varepsilon^{\kappa_2}\cdot(s-s_i)^{\kappa_3-\delta}\big]\Bigg)\le\frac{2\cdot2^{\kappa_1\vee\kappa_3}}{2^{(N+1)\delta}(1-2^{-\delta})},\\
&\hspace{2.5cm}\forall\,t\in\big[s_i+2^{-(N+1)},\,s_i+2^{-N}\big],\ N\in\mathbb Z_+,\ i\in\mathbb N,\ \varepsilon\in\Big(0,\frac1{8\psi(1)}\wedge1\Big],
\end{aligned}\tag{3.6.13}
\]
where the $(\mathscr G_t)$-stopping times $\tau^i$ are defined in Proposition 3.9.

Remark 3.11. If we follow the aforementioned interpretation of the parameter vector in (3.6.10), namely that $(\eta,\beta',\xi)$ is close to $(1,\frac12,1)$, then $\kappa_1$ in (3.6.12) is close to $\frac32$. Informally, if we regard the stopping times $\tau^i$, $\sigma^{X^i}_\beta$, and $\sigma^{Y^j}_\beta$ as being bounded away from $s_i$, then, in view of the above reason for choosing the random index sets $\mathcal J^i_{\beta'}(\cdot)$ in (3.6.5), we can regard Theorem 3.10 as a formalization of the statement in (3.1.4).
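Because the constraints in (3.6.11) interact, it may help to see one explicit admissible vector. The following check is our own illustration; the numerical values are an arbitrary admissible example, not a choice made in the text. It verifies (a)–(d) and evaluates the exponents (3.6.12).

\begin{verbatim}
# Check one candidate parameter vector against (3.6.11); the values are
# an arbitrary admissible example, not a choice made in the text.
eta, alpha, beta, beta_p, xi = 1.01, 0.49, 0.48, 0.45, 0.95

# (a): find N0 with sum_{j<=N0} alpha^j <= xi < sum_{j<=N0+1} alpha^j
partial, N0 = 0.0, 0
while partial + alpha ** (N0 + 1) <= xi:
    N0 += 1
    partial += alpha ** N0
assert partial <= xi < partial + alpha ** (N0 + 1)          # (a); N0 == 6
assert alpha < beta_p / beta < 1                            # (b)
assert beta_p - eta / 2 + 1.5 * alpha > 0                   # (c)
assert min(beta_p + 1, beta_p - eta / 2 + 1.5 * xi) > eta   # (d)

kappa1 = min(beta_p + 1, beta_p - eta / 2 + 1.5 * xi)       # (3.6.12)
kappa2 = alpha ** N0 / 4
kappa3 = beta_p - eta / 2 + 1.5 * alpha
print(N0, kappa1, kappa2, kappa3)   # kappa1 ~ 1.37, close to 3/2
\end{verbatim}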
In fact, the proof of Theorem 3.10 reduces to a study of a nonnegative $(\mathscr G_t)_{t\ge s_i}$-submartingale dominating the process
\[
\sum_{j\in\mathcal J^i_{\beta'}(t\wedge\tau^i\wedge\sigma^{X^i}_\beta)}Y^j_t(1)^{\tau^i\wedge\sigma^{X^i}_\beta\wedge\sigma^{Y^j}_\beta},\quad t\in[s_i,\infty),\tag{3.6.14}
\]
involved in (3.6.13). We proceed as follows. We observe that, by Lemma 3.8 (2), the process in (3.6.14) is dominated by the nonnegative process
\[
\sum_{j\in\mathcal J^i_{\beta'}(t,\,t\wedge\tau^i\wedge\sigma^{X^i}_\beta)}\Bigg(\psi(1)\varepsilon+\int_{t_j}^{t\wedge\tau^i\wedge\sigma^{X^i}_\beta\wedge\sigma^{Y^j}_\beta}\frac1{X^i_s(1)}\int_{\mathbb R}X^i(x,s)^{1/2}Y^j(x,s)^{1/2}\,dx\,ds+M^j_{t\wedge\tau^i\wedge\sigma^{X^i}_\beta\wedge\sigma^{Y^j}_\beta}\Bigg),\quad t\in[s_i,\infty),\tag{3.6.15}
\]
under $\mathbf Q^i_\varepsilon$ for any $i\in\mathbb N$ and $\varepsilon\in(0,[8\psi(1)]^{-1}\wedge1]$. The process in (3.6.15) is in fact a nonnegative $(\mathscr G_t)_{t\ge s_i}$-submartingale under $\mathbf Q^i_\varepsilon$, since for any $j\in\mathbb N$ with $s_i<t_j$, $j\in\mathcal J^i_{\beta'}(t,t\wedge\tau^i\wedge\sigma^{X^i}_\beta)$ if and only if the following $\mathscr G_{t_j}$-event occurs:
\[
\big[|y_j-x_i|\le2\big(\varepsilon^{1/2}+(t-s_i)^{\beta'}\big)\ \text{and}\ t_j\le t\wedge\tau^i\wedge\sigma^{X^i}_\beta\big].
\]
(Recall Remark 3.4 for $y_j\in\mathscr G_{t_j}$ and $x_i\in\mathscr G_{s_i}$.) It suffices to prove the bound (3.6.13) of Theorem 3.10 with the process in (3.6.14) replaced by the nonnegative submartingale in (3.6.15). To further reduce the problem, we resort to the following simple corollary of Doob's maximal inequality.

Lemma 3.12. Let $F$ be a nonnegative function on $[0,1]$ such that $F>0$ on $(0,1]$ and $\sup_{s,t:1\le t/s\le2}\frac{F(t)}{F(s)}<\infty$. In addition, assume that for some $\delta>0$,
\[
t\longmapsto\frac{F(t)}{t^\delta}\ \text{is increasing}.\tag{3.6.16}
\]
Suppose that $Z$ is a nonnegative submartingale with càdlàg sample paths such that $\mathbf E[Z_t]\le F(t)$ for any $t\in[0,1]$. Then, for every $N\in\mathbb Z_+$,
\[
\sup_{t\in[2^{-(N+1)},2^{-N}]}\mathbf P\Big(\exists\,s\in(0,t],\ Z_s>\frac{F(s)}{s^\delta}\Big)\le\Big(\sup_{s,t:1\le t/s\le2}\frac{F(t)}{F(s)}\Big)\times\frac1{2^{(N+1)\delta}(1-2^{-\delta})}.\tag{3.6.17}
\]

Proof. For each $m\in\mathbb Z_+$,
\[
\begin{aligned}
\mathbf P\Big(\exists\,s\in\big[2^{-(m+1)},2^{-m}\big],\ Z_s\ge\frac{F(s)}{s^\delta}\Big)&\le\mathbf P\Big(\sup_{2^{-(m+1)}\le s\le2^{-m}}Z_s\ge F\Big(\frac1{2^{m+1}}\Big)\Big/\frac1{2^{(m+1)\delta}}\Big)\le\frac{\mathbf E\big[Z_{2^{-m}}\big]}{F\big(\frac1{2^{m+1}}\big)\big/\frac1{2^{(m+1)\delta}}}\\
&\le\frac{F\big(\frac1{2^m}\big)}{F\big(\frac1{2^{m+1}}\big)\big/\frac1{2^{(m+1)\delta}}}\le\sup_{s,t:1\le t/s\le2}\frac{F(t)}{F(s)}\times\frac1{2^{(m+1)\delta}},
\end{aligned}
\]
where the first inequality follows from (3.6.16) and the second inequality follows from Doob's maximal inequality. Hence, whenever $t\in\big[2^{-(N+1)},2^{-N}\big]$ for $N\in\mathbb Z_+$, the last inequality gives
\[
\mathbf P\Big(\exists\,s\in(0,t],\ Z_s>\frac{F(s)}{s^\delta}\Big)\le\sum_{m=N}^\infty\mathbf P\Big(\exists\,s\in\big[2^{-(m+1)},2^{-m}\big],\ Z_s\ge\frac{F(s)}{s^\delta}\Big)\le\Big(\sup_{s,t:1\le t/s\le2}\frac{F(t)}{F(s)}\Big)\sum_{m=N}^\infty\frac1{2^{(m+1)\delta}}=\Big(\sup_{s,t:1\le t/s\le2}\frac{F(t)}{F(s)}\Big)\times\frac1{2^{(N+1)\delta}(1-2^{-\delta})}.
\]
This completes the proof.

Theorem 3.13. Under Assumption 1, take the same constants $\kappa_j$ as in Theorem 3.10. Then we can choose a constant $K^*\in(0,\infty)$ as stated in Theorem 3.10 such that the following uniform bound holds:
\[
\mathbf E^{\mathbf Q^i_\varepsilon}\Bigg[\sum_{j\in\mathcal J^i_{\beta'}(t,\,t\wedge\tau^i\wedge\sigma^{X^i}_\beta)}\Bigg(\psi(1)\varepsilon+\int_{t_j}^{t\wedge\tau^i\wedge\sigma^{X^i}_\beta\wedge\sigma^{Y^j}_\beta}\frac1{X^i_s(1)}\int_{\mathbb R}X^i(x,s)^{1/2}Y^j(x,s)^{1/2}\,dx\,ds\Bigg)\Bigg]\le K^*\big[(t-s_i)^{\kappa_1}+\varepsilon^{\kappa_2}\cdot(t-s_i)^{\kappa_3}\big],\quad\forall\,t\in(s_i,s_i+1],\ i\in\mathbb N,\ \varepsilon\in\Big(0,\frac1{8\psi(1)}\wedge1\Big].
\]

Now we prove the main result of this section, that is, Theorem 3.10, assuming Theorem 3.13.

Proof of Theorem 3.10. In, and only in, this proof, we denote by $Z^{(0)}$ the submartingale defined in (3.6.15). Since $\big[j\in\mathcal J^i_{\beta'}(t,t\wedge\tau^i\wedge\sigma^{X^i}_\beta)\big]\in\mathscr G_{t_j}$, we obtain immediately from Lemma 3.8 (2) that the part
\[
\sum_{j\in\mathcal J^i_{\beta'}(t,\,t\wedge\tau^i\wedge\sigma^{X^i}_\beta)}M^j_{t\wedge\tau^i\wedge\sigma^{X^i}_\beta\wedge\sigma^{Y^j}_\beta},\quad t\in[s_i,\infty),
\]
in the definition of $Z^{(0)}$ is a true $\mathbf Q^i_\varepsilon$-martingale with mean zero, for any $i\in\mathbb N$ and $\varepsilon\in(0,[8\psi(1)]^{-1}\wedge1]$. Hence, setting
\[
F^{(0)}(s)=K^*\big(s^{\kappa_1}+\varepsilon^{\kappa_2}\cdot s^{\kappa_3}\big),\quad s\in[0,1],
\]
we see from Theorem 3.13 that $\mathbf E^{\mathbf Q^i_\varepsilon}\big[Z^{(0)}_t\big]\le F^{(0)}(t-s_i)$ for any $t\in(s_i,s_i+1]$, $i\in\mathbb N$, and $\varepsilon\in(0,[8\psi(1)]^{-1}\wedge1]$.
Note that
\[
\sup_{s,t:1\le t/s\le2}\frac{F^{(0)}(t)}{F^{(0)}(s)}\le\sup_{s,t:1\le t/s\le2}\Big(\frac{t^{\kappa_1}}{s^{\kappa_1}}+\frac{t^{\kappa_3}}{s^{\kappa_3}}\Big)\le2\cdot2^{\kappa_1\vee\kappa_3}.
\]
Hence, applying Lemma 3.12 with $(Z,F)$ taken to be $(Z^{(0)},F^{(0)})$, we see that (3.6.13), with the process in (3.6.14) replaced by $Z^{(0)}$, holds. The proof is complete.

The remainder of this section is devoted to the proof of Theorem 3.13. For this purpose, we need to classify the clusters $Y^j$ for $j\in\mathcal J^i_{\beta'}(t,t\wedge\tau^i\wedge\sigma^{X^i}_\beta)$. Set
\[
\begin{aligned}
\mathcal C^i_{\beta'}(t)&\triangleq\big\{j\in\mathbb N;\,|y_j-x_i|<2\big(\varepsilon^{1/2}+(t_j-s_i)^{\beta'}\big),\ s_i<t_j\le t\big\},\\
\mathcal L^i_{\beta'}(t,t')&\triangleq\big\{j\in\mathbb N;\,2\big(\varepsilon^{1/2}+(t_j-s_i)^{\beta'}\big)\le|y_j-x_i|\le2\big(\varepsilon^{1/2}+(t-s_i)^{\beta'}\big),\ s_i<t_j\le t'\big\},
\end{aligned}
\]
for $t',t\in(s_i,\infty)$ with $t\ge t'$. Hence, as far as the clusters $Y^j$ born inside the rectangle $\mathcal R^{X^i}_{\beta'}(t)$ are concerned, the clusters $Y^j$, $j\in\mathcal C^i_{\beta'}(t)$, are those born inside the double parabola
\[
\big\{(x,s)\in\mathbb R\times[s_i,t];\,|x-x_i|<2\big(\varepsilon^{1/2}+(s-s_i)^{\beta'}\big)\big\}
\]
(the light grey area in Figure 3.3), and the clusters $Y^j$, $j\in\mathcal L^i_{\beta'}(t,t)$, are those born outside (the dark grey area in Figure 3.3). For any $i\in\mathbb N$, we say a cluster $Y^j$ is a critical cluster if $j\in\mathcal C^i_{\beta'}(t)$ and a lateral cluster if $j\in\mathcal L^i_{\beta'}(t,t')$ for some $t,t'$.

[Figure 3.3: $\mathcal P^{X^i}_\beta(t)$, $\mathcal R^{X^i}_\beta(t)$, and $\mathcal R^{X^i}_{\beta'}(t)$ for $0<\beta'<\beta$ and $t\in[s_i,s_i+1]$.]

Since $\{\mathcal C^i_{\beta'}(t),\mathcal L^i_{\beta'}(t,t')\}$ is a cover of $\mathcal J^i_{\beta'}(t,t')$ by disjoint sets, Theorem 3.13 can be obtained from the following two lemmas.

Lemma 3.14. Let $\kappa_j$ be as in Theorem 3.10. We can choose a constant $K^*\in(0,\infty)$ as in Theorem 3.10 such that the following uniform bound holds:
\[
\mathbf E^{\mathbf Q^i_\varepsilon}\Bigg[\sum_{j\in\mathcal C^i_{\beta'}(t\wedge\tau^i\wedge\sigma^{X^i}_\beta)}\Bigg(\psi(1)\varepsilon+\int_{t_j}^{t\wedge\tau^i\wedge\sigma^{X^i}_\beta\wedge\sigma^{Y^j}_\beta}\frac1{X^i_s(1)}\int_{\mathbb R}X^i(x,s)^{1/2}Y^j(x,s)^{1/2}\,dx\,ds\Bigg)\Bigg]\le\frac{K^*}2\big[(t-s_i)^{\kappa_1}+\varepsilon^{\kappa_2}\cdot(t-s_i)^{\kappa_3}\big],\quad\forall\,t\in(s_i,s_i+1],\ i\in\mathbb N,\ \varepsilon\in\Big(0,\frac1{8\psi(1)}\wedge1\Big].\tag{3.6.18}
\]

Lemma 3.15. Let $\kappa_j$ be as in Theorem 3.10. By enlarging the constant $K^*$ in Lemma 3.14 if necessary, the following uniform bound holds:
\[
\mathbf E^{\mathbf Q^i_\varepsilon}\Bigg[\sum_{j\in\mathcal L^i_{\beta'}(t,\,t\wedge\tau^i\wedge\sigma^{X^i}_\beta)}\Bigg(\psi(1)\varepsilon+\int_{t_j}^{t\wedge\tau^i\wedge\sigma^{X^i}_\beta\wedge\sigma^{Y^j}_\beta}\frac1{X^i_s(1)}\int_{\mathbb R}X^i(x,s)^{1/2}Y^j(x,s)^{1/2}\,dx\,ds\Bigg)\Bigg]\le\frac{K^*}2\big[(t-s_i)^{\kappa_1}+\varepsilon^{\kappa_2}\cdot(t-s_i)^{\kappa_3}\big],\quad\forall\,t\in(s_i,s_i+1],\ i\in\mathbb N,\ \varepsilon\in\Big(0,\frac1{8\psi(1)}\wedge1\Big].\tag{3.6.19}
\]

Despite some technical details, the methods of proof for Lemma 3.14 and Lemma 3.15 are very similar. For clarity, they are given in Section 3.6.3 and Section 3.6.4 separately, with some preliminaries set out in Section 3.6.2 below.

3.6.2 Auxiliary results and notation

For each $z,\delta\in\mathbb R_+$, let $(Z,\mathbf P^\delta_z)$ denote a copy of $\frac14{\rm BESQ}^{4\delta}(4z)$. We assume that $(Z,\mathbf P^\delta_z)$ is defined by an $(\mathscr H_t)$-Brownian motion $B$, where $(\mathscr H_t)$ satisfies the usual conditions. This means that
\[
Z_t=z+\delta t+\int_0^t\sqrt{Z_s}\,dB_s,\quad\mathbf P^\delta_z\text{-a.s.}
\]
(Cf. Chapter XI.1 of [44] for Bessel squared processes.) As we will often investigate $Z$ before it hits a constant level, we set the following notation, similar to (3.5.2): for any real-valued process $H=(H_t)$,
\[
T^H_x=\inf\{t\ge0;\,H_t=x\},\quad x\in\mathbb R.
\]
For $\delta=0$, $(Z,\mathbf P^0_z)$ gives a Feller diffusion, and its marginals are characterized by
\[
\mathbf E^{\mathbf P^0_z}[\exp(-\lambda Z_t)]=\exp\Big(-\frac{2\lambda z}{2+\lambda t}\Big),\quad\lambda,t\in\mathbb R_+.
\]
In particular, the survival probability of $(Z,\mathbf P^0_z)$ is given by
\[
\mathbf P^0_z(Z_t>0)=\lim_{\lambda\to\infty}\big(1-\mathbf E^{\mathbf P^0_z}[\exp(-\lambda Z_t)]\big)=1-\exp\Big(-\frac{2z}t\Big),\quad z,t\in(0,\infty).\tag{3.6.20}
\]
Using the elementary inequality $1-e^{-x}\le x$ for $x\in\mathbb R_+$, we obtain from the last display that
\[
\mathbf P^0_z(Z_t>0)\le\frac{2z}t,\quad z,t\in(0,\infty).\tag{3.6.21}
\]
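As a quick numerical sanity check on (3.6.20)–(3.6.21) (our own illustration; the values of $z$ and $t$ are arbitrary, and the Euler scheme only approximates the absorbing boundary at 0, so agreement is rough):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def feller_survival(z, t, n_steps=1000, n_paths=50_000):
    """Empirical P(Z_t > 0) for dZ = sqrt(Z) dB, Z_0 = z, via
    Euler-Maruyama with absorption at 0."""
    dt = t / n_steps
    Z = np.full(n_paths, float(z))
    for _ in range(n_steps):
        alive = Z > 0
        Z[alive] += np.sqrt(Z[alive] * dt) * rng.standard_normal(alive.sum())
        np.maximum(Z, 0.0, out=Z)
    return (Z > 0).mean()

z, t = 0.05, 1.0
print(feller_survival(z, t))     # Monte Carlo estimate
print(1 - np.exp(-2 * z / t))    # exact value (3.6.20), about 0.0952
print(2 * z / t)                 # upper bound (3.6.21), 0.1
\end{verbatim}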
To save notation in the following Section 3.6.3 and Section 3.6.4, we endow the notation "$<_a$" used in Section 3.5.2 with a finer meaning. Now, we write $A<_aB$ if $A\le CB$ for some constant $C\in(0,\infty)$ which may vary from line to line but depends only on $\psi$ and the parameter vector chosen in Assumption 1.

3.6.3 Proof of Lemma 3.14

Fix $i\in\mathbb N$ and $\varepsilon\in(0,[8\psi(1)]^{-1}\wedge1]$; henceforth we drop the subscripts $\varepsilon$ of $\mathbf P_\varepsilon$ and $\mathbf Q^i_\varepsilon$. In addition, we may consider only $t\in[s_i+\frac\varepsilon2,s_i+1]$, as there are no immigrants for $Y$ arriving in $[s_i,s_i+\frac\varepsilon2)$. We do our analysis according to the following steps.

(Step 1). We start with the simplification
\[
\begin{aligned}
\sum_{j\in\mathcal C^i_{\beta'}(t\wedge\tau^i\wedge\sigma^{X^i}_\beta)}\Bigg(\psi(1)\varepsilon+\int_{t_j}^{t\wedge\tau^i\wedge\sigma^{X^i}_\beta\wedge\sigma^{Y^j}_\beta}\frac1{X^i_s(1)}\int_{\mathbb R}X^i(x,s)^{1/2}Y^j(x,s)^{1/2}\,dx\,ds\Bigg)&\le\sum_{j\in\mathcal C^i_{\beta'}(t\wedge\tau^i)}\Bigg(\psi(1)\varepsilon+\int_{t_j}^{t\wedge\tau^i}\frac{[Y^j_s(1)]^{1/2}}{[X^i_s(1)]^{1/2}}\,ds\Bigg)\\
&\le\sum_{j\in\mathcal C^i_{\beta'}(t\wedge\tau^i)}\Bigg(\psi(1)\varepsilon+\int_{t_j}^{t\wedge\tau^i}\frac2{(s-s_i)^{\eta/2}}\,[Y^j_s(1)]^{1/2}\,ds\Bigg),
\end{aligned}\tag{3.6.22}
\]
where the first inequality follows from the Cauchy–Schwarz inequality and the second one follows by using the component $\tau^{i,(1)}$ of $\tau^i$ in Proposition 3.9. Now, we claim that
\[
\begin{aligned}
&\mathbf E^{\mathbf Q^i}\Bigg[\sum_{j\in\mathcal C^i_{\beta'}(t\wedge\tau^i\wedge\sigma^{X^i}_\beta)}\Bigg(\psi(1)\varepsilon+\int_{t_j}^{t\wedge\tau^i\wedge\sigma^{X^i}_\beta\wedge\sigma^{Y^j}_\beta}\frac1{X^i_s(1)}\int_{\mathbb R}X^i(x,s)^{1/2}Y^j(x,s)^{1/2}\,dx\,ds\Bigg)\Bigg]\\
&<_a\sum_{j:s_i<t_j\le t}(t_j-s_i)^{\beta'}\varepsilon+\sum_{j:s_i<t_j\le t}\int_{t_j}^tds\,\frac1{(s-s_i)^{\eta/2}}\,\mathbf E^{\mathbf Q^i}\Big[[Y^j_s(1)]^{1/2};\,s<\tau^i,\ |y_j-x_i|<2\big(\varepsilon^{1/2}+(t_j-s_i)^{\beta'}\big)\Big].
\end{aligned}\tag{3.6.23}
\]
Note that
\[
\mathbf E^{\mathbf Q^i}\big[\psi(1)\varepsilon\,\#\mathcal C^i_{\beta'}(t\wedge\tau^i)\big]<_a\varepsilon\,\mathbf E^{\mathbf Q^i}\big[\#\mathcal C^i_{\beta'}(t)\big]=\varepsilon\sum_{j:s_i<t_j\le t}\mathbf Q^i\big(|y_j-x_i|<2\big(\varepsilon^{1/2}+(t_j-s_i)^{\beta'}\big)\big)<_a\sum_{j:s_i<t_j\le t}4\big(\varepsilon^{1/2}+(t_j-s_i)^{\beta'}\big)\varepsilon<_a\sum_{j:s_i<t_j\le t}(t_j-s_i)^{\beta'}\varepsilon,\tag{3.6.24}
\]
where the second $<_a$-inequality follows from Lemma 3.8 (4), and the last $<_a$-inequality follows since
\[
\varepsilon^{1/2}\le\varepsilon^{\beta'}\le2^{\beta'}(t_j-s_i)^{\beta'},\quad\forall\,j\in\mathbb N\ \text{with }t_j>s_i.\tag{3.6.25}
\]
Our claim (3.6.23) now follows from (3.6.22) and (3.6.24).

From the display (3.6.23), we see the necessity of obtaining the order of
\[
\mathbf E^{\mathbf Q^i}\Big[[Y^j_s(1)]^{1/2};\,s<\tau^i,\ |y_j-x_i|<2\big(\varepsilon^{1/2}+(t_j-s_i)^{\beta'}\big)\Big],\quad s\in(t_j,t],\ s_i<t_j<t,\tag{3.6.26}
\]
in $s_i,t_j,s,t$. We subdivide our analysis of a generic term in (3.6.26) into the following (Step 2-1)–(Step 2-4), with a summary given in (Step 2-5).

(Step 2-1). We convert the $\mathbf Q^i$-expectations in (3.6.26) to $\mathbf P$-expectations. Recalling that $x_i,y_j\in\mathscr G_{t_j}$ (cf. Remark 3.4), we can use Lemma 3.7 to get
\[
\mathbf E^{\mathbf Q^i}\Big[[Y^j_s(1)]^{1/2};\,s<\tau^i,\ |y_j-x_i|<2\big(\varepsilon^{1/2}+(t_j-s_i)^{\beta'}\big)\Big]=\frac1{\psi(1)\varepsilon}\mathbf E^{\mathbf P}\Big[X^i_s(1)^{T^{X^i}_1}[Y^j_s(1)]^{1/2};\,s<\tau^i,\ |y_j-x_i|<2\big(\varepsilon^{1/2}+(t_j-s_i)^{\beta'}\big)\Big].\tag{3.6.27}
\]
We break the $\mathbf P$-expectation in (3.6.27) into finer pieces by considering the following. For $s>t_j$, $X^i_s(1)^{T^{X^i}_1}$ is nonzero on the union of the two disjoint events
\[
\big[X^i_s(1)^{T^{X^i}_1}>0,\ T^{X^i}_0\le t_j\big]=\big[T^{X^i}_1<T^{X^i}_0\le t_j\big]\tag{3.6.28}
\]
and
\[
\big[X^i_s(1)^{T^{X^i}_1}>0,\ t_j<T^{X^i}_0\big].\tag{3.6.29}
\]
Here, the equality in (3.6.28) holds $\mathbf P$-a.s., since 0 is an absorbing state of $X^i(1)$ under $\mathbf P$. In fact, $X^i_s(1)^{T^{X^i}_1}=1$ on the event in (3.6.28). To invoke the additional order provided by the improved modulus of continuity of $X^i(1)$ at its starting point $s_i$, we use the trivial inequality
\[
X^i_s(1)^{T^{X^i}_1}\le\Big|X^i_s(1)^{T^{X^i}_1}-\psi(1)\varepsilon\Big|+\psi(1)\varepsilon
\]
on the event (3.6.29).
Putting things together, we see from (3.6.27) that
\[
\begin{aligned}
&\mathbf E^{\mathbf Q^i}\Big[[Y^j_s(1)]^{1/2};\,s<\tau^i,\ |y_j-x_i|<2\big(\varepsilon^{1/2}+(t_j-s_i)^{\beta'}\big)\Big]\\
&\le\frac1{\psi(1)\varepsilon}\mathbf E^{\mathbf P}\Big[[Y^j_s(1)]^{1/2};\,s\le T^{Y^j}_1,\ |y_j-x_i|<2\big(\varepsilon^{1/2}+(t_j-s_i)^{\beta'}\big),\ T^{X^i}_1<T^{X^i}_0\le t_j\Big]\\
&\quad+\frac1{\psi(1)\varepsilon}\mathbf E^{\mathbf P}\Big[\Big|X^i_s(1)^{T^{X^i}_1}-\psi(1)\varepsilon\Big|\,[Y^j_s(1)]^{1/2};\,s<\tau^i,\ |y_j-x_i|<2\big(\varepsilon^{1/2}+(t_j-s_i)^{\beta'}\big),\ X^i_s(1)^{T^{X^i}_1}>0,\ t_j<T^{X^i}_0\Big]\\
&\quad+\frac1{\psi(1)\varepsilon}\cdot\psi(1)\varepsilon\,\mathbf E^{\mathbf P}\Big[[Y^j_s(1)]^{1/2};\,s\le T^{Y^j}_1,\ |y_j-x_i|<2\big(\varepsilon^{1/2}+(t_j-s_i)^{\beta'}\big),\ t_j<T^{X^i}_0\Big],\quad\forall\,s\in(t_j,t],\ s_i<t_j<t,
\end{aligned}\tag{3.6.30}
\]
where, for the first and the third terms on the right-hand side, it is legitimate to replace the event $[s<\tau^i]$ by the larger one $[s\le T^{Y^j}_1]$ since, in Proposition 3.9, $\tau^{i,(3)}$ is a component of $\tau^i$, and for the third term we replace the event in (3.6.29) by the larger one $[t_j<T^{X^i}_0]$. In (Step 2-2)–(Step 2-4) below, we derive a bound for each of the three terms in (3.6.30) which involves only Feller's diffusion. We use the notation of Section 3.6.2.

(Step 2-2). Consider the first term on the right-hand side of (3.6.30), and recall the notation in Section 3.6.2. It follows from (3.5.8) and (3.5.9) that
\[
\begin{aligned}
&\frac1{\psi(1)\varepsilon}\mathbf E^{\mathbf P}\Big[[Y^j_s(1)]^{1/2};\,s\le T^{Y^j}_1,\ |y_j-x_i|<2\big(\varepsilon^{1/2}+(t_j-s_i)^{\beta'}\big),\ T^{X^i}_1<T^{X^i}_0\le t_j\Big]\\
&<_a\frac1\varepsilon\,\mathbf P\big(T^{X^i}_1<T^{X^i}_0\le t_j\big)\big(\varepsilon^{1/2}+(t_j-s_i)^{\beta'}\big)\,\mathbf E^{\mathbf P^0_{\psi(1)\varepsilon}}\big[(Z_{s-t_j})^{1/2};\,s-t_j\le T^Z_1\big]\\
&\le\frac1\varepsilon\,\mathbf P\big(T^{X^i}_1<T^{X^i}_0\big)\big(\varepsilon^{1/2}+(t_j-s_i)^{\beta'}\big)\,\mathbf E^{\mathbf P^0_{\psi(1)\varepsilon}}\big[(Z_{s-t_j})^{1/2};\,s-t_j\le T^Z_1\big]\\
&<_a(t_j-s_i)^{\beta'}\,\mathbf E^{\mathbf P^0_{\psi(1)\varepsilon}}\big[(Z_{s-t_j})^{1/2};\,s-t_j\le T^Z_1\big],\quad\forall\,s\in(t_j,t],\ s_i<t_j<t,
\end{aligned}\tag{3.6.31}
\]
where the last inequality follows from (3.6.25) and Lemma 3.7.

(Step 2-3). Let us deal with the second term in (3.6.30). We claim that
\[
\begin{aligned}
&\frac1{\psi(1)\varepsilon}\mathbf E^{\mathbf P}\Big[\Big|X^i_s(1)^{T^{X^i}_1}-\psi(1)\varepsilon\Big|\,[Y^j_s(1)]^{1/2};\,s<\tau^i,\ |y_j-x_i|<2\big(\varepsilon^{1/2}+(t_j-s_i)^{\beta'}\big),\ X^i_s(1)^{T^{X^i}_1}>0,\ t_j<T^{X^i}_0\Big]\\
&<_a\big(\varepsilon^{\alpha^{N_0}}(s-s_i)^\alpha+(s-s_i)^\xi\big)(t_j-s_i)^{\beta'-1}\,\mathbf E^{\mathbf P^0_{\psi(1)\varepsilon}}\big[(Z_{s-t_j})^{1/2};\,s-t_j\le T^Z_1\big],\quad\forall\,s\in(t_j,t],\ s_i<t_j<t.
\end{aligned}\tag{3.6.32}
\]
Fix such an $s$ throughout (Step 2-3). First, let us transfer the improved modulus of continuity of $X^i(1)$ under $\mathbf Q^i$ to one under $\mathbf P$. It follows from (3.6.9) that on $\big[s<\tau^i,\ X^i_s(1)^{T^{X^i}_1}>0\big]\in\mathscr G_s$ we have
\[
\Big|X^i_s(1)^{T^{X^i}_1}-\psi(1)\varepsilon\Big|\le K^X_1[\psi(1)\varepsilon]^{\alpha^{N_0}}(s-s_i)^\alpha+K^X_2(s-s_i)^\xi\quad\mathbf Q^i\text{-a.s.},
\]
and hence
\[
\begin{aligned}
0&=\mathbf Q^i\Big(\Big|X^i_s(1)^{T^{X^i}_1}-\psi(1)\varepsilon\Big|>K^X_1[\psi(1)\varepsilon]^{\alpha^{N_0}}(s-s_i)^\alpha+K^X_2(s-s_i)^\xi,\ s<\tau^i,\ X^i_s(1)^{T^{X^i}_1}>0\Big)\\
&=\frac1{\psi(1)\varepsilon}\mathbf E^{\mathbf P}\Big[X^i_s(1)^{T^{X^i}_1};\,\Big|X^i_s(1)^{T^{X^i}_1}-\psi(1)\varepsilon\Big|>K^X_1[\psi(1)\varepsilon]^{\alpha^{N_0}}(s-s_i)^\alpha+K^X_2(s-s_i)^\xi,\ s<\tau^i,\ X^i_s(1)^{T^{X^i}_1}>0\Big],
\end{aligned}\tag{3.6.33}
\]
where the last equality follows from Lemma 3.7, since the event evaluated under $\mathbf Q^i$ is a $\mathscr G_s$-event. Using the restriction $X^i_s(1)^{T^{X^i}_1}>0$, we see that the equality (3.6.33) implies
\[
\Big|X^i_s(1)^{T^{X^i}_1}-\psi(1)\varepsilon\Big|\le K^X_1[\psi(1)\varepsilon]^{\alpha^{N_0}}(s-s_i)^\alpha+K^X_2(s-s_i)^\xi\quad\mathbf P\text{-a.s. on }\big[s<\tau^i,\ X^i_s(1)^{T^{X^i}_1}>0\big].\tag{3.6.34}
\]
Now, using (3.6.34) gives
\[
\begin{aligned}
&\frac1{\psi(1)\varepsilon}\mathbf E^{\mathbf P}\Big[\Big|X^i_s(1)^{T^{X^i}_1}-\psi(1)\varepsilon\Big|\,[Y^j_s(1)]^{1/2};\,s<\tau^i,\ |y_j-x_i|<2\big(\varepsilon^{1/2}+(t_j-s_i)^{\beta'}\big),\ X^i_s(1)^{T^{X^i}_1}>0,\ t_j<T^{X^i}_0\Big]\\
&<_a\frac{\varepsilon^{\alpha^{N_0}}(s-s_i)^\alpha+(s-s_i)^\xi}\varepsilon\,\mathbf E^{\mathbf P}\Big[[Y^j_s(1)]^{1/2};\,s\le T^{Y^j}_1,\ |y_j-x_i|<2\big(\varepsilon^{1/2}+(t_j-s_i)^{\beta'}\big),\ t_j<T^{X^i}_0\Big],
\end{aligned}\tag{3.6.35}
\]
where in the last inequality we use the component $\tau^{i,(3)}$ of $\tau^i$ in Proposition 3.9 and discard the event $\big[X^i_s(1)^{T^{X^i}_1}>0\big]$.
Applying (3.5.8) and (3.5.9) to (3.6.35) gives
\[
\begin{aligned}
&\frac1{\psi(1)\varepsilon}\mathbf E^{\mathbf P}\Big[\Big|X^i_s(1)^{T^{X^i}_1}-\psi(1)\varepsilon\Big|\,[Y^j_s(1)]^{1/2};\,s<\tau^i,\ |y_j-x_i|\le2\big(\varepsilon^{1/2}+(t_j-s_i)^{\beta'}\big),\ X^i_s(1)^{T^{X^i}_1}>0,\ t_j<T^{X^i}_0\Big]\\
&<_a\frac{\varepsilon^{\alpha^{N_0}}(s-s_i)^\alpha+(s-s_i)^\xi}\varepsilon\cdot\big(\varepsilon^{1/2}+(t_j-s_i)^{\beta'}\big)\,\mathbf P\big(t_j<T^{X^i}_0\big)\,\mathbf E^{\mathbf P^0_{\psi(1)\varepsilon}}\big[(Z_{s-t_j})^{1/2};\,s-t_j\le T^Z_1\big].
\end{aligned}\tag{3.6.36}
\]
We have
\[
\mathbf P\big(t_j<T^{X^i}_0\big)\le\frac{2\psi(1)\varepsilon}{t_j-s_i}\tag{3.6.37}
\]
by (3.6.21). Applying the last display and (3.6.25) to the right-hand side of (3.6.36) then gives the desired inequality (3.6.32).

(Step 2-4). For the third term in (3.6.30), the arguments in (Step 2-3) (cf. (3.6.35) and (3.6.36)) readily give
\[
\frac1{\psi(1)\varepsilon}\cdot\psi(1)\varepsilon\,\mathbf E^{\mathbf P}\Big[[Y^j_s(1)]^{1/2};\,s\le T^{Y^j}_1,\ |y_j-x_i|\le2\big(\varepsilon^{1/2}+(t_j-s_i)^{\beta'}\big),\ t_j<T^{X^i}_0\Big]<_a(t_j-s_i)^{\beta'-1}\varepsilon\,\mathbf E^{\mathbf P^0_{\psi(1)\varepsilon}}\big[(Z_{s-t_j})^{1/2};\,s-t_j\le T^Z_1\big],\quad\forall\,s\in(t_j,t],\ s_i<t_j<t.\tag{3.6.38}
\]

(Step 2-5). We note that in (3.6.31), (3.6.32), and (3.6.38) there is a common fractional moment, namely
\[
\mathbf E^{\mathbf P^0_{\psi(1)\varepsilon}}\big[(Z_{s-t_j})^{1/2};\,s-t_j\le T^Z_1\big],\tag{3.6.39}
\]
left to be estimated, as will be done in this step. Recall the filtration $(\mathscr H_t)$ defined in Section 3.6.2.

Lemma 3.16. Fix $z,T\in(0,\infty)$. Under the conditional probability measure $\mathbf P^{(T)}_z$ defined by
\[
\mathbf P^{(T)}_z(A)\triangleq\mathbf P^0_z(A\,|\,Z_T>0),\quad A\in\mathscr H_T,\tag{3.6.40}
\]
the process $(Z_t)_{0\le t\le T}$ is a continuous $(\mathscr H_t)$-semimartingale with canonical decomposition
\[
Z_t=z+\int_0^tF\Big(\frac{2Z_s}{T-s}\Big)ds+M_t,\quad0\le t\le T.\tag{3.6.41}
\]
Here, $F:\mathbb R_+\longrightarrow\mathbb R_+$, defined by
\[
F(x)\triangleq\begin{cases}\dfrac{e^{-x}x}{1-e^{-x}},&x>0,\\[1ex]1,&x=0,\end{cases}\tag{3.6.42}
\]
is continuous and decreasing, and $M$ is a continuous $(\mathscr H_t)$-martingale under $\mathbf P^{(T)}_z$ with quadratic variation $\langle M\rangle_t\equiv\int_0^tZ_s\,ds$.

Proof. The proof of this lemma is a standard application of Girsanov's theorem (cf. Theorem VIII.1.4 of [44]), and we proceed as follows. First, let $(D_t)_{0\le t\le T}$ denote the $(\mathscr H_t,\mathbf P^0_z)$-martingale associated with the Radon–Nikodym derivative of $\mathbf P^{(T)}_z$ with respect to $\mathbf P^0_z$, that is,
\[
D_t\equiv\frac{\mathbf P^0_z(Z_T>0\,|\,\mathscr H_t)}{\mathbf P^0_z(Z_T>0)},\quad0\le t\le T.\tag{3.6.43}
\]
To obtain the explicit form of $D$ under $\mathbf P^0_z$, we first note that the $(\mathscr H_t,\mathbf P^0_z)$-Markov property of $Z$ and (3.6.20) imply
\[
\mathbf P^0_z(Z_T>0\,|\,\mathscr H_t)=\mathbf P^0_{Z_t}(Z_{T-t}>0)=1-\exp\Big(-\frac{2Z_t}{T-t}\Big),\quad0\le t<T.\tag{3.6.44}
\]
Hence, it follows from Itô's formula and the foregoing display that, under $\mathbf P^0_z$,
\[
D_t=\frac1{\mathbf P^0_z(Z_T>0)}\Big[1-\exp\Big(-\frac{2z}T\Big)\Big]+\frac1{\mathbf P^0_z(Z_T>0)}\int_0^t\exp\Big(-\frac{2Z_s}{T-s}\Big)\cdot\Big(\frac2{T-s}\Big)\sqrt{Z_s}\,dB_s,\quad0\le t<T.\tag{3.6.45}
\]
We now apply Girsanov's theorem and verify that the components of the canonical decomposition of $(Z_t)_{0\le t\le T}$ under $\mathbf P^{(T)}_z$ satisfy the asserted properties. Under $\mathbf P^{(T)}_z$, we have
\[
Z_t=z+\int_0^tD_s^{-1}\,d\langle D,Z\rangle_s+M_t,\quad0\le t\le T.
\]
Here,
\[
M_t=\int_0^t\sqrt{Z_s}\,dB_s-\int_0^tD_s^{-1}\,d\langle D,Z\rangle_s,\quad0\le t\le T,
\]
is a continuous $(\mathscr H_t,\mathbf P^{(T)}_z)$-local martingale with the asserted quadratic variation $\langle M\rangle_t\equiv\int_0^tZ_s\,ds$, which implies that $M$ is a true martingale under $\mathbf P^{(T)}_z$. In addition, it follows from (3.6.44) and (3.6.45) that the finite variation process of $Z$ under $\mathbf P^{(T)}_z$ is given by
\[
\int_0^tD_s^{-1}\,d\langle D,Z\rangle_s=\int_0^t\frac1{\mathbf P^0_z(Z_T>0\,|\,\mathscr H_s)}\,d\langle\mathbf P^0_z(Z_T>0)D,Z\rangle_s=\int_0^t\frac{\exp\big(-\frac{2Z_s}{T-s}\big)\frac{2Z_s}{T-s}}{1-\exp\big(-\frac{2Z_s}{T-s}\big)}\,ds=\int_0^tF\Big(\frac{2Z_s}{T-s}\Big)ds,\quad0\le t\le T,
\]
where $F$ is given by (3.6.42). The proof is complete.

Lemma 3.17. For any $p\in(0,\infty)$, there exists a constant $K_p\in(0,\infty)$, depending only on $p$ and $(\alpha,\xi,N_0)$, such that
\[
\mathbf E^{\mathbf P^0_z}\big[(Z_T)^p;\,T\le T^Z_1\big]\le K_p\Big[\big(z^{p\alpha^{N_0}}T^{p\alpha}+z^p\big)\mathbf P^0_z(Z_T>0)+zT^{p\xi-1}\Big],\quad\forall\,z,T\in(0,1].\tag{3.6.46}
\]
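Before the proof, we record a one-line observation (our own gloss) explaining why the drift in (3.6.41) is harmless; it is what the proof below uses when it treats the finite variation part of $Z$ under $\mathbf P^{(T)}_z$ as a time integral with integrand bounded by 1. Since $e^x-1\ge x$ for $x\ge0$,
\[
F(x)=\frac{e^{-x}x}{1-e^{-x}}=\frac x{e^x-1}\le1,\quad x\ge0,
\]
and $x\mapsto x/(e^x-1)$ decreases from $F(0+)=1$ to $0$; consequently
\[
0\le\int_0^tF\Big(\frac{2Z_s}{T-s}\Big)ds\le t,\quad0\le t\le T.
\]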
Proof. Recall the conditional probability measure $\mathbf P^{(T)}_z$ defined in (3.6.40) and write
\[
\mathbf E^{\mathbf P^0_z}\big[(Z_T)^p;\,T\le T^Z_1\big]\le\mathbf P^0_z(Z_T>0)\,\mathbf E^{\mathbf P^0_z}\big[\big(Z_{T\wedge T^Z_1}\big)^p\,\big|\,Z_T>0\big]=\mathbf P^0_z(Z_T>0)\,\mathbf E^{\mathbf P^{(T)}_z}\big[\big(Z_{T\wedge T^Z_1}\big)^p\big].\tag{3.6.47}
\]
Henceforth, we work under the conditional probability measure $\mathbf P^{(T)}_z$.

We turn to the improved modulus of continuity of $Z$ at its starting time 0 under $\mathbf P^{(T)}_z$ in order to bound the right-hand side of (3.6.47). We first claim that, by enlarging the underlying probability space if necessary,
\[
|Z_t-z|\le t+C^Z_\alpha\Big(\int_0^tZ_s\,ds\Big)^\alpha\quad\forall\,t\in[0,T\wedge T^Z_1]\ \text{under }\mathbf P^{(T)}_z,\tag{3.6.48}
\]
where the random variable $C^Z_\alpha$ under $\mathbf P^{(T)}_z$ has distribution depending only on $\alpha$ and finite $\mathbf P^{(T)}_z$-moments of any finite order. We show how to obtain (3.6.48) by using the canonical decomposition of the continuous $(\mathscr H_t,\mathbf P^{(T)}_z)$-semimartingale $(Z_t)_{0\le t\le T}$ in (3.6.41). First, since its martingale part $M$ has quadratic variation $\int_0^\cdot Z_s\,ds$, the Dambis–Dubins–Schwarz theorem (cf. Theorem V.1.6 of [44]) implies that, by enlarging the underlying probability space if necessary,
\[
M_t=\widetilde B\Big(\int_0^tZ_s\,ds\Big),\quad t\in[0,T\wedge T^Z_1],
\]
for some standard Brownian motion $\widetilde B$ under $\mathbf P^{(T)}_z$. Here, the random clock $\int_0^tZ_s\,ds$, $t\in[0,T\wedge T^Z_1]$, for $\widetilde B$ is bounded by 1 by the assumption that $z,T\le1$. On the other hand, recall that the chosen parameter $\alpha$ lies in $(0,\frac12)$ and that the uniform Hölder-$\alpha$ modulus of continuity of standard Brownian motion on compacts has moments of any finite order. (See, e.g., the discussion preceding Theorem I.2.2 of [44] and its proof.) Hence,
\[
\Big|\widetilde B\Big(\int_0^tZ_s\,ds\Big)\Big|\le C^Z_\alpha\Big(\int_0^tZ_s\,ds\Big)^\alpha,\quad t\in[0,T\wedge T^Z_1],
\]
where the random variable $C^Z_\alpha$ is as in (3.6.48). Second, Lemma 3.16 also states that the finite variation process of $Z$ under $\mathbf P^{(T)}_z$ given by (3.6.41) is a time integral with integrand uniformly bounded by 1. This and the last two displays are now enough to obtain our claim (3.6.48).

With the integral inequality (3.6.48) and the distributional properties of $C^Z_\alpha$, we obtain the following improved modulus of continuity of $Z$ (cf. Corollary 3.43):
\[
\big|Z_{T\wedge T^Z_1}-z\big|\le K^Z_1z^{\alpha^{N_0}}T^\alpha+K^Z_2T^\xi\tag{3.6.49}
\]
for some random variables $K^Z_1,K^Z_2\in\bigcap_{q\in(0,\infty)}L^q\big(\mathbf P^{(T)}_z\big)$ obeying a joint law under $\mathbf P^{(T)}_z$ depending only on $(\alpha,\xi,N_0)$, by the analogous property of $C^Z_\alpha$ and Corollary 3.43.

We now return to the calculation in (3.6.47). Applying (3.6.49) gives
\[
\mathbf E^{\mathbf P^0_z}\big[(Z_T)^p;\,T\le T^Z_1\big]\le\mathbf P^0_z(Z_T>0)\,\mathbf E^{\mathbf P^{(T)}_z}\big[\big(Z_{T\wedge T^Z_1}\big)^p\big]\le\mathbf P^0_z(Z_T>0)\,(2^{p-1}\vee1)\,\mathbf E^{\mathbf P^{(T)}_z}\big[\big|Z_{T\wedge T^Z_1}-z\big|^p+z^p\big]\le\mathbf P^0_z(Z_T>0)\,K'_p\big(z^{p\alpha^{N_0}}T^{p\alpha}+T^{p\xi}+z^p\big)
\]
for some constant $K'_p$ depending only on $p$ and $(\alpha,\xi,N_0)$, by (3.6.49) and the distributional properties of the $K^Z_j$. (Cf. Lemma 3.41 for the second inequality.) Applying (3.6.21) to the last inequality gives the desired result. The proof is complete.

(Step 2-6). At this step, we summarize our results in (Step 2-1)–(Step 2-4), using Lemma 3.17. We apply (3.6.31), (3.6.32), and (3.6.38) to (3.6.30).
This gives
\[
\begin{aligned}
&\mathbf E^{\mathbf Q^i}\Big[[Y^j_s(1)]^{1/2};\,s<\tau^i,\ |y_j-x_i|<2\big(\varepsilon^{1/2}+(t_j-s_i)^{\beta'}\big)\Big]\\
&<_a\Big[(t_j-s_i)^{\beta'}+(t_j-s_i)^{\beta'-1}\big(\varepsilon^{\alpha^{N_0}}(s-s_i)^\alpha+(s-s_i)^\xi+\varepsilon\big)\Big]\times\mathbf E^{\mathbf P^0_{\psi(1)\varepsilon}}\big[(Z_{s-t_j})^{1/2};\,s-t_j\le T^Z_1\big]\\
&<_a\Big[(t_j-s_i)^{\beta'}+(t_j-s_i)^{\beta'-1}\big(\varepsilon^{\alpha^{N_0}}(s-s_i)^\alpha+(s-s_i)^\xi+\varepsilon\big)\Big]\times\Big[\big(\varepsilon^{\frac{\alpha^{N_0}}2}(s-t_j)^{\frac\alpha2}+\varepsilon^{\frac12}\big)\,\mathbf P^0_{\psi(1)\varepsilon}(Z_{s-t_j}>0)+\varepsilon(s-t_j)^{\frac\xi2-1}\Big]\\
&<_a(t_j-s_i)^{\beta'-1}\times\big((t_j-s_i)+\varepsilon^{\alpha^{N_0}}(s-s_i)^\alpha+(s-s_i)^\xi+\varepsilon\big)\times\varepsilon^{\frac{\alpha^{N_0}}2}(s-t_j)^{\frac\alpha2}\,\mathbf P^0_{\psi(1)\varepsilon}(Z_{s-t_j}>0)\\
&\quad+(t_j-s_i)^{\beta'-1}\times\big((t_j-s_i)+\varepsilon^{\alpha^{N_0}}(s-s_i)^\alpha+(s-s_i)^\xi+\varepsilon\big)\times\varepsilon^{\frac12}\,\mathbf P^0_{\psi(1)\varepsilon}(Z_{s-t_j}>0)\\
&\quad+(t_j-s_i)^{\beta'}\times\varepsilon(s-t_j)^{\frac\xi2-1}+(t_j-s_i)^{\beta'-1}\times\big(\varepsilon^{\alpha^{N_0}}(s-s_i)^\alpha+\varepsilon\big)\times\varepsilon(s-t_j)^{\frac\xi2-1}\\
&\quad+(t_j-s_i)^{\beta'-1}\times(s-s_i)^\xi\times\varepsilon(s-t_j)^{\frac\xi2-1},\quad\forall\,s\in(t_j,t],\ s_i<t_j<t,
\end{aligned}\tag{3.6.50}
\]
where the last $<_a$-inequality follows by some algebra. We now make some simplifications of the right-hand side of (3.6.50) before going further. We remark that some orders in $\varepsilon$ and the other variables will be discarded here. We bound the survival probability in (3.6.50) by
\[
\mathbf P^0_{\psi(1)\varepsilon}(Z_{s-t_j}>0)\le\Big(\frac{2\psi(1)\varepsilon}{s-t_j}\Big)^{1-\frac{\alpha^{N_0}}4},\tag{3.6.51}
\]
as follows from the elementary inequality $x\le x^\gamma$ for any $x\in[0,1]$ and $\gamma\in(0,1]$, and then (3.6.21). Assuming $s\in(t_j,t]$ for $s_i<t_j<t$, we have the inequalities
\[
1\ge s-s_i\ge t_j-s_i\ge\frac\varepsilon2,\qquad s-t_j\le1,\qquad0<\alpha+\alpha^{N_0}<\xi<1
\]
(cf. (3.6.11)-(a) for the third inequality). These and (3.6.51) imply that the first term of (3.6.50) satisfies
\[
(t_j-s_i)^{\beta'-1}\times\big((t_j-s_i)+\varepsilon^{\alpha^{N_0}}(s-s_i)^\alpha+(s-s_i)^\xi+\varepsilon\big)\times\varepsilon^{\frac{\alpha^{N_0}}2}(s-t_j)^{\frac\alpha2}\,\mathbf P^0_{\psi(1)\varepsilon}(Z_{s-t_j}>0)<_a(t_j-s_i)^{\beta'-1}(s-s_i)^\alpha(s-t_j)^{\frac\alpha2+\frac{\alpha^{N_0}}4-1}\,\varepsilon^{1+\frac{\alpha^{N_0}}4},\tag{3.6.52}
\]
the second term of (3.6.50) satisfies
\[
\begin{aligned}
(t_j-s_i)^{\beta'-1}\times\big((t_j-s_i)+\varepsilon^{\alpha^{N_0}}(s-s_i)^\alpha+(s-s_i)^\xi+\varepsilon\big)\times\varepsilon^{\frac12}\,\mathbf P^0_{\psi(1)\varepsilon}(Z_{s-t_j}>0)&<_a(t_j-s_i)^{\beta'-1}(s-s_i)^\alpha(s-t_j)^{\frac{\alpha^{N_0}}4-1}\,\varepsilon^{\frac32-\frac{\alpha^{N_0}}4}\\
&<_a(t_j-s_i)^{\beta'+\left(\frac12-\frac{\alpha^{N_0}}2\right)-1}(s-s_i)^\alpha(s-t_j)^{\frac{\alpha^{N_0}}4-1}\,\varepsilon^{1+\frac{\alpha^{N_0}}4}\\
&<_a(t_j-s_i)^{\beta'+\frac\alpha2-1}(s-s_i)^\alpha(s-t_j)^{\frac{\alpha^{N_0}}4-1}\,\varepsilon^{1+\frac{\alpha^{N_0}}4},
\end{aligned}\tag{3.6.53}
\]
and, finally, the fourth term of (3.6.50) satisfies
\[
(t_j-s_i)^{\beta'-1}\times\big(\varepsilon^{\alpha^{N_0}}(s-s_i)^\alpha+\varepsilon\big)\times\varepsilon(s-t_j)^{\frac\xi2-1}<_a(t_j-s_i)^{\beta'-1}(s-s_i)^\alpha(s-t_j)^{\frac\xi2-1}\,\varepsilon^{1+\alpha^{N_0}}<_a(t_j-s_i)^{\beta'-1}(s-s_i)^\alpha(s-t_j)^{\frac\alpha2+\frac{\alpha^{N_0}}4-1}\,\varepsilon^{1+\frac{\alpha^{N_0}}4}.\tag{3.6.54}
\]
Note that the bounds in (3.6.52) and (3.6.54) coincide. Using (3.6.52)–(3.6.54) in (3.6.50), we obtain
\[
\begin{aligned}
\mathbf E^{\mathbf Q^i}\Big[[Y^j_s(1)]^{1/2};\,s<\tau^i,\ |y_j-x_i|\le2\big(\varepsilon^{1/2}+(t_j-s_i)^{\beta'}\big)\Big]&<_a(t_j-s_i)^{\beta'-1}(s-s_i)^\alpha(s-t_j)^{\frac\alpha2+\frac{\alpha^{N_0}}4-1}\,\varepsilon^{1+\frac{\alpha^{N_0}}4}\\
&\quad+(t_j-s_i)^{\beta'+\frac\alpha2-1}(s-s_i)^\alpha(s-t_j)^{\frac{\alpha^{N_0}}4-1}\,\varepsilon^{1+\frac{\alpha^{N_0}}4}\\
&\quad+(t_j-s_i)^{\beta'}(s-t_j)^{\frac\xi2-1}\,\varepsilon+(t_j-s_i)^{\beta'-1}(s-s_i)^\xi(s-t_j)^{\frac\xi2-1}\,\varepsilon,\quad\forall\,s\in(t_j,t],\ s_i<t_j<t.
\end{aligned}\tag{3.6.55}
\]

(Step 3). We digress to a conceptual discussion of some elementary integrals which will play an important role in the forthcoming calculations in (Step 4). First, for $a,b,c\in\mathbb R$ and $T\in(0,\infty)$, a straightforward application of Fubini's theorem and changes of variables shows that
\[
I(a,b,c)_T\triangleq\int_0^Tdr\,r^a\int_r^Tds\,s^b(s-r)^c<\infty\iff a,c\in(-1,\infty)\ \text{and}\ a+b+c>-2.\tag{3.6.56}
\]
Furthermore, when $I(a,b,c)_T$ is finite, it can be expressed as
\[
I(a,b,c)_T=\Big(\int_0^1dr\,r^a(1-r)^c\Big)\cdot\frac{T^{a+b+c+2}}{a+b+c+2}.
\]
Given $a+b+c>-2$ with $a,c\in(-1,\infty)$, we consider alternative ways to show that the integral $I(a,b,c)_T$ is finite while preserving the same order $T^{a+b+c+2}$ in $T$, according to whether $b\ge0$ or $b<0$. If $b\ge0$, then
\[
I(a,b,c)_T\le\int_0^Tdr\,r^a\times T^b\times\int_0^Tds\,s^c=\frac1{a+1}\,\frac1{c+1}\,T^{a+b+c+2},\tag{3.6.57}
\]
where the first inequality follows since $s^b\le T^b$ for any $s\in[r,T]$. For the case $b<0$, we consider allocating the function $s\longmapsto s^b$.
Precisely, for b1, b2 < 0 such that b1 + b2 = b, we have I(a, b, c)T ≤ ∫ T 0 drra+b1 ∫ T r ds(s− r)b2+c ≤ ∫ T 0 drra+b1 × ∫ T 0 dssb2+c (3.6.58) where the first inequality follows since, for s > r, sb1 ≤ rb1 and sb2 ≤ (s−r)b2 . Using the following elementary lemma, we obtain from (3.6.58) that I(a, b, c)T ≤ 1 a+ b1 + 1 1 b2 + c+ 1 T a+b+c+2. Lemma 3.18. For any reals a, c > −1 and b < 0 such that a+ b+ c > −2, there exists a pair (b1, b2) ∈ (−∞, 0) × (−∞, 0) such that b = b1 + b2, a+ b1 > −1, and b2 + c > −1. The two simple ideas behind the inequalities (3.6.57) and (3.6.58) will be applied later on in (Step 4) to bound Riemann sums by integrals of the type I(a, b, c)T . (Step 4). We complete the proof of Lemma 3.14 in this step. Applying the bound (3.6.55) to the right-hand side of the inequality (3.6.23), we have EQ i [ ∑ j∈Ci β′ (t∧τ i∧σX i β ) ( ψ(1)ε+ ∫ t∧τ i∧σXiβ ∧σY jβ tj 1 Xis(1) ∫ R Xi(x, s)1/2Y j(x, s)1/2dxds ) ] <a ∑ j:si<tj≤t (tj − si)β′ε + ∑ j:si<tj≤t (tj − si)β′−1 ∫ t tj (s− si)− η 2 +α(s− tj)α2 +α N0 4 −1ds · ε1+α N0 4 + ∑ j:si<tj≤t (tj − si)β′+α2−1 ∫ t tj (s− si)− η 2 +α(s− tj)α N0 4 −1ds · ε1+α N0 4 + ∑ j:si<tj≤t (tj − si)β′ ∫ t tj (s− si)− η 2 (s− tj) ξ 2 −1ds · ε + ∑ j:si<tj≤t (tj − si)β′−1 ∫ t tj (s− si)− η 2 +ξ(s− tj) ξ 2 −1ds · ε. (3.6.59) Recall the notation I(a, b, c) in (3.6.56). It should be clear that, up to a translation of time by si, the first, the fourth, and the fifth sums are Riemann sums of I(β′, 0, 0)t−si , I ( β′,−η 2 , ξ 2 − 1 ) t−si , I ( β′ − 1,−η 2 + ξ, ξ 2 − 1 ) t−si , respectively, and so are the second and the third sums after a division by ε αN0 4 , with the corresponding integrals equal to I ( β′ − 1,−η 2 + α, α 2 + αN0 4 − 1 ) t−si , I ( β′ + α 2 − 1,−η 2 + α, αN0 4 − 1 ) t−si , respectively. It follows from (3.6.11)-(c) and (d) and (3.6.56) that all of the integrals in the last two displays are finite. We now aim to bound each of the five sums in (3.6.59) by suitable powers of ε and t, using integral comparisons. Observe that, whenever γ ∈ (−1,∞), the monotonicity of r 7−→ (r − si)γ over (si,∞) implies ∑ j:si<tj≤t (tj − si)γ · ε ≤ 2 ∫ t+ε si (r − si)γdr = 2 γ + 1 (t+ ε− si)γ+1 ≤ 2 · 3 γ+1 γ + 1 (t− si)γ+1 (3.6.60) since t ≥ si + ε2 . (The constant 2 is used to accommodate the case that γ < 0.) Hence, the first sum in (3.6.59) can be bounded as ∑ j:si<tj≤t (tj − si)β′ε <a (t− si)β ′+1. (3.6.61) Consider the other sums in (3.6.59). Recall our discussion of alternative ways to bound I(a, b, c) for given a+ b+ c > −2 and a, c ∈ (−1,∞), according to b ≥ 0 or b < 0; see (3.6.57) and (3.6.58). We use Lemma 3.18 in the following whenever necessary. Now, the second sum in (3.6.59) can be bounded as ∑ j:si<tj≤t (tj − si)β′−1 ∫ t tj (s− si)− η 2 +α(s− tj)α2 +α N0 4 −1ds · ε1+α N0 4 = ∑ j:si<tj≤t (tj − si)β′−1 ∫ t−si tj−si s− η 2 +α[s− (tj − si)]α2 +α N0 4 −1ds · ε1+α N0 4 <a (t− si)β ′− η 2 + 3α 2 +α N0 4 · εα N0 4 <a (t− si)β ′− η 2 + 3α 2 · εα N0 4 . (3.6.62) Here, in the foregoing <a-inequality, we use the integral comparison discussed in (Step 3) (with Lemma 3.18 to algebraically allocate the exponent −η 2 + α if necessary) and the Riemann-sum bound (3.6.60). The other sums on the right-hand side of (3.6.59) can be bounded similarly; before carrying this out, we record a quick numerical sanity check of the closed form of I(a, b, c)T .
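The closed form stated after (3.6.56) amounts to the identity I(a, b, c)T = B(a + 1, c + 1) · T^(a+b+c+2)/(a + b + c + 2), with B the Euler Beta integral ∫ 1 0 r^a (1− r)^c dr, and it can be compared against direct two-dimensional quadrature. The sketch below is ours (Python; the function names are not from the thesis), for illustrative exponents satisfying a, c > −1 and a + b + c > −2:

```python
# A minimal numerical sanity check (ours, not part of the thesis) of the
# closed form I(a,b,c)_T = B(a+1, c+1) * T**(a+b+c+2) / (a+b+c+2)
# following (3.6.56), where B(a+1, c+1) = int_0^1 r**a * (1-r)**c dr.
from scipy.integrate import dblquad
from scipy.special import beta

def I_quadrature(a, b, c, T):
    # Direct quadrature of I(a,b,c)_T = int_0^T dr r**a int_r^T ds s**b (s-r)**c.
    val, _ = dblquad(lambda s, r: r**a * s**b * (s - r)**c,
                     0.0, T, lambda r: r, lambda r: T)
    return val

def I_closed_form(a, b, c, T):
    return beta(a + 1.0, c + 1.0) * T**(a + b + c + 2.0) / (a + b + c + 2.0)

# Illustrative exponents with a, c > -1 and a + b + c > -2, as in (3.6.56).
for a, b, c in [(-0.5, 0.3, -0.4), (0.2, -0.9, -0.3)]:
    print(I_quadrature(a, b, c, 1.0), I_closed_form(a, b, c, 1.0))
```

The quadrature may warn about the integrable endpoint singularities, but the two values should agree to several digits.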
Returning to (3.6.59), the third sum satisfies ∑ j:si<tj≤t (tj − si)β′+α2−1 ∫ t tj (s− si)− η 2 +α(s− tj)α N0 4 −1ds · ε1+α N0 4 <a (t− si)β ′− η 2 + 3α 2 · εα N0 4 . (3.6.63) The fourth sum satisfies ∑ j:si<tj≤t (tj − si)β′ ∫ t tj (s− si)− η 2 (s− tj) ξ 2 −1ds · ε <a (t− si)β ′− η 2 + ξ 2 +1 <a (t− si)β ′− η 2 + 3ξ 2 , (3.6.64) where the last inequality applies since ξ ∈ (0, 1). The last sum satisfies ∑ j:si<tj≤t (tj − si)β′−1 ∫ t tj (s− si)− η 2 +ξ(s− tj) ξ 2 −1ds · ε <a (t− si)β ′− η 2 + 3ξ 2 . (3.6.65) The proof of Lemma 3.14 is complete upon applying (3.6.61)–(3.6.65) to the right-hand side of (3.6.59). 3.6.4 Proof of Lemma 3.15 As in Section 3.6.3, we fix t ∈ [si + ε2 , si + 1], i ∈ N, and ε ∈ (0, [8ψ(1)]−1 ∧ 1] and drop the subscripts of Pε and Qiε. For the proof of Lemma 3.15, the arguments in Section 3.6.3 essentially carry over. Now, however, we begin to use the condition (3.6.11)-(b) in Assumption 1 and the upper limit σXiβ ∧ σY jβ in the time integral in (3.6.19), neither of which was needed in the proof of Lemma 3.14. To motivate our adaptation of the arguments for critical clusters in Section 3.6.3, we discuss some parts of Section 3.6.3. First, it is straightforward to modify the proof of (3.6.24) and obtain Qi ( 2 ( ε1/2 + (tj − si)β′ ) ≤ |yj − xi| ≤ 2(ε1/2 + (t− si)β′) ) <a (t− si)β′ . (3.6.66) If we proceed as in (3.6.23) and use (3.6.66) in the obvious way, then this leads to EQ i [ ∑ j∈Li β′ (t,t∧τ i∧σX i β ) ( ψ(1)ε+ ∫ t∧τ i∧σXiβ ∧σY jβ tj 1 Xis(1) ∫ R Xi(x, s)1/2Y j(x, s)1/2dxds ) ] <a ∑ j:si<tj≤t (t− si)β′ε+ ∑ j:si<tj≤t ∫ t tj ds 1 (s− si)η/2 EQ i [ [ Y js (1) ]1/2 ; s < τ i, 2 ( ε1/2 + (tj − si)β′ ) ≤ |yj − xi| ≤ 2(ε1/2 + (t− si)β′)]. (Compare this with (3.6.23) for critical clusters.) If we argue by using (3.6.66) repeatedly in the steps analogous to (Step 2-2)–(Step 2-4) of Section 3.6.3, then we obtain the following <a-inequality similar to (3.6.59): EQ i [ ∑ j∈Li β′ (t,t∧τ i∧σX i β ) ( ψ(1)ε+ ∫ t∧τ i∧σXiβ ∧σY jβ tj 1 Xis(1) ∫ R Xi(x, s)1/2Y j(x, s)1/2dxds ) ] <a ∑ j:si<tj≤t (t− si)β′ε + ∑ j:si<tj≤t (t− si)β′(tj − si)−1 ∫ t tj (s− si)− η 2 +α(s− tj)α2 +α N0 4 −1ds · ε1+α N0 4 + ∑ j:si<tj≤t (t− si)β′(tj − si)α2−1 ∫ t tj (s− si)− η 2 +α(s− tj)α N0 4 −1ds · ε1+α N0 4 + ∑ j:si<tj≤t (t− si)β′ ∫ t tj (s− si)− η 2 (s− tj) ξ 2 −1ds · ε + ∑ j:si<tj≤t (t− si)β′(tj − si)−1 ∫ t tj (s− si)− η 2 +ξ(s− tj) ξ 2 −1ds · ε, (3.6.67) taking into account some simplifications similar to (3.6.52)–(3.6.54), in which some orders are discarded. (We omit the derivation of the foregoing display, as it will not be used for the proof of Lemma 3.15.) In other words, replacing the factor (tj − si)β′ in each of the sums in (3.6.59) by (t − si)β′ gives the bound in the foregoing display. Applying the integral domination to the second and the last sums of the foregoing display as in (Step 4) of Section 3.6.3 results in bounds which are divergent integrals. Examining the arguments in (Step 2-2)–(Step 2-4) of Section 3.6.3 shows that the problematic factor (tj − si)−1 (3.6.68) in (3.6.67) results from using the bound (3.6.37) for the survival probability P(tj < TX i 0 ). The exponent −1 in the foregoing display, however, is critical: any increase of this exponent, that is, any exponent strictly greater than −1, will lead to convergent integrals. Also, we recall that (3.5.9) is used repeatedly in (Step 2-2)–(Step 2-4) of Section 3.6.3, while (3.5.9) is a consequence of (3.5.8), and the proof of (3.5.8) uses in particular the Markov property of Y j(1) at tj . (A small numerical illustration of why the exponent −1 is borderline is given below.)
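To see concretely why the exponent −1 in (3.6.68) is borderline, compare the Riemann sums ∑ j:si<tj≤t (tj − si)γ · ε for γ = −1 and for γ slightly larger: as ε ↓ 0, the former grows like log(1/ε), while the latter stays bounded by the convergent integral, in line with (3.6.60). A short sketch (ours, not from the thesis), with the illustrative grid tj − si = ε, 2ε, 3ε, . . .:

```python
# Illustration (ours) of the borderline exponent -1 in (3.6.68): on the grid
# t_j - s_i = eps, 2*eps, 3*eps, ..., the sum sum_j (t_j - s_i)**gamma * eps
# grows like log(1/eps) when gamma = -1, but converges (here to roughly
# 1/(gamma + 1)) once gamma > -1, cf. the integral comparison (3.6.60).
import numpy as np

def riemann_sum(gamma, eps, t=1.0):
    gaps = np.arange(eps, t, eps)   # illustrative values of t_j - s_i up to time t
    return float(np.sum(gaps**gamma) * eps)

for eps in [1e-2, 1e-4, 1e-6]:
    print(f"eps={eps:.0e}  gamma=-1.0: {riemann_sum(-1.0, eps):7.3f}"
          f"  gamma=-0.9: {riemann_sum(-0.9, eps):7.3f}")
```

Running this shows the γ = −1 column increasing without bound while the γ = −0.9 column stabilizes, which is exactly the dichotomy exploited below.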
These observations lead us to consider modifying the arguments in Section 3.6.3 by replacing tj with a 101 3.6. Conditional separation of approximating solutions “larger” value, subject to the condition that certain P-independence, similar to (3.5.8) with tj replaced by the resulting value, still holds. Let us start with identifying the value to replace tj . The idea comes from the following observation. Observation. The support process of a lateral cluster Y j takes a positive amount of time after its birth to meet the support of Xi, thereby leading to a time tcj larger than tj . Moreover, prior to t c j , the supports of X i and Y j separate. (Cf. Figure 3.2.) We first formalize the definition of this time tcj . Let j ∈ N with tj ∈ (si, si + 1]. Recall that the range for the possible values y of yj associated with a lateral cluster is 2 ( ε1/2 + (tj − si)β′ ) ≤ |y − xi| ≤ 2(ε1/2 + (t− si)β′), (3.6.69) and we use PXiβ (·) and PY j β (·) to envelope the support processes of Xi and Y j , respectively. Let the processes of parabolas {PXiβ (t); t ∈ [si,∞)} and{PY jβ (t); t ∈ [tj ,∞)} evolve in the deterministic way, and consider the sup- port contact time tcj(yj), that is, the first time t when PX i β (t) and PY j β (t) intersect. Here, for any y satisfying (3.6.69), tcj(y) ∈ (tj ,∞) solves{ xi + ε 1/2 + ( tcj(y)− si )β = y − ε1/2 − (tcj(y)− tj)β, if y > xi, xi − ε1/2 − ( tcj(y)− si )β = y + ε1/2 + ( tcj(y)− tj )β , if y < xi. (3.6.70) By simple arithmetic, we see that the minimum of tcj(y) for y satisfying (3.6.69) is attained at the boundary cases where y satisfies 2 ( ε1/2 + (tj − si) β′) = |y− xi|. Let us consider the worst case of the support contact time as t?j , min { tcj(y); y satisfies (3.6.69) } . (3.6.71) Recall that β′ < β by (3.6.11)-(b). Lemma 3.19. Let j ∈ N with tj ∈ (si, si + 1]. (1). The number t?j defined by (3.6.71) satisfies t?j = si +A(tj − si) · (tj − si) β′ β , (3.6.72) 102 3.6. Conditional separation of approximating solutions where A(r) is the unique number in ( r 1−β′ β ,∞) solving A(r)β + [ A(r)− r1−β ′ β ]β = 2, r ∈ (0, 1]. (3.6.73) (2). The function A(·) defined by (3.6.73) satisfies 1 ≤ A(r) ≤ 1 + r1−β ′ β , ∀ r ∈ (0, 1]. (3.6.74) Proof. Without loss of generality, we may assume that t?j = t c j(y) for y satisfying xi − y = 2 ( ε1/2 + (tj − si)β′ ) . Using this particular value y of yj in (3.6.70), we see that t?j solves the equation xi − ε1/2 − ( t?j − si )β =y + ε1/2 + ( t?j − tj )β =xi − ε1/2 − 2(tj − si)β′ + ( t?j − tj )β . Taking t?j = si + A · (tj − si) β′ β for some constant A ∈ (0,∞) left to be determined, we obtain from the foregoing equality that 2(tj − si)β′ =Aβ · (tj − si)β′ + [ A · (tj − si) β′ β − (tj − si) ]β =Aβ · (tj − si)β′ + [ A− (tj − si)1− β′ β ]β · (tj − si)β′ , which shows that A = A(tj− si) for A(·) defined by (3.6.73) upon cancelling (tj − si)β′ on both sides. We have obtained (1). From the definition (3.6.73) of A(·), we obtain 2A(r)β ≥A(r)β + [A(r)− r1−β′β ]β = 2, 2 [ A(r)− r1−β ′ β ]β ≤A(r)β + [A(r)− r1−β′β ]β = 2, and both inequalities in (3.6.74) follow. The proof is complete. As a result of Lemma 3.19, we have P ( t?j < T Xi 0 ) <a ε(tj − si)− β′ β , (3.6.75) where the exponent −β′β is now an improvement in terms of our preceding discussion about the factor (3.6.68). The value t?j will serve as the desired replacement of tj . We then turn to show how t?j still allows some independence similar to (3.5.8). 103 3.6. Conditional separation of approximating solutions Lemma 3.20 (Orthogonal continuation). 
Let (Ht) be a filtration satis- fying the usual conditions, and U and V be two (Ht)-Feller diffusions such that U0⊥⊥V0 and, for some (Ht)-stopping σ⊥, 〈U, V 〉σ⊥ ≡ 0. Then by enlarg- ing the underlying filtered probability space if necessary and writing again (Ht) for the resulting filtration with a slight abuse of notation in this case, we can find a (Ht)-Feller diffusion Û such that Û⊥⊥V and Û = U over [0, σ⊥]. Proof. We only give a sketch of the proof here, and leave the details, calling for standard arguments, to the readers. Using Lévy’s theorem, we can define a Brownian motion B̂ by B̂t = ∫ TU0 ∧σ⊥∧t 0 1√ Us dUs + ∫ t 0 1[TU0 ∧σ⊥<s]dBs, for some independent Brownian motion B. We can use B̂ to solve for a Feller diffusion Û with initial value U0. Then the proof of pathwise uniqueness for Feller diffusions (cf. [47]) gives Û = U on [0, σ⊥]. Note that 〈Û , V 〉 ≡ 0, and consider the martingale problem associated with a two-dimensional independent Feller diffusions with initial values U0 and V0. By its uniqueness, Û⊥⊥V . Hence, Û is the desired continuation of U beyond σ⊥. We now apply Lemma 3.20 to the total mass processes Xi(1) and Y j(1) under P to give the following analogue of (3.5.8). Proposition 3.21. Let i, j ∈ N be given so that si < tj . Suppose that σ⊥ is a (Gt)-stopping time such that σ⊥ ≥ tj and 〈Xi(1), Y j(1)〉σ⊥ ≡ 0. Then for r2 > r1 ≥ tj and nonnegative Borel measurable functions H1, H2, and h, EP [ H1 ( Y jr (1); r ∈ [tj , r2] ) H2 ( Xir(1); r ∈ [si, r1] ) h(yj , xi); r1 ≤ σ⊥ ] ≤EP [ H1 ( Y jr (1); r ∈ [tj , r2] ) ]× EP[H2(Xir(1); r ∈ [si, r1])] × EP [h(yj , xi)] . (3.6.76) Proof. By the monotone class theorem, we may only consider the case that H1 ( Y jr (1); r ∈ [tj , r2] ) =H1,1 ( Y jr (1); r ∈ [tj , r1] ) H1,2 ( Y jr (1); r ∈ [r1, r2] ) , H2 ( Xir(1); r ∈ [si, r1] ) =H2,1 ( Xir(1); r ∈ [si, tj ] ) H2,2 ( Xir(1); r ∈ [tj , r1] ) , for nonnegative Borel measurable functions Hk,`. 104 3.6. Conditional separation of approximating solutions As the first step, we condition on Gr1 and obtain EP [ H1 ( Y jr (1); r ∈ [tj , r2] ) H2 ( Xir(1); r ∈ [si, r1] ) h(yj , xi); r1 ≤ σ⊥ ] =EP [ H1,1 ( Y jr (1); r ∈ [tj , r1] ) EP [ H1,2 ( Y jr (1); r ∈ [r1, r2] ) ∣∣∣Gr1] H2 ( Xir(1); r ∈ [si, r1] ) h(yj , xi); r1 ≤ σ⊥ ] . (3.6.77) Since Y j(1) is a (Gt)-Feller process, we know that EP [ H1,2 ( Y jr (1); r ∈ [r1, r2] ) ∣∣∣Gr1] = Ĥ1,2 (Y jr1(1)) (3.6.78) for some nonnegative Borel measurable function Ĥ1,2. Hence, from (3.6.77), we get EP [ H1 ( Y jr (1); r ∈ [tj , r2] ) H2 ( Xir(1); r ∈ [si, r1] ) h(yj , xi); r1 ≤ σ⊥ ] =EP [ H1,1 ( Y jr (1); r ∈ [tj , r1] ) Ĥ1,2 ( Y jr1(1) ) H2 ( Xir(1); r ∈ [si, r1] ) h(yj , xi); r1 ≤ σ⊥ ] . (3.6.79) Next, since Y jtj (1) ≡ ψ(1)ε is obviously P-independent of Xitj (1) and σ⊥ ≥ tj by assumption, we can do an orthogonal continuation of Xi(1) over [σ⊥,∞) by Lemma 3.20. This gives a Feller diffusion X̂i such that X̂i⊥⊥Y j(1) under P and X̂i,σ⊥ = Xi(1)σ⊥ . Hence, Xi(1) = X̂i over [si, r1] on [ r1 ≤ σ⊥ ] and from (3.6.79) we get EP [ H1 ( Y jr (1); r ∈ [tj , r2] ) H2 ( Xir(1); r ∈ [si, r1] ) h(yj , xi); r1 ≤ σ⊥ ] =EP [ H1,1 ( Y jr (1); r ∈ [tj , r1] ) Ĥ1,2 ( Y jr1(1) ) H2 ( X̂ir; r ∈ [si, r1] ) h(yj , xi); r1 ≤ σ⊥ ] ≤EP [ H1,1 ( Y jr (1); r ∈ [tj , r1] ) Ĥ1,2 ( Y jr1(1) ) H2 ( X̂ir; r ∈ [si, r1] ) h(yj , xi) ] , (3.6.80) where the last inequality follows from the non-negativity of Ĥ1,2, Hk,`, and h. 105 3.6. Conditional separation of approximating solutions Next, we consider conditioning on Gtj . 
From (3.6.80), we get EP [ H1 ( Y jr (1); r ∈ [tj , r2] ) H2 ( Xir(1); r ∈ [si, r1] ) h(yj , xi); r1 ≤ σ⊥ ] ≤EP [ EP [ H1,1 ( Y jr (1); r ∈ [tj , r1] ) Ĥ1,2 ( Y jr1(1) ) H2,2 ( X̂ir; r ∈ [tj , r1] )∣∣∣Gtj] H2,1 ( X̂ir; r ∈ [si, tj ] ) h(yj , xi) ] . (3.6.81) To evaluate the conditional expectation in the last term, we use the in- dependence between X̂i and Y j(1) and deduce from the martingale prob- lem formulation and Theorem 4.4.2 of [12] that the two-dimensional process( X̂i, Y j(1) ) [tj ,∞) is (Gt)t≥tj -Markov with joint law L ( X̂i[tj ,∞) )⊗L (Y j(1)[tj ,∞)). Hence, EP [ H1,1 ( Y jr (1); r ∈ [tj , r1] ) Ĥ1,2 ( Y jr1(1) ) H2,2 ( X̂ir; r ∈ [tj , r1] )∣∣∣Gtj] =EP [ H1,1 ( Y jr (1); r ∈ [tj , r1] ) Ĥ1,2 ( Y jr1(1) )] × E P0 X̂itj [H2,2 (Zr; r ∈ [0, r1 − tj ])] , where we recall that (Z,P0z) denotes a copy of 1 4BESQ 0 (4z). (The value of Y j(1) at tj is ψ(1)ε.) Applying the foregoing equality to (3.6.81) and using (3.6.78), we obtain EP [ H1 ( Y jr (1); r ∈ [tj , r2] ) H2 ( Xir(1); r ∈ [si, r1] ) h(yj , xi); r1 ≤ σ⊥ ] ≤EP [H1 (Y jr (1); r ∈ [tj , r2])] × EP [ E P0 X̂itj [H2,2 (Zr; r ∈ [0, r1 − tj ])]H2,1 ( X̂ir; r ∈ [si, tj ] ) h(yj , xi) ] =EP [ H1 ( Y jr (1); r ∈ [tj , r2] )]× EP [ E P0 Xi(1)tj [ H2,2 (Zr; r ∈ [0, r1 − tj ]) ] H2,1 ( Xir(1); r ∈ [si, tj ] ) h(yj , xi) ] , (3.6.82) where the last equality follows since we only redefine Xi(1)t for t ≥ σ⊥ to obtain X̂i, whereas σ⊥ ≥ tj . The rest is easy to obtain. Using (3.5.8), we 106 3.6. Conditional separation of approximating solutions see that (3.6.82) gives EP [ H1 ( Y jr (1); r ∈ [tj , r2] ) H2 ( Xir(1); r ∈ [si, r1] ) h(yj , xi); r1 ≤ σ⊥ ] ≤EP [H1 (Y jr (1); r ∈ [tj , r2])] × EP [ E P0 Xi(1)tj [H2,2 (Zr; r ∈ [0, r1 − tj ])]H2,1 ( Xir(1); r ∈ [si, tj ] )]× EP [h(yj , xi)] =EP [ H1 ( Y jr (1); r ∈ [tj , r2] )] EP [ H2 ( Xir(1); r ∈ [si, r1] )] EP [h(yj , xi)] . We have obtained the desired inequality, and the proof is complete. We are now ready to prove Lemma 3.15 with arguments similar to those in Section 3.6.3. The following steps are labelled in the same way as their counterparts in Section 3.6.3, except that (Step 2-5) and (Step 3) below correspond to (Step 2-6) and (Step 4) in Section 3.6.3, respectively. Due to the similarity, we will only point out the key changes, leaving other details to readers. Recall that we fix t ∈ [si + ε2 , si + 1], i ∈ N, and ε ∈ ( 0, [8ψ(1)]−1 ∧ 1]. (Step 1). We begin with a simple observation for the integral term ∫ t∧τ i∧σXiβ ∧σY jβ tj 1 Xis(1) ∫ R Xi(x, s)1/2Y j(x, s)1/2dxds in (3.6.19), for yj = y satisfying (3.6.69) and j ∈ N with tj ∈ (si, si + 1]. For s ∈ [tj , t ∧ τ i ∧ σXiβ ∧ σY j β ] with s < t ? j , the support processes of X i and Y j can be enveloped by PXiβ (·) and PY j β (·) up to time s, respectively, and PXiβ (s) ∩ PY j β (s) = ∅ by the definition of t?j in (3.6.71). Hence, for such s,∫ R Xi(x, s)1/2Y j(x, s)1/2dx = 0. 107 3.6. Conditional separation of approximating solutions Using the bound (3.6.66), we obtain as for (3.6.23) that EQ i  ∑ j∈Li β′ (t,t∧τ i∧σX i β ) ψ(1)ε+ ∫ t∧τ i∧σXiβ ∧σY jβ tj 1 Xis(1) ∫ R Xi(s, x)1/2Y j(s, x)1/2dxds   <a ∑ j:si<tj≤t (t− si)β′ε+ ∑ j:si<tj≤t ∫ t tj ds1t?j<s 1 (s− si)η/2 EQ i [ [ Y js (1) ]1/2 ; s < τ i ∧ σXiβ ∧ σY j β , 2 ( ε1/2 + (tj − si)β′ ) ≤ |yj − xi| ≤ 2(ε1/2 + (t− si)β′)]. (3.6.83) Hence, for lateral clusters, we consider EQ i [ [ Y js (1) ]1/2 ; s < τ i ∧ σXiβ ∧ σY j β , 2 ( ε1/2 + (tj − si)β′ ) ≤ |yj − xi| ≤ 2(ε1/2 + (t− si)β′)], s ∈ (t?j , t], si < tj < t, t?j < t. (Step 2-1). 
We partition the event [ Xis(1) TX i 1 > 0 ] into the two events in 108 3.6. Conditional separation of approximating solutions (3.6.28) and (3.6.29) with tj replaced by t?j . Then as in (3.6.30), we write EQ i [ [ Y js (1) ]1/2 ; s < τ i ∧ σXiβ ∧ σY j β , 2 ( ε1/2 + (tj − si)β′ ) ≤ |yj − xi| ≤ 2(ε1/2 + (t− si)β′)] ≤ 1 ψ(1)ε EP [ [ Y js (1) ]1/2 ; s < τ i ∧ σXiβ ∧ σY j β , 2 ( ε1/2 + (tj − si)β′ ) ≤ |yj − xi| ≤ 2(ε1/2 + (t− si)β′), TX i 1 < T Xi 0 ≤ t?j ] + 1 ψ(1)ε EP [∣∣∣Xi(1)TXi1s − ψ(1)ε∣∣∣ [Y js (1)]1/2 ; s < τ i ∧ σXiβ ∧ σY jβ , 2 ( ε1/2 + (tj − si)β′ ) ≤ |yj − xi| ≤ 2(ε1/2 + (t− si)β′), Xis(1) TX i 1 > 0, t?j < T Xi 0 ] + 1 ψ(1)ε · ψ(1)εEP [ [ Y js (1) ]1/2 ; s < τ i ∧ σXiβ ∧ σY j β , 2 ( ε1/2 + (tj − si)β′ ) ≤ |yj − xi| ≤ 2(ε1/2 + (t− si)β′), t?j < TXi0 ], ∀ s ∈ (t?j , t], si < tj < t, t?j < t, (3.6.84) where we replace the event [ Xis(1) TX i 1 > 0, t?j < T Xi 0 ] by the larger one[ t?j < T Xi 0 ] for the third term. (Step 2-2). Consider the first term on the right-hand side of (3.6.84). We have 1 ψ(1)ε EP [ [ Y js (1) ]1/2 ; s < τ i ∧ σXiβ ∧ σY j β 2 ( ε1/2 + (tj − si)β′ ) ≤ |yj − xi| ≤ 2(ε1/2 + (t− si)β′), TX i 1 < T Xi 0 ≤ t?j ] ≤ 1 ψ(1)ε EP [ [ Y js (1) ]1/2 ; s ≤ T Y j1 , t?j ≤ σX i β ∧ σY j β 2 ( ε1/2 + (tj − si)β′ ) ≤ |yj − xi| ≤ 2(ε1/2 + (t− si)β′), TX i 1 < T Xi 0 ≤ t?j ] , ∀ s ∈ (t?j , t], si < tj < t, t?j < t. (3.6.85) 109 3.6. Conditional separation of approximating solutions We then apply Proposition 3.21, taking σ⊥ = ( σX i β ∧ σY j β ∧ t?j ) ∨ tj , r1 = t?j , r2 = s. (3.6.86) Hence, from (3.6.66) and (3.6.85), we obtain 1 ψ(1)ε EP [ [ Y js (1) ]1/2 ; s < τ i ∧ σXiβ ∧ σY j β 2 ( ε1/2 + (tj − si)β′ ) ≤ |yj − xi| ≤ 2(ε1/2 + (t− si)β′), TXi1 < TXi0 ≤ t?j] <a 1 ε P ( TX i 1 < T Xi 0 ≤ t?j ) (t− si)β′EP 0 ψ(1)ε [( Zs−tj )1/2 ; s− tj ≤ TZ1 ] <a(t− si)β ′ · EP0ψ(1)ε [( Zs−tj )1/2 ; s− tj ≤ TZ1 ] , ∀ s ∈ (t?j , t], si < tj < t, t?j < t. (3.6.87) (Step 2-3). Let us consider the second term in (3.6.84). As before, using (3.6.34) gives 1 ψ(1)ε EP [∣∣∣Xis(1)TXi1 − ψ(1)ε∣∣∣ [Y js (1)]1/2 ; s < τ i ∧ σXiβ ∧ σY jβ , 2 ( ε1/2 + (tj − si)β′ ) ≤ |yj − xi| ≤ 2(ε1/2 + (t− si)β′), Xis(1)TXi1 > 0, t?j < TXi0 ] <a εα N0 (s− si)α + (s− si)ξ ε EP [[ Y js (1) ]1/2 ; s ≤ T Y j1 , t?j ≤ σX i β ∧ σY j β 2 ( ε1/2 + (tj − si)β′ ) ≤ |yj − xi| ≤ 2(ε1/2 + (t− si)β′), t?j < TXi0 ], ∀ s ∈ (t?j , t], si < tj < t, t?j < t. (3.6.88) Taking the choice (3.6.86) again, we obtain from Proposition 3.21, (3.6.66), and the last display that 1 ψ(1)ε EP [∣∣∣Xis(1)TXi1 − ψ(1)ε∣∣∣ [Y js (1)]1/2 ; s < τ i ∧ σXiβ ∧ σY jβ , 2 ( ε1/2 + (tj − si)β′ ) ≤ |yj − xi| ≤ 2(ε1/2 + (t− si)β′), Xis(1)TXi1 > 0, t?j < TXi0 ] <a εα N0 (s− si)α + (s− si)ξ ε (t− si)β′P ( t?j < T Xi 0 ) EP 0 ψ(1)ε [ (Zs−tj ) 1/2; s− tj ≤ TZ1 ] , ∀ s ∈ (t?j , t], si < tj < t, t?j < t. 110 3.6. Conditional separation of approximating solutions Hence, by a computation similar to (3.6.36) and Lemma 3.19, the foregoing display gives 1 ψ(1)ε EP [∣∣∣Xis(1)TXi1 − ψ(1)ε∣∣∣ [Y js (1)]1/2 ; s < τ i ∧ σXiβ ∧ σY jβ , 2 ( ε1/2 + (tj − si)β′ ) ≤ |yj − xi| ≤ 2(ε1/2 + (t− si)β′), Xis(1)TXi1 > 0, t?j < TXi0 ] <a ( εα N0 (s− si)α + (s− si)ξ ) (t− si)β′(tj − si)− β′ β EP 0 ψ(1)ε [ (Zs−tj ) 1/2; s− tj ≤ TZ1 ] , ∀ s ∈ (t?j , t], si < tj < t, t?j < t. (3.6.89) (Step 2-4). 
For the third term in (3.6.84), the calculation in the foregoing (Step 2-3) readily shows 1 ψ(1)ε · ψ(1)εEP [ [ Y js (1) ]1/2 ; s < τ i ∧ σXiβ ∧ σY j β , 2 ( ε1/2 + (tj − si)β′ ) ≤ |yj − xi| ≤ 2(ε1/2 + (t− si)β′), t?j < TXi0 ] <a(t− si)β ′ (tj − si)− β′ β · ε · EP0ψ(1)ε [ (Zs−tj ) 1/2; s− tj ≤ TZ1 ] , ∀ s ∈ (t?j , t], si < tj < t, t?j < t. (3.6.90) (Step 2-5). At this step, we apply (3.6.87), (3.6.89), and (3.6.90) to (3.6.84) 111 3.6. Conditional separation of approximating solutions and give a summary as follows: EQ i [ [ Y js (1) ]1/2 ; s < τ i ∧ σXiβ ∧ σY j β , 2 ( ε1/2 + (tj − si)β′ ) ≤ |yj − xi| ≤ 2(ε1/2 + (t− si)β′)] <a [ (t− si)β′ + (t− si)β′(tj − si)− β′ β ( εα N0 (s− si)α + (s− si)ξ + ε )] × EP0ψ(1)ε [ (Zs−tj ) 1/2; s− tj ≤ TZ1 ] <a (t− si)β ′ (tj − si)− β′ β ( (tj − si) β′ β + εα N0 (s− si)α + (s− si)ξ + ε ) × εα N0 2 (s− tj)α2 ( ε s− tj )1−αN0 4 + (t− si)β′(tj − si)− β′ β ( (tj − si) β′ β + εα N0 (s− si)α + (s− si)ξ + ε ) × ε 12 ( ε s− tj )1−αN0 4 + (t− si)β′ε(s− tj) ξ 2 −1 + (t− si)β′(tj − si)− β′ β ( εα N0 (s− si)α + ε ) ε(s− tj) ξ 2 −1 + (t− si)β′(tj − si)− β′ β (s− si)ξε(s− tj) ξ 2 −1, ∀ s ∈ (t?j , t], si < tj < t, t?j < t, (3.6.91) where as in (Step 2-6) of Section 3.6.3, the last “<a”-inequality follows again from Lemma 3.17, some arithmetic, and an application of (3.6.51). Now, for any s ∈ (t?j , t] with si < tj < t and t?j < t, we have (tj − si) β′ β + εα N0 (s− si)α + (s− si)ξ + ε <a (s− si)α, which results from (3.6.11)-(a), (3.6.11)-(b), Lemma 3.19, and tj − si ≥ ε2 . 112 3.6. Conditional separation of approximating solutions Hence, with some simplifications similar to (3.6.52)–(3.6.54), we obtain EQ i [ [ Y js (1) ]1/2 ; s < τ i ∧ σXiβ ∧ σY j β , 2 ( ε1/2 + (tj − si)β′ ) ≤ |yj − xi| ≤ 2(ε1/2 + (t− si)β′)] <a (t− si)β ′ (tj − si)− β′ β (s− si)α(s− tj)α2 +α N0 4 −1ε1+ αN0 4 + (t− si)β′(tj − si) α 2 −β′ β (s− si)α(s− tj)α N0 4 −1ε1+ αN0 4 + (t− si)β′(s− tj) ξ 2 −1ε + (t− si)β′(tj − si)− β′ β (s− si)ξ(s− tj) ξ 2 −1ε, ∀ s ∈ (t?j , t], si < tj < t, t?j < t. (3.6.92) (Step 3). We complete the proof of Lemma 3.15 in this step. Applying the bound (3.6.92) to the right-hand side of the inequality (3.6.83). We have EQ i  ∑ j∈Li β′ (t,t∧τ i∧σX i β ) ψ(1)ε+ ∫ t∧τ i∧σXiβ ∧σY jβ tj 1 Xis(1) ∫ R Xi(x, s)1/2Y j(x, s)1/2dxds   <a(t− si)β ′ ∑ j:si<tj≤t ε + (t− si)β′ ∑ j:si<tj≤t (tj − si)− β′ β ∫ t tj (s− si)− η 2 +α(s− tj)α2 +α N0 4 −1ds · ε1+α N0 4 + (t− si)β′ ∑ j:si<tj≤t (tj − si) α 2 −β′ β ∫ t tj (s− si)− η 2 +α(s− tj)α N0 4 −1ds · ε1+α N0 4 + (t− si)β′ ∑ j:si<tj≤t ∫ t tj (s− si)− η 2 (s− tj) ξ 2 −1ds · ε + (t− si)β′ ∑ j:si<tj≤t (tj − si)− β′ β ∫ t tj (s− si)− η 2 +ξ(s− tj) ξ 2 −1ds · ε. Thanks to the second inequality in (3.6.11)-(b), the integral domination out- lined in (Step 3) of Section 3.6.3 can be applied to each term on the right- hand side of the foregoing <a-inequality, giving bounds which are convergent 113 3.7. Uniform separation of approximating solutions integrals. As in (Step 4) of Section 3.6.3, we obtain EQ i  ∑ j∈Li β′ (t,t∧τ i∧σX i β ) ψ(1)ε+ ∫ t∧τ i∧σXiβ ∧σY jβ tj 1 Xis(1) ∫ R Xi(x, s)1/2Y j(x, s)1/2dxds   <a(t− si)β ′+1 + (t− si)β′− η 2 + 3α 2 · εα N0 4 + (t− si)β′− η 2 + 3ξ 2 , which proves Lemma 3.15. 3.7 Uniform separation of approximating solutions In this section, we prove the main theorem on the pathwise non-uniqueness of nonnegative solutions of the SPDE (1.2.9), and the result is summarized in Theorem 3.26. 
We will need the uniform separation of approximating solutions, and our task will be to show how it can be obtained from the conditional separation implied by Theorem 3.10 (cf. Remark 3.11) and to obtain appropriate probability bounds. We continue to suppress the dependence on ε of the approximating solutions and use only Pε for emphasis, unless otherwise mentioned. Our program is sketched as follows. For small r ∈ (0, 1], we choose a number ∆(r) ∈ (0,∞) and an event S(r) satisfying the following properties. First, ∆(r) depends only on the parameter vector in Assumption 1 and r. Second, for every small ε ∈ (0, 1], S(r) = Sε(r) is defined by the approximating solutions X and Y such that S(r) ⊆ [ sup 0≤s≤2r ‖Xs − Ys‖rap ≥ ∆(r) ] (3.7.1) and lim inf ε↓0+ Pε ( S(r) ) > 0. (3.7.2) Let us define the events Sε(r). First, recall the parameter vector chosen in Assumption 1 as well as the constants κj defined in Theorem 3.10. In the following, we need to use small portions of the constants κ1 and κ3, and by (3.6.11)-(d) we can take ℘ ∈ (0, κ1 ∧ κ3) such that κ1 − ℘ > η. We insist that ℘ depends only on the parameter vector in (3.6.10). For any i ∈ N, ε ∈ (0, [8ψ(1)]−1 ∧ 1], and random time T ≥ si, let Gi(T ) = Giε(T ) be the event defined by Gi(T ) = { Xis([xi − ε1/2 − (s− si)β, xi + ε1/2 + (s− si)β]) ≥ (s− si) η 4 and Ys([xi − ε1/2 − (s− si)β, xi + ε1/2 + (s− si)β]) ≤ K∗ [(s− si)κ1−℘ + εκ2(s− si)κ3−℘] , ∀ s ∈ [si, T ] } , (3.7.3) where the constant K∗ ∈ (0,∞) is as in Theorem 3.10. Note that Gi(·) is decreasing in the sense that, for any random times T1, T2 with T1 ≤ T2, Gi(T1) ⊇ Gi(T2). We then choose S(r) = Sε(r) , brε−1c⋃ i=1 Giε(si + r), r ∈ (0,∞), ε ∈ ( 0, 1 8ψ(1) ∧ 1 ] . (3.7.4) We will explain later on that the events Gi( · ) roughly capture the main feature of the events considered in Theorem 3.10 as well as the initial behaviour of Xi(1) under Qiε discussed in Section 3.6.1. The inclusion (3.7.1) is now a simple consequence of our choice of the events Gi( · ). Lemma 3.22. For some r0 ∈ (0, 1], we can find ε0(r) ∈ ( 0, r ∧ [8ψ(1)]−1 ∧ 1] and ∆(r) ∈ (0,∞) for any r ∈ (0, r0] so that the inclusion (3.7.1) holds almost surely for any ε ∈ (0, ε0(r)]. The constant ∆(r) depends only on r and the parameter vector chosen in Assumption 1. Proof. We first specify the strictly positive numbers r0, ε0(r), and ∆(r). Since the small portion ℘ taken away from κ1 and κ3 satisfies κ1 − ℘ > η, we can choose r0 ∈ (0, 1] such that rη 4 − 2K∗rκ1−℘ > 0, ∀ r ∈ (0, r0]. (3.7.5) Then we choose, for every r ∈ (0, r0], a number ε0(r) ∈ ( 0, r∧ [8ψ(1)]−1∧1] such that 0 < ε0(r) κ2 ≤ rκ1−κ3 . (3.7.6) Finally, we set ∆(r) , 1 2 [( rη 4 − 2K∗rκ1−℘ 2 + 2rβ ) ∧ 1 ] > 0, r ∈ (0, r0]. (3.7.7) We now check that the foregoing choices give (3.7.1). Fix r ∈ (0, r0], ε ∈ (0, ε0(r)], and 1 ≤ i ≤ brε−1c. Note that brε−1c ≥ 1 since ε ≤ r. The arguments in this paragraph are understood to be valid on Gi(si + r). By definition, Ys([xi − ε1/2 − (s− si)β, xi + ε1/2 + (s− si)β]) ≤ K∗ [(s− si)κ1−℘ + εκ2(s− si)κ3−℘] , ∀ s ∈ [si, si + r]. (3.7.8) In particular, (3.7.6) and (3.7.8) imply that Ysi+r([xi − ε1/2 − rβ, xi + ε1/2 + rβ]) ≤ 2K∗rκ1−℘. Since X ≥ Xi, the last inequality and the definition of Gi(si + r) imply Xsi+r([xi − ε1/2 − rβ, xi + ε1/2 + rβ]) − Ysi+r([xi − ε1/2 − rβ, xi + ε1/2 + rβ]) ≥ rη 4 − 2K∗rκ1−℘, where the lower bound is strictly positive by (3.7.5). (A numerical illustration of the choice of r0 in (3.7.5), under placeholder constants, is sketched below.)
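The existence of r0 in (3.7.5) is elementary arithmetic: since κ1 − ℘ > η, dividing by rη shows that rη/4 − 2K∗rκ1−℘ > 0 exactly when rκ1−℘−η < 1/(8K∗), that is, for all r below the explicit threshold (8K∗)−1/(κ1−℘−η) (intersected with 1). The following sketch uses hypothetical values of K∗, η, and κ1 − ℘ only; they are placeholders, not the thesis's constants.

```python
# Placeholder illustration (ours) of the choice of r0 in (3.7.5). With
# kappa1 - wp > eta,
#   r**eta/4 - 2*K_star*r**(kappa1 - wp) > 0
#       iff  r < (8*K_star)**(-1/(kappa1 - wp - eta)),
# so any r0 below this threshold (and below 1) works. All constants here are
# hypothetical; the thesis's come from Assumption 1 and Theorem 3.10.
K_star, eta, kappa1_minus_wp = 5.0, 1.5, 2.0   # placeholders, kappa1_minus_wp > eta
r0 = min((8.0 * K_star) ** (-1.0 / (kappa1_minus_wp - eta)), 1.0)
for r in (r0 / 2.0, r0 / 10.0, r0 / 100.0):
    assert r**eta / 4.0 - 2.0 * K_star * r**kappa1_minus_wp > 0.0
print("an admissible r0:", r0)
```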
To carry this to the Crap(R)-norm of Xsi+r − Ysi+r, we make an elementary observation: if f is Borel measurable, integrable on a finite interval I, and satisfies ∫ I f > A, then there must exist some x ∈ I such that f(x) > A/`(I), where `(I) is the length of I. Using this, we obtain from the last inequality that, for some x ∈ [xi − ε1/2 − rβ, xi + ε1/2 + rβ], X(x, si + r)− Y (x, si + r) ≥ rη 4 − 2K∗rκ1−℘ 2ε1/2 + 2rβ ≥ rη 4 − 2K∗rκ1−℘ 2 + 2rβ , so the definition of ‖ · ‖rap (in (1.2.11)) and the definition (3.7.7) of ∆(r) entail ∆(r) ≤ ‖Xsi+r − Ysi+r‖rap ≤ sup 0≤s≤2r ‖Xs − Ys‖rap, where the second inequality follows since si = (2i−1) 2 ε and 1 ≤ i ≤ brε−1c. In summary, we have shown that (3.7.1) holds because each component Gi(si+r) of S(r) satisfies the analogous inclusion. The proof is complete. 116 3.7. Uniform separation of approximating solutions We move on to show that (3.7.2) holds whenever r > 0 is small enough. To use Theorem 3.10, we need to bring the involved stopping times into the events Gi( · ) and change the statements about Y . First, we define Γi(r) = Γiε(r) by Γi(r) , PXiβ (si + r) ∩  ⋃ j:tj≤si supp(Y j)  = ∅  ∩ ⋂ j:tj≤si+r [ σY j β > tj + 3r ] ∩ [ σX i β > si + 2r ] , r ∈ (0, 1], i ∈ N, ε ∈ ( 0, 1 8ψ(1) ∧ 1 ] , (3.7.9) where supp(Y j) denotes the topological support of the two-parameter (ran- dom) function (x, s) 7−→ Y j(x, s). Hence, through Γi(r), we confine the ranges of the supports of Y j , for j ∈ N satisfying tj ≤ si + r, and Xi. As will become clear in passing, one of the reasons for considering this event is to make precise the informal argument of choosing J iβ′( · ), as discussed in Section 3.6.1. Lemma 3.23. Fix r ∈ (0, 1], i ∈ N, and ε ∈ (0, [8ψ(1)]−1∧1]. Then on the event Γi(r) defined by (3.7.9), we have Ys([xi − ε1/2 − (s− si)β, xi + ε1/2 + (s− si)β]) = ∑ j∈J i β′ (s) Y js ([xi − ε1/2 − (s− si)β, xi + ε1/2 + (s− si)β]), ∀ s ∈ [si, si + r]. (3.7.10) In particular, on Γi(r), Ys([xi − ε1/2 − (s− si)β, xi + ε1/2 + (s− si)β]) ≤ ∑ j∈J i β′ (s) Y js (1), ∀ s ∈ [si, si + r]. (3.7.11) Proof. In this proof, we argue on the event Γi(r) and call Θs , {x; (x, s) ∈ Θ} the s-section of a subset Θ of R× R+ for any s ∈ R+. Consider (3.7.10). Since the s-section supp(Y j)s contains the support of Y js ( · ), it suffices to show that, for any s ∈ [si, si + r] and j ∈ N with tj ≤ s and j /∈ J iβ′(s), [xi − ε1/2 − (s− si)β, xi + ε1/2 + (s− si)β] ∩ supp(Y j)s = ∅. (3.7.12) 117 3.7. Uniform separation of approximating solutions If j ∈ N satisfies tj ≤ si, then using the first item in the definition (3.7.9) of Γi(r) gives PXiβ (si + r) ∩ supp(Y j) = ∅. Hence, taking the s-sections of both PXiβ (si + r) and supp(Y j) shows that Y j satisfies (3.7.12). Next, suppose that j ∈ N satisfies si < tj ≤ s but j /∈ J iβ′(s). On one hand, this choice of j implies |yj − xi| > 2 ( ε1/2 + (s− si)β′ ) ≥ 2(ε1/2 + (s− si)β), where the second inequality follows from the assumption r ∈ (0, 1] and the choice β′ < β by (3.6.11)-(b), so Lemma 3.53 entails PXiβ (s) ∩ PY j β (s) = ∅. (3.7.13) On the other hand, using the second item in the definition of Γi(r), we deduce that supp(Y j) ∩ (R× [tj , tj + 3r]) ⊆ PY jβ (tj + 3r). Using tj +r > si+r ≥ s and taking s-sections of supp(Y j) and PY jβ (tj +3r), we obtain from the foregoing inclusion that supp(Y j)s ⊆ [ yj − ε1/2 − (s− tj)β, yj + ε1/2 + (s− tj)β ] =PYjβ (tj + 3r)s =PYjβ (s)s. 
(3.7.14) Now, since PXiβ (s)s = [xi − ε1/2 − (s− si)β, xi + ε1/2 + (s− si)β], (3.7.13) and (3.7.14) give our assertion (3.7.12) for j ∈ N satisfying sj < tj ≤ s and j /∈ J iβ′(s). We have considered all cases for which j ∈ N, tj ≤ s, and j /∈ J iβ′(s). The proof is complete. Recall r0 ∈ (0, 1] and ε0(r) ∈ ( 0, r∧ [8ψ(1)]−1∧1] chosen in Lemma 3.22 and the events S(r) in (3.7.4). Lemma 3.24. For some r1 ∈ (0, r0], we can find ε1(r) ∈ (0, ε0(r)] for any r ∈ (0, r1] such that inf ε∈(0,ε1(r)] Pε ( S(r) ) > 0. (3.7.15) 118 3.7. Uniform separation of approximating solutions Proof. For any i ∈ N, ε ∈ (0, [8ψ(1)]−1 ∧ 1], and random time T ≥ si, we define Ĝi(·) = Ĝiε(·) by Ĝi(T ) =  Xis(1) ≥ (s− si)η 4 and∑ j∈J i β′ (s) Y js (1) ≤ K∗[(s− si)κ1−℘ +εκ2(s− si)κ3−℘], ∀ s ∈ [si, T ]  . (3.7.16) Note that Ĝi(·) is decreasing, and its definition about the masses of Y is the same as the event considered in Theorem 3.10 except for the restrictions from stopping times τ i, σXiβ , and σ Y j β . The connection between Ĝi(·) and Gi(·) is as follows. First note that by (3.7.11), the statement about the masses of Y in Ĝi(r) ∩ Γi(r) implies that in Gi(r) ∩ Γi(r). Also, the statements in Gi( · ) and Ĝi( · ) concerning the masses of Xi are linked by the obvious equality: Xis(1) = X i s([xi − ε1/2 − (s− si)β, xi + ε1/2 + (s− si)β]), ∀ s ∈ [ si, σ Xi β ] . Since σXiβ > si + 2r on Γ i(r), we are led to the inclusion Ĝi ( τ i ∧ (si + r) ) ∩ Γi(r) ⊆ Gi (τ i ∧ (si + r)) ∩ Γi(r) (3.7.17) for any r ∈ (0, 1], i ∈ N, and ε ∈ (0, [8ψ(1)]−1 ∧ 1] (τ i is defined in Propo- sition 3.9). We can also write (3.7.17) as Ĝi ( τ̂ i(si + r) ∧ (si + r) ) ∩ Γi(r) ⊆ Gi (τ̂ i(si + r) ∧ (si + r)) ∩ Γi(r), (3.7.18) where τ̂ i(si + r) , τ i ∧ σXiβ ∧ ∧ j:si<tj≤si+r σY j β . (3.7.19) Here, although the restriction σXiβ ∧ ∧ j:si<tj≤si+r σ Y j β is redundant in (3.7.18) (because σY jβ > tj + 3r > si + r for each j ∈ N with si < tj ≤ si + r by the definition of Γi(r)), we emphasize its role by writing it out. 119 3.7. Uniform separation of approximating solutions We now start bounding Pε ( S(r) ) . For any r ∈ (0, r0] and ε ∈ (0, ε0(r)], we have Pε ( S(r) ) ≥ Pε brε−1c⋃ i=1 Gi ( τ̂ i(si + r) ∧ (si + r) ) ∩ Γi(r) ∩ [TXi1 < TXi0 ] ∩ [τ̂ i(si + r) ≥ si + r]  ≥ Pε brε−1c⋃ i=1 Ĝi ( τ̂ i(si + r) ∧ (si + r) ) ∩ Γi(r) ∩ [TXi1 < TXi0 ] ∩ [τ̂ i(si + r) ≥ si + r]  , where the last inequality follows from the inclusion (3.7.18). We make the restrictions [ TX i 1 < T Xi 0 ] in order to invoke Qi-probabilities later on. By considering separately τ̂ i(si + r) ≥ si + r and τ̂ i(si + r) < si + r, we obtain from the last inequality that Pε ( S(r) ) ≥Pε brε−1c⋃ i=1 Ĝi ( τ̂ i(si + r) ∧ (si + r) ) ∩ Γi(r) ∩ [TXi1 < TXi0 ]  − Pε brε−1c⋃ i=1 [ τ̂ i(si + r) < si + r ] ∩ [TXi1 < TXi0 ]  . (3.7.20) Applying another inclusion-exclusion to the first term on the right-hand side of (3.7.20) now gives the main inequality of this proof: Pε ( S(r) ) ≥Pε brε−1c⋃ i=1 Ĝi ( τ̂ i(si + r) ∧ (si + r) ) ∩ [TXi1 < TXi0 ]  − Pε brε−1c⋃ i=1 Γi(r){ ∩ [ TX i 1 < T Xi 0 ] − Pε brε−1c⋃ i=1 [ τ̂ i(si + r) < si + r ] ∩ [TXi1 < TXi0 ]  , ∀ r ∈ (0, r0], ε ∈ (0, ε0(r)] . (3.7.21) In the rest of this proof, we bound each of the three terms on the right-hand side of (3.7.21) and then choose according to these bounds the desired r1 and ε1(r) for (3.7.15). 120 3.7. Uniform separation of approximating solutions At this stage, we use Proposition 3.9 and Theorem 3.10 in the following way. 
For any ρ ∈ (0, 12), we choose δ1 ∈ (0, 1], independent of i ∈ N and ε ∈ (0, [8ψ(1)]−1 ∧ 1], such that sup { Qiε(τ i ≤ si + δ1); i ∈ N, ε ∈ ( 0, 1 8ψ(1) ∧ 1 ]} ≤ ρ, (3.7.22) sup { Qiε ( ∃ s ∈ (si, si + δ1], ∑ j∈J i β′ (s∧τ i∧σX i β ) Y js (1) τ i∧σXiβ ∧σY j β > K∗[(s− si)κ1−℘ + εκ2 · (s− si)κ3−℘] ) ; i ∈ N, ε ∈ ( 0, 1 8ψ(1) ∧ 1 ]} ≤ ρ. (3.7.23) Consider the first probability on the right-hand side of (3.7.21). We use the elementary inequality: for any events A1, · · · , An for n ∈ N, P  n⋃ j=1 Aj  ≥ n∑ j=1 P(Aj)− n∑ i=1 ∑ j:j 6=i 1≤j≤n P(Ai ∩Aj). Then Pε brε−1c⋃ i=1 Ĝi ( τ̂ i(si + r) ∧ (si + r) ) ∩ [TXi1 < TXi0 ]  ≥ brε−1c∑ i=1 Pε ( Ĝi ( τ̂ i(si + r) ∧ (si + r) ) ∩ [TXi1 < TXi0 ]) − brε−1c∑ i=1 ∑ j:j 6=i 1≤j≤brε−1c Pε ( TX i 1 < T Xi 0 , T Xj 1 < T Xj 0 ) , ∀ r ∈ (0, r0], ε ∈ (0, ε0(r)] , (3.7.24) The first term on the right-hand side of (3.7.24) can be written as brε−1c∑ i=1 Pε ( Ĝi ( τ̂ i(si + r) ∧ (si + r) ) ∩ [TXi1 < TXi0 ]) = brε−1c∑ i=1 ψ(1)ε ·Qiε ( Ĝi ( τ̂ i(si + r) ∧ (si + r) ) ) (3.7.25) 121 3.7. Uniform separation of approximating solutions by the definition of Qiε in (3.5.1). By inclusion-exclusion, we have Qiε ( Ĝi ( τ̂ i(si + r) ∧ (si + r) ) ) ≥Qiε ( Xis(1) ≥ (s− si)η 4 , ∀ s ∈ [si, τ̂ i(si + r) ∧ (si + r)]) −Qiε ( ∃ s ∈ (si, τ̂ i(si + r) ∧ (si + r)] , ∑ j∈J i β′ (s) Y j(1)s > K ∗[(s− si)κ1−℘ + εκ2 · (s− si)κ3−℘] ) , ∀ i ∈ N, ε ∈ ( 0, 1 8ψ(1) ∧ 1 ] . (3.7.26) Recall that τ i,(1) ≤ τ i and Xisi(1) = ψ(1)ε > 0. Hence, by the definition of τ̂ i(si + r), Qiε ( Xis(1) ≥ (s− si)η 4 , ∀ s ∈ [si, τ̂ i(si + r) ∧ (si + r)]) = 1, ∀ i ∈ N, ε ∈ ( 0, 1 8ψ(1) ∧ 1 ] . (3.7.27) For r ∈ (0, δ1], i ∈ N, and ε ∈ ( 0, 1 8ψ(1) ∧ 1 ] , 122 3.7. Uniform separation of approximating solutions the second probability in (3.7.26) can be bounded as Qiε ( ∃ s ∈ (si, τ̂ i(si + r) ∧ (si + r)] , ∑ j∈J i β′ (s) Y js (1) > K∗[(s− si)κ1−℘ + εκ2 · (s− si)κ3−℘] ) ≤Qiε ( ∃ s ∈ (si, τ i ∧ (si + r)], ∑ j∈J i β′ (s∧τ i∧σX i β ) Y js (1) τ i∧σXiβ ∧σY j β > K∗[(s− si)κ1−℘ + εκ2 · (s− si)κ3−℘] ) ≤Qiε ( ∃ s ∈ (si, si + δ1], ∑ j∈J i β′ (s∧τ i∧σX i β ) Y js (1) τ i∧σXiβ ∧σY j β > K∗[(s− si)κ1−℘ + εκ2 · (s− si)κ3−℘ ) +Qiε(τ i ≤ si + δ1) ≤ 2ρ. (3.7.28) Here, the first inequality follows since for s ∈ (si, τ̂ i(si + r) ∧ (si + r)], s ≤ τ i ∧ σXiβ and j ∈ J iβ′(s) =⇒j ∈ J iβ′(si + r) =⇒tj ∈ (si, si + r] =⇒τ̂ i(si + r) ≤ σY jβ =⇒s ≤ σY jβ with the third implication following from the definition of τ̂ i(si + r) in (3.7.19). The first term in the second inequality follows by considering the scenario τ i > si + δ1 and using r ∈ (0, δ1], and the last inequality follows from (3.7.22) and (3.7.23). Applying (3.7.27) and (3.7.28) to (3.7.26), we get Qiε ( Ĝi(τ̂ i(si + r) ∧ (si + r)) ) ≥ 1− 2ρ, ∀ r ∈ (0, δ1 ∧ r0], i ∈ N, ε ∈ (0, ε0(r)] . (3.7.29) 123 3.7. Uniform separation of approximating solutions From (3.7.25) and the last inequality, we have shown that brε−1c∑ i=1 Pε ( Ĝi ( τ̂ i(si + r) ∧ (si + r) ) ∩ [TXi1 < TXi0 ]) ≥ ψ(1) (r − ε) (1− 2ρ), ∀ r ∈ (0, δ1 ∧ r0], ε ∈ (0, ε0(r)] . (Recall that ε0(r) ≤ r.) The second term on the right-hand side of (3.7.24) is relatively easy to bound. Indeed, by using the independence between the clusters Xi and Lemma 3.7, brε−1c∑ i=1 ∑ j:j 6=i 1≤j≤brε−1c Pε ( TX i 1 < T Xi 0 , T Xj 1 < T Xj 0 ) = brε−1c∑ i=1 ∑ j:j 6=i 1≤j≤brε−1c Pε ( TX i 1 < T Xi 0 ) Pε ( TX j 1 < T Xj 0 ) ≤ ψ(1)2r2, ∀ r ∈ (0, 1], ε ∈ ( 0, 1 8ψ(1) ∧ 1 ] . 
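The mechanism of the two bounds just obtained can be checked in a toy computation: each of the brε−1c clusters reaches height 1 before extinction with probability ψ(1)ε (cf. Lemma 3.7), so the first-order sum is about ψ(1)r while the pairwise correction is at most ψ(1)2r2, and the Bonferroni-type lower bound for the union stated above reduces to ψ(1)r − ψ(1)2r2 up to ε-boundary terms. A small sketch (ours; the events here are taken exactly independent, an idealization of the proof, where only the events [TXi1 < TXi0 ] are independent):

```python
# A quick check (ours) of the Bonferroni-type lower bound used above: for
# independent events A_1, ..., A_n with common probability p,
#   P(union) = 1 - (1 - p)**n  >=  n*p - n*(n-1)*p**2,
# which, with n ~ r/eps clusters and p = psi(1)*eps, is the mechanism behind
# the lower bound psi(1)*(r - eps) - psi(1)**2 * r**2.
psi1, r = 1.0, 0.05
for eps in [1e-3, 1e-4, 1e-5]:
    n, p = int(r / eps), psi1 * eps
    union = 1.0 - (1.0 - p) ** n
    bonferroni = n * p - n * (n - 1) * p ** 2
    print(f"eps={eps:.0e}  P(union)={union:.6f}  lower bound={bonferroni:.6f}")
```

In each row the exact union probability dominates the second-order lower bound, as the elementary inequality requires.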
Recalling (3.7.24) and using the two bounds derived above for the sums on its right-hand side, we have the following bound for the first term on the right-hand side of (3.7.21): Pε ( brε−1c⋃ i=1 Ĝi ( τ̂ i(si + r) ∧ (si + r) ) ∩ [TXi1 < TXi0 ] ) ≥ ψ(1)(r − ε)(1− 2ρ)− ψ(1)2r2, ∀ r ∈ (0, δ1 ∧ r0], ε ∈ (0, ε0(r)] . (3.7.30) Next, we consider the second probability on the right-hand side of (3.7.21). By the definition of Γi(r) in (3.7.9) and the general inclusion (A1 ∩A2 ∩A3){ ⊆ (A{1 ∩A2 ∩A3) ∪A{2 ∪A{3, we have Γi(r){ ⊆ {[ PXiβ (si + r) ∩ ( ⋃ j:tj≤si supp(Y j) ) 6= ∅ ] ∩ ⋂ j:tj≤si [ σY j β > tj + 3r ] ∩ [ σX i β > si + 2r ]} ∪ ( ⋃ j:tj≤si+r [ σY j β ≤ tj + 3r ]) ∪ [ σX i β ≤ si + 2r ] , (3.7.31) where we note that the indices j in ⋂ j:tj≤si [ σY j β > tj + 3r ] now range only over j ∈ N with tj ≤ si. Hence, Pε ( brε−1c⋃ i=1 Γi(r){ ∩ [ TX i 1 < T Xi 0 ]) ≤ Pε ( brε−1c⋃ i=1 {[ PXiβ (si + r) ∩ ( ⋃ j:tj≤si supp(Y j) ) 6= ∅ ] ∩ ⋂ j:tj≤si [ σY j β > tj + 3r ] ∩ [ σX i β > si + 2r ] ∩ [ TX i 1 < T Xi 0 ]}) + Pε ( b2rε−1c+1⋃ j=1 [ σY j β ≤ tj + 3r ]) + Pε ( brε−1c⋃ i=1 [ σX i β ≤ si + 2r ]) , (3.7.32) where we have the second probability in the foregoing inequality since tj ≤ sbrε−1c + r =⇒ tj ≤ 2r =⇒ j ≤ b2rε−1c+ 1. Resorting to the conditional probability measures Qiε, we see that the first probability in (3.7.32) can be bounded as Pε ( brε−1c⋃ i=1 {[ PXiβ (si + r) ∩ ( ⋃ j:tj≤si supp(Y j) ) 6= ∅ ] ∩ ⋂ j:tj≤si [ σY j β > tj + 3r ] ∩ [ σX i β > si + 2r ] ∩ [ TX i 1 < T Xi 0 ]}) ≤ brε−1c∑ i=1 ψ(1)εQiε ([ PXiβ (si + r) ∩ ( ⋃ j:tj≤si supp(Y j) ) 6= ∅ ] ∩ ⋂ j:tj≤si [ σY j β > tj + 3r ] ∩ [ σX i β > si + 2r ]) ≤ brε−1c∑ i=1 ψ(1)εC1suppr 1/6 ≤ ψ(1)C1suppr7/6, ∀ r ∈ (0, r0], ε ∈ (0, ε0(r)], where the next-to-last inequality follows from Proposition 3.52, and the constant C1supp ∈ (0,∞) is independent of r ∈ (0, r0] and ε ∈ (0, ε0(r)]. (Here, we use the choice β ∈ [13 , 12) to apply this proposition.) By Proposition 3.51, the second probability in (3.7.32) can be bounded as (recall ε0(r) ≤ r) Pε ( b2rε−1c+1⋃ j=1 [ σY j β ≤ tj + 3r ] ) ≤ C0supp(2rε−1 + 1) · 3εr ≤ 9C0suppr2, ∀ r ∈ (0, r0], ε ∈ (0, ε0(r)], (3.7.33) where C0supp is a constant independent of r ∈ (0, r0] and ε ∈ (0, ε0(r)]. Similarly, Pε ( brε−1c⋃ i=1 [ σX i β ≤ si + 2r ] ) ≤ 2C0suppr2, ∀ r ∈ (0, r0], ε ∈ (0, ε0(r)]. (3.7.34) From (3.7.32) and the last three displays, we have shown that the second probability in (3.7.21) satisfies the bound Pε ( brε−1c⋃ i=1 Γi(r){ ∩ [ TX i 1 < T Xi 0 ] ) ≤ 11C0suppr2 + ψ(1)C1suppr7/6, ∀ r ∈ (0, r0], ε ∈ (0, ε0(r)]. (3.7.35) It remains to consider the last probability on the right-hand side of (3.7.21). Recall the number δ1 chosen for (3.7.22). Similar to the derivation of (3.7.32), we have Pε ( brε−1c⋃ i=1 [ τ̂ i(si + r) < si + r ] ∩ [TXi1 < TXi0 ] ) ≤ Pε ( brε−1c⋃ i=1 [ σX i β ≤ si + r ]) + Pε ( b2rε−1c+1⋃ i=1 [ σY i β ≤ ti + r ]) + brε−1c∑ i=1 ψ(1)εQiε ( τ i < si + r ) ≤ 11C0suppr2 + ψ(1)rρ, ∀ r ∈ (0, δ1 ∧ r0] , ε ∈ (0, ε0(r)] , (3.7.36) where in the last inequality we also use (3.7.33) and (3.7.34). We apply the three bounds (3.7.30), (3.7.35), and (3.7.36) to (3.7.21). This shows that for any ρ ∈ (0, 12), there exists δ1 > 0 such that for any r ∈ (0, δ1 ∧ r0] and ε ∈ (0, ε0(r)] (note that ε0(r) ≤ r ∧ 1), Pε ( S(r) ) ≥ [ψ(1)(r − ε)(1− 2ρ)− ψ(1)2r2]− (11C0suppr2 + ψ(1)C1suppr7/6) − (11C0suppr2 + ψ(1)rρ) = r [ ψ(1)(1− 3ρ)− (ψ(1)2 + 22C0supp) r − ψ(1)C1suppr1/6] − ψ(1)ε(1− 2ρ). (The choices of ρ and r1 made precise in the next paragraph can also be explored numerically; see the sketch below.)
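The bracketed coefficient in the last display decreases in r and tends to ψ(1)(1 − 3ρ) as r ↓ 0, so for any ρ < 1/6 a small enough r1 keeps it above ψ(1)/2, which is what the next paragraph requires. A numerical sketch (ours) with placeholder constants, not the C0supp, C1supp of Propositions 3.51 and 3.52:

```python
# Placeholder numerics (ours) for the choice made in the next paragraph: find
# r1 so that psi1*(1-3*rho) - (psi1**2 + 22*C0)*r - psi1*C1*r**(1/6) >= psi1/2
# for all r in (0, r1]; the left-hand side is decreasing in r, so a scan over
# a logarithmic grid suffices. All constants below are hypothetical.
import numpy as np

psi1, C0, C1, rho = 1.0, 2.0, 3.0, 0.05   # placeholders, with rho < 1/6

def coeff(r):
    return psi1 * (1 - 3 * rho) - (psi1**2 + 22 * C0) * r - psi1 * C1 * r**(1.0 / 6.0)

rs = np.logspace(-12, 0, 2000)
feasible = rs[coeff(rs) >= psi1 / 2]      # non-empty for these placeholder values
print("largest admissible r1 on the grid:", feasible.max())
```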
Finally, to attain the uniform lower bound (3.7.15), we choose ρ ∈ (0, 12) and r1 ∈ (0, δ1 ∧ r0] such that ψ(1)(1−3ρ)− (ψ(1)2 + 22C0supp) r−ψ(1)C1suppr1/6 ≥ ψ(1)2 , ∀ r ∈ (0, r1] , and then ε1(r) ∈ (0, ε0(r)] such that ψ(1)ε1(r)(1− 2ρ) ≤ ψ(1)r 4 . Putting things together, we obtain Pε ( S(r) ) ≥ ψ(1)r 4 , ∀ ε ∈ (0, ε1(r)], r ∈ (0, r1], and hence (3.7.15) follows. The proof is complete. We use Lemma 3.24 to give the proof for a more precise version of our main theorem, namely Theorem 1.6, in Theorem 3.26 below. Before this, we state the following proposition whose proof is relegated to Section 3.9. Proposition 3.25. For any (εn) ⊆ ( 0, [8ψ(1)]−1 ∧ 1] such that εn −→ 0, the sequence of laws of ( (X,Y ) ,Pεn ) is relatively compact in the space of probability measures on the product spaceD ( R+,Crap(R) )×D(R+,Crap(R)) and any of its limits is the law of a pair of nonnegative solutions of the SPDE (1.2.9) subject to the same space-time white noise. Theorem 3.26. For any (εn) ⊆ ( 0, [8ψ(1)]−1∧1] with εn ↘ 0 such that the sequence of laws of ( (X,Y ),Pεn ) converges to the law of ( (X,Y ),P0 ) of a 127 3.8. Proof of Proposition 3.9 pair of nonnegative solutions of the SPDE (1.2.9) in the space of probability measures on the product space D ( R+,Crap(R) )×D(R+,Crap(R)), we have P0 ( sup 0≤s≤2r1 ‖X − Y ‖rap ≥ ∆(r1) 2 ) ≥ inf ε∈(0,ε1(r1)] Pε ( S(r1) ) > 0, where ∆(r1) > 0 is chosen in Lemma 3.22 and r1, ε1(r1) ∈ (0, 1] are chosen in Lemma 3.24. Proof. By Skorokhod’s representation theorem, we may take ( X(εn), Y (εn) ) to be copies of the εn-approximating solutions which live on the same prob- ability space, and assume that ( X(εn), Y (εn) ) converges almost surely to( X(0), Y (0) ) in the product (metric) spaceD ( R+,Crap(R) )×D(R+,Crap(R)). Now, it follows from Lemma 3.22 and Lemma 3.24 that inf n:εn≤ε1(r1) P ( sup 0≤s≤2r1 ∥∥∥X(εn)s − Y (εn)s ∥∥∥ rap ≥ ∆(r1) ) ≥ inf ε∈(0,ε1(r1)] Pε ( S(r1) ) > 0. Hence, by Fatou’s lemma, we get 0 < inf ε∈(0,ε1(r1)] Pε ( S(r1) ) ≤ lim sup n→∞ P ( sup 0≤s≤2r1 ∥∥∥X(εn)s − Y (εn)s ∥∥∥ rap ≥ ∆(r1) ) ≤ P ( lim sup n→∞ [ sup 0≤s≤2r1 ∥∥∥X(εn)s − Y (εn)s ∥∥∥ rap ≥ ∆(r1) ]) ≤ P ( sup 0≤s≤2r1 ∥∥∥X(0)s − Y (0)s ∥∥∥ rap ≥ ∆(r1) 2 ) , (3.7.37) where the last inequality follows from the convergence X(εn) a.s.−−−→ n→∞ X (0) and Y (εn) a.s.−−−→ n→∞ Y (0) in the Skorokhod space D ( R+,Crap(R) ) , the continuity of X(0) and Y (0), and Proposition 3.6.5 (a) of [12]. The proof is complete. 3.8 Proof of Proposition 3.9 In this section, we prove Proposition 3.9 by verifying all of the following analogues of (3.6.1): ∀ ρ > 0 ∃ δ > 0 such that sup { Qiε ( τ i,(j) ≤ si + δ ) ; i ∈ N, ε ∈ ( 0, 1 8ψ(1) ∧ 1 ]} ≤ ρ, (3.8.1) 128 3.8. Proof of Proposition 3.9 where 1 ≤ j ≤ 3. Their proofs rely on the basic result that for any i ∈ N and ε ∈ (0, [8ψ(1)]−1 ∧ 1], Xi(1)T Xi 1 under Qiε is a 14BESQ 4(4ψ(1)ε) started at si and stopped upon hitting 1 (see the discussion after Proposition 3.9). More precisely, this will be used via various couplings of 14BESQ 4(4z), which are obtained from a (Ht)-standard Brownian motion B for a filtration (Ht) satisfying the usual conditions. Recall that we write P1z for the law of a copy Z of 1 4BESQ 4(4z), and throughout this section the auxiliary parameters considered in Section 3.6 are not subject to the particular constraints in Assumption 1. Lemma 3.27. Fix η ∈ (1,∞), and let τ i,(1) be the stopping times defined in Proposition 3.9. Then (3.8.1) holds for j = 1. Proof. 
The proof of this lemma is an application of the following fact on the lower escape rate of BESQ4(0): P10 (∃ h > 0 such that 4Zt ≥ tη, ∀ t ∈ [0, h]) = 1 (3.8.2) (cf. Theorem 5.4.6 of [21]). Now, to use the monotonicity of BESQ4 in initial values (stated be- low), we construct 14BESQ 4(4z)-processes Zz with initial values z ∈ R+ si- multaneously from the (Ht)-standard Brownian motion by using the strong uniqueness of their stochastic differential equations (cf. Theorem IX.1.7 and Theorem IX.3.5 of [44]). Precisely, we use the identification Zzt ≡ z + t+ ∫ t 0 √ ZzsdBs, z ∈ R+. (3.8.3) Then the analogues of the first components in τ i,(1)ε (cf. Proposition 3.9) are σz , inf { t ≥ 0;Zz TZ z 1 ∧t < tη 4 } , z ∈ [0, 18 ]. Let us bound the distribution function of σz ∧ TZz1 . The comparison theorem of stochastic differential equations (cf. Theorem IX.3.7 of [44]) implies that Zz1 ≤ Zz2 whenever 0 ≤ z1 ≤ z2 < ∞. In particular, for any z ∈ (0, 18 ], TZ 1/8 1 ≤ TZ z 1 ≤ TZ 0 1 and σz ≥ σ0 a.s., (3.8.4) 129 3.8. Proof of Proposition 3.9 where the second inequality follows since Zz t∧TZz1 ≥ Z 0 t∧TZ01 ≥ t η 4 , ∀ t ∈ [0, σ0]. Hence, by (3.8.4), we have sup z∈(0, 1 8 ] P ( σz ∧ TZz1 ≤ δ ) ≤ sup z∈(0, 1 8 ] P (σz ≤ δ) + sup z∈(0, 1 8 ] P(TZ z 1 ≤ δ) ≤P (σ0 ≤ δ) + P(TZ1/81 ≤ δ), ∀ δ ∈ (0,∞). Applying the lower escape rate (3.8.2) to the right-hand side of the foregoing inequality shows that ∀ ρ > 0 ∃ δ > 0 such that sup z∈(0, 1 8 ] P ( σz ∧ TZz1 ≤ δ ) ≤ ρ. Using the foregoing display and the distributional property of Xi(1)TX i 1 un- der Qiε, for i ∈ N and ε ∈ ( 0, [8ψ(1)]−1 ∧ 1] mentioned above, we prove our assertion (3.8.1) for j = 1. The proof is complete. Lemma 3.28. Fix L ∈ (0,∞) and α ∈ (0, 12), and let τ i,(2) be the stopping times defined in Proposition 3.9. Then (3.8.1) holds for j = 2. Proof. As in the proof of Lemma 3.27, we need a grand coupling of all 1 4BESQ 4(4z), z ∈ R+, on the same probability space. For the first compo- nent of τ i,(2), we need to measure the modulus of continuity of the martingale part of a 14BESQ 4 in terms of its quadratic variation. Hence, it will be con- venient to take all of the 14BESQ 4(4z)’s, say Zz, from a fixed copy Z of 1 4BESQ 4(0), and we consider Zzt ≡ ZTZz +t, z ∈ R+, where the stopping times TZz are finite almost surely by the transience of BESQ4 (cf. p.442 of [44]). We may further assume that Z = Z0 and is defined by (3.8.3), so Zzt = z + t+ ∫ TZz +t TZz √ ZsdBs. (3.8.5) 130 3.8. Proof of Proposition 3.9 In this case, the analogues of τ i,(2) are given by, for z ∈ (0, 18 ], σz , inf { t ≥ 0; ∣∣∣Zzt∧TZz1 − z − t∣∣∣ > L (∫ t 0 Zz s∧TZz1 ds )α} ∧ TZz1 = inf { t ≥ 0; ∣∣∣∣∣ ∫ (TZz +t)∧TZ1 TZz √ ZsdBs + ( t ∧ TZz1 − t )∣∣∣∣∣ > L [∫ (TZz +t)∧TZ1 TZz Zsds+ ( t ∧ TZz1 − t )]α} ∧ TZz1 , (3.8.6) where the last equality follows from (3.8.5) and the obvious equality TZz + T Zz 1 = T Z 1 . Let us bound the distribution function of σz by using Brownian motions. By the Dambis-Dubins-Schwarz theorem (cf. Theorem V.1.6 of [44]), √ Z •B = B′〈Z〉 for some standard Brownian motion B′, where 〈Z〉 = ∫ ·0 Zsds. Also, 0 < ∫ (TZz +t)∧TZ1 TZz Zsds ≤ t if t > 0, where the first inequality follows since {0} is polar for BESQ4 (cf. p.442 of [44]). Hence, from (3.8.6), we deduce that, for any H, δ ∈ (0,∞), sup z∈(0, 1 8 ] P(σz ≤ δ) ≤P ( TZ1 > H ) + P  sup 0<|t−s|≤2δ 0≤s<t≤H |B′t −B′s| |t− s|α > L  + P ( TZ 1/8 1 ≤ δ ) . (3.8.7) (See (3.8.4) for the third probability.) Let us make the dependence on δ of the second probability of (3.8.7) explicit. 
For the fixed α ∈ (0, 12), let us pick α′ ∈ (0, 12) and p > 1 such that α < α′ < p−12p . Then applying Chebyshev’s inequality to the second term on the right-hand side of (3.8.7), we get sup z∈(0, 1 8 ] P(σz ≤ δ) ≤P ( TZ1 > H ) + δ2p(α ′−α) L2p E ( sup 0≤s<t≤H |B′t −B′s| |t− s|α′ )2p + P ( TZ 1/8 1 ≤ δ ) , ∀ H, δ ∈ (0,∞), (3.8.8) 131 3.8. Proof of Proposition 3.9 where E ( sup 0≤s<t≤H |B′t −B′s| |t− s|α′ )2p <∞ (3.8.9) (cf. the discussion preceding Theorem I.2.2 of [44] as well as its Theorem I.2.1). By the transience of BESQ4, the first probability on the right-hand side of (3.8.8) can be made as small as possible by choosing sufficiently large H. Since ( σψ(1)ε,P ) and ( τ i,(2),Pε ) have the same distribution and ψ(1)ε ≤ 18 , (3.8.8) and (3.8.9) are enough to obtain (3.8.1) for j = 2. The proof is complete. It remains to prove (3.8.1) for j = 3. We need a few preliminary results. Lemma 3.29. Fix i ∈ N and ε ∈ (0, [8ψ(1)]−1 ∧ 1]. Then EQ i ε  sup r∈[0,R] ∑ j:si<tj≤si+r Y jsi+r(1) p <∞, ∀ p,R ∈ (0,∞). (3.8.10) Proof. Plainly, it suffices to consider p > 1. By Lemma 3.7, we have EQ i ε  sup r∈[0,R] ∑ j:si<tj≤si+r Y jsi+r(1) p = 1 ψ(1)ε EPε Xisi+R(1)TXi1  sup r∈[0,R] ∑ j:si<tj≤si+r Y jsi+r(1) p ≤ 1 ψ(1)ε EPε  sup r∈[0,R] ∑ j:si<tj≤si+r Y jsi+r(1) p ≤ 1 ψ(1)ε EPε  ∑ j:si<tj≤si+R sup t∈[tj ,si+R] Y jt (1) p ≤ 1 ψ(1)ε #{j; si < tj ≤ si +R}p−1 ∑ j:si<tj≤si+R EPε [( sup t∈[tj ,si+R] Y jt (1) )p] , (3.8.11) where the last inequality follows from Hölder’s inequality. Since each Y j(1) under Pε is a Feller diffusion with initial value ψ(1)ε and started at tj , the 132 3.8. Proof of Proposition 3.9 summands involving Y j on the right-hand side of (3.8.11) are finite. This gives (3.8.10), and the proof is complete. Next, we recall the canonical decomposition of Y j(1) for tj > si under Qiε in Lemma 3.8 (2). Using (3.4.34), the explicit form (3.5.6) of the finite variation process Ij of Y j(1) under Qiε, and the Cauchy-Schwarz inequality, we deduce that∑ j:si<tj≤t ( ψ(1)ε+ Ijt ) ≤ ψ(1)ε#{j; si < tj ≤ t}+ ∫ t∧TXi1 si (∑ j:si<tj≤t Y j s (1) Xis(1) )1/2 ds ≤ 2ψ(1)(t− si) + ∫ t∧TXi1 si (∑ j:si<tj≤s Y j s (1) Xis(1) )1/2 ds, ∀ t ∈ [si,∞). (3.8.12) Here, the last inequality follows since for t ≥ si + ε2 , si + ε ( #{j; si < tj ≤ t} − 1 2 ) ≤ t and the clusters Y j with s < tj ≤ t have no contributions to ∑ j:si<tj≤t Y j s (1). Also, recall that M j denotes the martingale part of Y j(1) under Qiε, and the super-Brownian motions Y j are chosen to be Pε-independent by Theorem 3.3. Hence, we deduce from Girsanov’s theorem that〈 ∑ j:si<tj≤· M j 〉 t = ∫ t si ∑ j:si<tj≤t Y js (1)ds = ∫ t si ∑ j:si<tj≤s Y js (1)ds, ∀ t ∈ [si,∞), (3.8.13) where the omission of the clusters Y j for s < tj ≤ t follows from the same reason as above. Lemma 3.30. Fix i ∈ N and ε ∈ (0, [8ψ(1)]−1 ∧ 1]. Then EQ i ε [ 1 [Xisi+r(1)] a ; si + r ≤ TXi1 ] ≤ 1 ra EP 1 0 [ 1 (Z1)a ] , ∀ r, a ∈ (0,∞), (3.8.14) 133 3.8. Proof of Proposition 3.9 where EP 1 0 [ 1 (Z1)a ] <∞⇐⇒ a ∈ (−∞, 2). (3.8.15) Proof. Recall the grand coupling of 14BESQ 4(4z) in the proof of Lemma 3.27 in which Zz1 ≤ Zz2 whenever 0 ≤ z1 ≤ z2. Then for every r, a ∈ (0,∞), EQ i ε [ 1 [Xisi+r(1)] a ; si + r ≤ TXi1 ] ≤EP1ψ(1)ε [ 1 (Zr)a ] ≤ EP10 [ 1 (Zr)a ] = 1 ra EP 1 0 [ 1 (Z1)a ] , where the last equality follows from the scaling property of Bessel squared processes (cf. Proposition XI.1.6 of [44]). This gives the bound (3.8.14). Next we consider (3.8.15). 
Since Z under P10 has the same distribution as the image of a 4-dimensional standard Brownian motion under x 7−→ ‖x‖2 where ‖ · ‖ denotes the Euclidean norm, (3.8.15) follows by considering EP 1 0 [ 1 (Z1)a ] = ∫ R4 1 ‖x‖2a 1 (2pi)2 exp ( −‖x‖ 2 2 ) dx = ∫ ∞ 0 u3 u2a exp ( −u 2 2 ) du · 1 (2pi)2 ∫ ∂BR4 (0,1) dSx. The proof is complete. With Lemma 3.30, we have the following improvement of (3.8.10). Lemma 3.31. Fix i ∈ N and ε ∈ (0, [8ψ(1)]−1 ∧ 1]. Then we have EQ i ε  ∑ j:si<tj≤si+r Y jsi+r(1)  ≤ (2ψ(1)R+ EP10 [ 1 Z1 ]1/2 2R1/2 ) × exp ( 2EP 1 0 [ 1 Z1 ]1/2√ r ) , ∀ r ∈ [0, R] , R ∈ (0,∞), (3.8.16) where EP10 [1/Z1] <∞ by (3.8.15). 134 3.8. Proof of Proposition 3.9 Proof. Recall that the local martingale part of Y j(1) under Qiε is a true martingale by Lemma 3.8 (2). Hence, for any r ∈ [0, R], we obtain from (3.8.12) that EQ i ε  ∑ j:si<tj≤si+r Y jsi+r(1)  ≤2ψ(1)r + ∫ si+r si EQ i ε (∑j:si<tj≤s Y js (1) Xis(1) )1/2 ; s ≤ TXi1  ds (3.8.17) ≤2ψ(1)r + ∫ si+r si EQ i ε [ 1 Xis(1) ; s ≤ TXi1 ]1/2 EQ i ε  ∑ j:si<tj≤s Y js (1) 1/2 ds ≤2ψ(1)r + ∫ si+r si 1√ s− siE P10 [ 1 Z1 ]1/21 + EQiε  ∑ j:si<tj≤s Y js (1)  ds ≤ ( 2ψ(1)R+ EP 1 0 [ 1 Z1 ]1/2 2R1/2 ) + EP 1 0 [ 1 Z1 ]1/2 × ∫ r 0 1√ s EQ i ε  ∑ j:si<tj≤si+s Y jsi+s(1)  ds, (3.8.18) where the third inequality follows from Lemma 3.30. With the change of variables s′ = √ s, the foregoing inequality with r replaced by r2 and R by R2 becomes EQ i ε  ∑ j:si<tj≤si+r2 Y j si+r2 (1)  ≤(2ψ(1)R2 + EP10 [ 1 Z1 ]1/2 2R ) + 2EP 1 0 [ 1 Z1 ]1/2 ∫ r 0 EQ i ε  ∑ j:si<tj≤si+(s′)2 Y j si+(s′)2 (1)  ds′, ∀ r ∈ [0, R], so by Lemma 3.29 and Gronwall’s lemma EQ i ε  ∑ j:si<tj≤si+r2 Y j si+r2 (1)  ≤ (2ψ(1)R2 + EP10 [ 1 Z1 ]1/2 2R ) × exp ( 2EP 1 0 [ 1 Z1 ]1/2 r ) , ∀ r ∈ [0, R]. 135 3.8. Proof of Proposition 3.9 With another change of time scales by r′ = r2, the foregoing gives the desired inequality (3.8.16). The proof is complete. We are now ready to prove (3.8.1) for j = 3. Lemma 3.32. Let τ i,(3) be the stopping times defined in Proposition 3.9. Then (3.8.1) holds for j = 3. Proof. Fix i ∈ N and ε ∈ (0, [8ψ(1)]−1 ∧ 1]. It follows from (3.8.12) that, for any R > 0 with 1 3 ≥ 2ψ(1)R, (3.8.19) we have Qiε  sup r∈[0,R] ∑ j:si<tj≤si+r Y jsi+r(1) > 1  ≤Qiε ∫ (si+R)∧TXi1 si (∑ j:si<tj≤s Y j s (1) Xis(1) )1/2 ds > 1 3  +Qiε  sup r∈[0,R] ∣∣∣∣∣∣ ∑ j:si<tj≤si+r M jsi+r ∣∣∣∣∣∣ > 13  ≤3EQiε ∫ (si+R)∧TXi1 si (∑ j:si<tj≤s Y j s (1) Xis(1) )1/2 ds  + 9 sup r∈[0,R] EQ i ε  ∑ j:si<tj≤si+r M jsi+r 2 , (3.8.20) where the last inequality follows by applying Doob’s L2-inequality to the Qiε-martingale ∑ j:si<tj≤·M j . We now consider making the right-hand side of (3.8.20) converges to zero uniformly in i ∈ N and ε ∈ (0, [8ψ(1)ε]−1 ∧ 1] as R −→ 0+. Inspecting the arguments from (3.8.17) to (3.8.18) shows that the first term in (3.8.20) 136 3.9. Proof of Proposition 3.25 satisfies 3EQ i ε ∫ (si+R)∧TXi1 si (∑ j:si<tj≤s Y j s (1) Xis(1) )1/2 ds  ≤3 ( 2ψ(1)R+ EP 1 0 [ 1 Z1 ]1/2 2R1/2 ) + 3EP 1 0 [ 1 Z1 ]1/2 × ∫ R 0 1√ s EQ i ε  ∑ j:si<tj≤si+s Y jsi+s(1)  ds. 
For the second term on the right-hand side of (3.8.20), we use (3.8.13) and obtain 9 sup r∈[0,R] EQ i ε  ∑ j:si<tj≤si+r M jsi+r 2 ≤ ∫ R 0 EQ i ε  ∑ j:si<tj≤si+s Y jsi+s(1)  ds, Applying the uniform bound (3.8.16) to the right-hand sides of the last two displays shows the existence of a constant C ∈ (0,∞) depending only on ψ such that Qiε  sup r∈[0,R] ∑ j:si<tj≤si+r Y jsi+r(1) > 1  ≤ CR1/2, ∀ R ∈ ( 0, 1 6ψ(1) ] , i ∈ N, ε ∈ ( 0, 1 8ψ(1) ∧ 1 ] , where the restriction on R follows from (3.8.19). The foregoing inequality is now enough to obtain our assertion. 3.9 Proof of Proposition 3.25 Throughout this section, we fix a sequence (εn) ⊆ (0, 1] with εn ↘ 0+ and assume that the εn-approximating solutions live on the same probability space. To save notation, we write( X(n), Y (n) ) , n ∈ N, for these approximating sequence, under P. We first consider the C-tightness of the sequence of joint laws of {( X(n), Y (n) )} inD ( R+,Crap(R) )×D(R+,Crap(R)). 137 3.9. Proof of Proposition 3.25 (Here, C-tightness means not only tightness but also the property that the limiting object of any convergent subsequence is a continuous process.) In this direction, we will only prove the C-tightness of the sequence of laws of{ X(n) } in D ( R+,Crap(R) ) , and the proof for { Y (n) } follows similarly. Later on in Lemma 3.40, we will show that the limit of any convergent subsequence of laws of {( X(n), Y (n) )} is the law of a pair of solutions to the SPDE (1.2.9) with respect to the same space-time white noise. We now consider our first objective that the sequence of laws of { X(n) } is tight as probability measures on D ( R+,Crap(R) ) . We will work with the mild form of X(n). Let ps(x)dx ≡  1√ 2pis exp ( −x 2 2s ) dx, s ∈ (0,∞), δ0(dx), s = 0, 0, s ∈ (−∞, 0). Recall the random measure AX(n) (cf. (3.3.2)) associated with X(n) which is contributed by the initial masses of its immigrants, and we write MX (n) t (φ) ≡X(n)t (φ)− ∫ t 0 X(n)s ( ∆ 2 φ ) ds − ∫ t 0 ∫ R φ(y)dAX (n) (y, s), φ ∈ C∞c (R), (3.9.1) for the martingale measure part of X(n). The martingale measure MY (n) for Y (n) is similarly defined. We claim that the mild form of X(n) is given by X(n)(x, t) = p ? AX (n) (x, t) + p ? MX (n) (x, t), (x, t) ∈ R× R+, (3.9.2) where the convolutions on the right-hand side are given by p ? AX (n) (x, t) = ∫ (0,t] ∫ R pt−s(x− y)dAX(n)(y, s) =ψ(1) ∑ i:0<si≤t ∫ R pt−si(x− y)Jxiεn(y)dy, (3.9.3) p ? MX (n) (x, t) = ∫ t 0 ∫ R pt−s(x− y)dMX(n)(y, s) = ∫ t 0 ∫ R pt−s(x− y)X(n)(y, s)1/2dW (y, s). (3.9.4) 138 3.9. Proof of Proposition 3.25 More precisely, in p ? AX(n) , we read p0(x− y)dy = δ0(x− dy) = δx(dy) and hence ∫ R p0(x− y)Jxiεn(y)dy ≡ Jxiεn(x). (3.9.5) Note that the SPDE of X(n) is slightly different from the cases considered in [40] on mild forms of solutions of one-dimensional SPDE’s, because of the random measure part AX(n) for X(n). To obtain (3.9.2), we first recall that the mild form of each summand X(n),i of X(n) is given by X(n),i(x, t) =ψ(1) ∫ R pt−si(x− y)Jxiεn(y)dy + ∫ t 0 ∫ R pt−s(x− y)X(n),i(y, s)1/2dWX(n),i(y, s), ∀ i ∈ N, (3.9.6) where we use the identification (3.9.5) for the first integral on the right-hand side. (See Theorem III.4.2 of [39] or Theorem 2.1 of [40] for (3.9.6).) On the other hand, since X(n) = ∑∞ i=1X (n),i, comparing their SPDE’s yields the following compatibility equation: X(n)(y, s)1/2dW (y, s) = ∞∑ i=1 X(n),i(y, s)1/2dWX (n),i (y, s). 
(3.9.7) Hence, by summing up the mild forms (3.9.6) of X(n),i over i ∈ N, we obtain (3.9.2) immediately from the compatibility equation (3.9.7). Next, the mild form (3.9.2) implies the C-tightness of the sequence of laws of { X(n) } in D ( R+,Crap(R) ) , provided that the sequences of laws of{ p ? AX (n) ;n ∈ N} and {p ? MX(n) ;n ∈ N} are both C-tight as probabil- ity measures on the same space. For this purpose, we need to understand the growth and the modulus of continuity of each process in the latter two sequences. We first consider some estimates for the growth of the summands of p ? AX (n) . For convenience, we write A <a B in this section if A ≤ CB for some constant C ∈ (0,∞) which may vary from line to line. Here, C depends only on ψ and some of the auxiliary parameters q, λ, T ∈ (0,∞), and the involvements of q, λ, T should be clear from the contexts. 139 3.9. Proof of Proposition 3.25 Lemma 3.33. For any λ, T ∈ (0,∞) and 0 < si < t ≤ T , we have the following two different bounds for the same quantity: sup x∈R eλ|x| ∫ R pt−si(x− y)Jxiεn(y)dy <a ( 1√ t− si + 1 ) εn (3.9.8) and sup x∈R eλ|x| ∫ R pt−si(x− y)Jxiεn(y)dy <a ε1/2n . (3.9.9) Proof. Fix λ, T ∈ (0,∞), and choose a constant M ∈ [1,∞) so that supp(ψ) ⊆ [−M,M ]. Recall the definition of Jxε in (3.1.1) and our choice that all of the spatial points xi take values in supp(ψ). For x ∈ R and 0 < si < t ≤ T , we have eλ|x| ∫ R pt−si(x− y)Jxiεn(y)dy = εn eλ|x|√ 2pi(t− si) ∫ R exp { −(x− xi − ε 1/2 n y)2 2(t− si) } J(y)dy ≤ εn e λMeλ|x−xi|√ 2pi(t− si) ∫ 1 −1 exp { −(x− xi − ε 1/2 n y)2 2(t− si) } J(y)dy, (3.9.10) where the last inequality follows since |xi| ≤M and the function J is chosen to have supp(J) ⊆ [−1, 1]. In the following, we will show how the desired bounds (3.9.8) and (3.9.9) stem from (3.9.10). First consider the upper bound in (3.9.8) of order εn. If (t − si) < 1/λ, then (3.9.10) implies that eλ|x| ∫ R pt−si(x− y)Jxiεn(y)dy ≤ εn e λMeλ|x−xi|√ 2pi(t− si) ∫ 1 −1 exp { −(x− xi − ε 1/2 n y)2 2(1/λ) } J(y)dy ≤ εn 2e λM√ 2pi(t− si) exp {−|x− xi|2 + 4|x− xi| 2(1/λ) } ≤ εn 2e λM√ 2pi(t− si) exp { 4 2(1/λ) } , (3.9.11) 140 3.9. Proof of Proposition 3.25 where the second inequality follows by expanding [(x − xi) − ε1/2n y]2 and using εn ∈ (0, 1] and ‖J‖∞ ≤ 1, and the last inequality follows since 2 = argmax{−w2 + 4w;w ∈ R}. For (t− si) ≥ 1/λ, we consider an alternative bound as follows. By (3.9.10) and ‖J‖∞ ≤ 1, we get eλ|x| ∫ R pt−si(x− y)Jxiεn(y)dy ≤ εn 2e λM√ 2pi(t− si) exp { λ|x− xi| − |x− xi| 2 − 2|x− xi| 2(t− si) } ≤ εn 2e λM√ 2pi(t− si) exp {−|x− xi|2 + 2 [λ(t− si) + 1] |x− xi| 2(t− si) } ≤ εn 2e λM√ 2pi(t− si) exp {−|x− xi|2 + 2 (λT + 1) |x− xi| 2(t− si) } , (3.9.12) where the last inequality follows since 0 < si < t ≤ T . Since (t− si) ≥ 1/λ and λT + 1 = argmax{−w2 + 2(λT + 1)w;w ∈ R}, (3.9.12) implies that eλ|x| ∫ R pt−si(x− y)Jxiεn(y)dz ≤ εn 2 exp { λM + (λT+1) 2 2(1/λ) } √ 2pi(1/λ) . (3.9.13) Our first <a-inequality (3.9.8) now follows from (3.9.11) and (3.9.13). Next, we consider the upper bound in (3.9.9) of order ε1/2n . For any x ∈ R, we obtain from (3.9.10) that eλ|x| ∫ R pt−si(x− y)Jxiεn(y)dy ≤ e λM+λ|x−xi|ε1/2n√ 2pi(t− si)/εn ∫ R exp { − [ε −1/2 n (x− xi)− y]2 2(t− si)/εn } dy ≤ eλM+λ|x−xi|ε1/2n , (3.9.14) where in the last inequality we identify the following Gaussian integral: 1√ 2pi(t− si)/εn ∫ R exp { − [ε −1/2 n (x− xi)− y]2 2(t− si)/εn } dy = 1. 141 3.9. 
Proof of Proposition 3.25 On the other hand, the inequality (3.9.10) also gives eλ|x| ∫ R pt−si(x− y)Jxiεn(y)dy ≤ e λMε 1/2 n√ 2pi(t− si)/εn ∫ 1 −1 eλ|x−xi| exp { −|x− xi|2 + 2(x− xi)ε1/2n y − εny2 2(t− si) } J(y)dy ≤eλMε1/2n exp {−|x− xi|2 + (2λT + 2)|x− xi| 2(t− si) } × 1√ 2pi(t− si)/εn ∫ R exp { − y 2 2(t− si)/εn } dy ≤eλMε1/2n exp {−|x− xi|2 + (2λT + 2)|x− xi| 2(t− si) } , (3.9.15) where the second inequality follows since 0 < si < t ≤ T and εn ∈ (0, 1], and we identify the following Gaussian integral: 1√ 2pi(t− si)/εn ∫ R exp { − y 2 2(t− si)/εn } dy = 1 in the last inequality. Now, we use (3.9.14) when |x − xi| ≤ 2λT + 2 and optimize (3.9.15) over |x−xi| > 2λT +2. This gives the second <a-inequality (3.9.9). The proof is complete. We now control the growth of p ? AX(n) by the following lemma. Lemma 3.34. For any λ, T ∈ (0,∞), we have sup x∈R eλ|x|p ? AX (n) (x, t) <a √ t+ t+ ε1/2n , ∀ t ∈ [0, T ]. Proof. Fix λ, T ∈ (0,∞). We may only consider t > 0. Then we choose kn to be the largest integer such that skn < t. Applying the bound (3.9.8) to the summands of p ? AX(n) in (3.9.3) indexed by 1 ≤ i ≤ kn − 1 and the bound (3.9.9) to the one indexed by i = kn gives sup x∈R eλ|x|p ? AX (n) (x, t) <a kn−1∑ i=1 εn√ t− si + kn−1∑ i=1 εn + ε 1/2 n + ψ(1)ε 1/2 n , (3.9.16) where the last term in (3.9.16) bounds ψ(1) ∫ R p0(x− y)Jxkn+1εn (y)dy 142 3.9. Proof of Proposition 3.25 when t = skn+1 since supx ‖Jxε ‖∞ ≤ ε1/2. We observe that the first two sums in (3.9.16) are Riemann sums over [s1, skn) of certain integrals with monotonically increasing integrands (up to time t) whose values at the left-end points of the subintervals [si, si+1) are used to define the corresponding step functions. In particular, these step functions are dominated by the integrands of the corresponding integrals. Hence, from (3.9.16), we obtain sup x∈R eλ|x|p ? AX (n) (x, t) <a ∫ t 0 ds√ t− s + t+ ε 1/2 n , and the desired <a-inequality now follows. The Riemann-sum approximation for monotonically increasing integrands in the proof of Lemma 3.34 will be used implicitly throughout the following proofs in this section. Next, we consider estimates for the modulus of continuity of p ? AX(n) . Lemma 3.35. For any T ∈ (0,∞), we have sup x′,x∈R |x′−x|≤δ sup t∈[0,T ] ∣∣∣p ? AX(n)(x′, t)− p ? AX(n)(x, t)∣∣∣ <a δ1/2 + ε1/2n , ∀ δ ∈ (0, 1). Proof. Fix T ∈ (0,∞). In this proof, we use the following elementary bound:∣∣∣∣ ddz pt(z) ∣∣∣∣ <a 1√tp2t(z), ∀ t ∈ (0,∞), z ∈ R. (3.9.17) (Cf. Lemma 4.2 of [32].) We start with some bounds for the summands of p ? AX (n) (x′, t)− p ? AX(n)(x, t). For 0 < si < t ≤ t′ and x′, x ∈ R, write∫ R [pt′−si(x ′ − z)− pt−si(x− z)]Jxiεn(z)dz =εn ∫ R [pt′−si(x ′ − xi − ε1/2n y)− pt−si(x− xi − ε1/2n y)]J(y)dy. (3.9.18) If t = t′, then applying (3.9.17) and the mean value theorem to the right- hand side of (3.9.18) entails∣∣∣∣∫ R [pt−si(x ′ − z)− pt−si(x− z)]Jxiεn(z)dz ∣∣∣∣ <a |x′ − x|(t− si)3/2 εn, 143 3.9. Proof of Proposition 3.25 since ∫ R J(y)dy = 1. On the other hand, we have, for any x ∈ R and t ∈ (0,∞), ∫ R pt(y)J x εn(y)dy ≤ 1√ 2pit ∫ R Jxεn(y)dy <a εn√ t again by ∫ R J(z)dz = 1. Hence, applying the <a-inequalities in the last two displays, we obtain∣∣∣∣∫ R [pt−si(x ′ − y)− pt−si(x− y)]Jxiεn(y)dy ∣∣∣∣ <a [ |x′ − x|(t− si)3/2 ∧ 1√t− si ] εn. (3.9.19) We are now ready to prove our assertion. Let T ∈ (0,∞), t ∈ [0, T ], and x′, x ∈ R. Let kn be the largest integer such that skn < t. We have∣∣∣p ? AX(n)(x′, t)− p ? 
AX(n)(x, t)∣∣∣ <a kn−1∑ i=1 ∣∣∣∣∫ R [pt−si(x ′ − y)− pt−si(x− y)]Jxiεn(y)dy ∣∣∣∣+ ε1/2n <a kn−1∑ i=1 [ |x′ − x| (t− si)3/2 ∧ 1√ t− si ] εn + ε 1/2 n <a ∫ t 0 [ |x′ − x| (t− s)3/2 ∧ 1√ t− s ] ds+ ε1/2n , (3.9.20) where we use (3.9.9) with λ = 1 in the first <a-inequality (cf. the argument for (3.9.16)) and (3.9.19) in the second <a-inequality, and the Riemann-sum approximation in the last <a-inequality is valid because the minimum of two increasing functions is again increasing. Note that, for constants a, b ∈ (0,∞) with a/b < t,∫ t 0 a s3/2 ∧ b s1/2 ds = ∫ a/b 0 b s1/2 ds+ ∫ t a/b a s3/2 ds = 4a1/2b1/2 − 2at−1/2. (3.9.21) Applying (3.9.21) to (3.9.20) proves our assertion. Lemma 3.36. For any T ∈ (0,∞), sup 0≤t<t′≤T |t′−t|≤δ sup x∈R ∣∣∣p ? AX(n)(x, t′)− p ? AX(n)(x, t)∣∣∣ <a δ1/4 + ε1/2n , ∀ δ ∈ (0, 1). 144 3.9. Proof of Proposition 3.25 Proof. Fix T ∈ (0,∞), 0 ≤ t < t′ ≤ T , and x ∈ R. We have∣∣∣p ? AX(n)(x, t′)− p ? AX(n)(x, t)∣∣∣ ≤ ∫ (t,t′] ∫ R pt′−s(x− y)dAX(n)(y, s) + ∣∣∣∣∣ ∫ (0,t] ∫ R [pt′−s(x− y)− pt−s(x− y)]dAX(n)(y, s) ∣∣∣∣∣ <a ∑ i:t<si<t′ ∫ R pt′−si(x− y)Jxiεn(y)dy + ∑ i:0<si<t ∣∣∣∣∫ R [pt′−si(x− y)− pt−si(x− y)]Jxiεn(y)dy ∣∣∣∣+ 2ψ(1)ε1/2n , (3.9.22) where the last term bounds∫ {t′} ∫ R pt′−s(x−y)dAX(n)(y, s)+ ∣∣∣∣∣ ∫ {t} ∫ R [pt′−s(x− y)− pt−s(x− y)]dAX(n)(y, s) ∣∣∣∣∣ (cf. (3.9.16)). Choose `n to be the largest integer such that s`n < t′. For the first sum in (3.9.22), we obtain from (3.9.8) and (3.9.9) with λ = 1∑ i:t<si<t′ ∫ R pt′−si(x− y)Jxiεn(y)dy <a ∑ i:t<si<t′,i<`n εn ( 1√ t′ − si + 1 ) + ε1/2n <a ∫ t′ t ( 1√ t′ − s + 1 ) ds+ ε1/2n . (3.9.23) For the second sum in (3.9.22), we have∣∣∣∣∫ R [pt′−si(x− y)− pt−si(x− y)]Jxiεn(y)dy ∣∣∣∣ =εn ∣∣∣∣∫ R ∫ R pt′−t(x− xi − u)[pt−si(u− ε1/2n y)− pt−si(x− xi − ε1/2n y)]duJ(y)dy ∣∣∣∣ =εn ∣∣∣∣∫ R E [ pt−si(x− xi +Bt′−t − ε1/2n y)− pt−si(x− xi − ε1/2n y) ] J(y)dy ∣∣∣∣ <aεn [√ (t′ − t)E[|B1|] (t− si)3/2 ∧ 1√ t− si ] , (3.9.24) where we use the Chapman-Kolmogorov’s equality in the first equality, and in the last <a-inequality B denotes a standard Brownian motion started at 0 145 3.9. Proof of Proposition 3.25 and we use the bound (3.9.17). Using the foregoing <a-inequality and (3.9.9), we can bound the second term in (3.9.22) in the way similar to (3.9.23) as∑ i:0<si<t ∣∣∣∣∫ R [pt′−si(x− y)− pt−si(x− y)]Jxiεn(y)dy ∣∣∣∣ <a ∫ t 0 (√ t′ − t s3/2 ∧ 1√ s ) ds+ ε1/2n . (3.9.25) Moreover, to get an explicit bound for the first term of (3.9.25) in powers of (t′ − t), we can use (3.9.21) again and obtain∑ i:0<si<t ∣∣∣∣∫ R [pt′−si(x− y)− pt−si(x− y)]Jxiεn(y)dy ∣∣∣∣ <a (t′ − t)1/4 + ε1/2n . Our assertion now follows by applying (3.9.23) and the last display to (3.9.22). The proof is complete. Lemma 3.37. The sequence of laws of { p?AX (n)} is C-tight as probability measures on D ( R+,Crap(R) ) . Proof. We note that the jumps of the Crap(R)-valued càdlàg process p?AX (n) are given by p?AX (n) (x, t)−p?AX(n)(x, t−) = { ψ(1)Jxiεn(x), if t = si for some i ∈ N, 0, otherwise. Hence, by Lemma 3.44 and Skorokhod’s representation, the sequence of laws under consideration is C-tight as soon as it is tight as probability measures on D ( R+,Crap(R) ) . For the latter, we claim that the necessary conditions of Proposition 3.50 are satisfied. By Lemma 3.34, the growth condition (3.11.11) is clearly satisfied by the sequence of laws of { p ?AX (n)}. To show that this sequence of laws is tight as probability measures on D ( R+,C (R) ) , we note that Lemma 3.34 and Lemma 3.35 imply that for each t ∈ R+, the sequence of laws of { p ? 
AX (n) ( · , t)} is tight as probability measures on C (R). Hence, by Lemma 3.36 and Theorem III.7.2 of [12], the sequence of laws of { p ? AX (n)} is tight as probability measures on D(R+,C (R)). This proves our claim, and the proof is complete. Our next step is to obtain the tightness of the sequence of laws of the stochastic integrals { p ? MX (n)} in C(R+,Crap(R)). 146 3.9. Proof of Proposition 3.25 Lemma 3.38. For any q ∈ [1,∞) and λ, T ∈ (0,∞), sup n∈N sup 0≤t≤T sup x∈R eλ|x|E [ X(n)(x, t)q + Y (n)(x, t)q ] <a 1. Proof. We will only show that sup n∈N sup 0≤t≤T sup x∈R eλ|x|E [ X(n)(x, t)q ] <a 1, (3.9.26) and the analogous bound for { Y (n) } follows from a similar argument. Plainly, we may restrict our attentions to q = 2k for k ∈ Z+. Then throughout this proof, we will work with the following inequality: eλ|x|E [ X(n)(x, t)q ] <a e λ|x|E [ p ? AX (n) (x, t)q ] + E [(∫ t 0 ∫ R pt−s(x− y)2e 2λ q |x−y| e 2λ q |y| X(n)(y, s)dyds ) q 2 ] , (3.9.27) which follows from the mild form (3.9.2) and the Burkholder-Davis-Gundy inequality (cf. Theorem IV.4.1 of [44]). First, we claim that sup n∈N sup 0≤t≤T sup x∈R eλ|x|E [ X(n)(x, t) q 2 ] <a 1, ∀ λ ∈ (0,∞) =⇒ sup n∈N sup 0≤t≤T sup x∈R eλ|x|E [ X(n)(x, t)q ] <a 1, ∀ λ ∈ (0,∞); q = 2k, k ∈ N. (3.9.28) Note that x1 · · ·xn ≤ xn1 + · · ·+ xnn, ∀ x1, · · · , xn ∈ R+, n ∈ N (3.9.29) and∫ t 0 ∫ R pt−s(y)2eλ|y|dsdy ≤ ∫ t 0 1√ 2pis E [ eλ|Bs| ] ds <∞, ∀ t, λ ∈ (0,∞), (3.9.30) where B is a standard Brownian motion. For q = 2k with k ∈ N, we can expand the integral(∫ t 0 ∫ R pt−s(x− y)2e 2λ q |x−y| e 2λ q |y| X(n)(y, s)dyds ) q 2 147 3.9. Proof of Proposition 3.25 in the second term on the right-hand side of (3.9.27) as a ( 2 · q2 ) -fold space- time integral and thereby use (3.9.29) to get E [(∫ t 0 ∫ R pt−s(x− y)2e 2λ q |x−y| e 2λ q |y| X(n)(y, s)dyds ) q 2 ] ≤ sup n∈N sup 0≤t≤T sup y∈R eλ|y|E [ X(n)(y, s) q 2 ] × (∫ t 0 ∫ R pt−s(y)2e 2λ q |y| dsdy ) q 2 , where the second factor is finite by (3.9.30). We now apply the last inequality to (3.9.27). Then our claim (3.9.28) follows from Lemma 3.34. Thanks to (3.9.28), it remains to verify (3.9.26) for q = 1 and any λ ∈ (0,∞). For any k ∈ N, set T (n) k , inf { t ≥ 0;∥∥X(n)t ∥∥∞ > k} . Then we have E [∫ t∧T (n)k 0 ∫ R p t∧T (n)k −s (x− y)2X(n)(y, s)dyds ] ≤ k ∫ t 0 1√ 2pis ds <∞, so the continuous local martingale∫ r∧T (n)k 0 ∫ R p t∧T (n)k −s (x− y)X(n)(y, s)1/2dW (y, s), 0 ≤ r ≤ t, is a true continuous martingale by Doob’s L2-inequality. Using this martin- gale property and the mild form (3.9.2) of X(n), we get E [ X(n) ( x, t ∧ T (n)k )] = E [ p ? AX (n) ( x, t ∧ T (n)k )] . Since T (n)k −→ ∞, applying Fatou’s lemma to the foregoing equality and using Lemma 3.34 show that sup n∈N sup 0≤t≤T sup x∈R eλ|x|E [ X(n)(x, t) ] <a 1. (3.9.31) Our assertion now follows from (3.9.28) and (3.9.31). Lemma 3.39. For some universal constants q ∈ (0,∞) and γ ∈ (2,∞), sup n∈N E [∣∣∣p ? MX(n)(x′, t′)− p ? MX(n)(x, t)∣∣∣q] <a (|x′ − x|2γ + |t′ − t|γ) e−λ|x|, ∀ t, t′ ∈ [0, T ], |x− x′| ≤ 1, (3.9.32) 148 3.9. Proof of Proposition 3.25 for any λ, T ∈ (0,∞). Moreover, the sequence of laws of {p?MX(n)} is tight as probability measures on C ( R+,Crap(R) ) . Proof. Fix λ, T ∈ (0,∞), and let t, t′ ∈ [0, T ] and x, x′ with |x−x′| ≤ 1. We may assume t ≤ t′. Using p ? MX (n) (x′, t′)− p ? MX(n)(x, t) = ∫ t′ t ∫ R pt′−s(x′ − y)X(n)(y, s)1/2dW (y, s) + ∫ t 0 ∫ R [pt′−s(x′ − y)− pt−s(x− y)]X(n)(y, s)1/2dW (y, s), we get E [∣∣∣p ? MX(n)(x′, t′)− p ? 
MX(n)(x, t)∣∣∣q] <aE [(∫ t′ t ∫ R pt′−s(x′ − y)X(n)(y, s)1/2dW (y, s) )q] + E [(∫ t 0 ∫ R [pt′−s(x′ − y)− pt−s(x− y)]X(n)(y, s)1/2dW (y, s) )q] <aE (∫ t′ t ∫ R pt′−s(x′ − y)2X(n)(y, s)dyds )q/2 + E [(∫ t 0 ∫ R [pt′−s(x′ − y)− pt−s(x− y)]2e−λ|y| × eλ|y|X(n)(y, s)dyds )q/2] , (3.9.33) where the last inequality follows from the Burkholder-Davis-Gundy inequal- ity (cf. Theorem IV.4.1 of [44]). We bound the two terms on the right-hand side of (3.9.33) in the follow- ing. For this, we take two pairs of constants (ai, bi) ∈ (1,∞) × (1,∞) such that a−1i + b −1 i = 1 for i = 1, 2. We choose their specific values later on. By Hölder’s inequality, the first term on the right-hand side of (3.9.33) 149 3.9. Proof of Proposition 3.25 can be bounded as E (∫ t′ t ∫ R pt′−s(x′ − y)2X(n)(y, s)dyds )q/2 ≤ (∫ t′ t ∫ R pt′−s(x′ − y)2a1dyds ) q 2a1 E [(∫ t 0 ∫ R X(n)(y, s)b1dyds ) q 2b1 ] ≤ (∫ t′ t 1 [2pi(t′ − s)]a1− 12 ds ) q 2a1 E [(∫ t 0 ∫ R X(n)(y, s)b1dyds ) q 2b1 ] . (3.9.34) For the second term on the right-hand side of (3.9.33), we use Hölder’s inequality twice to obtain E [(∫ t 0 ∫ R [pt′−s(x′ − y)− pt−s(x− y)]2e−λ|y| × eλ|y|X(n)(y, s)dyds )q/2] ≤ (∫ t 0 ∫ R [pt′−s(x′ − y)− pt−s(x− y)]2e−2λ|y|dyds ) q 4 × (∫ t 0 ∫ R [pt′−s(x′ − y)− pt−s(x− y)]2b2dyds ) q 4b2 × E [(∫ t 0 ∫ R e2λa2|y|X(n)(y, s)2a2dyds ) q 4a2 ] . Here for the first two factors on the right-hand side of the foregoing inequal- ity, we have(∫ t 0 ∫ R [pt′−s(x′ − y)− pt−s(x− y)]2e−2λ|y|dyds ) q 4 <a ( |x′ − x| q4 + |t′ − t| q8 ) e− qλ|x| 2 150 3.9. Proof of Proposition 3.25 by Lemma 6.5 of [30] and(∫ t 0 ∫ R [pt′−s(x′ − y)− pt−s(x− y)]2b2dyds ) q 4b2 ≤ [ C(b2) (∫ t 0 ∫ R pt′−s(x′ − y)2b2dyds+ ∫ t 0 ∫ R pt−s(x− y)2b2dyds )] q 4b2 ≤ [ C(b2) (∫ t 0 1 [2pi(t′ − s)]b2− 12 ds+ ∫ t 0 1 [2pi(t− s)]b2− 12 ds )] q 4b2 ≤ (∫ t′ 0 2C(b2) (2pis)b2− 1 2 ds ) q 4b2 , where C(b2) ∈ (0,∞) is a constant depending only on b2. Hence, from the last three displays, we get E [(∫ t 0 ∫ R [pt′−s(x′ − y)− pt−s(x− y)]2e−λ|y| × eλ|y|X(n)(y, s)dyds )q/2] <a ( |x′ − x| q4 + |t′ − t| q8 ) e− qλ|x| 2 (∫ t′ 0 2C(b2) (2pis)b2− 1 2 ds ) q 4b2 E [(∫ t 0 ∫ R e2λa2|y|X(n)(y, s)2a2dyds ) q 4a2 ] . (3.9.35) In view of the estimates in (3.9.34) and (3.9.35), we take (a1, b1) = (10/9, 10), (a2, b2) = (5, 5/4) and q = 20, γ = 5 2 . Then by (3.9.34), the first term on the right-hand side of (3.9.33) can be bounded by E (∫ t′ t ∫ R pt′−s(x′ − y)2X(n)(y, s)dyds )q/2 <a (t ′ − t) 72 × sup 0≤s≤T ∫ R E [ X(n)(y, s)10 ] dy <a (t ′ − t) 52 × sup 0≤s≤T ∫ R E [ X(n)(y, s)10 ] dy. 151 3.9. Proof of Proposition 3.25 By (3.9.35), the second term on the right-hand side of (3.9.33) can be bounded by E [(∫ t 0 ∫ R [pt′−s(x′ − y)− pt−s(x− y)]2e−λ|y| × eλ|y|X(n)(y, s)dyds )q/2] <a ( |x′ − x|5 + |t′ − t| 52 ) e−10λ|x| × sup 0≤s≤T ∫ R e10λ|y|E [ X(n)(y, s)10 ] dy. Now, apply the estimates in the last two displays to the right-hand side of (3.9.33) and then use Lemma 3.38. This gives the first assertion (3.9.32). The second assertion now follows from (3.9.32) and Lemma 6.4 of [30]. The proof is complete. By Lemma 3.37 and Lemma 3.39, the sequence of laws of { X(n) } is C-tight as probability measures on D ( R+,Crap(R) ) , thanks to (3.9.2). By similar arguments, the same is true for the sequence of laws of { Y (n) } . We are now ready to prove the main result of this section. Lemma 3.40. Suppose that, by taking a subsequence if necessary, we have( X(n), Y (n) ) (d)−−−→ n→∞ ( X(0), Y (0) ) (3.9.36) for some continuous Crap(R)-valued processes X(0) and Y (0). 
Then X(0) and Y (0) solve the SPDE (1.2.9) with respect to the same space-time white noise. Proof. In this proof, we first identify the limits of the random measures AX(n) and AY (n) and then study the limits of the martingale measures MX(n) and MY (n) . From the latter, we will show that X(0) and Y (0) are subject to the same space-time white noise. In the following, it is convenient to work with a fixed countable subset (φk) of C∞c (R) such that for any φ ∈ C∞c (R), φkj −→ φ and φ′′kj −→ φ′′ pointwise and boundedly along a subsequence (φkj ). (Step 1). First, we claim that, for any φ ∈ C∞c (R),(∫ t 0 ∫ R φ(y)dAX (n) (y, s) ) t∈R+ (d)−−−→ n→∞ ( t · 〈ψ, φ〉) t∈R+ (3.9.37) inD ( R+,R ) . We have seen the convergence in probability of one-dimensional marginals in (3.3.4), and hence the convergence in distribution of finite- 152 3.9. Proof of Proposition 3.25 dimensional marginals follows. Also, for 0 ≤ t < t′ <∞,∣∣∣∣∣ ∫ (t,t′] ∫ R φ(y)dAX (n) (y, s) ∣∣∣∣∣ = ∣∣∣∣∣∣ ∑ i:t<si≤t′ ψ(1)Jxiεn(φ) ∣∣∣∣∣∣ ≤‖φ‖∞ ∑ i:t<si≤t′ ψ(1)εn ≤‖φ‖∞ψ(1) [ (t′ − t) + εn ] , as implies the relative compactness of the sequence of laws under consider- ation by Corollary III.7.4 of [12]. With these properties, our claim (3.9.37) now follows from Theorem III.7.8 (b) of [12]. Next, we reinforce the convergence (3.9.37) as follows. Note that the infinite-dimensional vector(∫ t 0 ∫ R φk(y)dA X(n)(y, s) ) t∈R+ , k = 1, 2, · · · , of random processes converges in distribution to the infinite-dimensional vector (t · 〈ψ, φk〉)t∈R+ , k = 1, 2, · · · of deterministic continuous processes. Here, this convergence is in the infinite product of D(R+,R) equipped with the natural metric δ ( (xk)k∈N, (yk)k∈N ) , ∞∑ k=1 d(xk, yk) ∧ 1 2k , xk, yk ∈ D(R+,R), for d( · , · ) being the Skorokhod metric on D(R+,R). A similar result holds for the infinite-dimensional vector(∫ t 0 ∫ R φk(y)dA Y (n)(y, s) ) t∈R+ , k = 1, 2, · · · , of random processes. Hence, by our assumption (3.9.36), we have( X(n), Y (n), (∫ t 0 ∫ R φk(y)dA X(n)(y, s) ) t∈R+ , (∫ t 0 ∫ R φk(y)dA Y (n)(y, s) ) t∈R+ ) (d)−−−→ n→∞ ( X(0), Y (0), ( t · 〈ψ, φk〉 ) t∈R+ , ( t · 〈ψ, φk〉 ) t∈R+ ) , (3.9.38) 153 3.9. Proof of Proposition 3.25 where k ranges over N, in the Polish space D ( R+,Crap(R) )×D(R+,Crap(R))×D(R+,R)N ×D(R+,R)N. Here, recall that X(0) and Y (0) are continuous. By Skorokhod’s representation, we may assume from now on that the convergence in (3.9.38) holds in the sense of almost sure convergence. Then we denote by (Ht) the minimal filtration generated by the limiting objects X(0) and Y (0) which satisfies the usual conditions. (Step 2). Let us now study the limits of the martingale measures MX(n) and MY (n) . We start with some of properties ofMX(n) andMY (n) . For each n, k ∈ N, we have MX (n) t (φk) =X (n) t (φk)− ∫ t 0 X(n)s ( ∆ 2 φk ) ds− ∫ t 0 ∫ R φk(y)dA X(n)(y, s), MY (n) t (φk) =Y (n) t (φk)− ∫ t 0 Y (n)s ( ∆ 2 φk ) ds− ∫ t 0 ∫ R φk(y)dA Y (n)(y, s), (3.9.39) and their covariations are given by〈 MX (n) (φk),M X(n)(φ`) 〉 t = ∫ t 0 ∫ R X(n)(y, s)φk(y)φ`(y)dyds,〈 MY (n) (φk),M Y (n)(φ`) 〉 t = ∫ t 0 ∫ R Y (n)(y, s)φk(y)φ`(y)dyds,〈 MX (n) (φk),M Y (n)(φ`) 〉 t = ∫ t 0 ∫ R X(n)(y, s)1/2Y (n)(s, y)1/2φk(y)φ`(y)dyds. (3.9.40) In particular, by (3.9.40) and Lemma 3.38, MX(n)(φk) and MY (n) (φk) are true continuous martingales with respect to the natural filtration generated by X(n) and Y (n). Now, we observe that MX (0) t (φ) , limn→∞M X(n) t (φ) = X (0) t (φ)− ∫ t 0 X(0)s ( ∆φ 2 ) ds− t〈ψ, φ〉 (3.9.41) 154 3.9. 
Proof of Proposition 3.25 for any φ ∈ {φk} and any t ∈ R+ almost surely. Indeed, the second equality of (3.9.41) holds in the sense of convergence in D(R+,R) by (3.9.39) and the simple observation: un −→ u in D ( R+,Crap(R) ) and φ ∈ C∞c (R) =⇒ un(φ) −→ u(φ) in D ( R+,R ) , (3.9.42) and the convergence at any t follows since the right-hand side of (3.9.41) is continuous in t. Consider the martingale properties of the limiting objects MX(0)( · ) and MY (0) ( · ). Note that Lemma 3.34 and Lemma 3.38 imply the uniform in- tegrability of any moments of { MX (n) (φk) } n∈N by (3.9.39). Hence, for 0 ≤ t < t′ < ∞ and any bounded continuous function Φ on D([0, t],Crap(R)) × D ( [0, t],Crap(R) ) , we have E [ MX (0) t′ (φk)Φ ( X(0), Y (0) )] = lim n→∞E [ MX (n) t′ (φk)Φ ( X(n), Y (n) )] = lim n→∞E [ MX (n) t (φk)Φ ( X(n), Y (n) )] =E [ MX (0) t (φk)Φ ( X(0), Y (0) )] , as proves that MX(0)(φk) is a true (Ht)-martingale. Since Lemma 3.38 also gives sup 0≤s≤T sup x∈R eλ|x|E [ X(0)(x, s)q ] <∞, ∀ q ∈ [1,∞), λ, T ∈ N, we can further extend MX(0)( · ) from {MX(0)(φk)} (using (3.9.41)) to the entire C∞c (R) so thatMX (0) (φ) for arbitrary φ ∈ C∞c (R) defines a continuous (Ht)-martingale. With similar arguments, we have an analogous result for MY (0)( · ) and the list (3.9.40) leads to〈 MX (0) (φ),MX (0) (ϕ) 〉 t = ∫ t 0 ∫ R X(0)(y, s)φ(y)ϕ(y)dyds,〈 MY (0) (φ),MY (0) (ϕ) 〉 t = ∫ t 0 ∫ R Y (0)(y, s)φ(y)ϕ(y)dyds,〈 MX (0) (φ),MY (0) (ϕ) 〉 t = ∫ t 0 ∫ R X(0)(y, s)1/2Y (0)(s, y)1/2φ(y)ϕ(y)dyds, (3.9.43) 155 3.9. Proof of Proposition 3.25 for any φ, ϕ ∈ C∞c (R). (Step 3). It remains to show that X(0) and Y (0) solve the SPDE (1.2.9) subject to the same space-time white noise W . Note that by definition, we also need to check that these random objects obey their defining properties with respect to some filtration satisfying the usual conditions. Informally, the identification of the space-time white noise is to “invert W ” from X(0) or Y (0) wherever it is possible and to “append” an independent copy of space-time white noise W̃ elsewhere. Hence, we set Wt(φ) , ∫ t 0 ∫ R 1[X(0)(y,s)>0,Y (0)(y,s)>0] φ(x) X(0)(y, s)1/2 dMX (0) (y, s) + ∫ t 0 ∫ R 1[X(0)(y,s)>0,Y (0)(y,s)=0] φ(x) X(0)(y, s)1/2 dMX (0) (y, s) + ∫ t 0 ∫ R 1[X(0)(y,s)=0,Y (0)(y,s)>0] φ(x) Y (0)(y, s)1/2 dMY (0) (y, s) + ∫ t 0 ∫ R 1[X(0)(y,s)=0,Y (0)(y,s)=0]φ(x)dW̃ (y, s). (3.9.44) A standard argument shows that W is a space-time white noise with respect to the minimal filtration obtained in the natural way from (Ht) and W̃ and satisfying the usual conditions. If this enlarged filtration is under consideration, then the martingale properties of MX(0) and MY (0) still hold and, in particular, (3.9.41) implies that X(0) solves the SPDE (1.2.9) with respect to W . To see that Y (0) also satisfies the same SPDE, note that the first term in the definition (3.9.44) of W is equal to∫ t 0 ∫ R 1[X(0)(y,s)>0,Y (0)(y,s)>0] φ(x) Y (0)(y, s)1/2 dMY (0) (y, s), because, for each φ ∈ C∞c (R),∫ t 0 ∫ R Y (0)(y, s)1/2φ(y)dMX (0) (y, s),∫ t 0 ∫ R X(0)(y, s)1/2φ(y)dMY (0) (y, s) are continuous (Ht)-local martingales and are equal by using (3.9.43) to cal- culate the quadratic variation of their difference. Now, integrating Y (0)(x, s)1/2 156 3.10. An iterated formula for improved pointwise modulus of continuity with respect to both sides of (3.9.44) entails MY (0) t (φ) = ∫ t 0 ∫ R Y (0)(y, s)1/2φ(y)dW (y, s). The proof is complete. 
3.10 An iterated formula for improved pointwise modulus of continuity In this section, we study the pointwise modulus of continuity of continuous functions satisfying certain integral inequalities. We start with triangle inequalities for power functions. Lemma 3.41. For every α ∈ (0,∞),( 2α−1 ∧ 1) · (xα + yα) ≤ (x+ y)α ≤ (2α−1 ∨ 1) · (xα + yα) , ∀ x, y ∈ R+. When α ∈ (0, 1) (resp. α ∈ (1,∞)), the first (resp. the second) inequality holds if and only if x = y. In particular, for any n ∈ N with n ≥ 2, n∑ j=1 xj α ≤ (2α−1 ∨ 1)n−1  n∑ j=1 xαj  , ∀ x1, · · · , xn ∈ R+. (3.10.1) We now state the main result of this section. Theorem 3.42. Let T ∈ (0,∞). Suppose that (Xt)t∈[0,T ] is a continuous function such that for some α, β ∈ (0,∞) and A,B,C ∈ R+ which are all independent of t ∈ [0, T ], we have |Xt −X0| ≤A+Btβ + C (∫ t 0 |Xs|ds )α , ∀ t ∈ [0, T ]. (3.10.2) Set ‖X‖∞ , sup s∈[0,T ] |Xt| and Dα , 2α−1 ∨ 1. 157 3.10. An iterated formula for improved pointwise modulus of continuity Then for any n ∈ N, |Xt −X0| ≤ A+Btβ + (Dα) 2n n∑ j=1 [∏j k=1(C) αk−1 ·∏j−1k=1(Dα)2(n−k)αk · (|X0|+A)αj∏j−1 k=1(ak + 1) αj−k ] taj + (Dα) 2n n∑ j=1  ∏j k=1(C) αk−1 ·∏j−1k=1(Dα)2(n−k)αk · ( Bβ+1)αj∏j−1 k=1(bk + 1) αj−k  tbj + (Dα) 2n [ (C)cn ·∏nk=1(Dα)2(n−k)αk · ‖X‖αn+1∞∏n k=1(ak + 1) αn−k+1 ] tan+1 , ∀ t ∈ [0, T ], (3.10.3) with the convention that ∏0 k=1 ≡ 1, where the sequences {an}, {bn}, and {cn} are given by an+1 = n+1∑ j=1 αj , a1 = α, bn+1 = n∑ j=1 αj + (β + 1)αn+1, b1 = (β + 1)α, cn+1 = n+1∑ j=0 αj , c1 = α+ 1. Proof. We note that alternative characterizations of the sequences {an}, {bn}, and {cn} are through the following difference equations as well as the associated initial conditions: an+1 =α(an + 1), a1 = α, bn+1 =α(bn + 1), b1 = (β + 1)α, cn+1 =αcn + 1, c1 = α+ 1. We use these identifications in the following argument. We prove the theorem by an induction on n ∈ N. Consider n = 1. Note that (3.10.2) implies |Xt −X0| ≤ A+Btβ + C‖X‖α∞tα, ∀ t ∈ [0, T ]. (3.10.4) 158 3.10. An iterated formula for improved pointwise modulus of continuity Apply (3.10.4) to (3.10.2) to obtain |Xt −X0| ≤A+Btβ + C ( |X0|t+ ∫ t 0 ( A+Bsβ + C‖X‖α∞sα ) ds )α =A+Btβ + C ( (|X0|+A)t+ B β + 1 tβ+1 + C‖X‖α∞ α+ 1 tα+1 )α ≤A+Btβ + C · (Dα)2 [ (|X0|+A)αtα + ( B β + 1 )α t(β+1)α + ( C‖X‖α∞ α+ 1 )α t(α+1)α ] =A+Btβ + (Dα) 2 [ C(|X0|+A)αta1 + C ( B β + 1 )α tb1 + (C)α+1‖X‖α2∞ (α+ 1)α ta2 ] , where the fourth line follows from (3.10.1). This gives the desired inequality for n = 1. Suppose now (3.10.3) holds for some n ∈ N. Then for any t ∈ [0, T ], we have∫ t 0 |Xs|ds ≤|X0|t+ ∫ t 0 |Xs −X0|ds ≤(|X0|+A)t+ B β + 1 tβ+1 + (Dα) 2n n∑ j=1 [∏j k=1(C) αk−1 ·∏j−1k=1(Dα)2(n−k)αk · (|X0|+A)αj∏j−1 k=1(ak + 1) αj−k ] 1 (aj + 1) taj+1 + (Dα) 2n n∑ j=1  ∏j k=1(C) αk−1 ·∏j−1k=1(Dα)2(n−k)αk · ( Bβ+1)αj∏j−1 k=1(bk + 1) αj−k  1 bj + 1 tbj+1 + (Dα) 2n [ (C)cn ·∏nk=1(Dα)2(n−k)αk · ‖X‖αn+1∞∏n k=1(ak + 1) αn−k+1 ] 1 an+1 + 1 tan+1+1, where the right-hand side is a sum of 2n + 3 many terms. Hence, applying 159 3.10. 
An iterated formula for improved pointwise modulus of continuity (3.10.1) for n replaced by 2n+ 3 and (3.10.2), we obtain, for every t ∈ [0, T ], |Xt −X0| ≤A+Btβ + C · (Dα)2n+2(|X0|+A)αtα + C · (Dα)2n+2 ( B β + 1 )α t(β+1)α + C · (Dα)2n+2 · (Dα)2nα n∑ j=1 [∏j+1 k=2(C) αk−1 ·∏jk=2(Dα)2[(n+1)−k]αk · (|X0|+A)αj+1∏j−1 k=1(ak + 1) α(j+1)−k ] × 1 (aj + 1)α taj+1 + C · (Dα)2n+2 · (Dα)2nα n∑ j=1  ∏j+1 k=2(C) αk−1 ·∏jk=2(Dα)2[(n+1)−k]αk · ( Bβ+1)αj+1∏j−1 k=1(bk + 1) α(j+1)−k  × 1 (bj + 1)α tbj+1 + C · (Dα)2n+2 · (Dα)2nα [ (C)cnα ·∏n+1k=2(Dα)2[(n+1)−k]αk · ‖X‖αn+2∞∏n k=1(ak + 1) α(n+1)−k+1 ] 1 (an+1 + 1)α tan+2 . The rest now follows by writing the right-hand side of the foregoing inequality 160 3.10. An iterated formula for improved pointwise modulus of continuity into the desired form: |Xt −X0| ≤A+Btβ + C · (Dα)2n+2(|X0|+A)αtα + C · (Dα)2n+2 ( B β + 1 )α t(β+1)α + (Dα) 2n+2 · n∑ j=1 [∏j+1 k=1(C) αk−1 ·∏jk=1(Dα)2[(n+1)−k]αk · (|X0|+A)αj+1∏j k=1(ak + 1) α(j+1)−k ] taj+1 + (Dα) 2n+2 n∑ j=1  ∏j+1 k=1(C) αk−1 ·∏jk=1(Dα)2[(n+1)−k]αk · ( Bβ+1)αj+1∏j k=1(bk + 1) α(j+1)−k  tbj+1 + (Dα) 2n+2 [ (C)cn+1 ·∏n+1k=1(Dα)2[(n+1)−k]αk · ‖X‖αn+2∞∏n+1 k=1(ak + 1) α(n+1)−k+1 ] tan+2 = A+Btβ + (Dα) 2n+2 · n+1∑ j=1 [∏j k=1(C) αk−1 ·∏j−1k=1(Dα)2[(n+1)−k]αk · (|X0|+A)αj∏j−1 k=1(ak + 1) αj−k ] taj + (Dα) 2n+2 n+1∑ j=1  ∏j k=1(C) αk−1 ·∏j−1k=1(Dα)2[(n+1)−k]αk · ( Bβ+1)αj∏j−1 k=1(bk + 1) αj−k  tbj + (Dα) 2n+2 [ (C)cn+1 ·∏n+1k=1(Dα)2[(n+1)−k]αk · ‖X‖αn+2∞∏n+1 k=1(ak + 1) α(n+1)−k+1 ] tan+2 . This proves our assertion for n+ 1, and the proof is now complete by math- ematical induction. Corollary 3.43 (Improved modulus of continuity). Let (Xt)t∈[0,T ] be a continuous process satisfying the assumption of Theorem 3.42, with T ≤ 1, α ∈ (0, 12), β = 1, A = 0, and ‖X‖∞ ≤ 1. Then for any n ∈ N, |Xt −X0| ≤Bt+ (C 11−α + 1) n∑ j=1 |X0|αj  tα + (C 11−α + 1) n∑ j=1 ( B 2 )αj t α1−α + (C 11−α + 1) tan+1 , ∀ t ∈ [0, T ]. (3.10.5) In particular, if ξ ∈ (0, 1) satisfying aN0 ≤ ξ < aN0+1, N0 ∈ N, (3.10.6) 161 3.11. Limit theorems for Crap(R)-valued processes then |Xt −X0| ≤ (C 11−α + 1) N0∑ j=1 |X0|αj  tα B + (C 11−α + 1) N0∑ j=1 ( B 2 )αj + C 1 1−α + 1  tξ, ∀ t ∈ [0, T ]. (3.10.7) Proof. We do some arithmetic. First, since j∑ k=1 αk−1 ≤ ∞∑ k=1 αk−1 = 1 1− α, j ∈ N, we have j∏ k=1 Cα k−1 ≤C 11−α + 1, 1 ≤ j ≤ n, Ccn ≤C 11−α + 1. (3.10.8) Next, using β = 1 and the definition of {bn} shows that bn = α(1− αn−1) 1− α + 2α n = α− αn + 2αn − 2αn+1 1− α = α+ αn(1− 2α) 1− α , so {bn} strictly decreases to α1−α by the assumption that α ∈ (0, 12). Note that Dα = 1, since α ∈ (0, 12). The inequality (3.10.5) now follows by applying the bounds (3.10.8) and the monotonicity of {bn} to (3.10.3). The proof is complete. 3.11 Limit theorems for Crap(R)-valued processes In this section, we study some limit theorems for Crap(R)-valued càdlàg processes. Recall the map | · |λ defined in (1.2.10) and the norm ‖ · ‖rap defined in (1.2.11) for Crap(R). In the following, if u is a two-parameter function, we write u(s) for the function x 7−→ u(x, s). If u ∈ D(R+,C (R)) 162 3.11. Limit theorems for Crap(R)-valued processes or u ∈ D(R+,Crap(R)), ∆u(s) = u(s) − u(s−) with the convention that u(0−) ≡ 0. We first give a simple sufficient condition for C-tightness of probability measures on D ( R+,Crap(R) ) . Lemma 3.44. Suppose that u, u1, u2, · · · ∈ D ( R+,Crap(R) ) with un −→ u,⋃ s∈[0,N ] ∞⋃ n=1 supp ( ∆un(s) ) is bounded, and sup s∈[0,N ] ‖∆un(s)‖∞ −−−→ n→∞ 0, ∀ N ∈ N. (3.11.1) Then u ∈ C(R+,Crap(R)). Proof. 
In the following, we consider some norms on Crap(R) defined by ‖f‖rap,K , ∞∑ λ=1 |f |λ ∧ 1 Kλ , K ∈ (2,∞). The norms induce identical topologies. Now, suppose that un −→ u in D ( R+,Crap(R) ) and (3.11.1) holds. For any N ∈ N, set SN , sup e|z|; z ∈ ⋃ s∈[0,N ] ∞⋃ n=1 supp(∆un(s))  . Then the definition of ‖ · ‖rap,KN entails that, for any KN > SN ∨ 2, sup 0≤s≤N ‖∆un(s)‖rap,KN ≤ [ ∞∑ λ=1 ( SN KN )λ] · sup 0≤s≤N ‖∆un(s)‖∞ −−−→ n→∞ 0 (3.11.2) by (3.11.1). Since ‖ · ‖rap,KN ≤ ‖ · ‖rap and un −→ u in D ( R+,Crap(R) ) , the definition of the Skorokhod metric entails that un −→ u in D ( [0, N ], (Crap(R), ‖ · ‖rap,KN ) ) and, hence, (3.11.2) implies that u ∈ C([0, N ], (Crap(R), ‖ · ‖rap,KN )) by Theorem III.10.2 of [12]. Using the equivalence between ‖ · ‖rap and ‖ · ‖rap,KN gives that u ∈ C ( [0, N ],Crap(R) ) . Since N ∈ N is arbitrary, we have proved that u ∈ C (R+,Crap(R)), as required. 163 3.11. Limit theorems for Crap(R)-valued processes Next, we turn to the tightness of probability measures onD ( R+,Crap(R) ) . More specifically, we compare limit theorems on the two separable Banach spaces Crap(R) and C (R), where where C (R) is the space of continuous functions on R and is equipped with the norm ‖ · ‖loc,∞ of local uniform convergence ‖f‖loc,∞ , ∞∑ λ=1 ( sup|x|≤λ |f(x)| ) ∧ 1 2λ . (3.11.3) The motivation to do this comparison is that tightness in D ( R+,C (R) ) is usually easier to verify. Our first step is to compare limit theorems in the spaces of probability measures on Crap(R) and C (R). Lemma 3.45. For a sequence (fn) ⊆ Crap(R), (fn) is Cauchy under ‖·‖loc,∞ and supn |fn|λ <∞ for each λ ∈ N if and only if (fn) is Cauchy under ‖·‖rap; in this case, the limiting function is a Crap(R)-function. Hence, a subset K of Crap(R) is compact if and only if supg∈K |g|λ <∞ for each λ ∈ N and K is compact as a subset of C (R). Proof. Suppose that (fn) ⊆ Crap(R) is Cauchy under ‖·‖rap. Since Crap(R) is a Banach space, (fn) converges to some f ∈ Crap(R) in Crap(R). The continu- ity of the mapping g 7−→ |g|λ, λ ∈ N, on Crap(R) entails that supn |fn|λ <∞. Also, since convergence in Crap(R) implies convergence in C (R), it follows that fn −→ f in C (R). Conversely, suppose that supn |fn|λ < ∞ for each λ ∈ N and fn −→ f in C (R). Then f ∈ C (R), and it is plain that |f |λ < ∞ for each λ ∈ N. This means f ∈ Crap(R). Next, we show that |fn − f |λ −→ 0. Given ε > 0, first pick M ≥ 1 such that e−|x| ≤ ε for each |x| > M and then N ≥ 1 such that supx∈[−M,M ] |fn(x)− f(x)| ≤ e−λMε for each n ≥ N . Then considering separately |x| > M and |x| ≤M , we obtain |fn − f |λ ≤ ( sup m |fm|λ+1 + |f |λ+1 ) · ε+ ε, ∀ n ≥ N. This proves |fn − f |λ −→ 0 for any λ ∈ N, and hence fn −→ f in Crap(R). The proof is now complete. Lemma 3.46. (1) The set Crap(R) is a Borel subset of C (R). (2) The topology of Crap(R) strictly contains the topology of( Crap(R), ‖ · ‖loc,∞ ) . 164 3.11. Limit theorems for Crap(R)-valued processes Hence, B ( Crap(R) ) ⊇ B(Crap(R), ‖ · ‖loc,∞) = B(C (R)) ∩ Crap(R). Proof. (1) Thanks to the obvious equality Crap(R) = ∞⋂ λ=1 ∞⋃ n=1 {f ∈ C (R); |f |λ ≤ n}, (3.11.4) it suffices to show that each set {f ∈ C (R); |f |λ ≤ n} is a Borel subset of C (R). Note that, for each λ,N ∈ N, f 7−→ sup x∈[−N,N ] eλ|x||f(x)| : C (R) −→ R+ is a continuous function, and |f |λ = lim N→∞ ↑ sup x∈[−N,N ] eλ|x||f(x)|. Hence, {f ∈ C (R); |f |λ ≤ n} is a Borel subset of C (R) and (1) follows. (2). 
Note that ‖gn‖rap −→ 0 implies ‖gn‖loc,∞ −→ 0, so for every ε > 0 there exists δ > 0 such that BCrap(R)(0, δ) ⊆ B(Crap(R),‖·‖loc,∞)(0, ε). This shows that the topology on Crap(R) generated by ‖ · ‖rap contains the topology generated by ‖ · ‖loc,∞. Also, it is elementary to construct a coun- terexample that (fn) ⊆ Crap(R), ‖fn‖loc,∞ −→ 0 but fn(n) = 1 for each n; then |fn|λ is at least eλn and hence ‖fn‖rap does not converge to zero. This shows that we cannot choose an open ball B(Crap(R),‖·‖loc,∞)(0, δ) to be contained in BCrap(R)(0, ε), for any ε ∈ (0, 1), and the above inclusion of topologies on Crap(R) is strict. As a result of Lemma 3.46 (3), a probability measure µ on the Borel σ-field of Crap(R) can always be regarded as a probability measure defined on the Borel σ-field of C (R). Lemma 3.47. Let (µn) be a sequence of (Borel) probability measures on Crap(R). Then the sequence (µn) is tight as probability measures on Crap(R) if and only if it is tight as probability measures on C (R) and ∀ ε ∈ (0, 1), λ ∈ N, ∃M ∈ (0,∞) such that sup n∈N µn ({f ∈ C (R); |f |λ > M}) ≤ ε, (3.11.5) 165 3.11. Limit theorems for Crap(R)-valued processes Here, recall that the mapping f 7−→ |f |λ is B ( C (R) ) -measurable, as has been seen in the proof of Lemma 3.46. Proof of Lemma 3.47. Suppose that the sequence (µn) is tight as probability measures on Crap(R). Then for every ε > 0, there exists a compact subset K of Crap(R) such that sup n∈N µn ( K{ ) ≤ ε. By Lemma 3.45, K is a compact subset of C (R) and supg∈K |g|λ < ∞ for each λ ∈ N. Hence, given λ ∈ N, (3.11.5) is satisfied for M > supg∈K |g|λ. The necessary condition then follows. Next, we consider the converse. Fix ε > 0. Choose a compact subset K of C (R) such that sup n≥1 µn ( K{ ) ≤ ε, (3.11.6) and, for each λ ∈ N, choose Mε,λ ∈ (0,∞) such that sup n≥1 µn ({f ∈ C (R); |f |λ > Mε,λ}) ≤ ε 2λ . (3.11.7) Note that K̃ = K ∩ ∞⋂ λ=1 {f ∈ C (R); |f |λ ≤Mε,λ}, is a compact subset of C (R). Then by Lemma 3.45, K̃ is a compact subset of Crap(R), and (3.11.6) and (3.11.7) entail sup n≥1 µn ( K̃{ ) ≤ 2ε. We have proved that the sequence (µn) is tight as probability measures on Crap(R). The proof is complete. Next, we consider the path spaces D ( R+,Crap(R) ) and D ( R+,C (R) ) . Lemma 3.48. A subsetK ofD ( R+,Crap(R) ) is compact inD ( R+,Crap(R) ) if and only if it is compact in D ( R+,C (R) ) and sup u∈K sup 0≤s≤T |u(s)|λ <∞, ∀ λ, T ∈ N. (3.11.8) 166 3.11. Limit theorems for Crap(R)-valued processes Proof. Suppose that K is compact in D ( R+,C (R) ) and (3.11.8) is satisfied. Let (un) ⊆ K with un −→ u in D ( R+,C (R) ) for some u ∈ K. Now, the assumption (3.11.8) implies that un(s) ∈ Crap(R) for any n ∈ N and any s ∈ R+. Since un(s) −→ u(s) in C (R) for any continuity point s of u with respect to the metric ‖ · ‖loc,∞ (by Proposition 3.5.2 of [12]), (3.11.8) and Lemma 3.46 also imply that u(s) ∈ Crap(R) for any such point s. Moreover, sup{|u(s)|λ; s ∈ [0, T ], s is a continuity point of u} <∞, ∀ λ, T ∈ N. (3.11.9) Since u ∈ D(R+,C (R)), u(s) converges to u(t) in ‖ · ‖loc,∞ as s ↓ t along continuity points s of u with s > t, for any t ∈ [0, T ]. Hence, by Lemma 3.45, (3.11.8) and (3.11.9) imply that u(t) ∈ Crap(R) for any t ∈ R+. Similarly, u(t−) ∈ Crap(R) for any t ∈ (0,∞). Moreover, we have( sup n∈N sup 0≤s≤T |un(s)|λ ) ∨ ( sup 0≤s≤T |u(s)|λ ∨ |u(s−)|λ ) <∞, ∀ λ, T ∈ N (with the convention that u(0−) ≡ 0). 
Hence, by similar arguments using re- peatedly the foregoing display, the various types of convergence under C (R) in (a)–(c) of Proposition III.6.5 of [12] are equivalent to the corresponding types of convergence under Crap(R). The same proposition therefore guaran- tees that un −→ u in D ( R+,Crap(R) ) , and we conclude that K is a compact subset of D ( R+,Crap(R) ) . Conversely, suppose thatK is compact inD ( R+,Crap(R) ) . Since ‖gn‖rap −→ 0 implies ‖gn‖loc,∞ −→ 0, K is compact in D ( R+,C (R) ) again by Propo- sition III.6.5 of [12]. It remains to show that (3.11.8) holds. Suppose not. Then for some λ, T ∈ N, we can find a sequence (un) ⊆ K and (sn) ⊆ [0, T ] such that |un(sn)|λ −→∞. Since K is compact in D ( R+,Crap(R) ) , we may assume that un −→ u in D ( R+,Crap(R) ) for some u ∈ K. In addition, we may assume sn −→ s ∈ [0, T ]. Then by Proposition III.6.5 in [12], lim n→∞ ‖un(sn)− u(s)‖rap ∧ ‖un(sn)− u(s−)‖rap = 0. By taking a subquence if necessary, we assume that either ‖un(sn)−u(s)‖rap −→ 0 or ‖un(sn)−u(s−)‖rap −→ 0. If ‖un(sn)−u(s)‖rap −→ 0, then necessarily |un(sn) − u(s)|λ −→ 0 for each λ ∈ N. Since |u(s)|λ < ∞, this implies that supn |un(sn)|λ <∞, which gives a contradiction. We have a similar contra- diction for the case that ‖un(sn) − u(s−)‖rap −→ 0. Hence, (3.11.8) must hold. We have proved the necessary condition, and the proof is complete. 167 3.11. Limit theorems for Crap(R)-valued processes Lemma 3.49. (1) For any T, λ ∈ (0,∞), u 7−→ sup 0≤s≤T |u(s)|λ : D ( R+,C (R) ) −→ [0,∞] is Borel measurable. (2) We have B ( D ( R+,Crap(R) )) ⊇B(D(R+, (Crap(R), ‖ · ‖loc,∞))) =B ( D(R+,C (R)) ) ∩D(R+, (Crap(R), ‖ · ‖loc,∞)). (3.11.10) Proof. (1). The regularity of u ∈ D(R+,C (R)) implies that sup 0≤s≤T sup x∈[−N,N ] eλ|x||u(x, s)| = sup s∈([0,T ]∩Q)∪{T} sup x∈[−N,N ]∩Q eλ|x||u(x, s)| is a Borel measurable function in the variable u from D ( R+,C (R) ) into [0,∞], and so is the map u 7−→ sup 0≤s≤T |u(s)|λ = lim N→∞ ↑ sup 0≤s≤T sup x∈[−N,N ] eλ|x||u(x, s)|. (2). Using the obvious inequality ‖ · ‖loc,∞ ≤ ‖ · ‖rap, we deduce from the definition of Skorokhod metrics that the identity map Id : u 7−→ u : D(R+,Crap(R)) −→ D(R+, (Crap(R), ‖ · ‖loc,∞)) is continuous and hence B ( D(R+,Crap(R)) ) ⊇ B(D(R+, (Crap(R), ‖ · ‖loc,∞))). The proof is complete. Lemma 3.49 ensures that any probability measure µ on the Borel σ-field of D ( R+,Crap(R) ) has a natural extension to a probability measure on the Borel σ-field of D ( R+,C (R) ) defined by A 7−→ µ(A ∩D(R+, (Crap(R), ‖ · ‖loc,∞)). We will still denote by µ the extension. We have finally arrived at the main tool by which we prove weak conver- gence of probability measures on D ( R+,Crap(R) ) . 168 3.11. Limit theorems for Crap(R)-valued processes Proposition 3.50. Let (µn) be a sequence of probability measures onD ( R+,Crap(R) ) . Then the sequence (µn) is tight as probability measures on D ( R+,Crap(R) ) if and only if it is tight as probability measures on D ( R+,C (R) ) and ∀ ε ∈ (0,∞), T, λ ∈ N ∃MT,λ ∈ (0,∞) such that sup n≥1 µn ({ u ∈ D(R+,C (R)); sup 0≤s≤T |u(s)|λ > MT,λ }) ≤ ε. (3.11.11) Proof. Suppose that the sequence (µn) is tight as probability measures on D ( R+,Crap(R) ) . Then for every ε > 0, there exists a compact subset K of D ( R+,Crap(R) ) such that sup n≥1 µn ( K{ ) ≤ ε. (3.11.12) Since K is compact in D ( R+,Crap(R) ) , it follows from Lemma 3.48 that it is compact in D ( R+,C (R) ) and ∀ T, λ ∈ N ∃MT,λ ∈ (0,∞) such that sup u∈K sup 0≤s≤T |u(s)|λ ≤MT,λ <∞. 
Hence, (3.11.12) gives the tightness of the sequence (µn) as probability mea- sures on D ( R+,C (R) ) , and (3.11.11) is satisfied by using (3.11.12) and the last display. Consider the converse. Given ε > 0, choose a compact subset K of D ( R+,C (R) ) such that sup n≥1 µn ( K{ ) ≤ ε. For each T, λ ∈ N, choose MT,λ ∈ (0,∞) such that sup n≥1 µn ({ u ∈ D(R+,C (R)); sup 0≤s≤T |u(s)|λ > MT,λ }) ≤ ε 2T+λ . Take K ′ = K ∩ ∞⋂ λ,T=1 { u ∈ D(R+,C (R)); sup 0≤s≤T |u(s)|λ ≤MT,λ } . 169 3.12. Some properties of support processes Then by Lemma 3.48, K ′ is a compact subset of D ( R+,Crap(R) ) , and our choice of MT,λ and K entails that sup n≥1 µn ( (K ′){ ) ≤ sup n≥1 µn ( K{ ) + ∞∑ T,λ=1 sup n≥1 µn ({ u ∈ D(R+,C (R)); sup 0≤s≤T |u(s)|λ > MT,λ }) ≤2ε. Hence, the sequence (µn) is tight as probability measures onD ( R+,Crap(R) ) , and the proof is complete. 3.12 Some properties of support processes In this section, we present some results concerning the supports of the im- migrants Xi and Y j . The proofs in this section are modified from their counterparts in [30] and are given here for completeness. Proposition 3.51. There is a constant C0supp ∈ (0,∞) depending only on the immigration function ψ and the parameter β ∈ [14 , 12) such that Pε ( σX i β − si ≤ r ) + Pε ( σY i β − ti ≤ r ) ≤ C0suppε(r ∨ ε), ∀ ε, r ∈ (0, 1], i ∈ N. (3.12.1) Proof. Fix β ∈ [14 , 12) and i ∈ N. Recall that Xi satisfies the SPDE Xit(φ) = ψ(1)J xi ε (φ) + ∫ t 0 Xis ( ∆ 2 φ ) ds+ ∫ t 0 ∫ R Xi(x, s)1/2dWX,i(x, s), whereWX,i is a (Gt)-space-time white noise. Hence, it is easy to see that the scaled process X̂i = ψ(1)−1Xi corresponds to the case discussed in Lemma 7.1 of [30] with a identified to be (√ ψ(1) )−1 . The support processes of X̂i and Xi coincide, so by Corollary 7.2 of [30] there is a constant Csupp ∈ (0,∞) depending only on ψ and β such that Pε ( σX i β − si ≤ r ) ≤ Csuppε(r ∨ ε), ∀ ε, r ∈ (0, 1]. The same bound holds for Pε ( σY i β − ti ≤ r ) . The desired inequality (3.12.1) now follows by using the last two displays and taking C0supp to be 2Csupp. 170 3.12. Some properties of support processes In the remaining of this section, we consider under Qiε the supports of the clusters Y j born by time si + r ∈ (si,∞) and with seeds (yj , tj) lying outside the rectangle RXiβ (si + r) defined by (3.6.4). We first consider the immigrants Y j born before time si. Proposition 3.52. There exists a constant C1supp ∈ (0,∞) depending only on the immigration function ψ such that whenever β ∈ [13 , 12), Qiε PXiβ (si + r) ∩ ( ⋃ j:tj≤si supp(Y j) ) 6= ∅, min j:tj≤si ( σY j β − tj ) > 3r, σX i β − si > 2r  ≤ C1suppr1/6, ∀ i ∈ N with si ≤ 1, r ∈ [si, 1], ε ∈ (0, r]. Proof. Fix i ∈ N with si ≤ 1, r ∈ [si, 1], ε ∈ (0, r], and β ∈ [13 , 12). We drop the subscripts ε of Qiε and Pε in the following and note that it suffices to consider clusters Y j with tj < si for the desired probability inequality. In this proof, it is more convenient to work with dyadic rationals, so we take n0, n1 ∈ Z+ with n0 ≤ n1 such that 2−n0−1 < r ≤ 2−n0 and 2−n1−1 < ε ≤ 2−n1 . (3.12.2) We will argue throughout this proof on the event that min j:tj≤si ( σY j β − tj ) > 3r and σX i β − si > 2r. (3.12.3) (Step 1). We claim that{ (yj , tj); tj ∈ (0, si),PXiβ (si + r) ∩ supp(Y j) 6= ∅ } ⊆ [ xi − 7 · 2−n0β, xi + 7 · 2−n0β ] × [0, si). (3.12.4) under (3.12.3). Take j ∈ N with tj < si, and suppose that yj /∈ [ xi − 7 · 2−n0β, xi + 7 · 2−n0β ] . By (3.12.3), we have σY j β > tj + 2r ≥ si + r since r ≥ si. 
Hence, for some sufficiently small (random) δ > 0, supp(Y js ) ⊆ [ yj − ε1/2 − (s− tj)β, yj + ε1/2 + (s− tj)β ] , ∀ s ∈ [tj , si+r+δ], 171 3.12. Some properties of support processes as implies that (R× [tj , si + r]) ∩ supp(Y j) ⊆ PY jβ (si + r + δ′), ∀ δ′ ∈ (0, δ). We get PXiβ (si + r) ∩ supp(Y j) ⊆ PX i β (si + r) ∩ PY j β (si + r). (3.12.5) On the other hand, the choice of yj implies PXiβ (si + r) ∩ PY j β (si + r) = ∅, (3.12.6) since |yj − xi| >7 · 2−n0β ≥ 7 · rβ ≥rβ + (2r)β + 2ε1/2 ≥[(si + r)− si]β + [(si + r)− tj ]β + 2ε1/2. Our claim (3.12.4) now follows from (3.12.5) and (3.12.6). The inclusion in (3.12.4) rules out a large number of clusters Y j born before si whose space-time supports can intersect PXiβ (si+ r) by time si+ r. In the rest of this proof, we focus on the remaining clusters Y j for j ∈ N with tj < si. (Step 2). We classify the clusters Y j for j ∈ N satisfying tj ∈ (0, si) and yj /∈ [ xi − 7 · 2−n0β, xi + 7 · 2−n0β ] according to the space-time locations of the seeds (yj , tj), using the following rectangles R0n = [ xi − 7 · 2−nβ, xi + 7 · 2−nβ ] × [si − 2−n+1, si − 2−n] , RLn = [ xi − 7 · 2−nβ, xi − 7 · 2−(n+1)β ] × [si − 2−n, si] , RRn = [ xi + 7 · 2−(n+1)β, xi + 7 · 2−nβ ] × [si − 2−n, si] , where n ≥ n0. (These are random because xi is.) The rectangles can only intersect at their boundaries. Note that the rectangle RRn is stacked above the upper right corner of R0n and the rectangle RLn is above the upper left corner of R0n. Each of the unions RLn ∪ R0n ∪ RRn has a “U -shape”, and RLn+1 ∪ R0n+1 ∪ RRn+1 is inscribed in the “valley” of RLn ∪ R0n ∪ RRn. These 172 3.12. Some properties of support processes descriptions should make it clear that ∞⋃ n=n0 (RLn ∪R0n ∪RRn) = [ xi − 7 · 2−n0β, xi + 7 · 2−n0β ] × [si − 2−n0+1, si] \ {(xi, si)} (3.12.7) ⊇ [ xi − 7 · 2−n0β, xi + 7 · 2−n0β ] × [0, si), (3.12.8) where the last inclusion follows since si ≤ r ≤ 2−n0+1 according to (3.12.2). We now group the clusters Y j according to the space-time locations of their seeds (yj , tj) and set Y (n),q = ∑ j:tj≤si 1Rqn(yj , tj)Y j , q = L, 0, R, n ≥ n0. It follows from (3.12.4) and (3.12.8) that Qi PXiβ (si + r) ∩ ( ⋃ j:tj≤si supp(Y j) ) 6= ∅, min j:tj≤si ( σY j β − tj ) > 3r, σX i β − si > 2r  ≤Qi ( ∞⋃ n=n1+1 ⋃ q=L,0,R [ PXiβ (si + r) ∩ supp ( Y (n),q ) 6= ∅ ]) + n1∑ n=n0 ∑ q=L,0,R Qi ( PXiβ (si + r) ∩ supp ( Y (n),q ) 6= ∅, min j:tj≤si ( σY j β − tj ) > 3r, σX i β − si > 2r ) . (3.12.9) In the following (Step 3)–(Step 5), we bound the Qi-probabilities in (3.12.9). A summary is then given in (Step 6) to prove the desired inequality. (Step 3). Consider the first probability in (3.12.9). In essence, we will show that it is small because there is small Qi-probability for the Y j processes to 173 3.12. Some properties of support processes land in ⋃∞ n=n1+1 (RLn ∪R0n ∪RRn). We have Qi ( ∞⋃ n=n1+1 ⋃ q=L,0,R [ PXiβ (si + r) ∩ supp ( Y (n),q ) 6= ∅ ]) ≤ Qi  ∑ j:tj≤si 1⋃∞ n=n1+1 (RLn∪R0n∪RRn)(yj , tj) ≥ 1  ≤ ∑ j:tj≤si Qi ( (yj , tj) ∈ ∞⋃ n=n1+1 (RLn ∪R0n ∪RRn) ) ≤ ∑ j:si−2−n1≤tj<si Qi ( yj ∈ [xi − 7 · 2−(n1+1)β, xi + 7 · 2−(n1+1)β] ) , (3.12.10) where the last inequality follows since ∞⋃ n=n1+1 (RLn ∪R0n ∪RRn) = [xi−7·2−(n1+1)β, xi+7·2−(n1+1)β]×[si−2−n1 , si]\{(xi, si)} by an argument analogous to (3.12.7). Note that, by (3.12.2), #{j : si − 2−n1 ≤ tj < si} ≤ #{j; si − 2ε ≤ tj < si} = 2. 
Applying the last display and Lemma 3.8 (4) to (3.12.10), we obtain Qi ( ∞⋃ n=n1+1 ⋃ q=L,0,R [ PXiβ (si + t) ∩ supp ( Y (n),q ) 6= ∅ ]) ≤ 2 · ‖ψ‖∞ ψ(1) 14 · 2−(n1+1)β ≤ 28‖ψ‖∞ ψ(1) εβ, (3.12.11) where the last inequality follows from (3.12.2). In (Step 4) and (Step 5) below, we bound the summands indexed by n0 ≤ n ≤ n1 and q = L, 0, R on the right-hand side of (3.12.9) . We will aim at computing the Qi-expectations in a way that we can regard each Y j cluster in RLn∪R0n∪RRn as a (Gs)s≥tj -super-Brownian motion. (Any of them is clearly a true (Gs)s≥tj -super-Brownian motion up to time si under Qi by Lemma 3.7 and the fact that Xisi(1) ≡ ψ(1)ε.) 174 3.12. Some properties of support processes (Step 4). We consider in this step the summands indexed by q = 0. Then we may consider n with n0 ≤ n ≤ n1 and si > 2−n; otherwise there cannot be any seed (yj , tj) falling inside R0n, and trivially the corresponding Qi- probability is zero. We will show that, on the event that (3.12.3) holds, the total mass process of Y (n),0 dies out with high probability before time si−2−n−1 and hence plainly, with this high probability, the support of Y (n),0 cannot intersect PXiβ (si + r) by time si + r. In this direction, we consider for the following probability the size of Y (n),0 relative to a threshold h > 0 right after all of its immigrants have landed: Qi ( Y (n),0 si−2−n−1(1) > 0 ) ≤Qi ( Y (n),0 si−2−n ≥ h ) + EQ i [ Qi ( T Y (n),0 0 ≥ si − 2−n−1 ∣∣∣Gsi−2−n) ;Y (n),0si−2−n(1) < h] ≤Qi ( Y (n),0 si−2−n ≥ h ) + EQ i [ 2Y (n),0 2−n−1 ;Y (n),0 si−2−n(1) < h ] ≤Qi ( Y (n),0 si−2−n ≥ h ) + 4h · 2n, (3.12.12) where the second inequality follows from (3.6.21). To compute the probability on the right-hand side of (3.12.12), we use Markov’s inequality and get Qi ( Y (n),0 si−2−n(1) ≥ h ) ≤h ∑ j:tj≤si EQ i [ Y j si−2−n(1)1R0n(yj , tj) ] =h ∑ j:tj≤si EP [ Y j si−2−n(1)1R0n(yj , tj) ] =ψ(1)εh ∑ j:si−2−n+1≤tj≤si−2−n P ( yj ∈ [ xi − 7 · 2−nβ, xi + 7 · 2−nβ ]) ≤ψ(1)εh#{j ∈ N; si − 2−n+1 ≤ tj ≤ si − 2−n} · ‖ψ‖∞ ψ(1) 14 · 2−nβ. (3.12.13) Here, the first equality follows since Xisi(1) = ψ(1)ε, the second equality fol- lows from Lemma 3.8 (3), and the second inequality follows from Lemma 3.8 (4). Note that #{j ∈ N : si − 2−n+1 ≤ tj ≤ si − 2−n} ≤ ε−12−n + 1. 175 3.12. Some properties of support processes Applying the last inequality to (3.12.13) with h set to be 2−n(1+β− 1 6), we obtain Qi ( Y (n),0 si−2−n(1) ≥ 2 −n(1+β− 16) ) ≤ψ(1)ε2n(1+β− 16) (ε−12−n + 1) · ‖ψ‖∞ ψ(1) · 14 · 2−nβ ≤14‖ψ‖∞2n(1− 1 6) ( 2−n + ε ) ≤28‖ψ‖∞2n(1− 1 6)2−n =28‖ψ‖∞2−n6 , (3.12.14) where in the third inequality we use ε ≤ 2−n1 ≤ 2−n by (3.12.2). Applying (3.12.14) to (3.12.12) with the same choice of h gives Qi ( Y (n),0 si−2−n−1(1) > 0 ) ≤28‖ψ‖∞ · 2−n6 + 4 · 2−n(1+β− 1 6) · 2n ≤ (28‖ψ‖∞ + 4) · 2−n6 , (3.12.15) since β ≥ 13 . As Y (n),0 has to survive beyond si − 2−n−1 in order to invade PXiβ (si + r), (3.12.15) gives Qi ( PXiβ (si + r) ∩ supp ( Y (n),0 ) 6= ∅, min j:tj≤si ( σY j β − tj ) > 3r, σX i β − si > 2r ) ≤ (28‖ψ‖∞ + 4) · 2−n6 . (3.12.16) (Step 5). We consider the summands in (3.12.9) arising from Y (n),R for n0 ≤ n ≤ n1. Essentially we will bound them in the same way as we do in (Step 4). Nonetheless, since the Y j clusters in Y (n),R can arrive up to si− ε2 , Y (n),R now survives beyond si (with high probability) and hence we need to modify the calculation of Feller-diffusions in (3.12.12) to accommodate this change. Fix such n with n0 ≤ n ≤ n1. 
First, we claim that Qi ( PXiβ (si + r) ∩ supp ( Y (n),R ) 6= ∅, min j:tj≤si ( σY j β − tj ) > 3r, σX i β − si > 2r ) ≤ Qi ( Y (n),R si+2−n (1) > 0, min j:tj≤si ( σY j β − tj ) > 3r, σX i β − si > 2r ) . (3.12.17) 176 3.12. Some properties of support processes Suppose now (3.12.3) holds, and we have Y (n),R si+2−n (1) = 0. Then Y (n),Ru = 0 for all u ≥ si + 2−n. Under (3.12.2) and (3.12.3), σY jβ > si + 2−n for each (yj , tj) in RRn, and hence supp ( Y (n),R ) ⊆ Γ , { (x, s) ∈ R× [si − 2−n, si + 2−n]; x ∈ [xi + 7 · 2−(n+1)β − (s− si + 2−n)β − ε1/2, xi + 7 · 2−nβ + (s− si + 2−n)β + ε1/2] } . (3.12.18) Note that the minimum distance between Γ and PXiβ (si + 2−n) is given by( xi + 7 · 2−(n+1)β − (2−n + 2−n)β − ε1/2 ) − ( xi + ε 1/2 + 2−nβ ) ≥7 · 2−β · 2−nβ − 2β · 2−nβ − 2 · 2−nβ − 2−nβ ≥ ( 7 · 2− 12 +( 12−β) − 2 12−( 12−β) − 3 ) · 2−nβ > 0, so Γ ∩ PXiβ (si + 2−n) = ∅. On the other hand, since the set in (3.12.18) only extends up to si + 2−n and 2−n ≤ r, Γ must be is disjoint from PXiβ (si + r). Our claim (3.12.17) now follows from the implication (3.12.18) of Y (n),R si+2−n (1) = 0. Next, we consider invoking Feller diffusions to calculate the right-hand side of (3.12.17). Let us start with the inequality: Qi ( Y (n),R si+2−n (1) > 0, min j:tj≤si ( σY j β − tj ) > 3r, σX i β − si > 2r ) ≤ 1 ψ(1)ε EP [ Xisi+2−n(1) TX i 1 ;Y (n),R si+2−n (1) > 0, σX i β − si > 2−n, σY j β − si > 2−n, ∀ (yj , tj) ∈ RRn ] , (3.12.19) where the restriction for σY jβ applies since r ≥ si and 2r ≥ 2−n0 ≥ 2−n. To evaluate the right-hand side of (3.12.19), first note that under P,Xi(1)[si,∞) and Y (n),R(1)[si,∞) are (Gs)s≥si-Feller diffusions with independent starting values. Define a (Gs)s≥si-stopping time σ⊥ by σ⊥ = σXiβ ∧ ∧ j:tj≤si σ̂Y j β ∧ (si + 2−n)  ∨ si, 177 3.12. Some properties of support processes where the (Gs)s≥si-stopping times σ̂Y j β are given by σ̂Y j β = { σY j β , (yj , tj) ∈ RRn, ∞, otherwise. Note that PXiβ ( si + 2 −n) ∩ PY jβ (si + 2−n) = ∅ (3.12.20) for any j ∈ N with (yj , tj) ∈ RRn, since the minimum distance between PXi(si + 2−n) and PY jβ (si + 2−n) is given by( yj − (si + 2−n − tj)β − ε1/2 ) − ( xi − 2−nβ − ε1/2 ) ≥7 · 2−(n+1)β − [si + 2−n − (si − 2−n)]β − 2−nβ − 2 · 2−nβ ≥ ( 7 · 2−β − 2β − 3 ) · 2−nβ > 0. (3.12.21) Hence, we always have 〈 Xi(1), Y (n),R(1) 〉σ⊥ = 0. This allows us to do orthogonal continuation to Xi(1) beyond σ⊥ (cf. Lemma 3.20), and thereby we get a (Gs)s≥si-Feller diffusion X̂i independent of Y (n),r(1)[si,∞) and satisfying X̂i = Xi(1) over [si, σ⊥], under P. From (3.12.19), we obtain Qi ( Y (n),R si+2−n (1) > 0, min j:tj≤si ( σY j β − tj ) > 3r, σX i β − si > 2r ) ≤ 1 ψ(1)ε EP [ Xisi+2−n(1) TX i 1 ;Y (n),R si+2−n (1) > 0, σ⊥ = si + 2−n ] = 1 ψ(1)ε EP [( X̂isi+2−n )T X̂i1 ;Y (n),R si+2−n (1) > 0, σ⊥ = si + 2−n ] ≤ 1 ψ(1)ε EP [( X̂isi+2−n )T X̂i1 ;Y (n),R si+2−n (1) > 0 ] = P ( Y (n),R si+2−n (1) > 0 ) , (3.12.22) where the last quantity now allows a calculation of Feller diffusions. With 178 3.12. Some properties of support processes an argument similar to (3.12.12) and the choice h = 2−n(1+β− 1 6), we have P ( Y (n),R si+2−n (1) > 0 ) ≤ P ( Y (n),Rsi ≥ 2−n(1+β− 1 6) ) + 2 · 2n · 2−n(1+β− 16) ≤ ψ(1)ε2n(1+β− 16)#{j ∈ N; si − 2−n ≤ tj < si} · ‖ψ‖∞ ψ(1) 14 · 2−nβ + 2 · 2−n(β− 16) ≤ (28‖ψ‖∞ + 2) 2−n6 , (3.12.23) where the last inequality follows since β ≥ 13 and #{j ∈ N; si − 2−n ≤ tj < si} ≤ ε−12−n + 1. Now, we apply (3.12.22) and (3.12.23) to bound the probability on the left- hand side of (3.12.17). 
By symmetry, the resulting bound also holds when Y (n),R is replaced by Y (n),L. We have shown that Qi ( PXiβ (si + r) ∩ supp ( Y (n),q ) 6= ∅, min j:tj≤si ( σY j β − tj ) > 3r, σX i β − si > 2r ) ≤ (28‖ψ‖∞ + 2) 2−n6 , q = L, R, ∀ n0 ≤ n ≤ n1. (3.12.24) We obtain (3.12.20) from (3.12.21). (Step 6). We now apply (3.12.11), (3.12.16), and (3.12.24) to (3.12.9). This results in Qi PXiβ (si + r) ∩ ( ⋃ j:tj≤si supp(Y j) ) 6= ∅, min j:tj≤si ( σY j β − tj ) > 3r, σX i β − si > 2r  ≤ 28‖ψ‖∞ ψ(1) εβ + n1∑ n=n0 (84‖ψ‖∞ + 8) 2−n6 ≤ 28‖ψ‖∞ ψ(1) εβ + [( ∞∑ n=0 (84‖ψ‖∞ + 8) 2−n6 ) · 2 16 ] · 2−n0−16 . Since 2−n0−1 ≤ r by (3.12.2) an εβ ≤ rβ ≤ r 16 , our assertion now follows from the last inequality. Finally, we deal with the simple case where the clusters are born after the birth time si of Xi but outside the rectangle RXiβ (si + r) defined by (3.6.4). 179 3.12. Some properties of support processes Lemma 3.53. Let r ∈ (0,∞). Then for any j ∈ N with tj ∈ (si, si + r] and |yj − xi| > 2 ( ε1/2 + rβ ) , PXiβ (si + r) ∩ PY j β (si + r) = ∅. Proof. We only consider the case that xi < yj , as the other case follows by symmetry. The minimum distance between PXiβ (si + r) and PY j β (si + r) is given by ( yj − (si + r − tj)β − ε1/2 ) − ( xi + r β + ε1/2 ) > 2ε1/2 + 2rβ − (si + r − tj)β − rβ − 2ε1/2 ≥ 2rβ − rβ − rβ = 0. The proof is complete. 180 si, ti s0 = t0 = 0 and si = (i− 12 )ε, ti = iε for i ∈ N. Jxε (z) ε 1/2J ( (x − z)ε−1/2); J is an even C+(R)-function bounded by 1 with supp(J) ⊆ [−1, 1]. 1 1(x) ≡ 1. f(Γ), f(φ) f(Γ) = ∫ Γ f(x)dx and f(φ) = ∫ R φ(x)f(x)dx. |f |λ supx∈R eλ|x||f(x)|. ‖f‖rap ∑∞λ=1(|f |λ ∧ 1)/2λ. Crap(R) The function space of rapidly decreasing functions f satisfying |f |λ <∞ for any λ ∈ (0,∞), equipped with the complete separable metric ‖ ·‖rap. Z⊥⊥G The random element Z and the σ-field G are independent. Analogous conventions apply to other pairs of objects which are independent in the usual sense. TZa inf{t ∈ R+;Zt = a} if Z is a real-valued one-parameter process. TZa = T Z(1) a if Z = {Z(x, t), (x, t) ∈ R× R+}. Pε Probability measure associated with the ε-approximating solutions. Qiε Pε ( · ∣∣TXi1 <∞). Pδz Law of 14BESQ 4δ(4z). α, β, β′ Auxiliary parameter close to 1 2 and in (0, 1 2 ). η Auxiliary parameter close to 1 and in (1,∞). ξ Auxiliary parameter close to 1 and in (0, 1). J iβ′(t, t′), J iβ′(t) J iβ′(t, t′) = { j ∈ N; |yj − xi| ≤ 2 ( ε1/2 + (t− si)β ) , si < tj ≤ t′ } and J iβ′(t) = J iβ′(t, t). PXiβ (t),PY i β (t) PX i β (t) = { (x, s) ∈ R× [si, t]; |x− xi| ≤ ( ε1/2 + (s− si)β )} . RXiβ′ (t) [ xi − 2(ε1/2 + (t− si)β′), xi + 2(ε1/2 + (t− si)β′) ]× [si, t]. σX i β , σ Y i β σ Xi β = inf { t ≥ si; supp(Xit) ⊆\ [ xi−ε1/2−(t−si)β , xi+ε1/2+(t−si)β ]} . A <a B A ≤ CB for some constant C ∈ (0,∞). The dependences on parameters for C are specified section-wise. Table 3.1: List of frequent notation for Chapter 3 181 Chapter 4 Conclusion In Chapter 2, our main result Theorem 2.11 for voter model perturbations on finite systems explains how the difference kernel (1.1.6) enters the first-order approximation of fixation probabilities. The connection between this kernel and equilibria of voter model perturbations on infinite spatial structures re- mains unclear in general, although it has been well understood on integer lattices in more than two dimensions [9]. On the other hand, in view of our mild assumptions imposed on the underlying graphs, one would hope that some progress could be made by passing limit along a sequence of growing graphs. 
There is, however, a large gap in justifying the idea of passing to such a limit, since our first-order expansion (1.1.9) is valid only for $w$ up to a bound depending on the underlying graph. Nonetheless, understanding voter model perturbations on infinite spatial structures is important not only for theoretical interest but also for fully establishing the universality of the simple rules in [37].

Another issue arising from the evolutionary games studied in Chapter 2 is the robustness of the cut-offs. We assumed that the graphs are simple. What effect would the addition of multiple edges or loops have on the $b/c>k$ cut-off? Is this related to the $k+2$ cut-off for imitation updating?

In Chapter 3, we obtain pathwise non-uniqueness in the SPDE (1.2.9), and the main step in allowing immigrant-wise semimartingale calculations is our use of continuous decompositions for the approximating solutions. On the one hand, the generality of this method suggests that it may be of independent interest in studying superprocesses with immigration. On the other hand, these decompositions use the branching property of super-Brownian motions, and hence they do not extend immediately to the open problem of pathwise non-uniqueness of nonnegative solutions of other SPDE's taking the form (1.2.16), where $X^p$ is in place of $X^{1/2}$ and $p\in(\frac12,\frac34)$. Despite this difficulty, we observe a gap between the local growth rates of the approximating solutions (cf. (3.1.4)). This indicates that the case treated here is far from sharp, and hence, in view of the result in [30], we have the following conjecture.

Conjecture. There is pathwise non-uniqueness for nonnegative solutions of the SPDE (1.2.9) with $X^{1/2}$ replaced by $X^p$ for any $p\in(\frac12,\frac34)$.

This conjecture is in fact parallel to existing results for SDE's. To see this, note that for SDE's of the form
\[
dX_t=b\,dt+X^p_t\,dB_t,\qquad b\in(0,\infty),\ p\in\bigl(0,\tfrac12\bigr),\ X\ge 0,
\]
where $B$ is a standard Brownian motion, there is pathwise non-uniqueness for nonnegative solutions (see [6], and note that there is, however, uniqueness in law for these SDE's). The result of the above conjecture, together with the analogous result in [6] for exponents strictly less than $\frac12$, would thus mirror this SDE setting, but with $p=\frac34$ in place of $p=\frac12$ and with the theorem on pathwise uniqueness in [32] playing the role of the theorem due to Yamada and Watanabe [47].
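The SDE displayed above is easy to explore numerically, with an important caveat: a discretization cannot by itself detect pathwise non-uniqueness, which is a statement about exact solutions driven by one and the same Brownian path. The following Euler-Maruyama sketch merely simulates the dynamics near the non-Lipschitz point $0$; the function name and all parameter values are arbitrary illustrative choices.

```python
# Illustrative Euler-Maruyama sketch for dX = b dt + X^p dB with X >= 0.
# This is a toy simulation only: a discrete scheme cannot exhibit pathwise
# non-uniqueness. Parameters are arbitrary.
import numpy as np

def euler_path(b=0.5, p=0.25, x0=0.0, T=1.0, dt=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dB = rng.standard_normal() * np.sqrt(dt)
        # drift b > 0 pushes the path up from 0, where the diffusion
        # coefficient x**p fails to be Lipschitz; floor at 0 to keep X >= 0
        x[k + 1] = max(x[k] + b * dt + x[k] ** p * dB, 0.0)
    return x

print(euler_path()[-1])  # one sample of X_T started from X_0 = 0
```

Roughly speaking, it is the excursions away from $0$, where $x\mapsto x^p$ fails to be Lipschitz, that are exploited in [6] to separate two nonnegative solutions.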
Bibliography

[1] Aldous, D. and Fill, J. A. (1994). Reversible Markov Chains and Random Walks on Graphs. Monograph in preparation, available at http://www.stat.berkeley.edu/~aldous/RWG/book.html.
[2] Asmussen, S. (2003). Applied Probability and Queues, 2nd ed. Applications of Mathematics 51. Springer, New York.
[3] Axelrod, R. and Hamilton, W. D. (1981). The evolution of cooperation. Science 211 1390–1396.
[4] Bass, R. F., Burdzy, K., and Chen, Z.-Q. (2007). Pathwise uniqueness for a degenerate stochastic differential equation. Ann. Probab. 35 2385–2418.
[5] Bollobás, B. (1998). Modern Graph Theory. Graduate Texts in Mathematics 184. Springer, New York.
[6] Burdzy, K., Mueller, C., and Perkins, E. A. (2010). Nonuniqueness for nonnegative solutions of parabolic stochastic partial differential equations. Illinois J. Math. 54 1481–1507.
[7] Chen, Y.-T. (2010). The b/c rule under the imitation updating on Z^d for d ≥ 3. Unpublished manuscript.
[8] Chen, Y.-T. and Delmas, J.-F. (2012). Smaller population size at the MRCA time for stationary branching processes. Ann. Probab. 40 2034–2068.
[9] Cox, J. T., Durrett, R., and Perkins, E. A. (2011). Voter model perturbations and reaction diffusion equations. To appear in Astérisque.
[10] Cox, J. T. and Perkins, E. A. (2005). Rescaled Lotka-Volterra models converge to super-Brownian motion. Ann. Probab. 33 904–947.
[11] Etheridge, A. M. (2000). An Introduction to Superprocesses. University Lecture Series. American Mathematical Society, Rhode Island.
[12] Ethier, S. N. and Kurtz, T. G. (2005). Markov Processes: Characterization and Convergence, reprint edition. John Wiley & Sons, Inc., New Jersey.
[13] Hamilton, W. D. (1964). The genetical evolution of social behaviour. I. J. Theoret. Biol. 7 1–16.
[14] Harris, T. E. (1976). On a class of set-valued Markov processes. Ann. Probab. 4 175–194.
[15] Harris, T. E. (1978). Additive set-valued Markov processes and graphical methods. Ann. Probab. 6 355–378.
[16] Harris, T. E. (1989). The Theory of Branching Processes. Dover Publications, Inc., New York.
[17] Hauert, C. (2008). Evolutionary dynamics. In Evolution from Cellular to Social Scales, NATO Science for Peace and Security Series 11–44. Springer, Dordrecht.
[18] Hofbauer, J. and Sigmund, K. (1998). Evolutionary Games and Population Dynamics. Cambridge University Press.
[19] Jacod, J. and Shiryaev, A. N. (2003). Limit Theorems for Stochastic Processes, 2nd edition. Springer Verlag, Berlin.
[20] Kallenberg, O. (2002). Foundations of Modern Probability, 2nd edition. Springer Verlag, Berlin.
[21] Knight, F. B. (1981). Essentials of Brownian Motion and Diffusion. Mathematical Surveys Vol. 18. American Mathematical Society, Providence, Rhode Island.
[22] Konno, N. and Shiga, T. (1988). Stochastic partial differential equations for some measure-valued diffusions. Probab. Theory Related Fields 79 201–225.
[23] Kurtz, T. G. (2007). The Yamada-Watanabe-Engelbert theorem for general stochastic equations and inequalities. Electronic J. Probab. 12 951–965.
[24] Le Gall, J.-F. (1999). Spatial Branching Processes, Random Snakes and Partial Differential Equations. Lectures in Mathematics. Birkhäuser, Basel.
[25] Liggett, T. M. (2005). Interacting Particle Systems, reprint of the 1985 edition with a new postface. Classics in Mathematics. Springer, Berlin.
[26] Lovász, L. (1996). Random walks on graphs: A survey. In Combinatorics, Paul Erdős is Eighty, Vol. 2, Bolyai Soc. Math. Stud. 353–398. János Bolyai Math. Soc., Budapest.
[27] Mayer, J. E. and Montroll, E. (1941). Molecular distribution. J. Chem. Phys. 9 2–16.
[28] Maynard Smith, J. (1982). Evolution and the Theory of Games. Cambridge University Press.
[29] Mueller, C. and Perkins, E. A. (1992). The compact support property for solutions to the heat equation with noise. Probab. Theory Related Fields 93 325–358.
[30] Mueller, C., Mytnik, L., and Perkins, E. A. (2012). Nonuniqueness for a parabolic SPDE with 3/4 − ε diffusion coefficients. Available at http://arxiv.org/abs/1201.2767.
[31] Mytnik, L. (1998). Weak uniqueness for the heat equation with noise. Ann. Probab. 26 968–984.
[32] Mytnik, L. and Perkins, E. A. (2011). Pathwise uniqueness for stochastic heat equations with Hölder continuous coefficients: the white noise case. Probab. Theory Related Fields 149 1–96.
[33] Mytnik, L., Perkins, E. A., and Sturm, A. (2006). On pathwise uniqueness for stochastic heat equations with non-Lipschitz coefficients. Ann. Probab. 34 1910–1959.
[34] Nowak, M. A. and Sigmund, K. (1990). The evolution of stochastic strategies in the prisoner's dilemma. Acta Appl. Math. 20 247–265.
[35] Nowak, M. A. (2006). Five rules for the evolution of cooperation. Science 314 1560–1563.
[36] Nowak, M. A. (2006). Evolutionary Dynamics: Exploring the Equations of Life. Belknap Press of Harvard University Press.
[37] Ohtsuki, H., Hauert, C., Lieberman, E., and Nowak, M. A. (2006). A simple rule for the evolution of cooperation on graphs and social networks. Nature 441 502–505.
[38] Ohtsuki, H. and Nowak, M. A. (2006). Evolutionary games on cycles. Proc. R. Soc. B 273 2249–2256.
[39] Perkins, E. A. (2002). Dawson-Watanabe superprocesses. Proceedings of the 1999 Saint Flour Summer School in Probability, Lect. Notes in Math. Vol. 1781, 132–329. Springer-Verlag, Berlin.
[40] Shiga, T. (1994). Two contrasting properties of solutions for one-dimensional stochastic partial differential equations. Can. J. Math. 46 415–437.
[41] Reimers, M. (1989). One dimensional stochastic partial differential equations and the branching measure diffusion. Probab. Theory Related Fields 81 319–340.
[42] Rogers, L. C. G. and Williams, D. (1987). Diffusions, Markov Processes, and Martingales, Volume 2: Itô Calculus. Cambridge Mathematical Library. John Wiley & Sons, Chichester, New York, Brisbane, Toronto, Singapore.
[43] Rudin, W. (1987). Real and Complex Analysis, 3rd ed. McGraw Hill.
[44] Revuz, D. and Yor, M. (2005). Continuous Martingales and Brownian Motion, corrected 3rd printing of the 3rd ed. Springer Verlag, Berlin.
[45] Sigmund, K. (2010). The Calculus of Selfishness. Princeton Series in Theoretical and Computational Biology. Princeton University Press, New Jersey.
[46] Taylor, P. D., Day, T., and Wild, G. (2007). Evolution of cooperation in a finite homogeneous graph. Nature 447 469–472.
[47] Yamada, T. and Watanabe, S. (1971). On the uniqueness of solutions of stochastic differential equations. J. Math. Kyoto Univ. 11 155–167.
[48] Walsh, J. (1986). An introduction to stochastic partial differential equations. In Lect. Notes in Math., Vol. 1180, 265–439. Springer-Verlag, Berlin.
