Critical Branching Random Walks, Branching Capacity and Branching Interlacements

by

Qingsan Zhu

B.Sc., Peking University, 2008
M.Sc., Peking University, 2011

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate and Postdoctoral Studies (Mathematics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

August 2017

© Qingsan Zhu 2017

Abstract

This thesis concerns critical branching random walks. We focus on supercritical (d ≥ 5) and critical (d = 4) dimensions.

In this thesis, we extend the potential theory for random walk to critical branching random walk. In supercritical dimensions, we introduce branching capacity for every finite subset of Zd and establish its connections with critical branching random walk through the following three perspectives:

1. the visiting probability of a finite set by a critical branching random walk starting far away;
2. branching recurrence and branching transience;
3. the local limit of branching random walk on the torus conditioned on the total size.

Moreover, we establish the model which we call 'branching interlacements' as the local limit of branching random walk on the torus conditioned on the total size.

In the critical dimension, we also construct some parallel results. On the one hand, we give the asymptotics of the probability of visiting a finite set and the convergence of the conditional hitting point. On the other hand, we establish the asymptotics of the range of a branching random walk conditioned on the total size.

Also in this thesis, we analyze a small game which we call the Majority-Markov game and give an optimal strategy.

Lay Summary

This thesis investigates a probabilistic model called branching random walk, which combines two classical subjects in probability theory: random walk and branching process. A branching random walk is a random process consisting of a finite number of particles performing independent random walks; at every time step, each particle gives birth to a random number of new particles (particles are added) and then dies (particles are removed), and the new particles then begin independent random walks from the locations of their parents. Of particular challenge to the analysis is a critical branching random walk, where the expected number of offspring of each particle is one. The main contribution of this thesis is to develop new knowledge about critical branching random walks by building an analogy with classical results on random walk.

Preface

This dissertation is ultimately based on the original work of the author, under the supervision of Professor Omer Angel.

A version of Chapter 2 has been divided into several preprint papers, which are currently under review for publication and posted on the arXiv ([26–29]). I am responsible for all of the proofs and writing of Chapter 2 and Chapter 4.

Chapter 3 is based on joint work with Professor Omer Angel and Dr. Balázs Ráth ([2]), which is under review for publication.

Table of Contents

Abstract
Lay Summary
Preface
Table of Contents
List of Figures
Acknowledgements
Dedication
1 Introduction
  1.1 Critical branching random walks
  1.2 Range of critical branching random walk conditioned on total number of progeny
  1.3 Potential theory for random walk and our parallel results
    1.3.1 Supercritical dimensions (d ≥ 5)
    1.3.2 The critical dimension (d = 4)
  1.4 Branching interlacements
  1.5 An optimal strategy for the Majority-Markov game
2 Critical branching random walks
  2.1 Preliminaries
    2.1.1 Finite and infinite trees
    2.1.2 Tree-indexed random walk
    2.1.3 Random walk with killing
    2.1.4 Some facts about random walk and the Green function
  2.2 Branching capacity and visiting probabilities
    2.2.1 Monotonicity and subadditivity
    2.2.2 A key observation
    2.2.3 Convergence of the Green function
    2.2.4 Proof of Theorem 1.3.1
    2.2.5 The asymptotics for q(x), q−(x) and r(x)
    2.2.6 Convergence of the conditional entering measure
    2.2.7 Branching capacity of balls
    2.2.8 Proof of Theorem 1.3.5
    2.2.9 Bounds for the Green function
    2.2.10 Proof of Theorem 1.3.3
  2.3 Branching capacity and branching recurrence
    2.3.1 Inequalities for convolved sums
    2.3.2 Restriction lemmas
    2.3.3 Visiting probability by an infinite snake
    2.3.4 Upper bounds for the probabilities of visiting two sets
    2.3.5 Proof of Wiener's Test
    2.3.6 Proof of Lemma 2.3.17
  2.4 The critical dimension: d = 4
    2.4.1 An upper bound
    2.4.2 The visiting probability
    2.4.3 Convergence of the first visiting point
    2.4.4 The range of branching random walk conditioned on the total size
3 Branching interlacements
  3.1 Preliminaries
    3.1.1 Plane trees, contour function and branching random walk
    3.1.2 Some results on simple random walk
  3.2 Basic model and some first properties
    3.2.1 Notations
    3.2.2 Simple random walk as a contour function and snakes
    3.2.3 Construction of the branching interlacement intensity measure
    3.2.4 Branching interlacement point process
  3.3 Branching random walk on the torus and branching interlacements
    3.3.1 Hitting probability of a set by a small snake
    3.3.2 Cutting trees
    3.3.3 Proof of the main theorem
4 An optimal strategy for the Majority-Markov game
  4.1 Definitions, settings and main result
    4.1.1 Markov systems
    4.1.2 Games
    4.1.3 Strategies and costs
    4.1.4 Grades and positive- (negative-) grades
    4.1.5 Main result
  4.2 Some known results about Markov games
  4.3 Proof of the main theorem
Bibliography
Appendix
A Sketch of Proof of Lemma 2.1.3

List of Figures

1.1 A simple example of Majority-Markov game
3.1 Construction of 2-sided infinite snake

Acknowledgements

I am greatly indebted to my supervisor Professor Omer Angel for his guidance throughout my PhD study. He always gives me interesting questions and useful suggestions.

I would like to thank Professor Edwin A. Perkins and Professor Gordon Slade for serving on my advisory committee, and Dr. Balázs Ráth and the members of the Probability Group at the University of British Columbia for their help and discussions.

I also thank my colleagues in the Mathematics Department at UBC, especially Li Xiaowei, Wang Li, Yang Wen and Ye Zichun, for the happy time we studied and played together.

Last but not least, I want to express special thanks to my parents for their support.

Dedication

To my parents.

Chapter 1

Introduction

This thesis studies some properties of critical branching random walks in dimension four and higher.

In the first part, we extend the theory of discrete capacity for random walk to critical branching random walk. We introduce branching capacity for any finite subset of Zd, for d ≥ 5, and establish its connections with critical branching random walks. In dimension four, we give some parallel results.

In the second part, we introduce the model of branching interlacements. We show that this model is the local limit of critical branching random walk on the torus.

1.1 Critical branching random walks

As the name suggests, a branching random walk can be viewed as a system of particles performing random walks while branching (deterministically or randomly). We are mainly interested in the case when the branching is also random. For branching random walks with deterministic branching, one can refer to the lecture notes [18]. In our situation, there are two levels of random mechanism: one for the branching and the other for the random walk. To define a branching random walk, we need to fix two distributions µ and θ for the randomness:

1. µ is a probability measure on N;
2. θ is a probability measure on Zd.

Definition 1.1.1. A branching random walk starting from x ∈ Zd, with offspring distribution µ and jump distribution θ, can be described as follows. At time 0, a particle is located at x. Suppose that, at time n, a particle v is located at S(v). At time n + 1, v dies and gives birth to a random number, distributed according to µ, of children. Each child then moves to a new location S(v) + Y, with increment Y distributed according to θ. Different particles behave independently. Let T be the collection of all particles at all times. Then (S(v))v∈T forms a branching random walk.

In this thesis, we study critical branching random walk.
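Definition 1.1.1 translates directly into a few lines of code. The following Python sketch is ours, not part of the thesis; the critical geometric offspring law and the simple random walk step are concrete choices made only for illustration.

    import random

    def sample_geo():
        # critical geometric offspring law: P(k children) = 2^-(k+1), mean 1
        k = 0
        while random.random() < 0.5:
            k += 1
        return k

    def sample_step(d):
        # uniform step to one of the 2d unit neighbours in Z^d
        e = [0] * d
        e[random.randrange(d)] = random.choice((-1, 1))
        return e

    def branching_random_walk(x, offspring=sample_geo, step=sample_step):
        """Generations of a branching random walk from x (Definition 1.1.1)."""
        gens = [[tuple(x)]]
        while gens[-1]:
            nxt = []
            for pos in gens[-1]:                 # particle v at S(v) ...
                for _ in range(offspring()):     # ... dies and begets children,
                    y = step(len(pos))           # each displaced by Y ~ theta
                    nxt.append(tuple(p + s for p, s in zip(pos, y)))
            gens.append(nxt)
        return gens

    gens = branching_random_walk((0, 0, 0, 0))
    print(len(gens) - 1, "generations,", sum(map(len, gens)), "particles in total")

Since the offspring law is critical, extinction is almost sure and the loop terminates; the heavy tail of the total progeny is exactly what makes the quantities studied in this thesis delicate.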
By 'critical' we mean that

Eµ = 1.

We always assume this and rule out the degenerate case, i.e. µ(1) = 1 (unless otherwise specified). For the jump distribution we assume that it is centered in the following sense:

Eθ = 0.

In addition, for technical reasons, we always assume the following moment conditions unless otherwise specified:

• µ has finite variance σ² > 0;
• θ is irreducible (i.e. not supported on a strict subgroup of Zd), and 'weak' Ld in the following sense: there exists C > 0 such that for any r ≥ 1,

θ({x ∈ Zd : |x| > r}) < C · r^{−d}.   (1.1.1)

Note that (1.1.1) holds if θ has finite d-th moment, and if (1.1.1) holds, then θ has finite b-th moment for any 0 < b < d. For some results we need stronger assumptions, which will be stated explicitly.

Remark 1.1.1. Why the critical case? Branching random walk generalizes both the branching process (no geometry) and the random walk (no branching). The branching process corresponding to our branching random walk is the so-called Galton-Watson process. It is classical that for nondegenerate (i.e. µ(1) ≠ 1) Galton-Watson processes, the extinction probability is one if Eµ ≤ 1, and strictly less than one if Eµ > 1. Moreover, the probability of survival to the n-th generation decays (as n → ∞) exponentially if Eµ < 1, and polynomially if Eµ = 1. Similarly, it turns out that the probability of visiting a distant point by a branching random walk starting from the origin has different asymptotics in three different regimes: when Eµ > 1, the probability is bounded below by a positive constant (namely, the probability of survival of the corresponding Galton-Watson process); when Eµ = 1, it decays to zero polynomially; when Eµ < 1, it decays to zero exponentially.

Remark 1.1.2. We have not striven for the greatest generality in the assumptions on µ and θ, and it is plausible that many results also hold under weaker assumptions, especially for θ.

On the other hand, the dimension d plays an important role in the study of critical branching random walk; there are three regimes:

1. supercriticality: d ≥ 5;
2. criticality: d = 4;
3. subcriticality: d ≤ 3.

One can get an analogous feeling for the critical dimension from random walk. It is well known that the critical dimension for random walk is d = 2, and there are many results reflecting this fact; here are a couple. The famous Pólya Recurrence Theorem states that a simple random walk on Zd is recurrent for d = 1, 2 and transient for d > 2. On the other hand, the range of a random walk with n steps behaves sublinearly (as n goes to infinity) if d = 1; linearly with a logarithmic correction if d = 2; and linearly if d ≥ 3. One will see that both results (together with many others) have analogues in the setting of critical branching random walk (see Corollary 1.3.11, Proposition 2.3.3, and the rest of this chapter).

In this thesis, we mainly focus on the supercritical and critical dimensions.
1.2 Range of critical branching random walk conditioned on total number of progeny

Le Gall and Lin ([13, 14]) have established the following results about the number of sites occupied by a critical branching random walk conditioned on the total number of offspring being n, denoted by Rn (under some regularity conditions on µ and θ):

(1/n) Rn → c1 in probability, as n → ∞, when d ≥ 5;   (1.2.1)
(log n / n) Rn → c2 in L², as n → ∞, when d = 4;
n^{−d/4} Rn → c3 λd(supp(I)) in distribution, as n → ∞, when d ≤ 3;

where ci, i = 1, 2, 3, are constants and λd(supp(I)) stands for the Lebesgue measure of the support of the random measure on Rd known as the Integrated Super-Brownian Excursion.

In the critical dimension, they assume that the offspring distribution µ is the critical geometric distribution (i.e. with parameter 1/2), while in the other dimensions they can handle very general offspring distributions.

In subcritical dimensions, they also established the asymptotics of the hitting probability of a distant point by critical branching random walk:

lim_{x→∞} ‖x‖² · P(Sx visits 0) = 2(4 − d)/(dσ²),   (1.2.2)

where Sx is a critical branching random walk starting at x, ‖x‖ = √(x · Q⁻¹x)/√d with Q the covariance matrix of θ, and σ² is the variance of µ.

They posed the following questions:

1. the asymptotics of P(Sx visits 0) in the other dimensions (d ≥ 4);
2. the range Rn in the critical dimension for general offspring distributions;
3. the range of branching random walk with a general initial configuration.

We answer the first two questions in this thesis. We show the following.

When d ≥ 5, we have

lim_{x→∞} ‖x‖^{d−2} · P(Sx visits 0) = ad c1.

When d = 4, we have

lim_{x→∞} ‖x‖² log ‖x‖ · P(Sx visits 0) = 1/(2σ²),

and

(log n / n) Rn → 16π²√(det Q)/σ² in probability, as n → ∞,

where c1 is the same constant as in (1.2.1) and ad is a constant depending on θ.

To summarize, we have:

P(Sx visits 0) ≈ ad c1/‖x‖^{d−2}, when d ≥ 5;
P(Sx visits 0) ≈ 1/(2σ²‖x‖² log ‖x‖), when d = 4;
P(Sx visits 0) ≈ 2(4 − d)/(dσ²‖x‖²), when d ≤ 3.

Here are the heuristics for the exponents. A typical random walk path connecting x and 0 has length of order ‖x‖². Hence, in order to reach a point at distance ‖x‖ (with not too small a probability), the corresponding branching process should survive for at least order ‖x‖² generations, an event of probability of order ‖x‖^{−2}. This alone is enough to give a visiting probability of order one relative to this barrier when the dimension is low enough. On the other hand, it is not difficult to see that the expected number of visits to 0 is the same as for random walk, which is of order ‖x‖^{2−d} (for d ≥ 3). Note that when d = 4 (the critical dimension), this is exactly ‖x‖^{−2}! This may be one of the most natural ways to remember the critical dimension. When the dimension is high (d ≥ 5 is enough), the number of visits conditioned on visiting is of order one, hence the probability of visiting has the same order as the expectation, which is just ‖x‖^{2−d}.
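To make this heuristic slightly more quantitative (the following computation is ours and is meant only as a mnemonic, not a proof), recall Kolmogorov's classical estimate that a critical Galton-Watson process survives to generation n with probability ∼ 2/(σ²n). Reaching a site at distance ‖x‖ essentially requires surviving for n ≍ ‖x‖² generations, so

P(Sx reaches distance ‖x‖) ⪅ P(survival to generation ‖x‖²) ≍ ‖x‖^{−2},

while, since the expected generation sizes all equal one,

E #{v ∈ T : S(v) = 0} = Σ_{n≥0} P(Sx(n) = 0) = g(x, 0) ≍ ‖x‖^{2−d}.

For d ≥ 5 the expectation ‖x‖^{2−d} is much smaller than the survival cost ‖x‖^{−2}, so conditionally on visiting there are O(1) visits and P(visit) ≍ ‖x‖^{2−d}; for d ≤ 3 the comparison reverses and the survival cost ‖x‖^{−2} dominates; the two orders coincide exactly at d = 4, which is where the logarithmic correction appears.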
1.3 Potential theory for random walk and our parallel results

There are two essential theories for random walk: one is the discrete potential theory, and the other concerns the scaling limit of random walk, i.e. Brownian motion. It is well known that the scaling limit of critical branching random walk is the Integrated Super-Brownian Excursion, or the Brownian snake. For more details about this, we refer the reader to [12], [8] and the references therein.

In this thesis, we focus on the discrete potential theory and extend it to critical branching random walk. For the discrete potential theory for random walk, see e.g. [10, 11, 20]. Let us first review some results on regular (discrete) capacity. For any finite subset K of Zd, d ≥ 3, the escape probability ESK(x) is defined to be the probability that a random walk starting from x ∈ Zd with symmetric jump distribution, denoted by Sx = (Sx(n))n∈N, never returns to K. The capacity of K, Cap(K), is given by:

Cap(K) = Σ_{a∈K} ESK(a).

We have

lim_{x→∞} ‖x‖^{d−2} · P(Sx visits K) = ad Cap(K),

where ad = Γ((d−2)/2) / (2 d^{(d−2)/2} π^{d/2} √(det Q)). Moreover, let τK = inf{n ≥ 1 : Sx(n) ∈ K}; then for any a ∈ K, we have

lim_{x→∞} P(Sx(τK) = a | Sx visits K) = ESK(a)/Cap(K).

ESK(a) is usually called the equilibrium measure, and the normalized measure ESK(a)/Cap(K) is called the harmonic measure of the set K. In fact, not only the distribution of the first visiting point, but also that of the last visiting point, conditioned on visiting K, converges to the same measure:

lim_{x→∞} P(Sx(ξK) = a | Sx visits K) = ESK(a)/Cap(K),

where ξK = sup{n ≥ 1 : Sx(n) ∈ K}.

The results above apply to any symmetric irreducible jump distribution satisfying some finite moment conditions. Unfortunately, we have not found a reference for nonsymmetric walks. However, the following result is well known and can be proved similarly to the symmetric case (see the Preface of [11]). When the jump distribution is irreducible, nonsymmetric, with mean zero and, for simplicity, finite range, we have:

lim_{x→∞} P(Sx(τK) = a | Sx visits K) = ES−K(a)/Cap(K),
lim_{x→∞} P(Sx(ξK) = a | Sx visits K) = ESK(a)/Cap(K),

where ES− is the escape probability for the reversed random walk.

In this thesis, we extend the theory of capacity for random walk to critical branching random walk. As we have mentioned, for critical branching random walk the critical dimension is d = 4 instead of d = 2 (for random walk). This fact is also reflected in many of our results.

1.3.1 Supercritical dimensions (d ≥ 5)

In Zd, d ≥ 5, we introduce branching capacity for any finite subset. In order to define branching capacity, one needs to introduce analogues of the escape probability. For a finite set K of Zd, denoted by K ⊂⊂ Zd, one could consider the probability that the branching random walk starting at x, denoted by Sx, avoids K. However, this turns out not to be the right generalization. Two different extensions of the escape probability need to be defined: one for the first and one for the last visiting point of K. We denote these by EsK(x) and EscK(x). Both correspond to infinite versions of branching random walk. We defer the complete definitions to Chapter 2. Formally, one can define the branching capacity of K by

BCap(K) = Σ_{z∈K} EsK(z)   (also = Σ_{z∈K} EscK(z)).

Then, we have:

Theorem 1.3.1. For any nonempty finite subset K of Zd and a ∈ K, we have

lim_{x→∞} ‖x‖^{d−2} · P(Sx visits K) = ad BCap(K);   (1.3.1)
lim_{x→∞} P(Sx(τK) = a | Sx visits K) = EsK(a)/BCap(K),
lim_{x→∞} P(Sx(ξK) = a | Sx visits K) = EscK(a)/BCap(K),

where τK and ξK are, respectively, the first and the last visiting time of K in a Depth-First search, and ad = Γ((d−2)/2) / (2 d^{(d−2)/2} π^{d/2} √(det Q)) is the same constant as in the random walk case.
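Although escape probabilities rarely have closed forms, they are easy to estimate numerically. The sketch below is ours; the cutoff radius R is a truncation we introduce (the walk is declared 'escaped' once it is R away, which biases the estimate slightly upward). It estimates Cap(K) = Σ_{a∈K} ESK(a) for simple random walk in Z⁵, which is symmetric, so the distinction between ES and ES− does not arise.

    import random

    def step(d):
        e = [0] * d
        e[random.randrange(d)] = random.choice((-1, 1))
        return e

    def escape_prob(a, K, d=5, R=40, trials=4000):
        """Estimate ES_K(a): the walk from a never returns to K.
        'Never' is truncated to 'not before leaving the ball of radius R'."""
        Kset = set(map(tuple, K))
        escaped = 0
        for _ in range(trials):
            pos = list(a)
            while sum(c * c for c in pos) <= R * R:
                s = step(d)
                pos = [p + q for p, q in zip(pos, s)]
                if tuple(pos) in Kset:
                    break                 # returned to K: not an escape
            else:
                escaped += 1
        return escaped / trials

    K = [(0, 0, 0, 0, 0), (1, 0, 0, 0, 0)]
    print("Cap(K) estimate:", sum(escape_prob(a, K) for a in K))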
Let us make some comments here. First, if µ is the degenerate measure (that is, µ(1) = 1), then the branching random walk is just the regular random walk, and EsK (respectively EscK) is just ES−K (respectively ESK). In this case, Theorem 1.3.1 is the classical result for random walk.

Second, this result tells us that, conditioned on visiting a fixed set, the first (or last) visiting point converges in distribution. It turns out that we can say more. In fact, we also show (see Section 2.2.6) that, conditioned on visiting K, the set of entering points converges in distribution. Since the distribution of the intersection between K and the range of Sx is determined by the entering points, we have:

Theorem 1.3.2. Conditioned on Sx visiting K, the intersection between K and the range of Sx converges in distribution, as x → ∞.

Third, this result gives the asymptotic behavior of the probability of visiting a fixed finite set by a critical branching random walk starting from far away (for dimension d ≥ 5), answering the first question at the end of Section 1.2 for supercritical dimensions. We also establish the asymptotic behavior of the visiting probability in the critical dimension d = 4 (see Theorem 1.3.12). Note that we give the asymptotics of the probability of visiting any finite set, while the original question is stated for a single point.

We mentioned in Section 1.2 that Le Gall and Lin established the asymptotics of the range of a critical branching random walk conditioned on the total size being n. In supercritical dimensions, the range divided by n converges in probability to a constant, c1 in (1.2.1), which they interpret as an escape probability. This constant is just BCap({0}) in our notation.

We also establish the following bounds for the visiting probability by critical branching random walk when the distance ρ(x,A) between x and A is not too small compared with the diameter of A, diam(A).

Theorem 1.3.3. For any finite A ⊆ Zd and x ∈ Zd with ρ(x,A) ≥ 0.1 diam(A), we have:

P(Sx visits A) ≍ BCap(A)/(ρ(x,A))^{d−2},   (1.3.2)

where f(x,A) ≍ g(x,A) indicates that there exist positive constants c1, c2 independent of x, A such that c1 f(x,A) ≤ g(x,A) ≤ c2 f(x,A).

One might compare this with the corresponding result for random walk:

P(Sx visits A) ≍ Cap(A)/(ρ(x,A))^{d−2}.   (1.3.3)

As for random walk, computing escape probabilities can be very difficult, so it might not be practical to estimate branching capacity directly from the definition. However, we can use (1.3.1) in reverse: by estimating the probability of visiting a set, we can give bounds for the branching capacity of that set. In this way, we find the order of magnitude of the branching capacity of low-dimensional balls:

Theorem 1.3.4. Let Bm(r) be the m-dimensional ball with radius r (as a subset of Zd), i.e. {z = (z1, 0) ∈ Zm × Zd−m = Zd : |z1| ≤ r}. For any r > 2, we have:

BCap(Bm(r)) ≍ r^{d−4} if m ≥ d − 3;
BCap(Bm(r)) ≍ r^{d−4}/log r if m = d − 4;
BCap(Bm(r)) ≍ r^m if m ≤ d − 5.   (1.3.4)

One might compare this with the corresponding result for regular capacity:

Cap(Bm(r)) ≍ r^{d−2} if m ≥ d − 1;
Cap(Bm(r)) ≍ r^{d−2}/log r if m = d − 2;
Cap(Bm(r)) ≍ r^m if m ≤ d − 3.   (1.3.5)

Our definition of branching capacity depends on the offspring distribution µ and the jump distribution θ. From the previous result, one can see that the branching capacities of a ball for different µ's and θ's are comparable. We believe this is generally true for any finite subset, but can only show one part of it:

Theorem 1.3.5.
Suppose that µ1, µ2 are two nondegenerate critical offspring distributions with finite second moments, and let BCapµ1,θ and BCapµ2,θ denote the corresponding branching capacities (with the same jump distribution θ). Then there is a C = C(µ1, µ2) > 0 such that for all finite A ⊆ Zd,

C⁻¹ · BCapµ1,θ(A) ≤ BCapµ2,θ(A) ≤ C · BCapµ1,θ(A).   (1.3.6)

One might compare this with the corresponding result for regular capacity: suppose that θ1 and θ2 are two irreducible distributions on Zd (for d ≥ 3) with mean zero and finite range. Then there is a C = C(θ1, θ2) > 0 such that

C⁻¹ · Capθ1(A) ≤ Capθ2(A) ≤ C · Capθ1(A), for all finite A ⊆ Zd.

We believe the following but cannot prove it at this time:

Conjecture 1.3.6. Suppose that θ1 and θ2 are two irreducible distributions on Zd (d ≥ 3) with mean zero and finite range. Then there is a C = C(µ, θ1, θ2) > 0 such that

C⁻¹ · BCapµ,θ1(A) ≤ BCapµ,θ2(A) ≤ C · BCapµ,θ1(A), for all finite A ⊆ Zd.

Furthermore, we establish an analogous version of Wiener's Test. Let us first review the classical Wiener's Test. A subset K ⊆ Zd is called recurrent if

P(S0(n) ∈ K for infinitely many n ∈ N) = 1,

and transient if

P(S0(n) ∈ K for infinitely many n ∈ N) = 0.

For the recurrence and transience of a set, Wiener's Test says the following: suppose K ⊆ Zd, d ≥ 3, and let Kn = {a ∈ K : 2^n ≤ |a| < 2^{n+1}}. Then

K is recurrent ⇔ Σ_{n=1}^∞ Cap(Kn)/2^{n(d−2)} = ∞.

Inspired by this, we define branching recurrence and branching transience by using branching random walk conditioned on survival instead of random walk (see Chapter 2 for the exact definitions). We have the following version of Wiener's Test:

Theorem 1.3.7. Assume further that µ has finite third moment and θ has finite range. Then for any K ⊆ Zd, d ≥ 5, we have

K is branching recurrent ⇔ Σ_{n=1}^∞ BCap(Kn)/2^{n(d−4)} = ∞.

Meanwhile, we give the asymptotics of, and bounds for, the probability of visiting a finite set by a critical branching random walk conditioned on survival starting from x (denoted by S̄∞x):

Proposition 1.3.8. For every finite A ⊆ Zd, we have:

lim_{x→∞} ‖x‖^{d−4} · P(S̄∞x visits A) = td · ad² σ² BCap(A),   (1.3.7)

where td = td(θ) = d^{d/2} √(det Q) ∫_{t∈Rd} |t|^{2−d} |h′ − t|^{2−d} dt, and h′ ∈ Rd is any vector with |h′| = 1. Recall that σ² is the variance of µ.

Theorem 1.3.9. For every finite A ⊆ Zd and x ∈ Zd with ρ(x,A) ≥ 0.1 diam(A), we have (assuming further that θ has finite range):

P(S̄∞x visits A) ≍ BCap(A)/(ρ(x,A))^{d−4}.   (1.3.8)

In particular, if we let M be a (d − i)-dimensional (i = 1, 2, 3, 4) linear subspace, i.e. {z = (z1, z2) ∈ Zd−i × Zi : z2 = 0}, then by Theorem 1.3.4, Theorem 1.3.7 and the monotonicity of branching capacity, we can see that M is branching recurrent. By projecting onto Zi, we get that for Zi (i ≤ 4), the projected version of S̄∞0 visits every vertex infinitely often, almost surely. Hence we obtain the following result, which appeared in [3]:

Corollary 1.3.10. The critical branching random walk conditioned on survival in Zd (d ≤ 4) almost surely visits every vertex infinitely often, provided that the offspring distribution has finite third moment and that the step distribution is irreducible, centered, and of finite range.

In [3], this is proved when µ is the critical geometric distribution and θ is simple, i.e. uniform on the unit vectors. It is mentioned there that their method works for general critical offspring distributions with finite second moment; see Section 3.1 in [3].
It seems that their method requires the symmetry assumption on θ.

From Theorem 1.3.5 and Theorem 1.3.7, we can see that whether a set is branching recurrent or branching transient is independent of the choice of the offspring distribution, as long as that offspring distribution is nondegenerate, critical and has finite third moment:

Corollary 1.3.11. Let θ be some fixed centered, irreducible distribution on Zd with finite range. Then for any K ⊆ Zd, if there exists one nondegenerate critical offspring distribution µ with finite third moment such that K is branching recurrent (with respect to µ and θ), then K is branching recurrent for every such offspring distribution.

1.3.2 The critical dimension (d = 4)

In the critical dimension, we also establish the asymptotics of the visiting probability, with a logarithmic correction:

Theorem 1.3.12. Assume further that θ has finite exponential moments. Then, for every finite subset K of Z4, we have:

lim_{x→∞} (‖x‖² log ‖x‖) · P(Sx visits K) = 1/(2σ²).   (1.3.9)

We also show the convergence of the first visiting point conditioned on visiting:

Theorem 1.3.13. Assume further that θ has finite range. Then, for any finite nonempty subset K of Z4 and a ∈ K, we have

lim_{x→∞} P(Sx(τK) = a | Sx visits K) = σ²/(4π²√(det Q)) · EK(a),

where EK(a) is defined later in (2.4.27).

Remark 1.3.1. Recall that, in supercritical dimensions, we mentioned further that the conditional entering measure converges in distribution. This is false in the critical dimension (and in subcritical dimensions): it turns out that the conditional entering measure blows up as the starting point tends to infinity.

Note that in the random walk case, the random walk in two dimensions is recurrent, and hence P(Sx visits K) = 1. However, the harmonic measure does exist:

lim_{x→∞} P(Sx(τK) = a | Sx visits K) = 1/(π²√(det Q)) · EK(a),

where EK(x) = lim_{n→∞} log n · P(τn < τK) exists, with τn the hitting time of Bc(n) by a random walk starting at x. As we will see, EK(a) has a similar form.

Furthermore, recall that Rn, the range of the critical branching random walk conditioned on the total size being n, has the following asymptotics:

(log n / n) Rn → 8π²√(det Q) in L², as n → ∞,

provided that µ is the geometric distribution with parameter 1/2 and θ is symmetric and has exponential moments.

We establish the following:

Theorem 1.3.14.

(log n / n) Rn → 16π²√(det Q)/σ² in probability, as n → ∞,

assuming further that θ is symmetric and has finite exponential moments.

1.4 Branching interlacements

Sznitman introduced the model of random interlacements, which consists of a countable collection of trajectories of doubly infinite random walks on the lattice Zd, for d ≥ 3 ([21]). Since this seminal work, many aspects of the model of random interlacements have been studied by numerous authors. The basic results of the theory of random interlacements can be found in the lecture notes [5] and [22]. The interlacement Iu at level u ≥ 0 is the trace left on Zd by a cloud of paths constituting a Poisson point process on the space of doubly infinite transient trajectories modulo time-shift. Its law is characterized by:

P(Iu ∩ K = ∅) = exp(−u · Cap(K)), for every finite K ⊆ Zd.   (1.4.1)
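Let us record why an identity like (1.4.1) characterizes the law of Iu (a standard inclusion-exclusion remark, added here for completeness): for finite sets B ⊆ A, the event {Iu ∩ A ⊆ C} has probability P(Iu ∩ (A \ C) = ∅), so

P(Iu ∩ A = B) = Σ_{C⊆B} (−1)^{|B\C|} P(Iu ∩ (A \ C) = ∅) = Σ_{C⊆B} (−1)^{|B\C|} exp(−u · Cap(A \ C)),

and these avoidance probabilities therefore determine all finite-dimensional marginals of Iu. The same remark applies verbatim to the branching capacity version (1.4.2) below.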
There are two main initial results about random interlacements. On the one hand, the random interlacement at level u turns out to be the local limit of the set of sites on the discrete torus TdN := (Z/NZ)d visited by simple random walk up to ⌊uN^d⌋ steps ([25]). On the other hand, as a percolation model, the complement of the interlacement, the so-called vacant set, exhibits a phase transition ([21] and [19]): there is a critical value u∗ ∈ (0, ∞) such that the vacant set percolates for u < u∗ and does not percolate for u > u∗.

Inspired by this, we introduce another kind of interlacements, consisting of a countable collection of doubly infinite trajectories that encode infinite trees embedded in Zd, d ≥ 5. We restrict ourselves to the special case µ = Geo(1/2), and simply assume that θ is the uniform measure on the set of unit vectors in Zd. Similarly, a non-negative parameter u governs the amount of trajectories entering the picture. We show:

Theorem 1.4.1. For any u > 0, we can construct a random subset Iu of Zd which is characterized by:

P(Iu ∩ K = ∅) = exp(−u · BCap(K)), for every finite K ⊆ Zd.   (1.4.2)

Furthermore, we prove that, similarly to the case of random interlacements, the branching interlacement at level u is the local limit of the law of the trace of branching random walk on the torus with side length N, conditioned to have ⌊uN^d⌋ progeny. More precisely, let RN be the set of sites occupied by a critical branching random walk, conditioned on the total size being ⌊uN^d⌋, with uniform starting point in the torus with side length N. Then for any fixed finite subsets B ⊆ A ⊆ Zd, we have:

Theorem 1.4.2.

lim_{N→∞: N≡1 mod 2} P(RN ∩ A = B) = P(Iu ∩ A = B).

Note that when N is large enough, we can regard A, B as subsets of the torus with side length N.

The reason we need to assume N ≡ 1 mod 2 is the periodicity of simple random walk.

1.5 An optimal strategy for the Majority-Markov game

Let us begin with a little game. Three tokens begin on vertices −2, −1 and 1 of a path connecting vertices −3, −2, . . . , 3 (see Figure 1.1). At any time the player may pay one dollar and choose a token; that token then moves randomly, with equal probability, to its left or right neighboring vertex. Different tokens move independently, without interfering. There are holes at the endpoints: once a token reaches an endpoint, it falls into that hole and cannot get out. If the player wants to know which hole finally contains more tokens, the negative side or the positive side, which token should be chosen to move, with the goal of minimizing the expected cost?

Figure 1.1: A simple example of the Majority-Markov game

Note that, though the player wants to know which side wins, he has in fact no influence on where the tokens go when they move, and hence no influence at all on which side wins. We can imagine that the trajectories are pre-determined and the player does not know them; he needs to buy this information. At each step, the player can only decide, at a cost of one dollar, which token's next position is revealed to him, based on the current positions of the tokens. Only when the player sees two tokens in the same hole is he sure which side has won; then he stops paying and leaves. Therefore, his strategy affects his wallet, but not which side wins.

It turns out that the optimal strategy is always to move the middle token (if two are at the same site, choose either), and this is the unique optimal strategy.
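The middle-token rule is easy to probe by simulation before any analysis. The sketch below is ours: it plays the three-token game and estimates the expected cost of a given strategy; comparing the middle-token rule with, say, a uniformly random movable token suggests the gap, though of course it proves nothing.

    import random

    HOLES = (-3, 3)

    def play(start, choose):
        """Play one game; return the number of dollars paid."""
        tokens, cost = list(start), 0
        while max(sum(t == h for t in tokens) for h in HOLES) < 2:
            i = choose(tokens)               # must pick a token not in a hole
            tokens[i] += random.choice((-1, 1))
            cost += 1
        return cost

    def middle(tokens):
        # move the middle token; in any unfinished position it is not absorbed
        return sorted(range(3), key=lambda i: tokens[i])[1]

    def uniform(tokens):
        return random.choice([i for i in range(3) if tokens[i] not in HOLES])

    N = 20000
    for choose in (middle, uniform):
        avg = sum(play((-2, -1, 1), choose) for _ in range(N)) / N
        print(choose.__name__, round(avg, 2))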
In this thesis, we analyze and solve a type of game which we call Majority-Markov games, as follows. There are an odd number of finite Markov chains. Each Markov chain contains two absorbing target states, one labelled positive, the other negative. Since the targets are absorbing, a target state is eventually reached in each Markov chain, sometimes the positive one, sometimes the negative one. The player decides which Markov chain to advance at every step. The goal of the player is to learn which kind of target state is reached by the majority of the chains. What, then, is the best strategy to minimize the expected time?

The solution involves computing functions called grades, introduced in [6], for the states of the individual chains. In some sense, the 'middle' one produces an optimal strategy. See Theorem 4.1.1 and Chapter 4 for more details.

Remark 1.5.1. Though the subjects of Chapter 2 and Chapter 3 are closely related, Chapters 2, 3 and 4 are written independently. Each chapter is self-contained. The notation may differ between chapters.

Remark 1.5.2. Note that, for notational ease, we sometimes use the same notation for a random variable and its law. The reader can judge from the context.

Chapter 2

Critical branching random walks

2.1 Preliminaries

We begin with some notation. For a set K ⊆ Zd, we write |K| for its cardinality. We write K ⊂⊂ Zd to express that K is a finite nonempty subset of Zd. For x ∈ Zd (or Rd), we denote by |x| the Euclidean norm of x. We will mainly use the norm ‖ · ‖ corresponding to the jump distribution θ, i.e. ‖x‖ = √(x · Q⁻¹x)/√d, where Q is the covariance matrix of θ. For convenience, we set |0| = ‖0‖ = 0.5. We denote by diam(K) = sup{‖a − b‖ : a, b ∈ K} the diameter of K, and by Rad(K) = sup{‖a‖ : a ∈ K} the radius of K with respect to 0. We write C(r) for the ball {z ∈ Zd : ‖z‖ ≤ r} and B(r) for the Euclidean ball {z ∈ Zd : |z| ≤ r}. For any subsets A, B of Zd, we denote by ρ(A,B) = inf{‖x − y‖ : x ∈ A, y ∈ B} the distance between A and B; when A = {x} consists of a single point, we write ρ(x,B) instead. For any path γ : {0, . . . , k} → Zd, we let |γ| stand for k, the length (i.e. the number of edges) of γ; γ̂ for γ(k), the endpoint of γ; and [γ] for k + 1, the number of vertices of γ. Sometimes we use a sequence of vertices to express a path; for example, we may write (γ(0), γ(1), . . . , γ(k)) for the path γ. For any B ⊆ Zd, we write γ ⊆ B to express that all vertices of γ, except the starting point and the endpoint, lie inside B, i.e. γ(i) ∈ B for any 1 ≤ i ≤ k − 1. If the endpoint of a path γ1 : {0, . . . , |γ1|} → Zd coincides with the starting point of another path γ2 : {0, . . . , |γ2|} → Zd, then we can define the composite of γ1 and γ2 by concatenation:

γ1 ◦ γ2 : {0, . . . , |γ1| + |γ2|} → Zd,
γ1 ◦ γ2(i) = γ1(i), for i ≤ |γ1|;   γ1 ◦ γ2(i) = γ2(i − |γ1|), for i ≥ |γ1|.

We now state our convention regarding constants. Throughout the text (unless otherwise specified), we use C and c to denote positive constants depending only on the dimension d, the critical distribution µ and the jump distribution θ, which may change from place to place. Dependence of constants on additional parameters will be stated explicitly; for example, C(λ) stands for a positive constant depending on d, µ, θ, λ. For functions f(x) and g(x), we write f ∼ g if lim_{x→∞}(f(x)/g(x)) = 1. We write f ≲ g (respectively f ≳ g) if there exists a constant C such that f ≤ Cg (respectively f ≥ Cg). We use f ≍ g to express that f ≲ g and f ≳ g. We write f ≪ g if lim_{x→∞}(f(x)/g(x)) = 0.

2.1.1 Finite and infinite trees

We are interested in rooted ordered trees (plane trees), in particular Galton-Watson (GW) trees and their companions.
Recall that µ = (µ(i))i∈N is a critical distribution with finite variance σ² > 0; we exclude the trivial case µ(1) = 1. Throughout this chapter, µ is fixed. Define another probability measure µ̃ on N, called the adjoint measure of µ, by setting µ̃(i) = Σ_{j=i+1}^∞ µ(j). Since µ has mean 1, µ̃ is indeed a probability measure. The mean of µ̃ is σ²/2. A Galton-Watson process with distribution µ is a process starting with one initial particle, with each particle independently having a random number of children distributed according to µ. The Galton-Watson tree is the family tree of the Galton-Watson process, rooted at the initial particle. We simply write µ-GW tree for the Galton-Watson tree with offspring distribution µ. If we change the law of the number of children of the root to µ̃ instead of µ (all other particles still use µ), the new tree is called an adjoint µ-GW tree. The infinite µ-GW tree is constructed in the following way: start with a semi-infinite line of vertices, called the spine, and graft to the left of each vertex in the spine an independent adjoint µ-GW tree, called a bush. The infinite µ-GW tree is rooted at the first vertex of the spine. Here 'to the left' means that we assume every vertex in the spine except the root is the youngest child (the latest in the Depth-First search order) of its parent. The invariant µ-GW tree is defined in the same way as the infinite µ-GW tree, except that to the left of the root we graft a µ-GW tree instead of an adjoint µ-GW tree. We also need to introduce the so-called µ-GW tree conditioned on survival. Start with a semi-infinite path, called the spine, rooted at its starting point. Each vertex in the spine has, with probability µ(i + j + 1) (i, j ∈ N), a total of i + j + 1 children, with exactly i children elder than the child corresponding to the next vertex in the spine, and exactly j children younger. Each vertex not in the spine has a random number of children distributed according to µ. The numbers of children of different vertices are independent. The random tree generated in this way is the µ-GW tree conditioned on survival. Each tree is ordered using the classical order given by the Depth-First search starting from the root. Note that the subtree generated by the vertices of the spine and all vertices to the left of the spine of the µ-GW tree conditioned on survival has the same distribution as the infinite µ-GW tree.

2.1.2 Tree-indexed random walk

We now introduce the random walk in Zd with jump distribution θ, indexed by a random plane tree T. First choose some a ∈ Zd as the starting point. Conditionally on T, we assign independently to each edge of T a random variable in Zd distributed according to θ. Then we can uniquely define a function ST : T → Zd such that, for every vertex v ∈ T (we also use T for the set of all vertices of the tree T), ST(v) − a is the sum of the variables of all edges belonging to the unique simple path from the root o to the vertex v (hence ST(o) = a). A plane tree T together with this random function ST is called a T-indexed random walk starting from a. When T is a µ-GW tree, an adjoint µ-GW tree, an infinite µ-GW tree, or a µ-GW tree conditioned on survival, we call the tree-indexed random walk a snake, an adjoint snake, an infinite snake, or an incipient infinite snake (also called branching random walk conditioned on survival), respectively. We write Sx, S′x, S∞x and S̄∞x for a snake, an adjoint snake, an infinite snake and an incipient infinite snake, respectively, starting from x ∈ Zd.
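For concreteness, here is how a snake can be sampled by a depth-first traversal (our sketch; the geometric offspring law and simple random walk steps are illustrative choices). The function returns the multiset of positions ST(T), which is all that matters for visiting events.

    import random

    def geo():
        # critical geometric offspring law, P(k) = 2^-(k+1)
        k = 0
        while random.random() < 0.5:
            k += 1
        return k

    def step(d):
        e = [0] * d
        e[random.randrange(d)] = random.choice((-1, 1))
        return e

    def snake(x):
        """Sample a mu-GW tree T and its T-indexed walk from x; return S_T(T)."""
        positions = [tuple(x)]
        stack = [(tuple(x), geo())]          # (S_T(v), number of children of v)
        while stack:
            pos, kids = stack.pop()
            for _ in range(kids):
                child = tuple(p + s for p, s in zip(pos, step(len(pos))))
                positions.append(child)
                stack.append((child, geo()))
        return positions

    ran = snake((0, 0, 0, 0, 0))
    print(len(ran), "particles; visits origin again:", ran.count((0,) * 5) > 1)

For an adjoint snake, one would only change the offspring law at the root to µ̃; for the critical geometric law, µ̃ happens to equal µ.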
Note that a snake is just the branching random walk with offspring distribution µ and jump distribution θ. We also need to introduce the reversed infinite snake starting from x, S−x, which is constructed in the same way as S∞x except that the variables assigned to the edges of the spine are distributed not according to θ but according to the reversed distribution θ− of θ (i.e. θ−(x) := θ(−x) for x ∈ Zd); and, similarly, the invariant snake starting from x, SIx, which is constructed by using the invariant µ-GW tree as the random tree T and using θ− for all edges of the spine of T and θ for all other edges. For an infinite snake (or reversed infinite snake, or invariant snake), the random walk indexed by its spine, called its backbone, is just a random walk with jump distribution θ (or θ−). Note that all snakes here depend on µ and θ; since µ and θ are fixed throughout this chapter, we omit this dependence from the notation.

2.1.3 Random walk with killing

We will use the tools of random walk with killing. Suppose that when the random walk is currently at position x ∈ Zd, it is killed, i.e. jumps to a 'cemetery' state $, with probability k(x), where k : Zd → [0, 1] is a given function. In other words, the random walk with killing rate k(x) (and jump distribution θ) is a Markov chain {Xn : n ≥ 0} on Zd ∪ {$} with transition probabilities p(·, ·) given by: for x, y ∈ Zd,

p(x, $) = k(x),   p($, $) = 1,   p(x, y) = (1 − k(x))θ(y − x).

For any path γ : {0, . . . , n} → Zd with length n, its probability weight b(γ) is defined to be the probability that the path consisting of the first n steps of the random walk with killing starting from γ(0) is γ. Equivalently,

b(γ) = Π_{i=0}^{|γ|−1} (1 − k(γ(i))) θ(γ(i+1) − γ(i)) = s(γ) Π_{i=0}^{|γ|−1} (1 − k(γ(i))),   (2.1.1)

where s(γ) = Π_{i=0}^{|γ|−1} θ(γ(i+1) − γ(i)) is the probability weight of γ corresponding to the random walk with jump distribution θ. Note that b(γ) depends on the killing; we omit this dependence from the notation for simplicity.

Now we can define the corresponding Green function, for x, y ∈ Zd:

Gk(x, y) = Σ_{n=0}^∞ P(Skx(n) = y) = Σ_{γ: x→y} b(γ),

where Skx = (Skx(n))n∈N is the random walk (with jump distribution θ) starting from x, with killing function k, and the last sum is over all paths from x to y. For x ∈ Zd, A ⊆ Zd, we write Gk(x,A) for Σ_{y∈A} Gk(x, y).

For any B ⊆ Zd and x, y ∈ Zd, define the harmonic measure (when exactly one of {x, y} is in B):

HBk(x, y) = Σ_{γ: x→y, γ⊆B} b(γ).

Note that when the killing function k ≡ 0, the random walk with this killing is just the random walk without killing, and we write HB(x, y) in this case.

We will repeatedly use the following First-Visit Lemma. The idea is to decompose a path according to the first or last visit of a set.

Lemma 2.1.1. For any B ⊆ Zd and a ∈ B, b ∉ B, we have:

Gk(a, b) = Σ_{z∈Bc} HBk(a, z) Gk(z, b) = Σ_{z∈B} Gk(a, z) HBck(z, b);
Gk(b, a) = Σ_{z∈B} HBck(b, z) Gk(z, a) = Σ_{z∈Bc} Gk(b, z) HBk(z, a).
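On a finite window, Gk can be computed exactly rather than by summing over paths: restrict the substochastic transition matrix to a box, treat exit from the box as additional killing (a truncation we impose, so the numbers only approximate Gk on Zd), and invert I − P. A low-dimensional sketch (ours):

    import itertools
    import numpy as np

    def killed_green(n, k, d=2):
        """G_k(x,y) for SRW on the box [-n,n]^d with killing function k;
        leaving the box is treated as killing (a truncation)."""
        sites = list(itertools.product(range(-n, n + 1), repeat=d))
        idx = {s: i for i, s in enumerate(sites)}
        P = np.zeros((len(sites),) * 2)
        for s in sites:
            w = (1.0 - k(s)) / (2 * d)       # p(s,t) = (1-k(s)) * theta(t-s)
            for axis in range(d):
                for sign in (-1, 1):
                    t = list(s); t[axis] += sign
                    if tuple(t) in idx:
                        P[idx[s], idx[tuple(t)]] = w
        return np.linalg.inv(np.eye(len(sites)) - P), idx

    # Kill surely at the origin and mildly everywhere else:
    G, idx = killed_green(10, lambda s: 1.0 if s == (0, 0) else 0.02)
    print(G[idx[(4, 3)], idx[(1, 0)]])       # = sum of b(gamma) over gamma: x -> y

The inverse exists because killing plus absorption at the boundary makes the spectral radius of P strictly less than one, so (I − P)⁻¹ = Σ_n Pⁿ, matching the definition of Gk.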
2.1.4 Some facts about random walk and the Green function

From now on, we assume d ≥ 4. For x ∈ Zd, we write Sx = (Sx(n))n∈N for the random walk with jump distribution θ starting from Sx(0) = x. The norm ‖ · ‖ corresponding to θ is defined, for x ∈ Zd, to be ‖x‖ = √(x · Q⁻¹x)/√d, where Q is the covariance matrix of θ. Note that ‖x‖ ≍ |x|; in particular, there exists c > 1 such that C(c⁻¹n) ⊆ B(n) ⊆ C(cn) for any n ≥ 1. The Green function g(x, y) is defined to be:

g(x, y) = Σ_{n=0}^∞ P(Sx(n) = y) = Σ_{γ: x→y} s(γ).

We write g(x) for g(0, x).

Our assumptions on the jump distribution θ guarantee the standard estimate for the Green function (see e.g. Theorem 2 in [23]):

g(x) ∼ ad ‖x‖^{2−d},   (2.1.2)

where ad = Γ((d−2)/2) / (2 d^{(d−2)/2} π^{d/2} √(det Q)); and (e.g. one can verify this using the error estimate in the Local Central Limit Theorem in [23]), when d ≥ 5,

Σ_{n=0}^∞ (n + 1) · P(S0(n) = x) = Σ_{γ: 0→x} [γ] · s(γ) ≍ ‖x‖^{4−d} ≍ |x|^{4−d}.   (2.1.3)

Also by the LCLT, one can get the following lemma.

Lemma 2.1.2.

lim_{n→∞} sup_{x∈Zd} Σ_{γ: 0→x, |γ|≥n|x|²} s(γ)/g(x) = 0.   (2.1.4)

The following lemma is natural from the perspective of Brownian motion, the scaling limit of random walk.

Lemma 2.1.3. Let U, V be two connected bounded open subsets of Rd such that U ⊆ V. Then there exists a C = C(U, V) such that if An = nU ∩ Zd and Bn = nV ∩ Zd, then, when n is sufficiently large,

Σ_{γ: x→y, γ⊆Bn, |γ|≤2n²} s(γ) ≥ C g(x, y), for any x, y ∈ An.   (2.1.5)

This lemma may not be standard, so we give a sketch of the proof in the Appendix.

Since our jump distribution θ may be unbounded, we need the following Overshoot Lemma:

Lemma 2.1.4. For any r, s > 1, let B = C(r). Then for any a ∈ B, we have:

Σ_{y∈(C(r+s))^c} HBk(a, y) ≲ r²/s^d,   Σ_{y∈(C(r+s))^c} HBk(y, a) ≲ r²/s^d.   (2.1.6)

Proof. It suffices to show the case k ≡ 0. By considering the last position before leaving C(r), one gets:

Σ_{y∈(C(r+s))^c} HB(a, y) ≤ Σ_{z∈C(r)} g(a, z) P(the jump leaving C(r) has length ≥ s) ≤ (Σ_{z∈C(r)} g(a, z)) · C/s^d ≲ r²/s^d,

where the middle inequality uses (1.1.1). One can show the other inequality similarly. □

2.2 Branching capacity and visiting probabilities

In this and the following section (Sections 2.2 and 2.3), we focus on supercritical dimensions and always assume d ≥ 5. For any K ⊂⊂ Zd, we are interested in the probability of visiting K by the critical branching random walk with offspring distribution µ and jump distribution θ, or equivalently, a snake. For any x ∈ Zd, write p(x), r(x), q(x) and q−(x), respectively, for the probability that a snake, an adjoint snake, an infinite snake and a reversed infinite snake, respectively, starting from x visits K, i.e. P(ST(T) ∩ K ≠ ∅), where T, ST are the corresponding random tree and random map. We write p̄(x) and r̄(x), respectively, for the probability that a snake and an adjoint snake, respectively, starting from x visits K strictly after time zero, i.e. P(ST(T \ {o}) ∩ K ≠ ∅). Note that when x ∉ K, p̄(x) = p(x) and r̄(x) = r(x). For simplicity, we omit the dependence on K from the notation. We fix K from now on until Section 2.2.6.

We first give some preliminary upper bounds for the visiting probabilities by computing the expected number of visits. Here are the computations, for x relatively far from K, say ρ(x,K) ≥ 2 diam(K). For the snake Sx, the expected number of offspring in the n-th generation is one; hence the expected number of visits to any a ∈ K is just g(x, a) ≍ ‖x − a‖^{2−d} ≍ ‖x‖^{2−d}. For the adjoint snake S′x, the expected number of offspring in the n-th generation (for n ≥ 1) is Eµ̃ = σ²/2 ≍ 1 (recall that µ is fixed); hence the expected total number of visits to a can also be bounded by g(x, a), up to a constant multiplier. For the infinite snake S∞x, one can see that the expected number of offspring in the n-th generation is 1 + n · Eµ̃ ≍ n + 1. Hence, when ρ(x,K) ≥ 2 diam(K), the expected total number of visits to a is bounded, up to a constant, by:

Σ_{n=0}^∞ (n + 1) P(Sx(n) = a) ≲ ‖x − a‖^{4−d} ≲ ‖x‖^{4−d},

using (2.1.3). Recall that Sx = (Sx(n))n∈N is the random walk starting from x with jump distribution θ.
Summing up over all a ∈ K, we get

p(x) ≲ |K|/‖x‖^{d−2};   r(x) ≲ |K|/‖x‖^{d−2};   q(x) ≲ |K|/‖x‖^{d−4}.   (2.2.1)

For q−(x), by considering the expected number of visiting points, one can get (or use (2.2.22)):

q−(x) ≲ Σ_{y∈Zd} g−(x, y) g(y,K) ≲ |K| Σ_{y∈Zd} ‖x − y‖^{2−d} ‖y‖^{2−d} ≲ |K|/‖x‖^{d−4},

where g−(x, y) = g(y, x) is the Green function of the reversed random walk. From this, we see that as x tends to infinity, all four types of visiting probabilities tend to 0. Now we introduce the escape probabilities.

Definition 2.2.1. Let K be a finite subset of Zd. For any x ∈ Zd, define EsK(x) to be the probability that a reversed infinite snake starting from x does not visit K, except possibly for the image of the bush grafted to the root, and EscK(x) to be the probability that an invariant snake starting from x does not visit K, except possibly for the image of the spine. Define the branching capacity of K by:

BCap(K) = Σ_{a∈K} EsK(a) = Σ_{a∈K} EscK(a).   (2.2.2)

Remark 2.2.1. In the next chapter, we construct the model of branching interlacements. As a main step, we give a definition of branching capacity (only) for the case when µ is the critical geometric distribution. In that case, the branching capacity here is equivalent to the branching capacity there, up to a constant factor 2. But here we do not need the so-called contour function, which plays an important role there. Furthermore, here we can construct branching capacity for a general critical offspring distribution.

The last equality in (2.2.2) can be seen from our main theorem on branching capacity, Theorem 1.3.1. We also introduce the escape probability for the infinite snake, Es+K(x), defined to be the probability that an infinite snake starting from x does not visit K, except possibly for the image of the bush grafted to the root. Note that Es+K(x) ≥ 1 − q(x) → 1, as x → ∞.

Remark 2.2.2. If we let µ be the degenerate measure, that is, µ(1) = 1, then the snake and the infinite snake are just the random walk with jump distribution θ, and the reversed infinite snake and the invariant snake are the random walk with jump distribution θ−. Therefore EsK is just the escape probability for the 'reversed' walk and EscK is the escape probability for the 'original' walk. In that case, Theorem 1.3.1 is just the classical theorem for regular capacity. Note that when θ is symmetric, for random walk, EsK(a) = EscK(a). But this is generally not true for branching random walk, even when θ is symmetric. If K = {a} consists of only one point, then it is true, by Theorem 1.3.1.
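Definition 2.2.1 can be unwound into a crude Monte Carlo scheme. The sketch below is ours, with simplifications we are explicit about: µ = Geo(1/2), for which the adjoint measure µ̃ equals µ; θ = simple random walk, which is symmetric, so the reversed spine has the same law; the infinite spine is truncated after N steps and bushes at a particle cap, both of which bias the answer. It reflects our reading of the definition, in which the root's position and its bush are exempt from the check.

    import random

    def geo():
        k = 0
        while random.random() < 0.5:
            k += 1
        return k

    def step(d):
        e = [0] * d
        e[random.randrange(d)] = random.choice((-1, 1))
        return e

    def bush_avoids(x, K, d, cap=10**5):
        """Does the snake of a GW bush rooted at x avoid K strictly below the root?"""
        stack, seen = [(x, geo())], 0
        while stack:
            pos, kids = stack.pop()
            for _ in range(kids):
                c = tuple(p + s for p, s in zip(pos, step(d)))
                if c in K:
                    return False
                seen += 1
                if seen > cap:           # truncate rare huge bushes (small bias)
                    return True
                stack.append((c, geo()))
        return True

    def escapes(x, K, d=5, N=300):
        """One truncated sample of the event defining Es_K(x): neither the spine
        after the root nor any of its bushes may touch K (root bush exempt)."""
        pos = tuple(x)
        for _ in range(N):
            pos = tuple(p + s for p, s in zip(pos, step(d)))
            if pos in K or not bush_avoids(pos, K, d):
                return 0
        return 1

    K = {(0, 0, 0, 0, 0)}
    trials = 300
    est = sum(sum(escapes(a, K) for _ in range(trials)) / trials for a in K)
    print("BCap({0}) estimate:", est)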
2.2.1 Monotonicity and subadditivity

We postpone the proof of Theorem 1.3.1 until Section 2.2.4. We now state some basic properties of branching capacity. Like regular capacity, branching capacity is monotone and subadditive:

Proposition 2.2.2. For any finite subsets K ⊆ K′ of Zd,

BCap(K) ≤ BCap(K′).

For any finite subsets K1, K2 of Zd,

BCap(K1 ∩ K2) + BCap(K1 ∪ K2) ≤ BCap(K1) + BCap(K2).

Proof. When K ⊆ K′, a snake visiting K must visit K′, so

P(Sx visits K) ≤ P(Sx visits K′).

By (1.3.1), we get BCap(K) ≤ BCap(K′).

For the other inequality, we use a similar idea. First, we have:

P(Sx visits K1) = P(Sx visits K1 but not K2) + P(Sx visits both K1 and K2);
P(Sx visits K2) = P(Sx visits K2 but not K1) + P(Sx visits both K1 and K2);
P(Sx visits K1 ∪ K2) = P(Sx visits K1 but not K2) + P(Sx visits K2 but not K1) + P(Sx visits both K1 and K2).

Since P(Sx visits K1 ∩ K2) ≤ P(Sx visits both K1 and K2), we have:

P(Sx visits K1 ∪ K2) + P(Sx visits K1 ∩ K2) ≤ P(Sx visits K1) + P(Sx visits K2).

This proves the proposition, by (1.3.1). □

2.2.2 A key observation

We begin with some straightforward computations. When a snake Sx = (T, ST) visits K, since T is an ordered tree, there is a unique first vertex, denoted by τK, in {v ∈ T : ST(v) ∈ K} with respect to the default order. We say ST(τK) is the visiting point, or that Sx visits K at ST(τK). Assume (v0, v1, . . . , vk) is the unique simple path in T from the root o to τK. Define Γ(Sx) = (ST(v0), ST(v1), . . . , ST(vk)) and say that Sx visits K via Γ(Sx). We now compute P(Γ(Sx) = γ) for any given γ = (γ(0), . . . , γ(k)) ⊆ Kc starting from x and ending in K. Let ãi and b̃i, respectively, be the number of elder and younger brothers of vi, for i = 1, . . . , k. From the tree structure, one can see that, for any l1, . . . , lk, m1, . . . , mk ∈ N,

P(Sx visits K via γ; ãi = li, b̃i = mi for i = 1, . . . , k) = s(γ) Π_{i=1}^k (µ(li + mi + 1) (r̃(γ(i−1)))^{li}),   (2.2.3)

where r̃(z) is the probability that a snake starting from z does not visit K, conditioned on the initial particle having only one child. Summing up, we get:

P(Sx visits K via γ)
= Σ_{l1,...,lk; m1,...,mk∈N} P(Sx visits K via γ; ãi = li, b̃i = mi for i = 1, . . . , k)
= Σ_{l1,...,lk; m1,...,mk∈N} s(γ) Π_{i=1}^k (µ(li + mi + 1)(r̃(γ(i−1)))^{li})
= s(γ) Π_{i=1}^k Σ_{li,mi∈N} (µ(li + mi + 1)(r̃(γ(i−1)))^{li})
= s(γ) Π_{i=1}^k Σ_{li∈N} (µ̃(li)(r̃(γ(i−1)))^{li}).

Note that for any z ∉ K,

Σ_{l∈N} µ̃(l)(r̃(z))^l

is just 1 − r(z), the probability that an adjoint snake starting from z does not visit K. If we let the killing function be

k(x) = P(S′x visits K) = r(x),   (2.2.4)

then we have (recall the definition of b(γ) from (2.1.1))

b(γ) = s(γ) Π_{i=1}^k (1 − k(γ(i−1))) = s(γ) Π_{i=1}^k (1 − r(γ(i−1))) = s(γ) Π_{i=1}^k Σ_{li∈N} (µ̃(li)(r̃(γ(i−1)))^{li}) = P(Sx visits K via γ).

This brings us to the key formula of this work:

Proposition 2.2.3.

b(γ) = P(Sx visits K via γ).   (2.2.5)

In words, the probability that a snake visits K via γ is just γ's probability weight according to the random walk with the killing function given by (2.2.4). Throughout this chapter we mainly use this killing function, and we write GK(·, ·) for the corresponding Green function. By summing the last equality over γ, we get: for any a ∈ K,

P(Sx visits K at a) = Σ_{γ: x→a} b(γ) = GK(x, a);   (2.2.6)

and

p(x) = P(Sx visits K) = Σ_{γ: x→K} b(γ) = GK(x,K).   (2.2.7)

Note that since r(x) = 1 for x ∈ K, if γ intersects K anywhere other than at its endpoint, then b(γ) = 0.

On the other hand, from the structure of the infinite snake, one can easily see that q(x) is just the probability that, in this random walk with killing, a particle starting at x is killed in finite time.
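The same generating-function structure gives a numerical scheme for p itself (our reformulation; by Remark 2.2.5 below, these computations are valid in any dimension, so we use d = 1 for speed). Since a particle at x ∉ K avoids K if and only if each of its children's subtrees does, p solves p(x) = 1 − f(1 − Σ_y θ(y) p(x + y)) off K, with f the generating function of µ and p ≡ 1 on K. The sketch below solves this on a finite interval in Z with µ = Geo(1/2) (so f(s) = 1/(2 − s) and σ² = 2), forcing p = 0 outside the window — a truncation — and the answer can be compared with the subcritical asymptotics (1.2.2), which here predicts p(x) ≈ 3/x².

    def visit_prob(N=500, iters=1500, K=(0,)):
        """Iterate p(x) = 1 - f(1 - (p(x-1)+p(x+1))/2) on {-N,...,N}, with
        p = 1 on K and p = 0 outside the window (a truncation).
        f(s) = 1/(2-s) is the generating function of Geo(1/2)."""
        f = lambda s: 1.0 / (2.0 - s)
        p = [0.0] * (2 * N + 1)
        targets = {a + N for a in K}
        for i in targets:
            p[i] = 1.0
        for _ in range(iters):
            q = p[:]
            for i in range(2 * N + 1):
                if i in targets:
                    continue
                left = p[i - 1] if i > 0 else 0.0
                right = p[i + 1] if i < 2 * N else 0.0
                q[i] = 1.0 - f(1.0 - 0.5 * (left + right))
            p = q
        return p

    p = visit_prob()
    print(p[500 + 10], "vs", 3 / 10**2)   # (1.2.2) in d = 1 with sigma^2 = 2

Starting the iteration from p ≡ 0 makes the m-th iterate the probability of visiting K within m generations, so the iterates increase to the true (truncated) p.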
, γ(k)) startingfrom x and ending at A (note that unlike the former case, the interior ofγ now may intersect K). Let a˜i (˜bi respectively) be the number of theolder (younger respectively) brothers of vi, for i = 1, . . . , k. Similarly to theformer case, one can see that, for any l1, . . . , lk, m1, . . . ,mk ∈ N,P (Sx leaves K via γ; a˜i = li, b˜i = mi, for i = 1, . . . , k)= s(γ)(1− p(γ(k)))k∏i=1(µ(li +mi + 1)(rˆ(γ(i− 1)))mi) , (2.2.8)where rˆ(z) is the probability that a snake starting from z does not visit(except possibly for the root) K conditioned on the initial particle havingonly one child. Summing up, we get:P (Sx leaves K via γ) = s(γ)(1− p(γ̂))k∏i=1(1− r(γ(i− 1))). (2.2.9)If we let the killing function be k′(x) = r(x), then the last term is just(1− p(γ̂))bk′(γ).Remark 2.2.3. We will always use the killing function in (2.2.4), exceptin the proof of the third assertion in Theorem 1.3.1.Remark 2.2.4. Now the reason for the introduction of the adjoint snakeand the infinite snakes is clear: in order to understand p(x), the probabilityof visiting K, we need to study the random walk with killing where the killingfunction is just the probability of the adjoint snake visiting K.302.2. Branching capacity and visiting probabilitiesRemark 2.2.5. The computations here are initiated in [29]. Note that inthis subsection, we do not need the assumption d ≥ 5. All results are validfor all dimensions.2.2.3 Convergence of the Green functionThe goal of this subsection is to prove:Lemma 2.2.4.limx,y→∞GK(x, y)/g(x, y) = 1. (2.2.10)Proof. The part of ’≤’ is trivial, since GK(x, y) ≤ g(x, y). We need toconsider the other part.First, consider the case ‖x‖/2 ≤ ‖y‖ ≤ 2‖x‖1.1. LetΓ1 = {γ : x→ y||γ| ≥ ‖x‖0.1 · ‖x− y‖2};Γ2 = {γ : x→ y|γ visits C(‖x‖0.9)}.By Lemma 2.1.2, one can see that∑γ∈Γ1 s(γ)/g(x, y) tends to 0. Similar tothe First-Visit Lemma, by considering the first visiting place, we have (letB = C(‖x‖0.9)):∑γ∈Γ2s(γ) =∑a∈BHBc(x, a)g(a, y) ∑a∈BHBc(x, a)‖y‖2−d=P (Sx visits B) · ‖y‖2−d  (‖x‖0.9/‖x‖)d−2‖y‖2−d‖x‖−0.1‖x− y‖2−d  ‖x‖−0.1g(x, y).Note that the estimate of P (Sx visits C(r))  (r/‖x‖)d−2 is standard, andfor the second last inequality we use ‖y‖ ≥ (‖x‖+‖y‖)/3  ‖x−y‖. Hence,we get∑γ∈Γ2 s(γ)/g(x, y)→ 0 and therefore,∑γ:x→y, γ /∈Γ1∪Γ2s(γ) ∼ g(x, y). (2.2.11)312.2. Branching capacity and visiting probabilitiesFor any γ : x→ y, γ /∈ Γ1 ∪ Γ2, using (2.2.1), one can see:b(γ)/s(γ) =|γ|−1∏i=0(1− k(γ(i))) ≥ (1− c|K|/(‖x‖0.9)d−2)|γ|≥1− c|K||γ|/(‖x‖0.9)d−2 ≥ 1− c|K|‖x‖0.1‖x− y‖2/(‖x‖0.9)d−2≥1− c|K|‖x‖0.1‖x‖2.2/‖x‖0.9·3 ≥ 1− c|K|/‖x‖0.4 → 1.Hence, we have: ∑γ:x→y, γ /∈Γ1∪Γ2b(γ) ∼∑γ:x→y, γ /∈Γ1∪Γ2s(γ).Combining this and (2.2.11), we get: when ‖x‖/2 ≤ ‖y‖ ≤ 2‖x‖1.1, (2.2.10)is true.When ‖y‖ > 2‖x‖1.1, we know g(x, y) ∼ ad‖y‖2−d. Hence, we need toshow: GK(x, y) ∼ ad‖y‖2−d. Let r = 2‖y‖1/1.1 and B = C(r). Then for anya ∈ C(2r) \ C(r), ‖x‖ < ‖a‖ < ‖y‖ ≤ 2‖a‖1.1 (when ‖y‖ is large). HenceGK(a, y) ∼ g(a, y) ∼ ad‖y‖2−d. Applying the First-Visit Lemma, we have:GK(x, y) =∑a∈BcHBk (x, a)GK(a, y) ≥∑a∈C(2r)\BHBk (x, a)GK(a, y)∼∑a∈C(2r)\BHBk (x, a)ad‖y‖2−d=(∑a∈BcHBk (x, a)−∑a∈(C(2r))cHBk (x, a))ad‖y‖2−d≥((1− r(x))Es+K(x)− Cr2rd)ad‖y‖2−d∼ad‖y‖2−d.In the second last inequality we use the Overshoot Lemma and∑a∈BcHBk (x, a) ≥∑a∈BcHBk (x, a)(1− r(a))Es+K(a) = (1− r(x))Es+K(x)→ 1.Now, we show (2.2.10) for the case ‖x‖ ≤ ‖y‖. The case of ‖x‖ ≥ ‖y‖ can322.2. Branching capacity and visiting probabilitiesbe handled similarly.Remark 2.2.6. 
As we have seen in the proof, since the jump distributionθ maybe unbounded, we need an extra step to control the long jump, viathe Overshoot Lemma. This happens again and again later. It might beconvenient, especially for a first-time reader, to restrict the attention to thejump distribution with finite range.2.2.4 Proof of Theorem 1.3.1Now we are ready to prove Theorem 1.3.1. It is sufficient to show:Lemma 2.2.5. Under the same assumption of Theorem 1.3.1, we have:P (Sx visits K at a) ∼ ad‖x‖2−dEsK(a);P (Sx leaves K at a) ∼ ad‖x‖2−dEscK(a);whenever the escape probability on the right hand side is nonzero.Proof. Fix some α ∈ (0, 2/(d + 2)). Let r = ‖x‖α, s = ‖x‖1−α and B =C(r), B1 = C(s) \ B and B2 = (C(s))c. Note that our choice of α impliesr2/sd  ‖x‖2−d. Then,P (Sx visits K at a) (2.2.6)=∑γ:x→ab(γ) =∑b∈BcGK(x, b)HBk (b, a)=∑b∈B1GK(x, b)HBk (b, a) +∑b∈B2GK(x, b)HBk (b, a). (2.2.12)We argue that the first term has the desired asymptotics and the second is332.2. Branching capacity and visiting probabilitiesnegligible:∑b∈B1GK(x, b)HBk (b, a)(2.2.10)∼ ad‖x‖2−d∑b∈B1HBk (b, a)∼ ad‖x‖2−d(EsK(a)−∑b∈B2HBk (b, a))(2.1.6)= ad‖x‖2−d(EsK(a)−O(r2/sd)) ∼ ad‖x‖2−dEsK(a);∑b∈B2GK(x, b)HBk (b, a) ∑b∈B2HBk (b, a)(2.1.6) r2/sd  ‖x‖2−d.Note that the second line is due to EsK(a) =∑b∈B1∪B2 HBk (b, a)EsK(b) andEsK(x) ∼ 1.Now we complete the proof of the first assertion. Very similar argumentscan be used for the second assertion. Note that due to (2.2.9), we need touse the killing function k′(x) = r(x) and the analogous version of Lemma2.2.4 for this killing. We leave the details to the reader.2.2.5 The asymptotics for q(x), q−(x) and r(x)Thanks to Theorem 1.3.1, we also can find the exact asymptotics of thevisiting probabilities by an adjoint snake, an infinite snake and a reversedinfinite snake, i.e. r(x), q(x) and q−(x):Proposition 2.2.6.r(x) ∼ adσ2BCap(K)2‖x‖d−2 , (2.2.13)q(x) ∼ td · a2dσ2BCap(K)2‖x‖d−4 , (2.2.14)q−(x) ∼ td · a2dσ2BCap(K)2‖x‖d−4 , (2.2.15)where σ2 is the variance of µ, td = td(θ) =∫t∈Rd ‖t‖2−d‖h − t‖2−ddt, andh ∈ Rd is any vector satisfying ‖h‖ = 1.342.2. Branching capacity and visiting probabilitiesRemark 2.2.7. In fact, td = td(θ) has the following form:td =∫t∈Rd‖t‖2−d‖h− t‖2−ddt = dd/2√detQ∫t∈Rd|t|2−d|h′ − t|2−ddt,where h′ ∈ Rd is any vector with |h′| = 1 in Rd.Proof. Let s˜(x) be the probability that a snake starting from x visits Kconditioned on the initial particle having exactly one child. Then it is s-traightforward to see that: when x /∈ K,1− p(x) =∑i∈Nµ(i)(1− s˜(x))i, 1− r(x) =∑i∈Nµ˜(i)(1− s˜(x))i. (2.2.16)Note that∑i∈Nµ(i)(1− s˜(x))i ≥∑i∈Nµ(i)(1− is˜(x)) = 1− (Eµ)s˜(x)and∑i∈Nµ(i)(1− s˜(x))i ≤ µ(0) +∞∑i=1µ(i)(1− s˜(x)) = 1− (1− µ(0))s˜(x)Hence we havep(x)  s˜(x), (2.2.17)and similarly one can get r(x)  s˜(x). Therefore,r(x)  p(x). (2.2.18)We will use the following easy lemma and omit its proof.Lemma 2.2.7. Let (an)n∈N be a nonnegative sequence satisfying:∑n∈N an =1 and∑n∈N nan <∞. Let f(t) =∑n∈N antn. Then we have:limt→1−(1− f(t))/(1− t) =∑n∈Nnan.352.2. Branching capacity and visiting probabilitiesBy this lemma and (2.2.16), we havep(x) ∼∑iiµ(i)s˜(x) = s˜(x),r(x) ∼∑iiµ˜(i)s˜(x) =σ22s˜(x).Hence,r(x) ∼ σ22p(x) ∼ σ2adBCap(K)2‖x‖d−2 .Now we turn to the asymptotic of q(x). We point out two equalities forq(x):q(x) =∑y∈ZdGK(x, y)r(y); (2.2.19)q(x) =∑y∈Zdg(x, y)r(y)Es+K(y). (2.2.20)The first can be easily derived by considering where the particle dies in themodel of random walk with killing function r. 
For the second one, we needto consider a bit different but equivalent model: a particle starting from xexecutes a random walk, but at each step, the particle has the probabilityr to get a flag (instead of to die) and its movements are unaffected byflags. Let τ and ξ be the first and last time getting flags (if there is nosuch times then denote τ = ξ = ∞). Note that since q(z) < 1 (when |z|is large), the total number of flags gained is finite, almost surely. HenceP (τ < ∞) = P (ξ < ∞) and q(x) is just the probability that ξ < ∞. Byconsidering where the particle gets its last flag, one can get (2.2.20).We will use the following easy lemma and omit its proof:Lemma 2.2.8.‖x‖d−4∑z∈Zd1‖z‖d−2‖x− z‖d−2 ∼∫t∈Rd‖t‖2−d‖h− t‖2−ddt. (2.2.21)For the asymptotics of q(x), one can use either (2.2.19) or (2.2.20) and362.2. Branching capacity and visiting probabilitiesthe processes are similar to each other. Here we use (2.2.19). Let B =C(r) and r be very large. Divide the right hand side of (2.2.19) into threeparts:∑y∈B,∑y∈x+B and∑y/∈B∪x+B. We will argue that the first twoparts are negligible compared to ‖x‖4−d and the third term has the desiredasymptotics. For the first part, we have:‖x‖d−4∑y∈BGK(x, y)r(y)  ‖x‖d−4∑y∈Bg(x, y) · 1 ‖x‖d−4∑y∈B1(‖x‖ − r)d−2  ‖x‖d−4rd/(‖x‖−r)d−2 → 0 (when x→∞).For the second part, we have:‖x‖d−4∑y∈x+BGK(x, y)r(y)  ‖x‖d−4∑y∈x+B1 · r(y)(2.2.1) ‖x‖d−4∑y∈x+B|K|‖y‖2−d ≤ ‖x‖d−4rd|K|/(‖x‖ − r)d−2 → 0.When r and ‖x‖ are large and y /∈ B∪(x+B), the ratio betweenGK(x, y)r(y)and ad‖x − y‖2−dadσ2BCap(K)‖y‖2−d/2 is very close to 1. On the otherhand,‖x‖d−4∑y/∈B∪(x+B)ad‖x− y‖2−dadσ2BCap(K)‖y‖2−d/2=a2dσ2BCap(K)/2 · ‖x‖d−4∑y/∈B∪(x+B)‖x− y‖2−d‖y‖2−d=a2dσ2BCap(K)/2 · (‖x‖d−4∑y∈Zd‖x− y‖2−d‖y‖2−d−‖x‖d−4∑y∈(B∪x+B)‖x− y‖2−d‖y‖2−d).By (2.2.21), the first term in the bracket tends to td. Similar to the estimate372.2. Branching capacity and visiting probabilitiesfor the first two parts, one can verify that‖x‖d−4∑y∈(B∪x+B)‖x− y‖2−d‖y‖2−d  ‖x‖d−4rd/(‖x‖ − r)d−2 → 0.To sum up, we get‖x‖d−4∑y∈ZdGK(x, y)r(y) ∼ a2dσ2BCap(K) · td/2.This completes the proof of (2.2.14).(2.2.15) can be obtained in a very similar way and we leave the detailsto the reader. Note that one shall be a bit careful about whether to use theoriginal walk and the reversed walk. For example, instead of (2.2.20), wehave:q−(x) =∑y∈Zdg(y, x)r(y)EsK(y). (2.2.22)Remark 2.2.8. The analogous result (Proposition 1.3.8) also holds for theincipient infinite snake and can be proved similarly:limx→∞ ‖x‖d−4 · P (S∞x visits K) = tda2dσ2BCap(K). (2.2.23)2.2.6 Convergence of the conditional entering measureTheorem 1.3.1 implies that conditioned on visiting a finite set, the firstvisiting point and the last visiting point converge in distribution as thestarting point tends to infinity. In fact, not only the first and last visitingpoints, but also the set of ’entering’ points converge in distribution. Let usmake this statement precise.As before, we fix a K ⊂⊂ Zd. Let Mp(K) stand for the set of all finitepoint measures on K. The entering measure of a finite snake Sx = (T,ST )382.2. Branching capacity and visiting probabilitiesis defined by:Θx =∑v∈T :ST (v)∈K,v has no ancestor lying in KδST (v).Note that Θx is a random element in Mp(K) andP (Θx 6= 0) = P (Sx visits K) ≤ cK |x|2−d, E(〈Θx, 1〉) ≤ g(x,K) ≤ cK |x|2−d.We write Θx for Θx conditioned on Θx 6= 0. 
Now we can state our result:Theorem 2.2.9.Θxd→mK , as |x| → ∞, (2.2.24)where mK is defined later in (2.2.30) andd→ means convergence in distri-bution.Construction of the limiting measure.There are two steps needed, to sample an element from mK . The first stepis to sample the ’left-most’ path (Γ(Sx)) appeared in Section 2.2.2 and thenrun independent branching random walks from all vertices on that path.We begin with the second step. We write Θ˜x for Θx conditioned on theinitial particle having exactly one child. Inspired by (2.2.3), we introducethe position-dependent distribution µx on N and the random variable Λx onMp(K):µx(m) =∑l≥0µ(l +m+ 1)(r˜(x))l/(1− r(x)), for x /∈ K, (2.2.25)Λxd={ΣYi=1Xi, when x /∈ K;δx, when x ∈ K;(2.2.26)where Y is an independent random variable with distribution µx and Xi arei.i.d. with distribution Θ˜x. WriteN(x) = NK(x) = Eµx. (2.2.27)392.2. Branching capacity and visiting probabilitiesNote thatlimx→∞N(x) = Eµ˜ =σ22, limx→∞µx(m) = µ˜(m),µx(m) ≤ µ˜(m)/(1− r(x))  µ˜(m),E〈Λx, 1〉 = (Eµx)E(〈Θx, 1〉) ≤ cK |x|2−d.For any path γ, define Z(γ) and Z−(γ) by:Z(γ) = Σ|γ|i=0Λ(i);Z−(γ) = Σ|γ|−1i=0 Λ(i),where Λ(i)d= Λγ(i) are independent random variables. Note thatE〈Z(γ), 1〉 ≤ cKΣ|γ|i=0|γ(i)|2−d.Hence, for an infinite path γ : N→ Zd, we can also define Z(γ):Z(γ) =∞∑i=0Λ(γ(i)) ∈Mp(K) a.s.,as long as∞∑i=0|γ(i)|2−d <∞. (2.2.28)Now we move to the first step and explain how to sample the left-mostpath. For any x ∈ Zd, let h(x) = P (S−x does not visit K). Define P∞ to bethe transition probability of the Markov chain in {z ∈ Zd : EsK(z) > 0} by:P∞(x, y) =θ(x− y)h(y)∑z∈Zd θ(x− z)h(z)=θ(x− y)(1− k(y))EsK(y)EsK(x). (2.2.29)For any x with EsK(x) > 0, define P∞x to be the law of random walk startingfrom x with transition probability P∞. Define P∞K to be the law of randomwalk (with transition probability P∞) starting at a ∈ K with probabilityEsK(a)/BCap(K).402.2. Branching capacity and visiting probabilitiesNow we can give the definition of mK :mK = the law of Z :where first sample γ by P∞K and then sample Z by Z(γ). (2.2.30)Note that under P∞x (for those x with EsK(x) > 0),E∞x∞∑i=0|γ(i)|2−d = E∞x∑z∈Zd∑i∈N1γ(i)=z|z|2−d=∑z∈Zd|z|2−dE∞x∑i∈N1γ(i)=z =∑z∈Zd|z|2−dGK(z, x)EsK(z)EsK(x) 1EsK(x)∑z∈Zd|z|2−d|z − x|2−d  |x|4−dEsK(x)<∞.Therefore, under P∞x (and hence P∞K ), Z(γ) is well-defined a.s..Convergence of the conditional entering measureSince our sample space Mp(K) is discrete and countable, it is convenient touse the total variation distance. Recall that for two probability distributionsν1, ν2 on a discrete countable space Ω, the total variation distance is definedto bedTV (ν1, ν2) =12∑ω∈Ω|ν1(ω)− ν2(ω)| ∈ [0, 1]and νnd→ ν iff dTV (νn, ν)→ 0.Let us introduce some notations. Let Γ be a countable set of finitepaths. For each γ ∈ Γ, assign to it, the weight a(γ) ≥ 0 (assume thatthe total mass∑γ∈Γ a(γ) ≤ 1) and a probability law Z(γ) in Mp(K). Wedenote by⊔γ∈Γ a(γ) · Z(γ) for the random element in Mp(K) as follows:pick a random path γ′ among Γ with probability P (γ′ = γ) = a(γ) (withprobability 1−∑γ∈Γ a(γ) we do not get any path and in this case simply set⊔γ∈Γ a(γ) ·Z(γ) = 0) and then use the law Z(γ′) to sample⊔γ∈Γ a(γ) ·Z(γ).One can easily verify the following proposition:412.2. Branching capacity and visiting probabilitiesProposition 2.2.10. 
If ν =⊔γ∈Γ a(γ) · Z(γ), ν1 =⊔γ∈Γ a1(γ) · Z(γ) andν2 =⊔γ∈Γ a(γ) · Z1(γ), thendTV (ν, ν1) ≤∑γ∈Γ|a(γ)− a1(γ)|, dTV (ν, ν2) ≤∑γ∈Γa(γ)dTV (Z(γ), Z1(γ)).(2.2.31)For any n > Rad(K), write:mnK =⊔γ:(C(n))c→K,γ⊆(C(n)\K)b(γ)EsK(γ(0))BCap(K)· Z(γ).Note that mnK can be obtained equivalently as follows: first sample an infi-nite path γ′ by P∞K and cut γ′ into two pieces at the hitting time of (C(n))c;let γ be the first part and then sample mnK by Z(γ). Hence, we have:mnKd→mK as n→∞.Now we turn to Θx. Similar to the computations after (2.2.3), one canget, for γ = (γ(0), . . . , γ(k)) ⊆ Kc with γ(0) = x, γ̂ = γ(k) ∈ K and1 ≤ j1 < j2 ≤ k, (see the corresponding notations there)P (˜bj1 = m|Γ(Sx) = γ) =∑l∈N µ(l +m+ 1)(r˜(γ(j1 − 1)))l1− r(γ(j1 − 1)) ;P (˜bji = mi, for i = 1, 2|Γ(Sx) = γ) =∑l∈N µ(l +m1 + 1)(r˜(γ(j1 − 1)))l1− r(γ(j1 − 1))∑l∈N µ(l +m2 + 1)(r˜(γ(j2 − 1)))l1− r(γ(j2 − 1)) .From these (and the similar equations for more than two bj ’s), one cansee that conditioned on the event Γ(Sx) = γ, (˜bj)j=1,...,k are independentand have the distribution of the form in (2.2.25). Hence, conditioned onΓ(Sx) = γ, Θx has the law of Z(γ). Therefore, we haveProposition 2.2.11.Θx =⊔γ:x→Kb(γ)p(x)· Z(γ). (2.2.32)Remark 2.2.9. Note that for this proposition, we do not need the assump-422.2. Branching capacity and visiting probabilitiestion that d ≥ 5.Set n = n(x) = ‖x‖ d−1d . We need to show:limx→∞ dTV (Θx,mnK) = 0. (2.2.33)Let B = C(n) and B1 = C(2n). For any γ : x → K, we decompose γinto two pieces γ = γ1 ◦ γ2 according to the last visiting time of Bc. We canrewrite Θx as follows:Θx =⊔γ:x→Kb(γ)p(x)· Z(γ) =⊔γ:x→Kb(γ1)b(γ2)p(x)· (Z−(γ1) + Z(γ2)) =⊔γ2:Bc→K,γ2⊆Bb(γ2)gK(x, γ2(0))p(x)· (Z(γ2) +⊔γ1:x→γ2(0)b(γ1)gK(x, γ2(0))· Z−(γ1))We point out that∑γ2:Bc1→K,γ2⊆Bb(γ2)gK(x, γ2(0))p(x) 1. (2.2.34)This can be seen from: (by the Overshoot Lemma and (2.1.2))∑γ2:C(‖x‖/2)c→K,γ2⊆Bb(γ2)gK(x, γ2(0))  n2/‖x‖d  p(x);∑γ2:C(‖x‖/2)\B1→K,γ2⊆Bb(γ2)gK(x, γ2(0))  n2/nd · ‖x‖2−d  p(x).Furthermore, when y ∈ B1, by (2.2.10), (2.1.2) and (1.3.1), we have:gK(x, y)p(x)∼ 1BCap(K)∼ EsK(y)BCap(K). (2.2.35)432.2. Branching capacity and visiting probabilitiesHence (by Proposition 2.2.10, (2.2.34) and (2.2.35)), we have:dTV (Θx,⊔γ:B1\B→K,γ⊆Bb(γ)EsK(γ(0))BCap(K)·Z(γ) + ⊔γ1:x→γ(0)b(γ1)gK(x, γ(0))· Z−(γ1))→ 0.Similarly, we have:dTVmnK , ⊔γ:B1\B→K,γ⊆Bb(γ)EsK(γ(0))BCap(K)· Z(γ)→ 0.On the other hand, for any γ : B1 \B → K, γ ⊆ B, we have (let y = γ(0)):dTVZ(γ) + ⊔γ1:x→γ(0)b(γ1)gK(x, γ(0))· Z−(γ1),Z(γ)≤ P (⊔γ1:x→yb(γ1)gK(x, y)· Z−(γ1) 6= 0) ≤ E〈⊔γ1:x→yb(γ1)gK(x, y)· Z−(γ1), 1〉∑γ1:x→yb(γ1)gK(x, y)|γ1|∑i=0|γ1(i)|2−d =∑γ1:x→yb(γ1)gK(x, y)∑z∈Zd|z|2−d|γ1|∑i=01γ1(i)=z.We need to show the above term tends to 0. Note that gK(x, y)  |x|2−d andb(γ) ≤ s(γ), it suffices to show:(when x→∞, uniformly for any y ∈ B1 \B)|x|d−2∑z∈Zd|z|2−d∑γ1:x→ys(γ1)|γ1|∑i=01γ1(i)=z → 0. (2.2.36)Note that∑γ1:x→ys(γ1)|γ1|∑i=01γ1(i)=z =∑γ3:x→zs(γ3)∑γ4:z→ys(γ4) = g(x, z)g(z, y).442.2. Branching capacity and visiting probabilitiesHence, the left hand side of (2.2.36) can be bounded by:|x|d−2∑z∈Zd|z|2−dg(x, z)g(z, y)  |x|2−d∑z∈Zd|z|2−d|x− z|2−d|y − z|2−d= |x|d−2(∑z:|z−x|≤|x|/2+∑z:|z−x|>|x|/2)|z|2−d|x− z|2−d|y − z|2−d |x|d−2(∑z:|z−x|≤|x|/2|x|2−d|z − x|2−d|x|2−d+∑z:|z−x|>|x|/2|z|2−d|x|2−d|y − z|2−d) |x|d−2(|x|6−2d + |y|4−d|x|2−d)  |y|4−d  n4−d → 0.Now the proof is complete.2.2.7 Branching capacity of balls.In this subsection, we compute the branching capacity of balls. As men-tioned before, we carry out this by estimating the visiting probability ofballs and then use (1.3.1) in reverse. 
Let us set up the notations. Forx ∈ Zd and A ⊂⊂ Zd, we write pA(x), rA(x), qA(x) and q−A(x) respectively,for the probability that a snake, an adjoint snake, an infinite snake and areversed snake respectively, starting from x visits A.Theorem 2.2.12. Let A = {z = (z1, 0) ∈ Zm × Zd−m = Zd : ‖z‖ ≤ r}be the m-dimensional ball (1 ≤ m ≤ d) with radius r ≥ 1 and x ∈ Zd \ A.When s = ρ(x,A) ≥ 2, we havepA(x) rd−4/sd−2, if m ≥ d− 3 and s ≥ r;1/s2, if m ≥ d− 3 and s ≤ r;rd−4/(sd−2 log r), if m = d− 4 and s ≥ r;1/(s2 log s), if m = d− 4 and s ≤ r;rm/sd−2, if m ≤ d− 5 and s ≥ r;1/sd−m−2, if m ≤ d− 5 and s ≤ r.Proof. Let us first mention the organization of the proof. All lower bounds452.2. Branching capacity and visiting probabilitieswill be proved by the second moment method. So we first estimate the firstand the second moments. For upper bounds, due to Markov property (fromProposition 2.2.3), the case for ’big s’ (i.e. s ≥ r) can be reduced to the casefor ’small s’ (i.e. s ≤ r). For small s, visiting a large m-dimensional ball inZd behaves like visiting a point in Zd−m. Hence we can use the results onthe latter case.Upper bounds for m ≤ d− 5 and lower bounds for all cases.Let N be the number of times the branching random walk visits A. ThenEN =∑z∈A g(x, z) = g(x,A). For the first moment, we point out:g(x,A) {rm/sd−2, for s ≥ r;1/sd−m−2, for s ≤ r,m ≤ d− 3. (2.2.37)The computations are straightforward. When s ≥ r, for any a ∈ A, ρ(x, a) s. Hence g(x,A)  |A| ·1/sd−2  rm/sd−2. When s ≤ r,m ≤ d−3, the partof  is easy. Let b ∈ A satisfying ρ(x, b) = ρ(x,A) and let B = b + C(s).Then for any a ∈ B ∩ A, ρ(x, a)  s and |B ∩ A|  sm. Hence g(x,A) s2−d ·sm = 1/sd−m−2. For the other part, it needs a bit more work. Assumex = (x¯1, x¯2) ∈ Zm × Zd−m and let x1 = (x¯1, 0), x2 = (0, x¯2) ∈ Zd. Sinceρ(x,A) = s, either ρ(x, x1) ≥ s/2 or ρ(x1, A) ≥ s/2. When s/2 ≤ ρ(x, x1) =‖x2‖, note that |x2|  ‖x2‖  s. We have:g(x,A) ≤∑z∈Zm×0⊆Zdg(x, z) ∑z1∈Zm(√|z1|2 + |x2|2)2−d=∑z1∈Zm,|z1|≤s(√|z1|2 + |x2|2)2−d +∑z1∈Zm,|z1|≥s(√|z1|2 + |x2|2)2−d≤∑z1∈Zm,|z1|≤s|x2|2−d +∑z1∈Zm,|z1|≥s(|z1|)2−dsm · s2−d +∑n≥snm−1nd−2= sm+2−d +∑n≥s1nd−m−11/sd−m−2.462.2. Branching capacity and visiting probabilitiesWhen ρ(x1, A) ≥ s/2, note that |x1|  ‖x1‖  s. We have:g(x,A) ∑z∈Zm×0⊆Zd,‖z‖≤rρ(x, z)2−d ≤∑z∈Zm×0⊆Zd,‖z−x1‖≥s/2ρ(x, z)2−d=∑z∈Zm×0⊆Zd,‖z‖≥s/2‖z‖2−d ∑z∈Zm×0⊆Zd,‖z‖≥s/2|z|2−d≤∑n≥Csnm−1nd−2=∑n≥Cs1nd−m−1 1/sd−m−2.Now we finish the proof of (2.2.37). Note that (2.2.37) is also true even forx ∈ A i.e. g(x,A)  1(recall that since we set ‖0‖ = 1/2, when x ∈ A,ρ(x,A) = 1/2 by our convention).Using P (N > 0) ≤ EN , one can get the desired upper bounds for m ≤d− 5.For the lower bounds, we need to estimate the second moment and thefollowing is a standard result for branching random walk (for example, seeRemark 2 in Page 13 of [14]).Lemma 2.2.13. There exists a constant C, such that:EN2 ≤ C∑z∈Zdg(x, z)g2(z,A).We need to estimate the above sum. First consider the case when A is am-dimensional ball and m ≤ d−3, s ≥ r. Let B0 = {z ∈ Zd : ρ(z,A) ≤ r/6}and Bn = C(2ns/3), for n ∈ N+. Note that there exists some c > 0, such thatB(c−1r) ⊆ B0 ⊆ B(cr) and B(c−12n−1s) ⊆ Bn ⊆ B(c2n−1s), for any n ≥ 1.We will divide the sum into three parts and estimate separately:∑z∈B0 ,∑z∈B1\B0 , and∑n≥2∑z∈Bn\Bn−1 . When z = (z1, z2) ∈ B0, where z1 ∈ Zm472.2. Branching capacity and visiting probabilitiesand z2 ∈ Zd−m, ‖x− z‖  s and ρ(z,A)  |z2|. 
Hence∑z∈B0g(x, z)g2(z,A) ∑z∈B01‖x− z‖d−21ρ(z,A)2d−2m−4 1sd−2∑z∈B(cr)1|z2|2d−2m−4 1sd−2∑|z1|≤cr,|z2|≤cr1|z2|2d−2m−4 rmsd−2∑|z2|≤cr1|z2|2d−2m−4 rmsd−2∑n≤crnd−m−1n2d−2m−4=rmsd−2∑n≤cr1nd−m−3rm+1/sd−2, if m = d− 3;rm log r/sd−2, if m = d− 4;rm/sd−2, if m ≤ d− 5.When z ∈ B1 \ B0, ‖x − z‖  s, ρ(z,A)  |z| and g(z,A)  rm/|z|d−2.Hence:∑z∈B1\B0g(x, z)g2(z,A) ∑z∈B1\B01‖x− z‖d−2(rm|z|d−2)2∑z∈B(cs)\B(c−1r)r2msd−2|z|2d−4 r2msd−2∑c−1r≤n≤csnd−1n2d−4 r2msd−21rd−4.Note that this term is not bigger than the first term and hence negligible.The remaining part can be estimated similarly and is also negligible:∑n≥2∑z∈Bn\Bn−1g(x, z)g2(z,A) ∑n≥2∑z∈Bn\Bn−11|x− z|d−2(rm|z|d−2)2∑n≥2∑z∈B(c2n−1s)\B(c−12n−1s)1|x− z|d−2r2m(2ns)2d−4(∗)∑n≥2(2ns)2r2m(2ns)2d−4=∑n≥2r2ms2d−61(2n)2d−6 r2ms2d−6≤ r2msd−21rd−4.(∗) is due to the fact that ∑z∈B(n) |x− z|2−d ≤∑z∈B(n) |z|2−d  n2.482.2. Branching capacity and visiting probabilitiesTo summarize, we get:∑z∈Zdg(x, z)g2(z,A) rm+1/sd−2, if m = d− 3;rm log r/sd−2, if m = d− 4;rm/sd−2, if m ≤ d− 5.For r ≥ s, since we are considering lower bound on pA(x), by mono-tonicity, we can assume m ≤ d− 3, r ∈ [s/2, s]. Then, we can just let r  sin the last formula and get:∑z∈Zdg(x, z)g2(z,A) sm+1/sd−2 = 1, if m = d− 3;sm log s/sd−2 = log s/s2, if m = d− 4;sm/sd−2 = 1/sd−m−2, if m ≤ d− 5.Using pA(x) = P (N > 0) ≥ (EN)2/EN2, one can get the required lowerbounds for all cases.From small s to big s.We have proved the upper bound for m ≤ d−5and now consider the case m ≥ d − 4. Assume that we have the desiredupper bounds for small s. We want the upper bound for big s. Let B ={z ∈ Zd : ρ(z,A) ≤ r/2} and C = {z ∈ Zd : ρ(z,A) ≤ r/4}. Then by theassumption, we know that for any z ∈ B\C, p(z)  α(r), where α(r) = 1/r2or 1/(r2 log r) depending on m. LetΓ1 = {γ : x→ A|γ ⊆ Ac, γ visits B \ C},Γ2 = {γ : x→ A|γ ⊆ Ac, γ avoids B \ C}.We decompose pA(x) into two pieces:pA(x) =∑γ:x→Ab(γ) =∑γ∈Γ1b(γ) +∑γ∈Γ2b(γ).For the first term, by considering the first visiting point of B \ C, one can492.2. Branching capacity and visiting probabilitiessee:∑γ∈Γ1b(γ) ≤∑z∈B\C∑γ:x→z,γ⊆(B\C)cb(γ)pA(z) ≤∑z∈B\C∑γ:x→z,γ⊆(B\C)cs(γ)α(r)≤ α(r)P (Sx visits (B \ C)) ≤ α(r)P (Sx visits B) α(r)(r/s)d−2 ={rd−4/sd−2 if m ≥ d− 3;rd−4/(sd−2 log r) if m = d− 4.Recall that Sx is the random walk starting from x and we use the standardestimate of P (Sx visits B)  (r/s)d−2. For the other term, by consideringthe first jump from Bc to C, one can see:∑γ∈Γ2b(γ) ≤∑z∈BcGA(x, z)∑y∈Cθ(y − z)pA(y)=∑z∈Bc1GA(x, z)∑y∈Cθ(y − z)pA(y) +∑z∈B1\BGA(x, z)∑y∈Cθ(y − z)pA(y),where B1 = {z : ρ(z,A) ≤ r/2 + s/4}. Both terms are not more than thedesired order:∑z∈Bc1GA(x, z)∑y∈Cθ(y − z)pA(y) ∑z∈Bc11 ·∑y∈Cθ(y − z)α(r)≤∑y∈Cα(r)∑z∈Bc1θ(y − z) ∑y∈Cα(r)s−d  α(r)rd/sd ≤ α(r)(r/s)d−2;∑z∈B1\BGA(x, z)∑y∈Cθ(y − z)pA(y) ∑z∈B1\Bs2−d∑y∈Cθ(y − z)α(r)≤ s2−d∑y∈Cα(r)∑z∈Bcθ(y − z)  s2−d∑y∈Cα(r)r−d  s2−dα(r) ≤ α(r)(r/s)d−2.For small s and m ≥ d− 3. The upper bound in this case relieson the corresponding bound for one dimensional branching random walk.Let H be a half space, say H = {z = (z1, . . . , zd) ∈ Zd : z1 ≥ n}. Theprobability of visiting H is equivalent to the probability of 1-dimensionalbranching random walk visiting a half line. The asymptotic behavior of502.2. Branching capacity and visiting probabilitiesvisiting a single point in 1-d is known. Recall (1.2.2),lima→∞ ‖x‖2P ( Branching random walk from 0 visits x) = 2(4− d)/dσ2.However, our situation is a bit different. For our purpose, we give a weakerresult here under weaker assumptions:Proposition 2.2.14. 
Let Sx be 1-dimensional branching random walk s-tarting from x ∈ Z, given that the offspring distribution µ is critical andnondegenerate, and the jump distribution θ has zero mean and finite secondmoment, and satisfies∑i:i≤−k θ(i) ≤ Ck−4 for any k ∈ N+ and some C (in-dependent of k). Then for some large constant c = c(θ, µ) > 0 (independentof x), we have: for any x ∈ N+,P (Sx visits Z−) ≤ c/|x|2, (2.2.38)where Z− = {0,−1,−2, . . . }.We postpone the proof of this proposition. Return to d dimension. Sincewe can find at most d half spaces H1, H2, . . . ,Hd satisfying: ρ(x,Hi)  s forany i = 1, . . . , d; and that any path from x to A must hit at least one of Hi.Then we have:pA(x) ≤d∑i=1P (Sx visits Hi)  d · |s|−2  |s|−2.For small s and m = d− 4. Intuitively when the radius r is large,visiting a m = d − 4 dimensional ball in Zd, is similar to visiting a pointin Z4. This is indeed the case. In Section 2.4.1 we give the desired upperbound for the latter case and the method there also works here with slightmodifications. We point out the major differences and leave the detailsto the reader. On the one hand, one should use g˜(γ) :=∑|γ|−1i=0 g(γ(i), A)instead of g(γ) there. On the other hand, in proving an analogy of Lemma10.1.2(a) in [11], one might use the stopping times:ξ˜i = min{k : ρ(Sk, A) ≥ 2i} ∧ (ξ˜i−1 + (2i)2).512.2. Branching capacity and visiting probabilitiesinstead of ξi there.Proof of Proposition 2.2.14. Write p(x) for the left hand side in (2.2.38). Inorder to obtain upper bounds of p(x), we use some of the ideas of [9] (Section7.1), the techniques from nonlinear difference equations. We will exploit thefact that p(x) satisfies a parabolic nonlinear difference equation and use thecomparison principle.Let pn(x) = P (Sx visits Z− within the first n generations). Then pn(x)is increasing for n and converges to p(x) when n→∞. On the other hand,one can easily verify that pn(x) satisfies the recursive equations:p0(x) =1Z−(x); pn(x) = 1 for x ∈ Z−; (2.2.39)pn+1(x) = f(Apn(x)), for x ∈ N+; (2.2.40)where f(t) = 1 −∑k≥0 µ(k)(1 − t)k and A is the Markov operator for therandom walk, that is, for any bounded function w : Z → R, Aw(x) =∑y∈Z θ(y)w(x+ y). One can see that f : [0, 1]→ [0, 1−µ(0)] is in C1[0, 1]∩C∞(0, 1] with the first 2 derivatives as follows:f ′(t) =∑k≥1kµ(k)(1− t)k−1 > 0 for t ∈ [0, 1); f ′(0) = 1, f ′(1) = µ(1) ≥ 0;f ′′(t) = −∑k≥2k(k − 1)µ(k)(1− t)k−2 < 0 for t ∈ (0, 1).From these, it is easy to obtain:inft∈(0,1]t− f(t)t2> 0.Hence we can find some a ∈ (0, 1/2), such thatf(t) ≤ t− at2, for any t ∈ [0, 1] and t(1 + at) ≤ 1 for any t ∈ [0, 1− µ(0)].(2.2.41)To extract information from (2.2.40), we will use the following standardcomparison principle.522.2. Branching capacity and visiting probabilitiesLemma 2.2.15. Let un(x) and vn(x) be Z→ [0, 1], satisfyingun(x) = vn(x) = 1, for any x ∈ Z− and n ∈ N;un+1(x) = f(Aun(x)), vn+1(x) ≥ f(Avn(x)) for any x ∈ N+.If v0(x) ≥ u0(x) for all x, thenvn(x) ≥ un(x) for all n ∈ N+ and x ∈ Z.Proof. Note that for n > 0 and x ∈ N+:vn(x)− un(x) ≥ f(Avn−1(x))− f(Aun−1(x))≥ mint∈[0,1]{f ′(t)}(Avn−1(x)− Aun−1(x))= mint∈[0,1]{f ′(t)}A(vn−1 − un−1)(x).Since f ′(t) ≥ 0, one can use induction to finish the proof.Now let un(x) = pn(x) and vn(x) = v(x) = 1 ∧ (c/x2) when x ∈ N+ forsome large c (to be determined later). If we can showv(x) ≥ f(Av(x)) for any x ∈ N+, (2.2.42)then by the lemma above we conclude the proof of Proposition 2.2.14.Let us write down our strategy for choosing c. First we fix some  ∈(0, 1/2), such that (1− µ(0))/(1− )2 < 1. 
Choose c satisfying:ac2 ≥ C/4 + 3(E|θ|2)c/(1− )4. (2.2.43)We argue that (2.2.42) holds for our choice of c. When c/x2 ≥ 1 − µ(0),(2.2.42) is obvious since f(t) ≤ 1− µ(0).Now assume c/x2 < 1 − µ(0). Since f(t) is increasing, we need to findan upper bound of Av(x). We achieve this by decomposing Av(x) into two532.2. Branching capacity and visiting probabilitiespieces and estimating each one separately:Av(x) =∑y∈Zθ(y)v(x+ y) =∑y≤−xθ(y)v(x+ y) +∑y>−xθ(y)v(x+ y).We can use our assumption of θ to bound the first term:∑y≤−xθ(y)v(x+ y) ≤∑y≤−xθ(y) ≤ C/(x)4;Using Taylor expansion, the second term can be bounded by:∑y>−xθ(y)v(x+ y) ≤∑y>−xθ(y)(v(x) + yv′(x) +y22v′′((1− )x))= v(x)∑y>−xθ(y) + v′(x)∑y>−xθ(y)y + v′′((1− )x)∑y>−xθ(y)y22≤ v(x) · 1 + v′(x)(−∑y≤−xθ(y)y) + v′′((1− )x)E|θ|2/2≤ v(x) + 0 + E|θ|22· 6c(1− )4x4 .To summarize, we get (let K = (C/4 + 3E|θ|2c/(1− )4)):Av(x) ≤ v(x) + (C/4 + 3E|θ|2c/(1− )4)x−4 = v(x) +Kx−4.Note that by (2.2.41) and (2.2.43), we have v(x)+Kx−4 ≤ v(x)+a(v(x))2 ≤1. Hence:f(Av(x)) ≤ Av(x)(1− aAv(x)) ≤ (v(x) +Kx−4)(1− av(x))≤ v(x) +Kx−4 − a(v(x))2 = v(x) +Kx−4 − ac2x−4 ≤ v(x).This completes the proof of (2.2.42) and hence the proof of Proposition2.2.14.For the future use, we give the following upper bound for the visitingprobability of a ball by an infinite snake.542.2. Branching capacity and visiting probabilitiesLemma 2.2.16. Let A = C(r) (r ≥ 1) and x ∈ Zd such that s = ρ(x,A) ≥ r.Then we have:qA(x) ∨ q−A(x)  (r/s)d−4. (2.2.44)Proof. Consider a bigger ball B = C(1.5r). ThenqA(x) ≤ P ( backbone visits B) + P ( backbone avoids B, S∞x visits A).Since the backbone is just a random walk, the first term is comparable to(r/s)d−2, which is less that (r/s)d−4. On the other hand, when the backbonedoes not visit B, by considering where the particle is killed, we have:P (backbone avoids B,S∞x visits A) ≤∑z∈BcGA(x, z)rA(z) ∑z∈Bcg(x, z)pA(z)∑z∈Bc1|x− z|d−2rd−4|ρ(z,A)|d−2 ∑z∈Bc1|x− z|d−2rd−4|z|d−2 rd−4|x|d−4  (rs)d−4.This completes the proof of qA(x)  (r/s)d−4 and similarly one can showq−A(x)  (r/s)d−4.2.2.8 Proof of Theorem 1.3.5We use an equation approach similar to the proof of Proposition 2.2.14.Write fi(t) = 1 −∑k≥0 µi(k)(1 − t)k, i = 1, 2. We need the following littlelemma and postpone its proof.Lemma 2.2.17. There is a C = C(µ1, µ2) > 1 such that, for all t ∈ [0, 1],f1((Ct) ∧ 1) ≤ (Cf2(t)) ∧ 1. (2.2.45)For any A ⊂⊂ Zd fixed, as in the proof of Proposition 2.2.14, denoteui,n(x) (i = 1, 2) recursively by:ui,0(x) = 1A(x), u0,n(a) = 1 ∀a ∈ A; ui,n+1(x) = fi(Aui,n(x)) ∀a /∈ A.With the help of last lemma, one can see that u1,n(x) ≤ Cu2,n(x), for any552.2. Branching capacity and visiting probabilitiesn, x. On the other hand, we know that ui,n(x) → pi,A(x). Hence we havep1,A(x) ≤ Cp2,A(x). Then by Theorem 1.3.1, one can get Theorem 1.3.5.Proof of Lemma 2.2.17. Since limt→0 f2(t)/t = 1, when C is large enough,we have Cf2(C−1) ≥ 1− µ1(0) = f1(1). It suffices to show for t ∈ [0, C−1],g(t).= Cf2(t)− f1(Ct) ≥ 0. (2.2.46)Note that fi(0) = 0, f′i(0) = 1, f′′i (0) = −Var(µi), f ′′i (t) ≤ 0 and |f ′′i (t)|is non-increasing. Hence we can find some C = C(µ1, µ2) > 1 such that,C|f ′′1 (1/2)| ≥ 2|f ′′2 (0)| (and Cf2(C−1) ≥ 1− µ1(0)). Then we haveg′′(t) = C(f ′′2 (t)− Cf ′′1 (Ct)) ≥{C|f ′′2 (0)|, t ∈ [0, 1/(2C)];−C|f ′′2 (0)|, t ∈ [1/(2C), 1/C].Together with g(0) = g′(0) = 0, one can get (2.2.46).2.2.9 Bounds for the Green functionThe speed of convergence in (2.2.10) depends on K, which maybe not con-venient in some cases. 
For example, by that lemma, we know GK(x, y) ≥CKg(x, y) (when |x|, |y| are large), but the constant depends on K. Thepurpose of this section is to build up this type of bounds with constantsindependent of K.Thanks to lemma 2.1.3, we have:Lemma 2.2.18. Let U, V be two connected bounded open subset of Rd suchthat U ⊆ V . Then there exists a C = C(U, V ) such that if An = nU ∩Zd, Bn = nV ∩ Zd then when n is sufficiently large and K ⊆ Bcn, we haveGK(x, y) ≥ Cg(x, y) for any x, y ∈ An. (2.2.47)Proof. Without loss of generality, we can assume ρ(K,Bn)  n (by shrinkingV a bit). Hence for any z ∈ Bn, ρ(z,K)  n. By Proposition 2.2.14, one cansee that pK(x)  ρ(x,K)−2. Hence k(z) = rK(z)  pK(z)  n−2. Then we562.2. Branching capacity and visiting probabilitieshave, for any γ : x → y, γ ⊆ Bn, |γ| ≤ 2n2, b(γ)/s(γ) ≥ (1 − c/n2)2n2  1(provided that n is sufficiently large). Then we have:GK(x, y) ≥∑γ:x→y,γ⊆Bn,|γ|≤2n2b(γ) ∑γ:x→y,γ⊆Bn,|γ|≤2n2s(γ)(2.1.5) g(x, y).Before giving a better form, we turn to the escape probability and prove:Lemma 2.2.19. For any λ > 0, there exists a positive C = C(λ), such that,for any A ⊂⊂ Zd and x ∈ Zd satisfying ‖x‖ ≥ (1 + λ)Rad(A), we have:EsA(x) > C. (2.2.48)Proof. By lemma 2.2.16, we can find a positive constant c1 > 1, such that,for any z ∈ Zd with ‖z‖ ≥ c1Rad(A), we have EsA(z) ≥ 1/2. Write r =Rad(A), B = C(2c1r) and D = C(4c1r)\C(3c1r). Without loss of generality,assume 1 + λ < c1/2 and ‖x‖ < c1r. For any y ∈ D, by Lemma 2.2.18 (letU = {x ∈ Rd : ‖x‖ ∈ (1 + λ, 4c1)}), we have (when r is large): GA(y, x) g(y, x)  r2−d. Applying the First-Visit Lemma, we get:GA(y, x) =∑z∈BcGA(y, z)HBk (z, x).Hence, ∑y∈DGA(y, x) =∑y∈D∑z∈BcGA(y, z)HBk (z, x).Note that the left hand side is  rd · r2−d = r2 and the right hand side isnot larger than:∑y∈D∑z∈Bcg(y, z)HBk (z, x) =∑z∈BcHBk (z, x)∑y∈Dg(y, z) ∑z∈BcHBk (z, x) · r2.572.2. Branching capacity and visiting probabilitiesThis implies∑z∈Bc HBk (z, x)  1. Therefore we have:EsA(x) =∑z∈BcHBk (z, x)EsA(z) ≥ 1/2 ·∑z∈BcHBA(x, z)  1,which completes the proof.Remark 2.2.10. In fact we prove (2.2.48) only when Rad(A) is large. Weignore the case when Rad(A) is not large since this can be done by a s-tandard argument as follows. If Rad(A) is not sufficiently large, there areonly finite possibilities of A. For each of those A, we have already knownthe asymptotics of EsA(x) (limx→∞ EsA(x) = 1). On the other hand, it isobvious that for any ‖x‖ > Rad(A), EsA(x) > 0. Hence we can find someC(A) > 0 satisfying (2.2.48). Since there are finite many C(A)’s, we cansimply choose C to be the smallest one of those C(A) (together with the onefor sufficiently large A). We will also omit this type of standard argumentslater. In fact, we have done this in the proof of Theorem 2.2.12.Now we are ready to prove the following bound of Green function:Lemma 2.2.20. For any λ > 0, there exists C = C(λ) > 0, such that: forany A ⊂⊂ Zd and x, y ∈ Zd with ‖x‖, ‖y‖ > (1 + λ)Rad(A), we have:GA(x, y) ≥ Cg(x, y). (2.2.49)Proof. Without loss of generality, assume ‖x‖ ≤ ‖y‖. By Lemma 2.2.18 onecan assume ‖y‖ > 10‖x‖ and note that under this assumption g(x, y) ‖y‖2−d. Let B = C(‖y‖/2) and C = C(3‖y‖/4). For any z ∈ C \ B, alsoby Lemma 2.2.18, we have GA(y, z)  ‖y‖2−d. Applying the First-VisitLemma, we have:GA(x, y) =∑z∈BcHBk (x, z)GA(z, y) ∑z∈C\BHBk (x, z)‖y‖2−d= ‖y‖2−d(∑z∈BcHBk (x, z)−∑z∈CcHBA(x, z))≥ ‖y‖2−d(EsA(x)− c‖y‖2/‖y‖d),582.2. Branching capacity and visiting probabilitieswhere for the last step we use the Overshoot Lemma. 
Therefore, whenRad(A) is large enough, by Lemma 2.2.19, GA(x, y)  ‖y‖2−d  g(x, y).2.2.10 Proof of Theorem 1.3.3.Proof. By cutting A into small pieces, it is enough to show (1.3.2) under theassumption of ‖x‖ ≥ 3Rad(A). Also, as before, we can assume r = Rad(A)is sufficiently large. Let B = C(2r).Upper bound. By (2.2.7) and the First-Visit Lemma, we havepA(x) =∑y∈AGA(x, y) =∑y∈A∑z∈BcGA(x, z)HBk (z, y).We will decompose it into two parts and estimate them separately.Let D = {z ∈ Zd : ρ(z, x) ≤ 0.1ρ(x,A)}. Note that when z ∈ Bc \ D,ρ(x, z)  ρ(x,A) and EsA(z)  1 by Lemma 2.2.19. Hence,∑y∈A∑z∈Bc\DGA(x, z)HBk (z, y) ∑y∈A∑z∈Bc\Dρ(x,A)2−dHBk (z, y) ρ(x,A)2−d∑y∈A∑z∈Bc\DHBk (z, y)EsA(z)≤ ρ(x,A)2−d∑y∈AEsA(y) = ρ(x,A)2−dBCap(A).When z ∈ D, ρ(z,B)  ρ(x,A). By considering the position where the firstjump falls into, we have:HBk (z, y) ≤∑w∈Bθ(w − z)GA(w, y).592.2. Branching capacity and visiting probabilitiesHence,∑y∈A∑z∈DGA(x, z)HBk (z, y) ∑z∈Dg(x, z)∑w∈Bθ(w − z)∑y∈AGA(w, y)=∑z∈Dg(x, z)∑w∈Bθ(w − z)pA(w) ≤∑z∈Dg(x, z)∑w∈Bθ(w − z)(1.1.1)∑z∈Dg(x, z)ρ(z,B)−d ∑z∈Dg(x, z)ρ(x,A)−d  ρ(x,A)2−d.This completes the proof of the upper bound.Lower bound. First choose some a > 1, such that for any s ≥ 1,|C(s)| · θ{(C((a− 1)s))c} ≤ BCap({0})2, (2.2.50)Note that our assumption of θ guarantees that θ{(C((a− 1)s))c}  ((a −1)s)−d.Write ρ = ρ(x,A) and let C = C(aρ). Note that ρ ≥ 2r and ρ(x,B) ≥ r.Hence r ≤ ρ/2, B ⊆ C(ρ) and for any w ∈ B, z ∈ Cc,ρ(w, z) ≥ (a− 1)ρ. (2.2.51)ThenpA(x) =∑y∈AGA(x, y) =∑y∈A∑z∈BcGA(x, z)HBk (z, y)≥∑y∈A∑z∈C\BGA(x, z)HBk (z, y) ∑y∈A∑z∈C\B(2aρ)2−dHBk (z, y),We use the last Lemma in the last step. It is sufficient to show:∑y∈A∑z∈C\BHBk (z, y)  BCap(A). (2.2.52)602.3. Branching capacity and branching recurrenceNote that:∑y∈A∑z∈C\BHBk (z, y) ≥∑y∈A∑z∈C\BHBk (z, y)EsA(z)=∑y∈A(EsA(y)−∑z∈CcHBk (z, y)EsA(z)) ≥ BCap(A)−∑y∈A∑z∈CcHBk (z, y).As in the proof for the upper bound, we have:∑y∈A∑z∈CcHBk (z, y) ≤∑y∈A∑z∈Cc∑w∈Bθ(w − z)GA(w, y)=∑y∈A∑w∈BGA(w, y)∑z∈Ccθ(w − z)(2.2.51)≤∑w∈B∑y∈AGA(w, y)θ{(C((a− 1)ρ))c}=θ{(C((a− 1)ρ))c}∑w∈BpA(w) ≤ θ{(C((a− 1)ρ))c}|B|(2.2.50)≤ BCap(A)2.Now (2.2.52) follows and this completes the proof of the lower bound.2.3 Branching capacity and branching recurrenceWe now give the definitions of branching recurrence and branching tran-sience. Recall that we always assume d ≥ 5 in this section. In addition, weassume further that θ has finite range throughout this section.Definition 2.3.1. Let A be a subset of Zd. We call A a branching recur-rent (B-recurrent) set ifP (S∞0 visits A infinitely often) = 1, (2.3.1)and a branching transient (B-transient) set ifP (S∞0 visits A infinitely often) = 0. (2.3.2)In fact, it is equivalent to use the incipient infinite snake in the definitionof branching recurrence and branching transience.612.3. Branching capacity and branching recurrenceProposition 2.3.2.P (S∞0 visits A infinitely often) = 1⇔ P (S∞0 visits A infinitely often) = 1.Proof. The necessity is trivial. For the sufficiency, we use the followingcoupling between S∞0 and S∞0 . First sample S∞0 . Then we can constructS∞0 as follows: for the backbone of S∞0 , just use the backbone of S∞0 ; foreach vertex in the backbone, we graft to it an adjoint snake, independently,using either the left adjoint snake or the right one, corresponding to thesame vertex in S∞0 , with equal probability. When S∞0 visits A infinitelyoften, there are infinite adjoint snakes on S∞0 visiting A. For each vertexon the backbone, either the left adjoint snake or the right one is chosen,independently with equal probability. 
Therefore, by the strong law of largenumbers, an infinite number of adjoint snakes that visits A will be chosen,on the process of producing S∞0 , almost surely. It means that S∞0 visits Ainfinitely often almost surely.Proposition 2.3.3. Every set A ⊆ Zd is either B-recurrent or B-transient.Proof. Let f(x) = P (S∞x visits A infinitely often). It is easy to see thatf is a bounded harmonic function. But every bounded harmonic functionin Zd is constant. Hence f ≡ t for some t ∈ [0, 1]. Let V be the eventS∞0 visits A infinitely often. Since f ≡ t, we have P (V |Fn) = t for any n,where Fn is the σ-field generated by all ’information’ (the tree structure andthe random variables corresponding to the edges) after n-th vertex of thespine. Then V is a tail event. By the Kolmogorov 0-1 Law, t is either 0 or1.If A is finite, since qA(x) < 1 for large x, f(x) < 1 and must be 0. Hencewe have:Proposition 2.3.4. Every finite subset of Zd is B-transient.For some technical reasons, we assume further that θ has finite range inthis section.622.3. Branching capacity and branching recurrence2.3.1 Inequalities for convolved sumsWe need the following two inequalities in the proof of our version of Wiener’sTest.Lemma 2.3.5. For any n ∈ N+, let B = C(n). When A ⊆ B and x ∈ Zd,we have: ∑z∈BGA(x, z)qA(z)  (diam(B))2qA(x); (2.3.3)∑z∈BGA(x, z)pA(z)  (diam(B))2pA(x). (2.3.4)We prove (2.3.3) here and postpone the proof of (2.3.4) until Section2.3.3.Proof of (2.3.3). For (2.3.3), we do not need to assume that B is a ball andA ⊆ B. In fact, we will prove (2.3.3) for any finite subsets A,B of Zd andx ∈ Zd.We are working at the random walk with killing function rA. Considerthe following equivalent model: a particle starting from x executes a randomwalk S = (S(k))k∈N, but at each step, the particle has the probability rAto get a flag (instead of to die) and its movements are unaffected by flags.Let τ and ξ be the first and last time getting flags (if there is no suchtime, define both to be infinity). Note that since qA(z) < 1 (when |z|is large), the total number of flags gained is finite, almost surely. HenceP (τ <∞) = P (ξ <∞). Under this model, one can see thatGA(x, z)qA(z) = P (τ <∞, S(k) = z for some k ≤ τ).Hence it is not more than E(∑τi=0 1{S(i)=z}; τ <∞) and the L.H.S. of (2.3.3)is not more thanE(τ∑i=01{Sx(i)∈B}; τ <∞) ≤ E(ξ∑i=01{Sx(i)∈B}; ξ <∞).632.3. Branching capacity and branching recurrenceBy considering the place where the particle gets its last flag, one can see:E(ξ∑i=01{Sx(i)∈B}; ξ <∞) =∑w∈Zd ∑γ:x→ws(γ)(|γ|∑i=01γ(i)∈B) · rA(w)Es+A(w).We point out a result about random walk and prove it later:∑γ:x→ws(γ)(|γ|∑i=01γ(i)∈B)  (diam(B))2∑γ:x→ws(γ). (2.3.5)Hence we get:∑z∈BGA(x, z)qA(z) (diam(B))2∑w∈Zd∑γ:x→ws(γ)rA(w)Es+A(w)=(diam(B))2∑w∈Zdg(x,w)rA(w)Es+A(w)(2.2.20)= (diam(B))2qA(x).Now we just need to prove (2.3.5). First we assume x,w ∈ B, then∑γ:x→ws(γ)(|γ|∑i=01γ(i)∈B) ≤∑γ:x→ws(γ)[γ](2.1.3) |x− w|4−d≤ (diam(B))2|x− w|2−d  (diam(B))2g(x,w) = (diam(B))2∑γ:x→ws(γ).For general x,w, one just need to decompose γ into pieces according to the642.3. Branching capacity and branching recurrencefirst and last visiting time of B. 
For example, when x,w /∈ B, we have:∑γ:x→ws(γ)(|γ|∑i=01γ(i)∈B) =∑y,z∈BHBc(x, y) ∑γ′:y→zs(γ′)|γ′|∑i=01γ′(i)∈BHBc(z, w)∑y,z∈BHBc(x, y)(diam(B))2 ∑γ′:y→zs(γ′)HBc(z, w)=(diam(B))2∑y,z∈BHBc(x, y)∑γ′:y→zs(γ′)HBc(z, w)=(diam(B))2∑γ:x→w,γ visits Bs(γ) ≤ (diam(B))2∑γ:x→ws(γ).When just one of x and w is in B, the proof is similar but easier.2.3.2 Restriction lemmasRecall that we have: (see (2.2.7))pA(x) =∑γ:x→Ab(γ).Our goals of this section are to show:Proposition 2.3.6. For any n ∈ N+ sufficiently large and A ⊆ C(n), x ∈C(n), we have:pA(x) ∑γ:x→A,γ⊆C(1.1n)b(γ). (2.3.6)Proposition 2.3.7. For any n ∈ N+ sufficiently large and A ⊆ C(n), x ∈C(n), we have:qA(x) ∑γ:x→A,γ⊆C(4n)[γ] · b(γ). (2.3.7)We first introduce some notations. Since θ has finite range, we can definethe outer boundary ∂oB for any B ⊆ Zd by∂oB = {z ∈ Zd \B : ∃y ∈ B, θ(z − y) ∨ θ(y − z) > 0}.652.3. Branching capacity and branching recurrenceNote that for any y ∈ ∂oB, ρ(y,B) is bounded above by a constant dependingon θ. For A ⊆ B ⊆ Zd and x, y ∈ B ∪ ∂oB, writeGBA(x, y) =∑γ:x→y,γ⊆Bb(γ).Lemma 2.3.8. For any λ1, λ2, λ3 > 0, there exists C = C(λ1, λ2, λ3) > 0satisfying the following. When n is sufficiently large, let B0 = C(n),B1 =C((1 +λ1)n), B2 = C((1 +λ1 +λ2)n) and B = C((1 +λ1 +λ2 +λ3)n). Thenfor any x, y ∈ B2 \B1 and A ⊆ B0, we have:GBA(x, y) ≥ CGA(x, y). (2.3.8)Proof. Let B′ = C((1+λ1/2)n). Note that for any y ∈ B\B′, by (1.3.2) and(2.2.18), we have rA(y)  pA(y)  n−2. Hence, we have: for any γ ⊆ B \B′with |γ| ≤ 2n2, b(γ)/s(γ) ≥ (1− c/n2)2n2  1.Therefore, by Lemma 2.1.3, one can see that:GBA(x, y) =∑γ:x→y,γ⊆Bb(γ) ≥∑γ:x→y,γ⊆B\B′,|γ|≤2n2b(γ)∑γ:x→y,γ⊆B\B′,|γ|≤2n2s(γ)  g(x, y) ≥ GA(x, y).Lemma 2.3.9. For any λ > 0, ι > 0, there exists C = C(λ, ι) > 0 satisfyingthe following. When n is sufficiently large, let B0 = C(n), B1 = C((1+λ)n),B = C((1 + λ+ ι)n). Then for any x, y ∈ B1 and A ⊆ B0, we have:GBA(x, y) ≥ CGA(x, y). (2.3.9)Proof. By last lemma, one can get, for any z, w ∈ ∂oB1,GBA(z, w)  GA(z, w).662.3. Branching capacity and branching recurrenceFor any x, y ∈ B1, we have:GBA(x, y) = GB1A (x, y) +∑γ:x→y,γ visits Bc1,γ⊆Bb(γ).By considering the first and last visits in Bc1, we have:∑γ:x→y,γ visits Bc1,γ⊆Bb(γ) =∑z,w∈∂oB1HB1A (x, z)GBA(z, w)HB1A (w, y)(2.3.8)∑z,w∈∂oB1HB1A (x, z)GA(z, w)HB1A (w, y) =∑γ:x→y,γ visits Bc1b(γ).Hence, we have:GBA(x, y) =GB1A (x, y) +∑γ:x→y,γ visits Bc1,γ⊆Bb(γ)GB1A (x, y) +∑γ:x→y,γ visits Bc1b(γ) = GA(x, y).Now we can show Proposition 2.3.6:Proof of Proposition 2.3.6. Let B = C(1.1n). We have:pA(x) =∑γ:x→Ab(γ) =∑z∈AGA(x, z)(2.3.9)∑z∈AGBA(x, z) =∑γ:x→A,γ⊆Bb(γ).Now we turn to qA(x). The starting point is:Lemma 2.3.10. For any a ∈ Zd, A ⊂⊂ Zd, B ⊆ Zd, we have:∑z∈BGA(x, z)pA(z) =∑γ:x→Ab(γ)|γ|∑i=01γ(i)∈B. (2.3.10)672.3. Branching capacity and branching recurrenceProof. ∑z∈BGA(x, z)pA(z) =∑z∈B∑γ1:x→zb(γ1)∑γ2:z→Ab(γ2)=∑z∈B∑γ1:x→z∑γ2:z→Ab(γ1)b(γ2)=∑z∈B∑γ1:x→z∑γ2:z→Ab(γ1 ◦ γ2)=∑γ:x→Ab(γ) ·|γ|∑i=01γ(i)∈B.The last equality is due to the fact that for any γ : x→ A, there are exactly∑|γ|i=0 1γ(i)∈B ways to rewrite γ as the composite of two paths γ1 and γ2 suchthat the common point of γ1 and γ2 is in B.Corollary 2.3.11.qA(x) ∑γ:x→A[γ] · b(γ). (2.3.11)Proof. By (2.2.19) and (2.2.18), we have:qA(x) ∑z∈ZdGA(x, z)pA(z). (2.3.12)By last lemma, we have∑z∈Zd GA(x, z)pA(z) =∑γ:x→A[γ] · b(γ).Lemma 2.3.12. For any n sufficiently large, A ⊆ C(n), x ∈ Zd with ‖x‖ ≥1.1n, we have:qA(x) ∑γ:x→A,γ⊆C(3‖x‖)[γ] · b(γ). (2.3.13)Proof. The part of ’’ is trivial by the last corollary. It suffices to show theother part. 
As in the last corollary, we have:qA(x) ∑z∈ZdGA(x, z)pA(z).682.3. Branching capacity and branching recurrenceFirst by (2.1.2),(1.3.2) and (2.2.49), one can see that:∑z∈C(2‖x‖)\C(1.5‖x‖)GA(x, z)pA(z) ∑z∈C(2‖x‖)\C(1.5‖x‖)g(x, z)BCap(A)(ρ(z,A))d−2∑z∈C(2‖x‖)\C(1.5‖x‖)1|x|d−2BCap(A)|x|d−2 |x|d 1|x|d−2BCap(A)|x|d−2 =BCap(A)|x|d−4 .Similarly, we can get:∑z∈C(2‖x‖)cGA(x, z)pA(z) ∑z∈C(2‖x‖)cg(x, z)BCap(A)(ρ(z,A))d−2∑z∈C(2‖x‖)c1|z|d−2BCap(A)|z|d−2 BCap(A)∑z∈C(2‖x‖)c1|z|2d−4 BCap(A)|x|d−4 .Hence we have:qA(x) ∑z∈ZdGA(x, z)pA(z) ∑z∈C(2‖x‖)GA(x, z)pA(z).By Lemma 2.3.9, we have (let B = C(3‖x‖)):∑z∈C(2‖x‖)GA(x, z)pA(z) =∑z∈C(2‖x‖)GA(x, z)∑y∈AGA(z, y)∑z∈C(2‖x‖)GBA(x, z)∑y∈AGBA(z, y)=∑z∈C(2‖x‖)∑γ1:x→z,γ1⊆Bb(γ1)∑γ2:z→A,γ2⊆Bb(γ2)=∑z∈C(2‖x‖)∑γ1:x→z,γ1⊆B∑γ2:z→A,γ2⊆Bb(γ1 ◦ γ2)≤∑γ:x→A,γ⊆B[γ]b(γ).692.3. Branching capacity and branching recurrenceThis completes the proof.Proof of Proposition 2.3.7. Let B = C(1.1n) and B′ = C(4n). We have:qA(x) ∑γ:x→A[γ]b(γ) =∑γ:x→A,γ⊆B[γ]b(γ) +∑γ:x→A,γ visits Bc[γ]b(γ).By considering the first visit of Bc, the second term is equal to:∑y∈∂oB∑γ1:x→y,γ1⊆B∑γ2:y→A(|γ1|+ [γ2])(b(γ1)b(γ2))=∑y∈∂oB∑γ1:x→y,γ1⊆B|γ1|b(γ1)∑γ2:y→Ab(γ2)+∑y∈∂oB∑γ1:x→y,γ1⊆Bb(γ1)∑γ2:y→A[γ2]b(γ2)(2.2.7)(2.3.11)∑y∈∂oB∑γ1:x→y,γ1⊆B|γ1|b(γ1)pA(y) +∑y∈∂oB∑γ1:x→y,γ1⊆Bb(γ1)qA(y)(2.3.13),(2.3.6)∑y∈∂oB∑γ1:x→y,γ1⊆B|γ1|b(γ1)∑γ2:y→A,γ2⊆B′b(γ2)+∑y∈∂oB∑γ1:x→y,γ1⊆Bb(γ1)∑γ2:y→A,γ2⊆B′[γ2]b(γ2)=∑y∈∂oB∑γ1:x→y,γ1⊆B∑γ2:y→A,γ2⊆B′(|γ1|+ [γ2])(b(γ1)b(γ2))=∑γ:x→A,γ visits Bc,γ⊆B′[γ]b(γ).Hence, we getqA(x) ∑γ:x→A,γ⊆B[γ]b(γ)+∑γ:x→A,γ visits Bc,γ⊆B′[γ]b(γ) =∑γ:x→A,γ⊆B′[γ]b(γ).This completes the proof.2.3.3 Visiting probability by an infinite snakeIn this subsection we establish the following bounds analogous to (1.3.2):702.3. Branching capacity and branching recurrenceTheorem 2.3.13. For any A ⊂⊂ Zd and x ∈ Zd with ‖x‖ ≥ 2Rad(A), wehave:qA(x)  BCap(A)(ρ(x,A))d−4. (2.3.14)Remark 2.3.1. By cutting A into small pieces, one can replace ‖x‖ ≥2Rad(A) by ρ(x,A) ≥ diam(A), for any  > 0.Remark 2.3.2. The analogous result for S∞x (Theorem 1.3.9) can be provedin a similar way.Proof. It suffices to show the case when Rad(A) is sufficiently large since weknow the asymptotical behavior when x is far away (see (2.2.14)). The partfor  is straightforward and similar to the first part of the proof of Lemma2.3.12:qA(x)(2.3.12)∑z∈ZdGA(x, z)pA(z) ≥∑2‖x‖≤‖z‖≤4‖x‖GA(x, z)pA(z)(1.3.2)(2.2.49)∑2‖x‖≤‖z‖≤4‖x‖1|x− z|d−2BCap(A)(ρ(z,A))d−2∑2‖x‖≤‖z‖≤4‖x‖1|z|d−2BCap(A)|z|d−2|x|d 1|x|d−2BCap(A)|x|d−2 =BCap(A)|x|d−4 BCap(A)(ρ(x,A))d−4.The other part can be implied by (1.3.2) and the following lemma (let n =‖x‖).Lemma 2.3.14. For any n ∈ N+ sufficiently large, A ⊂ C(n), y ∈ C(n), wehave:qA(y)  n2pA(y). (2.3.15)Proof. Let B = C(4n). By (2.3.7) and (2.2.7), it suffices to prove:∑γ:y→A,γ⊆B[γ]b(γ)  n2∑γ:y→A,γ⊆Bb(γ). (2.3.16)By (2.3.3) and (2.3.7), one can get:∑z∈BGA(y, z)qA(z)  n2∑γ:y→A,γ⊆B[γ]b(γ). (2.3.17)712.3. Branching capacity and branching recurrenceFor the left hand side, we have:∑z∈BGA(y, z)qA(z)(2.3.11)∑z∈B∑γ1:y→zb(γ1)∑γ2:z→A[γ2]b(γ2)≥∑z∈B∑γ1:y→z,γ1⊆B∑γ2:z→A,γ2⊆B[γ2]b(γ1 ◦ γ2)=∑γ:y→A,γ⊆B(1 + 2 + ...+ [γ])b(γ) ∑γ:y→A,γ⊆B[γ]2b(γ).Hence, we have: ∑γ:y→A,γ⊆B[γ]2b(γ)  n2∑γ:y→A,γ⊆B[γ]b(γ). (2.3.18)By Cauchy-Schwarz inequality: ∑γ:y→A,γ⊆B[γ]b(γ)2 ≤ ∑γ:y→A,γ⊆B[γ]2b(γ) · ∑γ:y→A,γ⊆Bb(γ) n2∑γ:y→A,γ⊆B[γ]b(γ) · ∑γ:y→A,γ⊆Bb(γ) .Then (2.3.16) follows and we complete the proof.Proof of (2.3.4). When x ∈ B, by the last lemma (recall that qA(x) ∑z∈Zd GA(x, z)pA(x)), we have the desired bound. Now we assume x /∈ B.722.3. 
Branching capacity and branching recurrenceBy considering the first visit of B, we have∑z∈BGA(x, z)pA(z)(2.3.10)=∑γ:x→A(|γ|∑i=01γ(i)∈B)b(γ)=∑y∈B∑γ1:x→y,γ1⊆Bc∑γ2:y→Ab(γ1 ◦ γ2)(|γ2|∑i=01γ2(i)∈B)=∑y∈B∑γ1:x→y,γ1⊆Bcb(γ1)∑γ2:y→A(|γ2|∑i=01γ2(i)∈B)b(γ2)(2.3.10)=∑y∈B∑γ1:x→y,γ1⊆Bcb(γ1)∑z∈BGA(y, z)pA(z)(∗)∑y∈B∑γ1:x→y,γ1⊆Bcb(γ1)(diam(B))2pA(y)=(diam(B))2∑y∈B∑γ1:x→y,γ1⊆Bcb(γ1)pA(y)=(diam(B))2pA(x).(∗) is because we have proved that (2.3.4) is true for x ∈ B and for the lastline, we use the First-Visit Lemma and (2.2.7).2.3.4 Upper bounds for the probabilities of visiting two setsIn this subsection we aim to prove the following inequalities which we willuse in the proof of Wiener’s Test.Lemma 2.3.15. For any disjoint nonempty subsets A,B ⊂⊂ Zd and x ∈Zd, we have:P (Sx visits both A&B) ∑z∈ZdGA∪B(x, z)pA(z)pB(z); (2.3.19)732.3. Branching capacity and branching recurrenceP (S∞x visits both A&B) ∑z∈ZdGA∪B(x, z)(pA(z)qB(z) + qA(z)pB(z) + P (S ′z visits both A&B)).(2.3.20)Proof. (2.3.20) is a bit easier and we prove it first. When an infinite snakeS∞x = (T,ST ) visits both A and B, let u be the first vertex in the spinesuch that the image of the bush graft to u under ST intersects A ∪ B. As-sume (v0, . . . , vk) is the unique simple path in the spine from o to u. DefineΓ(A,B)(S∞x ) = (ST (v0), . . . ,ST (vk)). For any path γ = (γ(0), . . . , γ(k)) start-ing from x with length |γ| = k, we would like to estimate P (Γ(A,B)(S∞x ) = γ).If we can show that:P (Γ(A,B)(S∞x ) = γ) b(γ)(pA(γ̂)qB(γ̂) + qA(γ̂)pB(γ̂) + P (S ′γ̂ visits both A&B)), (2.3.21)then by summation, one can get (2.3.20).Now we argue that (2.3.21) is correct. Let t be the bush grafted to u.There are three possibilities: ST (t) visits A but not B, visits B but not Aor visits both A and B. For the first one, to guarantee Γ(A,B)(S∞x ) = γ, weneed three conditions to be true. The first is that ST maps (v0, . . . , vk) toγ and that the image of each bush grafted to vi does not intersect A ∪ B,for i = 0, . . . , k − 1. The probability of this condition being true is b(γ).The second condition is that ST (t) intersects A but not B. The probabilityof this condition being true is at most rA(γ̂)  pA(γ̂). The last conditionis that the image of the bushes after u intersects B. The probability ofthis condition being true is at most qB(γ̂). Note that for fixed γ, the threeconditions are independent. Hence we have:P (Γ(A,B)(S∞x ) = γ,ST (t) visits A not B) ≤ b(γ)pA(γ̂)qB(γ̂).Similarly, one can get the other two inequalities. This completes the proofof (2.3.20).742.3. Branching capacity and branching recurrenceFor (2.3.19), we use a similar idea. When a snake Sx = (T,ST ) visits bothA and B, then VA := {v ∈ T : ST (v) ∈ A} and VB := {v ∈ T : ST (v) ∈ B}are nonempty. We call a vertex v ∈ T good, if v is the last common ancestorfor some u1 ∈ VA and u2 ∈ VB (any vertex is regarded as an ancestorof itself). Since for any u1 ∈ VA and u2 ∈ VB, they have the unique lastcommon ancestor. Hence there exists at least one good vertex and we choosethe first good one (due to the default order, Depth-First order), say u.Assume γ = (v0, . . . , vk) is the unique simple path in T from the root o tou. Define Γ(A,B)(Sx) = (Sx(v0), . . . ,Sx(vk)). As before, we would like toestimate P (Γ(A,B)(Sx) = γ), for a fixed path γ = (γ(0), . . . , γ(k)) startingfrom x, with length |γ| = k. We argue that:P (Γ(A,B)(Sx) = γ)  b(γ)pA(γ̂)pB(γ̂). (2.3.22)Since u is the first good vertex, one can see that all vertices in VA ∪ VBare descendants of u or u itself. 
In particular,any vertex before u is not in VA ∪ VB. (2.3.23)Here, ’before’ is due to the Depth-First search order. This is the first nec-essary condition for the event Γ(A,B)(Sx) = γ being true. Similar to thecomputations in Section 2.2.2, the probability for (2.3.23) being true is b(γ).Note that this condition just depends on (T \ Tu,ST |T\Tu), where Tu is thesubtrees generated by u and its descendants, and T \Tu is the tree generatedby u and those vertices outside Tu.On the other hand, since u is the last common ancestor for some u1 ∈ VAand u2 ∈ VB, when u /∈ VA ∪ VB, u must have two different children u1 andu2, such that ST (Tu1) ∩ A 6= ∅ and ST (Tu2) ∩ B 6= ∅. This is the secondnecessary condition for the event Γ(A,B)(Sx) = γ being true. Note that forfixed γ, this condition is independent of (2.3.23), and its probability is atmost∞∑n=2µ(n)n(n− 1)pA(γ̂)pB(γ̂) = σ2pA(γ̂)pB(γ̂).752.3. Branching capacity and branching recurrenceWhen u ∈ VA ∪ VB, say ∈ A (it implies γ̂ ∈ A), then similarly, u musthave a descendant mapped into B. The probability for this condition is:pB(γ̂) = pA(γ̂)pB(γ̂). Combining the two conditions one can get (2.3.22).By summation, one can get (2.3.19). This completes the proof of (2.3.19).We require the assumption of the finite third moment of µ only for thefollowing lemma.Lemma 2.3.16. When µ has finite third moment, we have:P (Sx visits both A&B)  P (S ′x visits both A&B). (2.3.24)Proof. In fact, we will show:P (Sx visits both A&B)  pA(x)pB(x) + P (Sx visits both A&B);(2.3.25)P (S ′x visits both A&B)  pA(x)pB(x) + P (Sx visits both A&B);(2.3.26)where Sx is the finite snake from x conditioned on the initial particle havingonly one child.For the upper bound of the first assertion, consider whether Sx visits Avia the same child of the initial particle as it visits B via. If it does, thisprobability is at most∞∑i=1µ(i) · iP (Sx visits both A&B) = E(µ)P (Sx visits both A&B).If it does not, this probability is at most∞∑i=2µ(i) · i(i− 1)P (Sx visits A)P (Sx visits B)  pA(x)pB(x).Note that we use the fact that∑∞i=2 µ(i) · i(i− 1) is bounded by the secondmoment of µ and P (Sx visits A)  pA(x), which can be proved similar to762.3. Branching capacity and branching recurrence(2.2.18). Combining the last two inequalities, we get the upper bound of(2.3.25).For the lower bound, it is easy to see thatP (Sx visits both A&B) ≥∑i≥2µ(i)P (Sx visits A)P (Sx visits B) pA(x)pB(x);P (Sx visits both A&B) ≥∑i≥1µ(i)P (Sx visits both A&B).Combining these two, we can the lower bound of (2.3.25).Similarly one can get (2.3.26). Note that for the upper bound, we requirethat µ˜ has finite second moment which is equivalent to the assumption thatµ has finite third moment.2.3.5 Proof of Wiener’s TestWe first divide {x ∈ Rd : 1 ≤ |x| < 2} into a finite number of small pieceswith diameter less than 1/32: B1, . . . , BN . Let Kkn = Kn ∩ (2nBk) for anyn ∈ N+, 1 ≤ k ≤ N . For any nonempty set Kkn, we have diam(Kkn) ≤ 2n/32and ρ(0,Kkn) ∈ [2n, 2n+1). Let V kn be the event that S∞0 visits Kkn. ApplyingTheorem 2.3.13, we can get:P (V kn ) BCap(Kkn)2n(d−4). (2.3.27)Since each Kkn is finite (any finite set is B-transient), we haveP (S∞0 visits K i.o.) = P (V kn i.o.).When∑∞n=0 BCap(Kn)/2n(d−4) <∞, by monotonicity, for any 1 ≤ k ≤N ,∞∑n=1BCap(Kkn)2n(d−4)<∞∑n=0BCap(Kn)2n(d−4)<∞.772.3. 
Branching capacity and branching recurrenceHence,∞∑n=1N∑k=1P (V kn ) ∞∑n=1N∑k=1BCap(Kkn)2n(d−4)=N∑k=1( ∞∑n=1BCap(Kkn)2n(d−4))<∞.Then by Borel-Cantelli Lemma, almost surely, only finite V kn occurs andhence K is B-transient.When∑∞n=0 BCap(Kn)/2n(d−4) =∞, by subadditivity of branching ca-pacity (see Section 2.2.1), we have:∞∑n=1N∑k=1BCap(Kkn)2n(d−4)≥∞∑n=1BCap(Kn)2n(d−4)=∞.Hence for some 1 ≤ k ≤ N , ∑∞n=1 BCap(Kkn)/2n(d−4) =∞. Suppose∞∑n=1BCap(K1n)/2n(d−4) =∞.We need the following Lemma whose proof we postpone.Lemma 2.3.17. There exists some C > 0, such that, for any n < m, wehave:P (V 1n ∩ V 1m) ≤ CP (V 1n )P (V 1m).Let In =∑ni=1 1V 1iand F = 1{In≥E(In)/2}. By the lemma above, wehave:E(I2n) ≤ C(E(In))2.Note thatE(FIn) = EIn − E(In1{In<E(In)/2}) ≥ E(In)/2.Hence,P (In ≥ E(In)/2) = E(F ) = E(F 2)≥ (E(FIn))2/E(I2n) ≥ (E(In)/2)2/C(E(In))2 = 1/(4C).782.3. Branching capacity and branching recurrenceSince EIn →∞, let n→∞, we getP (In =∞) ≥ 1/(4C).By Proposition 2.3.3, we get that K is B-recurrent.2.3.6 Proof of Lemma 2.3.17Write A = K1n, B = K1m and M = 2m. Without loss of generality, assumeA,B 6= ∅. We knowdiam(A) ≤ 2n/32, diam(B) ≤ 2m/32.Fix any a ∈ A and b ∈ B. Let  = a+ C(2n/8) and B̂ = b+ C(2m/8). Thenwe haveρ(A, Âc)  ρ(0, Â)  2n; ρ(B, B̂c)  ρ(0, B̂)  2m; ρ(a, b)  ρ(Â, B̂)  2m.(2.3.28)We need to show:P (S∞0 visits both A&B)  qA(0)qB(0). (2.3.29)In the proof, we will repeatedly use (1.3.2), Theorem 2.3.13, (2.3.12) andLemma 2.3.5 without mention. Since (see (2.3.20) and (2.3.24))P (S∞0 visits both A&B) ∑z∈ZdGA∪B(0, z) · (pA(z)qB(z) + pB(z)qA(z) + P (Sz visits both A&B)) ,792.3. Branching capacity and branching recurrenceit suffices to show: ∑z∈ZdGA∪B(0, z)pA(z)qB(z)  qA(0)qB(0); (2.3.30)∑z∈ZdGA∪B(0, z)pB(z)qA(z)  qA(0)qB(0); (2.3.31)∑z∈ZdGA∪B(0, z)P (Sz visits both A&B)  qA(0)qB(0). (2.3.32)Note that by monotonicity, GA∪B(x, y) ≤ min{GA(x, y), GB(x, y)}. For(2.3.30), we have:∑z∈ZdGA∪B(0, z)pA(z)qB(z)=∑z∈B̂GA∪B(0, z)pA(z)qB(z) +∑z∈B̂cGA∪B(0, z)pA(z)qB(z)∑z∈B̂GB(0, z)pA(b)qB(z) +∑z∈B̂cGA(0, z)pA(z)qB(0)(diam(B̂))2qB(0)pA(b) + qA(0)qB(0)  qA(0)qB(0).Similarly one can show (2.3.31).We just need to show (2.3.32). We first argue that:P (Sz visits both A&B) {pA(b)qB(z) + pB(0)qA(z); when z ∈ C(4M);pA(z)qB(a); when z /∈ C(4M).(2.3.33)By (2.3.19), we need to estimate:∑w∈ZdGA∪B(z, w)pA(w)pB(w).802.3. Branching capacity and branching recurrenceWhen z ∈ C(4M), we have∑w∈ZdGA∪B(z, w)pA(w)pB(w)=∑w∈B̂GA∪B(z, w)pA(w)pB(w) +∑w∈B̂cGA∪B(z, w)pA(w)pB(w)∑w∈B̂GA(z, w)pA(b)pB(w) +∑w∈B̂cGB(z, w)pA(w)pB(0) pA(b)qB(z) + pB(0)qA(z).When z /∈ C(4M), let Ĉ = C(3M). We divide the sum into three parts:∑w∈B̂,∑w∈Ĉ\B̂,∑w∈Ĉc.∑w∈B̂GA∪B(z, w)pA(w)pB(w) ∑w∈B̂GB(z, w)pA(b)pB(w) (diamB̂)2pB(z)pA(b)  (ρ(a, b))2 BCap(A)BCap(B)(ρ(z,B))d−2(ρ(a, b)d−2) pA(z)qB(a);∑w∈Ĉ\B̂GA∪B(z, w)pA(w)pB(w) ∑w∈Ĉ\B̂GA(z, w)pA(w)pB(a) (diamĈ)2pA(z)pB(a)  pA(z)qB(a);∑w∈ĈcGA∪B(z, w)pA(w)pB(w) ∑w∈Ĉcg(z, w)BCap(A)BCap(B)|w − a|d−2|w − b|d−2BCap(A)BCap(B)∑w∈Ĉc1|w − z|d−2|w − a|2d−4(∗) BCap(A)BCap(B)|z − a|d−2|b− a|d−4  pA(z)qB(a).812.4. The critical dimension: d=4Combining all three above, we get (2.3.33). 
Note that for (∗), we use:∑w∈Ĉc1|w − z|d−2|w − a|2d−4≤∑‖w−z‖≤‖z‖/81|w − z|d−2|w − a|2d−4 +∑‖w−z‖≥‖z‖/8,w∈Ĉc1|w − z|d−2|w − a|2d−4∑‖w−z‖≤‖z‖/81|w − z|d−2|z − a|2d−4 +∑‖w−z‖≥‖z‖/8,w∈Ĉc1|z|d−2|w − a|2d−4 |z|2|z − a|2d−4 +1|z|d−2∑w∈Ĉc1|w − a|2d−4 1|z|2d−6 +1|z|d−2∑n≥3Mnd−1n2d−4 1|z|2d−6 +1|z|d−21Md−4 1|z|d−21Md−4 1|z − a|d−2|b− a|d−4 .Hence,∑z∈ZdGA∪B(0, z)P (Sz visits both A&B)∑z∈C(4M)GA∪B(0, z)(pA(b)qB(z) + pB(0)qA(z))+∑z∈C(4M)cGA(0, z)pA(z)qB(a)M2pA(b)qB(0) +M2pB(0)qA(0) + qA(0)qB(a)  qB(0)qA(0).This is just (2.3.32) and we finish the proof.2.4 The critical dimension: d=4In this section, we focus on the critical dimension d = 4. Note that now(2.1.2) is just:g(x) ∼ a4‖x‖−2; (2.4.1)where a4 = 1/(8pi2√detQ) with Q being the covariant matrix of θ.822.4. The critical dimension: d=4For some technical reasons, we assume further that, in this section, θhas finite exponential moments, i.e. for some λ > 0,∑z∈Z4θ(z) · exp(λ|z|) <∞.2.4.1 An upper boundIn this subsection, we construct a weaker result which will be used in theproof of Theorem 1.3.12:Theorem 2.4.1.p{0}(x)  (|x|2 log |x|)−1. (2.4.2)Remark 2.4.1. On the other hand, the reversed inequality can be obtainedby the second moment method. This process is similar to and easier thanthe proof for the lower bound in Theorem 2.2.12.The idea of proof is as follows. From simple calculation one can see thatthe expectation of the times of visiting x is g(x)  |x|−2. If conditioned onvisiting, the expectation of the visiting times is of order log |x|, then we canget (2.4.2). In fact, we will show that this is true with high probability.Let N0 be the number of times of visiting 0. We need to estimateE(N0|Sx visits K via γ). For any finite path γ, defineN0(γ) =|γ|∑i=0N(γ(i))g(γ(i), 0), (2.4.3)where N(x) = NK(x) = Eµx (see (2.2.25), (2.2.27) and set K = {0}).By Proposition 2.2.11, we have:E(N0|Sx visits 0 via γ) = N0(γ).832.4. The critical dimension: d=4For N(x), we have:N(x) =Eµx =∑l≥0,m≥0mµ(l +m+ 1)(r˜(x))l∑l≥0,m≥0 µ(l +m+ 1)(r˜(x))l≥∑l=0,m≥0mµ(m+ 1)∑l≥0,m≥0 µ(l +m+ 1)=∑m≥0mµ(m+ 1)∑l≥0,m≥0 µ(l +m+ 1)= µ(0).Write g(γ) =∑|γ|i=0 g(γ(i)). Then we have:N0(γ) ≥ µ(0)g(γ)  g(γ). (2.4.4)We need the following lemma and postpone its proof:Lemma 2.4.2. There exists a c > 0, such that for any x, we have:∑γ:x→0,g(γ)≤c log |x|b(γ)  |x|−2.1. (2.4.5)Now we start the proof of Theorem 2.4.1. First we haveEN0 = g(x, 0)  |x|−2.Hence,|x|−2  EN0 =∑γ:x→0b(γ)N0(γ)(2.4.4)∑γ:x→0b(γ)g(γ)≥∑γ:x→0,g(γ)≥c log |x|b(γ)g(γ) ∑γ:x→0,g(γ)≥c log |x|b(γ) log |x|.Therefore we have: ∑γ:x→0,g(γ)≥c log |x|b(γ)  1/(|x|2 log |x|).842.4. The critical dimension: d=4Then we have:p{0}(x) =∑γ:x→0b(γ) =∑γ:x→0,g(γ)≥c log |x|b(γ) +∑γ:x→0,g(γ)<c log |x|b(γ) 1/|x|2.1 + 1/(|x|2 log |x|)  1/(|x|2 log |x|).We still need to show (2.4.5). Note that b(γ) ≤ s(γ). Hence (2.4.5) can beobtained byProposition 2.4.3. There exist c1, c2 such that for x ∈ Z4 with |x| suffi-ciently large,P (τx <∞,τx∑i=0g(Si) ≤ c1 log |x|) ≤ c2|x|−2.1,where (Si)i∈N is a random walk starting from 0 with distribution θ− and τxis the hitting time for x.This proposition is an adjusted version of Lemma 10.1.2 (a) in [11].It is assumed there that θ has finite support which is stronger than ourassumptions, though its conclusion is also stronger than ours. The argumentis similar to the one there with small adjustments. We mention the maindifference here and leave the details to the reader. It suffices to prove:P (τn∑i=0g(Si) ≤ c1 log n) ≤ c2n−2.1, (2.4.6)where τn = min{k ≥ 0 : |Sk| ≥ n}.Let N = bn0.9c. Let A be the event that |Xi| ≤ N , for i = 1, 2, . . 
. , 2n2∧τn (where Xi = Si − Si−1). Note that P (Ac)  n−2.1. When A happens,the range of the random walk is bounded by N for the first 2n2 steps. Sinceonly first 2n2 steps are bounded, we need to change the stopping timesthere a bit. Let ξ0 = 0, ξi = min{k : |Sk| ≥ 2iN} ∧ (ξi−1 + (2iN)2), fori = 1, 2, . . . , L, where L = max{k : 2kN ≤ n}  log n. Now (2.4.6) can beobtained by following the argument of the proof of Lemma 10.1.2 (a) in [11].852.4. The critical dimension: d=42.4.2 The visiting probabilityThe main goal of this subsection is to prove Theorem 1.3.12. In this subsec-tion and the next, we fix a finite nonempty subset K ⊂⊂ Z4 and thereforethe corresponding constants may also depend on K.The first step is to construct the following estimate of the Green function:Lemma 2.4.4. For any α ∈ (0, 1/2), we have:limx,y→∞:‖x‖/(log ‖x‖)α≤‖y‖≤‖x‖·(log ‖x‖)αGK(x, y)g(x, y)= 1. (2.4.7)Remark 2.4.2. In supercritical dimensions, we have GK(x, y) ∼ g(x, y)(see Lemma 2.2.4). In the critical dimension, this holds only when x, y arenot too far away from each other, compared with their norms. We will givea more precise asymptotic behavior of GK in next subsection.Proof of Lemma 2.4.4. We use the same idea in the proof for a similar formin supercritical dimension (see Section 2.2.3). Since α < 1/2, we can pickup some β,  > 0, such that + 2α+ 2β < 1. Without loss of generality, weassume ‖y‖ ≥ ‖x‖. Let r = ‖x‖/ logβ ‖x‖ andΓ1 = {γ : x→ y| |γ| ≥ (log ‖x‖)‖x− y‖2};Γ2 = {γ : x→ y|γ visits C(r)}.We just need to check: (when ‖x‖ → ∞)∑γ∈Γ1s(γ)/g(x, y)→ 0;∑γ∈Γ2s(γ)/g(x, y)→ 0;b(γ)/s(γ)→ 1, for any γ : x→ y, /∈ Γ1 ∪ Γ2.The first one follows from Lemma 2.1.2. The second one can be obtained862.4. The critical dimension: d=4by: (let B = C(r)):∑γ∈Γ2s(γ) =∑a∈BHBc(x, a)g(a, y) ∑a∈BHBc(x, a)‖y‖−2=P (Sx visits B) · ‖y‖−2  (r/‖x‖)2‖y‖−2(log ‖x‖)−2β‖x− y‖−2  g(x, y).Note that the estimate of P (Sx visits C(r))  (r/‖x‖)2 is standard, and forthe second last inequality we use ‖y‖ ≥ (‖x‖+ ‖y‖)/2  ‖x− y‖.For the third one, note that in the critical dimension d = 4, by (2.4.2)and (2.2.18), the killing function k(z) = rK(z)  pK(z)  1/(‖z‖2 log ‖z‖).Hence, we have:for any γ : x→ y, /∈ Γ1 ∪ Γ2,b(γ)/s(γ) =|γ|−1∏i=0(1− k(γ(i))) ≥ (1− c/(r2 log r))|γ| ≥ 1− c|γ|/(r2 log r)≥1− c(log ‖x‖)‖y‖2/((‖x‖/ logβ ‖x‖)2(log ‖x‖))≥1− c(log ‖x‖)(‖x‖ logα ‖x‖)2/((‖x‖/ logβ ‖x‖)2(log ‖x‖))≥1− c(log ‖x‖)+2α+2β/ log ‖x‖ → 1.Let N be the number of times of visiting K. We need to estimateE(N |Sx visits K via γ). For any finite path γ, defineN (γ) =|γ|∑i=0N(γ(i))g(γ(i),K); N−(γ) =|γ|−1∑i=0N(γ(i))g(γ(i),K). (2.4.8)By Proposition 2.2.11, we have:E(N |Sx visits K via γ)) = N (γ).872.4. The critical dimension: d=4Hence, we have: ∑γ:x→Kb(γ)N (γ) = g(x,K) ∼ a4|K|‖x‖−2. (2.4.9)The main step is to control the sum of the escape probabilities:Proposition 2.4.5.∑γ:C(2n)\C(n)→K,γ⊆C(n)b(γ) ∼ 4pi2√detQσ21log n. (2.4.10)In order to prove this proposition, we need two lemmas about randomwalks. They are adjusted versions of Lemma 17 and Lemma 18 in [14]. Asbefore, write (Sj)j∈N for the random walk (starting from 0). Let τn be thefirst visiting time of C(n)c by the random walk and h(x) : Z4 → R+ is afixed positive function satisfying h(x) ∼ a4‖x‖−2.Lemma 2.4.6. For p = 1, 2, there exists a constant C(p) (also dependingon h) such that, for every n ≥ 2,E(τn∑j=0h(Sj))p ≤ C(p)(log n)p. (2.4.11).Lemma 2.4.7. For every α, p > 0, there exists a constant Cα(p) (alsodepending on h) such that, for every n ≥ 2, we haveP (|τn∑k=0h(Sj)− 4a4 log n| ≥ α log n) ≤ Cα(p)(log n)−p. 
(2.4.12)In fact, we apply both lemmas for the reversed random walk (thatis, with jump distribution θ−) other than the original random walk. Leth(x) = 2N(x)g(x,K)/(σ2|K|). Recall that N(x) = Eµx ∼ σ2/2 and882.4. The critical dimension: d=4N (γ) = ∑σ2|K|h(γ(i))/2). Hence, for any a ∈ K we have:∑γ:C(n)c→a,γ⊆C(n)s(γ)(N (γ))2  (log n)2,∑γ:C(n)c→a,γ⊆C(n),|N (γ)−2a4|K|σ2 logn|≥α logns(γ) ≤ Cα(p)(log n)−p.By monotonicity and summation, we get:∑γ:C(n)c→K,γ⊆C(n)\Ks(γ)(N (γ))2  (log n)2, (2.4.13)∑γ:C(n)c→K,γ⊆C(n)\K,|N (γ)−2a4|K|σ2 logn|≥α logns(γ)  Cα(p)(log n)−p.(2.4.14)Let us make some comments about the proofs. Lemma 18 in [14] statesthatP (|n∑k=0g(Sj)− 2a4 log n| ≥ α log n) ≤ Cα(log n)−3/2.where g is the Green function. Their argument is to derive an analogousresult for Brownian motion and then to transfer this result to the randomwalk via the strong invariance principle. This argument also works here withsmall adjustments. Note that it is assumed there that the jump distributionθ is symmetric (besides having exponential tail). However if one checks theproof there, one can see that the assumption of symmetry is not needed andg(x) can be replaced by any h(x) satisfying h(x) ∼ a4‖x‖−2 . Moreover,the exponent 3/2 can be replaced by any positive constant p with minormodifications. Combing this with the fact that for any fixed  > 0, P (τn /∈[n2−, n2+]) = o((log n)−p), one can get Lemma 2.4.7.For Lemma 2.4.6, we give a direct proof here:892.4. The critical dimension: d=4Proof of Lemma 2.4.6. For p=1,E(τn∑j=0h(Sj)) ∑z∈C(n)h(z)E(τn∑j=01Sj=z) ∑z∈C(n)|z|−2E(∞∑j=01Sj=z)=∑z∈C(n)|z|−2g(0, z) ∑z∈C(n)|z|−4  log n.For p=2,E(τn∑j=0h(Sj))2  E(∑z∈C(n)h(z)τn∑j=01Sj=z)2  E(∑z∈C(n)|z|−2∞∑j=01Sj=z)2=∑z,w∈C(n)|z|−2|w|−2E(∞∑j=01Sj=z∞∑i=01Si=w).Write Ax =∑∞j=0 1Sj=x and A = Az +Aw. We point out thatE(AzAw)  (|z|−2 + |w|−2)|z − w|−2. (2.4.15)If so, note that∑z,w∈C(n)|z|−2|w|−2(|z|−2 + |w|−2)|z − w|−2∑z,w∈C(n):|z|≤|w||z|−4|w|−2|z − w|−2≤∑w∈C(n)(∑z:|z|≤|w|,|z−w|≥|w|/2+∑z:|z|≤|w|,|z−w|≤|w|/2)|z|−4|w|−2|z − w|−2∑w∈C(n)(∑z:|z|≤|w|,|z−w|≥|w|/2|z|−4|w|−4 +∑z:|z|≤|w|,|z−w|≤|w|/2|w|−6|z − w|−2)∑w∈C(n)((log |w|)|w|−4 + |w|−4)  (log n)2,and then one can get E(∑τnj=0 h(Sj))2  (log n)2. We now only need to show(2.4.15). Without loss of generality, assume z 6= w (the case z = w can be902.4. The critical dimension: d=4addressed similarly with small adjustments). Note thatE(AzAw) ≤ E(A2;Az > 0, Aw > 0) ∑k≥2kP (A ≥ k,Az > 0, Aw > 0).By Markov property, one can see that:P (A ≥ k,Az > 0, Aw > 0) ≤P (Az > 0)((k−1)Pz(Aw > 0)ck−2) +P (Aw > 0)((k−1)Pw(Az > 0)ck−2),where we write Px for the law of random walk starting from x andc = supx 6=y∈Z4Px(Ax +Ay > 1) < 1.Hence, we have:∑k≥2kP (A ≥ k,Az > 0, Aw > 0)≤(∑k≥2k(k − 1)ck−2)(P (Az > 0)Pz(Aw > 0) + P (Aw > 0)Pw(Az > 0))P (Az > 0)Pz(Aw > 0) + P (Aw > 0)Pw(Az > 0)(|z|−2 + |w|−2)|z − w|−2.Proof of Proposition 2.4.5. We first show the following weaker result:Lemma 2.4.8. ∑γ:C(2n)\C(n)→K,γ⊆C(n)b(γ)  (log n)−1, as n→∞. (2.4.16)Proof. By (2.4.2) and (2.2.7), we have:∑γ:x→Kb(γ)  (‖x‖2 log ‖x‖)−1. (2.4.17)912.4. The critical dimension: d=4Pick some x ∈ Zd such that n = b‖x‖(log ‖x‖)−1/4c. Let B = C(n) andB1 = C(2n). By the First-Visit Lemma, we have:∑γ:x→Kb(γ) =∑a∈K∑z∈BcGK(x, z)HBk (z, a) ≥∑a∈K∑z∈B1\BGK(x, z)HBk (z, a)(2.4.7)∑a∈K∑z∈B1\Bg(x, z)HBk (z, a)  ‖x‖−2∑γ:C(2n)\C(n)→K,γ⊆C(n)b(γ).Combining this with (2.4.17) gives (2.4.16).We need to transfer (2.4.9) to the following form:Lemma 2.4.9.limn→∞∑γ:C(2n)\C(n)→K,γ⊆C(n)N (γ)b(γ) = |K|. (2.4.18)Proof of (2.4.18). 
Pick some x ∈ Zd such that n = b‖x‖(log ‖x‖)−1/4c. LetB = C(n) and B1 = C(2n). By decomposing γ at the last step in B, one canget:∑γ:x→Kb(γ)N (γ) =∑z∈Bc∑γ1:x→z∑γ2:z→K,γ2⊆Bb(γ1)b(γ2)(N−(γ1) +N (γ2))=∑z∈Bc∑γ1:x→z∑γ2:z→K,γ2⊆Bb(γ1)b(γ2)N−(γ1)+∑z∈Bc∑γ1:x→z∑γ2:z→K,γ2⊆Bb(γ1)b(γ2)N (γ2)=∑z∈Bc∑γ2:z→K,γ2⊆Bb(γ2)∑γ1:x→zb(γ1)N−(γ1)+∑z∈Bc∑γ2:z→K,γ2⊆Bb(γ2)N (γ2)∑γ1:x→zb(γ1).We argue that the first term is negligible:∑z∈Bc∑γ2:z→K,γ2⊆Bb(γ2)∑γ1:x→zb(γ1)N−(γ1) ‖x‖−2. (2.4.19)922.4. The critical dimension: d=4Note that∑γ:x→zb(γ)N−(γ) ≤∑w∈ZdN(w)g(w,K)∑γ:x→zb(γ)|γ|∑i=01γ(i)=w∑w∈Zd|w|−2∑γ:x→wb(γ)∑γ:w→zb(γ) ≤∑w∈Zd|w|−2g(x,w)g(w, z).In order to estimate the term above, we need the following easy lemmawhose proof we postponeLemma 2.4.10. For any a, b, c ∈ Z4, we have:∑z∈Z4|z − a|−2|z − b|−2|z − c|−2  1 ∨ log(M/m)M2, (2.4.20)where M = max{|a− b|, |b− c|, |c− a|} and m = min{|a− b|, |b− c|, |c− a|}.By this lemma, when z ∈ B1 \ B,∑γ:x→z b(γ)N−(γ)  log(‖x‖/n)‖x‖2 . To-gether with (2.4.16), we have∑z∈B1\B∑γ2:z→K,γ2⊆Bb(γ2)∑γ1:x→zb(γ1)N−(γ1) ‖x‖−2.Also by Lemma 2.4.10 when z ∈ Bc1,∑γ:x→z b(γ)N−(γ)  log(‖x‖)‖x‖2 . On theother hand, by the Overshoot Lemma, we have∑z∈Bc1∑γ2:z→K,γ2⊆B b(γ2) n−4. Hence,∑z∈Bc1∑γ2:z→K,γ2⊆Bb(γ2)∑γ1:x→zb(γ1)N−(γ1) ‖x‖−2.This completes the proof of (2.4.19). Combining (2.4.19) with (2.4.9) gives:∑z∈Bc∑γ2:z→K,γ2⊆Bb(γ2)N (γ2)∑γ1:x→zb(γ1) ∼ a4|K|‖x‖−2.932.4. The critical dimension: d=4Now we aim to show∑z∈Bc1∑γ2:z→K,γ2⊆Bb(γ2)N (γ2)∑γ1:x→zb(γ1) a4|K|‖x‖−2. (2.4.21)If so, then we have:∑z∈B1\B∑γ2:z→K,γ2⊆Bb(γ2)N (γ2)∑γ1:x→zb(γ1) ∼ a4|K|‖x‖−2, (2.4.22)and combining this with Lemma 2.4.4 gives (2.4.18). Since∑γ1:x→z b(γ1) =GK(x, z)  1 and b(γ) ≤ s(γ). It suffices to show:∑γ:Bc1→K,γ⊆Bs(γ)N (γ) ‖x‖−2. (2.4.23)By Cauchy-Schwarz inequality, one can get:∑γ:Bc1→K,γ⊆Bs(γ)N (γ) ≤ (∑γ:Bc1→K,γ⊆Bs(γ))1/2(∑γ:Bc1→K,γ⊆Bs(γ)(N (γ))2)1/2.By the Overshoot Lemma, the first term in the right hand side decays fasterthan any polynomial of n. On the other hand, due to (2.4.13), the sec-ond term in the right hand side is less than log n by a constant multiplier.Combining both gives (2.4.23) and finishes the proof of (2.4.18).Now we are ready to prove Proposition 2.4.5. Fix any small  > 0. Letn = ‖x‖/(log ‖x‖)1/4,Γ = {γ : C(2n) \ C(n)→ K, γ ⊆ C(n) \K},Γ1 = {γ ∈ Γ : |N (γ)− 2a4σ2|K| log n| >  log n} and Γ2 = Γ \ Γ1.By (2.4.14), we have: (when ‖x‖ and hence n are large)∑γ∈Γ1s(γ)  (log n)−4. (2.4.24)942.4. The critical dimension: d=4Hence, we have (when n is large):∑γ∈Γ1b(γ)N (γ) ≤∑γ∈Γ1s(γ)N (γ) ≤ (∑γ∈Γ1s(γ) ·∑γ∈Γ1s(γ)(N (γ))2)1/2(2.4.24),(2.4.13) ((log n)−4(log n)2)1/2 = (log n)−1  |K|.Combing this with (2.4.18) gives:∑γ∈Γ2b(γ)N (γ) ∼ |K|.Hence, we have (when n is large):(1− )|K|(2a4σ2|K|+ ) log n ≤∑γ∈Γ2b(γ) ≤ (1 + )|K|(2a4σ2|K| − ) log n.On the other hand,∑γ∈Γ1 b(γ)  (log n)−1. Let  → 0+, one can getProposition 2.4.5.Proof of Lemma 2.4.10. Without loss of generality, assume m = |a−b|. LetBa = {z : |z−a| ≤ 3m/4}, Bb = {z : |z−b| ≤ 3m/4} and Bc = {z : |z−c| ≤M/4}. Write t = (a + b)/2 and B = {z : |z − t| ≤ 2M}. Then we canestimate separately:∑z∈Ba|z − a|−2|z − b|−2|z − c|−2 ∑z∈Ba1|z − a|2m2M2 1m2M2∑z∈Ba1|z − a|2 m2m2M2≤ 1M2;∑z∈Bb|z − a|−2|z − b|−2|z − c|−2  1M2(similarly);952.4. 
The critical dimension: d=4∑z∈Bc|z − a|−2|z − b|−2|z − c|−2 ∑z∈Ba1|z − c|2M2M2 1M4∑z∈Bc1|z − c|2 M2M4≤ 1M2;∑z∈B\(Ba∪Bb∪Bc)|z − a|−2|z − b|−2|z − c|−2 ∑z∈B\(Ba∪Bb∪Bc)1|z − t|2|z − t|2M2 1M2∑z:m/4≤|z−t|≤2M1|z − t|4 1M2∑:m/4≤n≤2Mn3n4 1 ∨ log(M/m)M2;∑z∈Bc|z−a|−2|z − b|−2|z − c|−2 ∑z∈Bc1|z − t|6 ∑n≥2Mn3n6 1M2.This completes the proof.Now we are ready to prove Theorem 1.3.12.Proof of Theorem 1.3.12. Let n = ‖x‖/(log ‖x‖)1/4, B = C(n), B1 = C(2n)\B and B2 = C(2n)c. As before, by (2.2.7) and the First-Visit Lemma, wehave:P (Sx visits K) =∑γ:x→Kb(γ) =∑b∈BcGK(x, b)∑a∈KHBk (b, a)=∑b∈B1GK(x, b)∑a∈KHBk (b, a) +∑b∈B2GK(x, b)∑a∈KHBk (b, a).We argue that the first term has the desired asymptotics and the second isnegligible:∑b∈B1GK(x, b)∑a∈KHBk (b, a)(2.4.7)∼ a4‖x‖−2∑b∈B1∑a∈KHBk (b, a)(2.4.10)∼ a4‖x‖−2 4pi2√detQσ2 log n∼ 12σ2‖x‖2 log ‖x‖ ;962.4. The critical dimension: d=4∑b∈B2GK(x, b)∑a∈KHBk (b, a) ∑a∈K∑b∈B2HBk (b, a)(2.1.6) |K|n2/n5  1/‖x‖2 log ‖x‖.2.4.3 Convergence of the first visiting pointWe aim to show Theorem 1.3.13. For simplicity, we assume in this subsectionthat θ has finite range. Then, for any subset B ⊂⊂ Z4, we can denote itsouter boundary and inner boundary by:∂oB.= {y /∈ B : ∃x ∈ B, such that θ(x− y) ∨ θ(y − x) > 0};∂iB.= {y ∈ B : ∃x /∈ B, such that θ(x− y) ∨ θ(y − x) > 0}.The first step is to construct the following asymptotical behavior of theGreen function:Lemma 2.4.11.limx,y→∞:‖x‖≥‖y‖GK(x, y)(log ‖y‖/ log ‖x‖)g(x, y) = 1; (2.4.25)Remark 2.4.3. It is a bit unsatisfactory that we need to require ‖x‖ ≥ ‖y‖in the limit. When θ is symmetric, this requirement can be removed sinceGK(x, y)/(1− k(x)) = GK(y, x)/(1− k(y)).Proof. By Lemma 2.4.4, we can assume ‖x‖ ≥ ‖y‖(log ‖y‖)1/4. Let n =‖y‖(log ‖y‖)1/8 and B = C(n). As before, we have:pK(x) = GK(x,K) =∑z∈∂iBHBck (x, z)GK(z,K) =∑z∈∂iBHBck (x, z)pK(z).By Theorem 1.3.12, we get:∑z∈∂iBHBck (x, z) ∼n2 log n‖x‖2 log ‖x‖ . (2.4.26)972.4. The critical dimension: d=4By Lemma 2.4.4, we have GK(z, y) ∼ g(z, y) ∼ a4n−2 for any z ∈ ∂iB.Therefore,GK(x, y) =∑z∈∂iBHBck (x, z)GK(z, y) ∼∑z∈∂iBHBck (x, z)a4n−2∼ a4 log n/(‖x‖2 log ‖x‖) ∼ a4 log ‖y‖/(‖x‖2 log ‖x‖).This finishes the proof.Now we give the following asymptotics of the escape probability by areversed snake.Lemma 2.4.12. For any x ∈ Z4, we have:EK(x) .= limn→∞ log n ·∑z∈∂oC(n)HC(n)k (z, x) exists. (2.4.27)Remark 2.4.4. Note that HC(n)k (z, x) = HC(n)\Kk (z, x) and∑z∈∂oC(n)HC(n)k (z, x) is the probability that a reversed snake starting fromx does not return to K, except for the bush grafted to the root, until thebackbone reaches outside of C(n). For the random walk in critical dimension(d = 2), we also have (e.g. see Section 2.3 in [10]):EK(x).= limn→∞ log n·∑z∈∂oC(n)HC(n)\K(z, x) exists, for any x ∈ Z2,K ⊂⊂ Z2;andlimx→∞P (Sx(τK) = a|Sx visits K) =1pi2√detQEK(a).Proof. We first need to show:limn→∞,y→∞: ‖y‖≤nlog nlog ‖y‖∑z∈∂oC(n)HC(n)k (z, y) = 1. (2.4.28)Choose some x ∈ Z4 such that ‖x‖ ≥ n log n. By the First-Visit Lemma, we982.4. The critical dimension: d=4have:GK(x, y) =∑z∈∂oC(n)GK(x, z)HC(n)k (z, y). (2.4.29)Due to last lemma, GK(x, y) ∼ a4‖x‖−2 · log ‖y‖/ log ‖x‖, GK(x, z) ∼a4‖x‖−2 · log n/ log ‖x‖. Together with (2.4.29), one can get (2.4.28).Now we are ready to show (2.4.27). Without loss of generality, assume‖x‖ > Rad(K). Writea(n) = logn ·∑z∈∂oC(n)HC(n)k (z, x).Note that, for any (large) m > n,∑w∈∂oC(m)HC(m)k (w, x) =∑z∈∂oC(n)HC(n)k (z, x)∑w∈∂oC(m)HC(m)k (w, z).By (2.4.28), we have∑w∈∂oC(m)HC(m)k (w, z) ∼ log n/ logm. 
This impliesa(n)/a(m) ∼ 1 and hence the convergence of a(n).Proof of Theorem 1.3.13. Let n = ‖x‖/ log ‖x‖ and B = C(n). Then,P (Sx(τK) = a|Sx visits K) =∑γ:x→a b(γ)pK(x)∼∑z∈∂oB GK(x, z)HBk (z, a)1/2σ2‖x‖2 log ‖x‖∼ a4‖x‖−2∑z∈∂oBHBk (z, a)1/2σ2‖x‖2 log ‖x‖ ∼a4‖x‖−2EK(a) log−1 n1/2σ2‖x‖2 log ‖x‖∼ 2σ2a4EK(a) = σ2EK(a)4pi2√detQ.2.4.4 The range of branching random walk conditioned onthe total sizeThe main goal of this subsection is to construct the asymptotics of the rangeof the branching random walk conditioned on the total size, i.e. Theorem1.3.14. Our proof of this theorem is based on some ideas from [14]. Espe-992.4. The critical dimension: d=4cially, we need to use the invariant shift on the invariant snake, SI .For the invariant snake SI , recall that its backbone is just a randomwalk with jump distribution θ−. We write τn for the hitting time (vertex)of (C(n))c by the backbone. Thanks to Proposition 2.4.5, we can obtain thefollowing:Proposition 2.4.13.P (SI0 (v) 6= 0, ∀v ≤ τn not on the spine) ∼4pi2√detQσ21log n;P (SI0 (vi) 6= 0, i = 1, 2, . . . , n) ∼16pi2√detQσ21log n;where v1 < v2 < v3 < . . . are all vertices of SI0 that are not on the spine.Proof. By Proposition 2.4.5 (set K = {0}) and the Overshoot Lemma, wehave: ∑γ:(C(n))c→0,γ⊆C(n)b(γ) ∼ 4pi2√detQσ21log n.Hence, the first assertion can be obtained if we can showP (SI0 (v) 6= 0, ∀v ≤ τn not on the spine) ∼∑γ:(C(n))c→0,γ⊆C(n)b(γ).Let p0 = P (S0 does not visit 0 except at the root) and the new killingfunction k′(x) be the probability that S ′x visits to 0 (except possibly for thestarting point). Note that k′(x) = kK(x) when x 6= 0. We write bk′(γ) forthe probability weight of γ with this killing function. Then, we haveP (SI0 (v) 6= 0, ∀v ≤ τn not on the spine) ∼ p0∑γ:(C(n))c→0, γ⊆C(n)bk′(γ)= p0(∑γ:(C(n))c→0, γ⊆C(n)\{0}bk′(γ))(∑γ:0→0, γ⊆C(n)bk′(γ)).1002.4. The critical dimension: d=4Note that limn→∞∑γ:0→0, γ⊆C(n) bk′(γ) =∑γ:0→0 bk′(γ) and∑γ:(C(n))c→0, γ⊆C(n)b(γ) =∑γ:(C(n))c→0, γ⊆C(n)\{0}bk′(γ).Hence, for the first assertion, it is sufficient to show:p0∑γ:0→0bk′(γ) = 1. (2.4.30)Note that this is just (2.2.9) (note that we set x = 0,K = {0}). We finishthe proof of the first assertion. The second assertion is an easy consequenceof the first one, noting that, for any  ∈ (0, 1/4) fixed, P (vn ≤ τbn1/4−c) and,P (vn ≥ τbn1/4+c) are o((log n)−1).Now we can construct the following result about the range of SI :Theorem 2.4.14. Set RIn := #{SI0 (o),SI0 (v1), . . . ,SI0 (vn)} for every integern ≥ 0. We have:log nnRInL2−→ 16pi2√detQσ2as n→∞,where v1, v2, . . . are the same as in Proposition 2.4.13. Hence, we have:log nnRInP−→ 16pi2√detQσ2as n→∞.Remark 2.4.5. Since the typical number of vertices in the spine that comebefore vn is of order√n, which is much less than n/ log n, one can get,log nn#{SI0 (v¯0),SI0 (v¯1), . . . ,SI0 (v¯n)} P−→16pi2√detQσ2as n→∞,where v¯0, v¯1, . . . are all vertices due to the default order in the correspondingplane tree T in SI0 .Proof of Theorem 2.4.14. As mentioned before, we need to use the invariantshift ς on spacial trees, which appeared in [14]. For any spacial tree (T,ST ),set ς(T,ST ) = (T ′,S ′T ′). Roughly speaking, one can get T ′ by ’rerooting’ T1012.4. The critical dimension: d=4at the first vertex that is not in the spine and then removing the verticesthat are strictly before the parent of the new root. For S ′T ′ , just set:S ′T ′(v) = ST (v)− ST (o′), for any v ∈ ς(T ),where o′ is the new root. The key result is that ς is invariant under thelaw of the invariant snake from the origin. 
For more details about this shifttransformation, see Section 2 in [14].Now we start our proof. For simplicity, write vˆ0 = 0(∈ Z4) and vˆi =SI0 (vi). First observe that:E(RIn) = E(n∑i=01{vˆj 6=vˆi,∀j∈[i+1,n]}) =n∑i=0P (vˆj 6= vˆi, ∀j ∈ [i+ 1, n]).From the invariant shift mentioned in the beginning, we haveP (vˆj 6= vˆi,∀j ∈ [i+ 1, n]) = P (vˆj 6= vˆ0,∀j ∈ [1, n− i]).Therefore by Proposition 2.4.13, we getE(RIn) =n∑i=0P (vˆj 6= vˆ0,∀j ∈ [1, n− i]) ∼ 16pi2√detQσ2nlog n. (2.4.31)Now we turn to the second moment. Similarly, we haveE((RIn)2) = E(n∑i=0n∑j=01{vˆk 6=vˆi,∀k∈[i+1,n];vˆl 6=vˆj ,∀l∈[j+1,n]})= 2∑0≤i<j≤nP (vˆk 6= vˆi,∀k ∈ [i+ 1, n]; vˆl 6= vˆj ,∀l ∈ [j + 1, n]) + E(Rn)= 2∑0≤i<j≤nP (vˆk 6= 0, ∀k ∈ [1, n− i]; vˆl 6= vˆj−i,∀l ∈ [j − i+ 1, n− i])+ E(Rn),where the last equality again follows from the invariant shift. For any fixed1022.4. The critical dimension: d=4α ∈ (0, 1/4) defineσn := sup{k ≥ 0 : vk ≤ ubn 12−αc},where u0 ≤ u1 ≤ . . . are the all vertices on the spine. By standard argu-ments, one can showP (σn /∈ [n1−3α, n1−α]) = o(log−2 n).Therefore we havelim supn→∞(log nn)2E((RIn)2) = lim supn→∞2(log nn)2∑0≤i<j≤nP (vˆk 6= 0,∀k ∈ [1, n− i]; vˆl 6= vˆj−i, ∀l ∈ [j − i+ 1, n− i];σn ∈ [n1−3α, n1−α]).Obviously, in order to study the limsup in the right-hand side, we can restrictthe sum to indices i and j such that j − i > n1−α. However, when i and jare fixed and satisfied with j − i > n1−α,P (vˆk 6= 0, ∀k ∈ [1, n− i]; vˆl 6= vˆj−i, ∀l ∈ [j − i+ 1, n− i];σn ∈ [n1−3α, n1−α])≤ P (vˆk 6= 0,∀k ∈ [1, σn]; vˆl 6= vˆj−i,∀l ∈ [j − i+ 1, n− i];σn ∈ [n1−3α, n1−α])= P (vˆk 6= 0,∀k ∈ [1, σn];σn ∈ [n1−3α, n1−α])P (vˆl 6= vˆj−i,∀l ∈ [j − i+ 1, n− i])= P (vˆk 6= 0,∀k ∈ [1, σn];σn ∈ [n1−3α, n1−α])P (vˆl 6= 0,∀l ∈ [1, n− j]).Note that for the second last line, we use the fact that after conditioning onσn = m(< n1−α), the event on the second probability is independent to theevent on the first one, and for the last line, we use the invariant shift. Now,P (vˆk 6= 0,∀k ∈ [1, σn];σn ∈ [n1−3α, n1−α]) ≤ P (vˆk 6= 0,∀k ∈ [1, n1−3α]),1032.4. The critical dimension: d=4and then we havelim supn→∞(log nn)2E((RIn)2) ≤ lim supn→∞2(log nn)2·∑0≤i<j≤n,j−i>n1−αP (vˆk 6= 0,∀k ∈ [1, n1−3α])P (vˆl 6= 0, ∀l ∈ [1, n− j])=11− 3α(16pi2√detQσ2)2.Let α→ 0+, we getlim supn→∞(log nn)2E((RIn)2) ≤ (16pi2√detQσ2)2.Combining this with (2.4.31), we finish the proof of Theorem 2.4.14.Noting that S−0 is different to SI0 only at the subtree grafted to the root,one can also obtain the range of the infinite snake S−:Corollary 2.4.15. Set R−n := #{S−0 (v0),S−0 (v1), . . . ,S−0 (vn)}. Then,log nnR−nP−→ 16pi2√detQσ2as n→∞,where v0, v1, . . . are all vertices of the corresponding plane tree due to thedefault order in the reversed snake.Now we are ready to prove our main result about the range of branchingrandom walk conditioned on the total size. This result will follow fromCorollary 2.4.15 by an absolute continuity argument, which is similar to theone in the proof of Theorem 7 in [14]. The idea is as follows. We write Ξfor the law of the µ-GW tree. For every a ∈ (0, 1), the law under Ξn :=Ξ(·|#T = n) of the subtree obtained by keeping only the first banc vertices ofT is absolutely continuous with respect to the law under Ξ∞(·) := Ξ(·|#T =∞) of the same subtree, with a density that is bounded independently of n.Then a similar property holds for spatial trees, and hence we can use theconvergence in Corollary 2.4.15, for a tree distributed according to Ξ∞, toget a similar convergence for a tree distributed according to Ξn.1042.4. 
The critical dimension: d=4Proof of Theorem 1.3.14. Let G be the smallest subgroup of Z that containsthe support of µ. In fact, the cardinality of the vertex set of a µ-GW treebelongs to 1 + G. For simplicity, we assume in the proof that G = Z. Minormodifications are needed for the general case. On the other hand, for anysufficiently large integer n ∈ 1+G, we can define the conditional probabilitySn to be S0 conditioned on the total number of vertices being n (this eventis with strictly positive probability).For a finite plane tree T , write v0(T ), v1(T ), . . . , v#T−1(T ) for the verticesof T by the default order. The Lukasiewisz path of T is then the finitesequence (Xl(T ), 0 ≤ l ≤ #T ), which can be defined inductively byX0(T ) = 0, Xl+1 −Xl = kvl(T )(T )− 1, for every 0 ≤ l < #T,where ku(T )(for u ∈ T ) is the number of children of u. The tree T isdetermined by its Lukasiewisz path. A key result says that under Ξ, theLukasiewisz path is distributed as a random walk on Z with jump distribu-tion ν determined by ν(j) = µ(j + 1) for any j ≥ −1, which starts from0 and is stopped at the hitting time of −1 (in particular, the law or #Tcoincides with the law of that hitting time). For notational convenience, welet (Yk)k≥0 be a random walk on Z with jump distribution ν, which startsform i under P(i), and setτ := inf{k ≥ 0 : Yk ≤ −1}.We can also do this for infinite trees. When T ia an infinite tree withonly one infinite ray, now the Depth-First search sequence o = v0 < v1 <v2 < · · · < vn < . . . only examines part of the vertex set of T . We could alsodefine the Lukasiewisz path of T to be the infinite sequence (Xi(T ), i ∈ N):X0(T ) = 0, Xl+1 −Xl = kvl(T )(T )− 1, for every l ∈ N.Now, only the ’left half’ of T (precisely, the subtree generated by v0, v1, ...),not the whole tree T , is determined by its Lukasiewisz path. It is notdifficult to verify that when T is a µ-GW tree conditioned on survival, its1052.4. The critical dimension: d=4Lukasiewisz path is distributed as the random walk on the last paragraphconditioned on τ =∞, i.e, a Markov chain on N with transition probabilityp(i, j) = j+1i+1 ν(j− i). Recall that the infinite µ-GW tree is just the ’left half’of the µ-GW tree conditioned on survival.Next, take n large enough such that Ξ(#T = n) > 0. Fix a ∈ (0, 1), andconsider a tree (finite or infinite) T with #T ≥ n. Then, the collection ofvertices v0(T ), . . . , vbanc(T ) forms a subtree of T (because in the Depth-Firstsearch order the parent of a vertex comes before this vertex), and we denotethis tree by ρbanc(T ). It is elementary to see that ρbanc(T ) is determined bythe sequence (Xl(T ), 0 ≤ l ≤ banc). Let f be a bounded function on Zbanc.One can verify thatΞn(f((Xk)0≤k≤banc)) =1P(0)(τ = n+ 1)Ξ∞(f((Xk)0≤k≤banc)ψn(Xbanc)Xbanc + 1),(2.4.32)where for every j ∈ N , ψn(j) = P(j)(τ = n+ 1− banc).We now let n → ∞. Using Kemperman’s formula and a standard locallimit theorem, one can get,limn→∞(supj∈An| ψn(j)P(0)(τ = n+ 1)(j + 1)− Γa( jσ√n)|)= 0, (2.4.33)where Γa(x) = exp(− x22(1−a))/(1 − a)32 and An := {i ∈ N : P(i)(τ = n +1 − banc) > 0}. By combining (2.4.32) and (2.4.33), we get that, for anyuniformly bounded sequence of functions (fn)n≥1 on Zbanc+1, we havelimn→∞ |Ξn(fn((Xk)0≤k≤banc))− Ξ∞(fn((Xk)0≤k≤banc)Γa(Xbancσ√n))| = 0.Clearly, the above still holds after we add the spatial random mechanism.Therefore, when  > 0 is fixed, we havelimn→∞ |Ξnθ (1{|Rbanc−tan/ logn|>n/ logn})−Ξ∞θ (1{|Rbanc−tan/ logn|>n/ logn}Γa(Xbancσ√n))| = 0,1062.4. 
The critical dimension: d=4where t = 16pi2√detQσ2, Ξnθ , Ξ∞θ are the laws of the corresponding tree-indexedrandom walks, and Rbanc is the range of the subtree ρbanc(T ). Note that thefunction Γa is bounded and under Ξ∞θ , Rbanc is just the range of S∞0 for thefirst banc vertices. Hence, by Corollary 2.4.15 (note that S−0 = S∞0 since weassume that θ is symmetry), we obtain thatlimn→∞Ξnθ (1{|Rbanc−tan/ logn|>n/ logn}) = 0.Note that Rn ≥ Rbanc (under Ξnθ ) and a can be chosen arbitrarily close to1, this finishes the proof of the lower bound.We also need to show the upper bound. Note that ρbanc(T ) is the subtreelying on the ’left’ side, generated by the first banc vertices of T . Similarly,one can consider the subtree lying on the ’right’ side. Strictly speaking, toget the subtree lying on the right side, denoted by ρ−banc(T ), we first reversethe order of children for each vertex in T , and then ρbanc of the same tree Twith the new order is just ρ−banc(T ). Write R−banc for the range of ρ−banc(T )corresponding to Sn. By symmetry, we also havelimn→∞Ξnθ (1{|R−banc−tan/ logn|>n/ logn}) = 0.Now fix some a ∈ (0, 1). Note that ρbanc(T ) and ρ−b(1−a)nc(T ) cover thewhole tree T except for a number of vertices. This number is not morethan |ρbanc(T ) ∩ ρ−b(1−a)nc(T )| + 2. Note that on each generation, thereis at most one vertex that is in both ρbanc(T ) and ρ−b(1−a)nc(T ). Hence|ρbanc(T )∩ ρ−b(1−a)nc(T )| is not more than the number of generations, whichis typically of order√n (under Ξn). Hence, Rn−(Rbanc(Sn)+R−b(1−a)nc(Sn))is less than n0.6 with high probability (tending to 1). This finishes the proofof the upper bound.107Chapter 3Branching interlacements3.1 Preliminaries3.1.1 Plane trees, contour function and branching randomwalkWe are interested in (finite or infinite) rooted ordered trees, called planetrees. A rooted tree t is a tree with a distinguished vertex o called the root.t can be regarded as a family tree with ancestor o. A plane tree is a rootedtree in which an ordering for the children of each vertex is specified. Thesize |t| is the number of edges of t. We denote by A the set of all finiteplane trees and by An the set of all plane trees with n ∈ N edges.Let t be a plane tree and k ∈ N, we write [t]k for the subtree obtained bykeeping only the first k generations of t. Let T be a GW (Galton-Watson)tree with geometric offspring distribution of parameter 1/2 (throughout thischapter our GW tree will always be with this offspring distribution). Itis classical that the distribution of T conditioned on having n edges is theuniform probability measure on An. The following result is also standard(e.g. see [1]):Proposition 3.1.1. Let Tn be uniform on An. Then there exists a randominfinite plane tree T∞ such that for every k ∈ N we have[Tn]kd−→ [T∞]k, as n→∞. (3.1.1)Moreover, this random infinite plane tree T∞, called the critical Galton-Watson tree conditioned to survive, can be constructed in the following way:begin with a semi-infinite line of vertices called the spine and graft to the1083.1. Preliminariesleft and to the right of each vertex in the spine an independent GW tree. Itis rooted at the first vertex in the spine.A nice way to code plane trees is the so-called contour function. Assumet is a plane tree with k edges. Let v0 be the root of t. Define vi to be thefirst unexplored child of vi−1 if vi−1 has such children, or the parent of vi ifnot, for i = 1, . . . , 2k. Let C(i) be the tree distance between the root andvi. 
Then (C(i))i∈{0,...,2k} is the contour function of t.For k ∈ N, a Dyck path of length 2k is a sequence (s0, s1, ..., s2k) of inte-gers such that s0 = s2k = 0, si ≥ 0 and |si−si−1| = 1, for every i = 1, ..., 2k.If t is a plane tree of size k, then its contour function (C(0), C(1), . . . , C(2k))is a Dyck path of length 2k. Moreover, we have (e.g. see the lecture notes[15])Proposition 3.1.2. The mapping t→ (C(0), C(1), . . . , C(2k)) is a bijectionfrom Ak onto the set of all Dyck paths of length 2k. Therefore, the contourfunction of a GW tree conditioned on having k edges is uniform on all Dyckpaths of length 2k.There is a similar result for the unconditioned GW tree. Assume S =(Sn)n∈N is simple random walk on Z (starting from 0). Let τ = inf{n ∈N : Sn = −1} < ∞ a.s. Then, the distribution of the contour function of aunconditioned GW tree is the same as (Si)0≤i≤τ−1.Now we introduce the simple random walk in Zd indexed by a randomplane tree T . Conditionally on T we assign independently to each edge ofT a variable uniform on all unit vectors in Zd. Then for every vertex vin T , we assign to v the sum of the variables of all edges belonging to theunique simple path from the root o to the vertex v. This gives a randomfunction ST : T → Zd from the vertices of T to the vertices of Zd (note thatST (o) = 0). A plane tree T together with this random function ST is calleda spatial tree. When T is an unconditioned GW tree, a GW tree conditionedon having n edges or a GW tree conditioned to survive, the spatial tree iscalled finite branching random walk, branching random walk conditioned tohave n progeny or branching random walk conditioned to survive. WhenT = T∞, we can talk about recurrence and transience. If |S−1T∞(0)| <∞ a.s.,1093.1. Preliminarieswe say that the branching random walk conditioned to survive is transient.If |S−1T∞(0)| = ∞ a.s., we say that it is recurrent. About recurrence andtransience, we have (see [3] or see Corollary 1.3.10 and Proposition 2.3.4) :Proposition 3.1.3. Branching random walk on Zd conditioned to surviveis transient if and only if d > 4.3.1.2 Some results on simple random walkLet us now collect some facts about random walks for later use. We use C,c to denote positive constants, depending only on dimension d, which maychange from line to line. If a constant depends on some other variable, thiswill be made explicit. We use a ∨ b and a ∧ b for max{a, b} and min{a, b}respectively. We will write f  g (f  g resp.), if there exists a positiveconstant C (depending on dimension only), such that f ≤ Cg (f ≥ Cgresp.) and write f  g if f  g and f  g. For x ∈ Zd, we write Px (just inthis subsection) for the law of simple random walk (Zn)n≥0 on Zd startingat Z0 = x. Define:pn(x) = P0[Zn = x]; p¯n(x) := 2(d/2pin)d/2 exp(−d|x|22/2n), (3.1.2)where we write | · |2 for the Euclidean norm (and reserve | · | for the∞-norm).Then we have the so-called Local Central Limit Theorem (LCLT) (e.g. seeChapter 1.2 in [10]):Proposition 3.1.4. For x ∈ Zd, we havepn(x)  n−d/2. (3.1.3)If δ < 2/3 and |z| ≤ nδ such that z and n have the same parity, then wehavepn(z) = p¯n(z)(1 +O(n3δ−2)). (3.1.4)The next proposition follows from an application of the Azuma-Hoeffdinginequality (e.g. Proposition 2.1.2 [11]).1103.1. PreliminariesProposition 3.1.5. There exist positive C and c, such that for all n ands > 0,P0[ max0≤j≤n|Zj | ≥ s√n] ≤ C exp(−cs2). (3.1.5)The Green function of simple random walk on Zd is defined byG(x, y) =∞∑n=0Px[Zn = y] =∞∑n=0pn(y − x), x, y ∈ Zd. 
(3.1.6)Using LCLT, one can get the standard estimate for the Green function(for d ≥ 3)G(x, y) =∞∑n=0pn(y − x)  (|x− y| ∨ 1)2−d. (3.1.7)Using the same method, one can also get (for d ≥ 5):∞∑n=0n · pn(y − x)  (|x− y| ∨ 1)4−d. (3.1.8)We are particularly interested in one-dimensional simple random walk.The following is a special case of Kemperman’s formula (Lemma 2.12 in[15]).Proposition 3.1.6. Let τ be the hitting time of −1. We have, for anyk ∈ N and n ∈ N+,Pk[τ = n] =k + 1nPk[Sn = −1], (3.1.9)where Pk is the probability measure under which the simple random walk Sstarts from k.We will also use the so-called heat kernel bound (Lemma 2.1 [7]):Proposition 3.1.7. There exists positive C and c, such that for all n ∈ N+and k ∈ N, (P0 has the same meaning as in last proposition)P0[Sn = k] ≤ Cn−1/2 exp(−ck2/n). (3.1.10)1113.2. Basic model and some first properties3.2 Basic model and some first propertiesIn this section we give the definition of branching interlacements at level u asthe range of a countable collection of doubly-infinite trajectories in Zd. Aswe mentioned before, the model of branching interlacements is an analogousmodel to random interlacements. Many definitions here are similar or eventhe same as in [21]. The collection of doubly-infinite trajectories will arisefrom a certain Poisson point process, called the branching interlacementspoint process. The main task is to construct the intensity measure of thisPoisson point process.3.2.1 NotationsWe denote with | · |2 and | · | the Euclidean and ∞-norm on Zd. We writeBx(r) and Sx(r) for the closed | · |-ball and | · |-sphere with center x in Zdand radius r ≥ 0. We say that x, y in Zd are neighbors (denoted by x ∼ y),respectively *-neighbors, if |x−y|2 = 1, respectively |x−y| = 1. The notionof nearest neighbor or *-nearest neighbor paths in Zd is defined accordingly.For a subset K of Zd, we define∂oK := {x ∈ Zd\K : ∃y ∈ K such that x ∼ y} (3.2.1)its external boundary and∂iK := {x ∈ K : ∃y ∈ Zd\K such that x ∼ y} (3.2.2)its internal boundary. We consider W and W+ the space of 2-sided and1-sided nearest neighbor transient trajectories on Zd:W = {w : Z→ Zd; lim|n|→∞|w(n)| =∞ and w(n+ 1) ∼ w(n),∀n ∈ Z},(3.2.3)W+ = {w : N→ Zd; limn→∞ |w(n)| =∞ and w(n+ 1) ∼ w(n),∀n ∈ N}.(3.2.4)1123.2. Basic model and some first propertiesIf w = (w(n))n∈Z ∈ W , we define w+ ∈ W+ to be the part of w which isindexed by nonnegative coordinates, i.e., w+ = (w(n))n∈N. We denote byW, the product σ-algebra on W generated by coordinates, and by W+ theproduct σ-algebra on W+. For w ∈ W or W+ and x ∈ Zd, we denote thespace translation by w + x, i.e. (w + x)(n) = w(n) + x. We define the shiftoperators θk : W →W , k ∈ Z and θk : W+ →W+, k ∈ N by(θk(w))(n) = w(n+ k). (3.2.5)Next we will define the space (W ∗,W∗), which will play an important role inour construction of branching interlacement. Define the set of paths modulotime-shift by W ∗ = W/ ', where ' is the equivalence relationw1 ' w2, if θk(w1) = w2 for some k ∈ Z. (3.2.6)Denote the canonical projection by pi : W → W ∗ which sends each elementin W to its equivalence class in W ∗. We endow W ∗ with the shift invariantσ-field:W∗ = {A ⊆W ∗ : pi−1(A) ∈ W}. (3.2.7)For any finite subset K of Zd (we will write K ⊂⊂ Zd for this), define:WK ={w ∈W : w(n) ∈ K, for some n ∈ Z} and W ∗K = pi(WK),(3.2.8)WK+ = {w ∈W+ : w(n) ∈ K, for some n ∈ N}. (3.2.9)It follows from (3.2.3) that, for any trajectory w ∈ WK or WK+, the set{n : w(n) ∈ K} is finite. Hence, we can define the ‘entrance time’:HK(w) = inf{n : w(n) ∈ K}. 
(3.2.10)Thus HK(w) < ∞ if w ∈ WK or WK+. We can partition WK according to1133.2. Basic model and some first propertiesthe time of the first entrance:WK =⋃n∈ZWnK , where WnK = {w ∈WK : HK(w) = n}. (3.2.11)We define tK : WK → W 0K , respectively t∗K : W ∗K → W 0K with tK(w) = w0,respectively t∗K(w∗) = w0, where w0 is the unique element w0 in W 0K withw0 ' w, respectively pi(w0) = w∗. Also we can define t∗K+ : W ∗K →W+ witht∗K+(w) = (t∗K(w))+.3.2.2 Simple random walk as a contour function and snakesOur goal is to construct a σ-finite measure on (W ∗,W∗). Before doingthis, we need to introduce the finite measure QK for every K ⊂⊂ Zd on(W,W). The first step is to build a random matching on E(Z) = {ei =(i − 1, i); i ∈ Z}, the set of all edges of the lattice Z. Let S = (Si)i∈Zbe 1-dimensional two-sided simple random walk. Then S almost surelydetermines a matching of E(Z), or more precisely, a bijection fS betweenthe set of upsteps M(S) = {ei : Si − Si−1 = +1} and the set of downstepsN(S) = {ei : Si − Si−1 = −1}:fS(ek) = el if and only ifk < l, Sk − Sk−1 = 1, Sl − Sl−1 = −1, & Sk = Sl−1 = mink≤n≤l−1 Sn.(3.2.12)Remark 3.2.1. If we glue edges through the matching, the resulting quotientof the graph Z (rooted at 0) becomes an infinite plane tree. Precisely, for anyx, y ∈ Z, let d(x, y) = Sx + Sy − 2 min{St : t ∈ [x, y]}. If we identify x andy when d(x, y) = 0, then under this equivalence, the quotient space with themetric d is an infinite plane tree. One can check that using the descriptionafter Proposition 3.1.1 , this tree is just T∞ and S is just its contour functionif we let the spine go downwards and the finite trees attached to the spinegrow upwards as usual. Note that for a finite tree, since we place the root atthe bottom, its contour function is always non-negative. But here we let thespine go downwards hence the contour function can be negative.1143.2. Basic model and some first propertiesFigure 3.1: Construction of 2-sided infinite snakeNow we combine the contour function and the simple random walk in-dexed by a random tree. At the moment we have a mapping from Z toT∞ along the contour. A transient mapping from T∞ to Zd can be writtenas a trajectory X ∈ W with the property that if n, n′ ∈ Z correspond tothe same vertex of T∞ then Xn = Xn′ . We shall denote the incrementsof such a mapping by Yn = Xn − Xn−1. Since in the contour explorationof the tree, each edge is crossed once in each direction, the correspondingincrement variables should be opposite to each other, and otherwise theyare independent.Definition 3.2.1. Let S = (Sn)n∈Z be two-sided simple random walk and fSbe the corresponding matching. For each upstep en of S, where Sn = Sn−1+1let Yn be a uniform unit vector in Zd, all independent. For any downstepem = (m−1,m) let Ym = −Yn, where n is such that fS(en) = em (see figure.)The starting point X0 = 0 and relation Yn = Xn −Xn−1 determine Xn forall n. The trajectory (X)n∈Z is called the 2-sided infinite snake. Thehalf process (X)n∈N is called the (1-sided) infinite snake. In this section,we write P0 for the law of X. If the starting point is x (i.e. X0 = x), weuse Px.Claim 3.2.2. Since S is invariant under time-reversal and Y is symmetric,we can see that X is also invariant under time-reversal, i.e. (Xn)n∈Z and(X−n)n∈Z have the same distribution. Similarly, one can see that the law ofthe increment sequence Y is also invariant under time shift.Remark 3.2.2. X is not a branching random walk, but the contour functionof the branching random walk conditioned to survive. 
We primarily focus1153.2. Basic model and some first propertieson the range of X, which has the same distribution as the range of thecorresponding branching random walk. The reason for the introduction ofsnakes is that the contour function provides us a nice way to code branchingrandom walk. As we mentioned in last section, this branching random walkis transient if and only if d ≥ 5. Hence for d ≥ 5, Px is indeed a probabilitymeasure supported on W . This is why we assume d ≥ 5 throughout thischapter.The finite snake is defined similarly, as follows. For the simple randomwalk S, set τ = inf{n ≥ 0 : Sn = −1} < ∞ a.s. . The finite path{S0, S1, . . . , Sτ−1} is called an excursion of the simple random walk. We canalso define the matching on the finite edge set {ei = (i − 1, i) : 1 ≤ i ≤τ −1}, in the same way as before. Then we may also define (Yi)1≤i≤τ−1 and(Xi)0≤i≤τ−1 as before. This finite process X is called the finite snake and isthe contour function of the unconditioned branching random walk.Note: the process X is just the restriction of the infinite snake X to therandom time interval [0, τ − 1]. It is possible that τ = 1, in which case thebranching random walk dies immediately and its image is the single pointX0.If we condition on τ = 2L+ 1, then (Xi)0≤i≤2L is called the snake con-ditioned to have length 2L and is the contour function of the branchingrandom walk conditioned to have L progeny.3.2.3 Construction of the branching interlacement intensitymeasureOnce we have the definition of Px (see Definition 3.2.1), we can define ameasure QK on (W,W) for any K ⊂⊂ Zd. In fact, QK will be supportedon W 0K (see (3.2.11)). For any K ⊂⊂ Zd and A ∈ W, define:QK(A) =∑x∈KPx[A ∩W 0K ]. (3.2.13)Note that since Px is a probability measure, QK(A) ≤ |K|, so QK is afinite measure.1163.2. Basic model and some first propertiesFor different K ⊆ K ′ ⊂⊂ Zd, QK and QK′ are consistent in the followingsense:Proposition 3.2.3. For any A ∈ W∗, and K ⊆ K ′ ⊂⊂ Zd, we have:QK(pi−1(A) ∩WK) = QK′(pi−1(A) ∩WK). (3.2.14)With the help of this proposition, we can define a measure on (W ∗,W∗):Theorem 3.2.4. There exists a unique σ-finite measure ν on (W ∗,W∗)which satisfies: for all K ⊂⊂ Zd1{W ∗K} · ν = pi ◦QK . (3.2.15)Proof of Proposition 3.2.3. Write B = tK(pi−1(A) ∩WK). Since QK (QK′resp.) is supported on W 0K (W0K′ resp.), we have:QK(pi−1(A) ∩WK) = QK(B) and QK′(pi−1(A) ∩WK) = QK′(tK′(B))(3.2.16)So, we need to prove:QK(B) = QK′(tK′(B)). (3.2.17)We partition W 0K according to the hitting time and hitting point of Kand K ′. For any x ∈ K, y ∈ K ′ and n ∈ Z− = {0,−1,−2, ...}, define:Ax,n,y = {w ∈W : w(0) = x,HK(w) = 0, w(n) = y,HK′(w) = n}. (3.2.18)On Ax,n,y, tK′ is injective, tK′(w)(•) = w(•+ n) and:tK′(Ax,n,y) = {w ∈W : w(0) = y,HK(w) = −n,w(−n) = x,HK′(w) = 0}.Let Bx,n,y = B ∩Ax,n,y. Then B has a countable partition:B =⋃x∈K,y∈K′,n∈Z−Bx,n,y. (3.2.19)1173.2. Basic model and some first propertiesIn order to show (3.2.17), it is enough to prove:QK(Bx,n,y) = QK′(tK′(Bx,n,y)). (3.2.20)By definition of QK (see (3.2.13)), the left hand side is:QK(Bx,n,y) = Px(Bx,n,y) = Px[Xn = y,HK(X) = 0, HK′(X) = n,X0 = x](∗)= Py[X−n = x,HK(X) = −n,HK′(X) = 0, X0 = y] = QK′(tK′(Bx,n,y))Since {Xn = y,HK(X) = 0, HK′(X) = n,X0 = x} is the translation of{X−n = x,HK(X) = −n,HK′(X) = 0, X0 = y} by n, (∗) is due to thetranslation invariance of Y (see Claim 3.2.2).Proof of Theorem 3.2.4. Uniqueness is obvious by (3.2.15).For the existence of ν, fix a sequence K1 ⊆ K2 ⊆ . . . 
converging to Zd,define: ν(A) = limn→∞QKn((pi)−1(A ∩W ∗Kn)) (This sequence is increasingand hence the limit exists). We just need to check that ν does not dependon the choice of the sequence. The following is enough: if K ⊆ K ′ ⊂⊂ Zdand A ∈ W∗, A ⊆W ∗K ⊆W ∗K′ , thenQK′(pi−1(A)) = QK(pi−1(A)). (3.2.21)Note that A ⊆W ∗K , so pi−1(A)∩WK = pi−1(A). The equality above is whatProposition 3.2.3 tells us.One can easily check, by definition, the following proposition, which westate here for future use:Proposition 3.2.5. 1. ν is invariant under the time inversion: w∗ →wˇ∗, where wˇ∗ = pi(wˇ), with pi(w) = w∗ and wˇ(n) = w(−n), for n ∈ Z;2. ν is invariant under spatial translations: w∗ → w∗ + x, x ∈ Zd, wherew∗ + x = pi(w + x), with pi(w) = w∗.Given K ⊂⊂ Zd, we define the escape probability, similarly to the anal-1183.2. Basic model and some first propertiesogous notion for simple random walks.eK(x) :=Px[HK(X) = 0] = 1{x∈K} · Px[∪n<0{Xn} ∩K = ∅] (3.2.22)=1{x∈K} · Px[∪n>0{Xn} ∩K = ∅]. (3.2.23)The last equality is due to the fact that the law of X is invariant undertime-reversal by Claim 3.2.2. Note that eK is supported on ∂iK. We writeP(x,K) for the restriction to (W+,W+) of Px(·|HK(X) = 0), and write PeKfor the normalized measure:PeK =1∑x∈Supp(eK) eK(x)∑x∈Supp(eK)eK(x)P(x,K). (3.2.24)It is straightforward to check that (see the end of Section 3.2.1 for thedefinition of t∗K+): ∑x∈Supp(eK)eK(x)P(x,K) = t∗K+ ◦ (1{W ∗K}ν). (3.2.25)Remark 3.2.3. In fact, P(x,K) is the law of the positive part of a infinite2-sided snake starting from x, conditioned on its negative part avoiding K.The positive part and negative part are only related at the spine. Hence,compared to Px (restricted to (W+,W+)), P(x,K) just changes the law ofthe spatial spine, not the law of the spatial trees grafted through the spine.Moreover, under P(x,K), the spatial spine is a biased Random walk on Zd.The transition probability of this biased random walk can be expressed asfollows: for x ∼ y, the transition probability p(x, y) = Py[(X)n≤0 ∩ K =∅]/∑z∼x Pz[(X)n≤0 ∩K = ∅].We now define the branching capacity by:BCap(K) := ν(W ∗K). (3.2.26)Analogously to the standard capacity, branching capacity is the total1193.2. Basic model and some first propertiesmass of escape probability:BCap(K) = ν(W ∗K) = QK(WK) = QK(W0K)=∑x∈KPx[HK(X) = 0](3.2.22)=∑x∈KeK(x).Remark 3.2.4. The definition of branching capacity in this chapter is abit different to the one in previous chapters. In fact, one can verify thateK(x) = EsK(x)/2. Therefore, K’s branching capacity here equals half ofthe branching capacity before.Also, branching capacity is monotone and subaddictive:Proposition 3.2.6. For any K ⊂⊂ K ′ ⊂⊂ Zd,BCap(K) ≤ BCap(K ′); (3.2.27)For any K1,K2 ⊂⊂ Zd,BCap(K1 ∪K2) ≤ BCap(K1) + BCap(K2). (3.2.28)Proof. For monotonicity, assume K ⊂⊂ K ′ ⊂⊂ Zd. Any trajectory hittingK must hit K ′. Hence W ∗K ⊆ W ∗K′ . Therefore BCap(K) = ν(W ∗K) ≤ν(W ∗K′) = BCap(K′).Similarly, any trajectory hitting K1 ∪ K2 must hit either K1 or K2.Hence W ∗K1∪K2 ⊆ W ∗K1 ∪W ∗K2 . Therefore BCap(K1 ∪K2) = ν(W ∗K1∪K2) =ν(W ∗K1 ∪W ∗K2) ≤ ν(W ∗K1) + ν(W ∗K2) = BCap(K1) + BCap(K2).3.2.4 Branching interlacement point processWe further need to introduce the space of locally finite point measures onW ∗:Ω :={ω =∑n≥0δw∗n , where w∗n ∈W ∗, n ≥ 0and ω(W ∗K) <∞ for any K ⊂⊂ Zd}(3.2.29)1203.2. Basic model and some first propertiesWe endow W ∗ with the σ-algebra A generated by the evaluation mapsof formω 7→ ω(D) =∑n≥01[w∗n ∈ D], if ω =∑n≥0δw∗n , D ∈ W∗. 
(3.2.30)For any u ∈ R+, the probability space of the branching interlacement Pois-son point process (PPP) at level u is (Ω,A,Pu), whereω =∑n≥0δw∗n is a PPP with intesity measure u · ν on W ∗ under Pu,(3.2.31)where ν is defined in Theorem 3.2.4.Up to now, we have constructed the branching interlacement point pro-cess. In addition, we would like to introduce some relative PPP on W+.Consider the space of countable point measures on W+:M :={µ =∑i∈Iδwi , where wi ∈W+, I is a finite or infinite subset of N}(3.2.32)endowed with the canonical σ-fields M, i.e. generated by the evaluationmaps.For K ⊂⊂ Zd define µK and ΘK in the following way: if ω =∑n≥0 δw∗n ∈Ω, then µK(ω) =∑n≥0 δt∗K+(w∗n)1{w∗n ∈W ∗K}; if µ =∑i∈I δwi , then Θ(µ) =∑i∈I 1{HK(wi) <∞}δθHK (w). In words: in µK(ω) (or Θ(µ)) we only collectthe trajectories from ω (or µ) which hit the set K, and keep the part of eachtrajectory which comes after hitting K, and reparameterize the time of thetrajectories in a way such that the hitting time of K is 0. We record herethe straightforward identities valid for K ⊆ K ′ ⊂⊂ Zd:ΘK ◦ µK′ = µK ; (3.2.33)ΘK ◦ΘK′ = ΘK . (3.2.34)We have built the Poisson point measure Pu on (Ω,A) with intensity u ·ν1213.2. Basic model and some first properties(see (3.2.31)). Since PeK is a measure on (W+,W+), we can also realize on(M,M) a Poisson point measure with intensity u · PeK . We denote the lawof this Poisson point measure by PuK . Given ω =∑i≥0 δw∗i , we write:ωˇ =∑i≥0δwˇ∗i ∈ Ω; ϑx(ω) =∑i≥0δw∗i−x ∈ Ω. (3.2.35)The following follows from (3.2.25), (3.2.33) and Proposition 3.2.5:Proposition 3.2.7. For any K ⊆ K ′ ⊂⊂ Zd, u ∈ [0,∞) and x ∈ Zd, wehave:1. PuK is the law of µK under Pu;2. ΘK ◦ PuK′ = PuK ;3. Pu is invariant under ω → ωˇ;4. Pu is invariant under ϑx.We can now define the branching interlacement at level u:Definition 3.2.8. Branching interlacement at level u is defined to be therandom subset of Zd given byI = I(ω) :=⋃n≥0Range(w∗n), where ω =∑n≥0δw∗n has law Pu, (3.2.36)where for w∗ ∈ W ∗, Range(w∗) = w(Z), for any w ∈ W with pi(w) = w∗.The vacant set of branching interlacement at level u is defined byV = V(ω) := Zd \ I(ω). (3.2.37)Sometimes we use Iu and Vu instead of I and V to emphasize the de-pendence of u. Note that in view of (3.2.33), we have:I(ω) ∩K =⋃w∈SuppµK′ (ω)w(N) ∩K, (3.2.38)for any K ⊂⊂ K ′ ⊂⊂ Zd.1223.3. Branching random walk on the torus and branching interlacementsProposition 3.2.9. For any u ≥ 0 and K ⊂⊂ Zd, we have:I(ω) ∩K 6= ∅ ⇔ µK(ω) 6= 0; (3.2.39)Pu[K ⊆ V(ω)] = exp(−uBCap(K)). (3.2.40)Proof. (3.2.39) follows immediately from (3.2.38).Pu[K ⊆ V(ω)] = Pu[K ∩ I(ω) = ∅] = Pu[µK(ω) = 0]= exp(−u · ν(W ∗K)) = exp(−uBCap(K)).Remark 3.2.5. Analogously to the case of random interlacements in [21,(2.17)]), using the inclusion-exclusion principle, one can see that (3.2.40)uniquely determines the law of V and I.In view of Proposition 3.2.7, there is an equivalent way to construct aset with the same law as I ∩K.Proposition 3.2.10. For any K ⊂⊂ Zd, let NK be a Poisson randomvariable with parameter u·BCap(K), and (Xj)j≥1 i.i.d. with the law PeK andindependent from NK . Then K ∩(∪NKj=1Xj(N))has the same distributionas Iu ∩K.3.3 Branching random walk on the torus andbranching interlacementsIn this section we consider branching random walk on the discrete torusTN := (Z/NZ)d of side-length N (for any d ≥ 5 fixed). For some technicalreason due to the periodicity of simple random walk on the torus, we assumeN is an odd number, see Remark 3.3.4. 
We prove that for any fixed u > 0,the local limit (as N →∞) of the set of vertices in TN visited by the branch-ing random walk with a uniformly distributed starting point, conditioned tohave buNdc progeny is given by branching interlacement at level 2u.1233.3. Branching random walk on the torus and branching interlacementsWe write ϕ : Zd → TN for the canonical projection map induced by modN . Recall that (Definition 3.2.1) Px is the law of infinite snake. We writePL,Nx (respectively PLx ) for the law of snake conditioned to have length 2Lon the torus TN (respectively on Zd) with starting point x. If the startingpoint is uniform on TN , we will use PL,N . PL will be reserved for the lawof 1-dimensional simple random walk excursion conditioned to have length2L (i.e. τ = 2L+ 1).Theorem 3.3.1. For any K ⊂⊆ Zd and u > 0, if N is odd and (Xn) is asnake on TN , conditioned to have length 2L with uniform starting distribu-tion, where L = buNdc, thenlimN→∞PL,N [{X0, X1, ..., X2L} ∩ ϕ(K) = ∅] = e−2u·BCap(K). (3.3.1)Remark 3.3.1. By (3.2.40), the right hand side is P2u[I ∩K = ∅].Remark 3.3.2. Note that the statement here is a bit different to the state-ment (Theorem 1.4.1 and Theorem 1.4.2) in Section 1.4 . The reason is thatthe branching capacity in this chapter differs from the one in the previouschapters by a multiplicative constant, 1/2. See Remark 3.2.4.Through the inclusion-exclusion principle, this theorem implies the localconvergence of the configuration:Corollary 3.3.2. Under the same assumptions on Theorem 3.3.1, for anyA ⊆ K, we have:limN→∞PL,N [{X0, X1, ..., X2L} ∩ ϕ(K) = A] = P2u[I ∩K = A]. (3.3.2)The idea of the proof of Theorem 3.3.1 is to use the ’law of rare events’,i.e., to decompose the event into the intersection of weakly dependent rareevents. Hence the proof consists of two main ingredients. One is to estimatethe hitting probability of ϕ(K) by a small snake, see Section 3.3.1; the otheris to cut a large tree into small subtrees, see Section 3.3.2 .1243.3. Branching random walk on the torus and branching interlacements3.3.1 Hitting probability of a set by a small snakeTheorem 3.3.1 gives an asymptotic formula for the probability that the snakevisits a subset on TN with length proportional to the volume of the torus,Nd. The main result of this section gives an asymptotic formula for theprobability of the event that a set is hit by a much shorter snake.Proposition 3.3.3. For α1 < α2 ∈ (0, d) fixed, L = L(N) is any integer-valued function of N satisfying L(N) ∈ [Nα1 , Nα2 ], thenlimN→∞Nd2LPL,N ({X0, X1, ..., X2L} ∩ ϕ(K) 6= ∅) = BCap(K). (3.3.3)In order to prove this proposition, we need the following lemma.Lemma 3.3.4. If S = (Si) is one-dimensional simple random walk excur-sion conditioned to have length 2L, then for any L ∈ N+, i ∈ [[0, L]] andx ∈ [[0, i]], we have:PL[Si = x]  (x+ 1)2(i+ 1)− 32 ; (3.3.4)PL[Si ≤ x]  (x+ 1)3(i+ 1)− 32 ; (3.3.5)PL[Si = x]  (i+ 1)− 12 ; (3.3.6)For any  ∈ (0, 1/2) and n ∈ N, there exists C(, n) > 0, such that, for anyL ∈ N+ and i ∈ [[0, L]], we have:PL[Si ≥ i 12+] ≤ C(, n)i−n. (3.3.7)In words, the L.H.S. decays much faster than any polynomial of i.1253.3. Branching random walk on the torus and branching interlacementsProof. 
In view of (3.1.9), we have (when i and x have the same parity):PL[Si = x](∗)=(Px[τ = i+ 1] · 2i+1)(Px[τ = 2L− i+ 1] · 22L−i+1)P0[τ = 2L+ 1] · 22L+1=2x+1i+1 Px[Si+1 = −1] x+12L−i+1Px[S2L−i+1 = −1]P0[S2L+1=−1]2L+1 (x+ 1)2i+ 1P0[Si+1 = −1− x]P0[S2L−i+1 = −1− x]P0[S2L+1 = −1](3.1.4),(3.1.10) (x+ 1)2i+ 1P0[Si+1 = −1− x] 1√2L−i+11√2L+1 (x+ 1)2i+ 1P0[Si+1 = −1− x](3.1.10) (x+ 1)2i+ 1(i+ 1)−1/2 exp(−c(x+ 1)2i+ 1)=(x+ 1)2(i+ 1)3/2exp(−c(x+ 1)2i+ 1),where in (∗) we used the time-reversibility of the random walk. Sinceexp(−c(x + 1)2/(i + 1)) ≤ 1, we have (3.3.4). By summing (3.3.4), we get(3.3.5). Because((x+ 1)2/(i+ 1))exp(−c · (x+ 1)2/(i+ 1)) = t exp(−ct)is less than a constant (which only depends on c) we have (3.3.6). For (3.3.7),we have:PL[Si ≥ i 12+] ≤∑x≥i 12+1√i+ 1(x+ 1√i+ 1)2exp(−c(x+ 1√i+ 1)2)∫ ∞i12+√i+1t2 exp(−ct2)dt ≤∫ ∞i/2t2 exp(−ct2)dt=∫ ∞i2/4u2exp(−cu)du = 12c(i24+1c) exp(−ci24).The last term decays faster than any polynomial of i.Proof of Proposition 3.3.3. We start with recalling some combinatorial prop-erties of Dyck paths (see the discussion before Proposition 3.1.2 for thedefinition of Dyck paths). Fix k ≥ 1 and j ∈ {0, 1, ..., 2k}, for any i ∈1263.3. Branching random walk on the torus and branching interlacements{0, 1, ...2k − 1} and Dyck path (s0, s1, ...s2k) with length 2k, , defines(i)j = si + si⊕j − 2 mini∧(i⊕j)≤n≤i∨(i⊕j)sn (3.3.8)where i⊕ j = i+ j, if i+ j ≤ 2k and i⊕ j = i+ j − 2k, if i+ j ≥ 2k. It iselementary to see that (s(i)0 , s(i)1 , ...s(i)2k ) is still a Dyck path with length 2k andthat the mapping Φi : (s0, s1, ...s2k) → (s(i)0 , s(i)1 , ...s(i)2k ) is a bijection fromthe set of all Dyck paths with length 2k onto itself (e.g. see page 14 of [15]).Recall that, under PL (or PL,N ), (S0, S1, ...S2k) is uniformly distributedon the set of all Dyck paths with length 2L. Hence (S(i)0 , S(i)1 , ...S(i)2k ) isdistributed identically as (S0, S1, ...S2k). From this and the fact that thestarting measure is uniform on the torus, one can see that, under PL,N ,(X0, X1, ..., X2L) and (Xi, Xi+1, ..., X2L, X1, ..., Xi−1, Xi) have the same law.On the other hand, the ’time reversal’ map s = (s0, s1, ...s2k) → sˇ =(s2k, s2k−1, ..., s0) is also a bijection on the set of all Dyck paths with length2k. Hence, under PL,N , (X0, X1, ..., X2L) and (X2L, X2L−1, ..., X0) have thesame law (here we also use the fact that the increment variables Yi havesymmetric distribution).Write K ′ = ϕ(K), we have (when N > diam(K) := max{|a− b| : a, b ∈K}):PL,N ({X0, X1, ..., X2L} ∩K ′ 6= ∅)× Nd2L=∑x∈∂iK′2L−1∑k=1PL,N [X0, ...Xk−1 /∈ K ′;Xk = x] · Nd2L+∑x∈K′PL,N (X0 = x) · Nd2L(∗)=∑x∈∂iK′2L−1∑k=1PL,N [X0 = x;X1, ...Xk /∈ K ′] · Nd2L+∑x∈K′12L=∑x∈∂iK′12L2L−1∑k=1PL,Nx [X1, ...Xk /∈ K ′] +|K|2L,1273.3. Branching random walk on the torus and branching interlacements(∗) is due to:PL,N [X0, ...Xk−1 /∈ K ′;Xk = x] = PL,N [X2L−k, ...X2L−1 /∈ K ′;X2L = x]= PL,N [Xk, ...X1 /∈ K ′;X0 = x].Hence in order to prove Proposition 3.3.3 it suffices to show that forx ∈ ∂iKlimN→∞12L2L−1∑k=1PL,Nϕ(x)[X1, ...Xk /∈ K ′] = eK(x), (3.3.9)where eK(x) is the escape probability (see (3.2.22)).For the above, it is enough to prove:limN→∞maxL′<k<2L−L′|PL,Nϕ(x)[X1, ...Xk /∈ K ′]− Px[Xn /∈ K for anyn > 0]| = 0.(3.3.10)for some L′ = L′(N), a function of N satisfying L′(N) → ∞ andL′(N)L(N) → 0as N →∞ (e.g. we can fix L′ = bL0.2c which satisfies also the condition inLemma 3.3.7).The proof of Proposition 3.3.3 is now reduced to the following lemmas:Lemma 3.3.5. 
For any x ∈ ∂iK,limN→∞maxL′<k<2L−L′|PL,Nϕ(x)[X1, ...Xk /∈ K ′]− PLx [X1, ...Xk /∈ K]| = 0. (3.3.11)Lemma 3.3.6. For any x ∈ ∂iK,limN→∞maxL′<k<2L−L′|PLx [X1, ...Xk /∈ K]− PLx [X1, ...XL′ /∈ K]| = 0. (3.3.12)Lemma 3.3.7. For any x ∈ ∂iK, if L′ = o(√L), then:limN→∞|PLx [X1, ...XL′ /∈ K]− Px[X1, ...XL′ /∈ K]| = 0. (3.3.13)Note that Px[X1, ...XL′ /∈ K] converges to the escape probability eK(x) =Px(∪n>0{Xn} /∈ K), so (3.3.10) indeed follows from Lemmas 3.3.5, 3.3.6,3.3.7.1283.3. Branching random walk on the torus and branching interlacementsWithout loss of generality, we assume x = 0 ∈ ∂iK and N > 2diam(K).Proof of Lemma 3.3.5. Let ϕ−1(K ′) =⋃∞i=0Ki, such that, x ∈ K0 = Kand Ki is a translated copy of K0. Recall from the statement of Proposition3.3.3 that α2 < d and choose λ ∈ (14 , d4α2 ) and let b = bLλN c+ 1.PL0 [X1, ...Xk /∈ K]− PL,Nϕ(0) [X1, ...Xk /∈ K ′]=PL0 [X1, ...Xk /∈ K]− PL0 [X1, ...Xk /∈ ϕ−1(K)]=PL0 [X1, ...Xk /∈ K]− PL0 [X1, ...Xk /∈ K,X1, ...Xk /∈ Ki for i ≥ 1]=PL0 [X1, ...Xk /∈ K, {X1, ...Xk} ∩ (∪i≥1Ki) 6= ∅]≤PL0 [{X1, ...Xk} ∩ (∪i≥1Ki) 6= ∅]≤PL0 [ sup0≤i≤2L|Xi| > bN ] + PL0 [ sup0≤i≤2L|Xi| ≤ bN, {X1, ...Xk} ∩ (∪i≥1Ki) 6= ∅](3.3.14)The first term above goes to 0, due to the following (since bN ≥ Lλ, λ >1/4):Proposition 3.3.8. For any c > 14 ,limL→∞PL0 [ sup0≤i≤2L|Xi| > Lc]→ 0. (3.3.15)This Proposition is an easy corollary in the theory of convergence of dis-crete snakes (see e.g.[15], or more generally [8]). In fact, sup0≤i≤2L |Xi|/(2L)1/4converges in distribution as L→∞ to an a.s. finite random variable.For the estimate of the second term in (3.3.14), we will use the following(a special case of Theorem 1.13 in [7]):Proposition 3.3.9. There exists a constant C, such that for all n ∈ N, ifTn is GW tree conditioned to have n progeny and wk(Tn) is the number ofvertices in the k-th generation of Tn, then we haveE(wk(Tn)) ≤ C · k. (3.3.16)1293.3. Branching random walk on the torus and branching interlacementsWith the help of this, for any y ∈ Zd, let M = |{i ∈ [0, 2L] : Xi = y}|.We have:PL0 [Xi = y for some 0 ≤ i ≤ 2L] = PL0 [M > 0] ≤ E(M)(3.3.16)∞∑k=0k · pk(0, y)(3.1.8) |y|4−d. (3.3.17)Now we estimate the second term in (3.3.14) for any 1 ≤ k ≤ 2L:PL0 [ sup0≤i≤2L|Xi| ≤ bN, {X1, ...Xk} ∩ (∪i≥1Ki) 6= ∅]≤∑i:K0 6=Ki⊆B0((b+1)N)PL0 [{X1, ...Xk} ∩Ki 6= ∅]=b+1∑i=1∑j:Kj∩S0(iN)6=∅PL0 [{X1, ...Xk} ∩Kj 6= ∅](3.3.17)b+1∑i=1∑j:Kj∩S0(iN)6=∅|K|(iN)d−4b+1∑i=1id−1 · |K|(iN)d−4|K| b4Nd−4 |K|L4λ/N4 + 1Nd−4→ 0,where the last convergence follows from λ < d4α2 , α2 < d and L ≤ Nα2 .Proof of Lemma 3.3.6. Recall that we have assumed x = 0 ∈ ∂iK. LetL′ < k < 2L− L′.PL0 [X1, ...XL′ /∈ K]− PL0 [X1, ...Xk /∈ K]≤PL0 [∃i ∈ (L′, k], Xi ∈ K]≤PL0 [∃i ∈ (L′, 2L− L′), Xi ∈ K]≤PL0 [∃i ∈ (L′, L], Xi ∈ K] + PL0 [∃i ∈ [L, 2L− L′), Xi ∈ K]=2PL0 [∃i ∈ (L′, L], Xi ∈ K],where in the last line we used the reversal property described in the begin-ning of the proof of Proposition 3.3.3.1303.3. Branching random walk on the torus and branching interlacementsLet us estimate PL0 [Xi = y] for any y ∈ Zd:PL0 [Xi = y] =∞∑l=0PL[Si = l] · PL0 [Xi = y|Si = l] (3.3.18)Recall from Section 3.2.2 that under PL, Si = Si(U) is the contour functionof the random tree conditioned to have size L. PL0 [Xi = y|Si = l] is theprobability of Zl = y, where Z = (Zn)n∈N is the simple random walk from0 in Zd. 
Recall that we have ((3.1.3) and Lemma 3.3.4):
\[
P^L[S_i = l]\lesssim(l+1)^2\cdot i^{-\frac32},\qquad P^L[S_i = l]\lesssim i^{-\frac12},\qquad P(Z_l = y)\lesssim l^{-\frac d2}.
\]
Therefore:
\begin{align*}
P^L_0[X_i = y]&\overset{(3.3.18)}{\le} P^L[S_i = 0]+\sum_{0<l\le\sqrt i}P^L[S_i = l]\cdot P^L_0[X_i = y\,|\,S_i = l]+\sum_{l>\sqrt i}P^L[S_i = l]\cdot P^L_0[X_i = y\,|\,S_i = l]\\
&\lesssim i^{-\frac32}+\sum_{0<l\le\sqrt i}(l+1)^2\, i^{-\frac32}\cdot l^{-\frac d2}+\sum_{l>\sqrt i}i^{-\frac12}\cdot l^{-\frac d2}
\lesssim i^{-\frac32}+i^{-\frac32}\sum_{0<l\le\sqrt i}(l+1)^{2-\frac d2}+i^{-\frac12}\sum_{l>\sqrt i}l^{-\frac d2}.
\end{align*}
Note that every term is decreasing in $d$. Hence we can assume $d = 5$:
\[
P^L_0[X_i = y]\lesssim i^{-\frac32}+i^{-\frac32}\sum_{0<l\le\sqrt i}(l+1)^{-\frac12}+i^{-\frac12}\sum_{l>\sqrt i}l^{-\frac52}
\lesssim i^{-\frac32}+i^{-\frac32}(\sqrt i)^{\frac12}+i^{-\frac12}(\sqrt i)^{-\frac32}\asymp i^{-\frac54}.
\]
So $P^L_0[X_i = y]$ is summable in $i$, and:
\[
P^L_0[\exists i\in(L',L],\ X_i\in K]\le 2|K|\sum_{L'<i\le L}\sup_{y\in\mathbb Z^d}P^L_0[X_i = y]\lesssim 2|K|\sum_{i>L'}i^{-\frac54}\xrightarrow{\ L'\to\infty\ }0.
\]

Let us do some preparations before proving Lemma 3.3.7. When considering the one-sided snake conditioned on survival, it is convenient to introduce the discrete Bessel process. We use the setting of the one-sided snake conditioned on survival: let $U'_i$, $i\in\mathbb N_+$, be i.i.d. with $P(U'_i = 1) = P(U'_i = -1) = \frac12$, and set $S'_n(U') = U'_1+\dots+U'_n$; then $S'_n(U')$ is the one-sided simple random walk. Let
\[
M_n = M_n(U') = \max_{k\le n}S'_k,\qquad R_n = R_n(U') = 2M_n-S'_n;
\]
the process $(R_n)_{n\in\mathbb N}$ is called the discrete Bessel process (DBP). We can also define a partial matching on the set $E(\mathbb N) = \{e_i = (i-1,i);\ i\in\mathbb N_+\}$ of all edges of the lattice $\mathbb N$. Any edge is either in the set of upsteps $M(U') = \{e_i : R_i-R_{i-1} = 1\}$ or in the set of downsteps $N(U') = \{e_i : R_i-R_{i-1} = -1\}$. For any edge $e_l\in N(U')$, we can find a unique edge $f'_U(e_l) = e_k\in M(U')$ such that:
\[
k < l,\qquad R_{k-1} = R_l,\qquad R_k = R_{l-1} = \min_{k\le i\le l-1}R_i = R_l+1. \tag{3.3.19}
\]
Note that there are some upsteps with no downstep matched to them. Similarly to the construction given in Section 3.2.2, gluing through this matching yields a (random) tree such that $(R_n)_{n\in\mathbb N}$ is its contour function. It is elementary to see that this tree has the same distribution as the tree corresponding to the one-sided infinite snake.

Another, equivalent, definition of the law of the DBP is as follows (e.g. see [17]): $(R_n)_{n\in\mathbb N}$ is the $\mathbb N$-valued Markov process starting at zero with transition function specified by the relation:
\[
P[R_{n+1}-R_n = \Delta\,|\,R_n] = \frac{R_n+1+\Delta}{2(R_n+1)},\qquad \Delta = \pm1. \tag{3.3.20}
\]
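The equivalence of the two descriptions of the DBP can be checked by simulation. Below is a small Python sketch (ours, for illustration only): it generates $R_n = 2M_n - S'_n$ from a simple random walk and compares the empirical frequency of upsteps from each level $r$ with $(r+2)/(2(r+1))$, which is (3.3.20) with $\Delta = +1$.

```python
import random
from collections import Counter

def dbp_path(n, rng):
    """R_k = 2 * max_{j <= k} S'_j - S'_k for a simple random walk S'."""
    s = m = 0
    path = [0]
    for _ in range(n):
        s += rng.choice((-1, 1))
        m = max(m, s)
        path.append(2 * m - s)
    return path

rng = random.Random(0)
ups, visits = Counter(), Counter()
for _ in range(5000):
    p = dbp_path(200, rng)
    for a, b in zip(p, p[1:]):
        visits[a] += 1
        ups[a] += (b == a + 1)

for r in range(5):
    # empirical frequency of an upstep from level r vs. (r+2)/(2(r+1))
    print(r, round(ups[r] / visits[r], 3), (r + 2) / (2 * (r + 1)))
```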
Denote by
\[
\Gamma_n = \{(s_1, s_2, \dots, s_n) : s_i\in\mathbb N,\ s_1 = 1,\ |s_{i+1}-s_i| = 1\}
\]
the set of all sample paths of $(R_i)_{1\le i\le n}$. In view of (3.3.20), for any $s = (s_1, \dots, s_n)\in\Gamma_n$, one obtains
\[
a(s) := P[(R_1, \dots, R_n) = s] = \frac{s_n+1}{2^n}. \tag{3.3.21}
\]
On the other hand, similarly to the computation in the proof of Lemma 3.3.4, we obtain that for any $1\le n\le 2L$,
\[
a_L(s) := P^L[(S_1, \dots, S_n) = s] = \frac{\dfrac{s_n+1}{2L+1-n}\dbinom{2L-n+1}{L-\frac{n+s_n}{2}}}{\dfrac{1}{2L+1}\dbinom{2L+1}{L}}. \tag{3.3.22}
\]
Let us estimate $a_L(s)/a(s)$:
\begin{align*}
\frac{a_L(s)}{a(s)}&= \frac{(s_n+1)\cdot(L+1)\cdots\bigl(L-\frac{n-s_n}{2}+2\bigr)\cdot L\cdots\bigl(L-\frac{n+s_n}{2}+1\bigr)}{(2L)\cdots(2L-n+1)}\Big/\frac{s_n+1}{2^n}\\
&= \frac{(L+1)L\cdots\bigl(L-\frac{n-s_n}{2}+2\bigr)\cdot L(L-1)\cdots\bigl(L-\frac{n+s_n}{2}+1\bigr)}{L\bigl(L-\frac12\bigr)\bigl(L-\frac22\bigr)\cdots\bigl(L-\frac{n-1}{2}\bigr)}
\in\Bigl(\Bigl(\frac{L-n}{L}\Bigr)^{n},\ \frac{L+1}{L}\cdot\Bigl(\frac{L}{L-n}\Bigr)^{n}\Bigr).
\end{align*}
Hence, if $n = o(\sqrt L)$, then $a_L(s)/a(s)\to 1$ uniformly over all $s\in\Gamma_n$.

Proof of Lemma 3.3.7. Using our new description of the law $P_x$ of the one-sided snake in terms of the DBP, the definitions (3.3.21) and (3.3.22), as well as the fact that
\[
P_x[X_1, \dots, X_{L'}\notin K\,|\,(R_1, \dots, R_{L'}) = s] = P^L_x[X_1, \dots, X_{L'}\notin K\,|\,(S_1, \dots, S_{L'}) = s],
\]
we obtain
\begin{align*}
P^L_x[X_1, \dots, X_{L'}\notin K] &= \sum_{s\in\Gamma_{L'}} a_L(s)\cdot P^L_x[X_1, \dots, X_{L'}\notin K\,|\,(S_1, \dots, S_{L'}) = s];\\
P_x[X_1, \dots, X_{L'}\notin K] &= \sum_{s\in\Gamma_{L'}} a(s)\cdot P^L_x[X_1, \dots, X_{L'}\notin K\,|\,(S_1, \dots, S_{L'}) = s].
\end{align*}
Since $P_x[X_1, \dots, X_{L'}\notin K]\in(0,1)$ for $x\in\partial_i K$, we obtain
\[
P^L_x[X_1, \dots, X_{L'}\notin K]\big/P_x[X_1, \dots, X_{L'}\notin K]\to 1
\]
as a consequence of $L'/\sqrt L\to 0$. The proof of Lemma 3.3.7 is complete.

3.3.2 Cutting trees

Our goal for this subsection is to establish the following 'cutting tree' lemma:

Lemma 3.3.10. Assume $d\ge5$ and fix $u>0$. Let $T$ be a uniform tree in $A_L$, where $L = \lfloor uN^d\rfloor$. Then there are $\epsilon, \eta > 0$ and $a_1, a_2\in(4,d)$ (depending on $u, d$ only) such that for any sufficiently large $N\in\mathbb N$, with probability at least $1-N^{-\epsilon}$, we can find a number of rooted subtrees $T_1, \dots, T_{n'}$ ($T_i$ is rooted at $v_i$, the unique vertex in $T_i$ closest to $o$, the root of $T$) satisfying the following:

1. For every $i\in\{1, \dots, n'\}$, $N^{a_1}\le|T_i|\le N^{a_2}$ and the distance between $v_i$ and $o$ is bigger than $N^{2+\eta}$;
2. Let $\hat T$ be the graph generated by all edges not in any $T_i$. Then $\hat T$ is a tree and $|\hat T|\le N^{d-\epsilon}$;
3. Let $\iota_i$ ($i = 1, \dots, n'$) be the unique path starting from $v_i$ towards the root of $T$, with length $\lfloor N^{2+\eta}\rfloor+1$. Then for any $i\in\{1, \dots, n'\}$, all $T_j$ except $T_i$ are in the same component of $T\setminus\iota_i$;
4. Conditioned on $\{n'; |T_1|, \dots, |T_{n'}|; \hat T\}$ (and even the places of the $v_i$ in $\hat T$), the trees $T_i$ are independent and uniform on all plane trees with their sizes.

As before, we will use the contour function to represent a tree. For the simple random walk $(S_n)_{n\in\mathbb N}$ conditioned on $\tau\ (= \inf\{n : S_n = -1\}) = 2L+1$, $(S_n)_{n\in[[0,2L]]}$ is the contour function of a random tree $T$ which is uniformly distributed over $A_L$. If for some subinterval $I = [[a,b]]\subseteq[[0,2L]]$ ($a, b\in\mathbb N$) we have:
\[
S_a = S_b = \min_{a\le n\le b} S_n, \tag{3.3.23}
\]
then $(S_n)_{n\in I}$ is the contour function of a subtree of $T$ (rooted at the vertex corresponding to $a$ and $b$). We denote by $\xi$ the size of the unconditioned GW tree. It is standard that
\[
P[\xi = j] = P[\inf\{n : S_n = -1\} = 2j+1] \overset{(3.1.9)}{=} \frac{1}{2j+1}\cdot\frac{1}{2^{2j+1}}\binom{2j+1}{j} \overset{(3.1.4)}{\asymp} (j+1)^{-\frac32}. \tag{3.3.24}
\]
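The contour coding and the subtree criterion (3.3.23) are easy to make concrete. A small Python sketch (our illustration; the vertex encoding is ours):

```python
def tree_from_contour(s):
    """Plane tree coded by a Dyck path (s_0, ..., s_{2L}).

    Vertex 0 is the root; a new vertex is created at every upstep.
    Returns a dict mapping each vertex to its ordered list of children.
    """
    children = {0: []}
    stack, nxt = [0], 1          # stack = current ancestral line
    for a, b in zip(s, s[1:]):
        if b > a:                # upstep: move to a brand-new child
            children[stack[-1]].append(nxt)
            children[nxt] = []
            stack.append(nxt)
            nxt += 1
        else:                    # downstep: move back to the parent
            stack.pop()
    return children

def codes_subtree(s, a, b):
    """The tree condition (3.3.23): (s_n)_{a <= n <= b} is the contour
    of a subtree iff s_a = s_b = min_{a <= n <= b} s_n."""
    return s[a] == s[b] == min(s[a:b + 1])

# e.g. tree_from_contour((0, 1, 0, 1, 2, 1, 0)) == {0: [1, 2], 1: [], 2: [3], 3: []}
```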
First we introduce some lemmas which will be used in the proof of Lemma 3.3.10.

Lemma 3.3.11. For any $\beta, \epsilon\in(0,1/2)$ with $\epsilon < \beta/2$, there exist positive constants $C_1$ and $C_2$ (depending on $\beta, \epsilon$) satisfying the following. For $\xi_1, \dots, \xi_m$ i.i.d. with the distribution of $\xi$, let
\[
\tilde\xi_i = \xi_i\cdot\mathbf 1_{\{\xi_i\le m^{2-\beta}\}},\qquad \sigma_m = \xi_1+\dots+\xi_m,\qquad \tilde\sigma_m = \tilde\xi_1+\dots+\tilde\xi_m.
\]
Then, for any integer $M\in[\frac1{10}m^{2-\epsilon}, 10m^{2.5}]$, we have:
\[
P[\tilde\sigma_m > C_1 m^{2-\beta/2}\,|\,\sigma_m = M]\le\exp(-C_2 m^{\beta/2}). \tag{3.3.25}
\]

We need the so-called Bernstein inequality (see, e.g., the part on 'Existing Inequalities' in [4]) to prove the lemma above:

Proposition 3.3.12. Let $X_1, X_2, \dots, X_n$ be independent zero-mean random variables. Suppose that $|X_i|\le\bar M$ almost surely, for all $i$. Then, for all positive $t$:
\[
P\Bigl[\sum_{i=1}^n X_i > t\Bigr]\le\exp\Biggl(-\frac{\frac12 t^2}{\sum E(X_j^2)+\frac13\bar M t}\Biggr). \tag{3.3.26}
\]

Proof of Lemma 3.3.11. Since $E\tilde\xi\lesssim m^{1-\beta/2}$ and $E\tilde\xi^2\lesssim m^{3-3\beta/2}$, using the Bernstein inequality (with $t = m^{2-\beta/2}$), we get, for some positive constants $C_1, C_2$:
\[
P[\tilde\sigma_m > C_1 m^{2-\beta/2}]\le\exp(-C_2 m^{\beta/2}). \tag{3.3.27}
\]
On the other hand, when $M\in[\frac1{10}m^{2-\epsilon}, 10m^{2.5}]$, we have:
\begin{align*}
P[\sigma_m = M] &= P_{m-1}[\tau = 2M+m] \overset{(3.1.9)}{=} \frac{m}{2M+m}P_{m-1}[S_{2M+m} = -1]\\
&= \frac{m}{2M+m}P_0[S_{2M+m} = m] \overset{(3.1.4)}{\asymp} \frac{m}{(2M+m)^{\frac32}}\exp\Bigl(-\frac{m^2}{2(2M+m)}\Bigr)\ge\exp(-Cm^{\epsilon}).
\end{align*}
Combining (3.3.27) and the inequality above, we get: when $\epsilon < \beta/2$,
\[
P[\tilde\sigma_m > C_1 m^{2-\beta/2}\,|\,\sigma_m = M]\le\exp(-C_3 m^{\beta/2}). \tag{3.3.28}
\]

Another lemma we need is:

Lemma 3.3.13. For any positive $\eta, \delta$, there exists a positive constant $C(\eta,\delta)$ such that, for any $L\ge 2N^{4+2\eta+\delta}$, we have:
\[
P^L\bigl[S_i < N^{2+\eta} \text{ for some } i\in[N^{4+2\eta+\delta},\ 2L-N^{4+2\eta+\delta}]\bigr]\le C(\eta,\delta)/N^{\frac38\delta}. \tag{3.3.29}
\]
Recall that $P^L$ is the law of the SRW conditioned on $\tau = 2L+1$. If we let
\[
b_1 = \max\{n\in[[0,L]] : S_n = \lfloor N^{2+\eta}\rfloor\}+1,\qquad
b_2 = \min\{n\in[[L,2L]] : S_n = \lfloor N^{2+\eta}\rfloor\}-1,
\]
then by Lemma 3.3.13, with probability at least $1-C(\eta,\delta)N^{-3\delta/8}$, the length of $[[b_1,b_2]]$ is bigger than $2(L-N^{4+2\eta+\delta})$. Since $S$ on $[[b_1,b_2]]$ satisfies the tree condition (3.3.23), $S$ restricted to $[[b_1,b_2]]$ codes a subtree. Note that the distance between this subtree and the root is $\lfloor N^{2+\eta}\rfloor+1$. Hence we can interpret Lemma 3.3.13 in the language of random trees.

Corollary 3.3.14. For any $\eta, \delta > 0$, there exists a positive constant $C(\eta,\delta)$ satisfying the following: if $T$ is uniform on $A_L$ with $L\ge 2N^{4+2\eta+\delta}$, then with probability at least $1-C(\eta,\delta)/N^{\frac38\delta}$, we can find a rooted subtree $T'$, rooted at its vertex closest to the original root, such that the distance between this subtree and the original root equals $\lfloor N^{2+\eta}\rfloor+1$ and the number of discarded edges ($|T\setminus T'|$) is at most $N^{4+2\eta+\delta}$. Moreover, conditioned on the size of $T'$, it is uniform on all plane trees of that size.

The last conclusion simply follows from the fact that, conditioned on the length, each Dyck path of that length has the same probability weight. Note that if $L\gg N^{4+2\eta+\delta}$, then the ratio of discarded edges is less than $N^{4+2\eta+\delta}/L$, which is very small.

Before proving Lemma 3.3.13 we introduce some notation. For any $n\in\mathbb N_+$ and $i\in[[0,n]]$, write $A(n,i) = \binom ni-\binom n{i+1}$. Using the reflection principle, one can see that for any $x, n\in\mathbb N$ with the same parity and any $t\in\mathbb N$ with $x+2t\le n$:
\begin{align*}
&\Bigl|\Bigl\{s : [[0,n]]\to\mathbb Z\ :\ s(0) = 0,\ s(n) = x,\ \min_{0\le i\le n}s(i) = -t;\ \forall i,\ |s(i)-s(i-1)| = 1\Bigr\}\Bigr|\\
&= \Bigl|\Bigl\{s\ :\ s(0) = 0,\ s(n) = x,\ \min_{0\le i\le n}s(i)\le -t\Bigr\}\Bigr| - \Bigl|\Bigl\{s\ :\ s(0) = 0,\ s(n) = x,\ \min_{0\le i\le n}s(i)\le -t-1\Bigr\}\Bigr|\\
&= \binom{n}{\frac{n+x}{2}+t} - \binom{n}{\frac{n+x}{2}+t+1} = A\Bigl(n,\ \frac{n+x}{2}+t\Bigr).
\end{align*}

Lemma 3.3.15 (Comparison between combinations). For any $\epsilon\in(0,\frac12)$ and $A>0$, there exists $C = C(\epsilon,A)>1$ satisfying the following. For any $n, k, k'\in\mathbb N_+$ with $n/2\le k<k'<n$, let $i = k-\frac{n-1}{2}$ and $i' = k'-\frac{n-1}{2}$. If $i' < \epsilon n$ and $i'(i'-i) < An$, then:
\[
\frac{A(n,k)/i}{A(n,k')/i'}\in\bigl(1,\ C(\epsilon, A)\bigr). \tag{3.3.30}
\]

Remark 3.3.3. In fact, the case $\epsilon = \frac14$, $A = 1$ is enough for our purpose, and we only use this case.

Proof. It is straightforward to get:
\[
A(n,k) = \binom nk-\binom n{k+1} = \frac{n!\,(2k-n+1)}{(k+1)!\,(n-k)!}; \tag{3.3.31}
\]
\[
\frac{A(n,k)/i}{A(n,k+1)/(i+1)} = \frac{k+2}{n-k} = 1+\frac{2i+1}{(n+1)/2-i} < 1+\frac{4i}{(1/2-\epsilon)n}. \tag{3.3.32}
\]
Hence,
\[
\ln\frac{A(n,k)/i}{A(n,k')/i'} < \sum_{i\le\bar i<i'}\ln\Bigl(1+\frac{4\bar i}{(1/2-\epsilon)n}\Bigr)\le\sum_{i\le\bar i<i'}\frac{4\bar i}{(1/2-\epsilon)n}\le\frac{4(i'-i)\,i'}{(1/2-\epsilon)n}\le\frac{4A}{1/2-\epsilon}.
\]
The upper bound follows. The lower bound is immediate from (3.3.32).

Proof of Lemma 3.3.13. By symmetry, it suffices to show:
\[
P^L\bigl[S_i < N^{2+\eta} \text{ for some } i\in[N^{4+2\eta+\delta}, L]\bigr]\le C(\eta,\delta)/N^{\frac38\delta}. \tag{3.3.33}
\]
Let $j = \lfloor N^{4+2\eta+\delta}\rfloor$. By Lemma 3.3.4,
\begin{align*}
P^L[S_j\le N^{2+\eta+\frac38\delta}]&\lesssim N^{-\frac38\delta}; \tag{3.3.34}\\
P^L[S_L\le N^{2+\eta+\frac38\delta}]&\lesssim N^{-\frac38\delta}; \tag{3.3.35}\\
P^L[S_j\ge\sqrt L\cdot N^{\frac{\delta}{10}}]&\le C(\eta,\delta)N^{-\frac38\delta}; \tag{3.3.36}\\
P^L[S_L\ge\sqrt L\cdot N^{\frac{\delta}{10}}]&\le C(\eta,\delta)N^{-\frac38\delta}. \tag{3.3.37}
\end{align*}
Hence, we have:
\[
P^L\bigl[S_j, S_L\in[N^{2+\eta+\frac38\delta},\ \sqrt L\cdot N^{\frac{\delta}{10}}]\bigr]\ge 1-C(\eta,\delta)/N^{\frac38\delta}. \tag{3.3.38}
\]
For $a_1, a_2\in[[N^{2+\eta+\frac38\delta}, \sqrt L\cdot N^{\frac{\delta}{10}}]]$ such that $2\mid(a_2-a_1)-(L-j)$, and for $m\in[[0, \frac12 N^{2+\eta+\frac38\delta}]]$, write
\[
S(a_1, a_2, m) := \Bigl|\Bigl\{s : [[j,L]]\to\mathbb N\ :\ s(j) = a_1,\ s(L) = a_2,\ \min_{j\le i\le L}s(i) = m;\ \forall i,\ |s(i)-s(i-1)| = 1\Bigr\}\Bigr|.
\]
We know $S(a_1, a_2, m) = A\bigl(L-j,\ \frac{L-j+a_2-a_1}{2}+a_1-m\bigr)$ (see the discussion before Lemma 3.3.15). We would like to use Lemma 3.3.15 to compare $S(a_1, a_2, m_1)$ and $S(a_1, a_2, m_2)$.
For any $m_1, m_2\in[[0, \frac12 N^{2+\eta+\frac38\delta}]]$, one can check that, if we let (as in Lemma 3.3.15, with $n = L-j$)
\[
i = \Bigl(\frac{L-j+a_2-a_1}{2}+a_1-m_1\Bigr)-\frac{L-j-1}{2},\qquad
i' = \Bigl(\frac{L-j+a_2-a_1}{2}+a_1-m_2\Bigr)-\frac{L-j-1}{2},
\]
then
\[
i'\le a_2\le\sqrt L\,N^{\frac{\delta}{10}}<\frac L4,\qquad i'-i\le\frac12 N^{2+\eta+\frac38\delta},\qquad
i'(i'-i)\le\sqrt L\,N^{\frac{\delta}{10}}\cdot\frac12 N^{2+\eta+\frac38\delta}\le L.
\]
Also we have $i\asymp i'$ (since $i\ge a_1-\frac12N^{2+\eta+\frac38\delta}\ge\frac12N^{2+\eta+\frac38\delta}\ge i'-i$). Hence, by Lemma 3.3.15, for any $m_1\in[[N^{2+\eta}, \frac12N^{2+\eta+\frac38\delta}]]$ and $m_2\in[[0, N^{2+\eta}]]$,
\[
S(a_1, a_2, m_1)\ \ge\ C\bigl(\tfrac14, 1\bigr)^{-1}\,S(a_1, a_2, m_2). \tag{3.3.39}
\]
Note that the left hand side may be zero (when $\frac{L-j+a_2-a_1}{2}+a_1-m_1 > L-j$), but in that case the right hand side is also zero (since $m_1 > m_2$). Hence, writing $m$ for $\min_{j\le i\le L}S_i$,
\[
\frac{P^L(m\le N^{2+\eta}\,|\,S_j = a_1, S_L = a_2)}{P^L(m\le\frac12 N^{2+\eta+\frac38\delta}\,|\,S_j = a_1, S_L = a_2)}\le C\,\frac{N^{2+\eta}}{\frac12 N^{2+\eta+\frac38\delta}}\le\frac{2C}{N^{\frac38\delta}}. \tag{3.3.40}
\]
Combining this with (3.3.38) completes the proof.

Proof of Lemma 3.3.10. Let us first explain the rough idea of the proof. We first divide the domain $[0,2L]$ into subintervals. On each subinterval, since $S$ does not in general satisfy the tree condition (3.3.23), $S$ restricted to that interval does not correspond to a tree. But $S$ restricted to that interval can still be regarded as a family of trees attached to the vertices of a segment, called the 'spine', which consists of those edges without a matching. Then we pick out the subtrees of large size. Assume the subtrees kept are $\tilde T_1, \dots, \tilde T_K$. For each $\tilde T_i$, we can apply Corollary 3.3.14 to get its subtree $T_i$. This simple method satisfies all the requirements we need.

Let $k = \lfloor u\cdot N^{\alpha}\rfloor$ and $l = 2\lfloor N^{\lambda}\rfloor$, where $\lambda = d-\alpha$ and $\alpha$ is a parameter we will choose later. We will write down the constraints on $\alpha$ and the other parameters later. Let $I_i = [[(i-1)l, il]]$ for $i\in[[1,k]]$. Write $m_i = \min_{n\in I_i} S_n$ and $\Delta_i = S_{(i-1)l}+S_{il}-2m_i$. In fact, $\Delta_i$ is the tree distance between the endpoint vertices corresponding to $(i-1)l$ and $il$. Note that $\Delta_1 = S_l$; thus by Lemma 3.3.4, we have:
\[
P^L[\Delta_1 < N^{\lambda/2-\gamma}]\le CN^{-3\gamma};\qquad
P^L[\Delta_1 > N^{\lambda/2+\epsilon}]\le CN^{-3\gamma}.
\]
Note that the constants here (and throughout this proof) may depend on the parameters $\alpha, \gamma, \epsilon$ (and of course on $u, d$), but definitely not on $N$. In fact we will choose $\alpha, \beta, \gamma, \delta$ ($\beta, \gamma, \delta$ will appear later) in the end, and they will be chosen to be small. After that, $\epsilon$ and $\eta$ will be chosen to be even smaller numbers depending on $\alpha, \beta, \gamma, \delta$.

Using the root-changing method (see the beginning of the proof of Proposition 3.3.3), we have the same inequalities not only for $\Delta_1$ but for every $\Delta_i$, since $\Delta_i$ is the tree distance between the endpoints, which is invariant under root-changing. The number of intervals is $k\asymp N^{\alpha}$. Hence, if
\[
\alpha < 3\gamma, \tag{3.3.41}
\]
then (by Lemma 3.3.4) we obtain that with high probability, at least $1-C/N^{3\gamma-\alpha}$, all $\Delta_i$ are in $[[N^{\lambda/2-\gamma}, N^{\lambda/2+\epsilon}]]$.

As mentioned earlier, the part of $S$ in $I_i$ can be regarded as a segment called the 'spine', consisting of those edges which the contour walk crosses once inside $I_i$ and once outside $I_i$, together with a set of subtrees of $S$ (we call them bushes) attached to the vertices of the spine. The number of vertices of the spine, which is also the number of bushes (some may be one-point trees), is $m = \Delta_i+1$, and the total number of edges of these bushes is $M = \frac{l-\Delta_i}{2}$. Moreover, it is elementary to see that the joint law of the sizes of these bushes is that of $(\xi_1, \dots, \xi_m)$ conditioned on $\sum\xi_j = M$. We know that with high probability $\Delta_i\in[[N^{\lambda/2-\gamma}, N^{\lambda/2+\epsilon}]]$. Since $\epsilon$ will be chosen very small ($N^{\lambda/2+\epsilon}\ll N^{\lambda}$), $M\approx\frac l2 = \lfloor N^{\lambda}\rfloor$. When
\[
\gamma/\lambda < 0.1 \tag{3.3.42}
\]
and $\epsilon$ is very small, one can check that $M$ and $m$ are in the required relation for Lemma 3.3.11 to hold.
Hence we have:
\[
P[\tilde\sigma_m > C_1 m^{2-\beta/2}\,|\,\sigma_m = M]\le\exp(-C_2 m^{\beta/2}). \tag{3.3.43}
\]
This means that if we discard the bushes with fewer than $m^{2-\beta}$ edges, then with high probability the total number of edges we lose is less than $C_1 m^{2-\beta/2}$. We do so, and pick out the bushes with size bigger than $m^{2-\beta}$. The ratio of edges lost compared to the total number of edges is less than:
\[
\frac{m^{2-\beta/2}}{2l}\lesssim\frac{N^{(\lambda/2+\epsilon)(2-\beta/2)}}{N^{\lambda}} = N^{-\bigl(\frac{\lambda\beta}{4}-\epsilon(2-\frac{\beta}{2})\bigr)}. \tag{3.3.44}
\]
When $\epsilon$ is very small the exponent is negative, which is what we want (for Condition 2). The size of each bush we pick is less than $l/2\le N^{\lambda}$ and bigger than:
\[
m^{2-\beta}\ge N^{(\frac{\lambda}{2}-\gamma)(2-\beta)}. \tag{3.3.45}
\]
Note that the exponent $\lambda$ is obviously less than $d$. The total number of bushes picked is less than:
\[
k\cdot\bigl(l\big/N^{(\frac{\lambda}{2}-\gamma)(2-\beta)}\bigr)\asymp N^{\alpha+2\gamma+\frac{\beta\lambda}{2}-\beta\gamma}. \tag{3.3.46}
\]
Assume now that all the bushes (over all $k$ intervals) that we picked are $\tilde T_1, \dots, \tilde T_{n'}$ and that the vertices where they are grafted onto the spines are $\bar v_1, \dots, \bar v_{n'}$. Note that for each subinterval we have a spine, and all spines together form the contour walk of a connected subtree of $T$, which we will call the skeleton. All bushes (whether picked or not) are grafted onto the skeleton. Hence, the set $T\setminus(\cup\tilde T_i)$ consists of the skeleton and the bushes we did not pick, and is connected. Since the root $o$ is in the spine of the first interval, $o$ is in the skeleton and in $T\setminus(\cup\tilde T_i)$. Moreover, conditioned on their sizes and on $T\setminus(\cup\tilde T_i)$, the trees $\tilde T_i$ are independent and uniform. This follows simply from the fact that, conditioned on its length, each bush is independent of the spine and of every other bush, together with the fact that each Dyck path of that length has the same probability weight.

In view of Corollary 3.3.14, for each $\tilde T_i$, with high probability we can find a subtree $T_i$ which is far from the root $\bar v_i$. More precisely, for each $\tilde T_i$, assume $|\tilde T_i| = L_i$ and let $(S_n)_{n\in[[0,2L_i]]}$ be its contour function. We know that with high probability the event in (3.3.29) holds. Set
\[
b_1 = \max\{n\in[[0,L_i]] : S_n = \lfloor N^{2+\eta}\rfloor\}+1,\qquad
b_2 = \min\{n\in[[L_i,2L_i]] : S_n = \lfloor N^{2+\eta}\rfloor\}-1,
\]
and let $T_i$ be the subtree corresponding to $[[b_1,b_2]]$. Then the distance between $T_i$ and the root of $\tilde T_i$ is $\lfloor N^{2+\eta}\rfloor+1$. We use $T_i$ to replace $\tilde T_i$. If we can replace all the $\tilde T_i$ successfully, then Conditions 2 and 3 are satisfied and $T_1, \dots, T_{n'}$ satisfy all the conditions. When
\[
\Bigl(\frac{\lambda}{2}-\gamma\Bigr)(2-\beta) > 4+2\eta+\delta, \tag{3.3.47}
\]
the probability of failure for a single subtree is of order $N^{-3\delta/8}$. Since there are at most $N^{\alpha+2\gamma+\frac{\beta\lambda}{2}-\beta\gamma}$ subtrees, if
\[
\alpha+2\gamma+\frac{\beta\lambda}{2}-\beta\gamma < \frac38\delta, \tag{3.3.48}
\]
then the probability that we can replace all the $\tilde T_i$ successfully is at least $1-N^{-\epsilon'}$ for some $\epsilon' > 0$.

The constraints (3.3.41), (3.3.42), (3.3.47) and (3.3.48) are not tight; e.g. $\alpha = 0.001d$, $\gamma = 0.002d$, $\delta = 0.05d$ and $\beta = 0.02$ (with $\epsilon$ and $\eta$ very small) satisfy all the constraints. This concludes the lemma.

3.3.3 Proof of the main theorem

Let $S : T\to\mathbb T_N$ be the random function corresponding to $(X_n)$. Then $T$ is uniform on $A_L$ and $\{X_0, \dots, X_{2L}\} = S(T)$. Due to Lemma 3.3.10, with high probability (at least $1-C/N^{\epsilon}$), we can find subtrees $T_1, \dots, T_{n'}$ as in the lemma. We denote this event by $A$. We write $P[\,\cdot\,|(n'; L_1, \dots, L_{n'}; t)]$ (respectively $p(n'; L_1, \dots, L_{n'}; t)$) for the conditional probability given (respectively the probability) that $A$ holds, the number of subtrees $T_i$ is $n'$, the size of $T_i$ is $L_i$ ($i = 1, \dots, n'$), and the subtree $\hat T$, with $n'$ marked vertices indicating the places of the $v_i$, is $t$ (we also assume that $\hat T$ is a rooted tree together with $n'$ ordered vertices in it).
Note that under $P[\,\cdot\,|(n'; L_1, \dots, L_{n'}; t)]$, the trees $T_1, \dots, T_{n'}$ are independent and uniform on all plane trees with the given sizes. Hence:
\begin{align*}
P^{L,N}[\{X_0, X_1, \dots, X_{2L}\}\cap\varphi(K) = \emptyset] &= P^{L,N}[S(T)\cap\varphi(K) = \emptyset]\\
&\le P^{L,N}[A^c] + \sum p(n'; L_1, \dots, L_{n'}; t)\,P[S(T)\cap\varphi(K) = \emptyset\,|\,(n'; L_1, \dots, L_{n'}; t)],
\end{align*}
where the sum runs over all possible values of $\Upsilon = (n'; L_1, \dots, L_{n'}; t)$ such that $p(\Upsilon) > 0$ (depending on $N$).

Since $P^{L,N}[A^c]\to 0$, it suffices to prove
\[
\lim_{N\to\infty}\max_{\Upsilon}\bigl|P[S(T)\cap\varphi(K) = \emptyset\,|\,\Upsilon] - \exp(-2u\,\mathrm{BCap}(K))\bigr| = 0. \tag{3.3.49}
\]
This can be reduced to (3.3.50)--(3.3.52):
\begin{align}
&\lim_{N\to\infty}\max_{\Upsilon}\Bigl|P[S(T)\cap\varphi(K) = \emptyset\,|\,\Upsilon] - P\Bigl[\bigl(\cup_{i=1}^{n'}S(T_i)\bigr)\cap\varphi(K) = \emptyset\,\Big|\,\Upsilon\Bigr]\Bigr| = 0; \tag{3.3.50}\\
&\lim_{N\to\infty}\max_{\Upsilon}\Bigl|P\Bigl[\bigl(\cup_{i=1}^{n'}S(T_i)\bigr)\cap\varphi(K) = \emptyset\,\Big|\,\Upsilon\Bigr] - \prod_{i=1}^{n'}P[S(T_i)\cap\varphi(K) = \emptyset\,|\,\Upsilon]\Bigr| = 0; \tag{3.3.51}\\
&\lim_{N\to\infty}\max_{\Upsilon}\Bigl|\prod_{i=1}^{n'}P[S(T_i)\cap\varphi(K) = \emptyset\,|\,\Upsilon] - \exp(-2u\,\mathrm{BCap}(K))\Bigr| = 0. \tag{3.3.52}
\end{align}
The proof of (3.3.50) is easy:
\[
\Bigl|P[S(T)\cap\varphi(K) = \emptyset\,|\,\Upsilon] - P\Bigl[\bigl(\cup_{i=1}^{n'}S(T_i)\bigr)\cap\varphi(K) = \emptyset\,\Big|\,\Upsilon\Bigr]\Bigr|
\le P\Bigl[S\bigl(T\setminus(\cup_{i=1}^{n'}T_i)\bigr)\cap\varphi(K)\ne\emptyset\,\Big|\,\Upsilon\Bigr]\le N^{d-\epsilon}\,\frac{|K|}{N^{d}}\to 0.
\]
The last inequality is due to Condition 2 in Lemma 3.3.10, the union bound, and the fact that $S(v)$ is uniformly distributed on $\mathbb T_N$ for every $v\in T$.

For (3.3.52): by Conditions 1 and 4 in Lemma 3.3.10, we know that $|T_i|\in[N^{a_1}, N^{a_2}]$ and that, conditioned on its size, $T_i$ is uniform on $A_{|T_i|}$. Hence we can apply Proposition 3.3.3. Together with Condition 2, one gets (3.3.52).

Now we turn to (3.3.51). We need the following lemma.

Lemma 3.3.16. There exist positive $c$ and $C$ (depending on the quantities in Lemma 3.3.10 but not on $N$) such that for any $N\in\mathbb N_+$, any $\Upsilon = (n'; L_1, \dots, L_{n'}; t)$ with $p(\Upsilon) > 0$ and any $k\in[[1, n'-1]]$:
\[
\Bigl|P\Bigl[\bigl(\cup_{i=k}^{n'}S(T_i)\bigr)\cap\varphi(K) = \emptyset\,\Big|\,\Upsilon\Bigr] - P[S(T_k)\cap\varphi(K) = \emptyset\,|\,\Upsilon]\times P\Bigl[\bigl(\cup_{i=k+1}^{n'}S(T_i)\bigr)\cap\varphi(K) = \emptyset\,\Big|\,\Upsilon\Bigr]\Bigr|\le C\exp(-cN^{\eta}), \tag{3.3.53}
\]
where $\eta$ is from Lemma 3.3.10.

With this lemma one can use induction to show
\[
\Bigl|P\Bigl[\bigl(\cup_{i=1}^{n'}S(T_i)\bigr)\cap\varphi(K) = \emptyset\,\Big|\,\Upsilon\Bigr] - \prod_{i=1}^{n'}P[S(T_i)\cap\varphi(K) = \emptyset\,|\,\Upsilon]\Bigr|\le(n'-1)\,C\exp(-cN^{\eta}). \tag{3.3.54}
\]
Since $n'$ is bounded by a polynomial in $N$, the right hand side tends to $0$, which implies (3.3.51).

Proof of Lemma 3.3.16. Let $o_1$ and $o_2$ be the two ends of $\iota_k$ (say $o_1\in T_k$). For any $x, y\in\mathbb T_N$, define
\begin{align}
f(x) &= P[S(T_k)\cap\varphi(K) = \emptyset\,|\,S(o_1) = x,\ \Upsilon], \tag{3.3.55}\\
h(y) &= P\Bigl[\bigl(\cup_{i=k+1}^{n'}S(T_i)\bigr)\cap\varphi(K) = \emptyset\,\Big|\,S(o_2) = y,\ \Upsilon\Bigr]. \tag{3.3.56}
\end{align}
By Condition 3, this path separates $T_k$ and $\cup_{i=k+1}^{n'}T_i$, so we have
\[
P\Bigl[\bigl(\cup_{i=k}^{n'}S(T_i)\bigr)\cap\varphi(K) = \emptyset\,\Big|\,S(o_1) = x,\ S(o_2) = y,\ \Upsilon\Bigr] = f(x)\times h(y). \tag{3.3.57}
\]
Therefore,
\begin{align*}
P\Bigl[\bigl(\cup_{i=k}^{n'}S(T_i)\bigr)\cap\varphi(K) = \emptyset\,\Big|\,\Upsilon\Bigr]
&= \sum_{x,y\in\mathbb T_N} f(x)h(y)\,P[S(o_1) = x,\ S(o_2) = y\,|\,\Upsilon]\\
&= N^{-d}\cdot\sum_{x,y\in\mathbb T_N} f(x)h(y)\,P^{\mathrm{SRW}}_x[Z_{\lfloor N^{2+\eta}\rfloor+1} = y],
\end{align*}
where $P^{\mathrm{SRW}}_x$ denotes the law of $Z = (Z_n)_{n\in\mathbb N}$, a simple random walk starting from $x$. Note that
\begin{align}
P[S(T_k)\cap\varphi(K) = \emptyset\,|\,\Upsilon] &= N^{-d}\sum_{x\in\mathbb T_N} f(x); \tag{3.3.58}\\
P\Bigl[\bigl(\cup_{i=k+1}^{n'}S(T_i)\bigr)\cap\varphi(K) = \emptyset\,\Big|\,\Upsilon\Bigr] &= N^{-d}\sum_{y\in\mathbb T_N} h(y). \tag{3.3.59}
\end{align}
Hence the left hand side of (3.3.53) is:
\[
\Bigl|N^{-d}\sum_{x,y\in\mathbb T_N} f(x)h(y)\bigl(P^{\mathrm{SRW}}_x[Z_{\lfloor N^{2+\eta}\rfloor+1} = y]-N^{-d}\bigr)\Bigr|
\le\max_{x\in\mathbb T_N}\sum_{y\in\mathbb T_N}\bigl|P^{\mathrm{SRW}}_x[Z_{\lfloor N^{2+\eta}\rfloor+1} = y]-N^{-d}\bigr|.
\]
Now (3.3.53) follows from the following result in the theory of mixing times (e.g. see Chapter 5 in [16]).

Proposition 3.3.17. Let $\kappa > 2$. There exist positive numbers $c$ and $C$ such that for any odd $N\in\mathbb N_+$ we have:
\[
\max_{x,y\in\mathbb T_N}\bigl|P^{\mathrm{SRW}}_x[Z_{\lfloor N^{\kappa}\rfloor+1} = y]-N^{-d}\bigr|\le C\exp(-cN^{\kappa-2}). \tag{3.3.60}
\]

Remark 3.3.4. The requirement of oddness is due to the periodicity of the simple random walk. If the random walk is lazy, then Proposition 3.3.17 holds without assuming oddness. Hence, if the branching random walk is lazy, we still have Theorem 3.3.1 without assuming oddness.
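The uniformity estimate behind (3.3.60) is easy to observe numerically. The sketch below (ours; it uses the lazy walk precisely to sidestep the parity issue of Remark 3.3.4, and the constants it exhibits are not those of the proposition) builds the transition matrix of the lazy simple random walk on a small torus $(\mathbb Z/N\mathbb Z)^d$ and prints $\max_x\sum_y|P_x[Z_t = y]-N^{-d}|$ for a few times $t$ of order $N^2$ and larger.

```python
import numpy as np

def lazy_srw_matrix(N, d):
    """Transition matrix of the lazy SRW on the torus (Z/NZ)^d.

    States are encoded as integers v = sum_i c_i * N**i."""
    n = N ** d
    P = np.zeros((n, n))
    for v in range(n):
        c = [(v // N ** i) % N for i in range(d)]
        P[v, v] = 0.5                       # laziness: stay put w.p. 1/2
        for i in range(d):
            for delta in (1, -1):
                w = c.copy()
                w[i] = (w[i] + delta) % N
                P[v, sum(x * N ** i2 for i2, x in enumerate(w))] += 0.5 / (2 * d)
    return P

N, d = 7, 2
P = lazy_srw_matrix(N, d)
for t in (N ** 2, 2 * N ** 2, 4 * N ** 2):
    Q = np.linalg.matrix_power(P, t)
    # max over starting points of the l1 distance to the uniform measure
    print(t, np.abs(Q - 1.0 / N ** d).sum(axis=1).max())
```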
Chapter 4

An optimal strategy for the Majority-Markov game

4.1 Definitions, settings and main result

Our theorem (Theorem 4.1.1) will be stated in terms of Markov games. We adopt some terminology from [6]. Furthermore, our theorem is based on the key object, the 'grade', and its properties from [6].

4.1.1 Markov systems

A Markov system with one target (respectively with two targets) $S = \langle V, P, C, t\rangle$ (resp. $S = \langle V, P, C, t_+, t_-\rangle$) consists of a Markov chain $(V, P)$, a cost function $C : V\to\mathbb R_+$, and a target $t\in V$ (resp. two targets $t_+$ and $t_-$). We assume that the targets are absorbing. We further assume that the state space $V$ is finite and that every target is accessible from every non-target state. The cost of a 'trip' $v(0), v(1), \dots, v(k)$ on $S$ is the sum $\sum_{i=0}^{k-1}C_{v(i)}$ of the costs of the visited states except the last. If $C\equiv 1$, then the cost can be regarded as the time, i.e. the number of steps.

4.1.2 Games

Let $S(1), S(2), \dots, S(n)$ be Markov systems, each with either one or two targets. For each $S(i)$, we fix a starting state $u(i)$ and place a token $i$ at that state. A 'game' consists of Markov systems with tokens on their starting states, together with a stopping rule $\Lambda\subseteq V(1)\times V(2)\times\dots\times V(n)$: the set of configurations at which the game ends.

A single player plays against a 'bank': the player chooses one of the $n$ tokens (say token $i$ at state $v\in V(i)$) to move (according to its transition probability $P(i)$) and pays the cost $C_v(i)$; then chooses and pays again, and so on. When the tokens form a configuration in the stopping rule $\Lambda$, the game ends and the player leaves. We assume that if all tokens are at targets, the game ends (that is, $\Lambda$ contains all configurations in which every coordinate is a target). As targets are absorbing, we may assume that tokens at targets are not allowed to be chosen.

By setting different stopping rules, we obtain different games. A trivial stopping rule is that all tokens are at the targets. For a non-trivial example, [6] considers the simple multitoken game $\mathrm{Sim}(S(1), S(2), \dots, S(n); 1)$, whose stopping rule is that (at least) one of the tokens is at a target. Similarly, we define $\mathrm{Sim}(S(1), S(2), \dots, S(n); k)$ to be the game whose stopping rule is that at least $k$ of the tokens are at targets. In this chapter, we address the 'Majority-Markov' game $\mathrm{Maj}(S(1), \dots, S(2k+1))$ with $n = 2k+1$ Markov systems with two targets, whose stopping rule is '$k+1$ tokens at positive targets or $k+1$ tokens at negative targets'.

4.1.3 Strategies and costs

A strategy tells us how to choose the token to move. Mathematically, by a strategy $\sigma$ we mean a function $\sigma : V(1)\times\dots\times V(n)\setminus\Lambda\to\{1, 2, \dots, n\}$ satisfying $\sigma(u_1, u_2, \dots, u_n)\ne i$ whenever $u_i$ is a target. When the tokens are at the state $u = (u_1, u_2, \dots, u_n)$, under strategy $\sigma$, token $\sigma(u_1, u_2, \dots, u_n)$ is chosen. Note that the constraint means that we cannot choose tokens at targets.

The cost $E[G, \sigma]$ (or simply $E[\sigma]$) is the expected cost (for the player) of playing $G$ under strategy $\sigma$. The cost $E[G]$ of a game $G$ is the minimum expected cost of playing $G$ over all possible strategies. The optimal strategies are those strategies that attain $E[G]$. If we want to emphasize the starting state $u = (u_1, \dots, u_n)$, we use $E_u[G, \sigma]$, $E_u[\sigma]$, or even $E(u)$ (when the game $G$ and the strategy $\sigma$ are clear from the context).
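To make these definitions concrete, here is a minimal Python sketch (entirely ours: the data layout and names are hypothetical, not taken from [6] or [24]) that simulates one run of a game under a given strategy; averaging the returned cost over many runs estimates $E_u[G, \sigma]$.

```python
import random

def play_once(systems, start, strategy, stopped, rng=random):
    """One run of a Markov game; returns the total cost paid.

    systems[i] = (P_i, C_i) with P_i: state -> list of (next_state, prob)
    and C_i: state -> cost; strategy(state) returns the index of the token
    to move (never one at a target); stopped(state) encodes Lambda.
    """
    state = list(start)
    total = 0.0
    while not stopped(tuple(state)):
        i = strategy(tuple(state))
        P, C = systems[i]
        total += C[state[i]]              # pay before moving token i
        r, acc = rng.random(), 0.0
        for nxt, p in P[state[i]]:
            acc += p
            if r < acc:
                state[i] = nxt
                break
    return total

# E_u[G, sigma] ~ sum(play_once(...) for _ in range(10**4)) / 10**4
```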
4.1.4 Grades and positive-(negative-)grades

For a Markov system with one target $S = \langle V, P, C, t\rangle$, a state $u\ne t$ (where the token is), and a positive real number $g$, consider a modified game where the player can leave at the target $t$ for free as usual, or leave at any other state by paying $g$ dollars. This can be defined using our terminology by adding a Markov system $T_g$. Define the terminator $T_g$ as the Markov system $\langle\{s, t\}, P, g, t\rangle$ with starting state $s$, where $p_{s,t} = 1$. The terminator always hits its target in exactly one step, at cost $g$. The modified game is now the simple Markov game $\mathrm{Sim}(S, T_g; 1)$. One can imagine that when $g$ is small enough, the optimal strategy is to leave by paying $g$, and when $g$ is large enough, it is optimal to move the token in the system $S$ until it hits the target. The grade $\gamma_u(S)$ of the system $S$ for state $u$ is defined to be the unique value of $g$ at which an optimal player is indifferent between the two possible first moves in the game $\mathrm{Sim}(S, T_g; 1)$. Naturally, we set $\gamma_t(S) = 0$. It is possible to compute $\gamma$ in polynomial time (see [6] for this and for more properties of grades).

For a Markov system with two targets $S = \langle V, P, C, t_+, t_-\rangle$, we define the positive-grade $\gamma^+_u(S)$ of a state $u\in V\setminus\{t_-\}$ to be the grade of $u$ in $S_+ = \langle V, P, C, t_+\rangle$, and the negative-grade $\gamma^-_u(S)$ of $u\in V\setminus\{t_+\}$ to be the grade of $u$ in $S_- = \langle V, P, C, t_-\rangle$. For convenience, we set $\gamma^-_{t_+}(S) = \gamma^+_{t_-}(S) = \infty$. Note that, at every state, either the positive-grade or the negative-grade is a (finite) nonnegative number.
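Since the grade is defined by an indifference point, it can be approximated directly from the definition. The following Python sketch is our own (it is not the exact polynomial-time algorithm of [6]): it computes the optimal cost of $\mathrm{Sim}(S, T_g; 1)$ by value iteration, where at each state the player either leaves for $g$ or takes one more step in $S$, and then bisects over $g$ for the indifference at $u$.

```python
def sim_value(P, C, t, g, sweeps=2000):
    """V_g(v): optimal cost of Sim(S, T_g; 1) for the token at v.
    Bellman form: V(t) = 0, V(v) = min(g, C[v] + sum_w p(v, w) V(w))."""
    V = {v: 0.0 for v in P}
    for _ in range(sweeps):
        for v in P:
            if v != t:
                V[v] = min(g, C[v] + sum(p * V[w] for w, p in P[v]))
    return V

def grade(P, C, t, u, hi=1e9, steps=100):
    """Bisect for the g at which, from u, stepping in S costs exactly g."""
    lo = 0.0
    for _ in range(steps):
        g = (lo + hi) / 2
        V = sim_value(P, C, t, g)
        step_cost = C[u] + sum(p * V[w] for w, p in P[u])
        lo, hi = (g, hi) if step_cost >= g else (lo, g)
    return (lo + hi) / 2

# e.g. a toy chain u -> v, v -> {u, t}, unit costs:
# P = {'u': [('v', 1.0)], 'v': [('u', 0.5), ('t', 0.5)], 't': [('t', 1.0)]}
# C = {'u': 1.0, 'v': 1.0, 't': 0.0}; grade(P, C, 't', 'u')
```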
4.1.5 Main result

Theorem 4.1.1. A strategy for the Majority-Markov game $\mathrm{Maj}(S(1), \dots, S(2k+1))$ is optimal if and only if it always plays in a system in which neither the positive-grade (of the position of the token) is larger than the median of all $2k+1$ positive-grades, nor the negative-grade is larger than the median of all $2k+1$ negative-grades.

4.2 Some known results about Markov games

It turns out that every Markov game (at least in our sense, i.e. when the stopping rule contains all configurations in which all coordinates are targets) has an optimal strategy. We refer the reader to [24] for more details.

From a given state $u = (u_1, \dots, u_n)$ of a Markov game (with $n$ Markov systems), an action $\alpha\in\{1, \dots, n\}$ (meaning: choose token $\alpha$) incurs an immediate cost $C_u(\alpha)$ (more precisely, $C_{u_\alpha}(\alpha)$) and a probability distribution $\{p_{u,\bullet}(\alpha)\}$ for the next state. Therefore a strategy $\sigma$ with action $\alpha$ at the state $u$ satisfies:
\[
E_u[\sigma] = C_u(\alpha)+\sum_v p_{u,v}(\alpha)E_v[\sigma].
\]
If, among all possible actions at state $u$, $\alpha$ is a minimizer of the right-hand side of this expression, then $\sigma$ is said to be consistent at $u$.

Proposition 4.2.1. A strategy is optimal if and only if it is consistent at every state ($\notin\Lambda$).

For the expected cost function, we have:

Proposition 4.2.2. For the game $G$ consisting of Markov systems $S(1), \dots, S(n)$ with stopping rule $\Lambda$, if a function $E : V(1)\times\dots\times V(n)\to\mathbb R_+$ satisfies $E|_{\Lambda}\equiv 0$ and
\[
E_u = \min_{\alpha}\Bigl\{C_u(\alpha)+\sum_v p_{u,v}(\alpha)E_v\Bigr\}\qquad\forall u\in V(1)\times\dots\times V(n)\setminus\Lambda, \tag{4.2.1}
\]
where the minimum is over all possible actions $\alpha$ at state $u$, then $E$ is the (unique) cost function of $G$, and a strategy is optimal if and only if, at every state $u\in V(1)\times\dots\times V(n)\setminus\Lambda$, it takes an action attaining the minimum.

The game $\mathrm{Sim}(S(1), \dots, S(n); 1)$ is analyzed in [6], where the optimal strategy is established. Their argument also works for $\mathrm{Sim}(S(1), \dots, S(n); k)$ with minor modifications.

Proposition 4.2.3. A strategy for the game $\mathrm{Sim}(S(1), S(2), \dots, S(n); k)$ is optimal if and only if it always plays in a system whose current grade is not larger than the $k$-th smallest grade.

4.3 Proof of the main theorem

We have $n = 2k+1$ Markov systems: $S(i) = \langle V(i), P(i), C(i), t_+, t_-\rangle$, $i = 1, \dots, n$. For notational ease, we identify all positive (negative) targets. Now we recall, and define, some games by giving their stopping rules:

- $G_M$: $k+1$ tokens at positive targets, or $k+1$ tokens at negative targets;
- $G_0$: all $n$ tokens at targets;
- $G_+$: $k+1$ tokens at positive targets, or all $n$ tokens at targets;
- $G_-$: $k+1$ tokens at negative targets, or all $n$ tokens at targets;
- $G_{S+}$: $k+1$ tokens at positive targets (redefine $p_{t_-,t_+} = 1$ and $C_{t_-} = C_0$ for all Markov systems);
- $G_{S-}$: $k+1$ tokens at negative targets (redefine $p_{t_+,t_-} = 1$ and $C_{t_+} = C_0$ for all Markov systems).

Here, $C_0$ is a large real number (larger than all positive-grades and negative-grades of non-target states).

Let $E_M$, $E_0$, $E_{\pm}$, $E_{S\pm}$ be the expected cost functions of the corresponding games. The following lemma is natural in light of Proposition 4.2.3.

Lemma 4.3.1. A strategy for $G_+$ is optimal if and only if it always plays in a system whose current positive-grade is not larger than the median of all positive-grades. For the game $G_-$, we have the analogous conclusion.

Proof. For simplicity, consider the case $k = 1$; the general case is similar. First consider $G_{S+}$. Note that $G_{S+}$ is a simple multitoken game. With Proposition 4.2.3 in mind, we need to consider the grade corresponding to the target $t_+$. We point out that since $C_0$ is larger than any other positive-grade, changing the transition probability from $t_-$ does not change the positive-grades of the other states. Hence for the other states, the new grade is the same as the old positive-grade (in $G_+$).

Under the adjustment $p_{t_-,t_+} = 1$, $t_+$ is accessible from $t_-$, so we can apply Proposition 4.2.3 to $G_{S+}$. The optimal strategies are those moving a token whose grade is not larger than the median. Now $C_0$ is the positive-grade of $t_-$, which is larger than all other positive-grades. Hence, under any optimal strategy, we can avoid playing tokens at $t_-$ unless all tokens are at $t_{\pm}$. So, if we forbid choosing tokens at $t_-$ unless all tokens are at targets, the cost of $G_{S+}$ remains the same. Under this restriction, at any state $u\notin B = \{u = (u_1, u_2, u_3) : u_i = t_{\pm},\ i = 1, 2, 3\}$, the games $G_+$ and $G_{S+}$ have the same possible actions.

Consider the expected cost functions $E_+$ and $E_{S+}$ of the games $G_+$ and $G_{S+}$. One can easily find their values on the boundary set (denoted by $\partial\Lambda$):
\[
\begin{array}{c|c|c}
\partial\Lambda & E_+ & E_{S+}\\\hline
(t_+, t_+, u_3) & 0 & 0\\
(t_+, u_2, t_+) & 0 & 0\\
(u_1, t_+, t_+) & 0 & 0\\
(t_-, t_-, t_+) & 0 & C_0\\
(t_-, t_+, t_-) & 0 & C_0\\
(t_+, t_-, t_-) & 0 & C_0\\
(t_-, t_-, t_-) & 0 & 2C_0
\end{array}
\]
In order to remove the difference between $E_+$ and $E_{S+}$, we introduce $E_{L+} : V(1)\times V(2)\times V(3)\to[0,+\infty)$ by:
\[
E_{L+}(u_1, u_2, u_3) = \bigl(2p^-_{u_1}p^-_{u_2}p^-_{u_3}+(p^+_{u_1}p^-_{u_2}p^-_{u_3}+p^-_{u_1}p^+_{u_2}p^-_{u_3}+p^-_{u_1}p^-_{u_2}p^+_{u_3})\bigr)\times C_0, \tag{4.3.1}
\]
where $p^+_{u_i}$ denotes the probability that a token starting from $u_i$ visits $t_+$ (before visiting $t_-$), and similarly for $p^-_{u_i}$.

Since $p^+_{t_+} = p^-_{t_-} = 1$ and $p^+_{t_-} = p^-_{t_+} = 0$, we can check that $E_{L+}(u) = E_{S+}(u)$ for any $u\in\partial\Lambda$. On the other hand, using the equalities for hitting probabilities, $p^{\pm}_{u_i} = \sum_{v_i}p_{u_i,v_i}(i)\,p^{\pm}_{v_i}$, we can see that $E_{L+}$ satisfies the linear part of (4.2.1):
\[
E_{L+}(u) = \sum_v p_{u,v}(i)\,E_{L+}(v)\qquad\text{for any } i \text{ with } u_i\ne t_{\pm}. \tag{4.3.2}
\]
Because $E_{S+}$ satisfies (4.2.1) and $E_{L+}$ satisfies the linear part of (4.2.1), $E_{S+}-E_{L+}$ also satisfies (4.2.1). On the other hand, $E_{S+}-E_{L+}$ and $E_+$ are equal to $0$ on $\partial\Lambda$.
By the uniqueness in Proposition 4.2.2, $E_{S+}-E_{L+} = E_+$. In particular, the actions attaining the minimum in (4.2.1) for $E_+$ and for $E_{S+}$ are the same. Hence $G_+$ and $G_{S+}$ have the same optimal strategies. This completes the proof of the first assertion. By symmetry, one gets the other assertion.

Remark 4.3.1. For general $k$, one should use:
\begin{align*}
E_{L+}(u) &= C_0\cdot E_u\bigl(\max\{0,\ (\text{the number of tokens hitting } t_-) - k\}\bigr)\\
&= C_0\cdot\sum_{\tau_1\in\{+,-\},\dots,\tau_n\in\{+,-\}}\max\Bigl\{0,\ \Bigl(\sum_i\mathbf 1_{\tau_i = -}\Bigr)-k\Bigr\}\,p^{\tau_1}_{u_1}\cdots p^{\tau_n}_{u_n}.
\end{align*}

Now we can build a connection between $G_M$, $G_0$ and $G_{\pm}$.

Claim 4.3.2.
\[
E_M = E_+ + E_- - E_0. \tag{4.3.3}
\]

For any strategy $\sigma$ for the game $G_M$, we can use it to play any of $G_+$, $G_-$ and $G_0$: use the strategy $\sigma$ until the stopping rule for $G_M$ is triggered; if at that time the game has not yet ended, the stopping rule switches to the trivial one, i.e. all tokens at targets. Hence the subsequent cost after $G_M$ ends is independent of the strategy. Note first that $G_+$, $G_-$ and $G_0$ do not end before $G_M$ ends, since $\Lambda_+, \Lambda_-, \Lambda_0\subseteq\Lambda_M$. If $G_M$ ends before all tokens reach targets, exactly one of $G_+$ and $G_-$ ends at that time, and the other ends at the same time as $G_0$, i.e. when all tokens reach targets; if $G_M$ ends when all tokens are at targets, then all four games end at that time. Consider the game pairs $(G_M, G_0)$ and $(G_+, G_-)$. We see that (under any strategy) when one game of the left pair ends, one game of the right pair also ends, when the other game of the left pair ends, the other of the right pair ends, and vice versa. Note that at each step we pay the same amount of money in each pair. Hence, one gets:
\[
E_M[\sigma]+E_0[\sigma] = E_+[\sigma]+E_-[\sigma]\ \ge\ E_+ + E_-.
\]
Therefore,
\[
E_M[\sigma]\ \ge\ E_+ + E_- - E_0[\sigma] = E_+ + E_- - E_0.
\]
The last equality is due to the fact that the cost of the trivial game, $E_0$, is independent of the strategy.

Furthermore, equality holds if and only if $E_+[\sigma] = E_+$ and $E_-[\sigma] = E_-$, that is, if and only if $\sigma$ is optimal for both $G_+$ and $G_-$. By Lemma 4.3.1, such strategies exist: among the $2k+1$ systems, $k+1$ satisfy the positive-median condition and $k+1$ satisfy the negative-median condition, so at every state some system satisfies both. Hence the equality is attained, which proves (4.3.3), and $\sigma$ is an optimal strategy for $G_M$ if and only if it is an optimal strategy for both $G_+$ and $G_-$. Therefore, by Lemma 4.3.1, we finish the proof of the main result.
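Claim 4.3.2 is easy to sanity-check numerically on a toy instance. The Python sketch below (ours; the three-token example and all names are made up for illustration) computes $E_M$, $E_0$, $E_+$, $E_-$ by value iteration on the Bellman equation (4.2.1) and compares $E_M$ with $E_+ + E_- - E_0$.

```python
from itertools import product

def optimal_cost(P, C, targets, n, stopped, sweeps=3000):
    """Value iteration for E[G] on n copies of one two-target system,
    per the Bellman equation (4.2.1); tokens at targets cannot be moved."""
    states = list(product(P.keys(), repeat=n))
    V = dict.fromkeys(states, 0.0)
    for _ in range(sweeps):
        for s in states:
            if stopped(s):
                continue
            V[s] = min(C[s[i]] + sum(p * V[s[:i] + (w,) + s[i + 1:]]
                                     for w, p in P[s[i]])
                       for i in range(n) if s[i] not in targets)
    return V

# Toy system: '-' <- a <-> b -> '+', unit costs, absorbing targets.
P = {'-': [('-', 1.0)], '+': [('+', 1.0)],
     'a': [('-', 0.5), ('b', 0.5)], 'b': [('a', 0.5), ('+', 0.5)]}
C = {'a': 1.0, 'b': 1.0, '-': 0.0, '+': 0.0}
T = {'-', '+'}
n, k = 3, 1                                  # n = 2k + 1 tokens

plus  = lambda s: sum(x == '+' for x in s) > k
minus = lambda s: sum(x == '-' for x in s) > k
done  = lambda s: all(x in T for x in s)

EM = optimal_cost(P, C, T, n, lambda s: plus(s) or minus(s))
E0 = optimal_cost(P, C, T, n, done)
Ep = optimal_cost(P, C, T, n, lambda s: plus(s) or done(s))
Em = optimal_cost(P, C, T, n, lambda s: minus(s) or done(s))

u = ('a', 'a', 'b')
print(EM[u], Ep[u] + Em[u] - E0[u])          # the two values should agree, cf. (4.3.3)
```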
Bibliography

[1] David Aldous. The continuum random tree. II. An overview. In Stochastic analysis (Durham, 1990), volume 167 of London Math. Soc. Lecture Note Ser., pages 23–70. Cambridge Univ. Press, Cambridge, 1991.

[2] Omer Angel, Balázs Ráth, and Qingsan Zhu. Local limit of branching random walk on the torus and branching interlacements. In preparation.

[3] Itai Benjamini and Nicolas Curien. Recurrence of the Z^d-valued infinite snake via unimodularity. Electron. Commun. Probab., 17: no. 1, 10, 2012.

[4] George Bennett. Probability inequalities for the sum of independent random variables. Journal of the American Statistical Association, 57(297):33–45, 1962.

[5] Alexander Drewitz, Balázs Ráth, and Artëm Sapozhnikov. Lecture Notes on Random Interlacements.

[6] Ioana Dumitriu, Prasad Tetali, and Peter Winkler. On playing golf with two balls. SIAM J. Discrete Math., 16(4):604–615, 2003.

[7] Svante Janson. Random cutting and records in deterministic and random trees. Random Structures Algorithms, 29(2):139–179, 2006.

[8] Svante Janson and Jean-François Marckert. Convergence of discrete snakes. J. Theoret. Probab., 18(3):615–647, 2005.

[9] Steven P. Lalley and Xinghua Zheng. Occupation statistics of critical branching random walks in two or higher dimensions. Ann. Probab., 39(1):327–368, 2011.

[10] Gregory F. Lawler. Intersections of random walks. Modern Birkhäuser Classics. Birkhäuser/Springer, New York, 2013. Reprint of the 1996 edition.

[11] Gregory F. Lawler and Vlada Limic. Random walk: a modern introduction, volume 123 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2010.

[12] Jean-François Le Gall. Spatial branching processes, random snakes and partial differential equations. Lectures in Mathematics ETH Zürich. Birkhäuser Verlag, Basel, 1999.

[13] Jean-François Le Gall and Shen Lin. The range of tree-indexed random walk in low dimensions. Ann. Probab., 43(5):2701–2728, 2015.

[14] Jean-François Le Gall and Shen Lin. The range of tree-indexed random walk. J. Inst. Math. Jussieu, 15(2):271–317, 2016.

[15] Jean-François Le Gall and Grégory Miermont. Scaling limits of random trees and planar maps. In Probability and statistical physics in two and more dimensions, volume 15 of Clay Math. Proc., pages 155–211. Amer. Math. Soc., Providence, RI, 2012.

[16] David A. Levin, Yuval Peres, and Elizabeth L. Wilmer. Markov chains and mixing times. American Mathematical Society, Providence, RI, 2009. With a chapter by James G. Propp and David B. Wilson.

[17] Andrei S. Mishchenko. Discrete Bessel process and its properties. Theory Probab. Appl., 50:700–709, 2005.

[18] Zhan Shi. Branching random walks, volume 2151 of Lecture Notes in Mathematics. Springer, Cham, 2015. Lecture notes from the 42nd Probability Summer School held in Saint-Flour, 2012, École d'Été de Probabilités de Saint-Flour.

[19] Vladas Sidoravicius and Alain-Sol Sznitman. Percolation for the vacant set of random interlacements. Comm. Pure Appl. Math., 62(6):831–858, 2009.

[20] Frank Spitzer. Principles of random walk. Springer-Verlag, New York–Heidelberg, second edition, 1976. Graduate Texts in Mathematics, Vol. 34.

[21] Alain-Sol Sznitman. Vacant set of random interlacements and percolation. Ann. of Math. (2), 171(3):2039–2087, 2010.

[22] Augusto Q. Teixeira. From random walk trajectories to random interlacements. Ensaios Matemáticos, 23:1–78, 2012.

[23] Kôhei Uchiyama. Green's functions for random walks on Z^N. Proc. London Math. Soc. (3), 77(1):215–240, 1998.

[24] Douglas J. White. Markov decision processes. John Wiley & Sons, Ltd., Chichester, 1993.

[25] David Windisch. Random walk on a discrete torus and random interlacements. Electron. Commun. Probab., 13:140–150, 2008.

[26] Qingsan Zhu. On the critical branching random walk I: branching capacity and visiting probability. Preprint, arXiv:1611.10324.

[27] Qingsan Zhu. On the critical branching random walk II: branching capacity and branching recurrence. Preprint, arXiv:1612.00161.

[28] Qingsan Zhu. On the critical branching random walk III: the critical dimension. Preprint, arXiv:1701.08917.

[29] Qingsan Zhu. An upper bound for the probability of visiting a distant point by critical branching random walk in Z^4. Preprint, arXiv:1503.00305.

Appendix A

Sketch of Proof of Lemma 2.1.3

Proof. Without loss of generality, one can assume $\theta$ is aperiodic. The first step is to show:

• There is a $\delta\in(0, 0.1)$ such that, for any $\epsilon > 0$ small enough and $m\in\mathbb N_+$ large enough (depending on $\epsilon$), we can find $c_1 = c_1(\epsilon)$ such that, for any $n\in[\epsilon m^2, 2\epsilon m^2]$ and $z, w\in\mathcal C(3\delta m)$, we have:
\[
p^m_n(z, w) := \sum_{\gamma:\, z\to w,\ \gamma\subseteq\mathcal C(m),\ |\gamma| = n} s(\gamma)\ \ge\ c_1\cdot m^{-d}. \tag{A.0.1}
\]
Indeed, the Markov property implies that:
\[
p^m_n(z, w)\ \ge\ P(S^z(n) = w)-\max\{P(S^y(k) = w) : k\le n,\ y\in(\mathcal C(m))^c\},
\]
and the LCLT establishes (A.0.1).
Using this estimate, one can see that:

• For any $\epsilon > 0$ small enough and $m\in\mathbb N_+$ large enough, we can find $c_2 = c_2(\epsilon)$ such that, for any $z, w\in\mathcal C(3\delta m)$, we have (we write $\mathcal C_x(r)$ for the ball centered at $x$ with radius $r$):
\begin{align}
\sum_{\gamma:\, z\to w,\ |\gamma|\le 2\epsilon m^2,\ \gamma\subseteq\mathcal C(m)} s(\gamma)&\ \ge\ c_2\, m^{2-d}; \tag{A.0.2}\\
\sum_{\gamma:\, z\to\mathcal C_w(\delta m/10),\ |\gamma|\le 2\epsilon m^2,\ \gamma\subseteq\mathcal C(m)} s(\gamma)&\ \ge\ c_2\, m^{2}.\notag
\end{align}
Note that in the first assertion, the left hand side is increasing in $m$ when $z, w$ are fixed. Due to this fact, one can get:

• For any $\epsilon > 0$ small enough and $m\in\mathbb N_+$ large enough, we can find $c_3 = c_3(\epsilon)$ such that, for any $z, w\in\mathcal C(3\delta m)$:
\[
\sum_{\gamma:\, z\to w,\ |\gamma|\le 2\epsilon m^2,\ \gamma\subseteq\mathcal C(m)} s(\gamma)\ \ge\ c_3\|z-w\|^{2-d}. \tag{A.0.3}
\]
By considering the first visit to $\mathcal C_w(\delta m/10)$, one can get:
\[
\sum_{\gamma:\, z\to\mathcal C_w(\delta m/10),\ |\gamma|\le 2\epsilon m^2,\ \gamma\subseteq\mathcal C(m)\setminus\mathcal C_w(\delta m/10)} s(\gamma)
\ \ge\ \frac{\displaystyle\sum_{\gamma:\, z\to\mathcal C_w(\delta m/10),\ |\gamma|\le 2\epsilon m^2,\ \gamma\subseteq\mathcal C(m)} s(\gamma)}{\max\{g(x,\mathcal C(\delta m/10)) : x\in\mathcal C(\delta m/10)\}}\ \gtrsim\ \frac{m^2}{m^2}\ \asymp\ 1.
\]
Hence we have:

• For any $\epsilon > 0$ small enough and $m\in\mathbb N_+$ large enough, we can find $c_4 = c_4(\epsilon)$ such that, for any $z, w\in\mathcal C(3\delta m)$:
\[
\sum_{\gamma:\, z\to\mathcal C_w(\delta m/10),\ |\gamma|\le 2\epsilon m^2,\ \gamma\subseteq\mathcal C(m)\setminus\mathcal C_w(\delta m/10)} s(\gamma)\ \ge\ c_4. \tag{A.0.4}
\]

Now we are ready to prove the lemma. Without loss of generality, assume $\rho(U, V^c) = 1$. First, choose a finite number of balls of radius $\delta$ with centers in $U$, say $B_1, B_2, \dots, B_k$, covering $U$. Choose $\epsilon$ small enough for (A.0.2), (A.0.3), (A.0.4), and with $\epsilon < 1/k$. Now we argue that when $n$ is sufficiently large, (2.1.5) holds.

Write $B'_i = nB_i\cap\mathbb Z^d$ and $\bar B'_i = n\bar B_i\cap\mathbb Z^d$ for $i = 1, \dots, k$, where $\bar B_i$ is the ball with radius $1$ and the same center as $B_i$. When $\|x-y\|\le 2\delta n$, (2.1.5) follows from (A.0.3). Otherwise $x, y$ are not in the same $B'_i$; however, we can find at most $k+1$ points $x_0 = x, x_1, \dots, x_l = y$ ($l\le k$) such that $x_j$ and $x_{j+1}$ lie in the same $B'_i$, say $B'_{(j)}$. Note that when $z, w$ are in the same $B'_i$, by (A.0.4), for any $z'\in\mathcal C_z(\delta n/10)$,
\[
\sum_{\gamma:\, z'\to\mathcal C_w(\delta n/10),\ |\gamma|\le 2\epsilon n^2,\ \gamma\subseteq\bar B'_i\setminus\mathcal C_w(\delta n/10)} s(\gamma)\ \ge\ c_4.
\]
Hence, by connecting paths, one can get:
\begin{align*}
\sum_{\gamma:\, x\to y,\ \gamma\subseteq B_n,\ |\gamma|\le 2n^2} s(\gamma)
&\ge \sum_{\gamma_0:\, x_0\to\mathcal C_{x_1}(\delta n/10),\ |\gamma_0|\le 2\epsilon n^2,\ \gamma_0\subseteq\bar B'_{(0)}\setminus\mathcal C_{x_1}(\delta n/10)} s(\gamma_0)
\cdot{\sum}_1 s(\gamma_1)\cdots{\sum}_{l-2} s(\gamma_{l-2})
\cdot\sum_{\gamma_{l-1}:\, \hat\gamma_{l-2}\to y,\ |\gamma_{l-1}|\le 2\epsilon n^2,\ \gamma_{l-1}\subseteq\bar B'_{(l-1)}} s(\gamma_{l-1})\\
&\ge (c_4)^{l-1}\cdot c_2\, n^{2-d}\ \ge\ (c_4)^{k}c_2\, n^{2-d}\ \gtrsim\ g(x,y),
\end{align*}
where $\hat\gamma_j$ denotes the endpoint of $\gamma_j$, and where ${\sum}_j$ stands for
\[
\sum_{\gamma_j:\, \hat\gamma_{j-1}\to\mathcal C_{x_{j+1}}(\delta n/10),\ |\gamma_j|\le 2\epsilon n^2,\ \gamma_j\subseteq\bar B'_{(j)}\setminus\mathcal C_{x_{j+1}}(\delta n/10)}\qquad\text{for } j = 1, \dots, l-2.
\]
