UBC Theses and Dissertations


A macroscopic view of two discrete random models, by Alma Sarai Hernandez Torres, 2020



A Macroscopic View of Two Discrete Random Models

by Alma Sarai Hernandez Torres

B.Sc. Mathematics, Universidad de Guanajuato, 2014
M.Sc. Mathematics, The University of British Columbia, 2016

A thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Faculty of Graduate and Postdoctoral Studies (Mathematics)

The University of British Columbia (Vancouver)

August 2020

© Alma Sarai Hernandez Torres, 2020

The following individuals certify that they have read, and recommend to the Faculty of Graduate and Postdoctoral Studies for acceptance, the thesis entitled:

A Macroscopic View of Two Discrete Random Models

submitted by Alma Sarai Hernandez Torres in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Mathematics.

Examining Committee:
Omer Angel, Mathematics (Supervisor)
Martin Barlow, Mathematics (Supervisor)
Mathav Murugan, Mathematics (Supervisory Committee Member)
Lior Silberman, Mathematics (University Examiner)
Nick Harvey, Computer Science (University Examiner)
Russell Lyons, Indiana University (External Examiner)

Abstract

This thesis investigates the large-scale behaviour emerging in two discrete models: the uniform spanning tree on Z^3 and the chase-escape with death process.

Uniform spanning trees

We consider the uniform spanning tree (UST) on Z^3 as a measured, rooted real tree, continuously embedded into Euclidean space. The main result is on the existence of subsequential scaling limits and convergence under dyadic scalings. We study properties of the intrinsic distance and the measure of the subsequential scaling limits, and the behaviour of the random walk on the UST. An application of Wilson's algorithm, used in the study of scaling limits, is also instrumental in a related problem.
We show that the number of spanning clusters of the three-dimensional UST is tight under scalings of the lattice.

Chase-escape with death

Chase-escape is a competitive growth process in which red particles spread to adjacent uncoloured sites while blue particles overtake adjacent red particles. We propose a variant of the chase-escape process called chase-escape with death (CED). When the underlying graph of CED is a d-ary tree, we show the existence of critical parameters and characterize the phase transitions.

Lay Summary

Statistical mechanics states that natural phenomena arise as the average behaviour of a large number of particles with random interactions. A central endeavour in probability theory is to establish a mathematical foundation for this paradigm. Our objective is to obtain precise relations between the microscopic and macroscopic descriptions of a phenomenon. This thesis is a contribution to the task. In particular, we are interested in the macroscopic properties emerging in two discrete random models. In this work, we study the "uniform spanning tree" and the "chase-escape with death" process. The first one is a combinatorial model that provides insights into other models in statistical mechanics. In a different setting, "chase-escape with death" mimics the behaviour of predators chasing prey in space, or the spread of a rumour throughout a social network.

Preface

Part I is the introduction for this thesis. Chapter 1 is an overview, while Chapters 2, 3 and 4 are surveys on background material.

Part II presents original research on uniform spanning trees. Chapter 5 and Chapter 6 are based on the preprints "Scaling limits of the three-dimensional uniform spanning tree and associated random walk" [11] and "The number of spanning clusters of the uniform spanning tree in three dimensions" [10], respectively.
Our work in [10] will appear in the proceedings of "The 12th Mathematical Society of Japan, Seasonal Institute (MSJ-SI) Stochastic Analysis, Random Fields and Integrable Probability", while [11] is under review for publication. The research leading to these was an equal collaboration between Omer Angel, David Croydon, Daisuke Shiraishi, and myself. The writing of [11] and [10] was done in equal parts between Omer Angel, David Croydon, Daisuke Shiraishi, and myself.

Part III is original work on competitive growth processes. The research and writing were conducted in equal collaboration with Erin Beckman, Keisha Cook, Nicole Eikmeier and Matthew Junge. Chapter 7 is based on "Chase-escape with death on trees" [25] and has been submitted for publication.

Table of Contents

Abstract
Lay Summary
Preface
Table of Contents
List of Tables
List of Figures
List of Symbols
Acknowledgements
Dedication

I Introduction

1 Discrete and Continuous Probability Models
1.1 Scaling limits as weak convergence of probability measures
1.2 Phase transitions
1.3 Structure of this thesis

2 Random Walks
2.1 Notation
2.1.1 Subsets
2.1.2 Paths and curves
2.1.3 Constants and asymptotic notation
2.2 Simple random walk
2.2.1 Recurrence and transience
2.2.2 Harmonic measure and hitting probabilities
2.2.3 Scaling limit of the simple random walk
2.3 Loop-erased random walks
2.3.1 The infinite loop-erased random walk
2.3.2 Growth exponent
2.3.3 Hitting probabilities
2.4 Scaling limits of loop-erased random walks
2.4.1 Two dimensions
2.4.2 Three dimensions
2.4.3 Four and higher dimensions

3 Uniform Spanning Forests
3.1 Definition and basic properties
3.2 Relation to other models
3.2.1 The random-cluster model
3.3 Sampling algorithms
3.3.1 Wilson's algorithm
3.3.2 Aldous-Broder algorithm
3.3.3 Interlacement Aldous-Broder algorithm
3.4 Encodings of uniform spanning trees
3.4.1 Paths
3.4.2 Graphs as metric spaces
3.5 Scaling limits of uniform spanning trees
3.5.1 Finite graphs
3.5.2 Infinite graphs

4 Competitive Growth Processes
4.1 Markov processes
4.2 Interacting particle systems
4.3 Richardson models
4.3.1 The one-type Richardson model
4.3.2 The two-type Richardson model
4.3.3 Multiple type Richardson model
4.4 First passage percolation
4.4.1 First passage competition models
4.4.2 Geodesics
4.5 Chase-escape process
4.5.1 Phase transitions

II Scaling Limits of Uniform Spanning Trees

5 Scaling Limit of the Three-Dimensional Uniform Spanning Tree
5.1 Introduction
5.1.1 Scaling limits of the three-dimensional UST
5.1.2 Properties of the scaling limit
5.1.3 Scaling the random walk on U
5.2 Topological framework
5.2.1 Path ensembles
5.3 Loop-erased random walks
5.3.1 Notation for Euclidean subsets
5.3.2 Notation for paths and curves
5.3.3 Definition and parameterization of loop-erased random walks
5.3.4 Path properties of the infinite loop-erased random walk
5.3.5 Loop-erased random walks on polyhedrons
5.3.6 Proof of Proposition 5.3.9
5.4 Checking the assumptions sufficient for tightness
5.4.1 Assumption 1
5.4.2 Assumption 2
5.4.3 Assumption 3
5.5 Exponential lower tail bound on the volume
5.6 Exponential upper tail bound on the volume
5.7 Convergence of finite-dimensional distributions
5.7.1 Parameterized trees
5.7.2 The scaling limit of subtrees of the UST
5.7.3 Parameterized trees and random walks
5.7.4 Essential branches of parameterized trees
5.7.5 Proof of Theorem 5.7.2
5.7.6 Proof of Proposition 5.7.9
5.8 Proof of tightness and subsequential scaling limit
5.9 Properties of the limiting space
5.10 Simple random walk and its diffusion limit

6 Spanning Clusters of the Uniform Spanning Tree in Three Dimensions
6.1 Introduction
6.2 Notation
6.3 Proof of the main result

III Competitive Growth Processes

7 Chase-Escape with Death on Trees
7.1 Introduction
7.1.1 Results
7.1.2 Proof methods
7.1.3 History and context
7.2 CED on the line
7.3 Properties of weighted Catalan numbers
7.3.1 Preliminaries
7.3.2 Properties of f and M
7.4 M and CED
7.5 Proofs of Theorems 7.1.1, 7.1.2, and 7.1.3
7.6 Proof of Theorem 7.1.4
7.7 Proof of Theorem 7.1.5
7.7.1 A lower bound on C^{λ,ρ}_k
7.7.2 An upper bound on C^{λ,ρ}_k
7.7.3 A finite runtime algorithm

8 Conclusions
8.1 Contributions
8.1.1 Uniform spanning trees
8.1.2 Competitive growth process
8.2 Future directions
8.2.1 Uniform spanning trees
8.2.2 Competitive growth models

Bibliography

List of Tables

Table 5.1 Exponents associated with the LERW and UST in two and three dimensions. The two-dimensional exponents are known rigorously from [20, 21, 24, 95]. The three-dimensional values are based on the results of this study, together with the numerical estimate for the growth exponent of the three-dimensional LERW from [170].

List of Figures

Figure 5.1 A realisation of the UST in a three-dimensional box, as embedded into R^3 (left), and drawn as a planar graph tree (right). Source code adapted from a two-dimensional version by Mike Bostock.

Figure 5.2 In the above sketch, |x − x′| = 1, but the path in the UST between these points has Euclidean diameter greater than R/3. We expect that such pairs of points occur with positive probability, uniformly in R.
Figure 5.3 On the event A_i, as defined at (5.21), the above configuration occurs with probability greater than c_0 for any z ∈ B(x_i, R/16λ).

Figure 5.4 Notation used in the proof of Proposition 5.3.9.

Figure 5.5 The random times s_0, s_1, s_2, t_0, u_0, u_1.

Figure 5.6 On the event H(k, ζ), as defined at (5.35), the above configuration occurs with probability greater than 1 − ε_k^ζ for any x ∈ D_k, y ∈ B(x, ε_k δ^{−1}). The circles shown are the boundaries of B(x, ε_k δ^{−1}) and B(x, √ε_k δ^{−1}). The non-bold paths represent γ_∞ ∪ γ_U(x, U_0), and the bold path R[0, T_R(x, √ε_k δ^{−1})].

Figure 5.7 A typical realisation of γ on the event F^1, as defined at (5.54).

Figure 5.8 A realisation of a subtree of the UST of δZ^3 spanned by 0 and the corners of the cube [−1, 1]^3. The tree includes part of its path towards infinity (in green). Colours indicate different LERWs used in Wilson's algorithm.

Figure 5.9 S is a parameterized tree with spanning points x(1), x(2) and x(3). The restriction S|_s is the union of the paths between x(i) and p_i, with i = 1, 2, 3. In this example, S|_r and S|_s are different inside the radius m. A crucial difference between these two sets is that S|_r is connected, but S|_s is disconnected.

Figure 5.10 The figure shows the decomposition of the curves γ̄^{u,R}_n and γ̄^{v,R}_n used in the proof of Proposition 5.7.9. The curve γ̄^{u,R}_n is the concatenation of γ̄_0 (in purple) and ζ̄^{u,v}_n (in red). The curve γ̄^{v,R}_n is the concatenation of γ̄_0 and η̄^{u,v}_n (in blue). The figure also shows a restriction of the random walk S_n from y_n to z_n (in yellow). In this case, S_n avoids hitting A^{v,R}_n when it is close to y_n.

Figure 5.11 A realization of the event D^{u,v}_n(ε)^c ∩ Q(εM, ε).
In the figure, γ̄^{u,R}_n is the concatenation of the purple and blue curves, while γ̄^{v,R}_n is the concatenation of the purple and red curves.

Figure 6.1 Part of a UST in a two-dimensional box; the part shown is the central 115 × 115 section of a UST on a 229 × 229 box. The single cluster spanning the two sides of the box is highlighted.

Figure 6.2 Conditional on the event H_i, for any x ∈ B(0, 4) ∩ δZ^3 with dist(x, γ_{z_i}) ≤ 1/M, the above configuration occurs with probability at least 1 − M^{−ξ}.

Figure 7.1 (Left) The phase diagram for fixed d. The dashed line is ρ_c and the solid line ρ_e. (Right) A rigorous approximation of ρ_c when d = 2. The approximations for larger d have a similar shape.

Figure 7.2 A Dyck path of length 10. The weight of this path is u(0)^2 v(0)^2 u(1) v(1) u(2)^2 v(2)^2.

Figure 7.3 Let k = 7. The black line with dots is a path γ ∈ Γ_5. The blue line with stars is the modified path γ̃ ∈ Γ_2. The red line with pluses is the extension of γ̃ to a jump chain in R_7.

Figure 8.1 The two-dimensional phase diagram with d fixed. The horizontal axis is for values of λ and the vertical axis is for values of ρ. The dashed black line depicts ρ_c and the solid black line ρ_e.

List of Symbols

B(x, r)  Discrete Euclidean ball of radius r around x.
B_E(x, r)  Euclidean ball of radius r around x.
D(x, r)  Discrete cube of side-length 2r centred at x.
D_E(x, r)  Euclidean cube of side-length 2r centred at x.
diam A  Euclidean diameter of the set A.
rad A  Radius of the set A.
len(γ)  Length of a path γ.
T(γ)  Duration of a curve γ.
C_f  Space of parameterized curves of finite duration.
ψ  Metric on C_f.
ρ_C  Metric on the space of unparameterized paths.
C  Space of transient parameterized curves.
χ  Metric on C.
O  Big O notation.
⪯  Asymptotically smaller than.
≍  Order of magnitude estimate.
∼  Asymptotically equivalent.
τ_A  First time a simple random walk hits A.
τ+_A  First positive time a simple random walk hits A.
ξ_A  First exit time of simple random walk from A.
D  Unit ball (disk) on the complex plane.
d_H  Hausdorff metric.
σ  Local state space [Chapter 5].
λ_c(G)  Critical parameter for chase-escape.
U  Uniform spanning tree on Z^3.
P  Law of the uniform spanning tree U.
d_U  Intrinsic metric on the graph U.
µ_U  Counting measure on U.
φ_U  Continuous embedding of U into R^3.
ρ_U  Root of U. It is equal to 0.
β  Growth exponent for the three-dimensional loop-erased random walk.
P_δ  Law of (U, δ^β d_U, δ^3 µ_U, δ φ_U, ρ_U).
B_T(x, r)  Ball in the metric space T of radius r around x.
T  Limit metric space of the scaled uniform spanning tree U.
d_f  Fractal dimension of U.
P̂  Law of the limit space (T, d_T, µ_T, φ_T, ρ_T).
X^U  Simple random walk on U.
P^U_x  Quenched law of the simple random walk on U started at x.
H^U  Annealed law of the simple random walk on U.
R^U  Effective resistance on U.
T  Collection of measured, rooted, spatial trees.
B_δ(x, r)  Discrete Euclidean ball on δZ^3 of radius r around x.
D_δ(x, r)  Discrete cube on δZ^3 of side-length 2r centred at x.
∂_i A  Inner boundary of A ⊂ Z^3.
d_S^γ  Schramm metric on a parameterized curve γ.
d_γ  Intrinsic metric on a parameterized curve γ.
γ̄  Loop-erased random walk endowed with its β-parameterization.
γ^x_∞  Infinite loop-erased random walk on Z^3 starting at x.
P  Dyadic polyhedron.
T  Parameterized tree.
F_K  Space of parameterized trees with K leaves.
Γ_e(T)  Space of parameterized trees with K leaves.
U_δ  Uniform spanning tree on δZ^3.
B  Hypercube [0, 1]^d.
T_d  d-ary tree.
R  Set of sites that are ever coloured red.
B  Set of sites that are ever coloured blue.
C^{λ,ρ}_k  Weighted Catalan number.
g(z)  Generating function of weighted Catalan numbers [Chapter 7].
f(z)  Continued fraction equal to g(z) [Chapter 7].
M  Radius of convergence of g centred at the origin [Chapter 7].

Acknowledgements

My deepest gratitude goes to my supervisors Martin Barlow and Omer Angel. I thank Martin for his support and guidance since my arrival at UBC as a master's student, and Omer for the countless hours talking about mathematics in coffee shops. I owe you my understanding and appreciation of concepts, heuristics, and methods; I am excited to continue thinking on these topics as I move forward in my career.

I thank Mathav Murugan, Lior Silberman, Nick Harvey, and Russell Lyons for reading this thesis, for their precise comments, and their insightful questions during the defence. It was an honour to have you on my examining committee.

I thank Daisuke Shiraishi, David Croydon, Erin Beckman, Keisha Cook, Matthew Junge, Nicole Eikmeier, and Tom Hutchcroft for the pleasure of working together. I am especially indebted to Tom for connecting Omer and me with Daisuke and David.

It was a privilege to have been part of the probability group at UBC. A special mention goes to my friends and colleagues Thomas Hughes and Guillermo Martinez Dibene. I think of them as my academic brothers. I would also like to acknowledge Ed Perkins for teaching several of my graduate courses in probability theory. These courses were inspiring, and my lecture notes from those days are a treasure.

I am fortunate to have found good friends during my time in Vancouver. I thank Bernardo, Delphin, Donají, Felipe, Hugo, Javier, Juan Camilo, Manuel, Marybel, Myrto, Nishant, Raimundo, Tara, and Thomas Budzinski.
I was blessed to count on the fellowship and support of Arista, Christina, Jessie, Kaori, Michael and Amy Weidman, all the Navigators, and my favourite Chinese-Colombian-Venezuelan friends: Karen, Kally, and Cynthia Wong, and Gaby and Carlos García. I am happy to have kept close friends in Guadalajara: Jeanette, Kike, Manuel, Natalia, Poncho, Sarai, and Tonalli. I am also glad that Carmen and Malors have always been only a text message away.

I am deeply grateful for the friendship of Daniel and Karina. I appreciate their constant encouragement, their help at all times, and their wise and rational advice, especially when I have been stubborn. I cherish the good times that we have had together, and I look forward to more of them.

I am thankful for my parents Alejandro and Alma, and my sister Lizette. Your unconditional love has been an anchor in my life. I also thank all my family in Mexico City for their love and support.

This thesis was finished during a strange period of isolation, and I had a unique opportunity to walk around a quiet city. I treasure the view of the cherry blossoms, the turn at 8th Avenue, the mountains overlooking Jericho Beach, and the gardens around Kitsilano and West Point Grey. These places guard sweet memories of friendship and peacefulness. I thank God for the gift that these years in Vancouver have been to me.

Finally, I am grateful for the funding from the Mexican National Council for Science and Technology (Consejo Nacional de Ciencia y Tecnología, CONACYT) in support of my graduate studies.

Dedication

Con amor a mi familia. (With love, to my family.)

Part I: Introduction

Chapter 1: Discrete and Continuous Probability Models

A modern scientific paradigm is that natural phenomena arise from the collective behaviour of random microscopic interactions. This principle gained widespread acceptance with the introduction of Brownian motion to physics. Brownian motion was first described by, and named after, the botanist Robert Brown in 1827.
Brown observed the irregular movement of particles of pollen immersed in water. In 1905, Albert Einstein explained the physical mechanism for this motion as the result of random interactions at the molecular level [65]. This explanation gave evidence for the discrete nature of matter. Jean Perrin verified Einstein's predictions experimentally and hence confirmed molecular-kinetic theory in 1909 [145]. With experimental evidence firmly established and the pioneering theoretical work of James Clerk Maxwell, Ludwig Boltzmann, and J. Willard Gibbs, statistical mechanics gained a central place within modern physics. Since then, physics has thrived with successful applications of statistical mechanics. Its influence has spread to all the sciences. Statistical physics has driven ground-breaking developments in chemistry [70, 71], mathematical biology [12, 47, 142, 163] and theoretical computer science [137, 158], to cite some examples. The success of such applications, including further advancements in physics, requires a precise mathematical understanding of the relation between discrete
Research in the interface of probability and mathemat-ical physics has flourished, and it has involved insights and methods fromanalysis and combinatorics. Nevertheless, some central questions in the arearesult in challenging mathematical problems. The work in this thesis is partof this continuing endeavour.In this thesis, we study two discrete models: the uniform spanning treeand a competitive growth process called chase-escape with death. Our objec-tive is to understand their large scale behaviour. For the uniform spanningforest, our main result is the existence of its scaling limits. The three-dimensional case is particularly interesting since it exhibits non-Gaussianbehaviour. In the case of chase-escape with death, we study its phase tran-sitions as we vary the model parameters. Our results touch on two maintopics in statistical physics, namely scaling limits and phase transitions. Inthe next two sections, we present the probabilistic approach to these con-cepts.1.1 Scaling limits as weak convergence ofprobability measuresA scaling limit is a formal connection between discrete and continuous prob-ability models. In line with the interpretation from statistical physics, aphenomenon may be described either by a discrete or a continuous model.3The discrete model represents a microscopic scale and is usually defined overa graph, whereas the continuous model reflects a macroscopic scale and it isdefined on Rd. Some properties are understood more easily in the discretesetting, where combinatorial tools are at hand, but the continuous modelusually presents symmetries absent in the discrete space. Examples of thesesymmetries are scale and rotational invariance or conformal invariance inthe two-dimensional case. (See Propositions 2.2.10, 2.2.11, and 2.4.3.) Weremark that the physics community was the first to observe symmetries onscaling limits, e.g. 
conformal invariance was predicted by Belavin, Polyakov, and Zamolodchikov [26].

Let us describe the general framework of a scaling limit. Consider a discrete model with a parameter describing its size (for example, the number of vertices of the underlying graph). We obtain a tractable problem by choosing a meaningful object associated with the discrete model. We say that the scaling limit exists when, after appropriate normalization, the chosen object converges as we increase the size parameter.

The archetypical example is the convergence of simple random walk on Z to Brownian motion. In this case, we scale the simple random walk by defining the processes on δZ := {δv : v ∈ Z}. Note that the distance between nearest-neighbour vertices on δZ decreases as δ → 0; the geometric effect is a zoom-out of the space. The corresponding size parameter is the number of vertices on [0, 1], while the meaningful object for the scaling limit is the curve defined by interpolation of the random walk path. We have the convergence of this curve with respect to the space of continuous curves C[0, 1] endowed with the supremum norm, and we thus say that the scaling limit exists. Chapter 2 expands our discussion on the simple random walk and Brownian motion.

Now let us specify the type of convergence of these random objects. Recall that we choose a representative object for studying the scaling limit of a given model. With this choice, we determine a Polish space E where these objects are defined. We thus get a probability measure µ_n, valued on E, associated with each size parameter n. The precise meaning of the existence of the scaling limit is that µ_n converges weakly to µ as n → ∞, i.e.

  lim_{n→∞} ∫ f dµ_n = ∫ f dµ  for all f ∈ C_c(E),

where C_c(E) is the space of continuous functions on E with compact support.

The scaling limit operation is applicable to a large variety of discrete models. In some situations, the limit object is deterministic, but it may also be random.
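The diffusive normalization in the random walk example (space scaled by 1/√n when time is scaled by 1/n) lends itself to a quick numerical check. The sketch below is our own illustration, not code from the thesis; the function name and parameters are ours. It samples the rescaled endpoint S_n/√n of a simple random walk on Z and checks that its empirical mean and variance are close to 0 and 1, the moments of Brownian motion at time 1.

```python
import random

def scaled_walk(n):
    """Simple random walk on Z, diffusively rescaled: the returned list
    holds W(k/n) = S_k / sqrt(n) for k = 0, ..., n."""
    position = 0
    path = [0.0]
    for _ in range(n):
        position += random.choice((-1, 1))
        path.append(position / n ** 0.5)
    return path

# As n grows, the law of W(1) = S_n / sqrt(n) approaches a standard
# Gaussian, the distribution of Brownian motion at time 1.
random.seed(0)
n, trials = 400, 2000
endpoints = [scaled_walk(n)[-1] for _ in range(trials)]
mean = sum(endpoints) / trials
var = sum((x - mean) ** 2 for x in endpoints) / trials
print(round(mean, 2), round(var, 2))  # both close to (0, 1)
```

The same recipe, with the interpolated path in place of the endpoint, gives Monte Carlo approximations of Brownian motion on [0, 1].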
The latter corresponds to phenomena exhibiting fluctuations at every scale. This behaviour is typical of critical phenomena, which we present in the following section.

1.2 Phase transitions

The models studied in statistical physics depend on a set of parameters. Among others, these could be the dimension of the underlying space, temperature, pressure, or rate of change. A phase is determined by a set of parameters with common qualitative properties. We observe a phase transition when we move between different phases as we modify the model parameters. The phases of water provide a well-known example. As we vary temperature or pressure, water abruptly transitions between solid, liquid, and gaseous states.

Branching processes provide a simple example of a phase transition. They are also a fundamental piece of our analysis in Chapter 7. Here we follow [157]; this reference provides more illustrations of critical phenomena for the interested reader. Branching processes are a model for population growth, where we represent individuals with identical particles ordered in a genealogical tree. For our purposes in this section, we restrict to branching processes with binomial offspring distribution. We remark that this is a fundamental model in probability theory and its definition is more general (see [85, Chapter 3], [60, Section 2.1]).

We start with one particle occupying the root of a d-ary tree, while the rest of the vertices on the tree are vacant. Any particle reproduces only once in its lifetime. The offspring of a particle occupies some of the children nodes on the d-ary tree, leaving the rest of the children nodes vacant. Hence, the number of descendants of each individual follows a binomial distribution with parameters d (maximum number of children) and p (reproduction probability). The number of individuals (or occupied vertices) at generation t is a Galton-Watson process with offspring distribution Bin(d, p).
Let Z_d(p) be the total number of individuals.

One fundamental question for this model is on the size of the branching process: is it finite, or does the process generate an infinite number of generations? Let

  θ_d(p) = P(Z_d(p) = ∞)

be the survival probability, and let

  ξ_d(p) = E(Z_d(p))

be the average family size. A classical theorem states that the survival probability has a phase transition (see [85, 157]):

  θ_d(p) = 0 if p ≤ 1/d,
  θ_d(p) = s if p > 1/d,

where s > 0 is such that 1 − s is the non-trivial solution of q = ((1 − p) + pq)^d. The value p_c := 1/d is known as the critical parameter, since the model changes between phases at that point. Accordingly, a branching process is subcritical if p < p_c, critical if p = p_c, and supercritical if p > p_c. A simple calculation shows that the average family size also exhibits a phase transition around the critical parameter:

  ξ_d(p) = 1/(1 − dp) if p < 1/d,
  ξ_d(p) = ∞ if p ≥ 1/d.

We remark that at the critical parameter, the expected family size is infinite, even though the survival probability at criticality is 0.

A crucial observation is that universal exponents govern the asymptotic behaviour around the critical point. For each d ≥ 2, there exist constants C_1(d) > 0, C_2(d) > 0 depending on d such that

  θ_d(p) ∼ C_1(d)(p − p_c)^β as p → p_c^+,
  ξ_d(p) ∼ C_2(d)(p_c − p)^{−γ} as p → p_c^−.

In the asymptotic formulas above, β and γ are known as the critical exponents. They take the values β = 1 and γ = 1 for all d-ary trees. The independence of the critical exponents from the parameter d is an instance of universality.

The different phases of a discrete model explain the qualitative properties of the modelled phenomenon. However, quantitative conclusions depend on the details of the model, for example, the dimension of the underlying graph. In the case of the branching process, a qualitative property is the positivity of the survival probability, but the values of θ_d(p) and ξ_d(p) depend on the parameters d and p.
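The phase transition for Bin(d, p) offspring is easy to probe by simulation. The following Monte Carlo sketch is illustrative only; the function, the truncation cap (a proxy for survival), and the parameter choices are ours, not from the thesis. For d = 2 it compares the empirical mean family size in the subcritical phase with ξ_2(0.3) = 1/(1 − 0.6) = 2.5, and the fraction of trees growing past a large cap at p = 0.75 with the survival probability s = 8/9 obtained from the fixed-point equation 1 − s = ((1 − p) + p(1 − s))^d.

```python
import random

def total_progeny(d, p, cap=2000):
    """Total family size of a Galton-Watson process with Binomial(d, p)
    offspring, truncated at cap individuals (a proxy for survival)."""
    alive, total = 1, 1
    while alive > 0 and total < cap:
        # One individual reproduces: each of its d slots is filled w.p. p.
        children = sum(1 for _ in range(d) if random.random() < p)
        alive += children - 1
        total += children
    return total

random.seed(2)
d = 2

# Subcritical phase: p = 0.3 < p_c = 1/2, so xi_d(p) = 1/(1 - d p) = 2.5.
avg = sum(total_progeny(d, 0.3) for _ in range(20_000)) / 20_000

# Supercritical phase: p = 0.75 > p_c.  The fixed-point equation gives
# extinction probability 1 - s = 1/9, i.e. s = 8/9; trees reaching the
# cap are, with overwhelming probability, exactly the surviving ones.
surv = sum(total_progeny(d, 0.75) >= 2000 for _ in range(1000)) / 1000

print(round(avg, 2), round(surv, 2))  # near 2.5 and 8/9 respectively
```

Tightening the comparison (larger caps and more trials) reproduces the linear behaviour of θ_d(p) near p_c, i.e. the critical exponent β = 1.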
Note that the parameter $d$ is a significant restriction on the model, since it establishes a maximum number of offspring. Nevertheless, this restriction is irrelevant around the critical parameter. The principle of universality suggests that, at a critical point, the mathematical model approximates the physical reality. Therefore critical exponents determine the behaviour of physical phenomena at criticality. This principle justifies the value of understanding the simple discrete model. Moreover, several systems converge to the same behaviour as they approach criticality; we obtain a division of these systems into universality classes.

A common hypothesis in modelling is convergence to the Gaussian universality class. When a model has enough (stochastic) independence among its different components, the central limit theorem applies, and its statistics converge to Gaussian random variables. However, several models have strong intrinsic dependencies, and their limit behaviour is non-Gaussian. Understanding non-Gaussian limits is a challenge for modern probability theory.

1.3 Structure of this thesis

The rest of the chapters in Part I introduce background material for this thesis. Chapter 2 and Chapter 3 are concise surveys on random walks and uniform spanning forests, respectively. These two chapters focus on essential definitions for Part II. Chapter 4 is an introduction to competitive growth processes, which are the main topic of Part III. Part II and Part III report on original work on uniform spanning trees and competitive growth processes, respectively. Chapter 8 presents the conclusions.
In the concluding chapter, we summarize the contributions included in this thesis and discuss future research directions.

Chapter 2
Random Walks

In this chapter, we define the simple random walk and the loop-erased random walk on $\mathbb{Z}^d$, and present their properties relevant for this work.

2.1 Notation

We begin with some notation that we will use throughout this thesis.

Following standard set notation, $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{R}$, and $\mathbb{C}$ represent the natural, integer, real and complex numbers, respectively. $\mathbb{Z}^d$ is the $d$-dimensional integer lattice, $\mathbb{R}_+ = \{x \in \mathbb{R} : x \ge 0\}$, $\mathbb{Z}_+ := \{z \in \mathbb{Z} : z \ge 0\}$, and $\mathbb{N}_0 := \mathbb{Z}_+$. The indicator function $\mathbf{1}\{\cdot \text{ satisfies } P\} : \mathbb{Z}^d \to \{0, 1\}$ of a property $P$ is defined as
\[ \mathbf{1}\{P\}(x) := \begin{cases} 1 & \text{if } x \text{ satisfies property } P, \\ 0 & \text{otherwise.} \end{cases} \]

2.1.1 Subsets

For $x \in \mathbb{Z}^d$ and $z \in \mathbb{R}^d$, the discrete $\ell^2$ ball and the Euclidean $\ell^2$ ball are the sets
\[ B(x, r) := \{y \in \mathbb{Z}^d : |x - y| < r\}, \qquad B_E(z, r) := \{y \in \mathbb{R}^d : |z - y| < r\}, \]
respectively. We use the abbreviations $B(R) = B(0, R)$ and $B_E(R) = B_E(0, R)$. The discrete cube (or $\ell^\infty$ ball of radius $r$) with side-length $2r$ centred at $x$ is defined to be the set
\[ D(x, r) := \{y \in \mathbb{Z}^d : \|x - y\|_\infty < r\}. \]
Similarly to the definitions above, but with Euclidean $\ell^\infty$ balls, $D_E(z, r)$ denotes the Euclidean cube. We further write $D(R) = D(0, R)$. The Euclidean distance between a point $x$ and a set $A$ is given by
\[ \mathrm{dist}(x, A) := \inf\{|x - y| : y \in A\}. \]
The Euclidean diameter and the radius of $A$ are
\[ \mathrm{diam}\, A := \sup_{x, y \in A} |x - y|, \qquad \mathrm{rad}\, A := \min\{n \in \mathbb{N} : A \subset B(n)\}. \]
If $A \subset \mathbb{Z}^d$, we denote by $\partial A$ the discrete boundary of $A$, defined as the set
\[ \{x \notin A : \text{there exists } y \in A \text{ such that } x \text{ and } y \text{ are nearest neighbours}\}. \]

2.1.2 Paths and curves

A path in $\mathbb{Z}^d$ is a finite or infinite sequence of vertices $[v_0, v_1, \ldots]$ such that $v_{i-1}$ and $v_i$ are nearest neighbours, i.e. $|v_{i-1} - v_i| = 1$, for all $i \in \{1, 2, \ldots\}$. The length of a finite path $\gamma = [v_0, v_1, \ldots, v_m]$ will be denoted $\mathrm{len}(\gamma)$ and is defined to be the number of steps taken by the path, that is, $\mathrm{len}(\gamma) = m$.

A (parameterized) curve is a continuous function $\gamma : [0, T] \to \mathbb{R}^d$. For a curve $\gamma : [0, T] \to \mathbb{R}^3$, we say that $T < \infty$ is its duration, and we will sometimes use the notation $T(\gamma) := T$. The curve $\gamma$ is simple if it is an injective function. When the specific parameterization of a curve $\gamma$ is not important, we may consider only its trace, which is the closed subset of $\mathbb{R}^3$ given by $\mathrm{tr}\,\gamma = \{\gamma(t) : t \in [0, T]\}$. To simplify notation, we sometimes write $\gamma$ instead of $\mathrm{tr}\,\gamma$, where the meaning should be clear.

The space of parameterized curves of finite duration, $\mathcal{C}_f$, will be endowed with a metric $\psi$, defined by
\[ \psi(\gamma_1, \gamma_2) := |T_1 - T_2| + \max_{0 \le s \le 1} |\gamma_1(s T_1) - \gamma_2(s T_2)|, \]
where $\gamma_i : [0, T_i] \to \mathbb{R}^3$, $i = 1, 2$, are elements of $\mathcal{C}_f$. Alternatively, consider the metric
\[ \rho_{\mathcal{C}}(\gamma_1, \gamma_2) := \inf \sup_{t \in [0, 1]} |\gamma_1 \circ \theta_1(t) - \gamma_2 \circ \theta_2(t)|, \]
where the infimum is over all reparameterizations $\theta_1 : [0, 1] \to [0, T_1]$ and $\theta_2 : [0, 1] \to [0, T_2]$. In the literature, $\rho_{\mathcal{C}}$ is known as the metric of the space of unparameterized paths.

A continuous map $\gamma^\infty : [0, \infty) \to \mathbb{R}^d$ is a transient curve if $|\gamma^\infty(t)| \to \infty$ as $t \to \infty$. Let $\mathcal{C}$ be the set of transient curves, and endow $\mathcal{C}$ with the metric $\chi$ given by
\[ \chi(\gamma^\infty_1, \gamma^\infty_2) = \sum_{k=1}^{\infty} 2^{-k} \left(1 \wedge \max_{t \le k} |\gamma^\infty_1(t) - \gamma^\infty_2(t)|\right). \]

2.1.3 Constants and asymptotic notation

We denote constants by the letters $C$, $C_n$, $c$, and $c_n$, with $n \in \mathbb{N}$. The values of these constants may change from line to line, and we indicate their dependencies. Let $f, g, h$ be real-valued functions with $f, g, h \ge 0$.
We write $f(x) = O(g(x))$ to indicate that there exists a constant $C > 0$ such that
\[ f(x) \le C g(x), \quad \text{for all } x. \]
Similarly, we write $f(x) = h(x) + O(g(x))$ to indicate that
\[ |f(x) - h(x)| \le C g(x), \quad \text{for all } x. \]
We write $f(x) \preceq g(x)$ if there exists $C$ such that
\[ f(x) \le C g(x), \quad \text{for all } x. \]
Similarly, we write $f(x) \asymp g(x)$ if there exist $c_1, c_2 > 0$ such that
\[ c_1 g(x) \le f(x) \le c_2 g(x), \quad \text{for all } x. \]
Finally, if $f$ and $g$ are positive functions, we write $f \sim g$ if
\[ \lim_{x \to \infty} \frac{f(x)}{g(x)} = 1. \]

2.2 Simple random walk

The simple random walk is a random path on a given graph, where each step is chosen uniformly at random. For this work, we limit our discussion to simple random walks on $\mathbb{Z}^d$. Consider the set of directions in $\mathbb{Z}^d$, $\mathcal{E} = \{\pm e_1, \ldots, \pm e_d\}$, where $e_k(j) = \mathbf{1}\{k = j\}$. Let $(\eta_j)_{j \in \mathbb{N}}$ be independent random variables, each one with uniform distribution over $\mathcal{E}$. For $x \in \mathbb{Z}^d$, the simple random walk $S = (S_n)_{n \in \mathbb{N}}$ started at $x$ is
\[ S_0 := x, \qquad S_n := S_0 + \sum_{j=1}^{n} \eta_j. \]
We denote by $P^x$ the probability measure of $S$. The distribution of $\eta_j$ is called the step distribution. Throughout this work, we may also use the notation $S(n) = S_n$.

The simple random walk is a Markovian process. Our interest is in the geometry of the random walk, and hence the most relevant stopping times are those related to exit and hitting times. We define the hitting and positive hitting times of $S$ by
\[ \tau_A := \inf\{n \ge 0 : S_n \in A\}, \quad \text{and} \quad \tau^+_A := \inf\{n > 0 : S_n \in A\}. \tag{2.1} \]
We write $\tau_m$ and $\tau^+_m$ for the hitting times of the ball $B(m)$. A related stopping time is the escape from a set. We write
\[ \xi_A := \inf\{n \ge 0 : S_n \in A^c\} \tag{2.2} \]
for the escape time from $A$. If $A = B(m)$, we write
\[ \xi_m := \inf\{n \ge 0 : S_n \in B(m)^c\}. \tag{2.3} \]

2.2.1 Recurrence and transience

A random walk $S$ is recurrent if
\[ P(S_n = 0 \text{ i.o.}) = 1; \]
otherwise, we say that $S$ is transient. The recurrence of the simple random walk on $\mathbb{Z}^d$ depends on the dimension $d$.

Theorem 2.2.1. The simple random walk on $\mathbb{Z}^d$ is recurrent in $d = 1, 2$ and transient in $d \ge 3$.

A simple proof of Theorem 2.2.1 is in [61, Subsection 5.4].
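Before turning to the ingredients of the proof, note that the objects defined above are easy to simulate. The sketch below (illustrative code, not from the thesis; the radius $m$ and the number of trials are ad hoc) runs a planar simple random walk until the escape time $\xi_m$ of (2.3). Since $|S_n|^2 - n$ is a martingale, the mean escape time is of order $m^2$.

```python
import random

def srw_escape_time(d, m, rng):
    """Run a simple random walk on Z^d from the origin until it leaves
    the Euclidean ball B(m); return the escape time xi_m."""
    x = [0] * d
    steps = 0
    while sum(c * c for c in x) < m * m:
        k = rng.randrange(d)          # pick a coordinate direction
        x[k] += rng.choice((-1, 1))   # step +/- e_k: uniform over 2d moves
        steps += 1
    return steps

rng = random.Random(0)
times = [srw_escape_time(2, 20, rng) for _ in range(200)]
mean_time = sum(times) / len(times)
print(mean_time)   # of order m^2 = 400
```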
The basis for the proof is the following characterization of recurrence in terms of hitting probabilities.

Proposition 2.2.2 ([61, Theorem 5.4.3]). For a simple random walk $S$ on $\mathbb{Z}^d$, the following are equivalent:
(i) $S$ is recurrent;
(ii) $P^0(\tau^+_{\{0\}} < \infty) = 1$; and
(iii) $\sum_{n=0}^{\infty} P^0(S_n = 0) = \infty$.

The next proposition is a quantified version of Proposition 2.2.2 (ii) in the transient case.

Proposition 2.2.3 ([117, Proposition 6.4.2]). Let $d \ge 3$. For $x \in \mathbb{Z}^d \setminus B(m)$,
\[ P^x\left(\tau^+_m < \infty\right) = \left(\frac{m}{|x|}\right)^{d-2} \left[1 + O(m^{-1})\right]. \]

Throughout this work, we often make a distinction between dimension $d = 2$ and dimensions $d \ge 3$ of the lattice $\mathbb{Z}^d$. This distinction is due to the difference between recurrent and transient behaviour.

2.2.2 Harmonic measure and hitting probabilities

Estimates on hitting probabilities of a random walk lie at the core of this work. We review hitting probabilities, harmonic measure, and capacity to provide some background. We follow [113, 117], where more details are available.

Let $A \subset \mathbb{Z}^d$ be a finite set. The harmonic measure of $A$ is defined as the limit
\[ \mathrm{hm}_A(y) := \lim_{|x| \to \infty} P^x\left(S_{\tau^+_A} = y \mid \tau^+_A < \infty\right). \]
In the two-dimensional case, the simple random walk is recurrent and the harmonic measure is simply defined by
\[ \mathrm{hm}_A(y) := \lim_{|x| \to \infty} P^x\left(S_{\tau^+_A} = y\right). \]
We refer to [113, Theorem 2.1.3] for a proof of the existence of this limit. The following result gives bounds on the harmonic measure of straight segments.

Theorem 2.2.4 ([113, Section 2.4]). Let $L \subset \mathbb{Z}^d$ be the line segment on the $x$-axis from $(0, \ldots, 0)$ to $(n, 0, \ldots, 0)$. Then
\[ \mathrm{hm}_L(0) \asymp \begin{cases} c n^{-1/2}, & d = 2, \\ c (\log n)^{1/2} n^{-1}, & d = 3. \end{cases} \]

We define below the capacity of a set and relate it to the hitting probability of a random walk and the harmonic measure.

Capacity in the transient case

On $\mathbb{Z}^d$, and for $d \ge 3$, the capacity of a finite set $A$ is defined as
\[ \mathrm{cap}(A) := \lim_{m \to \infty} \sum_{x \in A} P^x\left(\tau^+_A > \xi_m\right) = \sum_{x \in A} P^x\left(\tau^+_A = \infty\right). \]
It follows that we can write the harmonic measure as
\[ \mathrm{hm}_A(x) = \frac{P^x\left(\tau^+_A = \infty\right)}{\mathrm{cap}(A)}. \]
The capacity of a set $A$ indicates how "hittable" $A$ is by a random walk starting at a large distance.
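As a numerical aside (illustrative code, not from the thesis), the hitting asymptotics of Proposition 2.2.3 can be checked by Monte Carlo. Since simulation cannot run forever, the sketch below truncates the walk at a large radius $R$ as a stand-in for "escape to infinity"; the radius, starting point, and trial count are ad hoc choices, so the estimate carries both truncation and sampling error.

```python
import random

def hits_ball(d, m, start, R, rng):
    """One SRW run: does the walk from `start` hit B(m) before leaving
    B(R)?  For R much larger than |start|, this approximates the hitting
    probability P^x(tau_m < infinity) in transient dimensions d >= 3."""
    x = list(start)
    while True:
        r2 = sum(c * c for c in x)
        if r2 < m * m:
            return True               # entered B(m)
        if r2 > R * R:
            return False              # truncation: treat as escape to infinity
        k = rng.randrange(d)
        x[k] += rng.choice((-1, 1))

rng = random.Random(2)
m, x0 = 3, (12, 0, 0)
est = sum(hits_ball(3, m, x0, 60, rng) for _ in range(400)) / 400
# Proposition 2.2.3 predicts roughly (m/|x|)^(d-2) = (3/12)^1 = 0.25
print(est)
```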
We formally state this relation in the proposition below.

Proposition 2.2.5 ([117, Proposition 6.5.1]). Assume that $A \subset B(n)$ and $\|x\| \ge 2n$. Then
\[ P^x\left(\tau^+_A < \infty\right) = C_d \|x\|^{2-d}\, \mathrm{cap}(A) \left[1 + O\!\left(\frac{n}{\|x\|}\right)\right], \]
where $C_d$ is a constant depending on the dimension.

An example to keep in mind is the capacity of the closed ball of radius $n$. This is
\[ \mathrm{cap}(\bar{B}(n)) = a_d^{-1} n^{d-2} + O(n^{d-3}), \]
where the constant $a_d$ takes the value
\[ a_d = \frac{d}{2}\, \Gamma\!\left(\frac{d}{2} - 1\right) \pi^{-d/2} = \frac{2}{(d-2)\,\omega_d}. \]
In the expression above, $\Gamma$ is the Gamma function and $\omega_d$ is the volume of the unit ball in $\mathbb{R}^d$ [113, (2.16) and Theorem 1.5.4]. In comparison, for an arbitrary connected set, we limit our calculations to bounds on the harmonic measure; e.g. Theorem 2.2.7 gives an upper bound.

The recurrence of the two-dimensional random walk entails a different definition of capacity for $\mathbb{Z}^2$. We will not use this definition, and instead refer the reader to [113]. In exchange, we state below a general theorem for the hitting probability of a connected subset of $\mathbb{C}$.

Beurling estimate

The Beurling projection theorem is a classical result for the hitting probabilities of a two-dimensional Brownian motion. Consider a Brownian motion on $\mathbb{C}$, started at $0$ and stopped when it hits the unit circle, and let $\mathcal{A}$ be the collection of connected subsets of $\mathbb{C}$ containing the origin and intersecting the unit circle. The Beurling projection theorem states that, among all subsets in $\mathcal{A}$, this Brownian motion is most likely to avoid the straight line $[0, 1]$ (see [27, 166]). This theorem is a consequence of Beurling's theorem [32], formulated originally in potential theory.

We have a discrete analogue for simple random walks. In this case, we consider $A \subset \mathbb{Z}^d$ path-connected, meaning that there is a path between any two points in $A$. In terms of hitting probabilities, the statement is the following.

Theorem 2.2.6 (Beurling estimate [115, Theorem 6.8.1]). Let $A \subseteq \mathbb{Z}^2$ be an infinite and path-connected set containing the origin.
Then, for a simple random walk starting at the origin,
\[ P\left(\xi_n < \tau^+_A\right) \le c\, n^{-1/2}. \]

In terms of the harmonic measure, the Beurling estimate states that the harmonic measure of a line ($\mathrm{hm}_L$ in Theorem 2.2.4) is an upper bound for the harmonic measure of any path-connected set. We state this version of the Beurling estimate as follows.

Theorem 2.2.7 ([113, Theorem 2.5.2]). Let $A \subset \mathbb{Z}^d$ be a path-connected set of radius $n$ containing $0$. Then
\[ \mathrm{hm}_A(0) \le \begin{cases} c n^{-1/2}, & d = 2, \\ c (\log n)^{1/2} n^{-1}, & d = 3, \\ c n^{-1}, & d \ge 4. \end{cases} \]

Kesten proved the two-dimensional case in [97], while the argument in [113] appeared first in [112]. The proof in [112] uses the following lower bound on the capacity of connected sets with radius $n$.

Proposition 2.2.8 ([113, Lemma 2.5.4]). Let $A \subset \mathbb{Z}^d$ be a connected set of radius $n$ containing $0$. Then
\[ \mathrm{cap}(A) \ge \begin{cases} c n (\log n)^{-1}, & d = 3, \\ c n, & d \ge 4. \end{cases} \]

In Chapter 5, we will require a better estimate in the three-dimensional case. There, the set to hit is the trace of a loop-erased random walk, and we use properties specific to the loop-erased random walk. However, we still refer to such results as a Beurling-type estimate (see Subsection 2.3.3).

2.2.3 Scaling limit of the simple random walk

For simplicity, consider a simple random walk $S = (S(n))$ on the line. If we interpolate between $S(0), S(1), \ldots, S(n)$, we obtain a continuous function on $\mathbb{R}$. Denote by $S(t)$, $t \ge 0$, the function defined by this interpolation. In this sense, the random walk $S$ is a model for a discrete random function.

The continuous analogue of a random function is Brownian motion. In this subsection, we introduce Brownian motion and its relation to the simple random walk through the scaling limit.

Brownian motion

Let $(\Omega, \mathcal{F}, P)$ be a probability space. We say that a random variable $X$ follows a normal distribution with mean $0$ and variance $\sigma^2$, denoted by $\mathcal{N}(0, \sigma^2)$, if for any Borel set $A \in \mathcal{B}(\mathbb{R})$,
\[ P(X \in A) = P(\omega \in \Omega : X(\omega) \in A) = \frac{1}{\sqrt{2\pi}\,\sigma} \int_A \exp\left(-\frac{x^2}{2\sigma^2}\right) dx. \]

Definition 2.2.9.
A Brownian motion in $\mathbb{R}$ (or linear Brownian motion) is a collection of random variables $W := (W(t, \omega) : t \ge 0, \omega \in \Omega)$ satisfying the following properties:
(i) The distribution at time $0$ is identically $0$, i.e. $W(0, \omega) = 0$ for all $\omega \in \Omega$.
(ii) For any $0 \le s < t$, the random variable $W_t - W_s := W(t, \cdot) - W(s, \cdot)$ follows a normal distribution $\mathcal{N}(0, t - s)$.
(iii) For any $0 \le t_1 \le t_2 \le \ldots \le t_n$, the increments $W_{t_n} - W_{t_{n-1}}, W_{t_{n-1}} - W_{t_{n-2}}, \ldots, W_{t_2} - W_{t_1}$ are independent random variables.
(iv) For $P$-almost all $\omega \in \Omega$, the function $t \mapsto W(t, \omega)$ is continuous.

The extension to higher dimensions is straightforward. Let $W^1, \ldots, W^d$ be $d$ independent linear Brownian motions. The collection of random variables $(B(t) : t \ge 0)$ given by
\[ B(t) = (W^1_t, \ldots, W^d_t)^T \]
is a $d$-dimensional Brownian motion. In the case $d = 2$, we call $B$ a planar Brownian motion. An equivalent definition of the $d$-dimensional Brownian motion $(B(t) : t \ge 0, \omega \in \Omega)$ is the analogue of Definition 2.2.9, where we exchange condition (ii) for the requirement that $B(t) - B(s)$ follows a $d$-dimensional normal distribution with mean $0$ and covariance matrix $(t - s) I_d$.

We assume above that Brownian motion starts at $0$, but the initial point may be any $z \in \mathbb{R}^d$. In this case, we change condition (i) to $B(0) = z$ with probability one. We then say that $B$ starts at $z$ and denote the corresponding probability measure by $P^z(\cdot)$. We interpret $B$ as a random continuous function; that is, for each $\omega \in \Omega$ we get a continuous function $B = B(\cdot, \omega) : [0, \infty) \to \mathbb{R}^d$.

We will see below that Brownian motion is a limit object. Accordingly, Brownian motion satisfies translation, scale, and rotation invariance. We refer to [140] for the proofs of Proposition 2.2.10 and Proposition 2.2.11.

Proposition 2.2.10. Let $B = (B(t) : t \ge 0)$ be a $d$-dimensional Brownian motion.
For $a > 0$, $z \in \mathbb{R}^d$ and an orthogonal linear transformation $L : \mathbb{R}^d \to \mathbb{R}^d$, the processes $(\tilde{B}(t) : t \ge 0)$ and $(\bar{B}(t) : t \ge 0)$ given by
\[ \tilde{B}(t) = \frac{1}{a} B(a^2 t) + z, \qquad \bar{B}(t) = L(B(t)), \]
are also Brownian motions, started at $z$ and at $0$, respectively.

In the two-dimensional case, the planar Brownian motion satisfies conformal invariance. We consider planar Brownian motion in the complex plane by setting
\[ B(t) = W_1 + i W_2, \]
where $W_1$ and $W_2$ are two independent linear Brownian motions. For a domain $D \subset \mathbb{C}$, let
\[ \xi^B_D = \inf\{t \ge 0 : B(t) \notin D\} \]
be the exit time of the Brownian motion from the domain $D$.

Proposition 2.2.11. Let $B = (B(t) : t \ge 0)$ be a planar Brownian motion. For a conformal map $\varphi : D \to \hat{D}$, the process $(\hat{B}(t) : t \ge 0)$ given by
\[ \hat{B}(t) = \varphi(B(t)) \]
is a time-changed Brownian motion; in particular, $\hat{B}$ exits $\hat{D}$ at the point $\varphi(B(\xi^B_D))$.

Convergence of the simple random walk to Brownian motion

We begin with a discussion of the one-dimensional case. Consider the function on $[0, 1]$ given by
\[ S^*_n(t) = \frac{S(nt)}{\sqrt{n}}, \quad \text{for all } t \in [0, 1]. \]
With this normalization, the central limit theorem implies that $S^*_n(1)$ converges in distribution to $\mathcal{N}(0, 1)$. On the other hand, note that $W(1) = \mathcal{N}(0, 1)$ in distribution. In general, for each fixed time $t$,
\[ S^*_n(t) \Longrightarrow W(t), \quad \text{as } n \to \infty, \]
and we see that $S^*_n$ converges weakly to $W$ pointwise. However, if we think of $S^*$ as a continuous curve, pointwise convergence is unsatisfactory: it corresponds to the convergence of finite-dimensional distributions. Donsker's invariance principle extends this convergence to the space of continuous functions $C[0, 1]$, endowed with the supremum norm.

Donsker's invariance principle holds in all dimensions. In the general case, we scale a $d$-dimensional simple random walk by
\[ S^{*,d}_n(t) = \frac{\sqrt{d}}{\sqrt{n}}\, S^d([nt]), \quad t \in [0, 1]. \]
We state it below, and we refer to [66, Chapter 5, Theorem 1.2] for a proof.

Theorem 2.2.12 (Donsker's Invariance Principle).
$S^{*,d}_n$ converges weakly to a standard Brownian motion on $\mathbb{R}^d$ in the space of continuous functions $C_{\mathbb{R}^d}[0, 1]$.

We thus say that Brownian motion is the scaling limit of the simple random walk.

As stated above, we are mainly interested in the simple random walk on $\mathbb{Z}^d$. We remark that the convergence of the simple random walk to Brownian motion holds for a large class of graphs. For example, finite-range, symmetric and irreducible random walks on $\mathbb{Z}^d$ converge to Brownian motion [117, Chapter 3]. The convergence also holds for centred step distributions with finite variance; [102, Theorem 21.43] gives a proof in the one-dimensional case. The fact that this scaling limit holds in such generality is an instance of the universality phenomenon, and we thus say that Brownian motion is a universal object.

2.3 Loop-erased random walks

The loop-erased random walk is a model for simple curves motivated by the self-avoiding walk (SAW) model [110]. In $\mathbb{Z}^d$, for $d \ge 2$, consider a path $\gamma : \{0, 1, \ldots, n\} \to \mathbb{Z}^d$. We define the loop-erasure of $\gamma$ as the path created by deleting the loops of $\gamma$ in chronological order. Let
\[ s_0 := \sup\{j : \gamma(j) = \gamma(0)\}, \]
and for $i > 0$,
\[ s_i := \sup\{j : \gamma(j) = \gamma(s_{i-1} + 1)\}. \]
The length of the loop-erasure is $m = \inf\{i : s_i = n\}$. Then, the loop-erasure of $\gamma$ is
\[ \mathrm{LE}(\gamma) := [\gamma(s_0), \gamma(s_1), \ldots, \gamma(s_m)]. \]

Let $D$ be a subset of $\mathbb{Z}^d$. Consider a simple random walk $S$ on $\mathbb{Z}^d$ starting at $x \in D$. The loop-erased random walk on $D$ is defined as the loop-erasure of $S$ up to its first exit from $D$,
\[ \gamma = \mathrm{LE}(S[0, \xi_D]), \tag{2.4} \]
where $\xi_D$ is the escape time defined in (2.2).

2.3.1 The infinite loop-erased random walk

The infinite loop-erased random walk is the loop-erasure of a simple random walk without a stopping condition. The latter statement has an immediate interpretation when the simple random walk is transient, which is the case on $\mathbb{Z}^d$ with $d \ge 3$. On $\mathbb{Z}^2$, the simple random walk is recurrent, but we can define the infinite loop-erased random walk as a weak limit.
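As an aside, the chronological loop-erasure $\mathrm{LE}(\gamma)$ defined above admits a simple one-pass implementation, sketched here (illustrative code, not from the thesis; vertices are any hashable values, e.g. integers or lattice points as tuples).

```python
def loop_erase(path):
    """Chronological loop-erasure LE(gamma): scan the path once; when a
    vertex is revisited, discard the loop created since its first visit.
    This one-pass procedure yields the same path as the definition via
    the times s_i."""
    erased, index = [], {}            # index: vertex -> position in `erased`
    for v in path:
        if v in index:                # a loop just closed at v
            for w in erased[index[v] + 1:]:
                del index[w]          # forget the erased vertices
            del erased[index[v] + 1:]
        else:
            index[v] = len(erased)
            erased.append(v)
    return erased

# the walk 0,1,2,1,3 contains the loop 1,2,1; erasing it leaves 0,1,3
print(loop_erase([0, 1, 2, 1, 3]))   # [0, 1, 3]
```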
We discuss both cases below.

Transient case

Let $S = (S_n)_{n \ge 0}$ be a simple random walk on $\mathbb{Z}^d$. We assume that the dimension is $d \ge 3$. In this case, the simple random walk is transient, and the loop-erasure of $S$ is well-defined with probability one. Similarly to the finite case, we set
\[ s_0 := \sup\{j : S(j) = S(0)\}, \]
and for $i > 0$,
\[ s_i := \sup\{j : S(j) = S(s_{i-1} + 1)\}. \]
We note that $s_i$ is finite with probability one, due to the transience of the simple random walk. Then, the infinite loop-erased random walk (ILERW) is the transient path
\[ \mathrm{LE}(S) := [S(s_0), S(s_1), \ldots]. \]

Two-dimensional case

For each $\ell \ge 1$, let $\Omega_\ell$ be the set of simple paths $\omega = [0, \omega_1, \ldots, \omega_k]$ from $0$ to the boundary of $B_\ell$, i.e. $\omega_1, \ldots, \omega_{k-1} \in B_\ell$ and $\omega_k \in \partial B_\ell$. Let $\gamma_m$ be a loop-erased random walk on $B_m$ and $\gamma_m|_\ell$ be the restriction of $\gamma_m$ up to its first exit from $B_\ell$. We denote by
\[ \nu_{m,\ell}(\omega) = P(\gamma_m|_\ell = \omega), \quad \omega \in \Omega_\ell, \]
the probability measure on $\Omega_\ell$ induced by $\gamma_m$.

Proposition 2.3.1 (Lawler [113, Proposition 7.4.2]). Let $\omega \in \Omega_\ell$. If $\gamma_m$ is a loop-erased random walk on $B_m$, then
\[ \lim_{m \to \infty} P(\gamma_m|_\ell = \omega) = \lim_{m \to \infty} \nu_{m,\ell}(\omega) = \hat{\nu}_\ell(\omega) \]
exists.

The collection $\{\hat{\nu}_\ell\}_{\ell \ge 1}$ is consistent and defines a measure $\hat{\nu}$ on infinite paths. The two-dimensional infinite loop-erased random walk is the random infinite path with law $\hat{\nu}$.

Restrictions of infinite loop-erased random walks

The LERW and the ILERW are different objects. However, the definition of the ILERW suggests that their respective measures are comparable within a small ball.

Proposition 2.3.2 (Masson [136, Corollary 4.5]). Let $\ell \ge 1$ and $n \ge 4$. Let $K$ be a subset containing $B(n\ell)$ and such that, for the escape time defined in (2.2), $P^0(\xi_K < \infty) = 1$.
If $\gamma^\infty$ is an infinite loop-erased random walk, $\gamma^K$ is a loop-erased random walk on $K$, and $\omega \in \Omega_\ell$, then
\[ P(\gamma^\infty[0, \xi^\infty_\ell] = \omega) = \begin{cases} \left[1 + O\!\left(\frac{1}{\log n}\right)\right] P(\gamma^K[0, \xi^K_\ell] = \omega), & d = 2, \\[4pt] \left[1 + O\!\left(n^{2-d}\right)\right] P(\gamma^K[0, \xi^K_\ell] = \omega), & d \ge 3, \end{cases} \]
where $\xi^\infty_\ell$ and $\xi^K_\ell$ are the escape times from the ball $B(\ell)$ for $\gamma^\infty$ and $\gamma^K$, respectively.

2.3.2 Growth exponent

The growth exponent of the $d$-dimensional loop-erased random walk describes the asymptotic number of steps necessary to reach Euclidean distance $n$. In a sense, it indicates the efficiency of the random path in reaching a macroscopic distance.

It is convenient to compare the growth exponent of the LERW with two examples. A growth exponent equal to $1$ indicates linear growth; a straight line provides an example. The second example is a simple random walk. Its growth exponent is $2$, since the loops increase the number of steps in the path. It is intuitively clear that the growth exponent depends on the dimension of the lattice.

Let $S = (S(t))$ be a simple random walk on $\mathbb{Z}^d$ started at the origin and let $\xi_n = \inf\{t \ge 0 : \|S(t)\| > n\}$ be the first exit time from the Euclidean ball of radius $n$. Then
\[ M_d(n) := |\mathrm{LE}(S[0, \xi_n])| \]
is the number of steps it takes the LERW to exit a ball of radius $n$ in the $d$-dimensional lattice.

We define the growth exponent of the loop-erased random walk as
\[ \beta_d := \lim_{n \to \infty} \frac{\log E(M_d(n))}{\log n}, \tag{2.5} \]
provided that the limit exists. In this case, we write $E(M_d(n)) \approx n^{\beta_d}$.

The following theorem summarizes results on the existence of the growth exponent for the LERW. Kenyon determined the planar case in [95]. Shiraishi established the existence in $d = 3$ [155, Theorem 1.4], and the upper and lower bounds come from work in [114].

Theorem 2.3.3. The growth exponent $\beta_d$ for the LERW on $\mathbb{Z}^d$ exists for all $d \ge 2$. The growth exponent takes the following values in each dimension:
(a) $\beta_2 = \frac{5}{4}$;
(b) $\beta_3 \in (1, 5/3]$;
(c) $\beta_d = 2$, for $d \ge 4$.

Further work has obtained more precise asymptotic behaviour for the planar LERW and in higher dimensions $d \ge 4$.
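Definition (2.5) also suggests a direct numerical experiment: sample $M_2(n)$ for two radii and compare the log-ratio of the empirical means with $\beta_2 = 5/4$. The sketch below (illustrative code, not from the thesis; the radii and sample sizes are arbitrary, so the estimate is rough) does this for the planar LERW.

```python
import math
import random

def lerw_length(n, rng):
    """Sample M_2(n): the length of the loop-erasure of a planar SRW
    run until it exits the Euclidean ball of radius n."""
    x, y = 0, 0
    path = [(0, 0)]
    while x * x + y * y <= n * n:
        dx, dy = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        x, y = x + dx, y + dy
        path.append((x, y))
    erased, index = [], {}            # chronological loop-erasure
    for v in path:
        if v in index:
            for w in erased[index[v] + 1:]:
                del index[w]
            del erased[index[v] + 1:]
        else:
            index[v] = len(erased)
            erased.append(v)
    return len(erased) - 1            # number of steps of the erased path

rng = random.Random(3)
n1, n2 = 10, 40
m1 = sum(lerw_length(n1, rng) for _ in range(200)) / 200
m2 = sum(lerw_length(n2, rng) for _ in range(200)) / 200
slope = math.log(m2 / m1) / math.log(n2 / n1)
print(slope)   # a rough estimate of beta_2 = 5/4
```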
We present below some of these results.

In the two-dimensional case, Lawler obtained the asymptotic probability that the path of a LERW contains the edge $[0, 1]$ while it crosses a square of side length $2n$ [116, Theorem 1.1]. This estimate gives a precise asymptotic for the growth exponent. We refer to [18, Corollary 3.15] for details on the connection between the crossing probability and the growth exponent.

Theorem 2.3.4 (Lawler [116]). There exist absolute constants $c_1, c_2 > 0$ such that
\[ c_1 n^{5/4} \le E(M_2(n)) \le c_2 n^{5/4}. \tag{2.6} \]

The result extends to more general planar graphs. Proposition 2.3.5 shows that the growth exponent is a function of the dimension, and it does not depend on the particularities of $\mathbb{Z}^2$. This is an instance of the universality of the growth exponent. Let $\bar{S}$ be an irreducible bounded symmetric random walk, starting at the origin, on a two-dimensional lattice. As on $\mathbb{Z}^2$, we set $\bar{\xi}_n = \inf\{t \ge 0 : \|\bar{S}(t)\| > n\}$ and $\bar{M}(n) = |\mathrm{LE}(\bar{S}[0, \bar{\xi}_n])|$.

Proposition 2.3.5 (Masson [136]). The limit
\[ \lim_{n \to \infty} \frac{\log E(\bar{M}(n))}{\log n} = \frac{5}{4}, \]
and hence the growth exponent for the LERW on a two-dimensional lattice is $\frac{5}{4}$.

In the critical dimension $d = 4$, Lawler obtained the logarithmic corrections for a related exponent; these corrections had been predicted by the physics community. Let $K_n := |\mathrm{LE}(S[0, n])|$, that is, the number of points kept in the loop-erasure of a simple random walk of $n$ steps. It was proved in [114] that for loop-erased random walks in $d = 4$,
\[ \lim_{n \to \infty} \frac{E(K_n)}{c_4\, n (\log n)^{-1/2}} = 1. \]
In comparison, $E(K_n) \sim c_d n$ for $d \ge 5$.

In higher dimensions, the behaviour is Gaussian, and precise asymptotics are available (see [114]). For $d \ge 5$, there exist $c_1, c_2 > 0$ such that
\[ c_1 n^2 \le E(M_d(n)) \le c_2 n^2. \]

2.3.3 Hitting probabilities

General estimates on the harmonic measure are enough for the study of the two-dimensional loop-erased random walk. In three dimensions, we require a result specific to loop-erased random walks.

Theorem 2.3.6 (Sapozhnikov-Shiraishi [148, Theorem 3.1]). Let $\gamma$ be a loop-erased random walk on $B(n)$.
There exist $\eta > 0$ and an absolute constant $C < \infty$ such that for all $\varepsilon > 0$ and $n \ge 1$,
\[ P\left(\text{for all } x \in B(n) \text{ with } \mathrm{dist}(x, \gamma) \le \varepsilon^2 n: \; P^x\left(\xi^S_{B(x, \sqrt{\varepsilon}\, n)} < \left(\tau^S_\gamma\right)^+\right) \le \varepsilon^\eta \right) \ge 1 - C\varepsilon, \]
where $S$ is an independent simple random walk on $\mathbb{Z}^3$ started at $x$.

Subsection 5.3.4 presents variants of Theorem 2.3.6 for infinite loop-erased random walks and loop-erased random walks stopped at a random boundary.

2.4 Scaling limits of loop-erased random walks

In this section, we discuss the scaling limits of loop-erased random walks. In contrast with the case of simple random walks, the limit process depends on the dimension of the space.

2.4.1 Two dimensions

We begin with the planar case $\mathbb{Z}^2$. The complex plane reveals a rich structure for two-dimensional processes. In this subsection, we work on $\mathbb{C}$ and denote the unit ball (or disk) by $\mathbb{D} = B_E(0, 1)$.

We have an explicit description of the scaling limit of the two-dimensional loop-erased random walk: the Schramm-Loewner evolution with parameter 2 (SLE(2)). We introduce SLE in this specific case for comparison with the scaling limits in higher dimensions.

Radial SLE

The Schramm-Loewner evolution (SLE($\kappa$)) is a one-parameter family of conformally invariant scaling limits of two-dimensional discrete models. The following results are known as we take the scaling limit.
• The loop-erased random walk converges to SLE(2) [122, 149].
• The interface of the planar critical Ising model converges to SLE(3) [46].
• The harmonic explorer converges to SLE(4) [150].
• SLE(6) corresponds to the scaling limit of critical percolation on the triangular lattice (proof outlined in [159, 160] and completed in [41, 42]).
• The Peano curve of the uniform spanning tree converges to SLE(8) [122].
SLE(8/3) is the conjectured limit of the self-avoiding random walk; a close relation between Brownian motion and SLE(8/3) supports this conjecture [121].

SLE($\kappa$) is defined over domains $D \subset \mathbb{C}$. We distinguish two points of $D$, where the process starts and finishes.
Radial SLE corresponds to $\mathbb{D}$ with the process starting at a point on the boundary and finishing at the origin. On the other hand, chordal SLE refers to the process on the upper half-plane $\mathbb{H}$ starting at $0$ and ending at $\infty$.

Let us describe radial SLE(2). We follow the construction in [122] and refer the reader to proofs in [30]. The proofs in [30] are for the chordal case, but they also apply to radial SLE after a conformal transformation.

We say that $K$ is a $\mathbb{D}$-hull if $K$ is a compact subset of $\bar{\mathbb{D}}$ and $\mathbb{D} \setminus K$ is a simply connected domain. There is a one-to-one correspondence between $\mathbb{D}$-hulls and conformal homeomorphisms
\[ g_K : \mathbb{D} \setminus K \to \mathbb{D} \tag{2.7} \]
satisfying $g_K(0) = 0$ and $g'_K(0) \ge 0$. The Riemann mapping theorem and the Schwarz lemma provide this bijection (see [30, Corollary 1.4]). We will look at families of $\mathbb{D}$-hulls. We say that $(K_t : t \ge 0)$ is increasing if $K_t \subsetneq K_s$ for $t < s$. Moreover, a family $(K_t : t \ge 0)$ satisfies the local growth property if
\[ \mathrm{diam}(K_{t,t+h}) \to 0 \text{ as } h \downarrow 0, \text{ uniformly on compacts in } t, \]
where $K_{t,t+h} = g_{K_t}(K_{t+h} \setminus K_t)$. Simple continuous curves provide the most relevant example of a family of $\mathbb{D}$-hulls. If $\eta : [0, \infty) \to \bar{\mathbb{D}}$ is a continuous simple curve with $\eta(0) \in \partial\mathbb{D}$, $\lim_{t \to \infty} \eta(t) = 0$ and $\eta(0, \infty) \subset \mathbb{D}$, then $K_t = \eta[0, t]$ defines an increasing family of $\mathbb{D}$-hulls with the local growth property.

We also have a correspondence between continuous functions
\[ W : [0, \infty) \to \partial\mathbb{D} \]
and increasing families of $\mathbb{D}$-hulls satisfying the local growth property
\[ (K_t : t \ge 0) \tag{2.8} \]
with $K_0 \in \partial\mathbb{D}$, $K_t \setminus K_0 \subset \mathbb{D}$ for $t \in (0, \infty)$, and the assumption that $0$ is in the closure of $\cup_{t \ge 0} K_t$.

Given $(K_t : t \ge 0)$ satisfying (2.8), let $g_t := g_{K_t}$ be as in (2.7) for each $t \ge 0$. We further assume that the conformal maps are parameterized so that $g'_t(0) = \exp(t)$. Then, for all $t \ge 0$, there exists a unique point contained in $\bar{K}_{t,t+h}$ for all $h > 0$.
We have that the limit
\[ W(t) := \lim_{h \to 0} \bar{K}_{t,t+h} \tag{2.9} \]
exists, and $W : [0, \infty) \to \partial\mathbb{D}$ defines a continuous function [30, Proposition 7.1].

Loewner's Slit Mapping Theorem [131] provides a crucial observation to reverse the construction.

Theorem 2.4.1 (Loewner [131]). The conformal homeomorphism $g_t$ satisfies the differential equation
\[ \partial_t g_t(z) = -g_t(z)\, \frac{g_t(z) + W(t)}{g_t(z) - W(t)}, \tag{2.10} \]
where $W$ is the continuous map (2.9). Clearly $g_0(z) = z$, for all $z \in \mathbb{D}$.

In light of (2.10), we call $W$ the driving function of $(K_t : t \ge 0)$.

Now, we start with a continuous function $W : [0, \infty) \to \partial\mathbb{D}$. We define a conformal map $g_t$ as the solution of the ODE (2.10) with initial value $g_0(z) = z$, up to some time $\tau(z) \in (0, \infty]$. For $t \ge \tau(z)$, the solution to (2.10) does not exist. We then define the hull at time $t$ as
\[ K_t = \{z \in \bar{\mathbb{D}} : \tau(z) \le t\}, \]
and $D_t := \mathbb{D} \setminus K_t$, so that the domain of $g_t$ is $D_t$ and $g_t$ maps it onto $\mathbb{D}$. Then $(K_t : t \ge 0)$ is an increasing family of $\mathbb{D}$-hulls with the local growth property, and $W$ is its driving function [30, Proposition 8.2].

Schramm defined the Schramm-Loewner evolution (first known as the stochastic Loewner evolution) in his influential work on scaling limits of the loop-erased random walk [149].

Definition 2.4.2. Radial Schramm-Loewner evolution with parameter $\kappa$ (SLE($\kappa$)) is the process of random $\mathbb{D}$-hulls $(K_t : t \ge 0)$ with driving function
\[ W(t) = \exp(i B(\kappa t)), \]
where $B : [0, \infty) \to \mathbb{R}$ is a Brownian motion.

We define radial SLE similarly in any simply connected domain. If $D$ is a simply connected domain containing $0$, the radial SLE curves in $D$ start at $\partial D$ and converge to $0$ as $t \to \infty$. A fundamental property of SLE is its conformal invariance.

Proposition 2.4.3 (Conformal invariance). Let $D$ be a simply connected domain containing $0$ and let $x \in \partial D$. Let $\eta_{\mathbb{D},1,0}(\kappa)$ denote the law of SLE($\kappa$) in $\mathbb{D}$ between $1$ and $0$, and let $\nu_{D,x,0}$ be the law of SLE($\kappa$) in $D$ between $x$ and $0$.
If $g : \mathbb{D} \to D$ is the unique conformal map between $\mathbb{D}$ and $D$ with $g(1) = x$ and fixing $0$, then
\[ \nu_{D,x,0} = g \circ \eta_{\mathbb{D},1,0}(\kappa). \]

Convergence of LERW to SLE

Let $D \subsetneq \mathbb{C}$ be a simply connected domain with $0 \in D$ and let $D^\delta = \delta\mathbb{Z}^2 \cap D$. Let $\nu_{D^\delta}$ be the law of a loop-erased random walk on $D^\delta$, started at $0$ and stopped when it hits $\partial D^\delta$. Let $\eta_D$ be the law of a radial SLE(2) path from $0$ to the boundary of $D$.

Theorem 2.4.4 (Lawler-Schramm-Werner [122, Theorem 1.1]). The measures $\nu_{D^\delta}$ converge weakly to $\eta_D$ as $\delta \to 0$, in the space of unparameterized curves $(\bar{\mathcal{C}}, \rho_{\mathcal{C}})$.

The planar case is well understood for the scaling limit of the simple random walk. Here, let $\mathcal{W}$ denote the law of a planar Brownian motion started at $0$ and stopped on its first exit from the disk $\mathbb{D} = B_E(0, 1)$ in the complex plane. Let $G$ be a planar graph such that the simple random walk on it is irreducible. We denote by $\mu^\delta$ the law of the simple random walk on the scaled graph $\delta G$, started at $0$ and stopped when it exits $\mathbb{D}$, and by $\nu^{\mathbb{D},G}_\delta$ the law of the loop-erased random walk on $\delta G \cap \mathbb{D}$.

Theorem 2.4.5 ([171, Theorem 1.1]). Let $(\delta_n)_{n \in \mathbb{N}}$ be a sequence converging to $0$. If $\mu^{\delta_n}$ converges weakly to $\mathcal{W}$ as $n \to \infty$, then $\nu^{\mathbb{D},G}_{\delta_n}$ converges weakly to $\eta_{\mathbb{D}}$.

Lawler and Viklund proved the convergence of [122] for the natural parametrization. They considered a planar LERW parameterized by its renormalized length, and showed that these curves converge to SLE(2) parameterized by its Minkowski content [119].

2.4.2 Three dimensions

In the three-dimensional case, Kozma proved the existence of the scaling limit of the loop-erased random walk in a polyhedral domain, along the scaling subsequence $2^{-n}$ [107].

For a set $D \subset \mathbb{R}^3$ and $a \in \mathbb{R}^3$, we write $D_{2^{-n}} := D \cap 2^{-n}\mathbb{Z}^3$ and $a_{2^{-n}}$ for the closest point to $a$ in $2^{-n}\mathbb{Z}^3$.

Theorem 2.4.6 (Kozma [107, Theorem 6 and Subsection 6.1]). Let $D \subset \mathbb{R}^3$ be a polyhedron and let $a \in D$. Let $L_{2^{-n}}$ be a loop-erased random walk on $D_{2^{-n}}$, started at $a_{2^{-n}}$ and stopped at $\partial D_{2^{-n}}$.
Then the law of $L_{2^{-n}}$ converges weakly as $n \to \infty$, with respect to the Hausdorff topology. Moreover, if $\mathcal{K}$ is a sample of the scaling limit of $L_{2^{-n}}$, then $\mathcal{K}$ is invariant under dilations and rotations.

Sapozhnikov and Shiraishi proved further topological properties.

Theorem 2.4.7 (Sapozhnikov-Shiraishi [148, Theorem 1.2]). The scaling limit $\mathcal{K}$ is a simple path, almost surely.

Heuristically, the growth exponent gives an estimate of the number of boxes (or cells in $\mathbb{R}^3$) hit by the loop-erased random walk. The number of such boxes is related to the Hausdorff dimension of the LERW. Shiraishi proved that, for the scaling limit of loop-erased random walks, the growth exponent is the Hausdorff dimension. The next theorem builds on an upper bound in [148].

Theorem 2.4.8 (Shiraishi [156, Theorem 1.1.1]). The Hausdorff dimension of $\mathcal{K}$ is equal to $\beta_3$ (as defined in (2.5)), almost surely.

2.4.3 Four and higher dimensions

In dimensions $d \ge 4$, the loop-erased random walk converges to Brownian motion. In the case $d = 4$, the exponents present logarithmic corrections [110, 111]. The scaling is the following.

Theorem 2.4.9 (Lawler [113, Theorem 7.7.6]). Let $d = 4$. Consider a simple random walk $S$ on $\mathbb{Z}^4$ and denote its loop-erasure by $L[0, n] = \mathrm{LE}(S[0, n])$. There exists a non-negative sequence $(a_n)$ such that
\[ L^*_n(t) = \frac{\sqrt{d}\,\sqrt{a_n}}{\sqrt{n}}\, L([nt]), \quad t \in [0, 1], \]
converges weakly to a $4$-dimensional Brownian motion in the space of continuous functions $C_{\mathbb{R}^4}[0, 1]$.

The scaling is easier for $d \ge 5$. In these high dimensions, the simple random walk intersects itself infrequently and in relatively small loops. After loop-erasure, the LERW preserves a positive fraction of the points of the simple random walk. We denote this fraction by $a$. Moreover, the erased loops are negligible when we rescale the space. Hence, as we take the scaling limit, the loop-erased random walk behaves like a random walk scaled by $a$. Therefore, a high-dimensional loop-erased random walk converges to Brownian motion in the scaling limit.
Lawler proved this convergence in [110]; for a concise proof, we refer to [113].

Theorem 2.4.10 (Lawler [113, Theorem 7.7.6]). Let S be a simple random walk on Z^d, with d ≥ 5. Denote its loop-erasure by L[0, n] = LE(S[0, n]) and write

L*_n(t) = (√d / √a) · L([nt]) / √n,  t ∈ [0, 1].

Then L*_n converges weakly to a d-dimensional Brownian motion, in the space of continuous functions C_d[0, 1] endowed with the supremum norm.

Chapter 3

Uniform Spanning Forests

The uniform spanning forest on Z^d arises as the infinite-volume limit of the uniform spanning tree measures on a growing sequence of boxes. Within probability theory, Pemantle was the first to study uniform spanning forests [143], while the work of Benjamini, Lyons, Peres, and Schramm brought the field to maturity [29]. In this chapter, we first introduce the definition and basic properties of the uniform spanning forest in Section 3.1. A remarkable feature of the USF is its close relation to other probabilistic models. Section 3.2 describes the relation of uniform spanning forests to other models in statistical mechanics, while Section 3.3 collects further connections in the form of sampling algorithms. We finish the chapter with a survey of results on scaling limits of uniform spanning forests in Section 3.5.

3.1 Definition and basic properties

We follow the definition of uniform spanning forests in [132, Chapter 10]. In a finite and connected graph G = (V, E), a spanning tree of G is a connected subgraph T such that, for any pair of vertices v, w ∈ G, there is a unique path in T connecting v and w. The uniform spanning tree (UST) of G is a uniform sample from the collection of spanning trees of G.

For an infinite graph, we take a weak limit of the uniform spanning tree measures over an increasing sequence of subgraphs. Let G be an infinite, connected, and locally finite graph. An exhaustion of G is a sequence G_n = (V_n, E_n) of induced subgraphs of G that are finite and connected, and such that V_n ⊂ V_{n+1} and V = ∪_n V_n.
Let UST^F_{G_n} be the uniform spanning tree measure on G_n; the superscript F indicates free boundary conditions. Alternatively, for each induced subgraph G_n, we let G^W_n be G_n with a wired boundary. We denote the uniform spanning tree measure of G^W_n by UST^W_{G_n}; the superscript W indicates wired boundary conditions.

Let Ω = {0, 1}^E be the space of subgraphs of the infinite graph G. Each element ω = (ω_e)_{e∈E} ∈ Ω represents a subgraph of G, under the correspondence that an edge e ∈ E is present in the associated subgraph if, and only if, ω_e = 1. We endow Ω with the product topology, and B_Ω denotes the corresponding Borel σ-algebra. If B is a finite set of edges of E and T is a random spanning tree, then the limits

lim_{n→∞} UST^F_{G_n}(B ⊂ T),  lim_{n→∞} UST^W_{G_n}(B ⊂ T)

exist and do not depend on the exhaustion G_n (see [132, Section 10.1]). We thus define the free uniform spanning forest (FUSF) measure and the wired uniform spanning forest (WUSF) measure of G as the weak limits

UST^F_{G_n} ⇒ FUSF,  UST^W_{G_n} ⇒ WUSF,  as n → ∞,

respectively.

In this thesis, we study uniform spanning forests of Z^d. On Z^d the free and the wired uniform spanning forests coincide, WUSF = FUSF, so we refer to both as the uniform spanning forest measure of Z^d (USF).

Theorem 3.1.1 (Pemantle [143]). The uniform spanning forest measure of Z^d is supported on disconnected subgraphs in dimensions d ≥ 5, and on connected subgraphs in dimensions d ≤ 4.

We refer to a random subgraph U of Z^d with the USF measure as a uniform spanning forest when d ≥ 5. In dimensions d = 2, 3, and 4, U is simply called a uniform spanning tree.

Since a uniform spanning tree T connects vertices without creating cycles, the presence of an edge in T depends on the other edges. We state this intuition as the negative correlation property.

Proposition 3.1.2. Let T be a uniform spanning forest. For two different edges e, f ∈ E,

P(e ∈ T | f ∈ T) ≤ P(e ∈ T).

We refer to [77, Theorem 2.1] for a proof of Proposition 3.1.2 for uniform spanning trees on a finite graph.
It extends to uniform spanning forests by taking limits.

3.2 Relation to other models

A remarkable feature of uniform spanning forests is their deep relation to other probabilistic models. These include electric networks [29, 40, 101], the random-cluster model [76–78], the Gaussian free field [31, 118], the bi-Laplacian Gaussian field [123], domino tilings [95], the Abelian sandpile model [15, 89, 91, 92, 134], and the rotor-router model [44, 45, 87]. A different type of connection is through sampling algorithms, as is the case for the simple random walk [5, 38], the loop-erased random walk [29, 169], and the interlacement process [88]. We review these sampling algorithms in Section 3.3.

We refer to [132, Chapters 2 and 4] and [90, Section 4] for the connection between electric networks and uniform spanning trees. The lecture notes [167, Chapter 2] present a concise explanation of the relation between uniform spanning trees and the discrete Gaussian free field. In [90, Section 10], we find the Majumdar-Dhar correspondence between Abelian sandpiles and uniform spanning trees. In this section, we focus on the connection to the random-cluster model, following [77].

3.2.1 The random-cluster model

The random-cluster model unifies percolation, the Ising model, and the Potts model in a single framework. The uniform spanning tree is a limit case, in the sense of Theorem 3.2.1.

Let G = (V, E) be a finite graph with configuration space Ω = {0, 1}^E. An edge e ∈ E in state 1 is open, while state 0 indicates a closed edge. For each ω ∈ Ω, let η(ω) = {e ∈ E : ω(e) = 1} and let k(ω) be the number of connected components of the subgraph (V, η(ω)). The random-cluster measure on Ω with parameters p ∈ [0, 1] and q ∈ (0, ∞) is given by

φ_{p,q}(ω) = (1/Z) { ∏_{e∈E} p^{ω(e)} (1 − p)^{1−ω(e)} } q^{k(ω)},  ω ∈ Ω,

where Z is the normalizing constant, also known as the partition function.

Theorem 3.2.1.
The random-cluster measure φ_{p,q} converges weakly to the uniform spanning tree measure UST as q → 0, under the condition that p → 0 and q/p → 0.

3.3 Sampling algorithms

The sections above introduced the simple random walk, the loop-erased random walk, the interlacement process, and the continuum random tree. Our next task is to show the connection of these models to the uniform spanning forest. In the case of the interlacement process, the simple random walk, and the loop-erased random walk, these connections appear as sampling algorithms.

3.3.1 Wilson's algorithm

Wilson's algorithm is an essential theoretical tool in the study of the USF [132]. Since it gives an explicit connection between loop-erased random walks and spanning trees, it allows us to translate questions about uniform spanning forests into questions about loop-erased random walks.

Algorithm. Let G be a graph with a finite set of vertices V = {v_i}.

• Set v_1 as the root and T_1 = {v_1}.

• For i = 1, . . . , |V| − 1, given a subtree T_i, let γ be a loop-erased random walk started at v_{i+1} and stopped when it hits T_i. Then we set T_{i+1} = T_i ∪ {γ}.

Theorem 3.3.1 (Wilson [169]). The tree T_{|V|} is a uniform spanning tree of G, and T_i is the subtree of T_{|V|} spanned by {v_1, . . . , v_i}.

Wilson's algorithm also works for infinite recurrent graphs. For an infinite transient graph, Benjamini, Lyons, Peres, and Schramm extended Wilson's algorithm in [29] as follows. Note that in Wilson's algorithm, each loop-erased random walk runs until it hits the current tree, which contains the root. In the extension to transient graphs, we let the loop-erased random walk continue until it "hits infinity". For this reason, we call the extension Wilson's algorithm rooted at infinity.

Algorithm (Wilson's algorithm rooted at infinity). Let G be an infinite transient graph and let V = {v_0, v_1, . . .} be an enumeration of its vertices.

• Let γ_0 be an infinite loop-erased random walk started at v_0. Let T_0 = γ_0 and set v_0 as its root.

• Given T_i, let γ_i be a loop-erased random walk started at v_{i+1}.
This loop-erased random walk is either infinite, or stopped when it hits T_i. Then set T_{i+1} = T_i ∪ γ_i.

Theorem 3.3.2 (Benjamini-Lyons-Peres-Schramm [29]). T_k is the subtree of the uniform spanning forest of G spanned by {v_0, . . . , v_k}.

3.3.2 Aldous-Broder algorithm

The Aldous-Broder algorithm samples a uniform spanning tree of a finite graph. It was proposed independently by Aldous and Broder [5, 38]. Given a finite and connected graph G, let R be a simple random walk on G. For each vertex v ∈ G, let τ(v) := τ_{v} be the hitting time of v, as defined in (2.1). Then the oriented edge

e(v) := (R(τ(v) − 1), R(τ(v)))

is the first-entrance edge of v.

Theorem 3.3.3 (Aldous-Broder [5, 38]). The set of first-entrance edges

{−e(v) : v ∈ G, v ≠ R(0)}

has the distribution of a uniform spanning tree of G, oriented towards the root R(0).

3.3.3 Interlacement Aldous-Broder algorithm

The interlacement Aldous-Broder algorithm extends the classic algorithm to infinite graphs. Instead of taking the first-entrance edges of the simple random walk, the interlacement Aldous-Broder algorithm takes the first-entrance edges of an interlacement process [88].

Process (Interlacement Aldous-Broder). Let I be an interlacement process on Z^d. For each v ∈ Z^d, we define the first hitting time of the vertex v after time t as

τ^t(v) := inf{s ≥ t : ∃ (W, s) ∈ I such that v ∈ W}.  (3.1)

Let e^t(v) be the oriented edge of Z^d that is traversed by the trajectory W_{τ^t(v)} as it enters v for the first time. For each t ∈ R, let

AB_t := {−e^t(v) : v ∈ Z^d}.  (3.2)

Theorem 3.3.4 (Hutchcroft [88, Theorem 1.1]). The set AB_t has the law of the uniform spanning forest of Z^d, oriented towards the root.

3.4 Encodings of uniform spanning trees

The uniform spanning tree of Z³ has a natural embedding into the space R³. As we take the scaling limit, the number of vertices of the UST within any neighbourhood increases and eventually fills the space; the scaling limit is no longer a graph.
Therefore, the study of these scaling limits requires an encoding of the uniform spanning tree that carries its properties on to the limit.

3.4.1 Paths

Schramm proposed an encoding of the uniform spanning tree in terms of the collection of its paths [149]. Let U_n be the UST on 2^{-n}Z³, with the point at ∞ added to obtain a closed subset of S³. For a, b ∈ U_n, ω^{a,b}_n denotes the path in U_n from a to b. The paths ensemble

I_n := {(a, b, ω^{a,b}_n) : a, b ∈ U_n}

is the collection of all paths in the UST of 2^{-n}Z³. Note that I_n is a subset of P := S³ × S³ × H(S³), where H(S³) is the collection of closed subsets of S³. P is a compact space and we endow it with the Hausdorff topology:

d_H(A, B) = inf{r ≥ 0 : A ⊆ B_r, B ⊆ A_r},  A, B ∈ P,

where A_r = {x ∈ X : d(x, A) ≤ r} denotes the r-expansion of a set A.

The topology of paths ensembles quantifies the difference in shape between paths joining vertices. In particular, it does not take into account the length of these paths inherited from the graph distance. This path-length is known as the intrinsic distance. A simple approach to studying the convergence of the intrinsic distance is in terms of finite-dimensional distributions. For each fixed k ∈ N, we consider the joint distribution of the distances between k vertices chosen uniformly at random. This approach gives insight into the structure of the typical subtree spanned by k vertices.

3.4.2 Graphs as metric spaces

Two compact metric spaces (X, d_X) and (Y, d_Y) are isometrically equivalent if there exists an isometry φ : X → Y. Let M be the space of isometry classes of compact metric spaces. We endow M with the Gromov-Hausdorff distance d_GH, defined by

d_GH(X, Y) = inf_{φ, φ̃, Z} d^Z_H(φ(X), φ̃(Y)),  X, Y ∈ M,

where the infimum is over all metric spaces Z and isometric embeddings φ : X → Z and φ̃ : Y → Z.

In addition to the metric structure, we consider a measure on the compact metric space. Recall that a Polish space is a separable and completely metrizable topological space.
For a Polish space X, we denote by M_f(X) the set of all finite non-negative Borel measures on X. The Prohorov metric is defined as

d^X_P(µ, ν) = inf{ε > 0 : µ(A) ≤ ν(A^ε) + ε and ν(A) ≤ µ(A^ε) + ε for all Borel sets A}.

Let X = (X, d_X, µ_X, ρ_X) and Y = (Y, d_Y, µ_Y, ρ_Y) be two compact, rooted, measured metric spaces. We define the Gromov-Hausdorff-Prohorov distance by

d_GHP(X, Y) = inf_{Φ, Φ̃, Z} { d_Z(Φ(ρ_X), Φ̃(ρ_Y)) + d^Z_H(Φ(X), Φ̃(Y)) + d^Z_P(Φ_* µ_X, Φ̃_* µ_Y) },

where the infimum is over all Polish spaces (Z, d_Z) and isometric embeddings Φ : X → Z and Φ̃ : Y → Z.

We are mainly interested in compact metric spaces with a tree-like structure. A real tree (or R-tree) (T, d_T) is a metric space satisfying the following conditions for any x, y ∈ T with D = d_T(x, y):

(i) there exists a unique isometric map γ_{x,y} : [0, D] → T such that γ_{x,y}(0) = x and γ_{x,y}(D) = y;

(ii) if φ : [0, 1] → T is a continuous injective map such that φ(0) = x and φ(1) = y, then φ([0, 1]) = γ_{x,y}([0, D]).

Let T denote the space of real trees and T_c the subspace of compact real trees. We write T^GH_c and T^GHP_c for the corresponding spaces of isometry classes under the Gromov-Hausdorff and Gromov-Hausdorff-Prohorov metrics, respectively.

Theorem 3.4.1 ([67, Theorem 1], [1, Corollary 3.2]). The metric spaces (T^GH_c, d_GH) and (T^GHP_c, d_GHP) are Polish.

A graph G is described by its set of vertices V and its set of edges E. In the scaling limit, it is convenient to consider a graph as a metric space. If G is a connected graph, we endow the set of vertices with the graph metric

d_G(x, y) = inf_{λ(x,y)} len(λ(x, y)),  x, y ∈ V,

where the infimum is taken over all paths λ(x, y) in G between x and y. We thus say that (G, d_G) is a metric space. Furthermore, we endow G with the counting measure µ_G; this measure is uniform over the set of vertices. If G is a finite graph and U is a uniform spanning tree of G, it is immediate that (U, d_U, µ_U) is a measured real tree. In Chapter 5 we consider a uniform spanning tree of Z³ as a locally compact metric space.
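On a finite metric space, the infimum in the definition of the Prohorov metric can be approximated by brute force: check the two defining inequalities for every subset A over a grid of candidate ε. The sketch below (function names and grid resolution are our choices) illustrates this for two measures on a two-point space at distance 1, where d_P between δ_0 and the mixture 0.75 δ_0 + 0.25 δ_1 equals 0.25, since for ε < 1 the expansion of {0} misses the other point and the mass inequality forces ε ≥ 0.25:

```python
from itertools import chain, combinations

def prohorov_grid(points, dist, mu, nu, grid):
    """Smallest eps in `grid` with mu(A) <= nu(A^eps) + eps and
    nu(A) <= mu(A^eps) + eps for every subset A of a finite space;
    a brute-force numerical approximation of d_P(mu, nu)."""
    subsets = list(chain.from_iterable(
        combinations(points, r) for r in range(1, len(points) + 1)))

    def mass(m, A):
        return sum(m[p] for p in A)

    def expand(A, eps):                      # closed eps-expansion A^eps
        return [q for q in points if any(dist(q, p) <= eps for p in A)]

    for eps in sorted(grid):
        if all(mass(mu, A) <= mass(nu, expand(A, eps)) + eps and
               mass(nu, A) <= mass(mu, expand(A, eps)) + eps
               for A in subsets):
            return eps
    return None

points = [0, 1]
dist = lambda p, q: abs(p - q)
grid = [i / 100 for i in range(101)]
# point mass at 0 versus the mixture 0.75*delta_0 + 0.25*delta_1
d = prohorov_grid(points, dist, {0: 1.0, 1: 0.0}, {0: 0.75, 1: 0.25}, grid)
```

This exhaustive check is exponential in the number of points, so it is only a toy illustration of the definition, not a practical algorithm.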
We extend the topological framework introduced in Subsection 5.2.

3.5 Scaling limits of uniform spanning trees

In this section, we overview different results on the scaling limits of uniform spanning trees. We begin by discussing the case of uniform spanning trees on finite graphs, and then we pass to the results known for infinite graphs.

3.5.1 Finite graphs

In the mean-field case, the scaling limit of the uniform spanning tree of a finite graph is the Brownian continuum random tree. We begin with the presentation of this universal object.

The Brownian continuum random tree

The Brownian continuum random tree (CRT) is a compact rooted real tree. Aldous introduced the Brownian CRT as the scaling limit of critical Galton-Watson trees with finite variance, conditioned to have a large number of vertices [6]. We can also describe the Brownian CRT in terms of a Brownian excursion. A Brownian excursion e : [0, 1] → [0, ∞) is a Brownian motion conditioned to be positive on (0, 1) and to satisfy e(0) = e(1) = 0.

Definition 3.5.1 ([124, Theorem 2.2]). Let

d(s, t) = 2e(s) + 2e(t) − 4 inf_{r∈[s∧t, s∨t]} e(r),  for s, t ∈ [0, 1].

We define T as the quotient space of [0, 1], where we identify points s and t with d(s, t) = 0. The CRT is the compact real tree (T, d_T).

Let p be the canonical projection of [0, 1] onto T. The interval [0, 1] is endowed with the Lebesgue measure L. Then the uniform measure µ_T on T is the push-forward measure given by

µ_T(B) := p_♯(L)(B) = L(p^{-1}(B)),

where B is a Borel subset of T.

Alternatively, we can sample the Brownian CRT with the stick-breaking algorithm. This description corresponds to the original definition of the Brownian CRT in [6].

Algorithm (Stick-breaking algorithm). Let R be an inhomogeneous Poisson process on [0, ∞) with intensity measure t dt. We write (R_n)_{n∈N} for the locations of the points of the Poisson process in increasing order. Denote the lengths of the sticks by L_1 = R_1 and L_n = R_n − R_{n−1}, and consider the following sequence of compact real trees.

1.
T_1 is the closed line segment with length L_1. Label the ends of T_1 as z_1 and z_2. The point z_1 is the root of T_1.

2. For n > 1, let x be a uniform point of T_{n−1}. Attach at x a closed line segment of length L_n. We label by z_{n+1} the end of this segment on the other side of x.

Theorem 3.5.2 (Aldous [7, Corollary 22]). The tree T_n in the stick-breaking algorithm is equal in distribution to the subtree of the Brownian CRT spanned by n + 1 leaves independently sampled from its uniform measure µ_T. Moreover, the closure of the set ∪_{n∈N} T_n has the distribution of the Brownian CRT T.

For each k ∈ N, let w_1, . . . , w_k be independent random points with law µ_T. We then define the finite-dimensional distribution F_k of the Brownian CRT as the joint distribution

(d_T(w_i, w_j))_{1≤i<j≤k}.  (3.3)

We remark that d_T(w_1, w_2) corresponds to the length of T_1 in the stick-breaking algorithm, so that

P(d_T(w_1, w_2) > λ) = exp(−λ²/2).

More generally, the Brownian CRT is the scaling limit of trees arising in combinatorial models. Among others, uniform random finite trees and uniform random unordered trees converge to the Brownian CRT [135].

Convergence of uniform spanning trees to the Brownian CRT

The scaling limit on the complete graph exhibits the mean-field behaviour of a model. In the case of the uniform spanning tree, this scaling limit is the Brownian CRT. The first theorem in this direction is due to Aldous, in the sense of convergence of finite-dimensional distributions. Recall that we defined the joint distribution of the distances between k leaves of the Brownian CRT in (3.3) as F_k.

Theorem 3.5.3 (Aldous [6]). Let K_n be the complete graph on n vertices and let U_n be the uniform spanning tree of K_n. For a fixed k ∈ N, let x_1, . . . , x_k be vertices chosen uniformly at random from K_n and let d_n(x_i, x_j) be the distance between x_i and x_j in U_n. Then

(d_n(x_i, x_j) / √n)_{1≤i<j≤k} → F_k  as n → ∞

in distribution.

Mean-field behaviour is characteristic of high-dimensional graphs. The threshold for a high dimension depends on the model.
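The stick-breaking construction above is easy to simulate: since the Poisson process with intensity t dt has mean measure Λ(t) = t²/2, its ordered points can be generated as R_k = √(2(E_1 + ⋯ + E_k)) with E_i i.i.d. Exp(1). The sketch below (function names are ours) uses this to check numerically that P(d_T(w_1, w_2) > 1) = e^{-1/2}:

```python
import math, random

def stick_lengths(n, rng):
    """First n stick lengths of the stick-breaking construction.

    The Poisson process with intensity t dt has mean measure
    Lambda(t) = t^2 / 2, so its ordered points are
    R_k = sqrt(2 * (E_1 + ... + E_k)) with E_i i.i.d. Exp(1);
    the sticks are the gaps L_k = R_k - R_{k-1}."""
    total, prev, lengths = 0.0, 0.0, []
    for _ in range(n):
        total += rng.expovariate(1.0)
        r = math.sqrt(2.0 * total)
        lengths.append(r - prev)
        prev = r
    return lengths

# Monte Carlo estimate of P(L_1 > 1), which should be close to
# exp(-1/2) = P(d_T(w_1, w_2) > 1)
rng = random.Random(1)
freq = sum(stick_lengths(1, rng)[0] > 1.0 for _ in range(200000)) / 200000
```

Attaching each new stick at a uniform point of the current tree, as in steps 1 and 2, then produces the subtree of Theorem 3.5.2; the snippet only verifies the first stick's law.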
In the case of uniform spanning trees on Z^d, dimension d = 4 is critical and d ≥ 5 is high-dimensional. This characterization is related to the scaling limit of the loop-erased random walk: as we described in Subsection 2.4.3, the LERW on Z^d with d ≥ 4 converges to Brownian motion.

Mean-field behaviour holds for a larger family of connected graphs. Let us introduce some quantities of a graph relevant to the characterization of mean-field behaviour; here we follow [138]. Let G = (V, E) be a finite connected graph on n vertices. Let δ̂(G) be the ratio of the maximum to the minimum degree of G. The lazy random walk X = (X_t)_{t≥0} on the set of vertices V is defined as follows: given X_t, with probability 1/2 the walk stays at the same vertex, X_{t+1} = X_t; with probability 1/2 it changes position, and X_{t+1} is one of the nearest neighbours of X_t chosen uniformly at random. We denote by p_t(u, v) = P_u(X_t = v) its transition probabilities and by π the stationary distribution of the lazy random walk. We define the uniform mixing time of the lazy random walk on G by

t_mix(G) := min{ t ≥ 0 : max_{u,v∈V} | p_t(u, v)/π(v) − 1 | ≤ 1/2 }.

The bubble sum of G is

B(G) := ∑_{t=0}^{t_mix} (t + 1) sup_{v∈V} p_t(v, v).

We say that

(i) G is D-balanced if δ̂(G) ≤ D,

(ii) G is α-mixing if t_mix(G) ≤ n^{1/2−α}, and

(iii) G is θ-escaping if B(G) ≤ θ.

Assumptions (i), (ii) and (iii) were proposed by Michaeli, Nachmias and Shalev in [138] as a characterization of mean-field behaviour for finite graphs (with respect to the UST). They also show that these assumptions are sharp in Theorem 3.5.4 for the diameter of the UST of G, which we denote by diam(UST(G)).

Theorem 3.5.4 (Michaeli-Nachmias-Shalev [138, Theorem 1.1]). For every D, α, θ, ε > 0, there exists a constant C = C(D, α, θ, ε) satisfying the following. Let G be a connected graph on n vertices and assume that it is D-balanced, α-mixing and θ-escaping. Then

P( C^{-1} √n ≤ diam(UST(G)) ≤ C √n ) ≥ 1 − ε.

Graphs satisfying (i), (ii) and (iii) include the d-dimensional torus Z^d_m, the hypercube {0, 1}^m, and expander graphs.
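For a small graph, the uniform mixing time defined above can be computed exactly by iterating the one-step transition matrix of the lazy walk; the following sketch (function names are ours) does this for the 6-cycle:

```python
def lazy_mixing_time(adj):
    """Uniform mixing time of the lazy random walk on a finite graph:
    the smallest t with max_{u,v} |p_t(u,v)/pi(v) - 1| <= 1/2.
    `adj` maps each vertex to the list of its neighbours."""
    vs = sorted(adj)
    deg = {v: len(adj[v]) for v in vs}
    total = sum(deg.values())
    pi = {v: deg[v] / total for v in vs}          # stationary distribution
    # one-step lazy transition probabilities
    p1 = {u: {v: 0.0 for v in vs} for u in vs}
    for u in vs:
        p1[u][u] += 0.5
        for w in adj[u]:
            p1[u][w] += 0.5 / deg[u]
    # p_0 is the identity; iterate p_{t+1} = p_t p1 until mixed
    pt = {u: {v: 1.0 if u == v else 0.0 for v in vs} for u in vs}
    t = 0
    while max(abs(pt[u][v] / pi[v] - 1.0) for u in vs for v in vs) > 0.5:
        pt = {u: {v: sum(pt[u][w] * p1[w][v] for w in vs) for v in vs}
              for u in vs}
        t += 1
    return t

# lazy random walk on the 6-cycle
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
tmix = lazy_mixing_time(cycle)
```

The laziness makes the walk aperiodic, so the iteration terminates; the bubble sum B(G) could then be accumulated from the diagonal entries p_t(v, v) along the same iteration.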
A version of assumptions (ii) and (iii) was first proposed in [144]. However, instead of (i), the results in [144] assume vertex-transitivity. The transitivity hypothesis holds for the d-dimensional torus and the hypercube, and we thus state the convergence of finite-dimensional distributions in these cases.

Theorem 3.5.5 (Peres-Revelle [144, Theorem 1.2]). Let d ≥ 5 and let (G_n) be either the sequence of d-dimensional tori Z^d_m on n vertices, the sequence of hypercubes {0, 1}^m on n vertices, or a d-regular expander family. For a fixed k ∈ N, let y_1, . . . , y_k be points chosen uniformly at random on G_n. We denote by d_n the intrinsic distance on UST(G_n). Then there exists a sequence of constants (β_n), bounded away from 0 and infinity, such that the joint distribution of the distances satisfies

( d_n(y_i, y_j) / (β_n |G_n|^{1/2}) )_{1≤i<j≤k} → F_k

in distribution as n → ∞.

The corresponding result for the finite torus Z^4_m includes logarithmic corrections; these are expected at the critical dimension d = 4.

Theorem 3.5.6 (Schweinsberg [151, Theorem 1.1]). Let (G_n) be the sequence of 4-dimensional tori Z^4_m on n vertices and denote by d_n the intrinsic distance on UST(G_n). For a fixed k ∈ N, let z_1, . . . , z_k be points chosen uniformly at random on G_n. There exists a sequence of constants (γ_n), bounded away from 0 and infinity, such that

( d_n(z_i, z_j) / (γ_n n^{1/2} (log n)^{1/6}) )_{1≤i<j≤k} → F_k

in distribution as n → ∞.

3.5.2 Infinite graphs

In [4], Aizenman, Burchard, Newman, and Wilson described scaling limits of random trees in terms of their collections of subtrees. Following a different approach, Schramm studied in [149] the paths ensembles of the UST, as defined in Subsection 3.4.1. Both [4] and [149] prove the existence of sub-sequential scaling limits in their respective topologies. We present here the sub-sequential scaling limit in the paths ensemble topology. Tightness is an immediate consequence of the definition of the topological space.

Theorem 3.5.7 (Schramm [149, Theorem 1.6]). Let I_δ be the paths ensemble of the UST on δZ².
If µ_δ is the law of I_δ, then there is a sub-sequential weak limit µ_δ → µ, with respect to the space H(S² × S² × H(S²)), as δ → 0.

Although [149] does not prove the full convergence of the paths ensembles, Schramm introduced a characterization of the limit object. He proposed the Schramm-Loewner evolution (SLE) as the conjectured scaling limit of loop-erased random walks, of the Peano curve of the UST, and of other conformally invariant processes in the plane. As we mentioned in Chapter 2, [122] establishes the convergence of the LERW to SLE(2). Building on the work in [149], [122] proves the existence of scaling limits of wired and free USTs in bounded domains with smooth boundary, with respect to their paths ensembles. One of the main results in [122] is the conformal invariance of the scaling limit. With this in mind, we define the scaling limit of the UST on a domain of the complex plane.

Let D ⊊ C be a simply connected domain. We consider both the wired UST on δZ² ∩ D, denoted by WT^D_δ, and the free UST on δZ² ∩ D, denoted by FT^D_δ. As in the infinite-volume case, we consider their paths ensembles. Let WI^D_δ and FI^D_δ be the paths ensembles of WT^D_δ and FT^D_δ, respectively. These paths ensembles are elements of the space H(D̄ × D̄ × H(D̄)) with the Hausdorff topology. We denote their laws by µ^{W,D}_δ and µ^{F,D}_δ.

Theorem 3.5.8 (Lawler-Schramm-Werner [122, Corollary 1.2]). Let D ⊊ C be a simply connected domain such that ∂D is a C¹-smooth simple closed curve. Then the weak limits of the wired UST and the free UST on D exist:

µ^{W,D}_δ → µ^{W,D},  µ^{F,D}_δ → µ^{F,D}

as δ → 0. Moreover, the scaling limits µ^{W,D} and µ^{F,D} are conformally invariant.

The paths ensemble allows us to study the scaling limit as a subset of R². Indeed, Schramm gave a complete topological description of the scaling limit of the planar UST in [149]. Nevertheless, the topology of the paths ensemble is inadequate for studying other properties, such as the intrinsic metric, the uniform measure, and the simple random walk on the limit object.
For this latter purpose, it is more convenient to consider the UST as a measured metric space; in fact, the UST is a real tree.

Barlow, Croydon, and Kumagai considered the uniform spanning tree as a quintuple T = (U, d_U, µ_U, φ_U, 0), where U is a uniform spanning tree of Z², d_U is the intrinsic metric, µ_U is the uniform measure, φ_U is an embedding of U into R², and 0 indicates that the embedded tree is rooted at the origin. Their work in [24] proves the existence of sub-sequential scaling limits of the UST in a Gromov-Hausdorff-Prohorov-type topology; it includes the convergence of the embedding φ_U. Recall that β₂ denotes the growth exponent of the two-dimensional loop-erased random walk.

Theorem 3.5.9 (Barlow-Croydon-Kumagai [24, Theorem 1.1]). Let P_δ be the law of (U, δ^{-β₂} d_U, δ² µ_U, δ φ_U, 0). Then the collection (P_δ)_{δ∈(0,1)} is tight with respect to a Gromov-Hausdorff-Prohorov-type topology.

We remark that the results in [24] rely on a detailed understanding of the growth properties of the two-dimensional loop-erased random walk. In particular, [24] applies results of [116, 136].

The sub-sequential limit was extended to full convergence in the Gromov-Hausdorff-Prohorov topology by Holden and Sun [86]. In [86], the authors prove the existence of the scaling limit of contour functions of the UST; the convergence is in the space of continuous functions endowed with the topology of uniform convergence on compact sets. This topology is sufficiently strong to imply the convergence of the corresponding real trees (see [1]).

Theorem 3.5.10 (Holden-Sun [86, Theorem 1.1, Remark 1.2]). The law of the sequence of measured, rooted spatial trees

(U, δ^{-β₂} d_U, δ² µ_U, δ φ_U, 0)

converges as δ → 0 with respect to a Gromov-Hausdorff-Prohorov-type topology.

Chapter 4

Competitive Growth Processes

An interacting particle system is a random model of evolving spatial configurations. The spatial structure is given by a connected graph G = (V, E). We often refer to the vertices of G as sites.
At any given time, each site is in a state, which is an element of the local state space σ. The set σ may consist of numbers or letters; its elements represent the presence (or absence) of different types of particles. A set of local rules governs the interaction of these particles, and these interactions induce changes in the states of the sites. The global state (or configuration) of the system is given by a Markov chain X = (X_t)_{t≥0} such that X_t = (X_t(v))_{v∈V} takes values in the collection of functions σ^V.

The simple random walk S = (S_n : n ≥ 0) provides an elementary example of a discrete-time interacting particle system on Z^d. In this case, we set σ = {0, 1} as the local state space. Here 0 represents a vacant site, while 1 is a site occupied by the random walk. Then, at each integer time n, the Markov chain for the global configuration is defined as

X_n(z) = 1 if S_n = z, and X_n(z) = 0 otherwise,  for all z ∈ Z^d.

We may think of an occupied site as the position of a single particle. Under this interpretation, at each time n ∈ N, the particle simultaneously produces an identical child and dies. The child-particle occupies a neighbouring site chosen uniformly at random.

Our next example is the branching random walk. A branching random walk represents a growing population of identical particles, where each of them reproduces and moves randomly in space, independently of the others. A branching process (introduced in Section 1.2) drives the reproduction mechanism. We restrict to a binomial offspring distribution to simplify the exposition, and refer to [153, 154] for surveys on this model.

Let T be the genealogical tree of a branching process with offspring distribution Bin(m, p). T is rooted at ρ and has edge set E. Consider a collection of random variables (ζ_e)_{e∈E} indexed by the edges of T. This collection is independent and uniformly distributed over the set of directions of Z^d, {±e_1, . . . , ±e_d}, where e_k(j) = 1{k = j}. For a vertex x in T, the ancestral line ⟦ρ, x⟧ is the unique path of edges from the root ρ to the vertex x.
We denote by ∥x∥ the generation of x, defined as the number of edges in ⟦ρ, x⟧; with this notation, ∥ρ∥ = 0. Let

V(x) := ∑_{e∈⟦ρ,x⟧} ζ_e

be the sum of the increments associated with the ancestors of x. The branching random walk on Z^d, with offspring distribution Bin(m, p), is the collection of random variables (V(x) : x ∈ T). For each generation n, we obtain a finite point process

(V(x_{1,n}), . . . , V(x_{N,n})),

where {x_{i,n}}_{1≤i≤N} is the set of vertices in the n-th generation. Note that N ≥ 0 is random: the number of vertices at generation n is a Galton-Watson process. We can also describe the branching random walk as a (discrete-time) interacting particle system. In this case, the local state space is σ = {0, 1, 2, . . .}, and the state of a site z ∈ Z^d indicates the number of particles at that position. For each integer time n ≥ 0, the global configuration is

X_n(z) = ∑_{x : ∥x∥=n} 1{V(x) = z},  for all z ∈ Z^d.

At each time n, a particle at site z dies and simultaneously gives birth to a random number of particles. The offspring occupy positions uniformly distributed among the nearest neighbours of z.

For the random walk and the branching random walk, particles evolve in discrete time. In the following sections, we consider interacting particle systems in continuous time. We need a framework to justify that the Markov process X = (X_t)_{t≥0} is well-defined; this construction is our main task in Section 4.2. A second limitation of the branching random walk model lies in the independence between different particles. If we think of these systems as spatial population models, competition for resources or predator-prey behaviour are natural assumptions. These assumptions lead to competitive growth processes. As a first example of a competitive growth process, we present the two-type Richardson model, together with the original one-type Richardson model, in Section 4.3.
The motivation for the study of the Richardson model is related to first-passage percolation, which serves as an inspiration for further questions. We thus present first-passage percolation and its connections to the Richardson models in Section 4.4. We finish the chapter with a second example of a competitive growth process, called chase-escape; we overview this model in Section 4.5. This process is closely related to the model that we study in Chapter 7.

4.1 Markov processes

We begin with standard background on Markov processes. In this section we follow [102] and [129].

Let E be a Polish space with Borel σ-algebra B(E). We denote the collection of continuous functions f : E → R by C(E). A map κ : Ω_1 × A_2 → [0, ∞] is a stochastic kernel between the measure spaces (Ω_1, A_1) and (Ω_2, A_2) if
We denotethe expectation corresponding to Pη by Eη.52Now we additionally assume that the space E is locally compact. TheMarkov semigroup {Pt : t ≥ 0} associated to the Markov process X =(Xt)t≥0 starting at η ∈ E is defined asPtf(η) := Eη (f(Xt)) :=∫Ef(ηt)dPηfor bounded functions f : E → R. A Markov process X = (Xt)t≥0 is aFeller process if the Markov semigroup maps the collection of continuousfunctions C(E) into itself, i.e. for every f ∈ C(E), Ptf ∈ C(E) for all t ≥ 0.The Markov processes arising from interacting particle systems are Fellerprocesses. For our purposes, the fundamental property of a Feller process isthat we can define it in terms of its Markov semigroup (see [129, Theorem1.5]).The infinitesimal generator (or generator) of the Markov semigroup{Pt : t ≥ 0} is defined as the operatorGf := limt↓0Ptf − ft, (4.1)where f belongs to a subset of C(E) where the limit (4.1) exists. TheHille-Yosida theorem (see [129, Theorem 2.9]) establishes a one-to-one cor-respondence between infinitesimal generators on C(E) and Markov semi-groups on C(E). This property is crucial for the definition of interactingparticle systems, as we define them in terms of the corresponding generators.Such construction requires additional work. In the next section, we presentsufficient conditions for our interacting particle systems and refer to [129,Chapter 1] for details.4.2 Interacting particle systemsLet G be a connected graph with a countable set of sites V . Recall that wewrite σ = {s1, . . . , sn} for the local state space. Our interest is on the globalconfiguration of V , where each site has a local state. The configurationspace is σV , which is the collection of functions η : V → σ. We endow σV53with the product topology and denote the space of real-valued continuousfunctions on σV by C(σV ).For the description of an interacting particle system, we require a set ofpossible transitions between global states, and rates at which these transi-tions occur. 
Following [129, 130], we describe these two elements as

(i) a set of local maps between global configurations

G = {η^T : σ^V → σ^V : T ⊂ V, |T| < ∞},

where the index T indicates the finite subset of sites where the map η^T changes the values of an element η ∈ σ^V; and

(ii) a collection of non-negative transition rates

{c(T, η) : η^T ∈ G}.

Another common notation is to write c(η, η^T) for c(T, η). We will use c(T, η) in the construction of interacting particle systems, and c(η, η^T) for our examples. We assume that the function c is non-negative, uniformly bounded, and continuous as a function of η.

An interacting particle system is a continuous-time Markov process X = (X_t)_{t≥0} on the configuration space σ^V. Under a suitable set of conditions on the local maps and the rates (see (4.5) and (4.6)), the generator

Gf(η) = Σ_{η^T ∈ G} c(T, η) (f(η^T) − f(η)),   η ∈ σ^V, f ∈ C(σ^V),   (4.2)

defines the Markov process X.

In our examples, the dynamics change the state at one or, at most, two sites at a time. Then the collection of local maps and transition rates is

G = {η^x, η^{x,y} : x, y ∈ V},   {c(x, η), c(x, y, η) : x, y ∈ V}.   (4.3)

We also assume that

c(x, y, η) = p(x, y) if η(x) = 1 and η(y) = 0, and c(x, y, η) = 0 otherwise,   (4.4)

for some collection of non-negative real numbers (p(x, y))_{x,y∈V}. With these assumptions, the generator of X takes the form

Gf(η) = Σ_x c(x, η) (f(η^x) − f(η)) + Σ_{x,y} c(x, y, η) (f(η^{x,y}) − f(η)),

for η ∈ σ^V and f ∈ C(σ^V).

As pointed out above, we require some assumptions on the local maps and their rates. For an interacting particle system of the form (4.3) and satisfying (4.4), it suffices that

sup_{x∈V} Σ_{u∈V} sup_{η∈σ^V} |c(x, η) − c(x, η^u)| < ∞,   (4.5)

and

sup_{y∈V} Σ_{x∈V} p(x, y) < ∞.   (4.6)

Theorem 4.2.1 (Liggett, [130, Theorem B3]). Consider the description of an interacting particle system of the form (4.3) and satisfying (4.4).
If (4.5) and (4.6) are also satisfied, then the closure of (4.2) is the generator of a Feller Markov process X = (X_t)_{t≥0} on the space of global configurations σ^V.

A general construction for a finite local state space σ is in [162, Chapter 4], while [129, Theorem 3.9] gives general conditions for the existence of particle systems with a countable local state space.

The interacting particle systems in this work satisfy the hypotheses of Theorem 4.2.1. In particular, (4.5) is a consequence of the finite range of the rates in our examples. We say that a rate c(x, η) has finite range if there exists a constant C such that c(x, η) depends on η through at most C coordinates of η. Conditions (4.3) and (4.4) are part of the construction, and (4.6) is easy to verify.

Example 4.2.2. Let us define the continuous-time random walk as an interacting particle system. The underlying graph is Z^d and the local state space is σ = {0, 1}. As in the example above of a simple random walk, 0 indicates a vacant site, while 1 indicates a site occupied by the random walk. For nearest-neighbour x, y ∈ Z^d (i.e. |x − y| = 1), we define the local map η^{x,y} : σ^{Z^d} → σ^{Z^d} by

η^{x,y}(z) = η(z) if z ≠ x, y;   η^{x,y}(x) = η(y);   η^{x,y}(y) = η(x),

and the transition rate of this map is

c(η, η^{x,y}) = 1{|x − y| = 1}.   (4.7)

The rate of any other local map is 0. The map η^{x,y} exchanges the position of the particle if (and only if) exactly one of these sites is occupied. The meaning of (4.7) is that the exchange occurs after a random time T, where T ~ Exp(1). For the continuous-time random walk on Z, a simple way to indicate the local maps and their transition rates is by writing

01 --1--> 10,   10 --1--> 01.

An alternative way to define an interacting particle system is with a Poisson process.
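A minimal sketch (ours, not from the source) of the exchange dynamics of Example 4.2.2: with a single particle on Z, the two relevant maps η^{x,x−1} and η^{x,x+1} each carry an independent Exp(1) clock, so the particle waits an Exp(2) time (the minimum of the two clocks) and then jumps to a uniformly chosen neighbour.

```python
import random

def simulate_exchange_walk(t_max, rng):
    """Single particle on Z under the exchange dynamics of (4.7): each of the
    two exchange maps fires at rate 1, so holding times are Exp(2) and the
    jump direction is uniform on {-1, +1}.  Returns the jump chain as a list
    of (time, position) pairs."""
    t, x = 0.0, 0
    path = [(0.0, 0)]
    while True:
        t += rng.expovariate(2.0)  # minimum of two independent Exp(1) clocks
        if t > t_max:
            return path
        x += rng.choice((-1, 1))
        path.append((t, x))

path = simulate_exchange_walk(5.0, random.Random(7))
```

The same trajectory could equivalently be produced by the Poisson representation X_t = R_t − L_t described next.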
Let us define the Poisson process and then discuss the construction of interacting particle systems with an example.

A Poisson process on R+ with intensity λ is a continuous-time Markov process (N_t, t ≥ 0), taking values in Z+ and satisfying the following conditions.

(i) N_0 = 0.

(ii) For any finite collection of indices 0 = t_0 < t_1 < . . . < t_n, the family of increments (N_{t_i} − N_{t_{i−1}} : i = 1, . . . , n) is independent.

(iii) For t > s ≥ 0, the random variable N_t − N_s follows a Poisson distribution with parameter λ(t − s).

We refer to [102, Chapter 5] for a proof of the existence of the Poisson process on R+ (and on more general spaces). A classic and beautiful reference on the general theory of Poisson processes is [100].

Example 4.2.3. We continue Example 4.2.2 with an alternative construction. Let (R_t : t ≥ 0) and (L_t : t ≥ 0) be two independent Poisson processes on R+. Then the continuous-time symmetric random walk on Z is equal in distribution to

X_t = R_t − L_t.

Poisson representations are available for more general interacting particle systems. These constructions are in [129, Chapter 3, Section 6] and [162, Theorem 4.14].

4.3 Richardson models

Our first example of a competitive growth process is the Richardson model. Richardson models describe populations spreading uniformly through a graph. We represent the population growth by the occupancy of vacant vertices. The dynamics are similar to the growth of cells or infections. If an individual is at a given site, it will “conquer” a vacant nearest-neighbour site (chosen uniformly) after an exponential waiting time. We consider two variants: the one-type Richardson model for the growth of a single population, and the multiple-type Richardson model. The first model describes unobstructed growth, while the multiple-type model corresponds to different species competing for resources.

In the one-type Richardson model, the population is homogeneous. In this case, we only have two states for a site.
It is either vacant or occupied. Since our graphs are connected, any site will be occupied after some random time. The main question for this model concerns the shape of the set of occupied sites in the long run.

The two-type Richardson model considers two species in competition for vacant spaces. We distinguish these species as red and blue individuals. In contrast to the situation for the one-type Richardson model, it is not necessarily true that both species occupy an infinite number of sites. For example, if the red particles no longer have vacant sites among their nearest-neighbours (because blue particles occupy those sites), then red will not be able to reproduce any longer. We interpret that situation as an extinction event for the red particles. A similar event is possible for blue. On the contrary, if both red and blue particles occupy an infinite number of sites, we interpret this as coexistence. Our approach to the two-type Richardson model focuses on sufficient conditions for a coexistence event.

In the final subsection, we consider the multiple-type Richardson model. It is a generalization of the two-type model to k competing species.

4.3.1 The one-type Richardson model

The one-type Richardson model is an interacting particle system on Z^d with two states, σ = {0, 1}. We refer to state 0 as vacant and to 1 as occupied. If y is vacant, then it becomes occupied at a rate proportional to the number of occupied nearest-neighbours. An occupied site remains that way for the rest of the process.

For a precise definition, we follow the notation in Section 4.2. For each x ∈ Z^d, we define the occupation map (or infection map) I_x : σ^{Z^d} → σ^{Z^d} by

I_x(η)(z) = 1 if z = x, and I_x(η)(z) = η(z) otherwise,   η ∈ σ^{Z^d}.

The one-type Richardson model is the Markov process taking values in {0, 1}^{Z^d} with transition rates

c(η, I_x(η)) = Σ_{y ∈ Z^d, |x−y|=1} 1{η(y) = 1},

and initial configuration η(0) = 1 and η(z) = 0 for any other z ∈ Z^d. Richardson introduced this interacting particle system in [147] as a model for cell growth.
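The rates just defined can be simulated through their discrete skeleton (a sketch of ours, not from the thesis): since a vacant site flips at a rate equal to its number of occupied neighbours, the next occupied site is obtained by choosing a uniformly random (occupied, vacant) nearest-neighbour pair.

```python
import random

def richardson_skeleton(n_steps, rng):
    """Discrete skeleton of the one-type Richardson model on Z^2: at each
    step a uniformly chosen (occupied, vacant) nearest-neighbour pair fires,
    so a vacant site is occupied with probability proportional to its number
    of occupied neighbours, matching the rates c(eta, I_x(eta)) above.
    Jump times (not tracked here) would be Exp(number of such pairs)."""
    occupied = {(0, 0)}
    for _ in range(n_steps):
        targets = []
        for (x, y) in occupied:
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb not in occupied:
                    targets.append(nb)  # one entry per occupied neighbour
        occupied.add(rng.choice(targets))
    return occupied

B = richardson_skeleton(25, random.Random(3))
```

This jump chain is the link with the Eden model discussed below, which coincides with the Richardson model after a suitable time change.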
We remark that the original definition was for a discrete-time Markov process.

We are interested in the asymptotic shape of the set of occupied sites. For each t ≥ 0, we let

B(t) := {z ∈ Z^d : X_t(z) = 1}   (4.8)

denote the set of occupied vertices at time t ≥ 0. The main theorem in Richardson’s seminal work [147] concerns the asymptotic shape of B(t). It is shown in [147] that there exists a convex and compact deterministic set B_R such that, for any ε > 0, the probability of the event

(1 − ε) B_R ⊂ B(t)/t ⊂ (1 + ε) B_R   (4.9)

tends to 1 as t → ∞. The shape theorem in [147] is the first result of its kind. Cox and Durrett observed in [48] that a lemma suggested by Kesten in [39] improves Richardson’s theorem: the event in (4.9) holds almost surely for all t ≥ 0 large enough. In [48], Cox and Durrett generalized the shape theorem to the setting of first passage percolation. We present this generalization as Theorem 4.4.2. We discuss the relation between the Richardson model and first passage percolation in Section 4.4.1.

A model similar to the one-type Richardson model is the Eden model [63]. The Eden model has a simple construction on Z^d. For simplicity, we describe the process in terms of cell growth, as in its original formulation. We start with a cell at the origin. This cell divides into an identical daughter, and the newborn cell occupies one of the neighbouring sites, chosen uniformly at random. The process continues its reproduction in the same way. To describe the evolution of the Eden process, for each n ∈ N, we let A(0) = {0} and define A(n) as the set of vertices after the n-th reproduction.

The one-type Richardson model and the Eden model are the same up to a suitable time change. Here we follow [17].
From the collection of discrete balls (B(t))_{t≥0} in (4.8), we construct a sequence of random times {N_k}_{k∈N}. We define N_0 = 0 and

N_k = inf{t ≥ 0 : B(t) contains k + 1 points of Z^d}.

Richardson observed that the collections of subsets {A(k) : k ≥ 1} and {B(N_k) : k ≥ 1} have the same distribution [147, Example 9].

4.3.2 The two-type Richardson model

In the two-type Richardson model we have three states, σ = {w, r, b}, for the vertices of Z^d, with d ≥ 2. Similarly to the one-type version, only sites in state w may flip, at a rate depending on the number of occupied neighbours. Once a site reaches state r or b, it remains in that state for the rest of the process. We identify the sites with state r as red particles and the sites with state b as blue particles, while the state w represents a vacant site. The blue and red particles represent two different species competing for space. Häggström and Pemantle defined this variant of the one-type Richardson model in [79], as a tool for understanding infinite geodesics in first passage percolation. We expand on the discussion of this connection in Subsection 4.4.2. For a formal definition of this process, we define the local maps acting on the model. For each x ∈ Z^d, R_x : σ^{Z^d} → σ^{Z^d} is the red-occupation map defined by

R_x(η)(z) = r if z = x and η(x) = w, and R_x(η)(z) = η(z) otherwise,   η ∈ σ^{Z^d}.

The blue-occupation map B_x : σ^{Z^d} → σ^{Z^d} is given by

B_x(η)(z) = b if z = x and η(x) = w, and B_x(η)(z) = η(z) otherwise,   η ∈ σ^{Z^d}.

The two-type Richardson model with parameter λ is the Markov process taking values in {w, r, b}^{Z^d} with transition rates

c(η, R_x(η)) = λ Σ_{y ∈ Z^d, |x−y|=1} 1{η(y) = r},   c(η, B_x(η)) = Σ_{y ∈ Z^d, |x−y|=1} 1{η(y) = b}.

The initial condition is the configuration

η(x) = r if x = 0;   b if x = (1, 0, . . . , 0);   w otherwise.   (4.10)

We denote the probability measure associated to this process by P_λ.

It is reasonable to think that λ governs the coexistence behaviour of the red and blue particles.
When λ = 1, the red and blue particles spread at the same “speed” and, intuitively, one would expect both to reach an infinite number of sites. Let A be the set of sites occupied by red particles at some time during the process. We define B for the blue particles in an analogous way. The coexistence event is defined as

E = {|A| = ∞, |B| = ∞}.   (4.11)

Positive probability for the coexistence event was first proved on Z^2 by Häggström and Pemantle in [79]. The d-dimensional case was proved independently by Garet and Marchand [72] and Hoffman [83].

Theorem 4.3.1 (Häggström-Pemantle [79, Theorem 1.2], Garet-Marchand [72, Theorem 3.1], Hoffman [83, Theorem 2]). For the two-type Richardson process on Z^d with λ = 1, the coexistence event E has positive probability.

Häggström and Pemantle conjectured in [79] and in [80, Conjecture 1.1] that the converse of Theorem 4.3.1 holds: the coexistence event E has probability zero whenever λ ≠ 1, for all Z^d with d ≥ 2. The article [80] gives a partial result, but the general case is an open question.

Theorem 4.3.2 (Häggström-Pemantle [80, Theorem 1.2]). For the two-type Richardson process on Z^d,

P_λ(E) = 0

for all λ ∈ R+ \ Λ, where the cardinality of Λ is at most countable.

A variation is to study the probability of the coexistence event under different initial conditions. If two sites other than 0 and (1, 0, . . . , 0) are occupied at t = 0, then the situation is equivalent to the initial conditions in (4.10) [79, Proposition 1.1]. The same is true for any initial condition with a finite number of occupied sites [55, Theorem 1]. The model changes when an infinite number of particles are present at the beginning of the process. Consider the initial configuration on Z^d, for d ≥ 2:

η^H(x) = r if x = 0;   b if r_1 = 0 and x ≠ 0;   w otherwise,

where x = (r_1, . . . , r_d). We denote the probability measure associated with this process by P^H_λ.
Recall that λ corresponds to the transition rate of the red particles.

Theorem 4.3.3 (Deijfen-Häggström [57, Theorem 1.1]). For the two-type Richardson process on Z^d, with d ≥ 2,

P^H_λ(E) > 0 if, and only if, λ > 1.

4.3.3 Multiple-type Richardson model

An immediate generalization of the two-type Richardson model is to consider k different species. The process evolves on Z^d, and the local state space is {0, . . . , k}. When a site has state j ≥ 1, it has been occupied by a particle of type j, and it remains in that state for the rest of the process. Otherwise, the site has state j = 0, meaning that it is vacant. For j = 1, . . . , k, the occupation map I^j_x is the analogue of the occupation maps defined for the two-type Richardson model. In words, type j occupies an adjacent vacant vertex after an exponential time Exp(1).

Let x_1, . . . , x_k be distinct sites in Z^d. The k-type Richardson model with initial conditions (x_1, . . . , x_k) is the Markov process X^k = (X^k_t)_{t≥0} taking values in {0, 1, . . . , k}^{Z^d}, with rates

c(η, I^j_x(η)) = Σ_{y ∈ Z^d, |x−y|=1} 1{η(y) = j},

and initial conditions

η(z) = j if z = x_j, and η(z) = 0 otherwise.

For j = 1, . . . , k, we write

B_j = {z ∈ Z^d : X^k_t(z) = j for some t ≥ 0}

for the set of sites that eventually become of type j. The coexistence event for the k-type Richardson model with initial conditions (x_1, . . . , x_k) is

E(x_1, . . . , x_k) := {|B_j| = ∞ for every j = 1, . . . , k}.   (4.12)

In line with Theorem 4.3.1, Hoffman proved the next theorem as a tool to obtain a lower bound on the number of infinite geodesics in first passage percolation (cf. Theorem 4.4.4).

Theorem 4.3.4 (Hoffman [84, Theorem 1.6]). Consider the 4-type Richardson model on Z^2. For any ε > 0 there exist x_1, x_2, x_3 and x_4 in Z^2 such that

P(E(x_1, x_2, x_3, x_4)) > 1 − ε.

4.4 First passage percolation

Hammersley and Welsh introduced first passage percolation (FPP) as an extension of the percolation model [81]. The traditional example of FPP is the phenomenon of fluid moving through a random medium.
While percolation studies the sites where the fluid arrives (occupied sites), FPP incorporates the time of arrival at each site. In this section, we present the definitions and main theorems of the theory of FPP. We follow [17] in the presentation of these preliminaries.

Let G = (V, E) be a connected graph and let F be a probability distribution on (0, ∞). In particular, in this work we only consider distributions with F(0) = 0. The collection of edge weights (τ_e)_{e∈E} is a family of independent random variables with common distribution F. Recall that a path on G between the vertices x and y is a collection of vertices [v_1, . . . , v_n] such that (v_i, v_{i+1}) ∈ E, v_1 = x and v_n = y. We define the passage time of a path γ = [v_1, . . . , v_n] by

T_F(γ) = Σ_{i=1}^{n−1} τ_{(v_i, v_{i+1})}.

For any x, y ∈ V, we define the passage time between x and y by

T_F(x, y) := inf{T_F(γ) : γ ∈ Γ(x, y)},   (4.13)

where Γ(x, y) is the collection of (finite) paths on G between x and y.

Proposition 4.4.1. If F(0) = 0, then T_F defines a (random) metric on V almost surely.

If G = Z^d, we extend the passage time to a metric on R^d. We define T_F : R^d × R^d → [0, ∞) by

T_F(x, y) = T_F(x′, y′),

where x′ ∈ Z^d is the closest vertex to x (in case of a tie, we choose the smallest vertex with respect to the lexicographical order), and we choose y′ similarly.

We denote by B_F(t) the random open ball centred at the origin in the metric T_F,

B_F(t) := {y ∈ R^d : T_F(0, y) < t}.

One of the main results in the theory of FPP concerns the shape of the ball B_F(t) at large scales.

Theorem 4.4.2 (Cox-Durrett [48]). There exists a deterministic, convex and compact set B_F ⊂ R^d such that for each ε > 0,

(1 − ε) B_F ⊂ B_F(t)/t ⊂ (1 + ε) B_F for all large t,

almost surely.

4.4.1 First passage competition models

Competitive growth models are interacting particle systems, but we can define equivalent processes in terms of first passage percolation.
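Before turning to the competition models, the passage time (4.13) can be computed exactly on a finite box: since edge weights are non-negative, T_F restricted to the box is a shortest-path metric, and Dijkstra's algorithm applies. The sketch below (our code, with Exp(1) weights as in the Richardson construction that follows) samples each undirected edge weight lazily.

```python
import heapq
import random

def fpp_passage_times(n, source, rng):
    """First passage percolation on the n x n box of Z^2 with i.i.d. Exp(1)
    edge weights: returns T_F(source, y) for every site y, via Dijkstra.
    Edge weights are sampled lazily, once per undirected edge."""
    weights = {}

    def weight(u, v):
        e = tuple(sorted((u, v)))
        if e not in weights:
            weights[e] = rng.expovariate(1.0)
        return weights[e]

    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        x, y = u
        for v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= v[0] < n and 0 <= v[1] < n:
                nd = d + weight(u, v)
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
    return dist

T = fpp_passage_times(6, (0, 0), random.Random(42))
```

The sub-level sets {y : T[y] ≤ t} of this dictionary are finite-box versions of the discrete balls B(t) used below.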
We usually refer to these models as first passage competition models or competing first passage percolation.

We begin with a construction of the one-type Richardson model, defined in Subsection 4.3.1. Consider FPP on Z^d with exponential (parameter 1) edge weights and let T_Exp : Z^d × Z^d → [0, ∞) be the corresponding passage time metric. The random discrete ball

B(t) = {x ∈ Z^d : T_Exp(0, x) ≤ t}

represents the set of sites occupied at time t. We thus consider the Markov process X = (X_t)_{t≥0} taking values in {0, 1}^{Z^d} and given by

X_t(z) = 1 if z ∈ B(t), and X_t(z) = 0 otherwise.

It is well known that the one-type Richardson model is equal in distribution to (X_t)_{t≥0}. The process (X_t)_{t≥0} is also known as the edge representation of the one-type Richardson model.

Our next example is the two-type Richardson model. For simplicity, we focus on the case λ = 1. As above, let T_Exp : Z^d × Z^d → [0, ∞) be the passage time metric defined by FPP with Exp(1) passage times. Recall that, for the two-type Richardson process at time t = 0, the origin x_1 = 0 is red, while the site x_2 = (1, 0, . . . , 0) is blue. The idea is to compare the passage times T_Exp(x_1, z) and T_Exp(x_2, z), which are the passage times from the initial red and blue sites, respectively. The colour with the smaller passage time conquers the site z and occupies it for the rest of the process. This colour is well defined since the distribution of the passage times is continuous, and hence {T_Exp(x_1, z), T_Exp(x_2, z)} has a unique minimum. Then the two-type Richardson model has the distribution of the Markov process Y = (Y_t)_{t≥0} with Y_t(x_1) = r and Y_t(x_2) = b for all t ≥ 0, and, for z ∈ Z^d different from x_1 and x_2,

Y_t(z) = r if T_Exp(x_1, z) ≤ t and T_Exp(x_1, z) < T_Exp(x_2, z);
Y_t(z) = b if T_Exp(x_2, z) ≤ t and T_Exp(x_1, z) > T_Exp(x_2, z);
Y_t(z) = w otherwise.

The edge representation of a two-type Richardson model with rate λ ≠ 1 is more complex and requires additional care.
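For the symmetric case λ = 1, the comparison of passage times just described is straightforward to sketch computationally (our code, on a finite box): both colours read the same Exp(1) environment, and each site takes the colour of the source with the smaller Dijkstra passage time.

```python
import heapq
import random

def dijkstra(n, source, w):
    """Passage times from `source` in the n x n grid; w maps sorted
    vertex pairs to edge weights."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        x, y = u
        for v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= v[0] < n and 0 <= v[1] < n:
                nd = d + w[tuple(sorted((u, v)))]
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
    return dist

def two_type_colouring(n, seed=0):
    """Colour each site by whichever of the two sources reaches it first in
    a shared Exp(1) environment (the λ = 1 edge representation)."""
    rng = random.Random(seed)
    w = {}
    for x in range(n):
        for y in range(n):
            for v in ((x + 1, y), (x, y + 1)):
                if v[0] < n and v[1] < n:
                    w[tuple(sorted(((x, y), v)))] = rng.expovariate(1.0)
    red = dijkstra(n, (0, 0), w)    # passage times from x_1 = 0
    blue = dijkstra(n, (1, 0), w)   # passage times from x_2 = (1, 0)
    return {z: ("r" if red[z] < blue[z] else "b") for z in red}

colours = two_type_colouring(5)
```

Ties have probability zero because the weight distribution is continuous, which is exactly why the colouring is well defined.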
We need to enforce the condition that, for every red site v, there is a nearest-neighbour path of red sites from 0 to v, and similarly for each blue site. In particular, if the set of blue sites is surrounded by red sites, then blue will no longer reproduce (and vice versa). We refer to [55, 80] for the construction of the edge representation of the asymmetric two-type Richardson model.

In the following subsection, we will see that the construction of Richardson models with first passage percolation also gives insights on FPP.

4.4.2 Geodesics

If a finite path γ between x and y satisfies T_F(γ) = T_F(x, y) (it achieves the infimum), we say that γ is a geodesic from x to y.

Wierman and Reh proved the existence of geodesics in full generality on Z^2 [168, Corollary 1.3]. In higher dimensions, we require assumptions on the passage time distribution F. We refer the reader to [17, Section 4.1] for general conditions, but the next theorem is sufficient for us.

Theorem 4.4.3 (Kesten [96, (9.23)]). For a continuous distribution F, there exists a unique geodesic between any two points of Z^d almost surely.

From now on, we assume that the distribution F is continuous. We remark that a crucial point for existence in Theorem 4.4.3 is that F(0) = 0. Note that the exponential distribution satisfies this condition. Let G(x, y) be the unique geodesic between x and y, and let

T_F = ∪_{z ∈ Z^d} {G(0, z)}

be the tree of infection rooted at 0, where F indicates the distribution of the passage times. Since the geodesics G(0, x) are unique, T_F is indeed a tree.

An infinite self-avoiding path [v_1, v_2, . . .] is a geodesic ray if [v_1, . . . , v_k] is a geodesic from v_1 to v_k for every k ≥ 1. The number of geodesic rays is equal to the number of (topological) ends of the tree of infection T_F. With this equivalence in mind, we denote the number of geodesic rays by K(T_F). A simple compactness argument shows the existence of a sub-sequential limit for (G(0, (n, 0, . . . , 0)))_{n∈N}, and then K(T_F) ≥ 1.

Newman conjectured in [141] that for a continuous distribution F (with additional hypotheses on the distribution tails and exponential moments),

K(T_F) = ∞ almost surely.   (4.14)

This question is still open, and (4.14) for continuous distributions is stated as Question 27 in [17]. Progress on (4.14) has been closely tied to the two-type Richardson model. We defined this model in Subsection 4.3.2 and discussed its relation to FPP in Subsection 4.4.1. We now present the consequences of the theorems in Subsection 4.3.2 for K(T_F).

Häggström and Pemantle observed that the coexistence event for the two-type Richardson model (defined in (4.11)) implies the existence of two disjoint geodesic rays starting at 0. This property follows from a compactness argument. They showed that Theorem 4.3.1 implies K(T_Exp) ≥ 2 with positive probability on Z^2 [79, Theorem 1.1]. Similarly, the independent work of Garet and Marchand [72] and Hoffman [83] for the two-type Richardson model implies, for FPP on Z^d with d ≥ 2, that

P(K(T_F) ≥ 2) > 0.

This positive probability holds for a large family of distributions F, in particular for the exponential distribution.

Hoffman proved in [84] a more general result for K(T_F). Hoffman’s theorem is in terms of the geometry of the limit ball B_F appearing in the shape theorem (Theorem 4.4.2). The distributions considered in [84] constitute a large family. Here, for simplicity, we state the result for continuous distributions F with finite 2 + α moment.

If B_F is a polygon, we define Sides_F as the number of sides of ∂B_F. If B_F is not a polygon, then Sides_F is equal to infinity. By symmetry, Sides_F ≥ 4 in Z^2. Theorem 4.4.4 is a consequence of a version of Theorem 4.3.4 with distribution F for the passage times.

Theorem 4.4.4 (Hoffman [84, Theorem 1.2 and Theorem 1.4]). Let F be a continuous distribution with E(τ_e^{2+α}) < ∞ for some α > 0. For any ε > 0 and k ≤ Sides_F there exist x_1, . . . , x_k such that

P(there exist k disjoint geodesics starting at x_1, . . . , x_k) > 1 − ε.

Moreover, if k ≤ Sides_F / 2,

P(K(T_F) ≥ k) = 1.

A consequence of Theorem 4.4.4 is the existence of infinitely many geodesic rays when the limit shape B_F is not polygonal. This has been proved by Auffinger and Damron for measures ν (for the distribution of the passage time τ_e) which satisfy

supp(ν) ⊂ [1, ∞) and ν({1}) = p ≥ p⃗_c,   (4.15)

where p⃗_c is the critical parameter for oriented percolation on Z^2.

Theorem 4.4.5 (Auffinger-Damron [16, Theorem 2.3]). Consider FPP on Z^2 and let ν be the law of the passage times τ_e. If ν is a measure satisfying (4.15) with p ≥ p⃗_c on Z^2, then

P(K(T_ν) = ∞) = 1.

We have used that the coexistence event in the Richardson model implies the existence of geodesic rays. Nevertheless, the equivalence between coexistence for a k-type Richardson model and the existence of k ends for the infection tree is not immediate. Such an equivalence corresponds to a result announced recently. Recall that E(x_1, . . . , x_k) was defined in (4.12) as the coexistence event. We cite the theorem for two dimensions, and remark that there is a d-dimensional version of Theorem 4.4.6 with additional hypotheses on the distribution moments and the number Sides_F.

Theorem 4.4.6 (Ahlberg [2, Theorem 1]). Let F be a continuous distribution with E(τ_e^{2+α}) < ∞ for some α > 0. For any k ∈ N ∪ {∞} and ε > 0:

(i) If P(E(x_1, . . . , x_k)) > 0 for some x_1, . . . , x_k ∈ Z^2, then

P(K(T_F) ≥ k) = 1.

(ii) If P(K(T_F) ≥ k) > 0, then

P(E(x_1, . . . , x_k)) > 1 − ε

for some x_1, . . . , x_k ∈ Z^2.

4.5 Chase-escape process

Chase-escape is an interacting particle system on a connected graph G. It emulates the behaviour of predators chasing prey that are escaping from them. On the graph G = (V, E), the local state space is σ = {w, r, b}. We refer to states r and b as red and blue, respectively. The state w is white, or vacant. The transitions of chase-escape are

rw --λ--> rr,   br --1--> bb.
(4.16)

This means that red particles spread to adjacent uncoloured sites according to a Poisson process with rate λ. Meanwhile, blue particles overtake adjacent red particles at rate 1. For a formal definition we follow [35]. We define the maps R_x : σ^V → σ^V and B_x : σ^V → σ^V by

R_x(η)(z) = r if z = x and η(x) = w, and R_x(η)(z) = η(z) otherwise,   η ∈ σ^V;

B_x(η)(z) = b if z = x and η(x) = r, and B_x(η)(z) = η(z) otherwise,   η ∈ σ^V.

The chase-escape model with parameter λ is the Markov process taking values in {w, r, b}^V with transition rates

c(η, R_x(η)) = λ 1{η(x) = w} Σ_{y ∈ V, |x−y|=1} 1{η(y) = r},

c(η, B_x(η)) = 1{η(x) = r} Σ_{y ∈ V, |x−y|=1} 1{η(y) = b}.

Alternative interpretations of the chase-escape process include the spread of a rumour or an infection. In that case, the white particles represent the susceptible population (S), the red particles represent infected individuals (I), and the blue particles are recovered ones (R). Under the chase-escape dynamics, an individual can recover from an infection when it is in contact with someone already in state R. In terms of a rumour, this recovery corresponds to the spread of facts. The properties of the graph G correspond to the social network of the population.

In the context of trees, chase-escape is equivalent to the escape process and to the rumour-scotching process. Escape and chase-escape were both proposed by Kordzakhia [104]. They are similar, except that in the escape process blue particles are allowed to conquer both vacant and red sites. A second variant, introduced by Bordenave, is the rumour-scotching process. We may think of the rumour-scotching process as a directed version of chase-escape. In this case, blue particles only conquer red sites, but such spread is performed only through edges that have spread red particles at some
Under this model, a redsite represents an individual believing a rumour, and the blue sites representindividuals who have additional information that denies such belief. A whitesite stands for susceptible individuals who have not heard anything. Thered sites are prone to gossip, and they spread the rumour to their nearest-neighbours. When a site turns blue, they want to scotch the rumour thatthey propagated. However, they only turn to those with whom they sharedthe rumour before. Some results for chase-escape on trees were proved firstin the context of the escape or the rumour-scotching processes.4.5.1 Phase transitionsWe distinguish two phases on the chase-escape process, depending on thecardinality of the set of sites eventually occupied. In the first scenario, thepredator consumes all prey, and then the predator cannot move any further.On the contrary, if the prey advances fast enough, then there is prey aliveat any time in the process before it ends. (We specify below the end ofthe process for finite graphs.) It is intuitively clear that these two phasesdepend on the “speed” of the red particles relative to the “speed” of the blueparticles. The parameter λ controls such “speed”. Recall the assumptionin (4.16) that the transition rate for blue particles is 1. This choice isjust a normalization of the parameters. We denote by Pλ the probabilitymeasure of chase-escape with parameter λ for the spread rate of red particles.The phase transitions and the existence of a critical parameter λ have beenanalysed for d-ary trees, and Galton-Watson trees, and complete graphs.Let us overview the main results for these graphs.Finite graphsLet Kn be the complete graph on n vertices. At time t = 0, there is oneblue site, one red site, and n− 2 vacant sites. We add two absorbing statesto the dynamics. The chase-escape process on Kn, with n ≥ 3, finishes if(a) there are no red particles left; or71(b) there are no vacant sites left i.e. 
all sites are occupied by a blue or ared particle.Let An be the set of vertices of Kn that are red, at any time, before thechase-escape process stops. LetAn = {|A| = n− 1}be the coexistence event, where all vacant sites of Kn were coloured redat some finite time (with the addition of the initial red site). Note thatthe event An is equivalent to the absorbing state (b). We call Pλ(An) thecoexistence probability.Theorem 4.5.1 (Kortchemski [105, Theorem 1]). Let (Kn)n≥3 be a growingsequence of complete graphs. Thenlimn→∞Pλ(An) =0 if λ < 1,12 if λ = 1,1 if λ > 1.In particular, λ = 1 is the critical parameter.We see that on the complete graph, the coexistence probability is asymp-totically positive if λ ≥ 1. Otherwise, the coexistence probability is asymp-totically 0. The first scenario corresponds to an coexistence phase, whilethe second is the extinction phase on the finite graph. The transitionbetween these two phases occurs at the critical parameter λ = 1.Infinite graphsFor an infinite graph G with root ρG, we assume that the process evolveson a modified version of G. We add an additional vertex ρˆG attached to ρG.Then the initial conditions of chase-escape at time t = 0 areX0(ρˆG) = b, X0(ρG) = r72and the rest of the vertices are on state w, i.e. these sites are vacant.Let B denote the set of sites that are blue at some time in the process,and letB = {|B| =∞}.In the coexistence phase there is a positive probability that red particlesoccupy infinitely many sites so Pλ(B) > 0. We define the extinction phaseas Pλ(B) = 0, in the case both types occupy only finitely many sites almostsurely. We define the critical parameterλc(G) := inf{λ : Pλ(B) > 0}. (4.17)With this notation, we emphasize the dependence of the critical parameteron the underlying graph G.We consider first the case of chase-escape on the ray N. In this case, thefirst-guess answer is the correct one.Proposition 4.5.2 ([62, Proposition 2.1]). 
For chase-escape on N,

λ_c(N) = 1,   (4.18)

and chase-escape is in the extinction phase at criticality.

A tree is a natural generalization of N, since we may consider a tree as the union of an infinite number of rays. A d-ary tree T_d is a rooted infinite tree where all vertices have d children.

Theorem 4.5.3 (Kordzakhia [104, Theorem 1]). There exists a critical value

λ_c(T_d) = 2d − 1 − 2√(d² − d)   (4.19)

such that, for chase-escape on a d-ary tree with parameter λ,

(i) the process is in the extinction phase if 0 < λ < λ_c(T_d), and

(ii) coexistence occurs if λ > λ_c(T_d).

Note that

λ_c(T_d) ~ 1/(4d) as d ↑ ∞.

Bordenave extended the previous result and determined the behaviour at criticality.

Proposition 4.5.4 (Bordenave [35, Corollary 1.5]). Extinction happens for chase-escape on a d-ary tree at the critical parameter λ_c(T_d).

Durrett, Junge and Tang presented in [62] a simple probabilistic argument for Theorem 4.5.3 that includes the behaviour at criticality of Proposition 4.5.4. The basis of the proof in [62] is a comparison between chase-escape on trees and on N. The arguments in [35] are analytical, but they apply to a more general setting: Galton-Watson trees with an additional assumption on the growth rate. The assumptions on the growth rate in [35] follow, almost surely, from conditioning T on being infinite. We state the following theorem under this hypothesis.

Theorem 4.5.5 (Bordenave [35, Theorem 1.1, Corollary 1.5]). Let λ_c(T_d) be as in (4.19). Let T be a realization of a Galton-Watson tree with mean number of offspring d > 1, conditioned to be infinite.
The following holds T-almost surely for chase-escape on T with parameter λ.

(i) If λ ≤ λ_c(T_d), then we have extinction.

(ii) If λ > λ_c(T_d), then coexistence occurs.

Kortchemski extended the study of chase-escape on Galton-Watson trees. In [106], he introduced a coupling, on Galton-Watson trees, of the chase-escape dynamics and branching random walks killed at 0 [106, Theorem 1]. With this method, Kortchemski obtained a shorter probabilistic proof of [35, Corollary 1.5] for super-critical Galton-Watson trees [106, Proposition 3], and asymptotics for the tail of the distribution of the number of prey [106, Theorem 4].

The examples above have a simple geometric structure. The complete graph represents the mean-field behaviour, while all paths on trees are self-avoiding. Studying chase-escape on other lattices has been a challenging problem without significant advances.

Kordzakhia and Martin have conjectured that coexistence is possible, with

λ_c(G) < 1,   (4.20)

on two-dimensional lattices. Durrett, Junge, and Tang proved (4.20) for a variation of chase-escape on high-dimensional oriented lattices [62]. The variation considered in [62] allows for infinite passage times for the red particles; blue is allowed to cross such an edge if the opposite vertex of that edge was already reached by red. To our knowledge, a similar proposition has not been proved for a non-oriented graph. The main challenge in the analysis of two-dimensional lattices is the presence of cycles. Some simple graphs have been candidates for satisfying (4.20). This was the case for the ladder graph N × {0, 1}. Durrett, Junge, and Tang discarded this graph as an example for (4.20) by proving that λ_c(N × {0, 1}) = 1.

Tang, Kordzakhia, and Lalley have obtained simulations in support of (4.20). Their simulations show that λ_c(Z²) ≈ 1/2 and λ_c(Λ) < 1 for different two-dimensional lattices Λ, including the hexagonal, triangular and 8-directional lattices.
Furthermore, the simulations in [164] show fractal behaviour for the shape of the set of occupied sites in all these lattices when the parameter λ is close to its corresponding critical value. This property is typical in critical phenomena and gives further support to the following conjecture.

Conjecture 4.5.6 (Kordzakhia-Martin). The critical parameter for chase-escape on the square lattice is λc(Z2) = 1/2.

Part II

Scaling Limits of Uniform Spanning Trees

Chapter 5

Scaling Limit of the Three-Dimensional Uniform Spanning Tree and the Associated Random Walk¹

¹ Joint work with Omer Angel, David Croydon, and Daisuke Shiraishi. Acknowledgements: DC would like to acknowledge the support of a JSPS Grant-in-Aid for Research Activity Start-up, 18H05832, and a JSPS Grant-in-Aid for Scientific Research (C), 19K03540. DS is supported by a JSPS Grant-in-Aid for Early-Career Scientists, 18K13425.

Summary of this chapter

We show that the law of the three-dimensional uniform spanning tree (UST) is tight under rescaling in a space whose elements are measured, rooted real trees, continuously embedded into Euclidean space. We also establish that the relevant laws actually converge along a particular scaling sequence. The techniques that we use to establish these results are further applied to obtain various properties of the intrinsic metric and measure of any limiting space, including showing that the Hausdorff dimension of any such space is given by 3/β, where β ≈ 1.624… is the growth exponent of the three-dimensional loop-erased random walk.
Additionally, we study the random walk on the three-dimensional uniform spanning tree, deriving its walk dimension (with respect to both the intrinsic and Euclidean metric) and its spectral dimension, demonstrating the tightness of its annealed law under rescaling, and deducing heat kernel estimates for any diffusion that arises as a scaling limit.

5.1 Introduction

Remarkable progress has been made in understanding the scaling limits of two-dimensional statistical mechanics models in recent years, much of which has depended in a fundamental way on the asymptotic conformal invariance of the models in question, which has allowed many powerful tools from complex analysis to be harnessed. See [122, 149, 160] for some of the seminal works in this area, and [115] for more details. By contrast, no similar foothold for studying analogous problems in the (physically most relevant) case of three dimensions has yet been established, and it seems that there is currently little prospect of progress for the corresponding models in this dimension.

Nonetheless, in [107], Kozma made the significant step of establishing the existence of a (subsequential) scaling limit for the trace of the three-dimensional loop-erased random walk (LERW). Moreover, in work that builds substantially on this, the time parametrisation of the LERW has been incorporated into the picture, with it being demonstrated that (again subsequentially) the three-dimensional LERW converges as a stochastic process, see [127] and the related articles [128, 155]. The aim of this work is to apply the latter results, in conjunction with the fundamental connection between uniform spanning trees (USTs) and LERWs, specifically that paths between points in USTs are precisely LERWs [143, 169], to determine the scaling behaviour of the three-dimensional UST (see Figure 5.1) and the associated random walk.

Before stating our results, let us introduce some of our notation.
We follow closely the presentation of [24], where similar results were obtained in the two-dimensional case. Henceforth, we will write U for the UST on Z3, and P for the probability measure on the probability space on which this is built (the corresponding expectation will be denoted E). We refer the reader to [143] for Pemantle's construction of U in terms of a local limit of the USTs on the finite boxes [−n, n]³ ∩ Z³ (equipped with nearest-neighbour bonds) as n → ∞, and a proof of the fact that the resulting graph is indeed a spanning tree of Z3. We will denote by dU the intrinsic (shortest path) metric on the graph U, and by µU the counting measure on U (i.e., the measure which places a unit mass at each vertex). Similarly to [24], in describing a scaling limit for U, we will view U as a measured, rooted spatial tree. In particular, in addition to the metric measure space (U, dU, µU), we will also consider the embedding ϕU : U → R3, which we take to be simply the identity on vertices; this will allow us to retain information about U in the Euclidean topology. Moreover, it will be convenient to suppose that the space (U, dU) is rooted at the origin of Z3, which we will write as ρU. To fit the framework of [24], we extend (U, dU) by adding unit line segments along edges, and linearly interpolate ϕU between vertices.

5.1.1 Scaling limits of the three-dimensional UST

We have defined a random quintuplet (U, dU, µU, ϕU, ρU). Our main result (Theorem 5.1.1 below) is the existence of a certain subsequential scaling limit for this object in an appropriate Gromov-Hausdorff-type topology, the precise definition of which we postpone to Section 5.2. Moreover, the result incorporates the statement that the laws of the rescaled objects are tight even without taking the subsequence.

Figure 5.1: A realisation of the UST in a three-dimensional box, as embedded into R3 (left), and drawn as a planar graph tree (right). Source code adapted from a two-dimensional version by Mike Bostock.
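Wilson's algorithm [169], which plays a central role in the proofs below, gives a concrete way to sample the UST of a finite graph: fix a root, then repeatedly run a random walk from an unvisited vertex, loop-erase it, and attach the resulting path to the current tree. A minimal Python sketch, with the box size, root and random seed being our own illustrative choices:

```python
import random

def wilson_ust(vertices, neighbours, root, rng):
    """Sample a uniform spanning tree by Wilson's algorithm:
    successive loop-erased random walks attached to a growing tree."""
    in_tree = {root}
    parent = {}
    for v in vertices:
        if v in in_tree:
            continue
        # Random walk from v, recording the *last* exit direction of each
        # visited vertex; overwriting on revisits performs the loop erasure.
        succ = {}
        u = v
        while u not in in_tree:
            w = rng.choice(neighbours(u))
            succ[u] = w
            u = w
        # Retrace the loop-erased path and attach it to the tree.
        u = v
        while u not in in_tree:
            parent[u] = succ[u]
            in_tree.add(u)
            u = succ[u]
    return parent  # edges {child: parent}, a spanning tree of `vertices`

# Example: the cube {0,...,4}^3 with nearest-neighbour bonds.
N = 5
vertices = [(x, y, z) for x in range(N) for y in range(N) for z in range(N)]

def neighbours(p):
    x, y, z = p
    cand = [(x+1,y,z), (x-1,y,z), (x,y+1,z), (x,y-1,z), (x,y,z+1), (x,y,z-1)]
    return [q for q in cand if all(0 <= c < N for c in q)]

rng = random.Random(0)
tree = wilson_ust(vertices, neighbours, (0, 0, 0), rng)
assert len(tree) == len(vertices) - 1  # spanning tree: |V| - 1 edges
```

The last-exit map `succ` implements the loop erasure implicitly: overwriting the successor of a revisited vertex pops the loop, so retracing from the starting vertex follows the loop-erased path.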
One further quantity needed to state the result precisely is the growth exponent of the three-dimensional LERW. Let Mn be the number of steps of the LERW on Z3 until its first exit from a ball of radius n. The growth exponent is defined by the limit

β := lim_{n→∞} log E[Mn] / log n

(equivalently, E[Mn] = n^{β+o(1)}). The existence of this limit was proved in [155]. Whilst the exact value of β is not known, rigorously proved bounds are β ∈ (1, 5/3], see [114]. Numerical estimates suggest that β = 1.624…, see [170]. We remark that in two dimensions the corresponding exponent is 5/4, first proved by Kenyon [95], and in dimension 4 or more its value is 2. In three dimensions there is no conjecture for an exact value of β.

The exponent β determines the scaling of dU. Specifically, let Pδ be the law of the measured, rooted spatial tree

(U, δ^β dU, δ^3 µU, δϕU, ρU),   (5.1)

when U has law P. For the rooted measured metric space (U, dU, µU, ρU) we consider the local Gromov-Hausdorff-Prohorov topology. This is extended with the locally uniform topology for the embedding ϕU. As a straightforward consequence of our tightness and scaling results with respect to this Gromov-Hausdorff-type topology, we also obtain the corresponding conclusions with respect to Schramm's path ensemble topology. The latter topology was introduced in [149] as an approach to taking scaling limits of two-dimensional spanning trees. Roughly speaking, this topology observes the set of all macroscopic paths in an object, in the Hausdorff topology. See Section 5.2 for detailed definitions of these topologies.

Theorem 5.1.1. The collection (Pδ)_{δ∈(0,1]} is tight with respect to the local Gromov-Hausdorff-Prohorov topology with locally uniform topology for the embedding, and with respect to the path ensemble topology. Moreover, the limit of Pδ as δ = 2^{−n} → 0 exists in both topologies.

Remark.
The reason we only state convergence along the subsequence (2^{−n}) stems from the fact that our argument fundamentally depends on Kozma's original work on the scaling of the three-dimensional LERW, where a similar restriction was imposed [107]. There is no reason to believe that this is an essential requirement for the result to hold. (Indeed, Theorem 5.1.1 shows that subsequential limits exist.) This is the only place in our proof where we require δ = 2^{−n}. If one were to generalise Kozma's result to an arbitrary sequence of δ's, the natural extension of the above theorem would immediately follow.

Remark. An important open problem, for both the LERW and UST in three dimensions, is to describe the limiting object directly in the continuum. In two dimensions, there are connections between the LERW and SLE2, as well as between the UST and SLE8, see [86, 122, 149], which give a direct construction of the continuous objects. In the three-dimensional case, there is as yet no parallel theory. The development of such a representation would be a significant advance in three-dimensional statistical mechanics.

Before continuing, we briefly outline the strategy of proof for the convergence part of the above result, for which there are two main elements. The first of these is a finite-dimensional convergence statement: Theorem 5.7.2 states that the part of U spanning a finite collection of points converges under rescaling. Appealing to Wilson's algorithm [169], which gives the means to construct U from LERW paths, this finite-dimensional result extends the scaling result for the three-dimensional LERW of [127]. Here we encounter a central hurdle: after the first walk, Wilson's algorithm requires us to take a LERW in a rough subdomain of Z3, namely the complement of the previous LERWs. Existing results in [107, 127] on scaling limits of LERWs require subdomains with smooth boundary, and some care is needed to extend the existence of the scaling limit.
We resolve this difficulty by proving that we can approximate the rough subdomain with a simpler one, and by showing that the corresponding LERWs are close to each other as parametrized curves.

Secondly, to prove tightness, we need to check that the trees spanning a finite collection of points give a sufficiently good approximation of the entire UST U, once the number of points is large. For this, we need to know that LERWs started from the remaining lattice points hit the trees spanning a finite collection of points quickly. In two dimensions, such a property was established using Beurling's estimate, which says that a simple random walk hits any given path quickly if it starts close to it in Euclidean terms, see [97]. In three dimensions, Beurling's estimate does not hold. In its place, we have a result from [148], which yields that a simple random walk hits a typical LERW path quickly if it starts close to it. Thus, although the intuition in the three-dimensional case is similar, it requires us to remember much more about the structure of the part of the UST we have already constructed as Wilson's algorithm proceeds.

5.1.2 Properties of the scaling limit

While uniqueness of the scaling limit is as yet unproved, the techniques we use to establish Theorem 5.1.1 allow us to deduce some properties of any possible scaling limit. These are collected below. N.B.: for this result, the scaling limits we consider are with respect to the Gromov-Hausdorff-type topology on the space of measured, rooted spatial trees, see Section 5.2 below. The one-endedness of the limiting space matches the corresponding result in the discrete case, [143, Theorem 4.3]. We use BT(x, r) to denote the ball in the limiting metric space T = (T, dT) of radius r around x. It is natural to expect that the scaling limit will have dimension

d_f := 3/β.

Moreover, one would expect that a ball of radius r in the limiting object has measure of order r^{3/β}.
The following theorem establishes uniform bounds of this magnitude for all small balls in the limiting tree, with a logarithmic correction for arbitrary centres and with iterated logarithmic corrections for a fixed centre, which may be taken to be ρ. We use f ⪯ g to denote that f ≤ Cg for some absolute (i.e. deterministic, and not depending on the particular subsequence) constant C. We denote by γT(x, y) the path in the topological tree T between the points x and y. We write L to represent Lebesgue measure on R3. The definition of the 'Schramm distance' below is inspired by [149, Remark 10.15].

Theorem 5.1.2. Let P̂ be a subsequential limit of Pδ as δ → 0, and let the random measured, rooted spatial tree (T, dT, µT, ϕT, ρT) have law P̂. Then the following statements hold P̂-a.s.

(a) The tree T is one-ended (with respect to the topology induced by the metric dT).

(b) Every ball in (T, dT) has Hausdorff dimension d_f.

(c) There exists an absolute constant C < ∞ so that: for any R > 0, there exists a random r0(T) > 0 such that

r^{d_f} (log r^{−1})^{−C} ⪯ inf_{x∈BT(ρ,R)} µT(BT(x, r)) ≤ sup_{x∈BT(ρ,R)} µT(BT(x, r)) ⪯ r^{d_f} (log r^{−1})^{C},

for all r < r0.

(d) For some absolute C < ∞, there exists a random r0(T) > 0 such that

r^{d_f} (log log r^{−1})^{−C} ⪯ µT(BT(ρ, r)) ⪯ r^{d_f} (log log r^{−1})^{C}, for all r < r0.

(e) The metric dT is topologically equivalent to the 'Schramm metric' d^S_T on T, defined by

d^S_T(x, y) := diam(ϕT(γT(x, y))),   (5.2)

where diam denotes the diameter in the Euclidean metric.

(f) µT = L ◦ ϕT.

Differences from the two-dimensional case

Analogues of the properties described in Theorem 5.1.2 (and others) were proved in the two-dimensional case in [24], see also the related earlier work [149]. There are, however, several notable differences in three dimensions. Following Schramm [149], consider the trunk of the tree T, denoted T°, which is the set of all points of T of degree greater than 1, where the degree of x is the number of connected components of T \ {x}.
In the two-dimensional case, it is known that the restriction of the continuous map ϕT to the trunk is a homeomorphism between T° (equipped with the topology induced from T) and its image ϕT(T°) (equipped with the induced Euclidean topology). Thus the image of the trunk, which is dense in R2, determines its topology. We do not expect the same to be true in three dimensions. Indeed, due to the greater probability that three LERWs started from adjacent points on the integer lattice escape to a macroscopic distance before colliding, we expect that the image of the trunk ϕT(T°) is no longer a topological tree in R3, see Figure 5.2. We aim to establish this as a result in forthcoming work.

Figure 5.2: In the sketch, |x − x′| = 1, but the path in the UST between these points has Euclidean diameter greater than R/3. We expect that such pairs of points occur with positive probability, uniformly in R.

Secondly, for the two-dimensional UST, it was shown in [24] that the maximal degree in T is 3, and that µT is supported on the leaves of T, i.e. the set of points of degree 1. We can show that the same is true in three dimensions, though we also postpone these results to a separate paper, since they are significantly harder than in the two-dimensional case. Indeed, as well as appealing to the homeomorphism between the trunk and its embedding, the two-dimensional arguments in the literature depend on a duality argument that does not extend to three dimensions. We replace this with a more technical direct argument. The aforementioned homeomorphism and duality also allow it to be shown that in two dimensions max_{x∈R2} |ϕ^{−1}(x)| = 3 (where we write |A| for the cardinality of a set A), and, although not mentioned explicitly in [24, 149], it is also easy to deduce the Hausdorff dimension of the set of points with given pre-image size.
Our forthcoming work will explore the corresponding results in the three-dimensional case.

5.1.3 Scaling the random walk on U

The metric-measure scaling of U yields various consequences for the associated simple random walk (SRW), which we next introduce. For a given realisation of the graph U, the SRW on U is the discrete-time Markov process X^U = ((X^U_n)_{n≥0}, (P^U_x)_{x∈Z3}) which at each time step jumps from its current location to a uniformly chosen neighbour in U. For x ∈ Z3, the law P^U_x is called the quenched law of the simple random walk on U started at x. We then define the annealed or averaged law for the process started from ρU as the semi-direct product of the environment law P and the quenched law P^U_0 by setting

H_U(·) := ∫ P^U_0(·) dP.

We use E_U for the corresponding annealed expectation.

The behaviour of the random walk on a graph is fundamentally linked to the associated electrical resistance. We refer the reader to [19, 59, 126, 132] for introductions to this connection, including, in particular, the definition of effective resistance. For the three-dimensional UST, we will write R_U for the effective resistance on U, considered as an electrical network with unit resistors placed along each edge.

As noted above, the typical measure of BU(ρ, R) is of order R^{d_f}. We show below that the effective resistance to the complement of the ball is typically of order R (it is trivially at most R). In light of these facts, and following [109], we define the set of well-behaved scales with parameter λ by

J(λ) := { R ∈ [1, ∞) : R^{−d_f} µU(BU(ρ, R)) ∈ [λ^{−1}, λ] and R_U(ρ, BU(ρ, R)^c) ≥ λ^{−1}R }.

In particular, for R to be in J(λ), we require good control over the volume of the intrinsic ball of radius R centred at the root of U, and control over the resistance from the root to the boundary of this ball. As our next result, we show that these events hold with high probability, uniformly in R.

Theorem 5.1.3.
There exist constants c, c1, c2 ∈ (0, ∞) such that: for all R, λ > 1,

P(R ∈ J(λ)) ≥ 1 − c e^{−c1 λ^{c2}}.

The motivation for Theorem 5.1.3 is provided by the general random walk estimates presented by Kumagai and Misumi in [109], which builds on the work [23]. More specifically, Theorem 5.1.3 establishes the conditions for the main results of [109], which yield several important exponents governing aspects of the behaviour of the random walk. Indeed, as is made precise in the following corollary, we obtain that the walk dimension with respect to the intrinsic distance is given by

d_w := 1 + d_f = (3 + β)/β,

that the walk dimension with respect to the extrinsic (Euclidean) distance dE is given by βd_w = 3 + β (this requires a small amount of work additional to the tools of [109]), and that the spectral dimension is given by

d_s := 2d_f/d_w = 6/(3 + β).   (5.3)

Various further consequences for the random walk on U also follow from the results of [109], but rather than simply list these here, we refer the interested reader to that article for details. Table 5.1 summarises the numerical estimates for the three-dimensional random walk exponents that follow from the above formulae, together with the numerical estimate for β from [170], and compares these with the known exponents in the two-dimensional model.

                                   General form      d = 2          d = 3
LERW growth exponent               β                 5/4 = 1.25     1.62
Fractal dimension of U             d_f = d/β         8/5 = 1.60     1.85
Intrinsic walk dimension of U      d_w = 1 + d_f     13/5 = 2.60    2.85
Extrinsic walk dimension of U      βd_w              13/4 = 3.25    4.62
Spectral dimension of U            2d_f/d_w          16/13 ≈ 1.23   1.30

Table 5.1: Exponents associated with the LERW and UST in two and three dimensions. The two-dimensional exponents are known rigorously from [20, 21, 24, 95]. The three-dimensional values are based on the results of this study, together with the numerical estimate for the growth exponent of the three-dimensional LERW from [170].

Corollary 5.1.1. (a) For P-a.e.
realisation of U and all x ∈ U,

lim_{R→∞} log E^U_x[τ^U_{x,R}] / log R = d_w,   (5.4)

where τ^U_{x,R} := inf{n ≥ 0 : dU(x, X^U_n) > R},

lim_{R→∞} log E^U_x[τ^E_{x,R}] / log R = βd_w,   (5.5)

where τ^E_{x,R} := inf{n ≥ 0 : dE(x, X^U_n) > R}, and

− lim_{n→∞} 2 log p^U_{2n}(x, x) / log n = d_s.   (5.6)

(b) For H_U-a.e. realisation of X^U,

lim_{R→∞} log τ^U_{0,R} / log R = d_w,   lim_{n→∞} log max_{0≤m≤n} dU(0, X^U_m) / log n = 1/d_w,   (5.7)

lim_{R→∞} log τ^E_{0,R} / log R = βd_w,   lim_{n→∞} log max_{0≤m≤n} dE(0, X^U_m) / log n = 1/(βd_w).   (5.8)

(c) It holds that

lim_{R→∞} log E_U(τ^U_{0,R}) / log R = d_w,   (5.9)

lim_{R→∞} log E_U(τ^E_{0,R}) / log R = βd_w,   (5.10)

where E_U is the expectation under H_U, and

− lim_{n→∞} 2 log E(p^U_{2n}(0, 0)) / log n = d_s.   (5.11)

Remark. In part (c) of the previous result, we do not provide averaged results for the distance travelled by the process up to time n with respect to either the intrinsic or extrinsic metrics. In the two-dimensional case, the corresponding results were established in [21], with the additional input being full off-diagonal annealed heat kernel estimates. Since the latter require a substantial amount of additional work, we leave deriving these as an open problem.

Finally, it is by now well understood how scaling limits of discrete trees transfer to scaling limits for the associated random walks on the trees, see [14, 24, 50, 52–54]. We apply these techniques in our setting to deduce a (subsequential) scaling limit for X^U. As we will explain in Section 5.10, the limiting process can be written as (ϕT(X^T_t))_{t≥0}, where ((X^T_t)_{t≥0}, (P^T_x)_{x∈T}) is the canonical Brownian motion on the limit space (T, dT, µT). This Brownian motion is constructed in [13, 98]. Moreover, the volume estimates of Theorem 5.1.2, in conjunction with the general heat kernel estimates of [49], yield sub-diffusive transition density bounds for the limiting diffusion. Modulo the different exponents, these are of the same sub-Gaussian form as established for the Brownian continuum random tree in [51], and for the two-dimensional UST in [24].
Note in particular that our results imply that the spectral dimension of the continuous model, defined analogously to (5.6), is equal to the value d_s given at (5.3).

Theorem 5.1.4. If (Pδn)_{n≥0} is a convergent sequence with limit P̂, then the following statements hold.

(a) The annealed law of (ϕT(X^T_t))_{t≥0}, where X^T is Brownian motion on (T, dT, µT) started from ρT, i.e.

H_T(·) := ∫ P^T_{ρT} ◦ ϕT^{−1}(·) dP̂,

is a well-defined probability measure on C(R+, R3).

(b) Let (X^U_t)_{t≥0} be the simple random walk on U started from ρU; then the annealed laws of the rescaled processes

(δn X^U_{t δn^{−(3+β)}})_{t≥0}

converge to the annealed law of (ϕT(X^T_t))_{t≥0}.

(c) P̂-a.s., the process X^T is recurrent and admits a jointly continuous transition density (p^T_t(x, y))_{x,y∈T, t>0}. Moreover, it P̂-a.s. holds that, for any R > 0, there exist random constants c1(T), c2(T), c3(T), c4(T) and t0(T) ∈ (0, ∞) and deterministic constants θ1, θ2, θ3, θ4 ∈ (0, ∞) (not depending on R) such that

p^T_t(x, y) ≤ c1 t^{−d_s/2} ℓ(t^{−1})^{θ1} exp( −c2 ( dT(x, y)^{d_w} / t )^{1/(d_w−1)} ℓ(dT(x, y)/t)^{−θ2} ),

p^T_t(x, y) ≥ c3 t^{−d_s/2} ℓ(t^{−1})^{−θ3} exp( −c4 ( dT(x, y)^{d_w} / t )^{1/(d_w−1)} ℓ(dT(x, y)/t)^{θ4} ),

for all x, y ∈ BT(ρT, R), t ∈ (0, t0), where ℓ(x) := 1 ∨ log x.

(d) (i) P̂-a.s., there exist a random t0(T) ∈ (0, ∞) and deterministic c1, c2, θ1, θ2 ∈ (0, ∞) such that

c1 t^{−d_s/2} (log log t^{−1})^{−θ1} ≤ p^T_t(ρT, ρT) ≤ c2 t^{−d_s/2} (log log t^{−1})^{θ2},

for all t ∈ (0, t0).

(ii) There exist constants c1, c2 ∈ (0, ∞) such that

c1 t^{−d_s/2} ≤ Ê[p^T_t(ρT, ρT)] ≤ c2 t^{−d_s/2},

for all t ∈ (0, 1).

Organization of this chapter

The remainder of the chapter is organised as follows. In Section 5.2, we introduce the topologies that provide the framework for Theorem 5.1.1, and set out three conditions that imply tightness in this topology. Then, in Section 5.3, we collect together the properties of loop-erased random walks that will be useful for this article.
After these preparations, the three tightness conditions are checked in Section 5.4, and the volume estimates contained within this are strengthened in Sections 5.5 and 5.6 in a way that yields more detailed properties concerning the limit space and the simple random walk. In Section 5.7, we demonstrate our finite-dimensional convergence result for subtrees of U that span a finite number of points. The various pieces for proving Theorem 5.1.1 are subsequently put together in Section 5.8, and the properties of the limiting space are explored in Section 5.9, with Theorem 5.1.2 being proved in that part of the article. Finally, Section 5.10 covers the results relating to the simple random walk and its diffusion scaling limit.

5.2 Topological framework

In this section, we introduce the Gromov-Hausdorff-type topology on measured, rooted spatial trees with respect to which Theorem 5.1.1 is stated. This topology is metrizable, and for completeness' sake we include a possible metric (see Proposition 5.2.1). Moreover, we provide a sufficient criterion (Assumptions 1, 2, and 3 below) for tightness of a family of measures on measured, rooted spatial trees in the relevant topology (see Lemma 5.2.2). This will be applied in order to prove tightness under scaling of the three-dimensional UST. In the first part of the section, we follow closely the presentation of [24].

Define T to be the collection of quintuplets of the form

T = (T, dT, µT, ϕT, ρT),

where: (T, dT) is a complete and locally compact real tree (for the definition of a real tree, see [125, Definition 1.1], for example); µT is a locally finite Borel measure on (T, dT); ϕT is a continuous map from (T, dT) into a separable metric space (M, dM); and ρT is a distinguished vertex in T. (In this article, the image space (M, dM) we consider is R3 equipped with the Euclidean distance.) We call such a quintuplet a measured, rooted, spatial tree.
We will say that two elements of T, T and T′ say, are equivalent if there exists an isometry π : (T, dT) → (T′, d′T) for which µT ◦ π^{−1} = µ′T, ϕT = ϕ′T ◦ π, and also π(ρT) = ρ′T.

We now introduce a variation on the Gromov-Hausdorff-Prohorov topology on T that also takes into account the mapping ϕT. In order to introduce this topology, we start by recalling from [24] the metric ∆c on Tc, which is the subset of elements of T such that (T, dT) is compact. In particular, for two elements of Tc, we set ∆c(T, T′) to be equal to

inf_{Z, ψ, ψ′, C : (ρT, ρ′T)∈C} { d^Z_P(µT ◦ ψ^{−1}, µ′T ◦ ψ′^{−1}) + sup_{(x,x′)∈C} ( dZ(ψ(x), ψ′(x′)) + dM(ϕT(x), ϕ′T(x′)) ) },   (5.12)

where the infimum is taken over all metric spaces Z = (Z, dZ), isometric embeddings ψ : (T, dT) → Z, ψ′ : (T′, d′T) → Z, and correspondences C between T and T′, and where d^Z_P denotes the Prohorov distance between finite Borel measures on Z. Note that, by a correspondence C between T and T′, we mean a subset of T × T′ such that for every x ∈ T there exists at least one x′ ∈ T′ such that (x, x′) ∈ C, and conversely for every x′ ∈ T′ there exists at least one x ∈ T such that (x, x′) ∈ C. (Except for the term involving ϕ and ϕ′, this is the usual metric for the Gromov-Hausdorff-Prohorov topology.)

Given the definition of ∆c at (5.12), we then define a pseudo-metric ∆ on T by setting

∆(T, T′) := ∫_0^∞ e^{−r} ( 1 ∧ ∆c(T^{(r)}, T′^{(r)}) ) dr,   (5.13)

where T^{(r)} is obtained by taking the closed ball in (T, dT) of radius r centred at ρT, restricting dT, µT and ϕT to T^{(r)}, and taking ρ^{(r)}_T to be equal to ρT. We have the following result, and it is the corresponding topology that provides the framework for Theorem 5.1.1.

Proposition 5.2.1 ([24, Proposition 3.4]). The function ∆ defines a metric on the equivalence classes of T. Moreover, the resulting metric space is separable.

We next present a criterion for tightness of a sequence of random measured, rooted spatial trees.
This is a probabilistic version of [24, Lemma 3.5] (which adds the spatial embedding to the result of [1, Theorem 2.11]). Recall the definition of stochastic equicontinuity: suppose that for some index set A there are random metric spaces (Xi, di) and random functions ϕi : Xi → M, for a metric space (M, dM). The functions are stochastically equicontinuous if their moduli of continuity converge to 0 uniformly in probability, i.e. for every ε > 0,

lim_{η→0} sup_{i∈A} P( sup_{x,y∈Xi : di(x,y)≤η} dM(ϕi(x), ϕi(y)) > ε ) = 0.

Lemma 5.2.2. Suppose (M, dM) is proper (i.e. every closed ball in M is compact), and ρM is a fixed point in M. Let T_δ = (Tδ, dTδ, µTδ, ϕTδ, ρTδ), δ ∈ A (where A is some index set), be a collection of random measured, rooted spatial trees. Moreover, assume that for every R > 0, the following quantities are tight:

(i) for every ε > 0, the number N(T_δ, R, ε) of balls of radius ε required to cover the ball T^{(R)}_δ;

(ii) the measure of the ball, µTδ(T^{(R)}_δ);

(iii) the distances dM(ρM, ϕTδ(ρTδ));

and additionally that the restrictions of ϕTδ to T^{(R)}_δ are stochastically equicontinuous. Then the laws of (T_δ)_{δ∈A} form a tight sequence of probability measures on the space of measured, rooted spatial trees.

For convenience in applying Lemma 5.2.2 to the three-dimensional UST, we next summarise the conditions that we will check for this example. Since these are of a different form to those given above, we complete the section by verifying their sufficiency in Lemma 5.2.3. We recall that the notation BU(x, r) is used for balls in (U, dU).

Assumption 1. For every R ∈ (0, ∞), it holds that

lim_{λ→∞} limsup_{δ→0} P( δ^3 µU(BU(0, δ^{−β}R)) > λ ) = 0.

Assumption 2. For every ε, R ∈ (0, ∞), it holds that

lim_{η→0} limsup_{δ→0} P( inf_{x∈BU(0, δ^{−β}R)} δ^3 µU(BU(x, δ^{−β}ε)) < η ) = 0.

Assumption 3. For every ε, R ∈ (0, ∞), it holds that

lim_{η→0} limsup_{δ→0} P( inf_{x,y∈BU(0, δ^{−β}R) : δ dE(x,y) > ε} δ^β dU(x, y) < η ) = 0.

Lemma 5.2.3. If Assumptions 1, 2 and 3 hold, then so does the tightness claim of Theorem 5.1.1.

Proof.
We first check that if Assumptions 1 and 2 hold, then, for every ε, R ∈ (0, ∞),

lim_{λ→∞} limsup_{δ→0} P( N_U(δ^{−β}R, δ^{−β}ε) > λ ) = 0,   (5.14)

where N_U(δ^{−β}R, δ^{−β}ε) is the minimal number of intrinsic balls of radius δ^{−β}ε needed to cover BU(0, δ^{−β}R). Towards proving this, suppose that

δ^3 µU(BU(0, δ^{−β}(R + ε/2))) ≤ λη,   (5.15)

and also that

inf_{x∈BU(0, δ^{−β}R)} δ^3 µU(BU(x, δ^{−β}ε/2)) ≥ η.   (5.16)

Set x1 = 0, and choose

x_{i+1} ∈ BU(0, δ^{−β}R) \ ∪_{j=1}^{i} BU(xj, δ^{−β}ε),

stopping when this is no longer possible, to obtain a finite sequence (xi)_{i=1}^{M}. The construction ensures that ∪_{i=1}^{M} BU(xi, δ^{−β}ε) contains BU(0, δ^{−β}R), and so M ≥ N_U(δ^{−β}R, δ^{−β}ε). Moreover, since dU(xi, xj) ≥ δ^{−β}ε for i ≠ j, it is the case that the balls (BU(xi, δ^{−β}ε/2))_{i=1}^{M} are disjoint. Putting these observations together with (5.15) and (5.16), we find that

N_U(δ^{−β}R, δ^{−β}ε) ≤ M ≤ η^{−1} Σ_{i=1}^{M} δ^3 µU(BU(xi, δ^{−β}ε/2)) = η^{−1} δ^3 µU( ∪_{i=1}^{M} BU(xi, δ^{−β}ε/2) ) ≤ η^{−1} δ^3 µU(BU(0, δ^{−β}(R + ε/2))) ≤ λ.

From this, we conclude that

P( N_U(δ^{−β}R, δ^{−β}ε) > λ ) ≤ P( δ^3 µU(BU(0, δ^{−β}(R + ε/2))) > λη ) + P( inf_{x∈BU(0, δ^{−β}R)} δ^3 µU(BU(x, δ^{−β}ε/2)) < η ),

and so (5.14) follows by letting δ → 0, λ → ∞ and then η → 0.

Second, we show that if Assumption 3 holds, then, for every ε, R ∈ (0, ∞),

lim_{η→0} limsup_{δ→0} P( sup_{x,y∈BU(0, δ^{−β}R) : dU(x,y) < δ^{−β}η} δ dE(x, y) > ε ) = 0.   (5.17)

Indeed, this follows from the elementary observation that

P( sup_{x,y∈BU(0, δ^{−β}R) : dU(x,y) < δ^{−β}η} δ dE(x, y) > ε ) ≤ P( inf_{x,y∈BU(0, δ^{−β}R) : δ dE(x,y) > ε} δ^β dU(x, y) < η ).

Given (5.14), Assumption 1, the fact that δϕU(ρU) = 0, and (5.17), the result is a straightforward application of Lemma 5.2.2.

5.2.1 Path ensembles

Finally, we also define the path ensemble topology used in Theorem 5.1.1. This topology was introduced by Schramm [149] in the context of scaling of two-dimensional uniform spanning trees, and a related topology (based on quad-crossings) has been used in the context of scaling limits of critical percolation.
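For finite point sets, the Hausdorff distance underlying this topology (recalled formally in the next paragraph) reduces to a max-min computation. A minimal Python illustration, with Euclidean ground distance and example sets of our own choosing:

```python
import math

def hausdorff(A, B):
    """Hausdorff distance between finite nonempty point sets in R^d:
    the smallest r such that A lies in the r-expansion of B and
    B lies in the r-expansion of A."""
    from_A = max(min(math.dist(a, b) for b in B) for a in A)
    from_B = max(min(math.dist(a, b) for a in A) for b in B)
    return max(from_A, from_B)

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (1.0, 3.0)]
print(hausdorff(A, B))  # -> 3.0: the point (1, 3) of B is at distance 3 from A
```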
Recall that γT(x, y) is the unique path from x to y in a topological tree T.

We denote by H(X) the Hausdorff space of compact subsets of a metric space X, endowed with the Hausdorff topology. This is generated by the Hausdorff distance, given by

dH(A, B) = inf { r ≥ 0 : A ⊆ Br, B ⊆ Ar },

where Br = {x ∈ X : d(x, B) ≤ r} is the r-expansion of B.

We shall consider the sphere S3 as the one-point compactification of R3, in which we also consider the one-point compactification of a uniform spanning tree of Z3. For concreteness, fix some homeomorphism from R3 to S3 and endow it with the Euclidean metric on the sphere. Given a compact topological tree T ⊂ S3, we consider the set ΓT ⊂ S3 × S3 × H(S3) given by

ΓT = {(x, y, γT(x, y)) : x, y ∈ T}.

Thus ΓT consists of pairs of points and the path between them. We call ΓT the path ensemble of the tree T. Clearly ΓT is a compact subset of S3 × S3 × H(S3). Since each tree corresponds to a compact subset of S3 × S3 × H(S3), the Hausdorff topology on this product space induces a topology on trees. Theorem 5.1.1 states that the laws of the uniform spanning trees on δZ3 are tight and have a subsequential weak limit with respect to this topology (in addition to the Gromov-Hausdorff-type topology described above).

5.3 Loop-erased random walks

As noted in the introduction, the fundamental connection between loop-erased random walks (LERWs) and uniform spanning trees (USTs) will be crucial to this study. In this section, we recall the definition of the LERW, and collect together a number of properties of the three-dimensional LERW that hold with high probability. These properties will be useful in our study of the three-dimensional UST. We start by introducing some general notation and terminology.

5.3.1 Notation for Euclidean subsets

The discrete ℓ2 Euclidean ball will be denoted by
A δ-scaled discrete ℓ2 ball, for δ > 0, will be denoted by

B_δ(x, r) := {y ∈ δZ^3 : |x − y| < r},

and the Euclidean ℓ2 ball is

B_E(x, r) := {y ∈ R^3 : |x − y| < r}.

We will also use the abbreviation B(r) = B(0, r), and similarly for B_δ and B_E. We also write B_n(0, r) = B_{2^{-n}}(r). The discrete cube (or ℓ∞ ball of radius r) with side-length 2r centred at x is defined to be the set

D(x, r) := {y ∈ Z^3 : ∥x − y∥_∞ < r}.

Similarly to the definitions above, but with ℓ∞ balls, D_δ(x, r) denotes the δ-scaled discrete cube and D_E(x, r) the Euclidean cube. We further write D(R) = D(0, R) and D_n(r) = D_{2^{-n}}(0, r). The Euclidean distance between a point x and a set A is given by

dist(x, A) := inf {|x − y| : y ∈ A}.

For a subset A of Z^3, the inner boundary ∂_i A is defined by

∂_i A := {x ∈ A : ∃ y ∈ Z^3 \ A such that |x − y| = 1}.

5.3.2 Notation for paths and curves

We introduce definitions related to paths and curves. Some concepts were also defined in Section 2.1.

A path in Z^3 is a finite or infinite sequence of vertices [v_0, v_1, . . .] such that v_{i−1} and v_i are nearest neighbours, i.e. |v_{i−1} − v_i| = 1, for all i ∈ {1, 2, . . .}. The length of a finite path γ = [v_0, v_1, . . . , v_m] will be denoted len(γ) and is defined to be the number of steps taken by the path, that is, len(γ) = m.

A (parameterized) curve is a continuous function γ : [0, T] → R^3. For a curve γ : [0, T] → R^3, we say that T is its duration, and will sometimes use the notation T(γ) := T. When the specific parameterization of a curve γ is not important, we might consider only its trace, which is the closed subset of R^3 given by tr γ = {γ(t) : t ∈ [0, T]}. To simplify notation, we sometimes write γ instead of tr γ where the meaning should be clear. A curve is simple if γ is an injective function.
All curves in this chapter are assumed to be simple, often implicitly.

The space of parameterized curves of finite duration, C_f, will be endowed with a metric ψ, defined by

ψ(γ_1, γ_2) = |T_1 − T_2| + max_{0≤s≤1} |γ_1(sT_1) − γ_2(sT_2)|,

where γ_i : [0, T_i] → R^3, i = 1, 2, are elements of C_f.

We say that a continuous function γ_∞ : [0,∞) → R^3 is a transient (parameterized) curve if lim_{t→∞} |γ_∞(t)| = ∞. We let C be the set of transient curves, and endow C with the metric χ given by

χ(γ_∞^1, γ_∞^2) = Σ_{k=1}^{∞} 2^{-k} ( 1 ∧ max_{t≤k} |γ_∞^1(t) − γ_∞^2(t)| ).

The concatenation of two curves γ_1 : [0, T_1] → R^3 and γ_2 : [0, T_2] → R^3 with γ_1(T_1) = γ_2(0) is the curve γ_1 ⊕ γ_2 of duration T_1 + T_2 given by

γ_1 ⊕ γ_2(t) := γ_1(t) if 0 ≤ t ≤ T_1, and γ_1 ⊕ γ_2(t) := γ_2(t − T_1) if T_1 < t ≤ T_1 + T_2.

The time-reversal of γ : [0, T] → R^3 is the curve γ⃗ : [0, T] → R^3 defined by

γ⃗(t) := γ(T − t), t ∈ [0, T].

We define several kinds of restrictions for a curve γ : [0, T] → R^3. Analogous restrictions are defined for transient curves. The restriction of γ to an interval [a, b] ⊆ [0, T] is the curve γ|_{[a,b]} : [0, b − a] → R^3 defined by setting

γ|_{[a,b]}(t) = γ(t + a), 0 ≤ t ≤ b − a.

Similarly, if γ is a simple parameterized curve, x, y ∈ tr γ, and x appears before y in γ, then we define the restriction of γ between x and y to be the curve γ(x, y), where

γ(x, y)(t) = γ(t + t_x), 0 ≤ t ≤ t_y − t_x,

with t_x ≤ t_y satisfying γ(t_x) = x and γ(t_y) = y. (Note that the simplicity of γ ensures that t_x and t_y are well-defined.) Finally, the restriction of γ to the Euclidean ball of radius R, with R > 0, is the curve γ|_R := γ|_{[0, ξ_R ∧ T]}, where ξ_R = inf{t ∈ [0, T] : |γ(t)| ≥ R} is the time at which γ exits the ball of radius R.

Proposition 5.3.1. Let (γ_n)_{n∈N} ⊂ C_f be a sequence of curves. Assume that γ_n → γ ∈ C_f.
Then the convergence is preserved in C_f under the following operations.

(a) Time reversal: for the sequence of curves under time-reversal,

γ⃗_n → γ⃗ as n → ∞.

(b) Restriction: for 0 ≤ a < b < T(γ), the restrictions satisfy

γ_n|_{[a,b]} → γ|_{[a,b]} as n → ∞,

where the sequence above is defined for n large enough.

(c) Concatenation: if γ̃_n → γ̃ in C_f, then

γ_n ⊕ γ̃_n → γ ⊕ γ̃ as n → ∞.

Proof. In this proof, we write T_n = T(γ_n) and T = T(γ). The convergence after a time-reversal is immediate from the definition, and we get (a). For (b), we consider the case a = 0. Let r_n, r ∈ [0, 1] be such that b = r_n T_n and b = rT. Then

ψ(γ_n|_{[0,b]}, γ|_{[0,b]}) = max_{0≤s≤1} |γ_n(sb) − γ(sb)| = max_{0≤s≤1} |γ_n(s r_n T_n) − γ(s r T)|
≤ max_{0≤s≤1} |γ_n(s r_n T_n) − γ(s r_n T)| + max_{0≤s≤1} |γ(s r_n T) − γ(s r T)|.

The convergence γ_n → γ implies that the first term above goes to 0 as n → ∞. Note that |r_n − r| = b|T_n^{-1} − T^{-1}| → 0, and hence the convergence of the last term above follows from the uniform continuity of γ. The convergence of γ_n under time-reversal gives the general case when a > 0.

Next we prove (c). We write T̃_n = T(γ̃_n), T̃ = T(γ̃) and δ_n = |T_n + T̃_n − (T + T̃)|. Note that δ_n → 0 as n → ∞. For 0 ≤ s ≤ 1, the times that we compare for ψ satisfy |s(T_n + T̃_n) − s(T + T̃)| ≤ δ_n. Then ψ(γ_n ⊕ γ̃_n, γ ⊕ γ̃) is bounded above by

δ_n + max_{|r−s|≤δ_n, r ≤ T_n ∨ T̃_n, s ≤ T ∨ T̃} |γ_n ⊕ γ̃_n(r) − γ ⊕ γ̃(s)|
≤ δ_n + max_{|r−s|≤δ_n, r ≤ T_n, s ≤ T} |γ_n(r) − γ(s)| + max_{|r−s|≤δ_n, r ≤ T̃_n, s ≤ T̃} |γ̃_n(r) − γ̃(s)| + δ_n M_n,

where M_n is an upper bound for the norms of γ_n, γ̃_n, γ, and γ̃. The last term in the inequality above comes from comparisons between γ_n and γ̃ (or between γ̃_n and γ) close to the concatenation point. The convergence of γ_n → γ and γ̃_n → γ̃, and the uniform continuity of each curve, give the desired result.

Proposition 5.3.2. Let (γ_n^∞)_{n∈N} ⊂ C be a sequence of parameterized curves with limit γ_n^∞ → γ^∞ in (C, χ).
The convergence is preserved under the operations below.

(a) Restriction: for any b > 0,

γ_n^∞|_{[0,b]} → γ^∞|_{[0,b]} as n → ∞,

in the space C_f.

(b) Concatenation: if (γ_n)_{n∈N} ⊂ C_f converges to a finite parameterized curve γ as n → ∞, then

γ_n ⊕ γ_n^∞ → γ ⊕ γ^∞ as n → ∞,

in C.

(c) Evaluation: if t_n → t, then

γ_n^∞(t_n) → γ^∞(t) as n → ∞.

Proof. The convergence in (a) follows from the definition of the metric χ. Similarly, (b) is a consequence of Proposition 5.3.1(c) and the definition of χ. Finally, (c) follows from the uniform continuity of γ_n^∞|_{[0,k]}.

If γ is a parameterized (simple) curve and x, y ∈ tr γ, we define the Schramm metric (cf. (5.2)) by setting

d_γ^S(x, y) := diam tr γ(x, y).   (5.18)

The intrinsic distance between x and y is given by

d_γ(x, y) := T(γ(x, y)) = t_y − t_x,   (5.19)

where γ(t_x) = x and γ(t_y) = y, i.e. this is the duration of the curve segment between x and y. Formally, both (5.18) and (5.19) are only defined when x comes before y in γ, but the definition is extended symmetrically in the obvious way.

5.3.3 Definition and parameterization of loop-erased random walks

We will now define the loop-erased random walk. Let S = [v_0, . . . , v_m] be a path in some graph (which we take to be Z^3 or δZ^3). By erasing the cycles (or loops) in S in chronological order, we obtain a simple path from v_0 to v_m. This operation is called loop-erasure, and is defined as follows. Set T(0) = 0 and ṽ_0 = v_0. Inductively, we set T(j) according to the last visit time to the previous vertex:

T(j) = 1 + sup {n : v_n = ṽ_{j−1}}, ṽ_j = v_{T(j)}.   (5.20)

We continue until ṽ_l = v_m, at which time T(l + 1) = m + 1 and there is no additional vertex ṽ_{l+1}. The loop-erased random walk (LERW) is the simple path LE(S) = [ṽ_0, . . . , ṽ_l].

The exact same definition also applies to an infinite, transient path S. Since the path S is transient, the times T(j) in (5.20) are finite almost surely, for every j ∈ N. In this case LE(S) is an infinite simple path.

The loop-erased random walk is just what the name implies: the loop erasure of a random walk.
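The chronological loop-erasure (5.20) admits a simple forward implementation: scan the path and, whenever a vertex already on the current erased path is revisited, delete the loop just created. For finite paths this produces the same simple path as the last-visit-time definition above. A minimal Python sketch, with names of our choosing:

```python
def loop_erase(path):
    """Chronological loop-erasure of a path (a list of hashable vertices):
    on revisiting a vertex of the current erased path, cut the loop."""
    erased = []
    index = {}          # vertex -> its position in `erased`
    for v in path:
        if v in index:  # a loop closes at v: remove it
            cut = index[v] + 1
            for w in erased[cut:]:
                del index[w]
            erased = erased[:cut]
        else:
            index[v] = len(erased)
            erased.append(v)
    return erased

# A walk on Z that backtracks through 1 and returns to 0 before leaving:
assert loop_erase([0, 1, 2, 1, 0, 1]) == [0, 1]
# A self-avoiding path is left unchanged:
assert loop_erase([0, 1, 2, 3]) == [0, 1, 2, 3]
```

The dictionary keeps the position of each vertex on the current erased path, so each loop is removed in time proportional to its length.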
In Z^3 (or δZ^3) we can take S_∞ to be an infinite random walk. S_∞ is almost surely transient, so the path LE(S_∞), called the infinite loop-erased random walk (ILERW), is a.s. well defined. We will also need loop-erased random walks in a domain D ⊂ R^3. We will write D̂ = Z^3 ∩ D for the subset of vertices of Z^3 inside D. Moreover, the (inner vertex) boundary of D̂ is the set ∂D̂ defined as the collection of vertices v ∈ D̂ which are connected to some v_1 ∈ Z^3 \ D̂. In this case, for a given starting vertex v_0, we may take S to be a simple random walk run up to the stopping time m at which it first hits ∂D̂. (We will apply this to bounded domains, so that m is almost surely finite, though the definition is valid even if m = ∞.) Examples of domains of a loop-erased random walk include the family of ℓ2 balls {B(R)}_{R>0} and of ℓ∞ balls {D(R)}_{R>0}.

A discrete simple path γ = (v_i) may naturally be considered as a curve by setting γ(i) = v_i, for i ∈ N, and linearly interpolating between γ(i) and γ(i + 1). With this parameterization, the length of γ (as a path) is equal to its duration as a curve: len(γ) = T(γ). If γ is a loop-erased random walk on δZ^3, its length diverges as δ → 0, and the curve needs to be reparameterized. To obtain a macroscopic curve in the scaling limit, we reparameterize loop-erased random walks by the β-parameterization:

γ̄(t) := γ(δ^{-β} t), ∀ t ∈ [0, δ^β len(γ)],

where β is the LERW growth exponent. Similarly, for an infinite loop-erased random walk γ_∞ = [v_0, v_1, . . .], we consider its associated curve γ_∞ by linearly interpolating between integer times, and its β-parameterization is given by

γ̄_∞(t) = γ_∞(δ^{-β} t), ∀ t ≥ 0.

In this chapter, we will sometimes consider the ILERW restricted to a finite domain. Specifically, if γ_∞ is an ILERW starting at the origin, we denote its restriction to a ball of radius r > 0 by γ_∞|_r = LE(S_∞)|_{[0, ξ_r(LE(S_∞))]}, where ξ_r(LE(S_∞)) is the first time LE(S_∞) exits B(r).
Note that this is a different object from a LERW started at the origin and stopped at the first hitting time of ∂B(r). However, the two are closely related; see [136, Corollary 4.5].

5.3.4 Path properties of the infinite loop-erased random walk

In this section, we summarize some path properties of the ILERW that hold with high probability. Typically the events will involve some property that holds on the appropriate scale in a neighbourhood of radius Rδ^{-1} about the starting point of the ILERW, for δ the scaling parameter and some fixed R ≥ 1. Since the results hold uniformly in the scaling parameter δ ∈ (0, 1], they will also be useful in the scaling limit. As for notation, for x ∈ Z^3, we let γ_∞^x be an ILERW on Z^3 starting at x. If x = 0, then we simply write γ_∞. We highlight that in this section the space is not rescaled.

Quasi-loops

A path γ is said to have an (r, R)-quasi-loop if it contains two vertices v_1, v_2 ∈ γ such that |v_1 − v_2| < r, but γ(v_1, v_2) ⊄ B(v_1, R). (Up to changing the parameters slightly, this is almost the same as d_γ^S(v_1, v_2) ≥ R.) We denote the set of (r, R)-quasi-loops of γ by QL(r, R; γ). Estimates on the probabilities of quasi-loops in LERWs were central to Kozma's work [107]. The following bound on the probability of quasi-loops was established in [148] for loop-erased random walks. The extension to the infinite case follows from [148, Theorem 6.1] in combination with [136, Corollary 4.5].

Proposition 5.3.3 (cf. [148, Theorem 6.1]). For every R ≥ 1, there exist constants C, M < ∞ and η̃ > 0 such that for any δ, ε ∈ (0, 1),

P( QL(ε^M δ^{-1}, √ε δ^{-1}; γ_∞|_{Rδ^{-1}}) ≠ ∅ ) ≤ C ε^{η̃}.

Intrinsic length and diameter

Let ξ_n be the first time that the loop-erased walk γ_∞ exits the ball B(n) (i.e. the number of steps counted after the loop-erasure). The next result is a quantitative tightness result for n^{-β} ξ_n.
It is a combination of the exponential tail bounds of [155], together with the estimates on the expected value of ξ_n from [128]. We note that the result in [155] is for the LERW, but the proof is the same for the ILERW.

Proposition 5.3.4 ([155, Theorems 1.4 and 8.12] and [128, Corollary 1.3]). There exist constants C, c_1, c_2 ∈ (0,∞) such that: for all λ ≥ 1 and n ≥ 1,

P( ξ_n ≤ λ n^β ) ≥ 1 − 2 e^{-c_1 λ},
P( ξ_n ≥ λ^{-1} n^β ) ≥ 1 − C e^{-c_2 λ^{1/2}}.

While any possible pattern appears in γ_∞, the scaling relation (given by β) between the intrinsic distance and the Euclidean distance holds uniformly along the path of γ_∞. We quantify this relation in terms of equicontinuity. Let R ≥ 1, δ ∈ (0, 1) and λ ≥ 1. We say that γ_∞ is λ-equicontinuous in the ball B(Rδ^{-1}) (with exponents 0 < b_1, b_2 < ∞) if the following event holds:

E*_δ(λ, R) = { ∀ x, y ∈ γ_∞|_{Rδ^{-1}}, if d_{γ_∞}(x, y) ≤ λ^{-b_1} δ^{-β}, then |x − y| < λ^{-b_2} δ^{-1} }.

The bound for the ILERW was proved in [127].

Proposition 5.3.5 (cf. [127, Proposition 7.1]). There exist constants 0 < b_1, b_2 < ∞ such that the following is true. Given R ≥ 1, there exists a constant C such that: for all δ ∈ (0, 1) and λ ≥ 1,

P( E*_δ(λ, R) ) ≥ 1 − C λ^{-b_2}.

A partial converse bounds the intrinsic distance in terms of the Schramm distance, where we recall that the Schramm distance was defined at (5.18). For δ, r ∈ (0, 1], λ ≥ 1, set

S*_δ(λ, r) := { ∀ x, y ∈ γ_∞|_{λrδ^{-1}}, if d^S_{γ_∞}(x, y) < r δ^{-1}, then d_{γ_∞}(x, y) < λ r^β δ^{-β} }.

The following result follows from [127, (7.51)].

Proposition 5.3.6. There exist constants 0 < c, C < ∞ such that: for any δ, r ∈ (0, 1] and λ ≥ 1,

P( S*_δ(λ, r) ) ≥ 1 − C λ^3 e^{-cλ}.

Proof. For u ∈ Z^3, let B_u be the box of side length 3rδ^{-1} centred at u, and let X_u = |γ_∞|_{λrδ^{-1}} ∩ B_u| be the number of points in B_u hit by γ_∞|_{λrδ^{-1}}. We recall [127, equation (7.51)], which states that, for some absolute constants c, C and any u,

P( X_u ≥ λ r^β δ^{-β} ) ≤ C e^{-cλ}.

Cover the ball B(0, λrδ^{-1}) by boxes of side length rδ^{-1} centred at some {u_1, . . . , u_N} with N ≍ λ^3.
If some pair x, y violates the event S*_δ, and x is in the box of side length rδ^{-1} around u_i, then the segment γ_∞(x, y) lies in the thrice larger box around the same u_i, and so X_{u_i} ≥ λ r^β δ^{-β}. A union bound gives the conclusion.

Capacity and hittability

As noted in the introduction, one of the key differences from the two-dimensional case is that in three dimensions it is much easier for a random walk to avoid a LERW. The electrical capacity of a connected path of diameter r in Z^3 can be as large as Cr, but can also be as low as O(r/log r) (see Proposition 2.2.8 for the lower bound). However, the latter occurs only when the path is close to a smooth curve (see Subsection 2.2.2). The fractal nature of the scaling limit of LERWs suggests that a segment of a LERW has capacity comparable to its diameter, and consequently is likely to be hit by a second random walk starting nearby.

Let R ≥ 1 and r ∈ (0, 1), and let γ_∞^x be a LERW started at x and stopped when exiting B(0, δ^{-1}R). In this subsection, we give bounds on the probability that γ_∞^x is hit by a random walk started from a point y. The hitting bounds hold uniformly over the starting points y ∈ B := B(x, Rδ^{-1}) with dist(y, γ_∞^x) < rδ^{-1}. More precisely, denote by P^y_S the probability measure of a random walk S starting at y, independent of γ_∞^x. We say that γ_∞^x is η-hittable in B if the following event holds:

A_δ(x, R, r; η) := { ∀ y ∈ B(x, Rδ^{-1}) with dist(y, γ_∞^x) ≤ rδ^{-1}, P^y_S( S[0, ξ_S(B(y, r^{1/2} δ^{-1}))] ∩ γ_∞^x = ∅ ) ≤ r^η },

where ξ_S(B(y, r^{1/2} δ^{-1})) is the first time that S exits from B(y, r^{1/2} δ^{-1}). (Recall that dist(·, ·) stands for the Euclidean distance between a point and a set.) A local version of this event, restricted to starting points near x, is given by

G_δ(x, r; η) = { ∀ y ∈ B(x, rδ^{-1}), P^y_S( S[0, ξ_S(B(y, r^{1/2} δ^{-1}))] ∩ γ_∞^x = ∅ ) ≤ r^η }.

The next result, which was established in [148], indicates that γ_∞^x is η-hittable with high probability.

Proposition 5.3.7 (cf. [148, Lemma 3.2 and Lemma 3.3]).
There exists a constant η̂ ∈ (0, 1) such that the following is true. Given R ≥ 1, there exists a constant C such that: for all δ, r ∈ (0, 1),

P( A_δ(x, R, r; η̂) ) ≥ 1 − Cr.

In particular, P( G_δ(x, r; η̂) ) ≥ 1 − Cr.

In terms of capacity, Proposition 5.3.7 implies that, with high probability, the capacity of a connected segment of γ_∞ is comparable to its diameter.

We write P^{x,y}_S for the joint probability law of γ_∞^x and an independent simple random walk S starting at y. Working on the joint probability space, together with a change of variable, Proposition 5.3.7 implies the following result. This result is well-known and simply states that a simple random walk hits an ILERW almost surely.

Proposition 5.3.8 (cf. [133, Theorem 1.1 and Corollary 5.3]). For x, y ∈ Z^3 we have that, for all R > 0,

inf_{δ∈(0,1]} P^{x,y}_S( S[0, ξ_S(B(y, Rδ^{-1}))] ∩ γ_∞^x = ∅ ) = 0.

Hittability of sub-paths

The main result of this subsection, Proposition 5.3.9, is crucial for obtaining exponential tail bounds on the volume of balls in the UST in Section 5.5. It establishes that the path γ_∞ = LE(S[0,∞)), i.e. the infinite LERW, has hittable sections across a range of distances from its starting point.

For 1 ≤ λ < R, consider a sequence of boxes

D_i = D(iR/λ), i = 1, 2, . . . , λ,

where D(r) was defined in Subsection 5.3.1. Let t_i be the first time that γ_∞ exits D_i. We denote x_i = γ_∞(t_i), and write

σ_i = inf{ n ≥ t_i : γ_∞(n) ∉ B(x_i, R/2λ) }.

For each i = 1, 2, . . .
, λ, we define the event A_i by

A_i = { P^z( R^z[0, ξ_i] ∩ γ_∞[t_i, σ_i] ∩ D_{R/2λ}(x_i) ≠ ∅ ) ≥ c_0 for all z ∈ B(x_i, R/16λ) },   (5.21)

where: R^z is a simple random walk started at z, independent of γ_∞, with law denoted P^z; ξ_i is the first time that R^z exits B(x_i, R/2λ); and D_{R/2λ}(x_i) is the box centered on the infinite half line started at x_i that does not intersect D_i and is orthogonal to the face of D_i containing x_i, with centre at distance R/4λ from x_i and radius R/2000λ; see Figure 5.3.

Figure 5.3: On the event A_i, as defined at (5.21), the above configuration occurs with probability greater than c_0 for any z ∈ B(x_i, R/16λ).

Now, for fixed a ∈ (0, 1), we consider a sequence of subsets of the index set {1, 2, . . . , λ} as follows. Let q = ⌊λ^{1−a}/3⌋. For each j = 0, 1, . . . , q, define the subset I_j of the set {1, 2, . . . , λ} by setting

I_j := { ⌊2jλ^a + 1⌋, ⌊2jλ^a + 2⌋, . . . , ⌊(2j + 1)λ^a⌋ },   (5.22)

and the event F_j by

F_j = F_j^a = ∪_{i∈I_j} A_i,   (5.23)

i.e. F_j is the event that there exists at least one index i ∈ I_j such that γ_∞[t_i, σ_i] is a hittable set, in the sense that A_i holds. The next proposition shows that with high probability the event F_j holds for all j = 1, 2, . . . , q. We will prove it in the following subsection.

Proposition 5.3.9. Define the events F_j as in (5.23). There exists a universal constant c_1 > 0 such that

P( ∩_{j=1}^{q} F_j ) ≥ 1 − λ^{1−a} e^{-c_1 λ^a}.   (5.24)

Remark. (i) The reason that we decompose the ILERW γ_∞ using the sequence of random times t_i as in the above definition is that we need to control the future path γ_∞[t_i, σ_i] uniformly in the given past path γ_∞[0, t_i] via [155, Proposition 6.1].

(ii) We expect that each γ_∞[t_i, σ_i] is a hittable set not only with positive probability, as in Proposition 5.3.16 below, but also with high probability in the sense of [148, Theorem 3.1].
However, since Proposition 5.3.9 is enough for us, we choose not to pursue this point further here.

5.3.5 Loop-erased random walks on polyhedrons

We defined that a loop-erased random walk on a domain D̂ ⊂ Z^3 starts at an interior vertex of D̂ and ends at its first hitting time of the boundary ∂D̂. As we have discussed above, the geometry of the domain D̂ affects the path properties of loop-erased random walks on it. In this subsection we will see that the results in [107, 127, 148] hold for a collection of scaled polyhedrons, which we define below. Similarly to Subsection 5.3.4, and under the assumption that the polyhedrons are scaled by a large parameter, the proofs in the aforementioned papers carry over without major modifications to our setting. For clarity, we comment on the differences between the work in [127, 148] and this subsection.

A dyadic polyhedron in R^3 is a connected set P of the form

P = ∪_{j=1}^{m} C_j,

where each C_j ⊂ R^3 is a closed cube of the form [a_1, b_1] × [a_2, b_2] × [a_3, b_3] with a_i, b_i ∈ Z (cf. (5.83), where we scale the lattice instead of the polyhedron). We say that a polyhedron P is bounded by R if P ⊂ B(R). Let us assume that 0 ∈ P and write

2^n P := {z ∈ R^3 : 2^{-n} z ∈ P}

for the 2^n-expansion of the polyhedron P. In this subsection we restrict our scaling to powers of 2, and note that 2^n P is a dyadic polyhedron as well. If P is bounded by R, then B(0, 1) ⊂ 2^n P ⊂ B(0, 2^n R) for all n ≥ 1.

Let S be a simple random walk starting at 0 and let ξ_{∂P} be the exit time of the random walk from the polyhedron. In this section we study the path properties of the loop-erased random walk

γ_n^P = LE( S[0, ξ_{∂2^n P}] ).   (5.25)

Note that the index n indicates a 2^n-expansion of P (cf. (5.84)).

We say that γ_n^P is η-hittable if the following event holds:

A_n^P(r; η) := { ∀ y ∈ 2^n P with dist(y, γ_n^P) ≤ r 2^n, P^y_S( S[0, ξ_S(B(y, r^{1/2} 2^n))] ∩ γ_n^P = ∅ ) ≤ r^η },

where ξ_S(B(y, r^{1/2} 2^n)) is the first time that S exits from B(y, r^{1/2} 2^n).

Proposition 5.3.10 (cf. Proposition 5.3.7).
Fix R ≥ 1, let P be a dyadic polyhedron containing 0 and bounded by R, and let γ_n^P be the loop-erased random walk in (5.25). There exists a constant η̂ ∈ (0, 1) such that there exist a constant C (depending on R) and N ≥ 1 for which the following is true: for all r ∈ (0, 1) and n ≥ N,

P( A_n^P(r; η̂) ) ≥ 1 − Cr.

Proposition 5.3.10 follows from [148, Lemma 3.2] and [148, Lemma 3.3], using the argument for the proof of [148, Theorem 3.1]. The argument for Proposition 5.3.10 considers two cases, depending on the starting point of the simple random walk S(0) = y. For some ε > 0, either y ∈ B(0, ε2^n) or y ∈ 2^n P \ B(0, ε2^n). In the first case we apply [148, Lemma 3.2], using that γ_n^P is a "large" path when n is large enough. If y ∈ 2^n P \ B(0, ε2^n), we then consider a covering of 2^n P by a collection of balls {B(v_i, ε2^n)}_{1≤i≤L}, with v_1, . . . , v_L ∈ 2^n P \ B(0, ε2^n) and L ≤ 10R^3 ε^{-6}. We then use [148, Lemma 3.3] on each of these balls, and a union bound gives the desired result.

Recall the definition of an (r, R)-quasi-loop from Subsection 5.3.4, and that QL(r, R; γ) denotes the set of (r, R)-quasi-loops of γ. Proposition 5.3.3 indicates that the ILERW does not have quasi-loops with high probability. A similar statement holds for a polyhedral domain. The proof makes use of Proposition 5.3.10, with modifications to the stopping times and to the covering of the domain (as in Proposition 5.3.10). Indeed, the proof of [148, Theorem 6.1] is divided into three cases. If the LERW has a quasi-loop at a vertex v, then either v is close to the starting point of the LERW, or v is close to the boundary, or v is in an intermediate region. The probability of the first two cases is bounded by escape probabilities for random walks. We can use the same bounds as in [148, Theorem 6.1] as long as the scale n is large enough (as we assume in Proposition 5.3.11). The bound for the third case follows from a union bound over a covering of the domain. We can use this argument because P has a regular boundary.

Proposition 5.3.11 (cf.
Proposition 5.3.3). Fix R ≥ 1, let P be a dyadic polyhedron containing 0 and bounded by R, and let γ_n^P be the loop-erased random walk in (5.25). There exist constants C, M < ∞, N ≥ 1 and η̃ > 0 such that for any ε ∈ (0, 1) and n ≥ N,

P( QL(ε^M 2^n, √ε 2^n; γ_n^P) ≠ ∅ ) ≤ C ε^{η̃}.

Since Propositions 5.3.10 and 5.3.11 hold for scaled dyadic polyhedrons, we can follow the argument in [127] leading to the proof of the scaling limit of the LERW. From this argument we obtain control of the paths and the scaling limit for the LERW γ_n^P with β-parameterization. We finish this section by stating these three results.

For a LERW γ_n^P, n ≥ 1 and λ ≥ 1, the path γ_n^P is λ-equicontinuous (with exponents 0 < b_1, b_2 < ∞) if

E_n^P(λ, R) := { ∀ x, y ∈ γ_n^P, if d_γ(x, y) ≤ λ^{-b_1} 2^{βn}, then |x − y| < λ^{-b_2} 2^n }.

The partial converse is the event

S_n^P(λ, r) := { ∀ x, y ∈ γ_n^P, if d^S_γ(x, y) < r 2^n, then d_γ(x, y) < λ r^β 2^{βn} }.

Proposition 5.3.12 (cf. Proposition 5.3.5). There exist constants 0 < b_1, b_2 < ∞ such that the following is true. Given R ≥ 1, there exist constants 0 < C < ∞ and N ≥ 1 such that: for all λ ≥ 1 and n ≥ N,

P( E_n^P(λ, R) ) ≥ 1 − C λ^{-b_2}.

Proposition 5.3.13 (cf. Proposition 5.3.6). There exist constants 0 < c, C < ∞ and N ≥ 1 such that: for any r ∈ (0, 1], λ ≥ 1 and n ≥ N,

P( S_n^P(λ, r) ) ≥ 1 − C λ^3 e^{-cλ}.

Proposition 5.3.14 (cf. [127, Theorem 1.4]). Let P be a dyadic polyhedron containing 0 and bounded by R, and let γ_n^P be the loop-erased random walk in (5.25). The β-parameterization of this loop-erased random walk is the curve given by

γ̄_n^P(t) = γ_n^P(2^{βn} t), t ∈ [0, 2^{-βn} len(γ_n^P)].

Then the law of γ̄_n^P converges as n → ∞ with respect to the metric space (C_f, ψ).

5.3.6 Proof of Proposition 5.3.9

In this subsection we show that sub-paths of the ILERW are hittable in the sense required for the event (5.21) to hold; see Proposition 5.3.16 below. The latter result leads to the proof of Proposition 5.3.9.
With this objective in mind, we first study a conditioned LERW. We begin with a list of notation.

• Recall that D(R) is the cube of radius R centered at 0, as defined in Subsection 5.3.1.

• Take positive numbers m, n. Let x ∈ ∂D(m) be a point lying in a "face" of D(m) (we denote the face containing x by F). Write ℓ for the infinite half line started at x which lies in D(m)^c and is orthogonal to F. We let y be the unique point which lies in ℓ and satisfies |x − y| = n/2. We set D_n(x) := D(y, n/1000) for the box centered at y with side length n/500. (Cf. the definition of D_{R/2λ}(x_i) above.)

• Suppose that m, n, x, D_n(x) are as above. Take K ⊆ D(m) ∪ ∂D(m). Let X be a random walk started at x and conditioned on the event that X[1,∞) ∩ K = ∅. We set η = LE(X[0,∞)) for the loop-erasure of X, and write σ for the first time that η exits B(x, n). Finally, we denote the number of points lying in η[0, σ] ∩ D_n(x) by J_{m,n,x}^K. This is an analogue of [155, Definition 8.7].

• Suppose that X is the conditioned random walk as above. We write G_X(·, ·) for the Green's function of X.

This setup is illustrated in Figure 5.4 (cf. [155, Figure 3]).

We will give one- and two-point function estimates for η in the following proposition.

Proposition 5.3.15. Suppose that m, n, x, K, X, η, σ are as above. There exists a universal constant c such that for all z, w ∈ D_n(x) with z ≠ w,

P( z ∈ η[0, σ] ) ≥ c n^{-3+β},   (5.26)
P( z, w ∈ η[0, σ] ) ≤ c^{-1} n^{-3+β} |z − w|^{-3+β}.   (5.27)

Proof. The inequality (5.26) follows from [155, (8.29)] and [128, Corollary 1.3]. So, it remains to prove (5.27). We first recall [155, Proposition 8.1], the setting of which is as follows. Take z_1, z_2 ∈ D_n(x) with z_1 ≠ z_2. We set z_0 = x, and write l = |z_1 − z_2|. Note that 1 ≤ l ≤ n/100. For i = 0, 1, 2, we let X^i be independent versions of X with X^i(0) = z_i. We write σ^i_w for the first time that X^i hits w. For i = 0, 1, let Z^i be X^i conditioned on the event {σ^i_{z_{i+1}} < ∞}, and also let Z^2 = X^2. Also for i = 0, 1, write u(i) for the last time that Z^i passes through z_{i+1}, and set u(2) = ∞.
Define the event F^η_{z_1,z_2} by

F^η_{z_1,z_2} = { there exist 0 < t_1 < t_2 < ∞ such that η(t_1) = z_1 and η(t_2) = z_2 },

Figure 5.4: Notation used in the proof of Proposition 5.3.9.

and non-intersection events F_1 and F_2 by

F_1 = { LE(Z^0[0, u(0)]) ∩ ( Z^1[1, u(1)] ∪ Z^2[1,∞) ) = ∅ },
F_2 = { LE(Z^1[0, u(1)]) ∩ Z^2[1,∞) = ∅ }.

Then [155, Proposition 8.1] shows that

P( F^η_{z_1,z_2} ) = G_X(z_0, z_1) G_X(z_1, z_2) P( F_1 ∩ F_2 ).

Now, in the proof of [155, Lemma 8.9], it is shown that

G_X(z_0, z_1) ≤ C n^{-1}, G_X(z_1, z_2) ≤ C l^{-1},

and so it suffices to estimate P(F_1 ∩ F_2). To do this, we consider four balls

B_1 = B(z_1, l/8), B_2 = B(z_2, l/8), B'_1 = B(z_1, 2l), B''_1 = B(z_1, n/16).

Note that B_1 ∪ B_2 ⊂ B'_1 ⊂ B''_1 and B_1 ∩ B_2 = ∅. For i = 0, 1, let Y^i = (Z^i[0, u(i)])^R be the time reversal of Z^i[0, u(i)], where for a path λ = [λ(0), λ(1), . . . , λ(u)] we write (λ)^R = [λ(u), λ(u − 1), . . . , λ(0)] for its time reversal. By the time reversibility of the LERW (see [113, Lemma 7.2.1]), we see that P(F_1 ∩ F_2) = P(F'_1 ∩ F'_2), where the events F'_1 and F'_2 are defined by

F'_1 = { LE(Y^0[0, σ(0)]) ∩ ( Y^1[0, σ(1) − 1] ∪ Z^2[1,∞) ) = ∅ },
F'_2 = { LE(Y^1[0, σ(1)]) ∩ Z^2[1,∞) = ∅ }.

Here σ(i) is the first time that Y^i hits z_i. We define several random times as follows:

• s_0 is the first time that LE(Y^0[0, σ(0)]) exits B_1;
• s_2 is the first time that LE(Y^0[0, σ(0)]) exits B''_1;
• s_1 is the last time up to s_2 that LE(Y^0[0, σ(0)]) exits B'_1;
• t_0 is the first time that LE(Y^1[0, σ(1)]) exits B_2;
• t_1 is the last time up to σ(1) that Y^1[0, σ(1)] hits ∂B_1;
• u_0 is the first time that Z^2 exits B_2;
• u_1 is the first time that Z^2 exits B''_1.

See Figure 5.5 for an illustration showing these random times.

Figure 5.5: The random times s_0, s_1, s_2, t_0, u_0, u_1.

If we write γ_i = LE(Y^i[0, σ(i)]) for i = 0, 1, we see that P(F'_1 ∩ F'_2) ≤ P(H_1 ∩ H_2 ∩ H_3), where the events H_1, H_2, H_3 are defined by

H_1 = { γ_0[0, s_0] ∩ Y^1[t_1, σ(1) − 1] = ∅ },
H_2 = { γ_1[0, t_0] ∩ Z^2[1, u_0] = ∅ },
H_3 = { γ_0[s_1, s_2] ∩ Z^2[u_0, u_1] = ∅ }.

Since dist(D(m), B''_1) ≥ n/4, it follows from the discrete Harnack principle (see [113, Theorem 1.7.6], for example) that the distribution of Z^2[0, u_1] is comparable to that of R^2[0, u'_1], assuming R^2(0) = z_2, where R^2 is a simple random walk and u'_1 is the first time that R^2 exits B''_1. More precisely, there exist universal constants c, C ∈ (0,∞) such that for any path λ,

c P( R^2[0, u'_1] = λ ) ≤ P( Z^2[0, u_1] = λ ) ≤ C P( R^2[0, u'_1] = λ ).

Also, since γ_0[s_1, s_2] ⊆ (B'_1)^c, using the Harnack principle again, we see that

P(H_1 ∩ H_2 ∩ H_3) ≍ E_{Y^0,Y^1}{ 1_{H_1} P^{z_2}_{R^2}( γ_1[0, t_0] ∩ R^2[1, u'_0] = ∅ ) · P^{z_1}_{R^2}( γ_0[s_1, s_2] ∩ R^2[0, u'_1] = ∅ ) },   (5.28)

where u'_0 is the first time that R^2 exits B_2, and E_{Y^0,Y^1} stands for the expectation with respect to the probability law of (Y^0, Y^1).

Another application of the Harnack principle tells us that γ_1[0, t_0] and Y^1[t_1, σ(1) − 1] are "independent up to constant" (see [127, Lemma 4.3]). Namely, there exist universal constants c, C ∈ (0,∞) such that for any paths λ_1, λ_2,

c P( γ_1[0, t_0] = λ_1 ) P( Y^1[t_1, σ(1) − 1] = λ_2 )
≤ P( γ_1[0, t_0] = λ_1, Y^1[t_1, σ(1) − 1] = λ_2 )
≤ C P( γ_1[0, t_0] = λ_1 ) P( Y^1[t_1, σ(1) − 1] = λ_2 ).

This implies that, given Y^0, 1_{H_1} and P^{z_2}_{R^2}( γ_1[0, t_0] ∩ R^2[1, u'_0] = ∅ ) are independent up to constant. Also, it is proved in [136, Propositions 4.2 and 4.4] that the distribution of γ_1[0, t_0] is comparable with that of the ILERW started at z_2 until it exits B_2. Using the discrete Harnack principle again, we see that the distribution of the time reversal of Y^1[t_1, σ(1) − 1] coincides with that of the SRW started at z_1 until it exits B_1.
Therefore, if we write R^1 and R^3 for independent SRWs, the right hand side of (5.28) is comparable to

E_{Y^0}{ P^{z_1}_{R^2}( γ_0[s_1, s_2] ∩ R^2[0, u'_1] = ∅ ) P^{z_1}_{R^1}( γ_0[0, s_0] ∩ R^1[1, σ'_1] = ∅ ) } × P^{z_2,z_2}_{R^2,R^3}( R^2[1, u'_0] ∩ LE(R^3[0,∞))[0, t'_3] = ∅ ),   (5.29)

where σ'_1 is the first time that R^1 exits B_1, and t'_3 is the first time that LE(R^3[0,∞)) exits B_2. Moreover, it follows from [155, Proposition 6.7] and [128, Corollary 1.3] that

P^{z_2,z_2}_{R^2,R^3}( R^2[1, u'_0] ∩ LE(R^3[0,∞))[0, t'_3] = ∅ ) ≍ l^{-2+β}.

Finally, let R^0 be a SRW started at z_1 and γ'_0 = LE(R^0[0,∞)) be the ILERW. Similarly to above, define:

• s'_0 to be the first time that γ'_0 exits B_1;
• s'_2 to be the first time that γ'_0 exits B''_1;
• s'_1 to be the last time up to s'_2 that γ'_0 exits B'_1.

We then have from [136, Propositions 4.2 and 4.4] that the distribution of γ_0[0, s_2] is comparable with that of γ'_0[0, s'_2]. Moreover, [136, Proposition 4.6] ensures that γ'_0[0, s'_0] and γ'_0[s'_1, s'_2] are independent up to a constant. Therefore the expectation with respect to Y^0 in (5.29) is comparable to

P^{z_1,z_1}_{R^0,R^1}( γ'_0[0, s'_0] ∩ R^1[1, σ'_1] = ∅ ) P^{z_1,z_1}_{R^0,R^2}( γ'_0[s'_1, s'_2] ∩ R^2[0, u'_1] = ∅ ) ≍ Es(l) Es(l, n),   (5.30)

where we use the notation Es defined in [155]. Finally, by [128, Corollary 1.3], it holds that the right hand side of (5.30) is comparable to n^{-2+β}. This gives (5.27) and finishes the proof.

Definition 5.3.1. Suppose that m, n, x, K, X, η, σ are as above. For each z ∈ B(x, n/8), let R^z be a SRW on Z^3 started at z, independent of X. Write ξ for the first time that R^z exits B(x, n), and let

N_z = |R^z[0, ξ] ∩ η[0, σ] ∩ D_n(x)|

be the number of points in D_n(x) hit by both R^z[0, ξ] and η[0, σ]. Furthermore, define the (random) function g(z) by setting

g(z) := P^z( N_z > 0 ) = P^z( R^z[0, ξ] ∩ (η[0, σ] ∩ D_n(x)) ≠ ∅ ),

where P^z stands for the probability law of R^z.
Note that g(z) is a measurable function of η[0, σ], and that, given η[0, σ], g(·) is a discrete harmonic function on D_n(x)^c.

The next proposition says that, with positive probability (for η), g(z) is bounded below by some universal positive constant for all z ∈ B(x, n/8).

Proposition 5.3.16. Suppose that the function g(z) is defined as in Definition 5.3.1. There exists a universal constant c_0 > 0 such that

P( g(z) ≥ c_0 for all z ∈ B(x, n/8) ) ≥ c_0.   (5.31)

Proof. We claim that it suffices to show that

P( g(x) ≥ c_0 ) ≥ c_0   (5.32)

for some c_0 > 0. The reason for this is as follows. Suppose that (5.32) is true and the event {g(x) ≥ c_0} occurs. Since dist(B(x, n/8), D_n(x)) ≥ n/4, using the Harnack principle, there exists a universal constant c_1 > 0 such that g(z) ≥ c_1 g(x) ≥ c_1 c_0 for all z ∈ B(x, n/8). Thus we have P( g(z) ≥ c_0 c_1 for all z ∈ B(x, n/8) ) ≥ c_0, which gives (5.31).

We will prove (5.32). Recall the definition of N_z from Definition 5.3.1. By (5.26), we see that

E(N_x) = Σ_{w ∈ D_n(x)} P( w ∈ η[0, σ] ) P^x( w ∈ R^x[0, ξ] ) ≥ c n^{-1+β}

for some c > 0. On the other hand, by (5.27), we have

E(N_x^2) = Σ_{w_1, w_2 ∈ D_n(x)} P( w_1, w_2 ∈ η[0, σ] ) P^x( w_1, w_2 ∈ R^x[0, ξ] )
≤ C n^{-4+β} Σ_{w_1, w_2 ∈ D_n(x)} |w_1 − w_2|^{-4+β}
≤ C n^{-4+β} n^{2+β} = C n^{-2+2β}.

This gives E(N_x^2) ≤ C {E(N_x)}^2. Therefore, the second moment method tells us that E(g(x)) ≥ c_2 for some universal constant c_2 > 0. This implies P( g(x) ≥ c_2/2 ) ≥ c_2/3, which gives (5.32).

Proof of Proposition 5.3.9. We will prove that, for each j = 1, 2, . . . , q,

P( F_j^c ) ≤ (1 − c_0)^{λ^a},   (5.33)

where c_0 is the constant of Proposition 5.3.16. Since q ≤ λ^{1−a}, the inequality (5.33) gives the desired inequality (5.24). Take j ∈ {1, 2, . . . , q}. Suppose that F_j^c occurs. This implies that for every i ∈ I_j, the event A_i does not occur. Setting l = 2jλ^a, we need to estimate

P( ∩_{i=l+1}^{l+λ^a} A_i^c ) = P( A_{l+λ^a}^c | ∩_{i=l+1}^{l+λ^a−1} A_i^c ) P( ∩_{i=l+1}^{l+λ^a−1} A_i^c ).

Note that the event ∩_{i=l+1}^{l+λ^a−1} A_i^c is measurable with respect to γ[0, t_{l+λ^a}], while the event A_{l+λ^a}^c is measurable with respect to γ[t_{l+λ^a}, σ_{l+λ^a}].
Therefore, using the domain Markov property of $\gamma$ (see [113, Proposition 7.3.1]), Proposition 5.3.16 tells us that
\[
P\Big( A^c_{l+\lambda^a} \,\Big|\, \bigcap_{i=l+1}^{l+\lambda^a-1} A_i^c \Big) \le 1 - c_0,
\]
where we apply Proposition 5.3.16 with $m = (l+\lambda^a)\frac{R}{\lambda}$, $n = \frac{R}{2\lambda}$, $x = \gamma(t_{l+\lambda^a})$ and $K = \gamma[0, t_{l+\lambda^a}]$. Thus we have that
\[
P\Big( \bigcap_{i=l+1}^{l+\lambda^a} A_i^c \Big) \le (1 - c_0)\, P\Big( \bigcap_{i=l+1}^{l+\lambda^a-1} A_i^c \Big).
\]
Repeating this procedure $\lambda^a$ times, we obtain (5.33), and thereby finish the proof.

5.4 Checking the assumptions sufficient for tightness

The aim of this section is to check Assumptions 1, 2 and 3, as set out in Section 5.2. In what follows, we let $\gamma_{\mathcal{U}}(x,y)$ be the unique injective path in $\mathcal{U}$ between $x$ and $y$. In particular, $\gamma_{\mathcal{U}}(x,y)(k)$ is the location at the $k$th step of the path. Note that $\gamma_{\mathcal{U}}(x,y)(0) = x$ and $\gamma_{\mathcal{U}}(x,y)(d_{\mathcal{U}}(x,y)) = y$. Given a subset $A$ of $\mathbb{Z}^3$, we define an $\alpha$-net of $A$ as a minimal set $D$ of lattice points such that $A \subseteq \bigcup_{z \in D} B(z, \alpha)$.

5.4.1 Assumption 1

The first assumption follows from the following proposition.

Proposition 5.4.1. For every $R \in (0,\infty)$, there exist a universal constant $\lambda_0 > 1$ and constants $c_1, c_2 \in (0,\infty)$ depending only on $R$ such that: for all $\lambda \ge \lambda_0$ and $\delta \in (0,1)$ small enough,
\[
P\big( B_{\mathcal{U}}(0, R\delta^{-\beta}) \subseteq B(\lambda\delta^{-1}) \big) \ge 1 - c_1 \lambda^{-c_2}.
\]

Proof. Fix $R \in (0,\infty)$. We may assume that $\delta > 0$ is sufficiently small so that
\[
\frac{\delta^{-1} - 2}{\log_2 \delta^{-1} + 2} \ge 10, \tag{5.34}
\]
and also that $\lambda \ge 2$. For each $k \ge 1$, let $\varepsilon_k = \lambda^{-1} 2^{-k}$, $\eta_k = (2k)^{-1}$ and
\[
A_k = B(\delta^{-1}) \setminus B\big( (1 - \eta_k)\delta^{-1} \big).
\]
Write $k_0$ for the smallest integer satisfying $\delta^{-1}\varepsilon_{k_0} < 1$. We remark that the condition at (5.34) ensures that $(1 - \eta_{k_0})\delta^{-1} \le \delta^{-1} - 10$. Thus the inner boundary $\partial_i B(0, \delta^{-1})$ is contained in $A_{k_0}$. (We defined the inner boundary in Subsection 5.3.1.)

Let $D_k$ be a $\delta^{-1}\varepsilon_k$-net of $A_k$. The minimality assumption implies that $|D_k| \le C \varepsilon_k^{-3}$. Since $\delta^{-1}\varepsilon_{k_0} < 1$ and $\partial_i B(0,\delta^{-1}) \subseteq A_{k_0}$, it follows that $\partial_i B(0,\delta^{-1}) \subseteq D_{k_0}$.

Now, to construct $\mathcal{U}$, we perform Wilson's algorithm rooted at infinity (see [29, 169]) as follows:
• Consider the infinite LERW $\gamma_\infty = \mathrm{LE}(S[0,\infty))$, where $S = (S(n))_{n \ge 0}$ is a SRW on $\mathbb{Z}^3$ started at the origin.
We think of $\mathcal{U}_0 = \gamma_\infty$ as the "root" in this algorithm.
• Consider a SRW started at a point in $D_1$, and run it until it hits $\mathcal{U}_0$; we add its loop erasure to $\mathcal{U}_0$, and denote the union of the two by $\mathcal{U}_1^1$. We next consider a SRW from another point in $D_1$, run until it hits $\mathcal{U}_1^1$; let $\mathcal{U}_1^2$ be the union of $\mathcal{U}_1^1$ and the loop erasure of the second SRW. We continue this procedure until all points in $D_1$ are in the tree, regarding each loop-erased random walk as a branch of the tree. Write $\mathcal{U}_1$ for the output random tree.
• We now repeat the above procedure for $D_2$. Namely, we think of $\mathcal{U}_1$ as a root and add the loop erasure of a SRW from each point in $D_2$. Let $\mathcal{U}_2$ be the output tree. We continue inductively to define $\mathcal{U}_3, \mathcal{U}_4, \dots, \mathcal{U}_{k_0}$.
• Finally, we perform Wilson's algorithm for all points in $\mathbb{Z}^3 \setminus \mathcal{U}_{k_0}$ to obtain $\mathcal{U}$.
Note that, by construction, $\mathcal{U}_k \subseteq \mathcal{U}_{k+1}$, and also $\partial_i B(0,\delta^{-1}) \subseteq \mathcal{U}_{k_0}$.

We proceed with an estimate on the number of steps that $\gamma_\infty$ takes to exit an extrinsic ball. Specifically, we define the event $F := \{\xi \ge \lambda^{-9}\delta^{-\beta}\}$, where $\xi$ is the first time that $\gamma_\infty$ exits $B(0, \lambda^{-4}\delta^{-1})$. From Proposition 5.3.4 we have that
\[
P(F) \ge 1 - C e^{-c\sqrt{\lambda}}
\]
for some universal constants $c, C \in (0,\infty)$.

Next, for each $x \in \mathbb{Z}^3$, let $t_x = \inf\{k \ge 0 : \gamma_{\mathcal{U}}(x,0)(k) \in \mathcal{U}_0\}$ be the first time that $\gamma_{\mathcal{U}}(x,0)$ hits $\mathcal{U}_0$. We write $\gamma_{\mathcal{U}}(x,\mathcal{U}_0) = \gamma_{\mathcal{U}}(x,0)[0,t_x]$ for the path in $\mathcal{U}$ connecting $x$ and $\mathcal{U}_0$. We remark that $t_x = 0$ and $\gamma_{\mathcal{U}}(x,\mathcal{U}_0) = \{x\}$ when $x \in \mathcal{U}_0$. We consider the event $G$ defined by
\[
G = \big\{ \gamma_{\mathcal{U}}(x,\mathcal{U}_0) \cap B(0, \lambda^{-4}\delta^{-1}) = \emptyset \text{ for all } x \in D_1 \big\}.
\]
Suppose that the event $G$ does not occur, so that there exists an $x \in D_1$ such that $\gamma_{\mathcal{U}}(x,\mathcal{U}_0)$ hits $B(0, \lambda^{-4}\delta^{-1})$. This implies that, in Wilson's algorithm as described above, the SRW $R$ started at $x$ enters $B(0, \lambda^{-4}\delta^{-1})$ before it hits $\gamma_\infty$. Since $\delta^{-1}/2 \le |x| \le \delta^{-1}$, it follows from [113, Proposition 1.5.10] that
\[
P^x_R\big( R[0,\infty) \cap B(0, \lambda^{-4}\delta^{-1}) \neq \emptyset \big) \le C\lambda^{-4}
\]
for some universal constant $C < \infty$, where $R$ is a SRW started at $x$, with $P^x_R$ denoting the law of the latter process.
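In finite volume, the branch-by-branch construction described above is the standard form of Wilson's algorithm. The sketch below samples a uniform spanning tree of a small grid this way; overwriting `succ` on each revisit performs the loop erasure on the fly. All names and the toy graph are illustrative, not part of the thesis.

```python
import random

def wilsons_ust(vertices, neighbors, root):
    """Sample a uniform spanning tree via Wilson's algorithm: from each
    vertex not yet in the tree, run a random walk until it hits the tree
    built so far, and attach the loop erasure of that walk as a branch."""
    in_tree = {root}
    parent = {}
    for start in vertices:
        if start in in_tree:
            continue
        # random walk until it meets the tree; overwriting succ[v] on a
        # revisit is exactly chronological loop erasure
        succ = {}
        v = start
        while v not in in_tree:
            succ[v] = random.choice(neighbors(v))
            v = succ[v]
        # retrace the loop-erased path and graft it onto the tree
        v = start
        while v not in in_tree:
            parent[v] = succ[v]
            in_tree.add(v)
            v = succ[v]
    return parent

# toy example: UST of a 4x4 grid, rooted at the origin
N = 4
def nbrs(v):
    x, y = v
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in cand if 0 <= a < N and 0 <= b < N]

verts = [(x, y) for x in range(N) for y in range(N)]
tree = wilsons_ust(verts, nbrs, root=(0, 0))
assert len(tree) == N * N - 1   # a spanning tree has |V| - 1 edges
```

The same scheme, with $\gamma_\infty$ as the root and walks stopped on hitting the current tree, is what the bullet points above carry out on $\mathbb{Z}^3$.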
Taking the sum over x ∈ D1 (recallthat the number of points in D1 is comparable to λ3), we find thatP(G) ≥ 1− Cλ−1.To complete the proof, we will consider several “good” events that ensureγ∞ ∪ γU (x,U0) with x ∈ Dk (k = 1, 2, . . . , k0) is a “hittable” set in the sensethat if we consider another independent SRW R whose starting point is closeto γ∞ ∪ γU (x,U0), then, with high probability for γ∞ ∪ γU (x,U0), it is likelythat R intersects γ∞ ∪ γU (x,U0) quickly. Such hittability of LERW pathswas studied in [148, Theorem 3.1]. With this in mind, for k ≥ 1 and ζ > 0,we define the event H(k, ζ) by settingH(k, ζ) ={∀x ∈ Dk, y ∈ B(x, εkδ−1) :P yR(R[0, TR(x,√εkδ−1)] ∩ (γ∞ ∪ γU (x,U0)) = ∅) ≤ εζk},(5.35)where R is a SRW, independent of γ∞, P yR stands for its law assumingthat R(0) = y, and TR(x, r) is the first time that R exits B(x, r). (For123convenience, we omit the dependence of H(k, ζ) on δ.) Note that the eventH(k, ζ) roughly says that when R(0) is close to γ∞∪γU (x,U0), it is likely forR to intersect with γ∞ ∪ γU (x,U0) before it travels very far, see Figure 5.6.From [148, Lemma 3.2], we see that the probability of the event H(k, ζ) isgreater than 1−Cε2k if we take ζ sufficiently small. The reason for this is asfollows. Suppose that the event H(k, ζ) does not occur, which means thatthere exist x ∈ Dk and y ∈ B(x, εkδ−1) such that the probability consideredin (5.35) is greater than εζk. The existence of those two points x ∈ Dk andy ∈ B(x, εkδ−1) implies the occurrence of the event I(x, k, ζ), as defined byI(x, k, ζ) ={∃y ∈ B (x, εkδ−1) such thatP yR(R[0, TR(x,√εkδ−1)] ∩ γx∞ = ∅) > εζk},where we write γx∞ for the unique infinite path started at x in U (notice thatγ0∞ = γ∞). Namely, we haveH(k, ζ)c ⊆∪x∈DkI(x, k, ζ).We mention that the distribution of γx∞ coincides with that of the infiniteLERW started at x. 
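The hittability events $H(k,\zeta)$ assert that a SRW started near a LERW branch meets it before travelling far. On a finite piece of $\mathbb{Z}^3$ such non-hitting probabilities are easy to estimate by Monte Carlo; the sketch below uses a straight segment as a crude stand-in for a branch, and every parameter is illustrative rather than taken from the proof.

```python
import random

def srw_step(rng, v):
    """One step of the simple random walk on Z^3."""
    x, y, z = v
    dx, dy, dz = rng.choice([(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                             (0, -1, 0), (0, 0, 1), (0, 0, -1)])
    return (x + dx, y + dy, z + dz)

def avoid_probability(path, start, radius, trials=2000, seed=0):
    """Monte Carlo estimate of the probability that a SRW started at
    `start` exits the sup-norm ball B(start, radius) without hitting
    the set `path` (the analogue of the probability in H(k, zeta))."""
    rng = random.Random(seed)
    pathset = set(path)
    avoided = 0
    for _ in range(trials):
        v = start
        hit = False
        while max(abs(v[i] - start[i]) for i in range(3)) < radius:
            if v in pathset:
                hit = True
                break
            v = srw_step(rng, v)
        avoided += not hit
    return avoided / trials

# a straight segment standing in for a branch; start one lattice step away
branch = [(k, 0, 0) for k in range(-20, 21)]
p = avoid_probability(branch, start=(0, 1, 0), radius=10)
assert 0.0 < p < 1.0   # some walks escape, some are absorbed
```

Actual LERW branches are more hittable than a straight line (their dimension is $\beta > 1$), which is the content of [148, Theorem 3.1].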
With this in mind, applying [148, Lemma 3.2] withs = εkδ−1, t =√εkδ−1 and K = 10, it follows that there exist universalconstants ζ1 > 0 and C < ∞ such that for all k ≥ 1, λ ≥ 2, δ ∈ (0, 1) andx ∈ Dk,P (I(x, k, ζ1)) ≤ Cε5k.Since the number of points in Dk is comparable to ε−3k , we see thatP (H(k, ζ1)) ≥ 1− Cε2k, (5.36)as desired.Set A′1 := F ∩ G ∩ H(1, ζ1). For λ sufficiently large, the event A′1 isnon-empty. We have already proved that P(A′1) ≥ 1 − Cλ−1. Moreover,we note that on the event A′1, we have dU (0, y) ≥ λ−9δ−β for all y ∈ γ∞ ∩B(0, δ−1/3)c. We also have on A′1 that the event G holds, and then the124branch γU (x,U0) does not intersect with B(0, λ−4δ−1)for any x ∈ D1. Sincethe event F ⊂ A′1, we get that dU (0, y) ≥ λ−9δ−β for all y ∈ γU (x,U0) withx ∈ D1. Recall that U1 is the union of γ∞ and all branches γU (x,U0) withx ∈ D1. Conditioning U1 on the event A′1, we perform Wilson’s algorithmfor points in D2. It is convenient to think of U1 as deterministic sets inthis algorithm. Adopting this perspective, we take y ∈ D2 and considerthe SRW R started at y until it hits U1. Suppose that R hits B(0, δ−1/2)before it hits U1. Since the number of “√ε1δ−1-displacements” of R until ithits B(0, δ−1/2) is bigger than 10−1ε−1/21 , the hittability condition H(1, ζ1)ensures thatP yR(R hits B(0, δ−1/2)before it hits U1)≤ εcζ1√ε11 , (5.37)0y xAk∂B(δ−1)∂B((1− ηk)δ−1)Figure 5.6: On the event H(k, ζ), as defined at (5.35), the above con-figuration occurs with probability greater than 1 − εζk for anyx ∈ Dk, y ∈ B(x, εkδ−1). The circles shown are the boundariesof B(x, εkδ−1) and B(x,√εkδ−1). The non-bold paths representγ∞ ∪ γU (x,U0), and the bold path R[0, TR(x,√εkδ−1)].125for some universal constant c > 0. Define the event B2 byB2 ={∀y ∈ D2, γU (y,U1) ∩B(0, δ−1/2)= ∅},where γU (y,U1) denotes the branch between y and U1 in U . 
Taking the sum over $y \in D_2$, the conditional probability (recall that we condition $\mathcal{U}_1$ on the event $A_1'$) of the event $B_2$ satisfies
\[
P(B_2) \ge 1 - C \varepsilon_1^{-3} \varepsilon_1^{c\zeta_1 \varepsilon_1^{-1/2}}.
\]
Thus, letting $A_2' := A_1' \cap B_2 \cap H(2, \zeta_1)$, it follows that
\[
P(A_2') \ge 1 - C\varepsilon_1^2,
\]
where we also use that $\varepsilon_1$ is comparable to $\varepsilon_2$, and that the number of points in $D_2$ is comparable to $\varepsilon_2^{-3}$. We also note that $A_2'$ is non-empty for $\lambda$ large enough.

Conditioning $\mathcal{U}_2$ on the event $A_2'$, we can do the same thing as above for a SRW started at $z \in D_3$. Hence if we define the event $B_3$ by setting
\[
B_3 = \big\{ \forall z \in D_3,\ \gamma_{\mathcal{U}}(z, \mathcal{U}_2) \cap B(0, \delta^{-1}/2) = \emptyset \big\},
\]
then the conditional probability of the event $B_3$ satisfies
\[
P(B_3) \ge 1 - C \varepsilon_2^{-3} \varepsilon_2^{c\zeta_1 \varepsilon_2^{-1/2}},
\]
So, letting $A_3' := A_2' \cap B_3 \cap H(3, \zeta_1)$, it follows that
\[
P(A_3') \ge 1 - C\varepsilon_2^2,
\]
and we continue this until we reach the index $k_0$. In particular, if we define $B_k$ and $A_k'$ for each $k = 2, 3, \dots, k_0$ by
\[
B_k = \big\{ \forall z \in D_k,\ \gamma_{\mathcal{U}}(z, \mathcal{U}_{k-1}) \cap B(0, \delta^{-1}/2) = \emptyset \big\},
\]
and $A_k' := A_{k-1}' \cap B_k \cap H(k, \zeta_1)$, we can conclude that
\[
P(A_{k_0}') = P(A_1') \prod_{k=2}^{k_0} P(A_k' \mid A_{k-1}') \ge (1 - C\lambda^{-1}) \prod_{k=1}^{\infty} (1 - C\varepsilon_k^2) \ge 1 - C\lambda^{-1}. \tag{5.38}
\]
We take a universal constant $\lambda_0$ for which (5.38) is positive for all $\lambda \ge \lambda_0$. On the event $A_{k_0}'$, it is easy to see that:
• $d_{\mathcal{U}}(0,y) \ge \lambda^{-9}\delta^{-\beta}$ for all $y \in \gamma_\infty \cap B(0, \delta^{-1}/3)^c$;
• $d_{\mathcal{U}}(0,y) \ge \lambda^{-9}\delta^{-\beta}$ for all $y \in \gamma_{\mathcal{U}}(x, \mathcal{U}_0)$ and all $x \in D_k$ with $k = 1, 2, \dots, k_0$.
Since $\partial_i B(0, \delta^{-1}) \subseteq \mathcal{U}_{k_0}$, this implies that $d_{\mathcal{U}}(0,y) \ge \lambda^{-9}\delta^{-\beta}$ for all $y \in B(0,\delta^{-1})^c$ on the event $A_{k_0}'$. Therefore, it follows that
\[
P\big( B_{\mathcal{U}}(0, \lambda^{-9}\delta^{-\beta}) \subseteq B(0, \delta^{-1}) \big) \ge 1 - C\lambda^{-1}.
\]
Reparameterizing this, we have
\[
P\big( B_{\mathcal{U}}(0, R\delta^{-\beta}) \subseteq B(0, \lambda\delta^{-1}) \big) \ge 1 - C R^{1/9} \lambda^{-\beta/9}
\]
for some universal constant $C < \infty$. This finishes the proof.

5.4.2 Assumption 2

We will prove the following variation on Assumption 2. Given Proposition 5.4.1, it is easy to check that this implies Assumption 2. The restriction of balls to the relevant Euclidean ball will be useful in the proof of the scaling limit part of Theorem 5.1.1.

Assumption 4.
For every $\varepsilon, R \in (0,\infty)$, it holds that
\[
\lim_{\eta \to 0} \limsup_{\delta \to 0} P\Big( \inf_{x \in B(\delta^{-1}R)} \delta^3 \mu_{\mathcal{U}}\big( B_{\mathcal{U}}(x, \delta^{-\beta}\varepsilon) \cap B(\delta^{-1}R) \big) < \eta \Big) = 0.
\]

We begin with the following warm-up lemma, which gives a lower bound on the volume of $B_{\mathcal{U}}(0, \theta\delta^{-\beta})$ for each fixed $\theta \in (0,1]$.

Lemma 5.4.2. There exist constants $\lambda_0 > 1$ and $c_1, c_2, c_3$ such that: for all $\lambda \ge \lambda_0$, $\delta \in (0,1)$ and $\theta \in (0,1]$,
\[
P\big( \mu_{\mathcal{U}}\big( B_{\mathcal{U}}(0, \theta\delta^{-\beta}) \big) < \lambda^{-1}\delta^{-3} \big) \le c_1 \theta^{-c_2} \lambda^{-c_3}. \tag{5.39}
\]

Proof. We will first deal with the case $\theta = 1$, and then prove (5.39) for general $\theta \in (0,1]$ by reparameterizing. We may assume that $\lambda \ge 2$ and that $\delta > 0$ is sufficiently small. Let $\sigma$ be the first time that the infinite LERW $\gamma_\infty$ exits $B(\lambda^{-1/3}\delta^{-1})$. Define the event $F^*$ by setting
\[
F^* = \big\{ \gamma_\infty[\sigma,\infty) \cap B(0, \lambda^{-1/2}\delta^{-1}) = \emptyset,\ \sigma \le \lambda^{-1/4}\delta^{-\beta} \big\}.
\]
Suppose that $\gamma_\infty$ returns to the ball $B(0, \lambda^{-1/2}\delta^{-1})$ after time $\sigma$. Then so does the SRW that defines $\gamma_\infty$, after the first time that it exits $B(0, \lambda^{-1/3}\delta^{-1})$. The probability of such a return by the SRW is, by [113, Proposition 1.5.10], smaller than $C\lambda^{-1/6}$ for some universal constant $C < \infty$. On the other hand, combining [155, Theorem 1.4] with [128, Corollary 1.3], it follows that the probability that $\sigma$ is greater than $\lambda^{-1/4}\delta^{-\beta}$ is bounded above by $C\exp\{-c\lambda^{1/12}\}$ for some universal constants $c, C \in (0,\infty)$. Thus we have
\[
P(F^*) \ge 1 - C\lambda^{-1/6}. \tag{5.40}
\]
Note that, on the event $F^*$, the number of steps (in $\gamma_\infty$) between the origin and any $x \in \gamma_\infty \cap B(0, \lambda^{-1/2}\delta^{-1})$ is smaller than $\lambda^{-1/4}\delta^{-\beta}$.

Next we introduce an event $G^*$, which ensures the hittability of $\gamma_\infty$, similarly to the event $H(k,\zeta)$ defined at (5.35). Namely, for $\zeta > 0$, we set
\[
G^*(\zeta) = \Big\{ \forall x \in B(0, 2\lambda^{-1}\delta^{-1}),\ P^x_R\big( R[0, T_R(0, \lambda^{-1/2}\delta^{-1})] \cap \gamma_\infty = \emptyset \big) \le \lambda^{-\zeta} \Big\}.
\]
From [148, Lemma 3.2], we have that there exist universal constants $C < \infty$ and $\zeta_2 > 0$ such that: for all $\lambda \ge 2$ and $\delta > 0$,
\[
P(G^*(\zeta_2)) \ge 1 - C\lambda^{-1}.
\]
Moreover, we consider the following net, which is again similar to the version appearing in the proof of Proposition 5.4.1.
Here is the list of notationthat we need.• For each k ≥ 1, let ε∗k = λ−(1+ ζ26)2−k and η∗k = 1/(2k).• Write k∗0 for the smallest integer satisfying ε∗k∗0δ−1 < 1.• Set A∗k = B(0, (1+ η∗k)λ−1δ−1) \B(0, (1− η∗k)λ−1δ−1) and let D∗k ⊆ R3be a ε∗kδ−1-net of A∗k in the sense that the number of points in D∗k issmaller than Cλ−3(ε∗k)−3 and A∗k is contained in the union of all ballsB(z, ε∗kδ−1) with z ∈ D∗k.Note that since we take δ > 0 sufficiently small, it follows that both bound-aries ∂iB(0, λ−1δ−1) and ∂B(0, λ−1δ−1) are contained in D∗k∗0 .Now we perform Wilson’s algorithm as follows:• The root of the algorithm is U∗0 := γ∞, the infinite LERW started atthe origin.• We run sequentially LERWs from each point in D∗1 until they hit thepart of the tree already constructed, and let U∗1 be the union of thosebranches and U∗0 .• We define U∗k inductively for k = 2, 3, . . . , k∗0 by adding all branchesstarting from every point in D∗k to U∗k−1.• Finally, we consider LERW’s starting from R3 \ U∗k∗0 to obtain U .We condition the root γ∞ on the event F ∗ ∩ G∗(ζ2) and think of it asa deterministic set. Since the number of points in D∗1 is bounded aboveby Cλζ2/2, it follows that with high (conditional) probability, every branch129γU (x,U∗0 ) with x ∈ D∗1 is contained in B(0, λ−1/2δ−1). Namely, if we definethe event H∗ byH∗ ={γU (x,U∗0 ) ⊆ B(0, λ−1/2δ−1)and dU (x,U∗0 ) ≤ λ−1/4δ−β for all x ∈ D∗1},where dU (x,U∗0 ) stands for the number of steps of the branch γU (x,U∗0 ), thenthe condition of the event G∗(ζ2), [155, Theorem 1.4] and [128, Corollary1.3] ensure that the conditional probability of the event H∗ satisfiesP (H∗) ≥ 1− Cλ−ζ2/2.If we define the event I∗(k, ζ) byI∗(k, ζ) =∀x ∈ D∗k, y ∈ B(x, ε∗kδ−1),P yR(R[0, TR(x, (ε∗k)1−ζ21000 δ−1)]∩ (γ∞ ∪ γU (x,U∗0 )) = ∅)≤(ε∗k)ζ ,then a similar technique to used to deduce the inequality at (5.36) gives thatthere exist universal constants ζ3 > 0 and C < ∞ such that: for all λ ≥ 2,δ > 0 and k = 1, 2, . . . 
, k∗0,P (I∗(k, ζ3)) ≥ 1− C(ε∗k)2.Now we define L∗1 := F ∗∩G∗(ζ2)∩H∗∩I∗(1, ζ3). Note that on the eventL∗1, it follows that for any y ∈(γ∞ ∩B(0, λ−1/2δ−1))∪ (∪x∈D∗1γU (x,U∗0 )),we havedU (0, y) ≤ 2λ−1/4δ−β.We inductively define L∗k for k ≥ 2 in the following way. LetH∗(k) = γU(x,U∗k−1)⊆ A∗k−1and dU (x,U∗k−1) ≤ 2−k/8λ−1/4δ−β for all x ∈ D∗k .We define L∗k := L∗k−1∩I∗(2, ζ3)∩H∗(k) for k ≥ 2. Suppose that we condition130U∗k−1 on the event L∗k−1. Since each branch γ∞ ∪ γU (x,U∗0 ) with x ∈ D∗k−1is a hittable set, by using a similar iteration argument to that used for(5.37), as well as [155, Theorem 1.4] and [128, Corollary 1.3], we see that theconditional probability of H∗(k) is bounded above by C exp{−c2k/4λ1/2}.With this in mind, we letL∗ =k∗0∩k=1L∗k.As at (5.38), we haveP (L∗) = P (L∗1)k∗0∏k=2P(L∗k|L∗k−1)≥(1− Cλ−ζ2/2) ∞∏k=1(1− C (ε∗k)2)≥ 1− Cλ−ζ2/2.The hard part of the proof is now complete. Indeed, on the event L∗, it iseasy to check thatdU (0, y) ≤ Cλ−1/4δ−β,as long as y ∈ (γ∞ ∩ B(0, λ−1/2δ−1)) ∪ U∗k∗0 . Since the subtree U∗k0contains∂iB(0, λ−1δ−1), using [155, Theorem 1.4] and [128, Corollary 1.3] again, wesee thatP(dU (0, y) ≤ Cλ−1/4δ−β for all y ∈ B(0, λ−1δ−1))≥ 1− Cλ−c. (5.41)for some universal constants c, C ∈ (0, 1). This implies thatP(µU(BU(0, δ−β))< λ−1δ−3)≤ Cλ−c.for some universal constants c, C ∈ (0,∞). Reparameterizing this, it followsthat for all λ ≥ 1, δ ∈ (0, 1) and θ ∈ (0, 1],P(µU(BU(0, θδ−β))< λ−1δ−3)≤ Cθ− 3cβ λ−c,which finishes the proof.131Let γv∞ be the infinite simple path in U started at v. When v = 0, wewrite γ0∞ = γ∞. Fix a point v. The next lemma gives a lower bound onBU(x, θδ−β)∩B(0, δ−1) uniformly in x ∈ γv∞ ∩B(0, δ−1).Lemma 5.4.3. There exist universal constants λ0 > 1 and b0, c6, c7, c8in (0,∞) such that for all λ ≥ λ0, δ ∈ (0, 1), θ ∈ (0, 1] and all v ∈B(0, λb0δ−1),P ∃x ∈ γv∞ ∩B(0, δ−1) such thatµU(BU(x, θδ−β)∩B(0, δ−1))< λ−1δ−3 ≤ c6θ−c7λ−c8 . (5.42)Proof. 
Again, by reparameterizing, it suffices to show the inequality (5.42)the case that θ = 1. Also, we may assume that λ ≥ 2 is sufficiently largeand that δ > 0 is sufficiently small. We recall that we proved at (5.41) thatthere exist universal constants a1, a2 ∈ (0,∞) such thatP(A1) := P(B(0, λ−1δ−1) ⊆ BU(0, a2λ−1/4δ−β))≥ 1− a2λ−a1 . (5.43)Similarly to previously, we need to deal with the hittability of γ∞. To thisend, define the event A(ζ) byA(ζ) = supx∈B(0,λδ−1):dist(x,γ∞)≤λ−a110 δ−1P xR(R[0, TR(x, λ−a120 δ−1)]∩ γ∞ = ∅)≤ λ−ζa1 .From [148, Lemmas 3.2 and 3.3], it follows that there exist universal con-stants ζ4 ∈ (0, 1) and C <∞ such thatP (A(ζ4)) ≥ 1− Cλ−a1 . (5.44)Now we let b0 = a1ζ4/5000 and take v ∈ B(0, λb0δ−1), henceforth in thisproof only, we write γv∞ = γ∞ to simplify notation. We also write ρ0 forthe first time that γ∞ exits B(0, δ−1) (we set ρ0 = 0 if v /∈ B(0, δ−1)), set132R = λζ4a1100 δ−1, and define ρ to be the first time that γ∞ exits B(0, R). Thena similar argument used to deduce (5.40) gives thatP(A2) := P(γ∞[ρ,∞) ∩B(0, δ−1) = ∅, ρ ≤ λa150 δ−β)≥ 1− Cλ− ζ4a1100 .So, it suffices to deal with the event that there exists an x ∈ γ∞[0, ρ] ∩B(0, δ−1) for which the volume ofBU(x, δ−β)∩B(0, δ−1) is less than λ−1δ−3.Given these preparations, and moreover writing r = λ−ζ4a1100 δ−1, we de-compose the path γ∞[0, ρ] in the following way.• Let τ0 = 0. For l ≥ 1, define τl by τl = inf{j ≥ τl−1 : |γ∞(j) −γ∞(τl−1)| ≥ r}.• Let N be the unique integer such that τN−1 < ρ0 ≤ τN .• Set τ ′1 = inf{j ≥ ρ0 : |γ∞(j)− γ∞(ρ0)| ≥ r}.• For l ≥ 1, if ρl−1 < ρ, then we define τ ′l = inf{j ≥ ρl−1 : |γ∞(j) −γ∞(ρl−1)| ≥ r} and set ρl = inf{j ≥ τ ′l : γ∞(j) ∈ B(0, δ−1)} ∧ ρ.Otherwise, we let τ ′l =∞ and ρl = ρ.• Let N ′ be the smallest integer l such that ρl = ρ.• For 0 ≤ l ≤ N ′ − 1, we let τ ′′l = max{j ≤ ρl : |γ∞(j)− γ∞(ρl)| ≥ r} ifit is the case that {j ≤ ρl : |γ∞(j)− γ∞(ρl)| ≥ r} ̸= ∅. 
Otherwise, weset τ ′′l = ρl.Notice that we don’t consider the sequence {τl} if v /∈ B(0, δ−1) since τ0 =ρ0 = 0 in that case. (Namely, if v /∈ B(0, δ−1), we only consider the sequence{τ ′l}.) We also note that for any x ∈ γ∞[ρ0, ρ] ∩ B(0, δ−1), there exists0 ≤ l < N ′ such that x ∈ γ∞[ρl, τ ′l+1].Our first observation is that by considering the same decomposition forthe corresponding SRW, it follows that the probability that N + N ′ ≥λζ4a1/10 is smaller than C exp{−cλc}. Furthermore, applying [155, Theo-rem 1.4] together with [128, Corollary 1.3], with probability at least 1 −133C exp{−cλc}, it holds that τl− τl−1 ≤ λ−ζ4a1200 δ−β for all l = 1, 2, . . . , N , andthat τ ′l − ρl−1 ≤ λ−ζ4a1200 δ−β for all l = 1, 2, . . . , N ′. Consequently,P(A3) ≥ 1− C exp{−cλc},where the event A3 is defined by settingA3 = N +N ′ ≤ λζ4a110 , τl − τl−1 ≤ λ−ζ4a1200 δ−β for all l = 1, 2, . . . , Nand τ ′l+1 − τ ′′l ≤ λ−ζ4a1200 δ−β for all l = 0, 1, . . . , N ′ − 1 .Replacing the constant ζ4 by a smaller constant if necessary, [148, The-orem 6.1] (see Proposition 5.3.3) guarantees that γ∞ has no “quasi-loops”.Namely, it follows thatP(A4) ≥ 1− Cλ−ca1 ,where the event A4 is defined by settingA4 =B(γ∞(τl), λ−a130 δ−1)∩ (γ∞[0, τl−1] ∪ γ∞[τl+1,∞)) = ∅∀ l = 1, 2, . . . , N andB(γ∞(ρl), λ−a130 δ−1)∩(γ∞[0, τ ′′l ] ∪ γ∞[τ ′l+1,∞))= ∅∀ l = 0, 2, . . . , N ′ − 1.We now consider a λ−a110 δ−1-net of B(R), which we denote byD. We mayassume that for each y ∈ D∩B(δ−1), it holds that B(y, 2λ−1δ−1) ⊆ B(δ−1).Notice that the number of the points in D is bounded above by Cλa1/3.For each 1 ≤ l ≤ N = 2, we can find a point xl ∈ D ∩ B(δ−1) satisfying|xl − γ∞(τl)| ≤ λ−a110 δ−1. Also, for each 0 ≤ l ≤ N ′ − 1, there exists a pointx′l ∈ D ∩ B(δ−1) satisfying |x′l − γ∞(ρl)| ≤ λ−a110 δ−1. 
(Here, note that wecan find x′l in B(δ−1) since γ∞(ρl) ∈ B(δ−1).)We perform Wilson’s algorithm as follows.• The root of the algorithm is γ∞.• Consider the SRW R1 started from x1, and run until it hits γ∞. We134let U1 be the union of γ∞ and LE(R1). Next, we consider the SRWR2 started at x2 until it hits U1; the union of it and U1 is denoted byU2. We define U l for l = 3, 4, . . . , N − 2 similarly.• Consider the SRW Z0 starting from x′0 until it hits UN−2. We let U˜0be the union of UN−2 and LE(Z0). Next, we consider the SRW Z1started at x′1 until it hits U˜0; the union of LE(Z1) and U˜0 is denotedby U˜1. We define U˜ l for l = 2, 3, . . . , N ′ − 1 similarly.• Finally, run sequentially LERWs from every point in R3 \ U˜N ′−1 toobtain U .Define F 0 := A1∩A2∩A3∩A4∩A(ζ4) as a “good” event for γ∞. Conditioningγ∞ on the event F 0, we consider all simple random walks R1, R2, . . . , RN−2,Z0, Z1, . . . , ZN′−1 starting from x1, x2, . . . , xN−2, x′0, x′1, . . . , x′N ′−1 respec-tively. The event A(ζ4) ensures that the probability that Rl (respectivelyZ l) exits B(xl, λ−a120 δ−1) (resp. B(x′l, λ−a120 δ−1) before hitting γ∞ is smallerthan λ−ζ4a1 for each l. Moreover, the event A4 says that the endpoint of Rl(resp. Z l) lies in γ∞[τl−1, τl+1] (resp. γ∞[τ ′′l , τ ′l+1]) for each l. On the otherhand, the number of SRW’s N +N ′ − 2 is less than λ ζ4a110 by the event A3.Also, we can again appeal to [155, Theorem 1.4] and [128, Corollary 1.3] tosee that with probability at least 1−C exp{−cλc}, the length of the branchLE(Rl) (resp. LE(Z l)) is less than λ−a140 δ−β for each l = 1, 2, . . . , N − 2(respectively l = 0, 1, . . . , N ′ − 1). Thus, taking the sum over l, we see thatP(F 1) ≥ 1− Cλ− ζ4a12 ,135where the event F 1 is defined by settingF 1 =LE(Rl) ⊆ B(xl, λ−a120 δ−1),the endpoint of Rl lies in γ∞[τl−1, τl+1],and the length of LE(Rl) is smaller than λ−a140 δ−βfor all l = 1, 2, . . . 
, N − 2∩LE(Z l) ⊆ B(x′l, λ−a120 δ−1),the endpoint of Z l lies in γ∞[τ ′′l , τ ′l+1],and the length of LE(Z l) is smaller than λ−a140 δ−βfor all l = 0, 1, . . . , N ′ − 1.Recall that for each y ∈ D ∩ B(δ−1), it holds that B(y, 2λ−1δ−1) ⊆B(δ−1). Since the number of the points in D is bounded above by Cλa1/3,the translation invariance of U and (5.43) tell thatP(F 2) ≥ 1− a2λ−2a13 ,where the event F 2 is defined byF 2 ={B(xl, λ−1δ−1) ⊆ BU(xl, a2λ−1/4δ−β)for all l = 1, 2, . . . , N − 2}∩{B(x′l, λ−1δ−1) ⊆ BU(x′l, a2λ−1/4δ−β)for all l = 0, 1, . . . , N ′ − 1}.We set F 3 := F 0 ∩ F 1 ∩ F 2. Suppose that the event F 3 occurs. Takea point x ∈ γ∞[0, ρ0]. We can then find l ∈ {0, 1, . . . , N − 2} such thatx ∈ γ∞[τl, τl+2]. Let yl be the endpoint of Rl. Since yl lies in γ∞[τl−1, τl+1],and the event A3 holds, we see that dU (x, yl) ≤ τl+2 − τl−1 ≤ 3λ−ζ4a1200 δ−β.However, the event F 1 says that dU (yl, xl) ≤ λ−a140 δ−β. Finally, the eventF 2 ensures that for every point z ∈ B(xl, λ−1δ−1), we have dU (xl, z) ≤a2λ−1/4δ−β. So, the triangle inequality tells that dU (x, z) ≤ 5λ−ζ4a1200 δ−β forall z ∈ B(xl, λ−1δ−1) ⊆ B(δ−1).We next consider a point x ∈ γ∞[ρ0, ρ] ∩ B(δ−1). There then exists0 ≤ l < N ′ such that x ∈ γ∞[ρl, τ ′l+1]. Let y′l be the endpoint of Z l. Since y′llies in γ∞[τ ′′l , τ ′l+1], and the event A3 holds, we see that dU (x, y′l) ≤ τ ′l+1 −136τ ′′l ≤ λ−ζ4a1200 δ−β. However, the event F 1 says that dU (y′l, x′l) ≤ λ−a140 δ−β.Finally, the event F 2 ensures that for every point z ∈ B(x′l, λ−1δ−1), wehave dU (x′l, z) ≤ a2λ−1/4δ−β. So, the triangle inequality tells that dU (x, z) ≤3λ−ζ4a1200 δ−β for all z ∈ B(x′l, λ−1δ−1) ⊆ B(δ−1).This implies that for all x ∈ γ∞[0, ρ] ∩B(δ−1),µU{BU(x, 5λ−ζ4a1200 δ−β)∩B(δ−1)}≥ cλ−3δ−3. (5.45)Reparameterizing this, we finish the proof.Assumption 4 immediately follows from the next lemma.Lemma 5.4.4. 
There exist constants c1, c2, c3 such that: for all λ ≥ 1,δ ∈ (0, 1) and θ ∈ (0, 1],P(infx∈B(δ−1)δ3µU(BU (x, θδ−β) ∩B(δ−1))< λ−1)≤ c1θ−c2λ−c3 .Proof. We will only consider the case that θ = 1. We also assume that λ ≥ 2is sufficiently large and that δ > 0 is sufficiently small, similarly to the proofof the previous lemma. Moreover, we will use the same notation as in theproof of Lemma 5.4.3. Recall that the constants a1 and ζ4 appeared at(5.43) and (5.44), and that we defined b0 := a1ζ4/5000 and R := λζ4a1100 δ−1.For v ∈ B(λb0δ−1), ρ was defined to be the first time that γv∞ exits B(R)(ρ = 0 if v /∈ B(δ−1)). In the proof of Lemma 5.4.3, we proved that for eachv ∈ B(λb0δ−1),P µU {BU (x, λ−b1δ−β) ∩B(0, δ−1)} ≥ cλ−3δ−3for all x ∈ γv∞[0, ρ] ∩B(δ−1) ≥ 1−Cλ−b1 , (5.46)for some b1 > 0, see (5.45). Let b2 = ζ4a1108 ∧ b1108 . We consider a λ−b2δ−1-net D′ = (xl)Ml=1 of B(0, 2δ−1). Note the number of points in D′, which isdenoted by M , can be assumed to be smaller than Cλ3b2 .Now we perform Wilson’s algorithm as follows:• The root of the algorithm is γ∞ = γ0∞.137• Consider the SRW R1 started at x1 ∈ D′, and run until it hits γ∞.Let U1 be the union of LE(R1) and γ∞. We then consider the SRW Rlstarted from xl ∈ D′, and run until it hits Ul−1; add LE(Rl) to Ul−1 –this union is denoted by Ul. Since M ≤ Cλ3b2 , by applying (5.46) foreach xl, we haveP(V 1) ≥ 1− Cλ−b1/2,where the event V 1 is defined by settingV 1 := µU{BU(x, λ−b1δ−β)∩B(δ−1)}≥ cλ−3δ−3,for all x ∈ UM ∩B(δ−1) .• Taking a > 0 such that a∑∞j=1 j−2 = 1/2, we let ak = a∑kj=1 j−2,and consider a 2−kλ−b2δ−1-net Dk = (xki )i of B((2 − ak)δ−1), wherethe number of points in Dk is bounded above by C23kλ3b2 . Let k0 bethe smallest integer k such that 2−kλ−b2δ−1 ≤ 1.• Perform Wilson’s algorithm for all points in D1 adding new branchesto UM ; the output tree is denoted by Uˆ1. Then perform Wilson’salgorithm for points Dk (k = 2, 3, . . . , k0) inductively; the output treesare denoted by Uˆ2, . . . , Uˆk0 . 
Note that $B(\delta^{-1}) \subseteq \hat{\mathcal{U}}_{k_0}$.
Since every branch generated in the procedure above is a hittable set, we can prove that there exist a universal constant $C > 0$ and $0 < b_3 < b_2$ such that
\[
P(V^2) \ge 1 - C\lambda^{-b_3}, \tag{5.47}
\]
where the event $V^2$ is defined by
\[
V^2 := \big\{ \forall x \in \hat{\mathcal{U}}_{k_0},\ d_{\mathcal{U}}(x, x(M)) \le \lambda^{-b_3}\delta^{-\beta} \text{ and } x(M) \in B(\delta^{-1}) \big\}.
\]
Here, for each $x$, we write $x(M) \in \mathcal{U}_M$ for the point such that $d_{\mathcal{U}}(x, x(M)) = d_{\mathcal{U}}(x, \mathcal{U}_M)$. The inequality (5.47) can be proved in a similar way to Proposition 5.4.1, so the details are left to the reader.

Suppose that the event $V^1 \cap V^2$ occurs. Since $B(\delta^{-1}) \subseteq \hat{\mathcal{U}}_{k_0}$, this implies that for any $x \in B(\delta^{-1})$, we have
\[
\mu_{\mathcal{U}}\big\{ B_{\mathcal{U}}(x, 2\lambda^{-b_3}\delta^{-\beta}) \cap B(0, \delta^{-1}) \big\} \ge c\lambda^{-3}\delta^{-3}.
\]
A simple reparameterization completes the proof.

Combining Proposition 5.4.1 and Lemma 5.4.4, we have the following.

Corollary 5.4.1. Assumptions 2 and 4 hold.

5.4.3 Assumption 3

In this subsection, we will prove the following proposition.

Proposition 5.4.5. Assumption 3 holds.

Proof. In [127], it is proved that there exist universal constants $b_3, b_4 \in (0,\infty)$ such that for all $\delta \in (0,1)$ and $\lambda \ge 1$,
\[
P(J_1) \ge 1 - b_4\lambda^{-b_3}, \tag{5.48}
\]
where the event $J_1$ is defined by setting
\[
J_1 = \big\{ \forall x, y \in \gamma_\infty \cap B(\lambda^{b_3}\delta^{-1}) \text{ with } d_{\mathcal{U}}(x,y) \le \lambda^{-b_4}\delta^{-\beta},\ |x - y| \le \lambda^{-b_3}\delta^{-1} \big\},
\]
see [127, (7.19)] in particular. We also need the hittability of $\gamma_\infty$, as follows. For $\zeta > 0$, define the event $J(\zeta)$ by setting
\[
J(\zeta) = \Big\{ P^x_R\big( R[0, T_R(x, \lambda^{b_3/2}\delta^{-1})] \cap \gamma_\infty = \emptyset \big) \le \lambda^{-\zeta b_3} \text{ for all } x \in B(\lambda^{b_3/4}\delta^{-1}) \Big\}.
\]
It follows from [148, Lemma 3.2 and Lemma 3.3] that there exist universal constants $C < \infty$ and $\zeta_5 \in (0,1)$ such that for all $\delta > 0$ and $\lambda \ge 1$,
\[
P(J(\zeta_5)) \ge 1 - C\lambda^{-b_3}.
\]
With this in mind, we set $b_5 = \frac{\zeta_5 b_3}{1000}$ and $R_1 = \lambda^{b_5}\delta^{-1}$. Let $D'' = (z_l)_l$ be a $\lambda^{-b_5}\delta^{-1}$-net of $B(R_1)$. The number of points $M''$ of $D''$ can be assumed to be smaller than $C\lambda^{6b_5}$. We perform Wilson's algorithm as follows. The root of the algorithm is $\gamma_\infty$ as usual. Then we consider the loop erasures of the SRWs $R^1, R^2, \dots, R^{M''}$ started from $z_1, z_2, \dots, z_{M''}$ respectively; we denote the output tree by $\mathcal{U}_{M''}$.
Finally, we consider LERW’s starting fromall points in R3 \ UM ′′ .Conditioning γ∞ on the event J1 ∩ J(ζ5), for each l = 1, 2, . . . ,M ′′, theprobability that Rl exits B(zl, λb3/2δ−1)before hitting γ∞ is, on the eventJ(ζ5), bounded above by λ−ζ5b3 . Taking the sum over l, we see that ifJ2 :={Rl[0, TRl(x, λb3/2δ−1)]∩ γ∞ ̸= ∅ for all l = 1, 2, . . . ,M ′′},thenP(J2) ≥ 1− Cλ−b5 .On the other hand, if we defineJ l3 = ∀x, y ∈ γzl∞ ∩B(zl, λb3δ−1)with dU (x, y) ≤ λ−b4δ−β, |x− y| ≤ λ−b3δ−1 ,for each l = 1, 2, . . . ,M ′′, (recall that γx∞ stands for the unique infinite pathin U starting from x,) by the translation invariance of U and (5.48), it followsthat P(J l3) ≥ 1− b4λ−b3 for all l. Thus, lettingJ3 =M ′′∩l=1J l3,we have P(J3) ≥ 1− λ−b3 .Now, suppose that the event J := J1 ∩ J(ζ5) ∩ J2 ∩ J3 occurs. Thetriangle inequality tells that on the event J , for all x, y ∈ UM ′′ ∩B(λ2b33 δ−1)with dU (x, y) ≤ λ−b4δ−β, we have |x− y| ≤ 3λ−b3δ−1. ThusP ∀x, y ∈ UM ′′ ∩B(λ2b33 δ−1)with dU (x, y) ≤ λ−b4δ−β, |x− y| ≤ 3λ−b3δ−1 ≥ 1− Cλ−b5 .140By the translation invariance of U again, we can prove that each branch γzl∞is also a hittable set with high probability. Namely, if we letJ l(ζ) = P xR(R[0, TR(x, λ−b5/2δ−1)]∩ γzl∞ = ∅)≤ λ−ζb5for all x ∈ B(zl, λ−b5δ−1)for each l = 1, 2, . . . ,M ′′, then by using [148, Lemma 3.2], we see that thereexist universal constants ζ6 ∈ (0, 1) and C < ∞ such that for all δ ∈ (0, 1),λ ≥ 1 and l = 1, 2, . . .M ′′,P(J l(ζ6))≥ 1− Cλ−100b5 .With this in mind, we letJ4 :=M ′′∩l=1J l(ζ6),so that P(J4) ≥ 1− Cλ−b5 .Conditioning UM ′′ on the event J ∩ J4, we perform Wilson’s algorithmfor all points in B(R1/2) \ UM ′′ , considering finer and finer nets there asin the proof of Proposition 5.4.1. The event J4 ensures that every SRWstarting from a point w in B(R1/2) hits UM ′′ before it exits B(w, λ−b5/3δ−1)with probability at least 1 − Cλ−ζ6b5λb56 . 
Thus we can conclude that, with probability at least $1-C\lambda^{-b_5}$, we have $\mathrm{diam}(\gamma_{\mathcal{U}}(w,\mathcal{U}_{M''})) \le \lambda^{-b_5/3}\delta^{-1}$ and $d_{\mathcal{U}}(w,\mathcal{U}_{M''}) \le \lambda^{-b_5/4}\delta^{-\beta}$ for all $w \in B(0,R_1/2)$. Therefore, by the triangle inequality again, it follows that
\[
P\left(\forall x,y \in \mathcal{U}\cap B(R_1/2) \text{ with } d_{\mathcal{U}}(x,y)\le \lambda^{-b_4}\delta^{-\beta},\ |x-y|\le \lambda^{-b_3/5}\delta^{-1}\right) \ge 1-C\lambda^{-b_5}. \tag{5.49}
\]
Finally, Proposition 5.4.1 shows that, with probability $1-C\lambda^{-cb_5}$, the intrinsic ball $B_{\mathcal{U}}(0,L\delta^{-\beta})$ is contained in $B(R_1/2)$ for each fixed $L$. Combining this with (5.49) completes the proof.

5.5 Exponential lower tail bound on the volume

In Lemma 5.4.3, we established a polynomial (in $\lambda$) lower tail bound on the volume of a ball. In this section, we will improve this bound to an exponential one, see Theorem 5.5.2 below. We start by proving the following analogue of [20, Theorem 3.4] in three dimensions. The proof strategy is modelled on that of the latter result, though there is a key difference in that the Beurling estimate used there (see [113, Theorem 2.5.2]) is not applicable in three dimensions, and we replace it with the hittability estimate of Proposition 5.3.9.

Theorem 5.5.1. There exist constants $\lambda_0>1$ and $c,C,b\in(0,\infty)$ such that, for all $R\ge 1$ and $\lambda\ge\lambda_0$,
\[
P\left(\mu_{\mathcal{U}}\left(B_{\mathcal{U}}(0,R)\right)\le \lambda^{-1}R^{3/\beta}\right)\le C\exp\{-c\lambda^{b}\}. \tag{5.50}
\]

Proof. We begin by describing the setting of the proof. We assume that $\lambda\ge 1$ is sufficiently large, and let $a=99/100$. Let $q=[\lambda^{(1-a)/3}]$ be the number of subsets $I_0,I_1,\dots,I_q$ of the index set $\{1,2,\dots,\lambda\}$, as defined in (5.22). Note that for all $0\le j_1<j_2\le q$ and all $i_1\in I_{j_1}$, $i_2\in I_{j_2}$ we have
\[
\mathrm{dist}\left(\partial D_{i_1},\partial D_{i_2}\right)\ge \lambda^{a-1}R. \tag{5.51}
\]
For each $j=0,1,\dots,q$, recall that $F_j$ stands for the event that there exists a "good" index $i\in I_j$, in the sense that $\gamma[t_i,\sigma_i]$ is a hittable set. By Proposition 5.3.9, with probability at least $1-\lambda^{1-a}\exp\{-c_1\lambda^{a}\}$, the ILERW $\gamma$ has a good index in $I_j$ for every $j=0,1,\dots,q$. Let
\[
F=\bigcap_{j=1}^{q}F_j, \tag{5.52}
\]
and suppose that the event $F$ occurs. It then holds that, for each $j=0,1,\dots,q$, we can find a good index $i_j\in I_j$ such that the event $A_{i_j}$ occurs. We will moreover fix deterministic nets $W^p=(w^p_k)_k$, $p=1,2,3$, of $B(2R)$ satisfying
\[
B(2R)\subseteq\bigcup_k B\left(w^p_k,\frac{R}{10^2\lambda^{2p}}\right)\quad\text{and}\quad |w^p_k-w^p_{k'}|\ge\frac{R}{10^4\lambda^{2p}}\ \text{ for all } k\ne k'.
\]
Note that we may assume that $|W^p|\asymp\lambda^{6p}$.

From now on, we assume that the event $F$ occurs whenever we consider $\gamma$. We also highlight the correspondence between our setting and that of [20, Theorem 3.4]. In the proof of [20, Theorem 3.4], $k$ points $z_1,z_2,\dots,z_k$ were chosen on the ILERW. Here the points $x_{i_0}=\gamma(t_{i_0}), x_{i_1}=\gamma(t_{i_1}),\dots,x_{i_q}=\gamma(t_{i_q})$ correspond to those points. Setting $n=\frac{R}{2\lambda}$, we write $B_j=B(x_{i_j},n)$ for $j=0,1,\dots,q$. Note that for each $j_1\ne j_2$,
\[
\mathrm{dist}\left(B_{j_1},B_{j_2}\right)\ge\frac{\lambda^{a-1}R}{2} \tag{5.53}
\]
by (5.51).

As in [20, (3.18) and (3.19)], we define the events $F^1,F^2$ by setting
\[
F^1=\left\{\gamma[T_{2R},\infty)\text{ hits more than } q/2 \text{ of } B_0,B_1,\dots,B_q\right\}, \tag{5.54}
\]
\[
F^2=\left\{T_{2R}\ge\lambda^{a'}R^{\beta}\right\},
\]
where $T_r$ is the first time that $\gamma$ exits $B(r)$, and $a'=1/1000$; see Figure 5.7. Here we also need to introduce the event $F^3$, as given by
\[
F^3=\left\{\exists\, w^1_k\in W^1 \text{ such that } N^1_k\ge\lambda^5\right\},
\]
where $N^1_k$ is equal to
\[
\left|\left\{w^2_l\in W^2 : B\left(w^2_l,\tfrac{R}{10^2\lambda^4}\right)\subseteq B\left(w^1_k,\tfrac{R}{10^2\lambda^2}\right)\text{ and } B\left(w^2_l,\tfrac{R}{10^2\lambda^4}\right)\cap\gamma[0,\infty)\ne\emptyset\right\}\right|,
\]
i.e., $N^1_k$ stands for the number of balls of the net $W^2$ contained in the ball $B(w^1_k,\frac{R}{10^2\lambda^2})$ and hit by the ILERW $\gamma$.

We will first show that $P(F^1)$ is exponentially small in $\lambda$. Let $\Gamma_r$ be the set of paths $\zeta$ satisfying $P(\gamma[0,T_r]=\zeta)>0$. Namely, $\Gamma_r$ stands for the set of all possible candidates for $\gamma[0,T_r]$. Take $\zeta\in\Gamma_{2R}$, and let $z=\zeta(\mathrm{len}(\zeta))$ be the endpoint of $\zeta$. Write $Y$ for the random walk started at $z$ and conditioned on the event that $Y[1,\infty)\cap\zeta=\emptyset$. The domain Markov property (see [113, Proposition 7.3.1]) yields that the distribution of $\gamma[T_{2R},\infty)$ conditioned on the event $\{\gamma[0,T_{2R}]=\zeta\}$ coincides with that of $\mathrm{LE}(Y[0,\infty))$. Therefore, we have
\[
P(F^1)\le\sum_{\zeta\in\Gamma_{2R}}P(H_\zeta)\,P\left(\gamma[0,T_{2R}]=\zeta\right),
\]
where the event $H_\zeta$ is defined by
\[
H_\zeta=\left\{Y\text{ hits more than } q/2 \text{ of } B_0,B_1,\dots,B_q\right\}.
\]
Recall that $R^z$ stands for the simple random walk started at $z$. We remark that $\mathrm{dist}(z,B_j)\ge R/4$ for all $j\in\{0,1,\dots,q\}$.

Figure 5.7: A typical realisation of $\gamma$ on the event $F^1$, as defined at (5.54).

Let $\tau$ be the first time that $R^z$ exits $B(z,R/8)$, and observe that
\[
P\left(R^z[1,\infty)\cap\zeta=\emptyset\right)\asymp P\left(R^z[1,\tau]\cap\zeta=\emptyset\right). \tag{5.55}
\]
Indeed, it is clear that the left-hand side is bounded above by the right-hand side. To see the opposite inequality, we note that [155, Proposition 6.1] (see also [148, Claim 3.4]) yields that
\[
P\left(R^z[1,\infty)\cap\zeta=\emptyset\right)
\ge P\left(R^z[1,\tau]\cap\zeta=\emptyset,\ \mathrm{dist}\left(B(2R),R^z(\tau)\right)\ge\tfrac{R}{16},\ R^z[\tau,\infty)\cap B(2R)=\emptyset\right)
\]
\[
\ge cP\left(R^z[1,\tau]\cap\zeta=\emptyset,\ \mathrm{dist}\left(B(2R),R^z(\tau)\right)\ge\tfrac{R}{16}\right)
\ge c'P\left(R^z[1,\tau]\cap\zeta=\emptyset\right),
\]
which gives (5.55). Consequently, we obtain that
\[
P(H_\zeta)\le\frac{CP\left(R^z[1,\tau]\cap\zeta=\emptyset,\ R^z\text{ hits more than } q/2\text{ of } B_0,B_1,\dots,B_q\right)}{P\left(R^z[1,\tau]\cap\zeta=\emptyset\right)}\le C\max_{z'\in B(z,R/8)}P\left(R^{z'}\text{ hits more than } q/2\text{ of } B_0,B_1,\dots,B_q\right).
\]
Take $z'\in B(z,R/8)$, and note that $\mathrm{dist}(z',B_j)\ge R/8$ for all $j=0,1,\dots,q$. We define a sequence of stopping times $u_1,u_2,\dots$ as follows. Let
\[
u_1=\inf\Big\{t\ge 0 : R^{z'}(t)\in\bigcup_{j=0}^{q}B_j\Big\},
\]
and let $j_1$ be the unique index such that $R^{z'}(u_1)\in B_{j_1}$. For $l\ge 2$, we define $u_l$ by setting
\[
u_l=\inf\Big\{t\ge u_{l-1} : R^{z'}(t)\in\bigcup_{j=0}^{q}B_j\setminus B_{j_{l-1}}\Big\},
\]
and write $j_l$ for the unique index such that $R^{z'}(u_l)\in B_{j_l}$. Since the distance between two different balls is bigger than $\lambda^{a-1}R/2$ by (5.53), and each ball has radius $n=R/2\lambda$, it follows from [113, Proposition 1.5.10] that
\[
P\left(u_l<\infty \mid u_{l-1}<\infty\right)\le C\lambda^{-a}\lambda^{1-a}=C\lambda^{-\frac{49}{50}}
\]
for all $l$. Thus, taking $\lambda$ sufficiently large so that $C\lambda^{-\frac{49}{50}}<1/2$, it holds that
\[
P(H_\zeta)\le C(1/2)^{q/2}\le C\exp\{-c\lambda^{\frac{1}{100}}\},
\]
which gives
\[
P(F^1)\le C\exp\{-c\lambda^{\frac{1}{100}}\}. \tag{5.56}
\]
As for the event $F^2$, we have from Proposition 5.3.4 that
\[
P(F^2)\le C\exp\{-c\lambda^{a'}\}. \tag{5.57}
\]
Finally, we will deal with the event $F^3$. Define
\[
M^1_k=\left|\left\{w^2_l\in W^2 : B\left(w^2_l,\tfrac{R}{10^2\lambda^4}\right)\subseteq B\left(w^1_k,\tfrac{R}{10^2\lambda^2}\right)\text{ and } B\left(w^2_l,\tfrac{R}{10^2\lambda^4}\right)\cap S[0,\infty)\ne\emptyset\right\}\right|;
\]
in other words, $M^1_k$ stands for the number of balls of the net $W^2$ contained in $B(w^1_k,\frac{R}{10^2\lambda^2})$ and hit by the SRW $S[0,\infty)$.
It is clear that $N^1_k\le M^1_k$. Thus, on the event $F^3$, there exists $w^1_k\in W^1$ such that $M^1_k\ge\lambda^5$. However, for each $k$, it is easy to see that $P(M^1_k\ge\lambda^5)\le Ce^{-c\lambda}$. Therefore, since $|W^1|\asymp\lambda^6$, we see that
\[
P(F^3)\le C\exp\{-c\lambda^{1/2}\}. \tag{5.58}
\]
We are now ready to follow the proof of [20, Theorem 3.4]. If the event $F^c\cup F^1\cup F^2\cup F^3$ (recall that the event $F$ is defined at (5.52)) occurs, we terminate the algorithm with a 'Type 1' failure. Otherwise, for each $j=0,1,\dots,q$, we can find $z_j\in W^3\cap B(x_{i_j},n/8)$ such that $B(z_j,\lambda^{-4}R)\cap\gamma[0,\infty)=\emptyset$. Using this point $z_j$, we write
\[
B'_j=B(z_j,\lambda^{-4}R),\qquad B''_j=B(z_j,\lambda^{-6}R).
\]
Let $U_0=\gamma[0,\infty)$. Suppose that the event $F\cap\bigcap_{k=1}^{3}(F^k)^c$ occurs. We consider the SRW $R^{z_0}$ run until it hits $U_0$. Let $\gamma_0=\mathrm{LE}(R^{z_0})$ be its loop-erasure, which is the branch of $\mathcal{U}$ between $z_0$ and $U_0$. Define the event $G^0_1$ by $G^0_1=\{R^{z_0}\not\subseteq B_0\}$. Since $\gamma$ satisfies the event $F$, we see that
\[
P\left((G^0_1)^c\right)\ge c_0. \tag{5.59}
\]
Suppose that the event $G^0_1$ occurs. We mark the ball $B_j$ as 'bad' if $R^{z_0}\cap B_j\ne\emptyset$. Otherwise, we define the event $G^0_2:=\{\mathrm{len}(\gamma_0)\ge\lambda^{-1/2}R^{\beta}\}\cap\{R^{z_0}\subseteq B_0\}$. If the event $G^0_2$ occurs, we also mark $B_0$ as 'bad' (we only mark $B_0$ in this case). By Proposition 5.3.4, it holds that
\[
P\left(G^0_2\right)\le C\exp\{-c\lambda^{1/2}\}. \tag{5.60}
\]
If the event $(G^0_1)^c\cap(G^0_2)^c$ occurs, we use Wilson's algorithm to fill in the remainder of $B''_0$. Define the event $G^0_3$ by setting
\[
G^0_3=\left\{\exists v\in B''_0 \text{ such that } \gamma_{\mathcal{U}}(v,\gamma_0\cup U_0)\not\subseteq B'_0 \text{ or } \mathrm{len}\left(\gamma_{\mathcal{U}}(v,\gamma_0\cup U_0)\right)\ge\lambda^{-2}R^{\beta}\right\}\cap(G^0_1)^c\cap(G^0_2)^c,
\]
where we recall that $\gamma_{\mathcal{U}}(v,A)$ stands for the branch of $\mathcal{U}$ between $v$ and $A$. Modifying the proof of Lemma 5.4.2, we see that
\[
P\left(G^0_3\right)\le C\lambda^{-c} \tag{5.61}
\]
for some universal constants $c,C\in(0,\infty)$. Suppose that the event $G^0_3$ occurs. We again mark the ball $B_j$ as 'bad' if $S^v$ hits $B_j$ for some $v\in B''_0$ in the algorithm above. If the event $(G^0_1)^c\cap(G^0_2)^c\cap(G^0_3)^c$ occurs, we label this first 'ball step' as successful and we terminate the whole algorithm. In this case, for all $v\in B''_0$,
\[
d_{\mathcal{U}}(0,v)\le\left(\lambda^{a'}+\lambda^{-1/2}+\lambda^{-2}\right)R^{\beta}\le C\lambda^{a'}R^{\beta},
\]
and so
\[
\mu_{\mathcal{U}}\left(B_{\mathcal{U}}\left(0,C\lambda^{a'}R^{\beta}\right)\right)\ge c\lambda^{-18}R^{3}. \tag{5.62}
\]
If the event $G^0_1\cup G^0_2\cup G^0_3$ occurs, we denote the number of bad balls by $N_{B_0}$. Using an idea similar to that used to establish (5.56), we see that
\[
P\left(N_{B_0}\ge\sqrt{q}/4\right)\le C\exp\{-c\lambda^{1/200}\}. \tag{5.63}
\]
If $N_{B_0}\ge\sqrt{q}/4$, we terminate the whole algorithm with a 'Type 2' failure. If $N_{B_0}<\sqrt{q}/4$, we can choose a ball $B_j$ which is not bad and perform the second 'ball step', replacing $B_0$ with $B_j$ in the above. We terminate this ball-step algorithm whenever we get a successful ball step or a Type 2 failure. We write $F^4$ for the event that some ball step ends with a Type 2 failure. Since we perform at most $q^{1/2}$ ball steps, it follows from (5.63) that
\[
P(F^4)\le C\exp\{-c\lambda^{1/400}\}. \tag{5.64}
\]
Finally, we let $F^5$ be the event that we can perform the $j$th ball step for all $j=1,2,\dots,q^{1/2}$ without Type 2 failure and without success. By combining (5.59), (5.60) and (5.61), and taking $\lambda$ sufficiently large, we see that each ball step has probability at least $c_0/2$ of success. Therefore, we have
\[
P(F^5)\le C\exp\{-c\lambda^{1/200}\}. \tag{5.65}
\]
Once we terminate the ball-step algorithm with a success, we end up with a good volume estimate as in (5.62). Combining (5.56), (5.57), (5.58), (5.64), (5.65) with Proposition 5.3.9, we conclude that
\[
P\left(\mu_{\mathcal{U}}\left(B_{\mathcal{U}}\left(0,C\lambda^{a'}R^{\beta}\right)\right)\ge c\lambda^{-18}R^{3}\right)\ge 1-C\exp\{-c\lambda^{a'}\}.
\]
Reparameterizing this gives the desired result.

We are now ready to derive the main result of this section, which gives exponential control of the volume of balls, uniformly over spatial regions.

Theorem 5.5.2. There exist constants $\lambda_0>1$ and $c,C,b,b'\in(0,\infty)$ such that, for all $R\ge 1$ and $\lambda\ge\lambda_0$,
\[
P\left(\inf_{x\in B(R^{1/\beta})}\mu_{\mathcal{U}}\left(B_{\mathcal{U}}\left(x,\lambda^{-b'}R\right)\right)\le\lambda^{-1}R^{3/\beta}\right)\le C\exp\{-c\lambda^{b}\}. \tag{5.66}
\]

Proof. We will follow the strategy used in the proofs of Lemmas 5.4.3 and 5.4.4. Theorem 5.5.1 tells us that if $A_1:=\{|B_{\mathcal{U}}(0,\lambda^{-1}R)|\le\lambda^{-4}R^{3/\beta}\}$, then
\[
P(A_1)\le C\exp\{-c\lambda^{b}\}. \tag{5.67}
\]
We may assume that $b\in(0,1)$. We also let $b_1=b/1000$.
Applying Proposition 5.3.7 with $s=\exp\{-\lambda^{b_1}\}R^{1/\beta}$, $r=\exp\{-\lambda^{b_1}/2\}R^{1/\beta}$ and $K=100$, we find that there exist universal constants $\eta\in(0,1)$ and $C>0$ such that
\[
P(A_2)\le C\exp\{-\lambda^{b_1}\},
\]
where $A_2$ is defined to be the event
\[
\left\{\exists v\in B(5R^{1/\beta})\text{ such that }\mathrm{dist}(v,\gamma)\le e^{-\lambda^{b_1}}R^{1/\beta}\text{ and } P^{v}\left(R^{v}[0,t_v]\cap\gamma=\emptyset\right)\ge e^{-\eta\lambda^{b_1}}\right\}.
\]
Here, $\gamma$ represents the ILERW started at the origin, $R^v$ stands for a SRW started at $v$, the probability law of which is denoted by $P^v$, and $t_v$ stands for the first time that $R^v$ exits $B(v,\exp\{-\lambda^{b_1}/2\}R^{1/\beta})$. We next use Proposition 5.3.3 to conclude that there exist universal constants $C,M<\infty$ such that
\[
P(A_3)\le C\exp\{-\lambda^{b_1}/M\},
\]
where the event $A_3$ is defined by
\[
A_3=\left\{\exists v\in B(5R^{1/\beta})\text{ and } i<j\text{ such that }\gamma(i),\gamma(j)\in B\left(v,10\exp\{-\lambda^{b_1}/2\}R^{1/\beta}\right)\text{ and }\gamma[i,j]\not\subseteq B\left(v,10^{-1}\exp\{-\lambda^{b_1}/M\}R^{1/\beta}\right)\right\}.
\]
Namely, the event $A_3$ says that $\gamma$ has a quasi-loop in $B(5R^{1/\beta})$. We next let
\[
\delta=10^{-3}\min\{\eta,1/M\}, \tag{5.68}
\]
and define a sequence of random times $s_1,s_2,\dots$ by setting $s_0=0$,
\[
s_1=\inf\left\{t\ge 0:\gamma(t)\notin B\left(\exp\{-\delta\lambda^{b_1}\}R^{1/\beta}\right)\right\},
\]
\[
s_i=\inf\left\{t\ge s_{i-1}:\gamma(t)\notin B\left(\gamma(s_{i-1}),\exp\{-\delta\lambda^{b_1}\}R^{1/\beta}\right)\right\},\qquad\forall i\ge 2.
\]
Let $x_i=\gamma(s_i)$, write
\[
I=\left\{i\ge 1:\left(\gamma[s_{i-1},s_i]\cup\gamma[s_i,s_{i+1}]\cup\gamma[s_{i+1},s_{i+2}]\right)\cap B(4R^{1/\beta})\ne\emptyset\right\},
\]
and set $N=|I|$. By considering the number of balls of radius $\exp\{-\delta\lambda^{b_1}\}R^{1/\beta}$ crossed by a SRW before ultimately leaving $B(4R^{1/\beta})$, we see that
\[
P(A_4)\le C\exp\{-ce^{\delta\lambda^{b_1}}\},
\]
where $A_4:=\{N\ge\exp\{3\delta\lambda^{b_1}\}\}$. A similar argument to that used in [127, (7.51)] yields that
\[
P(A_5)\le C\exp\{-ce^{\delta\lambda^{b_1}/2}\},
\]
where $A_5$ is the event that there exists an $i\in I$ such that $s_i-s_{i-1}\ge\exp\{-\delta\lambda^{b_1}/2\}R$. Thus, defining the event $A$ by setting
\[
A=\bigcap_{i=1}^{5}A_i^c,
\]
and combining the above estimates, we obtain that
\[
P(A)\ge 1-C\exp\{-\lambda^{b_1}/M\}. \tag{5.69}
\]
We now fix a net $W=(w_j)_j$ of $B(5R^{1/\beta})$ such that
\[
B(5R^{1/\beta})\subseteq\bigcup_j B\left(w_j,\exp\{-\lambda^{b_1}\}R^{1/\beta}\right)
\]
and $|W|\asymp\exp\{3\lambda^{b_1}\}$. For $i\in I$, let $w_i\in W$ be such that $|x_i-w_i|\le\exp\{-\lambda^{b_1}\}R^{1/\beta}$. We now use Wilson's algorithm for all points $w_i$.
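The nets $W$ and $W^p$ appearing in these proofs are finite coverings of a Euclidean ball by balls of a prescribed mesh, with cardinality controlled by the mesh. As a purely illustrative aside (not part of the argument), such a net can be built explicitly from lattice points; the function name `ball_net` and the choice of a cubic lattice of spacing $\varepsilon$ are ours:

```python
import itertools
import math

def ball_net(radius, eps):
    """An explicit eps-net of the Euclidean ball B(0, radius) in R^3:
    cubic-lattice points of spacing eps, kept if they lie within
    radius + eps of the origin.  Every point of the ball is within
    eps*sqrt(3)/2 <= eps of some net point, and the number of net
    points grows like (radius/eps)^3, in line with |W| above."""
    n = int(radius / eps) + 1
    return [
        (i * eps, j * eps, k * eps)
        for i, j, k in itertools.product(range(-n, n + 1), repeat=3)
        if math.hypot(i * eps, j * eps, k * eps) <= radius + eps
    ]
```

The covering property used repeatedly above is that every point of the ball lies within one mesh of some net point.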
On $A_2^c$, it holds that, for each $i\in I$,
\[
P^{w_i}\left(R^{w_i}[0,t_{w_i}]\cap\gamma=\emptyset\right)\le\exp\{-\eta\lambda^{b_1}\}.
\]
Therefore we have
\[
P(B_1)\le C\exp\{-\eta\lambda^{b_1}\}\exp\{3\delta\lambda^{b_1}\}\le C\exp\{-\eta\lambda^{b_1}/2\}, \tag{5.70}
\]
where $B_1$ is the event that there exists $i\in I$ such that $R^{w_i}[0,t_{w_i}]\cap\gamma=\emptyset$. Suppose that the event $B_1^c$ occurs. For $i\in I$, write $u_i$ for the first time that $R^{w_i}$ hits $\gamma$, and let $z_i=R^{w_i}(u_i)$. On $A_3^c$, we have that $z_i\in\gamma[s_{i-1},s_i]\cup\gamma[s_i,s_{i+1}]$, because otherwise $\gamma$ would have a quasi-loop. We define the events $B_2$ and $B_3$ by setting
\[
B_2=\left\{\exists i\in I\text{ such that }\mathrm{len}\left(\mathrm{LE}\left(R^{w_i}[0,u_i]\right)\right)\ge\exp\{-\lambda^{b_1}/4\}R\right\},
\]
\[
B_3=\left\{\exists i\in I\text{ such that }\left|B_{\mathcal{U}}(w_i,\lambda^{-1}R)\right|\le\lambda^{-4}R^{3/\beta}\right\}.
\]
Combining the translation invariance of $\mathcal{U}$ with Proposition 5.3.4 ensures that
\[
P(B_2)\le C\exp\{-ce^{\lambda^{b_1}/4}\}. \tag{5.71}
\]
Moreover, by (5.67) and the translation invariance of the UST again, we have
\[
P(B_3)\le Ce^{-c\lambda^{b}}\times e^{3\lambda^{b_1}}\le Ce^{-c\lambda^{b}/2},
\]
where we use the fact that $|W|\asymp e^{3\lambda^{b_1}}$ and $b_1=b/1000$. Defining
\[
B=\bigcap_{j=1}^{3}B_j^c,
\]
we have proved that
\[
P(A\cap B)\ge 1-C\exp\{-\delta\lambda^{b_1}\},
\]
where we recall that $\delta>0$ was defined at (5.68).

Next, suppose that the event $A\cap B$ occurs. Take $x\in\gamma\cap B(4R^{1/\beta})$. We can then find some $i\in I$ such that $x\in\gamma[s_i,s_{i+1}]$. On $A_5^c$, we have
\[
d_{\mathcal{U}}(x_{i-1},x_i)\le\exp\{-\delta\lambda^{b_1}/2\}R\quad\text{and}\quad d_{\mathcal{U}}(x_i,x_{i+1})\le\exp\{-\delta\lambda^{b_1}/2\}R.
\]
Furthermore, on $B_1^c\cap A_3^c\cap B_2^c$, it holds that
\[
z_i\in\gamma[s_{i-1},s_i]\cup\gamma[s_i,s_{i+1}]\quad\text{and}\quad d_{\mathcal{U}}(w_i,z_i)\le\exp\{-\lambda^{b_1}/4\}R.
\]
This implies that $d_{\mathcal{U}}(w_i,x)\le\exp\{-\delta\lambda^{b_1}/4\}R$. If $B_3^c$ also holds, it follows that
\[
\mu_{\mathcal{U}}\left(B_{\mathcal{U}}(x,2\lambda^{-1}R)\right)\ge\lambda^{-4}R^{3/\beta}.
\]
Consequently, we have proved that there exist universal constants $C,\delta,b_1\in(0,\infty)$ such that, for all $R$ and $\lambda$,
\[
P\left(\mu_{\mathcal{U}}\left(B_{\mathcal{U}}(x,\lambda^{-1}R)\right)\ge\lambda^{-5}R^{3/\beta}\ \text{for all }x\in\gamma\cap B(4R^{1/\beta})\right)\ge 1-C\exp\{-\delta\lambda^{b_1}\}. \tag{5.72}
\]
Finally, once we have (5.72), the proof of (5.66) can be completed by following the strategy used to prove Lemma 5.4.4 given Lemma 5.4.3. Indeed, thanks to (5.72), we can use a net whose mesh size is exponentially small in $\lambda$, which guarantees an exponential bound as in (5.50).
The simple modification is left to the reader.

5.6 Exponential upper tail bound on the volume

Complementing the main result of the previous section, we next establish an exponential upper tail bound on the volume, see Theorem 5.6.2, which improves the polynomial upper tail bound on the volume proved in Proposition 5.4.1. We begin with the following proposition.

Proposition 5.6.1. There exist constants $\lambda_0>1$ and $c,C,a\in(0,\infty)$ such that, for all $R\ge 1$ and $\lambda\ge\lambda_0$,
\[
P\left(B_{\mathcal{U}}\left(0,\lambda^{-1}R^{\beta}\right)\not\subseteq B(R)\right)\le C\exp\{-c\lambda^{a}\}.
\]
In particular, it holds that
\[
P\left(\mu_{\mathcal{U}}\left(B_{\mathcal{U}}\left(0,\lambda^{-1}R^{\beta}\right)\right)\ge R^{3}\right)\le C\exp\{-c\lambda^{a}\}.
\]

Proof. The second inequality immediately follows from the first one. Thus it remains to prove the first inequality. We follow the strategy used in the proof of [20, Theorem 3.1]. We may assume that $\lambda$ is sufficiently large. It follows from Proposition 5.3.4 that there exist constants $C$, $c$ and $a_0>0$ such that
\[
P\left(T_{R/8}<\lambda^{-1}R^{\beta}\right)\le C\exp\{-c\lambda^{a_0}\},
\]
where again $T_r$ stands for the first time that the ILERW $\gamma$ exits $B(r)$. Setting $a_1=a_0/10$, we define a sequence of nets $D_k$ as follows. For $k\ge 1$, set $\delta_k=2^{-k}\exp\{-\lambda^{a_1}\}$ and $\eta_k=(2k)^{-1}$, and let $k_0$ be the smallest integer such that $\delta_{k_0}R<1$. Defining
\[
A_k:=B(R)\setminus B\left((1-\eta_k)R\right),
\]
let $D_k$ be a set of points in $A_k$ satisfying $|D_k|\asymp\delta_k^{-3}$ and also
\[
A_k\subseteq\bigcup_{w\in D_k}B\left(w,\delta_k R\right).
\]
We then perform Wilson's algorithm as follows.

• Let $U_0=\gamma$ be the ILERW, which is the root of the algorithm.

• Take $w\in D_1$, and consider the SRW $R^w$ started at $w$ and run until it hits $U_0$. We add $\mathrm{LE}(R^w)$ to $U_0$. We choose another point $w'\in D_1$ and add the loop-erasure of $R^{w'}$, a SRW started at $w'$ and run until it hits the part of the tree already constructed. We perform the same procedure for every point in $D_1$. Let $U_1$ be the output tree.

• We perform the same algorithm as above for all points in $D_2$. Let $U_2$ be the output tree. Similarly, we define $U_k$.

• Finally, we perform Wilson's algorithm for all points in $U_{k_0}^c$.

Since $\delta_{k_0}R<1$, we note that $\partial_i B(R)\subseteq A_{k_0}\subseteq U_{k_0}$.

Now, take $w\in D_1$, and let $N_w$ be the first time that $\gamma_{\mathcal{U}}(0,w)$ exits $B(R/8)$.
Using [136, Proposition 4.4], we see that
\[
P\left(N_w<\lambda^{-1}R^{\beta}\right)\le CP\left(T_{R/8}<\lambda^{-1}R^{\beta}\right)\le C\exp\{-c\lambda^{a_0}\}
\]
for each $w\in D_1$. Thus, if we define the event $F_1$ by setting
\[
F_1=\left\{T_{R/8}<\lambda^{-1}R^{\beta}\right\}\cup\bigcup_{w\in D_1}\left\{N_w<\lambda^{-1}R^{\beta}\right\},
\]
then it follows that
\[
P(F_1)\le C\delta_1^{-3}\exp\{-c\lambda^{a_0}\}\le C\exp\{-c'\lambda^{a_0}\},
\]
where we have used the fact that $|D_1|\asymp\delta_1^{-3}\asymp\exp\{3\lambda^{a_1}\}$ and that $a_1=a_0/10$.

Next, for $b>0$, we define $G^w_1(b)$ to be the event
\[
\left\{\exists v\in B(2R)\text{ with }\mathrm{dist}\left(v,\gamma_{\mathcal{U}}(w,\infty)\right)\le\delta_1 R\text{ such that }P^{v}\left(R^{v}[0,\xi]\cap\gamma_{\mathcal{U}}(w,\infty)=\emptyset\right)\ge\delta_1^{b}\right\},
\]
where $\xi$ is the first time that $R^v$ exits $B(v,\sqrt{\delta_1}R)$. Applying Proposition 5.3.7 in the case that $K=100$, it holds that there exists $b_0>0$ such that
\[
P(G^w_1):=P\left(G^w_1(b_0)\right)\le C\delta_1^{50}. \tag{5.73}
\]
So, if we define the event $G_1:=\bigcup_{w\in D_1}G^w_1$, then
\[
P(G_1)\le C\delta_1^{47}.
\]
Suppose that the event $F_1^c\cap G_1^c$ occurs, and perform Wilson's algorithm (see [169]) from all points in $D_2$. For $w\in D_2$, define
\[
H^w_2:=\left\{\gamma_{\mathcal{U}}(w,0)\text{ enters }B(R/2)\text{ before it hits }U_1\right\},
\]
and let $H_2=\bigcup_{w\in D_2}H^w_2$. The event $H^w_2$ implies that $R^w$ enters $B(R/2)$ without hitting $U_1$. Since the event $G_1^c$ occurs, we see that
\[
P(H^w_2)\le\left(\delta_1^{b_0}\right)^{c\delta_1^{-1/2}},
\]
and thus we have
\[
P(H_2)\le C\delta_1^{10}.
\]
For $w\in D_2$, we then define $G^w_2=G^w_2(b_0)$ to be the event
\[
\left\{\exists v\in B(2R)\text{ with }\mathrm{dist}\left(v,\gamma_{\mathcal{U}}(w,\infty)\right)\le\delta_2 R\text{ such that }P^{v}\left(R^{v}[0,\xi]\cap\gamma_{\mathcal{U}}(w,\infty)=\emptyset\right)\ge\delta_2^{b_0}\right\},
\]
where $\xi$ is the first time that $R^v$ exits $B(v,\sqrt{\delta_2}R)$, and $b_0$ is the constant introduced above (see (5.73)). Using [148, Lemma 3.2 and Lemma 3.3] once again (with $r=\sqrt{\delta_2}R$ and $s=\delta_2 R$), we have
\[
P(G^w_2)\le C\delta_2^{50}.
\]
Importantly, we can take $b_0$ depending only on $K=100$. Define the event $G_2$ by setting $G_2:=\bigcup_{w\in D_2}G^w_2$, and then
\[
P(G_2)\le C\delta_2^{47}.
\]
Defining $H_k$ and $G_k$, $k\ge 3$, similarly, it follows that
\[
P(H_k\cup G_k)\le C\delta_k^{47}.
\]
Finally, we define
\[
J=F_1^c\cap G_1^c\cap\bigcap_{k=2}^{k_0}\left(H_k^c\cap G_k^c\right).
\]
On the event $J$, we have the following.

• For all $k=1,2,\dots,k_0$ and every $w\in D_k$, the first time that $\gamma_{\mathcal{U}}(0,w)$ exits $B(R/8)$ is greater than $\lambda^{-1}R^{\beta}$.

• The set $D_{k_0}$ disconnects $0$ and $B(R)^c$.

Thus, on the event $J$, it holds that $B_{\mathcal{U}}\left(0,\lambda^{-1}R^{\beta}\right)\subseteq B(R)$.
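The loop-erasures $\mathrm{LE}(R^w)$ used in Wilson's algorithm above are chronological: the walk is scanned from its start, and each loop is erased as soon as it closes. As a purely illustrative aside (not part of the proof), this operation on an abstract vertex sequence can be written in a few lines; the function name `loop_erase` is ours:

```python
def loop_erase(path):
    """Chronological loop-erasure LE(path): scan the walk and, whenever a
    vertex is revisited, erase the loop it closes.  Returns a simple path
    with the same start and end points as the input walk."""
    erased = []          # the loop-erased path built so far
    position = {}        # vertex -> its index in `erased`
    for v in path:
        if v in position:
            # v closes a loop: delete everything after its first visit
            cut = position[v] + 1
            for u in erased[cut:]:
                del position[u]
            del erased[cut:]
        else:
            position[v] = len(erased)
            erased.append(v)
    return erased
```

For example, `loop_erase([0, 1, 2, 1, 3])` returns `[0, 1, 3]`: the loop `1, 2, 1` is erased when the walk revisits `1`.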
Since
\[
P(J^c)\le C\exp\{-c'\lambda^{a_0}\}+C\sum_{k=1}^{k_0}\delta_k^{47}\le C\exp\{-\lambda^{a_1}\},
\]
we have thus completed the proof.

We are now ready to establish the main result of the section.

Theorem 5.6.2. There exist constants $\lambda_0>1$ and $c',C',a'\in(0,\infty)$ such that, for all $R\ge 1$ and $\lambda\ge\lambda_0$,
\[
P\left(\max_{z\in B(R^{1/\beta})}\mu_{\mathcal{U}}\left(B_{\mathcal{U}}\left(z,\lambda^{a'}R\right)\right)\ge\lambda R^{3/\beta}\right)\le C'\exp\{-c'\lambda^{a'}\}. \tag{5.74}
\]

Proof. Since the proof is very similar to that of Theorem 5.5.2, we will only explain how to modify it here; we will also use the same notation as in the proof of Theorem 5.5.2. Proposition 5.6.1 tells us that there exist constants $c,C,b\in(0,\infty)$ such that
\[
P(A'_1)\le C\exp\{-c\lambda^{b}\}, \tag{5.75}
\]
where $A'_1:=\{\mu_{\mathcal{U}}(B_{\mathcal{U}}(0,\lambda R))\ge\lambda^{10}R^{3/\beta}\}$. In this proof, we choose the constant $b$ in this way, and let $b_1=b/1000$. Using this constant $b_1$, we define the events $A_2,\dots,A_5$ as in the proof of Theorem 5.5.2. Let $A=(A'_1)^c\cap\left(\bigcap_{i=2}^{5}A_i^c\right)$, so that
\[
P(A)\ge 1-C\exp\{-\lambda^{b_1}/M\},
\]
see (5.69). We also recall the events $B_1$ and $B_2$ defined in the proof of Theorem 5.5.2, for which
\[
P(B_1\cup B_2)\le C\exp\{-\eta\lambda^{b_1}/2\},
\]
see (5.70) and (5.71). Moreover, let
\[
B'_3=\left\{\exists i\in I\text{ such that }\mu_{\mathcal{U}}\left(B_{\mathcal{U}}(w_i,\lambda R)\right)\ge\lambda^{10}R^{3/\beta}\right\}.
\]
Combining (5.75) with the translation invariance of the UST, we have
\[
P(B'_3)\le Ce^{-c\lambda^{b}}\times e^{3\lambda^{b_1}}\le Ce^{-c\lambda^{b}/2},
\]
where we have also used the fact that $|W|\asymp e^{3\lambda^{b_1}}$ and $b_1=b/1000$. Setting $B=B_1^c\cap B_2^c\cap(B'_3)^c$, we then have that
\[
P(B)\ge 1-C\exp\{-\eta\lambda^{b_1}/4\}.
\]
Now, suppose that the event $A\cap B$ occurs, and let $x\in\gamma\cap B(4R^{1/\beta})$. We can then find some $i\in I$ such that $x\in\gamma[s_i,s_{i+1}]$. Since $A_5^c$ holds, we have
\[
d_{\mathcal{U}}(x_{i-1},x_i)\le\exp\{-\delta\lambda^{b_1}/2\}R\quad\text{and}\quad d_{\mathcal{U}}(x_i,x_{i+1})\le\exp\{-\delta\lambda^{b_1}/2\}R.
\]
Furthermore, since $B_1^c\cap A_3^c\cap B_2^c$ holds, we have that
\[
z_i\in\gamma[s_{i-1},s_i]\cup\gamma[s_i,s_{i+1}]\quad\text{and}\quad d_{\mathcal{U}}(w_i,z_i)\le\exp\{-\lambda^{b_1}/4\}R.
\]
This implies that $d_{\mathcal{U}}(w_i,x)\le\exp\{-\delta\lambda^{b_1}/4\}R$. Given that $(B'_3)^c$ also holds, we therefore have
\[
\mu_{\mathcal{U}}\left(B_{\mathcal{U}}(x,\lambda R/2)\right)\le\lambda^{10}R^{3/\beta}.
\]
Consequently, we have proved that there exist universal constants $C,\delta,b_1\in(0,\infty)$ such that, for all $R$ and $\lambda$,
\[
P\left(\mu_{\mathcal{U}}\left(B_{\mathcal{U}}(x,\lambda R/2)\right)\le\lambda^{10}R^{3/\beta}\ \text{for all }x\in\gamma\cap B(4R^{1/\beta})\right)\ge 1-C\exp\{-\delta\lambda^{b_1}\}. \tag{5.76}
\]
Similarly to the comment at the end of the proof of Theorem 5.5.2, given (5.76), the proof of (5.74) follows by applying the same strategy as that used to prove Lemma 5.4.4 given Lemma 5.4.3. Indeed, given (5.76), we can use a net whose mesh size is exponentially small in $\lambda$, which guarantees the exponential bound as in (5.74). The simple modification is again left to the reader.

5.7 Convergence of finite-dimensional distributions

As noted in the introduction, the existence of a scaling limit for the three-dimensional LERW was first demonstrated in [107]. The work in [107] established the result in the Hausdorff topology, and this was recently extended in [127] to the uniform topology for parameterized curves. Whilst the latter seems a particularly appropriate topology for understanding the scaling limit of the LERW, the results in [107, 127] are restrictive when it comes to the domain upon which the LERW is defined. More specifically, we say that a LERW is defined in a domain $D$ if it starts at an interior point of $D$ and ends when it reaches the boundary of $D$. The assumptions in [107] cover the case of LERWs defined in domains with a polyhedral boundary, while [127] requires the domain to be a ball or the full space.

In this section, we extend the existence of the scaling limit to LERWs defined in the domain $\mathbb{R}^3\setminus\bigcup_{j=1}^{K}\mathrm{tr}\,\mathcal{K}_j$, where each $\mathcal{K}_j$ is itself a path of the scaling limit of a LERW. Once we gain this level of generality, we use Wilson's algorithm to obtain the convergence in distribution of rescaled subtrees of the UST (see Figure 5.8 for an example realisation of the subtree spanning a finite collection of points). This will be crucial for establishing the convergence part of Theorem 5.1.1. We begin by introducing some notation for subtrees.

5.7.1 Parameterized trees

A parameterized tree is an encoding for an infinite tree embedded in the closure of $\mathbb{R}^3$.
This encoding is specialized for infinite trees with a finite number of spanning points and one end.

Figure 5.8: A realisation of a subtree of the UST of $\delta\mathbb{Z}^3$ spanned by $0$ and the corners of the cube $[-1,1]^3$. The tree includes part of its path towards infinity (in green). Colours indicate different LERWs used in Wilson's algorithm.

More precisely, a parameterized tree $\mathcal{T}$ with $K$ spanning points is defined as $\mathcal{T}=(X,\Gamma)$, where:

1. $X=\{x(1),\dots,x(K)\}\subset\mathbb{R}^3$ are the spanning (or distinguished) points; and

2. $\gamma^{x(i)}$ is a transient parameterized (simple) curve starting at $x(i)$, and $\Gamma=\{\gamma^{x(i)}:1\le i\le K\}$. We require that for any pair $i,j$ there exist merging times $s_{i,j},s_{j,i}\ge 0$ satisfying

(a) $\gamma^{x(i)}|_{[s_{i,j},\infty)}=\gamma^{x(j)}|_{[s_{j,i},\infty)}$; and

(b) $\mathrm{tr}\,\gamma^{x(i)}|_{[0,s_{i,j})}\cap\mathrm{tr}\,\gamma^{x(j)}|_{[0,s_{j,i})}=\emptyset$.

Let $\mathcal{F}_K$ be the space of parameterized trees with $K$ distinguished points. We endow $\mathcal{F}_K$ with the distance
\[
d_{\mathcal{F}_K}\left(\mathcal{T},\tilde{\mathcal{T}}\right):=\max_{1\le i\le K}\left\{\chi\left(\gamma^{x(i)},\tilde{\gamma}^{\tilde{x}(i)}\right)\right\}+\max_{1\le i,j\le K}\left\{\left|s_{i,j}-\tilde{s}_{i,j}\right|\right\}
\]
for $\mathcal{T}=(X,\Gamma)$, $\tilde{\mathcal{T}}=(\tilde{X},\tilde{\Gamma})\in\mathcal{F}_K$. We write
\[
\mathrm{tr}\,\mathcal{T}=\bigcup_{\gamma\in\Gamma}\mathrm{tr}\,\gamma
\]
for the trace of a parameterized tree.

Proposition 5.7.1. Let $\mathcal{T}$ be a parameterized tree. Then $\mathrm{tr}\,\mathcal{T}$ is a topological tree with one end. Additionally, for any $z,w\in\mathrm{tr}\,\mathcal{T}$ there exist a unique curve from $z$ to infinity in $\mathcal{T}$, denoted by $\gamma^{z}$, and a unique curve from $z$ to $w$ in $\mathcal{T}$, denoted by $\gamma^{z,w}$.

Proof. The set $\mathrm{tr}\,\mathcal{T}$ is path-connected as a consequence of condition (2a) in the definition of a parameterized tree. It is also one-ended, since $\bigcap_{i=1}^{K}\gamma^{x(i)}$ is a single parameterized curve towards infinity.

The main task in this proof is to show that there cannot be cycles embedded in $\mathrm{tr}\,\mathcal{T}$. We proceed by contradiction. Let $S^1$ be the circle and assume that $\varphi:S^1\to\mathrm{tr}\,\mathcal{T}$ is an injective embedding. Since every curve in $\Gamma$ is simple and $\varphi$ is injective, $\varphi(S^1)$ intersects at least two different curves, say $\gamma^{x(i)}$ and $\gamma^{x(j)}$. From the definition of merging times, we see that $T^2=\mathrm{tr}\,\gamma^{x(i)}\cup\mathrm{tr}\,\gamma^{x(j)}$ is homeomorphic to $([0,\infty)\times\{0\})\cup(\{1\}\times[0,1])$, but the latter space cannot contain an embedding of $S^1$.
It follows that $\varphi(S^1)$ intersects at least a third curve $\gamma^{x(\ell)}$. Assume first that $\varphi(S^1)$ is contained in $T^3=\mathrm{tr}\,\gamma^{x(i)}\cup\mathrm{tr}\,\gamma^{x(j)}\cup\mathrm{tr}\,\gamma^{x(\ell)}$. Under this assumption, it is necessary that $\gamma^{x(\ell)}$ intersects $\gamma^{x(i)}$ and $\gamma^{x(j)}$ before these last two curves merge (otherwise the case is similar to that of $T^2$). Denote the intersection times by $t_{\ell,i}$ and $t_{i,\ell}$, so that $\gamma^{x(\ell)}(t_{\ell,i})=\gamma^{x(i)}(t_{i,\ell})$; we use the same notation for $\gamma^{x(j)}$. Then we have that $t_{i,\ell}<s_{i,j}$ and $t_{j,\ell}<s_{j,i}$. Without loss of generality, $t_{\ell,i}<t_{\ell,j}$. However, it is easy to verify that $t_{\ell,i}$ is not the merging time $s_{\ell,i}$, since $\gamma^{x(\ell)}|_{[t_{\ell,i},\infty)}$ does not merge with $\gamma^{x(i)}$ at that point. Therefore $s_{\ell,i}$ does not exist, and this conclusion contradicts the definition of $\Gamma$. It follows that $\varphi(S^1)$ is not contained in $T^3$, but then it intersects more curves, eventually all of them in $\mathrm{tr}\,\mathcal{T}=\bigcup_{i=1}^{K}\mathrm{tr}\,\gamma^{x(i)}$. However, the argument that we used for $T^3$ also applies to $\mathrm{tr}\,\mathcal{T}$. We conclude that the embedding $\varphi$ does not exist.

Finally, observe that $\mathrm{tr}\,\mathcal{T}$ is one-ended and all curves in $\Gamma$ are parameterized towards infinity. It is then straightforward to define $\gamma^{z}$ and $\gamma^{z,w}$.

A corollary of Proposition 5.7.1 is that the intrinsic distance on $\mathrm{tr}\,\mathcal{T}$ is well-defined. It is given by
\[
d_{\mathcal{T}}(z,w):=T\left(\gamma^{z,w}\right),\qquad z,w\in\mathrm{tr}\,\mathcal{T},
\]
where $T(\cdot)$ is the duration of a curve.

We will consider restrictions of parameterized trees to balls centred at the origin. For a parameterized tree $\mathcal{T}=(X,\Gamma)$, let $R\ge 1$ be large enough so that $X\subseteq B_E(R)$. We restrict each curve in $\Gamma$ to $\gamma^{x(i)}|_R$ (where the restriction to the ball of radius $R>0$ is in the sense described in Subsection 5.3.2), and define the restriction of a parameterized tree to $B_E(R)$ as the subset of $\mathbb{R}^3$
\[
\mathcal{T}|_R:=\bigcup_{\gamma\in\Gamma}\mathrm{tr}\,\gamma|_R.
\]
Note that $\mathcal{T}|_R$ may not be connected for some values of $R>0$, but for $R$ large enough, $\mathcal{T}|_R$ is a topological tree (Figure 5.9 gives an example of both cases).

5.7.2 The scaling limit of subtrees of the UST

We introduce the main results of this section. Let $\mathcal{U}_n$ be the uniform spanning tree on $2^{-n}\mathbb{Z}^3$.
We are interested in subtrees of $\mathcal{U}_n$ spanned by $K$ distinguished points. Let $x(1),\dots,x(K)$ be distinct points in $\mathbb{R}^3$ and let $X_n=\{x_n(1),\dots,x_n(K)\}$ be a subset of $2^{-n}\mathbb{Z}^3$ such that $x_n(i)\to x(i)$ as $n\to\infty$ for each $i=1,\dots,K$. Denote by $\gamma_n^{x_n(i)}$ the transient path in $\mathcal{U}_n$ starting at $x_n(i)$ and parameterized by path length. We set $\bar{\gamma}_n^{x(i)}$ to be the $\beta$-parameterization of $\gamma_n^{x_n(i)}$ and $\Gamma_n=\{\bar{\gamma}_n^{x_n(i)}:1\le i\le K\}$. Then $\mathcal{S}^K_n=(X_n,\Gamma_n)$ is the parameterized tree corresponding to the subtree of $\mathcal{U}_n$ spanned by $x_n(1),\dots,x_n(K)$ and the point at infinity.

Theorem 5.7.2. The sequence of parameterized trees $(\mathcal{S}^K_n)_{n\in\mathbb{N}}$ converges weakly to $\hat{\mathcal{S}}^K$ in the space $\mathcal{F}_K$ as $n\to\infty$.

Figure 5.9: $\mathcal{S}$ is a parameterized tree with spanning points $x(1)$, $x(2)$ and $x(3)$. The restriction $\mathcal{S}|_s$ is the union of the paths between $x(i)$ and $p_i$, with $i=1,2,3$. In this example, $\mathcal{S}|_r$ and $\mathcal{S}|_s$ are different inside the radius $m$. A crucial difference between these two sets is that $\mathcal{S}|_r$ is connected, but $\mathcal{S}|_s$ is disconnected.

The proof of Theorem 5.7.2 relies on the convergence of the branches of the uniform spanning tree as they appear in Wilson's algorithm. In the next subsection, we control the behaviour of a LERW before it hits an approximation of a parameterized tree; these loop-erased random walks correspond to the branches of the UST. Then Proposition 5.7.8 shows that convergence of such branches implies convergence of parameterized trees. After these arguments, we are prepared for the proof of Theorem 5.7.2, which we present in Subsection 5.7.5.

Conversely, Proposition 5.7.6 shows that convergence of parameterized trees implies the convergence of the intrinsic distance. We thus get the following corollary of Theorem 5.7.2.

Corollary 5.7.1. Let $(x_\delta(i))_{i=1}^{K}$ be a collection of points in $\delta\mathbb{Z}^3$ such that $x_\delta(i)\to x(i)$ for all $i=1,\dots,K$, for some collection of distinct points $(x(i))_{i=1}^{K}$ in $\mathbb{R}^3$.
Along the subsequence $\delta_n=2^{-n}$, it holds that
\[
\left(\delta_n^{\beta}\,d_{\mathcal{U}}\left(x_{\delta_n}(i),x_{\delta_n}(j)\right)\right)_{i,j=1}^{K}
\]
converges in distribution.

5.7.3 Parameterized trees and random walks

We begin with a property concerning the hittability of a parameterized tree. We say that a parameterized tree $\mathcal{T}_\delta$ is on the scaled lattice $\delta\mathbb{Z}^3$ if each one of its curves defines a path on $\delta\mathbb{Z}^3$.

Definition 5.7.3. Let $\delta\in(0,1)$, $R\ge 1$ and $\varepsilon\in(0,1)$, and let $\mathcal{T}_\delta$ be a parameterized tree on $\delta\mathbb{Z}^3$. We say that $\mathcal{T}_\delta$ is $\eta$-hittable in $B_\delta(0,R)$ if the following event occurs:
\[
H(\mathcal{T}_\delta,\varepsilon;\eta):=\left\{\forall x\in B_\delta(0,R)\text{ with }\mathrm{dist}(x,\mathcal{T}_\delta)\le\varepsilon^{2},\ P^{x}\left(S^{x}\left[0,\xi_S\left(B_\delta(x,\varepsilon^{1/2})\right)\right]\cap\mathcal{T}_\delta=\emptyset\right)\le\varepsilon^{\eta}\right\}.
\]
In the definition above, recall that $\xi_S(B_\delta(x,\varepsilon^{1/2}))$ stands for the first exit time from the $\delta$-scaled discrete ball $B_\delta(x,\varepsilon^{1/2})$.

Proposition 5.7.4. There exist constants $\eta>0$ and $C<\infty$ such that, if $\mathcal{S}^K_\delta$ is a parameterized subtree of the uniform spanning tree on $\delta\mathbb{Z}^3$ with $K$ spanning points, then for all $\delta\in(0,1)$, $R\ge 1$ and $\varepsilon>0$,
\[
P\left(H\left(\mathcal{S}^K_\delta|_R,\varepsilon;\eta\right)\right)\ge 1-CKR^{3}\varepsilon.
\]

Proof. Recall that any path towards infinity in the uniform spanning tree is equal, in distribution, to an ILERW. Hence the probability that a random walk started at $x\in B_\delta(0,R)$ hits the tree $\mathcal{S}^K_\delta|_R$ is at least the probability that it hits a restricted ILERW, where the restriction is up to the first exit of the LERW from $B_\delta(0,R)$. Proposition 5.7.4 is then a consequence of Proposition 5.3.7.

Remark. The proof of Proposition 5.7.9 can be generalized to any subset of $\mathbb{R}^3$ that is $\eta$-hittable with high probability. We restrict to the case of parameterized subtrees for clarity, and because it is the most relevant for our purposes. To further increase the clarity of the proof of Proposition 5.7.9, the reader can think of the subtree $\mathcal{S}^K_n$ as consisting of a single ILERW.

5.7.4 Essential branches of parameterized trees

Let $\mathcal{T}=(X,\Gamma)$ be a parameterized tree.
For a leaf $x(i)\in X$ with $i>1$, let
\[
y(i):=\mathrm{tr}\,\gamma^{x(i)}\cap\bigcup_{j=1}^{i-1}\mathrm{tr}\,\gamma^{x(j)} \tag{5.77}
\]
be the intersection point of $\gamma^{x(i)}$ with any of the curves with a smaller index. We define $y(1)$ to be the point at infinity and say that $y(i)$ is a branching point. When we compare (5.77) with conditions (2a) and (2b) in the definition of a parameterized tree, we see that
\[
y(i)=\gamma^{x(i)}\left(s_{i,m(j)}\right),
\]
where $s_{i,m(j)}=\min_{j<i}\{s_{i,j}\}$ is the first merging time.

The parameterized curves $\gamma^{x(i),y(i)}$ are called essential branches for $i=1,\dots,K$. Note that $\gamma^{x(1),y(1)}$ is the transient curve $\gamma^{x(1)}\in\mathcal{C}$, while $\gamma^{x(i),y(i)}\in\mathcal{C}_f$ for $i=2,\dots,K$. We denote the set of essential branches by
\[
\Gamma^{e}(\mathcal{T}):=\left\{\gamma^{x(i),y(i)}\right\}_{1\le i\le K}.
\]

Proposition 5.7.5. Assume that $\mathcal{T}_n\to\mathcal{T}$ in the space of parameterized trees $\mathcal{F}_K$. Then
\[
\gamma_n^{x_n(1),y(1)}\to\gamma^{x(1),y(1)}\quad\text{as }n\to\infty
\]
in the space $\mathcal{C}$. For $i=2,\dots,K$, the essential branches and the curves between branching points converge:
\[
\gamma_n^{x_n(i),y_n(i)}\to\gamma^{x(i),y(i)},\qquad\gamma_n^{y_n(i),y_n(j)}\to\gamma^{y(i),y(j)}\quad\text{as }n\to\infty
\]
in the space of finite parameterized curves $\mathcal{C}_f$.

Proof. The convergence of the first essential branch is immediate from the definition of the metric $d_{\mathcal{F}_K}$, since $\gamma_n^{x_n(1),y(1)}=\gamma_n^{x_n(1)}$.

To prove the convergence of the other essential branches, and of the curves between branching points, we first need to show that the spanning and branching points converge. Each $x_n(i)\in X_n$ is the initial point of a curve in $\Gamma_n$, and hence convergence in the space of parameterized trees implies that $x_n(i)\to x(i)$ as $n\to\infty$. Now we consider a branching point $y_n(i)$, with $i=2,\dots,K$. Recall that $y_n(i)=\gamma_n^{x(i)}(s_{i,m(j)}^n)$, where $s_{i,m(j)}^n=\min_{j<i}\{s_{i,j}^n\}$. Since convergence of the parameterized trees $\mathcal{T}_n$ implies convergence of the merging times $s_{i,j}^n\to s_{i,j}$ as $n\to\infty$, we have, for the sequence of minima, $s_{i,m(j)}^n\to s_{i,m(j)}$.
With an application of Proposition 5.3.2 (c), we get convergence of the branching points: $y_n(i)=\gamma_n^{x(i)}(s_{i,m(j)}^n)\to\gamma^{x(i)}(s_{i,m(j)})=y(i)$.

With convergence of both the spanning and branching points, Proposition 5.3.1 and Proposition 5.3.2 imply that the corresponding restrictions of $\gamma^{x(i)}$ converge.

Proposition 5.7.6. Assume that $\mathcal{T}_n=(X_n,\Gamma_n)$ converges to $\mathcal{T}$ in the space $\mathcal{F}_K$. If the corresponding collections of spanning points are $X_n=\{x_n(1),\dots,x_n(K)\}$ and $X=\{x(1),\dots,x(K)\}$, then
\[
\left(d_{\mathcal{T}_n}(x_n(i),x_n(j))\right)_{1\le i,j\le K}\to\left(d_{\mathcal{T}}(x(i),x(j))\right)_{1\le i,j\le K} \tag{5.78}
\]
as $n\to\infty$.

Proof. Proposition 5.7.1 shows that restriction, concatenation and time-reversal of the curves in $\Gamma_n$ define $\gamma_n^{x_n(i),x_n(j)}$. In fact,
\[
\gamma_n^{x_n(i),x_n(j)}=\gamma_n^{x_n(i),y_n(i)}\oplus\gamma_n^{y_n(\ell_1),y_n(\ell_2)}\oplus\dots\oplus\gamma_n^{y_n(\ell_{m-1}),y_n(\ell_m)}\oplus\gamma_n^{y_n(j),x_n(j)}, \tag{5.79}
\]
where $\ell_1=i$ and $\ell_m=j$. Then Proposition 5.7.5 implies the convergence of each essential branch, and Proposition 5.3.1 and Proposition 5.3.2 imply the convergence of $(\gamma_n^{x_n(i),x_n(j)})_{n\in\mathbb{N}}$. In particular, the duration of each curve in (5.79) converges, and we get (5.78).

Conversely, we can reconstruct a tree from a set of essential branches.

Proposition 5.7.7. Let $X=\{x(1),\dots,x(K)\}\subset\mathbb{R}^3$ and consider a collection of curves with the following conditions:

(a) $\gamma^{x(1),\tilde{y}(1)}$ is a transient parameterized curve starting at $x(1)$; here $\tilde{y}(1)$ denotes the point at infinity.

(b) For $i=2,\dots,K$, $\gamma^{x(i),\tilde{y}(i)}$ is a parameterized curve starting at $x(i)$ and ending at $\tilde{y}(i)$, where the endpoint $\tilde{y}(i)$ is the first hitting point of $\bigcup_{j=1}^{i-1}\mathrm{tr}\,\gamma^{x(j),\tilde{y}(j)}$.

Then $\{\gamma^{x(i),\tilde{y}(i)}\}_{1\le i\le K}$ defines a set of transient curves $\Gamma=\{\gamma^{x(i)}\}_{1\le i\le K}$ and a parameterized tree $\mathcal{T}=(X,\Gamma)$.

Proof. First we construct $\Gamma$ from the collection of curves $\{\gamma^{x(i),\tilde{y}(i)}\}_{1\le i\le K}$. Note that $\gamma^{x(1),\tilde{y}(1)}$ is already a transient curve starting at $x(1)$. We construct the other elements of $\Gamma$ recursively. Assume that $\gamma^{x(1)},\dots,\gamma^{x(i-1)}$ have been defined and satisfy conditions (2a) and (2b) in the definition of a parameterized tree.
Recall that the endpoint of $\gamma^{x(i),\tilde{y}(i)}$ is $\tilde{y}(i)$, and this point lies on some $\gamma^{x(j)}$ with $j<i$. Then
\[
\gamma^{x(i)}=\gamma^{x(i),\tilde{y}(i)}\oplus\gamma^{x(j)}|_{[\tilde{y}(i),\infty)}.
\]
Since the endpoint of $\gamma^{x(i),\tilde{y}(i)}$ is the first hitting point of $\bigcup_{j=1}^{i-1}\mathrm{tr}\,\gamma^{x(j),\tilde{y}(j)}$, we have that $\mathrm{tr}\,\gamma^{x(i)}|_{[x(i),\tilde{y}(i))}\cap\mathrm{tr}\,\gamma^{x(j)}=\emptyset$ for $j<i$. This construction ensures that $\gamma^{x(i)}$ satisfies conditions (2a) and (2b) when we compare it against curves with smaller indices. We continue with this construction for $i=2,\dots,K$ to define $\Gamma$. Therefore $\mathcal{T}=(X,\Gamma)$ is a parameterized tree. Finally, note that $\tilde{y}(i)$ satisfies (5.77), and hence $\tilde{y}(i)=y(i)$ for $i=2,\dots,K$.

Proposition 5.7.8. Let $(\mathcal{T}_n)_{n\in\mathbb{N}}$ be a sequence of parameterized trees with essential branches $\Gamma^{e}(\mathcal{T}_n)=\{\gamma_n^{x_n(i),y_n(i)}\}_{1\le i\le K}$. Assume that
\[
\left(\gamma_n^{x_n(i),y_n(i)}\right)_{1\le i\le K}\to\left(\gamma^{x(i),y(i)}\right)_{1\le i\le K} \tag{5.80}
\]
in the product topology as $n\to\infty$, and that $\{\gamma^{x(i),y(i)}\}$ satisfy the conditions in Proposition 5.7.7. Then $(\mathcal{T}_n)_{n\in\mathbb{N}}$ converges in the metric space $\mathcal{F}_K$ to a parameterized tree $\mathcal{T}$ for which $\Gamma^{e}(\mathcal{T})=\{\gamma^{x(i),y(i)}\}_{1\le i\le K}$ is the set of essential branches.

Proof. Convergence of $(\gamma_n^{x_n(i),y_n(i)})_{1\le i\le K}$ in the product topology implies that each element of $\Gamma^{e}(\mathcal{T}_n)$ converges. Proposition 5.7.7 shows that every curve $\gamma_n^{x_n(i)}$ is the concatenation of sub-curves of $\Gamma^{e}(\mathcal{T}_n)$. Moreover, $\{\gamma^{x(i),y(i)}\}_{1\le i\le K}$ satisfy the conditions in Proposition 5.7.7, and hence they define a parameterized tree $\mathcal{T}$ with $\Gamma=(\gamma^{x(i)})$. Finally, (5.80) implies the convergence of the branching points $y_n(i)$, and from here we get convergence of the merging times. Then Proposition 5.3.1 and Proposition 5.3.2 imply that $\chi(\gamma_n^{x_n(i)},\gamma^{x(i)})\to 0$ as $n\to\infty$. Therefore $d_{\mathcal{F}_K}(\mathcal{T}_n,\mathcal{T})\to 0$ as $n\to\infty$.

5.7.5 Proof of Theorem 5.7.2

The proof of Theorem 5.7.2 is by mathematical induction. The convergence in the scaling limit of the ILERW provides the base case. We state the inductive step in Proposition 5.7.9.

Proposition 5.7.9. Let $\mathcal{U}_n$ be the uniform spanning tree on $2^{-n}\mathbb{Z}^3$. Let $(x_n(i))_{i=1,\dots,K+1}$ be a set of vertices in $2^{-n}\mathbb{Z}^3$, and assume that $x_n(i)$ converges to $x(i)\in\mathbb{R}^3$ as $n\to\infty$.
Let γ¯xn(i)n be the β-parameterization ofthe transient path in Un starting at xn(i) and directed towards infinity. As-sume that (γ¯xn(i)n )i=1,...,K converges weakly as a parameterized tree to SˆK .Then (γ¯xn(i)n )i=1,...,K+1 converges weakly to a parameterized tree SˆK+1, withrespect to the metric FK+1 for parameterized trees.We devote the rest of this section to the proof of Proposition 5.7.9. Itis based in Proposition 5.7.8. According to the latter proposition, it sufficesto prove convergence of the essential branches with respect to the producttopology. We then shift our attention to the essential branches of an infinitesubtree of the uniform spanning tree. Wilson’s algorithm provides a naturalconstruction of them; and we present it below. Subsection 5.7.6 developsthe arguments for the proof of Proposition 5.7.9.Let Un be the uniform spanning tree on 2−nR3. Let xn(i) ∈ R3 andx(i) ∈ R3 as in the statement of Proposition 5.7.9, so xn(i) → x(i) asn→∞ for i = 1, . . . ,K+1. Now we apply Wilson’s algorithm on the scaledlattice 2−nR3.• Let γ1n be an ILERW starting at xn(1), andγ¯x(1),y(1)n (t) = γ1n(2βnt), ∀t ≥ 0.167be its β-parameterization. Note that we omit the sub-index n on x(1)and y(1) to ease the notation. This transient curve is the first branchof the parameterized tree.• Let γin be the loop-erased random walk started at xn(i), and stoppedwhen it hits any of the previous loop-erased random walks γ1n, . . . , γi−1n .Let yn(i) ∈ 2−nR3 be the hitting point, and setγ¯x(i),y(i)n = γin(2βnt), ∀t ∈ [0, 2−βn len(γin)].The duration of the curve γ¯x(i),y(i)n is 2−βn len(γin), i.e. the length ofthe path γin with the appropriate scaling. We also omit the sub-indexn on x(i) and y(i) when they appear in the curve γ¯x(i),y(i)n .Set Xn = {xn(1), . . . , xn(K)} and Γen = {γ¯x(i),y(i)n : i = 1, . . . 
, K\}$. By Proposition 5.7.7, $X_n$ and $\Gamma^{e}_n$ determine a parameterized tree $\mathcal{S}^K_n$, and Wilson's algorithm shows that $\operatorname{tr} \mathcal{S}^K_n$ is equal in distribution to the subtree of $\mathcal{U}_n$ spanned by $X_n$ and the point at infinity.

As part of the proof of Theorem 5.7.2, we will show that the limit of parameterized trees $\hat{\mathcal{S}}^K$ has the following (formal) representation (see Lemma 5.7.19). The next construction is Wilson's algorithm, but in this case the branches have the distribution of the scaling limit of the ILERW.

• Let $\hat{\gamma}^{x(1),y(1)} \in \mathcal{C}$ be the scaling limit of the ILERW starting at $x(1)$, endowed with the natural parameterization; see [127].

• Let $\hat{\gamma}^{x(i),y(i)} \in \mathcal{C}_f$ be the scaling limit of the LERW started at $x(i)$, and stopped when it hits any of $\hat{\gamma}^{x(1),y(1)}, \ldots, \hat{\gamma}^{x(i-1),y(i-1)}$. (Our construction will give that this hitting time is finite; see Lemma 5.7.18.) Here we denote the hitting point by $y(i)$.

Set $X = \{x(1), \ldots, x(K)\}$ and $\hat{\Gamma}^{e} = \{\hat{\gamma}^{x(i),y(i)}\}_{1 \leq i \leq K}$. Proposition 5.7.7 defines the parameterized tree $\hat{\mathcal{S}}^K = (X, \hat{\Gamma})$ with set of essential branches $\Gamma^{e}(\hat{\mathcal{S}}^K) = \hat{\Gamma}^{e}$.

5.7.6 Proof of Proposition 5.7.9

The next proposition allows us to work with restrictions of parameterized trees when we compare them within a smaller subset.

Proposition 5.7.10. Let $\mathcal{S}^K_\delta$ be a parameterized subtree of the uniform spanning tree on $\delta\mathbb{Z}^3$ with $K$ spanning points. Assume that $|x| < m$ for each $x \in X$. Then for $r > s \geq m^2 > 0$,
$$\mathbf{P}\left( (\mathcal{S}^K_\delta|_r \,\triangle\, \mathcal{S}^K_\delta|_s) \cap B_E(m) \neq \emptyset \right) \leq K \delta m^{-1} \left[ 1 + O(m^{-1}) \right]. \tag{5.81}$$

Proof. The restrictions $\mathcal{S}^K_\delta|_r$ and $\mathcal{S}^K_\delta|_s$ differ inside $B_E(m)$ when a path returns to $B_E(m)$ after its first exit from $B_E(r)$; we refer to Figure 5.9 for an example of this situation. By virtue of Wilson's algorithm and a union bound, the probability in (5.81) is bounded above by the probability of return to $B(m\delta^{-1})$ of $K$ simple random walks on $\mathbb{Z}^3$:
$$K \sup_{x \in \partial B(s\delta^{-1})} \mathbf{P}^x_S\left( \tau_S(B(m\delta^{-1})) < \infty \right),$$
where $\mathbf{P}^x_S$ denotes the probability measure of a simple random walk on $\mathbb{Z}^3$ started at $x$, and $\tau_S(B(m\delta^{-1}))$ is the first time that the random walk $S$ hits the ball $B(m\delta^{-1})$.
Therefore, the upper bound in (5.81) follows from well-known estimates on the return probability for the simple random walk; see e.g. [117, Proposition 6.4.2].

The proof of Proposition 5.7.9 is divided into a sequence of lemmas, and these are grouped into five steps. The final, sixth step finishes the proof.

Step 1: set-up.

We begin with the set-up of the proof. First note that the assumptions of Proposition 5.7.9 indicate that $(\bar{\gamma}_n^{x(i)})_{1 \leq i \leq K}$ converges in distribution. From now on, we work in the coupling given by Skorokhod's representation theorem, in which $(\bar{\gamma}_n^{x(i)})_{1 \leq i \leq K}$ converges to a collection of continuous curves $(\hat{\gamma}^{x(i)})_{1 \leq i \leq K}$ almost surely.

Let $S_n = (S_n(t))_{t \in \mathbb{N}}$ be an independent random walk on $\delta_n\mathbb{Z}^3$ starting at $x_n(K+1)$. Consider the hitting time of the parameterized tree $\mathcal{S}^K_n$, given by
$$\xi^S_n = \inf\{ t \geq 0 : S_n(t) \in \operatorname{tr} \mathcal{S}^K_n \}.$$
We let $\gamma_n = \operatorname{LE}(S_n[0, \xi^S_n])$ be the corresponding LERW from $x_n(K+1)$ to $\mathcal{S}^K_n$, and set
$$\bar{\gamma}_n(t) = \gamma_n(2^{\beta n} t), \quad \forall t \in [0, 2^{-\beta n} \operatorname{len}(\gamma_n)].$$
We want to show that $\bar{\gamma}_n$ converges to a scaling limit. Since the domain $\mathbb{R}^3 \setminus \bigcup_{i=1}^K \operatorname{tr} \bar{\gamma}^{x(i)}$ does not have a polyhedral boundary, we cannot use [127, Theorem 1.3] directly. To get around this obstacle, we approximate with a simpler domain. Furthermore, to gain some control over the paths of the loop-erased random walks $(\bar{\gamma}_n^{x(i)})_{1 \leq i \leq K}$, we also need to work within a bounded domain.

We write $D_n(R) = D_{2^{-n}}(R)$ to denote a scaled discrete box with side length $R \geq 1$ around the origin. Since the points $x(1), \ldots, x(K+1)$ are fixed, we can take $R$ large enough so that $x(1), \ldots, x(K+1) \in D_n(R)$. For each curve $\bar{\gamma}_n^{x(i)} \in \Gamma$, and for the parameterized tree $\mathcal{S}^K_n$, we denote its restriction to the closed box $\bar{D}_n(R)$ with a super-index, as $\gamma_n^{x(i),R}$ and $\mathcal{S}^{K,R}_n$, respectively.
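The loop-erasure operation $\operatorname{LE}(\cdot)$ and the Wilson construction used throughout this section can be sketched in a few lines of code. The following is a minimal illustration on a finite block of $\mathbb{Z}^3$ with free boundary (the thesis works on the infinite lattice, where the first branch is run to infinity); the function names are ours and not notation from the thesis.

```python
import random

random.seed(0)

def loop_erase(path):
    """Chronological loop-erasure LE(S[0, xi]): whenever the walk revisits
    a vertex, the loop just closed is removed."""
    erased, index = [], {}
    for v in path:
        if v in index:                      # a loop closed at v
            cut = index[v] + 1
            for w in erased[cut:]:
                del index[w]
            del erased[cut:]
        else:
            index[v] = len(erased)
            erased.append(v)
    return erased

def wilson_ust(vertices, root, neighbours):
    """Wilson's algorithm: grow a uniform spanning tree by running random
    walks until they hit the current tree and keeping their loop-erasures,
    as in the construction of the branches above."""
    in_tree = {root}
    parent = {}
    for start in vertices:
        if start in in_tree:
            continue
        walk = [start]
        while walk[-1] not in in_tree:      # stop on hitting the tree
            walk.append(random.choice(neighbours(walk[-1])))
        branch = loop_erase(walk)
        for a, b in zip(branch, branch[1:]):
            parent[a] = b                   # edge directed towards the tree
        in_tree.update(branch)
    return parent

# Toy example: the UST of a 4x4x4 block of Z^3 (free boundary conditions).
N = 4
V = [(x, y, z) for x in range(N) for y in range(N) for z in range(N)]

def nbrs(v):
    x, y, z = v
    cand = [(x + 1, y, z), (x - 1, y, z), (x, y + 1, z),
            (x, y - 1, z), (x, y, z + 1), (x, y, z - 1)]
    return [w for w in cand if all(0 <= c < N for c in w)]

parent = wilson_ust(V, V[0], nbrs)
print(len(parent))  # a spanning tree of 64 vertices has 63 edges
```

Each non-root vertex receives exactly one parent pointer the first time a loop-erased branch reaches it, which is why the edge count comes out to $|V| - 1$ regardless of the random walks' trajectories.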
We also consider the LERW in the domain $D_n(R) \setminus \operatorname{tr} \mathcal{S}^{K,R}_n$. The exit time of this domain for the random walk $S_n$ is
$$\xi^{S,R}_n = \inf\{ t \geq 0 : S_n(t) \in \partial D_n(R) \cup \operatorname{tr} \mathcal{S}^{K,R}_n \}.$$
The curve $\gamma^R_n = \operatorname{LE}(S_n[0, \xi^{S,R}_n])$ is the LERW from $x_n(K+1)$ to either $\mathcal{S}^{K,R}_n$ or the boundary of $D_n(R)$; and we set
$$\bar{\gamma}^R_n(t) = \gamma^R_n(2^{\beta n} t), \quad \forall t \in [0, 2^{-\beta n} \operatorname{len}(\gamma^R_n)]. \tag{5.82}$$
Note that we omit $x_n(K+1)$ as a super-index of $\gamma^R_n$ to simplify the notation. We emphasize that $\bar{\gamma}^R_n$ is not necessarily the same as $\bar{\gamma}_n|_R$, where the latter is the restriction of the ILERW to the box $D_n(R)$.

For each integer $u$, the cubes at scale $u$ are closed cubes with vertices in $2^{-u}\mathbb{Z}^3$ and side length $2^{-u}$. For $u \leq n$, let $A^{u,R}_n$ be the $u$-dyadic approximation of $\operatorname{tr} \mathcal{S}^{K,R}_n$, defined by
$$A^{u,R}_n := \bigcup \left\{ C(u) : \operatorname{dist}(C(u), \operatorname{tr} \mathcal{S}^{K,R}_n) \leq 2^{-2u} \right\}. \tag{5.83}$$

Proposition 5.7.11. With $n \in \mathbb{N}$ fixed, the sequence of sets $(A^{u,R}_n)_{u \in \mathbb{N}}$ converges to $\operatorname{tr} \mathcal{S}^{K,R}_n$ in the Hausdorff topology. If $u \in \mathbb{N}$ is fixed, then the sequence $(A^{u,R}_n)_{n \in \mathbb{N}}$ is eventually constant, almost surely.

Proof. We begin with $n \in \mathbb{N}$ fixed. The construction of a $u$-dyadic approximation gives $d_H(A^{u,R}_n, \operatorname{tr} \mathcal{S}^{K,R}_n) \leq 2^{-(u-2)}$, from which the convergence of $(A^{u,R}_n)_{u \in \mathbb{N}}$ in the Hausdorff topology follows.

Next we consider $(A^{u,R}_n)_{n \in \mathbb{N}}$ with $u$ fixed. In this case, note that the a.s. convergence of $\mathcal{S}^{K,R}_n$ implies the a.s. convergence of $\operatorname{tr} \mathcal{S}^{K,R}_n$ in the Hausdorff topology. Then, for $N$ large enough, $d_H(\operatorname{tr} \mathcal{S}^{K,R}_n, \operatorname{tr} \mathcal{S}^{K,R}_m) < 2^{-4u}$ for all $n, m \geq N$, almost surely. It follows that $(A^{u,R}_n)_{n \geq N}$ is constant almost surely.

We denote the constant limit of $A^{u,R}_n$, as $n \to \infty$, by $A^{u,R}$.

For each $n, u \in \mathbb{N}$ with $u \leq n$, consider the loop-erasure of the random walk $S_n$ started from $x_n(K+1)$, and stopped when it exits $D_n(R) \setminus A^{u,R}_n$. We denote the latter hitting time by $\xi^{P,R}_u$, and the corresponding LERW by
$$\gamma^{u,R}_n = \operatorname{LE}(S_n[0, \xi^{P,R}_u]). \tag{5.84}$$
This curve has the $\beta$-parameterization
$$\bar{\gamma}^{u,R}_n(t) := \gamma^{u,R}_n(2^{\beta n} t), \quad \forall t \in [0, 2^{-\beta n} \operatorname{len}(\gamma^{u,R}_n)]. \tag{5.85}$$
The weak convergence of (5.85) is an immediate consequence of [127, Theorem 1.4] and Proposition 5.7.11.
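The dyadic approximation (5.83) is straightforward to compute for a finite set of sample points of a curve. The sketch below is our own illustration (the helper `dyadic_cubes_near` is not notation from the thesis): it records a cube by the integer coordinates of its lower corner at scale $u$ and keeps exactly the cubes within $2^{-2u}$ of the point set. The Hausdorff bound $2^{-(u-2)}$ used in the proof of Proposition 5.7.11 then follows from the cube diameter $\sqrt{3}\,2^{-u}$ plus the margin $2^{-2u}$.

```python
import itertools
import math

def dyadic_cubes_near(points, u):
    """Cubes at scale u (side 2^-u, corners in 2^-u Z^3) lying within
    2^(-2u) of a finite point set, mirroring the definition of A^{u,R}.
    A cube is recorded by its lower corner index k, i.e. 2^-u [k, k+1]^3."""
    side = 2.0 ** (-u)
    margin = 2.0 ** (-2 * u)
    cubes = set()
    for p in points:
        # candidate lower corners near p (one extra index for safety)
        lo = [math.floor((c - margin) / side) - 1 for c in p]
        hi = [math.floor((c + margin) / side) + 1 for c in p]
        for k in itertools.product(*[range(a, b + 1) for a, b in zip(lo, hi)]):
            # distance from p to the closed cube with lower corner k
            nearest = [min(max(c, ki * side), (ki + 1) * side)
                       for c, ki in zip(p, k)]
            if math.dist(p, nearest) <= margin:
                cubes.add(k)
    return cubes

# sample points along a diagonal segment in [0, 1]^3, approximated at u = 3
pts = [(i / 100, i / 100, i / 100) for i in range(101)]
approx = dyadic_cubes_near(pts, 3)
side = 2.0 ** -3
containing = {tuple(math.floor(c / side) for c in p) for p in pts}
print(containing <= approx)  # True: each sample point's own cube is kept
```

By construction, every kept cube is within $2^{-2u}$ of the sampled trace, and every sample point sits inside one of the kept cubes, which is the two-sided Hausdorff control the proposition relies on.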
We state this observation below as Lemma 5.7.15.

Figure 5.10: The figure shows the decomposition of the curves $\bar{\gamma}^{u,R}_n$ and $\bar{\gamma}^{v,R}_n$ used in the proof of Proposition 5.7.9. The curve $\bar{\gamma}^{u,R}_n$ is the concatenation of $\bar{\gamma}^0$ (in purple) and $\bar{\zeta}^{u,v}_n$ (in red). The curve $\bar{\gamma}^{v,R}_n$ is the concatenation of $\bar{\gamma}^0$ and $\bar{\eta}^{u,v}_n$ (in blue). The figure also shows a restriction of the random walk $S_n$ from $y_n$ to $z_n$ (in yellow). In this case, $S_n$ avoids hitting $A^{v,R}_n$ when it is close to $y_n$.

Step 2: comparing $\bar{\gamma}^{u,R}_n$ and $\bar{\gamma}^{v,R}_n$.

Let $\bar{\gamma}^{u,R}_n$ and $\bar{\gamma}^{v,R}_n$ be the loop-erased random walks defined in (5.85). Our aim is to bound the distance $\psi(\bar{\gamma}^{u,R}_n, \bar{\gamma}^{v,R}_n)$ for large values of $n$, $u$ and $v$. Let us consider the event where this distance is large. More precisely, for $\varepsilon > 0$ and integers $n$, $u$ and $v$, with $u, v < n$, we define
$$E^{u,v}_n(\varepsilon) := \left\{ \psi(\bar{\gamma}^{u,R}_n, \bar{\gamma}^{v,R}_n) \geq \varepsilon \right\}.$$
Recall that we use the same random walk on $2^{-n}\mathbb{Z}^3$ to generate $\bar{\gamma}^{u,R}_n$ and $\bar{\gamma}^{v,R}_n$. Typically, these two curves have a segment in common, $\bar{\gamma}^0$ (see Figure 5.10). We claim that $\bar{\gamma}^{u,R}_n \setminus \bar{\gamma}^0$ and $\bar{\gamma}^{v,R}_n \setminus \bar{\gamma}^0$ have a small effect on $\psi(\bar{\gamma}^u_n, \bar{\gamma}^v_n)$.

Towards proving the preceding claim, we start by introducing some further notation for elements in the curves $\bar{\gamma}^{u,R}_n$ and $\bar{\gamma}^{v,R}_n$; Figure 5.10 serves as a reference. For clarity, and without loss of generality, we elaborate our arguments on the event where the random walk $S_n$ hits the boundary of $P^{u,R}$ first, that is,
$$F_u := \left\{ \xi^{P,R}_u \leq \xi^{P,R}_v \right\},$$
and hence it generates $\bar{\gamma}^{u,R}_n$ before $\bar{\gamma}^{v,R}_n$. The symmetric event is $F_v := \{ \xi^{P,R}_v \leq \xi^{P,R}_u \}$. We will consider the restriction of the random walk $S_n$:
$$S^{u,v}_n := S_n|_{[\xi^{P,R}_u, \xi^{P,R}_v]}.$$
Denote the endpoint of $\bar{\gamma}^{u,R}_n$ by $y_n := S_n(\xi^{P,R}_u)$. To simplify notation, we denote the durations of $\bar{\gamma}^{u,R}_n$ and $\bar{\gamma}^{v,R}_n$ by
$$T^u = T(\bar{\gamma}^{u,R}_n), \quad T^v = T(\bar{\gamma}^{v,R}_n),$$
respectively. The last time that $S^{u,v}_n$ hits its past $\bar{\gamma}^{u,R}_n$ determines the endpoint of $\bar{\gamma}^0$. Let
$$\xi^{P,R}_z := \sup\left\{ t \leq \xi^{P,R}_v : S_n(t) \in \bar{\gamma}^{u,R}_n \right\}$$
and set $z_n := S_n(\xi^{P,R}_z)$.
Let $T_z$ be such that $\bar{\gamma}^{u,R}_n(T_z) = \bar{\gamma}^{v,R}_n(T_z) = z_n$. We then have the common curve $\bar{\gamma}^0 = \bar{\gamma}^{u,R}_n[0, T_z] = \bar{\gamma}^{v,R}_n[0, T_z]$. The difference between $\bar{\gamma}^{u,R}_n$ and $\bar{\gamma}^{v,R}_n$ lies in the curves
$$\bar{\zeta}^{u,v}_n := \bar{\gamma}^{u,R}_n[T_z, T^u], \quad \bar{\eta}^{u,v}_n := \bar{\gamma}^{v,R}_n[T_z, T^v].$$
Note that the range of $\bar{\eta}^{u,v}_n$ is a subset of $S^{u,v}_n$.

We now compare the shapes of $\bar{\gamma}^{v,R}_n$ and $\bar{\gamma}^{u,R}_n$. In particular, we note that the respective traces of these curves can be significantly different only if one of the next two bad events occurs. The first event controls the diameter of $\bar{\eta}^{u,v}_n$, while the second event imposes a limit on the size of $\bar{\zeta}^{u,v}_n$.

• Since $\bar{\eta}^{u,v}_n$ is a subset of $S^{u,v}_n$, $\bar{\eta}^{u,v}_n$ has a diameter larger than $\varepsilon_0$ only if $S^{u,v}_n$, the segment of the random walk $S_n$ between the hitting times $\xi^{P,R}_u$ and $\xi^{P,R}_v$, has a similarly large diameter. We denote this event by
$$D^{u,v}_n(\varepsilon_0) := \left\{ \operatorname{diam}(S^{u,v}_n) \geq \varepsilon_0 \right\}.$$

Figure 5.11: A realization of the event $D^{u,v}_n(\varepsilon)^c \cap Q(\varepsilon^M, \varepsilon)$. In the figure, $\bar{\gamma}^{u,R}_n$ is the concatenation of the purple and blue curves, while $\bar{\gamma}^{v,R}_n$ is the concatenation of the purple and red curves.

• On the complementary event $D^{u,v}_n(\varepsilon_0)^c$, the curve $\bar{\zeta}^{u,v}_n$ has diameter larger than $\varepsilon$ only if $\bar{\gamma}^{u,R}_n$ has an $(\varepsilon_0, \varepsilon)$-quasi-loop. Figure 5.11 shows an example of this situation. Let
$$Q(\varepsilon_0, \varepsilon; \gamma) := \{ \gamma \text{ has an } (\varepsilon_0, \varepsilon)\text{-quasi-loop} \},$$
and $Q_n(\varepsilon_0, \varepsilon) = Q(\varepsilon_0, \varepsilon; \bar{\gamma}^u_n) \cup Q(\varepsilon_0, \varepsilon; \bar{\gamma}^v_n)$.

Combining the definitions above, we introduce a bad event for the shape by setting
$$B^{u,v}_n(\varepsilon) := D^{u,v}_n(\varepsilon^M) \cup Q_n(\varepsilon^M, \varepsilon),$$
noting that we have taken $\varepsilon_0 = \varepsilon^M$, with $M > 1$ being the exponent of Proposition 5.3.11. We highlight that, on the event $(B^{u,v}_n(\varepsilon))^c$, it holds that $d_H(\bar{\gamma}^{u,R}_n, \bar{\gamma}^{v,R}_n) \leq \varepsilon$.

The following result establishes that, on $(B^{u,v}_n(\varepsilon))^c$, $\bar{\gamma}^{u,R}_n$ and $\bar{\gamma}^{v,R}_n$ are also close as parameterized curves. The issue here is that even if the traces of two curves are close in shape, they may take a large number of steps within a small diameter.
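The distance $\psi$ between parameterized curves, which the bad events above are designed to control, can be evaluated directly for curves given as callables. The following sketch is our own illustration of the formula $\psi(\gamma^1, \gamma^2) = |T^1 - T^2| + \max_{0 \leq s \leq 1} |\gamma^1(sT^1) - \gamma^2(sT^2)|$, approximating the maximum over a uniform grid of time fractions.

```python
import math

def psi(curve1, T1, curve2, T2, samples=200):
    """Parameterized-curve distance
    psi = |T1 - T2| + max_{0<=s<=1} |curve1(s*T1) - curve2(s*T2)|,
    with each curve a callable on [0, T] returning a point of R^3.
    The supremum is approximated on a grid of `samples`+1 fractions."""
    sup = 0.0
    for i in range(samples + 1):
        s = i / samples
        sup = max(sup, math.dist(curve1(s * T1), curve2(s * T2)))
    return abs(T1 - T2) + sup

# two parameterizations of the same segment, run at different speeds
seg = lambda t: (t, 0.0, 0.0)          # duration 1, unit speed
slow = lambda t: (t / 2.0, 0.0, 0.0)   # duration 2, half speed
print(psi(seg, 1.0, slow, 2.0))  # traces agree after rescaling, so psi = 1.0
```

The example makes the point of Step 2 concrete: the two curves have identical traces, so the Hausdorff distance vanishes, yet $\psi$ still registers the difference $|T^1 - T^2| = 1$ in durations.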
We will compare the Schramm and intrinsic distances, as defined at (5.18) and (5.19), on the event $(B^{u,v}_n(\varepsilon))^c$ where the shapes are close to each other. The Schramm and intrinsic distances of $\bar{\gamma}^{u,R}_n$ are comparable on the events $S^{\dagger}_{2^{-n}}(R, \varepsilon)$ and $E^{\dagger}_{2^{-n}}(R, \varepsilon)$. These events are introduced in Subsection 5.3.5. To simplify notation, we write $S^{\dagger}_n(R, \varepsilon) = S^{\dagger}_{2^{-n}}(R, \varepsilon)$ and $E^{\dagger}_n(R, \varepsilon) = E^{\dagger}_{2^{-n}}(R, \varepsilon)$.

Lemma 5.7.12. Fix $R \geq 1$ and let $\varepsilon \in (0,1)$. On the event $(B^{u,v}_n(\varepsilon))^c \cap S^{\dagger}_n(R\varepsilon^{-1}, \varepsilon)$, we have that
$$T(\bar{\eta}^{u,v}_n) \leq R\varepsilon^{\beta-1}, \quad T(\bar{\zeta}^{u,v}_n) \leq R\varepsilon^{\beta-1}.$$

Proof. In this proof, we write $G = (B^{u,v}_n(\varepsilon))^c \cap S^{\dagger}_n(R\varepsilon^{-1}, \varepsilon)$. We begin with an upper bound for the duration of $\bar{\eta}^{u,v}_n$. On $G$, the random walk $S^{u,v}_n$ is localized in a neighbourhood around $y_n$. Indeed, on $D^{u,v}_n(\varepsilon^M)^c$ we have that $\operatorname{diam}(S^{u,v}_n) \leq \varepsilon^M$. Since $\bar{\eta}^{u,v}_n$ is a subset of $S^{u,v}_n$, it follows that $\operatorname{diam}(\bar{\eta}^{u,v}_n) < \varepsilon^M$, and, in particular, for the endpoints of $\bar{\eta}^{u,v}_n$, $z_n$ and $w_n$ say (as in Figure 5.10), we have that $d^S_{\bar{\gamma}^v_n}(z_n, w_n) = d^S_{\bar{\eta}}(z_n, w_n) \leq \varepsilon^M < \varepsilon$. On $G \subseteq S^{\dagger}_n(R\varepsilon^{-1}, \varepsilon)$, this implies that
$$T(\bar{\eta}^{u,v}_n) = d_{\bar{\gamma}^v_n}(z_n, w_n) \leq R\varepsilon^{\beta-1}.$$
Next we bound the duration of $\bar{\zeta}^{u,v}_n$ on the event $G$. The endpoints of $\bar{\zeta}^{u,v}_n$ lie in $S^{u,v}_n$: indeed, $y_n = S^{u,v}_n(0)$ and $z_n \in \operatorname{tr} S^{u,v}_n$. Thus
$$|z_n - y_n| < \varepsilon^M. \tag{5.86}$$
On the event $G \subseteq Q(\varepsilon^M, \varepsilon)^c$, the loop-erased random walk $\bar{\gamma}^{u,R}_n$ does not have $(\varepsilon^M, \varepsilon)$-quasi-loops, and so (5.86) implies that $d^S_{\bar{\gamma}^u_n}(z_n, y_n) < \varepsilon$. The argument used for $\bar{\eta}^{u,v}_n$ also gives $T(\bar{\zeta}^{u,v}_n) = d_{\bar{\gamma}^u_n}(z_n, y_n) < R\varepsilon^{\beta-1}$.

We finish this step by showing that $E^{u,v}_n(\varepsilon)$ can be contained in the events already described.

Lemma 5.7.13. Let $R \geq 1$ and $\varepsilon > 0$. On the event $B^{u,v}_n(\varepsilon)^c \cap S^{\dagger}_n(R\varepsilon^{-1}, \varepsilon) \cap E^{\dagger}_n(R, \varepsilon)$, we have that
$$\psi(\bar{\gamma}^{u,R}_n, \bar{\gamma}^{v,R}_n) \leq C R^{b_3} \varepsilon^{\beta'},$$
where $0 < b_3, C < \infty$ are universal constants, and $\beta' = b_3(\beta - 1)$.

Proof. It suffices to show that on the event $G_2 = B^{u,v}_n(\varepsilon)^c \cap S^{\dagger}_n(R\varepsilon^{-1}, \varepsilon) \cap E^{\dagger}_n(R, \varepsilon)$,
$$\psi(\bar{\gamma}^u_n, \bar{\gamma}^v_n) = |T^u - T^v| + \max_{0 \leq s \leq 1} |\bar{\gamma}^u_n(sT^u) - \bar{\gamma}^v_n(sT^v)| \leq C R^{b_3} \varepsilon^{\beta'}. \tag{5.87}$$
Lemma 5.7.12 gives $|T^u - T^v| < 2R\varepsilon^{\beta-1}$. Next we bound the second term in (5.87).
Let $a = \bar{\gamma}^{u,R}_n(sT^u)$ and $b = \bar{\gamma}^{v,R}_n(sT^v)$, and assume that one of these points belongs to the common path, say $b \in \bar{\gamma}^0$. In this case, $sT^v \leq T^u$ and we can rewrite $b = \bar{\gamma}^{u,R}_n((sT^v/T^u)T^u)$. Then, with respect to the intrinsic metric of $\bar{\gamma}^{u,R}_n$, we compare points within distance
$$d_{\bar{\gamma}^u_n}(a, b) \leq |sT^u - (sT^v/T^u)T^u| \leq 2R\varepsilon^{\beta-1}.$$
We introduce
$$N^u = \sup\left\{ |a - b| : a, b \in \operatorname{tr} \bar{\gamma}^{u,R}_n, \ d_{\bar{\gamma}^u_n}(a, b) \leq 2R\varepsilon^{\beta-1} \right\},$$
define $N^v$ similarly from $\bar{\gamma}^v_n$, and also introduce notation for the diameter of the segments $\bar{\eta}^{u,v}_n$ and $\bar{\zeta}^{u,v}_n$ by setting
$$N^\eta = \sup_{0 \leq t \leq \operatorname{len}(\bar{\eta}^{u,v}_n)} |\bar{\eta}^{u,v}_n(t) - z_n|, \quad N^\zeta = \sup_{0 \leq t \leq \operatorname{len}(\bar{\zeta}^{u,v}_n)} |\bar{\zeta}^{u,v}_n(t) - z_n|.$$
It readily holds that we have the following bound:
$$\max_{0 \leq s \leq 1} |\bar{\gamma}^u_n(sT^u) - \bar{\gamma}^v_n(sT^v)| \leq N^u + N^v + N^\eta + N^\zeta. \tag{5.88}$$
Lemma 5.7.12 implies that $d_{\bar{\eta}}(a, b) < R\varepsilon^{\beta-1}$ for all $a \in \operatorname{tr} \bar{\eta}$. Let $b_3 = b_2 b_1$, where $b_1$ and $b_2$ are the constants of Proposition 5.3.5. On the event $E^{\dagger}_n(R, \varepsilon)$, we thus have that $N^\eta \leq R^{b_3} \varepsilon^{b_3(\beta-1)}$, and similarly for $N^\zeta$. On the event $E^{\dagger}_n(R, \varepsilon)$, the loop-erased random walks $\bar{\gamma}^u_n$ and $\bar{\gamma}^v_n$ are uniformly equicontinuous, so that $N^u \leq C R^{b_3} \varepsilon^{b_3(\beta-1)}$, and the same bound holds for $N^v$. Adding the upper bounds for $N^u$, $N^v$, $N^\eta$ and $N^\zeta$ in (5.88), we get (5.87).

Step 3: bounding $\mathbf{P}(E^{u,v}_n(\varepsilon))$.

In this step, we give an upper bound on the probability of the bad event $E^{u,v}_n(\varepsilon)$. The key is that, given $u$ and $v$, this estimate is uniform over all $n$ for $u$ and $v$ large enough.

Lemma 5.7.14. Fix $R \geq 1$. For each $\varepsilon \in (0,1)$, there exists $U = U(\varepsilon)$ such that for all $n \geq u, v \geq U(\varepsilon)$,
$$\mathbf{P}(E^{u,v}_n(\varepsilon)) \leq C\varepsilon^{\theta},$$
for constants $C = C(R) > 0$ and $\theta = \theta(R) > 0$ depending only on $R$.

Proof. Lemma 5.7.13 gives that
$$\mathbf{P}\left( E^{u,v}_n(C R \varepsilon^{\beta'}) \right) \leq \mathbf{P}(D^{u,v}_n(\varepsilon^M)) + \mathbf{P}(Q(\varepsilon^M, \varepsilon)) + \mathbf{P}\left( (S^{\dagger}_n(R\varepsilon^{-1}, \varepsilon))^c \right) + \mathbf{P}\left( (E^{\dagger}_n(R, \varepsilon))^c \right). \tag{5.89}$$
Proposition 5.3.11 implies
$$\mathbf{P}(Q(\varepsilon^M, \varepsilon)) \leq \mathbf{P}(Q(\varepsilon^M, \varepsilon^2)) \leq C R^3 \varepsilon^{\hat{b}_2},$$
and Propositions 5.3.12 and 5.3.13 give upper bounds for the last two terms of (5.89). Thus we are left to bound the probability of $D^{u,v}_n(\varepsilon^M)$. For this, we need $U(\varepsilon)$ large enough so that, by Proposition 5.7.11, we have that
$$d_H(A^{u,R}_n, A^{v,R}_n) < \varepsilon^{4M} \quad \text{for all } u, v \geq U(\varepsilon).$$
(5.90)

On $F_u$, $\bar{\gamma}^{u,R}_n$ is the first LERW to stop, and we call its endpoint $y_n \in \partial P^{u,R}$. From (5.90), we have $\operatorname{dist}(y_n, \partial P^{v,R}) < \varepsilon^{4M}$. But, along $S^{u,v}_n$, the random walk $S_n$ reaches distance $\varepsilon^M$ before hitting $\partial P^{v,R}$. The same argument applies on the complement of $F_u$, i.e. on $F_v$. Hence Proposition 5.7.4 implies that $\mathbf{P}(D^{u,v}_n(\varepsilon^M)) \leq C K R^3 \varepsilon^{2M} + \varepsilon^{2M\hat{\eta}}$. In conjunction with the aforementioned bounds,
$$\sup_{n, u, v :\, n \geq u, v \geq U} \mathbf{P}\left( E^{u,v}_n(R^{b_3} \varepsilon^{\beta'}) \right) \leq C K R^3 \varepsilon^{2M} + \varepsilon^{2M\hat{\eta}} + C \varepsilon^{\tilde{\eta}} + C (R\varepsilon)^3 e^{-c(R/\varepsilon)^a} + C \varepsilon^{b_2}.$$
The dominant term above is $\varepsilon^{\theta(R)}$, and a reparameterization completes the proof.

Step 4: the scaling limit of a loop-erased random walk.

Recall that $\bar{\gamma}^R_n$ is the LERW on $\bar{D}_n(R) \setminus \mathcal{S}^{K,R}_n$ defined in (5.82). In (5.85), we defined $\bar{\gamma}^{u,R}_n$, for $u \leq n$, as the $\beta$-parameterization of the loop-erased random walk $\operatorname{LE}(S_n[0, \xi^{P,R}_u])$, where $\xi^{P,R}_u$ is the first exit time from the dyadic polyhedron $P^{u,R}$. In this step, we establish that $\bar{\gamma}^R_n$ and $\bar{\gamma}^{u,R}_n$ converge to the same limit. We take limits in each variable in the following order. For $\bar{\gamma}^{u,R}_n$, we first take $n \to \infty$. The limit object is a curve on the bounded and polyhedral domain $D_E(R) \setminus A^{u,R} \subset \mathbb{R}^3$, where $A^{u,R}$ is the polyhedral domain of Proposition 5.7.11. Then we take $u \to \infty$, and the limit is a curve on the bounded set $D_E(R) \setminus \operatorname{tr} \mathcal{S}^{K,R}$. In Step 5, we take $R \to \infty$, and we thus define $\hat{\gamma}$ as a limit curve on the full space $\mathbb{R}^3 \setminus \operatorname{tr} \mathcal{S}^K$.

Lemma 5.7.15. Fix $R \geq 1$. For each $u \in \mathbb{N}$, the law of $\bar{\gamma}^{u,R}_n$ converges with respect to the metric $\psi$, as $n \to \infty$.

Proof. Proposition 5.7.11 shows that the domain of $\bar{\gamma}^{u,R}_n$ is the polyhedron $D_E(R) \setminus A^{u,R}$, for $n$ large enough. Then, the weak convergence of $\{\bar{\gamma}^{u,R}_n\}_{n \in \mathbb{N}}$ is an immediate consequence of Proposition 5.3.14.

We denote by $\hat{\gamma}^{u,R}$ a curve with the limit law of Lemma 5.7.15.

Lemma 5.7.16. Fix $R \geq 1$. Let $(\hat{\gamma}^{u,R})_{u \in \mathbb{N}}$ be the sequence of limit elements from Lemma 5.7.15. It is then the case that $(\hat{\gamma}^{u,R})_{u \in \mathbb{N}}$ converges in distribution in the metric $\psi$ as $u \to \infty$.

Proof. Denote the laws of $\bar{\gamma}^{u,R}_n$ and $\hat{\gamma}^{u,R}$ by $\mathcal{L}(\bar{\gamma}^{u,R}_n)$ and $\mathcal{L}(\hat{\gamma}^{u,R})$, respectively.
Since $(\mathcal{C}_f, \psi)$ is a complete and separable metric space (see [94, Section 2.4]), to prove weak convergence it suffices to show that $(\mathcal{L}(\hat{\gamma}^{u,R}))_{u \in \mathbb{N}}$ is a Cauchy sequence in the Prohorov metric $d_P$. Let $u, v \in \mathbb{N}$. By the triangle inequality, for $n \geq u, v$,
$$d_P(\mathcal{L}(\hat{\gamma}^{u,R}), \mathcal{L}(\hat{\gamma}^{v,R})) \leq d_P(\mathcal{L}(\hat{\gamma}^{u,R}), \mathcal{L}(\bar{\gamma}^{u,R}_n)) + d_P(\mathcal{L}(\hat{\gamma}^{v,R}), \mathcal{L}(\bar{\gamma}^{v,R}_n)) + \sup_{n \geq u,v} d_P(\mathcal{L}(\bar{\gamma}^{u,R}_n), \mathcal{L}(\bar{\gamma}^{v,R}_n)). \tag{5.91}$$
Letting $n \to \infty$, the first two terms on the right-hand side of (5.91) converge to 0 by Lemma 5.7.15. Moreover, Lemma 5.7.14 shows that the last term of (5.91) converges to 0 as $u, v \to \infty$. Therefore $(\mathcal{L}(\hat{\gamma}^{u,R}))_{u \in \mathbb{N}}$ is a Cauchy sequence in the Prohorov metric. It follows that $(\hat{\gamma}^{u,R})_{u \in \mathbb{N}}$ converges weakly.

We denote by $\hat{\gamma}^R$ a curve with the limit law of Lemma 5.7.16. The random curve $\hat{\gamma}^R$ is the limit of dyadic approximations. We see below that it is also the limit of the LERWs stopped when they hit $\mathcal{S}^{K,R}$.

Lemma 5.7.17. Fix $R \geq 1$. Then $\bar{\gamma}^R_n \to \hat{\gamma}^R$ in distribution as $n \to \infty$, with respect to the metric $\psi$.

Proof. Since $\bar{\gamma}^{u,R}_n \to \hat{\gamma}^{u,R}$ in distribution as $n \to \infty$, and $\hat{\gamma}^{u,R} \to \hat{\gamma}^R$ in distribution as $u \to \infty$, to complete the proof it suffices to notice that, for $\varepsilon > 0$,
$$\lim_{u \to \infty} \limsup_{n \to \infty} \mathbf{P}\left( \psi(\bar{\gamma}^{u,R}_n, \bar{\gamma}^R_n) > \varepsilon \right) = 0;$$
see [33, Theorem 3.2], for example. However, since $\bar{\gamma}^R_n = \bar{\gamma}^{n,R}_n$, the above statement readily follows from Lemma 5.7.14.

Step 5: taking $R \to \infty$.

Until this point, we have only considered the LERW inside a box $D_E(R)$. Indeed, $\bar{\gamma}^R_n$ was defined as a LERW in $D_n(R) \setminus \mathcal{S}^K_n$, and its scaling limit $\hat{\gamma}^R$ lies within $D_E(R)$. In this final step, we take $R \to \infty$ to consider the tree $\hat{\mathcal{S}}^K$ and the random walk $S_n$ in the full space.

Lemma 5.7.18. Let $(\hat{\gamma}^R)_{R \geq 1}$ be the sequence of limit elements from Lemma 5.7.16, and let $\hat{\mathcal{S}}^K$ be the parameterized tree in Proposition 5.7.9. There exists a random element $\hat{\gamma} \in \mathcal{C}_f$ such that $\hat{\gamma}^R$ converges in distribution to $\hat{\gamma}$ in the metric $\psi$ as $R \to \infty$. Moreover, the intersection of $\operatorname{tr} \hat{\gamma}$ and $\operatorname{tr} \hat{\mathcal{S}}^K$ is the endpoint of $\hat{\gamma}$.

Proof.
Denote the laws of $\bar{\gamma}^R_n$ and $\hat{\gamma}^R$ by $\mathcal{L}(\bar{\gamma}^R_n)$ and $\mathcal{L}(\hat{\gamma}^R)$, respectively. This proof is similar to that of Lemma 5.7.16, as we will show that $(\mathcal{L}(\hat{\gamma}^R))_{R \geq 1}$ is a Cauchy sequence in the Prohorov metric $d_P$. For two integers $r > s > 0$, the triangle inequality yields
$$d_P(\mathcal{L}(\hat{\gamma}^r), \mathcal{L}(\hat{\gamma}^s)) \leq d_P(\mathcal{L}(\hat{\gamma}^r), \mathcal{L}(\bar{\gamma}^r_n)) + d_P(\mathcal{L}(\hat{\gamma}^s), \mathcal{L}(\bar{\gamma}^s_n)) + \sup_n d_P(\mathcal{L}(\bar{\gamma}^r_n), \mathcal{L}(\bar{\gamma}^s_n)). \tag{5.92}$$
Letting $n \to \infty$, the first two terms on the right-hand side of (5.92) converge to 0 by Lemma 5.7.17. We are then left to bound
$$d_P(\mathcal{L}(\hat{\gamma}^r), \mathcal{L}(\hat{\gamma}^s)) \leq \sup_n d_P(\mathcal{L}(\bar{\gamma}^r_n), \mathcal{L}(\bar{\gamma}^s_n)). \tag{5.93}$$
Recall that we sample $\bar{\gamma}^r_n$ and $\bar{\gamma}^s_n$ as loop-erasures of the simple random walk $S_n$. On the event that $(\mathcal{S}^{K,r}_n \triangle \mathcal{S}^{K,s}_n) \cap B(s^{1/2}) = \emptyset$, we have $\bar{\gamma}^r_n = \bar{\gamma}^s_n$ (as parameterized curves) whenever $S_n$ hits $\operatorname{tr} \mathcal{S}^{K,r}_n$ before reaching the boundary of $B_n(s^{1/2})$, and so
$$\mathbf{P}(\bar{\gamma}^r_n \neq \bar{\gamma}^s_n) \leq \mathbf{P}\left( (\mathcal{S}^{K,r}_n \triangle \mathcal{S}^{K,s}_n) \cap B_n(s^{1/2}) \neq \emptyset \right) + \mathbf{P}\left( S_n[0, \xi^S(B_n(s^{1/2}))] \cap \operatorname{tr} \mathcal{S}^{K,r}_n = \emptyset \right).$$
Proposition 5.7.10 gives $\mathbf{P}((\mathcal{S}^{K,r}_n \triangle \mathcal{S}^{K,s}_n) \cap B_n(s^{1/2}) \neq \emptyset) \to 0$ as $s \to \infty$. Recall that $\mathcal{S}^K_n$ is a subtree of the uniform spanning tree, including a path to infinity. Then Proposition 5.7.4 implies that
$$\mathbf{P}\left( S_n[0, \xi^S(B_n(s^{1/2}))] \cap \operatorname{tr} \mathcal{S}^{K,r}_n = \emptyset \right) \to 0 \quad \text{as } r, s \to \infty.$$
Therefore, (5.93) converges to 0 as $r, s \to \infty$. It follows that $(\mathcal{L}(\hat{\gamma}^R))_{R \geq 1}$ is a Cauchy sequence in the Prohorov metric. Since $d_P$ is a complete metric, we conclude that $(\mathcal{L}(\hat{\gamma}^R))$ converges weakly. The limit is a random element $\hat{\gamma}$ taking values in $\mathcal{C}_f$, and in particular $\hat{\gamma}$ has finite duration.

On the space of finite curves $(\mathcal{C}_f, \psi)$, the evaluation of the endpoint defines a continuous function $E : \mathcal{C}_f \to \mathbb{R}^3$. Therefore, as we take $n \to \infty$, the endpoint $E(\bar{\gamma}^R_n) \in \operatorname{tr} \mathcal{S}^K_n$ converges to $E(\hat{\gamma}^R)$ (see [33, Theorem 5.1], for example). Proposition 5.3.8 implies that, with probability one, $E(\bar{\gamma}^R_n) \in \operatorname{tr} \mathcal{S}^K_n$ for $R$ large enough. Additionally, note that $\mathcal{S}^K_n$ converges weakly to $\hat{\mathcal{S}}^K$ as a parameterized tree, when $n \to \infty$. It follows that the law of $E(\hat{\gamma}^R)$ is supported on $\hat{\mathcal{S}}^K$.

Lemma 5.7.19. The collection of curves $\Gamma^{e}(\hat{\mathcal{S}}^K) \cup \{\hat{\gamma}\}$ defines a parameterized tree $\hat{\mathcal{S}}^{K+1}$.
This tree coincides with the description in Section 5.7.5.

Proof. Lemma 5.7.18 shows that $\Gamma^{e}(\hat{\mathcal{S}}^K) \cup \{\hat{\gamma}\}$ satisfies the conditions of Proposition 5.7.7. It follows that $\Gamma^{e}(\hat{\mathcal{S}}^K) \cup \{\hat{\gamma}\}$ is the set of essential branches of a parameterized tree $\hat{\mathcal{S}}^{K+1}$.

Finally, note that Lemma 5.7.17 shows that $\hat{\gamma}$ is the limit of scaled loop-erased random walks, stopped when they hit the previous limit element $\operatorname{tr} \hat{\mathcal{S}}^K$, and such hitting time is finite. Therefore $\hat{\mathcal{S}}^{K+1}$ is the tree of Section 5.7.5.

Step 6: the scaling limit of parameterized trees.

Proof of Proposition 5.7.9. First let us describe the probability measure induced by $(\mathcal{S}^K_n, \bar{\gamma}_n)$. Let $\mu_n$ be the probability measure on $\mathcal{F}_K$ induced by $\mathcal{S}^K_n$. For each $\mathcal{S}^K_n \in \mathcal{F}_K$, let $\nu^{\gamma_n}_n$ be the probability measure on $(\mathcal{C}_f, \psi)$ induced by the loop-erased random walk $\bar{\gamma}_n$; recall that $\bar{\gamma}_n$ is stopped when it exits the domain $\mathbb{R}^3 \setminus \operatorname{tr} \mathcal{S}^K_n$, i.e. when it hits $\operatorname{tr} \mathcal{S}^K_n$. The measure $\nu^{\gamma_n}_n$ defines the stochastic kernel
$$K_n(\mathcal{S}^K_n, A) = \nu^{\gamma_n}_n(A), \quad \forall \mathcal{S}^K_n \in \mathcal{F}_K, \ A \in \mathcal{B}(\mathcal{C}_f),$$
where $\mathcal{B}(\mathcal{C}_f)$ is the Borel $\sigma$-algebra corresponding to $(\mathcal{C}_f, \psi)$. That is, the probability measure induced by $(\mathcal{S}^K_n, \bar{\gamma}_n)$, say $\mu_n \otimes K_n$, is the unique probability measure such that
$$\mu_n \otimes K_n(A_1 \times A_2) = \int_{A_1} K_n(\mathcal{S}^K_n, A_2)\, \mu_n(d\mathcal{S}^K_n),$$
for Borel sets $A_1 \in \mathcal{B}(\mathcal{F}_K)$ and $A_2 \in \mathcal{B}(\mathcal{C}_f)$.

Now, recall we are supposing that we have a coupling under which $\mathcal{S}^K_n \to \hat{\mathcal{S}}^K$ almost surely. In what follows, we write $\mathbf{P}^*$ for the corresponding probability measure. From Lemma 5.7.18, we obtain that, $\mathbf{P}^*$-a.s., $\nu^{\gamma_n}_n \to \nu^{\hat{\gamma}}$ as $n \to \infty$, where $\nu^{\hat{\gamma}}$ is the law of $\hat{\gamma}$. Hence $\nu^{\hat{\gamma}}$ is $\mathbf{P}^*$-measurable, and, in particular, so is $\nu^{\hat{\gamma}}(A)$ for all $A \in \mathcal{B}(\mathcal{C}_f)$. As a consequence, the integral
$$\mu \otimes K(A_1 \times A_2) := \int \mathbf{1}_{A_1}(\hat{\mathcal{S}}^K)\, \nu^{\hat{\gamma}}(A_2)\, d\mathbf{P}^*$$
is well-defined for every $A_1 \in \mathcal{B}(\mathcal{F}_K)$, $A_2 \in \mathcal{B}(\mathcal{C}_f)$. Moreover, $\mu \otimes K$ is readily extended to give a measure on the product space $\mathcal{F}_K \times \mathcal{C}_f$. Finally, let $A_1 \in \mathcal{B}(\mathcal{F}_K)$, $A_2 \in \mathcal{B}(\mathcal{C}_f)$ be continuity sets for $\mu \otimes K$, in the sense that $\mu \otimes K(\partial A_1 \times \mathcal{C}_f) = 0 = \mu \otimes K(\mathcal{F}_K \times \partial A_2)$. We then have that, $\mathbf{P}^*$-a.s., $\mathbf{1}_{A_1}(\mathcal{S}^K_n)\, \nu^{\gamma_n}_n(A_2) \to \mathbf{1}_{A_1}(\hat{\mathcal{S}}^K)\, \nu^{\hat{\gamma}}(A_2)$.
An application of the dominated convergence theorem thus yields
$$\mu_n \otimes K_n(A_1 \times A_2) \to \mu \otimes K(A_1 \times A_2),$$
which is enough to establish the weak convergence $\mu_n \otimes K_n \to \mu \otimes K$ on $\mathcal{F}_K \times \mathcal{C}_f$ (see [33, Theorem 2.8]). Lemma 5.7.19 shows that $\mu \otimes K$ defines a measure on the space of parameterized trees $\mathcal{F}_{K+1}$.

5.8 Proof of tightness and subsequential scaling limit

Given the preparations in the previous sections, we are now in a position to establish the first main result of this article, namely Theorem 5.1.1.

Proof of Theorem 5.1.1. We start by establishing the parts of the result concerning the Gromov-Hausdorff-type topology. Applying Lemma 5.2.3, the tightness claim follows from Proposition 5.4.1, Corollary 5.4.1 and Proposition 5.4.5. It remains to check the distributional convergence of $\mathcal{U}_n$ as $n \to \infty$, where we write $\mathcal{U}_n$ for the random measured, rooted spatial tree at (5.1), indexed by $\delta_n = 2^{-n}$. By the first part of the theorem and Prohorov's theorem (see [93, Theorem 16.3], for example), we know that every subsequence $(\mathcal{U}_{n_i})_{i \geq 1}$ admits a convergent subsubsequence $(\mathcal{U}_{n_{i_j}})_{j \geq 1}$. Thus we only need to establish the uniqueness of the limit.

Now, suppose $(\mathcal{U}_{n_i})_{i \geq 1}$ is a convergent subsequence, and write $\mathcal{T} = (\mathcal{T}, d_{\mathcal{T}}, \mu_{\mathcal{T}}, \phi_{\mathcal{T}}, \rho_{\mathcal{T}})$ for the limiting random element in $\mathbb{T}$. To show that the convergence specifies the law of $\mathcal{T}$ uniquely, we will start by considering finite restrictions of $\mathcal{U}_{n_i}$, $i \geq 1$. In particular, for $R \in (0, \infty)$, set $\mathcal{U}^{(R)}_{n_i}$ to be
$$\left( B(\delta_{n_i}^{-1} R),\ \delta_{n_i}^{\beta} d_{\mathcal{U}}|_{B(\delta_{n_i}^{-1} R) \times B(\delta_{n_i}^{-1} R)},\ \delta_{n_i}^{3} \mu_{\mathcal{U}}(\cdot \cap B(\delta_{n_i}^{-1} R)),\ \delta_{n_i} \phi_{\mathcal{U}}|_{B(\delta_{n_i}^{-1} R)},\ \rho_{\mathcal{U}} \right),$$
i.e. the part of $\mathcal{U}_{n_i}$ contained inside $B(\delta_{n_i}^{-1} R)$. (We acknowledge that this notation clashes with that used in Section 5.2 for restrictions to balls with respect to the tree metric.) Note that, by (5.13), we have that
$$\lim_{R \to \infty} \limsup_{i \to \infty} \mathbf{P}\left( \Delta\left( \mathcal{U}^{(R)}_{n_i}, \mathcal{U}_{n_i} \right) > \varepsilon \right) \leq \lim_{R \to \infty} \limsup_{i \to \infty} \left( \mathbf{1}_{\{e^{-\lambda^{-1} R^{\beta}} > \varepsilon\}} + \mathbf{P}\left( B_{\mathcal{U}}(0, \lambda^{-1} \delta_{n_i}^{-\beta} R^{\beta}) \not\subseteq B(\delta_{n_i}^{-1} R) \right) \right) \leq C e^{-c\lambda^{a}}$$
for any $\varepsilon > 0$ and $\lambda \geq 1$, where we have applied Proposition 5.6.1 to deduce the final bound.
In particular, since $\lambda$ can be taken arbitrarily large in the above estimate, we obtain that
$$\lim_{R \to \infty} \limsup_{i \to \infty} \mathbf{P}\left( \Delta\left( \mathcal{U}^{(R)}_{n_i}, \mathcal{U}_{n_i} \right) > \varepsilon \right) = 0. \tag{5.94}$$
As a consequence, to prove the uniqueness of the law of $\mathcal{T}$, it will be enough to show that, for each $R \in (0, \infty)$, $(\mathcal{U}^{(R)}_{n_i})_{i \geq 1}$ converges in distribution to a uniquely specified limit. Indeed, if $\mathcal{T}^{(R)}$ is the limit of $\mathcal{U}^{(R)}_{n_i}$, then, since $\mathcal{U}_{n_i} \stackrel{d}{\to} \mathcal{T}$ (as $i \to \infty$) and (5.94) both hold, we have that $\mathcal{T}^{(R)} \stackrel{d}{\to} \mathcal{T}$ as $R \to \infty$.

Next, for given $n_i$ and $R$, consider the measure $\pi^{(R)}_{n_i}$ on $B(\delta_{n_i}^{-1} R) \times \mathbb{R}^3$ given by
$$\pi^{(R)}_{n_i}(dx\,dy) = \frac{\mu_{\mathcal{U}}(dx)\, \delta_{\delta_{n_i} \phi_{\mathcal{U}}(x)}(dy)}{\mu_{\mathcal{U}}(B(\delta_{n_i}^{-1} R))},$$
where $\delta_z(\cdot)$ is the probability measure on $\mathbb{R}^3$ placing all its mass at $z$. We will check that the triple
$$\left( B(\delta_{n_i}^{-1} R),\ \delta_{n_i}^{\beta} d_{\mathcal{U}}|_{B(\delta_{n_i}^{-1} R) \times B(\delta_{n_i}^{-1} R)},\ \pi^{(R)}_{n_i} \right) \tag{5.95}$$
converges in the marked Gromov-weak topology of [58, Definition 2.4]; a characterisation of this convergence that will be relevant to us is given in the following paragraph. Towards establishing tightness, we first note that the projections of $\pi^{(R)}_{n_i}$ onto the sets $B(\delta_{n_i}^{-1} R)$ and $\mathbb{R}^3$ are simply the uniform probability measures on $B(\delta_{n_i}^{-1} R)$ and $\delta_{n_i} B(\delta_{n_i}^{-1} R)$, respectively. Since the latter measure clearly converges to the uniform probability measure on $B_E(R)$, by [58, Theorem 4] (see also [74, Theorem 3]), the desired tightness is implied by the following two conditions.

(a) The distributions of
$$\delta_{n_i}^{\beta} d_{\mathcal{U}}\left( \xi^{n_i,R}_1, \xi^{n_i,R}_2 \right), \quad i \geq 1,$$
are tight, where $\xi^{n_i,R}_1$ and $\xi^{n_i,R}_2$ are independent uniform random variables on $B(\delta_{n_i}^{-1} R)$, independent of $\mathcal{U}$.

(b) For every $\varepsilon > 0$, there exists an $\eta > 0$ such that
$$\mathbf{E}\left( \delta_{n_i}^{3} \mu_{\mathcal{U}}\left( \left\{ x \in B(\delta_{n_i}^{-1} R) : \mu_{\mathcal{U}}\left( B_{\mathcal{U}}(x, \delta_{n_i}^{-\beta} \varepsilon) \cap B(\delta_{n_i}^{-1} R) \right) \leq \eta \right\} \right) \right) \leq \varepsilon.$$

The fact that (b) holds readily follows from the mass lower bound of Corollary 5.4.1. As for (a), this is a simple consequence of Corollary 5.7.1. Moreover, if we write $(\xi^{n_i,R}_j)_{j \geq 1}$ for a sequence of independent uniform random variables on $B(\delta_{n_i}^{-1} R)$, independent of $\mathcal{U}$, then Corollary 5.7.1 further implies that
$$\left( \left( \delta_{n_i}^{\beta} d_{\mathcal{U}}(\xi^{n_i,R}_j, \xi^{n_i,R}_k) \right)_{j,k \geq 1},\ \left( \xi^{n_i,R}_j \right)_{j \geq 1} \right) \tag{5.96}$$
converges in distribution.
This enables us to deduce, by applying [58, Theorem 5; see also Remark 2.7], that the triple at (5.95) in fact converges in distribution in the marked Gromov-weak topology. We denote the limit by $(\mathcal{T}^{(R)}, d_{\mathcal{T}^{(R)}}, \pi_{\mathcal{T}^{(R)}})$, where $(\mathcal{T}^{(R)}, d_{\mathcal{T}^{(R)}})$ is a complete, separable metric space, and $\pi_{\mathcal{T}^{(R)}}$ is a probability measure on $\mathcal{T}^{(R)} \times \mathbb{R}^3$ such that $\pi_{\mathcal{T}^{(R)}}(\cdot \times \mathbb{R}^3)$ has full support on $\mathcal{T}^{(R)}$. In addition, by combining (5.43) with Proposition 5.4.5, we have the following adaptation of Assumption 3: there exists a continuous, increasing function $h(\eta)$ with $h(0) = 0$ such that
$$\lim_{\eta \to 0} \liminf_{\delta \to 0} \mathbf{P}\left( \sup_{\substack{x,y \in B(\delta^{-1} R):\\ \delta^{\beta} d_{\mathcal{U}}(x,y) < \eta}} \delta\, |\phi_{\mathcal{U}}(x) - \phi_{\mathcal{U}}(y)| \leq h(\eta) \right) = 1.$$
This allows us to apply [103, Theorem 3.7] to deduce that
$$\pi_{\mathcal{T}^{(R)}}(dx\,dy) = \mu_{\mathcal{T}^{(R)}}(dx)\, \delta_{\phi_{\mathcal{T}^{(R)}}(x)}(dy),$$
where $\mu_{\mathcal{T}^{(R)}}$ is a probability measure on $\mathcal{T}^{(R)}$ of full support, and $\phi_{\mathcal{T}^{(R)}} : \mathcal{T}^{(R)} \to \mathbb{R}^3$ is a continuous function.

As a consequence of the convergence described in the previous paragraph and the separability of the marked Gromov-weak topology (see [58, Theorem 2]), we can assume that all the random objects are built on the same probability space, with probability measure $\mathbf{P}^*$, such that, $\mathbf{P}^*$-a.s.,
$$\left( B(\delta_{n_i}^{-1} R),\ \delta_{n_i}^{\beta} d_{\mathcal{U}}|_{B(\delta_{n_i}^{-1} R) \times B(\delta_{n_i}^{-1} R)},\ \pi^{(R)}_{n_i} \right) \to \left( \mathcal{T}^{(R)}, d_{\mathcal{T}^{(R)}}, \pi_{\mathcal{T}^{(R)}} \right).$$
By [58, Lemma 3.4], this implies that, $\mathbf{P}^*$-a.s., there exist a complete and separable metric space $(Z, d_Z)$ and isometric embeddings
$$\psi_{n_i} : (B(\delta_{n_i}^{-1} R), \delta_{n_i}^{\beta} d_{\mathcal{U}}) \to (Z, d_Z), \quad \psi : (\mathcal{T}^{(R)}, d_{\mathcal{T}^{(R)}}) \to (Z, d_Z)$$
such that
$$\pi^{(R)}_{n_i} \circ (\tilde{\psi}_{n_i})^{-1} \to \pi_{\mathcal{T}^{(R)}} \circ \tilde{\psi}^{-1} \tag{5.97}$$
weakly as probability measures on $Z \times \mathbb{R}^3$, where $\tilde{\psi}_{n_i}(x, y) = (\psi_{n_i}(x), y)$ and $\tilde{\psi}(x, y) = (\psi(x), y)$. From our initial assumption that $(\mathcal{U}_{n_i})_{i \geq 1}$ is distributionally convergent in $\mathbb{T}$, Corollary 5.4.1 and (5.43), we further have the existence of a deterministic subsequence $(n_{i_j})_{j \geq 1}$ such that, $\mathbf{P}^*$-a.s.,
$$\mathcal{U}_{n_{i_j}} \to \mathcal{T} \text{ in } \mathbb{T}, \qquad \inf_{j \geq 1}\, \delta_{n_{i_j}}^{3} \inf_{x \in B(\delta_{n_{i_j}}^{-1} R)} \mu_{\mathcal{U}}\left( B_{\mathcal{U}}(x, \delta_{n_{i_j}}^{-\beta} \delta) \right) > 0, \quad \forall \delta > 0, \tag{5.98}$$
and also
$$\sup_{x \in B(\delta_{n_{i_j}}^{-1} R)} \delta_{n_{i_j}}^{\beta} d_{\mathcal{U}}(0, x) \to \Lambda \in (0, \infty).$$
(5.99)

Now, taking projections onto $Z$ and rescaling, we readily obtain from (5.97) that
$$\delta_{n_i}^{3} \mu_{\mathcal{U}}\left( (\psi_{n_i})^{-1}(\cdot) \cap B(\delta_{n_i}^{-1} R) \right) \to c\, \mu_{\mathcal{T}^{(R)}} \circ \psi^{-1} \tag{5.100}$$
weakly as probability measures on $Z$, where the constant $c$ is the Lebesgue measure of $B_E(R)$. Moreover, appealing again to the mass lower bound of (5.98), we also obtain the subsequential convergence of measure supports, i.e.
$$\psi_{n_{i_j}}\left( B(\delta_{n_{i_j}}^{-1} R) \right) \to \psi(\mathcal{T}^{(R)})$$
with respect to the Hausdorff topology on compact subsets of $Z$ (cf. the argument of [14, Theorem 6.1], for example). That $\mathcal{T}^{(R)}$ is indeed compact is established as in [14], and that it is a real tree follows from [67, Lemma 2.1]. In particular, if we define a sequence of correspondences by setting
$$\mathcal{C}_{n_{i_j}} := \left\{ (x, x') \in B(\delta_{n_{i_j}}^{-1} R) \times \mathcal{T}^{(R)} : d_Z\left( \psi_{n_{i_j}}(x), \psi(x') \right) \leq 2\, d^Z_H\left( \psi_{n_{i_j}}(B(\delta_{n_{i_j}}^{-1} R)), \psi(\mathcal{T}^{(R)}) \right) \right\},$$
where $d^Z_H$ is the Hausdorff distance on $Z$, then we have that
$$\sup_{(x,x') \in \mathcal{C}_{n_{i_j}}} d_Z\left( \psi_{n_{i_j}}(x), \psi(x') \right) \to 0. \tag{5.101}$$
Given that $\mathcal{U}_{n_{i_j}} \to \mathcal{T}$ in $\mathbb{T}$ and (5.99) holds, it is a straightforward application of [24, Lemmas 3.5 and 5.1] to also check that, $\mathbf{P}^*$-a.s.,
$$\lim_{\eta \to 0} \limsup_{j \to \infty} \sup_{\substack{x,y \in B(\delta_{n_{i_j}}^{-1} R):\\ \delta_{n_{i_j}}^{\beta} d_{\mathcal{U}}(x,y) < \eta}} \delta_{n_{i_j}} |\phi_{\mathcal{U}}(x) - \phi_{\mathcal{U}}(y)| = 0,$$
and, applying this equicontinuity in conjunction with (5.97), this yields in turn that
$$\sup_{(x,x') \in \mathcal{C}_{n_{i_j}}} \left| \phi_{\mathcal{U}}(x) - \phi_{\mathcal{T}^{(R)}}(x') \right| \to 0. \tag{5.102}$$
Finally, although not included in the framework of [58, 74, 103], it is not difficult to include the convergence of roots in the above arguments, i.e. we may further suppose that
$$d_Z\left( \psi_{n_{i_j}}(\rho_{\mathcal{U}}), \psi(\rho_{\mathcal{T}^{(R)}}) \right) \to 0 \tag{5.103}$$
for some $\rho_{\mathcal{T}^{(R)}} \in \mathcal{T}^{(R)}$ with $\phi_{\mathcal{T}^{(R)}}(\rho_{\mathcal{T}^{(R)}}) = 0$. Recalling the definition of $\Delta_c$ from (5.12), combining (5.100), (5.101), (5.102) and (5.103) yields that
$$\Delta_c\left( \mathcal{U}^{(R)}_{n_{i_j}}, \mathcal{T}^{(R)} \right) \to 0 \quad \mathbf{P}^*\text{-a.s.},$$
where $\mathcal{T}^{(R)} := (\mathcal{T}^{(R)}, d_{\mathcal{T}^{(R)}}, \mu_{\mathcal{T}^{(R)}}, \phi_{\mathcal{T}^{(R)}}, \rho_{\mathcal{T}^{(R)}})$.
Since the distribution of $\mathcal{T}^{(R)}$ is uniquely specified by (5.96), and the same limit can be deduced for some subsubsequence of any subsequence of $(n_i)_{i \geq 1}$, we obtain that $\mathcal{U}^{(R)}_{n_i} \to \mathcal{T}^{(R)}$ in distribution in $\mathbb{T}$, and thus the part of the proof concerning the Gromov-Hausdorff-type topology is complete.

As for the path ensemble topology, we know from [24, Lemma 3.9] that convergence of compact measured, rooted spatial trees with respect to our Gromov-Hausdorff-type topology implies the corresponding path ensemble statement. To extend from this to the desired conclusion, we can proceed exactly as in the proof of [24, Lemma 5.5], with the additional inputs required being provided by (5.43) and the coupling lemma that is stated below as Lemma 5.9.3.

5.9 Properties of the limiting space

The aim of this section is to prove Theorem 5.1.2. To this end, we present several preparatory lemmas. In the first of these, we check that for large enough annuli there is only one disjoint crossing by a path in $\mathcal{U}$. Precisely, for $r < R$, we introduce the event $\mathrm{CE}_{\mathcal{U}}(r, R)$ by setting
$$\mathrm{CE}_{\mathcal{U}}(r, R) = \left\{ \exists x, y \in B(R)^c \text{ such that } \gamma_{\mathcal{U}}(x, y) \cap B(r) \neq \emptyset \right\},$$
and show that the probability of this occurring decays as the ratio $R/r$ increases.

Lemma 5.9.1. There exist universal constants $\lambda_0 > 0$ and $a, b, C \in (0, \infty)$ such that for all $\delta \in (0, 1)$ and $\lambda \geq \lambda_0$,
$$\mathbf{P}\left( \mathrm{CE}_{\mathcal{U}}(\lambda^{-a} \delta^{-1}, \delta^{-1}) \right) \leq C \lambda^{-b}.$$

Proof. This is essentially established in the proof of Proposition 5.4.1, and we use the same notation as in that proof here. First, suppose that the event $A'_{k_0}$, as defined in the proof of Proposition 5.4.1, occurs. It then holds that, for every point $x \in \partial B(\delta^{-1})$,
$$\gamma_{\mathcal{U}}(x, \gamma^{\infty}) \cap B(\lambda^{-4} \delta^{-1}) = \emptyset,$$
where $\gamma^{\infty}$ is the unique infinite simple path in $\mathcal{U}$ started at the origin, and $\gamma_{\mathcal{U}}(x, \gamma^{\infty})$ is the shortest path in $\mathcal{U}$ from $x$ to a point of $\gamma^{\infty}$. Note that we have already proved that $\mathbf{P}(A'_{k_0}) \geq 1 - C\lambda^{-1}$. Second, let $u$ be the first time that $\gamma^{\infty}$ exits $B(\lambda^{-4} \delta^{-1})$, and define
$$W = \left\{ \gamma^{\infty}[u, \infty) \cap B(\lambda^{-5} \delta^{-1}) = \emptyset \right\}.$$
By Proposition 1.5.10 of [113], it holds that $\mathbf{P}(W) \geq 1 - C\lambda^{-1}$.
Finally, suppose that the event $A'_{k_0} \cap W$ occurs. For $x, y \in B(\delta^{-1})^c$, let $x', y' \in \gamma^{\infty}$ be such that $\gamma_{\mathcal{U}}(x, \gamma^{\infty}) = \gamma_{\mathcal{U}}(x, x')$ and $\gamma_{\mathcal{U}}(y, \gamma^{\infty}) = \gamma_{\mathcal{U}}(y, y')$. We then have that $\gamma_{\mathcal{U}}(x, x') \cap B(\lambda^{-4} \delta^{-1}) = \emptyset$ and $\gamma_{\mathcal{U}}(y, y') \cap B(\lambda^{-4} \delta^{-1}) = \emptyset$. Also, it holds that $x', y' \in \gamma^{\infty}[u, \infty)$. In particular, it follows that $\gamma_{\mathcal{U}}(x, y) \cap B(\lambda^{-5} \delta^{-1}) = \emptyset$ for all $x, y \in B(\delta^{-1})^c$. This completes the proof of the result, with $a = 5$ and $b = 1$.

We next establish a result which essentially gives the converse of Assumption 3. In particular, we define the event $D(a, b, c)$ by
$$D(a, b, c) = \left\{ \exists x, y \in B(a) \text{ such that } d^S_{\mathcal{U}}(x, y) < b \text{ and } d_{\mathcal{U}}(x, y) > c \right\},$$
where we define the Schramm metric $d^S_{\mathcal{U}}$ on $\mathcal{U}$ analogously to (5.2), and check the following.

Lemma 5.9.2. There exist universal $\lambda_0 > 0$ and $a_1, \ldots, a_4, C \in (0, \infty)$ such that for all $\delta \in (0, 1)$ and $\lambda \geq \lambda_0$,
$$\mathbf{P}\left( D(\lambda^{a_1} \delta^{-1}, \lambda^{-a_2} \delta^{-1}, \lambda^{-a_3} \delta^{-\beta}) \right) \leq C \lambda^{-a_4}.$$

Proof. Consider the event $\hat{D}(a, b, c)$ given by
$$\hat{D}(a, b, c) = \left\{ \exists x, y \in B(a) \cap \gamma^{\infty} \text{ such that } d^S_{\mathcal{U}}(x, y) < b \text{ and } d_{\mathcal{U}}(x, y) > c \right\}.$$
We first prove that there exist universal $a_1, \ldots, a_4, C \in (0, \infty)$ such that for all $\delta \in (0, 1)$ and $\lambda \geq 1$,
$$\mathbf{P}\left( \hat{D}(\lambda^{a_1} \delta^{-1}, \lambda^{-a_2} \delta^{-1}, \lambda^{-a_3} \delta^{-\beta}) \right) \leq C \lambda^{-a_4}. \tag{5.104}$$
To do this, let $a_1 = 10^{-4}$, $a_2 = 1$ and $a_3 = 1/2$. Moreover, let $D = (w_k)_{k=1}^M$ be a $\lambda^{-a_2} \delta^{-1}$-net of $B(\lambda^{a_1} \delta^{-1})$ such that $B(\lambda^{a_1} \delta^{-1}) \subseteq \bigcup_{k=1}^M B(w_k, \lambda^{-a_2} \delta^{-1})$ and $M \asymp \lambda^{3(a_1 + a_2)}$. Suppose that the event $\hat{D}(\lambda^{a_1} \delta^{-1}, \lambda^{-a_2} \delta^{-1}, \lambda^{-a_3} \delta^{-\beta})$ occurs. Then there exists $w_k \in D$ such that $|\gamma^{\infty} \cap B(w_k, \lambda^{-a_2} \delta^{-1})| \geq c \lambda^{-a_3} \delta^{-\beta}$ for some universal $c > 0$. Now, it follows from [127, (7.51)] that
$$\mathbf{P}\left( \exists w_k \in D \text{ such that } \left| \gamma^{\infty} \cap B(w_k, \lambda^{-a_2} \delta^{-1}) \right| \geq c \lambda^{-a_3} \delta^{-\beta} \right) \leq C e^{-c' \lambda^{1/2}},$$
for some universal $c', C \in (0, \infty)$. Thus, the inequality (5.104) holds when we let $a_4 = 100$.

We next consider a $\lambda^{-4} \delta^{-1}$-net $D' = (x_i)_{i=1}^N$ of the ball $B(\lambda^{a_1} \delta^{-1})$ for which $B(\lambda^{a_1} \delta^{-1})$ is a subset of $\bigcup_{i=1}^N B(x_i, \lambda^{-4} \delta^{-1})$ and $N \asymp \lambda^{3(a_1 + 4)}$. We perform Wilson's algorithm as follows:

• Consider a subtree spanned by $D' = (x_i)_{i=1}^N$.
The output random tree is denoted by U_1.

• Perform Wilson's algorithm for all remaining points of Z^3 \ D' to generate U.

We define the event L by

L = ∩_{i=1}^N D̂(λ^{a_1}δ^{-1}, λ^{-a_2}δ^{-1}, λ^{-a_3}δ^{-β}; i)^c,

where the event D̂(a, b, c; i) is defined by

D̂(a, b, c; i) = {∃ x, y ∈ B(a) ∩ γ^{x_i}_∞ such that d^S_U(x, y) < b and d_U(x, y) > c},

with γ^x_∞ standing for the unique infinite simple path in U started at x. By (5.104), we have P(L) ≥ 1 − Cλ^{-80}. Furthermore, if we define

J = {∀ x ∈ B(λ^{a_1}δ^{-1}): diam(γ_U(x, U_1)) < λ^{-2}δ^{-1} and d_U(x, U_1) < λ^{-2}δ^{-β}},

then applying the hittability of each branch of U_1 as in the proof of Proposition 5.4.1 guarantees that P(J) ≥ 1 − Cλ^{-10}. Finally, suppose that the event L ∩ J occurs. The event L ensures that for all x, y ∈ U_1 with d^S_U(x, y) < λ^{-a_2}δ^{-1}, we have d_U(x, y) < 2λ^{-a_3}δ^{-β}. Also, the event J guarantees that for all x, y ∈ B(λ^{a_1}δ^{-1}) with d^S_U(x, y) < (1/2)λ^{-a_2}δ^{-1}, we have d_U(x, y) < 3λ^{-a_3}δ^{-β}. Thus the proof is complete, establishing the result with a_1 = 10^{-4}, a_2 = 1, a_3 = 1/2 and a_4 = 10.

For the remainder of the section, including in the proof of Theorem 5.1.2, we fix a sequence δ_n → 0 such that (P_{δ_n})_{n≥1} converges weakly (as measures on (T, Δ)), and write U_{δ_n} = (U, δ_n^β d_U, δ_n^3 µ_U, δ_n φ_U, 0). Letting P̂ be the relevant limiting law, we denote by T = (T, d_T, µ_T, φ_T, ρ_T) a random element of T with law P̂. A key ingredient in the proof of Theorem 5.1.2 is the following coupling between the discrete and continuous models, which is a ready consequence of this convergence assumption. Since the proof of the corresponding result in [24] was not specific to the two-dimensional case, we omit the proof here.

Lemma 5.9.3 (cf. [24, Lemma 5.1]). There exist realisations of (U_{δ_n})_{n≥1} and T built on the same probability space, with probability measure P* say, such that, for some subsequence (n_i)_{i≥1} and divergent sequence (r_j)_{j≥1}, it holds P*-a.s. that

D_{i,j} := Δ_c(U^{(r_j)}_{δ_{n_i}}, T^{(r_j)}) → 0

as i → ∞, for every j ≥ 1.

Proof of Theorem 5.1.2.
We start by checking the measure bounds of parts (c) and (d); we also remark that part (b) is an elementary consequence of (c) (see [64, Proposition 1.5.15], for example). The uniform bound of (c) will follow from the estimates: for R > 0, there exist constants c_i ∈ (0, ∞) such that, for every r ∈ (0, 1),

P̂( inf_{x ∈ B_T(ρ_T, R)} µ_T(B_T(x, r)) ≤ c_1 r^{d_f} (log r^{-1})^{-c_2} ) ≤ c_3 r^{c_4},   (5.105)

P̂( sup_{x ∈ B_T(ρ_T, R)} µ_T(B_T(x, r)) ≥ c_5 r^{d_f} (log r^{-1})^{c_6} ) ≤ c_7 r^{c_8}.   (5.106)

Indeed, given these, applying Borel-Cantelli along the subsequence r_n = 2^{-n}, n ∈ N, yields the result. By appealing to the coupling of Lemma 5.9.3, the above inequalities readily follow from the following discrete analogues:

limsup_{δ→0} P( δ^3 min_{x ∈ B_U(ρ_U, δ^{-β}R)} µ_U(B_U(x, δ^{-β}r)) ≤ c_1 r^{d_f} (log r^{-1})^{-c_2} ) ≤ c_3 r^{c_4},   (5.107)

limsup_{δ→0} P( δ^3 max_{x ∈ B_U(ρ_U, δ^{-β}R)} µ_U(B_U(x, δ^{-β}r)) ≥ c_5 r^{d_f} (log r^{-1})^{c_6} ) ≤ c_7 r^{c_8}.   (5.108)

To establish these, we start by noting that Proposition 5.6.1 implies that the probability in (5.107) is bounded above by

C e^{-c z^a} + P( δ^3 min_{x ∈ B(δ^{-1} R^{1/β} z)} µ_U(B_U(x, δ^{-β}r)) ≤ c_1 r^{d_f} (log r^{-1})^{-c_2} )

for any z ≥ 1. Moreover, applying a simple union bound and Theorem 5.5.2 (with R = δ^{-β}r and λ = c_1^{-1} (log r^{-1})^{c_2}), we can bound this in turn by

C e^{-c z^a} + C' R^{d_f} z^3 r^{d_f} e^{-c' c_1^{-a'} (log r^{-1})^{a' c_2}}.

Choosing z = (c^{-1} log r^{-1})^{1/a}, c_1 small enough so that c' c_1^{-a'} > d_f, and c_2 = 1/a', the above is bounded above by C'' r^{c''}, as desired. The proof of (5.108) is similar, with Theorem 5.6.2 replacing Theorem 5.5.2. As for (d), this follows from a Borel-Cantelli argument and the following estimates: there exist constants c_i ∈ (0, ∞) such that

P̂( µ_T(B_T(ρ_T, r)) ≥ λ r^{d_f} ) ≤ c_1 e^{-c_2 λ^{c_3}},   (5.109)

P̂( µ_T(B_T(ρ_T, r)) ≤ λ^{-1} r^{d_f} ) ≤ c_4 e^{-c_5 λ^{c_6}},   (5.110)

for all r > 0, λ ≥ 1. Similarly to the proof of the uniform estimates (5.105) and (5.106), applying the coupling of Lemma 5.9.3, these readily follow from Theorem 5.5.1 and Proposition 5.6.1.

For part (a), since (U, d_U) has infinite diameter, we immediately find that (T, d_T) has at least one end at infinity.
Thus we need to show that there can be no more than one end at infinity. Given Lemma 5.9.3 and the inclusion results of (5.43) and Proposition 5.6.1, this can be proved exactly as in the two-dimensional case. In particular, as in [24], it follows from the following crossing estimate: for r > 0,

lim_{R→∞} limsup_{δ→0} P(CE_U(δ^{-1}r, δ^{-1}R)) = 0,

which is given by Lemma 5.9.1.

For part (e), we can proceed exactly as in the proof of [24, Lemma 5.4]. Given Lemma 5.9.3, the one additional ingredient we need to do this is the estimate corresponding to [24, (5.12)]: for every r, η > 0,

lim_{ε→0} limsup_{δ→0} P( inf_{x, y ∈ B_U(0, δ^{-β}r): d_U(x,y) ≥ δ^{-β}η} d^S_U(x, y) < δ^{-1}ε ) = 0,

and this was established in Lemma 5.9.2 (when viewed in conjunction with Proposition 5.6.1).

Given Lemma 5.9.3 and Proposition 5.6.1, the proof of part (f) is identical to that of [24, Lemma 5.2].

5.10 Simple random walk and its diffusion limit

In this section, we complete the article with the proofs of Theorem 5.1.3, Corollary 5.1.1 and Theorem 5.1.4.

Proof of Theorem 5.1.3. On the event

{ inf_{x ∈ B_U(0, R)} µ_U(B_U(x, R/8)) ≥ λ^{-1} R^{d_f}, µ_U(B_U(0, 2R)) ≤ λ R^{d_f} },   (5.111)

one can find a cover (B_U(x_i, R/4))_{i=1}^N of B_U(0, R) of size N ≤ λ^2 (cf. [49, Lemma 9], for example). Following the argument of [22, Lemma 2.4] (see alternatively [108, Lemma 4.1]), it holds that on the event at (5.111),

R_U(0, B_U(0, R)^c) ≥ R λ^{-2}.

Hence the result is a consequence of Theorem 5.5.2 and Proposition 5.6.1.

Proof of Corollary 5.1.1. By Theorem 5.1.3, parts (1) and (4) of [109, Assumption 1.2] hold. Moreover, since R_U(0, B_U(0, R)^c) ≤ R + 1, we also have that part (2) of [109, Assumption 1.2] holds. Hence (5.4), (5.6), (5.7), (5.9) and (5.11) follow from [109, Proposition 1.4 and Theorem 1.5]. It remains to prove the claims involving the Euclidean distance. To this end, note that by (5.43) and Proposition 5.6.1,

P( B_U(0, λ^{-1} R^β) ⊆ B(R) ⊆ B_U(0, λ R^β) ) ≥ 1 − c_1 λ^{-c_2}.

Hence, by Borel-Cantelli, if R_n := 2^n and λ_n := n^{2/c_2}, then

B_U(0, λ_n^{-1} R_n^β) ⊆ B(R_n) ⊆ B_U(0, λ_n R_n^β)

for all large n, P-a.s.
Combining this with the results at (5.4) and (5.7), we obtain (5.5) and (5.8). As for (5.10), the lower bound follows from Jensen's inequality, Fatou's lemma and (5.8). Indeed,

liminf_{R→∞} (log E_U(τ^E_{0,R}) / log R) ≥ liminf_{R→∞} E_U( log τ^E_{0,R} / log R ) ≥ E_U( liminf_{R→∞} log τ^E_{0,R} / log R ) = β d_w.

As for the upper bound, a standard estimate for exit times (see [19, Corollary 2.66], for example) gives that

E^U_0 τ^E_{0,R} ≤ R^3 R_U(0, B(R)^c) ≤ R^3 ξ_R,

where ξ_R is defined above Proposition 5.3.4. The latter result thus yields

E_U(τ^E_{0,R}) ≤ R^3 E(ξ_R) ≤ c R^{3+β} = c R^{β d_w},

which gives (a stronger statement than) the desired conclusion.

Proof of Theorem 5.1.4. The result can be proved by a line-by-line modification of [24, Theorems 1.4 and 7.2], and so we omit the details. However, as an aid to the reader, we summarise the key steps. As per the construction of [98], P̂-a.s., there is a 'resistance form' (E_T, F_T) on (T, d_T), characterised by

d_T(x, y)^{-1} = inf{ E_T(f, f) : f ∈ F_T, f(x) = 0, f(y) = 1 },

for all x, y ∈ T, x ≠ y. Moreover, by taking

D_T := the closure of F_T ∩ C_0(T),

where C_0(T) are the compactly supported continuous functions on (T, d_T), and the closure is taken with respect to E_T(f, f) + ∫_T f^2 dµ_T, we obtain a regular Dirichlet form (E_T, D_T) on L^2(T, µ_T) (see [13, Remark 1.6] or [99, Theorem 9.4]). Moreover, since (T, d_T) is complete and has one end at infinity (by Theorem 5.1.2(a)), the naturally associated stochastic process ((X^T_t)_{t≥0}, (P^T_x)_{x∈T}) is recurrent (see [13, Theorem 4]).
And, from [99, Theorem 10.4], we have that the process admits a jointly continuous transition density (p^T_t(x, y))_{x,y∈T, t>0}.

Next, by appealing to the Skorohod representation theorem, it is possible to construct realisations of (U, δ_n^β d_U, δ_n^3 µ_U, δ_n φ_U, ρ_U), n ≥ 1, and the limit (T, d_T, µ_T, φ_T, ρ_T) on the same probability space with probability measure P* such that

(U, δ_n^β d_U, δ_n^3 µ_U, δ_n φ_U, ρ_U) → (T, d_T, µ_T, φ_T, ρ_T), P*-a.s.

Moreover, applying Theorem 5.1.3 in a simple Borel-Cantelli argument allows one to deduce that, P*-a.s.,

lim_{R→∞} liminf_{n→∞} δ_n^β R_U(0, B_U(0, R δ_n^{-β})^c) = ∞.

Hence we can apply [54, Theorem 7.1] to deduce that, P*-a.s.,

P^U_0( (δ_n X^U_{t δ_n^{-(3+β)}})_{t≥0} ∈ · ) → P^T_{ρ_T} ∘ φ_T^{-1}   (5.112)

weakly as probability measures on C(R_+, R^3). Since the left-hand side above is P*-measurable, so is the right-hand side. Moreover, for any measurable set B ⊆ C(R_+, R^3), we have that

P^T_{ρ_T} ∘ φ_T^{-1}(B) = E*( P^T_{ρ_T} ∘ φ_T^{-1}(B) | T ),

where E* is the expectation under P*, and so P^T_{ρ_T} ∘ φ_T^{-1} is in fact P̂-measurable, as is required to prove part (a). For part (b), we apply (5.112) and integrate out with respect to P*.

As for the heat kernel estimates, we note that the measure bounds of Theorem 5.1.2(c) are enough to apply the arguments of [49] to deduce part (c) (for further details, see the proof of [24, Theorem 1.4(c)]). As for the on-diagonal estimates of part (d), similarly to the proof of [24, Theorem 7.2] (cf. [51, Theorems 1.6 and 1.7]), these follow from the distributional estimates on the measures of balls at (5.109) and (5.110), together with the following resistance estimate:

P( R_T(ρ_T, B_T(ρ_T, R)^c) ≤ λ^{-1} R ) ≤ C e^{-c λ^a},   (5.113)

where R_T is the resistance associated with (E_T, F_T).
As in the proof of Theorem 5.1.3, to check (5.113), it is enough to combine (5.109) with the bound

P̂( inf_{x ∈ B_T(ρ_T, R)} µ_T(B_T(x, R/8)) ≤ λ^{-1} R^{d_f} ) ≤ C e^{-c λ^a},

which is again a ready consequence of the discrete analogue (see Theorem 5.5.2 and Proposition 5.6.1).

Chapter 6

The Number of Spanning Clusters of the Uniform Spanning Tree in Three Dimensions¹

Summary of this chapter

Let U_δ be the uniform spanning tree on δZ^3. A spanning cluster of U_δ is a connected component of the restriction of U_δ to the unit cube [0,1]^3 that connects the left face {0} × [0,1]^2 to the right face {1} × [0,1]^2. In this note, we prove that the number of spanning clusters is tight as δ → 0, which resolves an open question raised by Benjamini in [28].

¹ Joint work with Omer Angel, David Croydon, and Daisuke Shiraishi. Acknowledgements: DC would like to acknowledge the support of a JSPS Grant-in-Aid for Research Activity Start-up, 18H05832, and a JSPS Grant-in-Aid for Scientific Research (C), 19K03540. DS is supported by a JSPS Grant-in-Aid for Early-Career Scientists, 18K13425, and JSPS KAKENHI Grant Numbers 17H02849 and 18H01123.

6.1 Introduction

Given a finite connected graph G = (V, E), a spanning tree T of G is a subgraph of G that is a tree (i.e. is connected and contains no cycles) with vertex set V. A uniform spanning tree (UST) of G is obtained by choosing a spanning tree of G uniformly at random. This is an important model in probability and statistical physics, with beautiful connections to other subjects, such as electrical potential theory, loop-erased random walk and Schramm-Loewner evolution. See [29] for an introduction to various aspects of USTs.

Fix δ ∈ (0, 1) and d ∈ N.
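As a concrete aside on the finite-graph definition above (our own illustration, not taken from the thesis): the normalising constant of the uniform measure, i.e. the number of spanning trees of G, is given by Kirchhoff's matrix-tree theorem as any cofactor of the graph Laplacian. A minimal sketch, with a hypothetical encoding of the graph as an edge list on vertices 0, ..., n-1:

```python
from fractions import Fraction

def spanning_tree_count(n, edges):
    """Kirchhoff's matrix-tree theorem: the number of spanning trees of a
    graph on vertices 0..n-1 equals any cofactor of its Laplacian L = D - A."""
    L = [[Fraction(0)] * n for _ in range(n)]
    for a, b in edges:
        L[a][a] += 1
        L[b][b] += 1
        L[a][b] -= 1
        L[b][a] -= 1
    # Determinant of the Laplacian with row and column 0 deleted,
    # via exact fraction-free-enough Gaussian elimination.
    M = [row[1:] for row in L[1:]]
    m = n - 1
    det = Fraction(1)
    for col in range(m):
        pivot = next((r for r in range(col, m) if M[r][col] != 0), None)
        if pivot is None:
            return 0  # singular minor: the graph is disconnected
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = -det
        det *= M[col][col]
        for r in range(col + 1, m):
            f = M[r][col] / M[col][col]
            for c in range(col, m):
                M[r][c] -= f * M[col][c]
    return int(det)
```

For instance, the complete graph K_4 has 4^2 = 16 spanning trees (Cayley's formula), and a cycle on n vertices has exactly n, one per deleted edge.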
In [143] it was shown that, by taking the local limit of the uniform spanning trees on an exhaustive sequence of finite subgraphs of δZ^d, it is possible to construct a random subgraph U_δ of δZ^d. Whilst the resulting graph U_δ is almost surely a forest consisting of an infinite number of disjoint tree components when d ≥ 5, it is also the case that U_δ is almost surely a spanning tree of δZ^d with one topological end for d ≤ 4; see [143]. In the latter low-dimensional case, U_δ is commonly referred to as the UST on δZ^d.

In this note, we study a macroscopic-scale property of U_δ, namely the number of its spanning clusters, as previously studied by Benjamini in [28]. To be more precise, let us introduce some notation. Write

B = [0,1]^d = {(x_1, x_2, ..., x_d) ∈ R^d : 0 ≤ x_i ≤ 1, i = 1, 2, ..., d}   (6.1)

for the unit hypercube in R^d. Also, set

F = {(x_1, x_2, ..., x_d) ∈ R^d : x_1 = 0}   (6.2)

and

G = {(x_1, x_2, ..., x_d) ∈ R^d : x_1 = 1}   (6.3)

for the hyperplanes containing the 'left' and 'right' faces of the hypercube B. Given a subgraph U = (V, E) of δZ^d, we write U' = (V', E') for the restriction of U to the cube B, i.e. we set V' = V ∩ B and E' = {{x, y} ∈ E : x, y ∈ V'}. A connected component of U' is called a cluster of U. Moreover, following [28], a spanning cluster of U is a cluster of U containing vertices x and y such that dist(x, F) < δ and dist(y, G) < δ, where dist(z, A) := inf_{w ∈ A} |z − w| is the Euclidean distance between a point z ∈ R^d and a subset A ⊆ R^d.
That is, a cluster of U is called spanning when it connects F to G (at the level of discretization being considered).

Concerning the number of spanning clusters of U_δ, it was proved in [28] that:

• for d ≥ 4, the expected number of spanning clusters of U_δ grows to infinity as δ → 0;
• for d = 2, the number of spanning clusters of U^+_δ is tight as δ → 0, where U^+_δ denotes the uniform spanning tree of the square B ∩ δZ^2 when all the vertices on the right side of the square are identified to a single point w (U^+_δ is called the right-wired uniform spanning tree in [28]). We consider two spanning clusters of U^+_δ to be different if they are disjoint in U^+_δ \ {w}.

Figure 6.1 shows the spanning cluster of a realisation of (an approximation to) U_δ on δZ^2.

The case d = 3 was left as an open question in [28]. The main purpose of this note is to resolve it by proving the following theorem.

Theorem 6.1.1. Let d = 3. The number of spanning clusters of U_δ is tight as δ → 0.

Remark. The proof of Theorem 6.1.1 can be adapted to show tightness of the number of spanning clusters of U_δ on δZ^2. This is an improvement over the result in [28], which required right-wired boundary conditions.

Remark. Part of Benjamini's motivation for studying the number of spanning clusters came from percolation. Indeed, for critical Bernoulli percolation on Z^d, it is conjectured that the number of spanning clusters is tight when d ≤ 6, while the expected number of spanning clusters grows to infinity as the mesh size goes to zero for d > 6; see for instance [3, 36, 43]. Putting our main conclusion together with the results obtained by Benjamini in [28], the corresponding qualitative picture is proved for the uniform spanning tree.

Figure 6.1: Part of a UST in a two-dimensional box; the part shown is the central 115 × 115 section of a UST on a 229 × 229 box. The single cluster spanning the two sides of the box is highlighted.

Remark.
In [11], we establish a scaling limit for the three-dimensional UST in a version of the Gromov-Hausdorff topology, at least along the subsequence δ̂_n := 2^{-n}. The corresponding two-dimensional result is also known (along an arbitrary sequence δ → 0); see [24] and [86, Remark 1.2]. In both cases, we expect that the techniques used to prove such a scaling limit can be used to show that the number of spanning clusters of U_δ actually converges in distribution. We plan to pursue this in a subsequent work that focusses on the topological properties of the three-dimensional UST.

The organization of the remainder of the paper is as follows. In Section 6.2, we introduce some notation that will be used in the paper. The proof of Theorem 6.1.1 is then given in Section 6.3.

6.2 Notation

In this section, we introduce the main notation needed for the proof of Theorem 6.1.1. We write |·| for the Euclidean norm on R^3 and, as in the introduction, dist(·, ·) for the Euclidean distance between a point and a subset of R^3. Given δ ∈ (0, 1), if x ∈ δZ^3 and r > 0, then we write

B_δ(x, r) = {y ∈ δZ^3 : |x − y| < r}

for the lattice ball of centre x and radius r (we will commonly omit the dependence on δ for brevity). Let B, F and G be defined as at (6.1), (6.2) and (6.3) in the case d = 3.

For δ ∈ (0, 1), a sequence λ = (λ(0), λ(1), ..., λ(m)) is said to be a path of length m if λ(i) ∈ δZ^3 and |λ(i) − λ(i+1)| = δ for every i. A path λ is simple if λ(i) ≠ λ(j) for all i ≠ j. For a path λ = (λ(0), λ(1), ..., λ(m)), we define its loop-erasure LE(λ) as follows. First, let

s_0 = max{j ≤ m : λ(j) = λ(0)},

and for i ≥ 1, set

s_i = max{j ≤ m : λ(j) = λ(s_{i−1} + 1)}.

Moreover, write n = min{i : s_i = m}. The loop-erasure of λ is then given by

LE(λ) = (λ(s_0), λ(s_1), ..., λ(s_n)).

We write LE(λ)(k) = λ(s_k) for each 0 ≤ k ≤ n. Note that the vertices hit by LE(λ) are a subset of those hit by λ, and that LE(λ) is a simple path such that LE(λ)(0) = λ(0) and LE(λ)(n) = λ(m).
Although the loop-erasure of λ has so far only been defined in the case that λ has finite length, it is clear that we can define LE(λ) similarly for an infinite path λ, provided that the set {k ≥ 0 : λ(k) = λ(j)} is finite for each j ≥ 0. Additionally, when the path λ is given by a simple random walk, we call LE(λ) a loop-erased random walk (see [114] for an introduction to loop-erased random walks).

Again given δ ∈ (0, 1), write U_δ for the uniform spanning tree on δZ^3. As noted in the introduction, this object was constructed in [143], and shown to be a tree with a single end, almost surely. The graph U_δ can be generated from loop-erased random walks by a procedure now referred to as Wilson's algorithm rooted at infinity (the algorithm is named after Wilson [169], while the version for infinite graphs was established in [29]), which is described as follows.

• Let (x_i)_{i≥1} be an arbitrary, but fixed, ordering of δZ^3.
• Write R^{x_1} for a simple random walk on δZ^3 started at x_1. Let γ^{x_1} = LE(R^{x_1}) be the loop-erasure of R^{x_1}; this is well-defined since R^{x_1} is transient. Set U^1 = γ^{x_1}. We refer to γ^{x_1} as a branch of U^1.
• Given U^i for i ≥ 1, let R^{x_{i+1}} be a simple random walk (independent of U^i) started at x_{i+1} and stopped on hitting U^i. Then LE(R^{x_{i+1}}) is a branch of the tree, and we let U^{i+1} = U^i ∪ LE(R^{x_{i+1}}).

It is then the case that the output random tree ∪_{i=1}^∞ U^i has the same distribution as U_δ. In particular, the distribution of the output tree does not depend on the ordering of the points (x_i)_{i≥1}.

Similarly to above, for z ∈ δZ^3, we will write γ^z for the infinite simple path in U_δ starting from z. Given a point z ∈ δZ^3, it follows from the construction of U_δ just described that the distribution of γ^z coincides with that of LE(R^z), where R^z is a simple random walk on δZ^3 started at z.

Furthermore, as explained in the introduction, we write U'_δ for the restriction of U_δ to the cube B. A connected component of U'_δ is called a cluster. Also, as defined previously, a spanning cluster is a cluster connecting F to G.
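The loop-erasure and Wilson's algorithm described above translate directly into code. The sketch below is our own illustration, not taken from the paper: it works on a finite graph, with Wilson's algorithm rooted at a fixed vertex rather than at infinity, and `neighbours` is a hypothetical adjacency-list dictionary:

```python
import random

def loop_erase(path):
    """Chronological loop-erasure LE(path): at each stage jump to the walk's
    last visit to the current vertex, exactly as in the indices s_0, s_1, ..."""
    erased = []
    i = 0
    while i < len(path):
        v = path[i]
        # s = last time the path is at v; keep v and continue just after s
        i = max(j for j in range(i, len(path)) if path[j] == v)
        erased.append(v)
        i += 1
    return erased

def wilson(vertices, neighbours, root):
    """Wilson's algorithm on a finite connected graph: run a random walk from
    each vertex until it hits the current tree, then attach its loop-erasure.
    Returns a parent map encoding the uniform spanning tree."""
    in_tree = {root}
    parent = {}
    for x in vertices:
        if x in in_tree:
            continue
        walk = [x]
        while walk[-1] not in in_tree:  # random walk until it hits the tree
            walk.append(random.choice(neighbours[walk[-1]]))
        branch = loop_erase(walk)
        for a, b in zip(branch, branch[1:]):
            parent[a] = b
        in_tree.update(branch)
    return parent
```

The output distribution does not depend on the ordering of `vertices`, mirroring the property noted above for the rooted-at-infinity version.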
We let N_δ denote the number of spanning clusters of U_δ.

Finally, we will use c, C, c_0, etc. to denote universal positive constants which may change from line to line.

6.3 Proof of the main result

In this section, we will prove the following theorem, which incorporates Theorem 6.1.1.

Theorem 6.3.1. There exists a universal constant C such that, for all M < ∞ and δ > 0,

P(N_δ ≥ M) ≤ C M^{-1}.   (6.4)

In particular, the laws of (N_δ)_{δ∈(0,1)} form a tight sequence of probability measures on R_+.

Remark. In [3], Aizenman proved that, for critical percolation in two dimensions, the probability of seeing M distinct spanning clusters is bounded above by C e^{-cM^2}. We do not expect that the polynomial bound in (6.4) is sharp, but leave it as an open problem to determine the correct tail behaviour of the number of spanning clusters of the UST in three dimensions and, in particular, to ascertain whether it also exhibits Gaussian decay.

Proof. Let δ ∈ (0, 1), and suppose M ≥ 1 is such that δ ≤ M^{-1}. For r ∈ [0, 1], we let

A(r) = {(x_1, x_2, x_3) ∈ B : x_1 = r}.

We also define

A = [−1, 2]^3,   B' = {(x_1, x_2, x_3) ∈ B : x_1 ≤ 2/3}.

Moreover, let (z_i)_{i=1}^L be a sequence of points in A ∩ δZ^3 such that A ⊆ ∪_{i=1}^L B(z_i, 1/M) and L ≤ 10^5 M^3.

To construct U_δ, we first perform Wilson's algorithm rooted at infinity for (z_i)_{i=1}^L (see Section 6.2). Namely, we consider

U^1 := ∪_{i=1}^L γ^{z_i},

which is the subtree of U_δ spanned by (z_i)_{i=1}^L. (Recall that for z ∈ δZ^3 we denote the infinite simple path in U_δ starting from z by γ^z.) The idea of the proof is then as follows. Crucially, each branch of U^1 is a 'hittable' set, in the sense that, for a simple random walk R whose starting point is close to U^1, it is likely that R hits U^1 before moving far away. As a result, Wilson's algorithm guarantees that, with high probability, the spanning clusters of U_δ correspond to those of U^1 when M is sufficiently large.
So the problem boils down to the tightness of the number of spanning clusters of U^1, which is not difficult to prove.

To make the above argument rigorous, we introduce the following two "good" events for U^1:

H_i = H_i(ξ) := {for any x ∈ B(0, 4) ∩ δZ^3 with dist(x, γ^{z_i}) ≤ 1/M, P^x_R(R[0, T] ∩ γ^{z_i} = ∅) ≤ M^{-ξ}},

I_i := {the number of crossings of γ^{z_i} between A(0) and A(2/3) in B' is smaller than M},

for 1 ≤ i ≤ L, where:

• R is a simple random walk which is independent of γ^{z_i}, whose law is denoted by P^x_R when we assume R(0) = x;
• T is the first time that R exits B(x, 1/√M);
• a crossing of γ^{z_i} between A(0) and A(2/3) in B' is a connected component of the restriction of γ^{z_i} to B' that connects A(0) to A(2/3).

Namely, the event H_i guarantees that the branch γ^{z_i} is a hittable set (see Figure 6.2), and the event I_i controls the number of crossings of γ^{z_i}.

Now, [148, Theorem 3.1] ensures that there exist universal constants ξ_0, C > 0 such that

P(∩_{i=1}^L H_i(ξ_0)) ≥ 1 − C M^{-10}.

Thus, with high probability (for U^1), each branch of U^1 is a hittable set.

Figure 6.2: Conditional on the event H_i, for any x ∈ B(0, 4) ∩ δZ^3 with dist(x, γ^{z_i}) ≤ 1/M, the above configuration occurs with probability at least 1 − M^{-ξ}.

The probability of the event I_i is easy to estimate. Indeed, suppose that the event I_i does not occur. This implies that the number of "traversals" of S^{z_i} from A(0) to A(2/3), or vice versa, must be bigger than M, where S^{z_i} stands for a simple random walk starting from z_i. Notice that there exists a universal constant c_0 > 0 such that, for any point w ∈ A(0) (respectively w ∈ A(2/3)), the probability that S^w hits A(2/3) (respectively A(0)) is smaller than 1 − c_0 (see [113, Proposition 1.5.10], for example). Thus, the probability of the event I_i is bounded below by 1 − (1 − c_0)^M =: 1 − e^{-aM}, where a > 0. Taking a union bound over 1 ≤ i ≤ L, we find that

P(∩_{i=1}^L I_i) ≥ 1 − L e^{-aM}.

To put the above together, let

J = ∩_{i=1}^L (H_i(ξ_0) ∩ I_i).

For 1 ≤ i ≤ L, set U^1_i = ∪_{j=1}^i γ^{z_j}, so that U^1 = U^1_L.
As above, by a spanning cluster of U^1_i between A(0) and A(2/3) in B' we mean a connected component of the restriction of U^1_i to B' which connects A(0) to A(2/3). We write n_i for the number of spanning clusters of U^1_i between A(0) and A(2/3) in B'. On the event J, we have that

n_i ≤ iM + i − 1

for all 1 ≤ i ≤ L, since n_{i+1} − n_i is at most M + 1 for each i ≥ 1. In particular, we see that the number of spanning clusters of U^1 between A(0) and A(2/3) in B' is bounded above by L(M + 1), which is comparable to M^4.

We next consider a sequence of subsets of A, as follows. Let a_* > 0 be the positive constant such that

a_* ∑_{k=1}^∞ k^{-2} = 10^{-1}.   (6.5)

Set η_1 = 0, and η_k = a_* ∑_{j=1}^{k−1} j^{-2} for k ≥ 2. Finally, for k ≥ 1, let

A_k = [−1 + η_k, 2 − η_k]^3.

Notice that A_{k+1} ⊆ A_k and [−1/2, 3/2]^3 ⊆ A_k for all k ≥ 1, and moreover dist(∂A_k, ∂A_{k+1}) = a_* k^{-2}. We further introduce sequences (z^k_i)_{i=1}^{L_k} consisting of points in A_k ∩ δZ^3 such that

A_k ⊆ ∪_{i=1}^{L_k} B(z^k_i, δ_k) and L_k ≤ 10^5 δ_k^{-3}, where δ_k := M^{-1} 2^{-(k−1)}.   (6.6)

Note that we may assume that L_1 = L and (z^1_i)_{i=1}^{L_1} = (z_i)_{i=1}^L.

For ξ > 0, we set

H^k_i = H^k_i(ξ) := {for any x ∈ B(0, 4) ∩ δZ^3 with dist(x, γ^{z^k_i}) ≤ δ_k, P^x_R(R[0, T^k] ∩ γ^{z^k_i} = ∅) ≤ δ_k^ξ},

where R is a simple random walk that is independent of γ^{z^k_i}, with law denoted by P^x_R when we assume R(0) = x, and T^k is the first time that R exits B(x, √δ_k).
By [148, Theorem 3.1] again, there exist universal constants ξ_1, C > 0 (which do not depend on k) such that

P(∩_{i=1}^{L_k} H^k_i(ξ_1)) ≥ 1 − C δ_k^{10},

for all k = 1, 2, ..., k_0, where k_0 is the smallest integer k such that δ_k < δ. Thus, if we write

H^k = ∩_{i=1}^{L_k} H^k_i(ξ_1) and J' = J ∩ ∩_{k=1}^{k_0} H^k,

we have

P(J') ≥ 1 − C M^{-10}.

Given the above setup, we perform Wilson's algorithm rooted at infinity as follows:

• recall that U^1 is the tree spanned by (z^1_i)_{i=1}^{L_1} = (z_i)_{i=1}^L;
• next, perform Wilson's algorithm for (z^2_i)_{i=1}^{L_2}: for each z^2_i, run a simple random walk R^{z^2_i} from z^2_i until it hits the part of the tree that has already been constructed, and add its loop-erasure as a new branch; the output tree is denoted by U^2;
• repeat the previous step for (z^k_i)_{i=1}^{L_k} to construct U^k, for k = 1, 2, ..., k_0.

Now, condition U^1 on the event J above. We will show that, with high (conditional) probability, every new branch in U^2 \ U^1 has diameter smaller than M^{-1/4}. To this end, for 1 ≤ i ≤ L_2, we write d^2_i for the Euclidean diameter of the path from z^2_i to U^1 in U^2, and define the event W^2_i by setting

W^2_i = {d^2_i ≥ M^{-1/4}}.

Suppose that the event W^2_i occurs. By Wilson's algorithm, the simple random walk R^{z^2_i} must not hit the tree U^1 until it exits B(z^2_i, M^{-1/4}). Since dist(z^2_i, ∂A) ≥ a_* (for the constant a_* defined at (6.5)), it holds that B(z^2_i, M^{-1/4}) ⊆ A. With this in mind, we set u_0 = 0, and

u_m = inf{j ≥ u_{m−1} : |R^{z^2_i}(j) − R^{z^2_i}(u_{m−1})| ≥ M^{-1/2}}

for m ≥ 1. We then have that

R^{z^2_i}[u_{m−1}, u_m] ∩ U^1 = ∅

for all 1 ≤ m ≤ M^{1/4}. Since A ⊆ ∪_{i=1}^L B(z_i, 1/M), it follows that, for each 1 ≤ m ≤ M^{1/4}, there exists a z_i such that R^{z^2_i}(u_{m−1}) ∈ B(z_i, 1/M). Thus the event H_i(ξ_0) guarantees that

P(R^{z^2_i}[u_{m−1}, u_m] ∩ U^1 = ∅ for all 1 ≤ m ≤ M^{1/4}) ≤ M^{-ξ_0 M^{1/4}}.

Consequently, the conditional probability of ∪_{i=1}^{L_2} W^2_i is bounded above by L_2 M^{-ξ_0 M^{1/4}}, which is smaller than C M^{3 − ξ_0 M^{1/4}} for some universal constant C ∈ (0, ∞) (see (6.6) for the definition of L_2).
Replacing constants if necessary, this implies that, with probability at least 1 − C e^{-cM^{1/4}}, every new branch in U^2 \ U^1 has diameter smaller than M^{-1/4}. Notice that, once each new branch has such a small diameter, the event J guarantees that the number of spanning clusters of U^2 between A(0) and A(2/3 + M^{-1/4}) in B is bounded above by L(M + 1) ≤ 10^6 M^4.

Essentially the same argument is valid for U^k. Indeed, conditioning U^k on the good event J ∩ ∩_{l=1}^k H^l as above, it holds that, with probability at least 1 − C e^{-c δ_k^{-1/4}}, every new branch in U^{k+1} \ U^k has diameter smaller than δ_k^{1/4}. Notice that ∑_k δ_k^{1/4} ≤ 10 M^{-1/4} < 10^{-2} when M is large. Therefore, with probability at least 1 − C M^{-10}, the number of spanning clusters of U^{k_0} between A(0) and A(3/4) in B is bounded above by L(M + 1) ≤ C M^4 for some universal constant C.

Finally, we perform Wilson's algorithm for all of the remaining points in δZ^3 to construct U_δ. Since k_0 is the smallest integer k such that δ_k < δ, it follows that the restriction of U_δ to B coincides with that of U^{k_0}. Note that k_0 = 1 when δ = M^{-1}. Thus we conclude that there exists a universal constant C such that, for all M < ∞ and δ ∈ (0, M^{-1}],

P(N_δ ≥ M) ≤ C M^{-2}.   (6.7)

For the case that δ ∈ (M^{-1}, M^{-1/2}], we apply (6.7) to M^{1/2}, and monotonicity implies P(N_δ ≥ M) ≤ P(N_δ ≥ M^{1/2}) ≤ C M^{-1}. Finally, if δ > M^{-1/2}, we use that N_δ ≤ δ^{-2} < M. Combining these three bounds, we readily obtain the bound at (6.4).

Part III

Competitive Growth Processes

Chapter 7

Chase-Escape with Death on Trees¹

Summary of this chapter

Chase-escape is a competitive growth process in which red particles spread to adjacent uncoloured sites, while blue particles overtake adjacent red particles. We introduce the variant in which red particles die, and describe the phase diagram for the resulting process on infinite d-ary trees.
A novel connection to weighted Catalan numbers makes it possible to characterize the critical behaviour.

¹ Joint work with Erin Beckman, Keisha Cook, Nicole Eikmeier, and Matthew Junge. Acknowledgements: Thanks to David Sivakoff and Joshua Cruz for helpful advice and feedback. We are also grateful to Sam Francis Hopkins for pointing us to a reference about weighted Catalan numbers. Feedback during the review process greatly improved the final version. This work was partially supported by NSF DMS Grant #1641020 and was initiated during the 2019 AMS Mathematical Research Community in Stochastic Spatial Systems.

7.1 Introduction

Chase-escape (CE) is a model for predator-prey interactions in which the expansion of predators relies on, but also hinders, the spread of prey. The spreading dynamics come from the Richardson growth model [147]. Formally, the process takes place on a graph in which vertices are in one of the three states {w, r, b}. Adjacent vertices in states (r, w) transition to (r, r) according to a Poisson process with rate λ. Adjacent (b, r) vertices transition to (b, b) at rate 1. The standard initial configuration has a single vertex in state r with a vertex in state b attached to it. All other sites are in state w. These dynamics can be thought of as prey r-particles "escaping" to empty w-sites while being "chased" and consumed by predator b-particles. We will refer to vertices in states r, b, and w as being red, blue, and white, respectively.

We introduce the possibility that prey dies for reasons other than predation in a variant which we call chase-escape with death (CED). This is CE with four states {w, r, b, †} and the additional rule that vertices in state r transition to state † at rate ρ > 0. We call such vertices dead.
Dead sites cannot be reoccupied.

7.1.1 Results

We study CED on the infinite rooted d-ary tree T_d (the tree in which each vertex has d ≥ 2 children) with an initial configuration that has the root red, one extra blue vertex b attached to it, and the rest of the vertices white. Let R be the set of sites that are ever coloured red. Similarly, let B be the set of sites that are ever coloured blue. Denote the events that red and blue occupy infinitely many sites by A = {|R| = ∞} and B = {|B| = ∞}. Since B \ {b} ⊆ R deterministically, we also have B ⊆ A. We will typically write P and E in place of P_{λ,ρ} and E_{λ,ρ} for probability and expectation when the rates are understood to be fixed. There are three possible phases for CED:

• Coexistence: P(B) > 0.
• Escape: P(A) > 0 and P(B) = 0.
• Extinction: P(A) = 0.

For each fixed d and λ, we are interested in whether or not these phases occur, and how the process transitions between them as ρ is varied. Accordingly, we define the critical values

ρ_c = ρ_c(d, λ) = inf{ρ : P_{λ,ρ}(B) = 0},   (7.1)
ρ_e = ρ_e(d, λ) = inf{ρ : P_{λ,ρ}(A) = 0}.   (7.2)

One feature of CE, and likewise CED, that makes it difficult to study on graphs with cycles is that there is no known coupling that proves P(B) increases with λ. On trees the coupling is clear, which makes the analysis more tractable. It follows from [35] that P(B) > 0 for CE on a d-ary tree if and only if λ > λ_c^−, with

λ_c^− = 2d − 1 − 2√(d² − d) ∼ (4d)^{-1}.   (7.3)

For CED on T_d, P(B) is no longer monotone in λ. As λ increases, blue falls further behind, and so the intermediate red particles must live longer for coexistence to occur. This lack of monotonicity makes

λ_c^+ = 2d − 1 + 2√(d² − d) ∼ 4d

also relevant, because we will see that when λ ≥ λ_c^+, the gap between red and blue is so large that coexistence is impossible for any ρ > 0.

Suppressing the dependence on d, let Λ = (λ_c^−, λ_c^+). Unless stated otherwise, we will assume that d ≥ 2 is fixed. Our first result describes the phase structure of CED (see Figure 7.1).

Theorem 7.1.1.
Fix λ > 0.

(i) If λ ∈ Λ, then 0 < ρ_c < ρ_e = λ(d − 1), with escape occurring at ρ = ρ_c and extinction at ρ = ρ_e.

(ii) If λ ∉ Λ, then 0 = ρ_c < ρ_e = λ(d − 1), with extinction occurring at ρ = ρ_e.

Figure 7.1: (Left) The phase diagram for fixed d. The dashed line is ρ_c and the solid line ρ_e. (Right) A rigorous approximation of ρ_c when d = 2. The approximations for larger d have a similar shape.

Our next result concerns the behaviour of E|B| at and above criticality.

Theorem 7.1.2. Fix λ ∈ Λ.

(i) If ρ > ρ_c, then E|B| < ∞.

(ii) If ρ = ρ_c, then E|B| = ∞.

Theorem 7.1.2(ii) is particularly striking because it is known that E|B| < ∞ for CE with λ = λ_c^− (see [35, Theorem 1.4]). Hence the introduction of death changes the critical behaviour. The reason for this comes down to singularity analysis of a generating function associated to CED, and is discussed in more detail in Remark 7.3.2.

We prove three further results about ρ_c, concerning its asymptotic growth in d, its smoothness in λ, and its approximate value for a given d and λ (see Figure 7.1).

Theorem 7.1.3. Fix λ > 0, c < √(λ/2), and C > √(2λ). For all d large enough,

c√d ≤ ρ_c ≤ C√d.

Theorem 7.1.4. The function ρ_c is infinitely differentiable in λ ∈ Λ.

Theorem 7.1.5. Fix λ ∈ Λ. For every ρ ≠ ρ_c, there is a finite-runtime algorithm to determine whether ρ < ρ_c or ρ > ρ_c.

7.1.2 Proof methods

Theorems 7.1.1 and 7.1.3 are proven by relating coexistence in CED to the survival of an embedded branching process that renews each time blue is within distance one of the farthest red. Describing the branching process comes down to understanding how CED behaves on the nonnegative integers with 0 initially blue and 1 initially red. In particular, we are interested in the event R_k that at some time k is blue, k + 1 is red, and all further sites are white.

The probability of R_k can be expressed as a weighted Catalan number. These are specified by non-negative weights u(j) and v(j) for j ≥ 0.
Given a lattice path γ consisting of unit rise and fall steps, each rise step from (x, j) to (x + 1, j + 1) has weight u(j), while each fall step from (x, j + 1) to (x + 1, j) has weight v(j). The weight ω(γ) of a Dyck path γ is defined to be the product of the rise and fall step weights along γ.

The corresponding weighted Catalan number is C^{u,v}_k = ∑ ω(γ), where the sum is over all Dyck paths γ of length 2k (nonnegative paths starting at (0, 0) consisting of k rise and k fall steps). See Figure 7.2 for an example. For CED, we define C^{λ,ρ}_k as the weighted Catalan number with weights

u(j) = λ/(1 + λ + (j + 1)ρ)  and  v(j) = 1/(1 + λ + (j + 2)ρ).  (7.4)

At (7.8) we explain why P(Rk) = C^{λ,ρ}_k.

Returning to CED on Td, self-similarity ensures that the expected number of renewals in the embedded branching process is equal to the generating function g(z) = ∑_{k=0}^{∞} C^{λ,ρ}_k z^k evaluated at z = d. We prove in Proposition 7.4.1 that ρc is the value at which the radius of convergence of g is equal to d. We characterize the radius of convergence using a continued fraction representation of g, which leads to the proofs of Theorems 7.1.2, 7.1.4, and 7.1.5.

[Figure 7.2: A Dyck path of length 10. The weight of this path is u(0)²v(0)²u(1)v(1)u(2)²v(2)².]

7.1.3 History and context

The forebear of CE is the Richardson growth model for the spread of a single species [147]. In our notation, this process corresponds to the setting with λ = 1, ρ = 0, and no blue particles. Many basic questions remain unanswered for the Richardson growth model on the integer lattice [17], as well as for the competitive version [56].

James Martin conjectured that coexistence occurs in CE on lattices when red spreads at a slower rate than blue. Simulation evidence from Tang, Kordzakhia, and Lalley in [164] suggested that, on the two-dimensional lattice, red and blue coexist with positive probability so long as λ ≥ 1/2.
Durrett, Junge, and Tang proved in [62] that red and blue can coexist, with red stochastically slower than blue, on high-dimensional oriented lattices with spreading times that resemble Bernoulli bond percolation.

The first rigorous result we know of for CE is Kordzakhia's proof that the phase transition occurs at λc⁻ for CE on regular trees [104]. Later, Kortchemski considered the process on the complete graph as well as on trees with arbitrary branching number [105, 106]. An alternative perspective of CE as scotching a rumor was studied by Bordenave in [34]. The continuous limit of rumor scotching was studied many years earlier by Aldous and Krebs [8]. Looking to model malware spread and suppression through a device network, Hinsen, Jahnel, Cali, and Wary studied CE on Gilbert graphs [82].

To the best of our knowledge, CED has not been studied before. From the perspective of modelling species competition, it seems natural for prey to die from causes other than being consumed, and, in rumor scotching, for holders to cease spreading because of fading interest. Furthermore, CED has new mathematical features. The existence of an escape phase, the fact that E|B| = ∞ at criticality, and the lack of monotonicity of P(B) in λ are all different from what occurs in CE.

The perspective we take on weighted Catalan numbers also appears to be novel. Typically they are studied with integer weights [9, 146, 152]. We are interested in the fractional weights at (7.4). Flajolet and Guillemin observed that fractionally weighted lattice paths describe birth and death chains for which the rates depend on the population size [68]. The distance between the rightmost red and rightmost blue for CED on the nonnegative integers is a birth and death chain in which mass extinction may occur.
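This birth-and-death description is easy to check numerically. The sketch below (Python; the parameter values and function names are our own choices) computes C^{λ,ρ}_k exactly by dynamic programming over Dyck paths with the weights at (7.4), and estimates P(R1) and P(R2) by simulating the jump chain of the distance process with the rates derived in Section 7.2, illustrating the identity P(Rk) = C^{λ,ρ}_k stated at (7.8).

```python
import random

def weighted_catalan(lam, rho, k):
    """C^{λ,ρ}_k: total weight of Dyck paths of length 2k, weights as at (7.4)."""
    u = lambda j: lam / (1 + lam + (j + 1) * rho)  # rise step from height j
    v = lambda j: 1.0 / (1 + lam + (j + 2) * rho)  # fall step from j+1 down to j
    W = [1.0]  # W[h] = total weight of path prefixes ending at height h
    for _ in range(2 * k):
        nxt = [0.0] * (len(W) + 1)
        for h, w in enumerate(W):
            nxt[h + 1] += w * u(h)          # rise from h to h+1
            if h > 0:
                nxt[h - 1] += w * v(h - 1)  # fall from h to h-1
        W = nxt
    return W[0]

def renewal_sites(lam, rho, kmax, rng):
    """One run of the CED+ jump chain S = R_t - B_t started from the state (b, r).
    Returns the blue positions at which renewals occur, up to kmax."""
    S, blue, renewals = 1, 0, set()
    while blue < kmax:
        x = rng.random() * (1 + lam + S * rho)
        if x < lam:            # the front red advances
            S += 1
        elif x < lam + 1:      # the front blue advances
            S -= 1
            blue += 1
            if S == 1:
                renewals.add(blue)  # the boundary is again (b, r)
            elif S == 0:
                break               # blue consumed the last red
        else:                  # one of the S intermediate red sites dies
            break
    return renewals

lam, rho, n = 1.0, 0.5, 200_000
rng = random.Random(0)
hits = [0, 0]
for _ in range(n):
    ren = renewal_sites(lam, rho, 2, rng)
    hits[0] += 1 in ren
    hits[1] += 2 in ren
print(hits[0] / n, weighted_catalan(lam, rho, 1))  # both ≈ 2/15 ≈ 0.1333
print(hits[1] / n, weighted_catalan(lam, rho, 2))  # both ≈ 16/525 ≈ 0.0305
```

For λ = 1 and ρ = 1/2 the two length-4 Dyck paths give C^{λ,ρ}_2 = (2/5·1/3)² + (2/5)(1/3)(2/7)(1/3) = 16/525, and the Monte Carlo frequencies agree to within sampling error.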
Since we are interested in CED on trees, we analyse the radius of convergence of the generating function of weighted Catalan numbers, which we believe has not been studied before.

7.2 CED on the line

Let CED+ denote the process with CED dynamics on the nonnegative integers for which the vertices 0 and 1 are initially blue and red, respectively. All other vertices are initially white. Let st(n) ∈ {w, r, b, †} indicate the state of vertex n at time t. Define the processes Bt = max{n : st(n) = b} and Rt = max{n : st(n) = r}, and the random variable

Y = sup{Bt : t ≥ 0}.  (7.5)

If st(n) ≠ r for all n, then define Rt = −∞. Let ∂t = (st(n))_{n=Bt}^{Rt} be the state of the interval [Bt, Rt]. One can think of this as the boundary of the process. Note that this interval only makes sense when Rt > −∞. Renewal times are times t ≥ 0 such that ∂t = (b, r). For k ≥ 0, let

Rk = {Bt = k for some renewal time t}  (7.6)

be the event that there is a renewal when blue occupies site k. Also define the event At = {st(n) ≠ † for all n} that none of the red sites have died up to time t.

Let St = Rt − Bt be the distance between the rightmost blue and red particles at time t. Define the collection of jump times by τ(0) = 0 and, for i ≥ 1,

τ(i) = inf{t ≥ τ(i − 1) : St ≠ S_{τ(i−1)} or 1(At) = 0}.

The jump chain J = (Ji) of St is given by

Ji = S_{τ(i)} if 1(A_{τ(i)}) = 1, and Ji = 0 otherwise.

This is a Markov chain with an absorbing state 0, corresponding to blue no longer having the potential to reach infinity. The transition probabilities for j > 0 are:

p_{j,j+1} = λ/(1 + λ + jρ),  p_{j,j−1} = 1/(1 + λ + jρ),  p_{j,0} = jρ/(1 + λ + jρ).  (7.7)

Call a jump chain (J0, . . . , Jn) living if Ji > 0 for all 0 ≤ i ≤ n. Translating the set of Dyck paths of length 2k up by one vertical unit gives the jump chains corresponding to Rk. Notice that p_{j,j+1} = u(j − 1) and p_{j,j−1} = v(j − 2), with u and v as in (7.4). Thus, it is easy to see that for all k ≥ 0 we have

Pλ,ρ(Rk) = C^{λ,ρ}_k,  (7.8)

with C^{λ,ρ}_k the weighted Catalan number defined in Section 7.1.2.

Lemma 7.2.1.
For any ϵ > 0 there exists ρ′ > 0 such that for all ρ ∈ [0, ρ′) and sufficiently large k,

C^{λ,ρ}_k ≥ ((4 − ϵ)λ/(1 + λ)²)^k.

Proof. Let C_{k,m} be the number of Dyck paths of length 2k that never exceed height m. We first claim that for any δ > 0, there exists m_δ such that

C_{k,m_δ} ≥ (4 − δ)^k  (7.9)

for sufficiently large k.

The mth Catalan number C_m := C_{m,∞} counts the number of Dyck paths of length 2m. Consider any sequence of ⌊k/m⌋ Dyck paths of length 2m. If we concatenate these paths, we obtain a path of length 2m⌊k/m⌋ ≥ 2k − 2m which stays below height m. We extend this path into a Dyck path of length 2k by concatenating the necessary number of up and down steps to the end in any manner. Since each of the ⌊k/m⌋ Dyck paths of length 2m can be chosen independently of one another, we have C_{k,m} ≥ (C_m)^{⌊k/m⌋}.

Using the standard asymptotic relation C_m ∼ (1/√π) m^{−3/2} 4^m (see [161]), we have for large enough m and k,

C_{k,m} ≥ (4^m/(2√π m^{3/2}))^{⌊k/m⌋} ≥ (4^m/(2√π m^{3/2}))^{−1} (4/(2√π m^{3/2})^{1/m})^k.

It is easy to verify that (2√π m^{3/2})^{1/m} → 1 as m → ∞. Thus, we can choose m large enough so that

4/(2√π m^{3/2})^{1/m} > 4 − δ/2.

We then have

C_{k,m} ≥ (4^m/(2√π m^{3/2}))^{−1} (4 − δ/2)^k.

This is true for all m, k sufficiently large, and if we fix an m_δ big enough for this inequality to hold, then taking k large enough gives the claimed inequality at (7.9).

Using the weights at (7.4), each path γ counted by C_{k,m} satisfies

ω(γ) ≥ (λ/(1 + λ + mρ)²)^k,

because γ has length 2k but never exceeds height m. Summing over just the Dyck paths counted by C_{k,m} gives

C^{λ,ρ}_k ≥ C_{k,m} λ^k/(1 + λ + mρ)^{2k}.  (7.10)

Fix an ϵ > 0. We can choose δ > 0 small enough so that (4 − δ)(1 − δ) > 4 − ϵ. By (7.9), pick m_δ large enough to have C_{k,m_δ} > (4 − δ)^k for all sufficiently large k. Finally, choose ρ′ > 0 small enough so that

λ/(1 + λ + m_δ ρ′)² > (λ/(1 + λ)²)(1 − δ).

Since the C^{λ,ρ}_k are decreasing in ρ, applying these choices to (7.10) gives the desired inequality for all ρ ∈ [0, ρ′):

C^{λ,ρ}_k ≥ ((4 − δ)(1 − δ) λ/(1 + λ)²)^k ≥ ((4 − ϵ)λ/(1 + λ)²)^k.

Recall the definition of Y at (7.5).
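The lower bound of Lemma 7.2.1 can be checked numerically. The sketch below (Python; the parameter choices are ours, with ϵ = 3/2 taken for a comfortable margin) computes C^{λ,ρ}_k exactly by dynamic programming over Dyck path prefixes and compares it with ((4 − ϵ)λ/(1 + λ)²)^k for several small values of ρ.

```python
def weighted_catalan(lam, rho, k):
    """Exact C^{λ,ρ}_k via dynamic programming over Dyck path prefixes,
    using the rise/fall weights u(j), v(j) of (7.4)."""
    u = lambda j: lam / (1 + lam + (j + 1) * rho)
    v = lambda j: 1.0 / (1 + lam + (j + 2) * rho)
    W = [1.0]  # W[h] = total weight of prefixes ending at height h
    for _ in range(2 * k):
        nxt = [0.0] * (len(W) + 1)
        for h, w in enumerate(W):
            nxt[h + 1] += w * u(h)          # rise from h
            if h > 0:
                nxt[h - 1] += w * v(h - 1)  # fall from h
        W = nxt
    return W[0]

lam, eps, k = 1.0, 1.5, 40
bound = ((4 - eps) * lam / (1 + lam) ** 2) ** k
for rho in [0.0, 0.001, 0.005]:
    assert weighted_catalan(lam, rho, k) >= bound
# At rho = 0 the exact value is C_40 / 4^40 (roughly 2.2e-3), far above
# the bound (0.625)^40, which is roughly 7e-9.
```

At ρ = 0 the weights are constant and C^{λ,0}_k reduces to C_k (λ/(1 + λ)²)^k, which the dynamic program reproduces exactly.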
We conclude this section by proving that P(Y ≥ k) can be bounded in terms of P(Rk). The difficulty is that the event {Y ≥ k} includes all realizations for which blue reaches k, while Rk only includes realizations which have a renewal at k.

Lemma 7.2.2. For ρ > 0, there exists C > 0, which is a function of λ and ρ, such that P(Y ≥ k) ≤ C k^{1+λ/ρ} P(Rk) for all k ≥ 1.

Proof. Given a living jump chain J = (J0, J1, . . . , Jm), define the height profile of J to be h(J) = (h1(J), . . . , h_{m+1}(J)), where hi(J) is the number of entries Jℓ in J with ℓ < m for which Jℓ = i. These values correspond to the total number of times that blue is at distance i from red. Suppose that red takes r(J) many steps in a jump chain J. It is straightforward to show that

p(J) = λ^{r(J)} ∏_{j=1}^{m+1} (1/(1 + λ + jρ))^{h_j(J)}  (7.11)

is the probability that the process follows the living jump chain J.

We view realizations leading to outcomes in {Y ≥ k} as having two distinct stages. In the first stage, the rightmost red particle reaches k. In the second stage, we ignore red and only require that blue advances until it reaches k. The advantage of this perspective is that we can partition outcomes in {Y ≥ k} by their behaviour in the first stage and then restrict our focus to the behaviour of the process in the interval [0, k] for the second stage.

For 2 ≤ ℓ ≤ k, define Γℓ to be the set of all living jump chains of length 2k − ℓ − 1 which go from (0, 1) to (2k − ℓ − 1, ℓ) with the last step being a rise step (see Figure 7.3). These are the jump chains from the first stage. Now we describe the second stage. Assume that blue is at k − ℓ when red reaches k. For blue to reach k, the red sites in [k − ℓ + 1, k] must stay alive long enough for blue to advance another ℓ steps. This has probability σ(ℓ) := ∏_{i=1}^{ℓ} (1 + iρ)^{−1}. Given γ ∈ Γℓ, the formula at (7.11) implies that the probability that Y ≥ k and the first 2k − ℓ − 1 steps of the process's jump chain follow γ is

q(γ) := p(γ)σ(ℓ) = λ^{k−1} σ(ℓ) ∏_{j=1}^{k+1} (1 + λ + jρ)^{−h_j(γ)}.  (7.12)

Let qℓ = ∑_{γ∈Γℓ} q(γ).
Notice that

q₂ · λ/((1 + λ + ρ)(1 + λ + 2ρ)²) ≤ P(Rk).  (7.13)

This is because a subset of Rk is the collection of processes which follow jump chains in Γ₂ and for which the next three steps have blue advance by one, then red advance by one, followed by blue advancing by one. We will further prove that there exists C₀ (independent of ℓ) such that

qℓ ≤ C₀ ℓ^{λ/ρ} q₂.  (7.14)

The claimed inequality then follows from (7.13), (7.14), and the partitioning P(Y ≥ k) = ∑_{ℓ=2}^{k} qℓ.

To prove (7.14), notice that by inserting ℓ − 2 fall steps right before the final upward step of each path γ ∈ Γℓ, we obtain a path γ̃ ∈ Γ₂ (see Figure 7.3). Since the paths γ and γ̃ agree for the first 2k − ℓ − 2 steps, we have from (7.12)

q(γ) = (σ(ℓ)/σ(2)) ∏_{i=1}^{ℓ−2} (1 + λ + iρ) · q(γ̃)  (7.15)
     = [(1 + λ + ρ)(1 + λ + 2ρ)/((1 + ℓρ)(1 + (ℓ − 1)ρ))] · q(γ̃) · ∏_{i=3}^{ℓ−2} (1 + λ/(1 + iρ)).  (7.16)

Rewriting the product as a sum of logarithms and using integral bounds, one can verify that

∏_{i=3}^{ℓ−2} (1 + λ/(1 + iρ)) ≤ C₁ ℓ^{λ/ρ},

for some C₁ that depends on λ and ρ. This gives q(γ) ≤ C₀ ℓ^{λ/ρ} q(γ̃). Thus, qℓ ≤ ∑_{γ∈Γℓ} C₀ ℓ^{λ/ρ} q(γ̃). Restricted to paths γ ∈ Γℓ, the map γ ↦ γ̃ is injective, and hence

qℓ ≤ ∑_{γ∈Γℓ} C₀ ℓ^{λ/ρ} q(γ̃) ≤ C₀ ℓ^{λ/ρ} ∑_{γ∈Γ₂} q(γ) = C₀ ℓ^{λ/ρ} q₂.

This yields (7.14) and completes the lemma.

[Figure 7.3: Let k = 7. The black line with dots is a path γ ∈ Γ₅. The blue line with stars is the modified path γ̃ ∈ Γ₂. The red line with pluses is the extension of γ̃ to a jump chain in R₇.]

7.3 Properties of weighted Catalan numbers

7.3.1 Preliminaries

Given a sequence (cn)_{n≥0}, define the formal continued fraction

K[c0, c1, . . .] := c0/(1 − c1/(1 − · · · )).  (7.17)

The ci may be fixed numbers, or possibly functions. Also, whenever we write x we mean a nonnegative real number, and z represents an arbitrary complex number.

The discussion in [73, Chapter 5] tells us that, for general weighted Catalan numbers, g(z) := ∑_{k=0}^{∞} C^{u,v}_k z^k is equal to the function

f(z) := K[1, a0z, a1z, . . .]  (7.18)

for all |z| < M, where M is the radius of convergence of g centred at the origin, and aj = u(j)v(j).
M is the modulus of the singularity of g nearest to the origin, or, by the Hadamard–Cauchy theorem,

M = 1/limsup_{k→∞} (C^{u,v}_k)^{1/k}.  (7.19)

Let u(j) and v(j) be as in (7.4) so that, unless stated otherwise, we have

aj = λ/((1 + λ + (j + 1)ρ)(1 + λ + (j + 2)ρ)).  (7.20)

When necessary, we denote the dependence on λ and ρ by f_{λ,ρ}.

7.3.2 Properties of f and M

Our proof that f is meromorphic relies on a classical convergence criterion for continued fractions [165, Theorem 10.1].

Theorem 7.3.1 (Worpitzky Circle Theorem). Let cj : D → {|w| < 1/4} be a family of analytic functions over a domain D ⊆ C. Then K[1, c0(z), c1(z), . . .] converges uniformly for z in any compact subset of D, and the limit function takes values in {|w − 4/3| ≤ 2/3}.

Corollary 7.3.1. If ρ > 0, then f is a meromorphic function on C.

Proof. We will prove that f is meromorphic for all z ∈ ∆ = {|z| < r0} with r0 > 0 arbitrary. Let hj(z) := K[ajz, a_{j+1}z, . . .] be the tail of the continued fraction, so that f(z) = K[1, a0z, . . . , a_{j−1}z, hj(z)]. Since ρ > 0, we have |aj| ↓ 0 as j → ∞. It follows that for some j = j(r0) large enough, |a_k z| ≤ 1/4 for all k ≥ j and z ∈ ∆. Theorem 7.3.1 ensures that |hj(z)| < ∞ and that the partial continued fractions K[ajz, . . . , anz] are analytic (again by Theorem 7.3.1) and converge uniformly to hj for z ∈ ∆. Thus, hj is a uniform limit of analytic functions and is therefore analytic on ∆. We can then write f(z) = K[1, a0z, . . . , a_{j−1}z, hj(z)]. Since each aiz is a linear function in z, f is a quotient of two analytic functions.

Next we show that the exponential growth rate of the C^{λ,ρ}_k responds continuously to changes in ρ.

Lemma 7.3.2. Fix λ > 0.

(i) Fix ρ > 0. Given 0 ≤ ρ′ < ρ, there exists ϵ > 0 such that

C^{λ,ρ}_k (1 + ϵ)^k ≤ C^{λ,ρ′}_k.  (7.21)

(ii) Fix ρ > 0. Given ϵ′ > 0, there exists a δ > 0 such that for ρ′ ∈ (ρ − δ, ρ + δ),

(1 − ϵ′)^k ≤ C^{λ,ρ}_k / C^{λ,ρ′}_k ≤ (1 + ϵ′)^k.  (7.22)

(iii) If ρ = 0, then given ϵ′ > 0, there exists a δ > 0 such that for ρ′ ∈ [0, δ) it holds that

1 ≤ C^{λ,ρ}_k / C^{λ,ρ′}_k ≤ (1 + ϵ′)^k.  (7.23)

Proof.
Let γ be a Dyck path of length 2k. From the definition of C^{λ,ρ}_k, the weight w(γ) is a product of some combination of exactly k of the aj terms. Let a′_j be the weights corresponding to ρ′. It is a basic calculus exercise to show that, when ρ′ < ρ, the ratio a′_j/a_j > 1 is an increasing function of j. Let a′₀/a₀ = 1 + ϵ. Using the fact that w(γ) and w′(γ) involve the same number of each weight, we can directly compare their ratio using the worst-case lower bound

(1 + ϵ)^k = (a′₀/a₀)^k ≤ w′(γ)/w(γ).  (7.24)

Cross-multiplying and then summing over all paths γ gives (i).

Towards (ii), suppose 0 < ρ′ < ρ. Then we have 1 ≤ a′_j/a_j ≤ (ρ/ρ′)². Choose δ₁ such that (ρ − δ₁)/ρ = √(1 − ϵ′). Using the same logic as above, we have for ρ′ > ρ − δ₁ that

(1 − ϵ′)^k ≤ (ρ′/ρ)^{2k} ≤ w(γ)/w′(γ) ≤ 1^k ≤ (1 + ϵ′)^k.

Now suppose 0 < ρ < ρ′. Choose δ₂ such that (ρ + δ₂)/ρ = √(1 + ϵ′). Then, following the same steps as above, we see that for ρ′ < ρ + δ₂,

(1 − ϵ′)^k ≤ 1^k ≤ w(γ)/w′(γ) ≤ (ρ′/ρ)^{2k} ≤ (1 + ϵ′)^k.

The same reasoning as with (7.24), with the choice δ = min(δ₁, δ₂), gives (ii).

We now prove (iii). Notice that C^{λ,0}_k = C_k (λ/(1 + λ)²)^k, with C_k the kth Catalan number, since the transition probabilities at (7.7) for the associated jump chain are homogeneous when ρ = 0. By Lemma 7.2.1 and the bound C_k ≤ 4^k, for a fixed ϵ̃ > 0 there exists a ρ′ > 0 such that

1 ≤ C^{λ,0}_k / C^{λ,ρ′}_k ≤ C_k (λ/(1 + λ)²)^k / [(4 − ϵ̃)^k (λ/(1 + λ)²)^k] = C_k/(4 − ϵ̃)^k ≤ (4/(4 − ϵ̃))^k.  (7.25)

Choosing ϵ̃ such that 4/(4 − ϵ̃) = 1 + ϵ′, and letting δ be small enough so that (7.25) holds for all ρ′ < δ, we obtain an identical bound to the one in Lemma 7.3.2 (ii), but for ρ′ ∈ [0, δ).

Lemma 7.3.3. M is a continuous, strictly increasing function for ρ ∈ [0, ∞) satisfying M(0) = (1 + λ)²/(4λ).

Proof. First note that for any ρ, M(ρ) ≤ (a₀)^{−1} < ∞. This is because C^{λ,ρ}_k ≥ ω(γk) = (a₀)^k, where γk is the Dyck path consisting of k alternating rise and fall steps.

That M is increasing follows immediately from the fact that the C^{λ,ρ}_k are decreasing in ρ.
To see that M is strictly increasing, we use Lemma 7.3.2 (i) in combination with the definition of M at (7.19). Indeed, the lemma implies that for any 0 ≤ ρ′ < ρ there exists a δ > 0 such that

M(ρ′) = 1/limsup_{k→∞} (C^{λ,ρ′}_k)^{1/k} ≤ 1/((1 + δ) limsup_{k→∞} (C^{λ,ρ}_k)^{1/k}) = M(ρ)/(1 + δ).

So M is strictly increasing.

To show continuity for ρ > 0, we use (7.19). Fix ρ > 0 and ϵ > 0. Let ϵ′ = ϵ/M(ρ) and let δ > 0 be as in Lemma 7.3.2 (ii). For ρ′ ∈ (ρ − δ, ρ + δ) we have

liminf_{k→∞} (C^{λ,ρ}_k / C^{λ,ρ′}_k)^{1/k} ≤ M(ρ′)/M(ρ) ≤ limsup_{k→∞} (C^{λ,ρ}_k / C^{λ,ρ′}_k)^{1/k}.

Using the bound in Lemma 7.3.2 (ii) results in

1 − ϵ′ ≤ M(ρ′)/M(ρ) ≤ 1 + ϵ′.

Simplifying, then substituting for ϵ′, gives |M(ρ′) − M(ρ)| < ϵ. Thus, M is continuous at ρ > 0. Continuity at ρ = 0 uses a similar argument along with Lemma 7.3.2 (iii). The explicit formula for M(0) comes from the identity C^{λ,0}_k = C_k (λ/(1 + λ)²)^k and the fact that the generating function for the Catalan numbers has radius of convergence 1/4.

Lemma 7.3.4. If λ ∈ Λ, then there exists a unique value ρd > 0 such that M(ρd) = d. Moreover, if λ ∉ Λ, then M > d for all ρ ≥ 0.

Proof. Fix λ. To signify the dependence on ρ, let gρ(z) be the generating function of the C^{λ,ρ}_k. Using the continuity and strict monotonicity of M in Lemma 7.3.3, to show the first statement it suffices to prove that gρ(d) < ∞ for ρ large enough, and that g₀(d) = ∞. It is easy to see that if ρ > λ(d − 1), then the branching process of red spreading with no blue particles has finite expected size, and thus gρ(d) < ∞ for such ρ.

Using the formula for M(0) from Lemma 7.3.3 and rearranging, we see that when ρ = 0, we have M < d whenever λd/(1 + λ)² > 1/4. The set of λ for which this occurs is by definition Λ. Therefore, g₀(d) = ∞, proving the first claim. The claim that M > d for λ ∉ Λ follows from Lemma 7.3.3.
The explicit formula for M(0) is easily shown to satisfy M(0) > d, and since M is increasing, this inequality holds for all ρ ≥ 0.

Our next lemma requires an old theorem from complex variable theory (see [69, Theorem IV.6], for example).

Theorem 7.3.5 (Pringsheim's Theorem). If f(z) is representable at the origin by a series expansion that has non-negative coefficients and radius of convergence M, then the point z = M is a singularity of f(z).

Lemma 7.3.6. Let ρ > 0. Then M ≤ d if and only if g(d) = ∞.

Proof. We first note that the implication "M < d implies g(d) = ∞", as well as the reverse direction "g(d) = ∞ implies M ≤ d", both follow immediately from the definition of the radius of convergence. It remains to show that M = d implies g(d) = ∞. Corollary 7.3.1 proves that f is a meromorphic function. Since g = f for |z| < M, f(x) > 0 for x ∈ (0, d), and Theorem 7.3.5 gives that z = d is a pole, we have for x ∈ R that

g(d) = lim_{x↑d} g(x) = lim_{x↑d} f(x) = ∞.

Remark. CE and CED differ in their behaviour at λ = λc⁻ (see Theorem 7.1.2). In particular, the argument that g(d) = ∞ when M = d does not apply to CE. Why is this the case? The difference is that when ρ = 0, we cannot use Corollary 7.3.1. Instead, writing a closed expression for the function,

g(z) = ∑_{k=0}^{∞} C^{λ,0}_k z^k = ∑_{k=0}^{∞} C_k (λ/(1 + λ)²)^k z^k = (1 − √(1 − 4zλ/(1 + λ)²)) / (2zλ/(1 + λ)²),

we find that g has a branch cut rather than isolated poles. [69, Theorem IV.10] ensures that the growth of the C^{λ,ρc}_k is determined by the orders of the singularities of f at distance M = d. Formally,

C^{λ,ρc}_k = ∑_{|αj|=d} αj^{−k} πj(k) + o(d^{−k}),  (7.26)

where the αj are the poles of f at distance d and the πj are polynomials with degree equal to the order of the pole of f at αj minus one. For CE, evaluating at the radius of convergence with λ = λc⁻, the branch cut instead contributes a summable prefactor of k^{−3/2} in place of the polynomials in (7.26).

7.4 M and CED

The main results of this section are that for λ ∈ Λ we have ρc = ρd, and that P(B) is continuous at ρd.
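As a quick numerical sanity check on the remark above (a sketch; the values of λ and z are arbitrary choices inside the radius of convergence), the closed form for g at ρ = 0 agrees with the partial sums of the Catalan series:

```python
from math import comb, sqrt

def g_closed(lam, z):
    """Closed form of g(z) at ρ = 0, from the remark above."""
    x = lam * z / (1 + lam) ** 2
    return (1 - sqrt(1 - 4 * x)) / (2 * x)

def g_partial(lam, z, n):
    """Partial sum of Σ C_k (λ/(1+λ)²)^k z^k, C_k the Catalan numbers."""
    x = lam * z / (1 + lam) ** 2
    return sum(comb(2 * k, k) // (k + 1) * x ** k for k in range(n))

lam, z = 1.0, 0.5           # here 4λz/(1+λ)² = 0.5 < 1, inside M(0) = 1
print(g_closed(lam, z))     # ≈ 1.171572875
print(g_partial(lam, z, 100))
```

The square-root branch point at z = M(0) = (1 + λ)²/(4λ) is visible in the closed form: the argument of the square root vanishes there, rather than a denominator.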
The lemmas in this section will also be useful for characterizing the phase behaviour of CED.

Proposition 7.4.1. For λ ∈ Λ it holds that ρd = ρc.

Proof. Lemmas 7.4.5 and 7.4.6 give that whenever λ ∈ Λ we have ρd = inf{ρ : Pλ,ρ(B) = 0}, which is the definition of ρc.

Lemma 7.4.2. M > d if and only if E|B| < ∞.

Proof. Letting Y be as at (7.5), we first observe that

E|B| = ∑_{k=0}^{∞} P(Y ≥ k) d^k ≥ ∑_{k=1}^{∞} P(Rk) d^k = g(d).

Hence E|B| < ∞ implies g(d) < ∞, which gives M > d by Lemma 7.3.6.

For the other direction, using the comparison in Lemma 7.2.2 gives

E|B| = ∑_{k=0}^{∞} P(Y ≥ k) d^k ≤ 1 + ∑_{k=1}^{∞} C k^{1+λ/ρ} P(Rk) d^k.  (7.27)

Lemma 7.3.3 tells us that M is continuous, and since M > d, the sum on the right still converges with the polynomial prefactor.

Call a vertex v ∈ Td a tree renewal vertex if at some time it is red, its parent vertex is blue, and all vertices one level or more down from v are white. Note that this definition of renewal is a translation one unit left of the definition of renewals in CED+. This makes it so that each vertex at distance k from the root is a tree renewal vertex with probability P(Rk). We call a tree renewal vertex v a first tree renewal vertex if none of the non-root vertices on the shortest path from v to the root are renewal vertices. Let Z be the number of first tree renewal vertices in CED.

Lemma 7.4.3. EZ ≥ 1 if and only if g(d) = ∞.

Proof. Using the self-similarity of Td, the number of renewal vertices is a Galton–Watson process with offspring distribution Z. Since each of the d^k vertices at distance k has probability P(Rk) of being a tree renewal vertex, the expected size of the Galton–Watson process is g(d). Standard facts about branching processes imply the claimed equivalence.

Lemma 7.4.4. If λ ∈ Λ and ρ = ρd, then EZ ≤ 1.

Proof. Since M is strictly increasing in ρ (Lemma 7.3.3), we have M > d for ρ > ρd and, so, Lemma 7.4.3 implies that EZ < 1.
Let R′k be the event that a fixed but arbitrary vertex at distance k is a first tree renewal vertex. Fatou's lemma gives

E_{λ,ρd} Z = ∑_{k=1}^{∞} d^k P_{λ,ρd}(R′k) = ∑_{k=1}^{∞} d^k liminf_{ρ↓ρd} Pλ,ρ(R′k)  (7.28)
≤ liminf_{ρ↓ρd} ∑_{k=1}^{∞} d^k Pλ,ρ(R′k) = liminf_{ρ↓ρd} E_{λ,ρ} Z ≤ 1.  (7.29)

To see the second equality, notice that Pλ,ρ(R′k) is continuous in ρ at ρd, as R′k consists of finitely many jump chains.

Lemma 7.4.5. If λ ∈ Λ and ρ < ρd, then P(B) > 0.

Proof. It follows from Lemmas 7.3.3 and 7.4.3, along with the easily observed fact that EZ is strictly decreasing in ρ, that EZ > 1 for ρ < ρd. Thus, the embedded branching process of renewal vertices is infinite with positive probability. As |B| is at least as large as the number of renewal vertices, this gives P(B) > 0.

Lemma 7.4.6. If λ ∈ Λ and ρ = ρd, then P(B) = 0.

Proof. The probability that blue survives along a path while remaining at distance at least L > 0 from the most distant red site from the root for k consecutive steps is bounded by the probability that none of the intermediate red particles die, a quantity which is in turn bounded by

((1 + λ)/(1 + λ + Lρ))^k.  (7.30)

Choose L₀ large enough so that (7.30) is smaller than (2d)^{−k}. Let Dk(v) be the event that, in the subtree rooted at v, the front blue particle moves from vertex v to a vertex at level k without ever coming closer than L₀ to the front red particle. Applying a union bound over all vertices v with |v| = k and the estimate (7.30), we have P(∪_{|v|=k} Dk(v)) ≤ 2^{−k}. As these probabilities are summable, the Borel–Cantelli lemma implies that almost surely only finitely many Dk occur.

On a given vertex self-avoiding path from the root to ∞, let Bt (respectively, Rt) be the distance between the furthest blue (respectively, the furthest red) and the root. In order for B to occur, there must exist a path such that Bt − Rt = m infinitely often for some fixed m < L₀. Suppose Bt − Rt ≥ m for all times and Bt − Rt = m infinitely often.
Self-similarity ensures that the vertices at which Bt − Rt = m form a branching process. Using the monotonicity of the jump chain probabilities at (7.7), this branching process is dominated by the embedded branching process of renewal vertices (i.e., the case m = 1). Lemma 7.4.4 ensures that the embedded branching process of renewal vertices is almost surely finite (since it is critical), and thus for each fixed m < L₀ the associated branching process is also almost surely finite. As blue can only reach infinitely many sites if either infinitely many Dk occur, or one of the m-renewing branching processes survives, we have P(B) = 0.

7.5 Proofs of Theorems 7.1.1, 7.1.2, and 7.1.3

Lemma 7.5.1. If λ ∈ Λ, then ρc < ρe.

Proof. Consider the function

fU(λ) = (1/4)(√((32d + 2)λ + λ² + 1) − 3λ − 3).

In the proof of Theorem 7.1.3, we verify that ρc < fU(λ). One can further check that fU(λc⁻) = 0 = fU(λc⁺) are the only zeros of fU, so that fU(λ) > 0 for λ ∈ Λ. Moreover, we have

ρe − fU(λ) = λ(d − 1) − (1/4)(√((32d + 2)λ + λ² + 1) − 3λ − 3).

Some algebra shows that ρe − fU(λ) = 0 is equivalent to solving −8 + (8d + 8)λ + (8d − 16d²)λ² = 0, which has no real solutions for d ≥ 2. As ρe − fU(λ) is continuous in λ, never vanishes, and is positive for some choice of λ at each d ≥ 2, we must have ρe − fU(λ) > 0. We thus arrive at the claimed relation ρc < fU(λ) < ρe.

Proof of Theorem 7.1.1. First we prove that ρe = λ(d − 1). In CED on Td with no blue particles present, red spreads as a branching process with mean offspring dλ/(λ + ρ). This is supercritical whenever ρ < λ(d − 1). It is easy to see that P(A) > 0 for such ρ, since with positive probability a child of the root becomes red, and then the red particle at the root dies. At this point, blue cannot spread, and red spreads like an unchased branching process. Since the red branching process is not supercritical for ρ ≥ λ(d − 1), we have P(A) = 0 for such ρ, which proves that ρe is as claimed.

Now we prove (i). Suppose that λ ∈ Λ.
Lemma 7.3.4 and Proposition 7.4.1 ensure that ρc > 0, and Lemma 7.5.1 implies ρc < λ(d − 1) = ρe. For 0 ≤ ρ < ρc, Lemma 7.4.5 implies that coexistence occurs with positive probability. For ρc ≤ ρ < ρe, Lemmas 7.4.6 and 7.4.5 imply that P(B) = 0, but the red branching process survives, so P(A) > 0. Thus, escape occurs. For ρ ≥ ρe, red cannot survive in CED with no blue, so extinction occurs.

We end by proving (ii). Suppose that λ ∉ Λ. By Lemma 7.3.4, we have M > d for all ρ ≥ 0. It then follows from Lemma 7.4.2 that E|B| < ∞, and so P(B) = 0 for all such ρ. Arguments similar to those in (i), considering only the behaviour of red after it separates from blue, show that the escape and extinction phases occur for 0 ≤ ρ < ρe and ρ ≥ ρe, respectively.

Proof of Theorem 7.1.2. Lemma 7.3.3 and Proposition 7.4.1 give that ρ > ρc implies M > d, which, by Lemma 7.4.2, implies that E|B| < ∞. This gives (i). The claim at (ii) holds because Lemma 7.3.4 and Proposition 7.4.1 imply that M = d when ρ = ρc. Lemma 7.3.6 gives that M = d implies g(d) = ∞. Since E|B| ≥ g(d), we obtain the claimed behaviour.

Proof of Theorem 7.1.3. Define the functions

fL(λ) = (1/4)(√((8d + 2)λ + λ² + 1) − 3λ − 3),
fU(λ) = (1/4)(√((32d + 2)λ + λ² + 1) − 3λ − 3).

We will prove that

fL(λ) ≤ ρc(λ) < fU(λ).  (7.31)

Upon establishing this, it follows immediately that for large enough d we have c√d ≤ ρc ≤ C√d for any c < √(λ/2) and C > √(2λ).

On a given path from the root, the probability that red advances one vertex and then blue advances is given by

p = λ/((1 + λ + ρ)(1 + λ + 2ρ)).  (7.32)

For each vertex v at distance k from the root, let Gv be the event that red and blue alternate advancing on the path from the root to the parent vertex of v, after which red advances to v and then does not spread to any children of v before the parent of v is coloured blue. Letting

c = (λ/(1 + λ + ρ)) · (1/(1 + dλ + 2ρ)),  (7.33)

it is easy to see that P(Gv) = c p^{k−1}. A renewal occurs at each v for which Gv occurs.
It is straightforward to verify that pd > 1 whenever ρ < fL(λ). Accordingly, we can choose K large enough so that c p^{K−1} d^K > 1, and thus the branching process of these renewals is infinite with positive probability, which implies that P(B) > 0.

To prove the upper bound, we observe that the monotonicity of the transition probabilities at (7.7) in j ensures that the maximal-probability jump chain of length 2k is the sawtooth path that alternates between height 1 and height 2. This path occurs as a living jump chain with probability p^k, with p as in (7.32). As this is the maximal-probability jump chain of the C_k many Dyck paths counted by Rk, we have g(d) ≤ ∑_{k=1}^{∞} C_k p^k d^k. The radius of convergence for the generating function of the Catalan numbers is 1/4, with convergence also holding at 1/4. Thus, when ρ is large enough that pd ≤ 1/4, we have g(d) < ∞. Lemma 7.3.6 and Lemma 7.4.2 then imply that E|B| < ∞, and so ρ > ρc. It is easily checked that this last inequality holds whenever ρ ≥ fU(λ). Thus, ρc < fU(λ).

7.6 Proof of Theorem 7.1.4

Recall our notation for continued fractions at (7.17). In this section we use the definition of aj at (7.20) and also define bj := aj d. The following lemma shows that the nested continued fractions in the definition of f at (7.18) are decreasing when z = M.

Lemma 7.6.1. K[aiM, a_{i+1}M, . . .] ≤ K[a_{i−1}M, aiM, . . .] for all i ≥ 1.

Proof. Let fi(x) = K[aix, a_{i+1}x, . . .]. Since the fi are analytic for x < M, we must have fi(x) < 1 for all x < M (otherwise f would have a singularity with modulus smaller than M). For any fixed n, we can then use the monotonicity of K when we change the entries of K one by one, together with the fact that aj < a_{j−1} for all j ≥ 1, to conclude that

K[aix, a_{i+1}x, . . . , anx] ≤ K[a_{i−1}x, aix, . . . , a_{n−1}x].  (7.34)

Taking the limit as n → ∞ gives fi(x) ≤ f_{i−1}(x) for all x < M.
Letting x ↑ M, these inequalities continue to hold.

Note that since f(x) has a pole at M and fi(M) ≤ f_{i−1}(M), we must have that f₀(M) = 1 and fi(M) < 1 for all i ≥ 1.

Proof of Theorem 7.1.4. When ρ = ρc, it follows from Proposition 7.4.1 and Theorem 7.3.5 that a pole of f occurs at z = d. Let K(λ, ρ) = K[b0, b1, . . .]. Due to Lemma 7.6.1 and the equality f(z) = (1 − K[a0z, a1z, . . .])^{−1}, the singularity at z = d occurs as a result of K(λ, ρc(λ)) = 1.

We can use a similar argument as in Corollary 7.3.1 to view f as a meromorphic function in the complex variables λ, ρ, and z. Thus, when fixing z = d and considering λ and ρ as nonnegative real numbers, the function K(λ, ρ) is real analytic. Moreover, since K is easily seen to be strictly decreasing in ρ, we have ∂K/∂ρ ≠ 0 at (λ, ρc(λ)). As K(λ, ρc(λ)) ≡ 1 and K is infinitely differentiable, it follows from the implicit function theorem that ρc(λ) is smooth.

7.7 Proof of Theorem 7.1.5

We begin this section by describing lower and upper bounds on C^{λ,ρ}_k. These are easier to analyze than C^{λ,ρ}_k and lend insight into the local behaviour of ρc. In particular, we obtain if-and-only-if conditions to have ρ < ρc (Lemma 7.7.1) and ρ > ρc (Lemma 7.7.3). We use these bounds to prove Theorem 7.1.5.

7.7.1 A lower bound on C^{λ,ρ}_k

The idea is to assign weight 0 to rise and fall steps above a fixed height m ≥ 1. Accordingly, we introduce the weights

û(j) = u(j) for j ≤ m, û(j) = 0 for j > m;  v̂(j) = v(j) for j ≤ m, v̂(j) = 0 for j > m.  (7.35)

Here u(j) and v(j) are as in (7.4). Let Ĉ^{λ,ρ}_k be the corresponding weighted Catalan numbers. Since û(j) ≤ u(j) and v̂(j) ≤ v(j) for all j, we have Ĉ^{λ,ρ}_k ≤ C^{λ,ρ}_k.

Let ĝm(z) = ∑_{k=0}^{∞} Ĉ^{λ,ρ}_k z^k. The truncation ensures that

ĝm(z) = K[1, a0z, . . . , amz]

is a rational function with radius of convergence M̂ ≥ M. Note that M̂ depends on λ, ρ, and m. For nonnegative real values x we have ĝm(x) < ĝ_{m+1}(x). It follows from the monotone convergence theorem that ĝm(x) → g(x) as m → ∞, and thus

lim_{m→∞} M̂ = M.  (7.36)

Lemma 7.7.1.
ρ < ρc if and only if K[bi, . . . , bm] > 1 for some m ≥ 1 and 0 ≤ i ≤ m.

Proof. Suppose that ρ < ρc. Lemma 7.3.3 and Proposition 7.4.1 imply that M < d. By (7.36), we have M̂ < d for a large enough choice of m. Since ĝm(x) is a rational function, its singularities occur wherever one of the partial denominators 1 − K[aix, . . . , amx] = 0. Let i∗ be the largest index such that there is x₀ < d with

1 − K[a_{i∗}x₀, . . . , amx₀] = 0.

Since i∗ is the maximum index for which the equation above holds, we have K[ajx, . . . , amx] < 1 for all j > i∗ and x ∈ (x₀, d). This ensures that K[a_{i∗}x, . . . , amx] does not have a singularity, and hence it is a strictly increasing function for x ∈ (x₀, d). Thus, K[b_{i∗}, . . . , bm] > 1.

Conversely, suppose that K[bi, . . . , bm] > 1 for some m ≥ 1 and 0 ≤ i ≤ m. This implies that ĝm has a singularity of modulus strictly less than d. Thus, M̂ < d. Since M ≤ M̂, Lemma 7.4.5 implies that ρ < ρc.

7.7.2 An upper bound on C^{λ,ρ}_k

Since u(j) and v(j) are decreasing in j, we obtain an upper bound by assigning weights u(m) and v(m) to all rise and fall steps above height m. More precisely, set

ũ(j) = u(j) for j < m, ũ(j) = u(m) for j ≥ m;  ṽ(j) = v(j) for j < m, ṽ(j) = v(m) for j ≥ m.  (7.37)

Let C̃^{λ,ρ}_k be the weighted Catalan number using the weights ũ and ṽ. Also, let g̃ and M̃ be the corresponding generating function and radius of convergence. Since u(j) ≤ ũ(j) and v(j) ≤ ṽ(j) for all j ≥ 0, we have C^{λ,ρ}_k ≤ C̃^{λ,ρ}_k for all k ≥ 0. It follows that the corresponding generating functions and radii of convergence satisfy g(x) ≤ g̃(x) and M ≥ M̃.

There is a finite procedure for bounding M̃. We say that the function K[1, c0, c1, . . . , ck] is good if all of the partial continued fractions are smaller than 1:

ck < 1,  K[c_{k−1}, ck] < 1,  · · · ,  K[c0, . . . , ck] < 1.  (7.38)

Define the continued fraction

Km(x) := K[1, a0x, . . . , a_{m−2}x, a_{m−1}x ψ(amx)],  (7.39)

where ψ(x) = K[1, x, x, . . .] is the generating function of the usual Catalan numbers.
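Lemma 7.7.1, together with the complementary criterion for ρ > ρc established in Lemma 7.7.3 below, yields the finite-runtime test of Theorem 7.1.5. A minimal numerical sketch of that test (Python; the function names and the cap m_max are our own choices, and floating-point arithmetic replaces the exact arithmetic a rigorous computation would use):

```python
from math import sqrt

def K(entries):
    """Finite continued fraction K[c0, ..., cn] = c0/(1 - c1/(1 - ...))."""
    t = 0.0
    for c in reversed(entries):
        t = c / (1.0 - t)
    return t

def classify(d, lam, rho, m_max=200):
    """Decide 'rho < rho_c' (Lemma 7.7.1) or 'rho > rho_c' (Lemma 7.7.3),
    returning None if neither certificate appears up to m_max."""
    a = lambda j: lam / ((1 + lam + (j + 1) * rho) * (1 + lam + (j + 2) * rho))
    psi = lambda x: (1 - sqrt(1 - 4 * x)) / (2 * x)  # Catalan generating function
    for m in range(1, m_max):
        bs = [d * a(j) for j in range(m + 1)]
        # Lemma 7.7.1: K[b_i, ..., b_m] > 1 for some i certifies rho < rho_c.
        if any(K(bs[i:]) > 1 for i in range(m + 1)):
            return "rho < rho_c"
        # Lemma 7.7.3: b_m < 1/4 and K_m(d) good certifies rho > rho_c.
        if bs[m] < 0.25:
            cs = bs[:m - 1] + [bs[m - 1] * psi(bs[m])]      # entries of K_m(d)
            if all(K(cs[i:]) < 1 for i in range(len(cs))):  # goodness, (7.38)
                return "rho > rho_c"
    return None

print(classify(2, 1.0, 0.01))  # rho < rho_c
print(classify(2, 1.0, 1.0))   # rho > rho_c
```

Scanning ρ over a grid and bisecting between differently classified values approximates ρc in the manner of the plot in Figure 7.1; as ρ approaches ρc the m required for either certificate grows without bound, which is why the sketch caps m at m_max.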
By (7.18), so long as $x < \tilde{M}$ we have
$$\tilde{g}(x) = K_m(x).$$
We now prove that when a partial continued fraction is good, it remains good in a neighbourhood.

Lemma 7.7.2. If $K[c] := K[c_0, c_1, \ldots, c_k]$ is good, then there exists an $\epsilon > 0$ such that $K[\tilde{c}] := K[\tilde{c}_0, \tilde{c}_1, \ldots, \tilde{c}_k]$ is good when $\tilde{c}_j \leq c_j(1 + \epsilon)$ for all $j = 0, \ldots, k$.

Proof. If $K[c]$ is good, then each partial fraction in (7.38) is $< 1$. Note that each of those partial fractions is an increasing function of the $c_j$ which appear in the fraction. Therefore, if $\tilde{c}_j \leq c_j$, then $K[\tilde{c}]$ is good. Since the inequalities are strict and $K[c]$ is continuous in each $c_j$, we can extend this to have $K[\tilde{c}]$ good for all $\tilde{c}_j \leq (1 + \epsilon) c_j$, where $\epsilon$ is chosen small enough so that none of the partial fractions in (7.38) equal $1$.

Lemma 7.7.3. $\rho > \rho_c$ if and only if $b_m < 1/4$ and $K_m(d)$ is good for some $m \geq 1$.

Proof. Suppose that $\rho > \rho_c$. Then Lemma 7.3.3 and Proposition 7.4.1 say that $g$ has radius of convergence $M > d$. For $|z| < M$, we write
$$g(z) = K[1, a_0 z, \ldots, a_{m-2} z, a_{m-1} z \, g_m(z)] \quad (7.40)$$
with $g_m(z) := K[1, a_m z, a_{m+1} z, \ldots]$ the tail of the continued fraction.

Because $M > d$ we have $g_m(d) < \infty$, and we can see, using a quick argument by contradiction, that $g(d)$ is good. Indeed, suppose that one of the partial fractions in (7.38) is strictly larger than $1$. Because these finite continued fractions are continuous in their inputs, this would imply that there is a singularity at some $|z| < d$, contradicting the fact that $M > d$.

As the continued fraction representation of $g(d)$ is good, we then apply Lemma 7.7.2 to conclude that there exists an $\epsilon > 0$ such that
$$K[1, b_0, \ldots, b_{m-2}, b_{m-1} g_m(d)(1 + \epsilon)] \quad (7.41)$$
is also good. Notice that $\psi(a_m x) \geq g_m(x) \geq 1$ by definition. Since $|a_m z| \to 0$ as $m \to \infty$, we can use the explicit formula for $\psi$ to directly verify that $\psi(a_m x) \to 1$ as $m \to \infty$. Choose $m$ large enough so that $b_m < 1/4$ and $\psi(b_m) < 1 + \epsilon$. Then $\psi(b_m) < (1 + \epsilon) g_m(d)$, and it follows from (7.41) that $K_m(d)$ is good.

Now, suppose that $b_m < 1/4$ and $K_m(d)$ is good for some $m \geq 1$.
Our definitions of $\tilde{u}$ and $\tilde{v}$ ensure that we can write
$$K[1, \tilde{a}_m x, \tilde{a}_{m+1} x, \ldots] = K[1, a_m x, a_m x, \ldots] = \psi(a_m x)$$
for all $x$ with $a_m x < 1/4$. Since $b_m = a_m d < 1/4$, this holds for all $x < d(1 + \epsilon')$ for some $\epsilon' > 0$. Similar reasoning to Corollary 7.3.1 then gives that $K_m(z)$ is a meromorphic function for $|z| \leq d(1 + \epsilon')$. Moreover, Theorem 7.3.5 ensures that the first pole occurs at the smallest $x$ for which some partial continued fraction of $K_m$ is equal to $1$. By Lemma 7.7.2, $K_m(d)$ being good implies that there exists $0 < \epsilon \leq \epsilon'$ such that $K_m$ is good for all $x \leq d(1 + \epsilon)$. Hence, there are no poles of $K_m$ within distance $d(1 + \epsilon)$ of the origin. It follows that $\tilde{g} = K_m$ for all such $x$, and thus $\tilde{M} \geq d(1 + \epsilon)$. Since $M \geq \tilde{M}$, we have $M > d$. This gives $\rho > \rho_c$ by Lemma 7.4.2 and Proposition 7.4.1.

7.7.3 A finite runtime algorithm

Proof of Theorem 7.1.5. Suppose we are given $\rho \neq \rho_c$. Lemmas 7.7.1 and 7.7.3 give a finite set of conditions to check whether $\rho < \rho_c$ or $\rho > \rho_c$, respectively. To decide which is true, we increase $m$, and the algorithm terminates once one of the conditions holds.

Chapter 8

Conclusions

We finish with a summary of the contributions in this work and further research directions.

8.1 Contributions

We divide this discussion between the two main topics in this dissertation.

8.1.1 Uniform spanning trees

In Chapter 5, we show that the scaling limit of the UST on $\mathbb{Z}^3$ exists in the space of measured and rooted spatial trees. Let $\mathcal{U}$ be the uniform spanning tree on $\mathbb{Z}^3$, let $d_\mathcal{U}$ denote the intrinsic metric on $\mathcal{U}$, let $\mu_\mathcal{U}$ be the counting measure on $\mathcal{U}$, and let $\phi_\mathcal{U} : \mathcal{U} \to \mathbb{R}^3$ be a continuous map defined as the identity on the vertices of $\mathcal{U}$ and by linear interpolation along its edges. Recall that we write $\beta$ for the growth exponent of the loop-erased random walk. Theorem 5.1.1 shows that, if $\mathbf{P}_\delta$ is the law of
$$(\mathcal{U}, \delta^\beta d_\mathcal{U}, \delta^3 \mu_\mathcal{U}, \delta \phi_\mathcal{U}), \quad (8.1)$$
then the collection $(\mathbf{P}_\delta)_{\delta \in (0,1]}$ is tight.
Moreover,
$$(\mathcal{U}, 2^{-\beta n} d_\mathcal{U}, 2^{-3n} \mu_\mathcal{U}, 2^{-n} \phi_\mathcal{U})$$
converges to $(\mathcal{T}, d_\mathcal{T}, \mu_\mathcal{T}, \phi_\mathcal{T})$ as $n \to \infty$, with respect to a type of Gromov-Hausdorff topology, as defined in Section 5.2. To our knowledge, this is the first result on scaling limits for three-dimensional uniform spanning trees.

The techniques used to prove Theorem 5.1.1 are also useful to study geometric properties of the limit space. If $(\mathcal{T}, d_\mathcal{T})$ is the scaling limit of uniform spanning trees on $\mathbb{Z}^3$, in Theorem 5.1.2 we prove that

(i) the tree $(\mathcal{T}, d_\mathcal{T})$ is one-ended;

(ii) the Hausdorff dimension of $(\mathcal{T}, d_\mathcal{T})$ is $d_f = 3/\beta$;

(iii) the Schramm distance is a well-defined metric on $\mathcal{T}$, and it is topologically equivalent to the intrinsic metric $d_\mathcal{T}$.

Theorem 5.1.2 also states matching upper and lower bounds for the measure of the intrinsic balls of $(\mathcal{T}, d_\mathcal{T})$. Theorem 5.1.1 has consequences for the simple random walk defined on the uniform spanning tree. In particular,

(i) we find exponents of the simple random walk in terms of $\beta$ (see Theorem 5.1.3 and Corollary 5.1.1);

(ii) the annealed law of the simple random walk on $\mathcal{U}$ is tight under rescaling (see Theorem 5.1.4); and

(iii) we obtain kernel estimates for any diffusion that arises as a scaling limit (see Theorem 5.1.4).

The general techniques of Chapter 5 also serve to study macroscopic properties of the uniform spanning tree. Let us denote by $\mathcal{U}_\delta$ the uniform spanning tree on $\delta \mathbb{Z}^3$. By a spanning cluster, we refer to a connected component of the uniform spanning tree intersecting two opposite faces of the unit cube $[0,1]^3$. Theorem 6.1.1 states that the number of spanning clusters of $\mathcal{U}_\delta$ is tight as $\delta \to 0$.

Theorem 6.1.1 verifies a conjecture by Benjamini [28]. For models below their critical dimension, we expect tightness of the number of spanning clusters, whereas it grows to infinity in high dimensions. This behaviour has been verified, in all dimensions, only for uniform spanning trees.
The affirmative answer that we find for USTs supports the corresponding conjecture for percolation on $\mathbb{Z}^d$ with $d < 6$ (see [3]).

8.1.2 Competitive growth process

Chapter 7 introduces a novel competitive growth process, chase-escape with death (CED). It is a natural generalization of predator-prey competition models.

Chase-escape with death is a competitive growth process on a graph $G$. It resembles the behaviour of predators and prey, including the possibility of prey dying for reasons unrelated to the predators. We define chase-escape with death as an interacting particle system with local state space $\{w, r, b, \dagger\}$. We call these states white, red, blue, and dead, respectively. We also refer to red and blue as particles. White states indicate vacant sites; predators, prey, and dead prey may occupy these sites as time advances. A red site indicates the presence of prey, blue is the colour for predators, and $\dagger$ corresponds to a dead site. In the initial configuration, a red particle occupies one vertex, a blue particle occupies a neighbouring vertex, and the rest of the graph is white. Red particles spread to adjacent white sites according to a Poisson process with rate $\lambda$. Meanwhile, blue particles overtake adjacent red particles at rate $1$. Red particles die, and turn to the dead state, at an independent rate $\rho$. Once a site has reached the dead state, it stays dead for the rest of the process.

Depending on the parameters $\lambda$ and $\rho$, the process reaches either a finite or an infinite number of vertices.

(i) Coexistence phase: both particle types occupy infinitely many sites with positive probability.

(ii) Extinction phase: both types occupy only finitely many sites almost surely.

(iii) Escape phase: red particles occupy infinitely many sites with positive probability, but blue particles reach a finite number of vertices almost surely.

[Figure 8.1 here: the $(\lambda, \rho)$ plane, with the regions labelled extinction, escape, and coexistence, and the points $\lambda_c^-$ and $\lambda_c^+$ marked on the $\lambda$-axis.]

Figure 8.1: The two-dimensional phase diagram with $d$ fixed.
The horizontal axis is for values of $\lambda$ and the vertical axis is for values of $\rho$. The dashed black line depicts $\rho_c$ and the solid black line $\rho_e$.

In Chapter 7, we analyse CED when the underlying graph $G$ is a $d$-ary tree $\mathbb{T}_d$. The main result is Theorem 7.1.1. It states that, for fixed $\lambda$ within an explicit interval $\Lambda$, there exists an interval of death rates $[\rho_c, \rho_e)$ where the escape phase occurs. Coexistence occurs for $\rho \in [0, \rho_c)$ and extinction for $\rho \geq \rho_e$. If $\lambda \notin \Lambda$, then the process transitions directly between the coexistence and extinction phases. We obtain a characterization of the phase diagram for the spreading parameter $\lambda$ and the death rate $\rho$ (see Figure 8.1).

In Theorem 7.1.2, we analyse the behaviour at criticality of the average growth of blue particles. If $\rho > \rho_c$, then the expected number of sites that are ever coloured blue is finite. If $\rho = \rho_c$, this expected value is infinite.

A closed formula is not available for $\rho_c$, but Theorems 7.1.3 and 7.1.4 describe the behaviour of this critical parameter. For large $d$, $\rho_c \asymp_\lambda \sqrt{d}$. Moreover, as a function of $\lambda \in \Lambda$, $\rho_c$ is smooth. We can approximate the graph of $\rho_c$ with the algorithm proposed in Theorem 7.1.5.

8.2 Future directions

The common obstacle to addressing open problems on three-dimensional uniform spanning trees and chase-escape models is a lack of tools. Three-dimensional models in statistical mechanics lack properties available in two dimensions, such as conformal invariance and planarity. The results in two dimensions guide conjectures, but the proofs will require new methods. The situation is similar for chase-escape: we have results for chase-escape and chase-escape with death on $\mathbb{N}$, on the trees $\mathbb{T}_d$, and on $\mathbb{N} \times \{0, 1\}$, but chase-escape on $\mathbb{Z}^2$ seems out of reach.

In general, the obstacles to solving a mathematical problem maintain the vitality of our science. At first, one may regret the failure of a known technique, but this momentary defeat demands creativity and a deeper understanding of the problem at hand.
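The CED dynamics summarized above (red spreads to white neighbours at rate $\lambda$, blue overtakes adjacent red at rate $1$, red dies at rate $\rho$) are easy to explore by simulation. The following Gillespie-style sketch on a depth-truncated $d$-ary tree is purely illustrative and not code from the thesis; the tree construction, the truncation depth, and the initial condition (blue at a vertex attached to the red root) are assumptions made for the example.

```python
import random

def build_tree(d, depth):
    # Vertex 0 (initially blue) is attached to vertex 1 (initially red);
    # vertex 1 is the root of a d-ary tree truncated at the given depth.
    adj = {0: [1], 1: [0]}
    frontier, next_id = [(1, 0)], 2
    while frontier:
        v, lvl = frontier.pop()
        if lvl < depth:
            for _ in range(d):
                adj[v].append(next_id)
                adj[next_id] = [v]
                frontier.append((next_id, lvl + 1))
                next_id += 1
    return adj

def simulate_ced(d, depth, lam, rho, seed=0):
    # Gillespie-style simulation: repeatedly list every possible transition
    # with its rate, then apply one chosen with probability proportional to
    # its rate, until no transition remains. Returns the final state map.
    rng = random.Random(seed)
    adj = build_tree(d, depth)
    state = {v: "w" for v in adj}
    state[0], state[1] = "b", "r"
    while True:
        moves = []  # (rate, vertex, new state)
        for v, s in state.items():
            if s != "r":
                continue
            if rho > 0:
                moves.append((rho, v, "†"))  # red dies
            blues = sum(1 for u in adj[v] if state[u] == "b")
            if blues:
                moves.append((float(blues), v, "b"))  # blue overtakes red
            for u in adj[v]:
                if state[u] == "w":
                    moves.append((lam, u, "r"))  # red spreads to white
        if not moves:
            return state
        x = rng.random() * sum(r for r, _, _ in moves)
        rate, v, s = moves[-1]
        for mv in moves:
            x -= mv[0]
            if x <= 0:
                rate, v, s = mv
                break
        state[v] = s
```

On a finite tree the simulation always terminates with no red sites left (every surviving red vertex would still have a pending transition), which gives a deterministic sanity check despite the randomness.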
Following this spirit, we conclude this thesis with a collection of problems that are not resolved in this work but stimulate further research.

8.2.1 Uniform spanning trees

One of the main tools to analyse uniform spanning trees and forests is Wilson's algorithm. As we have seen in this work, questions about uniform spanning trees are often answered once their analogues are solved for loop-erased random walks. The problems that we present here follow this direction and correspond to properties of the loop-erased random walk.

Characterization of the scaling limit of the LERW on $\mathbb{Z}^3$

Let $\mathcal{K}$ be the scaling limit of the loop-erased random walk on $\mathbb{Z}^3$ started at the origin and stopped at its first exit from the unit ball. Kozma proved the existence of $\mathcal{K}$ in [107], but a construction of $\mathcal{K}$ without passing through the limit is missing. In [148], Sapozhnikov and Shiraishi compared the traces of $\mathcal{K}$, Brownian motion, and a Brownian loop soup. Denote by $\mathcal{B}$ the trace of Brownian motion from the origin to the boundary of the unit ball. Let $\mathcal{BS}$ be the Brownian loop soup [120], and let $\mathcal{L}$ be the set of loops of $\mathcal{BS}$ intersecting $\mathcal{K}$. If we set
$$\hat{\mathcal{K}} = \mathcal{K} \cup \mathcal{L},$$
then $\mathcal{B}$ is equal in law to $\hat{\mathcal{K}}$ [148].

Question 8.2.1. Does the law of $\hat{\mathcal{K}}$ determine the law of $\mathcal{K}$?

Universality of the scaling limit of the LERW

We have a good understanding of the scaling limit of the loop-erased random walk in the planar case. For any planar graph where the simple random walk converges to Brownian motion, the LERW converges in the scaling limit to SLE(2) [171]. In contrast, we know little about the three-dimensional scaling limit.

Question 8.2.2. Let $G$ be a periodic three-dimensional graph different from $\mathbb{Z}^3$. Prove the existence of the scaling limit of the LERW on $G$.

8.2.2 Competitive growth models

We already presented an intriguing open question for chase-escape on $\mathbb{Z}^2$ in Conjecture 4.5.6. The simpler version of the conjecture of Kordzakhia and Martin is also open.

Question 8.2.3.
Does there exist a non-oriented graph $G$ for which the critical parameter of chase-escape satisfies $\lambda_c(G) < 1$?

A natural starting point for Question 8.2.3 is a binary tree with one added edge; such an edge creates a cycle. In a different direction, we can consider Question 8.2.3 on Cayley graphs, such as $\mathbb{Z} * \mathbb{Z} * (\mathbb{Z}/3)$ or $\mathbb{Z} * \mathbb{Z} * \mathbb{Z}^2$.

The birth-and-assassination (BA) process is the scaling limit of chase-escape on a $d$-ary tree. Aldous and Krebs defined the BA process in [8]. Bordenave proved this convergence, in the scaling limit, for the rumor-scotching process on trees [34]. The equivalence between the rumor-scotching process and chase-escape on $d$-ary trees extends the result to the latter. A natural question is the nature of the scaling limit on other graphs at criticality. Simulations in [164] indicate that such a limit exists for $\mathbb{Z}^2$ at $\lambda = 1/2$.

Question 8.2.4. Does the chase-escape process at criticality on a graph $G$ converge to a scaling limit? Does there exist a scaling limit for chase-escape with death?

Bibliography

[1] R. Abraham, J.-F. Delmas, and P. Hoscheit. A note on the Gromov-Hausdorff-Prokhorov distance between (locally) compact metric measure spaces. Electron. J. Probab., 18(14):1–21, 2013. doi: 10.1214/EJP.v18-2116. → pages 40, 48, 92
[2] D. Ahlberg. Existence and coexistence in first-passage percolation. ArXiv e-prints: 2001.04142. URL https://arxiv.org/abs/2001.04142. → page 69
[3] M. Aizenman. On the number of incipient spanning clusters. Nuclear Phys. B, 485(3):551–582, 1997. doi: 10.1016/S0550-3213(96)00626-8. → pages 200, 204, 243
[4] M. Aizenman, A. Burchard, C. M. Newman, and D. B. Wilson. Scaling limits for minimal and random spanning trees in two dimensions. Random Struct. Algorithms, 15(3-4):319–367, 1999. doi: 10.1002/(SICI)1098-2418(199910/12)15:3/4<319::AID-RSA8>3.0.CO;2-G. → page 46
[5] D. Aldous. The random walk construction of uniform spanning trees and uniform labelled trees. SIAM J. Discret. Math., 3(4):450–465, 1990. doi: 10.1137/0403039. → pages 35, 37, 38
[6] D.
Aldous. The continuum random tree. I. Ann. Probab., 19(1):1–28, 1991. doi: 10.1214/aop/1176990534. → pages 41, 42, 43
[7] D. Aldous. The continuum random tree. III. Ann. Probab., 21(1):248–289, 1993. doi: 10.1214/aop/1176989404. → page 42
[8] D. Aldous and W. B. Krebs. The birth-and-assassination process. Statistics & Probability Letters, 10(5):427–430, 1990. doi: 10.1016/0167-7152(90)90024-2. → pages 217, 246
[9] E. Allen and I. Gheorghiciuc. A weighted interpretation for the super Catalan numbers. J. Integer Seq., 17(9), 2014. → page 218
[10] O. Angel, D. A. Croydon, S. Hernandez-Torres, and D. Shiraishi. The number of spanning clusters of the uniform spanning tree in three dimensions. ArXiv e-prints: 2003.04548. URL https://arxiv.org/abs/2003.04548. → page v
[11] O. Angel, D. A. Croydon, S. Hernandez-Torres, and D. Shiraishi. Scaling limits of the three-dimensional uniform spanning tree and associated random walk. ArXiv e-prints: 2003.09055. URL https://arxiv.org/abs/2003.09055. → pages v, 201
[12] M. A. Aon, S. Cortassa, and B. O'Rourke. Percolation and criticality in a mitochondrial network. Proc. Natl. Acad. Sci. U. S. A., 101(13):4447–4452, 2004. doi: 10.1073/pnas.0307156101. → page 2
[13] S. Athreya, M. Eckhoff, and A. Winter. Brownian motion on R-trees. Trans. Amer. Math. Soc., 365(6):3115–3150, 2013. → pages 89, 195
[14] S. Athreya, W. Löhr, and A. Winter. Invariance principle for variable speed random walks on trees. Ann. Probab., 45(2):625–667, 2017. doi: 10.1214/15-AOP1071. → pages 88, 187
[15] S. R. Athreya and A. A. Járai. Infinite volume limit for the stationary distribution of Abelian sandpile models. Commun. Math. Phys., 249(1):197–213, 2004. doi: 10.1007/S00220-004-1080-0. → page 35
[16] A. Auffinger and M. Damron. Differentiability at the edge of the percolation cone and related results in first-passage percolation. Probab. Theory Relat. Fields, 156:193–227, 2013. doi: 10.1007/s00440-012-0425-4. → page 68
[17] A. Auffinger, M. Damron, and J. Hanson.
50 years of first-passage percolation, volume 68 of University Lecture Series. American Mathematical Soc., 2017. → pages 59, 63, 66, 67, 217
[18] M. T. Barlow. Loop Erased Walks and Uniform Spanning Trees, volume 34 of MSJ Memoirs, pages 1–32. The Mathematical Society of Japan, Tokyo, Japan, 2016. doi: 10.2969/msjmemoirs/03401C010. → page 24
[19] M. T. Barlow. Random walks and heat kernels on graphs, volume 438 of London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge, 2017. → pages 85, 195
[20] M. T. Barlow and R. Masson. Spectral dimension and random walks on the two-dimensional uniform spanning tree. Comm. Math. Phys., 305(1):23–57, 2011. doi: 10.1007/s00220-011-1251-8. → pages xi, 87, 142, 143, 146, 153
[21] M. T. Barlow, D. A. Croydon, and T. Kumagai. Quenched and averaged tails of the heat kernel of the two-dimensional uniform spanning tree. In preparation. → pages xi, 87, 88
[22] M. T. Barlow, T. Coulhon, and T. Kumagai. Characterization of sub-Gaussian heat kernel estimates on strongly recurrent graphs. Comm. Pure Appl. Math., 58(12):1642–1677, 2005. doi: 10.1002/cpa.20091. → page 194
[23] M. T. Barlow, A. A. Járai, T. Kumagai, and G. Slade. Random walk on the incipient infinite cluster for oriented percolation in high dimensions. Comm. Math. Phys., 278(2):385–431, 2008. doi: 10.1007/s00220-007-0410-4. → page 86
[24] M. T. Barlow, D. A. Croydon, and T. Kumagai. Subsequential scaling limits of simple random walk on the two-dimensional uniform spanning tree. Ann. Probab., 45(1):4–55, 2017. doi: 10.1214/15-AOP1030. → pages xi, 47, 78, 79, 84, 85, 87, 88, 89, 91, 92, 187, 188, 191, 193, 195, 196, 201
[25] E. Beckman, K. Cook, N. Eikmeier, S. Hernandez-Torres, and M. Junge. Chase-escape with death on trees. ArXiv e-prints: 1909.01722. URL https://arxiv.org/abs/1909.01722. → page v
[26] A. A. Belavin, A. M. Polyakov, and A. B. Zamolodchikov. Infinite conformal symmetry in two-dimensional quantum field theory. Nucl. Physics, Sect. B, 241(2):333–380, 1984.
doi: 10.1016/0550-3213(84)90052-X. → page 4
[27] C. Beneš. Some estimates for planar random walk and Brownian motion. ArXiv e-prints: 0611127. URL https://arxiv.org/abs/math/0611127. → page 16
[28] I. Benjamini. Large scale degrees and the number of spanning clusters for the uniform spanning tree. In Perplexing problems in probability, volume 44 of Progr. Probab., pages 175–183. Birkhäuser Boston, 1999. doi: 10.1007/978-1-4612-2168-5. → pages 198, 199, 200, 242
[29] I. Benjamini, R. Lyons, Y. Peres, and O. Schramm. Uniform spanning forests. Ann. Probab., 29(1):1–65, 2001. doi: 10.1214/aop/1008956321. → pages 33, 35, 37, 122, 199, 203
[30] N. Berestycki and J. Norris. Lectures on Schramm-Loewner Evolution. 2014. → pages 27, 28, 29
[31] N. Berestycki, B. Laslier, and G. Ray. Dimers and imaginary geometry. Ann. Probab., 48(1):1–52, 2020. doi: 10.1214/18-aop1326. → page 35
[32] A. Beurling. Études sur un problème de majoration. Imprimerie Almquist and Wiksell, 1933. → page 16
[33] P. Billingsley. Convergence of Probability Measures. Wiley-Interscience, 2nd edition, 1999. → pages 180, 181, 182
[34] C. Bordenave. On the birth-and-assassination process, with an application to scotching a rumor in a network. Electron. J. Probab., 13(66):2014–2030, 2008. doi: 10.1214/EJP.v13-573. → pages 217, 246
[35] C. Bordenave. Extinction probability and total progeny of predator-prey dynamics on infinite trees. Electron. J. Probab., 19(20):1–33, 2014. doi: 10.1214/EJP.v19-2361. → pages 70, 74, 214, 215
[36] C. Borgs, J. Chayes, H. Kesten, and J. Spencer. Uniform boundedness of critical crossing probabilities implies hyperscaling. Random Struct. Alg., 15(3-4):368–413, 1999. doi: 10.1002/(SICI)1098-2418(199910/12)15:3/4<368::AID-RSA9>3.0.CO;2-B. → page 200
[37] S. R. Broadbent and J. M. Hammersley. Percolation processes: I. Crystals and mazes. Math. Proc. Cambridge Philos. Soc., 1957. doi: 10.1017/S0305004100032680. → page 3
[38] A. Broder. Generating random spanning trees.
pages 442–447, 1989. doi: 10.1109/SFCS.1989.63516. → pages 35, 37, 38
[39] D. L. Burkholder, D. Daley, H. Kesten, P. Ney, F. Spitzer, J. M. Hammersley, and J. F. C. Kingman. Discussion on Professor Kingman's paper. Ann. Probab., 1(6):900–909, 1973. doi: 10.1214/aop/1176996799. → page 59
[40] R. Burton and R. Pemantle. Local characteristics, entropy and limit theorems for spanning trees and domino tilings via transfer-impedances. Ann. Probab., 21(3):1329–1371, 1993. doi: 10.1214/aop/1176989121. → page 35
[41] F. Camia and C. M. Newman. Two-dimensional critical percolation: The full scaling limit. Commun. Math. Phys., 268(1):1–38, 2006. doi: 10.1007/s00220-006-0086-1. → page 27
[42] F. Camia and C. M. Newman. Critical percolation exploration path and SLE(6): A proof of convergence. Probab. Theory Relat. Fields, 139(3-4):473–519, 2007. doi: 10.1007/s00440-006-0049-7. → page 27
[43] J. Cardy. The number of incipient spanning clusters in two-dimensional percolation. J. Phys. A, 31(5):L105–L110, 1998. doi: 10.1088/0305-4470/31/5/003. → page 200
[44] S. H. Chan. Infinite-step stationarity of rotor walk and the wired spanning forest. ArXiv e-prints: 1909.13195. URL https://arxiv.org/abs/1909.13195. → page 35
[45] S. H. Chan. Rotor walks on transient graphs and the wired spanning forest. SIAM J. Discret. Math., 33(4):2369–2393, 2019. doi: 10.1137/18M1217139. → page 35
[46] D. Chelkak, H. Duminil-Copin, C. Hongler, A. Kemppainen, and S. Smirnov. Convergence of Ising interfaces to Schramm's SLE curves. Comptes Rendus Math., 352(2):157–161, 2014. doi: 10.1016/j.crma.2013.12.002. → page 26
[47] E. A. Codling, M. J. Plank, and S. Benhamou. Random walk models in biology. Journal of the Royal Society, Interface, 5(25):813–834, August 2008. doi: 10.1098/rsif.2008.0014. → page 2
[48] J. T. Cox and R. Durrett. Some limit theorems for percolation processes with necessary and sufficient conditions. Ann. Probab., 9(4):583–603, 1981. doi: 10.1214/aop/1176994364. → pages 59, 64
[49] D. A.
Croydon. Heat kernel fluctuations for a resistance form with non-uniform volume growth. Proc. Lond. Math. Soc. (3), 94(3):672–694, 2007. doi: 10.1112/plms/pdl025. → pages 89, 194, 196
[50] D. A. Croydon. Convergence of simple random walks on random discrete trees to Brownian motion on the continuum random tree. Ann. Inst. Henri Poincaré Probab. Stat., 44(6):987–1019, 2008. doi: 10.1214/07-AIHP153. → page 88
[51] D. A. Croydon. Volume growth and heat kernel estimates for the continuum random tree. Probab. Theory Related Fields, 140(1-2):207–238, 2008. doi: 10.1007/s00440-007-0063-4. → pages 89, 196
[52] D. A. Croydon. Hausdorff measure of arcs and Brownian motion on Brownian spatial trees. Ann. Probab., 37(3):946–978, 2009. doi: 10.1214/08-AOP425. → page 88
[53] D. A. Croydon. Scaling limits for simple random walks on random ordered graph trees. Adv. in Appl. Probab., 42(2):528–558, 2010. doi: 10.1017/S0001867800004183.
[54] D. A. Croydon. Scaling limits of stochastic processes associated with resistance forms. Ann. Inst. Henri Poincaré Probab. Stat., 54(4):1939–1968, 2018. doi: 10.1214/17-AIHP861. → pages 88, 196
[55] M. Deijfen and O. Häggström. The initial configuration is irrelevant for the possibility of mutual unbounded growth in the two-type Richardson model. Comb. Probab. Comput., 15(3):345–353, 2006. doi: 10.1017/S0963548305007315. → pages 62, 66
[56] M. Deijfen and O. Häggström. Analysis and Stochastics of Growth Processes and Interface Models, chapter The pleasures and pains of studying the two-type Richardson model, pages 39–55. Oxford University Press, 2008. doi: 10.1093/acprof:oso/9780199239252.001.0001. → page 217
[57] M. Deijfen and O. Häggström. The two-type Richardson model with unbounded initial configurations. Ann. Appl. Probab., 17(5-6):1639–1656, 2007. doi: 10.1214/07-AAP440. → page 62
[58] A. Depperschmidt, A. Greven, and P. Pfaffelhuber. Marked metric measure spaces. Electron. Commun. Probab., 16:174–188, 2011. doi: 10.1214/ECP.v16-1615.
→ pages 184, 185, 186, 187
[59] P. G. Doyle and J. L. Snell. Random walks and electric networks, volume 22 of Carus Mathematical Monographs. Mathematical Association of America, 1984. doi: 10.4169/j.ctt5hh804. → page 85
[60] R. Durrett. Random graph dynamics, volume 20 of Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 2007. doi: 10.1017/CBO9780511546594. → page 5
[61] R. Durrett. Probability: theory and examples, volume 49 of Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 5th edition, 2019. doi: 10.1017/9781108591034. → page 13
[62] R. Durrett, M. Junge, and S. Tang. Coexistence in chase-escape. Electron. Commun. Probab., 25(22):1–14, 2020. doi: 10.1214/20-ECP302. → pages 73, 74, 75, 217
[63] M. Eden. A two-dimensional growth process. In Proc. 4th Berkeley Symposium on Mathematics Statistics and Probability, volume 4, pages 223–239, 1961. URL https://projecteuclid.org/euclid.bsmsp/1200512888. → page 59
[64] G. A. Edgar. Integral, probability, and fractal measures. Springer-Verlag, New York, 1998. doi: 10.1007/978-1-4757-2958-0. → page 191
[65] A. Einstein. Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung. Ann. Phys., 322(8):549–560, 1905. URL https://einsteinpapers.press.princeton.edu/vol2-doc/. → page 2
[66] S. N. Ethier and T. G. Kurtz. Markov processes: characterization and convergence. Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons Inc., New York, 1986. → page 20
[67] S. N. Evans, J. Pitman, and A. Winter. Rayleigh processes, real trees, and root growth with re-grafting. Probab. Theory Related Fields, 134(1):81–126, 2006. doi: 10.1007/s00440-004-0411-6. → pages 40, 187
[68] P. Flajolet and F. Guillemin.
The formal theory of birth-and-death processes, lattice path combinatorics and continued fractions. Advances in Applied Probability, 32(3):750–778, 2000. doi: 10.1017/S0001867800010247. → page 218
[69] P. Flajolet and R. Sedgewick. Analytic Combinatorics. Cambridge University Press, 2009. doi: 10.1017/CBO9780511801655. → pages 228, 229
[70] P. J. Flory. Principles of polymer chemistry. Cornell Univ. Press, 1953. doi: 10.1126/science.119.3095.555-a. → page 2
[71] P. J. Flory. Statistical mechanics of chain molecules. Interscience Publishers, 1969. → page 2
[72] O. Garet and R. Marchand. Coexistence in two-type first-passage percolation models. Ann. Appl. Probab., 15(1A):298–330, 2005. doi: 10.1214/105051604000000503. → pages 61, 67
[73] I. P. Goulden and D. M. Jackson. Combinatorial enumeration. Wiley-Interscience Series in Discrete Mathematics. Wiley, 1983. → page 224
[74] A. Greven, P. Pfaffelhuber, and A. Winter. Convergence in distribution of random metric measure spaces (Λ-coalescent measure trees). Probab. Theory Related Fields, 145(1-2):285–322, 2009. doi: 10.1007/s00440-008-0169-3. → pages 184, 187
[75] D. Griffeath. Frank Spitzer's pioneering work on interacting particle systems. Ann. Probab., 21(2):608–621, 1993. doi: 10.1214/aop/1176989258. → page 3
[76] G. Grimmett. The Random-Cluster Model, volume 333 of Grundlehren der mathematischen Wissenschaften. Springer-Verlag Berlin Heidelberg, 2006. doi: 10.1007/978-3-540-32891-9. → page 35
[77] G. Grimmett. Probability on Graphs: Random Processes on Graphs and Lattices. Institute of Mathematical Statistics Textbooks. Cambridge University Press, 2010. doi: 10.1017/CBO9780511762550. → page 35
[78] O. Häggström. Random-cluster measures and uniform spanning trees. Stoch. Process. Their Appl., 59(2):267–275, 1995. doi: 10.1016/0304-4149(95)00042-6. → page 35
[79] O. Häggström and R. Pemantle. First passage percolation and a model for competing spatial growth.
Journal of Applied Probability, 35(3):683–692, 1998. doi: 10.1017/S0021900200016338. → pages 60, 61, 62, 67
[80] O. Häggström and R. Pemantle. Absence of mutual unbounded growth for almost all parameter values in the two-type Richardson model. Stoch. Process. Their Appl., 90(2):207–222, 2000. doi: 10.1016/S0304-4149(00)00042-9. → pages 61, 66
[81] J. M. Hammersley and D. J. A. Welsh. First-passage percolation, subadditive processes, stochastic networks, and generalized renewal theory. In J. Neyman and L. M. Le Cam, editors, Bernoulli 1713 Bayes 1763 Laplace 1813, pages 61–110. Springer, Berlin, Heidelberg, 1965. doi: 10.1007/978-3-642-99884-3_7. → pages 3, 63
[82] A. Hinsen, B. Jahnel, E. Cali, and J.-P. Wary. Phase transitions for chase-escape models on Gilbert graphs. ArXiv e-prints: 1911.02622, 2019. URL https://arxiv.org/abs/1911.02622. → page 217
[83] C. Hoffman. Coexistence for Richardson type competing spatial growth models. Ann. Appl. Probab., 15(1B):739–747, 2005. doi: 10.1214/105051604000000729. → pages 61, 67
[84] C. Hoffman. Geodesics in first passage percolation. The Annals of Applied Probability, 18(5):1944–1969, 2008. doi: 10.1214/07-AAP510. → pages 63, 68
[85] R. v. d. Hofstad. Random Graphs and Complex Networks, volume 1 of Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 2016. doi: 10.1017/9781316779422. → pages 5, 6
[86] N. Holden and X. Sun. SLE as a mating of trees in Euclidean geometry. Comm. Math. Phys., 364(1):171–201, 2018. doi: 10.1007/s00220-018-3149-1. → pages 48, 81, 201
[87] A. E. Holroyd, L. Levine, K. Mészáros, Y. Peres, J. Propp, and D. B. Wilson. Chip-firing and rotor-routing on directed graphs. In In and Out of Equilibrium 2, volume 60 of Progress in Probability, pages 331–364. Birkhäuser Basel, 2008. doi: 10.1007/978-3-7643-8786-0_17. → page 35
[88] T. Hutchcroft. Interlacements and the wired uniform spanning forest. Ann. Probab., 46(2):1170–1200, 2018. doi: 10.1214/17-AOP1203. → pages 35, 38
[89] T. Hutchcroft.
Universality of high-dimensional spanning forests and sandpiles. Probab. Theory Relat. Fields, 176(1-2):533–597, 2020. doi: 10.1007/s00440-019-00923-3. → page 35
[90] A. A. Járai. The uniform spanning tree and related models, 2009. URL https://people.bath.ac.uk/aj276/teaching/USF/USFnotes.pdf. → page 35
[91] A. A. Járai and F. Redig. Infinite volume limit of the Abelian sandpile model in dimensions d ≥ 3. Probab. Theory Relat. Fields, 141(1):181–212, 2008. doi: 10.1007/s00440-007-0083-0. → page 35
[92] A. A. Járai and N. Werning. Minimal configurations and sandpile measures. J. Theor. Probab., 27(1):153–167, 2014. doi: 10.1007/s10959-012-0446-z. → page 35
[93] O. Kallenberg. Foundations of modern probability. Probability and its Applications. Springer-Verlag, 2nd edition, 2002. doi: 10.1007/b98838. → page 183
[94] I. Karatzas and S. E. Shreve. Brownian motion and stochastic calculus, volume 113 of Graduate Texts in Mathematics. Springer-Verlag, New York, 2nd edition, 1991. → page 179
[95] R. Kenyon. The asymptotic determinant of the discrete Laplacian. Acta Math., 185(2):239–286, 2000. doi: 10.1007/BF02392811. → pages xi, 24, 35, 80, 87
[96] H. Kesten. Aspects of first passage percolation. In École d'été de probabilités de Saint-Flour, XIV—1984, volume 1180 of Lecture Notes in Math., pages 125–264. Springer, Berlin, 1986. doi: 10.1007/BFb0074919. → page 66
[97] H. Kesten. Hitting probabilities of random walks on $\mathbb{Z}^d$. Stochastic Process. Appl., 25(2):165–184, 1987. → pages 17, 82
[98] J. Kigami. Harmonic calculus on limits of networks and its application to dendrites. J. Funct. Anal., 128(1):48–86, 1995. doi: 10.1006/jfan.1995.1023. → pages 89, 195
[99] J. Kigami. Resistance forms, quasisymmetric maps and heat kernel estimates. Mem. Amer. Math. Soc., 216(1015):vi+132, 2012. → page 195
[100] J. Kingman. Poisson Processes. Oxford Science Publications. Clarendon Press, 1993. → page 57
[101] G. Kirchhoff.
Ueber die Auflösung der Gleichungen, auf welche man bei der Untersuchung der linearen Vertheilung galvanischer Ströme geführt wird. Ann. Phys. Chem., 148(12):497–508, 1847. doi: 10.1002/andp.18471481202. → page 35
[102] A. Klenke. Probability Theory. Springer-Verlag London, 2008. doi: 10.1007/978-1-84800-048-3. → pages 20, 51, 57
[103] S. Kliem and W. Löhr. Existence of mark functions in marked metric measure spaces. Electron. J. Probab., 20(73):1–24, 2015. doi: 10.1214/EJP.v20-3969. → pages 185, 187
[104] G. Kordzakhia. The escape model on a homogeneous tree. Electron. Commun. Probab., 10:113–124, 2005. doi: 10.1214/ECP.v10-1140. → pages 70, 73, 217
[105] I. Kortchemski. A predator-prey SIR type dynamics on large complete graphs with three phase transitions. Stoch. Process. Their Appl., 125(3):886–917, 2015. doi: 10.1016/j.spa.2014.10.005. → pages 72, 217
[106] I. Kortchemski. Predator-prey dynamics on infinite trees: A branching random walk approach. Journal of Theoretical Probability, 29(3):1027–1046, 2016. doi: 10.1007/s10959-015-0603-2. → pages 74, 217
[107] G. Kozma. The scaling limit of loop-erased random walk in three dimensions. Acta Math., 199(1):29–152, 2007. doi: 10.1007/s11511-007-0018-8. → pages 30, 78, 81, 104, 109, 158, 245
[108] T. Kumagai. Heat kernel estimates and parabolic Harnack inequalities on graphs and resistance forms. Publ. Res. Inst. Math. Sci., 40(3):793–818, 2004. doi: 10.2977/prims/1145475493. → page 194
[109] T. Kumagai and J. Misumi. Heat kernel estimates for strongly recurrent random walk on random media. J. Theoret. Probab., 21(4):910–935, 2008. doi: 10.1007/s10959-008-0183-5. → pages 86, 87, 194
[110] G. F. Lawler. A self-avoiding random walk. Duke Math. J., 47(3):655–693, 1980. doi: 10.1215/S0012-7094-80-04741-9. → pages 21, 31, 32
[111] G. F. Lawler. Gaussian behavior of loop-erased self-avoiding random walk in four dimensions. Duke Math. J., 53(1):249–269, 1986. doi: 10.1215/S0012-7094-86-05317-2. → page 31
[112] G. F. Lawler.
Intersections of random walks with random sets. Isr.J. Math., 65(2):113–132, 1989. doi: 10.1007/BF02764856. → page 17[113] G. F. Lawler. Intersections of random walks. Probability and itsApplications. Birkhäuser Boston, 1991. doi:10.1007/978-1-4757-2137-9. → pages14, 15, 16, 17, 22, 31, 32, 115, 116, 120, 123, 128, 142, 144, 146, 189, 206[114] G. F. Lawler. Loop-erased random walk. In Perplexing problems inprobability, volume 44 of Progr. Probab., pages 197–217. BirkhäuserBoston, 1999. doi: 10.1007/978-1-4612-2168-5. → pages 24, 25, 80, 203[115] G. F. Lawler. Conformally Invariant Processes in the Plane,volume 14 of Mathematical Surveys and Monographs. AmericanMathematical Society, 2005. → pages 16, 78[116] G. F. Lawler. The probability that planar loop-erased random walkuses a given edge. Electron. Commun. Probab., 19, 2014. doi:10.1214/ECP.v19-2908. → pages 24, 47[117] G. F. Lawler and V. Limic. Random walk: a modern introduction,volume 123 of Cambridge studies in advanced mathematics.Cambridge University Press, 2010. ISBN 978-0-521-51918-2. doi:10.1017/CBO9780511750854. → pages 14, 15, 20, 169[118] G. F. Lawler and J. Perlman. Random Walks, Random Fields, andDisordered Systems, chapter Loop Measures and the Gaussian FreeField, pages 211–235. Springer International Publishing, Cham,2015. ISBN 978-3-319-19339-7. doi: 10.1007/978-3-319-19339-7_5.→ page 35258[119] G. F. Lawler and F. Viklund. Convergence of loop-erased randomwalk in the natural parametrization. ArXiv e-prints: 1603.05203.URL https://arxiv.org/abs/1603.05203. → page 30[120] G. F. Lawler and W. Werner. The Brownian loop soup. Probab.Theory Relat. Fields, 128(4):565–588, 2004. doi:10.1007/s00440-003-0319-6. → page 245[121] G. F. Lawler, O. Schramm, and W. Werner. Conformal restriction:The chordal case. Journal of the American Mathematical Society, 16(4):917–955, 2003. doi: 10.1090/S0894-0347-03-00430-2. → page 27[122] G. F. Lawler, O. Schramm, and W. Werner. 
Conformal Invariance ofPlanar Loop-Erased Random Walks and Uniform Spanning Trees.Ann. Probab., 32(1):939–995, 2004. doi: 10.1214/aop/1079021469.→ pages 26, 27, 30, 46, 47, 78, 81[123] G. F. Lawler, X. Sun, and W. Wu. Four-Dimensional Loop-ErasedRandom Walk. Ann. Probab., 47(6):3866–3910, 2019. doi:10.1214/19-AOP1349. → page 35[124] J.-F. Le Gall. Random trees and applications. Probab. Surv., 2:245–311, 2005. doi: 10.1214/154957805100000140. → page 42[125] J.-F. Le Gall. Random real trees. Ann. Fac. Sci. Toulouse Math. (6),15(1):35–62, 2006. → page 91[126] D. A. Levin, Y. Peres, and E. L. Wilmer. Markov chains and mixingtimes. American Mathematical Society, Providence, RI, 2009. Witha chapter by James G. Propp and David B. Wilson. → page 85[127] X. Li and D. Shiraishi. Convergence of three-dimensional loop-erasedrandom walk in the natural parametrization. ArXiv e-prints:1811.11685. URL https://arxiv.org/abs/1811.11685. → pages78, 81, 105, 109, 112, 117, 139, 150, 158, 168, 170, 171, 190[128] X. Li and D. Shiraishi. One-point function estimates for loop-erasedrandom walk in three dimensions. Electron. J. Probab., 24:Paper No.111, 46, 2019. doi: 10.1214/19-EJP361. → pages78, 104, 114, 118, 128, 130, 131, 133, 135[129] T. M. Liggett. Interacting Particle Systems. Springer-Verlag BerlinHeidelberg, 1985. doi: 10.1007/b138374. → pages 51, 53, 54, 55, 57259[130] T. M. Liggett. Stochastic Interacting Systems: Contact, Voter andExclusion Processes. Springer, Berlin, Heidelberg, 1999. doi:10.1007/978-3-662-03990-8. → pages 54, 55[131] K. Löwner. Untersuchungen über schlichte konforme Abbildungendes Einheitskreises. I. Math. Ann., 89:103–121, 1923. doi:10.1007/BF01448091. → page 28[132] R. Lyons and Y. Peres. Probability on trees and networks, volume 42of Cambridge Series in Statistical and Probabilistic Mathematics.Cambridge University Press, New York, 2016. doi:10.1017/9781316672815. → pages 33, 34, 35, 36, 85[133] R. Lyons, Y. Peres, and O. Schramm. 
Markov chain intersectionsand the loop-erased walk. Ann. Inst. Henri Poincaré Probab. Stat.,39:779–791, 2003. doi: 10.1016/S0246-0203(03)00033-5. → page 107[134] S. N. Majumdar and D. Dhar. Equivalence between the Abeliansandpile model and the q → 0 limit of the Potts model. Phys. AStat. Mech. its Appl., 1992. doi: 10.1016/0378-4371(92)90447-X. →page 35[135] J. F. Marckert and G. Miermont. The CRT is the scaling limit ofunordered binary trees. Random Struct. Algorithms, 38(4):467–501,2011. doi: 10.1002/rsa.20332. → page 43[136] R. Masson. The growth exponent for planar loop-erased randomwalk. Electron. J. Probab., 14:no. 36, 1012–1073, 2009. doi:10.1214/EJP.v14-651. → pages 23, 25, 47, 103, 104, 117, 118, 154[137] M. Mézard and A. Montanari. Information, Physics, andComputation. Oxford University Press, 2009. ISBN 9780191718755.doi: 10.1093/acprof:oso/9780198570837.001.0001. → page 2[138] P. Michaeli, A. Nachmias, and M. Shalev. The diameter of uniformspanning trees in high dimensions. ArXiv e-prints: 1911.12319. URLhttps://arxiv.org/abs/1911.12319. → pages 44, 45[139] R. A. Minlos. R.L. Dobrushin - one of the founders of modernmathematical physics. Russ. Math. Surv., 1997. ISSN 0036-0279.doi: 10.1070/rm1997v052n02abeh001772. → page 3260[140] P. Mörters and Y. Peres. Brownian motion, volume 30 of Cambridgeseries in statistical and probabilistic mathematics. CambridgeUniversity Press, 2010. doi: 10.1017/CBO9780511750489. → page 18[141] C. M. Newman. A surface view of first-passage percolation. InProceedings of the International Congress of Mathematicians (Zürich,1994), volume 1,2, page 10171023. Birkhäuser, 1995. → page 67[142] M. E. Newman. Spread of epidemic disease on networks. Phys. Rev.E - Stat. Physics, Plasmas, Fluids, Relat. Interdiscip. Top., 66(1 Pt2), 2002. ISSN 1063651X. doi: 10.1103/PhysRevE.66.016128. → page2[143] R. Pemantle. Choosing a spanning tree for the integer latticeuniformly. Ann. Probab., 19(4):1559–1574, 1991. doi:10.1214/aop/1176990223. 
→ pages 33, 34, 78, 79, 82, 199, 203[144] Y. Peres and D. Revelle. Scaling limits of the uniform spanning treeand loop-erased random walk on finite graphs. ArXiv e-prints:0410430. URL https://arxiv.org/abs/math/0410430. → page 45[145] J. Perrin. Mouvement brownien et réalité moléculaire. Ann. Chim.Phys., 18, 1909. → page 2[146] A. Postnikov and B. E. Sagan. What power of two divides a weightedcatalan number? Journal of Combinatorial Theory, Series A, 114(5):970 – 977, 2007. doi: 10.1016/j.jcta.2006.09.007. → page 218[147] D. Richardson. Random growth in a tessellation. In MathematicalProceedings of the Cambridge Philosophical Society, volume 74, pages515–528. Cambridge University Press, 1973. doi:10.1017/S0305004100077288. → pages 58, 59, 60, 212, 217[148] A. Sapozhnikov and D. Shiraishi. On Brownian motion, simplepaths, and loops. Probab. Theory Relat. Fields, 172(3):615–662,2018. doi: 10.1007/s00440-017-0817-6. → pages25, 31, 82, 104, 106, 107, 109, 111, 123, 124, 128, 132, 134, 139, 141, 145, 155, 205, 208, 245[149] O. Schramm. Scaling limits of loop-erased random walks anduniform spanning trees. Israel J. Math., 118:221–288, 2000. doi:10.1007/BF02803524. → pages 26, 29, 39, 46, 47, 78, 80, 81, 83, 84, 85, 95261[150] O. Schramm and S. Sheffield. Harmonic explorer and its convergenceto sle 4. Ann. Probab., 33(6):2127–2148, 2005. doi:10.1214/009117905000000477. → page 27[151] J. Schweinsberg. The loop-erased random walk and the uniformspanning tree on the four-dimensional discrete torus. Probab. TheoryRelat. Fields, 144(3):319–370, 2009. doi: 10.1007/s00440-008-0149-7.→ page 45[152] S. Shader and M. G. Liu. Weighted Catalan numbers and theirdivisibility properties. PhD. Thesis, 2015. → page 218[153] Z. Shi. Random Walks and Trees. In ESAIM Proc., volume 31, pages1–39, 2011. doi: 10.1051/proc/2011002. → page 50[154] Z. Shi. Branching Random Walks. École d’Été de Probabilités deSaint-Flour. Springer International Publishing, 2012. doi:10.1007/978-3-319-25372-5. 
→ page 50[155] D. Shiraishi. Growth exponent for loop-erased random walk in threedimensions. Ann. Probab., 46(2):687–774, 2018. doi:10.1214/16-AOP1165. → pages24, 78, 80, 104, 109, 113, 114, 115, 118, 128, 130, 131, 133, 135, 145[156] D. Shiraishi. Hausdorff dimension of the scaling limit of loop-erasedrandom walk in three dimensions. Ann. Inst. H. Poincaré Probab.Statist., 55(2):791–834, 2019. doi: 10.1214/18-AIHP899. → page 31[157] G. Slade. Probabilistic models of critical phenomena. In T. Gowers,J. Barrow-Green, and I. Leader, editors, The Princeton Companionto Mathematics. Princeton University Press, 2008. → pages 5, 6[158] A. Sly. Computational transition at the uniqueness threshold. InProc. - Annu. IEEE Symp. Found. Comput. Sci. FOCS, 2010. doi:10.1109/FOCS.2010.34. → page 2[159] S. Smirnov. Critical percolation in the plane. ArXiv e-prints:0909.4499. URL https://arxiv.org/abs/0909.4499. → page 27[160] S. Smirnov. Critical percolation in the plane: conformal invariance,Cardy’s formula, scaling limits. C. R. Acad. Sci. Paris, 333(3):239–244, 2001. doi: 10.1016/S0764-4442(01)01991-7. → pages 27, 78262[161] R. P. Stanley. Catalan Numbers. Cambridge: Cambridge UniversityPress, 2015. doi: 10.1017/CBO9781139871495. → page 220[162] J. Swart. A Course in Interacting Particle Systems. ArXiv e-prints:1703.10007v2, 2020. URL https://arxiv.org/abs/1703.10007v2. →pages 55, 57[163] A. Szabó and R. M. Merks. Cellular Potts modeling of tumorgrowth, tumor invasion, and tumor evolution. Front. Oncol., 3(87),2013. ISSN 2234943X. doi: 10.3389/fonc.2013.00087. → page 2[164] S. Tang, G. Kordzakhia, and S. P. Lalley. Phase Transition for theChase-Escape Model on 2D Lattices. ArXiv e-prints: 1807.08387,2018. → pages 75, 217, 246[165] H. Wall. Analytic Theory of Continued Fractions. Dover Books onMathematics. Dover Publications, 2018. ISBN 9780486830445. →page 225[166] W. Werner. Beurling’s projection theorem via one-dimensionalBrownian motion. Math. Proc. Cambridge Philos. 
Soc., 119(4):729–738, 1996. doi: 10.1017/S0305004100074557. → page 16[167] W. Werner and E. Powell. Lecture notes on the gaussian free field.ArXiv e-prints: 2004.04720. URL https://arxiv.org/abs/2004.04720.→ page 35[168] J. C. Wierman and W. Reh. On conjectures in first passagepercolation theory. Ann. Probab., 6(3):388–397, 1978. doi:10.1214/aop/1176995525. → page 66[169] D. B. Wilson. Generating random spanning trees more quickly thanthe cover time. In Proceedings of the Twenty-eighth Annual ACMSymposium on the Theory of Computing (Philadelphia, PA, 1996),pages 296–303. ACM, New York, 1996. doi: 10.1145/237814.237880.→ pages 35, 37, 78, 81, 122, 155, 203[170] D. B. Wilson. Dimension of loop-erased random walk in threedimensions. Phys. Rev. E, 82(6):062102, 2010. doi:10.1103/PhysRevE.82.062102. → pages xi, 80, 87[171] A. Yadin and A. Yehudayoff. Loop-erased random walk and Poissonkernel on planar graphs. Ann. Probab., 39(4):1243–1285, 2011. doi:10.1214/10-AOP579. → pages 30, 246263
