Geometry of random spaces: geodesics and susceptibility. Kolesnik, Brett Thomas. UBC Theses and Dissertations, 2017.

Geometry of Random Spaces: Geodesics and Susceptibility

by Brett Thomas Kolesnik

B.Sc. Double Hons., University of Manitoba, 2011
M.Sc., University of British Columbia, 2012

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate and Postdoctoral Studies (Mathematics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

August 2017

© Brett Thomas Kolesnik 2017

Abstract

This thesis investigates the geometry of random spaces.

Geodesics in Random Surfaces

The Brownian map, developed by Le Gall [98] and Miermont [108], is a random metric space arising as the scaling limit of random planar maps. Its construction involves Aldous' [7] continuum random tree, the canonical random real tree, and Brownian motion, an almost surely continuous but nowhere differentiable path. As a result, the Brownian map is a non-differentiable surface with a fractal geometry that is much closer to that of a real tree than a smooth surface.

A key feature, observed by Le Gall [97], is the confluence of geodesics phenomenon, which states that any two geodesics to a typical point coalesce before reaching the point. We show that, in fact, geodesics to anywhere near a typical point pass through a common confluence point. This leads to information about special points that had remained largely mysterious.

Our main result is the almost everywhere continuity and uniform stability of the cut locus of the Brownian map. We also classify geodesic networks that are dense and find the Hausdorff dimension of the set of pairs that are joined by each type of network.

Susceptibility of Random Graphs

Given a graph G = (V, E) and an initial set I of active vertices in V, the r-neighbour bootstrap percolation process, attributed to Chalupa, Leath and Reich [50], is a cellular automaton that evolves by activating vertices with at least r active neighbours. If all vertices in V are activated eventually, we say that I is contagious. A graph with a small contagious set is called susceptible.

Bootstrap percolation has been analyzed on several deterministic graphs, such as grids, lattices and trees. More recent work studies the model on random graphs, such as the fundamental Erdős–Rényi [60] graph G_{n,p}.

We identify thresholds for the susceptibility of G_{n,p}, refining approximations by Feige, Krivelevich and Reichman [62]. Along the way, we obtain large deviation estimates that complement central limit theorems of Janson, Łuczak, Turova and Vallier [84]. We also study graph bootstrap percolation, a variation due to Bollobás [39]. Our main result identifies the sharp threshold for K4-percolation, solving a problem of Balogh, Bollobás and Morris [24].

Lay Summary

We analyze two aspects of random spaces.

Geodesics in Random Surfaces

The Brownian map is a random fractal surface, identified by Le Gall and Miermont. We study geodesics, which are shortest paths between points. By strengthening an observation of Le Gall, that geodesics to a typical point in the Brownian map coalesce before reaching the point, we reveal several properties of its rich geometry. In particular, we analyze points from which geodesics leave in different directions but arrive at the same destination.

Susceptibility of Random Graphs

Bootstrap percolation is a cellular automaton, attributed to Chalupa et al., that models an evolving network. A network is called susceptible if a small part can influence the whole network.
We identify thresholds for the susceptibility of a fundamental random network, called the Erdős–Rényi graph, in which all elements are equally likely to be directly connected. This refines approximations by Balogh et al. and Feige et al.

Preface

This thesis is based on articles [11, 12, 13, 91], as introduced in Part I.

Stability of geodesics in the Brownian map [13] is a joint work with Omer Angel (the author's advisor) and Grégory Miermont. This article is the topic of Part II and is to appear in the Annals of Probability.

Part III is based on a series of articles. Thresholds for contagious sets in random graphs [12] and Minimal contagious sets in random graphs [11] are joint works with Omer Angel. Article [12] is to appear in the Annals of Applied Probability and [11] is currently under review for publication. Sharp threshold for K4-percolation [91] is an independent work of the author, currently under review for publication.

Table of Contents

Abstract
Lay Summary
Preface
Table of Contents
List of Figures
Acknowledgments
Dedication

I Introduction

1 Brownian map
1.1 Our objective
1.2 Preliminaries
1.2.1 Planar maps
1.2.2 CVS-bijection
1.2.3 Gromov-Hausdorff metric
1.2.4 Aldous' continuum random tree
1.3 Construction
1.3.1 CVS-bijection, extended
1.4 Basic properties
1.4.1 Fractal, spherical geometry
1.4.2 Volume and re-rooting invariance
1.5 Geodesics
1.5.1 Simple geodesics
1.5.2 Tree of cut-points
1.5.3 Tree of geodesics
1.5.4 Confluence of geodesics
1.5.5 Regularity of geodesics
1.6 Uniqueness
1.7 Related models
1.7.1 Local limits
1.7.2 Brownian surfaces
1.8 Our results
1.8.1 Confluence points
1.8.2 Cut loci
1.8.3 Geodesic networks

2 Bootstrap percolation
2.1 Our objective
2.2 Main questions and terminology
2.2.1 Critical thresholds
2.2.2 Minimal contagious sets
2.3 Brief survey
2.3.1 First results
2.3.2 More results
2.4 Random graphs
2.4.1 Bootstrap percolation on G_{n,p}
2.4.1.1 Random contagious sets
2.4.1.2 Small contagious sets
2.4.2 Binomial chains
2.4.2.1 Activation by small sets
2.4.2.2 Criticality of ℓ_r
2.5 Graph bootstrap percolation
2.5.1 Clique processes
2.6 Our results
2.6.1 Susceptibility
2.6.2 Minimal contagious sets
2.6.3 K4-percolation

II Geodesics in Random Surfaces

3 Stability of Geodesics in the Brownian Map
3.1 Overview
3.2 Background and main results
3.2.1 Geodesic nets
3.2.2 Cut loci
3.2.3 Geodesic networks
3.2.4 Confluence points
3.3 Preliminaries
3.3.1 The Brownian map
3.3.2 Simple geodesics
3.3.3 Confluence at the root
3.3.4 Dimensions
3.4 Confluence near the root
3.5 Proof of main results
3.5.1 Typical points
3.5.2 Geodesic nets
3.5.3 Cut loci
3.5.3.1 Weak cut loci
3.5.3.2 Strong cut loci
3.5.4 Geodesic stars
3.5.5 Geodesic networks
3.6 Related models

III Susceptibility of Random Graphs

4 Thresholds for Contagious Sets in Random Graphs
4.1 Overview
4.2 Background and main results
4.2.1 Bootstrap percolation
4.2.2 Graph bootstrap percolation and seeds
4.2.3 A non-homogeneous branching process
4.2.4 Outline of the proof
4.3 Lower bound for pc(n, r)
4.3.1 Small susceptible graphs
4.3.2 Upper bounds for susceptible graphs
4.3.3 Susceptible subgraphs of G_{n,p}
4.3.4 Sub-critical bounds
4.4 Upper bound for pc(n, r)
4.4.1 Triangle-free susceptible graphs
4.4.2 r̂-bootstrap percolation on G_{n,p}
4.4.3 Super-critical bounds
4.4.4 r̂-percolations are almost independent
4.4.5 Terminal r-percolations
4.4.6 Almost sure susceptibility
4.5 Time dependent branching processes
4.6 Graph bootstrap percolation
4.7 Technical lemmas
4.7.1 Proof of Claim 4.3.6
4.7.2 Proof of Claim 4.3.10
4.7.3 Proof of Claim 4.3.11
4.7.4 Proof of Claim 4.4.7
4.7.5 Proof of Lemma 4.4.11

5 Minimal Contagious Sets in Random Graphs
5.1 Overview
5.2 Background and main results
5.2.1 Thresholds for contagious sets
5.3 Binomial chains
5.4 Optimal activation trajectories

6 Sharp Threshold for K4-Percolation
6.1 Overview
6.2 Background and main results
6.2.1 Seed edges
6.2.2 Outline
6.3 Clique processes
6.3.1 Consequences
6.4 Percolating graphs
6.4.1 Basic estimates
6.4.2 Sharper estimates
6.5 Percolating subgraphs with small cores
6.6 No percolating subgraphs with large cores

Bibliography

List of Figures

1.1 Three planar maps
1.2 The CVS-bijection
1.3 Contour function of a tree
1.4 Confluence of geodesics
1.5 A confluence point
1.6 A normal geodesic network
1.7 Dense geodesic networks
3.1 A normal (j, k)-network
3.2 Classification of dense networks
3.3 Confluence near the root
3.4 The geodesic [uℓ, vℓ] − γ
3.5 Illustrated proof of Lemma 3.4.4
3.6 Asymmetry of the strong cut locus
3.7 Illustrated proof of Theorem 3.2.16
5.1 Optimal activation trajectories
6.1 The smallest irreducible percolating 3-core

Acknowledgments

First of all, I would like to express my sincerest gratitude to my advisor, Omer Angel, for guiding me through the journey that has resulted in this thesis. His boundless insight, originality and enthusiasm, together with his welcoming presence and generosity of spirit, are a tremendous benefit to all of those around him. Omer, I cannot thank you enough.

Many thanks are also due to Ed Perkins and Lior Silberman for serving on my supervisory committee, Martin Barlow and Harry Joe for serving as university examiners, Martin Tanner for chairing the defence, and Uriel Feige for serving as external examiner. I thank all involved for their time and careful consideration.

I gratefully acknowledge the generous support I received from Killam Trusts, NSERC of Canada and UBC. These scholarships and awards allowed me to focus to an extent that would not have been possible otherwise.

One of the most important moments during my graduate studies was the 2012 PIMS-Mprime Summer School in Probability. This event was held at UBC at the end of my M.Sc. studies, and led me to make the decision to continue on towards a Ph.D. I thank Omer Angel and Grégory Miermont for their inspiring lectures, and the summer school organizers, Omer Angel, Ed Perkins and Gordon Slade.

The year after this summer school, I visited Grégory for six weeks at ENS Lyon, while Omer was on sabbatical in Paris. I thank Omer, Grégory and UBC for support with travelling to Lyon, and Grégory for his kind hospitality. During this time, we began working on a project related to Grégory's summer school course that would eventually become a paper, appearing in Part II of this thesis. It has been a great pleasure to be involved in this area of research. I am truly grateful for the guidance and attention I received from Omer and Grégory during this visit, and in the years that followed.

In 2015 I had the opportunity to visit the Isaac Newton Institute for a special semester, titled Random Geometry. I thank Nathanaël Berestycki for hosting me and Killam Trusts and NSERC for their support. I benefited from many seminars, reading groups and the presence of leading researchers. I presented the results in Part II to experts in the field, and made considerable progress towards the work that appears in Part III of this thesis.

Next, I would like to thank the UBC Probability Group for creating such a lively environment. Thank you to the organizers of our excellent departmental seminar and the yearly Northwest Probability Seminars. In particular, I have appreciated the reading groups and topics courses organized over the years by Omer Angel, Brian Marcus and Asaf Nachmias.
I have also become friends with several graduate students, postdocs and visitors in the department, such as Roland Bauerschmidt, Cindy Blois, Raimundo Briceño, Owen Daniel, Kyle Hambrook, Tyler Helmuth, Tim Hulshof, Balázs Ráth, Gourab Ray, Saifuddin Syed, Júlia Komjáthy and Daniel Valesin, who are no longer at UBC and I miss, and others such as Richárd Balka, Saraí Hernández-Torres, Thomas Hughes, Tom Hutchcroft, Mathav Murugan, Ben Wallace, Zichun Ye and Qingsan Zhu, who are still here with me at UBC.

I would like to say thank you to James Currie, Michael Doob, Brad Johnson, Mohammad Jafari Jozani, Alexandre Leblanc, Andrew Morris, Fereidoun Ghahramani, Tommy Kucera and Nina Zorboska, from whom I took several very enjoyable courses in mathematics and statistics as an undergraduate student at the University of Manitoba and the University of Winnipeg. Alexandre Leblanc's course in probability theory persuaded me to study statistics. Nina Zorboska's real analysis course introduced me to the beauty of mathematics. I thank Nina Zorboska and Brad Johnson for supervising me as an NSERC undergraduate researcher during the summers of 2009 and 2010. I am also very grateful to Nick Wormald for hosting me as an NSERC undergraduate researcher during the summer of 2011 at the University of Waterloo. His continued work with me during my M.Sc. studies resulted in our joint paper (not appearing here) and my Erdős number 2.

Lastly, I would like to thank my family. To my wonderful wife, Scarlet, you have been with me every step of the way. Thank you for making me laugh, making me coffee, giving me perspective in difficult times, and loving me so well. To my grandparents, Dan and Sheena, thank you for your visits and phone calls, your offers to help me with my math, and for telling me you are proud of me. To the memory of my grandparents, Dave and Freda, keeping you in my heart gives me strength every day. To my brother, Jordan, thank you for our conversations and your creative spirit. It has been difficult living so far apart during these years, but you are still my best friend. Finally, it is to my parents, Tom and Lori, that I dedicate this thesis, for their unwavering support in its many forms. You are an endless source of love, faith and inspiration. I am so very blessed to have you in my life.

To my parents

Part I

Introduction

Chapter 1

Brownian map

A planar map (of the sphere) is a discretization of the 2-dimensional sphere S^2. More specifically, a planar map is a proper embedding (without crossing edges) of a finite, connected planar graph into S^2, viewed up to orientation-preserving homeomorphisms. The faces of the map are the connected components of S^2 minus its vertices and edges, inherited from its underlying graph structure.

It is natural to ask what types of objects are obtained in this way, as the number of faces tends to infinity, that is, as the maps become increasingly large. Only very recently has the answer to this question been revealed.

A planar map can be treated as a finite metric space, endowed with the distance from the underlying graph. Since we view planar maps up to orientation-preserving homeomorphisms (that is, a planar map is a certain equivalence class), there is only a finite number of planar maps of size n (with n faces, say). A triangulation/quadrangulation is a map in which all faces are delimited by three/four edges.

The seminal work of Angel and Schramm (2003) [14] was the first to obtain a limiting object associated with random planar maps.
In [14] the infinite volume, local limit (see Section 1.7.1) of uniformly random triangulations is identified, and named the uniform infinite planar triangulation, or UIPT. The UIPQ, corresponding to random quadrangulations, was later developed by Krikun [93]. Although these random infinite lattices can be embedded in R^2, their geometry is very different from that of the Euclidean lattice Z^2. For instance, as shown by Benjamini and Curien [30], simple random walk on the UIPQ is sub-diffusive, typically moving a distance of at most order n^{1/3} (up to a logarithmic factor) after n steps (whereas on Z^d, d ≥ 1, the typical displacement is of order n^{1/2}). Very loosely speaking, these random lattices are "discrete fractals."

In a pioneering study, Chassaing and Schaeffer (2004) [51] showed that typical distances in a uniformly random quadrangulation of size n are of order n^{1/4}. With this result in mind, Schramm (2007) [121], in his ICM survey, posed the question of identifying the scaling limit of random planar maps (scaling the metric by n^{-1/4}) with respect to the Gromov-Hausdorff topology (see Section 1.2.3). This program was recently completed in independent works by Le Gall (2013) [98] and Miermont (2013) [108]. The resulting limit is called the Brownian map, due to the fact that its construction involves Brownian motion, the canonical uniformly random path. Unlike the UIPQ, the Brownian map is a finite volume limit, which is homeomorphic to S^2, as shown by Le Gall and Paulin [100] and Miermont [106]. That being said, like that of the UIPQ, its geometry is far from Euclidean. Indeed, as proved by Le Gall [96], the Brownian map is of Hausdorff dimension 4. It thus exhibits a random, fractal, spherical geometry. In fact, these objects are closely related. Roughly speaking, by scaling distances in the UIPQ by λ, and letting λ → 0, an infinite volume (homeomorphic to R^2) variant of the Brownian map, called the Brownian plane, is obtained (see Section 1.7.2).

Brownian motion is a universal object, in the sense that it is the scaling limit of many different types of random discrete paths. Similarly, the Brownian map is a universal object of interest. To quote Le Gall (2014) [95], in his ICM survey:

"Just as Brownian motion can be viewed as a purely random continuous curve, the Brownian map seems to be the right model for a purely random surface."

Although the Brownian map has been identified, and several of its fundamental properties are understood, much of its intricate structure has yet to be uncovered. As Le Gall [95] states,

"the Brownian map remains a mysterious object in many respects."

As continues to become increasingly clear, the Brownian map is an important new addition to the modern theory of probability. Notably, a very recent work (and the last in a long series of articles) by Sheffield and Miller (2016) [109] establishes the equivalence of the Brownian map with the √(8/3)-Liouville quantum gravity sphere, another candidate for a canonical uniformly random spherical surface. This result is important, since prior to their work the two objects had their own separate advantages: the Brownian map with its natural metric structure, and the √(8/3)-Liouville quantum gravity sphere with its natural conformal structure. The results of [109] unify the theories.
To quote Sheffield and Miller [109], it is now the case that

"any theorem about the Brownian map is henceforth also a theorem about the √(8/3)-Liouville quantum gravity sphere and vice-versa."

1.1 Our objective

In Part II of this thesis, we develop properties of geodesics, or shortest paths, in the Brownian map, and so also equivalently, in the √(8/3)-Liouville quantum gravity sphere. Such paths give insight into the geometry of these fundamental random spaces. See Section 1.8 below for our results.

1.2 Preliminaries

1.2.1 Planar maps

A rooted planar map (of the sphere) is an equivalence class of finite, connected, planar graphs with a distinguished oriented edge, embedded in S^2, and viewed up to orientation-preserving homeomorphisms. The oriented edge determines the orientation of the entire map. The initial vertex of the oriented edge is called the root of the map. See Figure 1.1.

The faces of a planar map are the connected components of S^2 minus the edges and vertices of the map, given by its underlying graph structure. Each face of a planar map is delimited by a finite number of edges, which we call its degree. In particular, a q-angulation is a planar map for which all faces are of degree q. We say triangulation and quadrangulation in the case of q = 3 and q = 4, respectively.

Figure 1.1: Three rooted planar maps: M1 and M2 are equivalent to each other, but not to M3. The underlying graphs are all isomorphic.

1.2.2 CVS-bijection

Let (T, τ) be a rooted plane tree, where T = (V, E) and τ ∈ V. We say that (T, τ) is well-labelled by a function ℓ : V → N if ℓ(τ) = 1 and |ℓ(a) − ℓ(b)| ≤ 1, for all (a, b) ∈ E. We call (T, τ, ℓ) a well-labelled, rooted plane tree.

The CVS-bijection, due to Cori and Vauquelin [52] and Schaeffer [119], identifies the set of well-labelled, rooted plane trees with n edges with the set of rooted quadrangulations of the sphere with n faces. One direction of this bijection is as follows. Given (T, τ, ℓ), an extra (disconnected) vertex ρ is added and labelled 0. To begin, an oriented edge is drawn from ρ to τ. Then, proceeding along the corners (see Section 1.5.1) of T via the clockwise, contour-ordered path around T, an edge is drawn from a corner of T to the next corner with a smaller label, if such a corner exists. If there is no such corner (that is, in the case of a corner with label 1), an edge is instead drawn to ρ. Once the process is complete, and the edges in E are removed (while maintaining the vertices in V), a rooted quadrangulation (Q_T, ρ) on V ∪ {ρ} is obtained. The oriented edge from ρ to τ determines the orientation of Q_T. See Figure 1.2.

Figure 1.2: From left to right: a well-labelled plane tree (T, τ, ℓ), the CVS-bijection, and the corresponding rooted quadrangulation of the sphere (Q_T, ρ). One geodesic to ρ drawn by the CVS-bijection is highlighted.

We observe that the successive edges drawn by the CVS-bijection from a corner of a vertex v ∈ V to ρ form a path in Q_T from v to ρ, in such a way that the labels of the vertices visited by the path equal the graph distance (in Q_T) to ρ. Hence, for each v ∈ V with k corners, the CVS-bijection specifies k distinguished geodesics from v to the root of the map ρ, leaving from each of its corners.
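The CVS rule is explicit enough to check mechanically on a small example. The following Python sketch is an illustrative toy (the tree, the variable names and the helper structure are all hypothetical, not taken from the thesis): it encodes a small well-labelled plane tree by its clockwise contour sequence of corners, applies the rule above to draw the edges of Q_T, and verifies by breadth-first search that the graph distance from each vertex to ρ equals its label, as observed above.

```python
from collections import deque

# Corners of a small well-labelled plane tree (root label 1), listed in
# clockwise contour order as (vertex, label) pairs; this toy tree has
# vertices 0 (root), 1, 2, 3 with labels 1, 2, 1, 1 and edges 0-1, 1-2, 0-3.
corners = [(0, 1), (1, 2), (2, 1), (1, 2), (0, 1), (3, 1)]
rho = "rho"  # the extra vertex, labelled 0

# CVS rule: each corner is joined to the next corner (cyclically) with a
# strictly smaller label, or to rho if its label is 1.
edges = []
m = len(corners)
for i, (v, lab) in enumerate(corners):
    if lab == 1:
        edges.append((v, rho))
    else:
        for j in range(1, m):
            w, wlab = corners[(i + j) % m]
            if wlab < lab:
                edges.append((v, w))
                break

# Adjacency sets of Q_T (tree edges are discarded; multiple edges of the
# quadrangulation are collapsed here, which does not affect distances).
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

# BFS distances from rho; they should equal the labels.
dist = {rho: 0}
queue = deque([rho])
while queue:
    u = queue.popleft()
    for w in adj[u]:
        if w not in dist:
            dist[w] = dist[u] + 1
            queue.append(w)

labels = dict(corners)
assert all(dist[v] == labels[v] for v in labels), "distance != label"
print(dist)
```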
We note that (more complicated) bijections for other classes of planar maps are also available, see for instance the work of Bouttier, Di Francesco and Guitter [42].

In closing we mention that Tutte [130] (motivated by the four colour problem) developed enumerative methods for planar maps. In particular, by the quadratic method of [130], it follows that there are 2 · 3^n Cat_n/(n + 2) quadrangulations of S^2 with n faces, where Cat_n is the nth Catalan number. In light of the CVS-bijection, we see very clearly why this is the case: there are Cat_n trees with n edges, 2 ways to orient the resulting map (either the root edge is oriented from ρ to τ, or vice versa), and almost 3^n choices for the labels of the vertices in V − {τ}. Since the labels cannot be less than 1 = ℓ(τ), there are in fact only 3^n/(n + 2) ways to "well-label" (T, τ).

1.2.3 Gromov-Hausdorff metric

Let d_H denote the Hausdorff distance on non-empty, compact subsets of a metric space (S, θ). The Gromov-Hausdorff metric d_GH (see Edwards [59] and Gromov [73, 74]) on the set K of all isometry classes of compact metric spaces is obtained as follows: for two compact metric spaces S1 = (S1, θ1) and S2 = (S2, θ2) (representing isometry classes of K), let d_GH(S1, S2) denote the infimum of d_H(ξ1(S1), ξ2(S2)) over all isometric embeddings ξ_i : S_i → S, i = 1, 2, into metric spaces (S, θ).

The space (K, d_GH) is a Polish space (see for instance Burago, Burago and Ivanov [45]). A sequence of random, compact metric spaces Sn = (Sn, θn) converges in distribution to a random, compact metric space S = (S, θ) with respect to the Gromov-Hausdorff topology on K if almost surely S and the sequence Sn can be constructed so that d_GH(Sn, S) → 0 as n → ∞.

1.2.4 Aldous' continuum random tree

Recall that a real tree, or R-tree, is a geodesic metric space in which each pair of points is connected by a unique simple path. Given a non-negative, continuous function h on [0, 1] satisfying h(0) = 0 = h(1), an R-tree (T_h, d_h) is obtained as follows. Put

d*_h(s, t) = h(s) + h(t) − 2 min_{s∧t ≤ u ≤ s∨t} h(u),   s, t ∈ [0, 1],

and let d_h denote the quotient distance induced by d*_h on T_h = [0, 1]/{d*_h = 0}. We call h a contour function of the tree T_h. Note that two points s < t are identified above if h(s) = h(t) and h(u) > h(s) for all u ∈ (s, t). Informally, we think of constructing T_h by gluing underneath the graph of h, and then pressing it together from the sides, see Figure 1.3.

Figure 1.3: A tree T_h and its contour function h.

Aldous' [7] continuum random tree, or CRT, denoted (T_e, d_e), is formed as above by taking h to be a normalized Brownian excursion e = {e_t : 0 ≤ t ≤ 1}. (Recall that e is Brownian motion conditioned to equal 0 at t ∈ {0, 1} and to be positive in (0, 1), see Revuz and Yor [117, Chapter XII].) Hence the relationship between the CRT and a Brownian excursion is analogous to that of a discrete tree and its contour function. We note that the CRT is the Gromov-Hausdorff scaling limit of uniformly random plane trees (see for instance Le Gall and Miermont [99, Theorem 3.9]). Moreover, it is the canonical uniformly random R-tree, being also the limit of many other classes of trees.
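The contour-function construction is also easy to experiment with numerically. The sketch below is an illustrative approximation (the function names and grid size are arbitrary choices, not from the thesis): it evaluates the pseudo-distance d*_h on a grid, and generates an approximate normalized Brownian excursion by applying Vervaat's transform to a discrete Brownian bridge, which is one standard simulation recipe.

```python
import numpy as np

def tree_pseudo_dist(h, i, j):
    """d*_h(s,t) = h(s) + h(t) - 2 min of h over [s^t, svt], on grid indices."""
    lo, hi = min(i, j), max(i, j)
    return h[lo] + h[hi] - 2.0 * h[lo:hi + 1].min()

def brownian_excursion(n, rng):
    """Approximate a normalized Brownian excursion on n+1 grid points via
    Vervaat's transform of a discrete Brownian bridge."""
    steps = rng.normal(0.0, 1.0, size=n) / np.sqrt(n)
    walk = np.concatenate([[0.0], np.cumsum(steps)])
    bridge = walk - np.linspace(0.0, 1.0, n + 1) * walk[-1]
    m = bridge.argmin()
    # cyclic shift so that the minimum becomes the starting (and ending) point
    return np.concatenate([bridge[m:], bridge[1:m + 1]]) - bridge[m]

rng = np.random.default_rng(0)
e = brownian_excursion(2000, rng)       # contour function of an approximate CRT
print(tree_pseudo_dist(e, 100, 1500))   # approximate d_e between two points of T_e
```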
1.3 Construction

The scaling limit of uniform planar maps has recently been identified.

Theorem 1.3.1 (Le Gall [98] and Miermont [108]). Let Q_n be a uniformly random quadrangulation of S^2 with n faces. Let (M_n, d_n) denote the metric space obtained from Q_n by scaling its graph distance d_{Q_n} by (8n/9)^{-1/4}. Then, as n → ∞, (M_n, d_n) converges in distribution with respect to the Gromov-Hausdorff topology to a random metric space (M, d), called the Brownian map.

Recall (see Section 1.2.3) that this convergence means that almost surely we can construct (M, d) and the sequence (M_n, d_n) such that (M_n, d_n) converges to (M, d) in Gromov-Hausdorff distance. See Section 1.6 for a discussion of the proof of Theorem 1.3.1.

In [98] it is shown that, in fact, the same convergence in distribution holds (up to an unimportant adjustment of the constant factor (8/9)^{-1/4} in the scaling term) for uniformly random triangulations and 2k-angulations, for all k > 1. Since then, the Brownian map has also been identified as the limit of several other types of maps, see for example [1, 2, 29, 36, 98]. In this sense, the Brownian map is a universal limiting object, in a similar way as Brownian motion is a universal limit of random paths and Aldous' CRT is a universal limit of random trees. Moreover, both of these fundamental objects play a leading role in its construction, which we now describe.

The general idea in the construction of the Brownian map is to extend (one direction of) the CVS-bijection to the CRT, in order to obtain a uniformly random, spherical metric space. Recall (see Section 1.2.2) that the CVS-bijection identifies random planar maps (specifically, quadrangulations) of the sphere with well-labelled plane trees. Thus, we require a method of "well-labelling" the CRT. Since the labels in a well-labelled tree increase or decrease by at most 1 along edges, the natural analogue for a label process in this continuum setting is Brownian motion. We proceed as follows.

The main ingredients are a normalized Brownian excursion e = {e_t : t ∈ [0, 1]}, a random R-tree (T_e, d_e) indexed by e, and a Brownian label process Z = {Z_a : a ∈ T_e}. More specifically, define T_e = [0, 1]/{d_e = 0} as the quotient under the pseudo-distance

d_e(s, t) = e_s + e_t − 2 min_{s∧t ≤ u ≤ s∨t} e_u,   s, t ∈ [0, 1],

and equip it with the quotient distance, again denoted by d_e. The random metric space (T_e, d_e) is Aldous' continuum random tree, or CRT (as discussed in Section 1.2.4). Let p_e : [0, 1] → T_e denote the canonical projection. Conditionally given e, Z is a centred Gaussian process satisfying E[(Z_s − Z_t)^2] = d_e(s, t) for all s, t ∈ [0, 1]. The random process Z is the so-called head of the Brownian snake (see [99]). Note that Z is constant on each equivalence class p_e^{-1}(a), a ∈ T_e. In this sense, Z is Brownian motion indexed by the CRT.

Analogously to the definition of d_e, we put

d_Z(s, t) = Z_s + Z_t − 2 max{ inf_{u ∈ [s,t]} Z_u, inf_{u ∈ [t,s]} Z_u },   s, t ∈ [0, 1],

where we set [s, t] = [0, t] ∪ [s, 1] in the case that s > t. Then, to obtain a pseudo-distance on [0, 1], we define

D*(s, t) = inf{ Σ_{i=1}^{k} d_Z(s_i, t_i) : s_1 = s, t_k = t, d_e(t_i, s_{i+1}) = 0 },   s, t ∈ [0, 1].

Finally, we set M = [0, 1]/{D* = 0} and endow it with the quotient distance induced by D*, which we denote by d. An easy property (see [105, Section 4.3]) of the Brownian map is that d_e(s, t) = 0 implies D*(s, t) = 0, so that M can also be seen as a quotient of T_e, and we let Π : T_e → M denote the canonical projection, and put p = Π ∘ p_e. Almost surely, the process Z attains a unique minimum on [0, 1], say at t_*. We set ρ = p(t_*). The random metric space (M, d) = (M, d, ρ) is called the Brownian map and we call ρ its root.
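The label process can be simulated on a grid as well, at least coarsely. Conditionally on e, Z is centred Gaussian with Cov(Z_s, Z_t) = min_{s∧t ≤ u ≤ s∨t} e_u, which gives E[(Z_s − Z_t)^2] = d_e(s, t) as required. The sketch below is a hypothetical discretization (practical only for small grids; it uses a deterministic stand-in for e, and it does not attempt to compute the quotient metric D*): it samples Z and evaluates d_Z, together with the distance to the root, d(ρ, p(t)) = Z_t − Z_*, recalled in Section 1.5.1.

```python
import numpy as np

def simulate_labels(e, rng):
    """Sample the label process Z on the grid: centred Gaussian with
    Cov(Z_s, Z_t) = min of e over [s^t, svt], so E[(Z_s - Z_t)^2] = d_e(s, t)."""
    n = len(e)
    cov = np.empty((n, n))
    for i in range(n):
        for j in range(i, n):
            cov[i, j] = cov[j, i] = e[i:j + 1].min()
    # the default 'svd' method tolerates the singular direction (Var Z_0 = 0)
    return rng.multivariate_normal(np.zeros(n), cov, check_valid="ignore")

def dZ(Z, i, j):
    """d_Z(s, t); the arc [s, t] is read cyclically when s > t."""
    if i > j:
        i, j = j, i
    inside = Z[i:j + 1].min()
    outside = min(Z[:i + 1].min(), Z[j:].min())
    return Z[i] + Z[j] - 2.0 * max(inside, outside)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 201)
e = np.sin(np.pi * t) ** 2              # deterministic stand-in contour function
Z = simulate_labels(e, rng)
t_star = int(Z.argmin())                # rho = p(t_star)
print(dZ(Z, 50, 150))                   # d_Z between two corners
print(Z[50] - Z[t_star])                # distance to the root, Z_t - Z_*
```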
1.3.1 CVS-bijection, extended

Almost surely, for every pair of distinct points s ≠ t ∈ [0, 1], at most one of d_e(s, t) = 0 or d_Z(s, t) = 0 holds, except in the particular case {s, t} = {0, 1}, where both identities hold simultaneously (see [100, Lemma 3.2]). Therefore only leaves (that is, non-cut-points) of T_e are identified in the construction of the Brownian map, and this occurs if and only if they have the same label and, along either the clockwise or counter-clockwise, contour-ordered path around T_e between them, one only finds vertices of larger label. Thus, in the construction of the Brownian map, (T_e, Z) is a continuum analogue for a well-labelled plane tree, and the quotient by {D* = 0} for the CVS-bijection (which recall identifies well-labelled plane trees with rooted planar maps, as discussed in Section 1.2.2).

1.4 Basic properties

1.4.1 Fractal, spherical geometry

Recall that an original motivation for developing this theory is to obtain a random spherical surface. Although a finite planar map is trivially homeomorphic to S^2, this property is not a priori preserved in the Gromov-Hausdorff limit. Thus one of the most fundamental theorems regarding the Brownian map is as follows.

Theorem 1.4.1 (Le Gall and Paulin [100] and Miermont [106]). Almost surely, the Brownian map (M, d) is homeomorphic to S^2.

The two proofs of this result take entirely different approaches. The original proof by Le Gall and Paulin [100] uses a general result of Moore [111] and works directly with the limiting object (M, d). On the other hand, Miermont [106] studies the discrete maps themselves, showing that large planar maps typically do not have cycles smaller than the scaling order O(n^{1/4}) that separate macroscopic (on the scaling order) areas of the map. In other words, the possible existence of "bottlenecks," capable of giving rise to non-spherical limits, is ruled out directly.

Recall that, for d ≥ 2, the Hausdorff dimension of a path of Brownian motion in R^d is twice that of a smooth curve (see Kaufman [86]). Similarly, the Hausdorff dimension of (M, d) is twice that of S^2.

Theorem 1.4.2 (Le Gall [96]). Almost surely, the Hausdorff dimension of the Brownian map (M, d) is 4.

Overall, we see by these theorems that the Brownian map has a spherical geometry, and an extremely singular metric space structure.

1.4.2 Volume and re-rooting invariance

Although the Brownian map is a rooted metric space, it is not so dependent on its root. The volume measure λ on M is defined as the push-forward of Lebesgue measure on [0, 1] via p. A fundamental result of Le Gall shows that the Brownian map is invariant under re-rooting, in the following sense.

Theorem 1.4.3 (Le Gall [97]). Suppose that U is uniformly distributed over [0, 1] and independent of (M, d). Then (M, d, ρ) and (M, d, p(U)) are equal in law.

Therefore, to some degree, the root of the map is but an artifact of its construction. That being said, there is a dense set (of zero volume, but positive dimension) of special points that significantly contribute to its geometry. Indeed, investigating such points is a main focus in Part II of this thesis (see Section 1.8 for an overview).

1.5 Geodesics

A subset γ ⊂ M is called a geodesic segment if (γ, d) is isometric to a compact interval. An isometry from such an interval to γ is a geodesic associated with the geodesic segment γ. We will however most often blur this distinction, referring to geodesic segments simply as geodesics. We note that the Brownian map, being the Gromov-Hausdorff limit of geodesic spaces, is almost surely a geodesic space (see for instance [45]).

Le Gall [97] obtained a complete description of the geodesics to the root ρ of the Brownian map, as discussed in the next section.
This description implies several interesting properties of the Brownian map, and has played a critical role in its further analysis. Indeed, the work [97] predates the main Theorem 1.3.1, and is a key tool used in its proofs [98, 108] (see Section 1.6).

1.5.1 Simple geodesics

As discussed in Section 1.2.2, the CVS-bijection highlights geodesics from each corner of a well-labelled plane tree to the root of the resulting planar map. As it turns out, these are the only geodesics to the root that remain visible in the scaling limit.

A corner of a vertex v in a discrete plane tree T is a sector centred at v, delimited by the edges which precede and follow v along a contour-ordered path around T. Leaves of a tree have exactly one corner, and in general, the number of corners of v is equal to the number of connected components in T − {v}. Similarly, we may view the R-tree T_e as having corners, however in this continuum setting all sectors reduce to points. Hence, for the purpose of the following (slightly informal) discussion, let us think of each t ∈ [0, 1] as corresponding to a corner of T_e with label Z_t. Thus as t ∈ [0, 1] varies from 0 to 1, we think of exploring the clockwise, contour-ordered path around T_e, encountering its corners labelled by Z_t along the way.

Recall (see Section 1.3) that ρ = p(t_*), such that Z_t attains its minimum at t_*. Put Z_* = Z_{t_*}. As it turns out, d(ρ, p(t)) = Z_t − Z_* for all t ∈ [0, 1] (see [96]). In other words, up to a shift by the minimum label Z_*, the Brownian label of a point in T_e is precisely the distance to ρ from the corresponding point in the Brownian map.

Simple geodesics to ρ are constructed as follows. For t ∈ [0, 1] and ℓ ∈ [0, Z_t − Z_*], let s_t(ℓ) denote the point in [0, 1] corresponding to the first corner with label Z_t − ℓ in the clockwise, contour-ordered path around T_e beginning at the corner corresponding to t. For each such t, the image of the function Γ_t : [0, Z_t − Z_*] → M taking ℓ to p(s_t(ℓ)) is a geodesic segment from p(t) to ρ. In [97] it is shown that all geodesics to ρ are of this form.

Theorem 1.5.1 (Le Gall [97]). Almost surely, all geodesics in (M, d) to the root ρ are simple geodesics Γ_t, t ∈ [0, 1].

This result has several important implications, as discussed in the sections that follow. In summary, the R-tree structure of the Brownian map is revealed. We find that the space (M, d) is comprised of two topological R-trees, G(ρ) and S(ρ): the former containing all points strictly inside geodesic segments to ρ, and the latter all points with multiple geodesics to ρ. These trees are dual to each other, in the sense that they are disjoint and both dense in (M, d). Loosely speaking, they are "intertwined." As it turns out, the Hausdorff dimensions of G(ρ) and S(ρ) are 1 and 2, and so, what remains of the Brownian map is a 4-dimensional set of points at their interface.

We remark here that, in brief, the main purpose of Part II of this thesis is to investigate the sets G(x) and S(x) for general points x ∈ M.
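On a discretization of (e, Z) as in the sketch after Section 1.3, a simple geodesic can be read off directly from the labels: starting from the corner t, it visits the successive corners, in the clockwise contour order, at which Z reaches a new running minimum, until the global minimizer t_* (that is, ρ) is reached. A hypothetical helper along these lines (the toy array Z is only for illustration):

```python
import numpy as np

def simple_geodesic(Z, t_idx):
    """Indices visited by (a discrete approximation of) the simple geodesic
    Gamma_t: times after t, in cyclic contour order, at which the label Z
    attains a new running minimum, ending at the global minimizer t_*."""
    n = len(Z)
    t_star = int(np.argmin(Z))
    path, level = [t_idx], Z[t_idx]
    i = t_idx
    while i != t_star:
        i = (i + 1) % n
        if Z[i] < level:          # first corner with a strictly smaller label
            level = Z[i]
            path.append(i)
    return path

# toy label process; in practice Z would come from the simulation sketched above
Z = np.array([0.0, 1.0, 0.5, 1.5, 0.8, 1.2, -0.3, 0.4])
print(simple_geodesic(Z, 3))   # [3, 4, 6]: labels decrease 1.5 -> 0.8 -> -0.3
```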
1.5.2 Tree of cut-points

Let S(ρ) denote the set of points y ∈ M with multiple geodesics to ρ. Note that the cut-points of T_e (that is, points a ∈ T_e such that T_e − {a} has multiple connected components) are exactly the points in T_e with multiple corners. Hence by Theorem 1.5.1 it follows that S(ρ) is precisely the R-tree T_e = [0, 1]/{d_e = 0} minus its leaves (that is, non-cut-points), projected into M. Informally, there is a geodesic to ρ leaving from each corner of the CRT (as is also the case for finite planar maps, see Section 1.2.2). Moreover, since the number of corners of a cut-point of T_e is exactly the number of geodesics from the corresponding point in the map to ρ, we obtain the following result by standard properties of the CRT.

For i ≥ 1, let S_i(ρ) be the set of points y ∈ M with exactly i geodesic segments to ρ.

Theorem 1.5.2 (Le Gall [97]). Let ρ denote the root of the Brownian map. We have that

(i) S(ρ) = S_2(ρ) ∪ S_3(ρ);

(ii) S_2(ρ) is dense and has Hausdorff dimension 2;

(iii) S_3(ρ) is dense and countable.

Although the Brownian map is an extremely singular metric space, in this regard it nonetheless bears similarities with complete, analytic Riemannian surfaces homeomorphic to the sphere, for which the cut locus S of a point x is a tree and the number of "branches" emanating from a point in S is exactly the number of geodesics to x (see Poincaré [115] and Myers [113]). With this in mind, Le Gall [97] states that the set S(ρ)

"exactly corresponds to the cut locus of [the Brownian map] relative to the root."

Extending the notion of a cut locus to general points x ∈ M, however, turns out to be a more delicate matter, see Section 1.8.2.

1.5.3 Tree of geodesics

The relative interior of a geodesic segment γ between x, y ∈ M is the set γ − {x, y}, that is, the segment minus its endpoints. Let G(ρ) denote the set of points in the relative interior of a geodesic segment to ρ. We call G(ρ) the geodesic net of ρ. Theorem 1.5.1 implies that G(ρ) is precisely the R-tree T_Z = [0, 1]/{d_Z = 0} minus its leaves, projected into M. In this sense, there is a tree of geodesics to ρ in (M, d). As shown in [97], G(ρ) is a relatively small subset of the map, of Hausdorff dimension 1. Points in S(ρ) correspond to leaves of T_Z (see [100, Lemma 3.2]), so the trees S(ρ) and G(ρ) are disjoint.

1.5.4 Confluence of geodesics

Perhaps the most striking consequence of Theorem 1.5.1 is that any two geodesics to the root coalesce before reaching the root. Le Gall [97] refers to this phenomenon as the confluence of geodesics. Let B(x, ε) denote the ball of radius ε centred at x ∈ M. More specifically, almost surely we have that, for any ε > 0, there is some η > 0 such that all geodesic segments from points y ∈ B(ρ, ε)^c to ρ coincide inside B(ρ, η). In topological terminology (as pointed out in [105]), there is a unique germ of geodesics to ρ.

This observation follows by the continuity of Z_t and Theorem 1.5.1, as we now explain. In this discussion, for s, t ∈ [0, 1], let [s, t] be the sub-interval of [0, 1] in the cyclic order, where 0 ≡ 1. That is, [s, t] denotes [0, t] ∪ [s, 1] if s > t. Recall that ρ = p(t_*) and Z_* = Z_{t_*} is the minimum of Z_t. Moreover d(ρ, p(t)) = Z_t − Z_* for all t (see Section 1.5.1). Therefore, for some ξ > 0, Z_t − Z_* < ε for all t ∈ [t_* − ξ, t_* + ξ]. Hence p([t_* − ξ, t_* + ξ]) ⊂ B(ρ, ε). Let η be the minimum of Z_t − Z_* on [t_* + ξ, t_* − ξ]. Note that η > 0. By the choice of η (and Theorem 1.5.1), all geodesics to ρ from points outside B(ρ, ε) coincide inside B(ρ, η). See Figure 1.4.

Figure 1.4: All geodesics to the root ρ = p(t_*) from points outside B(ρ, ε) coincide inside B(ρ, η), where η is the minimum of Z_t − Z_* on [t_* + ξ, t_* − ξ], and ξ is such that Z_t − Z_* < ε on [t_* − ξ, t_* + ξ].

Applying invariance under re-rooting (Theorem 1.4.3), we obtain the following result.

Theorem 1.5.3 (Le Gall [97, Corollary 7.7]). Almost surely, for λ-almost every x ∈ M, the following holds.
For any neighbourhood N of x, there is a sub-neighbourhood N′ ⊂ N so that all geodesics from x to points outside N coincide inside N′.

Moreover, geodesics to the root of the map tend to coalesce quickly. For t ∈ [0, 1], let γ_t denote the image of the simple geodesic Γ_t from p(t) to the root of the map ρ (see Section 1.5.1). That is, γ_t is the geodesic segment associated with the geodesic Γ_t.

Lemma 1.5.4 (Miermont [108, Lemma 5]). Almost surely, for all s, t ∈ [0, 1], γ_s and γ_t coincide outside of B(p(s), d_Z(s, t)).

This result follows simply by noting that the distance from p(s) to the point at which γ_s and γ_t coalesce is no longer than the path from s to t in the tree T_Z (with equality holding if and only if γ_t ⊂ γ_s).

1.5.5 Regularity of geodesics

For x, y ∈ M, we call the set of points in some geodesic segment from x to y the geodesic network from x to y. In this section we note that, by the observations in Sections 1.5.3 and 1.5.4, geodesic networks from ρ to points y ∈ M have a specific topological structure.

We say that the ordered pair (x, y) is regular if any two distinct geodesic segments between x and y are disjoint inside, and coincide outside, a punctured ball centred at y of radius less than d(x, y). Formally, if γ, γ′ are geodesic segments between x and y, then for some r ∈ (0, d(x, y)), we have that γ ∩ γ′ ∩ B(y, r) = {y} and γ − B(y, r) = γ′ − B(y, r).

By the tree structure of G(ρ) (see Section 1.5.3) and Theorems 1.4.3 and 1.5.3, we obtain the following result.

Proposition 1.5.5. Almost surely, for λ-almost every x ∈ M, for all y ∈ M, (x, y) is regular.

1.6 Uniqueness

The main obstacle to establishing the existence of the Brownian map (that is, the existence of a unique scaling limit) is to obtain more information about its geodesics, beyond the foundational results of Le Gall [97] (see Section 1.5.1).

A compactness argument of Le Gall [96] established scaling limits of planar maps along subsequences, however the question of uniqueness remained unresolved for some time. Some properties were known to hold regardless of what subsequence had been extracted, notably Le Gall's [97] description of geodesics to the root (Theorem 1.5.1) (and also, for instance, Theorems 1.4.1 and 1.4.2). This information, however, is not a priori sufficient to show that the limit exists.

The key to overcoming this difficulty is to relate a geodesic between a pair of typical points to geodesics to the root. Let γ be a geodesic segment between points selected uniformly according to λ. (Note that, by the confluence of geodesics phenomenon (Theorem 1.5.3), the root of the map is almost surely disjoint from γ.) In [98, 108] the set of points z ∈ γ such that the relative interior of any geodesic from z to the root is disjoint from γ is shown to be small compared to γ. Roughly speaking, "most" points in "most" geodesics of the Brownian map are in a geodesic to the root. (See the discussion around equation (2) in [98] and [108, Section 2.3] for precise statements.) In this way, Le Gall [98] and Miermont [108] show that geodesics to the root do in fact provide enough information to characterize the Brownian map metric, leading to Theorem 1.3.1.
See for example Miermont's Saint-Flour notes [105, Section 7] for a more detailed overview.

1.7 Related models

1.7.1 Local limits

As already mentioned, infinite volume, local limits of planar maps were developed prior to the results on scaling limits.

For a graph G = (V, E), vertex v ∈ V and r ≥ 0, let B_G(v, r) denote the ball of radius r in G, that is, the subgraph of G induced by the set of vertices whose graph distance to v is at most r. A sequence of rooted graphs (G_n, ρ_n) is said to converge locally in distribution to a rooted graph (G_∞, ρ), if for every r ≥ 0 and graph H, the probability that B_{G_n}(ρ_n, r) = H converges to the probability that B_{G_∞}(ρ, r) = H as n → ∞.

Theorem 1.7.1 (Angel and Schramm [14]). Let T_n be a uniformly random triangulation of S^2 of size n. Then T_n converges locally in distribution to a random infinite graph T_∞, called the uniform infinite planar triangulation, or UIPT.

The UIPQ, denoted by Q_∞, which arises as the local limit of uniformly random quadrangulations, was later developed by Krikun [93].

It is interesting to note that, just as the Brownian map is obtained via a continuum analogue of the CVS-bijection (see Sections 1.2.2 and 1.3), the UIPQ can also be constructed by an extension of this same bijection. This construction, as described below, is due to Curien, Ménard and Miermont [56]. In this case, the role of T_e (the CRT) is replaced with that of T_∞, the critical Galton-Watson tree conditioned to survive (due to Kesten [87]).

Recall that T_∞ is obtained from a half-infinite line (called the spine), with edges between each i ≥ 0 and i + 1, by "grafting" critical (and so, almost surely finite) Galton-Watson trees on the left and right sides of each vertex i ≥ 0. This tree can be "well-labelled" via a discrete version of the Brownian snake. Specifically, each edge in T_∞ is assigned an iid weight in {0, ±1}. We define the label ℓ_v of vertex v in T_∞ to be the sum of the edge weights along the (unique) path in T_∞ from 0 to v. Essentially, this labelling is (lazy) simple random walk indexed by T_∞. Extending the CVS-bijection to the object at hand, we draw an edge from each corner of a vertex v in T_∞ to the next corner in the clockwise, contour-ordered path around T_∞ with label ℓ_v − 1. Since inf_{i ≥ 0} ℓ_i = −∞ (as the labels on the spine correspond to a standard (lazy) simple random walk), this procedure is well-defined. Moreover, in [56] it is shown that the object obtained in this way is equal in distribution to the UIPQ.

1.7.2 Brownian surfaces

An infinite volume version of the Brownian map, called the Brownian plane (P, D), has been introduced and studied by Curien and Le Gall [53]. The random metric space (P, D) is almost surely homeomorphic to the plane R^2 and, like the Brownian map, of Hausdorff dimension 4.

The spaces (M, d) and (P, D) have a similar local structure. Specifically, almost surely there are isometric neighbourhoods of the roots of (M, d) and (P, D). The Brownian plane has an additional scale invariance property which makes it more amenable to analysis, see the works of Curien and Le Gall [54, 55]. We note that using these facts, properties of the Brownian plane can be deduced from those of the Brownian map.

Along the same lines as the construction of the Brownian map, the Brownian plane can be obtained through an extension of the CVS-bijection. In this setting, the role of the Brownian excursion e is replaced with that of an independent pair of three-dimensional Bessel processes, indexed by [0, ∞) and (−∞, 0].
Furthermore, (P, D) can also be obtained as a local (non-compact) Gromov-Hausdorff scaling limit of the UIPQ (see Section 1.7.1), by scaling distances in the UIPQ by λ, and letting λ → 0.

Bettinelli [33, 34, 35] has investigated Brownian surfaces of positive genus. In [33], the subsequential Gromov-Hausdorff convergence of uniform random bipartite quadrangulations of the g-torus T_g is established (also general orientable surfaces with a boundary are analyzed in [35]), and it is an ongoing work of Bettinelli and Miermont [37, 38] to confirm that a unique scaling limit exists. Some properties hold independently of which subsequence is extracted. For instance, any scaling limit of bipartite quadrangulations of T_g is homeomorphic to T_g (see [34]) and has Hausdorff dimension 4 (see [33]). Also, a confluence of geodesics is observed at typical points of the surface (see [35]). We also note that recently Baur, Miermont and Ray [28] have classified the scaling limits of uniform quadrangulations with a boundary.

1.8 Our results

In this section, we discuss some of the main results proved in Part II of this thesis. See Section 3.2 below for a complete overview.

1.8.1 Confluence points

We strengthen the confluence of geodesics phenomenon of Le Gall [97] (Theorem 1.5.3). We find that for any neighbourhood N of a typical point in the Brownian map, there is a confluence point x_0 between a sub-neighbourhood N′ ⊂ N and the complement of N. See Figure 1.5.

Theorem 1.8.1 (Angel, Kolesnik and Miermont [13]). Almost surely, for λ-almost every x ∈ M, the following holds. For any neighbourhood N of x, there is a sub-neighbourhood N′ ⊂ N and some x_0 ∈ N − N′ so that all geodesics between any points x′ ∈ N′ and y ∈ N^c pass through x_0.

Figure 1.5: All geodesics from points in N′ to points in the complement of N ⊃ N′ pass through a confluence point x_0.

Using this key result, we establish several properties of the Brownian map (see Section 3.2). In the remaining sections of this chapter, we discuss some of our more easily-stated results.

1.8.2 Cut loci

The cut locus of a point p in a Riemannian manifold, first examined by Poincaré [115], is the set of points q ≠ p which are endpoints of maximal (minimizing) geodesics from p. This collection of points is more subtle than just the set of points with multiple geodesics to p. In fact, it is generally the closure thereof (see Klingenberg [90, Theorem 2.1.14]).

In the Brownian map this equivalence breaks completely. Indeed, as shown in Chapter 3, almost all (in the sense of volume and Baire category) points are the end of a maximal geodesic, and every point is joined by multiple geodesics to a dense set of points. Moreover, whereas in the Brownian map there are points with multiple geodesics to the root which coalesce before reaching the root, in a Riemannian manifold any (minimizing) geodesic which is not the unique geodesic between its endpoints cannot be extended (see, for example, the "short-cut principle" discussed in Shiohama, Shioya and Tanaka [124, Remark 1.8.1]).

As it would seem that the Brownian map is about as far from Riemannian as possible, the following cautionary note by Berger [31] seems as appropriate as ever:
"The cut-locus is essentially a Riemannian notion if one expects a reasonable form of behavior. As soon as one goes to more general metric spaces things can become very, very wild."

That being said, we wish to extend this notion to the Brownian map since, to quote Berger [32] once again, it is

"interesting to contrast cut loci in Riemannian manifolds with those of a more general metric space."

We need only take care in order to define a suitable notion of cut locus for this highly singular metric space. We proceed as follows, defining the (strong) cut locus C(x) of a point x ∈ M to be the set of points y ∈ M to which there are at least two geodesics from x that are disjoint in a neighbourhood of y. Thus, roughly speaking, y ∈ C(x) if there are geodesics from x that approach y from different directions. We believe that this definition captures the essence of a cut locus as effectively as possible.

We show that the cut locus of the Brownian map is uniformly stable, in the following sense.

Theorem 1.8.2 (Angel, Kolesnik and Miermont [13]). Almost surely, for all x, y ∈ M, C(x) and C(y) coincide outside a closed, nowhere dense set of zero λ-measure.

Note that this result holds for any points x, y ∈ M, not only for typical points.

Moreover, for typical points x ∈ M, a small perturbation from x has only a small, local effect on the cut locus. In this sense, the cut locus of the Brownian map is continuous almost everywhere.

Theorem 1.8.3 (Angel, Kolesnik and Miermont [13]). Almost surely, for λ-almost every x ∈ M, for any neighbourhood N of x, there is a sub-neighbourhood N′ ⊂ N so that C(x′) − N is the same for all x′ ∈ N′.

On the other hand, we define the weak cut locus S(x) to be simply the set of points y ∈ M with multiple geodesics to x. By Proposition 1.5.5, the two notions are typically one and the same.

Proposition 1.8.4. Almost surely, for λ-almost every x ∈ M, S(x) = C(x), that is, the weak and strong cut loci coincide.

That being said, their general behaviour is markedly distinct. While the strong cut locus is uniformly stable, the weak cut locus behaves quite wildly, oscillating in dimension and volume, see Section 3.5.3.1.

1.8.3 Geodesic networks

By the results of Le Gall [97] discussed in Section 1.5, all geodesic networks to ρ are regular, and consist of at most three geodesics. We find that all except very few geodesic networks in the Brownian map are, in the following sense, a concatenation of two regular networks.

For x, y ∈ M and j, k ∈ N, we say that the ordered pair (x, y) induces a normal (j, k)-network, and write (x, y) ∈ N(j, k), if for some z in the relative interior of all geodesic segments between x and y, (z, x) and (z, y) are regular (see Section 1.5.5) and z is connected to x and y by exactly j and k geodesic segments, respectively. See Figure 1.6.

Figure 1.6: As depicted, (x, y) ∈ N(2, 3). Note that (u, x) does not induce a normal (j, k)-network.

In particular, note that if x, y are joined by exactly k geodesics and (x, y) is regular, then (x, y) ∈ N(1, k). (Take z to be a point in the relative interior of the geodesic segment contained in all k segments from x to y.)

Not all networks are normal (j, k)-networks. For instance, if (x, y) ∈ N(j, k) and j > 1, then there is a point u that is joined to x by two geodesics with disjoint relative interiors. See Figure 1.6. That being said, we find that most pairs induce normal (j, k)-networks. Moreover, for each j, k ∈ {1, 2, 3}, there are many normal (j, k)-networks in the map.
Hence, in particular, weestablish the existence of atypical networks comprised of more than threegeodesics (and up to nine).Theorem 1.8.5 (Angel, Kolesnik and Miermont [13]). The following holdalmost surely.(i) For any j, k ∈ {1, 2, 3}, N(j, k) is dense in M2.(ii) M2 −⋃j,k∈{1,2,3}N(j, k) is nowhere dense in M2.By Theorem 3.2.15, there are essentially only six types of geodesic net-works that are dense in the Brownian map. See Figure 1.7.Figure 1.7: Classification of geodesic networks that aredense in the Brownian map (up to symmetries andhomeomorphisms of the sphere).Finally, we also obtain the Hausdorff dimension of the set of pairs joinedby each type of normal network. For a subset A ⊂M , let dimA denote itsHausdorff dimension (see Section 3.3.4).Theorem 1.8.6 (Angel, Kolesnik and Miermont [13]). Almost surely, wehave that dimN(j, k) = 2(6−j−k), for all j, k ∈ {1, 2, 3}. Moreover, N(3, 3)is countable.231.8. Our resultsIn closing, we remark that it remains an interesting open problem tofully classify all types of geodesic networks in the Brownian map. Evenshowing that almost surely there are no x, y ∈M joined by infinitely manygeodesics is open, although the upper bound of nine seems plausible. Seealso the intriguing possible existence of ghost geodesics in the Brownian map,as discussed in Section 3.2.4. Such geodesics (if they exist), behave more likeEuclidean geodesics than typical geodesics in the Brownian map, in the sensethat they do not coalesce with any other geodesics. We call them “ghosts,”since in this way, they are undetected by all other geodesics.24Chapter 2Bootstrap percolationLet G = (V,E) be a graph and r a positive integer. Given an initial set ofactive vertices V0 ⊂ V , the r-neighbour bootstrap percolation process evolvesby activating vertices with at least r active neighbours. Formally, let Vt+1be the union of Vt and the set of all vertices with at least r neighbours in Vt,that is,Vt+1 = Vt ∪{v : |N(v) ∩ Vt| ≥ r},where N(v) is the set of neighbours of a vertex v. The sets Vt are increasing,and so converge to some set of eventually active vertices, denoted by 〈V0, G〉r.If 〈V0, G〉r = V , that is, all vertices in V are eventually activated, we saythat G percolates.Bootstrap percolation is most often attributed to Chalupa, Leath andReich (1979) [50], who studied the model on the Bethe lattice (the infinited-regular tree Td). However, as noted in the survey paper by Alder andLev [5], the idea was presented earlier by Pollak and Riess (1975) [116] (seealso the private communication with Kopelman cited therein).In fact, similar models had been considered even earlier. Since thestatus of any given vertex at any given time of the process depends only onthe status of its neighbourhood, bootstrap percolation is an example of acellular automaton, as developed by von Neumann (1966) [134], followingUlam (1950) [131]. Bollobás’ (1968) [39] study of weakly k-saturated graphsleads to a variation called graph bootstrap percolation, see Section 2.5 below.Also of note is a model proposed by McCullogh and Pitts (1943) [104] (seealso the modern review by Piccinini [114]) for neuronal interactions in thebrain, which bears similarities with bootstrap percolation. In this model,the underlying graph is directed and its edges are assigned weights. A vertex252.1. 
Our objectivebecome active/inactive if the weighted sum over the edges directed towardsit from its active neighbours is larger/smaller than a certain threshold.In any case, the term “bootstrap percolation” originates from [50], andthe works [50, 116] are the first to present the idea from the perspective ofstatistical physics. The basic motivation is to study the effect of an impurityon a magnetic system. Since magnetism is a phenomenon that comes aboutthrough interaction, it is assumed in [50, 116] that a magnetic particle, whichis no longer in direct contact with sufficiently many other magnetic particles,becomes non-magnetic, and so thereafter can be treated as an impurity itself.As a result, given the right initial conditions, the model exhibits an abrupt,first-order phase transition.Since its introduction, bootstrap percolation has found many applicationsin mathematics, physics, and in other fields, including computer science andsociology, see for instance [4, 5, 9, 57, 58, 63, 64, 65, 68, 69, 70, 89, 112, 126,133, 136, 137] and further references therein.2.1 Our objectiveThe bootstrap percolation process is well-studied on several classes of deter-ministic graphs, such as grids, lattices, trees and hypercubes. More recently,there has been interest in studying the model on random graphs. The fo-cus of Part III of this thesis is to analyze the model on the fundamentalErdős–Rényi [60] graph Gn,p. Recall that Gn,p is the random subgraph of thecomplete graph Kn (the graph on [n] = {1, 2, . . . , n} containing all possible(undirected) edges {i, j}, where i, j ∈ [n]), obtained by including each pos-sible edge independently with probability p. Our results are presented inSection 2.6 below.2.2 Main questions and terminologyIf 〈V0, G〉r = V , that is, all vertices in V are eventually active if V0 is initiallyactive, then we say that the set V0 is contagious for G. Note that if G isfinite then 〈V0, G〉r = Vτ , where τ is the smallest t such that Vt = Vt+1.262.2. Main questions and terminologyThe main questions of interest in the field revolve around the size of theset of eventually active vertices 〈V0, G〉r. In most works, the object of studyis the probability that a random initial set V0 is contagious. Usually V0 isobtained either by initially activating each vertex in V independently withprobability p, or else by selecting a random subset of V of a given size.2.2.1 Critical thresholdsSuppose that each vertex of a graph G is initially active independently withprobability p. Thresholds pε are defined as the infimum over p such thatG percolates with probability at least ε. We put pc = p1/2, and refer tothis quantity as the critical probability, or critical threshold. Sometimes wewrite pc(G, r) to explicitly denote the critical probability for r-bootstrappercolation on G.The ε-window, or scaling window, is the interval [pε, p1−ε]. Suppose thatG = G(n) is a sequence of graphs, obtained for instance by selecting Guniformly at random from a certain class of graphs. Then pε = pε(n). If, forany ε ∈ (0, 1/2), we have that p1−ε − pε = o(pc) as n→∞, the percolationthreshold pc is called sharp, and coarse otherwise.We note that since the event of percolation is monotone increasing in p,by a general principle of Bollobás and Thomason [41] a threshold pc existssuch that that if p pc then G percolates with high probability and if p pcthen G does not percolate with high probability. Moreover, in some casesthe existence of a sharp threshold follows by a general result of Friedgut [67,Theorem 1.4]. 
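Since the process and the thresholds just defined are purely combinatorial, they are straightforward to experiment with numerically. The following Python sketch is ours and purely illustrative (the function names and the adjacency-list representation are our own choices, not taken from the literature): it computes the closure 〈V0, G〉r of the r-neighbour process defined at the start of this chapter, and gives a crude Monte Carlo estimate of the probability of percolation from a random initial set of density p.

```python
import random
from collections import deque

def bootstrap_closure(neighbours, initially_active, r):
    """Return <V0, G>_r, the eventually active set of r-neighbour bootstrap
    percolation on the graph given by `neighbours`, a dict mapping each
    vertex to the set of its neighbours."""
    active = set(initially_active)
    marks = {v: 0 for v in neighbours}   # active neighbours seen so far
    queue = deque(active)
    while queue:
        v = queue.popleft()
        for u in neighbours[v]:
            if u in active:
                continue
            marks[u] += 1
            if marks[u] >= r:            # u now has r active neighbours
                active.add(u)
                queue.append(u)
    return active

def percolation_probability(neighbours, p, r, trials=200):
    """Monte Carlo estimate of the probability that the graph percolates when
    each vertex is initially active independently with probability p."""
    n, hits = len(neighbours), 0
    for _ in range(trials):
        seed = [v for v in neighbours if random.random() < p]
        if len(bootstrap_closure(neighbours, seed, r)) == n:
            hits += 1
    return hits / trials
```

Sweeping or bisecting over p then gives rough estimates of pε and of the ε-window for a fixed finite graph; a bisection of this kind for the grid [n]^2 is sketched in the survey below.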
That being said, in this thesis our motivation is in actuallylocating certain sharp thresholds of interest (that is, we aim to identify afunction θ = θ(n), such that pc ∼ θ as n→∞).2.2.2 Minimal contagious setsRather than studying random contagious sets, it also is natural is ask whattypes of contagious sets exist for a graph. This is a more difficult questionto answer, since now interactions need to be considered. For instance, inorder to apply the standard second moment method to show that a (random)272.3. Brief surveygraph G = (V,E) has a contagious set of size q, estimates are required forthe probability that, for sets I 6= I ′ ⊂ V of size q and various values ofk > q, we have that |〈I,G〉r| ≥ k and |〈I ′, G〉r| ≥ k. As a result, there arecomparatively less results in this direction, and indeed, this is a main focusof our thesis in Chapters 4 and 5.Let m(G, r) denote the size of a minimal contagious set for G. Note thatm(G, r) ≥ r. We call a graph susceptible, and say that it r-percolates, if ithas a contagious set of the smallest possible size r. More generally, a graphwith a contagious set of size q is called (q, r)-susceptible, or equivalently(q, r)-percolating. Critical thresholds are defined as is pc in Section 2.2.1, andwe use similar notation to denote them when it is clear from the context.2.3 Brief surveyA well-known problem in the field is to show that any contagious set for2-bootstrap percolation on the finite grid [n]2 contains at least n vertices (seefor instance [15] for a discussion). An elegant solution involves consideringthe length of the boundary of the set of active vertices, noting that thisremains constant when an additional vertex is activated.2.3.1 First resultsApart from the founding articles [50, 116] already mentioned, the first resultsin the literature concern bootstrap percolation on the infinite lattices Zdand finite grids [n]d. The latter situation is referred to as the finite volume,or metastable, regime. In all of the works discussed in this section, theprocess is started by declaring each vertex initially active independently withprobability p. Recall that we let pc(G, r) denote the critical probability atwhich r-boostrap percolation occurs on G with probability at least 1/2.The first rigorous result is due to van Enter [133], who showed thatpc(Zd, 2) = 0 in all dimensions d ≥ 2. More generally, Schonmann [120]proved that pc(Zd, r) is equal to 0 if r ≤ d and equal to 1 otherwise. Inthis sense, pc is trivial on Zd. Another early result is that of Aizenman and282.3. Brief surveyLebowitz [6], which identifies the order of pc for 2-bootstrap percolation in themetastable regime in all dimensions as pc([n]d, 2) = Θ(log1−d n). This resultwas generalized many years later by Cerf and Manzo [47], who proved thatpc([n]d, r) = Θ(log1−d(r−1) n), where log(r−1) denotes the iterated logarithm,defined by log(1) = log and log(k+1) = log log(k).A famous result of Holroyd [79], the first to identify a sharp threshold,shows thatpc([n]2, 2) =pi2181 + o(1)logn .Besides its precision, part of what makes this result exciting is that theconstant pi2/18 ≈ 0.5483 does not compare well at all with the numericalestimate 0.245 ± 0.015 reported by Adler, Stauffer and Aharony [3]. Thisdiscrepancy is partially explained by the refined bounds for pc obtained byGravner and Holroyd [71] and Gravner, Holroyd and Morris [72]. 
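To give a sense of how slow this convergence is in practice, the following crude experiment (ours, and only illustrative; it assumes that bootstrap_closure from the sketch in Section 2.2 is in scope, and the helper names are hypothetical) bisects for the density p at which the n × n grid percolates with probability about 1/2 and compares p·log n with π²/18 ≈ 0.5483.

```python
import random

# Assumes bootstrap_closure from the earlier sketch is available.

def grid_neighbours(n):
    """Adjacency lists of the n x n grid [n]^2 with nearest-neighbour edges."""
    nbrs = {(i, j): set() for i in range(n) for j in range(n)}
    for (i, j) in nbrs:
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if (i + di, j + dj) in nbrs:
                nbrs[(i, j)].add((i + di, j + dj))
    return nbrs

def estimate_half_threshold(n, trials=20, tol=1e-3):
    """Crude bisection for p_{1/2} of 2-neighbour bootstrap on [n]^2.
    Increase `trials` for a steadier estimate."""
    g = grid_neighbours(n)
    def perc_prob(p):
        hits = 0
        for _ in range(trials):
            seed = [v for v in g if random.random() < p]
            if len(bootstrap_closure(g, seed, 2)) == n * n:
                hits += 1
        return hits / trials
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if perc_prob(mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# For moderate n (say n = 128), p_{1/2} * log(n) comes out noticeably below
# pi^2/18, in line with the numerical estimates cited above.
```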
Specifically, there are constants c, C > 0, so that for all large n,
\[
\frac{c}{\log^{3/2} n} \;\le\; \frac{\pi^2}{18\log n} - p_c([n]^2, 2) \;\le\; \frac{C(\log\log n)^3}{\log^{3/2} n}.
\]
Essentially, the issue seems to be that the lower order terms in the expansion for pc are only of lower order for extremely large n, lying well outside computational range. In other words, pc log n converges to π²/18, but very slowly.

Finally, solving a long-standing problem in the field, Balogh, Bollobás, Duminil-Copin and Morris [20] identified the sharp threshold for all r in all dimensions. For any 2 ≤ r ≤ d, we have that, as n → ∞,
\[
p_c([n]^d, r) = \left(\frac{\lambda(d, r) + o(1)}{\log_{(r-1)} n}\right)^{d-r+1},
\]
where λ(d, r) is an implicitly defined constant (without a simple closed form expression for most values of d, r).

2.3.2 More results

Beyond the celebrated results discussed in the previous Section 2.3.1, there lies a vast literature. We close this section by only naming a few more results of interest. Extensive surveys can be found in the introductory sections of articles [20, 22, 24, 27, 84], for instance.

Generalizing the work of [50] on the infinite d-regular tree Td, Balogh, Peres and Pete [25] calculate the critical probability for bootstrap percolation on a large class of trees and graphs, which includes in particular all trees of bounded degree. For instance, it is shown that
\[
\lim_{\varepsilon \downarrow 0} p_\varepsilon(T_d, 2) = 1 - \frac{(d-2)^{2d-5}}{(d-1)^{d-2}(d-3)^{d-3}}.
\]
Galton–Watson trees T were considered by Bollobás, Gunderson, Holmgren, Janson and Przykucki [40] (see also Gunderson and Przykucki [76]). If T has branching number b, then pc(T, r) = Ω(e^{−b/(r−1)}/b).

Balogh, Bollobás and Morris [22] study majority bootstrap percolation on the hypercube Qn = [2]^n, where a vertex is activated if at least half of its neighbours are active. The first two terms in the expansion for pc are identified. For n even, we have that
\[
p_c(Q_n, n/2) = \frac{1}{2}\left(1 - \sqrt{\frac{\log n}{n}}\right) + \Theta\!\left(\frac{\log\log n}{\sqrt{n\log n}}\right).
\]
Bootstrap percolation has given insight into the Ising model at zero (or a very low) temperature, see the works by Cerf and Manzo [48, 49], Fontes, Schonmann and Sidoravicius [65] and Morris [112]. For instance, the zero-temperature Glauber dynamics on Zd are studied in [112], under which at random times (determined by independent exponential clocks) vertices of Zd update their status to agree with the majority of their neighbours (breaking ties at random). Initially each vertex in Zd is assigned a positive spin with probability p and a negative spin with probability 1 − p, independently of all other vertices. The process is said to fixate if eventually all spins are positive. In this setting pc(Zd) is the infimum over p such that Zd fixates with probability 1. A result of Arratia [16] (see also Schwartz [122] and Lootgieter [101]) implies that pc(Z) = 1. On the other hand, it is a long-standing conjecture that pc(Zd) = 1/2 for all d ≥ 2. Using ideas from bootstrap percolation, it is shown in [112] that pc(Zd) → 1 as d → ∞.

2.4 Random graphs

More recently, bootstrap percolation has been studied on random graphs. Balogh and Pittel [27] investigate bootstrap percolation on Gn,d, the uniformly random d-regular graph of size n. With high probability Gn,d and the infinite d-regular tree Td have a similar local structure. In [27], it is verified that pc for Gn,d coincides with that for Td (which recall is computed in [25], see Section 2.3.2).
Moreover, the width of the scaling window is analyzed.Majority bootstrap percolation (see Section 2.3.2) on the Erdős–Rényirandom graph Gn,p (discussed in more detail in the next Section 2.4.1) hasbeen analyzed by Holmgren, Kettle and Juškevičius [78] (see also Kettle [88],Juškevičius [85] and Stefánsson and Vallier [125]). In this setting, for suf-ficiently small p, it turns out that pc for Gn,p and the hypercube Qn (asstudied in [22], see Section 2.3.2) are comparable.Turova and Vallier [129] analyzed a variation of Gn,p, where in additionto its usual random edges, each vertex i ∈ [n] is connected to vertex i+ 1(mod n) by an edge with probability 1. This is a simplified version of a modelproposed by Turova and Villa [128] for neuronal networks, where it seemsthat the strength of connections within such a network depend on a mixtureof random effects and distances between neurons. These additional “localconnections” tighten the scaling window and cause percolation to occur insome situations in which Gn,p is unlikely to percolate.Bootstrap percolation on random graphs with given degrees has beenstudied by Amini [9], Amini and Fountoulakis [10] and Janson [83].312.4. Random graphs2.4.1 Bootstrap percolation on Gn,pThe remainder of this section concerns bootstrap percolation on the Erdős–Rényi [60] random graph Gn,p, which is our focus in Chapters 4 and 5. Recallthat Gn,p is the random subgraph of the complete graph Kn, obtained byincluding each possible edge independently with probability p.2.4.1.1 Random contagious setsBootstrap percolation on Gn,p was first studied by Vallier [132] (see also therelated works of Ball and Britton [17, 18] and Scalia-Tomba [118]). Thework of [132] was expanded upon by Janson, Łuczak, Turova and Vallier [84].Among many other detailed results, the following is proved.Theorem 2.4.1 (Janson et al. [84, Theorem 3.1]). Fix r ≥ 2. Suppose thatϑ = ϑ(n) satisfies 1 ϑ n. Put `r = `r(ϑ) = rr−1ϑ andαr = (r − 1)!(r − 1r)2(r−1), p = p(n, ϑ) =(αrnϑr−1)1/r.Suppose that I = I(n) ⊂ [n] is independent of Gn,p and such that |I|/`r → ε,as n → ∞. If ε ∈ [0, 1) then with high probability |〈I,Gn,p〉r| < rr−1 |I|. Ifε > 1 then with high probability |〈I,Gn,p〉r| = n(1− o(1)), that is, all exceptpossibly very few vertices are eventually activated.In this sense, `r is the critical size for a random set (selected independentlyof Gn,p) to be contagious for Gn,p. (By symmetry, the probability that such aset I is contagious is the same as for the set of vertices labelled 1 through |I|,or for any other given set of size |I| that is independent of Gn,p.) A heuristicfor the criticality of `r is given in Section 2.4.2.2. Theorem 2.4.1 and a relatedcentral limit theorem are discussed in greater detail in Section 2.4.2.1 below.2.4.1.2 Small contagious setsMore recently, and in contrast with the work of [84] discussed in the previousSection 2.4.1.1, Feige, Krivelevich and Reichman [62] study small contagioussets in Gn,p, in a range of p. Although it is very unlikely for a random set322.4. Random graphs(selected independently of Gn,p) of size ` < `r to be contagious, there typicallyexist contagious sets in Gn,p that are much smaller than `r.Recall that m(G, r) denotes the size of a minimal contagious set for G.Theorem 2.4.2 (Feige et al. [62, Theorem 1.1]). Fix r ≥ 2. 
Suppose that ϑ = ϑ(n) satisfies
\[
\frac{\log^2 n}{\log\log n} \;\ll\; \vartheta \;\ll\; n.
\]
Let
\[
\alpha_r = (r-1)!\left(\frac{r-1}{r}\right)^{2(r-1)}, \qquad p = p(n, \vartheta) = \left(\frac{\alpha_r}{n\vartheta^{r-1}}\right)^{1/r}.
\]
Then, with high probability,
\[
c_r \;\le\; \frac{m(G_{n,p}, r)}{\psi(n, \vartheta)} \;\le\; C_r,
\]
where
\[
\psi(n, \vartheta) = \frac{\vartheta}{\log(n/\vartheta)},
\]
cr < r, and cr → 2 and Cr = Ω(r^{r−2}), as r → ∞.

Note that d = np in [62] corresponds to (αr(n/ϑ)^{r−1})^{1/r} in this context. The lower bound holds in fact for all ϑ. (Although this is not stated in [62, Theorem 1.1], it follows from the proof, see [62, Corollaries 2.1 and 4.1].)

The inequality cr < r (which is relevant to our results in Section 2.6.2) is not shown in [62], so we briefly explain it here: In [62, Lemma 4.2 and Corollary 4.1], it is observed that a graph of size k with a contagious set of size ℓ has at least r(k − ℓ) edges. By this observation, it follows easily that with high probability
\[
m(G_{n,p}, r) \;\ge\; \xi\,\frac{r-1}{r}\cdot\frac{n}{d^{r/(r-1)}\log d},
\]
provided that ξ^{r−1}e^{r+2}/(2r)^r < 1. Since (r−1)! > e((r−1)/e)^{r−1}, this leads to the bound m(Gn,p, r) ≥ cψ(n, ϑ), where
\[
c \;<\; 2\left(\frac{r}{r-1}\right)^{3}\left(\frac{2r}{e^{4}}\right)^{1/(r-1)} \;<\; r,
\]
for all r ≥ 2.

Recall that a graph is susceptible if it contains a contagious set of the smallest possible size r. Another result of [62] identifies the order of the threshold for p above which Gn,p is likely to be susceptible.

Theorem 2.4.3 (Feige et al. [62, Theorem 1.2]). Let r ≥ 2. Let pc(n, r) denote the critical threshold for the susceptibility of Gn,p. As n → ∞, we have that pc(n, r) = Θ((n log^{r−1} n)^{−1/r}).

In particular, we note that pc(n, 2) = Θ(1/√(n log n)), an observation relevant to Section 2.5 below.

2.4.2 Binomial chains

In this section, we discuss the binomial chain construction used in [84] (and in Chapter 5 below) to analyze the spread of activation from an initially active set I in Gn,p. This representation of the bootstrap percolation dynamics is due to Scalia-Tomba [118] (see also Sellke [123]). We refer to [84, Section 2] for a detailed description, and here only present the properties relevant to this thesis. The main idea is to reveal the graph one vertex at a time. As a vertex is revealed, we mark its neighbours. Once a vertex has been marked r times, we know it will be activated, and add it to the list of active vertices.

Formally, sets A(t) and U(t) of active and used vertices at time t ≥ 0 are defined as follows: Let A(0) = I and U(0) = ∅. For t > 0, choose some unused, active vertex vt ∈ A(t − 1) − U(t − 1), and give each neighbour of vt a mark. Then let A(t) be the union of A(t − 1) and the set of all vertices in Gn,p with at least r marks, and put U(t) = U(t − 1) ∪ {vt}. The process terminates at time t = τ, where τ = min{t ≥ 0 : A(t) = U(t)}, that is, when all active vertices have been used. It is easy to see that A(τ) = 〈I,Gn,p〉r.

Let S(t) = |A(t)| − |I|. By exploring the edges of Gn,p one step at a time, revealing the edges from vt only at time t, the random variables S(t) can be constructed in such a way that S(t) ∼ Bin(n − |I|, π(t)), where π(t) = P(Bin(t, p) ≥ r), see [84, Section 2]. Moreover, for s < t, we have that S(t) − S(s) ∼ Bin(n − |I|, π(t) − π(s)). Finally, it is shown that |〈I,Gn,p〉r| ≥ k if and only if τ ≥ k if and only if S(t) + |I| > t for all t < k. Thus to determine the size of the eventually active set 〈I,Gn,p〉r, it suffices to analyze the process S(t).

2.4.2.1 Activation by small sets

Making use of the binomial chain construction described in the previous Section 2.4.2, many results are developed in [84].
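Before discussing these results, here is a small illustrative sketch (ours, not taken from [84] or [118]) of the exploration process just described, for a realized graph given as adjacency lists; it records the trajectory S(t) together with the eventually active set A(τ).

```python
def binomial_chain(neighbours, seed, r):
    """Exploration view of r-bootstrap percolation: at each step reveal one
    unused active vertex v_t, mark its inactive neighbours, and activate any
    vertex reaching r marks.  Returns (S, A) where S[t] = |A(t)| - |I| and
    A = A(tau) = <I, G>_r."""
    active = set(seed)                     # A(t)
    used = set()                           # U(t)
    marks = {v: 0 for v in neighbours}
    S = [0]                                # S(0) = 0
    while active - used:
        v = (active - used).pop()          # some unused active vertex v_t;
        used.add(v)                        # the closure does not depend on this choice
        for u in neighbours[v]:
            if u not in active:            # marks of active vertices no longer matter
                marks[u] += 1
                if marks[u] >= r:
                    active.add(u)
        S.append(len(active) - len(seed))  # S(t)
    return S, active                       # tau = len(S) - 1 = |A(tau)|
```

In particular, |〈I,Gn,p〉r| = S(τ) + |I| = τ, and the event |〈I,Gn,p〉r| ≥ k can be read off the returned trajectory as S(t) + |I| > t for all t < k, as stated above.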
In this section we discuss two results which are relevant to the topic of Chapter 5 of this thesis (introduced in Section 2.6.2 below).

The following quantities play an important role in [84]. We denote
\[
k_r = k_r(\vartheta) = \left(\frac{r}{r-1}\right)^{2}\vartheta, \qquad \ell_r = \ell_r(\vartheta) = \frac{r-1}{r}\,k_r.
\]
For ε ∈ [0, 1], we define δε ∈ [0, ε] implicitly by
\[
\frac{\delta_\varepsilon^{\,r}}{r} = \delta_\varepsilon - \varepsilon_r, \qquad \varepsilon_r = \frac{r-1}{r}\,\varepsilon.
\]
(We note that ℓr, kr, δε correspond to ac, tc, ϕ(ε) in [84].)

As shown by Theorem 2.4.1 above, if
\[
p = p(n, \vartheta) = \left(\frac{\alpha_r}{n\vartheta^{r-1}}\right)^{1/r} = \left(\frac{(r-1)!}{n k_r^{r-1}}\right)^{1/r} \tag{2.4.1}
\]
then ℓr is the critical size for a random set to be contagious (see Section 2.4.2.2 for a heuristic explanation for this fact).

More precisely, the following results are proved in [84].

Theorem 2.4.4 ([84, Theorem 3.1]). Fix r ≥ 2. Let p be as in (2.4.1), where ϑ = ϑ(n) satisfies 1 ≪ ϑ ≪ n. Suppose that I = I(n) ⊂ [n] is independent of Gn,p and such that |I|/ℓr → ε, as n → ∞. If ε ∈ [0, 1), then with high probability |〈I,Gn,p〉r| = (δε + o(1))kr. On the other hand, if ε > 1, then with high probability |〈I,Gn,p〉r| = n(1 − o(1)).

(If np ≫ log n + (r − 1) log log n, then, in fact, with high probability I is contagious, that is |〈I,Gn,p〉r| = n, see [84, Theorem 3.1](iii).) Moreover, the following central limit theorem is established. Recall that a sequence of random variables Xn is asymptotically normal with mean µn and variance σ²n if (Xn − µn)/σn converges in distribution to a standard normal.

Theorem 2.4.5 ([84, Theorem 3.8(i)]). Fix r ≥ 2. Let p be as in (2.4.1), where ϑ = ϑ(n) satisfies 1 ≪ ϑ ≪ n. Suppose that I = I(n) ⊂ [n] is independent of Gn,p and such that |I|/ℓr → ε ∈ (0, 1), as n → ∞. Then |〈I,Gn,p〉r| is asymptotically normal with mean µ ∼ δεkr and variance σ² = δ′εkr, where δ′ε = δε^r(1 − δε^{r−1})^{−2}/r.

(See (3.13) and (3.22) in [84] for the definition of µ.) In particular, note that the mean and variance of |〈I,Gn,p〉r| are of the same order as kr.

2.4.2.2 Criticality of ℓr

Let p, ℓr, kr be as defined in the previous Section 2.4.2.1. In [84, Section 6] a heuristic is provided for the criticality of ℓr, which we recount here. By the law of large numbers, with high probability S(t) ≈ ES(t). A calculation shows that if |I| > ℓr then |I| + ES(t) ≥ t for all t < n − o(n), whereas if |I| < ℓr then already for t = kr we get |I| + ES(kr) < kr.

In particular, for t ≤ kr, since ϑ ≪ n we have that
\[
pt \;\le\; p k_r = O\big((\vartheta/n)^{1/r}\big) \ll 1.
\]
It follows that π(t) ∼ (tp)^r/r!. We therefore have for t = xkr that
\[
ES(x k_r) = (n - |I|)\,\pi(t) \sim \frac{x^{r}}{r}\,k_r \cdot \frac{k_r^{r-1}\,n p^{r}}{(r-1)!} = \frac{x^{r}}{r}\,k_r.
\]
If |I| < ℓr, then for x = 1 we have
\[
|I| + ES(k_r) < \ell_r + k_r/r = k_r.
\]

2.5 Graph bootstrap percolation

In this section, we discuss a variation of bootstrap percolation due to Bollobás [39], which is the topic of Chapter 6.

Fix a graph H. Following [39], H-bootstrap percolation is a cellular automaton that adds edges to a graph G = (V,E) by iteratively completing all copies of H missing a single edge. Formally, given a graph G0 = G, let Gi+1 be Gi together with every edge whose addition creates a subgraph that is isomorphic to H. For a finite graph G, this procedure terminates once Gτ+1 = Gτ, for some τ = τ(G). We denote the resulting graph Gτ by 〈G〉H. If 〈G〉H is the complete graph on V, the graph G is said to H-percolate, or equivalently, that G is H-percolating.

Balogh, Bollobás and Morris [24] study H-bootstrap percolation in the case that G = Gn,p and H = Kk. The case k = 4 is the minimal case of interest. Indeed, all graphs K2-percolate, and a graph K3-percolates if and only if it is connected. Thus the case K3 follows by a classical result of Erdős and Rényi [60].
If p = (logn+ ε)/n then Gn,p is K3-percolating withprobability exp(−e−ε)(1 + o(1)), as n→∞.One may define the critical thresholds for H-bootstrap percolation bypc(n,H) = inf {p > 0 : P(〈Gn,p〉H = Kn) ≥ 1/2} .It is expected that this property has a sharp threshold for H = Kk for allk, in the sense that for some pc = pc(k) we have that Gn,p is Kk-percolatingwith high probability for p > (1 + δ)pc and with probability tending to 0for p = (1− δ)pc. Some bounds for pc(n,Kk) are obtained in [24]. A mainresult of [24] identifies the order of the threshold for K4-percolation.Theorem 2.5.1 (Balogh et al. [24, Theorem 2]). Let pc(n,K4) denote thecritical threshold for K4-bootstrap percolation on Gn,p. As n→∞, we havethat pc(n,K4) = Θ(1/√n logn).Note that the order of pc(n,K4) coincides with that for the susceptibilityof Gn,p in the case that r = 2, see Theorem 2.4.3. This connection is discussed372.5. Graph bootstrap percolationfurther in Section 2.6.3 below. For larger k, the order of pc is known only upto a poly-logarithmic factor, see [24, Theorem 1].2.5.1 Clique processesIn [24], the clique process is introduced as a way of analyzing K4-percolationon graphs. This process plays a key role in Chapter 6.Definition 2.5.2. We say that three graphs Gi = (Vi, Ei) form a triangleif there are distinct vertices x, y, z such that x ∈ V1 ∩ V2, y ∈ V1 ∩ V3 andz ∈ V2 ∩ V3.In [24], the following observation is made.Lemma 2.5.3. Suppose that Gi = (Vi, Ei) are K4-percolating.(i) If |V1 ∩ V2| > 1 then G1 ∪G2 is K4-percolating.(ii) If the Gi form a triangle then G1 ∪G2 ∪G3 is K4-percolating.By these observations, the K4-percolation dynamics are classified in [24]as follows.Definition 2.5.4. A clique process for a graph G is a sequence (St)τt=1 ofcollections of subgraphs of G with the following properties:(i) S0 = E(G) is the edge set of G.(ii) For each t < τ , St+1 is constructed from St by either (a) mergingtwo subgraphs G1, G2 ∈ St with at least two common vertices, or (b)merging three subgraphs G1, G2, G3 ∈ St that form a triangle.(iii) Sτ is such that no further operations as in (ii) are possible.Lemma 2.5.5. Let G be a finite graph and (St)τt=1 a clique process for G.For all t ≤ τ , St is a collection of edge-disjoint, K4-percolating subgraphsof G. Furthermore, 〈G〉K4 is the edge-disjoint, triangle-free union of thecliques 〈H〉, H ∈ Sτ . Hence G is K4-percolating if and only if Sτ = {G}.In particular, if two clique processes for G terminate at Sτ and S ′τ ′, thennecessarily Sτ = S ′τ ′.382.6. Our resultsThe existence of such a concise description of the dynamics is the reasonwhy the results of [24] are stronger forK4-percolation than forKk-percolation,k > 4. Indeed, the bounds for pc(n,Kk) obtained in [24] for the cases k > 4hold for pc(n,H) for all graphs H in a certain class that in particular containsKk. As it stands now, the order of pc(n,K4) is unknown for k > 4.2.6 Our resultsFinally, we introduce our main contributions to the study of bootstrappercolation on random graphs, in relation to the results discussed above.The following results are proved in Part III.2.6.1 SusceptibilityThe susceptibility of Gn,p is analyzed in Chapter 4.Theorem 2.6.1 (Angel and Kolesnik [12]). Fix r ≥ 2 and α > 0. Letαr = (r − 1)!(r − 1r)2(r−1), p = θr(α, n) =(αn logr−1 n)1/r.If α > αr then with high probability Gn,p is susceptible. 
If α < αr then withhigh probability Gn,p has no contagious set of size r.As a result, we identity the sharp thresholds for the susceptibility of Gn,pas pc(n, r) ∼ θr(αr, n), improving the estimates given by Feige, Krivelevichand Reichman [62] in Theorem 2.4.3.We identify pc using the standard first and second moment methods.That being said, due to the fact that contagious sets are highly correlated,establishing the upper bound for pc involves a fairly involved application of thesecond moment method. Roughly speaking, we restrict to a sub-process of ther-bootstrap percolation process that evolves without forming triangles. As itturns out, triangle-free percolating subgraphs of Gn,p are much less correlated,and by using Mantel’s [102] theorem, their approximate independence isreadily established. It then remains to show that the threshold for this392.6. Our resultssub-process coincides with pc up to smaller order terms. See Section 4.2.4below for a more detailed outline of the proof.It is interesting to compare this result with Theorem 2.4.1. We find thatif p = θr(α, n), for some α = (1 + δ)αr, then with high probability Gn,p hasa contagious set of size r, however a random set (selected independently ofGn,p) is likely to be contagious only if it is of size (roughly) at least rr−1 logn.Moreover, for sub-critical p, we obtain the following information aboutthe influence of sets of size r.Theorem 2.6.2 (Angel and Kolesnik [12]). Fix r ≥ 2. Let p = θr(α, n),for some α ∈ (0, αr). With high probability the maximum of |〈I,Gn,p〉r| oversets I ⊂ [n] of size r is equal to (β∗ + o(1)) logn, where β∗(α) ∈ (0, ( rr−1)2)satisfiesr + β log(αβr−1(r − 1)!)− αβrr! − β(r − 2) = 0.In other words, for any δ > 0, with high probability there exist sets Iof size r that activate more than (1− δ)β∗ logn vertices, however none thatactivate more than (1 + δ)β∗ logn.2.6.2 Minimal contagious setsIn Chapter 5, we study minimal contagious sets in Gn,p. Recall that m(G, r)denotes the size of minimal contagious sets for a graph G. We obtain thefollowing improved bounds for m(Gn,p, r), for all r ≥ 2.Theorem 2.6.3 (Angel and Kolesnik [11]). Fix r ≥ 2. Suppose that ϑ = ϑ(n)satisfies 1 ϑ n. Letαr = (r − 1)!(r − 1r)2(r−1), p = p(n, ϑ) =(αrnϑr−1)1/r.Then, with high probability,m(Gn,p, r) ≥ rψ(1 + o(1))402.6. Our resultswhereψ = ψ(n, ϑ) = ϑlog(n/ϑ)and o(1) depends only on n.This result improves the lower bounds of Feige, Krivelevich and Re-ichman [62] in Theorem 2.4.2, noting that cr < r for all r ≥ 2. To givesome intuition for this significant improvement, recall (as discussed belowTheorem 2.4.2) that the bound m(Gn,p, r) ≥ crψ in Theorem 2.4.2 is provedsimply by noting that a graph of size k with a contagious set of size ` has atleast r(k − `) edges. On the other hand, in Chapter 5 we in a sense trackthe full trajectory of activation in percolating graphs, rather than using onlya rough estimate for graphs arrived at by such trajectories. Using (discrete)variational calculus, we identify the optimal trajectory from a set of size` in Gn,p to an eventually active set of k vertices. This leads to refinedbounds for the structure of percolating subgraphs of Gn,p with unusuallysmall contagious sets, and so an improved bound for m(Gn,p, r).Moreover, since cr → 2, our bound is larger by a factor of roughly r/2for large r. Hence the improvement of our bound increases with r. 
This isdue to the fact that the crude bound of r(k − `) for the number of edges ina graph of size k with a contagious set of size ` is an increasingly inaccurateestimate for the combinatorics of such graphs as r →∞.Hence, in particular, we find thatm(Gn,p, r)/ψ(n, ϑ) grows at least linearlyin r. It seems plausible that this is the truth, and that moreover, our boundis asymptotically sharp. In any case, as it stands now, a substantial gapremains between our linear lower bound and the super-exponential upperbound in Theorem 2.4.2. This upper bound has the advantage of beingproved by a procedure that with high probability locates a contagious setin polynomial time. That being said, this set is possibly much larger thana minimal contagious set, especially for large r. In future work, we hopeto (1) identify m(Gn,p, r) up to a factor of 1 + o(1) and (2) efficiently locatecontagious sets that are as close as possible to minimal.As a consequence, we obtain lower bounds for the critical thresholdpc(n, r, q) for the (q, r)-susceptibility of Gn,p (see Section 2.2.2).412.6. Our resultsCorollary 2.6.4 (Angel and Kolesnik [11]). Fix r ≥ 2. Suppose that r ≤q = q(n) n/ logn. As n→∞,pc(n, r, q) ≥(αr,qn logr−1 n)1/r(1 + o(1)),where αr,q = αr(r/q)r−1.We note that the results in Section 2.6.1 confirm that this bound is sharpin the special case q = r.These results follows by large deviation estimates for the number ofvertices eventually activated by a set that is smaller than the critical amount`r, as defined in Theorem 2.4.1 (and discussed further in Sections 2.4.2.1and 2.4.2.2 above).We let P (`, k) denote the probability that for a given set I ⊂ [n] (indepen-dent of Gn,p), with |I| = `, we have that |〈I,Gn,p〉r| ≥ k. Recall kr, `r, δε, εras defined in Section 2.4.2.1. The following is the key result of Chapter 5.Theorem 2.6.5 (Angel and Kolesnik [11]). Fix r ≥ 2. Let p be as in (2.4.1),where ϑ = ϑ(n) satisfies 1 ϑ n. Let ε ∈ [0, 1) and δ ∈ [δε, 1]. Supposethat `/`r → ε and k/kr → δ, as n → ∞. Then, as n → ∞, we have thatP (`, k) = exp[ξkr(1 + o(1))], where ξ = ξ(ε, δ) is equal to−δrr+(δ − εr) log(er−1δr/(δ − εr)), δ ∈ [δε, ε);(ε/r) log(eεr−1)− (r − 2)(δ − ε) + (r − 1) log(δδ/εε), δ ∈ [ε, 1],and o(1) depends only on n.We note that t = kr is the point at which the binomial chain S(t)becomes super-critical (see Section 2.4.2), so we have that P (ε`r, δkr) =eo(kr)P (ε`r, kr) for δ > 1.These estimates complement the central limit theorems (Theorem 2.4.5)of Janson, Łuczak, Turova and Vallier [84]. Indeed, since the mean andvariance of |〈I,Gn,p〉r| are of the same order (see Theorem 2.4.5), the eventthat |〈I,Gn,p〉r| ≥ δkr, for some δ ∈ (δε, 1], represents a large deviation from422.6. Our resultsthe typical behaviour. Hence −( rr−1)2ξ is the large deviations rate functioncorresponding to the events of interest {|〈I,Gn,p〉r| ≥ k}.2.6.3 K4-percolationIn Chapter 6, we study K4-bootstrap percolation on Gn,p. We identifythe sharp threshold as pc(n,K4) ∼ 1/√3n logn, improving the estimates ofBalogh, Bollobás and Morris [24] in Theorem 2.5.1, thereby solving Problem 2stated in [24].Theorem 2.6.6 (Kolesnik [91]). Let p =√α/(n logn). If α > 1/3 then Gn,pis K4-percolating with high probability. If α < 1/3 then with high probabilityGn,p does not K4-percolate.We note that the super-critical case α > 1/3 follows by the results inSection 2.6.1 (joint work with Angel [12]) in the case of r = 2, as explainedbelow the statement of the next theorem. 
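For concreteness, the K4-closure of Section 2.5 is also easy to compute directly on small instances. The following quadratic-time sketch is ours and purely illustrative (it is not the clique process of [24], and the function names are hypothetical): it repeatedly adds any missing edge {u, v} whose endpoints have two adjacent common neighbours, that is, any edge whose addition completes a copy of K4.

```python
import math
import random
from itertools import combinations

def k4_percolates(n, edges):
    """Return True if <G>_{K4} = K_n, where G = ([n], edges)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for u, v in combinations(range(n), 2):
            if v in adj[u]:
                continue
            common = sorted(adj[u] & adj[v])
            # adding uv completes a K4 iff u, v have two adjacent common neighbours
            if any(y in adj[x] for x, y in combinations(common, 2)):
                adj[u].add(v)
                adj[v].add(u)
                changed = True
    return all(len(adj[v]) == n - 1 for v in range(n))

def sample_and_test(n, alpha):
    """Sample G_{n,p} with p = sqrt(alpha/(n log n)) and test K4-percolation."""
    p = math.sqrt(alpha / (n * math.log(n)))
    edges = [(i, j) for i, j in combinations(range(n), 2) if random.random() < p]
    return k4_percolates(n, edges)
```

Of course, for the small values of n that such a brute-force sketch can handle, the asymptotics of Theorem 2.6.6 are not yet visible; the code is only meant to make the definition of 〈G〉K4 concrete.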
It thus remains to study thesub-critical case α < 1/3.In the sub-critical case, we also identify the size of the largest K4-percolating subgraphs of Gn,p.Theorem 2.6.7 (Kolesnik [91]). Let p =√α/(n logn), for some α ∈(0, 1/3). With high probability the largest clique in 〈Gn,p〉K4 has size (β∗ +o(1)) logn, where β∗(α) ∈ (0, 3) satisfies 3/2 + β log(αβ)− αβ2/2 = 0.By the results discussed in Section 2.6.1 (joint work with Angel [12]),it follows that with high probability 〈Gn,p〉K4 has cliques of size at least(β∗ + o(1)) logn. Our contribution is to show that these are typically thelargest cliques.The main ingredients in the proof are the large deviation estimatesdiscussed in Section 2.6.2 (joint work with Angel [11]) and a connectionwith the susceptibility of Gn,p, in the case of r = 2 (as observed in [12]).Indeed, since pc(n,K4) and pc(n, 2) are on the same order (see Theorems 2.4.3and 2.5.1), it is natural to ask how the two processes are related. We notethat if a graph G = (V,E) has a contagious pair {u, v} ⊂ V that is joined byan edge (u, v) ∈ E, then G is K4-percolating (see Section 4.2.2). In this case,432.6. Our resultswe call G a seed graph and (u, v) a seed edge. The above theorems show that,although a graph can K4-percolate in a variety of ways (see Section 2.5.1),up to smaller order terms pc(n,K4) coincides with the threshold for the eventthat Gn,p has a seed edge.It is a general phenomenon of Gn,p that often the threshold for a propertyof interest coincides with that of a more fundamental event. Moreover, evenstronger results hold in some cases. For example, one of the first resultson Gn,p, in the original paper [60], shows that with high probability Gn,pis connected (equivalently, K3-percolating) if and only if it has no isolatedvertices. Komlós and Szemerédi [92] showed that with high probability Gn,pis Hamiltonian if and only if its minimum degree is at least 2.In closing, we mention that it seems possible that K4-percolation is morecomplicated than K3-percolation. Perhaps, for p in the scaling window(see Section 2.2.1), the probability that Gn,p has a seed edge converges to aconstant in (0, 1), and with non-vanishing probability Gn,p is K4-percolatingdue instead to a small K4-percolating subgraph C of size O(1) that plays therole of a seed edge (i.e., is K4-percolating and causes Gn,p to K4-percolateby successively adding doubly connected vertices).44Part IIGeodesics in RandomSurfaces45Chapter 3Stability of Geodesics in theBrownian Map3.1 OverviewThe Brownian map is a random geodesic metric space arising as the scalinglimit of random planar maps. We strengthen the so-called confluence ofgeodesics phenomenon observed at the root of the map, and with this, revealseveral properties of its rich geodesic structure.Our main result is the continuity of the cut locus at typical points. Asmall shift from such a point results in a small, local modification to the cutlocus. Moreover, the cut locus is uniformly stable, in the sense that any twocut loci coincide outside a closed, nowhere dense set of zero measure.We obtain similar stability results for the set of points inside geodesics toa fixed point. Furthermore, we show that the set of points inside geodesicsof the map is of first Baire category. Hence, most points in the Brownianmap are endpoints.Finally, we classify the types of geodesic networks which are dense. 
Foreach k ∈ {1, 2, 3, 4, 6, 9}, there is a dense set of pairs of points which arejoined by networks of exactly k geodesics and of a specific topological form.We find the Hausdorff dimension of the set of pairs joined by each type ofnetwork. All other geodesic networks are nowhere dense.∗∗This chapter is joint work with Omer Angel and Grégory Miermont [13], to appear inthe Annals of Probabilty.463.2. Background and main results3.2 Background and main resultsA universal scaling limit of random planar maps has recently been identifiedby Le Gall [98] (triangulations and 2k-angulations, k > 1) and Miermont [108](quadrangulations) as a random geodesic metric space called the Brownianmap (M,d). In this chapter, we establish properties of the Brownian mapwhich are a step towards a complete understanding of its geodesic structure.The works of Cori and Vauquelin [52] and Schaeffer [119] describe abijection from well-labelled plane trees to rooted planar maps. The Brownianmap is obtained as a quotient of Aldous’ [7, 8] continuum random tree, orCRT, by assigning Brownian labels to the CRT and then identifying someof its non-cut-points, or leaves, according to a continuum analogue of theCVS-bijection (see Section 3.3.1). The resulting object is homeomorphic tothe sphere S2 (Le Gall and Paulin [100] and Miermont [106]) and of Hausdorffdimension 4 (Le Gall [96]) and is thus in a sense a random, fractal, sphericalsurface.Le Gall [97] classifies the geodesics to the root, which is a certain dis-tinguished point of the Brownian map (see Section 3.3.1), in terms of thelabel process on the CRT (see Section 3.3.2). Moreover, the Brownian mapis shown to be invariant in distribution under uniform re-rooting from thevolume measure λ on M (see Section 3.3.1). Hence, geodesics to typicalpoints exhibit a similar structure as those to the root. It thus remains toinvestigate geodesics from special points of the Brownian map.3.2.1 Geodesic netsA striking consequence of Le Gall’s description of geodesics to the root isthat any two such geodesics are bound to meet and then coalesce beforereaching the root, a phenomenon referred to as the confluence of geodesics(see Section 3.3.3). In fact, the set of points in the relative interior of ageodesic to the root is a small subset which is homeomorphic to an R-treeand of Hausdorff dimension 1 (see [97]).Definition 3.2.1. We call a subset γ ⊂ M a geodesic segment if (γ, d) is473.2. Background and main resultsisometric to a compact interval. The extremities of the geodesic segment arethe images, say x and y, of the extremities of the source interval, and we saythat γ is a geodesic segment between x and y (or from x to y if we insist ondistinguishing one orientation of γ).We will often denote a particular geodesic segment between x, y ∈M as[x, y], and denote its relative interior by (x, y) = [x, y]− {x, y}. (Since theremight be more than one such geodesic segment, we will be careful in liftingany ambiguity that might arise from this notation.) We define [x, y) and(x, y] similarly.Definition 3.2.2. For x ∈ M , the geodesic net of x, denoted G(x), is theset of points y ∈M that are contained in the relative interior of a geodesicsegment to x.Although geodesics to the root of the Brownian map are understood, thestructure of geodesics to general points remains largely mysterious. Indeed,the main obstacle in establishing the existence of the Brownian map is torelate a geodesic between a pair of typical points to geodesics to the root. 
Acompactness argument of Le Gall [96] yields scaling limits of planar mapsalong subsequences, however the question of uniqueness remained unresolvedfor some time. Finally, making use of Le Gall’s description of geodesics to theroot, Le Gall [98] and Miermont [108] show that distances to the root provideenough information to characterize the Brownian map metric. Let γ be ageodesic between points selected uniformly according to λ. (By the confluenceof geodesics phenomenon, the root of the map is almost surely disjoint fromγ.) In [98, 108] the set of points z ∈ γ such that the relative interior of anygeodesic from z to the root is disjoint from γ is shown to be small comparedto γ. Hence, roughly speaking, “most” points in “most” geodesics of theBrownian map are in a geodesic to the root. (See the discussion aroundequation (2) in [98] and [108, Section 2.3] for precise statements.)In this chapter, we show that for any two points x, y ∈M , points whichare in a geodesic to x but not in a geodesic to y are exceptional. Hence, to aconsiderable extent, the geodesic structure of the Brownian map is similar483.2. Background and main resultsas viewed from any point of the map, providing further evidence that it is,to quote Le Gall [95], “very regular in its irregularity.”Theorem 3.2.3. Almost surely, for all x, y ∈M , G(x) and G(y) coincideoutside a closed, nowhere dense set of zero λ-measure.Furthermore, for most points x ∈M , the effect of small perturbations ofx on G(x) is localized.Theorem 3.2.4. Almost surely, the function x 7→ G(x) is continuous almosteverywhere in the following sense.For λ-almost every x ∈ M , for any neighbourhood N of x, there is asub-neighbourhood N ′ ⊂ N so that G(x′)−N is the same for all x′ ∈ N ′.The uniform infinite planar triangulation, or UIPT, introduced by Angeland Schramm [14], is a random lattice which arises as the local limit ofrandom triangulations of the sphere. The case of quadrangulations, givingrise to the UIPQ, is due to Krikun [93]. We remark that Theorem 3.2.4 isin a sense a continuum analogue to a result of Krikun [94] (see also Curien,Ménard, and Miermont [56]) which shows that the “Schaeffer’s tree” of theUIPQ only changes locally after relocating its root.Next, we find that the union of all geodesic nets is relatively small.Definition 3.2.5. Let F = ⋃x∈M G(x) denote the set of points in therelative interior of a geodesic in (M,d). We refer to F as the geodesicframework and E = F c as the endpoints of the Brownian map.Theorem 3.2.6. Almost surely, the geodesic framework of the Brownianmap, F ⊂M , is of first Baire category.Hence, the endpoints of the Brownian map, E ⊂ M , is a residual sub-set. This property of the Brownian map is reminiscent of a result of Zam-firescu [138], which states that for most convex surfaces — that is, for allsurfaces in a residual subset of the Baire space of convex surfaces in Rnendowed with the Hausdorff metric — the endpoints form a residual set.493.2. Background and main results3.2.2 Cut lociRecall that the cut locus of a point p in a Riemannian manifold, first examinedby Poincaré [115], is the set of points q 6= p which are endpoints of maximal(minimizing) geodesics from p. This collection of points is more subtle thanmerely the set of points with multiple geodesics to p, and in fact, is generallythe closure thereof (see Klingenberg [90, Theorem 2.1.14]).In the Brownian map this equivalence breaks completely. 
Indeed, almostall (in the sense of volume, by the confluence of geodesics phenomenon andinvariance under re-rooting) and most (in the sense of Baire category, byTheorem 3.2.6) points are the end of a maximal geodesic, and every pointis joined by multiple geodesics to a dense set of points (see the note afterthe proof of Proposition 3.5.2). Moreover, whereas in the Brownian mapthere are points with multiple geodesics to the root which coalesce beforereaching the root, in a Riemannian manifold any (minimizing) geodesic whichis not the unique geodesic between its endpoints cannot be extended (see,for example, the “short-cut principle” discussed in Shiohama, Shioya andTanaka [124, Remark 1.8.1]).We introduce the following notions of cut locus for the Brownian map.Definition 3.2.7. For x ∈M , the weak cut locus of x, denoted S(x), is theset of points y ∈ M with multiple geodesics to x. The strong cut locus ofx, denoted C(x), is the set of points y ∈M to which there are at least twogeodesics from x that are disjoint in a neighbourhood of y.We will see that for most points x, it holds that S(x) = C(x) (Proposi-tion 3.5.3). However, in some sense, C(x) is better-behaved than S(x) forthe remaining exceptional points, and we will argue in Section 3.5.3 belowthat C(x) is more effective at capturing the essence of a cut-locus for themetric space (M,d).The construction of the Brownian map as a quotient of the CRT gives anatural mapping from the CRT to the map. Let ρ denote the root of themap. Cut-points of the CRT correspond to a dense subset S(ρ) ⊂ M ofHausdorff dimension 2 (see [97]). Le Gall’s description of geodesics revealsthat S(ρ) is almost surely exactly the set of points with multiple geodesics503.2. Background and main resultsto ρ (see Section 3.3.2). More specifically, for any y ∈ M , the number ofconnected components of S(ρ) − {y} is precisely the number of geodesicsfrom y to ρ. This is similar to the case of a complete, analytic Riemanniansurface homeomorphic to the sphere (see Poincaré [115] and Myers [113])where the cut locus S of a point x is a tree and the number of “branches”emanating from a point in S is exactly the number of geodesics to x.Since the strong cut locus of the root of the Brownian map correspondsto the CRT minus its leaves — that is, almost surely S(ρ) = C(ρ), where ρis the root (see Section 3.3.2) — it is a fundamental subset of the map.We obtain analogues of Theorems 3.2.3 and 3.2.4 for the strong cut locus.Theorem 3.2.8. Almost surely, for all x, y ∈M , C(x) and C(y) coincideoutside a closed, nowhere dense set of zero λ-measure.Theorem 3.2.9. Almost surely, the function x 7→ C(x) is continuous almosteverywhere in the following sense.For λ-almost every x ∈ M , for any neighbourhood N of x, there is asub-neighbourhood N ′ ⊂ N so that C(x′)−N is the same for all x′ ∈ N ′.Theorem 3.2.9 brings to mind the results of Buchner [44] and Wall [135],which show that the cut locus of a fixed point in a compact manifold iscontinuously stable under perturbations of the metric on an open, densesubset of its Riemannian metrics (endowed with the Whitney topology).As for the geodesic nets in Theorem 3.2.6, we show that the union of allstrong cut loci is a small subset of the map.Theorem 3.2.10. Almost surely, ⋃x∈M C(x) is of first Baire category.We remark that Gruber [75] (see also Zamfirescu [139]) shows that formost (in the sense of Baire category) convex surfaces X, for any point x ∈ X,the set of points with multiple geodesics to x is of first Baire category. 
Sincefor typical points x ∈ M , C(x) is exactly the set of points with multiplegeodesics to x (that is, C(x) = S(x), see Proposition 3.5.3), Theorem 3.2.10shows that this property holds almost surely for almost every point of theBrownian map. That being said, there is a dense set of atypical points D such513.2. Background and main resultsthat every x ∈ D is connected to all points outside a small neighbourhoodof x by multiple geodesics (see Proposition 3.5.2).3.2.3 Geodesic networksNext, we investigate the structure of geodesic segments between pairs ofpoints in the Brownian map.Definition 3.2.11. For x, y ∈ M , the geodesic network between x and y,denoted by G(x, y), is the set of points in some geodesic segment between xand y.Geodesic networks with one endpoint being the root of the map (or a typ-ical point by invariance under re-rooting) are well understood. As discussedin Section 3.2.2, for any y ∈ M , the number of connected components inS(ρ)− {y} gives the number of geodesics from y to ρ. Hence, by propertiesof the CRT, almost surely there is a dense set with Hausdorff dimension 2of points with exactly two geodesics to the root; a dense, countable set ofpoints with exactly three geodesics to the root; and no points connectedto the root by more than three geodesics. By invariance under re-rooting,it follows that the set of pairs that are joined by multiple geodesics is azero-volume subset of (M2, λ ⊗ λ) (see also Miermont [107]). Hence thevast majority of networks in the Brownian map consist of a single geodesicsegment. Furthermore, by Le Gall’s description of geodesics to the root andinvariance under re-rooting, geodesic segments from a typical point of theBrownian map have a specific topological structure.For x ∈M , let B(x, ε) denote the open ball of radius ε centred at x.Definition 3.2.12. We say that the ordered pair of distinct points (x, y) isregular if any two distinct geodesic segments between x and y are disjointinside, and coincide outside, a punctured ball centred at y of radius lessthan d(x, y). Formally, if γ and γ′ are geodesic segments between x andy, then there exists r ∈ (0, d(x, y)) such that γ ∩ γ′ ∩ B(y, r) = {y} andγ −B(y, r) = γ′ −B(y, r).For typical points x, all pairs (x, y) are regular (see Section 3.3.2).523.2. Background and main resultsWe note that this notion is not symmetric, that is, (x, y) being regulardoes not imply that (y, x) is regular. In fact, observe that (x, y) and (y, x)are regular if and only if there is a unique geodesic from x to y.A key property is the following.Lemma 3.2.13. If (x, y) is regular and γ is a geodesic segment between xand y, then for any point z in the relative interior of γ, the segment [x, z] ⊂ γis the unique geodesic segment between x and z. Hence, any points z 6= z′ inthe relative interior of γ are joined by a unique geodesic.Consequently, any geodesic segment γ′ to x that intersects the relativeinterior of γ at some point z coalesces with γ from that point on, that is,γ ∩B(x, d(x, z)) = γ′ ∩B(x, d(x, z)).Proof. Let (x, y) be regular and let γ be a geodesic segment between x and y.Assume that there are two distinct geodesic segments γ1, γ2 between z and x,where z is some point in the relative interior of γ. By adding the sub-segment[y, z] ⊂ γ to γ1 and γ2, we obtain two distinct geodesic segments betweeny and x that coincide in the non-empty neighbourhood B(y, d(y, z)) of y,contradicting the definition of regularity for (x, y). 
This gives the first partof the statement, and the second part is a straightforward consequence. We find that all except very few geodesic networks in the Brownian mapare, in the following sense, a concatenation of two regular networks.Definition 3.2.14. For (x, y) ∈M2 and j, k ∈ N, we say that (x, y) inducesa normal (j, k)-network, and write (x, y) ∈ N(j, k), if for some z in therelative interior of all geodesic segments between x and y, (z, x) and (z, y)are regular and z is connected to x and y by exactly j and k geodesicsegments, respectively.x yzuFigure 3.1: As depicted, (x, y) ∈ N(2, 3). Note that(u, x) does not induce a normal (j, k)-network.533.2. Background and main resultsIn particular, note if x, y are joined by exactly k geodesics and (x, y) isregular, then (x, y) ∈ N(1, k). (Take z to be a point in the relative interiorof the geodesic segment contained in all k segments from x to y.)Not all networks are normal (j, k)-networks. For instance, if (x, y) ∈N(j, k) and j > 1, then there is a point u ∈ G(x, y) so that u is joined tox by two geodesics with disjoint relative interiors. See Figure 3.1. Thatbeing said, most pairs induce normal (j, k)-networks. Moreover, for eachj, k ∈ {1, 2, 3}, there are many normal (j, k)-networks in the map. Hence, inparticular, we establish the existence of atypical networks comprised of morethan three geodesics (and up to nine).Theorem 3.2.15. The following hold almost surely.(i) For any j, k ∈ {1, 2, 3}, N(j, k) is dense in M2.(ii) M2 −⋃j,k∈{1,2,3}N(j, k) is nowhere dense in M2.By Theorem 3.2.15, there are essentially only six types of geodesic net-works which are dense in the Brownian map. See Figure 3.2.Figure 3.2: Theorem 3.2.15: Classification of networkswhich are dense in the Brownian map (up to symmetriesand homeomorphisms of the sphere).Since the geodesic net of the root, or a typical point by invariance underre-rooting, is a binary tree — which follows by the uniqueness of localminima of the label process Z, see [100, Lemma 3.1], and since G(ρ) is the543.2. Background and main resultstree [0, 1]/{dZ = 0}, see Section 3.3.2 — it can be shown using ideas in theproof of Theorem 3.2.16 below that the pairs of small dots near the largedots in the 3rd, 5th and 6th networks in Figure 3.2 are indeed distinct points.(That is, Theorem 3.2.15 would still hold if we were to further require thatnormal networks have this additional property.) For instance, in Figure 3.7below, note that all geodesic segments from y to y′ are sub-segments ofgeodesics from y to the typical point zn, and hence do not coalesce at thesame point. We omit further discussion on this small detail.It remains an interesting open problem to fully classify the types ofgeodesic networks in the Brownian map.Additionally, we obtain the dimension of the sets N(j, k), j, k ≤ 3.For a set A ⊂M , let dimA and dimPA denote its Hausdorff and packingdimensions, respectively (see Section 3.3.4).Theorem 3.2.16. Almost surely, we have dimN(j, k) = dimPN(j, k) =2(6− j − k), for all j, k ∈ {1, 2, 3}. Moreover, N(3, 3) is countable.We remark that since N(j, k), for any j, k ∈ {1, 2, 3}, is dense in M2(by Theorem 3.2.15) its Minkowski dimension is that of M2, which byProposition 3.3.5 below is almost surely equal to 8.Definition 3.2.17. For each k ∈ N, let P (k) ⊂M2 denote the set of pairsof points that are connected by exactly k geodesics.Theorems 3.2.15 and 3.2.16 imply the following results.Corollary 3.2.18. Put K = {1, 2, 3, 4, 6, 9}. 
The following hold almostsurely.(i) For each k ∈ K, P (k) is dense in M2.(ii) M2 −⋃k∈K P (k) is nowhere dense in M2.Corollary 3.2.19. Almost surely, we have that dimP (2) ≥ 6, dimP (3) ≥ 4,dimP (4) ≥ 4 and dimP (6) ≥ 2.We expect the lower bounds in Corollary 3.2.19 to give the correctHausdorff dimensions of the sets P (k), k ∈ K − {1, 9}. As discussed in553.2. Background and main resultsSection 3.2.2, P (1) is of full volume, and hence dimP (1) = 8. We suspectthat P (9) is countable. It would be of interest to determine if the set P (k)is non-empty for some k /∈ K, and whether there is any k 6∈ K for which ithas positive dimension. We hope to address these issues in future work.3.2.4 Confluence pointsOur key tool is a strengthening of the confluence of geodesics phenomenonof Le Gall [97] (see Section 3.3.3). We find that for any neighbourhood N ofa typical point in the Brownian map, there is a confluence point x0 betweena sub-neighbourhood N ′ ⊂ N and the complement of N . See Figure 3.3.Proposition 3.2.20. Almost surely, for λ-almost every x ∈M , the followingholds. For any neighbourhood N of x, there is a sub-neighbourhood N ′ ⊂ Nand some x0 ∈ N −N ′ so that all geodesics between any points x′ ∈ N ′ andy ∈ N c pass through x0.x0Figure 3.3: Proposition 3.2.20: All geodesics frompoints in N ′ to points in the complement of N ⊃ N ′pass through a confluence point x0.Definition 3.2.21. We say that a sequence of geodesic segments γn convergesto a geodesic segment γ, and write γn → γ, if γn converges to γ with respectto the Hausdorff topology.Since (M,d) is almost surely homeomorphic to S2, and hence almostsurely compact, the following lemma is a straightforward consequence ofthe Arzelà-Ascoli Theorem (see, for example, Bridson and Haefliger [43,Corollary 3.11]).563.2. Background and main resultsLemma 3.2.22. Almost surely, the set of geodesic segments in (M,d) iscompact (with respect to the Hausdorff topology).Our key result, Proposition 3.2.20, is related to the fact that manysequences of geodesic segments in the Brownian map converge in a strongersense.Definition 3.2.23. We say that a sequence of geodesic segments [xn, yn]converges strongly to [x, y], and write [xn, yn] ⇒ [x, y], if xn → x, yn → y,and for any geodesic segment [x′, y′] ⊂ (x, y) (excluding the endpoints) wehave that [x′, y′] ⊂ [xn, yn] for all sufficiently large n.Strong convergence is stronger than convergence in the Hausdorff topology.Indeed, if x′, y′ are ε away from x, y along [x, y], then for large n [x′, y′] ⊂[xn, yn]. Moreover, since d(xn, x′) ≤ d(xn, x) + ε for all such n, [xn, x′] iseventually contained in B(x, 2ε). Similarly, [y′, yn] is eventually containedin B(y, 2ε). In the Euclidean plane, or generic smooth manifolds, strongconvergence does not occur. In contrast, in the Brownian map it is the norm,as we shall see below. In light of this we also make the following definition.Definition 3.2.24. A geodesic segment γ is called a stable geodesic ifwhenever [xn, yn] → γ we also have [xn, yn] ⇒ γ. Otherwise, γ is called aghost geodesic.Proposition 3.2.25. Almost surely, for λ-almost every x ∈ M , for ally ∈M , all sub-segments of all geodesic segments [x, y] are stable.Proposition 3.2.20 follows by combining Proposition 3.2.25 with theconfluence of geodesics phenomenon and the fact that (M,d) is almost surelycompact (see Section 3.4).In closing, we remark that it would be interesting to know if Proposi-tion 3.2.25 holds for all x ∈M , that is, are all geodesics in M stable, or arethere any ghost geodesics? 
In light of this we also make the following definition.

Definition 3.2.24. A geodesic segment γ is called a stable geodesic if whenever [xn, yn] → γ we also have [xn, yn] ⇒ γ. Otherwise, γ is called a ghost geodesic.

Proposition 3.2.25. Almost surely, for λ-almost every x ∈ M, for all y ∈ M, all sub-segments of all geodesic segments [x, y] are stable.

Proposition 3.2.20 follows by combining Proposition 3.2.25 with the confluence of geodesics phenomenon and the fact that (M, d) is almost surely compact (see Section 3.4).

In closing, we remark that it would be interesting to know if Proposition 3.2.25 holds for all x ∈ M, that is, are all geodesics in M stable, or are there any ghost geodesics? Ghost geodesics have various properties, and in particular they intersect every other geodesic in at most one point. It would be quite surprising if such geodesics exist, and we hope to rule them out in future work. We thus expect an analogue of Proposition 3.2.20 to hold for all x ∈ M. If so, then as a consequence, we would obtain the following result.

Conjecture 3.2.26. Almost surely, the geodesic framework of the Brownian map, F ⊂ M, is of Hausdorff dimension 1.

In this way, we suspect that although the Brownian map is a complicated object of Hausdorff dimension 4, it has a relatively simple geodesic framework which is of first Baire category (Theorem 3.2.6) and Hausdorff dimension 1.

3.3 Preliminaries

In this section, we briefly recount the construction of the Brownian map and what is known regarding its geodesics.

3.3.1 The Brownian map

Fix q ∈ {3} ∪ 2(N + 1) and set cq equal to 6^{1/4} if q = 3, or (9/(q(q − 2)))^{1/4} if q > 3. Let Mn denote a uniform q-angulation of the sphere (see Le Gall and Miermont [99]) with n faces, and dn the graph distance on Mn scaled by cq n^{−1/4}. The works of Le Gall [98] and Miermont [108] (for q = 4) show that in the Gromov-Hausdorff topology on isometry classes of compact metric spaces (see Burago, Burago and Ivanov [45]), (Mn, dn) converges in distribution to a random metric space called the Brownian map (M, d). The Brownian map has also been identified as the scaling limit of several other types of maps, see [1, 2, 29, 36, 98].

The construction of the Brownian map involves a normalized Brownian excursion e = {et : t ∈ [0, 1]}, a random R-tree (Te, de) indexed by e, and a Brownian label process Z = {Za : a ∈ Te}. More specifically, define Te = [0, 1]/{de = 0} as the quotient under the pseudo-distance

de(s, t) = es + et − 2 min{eu : s ∧ t ≤ u ≤ s ∨ t},   s, t ∈ [0, 1],

and equip it with the quotient distance, again denoted by de. The random metric space (Te, de) is Aldous' continuum random tree, or CRT. We let pe : [0, 1] → Te denote the canonical projection. Conditionally given e, Z is a centred Gaussian process satisfying E[(Zs − Zt)^2] = de(s, t) for all s, t ∈ [0, 1]. The random process Z is the so-called head of the Brownian snake (see [99]). Note that Z is constant on each equivalence class pe^{−1}(a), a ∈ Te. In this sense, Z is Brownian motion indexed by the CRT.

Analogously to the definition of de, we put

dZ(s, t) = Zs + Zt − 2 max{ inf_{u∈[s,t]} Zu, inf_{u∈[t,s]} Zu },   s, t ∈ [0, 1],

where we set [s, t] = [0, t] ∪ [s, 1] in the case that s > t. Then, to obtain a pseudo-distance on [0, 1], we define

D∗(s, t) = inf{ Σ_{i=1}^{k} dZ(si, ti) : s1 = s, tk = t, de(ti, si+1) = 0 },   s, t ∈ [0, 1].

Finally, we set M = [0, 1]/{D∗ = 0} and endow it with the quotient distance induced by D∗, which we denote by d. An easy property (see [105, Section 4.3]) of the Brownian map is that de(s, t) = 0 implies D∗(s, t) = 0, so that M can also be seen as a quotient of Te, and we let Π : Te → M denote the canonical projection, and put p = Π ◦ pe. Almost surely, the process Z attains a unique minimum on [0, 1], say at t∗. We set ρ = p(t∗). The random metric space (M, d) = (M, d, ρ) is called the Brownian map and we call ρ its root. Being the Gromov-Hausdorff limit of geodesic spaces, (M, d) is almost surely a geodesic space (see [45]).

Almost surely, for every pair of distinct points s ≠ t ∈ [0, 1], at most one of de(s, t) = 0 or dZ(s, t) = 0 holds, except in the particular case {s, t} = {0, 1}, where both identities hold simultaneously (see [100, Lemma 3.2]).
Hence,only leaves (that is, non-cut-points) of Te are identified in the constructionof the Brownian map; and this occurs if and only if they have the samelabel and along either the clockwise or counter-clockwise, contour-orderedpath around Te between them, one only finds vertices of larger label. Thus,as mentioned at the beginning of Section 3.2, in the construction of theBrownian map, (Te, Z) is a continuum analogue for a well-labelled plane tree,and the quotient by {D∗ = 0} for the CVS-bijection (which, as discussedin Section 3.2, identifies well-labelled plane trees with rooted planar maps).593.3. PreliminariesSee Section 3.3.2 for more details.Lastly, we note that although the Brownian map is a rooted metric space,it is not so dependent on its root. The volume measure λ on M is definedas the push-forward of Lebesgue measure on [0, 1] via p. Le Gall [97] showsthat the Brownian map is invariant under re-rooting in the sense that if Uis uniformly distributed over [0, 1] and independent of (M,d), then (M,d, ρ)and (M,d,p(U)) are equal in law. Hence, to some extent, the root of themap is but an artifact of its construction.3.3.2 Simple geodesicsRecall that a corner of a vertex v in a discrete plane tree T is a sectorcentred at v and delimited by edges which precede and follow v along acontour-ordered path around T . Leaves of a tree have exactly one corner, andin general, the number of corners of v is equal to the number of connectedcomponents in T − {v}. Similarly, we may view the R-tree Te as havingcorners, however in this continuum setting all sectors reduce to points. Hence,for the purpose of the following (informal) discussion, let us think of eacht ∈ [0, 1] as corresponding to a corner of Te with label Zt.Put Z∗ = Zt∗ . As it turns out, d(ρ,p(t)) = Zt − Z∗ for all t ∈ [0, 1](see [96]). In other words, up to a shift by the minimum label Z∗, theBrownian label of a point in Te is precisely the distance to ρ from thecorresponding point in the Brownian map.All geodesics to ρ are simple geodesics, constructed as follows. Fort ∈ [0, 1] and ` ∈ [0, Zt−Z∗], let st(`) denote the point in [0, 1] correspondingto the first corner with label Zt − ` in the clockwise, contour-ordered patharound Te beginning at the corner corresponding to t. For each such t, theimage of the function Γt : [0, Zt −Z∗]→M taking ` to p(st(`)) is a geodesicsegment from p(t) to ρ. Moreover, the main result of [97] shows that allgeodesics to ρ are of this form. Hence, the geodesic net of the root, G(ρ), isprecisely the set of cut-points of the R-tree TZ = [0, 1]/{dZ = 0} projectedinto M .These results mirror the fact that from each corner of a labelled, discrete603.3. Preliminariesplane tree, the CVS-bijection draws geodesics to the root of the resultingmap in such a way that the label of a vertex visited by any such geodesicequals the distance to the root. See [95, 97] for further details.Moreover, since the cut-points of Te are its vertices with multiple corners,we see that the set S(ρ) (discussed in Section 3.2.2) of points with multiplegeodesics to ρ is exactly the set of cut-points of the R-tree Te = [0, 1]/{de = 0}projected into M .Furthermore, since points in S(ρ) correspond to leaves of TZ (see [100,Lemma 3.2]), geodesics to the root of the map (or a typical point, by invari-ance under re-rooting) have a particular topological structure, as discussedin Section 3.2.3. We state this here for the record.Proposition 3.3.1. 
Almost surely, for λ-almost every x, for all y ∈ M ,(x, y) is regular.Hence, as mentioned in Section 3.2.2, we have that S(ρ) = C(ρ). Thatis, all points with multiple geodesics to the root are in the strong cut locusof the root.3.3.3 Confluence at the rootAs discussed in Section 3.2.1, a confluence of geodesics is observed at theroot of the Brownian map. Combining this with invariance under re-rooting,the following result is obtained.Lemma 3.3.2 (Le Gall [97, Corollary 7.7]). Almost surely, for λ-almostevery x ∈M , the following holds. For every ε > 0 there is an η ∈ (0, ε) sothat if y, y′ ∈ B(x, ε)c, then any pair of geodesics from x to y and y′ coincideinside of B(x, η).Moreover, geodesics to the root of the map tend to coalesce quickly.For t ∈ [0, 1], let γt denote the image of the simple geodesic Γt from p(t)to the root of the map ρ (see Section 3.3.2).Lemma 3.3.3 (Miermont [108, Lemma 5]). Almost surely, for all s, t ∈ [0, 1],γs and γt coincide outside of B(p(s), dZ(s, t)).613.3. PreliminariesWe require the following lemma.Lemma 3.3.4. Almost surely, for λ-almost every x ∈M , the following holds.For any y ∈M and neighbourhood N of y, there exists a sub-neighbourhoodN ′ ⊂ N so that if y′ ∈ N ′, then any geodesic from x to y′ coincides with ageodesic from x to y outside of N .Proof. Let ρ denote the root of the map. Let y ∈M and a neighbourhoodN of y be given. Select ε > 0 so that B(y, ε) ⊂ N . Let Nε denote the set ofpoints y′ ∈M with the property that for all t′ ∈ [0, 1] for which p(t′) = y′,there exists some t ∈ [0, 1] so that p(t) = y and dZ(t, t′) < ε. As discussed inSection 3.3.2, Le Gall [97] shows that all geodesics to ρ are simple geodesics.Hence, by Lemma 3.3.3, any geodesic from ρ to a point y′ ∈ Nε coincideswith some geodesic from ρ to y outside of N .We claim that Nε is a neighbourhood of y. To see this, note that ifp(tn) = yn → y in (M,d), then there is a subsequence tnk so that for somety ∈ [0, 1], we have that tnk → ty as k → ∞. Hence dZ(ty, tnk) < ε for alllarge k, and since p is continuous (see [97]), p(ty) = y. Therefore, for anyyn → y in (M,d), yn /∈ Nε for at most finitely many n, giving the claim.Hence the lemma follows by invariance under re-rooting. We remark that the size of N ′ in Lemma 3.3.4 depends strongly on x andy. For instance, for a fixed ε > 0 and convergent sequences of typical pointsxn (that is, points satisfying the statement of Lemma 3.3.4) and generalpoints yn, for each n let ηn > 0 be such that the statement of the lemmaholds for the pair xn, yn with Nn = B(yn, ε) and N ′n = B(yn, ηn). It is quitepossible that ηn → 0 as n→∞.3.3.4 DimensionsFinally, we collect some facts about the dimension of various subsets of theBrownian map. These statements are easily derived from established results,but are not explicitly stated in the literature.For a metric space X ⊂ M , let dimX denote its Hausdorff dimension,dimPX its packing dimension, and DimX (resp. DimX) its lower (resp.623.3. Preliminariesupper) Minkowski dimension. If the lower and upper Minkowski dimensionscoincide, we denote their common value by DimX. We note that for anymetric space X we havedimX ≤ DimX ≤ DimX and dimX ≤ dimPX ≤ DimX.See Mattila [103], for instance, for detailed definitions and other propertiesof these dimensions. 
(For example, dimA = inf{t : Ht(A) = 0}, where Ht isthe Hausdorff measure, defined by Ht(A) = limδ↓0Htδ(A), where Htδ(A) isthe infimum over sums ∑i δti such that there is a countable cover of A bysets Ai with diameters δi ≤ δ.)We require the following result, which is implicit in Le Gall’s [96] proofthat dimM = 4. For completeness, we include a proof via the uniformvolume estimates of balls in the Brownian map.Proposition 3.3.5. Almost surely, for any non-empty, open subset U ⊂M ,we have that λ(U) > 0 (hence λ has full support) and dimU = dimP U =DimU = 4.Proof. Let a non-empty, open subset U ⊂M be given. Fix some arbitraryη > 0.By [108, Lemma 15], there is a c ∈ (0,∞) and ε0 > 0 so that for allε ∈ (0, ε0) and x ∈ M , we have that λ(B(x, ε)) ≥ cε4+η. In particular,λ(U) > 0. For ε > 0, let N(ε) denote the number of balls of radius ε requiredto coverM . By a standard argument, it follows that there exists a c′ ∈ (0,∞)so that for all ε ∈ (0, 2ε0) we have N(ε) ≤ c′ε−(4+η). It follows directly thatDimM ≤ 4 + η, and the same bound holds for U ⊂M .On the other hand, by [108, Lemma 14] (a consequence of [96, Corollary6.2]), there is a C ∈ (0,∞) so that for all ε > 0 and x ∈ M , we havethat λ(B(x, ε)) ≤ Cε4−η. In particular, for all ε > 0 and x ∈ U we haveλ(B(x, ε) ∩ U) ≤ Cε4−η. It follows that dimU ≥ 4 − η (see, for example,Falconer [61, Exercise 1.8]).Since η > 0 is arbitrary, the general dimension inequalities imply theclaim. 633.3. PreliminariesDefinition 3.3.6. For x ∈ M , and k ≥ 1 or k = ∞, let Sk(x) denote theset of points y ∈M with exactly k geodesics to x.We believe that S∞(x) is empty for all x. In fact, it is plausible that allSk(x) are empty for all k > k0 (perhaps even k0 = 9).In particular, the weak cut locus S(x), as defined in Section 3.2.2, isequal to S∞(x) ∪ ⋃k≥2 Sk(x). As discussed in Section 3.2.3, by Le Gall’sdescription of geodesics to the root, properties of the CRT, and invarianceunder re-rooting, we have the following result.Proposition 3.3.7. Almost surely, for λ-almost every x ∈M(i) S(x) = S2(x) ∪ S3(x);(ii) S2(x) is dense, and has Hausdorff dimension 2 (and measure 0);(iii) S3(x) is dense and countable.We observe that the proof in [97, Proposition 3.3] that S(ρ) is almostsurely of Hausdorff dimension 2 gives additional information.Proposition 3.3.8. Almost surely, for λ-almost every x ∈ M , for anynon-empty, open set U ⊂M and each k ∈ {1, 2, 3}, we have thatdim(Sk(x) ∩ U) = dimP(Sk(x) ∩ U) = 2(3− k).Proof. By invariance under re-rooting, it suffices to prove the claim holdsalmost surely when x = ρ is the root of the map.Let a non-empty, open subset U ⊂M be given.Let S = S(x) and Si = Si(x) for i = 1, 2, 3. By Proposition 3.3.7(i),S = S2 ∪ S3 and M − {x} = S1 ∪ S.First, we note that by Proposition 3.3.7(iii), S3 ∩ U is countable, and sohas Hausdorff and packing dimension 0.From [97] we have that S is the image of the cut-points (or skeleton) ofthe CRT, Sk ⊂ Te, under the projection Π : Te →M . Moreover, Π is Höldercontinuous with exponent 1/2− ε for any ε > 0, and restricted to Sk, Π is ahomeomorphism from Sk onto S.Note that Sk is of packing dimension 1, being the countable union of setswhich are isometric to line segments (recall that the packing dimension of a643.4. Confluence near the rootcountable union of sets is the supremum of the dimension of the sets). Hence,by the Hölder continuity of Π, it follows that dimP S ≤ 2 (see, for instance,[103, Exercise 6, p. 
108]) and so in particular, we find that dimP(S ∩U) ≤ 2.On the other hand, by the density of S in M and since Π is a homeo-morphism from Sk to S, we see that there is a geodesic segment in Sk thatis projected to a path in S ∩ U . In the proof of [97, Proposition 3.3] it isshown that the Hausdorff dimension of any such path is at least 2. Hencedim(S ∩ U) ≥ 2.Altogether, by the general dimension inequality dimA ≤ dimPA, we findthat S ∩ U has Hausdorff and packing dimension 2.Therefore, since S3 ∩ U has Hausdorff and packing dimension 0 andS = S2 ∪ S3, it follows that S2 ∩ U has Hausdorff and packing dimension 2.Moreover, since by Proposition 3.3.5, U has Hausdorff and packing dimension4 and M − {x} = S1 ∪ S, we find that S1 ∩ U has Hausdorff and packingdimension 4. In closing, we note that Propositions 3.3.7 and 3.3.8 imply the followingresult.Proposition 3.3.9. Almost surely, for λ-almost every x ∈M , S(x) is dense,dimS(x) = dimP S(x) = 2, and λ(S(x)) = 0.3.4 Confluence near the rootWe show that a confluence of geodesics is observed near the root of the Brow-nian map, strengthening the results discussed in Section 3.3.3. Specifically,we establish the following result.Lemma 3.4.1. Almost surely, for λ-almost every x ∈ M , the followingholds. For any y ∈ M and neighbourhoods Nx of x and Ny of y, there aresub-neighbourhoods N ′x and N ′y so that if x′ ∈ N ′x and y′ ∈ N ′y, then anygeodesic segment from x′ to y′ coincides with some geodesic segment from xto y outside of Nx ∪Ny.653.4. Confluence near the rootWe note that Lemma 3.4.1 strengthens Lemma 3.3.4 in that it allows forperturbations of both endpoints of a geodesic.Once Lemma 3.4.1 is established, our key result is a straightforwardconsequence of Lemma 3.3.2 and the fact that the Brownian map is almostsurely compact.Proof of Proposition 3.2.20. By invariance under re-rooting, it suffices toprove the claim when x = ρ is the root of the map. Let an (open) neighbour-hood N of x be given. By Lemma 3.3.2, there is a point x0 ∈ N −{x} whichis contained in all geodesic segments between x and points y ∈ N c. Hence,by Lemma 3.4.1, for each y ∈ N c there is an ηy > 0 so that x0 is contained inall geodesic segments between points x′ ∈ B(x, ηy) and y′ ∈ B(y, ηy). SinceN c is compact, it can be covered by finitely many balls B(y, ηy), say withy ∈ Y . Put N ′ = B(x,miny∈Y ηy). If y0 ∈ N c, then y0 ∈ B(y, ηy) for somey ∈ Y , and thus all geodesics from points x′ ∈ N ′ ⊂ B(x, ηy) to y0 passthrough x0. The rest of this section contains the proof of Lemma 3.4.1. By invarianceunder re-rooting, we may and will assume that x is in fact the root of theBrownian map. In rough terms, we must rule out the existence of a sequenceof geodesic segments [xn, yn] converging to a geodesic segment [x, y], but notconverging strongly in the sense given in Section 3.2.4.For the remainder of this section we fix a realization of the Brownianmap exhibiting the almost sure properties of the random metric space (M,d)that will be required below, notably the fact that M is homeomorphic tothe 2-dimensional sphere. Slightly abusing notation, let us refer to thisrealization as (M,d). We also fix a point y 6= x ∈M and a geodesic segmentγ = [x, y] between x and y.We utilize a dense subset T ⊂M of points, which we refer to as typicalpoints, containing the root x, and such that(i) the claims of Proposition 3.3.1 and Lemma 3.3.4 hold for all u ∈ T ;(ii) for each u, v ∈ T , there is a unique geodesic from u to v.Such a set exists almost surely. 
For example, the set of equivalence classescontaining rational points almost surely works. We may assume that T exists663.4. Confluence near the rootfor the particular realization of (M,d) we have selected. It is in fact possibleto choose T to have full λ-measure, but for now, we only need it to be densein M .In what follows, we will at times shift our attention to the homeomorphicimage of a neighbourhood of γ in which our arguments are more transparent.Whenever doing so, we will appeal only to topological properties of the map.We let dE be the Euclidean distance on C, and for w ∈ C and r > 0, we letBE(w, r) be the open Euclidean ball centered at w with radius r.Fix a homeomorphism τ from M to Cˆ. The image of γ under τ is asimple arc in Cˆ. Let φ be a homeomorphism from this arc onto the unitinterval I = [0, 1] ⊂ R ⊂ C, with φ(τ(x)) = 0 and thus φ(τ(y)) = 1. By avariation of the Jordan-Schönflies Theorem (see Mohar and Thomassen [110,Theorem 2.2.6]), φ can be extended to a homeomorphism from Cˆ onto Cˆ.Hence φ ◦ τ |γ can be extended to a homeomorphism from M to Cˆ sending γonto I. We fix such a homeomorphism, and denote it by ψ.Since M is homeomorphic to Cˆ, once the geodesic γ is fixed we can thinkof the Brownian map as just Cˆ with a random metric (for which [0, 1] is ageodesic). The reader may well do this, and then ψ becomes the identity.We do not take this route, since that would require showing that ψ can beconstructed in a measurable way, which we prefer to avoid.Definition 3.4.2. Let H+ = {w ∈ C : Imw > 0} (resp. H− = {w ∈ C :Imw < 0}) denote the open upper (resp. lower) half-plane of C. We refer toL = ψ−1(H+) (resp. R = ψ−1(H−)) as the left (resp. right) side of γ.Lemma 3.4.3. Let u, v ∈ γ. For all δ > 0, there are typical points u` ∈B(u, δ) ∩ L ∩ T and v` ∈ B(v, δ) ∩ L ∩ T so that [u`, v`] − γ is containedin (B(u, δ) ∪B(v, δ)) ∩ L. (See Figure 3.4.) An analogous statement holdsreplacing L with R.Proof. Let δ > 0 and u, v ∈ γ be given. We only discuss the argument forthe left side of γ, since the two cases are symmetrical. Moreover, we mayassume that u, v, x, y are all distinct. Indeed, suppose the lemma holds withdistinct u, v, x, y. If we shift u, v along γ by at most η > 0 and apply the673.4. Confluence near the rootψ(u) ψ(v)0 1ψ(u`) ψ(v`)Figure 3.4: Lemma 3.4.3: [u`, v`] − γ is contained in(B(u, δ) ∪B(v, δ)) ∩ L (as viewed through the homeo-morphism ψ).lemma with δ′ = δ−η, the resulting u`, v` will satisfy the requirements of thelemma for u, v and δ. Without loss of generality, we further assume x, u, v, yappear on γ in that order.We may and will assume that δ < d(u, x) ∧ d(v, y). In particular, B(u, δ)and B(v, δ) do not contain the extremities x, y of γ. Let δ′ > 0 be smallenough so that BE(ψ(v), δ′) ⊂ ψ(B(v, δ)). Note that the Euclidean ballBE(ψ(v), δ′) does not contain 0, 1 ∈ C, and so N = ψ−1(BE(ψ(v), δ′)) doesnot intersect the extremities x, y of γ.Let us apply Lemma 3.3.4 to the points x, v (using the fact that x istypical) and the neighbourhood N = ψ−1(BE(ψ(v), δ′)) of v defined above.According to this lemma, there exists a neighbourhood N ′ ⊂ N of v suchthat any geodesic segment γ′ between a point v′ ∈ N ′ and x coincides withsome geodesic between v and x outside N . Since x, y /∈ N , γ′ must firstencounter γ (if we see γ′ as parameterized from v′ to x) at a point w inthe relative interior of γ. 
Since (x, y) is regular, we apply Lemma 3.2.13 toconclude that γ and γ′ coincide between w and x and are disjoint elsewhere.If we further assume that v′ ∈ N ′ ∩ L is in the left side of γ, then weclaim that the sub-arc [v′, w) ⊂ γ′ is contained in L. Indeed, ψ([v′, w)) iscontained in the Euclidean ball BE(ψ(v), δ′), starts in H+, and is disjoint ofI, and so, it is contained in the upper half of the ball.Since T is dense in M , we can take some typical v` ∈ N ′ ∩ L ∩ T . Forthis choice, the geodesic segment [x, v`] is unique, and [x, v`]− γ is includedin B(v, δ) ∩ L.Assume also δ < 12d(u, v). By a similar argument, in which v` assumesthe role of x (which is a valid assumption since v` ∈ T ), for any u′ close683.4. Confluence near the rootenough to u, any geodesic [u′, v`] coalesces with [x, v`] within B(u, δ). Takingsuch a u′ = u` in T ∩L, we get that [v`, u`]− [v`, x] ⊂ B(u, δ)∩L, and hence[u`, v`]− γ ⊂ (B(u, δ) ∪B(v, δ)) ∩ L, as required. In the next lemma, recall the two notions of convergence (standard andstrong) of geodesic segments given in Section 3.2.4.Lemma 3.4.4. Suppose that [x′, y′] ⊂ γ and [xn, yn] → [x′, y′] as n → ∞.Then we have the strong convergence [xn, yn]⇒ [x′, y′].The proof is somewhat involved. The general idea of the proof is touse Lemma 3.4.3 to obtain geodesic segments γ` = [u`, v`] and γr = [ur, vr]between typical points in the left and right sides of γ, whose intersectionγ` ∩ γr contains a large segment from γ. Since γ` and γr are the uniquegeodesics between their (typical) endpoints, we deduce that γn containsγ` ∩ γr for all large n. See Figure 3.5.Proof. Let γn = [xn, yn] and γ′ = [x′, y′], such that γn → γ′, as in the lemmabe given.Let ε > 0 and put γ′ε = γ′ − (B(x′, ε) ∪ B(y′, ε)). We show that γncontains γ′ε for all large n. Since γn → γ′ (and hence xn → x′ and yn → y′)this implies that γn ⇒ γ′, as required.We may assume that ε < 2−1d(x′, y′). Let u (resp. v) denote the pointin γ′ at distance ε/2 from x′ (resp. y′). By Lemma 3.4.3, there are pointsu` ∈ B(u, ε/4) ∩ L ∩ T and v` ∈ B(v, ε/4) ∩ L ∩ T such that [u`, v`] − γis contained in (B(u, ε/4) ∪ B(v, ε/4)) ∩ L. We also let ur, vr be definedsimilarly, replacing L by R everywhere. Note that the geodesic segments[u`, v`] and [ur, vr] are unique since the extremities are all in T . Moreover,by our choice of ε, u, v, the segments [u`, v`] and [ur, vr] intersect γ and aredisjoint from {x′, y′}. Putδ = 12 min{d(u`, γ), d(v`, γ), d(ur, γ), d(vr, γ)}and note that δ > 0. Let [γ]δ = {z ∈M : d(z, γ) < δ} be the δ-neighbourhoodof γ in M .693.4. Confluence near the rootFor η > 0, let us write Vη = {w ∈ C : dE(w, I) < η} for the η-neighbourhood of I in C. Let η1 > 0 be such that Vη1 ⊂ ψ([γ]δ). Suchan η1 exists since, otherwise, we could find a sequence (zn) of points in Msuch that d(zn, γ) ≥ δ but dE(ψ(zn), I)→ 0 as n→∞, a clear contradictionsince ψ(γ) = I and (zn) has convergent subsequences.Note that ψ(u`), ψ(v`), ψ(ur), ψ(vr) /∈ Vη1 by the definition of δ. PutI` = ψ([u`, v`]), and fix η2 > 0 such thatη2 < dE(ψ(x′), I`) ∧ dE(ψ(y′), I`) ,which is possible since [u`, v`] does not intersect {x′, y′}. Finally, we letη` = η1 ∧ η2, and similarly define ηr, and set η = η` ∧ ηr.Consider I` as a parametrized simple path from ψ(u`) to ψ(v`). Thispath contains a single segment of I, since the geodesic [u`, v`] is unique. Letu′′` , v′′` be defined by I` ∩ I = [ψ(u′′` ), ψ(v′′` )], with u′′` the endpoint closer to x.Let the last point at which I` enters (the closure of) Vη before hitting I beψ(u′`). 
Let the first point it exits Vη after separating from I be ψ(v′`). SeeFigure 3.5. Let H` denote the connected component of Vη − ψ([u′`, v′`]) thatis contained in H+. Replacing u`, v` with ur, vr in the arguments above, weobtain u′′r , v′′r , Hr. Note that our choice of η implies that ψ(x′) and ψ(y′)are farther than η away (with respect to dE) from H`, Hr.Since γn → γ′, we have that for every n large enough, ψ(γn) ⊂ Vη,ψ(xn) ∈ BE(ψ(x′), η), and ψ(yn) ∈ BE(ψ(y′), η). By our choice of η, for suchan n, the extremities ψ(xn), ψ(yn) of ψ(γn) do not belong to H` ∪Hr.We claim that, for all such n, ψ(γn) ∩H` = ∅. Indeed, if ψ(γn) were tointersect H`, then by the Jordan Curve Theorem it would intersect ψ([u′`, v′`])at two points ψ(u0), ψ(v0) such that the segment ψ((u0, v0)) ⊂ ψ(γn) iscontained in H`. Since H` ∩ ψ([u′`, v′`]) = ∅, it would then follow that thereare distinct geodesics between u0, v0 ∈ [u`, ur], contradicting the uniqueness[u`, ur]. Similarly, for all such n, ψ(γn) ∩Hr = ∅.Let [u′′, v′′] = [u′′` , v′′` ]∩[u′′r , v′′r ], with u′′ the endpoint closer to x. Recalling(from the third paragraph of the proof) that d(x′, u) = ε/2, d(y′, v) = ε/2,u` ∈ B(u, ε/4), v` ∈ B(v, ε/4), and [u`, v`]−γ = [u`, u′′` )∪(v′′` , v`] is contained703.4. Confluence near the rootH`u`xnHrv`ur vrynu′`v′`x′u′′` v′′`y′0 1Figure 3.5: Lemma 3.4.4: Given [x′, y′] ⊂ γ we finda geodesic γ` = [u`, v`] which intersects γ in [u′′` , v′′` ],which is almost all of [x′, y′], and similarly [ur, vr].These are used to define the sets Vη (shaded), andsubsets H` and Hr (dark gray). For large n, thegeodesics γn are included in Vη and cannot enterH` ∪ Hr, leading to strong convergence. The pointsu, v, u′r, u′′r , v′r, v′′r , u′′, v′′ are not shown. For clarity, weomitted ψ(·) from all points (besides ψ(x) = 0 andψ(y) = 1) named in the figure.in B(u, ε/4) ∪ B(v, ε/4), it follows that d(u′′` , x′), d(v′′` , y′) < ε. Similarly,since ur ∈ B(u, ε/4), vr ∈ B(v, ε/4), and [ur, vr] − γ = [ur, u′′r) ∪ (v′′r , vr]is contained in B(u, ε/4) ∪ B(v, ε/4), we have that d(u′′r , x′), d(v′′r , y′) < ε.Hence d(u′′, x′), d(v′′, y′) < ε, and so γ′ε ⊂ [u′′, v′′].To conclude recall that, for all large n, we have that ψ(γn) ⊂ Vη, ψ(xn) ∈BE(ψ(x′), η), ψ(yn) ∈ BE(ψ(y′), η), and ψ(γn) ∩ (H` ∪ Hr) = ∅. By theJordan Curve Theorem, it moreover follows that [u′′, v′′] ⊂ γn, and henceγ′ε ⊂ γn, completing the proof. Proof of Proposition 3.2.25. Since γ = [x, y] is a general geodesic segmentfrom the root of the map, we obtain Proposition 3.2.25 immediately byLemma 3.4.4 and invariance under re-rooting. With Proposition 3.2.25 at hand, Lemma 3.4.1 follows easily.Proof of Lemma 3.4.1. By invariance under re-rooting, we may restrict to thecase that x is the root of M . Let y ∈M and neighbourhoods Nx of x and Nyof y be given. Almost surely, there are at most 3 geodesics from x to y, which713.5. Proof of main resultswe call γi, for i = 1, . . . , k with k ≤ 3. Suppose that [xn, yn] is a sequenceof geodesic segments with xn → x and yn → y in (M,d). If [xnk , ynk ] is aconvergent subsequence of [xn, yn], then by Lemma 3.2.22, [xnk , ynk ] convergesto some γi. By Proposition 3.2.25, it follows that [xnk , ynk ]− (Nx ∪Ny) iscontained in γi for all large k. We conclude that for any sequence [xn, yn]as above, for all sufficiently large n we have that [xn, yn] − (Nx ∪ Ny) iscontained in some geodesic segment from x to y. Hence sub-neighbourhoodsN ′x and N ′y as in the lemma exist. 
3.5 Proof of main resultsIn this section, we use Proposition 3.2.20 to establish our main results.3.5.1 Typical pointsTo simplify the proofs below, we make use of a set of typical points T ⊂M(we slightly abuse notation by keeping the same notation as in Section 3.4).The set T will satisfy the following.(i) λ(T c) = 0;(ii) Proposition 3.2.25 (and weaker results such as Proposition 3.2.20 andLemmas 3.3.2, 3.3.4 and 3.4.1) holds for all x ∈ T ;(iii) Proposition 3.3.1 holds for all x ∈ T ;(iv) Proposition 3.3.7 holds for all x ∈ T ;(v) Proposition 3.3.8 holds for all x ∈ T ;(vi) For each x, y ∈ T , there is a unique geodesic from x to y.To be precise, when we say above that a proposition holds for all x ∈ T , wemean that the property in the proposition, known to hold for λ-almost everypoint, holds for every point of T .The almost sure existence of a set T satisfying (i)–(v) follows by invarianceunder re-rooting (and results cited or proved thus far). We note that property(vi) follows by (iii), since as mentioned in Section 3.2.3, if (x, y) and (y, x)are regular then there is a unique geodesic from x to y.Hence, in the sections which follow, to show that various properties hold723.5. Proof of main resultsalmost surely for λ-almost every x ∈M , it suffices to confirm that they holdfor points in T .3.5.2 Geodesic netsTheorems 3.2.3 and 3.2.4 follow by Proposition 3.2.20.Proof of Theorem 3.2.3. Let x, y ∈M and u ∈ T −{x, y} be given. Proposi-tion 3.2.20 provides an (open) neighbourhood Uu of u and a point u0 outsideUu so that all geodesics from any v ∈ Uu to either x or y pass through u0. Inparticular any geodesic [v, x], with v ∈ Uu, can be written as [v, u0] ∪ [u0, x].By the choice of u0, replacing the second segment by some [u0, y] gives ageodesic from v to y. The same holds with x, y reversed. Consequently,G(x) ∩ Uu = G(y) ∩ Uu.Thus G(x) and G(y) coincide in ⋃T−{x,y} Uu. Since T is dense and hasfull measure, the theorem follows. Proof of Theorem 3.2.4. Let x ∈ T and a neighbourhood N of x be given.Select ε > 0 so that B(x, 2ε) ⊂ N . Let N ′ ⊂ B(x, ε) and x0 ∈ B(x, ε)−N ′be as in Proposition 3.2.20. By the choice of x0, for any y0 ∈ N c and x′ ∈ N ′,observe that y0 ∈ G(x′) if and only if there is some y ∈ B(x, ε)c and geodesic[x0, y] so that y0 ∈ [x0, y). This condition is independent of x′. Hence allG(x′), x′ ∈ N ′, coincide on N c. In support of our conjecture in Section 3.2.4, we show that the union ofmost geodesic nets is of Hausdorff dimension 1.Proposition 3.5.1. Almost surely, there is a subset Λ ⊂M of full volume,λ(Λc) = 0, satisfying dim⋃x∈ΛG(x) = 1.Proof. We prove the claim with Λ = T , which has full measure.By property (ii) of points in T , there is a confluence of geodesics to allpoints x ∈ T (that is, the statement of Lemma 3.3.2 holds). As discussed inSection 3.2.1, we thus have that dimG(x) = 1 for all x ∈ T .Let ε > 0 be given. For each x ∈ T , put Gε(x) = G(x) − B(x, ε). ByTheorem 3.2.4, for each x ∈ T there is an ηx ∈ (0, ε) such that G2ε(x′) ⊂733.5. Proof of main resultsGε(x) for all x′ ∈ B(x, ηx). Since (M,d) is a separable metric space andhence strongly Lindelöf (that is, all open subspaces of (M,d) are Lindelöf)there is a countable subset Tε ⊂ T such that ⋃x∈Tε B(x, ηx) is equal to⋃x∈T B(x, ηx), and in particular, contains T . Hence, by the choice of Tε,⋃x∈T G2ε(x) is contained in⋃x∈Tε Gε(x), a countable union of 1-dimensionalsets, and so is 1-dimensional.Taking a countable union over ε = 1/n, we see that dim⋃x∈T G(x) = 1,which yields the claim. 
3.5.3 Cut lociAs discussed in Section 3.2.2, Le Gall’s study of geodesics reveals a correspon-dence between cut-points of the CRT and points with multiple geodesics tothe root of the Brownian map. Hence, Le Gall [97] states that S(ρ) “exactlycorresponds to the cut locus of [the Brownian map] relative to the root.”3.5.3.1 Weak cut lociThe main way in which the weak cut locus is badly behaved is that there isa dense set of points for which the weak cut locus has positive volume andfull dimension (whereas typically it is much smaller, see Proposition 3.3.9).Proposition 3.5.2. Almost surely, for λ-almost every x ∈ M , for anyneighbourhood N of x, there is a set D with dimD = 2, dense in someneighbourhood N ′ ⊂ N of x, such that N c ⊂ S(x′) for all x′ ∈ D.Proof. Let x ∈ T and a neighbourhood N of x be given. Let N ′ ⊂ N andx0 ∈ N − N ′ be as in Proposition 3.2.20. Fix some u ∈ N c ∩ T , and putD = N ′ ∩ S(u) so that by properties (iv),(v) of points in T , we have thatD is dense in N ′ and satisfies dimD = 2. By property (vi) of points in T ,there is a unique geodesic from u to x. Since this geodesic passes through x0,it follows that there is a unique geodesic from u to x0. Hence, by the choiceof D and x0, we see that there are multiple geodesics from each point x′ ∈ Dto x0. We conclude, by the choice of x0, that N c ⊂ S(x′), for all x′ ∈ D. 743.5. Proof of main resultsSince the weak cut locus relation is symmetric — that is, y ∈ S(x) if andonly if x ∈ S(y) — we note that it follows immediately by Proposition 3.5.2that almost surely, for all x ∈ M , S(x) is dense in M (as mentioned inSection 3.2.2) and dimS(x) ≥ 2.By the proof of Proposition 3.5.2, we find that S(x) does not effectivelycapture the essence of a cut locus of a general point x ∈M . Therein, observethat although all points y ∈ N c are in S(x′), x′ ∈ D, this is due to thestructure of the map near x′ (namely the multiple geodesics to the confluencepoint x0) and does not reflect on the map near y. For this reason, we alsodefine a strong cut locus for the Brownian map, see Section 3.2.2.3.5.3.2 Strong cut lociBy Le Gall’s description of geodesics to the root and invariance under re-rooting, and in particular Proposition 3.3.1, we immediately obtain thefollowing:Proposition 3.5.3. Almost surely, for λ-almost every x ∈M , S(x) = C(x),that is, the weak and strong cut loci coincide.We remark that the strong cut locus relation, unlike the weak cut locus,is not symmetric in x and y, that is, y ∈ C(x) does not imply that x ∈ C(y).See Figure 3.6.x yFigure 3.6: Asymmetry of the strong cut locus relation:For a regular pair (x, y) joined by two geodesics, wehave y ∈ C(x), however x /∈ C(y), since all geodesicsfrom y to x coincide near x.Although more in tune with the singular geometry of the Brownian map,not all properties of cut loci in smooth manifolds apply for the Brownianmap. For instance, C(x) is much smaller than the closure of all points withmultiple geodesics to x (as is the case with the cut locus of a smooth surface,753.5. Proof of main resultssee Klingenberg [90, Theorem 2.1.14]) since the set of such points is densein M (as noted after the proof of Proposition 3.5.2). Moreover, it is notnecessarily the case that all points y ∈ C(x) are endpoints relative to x (thatis, extremities y of a geodesic [x, y] which cannot be extended to a geodesic[x, y′] ⊃ [x, y] for any y′ 6= y; in other words, y /∈ G(x)). 
For instance, ifγ, γ′ are distinct geodesics from the root of the map ρ to some point x,with a common initial segment [ρ, y] = γ ∩ γ′, then note that y is in C(x)(by Proposition 3.3.1), however not an endpoint relative to x, being in therelative interior of γ.Despite such differences, we propose that the set C(x) is a more interestingnotion of cut locus in our setting than S(x) or, say, the set of all endpointsrelative to x (that is, G(x)c − {x}), which by Theorem 3.2.6 is a residualsubset of the map.As stated in Section 3.2.2, analogues of Theorems 3.2.3 and 3.2.4 hold forthe strong cut locus. The proofs are very similar to those of Theorems 3.2.3and 3.2.4.Proof of Theorem 3.2.8. Let x, y ∈M and u ∈ T −{x, y} be given. Proposi-tion 3.2.20 provides an (open) neighbourhood Uu of u and a point u0 outsideUu so that all geodesics from any v ∈ Uu to either x or y pass through u0.In particular any geodesic [v, u0] can be extended to each of x, y.Since v ∈ C(x) is determined by the structure of geodesics [v, x] near v, apoint v ∈ Uu is in C(x) if and only if v ∈ C(y). Thus C(x) and C(y) agree in⋃u∈T−{x,y} Uu. The result follows, since T is dense and has full measure. Proof of Theorem 3.2.9. Let x ∈ T and a neighbourhood N of x be given.Let N ′ ⊂ N and x0 ∈ N −N ′ be as in Proposition 3.2.20. For any x′ ∈ N ′and y ∈ N c, y ∈ C(x′) if and only if there are multiple geodesics from x0 toy which are distinct near y. Since this condition is independent of x′, weconclude that all C(x′), x′ ∈ N ′, coincide on N c. Analogously to Proposition 3.5.1, we find that the union over most strongcut loci is of Hausdorff dimension 2.763.5. Proof of main resultsProposition 3.5.4. Almost surely, there is a subset Λ ⊂M of full volume,λ(Λc) = 0, satisfying dim⋃x∈ΛC(x) = 2.Proof. The proposition follows by the proof of Proposition 3.5.1, but replacingits use of Theorem 3.2.4 with that of Theorem 3.2.9, and noting, by property(iv) of points in T , that dimC(x) = 2 for all x ∈ T . We omit the details. It would be interesting to know if almost surely ⋃x∈M C(x) is of Hausdorffdimension 2.3.5.4 Geodesic starsA geodesic star is a formation of geodesic segments which share a commonendpoint and are otherwise pairwise disjoint. Geodesic stars play a importantrole in [108]. While every point is the centre of a geodesic star with a singleray, almost every point is not the centre of a star with any more rays.Definition 3.5.5. For ε > 0, let Z(ε) denote the set of points x ∈M suchthat for some y, y′ ∈ B(x, ε)c and geodesic segments [x, y] and [x, y′], wehave that (x, y]∩ (x, y′] = ∅. We call a point in Z(ε) the centre of a geodesicε-star with two rays.Note that any point in the interior of a geodesic is in Z(ε) for some ε > 0,but the converse need not hold.Proposition 3.5.6. Almost surely, for any ε > 0, Z(ε) is nowhere dense inM .Proof. Let ε > 0 and x ∈ T be given. Put N = B(x, ε/2). Let N ′ ⊂ N andx0 ∈ N −N ′ be as in Proposition 3.2.20. Since N ⊂ B(x′, ε) for all x′ ∈ N ′,x0 is contained in all geodesic segments of length ε from points x′ ∈ N ′.Hence Z(ε) ∩N ′ = ∅. The result thus follows by the density of T . Proof of Theorems 3.2.6 and 3.2.10. Note that if a point is either in therelative interior of a geodesic or in the strong cut locus of a point, then itis the centre of a geodesic ε-star with two rays, for some ε > 0. Therefore⋃x∈M G(x) and⋃x∈M C(x) are contained in⋃n≥1 Z(n−1), a set of first Bairecategory by Proposition 3.5.6. The theorems follow. 773.5. 
Proof of main results3.5.5 Geodesic networksIn this section, we classify the types of geodesic networks which are dense inthe Brownian map and calculate the dimension of the set of pairs with eachtype of network.Proof of Theorem 3.2.15. Let u 6= v ∈ T be given. By property (vi) of pointsin T , there is a unique geodesic [u, v]. Put ε = 13d(u, v). By property (ii)of points in T , we have by Lemma 3.4.1 that there is an η > 0 so that ifU = B(u, η) and V = B(v, η), then for any u′ ∈ U and v′ ∈ V , any geodesicsegment [u′, v′] coincides with [u, v] outside of B(u, ε) ∪B(v, ε).Let z denote the midpoint of [u, v]. By the choice of η and since u ∈ T ,we have by properties (iii),(iv) for points in T that for all v′ ∈ V , the pair(z, v′) is regular and joined by at most three geodesics. Hence we splitV = V1 ∪ V2 ∪ V3, where Vk consists of v′ ∈ V for which (z, v′) ∈ N(1, k).Similarly, we decompose U = U1 ∪ U2 ∪ U3 according to the number ofgeodesics between z and u′ ∈ U . Since u, v ∈ T , we see by property (iv) ofpoints in T that all Uj , Vk are dense in U, V .Finally, by the choice of η, observe that Uj × Vk ⊂ N(j, k), for allj, k ∈ {1, 2, 3}. Hence, parts (i),(ii) of the theorem follow by the density ofT . For the proof of Theorem 3.2.16, we require the following result concerningthe dimension of cartesian products in arbitrary metric spaces.Lemma 3.5.7 (Howroyd [81, 82]). For any metric spaces X,Y we have that(i) (dimX) + (dim Y ) ≤ dim(X × Y );(ii) dimP(X × Y ) ≤ (dimPX) + (dimP Y ),where the metric on X × Y is the L1 metric on the product.Proof of Theorem 3.2.16. Let u 6= v ∈ T and Uj , Vk, j, k ∈ {1, 2, 3}, be as inthe proof of Theorem 3.2.15. Since u, v ∈ T , we have by properties (iv),(v)of points in T that for all j, k ∈ {1, 2, 3}, dimUj = dimP Uj = 2(3 − j),dimVk = dimP Vk = 2(3− k), and moreover, the sets U3, V3 are countable.Recall that in the proof of Theorem 3.2.15, it is shown that for all j, k ∈{1, 2, 3}, Uj × Vk ⊂ N(j, k). We thus obtain the lower bounds dimN(j, k) ≥783.5. Proof of main results2(6 − j − k) by Lemma 3.5.7(i). In particular, since dimA ≤ dimPA, weobtain 8 ≤ dimN(1, 1) ≤ dimPN(1, 1) ≤ dimPM2 ≤ 8, where the lastinequality follows by Proposition 3.3.5 and Lemma 3.5.7(ii). Hence, we findthat dimN(1, 1) = dimPN(1, 1) = 8.It remains to give an upper bound on the dimensions of N(j, k) whenj, k are not both 1, in which case the complement of the geodesic networkG(x, y) is disconnected. By symmetry, we assume j 6= 1, so that there aremultiple geodesics leaving x. Let [x′, y′] be the closure of the intersection ofall relative interiors (x, y) of geodesics from x to y. (Since j 6= 1, it followsthat x 6= x′. If k = 1 then y = y′.)Fix a countable, dense subset T0 ⊂ T . Take some x0 ∈ T0 in a componentUx of G(x, y)c whose closure contains x but not [x′, y′]. (See Figure 3.7.) Bythe Jordan Curve Theorem and the choice of [x′, y′], for any geodesic [x0, y]we have that [x0, y]− Ux is contained in some geodesic from x to y, and inparticular, contains [x′, y′]. Since x0 is typical, by property (ii) of points inT , we have that all sub-segments of all geodesics [x0, y] are stable. Let zdenote the midpoint of [x′, y′]. Note that, in particular, [x′, z] ⊂ [x′, y′] and[z, y′] ⊂ [x′, y′] are stable.x yx0x′ y′znzUxFigure 3.7: Theorem 3.2.16: As depicted, (x, y) ∈N(2, 3). A typical point x0 ∈ Ux gives normal geodesics[x0, y]. 
For some zn ∈ T0 sufficiently close to z, we havethat (zn, x) ∈ N(1, 2) and (zn, y) ∈ N(1, 3), and hence(x, y) ∈ S2(zn)× S3(zn).Take a sequence of points zn ∈ T0 converging to z. Any subsequentiallimit of geodesics [x, zn] converges to some geodesic [x, z], which, by thechoice of [x′, y′], contains [x′, z]. Since [x′, z] is stable, for large enough n thegeodesics [x, zn] intersect [x′, z], and therefore (viewing [x, zn] as parametrizedfrom x to zn) necessarily coincide with one of the geodesics [x, x′], and thencontinue along [x′, y′] before branching off towards zn. It follows that for suchn, we have that (x, zn) ∈ N(j, 1). Similarly, since [z, y′] is stable, for large793.6. Related modelsenough n the geodesics [zn, y] all go through y′, and hence (zn, y) ∈ N(1, k).By property (iii) of points in T , we note that for any u ∈ T and i ∈{1, 2, 3}, Si(u) (as defined in Section 3.3.4) is equal to {v : (u, v) ∈ N(1, i)}.Furthermore, by properties (iv),(v) of points in T , we have that dimP Si(u) =6− 2i, and moreover, S3(u) is countable.The above argument shows that for every (x, y) ∈ N(j, k) we have that(zn, x) ∈ N(1, j) and (zn, y) ∈ N(1, k) for some zn ∈ T0. ThusN(j, k) ⊂⋃u∈T0Si(u)× Sj(u).Since T0 is countable, it follows by Lemma 3.5.7(ii) that dimPN(j, k) ≤(6− 2j) + (6− 2k), giving the requisite upper bound. Moreover, we find thatN(3, 3) is countable.Altogether, since dimA ≤ dimPA, we conclude that N(j, k) has Haus-dorff and packing dimension 2(6− j − k). Proof of Corollaries 3.2.18 and 3.2.19. Noting that N(j, k) ⊂ P (jk), for allj, k ∈ N, we observe that Theorems 3.2.15 and 3.2.16 immediately yieldCorollaries 3.2.18 and 3.2.19. 3.6 Related modelsOur results have implications for the geodesic structure of models related tothe Brownian map.An infinite volume version of the Brownian map, the Brownian plane(P,D), has been introduced by Curien and Le Gall [53]. The randommetric space (P,D) is homeomorphic to the plane R2 and arises as the localGromov-Hausdorff scaling limit of the UIPQ (discussed in Section 3.2.1). TheBrownian plane has an additional scale invariance property which makes itmore amenable to analysis, see the recent works of Curien and Le Gall [54, 55].As discussed in [95], almost surely there are isometric neighbourhoods of theroots of (M,d) and (P,D). Using this fact and scale invariance, propertiesof the Brownian plane can be deduced from those of the Brownian map.803.6. Related modelsIn a series of works, Bettinelli [33, 34, 35] investigates Brownian surfacesof positive genus. In [33] subsequential Gromov-Hausdorff convergence ofuniform random bipartite quadrangulations of the g-torus Tg is established(also general orientable surfaces with a boundary are analyzed in [35]),and it is an ongoing work of Bettinelli and Miermont [37, 38] to confirmthat a unique scaling limit exists. Some properties hold independently ofwhich subsequence is extracted. For instance, a scaling limit of bipartitequadrangulations of Tg is homeomorphic to Tg (see [34]) and has Hausdorffdimension 4 (see [33]). Also, a confluence of geodesics is observed at typicalpoints of the surface (see [35]). Our results imply further properties ofgeodesics in such surfaces, although in these settings there are additionaltechnicalities to be addressed.81Part IIISusceptibility of RandomGraphs82Chapter 4Thresholds for ContagiousSets in Random Graphs4.1 OverviewFor fixed r ≥ 2, we consider bootstrap percolation with threshold r on theErdős–Rényi graph Gn,p. 
We identify a threshold for p above which there is with high probability a set of size r that can infect the entire graph. This improves a result of Feige, Krivelevich and Reichman, which gives bounds for this threshold, up to multiplicative constants.

As an application of our results, we obtain an upper bound for the threshold for K4-percolation on Gn,p, as studied by Balogh, Bollobás and Morris. This bound is proved to be sharp in Chapter 6.

These thresholds are closely related to the survival probabilities of certain time-varying branching processes, and we derive asymptotic formulae for these survival probabilities which are of interest in their own right.∗

∗This chapter is joint work with Omer Angel [12], to appear in the Annals of Applied Probability.

4.2 Background and main results

4.2.1 Bootstrap percolation

The r-bootstrap percolation process on a graph G = (V, E) evolves as follows. Initially, some set V0 ⊂ V is infected. Subsequently, any vertex that has at least r infected neighbours becomes infected, and remains infected. Formally, the process is defined by

Vt+1 = Vt ∪ {v : |N(v) ∩ Vt| ≥ r},

where N(v) is the set of neighbours of a vertex v. The sets Vt are increasing, and so converge to some set V∞ of eventually infected vertices. We denote the infected set by 〈V0, G〉r = V∞. A contagious set for G is a set I ⊂ V such that if we put V0 = I then we have that 〈I, G〉r = V, that is, the infection of I results in the infection of all vertices of G.
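The following is a minimal Python sketch of these definitions, added only as an illustration (the function names and the small parameter values are ours, not part of the text). It computes 〈V0, G〉r for a graph given by adjacency sets, and can be used to test whether a given r-set is contagious for a sample of Gn,p.

    import math
    import random
    from itertools import combinations

    def bootstrap_closure(adj, V0, r):
        """Return <V0, G>_r: run r-neighbour bootstrap percolation on the graph
        given as a dict mapping each vertex to its set of neighbours."""
        infected = set(V0)
        counts = {v: 0 for v in adj}      # infected neighbours seen so far
        queue = list(infected)
        while queue:
            u = queue.pop()
            for w in adj[u]:
                if w in infected:
                    continue
                counts[w] += 1
                if counts[w] >= r:
                    infected.add(w)
                    queue.append(w)
        return infected

    def is_contagious(adj, I, r):
        return bootstrap_closure(adj, I, r) == set(adj)

    def gnp(n, p):
        """Sample an Erdos-Renyi graph G(n, p) as adjacency sets."""
        adj = {v: set() for v in range(n)}
        for u, v in combinations(range(n), 2):
            if random.random() < p:
                adj[u].add(v)
                adj[v].add(u)
        return adj

    # Tiny instance, chosen only to exercise the functions; no claim about
    # thresholds is intended here (exhaustive search is feasible only for small n).
    n, r, p = 30, 2, 0.15
    G = gnp(n, p)
    contagious = [I for I in combinations(range(n), r) if is_contagious(G, I, r)]
    print(len(contagious), "contagious sets of size", r, "out of", math.comb(n, r))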
Bootstrap percolation was introduced by Chalupa, Leath and Reich [50] (see also [39, 104, 116, 131, 134]), in the context of statistical physics, for the study of disordered magnetic systems. Since then it has been applied diversely in physics, and in other areas, including computer science, neural networks, and sociology, see [7, 5, 9, 57, 58, 63, 64, 65, 68, 69, 70, 89, 112, 126, 133, 136, 137] and further references therein.

Special cases of r-bootstrap percolation have been analyzed extensively on finite grids and infinite lattices, see for instance [6, 20, 21, 23, 26, 47, 46, 72, 79, 80, 120] (and references therein). Other special graphs of interest have also been studied, including hypercubes and trees, see [19, 22, 25, 66]. Recent work has focused on the case of random graphs, see for example [9, 10, 27, 83], and in particular, on the Erdős–Rényi random graph Gn,p. See [84, 132] (and [17, 18, 118] for related results).

The main questions of interest in this field revolve around the size of the eventual infected set V∞. In most works, the object of study is the probability that a random initial set is contagious, and its dependence on the size of V0. For example, in [84, Theorem 3.1], the critical size for a random set to be contagious for Gn,p is identified for all r ≥ 2 and p in a range depending on r.

More recently, and in contrast with the above results, Feige, Krivelevich and Reichman [62] study the existence of small contagious sets in Gn,p, in a range of p. We call a graph susceptible (or say that it r-percolates) if it contains a contagious set of the smallest possible size r. In [62, Theorem 1.2], the threshold for p above which Gn,p is likely to be susceptible is approximated, up to multiplicative constants. Our main result identifies sharp thresholds for the susceptibility of Gn,p, for all r ≥ 2.

Let pc(n, r) denote the infimum over p > 0 so that Gn,p is susceptible with probability at least 1/2.

Theorem 4.2.1. Fix r ≥ 2 and α > 0. Let

p = p(n) = (α / (n log^{r−1} n))^{1/r}

and denote

αr = (r − 1)! ((r − 1)/r)^{2(r−1)}.

If α > αr, then with high probability Gn,p is susceptible. If α < αr, then there exists β = β(α, r) so that for G = Gn,p, with high probability for every I of size r we have |〈I, G〉r| ≤ β log n. In particular, as n → ∞,

pc(n, r) = (αr / (n log^{r−1} n))^{1/r} (1 + o(1)).

Thus r-bootstrap percolation undergoes a sharp transition. For small p sets of size r infect at most O(log n) vertices, whereas for larger p there are contagious sets of size r.

We remark that for α < αr, with high probability Gn,p has susceptible subgraphs of size Θ(log n). Moreover, our methods identify the largest β so that there are susceptible subgraphs of size β log n (see Proposition 4.3.1 below).
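For concreteness, substituting the first two values of r into the formulae of Theorem 4.2.1 gives (recorded here only as a worked illustration):

    r = 2:  α2 = 1! (1/2)^2 = 1/4,      so  pc(n, 2) = (1 + o(1)) / (2 (n log n)^{1/2});
    r = 3:  α3 = 2! (2/3)^4 = 32/81,    so  pc(n, 3) = (32 / (81 n log^2 n))^{1/3} (1 + o(1)).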
4.2.2 Graph bootstrap percolation and seeds

Let H be some finite graph. Following Bollobás [39], H-bootstrap percolation is a rule for adding edges to a graph G. Eventually no further edges can be added, and the process terminates. An edge is added whenever its addition creates a copy of H within G. Informally, the process completes all copies of H that are missing a single edge. Formally, we let G0 = G, and Gi+1 is Gi together with every edge whose addition creates a subgraph which is isomorphic to H. (Note that these are not necessarily induced subgraphs, so having more edges in G can only increase the final result. The vertex set is fixed, and no vertices play any special role.) For a finite graph G, this procedure terminates once Gτ+1 = Gτ, for some τ = τ(G). We denote the resulting graph Gτ by 〈G〉H. If 〈G〉H is the complete graph on the vertex set V, the graph G is said to H-percolate (or that it is H-percolating).

Balogh, Bollobás and Morris [24] study the model in the case that H = Kk and G = Gn,p. The case H = K4 is the minimal case of interest. Indeed, all graphs K2-percolate, and a graph K3-percolates if and only if it is connected. Hence by a classical result of Erdős and Rényi [60], Gn,p will K3-percolate precisely for p > n^{−1} log n + Θ(n^{−1}). Critical thresholds are defined as

pc(n, H) = inf{p > 0 : P(〈Gn,p〉H = Kn) ≥ 1/2}.

It is expected that this property has a sharp threshold for H = Kk for any k, in the sense that for some pc = pc(k) we have that Gn,p is Kk-percolating with high probability for p > (1 + δ)pc and is Kk-percolating with probability tending to 0 for p = (1 − δ)pc. Some bounds on pc(n, Kk), k ≥ 4, are obtained in [24]. One of the main results is that pc(n, K4) = Θ(1/√(n log n)).

We improve the upper bound on pc(n, K4) given in [24].

Theorem 4.2.2. Let p = √(α/(n log n)). If α > 1/3 then Gn,p is K4-percolating with high probability. In particular, as n → ∞, we have that

pc(n, K4) ≤ (1 + o(1)) / √(3n log n).

This bound is shown to be asymptotically sharp in Chapter 6.

One way for a graph G to Kr+2-percolate is if there is some ordering of the vertices so that vertices 1, . . . , r form a clique, and every later vertex is connected to at least r of the previous vertices according to the order. In this case we call the clique formed by the first r vertices a seed for G. When r = 2, the seed is a clique of size 2, so we call it a seed edge.

Lemma 4.2.3. Fix r ≥ 2. If G has a seed for Kr+2-bootstrap percolation, then 〈G〉Kr+2 = Kn.

Proof. We prove by induction that for k ≥ r the subgraph induced by the first k vertices percolates. For k = r, the definition of a seed implies that the subgraph is complete. Given that the first k − 1 vertices span a percolating graph, some number of steps will add all edges among them. Finally, vertex k has r neighbours among these, and so every edge between vertex k and a previous vertex can also be added by Kr+2-bootstrap percolation.

In light of this, Theorem 4.2.2 above is a direct corollary of the following result.

Theorem 4.2.4. Let p = √(α/(n log n)). As n → ∞, the probability that Gn,p has a seed edge tends to 1 if α > 1/3 and tends to 0 if α < 1/3.

The case of K4-bootstrap percolation, corresponding to r = 2, appears to be special: We conjecture that existence of a seed edge is the easiest way for a graph to K4-percolate. This is similar to other situations where a threshold of interest on Gn,p coincides with that of a more fundamental event. For instance, with high probability, Gn,p is connected (equivalently, K3-percolating) if and only if it has no isolated vertices (see [60]); Gn,p contains a Hamiltonian cycle if and only if its minimum degree is at least 2 (Komlós and Szemerédi [92]).

Essentially, if G K4-percolates, then either there is a seed edge, or some other small structure that serves as a seed (i.e., K4-percolates and exhausts G by adding doubly connected vertices), or else, there are at least two large structures within G that K4-percolate independently. Since pc → 0, having multiple large percolating structures within G is less likely. This is further investigated in Chapter 6.

For r > 2, having a seed is no longer the easiest way for a graph to Kr+2-percolate. Indeed, by [24], the critical probability for Kr+2-bootstrap percolation is n^{−2r/(r^2+3r−2)} up to (unknown) poly-logarithmic factors (note that r in [24] is r + 2 here). The threshold for having a seed is of order n^{−1/r}(log n)^{1/r−1}, which is much larger (see Theorem 4.6.1).

4.2.3 A non-homogeneous branching process

Given an edge e = (x0, x1), we can explore the graph to determine if it is a seed edge. The number of vertices that are connected to both of its endpoints is roughly Poisson with mean np^2. In our context, the interesting p are o(n^{−1/2}), and therefore the number of such vertices has small mean, which we denote by ε = np^2. If there are any such vertices, denote them x2, . . . . We then seek vertices connected to x2 and at least one of x0, x1. The number of such vertices is roughly Poi(2ε). Indeed, the number of vertices connected to the kth vertex and at least one of the previous vertices is (approximately) Poi(kε).

This leads us to the case r = 2 of the following non-homogeneous branching process defined by parameters r ∈ N and ε > 0. The process starts with a single individual. The first r − 2 individuals have precisely one child each. For n ≥ r − 1, the nth individual has a Poisson number of children with mean \binom{n}{r−1} ε, where here ε = np^r. Thus for r = 2 the nth individual has a mean of nε children. The process may die out (e.g., if individual r − 1 has no children). However, if the process survives long enough the mean number of children exceeds one and the process becomes super-critical. Thus the probability of survival is strictly between 0 and 1. Formally, this may be defined in terms of independent random variables Zn = Poi(\binom{n}{r−1} ε) by

Xt = Σ_{n=r−1}^{t} (Zn − 1).

Survival is the event {Xt ≥ 0, ∀t}.

Theorem 4.2.5. As ε → 0, we have that

P(Xt ≥ 0, ∀t) = exp[−((r − 1)^2/r) kr (1 + o(1))],

where

kr = kr(ε) = ((r − 1)!/ε)^{1/(r−1)}.

Note that ε \binom{kr}{r−1} ≈ 1. Hence kr is roughly the time at which the process becomes super-critical.
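The following Monte Carlo sketch (ours, added for illustration only) estimates this survival probability and compares it with the asymptotic formula of Theorem 4.2.5. Declaring survival once the walk stays nonnegative up to a large multiple of kr is an approximation: a process still alive well past kr is super-critical and unlikely to die, but this truncation, and the fact that the formula is only asymptotic as ε → 0, mean that only rough agreement should be expected.

    import math
    import numpy as np

    def survival_estimate(r, eps, trials=20000, horizon_factor=10, seed=0):
        """Estimate P(X_t >= 0 for all t), where X_t = sum_{n=r-1}^t (Z_n - 1)
        and Z_n ~ Poi(binom(n, r-1) * eps), by truncating at a finite horizon."""
        rng = np.random.default_rng(seed)
        k_r = (math.factorial(r - 1) / eps) ** (1.0 / (r - 1))
        horizon = int(horizon_factor * k_r) + r
        means = [math.comb(n, r - 1) * eps for n in range(r - 1, horizon)]
        survived = 0
        for _ in range(trials):
            x = 0
            for lam in means:
                x += rng.poisson(lam) - 1
                if x < 0:
                    break
            else:                      # walk stayed nonnegative up to the horizon
                survived += 1
        prediction = math.exp(-((r - 1) ** 2 / r) * k_r)
        return survived / trials, prediction

    # Illustrative parameters only (not tied to any statement in the text).
    print(survival_estimate(r=2, eps=0.1))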
In closing, we mention that this process is closely related to the binomial chain representation of the r-bootstrap percolation dynamics, discussed in more detail in Section 5.3 below. (One difference here is that we keep track of the number of vertices in each "generation." This, in fact, can be recovered from the binomial chain process as well, see [84, Chapter 10]. Also, whereas here the process starts with r infected vertices x0, x1, . . . , xr−1, in Chapter 5 we study the situation where a set of ℓ vertices is initially infected, where possibly ℓ/kr ∼ c > 0. Hence Theorem 5.4.2 below, in the particular case that ℓ/kr → 0, is a (slightly more general) version of the above theorem, noting that, setting ε = np^r above, kr(ε) coincides with kr in Theorem 5.4.2.)

4.2.4 Outline of the proof

In Section 4.3, we obtain a recurrence (4.3.1) for the number of graphs which r-percolate with the minimal number of edges. Using this, we estimate the asymptotics of such graphs, and thereby identify a quantity β∗(α), so that for α < αr (and p as in Theorem 4.2.1), with high probability no r-percolation on Gn,p grows to size β log n, for any β ≥ β∗(α) + δ. Let βr(α) = kr(np^r)/log n, where kr = kr(ε) is as in Section 4.2.3. Moreover, we find that β∗(α) = βr(α) if and only if α = αr, suggesting that αr is indeed the critical value of α.

In Section 4.4, we show by the second moment method that, if α > αr, then Gn,p r-percolates with high probability. The main difficulty towards establishing this fact is that contagious sets are far from independent. One way to see (very roughly) that this is the case is as follows: For super-critical α > αr, it is reasonable to presume that the expected number of contagious sets of size r is n^μ, for some μ(α) ↓ 0 as α ↓ αr. Let r = 2 (the cases r > 2 are similar), and suppose that some pair x, y infects a set V containing β log n vertices. Let x′, y′ be some other pair, such that {x, y} ∩ {x′, y′} = ∅. One way that x′, y′ can infect a set V′ of size β log n is by first infecting some set V1 where |V ∩ V1| = 2, and then infecting some V2 ⊂ V such that |V1 ∪ V2| = β log n. Note that this only implies the existence of at least three edges in Gn,p with at most one endpoint in V. To see this, observe that the first infected vertex u ∈ V ∩ V1 necessarily has at least two neighbours not in V; however, the second infected vertex v ≠ u ∈ V ∩ V1 may have only one such neighbour, if (u, v) ∈ E(Gn,p). As a result, it is perhaps not straightforward
Therefore, although for $p$ close to $p_c$ there are many triangles in $G_{n,p}$, we do not expect $G_{n,p}$ to require a triangle in order to infect a large set of size $\beta\log n$.

More specifically, we modify the recurrence (4.3.1) to obtain a recursive lower bound for graphs which $r$-percolate without using triangles, and show that this restriction does not significantly affect the asymptotics. Using Mantel's theorem [102], we establish with relative ease the approximate independence of the correspondingly restricted $r$-percolations, which we call $\hat r$-percolations.

A secondary obstacle is the need for a lower bound on the asymptotic number of graphs which $\hat r$-percolate with a significant proportion of vertices in the top level (i.e., vertices $v$ of a graph $G=(V,E)$ such that $v\in V_t - V_{t-1}$, where $V_t = V$). Such bounds are required to estimate the growth of super-critical $\hat r$-percolations on $G_{n,p}$ which have grown larger than the critical size $\beta_r(\alpha)\log n$. Using a lower bound for the overall number of graphs which $\hat r$-percolate, we obtain a lower bound for the number of such graphs with $i=\Omega(k)$ vertices in the top level. This estimate, together with the approximate independence result, is sufficient to show that with high probability $G_{n,p}$ has subgraphs of size $\beta\log n$ which $r$-percolate, for some $\beta\ge\beta^*(\alpha)+\delta$ (where, for $\alpha>\alpha_r$, $\beta_r(\alpha)<\beta^*(\alpha)$).

Finally, to conclude, we show by the first moment method that, for any given $A>0$, with high probability an $r$-percolation which grows to size $(\beta^*(\alpha)+\delta)\log n$ continues to grow to size $A\log n$. Having established the existence of a subgraph of $G_{n,p}$ of size $A\log n$, for a sufficiently large value of $A$ (depending on the difference $\alpha-\alpha_r$), it is straightforward (by sprinkling) to show that with high probability $G_{n,p}$ $r$-percolates.

4.3 Lower bound for $p_c(n,r)$

In this section, we prove the sub-critical case of Theorem 4.2.1 by the first moment method. Throughout this section we fix some $r\ge 2$. More precisely, we prove the following.

Proposition 4.3.1. Let
\[
\alpha_r = (r-1)!\Big(\frac{r-1}{r}\Big)^{2(r-1)}, \qquad p = \theta_r(\alpha,n) = \Big(\frac{\alpha}{n\log^{r-1} n}\Big)^{1/r}.
\]
Define $\beta^*(\alpha)$ to be the unique positive root of
\[
r + \beta\log\Big(\frac{\alpha\beta^{r-1}}{(r-1)!}\Big) - \frac{\alpha\beta^r}{r!} - \beta(r-2).
\]
For any $\alpha<\alpha_r$ and $\delta>0$, with high probability, for every $I\subset[n]$ of size $r$, we have that $|\langle I, G_{n,p}\rangle_r| \le (\beta^*(\alpha)+\delta)\log n$.

The methods of Section 4.4 can be used to show that with high probability there are sets $I$ of size $r$ which infect $(\beta^*-\delta)\log n$ vertices. For $\alpha<\alpha_r$, we have (see Lemma 4.3.9) the upper bound
\[
\beta^*(\alpha) < \Big(\frac{(r-1)!}{\alpha}\Big)^{1/(r-1)}.
\]
(In fact, it can be shown by elementary calculus that $\alpha$ can be replaced with $\alpha_r$ on the right hand side, resulting in the slightly improved upper bound $\beta^*(\alpha) < (r/(r-1))^2$.) This is asymptotically optimal for $\alpha\sim\alpha_r$.

In closing, we mention that Proposition 4.3.1 can alternatively be established using the large deviations estimates developed in Chapter 5; see Theorem 5.4.2. These two approaches are completely different, and so are of independent interest: Theorem 5.4.2 is proved using variational calculus, whereas Proposition 4.3.1 is proved by combinatorial arguments.

4.3.1 Small susceptible graphs

As discussed in Section 4.2.4, a key idea is to study the number of subgraphs of size $k=\Theta(\log n)$ which are susceptible with the minimal number of edges. If none exist, then there can be no contagious set in $G$.
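Before turning to these counts, we note that the constants appearing in Proposition 4.3.1 are straightforward to evaluate numerically. The following sketch is ours (plain Python, not part of the thesis); it computes $\alpha_r$, $\beta_r(\alpha) = ((r-1)!/\alpha)^{1/(r-1)}$ and $\beta^*(\alpha)$ by bisection, and checks that $\beta^*(\alpha_r) = \beta_r(\alpha_r) = (r/(r-1))^2$, in line with Lemma 4.3.9 below.

```python
import math

def alpha_r(r):
    """alpha_r = (r-1)! ((r-1)/r)^(2(r-1)), the conjectured critical constant."""
    return math.factorial(r - 1) * ((r - 1) / r) ** (2 * (r - 1))

def beta_r(alpha, r):
    """beta_r(alpha) = ((r-1)!/alpha)^(1/(r-1))."""
    return (math.factorial(r - 1) / alpha) ** (1 / (r - 1))

def mu_star(beta, alpha, r):
    """mu*_r(alpha, beta) = r + beta log(alpha beta^(r-1)/(r-1)!)
       - alpha beta^r / r! - beta (r-2); beta*(alpha) is its unique positive root."""
    return (r + beta * math.log(alpha * beta ** (r - 1) / math.factorial(r - 1))
            - alpha * beta ** r / math.factorial(r) - beta * (r - 2))

def beta_star(alpha, r, tol=1e-12):
    """Unique positive root of mu*_r(alpha, .), found by bisection
    (mu* decreases in beta from the value r at beta -> 0+)."""
    lo, hi = 1e-12, beta_r(alpha, r)
    while mu_star(hi, alpha, r) > 0:
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mu_star(mid, alpha, r) > 0 else (lo, mid)
    return (lo + hi) / 2

if __name__ == "__main__":
    for r in (2, 3, 4):
        a = alpha_r(r)
        print(r, round(a, 4), round(beta_star(a, r), 6),
              round(beta_r(a, r), 6), round((r / (r - 1)) ** 2, 6))
```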
Thus an importantstep is developing estimates for the number of such susceptible graphs of sizek.For a graph G and initial infected set V0, recall that Vt = Vt(V0, G) is theset of vertices infected up to and including step t. We let τ = inf{t : Vt =Vt+1}. We put I0 = V0 and It = Vt − Vt−1, for t ≥ 1. We refer to It as theset of vertices infected in level i. In particular, the top level of G is Iτ .For a graph G, we let V (G) and E(G) denote its vertex and edge sets,and put |G| = |V (G)|.We call a graph minimally susceptible if it is susceptible and has exactlyr(|G| − r) edges. If a graph G is susceptible, it has at least r(|G| − r) edges,since each vertex in It, t ≥ 1, is connected to r vertices in Vt−1.For k ∈ N, let [k] = {1, 2, . . . , k}.Definition 4.3.2. Let mr(k) denote the number of minimally susceptiblegraphs G with vertex set [k] such that [r] is a contagious set for G. Letmr(k, i) denote the number of such graphs with i vertices infected in the toplevel (so that mr(k) =∑k−ri=1 mr(k, i)).We note that mr(k, k − r) = 1, and claim that for i < k − r,mr(k, i) =(k − ri)k−r−i∑j=1ar(k − i, j)imr(k − i, j), (4.3.1)924.3. Lower bound for pc(n, r)wherear(x, y) =(xr)−(x− yr). (4.3.2)To see this, note that removing the top level from a minimally susceptiblegraph G of size k leaves a minimally susceptible graph G′ of size k− i. If thetop level of G′ has size j, then all vertices in the top level of G are connectedto r vertices of G′, with at least one in the top level of G′. Thus each vertexhas ar(k − i, j) options for the connections. The(k−ri)term accounts for theset of possible labels of the top level of G.To study asymptotics of m it is convenient to defineσr(k, i) =mr(k, i)(k − r)!((r − 1)!kr−1)k. (4.3.3)Substituting this in (4.3.1) givesσr(k, i) =k−r−i∑j=1Ar(k, i, j)σr(k − i, j) for i < k − r, (4.3.4)whereAr(k, i, j) =jii!(k − ik)(r−1)k ( (r − 1)!(k − i)r−1ar(k − i, j)j)i. (4.3.5)We make the following observation.Lemma 4.3.3. Let Ar(k, i, j) be as in (4.3.5). Put Ar(i, j) = jie−(r−1)i/i!.For any i < k − r and j ≤ k − r − i, we have that Ar(k, i, j) is increasing ink and converges to Ar(i, j).Proof. It is well known that for m > 0 we have (1−m/k)k is increasing andtends to e−m. Thusjii!(k − ik)(r−1)k→ Ar(i, j).The lemma follows by (4.3.5) and the following claim, a formula which willalso be of later use.934.3. Lower bound for pc(n, r)Claim 4.3.4. For all integers x ≥ r and 1 ≤ y ≤ x− r, we have that(r − 1)!xr−1ar(x, y)y= 1yy∑`=1(x− `x)r−1.Proof. For an integer m ≥ r, let (m)r = m!/(m− r)! denote the rth fallingfactorial of the integer m. Since(m)r − (m− 1)r = r(m− 1)r−1.it follows that(r − 1)!xr−1ar(x, y)y= (x)r − (x− y)rryxr−1= 1yy∑`=1(x− `x)r−1as required. Since each term on the right of Claim 4.3.4 is increasing to 1, the sameholds for their average. The proof is complete. 4.3.2 Upper bounds for susceptible graphsOur first task is to derive bounds on the number of minimally susceptiblegraphs of size k with i vertices in the top level. This relies on the recurrence(4.3.1).Lemma 4.3.5. Fix r ≥ 2. For all k > r and i ≤ k − r, we have thatmr(k, i) ≤ e−i−(r−2)k√i(k − r)!(kr−1(r − 1)!)k.Equivalently, σr(k, i) ≤ i−1/2e−i−(r−2)k.Proof. Since mr(k, k − r) = 1, it is straightforward to verify that the claimholds in the case that i = k − r.For the remaining cases i < k − r, we prove the claim by induction on k.Applying the inductive hypothesis to the right hand sum of (4.3.4), bounding944.3. 
Lower bound for pc(n, r)Ar(k, i, j) therein by Ar(i, j) using Lemma 4.3.3, and extending the sum toall j we haveσr(k, i) ≤∞∑j=1Ar(i, j)j−1/2e−j−(r−2)(k−i).Thus it suffices to prove that this sum is at most i−1/2e−i−(r−2)k. Usingthe definition of Ar(i, j) and cancelling the e−(2−r)k factors, we need thefollowingClaim 4.3.6. For any i ≥ 1 we have∞∑j=1jie−ii! j−1/2e−j ≤ i−1/2e−i.This is proved in Section 4.7.1.We remark that Claim 4.3.6 is fundamentally a pointwise bound on thePerron eigenvector of the infinite operator A2. (Other values of r follow sincethe influence of r cancels out.) This eigenvector decays roughly as e−i, butwith some lower order fluctuations. It appears that the√i correction can bereplaced by various other slowly growing functions of i. However, Claim 4.3.6fails for certain i without the√j term. 4.3.3 Susceptible subgraphs of Gn,pWith Lemma 4.3.5 at hand, we obtain upper bounds for the growth proba-bilities of r-percolations on Gn,p.A set I of size r is called k-contagious in the graph Gn,p, if there issome t so that |Vt(I,Gn,p)| = k, i.e., there is some time at which there areexactly k infected vertices. The set I is called (k, i)-contagious if in additionthe number of vertices infected at step t is i, i.e., |It(I,Gn,p)| = i. LetPr(k, i) = Pr(p, k, i) denote the probability that a given I ⊂ [n], with |I| = ris (k, i)-contagious. Let Pr(k) =∑i Pr(k, i) denote the probability that suchan I is k-contagious. Finally, let Er(k, i) and Er(k) denote the expectednumber of such subsets I.954.3. Lower bound for pc(n, r)We remark that Pr(k) is not the same as the probability of survival tosize k, which is given by ∑`≥k∑i>`−k Pr(`, i).Lemma 4.3.7. Let α > 0, and let p = θr(α, n) (as defined in Proposi-tion 4.3.1) and ε = npr = α/ logr−1 n. For i ≤ k − r and k ≤ n1/(r(r+1)), wehave thatPr(k, i) ≤ (1 + o(1))e−ε(k−ir )εk−r(k − r)! mr(k, i)where o(1) depends on n, but not on i, k.Proof. Let I ⊂ [n], with |I| = r, be given, and put`r(k, i) =e−ε(k−ir )εk−r(k − r)! mr(k, i)so that the lemma states Pr(k, i) ≤ (1 +o(1))`r(k, i). This follows by a unionbound: If I is (k, i)-contagious, then I is a contagious set for a minimallysusceptible subgraph G ⊂ Gn,p (perhaps not induced) of size k with i verticesinfected in the top level, and all vertices in v ∈ V (G)c are connected to atmost r − 1 vertices below the top level of G (so that V (G) = Vt(I,Gn,p), forsome t). There are( nk−r)choices for the vertices of G and mr(k, i) choicesfor its edges. For any such v and G, the probability that v is connected to rvertices below the top level of G is bounded from below by(k − ir)pr(1− p)k−i−r >(k − ir)pr(1− p)k.HencePr(k, i) <(nk − r)mr(k, i)pr(k−r)(1−(k − ir)pr(1− p)k)n−k.By the inequalities(nk) ≤ nk/k! and 1− x < e−x, it follows thatlog Pr(k, i)`r(k, i)< ε(k − ir)(1− (1− p)k(1− kn)).964.3. Lower bound for pc(n, r)By the inequality (1−x)y ≥ 1−xy, and since k ≤ n1/(r(r+1)), the right handside is bounded byεkr+1(p+ (1− pk)/n) ≤ εn1/r(p+ 1/n) 1as n→∞. Hence Pr(k, i) ≤ (1 + o(1))`r(k, i), as claimed. As a corollary we get a bound for Er(k, i).Lemma 4.3.8. Let α, β0 > 0. Put p = θr(α, n). For all k = β logn andi = γk, such that β ≤ β0, we have thatEr(k, i) . nµ logr(r−1) nwhereµ = µr(α, β, γ) = r+β log(αβr−1(r − 1)!)− αβrr! (1−γ)r−β(r−2+γ). (4.3.6)Here . denotes inequality up to a constant depending on α, β0, but not onβ, γ.Proof. Let r ≥ 2 and α, β0 > 0 be given. Put ε = npr. 
By Lemmas 4.3.5and 4.3.7, for all k = β logn and i = γk, with β ≤ β0, we have thatEr(k, i) ≤ (1 + o(1))(nr)(εkr−1(r − 1)!)kε−re−i−(r−2)k−ε(k−ir ) . nµ logr(r−1) n.The√i term from Lemma 4.3.5 is safely dropped for this upper bound. 4.3.4 Sub-critical boundsIn this section, we prove Proposition 4.3.1.The case of γ = 0 in Lemma 4.3.8 (corresponding to values of i suchthat i/k  1) is of particular importance for the growth of sub-criticalr-percolations. For this reason, we let µ∗(α, β) = µ(α, β, 0). The next resultin particular shows that β∗(α), as in Proposition 4.3.1, is well-defined.974.3. Lower bound for pc(n, r)Lemma 4.3.9. Let α > 0. Let αr be as in Proposition 4.3.1. Putβr(α) =((r − 1)!α)1/(r−1).(i) The function µ∗r(α, β) is decreasing in β, with a unique zero at β∗(α).(ii) We have thatµ∗r(α, βr(α)) = r − βr(α)(r − 1)2rand hence β∗(α) = βr(α) (resp. > or <) if α = αr (resp. > or <).The quantity β∗(α) also plays a crucial role in analyzing the growth ofsuper-critical r-percolations on Gn,p, see Section 4.4.5 below.Proof. For the first claim, we note that by setting γ = 0 in (4.3.6) we obtainµ∗r(α, β) = r + β log(αβr−1(r − 1)!)− αβrr! − β(r − 2). (4.3.7)Therefore∂∂βµ∗r(α, β) = 1 + log(αβr−1(r − 1)!)− αβr−1(r − 1)! .Since αβr(α)r−1/(r− 1)! = 1, the above expression is equal to 0 at β = βr(α)and negative for all other β > 0. Hence µ∗(α, β) is decreasing in β, asclaimed. Moreover, since limβ→0+ µ∗r(α, β) = r and limβ→∞ µ∗r(α, β) = −∞,β∗(α) is well-defined.We obtain the expression for µ∗r(α, βr(α)) in the second claim by (4.3.7)and the equality αβr(α)r−1/(r − 1)! = 1. The conclusion of the claimthus follows by the first claim, noting that βr(α) is decreasing in α andµ∗r(αr, βr(αr)) = 0 since βr(αr) = (r/(r − 1))2. We are ready to prove the main result of this section.Proof of Proposition 4.3.1. Let α < αr and δ > 0 be given. First, we showthat with high probability, Gn,p contains no m-contagious set, for m = β lognwith β ∈ [β∗(α) + δ, βr(α)].984.3. Lower bound for pc(n, r)Claim 4.3.10. For all β ≤ βr(α), we have that µr(α, β, γ) ≤ µ∗r(α, β).This is proved in Section 4.7.2.By Lemmas 4.3.8 and 4.3.9 and Claim 4.3.10, we find by summing overall O(logn) relevant k that the probability that such a set exists is bounded(up to a multiplicative constant) bynµ∗(α,β∗(α)+δ) logr(r−1)+1 n 1.It thus remains to show that with high probability, Gn,p has no m-contagious set I, for some m ≥ βr logn. To this end, note that if such a setI exists, then there is some t so that|Vt(I,Gn,p)| < βr logn ≤ |Vt+1(I,Gn,p)|.Letting k = |Vt(I,Gn,p)|, we find that for some k < βr logn there is a k-contagious set I, with m− k further vertices with r neighbours in Vt(I,Gn,p).The expected number of k-contagious sets with i vertices infected inthe top level is Er(k, i). Let pr(k, i) be the probability that for a givenset of size k with i vertices identified as the top level, there are at leastβr logn − k vertices r-connected to the set with at least one neighbour inthe top level. Hence the probability that Gn,p has a m-contagious set I forsome m ≥ βr logn is at most∑i<k<βr(α) lognEr(k, i)pr(k, i).The proposition now follows from the following claim, which is proved inSection 4.7.3.Claim 4.3.11. For all k < βr(α) logn and i ≤ k − r, we have thatEr(k, i)pr(k, i) . nµ∗r(α,βr(α)) logr(r−1) nwhere . denotes inequality up to a constant, independent of i, k.Indeed, by Claim 4.3.11, it follows, by summing over all O(log2 n) relevant994.4. 
Upper bound for pc(n, r)i, k, that the probability that Gn,p has an m-contagious set for some m ≥βr(α) logn is bounded (up to a constant) bynµ∗r(α,βr) logr(r−1)+2 n 1where the last inequality follows by Lemma 4.3.9, since α < αr and henceµ∗r(α, βr(α)) < 0. 4.4 Upper bound for pc(n, r)In this section, we prove Theorem 4.2.1. In light of Proposition 4.3.1, itremains to prove that for α > αr, with high probability Gn,p is susceptible.Fundamentally this is done using the second moment method. As discussedin the introduction, the main obstacle is showing that contagious sets aresufficiently independent for the second moment method to apply. To thisend, we restrict to a special type of contagious sets, which infect k verticeswith no triangles.As in the previous section, we fix r ≥ 2 throughout.4.4.1 Triangle-free susceptible graphsRecall that a graph is called triangle-free if it contains no subgraph which isisomorphic to K3.Definition 4.4.1. Let mˆr(k, i) denote the number of triangle-free graphsthat contribute to mr(k, i) (see Section 4.3.1). Put mˆr(k) =∑k−ri=1 mˆr(k, i).Following Section 4.3.1, we obtain a recursive lower bound for mˆr(k, i).We note that mˆr(k, k − r) = mr(k, k − r) = 1. For i < k − r we claim thatmˆr(k, i) ≥(k − ri)k−r−i∑j=1aˆr(k − i, j)imˆr(k − i, j) (4.4.1)whereaˆr(x, y) = max{0, ar(x, y)− 2ryxr−2}. (4.4.2)1004.4. Upper bound for pc(n, r)Note that (in contrast to the recursion for m(k, i)), this is only a lower bound.To see (4.4.1), we argue that of the ar(k − i, j) ways to connect a vertex inthe top level to lower levels, at most 2rj(k − i)r−2 create a triangle. This isso since the number of ways of choosing r vertices from k − i, including atleast one of the top j and including at least one edge is at mostjr(k − i− 2r − 2)+ jr(k − i− r)(k − i− 3r − 3)< 2jr(k − i)r−2,where the first (resp. second) term accounts for case that an edge selectedcontains (resp. does not contain) a vertex among the top j.Settingσˆr(k, i) =mˆr(k, i)(k − r)!((r − 1)!kr−1)k,(4.4.1) reduces toσˆr(k, i) ≥k−r−i∑j=1Aˆr(k, i, j)σˆr(k − i, j) (4.4.3)whereAˆr(k, i, j) =jii!(k − ik)(r−1)k ( (r − 1)!(k − i)r−1aˆr(k − i, j)j)i. (4.4.4)The following observation indicates that restricting to susceptible graphswhich are triangle-free does not have a significant effect on the asymptotics.Lemma 4.4.2. Let Aˆr(k, i, j) be as in (4.4.4) and let Ar(i, j) be as definedin Lemma 4.3.3. For any fixed i, j ≥ 1, we have that Aˆr(k, i, j)→ Ar(i, j),as k →∞.Proof. Fix i, j ≥ 1. From their definitions we have thatAˆr(k, i, j)Ar(k, i, j)=(aˆr(k, i, j)ar(k, i, j))i.Since ar(k, i, j) is of order ki and aˆr(k, i, j)− a(k, i, j) = O(ki−1), we have1014.4. Upper bound for pc(n, r)aˆr(k, i, j)/ar(k, i, j)→ 1. Since i is fixed, it follows by Lemma 4.3.3 thatlimk→∞Aˆr(k, i, j) = limk→∞Ar(k, i, j) = Ar(i, j). In order to get asymptotic lower bounds on mˆr(k, i) it is useful to furtherrestrict to graphs with bounded level sizes.Definition 4.4.3. For ` ≥ r, let mˆr,`(k) ≤ mˆr(k) be the number of graphsthat contribute to mˆr(k) which have level sizes bounded by ` (i.e., |Ii| ≤ `for all i). Let mˆr,`(k, i) be the number of such graphs with exactly i ≤ `vertices in the top level. Hence mˆr,`(k) =∑`i=1 mˆr,`(k, i).Observe that for fixed k, mˆr,`(k) is increasing in `, and equals mr(k) for` ≥ k − r.Lemma 4.4.2 will be used to prove asymptotic lower bounds for mˆ. Wheni is small, the resulting bounds are not sufficiently strong. Thus we alsomake use of the following lower bound on mˆr,`(k, i) for values of i which aresmall compared with k. 
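As an aside, both recursions are easy to evaluate exactly for small $k$. The following sketch is ours (plain Python); it computes $m_r(k,i)$ from (4.3.1)–(4.3.2) and the lower bound for $\hat m_r(k,i)$ given by (4.4.1)–(4.4.2), and prints the total counts. For small $k$ the crude correction term in (4.4.2) makes the lower bound far from tight; Lemma 4.4.5 below guarantees only that the two totals agree up to $e^{o(k)}$ factors as $k\to\infty$.

```python
from functools import lru_cache
from math import comb

R = 2  # the threshold parameter r >= 2 (fixed here, since the caches depend on it)

def a(x, y):
    # a_r(x, y) = C(x, r) - C(x - y, r): choices of r vertices among x
    # that include at least one of the top y (equation (4.3.2))
    return comb(x, R) - comb(x - y, R)

def a_hat(x, y):
    # (4.4.2): discard (at most 2 r y x^(r-2)) choices that could close a triangle
    return max(0, a(x, y) - 2 * R * y * x ** (R - 2))

@lru_cache(maxsize=None)
def m(k, i):
    """m_r(k, i) via the recurrence (4.3.1); m_r(k, k - r) = 1."""
    if i == k - R:
        return 1
    return comb(k - R, i) * sum(a(k - i, j) ** i * m(k - i, j)
                                for j in range(1, k - R - i + 1))

@lru_cache(maxsize=None)
def m_hat_lower(k, i):
    """Lower bound for the triangle-free count via (4.4.1)."""
    if i == k - R:
        return 1
    return comb(k - R, i) * sum(a_hat(k - i, j) ** i * m_hat_lower(k - i, j)
                                for j in range(1, k - R - i + 1))

if __name__ == "__main__":
    for k in range(R + 2, R + 21):
        tot = sum(m(k, i) for i in range(1, k - R + 1))
        tot_hat = sum(m_hat_lower(k, i) for i in range(1, k - R + 1))
        print(k, tot, tot_hat)
```

With this numerical picture in mind, we return to the lower bound on $\hat m_{r,\ell}(k,i)$ for small $i$ announced above.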
This is also used as a base case for an inductiveproof of lower bounds using Lemma 4.4.2.Lemma 4.4.4. For all relevant i, k and ` ≥ r such that k > r(r2 +1) + i+2,we have thatmˆr,`(k, i) ≥(k − ri)bˆr(k, i)imˆr,`(k − i)wherebˆr(k, i) =(k − i− r − 1r − 1)(1− r3k − i− r − 2).In particular mˆr,`(k, i) > 0 for such k.Proof. Let i, k, ` as in the lemma be given. We obtain the lemma by consid-ering the subset H of graphs contributing to mˆr,`(k, i), constructed as follows.To obtain a graph H ∈ H, select a subset U ⊂ [k]− [r] of size i for the verticesin the top level of H, and a minimally susceptible, triangle-free graph H ′on [k]− U so that [r] is a contagious set for H ′ with all level sizes boundedby ` and j vertices in the top level, for some 1 ≤ j ≤ min{k − r − i, `}.Let v denote the vertex in the top level of H ′ of largest index. For each1024.4. Upper bound for pc(n, r)u ∈ U , select a subset Vu ⊂ [k]− U of size r which contains v and none ofthe neighbours of v in H ′, and so that no v′, v′′ ∈ Vu are neighbours in H ′.Finally, let H be the minimally susceptible graph on [k] with subgraph H ′such that each vertex u ∈ U is connected to all vertices in Vu. By the choiceof H ′ and Vu, H contributes to mˆr,`(k, i). By the choice of v, for any choiceof U , H ′ and Vu, a unique graph H is obtained. Hence |H| ≤ mˆr,`(k, i).To conclude, we claim that, for each u ∈ U , the number of possibilitiesfor Vu is bounded from below by(k − i− r − 1r − 1)− r(k − i− r − 1)(k − i− r − 3r − 3)≥ bˆr(k, i).To see this, note that of the r(k − i) edges in H ′, there are r(r + 1) that areeither incident to v or else connect a neighbour of v in H ′ to another vertexbelow the top level of H ′. Thereforemˆr,`(k, i) ≥(k − ri)bˆr(k, i)∑jmˆr,`(k − i, j) =(k − ri)bˆr(k, i)mˆr,`(k − i)(where the sum is over 1 ≤ j ≤ min{k − r − i, `}) as claimed.By the choice of i, k, bˆr(k, i) > 0. Hence mˆr,`(k, i) > 0, since mˆr,`(k) > 0for all relevant k, `, as is easily seen (e.g., by considering minimally susceptible,triangle-free graphs of size k = nr +m, for some n ≥ 1 and m ≤ r, whichhave m vertices in the top level and r vertices in all levels below, and allvertices in level i ≥ 1 are connected to all r vertices in level i− 1). Lemma 4.4.5. As k →∞, we have thatmr(k) ≥ mˆr(k) ≥ e−o(k)e−(r−2)k(k − r)!(kr−1(r − 1)!)k.Comparing this with Lemma 4.3.5, we see that the number of triangle-free susceptible graphs of size k is not much smaller than the number ofsusceptible graphs (up to an error of eo(k)).Proof. The idea is to use spectral analysis of the linear recursion (4.4.5).1034.4. Upper bound for pc(n, r)However, some work is needed to write the recursion in a usable form.Putσˆr,`(k, i) =mˆr,`(k, i)(k − r)!((r − 1)!kr−1)kRestricting (4.4.3) to j ≤ `, it follows thatσˆr,`(k, i) ≥∑`j=1Aˆr(k, i, j)σˆr,`(k − i, j) for i ≤ `. (4.4.5)In order to express (4.4.5) in matrix form, we introduce the followingnotations. For an `× ` matrix M , let Mj , be the `× ` matrix whose jth rowis that of M and all other entries are 0. Letψ(M) =M1 M2 · · · M`−1 M`I`I`. . .I`where I` is the `× ` identity matrix and all empty blocks are filled with 0’s.For all relevant k, putΣˆk = Σˆk(r, `) =σˆkσˆk−1...σˆk−`+1where σˆk = σˆk(r, `) is the 1× ` vector with entries (σˆk)j = σˆr,`(k, j).Using this notation, (4.4.5) can be written asΣˆk ≥ ψ(Aˆk)Σˆk−1, (4.4.6)where Aˆk = Aˆk(r, `) is the `× ` matrix with entries (Aˆk)i,j = Aˆr(k, i, j).By Lemma 4.4.4, we have that all coordinates of Σˆk are positive forall k large enough. 
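For orientation, the objects just introduced are small enough to examine numerically before completing the spectral argument. The following sketch is ours (Python with NumPy, not part of the thesis); it builds $A(r,\ell)$ with entries $A_r(i,j) = j^i e^{-(r-1)i}/i!$ (from Lemma 4.3.3) and the block matrix $\psi(A)$, computes the Perron eigenvalue $\lambda(r,\ell)$, and checks both the characterization used below (that the Perron eigenvalue of $D_\lambda A$ equals $1$) and the lower bound $e^{-(r-2)}(e\ell)^{-1/\ell}$ derived below.

```python
import math
import numpy as np

def build_A(r, ell):
    """A(r, ell)[i-1, j-1] = j^i e^{-(r-1) i} / i!, for 1 <= i, j <= ell."""
    A = np.zeros((ell, ell))
    for i in range(1, ell + 1):
        for j in range(1, ell + 1):
            A[i - 1, j - 1] = j ** i * math.exp(-(r - 1) * i) / math.factorial(i)
    return A

def build_psi(A):
    """psi(A): top block row (A_1, ..., A_ell), where A_j keeps only row j of A,
    and identity blocks on the block subdiagonal."""
    ell = A.shape[0]
    P = np.zeros((ell * ell, ell * ell))
    for j in range(1, ell + 1):
        P[j - 1, (j - 1) * ell:j * ell] = A[j - 1, :]   # row j of block A_j
    for b in range(1, ell):
        P[b * ell:(b + 1) * ell, (b - 1) * ell:b * ell] = np.eye(ell)
    return P

if __name__ == "__main__":
    r = 2
    for ell in (2, 4, 6, 8):
        A = build_A(r, ell)
        lam = max(abs(np.linalg.eigvals(build_psi(A))))
        D = np.diag([lam ** (-i) for i in range(1, ell + 1)])
        rho = max(abs(np.linalg.eigvals(D @ A)))
        lower = math.exp(-(r - 2)) * (math.e * ell) ** (-1 / ell)
        print(ell, round(float(lam), 4), ">=", round(lower, 4),
              "; rho(D_lambda A) =", round(float(rho), 4))
    print("e^{-(r-2)} =", math.exp(-(r - 2)))
```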
Let A = A(r, `) denote the ` × ` matrix with entries1044.4. Upper bound for pc(n, r)Ai,j = Ar(i, j) (as defined in Lemma 4.3.3). For ε > 0, let Aε = Aε(r, `),be the `× ` matrix with entries (Aε)i,j = Ai,j − ε. By Lemma 4.4.2, for klarge enough each entry of Aˆk is greater than the same entry of Aε. SinceA > 0, for some εr,` > 0, we have that Aε > 0 for all ε ∈ (0, εr,`). Hence, byLemma 4.4.2 and (4.4.6), for any such ε > 0, there is a kε so thatΣˆkε+k ≥ ψ(Aε)kΣˆkε > 0 for k ≥ 0,with entries of Σkε positive. Therefore, up to a factor of e−o(k), the growthrate of σˆr,`(k) =∑i σˆr,`(k, i) is given by the Perron eigenvalue λ = λ(r, `) ofψ(A).Let Dλ = diag(λ−i : 1 ≤ i ≤ `). We claim that the Perron eigenvalue ofψ(A) is characterized by the property that the Perron eigenvalue of DλA is1. To see this, one simply verifies that if DλAv = v, thenvλ =λ`−1vλ`−2v...vsatisfies ψ(A)vλ = λvλ. If v has non-negative entries, then 1 is the Perroneigenvalue of DλA and λ the Perron eigenvalue of ψ(A).If λ < e−(r−2)(e`)−1/`, we claim that every row sum of DλA is greaterthan 1. Indeed, for all such λ, the sum of row i ≤ ` is (using the boundi! ≤ ei(i/e)i)(er−1λ)−i∑`j=1jii! > (er−1λ)−i `ii! >1ei((e`)1/` `i)i.Twice differentiating the log of the right hand side with respect to i, weobtain −(i−1)/i2. Therefore, noting that for i = ` the right hand side aboveequals to 1, and for i = 1 it equals (`/e)(e`)1/` ≥ 1 for all relevant `, theclaim follows.Since the spectral radius of a matrix is bounded below by its minimum1054.4. Upper bound for pc(n, r)row sum, it follows that for such λ, the spectral radius of DλA is greater than1. Since the spectral radius of DλA is decreasing in λ, the Perron eigenvalueλ(r, `) of ψ(A) is at least e−(r−2)(e`)−1/`, and hence lim inf`→∞ λ(r, `) ≥e−(r−2). Taking `→∞, we find thatmˆr(k) ≥ e−o(k)e−(r−2)k(k − r)!(kr−1(r − 1)!)kas required. We require a lower bound for the number of minimally susceptible graphsof size k with i = Ω(k) vertices in the top level in order to estimate thegrowth of super-critical r-percolations on Gn,p.Lemma 4.4.6. Let ε ∈ (0, 1/(r + 1)). For all sufficiently large k andi ≤ (ε/r)2k, we have thatmˆr(k, i) ≥ e−iε−(r−2)k−o(k)(k − r)!((k − i)kr−2(r − 1)!)kwhere o(k) depends on k, ε, but not on i.Although the proof is somewhat involved, the general scheme is straight-forward. We use Lemmas 4.4.4 and 4.4.5 to obtain a sufficient bound for i, kin a range for which i/k  1. Then, for all other relevant i, k we proceed byinduction, using (4.4.1). The inductive step (Claim 4.4.7 below) of the proofappears in Section 4.7.4.Proof. Fix some kr so thatkr > max{er/ε,r(r2 + 1) + 21− (ε/r)2}.Note that, for all k > kr and i ≤ (ε/r)2k, we have that k/ log2 k < (ε/r)2kand that Lemma 4.4.4 applies to mˆr(k, i) (setting ` = k − r, so thatmˆr,`(k, i) = mˆr(k, i)).1064.4. Upper bound for pc(n, r)For all relevant i, k, letρˆr(k, i) =mˆr(k, i)(k − r)!( (r − 1)!(k − i)kr−2)k. (4.4.7)By Lemma 4.4.5 there is some fr(k) k such thatmˆr(k) ≥ e−(r−2)k−fr(k)(k − r)!(kr−1(r − 1)!)k.Without loss of generality, we assume fr is non-decreasing.By Lemma 4.4.4, we find that for all k > kr and relevant i, ρˆr(k, i) isbounded from below bye−(r−2)(k−i)−fr(k−i)i! bˆr(k, i)i((k − i)r−1(r − 1)!)k−i ( (r − 1)!(k − i)kr−2)k.By the bound(nk) ≥ (n− k)k/k!,bˆr(k, i) ≥ (k − i− 2r)r−1(r − 1)!(1− r3k − i− r − 2).Therefore the lower bound for ρˆr(k, i) above is bounded from below by (usingthe inequality i! 
< ii)Cr(k, i)gr(k, i)e−(r−2)k−fr(k−i)−i log iwhereCr(k, i) =(1− 2rk − i)(r−1)i(1− r3k − i− r − 2)iandgr(k, i) = e(r−2)i(k − ik)(r−2)k.If r = 2, then gr ≡ 1. We note that, for r > 2,∂∂igr(k, i) = −(r − 2)ik − i gr(k, i) < 01074.4. Upper bound for pc(n, r)and so, for any such r, for any relevant k, gr(k, i) is decreasing in i. By theinequality (1− x)y > 1− xy, for any k > kr and i ≤ (ε/r)2k,Cr(k, i) > 1− r2ik − i −r3ik − i− r − 2> 1− 2ε21− (ε/r)2 −rε21− (ε/r)2 − (r + 2)/k> 1− 2/(r + 1)21− 1/r4 −1/r1− 1/r4 − (r + 2)/kr> 0since kr > er/ε > er(r+1), r ≥ 2, and ε < 1/(r + 1) (and noting that thesecond last line is increasing in r). Altogether, for some ξ′(r) > 0, we havethatρˆr(k, i) ≥ ξ′(r)e−(r−2)k−hr(k) for k > kr and i ≤ k/ log2 n (4.4.8)wherehr(k) = fr(k)− log gr(k,klog2 k)+ klog2 klog(klog2 k). (4.4.9)We note that h(k) k as k →∞.Claim 4.4.7. For some ξ = ξ(r, ε) > 0, for all k > kr and i ≤ (ε/r)2k, wehave that ρˆr(k, i) ≥ ξe−iε−(r−2)k−hr(k).Claim 4.4.7 is proved in Section 4.7.4.Since hr(k)  k and ξ depends only on r, ε, the lemma follows byClaim 4.4.7 and (4.4.7). 4.4.2 rˆ-bootstrap percolation on Gn,pWe define rˆ-percolation, a restriction of r-percolation, which informally haltsupon requiring a triangle. Formally, recall the definitions of It(I,G) andVt(I,G) given in Section 4.3.1. Let Iˆt = It if G contains a triangle-free1084.4. Upper bound for pc(n, r)subgraph H such that Vt(I,H) = Vt(I,G), and otherwise put Iˆt = ∅. PutVˆt =⋃s≤t Iˆs.Definition 4.4.8. Let Pˆr(k, i) = Pˆr(p, k, i), for some p = p(n), denote theprobability that for a given I ⊂ [n], with |I| = r, we have that |Vˆt(I,Gn,p)| = kand |Iˆt(I,Gn,p)| = i, for some t. Let Eˆr(k, i) denote the expected number ofsuch subsets I. We put Pˆr(k) =∑k−2i=1 Pˆr(k, i) and Eˆr(k) =∑k−ri=1 Eˆr(k, i).Using Lemma 4.4.6, we obtain lower bounds on the growth probabilitiesof rˆ-percolations on Gn,p.Lemma 4.4.9. Let α > 0. Put p = θr(α, n) and ε = npr = α/ logr−1 n. Fori ≤ k − r and k ≤ n1/(r(r+1)), we have thatPˆr(k, i) ≥ (1− o(1))e−ε(k−ir )εk−r(k − r)! mˆr(k, i)where o(1) depends on n, but not on i, k.Proof. Let I ⊂ [n], with |I| = r, be given. Putˆ`r(k, i) =e−ε(k−ir )εk−r(k − r)! mˆr(k, i).If for some V ⊂ [n] with |V | = k and I ⊂ V we have that the subgraphGV ⊂ Gn,p induced by V is minimally susceptible and triangle-free, I is acontagious set for GV with i vertices in the top level, and all vertices inv ∈ V c are connected to at most r − 1 vertices below the top level of GV ,then it follows that |Vˆt(I,Gn,p)| = k and |Iˆt(I,Gn,p)| = i for some t. HencePˆr(k, i) >(n− rk − r)mˆr(k, i)pr(k−r)(1− p)k2(1−(k − ir)pr)n.By the inequalities(nk) ≥ (n− k)k/k! and (1− x/n)n ≥ e−x(1− x2/n), itfollows thatPˆr(k, i)ˆ`r(k, i)>(1− kn)k(1− p)k21− (k − ir)2ε2n .1094.4. Upper bound for pc(n, r)For all large n, the right hand side is bounded from below by(1− kn)k (1− 1n1/r)k2 (1− k2rn)∼ 1since k ≤ n1/(r(r+1))  n1/(2r), as r ≥ 2. It follows that Pˆr(k, i) ≥ (1 −o(1))ˆ`r(k, i), where o(1) depends on n, but not on i, k, as required. 4.4.3 Super-critical boundsIn this section we show that, for α > αr, the expected number of super-critical rˆ-percolations on Gn,p which grow larger than the critical size ofβ∗(α) logn > βr(α) logn is large. The importance of β∗(α) is established inSection 4.4.5 below. Subsequent sections establish the existence of sets I ofsize r so that rˆ-percolation initialized at I grows larger than β∗(α) logn.Lemma 4.4.10. Let α, β0 > 0 and ε ∈ (0, 1/(r+ 1)). Put p = θr(α, n). 
Forall sufficiently large k = β logn and i = γk, with β ≤ β0 and γ ≤ (ε/r)2, wehave thatEˆr(k, i) ≥ nµε−o(1)whereµε = µr,ε(α, β, γ) = r+β log(αβr−1(1− γ)(r − 1)!)− αβrr! (1−γ)r−β(r−2 + εγ)and o(1) depends on α, ε, β0, but not on β, γ.Proof. Put δ = np2. By Lemmas 4.4.6 and 4.4.9, for large k = β logn andi = γk, with β ≤ β0 and γ ≤ (ε/r)2,Eˆr(k, i) ≥ ξ(n)(nr)(δ(k − i)kr−2(r − 1)!)kδ−re−iε−(r−2)k−δ(k−ir )−o(k) = nµε−o(1)where ξ(n) ∼ 1 depends only on n, and o(k) depends only on r, ε, β0. 1104.4. Upper bound for pc(n, r)We note that, for any α, ε > 0,µr,ε(α, β, 0) = µ∗r(α, β). (4.4.10)We now state the main result of this section.Lemma 4.4.11. Let ε < 1/(r+ 1). Put αr,ε = (1 + ε)αr and p = θr(αr,ε, n).For some δ(r, ε) > 0 and ζ(r, ε) > 0, if kn/ logn ∈ [β∗(αr,ε), β∗(αr,ε) + δ],for all large n, then Eˆr(kn) nζ as n→∞.The proof appears in Section 4.7.5. The argument is technical butstraightforward: the basic idea is to show that, for some ζ > 0 and all largen, for all relevant k there is some i so that Eˆr(k, i) > nζ . For k > β∗ logn,values of i with this property are on the order of k. We shall thus requireLemma 4.4.6.4.4.4 rˆ-percolations are almost independentFor a set I ⊂ [n], with |I| = r, let Eˆk(I) denote the event that rˆ-percolationon Gn,p initialized by I grows to size k, i.e., we have that |Vˆt(I)| = k forsome t. Hence Pˆr(k) = P(Eˆk(I)). In this section we show that for setsI 6= I ′ of size r and suitable values of k, p, the events Eˆk(I) and Eˆk(I ′) areapproximately independent. Specifically, we establish the followingLemma 4.4.12. Let α, β > 0 and put p = θr(α, n). Fix sets I 6= I ′ suchthat |I| = |I ′| = r and |I ∩ I ′| = m. For β logn ≤ k ≤ n1/(r(r+1)), we havethat P(Eˆk(I ′)|Eˆk(I)) is bounded from above by(k/n)r−m +O(k2r(kp)r(r−m))+(1 + o(1))Pˆr(k) if m = 0,o((n/k)m)Pˆr(k) if 1 ≤ m < r,where o(1) depends only on n.For sets I ⊂ V of sizes r and k, let Eˆ(I, V ) be the event that for some twe have Vˆt(I) = V . By symmetry these events all have the same probability.Since for a fixed I and different sets V these events are disjoint, we havePˆr(k) =(n−rk−r)P(Eˆ(I, V )).1114.4. Upper bound for pc(n, r)Lemma 4.4.13. Fix sets I ⊂ V with |I| = r and |V | = k.(i) For any set of edges E ⊂ [n]2 − V 2, the conditional probability thatE ⊂ E(Gn,p), given Eˆ(I, V ), is at most p|E|.(ii) For any u /∈ V and set of vertices W ⊂ [n] such that |W | = r and|V ∩ W | < r, the conditional probability that (u,w) ∈ Gn,p for allw ∈W , given Eˆ(I, V ), is at least pr(1− p)k.Proof. Let GV denote the subgraph of Gn,p induced by V . The event Eˆ(I, V )occurs if and only if for some t and triangle-free subgraph H ⊂ GV , we havethat Vt(I,H) = Vt(I,GV ) = V and all vertices in V c are connected to atmost r − 1 vertices below the top level of H (i.e., V − It(I,H)). This eventis increasing in the set of edges of GV , and decreasing in edges outside V . Bythe FKG inequality,P(E ⊂ E(Gn,p)|Eˆ(I, V )) ≤ P(E ⊂ E(Gn,p)) = p|E|.For claim (ii), let G be a possible value for GV on Eˆ(I, V ), with a subgraphH as above and i ≤ k− r vertices infected in the top level (i.e., It(I,H) = i).The conditional probability that u is connected to all vertices in W , givenEˆ(I, V ) and GV = G, is equal topr∑r−1−`0`=0(k−i−`0`)p`(1− p)k−i−`0−`∑r−1`=0(k−i`)p`(1− p)k−i−`where `0 < r is the number of vertices in W below the top level of H.Bounding the numerator by the ` = 0 term and the denominator by 1, theabove expression is at least pr(1− p)k−i−`0 ≥ pr(1− p)k. Hence, summingover the possibilities for G we obtain the second claim. 
The following result, a special case of Turán’s Theorem [127], plays ankey role in establishing the approximate independence of rˆ-percolations.Lemma 4.4.14 (Mantel’s Theorem [102]). If a graph G is triangle-free, thenwe have that e(G) ≤ bv(G)2/4c.In other words, a triangle-free graph has edge-density at most 1/2. The1124.4. Upper bound for pc(n, r)number 2r − 1 is key, since b(2r − 1)2/4c = r(r − 1), and thusr(2r − 1)− b(2r − 1)2/4c = r2. (4.4.11)Lemma 4.4.15. Let α > 0 and k ≤ n1/(r(r+1)). Put p = θr(α, n). Fix setsI ⊂ V and I ′ such that |I| = |I ′| = r, |V | = k and ` = |V ∩ I ′| < r. LetEˆk,q(I ′) denote the event that for some t we have that Vˆt(I ′) = V ′ for someV ′ such that |V ′| = k and |V ∩ V ′| = q. ThenP(Eˆk,q(I ′)|Eˆ(I, V )) ≤(1 + o(1))Pˆr(k) q = 0,o((n/k)`)Pˆr(k) 1 ≤ q < 2r − 1,k2r−1(kp)r(r−`) q ≥ 2r − 1,where o(1) depends only on n.Proof. Case i (q < 2r − 1). We claim thatP(Eˆk,q(I ′)|Eˆ(I, V )) ≤((nk)`( k2npq/4)q) k−r∑i=1Qˆr(k, i) (4.4.12)where Qˆr(k, i) is equal to(nk − r)mˆr(k, i)pr(k−r)(1−((k − ir)−(qr))pr(1− p)2k)n−2k.To see this, note that if Eˆk,q(I) occurs then for some V ′ such that |V ′| = k,I ′ ⊂ V ′, and |V ∩V ′| = q, we have that I ′ is a contagious set for a triangle-freesubgraph H ′ ⊂ Gn,p on V ′ with i vertices in the top level, for some i ≤ k− r,and all vertices in (V ∪ V ′)c are connected to at most r − 1 vertices belowthe top level of H ′. There are at most(kq − `)(n− (q − `)k − r − (q − `))≤(nk)`(k2n)q (nk − r)such subsets V ′. By Lemmas 4.4.13 and 4.4.14, for any such V ′ and i asabove, the conditional probability that such a subgraph H ′ exists, given1134.4. Upper bound for pc(n, r)Eˆ(I, V ), is bounded by mˆ(k, i)pr(k−r)−q2/4, since at most q2/4 edges of H ′join vertices in V ∩ V ′. By Lemma 4.4.13, for any u ∈ (V ∪ V ′)c and set V ′′of r vertices below the top level of H ′ with at most r − 1 vertices in V ∩ V ′,the conditional probability that u is connected to all vertices in V ′′ is at leastpr(1− p)k. Hence any such u is connected to all vertices in such a V ′′ withconditional probability at least((k−ir)− (qr)) pr(1− p)2k. The claim follows.To conclude, let ˆ`r(k, i) be as in the proof of Lemma 4.4.9, which recallshows that Pˆr(k, i) ≥ (1− o(1))ˆ`r(k, i) as k →∞, where o(1) depends onlyon n. We have, by the inequalities(nk) ≤ nk/k! and 1− x < e−x, thatlog Qˆr(k, i)ˆ`r(k, i)< ε(k − ir)(1− (1− p)2k(1− 2kn))+ εqr.By the inequality (1− x)y ≥ 1− xy, and since k ≤ n1/(r(r+1)), it follows thatthe right hand side is at most εn1/r(p+ 1/n) + εqr ∼ 0, and soQˆr(k, i) ≤ (1 + o(1))ˆ`r(k, i) ≤ (1 + o(1))Pˆr(k, i)where o(1) depends only on n. Hencek−r∑i=1Qˆr(k, i) ≤ (1 + o(1))k−r∑i=1Pˆr(k, i) = (1 + o(1))Pˆr(k).Finally, case (i) follows by (4.4.12) and noting thatnpq/4k2>npr/2k2≥ n1/2−2/(r(r+1))(αlogr−1 n)1/2 1since q < 2r, k ≤ n1/(r(r+1)) and r ≥ 2.Case ii (q ≥ 2r − 1). Put q∗ = 2r − 1 − `. If Eˆk,q(I ′) occurs, then forsome {vj}q∗j=1 ⊂ V − I ′ and non-decreasing sequence {tj}q∗j=1, we have thatvj ∈ Iˆtj (I ′) and Vˆj = Vˆtj−1(I ′) satisfy |Vˆq∗ | < k and Vˆj ∩ (V − I ′) ⊂⋃i<j{vi}.Informally, tj is the jth time that rˆ-percolation initialized by I ′ infects avertex in V − I ′. It follows that Gn,p contains a triangle-free subgraph on{vj}q∗j=1 ∪ Vˆq∗ . Since vj ∈ Iˆtj (I ′), note that vj is r-connected to Vˆj . Hence,1144.4. Upper bound for pc(n, r)by Lemma 4.4.14 and (4.4.11), there are at leastrq∗ − b(2r − 1)2/4c = r(r − `)edges between {vj}q∗j=1 and Vˆq∗ − V . 
Thus, by Lemma 4.4.13, the condi-tional probability of Eˆk,q(I ′), given Eˆ(I, V ), is bounded by kq∗(kp)r(r−`) ≤k2r−1(kp)r(r−`), as claimed. Using Lemma 4.4.15 we establish the main result of this section.Proof of Lemma 4.4.12. Fix a sequence of sets {V`}r`=m such that I ⊂ V`and ` = |V` ∩ I ′|. By symmetry, we have thatP(Eˆk(I ′)|Eˆk(I)) =(n− rk − r)−1 r∑`=m(n− r − (`−m)k − r − (`−m))P(Eˆk(I ′)|Eˆ(I, V`))≤r∑`=m(k/n)`−mP(Eˆk(I ′)|Eˆ(I, V`)).If ` = m, then by Lemma 4.4.15, summing over q ∈ [`, k], we getP(Eˆk(I ′)|Eˆ(I, Vm)) ≤(1 + o(1))Pˆr(k) + k2r(kp)r2 m = 0,o((n/k)m)Pˆr(k) + k2r(kp)r(r−m) 1 ≤ m < r.Likewise, for any m < ` < r,( kn)`−mP(Eˆk(I ′)|Eˆ(I, V`)) ≤ ( kn)`−m(o((nk )`)Pˆr(k) + k2r(kp)r(r−`))= o((nk )m)Pˆr(k) + k2r(kp)r(r−m)(nprkr−1)m−`≤ o((nk )m)Pˆr(k) + k2r(kp)r(r−m)(αβr−1)m−`= o((nk )m)Pˆr(k) +O(k2r(kp)r(r−m)).Finally, for ` = r we bound P(Eˆk(I ′)|Eˆ(I, Vr)) ≤ 1. Summing over ` ∈ [m, r]we obtain the result. 1154.4. Upper bound for pc(n, r)4.4.5 Terminal r-percolationsIn this section, we establish the importance of β∗(α) to the growth of super-critical r-percolations. Essentially, we find that an r-percolation on Gn,p,having grown larger than β∗(α) logn, with high probability continues togrow.Definition 4.4.16. We say that I ⊂ [n] is a terminal (k, i)-contagious setfor Gn,p if |Vτ (I,Gn,p, r)| = k and |Iτ (I,Gn,p, r)| = i.Lemma 4.4.17. Let α > αr and β∗r (α) < β1 < β2. Put p = θr(α, n). Withhigh probability, Gn,p has no terminal m-contagious set, with m = β logn,for all β ∈ [β1, β2].Proof. If r-percolation initialized by I ⊂ [n] terminates at size k with ivertices in the top level, then I is a contagious set for some subgraphH ⊂ Gn,p of size k with i vertices in the top level, and all vertices in V (H)care connected to at most r − 1 vertices in V (H). Hence the probability thata given I is as such is bounded by(nk − r)mr(k, i)pr(k−r)(1−(kr)pr(1− p)r)n−k.For k ≤ β2 logn and relevant i, we have that1−(kr)pr(1− p)r = 1−(kr)pr +O(n−1)where O(n−1) depends on α, β2, but not on k/ logn and i/k. Put ε = npr.By Lemma 4.3.5 (and the inequalities(nk) ≤ nk/k! and 1 − x < e−x), itfollows that the expected number of terminal (k, i)-contagious sets, withk = β logn and i = γk, for some β ≤ β2, is bounded (up to a constant) by(nr)(εkr−1(r − 1)!)kε−re−i−(r−2)k−ε(kr) . nµ∗r(α,β)−βγ logr(r−1) nwhere . denotes inequality up to a constant depending on α, β2, but not onβ, γ.1164.4. Upper bound for pc(n, r)By Lemma 4.3.9, we have that µ∗r(α, β) ≤ µ∗r(α, β1) < 0 for all β ∈ [β1, β2].Hence, summing over the O(log2 n) relevant values of i, k, we find thatthe probability that Gn,p contains a terminal m-contagious set for somem = β logn, with β ∈ [β1, β2], is bounded (up to a constant) bynµ∗r(α,β1) logr(r−1)+2 n 1as required. 4.4.6 Almost sure susceptibilityFinally, we complete the proof of Theorem 4.2.1. Using Lemmas 4.4.11,4.4.12 and 4.4.17, we argue that if α > αr, then with high probability Gn,pcontains a large susceptible subgraph. By adding independent random graphswith small edge probabilities, we deduce that percolation occurs with highprobability.Proof of Theorem 4.2.1. Proposition 4.3.1 gives the sub-critical case α < αr.Assume therefore that α > αr. Let G∗,Gi, for i ≥ 0, be independent randomgraphs with edge probabilities p∗ = θr(αr+ε, n) and pi = 2−i(r−1)/rpε, wherepε = θr(ε, n). Moreover, let ε > 0 be sufficiently small so that G = G∗∪⋃i≥0 Giis a random graph with edge probabilities at most p = θr(α, n). 
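That such an $\varepsilon$ exists is a quick calculation, which we record here for convenience (it is ours, not part of the thesis): by a union bound, the edge probability of $G_*\cup\bigcup_{i\ge 0}G_i$ is at most $p_* + \sum_{i\ge 0} p_i = \theta_r(\alpha_r+\varepsilon,n) + \theta_r(\varepsilon,n)/(1-2^{-(r-1)/r})$, and since $\theta_r(c,n)$ is proportional to $c^{1/r}$, it suffices that $(\alpha_r+\varepsilon)^{1/r} + \varepsilon^{1/r}/(1-2^{-(r-1)/r}) \le \alpha^{1/r}$, which holds for all small $\varepsilon$ because $\alpha>\alpha_r$. The sketch below (plain Python) simply scans for such an $\varepsilon$ numerically.

```python
import math

def total_over_scale(r, eps):
    """(alpha_r + eps)^(1/r) + eps^(1/r)/(1 - 2^(-(r-1)/r)): the union bound on
    the edge probability of G_* union (union of G_i), divided by (n log^(r-1) n)^(-1/r)."""
    a_r = math.factorial(r - 1) * ((r - 1) / r) ** (2 * (r - 1))
    return (a_r + eps) ** (1 / r) + eps ** (1 / r) / (1 - 2 ** (-(r - 1) / r))

if __name__ == "__main__":
    for r in (2, 3):
        a_r = math.factorial(r - 1) * ((r - 1) / r) ** (2 * (r - 1))
        alpha = 1.1 * a_r                      # any fixed alpha > alpha_r
        eps = alpha - a_r
        while total_over_scale(r, eps) > alpha ** (1 / r):
            eps /= 2                           # shrink eps until the union fits under theta_r(alpha, n)
        print("r =", r, " alpha =", round(alpha, 4), " a valid eps <=", eps)
```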
Thus, toshow that Gn,p is susceptible, it suffices to show that G is susceptible.Claim 4.4.18. Let A > 0. With high probability, the graph G∗ contains asusceptible subgraph on some set U0 ⊂ [n] of size |U0| ≥ A logn.Proof. Using Lemmas 4.4.11 and 4.4.12, we show by the second momentmethod that, with high probability, G∗ contains a susceptible subgraph ofsize at least (β∗r (α) + δ0) logn, for some δ0 > 0. By Lemma 4.4.17, this givesthe claim.Recall that Lemma 4.4.11 provides δ, ζ > 0 so that if kn/ logn ∈ [β∗(α) +δ/2, β∗(α) + δ], then Eˆr(kn) nζ . Fix such a sequence kn. For each n, fix1174.4. Upper bound for pc(n, r)In ⊂ [n] with |In| = r. By Lemma 4.4.12, it follows that∑IP(Eˆkn(I)|Eˆkn(In))Eˆr(kn)≤ 1 + o(1) +(nr)−1 r−1∑m=1(n−mr −m)o((n/kn)m)+ n−ζr−1∑m=0(n−mr −m)(O(k2rn (knp∗)r(r−m)) + (kn/n)r−m)≤ 1 + o(1) +r−1∑m=1o((r/kn)m)+ n−ζr−1∑m=0(O(k2rn ((knp∗)rn)r−m) + (kn)r−m)= 1 + o(1) +O(n−ζ log3r n)∼ 1where the sum is over I 6= In with |I| = r, and |I ∩ In| = m for some0 ≤ m < r. Hence, by the second moment method, with high probabilitysome rˆ-percolation on G∗ grows to size kn and thus G∗ contains a suceptiblesubgraph of size kn, as required. As discussed, the claim follows by the choiceof kn and Lemma 4.4.17. Claim 4.4.19. There is some A = A(ε) so that if U0 is a set of size|U0| ≥ A logn, then with high probability, r-percolation on ⋃i≥1 Gi initializedat U0 infects a set of vertices of order n/ logn.Proof. Let A = 2r(16r/ε)1/(r−1). Moreover assume that n is sufficientlylarge and ε is sufficiently small so that A ≥ 2 and A(21−rε/ logn)1/r ≤ 1/2.We define a sequence of disjoint sets Ui as follows. Given Ui, we considerall vertices not in U0, . . . , Ui, and add to Ui+1 some 2i+1A logn vertices thatare r-connected in Gi+1 to Ui (say, those of lowest index).We first argue that, as long as at most n/2 vertices are included in⋃ij=1 Uj and 2i ≤ n/ log2 n, the probability that we can find 2i+1A lognvertices to populate Ui+1 is at least 1−n−1. Indeed, a vertex not in ⋃ij=1 Uj1184.5. Time dependent branching processesis at least r-connected in Gi+1 to Ui with probability bounded from below by(|Ui|r)pri+1(1− pi+1)|Ui|−r ≥( |Ui|pi+1r)r(1− |Ui|pi+1) ≥ 12( |Ui|pi+1r)r,since, for all large n,|Ui|pi+1 = 2−(r−1)/r(A logn)(2iεn logr−1 n)1/r≤ A(21−rεlogn)1/r≤ 12 .Hence the expected number of such vertices is at leastn212( |Ui|pi+1r)r= ε4r(A2r)r−1(2iA logn) = 2i+2A lognby the choice of A. Therefore by Chernoff’s bound, such a set Ui+1 of size2i+1A logn can be selected with probability at least 1− exp(−2i−1A logn) ≥1−n−1, since A ≥ 2 and i ≥ 0, as required. Since the number of levels beforereaching n/2 vertices is O(logn), the claim follows. By Claims 4.4.18 and 4.4.19, with high probability G∗ ∪⋃i≥1 Gi containsa susceptible subgraph on some U ⊂ [n] of order n/ logn. To conclude,we observe that given this, by adding G0 we have that G = G∗ ∪ ⋃i≥0 Gi issusceptible with high (conditional) probability. Indeed, the expected numberof vertices in U c which are connected in G0 to at most r − 1 vertices of U isbounded from above bynr−1∑j=0(|U |j)pj0(1− p0)|U |−j  n(|U |p0)re−p0(|U |−r)  nre−n(1−1/r)/2  1.Hence G is susceptible with high probability, as required. 4.5 Time dependent branching processesIn this section, we prove Theorem 4.2.5, giving estimates for the survivalprobabilities for a family of non-homogenous branching process which are1194.5. 
Time dependent branching processesclosely related to contagious sets in Gn,p.Recall that in our branching process, the nth individual has a Poissonnumber of children with mean( nr−1)ε. This does not specify the order of theindividuals, i.e. which of these children is next. While the order would affectthe resulting tree, the choice of order clearly does not affect the probability ofsurvival. In light of this, we can use the breadth first order: Define generation0 to be the first r − 1 individuals, and let generation k be all children ofindividuals from generation k − 1. All individuals in a generation appear inthe order before any individual of a later generation. Let Yt be the size ofgeneration t, and St =∑i≤t Yi.Let Ψr(k, i) be the probability that for some t we have St = k and Yt = i.Lemma 4.5.1. We have thatΨr(k, i) =e−ε(k−ir )εk−r(k − r)! mr(k, i).Proof. We first give an equivalent branching process. Instead of each indi-vidual having a number of children, children will have r parents. We startwith r individuals (indexed 0, . . . , r − 1), and every subset of size r of thepopulation gives rise to an independent Poi(ε) additional individuals. Thusthe initial set of r individuals produces Poi(ε) further individuals, indexedr, . . . . Individual k together with each subset of r − 1 of the previous indi-viduals has Poi(ε) children, so overall individual k has Poi(( kr−1)ε)childrenwhere k is the maximal parent.Let XS be the number of children of a set S of individuals. A graphcontributing to mr(k, i) requires Poi(ε) variables to equal XS , so the prob-ability is ∏ e−εεXs/XS !. Up to generation t this considers (k−ir ) sets, and∑XS = k − r, giving the terms involving ε in the claim. The combinatorialterms ∏XS ! and (k − r)! come from possible labelings of the graph. Proof of Theorem 4.2.5. Up to the o(1) term appearing in the statement ofthe theorem, the survival of (Xt) is equivalent to the probability pS that forsome t we have that St ≥ kr, where (St)t≥0 is as defined above Lemma 4.5.11204.6. Graph bootstrap percolationand kr = kr(ε) is as in the theorem. By Lemma 4.5.1,pS ≥∑iΨr(kr, i) ≥ e−ε(krr )εkr−r(kr − r)!∑imr(kr, i) ≥ e−ε(krr )εkr−r(kr − r)! mr(kr).By Lemma 4.4.5, as ε→ 0, the right hand side is bounded from below bye−o(kr)e−(r−2)kr−ε(krr )(εkr−1r(r − 1)!)krε−r = e−(r−1)2rkr(1+o(1)).On the other hand, we note that the formula for Ψr(k, i) in Lemma 4.5.1agrees with the upper bound for Pr(k, i) in Lemma 4.3.7 (up to the 1 + o(1)factor). Hence, using the bounds in Lemma 4.3.5 and slightly modifying ofthe proof of Proposition 4.3.1 (since here we have Poisson random variablesinstead of Binomial random variables), it can be shown thatpS ≤ eo(kr) e−ε(krr )εkr−r(kr − r)! mr(kr) = e− (r−1)2rkr(1+o(1))completing the proof. 4.6 Graph bootstrap percolationFix r ≥ 2 and a graph H. We say that a graph G is (H, r)-susceptible if forsome H ′ ⊂ G we have that H ′ is isomorphic to H and V (H) is a contagiousset for G. We call such a subgraph H ′ a contagious copy of H. Hence a seed,as discussed in Section 4.2.2, is a contagious clique. Let pc(n,H, r) denotethe infimum over p > 0 such that Gn,p is (H, r)-susceptible with probabilityat least 1/2.By the arguments in Sections 4.3 and 4.4, with only minor changes, weobtain the following result. We omit the proof.Theorem 4.6.1. Fix r ≥ 2 and H ⊂ Kr with e(H) = `. Putαr,` = (r − 1)!((r − 1)2r2 − `)r−1.1214.7. 
Technical lemmasAs n→∞,pc(n,H, r) =(αr,`n logr−1 n)1/r(1 + o(1)).We obtain Theorem 4.2.4, from which Theorem 4.2.2 follows, as a specialcase.Proof of Theorem 4.2.4. The result follows by Theorem 4.6.1, taking r = 2and ` = 1, in which case α2,1 = 1/3. 4.7 Technical lemmasWe collect in this section several technical results used above.4.7.1 Proof of Claim 4.3.6Proof of Claim 4.3.6. By the bound i! >√2pii(i/e)i, it suffices to verify that(e/i)i√2piΛ(i) ≤ 1 for i ≥ 1, (4.7.1)where Λ(i) = Li(−i+1/2, 1/e) and Li(s, z) = ∑∞j=1 zjj−s is the polylogarithmfunction.Let Γ denote the gamma function. From the relationship between Liand the Herwitz zeta function, it can be shown that Λ(i)/Γ(i+ 1/2) ∼ 1, asi → ∞, and hence (e/i)iΛ(i) → √2pi, as i → ∞. It appears (numerically)that (e/i)iΛ(i) increases monotonically to√2pi, however this is perhaps notsimple to verify (or in fact true). Instead, we find a suitable upper boundfor Λ(i).Claim 4.7.1. For all i ≥ 1, we have thatΛ(i) < Γ(i+ 1/2)(1 + abi)where a = ζ(3/2) and b = e/(2pi), and ζ is the Riemann zeta function.1224.7. Technical lemmasProof. For all |u| < 2pi and s /∈ N, we have the series representationLi(s, eu) = Γ(1− s)(−u)s−1 +∞∑`=0ζ(s− `)`! u`.HenceΛ(i) = Γ(i+ 1/2) +∞∑`=0(−1)``! ζ(1/2− i− `). (4.7.2)Recall the functional equation for ζ,ζ(x) = 2xpix−1 sin(pix/2)Γ(1− x)ζ(1− x).Therefore, since ζ(1/2 + x) > 0 is decreasing in x ≥ 1 we have that, for allrelevant i, `,|ζ(1/2− i− `)| ≤ a√2piΓ(`+ i+ 1/2)(2pi)`+i < aΓ(`+ i+ 1/2)(2pi)`+i . (4.7.3)Applying (4.7.2),(4.7.3) (and the inequalities Γ(x+ `) < (x+ `− 1)`Γ(x),`! >√2pi`(`/e)`, and (1 + x/`)` < e`), we find that, for all i ≥ 1,Λ(i)Γ(i+ 1/2) − 1 <a(2pi)i∞∑`=0(`+ i− 1/2)`(2pi)``!<abiei(1 +∞∑`=11√2pi`(e2pi(1 + i− 1/2`))`)< abi(1e+ 1√2epi∞∑`=1(e2pi)`)< abiestablishing the claim. By Claim 4.7.1, the formulaΓ(i+ 1/2) =√pii!4i(2ii),1234.7. Technical lemmasand the bounds (2ii)<4i√pii(1− 19i)andi! <√2pii(ie)i (1− 112i)−1(valid for all i ≥ 1), we find that(e/i)i√2piΛ(i) < 439i− 112i− 1(1 + abi) for i ≥ 1. (4.7.4)Differentiating the right hand side of (4.7.4), and dividing by the positiveterm 4/(3(12i− 1)2), we obtain3 + abi(3 + log(b)(108i2 − 12i+ 1))which, for i ≥ 11, is bounded from below by3 + 108abi log(b)i2 > 3− 237bii2 > 0.Hence, for i ≥ 11, the right hand side of (4.7.4) increases monotonically to 1as i→∞. It follows that (4.7.1) holds for all i ≥ 11. Inequality (4.7.1), fori ≤ 10, can be verified numerically (e.g., by interval arithmetic), completingthe proof of Claim 4.3.6. 4.7.2 Proof of Claim 4.3.10Proof of Claim 4.3.10. By (4.3.6), we have that∂2∂γ2µr(α, β, γ) = − αβr(r − 2)!(1− γ)r−2 < 0.The result thus follows, noting that∂∂γµr(α, β, γ) = −β(1− αβr−1(r − 1)!(1− γ)r−1)1244.7. Technical lemmasand hence for any ξ < 1 and γ ∈ (0, 1),∂∂γµr(α, ξβr(α), γ) = −ξβr(α)(1− (ξ(1− γ))r−1)< 0. 4.7.3 Proof of Claim 4.3.11Proof of Claim 4.3.11. By Lemma 4.3.8, for all k = β logn and i = γk as inthe lemma, we have thatEr(k, i) . nµr(α,β,γ) logr(r−1) n. (4.7.5)We find a suitable upper bound for pr(k, i) as follows. For β < βr(α),put `β = ξβ logn, where ξβ = βr(α)− β. For a given set V of size k with ivertices identified as the top level, there are ar(k, i) ways to select r verticesin V with at least one in the top level. Hence, for k = β logn with β < βr(α),it follows thatpr(k, i) ≤(n`β)(ar(k, i)pr)`β .By Claim 4.3.4, we have that ar(k, i) < ikr−1/(r − 1)!. Hence, applying thebound(n`) ≤ (ne/`)`, we find thatpr(k, i) ≤(eαβrγξβ(r − 1)!)`β.Hence, by Lemma 4.3.8,Er(k, i)pr(k, i) . 
nµ¯r(α,β,γ) logr(r−1) n (4.7.6)whereµ¯r(α, β, γ) = µr(α, β, γ) + ξβ log(eαβrγξβ(r − 1)!). (4.7.7)Therefore, by (4.7.5),(4.7.6), we obtain Claim 4.3.11 by the followingfact.1254.7. Technical lemmasClaim 4.7.2. For any γ ∈ (0, 1), we have thatmin{µr(α, β, γ), µ¯r(α, β, γ)} ≤ µ∗r(α, βr)for all β ∈ (0, βr(α)].Proof. For convenience, we simplify notations as follows. Put βr = βr(α). Weparametrize β using a variable δ: for δ ∈ (0, 1], let βδ = δβr. For γ ∈ (0, 1),let µr(δ, γ) = µr(α, βδ, γ), µ¯r(δ, γ) = µ¯r(α, βδ, γ), and δγ = δγ(r) = 1−√γ/r.Finally, put µ∗r = µr(1, 0) = µ∗r(α, βr). In this notation, Claim 4.7.2 statesthatmin{µr(δ, γ), µ¯r(δ, γ)} ≤ µ∗r , for δ ∈ (0, 1].Since αβr−1r /(r−1)! = 1, it follows that αβr−1δ /(r−1)! = δr−1. Therefore,by (4.3.6),(4.7.7), we have thatµr(δ, γ) = r − βr(δrr(1− γ)r + δ(r − 2 + γ)− (r − 1)δ log δ)(4.7.8)andµ¯r(δ, γ) = µr(δ, γ) + βr(1− δ) log(eγδr1− δ). (4.7.9)We obtain Claim 4.7.2 by the following subclaims (as we explain belowthe statements).Sub-claim 4.7.3. For any fixed γ ∈ (0, 1), we have that µr(δ, γ) and µ¯r(δ, γ)are convex and concave in δ ∈ (0, 1), respectively.Sub-claim 4.7.4. For γ ∈ (0, 1), we have that(i) µr(1, γ) < µ∗r,(ii) µr(δγ , γ) < µ∗r, and(iii) eγδrγ/(1− δγ) < 1.Indeed, by Sub-claim 4.7.4(ii),(iii), we have that µ¯r(δγ , γ) < µr(δγ , γ) <µ∗r . Therefore, noting that limδ→1− µ¯r(δ, γ) = µr(1, γ), limδ→0+ µr(δ, γ) = r,and limδ→0+ µ¯r(δ, γ) = −∞ (see (4.7.8),(4.7.9)), we then obtain Claim 4.7.2by applying Sub-claims 4.7.3 and 4.7.4(i).1264.7. Technical lemmasProof of Sub-claim 4.7.3. By (4.7.8), for any γ ∈ (0, 1), we have that∂2∂δ2µr(δ, γ) =(r − 1)βrδ(1− δr−1(1− γ)r) > 0for all δ ∈ (0, 1). Moreover, by (4.7.8),(4.7.9), the above expression, andnoting that∂2∂δ2(1− δ) log(eγδr1− δ)= −r − (r − 1)δ2δ2(1− δ) ,it follows that, for any γ ∈ (0, 1),∂2∂δ2µ¯r(δ, γ) = − βrδ2(1− δ)(r − (r − 1)δ2 − δ(1− δ)(1− δr−1(1− γ)r))= − βrδ2(1− δ) (1 + (r − 1)(1− δ)(1 + δr(1− γ)r))< 0for all δ ∈ (0, 1). The claim follows. Proof of Sub-claim 4.7.4. Note that µ∗r = r−βr(r− 1)2/r. Since, by (4.7.8),µr(1, γ) = r − βr((1− γ)rr+ r − 2 + γ)claim (i) follows immediately by the inequality (1− γ)r > 1− rγ.Next, we note that by (4.7.8), to establish claim (ii) we need to showthat fr(δγ , γ) > (r − 1)2/r, wherefr(δ, γ) =δrr(1− γ)r + δ(r − 2 + γ)− (r − 1)δ log δ.We deal with the cases γ ∈ (0, 1/r) and γ ∈ [1/r, 1) separately. By theinequality log δ ≤ 1− δ, we have thatfr(δ, γ) > δ(r − 2 + γ)− (r − 1)δ(1− δ).The right hand side is equal to (r− 1)2/r when δ = δγ and γ = 1/r or γ = 1.Setting δ = δγ in the right hand side, and differentiating twice with respect1274.7. Technical lemmasto γ, we obtain −(1 + 3γ)/(4√γ3r) < 0. It follows that fr(δ, γ) > (r− 1)2/rfor all γ ∈ [1/r, 1). For the case γ ∈ (0, 1/r), we note that by the bound(1− γ)r > 1− γr,fr(δ, γ) >δrr(1− γr) + δ(r − 2 + γ)− (r − 1)δ log δ.Setting ζ =√γ/r, fr(δγ , γ) is thus bounded from below by(1− ζ)rr(1− (rζ)2) + (1− ζ)(r − 2 + rζ2)− (r − 1)(1− ζ) log(1− ζ).Hence it suffices to show that this expression is bounded from below by(r − 1)2/r for all ζ ∈ (0, 1/r). To this end, we note that it is equal to(r − 1)2/r when ζ = 0, and claim that it is increasing in ζ ≤ 1/r. Indeed,differentiating with respect to ζ, we obtain(1− ζ)r−1(r(r + 2)ζ2 − 2rζ − 1)− 3rζ2 + 2rζ + 1 + (r − 1) log(1− ζ).Note that r(r + 2)ζ2 − 2rζ − 1 < 0 for all ζ ∈ [0, 1/r]. 
Hence, since(1 − ζ)r−1 ≤ (1 + (r − 1)ζ)−1 and log(1 − ζ) ≥ −ζ(1 + ζ) for all relevantζ ≤ 1/2, the above expression is bounded from below by(r − 1)ζ2 (2(1− 2ζ)r + ζ)1 + (r − 1)ζ > 0.It follows that fr(δγ , γ) > (r − 1)2/r for all γ ∈ (0, 1/r). Altogether, claim(ii) is proved.Finally, for claim (iii), let gr(δ, γ) = eγδr/(1− δ). In this notation, claim(iii) states that gr(δγ , γ) < 1. To verify this inequality, we note that∂∂δgr(δ, γ) =eγδr−1(1− δ)2 (r − (r − 1)δ)and hence∂∂δgr(δγ , γ) = eδr−1γ (r + (r − 1)√γr).1284.7. Technical lemmasTherefore, noting that∂∂γgr(δ, γ) |δ=δγ=eδrγ1− δγ = eδr−1γ(√rγ− 1)and recalling that∂∂γδγ = − 12√γrit follows that∂∂γgr(δγ , γ) =eδr−1γ2(√rγ− (r + 1)).Therefore, for any r ≥ 2, gr(δγ , γ) is maximized at γ = r/(r + 1)2. By theinequality (1− x/n)n < e−x, we find thatgr(r/(r + 1)2) =err + 1(1− 1r + 1)r<rr + 1(1− 1r + 1)−1= 1giving the claim. As discussed, Sub-claims 4.7.3 and 4.7.4 imply Claim 4.7.2. To conclude, we recall that Claim 4.7.2 implies Claim 4.3.11. 4.7.4 Proof of Claim 4.4.7Proof of Claim 4.4.7. We recall the relevant quantities defined in the proofof Lemma 4.4.6, see (4.4.7),(4.4.8),(4.4.9). We have thatρˆr(k, i) ≥ ξ′e−(r−2)k−hr(k) for k > kr and i ≤ k/ log2 nwherehr(k) = fr(k)− log gr(k,klog2 k)+ klog2 klog(klog2 k),fr(k) is non-decreasing and fr(k)  k, and gr(k, i) = e(r−2)i(k−ik)(r−2)k.Claim 4.4.7 states that for some ξ > 0, for all large k and i ≤ (ε/r)2k, we1294.7. Technical lemmashave that ρˆr(k, i) ≥ ξe−iε−(r−2)k−hr(k).Sub-claim 4.7.5. For all k > kr, we have that hr(k) is increasing in k.Proof. Since fr(k) is non-decreasing and k/ log2 k is increasing, it suffices by(4.4.9) to show that gr(k, k/ log2 k) is decreasing for k > kr (and assumingr > 2, as else gr ≡ 1 and so there is nothing to prove). To this end, we notethat∂∂igr(k, i) = −(r − 2)ik − i gr(k, i),∂∂kklog2 k= log k − 2log3 k,and∂∂kgr(k, i) =r − 2k − i((k − i) log(k − ik)+ i)gr(k, i)Hence, differentiating gr(k, k/ log2 k) with respect to k, and dividing by− (r − 2)kk(1− log−2 k) log3 kgr(k, k/ log2 k) < 0we obtain(log3 k)(1− log−2 k) log(log2 klog2 k − 1)− log3 k − log k + 2log2 k.By the inequality log x > 2(x − 1)/(x + 1) (valid for x > 1), the aboveexpression is bounded from below bylog3 k − 4 log2 k − log k + 2(log2 k)(2 log2 k − 1) >log k − 52 log2 k − 1 > 0for all k > kr, since kr > er/ε > er(r+1) and r > 2. The claim follows. By Sub-claim 4.7.5, fix some k∗ = k∗(r, ε) > kr so that k/ log2 k is largerthan 9(r/ε)4 and (r + 2)!/(1− ε) for all k ≥ k∗, and hr(k) is increasing forall k ≥ (1− (ε/r)2)k∗. By (4.4.8), select some ξ(r, ε) ≤ ξ′ so that the claimholds for all k > kr and relevant i, provided either i ≤ k/ log2 k or k ≤ k∗.1304.7. Technical lemmasWe establish the remaining cases, k > k∗ and k/ log2 k < i ≤ (ε/r)2, byinduction. To this end, let k > k∗ be given, and assume that the claim holdsfor all k′ < k and relevant i. By (4.4.1) it follows thatρˆr(k, i) ≥k−r−i∑j=1Bˆr(k, i, j)ρˆr(k − i, j) (i < k − r) (4.7.10)whereBˆr(k, i, j) =jii!(k − ik)(r−2)k (k − i− jk − i)k−i ( (r − 1)!(k − i)r−1aˆr(k − i, j)j)i.Sub-claim 4.7.6. For all (r + 2)! ≤ i, j ≤ k/r2, we have thatBˆr(k, i, j) ≥ jii!(k − ik)(r−2)k (k − i− jk − i)k+(r−2)i.Proof. 
By the formula for Bˆr(k, i, j) above, it suffices to show that(r − 1)!(k − i)r−1aˆr(k − i, j)j>(k − i− jk − i)r−1.To this end, we note that by (4.4.2) and Claim 4.3.4 the left hand side isbounded from below by1jj∑`=1(k − i− `k − i)r−1− 2r!k − iSince, for any integer m, (1− y/x)m − (1− (y + 1/2)/x)m is decreasing in y,for y < x, it follows that1jj∑`=1(k − i− `k − i)r−1≥(k − i− (j + 1)/2k − i)r−1.Thus, applying the inequalities 1− xy ≤ (1− x)y ≤ 1/(1 + xy), we find that(r − 1)!(k − i)r−1aˆr(k − i, j)j−(k − i− jk − i)r−11314.7. Technical lemmasis bounded from below by1− (j + 1)(r − 1)2(k − i) −2r!k − i −11 + j(r − 1)/(k − i)which equals((r − 1)j − (r + 4r!− 1))(k − i)− ((r − 1)j + (r + 4r!− 1))(r − 1)j2(k − i)(k − i+ (r − 1)j) .It thus remains to show that the numerator in the above expression isnon-negative, for all i, j as in the claim. To see this, we observe thatr + 4r!− 1 < (r − 1)(r + 2)! for all r ≥ 2. Hence, for (r + 2)! ≤ i, j ≤ k/r2and r ≥ 2, the numerator divided by (r − 1)k > 0 is bounded from below by(j − (r + 2)!)(1− 1r2)− (j + (r + 2)!) 1r2=(1− 2r2)(j − (r + 2)!) ≥ 0as required. The claim follows. Applying Sub-claim 4.7.6, the inductive hypothesis, and the bound i! <3√i(i/e)i to (4.7.10), it follows thatρˆr(k, i) > ξe−(r−2)k+(r−1)i−hr(k−i)3√i(k − ik)(r−2)k ∑j∈Jr,εψr,ε(i/k, j/i)k(4.7.11)where Jr,ε(k, i) is the set of j satisfying (r + 2)! ≤ j ≤ (ε/r)2(k − i), andψr,ε(γ, δ) = δγe−δγε(1− δγ1− γ)1+γ(r−2).Sub-claim 4.7.7. Put δε = 1 − ε and δr,ε = δε + (ε/r)2. For any fixedγ ≤ (ε/r)2, we have that ψr,ε(γ, δ) is increasing in δ, for δ ∈ [δε, δr,ε].Proof. Differentiating ψr,ε(γ, δ) with respect to δ, we obtainψr,ε(γ, δ)γδ(1− γ − δγ)(εγδ2 − (1 + ε+ γ(r − 1− ε))δ + 1− γ).1324.7. Technical lemmasHence, to establish the claim, it suffices to show thatεγδ2r,ε − (1 + ε+ γ(r − 1− ε))δr,ε + 1− γis positive for relevant γ. Moreover, since the above expression is decreasingin γ, we need only verify the case γ = (ε/r)2. Setting γ as such in the aboveexpression, and then dividing by ε2/r6, we obtainr6 − (1− ε)r5 − (1 + 3ε2 − ε3)r4 − r3ε2 + ε2(1 + 3ε− 2ε2)r2 + ε5.For ε < 1/r and r ≥ 2, this expression is bounded from below byr(r5 − r4 − (1 + 3/r2)r3 − 1) ≥ r > 0as required, giving the claim. By the choice of k∗ and since k > k∗, for all relevant k/ log2 k ≤ i ≤(ε/r)2k, we have that δεi ≥ (r + 2)! andδr,εik − i ≤ (ε/r)2 1− ε+ (ε/r)21− (ε/r)2 ≤ (ε/r)2where the second inequality follows since∂∂ε1− ε+ (ε/r)21− (ε/r)2 = −r2 (r2 + ε2 − 4ε)(r − ε)2(r + ε)2 < 0for all r ≥ 2. Hence, for all such i, k, we have that j ∈ Jr,ε(k, i) for allj ∈ [δε, δr,ε]. Therefore, for any such i, k, by (4.7.11) and Sub-claim 4.7.7,we have thatρˆr(k, i) > ξe−(r−2)k+(r−1)i−hr(k−i)3√i(k − ik)(r−2)k ∑δεi≤j≤δr,εiψr,ε(i/k, j/i)k> ξ(δr,ε − δε)√i3 e−(r−2)k+(r−1)i−hr(k−i)(k − ik)(r−2)kψr,ε(i/k, δε)k> ξe−(r−2)k+(r−1)i−hr(k−i)(k − ik)(r−2)kψr,ε(i/k, δε)k1334.7. Technical lemmaswhere the last inequality follows since for any such i, k, by the choice of k∗and since k > k∗, we have that δr,ε − δε = (ε/r)2 > 3/√i.Sub-claim 4.7.8. Fix k/ log2 k ≤ i ≤ (ε/r)2k, and define ζr(k, i) such thatρˆr(k, i) = ξe−ζr(k,i)εi−(r−2)k−hr(k).We have that ζr(k, i) < 1.Proof. Letting γ = i/k, it follows by the bound for ρˆr(k, i) above, and sincek > k∗ and hence hr(k − i) < hr(k) by the choice of k∗, that ζr(k, i) isbounded from above byδε − r − 1ε− r − 2εγlog(1− γ)− 1εlog δε − 1 + γ(r − 2)εγlog(1− δεγ1− γ).Recall that δε = 1− ε. 
Applying the bound − log(1− x) ≤ x/(1− x) forx = γ and x = δεγ/(1− γ), and the bound − log(1− x) ≤ x+ (1 + x)x2/2for x = ε (valid for any x < 1/3, and so for all relevant ε < 1/(r + 1) withr ≥ 2), we find that the expression above is bounded from above byν(ε, γ) = 2− ε(1− ε)2 −1− (r − 1)γε(1− γ) +(1− ε)(1 + (r − 2)γ)ε(1− (2− ε)γ) .Therefore, noting that∂∂γν(ε, γ) = r − 2ε(1− γ)2 +(1− ε)(r − ε)ε(1− (2− ε)γ)2 > 0,to establish the subclaim, it suffices to verify that ν(ε, (ε/r)2) < 1 for allr ≥ 2 and ε < 1/(r + 1). Furthermore, sinceν(ε, (ε/r)2) = 2− ε(1− ε)2 −r2 − ε2(r − 1)ε(r2 − ε2) +(1− ε)(r2 + ε2(r − 2))ε(r2 − 2ε2 + ε3)and hence∂∂rν(ε, (ε/r)2) = −ε(r(r − 4) + ε2)(r2 − ε2)2 −ε(1− ε)(r(r − 2ε) + ε2(2− ε))(r2 − 2ε2 + ε3)2 < 01344.7. Technical lemmasfor all k ≥ 4 and ε < 1, we need only verify the cases r ≤ 4.To this end, let η(r, ε) denote the difference of the numerator and denom-inator of ν(ε, (ε/r)2) (in its factorized form), namely− ε7 + 3ε6 + (r2 − 4)ε5 − 2(2r2 − 2r + 1)ε4 + (5r2 − 6r + 8)ε3+ r2(r2 − 2r − 2)ε2 − r2(r − 2)2ε.For all ε < 1/3, we have thatη(2, ε) = −ε2(1− ε)(2− ε)(2 + ε)(2− 2ε+ ε2) < −ε2 < 0.Similarly,η(3, ε) = −ε(9− 9ε− 35ε2 + 26ε3 − 5ε4 − 3ε5 + ε6) < −ε < 0andη(4, ε) = −ε(64− 96ε− 64ε2 + 50ε3 − 12ε4 − 3ε5 + ε6) < −ε < 0.It follows that ν(ε, (ε/r)2) < 1 for all ε < 1/3 and k ≤ 4, and hence forall k ≥ 2, giving the subclaim. By Sub-claim 4.7.8, we find that ρˆr(k, i) = ξe−εi−(r−2)k−hr(k) for all i, ksuch that k/ log2 k ≤ i ≤ (ε/r)2k, completing the induction, and thus givingClaim 4.4.7. 4.7.5 Proof of Lemma 4.4.11Proof of Lemma 4.4.11. Put αr,ε = (1 + ε)αr. Let βr = βr(αr,ε) and β∗ =β∗(αr,ε). For β > 0 and γ ∈ [0, 1), let µr,ε(β, γ) = µr,ε(αr,ε, β, γ) andµ∗r(β) = µ∗r(αr,ε, β). Let γ∗r,ε(β) denote the maximizer of µr,ε(β, γ) overγ ∈ [0, 1), which is well-defined, since for all γ ∈ (0, 1),∂2∂γ2µr,ε(β, γ)− β(1− γ)2 −αr,εβr(r − 2)!(1− γ)r−2 < 0 (4.7.12)1354.7. Technical lemmasand limγ→1− µr,ε(β, γ) = −∞. Finally, put γr,ε(β) = min{γ∗r,ε(β), (ε/r)2}.We show that µr,ε(β, γr,ε(β)) is bounded away from 0 for β ∈ [β∗r , β∗r + δ],for some δ > 0. By Lemma 4.4.10, the result follows.Claim 4.7.9. For γ ∈ (0, 1), letβr,ε(γ) =(1/(1− γ) + ε)1/(r−1)1− γ βrand putβr,ε = limγ→0+βr,ε(γ) = (1 + ε)1/(r−1)βr.We have that(i) γ∗r,ε(β) = 0, for all β ≤ βr,ε,(ii) for β > βr,ε, γ = γ∗r,ε(β) if and only if β = βr,ε(γ), and(iii) γ∗r,ε(β) is increasing in β, for β ≥ βr,ε.Proof. By (4.7.12), we have that µr,ε(β, γ) is concave in γ. Therefore, since∂∂γµr,ε(β, γ)− β(11− γ + ε−αr,εβr−1(r − 1)! (1− γ)r−1)and hence, for any ξ > 0,∂∂γµr,ε(ξβr, γ) = −ξβr( 11− γ + ε− ξr−1(1− γ)r−1),the first two claims follow. The third claim is a consequence of the secondclaim and the fact that βr,ε(γ) is increasing in γ. By the following claims, we obtain the lemma (as we discuss below thestatements).Claim 4.7.10. For β > 0 and γ ∈ [0, 1), letωr,ε(β, γ) = µr,ε(β, γ)− µ∗r(β).We have that(i) ωr,ε(β, γr,ε(β)) = 0, for all β ≤ βr,ε, and1364.7. Technical lemmas(ii) ωr,ε(β, γr,ε(β)) is increasing in β, for β ≥ βr,ε.Claim 4.7.11. We have that βr,ε < β∗.Indeed, the claims together imply that ωr,ε(β∗, γr,ε(β∗)) > 0. Therefore,since µ∗r(β∗) = 0, we thus have that µr,ε(β∗, γr,ε(β∗)) > 0. Therefore, by thecontinuity of µr,ε(β, γr,ε(β)) in β, it follows that µr,ε(β, γr,ε(β)) > 0 for allβ ∈ [β∗, β∗ + δ], for some δ > 0. As discussed the lemma follows, applyingLemma 4.4.10.Proof of Claim 4.7.10. 
The first claim follows by (4.4.10) and Claim 4.7.9(i).For the second claim, we show that (a) ωr,ε(β, γ∗r,ε(β)) is increasing in β,for β ≥ βr,ε such that γ∗r,ε(β) ≤ (ε/r)2, and (b) ωr,ε(β, (ε/r)2) is increasingin β, for β ≥ βr,ε. By Claim 4.7.9(iii), this implies the claim.Since γ∗r,ε(β) maximizes µr,ε(β, γ), and so ∂ωr,ε(β, γ∗r,ε(β))/∂γ = 0, itfollows that∂∂βωr,ε(β, γ∗r,ε(β)) =∂∂βωr,ε(β, γ)∣∣γ=γ∗r,ε(β).Hence, by Claim 4.7.9(ii), to establish (a) we show that for all γ ≤ (ε/r)2,∂ωr,ε(βr,ε(γ), γ)/∂β > 0. To this end, we observe that∂∂βωr,ε(β, γ) = log(1− γ)− εγ + αr,εβr−1(r − 1)! (1− (1− γ)r). (4.7.13)Setting β = βr,ε(γ), the above expression simplifies aslog(1− γ)− εγ + 1/(1− γ) + ε(1− γ)r−1 (1− (1− γ)r).By the inequalities (1− x)y ≤ 1/(1 + xy) and log(1− x) ≥ −x/(1− x), thisexpression is bounded from below by− γ1− γ − εγ + (1 + (r − 1)γ)( 11− γ + ε)(1− 11 + γr)which factors asγ(1 + ε(1− γ))(1− γ)(1 + γr)(r − 1 + γr(r − 2)) > 01374.7. Technical lemmasand (a) follows.Similarly, we note that by (4.7.13), for any β ≥ βr,ε and γ > 0,∂∂βωr,ε(β, γ) ≥ log(1− γ)− εγ +αr,εβr−1r,ε(r − 1)! (1− (1− γ)r)= log(1− γ)− εγ + (1 + ε)(1− (1− γ)r).Hence, using the same bounds for (1− x)y and log(1− x) as above, we findthat for all such β ≥ βr,ε, ∂ωr,ε(β, (ε/r)2)/∂β is bounded from below byε2(r3(r − 1)(1 + ε)− 2r2ε2 − r(2r − 1)ε3 + ε5)(r − ε)(r + ε)(r + ε2)r2 .For ε < 1/r, the numerator is bounded from below byε2(r3(r − 1)− 2− 2r − 1r2)= ε2r(r6 − r5 − 2r2 − 2r + 1)> 0since r ≥ 2. Hence ∂ωr,ε(β, (ε/r)2)/∂β > 0, giving (b), and thus completingthe proof of the second claim. Proof of Claim 4.7.11. By Lemma 4.3.9, the claim is equivalent to the in-equality µ∗r(βr,ε) > 0. To verify this, we note thatβr =((r − 1)!αr,ε)1/(r−1)=( 11 + ε)1/(r−1) (r − 1r)2,and hence by (4.3.7), for any ξ > 0, we have thatµ∗r(ξ1/(r−1)βr) = r − ξ1/(r−1)βr(r − 2 + ξr− log ξ)= r −(rr − 1)2 ( ξ1 + ε)1/(r−1) (r − 2 + ξr− log ξ).In particular,µ∗r(βr,ε) = r −(rr − 1)2 (r − 2 + 1 + εr− log(1 + ε)).1384.7. Technical lemmasTherefore, by the bound log(1 + x) > x/(1 + x), we find thatµ∗r(βr,ε) >εr(r − 1− ε)(1 + ε)(r − 1)2 > 0as required. As discussed, Lemma 4.4.11 follows by Claims 4.7.10 and 4.7.11. 139Chapter 5Minimal Contagious Sets inRandom Graphs5.1 OverviewBootstrap percolation with threshold r on a graph G = (V,E) is the followingprocess: Initially some subset I ⊂ V is declared active. Subsequently, anyvertex with at least r active neighbours is activated. If all vertices in V areeventually activated, we call I contagious for G.We take G to be the Erdős–Rényi random graph Gn,p. We obtain lowerbounds for the size of the smallest contagious sets in Gn,p, improving thoserecently obtained by Feige, Krivelevich and Reichman. A key step is toidentify the large deviations rate function for the number of vertices eventuallyactivated by small sets that are unlikely to be contagious. This complementsthe central limit theorems of Janson, Łuczak, Turova and Vallier, whichdescribe the typical behaviour. As a further application, our large deviationestimates play a key role in Chapter 6 to locate the sharp threshold forK4-bootstrap percolation on Gn,p, refining an approximation due to Balogh,Bollobás and Morris.∗5.2 Background and main resultsLet G = (V,E) be a graph. Given an initial set of activated vertices V0 ⊂ V ,the r-bootstrap percolation process activates all vertices with at least r activeneighbours. 
Formally, let Vt+1 be the union of Vt and the set of all vertices∗This chapter is joint work with Omer Angel [11], currently under review for publication.1405.2. Background and main resultswith at least r neighbours in Vt. The sets Vt are increasing, and thereforeconverge to some set of eventually active vertices, denoted by 〈V0, G〉r. A setI ⊂ V is called contagious for G if it activates all of V , that is, 〈I,G〉r = V .Let m(G, r) denote the size of a minimal contagious set for G.Bootstrap percolation is most often attributed to Chalupa, Leath andReich [50] (see also Pollak and Riess [116]), who introduced the model on theBethe lattice (the infinite d-regular tree Td) as a simple model for a magneticsystem undergoing a phase transition. Since then the process has beenanalyzed on various graphs and found many applications in mathematics,physics and several other fields, see for example the extensive surveys in theintroductory sections of articles [24, 27, 84] and the references therein. Morerecently, bootstrap percolation has been studied on random graphs, see forinstance [24, 27, 83, 84].Recall that the Erdős–Rényi [60] graph Gn,p is the random subgraph ofKn obtained by including edges independently with probability p. In thiswork, we obtain improved bounds for m(Gn,p, r), for all r ≥ 2.Theorem 5.2.1. Fix r ≥ 2. Suppose that ϑ = ϑ(n) satisfies 1  ϑ  n.Letαr = (r − 1)!(r − 1r)2(r−1), p = p(n, ϑ) =(αrnϑr−1)1/r.Then, with high probability,m(Gn,p, r) ≥ rϑlog(n/ϑ)(1 + o(1))where o(1) depends only on n.We denote ψ = ψ(n, ϑ) = ϑ/ log(n/ϑ), so that the theorem states thatwith high probability m(Gn,p, r) ≥ rψ(1 + o(1)). Of course, this bound isonly of interest if rψ > 1, as else we have the trivial bound m(G, r) ≥ r,which holds for any graph G.Janson, Łuczak, Turova and Vallier [84] (see also Vallier [132]) showedthat for p as in Theorem 5.2.1, `r = rr−1ϑ is the critical size for a random1415.2. Background and main resultsset (selected independently of Gn,p) to be contagious (see Section 5.3). The-orem 5.2.1 is a consequence of our key result, Theorem 5.4.2 below, whichidentifies the large deviations rate function associated with the number ofvertices activated by sets smaller than `r.More recently, Feige, Krivelevich and Reichman [62] studied small conta-gious sets in Gn,p. Although it is unlikely for a random set of size ` < `r to becontagious, there typically exist contagious sets in Gn,p that are much smallerthan `r. In [62] it is shown that if p is as in Theorem 5.2.1 and moreoverlog2 nlog logn  ϑ n,then, with high probability,cr ≤ m(Gn,p, r)ψ(n, ϑ) ≤ Cr (5.2.1)where cr < r and, as r →∞, cr → 2 and Cr = Ω(rr−2). (Note that d in [62]corresponds to (αr(n/ϑ)(r−1))1/r in this context.) The lower bound in (5.2.1)holds in fact for all ϑ. (Although this is not stated in [62, Theorem 1.1], itfollows from the proof, see [62, Corollaries 2.1 and 4.1].)The inequality cr < r is not shown in [62], so we briefly explain it here:In [62, Lemma 4.2 and Corollary 4.1], it is observed that a graph of size kwith a contagious set of size ` contagious has at least r(k − `) edges. Fromthis it follows easily that with high probabilitym(Gn,p, r) ≥ ξ r − 1rndr/(r−1) log d,provided that ξr−1er+2/(2r)r < 1. Since (r−1)! > e((r−1)/e)r−1, this leadsto the bound m(Gn,p, r) ≥ cψ(n, ϑ), where for all r ≥ 2,c < 2(rr − 1)3 (2re4)1/(r−1)< r.Therefore, since cr < r, we find that Theorem 5.2.1 improves the lowerbound in (5.2.1) for all r ≥ 2. To obtain this significant improvement, we in1425.2. 
Background and main resultsa sense (see Section 5.4) track the full trajectory of activation in percolatinggraphs, rather than using only a rough estimate for graphs arrived at by suchtrajectories. Using (discrete) variational calculus, we identify the optimaltrajectory from a set of size ` in Gn,p to an eventually active set of k vertices.This leads to refined bounds for the structure of percolating subgraphs ofGn,p with unusually small contagious sets, and so an improved bound form(Gn,p, r). Moreover, we note that this improvement increases as r increases.Since cr → 2, our bound is larger by a factor of roughly r/2 for large r. Thisis due to the fact that the crude bound of r(k− `) for the number of edges ina graph of size k with a contagious set of size ` is an increasingly inaccurateestimate for the combinatorics of such graphs as r →∞.Hence, in particular, we find thatm(Gn,p, r)/ψ(n, ϑ) grows at least linearlyin r. It seems plausible that this is the truth, and that moreover, the boundin Theorem 5.2.1 is asymptotically sharp. In any case, as it stands now, asubstantial gap remains between the linear lower bound of Theorem 5.2.1 andthe super-exponential upper bound in (5.2.1). The upper bound in (5.2.1)has the advantage of being proved by a procedure that with high probabilitylocates a contagious set in polynomial time. That being said, this set ispossibly much larger than a minimal contagious set, especially for large r.In closing, we state the open problems of (i) identifying m(Gn,p, r) up to afactor of 1 + o(1) and (ii) efficiently locating contagious sets that are as closeas possible to minimal.5.2.1 Thresholds for contagious setsThe critical threshold pc(n, r, q) for the existence of contagious sets of size qin Gn,p is defined as the infimum over p > 0 for which such a set exists withprobability at least 1/2. If q = r, we simply write pc(n, r). In [62] it is shownthat pc(n, r) = Θ((n logr−1 n)−1/r). In the previous Chapter 4, we identifiedthe sharp threshold for contagious sets of the smallest possible size r aspc(n, r) =(αrn logr−1 n)1/r(1 + o(1)). (5.2.2)1435.3. Binomial chainsMoreover, (5.2.2) holds if the 1/2 in the definition of pc is replaced with anyprobability in (0, 1). As a consequence of Theorem 5.2.1, we obtain lowerbounds for pc(n, r, q) for q ≥ r.Corollary 5.2.2. Fix r ≥ 2. Suppose that r ≤ q = q(n)  n/ logn. Asn→∞,pc(n, r, q) ≥(αr,qn logr−1 n)1/r(1 + o(1)),where αr,q = αr(r/q)r−1.Indeed, by Theorem 5.2.1, we see that if p = (α/(n logr−1 n))1/r, whereα = (1− δ)αr,q for some δ > 0, then with high probability m(Gn,p, r) > q. Inparticular, we obtain an alternative proof of the lower bound in (5.2.2).In closing, we remark that determining whether the inequalities in Corol-lary 5.2.2 are asymptotically sharp, even for fixed q > r, is of interest. Theproof in Chapter 4 of the special case q = r is fairly involved. Althoughthe upper bound in (5.2.2) is proved using the standard second momentmethod, the application is not straightforward (see Section 4.2.4 for a briefoverview). Roughly speaking, for p > pc, we show that the expected numberof triangle-free percolating subgraphs of Gn,p is large. We then use Mantel’stheorem to deduce the existence of such sets (see Section 4.4.4). This strategyis not sufficient, however, for q > r.5.3 Binomial chainsFix some r ≥ 2. To analyze the spread of activation from an initiallyactive set I in Gn,p, we consider the binomial chain construction, as used byJanson, Łuczak, Turova and Vallier [84]. 
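As a concrete reference point, the sketch below simulates the r-neighbour dynamics directly on a sampled instance of Gn,p and reports the size of the eventually active set for a seed below and a seed above the critical size ℓr. The parameters n, ϑ, the random seed and the seed sizes are arbitrary illustrative choices, and the simulation plays no role in the proofs.

```python
import math, random
from collections import deque

def bootstrap_closure(n, edges, seed, r):
    """Return <seed, G>_r: activate vertices with at least r active neighbours until stable."""
    nbrs = [[] for _ in range(n)]
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    active = [False] * n
    marks = [0] * n
    queue = deque()
    for v in seed:
        active[v] = True
        queue.append(v)
    while queue:
        u = queue.popleft()
        for w in nbrs[u]:
            if not active[w]:
                marks[w] += 1
                if marks[w] == r:
                    active[w] = True
                    queue.append(w)
    return [v for v in range(n) if active[v]]

random.seed(0)
r, n, theta = 2, 20000, 60                              # arbitrary illustrative choices
alpha_r = math.factorial(r - 1) * ((r - 1) / r) ** (2 * (r - 1))
p = (alpha_r / (n * theta ** (r - 1))) ** (1.0 / r)     # p as in Theorem 5.2.1
k_r = (r / (r - 1)) ** 2 * theta                        # k_r and ell_r as in (5.3.1)
ell_r = (r - 1) / r * k_r

# Sample G_{n,p} by drawing roughly Binomial(N, p) many uniform pairs; the
# fluctuation in the edge count is immaterial for this illustration.
N = n * (n - 1) // 2
edges = set()
while len(edges) < round(N * p):
    u, v = random.sample(range(n), 2)
    edges.add((min(u, v), max(u, v)))

for seed_size in (int(0.8 * ell_r), int(1.3 * ell_r)):  # below and above ell_r
    final = bootstrap_closure(n, edges, range(seed_size), r)
    print(seed_size, len(final), round(len(final) / n, 3))
```

With these parameters the smaller seed typically activates only on the order of kr vertices, while the larger one activates the vast majority of the graph, in line with the dichotomy of Theorem 5.3.1 below. The binomial chain construction, described next, encodes the same exploration one vertex at a time, without revealing the whole graph in advance.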
This representation of the bootstrappercolation dynamics is due to Scalia-Tomba [118] (see also Sellke [123]). Werefer to [84, Section 2] for a detailed description, and here only present theproperties relevant to the current chapter. The main idea is to reveal thegraph one vertex at a time. As a vertex is revealed, we mark its neighbours.Once a vertex has been marked r times, we know it will be activated, andadd it to the list of active vertices.1445.3. Binomial chainsFormally, sets A(t) and U(t) of active and used vertices at time t ≥ 0are defined as follows: Let A(0) = I and U(0) = ∅. For t > 0, choose someunused, active vertex vt ∈ A(t− 1)−U(t− 1), and give each neighbour of vta mark. Then let A(t) be the union of A(t − 1) and the set of all verticesin Gn,p with at least r marks, and put U(t) = U(t− 1) ∪ {vt}. The processterminates at time t = τ , where τ = min{t ≥ 0 : A(t) = U(t)}, that is, whenall active vertices have been used. It is easy to see that A(τ) = 〈I,Gn,p〉r.Let S(t) = |A(t)| − |I|. By exploring the edges of Gn,p one step ata time, revealing the edges from vt only at time t, the random variablesS(t) can be constructed in such a way that S(t) ∼ Bin(n− |I|, pi(t)), wherepi(t) = P(Bin(t, p) ≥ r), see [84, Section 2]. Moreover, for s < t, we have thatS(t)−S(s) ∼ Bin(n−|I|, pi(t)−pi(s)). Finally, it is shown that |〈I,Gn,p〉r| ≥ kif and only if τ ≥ k if and only if S(t)+|I| > t for all t < k. Thus to determinethe size of the eventually active set 〈I,Gn,p〉r, it suffices to analyze S(t).Making use of this construction, many results are developed in [84]. Weclose this section by mentioning two such results that are closely relatedto our key result, Theorem 5.4.2 below. The following quantities play animportant role in [84] and in the present article. We denotekr = kr(ϑ) =(rr − 1)2ϑ, `r = `r(ϑ) =r − 1rkr. (5.3.1)For ε ∈ [0, 1], we define δε ∈ [0, ε] implicitly byδrεr= δε − εr, εr = r − 1rε. (5.3.2)It is easily verified that εr ≤ δε ≤ ε, for all ε ∈ [0, 1]. (We note that `r, kr, δεcorrespond to ac, tc, ϕ(ε) in [84].)As mentioned already, `r is identified in [84] as the critical size fora random set (selected independently of Gn,p) to be contagious. Morespecifically, suppose thatp = p(n, ϑ) =(αrnϑr−1)1/r=((r − 1)!nkr−1r)1/r(5.3.3)1455.3. Binomial chainsand I ⊂ [n] is such that |I|/`r → ε. If ε < 1 then with high probability Iactivates less than εkr vertices. On the other hand, if ε > 1, then with highprobability I activates all except possibly very few vertices. In the sub-criticalcase ε < 1, |〈I,Gn,p〉r| is asymptotically normal with mean µ ∼ δεkr.More precisely, the following results are proved in [84].Theorem 5.3.1 ([84, Theorem 3.1]). Fix r ≥ 2. Let p be as in (5.3.3), whereϑ = ϑ(n) satisfies 1 ϑ n. Suppose that I = I(n) ⊂ [n] is independentof Gn,p and such that |I|/`r → ε, as n → ∞. If ε ∈ [0, 1), then with highprobability |〈I,Gn,p〉r| = (δε + o(1))kr. On the other hand, if ε > 1, then withhigh probability |〈I,Gn,p〉r| = n(1− o(1)).(If np logn+ (r − 1) log logn, then, in fact, with high probability I iscontagious, that is |〈I,Gn,p〉r| = n, see [84, Theorem 3.1](iii).) Moreover, thefollowing central limit theorem is established.Theorem 5.3.2 ([84, Theorem 3.8(i)]). Fix r ≥ 2. Let p be as in (5.3.3),where ϑ = ϑ(n) satisfies 1  ϑ  n. Suppose that I = I(n) ⊂ [n] isindependent of Gn,p and such that |I|/`r → ε ∈ (0, 1), as n → ∞. 
Then|〈I,Gn,p〉r| is asymptotically normal with mean µ ∼ δεkr and variance σ2 =δ′εkr, where δ′ε = δrε(1− δr−1ε )−2/r.(See (3.13) and (3.22) in [84] for the definition of µ.) In particular, notethat the mean and variance of |〈I,Gn,p〉r| are of the same order as kr.In [84, Section 6] a heuristic is provided for the criticality of `r, whichwe recount here. By the law of large numbers, with high probability S(t) ≈ES(t). A calculation shows that if |I| > `r then |I|+ES(t) ≥ t for all t <n− o(n), whereas if |I| < `r then already for t = kr we get |I|+ES(kr) < kr.In particular, for t ≤ kr, since ϑ n we have thatpt ≤ pkr = O((ϑ/n)1/r) 1. (5.3.4)It follows that pi(t) ∼ (tp)r/r!. We therefore have for t = xkr thatES(xkr) = (n− |I|)pi(t) ∼ xrrkr · kr−1r npr(r − 1)! =xrrkr. (5.3.5)1465.4. Optimal activation trajectoriesIf |I| < `r, then for x = 1 we have|I|+ES(kr) < `r + kr/r = kr.5.4 Optimal activation trajectoriesRecall kr, `r, δε, εr as defined in (5.3.1) and (5.3.2), and let p be as in (5.3.3).By Theorems 5.3.1 and 5.3.2, `r is the critical size for a random (equivalently,given) set to be contagious for Gn,p. Moreover, a set of size ε`r < `r typicallyactivates approximately δεkr vertices. In this section, we study the probabilitythat such a set activates more than δεkr vertices.Definition 5.4.1. We let P (`, k) denote the probability that for a given setI ⊂ [n] (independent of Gn,p), with |I| = `, we have that |〈I,Gn,p〉r| ≥ k.Theorem 5.4.2. Fix r ≥ 2. Let p be as in (5.3.3), where ϑ = ϑ(n) satisfies1 ϑ n. Let ε ∈ [0, 1) and δ ∈ [δε, 1]. Suppose that `/`r → ε and k/kr →δ, as n→∞. Then, as n→∞, we have that P (`, k) = exp[ξkr(1 + o(1))],where ξ = ξ(ε, δ) is equal to−δrr+(δ − εr) log(er−1δr/(δ − εr)), δ ∈ [δε, ε);(ε/r) log(eεr−1)− (r − 2)(δ − ε) + (r − 1) log(δδ/εε), δ ∈ [ε, 1],and o(1) depends only on n.Since the mean and variance of |〈I,Gn,p〉r| are of the same order (see The-orem 5.3.2), the event that |〈I,Gn,p〉r| ≥ δkr, for some δ ∈ (δε, 1], representsa large deviation from the typical behaviour.We note that by (5.3.2), we have that ξ(ε, δε) = 0 for all ε ∈ [0, 1), inline with Theorem 5.3.1. Note that t = kr is the point at which the binomialchain S(t) becomes super-critical (since npr( tr−1) ≈ (t/kr)r−1), so we havethat P (ε`r, δkr) = eo(kr)P (ε`r, kr) for δ > 1.We remark that the main novelty of Theorem 5.4.2 is that it gives boundsfor P (`, k) when `/k → c > 0. The case ε = 0 and δ = 1 in Theorem 5.4.2(essentially) follows by Theorem 4.2.5 proved in the previous Chapter 4 (where1475.4. Optimal activation trajectoriesthe initial set is of size ` = r). That being said, the proof of Theorem 5.4.2takes a completely different approach. In Chapter 4 the equality (in the case` = r) is proved by combinatorial arguments, whereas here we use variationalcalculus to obtain a more general result.Before proving Theorem 5.4.2, we observe that Theorem 5.2.1 followsas a simple consequence. For this proof, we only require the special caseξ(0, 1) = −(r − 1)2/r. In Chapter 6, Theorem 5.4.2 is used to its full extent,in the case of r = 2, to locate the sharp threshold for K4-percolation, asintroduced by Balogh, Bollobás and Morris [24].Proof of Theorem 5.2.1. Let δ > 0 be given. The theorem states that withhigh probability m(Gn,p, r) ≥ (1− δ)rψ, where ψ = ϑ/ log(n/ϑ). Let kr beas in (5.3.1) and put `δ = (1− δ)rψ. 
Since ϑ n,`δ/kr = O (1/ log(n/ϑ)) 1.Hence by Theorem 5.4.2, noting that ξ(0, 1)kr = −rϑ, the expected numberof subsets I ⊂ [n] such that |I| = `δ and |〈I,Gn,p〉r| ≥ kr is bounded by(n`δ)e−rϑ(1+o(1)) ≤(ne`δ)`δe−rϑ(1+o(1)) = e−rϑν ,whereν = 1 + o(1)− (1− δ) log(ne/`δ)log(n/ϑ) .Sincelog(ne/`δ) ≤ log(n/ϑ) +O (log log(n/ϑ))we have that ν > 0 for all large n. Therefore, with high probability Gn,p hasno contagious set of size at most `δ. The result follows. We turn to the proof of Theorem 5.4.2. The overall idea is to identifythe optimal trajectory for the spread of activation from a set I with |I| =ε`r = εrkr to a set of size δkr, where δ ∈ [δε, 1]. Intuitively, we expect this tofollow a trajectory S(xkr) + εrkr = f(x)kr for some function f : [0, δ]→ Rthat starts at f(0) = εr and ends at f(δ) ≥ δ. Recall (see Section 5.3) that1485.4. Optimal activation trajectoriesthe binomial chain S(t) is non-decreasing and |〈I,Gn,p〉r| ≥ k if and only ifS(t) + |I| > t for all t < k. Hence, in order for |〈I,Gn,p〉r| ≥ δkr, we requiref to be non-decreasing and f(x) > x for all x ∈ [0, δ). Moreover, since thisevent is very unlikely, and since until reaching size δkr ≤ kr the binomialchain S(t) is sub-critical (noting that npr( tr−1) ≈ (t/kr)r−1), it is reasonableto further expect that f(δ) = δ and f to be convex. Thus possibly we havethat f(x) = x for all x in some interval [ε′, δ].To identify f , we use a discrete analogue of the Euler–Lagrange equation,due to Guseinov [77], to deduce that the optimal trajectory between pointsabove the diagonal is of the form axr + b. In light of this, in the case thatδ > ε, we expect the trajectory to meet the diagonal at ε′ = ε (and thencoincide with it on [ε, δ]), since then f ′(x) is continuous at ε′. On the otherhand, if δ ≤ ε, we expect the trajectory to intersect the diagonal only atx = δ. Since, as discussed near (5.3.5), we have S(xkr) ≈ (xr/r)kr for x ≤ 1,the typical trajectory is xr/r + εr. By (5.3.2) this trajectory intersects thediagonal at x = δε, in line with Theorem 5.3.1. See Figure 5.1.εδε δ2δ2εεrδ1δεδ1Figure 5.1: Three activation trajectories: The trajec-tory ending at (δi, δi), i ∈ {1, 2}, is optimal amongthose from (0, εr) to endpoints (δ, δ′), with δ′ ≥ δi.Note that for δ1 < ε, the optimal trajectory intersectsthe diagonal only at δ1, whereas for δ2 > ε, it coin-cides with the diagonal between ε and δ2. The typicaltrajectory xr/r + εr intersects the diagonal at δε.1495.4. Optimal activation trajectoriesFor a function g : {0, 1, . . . ,m} → R, where m ∈ N, let ∆g(i) = g(i +1)− g(i) for all 0 ≤ i < m.Lemma 5.4.3 ([77, Theorem 5]). Let a, b ∈ R. Let x0 < x1 < · · · < xm ∈ Rbe m ∈ N evenly spaced points. Put X = {x0, x1, . . . , xm}. Suppose thatσ(s, t, w) is a function from X × X × R to R with continuous first orderpartial derivative σw. Let F denote the set of functions f : X→ R such thatf0 = a and fm = b, where fi = f(xi) for 0 ≤ i ≤ m. For f ∈ F , letS(f) =m−1∑i=0σ(xi, xi+1,∆fi∆xi)∆xi.If some function fˆ is a local extremum of S on F , then fˆ satisfiesσw(xi, xi+1,∆fi∆xi)≡ c (5.4.1)for some c ∈ R and all 0 ≤ i < m.We remark that this is a special case of [77, Theorem 5] that sufficesfor our purposes. In [77] a more general result is established that allows forfunctions σ = σ(xi, xi+1, fi, fi+1,∆fi/∆xi), that is, depending also on thevalues fi and points xi that are not necessarily evenly spaced. The conclusionthere is a discrete version of the Euler–Lagrange equation, which simplifiesto (5.4.1) in the special case we consider.Proof of Theorem 5.4.2. 
Recall `r, kr, δε, εr, as defined at (5.3.1) and (5.3.2).In particular, recall that εr = r−1r ε, so ε`r = εrkr. We show thatP (ε`r, δkr) = exp[ξkr(1 + o(1))], (5.4.2)whereξ =∫ δ0(f ′∗(x) log(exr−1f ′∗(x))− xr−1)dxand f∗ is defined byf∗(x) =δ − εrδrxr + εr,1505.4. Optimal activation trajectoriesif δ ∈ [δε, ε], and byf∗(x) =xr/(rεr−1) + εr, if x ≤ ε;x, if x > ε,if δ ∈ (ε, 1]. See Figure 5.1. (We note that (δ − εr)/δr is increasing inδ ∈ [0, 1] and equal to 1/(rεr−1) when δ = ε, since εr = r−1r ε. Therefore wecan express f∗ as f∗(x) = (η − εr)(x/η)r + εr for x ∈ [0, η] and f∗(x) = xotherwise, where η = min{δ, ε}.) The theorem follows.Fix a vertex set I of size εrkr. (For simplicity, we ignore the insignificantdetail of rounding to integers, here and in the arguments that follow.) Recall(see Section 5.3) that the binomial chain S(t) is a non-decreasing processand |〈I,Gn,p〉r| ≥ δkr if and only if S(t) + εrkr > t for all t < δkr, whereS(t) ∼ Bin(n− εrkr, pi(t)) and pi(t) = P(Bin(t, p) ≥ r).We first show that we can restrict to the event that S(t) is never toolarge. Indeed, for any c > 0, by (5.3.5) and Chernoff’s bound, it follows thatP(S(δkr) ≥ (1 + c)kr)(ec(1 + c)1+c)(1+o(1))δrkr/r eνwhere ν = c log(e/c) · δrkr/r. Noting that c log(e/c) ↓ −∞ as c→∞, thereis some sufficiently large C > 0 so thatP(S(δkr) + εrkr > Ckr) ≤ e−kr . (5.4.3)Therefore, to establish (5.4.2), we may assume that S(δkr) + εrkr ≤ Ckr.Since S(t) is non-decreasing, the same holds for S(t) + εrkr, for all t ≤ δkr.Let x0 < x1 < · · · < xm ∈ R be evenly spaced points such that x0 = 0and xm = δ, wherem = min{log(δkr), (n/kr)1/(2r)}. (5.4.4)Note that since 1 kr  n, we have that m 1.1515.4. Optimal activation trajectoriesFor a function f : {x0, x1, . . . , xm} → R let fi = f(xi) andp(f, i) = P(S(xi+1kr) + εrkr = fi+1kr|S(xikr) + εrkr = fikr).Recall (see Section 5.3) that S(t)−S(s) ∼ Bin(n− εrkr, pi(t)−pi(s)). Hencep(f, i) = P(Bin(n− εrkr,∆pi(xikr)) = ∆(fikr)). (5.4.5)Let F denote the set of non-decreasing functions f : {x0, x1, . . . , xm} → Rsuch that f0 = εr, fi ≥ xi and fm = δ′, where δ′ ∈ [δ, C]. Let F ′ ⊂ F denotethe subset of functions f which additionally satisfy fikr ∈ N for all i. (Asalready mentioned, we will ignore the small detail of rounding to integerswhenever the issue is immaterial, and hence often not differentiate betweenthe sets F and F ′.)Claim 5.4.4. We have thatP (ε`r, δkr) = eo(kr)m−1∏i=0p(fˆ , i) (5.4.6)where fˆ maximizes ∏i p(f, i) on F .Proof. By (5.4.4) there are at most eo(kr) functions f ∈ F ′. Therefore, sinceF ′ ⊂ F , we have thatP(|〈I,Gn,p〉r|/kr ∈ [δ, C]) ≤∑f∈F ′m−1∏i=0p(f, i) ≤ eo(kr)m−1∏i=0p(fˆ , i).Applying (5.4.3), it follows thatP (ε`r, δkr) ≤ eo(kr)m−1∏i=0p(fˆ , i).Next, to obtain the matching lower bound, we consider the functionfˆ + 1/m. Note that S(t) = 0 for all t < r (since pi(t) = 0 for all such t),and S(r) ∼ Bin(n − εrkr, pr). Thus, for convenience, assume for this partof the argument that x0 = r/kr  1 (rather than 0) and so fˆ(r) = εr, as1525.4. Optimal activation trajectoriesclearly this has no effect on our calculations up to the eo(kr) error. Since, by(5.4.5), p(f, i) depends on the difference ∆fi but not on the specific values offi and fi+1, we have that p(fˆ , i) = p(fˆ + 1/m, i) for all i. 
On the other hand,recalling that 1 kr  n, and hence m 1 and npr = (r − 1)!/kr−1r  1,we find (using the inequality(nk) ≥ (n/k)k) thatP(S(r) = kr/m) ≥(n− εrkrkr/m)prkrm (1− pr)n = eo(kr)(mkrr) krm= eo(kr).Moreover, since S(t) is non-decreasing and all ∆xi = δ/m ≤ 1/m, if S(xikr)+εrkr = fˆikr+1/m for all i, it follows that S(t) > t for all t < δkr. Altogether,we conclude thatP (ε`r, δkr) ≥ eo(kr)m−1∏i=0p(fˆ , i),completing the proof of the claim. Therefore, to establish (5.4.2), it remains to identify fˆ . To this end, wefirst obtain the following estimate in order to put the problem of maximizing∏i p(f, i) in a convenient form for the application of Lemma 5.4.3.Claim 5.4.5. For all 0 ≤ i < m,n∆pi(xikr) =∆(xrikr)r(1 + o(1)). (5.4.7)In particular, note that ∆pi(xikr) = O(kr/n) 1, since kr  n.Proof. Since x0 = 0, the case i = 0 follows by (5.3.5). Hence assume thati ≥ 1. It is easy to show (see [84, Section 8]) that, for all t > 0 such thatpt ≤ 1,pi(t) = (pt)rr! (1 +O(pt+ t−1)).By (5.3.4), for all i, we have that xikrp ≤ krp 1. By (5.3.3) we have thatn(krp)r/r! = kr/r. Hence, for i ≥ 1,n∆pi(xikr) =∆(xrikr)r[1 +O(xri+1∆(xri )(krp+ (xikr)−1))].1535.4. Optimal activation trajectoriesRecall that xi = iδ/m. Note that krp = O((kr/n)1/r). We therefore havexri+1∆(xri )(krp+ (xikr)−1) ≤ O((kr/n)1/r +m/kr1− (1− 1/m)r).By the bound (1− 1/m)r ≤ 1/(1 + r/m), and recalling 1 kr  n and thedefinition of m at (5.4.4), the right hand side is bounded byO(m((kr/n)1/r +m/kr)) 1.We conclude that (5.4.7) holds for all i, as claimed. Recall that for any f ∈ F , all fi ≤ Ckr. Hence, by (5.4.5) and (5.4.7),and the inequalities 1 kr  n, e−x/(1−x) ≤ 1− x ≤ e−x and(nek)k≥(nk)≥ (n− k)kk! ≥1ek(n− kk)k (nek)k,it is straightforward to verify thatp(f, i) = eo(kr)(en∆(xikr)∆(fikr))∆pi(fikr)e−n∆pi(xikr)for any f ∈ F and 0 ≤ i < m. Applying (5.4.7), for any such f and i, weobtainp(f, i) = exp[σikr(1 + o(1))], (5.4.8)whereσi = (xi+1 − xi)(∆fi∆xilog(exr−1i∆xi∆fi)− xr−1i). (5.4.9)We express σ in this way to relate to Lemma 5.4.3, which we now apply.The optimal function fˆ is a local extremum of the functional, exceptthat at some xi we may have fi = xi, in which case it is only extremal sincefi is at the boundary of its allowed set. Suppose first that fi > xi for alli ∈ (0,m), i.e. except the endpoints. We apply Lemma 5.4.3 withσ(s, t, w) = (t− s)(w log(esr−1/w)− sr−1),1545.4. Optimal activation trajectoriesso thatσw(s, t, w) = (t− s)(log(esr−1/w)− 1).We apply this to equally spaced points, so t − s is constant. In this case,Lemma 5.4.3 implies that ∆fˆi/∆xi = cxr−1i for some constant c. Supposenext that fˆ takes some values on the diagonal, and suppose fj = xj andfk = xk are two consecutive places this occurs. The above gives that∆fˆi/∆xi = cxr−1i for j ≤ i < k. This is impossible unless k = j + 1. Thusfˆi = xi for a single contiguous interval of i’s.Let us summarize our findings so far. Having fixed m and the equallyspaced points (xi)i≤m, we wish to maximize∑i σi over non-decreasing se-quences (fi)i≤m with fi ≥ xi. We know that the maximizing function satisfiesfi = xi for some (possibly empty) interval xi ∈ [ε′, δ′] and that ∆fi/xr−1iis constant for xi < ε′ and another constant for xi ≥ δ′. 
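As a numerical aside, the continuum version of this optimization can be checked directly. The sketch below evaluates ξ = ∫₀^δ ( f′(x) log(e x^{r−1}/f′(x)) − x^{r−1} ) dx by quadrature for the candidate optimum f∗ and for a sub-optimal straight-line trajectory from (0, εr) to (δ, δ), and compares the former with the closed form in Theorem 5.4.2, reading the −δ^r/r term of that display as common to both branches (a reading consistent with continuity at δ = ε and with ξ = I(f∗, 0, δ) − δ^r/r below). The sample values r = 2, ε = 0.5 and the helper names are illustrative choices.

```python
import math
from scipy.integrate import quad

def xi_integral(fprime, delta, r, breaks=()):
    """Quadrature for xi = int_0^delta ( f'(x) log(e x^{r-1}/f'(x)) - x^{r-1} ) dx.
    Integration starts just above 0 so the integrand is never evaluated at x = 0."""
    def integrand(x):
        fp = fprime(x)
        return fp * math.log(math.e * x ** (r - 1) / fp) - x ** (r - 1)
    pts = [1e-9] + sorted(b for b in breaks if 1e-9 < b < delta) + [delta]
    return sum(quad(integrand, a, b, limit=200)[0] for a, b in zip(pts[:-1], pts[1:]))

def xi_closed_form(r, eps, delta):
    """Rate function of Theorem 5.4.2, with -delta^r/r read as common to both branches."""
    eps_r = (r - 1) * eps / r
    base = -delta ** r / r
    if delta < eps:
        return base + (delta - eps_r) * math.log(math.e * delta ** r / (r * (delta - eps_r)))
    return (base + (eps / r) * math.log(math.e * eps ** (r - 1))
            - (r - 2) * (delta - eps)
            + (r - 1) * (delta * math.log(delta) - eps * math.log(eps)))

r, eps = 2, 0.5                          # arbitrary illustrative choices
eps_r = (r - 1) * eps / r
for delta in (0.4, 0.8):                 # one value in [delta_eps, eps), one in (eps, 1]
    if delta <= eps:                     # f_*(x) = ((delta - eps_r)/delta^r) x^r + eps_r
        c = (delta - eps_r) / delta ** r
        f_star = lambda x, c=c: c * r * x ** (r - 1)
        breaks = ()
    else:                                # f_* follows x^r/(r eps^{r-1}) + eps_r, then the diagonal
        f_star = lambda x: x ** (r - 1) / eps ** (r - 1) if x < eps else 1.0
        breaks = (eps,)
    straight = lambda x: (delta - eps_r) / delta   # straight line from (0, eps_r) to (delta, delta)
    print(delta,
          round(xi_integral(f_star, delta, r, breaks), 5),
          round(xi_closed_form(r, eps, delta), 5),
          round(xi_integral(straight, delta, r), 5))
```

The quadrature value for f∗ agrees with the closed form to the accuracy of the quadrature, while the straight-line trajectory gives a strictly smaller exponent, as expected.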
Next, we observethat if ∆fi/∆xi = cxr−1i for some c and all j ≤ i < k, then f satisfiesf(x) = g(x) + O(1/m), for some g(x) = (c/r)xr + c′, and all x ∈ [xj , xk].Moreover, it is easy to verify using (5.4.9) thatm∑i=0σi = (1 + o(1))[I(g, 0, δ)− δrr](5.4.10)where o(1) is as n (and hence m) tends to infinity, and withI(g, s, t) =∫ tsg′(x) log(exr−1g′(x))dx. (5.4.11)In light of this, to establish (5.4.2), it suffices to identify the maximizer gˆof I(g, 0, δ) over continuous, non-decreasing functions g, satisfying g(x) ≥ x,of the formg(x) =c1xr + εr, if x ∈ [0, ε′];x, if x ∈ [ε′, δ′];c2(xr − (δ′)r) + δ′, if x ∈ [δ′, δ],where(i) c1 ≥ 0, and hence ε′ ≥ εr;1555.4. Optimal activation trajectories(ii) if ε′ ≥ δ ∧ ε then c1 ≥ min{(δ − εr)/δr, 1/(rεr−1)}, and hence ε′ = δ;(iii) if ε′ < δ ∧ ε then c1 = (ε′ − εr)/(ε′)r; and(iv) c2 ≥ 1/(r(δ′)r−1).(Here δ ∧ ε denotes min{δ, ε}.) Note that (i) holds since g is non-decreasingon [0, ε′]; (ii) says that if g(x) > 0 for all x ∈ [0, δ ∧ ε), then g(x) = c1xr + εrfor some c1 as above and all x ∈ [0, δ]; (iii) holds since g is continuous atx = ε′; and (iv) holds since g(x) ≥ x on [δ′, δ].Indeed, if gˆ maximizes I(g, 0, δ) over such g, then by (5.4.6), (5.4.8) and(5.4.10), we have thatP (ε`r, δkr) = exp[(I(gˆ, 0, δ)− δr/r)kr(1 + o(1))]. (5.4.12)Therefore, noting that ξ = I(f∗, 0, δ)− δr/r, (5.4.2) follows once we verifythat gˆ = f∗. To this end, we observe that, if δ ≤ ε, then f∗ corresponds to gin the case that ε′ = δ and c1 = (δ − εr)/δr. On the other hand, if δ > ε,then f∗ corresponds to g in the case that ε′ = ε, δ′ = δ and c1 = 1/(rεr−1).See Figure 5.1. Hence, to complete the proof, we verify that the optimalε′, δ′ are ε′ = δ ∧ ε and δ′ = δ (i.e. gˆ = f∗).We use of the following observations in the calculations below. For anyc, c′ and u ≤ v, note thatI(x, u, v) = −(r − 2)(v − u) + (r − 1) log(vv/uu) (5.4.13)andI(cxr + c′, u, v) = c(vr − ur) log(e/(cr)). (5.4.14)First, we show that if the optimal trajectory intersects the diagonal atsome x = δ′ it coincides with it thereafter for all x ∈ [δ′, δ].Claim 5.4.6. For all δ′ ∈ [εr, δ) and c2 ≥ 1/(r(δ′)r−1), we have thatI(c2(xr − (δ′)r) + δ′, δ′, δ) < I(x, δ′, δ). (5.4.15)Proof. Let g(x) = c2(xr − (δ′)r) + δ′. By (5.4.14), it follows that I(g, δ′, δ) isdecreasing in c2 for c2 ≥ 1/r, and hence for all relevant c2, since we have that1565.4. Optimal activation trajectoriesδ′ ≤ δ ≤ 1. Therefore it suffices to assume that c2 is the minimal relevantvalue c2 = 1/(r(δ′)r−1). In this case, by (5.4.13) and (5.4.14), we have thatI(g, δ′, δ)− I(x, δ′, δ) is equal toδr − (δ′)rr(δ′)r−1 log(e(δ′)r−1) + (r − 2)(δ − δ′)− (r − 1) log(δδ/(δ′)δ′).Differentiating this expression with respect to δ′ we obtain−1− r − 1r((r − 1)(δ/δ′)r + 1) log(δ′)− (r − 2) + (r − 1)(log(δ′) + 1),which simplifies as(r − 1)2rlog(δ′)(1− (δ/δ′)r) ≥ 0for all δ′ ≤ δ ≤ 1. Since I(g, δ′, δ) − I(x, δ′, δ) → 0 as δ′ ↑ δ, the claimfollows. Next, we show that the optimal trajectory intersects the diagonal at somepoint ε′ ≤ δ ∧ ε.Claim 5.4.7. Suppose that c1 > min{(δ − εr)/δr, 1/(rεr−1)}. ThenI(c1xr + εr, 0, δ) < I(f∗, 0, δ). (5.4.16)Proof. As already noted, by (5.4.14) we see that I(cxr + c′, u, v) is increasingin c ≥ 1/r. Since (δ − εr)/δr is increasing in δ ∈ [0, 1], it follows by (5.3.2)that for all relevant δ ∈ [δε, 1],δ − εrδr≥ δε − εrδrε= 1r.Therefore, we may assume that c1 = min{(δ − εr)/δr, 1/(rεr−1)}. In thiscase, note that f∗(x) = c1xr + εr for x ∈ [0, δ ∧ ε]. Hence, if δ ≤ ε, the claimfollows immediately. 
On the other hand, if δ > ε, the claim follows notingthat f∗(x) = x for x ∈ [ε, δ], and I(c1xr + εr, ε, δ) < I(x, ε, δ) by (5.4.15)(setting δ′ = ε and c2 = 1/(rεr−1)). 1575.4. Optimal activation trajectoriesBy (5.4.15) and (5.4.16) the optimal trajectory intersects the diagonal atδ∧ε and coincides with it thereafter on [δ∧ε, δ] (i.e. the optimal δ′ is δ′ = δ).Finally, to identify f∗ as the optimal trajectory, we show that δ ∧ ε is thefirst place the optimal trajectory intersects the diagonal (i.e. the optimal ε′ isε′ = δ∧ ε). By (5.4.15) the only other possibility is that the trajectory meetsthe diagonal at some ε′ ∈ [εr, δ ∧ ε) and then coincides with it on [ε′, δ ∧ ε].We rule this out by the following observation.Claim 5.4.8. Let ε′ ∈ [εr, δ ∧ ε). ThenI((ε′ − εr)(x/ε′)r + εr, 0, ε′) + I(x, ε′, δ ∧ ε) < I(f∗, 0, δ ∧ ε). (5.4.17)Proof. Let η = δ ∧ ε. By (5.4.13) and (5.4.14), the left hand side above isequal to(ε′ − εr) log(er(ε′)rε′ − εr)− (r − 2)(η − ε′) + (r − 1) log(ηη/(ε′)ε′).Differentiating this expression with respect to ε′ we obtainlog( (ε′)rr(ε′ − εr))+ r(1− εr/ε′) + (r − 2)− (r − 1)(log(ε′) + 1).Since εr = rr−1ε, this expression simplifies as− log(r − (r − 1)ε/ε′) + (r − 1)(1− ε/ε′).By the inequality log x < x− 1 for x < 1, the above expression is positive forall ε′ ∈ [εr, η) ⊂ [εr, ε). The claim follows, taking ε′ ↑ η and recalling thatf∗(x) = (η − εr)(x/η)r + εr for x ∈ [0, η]. By (5.4.15), (5.4.16) and (5.4.17) it follows that the maximizer gˆ ofI(g, 0, δ) (over functions g, as described below (5.4.11)) is gˆ = f∗. Asdiscussed, (5.4.2) follows by (5.4.12), completing the proof. 158Chapter 6Sharp Threshold forK4-Percolation6.1 OverviewGraph bootstrap percolation is a variation of bootstrap percolation introducedby Bollobás. LetH be a graph. Edges are added to an initial graphG = (V,E)if they are in a copy of H minus an edge, until no further edges can be added.If eventually the complete graph on V is obtained, G is said to H-percolate.We identify the sharp threshold for K4-percolation on the Erdős–Rényi graphGn,p. This refines an approximation due to Balogh, Bollobás and Morris,which bounds the threshold up to multiplicative constants.∗6.2 Background and main resultsFix a graph H. Following Bollobás [39], H-bootstrap percolation is a cellularautomaton that adds edges to a graph G = (V,E) by iteratively completingall copies of H missing a single edge. Formally, given a graph G0 = G, letGt+1 be Gt together with every edge whose addition creates a subgraph thatis isomorphic to H. For a finite graph G, this procedure terminates onceGτ+1 = Gτ , for some τ = τ(G). We denote the resulting graph Gτ by 〈G〉H .If 〈G〉H is the complete graph on V , the graph G is said to H-percolate, orequivalently, that G is H-percolating.Recall that the Erdős–Rényi [60] graph Gn,p is the random subgraph ofKn obtained by including each possible edge independently with probability∗This chapter is independent work of the author [91], currently under review forpublication.1596.2. Background and main resultsp. In this work, we identify the sharp threshold for K4-percolation on Gn,p.Theorem 6.2.1. Let p =√α/(n logn). If α > 1/3 then Gn,p is K4-percolating with high probability. If α < 1/3 then with high probabilityGn,p does not K4-percolate.In Chapter 4 (joint work with Angel [12]) the super-critical case α > 1/3is established, via a connection with 2-neighbour bootstrap percolation (seeSection 6.2.1). It thus remains to study the sub-critical case α < 1/3. 
Inthis case, we also identify the size of the largest K4-percolating subgraphs ofGn,p.Theorem 6.2.2. Let p =√α/(n logn), for some α ∈ (0, 1/3). With highprobability the largest cliques in 〈Gn,p〉K4 are of size (β∗ + o(1)) logn, whereβ∗(α) ∈ (0, 3) satisfies 3/2 + β log(αβ)− αβ2/2 = 0.From the results in Chapter 4, it follows that with high probability〈Gn,p〉K4 has cliques of size at least (β∗ + o(1)) logn. Our contribution is toshow that these are typically the largest cliques.Balogh, Bollobás and Morris [24] study H-bootstrap percolation in thecase that G = Gn,p and H = Kk. The case k = 4 is the minimal case ofinterest. Indeed, all graphs K2-percolate, and a graph K3-percolates if andonly if it is connected. Therefore the case K3 follows by a classical result ofErdős and Rényi [60]. If p = (logn+ ε)/n then Gn,p is K3-percolating withprobability exp(−e−ε)(1 + o(1)), as n→∞.Critical thresholds for H-bootstrap percolation are defined in [24] bypc(n,H) = inf {p > 0 : P(〈Gn,p〉H = Kn) ≥ 1/2} .In light of Theorem 6.2.1, we find that pc(n,K4) ∼ 1/√3n logn, solvingProblem 2 in [24]. Moreover, the same holds if the 1/2 in the definition aboveis replaced by any probability in (0, 1). It is expected that this propertyhas a sharp threshold for H = Kk for all k, in the sense that for somepc = pc(k) we have that Gn,p is Kk-percolating with high probability forp > (1 + δ)pc and with probability tending to 0 for p = (1 − δ)pc. Some1606.2. Background and main resultsbounds for pc(n,Kk) are established in [24]. A main result of [24] is thatpc(n,K4) = Θ(1/√n logn). For larger k even the order of pc is open.6.2.1 Seed edgesIn Chapter 4 (see Theorem 4.2.2), a sharp upper bound for pc(n,K4) isestablished by observing a connection with 2-neighbour bootstrap percolation(see Pollak and Riess [116] and Chalupa, Leath and Reich [50]). This processis defined as follows: Let G = (V,E) be a graph. Given some initial setV0 ⊂ V of active vertices, let Vt+1 be the union of Vt and the set of allvertices with at least 2 neighbours in Vt. The sets Vt are increasing, and soconverge to some set of eventually active vertices, denoted by 〈V0, G〉2. Aset I is called contagious for G if it activates all of V , that is, 〈I,G〉2 = V .(Note that, despite the similar notation, 〈·〉2 has a different meaning than〈·〉H above for graphs H. In the present article, we only use 〈·〉2 and 〈·〉K4 .)If G = (V,E) has a contagious pair {u, v}, and moreover (u, v) ∈ E, thenclearly G is K4-percolating (see Lemma 4.2.3). In this case we call (u, v) aseed edge and G a seed graph. Hence G is a seed graph if some contagiouspair of G is joined by an edge.While it is possible for a graph to be K4-percolating without containinga seed edge (see Section 6.3), we believe that the two properties are fairlyclose. In particular, they have the same asymptotic threshold. In Chapter 4,the sharp threshold for the existence of contagious pairs in Gn,p is identified,and is shown to be 1/(2√n logn). It is also shown that if p =√α/(n logn),then for α > 1/3 with high probability Gn,p has a seed edge, and so isK4-percolating. If α < 1/3 then the largest seed subgraphs of Gn,p are of size(β∗+o(1)) logn with high probability, where β∗ is as defined in Theorem 6.2.2.6.2.2 OutlineBy the results in Chapter 4 discussed in the previous Section 6.2.1, to proveTheorems 6.2.1 and 6.2.2 it remains to establish the following result.Proposition 6.2.3. Let p =√α/(n logn), for some α ∈ (0, 1/3). Forany δ > 0, with high probability 〈Gn,p〉K4 contains no clique larger than1616.2. 
Background and main results(β∗ + δ) logn, where β∗ is as defined in Theorem 6.2.2.In other words, we need to rule out the possibility that some subgraph ofGn,p is K4-percolating and larger than (β∗ + δ) logn.For a graph G = (V,E), let V (G) = V and E(G) = E denote its vertexand edge sets. For H ⊂ G, let 〈H,G〉2 denote the subgraph of G inducedby 〈V (H), G〉2 (see Section 6.2.1). It is easy to see that if H ⊂ G is K4-percolating, then so is 〈H,G〉2. In particular, G is a seed graph if 〈e,G〉2 = Gfor some seed edge e ∈ E(G). On the other hand, if a K4-percolating graphG is not a seed graph, we show that there is some K4-percolating subgraphC ⊂ G of minimum degree at least 3 such that 〈C,G〉2 = G. We call C the3-core of G. Hence, to establish Proposition 6.2.3, we require bounds for (i)the number of K4-percolating graphs C of size q with minimum degree atleast 3, and (ii) the probability that for a given set I ⊂ [n] of size q we havethat |〈I,Gn,p〉2| ≥ k.We obtain an upper bound of (2/e)qq!qq for the number of K4-percolating3-cores C of size q. (This is much smaller than the number of seed subgraphsof size q, which in Chapter 4, see Lemmas 4.3.5 and 4.4.5, is shown to be equalto q!qqeo(q).) Further arguments imply that, for p as in Proposition 6.2.3,with high probability Gn,p has no such subgraphs C larger than (2α)−1 logn.This already gives a strong indication that 1/3 is indeed the critical constant,since as shown by Janson, Łuczak, Turova and Vallier [84, Theorem 3.1] (seeTheorem 2.4.1), (2α)−1 logn is the critical size above which a random set islikely to be contagious.In Chapter 5 (joint work with Angel [11]), large deviation estimates aredeveloped for the probability that small sets of vertices eventually activatea relatively large set of vertices via the r-neighbour bootstrap percolationdynamics. These bounds complement the central limit theorems of [84] (seeTheorem 2.4.5). This result, in the case of r = 2, plays an important role inthe current chapter. For 2 ≤ q ≤ k, let P (q, k) denote the probability thatfor a given set I ⊂ [n], with |I| = q, we have that |〈I,Gn,p〉2| ≥ k.Lemma 6.2.4 (Angel and Kolesnik [11, Theorem 3.2]). Let p =√α/(n logn),for some α > 0. Let ε ∈ [0, 1) and β ∈ [βε, 1/α], where βε = (1−√1− ε)/α.1626.2. Background and main resultsPut kα = α−1 logn and qα = (2α)−1 logn. Suppose that q/qα → ε andk/kα → αβ as n→∞. Then P (q, k) = nξε+o(1), where ξε = ξε(α, β) is equalto−αβ22 +(2αβ − ε)(2α)−1 log(e(αβ)2/(2αβ − ε)), β ∈ [βε, ε/α);β log(αβ)− ε(2α)−1 log(ε/e), β ∈ [ε/α, 1/α].(This estimate follows by Theorem 5.4.2, setting r = 2, ϑ = (4α)−1 lognand δ = αβ, in which case, in the notation of [11], we have k2 = kα, `2 = qαand δε = αβε.) Applying the lemma and the bound (2/e)qq!qq for the numberof K4-percolating 3-cores of size q, we deduce that the expected number ofK4-percolating subgraphs of Gn,p of size k = β logn, for some β ∈ [βε, 1/α],is bounded by nµ+o(1), whereµ(α, β) = 3/2 + β log(αβ)− αβ2/2,leading to Proposition 6.2.3.In closing, we remark that the proof of Proposition 4.3.1 in Chapter 4shows that the expected number of edges in Gn,p that are a seed edge for asubgraph of size at least k = β logn, for β ∈ (0, 1/α], is bounded by nµ+o(1).(Alternatively, we recover this bound from the case ε = 0 in Lemma 6.2.4.)This suggests that perhaps Gn,p is as likely to K4-percolate due to a seededge as in any other way. 
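As a quick numerical companion to Theorem 6.2.2, the sketch below solves 3/2 + β log(αβ) − αβ²/2 = 0 for β ∈ (0, 3) at a few values of α < 1/3. Since ∂µ/∂β = log(αβ) + 1 − αβ < 0 for αβ < 1, the function µ is strictly decreasing in β on this range and the root β∗(α) is unique; moreover µ(1/3, 3) = 0, so β∗(α) approaches 3 as α approaches the threshold 1/3. The chosen values of α and the helper names are arbitrary.

```python
import math
from scipy.optimize import brentq

def mu(alpha, beta):
    """mu(alpha, beta) = 3/2 + beta*log(alpha*beta) - alpha*beta^2/2, as above."""
    return 1.5 + beta * math.log(alpha * beta) - alpha * beta ** 2 / 2

def beta_star(alpha):
    """Unique root of mu(alpha, .) in (0, 3); mu is strictly decreasing there."""
    return brentq(lambda b: mu(alpha, b), 1e-9, 3.0)

for alpha in (0.05, 0.15, 0.25, 0.33):
    print(round(alpha, 2), round(beta_star(alpha), 4))
```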
That being said, the precise behaviour in thescaling window (the range of p where Gn,p is K4-percolating with probabilityin [ε, 1− ε]) remains an interesting open problem. As mentioned above, thecase of K3-percolation follows by fundamental work of Erdős and Rényi [60]:With high probability Gn,p is K3-percolating (equivalently, connected) if andonly if it has no isolated vertices. It seems possible that K4-percolation ismore complicated. Perhaps, for p in the scaling window, the probability thatGn,p has a seed edge converges to a constant in (0, 1), and with non-vanishingprobability Gn,p is not a seed graph, however is K4-percolating due to a small3-core C of size O(1) such that |〈C,Gn,p〉2| = n. We hope to investigate thisin future work.1636.3. Clique processes6.3 Clique processesIf a graph G is K4-percolating, we will often simply say that G percolates, orthat it is percolating. Following [24], we define the clique process, as a wayto analyze K4-percolation on graphs.Definition 6.3.1. We say that three graphs Gi = (Vi, Ei) form a triangleif there are distinct vertices x, y, z such that x ∈ V1 ∩ V2, y ∈ V1 ∩ V3 andz ∈ V2 ∩ V3. If |Vi ∩ Vj | = 1 for all i 6= j, we say that the Gi form exactlyone triangle.In [24] the following observation is made.Lemma 6.3.2. Suppose that Gi = (Vi, Ei) percolate.(i) If |V1 ∩ V2| > 1 then G1 ∪G2 percolates.(ii) If the Gi form a triangle then G1 ∪G2 ∪G3 percolates.Moreover, if the Gi form multiple triangles (that is, if there are multipletriplets x, y, z as above), then the percolation of G1 ∪ G2 ∪ G3 follows byapplying Lemma 6.3.2(ii) twice. Indeed, some Gi, Gj have two vertices incommon, and so G′ = Gi ∪Gj percolates, and G′ has two common verticeswith the remaining graph Gk.By these observations, the K4-percolation dynamics are classified in [24]as follows (which we modify slightly here in light of the previous observation).Definition 6.3.3. A clique process for a graph G is a sequence (Ct)τt=1 ofsets of subgraphs of G with the following properties:(i) C0 = E(G) is the edge set of G.(ii) For each t < τ , Ct+1 is constructed from Ct by either (a) mergingtwo subgraphs G1, G2 ∈ Ct with at least two common vertices, or(b) merging three subgraphs G1, G2, G3 ∈ Ct that form exactly onetriangle.(iii) Cτ is such that no further operations as in (ii) are possible.The reason for the name is that for any t ≤ τ and H ∈ Ct, 〈H〉K4 is thecomplete graph on V (H).1646.3. Clique processesLemma 6.3.4. Let G be a finite graph and (Ct)τt=1 a clique process forG. For each t ≤ τ , Ct is a set of edge-disjoint, percolating subgraphs of G.Furthermore, 〈G〉K4 is the edge-disjoint, triangle-free union of the cliques〈H〉K4 , H ∈ Cτ . Hence G percolates if and only if Cτ = {G}. In particular, iftwo clique processes for G terminate at Cτ and C′τ ′ , then necessarily Cτ = C′τ ′ .6.3.1 ConsequencesThe following corollaries of Lemma 6.3.4 are proved in [24].Lemma 6.3.5. If G = (V,E) percolates then |E| ≥ 2|V | − 3.In light of this, we define the excess of a percolating graph G = (V,E) tobe |E| − (2|V | − 3). We call a percolating graph edge-minimal if its excess is0. To prove Lemma 6.3.5, the following observations are made in [24].Lemma 6.3.6. 
Suppose that Gi = (Vi, Ei) percolate.(i) If the Gi form exactly one triangle, then the excess of G1 ∪G2 ∪G3 isthe sum of the excesses of the Gi.(ii) If |V1 ∩ V2| = m ≥ 2, then the excess of G1 ∪ G2 is the sum of theexcesses of the Gi plus 2m− 3.Hence, if G is edge-minimal and percolating, then every step of anyclique process for G involves merging three subgraphs that form exactly onetriangle. A special class of percolating graphs are seed graphs, as discussedin Section 6.2.1. In an edge-minimal seed graph G, every step of some cliqueprocess for G involves merging three subgraphs, two of which are a singleedge.Finally, since in each step of any clique process for a graph G either 2 or 3subgraphs are merged, we have the following useful criterion for percolation.Lemma 6.3.7. Let G = (V,E) be a graph of size n, and 1 ≤ k ≤ n. If thereis no percolating subgraph G′ ⊂ G of size k′, for any k′ ∈ [k, 3k], then G hasno percolating subgraph larger than k. In particular, G does not percolate.1656.4. Percolating graphs6.4 Percolating graphsIn this section, we analyze the general structure of percolating graphs.Definition 6.4.1. We say that a graph G is irreducible if removing any edgefrom G results in a non-percolating graph.Clearly, a graph G is percolating if and only if it has an irreduciblepercolating subgraph G′ ⊂ G such that V (G) = V (G′).For a graph G and vertex v ∈ V (G), we let Gv denote the subgraph of Ginduced by V − {v}, that is, the subgraph obtained by removing v.Lemma 6.4.2. Let G be an irreducible percolating graph. If v ∈ V (G) is ofdegree 2, then Gv is percolating.Proof. The proof is by induction on the size of G. The case |V (G)| = 3,in which case G is a triangle, is immediate. Hence suppose that G, with|V (G)| > 3, percolates and some v ∈ V (G) is of degree 2, and assume thatthe statement of the lemma holds for all graphs H with |V (H)| < |V (G)|.Let (Ct)τt=1 be a clique process for G. Let e1, e2 denote the edges incidentto v in G. Let t < τ be the first time in the clique process (Ct)τt=1 thata subgraph containing either e1 or e2 is merged with other (edge-disjoint,percolating) subgraphs. We claim that Ct+1 is obtained from Ct by merginge1, e2 with a subgraph in Ct. To see this, we first observe that if a graphH percolates and |V (H)| > 2 (that is, H is not simply an edge), then allvertices in H have degree at least 2. Next, by the choice of t, we note thatnone of the graphs being merged contain both e1, e2. Therefore, since v isof degree 2, if one the graphs contains exactly one ei, then it is necessarilyequal to ei, being a percolating graph of minimum degree 1. It follows thatv is contained in two of the graphs being merged, and hence that Ct+1 is theresult of merging the edges e1, e2 with a subgraph in Ct, as claimed.To conclude, note that if t = τ − 1 then since G percolates (and soCτ = {G}) we have that Cτ−1 = {e1, e2, Gv}, and so Gv percolates. On theother hand, if t < τ − 1, then Cτ contains 2 or 3 subgraphs, one of whichcontains e1 and e2. If Cτ−1 = {G1, G2}, where e1, e2 ∈ E(G1), say, then1666.4. Percolating graphsby the inductive hypothesis we have that (G1)v percolates. Since G1, G2are edge-disjoint, we have that v /∈ V (G2), as otherwise G2 would be apercolating graph with an isolated vertex. Hence, by Lemma 6.3.2(i), we findthat (G1)v ∪G2 = Gv percolates. Similarly, if Cτ−1 = {G1, G2, G3}, wheree1, e2 ∈ E(G1), say, then by the inductive hypothesis and Lemma 6.3.2(ii),we find that (G1)v ∪G2 ∪G3 = Gv percolates.The induction is complete. 
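The statements above are easy to experiment with on small graphs. The sketch below implements the closure ⟨·⟩K4 and the 2-neighbour closure ⟨·, ·⟩2, checks that K4 minus an edge percolates and that deleting its degree-2 vertex leaves a percolating graph, as Lemma 6.4.2 predicts, and checks a small seed graph in which the pair {0, 1} is contagious and joined by an edge. The example graphs and function names are arbitrary illustrative choices.

```python
from itertools import combinations

def k4_closure(n, edges):
    """<G>_{K4}: repeatedly add uv whenever u and v have two adjacent common neighbours."""
    E = {frozenset(e) for e in edges}
    changed = True
    while changed:
        changed = False
        for u, v in combinations(range(n), 2):
            if frozenset((u, v)) in E:
                continue
            common = [w for w in range(n) if w not in (u, v)
                      and frozenset((u, w)) in E and frozenset((v, w)) in E]
            if any(frozenset((x, y)) in E for x, y in combinations(common, 2)):
                E.add(frozenset((u, v)))
                changed = True
    return E

def k4_percolates(n, edges):
    return len(k4_closure(n, edges)) == n * (n - 1) // 2

def bootstrap2_closure(n, edges, seed):
    """<seed, G>_2: activate vertices with at least two active neighbours until stable."""
    nbrs = {v: set() for v in range(n)}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    active, changed = set(seed), True
    while changed:
        changed = False
        for v in range(n):
            if v not in active and len(nbrs[v] & active) >= 2:
                active.add(v)
                changed = True
    return active

# K4 minus the edge {0,1}: an irreducible percolating graph in which vertex 0 has degree 2.
k4_minus = [(0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(k4_percolates(4, k4_minus))                  # True
# Deleting vertex 0 leaves the triangle on {1,2,3} (relabelled to {0,1,2} here),
# which percolates, as Lemma 6.4.2 predicts.
print(k4_percolates(3, [(0, 1), (0, 2), (1, 2)]))  # True

# A small seed graph: each new vertex is attached to two adjacent earlier vertices,
# so {0, 1} is a contagious pair joined by an edge (a seed edge).
G = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
print(sorted(bootstrap2_closure(5, G, {0, 1})))    # [0, 1, 2, 3, 4]
print(k4_percolates(5, G))                         # True
```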
Recall (see Sections 6.2.1 and 6.2.2) that for graphsH ⊂ G, we let 〈H,G〉2denote the subgraph of G induced by 〈V (H), G〉2, that is, the subgraph of Ginduced by the closure of V (H) under the 2-neighbour bootstrap percolationdynamics on G. By Lemma 6.3.2(i), if H ⊂ G is percolating then so is〈H,G〉2.The following is an immediate consequence of Lemma 6.4.2.Lemma 6.4.3. Let G be an irreducible percolating graph. Then either(i) G = 〈e,G〉2 for some edge e ∈ E(G), or else,(ii) G = 〈C,G〉2 for some percolating subgraph C ⊂ G of minimum degreeat least 3.Futhermore,(iii) the excess of G is equal to the excess of C.We note that in case (i), G is a seed graph and e is a seed edge for G.An irreducible seed graph is edge-minimal, that is, it has 0 excess. In case(ii), we call C the 3-core of G. If G = C we say that G is a 3-core.It is straightforward to verify that all irreducible percolating graphs on2 < k ≤ 6 vertices have a vertex of degree 2. There is however an edge-minimal percolating graph of size k = 7 with no vertex of degree 2, seeFigure 6.1.6.4.1 Basic estimatesIn this section, we use Lemma 6.4.3 to obtain upper bounds for irreduciblepercolating graphs. For such a graph G, the relevant quantities are the1676.4. Percolating graphsFigure 6.1: The smallest irreducible percolating 3-core.number of vertices in G of degree 2, the size of its 3-core C ⊂ G, and itsnumber of excess edges.Definition 6.4.4. Let I`q(k, i) be the number of labelled, irreducible graphsG of size k with an excess of ` edges, i vertices of degree 2, and a 3-coreC ⊂ G of size 2 < q ≤ k. If i = 0, and hence k = q, we let C`(k) = I`k(k, 0).In the case ` = 0, we will often simply write Iq(k, i) and C(k).By Lemma 6.4.3(iii), if a graph G contributes to I`q(k, i) then its 3-coreC ⊂ G has an excess of ` edges. Also, as noted above, there are no irreducible3-cores on k ≤ 6 vertices. Hence I`q(k, i) = 0 if 2 < q ≤ 6.Definition 6.4.5. We define I2(k, i) to be the number of labelled, edge-minimal seed graphs of size k with i vertices of degree 2.For convenience, we let C(2) = 1 and set I`2(k, i) = 0 and C`(2) = 0 for` > 0 (in light of Lemma 6.4.3(iii)). Moreover, to simplify several statementsin this work, if we say that a graph G has a 3-core of size less than q > 2,we mean to include also the possibility that q = 2.Definition 6.4.6. We let I`(k, i) = ∑q I`q(k, i) denote the number of labelled,irreducible graphs G of size k with an excess of ` edges and i vertices ofdegree 2.If ` = 0, we will often write I(k, i).We obtain the following estimate for I`(k, i) in the case that ` ≤ 3, thatis, for graphs with at most 3 excess edges.Lemma 6.4.7. For all k ≥ 2, ` ≤ 3 and relevant i, we have thatI`(k, i) ≤ (2/e)kk!kk+2`+i.1686.4. Percolating graphsIn particular, C`(k) ≤ (2/e)kk!kk+2`.The method of proof gives bounds for larger `, however, as it turns out,percolating graphs with a larger excess can be dealt with using less accurateestimates (see Lemma 6.5.3).The proof is somewhat involved, as there are several cases to consider,depending on the nature of the last step of a clique process for G. We proceedby induction: First, we note that the cases i > 0 follow easily, since if Ghas i vertices of degree 2, then removing such a vertex from G results in agraph with j ∈ {i, i± 1} vertices of degree 2. Analyzing this case leads tothe constant 2/e. The case i = 0 (corresponding to 3-cores) is the heart ofthe proof. 
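As an aside, the claim above that every irreducible percolating graph on 2 < k ≤ 6 vertices has a vertex of degree 2 can be confirmed by exhaustive search over labelled graphs on at most six vertices; a self-contained sketch of such a check follows (the helper names are arbitrary).

```python
from itertools import combinations

def closure(n, E):
    """<G>_{K4} as an edge set: add uv whenever u and v have two adjacent common neighbours."""
    E = set(E)
    changed = True
    while changed:
        changed = False
        for u, v in combinations(range(n), 2):
            if (u, v) in E:
                continue
            common = [w for w in range(n) if w not in (u, v)
                      and (min(u, w), max(u, w)) in E and (min(v, w), max(v, w)) in E]
            if any((x, y) in E for x, y in combinations(common, 2)):
                E.add((u, v))
                changed = True
    return E

def percolates(n, E):
    return len(closure(n, E)) == n * (n - 1) // 2

def irreducible_percolating(n, E):
    return percolates(n, E) and all(not percolates(n, E - {e}) for e in E)

for n in (4, 5, 6):
    pairs = list(combinations(range(n), 2))
    found = 0
    for mask in range(1 << len(pairs)):
        E = {pairs[i] for i in range(len(pairs)) if mask >> i & 1}
        deg = [sum(v in e for e in E) for v in range(n)]
        if min(deg) >= 3 and irreducible_percolating(n, E):
            found += 1
    print(n, found)   # expect 0 in each case: no irreducible percolating 3-core on <= 6 vertices
```

We now return to the case i = 0.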
The following observation allows the induction to go through inthis case: If G is a percolating 3-core, then in the last step of a clique processfor G either (i) three graphs G1, G2, G3 are merged that form exactly onetriangle on T = {v1, v2, v3}, or else (ii) two graphs G1, G2 are merged thatshare exactly m ≥ 2 vertices S = {v1, v2, . . . , vm}. We note that if some Gjhas a vertex v of degree 2, then necessarily v ∈ T in case (i), and v ∈ S incase (ii) (as else, G would have a vertex of degree 2). In other words, if apercolating 3-core is formed by merging graphs with vertices of degree 2,then all such vertices belong to the triangle that they form or the set of theircommon vertices.Proof. It is easily verified that the statement of the lemma holds for k ≤ 4.We prove the remaining cases by induction. For k > 4, we claim moreoverthat for all ` ≤ 3 and relevant i,I`(k, i) ≤ Aζk(ki)k!kk+2` (6.4.1)where ζ = 2/e and A = 6/(ζ55!55). The lemma follows, noting that A < 1and(ki) ≤ ki.We introduce the constant A < 1 in order to push through the inductionin the case i = 0, corresponding to 3-cores. The last step of a clique processfor such a graph G involves merging 2 or 3 subgraphs Gj . Informally, weuse the constant A to penalize graphs G such that at least two of the Gj1696.4. Percolating graphscontain more than 4 vertices, that is, graphs G formed by merging at leasttwo “macroscopic” subgraphs.By the choice of A, we have that (6.4.1) holds for k = 5. Indeed, notethat I(5, i) ≤ (5i)(42) for all i ∈ {1, 2, 3} and I`(5, i) = 0 otherwise. Assumethat for some k > 5, (6.4.1) holds for all 4 < k′ < k, and all ` ≤ 3 andrelevant i.We begin with the case of graphs G of size k with at least one vertex ofdegree 2. This case follows easily by a recursive upper bound (and explainsthe choice of ζ = 2/e).Case 1 (i > 0). Suppose that G is a graph contributing to I`(k, i), wherei > 0 and ` ≤ 3. Let v ∈ V (G) be the vertex of degree 2 in G with theminimal index. By considering which two of the k− i vertices of G of degreelarger than 2 are neighbours of v, we find that I`(k, i) is bounded from aboveby (ki)(k − i2) 2∑j=0(2j)I`(k − 1, i− 1 + j)( k−1i−1+j) .In this sum, j ∈ {0, 1, 2} is the number of neighbours of v that are of degree2 in the subgraph of G induced by V (G) − {v}. Applying the inductivehypothesis, we obtainI`(k, i) ≤ Aζk(ki)k!kk+2` · 2ζ(k − 1k)k≤ Aζk(ki)k!kk+2`,as required.The remaining cases deal with 3-cores G of size k, where i = 0. First, weestablish the case i = ` = 0, corresponding to edge-minimal percolating 3-cores. The cases i = 0 and ` ∈ {1, 2, 3} are proved by adapting the argumentfor i = ` = 0.Case 2 (i = ` = 0). Let G be a graph contributing to C(k) = I(k, 0).Then, by Lemma 6.3.6, in the last step of a clique process for G, threeedge-minimal percolating subgraphs Gj , j ∈ {1, 2, 3}, are merged whichform exactly one triangle on some T = {v1, v2, v3} ⊂ V (G). Moreover, eachGj has at most 2 vertices of degree 2, and if some Gj has such a vertex v1706.4. Percolating graphsthen necessarily v ∈ T (as else G would have a vertex of degree 2). Also ifkj = |V (Gj)|, with k1 ≥ k2 ≥ k3, then (i) ∑3j=1 kj = k + 3, (ii) k1, k2 ≥ 4and (iii) k3 = 2 or k3 ≥ 4 (since if some kj = 3 or some kj = kj′ = 2, j 6= j′,then G would have a vertex of degree 2).Since the inductive hypothesis only holds for graphs with more than 4vertices, it is convenient to deal with the case k1 = 4 separately: Note thatthe only irreducible percolating 3-cores of size k with all kj ≤ 4 are of sizek ∈ {7, 9}. 
These graphs are the graph in Figure 6.1 and the graph obtained from this graph by replacing the bottom edge with a copy of $K_4$ minus an edge. It is straightforward to verify that (6.4.1) holds if $k \in \{7, 9\}$, and so in the arguments below we assume that $k_1 > 4$. Moreover, since the graph in Figure 6.1 is the only irreducible percolating 3-core on $k = 7$ vertices, we further assume below that $k \ge 8$.

We take three cases, with respect to whether (i) $k_2 = 4$, (ii) $k_2 > 4$ and $k_3 \in \{2, 4\}$, or (iii) $k_3 > 4$.

Case 2(i) ($i = \ell = 0$ and $k_2 = 4$). Note that if $k_2 = 4$ then $k_3 \in \{2, 4\}$. The number of graphs $G$ as above with $k_3 = 2$ and $k_2 = 4$ is bounded from above by
\[
\binom{k}{k-3}\binom{k-3}{2}\binom{3}{1}2!\,2\sum_{j=0}^{2}\binom{2}{j}\frac{I(k-3,j)}{\binom{k-3}{j}}.
\]
Here the first binomial selects the vertices for the subgraph of size $k_1 = k - 3$, the next two binomials select the vertices for the triangle $T$, and the rightmost factor bounds the number of possibilities for the subgraph of size $k_1 = k - 3$ (recalling that it can have at most 2 vertices of degree 2, and if it contains any such vertex $v$, then $v \in T$). Applying the inductive hypothesis (recall that we may assume that $k_1 > 4$), the above expression is bounded by
\[
A\zeta^kk!\,k^k\cdot\frac{(k-3)^{k-1}}{k^k}\,\frac{4}{\zeta^3} \le A\zeta^kk!\,k^k\cdot\frac{1}{k}\,\frac{4}{\zeta^3e^3}.
\]
Here, and throughout this proof, we use the fact that $(\frac{k-x}{k})^{k-y} \le e^{-x}$ provided that $2y \le x < k$ and $x > 0$. To see this, note that $(\frac{k-x}{k})^{k-y} \to e^{-x}$ as $k \to \infty$, and
\[
\frac{\partial}{\partial k}\Big(\frac{k-x}{k}\Big)^{k-y} = \Big(\frac{k-x}{k}\Big)^{k-y}\Big(\log\Big(\frac{k-x}{k}\Big)+\frac{x(k-y)}{k(k-x)}\Big) \ge \Big(\frac{k-x}{k}\Big)^{k-y}\frac{x(x-2y)}{2k(k-x)} \ge 0,
\]
by the inequality $\log u \ge (u^2-1)/(2u)$ (which holds for $u \in (0,1]$).

Similarly, the number of graphs $G$ as above such that $k_2 = k_3 = 4$ is bounded by
\[
\binom{k}{k-5,\,3,\,2}\binom{k-5}{2}\binom{3}{1}2!\,3\sum_{j=0}^{2}\binom{2}{j}\frac{I(k-5,j)}{\binom{k-5}{j}}.
\]
By the inductive hypothesis, this is bounded by
\[
A\zeta^kk!\,k^k\cdot\frac{(k-5)^{k-3}}{k^k}\,\frac{4}{\zeta^5} \le A\zeta^kk!\,k^k\cdot\frac{1}{k^{5/2}\sqrt{k-5}}\,\frac{4}{\zeta^5e^5}.
\]
Altogether, we find that the number of graphs $G$ contributing to $C(k)$ with $k_2 = 4$, divided by $A\zeta^kk!\,k^k$, is bounded by
\[
\gamma_1 = \frac{1}{8}\,\frac{4}{\zeta^3e^3} + \frac{1}{8^{5/2}\sqrt{3}}\,\frac{4}{\zeta^5e^5} < 0.07. \tag{6.4.2}
\]

Case 2(ii) ($i = \ell = 0$, $k_2 > 4$ and $k_3 \in \{2, 4\}$). Note that in this case we may further assume that $k \ge 9$. For a given $k_1, k_2 > 4$, the number of graphs $G$ as above with $k_3 = 2$ (in which case $k_1 + k_2 = k + 1$) is bounded by
\[
\binom{k}{k_1,\,k_2-1}\binom{k_1}{2}\binom{k_2-1}{1}2!\,2\prod_{j=1}^{2}\sum_{i=0}^{2}\binom{2}{i}\frac{I(k_j,i)}{\binom{k_j}{i}}.
\]
Applying the inductive hypothesis, this is bounded by
\[
A\zeta^kk!\,k^k\cdot2\cdot4^2A\zeta\,\frac{k_1^{k_1+2}k_2^{k_2+2}}{k^k}.
\]
Since $k_2 = k + 1 - k_1$, we have that
\[
\frac{\partial}{\partial k_1}k_1^{k_1+2}k_2^{k_2+2} = -k_1^{k_1+1}k_2^{k_2+1}\big(k_1k_2\log(k_2/k_1)-2(k_1-k_2)\big).
\]
By the bound $\log x \le x - 1$, we see that
\[
k_1k_2\log(k_2/k_1)-2(k_1-k_2) \le -(k_2+2)(k_1-k_2) \le 0.
\]
Hence, setting $k_1$ to be the maximum relevant value $k_1 = k-4$ (when $k_2 = 5$), we find
\[
\frac{k_1^{k_1+2}k_2^{k_2+2}}{k^k} \le \frac{5^7(k-4)^{k-2}}{k^k} \le \frac{1}{k^2}\,\frac{5^7}{e^4}
\]
for all relevant $k_1, k_2$. Therefore, summing over the at most $k/2$ possibilities for $k_1, k_2$, we find that at most
\[
A\zeta^kk!\,k^k\cdot\frac{1}{k}\,\frac{A\zeta\,4^25^7}{e^4}
\]
graphs $G$ with $k_3 = 2$ and $k_2 > 4$ contribute to $C(k)$.

The case of $k_3 = 4$ is very similar. In this case, for a given $k_1, k_2 > 4$ such that $k_1 + k_2 = k - 1$, the number of graphs $G$ as above is bounded by
\[
\binom{k}{k_1,\,k_2-1,\,2}\binom{k_1}{2}\binom{k_2-1}{1}2!\,3\prod_{j=1}^{2}\sum_{i=0}^{2}\binom{2}{i}\frac{I(k_j,i)}{\binom{k_j}{i}},
\]
which, by the inductive hypothesis, is bounded by
\[
A\zeta^kk!\,k^k\cdot\frac{2\cdot4^2A}{\zeta}\,\frac{k_1^{k_1+2}k_2^{k_2+2}}{k^k}.
\]
Arguing as in the previous case, we see that the above expression is maximized when $k_2 = 5$ and $k_1 = k - 6$. Hence, summing over the at most $k/2$ possibilities for $k_1, k_2$, there are at most
\[
A\zeta^kk!\,k^k\cdot\frac{1}{(k-6)k^2}\,\frac{A\,4^25^7}{\zeta e^6}
\]
graphs $G$ that contribute to $C(k)$ with $k_3 = 4$ and $k_2 > 4$.

We conclude that the number of graphs $G$ that contribute to $C(k)$ with $k_2 > 4$ and $k_3 \in \{2, 4\}$, divided by $A\zeta^kk!\,k^k$, is bounded by
\[
\gamma_2 = \frac{1}{9}\,\frac{A\zeta\,4^25^7}{e^4} + \frac{1}{3\cdot9^2}\,\frac{A\,4^25^7}{\zeta e^6} < 0.15. \tag{6.4.3}
\]

Case 2(iii) ($i = \ell = 0$ and $k_3 > 4$). In this case we may further assume that $k \ge 12$. For a given $k_1, k_2, k_3 > 4$ such that $k_1 + k_2 + k_3 = k + 3$, the number of graphs $G$ as above is bounded by
\[
\binom{k}{k_1,\,k_2-1,\,k_3-2}\binom{k_1}{2}\binom{k_2-1}{1}2!\,3\prod_{j=1}^{3}\sum_{i=0}^{2}\binom{2}{i}\frac{I(k_j,i)}{\binom{k_j}{i}}.
\]
By the inductive hypothesis, this is bounded by
\[
A\zeta^kk!\,k^k\cdot2^24^3A^2\zeta^3\,\frac{k_1^{k_1+2}k_2^{k_2+2}k_3^{k_3+2}}{k^k}.
\]
As in the previous cases considered, the above expression is maximized when $k_2 = k_3 = 5$ and $k_1 = k - 7$. Hence, summing over the at most $k^2/12$ choices for the $k_j$, we find that at most
\[
A\zeta^kk!\,k^k\cdot\frac{1}{((k-7)k)^{3/2}}\,\frac{A^2\zeta^3\,4^35^{14}\,3}{e^7}
\]
graphs $G$ contribute to $C(k)$ with $k_3 > 4$. Hence, the number of such graphs, divided by $A\zeta^kk!\,k^k$, is bounded by
\[
\gamma_3 = \frac{1}{(5\cdot12)^{3/2}}\,\frac{A^2\zeta^3\,4^35^{14}\,3}{e^7} < 0.01. \tag{6.4.4}
\]
Finally, combining (6.4.2), (6.4.3) and (6.4.4), we find that
\[
\frac{C(k)}{A\zeta^kk!\,k^k} \le \gamma_1+\gamma_2+\gamma_3 < 0.23 < 1, \tag{6.4.5}
\]
completing the proof of Case 2.

It remains to consider the cases $i = 0$ and $\ell \in \{1, 2, 3\}$, corresponding to 3-cores $G$ with a non-zero excess. In these cases, it is possible that only 2 subgraphs are merged in the last step of a clique process for $G$. We prove the cases $\ell = 1, 2, 3$ separately; however, they all follow by adjusting the proof of Case 2.

First, we note that if two graphs $G_1, G_2$ with at least 2 vertices in common are merged to form an irreducible percolating 3-core $G$, then necessarily each $G_j$ contains more than 4 vertices. In particular, such a graph $G$ contains at least 8 vertices. This allows us to apply the inductive hypothesis in these cases (recall that we claim that (6.4.1) holds only for graphs with more than 4 vertices), without taking additional sub-cases as in the proof of Case 2.

Case 3 ($i = 0$ and $\ell = 1$). If $G$ contributes to $C^1(k)$, then by Lemma 6.3.6, in the last step of a clique process for $G$, there are two cases to consider:

(i) Three percolating subgraphs $G_j$, $j \in \{1, 2, 3\}$, are merged which form exactly one triangle $T = \{v_1, v_2, v_3\}$, such that for some $i_j \le 2$ and $k_j, \ell_j \ge 0$ with $\sum k_j = k+3$ and $\sum \ell_j = 1$, we have that $G_j$ contributes to $I^{\ell_j}(k_j, i_j)$. Moreover, if any $i_j > 0$, the corresponding $i_j$ vertices of $G_j$ of degree 2 belong to $T$.

(ii) Two percolating subgraphs $G_j$, $j \in \{1, 2\}$, are merged that share exactly two vertices $S = \{v_1, v_2\}$, such that for some $i_j \le 2$ and $k_j$ with $\sum k_j = k + 2$, we have that the $G_j$ contribute to $I(k_j, i_j)$. Moreover, if any $i_j > 0$, the corresponding $i_j$ vertices of $G_j$ of degree 2 belong to $S$.

We claim that, by the arguments in Case 2 leading to (6.4.5), the number of graphs $G$ satisfying (i), divided by $A\zeta^kk!\,k^{k+2}$, is bounded by
\[
\gamma_1 + 2\gamma_2 + 3\gamma_3 < 0.40. \tag{6.4.6}
\]
To see this, note the only difference between (i) of the present case and Case 2 above is that here one of the $G_j$ has exactly 1 excess edge. Note that if one of the graphs $G_j$ has an excess edge, then necessarily $k_j > 4$. Recall that graphs $G$ that contribute to $C(k)$, as considered in Cases 2(i),(ii),(iii) above, have exactly 1, 2, 3 subgraphs $G_j$ with $k_j > 4$. Moreover, recall that the number of such graphs $G$, divided by $A\zeta^kk!\,k^k$, is bounded by $\gamma_1, \gamma_2, \gamma_3$, respectively, in these cases.
Therefore, applying the inductive hypothesis, and noting that if $G_j$ has exactly 1 excess edge then it contributes an extra factor of $k_j^2 < k^2$, it follows that the number of graphs $G$ as in (i) of the present case, divided by $A\zeta^kk!\,k^{k+2}$, is bounded by $\sum_{j=1}^{3} j\gamma_j$, as claimed. (By (6.4.2), (6.4.3) and (6.4.4), this sum is bounded by 0.40.)

On the other hand, arguing along the same lines as in Case 2, the number of graphs $G$ satisfying (ii), for a given $k_1, k_2 > 4$ such that $k_1 + k_2 = k + 2$, is bounded by
\[
\binom{k}{k_1,\,k_2-2}\binom{k_1}{2}2!\,2\prod_{j=1}^{2}\sum_{i=0}^{2}\binom{2}{i}\frac{I(k_j,i)}{\binom{k_j}{i}}.
\]
By the inductive hypothesis, this is bounded by
\[
A\zeta^kk!\,k^k\cdot2\cdot4^2A\zeta^2\,\frac{k_1^{k_1+2}k_2^{k_2+2}}{k^k}.
\]
Arguing as in Case 2, we find that this expression is maximized when $k_2 = 5$ and $k_1 = k - 3$. Hence, summing over the at most $k/2$ choices for $k_1, k_2$, the number of graphs $G$ satisfying (ii), divided by $A\zeta^kk!\,k^{k+2}$, is at most
\[
\gamma_4 = \frac{1}{8^2}\,\frac{A\zeta^2\,4^25^7}{e^3} < 0.04. \tag{6.4.7}
\]
Altogether, by (6.4.6) and (6.4.7), we conclude that
\[
\frac{C^1(k)}{A\zeta^kk!\,k^{k+2}} \le \gamma_1+2\gamma_2+3\gamma_3+\gamma_4 < 0.44 < 1, \tag{6.4.8}
\]
completing the proof of Case 3.

Case 4 ($i = 0$ and $\ell = 2$). This case is nearly identical to Case 3. By Lemma 6.3.6, in the last step of a clique process for a graph $G$ that contributes to $C^2(k)$, either (i) three graphs that form exactly one triangle are merged whose excesses sum to 2, or else (ii) two graphs that share exactly two vertices are merged whose excesses sum to 1. Hence, by the arguments in Case 3 leading to (6.4.8), we find that
\[
\frac{C^2(k)}{A\zeta^kk!\,k^{k+4}} \le \gamma_1+3\gamma_2+6\gamma_3+2\gamma_4 < 0.66 < 1, \tag{6.4.9}
\]
as required.

Case 5 ($i = 0$ and $\ell = 3$). Since $\ell = 3$, it is now possible that in the last step of a clique process for a graph $G$ contributing to $C^\ell(k)$, two graphs are merged that share three vertices. Apart from this difference, the argument is completely analogous to the previous cases.

If $G$ contributes to $C^3(k)$, then by Lemma 6.3.6, in the last step of a clique process for $G$, there are three cases to consider:

(i) Three percolating subgraphs $G_j$, $j \in \{1, 2, 3\}$, are merged which form exactly one triangle $T = \{v_1, v_2, v_3\}$, such that for some $i_j \le 2$ and $k_j, \ell_j \ge 0$ with $\sum k_j = k+3$ and $\sum \ell_j = 3$, we have that $G_j$ contributes to $I^{\ell_j}(k_j, i_j)$. If any $i_j > 0$, the corresponding $i_j$ vertices of $G_j$ of degree 2 belong to $T$.

(ii) Two percolating subgraphs $G_j$, $j \in \{1, 2\}$, are merged that share exactly two vertices $S = \{v_1, v_2\}$, such that for some $i_j \le 2$ and $k_j, \ell_j \ge 0$ with $\sum k_j = k + 2$ and $\sum \ell_j = 2$, we have that the $G_j$ contribute to $I^{\ell_j}(k_j, i_j)$. If any $i_j > 0$, the corresponding $i_j$ vertices of $G_j$ of degree 2 belong to $S$.

(iii) Two percolating subgraphs $G_j$, $j \in \{1, 2\}$, are merged that share exactly three vertices $R = \{v_1, v_2, v_3\}$, such that for some $i_j \le 3$ and $k_j$ with $\sum k_j = k + 3$, we have that the $G_j$ contribute to $I(k_j, i_j)$. If any $i_j > 0$, the corresponding $i_j$ vertices of $G_j$ of degree 2 belong to $R$.

As in Case 4, we find by the arguments in Case 3 leading to (6.4.8) that the number of graphs $G$ satisfying (i) or (ii), divided by $A\zeta^kk!\,k^{k+6}$, is bounded by
\[
\gamma_1 + 4\gamma_2 + 10\gamma_3 + 3\gamma_4 < 0.89. \tag{6.4.10}
\]
On the other hand, by the arguments in Case 3 leading to (6.4.7), the number of graphs $G$ satisfying (iii), for a given $k_1, k_2 > 4$ such that $k_1 + k_2 = k + 3$, is bounded by
\[
\binom{k}{k_1,\,k_2-3}\binom{k_1}{3}3!\,2\prod_{j=1}^{2}\sum_{i=0}^{3}\binom{3}{i}\frac{I(k_j,i)}{\binom{k_j}{i}}.
\]
By the inductive hypothesis, this is bounded by
\[
A\zeta^kk!\,k^k\cdot3!\,8^2A\zeta^3\,\frac{k_1^{k_1+3}k_2^{k_2+3}}{k^k}.
\]
This expression is maximized when $k_2 = 5$ and $k_1 = k - 2$. Hence, summing over the at most $k/2$ choices for $k_1, k_2$, the number of graphs $G$ satisfying (iii), divided by $A\zeta^kk!\,k^{k+6}$, is at most
\[
\gamma_5 = \frac{1}{8^4}\,\frac{A\zeta^3\,3!\,5^88^2}{2e^2} < 0.08. \tag{6.4.11}
\]
Therefore, by (6.4.10) and (6.4.11), we have that
\[
\frac{C^3(k)}{A\zeta^kk!\,k^{k+6}} \le \gamma_1+4\gamma_2+10\gamma_3+3\gamma_4+\gamma_5 < 0.97 < 1,
\]
completing the proof of Case 5.

This last case completes the induction. We conclude that (6.4.1) holds for all $k > 4$, $\ell \le 3$ and relevant $i$. As discussed, Lemma 6.4.7 follows. □

6.4.2 Sharper estimates

In this section, using Lemma 6.4.7, we obtain upper bounds for $I^\ell_q(k,i)$, which improve on those for $I^\ell(k,i)$ given by Lemma 6.4.7, especially when $q$ is significantly smaller than $k$. These are used in Section 6.6 to rule out the existence of large percolating subgraphs of $G_{n,p}$ with few vertices of degree 2 and small 3-cores.

Lemma 6.4.8. Let $\varepsilon > 0$. For some constant $\vartheta(\varepsilon) \ge 1$, the following holds. For all $k \ge 2$, $\ell \le 3$, and relevant $q, i$, we have that
\[
I^\ell_q(k,i) \le \vartheta\,\psi_\varepsilon(q/k)^k\,k!\,k^{k+2\ell+i}
\]
where
\[
\psi_\varepsilon(y) = \max\big\{3/(2e)+\varepsilon,\ (e/2)^{1-2y}y^2\big\}.
\]

This lemma is only useful for $\varepsilon < 1/(2e)$, as otherwise $\psi_\varepsilon(y) \ge 2/e$ for all $y$, and so Lemma 6.4.7 gives a better bound. Note that, for any $\varepsilon < 1/(2e)$, we have that $\psi_\varepsilon(y)$ is non-decreasing and $\psi_\varepsilon(y) \to 2/e$ as $y \uparrow 1$, in agreement with Lemma 6.4.7. Moreover, $\psi_\varepsilon(y) = 3/(2e)+\varepsilon$ for $y \le y_*$ and $\psi_\varepsilon(y) = (e/2)^{1-2y}y^2$ for $y > y_*$, where $y_* = y_*(\varepsilon)$ satisfies
\[
(e/2)^{1-2y_*}y_*^2 = 3/(2e)+\varepsilon. \tag{6.4.12}
\]
We define $\hat y = y_*(0) \approx 0.819$, and note that $y_*(\varepsilon) \downarrow \hat y$ as $\varepsilon \downarrow 0$.

The general scheme of the proof is as follows: First, we note that the case $i = k - q$ follows easily by Lemma 6.4.7, since $I^\ell_q(k, k-q)$ is equal to $\binom{k}{k-q}\binom{q}{2}^{k-q}C^\ell(q)$. We establish the remaining cases by induction, noting that if a graph $G$ contributes to $I^\ell_q(k,i)$ and $i < k - q$, then there is a vertex $v \in V(G)$ of degree 2 with a neighbour not in the 3-core $C \subset G$. Therefore, either (i) some neighbour of $v$ is of degree 2 in the subgraph of $G$ induced by $V(G) - \{v\}$, or else (ii) there are vertices $u \ne w \in V(G)$ of degree 2 with a common neighbour not in $C$. This observation leads to an improved bound (when $q < k$) for $I^\ell_q(k,i)$ compared with that for $I^\ell(k,i)$ given by Lemma 6.4.7.

Proof. Let $\varepsilon > 0$ be given. We may assume that $\varepsilon < 1/(2e)$, as otherwise the statement of the lemma follows by Lemma 6.4.7, noting that for any $q$, $I^\ell_q(k,i) \le I^\ell(k,i)$. We claim that, for some $\vartheta(\varepsilon) \ge 1$ (to be determined below), for all $k \ge 2$, $\ell \le 3$ and relevant $q, i$, we have that
\[
I^\ell_q(k,i) \le \vartheta\binom{k}{i}\psi_\varepsilon(q/k)^kk!\,k^{k+2\ell}. \tag{6.4.13}
\]

Case 1 ($i = k - q$). We first observe that Lemma 6.4.7 implies the case $i = k - q$. Indeed, if $q = k$, in which case $i = 0$, then (6.4.13) follows immediately by Lemma 6.4.7, noting that $I^\ell_k(k,0) = C^\ell(k)$ and $\psi_\varepsilon(1) = 2/e$. On the other hand, if $i = k - q > 0$ then
\[
I^\ell_q(k,k-q) = \binom{k}{k-q}\binom{q}{2}^{k-q}C^\ell(q),
\]
since all $k - q$ vertices of degree 2 in a graph that contributes to $I^\ell_q(k, k-q)$ are connected to 2 vertices in its 3-core of size $q$. We claim that the right hand side is bounded by
\[
\binom{k}{k-q}(e/2)^{k-2q}(q/k)^{2k}k!\,k^{k+2\ell}.
\]
Since $(e/2)^{k-2q}(q/k)^{2k} \le \psi_\varepsilon(q/k)^k$, (6.4.13) follows. To see this, note that by Lemma 6.4.7, we have that
\[
\frac{\binom{q}{2}^{k-q}C^\ell(q)}{(e/2)^{k-2q}(q/k)^{2k}k!\,k^{k+2\ell}} \le \Big(\frac{q}{k}\Big)^{2\ell}\frac{q!\,(q/e)^q}{(k/e)^kk!} \le \frac{q!\,(q/e)^q}{(k/e)^kk!}.
\]
By the inequalities $1 \le i!/(\sqrt{2\pi i}\,(i/e)^i) \le e^{1/(12i)}$, it is easy to verify that the right hand side above is bounded by 1, for all relevant $q \le k$. Hence (6.4.13) holds also in the case $i = k - q > 0$.

Case 2 ($i < k - q$).
Fix some $k_\varepsilon \ge 1/(1-y_*)^2$ such that, for all $k \ge k_\varepsilon$ and relevant $q$, we have that
\[
1+\frac{2}{k-1}\Big(\frac{k-2}{k-1}\Big)^k\frac{\psi_\varepsilon(q/(k-2))^{k-2}}{\psi_\varepsilon(q/(k-1))^{k-1}} = 1+O(1/k) \le 1+\delta,
\]
where
\[
\delta = \delta(\varepsilon) = \min\Big\{1-\frac{3/(2e)}{3/(2e)+\varepsilon},\ 1-\frac{3(1-y_*)}{y_*^2}\Big\}.
\]
Note that, since $3(1-y)/y^2 < 1$ for all $y > (\sqrt{21}-3)/2 \approx 0.791$, and recalling (see (6.4.12)) that $y_* > \hat y \approx 0.819$, we have that $\delta > 0$.

Select $\vartheta(\varepsilon) \ge 1$ so that (6.4.13) holds for all $k \le k_\varepsilon$ and relevant $q, \ell, i$. By Case 1 and since $\vartheta \ge 1$, we have that (6.4.13) holds for all $k, q$ in the case that $i = k - q$. We establish the remaining cases $i < k - q$ by induction: Assume that for some $k > k_\varepsilon$, (6.4.13) holds for all $k' < k$ and relevant $q, \ell, i$.

In any graph $G$ contributing to $I^\ell_q(k,i)$, where $i < k - q$, there is some vertex of degree 2 with at least one of its two neighbours not in the 3-core of $G$. There are two cases to consider: either

(i) there is a vertex $v$ of degree 2 such that at least one of its two neighbours is of degree 2 in the subgraph of $G$ induced by $V(G) - \{v\}$, or else,

(ii) there is no such vertex $v$, however there are vertices $u \ne w$ of degree 2 with a common neighbour that is not in the 3-core of $G$.

Note that, in case (i), removing $v$ results in a graph with $j \in \{i, i+1\}$ vertices of degree 2. On the other hand, in case (ii), removing $u$ and $w$ results in a graph with $j \in \{i-2, i-1, i\}$ vertices of degree 2. By considering the vertices $v$ or $u, w$ as above with minimal labels, we see that, for $i < k - q$, $I^\ell_q(k,i)/\binom{k}{i}$ is bounded by
\[
\frac{I^\ell_q(k-1,i+1)}{\binom{k-1}{i+1}}\binom{k-i-q}{2} + \frac{I^\ell_q(k-1,i)}{\binom{k-1}{i}}(k-i-q)(k-i) + (k-i-q)(k-i)^2\sum_{j=0}^{2}\frac{I^\ell_q(k-2,\,i-2+j)}{\binom{k-2}{i-2+j}}.
\]
Applying the inductive hypothesis, it follows that
\[
\frac{I^\ell_q(k,i)}{\vartheta\binom{k}{i}\psi_\varepsilon(q/k)^kk!\,k^{k+2\ell}} \le \Psi_\varepsilon(q,k)\Big[1+\frac{2}{k-1}\Big(\frac{k-2}{k-1}\Big)^k\frac{\psi_\varepsilon(q/(k-2))^{k-2}}{\psi_\varepsilon(q/(k-1))^{k-1}}\Big]
\]
where
\[
\Psi_\varepsilon(q,k) = \frac{3}{2}\,\frac{k-q}{k}\Big(\frac{k-1}{k}\Big)^k\frac{\psi_\varepsilon(q/(k-1))^{k-1}}{\psi_\varepsilon(q/k)^k}.
\]
By the choice of $k_\varepsilon$, and since $k \ge k_\varepsilon$, we have that
\[
\frac{I^\ell_q(k,i)}{\vartheta\binom{k}{i}\psi_\varepsilon(q/k)^kk!\,k^{k+2\ell}} \le \Psi_\varepsilon(q,k)(1+\delta). \tag{6.4.14}
\]
Next, we show that $\Psi_\varepsilon(q,k) < 1-\delta$, completing the induction. To this end, we take cases with respect to whether (i) $q/(k-1) \le y_*$, (ii) $y_* \le q/k$, or (iii) $q/k < y_* < q/(k-1)$.

Case 2(i) ($q/(k-1) \le y_*$). In this case $\psi_\varepsilon(q/m) = 3/(2e)+\varepsilon$, for each $m \in \{k-1, k\}$. It follows, by the choice of $\delta$, that
\[
\Psi_\varepsilon(q,k) \le \Big(\frac{k-1}{k}\Big)^k\frac{3/2}{3/(2e)+\varepsilon} \le \frac{3/(2e)}{3/(2e)+\varepsilon} < 1-\delta,
\]
as required.

Case 2(ii) ($y_* \le q/k$). In this case $\psi_\varepsilon(q/m)^m = (e/2)^{m-2q}(q/m)^{2m}$, for each $m \in \{k-1, k\}$. Hence
\[
\Psi_\varepsilon(q,k) = \frac{3}{e}\Big(\frac{k}{k-1}\Big)^{k-1}\frac{(k-q)(k-1)}{q^2} \le \frac{3(1-y)}{y^2},
\]
where $y = q/k$. Since the right hand side is decreasing in $y$, we find, by the choice of $\delta$, that
\[
\Psi_\varepsilon(q,k) \le \frac{3(1-y_*)}{y_*^2} < 1-\delta.
\]

Case 2(iii) ($q/k < y_* < q/(k-1)$). In this case, $\psi_\varepsilon(q/k) = 3/(2e)+\varepsilon$ and
\[
\psi_\varepsilon(q/(k-1))^{k-1} = (e/2)^{k-1-2q}(q/(k-1))^{2(k-1)}.
\]
Hence
\[
\Psi_\varepsilon(q,k) = \frac{3}{e}\Big(\frac{k}{k-1}\Big)^{k-1}\frac{(k-q)(k-1)}{q^2}\,\frac{(e/2)^{k-2q}(q/k)^{2k}}{(3/(2e)+\varepsilon)^k}.
\]
As in the previous case, we consider the quantity $y = q/k$. The above expression is bounded by
\[
\frac{3(1-y)}{y^2}\Big(\frac{(e/2)^{1-2y}y^2}{3/(2e)+\varepsilon}\Big)^k.
\]
We claim that this expression is increasing in $y \le y_*$. By (6.4.12) and the choice of $\delta$, it follows that
\[
\Psi_\varepsilon(q,k) \le \frac{3(1-y_*)}{y_*^2} < 1-\delta,
\]
as required. To establish the claim, simply note that
\[
\frac{\partial}{\partial y}\frac{1-y}{y^2}\big((2/e)^yy\big)^{2k} = \frac{1}{y^3}\big((2/e)^yy\big)^{2k}\big(2(1-y)(1+y\log(2/e))k+y-2\big) > \frac{2}{y^3}\big((2/e)^yy\big)^{2k}\big((1-y)^2k-1\big) \ge 0
\]
for all $y \le y_*$, since $k \ge k_\varepsilon \ge 1/(1-y_*)^2$.

Altogether, we conclude that $\Psi_\varepsilon(q,k) \le 1-\delta$, for all relevant $q$. By (6.4.14), it follows that
\[
I^\ell_q(k,i) \le (1-\delta^2)\vartheta\binom{k}{i}\psi_\varepsilon(q/k)^kk!\,k^{k+2\ell} < \vartheta\binom{k}{i}\psi_\varepsilon(q/k)^kk!\,k^{k+2\ell},
\]
completing the induction. We conclude that (6.4.13) holds for $k \ge 2$, $\ell \le 3$ and relevant $q, i$. Since $\binom{k}{i} \le k^i$, the lemma follows. □
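As a numerical aside (not needed for any of the proofs), the constant $\hat y = y_*(0)$ in (6.4.12) is easy to locate by bisection, since $(e/2)^{1-2y}y^2$ is increasing on $(0,1]$. The following short Python sketch recovers $\hat y \approx 0.819$.

```python
from math import e

def psi_gap(y):
    # (e/2)^(1-2y) * y^2 - 3/(2e); its zero on (1/2, 1) is y_hat, cf. (6.4.12) with eps = 0
    return (e / 2) ** (1 - 2 * y) * y * y - 3 / (2 * e)

lo, hi = 0.5, 1.0          # psi_gap(0.5) < 0 < psi_gap(1.0)
for _ in range(60):
    mid = (lo + hi) / 2
    if psi_gap(mid) < 0:
        lo = mid
    else:
        hi = mid
print(round((lo + hi) / 2, 3))  # 0.819, matching the value of y_hat quoted above
```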
6.5 Percolating subgraphs with small cores

With Lemmas 6.2.4, 6.4.7 and 6.4.8 at hand, we begin to analyze percolating subgraphs of $G_{n,p}$. In this section, we show that for sub-critical $p$, with high probability $G_{n,p}$ has no percolating subgraphs larger than $(\beta_*+o(1))\log n$ with a small 3-core. The non-existence of large percolating 3-cores is verified in the next Section 6.6, completing the proof of Proposition 6.2.3. More specifically, we prove the following result.

Proposition 6.5.1. Let $p = \sqrt{\alpha/(n\log n)}$, for some $\alpha \in (0, 1/3)$. Then, for any $\delta > 0$, with high probability $G_{n,p}$ has no irreducible percolating subgraph $G$ of size $k = \beta\log n$ with a 3-core $C \subset G$ of size $q \le (3/2)\log n$, for any $\beta \ge \beta_*+\delta$.

Recall that (as discussed in Section 6.4.1), in statements such as this proposition, we mean also to include the possibility that $q = 2$ (corresponding to a seed graph $G$) when we say that the 3-core of a graph $G$ is of size less than $q > 2$.

First, we justify the definition of $\beta_*$ in Theorem 6.2.2.

Lemma 6.5.2. Fix $\alpha \in (0, 1/3)$. For $\beta > 0$, let
\[
\mu(\alpha,\beta) = 3/2+\beta\log(\alpha\beta)-\alpha\beta^2/2.
\]
The function $\mu(\alpha,\beta)$ is decreasing in $\beta$, with a unique zero $\beta_*(\alpha) \in (0, 3)$. In particular, for $\alpha \in (0, 1/3)$, we have that $\beta_* \le 1/\alpha$.

Proof. Differentiating $\mu(\alpha,\beta)$ with respect to $\beta$, we obtain $1+\log(\alpha\beta)-\alpha\beta$. Since $\log x < x-1$ for all positive $x \ne 1$, we find that $\mu(\alpha,\beta)$ is decreasing in $\beta$. Moreover, since $\alpha < 1/3$, we have that $\mu(\alpha,3) < (3/2)(3\alpha-1) < 0$. The result follows, noting that $\mu(\alpha,\beta) \to 3/2 > 0$ as $\beta \downarrow 0$. □
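Since $\mu(\alpha,\cdot)$ is decreasing with a unique zero, $\beta_*(\alpha)$ is straightforward to evaluate numerically. The sketch below does so by bisection; the choice $\alpha = 1/4$ is an arbitrary illustration and plays no role elsewhere in the argument.

```python
from math import log

def mu(alpha, beta):
    # mu(alpha, beta) = 3/2 + beta*log(alpha*beta) - alpha*beta^2/2, as in Lemma 6.5.2
    return 1.5 + beta * log(alpha * beta) - alpha * beta * beta / 2

def beta_star(alpha, tol=1e-12):
    lo, hi = 1e-9, 3.0              # mu -> 3/2 as beta -> 0, and mu(alpha, 3) < 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mu(alpha, mid) > 0:      # mu is decreasing, so its zero lies above mid
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(beta_star(0.25), 3))    # ~0.982, comfortably below 3 and below 1/alpha = 4
```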
Recall that the bounds in Sections 6.4.1 and 6.4.2 apply only to graphs with an excess of $\ell \le 3$ edges. The following observation is useful for dealing with graphs with a larger excess.

Lemma 6.5.3. Let $p = \sqrt{\alpha/(n\log n)}$, for some $\alpha \in (0, 1/3)$. Then with high probability $G_{n,p}$ contains no subgraph of size $k = \beta\log n$ with an excess of $\ell$ edges, for any $\beta \in (0, 2]$ and $\ell > 3$, or any $\beta \in (0, 9]$ and $\ell > 27$.

Proof. The expected number of subgraphs of size $k = \beta\log n$ in $G_{n,p}$ with an excess of $\ell$ edges is bounded by
\[
\binom{n}{k}\binom{\binom{k}{2}}{2k-3+\ell}p^{2k-3+\ell} \le \Big(\frac{e^3}{16}knp^2\Big)^k\Big(\frac{e}{4}kp\Big)^{\ell-3} \le n^{\nu}\log^\ell n
\]
where
\[
\nu(\beta,\ell) = -(\ell-3)/2+\beta\log(\alpha\beta e^3/16).
\]
Note that $\nu$ is convex in $\beta$ and $\nu(\beta,\ell) \to -(\ell-3)/2$ as $\beta \downarrow 0$. Note also that
\[
2\log(2/3\cdot e^3/16) \approx -0.356 < 0
\quad\text{and}\quad
9\log(9/3\cdot e^3/16) \approx 11.934 < 12.
\]
Therefore, since $\alpha < 1/3$, $\nu(2,\ell) < -(\ell-3)/2$ and $\nu(9,\ell) < -(\ell-27)/2$. Hence, the first claim follows by summing over all $k \le 2\log n$ and $\ell > 3$. The second claim follows, summing over all $k \le 9\log n$ and $\ell > 27$. □

Definition 6.5.4. Let $E(q,k)$ denote the expected number of irreducible percolating 3-cores $C \subset G_{n,p}$ of size $q$ (or seed edges, if $q = 2$), such that $|\langle C,G_{n,p}\rangle_2| \ge k$.

Combining Lemmas 6.2.4, 6.4.7 and 6.5.3, we obtain the following estimate. Recall $\beta_\varepsilon, k_\alpha, q_\alpha$ as defined in Lemma 6.2.4, and $\mu$ as defined in Lemma 6.5.2.

Lemma 6.5.5. Let $p = \sqrt{\alpha/(n\log n)}$, for some $\alpha \in (0, 1/3)$. Let $\varepsilon \in [0, 3\alpha]$ and $\beta \in [\beta_\varepsilon, 1/\alpha]$. Suppose that $q/q_\alpha \to \varepsilon$ and $k/k_\alpha \to \alpha\beta$ as $n \to \infty$. Then $E(q,k) \le n^{\mu_\varepsilon+o(1)}$, where $\mu_\varepsilon(\alpha,\beta) = \mu(\alpha,\beta)$ for $\beta \in [\varepsilon/\alpha, 1/\alpha]$,
\[
\mu_\varepsilon(\alpha,\beta) = \mu(\alpha,\beta)-\beta\log(\alpha\beta)+\frac{\varepsilon}{2\alpha}\log(\varepsilon/e)+\frac{2\alpha\beta-\varepsilon}{2\alpha}\log\Big(\frac{e(\alpha\beta)^2}{2\alpha\beta-\varepsilon}\Big)
\]
for $\beta \in [\beta_\varepsilon, \varepsilon/\alpha]$, and $o(1)$ depends only on $n$.

We note that $\mu_\varepsilon(\alpha, \varepsilon/\alpha) = \mu(\alpha, \varepsilon/\alpha)$, as is easily verified.

Proof. By the proof of Lemma 6.5.3, the expected number of irreducible percolating 3-cores in $G_{n,p}$ of size $q \le (3/2)\log n$ with an excess of $\ell > 3$ edges tends to 0 as $n \to \infty$. Therefore, it suffices to show that, for all $\ell \le 3$, we have that $E_\ell(q,k) \le n^{\mu_\varepsilon+o(1)}$, where $E_\ell(q,k)$ is the expected number of irreducible percolating 3-cores $C \subset G_{n,p}$ of size $q = \varepsilon(2\alpha)^{-1}\log n$ with an excess of $\ell$ edges, such that $|\langle C,G_{n,p}\rangle_2| \ge k = \beta\log n$. For such $\ell$, by Lemmas 6.2.4 and 6.4.7, we find that
\[
E_\ell(q,k) \le \binom{n}{q}C^\ell(q)p^{2q-3+\ell}P(q,k) \le q^{2\ell}p^{\ell-3}\Big(\frac{2}{e}qnp^2\Big)^qP(q,k) \le n^{\nu+o(1)}
\]
where
\[
\nu = 3/2+\varepsilon(2\alpha)^{-1}\log(\varepsilon/e)+\xi_\varepsilon(\alpha,\beta) = \mu_\varepsilon(\alpha,\beta)
\]
(and $P(q,k)$ and $\xi_\varepsilon$ are as in Lemma 6.2.4), as required. □

Having established Lemma 6.5.5, we aim to prove Proposition 6.5.1 by the first moment method. To this end, we first show that for some $\varepsilon_* \in (0, 3\alpha)$, with high probability there are no irreducible percolating 3-cores in $G_{n,p}$ of size $\varepsilon(2\alpha)^{-1}\log n$, for all $\varepsilon \in (\varepsilon_*, 3\alpha]$. Moreover, we establish a slightly more general result that allows for graphs with $i = O(1)$ vertices of degree 2, which is also used in the next Section 6.6.

Lemma 6.5.6. Let $p = \sqrt{\alpha/(n\log n)}$, for some $\alpha \in (0, 1/3)$. Fix some $i_* \ge 0$. Define $\varepsilon_*(\alpha) \in (0, 3\alpha)$ implicitly by $3/2+\varepsilon_*(2\alpha)^{-1}\log(\varepsilon_*/e) = 0$. Then, for any $\eta > 0$, with high probability $G_{n,p}$ has no irreducible percolating subgraph $G$ of size $q = \varepsilon(2\alpha)^{-1}\log n$ with $i$ vertices of degree 2, for any $i \le i_*$ and $\varepsilon \in [\varepsilon_*+\eta, 3\alpha]$.

Proof. By Lemma 6.5.3, it suffices to consider subgraphs $G$ with an excess of $\ell \le 3$ edges. By Lemma 6.4.7, the expected number of such subgraphs is bounded by
\[
\binom{n}{q}p^{2q-3+\ell}I^\ell(q,i) \le q^{2\ell+i}p^{\ell-3}\Big(\frac{2}{e}qnp^2\Big)^q \le n^{\nu+o(1)}
\]
where $\nu(\varepsilon) = 3/2+\varepsilon(2\alpha)^{-1}\log(\varepsilon/e)$. Noting that $\nu$ is decreasing in $\varepsilon < 1$, $\nu(\varepsilon) \to 3/2 > 0$ as $\varepsilon \downarrow 0$, and $\nu(3\alpha) = (3/2)\log(3\alpha) < 0$, the lemma follows. □

Next, we plan to use Lemma 6.5.5 to rule out the remaining cases $\varepsilon \le \varepsilon_*+\eta$ (where $\eta > 0$ is a small constant, to be determined below). In order to apply Lemma 6.5.5, we first verify that for such $\varepsilon$, we have that $\beta_*$ is within the range of $\beta$ specified by Lemma 6.5.5, that is, $\beta_* \ge \beta_\varepsilon$.

Lemma 6.5.7. Fix $\alpha \in (0, 1/3)$. Let $\beta_\varepsilon, \beta_*, \varepsilon_*$ be as defined in Lemmas 6.2.4, 6.5.2 and 6.5.6. Then, for some sufficiently small $\eta(\alpha) > 0$, we have that $\beta_* \ge \beta_\varepsilon$ for all $\varepsilon \in [0, \varepsilon_*+\eta]$.

Proof. By Lemma 6.5.2 and the continuity of $\mu(\alpha,\beta_\varepsilon)$ in $\varepsilon$, it suffices to show that $\mu(\alpha,\beta_\varepsilon) > 0$, for all $\varepsilon \in [0, \varepsilon_*]$. Let $\delta_\varepsilon = 1-\sqrt{1-\varepsilon}$, so that $\beta_\varepsilon = \delta_\varepsilon/\alpha$. Note that
\[
\mu(\alpha,\beta_\varepsilon) = 3/2+(2\alpha)^{-1}\big(2\delta_\varepsilon\log\delta_\varepsilon-\delta_\varepsilon^2\big).
\]
Therefore, by the bound $\log x \le x-1$,
\[
\frac{\partial}{\partial\varepsilon}\mu(\alpha,\beta_\varepsilon) = (2\alpha)^{-1}\Big(1+\frac{\log\delta_\varepsilon}{1-\delta_\varepsilon}\Big) \le 0.
\]
It thus suffices to verify that $\mu(\alpha,\beta_{\varepsilon_*}) > 0$. To this end note that, by the definition of $\varepsilon_*$ (see Lemma 6.5.6),
\[
\mu(\alpha,\beta_{\varepsilon_*}) = (2\alpha)^{-1}\big(2\delta_{\varepsilon_*}\log\delta_{\varepsilon_*}-\delta_{\varepsilon_*}^2-\varepsilon_*\log(\varepsilon_*/e)\big).
\]
By Lemma 6.5.6, we have that $\varepsilon_* = \delta_{\varepsilon_*}(2-\delta_{\varepsilon_*}) \in (0, 1)$, and so $\delta_{\varepsilon_*} \in (0, 1)$. Hence the lemma follows if we show that $\nu(\delta) > 0$ for all $\delta \in (0, 1)$, where
\[
\nu(\delta) = 2\delta\log\delta-\delta^2-\delta(2-\delta)\log(\delta(2-\delta)/e).
\]
Note that
\[
\nu(\delta)/\delta = \delta\log\delta-(2-\delta)\log(2-\delta)+2(1-\delta).
\]
Differentiating this expression with respect to $\delta$, we obtain $\log(\delta(2-\delta)) < 0$, for all $\delta < 1$. Noting that $\nu(1) = 0$, the lemma follows. □

It can be shown that for all sufficiently large $\varepsilon < \varepsilon_*$, we have that $\beta_* < \varepsilon/\alpha$. Therefore, we require the following bound.

Lemma 6.5.8. Fix $\alpha \in (0, 1/3)$. Let $\varepsilon \in [0, 1)$ and $\beta_\varepsilon, \mu_\varepsilon$ be as defined in Lemmas 6.2.4 and 6.5.5. Then $\mu_\varepsilon(\alpha,\beta) \le \mu(\alpha,\beta)$, for all $\beta \in [\beta_\varepsilon, 1/\alpha]$.

Proof. Since $\mu(\alpha,\beta) = \mu_\varepsilon(\alpha,\beta)$ for $\beta \in [\varepsilon/\alpha, 1/\alpha]$, we may assume that $\beta < \varepsilon/\alpha$. Let $\delta = \alpha\beta$. Then
\[
\alpha\big(\mu(\alpha,\beta)-\mu_\varepsilon(\alpha,\beta)\big) = \delta\log\delta-\frac{\varepsilon}{2}\log(\varepsilon/e)-\frac{2\delta-\varepsilon}{2}\log\Big(\frac{e\delta^2}{2\delta-\varepsilon}\Big).
\]
Differentiating this expression with respect to $\delta$, we obtain
\[
\varepsilon/\delta-1-\log\Big(\frac{\delta}{2\delta-\varepsilon}\Big) \le 0,
\]
by the inequality $\log x \ge (x-1)/x$. Since $\mu(\alpha, \varepsilon/\alpha) = \mu_\varepsilon(\alpha, \varepsilon/\alpha)$, the lemma follows. □

Finally, we prove the main result of this section.

Proof of Proposition 6.5.1. Let $\delta > 0$ be given. By Lemma 6.5.2, we may assume without loss of generality that $\beta_*+\delta < 1/\alpha$. If $G_{n,p}$ has an irreducible percolating subgraph $G$ of size $k \ge (\beta_*+\delta)\log n$ with a 3-core of size $q \le (3/2)\log n$, then by Lemma 6.4.2 it has such a subgraph of size $k = \beta\log n$ for some $\beta \in [\beta_*+\delta, 1/\alpha]$. Select $\eta(\alpha) > 0$ as in Lemma 6.5.7. By Lemma 6.5.6, with high probability $G_{n,p}$ has no percolating 3-core of size $q = \varepsilon(2\alpha)^{-1}\log n$, for any $\varepsilon \in [\varepsilon_*+\eta, 3\alpha]$. On the other hand, by the choice of $\eta$, Lemmas 6.5.5, 6.5.7 and 6.5.8 imply that for any $\beta \in [\beta_*, 1/\alpha]$, the expected number of irreducible percolating subgraphs of size $k = \beta\log n$ with a 3-core of size $q \le (\varepsilon_*+\eta)(2\alpha)^{-1}\log n$ is bounded by $n^{\mu+o(1)}$, where $\mu = \mu(\alpha,\beta)$. Hence the result follows by Lemma 6.5.2, summing over the $O(\log^2 n)$ possibilities for $k, q$. □

6.6 No percolating subgraphs with large cores

In the previous Section 6.5, it is shown that for sub-critical $p$, with high probability $G_{n,p}$ has no percolating subgraphs larger than $(\beta_*+o(1))\log n$ with a 3-core smaller than $(3/2)\log n$. In this section, we rule out the existence of larger percolating 3-cores.

Proposition 6.6.1. Let $p = \sqrt{\alpha/(n\log n)}$, for some $\alpha \in (0, 1/3)$. Then with high probability $G_{n,p}$ has no irreducible percolating 3-core $C$ of size $k = \beta\log n$, for any $\beta \in [3/2, 9]$.

Before proving the proposition we observe that it together with Proposition 6.5.1 implies Proposition 6.2.3. As discussed in Section 6.2.2, our main Theorems 6.2.1 and 6.2.2 follow.

Proof of Proposition 6.2.3. Let $\delta > 0$ be given. By Lemma 6.5.2, without loss of generality we may assume that $\beta_*+\delta < 3$. Hence, by Lemmas 6.3.7 and 6.4.3, if $G_{n,p}$ has a percolating subgraph that is larger than $(\beta_*+\delta)\log n$, then with high probability it has some irreducible percolating subgraph $G$ of size $k = \beta\log n$ with a 3-core $C \subset G$ of size $q \le k$ (or a seed edge, if $q = 2$), for some $\beta \in (\beta_*+\delta, 9]$. By Proposition 6.6.1, with high probability $q \le (3/2)\log n$. Hence, by Proposition 6.5.1, with high probability $G_{n,p}$ contains no such subgraph $G$. Therefore, with high probability, all percolating subgraphs of $G_{n,p}$ are of size $k \le (\beta_*+\delta)\log n$. □
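For illustration only (this is a small finite-size experiment, not a check of the asymptotic statements above), one can sample $G_{n,p}$ at the sub-critical density $p = \sqrt{\alpha/(n\log n)}$ and record the largest closure $\langle e, G_{n,p}\rangle_2$ spanned by a single edge $e$. The parameters $n = 500$, $\alpha = 1/4$ and the random seed below are arbitrary choices, far from the asymptotic regime.

```python
import random
from math import log, sqrt

def gnp(n, p, rng):
    """Adjacency sets of an Erdos-Renyi graph G(n, p)."""
    adj = [set() for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def closure_size(seed, adj):
    """Size of the closure of `seed` under the 2-neighbour bootstrap dynamics."""
    active = set(seed)
    count = {}                      # active-neighbour counts of inactive vertices
    queue = list(active)
    while queue:
        u = queue.pop()
        for v in adj[u]:
            if v in active:
                continue
            count[v] = count.get(v, 0) + 1
            if count[v] >= 2:
                active.add(v)
                queue.append(v)
    return len(active)

rng = random.Random(0)
n, alpha = 500, 0.25
p = sqrt(alpha / (n * log(n)))
adj = gnp(n, p, rng)
edges = [(u, v) for u in range(n) for v in adj[u] if u < v]
largest = max((closure_size({u, v}, adj) for u, v in edges), default=2)
print(f"n={n}, edges={len(edges)}, largest seed closure={largest}, log n={log(n):.1f}")
```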
Towards Proposition 6.6.1, we observe that $G_{n,p}$ has no percolating subgraph with a small 3-core and few vertices of degree 2.

Lemma 6.6.2. Let $p = \sqrt{\alpha/(n\log n)}$, for some $\alpha \in (0, 1/3)$. Fix some $i_* \ge 1$. With high probability $G_{n,p}$ has no irreducible percolating subgraph $G$ of size $k \ge (3/2)\log n$ with a 3-core $C \subset G$ of size $q \le (3/2)\log n$ and $i \le i_*$ vertices of degree 2.

This is a straightforward consequence of Lemma 6.4.8, proved by a direct application of the first moment method and elementary calculus.

Proof. By Lemmas 6.4.3 and 6.5.3, we may assume that if $G_{n,p}$ has an irreducible percolating subgraph $G$ of size $k = \beta\log n$ with a 3-core of size $q \le (3/2)\log n$, then $G$ has an excess of $\ell \le 3$ edges. By Proposition 6.5.1 and Lemmas 6.5.2 and 6.5.6, we may further assume that $\beta \in [3/2, 3]$ and $q = yk$, where $y\beta \in [0, 3/2-\varepsilon]$, for some $\varepsilon(\alpha) > 0$. Without loss of generality, we assume that $\varepsilon < 1/(2e)$ and $\log(3/(2e)+\varepsilon) < -1/2$ (which is possible, since $1+2\log(3/(2e)) \approx -0.189 < 0$). By Lemma 6.4.8 and since $\alpha < 1/3$, for some constant $\vartheta(\varepsilon) \ge 1$, the expected number of such subgraphs $G$ is bounded by
\[
\binom{n}{k}p^{2k-3+\ell}I^\ell_q(k,i) \le \vartheta k^{2\ell+i}p^{\ell-3}\big(knp^2\psi_\varepsilon(q/k)\big)^k \le n^{\nu+o(1)} \tag{6.6.1}
\]
where
\[
\nu(\beta,\psi_\varepsilon(y)) = 3/2+\beta\log(\beta/3)+\beta\log\psi_\varepsilon(y)
\]
and $\psi_\varepsilon(y)$ is as in Lemma 6.4.8, that is,
\[
\psi_\varepsilon(y) = \max\big\{3/(2e)+\varepsilon,\ (e/2)^{1-2y}y^2\big\}.
\]
Recall that $\psi_\varepsilon(y) = 3/(2e)+\varepsilon$ for $y \le y_*$ and $\psi_\varepsilon(y) = (e/2)^{1-2y}y^2$ for $y > y_*$, where $y_* = y_*(\varepsilon)$ is as defined by (6.4.12). Moreover, $y_* \downarrow \hat y$ as $\varepsilon \downarrow 0$, where $\hat y \approx 0.819$.

Therefore, to verify that with high probability $G_{n,p}$ has no subgraphs $G$ as in the lemma, we show that $\nu(\beta,\psi_\varepsilon(y)) < -\delta$ for some $\delta > 0$ and all $\beta, y$ as above. Moreover, since $\nu$ is convex in $\beta$, it suffices to consider the extreme points $\beta = 3/2$ and $\beta = \min\{3, 3/(2y)\}$ in the range $y \in [0, 1-\varepsilon']$, where $\varepsilon' = 2\varepsilon/3$.

Since $\psi_\varepsilon(1) = 2/e$, we have that $\nu(3/2,\psi_\varepsilon(1)) = 0$. Hence, for some $\delta_1 > 0$, we have that $\nu(3/2,\psi_\varepsilon(y)) < -\delta_1$ for all $y \in [0, 1-\varepsilon']$. Next, for $\beta = \min\{3, 3/(2y)\}$, we treat the cases (i) $y \in [0, 1/2]$ and $\beta = 3$ and (ii) $y \in [1/2, 1-\varepsilon']$ and $\beta = 3/(2y)$ separately. If $y \le 1/2$, then $\psi_\varepsilon(y) = 3/(2e)+\varepsilon$, in which case, by the choice of $\varepsilon$,
\[
\nu(3,\psi_\varepsilon(y)) = \frac{3}{2}\big(1+2\log(3/(2e)+\varepsilon)\big) < 0.
\]
On the other hand, for $y \ge 1/2$, we need to show that
\[
\nu(3/(2y),\psi_\varepsilon(y)) = \frac{3}{2}\Big(1+\frac{1}{y}\log\Big(\frac{\psi_\varepsilon(y)}{2y}\Big)\Big) < 0.
\]
To this end, we first note that differentiating $\nu(3/(2y), 3/(2e)+\varepsilon)$ twice with respect to $y$, we obtain
\[
\frac{3}{2y^3}\Big(3+2\log\Big(\frac{3/(2e)+\varepsilon}{2y}\Big)\Big) \ge \frac{3}{2}\Big(3+2\log\Big(\frac{3}{4e}\Big)\Big) \approx 0.637 > 0.
\]
Therefore it suffices to consider the extreme points $y = 1/2$ and $y = 1$. Noting that, by the choice of $\varepsilon$, we have that
\[
\nu(3, 3/(2e)+\varepsilon) = \frac{3}{2}\big(1+2\log(3/(2e)+\varepsilon)\big) < 0
\]
and
\[
\nu(3/2, 3/(2e)+\varepsilon) = \frac{3}{2}\Big(1+\log\Big(\frac{3/(2e)+\varepsilon}{2}\Big)\Big) < \frac{3}{2}\big(1+2\log(3/(2e)+\varepsilon)\big) < 0,
\]
it follows that $\nu(3/(2y), 3/(2e)+\varepsilon) < 0$ for all $y \in [1/2, 1]$. Next, we observe that differentiating $\nu(3/(2y), (e/2)^{1-2y}y^2)$ with respect to $y$, we obtain
\[
\frac{3}{2y^2}\big(1-\log(ey/4)\big) \ge 3\log 2 > 0.
\]
Therefore, since $\nu(3/(2y), (e/2)^{1-2y}y^2) \to \nu(3/2,\psi_\varepsilon(1)) = 0$ as $y \uparrow 1$, it follows that $\nu(3/(2y), (e/2)^{1-2y}y^2) < 0$ for all $y \in [1/2, 1-\varepsilon']$. Altogether, there is some $\delta_2 > 0$ so that $\nu(\min\{3, 3/(2y)\},\psi_\varepsilon(y)) < -\delta_2$ for all $y \in [0, 1-\varepsilon']$.

Put $\delta = \min\{\delta_1, \delta_2\}$. We conclude that $\nu(\beta,\psi_\varepsilon(y)) < -\delta$, for all relevant $\beta, y$. The lemma follows by (6.6.1), summing over the $O(\log^2 n)$ choices for $k$ and $q$ and $O(1)$ relevant values $\ell \le 3$ and $i \le i_*$. □

With Lemma 6.6.2 at hand, we turn to Proposition 6.6.1. The general idea is as follows: Suppose that $G_{n,p}$ has an irreducible percolating 3-core $C$ of size $k = \beta\log n$, for some $\beta \in [3/2, 9]$. By Lemma 6.5.3, we can assume that the excess of $C$ is $\ell \le 27$ edges. Hence, in the last step of a clique process for $C$, either 2 or 3 percolating subgraphs are merged that have few vertices of degree 2 (as observed above the proof of Lemma 6.4.7, in Section 6.4.1). Therefore, by Lemma 6.6.2, each of these subgraphs is smaller than $(3/2)\log n$, or else has a 3-core larger than $(3/2)\log n$. In this way, we see that considering a minimal such graph $C$ is the key to proving Proposition 6.6.1. By Lemma 6.5.6, there is some $\beta_1 < 3/2$ so that with high probability $G_{n,p}$ has no percolating subgraph of size $\beta\log n$ with few vertices of degree 2, for all $\beta \in [\beta_1, 3/2]$. Hence such a graph $C$, if it exists, is the result of the unlikely event that 2 or 3 percolating graphs, all of which are smaller than $\beta_1\log n$ and have few vertices of degree 2, are merged to form a percolating 3-core that is larger than $(3/2)\log n$. In other words, "macroscopic" subgraphs are merged to form $C$.
Proof of Proposition 6.6.1. By Lemma 6.5.6, there is some $\beta_1 < 3/2$ so that with high probability $G_{n,p}$ has no percolating subgraph of size $\beta\log n$ with $i$ vertices of degree 2, for any $i \le 15$ and $\beta \in [\beta_1, 3/2]$.

Suppose that $G_{n,p}$ has an irreducible 3-core $C$ of size $k = \beta\log n$ with an excess of $\ell$ edges, for some $\beta \in [3/2, 9]$. By Lemma 6.5.3, we may assume that $\ell \le 27$. Moreover, assume that $C$ is of the minimal size among such subgraphs of $G_{n,p}$. By Lemma 6.3.6, there are two possibilities for the last step of a clique process for $C$:

(i) Three irreducible percolating subgraphs $G_j$, $j \in \{1, 2, 3\}$, are merged which form exactly one triangle $T = \{v_1, v_2, v_3\}$, such that for some $i_j \le 2$ and $k_j, \ell_j \ge 0$ with $\sum k_j = k+3$ and $\sum \ell_j = \ell$, we have that the $G_j$ contribute to $I^{\ell_j}(k_j, i_j)$. If any $i_j > 0$, the corresponding $i_j$ vertices of $G_j$ of degree 2 belong to $T$.

(ii) For some $m \le (\ell+3)/2 \le 15$, two percolating subgraphs $G_j$, $j \in \{1, 2\}$, are merged that share exactly $m$ vertices $S = \{v_1, v_2, \ldots, v_m\}$, such that for some $i_j \le m$ and $k_j, \ell_j \ge 0$ with $\sum k_j = k+m$ and $\sum \ell_j = \ell-(2m-3)$, we have that the $G_j$ contribute to $I^{\ell_j}(k_j, i_j)$. If any $i_j > 0$, the corresponding $i_j$ vertices of $G_j$ of degree 2 belong to $S$.

Moreover, in either case, by the choice of $C$, each $G_j$ is either a seed graph or else has a 3-core smaller than $(3/2)\log n$. Hence, by Lemmas 6.4.3 and 6.5.3, we may assume that each $\ell_j \le 3$. Also, by Lemma 6.6.2 and the choice of $\beta_1$, we may further assume that all $G_j$ are smaller than $\beta_1\log n$.

Case (i). Let $k, k_j, \ell_j$ be as in (i). Let $k_j-(j-1) = \varepsilon_jk$, so that $\sum \varepsilon_j = 1$. Without loss of generality we assume that $k_1 \ge k_2 \ge k_3$. Hence $\varepsilon_1, \varepsilon_2$ satisfy $1/3 \le \varepsilon_1 \le \beta_1/\beta < 1$ and $(1-\varepsilon_1)/2 \le \varepsilon_2 \le \min\{\varepsilon_1, 1-\varepsilon_1\}$. The number of 3-cores $C$ as in (i) for these values $k, k_j, \ell_j$ is bounded by
\[
\binom{k}{k_1,\,k_2-1,\,k_3-2}\binom{k_1}{2}\binom{k_2-1}{1}2!\,3\prod_{j=1}^{3}\sum_{i=0}^{2}\binom{2}{i}\frac{I^{\ell_j}(k_j,i)}{\binom{k_j}{i}}.
\]
Applying Lemma 6.4.7 and the inequality $k! < ek(k/e)^k$ (and recalling $\ell_j \le 3$), this is bounded by
\[
\binom{k}{k-k_1}\binom{k-k_1}{k_3-2}\frac{k^3}{2}\big(8ek^7\big)^3\Big(\frac{2}{e^2}\Big)^{k+3}\prod_{j=1}^{3}k_j^{2k_j}.
\]
By the inequality $\binom{n}{k} < (ne/k)^k$, and noting that
\[
k_j^{2k_j} \le (ek)^{2(j-1)}\big(k_j-(j-1)\big)^{2(k_j-(j-1))},
\]
we see that the above expression is bounded by $(2e^{-2}\eta(\varepsilon_1,\varepsilon_2))^kk^{2k}n^{o(1)}$, where
\[
\eta(\varepsilon_1,\varepsilon_2) = \Big(\frac{e}{1-\varepsilon_1}\Big)^{1-\varepsilon_1}\Big(\frac{(1-\varepsilon_1)e}{\varepsilon_3}\Big)^{\varepsilon_3}\varepsilon_1^{2\varepsilon_1}\varepsilon_2^{2\varepsilon_2}\varepsilon_3^{2\varepsilon_3} = e^{1-\varepsilon_1+\varepsilon_3}(1-\varepsilon_1)^{-\varepsilon_2}\varepsilon_1^{2\varepsilon_1}\varepsilon_2^{2\varepsilon_2}\varepsilon_3^{\varepsilon_3}.
\]
Since there are $O(1)$ choices for $\ell$ and the $\ell_j$, and since $\alpha < 1/3$, the expected number of 3-cores $C$ in $G_{n,p}$ of size $k = \beta\log n$ with $G_j$ of size $k_j$ as in (i) is at most
\[
\binom{n}{k}p^{2k-3}\Big(\frac{2}{e^2}\eta(\varepsilon_1,\varepsilon_2)k^2\Big)^kn^{o(1)} = p^{-3}\Big(\frac{2}{e}\alpha\beta\,\eta(\varepsilon_1,\varepsilon_2)\Big)^kn^{o(1)} \le n^{\nu+o(1)} \tag{6.6.2}
\]
where
\[
\nu(\beta,\varepsilon_1,\varepsilon_2) = \frac{3}{2}+\beta\log\Big(\frac{2}{3e}\beta\,\eta(\varepsilon_1,\varepsilon_2)\Big).
\]
Since there are $O(\log^3 n)$ possibilities for $k$ and the $k_j$, to show that with high probability $G_{n,p}$ has no subgraphs $C$ as in (i), it suffices to show that for some $\delta > 0$, we have that $\nu(\beta,\varepsilon_1,\varepsilon_2) < -\delta$ for all relevant $\beta$ and $\varepsilon_j$. Moreover, since $\nu$ is convex in $\beta$, we can restrict to the extreme points $\beta = 3/2$ and $\beta = 3/(2\varepsilon_1) > \beta_1/\varepsilon_1$. To this end, observe that when $\beta = 3/2$, we have that $\nu < 0$ if and only if $\eta < 1$. Similarly, when $\beta = 3/(2\varepsilon_1)$, $\nu < 0$ if and only if $\eta < \varepsilon_1e^{1-\varepsilon_1}$. Since $\varepsilon_1e^{1-\varepsilon_1} \le 1$ for all relevant $\varepsilon_1$, it suffices to establish the latter claim. To this end, we observe that
\[
\frac{\partial}{\partial\varepsilon_2}\eta(\varepsilon_1,\varepsilon_2) = \eta(\varepsilon_1,\varepsilon_2)\log\Big(\frac{e\varepsilon_2^2}{(1-\varepsilon_1)(1-\varepsilon_1-\varepsilon_2)}\Big) \ge \eta(\varepsilon_1,\varepsilon_2)\log(e/2) > 0
\]
for all relevant $\varepsilon_2 \ge (1-\varepsilon_1)/2$. Therefore, we need only show that
\[
\zeta(\varepsilon_1) = \frac{\eta(\varepsilon_1,\min\{\varepsilon_1,1-\varepsilon_1\})}{\varepsilon_1e^{1-\varepsilon_1}} < 1-\delta
\]
for some $\delta > 0$ and all relevant $\varepsilon_1$.
We treat the cases $\varepsilon_1 \in [1/3, 1/2]$ and $\varepsilon_1 \in [1/2, 1)$ separately.

For $\varepsilon_1 \in [1/3, 1/2]$, we have
\[
\zeta(\varepsilon_1) = \frac{\eta(\varepsilon_1,\varepsilon_1)}{\varepsilon_1e^{1-\varepsilon_1}} = \big(e(1-2\varepsilon_1)\big)^{1-2\varepsilon_1}\frac{\varepsilon_1^{4\varepsilon_1-1}}{(1-\varepsilon_1)^{\varepsilon_1}}.
\]
Hence
\[
\frac{\partial}{\partial\varepsilon_1}\zeta(\varepsilon_1) = \zeta(\varepsilon_1)\Big(\log\Big(\frac{\varepsilon_1^4}{(1-\varepsilon_1)(1-2\varepsilon_1)^2}\Big)+\frac{\varepsilon_1^2+\varepsilon_1-1}{\varepsilon_1(1-\varepsilon_1)}\Big).
\]
The terms $\varepsilon_1^4/((1-\varepsilon_1)(1-2\varepsilon_1)^2)$ and $(\varepsilon_1^2+\varepsilon_1-1)/(\varepsilon_1(1-\varepsilon_1))$ are increasing for $\varepsilon_1 \in [1/3, 1/2]$, as is easily verified. Hence $\zeta(\varepsilon_1)$ is decreasing in $\varepsilon_1$ for $1/3 \le \varepsilon_1 \le x_1 \approx 0.439$ and increasing for $x_1 \le \varepsilon_1 \le 1/2$. Therefore, since $\zeta(1/3) = (e/6)^{1/3} < 1$ and $\zeta(1/2) = 1/\sqrt{2} < 1$, we have that, for some $\delta_1 > 0$, $\zeta(\varepsilon_1) < 1-\delta_1$ for all $\varepsilon_1 \in [1/3, 1/2]$.

Similarly, for $\varepsilon_1 \in [1/2, 1)$, we have
\[
\zeta(\varepsilon_1) = \frac{\eta(\varepsilon_1,1-\varepsilon_1)}{\varepsilon_1e^{1-\varepsilon_1}} = (1-\varepsilon_1)^{1-\varepsilon_1}\varepsilon_1^{2\varepsilon_1-1}.
\]
Hence
\[
\frac{\partial}{\partial\varepsilon_1}\zeta(\varepsilon_1) = \zeta(\varepsilon_1)\Big(\log\Big(\frac{\varepsilon_1^2}{1-\varepsilon_1}\Big)+\frac{\varepsilon_1-1}{\varepsilon_1}\Big).
\]
Since $\varepsilon_1^2/(1-\varepsilon_1)$ and $(\varepsilon_1-1)/\varepsilon_1$ are increasing in $\varepsilon_1 \in [1/2, 1)$, we find that $\zeta(\varepsilon_1)$ is decreasing in $\varepsilon_1$ for $1/2 \le \varepsilon_1 \le x_2 \approx 0.692$ and increasing for $x_2 \le \varepsilon_1 < 1$. Note that $\zeta(1/2) = 1/\sqrt{2} < 1$ and $\zeta(1) = 1$. Hence, for some $\delta_2 > 0$, $\zeta(\varepsilon_1) < 1-\delta_2$ for all $\varepsilon_1 \in [1/2, \beta_1/\beta] \subset [1/2, 1)$.

Setting $\delta' = \min\{\delta_1, \delta_2\}$, we find that $\zeta(\varepsilon_1) < 1-\delta'$ for all relevant $\varepsilon_1$. It follows that, for some $\delta > 0$, we have that $\nu(\beta,\varepsilon_1,\varepsilon_2) < -\delta$, for all relevant $\beta, \varepsilon_1, \varepsilon_2$. Summing over the $O(\log^3 n)$ possibilities for $k, k_j$ and the $O(1)$ possibilities for $\ell, \ell_j$, we conclude by (6.6.2) that with high probability $G_{n,p}$ has no 3-cores $C$ as in (i).

Case (ii). Let $k, k_j, \ell_j, m$ be as in (ii). Let $k_1 = \varepsilon_1k$ and $k_2-m = \varepsilon_2k$, so that $\sum \varepsilon_j = 1$. Without loss of generality we assume that $k_1 \ge k_2$. Hence $\varepsilon_1, \varepsilon_2$ satisfy $1/2 \le \varepsilon_1 \le \beta_1/\beta < 1$ and $\varepsilon_2 = 1-\varepsilon_1$. The number of 3-cores $C$ as in (ii) for these values $k, k_j, \ell_j, m$ is bounded by
\[
\binom{k}{k_2-m}\binom{k_1}{m}m!\,2\prod_{j=1}^{2}\sum_{i=0}^{m}\binom{m}{i}\frac{I^{\ell_j}(k_j,i)}{\binom{k_j}{i}}.
\]
Arguing as in Case (i), by Lemma 6.4.7 and the inequality $k! < ek(k/e)^k$, we see that this is bounded by
\[
\binom{k}{k_2-m}k^mm!\,\big(2^mm!\,ek^7\big)^2\Big(\frac{2}{e^2}\Big)^{k+m}\prod_{j=1}^{2}k_j^{2k_j}.
\]
By the inequality $\binom{n}{k} < (ne/k)^k$, and since
\[
k_2^{2k_2} < (ek)^{2m}(k_2-m)^{2(k_2-m)},
\]
the above expression is bounded by $(2e^{-2}\eta(\varepsilon_1,1-\varepsilon_1))^kk^{2k}n^{o(1)}$, where $\eta$ is as defined in Case (i). Therefore, by the arguments in Case (i), when $\varepsilon_1 \ge 1/2$ and $\varepsilon_2 = 1-\varepsilon_1$, we find that with high probability $G_{n,p}$ has no 3-cores $C$ as in (ii).

The proof is complete. □
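As a closing numerical sanity check (purely illustrative, and relying on the two branch formulas for $\zeta(\varepsilon_1)$ as reconstructed in Case (i) above), one can evaluate both branches on a grid and confirm that they stay strictly below 1, approaching 1 only as $\varepsilon_1 \to 1$.

```python
from math import e

def zeta_low(eps):
    # branch of zeta for eps in [1/3, 1/2], as displayed in Case (i)
    return (e * (1 - 2 * eps)) ** (1 - 2 * eps) * eps ** (4 * eps - 1) / (1 - eps) ** eps

def zeta_high(eps):
    # branch of zeta for eps in [1/2, 1)
    return (1 - eps) ** (1 - eps) * eps ** (2 * eps - 1)

grid_low = [1 / 3 + j * (1 / 6) / 200 for j in range(201)]
grid_high = [1 / 2 + j * 0.49 / 200 for j in range(201)]    # stops at 0.99 < 1
print(max(zeta_low(x) for x in grid_low))    # ~0.77, attained at eps = 1/3, i.e. (e/6)^(1/3)
print(max(zeta_high(x) for x in grid_high))  # < 1, increasing towards 1 as eps -> 1
```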