UBC Theses and Dissertations

Adapted metrics and random walks on graphs Folz, Matthew Bryan 2013


Full Text


Adapted metrics and random walks on graphs

by

Matthew Bryan Folz

B.Sc., Simon Fraser University, 2007
M.Sc., The University of British Columbia, 2009

A thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in The Faculty of Graduate and Postdoctoral Studies (Mathematics)

The University of British Columbia (Vancouver)

August 2013

© Matthew Bryan Folz, 2013

Abstract

This thesis discusses various aspects of continuous-time simple random walks on measure weighted graphs, with a focus on behaviors related to large-scale geometric properties of the underlying graph. In contrast to previous work in this area, the majority of the results presented here are applicable to random walks with unbounded generators. A recurring theme in this research is the use of novel distance functions for graphs known as adapted metrics, which are demonstrated to be a powerful tool for studying random walks on graphs.

Chapter 2 provides an overview of the relevant probabilistic and analytic theory, including multiple constructions of the heat kernel and a brief introduction to the theory of Dirichlet forms. Chapter 3 introduces adapted metrics, which play a central role in the following chapters and are especially useful in understanding random walks with unbounded generators. Chapter 4 discusses heat kernel estimates, presenting an overview of on-diagonal heat kernel estimates for graphs as well as techniques for obtaining off-diagonal estimates of the heat kernel. The off-diagonal estimates were proved by the author in [26]. Chapter 5 introduces metric graphs, a continuous analogue of graphs which possess many desirable analytic properties, and analyzes the problem of constructing a Brownian motion on a metric graph which behaves similarly to a given continuous-time simple random walk. Chapter 6 analyzes recurrence and transience of graphs, and proves an estimate relating adapted volume growth to recurrence of graphs.
Chapter 7 discusses bounds for the bottom of the essential spectrum in terms of geometric inequalities such as volume growth estimates and isoperimetric inequalities; many of these results were proved by the author in [28]. Chapter 8 deals with stochastic completeness of graphs and proves sharp criteria relating volume growth to stochastic completeness. The results in this chapter use the machinery of metric graphs, and were proved by the author in [27].

Preface

This thesis is original, independent work of the author, Matthew Folz. It is based primarily on three research articles written solely by the author: [26], [27], and [28]. At the time of this writing, [26] has been published in the Electronic Journal of Probability, and [27] and [28] have been accepted for publication in Transactions of the American Mathematical Society and Mathematische Zeitschrift, respectively.

The results of [26] are presented in Section 4.2. The publication [27] forms the basis of Chapter 5 and Chapter 8, and the main result of Chapter 6, Theorem 6.2.4, is a straightforward consequence of the techniques developed there. Additionally, Chapter 3 is based on the exposition of adapted metrics given in [27], although we work in a slightly more general setting in this thesis. The results of [28] comprise the majority of Chapter 7, with the exception of Section 7.5.

Table of Contents

Abstract
Preface
Table of Contents
Acknowledgements
Dedication
1 Introduction
2 Probability and Analysis on Graphs
  2.1 Measure weighted graphs
  2.2 Random walks and generators
  2.3 The heat kernel
  2.4 Dirichlet forms on graphs
  2.5 Spectral theory
3 Adapted Metrics
  3.1 Adapted metrics
  3.2 Metric comparisons
  3.3 Volume growth and isoperimetric inequalities
  3.4 Examples
4 Heat Kernel Estimates
  4.1 On-diagonal heat kernel estimates
  4.2 Off-diagonal heat kernel estimates
5 Metric Graphs
  5.1 Brownian motion on metric graphs
  5.2 Computing exit probabilities and hitting times
  5.3 Metrics for metric graphs
  5.4 Synchronizing random walks and Brownian motion
6 Recurrence and Transience
  6.1 Equivalent definitions of recurrence
  6.2 Volume growth and recurrence
  6.3 Examples
7 Spectrum
  7.1 Volume growth and the bottom of the essential spectrum
  7.2 Proof of Theorem 7.1.3 and Theorem 7.1.5
  7.3 Discreteness of the spectrum
  7.4 Examples
  7.5 Lower bounds for the spectrum and isoperimetric inequalities
8 Stochastic Completeness
  8.1 Equivalent definitions of stochastic completeness
  8.2 Volume growth and stochastic completeness
  8.3 Proof of Theorem 8.2.5
  8.4 Examples
9 Conclusion
Bibliography
Appendix A Symmetry of the Heat Kernel

Acknowledgements

My six years of graduate study at the University of British Columbia were years of great growth, both as a mathematician and as an individual, and I will remember them fondly.

My greatest debt of gratitude is to my supervisor, Martin Barlow. Martin was tremendously generous to me throughout my graduate studies at UBC: generous with his time, with interesting research ideas and clever suggestions, and with funding. Most of the problems studied in this thesis were suggested by him, and many of the difficulties I encountered in my research were resolved by ideas of his. The exposition of my papers and thesis was greatly aided by his careful reading. It was truly an honor to have Martin as a supervisor.

Before Martin was my supervisor at UBC, Ed Perkins was, and Ed provided valuable advice throughout my time at UBC, as well as a number of useful comments and suggestions on my second paper. It was a privilege to have David Brydges and Gord Slade on my thesis committee, and they contributed numerous helpful suggestions to my thesis. My third paper was written during a visit to the Statistical Laboratory at the University of Cambridge in early 2012, and Nathanaël Berestycki generously sponsored my visit and suggested several interesting avenues of research.

It was a pleasure to be a part of the UBC Probability Group and the UBC Mathematics Department.
I'd like to thank my officemates and fellow mathematicians for all of the interesting discussions (both mathematical and non-mathematical) we shared over the years: in particular, Roland Bauerschmidt, Cindy Blois, Karel Casteels, Craig Cowan, Hardeep Gill, Tyler Helmuth, David Laferriere, Robert Masson, Terry Soo, and Paul Vrbik.

I also would like to acknowledge the excellent mathematics teachers I had prior to graduate school: individuals who encouraged my love of mathematics, imparted knowledge, provided invaluable advice, and shaped my development as a mathematician. In particular, Rustum Choksi, who gave me my first real exposure to mathematical research, Jason Bell, and Peter Borwein at Simon Fraser University, George Devita at Port Moody Secondary School, and my parents.

Finally, the love and support of my family and friends sustained me during my years of graduate study, and I will be forever grateful for that.

Dedication

To my parents, for everything.

Chapter 1

Introduction

This thesis discusses aspects of random walks and heat flow on infinite graphs. Much of the intuition for the discrete setting is provided by the simple example of the heat equation on $\mathbb{R}^d$, so we begin with a brief summary of the standard theory there. For a more complete treatment of these topics, we refer the interested reader to [24], [60], and [67].

The Laplacian $\Delta$ is the partial differential operator on $\mathbb{R}^d$ given by

$$\Delta := \sum_{i=1}^{d} \frac{\partial^2}{\partial x_i^2}.$$

This operator has many desirable properties from a physical standpoint; for example, up to a multiplicative constant, it is the only second-order differential operator which commutes with translations and rotations.

Associated with this operator is the heat equation

$$\frac{\partial}{\partial t} u(x, t) - \frac{1}{2} \Delta u(x, t) = 0, \qquad (1.1)$$

where $u$ is defined on $U \times [0, \infty)$ for some open subset $U$ of $\mathbb{R}^d$. This partial differential equation models the transfer of heat in a homogeneous isotropic medium.
It is common to impose some initial temperature distribution at $t = 0$ by setting $u(x, 0) = g(x)$ for some reasonably well-behaved function $g$; the resulting initial-value problem is called the Cauchy problem for the heat equation.

The heat kernel $p_t(x, y)$ is the minimal positive fundamental solution to (1.1) on $\mathbb{R}^d$, and is given by

$$p_t(x, y) := \frac{1}{(2\pi t)^{d/2}} \exp\left( -\frac{\|x - y\|^2}{2t} \right).$$

The heat kernel has many uses. The most immediate of these is that it allows us to solve the Cauchy problem for arbitrary initial data $g \in C(\mathbb{R}^d) \cap L^\infty(\mathbb{R}^d)$; the solution is simply given by

$$u(x, t) := \int_{\mathbb{R}^d} p_t(x, y) g(y) \, dy.$$

Energy methods also play an important role in the analysis of this problem. In this setting, the energy form $\mathcal{E}$ is given by

$$\mathcal{E}(u, v) := \frac{1}{2} \int_{\mathbb{R}^d} \nabla u \cdot \nabla v \, dx;$$

this is one of the simplest examples of a Dirichlet form. An integration by parts shows that if $u$ satisfies (1.1) and has sufficient regularity, then

$$\frac{d}{dt} \left( \int_{\mathbb{R}^d} u^2(x, t) \, dx \right) = -2\mathcal{E}(u, u) \le 0,$$

and this quickly leads to a proof that solutions of the Cauchy problem (if they exist) on a bounded open subset of $\mathbb{R}^d$ are unique.

It is no coincidence that the heat kernel $p_t(x, \cdot)$ coincides with the density of a $d$-dimensional normal distribution with mean $x$ and covariance matrix $tI$: this is precisely the transition density of a standard Brownian motion $(B_t)_{t \ge 0}$ on $\mathbb{R}^d$ started at $x$. Indeed, the operator $\frac{1}{2}\Delta$ is the infinitesimal generator of Brownian motion on $\mathbb{R}^d$, and it is a straightforward exercise in stochastic calculus to show that if the Cauchy problem has a solution, it can be represented as $u(x, t) = E^x g(B_t)$. Connections between probability and analysis such as this form the basis of this thesis.

Given a Riemannian manifold $M$, there is a natural generalization of the Laplacian to $M$ given by the Laplace-Beltrami operator $\Delta_M$, and Brownian motion on $M$ is simply defined to be the strong Markov process with infinitesimal generator $\frac{1}{2}\Delta_M$. Properties of the operator $\Delta_M$ (or properties of the heat kernel or the Brownian motion on $M$) encode information about the underlying space.
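Before moving on, the Gaussian heat kernel formulas above are easy to check numerically. The following sketch (not part of the original development; the grid size and the truncation of $\mathbb{R}$ to $[-L, L]$ are arbitrary illustrative choices) verifies, in dimension $d = 1$, that $p_t(x, \cdot)$ integrates to 1 and satisfies the semigroup identity $p_{t+s}(x, z) = \int_{\mathbb{R}} p_t(x, y) p_s(y, z)\,dy$:

```python
import numpy as np

def heat_kernel(t, x, y):
    """One-dimensional Gaussian heat kernel p_t(x, y) on R."""
    return np.exp(-(x - y) ** 2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

# Truncate R to [-L, L] and discretize; both choices are illustrative.
L, n = 20.0, 4001
ys, dy = np.linspace(-L, L, n, retstep=True)

# p_t(x, .) is a probability density, so it integrates to 1.
mass = heat_kernel(1.0, 0.0, ys).sum() * dy

# Semigroup (Chapman-Kolmogorov) identity:
# p_{t+s}(x, z) = \int p_t(x, y) p_s(y, z) dy.
lhs = heat_kernel(1.5, 0.0, 2.0)
rhs = (heat_kernel(1.0, 0.0, ys) * heat_kernel(0.5, ys, 2.0)).sum() * dy
```

The truncation is harmless here because the Gaussian tails beyond $|y| = 20$ are negligible at these time scales.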
For example, Brownian motion is recurrent in $\mathbb{R}^1$, neighborhood-recurrent in $\mathbb{R}^2$, and transient in $\mathbb{R}^d$ for $d \ge 3$. Surprisingly, these behaviors can be deduced from coarse geometric estimates for $\mathbb{R}^d$; namely, that a ball of radius $r$ has a volume of approximately $r^d$, and that the surface measure of a sphere of radius $r$ (which is an optimal set from an isoperimetric perspective) is approximately $r^{d-1}$.

Working in the general setting of Riemannian manifolds allows one to consider various geometric properties such as nonzero curvature and superpolynomial volume growth. These geometric properties make it possible for Brownian motion on a manifold to exhibit behaviors such as explosiveness, which is impossible for Brownian motion in Euclidean space. Understanding the interplay between analytic properties of the operator $\Delta_M$ and geometric estimates on the underlying space, such as volume growth or isoperimetric inequalities, has been the subject of a great deal of research.

The discrete setting of weighted graphs also exhibits a tremendous amount of geometric diversity, and resembles the continuous setting of Riemannian manifolds in many respects. Many of the objects discussed above, such as differential operators, energy forms, and stochastic processes, have natural analogues in the setting of weighted graphs. This thesis is primarily concerned with elucidating the relationship between probability, analysis, and geometry on graphs, and understanding the essential similarities and differences between the discrete and continuous settings.

There are many compelling aspects of the graph setting which do not have good continuous analogues. For example, studying random walks on Cayley graphs of finitely generated groups gives insight into the algebraic and geometric structure of groups.
Random walks in random environments model the motion of particles in disordered media, and interacting random walks have complex dynamics which have proved useful in modelling phenomena ranging from statistical physics to population genetics to voting. Additional motivation to study the discrete setting is provided by the well-known connections between random walks and the theory of electrical networks, the connections between analysis and probability on finite graphs and problems in theoretical computer science, and the use of random walks in models of physical phenomena utilized in mathematical physics.

Chapter 2

Probability and Analysis on Graphs

This chapter provides an introduction to the theory of continuous-time simple random walks on graphs, which will be the central topic of study in this thesis. In it, we develop the probabilistic and analytic tools necessary to understand and prove the original results which will be presented later in this thesis. The results presented in this chapter are well-known to experts in the field, and we make no claim to originality for any of the results in this chapter. However, aspects of the proofs may be original, as we work in a more general setting than much of the existing literature.

2.1 Measure weighted graphs

Let $\Gamma = (G, E)$ be an unoriented graph. We assume that $G$ is finite or countably infinite, and that $\Gamma$ is connected, contains neither loops nor multiple edges, and is locally finite. Given $x, y \in G$, we write $x \sim y$ (and say that $x$ and $y$ are adjacent) if and only if $\{x, y\} \in E$, and we let $\deg(x) := |\{y \in G : x \sim y\}|$ denote the degree of a vertex $x$. A natural distance on $\Gamma$ is given by the graph metric $d$; given $x, y \in G$, $d(x, y)$ is equal to the number of edges in a shortest path between $x$ and $y$. We will sometimes refer to shortest paths as geodesic paths. Given a set $U \subseteq G$, we let $\partial U := \{y \in U : \text{there exists } x \in G \setminus U \text{ with } x \sim y\}$ denote the boundary of $U$.

Some important examples of graphs are as follows:

Example 2.1.1 (Euclidean lattice).
The $d$-dimensional Euclidean lattice is the graph $\Gamma := (\mathbb{Z}^d, E^d)$, where $E^d := \{\{x, y\} : x, y \in \mathbb{Z}^d \text{ and } \|x - y\| = 1\}$ is the set of nearest-neighbor edges; by a minor abuse of notation we will often refer to this graph as $\mathbb{Z}^d$.

In many respects, this graph is a discrete analogue of the $d$-dimensional manifold $\mathbb{R}^d$. In particular, if $B \subseteq \mathbb{Z}^d$ is a ball of radius $r$ with respect to the graph metric, then $B$ consists of approximately $c_1(d) r^d$ points, and $\partial B$ consists of approximately $c_2(d) r^{d-1}$ points.

Example 2.1.2 (Cayley graphs of finitely generated groups). Let $G$ be a finitely generated group, and let $S$ be a set of generators for $G$. We do not assume that $S$ is minimal, but we assume that it does not contain the identity, and that $S$ is symmetric (i.e., $S = S^{-1}$). The Cayley graph associated with the pair $(G, S)$ is the graph with vertex set $G$ and edge set $\{\{g, gs\} : g \in G, s \in S\}$.

Cayley graphs possess excellent symmetry properties. They are regular (the degree of each vertex is the same) and even vertex transitive (given any two vertices in the graph, there exists a graph automorphism which maps one vertex to the other). These regularity properties mean that Cayley graphs are often much easier to analyze than general graphs. Additionally, many strong results have been proved regarding the structure of finitely generated groups as a consequence of research in geometric group theory. For example, Gromov's theorem states that a group has polynomial growth if and only if it has a nilpotent subgroup of finite index, and structural results such as this are very useful in resolving probabilistic and analytic questions pertaining to Cayley graphs. See [75] for more on Cayley graphs.

Example 2.1.3 (Infinite trees). A graph is an infinite tree if it is infinite, connected, and contains no cycles. Fix $d \ge 3$. The $d$-regular tree $T_d$ is the unique (up to isomorphism) $d$-regular infinite tree.

More generally, given an arbitrary function $k : \mathbb{Z}_{\ge 0} \to \mathbb{Z}_{\ge 1}$, we let $T_k$ denote the spherically symmetric infinite tree with branching function $k$. This infinite tree is rooted at $x_0$ and is constructed such that each vertex at a graph distance of $D$ from $x_0$ has exactly $k(D)$ neighbors at a graph distance $D + 1$ from $x_0$; consequently $\deg(x_0) = k(0)$, and if $x \neq x_0$ then $\deg(x) = k(d(x_0, x)) + 1$. The term "spherically symmetric" comes from the fact that if $x, y \in G$ satisfy $d(x_0, x) = d(x_0, y)$, there is a graph automorphism which fixes $x_0$ and maps $x$ to $y$.

Remark 2.1.4. Given $\alpha \in \mathbb{R}$, we define $\mathbb{Z}_{\ge \alpha} := \{x \in \mathbb{Z} : x \ge \alpha\}$ and define $\mathbb{R}_{\ge \alpha}$ analogously. We will also sometimes use the notation $\mathbb{Z}_+ := \mathbb{Z}_{\ge 0}$ for typographic reasons, usually to denote the index set of a sequence of real numbers or random variables.

Example 2.1.5 (Supercritical percolation clusters). A $d$-dimensional supercritical percolation cluster is a random infinite connected subgraph of $(\mathbb{Z}^d, E^d)$ constructed as follows. Fix $d \ge 2$. Given $p \in [0, 1]$, we define i.i.d. Bernoulli random variables $(\eta_{p,e})_{e \in E^d}$ such that $\mathbb{P}(\eta_{p,e} = 1) = p$. Edges $e$ for which $\eta_{p,e} = 1$ are called open edges; we define $E^d_{p,\mathrm{open}} := \{e \in E^d : \eta_{p,e} = 1\}$ to be the set of open edges, and the open cluster $C(x)$ containing $x$ is defined to be the set of vertices which can be connected to $x$ by a (possibly empty) path of open edges. It is a fundamental result in percolation theory that there exists a critical probability $p_c(d) \in (0, 1)$ such that if $p > p_c(d)$, there is an a.s. unique infinite open cluster, which we denote by $C_{p,\infty} = C_{p,\infty}(\omega)$. By a minor abuse of notation we also write $C_{p,\infty}$ for the (random, infinite, connected) subgraph of $(\mathbb{Z}^d, E^d_{p,\mathrm{open}})$ induced by the vertex set $C_{p,\infty}(\omega)$. See [43] for more on percolation theory.

This is a special case of the random conductance model, in which one takes a deterministic graph such as $(\mathbb{Z}^d, E^d)$ and equips it with i.i.d. edge weights according to some probability distribution supported on the nonnegative or positive real line.
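To make the construction in Example 2.1.5 concrete, here is a small simulation sketch (the helper name, the seed, and the restriction of $\mathbb{Z}^2$ to a finite box are our own illustrative choices, not from the thesis). It samples the i.i.d. Bernoulli($p$) edge variables lazily and extracts the open cluster $C(0)$, intersected with the box, by breadth-first search; the known value $p_c(2) = 1/2$ is used to pick one subcritical and one supercritical value of $p$:

```python
import random
from collections import deque

def open_cluster(origin, n, p, rng):
    """Open cluster C(origin) for bond percolation on the box [-n, n]^2.

    Truncating Z^2 to a finite box is a simulation convenience; the set
    returned is C(origin) intersected with the box.
    """
    edge_state = {}  # lazily sampled i.i.d. Bernoulli(p) edge variables

    def is_open(x, y):
        e = (min(x, y), max(x, y))  # canonical form of the edge {x, y}
        if e not in edge_state:
            edge_state[e] = rng.random() < p
        return edge_state[e]

    cluster, queue = {origin}, deque([origin])
    while queue:
        x = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            y = (x[0] + dx, x[1] + dy)
            if max(abs(y[0]), abs(y[1])) <= n and y not in cluster and is_open(x, y):
                cluster.add(y)
                queue.append(y)
    return cluster

rng = random.Random(0)
c_sub = open_cluster((0, 0), 30, 0.25, rng)  # p below p_c(2) = 1/2 (subcritical)
c_sup = open_cluster((0, 0), 30, 0.75, rng)  # p above p_c(2) = 1/2 (supercritical)
```

Typically the subcritical cluster is small while the supercritical one fills much of the box, mirroring the phase transition described above.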
The resulting random environments have been the subject of much study; see [9] for more on this model.

We will consider graphs equipped with edge weights (also known as edge conductances) $(\pi_e)_{e \in E}$; associated with each $(x, y) \in G \times G$ is an edge weight $\pi_{xy} \ge 0$ which is symmetric ($\pi_{xy} = \pi_{yx}$ for $x, y \in G$) and satisfies $\pi_{xy} > 0$ if and only if $\{x, y\} \in E$; in this case, we set $\pi_{xy} := \pi_{\{x,y\}} := \pi_e$. The pair $(\Gamma, \pi)$ is a weighted graph. We will often use edge weights satisfying $\pi_e = 1$ for all $e \in E$; these are called the natural weights.

The edge weights $(\pi_e)_{e \in E}$ induce a measure on $G$ by setting

$$\pi_x := \pi(\{x\}) := \sum_{y \sim x} \pi_{xy},$$

and extending to arbitrary $A \subseteq G$ by $\pi(A) := \sum_{x \in A} \pi_x$. In the case where $(\pi_e)_{e \in E}$ are the natural weights, the associated measure $\pi$ satisfies $\pi_x = \deg(x)$ for $x \in G$, and we denote this measure by $\deg$.

Any function $\theta : G \to \mathbb{R}_{> 0}$ induces a measure on $G$ with full support by setting $\theta(\{x\}) := \theta(x) := \theta_x$ and using countable additivity as above. If $\theta$ is a measure on $(\Gamma, \pi)$ with full support, we call the triple $(\Gamma, \pi, \theta)$ a measure weighted graph.

Given a metric $\rho$, $B_\rho(x, r) := \{y \in G : \rho(x, y) \le r\}$ denotes the closed ball of radius $r$ in this metric, and the volume of this ball with respect to the measure $\theta$ is given by

$$V_\rho(x, r) := \theta(B_\rho(x, r)) = \sum_{y \in B_\rho(x, r)} \theta_y.$$

2.2 Random walks and generators

Let $(\Gamma, \pi, \theta)$ be a measure weighted graph. We begin our discussion of analysis on graphs by defining some function spaces on $(\Gamma, \pi, \theta)$. The simplest of these is the space $C(G) := \mathbb{R}^G$, the set of real-valued functions on $G$. Since the default topology on graphs is the discrete topology, the set of real-valued functions and continuous real-valued functions coincide. We also define $C_c(G) := \{f \in C(G) : f \text{ is finitely supported}\}$; in the discrete topology the compact subsets of $G$ are precisely the finite subsets of $G$.

It is straightforward to define $L^p$ spaces on $(\Gamma, \pi, \theta)$. For $f \in C(G)$ and $p \ge 1$, we define

$$\|f\|_{L^p(\theta)} := \left( \sum_{x \in G} |f(x)|^p \theta_x \right)^{1/p};$$

as usual, $L^p(\theta) := L^p(G, \theta) := \{f \in C(G) : \|f\|_{L^p(\theta)} < \infty\}$. In particular, $L^2(\theta)$ is a (real) Hilbert space, with norm induced by the inner product

$$\langle f, g \rangle_{L^2(\theta)} := \sum_{x \in G} f(x) g(x) \theta_x.$$

Associated with $(\Gamma, \pi, \theta)$ is the operator $L_\theta$, which is defined pointwise on $C(G)$ by

$$(L_\theta f)(x) := \frac{1}{\theta_x} \sum_{y \in G} \pi_{xy} (f(y) - f(x)). \qquad (2.1)$$

The operator $L_\theta$ is often called a discrete Laplace operator or a graph Laplacian. We emphasize the role of $\theta$ because we will sometimes want to compare the operators $L_{\theta_1}$ and $L_{\theta_2}$ for distinct measures $\theta_1$ and $\theta_2$ on $(\Gamma, \pi)$.

Since $\pi_{xy} \neq 0$ if and only if $x \sim y$, we can replace the sum over $y \in G$ in (2.1) with a sum over $y \sim x$. By (2.1), $L_\theta$ clearly maps $C(G)$ to $C(G)$, but it will subsequently be useful to restrict the domain of $L_\theta$ in various ways. Other natural choices for the domain of $L_\theta$ are $\{f \in L^2(\theta) : L_\theta f \in L^2(\theta)\}$ and $C_c(G)$.

The operator $L_\theta$ shares many properties with Laplace operators on continuous spaces, such as the Laplacian on $\mathbb{R}^d$ or the Laplace-Beltrami operator on a Riemannian manifold. Before exploring some of these properties, we define a discrete version of the energy form, which will be explored further in our discussion of Dirichlet forms in Section 2.4.

Define the quadratic form $Q$ by

$$Q(f, g) := \frac{1}{2} \sum_{x \in G} \sum_{y \in G} \pi_{xy} (f(y) - f(x))(g(y) - g(x)),$$

where $f$ and $g$ are such that the above double sum converges absolutely. For example, if $f$ or $g$ is in $C_c(G)$, then the above sum only consists of finitely many nonzero terms.

Applying a discrete integration-by-parts argument to $Q$ yields the following identity:

Theorem 2.2.1 (Discrete Gauss-Green Theorem). If $f \in C(G)$ and $g \in C_c(G)$, then

$$Q(f, g) = -\langle L_\theta f, g \rangle. \qquad (2.2)$$

Proof. Expanding the quadratic form $Q$ gives

$$Q(f, g) = \frac{1}{2} \sum_{x \in G} \sum_{y \in G} \pi_{xy} (f(y) - f(x)) g(y) - \frac{1}{2} \sum_{x \in G} \sum_{y \in G} \pi_{xy} (f(y) - f(x)) g(x)$$
$$= -\sum_{x \in G} \sum_{y \in G} \pi_{xy} (f(y) - f(x)) g(x)$$
$$= -\sum_{x \in G} \left( \frac{1}{\theta_x} \sum_{y \in G} \pi_{xy} (f(y) - f(x)) \right) g(x) \theta_x$$
$$= -\langle L_\theta f, g \rangle.$$

The second equality follows by interchanging the order of summation in one of the sums from the first equality.

Corollary 2.2.2. The operator $L_\theta : C_c(G) \to C(G)$ is negative semidefinite.

Proof. For $f \in C_c(G)$, $\langle L_\theta f, f \rangle = -Q(f, f) \le 0$.

Of course, the Laplacian $\Delta$ on $\mathbb{R}^d$ is a negative-definite operator as well. A more subtle question concerns the self-adjointness of $L_\theta$. For $f, g \in C_c(G)$, Theorem 2.2.1 yields

$$\langle L_\theta f, g \rangle = -Q(f, g) = -Q(g, f) = \langle f, L_\theta g \rangle, \qquad (2.3)$$

so the restriction of $L_\theta$ to $C_c(G)$ is symmetric.

One can show (see [52] or [53]) that in fact $L_\theta : C_c(G) \to C(G)$ is essentially self-adjoint if $(\Gamma, \pi, \theta)$ has the property that whenever $U \subseteq G$ is infinite and connected (with respect to the graph structure), $\theta(U) = \infty$. This is satisfied, for example, if $\inf_{x \in G} \theta_x > 0$. In this case, the operator $L_\theta$ with the "maximal" domain $\mathcal{D}(L_\theta) := \{f \in L^2(\theta) : L_\theta f \in L^2(\theta)\}$ is self-adjoint. For more on self-adjoint extensions of the Laplacian, see [50].

We can also define the weighted adjacency operator $A_\theta$ by

$$(A_\theta f)(x) := \frac{1}{\theta_x} \sum_{y \in G} \pi_{xy} f(y),$$

and the multiplication operator $D_\theta$ by

$$(D_\theta f)(x) := \frac{\pi_x}{\theta_x} f(x),$$

so that $L_\theta = A_\theta - D_\theta$; this decomposition will be of use occasionally.

The operator $L_\theta$ is the infinitesimal generator of a continuous-time simple random walk $(X_t)_{t \ge 0}$ on $(\Gamma, \pi)$. The process $X$ is a strong Markov process with invariant measure (or speed measure) $\theta$, and it moves according to the following dynamics: started at a vertex $x \in G$, it waits an exponentially distributed time with parameter $\pi_x / \theta_x$ and then moves instantaneously to a vertex adjacent to $x$; the probability of jumping to a vertex $y \sim x$ is $P(x, y) = \pi_{xy} / \pi_x$. In particular, the probability of jumping along an edge $e$ incident to a vertex $x$ is proportional to the edge weight $\pi_e$; random walks which have this property are called simple.

On occasion it will be useful to consider a discrete-time simple random walk $(Y_n)_{n \in \mathbb{Z}_+}$. This walk jumps at times $n = 1, 2, \ldots$ with jump probabilities given by $P(x, y) = \pi_{xy} / \pi_x$ as above.
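The jump dynamics just described are easy to simulate on a finite weighted graph. The sketch below (the helper name and the weighted triangle are our own illustrative choices, not from the thesis) waits an exponential time with parameter $\pi_x/\theta_x$ at the current vertex $x$ and then jumps to a neighbor $y \sim x$ with probability $\pi_{xy}/\pi_x$, covering both of the special choices of $\theta$ singled out below:

```python
import random

def simulate_walk(pi, theta, x0, t_max, rng):
    """Continuous-time simple random walk on a finite weighted graph.

    pi[x][y] is the edge weight pi_xy (symmetric) and theta[x] the vertex
    measure; returns the trajectory as a list of (jump_time, vertex).
    """
    t, x, path = 0.0, x0, [(0.0, x0)]
    while True:
        pi_x = sum(pi[x].values())
        # Holding time at x is exponential with parameter pi_x / theta_x.
        t += rng.expovariate(pi_x / theta[x])
        if t > t_max:
            return path
        # Jump to a neighbour y ~ x with probability pi_xy / pi_x.
        nbrs, wts = zip(*pi[x].items())
        x = rng.choices(nbrs, weights=wts)[0]
        path.append((t, x))

# A weighted triangle; the weights are an arbitrary illustrative choice.
pi = {0: {1: 1.0, 2: 2.0}, 1: {0: 1.0, 2: 1.0}, 2: {0: 2.0, 1: 1.0}}
theta = {x: sum(pi[x].values()) for x in pi}  # theta = pi: unit-mean holding times
rng = random.Random(42)
path = simulate_walk(pi, theta, 0, 10.0, rng)
```

Passing `theta = {x: 1.0 for x in pi}` instead would make the holding time at $x$ exponential with parameter $\pi_x$.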
In particular, each continuous-time simple random walk $(X_t)_{t \ge 0}$ may be coupled with a discrete-time simple random walk $(Y_n)_{n \ge 0}$ by defining $Y_n$ to be the value of $X$ after it has jumped $n$ times; in light of this, $Y$ is sometimes called the discrete skeleton of $X$.

We single out two important choices of the measure $\theta$. The first of these is the choice $\theta = \pi$, i.e., the measure induced by the edge weights $(\pi_e)_{e \in E}$. Let $X$ denote the corresponding random walk. In this case, the expected time between consecutive jumps of $X$ is always 1, and does not depend on the location of the random walk; consequently we will refer to $X$ as the constant-speed random walk (CSRW), and the operator $L_\pi$ is often referred to as the combinatorial Laplacian.

The second is the choice $\theta = \mathbf{1}$, where $\mathbf{1}$ is the measure satisfying $\mathbf{1}_x := 1$ for all $x \in G$. Let $X$ denote the corresponding random walk. In this case, started at a vertex $x \in G$, $X$ waits an exponentially distributed time with parameter $\pi_x$ before jumping to a new vertex. As the time between jumps depends on the local geometry of the graph, we call this the variable-speed random walk (VSRW), and the operator $L_{\mathbf{1}}$ is often referred to as the physical Laplacian, owing to the fact that it appears in many discrete mathematical models of physical phenomena.

Perhaps the simplest non-trivial analytic question one can ask about the operator $L_\theta$ is whether it is bounded on $L^p(\theta)$. In general, this depends on the geometry of $\Gamma$ as well as the edge weights $(\pi_e)_{e \in E}$ and the measure $\theta$. We write $\|\cdot\|_{p \to p}$ to denote the operator norm of an operator from $L^p(\theta)$ to $L^p(\theta)$.

Theorem 2.2.3. For $1 \le p \le \infty$, the operator $L_\theta$ is a bounded operator on $L^p(\theta)$ if and only if

$$A(\Gamma, \pi, \theta) := \sup_{x \in G} \frac{\pi_x}{\theta_x} < \infty.$$

Proof. The theorem will follow from the stronger result that

$$A(\Gamma, \pi, \theta) \le \|L_\theta\|_{p \to p} \le 2 A(\Gamma, \pi, \theta).$$

We begin with the upper bound. Recall the decomposition $L_\theta = A_\theta - D_\theta$. It is clear that $\|D_\theta\|_{p \to p} = A(\Gamma, \pi, \theta)$. Using Hölder's inequality (and letting $q$ denote the exponent conjugate to $p$), we have that

$$\|A_\theta f\|^p_{L^p(\theta)} = \sum_{x \in G} \left| \frac{1}{\theta_x} \sum_{y \in G} \pi_{xy} f(y) \right|^p \theta_x \le \sum_{x \in G} \left( \frac{1}{\theta_x} \sum_{y \in G} \pi_{xy} |f(y)|^p \right) \left( \frac{1}{\theta_x} \sum_{y \in G} \pi_{xy} \right)^{p/q} \theta_x$$
$$\le A(\Gamma, \pi, \theta)^{p-1} \sum_{x, y \in G} \pi_{xy} |f(y)|^p = A(\Gamma, \pi, \theta)^{p-1} \sum_{y \in G} |f(y)|^p \pi_y \le A(\Gamma, \pi, \theta)^p \|f\|^p_{L^p(\theta)},$$

which shows that $\|L_\theta\|_{p \to p} \le \|A_\theta\|_{p \to p} + \|D_\theta\|_{p \to p} \le 2 A(\Gamma, \pi, \theta)$.

For the lower bound, define $\delta_y \in C(G)$ for $y \in G$ by

$$\delta_y(x) := \begin{cases} 1 & \text{if } x = y, \\ 0 & \text{otherwise.} \end{cases}$$

For $p \ge 1$, we have that $\|\delta_y\|_{L^p(\theta)} = \theta_y^{1/p}$. If $p = 1$, a direct computation shows that $\|L_\theta \delta_y\|_{L^1(\theta)} = 2\pi_y$, so that

$$\frac{2\pi_y}{\theta_y} = \frac{\|L_\theta \delta_y\|_{L^1(\theta)}}{\|\delta_y\|_{L^1(\theta)}} \le \|L_\theta\|_{1 \to 1},$$

and taking the supremum over $y \in G$ gives $\|L_\theta\|_{1 \to 1} \ge 2 A(\Gamma, \pi, \theta)$ (and in fact this is an equality, in view of the upper bound).

For $1 < p < \infty$, we have that

$$\pi_y = -\sum_{x \in G} (L_\theta \delta_y)(x) \delta_y(x) \theta_x \le \|L_\theta \delta_y\|_{L^p(\theta)} \|\delta_y\|_{L^q(\theta)} \le \|L_\theta\|_{p \to p} \|\delta_y\|_{L^p(\theta)} \|\delta_y\|_{L^q(\theta)} = \|L_\theta\|_{p \to p} \theta_y,$$

and taking the supremum over $y \in G$ gives $\|L_\theta\|_{p \to p} \ge A(\Gamma, \pi, \theta)$.

Finally, if $p = \infty$, we have that

$$\|L_\theta \delta_y\|_{L^\infty(\theta)} \ge |(L_\theta \delta_y)(y)| = \frac{\pi_y}{\theta_y},$$

so that $\|L_\theta\|_{\infty \to \infty} \ge A(\Gamma, \pi, \theta)$, and this completes the proof of the lower bound.

Since $A(\Gamma, \pi, \pi) = 1$, $L_\pi$ is a bounded operator on any weighted graph $(\Gamma, \pi)$. On the other hand, $A(\Gamma, \pi, \mathbf{1}) = \sup_{x \in G} \pi_x$, and consequently the operator $L_{\mathbf{1}}$ is bounded on $L^2(\mathbf{1})$ if and only if the vertex weights $(\pi_x)_{x \in G}$ are bounded. The possibility of unboundedness has substantial implications for the behavior of the VSRW; on graphs where $(\pi_x)_{x \in G}$ is unbounded, we will subsequently demonstrate that the CSRW and VSRW may behave very differently.

One can interpret Theorem 2.2.3 as a condition on the jump times of $(X_t)_{t \ge 0}$. This provides our first link between the analytic properties of $L_\theta$ and the behavior of the associated random walk $X$.

Lemma 2.2.4. Suppose that the operator $L_\theta$ is bounded on $L^2(\theta)$. Then the expected times between jumps of the associated random walk $(X_t)_{t \ge 0}$ are uniformly bounded below.

Proof. Recall that an exponentially distributed random variable with parameter $\lambda$ has expected value $1/\lambda$. By Theorem 2.2.3, we know that boundedness of $L_\theta$ is equivalent to having $A(\Gamma, \pi, \theta) < \infty$.
Started at a vertex $x$, the time it takes $X$ to jump to a new vertex is an exponentially distributed random variable with parameter $\pi_x / \theta_x$, so the expected times between jumps are bounded below by

$$\inf_{x \in G} \frac{\theta_x}{\pi_x} = \left( \sup_{x \in G} \frac{\pi_x}{\theta_x} \right)^{-1} = \frac{1}{A(\Gamma, \pi, \theta)} > 0.$$

An obvious corollary of this is the following:

Corollary 2.2.5. Suppose that the operator $L_\theta$ is bounded on $L^2(\theta)$, and let $(X_t)_{t \ge 0}$ be the associated random walk. Then for any $M > 0$, $X$ jumps only finitely many times on $[0, M]$, almost surely.

Proof. Let $(T_j)_{j \in \mathbb{Z}_+}$ denote the independent random variables describing the times between jumps of $X$. Each $T_j$ is exponentially distributed with parameter $\lambda_j \le A(\Gamma, \pi, \theta) < \infty$, so for each $\varepsilon > 0$, there is a universal constant $C(\varepsilon) > 0$ such that for each $j \in \mathbb{Z}_+$, if $A_j := \{T_j \ge \varepsilon\}$, then $\mathbb{P}(A_j) \ge C(\varepsilon)$. Consequently, the second Borel-Cantelli lemma gives $\mathbb{P}(T_j \ge \varepsilon \text{ i.o.}) = 1$, which implies the desired result.

Understanding when the converse is true, i.e., when it is possible for $X$ to make infinitely many jumps in a finite time interval, leads naturally to the topic of stochastic completeness, which will be discussed in detail in Chapter 8, and which we illustrate with an example:

Example 2.2.6. Let $\Gamma = (\mathbb{Z}_{\ge 0}, E_{\ge 0})$, where $E_{\ge 0} := \{\{n, n+1\} : n \in \mathbb{Z}_{\ge 0}\}$, and set $\pi_{n,n+1} := \pi(\{n, n+1\}) := (n+1)^3$ for $n \in \mathbb{Z}_{\ge 0}$. We let $(X^{CS}_t)_{t \ge 0}$ denote the CSRW started at 0, and $(X^{VS}_t)_{t \ge 0}$ denote the VSRW started at 0.

It is not difficult to show that both $X^{CS}$ and $X^{VS}$ are transient; that is, with positive probability they never return to 0. Let $T^{CS}_N$ and $T^{VS}_N$ denote the first hitting times of the vertex $N$ for the processes $X^{CS}$ and $X^{VS}$, respectively. We let $P^x$ denote the probability law for these processes started at the vertex $x$, and $E^x$ denotes expectation with respect to $P^x$.

Since the expected time between jumps of the CSRW is 1 and it takes at least $N$ jumps to get from 0 to $N$, we get $E^0 T^{CS}_N \ge N$.

On the other hand, by conditioning on the first jump of the VSRW, a straightforward calculation using the strong Markov property shows that for $N \ge 1$,

$$E^0 T^{VS}_N = \sum_{j=0}^{N-1} \frac{j+1}{\pi_{j,j+1}} = \sum_{j=0}^{N-1} \frac{1}{(j+1)^2} \le 2.$$

In particular, since the expected hitting times are uniformly bounded above, there is a positive probability that the VSRW will reach every vertex in a finite amount of time. Consequently, it is possible for the VSRW to have a finite lifetime (this property is called explosiveness or stochastic incompleteness), which is impossible for the CSRW.

In general, the unbounded setting (with regard to the operator $L_\theta$) poses several challenges which are not found in the bounded setting, and additional insights will be required. However, certain interesting behaviors such as explosiveness are only possible in the unbounded setting, and this leads to a richer, more complex theory. Additionally, many of these interesting behaviors may be exhibited by diffusions on manifolds, so in many respects the VSRW is a better discrete analogue of Brownian motion than the CSRW.

Finally, any discussion of graph Laplacians and random walks on graphs would be incomplete without a mention of electrical networks. An electrical network is simply a weighted graph $(\Gamma, \pi)$; typically the edge weights $(\pi_e)_{e \in E}$ are referred to as edge conductances $(c(e))_{e \in E}$. One can imagine electrons moving along this electrical network according to a simple random walk; at a given vertex, the probability of the electron moving along an incident edge $e$ is proportional to the edge conductance $c(e)$. Physical quantities such as current, resistance, and voltage have mathematical definitions in this setting, and mathematical analogues of familiar physical laws such as Kirchhoff's and Ohm's laws hold. These tools have proved very useful for studying random walks on graphs. See [22] and [57] for a detailed discussion.

2.3 The heat kernel

In this section, we discuss several constructions of the heat kernel $p_t(x, y)$ associated with the operator $L_\theta$ (as well as related analytic objects), using the associated random walk, the discrete heat equation, and the heat semigroup $e^{tL_\theta}$.
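The hitting-time bound in Example 2.2.6 above is easy to confirm numerically. The sketch below (the helper name is ours, not from the thesis) evaluates $E^0 T^{VS}_N = \sum_{j=0}^{N-1} (j+1)/\pi_{j,j+1}$ with $\pi_{j,j+1} = (j+1)^3$ and checks that the values stay below 2; indeed the series converges to $\pi^2/6 \approx 1.645$:

```python
def expected_vsrw_hitting_time(N):
    """E^0 T^VS_N for the VSRW on Z_{>=0} with pi_{j,j+1} = (j+1)^3."""
    # Conditioning on the first jump gives sum_{j<N} (j+1) / pi_{j,j+1},
    # which simplifies to sum_{j<N} 1 / (j+1)^2.
    return sum((j + 1) / (j + 1) ** 3 for j in range(N))

times = [expected_vsrw_hitting_time(N) for N in (1, 10, 100, 10 ** 5)]
```

By contrast, the CSRW bound $E^0 T^{CS}_N \ge N$ grows without bound, which is exactly the dichotomy exploited in the example.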
These constructions have direct parallels with constructions of the heat kernel in the setting of manifolds. Heat kernel techniques are a very powerful tool, and Chapter 4 of this thesis is devoted entirely to obtaining pointwise estimates for the heat kernel.

Let (Γ,π,μ) be a measure weighted graph. Our first definition of the heat kernel will be probabilistic in nature. Indeed, the heat kernel p_t(x,y) of the operator L_μ is simply the transition density of the associated continuous-time simple random walk (X_t)_{t≥0} with respect to the measure μ. That is, if P_x denotes the law of the process (X_t)_{t≥0} started at x, and E_x denotes expectation with respect to this measure, then

    p_t(x,y) := (1/μ_y) P_x(X_t = y).  (2.4)

While this definition is simple and intuitive, it glosses over the fact that there is nontrivial analytic and probabilistic machinery used to construct the process (X_t)_{t≥0} in the first place. In particular, one typically begins with a semigroup (P_t)_{t≥0}, and uses the theory of Markov processes to construct the process (X_t)_{t≥0} from the semigroup.

The heat kernel has the following elementary properties:

Theorem 2.3.1. Suppose that x, y ∈ G and t, s ≥ 0. Then

1. (Positivity) p_t(x,y) ≥ 0.
2. (Symmetry) p_t(x,y) = p_t(y,x).
3. (Semigroup property) p_{t+s}(x,y) = Σ_{z∈G} p_t(x,z) p_s(z,y) μ_z.
4. (Sub-Markov property) Σ_{z∈G} p_t(x,z) μ_z ≤ 1.  (2.5)

Proof. Positivity is an immediate consequence of (2.4). The proof of symmetry is surprisingly difficult using the definition (2.4), and we defer it to Appendix A. The semigroup property follows from conditioning on the location of X at time t, and using the Markov property. The sub-Markov property follows from the fact that if U ⊆ G, countable additivity of the probability measure P_x gives

    Σ_{y∈U} p_t(x,y) μ_y = Σ_{y∈U} P_x(X_t = y) = P_x(X_t ∈ U) ≤ 1.  (2.6)

Remark 2.3.2. Even when U = G, the inequality (2.6) may be strict, and this occurs precisely when (X_t)_{t≥0} fails to have infinite lifetime. An example of this was given in Example 2.2.6.

Here are some simple consequences of these properties:

Corollary 2.3.3. The measure μ is a reversible measure for the Markov process (X_t)_{t≥0}.

Proof. For x, y ∈ G, the symmetry property of the heat kernel gives

    μ_x P_x(X_t = y) = μ_x μ_y p_t(x,y) = μ_y μ_x p_t(y,x) = μ_y P_y(X_t = x).

Corollary 2.3.4. For all x_0 ∈ G, p_t(x_0,·) ∈ L²(μ).

Proof. Fix x_0 ∈ G. The symmetry and semigroup properties give

    Σ_{x∈G} |p_t(x_0,x)|² μ_x = Σ_{x∈G} p_t(x_0,x) p_t(x,x_0) μ_x = p_{2t}(x_0,x_0) ≤ 1/μ_{x_0}.

Remark 2.3.5. We will show soon that p_t(x_0,·) ∈ L^p(μ) for all p ≥ 1. But the above calculation is still a very useful one (and will be used subsequently), because it relates the L²(μ)-norm of the heat kernel to the pointwise values of the heat kernel.

We will now give two other constructions of the heat kernel, which will introduce some other useful analytic objects. Let (P_t)_{t≥0} be the heat semigroup, which is defined on L^∞(G) by

    (P_t f)(x) := Σ_{y∈G} p_t(x,y) f(y) μ_y.  (2.7)

In light of (2.4), we have the probabilistic representation (P_t f)(x) = E_x f(X_t). We mentioned in the previous section that L_μ was the infinitesimal generator of the stochastic process (X_t)_{t≥0}; by this, we mean that if I denotes the identity operator and f ∈ D(L_μ), then for each x ∈ G,

    lim_{t↓0} (((P_t − I)f)(x))/t = lim_{t↓0} (E_x f(X_t) − f(x))/t = (L_μ f)(x).

The semigroup (P_t)_{t≥0} may also be constructed using techniques from spectral theory. With the appropriate domain, L_μ is a self-adjoint operator, and consequently the spectral theorem allows us to define the heat semigroup by (e^{tL_μ})_{t≥0}; one can then use this semigroup to construct the strong Markov process (X_t)_{t≥0} using standard techniques (see e.g. [25]). This definition coincides with our earlier definition of (P_t)_{t≥0} via (2.4) and (2.7) (see Theorem 2.4.1 in Section 2.4), and consequently an alternative definition of the heat kernel p_t(x,y) is that it is the integral kernel of the heat semigroup (e^{tL_μ})_{t≥0}.

In general, applying the operator P_t to the function f has the effect of "smoothing out" f.
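This smoothing can be seen numerically. The following sketch (the cycle Z/20Z with the natural weights is an arbitrary illustrative choice) starts from a delta function and checks that the semigroup spreads mass out while the sup norm and the L^p norms only decrease, in line with the contraction estimate of Theorem 2.3.6 below.

```python
import numpy as np

n = 20                                   # cycle Z/nZ with the natural weights
A = np.zeros((n, n))
for k in range(n):
    A[k, (k + 1) % n] = A[k, (k - 1) % n] = 1.0
L = (A - 2.0 * np.eye(n)) / 2.0          # CSRW generator; here mu = pi = 2

# Since mu is constant, L is symmetric and e^{tL} can be diagonalized directly.
w, U = np.linalg.eigh(L)
Pt = lambda t: (U * np.exp(t * w)) @ U.T

f = np.zeros(n); f[0] = 1.0              # a delta function
g1, g4 = Pt(1.0) @ f, Pt(4.0) @ f
assert g1.max() <= 1.0 and g4.max() <= g1.max()      # the peak keeps flattening
for p in (1, 2, 3):
    assert np.linalg.norm(g1, p) <= np.linalg.norm(f, p) + 1e-12  # ||P_t||_{p->p} <= 1
assert np.allclose(g1.sum(), 1.0)        # mass is conserved on this finite graph
```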
This should not be surprising, since in the familiar case of R^d, applying P_t to a function f is equivalent to convolving f with a Gaussian. This is reflected in the following estimate:

Theorem 2.3.6. For all t ≥ 0 and 1 ≤ p ≤ ∞, ‖P_t‖_{p→p} ≤ 1.

Proof. The proof of this result is similar to the proof of Theorem 2.2.3. Fix 1 ≤ p < ∞. If q denotes the exponent conjugate to p and f ∈ L^p(μ), then using the symmetry and sub-Markovian properties of the heat kernel together with Hölder's inequality, we obtain

    ‖P_t f‖^p_{L^p(μ)} = Σ_{x∈G} |Σ_{y∈G} p_t(x,y) f(y) μ_y|^p μ_x
      ≤ Σ_{x∈G} (Σ_{y∈G} p_t(y,x)|f(y)|^p μ_y)(Σ_{y∈G} p_t(x,y) μ_y)^{p/q} μ_x
      ≤ Σ_{x,y∈G} p_t(y,x)|f(y)|^p μ_y μ_x
      ≤ Σ_{y∈G} |f(y)|^p μ_y.

The case p = ∞ follows immediately from the sub-Markovian property of the heat kernel.

This can be used to show the following:

Corollary 2.3.7. For all 1 ≤ p ≤ ∞ and x_0 ∈ G, p_t(x_0,·) ∈ L^p(μ) and L_μ p_t(x_0,·) ∈ L^p(μ).

Proof. Fix 1 ≤ p ≤ ∞ and x_0 ∈ G, and set u_{x_0} := μ_{x_0}^{−1} δ_{x_0} ∈ C_c(G). A direct calculation gives p_t(x_0,·) = P_t u_{x_0}, and the first part follows from Theorem 2.3.6, since C_c(G) ⊆ L^p(μ).

For the second part, note that on C_c(G), a straightforward calculation (see Lemma 4.13 of [74]) gives L_μ P_t = P_t L_μ, so that L_μ p_t(x_0,·) = L_μ P_t u_{x_0} = P_t L_μ u_{x_0}, and since L_μ u_{x_0} ∈ C_c(G) ⊆ L^p(μ), the desired result follows from Theorem 2.3.6.

Finally, we give a construction of the heat kernel using the heat equation on a graph. On (Γ,π,μ), the heat equation is given by

    ∂_t u(x,t) − (L_μ)_x u(x,t) = 0,

where the subscript x on the operator L_μ indicates that the operator is applied to the x argument. We begin by extending the notion of a fundamental solution to the setting of graphs. Given U ⊆ G, we let L^U_μ denote the Dirichlet Laplacian on U, which is defined pointwise by

    (L^U_μ f)(x) := (L_μ f)(x) if x ∈ U,  and  (L^U_μ f)(x) := 0 if x ∈ G ∖ U,

and which has domain D(L^U_μ) := D(L_μ) ∩ {f ∈ L²(μ) : f|_{G∖U} = 0}. A fundamental solution to the heat equation on U is a function p^U(x,y,t) defined on G × G × [0,∞) such that for each y ∈ U,

    ∂_t p^U(x,y,t) − (L^U_μ)_x p^U(x,y,t) = 0,

and p^U(·,y,0) = δ_y. The operator L^U_μ is negative semidefinite and self-adjoint (by arguing as in Corollary 2.2.2 and (2.3)), and since U is finite, it has finitely many eigenvalues 0 ≥ λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_N, with a corresponding basis of real eigenfunctions (φ_j)_{1≤j≤N}. One can then verify directly (see Lemma 4.5 of [74]) that the function

    p^U(x,y,t) := Σ_{j=1}^{N} e^{λ_j t} φ_j(x) φ_j(y) = e^{tL^U_μ} δ_y(x)  (2.8)

is a fundamental solution to the heat equation on U. Finally, if U ⊆ V ⊆ G, a version of the maximum principle (see Section 4.1 of [74]) gives p^U ≤ p^V, and consequently if (G_j)_{j∈Z₊} is a sequence of finite sets which increases to G, then the limit

    p^G(x,y,t) := lim_{j→∞} p^{G_j}(x,y,t)  (2.9)

exists, and we take p_t(x,y) := p^G(x,y,t), yielding another construction of the heat kernel. We henceforth write p^U_t(x,y) for p^U(x,y,t).

Remark 2.3.8. Using this definition, the symmetry of the heat kernel follows immediately from (2.8) and (2.9).

This construction has a probabilistic interpretation as well as a geometric one. The Dirichlet Laplacian is associated with absorption (or killing) at the boundary of U. In light of this, it can be shown that if U is finite, and if T_V := inf{t ≥ 0 : X_t ∈ V} denotes the first hitting time of V ⊆ G, then

    p^U_t(x,y) = (1/μ_y) P_x(X_t = y, T_{G∖U} > t),

and we call p^U_t(x,y) the heat kernel killed at U. Note that we can also define the killed heat semigroup (P^U_t)_{t≥0} for functions supported on U by

    (P^U_t f)(x) := Σ_{y∈G} p^U_t(x,y) f(y) μ_y.

The connections with spectral theory in this construction extend beyond the use of the eigenvalues and eigenvectors of L^U_μ to construct the heat kernel p^U. Recall that the transition semigroup was given by (e^{tL_μ})_{t≥0}, and a formal calculation using the chain rule indicates that if u_0 ∈ C(G), then the function u(x,t) = e^{tL_μ}u_0 satisfies

    ∂_t u = ∂_t e^{tL_μ}u_0 = L_μ e^{tL_μ}u_0 = L_μ u,

with u(·,0) = u_0. This solution also has certain minimality properties:

Proposition 2.3.9. Suppose that u_0 ∈ L^∞(G) is nonnegative.
The function u = P_t u_0 = e^{tL_μ}u_0 is the smallest nonnegative solution of the Cauchy problem

    ∂_t u(x,t) = L_μ u(x,t)  in G × (0,T),
    u(x,t) = u_0(x)  on G × {t = 0}.

Proof. See Theorem 1.4.15 in [49]. One uses the ideas presented above to show that u(x,t) := P_t u_0 solves the Cauchy problem, and combines this with an appropriate version of the parabolic minimum principle (Theorem 1.3.2 in [49]) to show minimality.

Let us briefly summarize the relationship between these various constructions of the heat kernel. Beginning with the operator L_μ, we have the associated random walk (X_t)_{t≥0} (with infinitesimal generator L_μ), the heat semigroup (e^{tL_μ})_{t≥0}, and the heat equation (∂_t − L_μ)u = 0, and these lead to equivalent definitions of the heat kernel. In particular, the constructions of the heat kernel via probabilistic methods and spectral-theoretic methods are linked by the fact that the heat semigroup (P_t)_{t≥0} may be expressed both as (P_t f)(x) = E_x f(X_t) and as P_t = e^{tL_μ}. Similarly, the constructions using spectral-theoretic methods and the heat equation are linked by the fact that the function e^{tL_μ}δ_y is a minimal fundamental solution to the heat equation on G.

2.4 Dirichlet forms on graphs

Working directly with the infinitesimal generator is often difficult due to technical complications involving the domain of the generator. Many problems involving the generator may be addressed using the more robust theory of Dirichlet forms, developed by Silverstein and Fukushima. Our discussion here closely follows the exposition given in [14] and [32]. We have previously considered the quadratic form Q defined by

    Q(f,g) := (1/2) Σ_{x∈G} Σ_{y∈G} π_{xy} (f(y) − f(x))(g(y) − g(x));

sometimes Q is also referred to as an energy form. This may be viewed as a discrete version of the energy form E(u,v) = ∫ ∇u · ∇v dx on R^d.

The quadratic form Q with an appropriately chosen domain D(Q) is an example of a Dirichlet form, which we will now define. Let H be a real Hilbert space with inner product ⟨·,·⟩_H and induced norm ‖·‖_H. A quadratic form (E, D(E)) is said to be a symmetric form on H if the domain D(E) is a dense linear subspace of H, and E is defined on D(E) × D(E) and satisfies the following properties:

1. (Bilinearity) For all f, g, h ∈ D(E) and all α, β ∈ R, E(αf + βg, h) = αE(f,h) + βE(g,h).
2. (Symmetry) For all f, g ∈ D(E), E(f,g) = E(g,f).
3. (Positive semidefiniteness) For all f ∈ D(E), E(f,f) ≥ 0.

Given α > 0 and f, g ∈ D(E), we define

    E_α(f,g) := E(f,g) + α⟨f,g⟩_H,

and we say that the symmetric form (E, D(E)) is closed if D(E) is complete with respect to the norm induced by the inner product E_1. Note that for any α, β > 0, E_α and E_β induce equivalent norms on D(E).

Now, let (X, B(X), m) be a σ-finite measure space, and let H be the Hilbert space L²(X,m) with inner product given by

    ⟨f,g⟩_H = ∫_X f(x) g(x) m(dx).

We say that a symmetric form E on L²(X,m) is Markovian if whenever u ∈ D(E), then v := 0 ∨ (u ∧ 1) ∈ D(E) and E(v,v) ≤ E(u,u). The pair (E, D(E)) is said to be a Dirichlet form if it is symmetric, closed, and Markovian.

A Dirichlet form (E, D(E)) is said to be regular if there exists a subset of D(E) ∩ C_0(X) which is dense in D(E) with respect to the E_1 norm and dense in C_0(X) with respect to the uniform norm; such a subset is also called a core. A Dirichlet form (E, D(E)) is local if, whenever f, g ∈ D(E) are such that supp f and supp g are disjoint and compact, E(f,g) = 0. A Dirichlet form is strongly local if whenever f, g ∈ D(E) are such that supp f and supp g are compact and g is constant on a neighborhood of supp f, E(f,g) = 0.

The usefulness of Dirichlet forms as an analytic tool is clear from the following result:

Theorem 2.4.1. There is a one-to-one correspondence between any two of the following three families of objects:

1. The family of negative semidefinite self-adjoint operators on H.
2. The family of strongly continuous semigroups on H.
3. The family of closed symmetric quadratic forms on H.

Proof. See Lemma 1.3.2 and Theorem 1.3.1 in [32].
In particular, the relationship between 1 and 2 is that if A is a negative semidefinite self-adjoint operator on H, then the semigroup (T_t)_{t≥0} defined by T_t := exp(tA) is the unique strongly continuous semigroup with generator A. The relationship between 1 and 3 is that if A is a negative semidefinite self-adjoint operator on H, then the corresponding form E is given by

    D(E) = D((−A)^{1/2}),
    E(f,g) = ⟨(−A)^{1/2} f, (−A)^{1/2} g⟩_H,

and one can verify that this correspondence is indeed one-to-one.

Remark 2.4.2. Note that the above correspondence is characterized by

    D(A) ⊆ D(E),
    E(f,g) = −⟨Af, g⟩_H for f ∈ D(A), g ∈ D(E)

(see Corollary 1.3.1 in [32]), which shows that one generally has a version of the Gauss–Green identity for any Dirichlet form.

In particular, it is well understood how to obtain the strong Markov process associated with a prescribed semigroup (see e.g. [25]), so starting with a Dirichlet form allows us to recover all of the analytic objects of interest in our study.

We conclude this section by discussing Dirichlet forms on weighted graphs in depth, using many of the results proved by Keller and Lenz in [53]. One particularly appealing aspect of the Dirichlet form construction is that the domains of the various objects under consideration are much clearer than in our earlier discussion.

Let (Γ,π,μ) be a measure weighted graph. We begin with the form

    Q(f,g) := (1/2) Σ_{x∈G} Σ_{y∈G} π_{xy} (f(y) − f(x))(g(y) − g(x)),

and define D(Q) to be the closure of C_c(G) in the Q_1-norm. By our choice of D(Q), (Q, D(Q)) is a regular Dirichlet form (see the discussion in Section 1 in [53]), and the associated negative semidefinite self-adjoint operator (via Theorem 2.4.1) is the operator L_μ with some domain D(L_μ) (see Theorem 9 in [53]).

As discussed previously, if one makes some additional geometric assumptions on (Γ,π,μ), then D(L_μ) has a particularly nice description. If (Γ,π,μ) is such that μ(U) = ∞ whenever U ⊆ G is infinite and connected in Γ, then

    D(L_μ) = {f ∈ L²(μ) : L_μ f ∈ L²(μ)},

and the restriction of L_μ to C_c(G) is essentially self-adjoint (by Theorem 5 in [53]). This is certainly true if μ = 1, or if μ = π and the edge weights (π_e)_{e∈E} are bounded below.

The Dirichlet form (Q, D(Q)) is non-local on any measure weighted graph. Indeed, if x ∼ y, then f := 1_{{x}} and g := 1_{{y}} have disjoint compact supports, but Q(f,g) = −π_{xy} ≠ 0.

2.5 Spectral theory

In this section, we assume that L_μ is the negative semidefinite self-adjoint operator associated with a regular Dirichlet form Q, as discussed in the previous section, and we write D(L_μ) for the domain of L_μ.

Let I denote the identity operator. Recall that the spectrum of an operator A, σ(A), is defined to be the set of λ ∈ C such that A − λI does not have a bounded inverse. The essential spectrum of A, σ_ess(A), is the subset of σ(A) consisting of points which are either accumulation points of σ(A) or which are eigenvalues of A with infinite multiplicity. The discrete spectrum of A, σ_disc(A), is the subset of σ(A) consisting of points which are isolated eigenvalues of A with finite multiplicity. Note that σ_disc(A) = σ(A) ∖ σ_ess(A). We will write σ(Γ,π,μ) (respectively σ_ess(Γ,π,μ), σ_disc(Γ,π,μ)) to denote the spectrum (respectively essential spectrum, discrete spectrum) of the operator −L_μ on (Γ,π,μ). As the operator −L_μ is positive semidefinite and self-adjoint, we have σ(Γ,π,μ) ⊆ [0,∞), and we define the bottom of the spectrum of −L_μ to be

    λ_0(Γ,π,μ) := inf σ(Γ,π,μ).

We define λ_0^{ess}(Γ,π,μ) := inf σ_ess(Γ,π,μ) and λ_0^{disc}(Γ,π,μ) := inf σ_disc(Γ,π,μ).

We will use several equivalent expressions for the bottom of the spectrum and essential spectrum. The most important of these is the following variational characterization of the bottom of the spectrum in terms of the Dirichlet form, which is an analogue of the classical Rayleigh–Ritz formula:

Theorem 2.5.1. The bottom of the spectrum of the operator −L_μ satisfies

    λ_0(Γ,π,μ) = inf_{f∈D(Q), f≠0} Q(f,f)/‖f‖²_{L²(μ)} = inf_{f∈C_c(G), f≠0} Q(f,f)/‖f‖²_{L²(μ)}.  (2.10)

Proof.
See Theorems 13.1 and 13.2 in [64].

For U ⊆ G, let λ_0^U(Γ,π,μ) denote the bottom of the spectrum of the Dirichlet Laplacian on U. Then (2.10) implies that if U ⊆ V,

    λ_0^U(Γ,π,μ) ≥ λ_0^V(Γ,π,μ),  (2.11)

by extending any function f on U to a function on V by setting f(x) = 0 for x ∈ V ∖ U. Consequently, if (G_j)_{j∈Z₊} are finite sets which increase to G, then as j → ∞,

    λ_0^{G_j}(Γ,π,μ) ↓ λ_0(Γ,π,μ),

by (2.10), (2.11), and the fact that C_c(G) is dense in D(Q) with respect to the Q_1-norm.

For the bottom of the essential spectrum, note first that we have the trivial bound λ_0(Γ,π,μ) ≤ λ_0^{ess}(Γ,π,μ). The essential spectrum is invariant under perturbations of finite support (see Lemma 1 of [31]), and by standard techniques of spectral theory, if (G_k)_{k∈Z₊} is any sequence of finite subsets of G which increases to G, then we have that

    λ_0^{ess}(Γ,π,μ) = lim_{k→∞} λ_0^{G∖G_k}(Γ,π,μ);

see Proposition 18 of [52] for a proof.

The bottom of the spectrum is particularly useful because it encodes information about the long-time decay of the heat kernel p_t(x,y) and the associated semigroup (P_t)_{t≥0}:

Theorem 2.5.2. For t ≥ 0, ‖P_t‖_{2→2} = exp(−λ_0(Γ,π,μ)t).

Proof. Fix t ≥ 0. A direct calculation, together with (2.10), shows that for f ∈ L²(μ),

    (d/dt)‖P_t f‖²_{L²(μ)} = 2⟨L_μ P_t f, P_t f⟩_{L²(μ)} ≤ −2λ_0(Γ,π,μ)‖P_t f‖²_{L²(μ)},

and since P_0 = I, integrating this differential inequality gives ‖P_t‖_{2→2} ≤ exp(−λ_0(Γ,π,μ)t). On the other hand, if U ⊆ G is finite and (P^U_t)_{t≥0} denotes the killed heat semigroup, then since L^U_μ and P^U_t commute on D(L^U_μ) (see Lemma 4.13 of [74]), if (G_j)_{j∈Z₊} are finite sets which increase to G and λ_j := λ_0^{G_j}(Γ,π,μ) has the corresponding eigenfunction f_j, it follows that for all j ∈ Z₊,

    (d/dt)‖P^{G_j}_t f_j‖²_{L²(μ)} = 2⟨L^{G_j}_μ P^{G_j}_t f_j, P^{G_j}_t f_j⟩_{L²(μ)} = −2λ_j‖P^{G_j}_t f_j‖²_{L²(μ)}.

This differential equation yields ‖P^{G_j}_t‖_{2→2} ≥ exp(−λ_j t) for all j ∈ Z₊. Since 0 ≤ p^{G_j}_t(x,y) ≤ p_t(x,y), we have ‖P^{G_j}_t‖_{2→2} ≤ ‖P_t‖_{2→2}, and hence ‖P_t‖_{2→2} = exp(−λ_0(Γ,π,μ)t).

Remark 2.5.3. In [54], a pointwise version of Theorem 2.5.2 is established; it is shown there that for x, y ∈ G,

    lim_{t→∞} (1/t) log p_t(x,y) = −λ_0(Γ,π,μ).

We now discuss some simple examples in which explicit calculations are possible:

Example 2.5.4 (Euclidean lattice). Here, we fix d ≥ 1, and work on (Z^d, E_d) equipped with the natural weights. By a minor abuse of notation we will use Z^d to refer to both the graph (Z^d, E_d) and the vertex set Z^d. We have that π({x}) = 2d for all x ∈ Z^d, so 2dL_π = L_1, and it suffices to consider L_π.

Given an operator A, let ρ(A) := sup σ(A) denote the spectral radius of A. Since we have ρ(−L_π) ≤ 2 from Theorem 2.2.3, it follows that σ(Γ,π,π) ⊆ [0,2] (and this is valid for any weighted graph (Γ,π)). On the other hand, one can directly check that if F denotes the discrete Fourier transform, which we define for f ∈ L²(π) by

    (Ff)(θ) := (2π)^{−d/2} Σ_{x∈Z^d} f(x) e^{−i⟨x,θ⟩},

and F^{−1} denotes its inverse, then for θ = (θ_1, …, θ_d) ∈ [−π,π]^d,

    (F(−L_π)F^{−1}f)(θ) = m(θ)f(θ),  where  m(θ) := (1/2d) Σ_{j=1}^{d} (2 − 2cos θ_j).

That is, −L_π is unitarily equivalent (via the discrete Fourier transform) to the operator of multiplication by m, so that σ(Z^d,π,π) = range(m) = [0,2]. Consequently, we have λ_0(Z^d,π,π) = 0, as well as σ(Z^d,π,1) = [0,4d] and λ_0(Z^d,π,1) = 0.

Note that the spectrum of −L_π and −L_1 on Z^d is purely continuous (and in fact is purely absolutely continuous). However, one can find pointwise solutions to −L_π f(x) = λf(x); for example, if d = 1, λ ∈ [0,2], and θ satisfies 2 − 2λ = 2cos θ, then for any constants α, β, the function f_{α,β} : Z → R defined by

    f_{α,β}(x) := α cos(θx) + β sin(θx)

satisfies −L_π f_{α,β}(x) = λf_{α,β}(x), but f_{α,β} ∉ L²(π) unless α = β = 0.

Example 2.5.5 (Regular trees). Fix d ≥ 3, and let T_d be the d-regular infinite tree equipped with the natural weights. On this graph, we have dL_π = L_1. We claim that

    σ(T_d,π,π) = [1 − 2√(d−1)/d, 1 + 2√(d−1)/d],
    σ(T_d,π,1) = [d − 2√(d−1), d + 2√(d−1)].

We will prove the second of these. Given a vertex x_0 ∈ T_d and ℓ ≥
0, we let f_{x_0,ℓ} denote the function defined by f_{x_0,ℓ}(x) := ℓ^{d(x_0,x)} for x ∈ T_d. A direct computation shows that if λ ∈ R, then, writing A_1 for the adjacency operator,

    ((−L_1 − λI)f_{x_0,ℓ})(x) = ((−A_1 − (λ−d)I)f_{x_0,ℓ})(x)
      = −dℓ − (λ−d)  if x = x_0,
      = ℓ^{n−1}((1−d)ℓ² − (λ−d)ℓ − 1)  if d(x,x_0) = n ≥ 1.

Assume that λ ∈ R satisfies |λ−d| > 2√(d−1). The quadratic equation (1−d)ℓ² − (λ−d)ℓ − 1 = 0 has two solutions given by

    r_1, r_2 = ((λ−d) ± √((λ−d)² − 4(d−1))) / (2(1−d)).  (2.12)

In particular, since 0 ≤ √((λ−d)² − 4(d−1)) ≤ |λ−d|, one of the roots r = r(d,λ) satisfies |r| < 1/√(d−1), and we have

    ‖f_{x_0,r}‖²_{L²(1)} = Σ_{n≥0} Σ_{d(x,x_0)=n} |f_{x_0,r}(x)|² ≤ d Σ_{n≥0} |r|^{2n}(d−1)^n < ∞,  (2.13)

with ‖f_{x_0,r}‖_{L²(1)} being independent of the choice of x_0 ∈ G. Our hypotheses on λ and r imply that −dr − (λ−d) is nonzero, so the equation (−L_1 − λI)u = δ_x has a solution of the form u = αf_{x,r}, and in light of (2.13), by taking linear combinations of solutions of this form we can solve (−L_1 − λI)u = g for arbitrary g ∈ L²(1), with constants c_1(d,λ), c_2(d,λ) such that c_1‖g‖_{L²(1)} ≤ ‖u‖_{L²(1)} ≤ c_2‖g‖_{L²(1)}. This means precisely that −L_1 − λI has a bounded inverse, so that λ ∉ σ(T_d,π,1).

Conversely, suppose that |λ−d| < 2√(d−1), and suppose that for some x_0 ∈ T_d there exists f_{x_0} ∈ L²(1) with (−L_1 − λI)f_{x_0} = δ_{x_0}. Let S_{x_0}(n) := {x ∈ T_d : d(x,x_0) = n}, and let f̄_{x_0} be the spherical average of f_{x_0}, defined for x with d(x,x_0) = n by

    f̄_{x_0}(x) := (1/|S_{x_0}(n)|) Σ_{y∈S_{x_0}(n)} f_{x_0}(y).

It is clear that (−L_1 − λI)f̄_{x_0} = δ_{x_0} also, with f̄_{x_0} ∈ L²(1). On the other hand, if we write u_n for the common value of f̄_{x_0} on S_{x_0}(n), then from (−L_1 − λI)f̄_{x_0} = δ_{x_0} we must have

    (1−d)u_{n+2} − (λ−d)u_{n+1} − u_n = 0  for n ≥ 0,

and this second-order recurrence can be solved directly to give u_n = αr_1^n + βr_2^n, where r_1 and r_2 are given by (2.12). In this case both r_1 and r_2 are complex numbers with |r_1| = |r_2| = 1/√(d−1), and comparing with (2.13) shows that f̄_{x_0} ∉ L²(1), a contradiction. We conclude that if |λ−d| < 2√(d−1), then −L_1 − λI fails to have a bounded inverse, and hence λ ∈ σ(T_d,π,1). Since σ(T_d,π,1) is closed and real, we conclude that σ(T_d,π,1) is the closure of {λ ∈ R : |λ−d| < 2√(d−1)}, namely [d − 2√(d−1), d + 2√(d−1)]. Note that the above argument also shows that the spectrum is purely continuous.

Remark 2.5.6. The preceding analysis shows that the bottom of the spectrum of the d-regular infinite tree is positive, and in particular σ(T_d,π,π) is a strict subset of [0,2]. The d-regular tree is an extremal example in several respects; it can be shown that it has the smallest spectrum (with respect to the measure π) among all infinite connected d-regular graphs, and we will prove subsequently that it has the largest possible bottom of spectrum among all graphs with the same rate of volume growth.

We will show subsequently that it is necessary for a graph to have exponential volume growth (measured with respect to a certain type of metric) in order for the bottom of the spectrum to be positive. However, this condition is not sufficient, as the next example will show.

Example 2.5.7 (Lamplighter graph). Let S = Z × (Z/2Z)^Z. We define the lamplighter graph L = (G,E) by

    G := {(x,ℓ) ∈ S : all but finitely many coordinates of ℓ are equal to 0},
    E := {{(x,ℓ),(x′,ℓ′)} : |x − x′| = 1 and ℓ(y) = ℓ′(y) for y ∈ Z ∖ {x,x′}}.

Informally, we may describe this graph as follows. The vertex set consists of configurations of lamps indexed by the integers, such that each lamp is either on or off, and such that only finitely many lamps are on at a given time. A lamplighter stands at the lamp associated with a vertex x, moves to the lamp at an adjacent vertex x′ (so that |x − x′| = 1), and changes the settings of the lamps at x and x′ to one of the four possibilities (on/on, on/off, off/on, off/off) while leaving all other lamps unchanged.
If a single iteration of this process can change one configuration into another configuration, then the vertices corresponding to the configurations are adjacent.

The lamplighter graph is the Cayley graph of the wreath product Z ≀ (Z/2Z) (the lamplighter group) associated with an appropriate choice of generators. See [57] for more on the lamplighter graph and the behavior of random walks on this graph.

The lamplighter graph has exponential growth (and in fact the volume growth satisfies V_d(x,r) ≍ φ^r, where φ = (1+√5)/2 is the golden mean). We have the following nontrivial result with regards to the spectrum:

Theorem 2.5.8 (Grigorchuk–Żuk, [35]). The spectrum of the operator −L_π on the lamplighter graph is [0,2].

In particular, despite having exponential growth, the bottom of the spectrum of −L_π on L is equal to 0.

Chapter 3

Adapted Metrics

Let (Γ,π,μ) be a measure weighted graph, and let (X_t)_{t≥0} denote the random walk associated with the operator L_μ. This measure weighted graph is equipped with a natural notion of distance via the graph metric. However, the graph metric does not incorporate the information contained in the edge weights π or the measure μ. Since we are primarily interested in the behavior of X, we would like to have a metric ρ which captures, in some sense, distances from the perspective of X. For example, if the random walk can move quickly between x and y, the distance ρ(x,y) should be appropriately small.

Having a good metric to work with is particularly important if the operator L_μ is unbounded. We showed in Example 2.2.6 that it is possible for the random walk associated with an unbounded graph Laplacian to travel an infinite distance in the graph metric in a finite amount of time, with positive probability. In this case, one should expect that the metric ρ should be much smaller than the graph metric, since two vertices which are far apart with respect to the graph metric are still very close from the perspective of the random walk. It may even be reasonable to expect that the distance ρ(x,y) between two vertices x, y ∈ G should be uniformly bounded.

The use of metrics other than the graph metric to study random walks and heat flow on graphs was initiated by E.B. Davies in the paper [19]. This paper defines several metrics and uses functional-analytic techniques to prove off-diagonal heat kernel estimates involving these metrics. However, the metrics considered by Davies do not generally produce optimal results in applications. Working independently, the author of this thesis, as well as Frank, Lenz, and Wingert (in the setting of general non-local Dirichlet spaces), and Grigor'yan, Huang, and Masamune, subsequently considered metrics on graphs satisfying a condition that we will refer to as adaptedness. Metrics of this type have proven extremely useful in settings in which the operator L_μ is unbounded, and their use will be essential in the work which follows.

Some additional notation will be helpful in this section. Given an edge e ∈ E, we use ē and e̲ to denote the (arbitrarily chosen) endpoints of e. This abuse of notation will not cause any complications, since all expressions involving this notation will be symmetric in ē and e̲. For x ∈ G, we let E(x) denote the set of edges incident to x. Given vertices x, y ∈ G, we use C(x,y) to denote the set of all finite chains of edges e_1, e_2, …, e_n joining x and y, and P(x,y) to denote the set of all finite paths of vertices x_1, x_2, …, x_m joining x and y.

3.1 Adapted metrics

Let ρ be a metric on (Γ,π). We say that ρ is an adapted metric on (Γ,π,μ) (or a C_ρ-adapted metric) if there exist constants c_ρ, C_ρ > 0 such that

1. For all x, y ∈ G with x ∼ y,

    ρ(x,y) ≤ c_ρ.  (3.1)

2. For all x ∈ G,

    (1/μ_x) Σ_{y∼x} π_{xy} ρ²(x,y) ≤ C_ρ.  (3.2)

As we will show subsequently, for a general measure weighted graph (Γ,π,μ), adapted metrics always exist. However, they are not unique; if ρ_1 is an adapted metric, then any metric ρ_2 which satisfies ρ_2(x,y) ≤ ρ_1(x,y) for all x, y ∈
G is also an adapted metric. Additionally, it is typically not possible to choose a (pointwise) maximal metric with respect to the condition (3.2).

One convenient way to construct adapted metrics is to use a set of positive edge conductances (σ(e))_{e∈E} for which there exist constants c_σ, C_σ > 0 such that

1. For all e ∈ E,

    σ(e) ≤ c_σ.  (3.3)

2. For all x ∈ G,

    (1/μ_x) Σ_{e∈E(x)} π(e) σ²(e) ≤ C_σ.  (3.4)

Edge conductances satisfying (3.3) and (3.4) will be called adapted edge conductances or C_σ-adapted edge conductances. Any collection of edge conductances (adapted or not) induces metrics in the following way:

    d_{σ,1}(x,y) := sup{|f(x) − f(y)| : f ∈ S_σ(Γ,π,μ)},  (3.5)
    d_{σ,2}(x,y) := inf{Σ_{e∈γ} σ(e) : γ ∈ C(x,y)}.  (3.6)

Here S_σ(Γ,π,μ) := {f ∈ C(G) : |f(ē) − f(e̲)| ≤ σ(e) for all e ∈ E}. In fact, these metrics coincide:

Lemma 3.1.1. For all x, y ∈ G, d_{σ,1}(x,y) = d_{σ,2}(x,y).

Proof. Given x_0 ∈ G, define f_{x_0} := d_{σ,2}(x_0,·). Since |f_{x_0}(ē) − f_{x_0}(e̲)| ≤ d_{σ,2}(ē,e̲) ≤ σ(e), it follows that f_{x_0} ∈ S_σ(Γ,π,μ). Consequently for x, y ∈ G, we have d_{σ,1}(x,y) ≥ |f_x(x) − f_x(y)| = d_{σ,2}(x,y).

For the reverse inequality, given any ε > 0 and f ∈ S_σ(Γ,π,μ), there is a chain of edges γ = e_1, …, e_n ∈ C(x,y) such that

    d_{σ,2}(x,y) + ε ≥ Σ_{j=1}^n σ(e_j) ≥ Σ_{j=1}^n |f(ē_j) − f(e̲_j)| ≥ |f(x) − f(y)|.

Taking the supremum over f ∈ S_σ(Γ,π,μ) and using the fact that ε > 0 was arbitrary, we conclude that for x, y ∈ G, d_{σ,2}(x,y) ≥ d_{σ,1}(x,y), and consequently d_{σ,1} = d_{σ,2}.

Henceforth, we write d_σ to refer to the common value of d_{σ,1} and d_{σ,2}. It is clear from (3.6) that for all e ∈ E, d_σ(ē,e̲) ≤ σ(e), and this leads to the following observation:

Lemma 3.1.2. If (σ(e))_{e∈E} are C_σ-adapted edge conductances, then the induced metric d_σ is a C_σ-adapted metric satisfying d_σ(x,y) ≤ c_σ whenever x ∼ y.

In fact, any metric ρ (adapted or not) induces edge conductances by setting σ_ρ(e) := ρ(ē,e̲), and in turn the edge conductances (σ_ρ(e))_{e∈E} induce a metric d_ρ := d_{σ_ρ}. We call d_ρ the metric induced by ρ.
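The chain definition (3.6) is straightforward to compute by shortest-path search. The following sketch revisits the half-line graph of Example 2.2.6 with μ ≡ 1 (the VSRW) and uses the conductances σ(e) = 1 ∧ π_e^{−1/2}, an illustrative choice which satisfies (3.3) and (3.4) with C_σ = 2 on this graph; the truncation level N is likewise arbitrary. It computes d_σ(0,·) by Dijkstra's algorithm and checks that the whole half-line has finite diameter in this metric, consistent with the explosiveness of the VSRW seen earlier.

```python
import heapq

# Half-line graph of Example 2.2.6: vertices 0..N, pi_{n,n+1} = (n+1)^3, mu = 1.
N = 200
pi_edge = {(n, n + 1): float((n + 1) ** 3) for n in range(N)}

# Candidate adapted edge conductances: sigma(e) = 1 ∧ pi_e^{-1/2}.
sigma = {e: min(1.0, w ** -0.5) for e, w in pi_edge.items()}

# Adjacency list carrying the sigma-lengths.
adj = {x: [] for x in range(N + 1)}
for (a, b), s in sigma.items():
    adj[a].append((b, s))
    adj[b].append((a, s))

def d_sigma(source):
    """d_sigma(source, .) = inf over chains of sum_e sigma(e), via Dijkstra (cf. (3.6))."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, x = heapq.heappop(heap)
        if d > dist.get(x, float("inf")):
            continue
        for y, s in adj[x]:
            if d + s < dist.get(y, float("inf")):
                dist[y] = d + s
                heapq.heappush(heap, (d + s, y))
    return dist

dist = d_sigma(0)
# Adaptedness (3.4) with mu = 1: each term pi_e * sigma(e)^2 = min(pi_e, 1) <= 1,
# and every vertex here meets at most two edges, so the sum is at most 2.
for x in range(N + 1):
    assert sum(w * sigma[e] ** 2 for e, w in pi_edge.items() if x in e) <= 2.0 + 1e-12
# d_sigma(0, n) = sum_{j<n} (j+1)^(-3/2), which is bounded by zeta(3/2) < 2.62:
# the graph has finite diameter in this adapted metric.
assert abs(dist[N] - sum((j + 1) ** -1.5 for j in range(N))) < 1e-9
assert dist[N] < 2.62
```

On a path graph the infimum in (3.6) is of course attained by the unique chain, but the same routine applies verbatim to any locally finite weighted graph.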
The induced metric has the following properties:

Lemma 3.1.3. For any metric ρ, ρ(x,y) ≤ d_ρ(x,y) for all x, y ∈ G. If additionally ρ is a C_ρ-adapted metric satisfying ρ(x,y) ≤ c_ρ for all x ∼ y, then the induced metric d_ρ is also a C_ρ-adapted metric satisfying d_ρ(x,y) ≤ c_ρ for all x ∼ y.

Proof. We have that d_ρ(x,y) := sup{|f(x) − f(y)| : f ∈ S_{σ_ρ}(Γ,π,μ)}. Given x_0 ∈ G, set f_{x_0} := ρ(x_0,·). As before, we have |f_{x_0}(ē) − f_{x_0}(e̲)| ≤ ρ(ē,e̲) = σ_ρ(e), and hence f_{x_0} ∈ S_{σ_ρ}(Γ,π,μ). Consequently for x, y ∈ G, we have d_ρ(x,y) ≥ |f_x(x) − f_x(y)| = ρ(x,y).

For the second part, it suffices to prove that ρ(x,y) = d_ρ(x,y) whenever x ∼ y. We have already shown that d_ρ(x,y) ≥ ρ(x,y) for any x, y ∈ G, and from (3.6), whenever e ∈ E, d_ρ(ē,e̲) ≤ σ_ρ(e) = ρ(ē,e̲).

Remark 3.1.4. The metrics ρ and d_ρ do not coincide in general. For example, if Γ = (Z^d, E_d), ℓ > 1, and ρ(x,y) = ‖x − y‖_ℓ, then d_ρ(x,y) = ‖x − y‖_1.

Remark 3.1.5. A useful characterization of the induced metric d_ρ is that it is the largest possible metric which coincides with ρ on adjacent vertices; by (3.6), any larger metric would violate the triangle inequality.

Some variations on the definition of adaptedness appear in the literature. Frequently, the condition (3.1) is omitted in the definition of adaptedness. It is also common to take C_ρ to be 1; note that if ρ satisfies (3.2) with C_ρ = C, then ρ̃ := C^{−1/2}ρ satisfies (3.2) with C_ρ̃ = 1.

One can also define a notion of strong adaptedness, where (3.2) is replaced by the hypothesis that there exist constants C_1(ρ), C_2(ρ) > 0 such that for all x ∈ G,

    C_1(ρ) ≤ (1/μ_x) Σ_{y∼x} π_{xy} ρ²(x,y) ≤ C_2(ρ).

We will see that metrics which are adapted but not strongly adapted may yield sub-optimal results in applications. However, not all graphs admit a strongly adapted metric (see Example 8.2.1).

The definition of adaptedness (and in particular the condition (3.2)) is difficult to motivate in purely discrete terms. One observation is that estimates which control the Dirichlet form Q in terms of the norm ‖·‖_{L²(μ)} are useful in obtaining results on the analytic behaviour of L_μ or the associated random walk X. A simple calculation using the inequality (a − b)² ≤ 2|a|² + 2|b|² yields

    Q(f,f) ≤ 2‖f‖²_{L²(π)},

but for an arbitrary measure μ this argument only gives

    Q(f,f) ≤ 2A(Γ,π,μ)‖f‖²_{L²(μ)},

and of course this last estimate is useless if A(Γ,π,μ) = ∞ (i.e., if L_μ is unbounded). On the other hand, if one considers the closely related quadratic form

    Q_ρ(f,f) := (1/2) Σ_{x,y∈G} π_{xy} ρ²(x,y)(f(y) − f(x))²,

then using (3.2) and estimating as above yields

    Q_ρ(f,f) ≤ 2C_ρ‖f‖²_{L²(μ)}.

Of course, Q_ρ is the Dirichlet form associated with the operator L̃_μ on the measure weighted graph (Γ, π̃, μ), where π̃_{xy} := π_{xy}ρ²(x,y), and by Theorem 2.2.3 and (3.2), L̃_μ is bounded. The idea is that if one considers distances with respect to ρ instead of the graph metric d, then in many respects the behavior of the random walk X is similar to the behavior of a random walk whose generator is bounded.

Perhaps the best motivation for the definition of adaptedness comes from the continuous setting. Later on, we will prove Proposition 5.4.12, which relates the condition of adaptedness to a property satisfied by the intrinsic metric on an appropriately chosen metric graph. A useful heuristic is to think of adapted metrics as a discrete version of the Riemannian metric on a Riemannian manifold (see [67]), or the intrinsic metric associated with a strongly local Dirichlet form (see [69]). In [29], Frank, Lenz, and Wingert define metrics on non-local Dirichlet spaces which have many properties in common with the intrinsic metric on a strongly local Dirichlet space, and applying their methods to the setting of measure weighted graphs yields adapted metrics.

We now present some examples of adapted metrics:

Example 3.1.6. The graph metric d, which was defined earlier to be

    d(x,y) := inf{Σ_{e∈γ} 1 : γ ∈ C(x,y)},

is adapted to the operator L_π on any weighted graph (Γ,π) with constant C_d = 1.

Remark 3.1.7. If L_μ
is bounded, and we set $A := A(\Gamma,\pi,\theta)$, then $A^{-1/2}d$ is $1$-adapted to $L_\theta$. Conversely, no constant multiple of the graph metric can be adapted to $L_\theta$ when $L_\theta$ is unbounded.

Example 3.1.8. The edge metric $d_E$ defined by
$$d_E(x,y) := \inf\Big\{\sum_{e \in \gamma} 1 \wedge \pi_e^{-1/2} : \gamma \in C(x,y)\Big\},$$
is adapted to the operator $L_{\mathbf{1}}$ on $(\Gamma,\pi)$ if and only if the graph $\Gamma$ has uniformly bounded vertex degrees, with constant $C_{d_E} := \sup_{x \in G}\deg(x)$. This metric was first considered by Davies in [19].

Example 3.1.9. The vertex metric $d_V$ defined by
$$d_V(x,y) := \inf\Big\{\sum_{e \in \gamma} 1 \wedge \Big(\frac{\theta_{\underline{e}}}{\pi_{\underline{e}}}\Big)^{1/2} \wedge \Big(\frac{\theta_{\overline{e}}}{\pi_{\overline{e}}}\Big)^{1/2} : \gamma \in C(x,y)\Big\},$$
is $1$-adapted to the operator $L_\theta$ on any weighted graph $(\Gamma,\pi)$.

3.2 Metric comparisons

In this section we will collect estimates comparing different adapted metrics, as well as other metrics which have arisen previously in the study of random walks on graphs. Initially, we will let $\theta$ be arbitrary, although we will restrict to the case $\theta = \mathbf{1}$ subsequently.

We begin by defining the Davies metric $d_D$ on $(\Gamma,\pi,\theta)$, which appeared first in [19], where it is referred to as the $d_3$ metric. This metric also has certain similarities to the intrinsic metric associated with a strongly local Dirichlet form, and it is an "extremal metric" in the sense that it is the maximal metric with respect to a certain condition. Given $x,y \in G$, we define
$$d_D(x,y) := \sup\{|f(y) - f(x)| : f \in T(\Gamma,\pi,\theta)\},$$
where
$$T(\Gamma,\pi,\theta) := \Big\{f \in C(G) : \text{for all } x \in G,\ \frac{1}{\theta_x}\sum_{y \sim x}\pi_{xy}(f(y) - f(x))^2 \le 1\Big\}.$$

Remark 3.2.1. This metric is not in general adapted. If $\theta = \mathbf{1}$ and $T$ is an infinite tree with the natural weights, then $d_D(x,y) = 1$ whenever $x \sim y$. Consequently, if there are vertices with arbitrarily high degree in $T$, then $d_D$ will fail to be adapted.

Our first comparison result is as follows:

Theorem 3.2.2. Let $\sigma$ be a $C_\sigma$-adapted metric. Then for all $x,y \in G$, $\sigma(x,y) \le C_\sigma^{1/2}d_D(x,y)$.

Proof. Fix $x_0 \in G$ and set $f_{x_0} := C_\sigma^{-1/2}\sigma(x_0,\cdot)$. Note that for $x \in G$, by $C_\sigma$-adaptedness of $\sigma$ we have
$$\frac{1}{\theta_x}\sum_{y \sim x}\pi_{xy}(f_{x_0}(y) - f_{x_0}(x))^2 \le \frac{1}{C_\sigma}\Big(\frac{1}{\theta_x}\sum_{y \sim x}\pi_{xy}\,\sigma^2(x,y)\Big) \le 1,$$
so that $f_{x_0} \in T(\Gamma,\pi,\theta)$.
Since $d_D(x,y) \ge |f_x(y) - f_x(x)| = C_\sigma^{-1/2}\sigma(x,y)$, we conclude that $\sigma(x,y) \le C_\sigma^{1/2}d_D(x,y)$.

For further comparison results, we will need to impose additional restrictions. The remainder of the results in this section will be proved under the assumption that $\theta = \mathbf{1}$.

Theorem 3.2.3. Suppose that $\theta = \mathbf{1}$ and $\pi_e \ge 1$ for all $e \in E$. Then for all $x,y \in G$, $d_D(x,y) \le d_E(x,y)$.

Proof. Fix $e \in E$. First, we observe that for $f \in T(\Gamma,\pi,\mathbf{1})$, $\pi_e|f(\overline{e}) - f(\underline{e})|^2 \le 1$, so that $|f(\overline{e}) - f(\underline{e})| \le \pi_e^{-1/2} = 1 \wedge \pi_e^{-1/2}$. Given $f \in T(\Gamma,\pi,\mathbf{1})$ and $\varepsilon > 0$, we can find a chain $\gamma = e_1,\dots,e_n \in C(x,y)$ such that
$$d_E(x,y) + \varepsilon \ge \sum_{j=1}^n 1 \wedge \pi_{e_j}^{-1/2} \ge \sum_{j=1}^n |f(\overline{e_j}) - f(\underline{e_j})| \ge |f(y) - f(x)|,$$
from which we conclude that $d_D(x,y) \le d_E(x,y)$.

Remark 3.2.4. This shows that adaptedness of $d_E$ implies adaptedness of $d_D$; in particular, $d_D$ is adapted when $\Gamma$ has uniformly bounded vertex degrees.

Corollary 3.2.5. If $\Gamma$ has vertex degrees uniformly bounded by $D$, $\theta = \mathbf{1}$, and $\pi_e \ge 1$ for all $e \in E$, then for all $x,y \in G$, $D^{-1/2}d_E(x,y) \le d_D(x,y) \le d_E(x,y)$.

Proof. Under these hypotheses, $d_E$ is a $D$-adapted metric, so the leftmost inequality follows from Theorem 3.2.2 and the rightmost follows from Theorem 3.2.3.

We need additional geometric hypotheses to compare the metrics $d_E$ and $d_V$:

Theorem 3.2.6. Suppose that $\theta = \mathbf{1}$, $\pi(e) \ge 1$ for all $e \in E$, and that there exists $C > 0$ such that for each $e \in E$,
$$\frac{1}{\pi_e} \le C\Big(\frac{1}{\pi_{\underline{e}}} \wedge \frac{1}{\pi_{\overline{e}}}\Big). \qquad (3.7)$$
Then for all $x,y \in G$, $d_V(x,y) \le d_D(x,y) \le d_E(x,y) \le C^{1/2}d_V(x,y)$.

Proof. Since $d_V$ is a $1$-adapted metric, the leftmost inequality follows from Theorem 3.2.2, the middle inequality follows from Theorem 3.2.3, and the rightmost inequality follows from (3.7).

3.3 Volume growth and isoperimetric inequalities

The volume growth function and isoperimetric inequalities aim to quantify the rate of expansion of a measure weighted graph $(\Gamma,\pi,\theta)$.
The rate of expansion has several implications for the behavior of the associated random walk $X$; for example, sufficiently rapid expansion means that $X$ will typically wander very far from its starting point in a short amount of time. Generically, a slow rate of expansion is associated with the random walk $X$ being "well-behaved", whereas various pathological behaviors may arise as the rate of expansion increases.

Let $(\Gamma,\pi,\theta)$ be a measure weighted graph. As discussed earlier, given a metric $\rho$ on $(\Gamma,\pi,\theta)$, we define the closed ball of radius $r$ centered at $x$ by
$$B_\rho(x,r) := \{y \in G : \rho(x,y) \le r\},$$
and the volume of this ball is given by
$$V_\rho(x,r) := \theta(B_\rho(x,r)) := \sum_{y \in B_\rho(x,r)}\theta_y.$$
In the special case in which $\theta = \mathbf{1}$, we simply have that $V_\rho(x,r) = |B_\rho(x,r)|$ and we will write the latter.

If $x_0 \in G$ is fixed, the large-scale behavior of the function $r \mapsto V_\rho(x_0,r)$ is referred to as the volume growth of $(\Gamma,\pi,\theta)$. Typically, this does not depend on the choice of $x_0$. In particular, if one can find polynomials $p_1, p_2$ such that $p_1(r) \le V_\rho(x_0,r) \le p_2(r)$ for all sufficiently large $r$, then we say that $(\Gamma,\pi,\theta)$ has polynomial volume growth with respect to the metric $\rho$, and if one can find constants $c_1, c_2 > 0$ such that $e^{c_1 r} \le V_\rho(x_0,r) \le e^{c_2 r}$ for all sufficiently large $r$, then we say that $(\Gamma,\pi,\theta)$ has exponential volume growth with respect to the metric $\rho$. In an analogous way we can define other types of volume growth, such as subpolynomial, superpolynomial, subexponential, and superexponential.

The volume growth only encodes a small amount of the geometric information present in $(\Gamma,\pi,\theta)$. However, we will see subsequently that many analytic properties of the operator $L_\theta$ on $(\Gamma,\pi,\theta)$ may be deduced from estimates on the volume growth with respect to an adapted metric $\rho$.

Given a measure weighted graph $(\Gamma,\pi,\theta)$, an isoperimetric inequality yields estimates on the size of the boundary of an arbitrary subset $U \subseteq G$ in terms of the measure of $U$.
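When the metric is given by edge lengths and distances are path infima (as for the metrics $d$, $d_E$, and $d_V$ discussed above), the balls $B_\rho(x,r)$ and volumes $V_\rho(x,r)$ can be computed on a finite portion of a graph with a standard shortest-path search. The following sketch is illustrative only; the helper `ball_and_volume` is ours, not the thesis's, and it assumes the metric is determined by an edge-length function.

```python
import heapq

def ball_and_volume(adj, sigma, theta, x0, r):
    """Closed ball B(x0, r) and its volume on a measure weighted graph.

    adj maps a vertex to its neighbours, sigma(x, y) gives the length of
    the edge {x, y}, and theta is the vertex measure.  Distances are path
    infima, computed with Dijkstra's algorithm, truncated at radius r."""
    dist = {x0: 0.0}
    heap = [(0.0, x0)]
    while heap:
        d, x = heapq.heappop(heap)
        if d > dist[x]:          # stale heap entry
            continue
        for y in adj[x]:
            nd = d + sigma(x, y)
            if nd <= r and nd < dist.get(y, float("inf")):
                dist[y] = nd
                heapq.heappush(heap, (nd, y))
    ball = {y for y, d in dist.items() if d <= r}
    return ball, sum(theta[y] for y in ball)
```

For instance, on a path graph with unit edge lengths and $\theta = \mathbf{1}$, the ball of radius $3$ about $0$ is $\{0,1,2,3\}$ and its volume is $4$.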
Formulating the proper notion of an isoperimetric inequality on general measure weighted graphs $(\Gamma,\pi,\theta)$ is somewhat difficult. This was done recently by Bauer, Keller, and Wojciechowski in [7], and uses the notion of adapted metrics.

Given $U \subseteq G$, define the edge-boundary $\partial_e U$ by
$$\partial_e U := \{e \in E : \text{one endpoint of } e \text{ is in } U \text{ and the other is in } G \setminus U\}.$$
Let $\rho$ be an adapted metric on $(\Gamma,\pi,\theta)$, and define the measure of the boundary of $U$ with respect to $\rho$ to be
$$\pi_\rho(\partial_e U) := \sum_{e \in \partial_e U}\pi(e)\,\rho(\underline{e},\overline{e}).$$
Given an increasing function $\Psi : \mathbb{R}_{>0} \to \mathbb{R}_{>0}$, we say that $(\Gamma,\pi,\theta)$ satisfies a $\Psi$-isoperimetric inequality with respect to $\rho$ if
$$\inf_{\substack{U \subseteq G \\ U \text{ finite, nonempty}}}\frac{\pi_\rho(\partial_e U)}{\Psi(\theta(U))} > 0. \qquad (3.8)$$
In the special case where $\Psi(t) = t$, we call the quantity
$$\alpha_\rho(\Gamma,\pi,\theta) := \inf_{\substack{U \subseteq G \\ U \text{ finite, nonempty}}}\frac{\pi_\rho(\partial_e U)}{\theta(U)}$$
a Cheeger constant or adapted Cheeger constant. If the Cheeger constant $\alpha_\rho(\Gamma,\pi,\theta)$ is positive, we say that $(\Gamma,\pi,\theta)$ satisfies a strong isoperimetric inequality.

Remark 3.3.1. Although the use of adapted metrics is a recent development, volume growth estimates and isoperimetric inequalities for graphs have been the subject of a great deal of study. Prior results have focused largely on the bounded setting, in which the measure under consideration is the measure $\pi$ and the distance function used is the graph metric $d$. For an overview of some of these results, see [2], [16], [40].

Isoperimetric inequalities generally imply lower bounds for the volume growth function. To illustrate this, let us consider the case where $\theta = \pi$ and $\rho = d$. We say that $(\Gamma,\pi,\pi)$ satisfies a $D$-dimensional isoperimetric inequality if (3.8) holds with $\rho = d$ and $\Psi(t) = t^{(D-1)/D}$. Applying (3.8) to the sequence of balls $(B_d(x_0,n))_{n \in \mathbb{Z}^+}$ shows that there exists $C_1 > 0$ such that for $n \in \mathbb{Z}^+$,
$$V_d(x_0,n+1) - V_d(x_0,n) \ge \pi_d(\partial_e B_d(x_0,n)) \ge C_1 V_d(x_0,n)^{(D-1)/D}.$$
Setting $\phi(n) := V_d(x_0,n)^{1/D}$ and summing gives $V_d(x_0,n) \ge C_2 n^D$ for some $C_2 > 0$ and all $n \in \mathbb{Z}^+$; i.e., the volume growth of $(\Gamma,\pi,\pi)$ is at least $D$-dimensional.
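The summation step in the preceding argument can be checked numerically: iterating the worst case of the difference inequality $V_d(x_0,n+1) - V_d(x_0,n) \ge C_1 V_d(x_0,n)^{(D-1)/D}$ produces a sequence growing at least like $n^D$. A small sketch, with a hypothetical helper name and illustrative constants:

```python
def min_volume_growth(D, C1, v0=1.0, steps=2000):
    """Iterate v(n+1) = v(n) + C1 * v(n)**((D-1)/D), the extremal case of
    the isoperimetric difference inequality, and return the sequence of
    lower bounds for the ball volumes V(n)."""
    v = [v0]
    for _ in range(steps):
        v.append(v[-1] + C1 * v[-1] ** ((D - 1) / D))
    return v

v = min_volume_growth(D=3, C1=1.0)
# the ratios v[n] / n**D should stabilise at a positive constant,
# reflecting at least D-dimensional volume growth
```

Heuristically, $v^{1/D}$ increases by roughly $C_1/D$ per step, which is exactly the summation used in the text.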
However, the converse is not true; bounds on the volume growth do not generally imply that a corresponding isoperimetric inequality holds.

3.4 Examples

These two families of graphs will be our main source of examples throughout the remainder of this thesis. In these examples we will take $\theta = \mathbf{1}$.

Example 3.4.1 (Weighted half-line). As in Example 2.2.6, we set $\Gamma = (\mathbb{Z}_{\ge 0}, E_{\ge 0})$. Given $\alpha \ge 0$ and $\beta \in \mathbb{R}$, we define edge weights $(\pi^{\alpha,\beta}_e)_{e \in E_{\ge 0}}$ by
$$\pi^{\alpha,\beta}_{n,n+1} := \pi^{\alpha,\beta}(\{n,n+1\}) = (n+1)^\alpha\log_+^\beta(n+1),$$
where $n \in \mathbb{Z}_{\ge 0}$ and $\log_+(x) := \log(x) \vee 1$.

Since this graph is almost $2$-regular, we will use the adapted metric $d_E$ for ease of computation. We will analyze several cases separately. In all cases, we will begin with the observation that
$$d_E(0,R) = \sum_{j=0}^{R-1}\frac{1}{(\pi^{\alpha,\beta}_{j,j+1})^{1/2}} = \sum_{j=0}^{R-1}\frac{1}{(j+1)^{\alpha/2}\log_+^{\beta/2}(j+1)}.$$

Case 1: $0 \le \alpha < 2$, $\beta = 0$.

In this case, we have that
$$d_E(0,R) \asymp \frac{2}{2-\alpha}R^{1-\alpha/2},$$
and it follows that
$$V_{d_E}(0,r) = |B_{d_E}(0,r)| \asymp \Big(\frac{2-\alpha}{2}r\Big)^{2/(2-\alpha)}.$$
In particular, this regime corresponds to polynomial volume growth. We will see subsequently that this is the range of parameters in which one observes a transition from recurrence to transience.

Case 2: $\alpha = 2$, $\beta \in \mathbb{R}$.

This case is the most interesting one. We have that
$$d_E(0,R) \asymp \frac{2}{2-\beta}\log^{1-\beta/2}(R),$$
and it follows that
$$\log V_{d_E}(0,r) = \log|B_{d_E}(0,r)| \asymp \Big(\frac{2-\beta}{2}r\Big)^{2/(2-\beta)}.$$
This regime corresponds to super-polynomial volume growth for all $\beta \in \mathbb{R}$. The volume growth is subexponential when $\beta < 0$, exponential when $\beta = 0$, and super-exponential when $\beta > 0$. We will subsequently observe a wide range of behavior from the associated VSRW as $\beta$ varies.

It is not difficult to see that the Cheeger constant $\alpha_{d_E}(\Gamma,\pi^{\alpha,\beta},\mathbf{1})$ is positive if $\alpha \ge 2$ and $\beta \ge 0$. Indeed, in this case, we have $\pi^{\alpha,\beta}(e)\,d_E(\underline{e},\overline{e}) = (\pi^{\alpha,\beta}(e))^{1/2}$. In particular, suppose that $U \subseteq \mathbb{Z}_{\ge 0}$ is finite, and that $\sup U = N$. Then
$$\pi^{\alpha,\beta}_{d_E}(\partial_e U) \ge (\pi^{\alpha,\beta}(\{N,N+1\}))^{1/2} \ge N+1 \ge \mathbf{1}(U),$$
so that $\alpha_{d_E}(\Gamma,\pi^{\alpha,\beta},\mathbf{1}) \ge 1$.

Case 3: $2 < \alpha < \infty$, $\beta = 0$.

In this case, we have that
$$\lim_{R \to \infty}d_E(0,R) = \lim_{R \to \infty}\sum_{j=0}^{R-1}\frac{1}{(j+1)^{\alpha/2}} < \infty.$$
Consequently, the adapted distance between $0$ and $\infty$
is finite, and one can find $r_\infty < \infty$ such that $B_{d_E}(0,r_\infty) = \mathbb{Z}_{\ge 0}$ (i.e., the entire graph is contained in a ball of finite radius). The VSRW on this graph moves very rapidly towards infinity, and in fact "reaches infinity in finite time" almost surely.

Example 3.4.2 (Rapidly branching tree). For $\gamma > 0$, let $\Gamma_\gamma$ be the spherically symmetric infinite tree rooted at $x_{\mathrm{root}}$, with branching function $k_\gamma(r) := \lfloor(r+1)^\gamma\rfloor$, equipped with the natural weights.

Note that for $\Gamma_\gamma$, we have
$$|B_d(x_{\mathrm{root}},r)| = \sum_{j=0}^{r-1}\prod_{i=0}^{j}k_\gamma(i).$$
For this example, we will use the adapted metric $d_V$. If $x_R \in \Gamma_\gamma$ satisfies $d(x_{\mathrm{root}},x_R) = R$, then
$$d_V(x_{\mathrm{root}},x_R) \asymp \sum_{j=0}^{R-1}\frac{1}{(j+1)^{\gamma/2}} \asymp R^{1-\gamma/2}.$$
Hence, if $d_V(x_{\mathrm{root}},y) \asymp r$, then $d(x_{\mathrm{root}},y) \asymp r^{2/(2-\gamma)}$. Consequently,
$$\log V_{d_V}(x_{\mathrm{root}},r) = \log|B_{d_V}(x_{\mathrm{root}},r)| \asymp \sum_{j=0}^{r^{2/(2-\gamma)}}\gamma\log(j+1) \asymp \Big(\frac{2\gamma}{2-\gamma}\Big)r^{2/(2-\gamma)}\log r.$$

Remark 3.4.3. On any infinite tree $T$ equipped with the natural weights and the measure $\mathbf{1}$, the Davies metric $d_D$ is approximately equal to the graph metric.

To see this, note that since we are using the natural weights, $d_E = d$, so Theorem 3.2.3 gives $d_D(x,y) \le d(x,y)$. On the other hand, given two vertices $x,y \in T$ with $d(x,y) = D$, let $p = x_0,\dots,x_D \in P(x,y)$ be the unique geodesic path joining $x$ and $y$, and set $f(x_j) = j/\sqrt{2}$ for $0 \le j \le D$. For any $x \in T \setminus \{x_0,\dots,x_D\}$, we define $n(x)$ to be the unique vertex in $\{x_0,\dots,x_D\}$ such that $d(x,n(x)) = d(x,\{x_0,\dots,x_D\})$, and set $f(x) = f(n(x))$. We have that $f \in T(T,\pi,\mathbf{1})$, and hence $d_D(x,y) \ge |f(x) - f(y)| = D/\sqrt{2} = d(x,y)/\sqrt{2}$.

This is undesirable, because the VSRW can move very quickly on a rapidly branching tree, and a good metric should reflect this in some way.
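The contrast in Remark 3.4.3 can be seen numerically along a single ray of the rapidly branching tree: the Davies metric is bounded below by $d/\sqrt{2}$, which grows linearly in the level, while the vertex metric $d_V$ grows only like $R^{1-\gamma/2}$. A rough sketch follows; the helper name and the uniform treatment of vertex degrees (including the root) are illustrative assumptions, not thesis code.

```python
import math

def metrics_along_ray(gamma, R):
    """Along a root-to-level-R ray of the spherically symmetric tree with
    branching function k(r) = floor((r+1)**gamma), natural weights and
    theta = 1, return (i) the lower bound d(x_root, x_R)/sqrt(2) for the
    Davies metric and (ii) the vertex-metric distance d_V, whose edge
    term is min(1, deg(x)**-0.5, deg(y)**-0.5)."""
    davies_lb = R / math.sqrt(2)
    d_V = 0.0
    for j in range(R):
        # a vertex at level j has k(j) children plus (approximately) a parent
        deg_j = math.floor((j + 1) ** gamma) + 1
        deg_j1 = math.floor((j + 2) ** gamma) + 1
        d_V += min(1.0, deg_j ** -0.5, deg_j1 ** -0.5)
    return davies_lb, d_V
```

For $\gamma = 1$ and $R = 100$, $d_V$ along the ray is already several times smaller than the Davies lower bound, and the gap widens as $R$ grows.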
As a consequence, despite having certain nice properties, the Davies metric is not very useful for studying random walks on graphs, and it does not yield sharp results in applications (see also the discussion at the end of Example 8.4.2).

Chapter 4: Heat Kernel Estimates

This chapter deals with various pointwise estimates of the heat kernel $p_t(x,y)$ associated with a measure weighted graph $(\Gamma,\pi,\theta)$. We will be interested primarily in upper bounds for the heat kernel, and we will classify our estimates as either on-diagonal estimates or off-diagonal estimates. On-diagonal estimates of the heat kernel consider $p_t(x,x)$ for some $x \in G$, whereas off-diagonal estimates of the heat kernel consider $p_t(x,y)$ for distinct $x,y \in G$, and typically have an additional decaying term, which reflects the difficulty in moving from $x$ to $y$.

As the heat kernel is the transition density of the continuous-time simple random walk $(X_t)_{t \ge 0}$ with generator $L_\theta$, estimating the heat kernel allows one to control the behavior of the random walk $X$. For example, upper bounds on the heat kernel can be used to control the probability of the associated random walk moving a large distance in a short amount of time. We will see that the rate of decay of the heat kernel for large $t$ is intimately related to the large-scale geometry of the underlying graph, and in many instances can be quantified in terms of geometric quantities such as the volume growth or isoperimetric inequalities.

We have already seen one indication that pointwise estimates may be useful in studying the heat kernel, via the equality
$$\sum_{x \in G}|p_t(x_0,x)|^2\theta_x = p_{2t}(x_0,x_0). \qquad (4.1)$$
In particular, pointwise estimates of the heat kernel imply a bound on the $L^2$-norm of the heat kernel.

We will begin with a brief discussion of on-diagonal estimates for the heat kernel.
Estimates such as these have been studied in great depth for Riemannian manifolds and for weighted graphs equipped with the measure $\pi$ (for an overview on manifolds, see [37]; for graphs, see [2] and [41]; for the special case of groups, see [62] and [72]), but for the most part, analogues of these results have not been established for weighted graphs with arbitrary measure $\theta$.

4.1 On-diagonal heat kernel estimates

The simplest type of heat kernel behavior to analyze is polynomial decay, which corresponds to an estimate of the form
$$\sup_{x \in G}p_t(x,x) \le \frac{C}{t^{D/2}}, \qquad (4.2)$$
for all $t > 0$.

A fundamental result in this area is that the on-diagonal estimate (4.2) is equivalent to the Nash inequality
$$Q(f,f) \ge C_N\|f\|^{2+4/D}_{L^2(\theta)}\|f\|^{-4/D}_{L^1(\theta)}, \qquad (4.3)$$
which is assumed to hold for some $C_N > 0$ and all $f \in L^1(\theta) \cap L^2(\theta)$. This equivalence was proved by Carlen, Kusuoka, and Stroock in [12], and has its origins in earlier work of Nash. In particular, (4.3) applied to the function $\phi(t) := p_{2t}(x_0,x_0)$ leads directly to a differential inequality which can be solved to give
$$p_t(x_0,x_0) \le \frac{C}{t^{D/2}},$$
with a constant $C$ independent of $x_0$.

Remark 4.1.1. One can also define versions of the Nash inequality which imply rates of on-diagonal heat kernel decay different from the polynomial case considered here; see Theorem 5.1 in [40].

Remark 4.1.2. While the Nash inequality is arguably the most important functional inequality related to heat kernel decay, there are several others, including Sobolev inequalities, log-Sobolev inequalities, and Dirichlet inequalities. See [40] for an overview of the situation on manifolds, in particular Theorem 5.6.

Another family of important inequalities are Faber-Krahn inequalities, which relate spectral information to on-diagonal heat kernel decay. Given $U \subseteq G$, let $\lambda_0(U)$ denote the bottom of the spectrum of the Dirichlet Laplacian on $U$. We say that $(\Gamma,\pi,\pi)$ satisfies a Faber-Krahn inequality with function $\Lambda$ if the inequality
$$\lambda_0(U) \ge \Lambda(\pi(U))$$
holds for any finite nonempty $U \subseteq G$.
Faber-Krahn inequalities are logically equivalent to uniform on-diagonal heat kernel bounds and Nash inequalities; see Section 5.4 in [41].

Of particular interest is the relationship between geometric estimates for a weighted graph and on-diagonal heat kernel bounds. For graphs, the first general result relating isoperimetric inequalities to (discrete-time) heat kernel estimates was established by Varopoulos in [71].

Theorem 4.1.3 (Varopoulos, [71]). Suppose that $(\Gamma,\pi,\pi)$ is a weighted graph satisfying a $D$-dimensional isoperimetric inequality with respect to the graph metric $d$. Then there exists $C > 0$ such that
$$p_n(x,y) \le Cn^{-D/2}, \qquad (4.4)$$
for all $x,y \in G$ and $n \in \mathbb{Z}^+$.

Versions of this result also hold in continuous time, and it is likely that they also hold for arbitrary measure weighted graphs $(\Gamma,\pi,\theta)$ if one assumes that a $D$-dimensional isoperimetric inequality with respect to an adapted metric holds.

Remark 4.1.4. In general, $D$-dimensional isoperimetric inequalities are not equivalent to on-diagonal heat kernel estimates with a rate of decay like $n^{-D/2}$. The optimal isoperimetric inequality for a graphical Sierpinski carpet is $3/2$-dimensional, but the heat kernel on this graph decays like $n^{-D/2}$ for some $D > 3/2$; see [5] and [61].

The relationship between on-diagonal heat kernel decay and volume growth is more complicated. In general, estimates on the volume growth are insufficient to obtain on-diagonal heat kernel bounds. To control the on-diagonal heat kernel decay using the volume growth function, one typically needs a relative Faber-Krahn inequality to hold; see [16]. However, in the special case of Cayley graphs of finitely generated groups, volume growth estimates are equivalent to isoperimetric inequalities; see Theorem 4.9 of [41].

We conclude with some examples:

Example 4.1.5 (Euclidean lattice). Let $\Gamma = (\mathbb{Z}^d,E^d)$, equipped with the standard weights and the measure $\deg$.
One can show that $\mathbb{Z}^d$ satisfies a $d$-dimensional isoperimetric inequality (see, for example, Theorem 4.9 of [41]), and consequently there exists $C_d > 0$ such that
$$p_n(x,y) \le C_d n^{-d/2}$$
for all $x,y \in G$. In fact, one has $p_{2n}(x,y) \asymp n^{-d/2}$ and $p_t(x,y) \asymp t^{-d/2}$ via a local central limit theorem or otherwise; see the discussion in Example 6.1.7.

Example 4.1.6 (Groups of superpolynomial volume growth). Let $\Gamma$ be the Cayley graph of a finitely generated group $H$ equipped with the natural weights and the measure $\deg$, and write $V(r) = V_d(e,r)$ for the volume growth of $\Gamma$ with respect to the measure $\pi$. Suppose that there exist $\gamma \in (0,1]$ and $c > 0$ such that $\log V(r) \ge cr^\gamma$ for all $r \ge 0$. Using Theorem 4.9 of [41], this volume growth lower bound corresponds to a $\Psi$-isoperimetric inequality with $\Psi(t) = c_1 t(\log t)^{-1/\gamma}$, which allows one to conclude that there exists $c_2 > 0$ such that
$$p_n(x,y) \le \exp(-c_2 n^{\gamma/(\gamma+2)})$$
for all $x,y \in G$ and $n \in \mathbb{Z}^+$.

In particular, in the case of exponential volume growth ($\gamma = 1$), this gives on-diagonal heat kernel decay of $\exp(-cn^{1/3})$, and this is (up to lower-order terms) the correct rate for the lamplighter group (see [65]).

This estimate is not sharp for the $d$-regular tree $\mathbb{T}_d$ equipped with the natural weights. Indeed, since the bottom of the spectrum of this graph is strictly positive, the heat kernel associated with $(\mathbb{T}_d,\deg)$ decays at an exponential rate; see Remark 2.5.3 or Theorem 4.2.2.

Remark 4.1.7. It is not obvious that there exist groups which satisfy $c_1 e^{r^{\gamma_1}} \le V(r) \le c_2 e^{r^{\gamma_2}}$ with $0 < \gamma_1 \le \gamma_2 < 1$ for all $r \ge 0$. These groups are called groups of intermediate growth and their existence was established by Grigorchuk in [34].

Example 4.1.8 (Supercritical percolation clusters). On-diagonal estimates for supercritical percolation clusters require tools which are somewhat beyond the scope of this section. Intuitively, on a sufficiently large scale, a $d$-dimensional supercritical percolation cluster "looks" much like $\mathbb{Z}^d$, although one can find arbitrarily large $1$-dimensional sections.
Consequently, one needs to give the random walk a certain amount of time to escape from local obstructions before one observes heat kernel decay similar to that in $\mathbb{Z}^d$.

Fix $d \ge 2$, and suppose that $p > p_c(d)$. We write $q^\omega_t(x,y)$ for the heat kernel of the CSRW on the $d$-dimensional supercritical percolation cluster $C_{p,\infty}(\omega)$; the dependence on $\omega$ of $q^\omega_t(x,y)$ is a consequence of $C_{p,\infty}(\omega)$ being random. In [58], Mathieu and Remy used local isoperimetric inequalities to establish the following on-diagonal heat kernel bound for the CSRW on $C_{p,\infty}(\omega)$:

Theorem 4.1.9 (Mathieu and Remy, [58]). There exist random variables $N_x(\omega) < \infty$ and non-random constants $c_1, c_2$ such that almost surely, for all $x \in G$ and $t > 0$,
$$q^\omega_t(x,x) \le \begin{cases} c_1 t^{-1/2} & \text{if } 0 < t \le N_x(\omega), \\ c_2 t^{-d/2} & \text{if } N_x(\omega) < t. \end{cases}$$

4.2 Off-diagonal heat kernel estimates

Given $x,y \in G$, the semigroup property, Cauchy-Schwarz, and (4.1) yield
$$p_t(x,y) = \sum_{z \in G}p_{t/2}(x,z)p_{t/2}(z,y)\theta_z \le \Big(\sum_{z \in G}|p_{t/2}(x,z)|^2\theta_z\Big)^{1/2}\Big(\sum_{z \in G}|p_{t/2}(z,y)|^2\theta_z\Big)^{1/2} = (p_t(x,x)p_t(y,y))^{1/2}.$$
Consequently, the uniform on-diagonal bound
$$\sup_{x \in G}p_t(x,x) \le f(t)$$
implies the global off-diagonal bound
$$\sup_{x,y \in G}p_t(x,y) \le f(t).$$
However, we can typically do better than this. In general, we want to obtain off-diagonal estimates which contain some sort of additional factor involving the adapted distance $\rho$. For example, on $\mathbb{R}^d$, the heat kernel is given by
$$q_t(x,y) = \frac{1}{(2\pi t)^{d/2}}\exp\Big(-\frac{\|x-y\|^2}{2t}\Big),$$
and Riemannian manifolds admit a similar Gaussian-type off-diagonal estimate, as was proved in [38]. Unfortunately, it is known that in general Gaussian upper bounds for the heat kernel do not hold in the setting of weighted graphs. Indeed, if $p_t(x,y)$ denotes the heat kernel of the CSRW $X$ on $\mathbb{Z}$, and $Y$ denotes the discrete skeleton of $X$, then if $D \gg 1$,
$$p_1(0,D) \ge \frac{1}{\theta_D}\mathbb{P}(N_1 = D,\ Y_0 = 0,\dots,Y_D = D) = e^{-1}\cdot\frac{1}{D!}\cdot\frac{1}{2^{D+1}} \approx e^{-D\log D},$$
where the first inequality comes from assuming that the random walk makes exactly $D$ jumps by time $1$, and jumps along the unique shortest path from $0$ to $D$.
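The asymptotics claimed in the display above can be checked directly: by Stirling's formula, the logarithm of the exact lower bound $e^{-1}(1/D!)2^{-(D+1)}$ is $-D\log D\,(1+o(1))$. A small sketch (the function name is ours, not the thesis's):

```python
import math

def log_poisson_lower_bound(D):
    """log of e**-1 * (1/D!) * 2**-(D+1): the probability that the CSRW
    on Z makes exactly D jumps by time 1, all to the right, divided by
    theta_D = 2.  Uses lgamma(D+1) = log(D!)."""
    return -1.0 - math.lgamma(D + 1) - (D + 1) * math.log(2.0)

# the ratio of this logarithm to -D log D tends to 1 as D grows
ratio = log_poisson_lower_bound(200) / (-200 * math.log(200))
```

This is the sense in which the bound is of Poisson type: the dominant term in the exponent is $-D\log D$, much smaller than any Gaussian exponent $-cD^2$ would permit at time $t = 1$.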
This observation appears in [4], although it is implicit in earlier work of Davies. In fact, when $d(x,y) \gg t$, this lower bound is essentially sharp. We will refer to estimates of this type as Poisson-type estimates.

In [26], the author proved that for general continuous-time simple random walks on weighted graphs, one can prove two different sets of heat kernel estimates, both involving an adapted metric $\rho$. When $t \ge C\rho(x,y)$, a Gaussian-type upper bound for the heat kernel holds (and is essentially sharp), and a weaker, Poisson-type upper bound for the heat kernel holds everywhere, but is only optimal when $t \ll \rho(x,y)$.

We will begin with the proof of the Poisson-type upper bounds, which will also be used in the proof of the Gaussian-type upper bounds. The techniques used to obtain Poisson-type upper bounds for the heat kernel are primarily functional-analytic and were developed by Davies in [18] and [19], where he used them to obtain off-diagonal bounds involving several different choices of distance functions, including the graph metric $d$ and the intrinsic metric $d_I$. Subsequently, the author used these estimates in [26] to obtain Poisson-type estimates involving adapted metrics.

We begin with the following result of Davies:

Theorem 4.2.1 (Davies [18], Proposition 5 and Theorem 8). For all $x,y \in G$ and $t > 0$,
$$p_t(x,y) \le (\theta_x\theta_y)^{-1/2}\inf_{\psi \in L^\infty(G)}\exp(\psi(x) - \psi(y) + c(\psi)t),$$
where $c(\psi) := \sup_{x \in G}b(\psi,x) - \lambda_0(\Gamma,\pi,\theta)$, and
$$b(\psi,x) := \frac{1}{2\theta_x}\sum_{y \sim x}\pi_{xy}\big(e^{\psi(y)-\psi(x)} + e^{\psi(x)-\psi(y)} - 2\big).$$

We will use this to prove the following Poisson-type upper bound for the heat kernel, which appeared first in [26]:

Theorem 4.2.2. Let $\rho$ be a $1$-adapted metric on $(\Gamma,\pi,\theta)$ which satisfies $\rho(x,y) \le 1$ for all $x \sim y$. For all $x,y \in G$ and $t > 0$,
$$p_t(x,y) \le (\theta_x\theta_y)^{-1/2}\exp\Big(-\frac{1}{2}\rho(x,y)\log\Big(\frac{\rho(x,y)}{2et}\Big) - \lambda_0(\Gamma,\pi,\theta)t\Big).$$

Proof. Fix a $1$-adapted metric $\rho$, as well as vertices $x_1,x_2 \in G$, and set $D := \rho(x_1,x_2)$. For $\lambda > 0$, we define $\psi_\lambda := \lambda(D \wedge \rho(\cdot,x_1)) \in L^\infty(G)$. Using the triangle inequality for $\rho$ and the fact that the function $t \mapsto$
$e^t + e^{-t} = 2\cosh(t)$ is increasing for $t \ge 0$, we estimate
$$b(\psi_\lambda,x) := \frac{1}{2\theta_x}\sum_{y \sim x}\pi_{xy}\big(e^{\psi_\lambda(y)-\psi_\lambda(x)} + e^{\psi_\lambda(x)-\psi_\lambda(y)} - 2\big) \le \frac{1}{2\theta_x}\sum_{y \sim x}\pi_{xy}\big(e^{\lambda\rho(x,y)} + e^{-\lambda\rho(x,y)} - 2\big).$$
Using the inequality $e^t + e^{-t} - 2 \le t^2e^t$, which is valid for $t \ge 0$, we get
$$b(\psi_\lambda,x) \le \frac{1}{2\theta_x}\sum_{y \sim x}\pi_{xy}\big(e^{\lambda\rho(x,y)} + e^{-\lambda\rho(x,y)} - 2\big) \le \frac{1}{2\theta_x}\sum_{y \sim x}\pi_{xy}\big(\lambda^2\rho^2(x,y)e^{\lambda\rho(x,y)}\big) \le \frac{1}{2}\lambda^2e^\lambda\Big(\frac{1}{\theta_x}\sum_{y \sim x}\pi_{xy}\rho^2(x,y)\Big) \le \frac{1}{2}\lambda^2e^\lambda.$$
Note that we have used the $1$-adaptedness of $\rho$ and the bound $\rho(x,y) \le 1$ for $x \sim y$ in this step. Since this holds uniformly in $x$, we have that
$$\sup_{x \in G}b(\psi_\lambda,x) \le \frac{1}{2}\lambda^2e^\lambda,$$
and consequently
$$c(\psi_\lambda) := \sup_{x \in G}b(\psi_\lambda,x) - \lambda_0(\Gamma,\pi,\theta) \le \frac{1}{2}\lambda^2e^\lambda - \lambda_0(\Gamma,\pi,\theta).$$
Set $f(\lambda) := \frac{1}{2}\lambda^2e^\lambda$. By Theorem 4.2.1, we get that for $\lambda > 0$ and all $t > 0$,
$$p_t(x_1,x_2) \le (\theta_{x_1}\theta_{x_2})^{-1/2}\exp(\psi_\lambda(x_1) - \psi_\lambda(x_2) + c(\psi_\lambda)t) \le (\theta_{x_1}\theta_{x_2})^{-1/2}\exp(-\lambda\rho(x_1,x_2) + f(\lambda)t - \lambda_0(\Gamma,\pi,\theta)t)$$
$$= (\theta_{x_1}\theta_{x_2})^{-1/2}\exp\Big(t\Big(-\lambda\frac{\rho(x_1,x_2)}{t} + f(\lambda)\Big) - \lambda_0(\Gamma,\pi,\theta)t\Big).$$
Optimizing over $\lambda > 0$ yields
$$p_t(x_1,x_2) \le (\theta_{x_1}\theta_{x_2})^{-1/2}\exp\Big(tf^*\Big(\frac{\rho(x_1,x_2)}{t}\Big) - \lambda_0(\Gamma,\pi,\theta)t\Big), \qquad (4.5)$$
where $f^*(\gamma) := \inf_{\lambda > 0}(-\gamma\lambda + f(\lambda))$ is the Legendre transform of $f$. Now, note that for $\lambda > 0$, $f(\lambda) \le e^{2\lambda} =: g(\lambda)$. If $f(\lambda) \le g(\lambda)$ for $\lambda > 0$, then $f^*(\gamma) \le g^*(\gamma)$, so
$$f^*(\gamma) \le g^*(\gamma) = -\frac{\gamma}{2}\log\Big(\frac{\gamma}{2e}\Big), \qquad (4.6)$$
and combining (4.5) with (4.6) gives the estimate
$$p_t(x_1,x_2) \le (\theta_{x_1}\theta_{x_2})^{-1/2}\exp\Big(-\frac{1}{2}\rho(x_1,x_2)\log\Big(\frac{\rho(x_1,x_2)}{2et}\Big) - \lambda_0(\Gamma,\pi,\theta)t\Big),$$
which is valid for all $t > 0$.

Remark 4.2.3. These estimates incorporate very little geometric information about the underlying graph. In particular, they do not have a term which corresponds to the on-diagonal heat kernel decay (which in turn may reflect the volume growth or isoperimetric properties of the underlying graph).

For example, if $\theta = \pi$, and one takes $\rho = d$ (which is an adapted metric), then Theorem 4.2.2 yields the same Poisson-type upper bound for the heat kernel of all weighted graphs with the same value of $\lambda_0(\Gamma,\pi,\theta)$, but the behavior of the heat kernel on two such graphs may be very dissimilar.

As shown in [26], these techniques may also be used to obtain weak Gaussian-type heat kernel estimates:

Theorem 4.2.4 (F., [26]). Let $\rho$
be a $1$-adapted metric on $(\Gamma,\pi,\theta)$ satisfying $\rho(x,y) \le 1$ for all $x \sim y$. For all $x,y \in G$ and $t \ge \rho(x,y)$,
$$p_t(x,y) \le (\theta_x\theta_y)^{-1/2}\exp\Big(-\frac{\rho^2(x,y)}{2t}\Big(1 - \frac{\rho(x,y)}{t}\Big) - \lambda_0(\Gamma,\pi,\theta)t\Big).$$

Proof. The proof proceeds as above, except that the estimate $e^t + e^{-t} - 2 \le t^2e^t$ is replaced by the estimate
$$e^t + e^{-t} - 2 \le t^2\Big(1 + \frac{te^t}{6}\Big),$$
which was also used in [18]. This leads to similar estimates as the ones obtained above, except with $f(\lambda) := \frac{1}{2}\lambda^2\big(1 + \frac{\lambda e^\lambda}{6}\big)$. In [18], it is shown that
$$(2f)^*(\gamma) \le -\frac{\gamma^2}{4} + \frac{\gamma^3}{8},$$
and since $f^*(\gamma) = \frac{1}{2}(2f)^*(2\gamma)$, we obtain that
$$f^*(\gamma) \le -\frac{1}{2}\gamma^2 + \frac{1}{2}\gamma^3 = -\frac{1}{2}\gamma^2(1 - \gamma),$$
and this estimate implies the desired result.

We begin our discussion of Gaussian upper bounds with a brief discussion of heat kernel estimates on Riemannian manifolds. A remarkable result of Grigor'yan states that if one has on-diagonal estimates for the heat kernel $q_t(x,y)$ of a Riemannian manifold $M$ at two points $x_1, x_2$, then one can establish a Gaussian-type upper bound for the function $t \mapsto q_t(x_1,x_2)$.

To state this result, we need some additional terminology.

A monotonically increasing function $g : (a,b) \to (0,\infty)$ is $(A,\Delta)$-regular on $(a,b)$ ($A \ge 1$, $\Delta > 1$, $0 \le a < b \le \infty$) if for all $a < t_1 < t_2 < \Delta^{-1}b$, the inequality
$$\frac{g(\Delta t_1)}{g(t_1)} \le A\frac{g(\Delta t_2)}{g(t_2)}$$
holds. If $a = 0$ and $b = \infty$, then we say that $g$ is $(A,\Delta)$-regular.

For appropriate values of $A$ and $\Delta$, this set of functions includes polynomial functions such as $ct^{d/2}$, exponential functions such as $c_1\exp(c_2t^\gamma)$, and various piecewise combinations of $(A,\Delta)$-regular functions such as $c_1t^{d_1/2}\mathbf{1}_{(0,T]} + c_2t^{d_2/2}\mathbf{1}_{(T,\infty)}$, where $c_1$ and $c_2$ are chosen to ensure that the resulting function is continuous.

Grigor'yan's result is as follows:

Theorem 4.2.5 (Grigor'yan, [38]). Let $x_1, x_2$ be distinct points on a smooth Riemannian manifold $M$ with Riemannian metric $d_M$, and suppose that there exist $(A,\Delta)$-regular functions $f_1, f_2$ such that, for all $t > 0$ and $i \in \{1,2\}$,
$$q_t(x_i,x_i) \le \frac{1}{f_i(t)}.$$
(4.7)
Then for any $D > 2$ and all $t > 0$, the Gaussian upper bound
$$q_t(x_1,x_2) \le \frac{4A}{(f_1(\delta t)f_2(\delta t))^{1/2}}\exp\Big(-\frac{d_M^2(x_1,x_2)}{2Dt}\Big) \qquad (4.8)$$
holds, where $\delta = \delta(D,\Delta)$.

The remarkable aspect of this result is that one only requires control of the heat kernel at two points. Of course, if in place of (4.7) one has a uniform on-diagonal estimate for the heat kernel, then (4.8) is valid for all $x_1, x_2 \in M$.

We will prove an analogue of Theorem 4.2.5 for graphs, where the distance function appearing in the Gaussian exponential factor is an adapted metric. The first version of our result is as follows:

Theorem 4.2.6. Let $(\Gamma,\pi,\theta)$ be a weighted graph with a $1$-adapted metric $\rho$ satisfying $\rho(x,y) \le 1$ whenever $x \sim y$, and suppose that there exists a constant $C_\theta > 0$ such that $\theta_x \ge C_\theta$ for each $x \in G$. Let $f_1, f_2$ be $(A,\Delta)$-regular functions satisfying, for $i \in \{1,2\}$,
$$\sup_{0 < t < \infty}\frac{f_i(t)}{e^{t^{1/2}}} \le A. \qquad (4.9)$$
Suppose also that there exist $x_1, x_2 \in G$ such that for all $t > 0$ and $i \in \{1,2\}$,
$$p_t(x_i,x_i) \le \frac{1}{f_i(t)}. \qquad (4.10)$$
Then there exist constants $C_1(A,\Delta,C_\theta), C_2(\Delta), \delta(\Delta) > 0$, such that for all $t \ge 1 \vee \rho(x_1,x_2)$,
$$p_t(x_1,x_2) \le \frac{C_1}{(f_1(\delta t)f_2(\delta t))^{1/2}}\exp\Big(-\frac{C_2\rho^2(x_1,x_2)}{t}\Big).$$

Remark 4.2.7. The main utility of this result is in settings where $f_i(t)$ has polynomial growth, so that (4.9) is satisfied. Suppose that for $i \in \{1,2\}$, $f_i(t) = f(t) := \exp(ct^\gamma)$ for some $c, \gamma > 0$. By Cauchy-Schwarz, $p_t(x_1,x_2) \le (p_t(x_1,x_1)p_t(x_2,x_2))^{1/2}$, and hence $p_t(x_1,x_2) \le \exp(-ct^\gamma)$ for all $t > 0$. On the other hand, by Theorem 4.2.4, if $C > 1$ and $t \ge C\rho(x_1,x_2)$,
$$p_t(x_1,x_2) \le (\theta_{x_1}\theta_{x_2})^{-1/2}\exp\Big(-\frac{c_1\rho^2(x_1,x_2)}{t}\Big).$$
If $0 \le x \le y$ and $0 \le x \le z$, then $x \le (yz)^{1/2}$, so for $t \ge C\rho(x_1,x_2)$,
$$p_t(x_1,x_2) \le (\theta_{x_1}\theta_{x_2})^{-1/4}\exp\Big(-\frac{c}{2}t^\gamma - \frac{c_1\rho^2(x_1,x_2)}{2t}\Big) = \frac{c_2}{f(c_3t)}\exp\Big(-\frac{c_1\rho^2(x_1,x_2)}{2t}\Big),$$
so that a Gaussian upper bound of the desired form can be obtained very easily. Moreover, as $t \to \infty$, it is the on-diagonal term which provides most of the decay in the heat kernel and not the Gaussian exponential factor.

Remark 4.2.8. Let us note that if $f$ is $(A_1,\Delta)$-regular, and $A_2 \ge A_1 \ge 1$, then $f$ is also $(A_2,\Delta)$-regular.
Thus, as long as there exist $A_1, A_2, A_3 \ge 1$ such that $f_1$ is $(A_1,\Delta)$-regular, $f_2$ is $(A_2,\Delta)$-regular, and $\sup_{0 < t < \infty}f_i(t)/e^{t^{1/2}} \le A_3$, then for $A = A_1 \vee A_2 \vee A_3$, $f_1, f_2$ are $(A,\Delta)$-regular, and (4.9) is satisfied.

If one replaces (4.10) with a uniform on-diagonal heat kernel bound, we get the following corollary:

Corollary 4.2.9. Let $(\Gamma,\pi,\theta)$ be a weighted graph with a $1$-adapted metric $\rho$ satisfying $\rho(x,y) \le 1$ whenever $x \sim y$, and suppose that there exists a constant $C_\theta > 0$ such that $\theta_x \ge C_\theta$ for each $x \in G$. Let $f$ be an $(A,\Delta)$-regular function satisfying (4.9). If for each $t > 0$, the uniform heat kernel condition
$$\sup_{x \in G}p_t(x,x) \le \frac{1}{f(t)}$$
is satisfied, then there exist constants $C_1(A,\Delta,C_\theta), C_2(\Delta), \delta(\Delta) > 0$ such that for all $x_1, x_2 \in G$ and $t \ge 1 \vee \rho(x_1,x_2)$,
$$p_t(x_1,x_2) \le \frac{C_1}{f(\delta t)}\exp\Big(-\frac{C_2\rho^2(x_1,x_2)}{t}\Big).$$

If $f_1$ and $f_2$ are only $(A,\Delta)$-regular on $(T_1,T_2)$, then we obtain a restricted version of Theorem 4.2.6.

Theorem 4.2.10. Let $(\Gamma,\pi,\theta)$ be a weighted graph with a $1$-adapted metric $\rho$ satisfying $\rho(x,y) \le 1$ whenever $x \sim y$, and suppose that there exists a constant $C_\theta > 0$ such that $\theta_x \ge C_\theta$ for each $x \in G$. Let $f_1, f_2$ be $(A,\Delta)$-regular functions on $(T_1,T_2)$ satisfying, for $i \in \{1,2\}$,
$$\sup_{t \in (T_1,T_2)}\frac{f_i(t)}{e^{t^{1/2}}} \le A.$$
If there exist vertices $x_1, x_2 \in G$ such that for all $t \in (T_1,T_2)$ and $i \in \{1,2\}$, the estimate
$$p_t(x_i,x_i) \le \frac{1}{f_i(t)}$$
holds, then there exist constants $C_1(A,\Delta,C_\theta), C_2(\Delta), \delta(\Delta) > 0$ such that for all $t > 0$ satisfying $72\Delta^4e^4T_1^2 \vee 1 \vee \rho(x_1,x_2) < t < T_2$,
$$p_t(x_1,x_2) \le \frac{C_1}{(f_1(\delta t)f_2(\delta t))^{1/2}}\exp\Big(-\frac{C_2\rho^2(x_1,x_2)}{t}\Big).$$

Remark 4.2.11. The primary use of this result is in the case that $T_2 = \infty$, in which case one obtains Gaussian upper bounds for all sufficiently large times. In random environments such as supercritical percolation clusters, the functions which appear in existing on-diagonal heat kernel upper bounds may fail to be $(A,\Delta)$-regular while being $(A,\Delta)$-regular on $(T,\infty)$ for some $T > 0$; Theorem 4.2.10 is useful for obtaining Gaussian upper bounds in this setting.
Theorem 4.2.10 was also used in [1] to obtain Gaussian heat kernel estimates for the random conductance model.

We proceed to the proof of Theorem 4.2.6:

Fix a weighted graph $(\Gamma,\pi,\theta)$ where the measure $\theta$ satisfies $\theta_x \ge C_\theta > 0$, and a $1$-adapted metric $\rho$ satisfying $\rho(x,y) \le 1$ whenever $x \sim y$. Assume that there exist a point $x_0 \in G$ and an $(A,\Delta)$-regular function $f$ satisfying (4.9) such that
$$p_t(x_0,x_0) \le \frac{1}{f(t)} \qquad (4.11)$$
for $t > 0$. Let $(G_j)_{j \in \mathbb{Z}^+}$ be a sequence of finite connected subsets which increase to $G$.

We define $u(x,t) := p_t(x_0,x)$ and $u_j(x,t) := p^{G_j}_t(x_0,x)$. We will begin by establishing a maximum principle for the quantity
$$J_{R,k}(t) := \sum_{x \in G_k}u_k^2(x,t)\xi(x,t)\theta_x,$$
where the function $\xi$ is defined as $\xi := \exp \circ \psi_R$, where $\psi_R$ is a "cutoff" function which will be defined subsequently.

We begin with the following simple lemma:

Lemma 4.2.12. For $H \subseteq G$ and $k \in \mathbb{Z}^+$,
$$\sum_{x \in H}u_k^2(x,t)\theta_x \le \sum_{x \in H}u^2(x,t)\theta_x \le \frac{1}{f(2t)}.$$

Proof. The leftmost inequality follows from the fact that $0 \le u_k \le u$, and the rightmost inequality follows by estimating
$$\sum_{x \in H}u^2(x,t)\theta_x \le \sum_{x \in G}u^2(x,t)\theta_x = p_{2t}(x_0,x_0) \le \frac{1}{f(2t)},$$
where the rightmost inequality is simply (4.11).

Fix $k \in \mathbb{Z}^+$. Differentiating $J_{R,k}$ with respect to $t$, and suppressing the $t$ argument for brevity, yields the following:
$$\frac{d}{dt}J_{R,k}(t) = \sum_{x \in G}(\partial_t u_k(x))(2u_k(x)\xi(x))\theta_x + \sum_{x \in G}(\partial_t\xi(x))u_k^2(x)\theta_x = \sum_{x \in G_k}(L_\theta u_k(x))(2u_k(x)\xi(x))\theta_x + \sum_{x \in G_k}(\partial_t\xi(x))u_k^2(x)\theta_x.$$
In the above, we used the fact that $u_k$ solves the heat equation on $G_k$. Note that $(\partial_t u_k(x))(2u_k(x)\xi(x)) = (L_\theta u_k(x))(2u_k(x)\xi(x))$ even if $x_0 \notin G_k$ or $x \notin G_k$. Using the fact that $u_k(y) = 0$ for $y \in G \setminus G_k$, a Gauss-Green type calculation yields
$$\sum_{x \in G_k}(L_\theta u_k(x))(2u_k(x)\xi(x))\theta_x = \sum_{x \in G_k}\sum_{y \in G}(u_k(y) - u_k(x))(2u_k(x)\xi(x))\pi_{xy}$$
$$= \sum_{x \in G_k}\sum_{y \in G_k}(u_k(y) - u_k(x))(2u_k(x)\xi(x))\pi_{xy} + \sum_{x \in G_k}\sum_{y \in G \setminus G_k}(u_k(y) - u_k(x))(2u_k(x)\xi(x))\pi_{xy}$$
$$= \sum_{x \in G_k}\sum_{y \in G_k}(u_k(y) - u_k(x))(2u_k(x)\xi(x))\pi_{xy} + \sum_{x \in G_k}\sum_{y \in G \setminus G_k}(-u_k(x))(2u_k(x)\xi(x))\pi_{xy}$$
$$\le \sum_{x \in G_k}\sum_{y \in G_k}(u_k(y) - u_k(x))(2u_k(x)\xi(x))\pi_{xy} = -\sum_{x,y \in G_k}(u_k(y) - u_k(x))\big((u_k\xi)(y) - (u_k\xi)(x)\big)\pi_{xy}.$$
(4.12)The equality (4.12) follows from interchanging the order of summation, which60is permissible since uk has finite support. Completing the square, we see that?x,y?Gk(uk(y)? uk(x))(uk?(y)? uk?(x))pixy =?x,y?Gk?(y)(uk(y)? uk(x))2pixy+?x,y?Gkuk(x)(uk(y)? uk(x))(?(y)? ?(x))pixy? ?14?x,y?Gku2k(x)(?(x)? ?(y))2?(y)pixy.It follows thatddtJR,k(t) ?14?x,y?Gku2k(x)(?(x)? ?(y))2?(y)pixy +?x?Gk(??t?(x))u2k(x)?x=?x?Gku2k(x)?y?Gk(?xpix??t?(x) +14(?(x)? ?(y))2?(y))pixy=?x?Gku2k(x)?(x)?y?Gk(?xpix??t?(x) +14(?(x)2 ? 2?(x)?(y) + ?(y)2?(x)?(y)))pixy=?x?Gku2k(x)?(x)?y?Gk(?xpix??t?(x) +12(cosh(?(x)? ?(y))? 1))pixy.Now, we define the cutoff function ?R. Set ?R := (R? ?(?, x0))+, and let?R(x, t) := ???2R(x) + ?s? t.Here R ? 0, t > 0, and s := s(t) > t are parameters that will be allowed tovary, and ?, ? > 0 are parameters that will be fixed. Given ? > 1, setK? := sup{t > 0 : 2 cosh t? 2 ? ?t2}. (4.13)For the remainder of this proof, we will fix ?, ?, ? so that the following con-61ditions are satisfied:? > 1, (4.14)? <1?, (4.15)? ???24(1? ??), (4.16)K??= 6?e2. (4.17)Let us show that such an assignment of constants is possible by exhibiting?0, ?0, ?0 which satisfy the above conditions. First, we choose ?0 = 2, so thatK?0 = 2.98 . . . ? 3; this satisfies (4.14). Next, since ?0 and ? are known, wemay define ?0 through (4.17), and estimate?0 :=K?06e2<12e2<1?0,so that (4.15) is also satisfied. We then choose ?0 to be?0 :=?0?204(1? ?0?0).Let us also note that (4.16) is equivalent to4???(? + 4?)? 1. (4.18)Once ?, ? and ? have been fixed, we have the following result:Proposition 4.2.13. (Maximum Principle) If the parameters ?, ? and ? sat-isfy (4.14),(4.15),(4.16),(4.17), and R ? 0, t > 0, and s > t satisfyR? 6e2(s? t) +12? 0, (4.19)then for each k ? Z+,??tJR,k(t) ? 0.62Proof. Given k ? Z+ and x ? Gk, define?k(x) :=?y?Gkpixy(?xpix??t?(x) +12(cosh(?(x)? ?(y))? 1)).Suppose that for all x ? Gk, whenever y ? x and y ? Gk, |?(x)? ?(y)| ? 
K?.Using (4.13), the inequality |?2R(x)??2R(y)| = |?R(x)??R(y)||?R(x)+?R(y)| ??(x, y)(2?R(x) + 1), and (4.14),(4.15), (4.16), and (4.18), we obtain?k(x) :=?y?Gkpixy(?xpix??t?(x) +12(cosh(?(x)? ?(y))? 1))??y?Gkpixy(?xpixddt?(x) +?4(?(x)? ?(y))2)= (s? t)?2?y?Gkpixy(??xpix(??2R(x) + ?) +??24(?2R(x)? ?2R(y))2)? (s? t)?2?y?Gkpixy(??xpix(??2R(x) + ?) +??24?2(x, y)(2?R(x) + 1)2)=??24(2?R(x) + 1)2(s? t)?2?x???1?x?y?Gk?2(x, y)pixy ?4??2??2R(x) + ?(2?R(x) + 1)2?????24(2?R(x) + 1)2(s? t)?2?x???1?x?y?Gk?2(x, y)pixy ? infu?04??2?u2 + ?(2u+ 1)2??=??24(2?R(x) + 1)2(s? t)?2?x??1?x?y?Gk?2(x, y)pixy ?4???(? + 4?)?????24(2?R(x) + 1)2(s? t)?2?x??1?x?y?Gk?2(x, y)pixy ? 1??? 0.63SinceddtJR,k(t) ??x?Gu2k(x)?(x)?k(x),we conclude thatddtJR,k(t) ? 0.Now, let us analyze the inequality|?x ? ?y| =??????(?2R(x)? ?2R(y))s? t?????? K?. (4.20)Since |?2R(x)? ?2R(y)| ? 2?R(x) + 1, for (4.20) to hold, it is sufficient to have?R(x) ?K?2?(s? t)?12, (4.21)and using (4.17) and the estimate ?R(x) ? R shows that (4.21) certainlyholds ifR? 6e2(s? t) +12? 0,which is precisely the condition in the statement of the Lemma.Now, for k ? Z+, we defineIR,k(t) :=?x?Gk\B?(x0,R)u2k(x)?x,IR(t) :=?x?G\B?(x0,R)u2(x, t)?x.By (4.2.12), all of these sums are finite, and by monotone convergence,limk??IR,k(t) = IR(t). (4.22)The maximum principle allows us to estimate IR, as follows:64Lemma 4.2.14. Suppose that R0 ? R1, and s > t0 ? t1 > 0 are such thatR0, s, t0 satisfy (4.19). ThenIR0(t0) ? exp( ?s? t0)IR1(t1)+ exp( ?s? t0)exp(??(R0 ?R1)2 + ?s? t1)1f(2t1).Proof. First, since ?R vanishes outside of B?(x0, R0), for each k ? Z+,IR0,k(t0) :=?x?Gk\B?(x0,R0)u2k(x0, t)?x? supx?Gk\B?(x0,R0)exp(??R0(x, t0))??x?Gk\B?(x0,R0)u2k(x, t) exp(?R0(x, t0))?x? exp( ?s? t0)?x?Gk\B?(x0,R0)u2k(x0, t) exp(?R0(x, t0))?x? exp( ?s? t0)JR0,k(t0).For ` ? [t1, t0],R0 ? 6e2(s? `) +12? R0 ? 6e2(s? t0) +12? 0,and consequently the maximum principle implies that JR0,k(t0) ? JR0,k(t1).65This gives us the following:IR0,k(t0) ? exp( ?s? t0)JR0,k(t1)= exp( ?s? 
t0)????x?Gk\B?(x0,R1)+?x?Gk?B?(x0,R1)??u2k(x, t1) exp(?R0(x, t1))?x? exp( ?s? t0)IR1,k(t1) + exp( ?s? t0)? supx?Gk?B?(x0,R1)exp(?R0(x, t1))?x?Gk?B?(x0,R1)u2k(x, t1)?x? exp( ?s? t0)IR1,k(t1)+ exp( ?s? t0)exp(??(R0 ?R1)2 + ?s? t1)1f(2t1).The last three inequalities follow from bounding above the exponential weightexp(?R0(x, t1)) by 1 on Gk \ B?(x0, R1), by using the inequality ?R0(x) ?R0 ?R1 on Gk ?B?(x0, R1), and using (4.2.12).Letting k ?? and using (4.22), we getIR0(t0) ? exp( ?s? t0)IR1(t1)+ exp( ?s? t0)exp(??(R0 ?R1)2 + ?s? t1)1f(2t1),which completes the proof of the lemma.We use this result to obtain the following estimate on IR:Proposition 4.2.15. Suppose that t0 ? R0 ? 1/2. There exist positiveconstants m0,m1, n0, n1, ?, which do not depend on either t0 or R0, suchthatIR0(t0) ? m01f(?t0)exp(?m1R20t0)+ n0 exp(?n1R0).66Remark 4.2.16. An estimate of this type is a key step in [38], but the anal-ogous estimate there does not contain a term of the form n0 exp(?n1R0);one can view this term as a consequence of the Poisson-type long range heatkernel decay of pt(x, y). The assumption (4.9) ensures that this term doesnot dominate the ?Gaussian? term m0 1f(?t0) exp(?m1R20t0).Proof. Given t0 ? R0 ? 1/2, we define sequences (tj)j?Z+ , (sj)j?Z+ ,(Rj)j?Z+bytj := t0??j,sj := 2tj,Rj :=(12+1j + 2)R0.Recall that ? > 1 is a parameter obtained from the (A, ?)?regularity of f .Simple estimation givesRj ?Rj+1 ?R0(j + 3)2,sj ? tj+1 =(2?1?)tj.As long asRj ? 6e2(sj ? tj) +12? 0, (4.23)Lemma 4.2.14 implies thatIRj(tj) ? exp(?sj ? tj)IRj+1(tj+1)+1f(2tj+1)exp(?sj ? tj)exp(??(Rj ?Rj+1)2 + ?sj ? tj+1). (4.24)Since sj ? tj ? 0 as j ? ?, (4.23) fails to hold for all sufficiently large j.Let N denote the largest j ? Z satisfying (4.23). Note that N ? 0, sinceR0 ? 6e2(s0 ? t0) +12= R0 ? 6e2t0 +12< 0.67We will need some estimates on the size of tN and RN in terms of the initialdata R0. By the definition of (Rj)j?Z+ , clearly14?R02< RN ? R0, (4.25)and the maximality of N implies thatRN ? 
6e2tN , (4.26)RN+1 > 6e2tN+1 ?12. (4.27)Combining (4.25) with (4.26) and (4.27) then gives the following estimates:124e2?112e2R0 <16e2RN ? tN <?2e2RN ??2e2R0, (4.28)124e2??112e2?R0 <16e2?RN+1 ? tN+1 <12e2RN+1 ?12e2R0. (4.29)Now, applying (4.24) for 0 ? j ? N ? 1 givesIR0(t0) ?N?k=0exp( ?sk ? tk)IRN+1(tN+1)+N?k=0(k?`=0exp( ?s` ? t`))exp(??(Rk ?Rk+1)2 + ?sk ? tk+1)1f(2tk+1):= S1 + S2.We estimate S1 as follows:S1 :=N?k=0exp( ?sk ? tk)IRN+1(tN+1)= exp(?t0N?k=0?k)IRN+1(tN+1)? exp(??? ? 11tN)IRN+1(tN+1)? exp(24??e2? ? 1)IRN+1(tN+1). (4.30)68We will estimate the IRN+1(tN+1) term subsequently. Now, as in [38], weestimateS2 :=N?k=0(k?`=0exp( ?s` ? t`))exp(??(Rk ?Rk+1)2 + ?sk ? tk+1)1f(2tk+1)?N?k=0exp(??(? ? 1)t0?k)exp(??(Rk ?Rk+1)2 + ?sk ? tk+1)1f(2tk+1)=N?k=0exp(??2(? ? 1)(2? ? 1)t0?k)exp(??(Rk ?Rk+1)2sk ? tk+1)1f(2tk+1)?N?k=0exp(??2(? ? 1)(2? ? 1)t0?k)exp(???(2? ? 1)?k(k + 3)4R20t0)1f(2tk+1).At this point, define ? > 0, which depends only on ? > 1, by? := infk?0?k+1(2? ? 1)(k + 2)(k + 3)4,so that for k ? 0,?(k + 2) ??k+1(2? ? 1)(k + 3)4.The (A, ?)?regularity of f gives, for 0 ? j ? k,f(2tj)f(2tj+1)? Af(2t0)f(2t1),and multiplying these estimates together yields1f(2tk+1)?1f(2t0)(Af(2t0)f(2t1))k+1=1f(2t0)exp((k + 1) log(Af(2t0)f(2t1))). (4.31)We remark that this is the only point in the proof where the (A, ?)?regularityof f is used.69Set L := log(Af(2t0)f(2t1))and insert (4.31) into our earlier estimate for S2 toobtainS2 ?1f(2t0)?N?k=0exp(??2(? ? 1)(2? ? 1)t0?k)exp(???(k + 2)R20t0)exp ((k + 1)L)?1f(2t0)exp(??2(? ? 1)(2? ? 1)tN)?N?k=0exp(???(k + 2)R20t0)exp ((k + 1)L)=1f(2t0)exp(24??2e2(? ? 1)(2? ? 1))exp(???R20t0)?N?k=0exp(?(k + 1)(??R20t0? L)).At this point, we divide into cases based on whether??R20t0? L ? log 2 (4.32)or not. If (4.32) holds, then we haveS2 ?1f(2t0)exp(24??2e2(? ? 1)(2? ? 1))exp(???R20t0)N?k=0exp (?(k + 1) log 2)?1f(2t0)exp(24??2e2(? ? 1)(2? ? 1))exp(???R20t0). (4.33)70If (4.32) does not hold, then we can estimate S2 byS2 ? 
IR0(t0)??x?Gu2(x, t0)?x?1f(2t0)?1f(2t0)exp(???R20t0+ log(Af(2t0)f(2t1))+ log 2)=2Af(2t1)exp(???R20t0). (4.34)It remains to estimate the quantity IRN (tN+1). From Theorem 4.2.2, the heatkernel satisfiespt(x, y) ? (?x?y)?1/2 exp(?12?(x, y) log(?(x, y)2et)).Hence,IRN+1(tN+1) :=?x?G\B?(x0,RN+1)u2(x, tN+1)?x? supx?G\B?(x0,RN+1)u(x, tN+1)?x?G\B?(x0,RN+1)u(x, tN+1)?x? supx?G\B?(x0,RN+1)u(x, tN+1).At this point, note that if t > 0 is fixed, the function?t(d) := exp(?12d log(d2et))71is nonincreasing for d ? 2t. Since RN+1 > 2e2tN+1, we getIRN+1(tN+1) ? supx?G\B?(x0,RN+1)u(x, tN+1)? C?1? ?tN+1(RN+1)? C?1? ?tN+1(2e2tN+1)= C?1? exp(?e2tN+1)? C?1? exp(?112?R0). (4.35)This is the only point in the argument at which we explicitly use the factthat the vertex weights are bounded below.Combining (4.30),(4.33),(4.34),(4.35) we haveIR0(t0) ? m01f(?t0)exp(?m1R20t0)+ n0 exp(?n1R0),where the constants ?,m0,m1, n0, n1 may be taken to be? :=2?, m0 := exp(24??2e2(? ? 1)(2? ? 1))? 2A, m1 := ??,n0 := C?1? exp(24??e2? ? 1), n1 :=112?.Remark 4.2.17. One can decrease some of the above constants at the cost ofincreasing other ones. In particular, one does not want to choosing ? to betoo close to 1, or the constants m0 and n0 will be very large. In applications,one will often have the choice of several values of ?; for example, if f(t) = t?,one may choose any ? > 1. One also has the option of using the fact that(A, ?)?regularity implies (A2n, ?2n)?regularity to increase ? at the cost ofincreasing A (and hence m0). However, choosing ? excessively large willcause ? and n1 to be undesirably close to zero.72Given H ? G, ? > 0, and D > 0, defineE?,D,H(x0, t) =?x?Hu2(x, t) exp(?(?(x, x0) ?D)2t)?x.Using the previous estimates, we can estimate this as follows:Proposition 4.2.18. There exist constants ?0, C, ?0 > 0 such that for t ?12 ?D2 ,E?0,D,G(x0, t) ?Cf(?0t).Proof. Fix t ? 12 ?D2 , and choose ?0 to satisfy the inequalities 16?0?m1 < 0,8?0 ? 
n1 < 0, where m1, n1 are the constants in Lemma 4.1.Define M to be the largest nonnegative integer satisfying 2M ? t1/2; if thereis no such nonnegative integer, set M = 0. We can write G as the disjointunion of the sets Aj (0 ? j ?M + 1), whereA0 := {x ? G : ?(x0, x) ? t1/2},Ak := {x ? G : 2k?1t1/2 < ?(x0, x) ? 2kt1/2} for 1 ? k ?M,AM+1 := {x ? G : ?(x0, x) > 2M t1/2}.Consequently, we haveE?0,D,G(x0, t) =M+1?j=0E?0,D,Aj(x0, t), (4.36)and it remains to estimate the terms on the right hand side of (4.36).On A0, the exponential weight exp(?0(?(x,x0)?D)2t)is bounded above by e?0 ,and henceE?0,D,A0(x0, t) ? e?0?x?A0u2(x, t)?x ? e?01f(2t)? e?01f(?t). (4.37)For 1 ? j ?M , on Aj, the exponential weight exp(?0(?(x,x0)?D)2t)is boundedabove by exp(?04j). Since 2j?1t1/2 ? t, we may apply Proposition 4.2.15 to73obtainM?j=1E?0,D,Aj(x0, t) ?M?j=1exp(?04j)I2jt1/2(t)?M?j=1exp(?04j)(m01f(?t)exp(?m14j) + n0 exp(?n12j?1t1/2))= m01f(?t)M?j=1exp((4?0 ?m1)4j?1) + n0M?j=1exp(2j?1(2?02j ? n1t1/2))? m01f(?t)M?j=1exp((4?0 ?m1)4j?1) + n0M?j=1exp(2j?1(4?0 ? n1)t1/2))? m01f(?t)M?j=1exp((4?0 ?m1)4j?1)+ n0 exp((4?0 ? n1)t1/2)M?j=1exp((2j?1 ? 1)(4?0 ? n1)t1/2)? m01f(?t)M?j=1exp((4?0 ?m1)4j?1)+ n0 exp((4?0 ? n1)t1/2)M?j=1exp(1?2(2j?1 ? 1)(4?0 ? n1))? m0T01f(?t)+ n0T1 exp((4?0 ? n1)t1/2),whereT0 :=??j=1exp((4?0 ?m1)4j?1) <?,T1 :=??j=1exp(1?2(2j?1 ? 1)(4?0 ? n1))<?.By (4.9),exp((4?0 ? n1)t1/2) ?Af((4?0 ? n1)2t),74so thatM?j=1E?0,D,Aj(x0, t) ? (m0T0 + n0T1A)1f((? ? (4?0 ? n1)2)t). (4.38)On AM+1, the exponential weight exp(?0(?(x,x0)?D)2t)is bounded above byexp(?0D2t)? exp(4?0t), since D ? 2t. By definition, we have 12t1/2 < 2M ?t1/2, and hence another application of Proposition 4.2.15 yieldsE?0,D,AM+1(x0, t) ? exp(4?0t)I2M t1/2(t)? exp(4?0t)It/2(t)? m01f(?t)exp(4?0t?m1t4)+ n0 exp(4?0t? n1t2)= m01f(?t)exp(14(16?0 ?m1)t)+ n0 exp(12(8?0 ? n1)t)? m01f(?t)exp(18(16?0 ?m1))+ n0 exp(12(8?0 ? n1)t)? m01f(?t)exp(18(16?0 ?m1))+ n0 exp(12?2(8?0 ? n1)t1/2).By (4.9) again,exp(12?2(8?0 ? n1)t1/2)?Af(1/8 ? (8?0 ? 
n1)2t),and soE?0,D,AM+1(x0, t) ?(m0 exp(18(16?0 ?m1))+ n0A)?1f((? ? 1/8 ? (8?0 ? n1)2t). (4.39)Combining (4.36) with (4.37),(4.38), and (4.39) completes the proof.We now are able to prove Theorem 4.2.6:Proof. Set D := ?(x1, x2), and assume that t ? 1 ? D. Then t2 ?12 ?D2 ,75so we may apply Proposition 4.2.18 with the points x1 and x2 (at which wehave the on-diagonal estimates (4.10)) to obtain positive constants c, C, ?such that, for t ? 1 ?D,Ec,D,G(x1, t/2) ?Cf1(?t/2),Ec,D,G(x2, t/2) ?Cf2(?t/2).The truncated distance ?D := ? ? D satisfies ?2(x1, x2) = ?2D(x1, x2) ?2(?2D(x1, x) + ?2D(x, x2)) for all x ? G. Consequently, for t ? 1 ? D, thesemigroup property and Cauchy-Schwarz yieldpt(x1, x2) =?x?Gpt/2(x1, x)pt/2(x, x2)?x??x?Gpt/2(x1, x) exp(c?2D(x1, x)t)pt/2(x, x2) exp(c?2D(x2, x)t)? exp(?c?2D(x1, x2)2t)?x? (Ec,D,G(x1, t/2)Ec,D,G(x2, t/2))1/2 exp(?c?2D(x1, x2)2t)?C(f1(?t/2)f2(?t/2))1/2exp(?c?2(x1, x2)2t),which completes the proof of Theorem 4.2.6.With minor modifications to Proposition 4.2.15, we can prove Theorem 4.2.10,which involves functions which are only (A, ?)?regular on (T1, T2):Proof. In Proposition 4.2.15, where estimates for IR(t) were obtained, weassumed that t0 ? R0 ? 1/2, and used (A, ?)?regularity to obtain1f(2tk+1)?1f(2t0)(Af(2t0)f(2t1))k+1. (4.40)for 0 ? k ? N . This is the only point in the proof where (A, ?)?regularityis used. It follows that if f is instead regular on (T1, T2), then for (4.40) tohold, we must have T1 < 2tN+1 and 2t1 < ??1T2. Subsequently, in the proof76of Proposition 4.2.18, we apply our bounds for IR(t) with t ? R = 2jt1/2, for0 ? j ? sup{k ? Z : 2k ? t1/2} ? 0 to obtain an estimate for E?,D,H(x, t).In the final step of the proof of Theorem 4.2.6, we apply our estimates forE?,D,H evaluated at t/2 (where t ? 1 ? D). 
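The numerical claim $K_{\lambda_0} = 2.98\ldots \le 3$ for $\lambda_0 = 2$, used in fixing the constants in the proof above, can be checked directly: $K_\lambda$ is the unique positive root of $2\cosh t - 2 = \lambda t^2$, which a simple bisection locates. A quick sketch (not part of the thesis):

```python
import math

def K(lam, lo=1e-9, hi=100.0, iters=200):
    """Largest t > 0 with 2*cosh(t) - 2 <= lam * t**2, found by bisection.

    For lam > 1 the function g(t) = 2*cosh(t) - 2 - lam*t**2 is negative
    near 0 (g(t) ~ (1 - lam) * t**2) and eventually positive, so the
    supremum is the positive root of g.
    """
    g = lambda t: 2.0 * math.cosh(t) - 2.0 - lam * t * t
    assert g(lo) < 0 < g(hi)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(K(2.0))  # approximately 2.98, consistent with K_2 = 2.98... <= 3
```

Larger $\lambda$ permits larger arguments in the comparison $2\cosh t - 2 \le \lambda t^2$, so $K_\lambda$ is increasing in $\lambda$, which is why enlarging $\lambda$ loosens condition (4.17) on $\theta$.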
Consequently, we are applying our bounds for $I_{R_0}(t_0)$ with $t_0 = t/2$ and various values of $R_0$, where the smallest value of $R_0$ used is $t_0^{1/2}$. By (4.28),
\[ 2t_{N+1} \ge \frac{1}{6e^2\gamma}R_0. \]
Consequently, if we assume that
\[ T_1 < \frac{1}{6\gamma^2e^2}\Bigl(\frac{t}{2}\Bigr)^{1/2}, \tag{4.41} \]
\[ T_2 > 2\Bigl(\frac{t}{2}\Bigr), \tag{4.42} \]
then the inequalities $T_1 < 2t_{N+1}$ and $2t_1 < \gamma^{-1}T_2$ are satisfied. Thus, the proof of Theorem 4.2.6 is still applicable for functions which are only $(A,\gamma)$-regular on $(T_1,T_2)$ if we require that
\[ t > 72e^4\gamma^4T_1^2, \qquad t < T_2, \]
and applying these additional constraints yields Theorem 4.2.10.

Theorem 4.2.10 is of particular interest in obtaining Gaussian upper bounds for various models of random walks in random environments:

Example 4.2.19 (Off-diagonal bounds for random walks on percolation clusters). Theorem 4.2.10 may be used to obtain off-diagonal heat kernel estimates for the CSRW or VSRW on supercritical percolation clusters, which were defined in Example 2.1.5. For results on random walks on percolation clusters, including on-diagonal heat kernel estimates and invariance principles, see [58] and [3].

Fix a dimension $d \ge 2$, and suppose that $p > p_c(d)$. We write $q^\omega_t(x,y)$ for the heat kernel of the CSRW on the $d$-dimensional supercritical percolation cluster $\mathcal{C}_{p,\infty}(\omega)$, and we denote the graph metric on $\mathcal{C}_{p,\infty}(\omega)$ by $d_C$. As discussed in Example 4.1.9, Mathieu and Remy proved that $q^\omega_t(x,y)$ satisfies the following upper bound:
\[ q^\omega_t(x,x) \le \begin{cases} c_1t^{-1/2} & \text{if } 0 < t \le N_x(\omega), \\ c_2t^{-d/2} & \text{if } N_x(\omega) < t. \end{cases} \]
The polynomial function $f(t) := c_2^{-1}t^{d/2}$ is $(A,\gamma)$-regular on $(N_x(\omega),\infty)$ for $A = 1$, $\gamma = 2$, and hence an application of Theorem 4.2.10 shows that for $x, y \in G$ and $t \ge C(N_x(\omega) \vee N_y(\omega)) \vee 1 \vee d_C(x,y)$,
\[ q^\omega_t(x,y) \le C_1t^{-d/2}\exp\Bigl(-C_2\frac{d_C^2(x,y)}{t}\Bigr), \tag{4.43} \]
where $C_1, C_2 > 0$ are non-random constants.

Remark 4.2.20. For the discrete-time simple random walk on $\mathcal{C}_{p,\infty}(\omega)$, Gaussian upper bounds are obtained in [17] as an application of their discrete-time heat kernel estimates. However, the bounds in [17] have a random constant $C_1 = C_1(\omega)$ in (4.43).
The reason is that [17] only considers functions which are $(A,\gamma)$-regular, and in general the function $f(t) := c_1^{-1}t^{1/2}\mathbf{1}_{\{0 < t \le N_x(\omega)\}} + c_2^{-1}t^{d/2}\mathbf{1}_{\{N_x(\omega) < t\}}$ is not $(A,\gamma)$-regular. The authors of [17] therefore bound $f$ below by a smaller random function $g(t) := d_1t^{1/2}\mathbf{1}_{\{0 < t \le N_x(\omega)\}} + d_2t^{d/2}\mathbf{1}_{\{N_x(\omega) < t\}}$, where $d_1 = d_1(\omega)$ and $d_2 = d_2(\omega)$ are random constants chosen to ensure that $f \ge g$ and $g$ is $(A,\gamma)$-regular.

Remark 4.2.21. Theorem 4.2.10 is also used in [1] to obtain Gaussian upper bounds for the heat kernel in the random conductance model; as in the case of supercritical percolation clusters, the function appearing in the on-diagonal heat kernel estimate of Proposition 4.1 of [1] is not $(A,\gamma)$-regular but rather $(A,\gamma)$-regular on $(T,\infty)$ for some $T > 0$, so Theorem 4.2.10 yields Gaussian upper bounds for all sufficiently large times.

Chapter 5

Metric Graphs

Metric graphs are a continuous analogue of graphs. Informally, one constructs a metric graph by beginning with an ordinary graph $(G,E)$, identifying each $e \in E$ with an interval of length $\ell(e) > 0$, and equipping the resulting object with the "shortest path" metric induced by taking the Euclidean metric along each edge.

The natural stochastic process to study on metric graphs is Brownian motion. Brownian motion on a metric graph moves like $1$-dimensional Brownian motion on the interior of an edge and randomly chooses an incident edge to move along when at a vertex.

Our motivation for studying metric graphs is that the techniques used to study Brownian motion on Riemannian manifolds are easier to translate to the setting of metric graphs than to the setting of weighted graphs, since metric graphs and Riemannian manifolds may both be viewed as strongly local Dirichlet spaces. Many of the complications posed by discreteness are absent in this hybrid setting.
We will see that if one has a continuous-time simple random walk on a weighted graph, it is possible to construct a metric graph such that Brownian motion on the metric graph behaves very similarly to the random walk on the original graph. Consequently, if one has strong analytic estimates for the Brownian motion on the metric graph, it may be possible to obtain analogous results for the continuous-time simple random walk on the weighted graph.

The study of Brownian motion on metric graphs begins with Walsh in [73], who studied the case of single-vertex graphs incident to some (possibly infinite) number of finite or infinite rays. There have been several constructions of Brownian motion on metric graphs: by Rogers ([66]), using resolvents; by Baxter and Chacon [8], using the infinitesimal generator; by Varopoulos [70], using Dirichlet form theory; and by Salisbury [68], using excursion theory. Metric graphs have also been used on several occasions to study problems arising in the theory of random walks; in particular, [70] used Brownian motion on metric graphs to establish estimates for discrete-time simple random walks, and Barlow and Bass used Brownian motion on metric graphs to study a continuous-time simple random walk. The paper [27] of the author is probably the first work in which Brownian motion was used to obtain estimates for continuous-time simple random walks on general measure weighted graphs.

5.1 Brownian motion on metric graphs

We begin with a careful construction of metric graphs, largely following the nice exposition given in [44]. We begin with a graph $\Gamma = (G,E)$, and a map $\ell : E \to (0,\infty)$ which assigns a length $\ell(e)$ to each $e \in E$. Equivalently, we may view $\ell$ as a collection of positive lengths $(\ell(e))_{e \in E}$; note that we do not assume that the resulting lengths satisfy the triangle inequality. Consequently, we can identify each edge in the metric graph with a copy of $[0,\ell(e)]$, and we will sometimes write $I(e)$ for the metric edge corresponding to $e$.
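Restricted to the vertices, the "shortest path" description above reduces to ordinary weighted shortest paths with the lengths $\ell(e)$ as edge weights, which is easy to compute with Dijkstra's algorithm. A small illustrative sketch (the graph data is invented for illustration; this is not part of the thesis):

```python
import heapq

def vertex_distance(edges, x, y):
    """Shortest-path distance between two vertices of a metric graph, with
    edges given as (u, v, length) triples; along each metric edge a path
    travels its full length ell(e).  Standard Dijkstra with a binary heap."""
    adj = {}
    for u, v, ell in edges:
        adj.setdefault(u, []).append((v, ell))
        adj.setdefault(v, []).append((u, ell))
    dist = {x: 0.0}
    heap = [(0.0, x)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == y:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, ell in adj.get(u, []):
            nd = d + ell
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

# A triangle whose edge lengths violate the triangle inequality, which the
# construction explicitly allows: the direct a-b edge is then not shortest.
edges = [("a", "b", 5.0), ("b", "c", 1.0), ("a", "c", 1.0)]
print(vertex_distance(edges, "a", "b"))  # 2.0, via the vertex c
```

This also illustrates why the lengths $(\ell(e))$ need not satisfy the triangle inequality: the induced metric automatically routes around an overly long edge.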
Assign some arbitrary orientation on edges, and let the maps $i : E \to G$ and $j : E \to G$ denote the initial and terminal vertices of each edge, so that an edge $e$ is incident to a vertex $v$ if and only if either $i(e) = v$ or $j(e) = v$. To define a topological structure, we set $X_e := \{e\} \times (0,\ell(e))$, $\mathcal{X} := G \cup \bigcup_{e \in E}X_e$, and $\overline{X}_e := X_e \cup \{i(e),j(e)\}$. We take the canonical homeomorphism $\varphi_e : X_e \to (0,\ell(e))$ defined by $\varphi_e((e,t)) = t$ and extend it to a homeomorphism $\overline{\varphi}_e : \overline{X}_e \to [0,\ell(e)]$ by setting $\overline{\varphi}_e(i(e)) = 0$ and $\overline{\varphi}_e(j(e)) = \ell(e)$.

The quadruple $\mathcal{X}(\Gamma,\ell,i,j)$ is a metric graph; however, since the functions $i$ and $j$ are not significant, we will simply refer to metric graphs by the pairing $\mathcal{X}(\Gamma,\ell)$ or $\mathcal{X}(\Gamma,(\ell(e))_{e \in E})$.

Finally, we define a shortest-path metric $d_\mathcal{X}$ on $\mathcal{X}(\Gamma,\ell)$. Given $x, y \in \mathcal{X}(\Gamma,\ell)$, let $M(x,y)$ denote the collection of (possibly empty) finite paths of vertices $p_0,\dots,p_N$ with $p_j \in G$ for $0 \le j \le N$, such that the extended path $q_0 = x$, $q_j = p_{j-1}$ for $1 \le j \le N+1$, $q_{N+2} = y$ has the property that for each $0 \le j \le N+1$, there exists an edge $e_j$ such that $q_j, q_{j+1} \in \overline{X}_{e_j}$. Let $N(x,y)$ denote the collection of extended paths $q$ obtained in this way. Given $p = p_0,\dots,p_N \in M(x,y)$ with the associated path $q_0,\dots,q_{N+2}$, we define
\[ L(q) = \inf\Bigl\{\sum_{j=0}^{N+1}|\overline{\varphi}_{e_j}(q_j) - \overline{\varphi}_{e_j}(q_{j+1})|\Bigr\}, \]
where the infimum is over admissible choices of the edges $(e_j)$, and set
\[ d_\mathcal{X}(x,y) = \inf\{L(q) : q \in N(x,y)\}. \]

For technical reasons which will become clear subsequently, we will want to consider metric graphs which have loops. Given a graph $\Gamma = (G,E)$ with no loops, we denote by $\Gamma_{\mathrm{loop}} := (G,E_{\mathrm{loop}})$ the augmented graph with the same vertex set and an edge set which consists of all edges in $E$ as well as loop edges added at each vertex in $G$. In $\Gamma_{\mathrm{loop}}$ we denote the loop incident to a vertex $x$ by $x_{\mathrm{loop}}$. Given a vertex $x \in G$, we write $E(x)$ to denote all non-loop edges incident to $x$ (regardless of whether we are working on $\Gamma$ or $\Gamma_{\mathrm{loop}}$) and $E_{\mathrm{loop}}(x)$ to denote all edges (loop or non-loop) incident to $x$.

We equip $\mathcal{X}(\Gamma,(\ell(e))_{e \in E})$ with two sets of positive edge weights: edge densities $(\rho(e))_{e \in E}$, which determine the diffusivity of metric edges, and "jump conductances" $(p(e))_{e \in E}$, which control the probability of Brownian motion choosing a particular metric edge to move along. The resulting object, which we denote by $\mathcal{X}(\Gamma,(\ell(e))_{e \in E},(\rho(e))_{e \in E},(p(e))_{e \in E})$, is a weighted metric graph. For brevity, we write
\[ \begin{aligned}
\mathcal{X}(\Gamma,\ell) &:= \mathcal{X}(\Gamma,(\ell(e))_{e \in E}), \\
\mathcal{X}_{\mathrm{loop}}(\Gamma,\ell) &:= \mathcal{X}(\Gamma_{\mathrm{loop}},(\ell(e))_{e \in E_{\mathrm{loop}}}), \\
\mathcal{X}(\Gamma,\ell,p,\rho) &:= \mathcal{X}(\Gamma,(\ell(e))_{e \in E},(\rho(e))_{e \in E},(p(e))_{e \in E}), \\
\mathcal{X}_{\mathrm{loop}}(\Gamma,\ell,p,\rho) &:= \mathcal{X}(\Gamma_{\mathrm{loop}},(\ell(e))_{e \in E_{\mathrm{loop}}},(\rho(e))_{e \in E_{\mathrm{loop}}},(p(e))_{e \in E_{\mathrm{loop}}}).
\end{aligned} \]

We will now construct Brownian motion on weighted metric graphs using Dirichlet space theory. For simplicity, we work for now on $\mathcal{X}(\Gamma,\ell,p,\rho)$, although the case of loops is handled by simply changing all sums to be over $E_{\mathrm{loop}}$ instead of $E$. We consider the Hilbert space $L^2(\mathcal{X}(\Gamma,\ell,p,\rho),\mu_\mathcal{X})$, where
\[ \mu_\mathcal{X}(dx) := \sum_{e \in E}\mathbf{1}_{I(e)}(x)p(e)\rho(e)\,m(dx), \]
and $m$ is Lebesgue measure on the edge $I(e)$.

Let $C(\mathcal{X}(\Gamma,\ell))$ denote the set of continuous functions on the weighted metric graph $\mathcal{X}(\Gamma,\ell)$; continuity clearly does not depend on the choice of the edge weights $(\rho(e))_{e \in E}$ or $(p(e))_{e \in E}$. Similarly, let $C_c(\mathcal{X}(\Gamma,\ell))$ denote the set of continuous functions with compact support, and let $C_0(\mathcal{X}(\Gamma,\ell))$ denote the closure of $C_c(\mathcal{X}(\Gamma,\ell))$ with respect to the $\|\cdot\|_\infty$ norm.

We will need to define Sobolev spaces on metric graphs. For $k \in \mathbb{Z}_+$ and $p \ge 1$, we define
\[ \begin{aligned}
S^{k,p}(\mathcal{X}(\Gamma,\ell,p,\rho),\mu) &:= \bigl\{u \in C(\mathcal{X}(\Gamma,\ell)) : \text{for all } e \in E,\ u|_{I(e)} \in W^{k,p}(I(e),\mu|_{I(e)})\bigr\}, \\
W^{k,p}(\mathcal{X}(\Gamma,\ell,p,\rho),\mu) &:= \Bigl\{u \in S^{k,p}(\mathcal{X}(\Gamma,\ell,p,\rho),\mu) : \sum_{e \in E}\|u|_{I(e)}\|^p_{W^{k,p}(I(e),\mu|_{I(e)})} < \infty\Bigr\}, \\
W^{k,p}_0(\mathcal{X}(\Gamma,\ell,p,\rho),\mu) &:= C_0(\mathcal{X}(\Gamma,\ell)) \cap W^{k,p}(\mathcal{X}(\Gamma,\ell,p,\rho),\mu).
\end{aligned} \]
We consider the strongly local Dirichlet form $\mathcal{E}$ on $L^2(\mathcal{X}(\Gamma,\ell,p,\rho),\mu)$ with domain $\mathcal{D}(\mathcal{E}) = W^{1,2}_0(\mathcal{X}(\Gamma,\ell,p,\rho),\mu)$, which is defined on $\mathcal{D}(\mathcal{E}) \times \mathcal{D}(\mathcal{E})$ by
\[ \mathcal{E}(f,g) := \sum_{e \in E}\int_{I(e)}f'(x)g'(x)p(e)\,m(dx). \]
The closedness of $\mathcal{E}$ is implied by the completeness of the Sobolev spaces $W^{k,p}(I)$, where $I$ is a finite interval in $\mathbb{R}$.

The stochastic process associated with this Dirichlet form is Brownian motion on $\mathcal{X}(\Gamma,\ell,p,\rho)$, which we denote by $(Z_t)_{t \ge 0}$. From this construction, $(Z_t)_{t \ge 0}$ is a continuous Hunt process (see [14] or [32]), and consequently $Z$ is strong Markov.

By the Gauss-Green formula, one may recover conditions for the generator $\mathcal{L}_Z$ of $(Z_t)_{t \ge 0}$; the domain of $\mathcal{L}_Z$ is the set of functions $u$ such that $u \in W^{2,2}_0(\mathcal{X}(\Gamma,\ell,p,\rho),\mu)$ and, for each $x \in G$,
\[ \sum_{e \in E(x)}p(e)u'_e(x) = 0, \tag{5.1} \]
where $u'_e(x)$ denotes the directional derivative of $u$ at $x$ in the direction of the edge $e$. On each edge $I(e)$, the generator $\mathcal{L}_Z$ is given by
\[ \mathcal{L}_Z\big|_{I(e)}u\big|_{I(e)} = \frac{1}{2\rho(e)}u''\Big|_{I(e)}. \]

One may view the Brownian motion on $\mathcal{X}(\Gamma,\ell,p,\rho)$ as being obtained from a single standard Brownian motion on $\mathbb{R}$, $(W_t)_{t \ge 0}$, in the following way. We start the Brownian motion $(Z_t)_{t \ge 0}$ at some vertex $x \in G$. The excursion set $\{W_t \ne 0\}$ consists of countably many intervals, so for each excursion, we pick an incident edge $f$ with probability $p(f)\bigl(\sum_{e \in E(x)}p(e)\bigr)^{-1}$, and move like $|W_{\rho(f)^{-1}t}|$ along this edge. This is done until a new vertex is reached, at which point we use the strong Markov property and continue.

The case of loops requires some small changes. On $\mathcal{X}_{\mathrm{loop}}(\Gamma,\ell,p,\rho)$, the construction of Brownian motion via Dirichlet spaces goes through unchanged if one replaces $E$ with $E_{\mathrm{loop}}$. For the generator, the compatibility condition (5.1) becomes
\[ \sum_{e \in E(x)}p(e)u'_e(x) + p(x_{\mathrm{loop}})u'_1(x_{\mathrm{loop}}) + p(x_{\mathrm{loop}})u'_2(x_{\mathrm{loop}}) = 0, \tag{5.2} \]
where $u'_1(x_{\mathrm{loop}}), u'_2(x_{\mathrm{loop}})$ are the directional derivatives of $u$ at $x$ in the direction of either of the two possible orientations of the loop edge $x_{\mathrm{loop}}$. Similarly, if one views the Brownian motion on $\mathcal{X}_{\mathrm{loop}}(\Gamma,\ell,p,\rho)$ as being obtained from a single standard Brownian motion $(W_t)_{t \ge 0}$, then for each interval, we pick an incident non-loop edge $f$ with probability $p(f)\bigl(\sum_{e \in E(x)}p(e) + 2p(x_{\mathrm{loop}})\bigr)^{-1}$, and each of the two possible orientations of the loop edge is chosen with probability $p(x_{\mathrm{loop}})\bigl(\sum_{e \in E(x)}p(e) + 2p(x_{\mathrm{loop}})\bigr)^{-1}$.

5.2 Computing exit probabilities and hitting times

This section collects several results on the hitting probabilities and hitting times of Brownian motion on metric graphs. We will work exclusively with the augmented weighted metric graph $\mathcal{X}_{\mathrm{loop}}(\Gamma,\ell,p,\rho)$. We set
\[ q(e) := \begin{cases} p(e) & \text{if } e \text{ is not a loop edge,} \\ 2p(e) & \text{if } e \text{ is a loop edge.} \end{cases} \]
Using this definition, (5.2) becomes
\[ \sum_{e \in E(x)}q(e)u'_e(x) + \frac12q(x_{\mathrm{loop}})u'_1(x_{\mathrm{loop}}) + \frac12q(x_{\mathrm{loop}})u'_2(x_{\mathrm{loop}}) = 0. \tag{5.3} \]
Our results for hitting probabilities and hitting times will be stated strictly in terms of $q$ because Theorem 5.2.3 and Theorem 5.2.4 are simpler to express in terms of the $q$ conductances.

Denote Brownian motion on $\mathcal{X}_{\mathrm{loop}}(\Gamma,\ell,p,\rho)$ by $(Z_t)_{t \ge 0}$. As we will not be using the VSRW in this section, there is no ambiguity in writing $\mathcal{L} := \mathcal{L}_Z$. Let the vertex $x$ be adjacent to the vertices $x_1,\dots,x_k$ via the edges $e_1,\dots,e_k$, and to itself through the loop $x_{\mathrm{loop}}$. We use $B$ to denote the part of the graph containing $x, x_1,\dots,x_k$, as well as the metric edges $I(e_1),\dots,I(e_k),I(x_{\mathrm{loop}})$. We also introduce coordinates on $B$, rooted at $x$; given $1 \le j \le k$ and $0 \le z \le \ell(e_j)$, we write $(j,z)$ for the unique point on $I(e_j)$ which is at a distance of $z$ from $x$ (measured along the edge $I(e_j)$). On the loop, we choose an arbitrary orientation and write $(\mathrm{loop},z)$ for the unique point on $I(x_{\mathrm{loop}})$ which is at a positive distance of $z$ from $x$ (measured along the loop according to its orientation).

We write $T = T_Z = \inf\{t \ge 0 : Z_t \in \partial B\}$ for the first time that $(Z_t)_{t \ge 0}$ hits any of $x_1,\dots,x_k$. Given $y \in B$, we write $\mathbb{P}^y$ for the probability law of $(Z_t)_{t \ge 0}$ started at $y$, and $\mathbb{E}^y$ for expectations with respect to this probability law.

We remark also that all results in this section are valid for loopless metric graphs; one needs only to replace all sums over $E_{\mathrm{loop}}(x)$ with sums over $E(x)$.

Theorem 5.2.1. For $1 \le j \le k$,
\[ \mathbb{P}^x(Z_T = x_j) = \frac{q(e_j)/\ell(e_j)}{\sum_{e \in E(x)}q(e)/\ell(e)}. \tag{5.4} \]

Proof. Let $u(y) := \mathbb{P}^y(Z_T = x_j)$. By general Markov process theory, $u$ satisfies
\[ \mathcal{L}u = 0 \text{ in } B, \qquad u = 1 \text{ at } x_j, \qquad u = 0 \text{ on } \partial B \setminus \{x_j\}. \]
By solving the differential equation along each edge, we obtain constants $(a_i)_{1 \le i \le k}$ and $(b_i)_{1 \le i \le k}$ such that $u((i,y)) = a_iy + b_i$. By letting $y \to 0$, we get that $b_i = b$ for $1 \le i \le k$. On the loop, $u$ is linear and satisfies $u((\mathrm{loop},0)) = u((\mathrm{loop},\ell(x_{\mathrm{loop}}))) = b$, so $u$ is identically equal to $b$ along the loop.

Plugging in the boundary condition shows that, for $1 \le i \le k$,
\[ a_i\ell(e_i) + b = 1 \text{ if } i = j, \qquad a_i\ell(e_i) + b = 0 \text{ if } i \ne j. \tag{5.5} \]
By (5.3), since the terms corresponding to the loop vanish, we have
\[ \sum_{i=1}^{k}q(e_i)a_i = 0. \]
By multiplying the $i$-th equation in (5.5) by $\frac{q(e_i)}{\ell(e_i)}$ and summing over $1 \le i \le k$, we obtain that
\[ \sum_{i=1}^{k}q(e_i)a_i + b\sum_{i=1}^{k}\frac{q(e_i)}{\ell(e_i)} = \frac{q(e_j)}{\ell(e_j)}, \]
and hence
\[ \mathbb{P}^x(Z_T = x_j) = b = \frac{q(e_j)/\ell(e_j)}{\sum_{e \in E(x)}q(e)/\ell(e)}. \]
With this in hand, we may also compute from (5.5) that for $1 \le i \le k$,
\[ \mathbb{P}^{(i,y)}(Z_T = x_j) = u((i,y)) = \begin{cases} \dfrac{y}{\ell(e_i)} + \dfrac{\ell(e_i)-y}{\ell(e_i)}\cdot\dfrac{q(e_j)/\ell(e_j)}{\sum_{e \in E(x)}q(e)/\ell(e)} & \text{if } i = j, \\[2ex] \dfrac{\ell(e_i)-y}{\ell(e_i)}\cdot\dfrac{q(e_j)/\ell(e_j)}{\sum_{e \in E(x)}q(e)/\ell(e)} & \text{if } i \ne j, \\[2ex] \dfrac{q(e_j)/\ell(e_j)}{\sum_{e \in E(x)}q(e)/\ell(e)} & \text{if } i = \mathrm{loop}. \end{cases} \]

Remark 5.2.2. Note that the edge densities $(\rho(e))_{e \in E}$ do not appear in (5.4). This is to be expected, given that changing the edge densities causes a time-change of the Brownian motion $(Z_t)_{t \ge 0}$, which does not change the probabilities of $Z$ hitting $\partial B$ through the vertices $x_1,\dots,x_k$. Similarly, none of the quantities related to the loop edge $x_{\mathrm{loop}}$ appear, since traversing the loop does not change the probability of hitting $\partial B$ at a specified vertex.

Theorem 5.2.3.
The first moment of the hitting time T satisfiesExT =?e?Eloop(x) ?(e)q(e)`(e)?e?E(x)q(e)`(e).Proof. Let v(y) := EyT . By general Markov process theory, v satisfies???Lv = ?1 in B,v = 0 on ?B.Thus, we obtain that v((i, y)) = ??(ei)y2 + aiy + bi for some constants(ai)1?i?k and (bi)1?i?k. Letting y ? 0, we get that bi = b for 1 ? i ? k. Since86v((i, `(ei)) = 0 for 1 ? i ? k, we have that??(ei)`2(ei) + ai`(ei) + b = 0. (5.6)On the loop edge, since v((loop, 0)) = v((loop, `(xloop))) = b, we have thatv((loop, y)) = ?(xloop)y(`(xloop)? y) + b.The compatibility condition (5.3) givesk?i=1q(ei)ai + q(xloop)?(xloop)`(xloop) = 0.As before, we multiply the i?th equation in (5.6) by q(ei)`(ei) , and sum over1 ? i ? k to obtainExT = b =?e?Eloop(x) ?(e)q(e)`(e)?e?E(x)q(e)`(e).Once we have computed b, we may compute from (5.6) that for 1 ? i ? k,v((i, y)) = ? ?(ei)y2 +???(ei)`(ei)?1`(ei)?e?Eloop(x) ?(e)q(e)`(e)?e?E(x)q(e)`(e)?? y+?e?Eloop(x) ?(e)q(e)`(e)?e?E(x)q(e)`(e).Theorem 5.2.4. The second moment of the hitting time T and the variance87of T are given byExT 2 =13?e?Eloop(x) ?2(e)q(e)`3(e)?e?E(x)q(e)`(e)+43???e?Eloop(x) ?(e)q(e)`(e)?e?E(x)q(e)`(e)??2+ 2?(xloop)q(xloop)`(xloop)?e?Eloop(x) ?(e)q(e)`(e)(?e?E(x)q(e)`(e))2 ,Var T =13?e?Eloop(x) ?2(e)q(e)`3(e)?e?E(x)q(e)`(e)+13???e?Eloop(x) ?(e)q(e)`(e)?e?E(x)q(e)`(e)??2+ 2?(xloop)q(xloop)`(xloop)?e?Eloop(x) ?(e)q(e)`(e)(?e?E(x)q(e)`(e))2 .Proof. Define w(y) = EyT 2. Given ?  1 and a point (i, y) which is notone of the vertices x, x1, . . . , xk, we let ? be the first hitting time of the set{(i, y ? ?), (i, y + ?)}, and use the strong Markov property to obtainw((i, y)) := E(i,y)T 2 = E(i,y)(ET 2|F? ) = E(i,y)(EZ? (T + ?)2)= E(i,y)(EZ? (T 2 + 2?T + ? 2))= E(i,y)(EZ?T 2) + 2E(i,y)(?EZ?T ) + E(i,y)(EZ? ? 2)=12(w((i, y + ?)) + w((i, y ? ?)))+ 2?2?(ei)12(v((i, y + ?)) + v((i, y ? ?))) +53?2(ei)?4.Rearranging and letting ? ? 
0, we get that w??((i, y)) = ?4?(ei)v((i, y)), orLw = ?2v, so that the function w satisfies???Lw = ?2v in B,w = 0 on ?B.The explicit form for v was worked out in the proof of Theorem 5.2.3, andhence, using the same techniques as before, one obtains the desired formulae.For a continuous-time simple random walk, the time between jumps is in-dependent of the edge the random walk chooses to move along. This is not88the case for Brownian motion on metric graphs; the expected time to for theBrownian motion to exit B conditional on exiting from a vertex y ? ?B mayvary depending on the choice of y.Theorem 5.2.5. The expected value of T , conditional on exiting B fromxj ? ?B, is given byEx(T |XT = xj) =13?(ej)`2(ej) +13???(xloop)q(xloop)`(xloop)?e?E(x)q(e)`(e)??+23???e?Eloop(x) ?(e)q(e)`(e)?e?E(x)q(e)`(e)?? .Proof. Define ?(y) = Ey(T1{XT=xj}), where 1U denotes the indicator func-tion of the event U . Proceeding as before, we can show that ? satisfies thefollowing boundary value problem:???L? = ?u in B,? = 0 on ?B.The explicit form for u was worked out in the proof of Theorem 5.2.1. Toobtain the desired result, one uses the same techniques as above, togetherwith the fact that Ex(T |XT = xj) = Ex(T1{XT=xj})/Px(XT = xj).5.3 Metrics for metric graphsStrongly local Dirichlet spaces have a natural notion of distance associatedwith them, known as the intrinsic metric. The intrinsic metric is the metricwhich typically appears in off-diagonal estimates or volume growth estimatesfor strongly local Dirichlet forms.In the setting of metric graphs, the intrinsic metric can be obtained as follows.We work with Brownian motion on X (?, `, p, ?), viewed as the process asso-ciated with the strongly local Dirichlet form (E ,D(E)) on L2(X (?, `, p, ?), ?).According to Dirichlet form theory (see [32] or [69]), there exists a measure89(the carre-du-champs measure) ?(f, g) satisfyingE(f, g) =?X (?,`)d?(f, g).The intrinsic metric on X (?, `, p, ?) 
associated with the Dirichlet form $\mathcal{E}$ is defined to be
$$\rho_{\mathcal{X}}(x, y) := \sup\{f(x) - f(y) : f \in W^{1,2}_{\mathrm{loc}}(\mathcal{X}(\Gamma, \ell, p, \gamma), \nu) \text{ and } d\Gamma(f, f) \le d\nu\}.$$
For $x \in I(e)$, we have
$$d\Gamma(f, f)(x) = (f'(x))^2\, p(e)\, dm(x), \qquad d\nu(x) = p(e)\gamma(e)\, dm(x).$$
Suppose that $f \in W^{1,2}_{\mathrm{loc}}(\mathcal{X}(\Gamma, \ell, p, \gamma), \nu)$ with $d\Gamma(f, f) \le d\nu$. On the edge $e$, if $x \in I(e)$, then $|f'(x)|^2 \le \gamma(e)$, so that $(f(\overline{e}) - f(\underline{e}))^2 \le \gamma(e)\ell^2(e)$. In particular, if we consider points $x, y \in G$, the intrinsic metric satisfies
$$\rho_{\mathcal{X}}(x, y) = \sup\{|f(x) - f(y)| : f \in C(G) \text{ such that } (f(\overline{e}) - f(\underline{e}))^2 \le \gamma(e)\ell^2(e) \text{ for all } e \in E\},$$
and following the proof of Lemma 3.1.1 implies that for $x, y \in G$,
$$\rho_{\mathcal{X}}(x, y) = \inf\Big\{\sum_{e \in c} (\gamma(e))^{1/2}\ell(e) : c \in C(x, y)\Big\}. \qquad (5.7)$$

Remark 5.3.1. There are obvious connections between the intrinsic metric $\rho_{\mathcal{X}}$ and the metric $d_\sigma$ induced by a set of edge conductances $(\sigma(e))_{e \in E}$. In particular, if $\sigma(e) = (\gamma(e))^{1/2}\ell(e)$ for each $e \in E$, then $d_\sigma(x, y) = \rho_{\mathcal{X}}(x, y)$ for each $x, y \in G$.

Loops do not have a significant effect on the intrinsic metric. Let $\rho_{\mathcal{X}}$ denote the intrinsic metric on $\mathcal{X}(\Gamma, \ell, p, \gamma)$ and $\rho_{\mathcal{X},\mathrm{loop}}$ denote the intrinsic metric on $\mathcal{X}_{\mathrm{loop}}(\Gamma, \ell, p, \gamma)$. As before, we have that for $x, y \in \mathcal{X}(\Gamma, \ell)$ and $u, v \in \mathcal{X}_{\mathrm{loop}}(\Gamma, \ell)$,
$$\rho_{\mathcal{X}}(x, y) := \sup\{|f(x) - f(y)| : f \in \mathcal{F}\}, \qquad \rho_{\mathcal{X},\mathrm{loop}}(u, v) := \sup\{|g(u) - g(v)| : g \in \mathcal{F}_{\mathrm{loop}}\},$$
where
$$\mathcal{F} := \{f \in W^{1,2}_{\mathrm{loc}}(\mathcal{X}(\Gamma, \ell, p, \gamma), \nu) : \text{for all } e \in E, \text{ if } x \in I(e), \ |f'(x)|^2 \le \gamma(e)\},$$
$$\mathcal{F}_{\mathrm{loop}} := \{f \in W^{1,2}_{\mathrm{loc}}(\mathcal{X}_{\mathrm{loop}}(\Gamma, \ell, p, \gamma), \nu) : \text{for all } e \in E_{\mathrm{loop}}, \text{ if } x \in I(e), \ |f'(x)|^2 \le \gamma(e)\}.$$

Proposition 5.3.2. If $x, y \in \mathcal{X}(\Gamma, \ell)$, then $\rho_{\mathcal{X}}(x, y) = \rho_{\mathcal{X},\mathrm{loop}}(x, y)$.

Proof. First, the restriction of any $g \in \mathcal{F}_{\mathrm{loop}}$ to $\mathcal{X}(\Gamma, \ell)$ is in $\mathcal{F}$; as an immediate consequence, $\rho_{\mathcal{X},\mathrm{loop}}(x, y) \le \rho_{\mathcal{X}}(x, y)$. On the other hand, given $f \in \mathcal{F}$, one may extend $f$ to $\mathcal{X}_{\mathrm{loop}}(\Gamma, \ell)$ by setting, at each $x \in G$, $f(y) = f(x)$ for each $y \in I(x_{\mathrm{loop}})$. This extension is in $\mathcal{F}_{\mathrm{loop}}$; consequently $\rho_{\mathcal{X}}(x, y) \le \rho_{\mathcal{X},\mathrm{loop}}(x, y)$, and hence $\rho_{\mathcal{X}}(x, y) = \rho_{\mathcal{X},\mathrm{loop}}(x, y)$ for all $x, y \in$
$\mathcal{X}(\Gamma, \ell)$.

In particular, $\rho_{\mathcal{X}}$ and $\rho_{\mathcal{X},\mathrm{loop}}$ are equal when evaluated between points of $G$. We will write $\rho_{\mathcal{X}}$ for the intrinsic metric on $\mathcal{X}_{\mathrm{loop}}(\Gamma, \ell, p, \gamma)$ as well.

5.4 Synchronizing random walks and Brownian motion

In this section, we begin with a weighted graph $(\Gamma, \pi, \theta)$ and the associated continuous-time simple random walk $(X_t)_{t \ge 0}$. We investigate the question of when it is possible to find a weighted metric graph $\mathcal{X}(\Gamma, \ell, p, \gamma)$ or $\mathcal{X}_{\mathrm{loop}}(\Gamma, \ell, p, \gamma)$ such that Brownian motion $(Z_t)_{t \ge 0}$ on this space behaves similarly to $X$ with respect to certain properties.

Let $T$ be the hitting time for $Z$ described in Section 5.2; it is the time it takes $Z$ to hit a vertex different from the last visited one. Let $\tilde{T}$ be the time it takes the VSRW to jump to a new vertex; recall that started at $x \in G$, $\tilde{T}$ is exponentially distributed with parameter $\pi_x$. We write $\mathbb{P}^x_X$ for the law of $(X_t)_{t \ge 0}$ started at $x \in G$, and $\mathbb{P}^x_Z$ for the law of $(Z_t)_{t \ge 0}$ started at $x \in G$. We also write $\mathbb{E}^x_X$ and $\mathbb{E}^x_Z$ for expectations with respect to $\mathbb{P}^x_X$ and $\mathbb{P}^x_Z$, respectively.

Our first result concerns jump probabilities only:

Theorem 5.4.1. $X$ and $Z$ have the same jump probabilities if and only if $(\ell(e))_{e \in E}$ and $(p(e))_{e \in E}$ are chosen such that there exists $\alpha > 0$ such that for each $e \in E$, $\alpha\pi(e) = p(e)/\ell(e)$.

Proof. Fix $x \in G$ and suppose that $x_j$ is adjacent to $x$ via the edge $e_j$. Setting $\mathbb{P}^x_X(X_{\tilde{T}} = x_j) = \mathbb{P}^x_Z(Z_T = x_j)$, we obtain
$$\frac{p(e_j)/\ell(e_j)}{\sum_{e \in E(x)} p(e)/\ell(e)} = \frac{\pi(e_j)}{\sum_{e \in E(x)} \pi(e)}.$$
This implies that there exists $\alpha = \alpha(x)$ such that $\alpha\pi(e) = p(e)/\ell(e)$ for $e \in E(x)$. Given an edge $e = \{u, v\} \in E$, by comparing the equalities $\mathbb{P}^u_X(X_{\tilde{T}} = v) = \mathbb{P}^u_Z(Z_T = v)$ and $\mathbb{P}^v_X(X_{\tilde{T}} = u) = \mathbb{P}^v_Z(Z_T = u)$, we obtain that $\alpha(u) = \alpha(v)$. Since $G$ is connected, we conclude that $\alpha$ is constant.

Synchronizing the expected jump times is more complex. For the special case where $\theta$
$= 1$ (so that the random walk under consideration is the VSRW), the following result establishes necessary and sufficient criteria for synchronizing jump probabilities and expected jump times using a loopless metric graph. We begin with a definition from graph theory:

A disjoint cycle cover of $\Gamma = (G, E)$ is a collection of vertex-disjoint cycles in $G$ such that every vertex in $G$ is incident to some edge in one of the cycles. A single edge is considered a cycle.

Theorem 5.4.2. Let $X$ be the VSRW. It is possible to choose $(\ell(e))_{e \in E}$, $(p(e))_{e \in E}$, and $(\gamma(e))_{e \in E}$ so that $X$ and $Z$ satisfy, for each $x \in G$ and $y \sim x$,
$$\mathbb{P}^x_X(X_{\tilde{T}} = y) = \mathbb{P}^x_Z(Z_T = y), \qquad \mathbb{E}^x_X \tilde{T} = \mathbb{E}^x_Z T,$$
if and only if, for every $e \in E$, there exists a disjoint cycle cover of $(G, E)$ which contains the edge $e$.

Proof. First, by the previous result, we know that there exists $\alpha > 0$ such that $(p(e))_{e \in E}$ and $(\ell(e))_{e \in E}$ satisfy $p(e)/\ell(e) = \alpha\pi(e)$ for each $e \in E$. Fix $x \in G$. If $\mathbb{E}^x_X \tilde{T} = \mathbb{E}^x_Z T$, then
$$\frac{\sum_{e \in E(x)} \gamma(e) p(e) \ell(e)}{\sum_{e \in E(x)} p(e)/\ell(e)} = \frac{1}{\sum_{e \in E(x)} \pi(e)}.$$
Once $(p(e))_{e \in E}$ and $(\ell(e))_{e \in E}$ have been fixed, we set $\gamma(e) := c(e)(p(e)\ell(e))^{-1}$, so that
$$\frac{\sum_{e \in E(x)} c(e)}{\alpha\sum_{e \in E(x)} \pi(e)} = \frac{1}{\sum_{e \in E(x)} \pi(e)}.$$
Clearly, after normalizing by $\alpha$, it suffices to determine when it is possible to assign edge weights $(c(e))_{e \in E}$ to the edges of the graph so that the edge weights incident to each vertex sum to 1. By the following lemma, we see that this happens if and only if, for each $e \in E$, there is a disjoint cycle cover containing $e$ in one of its cycles.

Lemma 5.4.3. Let $\Gamma = (G, E)$ be a locally finite graph. It is possible to assign edge weights to each edge of the graph $(G, E)$ so that the edge weights incident to each vertex sum to 1 if and only if, for each $e \in E$, there is a disjoint cycle cover containing $e$ in one of its cycles.

Proof. We reproduce the proof at [78]. Suppose that for each $e \in E$, there is a disjoint cycle cover containing $e$ in one of its cycles. Fix an edge $f$, and pick a disjoint cycle cover containing that edge. For each $e \in E$, we define $c_f(e) := 1$ if the edge $e$ appears as an isolated edge in the disjoint cycle cover, $c_f(e) := 1/2$ if the edge $e$ is part of a proper cycle in the disjoint cycle cover, and $c_f(e) := 0$ otherwise. For any $x \in G$, we have $\sum_{e \in E(x)} c_f(e) = 1$, and for any $f \in E$, $c_f(f) > 0$.

Now, let $(\beta(e))_{e \in E}$ be an arbitrary collection of positive numbers satisfying $\sum_{e \in E} \beta(e) = 1$, and set, for each $e \in E$, $c(e) = \sum_{f \in E} \beta(f) c_f(e)$. Note that since $\beta(e)c_e(e) > 0$, $c(e) > 0$, and for any vertex $x \in G$ with incident edges $e_1, \ldots, e_k$, we have that
$$\sum_{e \in E(x)} c(e) = \sum_{e \in E(x)} \sum_{f \in E} \beta(f) c_f(e) = \sum_{f \in E} \sum_{e \in E(x)} \beta(f) c_f(e) = \sum_{f \in E} \beta(f) = 1,$$
as desired.

On the other hand, suppose that for each $x \in G$, the sum of the edge weights incident to $x$ is 1. It is clear that the (possibly infinite) weighted adjacency matrix $A$ is doubly stochastic, and by the Birkhoff–von Neumann theorem it is a (possibly infinite) convex combination of permutation matrices, which we denote by $(P_n)_{n \in I}$. Write
$$A = \sum_{n \in I} a_n P_n,$$
with $a_n > 0$ and $\sum_{n \in I} a_n = 1$. Fix $e \in E$. Then the corresponding entry of $A$ is nonzero, so some $P_j$ must have a nonzero entry in the same place. $A$ has zero entries on its diagonal (as $(G, E)$ has no loops), so each $P_j$ must also. By considering the cycle decomposition of the permutation corresponding to the matrix $P_j$, we see that $P_j$ naturally corresponds to a disjoint cycle cover containing the edge $e$.

Remark 5.4.4. The condition that every edge be contained in a disjoint cycle cover is very unstable, as it can be destroyed by perturbing a graph very slightly; modifying any graph so that it has at least one vertex of degree 1 will make it impossible to synchronize the VSRW with a Brownian motion as in Theorem 5.4.2.

Remark 5.4.5. It is not difficult to see that it is always possible to synchronize the jump probabilities and jump times on $\mathcal{X}_{\mathrm{loop}}(\Gamma, \ell, p, \gamma)$; one proceeds as above and sets $c(e) := \tfrac{1}{2}\pi_e\big(\pi_{\underline{e}} \vee \pi_{\overline{e}}\big)^{-1}$ for all non-loop edges, so that $\sum_{e \in E(x)} c(e) \le \tfrac{1}{2}$ for all $x \in G$.
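Before turning to the loop edges, the convex-combination step in the proof of Lemma 5.4.3 is easy to check numerically. The following sketch is our own illustration (the 4-cycle and its two covers are hypothetical choices, not from the text): each disjoint cycle cover, and any convex combination of covers, yields edge weights whose incident sums equal 1 at every vertex.

```python
def vertex_sums(n_vertices, weights):
    """Sum the edge weights incident to each vertex."""
    sums = [0.0] * n_vertices
    for (u, v), w in weights.items():
        sums[u] += w
        sums[v] += w
    return sums

# Edges of the 4-cycle C4.
cycle_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

# Cover 1: the whole cycle; every edge lies in a proper cycle, weight 1/2.
c_cycle = {e: 0.5 for e in cycle_edges}
# Cover 2: the perfect matching {(0,1), (2,3)}; isolated edges get weight 1.
c_matching = {(0, 1): 1.0, (2, 3): 1.0, (1, 2): 0.0, (3, 0): 0.0}

# A convex combination of covers again has vertex sums equal to 1.
beta = 0.3
c_mixed = {e: beta * c_cycle[e] + (1 - beta) * c_matching[e] for e in cycle_edges}

for c in (c_cycle, c_matching, c_mixed):
    assert all(abs(s - 1.0) < 1e-12 for s in vertex_sums(4, c))
print("vertex sums:", vertex_sums(4, c_mixed))
```

The point of the convex combination is exactly what the assertion checks: since each cover's weights sum to 1 at every vertex, so does any mixture of them, while mixing over covers containing each edge makes every weight strictly positive.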
One then chooses the edge weights on loops so that $\gamma(x_{\mathrm{loop}}) p(x_{\mathrm{loop}}) \ell(x_{\mathrm{loop}}) = 1 - \sum_{e \in E(x)} c(e)$ for all $x \in G$.

We can obtain more interesting results if we weaken the assumption that $\mathbb{E}^x_X \tilde{T} = \mathbb{E}^x_Z T$. Indeed, given a weighted graph $(\Gamma, \pi, \theta)$, a metric $\rho$ adapted to $\mathcal{L}_\theta$, and the associated continuous-time simple random walk $X$, the following result yields a Brownian motion $Z$ on a weighted metric graph $\mathcal{X}_{\mathrm{loop}}(\Gamma, \ell, p, \gamma)$ such that $X$ and $Z$ have the same jump probabilities and approximately the same expected jump times, and such that the intrinsic metric for $Z$ is closely related to the adapted metric for $X$.

Theorem 5.4.6. Let $(\Gamma, \pi, \theta)$ be a weighted graph with $C_\sigma$-adapted edge conductances $(\sigma(e))_{e \in E}$. Let $\mathcal{X}_{\mathrm{loop}}(\Gamma, \ell, p, \gamma)$ be the metric graph such that, for $e \in E$,
$$\ell(e) := 1, \qquad p(e) := \pi(e), \qquad \gamma(e) := \sigma^2(e),$$
and for each $x \in G$,
$$\ell(x_{\mathrm{loop}}) := 1, \qquad p(x_{\mathrm{loop}}) := \tfrac{1}{2}\theta_x, \qquad \gamma(x_{\mathrm{loop}}) := 1.$$
For all $x \in G$ and $y \sim x$,
$$\mathbb{P}^x_X(X_{\tilde{T}} = y) = \mathbb{P}^x_Z(Z_T = y), \qquad (5.8)$$
$$\mathbb{E}^x_X \tilde{T} \le \mathbb{E}^x_Z T \le (C_\sigma + 1)\mathbb{E}^x_X \tilde{T}. \qquad (5.9)$$
Additionally, the induced metric $d_\sigma$ satisfies $d_\sigma(x, y) = \rho_{\mathcal{X}}(x, y)$ for all $x, y \in G$, and for all $x_0 \in G$ and $r \ge 0$,
$$V^Z_{\rho_{\mathcal{X}}}(x_0, r) \le (C_\sigma + 1)V^X_\sigma(x_0, r),$$
where $V^X_\sigma$ denotes the volume growth on $(\Gamma, \pi, \theta)$ and $V^Z_{\rho_{\mathcal{X}}}$ denotes the volume growth on $\mathcal{X}_{\mathrm{loop}}(\Gamma, \ell, p, \gamma)$.

Proof. The relation (5.8) follows immediately from Theorem 5.4.1. By adaptedness of $(\sigma(e))_{e \in E}$, there exists a positive constant $C_\sigma$ such that for any $x \in G$,
$$0 \le \frac{1}{\theta_x}\sum_{e \in E(x)} \pi_e \sigma^2(e) \le C_\sigma,$$
from which it follows immediately that
$$\theta_x \le \sum_{e \in E_{\mathrm{loop}}(x)} \gamma(e) p(e) \ell(e) \le (C_\sigma + 1)\theta_x, \qquad (5.10)$$
so that (5.9) follows from (5.10) and Theorem 5.2.3.

By (5.7) and Proposition 5.3.2, we have $d_\sigma(x, y) = \rho_{\mathcal{X}}(x, y)$ for all $x, y \in G$. Finally, for $x_0 \in G$ and $r \ge 0$, we have
$$\begin{aligned}
V^Z_{\rho_{\mathcal{X}}}(x_0, r) &:= \nu(B_{\rho_{\mathcal{X}}}(x_0, r)) \\
&\le \sum_{e \in E_{\mathrm{loop}} \,:\, e \cap B_{\rho_{\mathcal{X}}}(x_0, r) \ne \emptyset} \nu(e) \\
&\le \sum_{x \in G \cap B_{\rho_{\mathcal{X}}}(x_0, r)} \sum_{e \in E_{\mathrm{loop}}(x)} \nu(e) \\
&= \sum_{x \in G \cap B_{\rho_{\mathcal{X}}}(x_0, r)} \sum_{e \in E_{\mathrm{loop}}(x)} \gamma(e) p(e) \ell(e) \\
&\le \sum_{x \in G \cap B_{\rho_{\mathcal{X}}}(x_0, r)} (C_\sigma + 1)\theta_x \\
&= (C_\sigma + 1)\sum_{x \in B_{d_\sigma}(x_0, r)} \theta_x \\
&\le (C_\sigma + 1)V^X_\sigma(x_0, r).
\end{aligned}$$

Remark 5.4.7. Note that it is not possible to obtain an inequality of the form $V^Z_{\rho_{\mathcal{X}}}(x_0, r) \ge C\,V^X_\sigma(x_0, r)$ for some $x_0 \in G$ and all $r \ge 0$.
In particular, for any $r \ge 0$, $V^X_\sigma(x_0, r) \ge \theta_{x_0}$, whereas $V^Z_{\rho_{\mathcal{X}}}(x_0, r) \to 0$ as $r \to 0$. This is due to the fact that the measure $\theta$ can be viewed as a Dirac measure on the vertices of $\mathcal{X}(\Gamma, \ell)$.

Corollary 5.4.8. Let $(\Gamma, \pi)$ be a weighted graph equipped with a $C_\rho$-adapted metric $\rho$. Let $\mathcal{X}_{\mathrm{loop}}(\Gamma, \ell, p, \gamma)$ be the metric graph such that, for $e \in E$,
$$\ell(e) := 1, \qquad p(e) := \pi(e), \qquad \gamma(e) := \rho^2(\underline{e}, \overline{e}),$$
and for each $x \in G$,
$$\ell(x_{\mathrm{loop}}) := 1, \qquad p(x_{\mathrm{loop}}) := \tfrac{1}{2}\theta_x, \qquad \gamma(x_{\mathrm{loop}}) := 1.$$
For any $x \in G$ and $y \sim x$,
$$\mathbb{P}^x_X(X_{\tilde{T}} = y) = \mathbb{P}^x_Z(Z_T = y), \qquad (5.11)$$
$$\mathbb{E}^x_X \tilde{T} \le \mathbb{E}^x_Z T \le (C_\rho + 1)\mathbb{E}^x_X \tilde{T}. \qquad (5.12)$$
Additionally, if $d_\sigma$ is the metric induced by the edge weights $\sigma(e) := \rho(\underline{e}, \overline{e})$, then for all $x, y \in G$, $d_\sigma(x, y) = \rho_{\mathcal{X}}(x, y)$, and for all $x_0 \in G$ and $r \ge 0$,
$$V^Z_{\rho_{\mathcal{X}}}(x_0, r) \le (C_\rho + 1)V^X_\rho(x_0, r).$$

Proof. Since the edge weights $(\sigma(e))_{e \in E}$ are $C_\rho$-adapted, the first part of this result was established in Theorem 5.4.6. By Proposition 3.1, (5.7), and Proposition 5.3.2, $\rho(x, y) \le d_\sigma(x, y) = \rho_{\mathcal{X}}(x, y)$ for all $x, y \in G$. Since $\rho(x, y) \le \rho_{\mathcal{X}}(x, y)$, for all $x_0 \in G$ and $r \ge 0$, $B_\rho(x_0, r) \supseteq B_{d_\sigma}(x_0, r) = G \cap B_{\rho_{\mathcal{X}}}(x_0, r)$, which implies that
$$V^Z_{\rho_{\mathcal{X}}}(x_0, r) \le (C_\rho + 1)V^X_{d_\sigma}(x_0, r) \le (C_\rho + 1)V^X_\rho(x_0, r).$$

Remark 5.4.9. If one-sided control on the expected jump times is sufficient (i.e., having $0 \le \mathbb{E}^x_Z T \le C_\rho \mathbb{E}^x_X \tilde{T}$), then there is no need to use loops; setting $\ell(e) := 1$, $p(e) := \pi(e)$, $\gamma(e) := \rho^2(\underline{e}, \overline{e})$ for each $e \in E$ on the loopless metric graph $\mathcal{X}(\Gamma, \ell, p, \gamma)$ will suffice. In this setting, we have the volume growth estimate $V^Z_{\rho_{\mathcal{X}}}(x_0, r) \le C_\rho V^X_\rho(x_0, r)$. We will use this weaker version in Theorem 6.2.4.

Remark 5.4.10. The use of loops in Theorem 5.4.6 and Corollary 5.4.8 is necessary in order to get the two-sided bound (5.9). If one wishes to prove an analogue of this result on the loopless metric graph $\mathcal{X}(\Gamma, \ell, p, \gamma)$ while still having control over the intrinsic metric for the associated Brownian motion, then the necessary condition on the edge weights $(\sigma(e))_{e \in E}$ is not adaptedness but rather strong adaptedness. This is problematic, since strongly adapted edge weights do not exist for every weighted graph.

Remark 5.4.11. Note that the choice $\ell(e) = 1$ for all $e \in E$ is somewhat arbitrary. Take any $\kappa : E \to (0, \infty)$. If one replaces $(\ell(e))_{e \in E}$, $(p(e))_{e \in E}$, $(\gamma(e))_{e \in E}$ with $(\ell'(e))_{e \in E}$, $(p'(e))_{e \in E}$, $(\gamma'(e))_{e \in E}$ satisfying $\ell'(e) := \ell(e)\kappa(e)$, $p'(e) := p(e)\kappa(e)$, $\gamma'(e) := \gamma(e)(\kappa(e))^{-2}$, then the Brownian motion on the new weighted metric graph behaves identically to the Brownian motion on the original one with respect to hitting probabilities and moments of hitting times.

In fact, the proof of Corollary 5.4.8 may be used to motivate the definition of adaptedness. Recall that strongly local Dirichlet spaces have a natural notion of distance associated with them via the intrinsic metric, and this distance reflects the behavior of the strong Markov process associated with the Dirichlet space. Weighted graphs do not have an analogous metric; while adapted metrics are a useful tool, there is no canonical choice of adapted metric. However, we have the following result.

Proposition 5.4.12. Let $(\Gamma, \pi, \theta)$ be a weighted graph, and let $\mathcal{X}(\Gamma, \ell, p, \gamma)$ be a metric graph such that for all $x \in G$ and $y \sim x$,
$$\mathbb{P}^x_X(X_{\tilde{T}} = y) = \mathbb{P}^x_Z(Z_T = y), \qquad (5.13)$$
$$\mathbb{E}^x_Z T \le C\,\mathbb{E}^x_X \tilde{T}. \qquad (5.14)$$
Suppose that $\rho$ is a metric on $(\Gamma, \pi)$ satisfying the following two conditions:

- For all $x, y \in G$, $\rho(x, y) \le \rho_{\mathcal{X}}(x, y)$;
- There exists $c_0 > 0$ such that for all $x \sim y$, $\rho(x, y) \le c_0$.

Then $\rho$ is a $C$-adapted metric.

Proof. By (5.14) and Theorem 5.4.1, it follows that for each $x \in G$,
$$\frac{1}{\theta_x}\sum_{e \in E(x)} \pi(e)\gamma(e)(\ell(e))^2 \le C.$$
If we set $\sigma(e) := (\gamma(e))^{1/2}\ell(e)$ for each $e \in E$, then the edge conductances $(\sigma(e))_{e \in E}$ induce a metric $d_\sigma$ which satisfies
$$\frac{1}{\theta_x}\sum_{e \in E(x)} \pi(e)\, d_\sigma^2(\underline{e}, \overline{e}) \le C \qquad (5.15)$$
for each $x \in G$, and by (5.7), for $x, y \in G$, $d_\sigma(x, y) = \rho_{\mathcal{X}}(x, y) \ge \rho(x, y)$. Consequently, for $x \in G$,
$$\frac{1}{\theta_x}\sum_{y \sim x} \pi_{xy}\rho^2(x, y) \le C,$$
and since $\rho(x, y) \le c_0$ for all $x \sim y$, $\rho$ is $C$-adapted.

Remark 5.4.13.
This result should be interpreted as follows. Suppose that one has a Brownian motion on a metric graph $\mathcal{X}(\Gamma, \ell, p, \gamma)$ which behaves similarly to the continuous-time simple random walk on a weighted graph $(\Gamma, \pi, \theta)$, in the sense that (5.13) and (5.14) are satisfied. The metric graph has a natural distance $\rho_{\mathcal{X}}$ given by the intrinsic metric, and it seems reasonable to expect that the intrinsic metric is also a "good" metric for the weighted graph. As a consequence of (5.13) and (5.14), the intrinsic metric satisfies (5.15). If one makes a leap of faith and assumes that any metric satisfying (5.15) is also a "good" metric, then one arrives at the definition of adaptedness.

Chapter 6

Recurrence and Transience

A natural probabilistic question to ask about (weighted) graphs is whether they are recurrent; that is, given a weighted graph $(\Gamma, \pi)$ and an arbitrary vertex $x \in G$, will a discrete-time simple random walk started at $x$ return to $x$ almost surely? Graphs which do not have this property are called transient, and the question of determining whether a given graph is transient or recurrent is known as the type problem. By a minor abuse of terminology we will call both the weighted graph $(\Gamma, \pi)$ and the associated discrete-time simple random walk $(Y_n)_{n \in \mathbb{Z}_+}$ recurrent.

Before continuing, let us make a few observations. First, the distinction between discrete time and continuous time is mostly insignificant in this setting, but one has to be careful with the definitions. For example, the CSRW or any continuous-time simple random walk will stay at its starting point for some positive amount of time before making its first jump, so one could say that a graph is recurrent if and only if one can find arbitrarily large times at which the CSRW returns to its starting point, and it is not difficult to show that this definition coincides with the definition of recurrence given above. The speed measure $\theta$ of the random walk is insignificant as well; since modifying the measure only changes the speed at which the random walk moves, it does not affect the probability that the random walk will return to its starting point. Consequently, in this section we will often find it more convenient to work with a weighted graph $(\Gamma, \pi)$ and the associated discrete-time simple random walk $(Y_n)_{n \ge 0}$.

6.1 Equivalent definitions of recurrence

Recurrence and transience of weighted graphs are well-studied properties, and their study predates the development of modern probability theory. The results of this section (Section 6.1) are well known, and we do not claim originality for any of them. We begin with some equivalent definitions of recurrence and transience. Given $U \subseteq G$, set $T_U := \inf\{n \ge 0 : Y_n \in U\}$ and $T^+_U := \inf\{n \ge 1 : Y_n \in U\}$; for $x \in G$ we write $T_x$ and $T^+_x$ for $T_{\{x\}}$ and $T^+_{\{x\}}$, respectively. In this notation, the aforementioned definition of recurrence is that $\mathbb{P}^x(T^+_x < \infty) = 1$ for all $x \in G$. We have the following equivalences:

Theorem 6.1.1. For a weighted graph $(\Gamma, \pi)$, the following are equivalent:

1. There exists $x \in G$ such that $\mathbb{P}^x(T^+_x < \infty) = 1$.
2. For all $x \in G$, $\mathbb{P}^x(T^+_x < \infty) = 1$.
3. For all $x, y \in G$, $\mathbb{P}^x(T^+_y < \infty) = 1$.
4. For all $x, y \in G$,
$$\mathbb{P}^x(Y_n = y \text{ i.o.}) = \mathbb{P}^x\Big(\sum_{n=0}^\infty \mathbf{1}_{\{Y_n = y\}} = \infty\Big) = 1.$$
5. For all $x \in G$,
$$\sum_{n=0}^\infty \mathbb{P}^x(Y_n = x) = \infty.$$

Proof. The equivalence between 2, 4, and 5 may be found in Theorem 4.2.2 in [23]. The equivalence between 1, 2, and 3 follows from noting that if $x \in G$ is visited infinitely often by $Y$, then for every $y \in G$, given two consecutive visits to $x$, there is a uniform, positive probability that $y$ is visited between these two visits.

Remark 6.1.2. The quantity $\sum_{n=0}^\infty \mathbb{P}^x(Y_n = x)$ can be interpreted as the expected number of visits to $x$ made by the random walk $Y$, started at $x$.

The preceding result defines recurrence in terms of the behavior of the discrete-time simple random walk $Y$ on $(\Gamma, \pi)$. However, we can define it directly in terms of the operator $\mathcal{L}_\pi$ (or the operators $\mathcal{L}_\theta$). Given a function $f \in$
$C(G)$ and a measure $\theta$ on $(\Gamma, \pi)$, we say that $f$ is $\theta$-harmonic (resp. $\theta$-superharmonic, $\theta$-subharmonic) if $(\mathcal{L}_\theta f)(x) = 0$ for all $x \in G$ (resp. $(\mathcal{L}_\theta f)(x) \le 0$, $(\mathcal{L}_\theta f)(x) \ge 0$).

Theorem 6.1.3. A weighted graph $(\Gamma, \pi)$ is transient if and only if there exists a nonconstant nonnegative $\pi$-superharmonic function $f \in C(G)$.

Proof. Suppose that $(\Gamma, \pi)$ is transient, fix $x_0 \in G$, and set $u_{x_0}(x) := \mathbb{P}^x(T_{x_0} < \infty)$. Since $u_{x_0}(x_0) = 1$ and $u_{x_0}$ is not identically equal to 1 (by transience), a direct computation shows that $u_{x_0}$ is a nonconstant nonnegative $\pi$-superharmonic function.

Conversely, suppose that $(\Gamma, \pi)$ is recurrent, and let $f$ be a nonnegative $\pi$-superharmonic function. We can verify that the sequence of random variables $(M_n)_{n \in \mathbb{N}}$ defined by $M_n := f(Y_n)$ is a nonnegative supermartingale, so $M_n$ converges almost surely (see, for example, Theorem 5.2.9 in [23]), and since $Y_n$ visits all vertices infinitely often, $f$ must be constant.

Remark 6.1.4. This result makes it particularly clear that the choice of measure $\theta$ is of no consequence, since if $\theta_1$ and $\theta_2$ are distinct measures on $(\Gamma, \pi)$, then a function is $\theta_1$-superharmonic if and only if it is $\theta_2$-superharmonic.

Remark 6.1.5. Recurrent graphs have the strong Liouville property: all harmonic functions on a recurrent graph are constant. However, the converse is false, as $(\mathbb{Z}^d, \mathbb{E}^d)$ has the strong Liouville property for all $d \ge 1$; see [75].

Remark 6.1.6. Modern work on recurrence and transience of graphs generally employs electrical network theory. One can show that a graph is transient if and only if the effective resistance between any vertex and "infinity" is finite, and this is a very convenient characterization of recurrence and transience in applications. See [22] or [57] for details.

The first major result in this area was established by Pólya, concerning random walks on the $d$-dimensional integer lattice:

Theorem 6.1.7. (Pólya, [63]) The Euclidean lattice $(\mathbb{Z}^d, \mathbb{E}^d)$ is recurrent for $d = 1, 2$ and transient for $d \ge 3$.

Proof. By Theorem 6.1.1, it suffices to establish on-diagonal bounds of the form $\mathbb{P}^x(Y_{2n} = x) \asymp C_d n^{-d/2}$ for $x \in \mathbb{Z}^d$: a lower bound of this form yields recurrence for $d = 1, 2$, and an upper bound yields transience for $d \ge 3$. Note that because $(\mathbb{Z}^d, \mathbb{E}^d)$ is bipartite, $\mathbb{P}^x(Y_{2n+1} = x) = 0$ for all $n \in \mathbb{N}$.

For $d = 1, 2$, this may be accomplished via a simple combinatorial estimate; if $d = 1$, Stirling's formula gives
$$\mathbb{P}^0(Y_{2n} = 0) = \binom{2n}{n}\frac{1}{2^{2n}} \sim \frac{1}{(\pi n)^{1/2}},$$
and for $d = 2$ we have
$$\mathbb{P}^{(0,0)}(Y_{2n} = (0,0)) = \left(\binom{2n}{n}\frac{1}{2^{2n}}\right)^2 \sim \frac{1}{\pi n}.$$
The latter estimate may be obtained by considering a modified random walk which takes one step horizontally and one step vertically at each integer time; this yields a random walk on the two-dimensional "diagonal" lattice, which is isomorphic to $(\mathbb{Z}^2, \mathbb{E}^2)$.

For $d \ge 3$, the computations are more complex. One can use a version of the local central limit theorem (see Theorem 2.1.1 in [56]) to obtain that for $x \in \mathbb{Z}^d$,
$$\mathbb{P}^x(Y_{2n} = x) \le C_d n^{-d/2}.$$
The LCLT also gives bounds for $d = 1, 2$, but the above combinatorial methods are more elementary and instructive.

On the other hand, for a much faster growing family of graphs such as the $d$-regular infinite trees, proving transience is more or less trivial:

Theorem 6.1.8. For $d \ge 3$, the $d$-regular tree $\mathbb{T}_d$ is transient.

Proof. Let $Y_n$ be the random walk started at a vertex $x \in \mathbb{T}_d$. The stochastic process $D_n := d(x, Y_n)$ can be identified with a nearest-neighbor random walk on $(\mathbb{Z}_{\ge 0}, \mathbb{E}_{\ge 0})$ which jumps from $n$ to $n+1$ with probability $(d-1)/d > 1/2$ and from $n$ to $n-1$ with probability $1/d$. A straightforward calculation shows that the process $D$ is transient, with
$$\mathbb{P}^0(D \text{ returns to } 0) = \frac{1}{d-1}.$$
With positive probability $D$ does not return to 0, so with positive probability $Y$ does not return to $x$, and $Y$ is transient.

Remark 6.1.9. The 2-regular infinite tree is isomorphic to $(\mathbb{Z}^1, \mathbb{E}^1)$, so of course this argument fails there.

6.2 Volume growth and recurrence

The previous results hint that the critical volume growth threshold for recurrence/transience is in general between $r^2$ and $r^3$. However, one can find examples of recurrent graphs with arbitrarily large volume growth.
See, for example, the recurrent infinite tree described in Example 6.16 in [75], which has exponential volume growth. Consequently, we will investigate the relationship between recurrence and upper bounds for the adapted volume growth.

If one considers $(\Gamma, \pi, \pi)$ and measures volume growth with respect to the graph metric, sharp results are known. We begin with some terminology. A set of edges $\Pi$ separates $x_0 \in G$ and $\infty$ if every infinite path of vertices starting at $x_0$ with no repeated vertices contains an endpoint of some edge in $\Pi$; we will also call $\Pi$ a cutset.

Theorem 6.2.1. (The "Nash-Williams criterion", [59]) If $(\Pi_n)_{n \in \mathbb{Z}_+}$ is a sequence of pairwise disjoint cutsets separating $x_0$ from $\infty$ such that
$$\sum_{n \in \mathbb{Z}_+}\Big(\sum_{e \in \Pi_n} \pi(e)\Big)^{-1} = \infty,$$
then $(\Gamma, \pi)$ is recurrent.

Proof. See Section 2.5 in [57].

This result can be used to obtain a volume growth criterion for recurrence of graphs:

Theorem 6.2.2. (Grigor'yan, [40]) Let $(\Gamma, \pi)$ be a weighted graph. If there exists $x_0 \in G$ such that
$$\sum_{r=0}^\infty \frac{r}{\pi(B_d(x_0, r))} = \infty,$$
then $(\Gamma, \pi)$ is recurrent.

Proof. See Theorem 5.16 in [40].

This estimate is analogous to the sharp volume growth estimate for recurrence on manifolds and for strongly local Dirichlet spaces; see e.g. [69]. In fact, we will use Sturm's result shortly to prove a more general version of Theorem 6.2.2, so we state it here:

Theorem 6.2.3. (Sturm, [69]) Let $(\mathcal{E}, \mathcal{D}(\mathcal{E}))$ be a strongly local Dirichlet form on $L^2(X, m)$, with intrinsic metric $\rho_I$. Suppose that there exist $x_0 \in X$ and $r_0 > 0$ such that
$$\int_{r_0}^\infty \frac{r}{m(B_{\rho_I}(x_0, r))}\,dr = \infty.$$
Then the strong Markov process $(Z_t)_{t \ge 0}$ associated with $(\mathcal{E}, \mathcal{D}(\mathcal{E}))$ is almost surely recurrent, in the sense that it visits any open set $U \subseteq X$ at arbitrarily large times.

One can also obtain a criterion for recurrence involving adapted metrics. We will use metric graphs to give a short proof of this result.

Theorem 6.2.4. Let $(\Gamma, \pi, \theta)$ be a measure weighted graph, and suppose that there is an adapted metric $\rho$ on $(\Gamma, \pi, \theta)$ satisfying
$$\int_{r_0}^\infty \frac{r}{\theta(B_\rho(x_0, r))}\,dr = \infty$$
for some $x_0 \in G$ and $r_0 > 0$. Then $(\Gamma, \pi)$ is recurrent.

Proof. Suppose that there exists an adapted metric $\rho$ on $(\Gamma, \pi, \theta)$ satisfying
$$\int_{r_0}^\infty \frac{r}{\theta(B_\rho(x_0, r))}\,dr = \infty \qquad (6.1)$$
for some $x_0 \in G$ and $r_0 > 0$, and let $(X_t)_{t \ge 0}$ be the continuous-time simple random walk associated with $(\Gamma, \pi, \theta)$. By Corollary 5.4.8 and Remark 5.4.9, one can define a metric graph $\mathcal{X}(\Gamma, \ell, p, \gamma)$ such that the Brownian motion $(Z_t)_{t \ge 0}$ on this metric graph has the same jump probabilities as $X$ and satisfies, for all $x \in G$, $\mathbb{E}^x_Z T \le C_\rho \mathbb{E}^x_X \tilde{T}$, where $T$ and $\tilde{T}$ are the times it takes $Z$ and $X$, respectively, to reach a new vertex.

We now define a simple random walk $(Y_n)_{n \in \mathbb{Z}_+}$ using the process $(Z_t)_{t \ge 0}$ by setting $Y_0 = Z_0$ and defining, for $n \ge 1$, $Y_n$ to be the $n$th vertex visited by $Z$. Clearly, $Y$ is recurrent if and only if $Z$ returns to the origin at arbitrarily large times.

By (6.1) and the estimate $V^Z_{\rho_{\mathcal{X}}}(x_0, r) \le C_\rho V^X_\rho(x_0, r) := C_\rho\,\theta(B_\rho(x_0, r))$ (which is part of the conclusion of Remark 5.4.9), we have that
$$\int_{r_0}^\infty \frac{r}{V^Z_{\rho_{\mathcal{X}}}(x_0, r)}\,dr = \infty. \qquad (6.2)$$
By Theorem 6.2.3, this implies that $Z$ is recurrent; in the setting of strongly local Dirichlet spaces, recurrence means that almost surely, $Z$ returns to any neighborhood of its starting point at arbitrarily large times. However, $\mathcal{X}(\Gamma, \ell, p, \gamma)$ is locally one-dimensional; if we take our neighborhood to be $B := B_{\rho_{\mathcal{X}}}(x_0, r)$, where $0 < r < \min\{\ell(e) : e \text{ incident to } x_0\}$, then with positive probability bounded away from 0, started anywhere in $B \setminus \{x_0\}$, the process $Z$ will hit $x_0$ before hitting any of the neighbors of $x_0$ (because this happens with positive probability for Brownian motion started on the interior of any of the edges incident to $x_0$). Since $Z$ visits $B$ infinitely often by recurrence, this means that $Z$ almost surely hits $x_0$ at arbitrarily large times. In particular, $Y$ is recurrent, and by Remark 6.1.4, $(X_t)_{t \ge 0}$ is recurrent as well.

Remark 6.2.5. Recently, a direct proof of Theorem 6.2.4 was given by Hua and Keller in [45].

Remark 6.2.6. One can use isoperimetric inequalities to obtain criteria for transience.
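On the recurrence side, the divergence condition in the Nash-Williams criterion (Theorem 6.2.1) is easy to probe numerically. On a half-line every single edge $\{j, j+1\}$ is by itself a cutset, so the Nash-Williams sum is just $\sum_j 1/\pi(\{j, j+1\})$. The sketch below is our own illustration with the hypothetical weights $\pi(\{n, n+1\}) = (n+1)\log_+^\alpha(n+1)$, where we take $\log_+ := \max(1, \log)$ as a working convention; it contrasts a divergent (recurrent) case with a convergent one.

```python
import math

def resistance_partial_sum(alpha, N):
    """Partial sums of sum_j 1/pi({j, j+1}) on the half-line with the
    hypothetical weights pi({n, n+1}) = (n+1) * logp(n+1)**alpha, where
    logp(t) = max(1, log t).  On the half-line each edge {j, j+1} is a
    cutset, so this is also the Nash-Williams sum."""
    s = 0.0
    for j in range(N):
        t = j + 1
        s += 1.0 / (t * max(1.0, math.log(t)) ** alpha)
    return s

# alpha = 0.5: the sums keep growing (divergence, hence recurrence),
# while for alpha = 2 they stabilize (convergence).
for alpha in (0.5, 2.0):
    print(alpha, [round(resistance_partial_sum(alpha, 10**k), 3) for k in (2, 3, 4)])
```

For $\alpha = 1/2$ the partial sums grow on the order of $\sqrt{\log N}$ and never settle, while for $\alpha = 2$ the increments between $N = 10^3$ and $N = 10^4$ are already small, consistent with a finite effective resistance to infinity.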
Working in the case where $\theta = \pi$ and $\rho = d$, Grigor'yan established sufficient criteria for transience using isoperimetric and Faber-Krahn inequalities in [40]. The isoperimetric and Faber-Krahn inequalities imply on-diagonal upper bounds for the heat kernel, which imply transience via Theorem 6.1.1. However, analogous results have not yet been proven for weighted graphs $(\Gamma, \pi)$ equipped with an arbitrary measure $\theta$.

6.3 Examples

We conclude this section by showing that Theorem 6.2.4 yields sharp results in some settings.

Example 6.3.1 (Weighted half-line). Let $\Gamma = (\mathbb{Z}_{\ge 0}, \mathbb{E}_{\ge 0})$, and set $\pi_\alpha(\{n, n+1\}) = (n+1)\log_+^\alpha(n+1)$ for $n \ge 0$; we will consider $(\Gamma, \pi_\alpha, \pi_\alpha)$ and measure volume growth with respect to the graph metric $d$.

We have
$$V_d(0, r) \asymp \sum_{j=0}^{r-1}(j+1)\log_+^\alpha(j+1) \asymp (r+1)^2\log_+^\alpha(r+1).$$
Consequently, we have that
$$\int_{r_0}^\infty \frac{r}{\pi_\alpha(B_d(0, r))}\,dr = \infty$$
if and only if $0 \le \alpha \le 1$, so Theorem 6.2.4 or Theorem 6.2.2 shows that $(\Gamma, \pi_\alpha)$ is recurrent if $\alpha \le 1$.

To see that this is sharp, we use electrical network methods. Using the one-dimensional structure of $\Gamma$, one can compute directly that the effective resistance between 0 and $\infty$ is given by
$$R_{\mathrm{eff}}(0 \to \infty) = \sum_{j=0}^\infty \frac{1}{\pi_\alpha(\{j, j+1\})} = \sum_{j=0}^\infty \frac{1}{(j+1)\log_+^\alpha(j+1)},$$
and consequently $R_{\mathrm{eff}}(0 \to \infty) = \infty$ if and only if $\alpha \le 1$. Standard results of electrical network theory (see e.g. Theorem 2.3 of [57]) imply that $(\Gamma, \pi_\alpha)$ is recurrent if and only if $R_{\mathrm{eff}}(0 \to \infty) = \infty$, so we conclude that in fact $(\Gamma, \pi_\alpha)$ is recurrent if and only if $\alpha \le 1$.

Chapter 7

Spectrum

As seen in Theorem 2.5.2 and Theorem 4.2.2, positivity of the bottom of the spectrum is related to very fast heat kernel decay.
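Positivity of the bottom of the spectrum can also be seen concretely by computing Dirichlet eigenvalues on finite balls. The following sketch is our own numerical illustration, not from the text: for the 3-regular tree, the bottom of the spectrum of the normalized Laplacian on the infinite tree is the classical value $1 - 2\sqrt{d-1}/d \approx 0.0572$, and the smallest Dirichlet eigenvalue of a ball decreases toward it as the ball grows.

```python
import numpy as np

def dirichlet_lambda0(d, depth):
    """Smallest Dirichlet eigenvalue of the normalized Laplacian on the
    ball of the given depth in the d-regular tree, with the boundary
    sphere removed (Dirichlet condition)."""
    # Build the rooted tree: the root has d children, other vertices d-1.
    parents = [-1]
    levels = [[0]]
    for lev in range(1, depth + 1):
        new = []
        for p in levels[-1]:
            k = d if lev == 1 else d - 1
            for _ in range(k):
                parents.append(p)
                new.append(len(parents) - 1)
        levels.append(new)
    interior = [v for level in levels[:-1] for v in level]  # drop the boundary sphere
    idx = {v: i for i, v in enumerate(interior)}
    A = np.zeros((len(interior), len(interior)))
    for v, p in enumerate(parents):
        if p >= 0 and v in idx and p in idx:
            A[idx[v], idx[p]] = A[idx[p], idx[v]] = 1.0
    # Dirichlet normalized Laplacian: I - A/d (every vertex has degree d in T_d).
    return 1.0 - np.linalg.eigvalsh(A).max() / d

lam = dirichlet_lambda0(3, 7)
print(lam, 1 - 2 * np.sqrt(2) / 3)  # lam decreases toward ~0.0572 as depth grows
```

By domain monotonicity, the computed values decrease as the depth grows while always staying above the infinite-tree value, which is one way to make the extremality of the regular tree discussed below tangible.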
We have seen in Example 2.5.5 that graphs of exponential growth (in particular, the $d$-regular infinite tree) may have positive bottom of spectrum, and in fact the $d$-regular tree is extremal: the bottom of the spectrum for weighted graphs with subexponential adapted volume growth is always 0, and the $d$-regular tree has the largest possible bottom of spectrum among all infinite graphs with the same exponential rate of volume growth. We will prove these assertions in this chapter, in addition to sharp estimates relating the bottom of the essential spectrum to the adapted volume growth.

7.1 Volume growth and the bottom of the essential spectrum

The exponential rate of volume growth of a weighted graph $(\Gamma, \pi, \theta)$ with respect to a metric $\rho$ is given by
$$\mu_\rho(\Gamma, \pi, \theta) := \limsup_{r \to \infty} \frac{1}{r}\log V_\rho(x_0, r). \qquad (7.1)$$
An application of the triangle inequality shows that the right-hand side of (7.1) is independent of the choice of $x_0 \in G$. If $\mu_\rho(\Gamma, \pi, \theta) = 0$ (resp. $\mu_\rho(\Gamma, \pi, \theta) \in (0, \infty)$, $\mu_\rho(\Gamma, \pi, \theta) = \infty$), we say that $(\Gamma, \pi, \theta)$ has subexponential (resp. exponential, superexponential) volume growth with respect to the metric $\rho$.

In the continuous setting of Riemannian manifolds, Brooks established the following result relating volume growth in the Riemannian metric to the bottom of the spectrum of the associated Laplace-Beltrami operator:

Theorem 7.1.1. (Brooks, [10]) Let $M$ be a geodesically complete non-compact Riemannian manifold with Riemannian metric $\rho_M$ and Riemannian volume $V_M$, and define, for some $x_0 \in M$,
$$\mu_M := \limsup_{r \to \infty} \frac{1}{r}\log V_M(B_{\rho_M}(x_0, r)).$$
Let $\lambda_0^{\mathrm{ess}}(M)$ denote the bottom of the essential spectrum of the Laplace-Beltrami operator $-\Delta_M$ on $L^2(M)$. If $M$ has infinite volume, then
$$\lambda_0^{\mathrm{ess}}(M) \le \frac{1}{4}\mu_M^2.$$

A similar estimate was subsequently proved for strongly local Dirichlet spaces by Sturm in [69].

In the nonlocal setting of graphs, the first sharp results were established by Fujiwara, who considered the spectrum of the operator $\mathcal{L}_\pi$ on unweighted graphs:

Theorem 7.1.2. (Fujiwara, [30]) If $\Gamma$
is an unoriented, connected, countably infinite, locally finite graph with the natural weights, then
$$\lambda_0^{\mathrm{ess}}(\Gamma, \pi) \le 1 - \frac{2\exp(\mu_d(\Gamma, \pi)/2)}{1 + \exp(\mu_d(\Gamma, \pi))}.$$

While the Laplace-Beltrami operator $\Delta_M$ considered by Brooks is generically unbounded, the operator $\mathcal{L}_\pi$ considered by Fujiwara is bounded. In this setting, there is no need to consider metrics besides the graph metric, and the boundedness of $\mathcal{L}_\pi$ rules out various interesting behaviors, such as absence of essential spectrum, or the possibility of stochastic incompleteness.

Before proceeding to our results, we discuss some technical assumptions. In this section, we will assume that the weighted graph $(\Gamma, \pi, \theta)$ under consideration satisfies
$$\theta(U) = \infty \qquad (7.2)$$
whenever $U \subseteq G$ is infinite and connected (with respect to the graph structure). This condition may be viewed as a discrete analogue of the assumption in [10] that the manifold under consideration is non-compact and has infinite volume, and is implicit in [30] due to the fact that the edge weights are uniformly bounded below.

The analytic consequences of this condition were discussed in Chapter 2. Let $Q$ be the usual Dirichlet form, with domain $\mathcal{D}(Q)$ given by the closure of $C_c(G)$ in the $Q_1$-norm; consequently $(Q, \mathcal{D}(Q))$ is a closed, regular Dirichlet form. For simplicity we will use $\|\cdot\|_Q$ to denote the $Q_1$-norm. The local finiteness of $\Gamma$ and (7.2) imply that the infinitesimal generator associated with the regular Dirichlet form $(Q, \mathcal{D}(Q))$ is the self-adjoint operator $\mathcal{L}_\theta$ with domain $\mathcal{D}(\mathcal{L}_\theta) = \{f \in L^2(\theta) : \mathcal{L}_\theta f \in L^2(\theta)\}$.

We have two main results for the bottom of the essential spectrum in the setting of weighted graphs. These results appeared first in [28]. For notational simplicity, given an adapted metric $\rho$ on $(\Gamma, \pi, \theta)$, we denote the exponential rate of volume growth by $\mu_\rho := \mu_\rho(\Gamma, \pi, \theta)$.

Our first result is an analogue of Theorem 7.1.1 for weighted graphs, and is valid even when $\mathcal{L}_\theta$ is unbounded.

Theorem 7.1.3. Suppose that $(\Gamma, \pi, \theta)$ is a weighted graph satisfying (7.2). If $\rho$ is a 1-adapted metric on $(\Gamma, \pi, \theta)$, then
$$\lambda_0^{\mathrm{ess}}(\Gamma, \pi, \theta) \le \frac{1}{8}\mu_\rho^2.$$

Remark 7.1.4. The exponent 2 cannot be reduced; see Example 7.4.1.

Our second result generalizes results of Fujiwara in [30] and yields better results than Theorem 7.1.3 when $\mathcal{L}_\theta$ is bounded and $\mu_\rho$ is large.

Theorem 7.1.5. Suppose that $(\Gamma, \pi, \theta)$ is a weighted graph satisfying (7.2). If $\rho$ is a 1-adapted metric on $(\Gamma, \pi, \theta)$ for which there exist positive constants $m, M$ such that $m \le \rho(x, y) \le M$ whenever $x \sim y$, then
$$\lambda_0^{\mathrm{ess}}(\Gamma, \pi, \theta) \le \frac{1}{m^2}\cdot\frac{\big(1 - \exp(\tfrac{1}{2}M\mu_\rho)\big)^2}{1 + \exp(M\mu_\rho)}. \qquad (7.3)$$

Adapted metrics satisfying the hypothesis of this theorem exist if and only if $\mathcal{L}_\theta$ is a bounded operator. On the $k$-regular tree with the standard weights, there is equality in (7.3) for $\mathcal{L}_\pi$ when using the graph metric $d$, or for $\mathcal{L}_{\mathbf 1}$ when using the metric $\frac{1}{\sqrt{k}}d$; see Example 7.4.1.

Theorem 7.1.3 has the following two immediate corollaries:

Corollary 7.1.6. If there exists an adapted metric $\rho$ on $(\Gamma, \pi, \theta)$ such that $\mu_\rho = 0$, then $\lambda_0^{\mathrm{ess}}(\Gamma, \pi, \theta) = 0$.

The converse of this statement is false; in [20], it is shown that a counterexample is provided by the operator $\mathcal{L}_\pi$ on the Cayley graph of any solvable group with exponential growth, such as the lamplighter group.

Corollary 7.1.7. If there exists an adapted metric $\rho$ on $(\Gamma, \pi, \theta)$ such that $\mu_\rho < \infty$, then the essential spectrum is nonempty.

This result cannot be improved upon in the following sense: for every $\varepsilon > 0$, there exists a weighted graph $(\Gamma, \pi)$ satisfying the following conditions:

1. $\sigma_{\mathrm{ess}}(\Gamma, \pi, \mathbf{1}) = \emptyset$,
2. There exists a metric $\rho$ adapted to $(\Gamma, \pi, \mathbf{1})$ satisfying
$$\limsup_{r \to \infty} \frac{1}{r^{1+\varepsilon}}\log V_\rho(x_0, r) < \infty.$$

See Example 7.4.2 and Example 7.4.4.

The similarities between the results of this section and the corresponding results for manifolds (and strongly local Dirichlet spaces) provide further evidence that adapted metrics are the right tool for studying heat flow and random walks on graphs where the graph Laplacian is unbounded.

The structure of this section is as follows.
We begin by proving Theorem 7.1.5 and Theorem 7.1.3 in Section 7.2; this is accomplished by using adapted metrics as well as the techniques of Brooks and Fujiwara. In particular, the Dirichlet forms of certain functions of these metrics can be controlled by the L²(μ) norm. In Section 7.3, we discuss the phenomenon of absence of essential spectrum and its relation to volume growth, and in Section 7.4, we provide various examples showing the sharpness of our results, as well as settings where the graph metric gives very poor results and the use of adapted metrics seems to be essential.

7.2 Proof of Theorem 7.1.3 and Theorem 7.1.5

We fix a weighted graph (Γ, π, μ) satisfying (7.2). We also fix a metric ρ which is adapted to the operator L_μ, and set ρ_{x₀} := ρ(x₀, ·) for x₀ ∈ G. By the triangle inequality, |ρ_{x₀}(x) − ρ_{x₀}(y)| ≤ ρ(x, y). For simplicity, we write μ_ρ = μ_ρ(Γ, π, μ).

It suffices to assume that μ_ρ < ∞. If μ_ρ = ∞, then the conclusion of Theorem 7.1.3 is λ₀^ess(Γ, π, μ) ≤ ∞, which is trivial. If the hypotheses of Theorem 7.1.5 are satisfied (with m ≤ ρ(x, y) whenever x ~ y) and μ_ρ = ∞, then the conclusion of Theorem 7.1.5 is

λ₀^ess(Γ, π, μ) ≤ 1/m².  (7.4)

On the other hand, this estimate follows directly by noting that for x ∈ G, ‖μ_x^{−1/2} 1_x‖_{L²(μ)} = 1, and

Q(μ_x^{−1/2} 1_x, μ_x^{−1/2} 1_x) = π_x/μ_x ≤ 1/m²,  (7.5)

where the last inequality follows from adaptedness of ρ and the inequality m ≤ ρ(x, y) for x ~ y. The inequality (7.4) then follows from (7.5) and (2.10).

In particular, the assumption μ_ρ < ∞ implies that for all x₀ ∈ G and r ≥ 0, the balls B_ρ(x₀, r) have finite volume (i.e., for all x₀ ∈ G and r ≥ 0, V_ρ(x₀, r) < ∞). Combining this with (7.2), we see that for x₀ ∈ G and r ≥ 0, the balls B_ρ(x₀, r) contain only finitely many points as well.

We begin with the following lemma:

Lemma 7.2.1. If α > 0 satisfies μ_ρ < 2α, then ⟨exp(−αρ_{x₀}), exp(−αρ_{x₀})⟩_{L²(μ)} < ∞.

Proof. We estimate

⟨exp(−αρ_{x₀}), exp(−αρ_{x₀})⟩_{L²(μ)} = Σ_{x∈G} exp(−2αρ_{x₀}(x)) μ_x
  ≤ μ_{x₀} + Σ_{r=0}^∞ Σ_{x : ρ_{x₀}(x)∈(r,r+1]} exp(−2αr) μ_x
  = μ_{x₀} + Σ_{r=0}^∞ (V_ρ(x₀, r+1) − V_ρ(x₀, r)) exp(−2αr)
  ≤ μ_{x₀} + (exp(2α) − 1) Σ_{r=1}^∞ V_ρ(x₀, r) exp(−2αr).

The last expression is finite by the definition of μ_ρ and comparison with a geometric series.

We now fix α > μ_ρ/2. For j ∈ Z₊, we define a sequence of "tent functions" by

h_j(x) := αρ_{x₀}(x) if ρ_{x₀}(x) ≤ j,  h_j(x) := 2αj − αρ_{x₀}(x) if ρ_{x₀}(x) > j,

and set f_j := exp(h_j). We also define

Φ(t) := (1 − e^{αt})²/(1 + e^{2αt}) = 1 − 2e^{αt}/(1 + e^{2αt}).

On [0, ∞), Φ is increasing, and satisfies the inequality

Φ(t) ≤ α²t²/2.  (7.6)

Lemma 7.2.2. If x ~ y, then for all j ∈ Z₊,

(f_j(y) − f_j(x))² ≤ Φ(ρ(x, y)) (f_j²(x) + f_j²(y)).  (7.7)

Proof. Fix j ∈ Z₊. We prove first that for all x ~ y and all j ∈ Z₊, we have

(f_j(y) − f_j(x))² ≤ Φ(|ρ_{x₀}(y) − ρ_{x₀}(x)|) (f_j²(x) + f_j²(y)).  (7.8)

We may assume without loss of generality that ρ_{x₀}(y) ≥ ρ_{x₀}(x). It suffices to check two cases.

Case 1: j ≥ ρ_{x₀}(y) ≥ ρ_{x₀}(x) or j ≤ ρ_{x₀}(x) ≤ ρ_{x₀}(y). In this case, one verifies directly that there is equality in (7.8).

Case 2: ρ_{x₀}(x) < j < ρ_{x₀}(y). First, the inequality 2j − (ρ_{x₀}(y) + ρ_{x₀}(x)) ≤ ρ_{x₀}(y) − ρ_{x₀}(x) is equivalent to j ≤ ρ_{x₀}(y), which is true by hypothesis. A direct computation and the fact that Φ is increasing on [0, ∞) then yields the inequality

(f_j(y) − f_j(x))²/(f_j²(x) + f_j²(y)) = Φ(2j − (ρ_{x₀}(x) + ρ_{x₀}(y))) ≤ Φ(ρ_{x₀}(y) − ρ_{x₀}(x)) = Φ(|ρ_{x₀}(y) − ρ_{x₀}(x)|).

Finally, combining (7.8) with the inequality |ρ_{x₀}(x) − ρ_{x₀}(y)| ≤ ρ(x, y) and the fact that Φ is increasing on [0, ∞) yields that for all x ~ y and all j ∈ Z₊,

(f_j(y) − f_j(x))² ≤ Φ(|ρ_{x₀}(y) − ρ_{x₀}(x)|)(f_j²(x) + f_j²(y)) ≤ Φ(ρ(x, y))(f_j²(x) + f_j²(y)),

which completes the proof.

We now use (7.7) to prove two new bounds:

Corollary 7.2.3. Suppose there exist constants m, M > 0 such that m ≤ ρ(x, y) ≤ M whenever x ~ y. Then for all x ~ y and j ∈ Z₊,

(f_j(y) − f_j(x))² ≤ (ρ²(x, y)/m²) Φ(M) (f_j²(x) + f_j²(y)).

Proof. The inequality m ≤ ρ(x, y) is equivalent to ρ²(x, y)/m² ≥ 1, and since ρ(x, y) ≤ M and Φ is increasing on [0, ∞), Φ(ρ(x, y)) ≤ Φ(M). Combining these with (7.7) gives the result.

Corollary 7.2.4. For x ~ y and any j ∈ Z₊,

(f_j(y) − f_j(x))² ≤ ρ²(x, y) (α²/2) (f_j²(x) + f_j²(y)).

Proof.
This follows from (7.7) together with (7.6).

These estimates give us control of Q(f_j, f_j), as follows:

Lemma 7.2.5. Suppose that f ∈ D(Q) has the property that for all x ~ y, (f(y) − f(x))² ≤ Cρ²(x, y)(f²(x) + f²(y)). Then

Q(f, f) ≤ C‖f‖²_{L²(μ)}.

Proof. Using adaptedness of ρ, we have

Q(f, f) := (1/2) Σ_{x,y∈G} π_{xy}(f(y) − f(x))²
  ≤ (C/2) Σ_{x∈G} Σ_{y~x} π_{xy} ρ²(x, y)(f²(x) + f²(y))
  = C Σ_{x∈G} ((1/μ_x) Σ_{y~x} π_{xy} ρ²(x, y)) f²(x) μ_x
  ≤ C‖f‖²_{L²(μ)}.

To apply this to the functions f_j, we should check that f_j ∈ D(Q) for all j ∈ Z₊. Fix j ∈ Z₊. Since h_j ≤ 2αj − αρ_{x₀}, we have 0 ≤ f_j ≤ exp(2αj − αρ_{x₀}), so Lemma 7.2.1 gives f_j ∈ L²(μ), and the estimate in Lemma 7.2.5 shows that ‖f_j‖_Q < ∞. Note that for any ε > 0, if R_j^ε := j ∨ (2j − α^{−1} log ε), then on G \ B_ρ(x₀, R_j^ε) we have 0 ≤ f_j ≤ ε. Since B_ρ(x₀, R_j^ε) is finite (using (7.2) and the assumption that μ_ρ < ∞), f_j^ε := (f_j − ε)₊ ∈ C_c(G); note that Q(f_j^ε, f_j^ε) ≤ Q(f_j, f_j). Now, since f_j^ε → f_j as ε → 0, we conclude that ‖f_j^ε − f_j‖_Q → 0 as ε → 0, and hence f_j ∈ D(Q).

Combining Corollary 7.2.3 and Corollary 7.2.4 with Lemma 7.2.5, we have the following:

Lemma 7.2.6. Suppose there exist constants m, M > 0 such that m ≤ ρ(x, y) ≤ M whenever x ~ y. Then for all j ∈ Z₊,

Q(f_j, f_j) ≤ (1/m²) (1 − exp(αM))²/(1 + exp(2αM)) ‖f_j‖²_{L²(μ)}.  (7.9)

Lemma 7.2.7. For all j ∈ Z₊,

Q(f_j, f_j) ≤ (α²/2) ‖f_j‖²_{L²(μ)}.  (7.10)

Now, to combine the previous results, we assume that there exists a positive, increasing, continuous function t ↦ I_Φ(t) such that

Q(f_j, f_j) ≤ I_Φ(α) ‖f_j‖²_{L²(μ)}.  (7.11)

Let K be a finite set, and define M_K := max{ρ(x₀, x) : x ∈ K} and g_j := f_j(1 − 1_K) (here, we use 1_U to denote the indicator function of U ⊆ G). Note that by construction, K ⊆ B_ρ(x₀, M_K).

Lemma 7.2.8. For all j ∈ Z₊, ⟨g_j, g_j⟩_{L²(μ)} < ∞, and

lim_{j→∞} ⟨g_j, g_j⟩_{L²(μ)} = ∞.

Proof. First, for j ∈ Z₊,

⟨g_j, g_j⟩_{L²(μ)} = Σ_{x∈B_ρ(x₀,j)} g_j²(x) μ_x + Σ_{x∈G\B_ρ(x₀,j)} g_j²(x) μ_x
  ≤ exp(2αj) V_ρ(x₀, j) + exp(4αj) ⟨exp(−αρ_{x₀}), exp(−αρ_{x₀})⟩_{L²(μ)},

and finiteness follows from Lemma 7.2.1.

For the second part, if j > M_K, then since g_j ≥ 1 on B_ρ(x₀, j) \ B_ρ(x₀, M_K), it follows that

⟨g_j, g_j⟩_{L²(μ)} ≥ V_ρ(x₀, j) − V_ρ(x₀, M_K),  (7.12)

and the claim follows from (7.12), since V_ρ(x₀, M_K) < ∞ and V_ρ(x₀, j) → ∞ as j → ∞ (using (7.2)).

Next, we estimate the Dirichlet forms Q(g_j, g_j) as follows:

Lemma 7.2.9. There exists a constant c(α, ρ, K) such that for j ∈ Z₊,

Q(g_j, g_j) ≤ c(α, ρ, K) + I_Φ(α) ⟨g_j, g_j⟩_{L²(μ)}.

Additionally, g_j ∈ D(Q) for each j ∈ Z₊.

Proof. Fix j ∈ Z₊. First, we note that g_j = f_j on G \ B_ρ(x₀, M_K), and for all j ∈ Z₊ and r > 0, since h_j(x) ≤ αρ_{x₀}(x), we have 0 ≤ g_j ≤ f_j ≤ exp(αρ_{x₀}) ≤ exp(αr) on B_ρ(x₀, r). Consequently, we have the crude estimate

Q(g_j, g_j) ≤ Q(f_j, f_j) + π(B_ρ(x₀, M_K + c_ρ)) sup_{x∈B_ρ(x₀,M_K+c_ρ)} g_j²(x)
  ≤ Q(f_j, f_j) + π(B_ρ(x₀, M_K + c_ρ)) exp(2α(M_K + c_ρ)).  (7.13)

Remark 7.2.10. The reason for using the larger distance M_K + c_ρ is that all edges with at least one end in B_ρ(x₀, M_K) have both ends in B_ρ(x₀, M_K + c_ρ).

We also have the simple estimate

⟨f_j, f_j⟩_{L²(μ)} ≤ ⟨g_j, g_j⟩_{L²(μ)} + exp(2αM_K) V_ρ(x₀, M_K).  (7.14)

Combining (7.13) with (7.11) and (7.14) gives the desired result, with

c(α, ρ, K) = exp(2αM_K)(π(B_ρ(x₀, M_K + c_ρ)) exp(2αc_ρ) + I_Φ(α) V_ρ(x₀, M_K)).

The argument that was used to show f_j ∈ D(Q) also shows that g_j ∈ D(Q).

Finally, we can show the following result:

Theorem 7.2.11. Assume that Φ is such that (7.11) holds for every α > μ_ρ/2. Then

λ₀^ess(Γ, π, μ) ≤ I_Φ(μ_ρ/2).

Proof. We argue by contradiction. Suppose that

I_Φ(μ_ρ/2) < λ₀^ess(Γ, π, μ).

By definition, there exists a finite subset K such that I_Φ(μ_ρ/2) < λ₀^{G\K}(Γ, π, μ). Since t ↦ I_Φ(t) is increasing and continuous, we can find a constant α such that μ_ρ < 2α and such that

I_Φ(μ_ρ/2) < I_Φ(α) < λ₀^{G\K}(Γ, π, μ).

On the other hand, from Lemma 7.2.9,

Q(g_j, g_j)/⟨g_j, g_j⟩_{L²(μ)} ≤ c(α, ρ, K)/⟨g_j, g_j⟩_{L²(μ)} + I_Φ(α),

so letting j → ∞ and using Lemma 7.2.8 shows that for any ε > 0, we can choose j₀ := j₀(ε) such that

Q(g_{j₀}, g_{j₀})/⟨g_{j₀}, g_{j₀}⟩_{L²(μ)} ≤ I_Φ(α) + ε.

In particular, there exists some j₀ ∈ Z₊ such that

Q(g_{j₀}, g_{j₀})/⟨g_{j₀}, g_{j₀}⟩_{L²(μ)} < λ₀^{G\K}(Γ, π, μ).

This is a contradiction, since g_{j₀} ∈ D(Q), g_{j₀}|_K = 0, and

λ₀^{G\K}(Γ, π, μ) := inf{Q(f, f)/⟨f, f⟩_{L²(μ)} : f ∈ D(Q), f ≠ 0, f|_K = 0}.

We conclude that λ₀^ess(Γ, π, μ) ≤ I_Φ(μ_ρ/2).

Finally, combining Lemma 7.2.6 with Theorem 7.2.11 proves Theorem 7.1.5, and combining Lemma 7.2.7 with Theorem 7.2.11 proves Theorem 7.1.3.

Remark 7.2.12. Let us discuss the relevance of the condition (7.2) to the proof. Assume that (7.2) does not hold. Under the assumption μ_ρ < ∞, all metric balls with respect to ρ have finite volume, but do not necessarily contain only finitely many points. Consequently, while one can still use C_c(G) functions to approximate the functions (g_j)_{j∈Z₊} in the ‖·‖_{L²} norm, one cannot necessarily use C_c(G) functions to approximate the functions (g_j)_{j∈Z₊} in the ‖·‖_Q norm, and hence g_k may fail to belong to D(Q) for some sufficiently large k ∈ Z₊. In this case, one may then take the domain of the Dirichlet form to be the larger set D_max(Q) := {f ∈ C(G) : Q₁(f, f) < ∞}, and Lemma 7.2.9 shows that g_j ∈ D_max(Q) for all j ∈ Z₊. However, since g_k cannot be approximated in the ‖·‖_Q norm by C_c(G) functions, the Dirichlet form (Q, D_max(Q)) would fail to be regular.

7.3 Discreteness of the spectrum

If the graph (Γ, π, μ) is such that the operator L_μ is bounded on L²(μ), then the essential spectrum is necessarily nonempty. For graphs with the standard weights and the operator L_π, Theorem 1 of [31] establishes necessary and sufficient conditions for the essential spectrum to consist of a single point in terms of certain isoperimetric quantities.

If L_μ is unbounded, it is possible for the essential spectrum to be empty (or, equivalently, for the spectrum to be discrete). Graphs on which this phenomenon occurs (using the operator L_1) are given in Example 7.4.2 and Example 7.4.4 of Section 7.4. By Corollary 7.1.7, a necessary condition for absence of essential spectrum is that for every adapted metric ρ, μ_ρ(Γ, π, μ) = ∞. Examples 7.4.2 and 7.4.4 also show that for any ε > 0, it is possible to find a graph (Γ, π) and a metric ρ adapted to L_1 such that μ_ρ(Γ, π, 1) = ∞
and L_1 has empty essential spectrum, but for every x₀ ∈ G,

lim_{r→∞} (1/r^{1+ε}) log V_ρ(x₀, r) = 0.

In particular, in these examples the essential spectrum becomes empty precisely as the volume growth changes from exponential to superexponential.

In [55], the relationship between the properties of positive bottom of spectrum, discrete spectrum, and stochastic completeness is discussed. For general weighted graphs, none of these three properties implies either of the other two.

7.4 Examples

Example 7.4.1 (k-regular tree, k ≥ 3). We let T_k denote the k-regular tree, equipped with the natural weights. In Example 2.5.5 we proved that

λ₀(T_k, π) = 1 − 2√(k−1)/k.  (7.15)

This graph satisfies μ_d(T_k, π) = log(k − 1). Consequently, Theorem 7.1.5 yields

λ₀^ess(T_k, π) ≤ (1 − exp(μ_d(T_k, π)/2))²/(1 + exp(μ_d(T_k, π))) = 1 − 2√(k−1)/k,

so that there is equality in Theorem 7.1.5. This was observed earlier in [30].

Similarly, for L_1, k-regularity implies kL_π = L_1, and hence (7.15) implies

λ₀(T_k, 1) = k(1 − 2√(k−1)/k) = k − 2√(k−1).

We compute the volume growth in the 1-adapted metric d/√k. In this metric, two vertices at distance r are at distance √k·r in the graph metric, and consequently |B_{d/√k}(x₀, R)| = Σ_{j=0}^{⌊√k R⌋} (k−1)^j, from which it follows immediately that

μ_{d/√k}(T_k, 1) = √k log(k − 1).

Again, Theorem 7.1.5 (with m = M = 1/√k) yields

λ₀^ess(T_k, 1) ≤ k(1 − exp(μ_{d/√k}(T_k, 1)/(2√k)))²/(1 + exp(μ_{d/√k}(T_k, 1)/√k)) = k − 2√(k−1),

so that Theorem 7.1.5 may yield sharp results for the operator L_1 also.

Note also that for large k, λ₀(T_k, 1) ≥ k/2, whereas μ_{d/√k}(T_k, 1) = √k log(k−1). Consequently, for any δ ∈ (0, 2) and any C > 0, if one takes K = K(δ, C) large enough, then

λ₀(T_K, 1) > C(μ_{d/√K}(T_K, 1))^{2−δ},

so that Theorem 7.1.3 is asymptotically sharp.

Example 7.4.2 (Weighted half-line). As in Example 3.4.1, we consider Γ := (Z_{≥0}, E_{≥0}), and let π_β({n, n+1}) := (n+1)² log₊^β(n+1) for β ∈ (−∞, 2), where log₊(x) := log(x) ∨ 1.

Since π_β(B_d(0, r)) grows polynomially in r, it follows that μ_d(Γ, π_β, π_β) = 0 for any β ∈ (−∞, 2), and hence λ₀^ess(Γ, π_β, π_β) = 0 for all β ∈ (−∞, 2).

However, the situation is quite different for the unbounded operator L_1. In this setting, we use the 1-adapted metric (1/√2)d_E. In Example 3.4.1, we computed that

(1/√2) d_E(0, R) ≍ (√2/(2−β)) log^{1−β/2}(R),

from which it follows that

log V_{(1/√2)d_E}(0, r) ≍ ((2−β)r/√2)^{2/(2−β)},

and hence

μ_{(1/√2)d_E}(Γ, π_β, 1) = 0 if β < 0,  √2 if β = 0,  ∞ if 0 < β < 2.

We conclude that λ₀^ess(Γ, π_β, 1) = 0 for β < 0.

The case β = 0, which corresponds to exponential volume growth, is more subtle. From Theorem 7.1.3, we have

λ₀^ess(Γ, π₀, 1) ≤ (1/8)(μ_{(1/√2)d_E}(Γ, π₀, 1))² = 1/4.

We will show that λ₀^ess(Γ, π₀, 1) ≥ λ₀(Γ, π₀, 1) ≥ 1/9. To do this, it suffices to show that if f ∈ C_c(Z₊),

Σ_{n∈Z₊} (n+1)²(f(n) − f(n+1))² ≥ (1/9) Σ_{n∈Z₊} f²(n).  (7.16)

For n ∈ Z₊, we define g ∈ C_c(Z₊) by g(n) := (n+1)(f(n) − f(n+1)). Since f(n) → 0 as n → ∞, we can reconstruct f from g as

f(n) = Σ_{j≥n} g(j)/(j+1);

consequently, (7.16) is equivalent to

9 Σ_{n∈Z₊} g²(n) ≥ Σ_{n∈Z₊} (Σ_{j≥n} g(j)/(j+1))².

It suffices to show that the operator T : C_c(Z₊) → C_c(Z₊) defined by

(Tf)(n) := Σ_{j≥n} f(j)/(j+1)

is bounded on L²(1), and satisfies ‖T‖ ≤ 3. This is accomplished through an application of Schur's test. Let u(n) := (n+1)^{−1/2}. Then for n ∈ Z₊, simple estimation using the integral test shows that

(Tu)(n) ≤ 3u(n),  (T*u)(n) ≤ 3u(n),

and hence Schur's test implies that ‖T‖ ≤ √(3·3) = 3.

A similar calculation shows the absence of essential spectrum for 0 < β < 2. In this case, we consider the graphs (Γ_k, π_{β,k}), where Γ_k is the subgraph of Γ induced by the subset {j ∈ Z₊ : k ≤ j} and π_{β,k} is the restriction of the edge weights π_β to Γ_k. Proceeding as above, we consider the operator

(T_β f)(n) := Σ_{j≥n} f(j)/((j+1) log₊^{β/2}(j+1))

on (Γ_k, π_{β,k}). If u_β(n) := (n+1)^{−1/2} log₊^{β/2}(n+1), then

(T_β u_β)(n) ≤ (3/log₊^{β/2}(k+1)) u_β(n),  (T_β* u_β)(n) ≤ (3/log₊^{β/2}(k+1)) u_β(n),

so that on (Γ_k, π_{β,k}), ‖T_β‖ ≤ 3 log₊^{−β/2}(k+1), and consequently

λ₀(Γ_k, π_{β,k}, 1) ≥ log₊^β(k+1)/9.

Since λ₀^ess(Γ, π_β, 1) = lim_{k→∞} λ₀(Γ_k, π_{β,k}, 1) = ∞, we conclude that the essential spectrum is empty for 0 < β < 2.

Remark 7.4.3.
This example shows that even for graphs on which the VSRW behaves very differently from the CSRW (and, in particular, the operator L_1 is unbounded), the critical behavior of the bottom of the spectrum for the operator L_1 is correctly predicted by computing the volume growth in a metric (strongly) adapted to L_1.

Example 7.4.4 (Rapidly branching tree). As in Example 3.4.2, given 0 ≤ β < 2, let T_{k_β} be the spherically symmetric infinite tree rooted at x_root, with branching function k_β(r) := ⌊(r+1)^β⌋. We assume that T_{k_β} has the natural weights. We equip T_{k_β} with the measure 1, and since vertex degrees on T_{k_β} are unbounded, we use the adapted metric d_V.

In Example 3.4.2, we showed that if x_R ∈ T_{k_β} satisfies d(x_root, x_R) = R, then the 1-adapted metric satisfies d_V(0, R) ≍ R^{(2−β)/2}, so that

log V_{d_V}(x₀, r) ≍ Σ_{j=1}^{⌈r^{2/(2−β)}⌉} log(j+1) ≍ r if β = 0,  r^{2/(2−β)} log r if 0 < β < 2.

Since we have exponential volume growth for β = 0 in the d_V metric, the essential spectrum of L_1 on T_{k₀} is nonempty.

For 0 < β < 2, since T_{k_β} has positive Cheeger constant at infinity, Theorem 2 of [51] implies that the essential spectrum is empty. In particular, for any ε > 0 there exists some β ∈ (0, 2) such that on T_{k_β}, V_{d_V}(x₀, r) ≤ C exp(cr^{1+ε}) and the essential spectrum of L_1 on T_{k_β} is empty.

Consequently, this example, as well as the results in the previous example on absence of essential spectrum, shows that Corollary 7.1.7 is optimal in certain settings.

7.5 Lower bounds for the spectrum and isoperimetric inequalities

For completeness, we mention that lower bounds on the bottom of the spectrum have been obtained by Bauer, Keller, and Wojciechowski, using isoperimetric constants related to adapted metrics. These isoperimetric constants were discussed earlier in Section 3.3. Their results are as follows:

Theorem 7.5.1. (Bauer-Keller-Wojciechowski, [7]) If (Γ, π, μ) is a weighted graph, and ρ is an adapted metric with associated Cheeger constant κ := κ_ρ(Γ, π, μ), then

λ₀(Γ, π, μ) ≥ κ²/2.

Bauer, Keller, and Wojciechowski also obtained lower bounds on the essential spectrum by defining a certain isoperimetric quantity at infinity. Indeed, given U ⊆ G and an adapted metric ρ, one can define the Cheeger constant of U with respect to ρ as

κ_ρ(U) := inf{π_ρ(∂_e V)/μ(V) : V ⊆ U, V nonempty and finite};

note that κ_ρ(G) = κ_ρ(Γ, π, μ). The Cheeger constant at infinity is given by

κ_∞ := sup{κ_ρ(G \ U) : U ⊆ G, U finite}.  (7.17)

Theorem 7.5.2. (Bauer-Keller-Wojciechowski, [7]) If (Γ, π, μ) is a weighted graph, and ρ is an adapted metric with associated Cheeger constant at infinity κ_∞ defined in (7.17), then

λ₀^ess(Γ, π, μ) ≥ κ_∞²/2.

Remark 7.5.3. These results are discrete analogues of Cheeger's original results for manifolds (see [13]), and generalize prior results for finite and infinite graphs (see [15] and [31]).

Chapter 8

Stochastic Completeness

As discussed in Example 2.2.6, we say that (Γ, π, μ) is stochastically complete if the associated continuous-time simple random walk (X_t)_{t≥0} has infinite lifetime, almost surely. If (X_t)_{t≥0} has finite lifetime with positive probability, then we say that (Γ, π, μ) is stochastically incomplete. If (Γ, π, μ) is stochastically complete, we say that X is conservative, and if it is stochastically incomplete we say that X is explosive.

The property of stochastic incompleteness is associated with a very fast rate of escape of the process X from its starting point. For example, we have seen that positive bottom of spectrum implies rapid heat kernel decay. The CSRW or VSRW on a d-regular infinite tree moves at an asymptotically linear speed away from its starting point, and has exponential on-diagonal heat kernel decay, but is still conservative. In fact, via Corollary 2.2.5, it is clear that if (Γ, π, μ) is stochastically incomplete and (X_t)_{t≥0} is the associated continuous-time random walk, then the generator L_μ must be unbounded.
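To make this dichotomy concrete, here is a small Monte Carlo sketch. It is not taken from the thesis: it simulates a VSRW on the half-line Z₊ with vertex measure 1 and invented conductances c(n) on the edge {n, n+1}, and records the time needed to first reach a high level. The walk jumps from n at rate c(n−1) + c(n), stepping to n+1 with probability proportional to c(n). When the conductances grow geometrically, the hitting times of high levels stay bounded as the level grows, which is the heuristic signature of a finite lifetime; with constant conductances the hitting times keep growing.

```python
import random

def hit_time(c, N, rng):
    """Time for the VSRW on Z_+ (vertex measure 1, conductance c(n) on the
    edge {n, n+1}) started at 0 to first reach level N."""
    t, n = 0.0, 0
    while n < N:
        left = c(n - 1) if n > 0 else 0.0
        right = c(n)
        rate = left + right                  # total jump rate of the VSRW at n
        t += rng.expovariate(rate)           # exponential holding time
        n = n + 1 if rng.random() * rate < right else n - 1
    return t

rng = random.Random(0)
# Geometric conductances: hitting times of high levels stay bounded.
fast = [hit_time(lambda n: 2.0 ** n, 100, rng) for _ in range(20)]
# Constant conductances (simple random walk): hitting times keep growing.
slow = [hit_time(lambda n: 1.0, 100, rng) for _ in range(20)]
print(sum(fast) / 20.0, sum(slow) / 20.0)
```

A bounded hitting-time profile is of course only heuristic evidence of explosion; the rigorous statements are the volume-growth criteria of Section 8.2 and the known necessary and sufficient conditions for birth-death chains.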
In particular, if (Γ, π) is any weighted graph, then (Γ, π, π) is always stochastically complete.

In [27], where many of the results of this section were first established, we mostly restricted our attention to weighted graphs (Γ, π) equipped with the measure 1, and investigated the question of whether the VSRW on (Γ, π) had finite lifetime. This approach allows one to view stochastic completeness as a property of weighted graphs (rather than weighted graphs equipped with a measure), but for consistency we will treat the general case here.

8.1 Equivalent definitions of stochastic completeness

Stochastic completeness may be defined in ways other than the probabilistic definition given above. The following result, which was proven in the setting of Riemannian manifolds in [39], gives several analytic conditions which are equivalent to stochastic incompleteness:

Theorem 8.1.1. Stochastic incompleteness of (Γ, π, μ) is equivalent to each of the following criteria:

1. There exist x ∈ G and T > 0 such that the associated continuous-time simple random walk (X_t)_{t≥0} started at x has lifetime ≤ T with positive probability.

2. For all x ∈ G and T > 0, the associated continuous-time simple random walk (X_t)_{t≥0} started at x has lifetime ≤ T with positive probability.

3. There exist x ∈ G and t > 0 such that P_t 1_G(x) < 1.

4. For all x ∈ G and t > 0, P_t 1_G(x) < 1.

5. The heat kernel p_t(x, y) satisfies

Σ_{y∈G} p_t(x, y) μ_y < 1  (8.1)

for all x ∈ G and t > 0.

6. For all λ > 0, there exists a positive f ∈ C(G) ∩ L^∞(G) such that

L_μ f = λf.  (8.2)

Functions satisfying (8.2) are called λ-harmonic.

7. For all T > 0, the Cauchy problem

∂_t u(x, t) = L_μ u(x, t) in G × (0, T),  u(x, t) = 0 on G × {t = 0}  (8.3)

has a nonzero bounded solution in G × (0, T).

Proof. Equivalence of 1 and 3, and of 2 and 4: Recall that P_t f(x) = E^x f(X_t) for bounded f ∈ C(G). Applying this to f = 1_G yields the desired result.

Equivalence of 3 and 4: Clearly 4 implies 3. Suppose that P_{t₀} 1_G(x₀) = 1 for some t₀ > 0 and x₀ ∈ G. By the strong maximum principle (see Lemma 1.2.2 of [77]), it follows that P_{t₀} 1_G = 1_G. If 0 ≤ t ≤ t₀, then by the semigroup property, P_{t₀} 1_G = P_{t₀−t} P_t 1_G ≤ P_t 1_G ≤ 1_G, so all inequalities must actually be equalities, and consequently P_t 1_G = 1_G.

Given n ∈ Z₊, we may iterate this argument by noting that P_{nt₀} = Π_{j=1}^n P_{t₀}, so that P_{nt₀} 1_G = Π_{j=1}^n P_{t₀} 1_G = 1_G. Consequently, P_t 1_G = 1_G for all 0 ≤ t ≤ nt₀. Since n was arbitrary, we conclude that P_t 1_G = 1_G for all t > 0.

Equivalence of 4 and 5: Obvious from the definition of P_t.

Equivalence of 4, 6, 7, and 3: We begin by showing that 4 implies 6. Set u(x, t) := P_t 1_G(x); note that 0 < u(x, t) < 1 for all x ∈ G and t > 0. Set

w(x) := ∫₀^∞ e^{−λt} u(x, t) dt.

Using the fact that u solves the Cauchy problem for the heat equation with initial data 1_G (by Proposition 2.3.9) and integration by parts, we have

L_μ w = ∫₀^∞ e^{−λt} L_μ u dt = ∫₀^∞ e^{−λt} ∂_t u dt = [u e^{−λt}]₀^∞ + λ ∫₀^∞ e^{−λt} u dt = −1 + λw,

and since 0 < u < 1 we have 0 ≤ w ≤ ∫₀^∞ e^{−λt} dt = 1/λ. Consequently v := 1 − λw satisfies (8.2) and 0 < v(x) < 1 for all x ∈ G.

To show that 6 implies 7, suppose that v is a nonzero bounded λ-harmonic function and set u₁(x, t) := v(x)e^{λt}. Then a direct calculation shows that u₁ satisfies the differential equation in (8.3). Of course, by Proposition 2.3.9, we know that u₂ := P_t v also satisfies it, and for t > 0 we have ‖u₂(·, t)‖_{L^∞(G)} ≤ ‖v‖_{L^∞(G)} < e^{λt}‖v‖_{L^∞(G)} = ‖u₁(·, t)‖_{L^∞(G)}. Consequently u₁ − u₂ is a nonzero bounded solution to (8.3) on G × (0, T).

To show that 7 implies 3, let u be a nonzero bounded solution to (8.3) on G × (0, T) for some T > 0. By scaling, we may assume that sup u > 0 and sup |u| < 1, so that v := 1 − u is a positive function with inf v < 1. By construction v solves

∂_t v(x, t) = L_μ v(x, t) in G × (0, T),  v(x, t) = 1 on G × {t = 0},  (8.4)

and by Proposition 2.3.9, P_t 1_G also solves (8.4) and satisfies P_t 1_G ≤ v by minimality. But since inf v < 1 we have that P_t 1_G(x) < 1 for some x ∈ G and t > 0.

Since we have previously shown that 3 implies 4, the proof is complete.

There are several other equivalent definitions of stochastic completeness besides the ones mentioned above. See [47], [49], [53], and [55] for more on the theory of stochastic completeness on weighted graphs, as well as [39] for the corresponding theory on Riemannian manifolds.

8.2 Volume growth and stochastic completeness

We will be primarily interested in the relationship between stochastic completeness and adapted volume growth. It is reasonable to expect that if the rate of expansion of (Γ, π, μ) is not too large, then the associated random walk (X_t)_{t≥0} cannot escape to infinity fast enough for its lifetime to be finite. The aforementioned example of the CSRW or the VSRW on a d-regular infinite tree implies that the minimal rate of volume growth required for stochastic incompleteness is superexponential.

On the other hand, it is not difficult to see that there cannot be any criterion showing that sufficiently large volume growth implies stochastic incompleteness, since one can find stochastically complete graphs with arbitrarily large volume growth:

Example 8.2.1. Fix an unbounded function r : Z₊ → Z₊, and let (A_n)_{n∈Z₊} be disjoint sets with |A_n| = r(n) and such that Z₊ ∩ A_n = ∅ for each n ∈ Z₊. Let Γ_r := (G_r, E_r) be as follows:

G_r := Z₊ ∪ ⋃_{n∈Z₊} A_n,
E_r := ⋃_{n∈Z₊} ⋃_{x∈A_n} {n, x} ∪ ⋃_{n∈Z₊} ⋃_{x∈A_n} {n+1, x}.

Equip this graph with the natural weights and the measure 1. The volume growth of this graph (with respect to some distance function d) can be made arbitrarily large by choosing r to be a very rapidly growing function. However, this graph is always stochastically complete. From any vertex in Z₊, the next jump is to a vertex in some A_n, and since each of these vertices has degree 2, the expected time taken by the VSRW to jump is 1/2. Consequently, the VSRW must have infinite lifetime.

Remark 8.2.2. Note that Γ_r is also a graph which admits no strongly adapted metric. Indeed, suppose that ρ is a strongly adapted metric on Γ_r. Then there exist positive constants C₁(ρ), C₂(ρ) such that for each x ∈ G_r,

C₁(ρ) ≤ Σ_{e∈E(x)} π(e) ρ²(e) ≤ C₂(ρ).

On the other hand, for each n ∈ Z₊,

2C₂(ρ) ≥ Σ_{e∈E(n)} π(e) ρ²(e) + Σ_{e∈E(n+1)} π(e) ρ²(e) ≥ Σ_{x∈A_n} Σ_{e∈E(x)} π(e) ρ²(e) ≥ r(n) C₁(ρ),

and since r is unbounded, this leads to a contradiction.

In the setting of Riemannian manifolds, Grigor'yan proved the following sharp result relating volume growth to stochastic completeness. A Riemannian manifold M is stochastically complete if the Brownian motion on M (i.e., the diffusion whose infinitesimal generator is the Laplace-Beltrami operator on M) is non-explosive, almost surely.

Theorem 8.2.3. (Grigor'yan, [36]) Let M be a geodesically complete Riemannian manifold with Riemannian metric ρ_M and Riemannian volume V_M. If there exist r₀ ≥ 0 and x₀ ∈ M such that

∫_{r₀}^∞ r/log V_M(B_{ρ_M}(x₀, r)) dr = ∞,  (8.5)

then M is stochastically complete.

For example, this result implies that if there exist x₀ ∈ M and r₀ > 0 such that V_M(B_{ρ_M}(x₀, r)) ≤ C exp(cr² log r) for r ≥ r₀, then M is stochastically complete. This result is sharp in the sense that for every ε > 0, one can find a stochastically incomplete Riemannian manifold M with V_M(B_{ρ_M}(x₀, r)) ≤ C exp(cr² log^{1+ε} r). Using similar techniques, Sturm later extended this result to the setting of strongly local Dirichlet spaces in [69]:

Theorem 8.2.4. (Sturm, [69]) Let (E, D(E)) be a strongly local Dirichlet form on L²(X, m), with intrinsic metric ρ_I. Suppose that there exist x₀ ∈ X and r₀ > 0 such that

∫_{r₀}^∞ r/log m(B_{ρ_I}(x₀, r)) dr = ∞.

Then the strong Markov process (Z_t)_{t≥0} associated with (E, D(E)) is non-explosive, almost surely.

The main result of this section is an analogue of Grigor'yan's result for weighted graphs, where the volume growth is computed with respect to an adapted metric. This result was established first in [27]:

Theorem 8.2.5. Let (Γ, π, μ) be a weighted graph equipped with the adapted metric ρ. If there exist x₀ ∈ G and r₀ ≥ 0 such that
?r0rlog V?(x0, r)dr =?,then (?, pi, ?) is stochastically complete.131An immediate consequence of this criterion is that a weighted graph isstochastically complete if there exists x0 ? G and r0 ? 0 such that that|B?(x0, r)| ? Cecr2 log r for r ? r0. This is a substantial improvement overprior results relating volume growth and stochastic completeness of graphs(found in [42] and [46]), which proved stochastic completeness under theassumption that there exists x0 ? G, r0 ? 0, and c ? (0, 2) such that|B?(x0, r)| ? Cecr log r for r ? r0.The criterion in Theorem 8.2.5 will be seen to yield sharp results for stochasticcompleteness of graphs in certain applications where necessary and sufficientconditions for stochastic completeness are known, such as birth-death chainsor spherically symmetric trees.Metric graphs will play a central role in the proof of Theorem 8.2.5. Themain advantage of considering metric graphs is that one has access to Sturm?ssharp result relating volume growth in the intrinsic metric on a metric graphto stochastic completeness. It seems very difficult to prove a result such asTheorem 8.2.5 by trying to adapt the arguments of [36] to the discrete setting.This was the approach taken by Grigor?yan, Huang, and Masamune in [42],but the volume growth estimate obtained there is not sharp. Theorem 8.2.5is in part a result about the long-range behavior of the heat kernel (namely,that it sums to 1), and as discussed in Chapter 4, heat kernels on graphshave very different long-range behaviour than heat kernels on manifolds.8.3 Proof of Theorem 8.2.5Before proceeding to the proof of Theorem 8.2.5, we collect some results onthe convergence of random series:Let (?,F ,P) be a probability space, on which we have the sequence of randomvariables (Xn)n?Z+ , and the associated filtration Fn := ?(X0, . . . , Xn). 
We132define the following events:A :={??n=0Xn <?},B :={??n=0P(|Xn| > 1|Fn?1) <?},C :={??n=0E(Xn1{Xn?1}|Fn?1) <?},D :={??n=0Var(Xn1{Xn?1}|Fn?1) <?}.Two events are said to be equivalent if their symmetric difference has proba-bility 0. We have the following two results:Theorem 8.3.1. (Doob, [21]) Suppose that the sequence of random variables(Xn)n?Z+ is nonnegative and that there exists C > 0 such that P(0 ? Xn ?C) = 1 for all n ? Z+. Then the events A and C are equivalent.Theorem 8.3.2. (Brown, [11]) The event B?C?D is almost surely a subsetof A (i.e., P((B ? C ? D) \ A) = 0). That is, if each of the series associatedwith B, C, and D converge, then the series associated with A converges also.Remark 8.3.3. The paper [11] claims to prove a generalization of the Kol-mogorov three-series theorem for dependent random variables. While [11]does correctly prove that simultaneous convergence of the series associatedwith the events B, C, and D is sufficient for the convergence of the originalseries (which yields Theorem 8.3.2), the proof of necessity is incorrect, andin [33] it is shown that there can be no general three-series theorem of thistype.We begin by proving the following result:Theorem 8.3.4. Let (?, pi, ?) be a weighted graph, and let (?(e))e?E beC??adapted edge conductances. Let ? be any metric satisfying ?(x, y) ?d?(x, y) for all x, y ? G. If there exists x0 ? G and r0 > 0 such that? ?r0rlog V?(x0, r)dr =?,then (?, pi) is stochastically complete.133Proof. Recall that from the definition of adapted edge conductances, thereexists c? > 0 such that ?(x, y) ? c? whenever x ? y.Let (Xt)t?0 denote the continuous-time simple random walk associated with(?, pi, ?), which has the natural filtration (GXt )t?0, let (?n)n?Z+ denote thejump times for X, and set HX(n) :=?nj=0 ?j and FXn := GXHX(n).We begin with the following lemma:Lemma 8.3.5. Under the hypotheses of Theorem 8.3.4, P?a.s.,??n=0E(?n|FXn?1) =?.Proof. By C??adaptedness, for all x ? G,?e?E(x)pi(e)?2(e) ? 
C?.We will work with the metric graph Xloop(?, `, p, ?), where for e ? E,`(e) := 1,p(e) := pi(e),?(e) := ?2(e),and for each x ? G,`(xloop) := 1, p(xloop) :=12?x, ?(xloop) := 1.Let Z denote Brownian motion on Xloop(?, `, p, ?). By Theorem 5.4.6, X andZ have the same jump probabilities. We begin with the process (Zt)t?0. Bysampling Z each time it hits a vertex different from the last visited vertex, weobtain a discrete-time simple random walk on (?, pi, ?) which we denote by(Yn)n?Z+ . It is convenient to construct X by letting Y be its discrete skeletonand by letting it jump at independent exponentially distributed times. Thiscreates a coupling of X and Z; in particular X and Z exist on the sameprobability space, so there is no need to distinguish between PX and PZ (orEX and EZ). Let (?n)n?Z+ and (?n)n?Z+ be the jump times for X and Z,respectively; setting HX(n) := inf{t ? HX(n ? 1) : Xt ? G \ {Yn?1}} and134HZ(n) := inf{t ? HZ(n? 1) : Zt ? G \ {Yn?1}}, we have that?n := HX(n)?HX(n? 1),?n := HZ(n)?HZ(n? 1).Let (FXn )n?Z+ be the filtration such that for each n ? Z+, FXn := GXHX(n),where (GXt )t?0 is the natural filtration for X. Similarly, let (FZn )n?Z+ be thefiltration such that for each n ? Z+, FZn := GZHZ(n), where (GZt )t?0 is thenatural filtration for Z.From Theorem 5.4.6, for n ? Z+,E(?n|FXn?1) ? E(?n|FZn?1) ? (C? + 1)E(?n|FXn?1). (8.6)We also obtain from Theorem 5.4.6 that for x, y ? G, d?(x, y) = ?X (x, y) andV Z?X (x0, r) ? (C? + 1)VXd? (x0, r). Since ?(x, y) ? d?(x, y) for all x, y ? G, wehave that B?(x0, r) ? Bd?(x0, r) for all x0 ? G and r ? 0, and consequentlyfor all x0 ? G and r0 ? 0,V Z?X (x0, r) ? (C? + 1)VXd? (x0, r) ? (C? + 1)VX? (x0, r). (8.7)From (8.7), we conclude that since? ?r0rlog V X? (x0, r)dr =?,we must also have? ?r0rlog V Z?X (x0, r)dr =?. (8.8)By Theorem 8.2.4, (8.8) implies the non-explosiveness of Z, and hence,P?a.s.,??n=0?n =?.By Theorem 8.3.2, for any K > 0, P?a.s., at least one of the following135equalities holds:??n=0P(?n ? 
K|FZn?1) =?, (8.9)??n=0E(?n1{?n?K}|FZn?1) =?, (8.10)??n=0Var(?n1{?n?K}|FZn?1) =?. (8.11)Fix K > 0. We will show that any of (8.9), (8.10), (8.11) implies??n=0E(?n|FZn?1) = +?. (8.12)If (8.9) holds, by Markov?s inequality,??n=0P(?n ? K|FZn?1) ?1K??n=0E(?n1{?n?K}|FZn?1)?1K??n=0E(?n|FZn?1).If (8.10) holds, we use the trivial estimate??n=0E(?n1{?n?K}|FZn?1) ???n=0E(?n|FZn?1).If (8.11) holds, we begin by noting thatVar(?n|FZn?1)? Var(?n1{?n?K}|FZn?1) = E(?2n|FZn?1)? E(?2n1{?n?K}|FZn?1)? (E(?n|FZn?1))2 + (E(?n1{?n?K}|FZn?1))2= E(? 2n1{?n>K}|FZn?1) + (E(?n1{?n?K}|FZn?1))2 ? (E(?n|FZn?1))2? (E(?n1{?n>K}|FZn?1))2 + (E(?n1{?n?K}|FZn?1))2 ? (E(?n|FZn?1))2? ?12(E(?n|FZn?1))2.136HenceVar(?n|FZn?1) +12(E(?n|FZn?1))2 ? Var(?n1{?n?K}|FZn?1).We conclude that (8.11) implies that one of the following equalities holds:??n=0Var(?n|FZn?1) =?, (8.13)??n=0(E(?n|FZn?1))2 =?. (8.14)Suppose that (8.13) holds. Note that if ?n is the jump time for (Zt)t?0 startedat the vertex x, then by Theorem 5.2.4,Var(?n|ZHZ(n?1) = x) =13?e?Eloop(x) ?2(e)q(e)`3(e)?e?E(x)q(e)`(e)+13???e?Eloop(x) ?(e)q(e)`(e)?e?E(x)q(e)`(e)??2+ 2?(xloop)q(xloop)`(xloop)?e?Eloop(x) ?(e)q(e)`(e)(?e?E(x)q(e)`(e))2?13?e?Eloop(x) ?2(e)q(e)`3(e)?e?E(x)q(e)`(e)+73(E(?n|ZHZ(n?1) = x))2?13pix??c2??e?E(x)?(e)p(e)`(e) + ?x??+73(E(?n|ZHZ(n?1) = x))2?13(c2? ? 1)E(?n|ZHZ(n?1) = x)+73(E(?n|ZHZ(n?1) = x))2,137and henceVar(?n|FZn?1) ?13(c2? ? 1)E(?n|FZn?1) +73(E(?n|FZn?1))2.From this estimate, it is clear that if (8.13) holds, then either (8.12) holds,in which case we are done, or (8.14) holds. In the latter case, either thepositive sequence (E(?n|FZn?1))n?Z+ converges to 0, in which case eventuallyE(?n|FZn?1) ? (E(?n|FZn?1))2 (implying that that (8.12) holds), or it does notconverge to 0, in which case it is clear that (8.12) holds also. Thus, we con-clude that (8.11) implies (8.12).We conclude that??n=0 ?n =? P?a.s. implies??n=0 E(?n|FZn?1) =? P?a.s.By (8.6), this implies??n=0 E(?n|FXn?1) =? P?a.s. 
also.

At this point, we assume that the ratios (π_x/θ_x)_{x∈G} are bounded below by C_{π,θ} > 0. We will subsequently discharge this assumption by a probabilistic argument.

Lemma 8.3.6. Under the hypotheses of Theorem 8.3.4 and the additional hypothesis that the ratios (π_x/θ_x)_{x∈G} are bounded below by C_{π,θ} > 0, (Γ, π, θ) is stochastically complete.

Proof. By Lemma 8.3.5, we have that P-a.s.,

∑_{n=0}^∞ E(ξ_n | F^X_{n−1}) = ∞.

Note that E(ξ_n | X_{H_X(n−1)} = x) = θ_x/π_x and that the jump time from the vertex x is exponential with parameter λ_x := π_x/θ_x. Clearly, λ_x ≥ C_{π,θ} > 0. Then we have that

E(ξ_n 1_{ξ_n ≤ 1} | X_{H_X(n−1)} = x) = ∫_0^1 λ_x u e^{−λ_x u} du
= (1/λ_x)(1 − (λ_x + 1)e^{−λ_x})
≥ (1/λ_x)(1 − (C_{π,θ} + 1)e^{−C_{π,θ}})
= C′_{π,θ} · θ_x/π_x
= C′_{π,θ} E(ξ_n | X_{H_X(n−1)} = x).

Note that C′_{π,θ} := 1 − (C_{π,θ} + 1)e^{−C_{π,θ}} = ∫_0^{C_{π,θ}} v e^{−v} dv > 0. It follows that for n ∈ Z_+,

E(ξ_n 1_{ξ_n ≤ 1} | F^X_{n−1}) ≥ C′_{π,θ} E(ξ_n | F^X_{n−1}),

and hence P-a.s., ∑_{n=0}^∞ E(ξ_n 1_{ξ_n ≤ 1} | F^X_{n−1}) = ∞. Since (ξ_n 1_{ξ_n ≤ 1})_{n∈Z_+} is a uniformly bounded sequence of nonnegative random variables, Theorem 8.3.1 implies that P-a.s.,

∑_{n=0}^∞ ξ_n 1_{ξ_n ≤ 1} = ∞.

Since 0 ≤ ξ_n 1_{ξ_n ≤ 1} ≤ ξ_n pointwise, we conclude that P-a.s.,

∑_{n=0}^∞ ξ_n = ∞,

and hence (Γ, π, θ) is stochastically complete.

Finally, we remove the hypothesis that the ratios π_x/θ_x are bounded below. Suppose that (Γ, π, θ) is a stochastically incomplete graph which satisfies the hypotheses of Theorem 8.3.4 but does not satisfy π_x/θ_x ≥ C_{π,θ} > 0 for all x ∈ G. Let H := {x ∈ G : π_x/θ_x ≤ 1}. Consider the augmented graph (Γ̂, π̂, θ̂), where Γ̂ = (Ĝ, Ê) is obtained from (G, E) by adding, for each x ∈ H, a vertex x̂ connected to the rest of the graph only by the edge {x, x̂}; denote these additional vertices by Ĥ. The edges of the form {x, x̂} have π̂_{xx̂} = θ_x, and vertices x̂ ∈ Ĥ have θ_{x̂} = θ_x.

Let (X̂_t)_{t≥0} denote the continuous-time simple random walk associated with (Γ̂, π̂, θ̂). We use this to construct the continuous-time simple random walk associated with (Γ, π, θ) via a coupling, as follows: at each vertex in G \ H, X̂ and X move identically. At a vertex x ∈ H, X̂
eventually jumps to some vertex in Ĝ \ {x, x̂}. X jumps to the same vertex after waiting an exponential time with parameter π_x.

We define σ̂ on (Γ̂, π̂) as follows. Given x, y ∈ G with x ≠ y, we set

σ̂(x, y) = σ(x, y),
σ̂(x, ŷ) = σ(x, y) + 1,
σ̂(x̂, ŷ) = σ(x, y) + 2.

Since σ was adapted and π̂_{xx̂} σ̂²(x, x̂) = θ_x for all x ∈ H, σ̂ is a (C_σ + 1)-adapted metric on (Γ̂, π̂, θ̂). For any x_0 ∈ G and r ≥ 0, we have that

V_σ(x_0, r) ≤ V_σ̂(x_0, r) ≤ V_σ(x_0, r) + θ̂({x̂ ∈ Ĥ : σ(x_0, x) ≤ r}) ≤ 2 V_σ(x_0, r).

Consequently, (Γ̂, π̂, θ̂) satisfies the hypotheses of Lemma 8.3.6, so (Γ̂, π̂, θ̂) is stochastically complete and X̂ is non-explosive.

By hypothesis, X explodes with positive probability, and we denote the event on which this occurs by {X explodes}. Given U ⊆ G and V ⊆ Ĝ, T^X_U is the total amount of time X spends at vertices of U, and similarly T^X̂_V is the total amount of time X̂ spends at vertices of V.

On {X explodes}, we have that

T^X_{G\H} + T^X_H = T^X_G < ∞.

Clearly T^X_{G\H} < ∞ on {X explodes}. Since X and X̂ move identically on G \ H, this implies that T^X̂_{G\H} < ∞ on {X explodes}.

Next, we use the fact that T^X_H < ∞ on {X explodes}. Since the times between jumps for the VSRW are independent of the jump directions, the times between jumps from vertices in H are independent exponentially distributed random variables with mean at least 1. By the second Borel–Cantelli Lemma, an infinite number of visits to H would cause X to spend an infinite amount of time at vertices of H, and have infinite lifetime, which would be impossible. So on {X explodes}, X visits H only finitely many times.

Using the coupling, each visit to a vertex x ∈ H by X is associated with an initial visit to x by X̂, as well as a geometrically distributed (finite) number of additional visits to x and x̂ by X̂ (before X̂ visits Ĝ \ {x, x̂} again). The times between all of these jumps for X̂ are independent exponentially distributed random variables with mean at most 1; consequently, their sum is finite. Since X̂ only visits H and Ĥ
finitely many times on {X explodes}, we conclude that on {X explodes}, T^X̂_H < ∞ and T^X̂_Ĥ < ∞.

Thus, on {X explodes},

T^X̂_Ĝ = T^X̂_{G\H} + T^X̂_H + T^X̂_Ĥ < ∞.

Consequently, X̂ explodes with positive probability, which contradicts the stochastic completeness of (Γ̂, π̂, θ̂) via Lemma 8.3.6. We conclude that under the hypotheses of Theorem 8.3.4, (Γ, π, θ) is stochastically complete.

Theorem 8.2.5 now follows immediately from Theorem 8.3.4:

Corollary 8.3.7. Let (Γ, π, θ) be a weighted graph equipped with the adapted metric ρ. If there exist x_0 ∈ G and r_0 > 0 such that

∫_{r_0}^∞ r / log |B_ρ(x_0, r)| dr = ∞,

then (Γ, π, θ) is stochastically complete.

Proof. Upon noting that the edge weights σ(e) := ρ(e_−, e_+) are adapted, this follows from Corollary 5.4.8 and Theorem 8.3.4.

Remark 8.3.8. None of the theorems and corollaries in this section use the notion of strong adaptedness. However, the lower bound appearing in the definition of strong adaptedness ensures that the metric cannot be unnecessarily small, which would in turn cause the volume growth to be unnecessarily large. If the metric ρ fails to be strongly adapted, the resulting criteria one obtains for stochastic completeness may be far from optimal; one instance of this is given in Example 8.4.2.

Remark 8.3.9. Corollary 8.3.7 does not necessarily hold if one uses the Davies metric d_D in place of an adapted metric; see Example 8.4.2 for an example where the VSRW is stochastically incomplete but where log |B_{d_D}(x, r)| ≍ r log r (of course, in this example the metric d_D fails to be adapted).

8.4 Examples

We conclude this chapter by presenting examples which demonstrate how one can use Theorem 8.3.4 and its various corollaries to quickly determine sharp stochastic completeness criteria for various graphs.

Example 8.4.1 (Weighted half-line). As in Example 3.4.1, we consider Γ := (Z_{≥0}, E_{≥0}), and assign edge weights (π_α(e))_{e∈E} satisfying π^α_{n,n+1} = (n+1)² log^α_+(n+1) for 0 ≤ α < 2, where log_+(x) := log(x) ∨ 1.
We equip this weighted graph with the measure 1 and use the adapted metric d_E. In Example 3.4.1, we computed that

d_E(0, R) ≍ (2/(2−α)) log^{(2−α)/2}(R),

and it follows that

log V_{d_E}(0, r) = log |B_{d_E}(0, r)| ≍ ((2−α)r/2)^{2/(2−α)}.

By Corollary 8.3.7, we conclude that if 0 ≤ α ≤ 1, then (Γ, π_α, 1) is stochastically complete. This result may be seen to be sharp; by Example 4.12 of [76], (Γ, π, 1) is stochastically incomplete if and only if

∑_{r=0}^∞ r / π_{r,r+1} < ∞.  (8.15)

Plugging in π^α_{n,n+1} = (n+1)² log^α_+(n+1) with 0 ≤ α < 2, we see that (8.15) occurs precisely when 1 < α < 2.

Example 8.4.2 (Rapidly branching tree). As in Example 3.4.2, given 0 ≤ α < 2, let T_{k_α} be the spherically symmetric infinite tree rooted at x_root, with branching function k_α(r) := ⌊(r+1)^α⌋. We assume that T_{k_α} has the natural weights. We equip T_{k_α} with the measure 1, and since vertex degrees on T_{k_α} are unbounded, we use the adapted metric d_V.

In Example 3.4.2, we showed that if x_R ∈ T_{k_α} satisfies d(x_root, x_R) = R, then d_V(x_root, x_R) ≍ R^{(2−α)/2}, so that

log V_{d_V}(x_0, r) ≍ ∑_{j=1}^{r^{2/(2−α)}} log(j + 1) ≍ (2/(2−α)) r^{2/(2−α)} log r.

From Corollary 8.3.7, we conclude that T_{k_α} is stochastically complete if α ≤ 1. By Remark 4.3 of [76], the exponent 1 is sharp; if α > 1 then T_{k_α} is stochastically incomplete.

Note that in this setting the metric d_V is strongly adapted. Now, suppose that we work on T_{k_1} with a different choice of metric. Given 1 < λ < 2, we consider the metric d_λ induced by the edge weights c_λ(e) = r^{−λ/2} if d(x_0, e_−) ∧ d(x_0, e_+) = r; these weights are adapted but not strongly adapted. Proceeding as above, we obtain that log V_{d_λ}(x_0, r) ≍ r^{2/(2−λ)} log r; this gives

∫_{r_0}^∞ r / log V_{d_λ}(x_0, r) dr < ∞,

even though T_{k_1} is stochastically complete. Consequently, this adapted metric does not give sharp volume growth criteria.

This family of graphs also yields an interesting counterexample. We previously computed that d_D(x_0, x_R) ≍ R on any tree with the natural weights, so for any α > 1 we have

log V_{d_D}(x_0, r) ≍ ∑_{j=1}^r log(j^α) ≍ r log r,

from which it follows that
∫_{r_0}^∞ r / log V_{d_D}(x_0, r) dr = ∞,

even though T_{k_α} is stochastically incomplete. Comparing this result with Theorem 8.2.4 shows that the relationship between volume growth in the intrinsic metric and stochastic completeness on graphs is different from the corresponding result for local Dirichlet spaces.

Remark 8.4.3. A natural line of inquiry is to determine the relationship between non-explosiveness of the VSRW and volume growth in the graph metric. In [42], it was proven that if there exist x_0 ∈ G and r_0 > 0 such that V_d(x_0, r) ≤ Cr³ for r ≥ r_0, then (Γ, π, 1) is stochastically complete. On the other hand, as seen in Example 8.4.1, the critical behavior for stochastic completeness is observed at cubic volume growth with an additional logarithmic factor. Thus, the result of [42] for volume growth in the graph metric shows that (Γ, π_α, 1) is stochastically complete for α ≤ 0 (corresponding to at-most-exponential adapted volume growth), but does not give the optimal range of α ≤ 1.

Chapter 9
Conclusion

This thesis provides an introduction to the theory of continuous-time simple random walks on measure weighted graphs. Two aspects of this work are particularly notable: the use of adapted metrics, which appears to be essential in studying random walks with unbounded generators, and the use of Brownian motion on metric graphs to prove results about random walks.

The main results of this thesis are Theorem 4.2.6, Theorem 7.1.3, and Theorem 8.2.5, which were published in [26], [28], and [27], respectively. We summarize these results briefly:

Theorem 4.2.6 establishes Gaussian upper bounds for the heat kernel using on-diagonal estimates at only two points. This method was pioneered by Grigor'yan in [38], who proved Gaussian upper bounds for the heat kernel on a Riemannian manifold, and it was adapted to the setting of graphs by the author.
The discrete setting of graphs poses several analytic difficulties which are not present in the manifold setting, and the required estimates are more difficult. Several aspects of this result are notable. First, one only requires local information on the heat kernel decay; on-diagonal bounds are required at only two points, rather than uniform estimates over the whole graph. Second, the distance function appearing in the Gaussian exponential factor is an adapted metric, which strongly indicates that adapted metrics are the correct metrics to use in analyzing heat flow on graphs. Along with [42], this was one of the first two papers to use adapted metrics to study the behavior of random walks on graphs. Additionally, with minor modifications (see Theorem 4.2.10), these results are applicable to certain important models of random walks in random environments, such as the random conductance model.

Theorem 7.1.3 establishes asymptotically sharp bounds for the bottom of the spectrum in terms of the adapted volume growth of the underlying measure weighted graph. In particular, the bound we obtain is analogous to the corresponding bound for Riemannian manifolds and strongly local Dirichlet spaces, and is valid for random walks with unbounded generators. This result extends previous work of Fujiwara in [30].

Theorem 8.2.5 establishes sharp criteria relating adapted volume growth to stochastic completeness of the underlying measure weighted graph. This result is analogous to the corresponding bound for Riemannian manifolds and strongly local Dirichlet spaces, and improves substantially upon previous work of Grigor'yan, Huang, and Masamune in [42]. In many respects, Theorem 8.2.5 is the most difficult result proved in this thesis, because stochastic incompleteness is intrinsically associated with unboundedness of the generator; any continuous-time simple random walk with a bounded generator is stochastically complete.
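The sharpness phenomenon behind criteria of this type can be illustrated numerically on the weighted half-line of Example 8.4.1, where the walk is non-explosive precisely when the series ∑_r r/π_{r,r+1} diverges, with edge weights π_{n,n+1} = (n+1)² log_+^α(n+1). The sketch below is an illustration only (not part of the thesis): partial sums cannot prove divergence, but comparing their growth at the critical exponent α = 1 and at α = 2 shows the two regimes clearly.

```python
import math

def partial_sum(alpha, n_terms):
    """Partial sum of sum_r r / pi_{r,r+1}, with the half-line weights
    pi_{n,n+1} = (n+1)^2 * log_+(n+1)^alpha and log_+(x) = max(log x, 1)."""
    s = 0.0
    for r in range(n_terms):
        pi_edge = (r + 1) ** 2 * max(math.log(r + 1), 1.0) ** alpha
        s += r / pi_edge
    return s

# At alpha = 1 the terms behave like 1/(r log r), so partial sums grow like
# log log N (divergent series); at alpha = 2 they behave like 1/(r log^2 r)
# (convergent series), and the tail is nearly exhausted by N = 10^3.
inc_div = partial_sum(1.0, 10**6) - partial_sum(1.0, 10**3)
inc_conv = partial_sum(2.0, 10**6) - partial_sum(2.0, 10**3)
```

Between N = 10³ and N = 10⁶ the α = 1 sum still gains roughly log log 10⁶ − log log 10³ ≈ 0.7, while the α = 2 sum gains only about 0.07, consistent with the critical exponent α = 1 discussed above.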
The techniques used to prove Theorem 8.2.5 are also notable; we show how to construct a Brownian motion on a metric graph which closely mimics the behaviour of the continuous-time simple random walk under consideration. The upshot of this construction is that one may then use the powerful machinery of strongly local Dirichlet forms (which applies to the Brownian motion) to study the random walk (which is associated with a non-local Dirichlet form). In this way, one can avoid many of the difficulties associated with adapting continuous techniques to the discrete setting. We use this technique elsewhere in this thesis to establish a sharp criterion relating adapted volume growth to recurrence in Chapter 6, and other authors have subsequently used this construction to prove other results for random walks on graphs; see [47] and [48].

Finally, we mention some future avenues of study related to the research presented here. In the setting of Riemannian manifolds, there are many results relating large-scale geometric properties of the manifold to various behaviors of the Brownian motion on the manifold. Examples of large-scale geometric properties of interest include volume growth estimates, isoperimetric inequalities, and curvature conditions. In this thesis, we have discussed how adapted volume growth is related to recurrence, stochastic completeness, and the bottom of the spectrum, and shown that for each of these behaviors, the volume growth criteria for Riemannian manifolds and measure weighted graphs are essentially the same if one measures volume growth on the graph with respect to an adapted metric. It is likely that the use of adapted metrics will allow analogues of other results for Riemannian manifolds to be proved for measure weighted graphs.

The techniques presented in this thesis are also likely applicable to the problem of obtaining on-diagonal heat kernel bounds for continuous-time simple random walks with unbounded generators.
In particular, it may be possible to combine the adapted isoperimetric inequalities of Bauer, Keller, and Wojciechowski in [7] with standard heat kernel techniques to obtain on-diagonal heat kernel estimates for a wide range of continuous-time simple random walks with unbounded generators.

Adapted metrics will also be useful for studying models of random walks in random environments in which the random environments have "unbounded geometry". Examples of this are given in [1] and [6], although one may also wish to consider graphs which do not have uniformly bounded vertex degrees. Heat kernel estimates are an important tool in obtaining scaling limits and invariance principles, and the results and techniques developed in this thesis are likely to be applicable to models of this type.

Bibliography

[1] S. Andres, M. T. Barlow, J.-D. Deuschel, and B. M. Hambly, Invariance principle for the random conductance model, Probab. Theory Related Fields 156 (2013), no. 3-4, 535–580. → pages 59, 78, 147

[2] M. T. Barlow, Random Walks on Graphs. Unpublished manuscript. → pages 42, 47

[3] M. T. Barlow, Random walks on supercritical percolation clusters, Ann. Probab. 32 (2004), no. 4, 3024–3084. → pages 77

[4] M. T. Barlow and R. F. Bass, Stability of parabolic Harnack inequalities, Trans. Amer. Math. Soc. 356 (2004), no. 4, 1501–1533. → pages 51

[5] M. T. Barlow and R. F. Bass, Random walks on graphical Sierpinski carpets, in Random walks and discrete potential theory (Cortona, 1997), 26–55, Sympos. Math., XXXIX, Cambridge Univ. Press, Cambridge. → pages 48

[6] M. T. Barlow and J.-D. Deuschel, Invariance principle for the random conductance model with unbounded conductances, Ann. Probab. 38 (2010), no. 1, 234–276. → pages 147

[7] F. Bauer, M. Keller, and R. K. Wojciechowski, Cheeger inequalities for unbounded graph Laplacians. To appear in J. Eur. Math. Soc. → pages 41, 124, 147

[8] J. R. Baxter and R. V.
Chacon, The equivalence of diffusions on networks to Brownian motion, in Conference in modern analysis and probability (New Haven, Conn., 1982), 33–48, Contemp. Math., 26, Amer. Math. Soc., Providence, RI. → pages 80

[9] M. Biskup, Recent progress on the random conductance model, Probab. Surv. 8 (2011), 294–373. → pages 7

[10] R. Brooks, A relation between growth and the spectrum of the Laplacian, Math. Z. 178 (1981), no. 4, 501–508. → pages 109, 110

[11] B. M. Brown, A general three-series theorem, Proc. Amer. Math. Soc. 28 (1971), 573–577. → pages 133

[12] E. A. Carlen, S. Kusuoka and D. W. Stroock, Upper bounds for symmetric Markov transition functions, Ann. Inst. H. Poincaré Probab. Statist. 23 (1987), no. 2, suppl., 245–287. → pages 47

[13] J. Cheeger, A lower bound for the smallest eigenvalue of the Laplacian, in Problems in analysis (Papers dedicated to Salomon Bochner, 1969), 195–199, Princeton Univ. Press, Princeton, NJ. → pages 125

[14] Z.-Q. Chen and M. Fukushima, Symmetric Markov processes, time change, and boundary theory, London Mathematical Society Monographs Series, 35, Princeton Univ. Press, Princeton, NJ, 2012. → pages 22, 83

[15] F. R. K. Chung, Laplacians of graphs and Cheeger's inequalities, in Combinatorics, Paul Erdős is eighty, Vol. 2 (Keszthely, 1993), 157–172, Bolyai Soc. Math. Stud., 2, János Bolyai Math. Soc., Budapest. → pages 125

[16] T. Coulhon and A. Grigoryan, Random walks on graphs with regular volume growth, Geom. Funct. Anal. 8 (1998), no. 4, 656–701. → pages 42, 49

[17] T. Coulhon, A. Grigoryan and F. Zucca, The discrete integral maximum principle and its applications, Tohoku Math. J. (2) 57 (2005), no. 4, 559–587. → pages 78

[18] E. B. Davies, Analysis on graphs and noncommutative geometry, J. Funct. Anal. 111 (1993), no. 2, 398–430. → pages 52, 55

[19] E. B. Davies, Large deviations for heat kernels on graphs, J. London Math. Soc. (2) 47 (1993), no. 1, 65–72. → pages 32, 37, 38, 52

[20] J. Dodziuk and L.
Karp, Spectral and function theory for combinatorial Laplacians, in Geometry of random motion (Ithaca, N.Y., 1987), 25–40, Contemp. Math., 73, Amer. Math. Soc., Providence, RI. → pages 111

[21] J. L. Doob, Stochastic processes, reprint of the 1953 original, Wiley Classics Library, Wiley, New York, 1990. → pages 133

[22] P. G. Doyle and J. L. Snell, Random walks and electric networks, Carus Mathematical Monographs, 22, Math. Assoc. America, Washington, DC, 1984. → pages 16, 102

[23] R. Durrett, Probability: theory and examples, fourth edition, Cambridge Series in Statistical and Probabilistic Mathematics, Cambridge Univ. Press, Cambridge, 2010. → pages 101, 102

[24] L. C. Evans, Partial differential equations, Graduate Studies in Mathematics, 19, Amer. Math. Soc., Providence, RI, 1998. → pages 1

[25] S. N. Ethier and T. G. Kurtz, Markov processes, Wiley Series in Probability and Mathematical Statistics: Probability and Mathematical Statistics, Wiley, New York, 1986. → pages 19, 24

[26] M. Folz, Gaussian upper bounds for heat kernels of continuous time simple random walks, Electron. J. Probab. 16 (2011), no. 62, 1693–1722. → pages ii, iii, 51, 52, 54, 145

[27] M. Folz, Volume growth and stochastic completeness of graphs. To appear in Trans. Amer. Math. Soc. → pages ii, iii, 80, 126, 131, 145

[28] M. Folz, Volume growth and spectrum for general graph Laplacians. To appear in Math. Z. → pages ii, iii, 110, 145

[29] R. L. Frank, D. Lenz and D. Wingert, Intrinsic metrics for non-local symmetric Dirichlet forms and applications to spectral theory. To appear in J. Funct. Anal. → pages 37

[30] K. Fujiwara, Growth and the spectrum of the Laplacian of an infinite graph, Tohoku Math. J. (2) 48 (1996), no. 2, 293–302. → pages 109, 110, 120, 146

[31] K. Fujiwara, The Laplacian on rapidly branching trees, Duke Math. J. 83 (1996), no. 1, 191–202. → pages 27, 119, 125

[32] M. Fukushima, Y. Ōshima and M.
Takeda, Dirichlet forms and symmetric Markov processes, de Gruyter Studies in Mathematics, 19, de Gruyter, Berlin, 1994. → pages 22, 24, 83, 89

[33] D. Gilat, On the nonexistence of a three series condition for series of nonindependent random variables, Ann. Math. Statist. 42 (1971), no. 1, 409. → pages 133

[34] R. I. Grigorchuk, Degrees of growth of finitely generated groups and the theory of invariant means, Izv. Akad. Nauk SSSR Ser. Mat. 48 (1984), no. 5, 939–985. → pages 50

[35] R. I. Grigorchuk and A. Żuk, The lamplighter group as a group generated by a 2-state automaton, and its spectrum, Geom. Dedicata 87 (2001), no. 1-3, 209–244. → pages 31

[36] A. Grigoryan, Stochastically complete manifolds, Dokl. Akad. Nauk SSSR 290 (1986), no. 3, 534–537. → pages 131, 132

[37] A. Grigoryan, Heat kernel of a noncompact Riemannian manifold, in Stochastic analysis (Ithaca, NY, 1993), 239–263, Proc. Sympos. Pure Math., 57, Amer. Math. Soc., Providence, RI. → pages 47

[38] A. Grigoryan, Gaussian upper bounds for the heat kernel on arbitrary manifolds, J. Differential Geom. 45 (1997), no. 1, 33–52. → pages 51, 56, 67, 69, 145

[39] A. Grigoryan, Analytic and geometric background of recurrence and non-explosion of the Brownian motion on Riemannian manifolds, Bull. Amer. Math. Soc. (N.S.) 36 (1999), no. 2, 135–249. → pages 127, 129

[40] A. Grigoryan, Heat kernels on weighted manifolds and applications, in The ubiquitous heat kernel, 93–191, Contemp. Math., 398, Amer. Math. Soc., Providence, RI. → pages 42, 47, 48, 105, 106

[41] A. Grigoryan, Analysis on Graphs. Unpublished lecture notes; available at http://www.math.uni-bielefeld.de/~grigor/aglect2.pdf. → pages 47, 48, 49

[42] A. Grigoryan, X. Huang and J. Masamune, On stochastic completeness of jump processes, Math. Z. 271 (2012), no. 3-4, 1211–1239. → pages 132, 144, 145, 146

[43] G. Grimmett, Percolation, second edition, Grundlehren der Mathematischen Wissenschaften, 321, Springer, Berlin, 1999. → pages 7

[44] S.
Haeseler, Heat kernel estimates and related inequalities on metric graphs. Preprint. arXiv:1101.3010. → pages 80

[45] B. Hua and M. Keller, Harmonic functions of general graph Laplacians. To appear in Calc. Var. Partial Differ. Eq. → pages 106

[46] X. Huang, On uniqueness class for a heat equation on graphs, J. Math. Anal. Appl. 393 (2012), no. 2, 377–388. → pages 132

[47] X. Huang, A note on the volume growth criterion for stochastic completeness of weighted graphs. To appear in Potential Anal. → pages 129, 146

[48] X. Huang and Y. Shiozawa, Upper escape rate of Markov chains on weighted graphs. Preprint. arXiv:1304.6197. → pages 146

[49] X. Huang, On stochastic completeness of weighted graphs. Ph.D. Thesis, Bielefeld University, 2011. → pages 22, 129

[50] X. Huang, M. Keller, J. Masamune and R. K. Wojciechowski, A note on self-adjoint extensions of the Laplacian on weighted graphs, J. Funct. Anal. 265 (2013), no. 8, 1556–1578. → pages 11

[51] M. Keller, The essential spectrum of the Laplacian on rapidly branching tessellations, Math. Ann. 346 (2010), no. 1, 51–66. → pages 123

[52] M. Keller and D. Lenz, Unbounded Laplacians on graphs: basic spectral properties and the heat equation, Math. Model. Nat. Phenom. 5 (2010), no. 4, 198–224. → pages 10, 27

[53] M. Keller and D. Lenz, Dirichlet forms and stochastic completeness of graphs and subgraphs, J. Reine Angew. Math. 666 (2012), 189–223. → pages 10, 25, 129

[54] M. Keller, D. Lenz, H. Vogt and R. Wojciechowski, Note on basic features of large time behaviour of heat kernels. To appear in J. Reine Angew. Math. → pages 28

[55] M. Keller, D. Lenz and R. K. Wojciechowski, Volume growth, spectrum and stochastic completeness of infinite graphs, Math. Z. 274 (2013), no. 3-4, 905–932. → pages 119, 129

[56] G. F. Lawler and V. Limic, Random walk: a modern introduction, Cambridge Studies in Advanced Mathematics, 123, Cambridge Univ. Press, Cambridge, 2010. → pages 103

[57] R. Lyons and Y.
Peres, Probability on Trees and Networks. Unpublished manuscript; available at http://mypage.iu.edu/~rdlyons/prbtree/prbtree.html. → pages 16, 31, 102, 104, 107

[58] P. Mathieu and E. Remy, Isoperimetry and heat kernel decay on percolation clusters, Ann. Probab. 32 (2004), no. 1A, 100–128. → pages 50, 77

[59] C. St. J. A. Nash-Williams, Random walk and electric currents in networks, Proc. Cambridge Philos. Soc. 55 (1959), 181–194. → pages 104

[60] B. Øksendal, Stochastic Differential Equations: An Introduction with Applications, sixth edition, Springer-Verlag, Berlin, 2010. → pages 1

[61] H. Osada, Isoperimetric constants and estimates of heat kernels of pre Sierpiński carpets, Probab. Theory Related Fields 86 (1990), no. 4, 469–490. → pages 48

[62] C. Pittet and L. Saloff-Coste, A survey on the relationships between volume growth, isoperimetry, and the behavior of simple random walk on Cayley graphs, with examples. Preprint. → pages 47

[63] G. Pólya, Über eine Aufgabe der Wahrscheinlichkeitsrechnung betreffend die Irrfahrt im Straßennetz, Math. Ann. 84 (1921), no. 1-2, 149–160. → pages 102

[64] M. Reed and B. Simon, Methods of modern mathematical physics. I, second edition, Academic Press, New York, 1980. → pages 26

[65] D. Revelle, Heat kernel asymptotics on the lamplighter group, Electron. Comm. Probab. 8 (2003), 142–154 (electronic). → pages 49

[66] L. C. G. Rogers, Itô excursion theory via resolvents, Z. Wahrsch. Verw. Gebiete 63 (1983), no. 2, 237–255. → pages 80

[67] S. Rosenberg, The Laplacian on a Riemannian manifold, London Mathematical Society Student Texts, 31, Cambridge Univ. Press, Cambridge, 1997. → pages 1, 37

[68] T. S. Salisbury, Construction of right processes from excursions, Probab. Theory Related Fields 73 (1986), no. 3, 351–367. → pages 80

[69] K.-T. Sturm, Analysis on local Dirichlet spaces. I. Recurrence, conservativeness and Lp-Liouville properties, J. Reine Angew. Math. 456 (1994), 173–196. → pages 37, 89, 105, 109, 131

[70] N. Th.
Varopoulos, Long range estimates for Markov chains, Bull. Sci. Math. (2) 109 (1985), no. 3, 225–252. → pages 80

[71] N. Th. Varopoulos, Isoperimetric inequalities and Markov chains, J. Funct. Anal. 63 (1985), no. 2, 215–239. → pages 48

[72] N. Th. Varopoulos, L. Saloff-Coste and T. Coulhon, Analysis and geometry on groups, Cambridge Tracts in Mathematics, 100, Cambridge Univ. Press, Cambridge, 1992. → pages 47

[73] J. Walsh, A diffusion with a discontinuous local time, Astérisque 52 (1978), no. 53, 37–45. → pages 79

[74] A. Weber, Analysis of the physical Laplacian and the heat flow on a locally finite graph, J. Math. Anal. Appl. 370 (2010), no. 1, 146–158. → pages 20, 21, 27

[75] W. Woess, Random walks on infinite graphs and groups, Cambridge Tracts in Mathematics, 138, Cambridge Univ. Press, Cambridge, 2000. → pages 6, 102, 104

[76] R. Wojciechowski, Stochastically incomplete manifolds and graphs, in Boundaries and Spectra of Random Walks (Kathrein, 2009), 165–181, Progress in Probability, 64, Birkhäuser, Basel. → pages 142, 143

[77] R. Wojciechowski, Stochastic completeness of graphs. Ph.D. Thesis, The City University of New York, 2008. → pages 128

[78] G. Zaimi, Assigning positive edge weights to a graph so that the weight incident to each vertex is 1. mathoverflow.net/questions/59117/ (2011). → pages 93

Appendix A
Symmetry of the Heat Kernel

In this appendix, we prove that the heat kernel satisfies p_t(x, y) = p_t(y, x) for all t ≥ 0 and x, y ∈ G, using the definition

p_t(x, y) := P^x(X_t = y) / θ_y.

Proof. Let (Y_j)_{j∈Z_+} be the discrete skeleton of X, and let (N_t)_{t≥0} denote the number of jumps that (X_t)_{t≥0} has made by time t. We write Y[0, n] for the path of vertices p_0, ..., p_n visited by Y after it has jumped n times.

Given x, y ∈ G, let P_n(x, y) denote the collection of paths of length n from x to y, and for x ∈ G, let τ_x denote the time it takes X to jump from x (i.e., τ_x is exponentially distributed with parameter π_x/θ_x).
Clearly, we have

p_t(x, y) = (1/θ_y) ∑_{n=0}^∞ P^x(Y_n = y, N_t = n)
= (1/θ_y) ∑_{n=0}^∞ ∑_{γ∈P_n(x,y)} P^x(Y[0, n] = γ, τ_x + τ_M ≤ t, τ_x + τ_M + τ_y > t)
= (1/θ_y) ∑_{n=0}^∞ ∑_{γ∈P_n(x,y)} P^x(Y[0, n] = γ) P^x(τ_x + τ_M ≤ t, τ_x + τ_M + τ_y > t),  (A.1)

where we've written τ_M = ∑_{j=1}^{n−1} τ_{γ(j)}. Given a path γ ∈ P_n(x, y), let γ_rev denote the path reversal of γ. A direct computation gives

P^x(Y[0, n] = γ) = (π_y/π_x) P^y(Y[0, n] = γ_rev).  (A.2)

Additionally, if we set λ_1 := π_x/θ_x and λ_2 := π_y/θ_y, then

P^x(τ_x + τ_M ≤ t, τ_x + τ_M + τ_y > t)
= ∫_0^t P(τ_M = u) P(τ_x ≤ t − u, τ_x + τ_y > t − u) du
= ∫_0^t P(τ_M = u) ∫_0^{t−u} λ_1 e^{−λ_1 s} ∫_{t−u−s}^∞ λ_2 e^{−λ_2 v} dv ds du
= λ_1 ∫_0^t P^x(τ_M = u) (e^{−λ_2(t−u)} − e^{−λ_1(t−u)}) / (λ_1 − λ_2) du,

so that

P^x(τ_x + τ_M ≤ t, τ_x + τ_M + τ_y > t) = (λ_1/λ_2) P^y(τ_y + τ_M ≤ t, τ_y + τ_M + τ_x > t).  (A.3)

Combining (A.1) with (A.2) and (A.3) gives

p_t(x, y) = (1/θ_y) ∑_{n=0}^∞ ∑_{γ∈P_n(x,y)} P^x(Y[0, n] = γ) P^x(τ_x + τ_M ≤ t, τ_x + τ_M + τ_y > t)
= (1/θ_y)(π_y/π_x)(λ_1/λ_2) ∑_{n=0}^∞ ∑_{γ∈P_n(x,y)} P^y(Y[0, n] = γ_rev, τ_y + τ_M ≤ t, τ_y + τ_M + τ_x > t)
= (1/θ_x) P^y(X_t = x)
= p_t(y, x).
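The symmetry established above can be checked numerically on a small finite graph. The following sketch (an illustration, not part of the thesis) builds the generator L = Θ^{−1}A of a continuous-time walk from a random symmetric conductance matrix, computes e^{tL} via the symmetrization S = Θ^{−1/2} A Θ^{−1/2}, and verifies that p_t(x, y) = (e^{tL})_{xy}/θ_y is a symmetric matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Symmetric edge conductances pi_xy on a complete graph (no self-loops).
W = rng.random((n, n))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)
theta = rng.random(n) + 0.5               # vertex measure theta_x > 0

A = W - np.diag(W.sum(axis=1))            # (A f)(x) = sum_y pi_xy (f(y) - f(x)); A is symmetric
Dh = np.sqrt(theta)
S = A / np.outer(Dh, Dh)                  # S = Theta^{-1/2} A Theta^{-1/2}, still symmetric
w, U = np.linalg.eigh(S)                  # spectral decomposition of S

t = 0.7
expS = (U * np.exp(t * w)) @ U.T          # e^{tS} = U diag(e^{t w}) U^T
Pt = expS * Dh[None, :] / Dh[:, None]     # e^{tL} = Theta^{-1/2} e^{tS} Theta^{1/2}
pt = Pt / theta[None, :]                  # p_t(x, y) = P^x(X_t = y) / theta_y
```

Here Pt is a stochastic matrix (its rows sum to 1, since a finite graph is stochastically complete), while pt = Pt Θ^{−1} is symmetric, matching the identity p_t(x, y) = p_t(y, x) proved above.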

