Models of gradient type with sub-quadratic actions and their scaling limits. Ye, Zichun. 2017.

Models of Gradient Type with Sub-Quadratic Actions and Their Scaling Limits

by Zichun Ye

B.Sc., Peking University, 2010
M.Sc., The University of British Columbia, 2012

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate and Postdoctoral Studies (Mathematics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

January 2017

© Zichun Ye 2017

Abstract

The main results of this thesis concern models of gradient type with sub-quadratic actions and their scaling limits. The model of gradient type is the density of a collection of real-valued random variables φ := {φ_x : x ∈ Λ} given by Z^{-1} exp(−∑_{j∼k} V(φ_j − φ_k)). We focus our study on the case that V(∇φ) = [1 + (∇φ)²]^α with 0 < α < 1/2, which is a non-convex potential.

The first result concerns the thermodynamic limits of the model of gradient type. We introduce an auxiliary field t_{jk} for each edge and represent the model as the marginal of a model with log-concave density. Based on this method, we prove that finite moments of the fields ⟨[v·φ]^p⟩ are bounded uniformly in the volume for the finite volume measure. This bound leads to the existence of infinite volume measures.

The second result is the random walk representation and the scaling limit of the translation-invariant, ergodic gradient infinite volume Gibbs measure. We represent every infinite volume Gibbs measure as a mixture over Gaussian gradient measures with a random coupling constant ω_{xy} for each edge. With such a representation, we give an estimate on the decay of the two-point correlation function. Then, by the quenched functional central limit theorem in the random conductance model, we prove that every ergodic, infinite volume Gibbs measure with mean zero for the potential V above scales to a Gaussian free field.

Preface

Chapter 1 is an introduction and motivation for the problems studied in the remainder of the thesis. No originality is claimed and, to give an informative exposition, we explain a number of ideas from a number of references mentioned, but without explicit reference to the origin of each single idea. Chapters 2, 3 and 4 introduce the results of the original work conducted by me under the supervision of my advisors Professor David Brydges and Professor Martin Barlow. Chapter 4 discusses ideas developed together with Professor Martin Barlow.

Table of Contents

Abstract
Preface
Table of Contents
List of Figures
Acknowledgments
Dedication
1 Introduction
  1.1 Introduction
    1.1.1 Background
    1.1.2 Overview of the results
  1.2 Model of gradient type
    1.2.1 ϕ-Gibbs measure
    1.2.2 Massive ϕ-Gibbs measure
    1.2.3 ∇φ-Gibbs measure
  1.3 Basic properties
    1.3.1 Markov property, shift invariance and ergodicity
    1.3.2 Dynamics
  1.4 Potential function
    1.4.1 Convex potentials
    1.4.2 Non-convex potentials
    1.4.3 Sub-quadratic potentials
2 Thermodynamic limit
  2.1 Introduction and main result
    2.1.1 Extended model and its marginals
    2.1.2 Main result
  2.2 Bounds on the moments
    2.2.1 Bounds for the auxiliary field
    2.2.2 Bounds for the φ field
  2.3 Existence of infinite volume measure
    2.3.1 Green's function
    2.3.2 Existence of infinite volume measure
3 Random walk connection
  3.1 Introduction and main result
    3.1.1 Random walk in Zd
    3.1.2 Coupling to random conductance model
    3.1.3 Main results
  3.2 Proof of main results
    3.2.1 Connection to random conductance model
    3.2.2 Two point correlation
4 Scaling limit
  4.1 Introduction and main results
    4.1.1 Gaussian free field
    4.1.2 Scaling limits
    4.1.3 Main results
  4.2 Regularity estimates
  4.3 Proof of main result
5 Outlook
  5.1 Uniqueness
  5.2 Gradient Gibbs measure with disorder
    5.2.1 Model of interest
    5.2.2 Dynamical method
Bibliography
Appendices
A Stable density
  A.1 Definition of stable distribution
  A.2 Log concavity
  A.3 Tail behavior
B Random walk in random environment
  B.1 Previous results about random walk in random environment
  B.2 Potential theory for random conductance models

List of Figures

1.1 Water exposing in air
1.2 One dimension interface and its approximation

Acknowledgments

I am greatly indebted to my supervisors Professor David Brydges and Professor Martin Barlow for their guidance throughout my Ph.D. study.
They not only introduced me to the beautiful world of probability theory, statistical mechanics, and random walks, but also taught me how to be a mathematician. I am fortunate to have had their support, both in academic and non-academic matters, all these years.

I gratefully thank Professor Gordon Slade and Professor Brian Marcus for serving on my advisory committee, and the members of the Probability Group at the University of British Columbia for having provided a cooperative research environment.

I also thank Dr. Wen Yang for his help with the thesis template and the process of the final defense, Li Wang, Qingsan Zhu and Xiaowei Li for the proofreading, Yingluo Wang for her help with image manipulations, and all my friends at Peking University, the UBC Math Department and St. John's College for their precious support during my writing of the thesis.

Last but not least, I want to show my gratitude to my parents Xiaohui Ye and Ping Yang for their love and support throughout my life.

Dedication

To my parents

Chapter 1  Introduction

1.1 Introduction

1.1.1 Background

The model of gradient type originates from a problem with a long history in physics: the formation of an interface. When a fluid is in contact with another fluid, like water exposed to air, a portion of the total free energy of the system is proportional to the area of the surface of contact, and equilibrium will accordingly be obtained when the free energy of the surfaces in contact is a minimum [50, 77]. Now consider the case of water exposed to air in dimension one. Then the interface is described as φ = φ(x), which is the height of the water at position x. If we neglect the possibility of overhangs, the area of the interface, which is arc-length in this case, is given by

  L(φ) = ∫ √(1 + (dφ/dx)²) dx = ∫ √(1 + (∇φ(x))²) dx,   (1.1)

namely the arc-length of the interface is a function of the gradient ∇φ(x). Furthermore, if we discretize the continuous interface into n parts and approximate the interface by multiple linear segments, then by proper scaling we have

  L(φ) = ∑_{x=1}^{n} √(1 + (∇φ(x))²) = ∑_{x=1}^{n} √(1 + (φ(x) − φ(x−1))²).   (1.2)

[Figure 1.1: Water exposing in air]
[Figure 1.2: One dimension interface and its approximation]

If we write V(η) = √(1 + η²), then the energy associated with the interface is given by

  H_n(φ) = β ∑_{x=1}^{n} V(∇φ(x)),   (1.3)

where β is the constant representing the proportionality between the energy and the arc-length of the interface. In the later context, V will be called the potential function and H_n will be called the Hamiltonian. The Hamiltonian H_n here is a function of the gradient of φ, which is the key feature of the model of gradient type.

In the case of dimension two, we have a similar result, as the surface area can also be represented by an integral of a function of the gradient. Then the sum in (1.3) is replaced by summing over all sites in a finite subset of Z². Brydges and Spencer [25] first studied the model of gradient type with potential function V(η) = √(1 + η²) in dimension d, and later they pointed out the relationship between their model and the interface model in low dimensions. In fact, the model of gradient type has a close connection with different models in statistical mechanics, including the Ising model [45], the SOS (Solid on Solid) model [68] and the Cauchy-Born rule [2]. As the main interest of this thesis, we will extend the model in [25] and study the model of gradient type with potential function V(η) = (1 + η²)^α for an arbitrary fractional power 0 < α ≤ 0.5. We are interested in this class of V for several reasons.
On the one hand, it is a class of non-convex potential functions while this class of measures still keeps features in common with the massless free field, namely the Markov property and Osterwalder-Schrader positivity. On the other hand, similar measures have been derived from other models in probability theory. For example, when α = 1/2, the Hamiltonian is similar to the one appearing in the measure describing linearly edge reinforced random walk.

1.1.2 Overview of the results

In the rest of this chapter, the model of gradient type is precisely defined. The microscopic configurations of height variables φ of the interface are assigned the energy H(φ). The statistical ensemble in equilibrium is defined by Gibbs measures. We will introduce both the ϕ-Gibbs measure and the ∇ϕ-Gibbs measure, as well as some important boundary conditions, like the periodic boundary condition. We are interested in Gibbs measures with some special properties, namely the Markov property, shift invariance and ergodicity. Then the corresponding time evolution, called the Ginzburg-Landau dynamics, is constructed in such a way that it is reversible under the Gibbs measures. As the key element of the Gibbs measure, the influence of the convexity of the potential function on the Gibbs measure will be explained. At last we will introduce the potential of interest and review some previous results.

Chapters 2 - 4 will give the main results of this thesis. In Chapter 2, we will discuss the thermodynamic limit of the Gibbs measure. We will give bounds for the finite order moments of the φ field (Theorem 2.1.4). The existence of the infinite volume measures is a natural result of these bounds (Theorem 2.1.6). In Chapter 3, we will briefly introduce the random conductance model, or, the random walk in random environment in Rd. Our model admits a random walk representation as in the case of Gaussian systems (Proposition 3.1.5). We will also calculate the decay of the two point function at the end of that chapter (Theorem 3.1.7). In Chapter 4, the scaling limits connecting the microscopic and macroscopic levels will be explained. We will give the definition of the Gaussian free field and prove that every ergodic gradient Gibbs measure in our model scales to a Gaussian free field (Theorem 4.1.5). In Chapter 5, we will introduce some work about the uniqueness which is in preparation.

1.2 Model of gradient type

1.2.1 ϕ-Gibbs measure

We are interested in a hyper-surface embedded in the (d+1)-dimensional space R^{d+1}. The hyper-surface is represented by a graph viewed from a fixed reference d-dimensional hyperplane Γ located in the space R^{d+1}. In other words, there are no overhangs and the location of the hyper-surface is described by the configuration φ = {φ(x) ∈ R; x ∈ Γ}, which measures the height of the hyper-surface at site x. The variables φ are microscopic objects, and the space Γ is discretized and taken as Γ = Λ ⊂ Zd. Let Ω_Λ = R^Λ be the set of all configurations over Λ and Ω = Ω_{Zd} be the configurations over Zd. In [45], φ is called the height variable of the interface.

We think of Zd as a graph with edges

  E = {{j,k} : j,k ∈ Zd, ‖j−k‖₂ = 1},   (1.4)

where ‖·‖₂ is the Euclidean norm. We use the notation jk = {j,k} for the undirected edges. Given a finite set Λ ⊂ Zd and an even potential function V(x), we define the Hamiltonian H_Λ by

  H_Λ(φ) := ∑_{jk∈E, jk∩Λ≠∅} V(φ_j − φ_k).   (1.5)

H_Λ(φ) is also called the total energy of φ in Λ for the potential function V(x). In most contexts, the potential function V is required to be smooth and symmetric.
The surface φ has low energy if the tilts |φ(x) − φ(y)| are small. The energy (1.5) of the interface φ is constructed in such a manner that it is invariant under a uniform translation φ(x) → φ(x) + h for all x ∈ Zd and h ∈ R. A typical example of V is a quadratic potential V(x) = cx²/2, c > 0.

For a function ψ ∈ Ω, we define the finite volume Gibbs measure (or more exactly, ϕ-Gibbs measure) over Λ by

  µ^ψ_Λ := (1/Z_Λ(ψ_{Λ^c})) e^{−H_Λ(φ∨ψ)} ∏_{j∈Λ} dφ_j.   (1.6)

Here (φ∨ψ)(k) = φ_k for k ∈ Λ and = ψ_k for k ∈ Λ^c, and Z_Λ(ψ_{Λ^c}) is the normalization constant defined by

  Z_Λ(ψ_{Λ^c}) = ∫_{R^Λ} e^{−H_Λ(φ)} ∏_{j∈Λ} dφ_j.   (1.7)

We call ψ a boundary condition. The term e^{−H_Λ(φ)} is the Boltzmann factor, while dφ_Λ := ∏_{j∈Λ} dφ_j is the Lebesgue measure on R^Λ, which represents uniform fluctuations of the interface. Notice that Z_Λ(ψ_{Λ^c}) is finite, as the boundary condition ψ breaks the symmetry under constant translation of φ.

Remark 1.2.1. A usual and widely discussed boundary condition is the Dirichlet boundary condition, simply setting ψ = 0 outside Λ. An example of the model with the Dirichlet boundary condition is discussed in [33].

Remark 1.2.2. To make the notation clear, we will use φ for the random variables in the model, and use ϕ only in the name of the model, for example, ϕ-Gibbs measure.

For an infinite region Λ with |Λ| = ∞, the expression (1.6) is meaningless as the Hamiltonian H_Λ(φ) is a formal sum. However, one can define the notion of a Gibbs measure on Zd based on the well-known Dobrushin-Lanford-Ruelle (DLR) formulation [48].

Definition 1.2.1. Let F_Λ = σ(φ_j, j ∈ Λ). We say a measure µ on (Ω, F_{Zd}) is an infinite volume ϕ-Gibbs measure if for any finite set Λ ⊂ Zd and µ-a.s. ψ, the following DLR equation holds:

  µ(·|F_{Λ^c})(ψ) = µ^ψ_Λ(·),   (1.8)

where µ^ψ_Λ(·) is defined in (1.6). Let G, or more precisely G(V), denote the set of all infinite volume Gibbs measures.

Remark 1.2.3. In some contexts [48], the finite volume measure is called a (Gibbs) specification.

Gibbs measures in the set G(V) are called the equilibrium states for a physical system whose components are coupled together by V. Such a physical system may vary between different equilibrium states if there is more than one element in G(V). The non-uniqueness of the Gibbs measure for a given V provides such "free choice" and indicates that the physical system undergoes a phase transition. This fact leads to the following definition.

Definition 1.2.2. We say that the model is at phase coexistence (or undergoes a first-order phase transition) if |G(V)| > 1.

It should be noticed that in physics, the concept of a critical phenomenon is not limited to the non-uniqueness of the equilibrium state. For example, for two-dimensional Abelian spin systems, there is a transition from a high temperature phase to a low temperature phase: the two point correlation has a power fall-off at low temperature. This is the so-called Kosterlitz-Thouless transition; cf. [44]. In this thesis, however, we shall limit our discussion of phase transitions to the sense of the above definition.

1.2.2 Massive ϕ-Gibbs measure

In quantum field theory, H defined in (1.5) is called the massless Hamiltonian. The massive Hamiltonian is given by

  H_{Λ,ε}(φ) = H_Λ(φ) + ε ∑_{j∈Λ} φ_j².   (1.9)

This provides an approximation of the massless Hamiltonian when letting ε → 0. Similar to the massless case, given a finite set Λ ⊂ Zd and ψ ∈ Ω, we define the finite volume massive Gibbs measure over Λ by

  µ^ψ_{Λ,ε} := (1/Z_{Λ,ε}(ψ_{Λ^c})) e^{−H_{Λ,ε}(φ∨ψ)} ∏_{j∈Λ} dφ_j,   (1.10)

where Z_{Λ,ε}(ψ_{Λ^c}) is the normalization constant.
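As a brief aside (not part of the original text), finite volume measures of the form (1.6) and (1.10) are easy to simulate on a small box, which can serve as a sanity check on the definitions. The sketch below runs a plain Metropolis chain targeting a measure of the form (1.10) with the sub-quadratic potential V(η) = (1 + η²)^α studied later in the thesis, a zero Dirichlet boundary condition ψ ≡ 0, and illustrative values of the box size, α, ε and the proposal scale; all identifiers here are hypothetical.

```python
import numpy as np

# Minimal Metropolis sampler for a finite volume massive measure as in (1.10)
# on a small box Lambda = {0,...,L-1}^2 with Dirichlet boundary psi = 0.
# The potential V(eta) = (1 + eta^2)**alpha and all parameters are illustrative.

rng = np.random.default_rng(0)
L, alpha, eps = 8, 0.4, 0.1

def V(eta):
    return (1.0 + eta**2) ** alpha

def local_energy(phi, x, y, value):
    # Terms of H_{Lambda,eps} involving site (x, y); neighbours outside the
    # box enter through the boundary condition psi = 0.
    e = eps * value**2
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        nbr = phi[nx, ny] if 0 <= nx < L and 0 <= ny < L else 0.0
        e += V(value - nbr)
    return e

phi = np.zeros((L, L))
for sweep in range(2000):
    for x in range(L):
        for y in range(L):
            old = phi[x, y]
            new = old + rng.normal(scale=0.5)
            dE = local_energy(phi, x, y, new) - local_energy(phi, x, y, old)
            if np.log(rng.uniform()) < -dE:       # Metropolis acceptance
                phi[x, y] = new

print("sample mean height:", phi.mean())
print("sample mean |gradient|:", np.abs(np.diff(phi, axis=0)).mean())
```

Only the energy terms touching the updated site enter the acceptance ratio, which is why a single local sweep suffices for the update; this is not the dynamics studied in Section 1.3.2, only a convenient sampler.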
Then the massive ϕ-Gibbs measure µ_ε (on Zd) is defined by means of the DLR equation with the local specifications µ_{Λ,ε} in place of µ_Λ in Definition 1.2.1. For every fixed ε, let G_ε denote the set of all massive infinite volume Gibbs measures. The following lemma gives the relation between G_ε and the set G of massless infinite volume measures.

Lemma 1.2.4. For a sequence {µ_ε}_{ε>0} with µ_ε ∈ G_ε and µ ∈ G,

  µ(·|F_{Λ^c}) = lim_{ε→0} µ_ε(·|F_{Λ^c}), µ-a.s.,   (1.11)

where the convergence is in the weak sense.

Proof. See Proposition 4.19 of [48].

Remark 1.2.5. The longitudinal correlation length, which measures the correlations along the surface, is one of the quantities of great interest in characterizing the interface. Mathematically, it is related to the rate of exponential decay of the covariance, which is also called the mass in the physics literature. The mass associated to an infinite-volume Gibbs measure Q is defined, for any x ∈ S^{d−1}, by

  m_Q(x) := − lim_{k→∞} (1/k) log Cov_Q(φ_0, φ_{⌊kx⌋}),   (1.12)

where ⌊kx⌋ = (⌊kx_1⌋, ..., ⌊kx_d⌋). In fact, when V(η) = η²/2 is the quadratic function, which is the case of Gaussian equilibrium systems, for an infinite volume measure µ_ε ∈ G_ε we have

  m_{µ_ε}(x) = √(2ε) + o(√ε) as ε → 0.   (1.13)

Thus in the definition of the massive Gibbs measure, ε may be replaced by m²/2 in some contexts. One can refer to [45, 74] for more discussion about this topic.

Now we introduce the massive ϕ-Gibbs measure with periodic boundary conditions, which will provide us with the property of translation invariance: the distribution of the field is invariant under any space shift of Zd. In this case, the boundary condition outside Λ is given by the periodic extension of the configuration inside Λ to Zd. To be specific, let Λ be a cuboid in Zd. For φ ∈ Ω_Λ, let φ̃ be the periodic extension of φ from Λ to Zd. We define the periodic boundary condition by setting ψ = φ̃ in (1.6). Then the finite volume measure on R^Λ with periodic boundary condition is defined as

  µ^p_{Λ,ε} := (1/Z_{Λ,ε}) e^{−H_{Λ,ε}(φ̃)} ∏_{j∈Λ} dφ_j,   (1.14)

where ε > 0 and Λ is a d-dimensional lattice torus Z^d_N of Zd. Let G^p_ε be the set of all cluster points of {µ^p_{Λ,ε} | Λ = Z^d_N, N ≥ 3}.

Remark 1.2.6. We cannot introduce the periodic boundary condition for the massless Gibbs measure. The Gibbs measure is unnormalizable since H_Λ is translation invariant and this makes the normalization Z(Λ) = ∞. In fact, a boundary condition like the Dirichlet boundary condition for massless Gibbs measures plays the role of breaking the symmetry under constant translation of φ. With the massive term ε ∑_{x∈Λ} φ_x² added to the Hamiltonian, the distribution can be normalized.

Remark 1.2.7. In [48], the periodic boundary condition is introduced in a different way. Instead of periodically extending the configuration φ in Λ to Zd, one can use a periodic modification of the potential function to define the new Hamiltonian. See Example (4.20)(2) there for more detail about the setup. Also see (2.25) of [12].

We will show in the next lemma that the set of all cluster points, in the sense of weak limits, of finite volume measures with periodic boundary condition is a subset of the set of all massive infinite volume measures in the sense of Definition 1.2.1. Notice that there is something to prove here because the definition of the periodic extension φ̃ depends on Λ.

Lemma 1.2.8. G^p_ε ⊂ G_ε.

Proof. See Example 4.20.2 of [48]. Also refer to Lemma 2.4 of [12].

1.2.3 ∇ϕ-Gibbs measure

For every Λ ⊂ Zd, let Λ* be the set of all directed bonds b = ⟨x,y⟩, distinguished from the undirected edges {j,k} ∈ E. For b = ⟨x,y⟩, we write x_b = x for the start point and y_b = y for the end point.
For each b ∈ (Zd)* and configuration φ ∈ Ω, define

  ∇φ(b) = φ(y_b) − φ(x_b).   (1.15)

We also define ∇_iφ(x) = φ(x + e_i) − φ(x), 1 ≤ i ≤ d, for x ∈ Zd, where e_i ∈ Zd is the i-th unit vector given by (e_i)_j = δ_{ij}. Then the Hamiltonian H_Λ is rewritten as

  H_Λ(φ) = (1/2) ∑_{b∈(Zd)*, b∩Λ≠∅} V(∇φ(b)).   (1.16)

The factor 1/2 is needed as each undirected edge {j,k} is counted twice in the sum and V is an even function.

By (1.15), a configuration of height variables φ determines a field of height differences ∇φ = {∇φ(b) : b ∈ (Zd)*}. Therefore we can introduce the measure µ^∇ of ∇ϕ under the ϕ-Gibbs measure µ. We shall call µ^∇ the ∇ϕ-Gibbs measure. In fact, it is possible to define the ∇ϕ-Gibbs measures directly by means of the DLR equations, and we will prove that ∇ϕ-Gibbs measures exist in this sense for all dimensions d ≥ 1.

A sequence of bonds C = {b^(1), b^(2), ..., b^(n)} is called a chain connecting x and y (x, y ∈ Zd) if x_{b^(1)} = x, x_{b^(i+1)} = y_{b^(i)} for 1 ≤ i ≤ n−1 and y_{b^(n)} = y. The chain C is called a closed loop if y_{b^(n)} = x_{b^(1)}. A plaquette is a closed loop P = {b^(1), b^(2), b^(3), b^(4)} such that {x_{b^(i)}, i = 1,...,4} consists of four different points. A field η = {η(b)} ∈ R^{(Zd)*} is said to satisfy the plaquette condition if

  • η(b) = −η(−b) for all b ∈ (Zd)*;
  • ∑_{b∈P} η(b) = 0 for all plaquettes P in Zd,

where −b denotes the reversed bond of b. Note that if φ = {φ(x)} ∈ R^{Zd}, then ∇φ = {∇φ(b)} ∈ R^{(Zd)*} automatically satisfies the plaquette condition. We set

  χ = {η ∈ R^{(Zd)*}; η satisfies the plaquette condition},   (1.17)

so χ is the state space for the ∇ϕ-field, endowed with the topology induced from the space R^{(Zd)*} with the product topology. In contrast to the φ field, which is called height variables, the ∇φ field is called height differences. In fact, the height differences η^φ ∈ χ are associated with the heights φ ∈ R^{Zd} by

  η^φ(b) := ∇φ(b), b ∈ (Zd)*,   (1.18)

and conversely, the heights φ^{η,φ(O)} ∈ R^{Zd} can be constructed from height differences η and the height variable φ(O) at x = O as

  φ^{η,φ(O)}(x) := ∑_{b∈C_{O,x}} η(b) + φ(O),   (1.19)

where C_{O,x} is an arbitrary chain connecting O and x. Note that φ^{η,φ(O)} is well-defined if η = {η(b)} ∈ χ.

After setting up the space of height differences, we now define the finite volume ∇ϕ-Gibbs measures. For every ξ ∈ χ and every simply connected finite subset Λ ⊂ Zd, the space of all possible configurations of height differences on Λ* := {b = ⟨x,y⟩ ∈ (Zd)*; x or y ∈ Λ} for a given boundary condition ξ is defined as

  χ_{Λ*,ξ} = {η = (η(b))_{b∈Λ*}; η ∨ ξ ∈ χ},   (1.20)

where η ∨ ξ is defined by (η∨ξ)(b) = η(b) for b ∈ Λ* and = ξ(b) for b ∉ Λ*. The finite volume ∇ϕ-Gibbs measure in Λ with boundary condition ξ is defined by

  µ^∇_{Λ,ξ}(dη) = (1/Z_{Λ,ξ}) exp(−(1/2) ∑_{b∈Λ*} V(η(b))) dη_{Λ,ξ},   (1.21)

where dη_{Λ,ξ} denotes the uniform measure on the affine space χ_{Λ*,ξ} and Z_{Λ,ξ} is the normalization constant.

We also need to introduce the periodic boundary condition for ∇ϕ-Gibbs measures. Let T^d_N = (Z/NZ)^d be the lattice torus of size N and let T^{d,*}_N be the set of all directed bonds in T^d_N. Let χ_{T^d_N} be the family of all η ∈ R^{T^{d,*}_N} satisfying the plaquette condition on the torus, and define µ̃^∇_N ∈ P(χ_{T^d_N}) by

  µ̃^∇_N(dη̃) := (1/Z̃_N) exp(−(1/2) ∑_{b∈T^{d,*}_N} V(η̃(b))) dη̃,   (1.22)

where Z̃_N is the normalization constant and dη̃ is the uniform measure on the affine space χ_{T^{d,*}_N}.

The finite volume ϕ-Gibbs measure and the finite volume ∇ϕ-Gibbs measure are associated with each other by a simple change of variables. Namely, given ξ ∈ χ and h ∈ R, define ψ ∈ R^{Zd} as ψ = φ^{ξ,h} by (1.19).
Then, if φ is µ^ψ_Λ-distributed with the boundary condition ψ constructed in this way, η = ∇φ is µ^∇_{Λ,ξ}-distributed. The distribution of ∇φ is certainly independent of the choice of h. Furthermore, for any w ∈ R^{Λ*}, define

  v_w = ∑_{b∈(Zd)*, w_b≠0} w_b (δ_{y_b} − δ_{x_b}) ∈ R^Λ.   (1.23)

Then we have

  w · η =_d v_w · φ.   (1.24)

Notice that v_w is orthogonal to the constants under this definition. On the other hand, for each v orthogonal to the constants, we can find a w_v ∈ R^{(Zd)*} such that (1.24) holds. Notice that (1.24) also holds for finite volume measures with periodic boundary condition. Combining this observation and Proposition 4.19 in [48], we have the following lemma.

Lemma 1.2.9. Let µ^p_{Λ,ε}, defined in (1.14), be the massive Gibbs measure, µ^∇_{Λ,p}, defined in (1.22), be the ∇ϕ-Gibbs measure, and ⟨·⟩_µ be the expectation with respect to the measure µ. Then

  ⟨(w·η)^k⟩_{µ^∇_{Λ,p}} = lim_{ε→0} ⟨(v_w·φ)^k⟩_{µ^p_{Λ,ε}}   (1.25)

if the limit on the right hand side exists. Here v_w is given in (1.23).

Now, similarly to the definition of the ϕ-Gibbs measures on Zd, we introduce the ∇ϕ-Gibbs measures on (Zd)*.

Definition 1.2.3. Let F*_Λ = σ(η(b); b ∈ Λ*) and J*_Λ = σ(η(b); b ∈ (Zd)* \ Λ*). The probability measure µ^∇ on (χ, F*_{Zd}) is called a ∇ϕ-Gibbs measure if it satisfies the DLR equation

  µ^∇(·|J*_Λ)(ξ) = µ^∇_{Λ,ξ}(·), µ^∇-a.e. ξ,   (1.26)

for every finite subset Λ ⊂ Zd.

1.3 Basic properties

1.3.1 Markov property, shift invariance and ergodicity

Here we will introduce some properties of the finite volume measure µ^ψ_Λ and the infinite volume measure µ. Recall that we denote by Ω_Λ = R^Λ the set of all configurations over Λ and by Ω = Ω_{Zd} the configurations over Zd. Let F_Λ be the σ-field of all Borel sets in R^Λ and F = F_{Zd}. For a finite subset Λ ⊂ Zd,

  ∂⁺Λ = {x ∉ Λ, |x−y| = 1 for some y ∈ Λ}   (1.27)

is the outer boundary of Λ and Λ̄ = Λ ∪ ∂⁺Λ is the closure of Λ.

In the Hamiltonian H(φ), the interactions among the height variables only occur between neighboring sites. This structure is reflected in the Markov property of the field of height variables φ = {φ(x)} under the finite volume ϕ-Gibbs measures µ^ψ_Λ and the infinite volume ϕ-Gibbs measures µ.

Proposition 1.3.1 ([45, Proposition 2.1]).
1. Let Λ ⊂ Zd be a finite subset and let the boundary condition ψ ∈ R^{Zd} be given. Suppose that Λ is decomposed into three regions A_1, A_2, B and that B separates A_1 and A_2; namely, Λ = A_1 ∪ A_2 ∪ B, A_1 ∩ A_2 = A_1 ∩ B = A_2 ∩ B = ∅ and |x_1 − x_2| > 1 holds for every x_1 ∈ A_1 and x_2 ∈ A_2. Then, under the conditional probability µ^ψ_Λ(·|F_B), the random variables φ_{A_1} and φ_{A_2} are mutually independent, where we denote φ_{A_1} = {φ(x); x ∈ A_1}, etc.
2. Let µ ∈ G be a ϕ-Gibbs measure. Then for every finite subset A ⊂ Zd the random variables φ_A and φ_{Ā^c} are mutually independent under the conditional probability µ(·|F_{∂⁺A}).

Remark 1.3.2. In the context of the random cluster model and SLE, a similar Markov property is called the domain Markov property. See [9, 52] for references on the random cluster model and [10] for a reference on SLE.

Remark 1.3.3. As a result of the Markov property, in the case of dimension one where Λ = {0, 1, ..., N}, φ = {φ(x)} is a pinned random walk under µ^ψ_Λ, regarding x as the time variable. φ(0) and φ(N) are pinned by the boundary condition ψ, and the height differences {η_x} are i.i.d. R-valued random variables. Related research is in [26, 57].

Here we recall the notions of shift invariance and ergodicity under the shifts for ϕ-fields and ∇ϕ-fields, respectively; see, e.g., [48]. For x ∈ Zd, we define the shift operators τ_x : R^{Zd} → R^{Zd} for heights by (τ_xφ)(y) = φ(y − x) for y ∈ Zd and φ ∈ R^{Zd}.
The shifts for height differences are also denoted by τ_x. Namely, τ_x : χ → χ (or τ_x : R^{(Zd)*} → R^{(Zd)*}) is defined by (τ_xη)(b) = η(b − x) for b ∈ (Zd)* and η ∈ χ (or η ∈ R^{(Zd)*}), where b − x = ⟨x_b − x, y_b − x⟩ ∈ (Zd)*.

Definition 1.3.1. A probability measure µ on (Ω, F) is called shift invariant if µ ∘ τ_x^{-1} = µ for every x ∈ Zd. A shift invariant µ on (Ω, F) is called ergodic (under the shifts) if every {τ_x}-invariant function F = F(φ) on R^{Zd} (i.e., a function satisfying F(τ_xφ) = F(φ) µ-a.s. for every x ∈ Zd) is constant (µ-a.e.).

Remark 1.3.4. For ergodicity, an equivalent definition [75] is µ(A) ∈ {0,1} for any event A with the property τ_x^{-1}(A) = A for all x ∈ Zd.

As we stated with the definition of the periodic boundary condition, we are interested in Gibbs measures with periodic boundary condition as they provide the property of translation invariance. The next lemma gives us the desired result. Recall that G^p_ε is the set of all cluster points of {µ^p_{Λ,ε} | Λ = Z^d_N, N ≥ 3}. Notice that by Lemma 1.2.8, G^p_ε ⊂ G_ε, namely all the cluster points are infinite volume Gibbs measures.

Lemma 1.3.5 ([48, Example 5.20.3]). Any measure in G^p_ε is translation invariant.

Let G_I denote the set of all translation invariant infinite volume measures. Then G_I is a convex set and, being weakly compact, its extremal points, whose collection is denoted by G_ext, are also in G_I. By the Krein-Milman theorem, see I.3.10 in [64], elements of G_I \ G_ext are convex combinations (in general, integrals) over the extremal elements. We are especially concerned with the translation invariant infinite volume Gibbs measures as they are called phases in some physics literature [37, 67]. Moreover, an ergodic infinite-volume Gibbs measure is called a pure phase while a nontrivial convex combination of them is called a mixed phase. The next theorem explains the relation between the ergodic infinite-volume Gibbs measures and the set G_ext.

Theorem 1.3.6 ([48, Theorem 14.15]). A Gibbs measure µ ∈ G_I is extreme (i.e., lies in G_ext) if and only if µ is ergodic.

Remark 1.3.7. A direct corollary is that if G_I is not empty, then there exists at least one ergodic translation invariant infinite volume measure.

Remark 1.3.8. For a real physical system in equilibrium, the microscopic quantities fluctuate rapidly, like the position of a particle in Brownian motion. However, the macroscopic quantities remain constant within certain bounds of accuracy of the observation and on a relatively long time scale. If we want to describe the observed state in the context of mathematics, we shall describe the state by a probability measure µ due to the microscopic fluctuations. Moreover, µ should not only be consistent with the observed distributions of the microscopic variables, but also reflect the non-randomness on the macroscopic scale. The former leads to the conclusion that µ is a Gibbs measure according to the basic principles of statistical mechanics, while the latter requires a further condition on µ: µ should be ergodic. The ergodic phases are characterized among all translation invariant states by the property that macroscopic quantities are given definite values. For example, an experimenter might measure the average height ∑_{i∈Λ} φ_i/|Λ| in a sample φ drawn from the space. The ergodic theorem implies that ∑_{i∈Λ} φ_i/|Λ| tends to a constant with respect to an ergodic phase such as µ, which is almost surely independent of the sample chosen. This justifies calling an ergodic phase pure.
Refer to [37, 48] for more explanations of the relation between ergodic states and statistical mechanics.

1.3.2 Dynamics

Corresponding to the Hamiltonian H(φ), one can naturally introduce a random time evolution of the microscopic height variables φ of the interface. Indeed, we consider the stochastic differential equations (SDEs) for φ_t = {φ_t(x); x ∈ Λ} ∈ R^Λ, t > 0,

  dφ_t(x) = −(∂H/∂φ(x))(φ_t) dt + √2 dw_t(x), x ∈ Λ,   (1.28)

where w_t = {w_t(x); x ∈ Λ} is a family of independent one dimensional standard Brownian motions. The derivative of H(φ) with respect to the variable φ(x) is given by

  (∂H/∂φ(x))(φ) = ∑_{y∈Λ̄: |x−y|=1} V′(φ(x) − φ(y)).   (1.29)

When Λ ⊂ Zd is a finite subset, the SDEs (1.28) have the form

  dφ_t(x) = −∑_{y∈Λ̄: |y−x|=1} V′(φ_t(x) − φ_t(y)) dt + √2 dw_t(x), x ∈ Λ,   (1.30)

subject to the boundary conditions

  φ_t(y) = ψ(y), y ∈ ∂⁺Λ.   (1.31)

When Λ = Zd, we can write down the SDEs for φ_t = {φ_t(x); x ∈ Zd} ∈ R^{Zd}, t > 0,

  dφ_t(x) = −∑_{y∈Zd: |y−x|=1} V′(φ_t(x) − φ_t(y)) dt + √2 dw_t(x), x ∈ Zd.   (1.32)

Also, by writing down the SDEs (1.32) for φ_t(x) and φ_t(y) and then taking their difference, we obtain the dynamics for the height differences η_t = {η_t(b)} ∈ R^{(Zd)*} as

  dη_t(b) = −( ∑_{b̄: x_{b̄}=x_b} V′(η_t(b̄)) − ∑_{b̄: x_{b̄}=y_b} V′(η_t(b̄)) ) dt + √2 dw_t(b)   (1.33)

for b ∈ (Zd)*, where w_t(b) = w_t(x_b) − w_t(y_b). The relationship between the solutions of (1.32) and (1.33) is summarized in the next lemma. Recall that the height differences η^φ are associated with the heights φ by (1.18) and, conversely, the heights φ^{η,φ(O)} can be constructed from the height differences η and the height variable φ(O) at x = O by (1.19). We always assume η_0 ∈ χ for the initial data of (1.33).

Lemma 1.3.9 ([45, Lemma 9.1]).
1. The solution of (1.33) satisfies η_t ∈ χ for all t > 0.
2. If φ_t is a solution of (1.32), then η_t := η^{φ_t} is a solution of (1.33).
3. Conversely, let η_t be a solution of (1.33) and define φ_t(O) through (1.32) for x = O with ∇φ(b) replaced by η_t(b), with arbitrary initial condition φ_0(O) ∈ R. Then φ_t := φ^{η_t,φ_t(O)} is a solution of (1.32).

To discuss the existence and uniqueness of solutions to (1.33), we introduce weighted ℓ²-spaces on (Zd)*:

  ℓ²_{r,*} = {η ∈ R^{(Zd)*}; ‖η‖²_{r,*} = ∑_{b∈(Zd)*} η(b)² e^{−2r|x_b|} < ∞}   (1.34)

for r > 0. Let χ_r = χ ∩ ℓ²_{r,*}. If the second derivative of V is bounded both from above and below, which is the case for strictly convex potential functions or for the potential function (1.48) discussed later with α ≥ 1/2, this implies global Lipschitz continuity in χ_r, r > 0, of the drift term of the SDE (1.33). The following lemma about the existence and uniqueness of solutions to (1.33) is a standard consequence of successive approximations.

Lemma 1.3.10 ([46, Lemma 2.2]). If the first derivative of V is Lipschitz continuous, then for each η ∈ χ_r, r > 0, the SDE (1.33) has a unique χ_r-valued continuous solution η_t starting at η_0 = η.

The SDEs (1.32) are called the Ginzburg-Landau dynamics. In fact, for the stochastic process φ_t given by the SDEs (1.32), its stationary and reversible measure is given by the Gibbs measures µ. The next proposition states this result for the associated ∇ϕ-dynamics.

Proposition 1.3.11. Every shift invariant ∇ϕ-Gibbs measure is reversible under the dynamics η_t defined by the SDEs (1.33).

1.4 Potential function

1.4.1 Convex potentials

As we see from the definition of the Hamiltonian (1.5), the potential function V determines the Hamiltonian and thus the Gibbs measure. A well studied case is that of potential functions with strict convexity, namely for some c_−, c_+ > 0,
  c_− ≤ V″(η) ≤ c_+, η ∈ R.   (1.35)

A direct result of the strict convexity is the uniqueness of solutions of the SDEs (1.32), as it provides the Lipschitz continuity of the coefficient V′ in the drift term. The analysis of strictly convex potential functions in [45, 46] is based on three fundamental tools: the Helffer-Sjöstrand representation, the FKG (Fortuin-Kasteleyn-Ginibre) inequality and the Brascamp-Lieb inequality. The Helffer-Sjöstrand representation expresses the correlation functions under the Gibbs measures by means of a certain random walk in random environment. Its original idea comes from [54, 72]. This representation readily implies the FKG and Brascamp-Lieb inequalities. The latter is an inequality between the variances of non-Gaussian fields and those of Gaussian fields. Here is an introduction to the results for strictly convex potential functions.

Let the finite region Λ ⊂ Zd and the boundary condition ψ ∈ R^{Zd} be given. We shall consider a slightly more general Hamiltonian having an external field (chemical potential) ρ = {ρ(x); x ∈ Λ} ∈ R^Λ:

  H^{ψ,ρ}_Λ(φ) = H_Λ(φ∨ψ) − [ρ, φ]_Λ,   (1.36)

where [ρ,φ]_Λ = ∑_{x∈Λ} ρ_x φ_x, and the corresponding finite volume ϕ-Gibbs measure

  µ^{ψ,ρ}_Λ = (1/Z^{ψ,ρ}_Λ) e^{−H^{ψ,ρ}_Λ(φ)} dφ_Λ,   (1.37)

where Z^{ψ,ρ}_Λ is the normalization constant. For φ_t = {φ_t} the ϕ-dynamics defined by the SDEs (1.32) with µ-distributed initial data, define the time-inhomogeneous generator Q^{φ_t}_Λ by

  Q^φ_Λ f(x) = ∑_{b∈Λ*: y_b=x} V″(∇(φ∨ψ)(b)) ∇(f∨0)(b).   (1.38)

Let ∂⁻Λ = {x ∈ Λ, |x−y| = 1 for some y ∈ Λ^c} be the inside boundary of Λ and ∆ be an absorbing state. Then define X_t, t ≥ 0, the random walk on Λ ∪ {∆} with temporally inhomogeneous generator Q^{φ_t}_Λ and with killing rate ∑_{y∈∂⁺Λ: |x−y|=1} V″(φ_t(x) − ψ(y)) at x ∈ ∂⁻Λ. Note that the random walk X_t exists since its jump rate V″(∇(φ∨ψ)(b)) is positive by our assumption of strict convexity (1.35). The next theorem describes the correlation functions under the measure µ^{ψ,ρ}_Λ by means of the random walk X_t. Recall that for a measure µ, the covariance E^µ[F;G] of F = F(φ) and G = G(φ) is defined to be E^µ[F;G] = E^µ[FG] − E^µ[F] E^µ[G].

Theorem 1.4.1 ([45, Theorem 4.2, Helffer-Sjöstrand representation]). The correlation function of F = F(φ) and G = G(φ) under µ^{ψ,ρ}_Λ has the representation

  E^{µ^{ψ,ρ}_Λ}[F;G] = ∑_{x∈Λ} ∫_0^∞ E^{δ_x⊗µ^{ψ,ρ}_Λ}[∂F(x, φ_0) ∂G(X_t, φ_t), t < τ_Λ] dt.   (1.39)

On the right hand side, δ_x ⊗ µ^{ψ,ρ}_Λ indicates the initial distribution of (X_t, φ_t), and δ_x is defined by δ_x(z) = δ(z − x). In particular, the distribution of φ_0 is µ^{ψ,ρ}_Λ and the random walk X_t starts at x. τ_Λ = inf{t > 0; X_t ∈ Λ^c} is the exit time of X_t from Λ.

A function F = F(φ) on R^Λ is called increasing if it satisfies ∂F = ∂F(x,φ) ≥ 0, so that it is nondecreasing under the semi-order on R^Λ determined by φ¹ ≥ φ², φ¹, φ² ∈ R^Λ ⇔ φ¹(x) ≥ φ²(x) for every x ∈ Λ. Theorem 1.4.1 immediately implies the following inequality.

Corollary 1.4.2 ([45, Corollary 4.4, FKG inequality]). If F and G are both (L²-integrable) increasing functions, then we have

  E^{µ^{ψ,ρ}_Λ}[F;G] ≥ 0,   (1.40)

namely,

  E^{µ^{ψ,ρ}_Λ}[FG] ≥ E^{µ^{ψ,ρ}_Λ}[F] E^{µ^{ψ,ρ}_Λ}[G].   (1.41)

A bound on the variances under non-Gaussian ϕ-Gibbs measures by those under Gaussian ϕ-Gibbs measures is provided by the Brascamp-Lieb inequality. The original proof is in [19, 21], but here we cite the result from [45], based on the Helffer-Sjöstrand representation. To state the result, let µ^{ψ,G}_Λ be the Gaussian ϕ-Gibbs measure determined from the quadratic potential V*(η) = (1/2) c_− η², where c_− is the lower bound in condition (1.35).

Theorem 1.4.3 ([45, Theorem 4.8, Brascamp-Lieb inequality]). For every
v ∈ R^Λ, we have

  var([v, φ]_Λ; µ^{ψ,ρ}_Λ) ≤ var([v, φ]_Λ; µ^{ψ,G}_Λ).   (1.42)

Based on the above tools and the ϕ/∇ϕ-Gibbs measure dynamics, namely the SDEs (1.32) and (1.33), there are fruitful results about Gibbs measures with strictly convex potential, including the scaling limit [33, 49, 63, 67], Gibbs measures with constraints (pinning, wetting, etc.) [17, 18, 34, 56, 74] and so on. In particular, Funaki and Spohn [46] have shown that ergodic infinite-volume Gibbs measures are characterized by their tilt. For any translation invariant infinite volume ϕ-Gibbs measure µ, the tilt u ∈ Rd is defined by u = (E^µ[η(e_1)], E^µ[η(e_2)], ..., E^µ[η(e_d)]), where e_i is the unit vector in the i-th direction. Their result is as follows.

Theorem 1.4.4 ([46, Characterization of ∇ϕ-Gibbs measures]). For each u ∈ Rd, there exists a unique ergodic infinite-volume Gibbs measure µ^∇_u with tilt u.

Remark 1.4.5. By Theorem 1.3.6, the extreme set of translation invariant Gibbs measures is the same as the set of ergodic Gibbs measures. Thus the set of all translation invariant Gibbs measures is the convex hull of {µ^∇_u : u ∈ Rd}.

1.4.2 Non-convex potentials

If we relax the strict convexity condition on the potential, the above tools no longer work. In particular, in the Helffer-Sjöstrand representation, the random walk X_t has jump rate V″(∇(φ∨ψ)(b)), which may be negative in the non-convex case. Despite this, different methods have been developed for the study of non-convex potentials.

One direction in the study of non-convex potentials is to study perturbations of a strictly convex function. In [27] and [28], C. Cotar et al. give results for a certain class of non-convex V. They study a class of models with V admitting the representation

  V(t) = V_0(t) + g_0(t),   (1.43)

where V_0 satisfies (1.35) and g_0 ∈ C²(R) has a negative bounded second derivative. Their idea is to integrate out some of the variables, leading to a new V(x) which is convex. In [2], S. Adams et al. consider the case where the potential function is a perturbation of a quadratic function, i.e.

  V(η) = (1/2) η² + Ṽ(η),   (1.44)

with some perturbation Ṽ : R → R. They seek to identify uniform convexity properties for a class of lattice gradient models with non-convex microscopic interactions, and extend the rigorous renormalisation group techniques developed by Brydges and coworkers to models without a discrete rotational symmetry of the interaction.

The other direction is to introduce new randomness and represent the model as the marginal of a "larger" model. In [15, 16], Marek Biskup, Roman Kotecký and Herbert Spohn give results about a class of models with V admitting the representation

  e^{−V(t)} = ∫ ϱ(dκ) exp(−(1/2) κ t²),   (1.45)

where ϱ is a positive measure with compact support in (0,∞). This class of V is symmetric, but non-convex in general, for example when ϱ = p δ_a + (1−p) δ_b is a two-point measure. Their strategy is to write the measure as the marginal of a joint distribution for φ and κ, and then to apply the theory of random walk in random conductance, where κ is considered as the conductance variable. The relevant conclusion from [15] for the general theory is that the one-to-one correspondence between ergodic gradient Gibbs measures and their tilt breaks down once V is sufficiently non-convex. In [60, 73], continuous interfaces with disorder were first introduced and studied. [29] introduces two models, model A and model B, of gradient Gibbs measures with disorder, or random gradient states.
For model A, the Hamiltonian is

  H^ψ_Λ[ξ](φ) := (1/2) ∑_{x,y∈Λ, |x−y|=1} V(φ(x)−φ(y)) + ∑_{x∈Λ, y∈Λ^c, |x−y|=1} V(φ(x)−φ(y)) + ∑_{x∈Λ} ξ(x)φ(x),   (1.46)

where the random fields (ξ(x))_{x∈Zd} are assumed to be i.i.d. real-valued random variables with finite non-zero second moments. The disorder configuration (ξ(x))_{x∈Zd} denotes an arbitrary fixed configuration of external fields, modeling a quenched (or frozen) random environment. For model B, we define the Hamiltonian for each fixed ω ∈ R^{Zd} by

  H^ψ_Λ[ω](φ) := (1/2) ∑_{x,y∈Λ, |x−y|=1} V^ω_{(x,y)}(φ(x)−φ(y)) + ∑_{x∈Λ, y∈Λ^c, |x−y|=1} V^ω_{(x,y)}(φ(x)−φ(y)),   (1.47)

where V^ω_{(x,y)}(s) : (ω,s) ∈ R^{Zd} × R ↦ R is a random real-valued function defined for each edge (x,y). The uniqueness of gradient Gibbs measures with disorder is given in [30].

1.4.3 Sub-quadratic potentials

In this thesis, we discuss a special case with a non-convex potential function V, called a sub-quadratic potential. We focus our study on the class of potential functions given by

  V(∇φ) = [1 + (∇φ)²]^α   (1.48)

with 0 < α < 1/2. In this case V is not convex, while V is convex when α > 1/2. We will follow the strategy of Biskup and Spohn. To be specific, we write e^{−V} in the form (1.45) as

  ∫ e^{−κ[1+(∇φ)²]} f(κ) dκ = e^{−V(∇φ)},   (1.49)

and then the model can be represented as

  (1/Z) e^{−∑_{jk}[1+(φ_j−φ_k)²]e^{t_{jk}}} ∏_{jk∈E} f(e^{t_{jk}}) e^{t_{jk}} ∏_{i∈Λ} dφ_i ∏_{jk∈E} dt_{jk}.   (1.50)

Here we make the substitution κ ↦ e^t so that the density has the nice property of log concavity.

We are interested in this class of V for several reasons. Firstly, this class of measures has features in common with the massless free field, namely the Markov property and reflection positivity. Reflection positivity was first introduced in quantum field theory by Osterwalder and Schrader [66] and was developed in the late 1970s to establish phase transitions in classical and quantum lattice spin models [40-43]. With reflection positivity, there are generally two types of arguments one can use: the infrared bound and the chessboard estimates. The next lemma about the existence of the massive infinite volume measure is a direct result of reflection positivity.

Lemma 1.4.6. Let µ^p_{Λ,ε} be the massive Gibbs measure defined in (1.14) with V(η) = (1+η²)^α as in (1.48), and let G^p_ε be the set of all cluster points of {µ^p_{Λ,ε} | Λ = Z^d_N, N ≥ 3}. Then G^p_ε ≠ ∅.

Proof. This lemma is a direct corollary of Theorem 18.12 of [48], by checking that the potential function V along with the mass term satisfies the assumptions of a C-potential in Definition 17.18 of [48].

Secondly, when we follow Biskup's notation and write e^{−V} in the form (1.45), we find that the measure ϱ corresponding to our V is no longer compactly supported. The compact support of the measure ϱ gives a uniform bound on the auxiliary field κ. In the context of the random conductance model, the conductance is then bounded away from 0 and ∞. Thus two known results apply in that case: Kipnis and Varadhan's [59] invariance principle (i.e., scaling of the random walk to Brownian motion) and Delmotte and Deuschel's [32] annealed derivative heat kernel bounds. In the sub-quadratic case, when the conductance is no longer bounded, we need results for more general random conductance models. The required results are summarized in Appendix B.

Last but not least, when α = 1/2, as studied in [25], the Hamiltonian is similar to the one appearing in the measure describing linearly edge reinforced random walk.
Consider a finite graph G = (V,E) with |V| = N. Then the linearly edge reinforced random walk defines measures on (U_i)_{i=1}^N, which have the following density distribution on {(u_i) : ∑_i u_i = 0}:

  (1/(2π)^{(N−1)/2}) e^{u_{i_0}} e^{−H(W,u)} √(D(W,u)),   (1.51)

where

  H(W,u) = 2 ∑_{{i,j}∈E} W_{i,j} sinh²((1/2)(u_i − u_j))   (1.52)

and D(W,u) is any diagonal minor of the N×N matrix M(W,u) with coefficients m_{i,j} = W_{i,j} e^{t_i+t_j} if i ≠ j and = −∑_{k∈V} W_{i,k} e^{t_i+t_k} if i = j. Some surveys and recent results on this topic can be found in [6, 35, 61, 69].

Previous study of the sub-quadratic potential focused on a special value of α. Brydges and Spencer [25] studied the case α = 1/2. Let Λ = Zd/LZd be a torus and {j,k} be an unordered pair of nearest neighbor sites in Λ, denoted by jk. They are interested in the case where the model (1.14) has V(t) = 2√(1 + βt²/2), namely

  (1/Z_{Λ,ε}(β)) e^{−2∑_{jk∈E}√(1 + (1/2)β(φ_j−φ_k)²) − (1/2)ε∑_{j∈Λ}φ_j²} ∏_{j∈Λ} dφ_j,   (1.53)

where Z_{Λ,ε}(β) is the normalization.

In order to state their main result, let ⟨·⟩_{Λ,ε} be the expectation defined via (1.53), and [v;w] = ∑_j v_j w_j be the usual scalar product in R^Λ; we write [v;w] = v·w for short. Define the lattice Laplacian −∆_Λ with periodic boundary condition by

  [φ; −∆_Λ φ] = ∑_{j,k∈Λ, jk∈E} (φ_j − φ_k)²   (1.54)

for all φ : Λ → R satisfying the periodic boundary condition. However, −∆_Λ is not invertible, as 0 is an eigenvalue of −∆_Λ with eigenvector φ ≡ const on Λ. Thus we consider the domain D_p = {v ∈ R^Λ | v has compact support and [v; 1] = 0} for the test function v and let G_Λ = (−∆_Λ)^{-1} be the inverse of the lattice Laplacian on the domain D_p. The main results of Brydges and Spencer are as follows.

Proposition 1.4.7 ([25, Proposition 3]). ⟨(φ·v)^{2p}⟩_{Λ,ε} is bounded uniformly in Λ and ε > 0 by constants C̃(p,v), provided v ∈ D_p.

Theorem 1.4.8 ([25, Theorem 4]). If v ∈ D_p and if γ²[v; G_Λ v] < 1, then ⟨e^{γφ·v}⟩_{Λ,ε} is uniformly bounded in ε and Λ.

Chapter 2  Thermodynamic limit

2.1 Introduction and main result

2.1.1 Extended model and its marginals

Recall that we think of Zd as a graph with edges

  E = {{j,k} : j,k ∈ Zd, ‖j−k‖₂ = 1},   (2.1)

where ‖·‖₂ is the Euclidean norm. We use the notation jk = {j,k} for the undirected edges. For a finite subset Λ of Zd, let E(Λ) be the set of edges with at least one vertex in Λ, namely,

  E(Λ) = {jk ∈ E | jk ∩ Λ ≠ ∅}.   (2.2)

Let µ^p_{Λ,ε} be the massive finite volume Gibbs measure with periodic boundary condition defined in (1.14). By scaling φ → √β φ and ε → ε/β², we take β = 1 in the rest of the chapter and thus omit β from the notation. In this chapter we will study bounds for the finite moments of the massive finite volume measures for the case α < 1/2. The bounds for the moments of the finite volume measure will lead to the existence of the infinite volume measure. The basic method is to introduce an auxiliary field t_{jk} for each edge jk ∈ E(Λ) and represent the measure µ^p_{Λ,ε} as the marginal of a model with log-concave density.

Define a Gaussian action

  A(φ,t) = ∑_{jk} (1 + (φ_j−φ_k)²) e^{t_{jk}} + ε ∑_{j∈Λ} φ_j², β > 0, ε ≥ 0.   (2.3)

Let f_α(x) be the unique positive density such that

  ∫_0^∞ e^{−λx} f_α(x) dx = e^{−λ^α}.   (2.4)

Note that the existence, uniqueness and positivity of f_α(x) are proved in Chapter IX, Section 11 of [78]. Consider the measure µ̂_{Λ,ε} on R^Λ × R^{E(Λ)} defined by

  µ̂_{Λ,ε}(dφ, dt) := (1/Z_{Λ,ε}(α)) e^{−A(φ,t)} ∏_{jk∈E} (f_α(e^{t_{jk}}) e^{t_{jk}} dt_{jk}) ∏_{j∈Λ} dφ_j,   (2.5)

where the partition function Z_{Λ,ε}(α) is defined to be

  Z_{Λ,ε}(α) = ∫ e^{−A(φ,t)} ∏_{jk∈E} (f_α(e^{t_{jk}}) e^{t_{jk}} dt_{jk}) ∏_{j∈Λ} dφ_j   (2.6)
            = ∫ e^{−∑_{jk∈E}(1+(φ_j−φ_k)²)^α − ε∑_{j∈Λ}φ_j²} ∏_{j∈Λ} dφ_j.   (2.7)

Notice that by integrating all t_{jk} over R we get (2.7) from (2.6).
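As a brief numerical aside (not part of the original text), the defining identity (2.4) can be checked by simulation. The sketch below samples from f_α using Kanter's representation of the one-sided α-stable law, a standard construction assumed here rather than taken from the thesis, and compares a Monte Carlo estimate of ∫_0^∞ e^{−λx} f_α(x) dx with e^{−λ^α}; α and λ are arbitrary test values.

```python
import numpy as np

# Monte Carlo check of (2.4): for 0 < alpha < 1 the one-sided alpha-stable law
# with Laplace transform exp(-lambda**alpha) has density f_alpha. Kanter's
# representation (assumed, not from the thesis) samples it from U ~ Uniform(0, pi)
# and W ~ Exp(1).

rng = np.random.default_rng(1)

def sample_positive_stable(alpha, size, rng):
    u = rng.uniform(0.0, np.pi, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.sin(u) ** (1.0 / alpha)
            * (np.sin((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

alpha, lam = 0.4, 1.7
s = sample_positive_stable(alpha, 10**6, rng)
print("Monte Carlo estimate of the Laplace transform:", np.exp(-lam * s).mean())
print("exp(-lam**alpha)                             :", np.exp(-lam**alpha))
```

The two printed numbers should agree to within Monte Carlo error, which is a quick way to convince oneself that the auxiliary-field construction above integrates back to the sub-quadratic weight.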
Thus if both φ and t satisfy the periodic boundary condition, the φ marginal of µ̂_{Λ,ε} is µ_{Λ,ε}.

Recall that the lattice Laplacian −∆_Λ with periodic boundary condition is defined in (1.54) and that G_Λ is its inverse on the domain D_p. To state the t marginal of µ̂_{Λ,ε}, we define the symmetric finite difference operator D_{Λ,ε}(t) with periodic boundary condition by the quadratic form

  [f; D_{Λ,ε}(t) f] = ∑_{jk∈E, jk∩Λ≠∅} (f_j − f_k)² e^{t_{jk}} + ε ∑_{j∈Λ} f_j²   (2.8)

for all f : Λ → R satisfying the periodic boundary condition. The eigenvalues of D_{Λ,ε}(t) are positive because [f; D_{Λ,ε}(t) f] > 0 for f ≠ 0, and thus D_{Λ,ε}(t) is invertible on R^Λ. Let G_{Λ,ε}(t) = (D_{Λ,ε}(t))^{-1} be the Green's function. Notice that for a positive self-adjoint operator S on a finite-dimensional Euclidean space V, the formula

  1/√(det S) = ∫_V e^{−π⟨x, Sx⟩} dx   (2.9)

holds. Then by evaluating the integral over φ ∈ R^Λ in (2.5), we obtain the t marginal of µ̂_{Λ,ε}:

  µ̂_{Λ,ε}(dt) = (1/Z′_{Λ,ε}(α)) det[D_{Λ,ε}(t)]^{-1/2} ∏_{jk∈E} (exp(−e^{t_{jk}} + t_{jk}) f_α(e^{t_{jk}}) dt_{jk}),   (2.10)

where Z′_{Λ,ε} is the partition function for the t variables, defined by

  Z′_{Λ,ε}(α) = ∫ det[D_{Λ,ε}(t)]^{-1/2} ∏_{jk∈E} (exp(−e^{t_{jk}} + t_{jk}) f_α(e^{t_{jk}}) dt_{jk}).   (2.11)

Here we absorb the factors of π into the new partition function Z′_{Λ,ε}.

The operator D_{Λ,ε}(t) and its inverse G_{Λ,ε}(t) are the key elements in our analysis. The matrix tree theorem (see [1, Theorem 1]) expresses det D_{Λ,ε}(t) as a sum over weighted rooted forests, with each nearest neighbor edge (j,k) assigned a weight e^{t_{jk}} and each root a weight ε. Since det D_{Λ,ε}(t) is a positive superposition of exponentials, its logarithm is convex. This argument is given in [25] and it proves the following lemma.

Lemma 2.1.1 ([25, Lemma 1]). For all t, ln det[D_{Λ,ε}(t)] is a convex function of t.

2.1.2 Main result

The way we introduce the auxiliary field results in a log-concave density for the t marginal of µ̂_{Λ,ε}. More explicitly, define the action of the t variables as the negative of the logarithm of the unnormalized density of t, namely

  EA(t) := −log( det[D_{Λ,ε}(t)]^{-1/2} ∏_{jk∈E} e^{−e^{t_{jk}}+t_{jk}} f_α(e^{t_{jk}}) )   (2.12)
         = (1/2) ln det[D_{Λ,ε}(t)] + ∑_{jk∈E} (e^{t_{jk}} − ln f_α(e^{t_{jk}}) − t_{jk}).   (2.13)

By Lemma 2.1.1 and Corollary A.2.3, EA(t) is a convex function of t. Furthermore, we will prove that its Hessian is bounded from below by a diagonal matrix.

The convexity of the action EA(t) provides a Brascamp-Lieb bound for the second moment of the auxiliary field t. The Brascamp-Lieb bound provides a Gaussian domination of the second moment of a field with strictly convex action. The original reference for the Brascamp-Lieb inequality is as follows.

Theorem 2.1.2 ([20, Brascamp-Lieb bound]). Let F(φ) be a convex function on R^n, and let A be a real, positive definite, n×n matrix. Assume exp[−(φ,Aφ) − F(φ)] ∈ L¹ and define

  ⟨k⟩_F = ∫ k(φ) exp[−(φ,Aφ) − F(φ)] dφ / ∫ exp[−(φ,Aφ) − F(φ)] dφ.

If F(φ) ≡ 0 we write ⟨·⟩_0. Let v ∈ R^n, α ≥ 1. Then

  ⟨|v·φ − ⟨v·φ⟩_F|^α⟩_F ≤ ⟨|v·φ|^α⟩_0   (2.14)

when F is log concave. Furthermore, let f_{xx} be the Hessian matrix of F and M be the covariance matrix

  M_{ij} = ⟨φ_iφ_j⟩_F − ⟨φ_i⟩_F⟨φ_j⟩_F.   (2.15)

Then

  M ≤ ⟨(2A + f_{xx})^{-1}⟩_F ≤ (2A)^{-1}.   (2.16)

We are interested in bounds on the auxiliary field t as it has a deep connection with the φ field. Let α < 1/2 and let ⟨·⟩_{α,Λ,ε} denote the expectation in t and φ with respect to the measure (2.5). Conditional on the t field, the conditional distribution of the φ field is a massive Gaussian free field with covariance G_{Λ,ε}(t). Some computations with Gaussian integrals lead us to:

  ⟨(φ·v)^{2n}⟩_{α,Λ,ε} = ⟨(2n−1)!! ([v; G_{Λ,ε}(t) v]^n)⟩_{α,Λ,ε},   (2.17)
  ⟨e^{γφ·v}⟩_{α,Λ,ε} = ⟨e^{γ²[v; G_{Λ,ε}(t) v]/2}⟩_{α,Λ,ε}.   (2.18)
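As an illustration (not part of the original text) of the operator D_{Λ,ε}(t) defined in (2.8) and of Lemma 2.1.1, the sketch below assembles D_{Λ,ε}(t) on a small two-dimensional torus, checks positive definiteness, and checks the midpoint convexity of ln det D_{Λ,ε}(t) in the edge variables t; the torus size, ε and the random test points are arbitrary choices.

```python
import numpy as np

# Assemble D_{Lambda,eps}(t) of (2.8) on a small 2D torus: a graph Laplacian with
# edge weights exp(t_jk) plus eps times the identity, then test the convexity of
# log det asserted by Lemma 2.1.1 at a random pair of points.

rng = np.random.default_rng(2)
N, eps = 4, 0.1
sites = [(x, y) for x in range(N) for y in range(N)]
index = {s: i for i, s in enumerate(sites)}
edges = [((x, y), ((x + 1) % N, y)) for x, y in sites] + \
        [((x, y), (x, (y + 1) % N)) for x, y in sites]   # nearest-neighbour torus edges

def D_matrix(t):
    D = eps * np.eye(len(sites))
    for (a, b), t_ab in zip(edges, t):
        i, j, w = index[a], index[b], np.exp(t_ab)
        D[i, i] += w
        D[j, j] += w
        D[i, j] -= w
        D[j, i] -= w
    return D

t0 = rng.normal(size=len(edges))
t1 = rng.normal(size=len(edges))
logdet = lambda t: np.linalg.slogdet(D_matrix(t))[1]

print("positive definite:", bool(np.all(np.linalg.eigvalsh(D_matrix(t0)) > 0)))
print("midpoint convexity of log det:",
      bool(logdet(0.5 * (t0 + t1)) <= 0.5 * (logdet(t0) + logdet(t1)) + 1e-9))
```

A two-point midpoint test is of course not a proof, only a cheap consistency check on the assembled matrix; the quadratic form in `D_matrix` is exactly the right hand side of (2.8).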
Our first result is a bound on ⟨e^{λt}⟩ for all λ ∈ R. Notice that all these bounds are uniform in ε and Λ.

Proposition 2.1.3. Assume α ∈ (0, 1/2). For λ ∈ R, there is a constant C(λ,α) such that for all jk ∈ E,

  ⟨e^{λt_{jk}}⟩_{α,Λ,ε} ≤ C(λ,α).   (2.19)

From Proposition 2.1.3 and Lemma 2.2.6 below, we prove the bound for the finite order moments of the φ field, which is similar to Proposition 1.4.7. Recall that the Laplace operator with periodic boundary condition −∆_Λ is defined in (1.54) and that G_Λ is its inverse on D_p.

Theorem 2.1.4. Assume α ∈ (0, 1/2). If v ∈ D_p, then

  ⟨(φ·v)^{2n}⟩_{α,Λ,ε} ≤ C̃(α,n) [v; G_Λ v]^n.   (2.20)

Remark 2.1.5. The above theorem states the case 0 < α < 1/2. Brydges and Spencer proved the case α = 1/2 earlier in [25] (Theorem 1.4.8).

The bounds for the finite order moments provide tightness and furthermore give the existence of the infinite volume Gibbs measure.

Theorem 2.1.6 (Existence of infinite volume Gibbs measure). Assume α ∈ (0, 1/2).
1. For d ≥ 3, there exists a translation invariant, ergodic Gibbs measure µ on (Ω, F_{Zd}) such that the DLR equation (1.8) holds.
2. For d ≥ 1, there exists a translation invariant, ergodic Gibbs measure µ^∇ on (χ, F*_{Zd}) such that the DLR equation (1.26) holds.

Our second result, about ⟨e^{v·φ}⟩ for v ∈ D_0, shows that the behavior of models with α < 1/2 is quite different from that of α = 1/2.

Theorem 2.1.7. Assume α ∈ (0, 1/2). Let ⟨·⟩_µ be the expectation with respect to any infinite volume measure µ ∈ G. For all v ∈ D_0 with v ≠ 0, ⟨e^{v·φ}⟩_µ is infinite.

2.2 Bounds on the moments

In this section, we start the proof of Theorems 2.1.4 and 2.1.7 for the case α < 1/2. We will first prove Proposition 2.1.3 by a combination of a Ward identity and the Brascamp-Lieb bounds given in Theorem 2.1.2. Proposition 2.1.3 and the Hölder inequality give the result of Theorem 2.1.4. The proof of Theorem 2.1.7 is based on the DLR equation (1.11).

2.2.1 Bounds for the auxiliary field

We will now prove bounds on e^{λt_{jk}} for all λ ∈ R. We follow the idea of the proof of Proposition 3 in [25]. As all these bounds are uniform in ε and Λ, to shorten the notation we write ⟨·⟩_{α,Λ,ε} as ⟨·⟩.

Lemma 2.2.1. There exists a constant c(α) such that

  ⟨(t_{jk} − ⟨t_{jk}⟩)²⟩ ≤ c(α).   (2.21)

Furthermore, for all λ ∈ R,

  ⟨e^{λt_{jk}}⟩ ≤ e^{c(α)λ²/2} e^{λ⟨t_{jk}⟩}.   (2.22)

Proof. For λ ∈ R, let F_λ = e^{λt_{jk}} and q(λ) = ln⟨F_λ⟩. We bound q(λ) using Taylor's theorem to second order in λ. To do this, consider a λ-dependent measure ⟨·⟩_λ := ⟨·F_λ⟩/⟨F_λ⟩. Then

  q(0) = 0, q′(0) = ⟨t_{jk}⟩, q″(λ) = ⟨(t_{jk} − ⟨t_{jk}⟩_λ)²⟩_λ.   (2.23)

By the definition of ⟨·⟩_λ and (2.10), the density of t under ⟨·⟩_λ is given by

  (1/⟨F_λ⟩) det[D_{Λ,ε}(t)]^{-1/2} e^{λt_{jk}} ∏_{il∈E(Λ)} (exp(−e^{t_{il}} + t_{il}) f_α(e^{t_{il}}) dt_{il}).   (2.24)

Then, similarly to (2.12), the action in the t variables corresponding to ⟨·⟩_λ is

  EA_λ(t) = (1/2) ln det[D_{Λ,ε}(t)] − λt_{jk} + ∑_{il∈E(Λ)} (e^{t_{il}} − ln f_α(e^{t_{il}}) − t_{il}),   (2.25)

which is convex by Lemma 2.1.1 and Corollary A.2.3. In fact, the Hessian of (2.25) is bounded from below by the diagonal matrix H = (h_{jk,il})_{jk,il∈E(Λ)} given by

  inf_t (e^t − (d²/dt²) ln f_α(e^t)) δ_{jk,il} = h(α) δ_{jk,il}.   (2.26)

Furthermore, e^t − (d²/dt²) ln f_α(e^t) goes to +∞ both when t → −∞, by the second result of Proposition A.3.6, and when t → +∞, due to the term e^t. As a result, h(α) > 0. In (2.16), let A = H and F = EA_λ(t) − ∑_{il∈E(Λ)} h(α)t_{il}², and then for all λ ∈ R we have

  q″(λ) = ⟨(t_{jk} − ⟨t_{jk}⟩_λ)²⟩_λ ≤ h(α)^{-1}/2.   (2.27)

Let λ = 0 and c(α) = h(α)^{-1}/2; then we have

  ⟨(t_{jk} − ⟨t_{jk}⟩)²⟩ ≤ c(α).   (2.28)

By a Taylor expansion with the mean-value form of the remainder,

  q(λ) = q(0) + q′(0)λ + q″(ξ_λ)λ²/2   (2.29)

for some real number ξ_λ ∈ [0, λ].
Thus by (2.23) and (2.27), we have

  q(λ) ≤ λ⟨t_{jk}⟩ + c(α)λ²/2.   (2.30)

Then by the definition of q(λ),

  ⟨e^{λt_{jk}}⟩ ≤ e^{c(α)λ²/2} e^{λ⟨t_{jk}⟩}.   (2.31)

Given (2.22), to bound ⟨e^{λt_{jk}}⟩ we still need bounds on ⟨t_{jk}⟩. The proof of the bounds on ⟨t_{jk}⟩ is based on a Ward identity. The term Ward identity is used in theoretical physics to describe identities that arise by differentiating integrals with respect to a parameter that represents a continuous symmetry or approximate symmetry of the integrand. In our case the Ward identity is given by the following lemma.

Lemma 2.2.2. Let g(t) = (d/dt) ln f_α(e^t). Then

  ⟨g(t_{jk})⟩ = ⟨e^{t_{jk}}[1 + (1/2)(φ_j − φ_k)²]⟩ − 1.   (2.32)

Proof. Recall that the partition function Z_{Λ,ε}(α) is defined in (2.6) by

  Z_{Λ,ε}(α) = ∫ e^{−∑_{jk}(1+(φ_j−φ_k)²)e^{t_{jk}} − ε∑_{j∈Λ}φ_j²} ∏_{jk∈E} (f_α(e^{t_{jk}}) e^{t_{jk}} dt_{jk}) ∏_{j∈Λ} dφ_j.   (2.33)

Here we use a Ward identity generated by the change of variables

  t_{jk} → t_{jk} + b.   (2.34)

Since the partition function does not depend on the constant b, the derivative with respect to b evaluated at b = 0 vanishes; hence

  ⟨−e^{t_{jk}}[1 + (1/2)(φ_j − φ_k)²] + (d/dt_{jk}) ln f_α(e^{t_{jk}}) + 1⟩ = 0.   (2.35)

By the definition of g(t), we have (2.32).

We will derive both upper and lower bounds on ⟨t_{jk}⟩ from (2.32). The idea is to prove that ⟨t_{jk}⟩ satisfies inequalities whose solution sets are bounded from above and from below, respectively.

Lemma 2.2.3. There exists a constant C_u(α) such that

  ⟨t_{jk}⟩ ≤ C_u(α).   (2.36)

Proof. By the first result of Proposition A.3.6, there exist constants M and C such that when t < M, g(t) < C exp(αt/(α−1)) + C, and when t ≥ M, g(t) ≤ g(M). Then

  ⟨g(t_{jk})⟩ = ⟨g(t_{jk}) 1_{t_{jk}<M}⟩ + ⟨g(t_{jk}) 1_{t_{jk}≥M}⟩
            ≤ ⟨(C exp(αt_{jk}/(α−1)) + C) 1_{t_{jk}<M}⟩ + ⟨g(M) 1_{t_{jk}≥M}⟩
            ≤ ⟨C exp(αt_{jk}/(α−1)) + C⟩ + g(M).   (2.37)

By (2.22) with λ = α/(α−1), we have

  ⟨g(t_{jk})⟩ ≤ C + g(M) + e^{c(α)(α/(α−1))²/2} e^{(α/(α−1))⟨t_{jk}⟩}.   (2.38)

On the other hand, by Jensen's inequality,

  ⟨e^{t_{jk}}[1 + (1/2)(φ_j − φ_k)²]⟩ ≥ ⟨e^{t_{jk}}⟩ ≥ e^{⟨t_{jk}⟩}.   (2.39)

Combining (2.32), (2.38) and (2.39), we have

  e^{⟨t_{jk}⟩} + 1 ≤ C + g(M) + e^{c(α)(α/(α−1))²/2} e^{(α/(α−1))⟨t_{jk}⟩}.   (2.40)

As α < 1/2, α/(α−1) < 0. Thus the right hand side of (2.40) is decreasing with respect to ⟨t_{jk}⟩ while e^{⟨t_{jk}⟩} + 1 is increasing. Therefore, for some positive constant C_u(α),

  ⟨t_{jk}⟩ ≤ C_u(α).   (2.41)

Before moving to the lower bound, we first prove a formula regarding the operator D_{Λ,ε} and its inverse G_{Λ,ε}.

Lemma 2.2.4.

  [f; G_{Λ,ε}(t) f] = sup_ϕ (2[f; ϕ] − [ϕ; D_{Λ,ε}(t) ϕ]).   (2.42)

Proof. By the definition of D_{Λ,ε}(t) in (2.8), D_{Λ,ε}(t) is a self-adjoint, positive definite, invertible operator. Thus so is its inverse G_{Λ,ε}(t). As G_{Λ,ε}(t) is positive definite, for all f, ϕ ∈ R^Λ we have

  [f − D_{Λ,ε}(t)ϕ; G_{Λ,ε}(t)(f − D_{Λ,ε}(t)ϕ)] ≥ 0.   (2.43)

By the linearity of the inner product, we have

  [f; G_{Λ,ε}(t)f] − [f; ϕ] − [D_{Λ,ε}(t)ϕ; G_{Λ,ε}(t)f] + [ϕ; D_{Λ,ε}(t)ϕ] ≥ 0.   (2.44)

As D_{Λ,ε}(t) is a self-adjoint operator,

  [f; G_{Λ,ε}(t)f] ≥ 2[f; ϕ] − [ϕ; D_{Λ,ε}(t)ϕ].   (2.45)

As the above inequality holds for all ϕ ∈ R^Λ,

  [f; G_{Λ,ε}(t)f] ≥ sup_ϕ (2[f; ϕ] − [ϕ; D_{Λ,ε}(t)ϕ]).   (2.46)

Furthermore, in (2.43) the equality holds when f = D_{Λ,ε}(t)ϕ. Thus

  [f; G_{Λ,ε}(t)f] = sup_ϕ (2[f; ϕ] − [ϕ; D_{Λ,ε}(t)ϕ]).   (2.47)

Lemma 2.2.5. There exists a negative constant C_l(α) such that

  ⟨t_{jk}⟩ ≥ C_l(α).   (2.48)

Proof. Referring to (2.8), by (2.42) and [ϕ; D_{Λ,ε}(t)ϕ] ≥ βe^{t_{jk}}(ϕ_j − ϕ_k)², we have

  (φ_j − φ_k)² = [(δ_j − δ_k); G_{Λ,ε}(t)(δ_j − δ_k)]
              = sup_ϕ (2[(δ_j − δ_k); ϕ] − [ϕ; D_{Λ,ε}(t)ϕ])
              ≤ sup_ϕ (2(ϕ_j − ϕ_k) − βe^{t_{jk}}(ϕ_j − ϕ_k)²)
              = sup_ϕ ((1/β)e^{−t_{jk}} − βe^{t_{jk}}(ϕ_j − ϕ_k − (1/β)e^{−t_{jk}})²)
              = (1/β)e^{−t_{jk}}.   (2.49)

Thus if g(t) = (d/dt) ln f_α(e^t), then by (2.32)

  ⟨g(t_{jk})⟩ = ⟨e^{t_{jk}}⟩ + (1/2)⟨e^{t_{jk}}(φ_j − φ_k)²⟩ − 1 ≤ ⟨e^{t_{jk}}⟩ + 1/(2β) − 1.   (2.50)
(2.50)By Corollary A.3.3, g(tjk) is decreasing and bounded below by some negativeconstant K. For a fixed constant a > 0, when t < 〈tjk〉+a, g(t) ≥ g(〈tjk〉+a)by monotonicity, and t ≥ 〈tjk〉+ a, g(t) ≥ K by the lower bound. Thus〈g(tjk)〉 =〈g(tjk)1{tjk<〈tjk〉+a}〉+〈g(tjk)1{tjk≥〈tjk〉+a}〉≥〈g(〈tjk〉+ a)1{tjk<〈tjk〉+a}〉+〈K1{tjk≥〈tjk〉+a}〉≥ g(〈tjk〉+ a)〈1{tjk<〈tjk〉+a}〉+K≥ g(〈tjk〉+ a)P ({tjk < 〈tjk〉+ a}) +K. (2.51)By Chebyshev’s inequality,P ({tjk ≥ 〈tjk〉+ a}) ≤ P ({|tjk − 〈tjk〉 | ≥ a}) ≤ a−2Var(tjk). (2.52)By (2.21), Var(tjk) ≤ c(α), so P ({tjk ≥ 〈tjk〉+ a}) ≤ a−2c(α). Let a =√(c(α))/2, and then we haveP ({tjk < 〈tjk〉+ a}) > 1− a−2c(α) = 34. (2.53)362.2. Bounds on the momentsCombining this with (2.51), we have〈g(tjk)〉 ≥ 34g(〈tjk〉+√(c(α))2)+K. (2.54)Combine this with (2.50), and by (2.22) with λ = −1 we have34g(〈tjk〉+√(c(α))2)+K ≤ ec(α)/2e〈tjk〉. (2.55)By Corollary A.3.3, g(t)→∞ when t→ −∞. Therefore, for some negativeconstant Cl(α),〈tjk〉 ≥ Cl(α). (2.56)Proof of Proposition 2.1.3. For λ < 0, by (2.22) and Lemma 2.2.5, we have〈eλtjk〉≤ ec(α)λ2/2eλCl(α). (2.57)For λ > 0, by (2.22) and Lemma 2.2.3, we have〈eλtjk〉≤ ec(α)λ2/2eλCu(α). (2.58)Let C(λ, α) = max{ec(α)λ2/2eλCu(α), ec(α)λ2/2eλCl(α)}, and then we have〈eλtjk〉≤ C(λ, α) (2.59)2.2.2 Bounds for the φ fieldRecall that in 2.17, the finite moments of φ are represented by the expecta-tion over the t field〈(φ · v)2n〉α,Λ,= 〈(2n− 1)!!([v;GΛ,(t)v]n)〉α,Λ, . (2.60)372.2. Bounds on the momentsFurthermore, recall that the Laplace operator with periodic boundary con-dition −∆Λ is defined in (1.54) and that GΛ is its inverse in Dp. Lemma 2in [25] gives the following bound of [v;GΛ,(t)v] by GΛ.Lemma 2.2.6 ([25, Lemma 2]). Consider v ∈ Dp. Then the Green’s func-tion satisfies the quadratic form bound0 ≤ [v,GΛ,(t)v] ≤∑jk∈E(Λ)((GΛv)j − (GΛv)k)2e−tjk . (2.61)Then the proof of Theorem 2.1.4 is a combination of Proposition 2.1.3,(2.60) and Lemma 2.2.6.Proof of Theorem 2.1.4. By Lemma 2.2.6,[v;G(t)Λ,v]2n ≤ ∑ij∈E(Λ)((GΛv)i − (GΛv)j)2 e−tijn=∑i1j1,···injnn∏k=1((GΛv)ik − (GΛv)jk)2 e−tikjk . (2.62)By the Ho¨lder inequality〈exp(∑−tikjk)〉α,Λ,≤n∏k=1〈exp(−ntikjk)〉1/nα,Λ, . (2.63)By Proposition 2.1.3, 〈exp(−ntikjk)〉α,Λ, is bounded from above by C(n, α).Thus〈n∏k=1exp(−tikjk)〉α,Λ,≤n∏k=1〈exp(−ntikjk)〉1/nα,Λ, ≤ C(n, α). (2.64)382.2. Bounds on the momentsCombing this bound with (2.62), we have〈([v;G(t)v]2n)〉α,Λ,≤〈 ∑i1j1,···injnn∏k=1((GΛv)ik − (GΛv)jk)2 e−tikjk〉α,Λ,=∑i1j1,···injnn∏k=1((GΛv)ik − (GΛv)jk)2〈e−∑tikjk〉α,Λ,≤ C(n, α)∑i1j1,···injnn∏k=1((GΛv)ik − (GΛv)jk)2= C(n, α) ∑ij∈E(Λ)((GΛv)i − (GΛv)j)2n= C(n, α)[GΛv, (−∆Λ)GΛv]n= C(n, α)[v,GΛv]n. (2.65)Notice that in the last equality we use the fact that GΛ = (−∆Λ)−1.Let C˜(n, α) = (2n− 1)!!C(n, α). Thus by (2.60),〈(φ · v)2n〉α,Λ,≤ C˜(n, α)[v,GΛv]n. (2.66)This finishes the proof of Theorem 2.1.4.Now we will give the proof of the result about the exponential moment.The basic idea is as  → 0, the potential function grows sublinearly at ∞.As exp(v · φ) grows more rapidly than exp{−[1 + (∇φ)2]α} as ∇φ → ∞, itmakes the integral blow up at the end of infinity. More precise proof is givenby DLR equation. Notice that we assume the existence of infinite volumemeasures here, which will be proved in next section.Proof of Theorem 2.1.7. For any v ∈ Do, choose a finite set Λ ⊂ Zd s.t.supp(v) := {x ∈ Zd|vx 6= 0} ⊂ Λ. For µ ∈ G , by (1.11), µ(·|FΛc) =lim→0 µ(·|FΛc) in weak sense. Thus it suffices to prove lim→0µpΛ,β,(ev·φ) =∞. Choose j, k ∈ {x ∈ Zd|vx 6= 0}, and let Aj be the event that all φ’s392.3. 
Existence of infinite volume measureexcept φj are in [−1, 1], namelyAj = {φi ∈ [−1, 1], i ∈ Λ/ {j}} .Then we haveµpΛ,β,(ev·φ) ≥ µpΛ,β,(ev·φ1Aj ) ≥ CµpΛ,β,(evj(φj−φk)1Aj ), (2.67)where C = exp(−2|Λ| · ||v||∞). By (1.14),µpΛ,β,(evj(φj−φk)1Aj )=∫Aj1ZΛ,β,expvj (φj − φk)−∑jk(1 + (φj − φk)α)− ∑jφ2j∏j∈Λdφj .(2.68)As µpΛ,β,(evj(φj−φk)1Aj ) is increasing as  → 0, by monotone convergencetheorem, when → 0, the limit is∫Aj1ZΛ,βexpvj (φj − φk)−∑jk(1 + (φj − φk)α)∏j∈Λdφj . (2.69)The divergence of the integral is obvious as integrand goes to infinity in aset with infinite measure when φj − φk → +∞.2.3 Existence of infinite volume measureIn this section, we will prove the existence of infinite volume massless Gibbsmeasure. Notice that for each  > 0, the existence of the massive infinitevolume measure is given by Lemma 1.2.8. For massless case, the basicidea is to show the existence of the weak limit of a sequence of massiveinfinite volume measure by proving tightness. In this section, we will firstlyintroduce the definition of Green functions on Zd and give a quadratic form402.3. Existence of infinite volume measurebound of the Green function for Zd case. Then we will give the proof ofTheorem 2.1.6 based on this bound. Recall that for each  > 0, we denoteby G the set of all massive infinite volume Gibbs measures and Gp the setof all cluster points of µpΛ,β,. Then Gp ⊂ G by Lemma 1.2.8.2.3.1 Green’s functionIn (1.54) and (2.8), we give the definition of operators −∆Λ with periodicboundary condition and DΛ, on a torus Λ of Zd. Now we will review somebasic properties of −∆Λ and its inverse GΛ on Dp where Dp is the set ofcompactly supported vectors orthogonal to constant, namelyDp ={v ∈ RΛ| v compact support and [v; 1] = 0} .Then we will give the definition of the lattice Laplacian and the correspond-ing Green’s functions on Zd. Recall that we think of Zd as a graph withE being the set of edges. We write x ∼ y for x, y ∈ Zd if xy ∈ E. Fori = 1, 2, · · · , d, let ei be the unit vector in ith direction and E = {±ei, i =1, 2, · · · , d} be the set of all unit vectors from the origin.Let Λ = ZdN be a d-dimension torus. Consider the gradient operator onΛ = ZdN ∇Λ : RΛ → (Rd)Λ given by∇Λφ(x) = (∇φ(x, x+ e1),∇φ(x, x+ e2), · · · ,∇φ(x, x+ en)) (2.70)where ∇ is defined in (1.15) and ei is the unit vector of the ith direction.If the inner product on (Rd)Λ is given by [f, g] =∑x∈Λ f(x) · g(x) forf, g ∈ (Rd)Λ where f(x)·g(x) is usual dot product in Rd, then [φ, (−∆Λ)φ] =[∇Λφ,∇Λφ]. Furthermore, if the operator ∇∗Λ : (Rd)Λ → RΛ is defined by(∇∗Λ(f1, f2, · · · , fd)) (x) =d∑j=1(fj(x− ej)− fj(x)), (2.71)then (−∆Λ) = ∇∗Λ∇Λ. Notice that for each g ∈ (Rd)Λ, ∇∗Λg is orthogonalto constant.412.3. Existence of infinite volume measureRecall that N is the length of the sides of the torus Λ = ZdN . LetBN =2piN ZdN . The Fourier transform of the function f : Λ→ C is given byfˆ(k) =∑x∈Λf(x)e−ix·k (2.72)where k ∈ BN . For g = (g1, g2, · · · , gd) ∈ (Cd)Λ, define its Fourier transformby gˆ = (gˆ1, gˆ2, · · · , gˆd). Then for f ∈ RΛ and g ∈ (Rd)Λ, we have∇̂f(k) = (eik·e1 − 1, eik·e2 − 1, · · · , eik·ed − 1)fˆ(k), (2.73)and∇̂∗g(k) =d∑j=1(e−ik·ej − 1)gˆj(k). (2.74)As −∆Λ = ∇∗∇, thus−̂∆Λf(k) =∑e∈E(1− eik·e)fˆ(k) = 2d∑i=1(1− cos ki)fˆ(k) (2.75)and for GΛ being the inverse of −∆Λ in Dp,ĜΛf(k) =(2d∑i=1(1− cos ki))−1fˆ(k). (2.76)Consider A = ∇GΛ∇∗, and the next lemma shows that A is a boundedoperator on (Rd)Λ with the norm uniformly bounded with respect to Λ.Lemma 2.3.1. Let A = ∇GΛ∇∗. 
Then for g ∈ (Rd)Λ, [Ag,Ag] ≤ C[g, g]with C independent of Λ.Proof. Here we give the proof when d = 2. The proof for other dimensionsis based on a similar calculation.Now extend the definition of the inner product to (Cd)Λ by [f, g] =∑x∈Λ f¯(x) · g(x) for f, g ∈ (Cd)Λ with f¯ is the component-wise conjugate off . As the Fourier transform preserves the inner product, it suffice to prove[Âg, Âg] ≤ C[ĝ, ĝ].422.3. Existence of infinite volume measureWrite Gˆ = 1/(2∑2i=1(1− cos ki)). By (2.73), (2.74) and (2.76), wehaveÂg(k) = (eik·e1 − 1, eik·e2 − 1)Gˆ2∑j=1(e−ik·ej − 1)gˆj(k). (2.77)Then the first component of A1 := Âg isA1(k) = (eik·e1 − 1)Gˆ((e−ik·e1 − 1)gˆ1(k) + (e−ik·e2 − 1)gˆ2(k))= Gˆ(2(1− cos k1)gˆ1(k) + (eik·e1 − 1)(e−ik·e2 − 1)gˆ2(k)).(2.78)For a, b ∈ C, as 2(aa¯ + bb¯) − (a + b)(a¯ + b¯) = (a − b)(a¯ − b¯) ≥ 0, thus(a+ b)(a¯+ b¯) ≤ 2(aa¯+ bb¯). Then we haveA1A¯1(k) ≤ Gˆ2(4(1− cos k1)2gˆ1(k)gˆ1(k) + (1− cos k1)(1− cos k2)gˆ2(k)gˆ2(k)).(2.79)Notice thatGˆ2 = 1/(4(1− cos k1)2 + 4(1− cos k2)2 + 8(1− cos k1)(1− cos k2)).Thus Gˆ24(1− cos k1)2gˆ1(k) ≤ 1 and Gˆ2(1− cos k1)(1− cos k2) ≤ 1. ThusA1A¯1(k) ≤ gˆ1(k)gˆ1(k) + gˆ2(k)gˆ2(k). (2.80)Similarly, we haveA2A¯2(k) ≤ gˆ1(k)gˆ1(k) + gˆ2(k)gˆ2(k). (2.81)ThusA1A¯1(k) +A2A¯2(k) ≤ 2(gˆ1(k)gˆ1(k) + gˆ2(k)gˆ2(k)). (2.82)Summing over all k ∈ BN , we have [Âg, Âg] ≤ C[ĝ, ĝ] with C = 2 which isindependent with Λ.Now we start our discussion for the case that Λ = Zd. Let D0 be the432.3. Existence of infinite volume measuresubset of RZd representing the collection of all functions on Zd with compactsupport. For v ∈ D0, define the lattice Laplacian by(−∆dv)x =∑y:y∼x(vx − vy). (2.83)Similar to (1.54), for v ∈ D0, we have[v,−∆dv] =∑xy∈E(vy − vx)2. (2.84)Similar to the torus case, if ∇d is the lattice gradient given by∇dφ(x) = (∇φ(x, x+ e1),∇φ(x, x+ e2), · · · ,∇φ(x, x+ en)) (2.85)where∇ is defined in (1.15) and ei is the unit vector of the ith direction, then−∆d = ∇∗d∇d. Let B = (−pi, pi]d. The Fourier transform of a summablefunction f : Zd → C is given byfˆ(k) =∑x∈Zdf(x)e−ix·k (2.86)where k ∈ B and, if fˆ is integrable, then there is the inversion theorem,f(x) = (2pi)−d∫Bfˆ(k)eix·kdk. (2.87)By the proceeding definitions in (1.64) of [22], if we define∆̂d(k) =∑e∈E(1− eik·e), (2.88)then we have−̂∆df(k) = ∆̂d(k)fˆ(k). (2.89)We say ∆̂d is the Fourier transform of operator −∆d.Gd(x, y) is said to be a lattice Green function if Gd(x, y) is an inverse to442.3. Existence of infinite volume measurethe operator −∆d by demanding that G is a weak solution to(−∆d)Gd(x, y) = δ(x− y) (2.90)with δ(x − y) being the identity matrix. More explicitly, the definition ofweak solution means that G satisfies((−∆d)Gdv)x = vx (2.91)for all x ∈ Zd and v ∈ D0. It is well-known that Gd(x, y) <∞ if and only ifd ≥ 3. In fact, this can be seen from an explicit formula for Gd(x, y) by theinversion theorem of Fourier transform. By (3.8) in [45], we haveGd(x, y) = (2pi)−d∫B1∆̂(k)ei(x−y)·kdk = (2pi)−d∫Bei(x−y)·k2∑dj=1(1− cos θj)dθ.(2.92)The integral converges if and only if d ≥ 3.Similar to (2.8), define the symmetric difference operator Dd,(t) by thequadratic form[f ;Dd,(t)f ] =∑jk∈E(fj − fk)2etjk + ∑j∈Zdf2j (2.93)for all f ∈ D0. Let Gd,(t) = (Dd,(t))−1 be the Green’s function. The nextlemma states a quadratic form bound of [v;Gd,(t)v] by the lattice Green’sfunction GdLemma 2.3.2. For d ≥ 3, consider v ∈ D0. Then the Green’s functionGd,(t) satisfies the quadratic form bound0 ≤ [v,Gd,(t)v] ≤∑jk∈E((Gdv)j − (Gdv)k)2e−tjk . (2.94)Remark 2.3.3. 
Comparing to Lemma 2.2.6, this lemma differs in two as-pects. The first one is that the definition of Gd is the lattice Green’s function(2.90). This makes the sum on the right hand side a infinite sum over all452.3. Existence of infinite volume measureedges in E. The second one is the domain of the test function v. As thedimension d ≥ 3, the lattice Laplace ∆d is invertible in a relative largerdomain. Here v is required to be compact supported, but not orthogonal toconstant as in Lemma 2.2.6.Proof. By (2.84), G−1d = −∆d = ∇∗d∇d. Furthermore, Gd is symmetric bythe definition. Thus by integration by parts[v,Gd,(t)v] = [v,Gd(−∆d)Gd,(t)v] = [∇dGdv,∇dGd,(t)v]. (2.95)According to (2.93) we can write Dd(t) = ∇∗dA2∇d + I where A is thediagonal matrix whose entries on the diagonal are etjk/2 and I is the Identity.Thus by the Schwartz inequality followed by reversing the integration byparts,[v,Gd,(t)v] = [∇dGdv,∇dGd,(t)v] = [A−1∇dGdv,A∇dGd,(t)v]≤ [A−1∇dGdv,A−1∇dGdv]1/2[A∇dGd,(t)v,A∇dGd,(t)v]1/2= [A−2∇dGdv,∇dGdv]1/2[Gd,(t)v,∇∗dA2∇dGd,(t)v]1/2≤ [A−2∇dGdv,∇dGdv]1/2[Gd,(t)v, (∇∗dA2∇d + I)Gd,(t)v]1/2= [A−2∇dGdv,∇dGdv]1/2[v,Gd,(t)v]1/2, (2.96)which, after dividing by [v,Gd,(t)v]1/2, is the desired inequality.Now we introduce the extended infinite volume measure on RZd × RE .Recall that the finite volume extended measure µˆΛ, is given in (2.5). Forfixed , let Ĝ p be the set of cluster points of µˆΛ,. By the Remark 2.1 of [15],there is a one-to-one correspondence between the infinite volume measureon φ’s in G p and the infinite volume measure on (φ, t)’s in Gˆp . Explicitly,by (2.7) in [15], if µ ∈ G p , then the corresponding µˆ ∈ Gˆ p is defined by462.3. Existence of infinite volume measure(extending the consistent family of measures of the form)µˆ((φx, tjk)x∈Λ,jk∈E(Λ) ∈ A×B):=∫B∏jk∈E(Λ)fα(etjk)etjkdtjkEµ1A∏jk∈E(Λ)eV (φj−φk)−(φj−φk)2etjk . (2.97)Notice that V in the exponent cancels part of the interaction in the infinitevolume measure µˆ and then the integral over t in B will restore it by (2.4).On the other hand, the φ-marginal of µˆ gives us back µ. Furthermore, bydirect inspection of (2.97), conditional on t, the conditional distribution ofφ is a multivariate Gaussian law. Then by the property of the multivariateGaussian law, for v ∈ D0 we have〈(φ · v)2n〉µ= 〈(2n− 1)!!([v;Gd,(t)v]n)〉µ˜ (2.98)2.3.2 Existence of infinite volume measureAs described before, the strategy of the proof of existence is to show theexistence of the weak limit of a sequence of massive infinite volume measureby proving tightness. Tightness is the result of moment bound (2.19) andTheorem 2.1.4. We now give the proof of the first part of Theorem 2.1.6. Thecondition that the dimension d ≥ 3 results in a finite lattice Greens functionG(x, y) and furthermore provides a bound of the moment by (2.94).Proof of Theorem 2.1.6 (1). Recall the G is the set of all cluster points ofµpΛ,β,. By Lemma 1.2.8 and Lemma 1.4.6, Gp is non-empty and Gp ⊂ G.Consider a sequence {µn|µn ∈ G p1/n}. Then each µn is translation invariant.For each n, let µˆn be the extended Gibbs measure with respect to µn definedin (2.97). Let Pn and 〈·〉n be the probability and expectation with respectto µn respectively, and 〈·〉nˆ be the expectation with respect to µˆn. Wewill prove that there is a probability measure ν on {Ω,FZd} which is asubsequence limit of µn in weak sense.472.3. Existence of infinite volume measureTo show this, it suffices to show the tightness of µn. Introduce weighted`2 norm on Ω by‖φ‖2r =∑x∈Zdφ(x)2e−2r|x| (2.99)for r > 0. 
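As an illustrative aside (not part of the argument), the finiteness of the lattice Green's function G_d(0,0) for d ≥ 3, which the bound (2.102) below relies on, can be checked directly from the Fourier formula (2.92). The following Python sketch approximates that integral by a midpoint Riemann sum; numpy is the only dependency, the grid sizes are arbitrary choices, and the printed values are rough because of the integrable singularity of the integrand at k = 0.

    import numpy as np

    def green_at_origin(d, n):
        """Midpoint Riemann-sum approximation of
        G_d(0,0) = (2 pi)^{-d} * integral over B = (-pi, pi]^d of [2 sum_j (1 - cos k_j)]^{-1} dk,
        as in (2.92).  The midpoint grid never contains k = 0, so the
        integrable singularity there is skipped rather than evaluated."""
        h = 2 * np.pi / n
        k1 = -np.pi + (np.arange(n) + 0.5) * h           # midpoints in one coordinate
        grids = np.meshgrid(*([k1] * d), indexing="ij")  # d-dimensional grid over B
        denom = 2 * sum(1 - np.cos(k) for k in grids)    # Fourier symbol of -Delta, cf. (2.75)
        return (h ** d) * np.sum(1.0 / denom) / (2 * np.pi) ** d

    for n in (20, 40, 80):
        print("n=%3d   d=3: %.4f   d=2: %.4f"
              % (n, green_at_origin(3, n), green_at_origin(2, n)))
    # As n grows, the d = 3 column settles down (the integral converges), while
    # the d = 2 column keeps increasing (logarithmic divergence), matching the
    # statement that G_d(0,0) is finite if and only if d >= 3.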
By the proof of Proposition 3.3, for M > 0, KM = {φ ∈ Ω| ‖φ‖r ≤M} is a compact set in Ω. ThenPn(KcM ) = 〈1 {‖φ‖r > M}〉n≤〈‖φ‖2r /M2〉n=∑x∈Zd〈φ(x)2〉ne−2|x|/M2. (2.100)By translation invariance of µn,〈φ(x)2〉n=〈φ(0)2〉nfor all x ∈ Zd. Letv = δ0 and then by (2.94) and (2.98), we have〈φ20〉n= 〈[v,Gd,(t)v]〉nˆ ≤∑jk∈E((Gdv)j − (Gdv)k)2〈e−tjk〉nˆ. (2.101)By (2.19),〈e−tjk〉nˆis bounded above by constant C. As G−1d = −∆d =∇∗∇, we have〈φ20〉n≤ C∑jk∈E((Gdv)j − (Gdv)k)2 = C[v,Gdv] = CGd(0, 0). (2.102)As discussed in last section, Gd(0, 0) < ∞ if and only if d ≥ 3. Combinethis with (2.100), and we havePn(KcM ) ≤C ∑x∈Zde−2|x|/M2Gd(0, 0). (2.103)As the right-hand side of the inequality is independent with n, this gives thetightness of {µn}. Thus there exists a probability measure ν on {Ω,FZd}which is a subsequence limit of µn in weak sense.Now we will prove that there exists a Gibbs measure µ on {Ω,FZd} whichsatisfies all the requirements in Theorem 2.1.6 (1). As µn’s is translation482.3. Existence of infinite volume measureinvariant, thus so is ν. Also ν satisfies DLR equation (1.8) by Lemma 1.2.8.Let GΘ ⊂ G be the set of all translation invariant infinite volume Gibbsmeasures. Then GΘ is non-empty as ν ∈ GΘ. By Theorem 1.3.6, there existsan ergodic Gibbs measure µ ∈ G which is the extreme point of G as a convexset. Then µ satisfies all the requirements in Theorem 2.1.6 (1).In the above proof, we shows that the weak cluster point of a sequenceof massive ϕ-Gibbs measures is a massless ϕ-Gibbs measure by tightness,and the proof relies on the finiteness of Gd(0, 0) when d ≥ 3. To provethe existence of the ∇ϕ-Gibbs measure, we will consider the weak clusterof a sequence of finite volume measures, and Lemma 2.3.1 will provide usthe proper bound for the tightness. To state the proof, recall that GΛ =(−∆Λ)−1 be the inverse of the lattice Laplacian on the domain Dp. Letthe χ defined in (1.17) be the state space of the infinite volume ∇ϕ-Gibbsmeasure and P(χ) be the set of probability measure on χ. For any measureµ ∈ P(χ), let 〈·〉µ be the expectation with respect to the measure µ.Proof of Theorem 2.1.6 (2). For n ∈ N+, let Λn = Zdn be the lattice torusof size n. For a configuration η ∈ R(Zd)∗ , let ηΛn be the restriction insideΛn and η˜Λn be the periodic extension of ηΛn to (Zd)∗. For a finite volume∇ϕ-Gibbs measure with periodic boundary condition µ∇Λn,p defined in (1.22),One can regard it as a measure in P(χ) by consideringµ∇n (dη) = µ∇Λn,p(dηΛ)δη˜Λn (η), (2.104)namely we assign the configurations satisfying the periodic boundary con-dition the probability equal to its restriction inside Λn under µ∇Λn,pand theconfigurations not satisfying the periodic boundary condition 0 probabil-ity. This ”periodic extension” is also used in the proof of Theorem 4.15 in[45]. Let Pn be the probability with respect to µ∇n . As µ∇Λn,p is translationinvariant, µ∇n is also translation invariant. We will prove that there is aprobability measure ν which is a subsequence limit of µ∇n in weak sense.To show this, it suffices to show the tightness of {µ∇n }. Recall in (1.34),492.3. Existence of infinite volume measurethe weighted `2r norm is given by‖η‖22 =∑b∈(Zd)∗η(b)2e−2r|xb| (2.105)for r > 0. Then similar to the proof of Theorem 2.1.6, for M > 0, KM ={η ∈ χ| ‖η‖r ≤M} is a compact set. ThenPn(KcM ) = 〈1 {‖η‖r > M}〉µ∇n≤〈‖η‖2r /M2〉µ∇n=∑b〈η(b)2〉µ∇ne−2|xb|/M2. (2.106)Let µpΛn, defined in (1.14) be the massive Gibbs measure. Let E = {ei|i =1, 2, · · · , d} be the set of unit vectors in d directions. For b ∈ E , let vb =δ(yb)− δ(xb). 
Then by Lemma 1.2.9〈η2b〉µ∇n= lim→0〈(vb · φ)2〉µpΛn,. (2.107)By 2.17 and 2.61,〈(vb · φ)2〉µpΛn,≤∑jk∈E(Λn)((GΛnvb)j − (GΛnvb)k)2〈e−tjk〉µpΛn,(2.108). Furthermore, by Proposition 2.1.4, there exist constant C1 s.t.〈η2b〉µ∇n≤ C1∑jk∈E(Λn)((GΛnvb)j − (GΛnvb)k)2 = C1[∇nGΛnvb,∇nGΛnvb](2.109)where ∇n = ∇Λn defined in (2.70). Notice that there exists gb ∈ (Rd)Λn s.t.∇∗gb = vb. In fact, if b = ek, then gb is given by (gb)i(x) = 1 if i = k andx = 0, and = 0 otherwise. By Lemma 2.3.1, [∇nGΛn∇∗ngb,∇nGΛn∇∗ngb] ≤C2[gb, gb] = C2 for some C2 independent with n. As a result, for each b ∈ E ,〈η2b〉µ∇n≤ C1C2 where the bound is uniform in n.Furthermore, for any b ∈ (Zd)∗, by translation invariance and the pla-quette condition, if yb − xb = ei or yb − xb = −ei, then〈η2b〉µ∇n=〈η2ei〉µ∇n.502.3. Existence of infinite volume measureThen 〈η2b〉µ∇n≤ maxe∈E〈η2e〉µ∇n≤ C1C2. (2.110)Combining this with (2.106), we havePn(KcM ) ≤ C1C2∑be−2|xb|/M2 (2.111)This gives the tightness of {µ∇n }. Thus there exists a probability measureν ∈ P(χ) which is a subsequence limit of µ∇n in the weak sense.The rest of the proof (translation invariance and ergodicity) is the sameas that of Theorem 2.1.6 (1).51Chapter 3Random walk connection3.1 Introduction and main result3.1.1 Random walk in ZdConsider Zd as a graph with vertices V and edgesE ={{j, k} : j, k ∈ Zd, ||j − k||2 = 1}(3.1)where ||·||2 is the Euclidean norm. Let d be the natural graph distance on Zd,i.e. d(x, y) is the minimal length of a path between x and y. We denote byB(x, r) the closed ball with center x ∈ V and radius r, i.e. B(x, r) := {y ∈V |d(x, y) ≤ r}. We take `2(Zd) to be endowed by the counting measureand inner product 〈u, v〉 = ∑x∈Zd uxvx. In later contexts, we denote byδz(x) as the Kronecker delta function which equals one when x = z and zerootherwise. Here we will follow the terminology of random walk formulationin [13].A positive weight ω is a map from E → (0,∞). This also induces aconductance matrix by ω, that is for x, y ∈ V we set ω(x, y) = ω(y, x) =ω({x, y}) if {x, y} ∈ E and ω(x, y) = 0 otherwise. Also we write ωxy =ω(x, y). Let us further define measures uω and vω on V byuω(x) :=∑y∼xω(x, y) and vω(x) :=∑y∼x1ω(x, y). (3.2)Let Ωω = (0,∞)E be the set of all weights. We will henceforth denote by Pa probability measure on (Ωω,Fω) = ((0,∞)E ,B((0,∞))⊗E), and we writeE to denote the expectation with respect to P.523.1. Introduction and main resultA space shift by z ∈ Zd is the map τz : Ωω → Ωω(τzω)(x, y) := ω(x+ z, y + z), ∀{x, y} ∈ E. (3.3)The set {τx : x ∈ Zd} together with the operation τx ◦ τy := τx+y definesthe group of space shifts. Recall that we say the probability measure P istranslation invariant ifP ◦ τ−1x = P, x ∈ Zd. (3.4)For a translation invariant measure, it is said to be ergodic if P(A) ∈ {0, 1}for any event A with the property τ−1x (A) = A for all x ∈ Zd.For any fixed configuration ω ∈ Ωω, the random walk in environment ωis a discrete-time Markov chain with state-space Zd and transition kernelPω(x, y) :=ω(x, y)uω(x), x, y ∈ Zd. (3.5)Basically, the walk at site x chooses its next position y proportionally to thevalue of the conductance wxy. Note that for constant configuration ωxy ≡const, the above Markov chain reduces to the ordinary simple (symmetric)random walk. By checking the detailed balance conditionuω(x)Pω(x, y) = uω(y)Pω(y, x), (3.6)we find uω is a stationary and reversible measure for the Markov chain. LetXd = (Xdn) denote a sample path of the above Markov chain and let Pxωdenote the law of Xd subject to the initial conditionP xω (Xd0 = x) = 1. 
(3.7)Let P nω denote the n-th power of the transition kernel Pω, i.e.,P nω (x, y) = Pxω (Xdn = y). (3.8)Although the discrete-time Markov chain is quite natural, one is often533.1. Introduction and main resultinterested in a continuous-time version. There are two natural ways how tomake the time flow continuously. First, we Poissonize the discrete time andconsider transition kernelQtω(x, y) =∑n≥0tnn!e−tP nω (x, y). (3.9)The corresponding continuous-time Markov process, Y = {Yt : t ≥ 0},chooses the next position in the same way as the discrete-time one, whilethe jumps happen at an exponential time with the same rate 1 regardlessof the current position. Thus this process is referred to as constant-speedrandom walk among random conductances (CSRW). By the definition ofQ in (3.9) and detailed balance condition in (3.6), we find the CSRW isreversible with respect to uω.Notice that if we consider LωC := Pω − 1 as operator acting on boundedfunctions f : V → R defined by(LωCf)(x) =1u(x)∑y∼xω(x, y)(f(y)− f(x)), (3.10)then the transition density Qtω(x, y) admits the representationQtω(x, y) =〈δx, etLωCδy〉. (3.11)In this case the generator of the Markov chain is simply LωC .Another natural way how to make time flow continuously is by attach-ing a clock to each (x, y) that rings after exponential waiting time withexpectation 1/ωxy. Consider the generator LωX acting on bounded functionsf : V → R defined by(LωXf)(x) =∑y∼xω(x, y)(f(y)− f(x)). (3.12)This process is called variable speed random walk (VSRW), and has a wait-ing time at x an exponential time with mean 1/µω(x). We denote by Pωxthe law of the process starting at the vertex x ∈ V . The corresponding543.1. Introduction and main resultexpectation will be denoted by Eωx . For x, y ∈ B and t ≥ 0 let pω(t, x, y) bethe transition densities of X with respect to the counting measure (whichis the reversible measure of X), which are also known as the heat kernelsassociated with LωX , i.e.pω(t, x, y) := Pωx [Xt = y]. (3.13)For both CSRW and VSRW, we define the Green’s function by integrat-ing the heat kernel with respect to t from 0 to ∞, i.e.gωC(x, y) :=∫ ∞0qω(t, x, y)dt and gωV (x, y) :=∫ ∞0pω(t, x, y)dt. (3.14)By (5.29) of [8], the CSRW is a time change of the VSRW and thus theGreen function of both walks are the same, namelygωC(x, y) = gωV (x, y). (3.15)We denote by gω(x, y) the common Green function.Now we introduce random walks with killing. As in [24], consider thecontinuous-time Markov chain Xt defined as follows. The state space of Xtis Zd ∪ {∂}, where ∂ is an absorbing state called the cemetery. For anyconfiguration ω ∈ Ωω define measures uω on V byuω (x) :=∑y∼xω(x, y) + . (3.16)When X arrives at state x it waits for an Exp(uω (x)) holding time and thenjumps to y with probability pix,y = ωx,y/uω (x) and jumps to the cemeterywith probability /uω (x). The holding times are independent of each otherand of the jumps. Let ζ denote the time at which the process arrives in thecemetery. Then the generator of the stochastic process Xt∧ζ is LωX−I whereI is identity. Similar to VSRW, the heat kernel associated with LωX − I isdefined to bepω (t, x, y) := Pωx [Xt = y], (3.17)553.1. Introduction and main resultand the Green’s function gω (x, y) is defined to begω (x, y) :=∫ ∞0pω (t, x, y)dt. (3.18)Furthermore, if pω(t, x, y) is the heat kernel associated with LωX defined in(3.13), thengω (x, y) =∫ ∞0e−tpω(t, x, y)dt. (3.19)Remark 3.1.1. 
Although CSRW and VSRW share the same green function(3.14), the Green’s function of random walks with killing would be differentif we replace pω(t, x, y) by qω(t, x, y) in (3.18), as the variable speed randomwalk is more likely than the constant speed random walk to be killed in regionswhere the jump rate is rapid.3.1.2 Coupling to random conductance modelRecall that Ω = RZd is the space of fields on the vertices, while Ωω = (0,∞)Eis the set of all weights on edges. To state the coupling of the random con-ductance model and our model, we first extend the infinite volume measureto the space Ω × Ωω. To state the extension, let g(ω) : R+ 7→ R+ be thedensity that satisfiesexp(−(1 + βx2))α = ∫ ∞0exp(−12ωx2)g(ω)dω. (3.20)Let µ be an infinite volume Gibbs measure defined in (1.8) and M ⊂ Ebe a finite set of edges, consider the measure µ˜M on Ω× (R+)M defined byµ˜M (A×B) :=∫B∏e∈Mg(ωe)dωeEµ1A∏jk∈MeV (φj−φk)−12ωjk(φj−φk)2 ,(3.21)where A ∈ FZd and B ∈ B((R+)M ) are Borel sets and V (φj − φk) =(1 + (φj − φk)2)α. By (3.20), {µ˜M |M ⊂ E, |M | <∞} satisfies consistencyconditions and thus by Kolmogorov’s extension theorem, these exists a aunique measure µ˜ on Ω× Ωω with marginals {µ˜M |M ⊂ E, |M | <∞}. Fur-563.1. Introduction and main resultthermore, the φ marginal of µ˜ is µ. Following the terminology in [16], wecall µ˜ extended gradient Gibbs measure as it is in fact a Gibbs measure withHamiltonian 12ωjk(φj−φk)2. To simplify the notation, we regard the weightω’s as symmetric objects here, namelyωxy = ωyx, |x− y| = 1. (3.22)Remark 3.1.2. In the context of the Potts model, the so called Edwards-Sokal coupling measure plays the same role as the extended measure µ˜ [36].Furthermore, there is an one-to-one correspondence for the Edwards-Sokalmeasures [14, 51] similar to (3.21).The following lemma from [16] characterizes the properties of µ˜.Lemma 3.1.3 ([16, Lemma 3.2]). Let µ be a gradient Gibbs measure andlet µ˜ be its extension to Ω × Ωω. If µ is translation-invariant and ergodic,then so is µ˜.For the extended measure, we will prove later is that the variables onthe sites φ’s are distributed as gradients of a Gaussian field conditioned onthe fields on the edges ω’s. Its covariance is given the negative of the inverseof the operator LωX given in (3.12). Furthermore, the conditional Gaussianmeasure follows a random walk representation, with LωX being the generatorof VSRW. For the theory of random walk representation of Gaussian systemor more general spin model, one can refer to [23, 39].With a simple change of variables, we see that if g satisfies (3.20) andfα(x) is given by (2.4), theng(ω) =12βfα(ω2β)e− ω2β , (3.23)and that etjk has the same distribution with ωjk/2β in (3.21). We introducethe auxiliary field tjk in previous sections to make the action defined in(2.12) convex by Lemma 2.1.1 and properties of α-stable density. Here weintroduce a new auxiliary field to connect our model to random walk inrandom environment. Combining this observation with Proposition 2.1.3,573.1. Introduction and main resultwe have following corollary.Corollary 3.1.4. For any p ∈ R and e ∈ E, Eµ˜(ωp(e)) exists.3.1.3 Main resultsNow we will state our results of random walk representation of the extendedinfinite volume Gibbs measure. Recall that in (3.12), for each ω ∈ Ωω, wedefine the generator LωX for VSRW acting on bounded functions f : V → Rdefined by(LωXf)(x) =∑y∼xω(x, y)(f(y)− f(x)). (3.24)To state the proposition, we call a translation-invariant Gibbs measure µwith mean zero if Eµ(φ0) = 0.Proposition 3.1.5. 
Let µ be a translation-invariant, ergodic gradient Gibbsmeasure with mean zero and let µ˜ be its extension to Ω×Ωω. Consider theσ-field E := σ({ωb : b ∈ E}). For µ˜-a.e. ω, the conditional law µ˜(·|E )(ω),regarded as a measure on Ω, is Gaussian with constant mean,Eµ˜(φx|E )(ω) = 0, x ∈ Zd, (3.25)and covariance given by (−LωX)−1. Moreover, if gω(x, y) is the Green func-tion defined in (3.14), then the conditional two point correlations satisfyEµ˜(φx;φy|E )(ω) = gω(x, y). (3.26)Remark 3.1.6. Note that in the Helffer-Sjo¨strand random walk represen-tation in Theorem 1.4.1, random environment fluctuates in time, while, inthis representation, it is static.As we have seen in last proposition, the two point correlation functionof φ under infinite volume ϕ-Gibbs measure has a close connection withthe Green function gω(x, y). The next theorem states that the two-pointpoint correlation decays only algebraically (or in polynomial order). In thissense the field has long dependence. To state the proposition, for any Gibbs583.2. Proof of main resultsmeasure µ, let 〈φx;φy〉µ = Eµ (φxφy) − Eµ (φx)Eµ (φy) be the two pointcorrelation function.Theorem 3.1.7. Assume d ≥ 3. Let µ ∈ G be a translation-invariant,ergodic gradient Gibbs measure with mean zero defined in (1.8). Then thetwo-point correlation function is always positive and behaves like〈φx;φy〉µ ∼C|x− y|d−2 (3.27)as |x− y| → ∞ where ∼ means that the ratio of both sides converges to 1.This proposition, in particular, implies that one of the important ther-modynamic quantities called the compressibility diverges in massless model:∑x∈Zd〈φx;φy〉µ =∞. (3.28)Similar asymptotics for the two-point correlation function also hold for mass-less Gaussian system (See Proposition 3.5 of [45]).3.2 Proof of main results3.2.1 Connection to random conductance modelGiven a configuration ω ∈ Ωω, recall that LωX is the generator for variablespeed random walk corresponding to the weight ω defined in (3.12). Follow-ing the terminology in [16], we say a function g : Ωω ×Zd → R satisfies shiftcovariance property ifg(ω, x+ e)− g(ω, x) = g(τxω, e), (3.29)where x ∈ Zd and e is one of the unit vectors from the origin, andg(ω, 0) = 0; (3.30)593.2. Proof of main resultswe say g is harmonic with respect to LωX , or harmonic for short, ifLωXg(ω, ·) = 0, P− a.s. ω. (3.31)The next lemma, which is similar to Lemma 3.3 in [16], shows that harmonic,shift-covariant functions are uniquely determined by their expectation withrespect to ergodic measures on the conductances.Lemma 3.2.1. Let µ be a translation-invariant, ergodic probability measureon configurations ω = (ωb) ∈ Ωω. Let g : ΩEω × Zd → R be a measurablefunction which is:1. harmonic in the sense of (3.31), µ-a.s.;2. shift-covariant in the sense of (3.29) and (3.30), µ-a.s.;3. square integrable for each component in the sense that Eµ˜|g(ω, x)|2 <∞ for all x with |x| = 1;4. square integrable as a vector field Eµ˜(∑|x|=1 ω0x|g(ω, x)|2)<∞.If Eµ(g(·, x)) = 0 for all x with |x| = 1, then g(·, x) = 0 a.s. for all x ∈ Zd.Compared with [16, Lemma 3.3], we make two changes in the assump-tion: relax the condition of uniformly elliptic for conductance, namely µ( ≤ωb ≤ 1/) = 1 for some  > 0; add condition 4. In fact, in the proof of [16,Lemma 3.3], the condition of uniformly elliptic for conductance is used onlyto prove a equivalence of condition 3 and 4. 
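Before turning to the proofs, here is a small numerical illustration, not part of the thesis argument, of the two statements above: conditional on the conductances the field is Gaussian with covariance given by the Green function of the VSRW (Proposition 3.1.5), and that Green function decays algebraically (Theorem 3.1.7). The sketch assumes numpy; the torus side, the seed and the lognormal conductance law are arbitrary stand-ins — in particular the lognormal law replaces the actual mixture law g(ω) of (3.20), which is not implemented here — so on a small torus only the qualitative decay is visible.

    import numpy as np

    rng = np.random.default_rng(1)
    n, d = 10, 3                                  # small torus Z_10^3; d >= 3 as in Theorem 3.1.7
    sites = np.arange(n ** d).reshape((n,) * d)

    # Generator L^omega of the VSRW, cf. (3.12), with i.i.d. conductances.
    L = np.zeros((n ** d, n ** d))
    for axis in range(d):
        nb = np.roll(sites, -1, axis=axis)        # nearest neighbour in the +e_axis direction
        w = rng.lognormal(sigma=0.5, size=sites.shape).ravel()
        i, j = sites.ravel(), nb.ravel()
        L[i, j] += w; L[j, i] += w                # off-diagonal entries:  omega_xy
        L[i, i] -= w; L[j, j] -= w                # diagonal entries:     -sum_y omega_xy

    # Conditional on omega, phi is Gaussian with covariance (-L^omega)^{-1}; on the
    # torus we take the pseudo-inverse (the inverse on mean-zero functions) as the
    # finite-volume stand-in for the Green function g^omega(x, y) of (3.26).
    G = np.linalg.pinv(-L)

    o = sites[0, 0, 0]
    for r in range(1, 5):
        print("r = %d   g(0, r e_1) ~ %.4f" % (r, G[o, sites[r, 0, 0]]))
    # The printed values should decrease with r; the precise C |x|^{2-d} law of
    # Theorem 3.1.7 is an infinite-volume statement, so a small torus only shows
    # the slow, algebraic character of the decay.

The pseudo-inverse is used instead of (−L^ω + εI)^{-1} because, for small ε, the constant mode would contribute a term of order (ε n^d)^{-1} to every entry and mask the decay on a small torus.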
We defer the proof, and fur-ther discussion of the consequences of shift covariance and harmonicity, toSection B.2 of Appendix B.Now we will give the proof of Proposition 3.1.5.Proof of Proposition 3.1.5. By (3.21), the conditional measure is a multi-variate Gaussian law. Gaussian measures are characterized by the mean andthe covariance. (3.21) indicates that the covariance is given by (−LωX)−1.To identify the mean, define u : Ωω × Zd → R byu(ω, x) = Eµ˜(φx − φ0|E )(ω). (3.32)603.2. Proof of main resultsWe claim that u satisfies all conditions in Lemma 3.2.1. First, to prove u isharmonic in the sense of (3.31), considerLωXu(ω, x) = Eµ˜ ∑y:|y−x|=1ωxy(φy − φx)∣∣∣∣∣∣E (ω). (3.33)Using the fact that the µ˜ is Gibbs, conditional on σ(φy; y 6= x) the conditionmeasure µ{x} is Gaussian with the explicit formµ{x}(dφx) =1Zexp−12φ2x ∑y:|y−x|=1ωxy + φx∑y:|y−x|=1ωxyφy (3.34)where Z is an appropriate normalization constant. By change of variablesφx → φx + a and differentiating with respect to a at a = 0, we have themean of φx∑y:|y−x|=1wxy under µ{x} is exactly∑y:|y−x|=1wxyφy, proving thatLωXu(ω, x) = 0.Next, we observe that the translation invariance of µ˜ implies thatu(τxω, b)− u(τxω, 0) = Eµ˜(φb − φ0|E )(τxω)= Eµ˜(φx+b − φb|E )(ω)= u(ω, x+ b)− u(ω, x) (3.35)and so u is shift-covariant, as defined in (3.29) and (3.30).Thirdly, by Theorem 2.1.4, for p = 1 and 2Eµ˜((φx − φ0)2p)<∞. (3.36)ThenEµ˜|u(ω, x)|2 = Eµ˜ (Eµ˜(φx − φ0|E ))2≤ Eµ˜(Eµ˜((φx − φ0)2|E))= Eµ˜((φx − φ0)2)<∞,613.2. Proof of main resultswhere the second line is given by Jensen inequality with respect to condi-tional expectation. This gives the square integrability for each component.As for the square integrability as a vector field, by Corollary 3.1.4, we haveEµ˜(ω20x) <∞. ThenEµ˜∑|x|=1ω0x|g(ω, x)|2 = ∑|x|=1Eµ˜ω0x|g(ω, x)|2≤∑|x|=1√Eµ˜ω20xEµ˜|g(ω, x)|4=∑|x|=1√Eµ˜ω20xEµ˜ (Eµ˜(φx − φ0|E ))4≤∑|x|=1√Eµ˜ω20xEµ˜ (Eµ˜((φx − φ0)4|E ))=∑|x|=1√Eµ˜ω20xEµ˜ ((φx − φ0)4) <∞.(3.37)This gives the square integrability as a vector field.Finally, the definition of u and the fact that µ˜ is translation invariantimply thatEµ˜(u(·, x)) = Eµ˜(φx − φ0) = 0. (3.38)As u obeys all conditions of Lemma 3.2.1, we have Eµ˜(φx − φ0|E )(ω) = 0,µ˜− a.s, namelyEµ˜(φx|E )(ω) = Eµ˜(φ0|E )(ω), µ˜− a.s (3.39)Furthermore, consider f(ω) = Eµ˜(φ0|E )(ω) as a function of ω. Then forx ∈ Zdf(τxω) = Eµ˜(φ0|E )(τxω) = Eµ˜(φx|E )(ω) = f(ω). (3.40)By ergodicity of µ˜, f(ω) is constant and thusEµ˜(φ0|E )(ω) = Eµ˜ (Eµ˜(φ0|E )(ω)) = Eµ˜(φ0) = 0. (3.41)623.2. Proof of main results3.2.2 Two point correlationAs the key element of estimation the two point correlation, previous resultsabout random walk in random environment are summarized in AppendixB. Especially, we will need heat kernel estimation Theorem B.1.3 and localcentral limit theorem B.1.10. Before giving the proof of Theorem 3.1.7, weneed to check Assumption B.1.3 and B.1.4 holds for µ˜−a.s. ω to apply thesetheorems. Notice that these assumptions are also useful in the later proofof scaling limit.Lemma 3.2.2. Assumption B.1.3 and B.1.4 holds for µ˜− a.s. ω.Proof. Here we give the proof of Assumption B.1.3. Notice that for chemi-cal distance dω defined in (B.16), dω(x, y) < d(x, y) and therefore B(x, r) ⊂B˜(x, r). Thus the proof of Assumption B.1.4 is the same as that of Assump-tion B.1.3.By Lemma 3.1.3, µ˜ is translation-invariant and ergodic. By Corollary3.1.4, Eµ˜(ωp) exist for all p ∈ R. By the ergodicity of µ˜, for µ˜− a.s. 
ωu¯ωp (x) := lim supn→∞‖uω‖p,B(x,n)≤ lim supn→∞2d∑i=1‖ω(y, y + ei)‖p,B(x,n)=2d∑i=1Eµ˜(ω(0, ei)p)1/p, (3.42)where ei is the unit vector at ith direction. Note that the second inequalityis due to Minkowski inequality. Similarly we havev¯ωq (x) := lim supn→∞‖vω‖q,B(x,n) ≤2d∑i=1Eµ˜(ω(0, ei)−q)1/q. (3.43)Notice that u¯ωp (x) and v¯ωq (x) are uniformly bounded with respect to x, sou¯ωp := supx∈V u˜ωp (x) and v¯ωq := supx∈V v˜ωq (x) exist. As p and q are arbitrary633.2. Proof of main resultshere, it certainly satisfies 1p +1q ≤ 2d . Thus Assumption B.1.3 holds forµ˜− a.s. ω.Recall that the heat kernels of CSRW associated with LωX is defined byqω(t, x, y) := Pωx [Yt = y]/uω(y) (3.44)and thatkt(x) =1√(2pit)d det Σ2exp(−x · (Σ2)−1 x/2t) (3.45)for the Gaussian heat kernel with covariance matrix Σ2. The Green functiongω(x, y) with respect to CSRW is given bygω(x, y) =∫ ∞0qω(t, x, y)dt. (3.46)Also define the Green function of a Brownian motion with covariance matrixΣ2 bygBM (x) =∫ ∞0kt(x)dt. (3.47)For x ∈ Rd write bxc = (bx1c, bx2c, · · · , bxdc) and write ‖x‖1 =∑i |xi| asits `1 norm. Now we gives the proof of Theorem 3.1.7.Proof of Theorem 3.1.7. By the translation invariance of µ, it suffices tocalculate 〈φ0φz〉µ. By Proposition 3.1.5, conditional on the environment ω,the conditional two point function can be written asEµ˜(φ0;φz|E )(ω) = gω(0, z). (3.48)Our basic strategy is to establish a local limit theorem for the Green functionand to show that for any x ∈ Rd/{0}, gω(0, bnxc) decays algebraically in thespeed of n2−d. By the definition of the Green functions in (3.46), we have∫ ∞0ndqω(n2t, 0, bnxc) = nd−2gω(0, bnxc). (3.49)The following proof is based on the analysis of ndqω(n2t, 0, bnxc). First we643.2. Proof of main resultswill give the uniform bounds on both end of the integral in (3.49). Then wewill establish the decay of the green function by the local limit theorem inTheorem B.1.10.Step 1: bound of the integral near infinity Now we fix the conduc-tance ω ∈ Ωω and x ∈ Rd with ‖x‖1 > 2d. By Lemma 3.2.2 and CorollaryB.1.5, there exists constants C and T s.t. for√t ≥ T and all v ∈ V ,qω(t, 0, v) ≤ Ct−d/2. (3.50)Thenqω(n2t, 0, v) ≤ Cn−dt−d/2. (3.51)Thus for each fixed  there exists T2 such that∫ ∞T2ndqω(n2t, 0, bnxc)dt ≤ 8and∫ ∞T2akt(x)dt ≤ 8(3.52)where a = 1/Eµ˜[uω(0)]. Notice that all the bounds are uniform in x and n.Step 2: bound of the integral near 0 By Lemma 3.2.2 and TheoremB.1.3, there exist constants ci and N such that for any given t with√t ≥ Nand all v ∈ V , if d(0, v) ≤ c1t thenqω(t, 0, v) ≤ c2t−d/2 exp(−c3d(0, v)2/t). (3.53)and if d(0, v) ≥ c1t thenqω(t, 0, v) ≤ c2t−d/2 exp(−c4d(0, v)(1 ∨ ln(d(0, v)/t))). (3.54)For N defined as above, consider t small so that‖x‖13N√t> 1 and log(3N√t‖x‖1)+12< 0 and ‖x‖1 −12√t >12‖x‖1 .(3.55)653.2. Proof of main resultsWe will show in the range of t satisfying (3.55), ndqω(n2t, 0, nx) is boundedfrom above uniformly with respect to n.Now fix t satisfying (3.55). First we consider the case n ≥ N/√t andn ≥ ‖x‖1 /(c1t). In this case√n2t ≥ N and d(0, bnxc) ≤ c1n2t. Thenapplying (3.53) to ndqω(n2t, 0, bnxc), we havendqω(n2t, 0, bnxc) ≤ c2t−d/2 exp(−c3 ‖x‖21t). (3.56)The second case is that n ≥ N/√t and n ≤ ‖x‖1 /(c1t). Under thisassumption,√n2t ≥ N and d(0, bnxc) ≥ c1n2t. Then applying (3.54) tondqω(n2t, 0, bnxc), we havendqω(n2t, 0, bnxc) ≤ c2t−d/2 exp(−c4n ‖x‖1 (1 ∨ ln(‖x‖1 /nt))). (3.57)Combining this with the facts that n ≤ ‖x‖1 /c1t and that n ≥ N/√t, wehavendqω(n2t, 0, bnxc) ≤ c2t−d/2 exp(−c4(1 ∨ ln(c1))‖x‖1N√t). 
(3.58)At last, we consider the case that n ≤ N/√t. By Proposition B.1.8, wehavendqω(n2t, 0, bnxc) ≤ CPH‖uω‖1,B(0,n√t/2)2dt−d/2Pωbnxc[Y3n2t/2 ∈ B(0,12n√t)].(3.59)As ‖uω‖1,B(0,n√t/2) admits a finite positive limit E(uω) as n → ∞ by er-godicity of the environment, thus ‖uω‖1,B(0,n√t/2) is bounded from below bysome constant C for all n. Thusndqω(n2t, 0, bnxc) ≤ CPHC2dt−d/2Pωbnxc[Y3n2t/2 ∈ B(0,12n√t)]. (3.60)Let Nt be the number of steps Yt moving up to time t. By the definitionof CSRW Yt, which has an exponential waiting time of mean one at eachpoint, Nt follows a Poisson distribution with parameter t. For the CRSW663.2. Proof of main resultsYt starting at bnxc, to reach the ball B(0, 12n√t) at time 3n2t/2, it requiresat least d(0, bnxc)− n√t/2 moves. ThenPωbnxc[Y3n2t/2 ∈ B(0,12n√t)]≤ P(N 32n2t ≥ d(0, bnxc)−12n√t)≤ P(N 32n2t ≥12n ‖x‖1),where the second inequality comes from the third condition in (3.55). Forany λ > 0,P(N 32n2t ≥12n ‖x‖1)= P(exp(λN 32n2t)≥ exp(12λn ‖x‖1))≤ exp(−12λn ‖x‖1)E(exp(λN 32n2t))= exp(−12λn ‖x‖1 −32n2t+32n2teλ).The minimum on the bound is attained by taking λ = log (‖x‖1 /3nt). No-tice that log (‖x‖1 /3nt) > 0 due to the assumption of n ≤ N/√t and thefirst condition of (3.55). ThenPωbnxc[Y3n2t/2 ∈ B(0,12n√t)]≤ exp(log(3nt‖x‖1)· 12n ‖x‖1 −32nt2 +12n ‖x‖1)≤ exp(log(3nt‖x‖1)· 12n ‖x‖1 +12n ‖x‖1)(as n ≤ N/√t) ≤ exp(log(3N√t‖x‖1)· 12n ‖x‖1 +12n ‖x‖1)= exp(12n ‖x‖1(log(3N√t‖x‖1)+ 1)).By the second condition of (3.55), we have log(3N√t‖x‖1)+ 1 < 0. Notice that673.2. Proof of main resultsn ≥ 1, and then we havePωbnxc[Y3n2t/2 ∈ B(0,12n√t)]≤ exp(12‖x‖1(log(3N√t‖x‖1)+ 1))= td(0,x)/4 exp(12‖x‖1(log(3N‖x‖1)+ 1)).Combing above bound with (3.60), we havendqω(n2t, 0, bnxc) ≤ CPHC2dt(‖x‖1−2d)/4 exp(12‖x‖1(log(3N‖x‖1)+ 1)).(3.61)As ‖x‖1 > 2d, (‖x‖1 − 2d)/4 > 0.By (3.56), (3.58) and (3.61), if t satisfy (3.55), we havendqω(n2t, 0, bnxc) ≤ CPHC2dt(‖x‖1−2d)/4 exp(12‖x‖1(log(3N‖x‖1)+ 1))+ c2t−d/2 exp(−c3 ‖x‖21t)+ c2t−d/2 exp(−c4(1 ∨ ln(c1))‖x‖1N√t). (3.62)As the right hand side is an integrable function about t in the neighborhoodof 0, there exist T1 such that∫ T10ndqω(n2t, 0, bnxc)dt ≤ 8and∫ T10akt(x)dt ≤ 8. (3.63)Notice that these bounds are uniform in any finite set of x with ‖x‖1 > 2d.Step 3: decay of the green function By Theorem B.1.10,limn→∞ supt∈[T1,T2]∣∣∣ndqω(n2t, 0, bnxc)− akt(x)∣∣∣ = 0, P− a.s. (3.64)683.2. Proof of main resultsThen ∃N0 s.t. when n ≥ N0,∫ T2T1∣∣∣ndqω(n2t, 0, bnxc)− akt(x)∣∣∣ < 2. (3.65)Thus ∫ ∞0∣∣∣ndqω(n2t, 0, bnxc)− akt(x)∣∣∣≤∫ T2T1∣∣∣ndqω(n2t, 0, bnxc)− akt(x)∣∣∣+∫ T10ndqω(n2t, 0, bnxc)dt+∫ T10akt(x)dt+∫ ∞T2ndqω(n2t, 0, bnxc)dt+∫ ∞T2akt(x)dt<2+8+8+8+8< . (3.66)Thuslimn→∞∣∣∣nd−2gω(0, bnxc)− agBM (x)∣∣∣ = 0. (3.67)Notice that this convergence holds for a.s. ω and is uniform in any compactset of x with ‖x‖1 > 2d.For z ∈ Zd, let nz = b‖z‖ /4dc and z0 = z/nz where ‖·‖ is the `2 normof z. Thengω(0, z) = gω(0, nzz0)nd−2z · n2−dz . (3.68)As ‖z‖ → ∞, nz ≈ ‖z‖ /4d → ∞ and ‖z0‖ → 4d, so ‖z0‖1 > 2d when‖z‖ large. Notice that for gBM defined in (3.47), gBM (x) = gBM (‖x‖).Combining this fact with (3.67), we havegω(0, nzz0)nd−2z ∼ gBM (4d). (3.69)Thusgω(0, z) ∼ gBM (4d)n2−dz ∼C‖z‖d−2 . (3.70)By translation invariance of µ and (3.48), we get the result of Theorem 3.1.7.693.2. Proof of main resultsRemark 3.2.3. In [5], a similar local central limit theorem for the Greenfunction is given in Theorem 1.14. However they give only the strategy ofthe proof. 
Here we provide the details of the proof, especially the boundson the transition densities by the Harnack inequalities and the heat kernelestimations, as a complement of [5].70Chapter 4Scaling limit4.1 Introduction and main results4.1.1 Gaussian free fieldThe d-dimensional Gaussian free field (GFF) is a natural d-dimensional-timeanalog of Brownian motion. Like Brownian motion, it is a simple randomobject of widespread application and great intrinsic beauty. It plays animportant role in statistical physics and the theory of random surfaces. Itis also a starting point for many constructions in quantum field theory [39].As a standard definition, a Gaussian or normal random variable is a real-valued random variable X with characteristic function E(eitX) = eiµt−σ2t2/2for some µ ∈ (−∞,∞) and σ2 ≥ 0. Note that we include the degeneratecase σ = 0, when the variable X a.s. equals µ. The variable is centered orsymmetric if µ = 0. Then we take the following definition about GaussianHilbert space from [58].Definition 4.1.1. A Gaussian linear space is a real linear space of ran-dom variables, defined on an arbitrary probability space (Ω,F , µ), such thateach variable in the space is a centered Gaussian. A Gaussian Hilbert spaceis a Gaussian linear space which is complete, i.e., a closed subspace ofL2R(Ω,F , µ), consisting of centered Gaussian variables, which inherits thestandard L2R(Ω,F , µ) inner product: (X,Y ) =∫XY dµ. We also assumethat F is the smallest σ-algebra in which these random variables are mea-surable.Before giving the definition of Gaussian free field, we introduce the indexset. Let ∆ denote the Laplace differential operator in Rd and consider the714.1. Introduction and main resultssetH0 := {∆g : g ∈ C∞0 (Rd)}. (4.1)If (·, ·) is the usual inner product in L2-space, the set H0 is endowed with anatural quadratic form f → (f, f) + (f,∆f), defined as(∆g,∆g) + (∆g −∆−1∆g) =∫Rddx(|∆g|2 + |∇g|2). (4.2)Then we define the norm on H0‖f‖H := [(f, f) + (f,−∆−1f)]1/2 (4.3)Let H be the completion of H0 in this norm.Remark 4.1.1. Let fˆ be the Fourier transform of f . Then by Parseval’stheorem,‖f‖2H = (f, f) + (f,−∆−1f) =∥∥∥∥(1 + 14piξ−2)1/2fˆ(ξ)∥∥∥∥22. (4.4)Thus H is in fact the Sobolev space W−1,2(Rd).With the space H, we will give the following definition of a Gaussianfree field via Gaussian Hilbert spaces. Note that if X1, ..., Xn are any realrandom variables with the property that all linear combinations of theXj arecentered Gaussians, then the joint law of the Xj is completely determined bythe covariances Cov[Xj , Xk] = E(XjXk), and it is a linear transformationof the standard normal distribution. A similar statement holds for infinitecollections of random variables [58]. Then we give the definition of theGaussian free field based on the variance structure.Definition 4.1.2. We say that a Gaussian Hilbert space {ψ(f) : f ∈ H} ofrandom variables on a probability space (Ω,F ,P) is a Gaussian free field ifthe map f → ψ(f) is linear a.s. and each ψ(f) is Gaussian with mean zeroand varianceE(ψ(f)2) = (f,Q−1f), (4.5)724.1. Introduction and main resultswhere Q−1 is the inverse of the operatorQf :=d∑i,j=1qij∂2∂i∂jf, (4.6)with (qij) denoting some positive semidefinite, non-degenerate, d×d matrix.Remark 4.1.2. By the identity (a, b) = 12 [(a+ b, a+ b)− (a, a)− (b, b)], forf, g ∈ H, E((ψ(f)ψ(g)) = (f,−∆−1g)). By integration by parts, (f,−∆−1g)) =(∇f,∇g). In most context of Gaussian free field, this quantity is called theDirichlet energy of f . 
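As a concrete, purely illustrative counterpart of this definition, the following sketch samples a mean-zero discrete Gaussian free field on a torus, with the unit-weight lattice Laplacian playing the role of Q, and checks a Monte Carlo estimate of Var ψ(f) against (f, (−∆)^{-1}f) for a mean-zero test function f. It assumes numpy; the torus size, sample count, seed and dipole test function are arbitrary choices, and the discrete torus setting is only an analogue of the continuum Definition 4.1.2.

    import numpy as np

    rng = np.random.default_rng(2)
    n, d, n_samples = 16, 3, 4000
    shape = (n,) * d

    # Eigenvalues of the unit-weight lattice Laplacian -Delta on the torus:
    # 2 * sum_j (1 - cos(2 pi k_j / n)), cf. (2.75).
    k = 2 * np.pi * np.arange(n) / n
    lam = sum(2 * (1 - np.cos(kk)) for kk in np.meshgrid(*([k] * d), indexing="ij"))
    lam[(0,) * d] = np.inf                        # drop the zero mode (constants)

    def sample_gff():
        """One sample of the mean-zero discrete GFF: Gaussian Fourier modes with
        variance 1/lambda_k, transformed back to real space."""
        noise = rng.standard_normal(shape)
        return np.real(np.fft.ifftn(np.fft.fftn(noise) / np.sqrt(lam)))

    # A mean-zero test function (a dipole), so the pseudo-inverse pairing is defined.
    f = np.zeros(shape); f[0, 0, 0], f[4, 0, 0] = 1.0, -1.0

    # Predicted variance (f, (-Delta)^{-1} f), computed in Fourier space.
    fhat = np.fft.fftn(f)
    predicted = np.sum(np.abs(fhat) ** 2 / lam) / n ** d

    samples = np.array([np.sum(sample_gff() * f) for _ in range(n_samples)])
    print("empirical Var psi(f): %.3f   predicted (f,(-Delta)^{-1}f): %.3f"
          % (samples.var(), predicted))

The Fourier-space construction is just one convenient square root of the covariance; any Gaussian family with covariance (−∆)^{-1} on mean-zero test functions would do.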
See Sections 2.1 and 2.4 of [70] for more on the Dirichlet energy.

4.1.2 Scaling limits

As we have observed, in statistical mechanics there are at least two different scales: a macroscopic and a microscopic one. The procedures connecting microscopic models with macroscopic phenomena are realized by taking scaling limits. The scaling parameter N ∈ N represents the ratio of the macroscopically typical length (e.g., 1 cm) to the microscopic one (e.g., 1 nm); it is finite, but turns out to be quite large (N = 10^7 in this example). The physical phenomena can be mathematically understood only by taking the limit N → ∞. The dynamics involves scalings in time as well: within a macroscopic unit length of time, the molecules collide with each other with tremendous frequency. Since microscopic models such as the Ising model and the ϕ-interface model involve randomness, the limit procedure N → ∞ can be formulated in the framework of classical limit theorems in probability theory.

The principal idea behind these limit theorems is that, by the ergodic or mixing properties of the microscopic systems, the microscopic (physical) quantities are locally averaged, or homogenized, on the macroscopic scale under the scaling limit. The macroscopic observables are obtained under such averaging effects. However, the ∇ϕ interface model which we shall discuss has only an extremely weak mixing property, and this sometimes makes the analysis of the model difficult. For instance, the thermodynamic quantities may diverge under the usual scaling. This suggests the necessity of introducing scalings different from the usual one of the central limit theorem in order to obtain a nontrivial limit.

For an explicit discussion of the proper scaling, we first assume d ≥ 3 and consider a translation invariant, ergodic ϕ-Gibbs measure with mean 0. Let N be the ratio of the typical lengths at the macroscopic and microscopic levels. Then the point θ = (θ_i)_{i=1}^d ∈ R^d at the macroscopic level corresponds to the lattice point ⌊Nθ⌋ := (⌊Nθ_i⌋)_{i=1}^d ∈ Z^d at the microscopic level. If x ∈ Z^d is close to ⌊Nθ⌋, in the sense that |x − ⌊Nθ⌋| ≪ N, then x also macroscopically corresponds to θ. This means that observing the random field φ at the macroscopic point θ is equivalent to taking its sample mean around the microscopic point ⌊Nθ⌋. Such averaging yields a cancellation in the fluctuations of φ.

Motivated by these observations, let us consider the fluctuation of the ϕ-field over the microscopic region Λ_N = (−N, N]^d ∩ Z^d, which corresponds to the macroscopic box D = (−1, 1]^d, under the usual scaling of the CLT:

    Φ̃_N = (2N)^{-d/2} Σ_{x ∈ Λ_N} φ(x).   (4.7)

But Theorem 3.1.7 actually implies that

    E_µ[Φ̃_N^2] = 2^{-d} N^{-d} Σ_{x,y ∈ Λ_N} G(x − y) ≈ C N^{-d} Σ_{x,y ∈ Λ_N} |x − y|^{2-d} ≈ C N^2 → ∞  as N → ∞.   (4.8)

Therefore, Φ̃_N does not give the right scaling, and we should scale it down and consider

    Φ_N = N^{-1} Φ̃_N = (2N)^{-d/2-1} Σ_{x ∈ Λ_N} φ(x).   (4.9)

This actually coincides with the interpretation stated in Sect. 1.2: φ = {φ(x); x ∈ Z^d} represents the height of an interface embedded in (d+1)-dimensional space, so that both the x- and the φ-axes should be rescaled by the factor 1/N at the same time.

Remark 4.1.3. In d = 1 and for general potentials V, the increments η_b are in fact i.i.d. and the scaling limit follows from the standard central limit theorem. See Remark 2.5(2) of [16].

If we introduce the random signed measure on R^d given by

    Φ_N(dθ) = (2N)^{-d/2-1} Σ_{x ∈ Z^d} φ(x) δ_{x/N}(dθ),  θ ∈ R^d,   (4.10)

then Φ_N in (4.9) is represented as Φ_N = 2^{-d/2} ⟨Φ_N(·), 1_{Λ_N}⟩, where ⟨Φ_N(·), f⟩ stands for the integral of f = f(θ) with respect to the measure Φ_N(·).
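The divergence in (4.8) and the corrected scaling (4.9) can also be seen numerically. The following sketch (illustrative only) evaluates 2^{-d} N^{-d} Σ_{x,y ∈ Λ_N} G(x−y) with the Green function replaced by its asymptotic kernel |x−y|^{2-d} from Theorem 3.1.7; the constant and the diagonal term are handled crudely, which does not affect the growth rate. Only numpy is assumed, and the range of N is an arbitrary choice.

    import numpy as np

    d = 3

    def var_naive(N):
        # 2^{-d} N^{-d} * sum_{x,y in Lambda_N} G(x-y), with G replaced by the
        # asymptotic kernel |x-y|^{2-d} of Theorem 3.1.7 (and by 1 on the
        # diagonal); constants do not affect the growth rate in (4.8).
        axes = np.arange(-N + 1, N + 1)                        # Lambda_N = (-N, N]^d
        pts = np.stack(np.meshgrid(*([axes] * d), indexing="ij"), -1).reshape(-1, d)
        diff = pts[:, None, :] - pts[None, :, :]
        dist = np.sqrt((diff ** 2).sum(-1))
        kernel = np.ones_like(dist)
        kernel[dist > 0] = dist[dist > 0] ** (2 - d)
        return kernel.sum() / (2 * N) ** d

    for N in (2, 3, 4, 5, 6):
        v = var_naive(N)
        print("N=%d   Var(tilde Phi_N) ~ %.2f   ratio to N^2: %.3f" % (N, v, v / N ** 2))
    # The first column grows roughly like N^2 while the last column varies far
    # less, so the CLT normalization (4.7) blows up and the extra factor N^{-1}
    # in (4.9) is what keeps the fluctuation field of order one.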
In this way,studying the limit of ΦN is reduced to investigating more general problemfor the properly scaled empirical measures of φ.4.1.3 Main resultsMotivated by (4.10), we will interpret samples from gradient Gibbs measuresas random linear functionals on an appropriate space of functions. LetC∞0 (Rd) denote the set of all infinitely differentiable functions f : Rd 7→ Rwith compact support. For any function f ∈ C∞0 (Rd), we introduce therandom linear functional f 7→ φ(f) given byφ(f) :=∫dx f(x)φbxc, (4.11)where f satisfies ∫dx f(x) = 0. (4.12)Notice f satisfies (4.12) if f ∈ H0. The following lemma provides regularityestimates so that φ can be extended to a linear functional on H.Lemma 4.1.4. let d ≥ 3 and µ be a translation-invariant, ergodic gradientGibbs measure defined in (1.8). There then exists a constant c ∈ ∞ such754.1. Introduction and main resultsthat for each f ∈ H0,‖φ(f)‖L2(µ) ≤ c ‖f‖H . (4.13)In particular, φ is L2 continuous and thus extends to a random linear func-tional φ : H → R.In this chapter, the main goal is to show that the family of randomvariables {φ(f) : f ∈ H} has, asymptotically, in the scaling limit, the law ofa linear transformation of a Gaussian free field. To define to the scaling limitof the random variable, we first consider the scaling of the test functions onthe scale −1. For  > 0 and a function f : Rd → R, letf(x) := d/2+1f(x). (4.14)Note that the scaling will not increase the H-norm defined in (4.3). Indeed,if f ∈ H and  < 1, then‖f(x)‖2H = (f(x), f(x)) + (f(x),−∆−1f(x))= 2(f, f) + (f,−∆−1f) ≤ ‖f‖2H .The scaling of random functional φ on the scale −1, φ, is defined to be φoperating on the scaling of the test function, namely,φ(f) := φ(f(x)). (4.15)Then ‖φ(f)‖L2(µ) = ‖φ(f)‖L2(µ) ≤ c ‖f‖H ≤ c ‖f‖H, and thus φ can beextended to H. Our main result is as follows.Theorem 4.1.5. Let d ≥ 3, α < 1/2, and µ be a translation-invariant,ergodic gradient Gibbs measure defined in (1.8). Then for every f ∈ H.lim→0Eµ(eiφ(f)) = exp{12∫dxf(x)(Q−1f)(x)}, (4.16)764.1. Introduction and main resultswhere Q−1 is the inverse of the operatorQf :=d∑i,j=1qij∂2∂i∂jf, (4.17)with (qij) denoting some positive semidefinite, non-degenerate, d× d matrixdepending only on α and d.Remark 4.1.6.1. In (4.14), the individual φs get scaled by (−d/2+1) due to the discussionin section 4.1.2;2. The restriction of the dimension that d > 3 is due to two reason. Onthe one hand, the estimation in Corollary 4.2.2 requires d ≥ 3 dueto the heat kernel estimation we use (Corollary B.1.5). On the otherhand, here we state the theorem in the context of φ-Gibbs measure, ofwhich the existence require d ≥ 3 (Theorem 2.1.6). Notice that thelatter problem can be fixed by stating the theorem in the context of∇φ-Gibbs measure as Theorem 2.4 in [16];3. The covariance matrix (qij) is not necessarily a constant multiple ofidentity matrix I as in general µ is not guaranteed to be invariant underthe permutation of the coordinates, namely the mapping Iij : Rd 7→ Rdwith 1 ≤ i < j ≤ d define byIij : (x1, x2, · · · , xi, · · · , xj , · · · , xd) 7→ (x1, x2, · · · , xj , · · · , xi, · · · , xd).(4.18)It is easy to check that invariance under the permutation of the co-ordinates does not hold for all finite volume measures, for example,finite volume measures on rectangular cuboid. With our existence re-sult Theorem 2.1.6, we cannot determine whether µ is invariant underthe permutation of the coordinates. One way to prove this property forµ is to prove the uniqueness of µ, which is still an open problem forthis model.774.2. 
Regularity estimates4.2 Regularity estimatesNow we will give the proof of Lemma 4.1.4. which implies that the randomoperator φ given in (4.11) is L2 continuous. With Lemma 4.1.4, we needonly to work with smooth and compactly supported test function.Proof of Lemma 4.1.4. For any f ∈ H0, define vf ∈ RZd byvf (x) =∫dyf(y)1byc=x. (4.19)Then if vf ·φ =∑vf (x)φ(x) denotes the usual scaler product, vf ·φ = φ(f).Also vf is compact supported and orthogonal to constant as f ∈ H0Let ∆d be the lattice Laplacian defined in (2.93). Then by (2.66) in ,‖φ(f)‖L2(µ) = Eµ((vf ·φ)2) ≤ C1[vf ,−∆−1d vf ]. By (4.4) in [16], [vf ,−∆−1d vf ] ≤C2 ‖f‖H. These two give us (4.13).For convenience of notation, whenever R is an operator on `2(Zd), wewill extend it to an operator on L2(Rd) via formula(f,Rf) :=∫dxdyf(x)f(y)R(bxc, byc), (4.20)with R(x, y) is the kernel of R in the canonical basis in `2(Zd). Next we willprove that the tail of (f, etLωXf) is bounded by some integrable function.Lemma 4.2.1. Let µ be a translation-invariant, ergodic gradient Gibbs mea-sure defined in (1.8) and µ˜ is the extension of µ to RZd×RE defined in (3.21).Then there exist N(ω) for µ˜−a.s. ω such that if f ∈ C∞0 (Rd) and t > N(ω)(f, etLωXf) ≤ C ‖f‖2∞ λ(supp f)21td/2(4.21)where λ(A) is the Lebesgue measure of set A.Proof. Recall that X = {Xt : t ≥ 0} is a reversible continuous time Markovchain with generator LωX and that pω(t, x, y) be the transition densities ofX with respect to the reversible measure or the heat kernels associated with784.2. Regularity estimatesLωX , i.e. pω(t, x, y) = Pωx [Xt = y]. Then(f, etLωXf)=∑x,y∈Zd∫[0,1]ddz∫[0,1]ddz′f(x+ z)f(y + z′)pω(t, x, y). (4.22)For any x ∈ supp f , by Lemma 3.2.2 and Corollary B.1.5, there existsconstant C and N(x, ω) s.t. for√t ≥ N(x, ω) and all y ∈ V ,pω(t, x, y) ≤ Ct−d/2. (4.23)Choose N(ω) = maxx∈supp f{N2(x, ω)}, and then when t ≥ N(x, ω)(f, etLωXf)≤∑x,y∈Zd∫[0,1]ddz∫[0,1]ddz′f(x+ z)f(y + z′)Ct−d/2≤∑x,y∈Zd∫[0,1]ddz∫[0,1]ddz′1(x+z∈suppf)1(y+z′∈suppf) ‖f‖2∞Ct−d/2= C ‖f‖2∞ λ(supp f)21td/2, (4.24)in which λ(supp f) by our definition is the Lebesgue measure of supp fCorollary 4.2.2. Let µ be a translation-invariant, ergodic gradient Gibbsmeasure defined in (1.8) and µ˜ is the extension of µ to RZd ×RE defined in(3.21). Then for µ˜− a.s. ω, if f ∈ C∞0 (Rd),limM→∞sup0<<1∫ ∞Mdt−2(f, et−2LωXf) = 0. (4.25)Proof. Recall that f(x) = d/2+1f(x). Thus ‖f‖∞ = d/2+1 ‖f‖∞. Alsonotice that λ(supp f) = −dλ(supp f). By the proof of Lemma 4.2.1, N(ω)given by the lemma is finite for µ˜ − a.s. ω. Furthermore, if t > N(ω)and 0 <  < 1, −2t > N(ω). Thus with previous lemma we have, when794.3. Proof of main resultt > N(ω),−2(f, et−2LωXf) ≤ −2C ‖f‖2∞ λ(supp f)21td/2= C ‖f‖2∞ λ(supp f)21td/2.(4.26)The functions on the left (indexed by ) are uniformly integrable in t ford ≥ 3.4.3 Proof of main resultBefore proving the final result about scaling limits, we need two lemmas.Lemma 4.3.1. Let µ be a translation-invariant, ergodic gradient Gibbs mea-sure defined in (1.8) and µ˜ is the extension of µ to RZd×RE defined in (3.21).There exists a positive semidefinite, non-degenerate d× d matrix q, s.t. forevery t > 0 and f ∈ C∞0 (Rd) ∩H,−2(f, et−2LωXf)→ (f, etQf) µ˜− a.s. ω. (4.27)where Q is defined from q by (4.17).Proof. By the definition of LωX and (4.20), we have(f, etLωXf) =∫dxdyf(x)f(y)pω(t, bxc, byc)=∫dxf(x)∫[0,1]ddz∑y∈Zdf(y + z)pω(t, bxc, byc)=∫dxf(x)∫[0,1]ddzEbxc[f(Xt + z)]. (4.28)804.3. 
Proof of main resultRecall that f(x) = (d/2+1)f(x), so−2(f, et−2LωXf)= −2∫dx(d/2+1)f(x)∫[0,1]ddz(d/2+1)Ebxc[f(Xt−2 + z)]= −d∫dxf(x)∫[0,1]ddzEbxc[f(Xt−2 + z)]=∫dxf(x)∫[0,1]ddzEbx/c[f(Xt−2 + z)]. (4.29)The last equation is by variable change x → x. Then by Corollary B.1.2and the fact that f ∈ C∞0 (Rd), we have−2(f, et−2LωXf)→ (f, etQf) µ˜− a.s. ω. (4.30)Lemma 4.3.2. Let µ be a translation-invariant, ergodic gradient Gibbs mea-sure defined in (1.8) and µ˜ is the extension of µ to RZd×RE defined in (3.21).There exists a positive semidefinite, non-degenerate d× d matrix q, s.t. forf ∈ C∞0 (Rd) ∩H,lim↓0(f, (−LωX)−1f) = (f, (−Q)−1f) µ˜− a.s. ω. (4.31)where Q is defined from q by (4.17).Proof. For any f ∈ C∞0 (Rd) ∩H,(f, (−LωX)−1f) =∫ ∞0dt(f, etLωXf). (4.32)Replacing f by f, we get that(f, (−LωX)−1f) =∫ ∞0dt−2(f, et−2LωXf). (4.33)By above lemma, −2(f, et−2LωXf)→ (f, etQf) µ˜−a.s. ω; the monotonicityin t and continuity of the limit shows that the convergence is actually uniform814.3. Proof of main resulton compact intervals. By Corollary 4.2.2, the integral can be truncated to afinite interval and similarly for the integral of the limit. Therefor it followsthatlim↓0(f, (−LωX)−1f) =∫ ∞0dt(f, etQf) = (f,−Q−1f). (4.34)Now we will prove the main result of this section.Proof of Theorem 4.1.5. Let µ be a translation-invariant, ergodic gradientGibbs measure defined in (1.8) and µ˜ is the extension of µ to RZd × REdefined in (3.21). We want to show φ(f) converge weakly to a normalrandom variable with mean zero and variance (f, (−Q)−1f). By Lemma4.13, it suffices to prove this for f ∈ C∞0 (Rd) ∩H.By Proposition 3.1.5, we know that φ(f) is Gaussian conditional on ω.The characteristic function of Gaussian random variable XE(eiX) = eiE(X)−1/2V ar(X), (4.35)shows, by conditional expectation and variance in Proposition 3.1.5, thatEµ˜(eiφ(f)|F)(ω) = e−1/2(f,(−LωX)−1f). (4.36)By Lemma 4.3.2, lim↓0(f, (−LωX)−1f) = (f, (−Q)−1f) µ˜ − a.s. ω. Sincethe right-hand side of (4.36) is a bounded function, Theorem 4.1.5 followsby the bounded convergence theorem.82Chapter 5Outlook5.1 UniquenessUniqueness is one of the main concerns in the problem of the Gibbs measure,which is still open for our case. As in definition 1.2.2, uniqueness impliesno phase transition for the Gibbs measure. Also by the remark 4.1.6(3),uniqueness will provide more information about the scaling limit of the φfield.We are especially interested in the uniqueness of translation invariantGibbs measures as they are called phases in section 1.3.1. If GI is the setof all translation invariant Gibbs measure, then by Theorem 1.3.6, a Gibbsmeasure µ ∈ GI is extreme if and only if µ is ergodic. Thus the uniquenessof the translation invariant Gibbs measure is equivalent to the uniquenessof the ergodic Gibbs measure.Standard methods for the uniqueness fail in our model. One of theconditions for the uniqueness, Dobrushin’s condition, can be roughly statedas follows: The total interaction of a given spin with all other spins shouldbe so small that some crucial quantity is less than one. See Section 8.1 of[48] for more details about this condition. In fact, Dobrushin’s condition willyield more than uniqueness of the Gibbs measure. It implies an exponentialdecay in two point function [7]. It obviously does not apply for our modeldue to Theorem 3.1.7, which implies an algebraical decay of the two pointfunction. For uniformly convex potential function, Funaki and Sphohn provea one-to-one corresponding between tilt, and the ergodic Gibbs measure(Theorem 1.4.4). 
Their method is based on the Ginzburg-Landau dynamics given in Section 1.3.2, and the uniform bounds on the second derivative of the potential function V play an essential role in the proof.

Cotar and Külske introduced the gradient Gibbs measure with a disorder-dependent structure (model B in [30]), and proved existence and uniqueness for this model when d ≥ 1 and for any stationary disorder-dependent structure. In the next section we introduce this model and the idea motivated by their work. To simplify the problem, we first focus on the uniqueness of the ergodic ϕ-Gibbs measure with mean zero.

5.2 Gradient Gibbs measure with disorder

5.2.1 Model of interest

Now we introduce the gradient Gibbs measure with disorder. Here we follow the notation and definitions in [30]. Recall that we think of Z^d as a graph with edge set E = {{j,k} : j,k ∈ Z^d, ‖j − k‖₂ = 1}, where ‖·‖₂ is the Euclidean norm. We use the notation jk = {j,k} for the undirected edges. For a finite subset Λ of Z^d, let E(Λ) be the set of edges with at least one vertex in Λ, namely E(Λ) = {jk ∈ E : jk ∩ Λ ≠ ∅}, and let ∂Λ be its (outer) boundary, ∂Λ := {x ∉ Λ : ‖x − y‖ = 1 for some y ∈ Λ}. On the boundary we impose a boundary condition ψ, so that φ(x) = ψ(x) for x ∈ ∂Λ. The model is then given in terms of the finite-volume Hamiltonian on Λ. Let Ω = R^{Z^d} be the space of φ-fields and Ω_ω = R^E the space of disorder configurations ω.

Definition 5.2.1. For each bond jk ∈ E, we are given a measurable map V_{jk} : Ω_ω × R → R, (ω, s) ↦ V^ω_{jk}(s); thus V^ω_{jk}(s) is a random real-valued function. Assume that the V^ω_{jk} ∈ C²(R) have uniformly bounded finite second moments and a jointly stationary distribution. We impose the further condition that for each fixed ω ∈ Ω_ω and each bond jk, V^ω_{jk} ∈ C²(R) is an even function. Then we define the Hamiltonian, for each fixed ω ∈ Ω_ω, by
\[ H^\psi_\Lambda[\omega](\phi) := \sum_{jk\in E(\Lambda)} V^\omega_{jk}\big(\phi\vee\psi(j) - \phi\vee\psi(k)\big), \tag{5.1} \]
where φ∨ψ(x) = φ(x) if x ∈ Λ and φ∨ψ(x) = ψ(x) if x ∈ ∂Λ.

Let C_b(R^{Z^d}) denote the set of continuous bounded functions on R^{Z^d}. The functions considered are functions of the interface configuration φ, and continuity is with respect to each coordinate φ(x), x ∈ Z^d, of the interface. For a finite region Λ ⊂ Z^d, let dφ_Λ := ∏_{x∈Λ} dφ(x) be the Lebesgue measure over R^Λ. Now we define the ϕ-Gibbs measures for fixed disorder ω.

Definition 5.2.2 (Finite-volume ϕ-Gibbs measure). For a finite region Λ ⊂ Z^d, the finite-volume Gibbs measure ν_{Λ,ψ}[ω] on R^{Z^d} with Hamiltonian
\[ H[\omega] := \big(H^\psi_\Lambda[\omega]\big)_{\Lambda\subset\mathbb{Z}^d,\,\psi\in\mathbb{R}^{\mathbb{Z}^d}}, \]
with boundary condition ψ for the field of height variables (φ(x))_{x∈Z^d} over Λ, and with a fixed disorder configuration ω, is defined by
\[ \nu_{\Lambda,\psi}[\omega] := \frac{1}{Z^\psi_\Lambda[\omega]}\, \exp\big\{-H^\psi_\Lambda[\omega](\phi)\big\}\, d\phi_\Lambda\, \delta_\psi\big(d\phi_{\mathbb{Z}^d\setminus\Lambda}\big), \tag{5.2} \]
where
\[ Z^\psi_\Lambda[\omega] = \int_{\mathbb{R}^{\mathbb{Z}^d}} \exp\big\{-H^\psi_\Lambda[\omega](\phi)\big\}\, d\phi_\Lambda\, \delta_\psi\big(d\phi_{\mathbb{Z}^d\setminus\Lambda}\big), \tag{5.3} \]
and
\[ \delta_\psi\big(d\phi_{\mathbb{Z}^d\setminus\Lambda}\big) = \prod_{x\in\mathbb{Z}^d\setminus\Lambda} \delta_{\psi(x)}\big(d\phi(x)\big). \tag{5.4} \]

Similarly to Definition 1.2.1, we define the infinite volume measure for fixed disorder ω.

Definition 5.2.3 (ϕ-Gibbs measure on Z^d). The probability measure ν[ω] on R^{Z^d} is called an infinite-volume Gibbs measure for the ϕ-field with given Hamiltonian H[ω] (ϕ-Gibbs measure for short) if it satisfies the DLR equation
\[ \int \nu[\omega](d\psi)\int \nu^\psi_\Lambda[\omega](d\phi)\, F(\phi) = \int \nu[\omega](d\phi)\, F(\phi) \tag{5.5} \]
for every finite Λ ⊂ Z^d and all F ∈ C_b(R^{Z^d}).

For the mapping ω → ν[ω] we introduce the notion of translation covariance. For v ∈ Z^d, we define the shift operators: τ_v acts on the heights φ by (τ_vφ)(y) := φ(y − v) for y ∈ Z^d and φ ∈ R^{Z^d}, and τ_v acts on the disorder configuration by (τ_vω)(x,y) := ω(x+v, y+v) for xy ∈ E and ω ∈ Ω_ω.
Definition 5.2.4 (Translation-covariant random (gradient) Gibbs measures). A measurable map ω → ν[ω] is called a translation-covariant random Gibbs measure if ν[ω] is a ϕ-Gibbs measure for P-almost every ω, and if
\[ \int \nu[\tau_v\omega](d\phi)\, F(\phi) = \int \nu[\omega](d\phi)\, F(\tau_v\phi) \tag{5.6} \]
for all v ∈ Z^d and all F ∈ C_b(R^{Z^d}).

To define the notion of measurability for a measure-valued function we use the evaluation σ-algebra on the image space, which is the smallest σ-algebra such that the evaluation maps µ → µ(A) are measurable for all events A (for details, see page 129 in Section 7.3 on the extreme decomposition in [48]).

There is a natural connection between our model and the gradient Gibbs measure with disorder. Let µ be a translation-invariant, ergodic gradient Gibbs measure defined in (1.8) and let µ̃ be the extension of µ to R^{Z^d} × R^E defined in (3.21). Consider the σ-field E := σ({ω_b : b ∈ E}). For µ̃-a.e. ω, the conditional law µ[ω] := µ̃(·|E)(ω), regarded as a measure on Ω, is the ϕ-Gibbs measure of Definition 5.2.3 with V^ω_{jk}(s) = ½ ω_{jk} s². Furthermore, by the proof of Proposition 3.1.5, the mapping ω → µ[ω] is translation-covariant.

5.2.2 Dynamical method

As explained before, Dobrushin-type methods do not work for the uniqueness problem for gradient models, with or without disorder, because of the long-range dependence, so in [30, 45] the dynamics of Section 1.3.2 are used to establish the result. We assume that the dynamics of the height variables φ_t = {φ_t(y)}_{y∈Z^d} are generated by the following family of SDEs: for all ω ∈ Ω_ω,
\[ d\phi_t(x) = -\sum_{y\in\mathbb{Z}^d:\,|y-x|=1} (V^\omega_{xy})'\big(\phi_t(x) - \phi_t(y)\big)\, dt + \sqrt{2}\, dW_t(x), \qquad x\in\mathbb{Z}^d, \tag{5.7} \]
where {W_t(y), y ∈ Z^d} is a family of independent Brownian motions. The dynamics of the height differences η_t = {η_t(b) := ∇_bφ_t}_{b∈E} are then determined by
\[ d\eta_t(b) = -\sum_{b'\in E:\, x_{b'}=x_b} (V^\omega_{b'})'\big(\eta_t(b')\big)\, dt + \sqrt{2}\, dW_t(b), \qquad b\in E, \tag{5.8} \]
where W_t(b) := W_t(x_b) − W_t(y_b).

Suppose there exist two translation-invariant, ergodic gradient Gibbs measures µ and µ̄ such that E_µ(φ_0) = E_µ̄(φ_0) = 0. Then there exist two shift-covariant measures ω → µ[ω], ω → µ̄[ω] stationary for the SDE (5.7). For each fixed ω ∈ Ω_ω, we construct two independent random variables φ = {φ(x)}_{x∈Z^d} and φ̄ = {φ̄(x)}_{x∈Z^d} on a common probability space (Y, H, Q[ω]) in such a way that φ and φ̄ are distributed according to µ[ω] and µ̄[ω] under Q[ω], respectively. Let φ_t and φ̄_t be two solutions of the SDE (5.7) driven by common Brownian motions with initial data φ and φ̄, and let η_t and η̄_t be defined by η_t(b) := ∇_bφ_t and η̄_t(b) := ∇_bφ̄_t.

In work in preparation, we plan to show that there exists a deterministic sequence (m_r)_{r∈N} in N such that for P-almost every ω,
\[ \lim_{k\to\infty} \frac{1}{k}\left(\sum_{i=1}^{k}\frac{1}{m_i}\int_0^{m_i}\sum_{b\in E} e^{-2r|x_b|}\, E_{Q[\omega]}\big[(\eta_t(b) - \bar\eta_t(b))^2\big]\, dt\right) = 0, \tag{5.9} \]
which is similar to Lemma 4.4 in [30]. Combining this with (68) in [30], we would obtain that the Wasserstein distance between µ[ω] and µ̄[ω] vanishes, and hence µ[ω] = µ̄[ω] for P-almost all ω. Thus for the annealed measures we have µ = µ̄. This gives the uniqueness of the ergodic ϕ-Gibbs measure, and thus the uniqueness of the translation invariant ϕ-Gibbs measure.
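To make the coupling above concrete, here is a minimal numerical sketch, not part of the argument itself: it discretizes the Langevin dynamics (5.7) by an Euler–Maruyama scheme on a small periodic box, runs two copies driven by the same Brownian increments from different initial heights, and tracks the mean squared difference of the gradients, the quantity appearing in (5.9). The specific potential V(s) = (1+s²)^α with α = 0.3, the box size, the step size and the run length are illustrative assumptions of mine, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
L, alpha, dt, n_steps = 16, 0.3, 0.005, 20000     # illustrative parameters

def Vprime(s):
    # derivative of the sub-quadratic potential V(s) = (1 + s^2)^alpha
    return 2.0 * alpha * s * (1.0 + s * s) ** (alpha - 1.0)

def drift(phi):
    # -sum over nearest neighbours of V'(phi(x) - phi(y)), periodic boundary
    d = np.zeros_like(phi)
    for axis in (0, 1):
        for sgn in (1, -1):
            d -= Vprime(phi - np.roll(phi, sgn, axis=axis))
    return d

phi  = rng.normal(size=(L, L))      # two different initial height fields
phib = rng.normal(size=(L, L))
for step in range(n_steps):
    dW = np.sqrt(2.0 * dt) * rng.normal(size=(L, L))   # common noise for both copies
    phi  += drift(phi)  * dt + dW
    phib += drift(phib) * dt + dW
    if step % 4000 == 0:
        g  = phi  - np.roll(phi,  1, axis=0)           # horizontal gradients
        gb = phib - np.roll(phib, 1, axis=0)
        print(step, np.mean((g - gb) ** 2))            # cf. the integrand in (5.9)
```

For a uniformly convex V the synchronous coupling is known to contract; for the sub-quadratic V above, whether and how fast this quantity decays is exactly what an estimate of the form (5.9) would have to control.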
Bibliography

[1] A. Abdesselam. The Grassmann–Berezin calculus and theorems of the matrix-tree type. Advances in Applied Mathematics, 33(1):51–70, 2004.

[2] S. Adams, R. Kotecký, and S. Müller. Strict convexity of the surface tension for non-convex potentials. Communications in Mathematical Physics, 2013.

[3] S. Andres, J.D. Deuschel, and M. Slowik. Heat kernel estimates for random walks with degenerate weights. Preprint available at http://arxiv.org/pdf/1412.4338.pdf, 2015.

[4] S. Andres, J.D. Deuschel, and M. Slowik. Invariance principle for the random conductance model in a degenerate ergodic environment. Ann. Probab., 43(4):1866–1891, 2015.

[5] S. Andres, J.D. Deuschel, and M. Slowik. Harnack inequalities on weighted graphs and some applications to the random conductance model. Probab. Theory Related Fields, 164(3-4):931–977, 2016.

[6] O. Angel, N. Crawford, and G. Kozma. Localization for linearly edge reinforced random walks. Duke Mathematical Journal, 163(5):889–921, 2014.

[7] A.Val. Antoniouk and A.Vict. Antoniouk. Decay of correlations and uniqueness of Gibbs lattice systems with nonquadratic interaction. Journal of Mathematical Physics, 37(11):5444–5454, 1996.

[8] M.T. Barlow and J.D. Deuschel. Invariance principle for the random conductance model with unbounded conductances. The Annals of Probability, 38(1):234–276, 2010.

[9] V. Beffara and H. Duminil-Copin. The self-dual point of the two-dimensional random-cluster model is critical for q ≥ 1. Probability Theory and Related Fields, 153(3-4):511–542, 2012.

[10] N. Berestycki and J.R. Norris. Lectures on Schramm–Loewner evolution. Cambridge University, 2014.

[11] E. Bertin, I. Cuculescu, and R. Theodorescu. Unimodality of Probability Measures. Mathematics and Its Applications, Volume 382. Springer Science+Business Media, B.V., 1997.

[12] M. Biskup. Reflection positivity and phase transitions in lattice spin models. In Methods of contemporary mathematical statistical physics, pages 1–86. Springer, 2009.

[13] M. Biskup. Recent progress on the random conductance model. Probability Surveys, 8, 2011.

[14] M. Biskup, C. Borgs, J.T. Chayes, and R. Kotecký. Gibbs states of graphical representations of the Potts model with external fields. Journal of Mathematical Physics, 41(3):1170–1210, 2000.

[15] M. Biskup and R. Kotecký. Phase coexistence of gradient Gibbs states. Probab. Theory Related Fields, 139(1-2):1–39, 2007.

[16] M. Biskup and H. Spohn. Scaling limit for a class of gradient fields with non-convex potentials. Ann. Probab., 39:224–251, 2011.

[17] E. Bolthausen. Large deviations and perturbations of random walks and random surfaces, volume 1. Birkhäuser, 1998.

[18] E. Bolthausen and D. Ioffe. Harmonic crystal on the wall: a microscopic approach. Communications in Mathematical Physics, 187(3):523–566, 1997.

[19] H.J. Brascamp and E.H. Lieb. Some inequalities for Gaussian measures and the long range order of the one dimensional plasma. In A.M. Arthurs, editor, Functional integration and its applications, Oxford, 1975. Clarendon.

[20] H.J. Brascamp and E.H. Lieb. On extensions of the Brunn-Minkowski and Prekopa-Leindler theorems, including inequalities for log concave functions, and with an application to the diffusion equation. J. Funct. Anal., 22:366–389, 1976.

[21] H.J. Brascamp, E.H. Lieb, and J.L. Lebowitz. The statistical mechanics of anharmonic lattices. In Statistical Mechanics, pages 379–390. Springer, 1975.

[22] D.C. Brydges. Lectures on the renormalisation group. Lecture notes available at http://www.math.ubc.ca/~db5d/Seminars/PCMILectures/lectures.pdf, 2007.

[23] D.C. Brydges, J. Fröhlich, and T. Spencer. The random walk representation of classical spin systems and correlation inequalities. Communications in Mathematical Physics, 83(1):123–150, 1982.

[24] D.C. Brydges, J.Z. Imbrie, and G. Slade. Functional integral representations for self-avoiding walk. Probab. Surv., 6:34–61, 2009.

[25] D.C. Brydges and T. Spencer.
Fluctuation estimates for sub-quadratic gradient field actions. J. Math. Phys., 53(9):095216, 5, 2012.

[26] J.T. Chalker. The pinning of an interface by a planar defect. Journal of Physics A: Mathematical and General, 15(9):L481, 1982.

[27] C. Cotar and J.D. Deuschel. Decay of covariances, uniqueness of ergodic component and scaling limit for a class of ∇φ systems with non-convex potential. Ann. Inst. Henri Poincaré Probab. Stat., 48(3):819–853, 2012.

[28] C. Cotar, J.D. Deuschel, and S. Müller. Strict convexity of the free energy for a class of non-convex gradient models. Commun. Math. Phys., 286(1):359–376, 2009.

[29] C. Cotar and C. Külske. Existence of random gradient states. The Annals of Applied Probability, pages 1650–1692, 2012.

[30] C. Cotar and C. Külske. Uniqueness of gradient Gibbs measures with disorder. Probability Theory and Related Fields, 162(3-4):587–635, 2015.

[31] I. Cuculescu and R. Theodorescu. Multiplicative strong unimodality. Austral. & New Zealand J. Statist., 40:205–214, 1998.

[32] T. Delmotte and J.D. Deuschel. On estimating the derivatives of symmetric diffusions in stationary random environment, with applications to ∇φ interface model. Probability Theory and Related Fields, 133(3):358–390, 2005.

[33] J.D. Deuschel, G. Giacomin, and D. Ioffe. Large deviations and concentration properties for ∇φ interface models. Probability Theory and Related Fields, 117:49–111, 2000. 10.1007/s004400050266.

[34] J.D. Deuschel and Y. Velenik. Non-Gaussian surface pinned by a weak potential. Probability Theory and Related Fields, 116(3):359–377, 2000.

[35] M. Disertori, F. Merkl, and S.W.W. Rolles. A comparison of a nonlinear sigma model with general pinning and pinning at one point. Electronic Journal of Probability, 21, 2016.

[36] R.G. Edwards and A.D. Sokal. Generalization of the Fortuin-Kasteleyn-Swendsen-Wang representation and Monte Carlo algorithm. Physical Review D, 38(6):2009, 1988.

[37] R.S. Ellis. Entropy, large deviations, and statistical mechanics, volume 271. Springer Science & Business Media, 2012.

[38] W. Feller. An Introduction to Probability Theory and Its Applications, Volume II. Wiley, New York, second edition, 1971.

[39] R. Fernández, J. Fröhlich, and A.D. Sokal. Random walks, critical phenomena, and triviality in quantum field theory. Texts and Monographs in Physics. Springer Science & Business Media, Berlin, 1992.

[40] J. Fröhlich. Phase transitions, Goldstone bosons and topological superselection rules. In Current Problems in Elementary Particle and Mathematical Physics, pages 133–269. Springer, 1976.

[41] J. Fröhlich, R. Israel, E.H. Lieb, and B. Simon. Phase transitions and reflection positivity. I. General theory and long range lattice models. In Statistical Mechanics, pages 213–246. Springer, 1978.

[42] J. Fröhlich and E.H. Lieb. Phase transitions in anisotropic lattice spin systems. In Statistical Mechanics, pages 127–161. Springer, 1978.

[43] J. Fröhlich, B. Simon, and T. Spencer. Infrared bounds, phase transitions and continuous symmetry breaking. Communications in Mathematical Physics, 50(1):79–95, 1976.

[44] J. Fröhlich and T. Spencer. The Kosterlitz-Thouless transition in two-dimensional abelian spin systems and the Coulomb gas. Communications in Mathematical Physics, 81(4):527–602, 1981.

[45] T. Funaki. Stochastic interface models, volume 1869 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 2005. Lectures from the 33rd Probability Summer School held in Saint-Flour, July 6–23, 2003. Edited by Jean Picard.

[46] T. Funaki and H. Spohn.
Motion by mean curvature from the Ginzburg-Landau ∇φ interface model. Comm. Math. Phys., 185(1):1–36, 1997.

[47] W. Gawronski. On the bell-shape of stable densities. The Annals of Probability, pages 230–242, 1984.

[48] H.O. Georgii. Gibbs Measures and Phase Transitions, volume 9. Walter de Gruyter, 2011.

[49] G. Giacomin, S. Olla, and H. Spohn. Equilibrium fluctuations for ∇φ interface model. Ann. Probab., 29(3):1138–1172, 2001.

[50] J.W. Gibbs. On the equilibrium of heterogeneous substances. American Journal of Science, (96):441–458, 1878.

[51] G. Grimmett. The stochastic random-cluster process and the uniqueness of random-cluster measures. The Annals of Probability, pages 1461–1510, 1995.

[52] G. Grimmett. The random-cluster model. In Probability on Discrete Structures, pages 73–123. Springer, 2004.

[53] J. Hawkes. A lower Lipschitz condition for the stable subordinator. Z. Wahrsch. Verw. Gebiete, 7:23–32, 1971.

[54] B. Helffer and J. Sjöstrand. On the correlation for Kac-like models in the convex case. J. Statist. Phys., 74(1-2):349–409, 1994.

[55] I.A. Ibragimov and K.E. Chernin. On the unimodality of stable laws. Theory of Probability and Its Applications, 4:453–456, 1953.

[56] D. Ioffe and Y. Velenik. A note on the decay of correlations under δ-pinning. Probability Theory and Related Fields, 116(3):379–389, 2000.

[57] Y. Isozaki and N. Yoshida. Weakly pinned random walk on the wall: pathwise descriptions of the phase transition. Stochastic Processes and their Applications, 96(2):261–284, 2001.

[58] S. Janson. Gaussian Hilbert Spaces, volume 129. Cambridge University Press, 1997.

[59] C. Kipnis and S.R.S. Varadhan. Central limit theorem for additive functionals of reversible Markov processes and applications to simple exclusions. Communications in Mathematical Physics, 104(1):1–19, 1986.

[60] C. Kuelske and E. Orlandi. A simple fluctuation lower bound for a disordered massless random continuous spin model in d = 2. Electronic Communications in Probability, 11:200–205, 2006.

[61] F. Merkl and S.W.W. Rolles. Linearly edge-reinforced random walks. In Dynamics & Stochastics, volume 48 of IMS Lecture Notes Monogr. Ser., pages 66–77. Inst. Math. Statist., Beachwood, OH, 2006.

[62] P.D. Miller. Applied Asymptotic Analysis, volume 75. American Mathematical Soc., 2006.

[63] A. Naddaf and T. Spencer. On homogenization and scaling limit of some gradient perturbations of a massless free field. Comm. Math. Phys., 183(1):55–84, 1997.

[64] M.A. Naimark and L.F. Boron. Normed Rings. Wolters-Noordhoff Publishing, distributed in the United States by Crane, P. Noordhoff NV, 1964.

[65] J.P. Nolan. Stable Distributions – Models for Heavy Tailed Data. Birkhäuser, Boston, 2015. In progress, Chapter 1 online at academic2.american.edu/~jpnolan.

[66] K. Osterwalder and R. Schrader. Axioms for Euclidean Green's functions. Communications in Mathematical Physics, 31(2):83–112, 1973.

[67] E. Presutti. Scaling Limits in Statistical Mechanics and Microstructures in Continuum Mechanics. Springer Science & Business Media, 2008.

[68] V. Privman and N.M. Švrakić. Difference equations in statistical mechanics. II. Solid-on-solid models in two dimensions. Journal of Statistical Physics, 51(5-6):1111–1126, 1988.

[69] C. Sabot and P. Tarrès. Edge-reinforced random walk, vertex-reinforced jump process and the supersymmetric hyperbolic sigma model. arXiv preprint arXiv:1111.3991, 2011.

[70] S. Sheffield. Gaussian free fields for mathematicians. Probab. Theory Related Fields, 139(3-4):521–541, 2007.

[71] T. Simon.
Multiplicative strong unimodality for positive stable laws. Proc. Amer. Math. Soc., 139:2587–2595, 2010.

[72] J. Sjöstrand. Correlation asymptotics and Witten Laplacians. Algebra i Analiz, 8(1):160–191, 1996.

[73] A.C.D. van Enter and C. Külske. Nonexistence of random gradient Gibbs measures in continuous interface models in d = 2. The Annals of Applied Probability, pages 109–119, 2008.

[74] Y. Velenik. Localization and delocalization of random interfaces. Probab. Surv., 3:112–169, 2006.

[75] P. Walters. An Introduction to Ergodic Theory, volume 79. Springer Science & Business Media, 2000.

[76] G.N. Watson. The harmonic functions associated with the parabolic cylinder. Proceedings of the London Mathematical Society, 2(1):116–148, 1918.

[77] G. Wulff. Zur Frage der Geschwindigkeit des Wachstums und der Auflösung der Kristallflächen. Z. Kristallogr., 34:449–530, 1901.

[78] K. Yosida. Functional Analysis. Springer-Verlag, Berlin, sixth edition, 1980.

[79] V.M. Zolotarev. One-dimensional Stable Distributions. American Mathematical Society, 1986.

Appendix A Stable density

Let f_α(x) be the unique positive density such that
\[ \int_0^\infty e^{-\lambda x} f_\alpha(x)\, dx = e^{-\lambda^\alpha}. \tag{A.1} \]
Then, by Mellin's inverse formula for the inverse Laplace transform,
\[ f_\alpha(x) = \frac{1}{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty} e^{zx - z^\alpha}\, dz \quad (\sigma>0,\ x\ge 0,\ 0<\alpha<1), \qquad f_\alpha(x) = 0 \ \text{ when } x<0. \tag{A.2} \]
By Proposition 2 of Chapter IX, Section 11 in [78], f_α(x) is non-negative. We will show in this chapter that f_α(x) is in fact the density of the α-stable distribution, and we discuss several properties of f_α(x).

A.1 Definition of stable distribution

Stable distributions are natural generalizations of the normal distribution. We write $U \stackrel{d}{=} V$ if the random variables U and V have the same distribution. Here we give the definition of a stable distribution. Let X, X₁, X₂, … be independent random variables with a common distribution R, and let S_n = X₁ + X₂ + ⋯ + X_n.

Definition A.1.1. The distribution R is stable if for each n there exist constants c_n > 0, γ_n such that
\[ S_n \stackrel{d}{=} c_n X + \gamma_n \tag{A.3} \]
and R is not concentrated at one point. R is stable in the strict sense if (A.3) holds with γ_n = 0.

The next theorem characterizes the norming constants c_n and gives the definition of the characteristic exponent of R.

Theorem A.1.1 ([38, Theorem 1 in Section 6.1]). The norming constants c_n are of the form c_n = n^{1/α} with 0 < α ≤ 2. The constant α is called the characteristic exponent of R.

Two famous examples of stable distributions are the Gaussian distribution with α = 2 and the Cauchy distribution with α = 1. By Lemma 1 of [47], the density of the stable distribution is two-sided, namely with support R, if α ≥ 1, and one-sided, namely with support [0,∞), if 0 < α < 1. The next theorem explains that f_α(x) is a special type of stable density with 0 < α < 1.

Theorem A.1.2 ([38, Theorem 1 in Section 13.6]). For fixed 0 < α < 1 the function γ_α(λ) = e^{-λ^α} is the Laplace transform of a distribution G_α with the following properties:
1. G_α is stable with γ_n = 0 and c_n = n^{1/α} in Definition A.1.1.
2. When x → ∞,
\[ x^\alpha\,[1 - G_\alpha(x)] \to \frac{1}{\Gamma(1-\alpha)}. \tag{A.4} \]

Except for a few values of α, such as the case α = 0.5 used in [25], there is no closed form for f_α(x). However, it is possible to represent the stable density in an integral form. The following theorem is cited from (2.5.10) in [79].¹

Theorem A.1.3. For 0 < α < 1, let
\[ U_\alpha(\varphi) = \left(\frac{\sin\alpha\varphi}{\alpha\sin\varphi}\right)^{\alpha/(1-\alpha)} \frac{\sin(1-\alpha)\varphi}{\alpha\sin\varphi}, \qquad \varphi\in(-\pi,\pi), \tag{A.5} \]
and
\[ z(x) = (1-\alpha)\,(x/\alpha)^{\alpha/(\alpha-1)}. \]
Then for x ≥ 0,
\[ f_\alpha(x) = \frac{z(x)^{1/\alpha}}{2\pi(1-\alpha)^{1/\alpha}} \int_{-\pi}^{\pi} U_\alpha(\varphi)\, \exp\{-z(x)U_\alpha(\varphi)\}\, d\varphi. \tag{A.6} \]

¹There is a typographical error in the definition of U_α in the reference: the first exponent in U_α should be α/(1−α) instead of α/(α−1).
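The representation (A.6) is convenient for asymptotics, but for a quick numerical look at f_α one can also evaluate the Bromwich integral (A.2) directly. The sketch below is an illustration of mine, not part of the thesis: the contour parameter σ, the truncation of the integral and the grid are convenient choices. It integrates e^{zx − z^α} along Re z = σ and checks the result against the closed form f_{1/2}(x) = x^{-3/2} e^{-1/(4x)}/(2√π) available at α = 1/2, the case used in [25].

```python
import numpy as np

def f_alpha(x, alpha, sigma=1.0, s_max=500.0, n=400001):
    """Evaluate the stable density by integrating (A.2) along Re z = sigma.

    The integrand at -s is the complex conjugate of the one at +s, so
    f_alpha(x) = (1/pi) * int_0^infty Re[exp(z x - z^alpha)] ds with z = sigma + i s.
    """
    s = np.linspace(0.0, s_max, n)
    z = sigma + 1j * s
    vals = np.exp(z * x - z ** alpha).real      # principal branch of z^alpha, Re z > 0
    ds = s[1] - s[0]
    return (vals.sum() - 0.5 * (vals[0] + vals[-1])) * ds / np.pi   # trapezoid rule

# check against the closed form at alpha = 1/2
for x in (0.25, 1.0, 4.0):
    exact = x ** (-1.5) * np.exp(-1.0 / (4.0 * x)) / (2.0 * np.sqrt(np.pi))
    print(x, f_alpha(x, 0.5), exact)
```

For smaller α the integrand decays more slowly in s, so the truncation parameter s_max (and with it the grid size) has to be increased accordingly.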
A.2 Log concavity

A non-negative function f : R^n → R_+ is logarithmically concave (or log-concave for short) if its domain is a convex set and if it satisfies the inequality
\[ f\big(\theta x + (1-\theta)y\big) \ge f(x)^{\theta} f(y)^{1-\theta} \]
for all x, y in the domain of f and 0 < θ < 1. Log concavity of f_{1/2}(e^t) plays an important role in Brydges and Spencer's proof for the case α = 1/2. It provides a Brascamp–Lieb bound for the second moment of the auxiliary field t, as stated in Theorem 2.1.2. In the following paragraphs we discuss several definitions related to unimodality, which is closely related to log concavity. Then we give a result about the log concavity of f_α(e^t).

The term "unimodality" originally refers to distributions with a unique mode. The unimodal property is not preserved under addition or multiplication of random variables, so it has been strengthened in the literature according to the following definition. See [11, 31, 55] for more details about the history and definition of unimodality.

Definition A.2.1.
1. [55] A real random variable X is said to be unimodal (or quasi-concave) if there exists a ∈ R such that the functions P(X ≤ x) and P(X > x) are convex respectively on (−∞, a) and (a, +∞). The number a is called a mode of X.
2. [55] A real random variable X is said to be strongly unimodal if the sum X + Y is unimodal for all unimodal variables Y that are independent of X.
3. [31, Definition 3.3] A real random variable X is said to be multiplicative strongly unimodal if the product XY is unimodal for all unimodal variables Y that are independent of X.

Strong unimodality has been proved to be equivalent to log concavity of the density [11], while multiplicative strong unimodality, which is a more recent concept, also turns out to be related to log concavity [31].

Theorem A.2.1.
1. [11, Theorem 6.1.4] Let X be a random variable. Then X is strongly unimodal if and only if it is absolutely continuous and its probability density function f_X is log concave.
2. [31, Theorem 3.7] Let X be a unimodal random variable such that 0 is not a mode of X. Then X is multiplicative strongly unimodal if and only if it is absolutely continuous, with a density f_X such that {f_X ≠ 0} is an interval contained either in (−∞, 0] or in [0, ∞), and f_X(e^x) (respectively f_X(−e^x)) is log-concave on this interval.

It has been proved in [55] that stable distribution functions are unimodal, so to achieve our goal the next step is to determine whether a stable distribution is multiplicative strongly unimodal. The following theorem from [71] gives a condition under which a stable distribution is multiplicative strongly unimodal.

Theorem A.2.2 ([71, Main Theorem]). If X is a random variable with a stable distribution of index α ∈ (0,1), then X is multiplicative strongly unimodal if and only if α ≤ 1/2.

From the above theorem, we immediately have a criterion for the log concavity of f_α(e^t).

Corollary A.2.3. f_α(e^t) is log concave if and only if α ≤ 1/2.
A.3 Tail behavior

Knowledge of the tail behavior of the density of the auxiliary field, f_α(e^t), plays an essential role in the proof of Theorem 2.1.7. It suffices to discuss the tail behavior of f_α(x) at +∞ and at 0. In the following, g(x) ≍ h(x) as x → a means that there exist positive constants c, C and a neighborhood N of a such that ch(x) < g(x) < Ch(x) for x ∈ N; and g(x) ∼ h(x) as x → a means lim_{x→a} g(x)/h(x) = 1.

For the tail behavior of f_α(x) at ∞, the next proposition, from Section 1.5 in [65], gives the asymptotic behavior of f_α(x).

Proposition A.3.1. When x → ∞, f_α(x) ≍ x^{−α−1}.

On the other hand, the next proposition, from the Lemma of [53], gives a result for the tail behavior of f_α(x) at 0.

Proposition A.3.2. When x → 0,
\[ f_\alpha(x) \asymp x^{-\frac{1}{1-\alpha}+\frac{\alpha}{2(1-\alpha)}}\, \exp\big(-c\, x^{-\alpha/(1-\alpha)}\big), \]
where c = c(α) = (1−α)α^{α/(1−α)}.

Combining these two propositions with Corollary A.2.3, we have the following corollary, which states the tail behavior of the first derivative of ln f_α(e^t).

Corollary A.3.3. Let α < 1/2 and g(t) = (d/dt) ln f_α(e^t). Then
1. g(t) is decreasing;
2. g(t) ≥ −α − 1;
3. g(t) → ∞ as t → −∞.

Proof. As f_α(e^t) is log concave, g(t) is decreasing in t. This gives the first claim.

For the second claim, because g(t) is decreasing in t, it suffices to consider t → ∞. By Proposition A.3.1 there exists a constant C₁ such that f_α(x) > C₁ x^{−α−1} as x → ∞, so when t is large,
\[ \ln f_\alpha(e^t) > \ln C_1 + (-\alpha-1)\, t. \tag{A.7} \]
If there existed some t₀ with g(t₀) < −α − 1, then setting ε = (−g(t₀) − α − 1)/2 we would have g(t) < −α − 1 − ε for t > t₀, since g is decreasing. Then
\[ \ln f_\alpha(e^t) = \int_{t_0}^{t} g(s)\,ds + \ln f_\alpha(e^{t_0}) < (-\alpha-1-\epsilon)\,t + \ln f_\alpha(e^{t_0}) \tag{A.8} \]
when t > t₀, which contradicts (A.7).

For the third claim, if g(t) < K for some constant K, then when t < 0,
\[ \ln f_\alpha(e^t) = \int_0^{t} g(s)\,ds + \ln f_\alpha(1) > Kt + \ln f_\alpha(1). \tag{A.9} \]
On the other hand, by Proposition A.3.2 there exists a constant C₂ such that when x → 0, f_α(x) < C₂\, x^{-1/(1-\alpha)+\alpha/(2(1-\alpha))}\exp(-c\, x^{-\alpha/(1-\alpha)}), so when t → −∞,
\[ \ln f_\alpha(e^t) < \ln C_2 + \Big(-\frac{1}{1-\alpha}+\frac{\alpha}{2(1-\alpha)}\Big)t - c\, e^{-\alpha t/(1-\alpha)}, \tag{A.10} \]
which contradicts (A.9).

The tail behavior of (d²/dt²) ln f_α(e^t) as t → −∞ is also important in our proof. Before giving this result, we need two more preparatory lemmas.

Lemma A.3.4. Let f, g : (−π, π) → R satisfy the following conditions:
1. f and g are positive, analytic and even, with global minimum at 0;
2. f is strictly monotone on (0, π);
3. g = O(f^n) as t → π for some n ∈ N.
Write f_n = f^{(n)}(0) and g_n = g^{(n)}(0). Then we have the approximate formula
\[ \int_{-\pi}^{\pi} e^{-Nf(x)} g(x)\,dx \sim \sqrt{\frac{2\pi}{f_2}}\, g_0\, e^{-Nf_0}\, N^{-1/2}\left(1 + \Big(-\frac{f_4}{8f_2^2} + \frac{g_2}{2g_0 f_2}\Big)N^{-1} + O(N^{-2})\right). \tag{A.11} \]

Proof. Since f and g are even, f_{2n+1} = g_{2n+1} = 0 for n ≥ 0. Then
\[ \int_{-\pi}^{\pi} e^{-Nf(x)} g(x)\,dx = e^{-Nf_0}\int_{-\pi}^{\pi} e^{-N(f(x)-f_0)} g(x)\,dx. \tag{A.12} \]
Here we apply a saddle point estimate to approximate the Laplace integral on the right-hand side of (A.12). The result is a corollary of Watson's lemma (see [76] for the original reference), but here we use the more specific case worked out in [62]. It follows from (3.15) in [62] that
\[ \int_{-\pi}^{\pi} e^{-N(f(x)-f_0)} g(x)\,dx \sim \sqrt{\frac{\pi}{N}}\left(\varphi_0 + \frac{\varphi_2}{4N} + O(N^{-2})\right), \tag{A.13} \]
where
\[ \varphi_0 = g_0\sqrt{\frac{2}{f_2}} \quad\text{and}\quad \varphi_2 = \Big(\frac{2g_2}{f_2} - \frac{g_0 f_4}{2f_2^2}\Big)\sqrt{\frac{2}{f_2}}. \tag{A.14} \]
Applying (A.13) and (A.14) to (A.12), we then have
\[ \int_{-\pi}^{\pi} e^{-Nf(x)} g(x)\,dx \sim \sqrt{\frac{2\pi}{N f_2}}\, g_0\, e^{-Nf_0}\left(1 + \Big(-\frac{f_4}{8f_2^2} + \frac{g_2}{2g_0 f_2}\Big)N^{-1} + O(N^{-2})\right). \tag{A.15} \]
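As a quick numerical illustration of (A.11), not part of the proof, one can take f(x) = 2 − cos x and g(x) = 1 + sin²x, which satisfy the hypotheses with f₀ = 1, f₂ = 1, f₄ = −1, g₀ = 1, g₂ = 2. The following sketch (mine; the choice of f, g and of the values of N is arbitrary) compares the two-term expansion with direct quadrature.

```python
import numpy as np

f0, f2, f4 = 1.0, 1.0, -1.0            # derivatives of f(x) = 2 - cos(x) at 0
g0, g2     = 1.0, 2.0                  # derivatives of g(x) = 1 + sin(x)^2 at 0

def quadrature(N, m=200001):
    # direct evaluation of the left-hand side of (A.11) by the trapezoid rule
    x = np.linspace(-np.pi, np.pi, m)
    y = np.exp(-N * (2.0 - np.cos(x))) * (1.0 + np.sin(x) ** 2)
    dx = x[1] - x[0]
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dx

def expansion(N):
    # two-term right-hand side of (A.11)
    corr = -f4 / (8.0 * f2 ** 2) + g2 / (2.0 * g0 * f2)
    return np.sqrt(2.0 * np.pi / f2) * g0 * np.exp(-N * f0) / np.sqrt(N) * (1.0 + corr / N)

for N in (10.0, 30.0, 100.0):
    q, e = quadrature(N), expansion(N)
    print(N, q, e, abs(q / e - 1.0))   # relative error should decay roughly like N^{-2}
```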
For U_α defined in (A.5), the next lemma concerns the properties of U_α; it summarizes Lemma 2.7.5 and the calculation above Theorem 2.5.2 in [79].

Lemma A.3.5. For U_α defined above,
1. U_α is positive, analytic and even, with global minimum at 0;
2. U_α is strictly monotone on (0, π).

Combining the above two lemmas, we have the following proposition.

Proposition A.3.6. Assume α ≤ 1/2. Then
1. (d/dt) ln f_α(e^t) ∼ α^{1/(1−α)} exp(αt/(α−1)) + O(1) as t → −∞;
2. (d²/dt²) ln f_α(e^t) → −∞ as t → −∞.

Proof. By Theorem A.1.3, if U_α is defined as above and z = (1−α)(e^t/α)^{α/(α−1)}, then
\[ \ln f_\alpha(e^t) = C + \frac{1}{\alpha-1}\,t + \ln\Big(\int_{-\pi}^{\pi} U_\alpha(x)\exp\{-zU_\alpha(x)\}\,dx\Big), \tag{A.16} \]
where C is some constant depending only on α. Let
\[ g(t) = \int_{-\pi}^{\pi} U_\alpha(x)\exp\{-zU_\alpha(x)\}\,dx. \]
It suffices to prove that (d²/dt²) ln g(t) → −∞ as t → −∞. Notice that as t → −∞ we have z → ∞, and that dz/dt = cz with c = α/(α−1). By Lemmas A.3.4 and A.3.5, writing U_α^{(n)}(0) = u_n, we have, as t → −∞,
\[ g(t) = \int_{-\pi}^{\pi} U_\alpha(x)\exp\{-zU_\alpha(x)\}\,dx \sim u_0\sqrt{\frac{2\pi}{u_2}}\, z^{-1/2}\, e^{-zu_0}\Big[1 + \Big(-\frac{u_4}{8u_2^2} + \frac{u_2}{2u_0u_2}\Big)z^{-1} + O(z^{-2})\Big]; \tag{A.17} \]
\[ g'(t) = -cz\int_{-\pi}^{\pi} U_\alpha^2(x)\exp\{-zU_\alpha(x)\}\,dx \sim -cz\, u_0^2\sqrt{\frac{2\pi}{u_2}}\, z^{-1/2}\, e^{-zu_0}\Big[1 + \Big(-\frac{u_4}{8u_2^2} + \frac{2u_0u_2}{2u_0^2u_2}\Big)z^{-1} + O(z^{-2})\Big]; \tag{A.18} \]
\[ \begin{aligned} g''(t) &= c^2z^2\int_{-\pi}^{\pi} U_\alpha^3(x)\exp\{-zU_\alpha(x)\}\,dx - c^2z\int_{-\pi}^{\pi} U_\alpha^2(x)\exp\{-zU_\alpha(x)\}\,dx \\ &\sim c^2z^2\, u_0^3\sqrt{\frac{2\pi}{u_2}}\, z^{-1/2}\, e^{-zu_0}\Big[1 + \Big(-\frac{u_4}{8u_2^2} + \frac{3u_0^2u_2}{2u_0^3u_2}\Big)z^{-1} + O(z^{-2})\Big] \\ &\qquad - c^2z\, u_0^2\sqrt{\frac{2\pi}{u_2}}\, z^{-1/2}\, e^{-zu_0}\Big[1 + \Big(-\frac{u_4}{8u_2^2} + \frac{2u_0u_2}{2u_0^2u_2}\Big)z^{-1} + O(z^{-2})\Big]. \end{aligned} \tag{A.19} \]
Using the fact that u_0 = 1, we have
\[ g'/g \sim -cz\big[1 + \tfrac{3}{2}z^{-1} + O(z^{-2})\big]; \tag{A.20} \]
\[ (g'/g)^2 \sim c^2z^2\big[1 + 3z^{-1} + O(z^{-2})\big]; \tag{A.21} \]
\[ g''/g \sim c^2z^2\big[1 + z^{-1} + O(z^{-2})\big] - c^2z\big(1 + O(z^{-1})\big) = c^2z^2\big[1 + O(z^{-2})\big]. \tag{A.22} \]
For the first derivative of ln f_α(e^t), as t → −∞ we have
\[ \frac{d}{dt}\ln f_\alpha(e^t) = \frac{1}{\alpha-1} + g'/g \sim -cz + O(1). \tag{A.23} \]
Since z = (1−α)(e^t/α)^{α/(α−1)} and c = α/(α−1), this gives the first result.

For the second derivative of ln f_α(e^t), as t → −∞ we have
\[ \frac{d^2}{dt^2}\ln f_\alpha(e^t) = g''/g - (g'/g)^2 = -3c^2z + O(1) \to -\infty. \tag{A.24} \]
This gives the second result.

Appendix B Random walk in random environment

In Chapter 3 we defined the random conductance model, or random walk in random environment. We introduced both the CSRW Y_t, associated with the generator L^ω_C in (3.10), and the VSRW X_t, associated with the generator L^ω_V in (3.12). In this appendix we first introduce some previous results about random walks in random environment, and then introduce the potential theory for the random conductance model.

B.1 Previous results about random walk in random environment

In this part we introduce several previous results about random walks in random environment. The first is the quenched functional central limit theorem for the VSRW X_t, in the sense of the following definition. Recall that P is a probability measure on (Ω_ω, F_ω) = ((0,∞)^E, B((0,∞))^{⊗E}), and we write E to denote the expectation with respect to P. For a fixed ω ∈ Ω_ω, let P^ω_x be the measure associated with the VSRW in the environment ω starting at x.

Definition B.1.1. Let X_t be the VSRW associated with the generator L_V in (3.12). Set X^{(n)}_t := (1/n) X_{n²t}, t ≥ 0. We say that the Quenched Functional CLT (QFCLT), or quenched invariance principle, holds for X if for P-a.e. ω, under P^ω_0, X^{(n)} converges in law to a Brownian motion on R^d with covariance matrix Σ² = Σ·Σ^T. That is, for every T > 0 and every bounded continuous function F on the Skorohod space D([0,T], R^d), setting ψ_n = E^ω_0[F(X^{(n)})] and ψ_∞ = E^{BM}_0[F(Σ·W)] with (W, P^{BM}_0) a Brownian motion started at 0, we have ψ_n → ψ_∞, P-a.s.

The following are two important assumptions on P.

Assumption B.1.2. Assume that P satisfies the following conditions:
(i) P(0 < ω(e) < ∞) = 1 and E[ω(e)] < ∞ for all e ∈ E_d.
(ii) P is ergodic with respect to translations of Z^d, that is, P ∘ τ_x^{-1} = P for all x ∈ Z^d and P[A] ∈ {0,1} for any A ∈ F such that τ_x(A) = A for all x ∈ Z^d.

With additional moment conditions on the conductances ω, we have the following QFCLT for X.

Theorem B.1.1 ([4, Theorem 1.3]). Suppose that d ≥ 2 and Assumption B.1.2 holds. Let p, q ∈ (1, ∞] be such that 1/p + 1/q < 2/d and assume that
\[ E\big[(\omega(e))^p\big] < \infty \quad\text{and}\quad E\big[(1/\omega(e))^q\big] < \infty \tag{B.1} \]
for any e ∈ E_d. Then the QFCLT holds for X with a deterministic non-degenerate covariance matrix Σ².
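As a rough numerical illustration of Theorem B.1.1 (not one of the cited results), the sketch below simulates the VSRW in a single environment of i.i.d. lognormal conductances on Z², which have all positive and negative moments and hence satisfy (B.1) for every p and q, and prints the empirical covariance of X_T/√T over many independent walks. By the QFCLT and the lattice symmetry of the conductance law, this should be close to a multiple of the identity for large T. The conductance law, time horizon and sample size are illustrative choices of mine.

```python
import numpy as np

rng = np.random.default_rng(1)
cond = {}                     # quenched environment: one conductance per undirected edge

def w(x, y):
    e = (x, y) if x <= y else (y, x)
    if e not in cond:
        cond[e] = rng.lognormal()            # i.i.d. conductances, all moments finite
    return cond[e]

def vsrw(T):
    """One VSRW path up to time T in the fixed environment, started at the origin."""
    x, t = (0, 0), 0.0
    nbrs = lambda z: [(z[0]+1, z[1]), (z[0]-1, z[1]), (z[0], z[1]+1), (z[0], z[1]-1)]
    while True:
        rates = np.array([w(x, y) for y in nbrs(x)])
        total = rates.sum()
        t += rng.exponential(1.0 / total)    # holding time with rate sum_y w(x, y)
        if t > T:
            return np.array(x)
        x = nbrs(x)[rng.choice(4, p=rates / total)]

T = 100.0
samples = np.array([vsrw(T) for _ in range(1000)]) / np.sqrt(T)
print("empirical covariance of X_T / sqrt(T):\n", np.cov(samples.T))
```

This is a quenched estimate in one fixed environment; running it for several seeds of the environment gives an impression of how little the limiting covariance depends on ω, as the theorem asserts.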
The proof of the above theorem is based on the following method. Construct a corrector χ : Ω_ω × Z^d → R^d such that
\[ \Phi(\omega, x) = x - \chi(\omega, x) \tag{B.2} \]
is an L^ω_X-harmonic function, that is, for P-a.e. ω and every x ∈ Z^d,
\[ (L^\omega_X\Phi)(x) = \sum_{y\sim x}\omega(x,y)\big(\Phi(\omega,y) - \Phi(\omega,x)\big) = 0. \tag{B.3} \]
This implies that
\[ M_t = X_t - \chi(\omega, X_t) \tag{B.4} \]
is a martingale under P^ω_0 for P-a.e. ω, and a QFCLT for the martingale part M can be shown by the Lindeberg–Feller functional CLT. Then one verifies that P-almost surely the corrector is sublinear [4, Proposition 2.6]:
\[ \lim_{n\to\infty}\ \max_{|x|\le n}\ \frac{|\chi(\omega,x)|}{n} = 0. \tag{B.5} \]
We thus get a QFCLT for X_t.

Based on this argument, we prove a more general result in the following corollary.

Corollary B.1.2. For every T > 0 and every bounded continuous function F on the Skorohod space D([0,T], R^d), set ψ_n(x) = E^ω_{⌊nx⌋}[F(X^{(n)})] and ψ_∞(x) = E^{BM}_0[F(Σ·W + x)] with (W, P^{BM}_0) a Brownian motion started at 0. Then, under the assumptions of Theorem B.1.1, we have ψ_n(x) → ψ_∞(x), P-a.s.

Proof. Here we provide a sketch of the proof. For χ defined by (B.2) and (B.3) and any x ∈ R^d,
\[ M_t = X_t - \lfloor nx\rfloor - \chi\big(\omega, X_t - \lfloor nx\rfloor\big) \tag{B.6} \]
is a martingale under P^ω_{⌊nx⌋} for P-a.e. ω. The rest of the proof is the same as that of [4, Theorem 1.3].

The second result about random walks in random environment is the heat kernel estimate for the transition density. Recall that the heat kernel associated with L^ω_X is defined by
\[ p^\omega(t,x,y) := P^\omega_x[X_t = y], \tag{B.7} \]
and the one associated with L^ω_Y is defined by
\[ q^\omega(t,x,y) := P^\omega_x[Y_t = y]/u^\omega(y). \tag{B.8} \]

To state the result for q^ω(t,x,y), we introduce space-averaged ℓ^p-norms on functions f : A → R, for any non-empty finite A ⊂ V and p ∈ [1,∞), by the usual formula
\[ \|f\|_{p,A} := \Big(\frac{1}{|A|}\sum_{x\in A}|f(x)|^p\Big)^{1/p} \quad\text{and}\quad \|f\|_{\infty,A} := \max_{x\in A}|f(x)|. \tag{B.9} \]
We also need the following assumption on the ergodicity of the conductances. To state the assumption, with u^ω and v^ω defined in (3.2), we set for any x ∈ V
\[ \bar u^\omega_p(x) := \limsup_{n\to\infty}\|u^\omega\|_{p,B(x,n)} \quad\text{and}\quad \bar v^\omega_q(x) := \limsup_{n\to\infty}\|v^\omega\|_{q,B(x,n)}. \tag{B.10} \]

Assumption B.1.3. There exist p, q ∈ (1, +∞] with
\[ \frac{1}{p} + \frac{1}{q} < \frac{2}{d} \tag{B.11} \]
such that
\[ \bar u^\omega_p = \sup_{x\in V}\bar u_p(x) < \infty \quad\text{and}\quad \bar v^\omega_q = \sup_{x\in V}\bar v_q(x) < \infty. \tag{B.12} \]
In particular, for every x ∈ V there exists N̄(x, ω) > 2 such that
\[ \sup_{n\ge\bar N(x)}\|u^\omega\|_{p,B(x,n)} \le 2\bar u^\omega_p(x) \quad\text{and}\quad \sup_{n\ge\bar N(x)}\|v^\omega\|_{q,B(x,n)} \le 2\bar v^\omega_q(x). \tag{B.13} \]

Theorem B.1.3 ([3, Theorem 1.6]). Suppose that Assumption B.1.3 holds. Then there exist constants c_i(d, p, q, ū_p, v̄_q) such that for any given t and x with √t ≥ N̄(x,ω) and all y ∈ V the following hold:
• if d(x,y) ≤ c₁t, then
\[ q^\omega(t,x,y) \le c_2\, t^{-d/2}\exp\big(-c_3\, d(x,y)^2/t\big); \tag{B.14} \]
• if d(x,y) ≥ c₁t, then
\[ q^\omega(t,x,y) \le c_2\, t^{-d/2}\exp\big(-c_4\, d(x,y)\big(1\vee\ln(d(x,y)/t)\big)\big). \tag{B.15} \]

To state the upper bounds on the heat kernel of the VSRW, p^ω(t,x,y), we need to introduce the distance d_ω defined by
\[ d_\omega(x,y) := \inf_{\gamma}\Big\{\sum_{i=0}^{l_\gamma-1} 1\wedge\omega(z_i,z_{i+1})^{-1/2}\Big\}, \tag{B.16} \]
where the infimum is taken over all paths γ = (z_0, …, z_{l_γ}) connecting x and y.

We denote by B̃(x,r) the closed ball with center x and radius r with respect to d_ω, that is, B̃(x,r) := {y ∈ V : d_ω(x,y) ≤ r}. Notice that for the usual graph distance d we have d_ω(x,y) ≤ d(x,y), and therefore B(x,r) ⊂ B̃(x,r), where B(x,r) is the closed ball with center x and radius r with respect to d. Moreover, we define for any x ∈ V
\[ \tilde u_p(x) := \limsup_{n\to\infty}\|u^\omega\|_{p,\tilde B(x,n)} \quad\text{and}\quad \tilde v_q(x) := \limsup_{n\to\infty}\|v^\omega\|_{q,\tilde B(x,n)}. \tag{B.17} \]

Assumption B.1.4. There exist p, q ∈ (1, +∞] with
\[ \frac{1}{p-1} + \frac{1}{q} < \frac{2}{d} \tag{B.18} \]
such that
\[ \tilde u_p = \sup_{x\in V}\tilde u_p(x) < \infty \quad\text{and}\quad \tilde v_q = \sup_{x\in V}\tilde v_q(x) < \infty. \tag{B.19} \]
In particular, for every x ∈ V there exists Ñ(x,ω) > 2 such that
\[ \sup_{n\ge\tilde N(x)}\|u^\omega\|_{p,\tilde B(x,n)} \le 2\tilde u_p(x) \quad\text{and}\quad \sup_{n\ge\tilde N(x)}\|v^\omega\|_{q,\tilde B(x,n)} \le 2\tilde v_q(x). \tag{B.20} \]
Theorem B.1.4 ([3, Theorem 1.10]). Suppose that Assumption B.1.4 holds. Then there exist constants c_i(d, p, q, ũ_p, ṽ_q) such that for any given t and x with √t ≥ Ñ(x,ω) and all y ∈ V the following hold:
• if d_ω(x,y) ≤ c₅t, then
\[ p^\omega(t,x,y) \le c_6\, t^{-d/2}\exp\big(-c_7\, d_\omega(x,y)^2/t\big); \tag{B.21} \]
• if d_ω(x,y) ≥ c₅t, then
\[ p^\omega(t,x,y) \le c_6\, t^{-d/2}\exp\big(-c_8\, d_\omega(x,y)\big(1\vee\ln(d_\omega(x,y)/t)\big)\big). \tag{B.22} \]

A straightforward corollary of Theorems B.1.3 and B.1.4 is as follows.

Corollary B.1.5. Suppose that Assumptions B.1.3 and B.1.4 hold. Then there exist constants C(d, p, q, ũ_p, ṽ_q) and N(x,ω) such that for any given t and x with √t ≥ N(x,ω) and all y ∈ V,
\[ q^\omega(t,x,y) \le C\, t^{-d/2}; \tag{B.23} \]
\[ p^\omega(t,x,y) \le C\, t^{-d/2}. \tag{B.24} \]

The third result is the parabolic Harnack inequality. For brevity we introduce M^ω_{x_0,n} := 1 ∨ (‖u^ω‖_{1,B(n)}/‖u^ω‖_{1,B(n/2)}).

Theorem B.1.6 ([5, Theorem 1.4]). For any x₀ ∈ V, t₀ ≥ 0 and for n real with n ≥ 1, let Q(n) = [t₀, t₀+n²] × B(x₀, n). Suppose that u > 0 is caloric on Q(n), i.e. ∂_t u − L^ω_Y u = 0. Then for any p, q ∈ (1,∞) with
\[ \frac{1}{p} + \frac{1}{q} < \frac{2}{d} \tag{B.25} \]
there exists C_{PH} = C_{PH}(M^ω_{x_0,n}, ‖u^ω‖_{p,B(x_0,n)}, ‖v^ω‖_{q,B(x_0,n)}) such that
\[ \max_{(t,x)\in Q_-} u(t,x) \le C_{PH}\ \min_{(t,x)\in Q_+} u(t,x), \tag{B.26} \]
where Q_- = [t₀ + ¼n², t₀ + ½n²] × B(x₀, ½n) and Q_+ = [t₀ + ¾n², t₀ + n²] × B(x₀, ½n). The constant C_{PH} is more explicitly given by
\[ C_{PH}\big(M^\omega_{x_0,n}, \|u^\omega\|_{p,B(x_0,n)}, \|v^\omega\|_{q,B(x_0,n)}\big) = c_1\exp\Big(c_2\big(M^\omega_{x_0,n}\,(1\vee\|u^\omega\|_{p,B(x_0,n)})\,(1\vee\|v^\omega\|_{q,B(x_0,n)})\big)^{\kappa}\Big) \tag{B.27} \]
for some positive c_i = c_i(d,p,q) and κ = κ(d,p,q).

Remark B.1.7. Notice that the definition of a caloric function is based on the operator L^ω_Y. A typical caloric function is the heat kernel of the CSRW, q^ω(t,x,y). A similar result with respect to L^ω_X is given in Remark 1.5 of [5].

The parabolic Harnack inequality is powerful and has a number of important consequences. For example, it immediately provides the near-diagonal estimate on the heat kernel.

Proposition B.1.8. Let t > 0 and x₁ ∈ V. Then for any x₂ ∈ V,
\[ q^\omega(t,x_1,x_2) \le \frac{C_{PH}}{\|u^\omega\|_{1,B(x_1,\sqrt t/2)}}\, 2^d\, t^{-d/2}\, P^\omega_{x_2}\Big[Y_{3t/2}\in B\big(x_1,\tfrac12\sqrt t\big)\Big]. \tag{B.28} \]

Proof. Given x₁, x₂ ∈ V and t > 0, we apply the PHI to the caloric function q^ω(·,·,x₂) on [t₀, t₀+t] × B(x₁, √t) with t₀ = ¾t. This yields, for any z ∈ B(x₁, ½√t),
\[ q^\omega(t,x_1,x_2) \le C_{PH}\, q^\omega\big(\tfrac32 t, z, x_2\big), \tag{B.29} \]
with C_{PH} = C_{PH}(M^ω_{x_0,√t}, ‖u^ω‖_{p,B(x_0,√t)}, ‖v^ω‖_{q,B(x_0,√t)}). Hence, multiplying both sides by u^ω and summing over all z ∈ B(x₁, ½√t) gives
\[ q^\omega(t,x_1,x_2) \le \frac{C_{PH}}{u^\omega\big(B(x_1,\tfrac12\sqrt t)\big)}\, P^\omega_{x_2}\Big[Y_{3t/2}\in B\big(x_1,\tfrac12\sqrt t\big)\Big] = \frac{C_{PH}}{\|u^\omega\|_{1,B(x_1,\sqrt t/2)}}\, 2^d\, t^{-d/2}\, P^\omega_{x_2}\Big[Y_{3t/2}\in B\big(x_1,\tfrac12\sqrt t\big)\Big]. \tag{B.30} \]

Remark B.1.9. The above near-diagonal estimate on the heat kernel differs from the heat kernel estimate of Theorem B.1.3 in two ways. The near-diagonal estimate holds for all t > 0, while Theorem B.1.3 requires t to be large enough. On the other hand, the constants in Theorem B.1.3 are universal, while the ones in the above estimate depend on x₁. In fact, since the above estimate works when t is small, it is useful when integrating the heat kernel to obtain the Green function.
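The on-diagonal bound (B.24) is easy to visualize numerically. The following sketch is an illustration of mine, with an i.i.d. lognormal environment and a 64×64 periodic box as convenient assumptions: it builds the VSRW generator L^ω_X as a sparse matrix, applies e^{tL^ω_X} to a point mass with scipy, and prints t^{d/2} p^ω(t,0,0), which should be roughly constant over the intermediate time range where the walk does not yet feel the torus.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

rng = np.random.default_rng(0)
n, d = 64, 2                                  # periodic box of side n in dimension d = 2
N = n * n
idx = lambda i, j: (i % n) * n + (j % n)

rows, cols, vals = [], [], []
for i in range(n):
    for j in range(n):
        for (di, dj) in ((1, 0), (0, 1)):     # each edge handled once
            w = rng.lognormal()               # i.i.d. conductance on this edge
            a, b = idx(i, j), idx(i + di, j + dj)
            rows += [a, b, a, b]
            cols += [b, a, a, b]
            vals += [w, w, -w, -w]            # off-diagonal w, diagonal -sum of w
L = sp.coo_matrix((vals, (rows, cols)), shape=(N, N)).tocsr()   # VSRW generator

delta = np.zeros(N); delta[idx(0, 0)] = 1.0
for t in (1.0, 2.0, 4.0, 8.0, 16.0, 32.0):
    p_t = expm_multiply(t * L, delta)         # p(t, 0, .) since L is symmetric
    print(t, t ** (d / 2) * p_t[idx(0, 0)])   # roughly constant in the diffusive window
```

On a finite torus this behaviour can only be seen for times well below the mixing time; for t of order n² the kernel flattens out to the uniform value 1/N instead.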
The last result is a local limit theorem for the CSRW Y. It roughly describes how the transition probabilities of the random walk Y can be rescaled in order to obtain the Gaussian transition density of the Brownian motion. Write
\[ k_t(x) = \frac{1}{\sqrt{(2\pi t)^d\det\Sigma^2}}\, \exp\big(-x\cdot(\Sigma^2)^{-1}x/2t\big) \tag{B.31} \]
for the Gaussian heat kernel with covariance matrix Σ² determined in Theorem B.1.1. For x ∈ R^d write ⌊x⌋ = (⌊x₁⌋, ⌊x₂⌋, …, ⌊x_d⌋).

Theorem B.1.10 ([5, Theorem 1.11]). Suppose that d ≥ 2 and Assumption B.1.2 holds. Let p, q ∈ (1,∞] be such that 1/p + 1/q < 2/d and assume that
\[ E\big[(\omega(e))^p\big] < \infty \quad\text{and}\quad E\big[(1/\omega(e))^q\big] < \infty \tag{B.32} \]
for any e ∈ E_d. Let T₂ > T₁ > 0 and K > 0. Then
\[ \lim_{n\to\infty}\ \sup_{|x|\le K}\ \sup_{t\in[T_1,T_2]}\ \big|\, n^d q^\omega(n^2 t, 0, \lfloor nx\rfloor) - a\, k_t(x)\,\big| = 0, \qquad \mathbb{P}\text{-a.s.}, \tag{B.33} \]
with a = 1/E_{µ̃}[u^ω(0)].

B.2 Potential theory for random conductance models

The proof of Lemma 3.2.1 leads us to the study of potential theory for operators depending on a random environment that fall into the class of random conductance models. We have borrowed some of the notation and results from the paper of Biskup and Spohn [16]. Recall that in Chapter 3 we say a function g : Ω_ω × Z^d → R satisfies the shift covariance property if
\[ g(\omega, x+e) - g(\omega, x) = g(\tau_x\omega, e), \tag{B.34} \]
where x ∈ Z^d and e is one of the unit vectors from the origin, and
\[ g(\omega, 0) = 0. \tag{B.35} \]
We also define a space shift by z ∈ Z^d to be the map τ_z : Ω_ω → Ω_ω,
\[ (\tau_z\omega)(x,y) := \omega(x+z, y+z), \qquad \forall\{x,y\}\in E. \tag{B.36} \]

Consider a translation-invariant probability measure ν on Ω_ω = (R_+)^E. Let E_ν be the expectation with respect to ν. A function h : Ω_ω → R is called local if it depends on only finitely many coordinates. Let L²(ν) be the closure of the set of all local functions in the topology induced by the inner product
\[ \langle h, g\rangle := E_\nu\big(h(\omega)g(\omega)\big). \tag{B.37} \]
Let E := {ê₁, …, ê_d} denote the set of unit vectors in Z^d. For each direction ê_j, the unitary map T_j on L²(ν) is defined by
\[ (T_j h) := h\circ\tau_{\hat e_j}, \qquad j = 1,\dots,d. \tag{B.38} \]
Apart from square integrable functions, we will also need functions on vector fields. A function on the vector fields is a measurable function u : Ω_ω × E → R or Ω_ω × E → R^d, depending on the context. To simplify notation, we write u₁, …, u_d for u(·, ê₁), …, u(·, ê_d).

Notice that we chose the index set E to be the set of positive unit vectors. For the negative unit vectors we define
\[ u(\omega, -e) = -u(\tau_{-e}\omega, e), \qquad e\in E. \tag{B.39} \]
This definition is motivated by the cycle condition, defined below, applied to the trivial cycles crossing only a single edge.

Let L²_vec(ν) be the set of all vector fields with (u,u) < ∞, where (·,·) denotes the inner product
\[ (u, v) := E_\nu\Big(\sum_{b\in B}\omega_b\, u(\omega, b)\cdot v(\omega, b)\Big). \tag{B.40} \]
For the j-th direction, define the gradient by
\[ \nabla_j h := T_j h - h. \tag{B.41} \]
For each local function h ∈ L²(ν), define
\[ \nabla h = (\nabla_1 h, \nabla_2 h, \cdots, \nabla_d h). \tag{B.42} \]
Then ∇h ∈ L²_vec(ν). We denote by L²_∇(ν) the closure of the set of gradients of local functions in the topology induced by the above inner product.

Lemma B.2.1 ([16, Lemma 5.2]). Let u ∈ L²_∇(ν). Then u satisfies the cycle condition
\[ \sum_{j=1}^{n} u(\tau_{x_j}\omega,\, x_{j+1} - x_j) = 0 \tag{B.43} \]
for any finite (nearest-neighbor) cycle (x_0, x_1, …, x_n = x_0) on Z^d. In particular, there exists a shift-covariant function ū : Ω_ω × Z^d → R^d such that u(ω,b) = ū(ω,b) for every b ∈ B.

The next lemma characterizes the functions in (L²_∇)^⊥.

Lemma B.2.2 ([16, Lemma 5.3]). For u ∈ L²_vec(ν), let L u be the function in L²(ν) defined by
\[ (\mathcal{L}u)(\omega) := \sum_{b\in B}\big[\omega_b\, u(\omega, b) - (\tau_{-b}\omega)_b\, u(\tau_{-b}\omega, b)\big], \tag{B.44} \]
where −b is the coordinate vector opposite to b.
We then have
\[ u\in (L^2_\nabla)^{\perp} \iff \mathcal{L}u = 0, \quad \nu\text{-a.s.} \tag{B.45} \]
If u satisfies the cycle condition and ū is its extension, then (L u)(τ_xω) = L^ω_X ū(ω, x), where L_X is defined in (3.12).

Clearly, all u ∈ L²_∇ are shift-covariant and have zero mean. A question which naturally arises is whether every shift-covariant zero-mean u is in L²_∇. In [16], for the case where the weights are bounded away from zero and infinity, the answer is affirmative. In our case, however, we need one more condition on u.

Theorem B.2.3. Suppose ν is ergodic. Suppose u ∈ L²_vec obeys the cycle condition (B.43) and E_ν u = 0. Furthermore, suppose u is square integrable in each component, in the sense that E_ν|u(ω,x)|² < ∞ for all x with |x| = 1. Then u ∈ L²_∇.

Proof. The proof is similar to that of Theorem 5.4 in [16]. There, however, the boundedness of the ω_b's away from zero and infinity ensures that u ∈ L²_vec(ν) if and only if all of its components are in L²(ν). Without such compact bounds this equivalence no longer holds. If we add the condition that u is square integrable in each component, the rest of the proof is the same as that of Theorem 5.4 in [16].

We still have to supply the proof of Lemma 3.2.1.

Proof of Lemma 3.2.1. Since g is square integrable as a vector field, g ∈ L²_vec. As g is shift-covariant, square integrable in each component and has zero expectation, Theorem B.2.3 implies that g ∈ L²_∇. However, g is also harmonic and so, in turn, Lemma B.2.2 forces g ∈ (L²_∇)^⊥. Thus g = 0, as desired.
