Decomposition of free fields and structural stability of dynamical systems for renormalization group analysis

by Roland Bauerschmidt

Bachelor of Science in Physics, Eidgenössische Technische Hochschule Zürich, 2007
Master of Science in Physics, Eidgenössische Technische Hochschule Zürich, 2009

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate Studies (Mathematics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

August 2013

© Roland Bauerschmidt 2013

Abstract

The main results of this thesis concern the spatial decomposition of Gaussian fields and the structural stability of a class of dynamical systems near a non-hyperbolic fixed point. These are two problems that arise in a renormalization group method for random fields and self-avoiding walks developed by Brydges and Slade. This renormalization group program is outlined in the introduction of this thesis with emphasis on the relevance of the problems studied subsequently.

The first original result is a new and simple method to decompose the Green functions corresponding to a large class of interesting symmetric Dirichlet forms into integrals over symmetric positive semi-definite and finite range (properly supported) forms that are smoother than the original Green function. This result gives rise to multiscale decompositions of the associated free fields into sums of independent smoother Gaussian fields with spatially localized correlations. Such decompositions are the point of departure for renormalization group analysis. The novelty of the result is the use of the finite propagation speed of the wave equation and a related property of Chebyshev polynomials. The result improves several existing results and also gives simpler proofs.

The second result concerns structural stability, with respect to contractive third-order perturbations, of a certain class of dynamical systems near a non-hyperbolic fixed point. We reformulate the stability problem in terms of the well-posedness of an infinite-dimensional nonlinear ordinary differential equation in a Banach space of carefully weighted sequences. Using this, we prove the existence and regularity of flows of the dynamical system which obey mixed initial and final boundary conditions. This result can be applied to the renormalization group map of Brydges and Slade, and is an ingredient in the analysis of the long-distance behavior of four dimensional weakly self-avoiding walks using this approach.

Preface

Chapter 1 is an introduction and motivation for the problems studied in the remainder of the thesis. No originality is claimed and, to give an informative exposition, we explain a number of ideas from a number of references mentioned, but without explicit reference to the origin of each single idea.

Chapter 2, in slightly modified form, has been accepted for publication in the journal Probability Theory and Related Fields; see reference [8].

Chapter 3 is based on joint work with David Brydges and Gordon Slade; a version of it has been accepted for publication in the journal Annales Henri Poincaré; see reference [11].

Chapter 4 discusses ideas developed together with David Brydges and Gordon Slade.

Table of Contents

Abstract
Preface
Table of Contents
List of Symbols
Acknowledgments
1 Introduction
  1.1 Outline and preliminaries
  1.2 Random polymers
  1.3 Random fields and local time
  1.4 The renormalization group
2 Decomposition of free fields
  2.1 Introduction and main result
  2.2 Proof of main result
  2.3 Extensions
3 Structural stability of a class of dynamical systems
  3.1 Introduction and main result
  3.2 The quadratic flow
  3.3 Proof of main result
4 Outlook
  4.1 The weakly self-avoiding walk with contact attraction
  4.2 Logarithmic corrections to scaling behavior
Bibliography
Appendices
A Perturbation theory and coordinates of the renormalization group
  A.1 Flow of coupling constants
  A.2 Bounds on the coefficients
  A.3 Transformation

List of Symbols

Chapter 1
(X, E)        graph with vertices X and edges E
w_t           position of a walk on a graph at time t
L_x           local time of a walk, see (1.7)
φ_x           field on a graph, for example, φ : X → R
H_t(L)        energy (or Hamilton function) of a walk of length t as a function of its local time, see (1.9)

Chapter 2
ϕ             a smooth function on R with rapid decay, ϕ ≥ 0, and supp(ϕ̂) ⊂ [−1, 1], where ϕ̂ is the Fourier transform, see Lemma 2.2.5
E             Dirichlet form
L             generator of Dirichlet form, see (2.20)
Φ             Green form corresponding to a Dirichlet form, see (2.24)
a             coefficients of a generator, see Examples 2.1.3–2.1.4
f̂(ξ)          Fourier transform of a function f
(P_{γ,θ})     finite propagation speed condition of the wave equation associated to the generator of the Dirichlet form L
(P∗_{θ,B})    discrete finite propagation speed condition associated to L

Chapter 3
j                 discrete time parameter
X_j = K_j ⊕ R     state space of dynamical system at time j
x_j = (K_j, V_j)  position of dynamical system at time j
Φ_j               evolution map Φ_j : X_j → X_{j+1} of dynamical system at time j
ϕ̄_j               quadratic part of evolution map, see (3.1)
ḡ_j               solution to the recursion relation ḡ_{j+1} = ḡ_j − β_j ḡ_j^2
X_w               space of sequences with weight w, see Definition 3.3.1
w, r              specific choices of weights for sequences, see (3.95)

Acknowledgments

I thank my research advisers, David Brydges and Gordon Slade, without whom this thesis would not have been possible. I feel fortunate that I was introduced to their area of research, and their support throughout the course of this thesis, in academic and non-academic matters, was indispensable for its success.
I also thank Joel Feldman for serving on my thesis committee, Tyler Helmuth for proofreading the introduction, and the members of the Probability Group at the University of British Columbia for having provided an inspiring research environ- ment. Part of the research that led to this thesis was carried out during stays at the Institute Henri Poincaré in Paris and the Department of Mathematics and Statistics at McGill University in Montreal, and I thank these institutions for their hospitality during my stays. I also gratefully acknowledge financial support from several sources, including the University of British Columbia, the Government of British Columbia, and the Fondation Mathématique Science de Paris. Finally, I thank my family and friends for their emotional and financial support during the course of my studies, without which I could not have carried these out. vii Chapter 1 Introduction 1.1 Outline and preliminaries 1.1.1 Outline The main results of this thesis concern the spatial decomposition of Gaussian fields and the structural stability of a class of dynamical systems near a non-hyperbolic fixed point, and are given in Chapters 2 and Chapter 3, respectively. The primary motivation for the study of the these problems is an application to a renormaliza- tion group method for the analysis of four-dimensional weakly self-avoiding walks developed by Brydges and Slade. However, the results of Chapter 2–3 are not spe- cific to the application to self-avoiding walks, and we expect that they may also be useful for renormalization group analysis of other models. The aim of the present chapter is to sketch the background of the problems studied in Chapter 2–3, in particular their advent in the renormalization group con- text. In Section 1.2, some aspects of random polymer models are introduced; these models of phenomena from polymer chemistry are our primary motivation. Their relation to the problems studied in this thesis is indirect, however, via random fields which are introduced in Section 1.3. Random fields are related to a broad range of models of statistical mechanics, for example the description of interfaces describ- ing phase separation and models for ferromagnetism. In the description of random polymers, they appear as the local time of a perturbed Markov process. The main results of this thesis are discussed in Section 1.4. In statistical mechanics, the behavior at large distances of a model is of main interest. For random polymer and random field models, the large distance behavior is notoriously difficult to study, however, because both classes of models involve a large number of strongly coupled degrees of freedom. The renormalization group, which is discussed in Section 1.4, is a program to study the large-distance behavior of random fields, pioneered in this sense by the theoretical physicist Wilson. The mathematical realization of Wilson’s ideas has been a major challenge since their seminal proposal. We discuss some of the difficulties involved in it, and then sketch important aspects of one of several approaches to resolve these difficulties, initi- ated by Brydges and Yau, and much generalized and improved in recent work of 1 1.1. Outline and preliminaries Brydges and Slade, based on work of many others. The emphasis of this discussion is on how, specifically, the problems studied in the main part of this thesis pertain to this program, but we also aim to give an introduction to the general ideas. 1.1.2 Preliminaries General notation. 
We use the usual Landau notation: f (t) = o(g(t)) as t → T if lim t→T f (t)/g(t) → 0; (1.1) f (t) = O(g(t)) as t → T if lim sup t→T | f (t)/g(t) | < ∞; (1.2) and also the usual asymptotic notation: f ∼ g as t → T if f (t) = g(t)(1 + o(1)) as t → T (1.3) where T is often 0 or ∞. Limits are abbreviated by f (t±) = lims→t± f (s). The indicator function 1z is given by 1z = 1 if condition z is satisfied and 1z = 0 otherwise. The symbols C and c will mostly denote constants whose values are allowed to change between two occurances. The dependence of a constant on a parameter is sometimes emphasized by a subscript. The letter d is reserved for the dimension of the relevant physical space, i.e., of Zd or Rd , and for metrics (which of the two should be clear from the context). The expectation value of a random variable, φ, is denoted by E(φ). Graphs. It will be convenient at various places to use the language of graphs, but we do not use any non-trivial results from graph theory. We say Γ = (X, E) is a (simple) graph if X is a finite or countable set of vertices and E ⊂ P2(X ) is a set of (undirected) edges, where P2(X ) denotes the set of subsets of X with exactly two elements. The words simple and undirected will be implicit from now on. Vertices will typically be denoted by the letters x and y and edges by the letter e. The edge connecting two vertices x and y is written as xy = yx = {x , y}. The graph distance d(x , y) between two vertices x , y ∈ X is the number of edges of the shortest path between two vertices x and y, if there is one, and ∞ otherwise. That x and y are neighbors, xy ∈ E, is denoted by x ∼ y. All graphs will be assumed to be locally finite, i.e., for any x ∈ X there are only finitely many y ∈ X with x ∼ y. The graphs of primary relevance for this thesis are lattice graphs which can be embedded in Rd or in the torus Td = Rd/Zd . The most important examples are the Euclidean (or hypercubic) lattice Zd with nearest-neighbor edges, i.e., xy ∈ E(Zd ) if |xi − yi | = 1 for exactly one i ∈ {1, . . . , d} and otherwise |xi − yi | = 0, and the discrete torus of side length n, denoted Zdn , for which xy ∈ E(Zdn ) if |xi − yi | = 1 mod n for exactly one i ∈ {1, . . . , d} and otherwise |xi − yi | = 0 mod n. 2 1.2. Random polymers 1.2 Random polymers 1.2.1 The simple random walk Let Γ = (X, E) be a graph. A walk on Γ of length t ∈ (0,∞] is a right-continuous path w : [0, t) → X with finitely many jumps in finite intervals, i.e., d(ws ,ws−) ≤ 1 for all s ∈ (0, t) with equality for only finitely many s in each bounded subinter- val of (0, t). Let Wt denote the set of all walks of length t. An important subclass of walks are discrete walks, denoted W ∗t ⊂ Wt , for which a jump happens at s if and only if s is an integer and 0 < s < t. For t < ∞, each walk w ∈ Wt can be specified uniquely by an integer n ≥ 0, a finite sequence t0 = 0 < t1 < t2 < · · · < tn < tn+1 = t, and an element w∗ ∈ W ∗n as ws = w ∗ n if s ∈ [tn , tn+1). (1.4) The walk w∗ is called the skeleton walk of w and (t1 , . . . , tn ) are the jump times. Let Wx ,t = {w ∈ Wt : w0 = x} and W ∗x ,t = W ∗t ∩ Wx ,t be the sets of (contin- uous and discrete) walks starting at x. There are several natural probability mea- sures on Wx ,t that arise as restrictions of measures on Wx ,∞ and W ∗x ,∞ (stochastic processes). The simple random walk is the discrete Markov process which chooses uniformly from its neighbors at each step. 
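To make the preceding definition concrete, here is a minimal simulation sketch (not part of the thesis) of the discrete simple random walk on Zd. The function name and the parameters d, n_steps, n_samples are illustrative choices; the script checks the exact identity E|w_n|^2 = n, a quantitative form of the diffusive growth discussed next.

```python
import numpy as np

def simple_random_walk(d, n_steps, rng):
    """Discrete-time simple random walk on Z^d started at the origin.

    At each step a coordinate direction and a sign are chosen uniformly,
    matching the uniform choice among the 2d neighbors described above.
    """
    coords = rng.integers(0, d, size=n_steps)      # which coordinate moves
    signs = rng.choice([-1, 1], size=n_steps)      # in which direction
    steps = np.zeros((n_steps, d), dtype=int)
    steps[np.arange(n_steps), coords] = signs
    return np.cumsum(steps, axis=0)                # positions w_1, ..., w_n

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n_steps, n_samples = 3, 1000, 2000
    sq_end = [np.sum(simple_random_walk(d, n_steps, rng)[-1] ** 2)
              for _ in range(n_samples)]
    # For the simple random walk, E|w_n|^2 = n exactly, so the sample mean
    # should be close to n_steps.
    print(np.mean(sq_end), "≈", n_steps)
```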
The constant-speed simple random walk is a continuous Markov process with skeleton walk given by the simple random walk and the times between two jumps distributed independently with exponential distribution with parameter 1. The variable-speed simple random walk likewise has the simple random walk as skeleton walk, but the waiting times between two jumps now have exponential distribution with parameter given by the degree of the vertex before the jump. This can also be interpreted as that each edge has a supply of exponential clocks with parameter 1 and that the next jump is along the edge whose clock rings first. The two continuous processes only differ by rescaling of the time when the graph is regular, i.e., when all vertices have the same degree as is in particular the case for the graph of main interest, Γ= Zd . To illustrate what is understood, consider (any of) the simple random walks on Z d . Then, for any t ≥ 0, λ−1/2wλt → Nt (λ → ∞) (1.5) where the convergence is in distribution and Nt is a vector of independent Gaussian random variables with mean 0 and variance t (or 2dt for the variable-speed walk). This result is essentially the classical central limit theorem. It shows that wt grows typically like √ t as t → ∞. A less precise way of measuring this is the statement that E|wt |2 ∼ t as t → ∞. But much more is understood. It is also a well-known 3 1.2. Random polymers result that if (Bt )t is the Wiener process, a continuous random path [0,∞) → Rd (thus not in W∞) with Gaussian distribution defined by B0 = 0, E(Bt ) = 0, and E(Bit B js ) = δi j min{s, t} for i, j ∈ {1, . . . , d}, that then the convergence of (1.5) holds on a space of paths [0,∞) → Rd , i.e., for all t simultaneously in a sense. Proofs of the latter result, Donsker’s invariance principle, can be found in many textbooks on advanced probability theory; see e.g. reference [105]. It describes the behavior at large distances of the paths of (wt )t . 1.2.2 Polymers and local time In polymer science, a linear polymer is a long chain of molecules (monomers). The simplest mathematical model for a linear polymer is the uniform ensemble, the uniform probability measures on W ∗x ,t , for some graph (in particular for Zd), but it is not a well motivated approximation. For example, polymers should not be able to intersect themselves due to the finite extent of each molecule. A model that takes this into consideration is the strictly self-avoiding walk, the uniform measure on (discrete) walks conditioned on the event that walks do not intersect themselves. For regular graphs, the uniform ensemble and the simple random walk are the same. This has turned out to be an important observation for the study of random polymers. Moreover, in some aspects, the continuous-time random walks have fa- vorable analytic properties over the discrete-time random walk. For regular graphs, the constant- and variable-speed walks are identical, up to rescaling of time by the constant vertex degree, their jump sequences are Poisson processes, and the skele- ton walks are simple random walks, thus uniform when conditioned on the number of jumps. In view of the last aspect, the continuous-time random walks are natural variants of the uniform ensemble. In reference [43], den Hollander gives a broad overview of mathematical mod- els for random polymers. Like the strictly self-avoiding walk, these polymer mod- els for example suppress self-intersections by giving smaller weight to intersecting paths with respect to a reference measure. 
Natural choices for the reference mea- sure P0x ,t are any of the simple random walk models on the regular graphs Zd where d = 1, 2, . . . . These models are then defined by an energy or Hamilton function, Ht : Wt → R, assigning an energy cost to every path, as a probability measure PHx ,t on Wx ,t by PHx ,t (dw) = 1 Z e−Ht (w) P0x ,t (dw) (1.6) where Z = ZHx ,t is a normalizing constant, called the partition function. The mea- sure PHx ,t can be viewed as a kind of Gibbs measure on walks. For a number of interesting models, the energy function is a functional of the local time. The local 4 1.2. Random polymers Figure 1.1: Polymer with one self-intersection and several self-contacts. time of a walk w is given by: Ltx (w) = ∫ t 0 1ws=x ds. (1.7) where we recall the indicator function: 1a=b = 1 if a = b and 1a=b = 0 otherwise. To say that H is a functional of the local time means that there is H : M+(X ) → R, where M+(X ) = {m : X → R+ : ∑x∈X mx < ∞}, such that1 Ht (w) = H (Lt (w)) (w ∈ Wt ). (1.8) For example, an interesting class of Hamilton functions is given by Hβ,γ (L) = β ∑ x∈X L2x − γ ∑ x∈X ∑ y∈X :y∼x LxLy (β, γ ≥ 0). (1.9) This model is known under a number of names. If γ = 0, it is called the weakly self- avoiding walk, soft polymer, discrete Edwards model, and Domb-Joyce model [15, 43, 88], and with self-attraction is added to the name if γ > 0 [43]. The repulsive force (β > 0) models the effect that polymers should not intersect themselves by suppressing self-intersections of walks, as can be seen from the elementary identity ∑ x∈X Ltx (w)2 = ∫ t 0 ∫ t 0 1ws1=ws2 ds1 ds2. (1.10) The (optional) attractive force (γ > 0) models the effect of a solution in which the polymer is immersed, by making it energetically beneficial for a polymer to be in contact with itself (rather than the solution). This can be understood from ∑ x∈X ∑ y∈X :y∼x Ltx (w)Lty (w) = ∫ t 0 ∫ t 0 1ws1∼ws2 ds1 ds2. (1.11) 1Observe that ∑x∈X Ltx (w) = t < ∞ for w ∈ Wt and thus Lt (Wt ) ⊂ M+(X ). 5 1.2. Random polymers Note that the strictly self-avoiding walk is obtained in the limit γ = 0, β → ∞ of the discrete-parameter version of (1.6); see e.g. references [82,88]. It can also be related to the continuous-parameter model, but then the relation is more subtle [26]. Figure 1.2: A trapped self-avoiding walk. Unlike the simple random walks, random polymer models like (1.6) are almost never stochastic processes. For example, it is easy to see that strictly self-avoiding walks can get trapped as shown in Figure 1.2. The parameter t of the measures PHx ,t can therefore not be interpreted as time, but it is rather a measure of the lengths of the polymers described by the measures. In analogy to the classical theory of gases in statistical mechanics, the measures PHx ,t describe ensembles of walks (which take the role of particle configurations of a gas) with fixed length (taking the role of a fixed number of particles in the gas). As a consequence, the standard tools for the analysis of stochastic process are not available to study the measures (1.6), making their analysis decidedly more difficult than that of simple random walks. It turns out that random polymer models depend sensitively on the presence of an interaction given as in (1.6). For example, it is believed (but only proved in dimension d = 1 so far; but see Section 4.2) that even arbitrarily small values of β > 0 can change the asymptotic behavior of the walks drastically compared to the case β = 0. 
On the other hand, the behavior for all β > 0 is believed to be similar.

1.2.3 Asymptotic behavior and universality

From now on, the discussion will be restricted to polymer models on the Euclidean lattice Zd; we also consider only spatially homogeneous interactions, i.e., interactions that are invariant under translations like (1.9). To simplify the notation, we then set the starting point to 0 and drop it from the notation, for example in (1.6). The perhaps most interesting mathematical problem about random polymers is to determine the typical growth of the distance between the starting and endpoint with its length, t. For the simple random walk, this, and almost any other question, are very well understood, for example by (1.5). However, for self-interacting random polymers (H ≠ 0), it is in general a difficult (open) problem to determine the growth of the end-to-end distance EHt |wt|^2. It is a general conjecture that the end-to-end distance is asymptotically described by a power law, i.e., that for β, γ ≥ 0, there are constants c > 0 and ν ≥ 0 such that

EHt |wt|^2 ∼ c t^{2ν} as t → ∞   (1.12)

where EHt(F) is the expectation value of a random variable F = F(w) under PHt. For the simple random walk, the exponent is ν = 1/2, in any dimension. It is believed that, for general polymers, the constant c > 0 depends on all of d, β, and γ, but that the exponent ν is universal, i.e., constant for appropriate ranges of β and γ and also independent of the lattice of a given dimension d. It does in general depend on d. In Figure 1.3, the conjectured phase diagram for the weakly self-avoiding walk with self-attraction is shown; it was conjectured by v.d. Hofstad and Klenke [110].

[Figure 1.3: The phase diagram conjectured (for discrete time) in d ≥ 2, from [110]; the regions of the (β, γ) plane are labeled by the conjectured exponents ν = 1/d, ν = 1/(1 + d), ν = 0, ν = νθ, and ν = νSAW.]

The kind of universality described in the last paragraph is one of the paradigms in equilibrium statistical mechanics, yet in general only understood in few specific examples mathematically. For the self-avoiding walk, without self-attraction, seminal results by Brydges and Spencer [20] and by Hara and Slade [73–75] provide an essentially complete picture in dimensions five and higher. In particular, these results include the result EHt |wt|^2 ∼ ct which is the same behavior as for the simple random walk, except for the constant. In dimension two, there is strong evidence that the long-distance behavior of (strictly) self-avoiding walks is described by the so-called Schramm-Loewner Evolution [83]. This is a subject of intense research, but proofs are not known at the time this thesis is written. The weakly self-avoiding walk, even without self-attraction, seems even more difficult to understand in two dimensions, but it is believed to be in the same universality class as the strictly self-avoiding walk in any dimension. The term universality class refers to the class of models that share the same scaling limit (or at least the same critical exponents). The validity of the former conjecture is known only for dimension one [111] and, as discussed, in dimensions five and above without self-attraction. For the physically most interesting dimension three, only numerical estimates of the values for the critical exponents are known [41]. Dimension four is expected to be critical, in the sense that the behavior of self-avoiding walks changes from behavior similar to that of the simple random walk to complex behavior as d gets smaller through 4.
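The following crude Monte Carlo sketch (not from the thesis) illustrates the quantity in (1.12) for a discrete-time weakly self-avoiding (Domb-Joyce) walk rather than the continuous-time model: simple random walk paths are reweighted by exp(−β × number of self-intersection pairs), and the end-to-end distance is estimated by importance sampling. This is only practical for short walks and small β; the function names and parameters are illustrative assumptions, not part of the thesis.

```python
import numpy as np
from collections import Counter

def srw_path(d, n, rng):
    """One discrete-time simple random walk path w_0, ..., w_n on Z^d."""
    coords = rng.integers(0, d, size=n)
    signs = rng.choice([-1, 1], size=n)
    steps = np.zeros((n + 1, d), dtype=int)
    steps[np.arange(1, n + 1), coords] = signs
    return np.cumsum(steps, axis=0)

def intersection_pairs(path):
    """Number of unordered pairs of times at which the walk is at the same site."""
    counts = Counter(map(tuple, path))
    return sum(c * (c - 1) // 2 for c in counts.values())

def weakly_saw_msd(d, n, beta, n_samples, rng):
    """Importance-sampling estimate of E^H|w_n|^2 for the discrete-time weakly
    self-avoiding walk, with the simple random walk as reference measure."""
    num = den = 0.0
    for _ in range(n_samples):
        path = srw_path(d, n, rng)
        w = np.exp(-beta * intersection_pairs(path))   # Gibbs weight e^{-H(w)}
        num += w * np.sum(path[-1] ** 2)
        den += w
    return num / den

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    d, n = 2, 40
    for beta in (0.0, 0.5):
        print(beta, weakly_saw_msd(d, n, beta, 20_000, rng))
    # beta = 0 reproduces the simple random walk value n; for beta > 0 the
    # estimate is expected to be larger, reflecting the swelling of the walk
    # caused by the self-repulsion.
```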
The critical dimension. Many models of discrete equilibrium statistical mechan- ics can be defined, by the “same” specification like (1.9), on an (essentially) arbi- trary graph. It is a paradigm of statistical mechanics that when such models are defined on Zd , there is a critical dimension, dc , such that for d < dc , the behavior is complex, meaning for self-avoiding walks, for example, that it is different from that of the simple random walk, while for d > dc , the model has the so-called mean-field behavior, meaning for self-avoiding walks that the behavior is the same as that of the simple random walk. The term mean-field stems from analogy with models of ferromagnetism, but it is standard terminology for more general models. For self-avoiding walk models, there is overwhelming evidence that the critical di- mension is dc = 4. In the critical dimension, the behavior is expected to be that of the mean-field model with universal logarithmic corrections. For example, for self-avoiding walks (with additional small self-attraction allowed), it is conjectured that, in d = 4, in the phase where β > 0, γ ≪ β, EHt |wt |2 ∼ ct(log t) 1 4 (t → ∞); (1.13) see e.g. references [17,28,29,50,88]. The exponent 14 is expected not to depend on the “details” of the model. Brydges and Slade have developed methods by which a proof of (1.13) seems within reach (but not reached, see also Section 4.2). The work of this thesis is a contribution to this program which we will therefore discuss in some detail. 8 1.2. Random polymers 1.2.4 The two-point function as a Laplace transform Let Ea (F) be the expectation value of a random variable F = F (w) with respect to the simple random walk probability distribution P0a on walks starting at a, let cHt (a, b) = Ea (e−H (L t )1wt=b ) (a, b ∈ X ) (1.14) be the probability weight function of the endpoint b for the ensemble of walks of length t that start at a, and set cHt (x) = cHt (0, x) on Zd . The main goal in the study of random polymers is to understand this function “very well,” in the limit t → ∞. For example, this would enable one to understand EHt |wt |2 = ∑ x c H t (x) |x |2∑ x c H t (x) . (1.15) An approach to understanding cHt (x) is via its Laplace transform in t, GHµ (x) = ∫ ∞ 0 E(e−H (Lt )1wt=x )e−µt dt (µ ∈ R), (1.16) which is called the two-point function for the random polymer described by Hamil- tonian H . To recover information about ct (x), as t → ∞, from Gµ (x), it is partic- ularly important to understand Gµ (x) as the minimal value, µ = µc , above which the Laplace transform converges is approached. To illustrate this, it is instructive to consider the simple random walk with, say, variable-speed. In this case, the two-point function is the Green function of −∆+ µ where ∆ is the graph Laplace operator given by ∆ f (x) = ∑ y:y∼x ( f (y) − f (x)). (1.17) By use of the Fourier transform, it is straightforward to establish the exact relations ∑ x Gµ (x) = 1 µ , ∑ x |x |2Gµ (x) = 2d µ2 (µ > 0). (1.18) In particular, µc = 0 and the Laplace transforms of the numerator and the denomi- nator in (1.15) can be inverted explicitly to obtain Et |wt |2 = 2d · t (1.19) as explained in [28, p. 526]. Even though it may not be the most efficient way to compute Et |wt |2 for the simple random walk by analysis of the two-point function, 9 1.3. Random fields and local time as the result is elementary there, this approach has proven fruitful for the analysis of interacting models as we will explain (see also [28, 29]). 
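As a concrete check of the identities (1.18), the following sketch (not from the thesis) computes the two-point function of the variable-speed simple random walk, i.e., the Green function of −∆ + µ, by Fourier transform. A large discrete torus is used in place of Zd, so the second-moment identity holds only up to finite-size effects; the parameters d, n, mu are illustrative choices.

```python
import numpy as np

def green_function_torus(d, n, mu):
    """Green function G = (-Delta + mu)^{-1} delta_0 of the graph Laplacian
    on the discrete torus (Z_n)^d, computed via the Fourier transform."""
    k = 2 * np.pi * np.arange(n) / n
    grids = np.meshgrid(*([k] * d), indexing="ij")
    # Fourier symbol of -Delta + mu at momentum (k_1, ..., k_d)
    symbol = mu + sum(2 * (1 - np.cos(g)) for g in grids)
    return np.fft.ifftn(1.0 / symbol).real

if __name__ == "__main__":
    d, n, mu = 2, 256, 0.05
    G = green_function_torus(d, n, mu)
    # squared distance to the origin, measured around the torus
    c = np.minimum(np.arange(n), n - np.arange(n)).astype(float)
    grids = np.meshgrid(*([c] * d), indexing="ij")
    dist2 = sum(g ** 2 for g in grids)
    print(G.sum(), "≈", 1 / mu)                     # sum_x G_mu(x) = 1/mu
    print((dist2 * G).sum(), "≈", 2 * d / mu ** 2)  # sum_x |x|^2 G_mu(x) ≈ 2d/mu^2
```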
The two-point function is however also of independent interest. For the simple random walk, it is possible to determine the asymptotic behavior of the two-point function for fixed value of µ. For reference, we record from [64] that if2 d > 2, Gµ (x) ∼  c |x |d−2 (µ = 0) cµ |x |(d−1)/2 e −M (µ)b(x/ |x |)·x (µ > 0) (|x | → ∞). (1.20) where b : Sd−1 → Rd and the rate of exponential decay satisfies M = M (µ) ∼ √µ as µ ↓ 0. It is related to the divergence of (1.18); see e.g. [88, Appendix A]. The parameter µ ≥ 0 is also called the killing rate of the simple random walk because it has an interpretation in terms of random walks that die (stop) after a finite random time, if µ > 0. In the context of the next section, µ is also called the square of the mass and we write µ = m2 also in the context of the simple random walks. It turns out that questions about random polymers are related to questions about random fields. 1.3 Random fields and local time 1.3.1 Generalities Let X be a countable set. It should be thought of as a spatial configuration of points; in the main examples, it is the vertex set of a graph, Γ= (X, E). Let us call any map φ : X → R a real-valued field on X . It is also of interest to consider vector-valued fields or more generally maps φ : X → M that take values in a manifold M , but most of the discussion will be restricted to the simplest case of real-valued fields, M = R. The space of fields is MX = {φ : X → M }. Random fields, or probability measures on MX , are one of the main structures of interest in equilibrium statistical mechanics, in particular with X an infinite set or in the limit when X tends to an infinite set. Examples of random fields in statistical mechanics include spin models, i.e., models of ferromagnetism in which a random field describes spins of particles located at the vertices of a graph, the description of dislocations of particles from a crystal, the modelling of phase interfaces, height functions of some configuration models (e.g. dimers), the local time of Markov (or more general random) processes, and more. 2The formula for µ > 0 also holds for d = 2, but for µ = 0, the homogeneous function 1/|x |d−2 is replaced by − log |x |. For simplicity, we restrict to d > 2. 10 1.3. Random fields and local time In general, it is non-trivial to define random fields on an infinite set X so that their definition often proceeds through an approximation by finite sets. 1.3.2 Gaussian fields A class of random fields of fundamental importance are Gaussian fields. These are special in many ways: they can be defined essentially directly on infinite sets (and also in the continuum), many properties are accessible by elementary calculations, and they play an important role in the study of a number of non-Gaussian fields. Let X be a finite set and C = (Cxy )x ,y∈X be a symmetric positive semi-definite matrix with real entries indexed by X , i.e., Cxy = Cyx for all x , y ∈ X and ∑ x ,y∈X f xCxy fy ≥ 0 for all f ∈ RX . (1.21) The Gaussian measure PC on RX with mean 0 and covariance C is uniquely de- fined by the Fourier transform: ∫ eiφ f PC (dφ) = e− 12 f C f for all f ∈ RX (1.22) where φ f = ∑ x∈X f xφx , f C f = ∑ x ,y∈X f xCxy fy . 
(1.23) In particular, when C is a strictly positive definite matrix, i.e., if equality in (1.21) holds only if f x = 0 for all x ∈ X , then the inverse matrix L = C−1 exists and the Gaussian measure PC is equivalently given by the density PC (dφ) = e − 12φLφ √ det(2piC) λX (dφ) (1.24) where λX denotes the |X |-dimensional Lebesgue measure on RX . We then say that PC is a non-degenerate Gaussian measure. The matrix C is the covariance matrix or two-point function of PC in the sense that EC (φxφy ) := ∫ φxφy PC (dφ) = Cxy (1.25) where we have introduced the notation EC (F) for the integral or expectation of a random variable F with respect to the Gaussian measure PC . 11 1.3. Random fields and local time Wick’s formula The moments of PC are given explicitly in terms of C by: EC  2p∏ i=1 φxi  = ∑ P p∏ j=1 Cxn j xmj (1.26) where the sum ranges over all pairings P of 1, . . . , 2p into p distinct unordered pairs {n1 ,m1}, . . . , {np ,mp }; the odd moments vanish [104, Proposition 1.2]. Consistency Gaussian fields are consistent, in the sense that if φ is a Gaussian field on X with covariance matrix C = (Cxy )x ,y∈X , then for any subset Y ⊂ X , the restriction of φ to Y is also a Gaussian field with covariance (Cxy )x ,y∈Y ; this follows from (1.22). The consistency implies the existence of Gaussian fields on infinite index sets. A matrix C on an infinite index set X is positive definite if, for every finite subset Y ⊂ X , the restriction of C to Y is positive definite. For any positive definite ma- trix C indexed by a set X , Kolmogorov’s extension theorem [61, Theorem 10.18] implies that there exists a random field φ on X such that, for each finite Y ⊂ X , the restriction of φ to Y is a Gaussian field with covariance the restriction of C to Y . Free fields Now suppose that X is the vertex set of a graph Γ = (X, E). A random field φ on X is called a Markov field on Γ if, for any A ⊂ X , {φx : x ∈ A} is independent of {φx : d(A, x) > 1} conditionally on {φx : d(x , A) = 1}. Markov random fields play an important role in statistical mechanics because the Markov property describes local interactions. It is not difficult to see that a non-degenerate Gaussian field on a finite graph is Markovian if and only if the matrix L = C−1 is local in the sense that Lxy = 0 if d(x , y) > 1; see e.g. [98, Theorems 2.1–2.2]. Let E(φ, φ) = φLφ denote the quadratic form associated to such an L. Note that every quadratic form compatible with the locality requirement is given by functions α : E → R and µ : X → R as E(φ, φ) = φLφ = ∑ e∈E αe (∇φ)2e + ∑ x∈X µxφ 2 x (1.27) ( = 2 ∑ xy∈E αxyφxφy + ∑ x∈X ( µx + 2 ∑ y:y∼x αxy ) φ2x ) where (∇φ)2xy = (φx − φy )2. (1.28) 12 1.3. Random fields and local time A quadratic form of this form is called a Dirichlet form on X when α, µ ≥ 0, but there are also interesting situations in which the last requirement is relaxed and only positive definiteness of E is required [3, 4]. The inverse matrix C = L−1 is called the Green function of E. A Gaussian field whose covariance is the Green function of a Dirichlet form E is called the free field associated to E. Much interest is already in the simplest case where α and µ are both constant, say, αe = 1 for all e ∈ E and µx = m 2 ≥ 0 for all x ∈ X . Then such a field is called the discrete free field on Γ with mass m. This terminology has its roots in quantum field theory [104]. 
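The following minimal sketch (not part of the thesis) illustrates these definitions by sampling the discrete free field on a small two-dimensional torus with covariance C = (−∆ + m²)^{-1}, and by checking the covariance (1.25) and one instance of Wick's formula (1.26) empirically. The torus size, mass, and sample count are arbitrary illustrative choices.

```python
import numpy as np

def torus_laplacian(n):
    """Matrix of the graph Laplacian -Delta on the two-dimensional torus (Z_n)^2."""
    N = n * n
    L = np.zeros((N, N))
    for x in range(n):
        for y in range(n):
            i = x * n + y
            L[i, i] = 4.0
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                j = ((x + dx) % n) * n + (y + dy) % n
                L[i, j] -= 1.0
    return L

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n, m2, n_samples = 8, 0.5, 100_000
    C = np.linalg.inv(torus_laplacian(n) + m2 * np.eye(n * n))  # covariance (-Delta + m^2)^{-1}
    C = (C + C.T) / 2                                           # symmetrize against rounding
    phi = rng.multivariate_normal(np.zeros(n * n), C, size=n_samples)
    # empirical two-point function against the covariance matrix, cf. (1.25)
    print(np.mean(phi[:, 0] * phi[:, 1]), "≈", C[0, 1])
    # a fourth moment checked against Wick's formula (1.26): E[phi_0^4] = 3 C_00^2
    print(np.mean(phi[:, 0] ** 4), "≈", 3 * C[0, 0] ** 2)
```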
Local perturbations of Gaussian fields It turns out that a number of interesting problems can be studied through (approx- imately) local perturbations of Gaussian fields, in particular local perturbations of free fields. By a local perturbation, we shall understand a random field given on a finite graph by a measure of the form PC ,Z0 (dφ) = 1 Z Z0(φ) PC (dφ) (1.29) where local means that Z0 is a product of local field functionals3, Z0(φ) = ∏ x∈X Z0,x (φ), (1.30) i.e., Z0,x depends on {φy : d(x , y) ≤ 1} only. The most interesting examples are given by homogeneous perturbations for which Z0,x is the same functional for all x which is analogous to the requirement that α and µ are constant in (1.27). The term “perturbation” might suggests that fields described by such measures are very similar to free fields, in particular when “Z0,x ≈ 1,” but it turns out that the large distance behavior can be drastically different, in a way very much analogous to the behavior of polymer models discussed in the last paragraph of Section 1.2.2. This is no coincidence. In Section 1.3.3, we will sketch how, in terms of a general- ized notion of Gaussian field, random polymers are models that can be described in terms of such local perturbations. This description is closely related to spin models, subsequently discussed briefly in Section 1.3.4. 3We use the term field functional rather than random variable for several reasons. It emphasizes the point of view that the former are defined on the fields themselves rather than a probability space. For example, it will become useful to evaluate field functionals on deterministic fields. The second reason is that, in a generalized context involving differential forms (Fermions) introduced later, the notion of random variable does not exist while the notion of field functional still does. 13 1.3. Random fields and local time 1.3.3 Local time of Markov processes and free fields The local time of a Markov process on a graph Γ= (X, E) is a random field on X , given (for every t ≥ 0) by (1.7). It is of considerable interest for random polymers. For example, the ratio of weight functions cHt (a, b)/c0t (a, b) of Section 1.2.4 is the expectation of a functional of the random field Lt under the conditional probability distribution Pa ( · |wt = b) of the simple random walk. The distribution of the local time of a Markov process is difficult to study di- rectly, but it is known that, for continuous-time Markov processes, the local time4 is closely related to the free field associated to the Dirichlet form of the Markov pro- cess. (The connection of Dirichlet forms and Markov processes is discussed in the next subsection.) These relations go back to Symanzik [107], Brydges, Fröhlich, and Spencer [18], and Dynkin [51–54], and there are also a number of more recent results [108]. For example, Dynkin’s so-called isomorphism theorem states [108] EC (φaφbF ( 12φ2)) = ∫ ∞ 0 (EC ⊗ Ea )(F ( 12φ2 + Lt )1wt=b )e−µt dt (1.31) where C is the covariance of the free field φ with mass m2 = µ > 0, i.e., the Green function of the variable-speed simple random walk wt killed at rate µ, EC is the expectation functional of the field φ, and Ea is the expectation of the simple random walk wt started at w0 = a. Parisi and Sourlas [92,93] and McKane [89] discovered a more direct relation- ship involving supersymmetry; see also Luttinger [87]. 
In notation to be introduced below, the so-called τ-isomorphism [17, 30] can be stated as EC ( ¯φaφbF ( ¯φφ + ¯ψψ)) = ∫ ∞ 0 Ea (F (Lt )1wt=b )e−µt dt (1.32) where the pair (φ, ψ) a supersymmetric Gaussian field with the same covariance C. Thus, if the square of the free field is replaced by the square of the supersymmetric field on the left-hand side, 12φ 2 + Lt is replaced by only Lt on the right-hand side. The supersymmetric partner ψ of the complex free field φ decouples the two sides. Dirichlet forms, random walks, and free fields The theory of Dirichlet forms is concerned with far-reaching generalizations of the quadratic form (1.27); see reference [63]. Dirichlet forms stand in close connection to continuous-parameter Markov processes. For example, the Dirichlet form (1.27) with constant coefficients, αe = 1, µx = m2 ≥ 0, is associated to the variable-speed 4For continuous-time Markov processes, the local time is also often called the occupation time to distinguish it from the local time of the skeleton Markov chain. 14 1.3. Random fields and local time simple random walk on the graph Γ. Indeed, (αe )e∈E with αe = 1 can be viewed as the adjacency matrix (Axy )x ,y∈X of Γ, defined by Axy = 1xy∈E =  1 (xy ∈ E), 0 (xy < E). (1.33) Let Dxx = ∑ y∼x Axy be the number of neighbors of the vertex x, and set Dxy = 0 if x , y. The generator of the form (1.27) with m2 = 0 can then be written as the graph Laplace operator L = −∆ = D − A. (1.34) Standard theory of Markov process implies that there is a Markov process (wt )t≥0 on X with Ex (1wt=y ) = [e−Lt ]xy . The two-point function of this process is Gm2 (x , y) = ∫ ∞ 0 Ex (1wt=y )e−m 2t dt = ∫ ∞ 0 [ e(∆−m 2)t ] xy dt = [ (−∆ + m2)−1 ] xy . (1.35) Thus the two-point functions of the simple random walk and the two-point function of the free field are the same. The connections between a Markov process and the corresponding free field go much further, however. Complex and supersymmetric Gaussian fields A natural variant of (real) Gaussian fields are complex Gaussian fields. In general, a complex field is merely a two-component real field, but we restrict to symmetric complex Gaussian fields which means that the real and imaginary parts of the field are independent real Gaussian fields with the same covariance [79]. The symmetric complex field is then determined by E( ¯φxφy ) = Cxy , E(φxφy ) = E( ¯φx ¯φy ) = 0 (1.36) and C is called its covariance. (The real and imaginary components of φ both have covariance 12C in the usual sense.) Let us consider the symmetric complex Gaussian measure on CX with strictly positive definite covariance matrix C for a finite set X . Then, with L = C−1, the expectation of a random variable F : CX → C is given by EC (F (φ)) = 1det(2piiC) ∫ CX F (φ) exp − ∑ x ,y∈X φxLxy ¯φy  d ¯φ dφ (1.37) 15 1.3. Random fields and local time which is interpreted as follows: in terms of two real fields, u and v, φ and ¯φ are given by φx = ux + ivx and ¯φx = ux − ivx , and the measure d ¯φ dφ is a shorthand for d ¯φx1 dφx1 · · · d ¯φxn dφxn , if X = {x1 , . . . , xn }, where d ¯φi dφi = 2i dux dvx (1.38) with dux dvx the usual Lebesgue measures on C  R2. Now observe that the probability density of the complex Gaussian measure is the top degree part of the differential form γC = exp − ∑ x ,y∈X φxLxy ¯φy − 1 2pii ∑ x ,y∈X dφxLxyd ¯φy  . 
(1.39) Here differential forms are multiplied with the anticommuting wedge product (sup- pressed in the notation above), and the exponential function is defined by expansion into a power series (which is unambiguous because the argument has even degree). An interesting property of this formula is that the normalization factor of the mea- sure does not appear explicitly. The expectation (1.37) can now be written as EC (F (φ)) = ∫ CX FγC (1.40) with the convention that the integral of a differential form is the integral of the top degree part of the form only, in the usual sense of integrals of differential forms. Observe that, while equation (1.37) only has an interpretation for ordinary ran- dom variables F (φ), i.e., differential forms of degree 0, equation (1.40) has a natu- ral interpretation when F is a more general differential form, namely as the integral of the top degree part of the differential form FγC . Differential forms can then be viewed as functionals of the field φx and the differential form ψx = (2pii)−1/2dφx .5 φx and ψx appear in a (formally) symmetric way in the formula for γC . In the terminology of quantum mechanics, φ has the interpretation of a Boson field, while ψ can be interpreted as a Fermion field. The formal symmetry between φ and ψ is called a supersymmetry and has several fascinating implications which we will not discuss, but see references [17, 30]. We still call the pair (φx , ψx ) the supersymmetric Gaussian field with covariance C. The identification of Fermion fields with differential forms in this context is due to Le Jan [85, 86]. To exemplify in which ways supersymmetric Gaussian fields behave like ordi- nary Gaussian fields, let us mention that the sum of two supersymmetric Gaussian fields can again be interpreted as a supersymmetric Gaussian field whose covari- ance is the sum of the covariances [34, Proposition 2.6]. The covariance is E( ¯φxφy ) = E( ¯ψxψy ) = −E(ψy ¯ψx ) = Cxy . (1.41) 5The complex square root function is fixed in an arbitrary way. 16 1.3. Random fields and local time This can be generalized to a version of Wick’s formula for the moments: EC  p∏ i=1 ¯φxiφyi q∏ j=1 ¯ψu jψv j  =  ∑ π∈Sp p∏ i=1 Cxi ypi (i)   ∑ π∈Sq (−1) |π | q∏ j=1 Cu j vpi ( j )  (1.42) where Sn is the symmetric group of order n, and (−1) |π | is the sign of a permutation pi ∈ Sn . More details are given in [29,30], but the upshot is that again, as in (1.26), all moments can be calculated in a simple way in terms of the covariance. Local time and supersymmetry Finally, we can discuss the connection between random walks and supersymmetry, discovered by Parisi and Sourlas [92, 93] and McKane [89], in the form stated in reference [33]. To explain it, define differential forms τx , x ∈ X , on CX by τx = ¯φxφx + 1 2pii d ¯φxdφx = ¯φxφx + ¯ψxψx . (1.43) For F : RX → R smooth, it is natural to define a differential form F (τ) as the finite Taylor series around the degree 0 part of τ which is ¯φφ = |φ|2: F (τ) = |X |∑ m=1 1 m! ∑ x1 ,... ,xm∈X Fx1 ···xm ( ¯φφ) m∏ j=1 1 2pii d ¯φx j dφx j (1.44) where Fx1 ···xm (t) is the mth derivative of F (t) in direction (ex1 , . . . , exm ). The Taylor series is finite because differential forms on a finite dimensional space have a maximal degree (the dimension of the space). It is unambiguous because the differential form τ is even. Theorem 1.3.1. 
Let X be a finite set, (wt )t≥0 be a continuous-time Markov process on X, and C be the Green function of (wt )t≥0 with killing rate m2 > 0: Cxy = ∫ ∞ 0 Ex (1wt=y )e−m 2t dt. (1.45) Then, for any smooth F : RX+ → R that does not grow too rapidly,∫ ∞ 0 Ex (F (Lt )1wt=y )e−m 2t dt = EC (F (τ) ¯φxφy ). (1.46) Proof. See [30, Propositions 2.7 and 4.4].  17 1.3. Random fields and local time Theorem 1.3.1 with F = ∏x e−gτ2x−(µ−m2)τx for some m2 > 0 implies that the two-point function of the continuous-time weakly self-avoiding walk on a finite graph is equal to the two-point function of a local perturbation of the supersym- metric free field on the same graph, in the sense of Section 1.3.2 with the Gaussian measure PC replaced by the supersymmetric Gaussian “measure” γC . If we write g instead of β and set γ = 0, the two-point function (1.16) is thus, more explicitly, Gµ (a, b) = EC ( ¯φaφbZ0) (1.47) where C = [−∆ + m2]−1 and Z0 = ∏ x∈X e−gτ 2 x−(µ−m2)τx (1.48) is a local perturbation. In fact, there is some flexibility in the split of perturbation and Gaussian measure, for example, by choice of m2. It turns out that this split can be made use of in the context of the renormalization group, and that then, it is also necessary to consider a more general splitting, C = (1 + z)[−∆ + m2]−1 with Z0 = ∏ x∈X e−gτ 2 x−(µ−zm2)τx−zτ∆x (1.49) and τ∆,x = 1 2 [ φx (∆ ¯φ)x + (∆φx ) ¯φx + ψx (∆ψ)x + (∆ψ)x ¯ψx ] . (1.50) The study of the perturbation (1.48) is actually also very interesting when the Fermionic (differential form) part of τ is dropped, and then such perturbations have been studied extensively, as spin models which are models of ferromagnetism. 1.3.4 Spin models Let Γ = (X, E) be a finite graph. A spin model on Γ is real- or vector-valued random field on Γ with distribution given by [60] P(dφ) = 1 Z e−H (φ) ∏ x∈X ρ(dφx ) (1.51) where Z is a normalizing constant, ρ is a probability measure on RN called a priori measure of the spin model, α : E → [0,∞) are pair interactions, and H (φ) = − ∑ xy∈E αxy φx · φy . (1.52) 18 1.3. Random fields and local time The best-known case is when α is constant, i.e., αe = α > 0, and the a priori measure is given by the uniform (surface) measure of the unit sphere SN−1 ⊂ RN . These so-called N-vector models include the Ising model (N = 1), the rotor or XY model (N = 2), and the Heisenberg model (N = 3). Much attention has also been devoted to the φ4 models, given by αe = 1 and a priori measure ρ(dφx ) = e−g |φx |4−s |φx |2 . (1.53) The φ4 models include the N-vector models as limits with g → ∞ and s ∝ −g; see references [60, 100]. They can be written in exact analogy to (1.48) as dP = 1 Z Z0 dPC , Z0 = ∏ x∈X e−g |φx | 4−µ |φx |2 . (1.54) where PC is the Gaussian measure with covariance6 given by C = [−∆ + m2]−1 and µ = s − 1 − m2. Spin models and walks The relation (1.54) with N = 2 components is the same as (1.46) with τx replaced by its 0-degree part, |φx |2. Thus the weakly self-avoiding walk model is a super- symmetric version of the two-component φ4-model. It is known that spin models also have interpretations in terms of walks, but with additional loops [18, 60]. In fact, the discovery of the relations between walks and fields departed from this direction in the study of field theories in terms walks and loops [107]. De Gennes [42] also argued that the self-avoiding walk is described by the limit N → 0 of the N-vector model (also see [30, 88]), but this limit does not have a meaning at the level of probability measures. 
The supersymmetric version is a way of giving rigorous meaning to it, in the context of the weakly self-avoiding walk. The essential idea is that the Fermion components of τ count, in a sense, negatively to the number of components due to the minus sign in equation (1.42), in this sense giving “N = 2 − 2 = 0.” For a more complete discussion, see reference [30]. Behavior of spin models In view of the connection between spin models and interacting walks (with loops), it is not surprising that many qualitative features of the weakly self-avoiding walk are shared by the spin models. In the context of spin models, the critical value µc has an instructive interpretation. For example, consider the φ4 model with N = 1 6Each component is an independent Gaussian field with this covariance, in the vector-valued case. 19 1.4. The renormalization group and g > 0 fixed (or the Ising model). It is known, see e.g. reference [6], that there is µc > −∞ such that its infinite volume limits on Zd , d ≥ 2, satisfy χ(µ) = ∑ x |E(φ0φx ) |  < ∞ (µ > µc ), = ∞ (µ < µc ). (1.55) The field φx can be interpreted as a kind of spin of a particle (an arrow) located at vertex x. For µ < µc , (1.55) means that the spins are ordered. This corresponds to the ferromagnetic phase of a magnet in which most spins point in the same direc- tion. On the other hand, the case µ > µc corresponds to a disordered phase. The variation of µ corresponds to a variation in inverse temperature. The critical point µ = µc corresponds to the critical temperature of the phase transition between the ordered and the disordered phase. For N > 1, the picture is similar, but much more delicate due to the continuous O(N )-symmetry of the model on finite graphs. This continuous symmetry is “spontaneously” broken in the ordered phase in the infinite volume limit [62], giving a different magnitude of difficulty to the problem. 1.4 The renormalization group 1.4.1 The concept of renormalization in statistical mechanics Random polymer models on the Euclidean lattice are expected to have scaling lim- its. The fundamental example of this is the convergence of the simple random walk to the Wiener process (1.5). This is a statement about large distances and times related by diffusive scaling. The basic idea of renormalization is to study the large- distance behavior of a model by reduction of the degrees of freedom of the model by a version of coarse graining, i.e., disregarding information about the behavior at small distances, say, smaller than 0 ≪ L ≪ ∞. The fundamental hypothesis of the renormalization idea is that, after coarse graining and rescaling, the model should be similar to the original model with modified parameters. The combination of the two operations of coarse graining and rescaling is called a renormalization group transformation. However, concrete formulations of such transformations for models of self-avoiding walks on the Euclidean lattices, in any dimension, defined directly in terms of walks and amenable to analysis, seem not to be understood.7 The renormalization group concept is, however, much better understood in the context of (near) critical random fields, in particular if these are local perturbations 7On hierarchical groups, the work of Brydges, Evans, and Imbrie [17,28,29] has an interpretation in terms of walks, and there is also work in preparation by Ohno in which renormalization of self- avoiding walks on hierarchical lattices is studied directly in terms of walks. 20 1.4. 
The renormalization group of a Gaussian field on Zd , i.e., described in finite volume by measures PC ,Z0 (dφ) = 1 Z Z0(φ) PC (dφ) (1.56) where PC is a Gaussian measure on Zd , Z0(φ) is a local perturbation, in the sense discussed in Section 1.3.4, and Z is the normalization constant Z = EC (Z0). In this context (but not only in this), the renormalization group has been used successfully to study the long-distance behavior of a number of such models. It also provides an approach to a renormalization group study of random polymers via (1.46). The term critical random field refers, for example, to a spin model at the critical point; see Section 1.3.4. In the context of models of walks, the near critical behavior is related to the behavior of long polymers as discussed in Section 1.2.4. Let us mention two historically important ideas for the renormalization group study of random fields: Kadanoff [80] proposed the intuitively appealing idea to replace a random field in an L × L × · · · × L block of points in Zd by an effective block spin field, constructed for example by averaging the field in that block. He claimed that this block spin field should behave in a similar way as the original field, but did not provide arguments to justify such an approximation. Wilson later argued, still non-rigorously but with deep insight, how a variant of this idea may be justified. He was awarded the Nobel Prize in Physics in 1982 for his contributions [113]. Following the introduction of [1], let us sometimes refer to the mathematical realization of Wilson’s ideas as Wilson’s program. There has been quite remarkable progress in the realization of approaches like Wilson’s renormalization group. We do not attempt to provide a comprehensive list of references, but let us only mention a few relevant references: Benfatto et al. [13], Feldman et al. [58], Gawedzki and Kupiainen [67], and Brydges and Yau [22]. Unfortunately, these works all involve numerous technical challenges, and it seems unlikely that the full capacity of the renormalization group idea has been attained yet. Nonetheless, it is one of the most powerful tools available for the study of random fields. We will give a short heuristic account of our interpretation of the challenges of Wilson’s program and also sketch very briefly aspects of the approach initi- ated by Brydges and Yau [22], in a further developed form of Brydges and Slade [10, 34–38]. The latter authors conceptualized, simplified, and generalized the ap- proach in significant aspects to study weakly self-avoiding walks via (1.46). The method of Brydges and Yau has, however, also been applied to a number of other models, including the dipole and Coulomb gases [44–46, 48, 49, 55], gradient in- terface models [3], as well as problems from quantum field theory [1, 16, 32, 47]. Introductions to concepts of the method are given in [12, 24, 25, 106]. Our dis- cussion is inspired by many of the references previously mentioned and by the general expositions on the renormalization group [14, 67, 100, 113]. The focus of 21 1.4. The renormalization group our discussion is on the relation to the problems studied in this thesis. 1.4.2 Progressive integration, dynamical systems, and coordinates Let us consider a random field that is a local perturbation of the free field, (1.56), with covariance given by the Green function C = [−∆+m2]−1 of the graph Laplace operator on Λ ⊂ Zd . 
The perturbation Z0 makes sense only if it depends on a finite set Λ and then m2 > 0 may be required, but the goal is to analyze such measures PC ,Z0 in the limitΛ→ Zd and m2 ↓ 0; we will, however, not devote much attention to the details of these limits. In principle, the measure PC ,Z0 can of course be studied in terms of EC (F Z0), (1.57) for enough field functionals F which we call observables. For instance, with F = 1, (1.57) expresses the normalization factor in (1.56), and with F = φaφb , it gives the unnormalized two-point function. It is well-known, however, that it can be useful to study a measure in terms of a transform, e.g., its Laplace or Fourier transformation. Let us denote the Laplace transform of the unnormalized measure Z0 dPC by Z f := EC (e−φ f Z0(φ)) =: EC (Z f0 (φ)). (1.58) To study the large distance behavior of the field, the class of test functions f should be insensitive to fluctuations at short distances. For example, a scaling limit would be determined by increasingly smooth f = f ε given by f εx = εα ¯f (εx), (x ∈ Zd ), for some exponent α > 0 and ¯f ∈ C∞c (Rd ), in the limit ε ↓ 0. It is, however, also interesting to consider pointwise asymptotics of correlation functions, for example with f = f ab = σaδa +σbδb as d(a, b) → ∞, where σc are constants (c = a, b), and (δc )x = 1 if c = x and (δc )x = 0 otherwise. Then the normalized two-point function is the derivative of log Z f with respect to σa and σb . The accurate analysis of expectations like (1.57)–(1.58) is however highly non- trivial because the free field is strongly correlated: for example, see (1.20), (1.18), EC ((φx − EC (φx )(φy − EC (φy ))) = EC (φxφy ) → 0 (1.59) so slowly that ∑ y |EC (φxφy ) | → ∞ as Λ → Zd and m ↓ 0. The crucial property that will facilitate the analysis is that the perturbation Z0 is local, i.e., a product Z0 = ∏ x∈Λ Z0,x (1.60) where each Z0,x is a local field functional; see Section 1.3.2. This factorization property provides the important structure, as we will sketch, for the iterative anal- ysis of such expectations by a particular form of coarse graining. 22 1.4. The renormalization group The fundamental idea is to decompose the free field φ into a sum of two in- dependent Gaussian fields, φ = φs + φl , corresponding to small and large dis- tances. The coarse graining step is then implemented by taking the expectation of the field φs which is called the fluctuation field because it captures the small dis- tance fluctuations that are to be eliminated. Wilson’s renormalization group pro- gram involves iteration of this procedure and rescaling of the underlying physical space after each step. The motivation is that this renormalization group transfor- mation, the combination of coarse graining and rescaling, should bring a critical model approximately back to its original form so that the transformation can be iterated to obtain an effective description for increasingly large distances. In practice, it can be convenient to omit the rescaling step and instead consider “increasingly smooth” test functions, as discussed below (1.58). Furthermore, the iterated decomposition of the Gaussian field, or equivalently of its covariance, into small and large distance contributions can be implemented by a priori decomposi- tion of the initial covariance, C = C1 + C2 + · · · (1.61) into a sum of covariances corresponding to geometrically increasing length scales. This idea goes back to Wilson, but was perhaps first explicitly formulated by Ben- fatto et al. [13]. 
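A simple numerical illustration (not from the thesis) of this idea: if φ1 and φ2 are independent Gaussian fields with covariances C1 and C2, then φ1 + φ2 is a Gaussian field with covariance C = C1 + C2, so the expectation of a local perturbation Z0 can be computed by first integrating out the fluctuation field and then the remaining field. The sketch below checks this on a toy five-point covariance with an ad hoc split C = 0.4C + 0.6C; the split, the perturbation, and the sample counts are illustrative assumptions and have none of the structure of the decompositions used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)

def gaussian_samples(cov, size):
    return rng.multivariate_normal(np.zeros(len(cov)), cov, size=size)

def Z0(phi, g=0.05):
    """A local perturbation Z_0(phi) = prod_x exp(-g phi_x^4), cf. (1.30)."""
    return np.exp(-g * np.sum(phi ** 4, axis=-1))

# A toy covariance on five points, split into two positive definite pieces
# C = C1 + C2 (here simply by scaling).
A = rng.standard_normal((5, 5))
C = A @ A.T / 5 + np.eye(5)
C1, C2 = 0.4 * C, 0.6 * C

# Direct Monte Carlo evaluation of E_C[Z_0].
direct = Z0(gaussian_samples(C, 400_000)).mean()

# Progressive evaluation: Z_1(phi) := E_{C1}[Z_0(phi + zeta)] is formed by
# integrating out the fluctuation field zeta, and Z_1 is then integrated
# against the remaining Gaussian field with covariance C2.
phi = gaussian_samples(C2, 2_000)
zeta = gaussian_samples(C1, (2_000, 200))
Z1 = Z0(phi[:, None, :] + zeta).mean(axis=1)
progressive = Z1.mean()

# The two estimates agree up to Monte Carlo error.
print(direct, "≈", progressive)
```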
The somewhat vague term length or distance scale means that each Cj should account for the fluctuations of the free field in an exponential range of distances L j−1 . |x | . L j for a fixed L > 1. This is discussed in the next section. From a pragmatic point of view, the covariance decomposition C = C1+C2+· · · allows to evaluate the expectation EC (Z f0 (φ)) progressively, in terms of a sequence of field functionals Z f j which are integrated with respect to the Gaussian fields with covariance Cj+1 + · · · , defined by Z f j+1(φ) := E j+1Z f j (φ) := Eφ′ C j+1 ( Z f j (φ + φ′) ) , (1.62) where the expectation on the right-hand side is that of the fluctuation field φ′. E j is thus the convolution operator of the Gaussian measure with covariance Cj . It then follows that the expectation is given by8 Z f = Z f∞ (0) := lim j→∞ Z f j (0). (1.63) The progressive integration (E j ) can be regarded as a time-dependent dynami- cal system, with the scale parameter j in the role of “time:” if N is an appropriate 8The limit requires some mild assumptions on the decomposition. Moreover, in practice, it can be more convenient to stop the iteration after finitely many steps, when the decomposition has reached the size of the finite set Λ; we will ignore such details. 23 1.4. The renormalization group space of field functionals and N j ⊂ N a subspace of field functionals which are integrable with respect to the Gaussian measure with covariance Cj + Cj+1 + · · · , then E j+1 : N j → N j+1 ⊆ N. This picture, in itself, is not a simplification of the problem since the dynamical system (E j ) j is enormously complicated and time- dependent, and it must be understood in the limit Λ → Zd . To analyze particular aspects of this dynamical system, one must find appropriate coordinates in which an aspect of consideration becomes tractable, uniformly in Λ. In particular, it is natural to consider the evolution of the perturbation Z0 only, without f . For example, by an elementary calculation for Gaussian measures, Z f = Z f∞ (0) = e f C f Z∞ (C f ) (1.64) where Z∞ on the right-hand side does not have a superscript f . Thus, in principle, i.e., given sufficient knowledge about Z∞, the general case can be reduced to it. The goal of the next subsections is to outline how coordinates x j can be found in which the action of the Gaussian convolution with covariance Cj+1 on Z j is expressed in a much simpler form by a map Φ j acting on x j : E j+1( ˆZ j (x j )) = ˆZ j+1(Φ j (x j )) (1.65) for some coordinate maps ˆZ j that map an “abstract” coordinate x j to a field func- tional ˆZ j (x j ) = ˆZ j (x j , φ). In his pioneering work, Wilson argued how this should be possible and, with the previously mentioned rescaling step, his dynamical sys- tem is approximately autonomous. In the rigorous approach of Brydges and Slade [10,34–37], it has turned out useful to allow the coordinate spaces to depend on the scale j. Thus there is a sequence of spaces X j such that x j ∈ X j and the evolution maps are given as Φ j : X j → X j+1, but approximate invariance under rescaling must, of course, still play a role. Finding such coordinates x j , rigorously, is at the heart of the difficulties of the renormalization group. Let us mention again, with the more specific context that has now been intro- duced, that the main results of this thesis are the following. 
• Chapter 2 provides a new method for decomposition of Green functions that give decompositions of free fields with particularly useful properties for the analysis of the renormalization group transformations that they induce. • Chapter 3 is the analysis of a class of general dynamical systems Φ = (Φ j ) that arise as coordinates of the renormalization group map for four-dimen- sional weakly self-avoiding walks [10, 37]. The outline of this subsection will be expanded with further details in the following subsections. In Appendix A, we provide some concrete details how the covariance decomposition of Chapter 2 gives rise to the assumptions of Chapter 3. 24 1.4. The renormalization group 1.4.3 Decomposition of the free field The starting point for the renormalization group, in the form discussed in the previ- ous section, is a decomposition of the free field, or equivalently the decomposition of its covariance C, into distance scales: C = C1 + C2 + · · · . (1.66) The covariance should here be regarded as an infinite (in the limit Λ → Zd) sym- metric matrix (Cxy )x ,y∈Zd that is positive definite in the sense that ∑ x ,y∈Zd f xCxy fy ≥ 0 for all finitely supported f : Zd → R. (1.67) The decomposition (1.66) must be such that each term Cj satisfies (1.67), in order for the Cj to be the covariances associated to Gaussian fields, and, at the same time, the covariances Cj must “capture” the distance scales L j−1 . |x | . L j for some fixed L > 1, where L j = L × · · · × L. These are two competing constraints. In Chapter 2, in particular in Theorem 2.1.2 and Example 2.1.3, we prove that, if C is the Green function of a quadratic form in general class (containing Dirichlet forms on a general graph, not necessarily Zd), then a strong form of the decompo- sition of the above kind is possible. There exists φt (x , y), t > 0 such that Cxy = ∫ ∞ 0 φt (x , y) dtt (1.68) where φt is positive definite, for each t > 0. The use of the scale-invariant measure dt/t on [0,∞) in (1.68), rather than the Lebesgue measure dt, is not important but a natural choice. The kernel φt satisfies the finite range property φt (x , y) = 0 if d(x , y) > t, (1.69) and natural estimates. For example, if C is the Green function associated to the lattice Laplace operator, then, for all multi-indices lx , ly ∈ N{±1,... ,±d}0 ,∣∣∣∇lxx ∇lyy φt (x , y)∣∣∣ ≤ Ct−(d−2)−|lx |1−|ly |1 (1.70) where negative components of l denote discrete gradients in the negative coordinate directions and |l |1 = ∑d i=1(li + l−i ). Moreover, φt is then also translation-invariant, i.e., φt (x , y) = φt (0, y − x), and symmetric, i.e., φt (0, x) = φt (0, −x). 25 1.4. The renormalization group To obtain a discrete decomposition, as in (1.66), the integral (1.68) can be split into integrals over finite intervals. For example, for any L > 1, set [Cj ]xy =  ∫ 1 2 L 0 φt (x , y) dtt ( j = 1),∫ 1 2 L j 1 2 L j−1 φt (x , y) dtt ( j > 1). (1.71) The properties of φt immediately imply  Cj is positive definite; Cj has the finite range property: [Cj ]xy = 0 if d(x , y) > 12 L j ; Cj is translation-invariant: [Cj ]x+a ,y+a = [Cj ]xy ; Cj satisfies |[∇lxx ∇lyy Cj ]xy | ≤ O(L−(d−2+|lx |1+|ly |1)( j−1)). (1.72) In addition, we show that the φ of the Euclidean lattice has a scaling limit. For the discrete decomposition, this means that there exists c ∈ C∞c (B 12 (0)) such that [Cj ]xy = L−(d−2) jc(L− j (x − y)) + O(L−(d−2+1) j ). (1.73) An analogous result also holds for all discrete gradients of Cj . 
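Given kernels φ_t with the properties (1.68)–(1.70), the binning (1.71) and the resulting properties (1.72) are mechanical to verify. The sketch below is an illustration only: it uses a stand-in profile (the triangle kernel, which is positive semi-definite on Z and has range t) rather than the kernels constructed in Chapter 2, omits the t → 0 part of the first bin, and checks numerically that each Cj is positive semi-definite and vanishes beyond distance L^j/2.

    import numpy as np

    L_scale, jmax, n = 2.0, 5, 64
    x = np.arange(n)
    dist = np.abs(x[:, None] - x[None, :])        # distances on a path of n sites

    def phi_t(t):
        # stand-in kernel: the triangle profile (1 - |x-y|/t)_+ is positive
        # semi-definite on Z and vanishes for |x-y| >= t (an assumption of this toy;
        # it is not the kernel constructed in Chapter 2)
        return np.maximum(1.0 - dist / t, 0.0)

    def C_j(j, nquad=400):
        # the bin [L^{j-1}/2, L^j/2] of (1.71), by a simple Riemann sum in dt/t;
        # the t -> 0 part of the first bin is omitted in this toy
        lo, hi = 0.5 * L_scale**(j - 1), 0.5 * L_scale**j
        ts = np.linspace(lo, hi, nquad)
        return sum(phi_t(t) / t for t in ts) * (hi - lo) / nquad

    for j in range(1, jmax + 1):
        Cj = C_j(j)
        print(j,
              np.linalg.eigvalsh(Cj).min() >= -1e-9,               # positive semi-definite
              dist[np.abs(Cj) > 1e-12].max() <= 0.5 * L_scale**j)  # finite range <= L^j / 2

Both properties hold for every bin because they are inherited term by term: a positive combination of positive semi-definite, finite range kernels is again positive semi-definite and finite range.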
The existence of the scaling limit implies that certain functions of Cj can be computed very precisely in the limit j → ∞, as illustrated in Appendix A. As hinted at, the two constraints that Cj is positive definite and finite range are non-trivial to satisfy simultaneously. It is a natural question if covariance decom- positions in which Cj is localized exponentially, e.g., for some c > 0, |[Cj ]xy | ≤ O(L−(d−2)( j−1)e−cL− ( j−1) |x−y | ), (1.74) would be equally useful. It is much easier to find decompositions with this relaxed localization property. The answer is that such decompositions are almost as useful, and, in fact, they have been used in earlier results on the renormalization group, see in particular [13, 65]. The use of the finite range property, originally proposed by Brydges [90], leads to simplifications of the method and, in some aspects, slightly better results. 1.4.4 Formal perturbation theory Physicists have long understood that the evolution Z j → Z j+1 = E j+1Z j becomes formally simple when expressed as an exponential function. Let ˜Vj = − log Z j be 26 1.4. The renormalization group the effective potential. In particular, for the weakly self-avoiding walk model, by (1.49), ˜V0 = ∑ x∈Λ (g0τ2x + µ0τx + z0τ∆,x ) (1.75) is parametrized by the three coupling constants (g0 , µ0 , z0). Formally, by which we mean by expanding the exponential function into a power series without paying attention to its convergence, ˜Vj+1 = − log(E j+1(exp(− ˜Vj ))) ≈ E j+1( ˜Vj ) + 12 (E j+1( ˜Vj )2 − E j+1( ˜V 2j )) + · · · (1.76) where ≈ means in the sense of a formal power series in ˜Vj . This relation is called the cumulant expansion and also perturbation expansion in the physics literature. If ˜Vj was a polynomial (or formal power series) of the field, as for ˜V0 in (1.75), then the terms of each order of on the right-hand side of (1.76) could be calculated explicitly in terms of the covariance by Wick’s formula (1.42). Ignoring a number of problems with (1.76), Wilson observed that in this formal series of monomials of the field, a few terms seem to be much more important than the others. He argued that, in dimensions four and above9, ˜Vj can be approximated by a polynomial of the same form as ˜V0. Effectively, this reduces the complexity from an infinite number of variables to three variables, (gj , µ j , z j ), parametrizing ˜Vj as in (1.75). First-order perturbation theory and local field monomials To explain Wilson’s argument, some terminology is convenient. A field functional M is a local field monomial, localized at x ∈ Λ, if M can be expressed as a mono- mial in φx and ∇φx , and corresponding terms in other fields (such as the Fermionic field ψ). Moreover, P is a local field polynomial if there is X ⊂ Λ and local field monomials Mx for x ∈ X such that P = ∑ x∈X Mx . For example, φ2x is a local field monomial and ∑x∈Λ φ2x is a local field polynomial. In particular, ˜V0 is a local field polynomial. Then, to explain a fundamental idea, suppose that ˜Vj+1 is given by the first term of the right-hand side of (1.76) only, i.e., ˜Vj+1 = E j+1 ˜Vj . Observe that E j (τx ) = τx , (1.77) E j (τ2x ) = τ2x + 2[Cj ]xxτx = τ2x + 2[Cj ]00τx , (1.78) E j (τ∆,x ) = τ∆,x , (1.79) 9In fact, he also considers dimension “4 − ε,” but we will not be concerned with this case. Below we outline the considerations in general dimension d, but for field theories with “quadric interac- tions,” these will only be useful in d ≥ 4 which is our main interest. 27 1.4. 
The renormalization group by the definitions of τ and τ∆, (1.43) and (1.50), Wick’s formula (1.42), and transla- tion-invariance of Cj , i.e., [Cj ]xx = [Cj ]00. The exact expressions of the right-hand sides in (1.77) rely on the differential form parts of τx and τ∆,x , but for many other purposes one can think of τx and τ∆,x simply as their degree 0 parts, ¯φxφx and 1 2 ( ¯φx (∆φ)x + (∆ ¯φ)xφx ), and we will then do so, and also replace the complex field by a real field if the distinction is not important.10 It follows that, in the linear approximation, all ˜Vj are local field monomials of the same form as ˜V0 with (g0 , µ0 , z0) replaced by (g̃j , µ̃ j , z̃ j ) determined by the recursion relation (g̃j+1, µ̃ j+1 , z̃ j+1) = (g̃j , µ̃ j + 2[Cj+1]00g̃j , z̃ j ). (1.80) Now observe that, according to the discussion about the decomposition of the Green function in the previous section, Var(∇lxφ j+1,x ) = [∇lx∇lyCj+1]xy ∣∣∣ x=y ≈ cL−(d−2+2|l |1) j = cL−2([φ]+|l |1) j (1.81) for any multi-index l. The constant [φ] := 12 (d − 2) on the right-hand side is called the dimension of φ. A measure of the typical magnitude of a field is the square root of its variance and, in this sense, |φ j+1,x | ≈ L−[φ] j . (1.82) Moreover, by (1.81), each discrete derivative ∇ of φ j+1 decreases this typical mag- nitude by an additional factor of L− j (up to an absolute constant). The dimension [M] of a local field monomial M is defined so that |M (φ j+1) | ≈ L−[M ] j according to this heuristic, i.e., by adding a summand [φ] for each factor of φ and a summand of 1 for each discrete gradient ∇. For example, [φ4] = 4[φ] = 2(d − 2), [(∇φ)2] = 2[φ] + 2 = d. (1.83) In dimensions d > 2, the typical magnitude of a fluctuation field decreases as j increases, by (1.82), but at the same time its range increases like L j . For a scaling limit, the natural “effective size” of a field monomial is that of its sum over a block B of approximate diameter L j , i.e., ∑ x∈B |M (φ j+1,x ) | ≈ L(d−[M ]) j . (1.84) This gives rise to the following classification of local field monomials: 10The expressions with differential forms are simpler than those of their degree 0 parts alone. This is because of cancellations due to supersymmetry. It corresponds to the cancellation of “loops” in the random walk representation; see Section 1.3.4. 28 1.4. The renormalization group • if [M] > d, the heuristic magnitude of M contracts (M is irrelevant); • if [M] < d, the heuristic magnitude of M expands (M is relevant); • if [M] = d, the heuristic magnitude of M remains the same (M is marginal). It is therefore natural to consider the coupling constants (gj , µ j , z j ) with respect to the “normalized” field monomials11 L(d−4) jτ2x , L−2 jτx , and τ∆,x . In the formal first order approximation, the evolution of these is given by: (gj+1 , µ j+1, z j+1) = (L−(d−4)gj , L2(µ j − 2L2 j [Cj+1]00gj ), z j ). (1.85) These heuristic considerations lead to the following predictions for the large distance behavior of the perturbed field. In dimension five and higher, the only non- contracting local field monomials compatible with the symmetries12 of the model are τ and τ∆; in particular gj → 0, and the large distance behavior is expected to be that of the free field. In dimension three and lower, there are several relevant local field monomials, finitely many in dimension three, for example τ and τ2, and infinitely many in dimension two, and in both cases the large distance behavior is expected to be non-trivial (different from the free field). 
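The power counting behind this classification is elementary and can be tabulated mechanically. The snippet below is an illustration only (τ and τ∆ are counted through their degree-zero boson parts, as discussed above): it computes [M] = n_φ[φ] + n_∇ for a monomial with n_φ field factors and n_∇ discrete gradients, compares it with d, and reproduces the statements just made for d = 2, 3, 4, 5.

    def monomial_dimension(n_phi, n_grad, d):
        # [M] = n_phi * [phi] + n_grad, with [phi] = (d - 2) / 2
        return n_phi * (d - 2) / 2 + n_grad

    def classify(n_phi, n_grad, d):
        dim = monomial_dimension(n_phi, n_grad, d)
        if dim < d:
            return "relevant"
        if dim == d:
            return "marginal"
        return "irrelevant"

    # tau ~ phi^2, tau^2 ~ phi^4, tau_Delta ~ phi * (Delta phi): two fields, two gradients
    monomials = {"tau": (2, 0), "tau^2": (4, 0), "tau_Delta": (2, 2), "(grad phi)^2": (2, 2)}
    for d in (2, 3, 4, 5):
        print(d, {name: classify(*m, d) for name, m in monomials.items()})

In particular, for d = 4 the output shows τ as the only relevant monomial and τ² and τ∆ as marginal, which is the situation discussed next.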
In dimension four, there is only one relevant field monomial, τ, and only two marginal field monomials, τ2 and τ∆, and the first-order approximation is not sufficient to (heuristically) determine the long-distance behavior. A second-order analysis reveals that the long distance behavior should be like that of the free field, but in a much more subtle way than in dimensions above four. Higher-order perturbation theory and approximation by local polynomials The immediate difficulty encountered when trying to formally include higher-order terms of (1.76) in the previously described heuristic procedure is that such terms are not local field monomials. For example, an (important, as it will turn out) term arising at second-order is −g2j ∑ x ,y [Cj+1]2xy ¯φxφx ¯φyφy . (1.86) This term involves φx and φy with d(x , y) ≈ L j+1 and is therefore not a local field polynomial. However, such terms which arise in (1.76) can always be replaced by 11For simplicity, we refer to, e.g., τx = ¯φxφx + ¯ψxψx as a “monomial,” even though it is actually a sum of two monomials in the fields in the previously introduced terminology. 12 The model is symmetric under Euclidean transformation that preserve the lattice and under a so-called supersymmetry [10, 35]. 29 1.4. The renormalization group a local field polynomial and a contracting non-local remainder term. For example, for the above term, one can make the replacement ∑ x ,y [Cj+1]2xy ¯φxφx ¯φyφy  C (2)j+1 ∑ x ( ¯φxφx )2 (1.87) where we have introduced the abbreviation C (2) j = ∑ y [Cj ]2xy (1.88) which is independent of x, by translation-invariance of Cj . The right-hand side of (1.87) is again a local field monomial and, as such, it can be included as a second- order correction to the flow of coupling constants (gj , µ j , z j ) 7→ (gj+1 , µ j+1, z j+1). The above term results in a contribution to gj+1 like gj+1 = gj − β jg2j + · · · with β j > 0. More details of the resulting equations are given in Appendix A. The difference between the right- and left-hand sides of (1.87) is ∑ x ,y [Cj+1]2xy ( ¯φxφx )( ¯φyφy − ¯φxφx ). (1.89) This term “contracts” in dimensions d ≥ 4, roughly, since the difference between a local field monomial at two points decays faster than the individual monomials, by (1.72), if the distance between the points remains fixed. To illustrate this, consider (1.89) with y = x + re where e is a unit lattice vector and r an integer with |r | ≤ O(L j+1); the latter restriction on r is because of the finite range condition that Cj (x , y) = 0 if d(x , y) ≥ cL j . Then ¯φxφx ( ¯φyφy − ¯φxφx ) = r−1∑ k=0 ¯φxφx (∇e ( ¯φφ))x+ke . (1.90) This term, at scale l > j, i.e., if tested with fluctuation covariance Cl , has the effec- tive size O(rL(d−4[φ]−1)l ) = O(L−(l− j ) L(d−4[φ])l ) which decreases exponentially in l (because r and therefore j remain fixed). This argument can be made for each term appearing in (1.76). Brydges and Slade developed a systematic treatment [35]. 1.4.5 Dynamical systems That the space of relevant and marginal spatially homogeneous local field polyno- mials has finite dimension (in dimension four and above), and that every term in (1.76) can be approximated by such a local field polynomial with a “contracting er- ror,” is the principal idea of Wilson’s renormalization group. Wilson argues [113] 30 1.4. 
that the contractive terms should not influence the critical behavior of the model, which is determined by the evolution of the relevant and marginal terms, in our example by the three-dimensional system (g_j, z_j, µ_j) ↦ (g_{j+1}, z_{j+1}, µ_{j+1}).

There are numerous mathematical difficulties encountered when trying to justify this picture given by formal perturbation theory; these are discussed (in part) in Section 1.4.6. Formal perturbation theory suggests that there should be coordinates x_j = (K_j, V_j) determining Z_j, where V_j = (g_j, z_j, µ_j) is the three-dimensional vector describing the marginal and relevant monomials of formal perturbation theory and K_j is an infinite-dimensional vector capturing all of the irrelevant directions. The evolution of V_j should approximately be given by a "localized" version of (1.76), as illustrated in (1.87), while K_j should be contractive in some sense.

In Chapter 3, the following abstract version of this set-up is considered. We assume that there is a sequence of Banach spaces K_j such that K_j ∈ K_j, and that the joint evolution of (K_j, V_j) is described by an evolution map

Φ_j : K_j × R^3 → K_{j+1} × R^3 (1.91)

of the form

Φ_j(K_j, V_j) = (ψ_j(K_j, V_j), ϕ̄_j(V_j) + ρ_j(K_j, V_j)) (1.92)

with ψ_j and ρ_j contractive in K_j and third-order in V_j, and ϕ̄_j a quadratic polynomial of V_j. The quadratic polynomials ϕ̄_j describe the formal second-order perturbation theory of the relevant and marginal directions and therefore depend on V_j only; ρ_j describes higher-order contributions, which can be due either to the relevant and marginal coordinates or to the contracting directions. The maps Φ_j are allowed to have a weak scale-dependence. In addition, we assume that the ϕ̄_j do not have constant parts, which allows for the interpretation Φ_j(0, 0) = (0, 0), so that 0 = (0, 0) can be considered a kind of fixed point of the dynamical system Φ = (Φ_j)_j. (Footnote 13: 0 is a different element in every space X_j = K_j × R^3, but we will neglect this point in the present discussion.) This corresponds to the fact that the evolution of Z_j is trivial if Z_0 = 1.

The main interest is in the long-time behavior of this dynamical system, as this is related to the large distance behavior of the fields. For a dynamical system near a hyperbolic fixed point, the structure of the flows near the fixed point is well understood. A dynamical system Φ : X → X on a Banach space X has a hyperbolic fixed point 0 if the spectrum of DΦ(0) is bounded away from 1. Informally stated, the stable manifold theorem [99, Theorem 6.1] asserts that if Φ is a hyperbolic C^r map (for some integer r > 0), then there exists a decomposition X = X_s ⊕ X_u such that, near 0, the domain of attraction M ⊂ X is the graph of a C^r map X_s → X_u, and that the convergence under iteration of Φ of points on M to 0 is exponentially fast. This gives rise to a schematic phase portrait as shown in Figure 1.4. In the context of the renormalization group, the choice of V_0 on the stable manifold of a fixed point corresponds to a critical model, whose scaling limit is the same as that of the perturbed Gaussian measure. (This is known as infrared asymptotic freedom in the physics literature. It will be discussed again in Section 1.4.8.)

[Figure 1.4: Schematic phase portrait of the renormalization group, showing the stable manifold, the fixed point, and the unstable manifold.]
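The contrast between the hyperbolic situation just described and the marginal one relevant below can be seen in a one-line computation each. The snippet is a toy illustration with arbitrary constants: it iterates a contracting direction with multiplier 1/2 alongside the marginal recursion g_{j+1} = g_j − β g_j² from the second-order flow discussed above; the former is exponentially small in j, while the latter is only of order 1/(β j), anticipating the polynomial bounds discussed next.

    g_hyp, g_marg, beta = 0.1, 0.1, 1.0
    for j in range(1, 10001):
        g_hyp = 0.5 * g_hyp                    # a contracting (hyperbolic) direction
        g_marg = g_marg - beta * g_marg**2     # the marginal recursion g_{j+1} = g_j - beta g_j^2
        if j in (10, 100, 1000, 10000):
            print(j, g_hyp, g_marg, 1.0 / (beta * j))
    # g_hyp is exponentially small in j (it underflows to 0 in double precision),
    # while g_marg remains comparable to 1/(beta j)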
The “fixed point” of the dynamical system arising in the renormalization group analysis of the four dimensional weakly self-avoiding walk, outlined above, is not hyperbolic; the reason is that τ2 is marginal. The analysis of the (local) long-time behavior of non-hyperbolic fixed points is more subtle than that of hyperbolic ones and depends on specific properties of the dynamical system. For example, a small change of the value of a single coefficient of the quadratic term ϕ̄ above can change the long-time behavior in an important way; see e.g. Example 3.1.6. In Chapter 3, we study dynamical systems of the form (1.92), and prove that, for the class of dynamical systems considered, an analog of a stable manifold the- orem holds. The exponentially fast convergence along the stable trajectory of the stable manifold theorem is replaced in our result by a polynomial bound with log- arithmic correction (which is likely optimal). Informally said, we show that, for sufficiently small V0 and K0, there is a codimension two manifold of (K0 ,V0) such that the solution to (K j+1 ,Vj+1) = Φ j (K j ,Vj ) exists for all j ∈ N and is a pertur- bation of the solution to the analogous two dimensional manifold for the recursion ¯Vj+1 = ϕ̄ j ( ¯Vj ) which can be studied by elementary means. In Appendix A, we provide the explicit expression of the quadratic part, ϕpt, of 32 1.4. The renormalization group the dynamical system that arises in the renormalization group map for the weakly self-avoiding walk [10]. It is expressed in terms of the covariance decomposition of Chapter 2. It turns out that ϕpt is not exactly of the form of ϕ̄ studied in Chapter 3. We provide an explicit transformation which expresses ϕpt in terms of a map ϕ̄ to which the result of Chapter 3 can be applied. 1.4.6 The error coordinate and polymer gases Finally, we provide some indication how the error coordinate K j can be found. This is, of course, the major mathematical difficulty in implementing Wilson’s program. In essence, this amounts to obtaining an approximate version of (1.76) with a useful remainder estimate. This was first achieved for the φ4 model, in a somewhat different formulation, by Gawedzki and Kupiainen [65–67]; this model has also been studied by different approaches, see e.g. [58]. An infamous difficulty, known as the large field problem, is that (1.76) can only be a good approximation when V and, thus φ, are small. This problem, in its simplest form, is already present in perturbations of the standard one-dimensional Gaussian measure. For example, I (g) = ∫ R e−gt 4 e−πt 2 dt (1.93) is a singular function of g at g = 0 because e−gt4 is not integrable for g < 0. Large fields turn out to cause difficulties for the applicability of certain expansion meth- ods, but their probability is very small (in a large deviation sense). The solution of Gawedzki and Kupiainen to the large field problem involves a separate treatment of small and large fields, in which the small field contribution gives rise to similar effective action as the formal analysis of Section 1.4.4, while the large field contri- bution is very small. Brydges and Yau [22] developed a different solution in which no distinction between small and large fields has to be made, by use of well-chosen weights on the space of field functionals. The main issue, however, is that the perturbations Z j involve an unbounded number of variables (as Λ → Zd) and that it is difficult to estimate the error to a formal approximation like (1.76) in a uniform way. 
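The singularity of (1.93) at g = 0 also shows up in its perturbation expansion: expanding e^{−gt⁴} and integrating term by term gives a power series in g whose coefficients grow factorially, so the expansion is only asymptotic. The sketch below is an illustration with an arbitrary value of g; the coefficients are the Gaussian moments of e^{−πt²}, which integrates to 1.

    import math
    import numpy as np

    g = 0.3
    t = np.linspace(-10.0, 10.0, 400001)
    I_num = float(np.sum(np.exp(-g * t**4 - np.pi * t**2)) * (t[1] - t[0]))

    def coeff(k):
        # coefficient of g^k: (-1)^k (4k-1)!! / (k! (2 pi)^{2k}),
        # from the moments of the Gaussian density e^{-pi t^2}
        return (-1)**k * math.prod(range(1, 4 * k, 2)) / (math.factorial(k) * (2 * math.pi)**(2 * k))

    partial = 0.0
    for k in range(0, 31):
        partial += coeff(k) * g**k
        if k in (2, 6, 10, 20, 30):
            print(k, partial, I_num)
    # the partial sums approach I(g) at first and then diverge: the expansion in g
    # is only asymptotic, consistent with I(g) being singular at g = 0

In the field-theoretic setting, however, the more serious obstruction is the one just described: the number of variables is unbounded as Λ → Z^d, and error estimates must be uniform in it.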
This difficulty has historically been handled by cluster expansions [22,65–67]. There, an important role is played by a polymer gas14 which, informally said, can describe the irrelevant directions of the formal analysis. The previously mentioned references use covariance decom- positions C = C1+ · · · in which the Cj are only exponentially localized, rather than 14As a warning, we emphasize that the polymers that appear in the polymer gas are not the same kind of polymers as in Section 1.2. 33 1.4. The renormalization group finite range, discussed in Section 1.4.3. The use of the finite range property allows a simplified treatment without cluster expansion [25, 27, 90]. Polymer gases The simplest version of a polymer gas is defined as follows; see [23,68]. Let P0(Λ) be the set of finite subsets of Λ; for later convenience, we include the empty set ∅ in P0(Λ) although, at the moment, it would be more natural not to do so. Suppose that, for each polymer Y ∈ P0(Λ), there is a weight K (Y ), called polymer activity. The partition function of the polymer gas with activity K is given by Z = ∞∑ N=0 1 N! ∑ Y1 ,... ,YN ∈P0 disjoint,Yi,∅ K (Y1) · · · K (YN ) = ∞∑ N=0 1 N! ∑ Y1 ,... ,YN ∈P0 K (Y1) · · · K (YN ) ∏ i, j e−v (Yi ,Yj ) , (1.94) with the hard core interaction v(Yi ,Yj ) =  0 (Yi ∩ Yj = ∅), ∞ (otherwise). (1.95) K (Y ) appears in analogy to the activity of the “particle” at Y in the grand canonical partition function of a gas, which is why it is called polymer activity. A simplification is the connected polymer gas with configuration space given by connected polymers CP0 ⊂ P0. This requires a notion of connected polymer with the property that each Y ∈ P0 has a unique disjoint decomposition Y = Y1 ∪ · · · ∪ YN into connected polymers in CP0. The activities K can then naturally be extended from CP0 to P0 by K (Y ) = K (Y1) · · · K (YN ) (1.96) with the convention K (∅) = 1. We identify polymer activities defined on CP0 with such defined on P0 satisfying (1.96) and call them connected polymer activities. The partition function (1.94) with P0 replaced by CP0 then has the simple form Z = 1 + ∑ Y ∈P0 ,Y,∅ K (Y ) = ∑ Y ∈P0 K (Y ). (1.97) The first expression shows that “Z ≈ 1” if K is “small,” but observe that P0 has 2|Λ| elements, so that, for the sum to be small, K (X ) must be very small for most X . 34 1.4. The renormalization group For the further development, it turns out convenient to introduce an algebraic structure on polymer activities, introduced in [22, 25]. Define a commutative and associative product on polymer activities by (F ◦ G)(X ) = ∑ Y ∈P0 (X ) F (Y )G(X \ Y ) (1.98) where P0(X ) denotes the polymers contained in X . Let 1 denote the constant polymer activity given by 1(Y ) = 1 for all Y ∈ P0. Then the partition function of a connected polymer gas is given by Z = (K ◦ 1)(Λ). (1.99) 1.4.7 Polymer representation In the use of polymer gases to control the renormalization group, the polymer activ- ities K (X ) are local field functionals. More precisely, the space of field functionals N is considered a commutative algebra with subalgebras N0(Y ) ⊂ N of field func- tionals that only depend on the field in Y ∈ P0 and a “small neighborhood” of Y . The polymer activities are then local in the sense that K (Y ) ∈ N0(Y ). The simplest example is the trivial polymer activity, denoted K = 1∅, and defined by 1∅ (X ) = 1 if X = ∅ and 1∅ (X ) = 0 else. 1∅ is the unit of the product ◦. 
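The combinatorial identities (1.94), (1.97), and (1.99) are easy to check on a computer for a tiny Λ. In the sketch below, which is an illustration only, the connected polymers are taken to be single sites, so that the multiplicative extension (1.96) is just a product of one-site activities; the hard-core gas sum (1.94), the sum over all polymers (1.97), and the circle product formula (1.99) then all give the same number, namely the product over x of (1 + K({x})).

    import math
    from itertools import combinations, permutations

    sites = list(range(4))                       # a tiny Lambda
    a = {x: 0.1 * (x + 1) for x in sites}        # one-site activities K({x}) = a[x] (arbitrary)

    def K(Y):                                    # multiplicative extension (1.96); K(empty) = 1
        p = 1.0
        for x in Y:
            p *= a[x]
        return p

    def subsets(S):
        return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

    def circle(F, G, X):                         # (F o G)(X) = sum_{Y subset X} F(Y) G(X \ Y), cf. (1.98)
        X = frozenset(X)
        return sum(F(Y) * G(X - Y) for Y in subsets(X))

    one = lambda Y: 1.0                          # the constant activity 1
    # (1.94): ordered N-tuples of disjoint, non-empty connected polymers, divided by N!
    Z_gas = sum(math.prod(a[x] for x in tup) / math.factorial(N)
                for N in range(len(sites) + 1)
                for tup in permutations(sites, N))
    Z_poly = sum(K(Y) for Y in subsets(sites))   # (1.97)
    Z_circ = circle(K, one, sites)               # (1.99)
    exact = math.prod(1 + a[x] for x in sites)
    print(Z_gas, Z_poly, Z_circ, exact)          # all four coincide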
The initial partition function can then be written as Z0 = I0(Λ) = (1∅ ◦ I0)(Λ) (1.100) where I0 : P0 → N is given by I (X ) = ∏x e−V0,x . If the covariance decomposition has the finite range property, see Section 1.4.3, it turns out that all Z j can be expressed in a similar way, but to obtain a useful representation, the class of polymers must be restricted to reflect the increasingly long range nature of the remaining fluctuation fields. More specifically, if Λ is a finite torus or cube of side length LN for some integers L and N , let B j (Λ) be a set of mutually disjoint blocks of side length L j with the property that their union equals Λ. Let P j (Λ) be the set of finite unions of blocks in B j (Λ); these are called scale j polymers. Then everything discussed in the previous section about polymer gases has a straightforward scale j generalization, given by replacing P0 with P j , and N0(Y ) with N j (Y ) which are field functionals that are allowed to depend on Y and a small neighborhood of blocks in B j near Y . In particular, the circle product ◦ then depends on j, although we will not emphasize this in the notation. Brydges and Slade [25,37] show that, if the finite range decomposition is given in terms of the same parameter L > 1, then Z j can be written as Z j = (K j ◦ I j )(Λ) = I j (Λ) + ∑ X∈P j (Λ) ,X,∅ K j (X )I j (Λ \ X ) (1.101) 35 1.4. The renormalization group where I j and K j and ◦ are defined on scale j polymers, I j is to second order es- sentially given by (1.76), and K j represents all of the higher order terms of formal perturbation theory in a rigorous fashion. To understand the significance that poly- mers must be at the correct length scale, observe that nth order terms of the formal approximation (1.76) have range O(nL j ). This is easily understood by the example of the the second-order term (1.86). The polymer gas description becomes useful if it can be arranged in such a way that nth order terms correspond, approximately, to polymer activities K (X ) on polymers with O(n) blocks so that K (X ) can be expected to be smaller and smaller when X is large. This compensates “loss of locality” by smallness. Finite range property To illustrate how the finite range property is helpful in obtaining the representation (1.101), we recall that the finite range property [Cj ]xy = 0 if d(x , y) ≥ cL j has the consequence that, if φ j = (φ j ,x )x is a Gaussian field with such a covariance, then φ j ,x and φ j ,y are independent if d(x , y) ≥ cL j . In particular, if Y1 , . . . ,YN ∈ P j do not touch each other, then E j N∏ i=1 K (Yi ) = N∏ i=1 E jK (Yi ). (1.102) Now suppose that a local field functional I1,x = I1,x (φ2+φ3+· · · ), independent of the first fluctuation field, φ1, is given in some way, and let δI0,x = I0,x − I1,x where I0,x = I0,x (φ1 + φ2 + · · · ) does depend on φ1. Then Z0 = I0(Λ) = ∏ x∈Λ I0,x = ∏ x∈Λ (I1,x + δI0,x ) = (δI0 ◦ I1)(Λ). (1.103) The expectation of Z0 with respect to C1 can be written as Z1 = E1Z0 = ∑ X∈P0 (Λ)  ∏ x∈Λ\X I1,x  E1  ∏ x∈X δI0,x  = ( ˜K1 ◦ I1)(Λ), (1.104) where I1(X ) = ∏ x∈X I1,x , ˜K1(X ) = E1  ∏ x∈X δI0,x  (1.105) and the product ◦ on scale 0. However, ˜K1(X ) depends on the field in a neighbor- hood of X of range O(L1) (or more generally O(L j ) at scale j). To write (1.104) 36 1.4. 
in terms of scale 1 polymers, one can restrict I_1 to B_1 and "coarsen" K̃_1 by setting

K_1(U) = Σ_{X ∈ P_0(U) : X̄ = U} ( Π_{x ∈ U\X} I_{1,x} ) E_1( Π_{x ∈ X} δI_{0,x} ) (1.106)

for U ∈ P_1, where the closure X̄ ∈ P_{j+1} of a polymer X ∈ P_j is the smallest scale j+1 polymer such that X̄ ⊇ X. The finite range property of E_1 implies that K_1(U) only depends on the field in U and in B_1-blocks touching U; the appropriate choice of N_j is such that K_1(U) ∈ N_1(U).

The representation Z_j = K_j ◦ I_j is far from unique: there are many choices of K_j and I_j that satisfy Z_j = K_j ◦ I_j. It is crucial to choose the I_j correctly, to capture the important directions, and the K_j such that K_{j+1} contracts compared to K_j in an appropriate norm. The details of this are quite delicate [25, 37]. The representation Z_j = K_j ◦ I_j bridges between the representation as an effective action, i.e., as an exponential, and the representation as a polymer gas. It resembles the expression e^{−V_j} + K_j sufficiently well to serve as a replacement, but at the same time gives the flexibility to measure the non-locality of the error.

1.4.8 Conclusion

The renormalization group, in the sense sketched in the previous subsections, can provide a complete description of the evolution of a local perturbation of a Gaussian field, Z_{j+1} = E_{j+1} Z_j, induced by a finite range decomposition of its covariance

C = C_1 + C_2 + · · · , (1.107)

in terms of tractable coordinates x_j = (K_j, V_j) defining field functionals Ẑ_j(K_j, V_j) such that, with V_j = (g_j, µ_j, z_j),

E_j · · · E_1 Z_0(V_0) = Ẑ_j(K_j, V_j) ≈ Π_{x∈Λ} e^{−g_j τ²_x − µ_j τ_x − z_j τ_{∆,x}}. (1.108)

The coordinates x_j lie on the trajectory of a dynamical system Φ,

Φ_j(K_j, V_j) = (K_{j+1}, V_{j+1}). (1.109)

The long-time properties of the dynamical system Φ can be used to establish properties of the large distance behavior of the fields. For example, if V_0 is chosen carefully, the flow V_j converges to the fixed point 0; this choice describes critical models. The phenomenon V_j → 0 is called infrared asymptotic freedom. The term infrared means that it concerns the large distance (long "wavelength") limit, while freedom refers to the fact that V = 0 describes a free field. Together with detailed estimates on K_j, guaranteeing that its contribution is sufficiently small, the convergence V_j → 0 can, for example, be used to prove that the critical model has the same scaling limit as the perturbed Gaussian field (in an appropriate sense). In addition, the trajectories of Φ close to the critical V_0 reveal information about the approach of the critical point, again with appropriate (non-trivial) estimates on the remainder part K_j.

In the next two chapters, two aspects of this program are studied in detail: the decomposition of Gaussian fields, and the analysis of a class of dynamical systems that arises in the renormalization group study of the weakly self-avoiding walk. As mentioned, we provide some explicit details of the connection between the results of Chapter 2 and Chapter 3 in Appendix A.

Chapter 2
Decomposition of free fields

2.1 Introduction and main result

2.1.1 The Newtonian potential

Let us place the result of this chapter into context via an example. Consider the Newtonian potential, the Green function of the Laplace operator on R^d, given by

Φ(x) = C_d |x|^{−(d−2)} (d ≥ 3), Φ(x) = C_d log(1/|x|) (d = 2), for all x ∈ R^d, x ≠ 0.
(2.1) For d ≥ 3 and any measurable function ϕ : [0,∞) → R such that td−3ϕ(t) is integrable, the Newtonian potential can be written, up to a constant, as |x |−(d−2) = ∫ ∞ 0 t−(d−2) ϕ(|x |/t) dt t for all x ∈ Rd , x , 0. (2.2) This is true because both sides are radially symmetric and homogeneous of degree −(d−2), where homogeneity of the right-hand side simply follows from the change of variables formula. In particular, ϕ can be chosen smooth with compact support and such that ϕ(|x |) is a positive semi-definite function on Rd . The last condition means that ϕ(|x |) is positive as a quadratic form: for any f ∈ C∞c (Rd ), that is, f : Rd → R smooth with compact support, Φt ( f , f ) := ∫ Rd ×Rd ϕ(|x − y |/t) f (x) f (y) dx dy ≥ 0. (2.3) Similarly, if d = 2, and ϕ : [0,∞) → R is any absolutely continuous function with ϕ(0) = 1 and such that ϕ′(t) is integrable, then log 1/|x | = ∫ ∞ 0 (ϕ(|x |/t) − ϕ(1/t)) dt t for all x ∈ R2, x , 0. (2.4) Indeed, for x , 0, log 1/|x | = ϕ(0) log 1/|x | = − ∫ ∞ 0 ϕ′(s) log 1/|x | ds = ∫ ∞ 0 ϕ′(s) ∫ s s/ |x | dt t ds, (2.5) 39 2.1. Introduction and main result and thus, since ϕ′ is integrable, by Fubini’s theorem, log 1/|x | = ∫ ∞ 0 ∫ t |x | t ϕ′(s) ds dt t = ∫ ∞ 0 (ϕ(t |x |) − ϕ(t)) dt t , (2.6) showing (2.4) after the change of variables t 7→ 1/t. Now suppose again that ϕ is chosen such that ϕ(|x |) is a positive semi-definite function on R2. Then the function R2 ∋ x 7→ ϕ(|x |/t) − ϕ(1/t) is positive as a quadratic form on the domain of smooth and compactly supported functions with vanishing integral: Φt ( f , f ) := ∫ R2×R2 (ϕ(|x − y |/t) − ϕ(1/t)) f (x) f (y) dx dy (2.7) = ∫ R2×R2 ϕ(|x − y |/t) f (x) f (y) dx dy ≥ 0 for all f ∈ C∞c (R2) with ∫ f dx = 0. The above shows that the Newtonian potentials (2.1) can be decomposed into integrals of compactly supported and positive semi-definite functions, with the ap- propriate restriction of the domain for d = 2. Let us recall at this point that the positivity of a quadratic form has the impor- tant implication that it entails the existence of a corresponding Gaussian process, discussed briefly in Section 2.1.4. However, it is also of interest in mathematical physics for different reasons [71]. 2.1.2 Finite range decompositions of quadratic forms It is an open problem to characterize the class of positive quadratic forms, S : D(S) × D(S) → R, that admit decompositions into integrals (or sums) of positive quadratic forms of finite range: for all f , g ∈ D(S), t > 0,  S( f , g) = ∫ ∞ 0 St ( f , g) dtt , St : D(S) × D(S) → R, St ( f , f ) ≥ 0, St ( f , g) = 0 if d(supp( f ), supp(g)) > θ(t), (2.8) where θ : (0,∞) → (0,∞) is increasing and d is a distance function. The condition of finite range, the last condition in (2.8), generalizes the property of compact support of the function ϕ in (2.3) to quadratic forms that are not defined by a convolution kernel. The difficulty in decomposing quadratic forms in such a way is to achieve the two conditions of positivity and finite range simultaneously. Note 40 2.1. Introduction and main result that by splitting up the integral, one can obtain a decomposition into a sum from (2.8), and conversely, a decomposition into a sum can be written as an integral (without regularity in t). For applications, not only the existence, but also the regularity of the decom- position (2.8) is important. 
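The identity (2.2) is a pure homogeneity statement and can be checked numerically for any integrable profile: the substitution s = |x|/t shows that the right-hand side equals c·|x|^{−(d−2)} with c = ∫_0^∞ s^{d−3} ϕ(s) ds. The sketch below is an illustration only; the Gaussian profile used here is chosen for convenience and is not the compactly supported, positive semi-definite ϕ needed for the finite range decomposition.

    import numpy as np

    d = 3
    phi = lambda s: np.exp(-s**2)                 # an integrable profile (illustration only)

    def trapz(y, x):
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

    def rhs(r):
        # integral over t of t^{-(d-2)} phi(r/t) dt/t, cf. (2.2)
        t = np.geomspace(1e-4 * r, 1e4 * r, 20001)
        return trapz(t ** (-(d - 2)) * phi(r / t) / t, t)

    s = np.linspace(0.0, 30.0, 200001)
    c = trapz(s ** (d - 3) * phi(s), s)           # c = integral of s^{d-3} phi(s) ds
    for r in (0.5, 1.0, 2.0, 4.0):
        print(r, rhs(r), c * r ** (-(d - 2)))     # the two columns agree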
Let (X, µ) be a metric measure space, i.e., a locally compact complete separable metric space X with a Radon measure µ on X with full support (i.e., µ is strictly positive), Cc (X ) the space of continuous functions on X with compact support, and Cb (X ) the space of bounded and continuous func- tions on X . Let us say that the decomposition (2.8) is regular if Cc (X ) ∩ D(S) is S-dense in D(S) and if every St has a bounded continuous kernel st ∈ Cb (X × X ): St ( f , g) = ∫ st (x , y) f (x)g(y) dµ(x) dµ(y) for all f , g ∈ Cc (X ) ∩ D(S). (2.9) For the decompositions (2.2), (2.4), the kernels are of course given in terms of the smooth function ϕ by the explicit formula φt (x , y) = t−(d−2)ϕ(|x − y |/t) for all x , y ∈ Rd , t > 0. (2.10) Note that for d = 2 the second term in (2.4) could be omitted by (2.7), with the understanding that the quadratic form is restricted to functions with vanishing in- tegral. It follows in particular that |φt (x , y) | ≤ Ct−(d−2) uniformly in all x , y ∈ Rd . (2.11) This reflects the decay of the Newtonian potential. Moreover, for all integers lx , ly ≥ 0, the derivatives of the kernel φt decay according to |Dlxx Dlyy φt (x , y) | ≤ Cl t−(d−2)t−lx−ly , (2.12) reflecting that |DlΦ(x) | ≤ Cl |x |−(d−2−l ) for all x ∈ Rd , x , 0. The main result of this chapter is a rather simple construction of decomposi- tions (2.8) with estimates like (2.11) for quadratic forms that arise by duality with Dirichlet forms in a large class. We call such forms Green forms, motivated by the Newtonian potential, or Green function, that is a special case; this is explained in Section 2.1.3. The main idea of our method is that (2.8) can be achieved by applying formulae like (2.2) to the spectral representation of the Green form, and then exploiting finite propagation speed properties of appropriate wave flows. These are generalizations of the fact that if u(t , x) is a solution to ∂2t u − ∆u = 0, u(0, x) = u0(x), ∂tu(0, x) = 0 (2.13) 41 2.1. Introduction and main result with compactly supported initial data u0 that then supp(u(t , ·)) ⊆ Nt (supp(u0)) (2.14) where Nt (U) = {x ∈ X : d(x ,U) ≤ t} for any U ⊂ X . The idea of exploiting properties of the wave equation in the context of proba- bility theory is not new. For example, Varopoulos [112] used the finite propagation speed of the wave equation to obtain Gaussian bounds on the heat kernel of gen- eral Markov chains, by decomposing it into an integral over compactly supported parts. Our objective is slightly different in that we are interested in the constraint of positive definite decompositions. Decompositions of singular functions into sums or integrals of smooth and compactly supported functions have a history in analysis. For example, Feffer- man’s celebrated proof of pointwise almost everywhere convergence of the Fourier series [56] uses a decomposition of 1/x onR like (2.2), albeit without using positive semi-definiteness. Hainzl and Seiringer [71], motivated by applications to quantum mechanics such as [57], decompose general radially symmetric functions, without assuming a priori that they are positive definite, into weighted integrals over tent functions. These, like ϕ(|x |) in (2.2), are positive semi-definite. They state suffi- cient conditions for the weight to be non-negative, and thus obtain decompositions like (2.2) for a class of radially symmetric potentials including e−m |x |/|x | on R3. Special cases and similar results have also appeared in earlier works of Pólya [94] and of Gneiting [69, 70]. 
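Returning to the propagation property (2.13)–(2.14): its simplest discrete manifestation is that a polynomial of degree n in a nearest-neighbour operator has range n. The sketch below is only an illustration in that spirit (it is not the construction used in Section 2.2): it runs the Chebyshev three-term recursion u_n = T_n(1 − L/2)u_0 for the one-dimensional discrete Laplacian, starting from a single-site initial condition, and checks that the support grows by at most one site per step while the iterates stay bounded, since the spectrum of 1 − L/2 lies in [−1, 1].

    import numpy as np

    n_sites = 101
    centre = n_sites // 2

    def L_op(v):
        # L = -Delta for the one-dimensional discrete Laplacian (2.15), periodic here
        return 2 * v - np.roll(v, 1) - np.roll(v, -1)

    # Chebyshev three-term recursion u_n = T_n(1 - L/2) u_0:
    #   u_{n+1} = 2 (1 - L/2) u_n - u_{n-1},   u_1 = (1 - L/2) u_0.
    # Each step applies L once, so u_n depends on u_0 only within graph distance n.
    u0 = np.zeros(n_sites)
    u0[centre] = 1.0
    u_prev, u = u0, u0 - 0.5 * L_op(u0)
    for n in range(2, 41):
        u, u_prev = 2 * (u - 0.5 * L_op(u)) - u_prev, u
        support = np.nonzero(np.abs(u) > 1e-14)[0]
        assert np.abs(support - centre).max() <= n          # finite propagation speed
    print("support radius after 40 steps:", np.abs(support - centre).max(),
          "max |u_n|:", float(np.abs(u).max()))             # bounded, since spec(1 - L/2) in [-1, 1]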
These results, like (2.2), make essential use of radial symmetry. One example of particular interest for probability theory—where radial symmetry is not given— is the Green function of the discrete Laplace operator: ∆Zdu(x) = ∑ e∈Zd :|e |1=1 (u(x + e) − u(x)) for any u : Zd → R, x ∈ Zd . (2.15) Brydges, Guadagni, and Mitter [27] showed that also in this discrete case, the corresponding Green function, or more generally the resolvent, admits a decompo- sition like (2.8) into a sum (instead of an integral) of positive semi-definite lattice functions with estimates analogous to (2.12). Brydges and Talarczyck [21] gave a related construction which applies to quite general elliptic operators on domains in R d , but estimates on the kernels of this decomposition are only known when the coefficients are constant. Their construction was adapted by Adams, Kotecký, and Müller [4] to show that the Green functions of constant coefficient discrete ellip- tic systems on Zd admit decompositions with estimates analogous to (2.12) and that the decomposition obtained this way is analytic as a function of the (constant) coefficients. These results are based on constructions that average Poisson kernels. 42 2.1. Introduction and main result Our method, sketched earlier, is different from that of [4, 21, 27, 31] and yields simpler proofs of their results about constant coefficient elliptic operators—both in discrete and continuous context. It furthermore naturally yields a decomposition into an integral instead of a sum (with integrand smooth in t), and gives effective estimates for decompositions of Green functions of variable coefficient operators. 2.1.3 Duality and spectral representation of the Green form Let us now introduce the general set-up in which our result is framed more pre- cisely. For motivation, we first return to the quadratic forms defined by the Newto- nian potentials (2.1): Φ( f , g) := ∫ Rd ×Rd Φ(x − y) f (x)g(y) dx dy, f , g ∈ D(Φ) (2.16) where  D(Φ) = C∞c (Rd ) (d ≥ 3) D(Φ) = { f ∈ C∞c (R2) : ∫ R2 f dx = 0} (d = 2). (2.17) These quadratic forms are not bounded on L2(Rd ), as is most apparent when d = 2. They are closely related to the Dirichlet forms given by E(u, v) := ∫ Rd ∇u · ∇v dx , u, v ∈ C∞c (Rd ). (2.18) The correspondence between the two is duality: for all f ∈ D(Φ), √ Φ( f , f ) = sup {∫ Rd f u dx : u ∈ C∞c (Rd ), E(u, u) ≤ 1 } . (2.19) This set-up admits the following natural generalization: Let (X, µ) be a metric measure space and L2(X ) be the Hilbert space of equivalence classes of real-valued square µ-integrable functions on X with inner product (u, v) = (u, v)L2 . Let E : D(E) × D(E) → R be a closed positive quadratic form on L2(X ) with D(E) ⊆ L2(X ) a dense linear subspace. It is sometimes convenient to assume that E is regular, i.e., that Cc (X ) ∩ D(E) is E-dense in D(E). That E is closed means that D(E) is a Hilbert space with inner product E(u, v) + m2(u, v)L2 for any m2 > 0. For the example (2.18), the domain of the form closure D(E) of C∞c (Rd ) is the usual Sobolev space H1(Rd ) and (u, v)H1 = E(u, v)+ (u, v)L2 is the usual Sobolev inner product. It follows [96] from closedness that E is the quadratic form associated to a unique self-adjoint operator L : D(L) → L2(X ), E(u, v) = (u, Lv) for u ∈ D(E), v ∈ D(L), (2.20) 43 2.1. Introduction and main result where D(L) ⊆ D(E) is a dense linear subspace in L2(X ). The self-adjointness of L gives rise to a spectral family and functional calculus. 
This means in particular that for any Borel measurable F : [0,∞) → R, there is a self-adjoint operator, denoted F(L) : D(F(L)) → L²(X), where

F(L) := ∫_0^∞ F(λ) dP_λ, (2.21)

D(F(L)) := { u ∈ L²(X) : ∫_0^∞ F(λ)² d(u, P_λ u) < ∞ } (2.22)

with P_λ the spectral family associated to L, and (u, P_λ u) the spectral measure associated to L and u ∈ L²(X). In these terms, E has the representation

E(u, u) = ‖L^{1/2} u‖²_{L²(X)} = ∫_{spec(L)} λ d(u, P_λ u), u ∈ D(E) = D(L^{1/2}), (2.23)

where E(u, v) is defined by the polarization identity if u ≠ v. Similarly, the corresponding Green form can be defined by polarization and

Φ(f, f) = ‖L^{−1/2} f‖²_{L²(X)} = ∫_{spec(L)} λ^{−1} d(f, P_λ f), f ∈ D(Φ) = D(L^{−1/2}). (2.24)

This representation will be our starting point for the decomposition of the Green form. Before stating the result and its proof, let us sketch how the decomposition problem arises in probability theory.

2.1.4 Gaussian fields and statistical mechanics

The linear space D(E) is complete under the metric induced by the inner product E(u, v) + m²(u, v)_{L²} for any m² > 0, but it is generally not complete for m² = 0. It may however be completed to a Hilbert space abstractly; we denote this Hilbert space by (H_E, (·, ·)_E). Similarly, we can complete the domain D(Φ) to a Hilbert space under the quadratic form Φ; this Hilbert space is denoted by (H_Φ, (·, ·)_Φ). H_E and H_Φ are dual in the following sense: the L² inner product can be restricted to

〈·, ·〉 : D(Φ) × D(E) → R, 〈f, u〉 = (f, u) = (L^{−1/2} f, L^{1/2} u) (2.25)

which extends to a bounded bilinear form on H_Φ × H_E. L acts by definition isometrically from D(E) to D(Φ), with respect to the norms of H_E and H_Φ, and it extends to an isometric isomorphism from H_E to H_Φ. Thus H_Φ is naturally identified with the dual space of H_E, via the extension of the L² pairing 〈·, ·〉.

Remark 2.1.1. To give some insight into the interpretation of the spaces H_E and H_Φ, let us mention how H_E can be characterized in the case of the Newtonian potential [40]:

H_E ≅ { f : R^d → R measurable : there exists an E-Cauchy sequence f_n ∈ D(E) with f_n → f a.e. } / ∼_d (2.26)

where ∼_d is the usual identification of functions that are equal almost everywhere when d ≥ 3. For d = 2, ∼_d in contrast identifies functions that may differ by a constant almost everywhere. (It is therefore sometimes said that the massless free field does not exist in two dimensions, but that its gradient does. The massless free field is the free field corresponding to Φ in the terminology explained below.) To understand this distinction, take a smooth cut-off function ϕ_1 on R², e.g. with ϕ_1 ≡ 1 on B_1(0) and ϕ_1 ≡ 0 on B_2(0)^c, set ϕ_n(x) = ϕ_1(x/n), and note that E(ϕ_n, ϕ_n) = n^{d−2} E(ϕ_1, ϕ_1). Thus, (ϕ_n) is bounded in H_E whenever d ≤ 2, and then (by the Banach-Alaoglu theorem) there is ψ ∈ H_E such that ϕ_n → ψ weakly along a subsequence in H_E; however, ϕ_n → 1 pointwise, so that ψ ≡ 1 ∈ H_E. Now E(1, 1) = 0 implies that the constant functions must be in the same equivalence class as the zero function.

It is well-known that any separable real Hilbert space (H, (·, ·)_H) defines a Gaussian process indexed by H [105]. This is a probability space (Ω, P) and a unitary map f ∈ H ↦ 〈f, φ〉 ∈ L²(P) such that the random variables 〈f, φ〉 are Gaussian with variance (f, f)_H. Note that 〈f, φ〉 is merely a symbolic notation for the random variable on L²(P) that corresponds to f ∈ H. It cannot in general be interpreted as the pairing of f ∈ H with a random element φ(ω) ∈ H defined for ω ∈ Ω; see e.g.
[101]. In particular, if (H, (·, ·)H ) is the Hilbert space (HΦ , (·, ·)HΦ ), this process is called the free field or the Gaussian free field (corre- sponding to Dirichlet form E or Green function Φ). This is a generalization of the context introduced in Section 1.3.2 where X is a countable set and δx ∈ H , (x ∈ X ) so that the field φx = 〈δx , φ〉 has a pointwise interpretation. 2.1.5 Main result Let (X, µ) be a metric measure space. In addition, suppose that d : X ×X → [0,∞] is an extended pseudometric on X . (Extended means that d(x , y) may be infinite and pseudo that d(x , y) = 0 for x , y is allowed. Example 2.1.4 below gives an example of interest where d is not the metric of X .) Let E : D(E) × D(E) → R be a regular closed symmetric form on L2(X ) as in Section 2.1.3 and denote by L : D(L) → L2(X ) its self-adjoint generator. 45 2.1. Introduction and main result Theorem 2.1.2 assumes that (X, µ, d , E) satisfies one of the following two finite propagation speed conditions that we now introduce: For γ > 0, B > 0, and an increasing function θ : (0,∞) → (0,∞), let us say that (X, µ, d , E) satisfies (Pγ,θ) respectively (P∗ θ,B ) if: supp(cos(L 12γt)u) ⊆ Nθ (t ) (supp(u)) for all u ∈ Cc (X ), t > 0, (Pγ,θ) respectively E(u, u) ≤ B‖u‖L2(X ) for all u ∈ L2(X ), supp(Lnu) ⊆ Nθ (n) (supp(u)) for all u ∈ Cc (X ), n ∈ N, (P∗ θ,B ) where as before Nt (U) = {x ∈ X : d(x ,U) ≤ t} for any U ⊂ X . The left-hand side of (Pγ,θ) is defined in terms of functional calculus for the self-adjoint operator L. Note that if L = −∆Rd = − ∑d i=1 ∂ 2 xi is the standard Laplace operator of Rd , then u(t , x) = [cos(L 12 t)u0](x) is a solution to the standard wave equation (2.13), and the condition (Pγ,θ) with γ = 1 and θ(t) = t is the finite propagation speed property (2.14). The property holds for more general elliptic operators and ellip- tic systems (not necessarily of second order), however; see Example 2.1.4 below. Similarly, if L = −∆Zd is the discrete Laplace operator (2.15), then (P∗θ,B) holds with B = 2d and θ(n) = n, since Lu(x) only depends on u(y) when x and y are nearest neighbors. As for the property (Pγ,θ), the condition (P∗θ,B) remains true for more general discrete Dirichlet forms; see Examples 2.1.4–2.1.5. Let us introduce a further condition: The heat kernel bound (Hα,ω) holds when the heat semigroup (e−t L )t>0 has continuous kernels pt for all t > 0 and there is α > 0 and a bounded function ω : X → R+ such that pt (x , x) ≤ ω(x)t−α/2 for all x ∈ X . (Hα,ω) Criteria for (Hα,ω) are classic; see e.g. [91] for second-order elliptic operators and also the discussion in the examples below. Theorem 2.1.2. Suppose (X, µ, d , E) satisfies (Pγ,θ) or (P∗θ,B). Then the corre- sponding Green form (2.24) admits a finite range decomposition (2.8) with S = Φ and St = Φt such that the Φt are bounded quadratic forms with |Φt ( f , g) | ≤ Cγ,Bt2/γ ‖ f ‖L2(X ) ‖g‖L2(X ) for all f , g ∈ L2(X ). (2.27) Moreover, (Hα,ω) implies that the Φt have continuous kernels φt that satisfy |φt (x , y) | ≤ Cα,γ,B √ ω(x)ω(y)t−(α−2)/γ . (2.28) 46 2.1. Introduction and main result 2.1.6 Examples Example 2.1.3 (Elliptic operators with constant coefficients). Let a = (ai j )1≤i , j≤d be a strictly positive definite matrix in Rd×d and Ea (u, v) = d∑ i , j=1 ∫ Rd (Diu(x))ai j (D jv(x)) dx , u, v ∈ C∞c (Rd ), (2.29) E ∗ a (u, v) = d∑ i , j=1 ∑ x∈Zd (∇iu(x))ai j (∇ jv(x)), u, v ∈ Cc (Zd ), (2.30) where Diu(x) is the partial derivative of u(x) in direction i = 1, . . . 
, d, ∇iu(x) = u(x + ei ) − u(x) (2.31) with ei the unit vector in the positive ith direction, and Cc (Zd ) is the space of functions u : Zd → R with finite support. For m2 ≥ 0, further set Ea ,m2 (u, v) = Ea (u, v) + m2 ∫ Rd u(x)v(x) dx (2.32) and define E∗a ,m2 analogously. Assume that the eigenvalues of a are contained in the interval [B2− , B2+], and in the discrete case also that m2 ∈ [0, M2+] for some B2− , B2+ , M2+ > 0; these assumptions are only important for uniformity in the con- stants below. In the continuous context, let d be the Euclidean distance on X = Rd and µ be the Lebesgue measure. It follows that (X, µ, d , E) satisfies (Pγ,θ) with γ = 1, θ(t) = B+t; see Example 2.1.4 for more details. In the discrete context, let d be the infinity distance on X = Zd , i.e., d(x , y) = maxi=1,...d |xi − yi |, and µ be the counting measure. Then (P∗ θ,B ) holds with B = B+ + M2+ and θ(n) = n. Theorem 2.1.2 implies that the Green functions associated to Ea ,m2 and E∗a ,m2 admit finite range decompositions. We denote their kernels by φt (x , y; a,m2) and φ∗t (x , y; a,m2). In addition to (2.28), it is not difficult to obtain estimates on the decay of the derivatives of φt and φ∗t , like (2.12), in this situation of constant coefficients. Since these estimates are of interest for applications, we provide the details in Section 2.3.2 (in a slightly more general context). We show that there are constants Cl ,k > 0 depending only on B− and B+, and in the discrete case also on M+, such that |Dlaa Dlm2m2 Dlyy Dlxx φt (x , y; a,m2) | ≤ Cl ,k t−(d−2)−lx−ly+2lm2 (1 + m2t2)−k (2.33) and |Dlaa Dlm2m2 ∇lyy ∇lxx φ∗t (x , y, t; a,m2) | ≤ Cl ,k t−(d−2)−lx−ly+2lm2 (1+m2t2)−k (2.34) 47 2.1. Introduction and main result for all integers la , lm2 , lx , ly , and k such that lm2 < 12 (d + lx + ly ), (2.35) and that the following approximation result holds: There is c > 0 such that ∇lxx ∇lyy φ∗t (x , y; a,m2) = cd−2Dlxx Dlyy φt (cx , cy; a, c−2m2) + O(t−(d−2)−lx−ly−1(1 + m2t2)−k ). (2.36) This reproduces and generalizes many results of [4, 27]. More precisely, we verify that there exists a smooth function ¯φ : Rd×[B2− , B2+]×[0,∞) → R supported in |x | ≤ B+ such that φt (x , y; a,m2) = t−(d−2) ¯φ ( x − y t ; a,m2t2 ) (2.37) which has the same structure as (2.10) when m2 = 0; this is scale invariance. Moreover, by (2.36), the discrete Green function has a scaling limit and the error is of the order of the rescaled lattice spacing O(t−1). This result improves [31]. Example 2.1.4 (Elliptic operators and systems with variable coefficients). Let M ∈ N and ai j : Rd → RM ×M , i, j = 1, . . . , d, be the smooth coefficients of a uni- formly elliptic system (or in particular, if M = 1, of a uniformly elliptic operator): B2− |ξ |2 ≤ M∑ k ,l=1 d∑ i , j=1 akli j (x)ξki ξ lj ≤ B2+ |ξ |2 for all ξ ∈ RdM , x ∈ Rd , (2.38) with B− , B+ > 0. Let us write u = (u1 , . . . , uM ) ∈ RdM with ui ∈ Rd , i = 1, . . . , M . Let E(u, v) = d∑ i , j=1 ∫ Rd (Diuk (x))akli j (x)(D jul (x)) dx , u, v ∈ C∞c (Rd ,RM ) (2.39) and analogously in the discrete case (as in (2.29), (2.30)). To apply Theorem 2.1.2, (X, µ, d) is defined by X = Rd × {1, . . . , M }, µ is the product of the Lebesgue measure on Rd and the counting measure on {1, . . . , M }, and the distance is given by d((x , i), (y, j)) = d(x , y). In particular, d is only a pseudometric on X . We may use the identification of u : Rd → RM and u : X → R by u(x , i) = ui (x). It suffices to verify the condition (P1,B+t ) for smooth, compactly supported u0 : R d → RM . 
For such a u0, set, by using spectral theory for self-adjoint operators: u(t) := cos((L + m2) 12 t)u0. (2.40) 48 2.1. Introduction and main result Then, since u0 is smooth, u(t , x) : R × Rd → RM is smooth jointly in (t , x), and ∂2t u + Lu + m 2u = 0, ∂tu(0) = 0, u(0) = u0 (2.41) holds in the classical sense. If M = 1, m2 = 0, and a is the d × d identity matrix, (P1,t ) is the finite propagation speed of the wave equation. Similarly, in the general situation, the property (P1,B+t ) can be deduced from the finite propagation speed of first order hyperbolic systems. This is well-known, but the explicit reduction for the case of (2.41) with (2.39) is difficult is to find in the literature. Let us therefore sketch how to convert (2.41) to a hyperbolic system for readers interested in this case. For example, one can define v : R × Rd → R(d+2)M by: vk0 = ∂tu k , vki = d∑ j=1 M∑ l=1 akli j ∂x j u l , vkd+1 = mu k , (2.42) where i = {1, . . . , d} and k ∈ {1, . . . , M }. It follows that v satisfies S∂tv + d∑ j=1 A j∂x j v + Bv = 0, v(0) = (0, (aDu0)1 , . . . , (aDu0)d ,mu0) (2.43) where S,A j ,B : Rd → R(d+2)M × (d+2)M are defined as the block matrices S =  1M ×M 0dM ×M 0M ×M 0M ×dM a−1 0M ×dM 0M ×M 0dM ×M 1M ×M  , B =  01×1 0d×1 m 01×d 0d×d 01×d −m 0d×1 01×1  ⊗ 1M ×M , (2.44) and Ai =  0 −δ1i · · · −δdi 0 −δ1i 0 . . . 0 0 ... ... . . . ... 0 −δdi 0 . . . 0 0 0 0 · · · 0 0  ⊗ 1M ×M , i = 1, . . . , d. (2.45) It is immediate that this system is symmetric uniformly hyperbolic, by the sym- metry and uniform ellipticity of the matrix a. The property (P1,B+t ) now follows from the finite propagation speed of linear hyperbolic systems; see e.g. [7, 84]. Nash showed [91] that (Hd ,ω ) holds when M = 1. In [77, 81], conditions are given for (Hd ,ω ) to hold when M > 1. In particular, this includes the constant coefficient case. The latter case can be treated by using the Fourier transform; see Section 2.3.2. 49 2.1. Introduction and main result Example 2.1.5 (Random walk on graphs). Let (X, E) be a (locally finite) graph, with vertex set X and edge set E ⊂ P2(X ), where X is a countable (or finite) set and P2(X ) are the subsets of X with two elements. Let d : X × X → [0,∞] be the graph distance on (X, E), i.e., d(x , y) is the (unweighted) length of the shortest path from x to y. Suppose that edge weights µxy = µyx ≥ 0, x , y ∈ X are given. These induce a natural measure, also denoted µ, on X by: µx = ∑ y∈X µxy , µ(A) = ∑ x∈A µx for all A ⊆ X . (2.46) The associated Dirichlet form is E(u, u) = 12 ∑ xy∈E µxy (u(x) − u(y))2 for all u ∈ D(E) = L2(µ) (2.47) and its generator is given by Lu(x) = µ−1x ∑ y∈X µxy (u(x)−u(y)) for all finitely supported u : X → R. (2.48) L is called the probabilistic Laplace operator associated to the simple random walk on the weighted graph (X, µ) with transition probabilities µxy/µx . Let us remark that a probabilistic interpretation (or a maximum principle) does not hold in general for Examples 2.1.3–2.1.4 (when a is non-diagonal or vector-valued). The Dirichlet form (2.47) is bounded on L2(µ) with operator norm 2 so that the property (P∗ θ,B ) holds with θ(n) = n and B = 2, and Theorem 2.1.2 is applicable. For applications, it is often useful to add a killing rate to the random walk: The probabilistic Green density with killing rate κ ∈ (0, 1) is defined by: Gκ (x , y) = ∑ n≥0 pn (x , y)κn = (κL + (1 − κ))−1(x , y) = (Lκ )−1(x , y) (2.49) where pn (x , y) is the kernel of the operator Pn on L2(µ). 
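Example 2.1.5 can be made concrete on a small weighted graph. The sketch below is an illustration only (the graph, its edge weights, and the value of κ are arbitrary choices): it builds the probabilistic Laplace operator (2.48) as a matrix and checks, at the level of operators, that (κL + (1 − κ))^{-1} agrees with the Neumann series over powers of the transition matrix and with κ^{-1}(L + (1 − κ)/κ)^{-1}, as in (2.49)–(2.50); the Green densities themselves carry an additional µ-weighting not shown here.

    import numpy as np

    # a small weighted graph on 5 vertices with symmetric edge weights mu_xy
    W = np.array([[0., 2., 0., 0., 1.],
                  [2., 0., 3., 0., 0.],
                  [0., 3., 0., 1., 0.],
                  [0., 0., 1., 0., 2.],
                  [1., 0., 0., 2., 0.]])
    mu = W.sum(axis=1)                        # mu_x = sum_y mu_xy, cf. (2.46)
    P = W / mu[:, None]                       # transition probabilities mu_xy / mu_x
    L = np.eye(5) - P                         # probabilistic Laplace operator, cf. (2.48)

    kappa = 0.7
    G_kappa = np.linalg.inv(kappa * L + (1 - kappa) * np.eye(5))      # (2.49)

    G_series, term = np.zeros((5, 5)), np.eye(5)
    for _ in range(500):                      # Neumann series: sum over n of kappa^n P^n
        G_series += term
        term = kappa * term @ P

    m2 = (1 - kappa) / kappa
    G_resolvent = np.linalg.inv(L + m2 * np.eye(5))                   # (L + m^2)^{-1}
    print(np.allclose(G_kappa, G_series),
          np.allclose(G_kappa, G_resolvent / kappa))                  # (2.50)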
Note that (2.49) only converges for κ = 0 when the random walk is transient, but that L−1 still makes sense as a quadratic form on its appropriate domain when the random walk is re- current, as in (2.16), (2.17) for d = 2. Note further that spec(Lκ ) ⊆ [0, 2] for all κ ∈ [0, 1], so that Theorem 2.1.2 is applicable uniformly in κ ∈ [0, 1]. Closely related to the killed Green function Gκ is the resolvent kernel of L. The resolvent of L is defined on L2(µ) by Gm2 = (L + m2)−1 for m2 > 0. It is related to the killed Green density by: Gκ = κ−1G(1−κ)/κ . (2.50) 50 2.1. Introduction and main result One difference compared with the killed Green function is that L+m2 is not bound- ed uniformly in m2 ≥ 0. To achieve the condition (P∗ θ,B ) for fixed B > 0, it is therefore necessary to restrict to m2 ≤ M2+ with M2+ = B − 2. Remark 2.1.6. Other examples which Theorem 2.1.2 is applicable to include Dir- ichlet spaces that satisfy a Davies-Gaffney estimate [103] such as weighted mani- folds and quadratic forms corresponding to powers of elliptic operators like ∆2. 2.1.7 Remarks Remark 2.1.7. Theorem 2.1.2 also gives the decomposition into sums as in [4, 21, 27]: Suppose that the assumptions of Theorem 2.1.2 are satisfied and, for notational simplicity, that the resulting decomposition has a kernel. Then, for any L > 1, Φ(x , y) = ∑ j∈Z Cj (x , y) for all x , y ∈ X × X (2.51) where the functions Cj : X × X → [0,∞), j ∈ Z are given by Cj (x , y) := ∫ L j L j−1 φt (x , y) dtt for all x , y ∈ X . (2.52) They satisfy the following properties: Cj is the kernel of a positive semi-definite form, (2.53) Cj (x , y) = 0 for all x , y ∈ X with d(x , y) ≥ L j , (2.54) and, if (Hα,ω) holds, |Cj (x , y) | ≤ cα (x , y)  L−(α−2)( j−1) (α > 2) L(2−α) j (α < 2) log(L) (α = 2) (2.55) with cα (x , y) is independent of L. Thus, (Cj ) j∈Z is a finite range decomposition into discrete scales of the Green function Φ. Similarly, gradient estimates such as (2.33), (2.34), (2.36) in Example 2.1.3 have obvious discrete versions. Remark 2.1.8. More generally than in Theorem 2.1.2, we may consider a family of symmetric forms, (Es )s∈Y , where Y is a domain in a Banach space, with generators Ls . Let us assume that Es is smooth in s, in the following sense: There exists a projection-valued measure P on a measurable space M and a function V : M ×Y → (0,∞), smooth in Y , such that F (Ls ) = ∫ spec(Ls ) F (λ) dPsλ = ∫ M F (V (s, τ)) dPτ . (2.56) 51 2.2. Proof of main result An example of this condition is Es ( f , f ) = E( f , f ) + s( f , f ) so that V (s, λ) = λ + s and (Ls )−1 is the resolvent of L; similarly, the killed Green function of Example 2.1.5 can be expressed in this way. Then the family of kernels φs is continuous in s, and if (Hα,ω) holds for s = 0, and V (λ, s) ≥ z2(s)V (λ, 0)+m2(s), then |φst (x , y) | ≤ Cα,γ,l √ ω(x)ω(y)(z(s)t)−(α−2)/γ (1 + tm(s))−l . (2.57) This can be verified by a straightforward adaption of the proof of Theorem 2.1.2. 2.2 Proof of main result 2.2.1 Spectral decomposition The starting point for the proof is the spectral representation of the Green form (2.24): Φ( f , f ) = ∫ spec(L) λ−1 d( f , Pλ f ) for all f ∈ D(Φ), (2.58) where f ∈ D(Φ) implies that the integral can be restricted to spec(L) \ 0. The main result follows by decomposition of the function λ−1 : spec(L) \ 0 → R+. Different decompositions are needed under the two conditions (Pγ,θ), (P∗θ,B). The main idea of the proof is that decompositions with good properties exist. 
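Before stating this precisely, let us illustrate the decomposition of λ−1 numerically in the case γ = 1. The following Python sketch is purely illustrative: the choice ϕ(s) = e−s² is convenient because tϕ(t) is integrable, but its Fourier transform is not compactly supported, so this ϕ would not yield the finite range property needed below.

    import numpy as np
    from scipy.integrate import quad

    # Toy check of lambda^{-1} = C * int_0^infty t^2 phi(lambda^{1/2} t) dt/t  (gamma = 1),
    # with phi(s) = exp(-s^2) chosen only for illustration.
    phi = lambda s: np.exp(-s**2)
    C = 1.0 / quad(lambda s: s * phi(s), 0, np.inf)[0]        # here C = 2
    for lam in [0.1, 1.0, 7.5]:
        integral = quad(lambda t: t * phi(np.sqrt(lam) * t), 0, np.inf)[0]
        assert abs(C * integral - 1.0 / lam) < 1e-6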
The result that we prove after using it to deduce Theorem 2.1.2 is summarized in the following lemma. Lemma 2.2.1 (Spectral decomposition). Suppose that L satisfies (Pγ,θ) or (P∗θ,B); in the second case, we assume that γ = 1. Then there exists a smooth family of functions Wt ∈ C∞ (R), t > 0, such that for all λ ∈ spec(L) \ 0, t > 0, and all integers l, λ−1 = ∫ ∞ 0 t 2 γ Wt (λ) dtt , (2.59) Wt (λ) ≥ 0, (2.60) (1 + t 2γ λ)lWt (λ) ≤ Cl , (2.61) and that for all u ∈ Cc (X ), supp(Wt (L)u) ⊆ Nθ (t ) (supp(u)). (2.62) 52 2.2. Proof of main result Remark 2.2.2. More precisely, we will give explicit formulae for Wt that imply (1 + t2λ)lλm ∣∣∣∣∣ ∂ m ∂λm Wt (λ) ∣∣∣∣∣ ≤ Cl ,m (2.63) for all m and l, improving (2.61). This improvement is used in Section 2.3.2. Proof of Theorem 2.1.2. It follows from (2.59) that, for any f ∈ D(Φ), Φ( f , f ) = ∫ spec(L) (∫ ∞ 0 t 2 γ Wt (λ) dtt ) d( f , Pλ f ) (2.64) = ∫ ∞ 0 t 2 γ (∫ spec(L) Wt (λ) d( f , Pλ f ) ) dt t = ∫ ∞ 0 t 2 γ ( f ,Wt (L) f ) dtt . The exchange of the order of the two integrals in the equation above is justi- fied by non-negativity of the integrand, by (2.60). The latter also implies that ( f ,Wt (L) f ) ≥ 0 for all f ∈ L2(X ). The polarization identity allows to recover Φ( f , g) for all f , g ∈ D(Φ). Finally, (2.62) completes the verification of (2.8) for Φt defined by Φt ( f , g) = t 2 γ ( f ,Wt (L)g). (2.65) It remains to prove that (Hα,ω) implies (2.28). The semigroup property and the continuity of pt imply that pt ∈ Cb (X, L2(X )) with ‖pt (x , ·)‖L2 (X ) = ∫ X pt (x , y)pt (y, x) dµ(y) = p2t (x , x), (2.66) ‖pt (x , ·) − pt (y, ·)‖L2 (X ) = p2t (x , x) + p2t (y, y) − 2p2t (x , y) → 0 as x → y. (2.67) This implies that e−t L : L2(X ) → Cb (X ) is a bounded linear operator (e−t L f (x) = (pt (x , ·), f )). Duality then also implies continuity of e−t L : Cb (X )∗ → L2(X ) (with respect to the strong topology on Cb (X )∗). Let M (X ) ⊆ Cb (X )∗ be the space of signed finite Radon measures on X equipped with the weak-* topology. Let mi ∈ M (X ) with mi → 0. Then: ‖e−t Lmi ‖L2 (X ) =  ∫ X (∫ X pt (x , y) dmi (y) )2 dµ(x)  1 2 = (∫ X ∫ X (pt (y, ·), pt (z, ·)) dmi (y) dmi (z) ) 1 2 → 0 (2.68) 53 2.2. Proof of main result which means that e−t L : M (X ) → L2(X ) is continuous (because X is separable and therefore the weak-* topology of M (X ) is metrizable). This implies that (1 + t2/γL)−l : M (X ) → L2(X ) is likewise continuous for all l > α/4. To see this, we use the relation (1 + t2/γλ)−l = Γ(l)−1 ∫ ∞ 0 e−s sl−1e−st 2/γλ ds (2.69) which holds by the change of variables formula and the definition of Euler’s gamma function. The spectral theorem thus implies that, for any u ∈ L2(X ), ‖(1 + t2/γL)−lu‖L2 (X ) ≤ Γ(l)−1 ∫ ∞ 0 e−s sl−1‖e−st2/γLu‖L2 (X ) ds. (2.70) Since µ has full support, L2(X )∩M (X ) is dense in M (X ) (where Lp (X ) is always with respect to µ), and the claimed continuity of (1 + t2/γL)−l : M (X ) → L2(X ) follows from (2.68). In particular, the pointwise bound for pt implies that for l > α/4, ‖(1 + t2/γL)−lδx ‖L2 (X ) ≤ Γ(l)−1 ∫ ∞ 0 e−s sl−1‖e−st2/γLδx ‖L2 (X ) ds (2.71) ≤ Γ(l)−1 √ ω(x)t−α/2γ ∫ ∞ 0 e−s sl−1−α/4 ds = C √ ω(x)t−α/2γ . Let κt (λ) = Wt (λ)1/2. Then (2.61) and the spectral theorem also imply that ‖κt (L)(1 + t2/γL)l ‖L2 (X )→L2 (X ) = sup λ>0 κt (λ)(1 + t2/γλ)l ≤ Cl , (2.72) uniformly in t > 0. It follows from (2.71) that κt (L) : M (X ) → L2(X ) with ‖κt (L)δx ‖L2 ≤ C √ ω(x)t−α/2γ . 
(2.73) Finally, by the Cauchy-Schwarz inequality, |φt (x , y) | = t2/γ (κt (L)δy , κt (L)δx ) ≤ t2/γ ‖κt (L)δy ‖L2 (X ) ‖κt (L)δx ‖L2 (X ) (2.74) which, with (2.73), proves (2.28). The continuity of φt is implied by the continuity of κt (L) : M (X ) → L2(X ) and of δx in x ∈ X (in the weak-* topology).  Remark 2.2.3. The decay for φs claimed in (2.57) can be obtained by a straightfor- ward generalization of the above argument, replacing (2.69) by (1 + t2/γ z2λ + t2/γm2)−l = Γ(l)−1 ∫ ∞ 0 e−s sl−1e−st 2/γm2 e−sz 2t2/γλ ds. (2.75) 54 2.2. Proof of main result Remark 2.2.4. Furthermore, by (2.61), the operators Wt (L) are smoothing for t > 0, in the general sense that, for any t > 0, Wt (L) : L2(X ) → C∞ (L), where C∞ (L) := ∞⋂ n=0 D(Ln ) ⊂ L2(X ) (2.76) is the set of C∞-vectors for L; see [95]. Standard elliptic regularity estimates imply e.g. that C∞ (L) = C∞ (X ) when E is the quadratic form associated to an elliptic operator with smooth coefficients. 2.2.2 Proof of Lemma 2.2.1 To complete the proof of Theorem 2.1.2, it remains to demonstrate Lemma 2.2.1. We first prove it under condition (Pγ,θ) in Lemma 2.2.5 below; this proof is quite straightforward using the assumption and (2.2). Then we prove Lemma 2.2.1 in the situation of condition (P∗ θ,B ) in Lemma 2.2.7; here additional ideas are required. To fix conventions, let us define the Fourier transform of an integrable function ϕ : R→ R by ϕ̂(k) = (2pi)−1 ∫ R ϕ(x)e−ik x dx for all k ∈ R. (2.77) Lemma 2.2.5 (Lemma 2.2.1 under (Pγ,θ)). For any ϕ : R→ [0,∞) such that ϕ̂ is smooth and symmetric with supp(ϕ̂) ⊆ [−1, 1], and for any γ > 0, there is C > 0 such that Wt (λ) := Cϕ(λ 12γt) (2.78) satisfies (2.59), (2.60), (2.61), and also (2.63), for all λ > 0, t > 0; and if (Pγ,θ) holds, then (Wt ) also satisfies (2.62). Remark 2.2.6. It is not difficult to see that such ϕ exist. For example, if κ̂ is a smooth real-valued function with support in [− 12 , 12 ], then ϕ = |κ |2 satisfies the assumptions. For simplicity, let us assume sometimes in the following that ϕ is chosen such that C = 1 when Lemma 2.2.1 is applied. Proof. Note that for any ϕ : [0,∞) → R with tϕ(t) integrable, there is C > 0 such that λ−1 = C ∫ ∞ 0 t 2 γ ϕ(λ 12γt) dt t for all λ > 0. (2.79) This simply follows (as in (2.2)) because the right-hand side is homogeneous in λ of degree −1, which is immediate by rescaling of the integration variable. This shows (2.59); (2.60) is obvious by assumption; and (2.61) follows since ϕ̂ 55 2.2. Proof of main result is smooth. The improved estimate (2.63) follows from the chain rule (or Faà di Bruno’s formula) and λm− 1 2γ ∣∣∣∣∣ ∂ m ∂λm λ 1 2γ ∣∣∣∣∣ ≤ Cγ,m (2.80) for non-negative integers m, using that supp(ϕ̂) ⊆ [−1, 1] implies that ϕ is smooth. Moreover, since supp(ϕ̂) ⊂ [−1, 1], and since ϕ̂ is smooth, Wt (L)u = C ∫ 1 −1 ϕ̂(s) cos(L 12γts)u ds for all u ∈ L2(X ), (2.81) where the integral is the Riemann integral, i.e., the strong limit of its Riemann sums (with values in L2). Therefore (2.62) follows from (Pγ,θ).  The previous proof makes essential use of the finite propagation speed of the wave equation (Pγ,θ) to prove (2.62). This property fails for discrete Dirichlet forms such as (2.30) where we instead know the property (P∗ θ,B ) that polynomials of degree n of the generator have finite range θ(n). This leads to the following problem. Find polynomials W ∗t , t > 0, of degree at most t satisfying the properties (2.60), (2.61), (2.63) such that the decomposi- tion formula (2.59) for 1/λ holds. 
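The following Python sketch illustrates the property (P∗θ,B) with θ(n) = n that underlies this problem: on a path graph (an arbitrary illustrative choice), the generator L = 1 − P is tridiagonal, so every polynomial of degree n in L has vanishing matrix entries between vertices at graph distance greater than n.

    import numpy as np

    # Finite range of polynomials of the generator on a path graph.
    N = 12
    W = np.zeros((N, N))
    for x in range(N - 1):
        W[x, x + 1] = W[x + 1, x] = 1.0
    P = W / W.sum(axis=1)[:, None]
    L = np.eye(N) - P                     # tridiagonal, so L^n has bandwidth n

    for n in range(1, 6):
        Ln = np.linalg.matrix_power(L, n)
        for x in range(N):
            for y in range(N):
                if abs(x - y) > n:        # graph distance on the path
                    assert abs(Ln[x, y]) < 1e-14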
In the proof of Lemma 2.2.5, the verification of (2.61) (and (2.63)) and of the decomposition formula (2.59) are directly linked to the “ballistic” scaling of the wave equation: Wt (λ) = W1(λt2). To construct polynomials satisfying such “ballistic” estimates, we are led by the following re- markable discovery of Carne [39]: The Chebyshev polynomials Tk , k ∈ Z, defined by Tk (θ) = cos(k arccos(θ)) for all θ ∈ [−1, 1], k ∈ Z, (2.82) are solutions to the discrete (in space and time) wave equation in the following sense: Let ∇+ f (n) = f (n + 1) − f (n) and ∇− f (n) = f (n − 1) − f (n) be the discrete (forward and backward) time differences. Then, as polynomials in X , ∇−∇+Tn (X ) = ∇+∇−Tn (X ) = 2(X − 1)Tn (X ). (2.83) In particular, when 2(X − 1) = −L or equivalently X = 1 − 12 L, then v(n, x) = [Tn (1 − 12 L)u](x) solves the following “Cauchy problem” for the discrete wave equation: −∇+∇−v + Lv = 0, v(0) = u, (∇+v − ∇−v)(0) = 0. (2.84) The analogy between the discrete- and the continuous-time wave equations is like that between the discrete- and the continuous-time random walk. It turns out that the structure of Chebyshev polynomials allows to prove the following lemma. 56 2.2. Proof of main result Lemma 2.2.7 (Lemma 2.2.1 under (P∗ θ,B )). Let ϕ : R → [0,∞) satisfy the as- sumptions of Lemma 2.2.5. Then W ∗t : [0, 4] → [0,∞), defined by W ∗t (λ) := ∑ n∈Z ϕ(arccos(1 − 12λ)t − 2pint) for all λ ∈ [0, 4], t > 0, (2.85) is the restriction of a polynomial in λ of degree at most t to [0, 4], with coefficients smooth in t, and, for any ε > 0, (2.59), (2.60), (2.61), (2.62), and (2.63) hold for all λ ∈ (0, 4 − ε], t > 0. Proof. The proof verifies that W ∗t as defined in (2.85) has the asserted properties. Let ϕ∗t (x) := ∑ n∈Z ϕ(xt − 2pint) = ∑ k∈Z t−1ϕ̂(k/t) cos(k x) (2.86) where the second equality follows by symmetry of ϕ̂, the change of variables for- mula, and a version of the Poisson summation formula which is easily verified, for sufficiently nice ϕ. Then the claim (2.59) can be expressed as λ−1 = ∫ ∞ 0 t2ϕ∗t (arccos(1 − 12λ)) dt t for all λ ∈ (0, 4]. (2.87) Let x = arccos(1− 12λ) or equivalently λ = 2(1−cos x) = 4 sin2( 12 x). In terms of this change of variables, (2.87) and thus the claim (2.85) are then equivalent to 1 4 sin −2( 12 x) = ∫ ∞ 0 t2ϕ∗t (x) dt t for all x ∈ (0, pi]. (2.88) The left-hand side defines a meromorphic function on C with poles at 2piZ. Its development into partial fractions is (see e.g. [5, page 204]) 1 4 sin −2( 12 x) = ∑ n∈Z (x − 2pin)−2 for all x ∈ C \ 2piZ. (2.89) It follows, by (2.79) with γ = 1 and λ = (x − 2pin)2, assuming C = 1, that 1 4 sin −2( 12 x) = ∑ n∈Z ∫ ∞ 0 t2ϕ((x − 2pin)t) dt t for all x ∈ (0, pi]. (2.90) The order of the sum and the integral can be exchanged, by non-negativity of the integrand, thus showing (2.88) and therefore (2.59). To verify that W ∗t is the restriction of a polynomial, we note that by (2.85), (2.86), and supp(ϕ̂) ⊆ [−1, 1], W ∗t (λ) = ϕ∗t (arccos(1 − 12λ)) = ∑ k∈Z t−1ϕ̂(k/t) cos(k arccos(1 − 12λ)) (2.91) = ∑ k∈Z∩[−t ,t] t−1ϕ̂(k/t)Tk (1 − 12λ) 57 2.2. Proof of main result where Tk , k ∈ Z, are the Chebyshev polynomials defined by (2.82). This shows that W ∗t (λ) is indeed the restriction of a polynomial in λ of degree at most t to the interval λ ∈ [0, 4]. In particular, (2.62) is a trivial consequence of (P∗ θ,B ) which states that polynomials in L of degree n have range at most θ(n). Finally, we verify the estimate (2.63) and thus in particular (2.61). 
To this end, we note that, in analogy to (2.80), for λ ∈ [0, 4 − ε] and non-negative integers m, λm− 1 2 ∣∣∣∣∣ ∂ m ∂λm arccos(1 − 12λ) ∣∣∣∣∣ ≤ Cε,m . (2.92) For example, for m = 1, ∂ ∂λ arccos(1 − 12λ) = 12 (λ − 14λ2)− 1 2 ≤ ε− 12 λ− 12 for λ ∈ [0, 4 − ε]. (2.93) Therefore (2.63) follows, by the chain rule (or Faà di Bruno’s formula), from (1 + t2(1 − cos(x))l t−m ∣∣∣∣∣ ∂ m ∂xm ϕ∗t (x) ∣∣∣∣∣ ≤ Cl ,m (2.94) which we will now show. The argument is essentially a discrete version of the classic fact that the Fourier transform acts continuously on the Schwartz space of smooth and rapidly decaying functions on R. To show (2.94), first note that (1 − cos(x))eik x = eik x − 12 ei (k+1)x − 12 ei (k−1)x =: ∆keik x (2.95) and thus by induction, for any l ∈ N, (1 − cos(x))leik x = (1 − cos(x))l−1∆keik x = ∆k (1 − cos(x))l−1eik x = ∆lkeik x . (2.96) It follows by (2.86) and summation by parts that (1 + t2(1 − cos(x))l t−m ∂ m ∂xm ϕ∗t (x) = ∑ k∈Z t−1ϕ̂(k/t)(ik/t)m[(1 + t2∆k )leik x] (2.97) = ∑ k∈Z [(1 + t2∆k )l t−1ϕ̂(k/t)(ik/t)m]eik x . Let h(s) = 12 (|s | − 1)1|s |≤1 for s ∈ R. Then, for any smooth f : R→ R, ∆ n k f (k) = (h∗n ∗ D2n f )(k), (2.98) 58 2.3. Extensions where ∗ denotes convolution of two functions on R, h∗n = h ∗ h ∗ · · · ∗ h, and D f is the derivative of f . Indeed, ∆k f (k) = − 12 ∫ 1 0 [D f (k + t) − D f (k − t)] dt = − 12 ∫ 1 0 ∫ t −t D2 f (k + s) ds dt = ∫ R D2 f (s)h(s − k) ds = (h ∗ D2 f )(k), (2.99) and (2.98) then follows by induction: ∆ n+1 k f = ∆(h∗n ∗ D2n f ) = h ∗ D2(h∗n ∗ D2n f ) = h ∗ h∗n ∗ D2D2n f . (2.100) It then follows using the facts that ∑ k∈Z |h∗n (k − s) | ≤ Cn , uniformly in s ∈ R, and that ϕ̂ is smooth and of rapid decay, t−1 ∑ k∈Z ∣∣∣∣(1 + t2∆2k )l [ϕ̂(k/t)(ik/t)m] ∣∣∣∣ (2.101) = l∑ n=0 Cl ,nt−1 ∑ k∈Z ∫ R |h∗n (k − s) | |[D2n ((·)m ϕ̂)](s/t) | ds ≤ l∑ n=0 Cl ,nt−1 ∫ R |[D2n ((·)m ϕ̂)](s/t) | ds = l∑ n=0 Cl ,n ∫ R |[D2n ((·)m ϕ̂)](s) | ds ≤ Cm ,l and thus (2.94), and therefore (2.63), follow from this inequality and (2.97).  Proof of Lemma 2.2.1. Lemma 2.2.1 under (Pγ,θ) is an immediate consequence of Lemma 2.2.5; under (P∗ θ,B ), it follows from Lemma 2.2.7 with appropriate rescal- ing to achieve λ ≤ 3, i.e., by setting Wt (λ) = c−1W ∗t (cλ) for some c > 0.  2.3 Extensions 2.3.1 Discrete approximation In view of the discussion about Chebyshev polynomials before Lemma 2.2.7, it is not surprising that the functions W ∗t of Lemma 2.2.7 approximate the Wt of Lemma 2.2.5. In Proposition 2.3.1 below, we show that this is indeed the case with natural error O(t−1) as t → ∞. This result is used in Section 2.3.2 to prove (2.36). 59 2.3. Extensions Proposition 2.3.1 (Discrete approximation). Let ϕ be as in Lemma 2.2.5 and 2.2.7, with associated functions Wt and W ∗t for γ = 1. Then, for any integer l, |W ∗t (λ) − Wt (λ) | ≤ Cl (1 ∨ t)−1(1 + t2λ)−l for all λ ∈ [0, 4]. (2.102) In particular, W ∗t (λ/t2) → Cϕ(λ 1 2 ) as t → ∞. Proof. Note that it suffices to restrict to t ≥ 1, since for t ≤ 1, the claim follows from (2.61). The left-hand side of (2.102) is then proportional to the absolute value of ϕ(arccos(1 − 12λ)t) − ϕ(λ 1 2 t) + ∑ n∈Z\{0} ϕ(arccos(1 − 12λ)t + 2pint). (2.103) We estimate the difference of the first two terms in (2.103) and the sum separately, and show that each of them satisfies (2.102). The first two terms can be written as ϕ(arccos(1 − 12λ)t) − ϕ(λ 1 2 t) = (arccos(1 − 12λ) − λ 1 2 )tζt (λ) (2.104) with ζt (λ) = ∫ 1 0 ϕ′(s arccos(1 − 12λ)t + (1 − s)λ 1 2 t) ds. 
(2.105) The bounds √ 2λ = arccos(1 − λ) + O(λ) as λ → 0+, (2.106) √ 2λ ≤ arccos(1 − λ) ≤ π2 √ 2λ for all λ ∈ [0, 2], (2.107) and the rapid decay of ϕ′ therefore imply that |ζt (λ) | ≤ Cl (1 + λt2)−l (2.108) and ϕ(arccos(1 − 12λ)t) − ϕ(λ 1 2 t) ≤ Cl t−1(1 + t2λ)−l . (2.109) To estimate the sum in (2.103), we can use the rapid decay of ϕ with the in- equality x + y ≥ 2(xy)1/2 to obtain that∑ n∈Z\{0} ϕ(xt + 2pint) ≤ Cl ∑ n∈Z\{0} (1 + xt + 2pint)−l (2.110) ≤ Cl (1 + xt)−l/2t−l/2 ∑ n>0 n−l/2 ≤ Cl (1 + xt)−l/2t−l/2 for any l > 2, with the constant changing from line to line. In particular, upon substituting x = arccos(1 − 12λ), this bound and (2.107) imply∑ n∈Z\{0} ϕ(arccos(1 − 12λ)t + 2pint) ≤ Cl t−2l (1 + t2λ)−l . (2.111) The claim then follows by adding (2.109) and (2.111).  60 2.3. Extensions 2.3.2 Estimates for systems with constant coefficients In this section, we verify the assertions of Example 2.1.3. We work in the slightly more general context of second-order elliptic systems (instead of operators) with constant coefficients. These are defined as in Example 2.1.4, and we now show that claims of Example 2.1.3 hold mutadis mutandis. The analysis is straightforward, with aid of the Fourier transform. It reproduces several results of [4,31]. Note that by writing L = 1 c2 [ 1 c2 L]−1 and considering 1 c2 L instead of L, we may assume that the coefficients, a, are bounded such that (P∗ θ,B ) holds with B = 3 (for example). Spectral measures The spectral measures corresponding to the vector-valued case of (2.29) are given in terms of the Fourier transform as follows. For F : [0,∞) → R, (v, F (L)u) = M∑ k ,l=1 ∫ Rd F  d∑ i , j=1 ai j ξiξ j   kl v̂ k (ξ)ûl (ξ) dξ (2.112) where û = (û1 , . . . , ûM ) is the Fourier transform of u = (u1 , . . . , uM ), separately for each component, a(ξ) := d∑ i , j=1 ai j ξiξ j =  d∑ i , j=1 akli j ξiξ j  k ,l=1,... ,M (2.113) are symmetric positive definite M × M matrices, for all ξ ∈ Rd , and the matrices F (a(ξ)) are defined in terms of the spectral decomposition of a(ξ). Similarly, for the (vector-valued case of the) discrete Dirichlet form (2.30), (v, F (L)u) = M∑ k ,l=1 ∫ [−π,π]d F  d∑ i , j=1 ai j (1 − eiξi )(1 − e−iξ j )   kl v̂ k (ξ)ûl (ξ) dξ (2.114) where here û is the component-wise discrete Fourier transform. Let us also write a∗ (ξ) := d∑ i , j=1 ai j (1−eiξi )(1−e−iξ j ) =  d∑ i , j=1 akli j (1 − eiξi )(1 − e−iξ j )  k ,l=1,...,M . (2.115) We will often use, without mentioning this further, that the spectra of a(ξ) and a∗ (ξ) are bounded from above and from below by |ξ |2. 61 2.3. Extensions Estimates Let us introduce the following notation for derivatives: For a function u : Rd → R, we regard the lth derivative, Dlu(x), as an l-linear form, and |Dlu(x) | is a norm of the form Dlu(x). In terms of the Fourier transform, we denote by ˆDl (ξ) the corresponding “multiplier” operator from functions to l-linear forms, and by | ˆDl (ξ) | its norm. Similarly, for a discrete function u : Zd → R, the lth order discrete difference in positive coordinate direction is denoted by ∇lu(x) and has Fourier multiplier ˆ∇l (ξ). In particular, when l = 1, ˆD(ξ)  (iξ1 , . . . , iξd ), ˆ∇(ξ)  (eiξ1 − 1, . . . , eiξd − 1). (2.116) Furthermore, k and p will denote integers that may be chosen arbitrarily, and C constants that can change from instance to instance and may depend on k and p, as well as l = (lx , ly , la , lm2 ), B+, B− , and M+, but not on x, ξ, and m. 
Proof of (2.37),(2.33),(2.34). It follows by the change of variables ξ 7→ tξ, from the fact that a(ξ) is homogeneous of degree 2, and from Wt (λ) = W1(λt2) that φt (x , y; a,m2) = t2 ∫ Rd Wt (a(ξ) + m2)ei (x−y)·ξ dξ (2.117) = t−(d−2) ¯φ( x − y t ; a,m2t2) with ¯φ(x; a,m2) := ∫ Rd W1(a(ξ) + m2)ei (x−y)·ξ dξ (2.118) which is supported in |x | ≤ B+. This verifies (2.37). Furthermore, (2.33) is a straightforward consequence of (2.117) by differentiation and (2.63). Let us omit the details and only verify them explicitly in the discrete case (2.34): The (deriva- tives of the) decomposition kernel φ∗t can here be expressed as Dlaa D lm2 m2 ∇lxx ∇lyy φ∗t (x , y; a,m2) = t−(d−2)−lx−ly+2lm2 ¯φ∗t ;l (x − y; a,m2) (2.119) with ¯φ∗t ;l (x; a,m2) = td+lx+ly−2lm2 ∫ [−π,π]d Dlaa D lm2 m2 W ∗t (a∗ (ξ) + m2) ˆ∇ly ˆ∇lx ei x ·ξ dξ. (2.120) Thus (2.63), | ˆ∇(ξ) | ≤ C |ξ |, and η · a∗ (ξ)η ≥ C |ξ |2 |η |2 for η ∈ RM imply | ¯φ∗t ;l (x; a,m2) | ≤ C ∫ [−π,π]d (1 + C |ξ |2t2 + m2t2)−k−p (t |ξ |)lx+ly−2lm2 tddξ (2.121) ≤ C(1 + m2t2)−k ∫ Rd (1 + C |ξ |2)−p |ξ |lx+ly−2lm2 dξ 62 2.3. Extensions and therefore that the integral converges if 12 (d + lx + ly ) > lm2 and p is chosen sufficiently large. It follows that | ¯φ∗t ;l (x; a,m2) | ≤ C(1 + m2t2)−k (2.122) verifying the claim.  Proof of (2.36). ∇lxx ∇lyy φ∗t (x , y) − Dlxx Dlyy φt (x , y) = t2 ∫ [−π,π]d W ∗t (a∗ (ξ)) ˆ∇lx ˆ∇ly eiξ ·(x−y) dξ (2.123) − t2 ∫ Rd Wt (a(ξ)) ˆDlx ˆDly eiξ ·(x−y) dξ. To simplify notation, we will write ˆDl = ˆDlx ˆDly = ˆDlx ⊗ ˆDly if l = (lx , ly ), and similarly for ∇. Then the difference (2.123) may be estimated as follows. Proposition 2.3.1 implies ∫ [−π,π]d |W ∗t (a∗ (ξ) + m2) − Wt (a∗ (ξ) + m2) | | ˆDl (ξ) | dξ ≤ Ct−1 ∫ Rd (1 + C |ξ |2t2 + m2t2)−p−k |ξ |l dξ ≤ Ct−d−l−1(1 + m2t2)−k (2.124) where we have assumed in the second inequality above that p was chosen suffi- ciently large so that the integral is convergent. Similarly, we may proceed for the other differences, always choosing p large enough in the estimates. Using (2.63) with m = 1 and |a∗ (ξ) − a(ξ) | = O(|ξ |3), which follows from Taylor’s theorem, we obtain ∫ [−π,π]d |Wt (a∗ (ξ) + m2) − Wt (a(ξ) + m2) | | ˆDl (ξ) | dξ ≤ C ∫ Rd |ξ |(1 + C |ξ |2t2 + m2t2)−p−k |ξ |l dξ ≤ Ct−d−l−1(1 + m2t2)−k . (2.125) Taylor’s theorem similarly implies | ˆ∇l (ξ) − ˆDl (ξ) | ≤ C |ξ |l+1 so that, by (2.61), ∫ [−π,π]d |W ∗t (a∗ (ξ) + m2) | | ˆ∇l (ξ) − ˆDl (ξ) | dξ ≤ C ∫ Rd (1 + C |ξ |2t2 + m2t)−p−k |ξ |l+1 dξ ≤ Ct−d−l−1(1 + m2t2)−k . (2.126) 63 2.3. Extensions Finally, we obtain by (2.61) that ∫ Rd\[−π,π]d |Wt (a(ξ) + m2) | | ˆDl (ξ) | dξ ≤ C ∫ Rd\[−π,π]d (1 + C |ξ |2t2 + m2t2)−p−k |ξ |l dξ ≤ Ct−2p (1 + m2t2)−k . (2.127) The combination of the previous four inequalities gives (2.36).  64 Chapter 3 Structural stability of a class of dynamical systems 3.1 Introduction and main result 3.1.1 Introduction Let V = R3 with elements V ∈ V written V = (g, z, µ) and considered as a column vector for matrix multiplication. For each j ∈ N0 = {0, 1, 2, . . .}, we define the quadratic flow ϕ̄ j : V → V by ϕ̄ j (V ) =  1 0 0 0 1 0 η j γ j λ j  V −  VT qg j V VT qz j V VT qµ j V  , (3.1) with the quadratic terms of the form qg j =  β j 0 0 0 0 0 0 0 0  , qzj =  θ j 1 2 ζ j 0 1 2 ζ j 0 0 0 0 0  , (3.2) and qµ j =  υ gg j 1 2υ gz j 1 2υ gµ j 1 2υ gz j υzz j 1 2υ zµ j 1 2υ gµ j 1 2υ zµ j 0  . (3.3) All entries in the above matrices are real numbers. 
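To make the definition (3.1)–(3.3) concrete, the following Python sketch evaluates one step of the quadratic flow componentwise; the parameter values in the example call are illustrative and are not taken from our application. The componentwise form makes explicit the triangular structure discussed in the next paragraph: the g-component involves only g, the z-component is linear in z, and the µ-component is linear in µ.

    import numpy as np

    # One step V_{j+1} = phibar_j(V_j) of (3.1)-(3.3), with V = (g, z, mu).
    def phibar_step(V, beta, theta, zeta, eta, gamma, lam,
                    u_gg=0.0, u_gz=0.0, u_gmu=0.0, u_zz=0.0, u_zmu=0.0):
        g, z, mu = V
        g_new = g - beta * g**2                            # V^T q^g V = beta g^2
        z_new = z - theta * g**2 - zeta * g * z            # V^T q^z V = theta g^2 + zeta g z
        mu_new = (eta * g + gamma * z + lam * mu           # linear part (eta, gamma, lambda) . V
                  - u_gg * g**2 - u_gz * g * z - u_gmu * g * mu
                  - u_zz * z**2 - u_zmu * z * mu)          # V^T q^mu V
        return np.array([g_new, z_new, mu_new])

    # Illustrative parameter values only.
    V_next = phibar_step(np.array([0.1, 0.0, 0.01]),
                         beta=1.0, theta=1.0, zeta=-0.5, eta=0.2, gamma=0.1, lam=2.0)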
We assume that there exists a λ > 1 such that λ j ≥ λ for all j, together with assumptions that ensure that for most values of j we have β j ≥ c > 0 and ζ j ≤ 0. Our hypotheses on the parameters of ϕ̄ are stated precisely in Assumptions (A1–A2) below. The significance of the assumption c > 0 is explained in Section 3.1.3 below. The quadratic flow ϕ̄ defines a time-dependent discrete-time 3-dimensional dy- namical system. It is triangular, in the sense that the equation for g does not depend on z or µ, the equation for z depends only on g, and the equation for µ depends on 65 3.1. Introduction and main result g and z. Moreover, the equation for z is linear in z, and the equation for µ is linear in µ. This makes the analysis of the quadratic flow elementary. Our main result concerns structural stability of the dynamical system ϕ̄ under a class of infinite-dimensional perturbations. Let (K j ) j∈N0 be a sequence of Banach spaces and X j = K j ⊕ V. We write x j ∈ X j as x j = (K j ,Vj ) = (K j , gj , z j , µ j ). A norm on X j is given by ‖x j ‖X j = max{‖K j ‖K j , ‖Vj ‖V} = max{‖K j ‖K j , |gj |, |z j |, |µ j |}. (3.4) We identify K j and V with subspaces of X j , so that ‖K j ‖K j = ‖K j ‖X j and ‖V ‖V = ‖V ‖X j with this norm on X j . However, we will only make use of the norm of the K- and V -components in X j separately, but never of ‖x j ‖X j . (The reason is that the two components will need to be re-weighted.) Suppose that we are given maps ψ j : X j → K j+1 and ρ j : X j → V. Then we define Φ j : X j → X j+1 by Φ j (K j ,Vj ) = (ψ j (K j ,Vj ), ϕ̄ j (Vj ) + ρ j (K j ,Vj )). (3.5) This is an infinite-dimensional perturbation of the 3-dimensional quadratic flow ϕ̄, which breaks triangularity and which involves the spaces K j in a nontrivial way. We will impose estimates on ψ j and ρ j below, which make Φ a third-order perturbation of ϕ̄. We give hypotheses under which there exists a sequence (x j ) j∈N0 with x j ∈ X j which is a global flow of Φ, in the sense that x j+1 = Φ j (x j ) for all j ∈ N0 , (3.6) obeying the boundary conditions that (K0 , g0) is fixed, z j → 0, and µ j → 0. Moreover, within an appropriate space of sequences, this global flow is unique. As we have discussed in more detail in Chapter 1, this result provides an essen- tial ingredient in a renormalisation group analysis of the 4-dimensional continuous- time weakly self-avoiding walk [9, 19, 38], where the boundary condition µ j → 0 is the appropriate boundary condition for the study of a critical trajectory. It is this application that provides our immediate motivation to study the dynamical system Φ, but we expect that the methods developed here will have further applications to dynamical systems arising in renormalisation group analyses in statistical mechan- ics. 3.1.2 Dynamical system We think of Φ = (Φ j ) j∈N0 as the evolution map of a discrete time-dependent dy- namical system, although it is more usual in dynamical systems to have the spaces 66 3.1. Introduction and main result X j be identical. Our application in [9, 19, 38] requires the greater generality of j-dependent spaces. In the case that Φ is a time-independent dynamical system, i.e., when Φ j = Φ and X j = X for all j ∈ N0, its fixed points are of special interest: x∗ ∈ X is a fixed point of Φ if x∗ = Φ(x∗). The dynamical system is called hyperbolic near a fixed point x∗ ∈ X if the spectrum of DΦ(x∗) is disjoint from the unit circle [99]. 
It is a classic result that for a hyperbolic system there exists a splitting X  Xs ⊕ Xu into a stable and an unstable manifold near x∗. The stable manifold is a submanifold Xs ⊂ X such that x j → x∗ in X , exponentially fast, when (x j ) satisfies (3.6) and x0 ∈ Xs . This result can be generalised without much difficulty to the situation when the Φ j and X j are not necessarily identical, viewing “0” as a fixed point (although 0 is the origin in different spaces X j ). The hyperbolicity condition must now be imposed in a uniform way [25, Theorem 2.16]. By definition, ϕ̄ j (0) = 0, and we will make assumptions below which can be interpreted as a weak formulation of the fixed point equation Φ j (0) = 0 for the dynamical system defined by (3.5). Despite this technical condition, will simply refer to 0 as a fixed point of Φ. This fixed point 0 is not hyperbolic due to the two unit eigenvalues of the matrix in the first term of (3.1). Thus the g- and z-directions are centre directions, which neither contract nor expand in a linear approximation. On the other hand, the hypothesis that λ j ≥ λ > 1 ensures that the µ-direction is expanding, and we will assume below that ψ j : X j → K j+1 is such that the K- direction is contractive near the fixed point 0. The behaviour of dynamical systems near non-hyperbolic fixed points is much more subtle than for the hyperbolic case. A general classification does not exist, and a nonlinear analysis is required. 3.1.3 Main result In Section 3.2, we give an elementary proof that there exists a unique global flow ¯V = (ḡ, z̄, µ̄) of the quadratic flow ϕ̄ with boundary conditions ḡ0 = g0 (always assumed sufficiently small) and ( z̄∞ , µ̄∞) = (0, 0), where we are writing, e.g., z̄∞ = lim j→∞ z̄ j . Our main result is that, under the assumptions stated below, there exists a unique global flow of Φ with small initial conditions (K0 , g0) and final conditions (z∞ , µ∞) = (0, 0), and that this flow is a small perturbation of ¯V . The sequence ḡ = (ḡj ) plays a prominent role in the analysis. Determined by the sequence (β j ), it obeys ḡj+1 = ḡj − β j ḡ2j , ḡ0 = g0 > 0. (3.7) We regard ḡ as a known sequence (only dependent on the initial condition g0). The following examples are helpful to keep in mind. 67 3.1. Introduction and main result Example 3.1.1. (i) Constant β j = b > 0. In this case, it is not difficult to show that ḡj ∼ g0(1 + g0bj)−1 ∼ (bj)−1 as j → ∞ (e.g., by applying (3.41) below with ψ(t) = t−2). (ii) Abrupt cut-off, with β j = b for j ≤ J and β j = 0 for j > J, with J ≫ 1. In this case, ḡj is approximately the constant (bJ)−1 for j > J. In particular, ḡj does not go to zero as j → ∞. Example 3.1.1 prompts us to make the following general definition of a cut-off time for bounded sequences β j . Let ‖ β‖∞ = sup j≥0 | β j | < ∞, and let n+ = n if n ≥ 0 and otherwise n+ = 0. Given a fixed Ω > 1, we define the Ω-cut-off time jΩ by jΩ = inf{k ≥ 0 : | β j | ≤ Ω−( j−k )+ ‖ β‖∞ for all j ≥ 0}. (3.8) The infimum of the empty set is defined to equal ∞, e.g., if β j = b for all j. By definition, jΩ ≤ jΩ′ if Ω ≤ Ω′. To abbreviate the notation, we write χ j = Ω −( j− jΩ)+ . (3.9) The evolution maps Φ j are specified by the real parameters η j , γ j , λ j , β j , θ j , ζ j , υ αβ j , together with the maps ψ j and ρ j on X j . Throughout this paper, we fix Ω > 1 and make Assumptions (A1–A2) on the real parameters and Assump- tion (A3) on the maps, all stated further below. 
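To make Example 3.1.1 and the definitions (3.8)–(3.9) concrete, the following Python sketch iterates the recursion (3.7) and computes the Ω-cut-off time for an abruptly cut-off β sequence. The values of b, J, Ω, and g0 are illustrative, and the search for jΩ is over a finite truncation of the sequence.

    import numpy as np

    b, J, Omega, g0 = 1.0, 40, 2.0, 0.05
    beta = np.array([b if j <= J else 0.0 for j in range(200)])   # Example 3.1.1(ii)

    g = [g0]
    for bj in beta:
        g.append(g[-1] - bj * g[-1]**2)
    # g[j] decays roughly like g0/(1 + g0*b*j) up to step J and is essentially constant afterwards.

    def cutoff_time(beta, Omega):
        sup = np.abs(beta).max()
        for k in range(len(beta)):
            if all(abs(beta[j]) <= Omega**(-max(j - k, 0)) * sup for j in range(len(beta))):
                return k
        return np.inf        # e.g. for a constant sequence, as in Example 3.1.1(i)

    j_Omega = cutoff_time(beta, Omega)                            # equals J = 40 here
    chi = [Omega**(-max(j - j_Omega, 0)) for j in range(len(beta))]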
The constants in all estimates are permitted to depend on the constants in these assumptions, including Ω, but not on jΩ and g0 > 0. Furthermore, we consider the situation when the parameters of ϕ̄ j are continuous maps from a metric space Mext of external parameters, m ∈ Mext, into R, that the maps ψ j and ρ j similarly have continuous dependence on m, and that jΩ is allowed to depend on m, but that Assumptions (A1–A3) hold with the constants independent of m. Corollary 3.1.7 below then shows that the solutions to (3.6) constructed in Theorem 3.1.4 below also depend continuously on m. In Section 3.2, as a preliminary result to the proof of the main result, we prove the following Proposition 3.1.2 concerning flows of the three-dimensional quadratic dynamical system ϕ̄. Its proof is elementary. Assumption (A1). The sequence β: The sequence (β j ) is bounded: ‖ β‖∞ < ∞. There exists c > 0 such that β j ≥ c for all but c−1 values of j ≤ jΩ. Assumption (A2). The other parameters of ϕ̄: There exists λ > 1 such that λ j ≥ λ for all j ≥ 0. There exists c > 0 such that ζ j ≤ 0 for all but c−1 values of j ≤ jΩ. Each of ζ j , η j , γ j , θ j , ζ j , υαβj is bounded in absolute value by O( χ j ), with a constant that is independent of both j and jΩ. 68 3.1. Introduction and main result Note that when jΩ < ∞, Assumption (A1) permits the possibility that eventu- ally βk = 0 for large k. The simplest setting for the assumptions is in the situation when jΩ = ∞, for which χ j = 1 for all j. Our applications include situations in which β j approaches a positive limit as j → ∞, but also situations in which β j is approximately constant in j over a long initial interval j ≤ jΩ and then abruptly decays to zero. Proposition 3.1.2. Assume (A1–A2). If ḡ0 > 0 is sufficiently small, then there exists a unique global flow ¯V = ( ¯V ) j∈N0 = (ḡj , z̄ j , µ̄ j ) j∈N0 of ϕ̄ with initial condition ḡ0 and ( z̄∞ , µ̄∞) = (0, 0). This flow satisfies the estimates χ j ḡj = O ( ḡ0 1 + ḡ0 j ) , z̄ j = O( χ j ḡj ), µ̄ j = O( χ j ḡj ), (3.10) with constants independent of jΩ and ḡ0. Furthermore, if the maps ϕ̄ j depend con- tinuously on an external parameter such that (A1–A2) hold with uniform constants, then ¯Vj is continuous in this parameter, for every j ∈ N0. We now define domains D j ⊂ X j on which we assume the perturbation (ψ j , ρ j ) to be defined, and an assumption which states estimates for (ψ j , ρ j ). The domain and estimates depend on an initial condition g0 > 0 and a possible external param- eter m. Theorem 3.1.4 below shows existence and uniqueness of solutions to (3.6) with this initial condition, and existence and differentiability of solutions for initial conditions in a neighborhood of g0. For parameters r, u > 0 and sufficiently small g0 > 0, let (ḡj , z̄ j , µ̄ j ) j∈N0 be the sequence determined by Proposition 3.1.2 with initial condition ḡ0 = g0, and define the domain D j = D j (g0 , r, u) ⊂ X j by D j = {x j ∈ X j : ‖K j ‖K j ≤ r χ j ḡ3j , |gj − ḡj | ≤ uḡ2j | log ḡj |, |z j − z̄ j | ≤ uχ j ḡ2j | log ḡj |, |µ j − µ̄ j | ≤ uχ j ḡ2j | log ḡj |}. (3.11) Note that if β j depends on an external parameter m, then D j also depends on this parameter through ḡj = ḡj (m). For statements concerning continuity in m, we will assume that Φ j is defined on the union of these domains over m ∈ Mext. Throughout this chapter, we denote by Dαφ the Fréchet derivative of a map φ with respect to the component α, and by Lm (X j , X j+1) the space of bounded m- linear maps from X j to X j+1. 
The following Assumption (A3) depends on positive parameters (g0 , r, u, κ,Ω, R, M). 69 3.1. Introduction and main result Assumption (A3). The perturbation: The maps ψ j : D j → K j+1 ⊂ X j+1 and ρ j : D j → V ⊂ X j+1 are three times continuously Fréchet differentiable, there exist κ ∈ (0,Ω−1), R ∈ (0, r (1− κΩ)), and M > 0 such that, for all x j = (K j ,Vj ) ∈ D j , ‖ψ j (0,Vj )‖K j+1 ≤ Rχ j+1ḡ3j+1, ‖ρ j (x j )‖V ≤ M χ j+1ḡ3j+1, (3.12) ‖DKψ j (x j )‖L(K j ,K j+1) ≤ κ, ‖DK ρ j (x j )‖L(K j ,V) ≤ M, (3.13) and such that, for both φ = ψ and φ = ρ and 2 ≤ n + m ≤ 3, ‖DV φ j (x j )‖L(V,X j+1) ≤ M χ j ḡ2j+1, (3.14) ‖DmV DnKφ j (x j )‖Ln+m (X j ,X j+1) ≤ M ( χ j ḡ3j+1)1−n (ḡ2j+1 | log ḡj+1 |)−m . (3.15) The bounds (3.12) guarantee that Φ is a third-order perturbation of ϕ̄. More- over, since κ < 1, the ψ-part of (3.13) ensures that the K-direction is contractive for Φ. (3.15) imposes bounds on the second and third derivatives of ψ and ρ which permit these derivatives to be quite large. The following elementary Lemma 3.1.3 shows that a sequence ( ¯K j ) j∈N0 can be defined inductively by ¯K j+1 = ψ j ( ¯K j , ¯Vj ), assuming that r is large enough. Denote by piK D j the projection of D j onto K j , i.e., piK D j = {K j ∈ K j : ‖K j ‖K j ≤ r χ j ḡ3j }. (3.16) Lemma 3.1.3. Assume Assumption (A3), let r∗ ∈ (R/(1− κΩ), r], and assume that g0 > 0 is sufficiently small. Then ψ j (D j (g0 , r∗ , u)) ⊆ piK D j+1(g0 , r∗ , u). Proof. The triangle inequality and the first bounds of (3.12)–(3.13) imply ‖ψ j (K j ,Vj )‖K j+1 ≤ ‖ψ j (0,Vj )‖K j+1 + ‖ψ j (K j ,Vj ) − ψ j (0,Vj )‖K j+1 ≤ Rχ j+1ḡ3j+1 + r∗κΩ(1 + O(g0)) χ j+1ḡ3j+1 ≤ r∗ χ j+1ḡ3j+1, (3.17) where the last inequality uses that ḡ3 j /ḡ3 j+1 = 1+O(g0) whose elementary verifica- tion is given in Lemma 3.2.1(i) below, and that g0 > 0 is sufficiently small.  The sequence x̄ = ( ¯K j , ¯Vj ) j∈N0 is a flow of the dynamical system ¯Φ = (ψ, ϕ̄) in the sense of (3.6), with initial condition ( ¯K0 , ḡ0) = (K0 , g0) and final condition ( z̄∞ , µ̄∞) = (0, 0). We consider this sequence as a function x̄ : (K0 , g0) 7→ x̄ = x̄(K0 , g0) of the initial condition (K0 , g0). Our main result is the following Theo- rem 3.1.4 about flows x of the dynamical system Φ = (ψ, ϕ̄ + ρ) = ¯Φ + (0, ρ) of interest, as perturbations of the flows x̄ of ¯Φ. 70 3.1. Introduction and main result Theorem 3.1.4. Assume (A1–A3) with parameters (g0 , r, u, κ,Ω, R, M), let r∗ ∈ (R/(1−κΩ), r), b ∈ (0, 1). There exists u∗ > 0 such that for all u ≥ u∗, there exists g∗ > 0 such that if g0 ∈ (0, g∗] and ‖K0‖K0 ≤ r∗g30 , the following conclusions hold. (i) There exists a global flow x = (K j ,Vj ) j∈N0 of Φ = (ψ, ϕ̄ + ρ) with initial condition (K0 , g0) and final condition (z∞ , µ∞) = (0, 0) such that, with x̄ = x̄(K0 , g0), the following estimates hold: ‖K j − ¯K j ‖K j ≤ b(r − r∗) χ j ḡ3j , (3.18) |gj − ḡj | ≤ buḡ2j | log ḡj |, (3.19) |z j − z̄ j | ≤ buχ j ḡ2j | log ḡj |, (3.20) |µ j − µ̄ j | ≤ buχ j ḡ2j | log ḡj |. (3.21) The sequence x is the unique solution to (3.6) which obeys the boundary conditions and the bounds (3.18)–(3.21). (ii) There is a neighbourhood I = I(K0 , g0) ⊂ K0 ⊕ R of (K0 , g0) such that, for initial conditions (K ′0 , g′0) ∈ I, there also exists a global flow x′ of Φ with (z′∞ , µ′∞) = (0, 0), and (3.18)–(3.21) hold with x replaced by x′ and x̄ replaced by x̄′ = x̄(K ′0 , g′0). Moreover, for all j ∈ N0, the maps (K j ,Vj ) : I → K j ⊕ V are continuously Fréchet differentiable, and ∂z0 ∂g0 = O(1), ∂µ0 ∂g0 = O(1). (3.22) Remark 3.1.5. 
(i) For jΩ = ∞ and with (3.10), the bounds (3.18) and (3.19)–(3.21) imply ‖K j ‖K j = O( j−3) and Vj = O( j−2 log j), respectively. However, the latter bounds do not reflect that K j ,Vj → 0 as g0 → 0, while the former do. Furthermore, (3.10) implies χ j ḡj → 0 as j → ∞ (also when jΩ < ∞), and thus (3.18) and (3.20)–(3.21) imply K j → 0, z j → 0, µ j → 0 as j → ∞. More precisely, these estimates imply z j , µ j = O( χ j ḡj ) so that z j and µ j decay exponentially after the Ω-cut-off time jΩ; we interpret this as indicating that the boundary condition (z∞ , µ∞) = (0, 0) is essentially achieved already at jΩ. (ii) We do not give a proof, but we expect that the error bounds in (3.18)–(3.21) have optimal decay as j → ∞. Some indication of this can be found in Re- mark 3.3.2 below. Theorem 3.1.4 is an analogue of a stable manifold theorem for the non-hyper- bolic dynamical system defined by (3.5). It is inspired by [25, Theorem 2.16] which however holds only in the hyperbolic setting. Irwin [78] showed that the stable manifold theorem for hyperbolic dynamical systems is a consequence of the 71 3.1. Introduction and main result implicit function theorem in Banach spaces (see also [99, 102]). Irwin’s approach was inspired by Robbin [97], who showed that the local existence theorem for ordinary differential equations is a consequence of the implicit function theorem. By contrast, in our proof of Theorem 3.1.4, we directly apply the local existence theorem for ODEs, without explicit mention of the implicit function theorem. This turns out to be advantageous to deal with the lack of hyperbolicity. Our choice of ϕ̄ in (3.1) has a specific triangular form. One reason for this is that (3.1) accommodates what is required in our application in [9,19,38]. A second reason is that additional nonzero terms in ϕ̄ can lead to the failure of Theorem 3.1.4. The condition that β j is mainly non-negative is important for the sequence ḡj of (3.7) to remain bounded. The following example shows that for the ζ j term in the flow of z̄, our sign restriction on ζ j is also important, since positive ζ j can lead to violation of a conclusion of Theorem 3.1.4. Example 3.1.6. Suppose that ζ j = θ j = β j = 1, that ρ = 0, and that ḡ0 > 0 is small. For this constant β sequence, jΩ = ∞ (for any Ω > 1) and hence χ j = 1 for all j. As in Example 3.1.1, ḡj ∼ j−1. By (3.1) and (3.7), z̄ j+1 = z̄ j (1 − ḡj ) − ḡ2j = z̄ j ḡj+1 ḡj − ḡ2j . (3.23) Let ȳ j = z̄ j/ḡj . Since ḡj/ḡj+1 = (1 − ḡj )−1 ≥ 1, we obtain ȳ j ≥ ȳ j+1 + ḡj and hence ȳ j ≥ ȳn+1 + n∑ l= j ḡl . (3.24) Suppose that z̄ j = O(ḡj ), as in (3.20). Then ȳ j = O(1) and hence by taking the limit n → ∞ we obtain ȳ j ≥ lim sup n→∞ ȳn+1 + n∑ l= j ḡl  ≥ −C + ∞∑ l= j ḡl . (3.25) However, since ḡj ∼ j−1, the last sum diverges. This contradiction implies that the conclusion z̄ j = O(ḡj ) of (3.20) is impossible. Because of its triangularity, an exact analysis of the flows of ϕ̄ with the bound- ary conditions of interest is straightforward: the three equations for g, z, µ can be solved successively and we do this in Section 3.2 below. Triangularity does not hold for Φ, and we prove that the flows of Φ with the same boundary conditions nevertheless remain close to the flow of ϕ̄ in Section 3.3. 72 3.2. The quadratic flow 3.1.4 Continuity in external parameter The uniqueness statement of Theorem 3.1.4 implies the following Corollary 3.1.7 regarding continuous dependence on an external parameter of the solution to (3.6) given by Theorem 3.1.4. Corollary 3.1.7. 
Assume that the Φ j depend continuously on an external param- eter m ∈ Mext and that Assumptions (A1–A3) hold uniformly in m. Let x(m) = (K (m),V (m)) be the solution for external parameter m given by Theorem 3.1.4. Then x j (m) is continuous in m for each j ∈ N0. Proof. Theorem 3.1.4 implies that V0(m) is bounded uniformly in m ∈ Mext. This implies that there exists some limit point V ∗0 of V0(m′) as m′ → m. Let x∗j = (V ∗ j , K∗ j ) be the flow of Φ(m, ·) started with this V ∗0 and K∗0 = K0 independent of m. By Proposition 3.1.2, ¯V0(m) is continuous in m ∈ Mext. The continuity of ¯K j (m) follows inductively from this and the assumed continuity of the ψ j and ρ j . This continuity and (3.18)–(3.21) imply that any limit point x∗ must satisfy ‖K∗j − ¯K j (m)‖K j ≤ b(r − r∗) χ j (m)ḡj (m)3 , (3.26) |g∗j − ḡj (m) | ≤ buḡj (m)2 | log ḡj (m) |, (3.27) |µ∗j − µ̄ j (m) | ≤ buχ j (m)ḡj (m)2 | log ḡj (m) |, (3.28) |z∗j − z̄ j (m) | ≤ buχ j (m)ḡj (m)2 | log ḡj (m) |. (3.29) The uniqueness assertion of Theorem 3.1.4 implies that x∗ j = x j (m), and therefore that V0 is continuous in m. The continuity of x j now follows immediately from the continuity of the Φ j .  3.2 The quadratic flow In this section, we prove that, for the quadratic approximation ϕ̄, there is a unique solution ¯V = ( ¯Vj ) j∈N0 = (ḡj , z̄ j , µ̄ j ) j∈N0 to the flow equation ¯Vj+1 = ϕ̄ j ( ¯Vj ) with fixed small ḡ0 > 0 and with ( z̄∞ , µ̄∞) = (0, 0). (3.30) Due to the triangular nature of ϕ̄, we can obtain very detailed information about the sequence ¯V . 3.2.1 Flow of ḡ We start with the analysis of the sequence ḡ, which obeys the recursion ḡj+1 = ḡj − β j ḡ2j , ḡ0 > 0. (3.31) 73 3.2. The quadratic flow The following lemma collects the information we will need about ḡ. Lemma 3.2.1. Assume (A1). The following statements hold if ḡ0 > 0 is sufficiently small, with all constants independent of jΩ and ḡ0. (i) For all j, ḡj > 0, ḡj = O( inf k≤ j ḡk ), and ḡj ḡ−1j+1 = 1 + O( χ j ḡj ) = 1 + O(ḡ0). (3.32) Moreover, for any j and k, ḡj is non-increasing in βk . (ii) For n ≥ 1 and m ≥ 0, there exists Cn ,m > 0 such that for all k ≥ j ≥ 0, k∑ l= j χl ḡ n l | log ḡl |m ≤ Cn ,m  | log ḡk |m+1 n = 1 χ j ḡ n−1 j | log ḡj |m n > 1, (3.33) and there exists C > 0 such that for all j ≥ 0, χ j ḡj ≤ Cḡ0 1 + ḡ0 j . (3.34) (iii) (a) For γ ≥ 0 and j ≥ 0, there exist constants cj = 1 + O( χ j ḡj ) (depending on γ) such that, for all l ≥ j, l∏ k= j (1 − γ βk ḡk )−1 = ( ḡj ḡl+1 )γ (cj + O( χl ḡl )). (3.35) (b) For ζ j ≤ 0 except for c−1 values of j ≤ jΩ, ζ j = O( χ j ), and j ≤ l, (with a constant independent of j and l), l∏ k= j (1 − ζk ḡk )−1 ≤ O(1). (3.36) (iv) Suppose that ḡ and g̊ each satisfy (3.31). Let δ > 0. If |g̊0 − ḡ0 | ≤ δg̊0 then |g̊j − ḡj | ≤ δg̊j (1 + O(ḡ0)) for all j. Proof. (i) By (3.31), ḡj+1 = ḡj (1 − β j ḡj ). (3.37) Since β j = O( χ j ), by (3.37) the second statement of (3.32) is a consequence the first, so it suffices to verify the first statement of (3.32). Assume inductively that ḡj > 0 and that ḡj = O(infk≤ j ḡk ). It is then immediate from (3.37) that ḡj+1 > 0 if 74 3.2. The quadratic flow ḡ0 is sufficiently small depending on ‖ β‖∞, and that ḡj+1 ≤ ḡj if β j ≥ 0. By (A1), there are at most c−1 values of j ≤ jΩ for which β j < 0. Therefore, by choosing ḡ0 sufficiently small depending on ‖ β‖∞ and c, it follows that ḡj ≤ O(infk≤ j ḡk ) for all j ≤ jΩ with a constant that is independent of jΩ. 
To advance the inductive hypothesis for j > jΩ, we use 1 − t ≤ e−t and∑∞ l= jΩ | βl | ≤ ∑∞ n=1Ω −n = O(1) to obtain, for j ≥ k ≥ jΩ, ḡj ≤ ḡk exp − j−1∑ l=k βl ḡl  ≤ ḡk exp Cḡk j−1∑ l=k | βl |  ≤ O(ḡk ). (3.38) This shows that ḡj = O(inf jΩ≤k≤ j ḡk ). However, by the inductive hypothesis, ḡjΩ = O(infk≤ jΩ ḡk ) for j ≤ jΩ, and hence for j > jΩ we have ḡj = O(infk≤ j ḡk ) as claimed. This completes the verification of the first bound of (3.32) and thus, as already noted, also of the second. The monotonicity of ḡj in βk can be proved as follows. Since it is obvious that ḡj does not depend on βk if k ≥ j, we may assume that k < j. Moreover, by replacing j by j + k, we can assume that k = 0. Let ḡ′ j = ∂ḡj/∂ β0. Clearly, ḡ′0 = 0 and therefore ḡ ′ 1 = −ḡ20 < 0. (3.39) Assuming that ḡ′ j < 0 by induction, it follows that for j ≥ 1, ḡ ′ j+1 = ḡ ′ j (1 − 2β j ḡj ) < 0, (3.40) and the proof of monotonicity is complete. (ii) We first show that if ψ : R+ → R is absolutely continuous, then k∑ l= j βlψ(ḡl )ḡ2l = ∫ ḡ j ḡk+1 ψ(t) dt + O (∫ ḡ j ḡk+1 t2 |ψ′(t) | dt ) . (3.41) To prove (3.41), we apply (3.31) to obtain k∑ l= j βlψ(ḡl )ḡ2l = k∑ l= j ψ(ḡl )(ḡl − ḡl+1) = k∑ l= j ∫ ḡl ḡl+1 ψ(ḡl ) dt. (3.42) The integral can be written as ∫ ḡl ḡl+1 ψ(ḡl ) dt = ∫ ḡl ḡl+1 ψ(t) dt + ∫ ḡl ḡl+1 ∫ ḡl t ψ′(s) ds dt. (3.43) 75 3.2. The quadratic flow The first term on the right-hand side of (3.41) is then the sum over l of the first term on the right-hand side of (3.43), so it remains to estimate the double integral. By Fubini’s theorem, ∫ ḡl ḡl+1 ∫ ḡl t ψ′(s) ds dt = ∫ ḡl ḡl+1 ∫ s ḡl+1 ψ′(s) dt ds = ∫ ḡl ḡl+1 (s − ḡl+1)ψ′(s) ds. (3.44) By (3.31) and (3.32), for s ∈ [ḡl+1 , ḡl ] we have |s − ḡl+1 | ≤ |ḡl − ḡl+1 | = | βl |ḡ2l ≤ (1 + O(ḡ0)) | βl |ḡ2l+1 ≤ O(s2). (3.45) This permits us to estimate (3.44) and conclude (3.41). Direct evaluation of the integrals in (3.41) with ψ(t) = tn−2 | log t |m gives k∑ l= j βl ḡ n l | log ḡl |m ≤ Cn ,m  | log ḡk+1 |m+1 n = 1 ḡ n−1 j | log ḡj |m n > 1. (3.46) To deduce (3.33), we only consider the case n > 1, as the case n = 1 is similar. Suppose first that j ≤ jΩ (and jΩ < ∞). Assumption (A1) implies 1 ≤ βl c + ( 1 + | βl | c ) 1βl<c ≤ O(βl ) + O(1βl<c ) (3.47) and therefore that k∑ l= j χl ḡ n l | log ḡl |m ≤ jΩ∑ l= j O(βl )ḡnl | log ḡl |m + jΩ∑ l= j O(1βl<c )ḡnl | log ḡl |m + k∑ l= jΩ+1 Ω −(l− jΩ)+ ḡnl | log ḡl |m . (3.48) By (3.46), the first term is bounded by O(ḡn−1 j | log ḡj |m ). The second term obeys the same bound, by (A1) and (3.32), as does the last term due to the exponential decay. This proves (3.33) for the case j ≤ jΩ. On the other hand, if j > jΩ, then again using the exponential decay of χl and (3.32), we obtain k∑ l= j χl ḡ n l | log ḡl |m ≤ C χ j ḡnj | log ḡj |m ≤ Cḡ0 χ j ḡn−1j | log ḡj |m . (3.49) This completes the proof of (3.33) for the case n > 1. 76 3.2. The quadratic flow To prove (3.34), let c > 0 be as in Assumption (A1) and set ĝj+1 = ĝj − cĝj with ĝ0 = ḡ0. Let j0 = −1 and denote by 0 ≤ j1 < j2 < . . . the sequence of j such that β j < c. By induction, we show that ḡjk+1 ≤ (1 + O(ḡ0))k ĝjk+1. This is trivial for k = 0. To advance the induction, note that, since ḡj is monotone in β, ḡj ≤ (1 + O(ḡ0))k ĝj for j ≤ jk+1, and therefore ḡjk+1+1 = ḡjk+1 (1 − β jk+1 ḡjk+1 ) ≤ (1 − β jk+1 ḡjk+1 )(1 + O(ḡ0))k ĝjk+1 = 1 − β jk+1 ḡjk+1 1 − cĝjk+1 (1 + O(ḡ0))k ĝjk+1+1. (3.50) The induction is advanced since 1 − β jk+1 ḡjk+1 1 − cĝjk+1 = 1 + O(ḡ0). 
(3.51) By Assumption (A1), m = max{k : jk ≤ jΩ} is bounded so that, for j ≤ jΩ, χ j ḡj = ḡj ≤ (1 + O(ḡ0))m ĝj ≤ (1 + O(ḡ0))ĝj . (3.52) For j > jΩ, we use that, for ḡ0 sufficiently small, Ω −1 ≤ 1 − cḡ0 ≤ 1 − cĝj = ĝj+1 ĝj (3.53) and that, by (3.32), ḡj = O(ḡjΩ ) which together imply χ j ḡj ≤ O(Ω−( j− jΩ) ḡjΩ ) ≤ O(Ω−( j− jΩ) ĝjΩ ) ≤ O  j−1∏ l= jΩ ĝl+1 ĝl  ĝjΩ = O(ĝj ). (3.54) The proof of (3.34) is concluded by the observation that ĝj satisfies the bound claimed, as can be seen by applying (3.41) with ψ(t) = t−2. (iii-a) By Taylor’s theorem and (3.31), there exists rk = O(βk ḡk )2 such that (1 − γ βk ḡk )−1 = (1 − βk ḡk )−γ (1 + rk ) = ( ḡk ḡk+1 )γ (1 + rk ). (3.55) Let cj ,l = l∏ k= j (1 + rk ). (3.56) 77 3.2. The quadratic flow With the bounds 1 + t ≤ e |t | and βk = O( χk ), we obtain ∣∣∣cj ,l − 1∣∣∣ = ∣∣∣∣∣∣∣∣ l∑ k= j rk l∏ m=k+1 (1 + rm ) ∣∣∣∣∣∣∣∣ ≤ l∑ k= j O( χk ḡ2k ) exp  l∑ m=k+1 O( χm ḡ2m )  ≤ O( χ j ḡj ). (3.57) In particular, cj ,l = 1 + O(ḡ0) = O(1) uniformly in j and l. Similarly, we obtain that, with r̃k = (1 + rk )−1 − 1 = O( χk ḡ2k ), for n ≥ l, |cj ,l − cj ,n | = cj ,n ∣∣∣∣∣∣∣ n∏ k=l (1 + rk )−1 − 1 ∣∣∣∣∣∣∣ = cj ,n ∣∣∣∣∣∣∣ n∑ k=l r̃k n∏ m=k+1 (1 + r̃m ) ∣∣∣∣∣∣∣ ≤ O( χl ḡl ). (3.58) In particular, (cj ,l )l is a Cauchy sequence, cj = liml→∞ cj ,l exists, and with (3.57), cj = 1 + O( χ j ḡj ). It also follows that |cj ,l − cj | ≤ O( χl ḡl ) as claimed, and the proof is complete. (iii-b) Since ζ j ≤ 0 for all but c−1 values of j ≤ jΩ, by (3.32) with ḡ0 sufficiently small, ∏l k= j (1 − ζk ḡk )−1 ≤ O(1) for l ≤ jΩ, with a constant independent of jΩ. For j ≥ jΩ, we use 1/(1 − x) ≤ 2ex for x ∈ [− 12 , 12 ] to obtain l∏ k= j (1 − ζk ḡk )−1 ≤ 2 exp  l∑ k= j ζk ḡk  ≤ 2 exp Cḡj ∞∑ k= jΩ | χk |  ≤ O(1). (3.59) The bounds for l ≤ jΩ and j ≥ jΩ together imply (3.36). (iv) If |g̊j − ḡj | ≤ δ j g̊j then by (3.31), |g̊j+1 − ḡj+1 | = |g̊j − ḡj |(1 − β j (g̊j + ḡj )) ≤ δ j+1g̊j+1 (3.60) with δ j+1 = δ j 1 − β j (g̊j + ḡj ) 1 − β j g̊j = δ j ( 1 − β j ḡj 1 − β j g̊j ) . (3.61) In particular, if β j ≥ 0, then δ j+1 ≤ δ j . By (A1), there are at most c−1 values of j ≤ jΩ for which β j < 0, and hence δ j ≤ δ(1 + O(ḡ0)) for j ≤ jΩ. The desired estimate therefore holds for j ≤ jΩ. For j ≥ l > jΩ, as in (3.38) we have j∏ k=l (1 + O(βk ḡk )) ≤ exp O(ḡl ) j∑ k=l χk  ≤ 1 + O(ḡ0), (3.62) and thus the claim remains true also for j > jΩ.  78 3.2. The quadratic flow 3.2.2 Flow of z̄ and µ̄ and proof of Proposition 3.1.2 We now establish the existence of unique solutions to the z̄ and µ̄ recursions with boundary conditions z̄∞ = µ̄∞ = 0, and obtain estimates on these solutions. Lemma 3.2.2. Assume (A1–A2). If ḡ0 is sufficiently small then there exists a unique solution to (3.30) obeying z̄∞ = µ̄∞ = 0. This solution obeys z̄ j = O( χ j ḡj ) and µ̄ j = O( χ j ḡj ). Furthermore, if the maps ϕ̄ j depend continuously on an external parameter m ∈ Mext such that (A1–A2) hold with uniform constants, then z̄ j and µ̄ j are continuous in Mext. Proof. By (3.1), z̄ j+1 = z̄ j − ζ j ḡj z̄ j − θ j ḡ2j , so that z̄ j = n∏ k= j (1 − ζk ḡk )−1 z̄n+1 + n∑ l= j l∏ k= j (1 − ζk ḡk )−1θl ḡ2l . (3.63) In view of (3.36), whose assumptions are satisfied by (A2), the unique solution to the recursion for z̄ which obeys the boundary condition z̄∞ = 0 is z̄ j = ∞∑ l= j l∏ k= j (1 − ζk ḡk )−1θl ḡ2l , (3.64) and by (A2), (3.33), and (3.36), | z̄ j | ≤ ∞∑ l= j O( χl )ḡ2l ≤ O( χ j ḡj ). 
(3.65) To verify continuity of z̄ j in an external parameter, let z̄ j ,n = ∑n l= j ∏l k= j (1 − ζk ḡk )−1θl ḡ2l . Clearly, since ḡj is continuous in Mext for any j ≥ 0, z̄ j ,n is also continuous, for any j ≤ n. (3.33)–(3.34) of Lemma 3.2.1(ii) imply that | z̄ j− z̄ j ,n | ≤ O( χn ḡn ) → 0 uniformly, as n → ∞, and thus, as a uniform limit of continuous functions, it follows that z̄ j must be continuous in Mext. For µ̄, we first define σ j = +η j ḡj + γ j z̄ j − υggj ḡ2j − υ gz j ḡj z̄ j − υzzj z̄2j , τj = υ gµ j ḡj + υ zµ j z̄ j , (3.66) so that the recursion for µ̄ can be written as µ̄ j+1 = (λ j − τj ) µ̄ j + σ j . (3.67) Alternatively, µ̄ j = (λ j − τj )−1( µ̄ j+1 − σ j ). (3.68) 79 3.2. The quadratic flow Given α ∈ (λ−1 , 1), we can choose ḡ0 sufficiently small that 1 2λ −1 ≤ (λ j − τj )−1 ≤ α. (3.69) The limit of repeated iteration of (3.68) gives µ̄ j = − ∞∑ l= j  l∏ k= j (λk − τk )−1  σl (3.70) as the unique solution which obeys the boundary condition µ∞ = 0. Geometric convergence of the sum is guaranteed by (3.69), together with the fact that σ j ≤ O( χ j ḡj ) ≤ O(1). To estimate (3.70), we use | µ̄ j | ≤ ∞∑ l= j αl− j+1O( χl ḡl ). (3.71) Since α < 1, the first bound of (3.32) and monotonicity of χ imply that | µ̄ j | ≤ O( χ j ḡj ). (3.72) The proof of continuity of µ̄ j in Mext is analogous to that for z̄ j . The proof is complete.  Proof of Proposition 3.1.2. (3.10) follows immediately from Lemma 3.2.1(ii) and Lemma 3.2.2. Since ḡj is defined by a finite recursion, its continuity in m ∈ Mext is trivial. The continuity of z̄ j and µ̄ j was proved in Lemma 3.2.2.  3.2.3 Differentiation of quadratic flow The following lemma gives estimates on the derivatives of the components of ¯Vj with respect to the initial condition ḡ0. We write f ′ for the derivative of f with re- spect to ḡ0. These estimates will be an ingredient in the proof of Theorem 3.1.4(ii). Lemma 3.2.3. For each j ≥ 0, ¯Vj = (ḡj , z̄ j , µ̄ j ) is twice differentiable with respect to the initial condition ḡ0 > 0, and the derivatives obey ḡ ′ j = O  ḡ 2 j ḡ 2 0  , z̄′j = O χ j ḡ 2 j ḡ 2 0  , µ̄′j = O χ j ḡ 2 j ḡ 2 0  , (3.73) ḡ ′′ j = O  ḡ 2 j ḡ0  , z̄′′j = O χ j ḡ 2 j ḡ 3 0  , µ̄′′j = O χ j ḡ 2 j ḡ 3 0  . (3.74) 80 3.2. The quadratic flow Proof. Differentiation of (3.7) gives ḡ ′ j+1 = ḡ ′ j (1 − 2β j ḡj ), (3.75) from which we conclude by iteration and ḡ′0 = 1 that for j ≥ 1, ḡ ′ j = j−1∏ l=0 (1 − 2βl ḡl ). (3.76) Therefore, by (3.35), ḡ ′ j = ( ḡj ḡ0 )2 (1 + O(ḡ0)). (3.77) For the second derivative, we use ḡ′′0 = 0 and ḡ ′′ j+1 = ḡ ′′ j (1 − 2β j ḡj ) − 2β j ḡ′2j to obtain ḡ ′′ j = −2 j−1∑ l=0 βl ḡ ′2 l j−2∏ k=l (1 − 2βk ḡk ). (3.78) With the bounds of Lemma 3.2.1, this gives ḡ ′′ j = O ( ḡj ḡ0 )2 j−1∑ l=0 βl ḡ 2 l = O  ḡ 2 j ḡ0  . (3.79) For z̄, we define σ j ,l = ∏l k= j (1 − ζk ḡk )−1. Then (3.64) becomes z̄ j =∑∞ l= j σ j ,lθl ḡ 2 l . By (3.36), σ j ,l = O(1). It then follows from (A2), (3.77), and Lemma 3.2.1(ii,iii-b) that σ′j ,l = σ j ,l l∑ k= j (1 − ζk ḡk )−1ζk ḡ′k = l∑ k= j O(ζk ḡ′k ) = O χ j ḡj ḡ 2 0  . (3.80) We differentiate (3.64) and apply (3.77) and Lemma 3.2.1(ii) to obtain z̄′j = ∞∑ l= j σ′j ,lθl ḡ 2 l + 2 ∞∑ l= j σ j ,lθl ḡl ḡ ′ l = O χ j ḡ 2 j ḡ 2 0  . (3.81) Similarly, σ′′ j ,l = O(ḡ2 j /ḡ40 ) and z̄′′j = ∞∑ l= j σ′′j ,lθl ḡ 2 l + 4 ∞∑ l= j σ′j ,lθl ḡl ḡ ′ l + 2 ∞∑ l= j σ j ,lθl (ḡl ḡ′′l + ḡ′2l ) = O χ j ḡ 2 j ḡ 3 0  (3.82) 81 3.3. 
Proof of main result using the fact that ḡ3j/ḡ40 = O(ḡ2j/ḡ30) by (3.32). It is straightforward to justify the differentiation under the sum in (3.81)–(3.82). For µ̄j, we recall from (3.69)–(3.70) that µ̄j = − ∑∞l=j ( ∏lk=j (λk − τk)−1 ) σl, (3.83) with τj and σl given by (3.66), and with 0 ≤ (λj − τj)−1 ≤ α < 1. This gives µ̄′j = − ∑∞l=j ( ∏lk=j (λk − τk)−1 ) ( σ′l + ∑li=j (λi − τi)−1τ′i ). (3.84) The first product is bounded by αl−j+1, and this exponential decay, together with (3.66), (3.65), and the bounds just proved for ḡ′ and z̄′, leads to the upper bound |µ̄′j| ≤ O( χj ḡ2j ḡ−20 ) claimed in (3.73). Straightforward further calculation leads to the bound on µ̄′′j claimed in (3.74) (the leading behaviour can be seen from the z̄′′j contribution to the σ′′l term).  3.3 Proof of main result In this section, we prove Theorem 3.1.4. We begin in Section 3.3.1 with a sketch of the main ideas, without entering into details. The remainder of Section 3.3 expands the sketch into a complete proof. 3.3.1 Proof strategy Two difficulties in proving Theorem 3.1.4 are: (i) from the point of view of dynamical systems, the evolution map Φ is not hyperbolic; and (ii) from the point of view of nonlinear differential equations, a priori bounds that any solution to (3.6) must satisfy are not readily available due to the presence of both initial and final boundary conditions. Our strategy is to consider the one-parameter family of evolution maps Φ = (Φt)t∈[0,1] defined by Φt(x) = Φ(t, x) = (ψ(x), ϕ̄(x) + tρ(x)) for t ∈ [0, 1], (3.85) with the t-independent boundary conditions that K0 and g0 are given and that z∞ = 0 and µ∞ = 0. This family interpolates between the problem Φ1 = Φ we are interested in, and the simpler problem Φ0 = ¯Φ = (ψ, ϕ̄). The unique solution for ¯Φ is x̄j = ( ¯Kj, ¯Vj), where ¯V is the unique solution of ϕ̄ from Section 3.2, and where ¯Kj is defined inductively for j ≥ 0 by ¯Kj+1 = ψj( ¯Vj, ¯Kj), ¯K0 = K0. (3.86) We refer to x̄ as the approximate flow. We seek a t-dependent global flow x which obeys the generalisation of (3.6) given by xj+1 = Φtj(xj). (3.87) Assuming that xj = xj(t) is differentiable in t for each j ∈ N0, we set ẋj = (∂/∂t)xj. (3.88) Then differentiation of (3.87) shows that a family of flows x = (xj(t))j∈N0,t∈[0,1] must satisfy the infinite nonlinear system of ODEs ẋj+1 − DxΦj(t, xj)ẋj = ρj(xj), xj(0) = x̄j. (3.89) Conversely, any solution x(t) to (3.89), for which each xj is continuously differentiable in t, gives a global flow for each Φt. We claim that (3.89) can be reformulated as a well-posed nonlinear ODE ẋ = F(t, x), x(0) = x̄, (3.90) in a Banach space of sequences x = (x0, x1, . . . ) with carefully chosen weights, and for a suitable nonlinear functional F. To see this, consider the linear equation yj+1 − DxΦj(t, xj)yj = rj, (3.91) where the sequences x and r are held fixed. Its solution with the same boundary conditions as stated below (3.85) is written as y = S(t, x)r. Then we define F, which we consider as a map on sequences, by F(t, x) = S(t, x)ρ(x). (3.92) Thus y = F(t, x) obeys the equation yj+1 − DxΦj(t, xj)yj = ρj(x), and hence (3.90) is equivalent to (3.89) with the same boundary conditions. The main work in the proof is to obtain good estimates for S(t, x), in the Banach space of weighted sequences, which allow us to treat (3.90) by the standard theory of ODE.
We establish bounds on the solution simultaneously with existence, via the weights in the norm. These weights are useful to obtain bounds on the so- lution, but they are also essential in the formulation of the problem as a well-posed ODE. 83 3.3. Proof of main result As we will see in more detail in Section 3.3.4, the occurrence of DxΦ j (t , x j ) in (3.89), rather than the naive linearisation DxΦ j (0) at the “fixed point” x = 0, replaces the eigenvalue 1 in the upper left corner of the square matrix in (3.1) by a smaller eigenvalue 1 − 2β jgj < 1. This helps address difficulty (i) mentioned above. Also, the weights guarantee that a solution in the Banach space obeys the final conditions (z∞ , µ∞) = (0, 0), thereby helping to solve difficulty (ii). 3.3.2 Sequence spaces and weights We now introduce the Banach spaces of sequences used in the reformulation of (3.89) as an ODE. These are weighted l∞-spaces. Definition 3.3.1. Let X ∗ be the space of sequences x = (x j ) j∈N0 with x j ∈ X j . For each α = K, g, z, µ and j ∈ N0, we fix a positive weight wα, j > 0. We write x j ∈ X j = K j ⊕ V as x j = (xα, j )α=K ,g ,z ,µ . Let ‖x j ‖Xwj = maxα=K ,g ,z ,µ (wα, j ) −1‖xα, j ‖X j , ‖x‖Xw = sup j∈N0 ‖x j ‖Xwj , (3.93) and Xw = {x ∈ X ∗ : ‖x‖Xw < ∞}. (3.94) It is not difficult to check that Xw is a Banach space for any positive weight sequence w. Different choices of weights w will be needed. These are all defined in terms of the sequence g̊ = (g̊j ) j∈N0 which is the same as the sequence ḡ for a fixed g̊0; i.e., given g̊0 > 0, it satisfies g̊j+1 = g̊j − β j g̊2j . We define the two weights w = w(g̊0 , r, u) and r = r(g̊0 , r, u) by wα, j =  (r − r∗)g̊3j χ j α = K ug̊2 j | log g̊j | α = g ug̊2 j | log g̊j | χ j α = z, µ, rα, j =  (r − r∗)g̊3j χ j α = K ug̊3 j χ j α = g ug̊3 j χ j α = z, µ, (3.95) where ( χ j ) is the Ω-dependent sequence defined by (3.9). Furthermore, we recall that x̄ = ( ¯K , ¯V ) = x̄(K0 , g0) denotes the sequence in X ∗ uniquely determined from the boundary conditions ( ¯K0 , ḡ0) = (K0 , g0) and ( z̄∞ , µ̄∞) = (0, 0) via ¯Vj+1 = ϕ̄ j ( ¯Vj ) and ¯K j+1 = ψ j ( ¯K j , ¯Vj ), whenever the latter is well-defined. Given an initial condition ( ˚K0 , g̊0), let x̊ = x̄( ˚K0 , g̊0). Denoting the closed ball of radius s in Xw by sB, observe that, if g̊0 = g0 and ˚K0 = K0, the bounds (3.18)–(3.21) are equivalent to x ∈ x̊ + bB, and that, by definition, the projection of x̊ + B onto the the jth sequence element is contained in the domain D j . We will always assume that g0 = ḡ0 and g̊0 are close, but not necessarily that they are equal. The use of g̊ rather than ḡ permits us to vary the 84 3.3. Proof of main result initial condition g0 = ḡ0 without changing the Banach spaces Xw , X r. The use of g0-dependent weights rather than, e.g., the weight j−2 log j for jΩ = ∞ (see Remark 3.1.5(i)) allows us to obtain estimates with good behaviour as g0 → 0. Note that the weight wg , j does not include a factor χ j , and thus does not go to 0 when jΩ < ∞ (see Example 3.1.1(ii)). Remark 3.3.2. The weights w apply to the sequence ẋ (see (3.88)). As motivation for their definition, consider the explicit example of ρ j (x j ) = χ jg3j . In this case, the g equation becomes simply gj+1 = gj − β jg2j + t χ jg3j . (3.96) With the notation ġj = ∂∂t g t j , differentiation gives ġj+1 = ġj (1 − 2β jgj + 3t χ jg2j ) + χ jg3j . (3.97) Thus, by iteration, using ġ0 = 0, we obtain ġj = j−1∑ l=0 χlg 3 l j−1∏ k=l+1 (1 − 2βkgk + 3t χkg3k ). 
(3.98) For simplicity, consider the case t = 0, for which g = ḡ. In this case, it follows from (3.35), (3.32), and (3.46) that ġj ≤ O(1) j−1∑ l=0 ( ḡj ḡl+1 )2 χl ḡ 3 l = O(1)g2j j−1∑ l=0 χl ḡl ≤ O(ḡ2j | log ḡj |), (3.99) which produces the weight wg , j of (3.95). (It can also be verified using (3.41) that if we replace χ j by β j in the above then no smaller weight will work.) 3.3.3 Reduction to a linear equation with nonlinear perturbation For given sequences x , r ∈ X ∗, we now consider the equation y j+1 − DxΦ j (t , x j )y j = r j . (3.100) For x and r fixed, (3.100) is an inhomogeneous linear equation in y. Lemma 3.3.3 below, which lies at the heart of the proof of Theorem 3.1.4, obtains bounds on solutions to (3.100), including bounds on its x-dependence. The latter will allow us to use the standard theory of ODE in Banach spaces to treat the original nonlinear equation, where x and r are both functionals of the solution y, as a perturbation of the linear equation. 85 3.3. Proof of main result In addition to the decomposition X j = K j ⊕ V j , with x j ∈ X j written x j = (K j ,Vj ), it will be convenient to also use the decomposition X j = E j ⊕ Fj with E j = K j ⊕ R and Fj = R ⊕ R, for which we write x j = (u j , v j ) with u j = (K j , gj ) and v j = (z j , µ j ). We denote by piα the projection operator onto the α-component of the space in which it is applied, with α in any of {K,V }, {u, v} = {(K, g), (z, µ)}, or {K, g, z, µ}. Recall that the spaces of sequences Xw are defined in Definition 3.3.1 and the specific weights w and r in (3.95). Lemma 3.3.3. Assume (A1–A3). There exists a constant CS , independent of r and u, and a constant C′ S = C′ S (r, u), such that if g̊0 > 0 is sufficiently small, the following hold for all t ∈ [0, 1], x ∈ x̊ + B. (i) For r ∈ X r, there exists a unique solution y = S(t , x)r ∈ Xw of (3.100) with boundary conditions piu y0 = 0, piv y∞ = 0. (ii) The linear solution operator S(t , x) satisfies ‖S(t , x)‖L(X r ,Xw) ≤ CS . (3.101) (iii) As a map S : [0, 1] × ( x̊ + B) → L(Xw , X r), the solution operator is contin- uously Fréchet differentiable and satisfies ‖DxS(t , x)‖L(Xw ,L(X r ,Xw)) ≤ C′S . (3.102) Lemma 3.3.3 needs to be supplemented with information about the initial con- dition x̄ and the perturbation ρ for the analysis of (3.90) with (3.92). (Note that the sequence x̄ serves as initial condition, at t = 0, for the ODE (3.89), not as initial condition for the flow equation (3.5).) Some information about x̄ is already contained in Lemma 3.2.2. For ρ, we define ρ : x̊ + B → X ∗ by (ρ(x))0 = 0, (ρ(x)) j+1 = ρ j (x j ), (3.103) where ρ j is the map of (3.5). The map ψ : x̊+B → X ∗ is defined analogously. The next lemma expresses immediate consequences of Assumption (A3) for ρ and ψ in terms of the weighted spaces. Although the proof of Theorem 3.1.4 only directly requires the estimates for ρ, we will also need bounds on ψ to prove Lemma 3.3.3, so for convenience we combine both in a single lemma. Lemma 3.3.4. Assume (A3), let ω > κΩ, and assume that g̊0 > 0 is sufficiently small. Then (ψ, ρ) : x̊ + B → X r is twice continuously Fréchet differentiable, ‖ρ(x)‖X r ≤ M/u, (3.104) 86 3.3. Proof of main result and there exists a constant C = C(r, u) such that ‖DK ρ(x)‖L(Xw ,X r) ≤ C, ‖DV ρ(x)‖L(Xw ,X r) ≤ O(g̊0 | log g̊0 |), ‖DKψ(x)‖L(Xw ,X r) ≤ ω, ‖DVψ(x)‖L(Xw ,X r) ≤ O(g̊0 | log g̊0 |), (3.105) and ‖D2x ρ(x)‖L2 (Xw ,X r) ≤ C, ‖D2xψ(x)‖L2 (Xw ,X r) ≤ C. (3.106) We defer the proofs of Lemmas 3.3.3–3.3.4 to Sections 3.3.4 and 3.3.5, respec- tively. 
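Before turning to the proof of Theorem 3.1.4(i), we note that the weight heuristic of Remark 3.3.2 is easy to check numerically. The following sketch is an illustration only: it takes βj ≡ 1, χj ≡ 1, t = 0 and an arbitrary initial value, iterates the derivative recursion (3.97) alongside the recursion for ḡ, and confirms that ġj/(ḡj² |log ḡj|) remains bounded, as asserted in (3.99).

```python
import math

# Numerical check of the weight heuristic of Remark 3.3.2 (illustration only;
# beta_j = 1, chi_j = 1, t = 0 and the initial value g_0 = 0.1 are arbitrary).
# Iterate g_{j+1} = g_j - g_j**2 together with the derivative recursion (3.97),
#     gdot_{j+1} = (1 - 2*g_j)*gdot_j + g_j**3,   gdot_0 = 0,
# and check that gdot_j / (g_j**2 * |log g_j|) stays bounded, as in (3.99).

g, gdot, ratio_max = 0.1, 0.0, 0.0
for j in range(10**5):
    gdot = (1 - 2*g)*gdot + g**3   # uses the current g_j
    g = g - g**2                   # then advance g_j -> g_{j+1}
    ratio_max = max(ratio_max, gdot / (g**2 * abs(math.log(g))))
print("sup_j gdot_j / (g_j^2 |log g_j|) =", ratio_max)  # remains O(1)
```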
Given these, we now prove Theorem 3.1.4(i). Proof of Theorem 3.1.4(i). Let CS be the constant of Lemma 3.3.3, define u∗ = CSM/( 12 b ∧ (1 − b)), and assume u > u∗. For t ∈ [0, 1] and x ∈ x̊ + B, let F (t , x) = S(t , x)ρ(x). (3.107) Let ( ˚K0 , g̊0) = (K0 , g0). Lemmas 3.3.3–3.3.4 imply that if g̊0 > 0 is sufficiently small, F : [0, 1] × ( x̊ + B) → Xw is continuously Fréchet differentiable and ‖F (t , x)‖Xw ≤ ‖S(t , x)‖L(X r ,Xw) ‖ρ(x)‖X r ≤ CSM/u ≤ 12 b ∧ (1 − b). (3.108) Similarly, by the product rule, it follows that there is C such that ‖DxF (t , x)‖L(Xw ,Xw) ≤ ‖[DxS(t , x)]ρ(x)‖L(Xw ,Xw) + ‖S(t , x)[Dx ρ(x)]‖L(Xw ,Xw) ≤ C, (3.109) and thus, in particular, that F is Lipschitz continuous on x ∈ x̊ + B. The theorem now follows from the well-known local existence theory for ODE in Banach spaces. Indeed, for y ∈ B, let ˚F (t , y) = F (t , x̊ + y). (3.110) Let Xw0 = {y ∈ Xw : piu y0 = 0}, B0 = B ∩ Xw0 . Then the statement about boundary conditions of Lemma 3.3.3(i) and (3.108) imply that ˚F (t , 12 bB0) ⊆ ˚F (t ,B0) ⊆ 1 2 bB0. With (3.108)–(3.109), the local existence theory for ODEs on Banach spaces [2, Chapter 2, Lemma 1] implies that the initial value problem ẏ = ˚F (t , y), y(0) = 0 (3.111) has a unique C1-solution y : [0, 1] → Xw0 with y(t) ∈ 12 bB0 for all t ∈ [0, 1]. (The length of the existence interval of the initial value problem (3.111) in 12 bB is 87 3.3. Proof of main result bounded from below by 12 b/( 12 b ∧ (1 − b)) ≥ 1 because ‖ ˚F (t , y)‖ ≤ 12 b ∧ (1 − b) when ‖y‖ ≤ 12 b. It does not depend on the Lipschitz constant of ˚F.) In particular, as discussed around (3.90), it follows that x = x̊ + y(1) is a solution to (3.6). By construction, piu x0 = piu x̊0 = ( ˚K0 , g̊0) = (K0 , g0). Also, piv y∞ (1) = 0 because y(1) ∈ Xw, and since piv x̊∞ = 0, it is also true that piv x∞ = 0. Thus x satisfies the required boundary conditions. The estimates (3.18)–(3.21) are an immediate consequence of ‖y‖Xw ≤ 12 b, with (3.95). To prove uniqueness, suppose that x′ is a solution to (3.6) with boundary con- ditions (K ′0 , g′0) = (K0 , g0) and (z′∞ , µ′∞) = (0, 0), and such that (3.18)–(3.21) hold (with x replaced by x′, and with x̄ as before). Let x̊ = x̄ as before. By as- sumption, x′ − x̊ ∈ bB0. It follows that F : [0, 1] × (x′ + (1 − b)B0) → Xw is Fréchet differentiable and ‖F (t , x)‖Xw ≤ 1 − b for all t ∈ [0, 1] and for all x ∈ x′ + (1 − b)B0 ⊂ x̊ + B0 as discussed around (3.107)–(3.109). In particular there is a unique solution x′(t) for t ∈ [0, 1] to ẋ′ = F (t , x′) with x′(1) = x′ and x′(t) ⊂ x̊ +B0, by considering the ODE backwards in time, which is equally well- posed. It follows that x′(0) is a flow of Φ0 = ¯Φ with the same boundary conditions as x̊. The uniqueness of such flows, by Lemma 3.2.2, implies that x′(0) = x̊, and the uniqueness of solutions to the initial value problem (3.111) in x̊ + B0 then also that x = x′ as claimed. This completes the proof of Theorem 3.1.4(i).  To prove Theorem 3.1.4(ii), we need to know that the initial condition x̄ is differentiable in a small ball x̊ + δB. The smoothness of x̄ is addressed in the following lemma, whose proof is deferred to Section 3.3.5. Lemma 3.3.5. Assume (A1–A3), and let δ > 0 and g̊0 > 0 both be sufficiently small. Then there exists a neighbourhood ¯I = ¯Iδ ⊂ K0 ⊕ R+ of ( ˚K0 , g̊0) such that x̄ : ¯I → x̊ + δB is continuously Fréchet differentiable with ‖Dg0 x̄‖Xw ≤ O(g̊−20 | log g̊0 |−1). (3.112) Proof of Theorem 3.1.4(ii). 
For fixed initial condition ( ˚K0 , g̊0) = (K0 , g0) = u0 obeying the hypothesis of Theorem 3.1.4(i), let ¯I be the neighbourhood of u0 de- fined by Lemma 3.3.5 with δ < 12 b. By Lemma 3.3.5, x̄ : ¯I → x̊ + δB ⊂ Xw is continuously Fréchet differentiable. It follows from [2, Chapter 2, Lemma 4] that ẏ = ˚F (t , y), y(0) = x̄(u0) − x̊ (3.113) has a unique C1-solution y : [0, 1] × ¯I → Xw0 with ‖y(t)‖Xw ≤ 12 b. [2, Chapter 2, Lemma 4] and Lemma 3.3.5 also imply ∥∥∥Dg0 y(t , K0 , g0)∥∥∥Xw ≤ C ∥∥∥Dg0 x̄(K0 , g0)∥∥∥Xw ≤ O(g̊−20 | log g̊0 |−1). (3.114) 88 3.3. Proof of main result Let x(u0) = x̊ + y(1, u0). It follows as previously that x(u0) = (u(u0), v(u0)) is a solution to (3.6) with boundary conditions u(u0) = u0 and v∞ (u0) = 0. Moreover, the differentiability in the sequence space Xw implies in particular that, as elements of the spaces X j , each x j = (K j ,Vj ) is a C1 function of u0. Also (3.114) with (3.95) immediately implies that ∂z0 ∂g0 = O(1), ∂µ0 ∂g0 = O(1). (3.115) To prove (3.18)–(3.21) for x(u0) with u0 ∈ I ⊆ ¯I, we use that ‖x(u0) − x̊‖ ≤ 12 b and ‖ x̄(u0) − x̊‖Xw ≤ δ imply ‖K j − ¯K j ‖K j ≤ ‖K j − ˚K j ‖K j + ‖ ˚K j − ¯K j ‖K j ≤ ( 12 b + δ)(r − r∗)g̊3j (3.116) and analogously that |gj − ḡj | ≤ ( 12 b + δ)ug̊2j | log g̊2j | (3.117) |z j − z̄ j | ≤ ( 12 b + δ)uχ j g̊2j | log g̊2j | (3.118) |µ j − µ̄ j | ≤ ( 12 b + δ)uχ j g̊2j | log g̊2j |. (3.119) Since ( 12 b + δ) < b, by assuming that |g̊0 − ḡ0 | is sufficiently small, i.e., shrinking ¯I to a smaller neighborhood I if necessary, we obtain with (3.73) that ( 12 b + δ)g̊2j | log g̊2j | ≤ bḡ2j | log ḡ2j |. (3.120) This completes the proof of Theorem 3.1.4(ii).  It now remains only to prove Lemmas 3.3.3–3.3.5. We begin with Lemma 3.3.3, which lies at the heart of the proof. 3.3.4 Proof of Lemma 3.3.3 The proof proceeds in three steps. The first two steps concern an approximate version of the equation and the solution of the approximate equation, and the third step treats (3.100) as a small perturbation of this approximation. Step 1. Approximation of the linear equation Define ¯Φ0 j : X j → X j+1 by extending ϕ̄ j trivially to the K-component, i.e., ¯Φ0j = (0, ϕ̄ j ) with respect to the decomposition X j+1 = K j+1 ⊕ V. Thus Φ(t , x) = 89 3.3. Proof of main result ¯Φ 0(x) + (ψ(x), tρ(x)). Explicit computation of the derivative of ϕ̄ j of (3.5), using (3.1), shows that D ¯Φ0j (x j ) =  0 0 0 0 0 1 − 2β jgj 0 0 0 − ˜ξ j 1 − 2ζ jgj 0 0 η̃ j −γ̃ j ˜λ j  , (3.121) with η̃ j = η j − 2υggj gj − υ gz j z j − υgµj µ j , γ̃ j = γ j − υgzj gj − 2υzzj z j − υ zµ j µ j , ˜λ j = λ j − υgµj gj − υ zµ j z j , ˜ξ j = 2θ jgj + 2ζ j z j . (3.122) The block matrix structure in (3.121) is with respect to the decomposition X j = E j ⊕ Fj introduced in Section 3.3.3. The matrix D ¯Φ0 j (x j ) depends on x j ∈ X j , but it is convenient to approximate it by the constant matrix L j = D ¯Φ0j ( x̊ j ) = ( Aj 0 Bj Cj ) , (3.123) where the blocks Aj , Bj , and Cj of L j are defined by evaluating the blocks of the matrix (3.121) at x̊ j rather than at x j (given explicitly in (3.129) below). We will study the equation y j+1 = L j y j + r j , (3.124) which approximates (3.100). Lemma 3.3.6 below provides a useful reformulation of (3.124). 
For its statement, we define linear operators H : D(H) → X ∗ and U : D(U) → X ∗ (where D(H) and D(U) are the subspaces of X ∗ on which the infinite sums converge) by piuH = 0, (pivH x) j = − ∞∑ l= j C−1j · · ·C−1l Blpiu xl , (3.125) and (piuU x) j = j−1∑ l=0 Aj−1 · · · Al+1piu xl , (pivU x) j = − ∞∑ l= j C−1j · · ·C−1l piv xl . (3.126) 90 3.3. Proof of main result It follows from the definitions (recalling piK Aj = 0 = AjpiK ) that piK H = 0 = HpiK , piV H = H = HpiV , piKU = UpiK , piVU = UpiV . (3.127) The empty product in the formula for piuU x is interpreted as the identity, so the term in the sum corresponding to l = j − 1 is simply piu x j . Lemma 3.3.6. Assume (A1–A2) and that g̊0 > 0 is sufficiently small. If r ∈ D(U) and y ∈ D(H) satisfies piu y0 = 0 and piv y∞ = 0, then (3.124) holds if and only if y = Hy +Ur, (3.128) holds. The proof is straightforward, but requires an estimate on the product of the matrices Cj which we will prove first. Products of the Cj and Aj will also play an important role in the analysis of the operators H and U in the following section, so that it is convenient to prove a more precise statement about them now than what is needed for the proof of Lemma 3.3.6. Let us first record explicitly the blocks of L j : Aj = ( 0 0 0 1 − 2β j g̊j ) , Bj = ( 0 − ˚ξ j 0 η̊ j ) , Cj = ( 1 − 2ζ j g̊j 0 −γ̊ j ˚λ j ) (3.129) with η̊ j , γ̊ j , ˚λ j , and ˚ξ j as in (3.122) with x replaced by x̊. Lemma 3.3.7. Assume (A1–A2). Let α ∈ (λ−1 , 1). Then for g̊0 > 0 sufficiently small (depending on α), the following hold. (i) Uniformly in all l ≤ j, Aj · · · Al = ( 0 0 0 O(g̊2 j+1/g̊ 2 l ) ) . (3.130) (ii) Uniformly in all j, Bj = ( 0 O(g̊j χ j ) 0 O( χ j ) ) . (3.131) (iii) Uniformly in all l ≥ j, C−1j · · ·C−1l = ( O(1) 0 O( χ j ) O(αl− j+1) ) . (3.132) 91 3.3. Proof of main result Proof. (i) It follows immediately from (3.129) that Aj · · · Al = j∏ k=l (1 − 2βk g̊k )pig , (3.133) and thus (3.35) implies (i). (ii) It follows directly from (3.129) and Lemma 3.2.2 that (3.131) holds. (iii) Note that ( c1 0 b1 a1 ) · · · ( cn 0 bn an ) = ( c∗ 0 b∗ a∗ ) (3.134) with a∗ = a1 · · · an , b∗ = n∑ i=1 a1 · · · ai−1bici+1 · · · cn , c∗ = c1 · · · cn . (3.135) We apply this formula with the inverse matrices C−1j = ( (1 − 2ζ j g̊j )−1 0 (1 − 2ζ j g̊j )−1γ̊ j α̊ j α̊ j ) (3.136) where α̊ j = ˚λ−1j . Thus C−1j · · ·C−1l = ( τ̊j ,l 0 σ̊ j ,l α̊ j ,l ) (3.137) with α̊ j ,l = α̊ j · · · α̊l , τ̊j ,l = l∏ k= j (1 − 2ζk g̊k )−1 , (3.138) σ̊ j ,l = l− j+1∑ i=1  l∏ k= j+i (1 − 2ζk g̊k )−1  γ̊ j+i−1  j+i−2∏ k= j α̊k  . (3.139) The product defining τ̊j ,l is O(1) by (3.36). Assume that g̊0 is sufficiently small that, with Lemma 3.2.2 and (A2), α̊m < α for all m. Then α̊ j ,l ≤ O(αl− j+1). Similarly, since γ̊m ≤ O( χm ), |σ̊ j ,l | ≤ l− j+1∑ i=1 αiO( χ j+i−1) ≤ O( χ j ). (3.140) This completes the proof.  92 3.3. Proof of main result Proof of Lemma 3.3.6. The u-component of (3.124) is given by u j+1 = Aju j + piur j . (3.141) By induction, under the initial condition u0 = 0 this recursion is equivalent to u j = piu y j = j−1∑ l=0 Aj−1 · · · Al+1piurl , (3.142) which is the same as the u-component of (3.128). The v-component of (3.124) states that v j+1 = Bju j + Cjv j + pivr j , (3.143) and this is equivalent to v j = C−1j v j+1 − C−1j Bju j − C−1j pivr j . (3.144) By induction, for any k ≥ j, the latter is equivalent to v j = C−1j · · ·C−1k vk+1 − k∑ l= j C−1j · · ·C−1l (Blul + pivrl ). 
(3.145) By Lemma 3.3.7(iii), with some α ∈ (λ−1 , 1) and with g̊0 sufficiently small, ‖C−10 · · ·C−1k ‖ is uniformly bounded. Thus, if y j = (u j , v j ) satisfies (3.124) and v j → 0, then C−10 · · ·C−1k vk+1 → 0 and hence v j = − ∞∑ l= j C−1j · · ·C−1l (Blul + pivrl ), (3.146) which is the same as the v-component of (3.128). Conversely, suppose that y j satisfies (3.128) and v j → 0. It is also straightforward to conclude that (3.146) implies (3.145) and thus that the v-component of y satisfies (3.124).  Step 2. Solution of the approximate equation We now prove existence, uniqueness, and bounds for the solution to the approxi- mate equation (3.124). Lemma 3.3.8. Assume (A1–A2) and that g̊0 > 0 is sufficiently small. For each r ∈ X r and x ∈ x̊ + B, there exists a unique solution y = S0r ∈ Xw to (3.124) 93 3.3. Proof of main result obeying the boundary conditions piu y0 = 0, piv y∞ = 0. The solution operator S0 is block diagonal w.r.t. the decomposition x = (K,V ), with S0 = ( 1 0 0 S0 VV ) , (3.147) and there is a constant CS0 > 0 such that, uniformly in small g̊0, ‖S0VV ‖L(X r ,Xw) ≤ CS0 . (3.148) The constant CS0 is independent of u and r. Proof. According to Lemma 3.3.6, it suffices to prove that there is a unique solu- tion in Xw to (3.128) (instead of (3.124)) which obeys the required boundary condi- tions. Observe that as a block matrix with respect to the decomposition x = (u, v), with Hvu = pivHpiu , the operator 1 − H is triangular of the form 1 − H = ( 1 0 −Hvu 1 ) . (3.149) We will prove that Hvu is a bounded operator in L(Xw , Xw). It follows that 1 − H has a bounded inverse on Xw given by the block matrix (1 − H)−1 = ( 1 0 Hvu 1 ) . (3.150) We further show that U is a bounded operator in L(X r , Xw). This implies that the unique solution in Xw of (3.124) is given by y = S0r = (1 − H)−1Ur (3.151) and, since piu (1 − H)−1 = piu and piKU = piK , that (3.147)–(3.148) hold. The boundary condition piv y∞ = 0 is a consequence of y ∈ Xw, and the initial condition piu y0 = 0 is implicit in the equation (3.128). The claim that piK S0 = S0piK and piV S0 = S0piV then follows from (3.127). Since piuS0r = piuUr , the cases α = K, g of (3.148) follow from the bounds claimed for U. To complete the proof, we require estimates for piαU for α ∈ {K, g, z, µ}, and on piαH for α = z, µ. Thus there are six estimates in all. Their treatment is similar, and uses Lemma 3.2.1(ii), which gives that for all k ≥ j ≥ 0 and m ≥ 0, k∑ l= j χl g̊ n l | log g̊l |m ≤ Cn ,m  | log g̊k |m+1 n = 1 χ j g̊ n−1 j | log g̊j |m n > 1. (3.152) 94 3.3. Proof of main result (i) Bound for K-component. By definition, since piK Al = 0, we have piKU = piK . Therefore, ‖piKUr ‖Xw ≤ sup j ‖piK r j ‖Xwj ≤ sup j [ w−1K , j rK , j ] ‖r ‖X r = ‖r ‖X r . (3.153) (ii) Bound for g-component. By Lemma 3.3.7(i), (3.95), (3.32), and (3.152), ‖pigUr ‖Xw ≤ sup j j−1∑ l=0 ‖pig Aj−1 · · · Al+1rl ‖Xwj ≤ sup j j−1∑ l=0 w−1V , j rV ,lO(g̊j/g̊l )2‖r ‖X r ≤ c‖r ‖X r sup j | log g̊j |−1 j−1∑ l=0 χl g̊l ≤ c‖r ‖X r . (3.154) (iii) Bound for z-component. By Lemma 3.3.7(iii), (3.95), and (3.152), ‖pizUr ‖Xw ≤ sup j ∞∑ l= j ‖pizC−1j · · ·C−1l rl ‖Xwl ≤ c sup j hVw−1V , j ∞∑ l= j χl g̊ 3 l ‖r ‖X r ≤ c | log g̊0 |−1‖r ‖X r . (3.155) Similarly, by Lemma 3.3.7(ii-iii), (3.95), and (3.152), ‖pizH ‖L(Xw ,Xw) ≤ sup j ∞∑ l= j ‖pizC−1j · · ·C−1l Bl ‖L(Xwl ,Xwj ) ≤ c sup j w−1V , j ∞∑ l= j χl g̊lwV ,l ≤ c. (3.156) (iv) Bound for µ-component. 
Using Lemma 3.3.7(iii), we obtain ‖piµUr ‖Xw ≤ sup j [ ∞∑ l= j ‖piµC−1j · · ·C−1l rl ‖Xwj ] ≤ c sup j uw−1V , j [ ∞∑ l= j χl g̊ 3 l + ∞∑ l= j αl− j+1 χl g̊ 3 l ] ‖r ‖X r ≤ c | log g̊0 |−1‖r ‖X r , (3.157) where we used (3.152) and also that ∑∞l= j αl+1− j χl g̊3l ≤ c χ j g̊3j in the last step. To bound ‖piµH ‖L(Xw ,Xw) , we argue similarly as for piµUr , and use Lemma 3.3.7 to 95 3.3. Proof of main result obtain ‖piµH ‖L(Xw ,Xw) ≤ sup j ∞∑ l= j ‖piµC−1j · · ·C−1l Bl ‖L(Xwl ,Xwj ) ≤ c sup j w−1V , j  ∞∑ l= j g̊j χ jwV ,l + ∞∑ l= j αl+1− j χ jwV ,l  ≤ c. (3.158) This proves the required bounds for α = µ and thus completes the proof.  Step 3. Solution of the linear equation We now prove Lemma 3.3.3, which involves solving the equation (3.100). Proof of Lemma 3.3.3. Fix ω ∈ (κΩ, 1). (i) We define W j (t , x j ) = DxΦ j (t , x j ) − L j = [Dx ¯Φ0j (x j ) − Dx ¯Φ0j ( x̊)] + Dx (ψ j (x j ), tρ j (x j )), (3.159) and rewrite (3.100) as y j+1 = DxΦ j (t , x j )y j + r j = L j y j +W j (t , x j )y j + r j . (3.160) It will be convenient to combine the W j (t , x) to an operator on sequences via (W (t , x))0 = 0 and (W (t , x)) j+1 = W j (t , x). This operator can be written as a block matrix with respect to the decomposition x = (K,V ) as W (t , x) = ( WKK WKV WVK WVV ) , (3.161) with Wαβ = piαW (t , x)piβ . We claim that W : [0, 1] × ( x̊ + B) → L(Xw , X r), that W is continuously Fréchet differentiable, and that if x ∈ x̊ + B then, ‖WKK ‖L(Xw ,X r) ≤ ω, ‖WVK ‖L(Xw ,X r) ≤ C, ‖WKV ‖L(Xw ,X r) ≤ o(1), ‖WVV ‖L(Xw ,X r) ≤ o(1), (3.162) as g̊0 → 0, and ‖DxW j (t , x j )‖L(Xwj ,L(Xwj ,X rj+1)) ≤ C. (3.163) To see this, note that the first term on the right-hand side of (3.159) only depends on the V -components, and is continuously Fréchet differentiable since, by (3.121), 96 3.3. Proof of main result D2 ¯Φ0 j is a constant matrix for each j with coefficients bounded by O( χ j ). There- fore, for x ∈ x̊ + B, ‖[D ¯Φ0j ( x̊ j ) − D ¯Φ0j (x j )]piV ‖L(Xwj ,X rj+1) ≤ c χ j r −1 V , j+1w 2 V , j ‖ x̊ j − x j ‖Xwj = O(ug̊0 | log g̊0 |2). (3.164) This contributes to the bounds (3.162), with g̊0 taken small enough. The second term on the right-hand side of (3.159), as well as its derivative, have been bounded in Lemma 3.3.4, completing the proof of (3.163). By the assumption that y ∈ Xw, Lemma 3.3.8, and (3.162), the equation (3.160) with the boundary conditions of Lemma 3.3.3(i) is equivalent to y = S0(W (t , x)y + r). (3.165) (ii) To solve this equation, we use that if A and B are bounded operators on a Banach space such that A has a bounded inverse A−1 and ‖A−1B‖ < 1, then A− B has a bounded inverse. (Indeed, A− B = A(1− A−1B) and the inverse of 1− A−1B is given by the Neumann series.) As in (3.147), we write S0 as a block matrix with respect to the decomposition x = (K,V ) as S0 = ( 1 0 0 S0 VV ) . (3.166) Let A = ( 1 − WKK 0 −S0 VV WVK 1 − S0VVWVV ) , B = ( 0 WKV 0 0 ) (3.167) such that 1 − S0W (t , x) = A − B. Then (3.162) with g̊0 sufficiently small implies ‖WKK ‖L(Xw ,Xw) < 1 and ‖S0VVWVV ‖L(Xw ,Xw) < 1. Thus A is a block matrix of the form A = ( AKK 0 AVK AVV ) (3.168) where AKK and AVV have inverses in L(Xw , Xw), and it follows that A has the bounded inverse on Xw given by the block matrix A−1 = ( A−1 KK 0 A−1 VV AVK A−1KK A −1 VV ) . (3.169) Moreover, (3.162) with g̊0 sufficiently small implies that ‖A−1B‖L(Xw ,Xw) < 1 and thus that 1 − S0W (t , x) has a bounded inverse in L(Xw , Xw). It follows that the solution operator is given by S(t , x) = (1 − S0W (t , x))−1S0. (3.170) 97 3.3. 
Proof of main result (iii) By (3.170), continuous Fréchet differentiability in x of S(t , x) follows from the continuous Fréchet differentiability of S0W (t , x), which itself follows from part (i) and from DxS0W (t , x) = S0DxW (t , x) by linearity of S0. Explicitly, DxS(t , x) = (1 − S0W (t , x))−1DxS0W (t , x)(1 − S0W (t , x))−1S0. (3.171) By (3.163), ‖DxS0W (t , x)‖L(Xw ,L(Xw ,Xw)) ≤ C‖DxW (t , x)]‖L(Xw ,L(Xw ,X r)) ≤ C. (3.172) Together with the boundedness of the operators (1 − S0W (t , x))−1 and S0, this proves (3.102) and completes the proof.  3.3.5 Proofs of Lemmas 3.3.4–3.3.5 Proof of Lemma 3.3.4. We begin with the verification of the bounds on the first derivatives in (3.104). By assumptions (3.13)–(3.14), together with (3.32), the definition of the weights (3.95), and for (3.174) also the fact that χ j/χ j+1 ≤ Ω by (3.9), we obtain for x ∈ x̊ + B, ‖DVψ j (x j )‖L(Xwj ,X rj+1) ≤ M χ j g̊ 2 j r −1 K , j+1wV , j ≤ O(g̊0 | log g̊0 |), (3.173) ‖DKψ j (x j )‖L(Xwj ,X rj+1) ≤ κr −1 K , j+1wK , j ≤ κΩ(1 + O(g̊0)), (3.174) ‖DV ρ j (x j )‖L(Xwj ,X rj+1) ≤ M χ j g̊ 2 j r −1 V , j+1wV , j ≤ O(g̊0 | log g̊0 |), (3.175) ‖DK ρ j (x j )‖L(Xwj ,X rj+1) ≤ Mr −1 V , j+1wK , j ≤ O(1), (3.176) which establishes the bounds on the first derivatives in (3.104), choosing g̊0 small enough. The bounds on the second derivatives are also immediate consequences of Assumption (A3). Let φ denote either ψ or ρ. Then (3.15) and the definition of the weights (3.95) imply that, for 2 ≤ n + m ≤ 3, ‖DnK DmV φ‖Ln+m (Xw ,X r) ≤ C. (3.177) In addition, these bounds on the second and third derivatives imply that ‖φ(x + y) − φ(x) − Dφ(x)y‖X r ≤ C‖y‖2Xw , (3.178) ‖Dφ(x + y) − Dφ(x) − D2φ(x)y‖L(Xw ,X r) ≤ C‖y‖2Xw , (3.179) and hence that φ : x̊ + B → X r is indeed twice Fréchet differentiable. The above bound on the third derivatives also implies continuity of this differentiability. The ρ-bound is equivalent to Assumption (A3) since ‖ρ j (x j )‖X rj+1 = r −1 V , j+1M χ j+1g̊ 3 j+1 = M/u. (3.180) This completes the proof.  98 3.3. Proof of main result Proof of Lemma 3.3.5. Let ¯I = ([ 12 g̊0 , 2g̊0] × K0) ∩ x̄−1( x̊ + δB). (3.181) We will show that ¯I is a neighbourhood of ( ˚K0 , g̊0) and that x̄ : ¯I → x̊ + δB is con- tinuously Fréchet differentiable. Since x̄−1( x̊+δB) = ¯V−1( x̊+δB)∩ ¯K−1( x̊+δB), it suffices to show that each of ¯V−1( x̊ + δB) and ¯K−1( x̊ + δB) is a neighbourhood of ( ˚K0 , g̊0), and that each of ¯V and ¯K is continuously Fréchet differentiable on ¯I as maps with values in subspaces of Xw. We begin with ¯V . Let ¯V ′ j denote the derivative of ¯Vj with respect to g0, and let ¯V ′ = ( ¯V ′ j ) denote the sequence of derivatives. It is straightforward to conclude from Lemmas 3.2.3 and 3.2.1(iv) and (3.95) that ‖ ¯V ′‖Xw ≤ O(g̊−20 | log g̊0 |−1), (3.182) and hence that ¯V ′ ∈ Xw if g0 ∈ ¯Ig ⊆ [ 12 g̊0 , 2g̊0], and similarly that ¯V−1( x̊ + δB) contains a neighbourhood of g̊. That ¯V ′ is actually the derivative of ¯V in the space Xw can be deduced from the fact that the sequence ¯V ′′(g0) is uniformly bounded in Xw for g0 ∈ ¯Ig (though not uniform in g̊0). In fact, by Lemma 3.2.3, ‖ ¯Vj (g0 + ε) − ¯Vj (g0) − ε ¯V ′j (g0)‖Xwj ≤ O(ε 2) sup 0<ε′<ε ‖ ¯V ′′j (g + ε′)‖Xwj . (3.183) The continuity of ¯V ′ in Xw follows similarly. For ¯K , we first note that ‖DK0 ¯K0‖L(K0 ,K0) = 1, ‖Dg0 ¯K0‖K0 = 0. By (A3) and induction, ‖DK0 ¯K j+1‖L(K0 ,K j+1) ≤ κ‖DK0 ¯K j ‖L(K0 ,K j ) ≤ κ j+1. (3.184) Since κ < Ω−1 < 1, and since g̊j+1/g̊j → 1 by (3.32), we obtain ‖DK0 ¯K j+1‖L(K0 ,K j+1) ≤ O(g̊−30 wK , j+1). 
(3.185) Similarly, by (3.14) and Lemma 3.2.3, ‖Dg0 ¯K j+1‖K j+1 ≤ κ‖Dg0 ¯K j ‖K j + O( χ j ḡ2j )‖Dg0 ¯Vj ‖V ≤ κ‖Dg0 ¯K j ‖K j + O( χ j ḡ4j /ḡ20 ). (3.186) By induction as in the proof of Lemma 3.1.3, again using κ < Ω−1, we conclude ‖Dg0 ¯K j+1‖K j+1 ≤ O( χ j ḡ4j /ḡ20 ) ≤ O(g̊−10 wK , j+1). (3.187) 99 3.3. Proof of main result These bounds imply that ¯K−1( x̊ + δB) contains a neighbourhood of ( ˚K0 , g̊0) and also that the component-wise derivatives of ¯K with respect to g0 and K0 are respec- tively in Xw  L(R, Xw) and L(K0 , Xw). To verify that the component-wise derivative of ¯K is the Fréchet derivative in Xw, it again suffices to obtain bounds on the second derivatives in Xw, as in (3.183). For example, since D2 K0 ¯K0 = 0, DK0 ¯Vj = 0, and D2K0 ¯K j+1 = DKψ( ¯K j , ¯Vj )D2K0 ¯K j + D2Kψ( ¯K j , ¯Vj )DK0 ¯K j DK0 ¯K j , (3.188) it follows from (3.184) and induction that, for (K0 , g0) ∈ ¯I with ¯I ⊂ K0 ⊕ R chosen sufficiently small, ‖D2K0 ¯K j+1‖ ≤ κ‖D2K0 ¯K j ‖ + Cκ2 j ≤ C(1 + jκ)κ j ≤ O(g̊−30 wK , j+1). (3.189) Thus the component-wise derivative D2 K0 ¯K is uniformly bounded L2(K0 , Xw) for (K0 , g0) ∈ ¯I. Similarly, slightly more complicated recursion relations than (3.188) for D2g0 ¯K j and Dg0 DK0 ¯K j show that the component-wise second derivative of ¯K is uniformly bounded in L2(K0 ⊕ R, Xw) for ¯I sufficiently small. This shows as in (3.183) that ¯K is continuously Fréchet differentiable from ¯I to Xw. We have thus shown that x̄ is continuously Fréchet differentiable from a neigh- bourhood ¯I of ( ˚K0 , g̊0) to Xw, and (3.112) follows from (3.182), (3.187).  100 Chapter 4 Outlook 4.1 The weakly self-avoiding walk with contact attraction In Section 1.2, the weakly self-avoiding walk with additional contact self-attraction was introduced, see (1.9), but the subsequent discussion focused on the special case without self-attraction, γ = 0. For the model with self-attractive interaction, there is the conjectured phase diagram of Figure 1.3 which, in particular, predicts the same behavior as for γ = 0 also for sufficiently small γ > 0. However, even small self-attraction makes the analysis more difficult than the weakly self-avoiding walk already is because the energy functional then loses the superadditivity property. For γ = 0, H (L + L′) = β ∑ x (Lx + L′x )2 ≥ β ∑ x (L2x + L′2x ) = H (L) + H (L′). (4.1) This superadditivity implies, for example, that ct = ∑ x ct (x) is submultiplicative, i.e., ct+s ≤ ctcs , and therefore that there is µc such that 1t log ct → µc ; see e.g. [88] or [12]. The subadditivity (4.1) does not hold if γ > 0. As a result of the failure of (4.1), little is known if γ > 0. For example, the re- sults about the (weakly or strictly) self-avoiding walk in dimension five and higher obtained with the lace expansion do not easily extend to small γ > 0. The unique exception is a result by Ueltschi [109] who studies a model of the strictly self- avoiding walk with additional small self-attraction, in dimension five and higher, but relies on very particular exponentially decaying step weights (instead of nearest neighbor steps). The special step distribution helps in the analysis, for example by making ct submultiplicative, but is an undesirable feature otherwise. Although superadditivity of H fails for γ > 0, it has been observed [110] that the attractive force can be written as ∑ x ∑ y:y∼x LtxL t y = 2d ∑ x (Ltx )2 + ∑ x Ltx (∆Lt )x = 2d ∑ x (Ltx )2 − 1 2 ∑ x (∇Lt )2x (4.2) 101 4.2. 
Logarithmic corrections to scaling behavior so that Hβ,γ (L) = (β − 2dγ) ∑ x L2x + γ 2 ∑ x (∇L)2x ≥ Hβ−2dγ,0(L). (4.3) In terms of the renormalization group approach sketched in Section 1.4, the term (∇L)2 is irrelevant. This is the basis for our work in preparation, with Brydges and Slade, in which we extend the result [38] to small γ > 0, thus showing that the two-point function is asymptotic to a multiple for |x |−(d−2) in dimension d ≥ 4. 4.2 Logarithmic corrections to scaling behavior A long-term goal of the renormalization group program for four dimensional weak- ly self-avoiding walks is to prove the conjecture (1.13) for the weakly self-avoiding walk, or more generally that, for any p ≥ 0, ( EHt |wt |p ) 1 p ∼ cpt 1 2 (log t) 18 (t → ∞). (4.4) A step towards this goal, interesting in itself, is to establish that the so-called sus- ceptibility χ(µ) = ∑x Gµ (x) has a related logarithmic correction, χ(µc + T−1) ∼ cT (log T ) 14 (T → ∞) (4.5) where µc is the smallest real number such that χ(µ) < ∞ for µ > µc . In work in preparation with Brydges and Slade, we utilize results from Chapters 2–3, together with [10, 34–37], to establish (4.5). 4.2.1 End-to-end distance and Laplace transforms A heuristic argument (a version of Fisher’s scaling relation for the critical expo- nents that applies in the critical dimension, see e.g. [15, 88]) predicts that if EHt |wt |2 ∼ ct(log t)2ν (t → ∞), (4.6) χ(µ + ε) ∼ ε−1(− log ε)γ (ε ↓ 0), (4.7) Gµc (x) ∼ c |x |−(d−2) (log |x |)−η (|x | → ∞), (4.8) then the exponents of the logarithms should be related by γ = 2ν − η. (4.9) It has been proved that η = 0 [38] and we can prove that γ = 14 . Then (4.9) leads to the prediction ν = 18 as in (4.4). 102 4.2. Logarithmic corrections to scaling behavior Let us give some indication in which way (4.5) is a natural step in the direction of proving (4.4). The left-hand side of (4.4) is ( EHt |wt |p ) 1 p = (∑ x ct (x) |x |p∑ x ct (x) ) 1 p . (4.10) It would suffice to establish the more general claim that e−µc t ∑ x ct (x) |x |p ∼ cpt p 2 (log t) 14+ p8 (t → ∞). (4.11) This would in particular include ∑ x ct (x) ∼ ceµc t (log t) 14 (t → ∞). (4.12) An approach to proving (4.11) is given by proving related asymptotic behavior of its Laplace transform, which is given in terms of the two-point function (1.16) by ∫ ∞ 0  ∑ x ct (x) |x |p  e−µt dt = ∑ x Gµ (x) |x |p . (4.13) The asymptotics (4.11) are related to the asymptotics of the Laplace transform near its the critical point µc . For example, equation (4.11) implies that ∑ x Gµc+ 1T (x) |x | p ∼ c′p ( T (log T ) 14 )1+ p2 (T → ∞). (4.14) For p = 0, this is the same as (4.5). Equation (4.14) follows from (4.11) by a direct calculation: indeed, with t = sT , ∫ ∞ 0 e−(µc+ 1 T )t  ∑ x ct (x) |x |p  dt = T ∫ ∞ 0 e−se−µc sT  ∑ x csT (x) |x |p  ds, (4.15) and, using (4.11), it is possible to conclude that e−µc sT ∑ x csT (x) |x |p ∼ cpT p 2 s p 2 (log T ) 14+ p8 (T → ∞). (4.16) This implies (4.14) with c′p given by c′p = cp ∫ ∞ 0 e−s s p 2 ds = cpΓ ( 1 + p2 ) . (4.17) 103 4.2. Logarithmic corrections to scaling behavior The converse, that (4.14) implies (4.11), is not true in general. However, Tauberian theory [59, Chapter XIII] shows that (4.14) implies that (4.11) holds asymptotically in Cesaro mean, i.e., 1 T ∫ T 0 e−µc t  ∑ x ct (x) |x |p  dt ∼ cpT p2 (log T ) 14+ p8 (T → ∞). 
(4.18) To conclude (4.11) rather than the averaged version (4.18), further informa- tion is needed such as, e.g., eventual monotonicity of the integrand in (4.18), or related asymptotics as z = 1 T → 0 for z in a region of the complex plane. The lat- ter approach presumably requires major extensions to the argument which shows (4.5), but in the simpler case of weakly self-avoiding walks on a four dimensional hierarchical lattice, this was successfully carried by Brydges and Imbrie [28]. 4.2.2 The renormalization group approach The renormalization group method can be used to establish that the long-distance behavior of the weakly self-avoiding walk is, in a suitable sense, related to that of a free field. The critical model, µ = µc , is described by a massless free field, m2 = 0, and subcritical models, µ > µc , are related to massive free fields, m2 > 0. For example, we can show that there is a function µ = µ(m2) such that χ(µ(m2)) ∼ c m2 (m2 ↓ 0), (4.19) i.e., the susceptibility of the weakly self-avoiding walk with parameter µ = µ(m2) is similar to that of the free field with mass m2. It turns out important to establish the relation between µ and m2 in the non-critical case. We can show that the right- inverse m2(µ) = inf{m2 > 0 : µ(m2) = µ} satisfies m2(µc + ε) ∼ cε(− log ε)− 14 (ε ↓ 0). (4.20) These two properties allow to conclude (4.5). To exemplify in which ways the results of Chapters 2 and 3 enter the proof of (4.20), let us mention that the coefficients β j of Appendix A, in particular (A.8), given in terms of the decomposition of the Green function with m2 > 0, satisfy ∞∑ j=0 β j ∼ c(− log m2) (m2 ↓ 0). (4.21) Using this, it can be shown that ḡj → ḡ∞ as j → ∞ with ḡ∞ ∼ c(− log m2)−1 as m2 ↓ 0. This is origin of the logarithm in (4.20). The power 14 is a consequence of the explicit structure of the µ-equation of the recursion (A.8). 104 Bibliography [1] A. Abdesselam, A complete renormalization group trajectory between two fixed points, Comm. Math. Phys. 276 (2007), no. 3, 727–772. [2] R. Abraham and J.E. Marsden, Foundations of mechanics, Benjamin/Cum- mings Publishing Co. Inc. Advanced Book Program, Reading, Mass., 1978, Second edition, revised and enlarged, With the assistance of Tudor Raţiu and Richard Cushman. [3] S. Adams, R. Kotecký, and S. Müller, Strict convexity of the surface tension for non-convex potentials, Preprint, 2012. [4] S. Adams, R. Kotecký, and S. Müller, Finite range decomposition for fami- lies of gradient Gaussian measures, J. Funct. Anal. 264 (2013), no. 1, 169– 206. [5] L.V. Ahlfors, Complex analysis, third ed., McGraw-Hill Book Co., New York, 1978, An introduction to the theory of analytic functions of one com- plex variable, International Series in Pure and Applied Mathematics. [6] M. Aizenman, D.J. Barsky, and R. Fernández, The phase transition in a general class of Ising-type models is sharp, J. Statist. Phys. 47 (1987), no. 3- 4, 343–374. [7] S. Alinhac, Hyperbolic partial differential equations, Universitext, Springer, Dordrecht, 2009. [8] R. Bauerschmidt, A simple method for finite range decomposition of quadratic forms and Gaussian fields, To appear in Probab. Theory Related Fields. [9] R. Bauerschmidt, D.C. Brydges, and G. Slade, Logarithmic correction to scaling for the susceptibility of the 4-dimensional weakly self-avoiding walk: a renormalisation group analysis, In preparation. [10] , A renormalisation group method. III. Perturbative analysis of weakly self-avoiding walk, In preparation. 
105 Bibliography [11] , Structural stability of a dynamical system near a non-hyperbolic fixed point, To appear in Annales Henri Poincaré, 2013. [12] R. Bauerschmidt, H. Duminil-Copin, J. Goodman, and G. Slade, Lectures on self-avoiding walks, Probability and Statistical Physics in Two and More Di- mensions, Clay Mathematics Proceedings, vol. 15, Amer. Math. Soc., Prov- idence, RI, 2010, pp. 395–467. [13] G. Benfatto, M. Cassandro, G. Gallavotti, F. Nicolò, E. Olivieri, E. Presutti, and E. Scacciatelli, Some probabilistic techniques in field theory, Comm. Math. Phys. 59 (1978), no. 2, 143–166. [14] G. Benfatto and G. Gallavotti, Renormalization group, Physics Notes, vol. 1, Princeton University Press, Princeton, NJ, 1995, Princeton Paperbacks. [15] A. Bovier, G. Felder, and J. Fröhlich, On the critical properties of the Ed- wards and the self-avoiding walk model of polymer chains, Nuclear Phys. B 230 (1984), no. 1, FS10, 119–147. [16] D. Brydges, J. Dimock, and T.R. Hurd, The short distance behavior of (φ4)3, Comm. Math. Phys. 172 (1995), no. 1, 143–186. [17] D. Brydges, S.N. Evans, and J.Z. Imbrie, Self-avoiding walk on a hierarchi- cal lattice in four dimensions, Ann. Probab. 20 (1992), no. 1, 82–124. [18] D. Brydges, J. Fröhlich, and T. Spencer, The random walk representation of classical spin systems and correlation inequalities, Comm. Math. Phys. 83 (1982), no. 1, 123–150. [19] D. Brydges and G. Slade, Renormalisation group analysis of weakly self- avoiding walk in dimensions four and higher, Proceedings of the Interna- tional Congress of Mathematicians. Volume IV (New Delhi), Hindustan Book Agency, 2010, pp. 2232–2257. [20] D. Brydges and T. Spencer, Self-avoiding walk in 5 or more dimensions, Comm. Math. Phys. 97 (1985), no. 1-2, 125–148. [21] D. Brydges and A. Talarczyk, Finite range decompositions of positive- definite functions, J. Funct. Anal. 236 (2006), no. 2, 682–711. [22] D. Brydges and H.-T. Yau, Grad φ perturbations of massless Gaussian fields, Comm. Math. Phys. 129 (1990), no. 2, 351–392. 106 Bibliography [23] D.C. Brydges, A short course on cluster expansions, Phénomènes critiques, systèmes aléatoires, théories de jauge, Part I, II (Les Houches, 1984), North- Holland, Amsterdam, 1986, pp. 129–183. [24] , Functional integrals and their applications, Troisieme Cycle de la Physique en Suisse Romande, 1992, Notes by Roberto Fernández. [25] , Lectures on the renormalisation group, Statistical mechanics, IAS/Park City Math. Ser., vol. 16, Amer. Math. Soc., Providence, RI, 2009, pp. 7–93. [26] D.C. Brydges, A. Dahlqvist, and G. Slade, The strong interaction limit of continuous-time weakly self-avoiding walk., Berlin: Springer, 2012 (En- glish). [27] D.C. Brydges, G. Guadagni, and P.K. Mitter, Finite range decomposition of Gaussian processes, J. Statist. Phys. 115 (2004), no. 1-2, 415–449. [28] D.C. Brydges and J. Imbrie, End-to-end distance from the Green’s func- tion for a hierarchical self-avoiding walk in four dimensions, Comm. Math. Phys. 239 (2003), no. 3, 523–547. [29] D.C. Brydges and J.Z. Imbrie, Green’s function for a hierarchical self- avoiding walk in four dimensions, Comm. Math. Phys. 239 (2003), no. 3, 549–584. [30] D.C. Brydges, J.Z. Imbrie, and G. Slade, Functional integral representations for self-avoiding walk, Probab. Surv. 6 (2009), 34–61. [31] D.C. Brydges and P.K. Mitter, On the convergence to the continuum of finite range lattice covariances, J. Statist. Phys. 147 (2012), no. 4, 716–727. [32] D.C. Brydges, P.K. Mitter, and B. 
Scoppola, Critical (Φ4)3,ǫ , Comm. Math. Phys. 240 (2003), no. 1-2, 281–327. [33] D.C. Brydges and I. Muñoz Maya, An application of Berezin integration to large deviations, J. Theoret. Probab. 4 (1991), no. 2, 371–389. [34] D.C. Brydges and G. Slade, A renormalisation group method. I. Gaussian integration and normed algebras, In preparation. [35] , A renormalisation group method. II. Approximation by local poly- nomials, In preparation. 107 Bibliography [36] , A renormalisation group method. IV. Nonperturbative analysis of weakly self-avoiding walk, In preparation. [37] , A renormalisation group method. V. A single renormalisation group step, In preparation. [38] , Weakly self-avoiding walk in dimensions four and higher: a renor- malisation group analysis, In preparation. [39] T.K. Carne, A transmutation formula for Markov chains, Bull. Sci. Math. (2) 109 (1985), no. 4, 399–405. [40] Z.-Q. Chen and M. Fukushima, Symmetric Markov processes, time change, and boundary theory, London Mathematical Society Monographs Series, vol. 35, Princeton University Press, Princeton, NJ, 2012. [41] N. Clisby, R. Liang, and G. Slade, Self-avoiding walk enumeration via the lace expansion, J. Phys. A 40 (2007), no. 36, 10973–11017. [42] P.-G. de Gennes, Exponents for the excluded volume problem as derived by the wilson method, Physics Letters A 38 (1972), no. 5, 339–340. [43] F. den Hollander, Random polymers, Lecture Notes in Mathematics, vol. 1974, Springer-Verlag, Berlin, 2009, Lectures from the 37th Probability Summer School held in Saint-Flour, 2007. [44] J. Dimock, Infinite volume limit for the dipole gas, J. Stat. Phys. 135 (2009), no. 3, 393–427. [45] J. Dimock and T.R. Hurd, A renormalization group analysis of the Koster- litz-Thouless phase, Comm. Math. Phys. 137 (1991), no. 2, 263–287. [46] , A renormalization group analysis of correlation functions for the dipole gas, J. Statist. Phys. 66 (1992), no. 5-6, 1277–1318. [47] , A renormalization group analysis of infrared QED, J. Math. Phys. 33 (1992), no. 2, 814–821. [48] , Construction of the two-dimensional sine-Gordon model for β < 8pi, Comm. Math. Phys. 156 (1993), no. 3, 547–580. [49] , Sine-Gordon revisited, Ann. Henri Poincaré 1 (2000), no. 3, 499– 541. 108 Bibliography [50] B. Duplantier, Critical amplitudes of Edwards’ continuous polymer chain model, J. Physique 47 (1986), no. 11, 1865–1884. [51] E.B. Dynkin, Gaussian random fields and Gaussian evolutions, Theory and application of random fields (Bangalore, 1982), Lecture Notes in Control and Inform. Sci., vol. 49, Springer, Berlin, 1983, pp. 28–39. [52] , Markov processes as a tool in field theory, J. Funct. Anal. 50 (1983), no. 2, 167–187. [53] , Gaussian and non-Gaussian random fields associated with Markov processes, J. Funct. Anal. 55 (1984), no. 3, 344–376. [54] , Polynomials of the occupation field and related random fields, J. Funct. Anal. 58 (1984), no. 1, 20–52. [55] P. Falco, Kosterlitz-Thouless transition line for the two dimensional Cou- lomb gas, Comm. Math. Phys. 312 (2012), no. 2, 559–609. [56] C. Fefferman, Pointwise convergence of Fourier series, Ann. of Math. (2) 98 (1973), 551–571. [57] C. Fefferman and R. de la Llave, Relativistic stability of matter. I, Rev. Mat. Iberoamericana 2 (1986), no. 1-2, 119–213. [58] J. Feldman, J. Magnen, V. Rivasseau, and R. Sénéor, Construction and Borel summability of infraredΦ44 by a phase space expansion, Comm. Math. Phys. 109 (1987), no. 3, 437–480. [59] W. Feller, An introduction to probability theory and its applications. Vol. 
II., Second edition, John Wiley & Sons Inc., New York, 1971. [60] R. Fernández, J. Fröhlich, and A.D. Sokal, Random walks, critical phe- nomena, and triviality in quantum field theory, Texts and Monographs in Physics, Springer-Verlag, Berlin, 1992. [61] G.B. Folland, Real analysis, second ed., Pure and Applied Mathematics (New York), John Wiley & Sons Inc., New York, 1999, Modern techniques and their applications, A Wiley-Interscience Publication. [62] J. Fröhlich, B. Simon, and T. Spencer, Infrared bounds, phase transitions and continuous symmetry breaking, Comm. Math. Phys. 50 (1976), no. 1, 79–95. 109 Bibliography [63] M. Fukushima, Y. Oshima, and M. Takeda, Dirichlet forms and symmetric Markov processes, extended ed., de Gruyter Studies in Mathematics, vol. 19, Walter de Gruyter & Co., Berlin, 2011. [64] T. Funaki, Stochastic interface models, Lectures on probability theory and statistics, Lecture Notes in Math., vol. 1869, Springer, Berlin, 2005, pp. 103–274. [65] K. Gaw ‘ edzki and A. Kupiainen, A rigorous block spin approach to massless lattice theories, Comm. Math. Phys. 77 (1980), no. 1, 31–64. [66] , Massless lattice ϕ44 theory: rigorous control of a renormalizable asymptotically free model, Comm. Math. Phys. 99 (1985), no. 2, 197–252. [67] , Asymptotic freedom beyond perturbation theory, Phénomènes cri- tiques, systèmes aléatoires, théories de jauge, Part I, II (Les Houches, 1984), North-Holland, Amsterdam, 1986, pp. 185–292. [68] J. Glimm and A. Jaffe, Quantum physics, second ed., Springer-Verlag, New York, 1987, A functional integral point of view. [69] T. Gneiting, Radial positive definite functions generated by Euclid’s hat, J. Multivariate Anal. 69 (1999), no. 1, 88–119. [70] , Criteria of Pólya type for radial positive definite functions, Proc. Amer. Math. Soc. 129 (2001), no. 8, 2309–2318 (electronic). [71] C. Hainzl and R. Seiringer, General decomposition of radial functions on Rn and applications to N-body quantum systems, Lett. Math. Phys. 61 (2002), no. 1, 75–84. [72] T. Hara, A rigorous control of logarithmic corrections in four-dimensional φ4 spin systems. I. Trajectory of effective Hamiltonians, J. Statist. Phys. 47 (1987), no. 1-2, 57–98. [73] T. Hara and G. Slade, Critical behaviour of self-avoiding walk in five or more dimensions, Bull. Amer. Math. Soc. (N.S.) 25 (1991), no. 2, 417–423. [74] , The lace expansion for self-avoiding walk in five or more dimen- sions, Rev. Math. Phys. 4 (1992), no. 2, 235–327. [75] , Self-avoiding walk in five or more dimensions. I. The critical be- haviour, Comm. Math. Phys. 147 (1992), no. 1, 101–136. 110 Bibliography [76] T. Hara and H. Tasaki, A rigorous control of logarithmic corrections in four- dimensional φ4 spin systems. II. Critical behavior of susceptibility and cor- relation length, J. Statist. Phys. 47 (1987), no. 1-2, 99–121. [77] S. Hofmann and S. Kim, Gaussian estimates for fundamental solutions to certain parabolic systems, Publ. Mat. 48 (2004), no. 2, 481–496. [78] M.C. Irwin, On the stable manifold theorem, Bull. London Math. Soc. 2 (1970), 196–198. [79] S. Janson, Gaussian Hilbert spaces, Cambridge Tracts in Mathematics, vol. 129, Cambridge University Press, Cambridge, 1997. [80] L.P. Kadanoff, Scaling laws for Ising models near Tc , Physics 2 (1966), no. 6, 263–272. [81] S. Kim, Gaussian estimates for fundamental solutions of second order parabolic systems with time-independent coefficients, Trans. Amer. Math. Soc. 360 (2008), no. 11, 6031–6043. [82] G.F. 
Lawler, Intersections of random walks, Probability and its Applica- tions, Birkhäuser Boston Inc., Boston, MA, 1991. [83] G.F. Lawler, O. Schramm, and W. Werner, On the scaling limit of planar self-avoiding walk, Fractal geometry and applications: a jubilee of Benoît Mandelbrot, Part 2, Proc. Sympos. Pure Math., vol. 72, Amer. Math. Soc., Providence, RI, 2004, pp. 339–364. [84] P.D. Lax, Hyperbolic partial differential equations, Courant Lecture Notes in Mathematics, vol. 14, New York University Courant Institute of Math- ematical Sciences, New York, 2006, With an appendix by Cathleen S. Morawetz. [85] Y. Le Jan, Temps local et superchamp, Séminaire de Probabilités, XXI, Lec- ture Notes in Math., vol. 1247, Springer, Berlin, 1987, pp. 176–190. [86] , On the Fock space representation of functionals of the occupation field and their renormalization, J. Funct. Anal. 80 (1988), no. 1, 88–108. [87] J.M. Luttinger, The asymptotic evaluation of a class of path integrals. II, J. Math. Phys. 24 (1983), no. 8, 2070–2073. [88] N. Madras and G. Slade, The self-avoiding walk, Probability and its Appli- cations, Birkhäuser Boston Inc., Boston, MA, 1993. 111 Bibliography [89] A.J. McKane, Reformulation of n → 0 models using anticommuting scalar fields, Phys. Lett. A 76 (1980), no. 1, 22–24. [90] P.K. Mitter and B. Scoppola, Renormalization group approach to interacting polymerised manifolds, Comm. Math. Phys. 209 (2000), no. 1, 207–261. [91] J. Nash, Continuity of solutions of parabolic and elliptic equations, Amer. J. Math. 80 (1958), 931–954. [92] G. Parisi and N. Sourlas, Random magnetic fields, supersymmetry, and neg- ative dimensions, Phys. Rev. Lett. 43 (1979), 744–745. [93] , Self avoiding walk and supersymmetry, Journal de Physique Lettres 41 (1980), no. 17, 403–405. [94] G. Pólya, Remarks on characteristic functions, Proceedings of the Berkeley Symposium on Mathematical Statistics and Probability, 1945, 1946 (Berke- ley and Los Angeles), University of California Press, 1949, pp. 115–123. [95] M. Reed and B. Simon, Methods of modern mathematical physics. II. Fourier analysis, self-adjointness, Academic Press [Harcourt Brace Jo- vanovich Publishers], New York, 1975. [96] , Methods of modern mathematical physics. I, second ed., Academic Press Inc. [Harcourt Brace Jovanovich Publishers], New York, 1980, Func- tional analysis. [97] J.W. Robbin, On the existence theorem for differential equations, Proc. Amer. Math. Soc. 19 (1968), 1005–1006. [98] H. Rue and L. Held, Gaussian Markov random fields, Monographs on Statis- tics and Applied Probability, vol. 104, Chapman & Hall/CRC, Boca Raton, FL, 2005, Theory and applications. [99] D. Ruelle, Elements of differentiable dynamics and bifurcation theory, Aca- demic Press Inc., Boston, MA, 1989. [100] M. Salmhofer, Renormalization, an introduction, Texts and Monographs in Physics, Springer-Verlag, Berlin, 1999. [101] S. Sheffield, Gaussian free fields for mathematicians, Probab. Theory Re- lated Fields 139 (2007), no. 3-4, 521–541. 112 [102] M. Shub, Global stability of dynamical systems, Springer-Verlag, New York, 1987, With the collaboration of Albert Fathi and Rémi Langevin, Translated from the French by Joseph Christy. [103] A. Sikora, Riesz transform, Gaussian bounds and the method of wave equa- tion, Math. Z. 247 (2004), no. 3, 643–662. [104] B. Simon, The P(φ)2 Euclidean (quantum) field theory, Princeton University Press, Princeton, N.J., 1974, Princeton Series in Physics. 
[105] , Functional integration and quantum physics, Pure and Applied Mathematics, vol. 86, Academic Press Inc. [Harcourt Brace Jovanovich Pub- lishers], New York, 1979. [106] G. Slade, The self-avoiding walk, Lecture notes for course given at the In- stitute Henri Poincaré, 2009. [107] K. Symanzik, Euclidean quantum field theory, Local Quantum Field Theory (New York) (R. Jost, ed.), Academic Press, 1969. [108] A.-S. Sznitman, Topics in occupation times and Gaussian free fields, Zurich Lectures in Advanced Mathematics, European Mathematical Soci- ety (EMS), Zürich, 2012. [109] D. Ueltschi, A self-avoiding walk with attractive interactions, Probab. The- ory Related Fields 124 (2002), no. 2, 189–203. [110] R. van der Hofstad and A. Klenke, Self-attractive random polymers, Ann. Appl. Probab. 11 (2001), no. 4, 1079–1115. [111] R. van der Hofstad, A. Klenke, and W. König, The critical attractive random polymer in dimension one, J. Statist. Phys. 106 (2002), no. 3-4, 477–520. [112] N.T. Varopoulos, Long range estimates for Markov chains, Bull. Sci. Math. (2) 109 (1985), no. 3, 225–252. [113] K.G. Wilson and J.B. Kogut, The renormalization group and the ε expan- sion, Physics Reports 12 (1974), no. 2, 75–200. 113 Appendix A Perturbation theory and coordinates of the renormalization group In this appendix, the second-order part of the renormalization group map for the weakly self-avoiding walk model is considered, i.e., the map ϕ̄ of Section 1.4.5. The map ϕ̄ is defined in terms of a map ϕpt that arises from formal perturbation theory, but does not satisfy the condition (3.1) imposed on the map ϕ̄ of Chapter 3 itself. The remedy to this issue is an (explicit) coordinate change, exhibited in this appendix, that transforms ϕpt into a map ϕ̄ to which Chapter 3 can be applied. The maps are defined in terms of the decomposition of the Green function of Chapter 2. This provides an explicit connection between Chapters 2 and 3. A.1 Flow of coupling constants Let C = C1 + C2 + · · · be a positive definite decomposition of the Green function, and use the convenient short-hand notation, with j fixed, C = Cj , w = w j = j∑ l=1 Cl . (A.1) By translation-invariance, we can identify C and w with functions of one variable, for example, Cx = C0x . Let Vx = gτ2x + ντx + zτ∆,x (A.2) be the (local) interaction polynomial for the weakly self-avoiding walk model. (For the definitions of τ and τ∆, see (1.43) and (1.50).) In [10], a new local interaction polynomial Vpt,x is defined, in terms of V , C, and w, describing the effect of (for- mal) second-order perturbation theory. The details of the specification of Vpt are not important for the current discussion, so we only state the result: Vpt is essen- tially of the same form as (A.2) with coefficients gpt , νpt , zpt given by polynomials 114 A.2. Bounds on the coefficients of degree two in g, ν, z. To express the coefficients of the polynomials, it is con- venient to introduce the following abbreviations: for a function f = f (ν,w), set δ[ f ] = f (ν + 2C0g,w + C) − f (ν,w). (A.3) Moreover, for a function q : Zd → R, set (∆q)x = 12 ∑ e∈Zd :|e |1=1 (qx+e − qx ), (A.4) (∇q)2x = 12 ∑ e∈Zd :|e |1=1 (qx+e − qx )2 , (A.5) and q(n) = ∑ x qnx . (A.6) All functions q below arise in terms of the covariance decomposition, e.g., q = w, and satisfy: ∑ x qx xi = 0, ∑ x qx xi x j = q(∗∗)δi j (i, j = 1, . . . , d). 
(A.7) Then the coefficients are given by:  gpt = g − 8g2δ[w(2)] − 4gδ[νw(1)], νpt = ν + 2C0g − 4g2 ( δ[w(3)] − 3w(2)C0 ) − 2g(ν + 2C0g)δ[w(2)] − δ[ν2w(1)] + 2g(z + y)δ[(w∆w)(1)] + 8gνw(1)C0 , zpt = z − 2g2δ[(w3)(∗∗)] − 12δ[ν2w(∗∗)] − 2zδ[νw(1)]. (A.8) A.2 Bounds on the coefficients From now on, assume that the covariance decomposition C = ∑∞j=1 Cj is given by [Cj ]x =  ∫ 1 2 L 0 φ∗t (x) dt t ( j = 1) ∫ 1 2 L j 1 2 L j−1 φ∗t (x) dt t ( j > 1) (A.9) 115 A.2. Bounds on the coefficients where φ∗t is as given in Example 2.1.3. In particular, (A.9) implies the finite range property [Cj ]x = 0 if d(x , 0) > 12 L j (A.10) and the bounds |[∇αCj ]x | ≤ O(L−(d−2+|α |1)( j−1)). (A.11) Natural estimates on the coefficients in (A.8) are given in terms of the variable µ = L2 jν instead of ν and µpt = L2( j+1)νpt. Let ϕpt(g, z, µ) = (gpt , zpt , µpt). Proposition A.2.1. The coefficients of the polynomials ϕpt are bounded by O((1+ m2L2 j )−k ) for any k ∈ R and continuous in m2 ∈ [0, δ) for some δ > 0. Proof. The proof uses (A.10)–(A.11) and is given in reference [10].  The previous result is similar to Assumption (A2) of Chapter 3. (We will show below that O((1 + m2L2 j )−k ) can be bounded by O( χ j ).) However, the map ϕ̄ of Chapter 3 is assumed to be triangular which ϕpt is not. This is will be addressed in the next subsection. In addition, for the applicability of the result of Theorem 3.1.4, a positive lower bound on the coefficient of the g2-term in the g-equation is crucial to satisfy assumption (A1). This is a consequence of Lemma A.2.2 below, in which we verify that the sequence of coefficients has a positive limit if m2 = 0. Lemma A.2.2. Let d = 4, m2 = 0. Then there is β∞ > 0 such that β j := 8δ j [w(2)] = β∞ + O(L− j ). (A.12) Remark A.2.3. The constant β∞ can be determined exactly: β∞ = log(L) pi2 . (A.13) Proof of Lemma A.2.2. Denote the covariance decomposition by Cj (x), x ∈ Z4. By (2.36), there is c0 ∈ Cc (R4) such that with cj (x) = L−2 jc0(L− j x), Cj (x) = cj (x) + O(L−3 j ). (A.14) Let us first verify (Cj ,Cj+l ) − 〈c0 , cl 〉 = O(L− jL−2l ) (A.15) where we use the notation (F,G) = ∑x∈Z4 F (x)G(x) whenever F,G : Z4 → R and 〈 f , g〉 = ∫ R4 f g dx for f , g : R4 → R. Let Rj = Cj − cj . Then: (Cj ,Cj+l ) = (cj , cj+l ) + (cj , Rj+l ) + (cj+l , Rj ) + (Rj , Rj+l ). (A.16) 116 A.2. Bounds on the coefficients Riemann sum approximation shows (cj , cj+l ) − 〈c0 , cl 〉 = L−4 j ∑ y∈L− jZd c(y)cl (y) − ∫ Rd c(y)cl (y) dy = O(L− j )‖∇(ccl )‖L∞ = O(L−2l− j ). (A.17) The remaining terms are easily bounded using |supp(Cj ) |, |supp(Rj ) | = O(L4 j ): (cj , Rj+l ) ≤ O(L4 j )‖cj ‖L∞ (Z4) ‖Rj+l ‖L∞ (Z4) ≤ O(L− jL−3l ), (A.18) (cj+l , Rj ) ≤ O(L4 j )‖cj+l ‖L∞ (Z4) ‖Rj ‖L∞ (Z4) ≤ O(L− jL−2l ), (A.19) (Rj , Rj+l ) ≤ O(L4 j )‖Rj ‖L∞ (Z4) ‖Rj+l ‖L∞ (Z4) ≤ O(L−2 jL−3l ), (A.20) and (A.15) follows. From this we can now deduce: j∑ k=1 (Ck ,Cj+1) = j∑ k=1 〈c0 , cj+1−k 〉 + j∑ k=1 O(L−k L−2( j−k )) = j∑ k=1 〈c0 , ck 〉 + O(L− j ), (A.21) (Cj+1,Cj+1) = 〈c0 , c0〉 + O(L− j ), (A.22) and thus, using 〈c0 , ck 〉 = 〈c0 , c−k 〉, w (2) j+1 − w (2) j = 2(w j ,Cj+1) + (Cj+1 ,Cj+1) (A.23) = j∑ k=− j 〈c0 , ck 〉 + O(L− j ). (A.24) Note that with ‖c−k ‖L∞ ≤ L2k ‖c0‖L∞ and supp(c−k ) ⊂ BCL−k , ∞∑ k= j+1 |〈c0 , ck 〉| = ∞∑ k= j+1 |〈c0 , c−k 〉| ≤ ‖c0‖L∞ ∞∑ k= j+1 L2k ∫ B CL−k |c0(x) | dx ≤ ‖c0‖2L∞ ∞∑ k= j+1 O(L−2k ) ≤ O(L−2 j ). (A.25) Thus, with β∞ = 8 ∑∞ k=−∞〈c0 , ck 〉, we have obtained 8(w(2) j+1 − w (2) j ) = β∞ + O(L− j ). 
Proof of Remark A.2.3. By (A.21), it follows that
\[ \beta_\infty = 8\langle c_0, v \rangle \quad \text{with} \quad v = \sum_{k \in \mathbb{Z}} c_k. \tag{A.27} \]
The Fourier transforms of $c_0$ and $v$ are
\[ \hat{c}_0(\xi) = \frac{1}{|\xi|^2} \int_{L^{-1}|\xi|}^{|\xi|} \rho(t) \, dt, \qquad \hat{v}(\xi) = \frac{1}{|\xi|^2}, \tag{A.28} \]
where $\rho$ is a non-negative function with $\int_0^\infty \rho \, dt = 1$. Observe that the claim for $\hat{v}$ follows from the claim for $\hat{c}_0$; the latter claim is verified at the end of the proof. (A.28) implies, by Plancherel's theorem, radial symmetry, and Fubini's theorem,
\[ \langle c_0, v \rangle = \frac{1}{(2\pi)^4} \int_{\mathbb{R}^4} |\xi|^{-4} \left( \int_{L^{-1}|\xi|}^{|\xi|} \rho(t) \, dt \right) d\xi = \frac{\omega_3}{(2\pi)^4} \int_0^\infty \left( \int_{L^{-1}r}^{r} \rho(t) \, dt \right) \frac{dr}{r} = \frac{\omega_3}{(2\pi)^4} \int_0^\infty \left( \int_t^{Lt} \frac{dr}{r} \right) \rho(t) \, dt \tag{A.29} \]
where $\omega_3 = 2\pi^2$ is the surface measure of the 3-sphere ($\subset \mathbb{R}^4$). The inner integral in the last expression equals $\log L$. Thus, with $\int_0^\infty \rho \, dt = 1$,
\[ \beta_\infty = \frac{8\omega_3}{(2\pi)^4} \log L = \frac{\log L}{\pi^2} \tag{A.30} \]
as claimed.

To verify (A.28), use that by (2.36)–(2.37) there is $k > 0$ such that
\[ \phi^*_t(x) = (t/k)^{-(d-2)} \bar{\phi}(kx/t) + O\big(t^{-(d-2+1)}\big) \tag{A.31} \]
where, denoting the Fourier transform of $\bar{\phi}$ by $\tilde{\phi}$ (see (2.118), (2.78)),
\[ \tilde{\phi}(\xi) = \int_0^\infty t^2 \varphi(|\xi| t) \, \frac{dt}{t}, \qquad \int_0^\infty t^2 \varphi(t) \, \frac{dt}{t} = 1. \tag{A.32} \]
In particular, the function $c_0$ in (A.14) is given more explicitly by
\[ \hat{c}_0(\xi) = \frac{1}{k^2} \int_{\frac{1}{2}L^{-1}}^{\frac{1}{2}} t^2 \varphi\!\left( \frac{|\xi| t}{k} \right) \frac{dt}{t} = \frac{1}{|\xi|^2} \int_{L^{-1}|\xi|}^{|\xi|} \rho(t) \, dt \tag{A.33} \]
as claimed, where $\rho$ is given by
\[ \rho(t) = \left( \frac{t}{2k} \right)^2 \varphi\!\left( \frac{t}{2k} \right) \frac{1}{t}. \tag{A.34} \]
This completes the proof. □

A.3 Transformation

As discussed, the map ϕ_pt(g, z, μ) = (g_pt, z_pt, μ_pt) does not have the right form for the result of Chapter 3 to be applied. In Proposition A.3.1, we show that the coordinates can be brought into the form expected in Chapter 3 by a simple transformation.

Proposition A.3.1. Define ϕ̄ : $\mathbb{R}^3 \to \mathbb{R}^3$ by $(\bar{g}, \bar{z}, \bar{\mu}) = \bar{\varphi}(g, z, \mu)$, with $\bar{\mu} = L^{2(j+1)}\bar{\nu}$, $\mu = L^{2j}\nu$, and
\[ \bar{g} = g - 8g^2\delta[w^{(2)}], \tag{A.35} \]
\[ \bar{z} = z - 2g^2\delta[(w^3)^{(**)}], \tag{A.36} \]
\[ \bar{\nu} = \nu + 2C_{0,0}g - 4g^2\big(\delta[w^{(3)}] - 3w^{(2)}C_{0,0} + C_{0,0}\delta[w^{(2)}]\big) - 2g\nu\delta[w^{(2)}] + 2gz\delta[(w\Delta w)^{(1)}]. \tag{A.37} \]
Then the coefficients of the polynomials ϕ̄ are bounded by $O((1 + m^2 L^{2j})^{-k})$ for arbitrary $k$ and $m^2 \in [0, \delta)$.

Define $T : \mathbb{R}^3 \to \mathbb{R}^3$ by $T(g, z, \mu) = (g_T, z_T, \mu_T)$, with $\mu = L^{2j}\nu$, $\mu_T = L^{2j}\nu_T$, where
\[ g_T = g + 4g\nu w^{(1)}, \tag{A.38} \]
\[ z_T = z + 2z\nu w^{(1)} + \tfrac{1}{2}\nu^2 w^{(**)}, \tag{A.39} \]
\[ \nu_T = \nu + \nu^2 w^{(1)}. \tag{A.40} \]
Then $T(V) = V + O(|V|^2)$. Let $T_+ = T_{j+1}$. There exists a ball $B \subset \mathbb{R}^3$, independent of $j$ and $m^2 \in [0, \delta)$, such that, on $B$,
\[ T_+ \circ \varphi_{\mathrm{pt}} \circ T^{-1} = \bar{\varphi} + \rho_{\mathrm{pt}} \tag{A.41} \]
where $\rho_{\mathrm{pt}}$ is an analytic function on $B$ with $\rho_{\mathrm{pt}}(g, z, \mu) = O\big((1 + m^2 L^{2j})^{-k}(|g| + |z| + |\mu|)^3\big)$, uniformly in $j$ and $m^2 \in [0, \delta)$, for any $k$.

Remark A.3.2. The transformation $T$ is simple and explicit, but we believe that its existence may have a deeper origin that we have not unravelled. Formally, i.e., without consideration of the formal third-order error, different covariance decompositions induce dynamical systems like (1.92) whose three-dimensional parts can be of slightly different form. Some of the monomials that appear in the polynomials ϕ̄_j are essentially independent of the decomposition. On the other hand, some decompositions of the Green function have the special property that
\[ \sum_x [C_j]_x = 0, \tag{A.42} \]
which is not true for the finite range decomposition discussed in Chapter 2. It can be seen that the terms in (1.43) involving $w^{(1)}$ would then vanish with such a decomposition. Is it possible that the existence of such a transformation expresses an invariance property of the dynamical system under coordinates induced by different covariance decompositions?
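The structural point of Proposition A.3.1 can be checked symbolically. The following sketch (an illustration, not part of the proof) treats the scale-dependent quantities $\delta[w^{(2)}]$, $\delta[(w^3)^{(**)}]$, $\delta[w^{(3)}]$, $\delta[(w\Delta w)^{(1)}]$, $C_{0,0}$ in (A.35)–(A.37) as free constants and verifies that ϕ̄ is triangular: $\bar{g}$ depends only on $g$, $\bar{z}$ only on $(g, z)$, and $\bar{\nu}$ possibly on all of $(g, z, \nu)$. This is the property that ϕ_pt itself lacks, since several of its coefficients involve $\nu$ through terms such as $\delta[\nu w^{(1)}]$.

```python
# Sketch (illustration only): symbolic check that the map phi-bar of
# (A.35)-(A.37) is triangular in (g, z, nu) once the scale-dependent
# quantities are treated as free constants.
import sympy as sp

g, z, nu = sp.symbols('g z nu')
# placeholders for delta[w^(2)], delta[(w^3)^(**)], delta[w^(3)],
# delta[(w Delta w)^(1)], w^(2), C_{0,0} -- all treated as constants here
a, b, c, e, w2, C00 = sp.symbols('a b c e w2 C00')

g_bar = g - 8 * g**2 * a                                        # (A.35)
z_bar = z - 2 * g**2 * b                                        # (A.36)
nu_bar = (nu + 2 * C00 * g
          - 4 * g**2 * (c - 3 * w2 * C00 + C00 * a)
          - 2 * g * nu * a
          + 2 * g * z * e)                                      # (A.37)

# triangularity: g_bar depends only on g, z_bar only on (g, z),
# nu_bar may depend on all of (g, z, nu)
assert sp.diff(g_bar, z) == 0 and sp.diff(g_bar, nu) == 0
assert sp.diff(z_bar, nu) == 0
print("phi-bar is triangular in the ordering (g, z, nu)")
```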
Note that the map ϕ̄ has the form assumed for ϕ̄ in Chapter 3. The next corollary illustrates how the result of Chapter 3 is used in the study of the weakly self-avoiding walk, except that in the real application the error coordinate is non-trivial.

Corollary A.3.3. Fix any $\Omega > 1$. The maps ϕ̄ then satisfy Assumptions (A1)–(A2) of Chapter 3. Moreover, Assumption (A3) can be satisfied with $\rho = \rho_{\mathrm{pt}}$ and $\psi = 0$.

Sketch of proof. (i) Set $j_m = \lfloor \log_L m^{-1} \rfloor$. We first show that for any $c < \log L/\pi^2$, there is $n < \infty$ such that the number of $j \leq j_m$ with $\beta_j < c$ is bounded by $n$, uniformly in $m^2 \in [0, \delta)$. To prove this, we first note that (A.12)–(A.13) imply that, if $m^2 = 0$, then for every $c$ with $c + \varepsilon < \log L/\pi^2$, there is $n_0$ such that the number of $j$ with $\beta_j < c + \varepsilon$ is bounded by $n_0$. We now prove the claim for $m^2 > 0$. It can be shown using Example 2.1.3 that there are constants $c'$ and $q$, independent of $L$, such that
\[ \left| \frac{\partial}{\partial m^2} \beta_j(m^2) \right| \leq c' L^q L^{2j}, \tag{A.43} \]
but we omit the proof. This implies that, for $j \leq j_m - q - p$ with $p$ large enough,
\[ |\beta_j(0) - \beta_j(m^2)| \leq c' L^q L^{2j} m^2 \leq c' L^{-p} \leq \varepsilon. \tag{A.44} \]
It follows that the number of $j \leq j_m$ such that $\beta_j < c$ can be bounded by $n_0 + p + q$.

(ii) We now verify Assumptions (A1)–(A2) of Chapter 3 for ϕ̄. Let $\Omega > 1$,
\[ j_\Omega = \inf\{k \geq 0 : |\beta_j| \leq \Omega^{-(j-k)} \|\beta\|_\infty \text{ for all } j\}, \quad \text{and} \quad \chi_j = \Omega^{-(j - j_\Omega)_+}. \tag{A.45} \]
Let $k$ be such that $L^{2k} \geq \Omega$. Then
\[ (1 + m^2 L^{2j})^{-k} \leq L^{-2k(j - j_m)_+} \leq \Omega^{-(j - j_m)_+}. \tag{A.46} \]
Part (i) implies that $\|\beta\|_\infty > c > 0$ uniformly in $m^2 \in (0, \delta)$. By Proposition A.3.1 and (A.46), there is a constant $C$ such that
\[ |\beta_j| \leq C \Omega^{-(j - j_m)_+} \leq \frac{C}{c} \Omega^{-(j - j_m)_+} \|\beta\|_\infty \leq \Omega^{-(j - j_\Omega)_+} \|\beta\|_\infty \tag{A.47} \]
with $j_\Omega \leq j_m + \log_\Omega C - \log_\Omega c$. In particular, the number of $j \leq j_\Omega$ with $\beta_j < c$ is bounded by $n_\Omega = n + \log_\Omega C - \log_\Omega c$, where $n$ is as in (i), uniformly in $m^2 \in [0, \delta)$. This proves Assumption (A1), and Assumption (A2) is then a consequence of Proposition A.3.1 with $(1 + m^2 L^{2j})^{-k} = O(\chi_j)$. □

Sketch of proof of Proposition A.3.1. The bounds on the coefficients of the map ϕ̄, given in (A.35)–(A.37), follow from Proposition A.2.1 together with $w^{(**)} = O(L^{4j})$ and $w^{(1)} = O(L^{2j})$. The last two bounds are straightforward consequences of the properties of the covariance decomposition that $|[C_j]_x| \leq O(L^{-2j})$ and $[C_j]_x = 0$ for $|x| \geq cL^j$. Indeed,
\[ w^{(1)}_j = \sum_{l=1}^{j} \sum_x [C_l]_x = \sum_{l=1}^{j} O(L^{2l}) = O(L^{2j}), \tag{A.48} \]
\[ w^{(**)}_j = \sum_{l=1}^{j} \sum_x |x|^2 [C_l]_x = \sum_{l=1}^{j} O(L^{4l}) = O(L^{4j}). \tag{A.49} \]
These bounds similarly imply $T = \mathrm{id} + O((|g| + |z| + |\mu|)^2)$, uniformly in $j$. Let $w_+ = w + C$ and $\nu_+ = \nu + 2C_0 g$. Then (A.8) can be written as
\[ g_{\mathrm{pt}} + 4g\nu_+ w^{(1)}_+ = (g + 4g\nu w^{(1)}) - 8\delta[w^{(2)}]g^2, \tag{A.50} \]
\[ \nu_{\mathrm{pt}} + \nu_+^2 w^{(1)}_+ = (\nu + \nu^2 w^{(1)}) + 2C_{0,0}(g + 4g\nu w^{(1)}) - 4g^2\big(\delta[w^{(3)}] - 3w^{(2)}C_{0,0}\big) - 2g(\nu + 2C_0 g)\delta[w^{(2)}] + 2g(z + y)\delta[(w\Delta w)^{(1)}], \tag{A.51} \]
\[ z_{\mathrm{pt}} + 2z\nu_+ w^{(1)}_+ + \tfrac{1}{2}\nu_+^2 w^{(**)}_+ = (z + 2z\nu w^{(1)} + \tfrac{1}{2}\nu^2 w^{(**)}) - 2g^2\delta[(w^3)^{(**)}]. \tag{A.52} \]
Expressing $\nu$ and $\nu_{\mathrm{pt}}$ as $\nu = L^{-2j}\mu$ and $\nu_{\mathrm{pt}} = L^{-2(j+1)}\mu_{\mathrm{pt}}$, the right- and left-hand sides of (A.50)–(A.52) equal $\bar{\varphi} \circ T(g, z, \mu) + O((|g| + |z| + |\mu|)^3)$ and $T_+ \circ \varphi_{\mathrm{pt}}(g, z, \mu) + O((|g| + |z| + |\mu|)^3)$, respectively, with both bounds uniform in $j$. This and $T_+((g, z, \mu) + r) = T_+(g, z, \mu) + O(r)$ imply the claim. □
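The scaling in (A.48)–(A.49) can be illustrated numerically with a toy finite-range kernel that mimics the size and range of the decomposition of Chapter 2. The kernel below is hypothetical (it is not the actual $\phi^*_t$ decomposition); it only reproduces $|[C_l]_x| = O(L^{-2l})$ and range $\tfrac{1}{2}L^l$ on $\mathbb{Z}^4$.

```python
# Sketch (illustration only): check the growth rates (A.48)-(A.49) for a
# hypothetical finite-range kernel on Z^4 that mimics the size and range of
# the decomposition of Chapter 2, |[C_l]_x| = O(L^{-2l}) with range L^l/2.
# This toy kernel is NOT the actual decomposition phi*_t.
import numpy as np

L = 2

def toy_C(l):
    """Toy kernel [C_l]_x and |x|^2 on a cube containing its support."""
    n = L**l // 2 + 1
    axes = np.arange(-n, n + 1, dtype=float)
    X = np.meshgrid(axes, axes, axes, axes, indexing='ij', sparse=True)
    r2 = sum(x**2 for x in X)
    profile = np.maximum(0.0, 1.0 - np.sqrt(r2) / (L**l / 2.0))
    return L ** (-2.0 * l) * profile, r2

w1, wss = 0.0, 0.0   # w^{(1)}_j and w^{(**)}_j, accumulated over scales
for j in range(1, 6):
    C, r2 = toy_C(j)
    w1 += C.sum()                 # sum_x [C_j]_x
    wss += (r2 * C).sum() / 4.0   # (1/d) sum_x |x|^2 [C_j]_x  with d = 4
    print(f"j={j}   w1/L^(2j) = {w1 / L**(2*j):.4f}   w**/L^(4j) = {wss / L**(4*j):.4f}")
# Both rescaled quantities remain of order one, consistent with
# w^{(1)}_j = O(L^{2j}) and w^{(**)}_j = O(L^{4j}).
```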
