Open Collections — UBC Theses and Dissertations
Anderson localization with self-avoiding walk representation Suzuki, Fumika 2012

Anderson Localization with Self-Avoiding Walk Representation

by

Fumika Suzuki

B.Sc., The University of Leeds, 2010

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The Faculty of Graduate Studies (Physics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

August 2012

© Fumika Suzuki 2012

Abstract

The Green's function contains much information about physical systems. Mathematically, the fractional moment method (FMM) developed by Aizenman and Molchanov connects the Green's function and the transport of electrons in the Anderson model. Recently, it has been discovered that the Green's function on a graph can be represented using self-avoiding walks on the graph, which allows us to connect localization properties of the system with graph properties. We discuss the FMM in terms of self-avoiding walks on a general graph under a small number of assumptions.

Table of Contents

Abstract
Table of Contents
List of Figures
Acknowledgements
1 Introduction
  1.1 The Anderson Model
  1.2 Anderson Localization
  1.3 Results and Open Problems
2 Localization Properties
  2.1 Spectral Localization
  2.2 Dynamical Localization and Transport
  2.3 Spectral Localization and Dynamical Localization
3 Self-Avoiding Walk Representation
  3.1 SAW Representation for the Green's Function
  3.2 Fractional Moment Bounds with SAW Representation
4 Fractional Moment Method
  4.1 Fractional Moment Method with SAW Representation
  4.2 Second Moment and Fractional Moment of the Green's Function
  4.3 From Fractional Moment to Dynamical Localization
5 Open Problems & Discussion
  5.1 Level Statistics Conjecture and RMT
  5.2 Recent Studies in Physics
Bibliography
Appendices
A Lemmas in Chapter 3
B Lemmas in Chapter 4

List of Figures

1.1 Shaded area represents the localized states; we expect a mobility edge and absolutely continuous spectrum [3, 23].
3.1 H and H_[v0].
3.2 The difference between three kinds of self-avoiding walks.
5.1 The dashed lines describe random shortcut edges which represent the off-diagonal disorder [11].

Acknowledgements

This thesis was completed under the supervision of Richard Froese. I am deeply grateful to Richard Froese for his support, guidance, ideas, patience and friendship during my master's studies. I would also like to thank P. C. E. Stamp for his support and his frank but kind advice on the topic from a physical point of view, which was very helpful for my study. It is a pleasure to thank Joel Feldman for reading this thesis and giving me detailed comments and suggestions.

Chapter 1

Introduction

The study of the conductivity properties of materials is one of the most important research areas in condensed matter physics. In 1958, the physicist P. W.
Anderson [1] discussed the phenomenon now called Anderson localization, in which materials such as alloys and amorphous media become insulators if their atomic structure is largely disordered. This is in contrast with the behaviour of ideal crystals, which have an ordered structure and are always conductors. Many convincing arguments and experiments on this topic have been carried out by physicists.

Rigorous studies of Anderson localization in the mathematical context started in the 1970s. Several methods have been invented to prove Anderson localization, and two of them provide proofs in arbitrary dimension, not only in one dimension: multiscale analysis (MSA) by Fröhlich and Spencer (1983) [10], and the fractional moment method (FMM) by Aizenman and Molchanov (1993) [2]. Although MSA can handle more situations of the Anderson model than FMM can, FMM is a simpler method and gives stronger results on dynamical localization. In this thesis, we deal with Anderson localization using FMM.

In chapter 1, we introduce the Anderson model on a graph G, which we will study throughout this thesis, and survey the known results. In chapter 2, we discuss spectral localization and dynamical localization; we also prove that dynamical localization implies localization of the position of an electron, which is more directly connected to the physical meaning of the absence of electron transport. In chapter 3, we show that the fractional moment bound can be represented by self-avoiding walks on a graph, which allows us to connect fractional moment bounds and graph properties. In chapter 4, we discuss how dynamical localization can be obtained using FMM and the self-avoiding walk (SAW) representation for the fractional moment bound. In chapter 5, we discuss open problems and the possible relation between our result and the level statistics conjecture.
1.1 The Anderson Model

In this section, we introduce the Anderson model, which describes the motion of an electron in a disordered system. In physics, it is common to model the system by a lattice Zⁿ or a graph G. In this thesis, we deal with the Anderson model on a graph G with the following assumptions.

Let G = (V, E) be a graph with vertex set V and edge set E. We assume G is a connected graph in which the number of edges between any pair of vertices is either one or zero. We write v ∼ w if an edge connects the vertices v and w. Let N(v) be the degree of a vertex v ∈ V, i.e., N(v) := #{w ∈ V : w ∼ v}. We assume that the degree of a vertex is bounded above by some constant N: N(v) ≤ N < ∞ for all v ∈ V. d(v, w) is the graph distance from v to w, i.e., the minimum number of edges on a path from v to w in G.

𝒲(v, w) is the set of self-avoiding walks (sequences of vertices) [v, v1, …, v_{d(v,w)}] with d(v, w) steps starting at v0 = v. The walks need not end at w. Furthermore, W(v, w) ⊂ 𝒲(v, w) is the set of self-avoiding walks [v, v1, …, v_{d(v,w)}] with d(v, w) steps starting at v0 = v such that v_{d(v,w)} is connected to w in the graph obtained by deleting all edges attached to [v, v1, …, v_{d(v,w)−1}].

We define a function F1 which measures the maximum number of self-avoiding walks in W(v, w) with d steps that can occur in G:

  F1(d) = max{ |W(v, w)| : d(v, w) = d },

so that |W(v, w)| ≤ F1(d(v, w)).

We write S(v, d) for the set of vertices on the sphere (or shell) of radius d about an origin v, and B(v, d) for the set of vertices in the ball of radius d about v; |S(v, d)| and |B(v, d)| denote the numbers of vertices in S(v, d) and B(v, d) respectively. We define F2(d) to be the largest possible value of |S(v, d)| as v ranges over the graph:

  F2(d) = max_v |S(v, d)|.

F2(d) is bounded by the largest possible value N(N − 1)^{d−1}.
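For a concrete graph these quantities are easy to compute. The following sketch is our own illustration, not part of the thesis: on a hypothetical 7×7 grid graph (maximum degree N = 4) it computes the sphere sizes |S(v, d)| by breadth-first search and checks them against the crude degree bound N(N − 1)^{d−1}:

```python
from collections import deque

# Hypothetical example graph: a 7x7 grid (maximum degree N = 4).
SIDE = 7

def neighbours(v):
    x, y = v
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < SIDE and 0 <= y + dy < SIDE:
            yield (x + dx, y + dy)

def sphere_sizes(origin, dmax):
    """|S(origin, d)| for d = 0..dmax, via breadth-first search."""
    dist = {origin: 0}
    queue = deque([origin])
    while queue:
        v = queue.popleft()
        for w in neighbours(v):
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return [sum(1 for r in dist.values() if r == d) for d in range(dmax + 1)]

N = 4
sizes = sphere_sizes((3, 3), 4)
print(sizes)  # → [1, 4, 8, 12, 12]
# crude degree bound: |S(v, d)| <= N * (N-1)^(d-1) for d >= 1
assert all(sizes[d] <= N * (N - 1) ** (d - 1) for d in range(1, 5))
```

On the grid the spheres grow only linearly in d, far below the exponential bound; the bound is saturated only by trees.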
Disordered matter can be described by a random Schrödinger operator acting on the Hilbert space ℓ²(V),

  ℓ²(V) = { ψ : V → ℂ : Σ_{v∈V} |ψ(v)|² < ∞ },

with inner product ⟨ψ, φ⟩ = Σ_{v∈V} conj(ψ(v)) φ(v).

A random Schrödinger operator can be written as

  H = H_ω = T + λω,

where T is the kinetic energy and the random potential ω acts as the multiplication operator (ωψ)(v) = ω_v ψ(v) on ℓ²(V), with a coupling constant λ > 0. We assume the simplest case, where (ω_v)_{v∈V} is a set of independent, identically distributed (i.i.d.) real-valued random variables. Recall that i.i.d. random variables can be defined as follows.

Definition 1.1.1 (i.i.d. random variables). (ω_v)_{v∈V} are identically distributed if there exists a Borel probability measure µ on ℝ such that

  P(ω_v ∈ A) = µ(A)  for all v ∈ V and all Borel sets A ⊂ ℝ.

(ω_v)_{v∈V} are independent if

  P(ω_{v1} ∈ A1, …, ω_{vl} ∈ Al) = Π_{k=1}^{l} P(ω_{vk} ∈ Ak) = Π_{k=1}^{l} µ(Ak)

for each finite subset {v1, …, vl} of V and arbitrary Borel sets A1, …, Al ⊂ ℝ.

Note that we can use the re-scaled distribution of the i.i.d. random variables λω_v:

  P(λω_v ∈ B) = µ_λ(B) := µ(B/λ).

A large coupling constant λ ≫ 1 indicates large disorder and a small coupling constant λ ≪ 1 indicates small disorder. As λ increases, the distribution is spread out over a larger support and the potentials can take a wider range of possible random values.

We assume that the distribution µ of ω_v is absolutely continuous with density ρ, where ρ is bounded with compact support, i.e.,

  µ(B) = ∫_B ρ(u) du  for B ⊂ ℝ Borel,  ρ ∈ L^∞₀(ℝ).

Physically, ω_v represents the random electric potential created by the nuclei at the sites v ∈ V. T describes the kinetic energy; it is often called the next-neighbour hopping operator acting on ψ ∈ ℓ²(V), and T is the negative adjacency matrix of G.
Explicitly,

  (Tψ)(v) = − Σ_{w: w∼v} ψ(w),

so that

  (Hψ)(v) = (Tψ)(v) + λω_v ψ(v),  v ∈ V.

In Dirac notation, we can write

  H = − Σ_{{v,w}: v∼w} (|v⟩⟨w| + |w⟩⟨v|) + λ Σ_{v∈V} ω_v |v⟩⟨v|,

where |v⟩ = δ_v, with δ_v the Kronecker delta function, i.e., δ_v(v) = 1 and δ_v(w) = 0 for w ≠ v. {δ_v}_{v∈V} is the canonical orthonormal basis of ℓ²(V). The projection operator can be written as |v⟩⟨v| = ⟨δ_v, ·⟩ δ_v, where ⟨·,·⟩ is the usual scalar product in ℓ²(V). For a bounded operator M on ℓ²(V), we write the (v, w)-entry of its matrix as M(v, w) = ⟨v|M|w⟩. The Green's function G(v, w; z) is the kernel of the resolvent G(z) = (H − z)⁻¹ and can be written as G(v, w; z) = ⟨v|G(z)|w⟩.

T is symmetric and bounded, since there is a uniform bound N < ∞ on the vertex degree; thus T is self-adjoint. The random potential term λω : ℓ²(V) → ℓ²(V) is symmetric and bounded under the assumption that ρ has compact support. Therefore H is also bounded and self-adjoint.

The quantum mechanical motion of an electron in a disordered system can be described by the above random Schrödinger operator H; this model is called the Anderson model.

1.2 Anderson Localization

In this section, we briefly discuss the mathematical interpretation of Anderson localization. A more detailed discussion follows in section 2.3, after spectral and dynamical localization are introduced in chapter 2.

We can study the spectrum of H to determine whether an electron is in a localized state (the system is an insulator) or in a delocalized (extended) state (the system is a conductor). It is known that for any self-adjoint operator H in a Hilbert space ℋ there exists a decomposition of the Hilbert space into invariant subspaces [19]:

  ℋ = ℋ_pp ⊕ ℋ_sc ⊕ ℋ_ac,

where ℋ_pp, ℋ_sc, ℋ_ac correspond to the pure point spectrum σ_pp, the singular continuous spectrum σ_sc and the absolutely continuous spectrum σ_ac respectively,
i.e.,

  σ(H) = σ_pp(H) ∪ σ_ac(H) ∪ σ_sc(H).

In many cases, pure point spectrum indicates that an electron is localized and absolutely continuous spectrum indicates that an electron is delocalized. Some correspondences are given by the RAGE theorem below. We define the continuous subspace as ℋ_c = ℋ_ac ⊕ ℋ_sc, with the corresponding spectrum σ_c = σ_ac ∪ σ_sc. The RAGE theorem states the following [7, 14].

Theorem 1.2.1 (RAGE). Suppose H is a Schrödinger operator acting on ℋ = ℓ²(G). Then

(i) φ ∈ ℋ_pp if and only if  lim_{R→∞} sup_{t∈ℝ} ‖χ_{(|x|>R)} e^{−itH} φ‖ = 0;

(ii) ψ ∈ ℋ_c if and only if  lim_{T→∞} (1/(2T)) ∫_{−T}^{T} ‖χ_{(|x|≤R)} e^{−itH} ψ‖ dt = 0 for each fixed R,

where χ_{(|x|≤R)} and χ_{(|x|>R)} are the characteristic functions of the ball {|x| ≤ R} and of its complement respectively.

As we can see, the RAGE theorem does not distinguish ℋ_sc from ℋ_ac. See [4] for a further refinement of the absolutely continuous subspace ℋ_ac into the transient absolutely continuous subspace ℋ_tac and the recurrent absolutely continuous subspace ℋ_rac, i.e., ℋ_ac = ℋ_tac ⊕ ℋ_rac.

1.3 Results and Open Problems

There exist many studies of the Anderson model in both the physics and the mathematics communities. In this section, we survey the known results and open problems.

1. Anderson localization in Zⁿ for n = 1, 2.

In physics, it is expected that the whole spectrum is pure point for Zⁿ (n = 1, 2): a one- or two-dimensional disordered system becomes an insulator, since arbitrarily small disorder changes the total spectrum from absolutely continuous to pure point. However, the pure point spectrum is expected to be less stable for n = 2. To describe the quantum transport of an electron, we can use the expectation value of the distance the electron has moved at some later time t, with initial condition ψ ∈ ℓ²(Zⁿ):

  R_p(t) = E( sup_{t∈ℝ} ‖ |X|^p e^{−itH} χ_I(H) ψ ‖ ),

where p > 0 and χ_I(H) is the spectral projection for H onto an open interval I ⊂ ℝ.
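As a side illustration (our own, not from the thesis), the behaviour this quantity measures can be seen in a small simulation on a finite path in Z¹: without disorder the mean-square displacement of e^{−itH}δ₀ grows ballistically, while at large disorder it stays bounded. The path length, the uniform potential density and all parameter values below are hypothetical choices, and we average a few realizations rather than computing the full expectation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 121                         # path long enough that t = 20 avoids the edges

T = np.zeros((n, n))
for v in range(n - 1):          # T = negative adjacency matrix of the path
    T[v, v + 1] = T[v + 1, v] = -1.0

def mean_square_displacement(lam, t):
    """<D^2> for psi(t) = exp(-itH) delta_center, with H = T + lam * omega."""
    H = T + lam * np.diag(rng.uniform(-1.0, 1.0, n))
    evals, evecs = np.linalg.eigh(H)          # H is bounded and self-adjoint
    psi0 = np.zeros(n); psi0[n // 2] = 1.0
    psit = evecs @ (np.exp(-1j * evals * t) * (evecs.T @ psi0))
    d = np.abs(np.arange(n) - n // 2)         # graph distance from the origin
    return float(np.sum(d**2 * np.abs(psit) ** 2))

free = mean_square_displacement(0.0, 20.0)    # ballistic spreading
localized = np.mean([mean_square_displacement(8.0, 20.0) for _ in range(5)])
print(free, localized)          # the disordered wave packet stays near the origin
assert localized < free / 10
```

For λ = 0 the spread grows like t²; at λ = 8 the wave packet remains within a few sites of its starting point for all the times shown.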
Then, more refined facts are known in physics. For n = 2, we have [23]

  R₂(t) ∼ t²  for 0 ≤ t ≤ λ⁻²,
  R₂(t) ≤ Dt  for λ⁻² ≤ t ≤ e^{1/λ²},
  R₂(t) ≤ const  for t > e^{1/λ²}.

This means that an electron shows ballistic behaviour for a very short time, then its motion becomes diffusive, and finally its motion is localized after some time t > e^{1/λ²}.

Mathematically, for n = 1, Anderson localization for any disorder λ > 0 and any density ρ has been proved in [5, 16, 24]. However, there is no mathematical proof for n = 2.

2. Anderson transition in Zⁿ for n ≥ 3.

It is known that the physical properties of the Anderson model in Zⁿ for n ≥ 3 are more complicated. For large disorder λ ≫ 1, the whole spectrum is pure point. For small disorder (randomness), there is an interval of pure point spectrum near the band edges of the spectrum, and Anderson localization occurs there. It is expected that the spectrum is absolutely continuous inside the bands, and that we have delocalized (extended) states there. As the disorder increases, the pure point spectrum expands and the absolutely continuous spectrum contracts. As a result, it is believed that a phase transition between a conducting phase and an insulating phase occurs. This phase transition is called the Anderson transition, and a transition point is called a mobility edge (Figure 1.1).

A mathematical explanation of the Anderson transition and of the delocalized (extended) states remains an open problem: there is no mathematical proof of Anderson delocalization in ℓ²(Zⁿ) (n ≥ 3), nor of the existence of absolutely continuous spectrum or of a mobility edge. There are attempts to understand the transition via the level statistics conjecture, which we will discuss in chapter 5.

Nevertheless, the mathematical understanding of Anderson localization is a rich field of study, and many rigorous results have been achieved so far. For example, MSA [10] allows us to study localization, or the absence of absolutely continuous spectrum, using the Kubo formula.
The FMM [2], used in this thesis, allows us to study localization properties by looking at fractional moments of the Green's function.

Figure 1.1: Shaded area represents the localized states; we expect a mobility edge and absolutely continuous spectrum [3, 23].

Chapter 2

Localization Properties

As we discussed in section 1.2, the spectrum of H tells us about the localization properties of the Anderson model. In this chapter, we study those localization properties in more detail by introducing two kinds of localization: spectral localization and dynamical localization.

2.1 Spectral Localization

Definition 2.1.1 (Spectral localization). A random Schrödinger operator H has spectral localization in an energy interval I ⊂ ℝ if H almost surely has only pure point spectrum in I, i.e.,

  I ⊂ σ(H) and σ_c(H) ∩ I = ∅ almost surely.

Furthermore, we say H exhibits exponential spectral localization in I if it has spectral localization in I and the eigenfunctions of all eigenvalues in I decay exponentially, i.e., for almost all ω, H has a complete set of eigenfunctions (φ_{ω,n})_{n∈ℕ} in the energy interval I such that

  |φ_{ω,n}(v)| ≤ C_{ω,n} e^{−µ d(v, v_{n,ω})},

where µ > 0 and C_{ω,n} is a finite constant. The v_{n,ω} are the centres of localization.

The concept of spectral localization is often used only in mathematics, not in physics. Physicists use the concept of dynamical localization more often, since it gives direct physical intuition. In the next section, we prove that dynamical localization implies the absence of electron transport.

2.2 Dynamical Localization and Transport

Definition 2.2.1 (Dynamical localization). H exhibits dynamical localization in I if there exist constants C < ∞ and µ > 0 such that

  Σ_{y∈S(x,d)} E( sup_{t∈ℝ} |⟨δ_y, e^{−itH} χ_I(H) δ_x⟩| ) ≤ C e^{−µd}   (2.1)

for all x ∈ V, where E is the expectation with respect to the probability measure of the random variables λω_v.
χ_I is the characteristic function of I, so χ_I(H) is the orthogonal projection onto the spectral subspace of H corresponding to energies in I; i.e., we only deal with initial states with energy in I.

This is a stronger definition than the standard definition of dynamical localization in the lattice case, which requires the expectation for any x and y (with no sum over y) to decay exponentially in the distance d(x, y). In our definition, the sum of the expectation over all y ∈ S(x, d) must decay exponentially in d. For the lattice the definitions are equivalent, since |S(x, d)| grows only polynomially.

Dynamical localization gives us physical intuition. It implies that the wavefunctions, which are the solutions of the time-dependent Schrödinger equation Hψ(t) = i∂_t ψ(t), are uniformly localized in space for all times. This leads to the localization of an electron, and therefore to the absence of electron transport, under some conditions.

Proposition 2.2.2. Let x, y ∈ V. Define a distance operator D from the origin x on the graph G by Dδ_y = d(x, y)δ_y. Then dynamical localization implies that the p-th moment of the distance operator is bounded, i.e., for all p > 0 and all finitely supported initial conditions ψ ∈ ℓ²(V),

  sup_{t∈ℝ} ‖D^p e^{−itH} χ_I(H) ψ‖ < ∞  almost surely.

Proof. The proof is based on that in [25]. Firstly, we have

  ‖D^p e^{−itH} χ_I(H) ψ‖² = Σ_{y∈V} |⟨δ_y, D^p e^{−itH} χ_I(H) ψ⟩|²
    = Σ_{y∈V} d(x, y)^{2p} |⟨δ_y, e^{−itH} χ_I(H) ψ⟩|²
    = Σ_{y∈V} d(x, y)^{2p} | Σ_{z: d(x,z)≤R} ψ(z) ⟨δ_y, e^{−itH_ω} χ_I(H_ω) δ_z⟩ |²,

assuming ψ is localized in a ball of radius R, i.e., ψ = Σ_{z: d(x,z)≤R} ψ(z) δ_z. Then

  ‖D^p e^{−itH} χ_I(H) ψ‖² ≤ Σ_{y∈V} d(x, y)^{2p} Σ_{z: d(x,z)≤R} |⟨δ_y, e^{−itH} χ_I(H) δ_z⟩|² ‖ψ‖²

by the Cauchy–Schwarz inequality. Since |⟨δ_y, e^{−itH} χ_I(H) δ_z⟩| ≤ 1, we can drop the square (the expression only becomes larger).
Then

  E( sup_t ‖D^p e^{−itH} χ_I(H) ψ‖² )
    ≤ Σ_{z: d(x,z)≤R} Σ_{y∈V} d(x, y)^{2p} E( sup_t |⟨δ_y, e^{−itH} χ_I(H) δ_z⟩| ) ‖ψ‖²
    ≤ Σ_{z: d(x,z)≤R} Σ_{y∈V} (R + d(z, y))^{2p} E( sup_t |⟨δ_y, e^{−itH} χ_I(H) δ_z⟩| ) ‖ψ‖²
    ≤ Σ_{z: d(x,z)≤R} Σ_{d=0}^{∞} (R + d)^{2p} Σ_{y∈S(z,d)} E( sup_t |⟨δ_y, e^{−itH} χ_I(H) δ_z⟩| ) ‖ψ‖²
    ≤ C |B(x, R)| ‖ψ‖² Σ_{d=0}^{∞} (R + d)^{2p} e^{−µd} < ∞

by dynamical localization. The second step used the triangle inequality d(x, y) ≤ d(x, z) + d(z, y) ≤ R + d(z, y). ∎

2.3 Spectral Localization and Dynamical Localization

Dynamical localization has a more direct physical meaning, and it is also a mathematically stronger statement than spectral localization. In fact, dynamical localization implies spectral localization, as follows.

Proposition 2.3.1. Let H be a discrete Schrödinger operator on ℓ²(G). Suppose H exhibits dynamical localization in an open interval I. Then H almost surely has only pure point spectrum in I.

Proof. The proof is based on that in [25]. Write |x| = d(0, x) for the distance from some origin 0, and let P_cont(H) be the projection onto the continuous spectral subspace of H. Using the RAGE theorem [7], for every ψ ∈ ℓ²(G) we have

  ‖P_cont(H) χ_I(H) ψ‖² = lim_{R→∞} lim_{T→∞} ∫₀^T (dt/T) ‖χ_{|x|≥R} e^{−itH} χ_I(H) ψ‖².

Let ψ have finite support, i.e., supp ψ ⊂ {|x| ≤ r}. Then

  ‖χ_{|x|≥R} e^{−itH} χ_I(H) ψ‖² ≤ ‖χ_{|x|≥R} e^{−itH} χ_I(H) χ_{|x|≤r}‖² ‖ψ‖²
    ≤ ‖χ_{|x|≥R} e^{−itH} χ_I(H) χ_{|x|≤r}‖ ‖ψ‖²
    ≤ Σ_{|x|≥R, |y|≤r} |⟨δ_x, e^{−itH} χ_I(H) δ_y⟩| ‖ψ‖²,

where the first step used the Cauchy–Schwarz inequality, the second step dropped a square since ‖χ_{|x|≥R} e^{−itH} χ_I(H) χ_{|x|≤r}‖ ≤ 1, and the last step used χ_{|x|≥R} = Σ_{|x|≥R} |x⟩⟨x| and χ_{|x|≤r} = Σ_{|y|≤r} |y⟩⟨y|.
Now we take the expectation:

  E(‖P_cont(H) χ_I(H) ψ‖²)
    = E( lim_{R→∞} lim_{T→∞} ∫₀^T (dt/T) ‖χ_{|x|≥R} e^{−itH} χ_I(H) ψ‖² )
    ≤ E( lim_{R→∞} lim_{T→∞} ∫₀^T (dt/T) Σ_{|x|≥R, |y|≤r} |⟨δ_x, e^{−itH} χ_I(H) δ_y⟩| ‖ψ‖² )
    ≤ lim_{R→∞} lim_{T→∞} ∫₀^T (dt/T) Σ_{|x|≥R, |y|≤r} E(|⟨δ_x, e^{−itH} χ_I(H) δ_y⟩|) ‖ψ‖²
    ≤ lim_{R→∞} lim_{T→∞} ∫₀^T (dt/T) Σ_{|y|≤r} Σ_{d=R−r}^{∞} Σ_{x∈S(y,d)} E(|⟨δ_x, e^{−itH} χ_I(H) δ_y⟩|) ‖ψ‖²
    ≤ const · lim_{R→∞} Σ_{|y|≤r} Σ_{d=R−r}^{∞} e^{−µd} = 0.

The third step used Fatou's lemma and Fubini's theorem. In the fourth step we used that if |x| ≥ R and |y| ≤ r, then d(x, y) ≥ R − r. The last step used dynamical localization.

Therefore P_cont(H) χ_I(H) ψ = 0 for almost every ω and every ψ of finite support. Such ψ are dense in ℓ²(G), so P_cont(H) χ_I(H) = 0 almost surely. Thus the spectrum in I is pure point. ∎

Therefore, dynamical localization implies spectral localization. The converse, however, is not always true. It was shown that pure point spectrum implies the absence of ballistic motion [22], i.e.,

  lim_{t→∞} ‖ |X|² e^{−itH} χ_I(H) ψ ‖ / t² = 0

for all compactly supported initial conditions ψ, once the spectrum in I is pure point. However, there exists an example in which ‖|X|² e^{−itH} χ_I(H) ψ‖ shows behaviour arbitrarily close to ballistic motion even though the operator has exponential spectral localization [8, 9]. This indicates that spectral localization is not as strong as dynamical localization. The main cause is that, even when H has only pure point spectrum, the constants C_{ω,n} in spectral localization can grow arbitrarily with n, and the eigenvectors can be extended over arbitrarily large length scales; this can produce electron transport arbitrarily close to ballistic motion [14].

Chapter 3

Self-Avoiding Walk Representation

Recently, it has been discovered that the Green's function on a graph can be represented using self-avoiding walks on the graph [14, 26].
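This representation (Proposition 3.1.1 below) can be verified numerically on a small graph. The sketch that follows is our own illustration, not part of the thesis; the 4-cycle graph, the uniform potential density and all parameter values are hypothetical. For each self-avoiding walk of length d(x, y) starting at x it multiplies the diagonal resolvent entries of the successively depleted operators and compares the sum of these terms with a direct matrix inversion:

```python
import numpy as np

# Hypothetical example: a 4-cycle 0-1-2-3-0 with a random potential.
n, edges = 4, [(0, 1), (1, 2), (2, 3), (3, 0)]
T = np.zeros((n, n))
for u, v in edges:
    T[u, v] = T[v, u] = -1.0              # T = negative adjacency matrix
rng = np.random.default_rng(0)
lam = 2.0
H = T + lam * np.diag(rng.uniform(-1.0, 1.0, n))

def depleted(H, visited):
    """H with all edges attached to the vertices in `visited` removed."""
    Hd = H.copy()
    for v in visited:
        for w in range(n):
            if w != v:
                Hd[v, w] = Hd[w, v] = 0.0
    return Hd

def G(A, v, w, z):
    """Green's function <v|(A - z)^{-1}|w> by direct inversion."""
    return np.linalg.inv(A - z * np.eye(n))[v, w]

def saw_green(x, y, z, d):
    """Sum of the SAW-representation terms over walks of length d from x."""
    total = 0.0 + 0.0j
    def extend(walk, term):
        nonlocal total
        if len(walk) - 1 == d:            # final factor <v_d|(H_dep - z)^{-1}|y>
            total += term * G(depleted(H, walk[:-1]), walk[-1], y, z)
            return
        term = term * G(depleted(H, walk[:-1]), walk[-1], walk[-1], z)
        for w in range(n):                # possible next self-avoiding steps
            if T[walk[-1], w] != 0 and w not in walk:
                extend(walk + [w], term)
    extend([x], 1.0 + 0.0j)
    return total

z = 0.3 + 0.1j
assert abs(saw_green(0, 2, z, d=2) - G(H, 0, 2, z)) < 1e-10   # d(0, 2) = 2
```

On the 4-cycle the two self-avoiding walks [0, 1, 2] and [0, 3, 2] together reproduce G(0, 2; z) exactly.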
This allows us to connect localization properties of the system with graph properties. In this chapter, we derive the self-avoiding walk (SAW) representation for the Green's function in a way that gives physical intuition.

3.1 SAW Representation for the Green's Function

Figure 3.1: H and H_[v0].

The Green's function G(x, y; z) is the matrix element of the resolvent of H,

  G(x, y; z) := ⟨x|(H − z)⁻¹|y⟩,

where z ∈ ℂ \ ℝ is a complex energy z = E + iε. We define depleted random Schrödinger operators, obtained from H by the self-avoiding walk process, as follows (Figure 3.1):

  H_[v0] = H + Σ_{v1 ∼ [v0]} (|v0⟩⟨v1| + |v1⟩⟨v0|)

(H_[v0] is H with all edges connected to v0 removed), and

  H_[v0,…,vi] = H_[v0,…,v_{i−1}] + Σ_{vi ∼ [v0,…,v_{i−1}]} (|v_{i−1}⟩⟨v_i| + |v_i⟩⟨v_{i−1}|),

where Σ_{vi ∼ [v0,…,v_{i−1}]} sums over every vertex v_i that can be reached by taking the next step after the self-avoiding walk [v0, …, v_{i−1}]. Then we obtain the following proposition.

Proposition 3.1.1 (SAW representation). Let x = v0 and y ∈ V. Then the Green's function can be written as

  G(x, y; z) = ⟨x|(H − z)⁻¹|y⟩
    = Σ_{[v0,…,v_{d(x,y)}]} [ Π_{i=0}^{d(x,y)−1} ⟨v_i|(H_[v0,…,v_{i−1}] − z)⁻¹|v_i⟩ ] × ⟨v_{d(x,y)}|(H_[v0,…,v_{d(x,y)−1}] − z)⁻¹|y⟩,

where Σ_{[v0,…,v_{d(x,y)}]} sums over all self-avoiding walks of length d(x, y) starting at x. Here, for i = 0, H_[v0,…,v_{−1}] := H, and for i = 1, H_[v0,…,v0] = H_[v0].

Proof. Firstly,

  H = H_[v0] − Σ_{v1 ∼ [v0]} (|v0⟩⟨v1| + |v1⟩⟨v0|).

Using the resolvent formula A⁻¹ − B⁻¹ = A⁻¹(B − A)B⁻¹ with A = H − z and B = H_[v0] − z, we have

  (H − z)⁻¹ = (H − z)⁻¹ Σ_{v1 ∼ [v0]} (|v0⟩⟨v1| + |v1⟩⟨v0|)(H_[v0] − z)⁻¹ + (H_[v0] − z)⁻¹.

Then, if d(x, y) ≥ 1, so that y ≠ v0,

  ⟨v0|(H − z)⁻¹|y⟩ = ⟨v0|(H − z)⁻¹|v0⟩ Σ_{v1 ∼ [v0]} ⟨v1|(H_[v0] − z)⁻¹|y⟩,

since ⟨v0|(H_[v0] − z)⁻¹|y⟩ = 0, the edges connected to v0 having been removed in H_[v0] (H_[v0] is block-diagonal). Similarly,
  H_[v0] = H_[v0,v1] − Σ_{v2 ∼ [v0,v1]} (|v1⟩⟨v2| + |v2⟩⟨v1|).

Then, if d(x, y) ≥ 2, so that y ≠ v1, we have

  ⟨v1|(H_[v0] − z)⁻¹|y⟩ = ⟨v1|(H_[v0] − z)⁻¹|v1⟩ Σ_{v2 ∼ [v0,v1]} ⟨v2|(H_[v0,v1] − z)⁻¹|y⟩.

Therefore

  ⟨v0|(H − z)⁻¹|y⟩ = ⟨v0|(H − z)⁻¹|v0⟩ Σ_{v1 ∼ [v0]} ⟨v1|(H_[v0] − z)⁻¹|v1⟩ Σ_{v2 ∼ [v0,v1]} ⟨v2|(H_[v0,v1] − z)⁻¹|y⟩.

Repeating the above process, we arrive at

  G(x, y; z) = ⟨v0|(H − z)⁻¹|y⟩
    = Σ_{v1 ∼ [v0]} Σ_{v2 ∼ [v0,v1]} ⋯ Σ_{v_{d(x,y)} ∼ [v0,…,v_{d(x,y)−1}]} [ Π_{i=0}^{d(x,y)−1} ⟨v_i|(H_[v0,…,v_{i−1}] − z)⁻¹|v_i⟩ ] ⟨v_{d(x,y)}|(H_[v0,…,v_{d(x,y)−1}] − z)⁻¹|y⟩
    = Σ_{[v0,…,v_{d(x,y)}]} [ Π_{i=0}^{d(x,y)−1} ⟨v_i|(H_[v0,…,v_{i−1}] − z)⁻¹|v_i⟩ ] ⟨v_{d(x,y)}|(H_[v0,…,v_{d(x,y)−1}] − z)⁻¹|y⟩,

since Σ_{[v0,…,v_{d(x,y)}]} = Σ_{v1 ∼ [v0]} Σ_{v2 ∼ [v0,v1]} ⋯ Σ_{v_{d(x,y)} ∼ [v0,…,v_{d(x,y)−1}]}; here the self-avoiding walks are the sequences of vertices [v0, …, v_{d(x,y)}]. ∎

Note that if a walker cannot take any further step from v_i, because all edges connected to v_i have already been deleted, then the contribution of that walk to the Green's function is 0, since ⟨v_i|(H_[v0,…,v_{i−1}] − z)⁻¹|y⟩ = 0 (see X̄(x, y) in Figure 3.2). There may be a loose analogy between the SAW representation for the Green's function and the path integral approach to the propagator in quantum mechanics, although the Green's function here is not a propagator.

3.2 Fractional Moment Bounds with SAW Representation

We now write fractional moment bounds for the Green's function in terms of self-avoiding walks; these will be used in the FMM.

Firstly, let 𝒲(x, y) be the set of self-avoiding walks [x, v1, …, v_{d(x,y)}] with d(x, y) steps starting at v0 = x. We can divide 𝒲(x, y) into three subsets (Figure 3.2):

Y(x, y): self-avoiding walks in 𝒲(x, y) with v_{d(x,y)} = y.

X(x, y): self-avoiding walks in 𝒲(x, y) with v_{d(x,y)} ≠ y, where v_{d(x,y)} is connected to y in the graph obtained by deleting all edges attached to [x, v1, …, v_{d(x,y)−1}].
X̄(x, y): self-avoiding walks in 𝒲(x, y) with v_{d(x,y)} ≠ y, where v_{d(x,y)} is not connected to y in the graph obtained by deleting all edges attached to [x, v1, …, v_{d(x,y)−1}].

Only Y(x, y) and X(x, y) contribute to the Green's function, since

  ⟨v_{d(x,y)}|(H_[v0,…,v_{d(x,y)−1}] − z)⁻¹|y⟩ = 0  if [x, v1, …, v_{d(x,y)}] ∈ X̄(x, y).

We can now define W(x, y) = Y(x, y) ∪ X(x, y): the set of self-avoiding walks [x, v1, …, v_{d(x,y)}] with d(x, y) steps starting at v0 = x such that v_{d(x,y)} is connected to y in the graph obtained by deleting all edges attached to [x, v1, …, v_{d(x,y)−1}] (y is regarded as connected to itself in the case v_{d(x,y)} = y).

The other fact we will use to derive the fractional moment bound with the SAW representation is an a priori bound.

Lemma 3.2.1 (A priori bound). Let 0 < s < 1. There exist constants C1(s, ρ), C2(s, ρ) < ∞ such that

  E_x(|G(x, x; z)|^s) = E_x(|⟨x|(H − z)⁻¹|x⟩|^s) ≤ ‖ρ‖_∞^s (2^s s^{−s}/(1 − s)) λ^{−s} = C1(s, ρ) λ^{−s},
  E_{x,y}(|G(x, y; z)|^s) = E_{x,y}(|⟨x|(H − z)⁻¹|y⟩|^s) ≤ ‖ρ‖_∞^s 2^{s+1} (2^s s^{−s}/(1 − s)) λ^{−s} = C2(s, ρ) λ^{−s},

for all x, y ∈ V with x ≠ y, all z ∈ ℂ \ ℝ and all λ > 0. Here

  E_x(…) = ∫ … ρ(ω_x) dω_x  and  E_{x,y}(…) = ∫∫ … ρ(ω_x) ρ(ω_y) dω_x dω_y

are the conditional expectations with (ω_u)_{u∈V\{x}} (respectively (ω_u)_{u∈V\{x,y}}) fixed. After averaging over ω_x and ω_y, the bounds above do not depend on the remaining random potentials [25]. Note that C1(s, ρ) < C2(s, ρ). The proof of Lemma 3.2.1 (Lemma A.0.1) is given in Appendix A.

Figure 3.2: The difference between three kinds of self-avoiding walks.

Theorem 3.2.2 (Fractional moment bounds). Write |Y(x, y)|, |X(x, y)| and |W(x, y)| for the numbers of walks in Y(x, y), X(x, y) and W(x, y) respectively, and let 0 < s < 1. Then the fractional moment of the Green's function satisfies

  E(|G(x, y; z)|^s) ≤ |W(x, y)| (C2(s, ρ)/λ^s)^{d(x,y)+1},

where C2(s, ρ) = ‖ρ‖_∞^s 2^{s+1} (2^s s^{−s})/(1 − s).

Proof. Let v0 = x.
Then

  E(|G(x, y; z)|^s) = E(|⟨v0|(H − z)⁻¹|y⟩|^s)
    = E( | Σ_{[v0,…,v_{d(x,y)}]} Π_{i=0}^{d(x,y)−1} ⟨v_i|(H_[v0,…,v_{i−1}] − z)⁻¹|v_i⟩ ⟨v_{d(x,y)}|(H_[v0,…,v_{d(x,y)−1}] − z)⁻¹|y⟩ |^s )
    ≤ E( Σ_{[v0,…,v_{d(x,y)}]} | Π_{i=0}^{d(x,y)−1} ⟨v_i|(H_[v0,…,v_{i−1}] − z)⁻¹|v_i⟩ ⟨v_{d(x,y)}|(H_[v0,…,v_{d(x,y)−1}] − z)⁻¹|y⟩ |^s )
    = Σ_{[v0,…,v_{d(x,y)}]} E( | Π_{i=0}^{d(x,y)−1} ⟨v_i|(H_[v0,…,v_{i−1}] − z)⁻¹|v_i⟩ ⟨v_{d(x,y)}|(H_[v0,…,v_{d(x,y)−1}] − z)⁻¹|y⟩ |^s )
    ≤ |Y(x, y)| (C1(s, ρ) λ^{−s})^{d(x,y)+1} + |X(x, y)| (C1(s, ρ) λ^{−s})^{d(x,y)} C2(s, ρ) λ^{−s}
    ≤ (|Y(x, y)| + |X(x, y)|) (C2(s, ρ) λ^{−s})^{d(x,y)+1}
    = |W(x, y)| (C2(s, ρ)/λ^s)^{d(x,y)+1},

where the third step used |Σ_i x_i|^s ≤ Σ_i |x_i|^s for 0 < s < 1 (Lemma A.0.2 in Appendix A), the fourth step used the fact that the expectation of a sum is the sum of the expectations, and the sixth step used C1(s, ρ) < C2(s, ρ) (they differ only by the factor 2^{s+1}). In the fifth step, we have

  E( Π_{i=0}^{d(x,y)−1} |⟨v_i|(H_[v0,…,v_{i−1}] − z)⁻¹|v_i⟩|^s |⟨v_{d(x,y)}|(H_[v0,…,v_{d(x,y)−1}] − z)⁻¹|y⟩|^s )
    = E( |⟨v0|(H − z)⁻¹|v0⟩|^s |⟨v1|(H_[v0] − z)⁻¹|v1⟩|^s × ⋯ × |⟨v_{d(x,y)−1}|(H_[v0,…,v_{d(x,y)−2}] − z)⁻¹|v_{d(x,y)−1}⟩|^s |⟨v_{d(x,y)}|(H_[v0,…,v_{d(x,y)−1}] − z)⁻¹|y⟩|^s )
    = E_{v\v0}[ E_{v0}( |⟨v0|(H − z)⁻¹|v0⟩|^s ) × |⟨v1|(H_[v0] − z)⁻¹|v1⟩|^s × ⋯ × |⟨v_{d(x,y)}|(H_[v0,…,v_{d(x,y)−1}] − z)⁻¹|y⟩|^s ],   (3.1)

where E_{v\v0} denotes the expectation with respect to every random potential except ω_{v0}, and E_{v0} the expectation with respect to ω_{v0}.
Since only |⟨v0|(H − z)⁻¹|v0⟩|^s depends on ω_{v0}, Lemma 3.2.1 gives

  (3.1) ≤ C1(s, ρ) λ^{−s} E_{v\v0}( |⟨v1|(H_[v0] − z)⁻¹|v1⟩|^s × ⋯ × |⟨v_{d(x,y)}|(H_[v0,…,v_{d(x,y)−1}] − z)⁻¹|y⟩|^s )
    = C1(s, ρ) λ^{−s} E_{v\{v0,v1}}[ E_{v1}( |⟨v1|(H_[v0] − z)⁻¹|v1⟩|^s ) × ⋯ × |⟨v_{d(x,y)}|(H_[v0,…,v_{d(x,y)−1}] − z)⁻¹|y⟩|^s ]
    ≤ C1²(s, ρ) λ^{−2s} E_{v\{v0,v1,v2}}[ E_{v2}( |⟨v2|(H_[v0,v1] − z)⁻¹|v2⟩|^s ) × ⋯ × |⟨v_{d(x,y)}|(H_[v0,…,v_{d(x,y)−1}] − z)⁻¹|y⟩|^s ].

Here we used the fact that Lemma 3.2.1 also applies to the depleted operators H_[v0,v1,…]. Continuing this process, we obtain the stated bound. Note that in the last step we have

  E_y( |⟨y|(H_[v0,…,v_{d(x,y)−1}] − z)⁻¹|y⟩|^s ) ≤ C1(s, ρ) λ^{−s}  if v_{d(x,y)} = y,
  E_{v_{d(x,y)},y}( |⟨v_{d(x,y)}|(H_[v0,…,v_{d(x,y)−1}] − z)⁻¹|y⟩|^s ) ≤ C2(s, ρ) λ^{−s}  if v_{d(x,y)} ≠ y. ∎

In the lattice case, the FMM states that if the fractional moment bounds decay sufficiently rapidly, we obtain dynamical localization. In the general graph case the situation is more complicated, as we will discuss in the next chapter. However, it remains true for general graphs that as the fractional moment gets larger, the system needs larger λ to obtain dynamical localization. In other words, as |W(x, y)| increases, it becomes more difficult for the system to exhibit dynamical localization. Therefore, the SAW representation allows us to connect localization properties and graph properties. Although counting |W(x, y)| on a graph is a complicated task, we can estimate some upper bounds.

Example 3.2.3 (General graph). The kinetic energy term T of a random Schrödinger operator is the negative adjacency matrix of G. Therefore, the (i, j)-entry of the matrix (−T)^{d(x,y)} is equal to the number of walks from i to j in G of length d(x, y).
Those walks include walks which are not self-avoiding; however, we obtain the upper bound

|W(x, y)| ≤ ∑_{j∈V} ((−T)^{d(x,y)})_{xj}

Therefore,

E(|G(x, y; z)|^s) ≤ [ ∑_{j∈V} ((−T)^{d(x,y)})_{xj} ] (C_2(s, ρ)/λ^s)^{d(x,y)+1}

Example 3.2.4. (Largest upper bound) Since we assumed that the degree of a vertex is bounded above by some constant N, the self-avoiding walker has at most N choices at the first step and at most N − 1 choices at each step after that. Thus we have the largest upper bound:

E(|G(x, y; z)|^s) ≤ N(N − 1)^{d(x,y)−1} (C_2(s, ρ)/λ^s)^{d(x,y)+1}

Example 3.2.5. (n-dimensional Euclidean lattice) The above expression can be used for the n-dimensional Euclidean lattice Z^n. The self-avoiding walker has at most 2n choices at the first step and at most 2n − 1 choices at each step after that. Thus we have the upper bound:

E(|G(x, y; z)|^s) ≤ 2n(2n − 1)^{d(x,y)−1} (C_2(s, ρ)/λ^s)^{d(x,y)+1}

This suggests that as the number of spatial dimensions n increases, the system needs larger λ to obtain dynamical localization.

Example 3.2.6. (One-dimensional Euclidean lattice and tree graph) Even if the type of graph changes, as long as |W(x, y)| stays the same, the upper bound on the fractional moment of the Green's function does not change. The one-dimensional Euclidean lattice and tree graphs such as the Bethe lattice have |Y(x, y)| = 1 and |X(x, y)| = 0, since only one self-avoiding walk can arrive at y in d(x, y) steps, and the other self-avoiding walks of d(x, y) steps have no edges connected to y. Therefore,

E(|G(x, y; z)|^s) ≤ (C_2(s, ρ)/λ^s)^{d(x,y)+1}

This shows that the one-dimensional Euclidean lattice and tree graphs have the same fractional moment bounds. However, as we will see in the next chapter, they still behave differently under FMM.

Chapter 4

Fractional Moment Method

This chapter introduces the fractional moment method (FMM), which connects fractional moment bounds of the Green's function with dynamical localization.
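The basic input to the method, exponential decay of fractional moments of the Green's function at large disorder, can be observed in a small Monte Carlo experiment on the one-dimensional Anderson model. This is an illustrative sketch; the chain length, disorder strength, energy and sample count are all our choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, lam, s, z = 60, 8.0, 0.5, 0.1 + 0.1j   # chain length, disorder, moment, energy

def frac_moment(d, samples=300):
    """Monte Carlo estimate of E(|G(0, d; z)|^s) for the 1D Anderson model
    H = (nearest-neighbour hopping) + lam * omega_v on a chain of N sites,
    with omega_v i.i.d. uniform on [-1/2, 1/2]."""
    acc = 0.0
    hop = np.ones(N - 1)
    for _ in range(samples):
        omega = rng.uniform(-0.5, 0.5, N)
        H = np.diag(lam * omega) - np.diag(hop, 1) - np.diag(hop, -1)
        # First column of the resolvent: G(., 0; z) = (H - z)^{-1} e_0
        G0 = np.linalg.solve(H - z * np.eye(N), np.eye(N)[:, 0])
        acc += abs(G0[d]) ** s
    return acc / samples

m5, m15, m25 = (frac_moment(d) for d in (5, 15, 25))
# At large disorder the fractional moments decay rapidly with distance.
assert m5 > m15 > m25 > 0.0
```

For λ this large the estimated moments drop by many orders of magnitude between d = 5 and d = 25, which is the kind of decay the theorems in this chapter turn into dynamical localization.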
FMM is a useful method in the regime of large disorder or extreme energies.

4.1 Fractional Moment Method with SAW Representation

Recall the two functions defined in chapter 1:

F_1(d) = max{|W(x, y)| : d(x, y) = d}, so that |W(x, y)| ≤ F_1(d(x, y)),

F_2(d) = max_x |S(x, d)|.

Theorem 4.1.1. (FMM) Let I ⊂ R, 0 < s < 1 and ε ∈ (0, 1/2). Then dynamical localization (2.1) holds for disorder λ satisfying the following conditions:

µ = s ln λ − ln C_2 − sup_{x,d} (ln |S(x, d)|)/d > 0, that is, s ln λ > ln C_2 + sup_{x,d} (ln |S(x, d)|)/d, and

∑_{d′=0}^{∞} F_2(d′)F_1(d′) (C_2/λ^s)^{(1−2ε)d′} < ∞

where µ and C in (2.1) can be written as

µ = s ln λ − ln C_2 − sup_{x,d} (ln |S(x, d)|)/d   and   C = (C′C_2|I|)/(πλ^s) ∑_{d′=0}^{∞} F_2(d′)F_1(d′)(C_2/λ^s)^{(1−2ε)d′}.

Here C′ is a constant obtained from Lemma B.0.3 in Appendix B. The proof uses the argument by Graf [12] that the fractional moment of the Green's function E_x(|G(x, y; z)|^s) can bound the second moment of the Green's function E_x(|G(x, y; z)|²).

4.2 Second Moment and Fractional Moment of the Green's Function

In this section, we prove that an upper bound on the second moment of the Green's function can be written in terms of its fractional moment; we will use this to prove Theorem 4.1.1 in the next section.

Proposition 4.2.1. For every 0 < s < 1 there exists a constant C′ < ∞, depending only on s and ρ, such that

|Imz| E_x(|G(x, y; z)|²) ≤ C′ E_x(|G(x, y; z)|^s)

for all z ∈ C ∖ R and x, y ∈ G. Here E_x denotes averaging over ω_x.

Proof. The proof is that of [12, 25]. For fixed x ∈ G, write ω = (ω̂, ω_x), where ω̂ = (ω_u)_{u∈G∖{x}}. Consider the Hamiltonian H^{(α)} obtained from H by wiggling the potential at x. We denote its Green's function by G^{(α)}.
With P_x := δ_x(δ_x, ·) = |x⟩⟨x|, the orthogonal projection onto the span of |x⟩, we can separate the ω_x and ω̂ dependence of H^{(α)} as

H^{(α)} = H_{(ω̂, ω_x+α)} = H + α|x⟩⟨x|

Then by the resolvent formula

A^{−1} − B^{−1} = A^{−1}(B − A)B^{−1} = B^{−1}(B − A)A^{−1}

we have

(H − z)^{−1} = (H^{(α)} − z)^{−1} + α(H − z)^{−1}|x⟩⟨x|(H^{(α)} − z)^{−1}

by taking A = H − z and B = H^{(α)} − z. Then

⟨x|(H − z)^{−1}|y⟩ = ⟨x|(H^{(α)} − z)^{−1}|y⟩ + α⟨x|(H − z)^{−1}|x⟩⟨x|(H^{(α)} − z)^{−1}|y⟩

G(x, y; z) = G^{(α)}(x, y; z) + αG(x, x; z)G^{(α)}(x, y; z)

G^{(α)}(x, y; z) = G(x, y; z)/(1 + αG(x, x; z)) = (1/(α + G(x, x; z)^{−1})) · (G(x, y; z)/G(x, x; z))   (4.1)

If we take x = y and α̃ = −Re[G_ω(x, x; z)^{−1}], then

|G^{(α̃)}(x, x; z)| = 1/|Im[G(x, x; z)^{−1}]| ≤ 1/|Imz|

This inequality can be obtained from the following. Write z = E + iε. Then, for U ∈ D(H), we have

‖(H − z)U‖² = ⟨(H − z)U, (H − z)U⟩ = ⟨(H − E − iε)U, (H − E − iε)U⟩

= ⟨(H − E)U, (H − E)U⟩ + |ε|²⟨U, U⟩ − iε⟨U, (H − E)U⟩ + iε⟨(H − E)U, U⟩ = ⟨(H − E)U, (H − E)U⟩ + |ε|²⟨U, U⟩

Thus ‖(H − z)U‖² ≥ |ε|²‖U‖². Since H − z is invertible, we have

‖(H − z)^{−1}‖ ≤ 1/|Im z|

Therefore, we have |G^{(α̃)}(x, x; z)| ≤ 1/|Imz| and |Im[G(x, x; z)^{−1}]| ≥ |Imz|. Substituting this inequality into (4.1), we obtain

|Imz| |G^{(α)}(x, y; z)|² ≤ (|Im[G(x, x; z)^{−1}]| / |α + G(x, x; z)^{−1}|²) · (|G(x, y; z)|² / |G(x, x; z)|²)

Also, we can bound the same expression as follows:

|Imz| |G^{(α)}(x, y; z)|² ≤ |Imz| ∑_{y′∈G} |G^{(α)}(x, y′; z)|² = |Imz| ⟨δ_x, (H^{(α)} − z̄)^{−1}(H^{(α)} − z)^{−1}δ_x⟩

= |Imz| ⟨δ_x, (1/(z − z̄))[(H^{(α)} − z)^{−1} − (H^{(α)} − z̄)^{−1}]δ_x⟩ = |Im G^{(α)}(x, x; z)|

= |Im[1/(α + G(x, x; z)^{−1})]| = |Im[G(x, x; z)^{−1}]| / |α + G(x, x; z)^{−1}|²

where the fifth step used (4.1) with x = y. Now, for t ≥ 0, min(1, t²) ≤ t^s.
Therefore, we can combine the above two estimates as

|Imz| |G^{(α)}(x, y; z)|² ≤ (|Im[G(x, x; z)^{−1}]| / |α + G(x, x; z)^{−1}|²) · (|G(x, y; z)|^s / |G(x, x; z)|^s)   (4.2)

Now, we introduce the "re-sampling trick" below, which allows us to create an additional random variable (in this proof, α) to average over. For a non-negative Borel function f on R, we have

∫∫ f(ω_x + α)ρ(ω_x + α) dα ρ(ω_x) dω_x = ∫∫ f(ω_x + α)ρ(ω_x + α)ρ(ω_x) dω_x dα

= ∫∫ f(ω_x)ρ(ω_x)ρ(ω_x − α) dω_x dα = ∫ f(ω_x)ρ(ω_x) ( ∫ ρ(ω_x − α) dα ) dω_x = ∫ f(ω_x)ρ(ω_x) dω_x

The first and third steps changed the order of integration, the second step used the translation invariance of Lebesgue measure, and the last step used ∫ ρ(ω_x − α) dα = 1, since ρ is a probability density. Let f(ω_x) = |G(x, y; z)|²; then

|Imz| E_x(|G(x, y; z)|²) = |Imz| E_x( ∫ |G^{(α)}(x, y; z)|² ρ(ω_x + α) dα )

≤ E_x( |Im[G(x, x; z)^{−1}]| · (|G(x, y; z)|^s / |G(x, x; z)|^s) ∫ ρ(ω_x + α) / |α + G(x, x; z)^{−1}|² dα )

by (4.2). Using Lemma B.0.3 in Appendix B with w = G(x, x; z)^{−1}, we obtain

|Imz| E_x(|G(x, y; z)|²) ≤ C′ E_x(|G(x, y; z)|^s)

where C′ < ∞ is a constant which depends only on supp ρ.

4.3 From Fractional Moment to Dynamical Localization

Now we prove Theorem 4.1.1 using Proposition 4.2.1.

Proof. Again, the proof is based on that in [12, 25]. We introduce the complex Borel spectral measures µ_{y,x} of H, defined by µ_{y,x}(B) = ⟨δ_y, χ_B(H)δ_x⟩ for Borel sets B ⊂ R. The total variation |µ_{y,x}| of µ_{y,x} is then given by

|µ_{y,x}|(B) = sup_{g:R→C, Borel, |g|≤1} | ∫_B g(λ) dµ_{y,x}(λ) | = sup_{|g|≤1} |⟨δ_y, g(H)χ_B(H)δ_x⟩|

This is a regular bounded Borel measure. If we choose g(H) = e^{−itH}, we can bound the expectation in (2.1):

∑_{y∈S(x,d)} E( sup_{t∈R} |⟨δ_y, e^{−itH}χ_I(H)δ_x⟩| ) ≤ ∑_{y∈S(x,d)} E(|µ_{y,x}|(I))

Therefore, exponential decay of ∑_{y∈S(x,d)} E(|µ_{y,x}|(I)) implies dynamical localization.
Now, we use Lusin's theorem B.0.4 in Appendix B [21] and replace the Borel functions g by continuous functions with compact support in I. Then,

|µ_{y,x}|(I) = sup_{g∈C_c(I), |g|≤1} |⟨δ_y, g(H)δ_x⟩|   (4.3)

This allows us to introduce Lemma B.0.6 in Appendix B [26]. For g ∈ C_c(I) we have g(H)χ_I(H) = (1/2) g(H)(χ_I(H) + χ_Ī(H)), where the closed interval Ī is the closure of I. Thus

⟨δ_y, g(H)δ_x⟩ = (1/2) ⟨δ_y, g(H)(χ_I(H) + χ_Ī(H))δ_x⟩ = lim_{ε→0+} (ε/π) ∫_I g(E) ⟨δ_y, (H − E − iε)^{−1}(H − E + iε)^{−1}δ_x⟩ dE

Since |g| ≤ 1, we have

∑_{y∈S(x,d)} E(|µ_{y,x}|(I))

≤ ∑_{y∈S(x,d)} E( lim inf_{ε→0+} (ε/π) ∫_I ∑_{z∈G} |⟨δ_y, (H − E − iε)^{−1}δ_z⟩| |⟨δ_z, (H − E + iε)^{−1}δ_x⟩| dE )

≤ lim inf_{ε→0+} (ε/π) ∑_{y∈S(x,d)} ∫_I E( ∑_{z∈G} |⟨δ_y, (H − E − iε)^{−1}δ_z⟩| |⟨δ_z, (H − E + iε)^{−1}δ_x⟩| ) dE

= lim inf_{ε→0+} (ε/π) ∫_I ∑_{z∈G} ∑_{y∈S(x,d)} E( |G(y, z; E + iε)| |G(z, x; E + iε)| ) dE

≤ lim inf_{ε→0+} (ε/π) ∫_I ∑_{z∈G} ∑_{y∈S(x,d)} (E(|G(y, z; E + iε)|²))^{1/2} (E(|G(z, x; E + iε)|²))^{1/2} dE   (4.4)

The second step used Fatou's lemma, the third step used Fubini's theorem and the fourth step used the Cauchy-Schwarz inequality. Now, we introduce Proposition 4.2.1 and Theorem 3.2.2:

(4.4) ≤ lim_{ε→0+} (C′/π) ∫_I ∑_{z∈G} ∑_{y∈S(x,d)} (E(|G(y, z; E + iε)|^s))^{1/2} (E(|G(z, x; E − iε)|^s))^{1/2} dE

≤ (C′|I|/π) ∑_{z∈G} ∑_{y∈S(x,d)} |W(y, z)|^{1/2} (C_2/λ^s)^{(d(y,z)+1)/2} |W(z, x)|^{1/2} (C_2/λ^s)^{(d(z,x)+1)/2}   (4.5)

Using the triangle inequality,

(d(y, z) + d(z, x))/2 = ε(d(y, z) + d(z, x)) + (1/2 − ε)(d(y, z) + d(z, x)) ≥ εd(y, x) + (1/2 − ε)(d(y, z) + d(z, x))

where 0 < ε < 1/2.
Then, assuming C_2/λ^s < 1 (that is, λ^s > C_2), we have

(C_2/λ^s)^{(d(y,z)+d(z,x))/2} ≤ (C_2/λ^s)^{εd(y,x) + (1/2−ε)(d(y,z)+d(z,x))}

Therefore,

(4.5) ≤ (C′|I|/π) ∑_{z∈G} ∑_{y∈S(x,d)} (C_2/λ^s)^{εd(y,x)} |W(y, z)|^{1/2} (C_2/λ^s)^{(1/2−ε)d(y,z)+1/2} |W(z, x)|^{1/2} (C_2/λ^s)^{(1/2−ε)d(z,x)+1/2}

= (C′|I|/π) (C_2/λ^s)^{εd} ∑_{z∈G} ∑_{y∈S(x,d)} |W(y, z)|^{1/2} (C_2/λ^s)^{(1/2−ε)d(y,z)+1/2} |W(z, x)|^{1/2} (C_2/λ^s)^{(1/2−ε)d(z,x)+1/2}

≤ (C′|I|/π) (C_2/λ^s)^{εd} ∑_{y∈S(x,d)} ( ∑_{z∈G} |W(y, z)| (C_2/λ^s)^{(1−2ε)d(y,z)+1} )^{1/2} ( ∑_{z∈G} |W(z, x)| (C_2/λ^s)^{(1−2ε)d(z,x)+1} )^{1/2}   (4.6)

by the Cauchy-Schwarz inequality. Then

(4.6) ≤ (C′|I|/π) (C_2/λ^s)^{εd} ∑_{y∈S(x,d)} ( ∑_{z∈G} F_1(d(y, z)) (C_2/λ^s)^{(1−2ε)d(y,z)+1} )^{1/2} ( ∑_{z∈G} F_1(d(z, x)) (C_2/λ^s)^{(1−2ε)d(z,x)+1} )^{1/2}

= (C′|I|/π) (C_2/λ^s)^{εd} ∑_{y∈S(x,d)} ( ∑_{d′=0}^{∞} ∑_{z∈S(y,d′)} F_1(d′) (C_2/λ^s)^{(1−2ε)d′+1} )^{1/2} ( ∑_{d′=0}^{∞} ∑_{z∈S(x,d′)} F_1(d′) (C_2/λ^s)^{(1−2ε)d′+1} )^{1/2}

= (C′|I|/π) (C_2/λ^s)^{εd} ∑_{y∈S(x,d)} ( ∑_{d′=0}^{∞} |S(y, d′)| F_1(d′) (C_2/λ^s)^{(1−2ε)d′+1} )^{1/2} ( ∑_{d′=0}^{∞} |S(x, d′)| F_1(d′) (C_2/λ^s)^{(1−2ε)d′+1} )^{1/2}

≤ (C′|I|/π) (C_2/λ^s)^{εd} |S(x, d)| ∑_{d′=0}^{∞} F_2(d′) F_1(d′) (C_2/λ^s)^{(1−2ε)d′+1} ≤ Ce^{−µd}

where µ = s ln λ − ln C_2 − sup_{x,d} (ln |S(x, d)|)/d and C = (C′C_2|I|)/(πλ^s) ∑_{d′=0}^{∞} F_2(d′)F_1(d′)(C_2/λ^s)^{(1−2ε)d′}.

Therefore, we have dynamical localization (2.1) if µ > 0, that is, s ln λ > ln C_2 + sup_{x,d} (ln |S(x, d)|)/d, and ∑_{d′=0}^{∞} F_2(d′)F_1(d′)(C_2/λ^s)^{(1−2ε)d′} < ∞.

This is a stronger result than dynamical localization. For an open interval I, there are constants C < ∞ and µ > 0 such that

∑_{y∈S(x,d)} E(|µ_{y,x}|(I)) = ∑_{y∈S(x,d)} E( sup_{g:R→C, Borel, |g|≤1} |⟨δ_y, g(H)χ_I(H)δ_x⟩| ) ≤ Ce^{−µd}

for all x, y ∈ G. For example, if we take g(H) = 1, we obtain exponential decay of correlations in the spectral projection χ_I(H) [25].

Example 4.3.1. (Bethe lattice) From our assumption, the upper bound on |S(x, d)| is attained when G is a Bethe lattice with vertex degree N. We have |S(x, d)| = N(N − 1)^{d−1} for d ≥ 1.
The number of vertices increases exponentially with the distance from the origin, as ∼ e^{(d−1) ln(N−1)}. To obtain dynamical localization we need µ > 0, that is,

s ln λ > ln C_2 + ln N > ln C_2 + (ln N(N−1)^{d−1})/d

Furthermore, F_2(d′) = N(N − 1)^{d′−1}, since it does not depend on the choice of origin on a Bethe lattice. We know F_1(d′) = 1 from the previous chapter. Thus,

∑_{d′=0}^{∞} F_2(d′)F_1(d′)(C_2/λ^s)^{(1−2ε)d′} = ∑_{d′=0}^{∞} N(N−1)^{d′−1}(C_2/λ^s)^{(1−2ε)d′}

< N ∑_{d′=0}^{∞} (N−1)^{d′}(C_2/λ^s)^{(1−2ε)d′} = N ∑_{d′=0}^{∞} e^{−(s(1−2ε) ln λ − (1−2ε) ln C_2 − ln(N−1))d′} < ∞

if λ satisfies s ln λ > ln C_2 + ln(N−1)/(1−2ε).

Therefore, the system obtains dynamical localization if λ satisfies s ln λ > ln C_2 + ln N and s ln λ > ln C_2 + ln(N−1)/(1−2ε).

This indicates that although trees and the one-dimensional lattice have the same fractional moment bounds, trees need larger λ to obtain dynamical localization because of the factors |S(x, d)| and F_2(d′).

Example 4.3.2. (Euclidean lattice) For the lattice Z^n we have

|S(x, d)| = ∑_{k=0}^{n} C(n, k) · C(d − k + n − 1, n − 1)

where C(a, b) denotes a binomial coefficient. For the one-dimensional Euclidean lattice we have F_1(d′) = 1 and |S(x, d)| = F_2(d′) = 2. Therefore we need µ > 0, that is, s ln λ > ln C_2 + ln 2 > ln C_2 + (ln 2)/d, and

∑_{d′=0}^{∞} F_2(d′)F_1(d′)(C_2/λ^s)^{(1−2ε)d′} = 2 ∑_{d′=0}^{∞} (C_2/λ^s)^{(1−2ε)d′} < ∞

if λ^s > C_2. Thus, the system obtains dynamical localization if λ satisfies s ln λ > ln C_2 + ln 2.

For the two-dimensional Euclidean lattice we have

|S(x, d)| = ∑_{k=0}^{2} C(2, k) · C(d − k + 1, 1) = 4d

Then we need µ > 0, that is, s ln λ > ln C_2 + 4/e > ln C_2 + (ln 4d)/d.

Also, F_2(d′) = 4d′ and F_1(d′) < 4 × 3^{d′−1}. Then

∑_{d′=0}^{∞} F_2(d′)F_1(d′)(C_2/λ^s)^{(1−2ε)d′} < 16 ∑_{d′=0}^{∞} d′ 3^{d′}(C_2/λ^s)^{(1−2ε)d′} = 16 ∑_{d′=0}^{∞} e^{−(s(1−2ε) ln λ − ln d′/d′ − ln 3 − (1−2ε) ln C_2)d′} < ∞

if s ln λ > 1/((1−2ε)e) + (ln 3)/(1−2ε) + ln C_2.

Thus, the system obtains dynamical localization if λ satisfies s ln λ > ln C_2 + 4/e and s ln λ > 1/((1−2ε)e) + (ln 3)/(1−2ε) + ln C_2.
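The sphere sizes |S(x, d)| used in these examples can be cross-checked numerically. A quick sketch (the brute-force check and the parameter choices are ours, not from the thesis):

```python
from math import comb

def sphere_Zn(n, d):
    """Number of lattice points at l1-distance d >= 1 from the origin in Z^n,
    via the binomial-sum formula used above."""
    return sum(comb(n, k) * comb(d - k + n - 1, n - 1) for k in range(n + 1))

def sphere_Z2_brute(d):
    """Direct count of points (i, j) with |i| + |j| = d in Z^2."""
    return sum(1 for i in range(-d, d + 1) for j in range(-d, d + 1)
               if abs(i) + abs(j) == d)

for d in range(1, 8):
    assert sphere_Zn(1, d) == 2                       # two points at distance d on Z
    assert sphere_Zn(2, d) == 4 * d == sphere_Z2_brute(d)   # |S(x,d)| = 4d on Z^2

# Bethe lattice with vertex degree N: |S(x, d)| = N (N-1)^(d-1) grows exponentially.
N = 3
assert [N * (N - 1) ** (d - 1) for d in (1, 2, 3)] == [3, 6, 12]
```

The contrast between the polynomial growth 4d on Z² and the exponential growth N(N−1)^{d−1} on the tree is exactly what makes the Bethe lattice demand larger disorder.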
For the three-dimensional Euclidean lattice we have |S(x, d)| ∼ 4d². Then we need µ > 0, that is,

s ln λ > ln C_2 + 4/e > ln C_2 + (ln 4d²)/d

Also, F_2(d′) = 4d′² and F_1(d′) = 6 × 5^{d′−1}. Then

∑_{d′=0}^{∞} F_2(d′)F_1(d′)(C_2/λ^s)^{(1−2ε)d′} < 24 ∑_{d′=0}^{∞} d′² 5^{d′}(C_2/λ^s)^{(1−2ε)d′} = 24 ∑_{d′=0}^{∞} e^{−(s(1−2ε) ln λ − 2 ln d′/d′ − ln 5 − (1−2ε) ln C_2)d′} < ∞

if s ln λ > 2/((1−2ε)e) + (ln 5)/(1−2ε) + ln C_2.

Therefore, the system obtains dynamical localization if λ satisfies s ln λ > ln C_2 + 4/e and s ln λ > 2/((1−2ε)e) + (ln 5)/(1−2ε) + ln C_2.

For n-dimensional space we have

|S(x, d)| = ∑_{k=0}^{n} [n!/(k!(n−k)!)] × [(d − k + n − 1)!/((n−1)!(d−k)!)]

We approximate this using Stirling's approximation (i.e., x! ∼ √(2πx)(x/e)^x, taking d → ∞). Using

(d − k + n − 1)! ∼ √(2π(d − k + n − 1)) ((d − k + n − 1)/e)^{d−k+n−1},   (d − k)! ∼ √(2π(d − k)) ((d − k)/e)^{d−k}

we obtain |S(x, d)| ∼ d^{n−1} for d → ∞. Then, for the n-dimensional Euclidean lattice, we need µ > 0, that is,

s ln λ > ln C_2 + (n−1)/e > ln C_2 + (ln d^{n−1})/d

Also,

∑_{d′=0}^{∞} F_2(d′)F_1(d′)(C_2/λ^s)^{(1−2ε)d′} ∼ 2n ∑_{d′=0}^{∞} d′^{n−1}(2n−1)^{d′}(C_2/λ^s)^{(1−2ε)d′} = 2n ∑_{d′=0}^{∞} e^{−(s(1−2ε) ln λ − (n−1) ln d′/d′ − ln(2n−1) − (1−2ε) ln C_2)d′} < ∞

if s ln λ > (n−1)/((1−2ε)e) + ln(2n−1)/(1−2ε) + ln C_2.

Therefore, the system obtains dynamical localization if λ satisfies s ln λ > ln C_2 + (n−1)/e and s ln λ > (n−1)/((1−2ε)e) + ln(2n−1)/(1−2ε) + ln C_2. Clearly, the system needs larger disorder λ to obtain dynamical localization as the number of spatial dimensions n gets higher.

Chapter 5

Open Problems & Discussion

5.1 Level Statistics Conjecture and RMT

One of the most important open problems in random operator theory is to understand the transition between the localized regime and the extended regime. There exist mathematical proofs of the existence of extended states in certain systems [15].
Also, there is an attempt to understand the transition using the level statistics conjecture and random matrix theory (RMT). This method allows physicists to distinguish the two regimes numerically, using the statistical distribution of the eigenvalues of finite volume restrictions. It is expected that the localized regime and the extended regime correspond to Poisson statistics and Gaussian orthogonal ensemble (GOE) statistics of the eigenvalues, respectively.

Some studies have proved mathematically that the finite volume eigenvalues follow a Poisson distribution in the localized regime. Molchanov first proved Poisson statistics of the eigenvalues for a one-dimensional continuum random Schrödinger operator [18]. Subsequently, Minami [17] proved Poisson statistics for the eigenvalues of the Anderson model. He assumed that exponential decay of the fractional moment of the Green's function holds for complex energies near E. He then proved that the random sequence of rescaled finite-volume eigenvalues converges weakly to the stationary Poisson point process as the finite volume grows, and that there is no correlation between eigenvalues near an energy E where Anderson localization is expected.

However, it is still an open problem whether the extended regime can be characterized by GOE statistics. In RMT, GOE statistics are obtained from Wigner random matrices. All elements of a Wigner matrix are random, while only the diagonal matrix elements are random in the Anderson model. Therefore, it has been suggested that random band matrices, which increase the number of off-diagonal random entries, can be a useful tool for understanding the transition between the two regimes [25]. In the next section, we discuss work done by physicists and a possible relation between their results and ours.
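The two spectral statistics invoked by the conjecture can be reproduced with a short numerical experiment on standard ensembles (an illustrative sketch of ours, not from the thesis): uncorrelated levels give the Poisson value of the mean spacing ratio, while a GOE matrix shows level repulsion.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_spacing_ratio(eigs):
    """Mean of r_k = min(s_k, s_{k+1}) / max(s_k, s_{k+1}) over consecutive
    level spacings s_k; known reference values are ~0.39 for Poisson
    statistics and ~0.53 for GOE statistics."""
    s = np.diff(np.sort(eigs))
    return float(np.mean(np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])))

n, trials = 400, 20
r_poisson = r_goe = 0.0
for _ in range(trials):
    # Uncorrelated levels (Poisson statistics): i.i.d. eigenvalues.
    r_poisson += mean_spacing_ratio(rng.uniform(0.0, 1.0, n))
    # GOE statistics: symmetrized Gaussian random matrix.
    M = rng.normal(size=(n, n))
    r_goe += mean_spacing_ratio(np.linalg.eigvalsh((M + M.T) / 2))
r_poisson /= trials
r_goe /= trials
assert r_poisson < 0.42 < 0.50 < r_goe   # level repulsion only in the GOE case
```

The same diagnostic applied to finite-volume Anderson Hamiltonians is what the numerical studies discussed below use to locate the localized and extended regimes.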
5.2 Recent Studies in Physics

Although the study of the Anderson model started in condensed matter physics, recent work shows that the model has applications in the study of quantum walks, owing to the possibility of building quantum computers in condensed matter systems. Quantum walks are powerful tools in many areas such as quantum computing and quantum biology. The tight-binding, single-particle model can be written as

H = ∑_{v≠w} J_{vw}|v⟩⟨w| + ∑_{v∈V} E_v|v⟩⟨v|

where (J_{vw}) is an adjacency matrix. In the Anderson model, E_v is a random potential λω_v. A quantum walk corresponds to the case E_v = 0, and Bloch oscillations correspond to the case where E_v is the Stark energy.

Generally, a quantum walk is expected to have a speedup over the classical random walk due to superposition: we have diffusive motion x² ∼ t in the classical random walk, but ballistic motion x² ∼ t² in the quantum walk. However, these results assume that there is neither disorder in the graph nor decoherence. In reality, it would be impossible to build a perfectly ideal graph. Such disorder or decoherence can be expressed by different connection strengths (weights) on the edges (off-diagonal disorder) or by diagonal disorder in the on-site energies. Diagonal disorder in the on-site energies can cause exponential decay of the motion of a quantum particle due to Anderson localization. For this reason, there have recently been many studies of the Anderson model in quantum computing.

Giraud et al. [11] studied the model of a circular graph with on-site disorder, where each vertex is linked to its two nearest neighbours, with additional shortcut edges between random pairs of vertices (Figure 5.1). This is therefore the one-dimensional Anderson model with extra off-diagonal random elements.
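The effect of such shortcut edges on the number of self-avoiding walks can be seen directly in a toy version of this geometry. An illustrative sketch (ring size, walk length and the number of shortcuts are our choices): adding chords to a cycle can only enlarge the set of self-avoiding walks between two fixed vertices.

```python
import random

def ring_with_shortcuts(n, shortcuts, seed=0):
    """Adjacency lists for an n-cycle, plus `shortcuts` random chords."""
    adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    rng = random.Random(seed)
    for _ in range(shortcuts):
        a, b = rng.sample(range(n), 2)
        adj[a].add(b); adj[b].add(a)
    return adj

def count_saws(adj, x, y, max_len):
    """Number of self-avoiding walks from x to y using at most max_len steps."""
    def extend(path):
        if path[-1] == y:
            return 1
        if len(path) - 1 >= max_len:
            return 0
        return sum(extend(path + [j]) for j in adj[path[-1]] if j not in path)
    return extend([x])

n = 12
plain = count_saws(ring_with_shortcuts(n, 0), 0, n // 2, max_len=8)
wired = count_saws(ring_with_shortcuts(n, 4), 0, n // 2, max_len=8)
assert plain == 2       # on the bare cycle: just the two arcs from 0 to n/2
assert wired >= plain   # chords can only add self-avoiding walks
```

On the bare ring exactly two self-avoiding walks join antipodal vertices (the two arcs); every added chord leaves these intact, so the count can only grow, mirroring the growth of F_1(d) discussed below.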
They studied the level spacing statistics of the Hamiltonian and obtained a GOE distribution for small on-site disorder λ, and a Poisson distribution as λ was made larger. According to the level statistics conjecture, the GOE distribution is expected to represent extended states, while the Poisson distribution represents localized states.

Figure 5.1: The dashed lines describe random shortcut edges which represent the off-diagonal disorder [11].

It may be possible to relate this transition to our Theorem 4.1.1. Adding off-diagonal random entries (shortcut edges between random pairs of vertices) increases the number of self-avoiding walks F_1(d), and at first we have extended states for small on-site disorder. As we increase the on-site disorder λ, it overcomes the number of self-avoiding walks and we obtain localized states.

A small number of self-avoiding walks may correspond to localized states by Theorem 4.1.1, and also to Poisson statistics, since distant regions are uncorrelated and the system produces almost independent eigenvalues with no energy repulsion. On the other hand, a larger number of self-avoiding walks may correspond to delocalized states by Theorem 4.1.1 if λ is not large enough to overcome F_1(d), and also to GOE statistics, since distant regions are correlated, which creates energy level repulsion [6]. When λ overcomes F_1(d), we may have the transition from extended states to localized states.

In our work, the connection between distant regions is reflected in the size of F_1(d): when F_1(d) is large, it is harder to obtain dynamical localization. Also, unlike the diagonal disorder λ, off-diagonal disorder does not always favour localization; it works against localization when it increases the number of self-avoiding walks F_1(d).

Bibliography

[1] Anderson P. W 1958 Absence of Diffusion in Certain Random Lattices. Phys. Rev. 109 1492-1505.
[2] Aizenman M & Molchanov S 1993 Localization at large disorder and at extreme energies: an elementary derivation. Comm. Math. Phys. 157 245-278.

[3] Aizenman M & Graf G. M 1998 Localization bounds for an electron gas. J. Phys. A: Math. Gen. 31 6783-6806.

[4] Avron J. E & Simon B 1981 Transient and Recurrent Spectrum. Journal of Functional Analysis 43 1-31.

[5] Carmona R, Klein A & Martinelli F 1987 Anderson localization for Bernoulli and other singular potentials. Comm. Math. Phys. 108 41-66.

[6] Combes J. M, Germinet F & Klein A 2009 Poisson Statistics for Eigenvalues of Continuum Random Schrödinger Operators. [arXiv:0807.0455]

[7] Cycon H. L, Froese R. G, Kirsch W & Simon B 1987 Schrödinger Operators with Application to Quantum Mechanics and Global Geometry. Texts and Monographs in Physics, Springer.

[8] del Rio R, Jitomirskaya S, Last Y & Simon B 1995 What is localization? Phys. Rev. Lett. 75 117-119.

[9] del Rio R, Jitomirskaya S, Last Y & Simon B 1996 Operators with singular continuous spectrum. IV. Hausdorff dimensions, rank one perturbations, and localization. J. Anal. Math. 69 153-200.

[10] Fröhlich J & Spencer T 1983 Absence of diffusion in the Anderson tight binding model for large disorder or low energy. Comm. Math. Phys. 88 151-184.

[11] Giraud O, Georgeot B & Shepelyansky D. L 2005 Quantum computing of delocalization in small-world networks. Phys. Rev. E 72 036203.

[12] Graf G. M 1994 Anderson localization and the space-time characteristic of continuum states. J. Stat. Phys. 75 337-346.

[13] Hamza E, Joye A & Stolz G 2009 Dynamical localization for unitary Anderson models. Math. Phys. Anal. Geom. 12 381-444.

[14] Hundertmark D 2008 A short introduction to Anderson localization. In Analysis and Stochastics of Growth Processes and Interface Models, Oxford Scholarship Online Monographs, 194-219.

[15] Klein A 1998 Extended States in the Anderson Model on the Bethe Lattice. Advances in Math. 133 163-184.
[16] Kunz H & Souillard B 1981 Sur le spectre des opérateurs aux différences finies aléatoires. Comm. Math. Phys. 78 201-246.

[17] Minami N 1996 Local fluctuation of the spectrum of a multidimensional Anderson tight binding model. Comm. Math. Phys. 177 709-725.

[18] Molchanov S. A 1981 The local structure of the spectrum of the one-dimensional Schrödinger operator. Comm. Math. Phys. 78 429-446.

[19] Reed M & Simon B 1980 Methods of modern mathematical physics. I. Functional analysis, 2nd Edition. Academic Press, New York.

[20] Reed M & Simon B 1978 Methods of modern mathematical physics. IV. Analysis of operators. Academic Press, New York.

[21] Rudin W 1987 Real and Complex Analysis, 3rd Edition. McGraw-Hill, Boston.

[22] Simon B 1990 Absence of ballistic motion. Comm. Math. Phys. 134 209-212.

[23] Spencer T 2008 Anderson localization: phenomenology and mathematics. Lecture in Mathematics and Physics of Anderson Localization: 50 Years After, Isaac Newton Institute for Mathematical Sciences.

[24] Stolz G 2002 Strategies in localization proofs for one-dimensional random Schrödinger operators. Proc. Indian Acad. Sci. (Math. Sci.) 112 229-243.

[25] Stolz G 2010 An introduction to the mathematics of Anderson localization. Lecture notes of the Arizona School of Analysis with Applications.

[26] Tautenhahn M 2011 Localization criteria for Anderson models on locally finite graphs. J. Stat. Phys. 144 60-75.

Appendix A

Lemmas in Chapter 3

Lemma A.0.1. (A priori bound) Let 0 < s < 1. There exist constants C_1(s, ρ), C_2(s, ρ) < ∞ such that

E_x(|G(x, x; z)|^s) = E_x(|⟨x|(H − z)^{−1}|x⟩|^s) ≤ ‖ρ‖_∞^s (2^s s^{−s}/(1 − s)) λ^{−s} = C_1(s, ρ)λ^{−s}

E_{x,y}(|G(x, y; z)|^s) = E_{x,y}(|⟨x|(H − z)^{−1}|y⟩|^s) ≤ ‖ρ‖_∞^s 2^{s+1} (2^s s^{−s}/(1 − s)) λ^{−s} = C_2(s, ρ)λ^{−s}

for all x, y ∈ V with x ≠ y, z ∈ C ∖ R and λ > 0. Here

E_x(…) = ∫ … ρ(ω_x) dω_x   and   E_{x,y}(…) = ∫∫ ⋯ ρ(ω_x) dω_x ρ(ω_y) dω_y

is the conditional expectation with (ω_u)_{u∈V∖{x,y}} fixed.
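A quick numerical check of an a priori bound of this type (using the constant C_1(s, ρ) = ‖ρ‖_∞^s 2^s s^{−s}/(1 − s) as we read it from the garbled display; the density, grid and parameters are illustrative):

```python
s, lam = 0.5, 3.0                        # illustrative: 0 < s < 1, disorder > 0
C1 = (2 ** s) * s ** (-s) / (1 - s)      # our reading of C1 for ||rho||_inf = 1

def frac_integral(beta, n=20000):
    """Midpoint-rule integral of |w - beta|^{-s} rho(w) dw, with rho the
    uniform density on [-1/2, 1/2] (so ||rho||_inf = ||rho||_1 = 1)."""
    return sum(abs(-0.5 + (k + 0.5) / n - beta) ** (-s) for k in range(n)) / n

# E_x(|G(x,x;z)|^s) = lam^{-s} * integral of |omega + a/lam|^{-s} rho(omega) domega;
# we scan real shifts beta, which include the worst case at the centre of supp(rho).
worst = max(frac_integral(b / 10.0) for b in range(-10, 11))
assert worst * lam ** (-s) <= C1 * lam ** (-s)   # a priori bound holds
```

For s = 1/2 the worst-case integral is 2√2 ≈ 2.83, comfortably below C_1 = 4, which is consistent with the bound being uniform in the (complex) energy-dependent shift.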
After averaging over ωx and ωy , the bound does not depend on the remaining random parameters. Proof. The proof is that of [25]. Firstly, we prove the first inequality: E(|G(x, x; z)|s ) ≤ C1 (s, ρ)λ−s We write ω = (ˆ ω , ωx ) where ω ˆ = (ωvu )vu ∈V \{x} for fixed x ∈ V . Then, we can separate the ωx and ω ˆ dependence of H by the orthogonal projection onto the span of δx = |x , Px := δx (δx , ·) = |x x|. H = Hω := Hωˆ + λωx Px Now, we use the resolvent formula: A−1 − Aˆ−1 = Aˆ−1 (Aˆ − A)A−1 Let A = Hω − z, Aˆ = Hωˆ − z, we have  38  Appendix A. Lemmas in Chapter 3 (Hω − z)−1 = (Hωˆ − z)−1 − λωx (Hωˆ − z)−1 |x x|(Hω − z)−1 Then x|(Hω − z)−1 |x = x|(Hωˆ − z)−1 |x − λωx x|(Hωˆ − z)−1 |x x|(Hω − z)−1 |x Therefore G(x, x; z) = Gω (x, x; z) := Gωˆ (x, x; z) − λωx Gωˆ (x, x; z)Gω (x, x; z) Thus Gω (x, x; z) =  1 a+λωx  with a =  1 Gω ˆ (x,x;z)  ω ˆ (x,x;z) This is well defined by the Herglotz property ImGIm > 0 of the z Green function. Note that a is a complex number which is independent from ωx . Then we obtain  Ex (|Gω (x, x; z)|s ) =  1 λs  ρ(ωx )dωx |a/λ+ωx |s  ≤ ρ  s 2s s−s λ−s ∞ 1−s  = C1 (s, ρ)λ−s  with C1 (s, ρ) independent of λ and a, and thus independent of ω ˆ , z and x. The last step used the following fact in [12, 26]. For g : R → R non-negative with g ∈ L∞ (R) ∩ L1 (R) and 0 < s < 1. Then we have for all β ∈ C |ξ − β|−s g(ξ)dξ ≤ g  s ∞  R  g  s −s 1−s 2 s 1 L 1−s  (A.1)  Now, we prove the second inequality for x = y: Ex,y (|Gω (x, y; z)|s ) ≤ C2 (s, ρ)λ−s For fixed x, y ∈ V , write ω = (ˆ ω , ωx , ωy ) where ω ˆ = (ωvu )vu ∈V \{x,y} H = Hω := Hωˆ + λωx |x x| + λωy |y y| where Hωˆ is obtained by setting ωx = ωy = 0. We use the resolvent formula again. A−1 − Aˆ−1 = Aˆ−1 (Aˆ − A)A−1 Let A = Hω − z, Aˆ = Hωˆ − z, we have (Hω − z)−1 = (Hωˆ − z)−1 − λ(Hωˆ − z)−1 (ωx |x x| + ωy |y y|)(Hω − z)−1 39  Appendix A. Lemmas in Chapter 3 Consider P = Px + Py = |x x| + |y y|. 
This is the projection onto twodimensional subspace spanned by |x and |y defining the 2 × 2 matrices: P (Hω − z)−1 P = P (Hωˆ − z)−1 P − λP (Hωˆ − z)−1 P (ωx |x x| + ωy |y y|)P (Hω − z)−1 P ˆ = P (Hωˆ − z)−1 P , then Let B = P (Hω − z)−1 P, B ˆ − λB ˆ ωx 0 B, thus, B=B 0 ωy  ˆ ωx 0 Iˆ + λB 0 ωy  ˆ B=B  Therefore, B=  −1  ˆ ωx 0 Iˆ + λB 0 ωy  ˆ= B  ˆ −1 + λ ωx 0 B 0 ωy  −1  where B=  Gω (x, x) Gω (x, y) ˆ= ,B Gω (y, x) Gω (y, y)  Gωˆ (x, x) Gωˆ (x, y) Gωˆ (y, x) Gωˆ (y, y)  ˆ −1 exists for z ∈ C \ R. B (i.e., (Imz)−1 Im[(Hωˆ − z)−1 ] = (Hωˆ − z¯)−1 (Hωˆ − z)−1 and ˆ −1 )∗ −B ˆ −1 ˆ −1 B 1 (B ˆ −1 ˆ −1 )∗ P Im(Hωˆ −z)−1 P B − Im = = ( B 2i Imz Imz Imz are positive definite.) Then Ex,y (|Gω  (x, y; z)|s )  ˆ −1 + λ ωx 0 B 0 ωy  ≤ Ex,y =  ˆ −1 − Bλ  λ−s Ex,y  ρ 2∞ ≤ λs  r  r  −r  −r  −1 s  ωx 0 − 0 ωy  ˆ −1 B ωx 0 − − 0 ωy λ  −1 s  −1 s  dωx dωy  (A.2) where [−r, r] is an interval containing suppρ. In the first step, we bounded a matrix element by the norm of matrix. We use the change of variables u+ = 12 (ωx + ωy ), u− = 21 (ωx − ωy )  40  Appendix A. Lemmas in Chapter 3 This gives a Jacobian factor of 2. Since (u+ , u−1 ) ∈ [−r, r]2 for (ωx , ωy ) ∈ [−r, r]2 , we have Ex,y (|Gω (x, y; z)|s )  2 ρ 2∞ ≤ λs  r  r  −r  −r  ˆ −1 B −u− 0 − + 0 u− λ  −1 s  − u+ Iˆ  du+ du−  (A.3)  Now, let ˆ −1  M = − Bλ +  −u− 0 0 u−  By Schur’s theorem, M is unitarily equivalent to an upper triangular matrix. We can also assume that ImM ≥ 0 without loss of generality. Thus, M=  m11 m12 0 m22  and 1 m11 −u+  ˆ −1 = (M − u+ I)  0  12 − (m11 −u+m)(m 22 −u+ )  1 m22 −u+  =  m11 m12 0 m22  Then m11 m12 0 m22  2  x y  = |m11 + m12 y|2 + |m22 y|2 ≤ m 211 x2 + m 212 y 2 + 2m 11 m 12 xy + m 222 y 2 ≤ 2m 211 x2 + 2m 212 y 2 + m 222 y 2  ≤ 3m2∗ (x2 + y 2 ) where m∗ = max{|m11 |, |m12 |, |m22 |}. Therefore, m11 m12 0 m22  s  √ ≤ ( 3m∗ )s = 3s/2 max{|m11 |s , |m12 |s , |m22 |s }  41  Appendix A. 
Lemmas in Chapter 3 Then r −r  ˆ −1 s du+ ≤ 3s/2 (M −u+ I) r  ≤ 3s/2 −r  r −r  max  m12 1 |m11 −u+ |s , (m11 −u+ )(m22 −u+ )  1 m12 + s |m11 − u+ | (m11 − u+ )(m22 − u+ )  s  +  1 |m22 − u+ |s  s  1 , |m22 −u du+ s +|  du+ (A.4)  We can bound the second term as follows. m12 |m12 | 1 ≤ = +Imm22 11 m22 ] (m11 − u+ )(m22 − u+ ) |Im[(m11 − u+ )(m22 − u+ )]| u+ Imm11 − Im[m |m12 | |m12 | (A.5) Since we assumed ImM ≥ 0 and the positive matrix ImM =  1 Imm11 2i m12 1 − 2i m ¯ 12 Imm22  has positive determinant, we have det ImM = Imm11 Imm22 −  |m12 |2 4  ≥0  Thus Imm11 + Imm22 m12  2  ≥  1 2Imm11 Imm22 ≥ |m12 |2 2  (A.6)  Substituting (A.6) into (A.5), we have m12 2 ≤ Im m22 ] (m11 − u+ )(m22 − u+ ) u+ − Imm[m+11Im m22 11  (A.7)  Substituting (A.7) into (A.4), we have r −r  ˆ −1 s du+ (M − u+ I) r  ≤ 3s/2 −r       2s  1 + Im[m11 m22 ] |m11 − u+ |s u+ − Im m11 +Imm22  s  +  1  du+ < C(r, s) |m22 − u+ |s (A.8)  42  Appendix A. Lemmas in Chapter 3 Finally, we substitute (A.8) into (A.3) and obtain 2  C (s,ρ)  Ex,y (|Gω (x, y; z)|s ) ≤ 4r λρs ∞ C(r, s) = 2λs The detailed calculation using (A.1) shows that Ex,y (|Gω (x, y; z)|s ) is s s−s bounded by ρ s∞ 2s+1 21−s λ−s , which is discussed in [26]. Generally, we can assume that the upper bound of Ex,y (|Gω (x, y; z)|s ) is larger than that of Ex (|Gω (x, x; z)|s ). Lemma A.0.2. For s < 1 and for any collection of complex numbers {xi } ⊂ C, we have s  n  xi  n  |xi |s  ≤  i=1  i=1  Proof. For n = 1, equality holds. For n = 2, |x1 | |x1 |+|x2 | = (|x1 |+|x 1−s (|x1 |+|x2 |)1−s 2 |) |x2 | s s = |x1 | + |x2 | (|x2 |)1−s  |x1 + x2 |s ≤ (|x1 | + |x2 |)s = ≤  |x1 | (|x1 |)1−s  +  +  |x2 | (|x1 +|x2 |)1−s  The first step used the triangle inequality. In the fourth step, we dropped |x2 | and |x1 | in the first and second denominator respectively. 
s  n−1  If i=1  xi  =  i=1  = ≤  | n−1 i=1  (| |  (|  |xi |s is true, we have  i n−1  s  n  n−1  ≤  xi  s  ≤  xi + xn  i=1 n−1 i=1 xi |  1−s  xi |+|xn |)  n−1 i=1 xi | 1−s n−1 i=1 xi |  )  +  +  i=1  n−1 i=1  (|  1−s  xi |+|xn |) s n−1  =  xi  + |xn |s  i=1 n  |xi |s + |xn |s =  ≤  xi + |xn | i=1 |xn |  |xn | (|xn |)1−s  n−1  s  n−1  |xi |s i=1  Therefore, by induction, we proved the lemma.  43  Appendix B  Lemmas in Chapter 4 Lemma B.0.3. Let 0 < s < 1. There exists a constant C = C (ρ) < ∞ such that ρ(ωx +α) dα |α+w|2  |Imw| · |w|s  ≤C  uniformly in w ∈ C and ωx ∈ suppρ. Proof. The proof is that of [25]. Since |w|s ≤ |α|s + |α + w|s , we bound two expressions: |Imw|  |α|s ρ(ωx +α) dα |α+w|2  ≤ |Imw|  |α|s ρ(ωx +α) ∞ dα |α+w|2 |α|s ρ(ωx + α) ∞ (α+u)1 2 +t2 dα  ≤ |Imw| by setting w = u + it. By changing variable and shift the integral, we have 1 dα α2 +t2  Thus |Imw|  = 2πiRes( α2 1+t2 , it) =  2πi 2it  |α|s ρ(ωx +α) dα |α+w|2  ≤ π |α|s ρ(ωx + α) ∞ ≤ π(|ωx |s ρ ∞ + |λ|s ρ(λ)  ∞)  Now, we bound the second expression. |Imw|  ρ(ωx +α) dα |α+w|2−s  ≤ |Imw| ρ  dα = |α+w|2−s 1 dβ = |tβ+it|2−s  ∞  = |t|2 ρ ∞ by setting w = u + it and α = tβ. Let C = |β+i|1 2−s dβ, then we have |Imw|  ρ(ωx +α) dα |α+w|2−s  1 ≤ min( |Imw| ,C ρ 1−s  |t| ρ ρ  ∞  ∞ s t  s ∞ |Imw| )  1 dα |α+it|2−s 1 dβ |β+i|2−s  ≤ C ||ρ  1−s ∞  44  Appendix B. Lemmas in Chapter 4 Theorem B.0.4. (Lusin’s Theorem) Suppose f is a complex measurable function on X, µ(A) < ∞, f (x) = 0 if x ∈ / A, and > 0. Then there exists a g ∈ Cc (X) such that [21] µ({x : f (x) = g(x)}) < We can arrange it so that sup |g(x)| ≤ sup |f (x)| x∈X  x∈X  Lemma B.0.5. Let g : R → C be a bounded and piecewise continuous function. Then, for all a ∈ R [26] lim  0  π  R  g(E) (a − E)2 +  2  1 dE = ( lim g(λ) + lim g(λ)) λ a 2 λ a  Proof. The proof is that of [26]. Assume g is not continuous at some fixed a ∈ R. (The proof for the case where g is continuous at a is similar and easier.) 
Suppose there is no further discontinuity point at [a − β, a + β] for small β > 0. Define J as follows. g(E) 1 dE − ( lim g(λ) + lim g(λ)) 2 2 0 π R (a − E) + λ a 2 λ a ∞ g(E) − 1/2(limλ a g(λ) + limλ a g(λ)) = lim dE 0π ∞ (a − E)2 + 2 a ∞ g(E) − limλ a g(λ) g(E) − limλ a g(λ) = lim dE + dE 2 2 0π (a − E) + (a − E)2 + 2 −∞ a The second step used Dirac’s delta function: ∞ δ = π (a−E)1 2 + 2 and −∞ δ dE = 1 J = lim  If J = 0, we obtain Lemma. Now, we decompose the integration in J using arbitrary δ ∈ (0, β): a−δ  J = lim  0  π  a  + −∞  a−δ  g(E) − limλ a g(λ) dE + (a − E)2 + 2  ∞  a+δ  + a  a+δ  g(E) − limλ a g(λ) dE (a − E)2 + 2  Let us denote the above four intergrals in the order in which they occur by I1 , I2 , I3 , I4 respectively. Since g is bounded, we have |I1 | ≤ g  a−δ 1 ∞ −∞ (a−E)2 +  2  dE + | limλ  a−δ 1 a g(λ)| −∞ (a−E)2 +  2  dE  45  Appendix B. Lemmas in Chapter 4 =( g  ∞  + | limλ  a g(λ)|)  1  π 2  δ  − arctan  Therefore, lim 0 π |I1 | = 0. Similarly, we have lim 0 π |I4 | = 0 for all δ ∈ (0, β). Now, for I3 we have |I3 | =  a+δ g(E)−limλ a g(λ) dE a (a−E)2 + 2  a+δ  1 λ a (a − E)2 + a E∈(a,a+δ] 1 δ ≤ sup {|g(E)− lim g(λ)|} arctan  ≤  {|g(E)− lim g(λ)|}  sup  λ  E∈(a,a+δ]  a  Similarly, |I2 | =  g(E)−limλ a g(λ) a dE a−δ (a−E)2 + 2  ≤  sup  1 {|g(E)− lim g(λ)|} arctan λ  E∈[a−δ,a)  δ  a  Then, we have for arbitrary δ ∈ (0, β) that |J| ≤  {|g(E) − lim g(λ)|} +  sup  λ  E∈(a,a+δ]  a  sup  {|g(E) − lim g(λ)|} λ  E∈[a−δ,a)  a  Since δ ∈ (0, β) is arbitrary, limλ a g(λ) is the right-hand limit and limλ a g(λ) is the left-hand limit of g in a respectively. Therefore, |J| = 0 and we proved Lemma. Lemma B.0.6. Let H be a self-adjoint operator on a Hilbert space H, χI (H) and χI (H) be the spectral projection onto the open interval I ⊂ R and closed interval (closure of I) I ⊂ R associated to the operator H respectively. Let g : R → C be a bounded continuous function. 
Then [26]
$$\frac12\,\langle\psi,\, g(H)(\chi_I(H)+\chi_{\bar I}(H))\,\phi\rangle = \lim_{\varepsilon\downarrow 0}\frac{\varepsilon}{\pi}\int_I g(E)\,\langle\psi,\,(H-E-i\varepsilon)^{-1}(H-E+i\varepsilon)^{-1}\phi\rangle\,dE. \tag{B.1}$$

Proof. The proof is that of [26]. We prove the case $\psi = \phi$; the case $\psi \neq \phi$ follows by polarization. Let $\{E_\lambda\}_{\lambda\in\mathbb{R}}$ be the spectral family associated to the operator $H$, and let $d\mu_\psi(\lambda) := d\langle\psi, E_\lambda\psi\rangle$ be the corresponding positive and finite Borel measure. Then the right-hand side of (B.1) can be written as
$$R = \lim_{\varepsilon\downarrow 0}\frac{\varepsilon}{\pi}\int_I g(E)\int_{\mathbb{R}}\frac{d\mu_\psi(\lambda)}{(\lambda-E)^2+\varepsilon^2}\,dE = \lim_{\varepsilon\downarrow 0}\int_{\mathbb{R}} f_\varepsilon(\lambda)\,d\mu_\psi(\lambda),$$
where
$$f_\varepsilon(\lambda) = \frac{\varepsilon}{\pi}\int_{\mathbb{R}}\frac{\chi_I(E)\,g(E)}{(\lambda-E)^2+\varepsilon^2}\,dE,$$
by the spectral theorem and Fubini's theorem. By Lemma B.0.5,
$$\lim_{\varepsilon\downarrow 0} f_\varepsilon(\lambda) = \lim_{\varepsilon\downarrow 0}\frac{\varepsilon}{\pi}\int_{\mathbb{R}}\frac{\chi_I(E)\,g(E)}{(\lambda-E)^2+\varepsilon^2}\,dE = \frac{g(\lambda)}{2}\big(\chi_I(\lambda)+\chi_{\bar I}(\lambda)\big)$$
for all $\lambda \in \mathbb{R}$. Let $I = (a,b)$; then for all $\varepsilon > 0$ and $\lambda \in \mathbb{R}$ we also have
$$|f_\varepsilon(\lambda)| \le \|g\|_\infty\,\frac{\varepsilon}{\pi}\int_a^b\frac{dE}{(\lambda-E)^2+\varepsilon^2} = \frac{\|g\|_\infty}{\pi}\Big(\arctan\frac{\lambda-a}{\varepsilon}-\arctan\frac{\lambda-b}{\varepsilon}\Big) \le \|g\|_\infty.$$
By the dominated convergence theorem, we have
$$R = \frac12\int_{\mathbb{R}} g(\lambda)\big(\chi_I(\lambda)+\chi_{\bar I}(\lambda)\big)\,d\mu_\psi(\lambda) = \frac12\,\langle\psi,\, g(H)(\chi_I(H)+\chi_{\bar I}(H))\,\psi\rangle.$$
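Two analytic facts carry most of the weight in these appendices: the subadditivity $|\sum_i x_i|^s \le \sum_i |x_i|^s$ for $0 < s < 1$, and the Poisson-kernel limit of Lemma B.0.5. Both can be sanity-checked numerically. The following Python snippet is an illustration only, not part of the thesis; the step function `g` and the helper `poisson_avg` are ad hoc choices for the check.

```python
import math
import random

# Check 1: subadditivity of t -> t^s for 0 < s < 1 (Appendix A):
# |x_1 + ... + x_n|^s <= |x_1|^s + ... + |x_n|^s for complex x_i.
s = 0.5
random.seed(0)
for _ in range(1000):
    xs = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(5)]
    assert abs(sum(xs)) ** s <= sum(abs(x) ** s for x in xs) + 1e-12

# Check 2: Lemma B.0.5 for a step function with a jump at a = 0:
# (eps/pi) * int g(E)/((a-E)^2 + eps^2) dE -> (g(a-) + g(a+))/2 as eps -> 0.
def g(E):
    return 1.0 if E > 0 else 3.0  # right-hand limit 1, left-hand limit 3

def poisson_avg(a, eps, lo=-50.0, hi=50.0, n=200001):
    """Trapezoidal approximation of (eps/pi) * int_lo^hi g(E)/((a-E)^2+eps^2) dE."""
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        E = lo + i * h
        w = 0.5 if i in (0, n - 1) else 1.0  # trapezoid endpoint weights
        total += w * g(E) / ((a - E) ** 2 + eps ** 2)
    return eps / math.pi * total * h

approx = poisson_avg(0.0, 1e-2)
# Limit predicted by Lemma B.0.5: (3 + 1)/2 = 2; finite eps and the
# discretization near the jump introduce a small error.
assert abs(approx - 2.0) < 0.05
```

As eps shrinks (with a correspondingly finer grid), `poisson_avg` approaches the average of the one-sided limits rather than either limit alone, which is exactly the factor $\tfrac12(\chi_I + \chi_{\bar I})$ appearing in Lemma B.0.6 at the endpoints of $I$.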