Markov random fields, Gibbs states and entropy minimality
Nishant Chandgotia, 2015

Markov Random Fields, Gibbs States and Entropy Minimality

by

Nishant Chandgotia

B.Math., Indian Statistical Institute, 2009
M.Sc. in Mathematics, The University of British Columbia, 2011

A thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Faculty of Graduate and Postdoctoral Studies (Mathematics)

The University of British Columbia (Vancouver)

April 2015

© Nishant Chandgotia, 2015

Abstract

The well-known Hammersley-Clifford Theorem states (under certain conditions) that any Markov random field is a Gibbs state for a nearest neighbour interaction. Following Petersen and Schmidt we utilise the formalism of cocycles for the homoclinic relation and introduce "Markov cocycles", reparametrisations of Markov specifications. We exploit this formalism to deduce the conclusion of the Hammersley-Clifford Theorem for a family of Markov random fields which are outside the theorem's purview (including Markov random fields whose support is the $d$-dimensional "3-coloured chessboard"). On the other extreme, we construct a family of shift-invariant Markov random fields which are not given by any finite range shift-invariant interaction.

The techniques that we use for this problem are further expanded upon to obtain the following results: Given a "four-cycle free" finite undirected graph $H$ without self-loops, consider the corresponding 'vertex' shift, $\mathrm{Hom}(\mathbb{Z}^d, H)$, denoted by $X_H$. We prove that $X_H$ has the pivot property, meaning that for all distinct configurations $x, y \in X_H$ which differ only at finitely many sites there is a sequence of configurations $(x = x^1), x^2, \ldots, (x^n = y) \in X_H$ for which the successive configurations $(x^i, x^{i+1})$ differ exactly at a single site. Further, if $H$ is connected then we prove that $X_H$ is entropy minimal, meaning that every shift space strictly contained in $X_H$ has strictly smaller entropy. The proofs of these seemingly disparate statements are related by the use of the 'lifts' of the configurations in $X_H$ to their universal cover and the introduction of 'height functions' in this context.

Further, we generalise the Hammersley-Clifford theorem with the added condition that the underlying graph is bipartite. Taking inspiration from Brightwell and Winkler, we introduce a notion of folding for configuration spaces called strong config-folding to prove that if all Markov random fields supported on $X$ are Gibbs with some nearest neighbour interaction, so are the Markov random fields supported on the "strong config-folds" and "strong config-unfolds" of $X$.

Preface

This thesis is a combination of three manuscripts: [13], [10] and [11]. The broad direction of study was suggested by Prof. Brian Marcus. Chapter 2 is based on the manuscript [13] and is joint with Dr. Tom Meyerovitch. Given the nature of this work, it is impossible to separate the individual contributions for this chapter. Chapter 3 is based on the manuscript [11]. Chapter 4 is based on the manuscript [10].

Table of Contents

Abstract
Preface
Table of Contents
List of Figures
Acknowledgments
Dedication
1 Introduction
1.1 Markov Random Fields and Gibbs States
1.2 A Generalisation of the Hammersley-Clifford Theorem for Bipartite Graphs
1.3 Pivot Property
1.4 Entropy Minimality
2 Markov Random Fields, Markov Cocycles and the 3-Coloured Chessboard
2.1 Background and Notation
2.1.1 Markov Random Fields and Topological Markov Fields
2.1.2 The Homoclinic Equivalence Relation of a Topological Markov Field and Adapted MRFs
2.1.3 Zd-Shift Spaces and Shifts of Finite Type
2.1.4 Gibbs States with Nearest Neighbour Interactions
2.1.5 Invariant Spaces, Measures and Interactions
2.2 Markov Specifications and Markov Cocycles
2.2.1 The "Safe Symbol Property" and the Hammersley-Clifford Theorem
2.2.2 The Pivot Property
2.3 Zr-Homomorphisms, 3-Coloured Chessboards and Height Functions
2.4 Markov Cocycles on Xr
2.5 Markov Random Fields on Xr Are Gibbs
2.5.1 Real Valued Cocycles for Measure-Preserving Zd-Actions
2.5.2 Maximal Height Functions
2.6 Non-Frozen Adapted Shift-Invariant Markov Random Fields on Xr Are Fully-Supported
2.7 Fully Supported Shift-Invariant Gibbs Measures on Xr
2.8 "Strongly" Non-Gibbsian Shift-Invariant Markov Random Fields
3 Generalisation of the Hammersley-Clifford Theorem on Bipartite Graphs
3.1 N.N. Constraint Spaces
3.2 Hammersley-Clifford Spaces and Strong Config-Folds
3.2.1 Hammersley-Clifford Spaces
3.2.2 Markov-Similar and V-Good Pairs
3.2.3 Strong Config-Folding
3.3 The Main Results
3.3.1 A Concrete Example
3.3.2 Proof of Theorems 3.3.1 and 3.3.2
4 Four-Cycle Free Graphs, the Pivot Property and Entropy Minimality
4.1 Hom-Shifts
4.2 Thermodynamic Formalism
4.3 Folding, Entropy Minimality and the Pivot Property
4.4 Universal Covers
4.5 Generalised Height Functions and Sub-Cocycles
4.6 Proofs of the Main Theorems
5 Further Directions
5.1 Markov Random Fields and Gibbs States with Nearest Neighbour Interactions
5.1.1 Supports of Markov Random Fields
5.1.2 Algorithmic Aspects
5.1.3 Markov Random Fields for other Models
5.1.4 Changing the Underlying Graph
5.1.5 Mixing Properties of Subshifts and the Dimension of the Invariant Markov Cocycles
5.1.6 Identifying Hammersley-Clifford Spaces
5.2 Hom-shifts, Entropy Minimality and the Pivot Property
5.2.1 Identification of Hom-Shifts
5.2.2 Hom-Shifts and Strong Irreducibility
5.2.3 Hom-Shifts and Entropy Minimality
5.2.4 Hom-Shifts and the Pivot Property
Bibliography

List of Figures

Figure 1.1 A, ∂A and B
Figure 1.2 The Graph for the Hard Square Model
Figure 1.3 A Dismantlable Graph
Figure 2.1 The Alphabet A
Figure 2.2 A 3-Square-Island
Figure 2.3 A 'Generic' Configuration
Figure 3.1 C4 and C3
Figure 3.2 H
Figure 3.3 H′
Figure 3.4 A Graph which Folds to the Edge Graph
Figure 3.5 H9,2
Figure 3.6 Graphs H and H′
Figure 4.1 Graph for the Hard Square Shift
Figure 5.1 Frozen Pattern

Acknowledgments

I savour the sweet honey taste,
Yet I do not feel the spring gale,
See the bright flower
Or hear the busy bee.

A piece of work is a product of its milieu, but the contribution of the milieu often remains hidden even to the discerning eye. I am not the centre of this work but a mere representative; it is therefore imperative that I acknowledge, as far as my vision extends and this restricted space allows, the many faces behind this thesis.

I would like to acknowledge the great ergodic theory group I have had here at UBC.
The facilitator of this group, Brian, is my advisor; I cannot thank him enough for supporting and nurturing me; he brought me up from the ebb of my confidence to being someone aware of his abilities. His gift of simplicity and humility is something I have always cherished and hope to attain with time. I was introduced to symbolic dynamics by Ronnie. Through the course of my studies he has been very helpful, giving me questions to think about, suggesting problems, suggesting papers to read and critiquing my presentations. Discussions and collaborations with Tom have been very fruitful; I have learnt and continue to learn immensely from him. Felipe and Raimundo have been great company through the course of my PhD; whether it be travel guidance in Chile and Mexico, being part of the Great Annex Orchestra, deep philosophical discussions or visiting some interesting gun shops, they have always been there with me. The warmth of this group has kept the Vancouverite cold far away; I can't picture how life would be otherwise. I would also like to acknowledge the steady stream of visitors which helped create an intellectually stimulating atmosphere: numerous conversations with Prof. Klaus Schmidt, Prof. Mike Boyle and Prof. Michael Schraudner have helped me broaden my viewpoint significantly. The active probability group here, and discussions with Prof. Omer Angel and Prof. David Brydges, have also been very helpful. I would also like to thank Prof. Lior Silberman, Prof. Peter Winkler and Prof. Anthony Quas for many useful discussions. Additionally, I would like to thank Prof. Brian Marcus, Prof. Andrew Rechnitzer, Prof. Lior Silberman, Prof. Ed Perkins, Prof. Will Evans and Prof. Klaus Schmidt for being part of my examination committee and giving many useful suggestions. I was funded by the Four-Year Fellowship and the International Tuition Award at the University of British Columbia.

I thank my parents for the privilege of a happy childhood and for giving me opportunities which were not always within their reach. Rinku bhaiya, Bhabhi, Tini and Mini di, Ashu bhaiya, Uncle and Aunty, and Harshu have all in their little ways been a constant source of support. Through the course of my thesis I have missed them the most. Akash, Gudia and Sonu have always held my back and supported my crazy dreams; their humour has kept my spirit up all my life.

I am also lucky to have been endowed with a great group of friends here in Vancouver. Vasu and Gourab have been the nearest and dearest. They willingly bore my nonsense all these years and continue to support me in my endeavours. I cannot forget my first few days at UBC; I was low and life seemed very hard. It was Vasu who helped me stay afloat and took me away from the sadness and misery. Terry has been like an elder brother to me, taking me for dinners, drafting my letters and extricating me from mires whenever I was stuck. His loud laughter still rings through the annex. Lalitha has been a very dear friend of mine; among many other things she has brought me closer to my country and culture, closer to myself. She continues to be a patient listener to my qualms and poetry, despite the distance. I hope that our endless conversation remains endless. Ankur, my current roommate, has been a blessing as well. A great friend, he puts his life on the line for my crazy streaks for hiking and driving each and every time. Our mutual love for good food, music and philosophy has made my stay in Vancouver very pleasant.
Nithya, Swetha, Sneha, Pawan and Vignesh formed for me a great group of friends; our poker games and drinking sessions were a treat despite my sobriety. I can only wish for more of our craziness; it pains me to see that we must now move away, looking for another life at another place. Ajith and Monika have been my dear neighbours, excellent hosts, great friends and fine mentors; our chai sessions and discussions of Indian politics have always been very informative, the cakes quite delicious and the company impeccable. In a short period of time, I have found a great friend in Rajarshi. He allows me to rumble around without much grumble and only minor voices of protest. The depth of his thinking has always been very inspiring. My previous roommate and officemate Marc has always been a good friend; his zest for mathematics, food and life in general was contagious and has truly caught on to me. I am glad that despite my fallacies Anujit has always been a great friend and a good roommate; I hope someday to make up for my misgivings. Subhajit, my dear junior, continues to torment me whenever he gets the opportunity; the boiling blood has always been very entertaining! I would also like to thank the meek-mannered Tatchai for numerous discussions in ergodic theory and for being a great friend (and for the many dinners and snacks). I would also like to thank Navid, Myrto, Amir and Haider for being around and helping in a multitude of ways.

Lastly, I would like to thank the beautiful city of Vancouver. Most of the ideas in this thesis come from its beautiful summer months, which were spent climbing its hills and watching the sea roll by.

Dedication

I dedicate this thesis to my parents, to my country, India, and to the beautiful world of mathematics.

Chapter 1

Introduction

This thesis can be divided broadly into two areas of study: identifying conditions on the support of Markov random fields under which they are Gibbs for some nearest neighbour interaction, and proving that a certain class of shift spaces is entropy minimal. In the introduction we will briefly discuss the results obtained in these areas (highlighted as theorems) during the course of my PhD and their interrelations. Further details and formal definitions can be found in the subsequent chapters.

$G = (V, E)$ will always refer to a locally finite countable undirected graph without multiple edges and self-loops, and $H$ will always refer to an undirected graph without multiple edges. By $\mathrm{Hom}(G, H)$ we will denote the space of graph homomorphisms from $G$ to $H$. Let $\vec 0$ denote the origin and $\vec e_i$ denote the $i$th coordinate vector of $\mathbb{Z}^d$.

1.1 Markov Random Fields and Gibbs States

The boundary (or the external vertex boundary) of a set of vertices $F \subset V$, denoted by $\partial F$, is the set of vertices outside $F$ which are adjacent to $F$:
\[
\partial F := \{v \in V \setminus F \mid \exists\, w \in F \text{ such that } (v, w) \in E\}.
\]
Given a finite set $A$, the space $A^{V}$ is a compact topological space with respect to the product topology, where the topology on $A$ is discrete. For $F \subset V$ finite and $a \in A^F$, we denote by $[a]_F$ the cylinder set
\[
[a]_F := \{x \in A^{V} \mid x|_F = a\}.
\]
For $x \in A^{V}$ we use the notation $[x]_F$ for $[x|_F]_F$.
The collection of cylinder sets generates the Borel $\sigma$-algebra on $A^{V}$.

A Markov random field (MRF) is a Borel probability measure $\mu$ on $A^{V}$ with the property that for all finite $A, B \subset V$ such that $\partial A \subset B \subset A^c$ (as illustrated in Figure 1.1) and $a \in A^A$, $b \in A^B$ satisfying $\mu([b]_B) > 0$,
\[
\mu\big([a]_A \,\big|\, [b]_B\big) = \mu\big([a]_A \,\big|\, [b|_{\partial A}]_{\partial A}\big).
\]

Figure 1.1: A, ∂A and B (legend: elements of A; elements of B; elements of the boundary of A).

Given an MRF $\mu$, the space of conditional probabilities or the "specification" is the system of conditional probabilities of the type $\mu([\,\cdot\,]_A \mid [x]_{\partial A})$ for all finite sets $A \subset V$ and $x \in \mathrm{supp}(\mu)$. In general, infinitely many parameters may be required to describe a specification, even if the graph $G$ is $\mathbb{Z}^d$ and the measure is shift-invariant.

A closed configuration space is a closed subset $X \subset A^{V}$. In most cases $X$ will be the (topological) support of some MRF $\mu$, denoted by $\mathrm{supp}(\mu)$. Let us consider some examples:

1. $r$-colourings of $G$: Let $K_r$ denote the complete graph (without self-loops) with vertices $1, 2, \ldots, r$. Then $X = \mathrm{Hom}(G, K_r)$ denotes the set of all $r$-colourings of $G$.

2. Homomorphisms to the $n$-cycle: Let $d \geq 2$ and $C_n$ denote the $n$-cycle with vertices $0, 1, 2, \ldots, n-1$. Then $X = \mathrm{Hom}(\mathbb{Z}^d, C_n)$ will form an important class of closed configuration spaces for this thesis.

If $G = \mathbb{Z}^2$, $r = 3$ and $n = 3$ then $\mathrm{Hom}(G, K_r)$ and $\mathrm{Hom}(G, C_n)$ coincide and are also known as the "3-coloured chessboard". Given a graph $H$ and $d \in \mathbb{N}$, let $X^d_H = \mathrm{Hom}(\mathbb{Z}^d, H)$.

For all $W \subset V$ let
\[
\mathcal{L}_W(X) := \{w \in A^W \mid \text{there exists } x \in X \text{ such that } x|_W = w\}.
\]
The language of $X \subset A^{V}$, denoted by $\mathcal{L}(X)$, is defined as the set of all finite patterns which occur in the elements of $X$:
\[
\mathcal{L}(X) := \bigcup_{W \subset V \text{ finite}} \mathcal{L}_W(X).
\]

A finite range interaction is a function $\phi : \mathcal{L}(X) \longrightarrow \mathbb{R}$ such that for some $r \in \mathbb{N}$, $\phi(a) = 0$ for all $a \in \mathcal{L}_A(X)$ whenever $\mathrm{diam}(A) > r$. A nearest neighbour interaction on $X$ is a finite-range interaction where $r = 2$. When $G = \mathbb{Z}^d$, an interaction $\phi$ is shift-invariant if for all $\vec n \in \mathbb{Z}^d$ and $a \in \mathcal{L}(X)$, $\phi(a) = \phi(\sigma^{\vec n}(a))$. Since the standard Cayley graph of $\mathbb{Z}^d$ has no triangles, a shift-invariant nearest neighbour interaction is uniquely determined by its values on patterns on $\{\vec 0\}$ ("single site potentials") and on patterns on pairs $\{\vec 0, \vec e_i\}$, where $i = 1, \ldots, d$ ("edge interactions").

A Gibbs state with a nearest neighbour interaction $\phi$ is an MRF $\mu$ such that for all $x \in \mathrm{supp}(\mu)$ and $A, B \subset V$ finite satisfying $\partial A \subset B \subset A^c$,
\[
\mu\big([x]_A \,\big|\, [x]_B\big) = \frac{\prod_{C \subset A \cup \partial A} e^{\phi(x|_C)}}{Z_{A, x|_{\partial A}}}
\]
where $Z_{A, x|_{\partial A}}$ is the uniquely determined normalising factor so that $\mu\big(X \,\big|\, [x]_{\partial A}\big) = 1$ for all $x \in \mathrm{supp}(\mu)$.

Some examples:

1. (Ising Model) Fix some $J, E \in \mathbb{R}$, let $X = \{1, -1\}^{\mathbb{Z}^d}$ and let $\phi$ be the shift-invariant nearest neighbour interaction given by
\[
\phi([m, n]_{\vec 0, \vec e_i}) = Jmn, \qquad \phi([m]_{\vec 0}) = Em
\]
for all $m, n \in \{-1, 1\}$ and $1 \leq i \leq d$. The Ising model with "interaction" $J$ and external field $E$ is a Gibbs state supported on $X$ with interaction $\phi$.

2. (Hard Square Model) Fix some $\lambda \in \mathbb{R}$. The hard square model is the set $X \subset \{0, 1\}^{\mathbb{Z}^d}$ which consists of configurations such that adjacent symbols cannot both be 1. Let $\phi$ be an interaction on $X$ such that
\[
\phi([0, 0]_{\vec 0, \vec e_i}) = \phi([0, 1]_{\vec 0, \vec e_i}) = \phi([1, 0]_{\vec 0, \vec e_i}) = 0, \qquad \phi([0]_{\vec 0}) = 0 \ \text{ and } \ \phi([1]_{\vec 0}) = \lambda
\]
for all $1 \leq i \leq d$.
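To make this interaction concrete, here is a minimal computational sketch, not taken from the thesis: it enumerates the hard square patterns on a small region $A \subset \mathbb{Z}^2$ compatible with a fixed boundary condition and computes their Gibbs weights under $\phi$, namely $e^{\lambda \cdot (\text{number of ones})}$ normalised by the factor $Z_{A, x|_{\partial A}}$. The region, the boundary pattern and the value $\lambda = 1.0$ are arbitrary choices made only for illustration.

```python
from itertools import product
from math import exp

lam = 1.0                                                  # arbitrary choice of lambda
A = [(0, 0), (0, 1), (1, 0), (1, 1)]                       # a 2 x 2 region A of Z^2
boundary = {(-1, 0): 0, (-1, 1): 0, (0, -1): 0, (1, -1): 0,
            (2, 0): 1, (2, 1): 0, (0, 2): 0, (1, 2): 0}    # a fixed pattern on the boundary of A

def admissible(cfg):
    # hard square constraint: no two adjacent sites both carry the symbol 1
    return all(not (cfg[p] == 1 and cfg.get((p[0] + dx, p[1] + dy)) == 1)
               for p in cfg for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)])

patterns, weights = [], []
for values in product([0, 1], repeat=len(A)):
    cfg = dict(boundary)
    cfg.update(zip(A, values))
    if admissible(cfg):
        patterns.append(values)
        weights.append(exp(lam * sum(cfg.values())))       # e^(lambda * number of ones)

Z = sum(weights)                                           # the normalising factor Z_{A, x|dA}
for pattern, w in zip(patterns, weights):
    print(pattern, round(w / Z, 4))
```

Changing the boundary pattern or $\lambda$ changes only the weights, not the structure of the computation; the point is that the conditional distribution on $A$ depends on the configuration outside $A$ only through the boundary values.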
If $\mu$ is a Gibbs state supported on $X$ with the shift-invariant nearest neighbour interaction $\phi$ then the specification takes a particularly nice form: for all $x \in X$ we get
\[
\mu\big([x]_A \,\big|\, [x]_B\big) = \frac{\prod_{C \subset A \cup \partial A} e^{\phi(x|_C)}}{Z_{A, x|_{\partial A}}} = \frac{e^{\lambda \cdot (\text{number of ones in } x|_{A \cup \partial A})}}{Z_{A, x|_{\partial A}}}
\]
where $Z_{A, x|_{\partial A}}$ is the normalising factor.

Figure 1.2: The Graph for the Hard Square Model (vertices 0 and 1).

Note that the hard square model is $X^d_H$ where $H$ is the graph given by Figure 1.2.

If $G = \mathbb{Z}^d$, the specification of a Gibbs state with some shift-invariant nearest neighbour interaction is completely determined by the interaction and therefore by finitely many parameters. We want to address the question: under what conditions on the support is every Markov random field a Gibbs measure for some nearest neighbour interaction?

The well-known Hammersley-Clifford theorem (Theorem 2.2.2) [3, 9, 23, 24, 54] gives one such condition, a positivity assumption on the MRF given by the presence of a safe symbol in the support, which we explain next:

A closed configuration space $X \subset A^{V}$ is said to have a safe symbol $\star \in A$ if for all $A \subset V$ and $x \in X$ the configuration $y$ given by
\[
y_n = \begin{cases} x_n & \text{if } n \in A \\ \star & \text{if } n \notin A \end{cases}
\]
is an element of $X$.

It is not hard to see that a space of configurations $X^d_H$ has a safe symbol $\star$ if and only if $\star$ is adjacent to all the vertices of $H$. For instance, the symbol 0 is a safe symbol for the hard square model, but $X^d_{K_r}$ and $X^d_{C_n}$ do not have a safe symbol for $r \geq 2$ and $n \geq 1$. The Hammersley-Clifford theorem does not apply to Markov random fields whose support is $X^d_{K_r}$ or $X^d_{C_n}$.

We will prove:

Theorem 1.1.1. [13] Let $n \neq 1, 4$ and $d \geq 2$. If the support of a shift-invariant MRF is $X^d_{C_n}$ then it is a Gibbs state for some shift-invariant nearest neighbour interaction.

On the other hand,

Theorem 1.1.2. [13] For $G = \mathbb{Z}^2$, there exists a shift-invariant MRF which is not Gibbs for any shift-invariant finite range interaction.

It was proved in [9, 12] that if the underlying graph is $G = \mathbb{Z}$, then every shift-invariant MRF is a Gibbs state for some shift-invariant nearest neighbour interaction. Previously the same conclusion was obtained under certain mixing conditions with a more general alphabet (Theorems 10.25 and 10.35 in [22]); however, there are MRFs which are not Gibbs states for any nearest neighbour interaction: when the alphabet is countable (Theorem 10.33 in [22]) and when the measure is not shift-invariant [16]. When $G$ is finite, such MRFs were constructed in [34]. On the other hand, there are algebraic conditions on the support [21] and conditions on the graph [28] which guarantee the conclusions of the Hammersley-Clifford Theorem.

Figure 1.3: A Dismantlable Graph (vertices s, t, u, v, w).

MRFs and Gibbs states with nearest neighbour interactions will be defined in Subsections 2.1.1 and 2.1.4. The proofs of Theorems 1.1.1 and 1.1.2 will be given in Sections 2.5 and 2.8 respectively.

1.2 A Generalisation of the Hammersley-Clifford Theorem for Bipartite Graphs

In the following, $v \sim_H w$ denotes that $(v, w)$ is an edge in $H$. Also, $H$ will denote both the graph and its set of vertices.

The following notions were introduced in [37] to classify cop-win graphs: Given a graph $H$, we say that a vertex $v \in H$ folds to $w \in H$ if $u \sim_H v$ implies that $u \sim_H w$. The graph $H \setminus \{v\}$ is then called a fold of $H$. A graph $H$ is called dismantlable if there exists a sequence of folds $H, H_1, \ldots, H_n$ such that $H_n$ is a single vertex (with or without a loop).

For instance, if $n \neq 4$ and $H = C_n$ then $i \sim_{C_n} i+1, i-1 \pmod n$ for all $0 \leq i \leq n-1$, and the neighbourhood $\{i-1, i+1\}$ of a vertex $i$ is contained in the neighbourhood of no other vertex, proving that $C_n$ cannot be folded. We had remarked that for a graph $H$, $X^d_H$ has a safe symbol $\star$ if and only if $\star \sim_H v$ for all $v \in H$.
Thus all vertices in such a graph $H$ can be folded to $\star$, and so $H$ is dismantlable. In particular, 0 is a safe symbol for the graph $H$ in Figure 1.2. However, there are many examples of dismantlable graphs $H$ for which $X^d_H$ does not have a safe symbol; for example, the graph $H$ in Figure 1.3 can be folded to a single vertex by the sequence: $s$ folds to $t$, $v$ folds to $w$, $u$ folds to $w$ and $t$ folds to $w$. Thus $H$ is dismantlable, but there is no vertex in $H$ which is adjacent to all its other vertices; $X^d_H$ does not have a safe symbol.

We will prove:

Theorem 1.2.1. [11] Let $G$ be a bipartite graph and let $H$ be a dismantlable graph. Then any Markov random field on $\mathrm{Hom}(G, H)$ is a Gibbs state for some nearest neighbour interaction. Further, if $G = \mathbb{Z}^d$ and the Markov random field is shift-invariant then the corresponding interaction can be chosen to be shift-invariant as well.

In fact we prove a more general result for configuration spaces which cannot be represented as the space of graph homomorphisms, generalising the Hammersley-Clifford theorem when the underlying graph $G$ is bipartite. The details are part of Chapter 3; for the statement of results in this direction look at Theorems 3.3.1, 3.3.2 and Corollary 3.3.8.

1.3 Pivot Property

For the study of Markov random fields we introduce the following combinatorial property of closed configuration spaces:

A closed configuration space $X$ is said to have the pivot property if for all $(x, y) \in \Delta_X$ (the pairs of configurations in $X$ which differ at only finitely many sites) with $x \neq y$ there exists a finite sequence $x^{(1)} = x, x^{(2)}, \ldots, x^{(k)} = y \in X$ such that each pair $(x^{(i)}, x^{(i+1)})$ differs exactly at a single site.

The pivot property is useful for the study of Markov random fields for the following reason: let $G = \mathbb{Z}^d$. If $\mu$ is a shift-invariant Markov random field such that $\mathrm{supp}(\mu)$ has the pivot property, then for all $x, y \in \mathrm{supp}(\mu)$ which differ at finitely many sites there exists a finite sequence $x^{(1)} = x, x^{(2)}, \ldots, x^{(k)} = y \in X$ such that each pair $(x^{(i)}, x^{(i+1)})$ differs exactly at a single site $\vec n_i \in \mathbb{Z}^d$. Let $F$ be a set of sites such that $x|_{F^c} = y|_{F^c}$ and $\vec n_i \in F$ for all $i$. Then
\[
\frac{\mu([x]_F \mid [x]_{\partial F})}{\mu([y]_F \mid [y]_{\partial F})}
= \prod_{i=1}^{k-1} \frac{\mu([x^{(i)}]_F \mid [x^{(i)}]_{\partial F})}{\mu([x^{(i+1)}]_F \mid [x^{(i+1)}]_{\partial F})}
= \prod_{i=1}^{k-1} \frac{\mu([x^{(i)}]_{\vec n_i} \mid [x^{(i)}]_{\partial\{\vec n_i\}})}{\mu([x^{(i+1)}]_{\vec n_i} \mid [x^{(i+1)}]_{\partial\{\vec n_i\}})}.
\]
Since $\mu$ is shift-invariant, these equations imply that the entire space of conditional probabilities is completely determined by finitely many parameters, viz. $\mu([x]_{\vec 0} \mid [x]_{\partial\{\vec 0\}})$ for $x \in \mathrm{supp}(\mu)$.

Examples of configuration spaces with the pivot property:

1. Any closed configuration space with a safe symbol.
2. $X^d_{K_r}$ if $r \geq 2d + 2$ (Proposition 2.2.5) or $r = 3$ (Proposition 2.3.4).
3. $X^d_H$ if $H$ is a dismantlable graph [6].

On the other hand, it is not true that all homomorphism spaces have the pivot property: it was observed by Brian Marcus that $X^2_{K_4}$ and $X^2_{K_5}$ do not have the pivot property. We will show that $X^2_{K_4}$ does not have the pivot property in Subsection 5.2.4.

We will prove:

Theorem 1.3.1. [13] Let $n \neq 1, 4$ and $d \geq 2$. Then $X^d_{C_n}$ has the pivot property.

Theorem 1.3.1 will be further generalised: a finite graph $H$ is said to be four-cycle free if it has no self-loops and $C_4$ is not a subgraph of $H$.

We will prove:

Theorem 1.3.2. [10] Let $d \geq 2$. For all four-cycle free graphs $H$, $X^d_H$ has the pivot property.

The pivot property will be defined and further examples will be given in Section 2.2.2.
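As an aside, the pivot property lends itself to direct experimentation on finite regions. The following sketch is not from the thesis: it treats proper 3-colourings of a finite box in $\mathbb{Z}^2$ with a fixed boundary as a finite stand-in for configurations in $X^2_{K_3}$ that differ at finitely many sites, and uses breadth-first search over single-site recolourings to find a chain of pivot moves between two such colourings. The box size, the parity colouring and the two recoloured sites are hypothetical choices made only for illustration.

```python
from collections import deque
from itertools import product

N = 5                                        # an N x N box in Z^2; boundary sites stay fixed
SITES = list(product(range(N), repeat=2))
INTERIOR = [(i, j) for (i, j) in SITES if 0 < i < N - 1 and 0 < j < N - 1]

def ok_at(cfg, site, colour):
    # a pivot at `site` is allowed if the new colour differs from all neighbouring colours
    i, j = site
    return all(cfg.get(p) != colour for p in [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)])

def pivot_chain(x, y, colours=3):
    """Breadth-first search for a chain of single-site pivot moves from x to y,
    keeping the boundary of the box fixed; returns the chain or None."""
    freeze = lambda cfg: tuple(cfg[s] for s in SITES)
    goal = freeze(y)
    parent = {freeze(x): None}
    queue = deque([x])
    while queue:
        cfg = queue.popleft()
        key = freeze(cfg)
        if key == goal:                      # reconstruct the chain of configurations
            chain = []
            while key is not None:
                chain.append(key)
                key = parent[key]
            return chain[::-1]
        for site in INTERIOR:
            for colour in range(colours):
                if colour != cfg[site] and ok_at(cfg, site, colour):
                    new = dict(cfg)
                    new[site] = colour
                    new_key = freeze(new)
                    if new_key not in parent:
                        parent[new_key] = key
                        queue.append(new)
    return None

# x is the parity colouring of the box; y recolours two interior sites to the third colour.
x = {(i, j): (i + j) % 2 for (i, j) in SITES}
y = dict(x)
y[(1, 1)], y[(3, 3)] = 2, 2
chain = pivot_chain(x, y)
print("connected by", len(chain) - 1, "pivot moves")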
Theorems 1.3.1 and 1.3.2 will be restated as Proposition 2.3.4 and Theorem 4.1.4 respectively.

1.4 Entropy Minimality

While the proof of Theorem 1.2.1 is more combinatorial in nature, the proof of Theorem 1.1.1 rests on some ergodic theory considerations which we will now touch upon.

Fix $d \geq 2$ and let $B_n = [-n, n]^d \cap \mathbb{Z}^d$. A shift space is a closed configuration space $X \subset A^{\mathbb{Z}^d}$ which is shift-invariant. If $X = X^d_H$ for some $H$ then $X$ is called a hom-shift. The topological entropy of a shift space $X$ is
\[
h_{top}(X) := \lim_{n \longrightarrow \infty} \frac{1}{|B_n|} \log |\mathcal{L}_{B_n}(X)|.
\]
It measures the growth rate of the number of allowed patterns in $X$. It is easy to see that if $Y \subset X$ then $h_{top}(Y) \leq h_{top}(X)$. As in [14], a shift space $X$ is called entropy minimal if for any shift space $Y \subsetneq X$, $h_{top}(Y) < h_{top}(X)$; in other words, exclusion of any pattern in $X$ causes a drop in its entropy.

There has been much work trying to establish some relationship between "mixing conditions" on the shift space and entropy minimality: for $d = 1$, all irreducible shifts of finite type are entropy minimal (Theorem 4.4.7 in [29]). Not much is known in higher dimensions; it is known that some strong mixing conditions like uniform filling imply entropy minimality [51], but weaker conditions like corner gluing do not [4]. There has been some recent work [52] which gives a condition equivalent to entropy minimality for shifts of finite type.

Our results in Chapter 2 imply that $X^d_{C_n}$ is entropy minimal for $n \neq 4$. For proving this we will use the associated 'height functions': a height function is a function $h : \mathbb{Z}^d \longrightarrow \mathbb{Z}$ such that for adjacent vertices $\vec i, \vec j \in \mathbb{Z}^d$, $|h(\vec i) - h(\vec j)| = 1$. Let $Ht_d$ denote the set of height functions on $\mathbb{Z}^d$. We will prove that the map $h \in Ht_d \longmapsto h \bmod n \in X^d_{C_n}$ is surjective when $n \neq 4$ (for the case $n = 3$, also look at [48]) and that, given any ergodic MRF $\mu$ (with some technical assumptions) on $X^d_{C_n}$, by the ergodic theorem there exists a notion of slope (average rate of increase of height) for every direction; if the slope is maximal in some direction then the support of $\mu$ is frozen, otherwise it is fully supported. From this, standard results in thermodynamic formalism (the Lanford-Ruelle Theorem [47]) imply that $X^d_{C_n}$ is entropy minimal. Have a look at Proposition 2.6.1 and Lemma 2.7.2.

These ideas will be further extended to prove:

Theorem 1.4.1. [10] Let $H$ be a four-cycle free connected graph. Then $X^d_H$ is entropy minimal.

In the following, by the neighbourhood of a vertex $v \in H$ we will mean the set of all vertices adjacent to $v$. As in algebraic topology, there is a notion of covering spaces for graphs: $C$ is a covering space of $H$ if there is a graph homomorphism (called the covering map) $f : C \longrightarrow H$ such that for every vertex $v \in H$ the preimage of the neighbourhood $N$ of $v$ in $H$ is a disjoint union of a constant number of neighbourhoods in $C$ isomorphic to $N$ via $f$. Given a graph $H$, its universal cover (denoted by $E_H$) is the unique covering space which is a tree (look for instance in [1]). Denote the corresponding covering map by $\pi$.

For example, the universal cover of $C_n$ is $\mathbb{Z}$ and the corresponding graph homomorphism $\pi : \mathbb{Z} \longrightarrow C_n$ is the map $\pi(r) = r \bmod n$. On the other hand, the universal cover of a tree is the tree itself.

It is easy to see that for the induced map on $X^d_{E_H}$ (also denoted by $\pi$) we have $\pi(X^d_{E_H}) \subset X^d_H$; we will prove in Proposition 4.4.2 that the map is surjective. Also, the preimage of a configuration in $X^d_H$ is unique if we fix the lift at a single vertex. Associate to every configuration $x \in X_H$ a configuration $\tilde x \in X^d_{E_H}$ such that $\pi(\tilde x) = x$.
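The lift described above can be made concrete in the simplest case $H = C_3$ and $d = 2$, where the universal cover is $\mathbb{Z}$ and the covering map is reduction modulo 3. The following sketch, which is not code from the thesis, lifts a 3-colouring of a finite box to a height function by fixing the lift at one corner and propagating the $\pm 1$ increments, then checks that reducing modulo 3 recovers the colouring. The box size and the particular colouring are illustrative choices only.

```python
import numpy as np

def lift_to_heights(x, n=3):
    """Lift a pattern x on a finite box (values in Z_n, adjacent entries differing
    by +-1 mod n) to a height function with the same values mod n, fixing the lift
    at the corner (0, 0) to equal x[0, 0]."""
    rows, cols = x.shape
    h = np.zeros_like(x)
    h[0, 0] = x[0, 0]                       # fix the lift at a single vertex
    step = lambda a, b: 1 if (b - a) % n == 1 else -1
    for j in range(1, cols):                # propagate along the first row
        h[0, j] = h[0, j - 1] + step(x[0, j - 1], x[0, j])
    for i in range(1, rows):                # then down each column
        for j in range(cols):
            h[i, j] = h[i - 1, j] + step(x[i - 1, j], x[i, j])
    return h

# A 'diagonal' 3-colouring of a 4 x 4 box; its lift is the linear height i + j.
x = np.fromfunction(lambda i, j: (i + j) % 3, (4, 4), dtype=int)
h = lift_to_heights(x)
assert np.array_equal(h % 3, x)             # reducing mod 3 recovers the colouring
assert all(abs(int(h[i, j] - h[i, j + 1])) == 1 for i in range(4) for j in range(3))
assert all(abs(int(h[i, j] - h[i + 1, j])) == 1 for i in range(3) for j in range(4))
print(h)
```

That the row-then-column propagation is consistent rests on the path-independence argument of Proposition 2.3.1 below, which is where the restriction $n \neq 4$ enters.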
We can thus associate to the space $X^d_H$ the generalised height function $h_H : X^d_H \times \mathbb{Z}^d \longrightarrow \mathbb{N} \cup \{0\}$ such that $h_H(x, \vec i)$ is the graph distance between $\tilde x_{\vec 0}$ and $\tilde x_{\vec i}$. From here on, the steps for proving Theorem 1.4.1 are similar to those used in the proof of Proposition 2.6.1, but the proofs obtained for the individual steps in the latter are somewhat different. For instance, the existence of a slope for any ergodic measure on $X^d_H$ in every direction will now follow from the subadditive ergodic theorem instead of the ergodic theorem. The reason is that, unlike the height functions introduced earlier, $h_H$ is not additive but subadditive, meaning
\[
h_H(x, \vec i + \vec j) \leq h_H(x, \vec i) + h_H(\sigma^{\vec i}(x), \vec j).
\]

Topological entropy will be defined in Section 2.7. A description of height functions will be given in Section 2.3, and that of universal covers and generalised height functions will be given in Sections 4.4 and 4.5. A brief description of the tools from thermodynamic formalism will be given in Sections 2.7 and 4.2 respectively. Theorem 1.4.1 will be the same as Theorem 4.1.2.

Chapter 2

Markov Random Fields, Markov Cocycles and the 3-Coloured Chessboard

The main results of this chapter are Theorems 1.1.1 and 1.1.2. The results in this chapter are joint with Tom Meyerovitch [13].

2.1 Background and Notation

This section will recall the necessary concepts and introduce the basic notation.

2.1.1 Markov Random Fields and Topological Markov Fields

Let $G = (V, E)$ be a simple, undirected graph where the vertex set $V$ is finite or countable. We always assume that $G$ is locally finite, meaning all $v \in V$ have a finite number of neighbours. The boundary of a set of vertices $F \subset V$, denoted by $\partial F$, is the set of vertices outside $F$ which are adjacent to $F$:
\[
\partial F := \{v \in V \setminus F \mid \exists\, w \in F \text{ such that } (v, w) \in E\}.
\]

Remark: Observe that in our notation $\partial F \subset F^c$. This is sometimes called the outer boundary of the set $F$. Consistent with our notation, the inner boundary of $F$ is $\partial(F^c)$.

Given a finite set $A$, the space $A^{V}$ is a compact topological space with respect to the product topology, where the topology on $A$ is discrete. For $F \subset V$ finite and $a \in A^F$, we denote by $[a]_F$ the cylinder set
\[
[a]_F := \{x \in A^{V} \mid x|_F = a\}.
\]
For $x \in A^{V}$ we use the notation $[x]_F$ for $[x|_F]_F$. The collection of cylinder sets generates the Borel $\sigma$-algebra on $A^{V}$.

A Markov random field is a Borel probability measure $\mu$ on $A^{V}$ with the property that for all finite $A, B \subset V$ such that $\partial A \subset B \subset A^c$ and $a \in A^A$, $b \in A^B$ satisfying $\mu([b]_B) > 0$,
\[
\mu\big([a]_A \,\big|\, [b]_B\big) = \mu\big([a]_A \,\big|\, [b|_{\partial A}]_{\partial A}\big).
\]

We say that the sets of vertices $A, B \subset V$ are separated (in the graph $G$) if they are disjoint and $(v, w) \notin E$ whenever $v \in A$ and $w \in B$.

Here is an equivalent definition of a Markov random field: if $x$ is a point chosen randomly according to the measure $\mu$, and $A, B \subset V$ are finite and separated, then conditioned on $x|_{V \setminus (A \cup B)}$, the restrictions $x|_A$ and $x|_B$ are independent random variables.

A Markov random field is called global if the conditional independence above holds for all separated sets $A, B \subset V$ (finite or not).

As in [9, 12], a topological Markov field is a compact set $X \subset A^{V}$ such that for all finite $F \subset V$ and $x, y \in X$ satisfying $x|_{\partial F} = y|_{\partial F}$, the configuration $z$ given by
\[
z_v := \begin{cases} x_v & \text{for } v \in F \\ y_v & \text{for } v \in V \setminus F \end{cases}
\]
is an element of $X$. A topological Markov field is called global if we do not demand that $F$ be finite.

The support of a Borel probability measure $\mu$ on $A^{V}$, denoted by $\mathrm{supp}(\mu)$, is the intersection of all closed sets $Y \subset A^{V}$ for which $\mu(Y) = 1$. Equivalently,
\[
\mathrm{supp}(\mu) = A^{V} \setminus \bigcup_{[a]_A \in \mathcal{N}(\mu)} [a]_A,
\]
where $\mathcal{N}(\mu)$ is the collection of all cylinder sets $[a]_A$ with $\mu([a]_A) = 0$.
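The conditional independence in the definition of a Markov random field can be checked numerically on a small example. The sketch below is not from the thesis: it builds a nearest neighbour Gibbs measure on a path graph with seven vertices and alphabet {0, 1} (an arbitrary choice of graph and interaction strength) and verifies by brute force that conditioning on a finite set $B$ with $\partial A \subset B \subset A^c$ gives the same conditional probabilities as conditioning on $\partial A$ alone.

```python
from itertools import product
from math import exp

EDGES = [(v, v + 1) for v in range(6)]    # path graph on vertices 0, ..., 6
BETA = 0.7                                # arbitrary edge interaction strength

def weight(x):
    # unnormalised nearest neighbour Gibbs weight exp(beta * sum over edges of x_v * x_w)
    return exp(BETA * sum(x[v] * x[w] for v, w in EDGES))

CONFIGS = list(product([0, 1], repeat=7))

def conditional(a, b):
    """mu([a]_A | [b]_B) by brute force, for patterns given as dicts site -> symbol."""
    matches_b = [x for x in CONFIGS if all(x[v] == s for v, s in b.items())]
    num = sum(weight(x) for x in matches_b if all(x[v] == s for v, s in a.items()))
    return num / sum(weight(x) for x in matches_b)

a = {3: 1}                                # a pattern on A = {3}
on_boundary = {2: 0, 4: 1}                # a pattern on the boundary of A
on_larger = {0: 1, 1: 0, 2: 0, 4: 1, 5: 1, 6: 0}   # a pattern on B with dA in B in A^c

print(conditional(a, on_larger))          # conditioning on the larger set B ...
print(conditional(a, on_boundary))        # ... agrees with conditioning on dA alone
```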
The support of a Markovrandom field is a topological Markov field (see Lemma 2.0.1 in [9]).2.1.2 The Homoclinic Equivalence Relation of a Topological Markov Field andAdapted MRFs.Following [42, 49], we denote by ∆X the asymptotic relation of a topological Markov field X ⊂ AV ,which is given by∆X := {(x, y) ∈ X ×X | xn = yn for all but finitely many n ∈ V}. (2.1.1)Given a shift space X the homoclinic relation is defined to be the same as the asymptoticrelation on X. We say that an MRF µ is adapted with respect to a topological Markov field X ifsupp(µ) ⊂ X andx ∈ supp(µ) =⇒ {y ∈ X | (x, y) ∈ ∆X} ⊂ supp(µ).10It follows that an MRF µ is adapted with respect to a TMF X if and only if all continuousfunctions f : X −→ X satisfying (x, f(x)) ∈ ∆X for all x ∈ X are absolutely continuous withrespect to µ. In this case µ is said to be non-singular with respect to ∆X and it is possible topatch up all the Radon-Nikodym derivatives df(µ)dµ for all f as described above: The Radon-Nikodymcocycle of µ is a map cµ : ∆X −→ R+ such that for all functions f described abovecµ(x, f(x)) =df(µ)d µ(x) µ-almost everywhere.If µ is an MRF then the Radon-Nikodym derivative takes a particularly nice form: If (x, y) ∈∆supp(µ) and a finite set F ⊂ V such that x|F c = y|F c thencµ(x, y) =µ([y]F∪∂F )µ([x]F∪∂F ).Since µ is a Markov random field the right hand side is independent of the choice of F . See [33]and references within for further details.To illustrate adaptedness, if supp(µ) = X then µ is adapted with respect to X, and if X =X1 ∪ X2 is the union of two topological Markov fields over disjoint alphabets and µ is a Markovrandom field with supp(µ) = X1 then µ is adapted with respect to X. On the other hand, theBernoulli measure (12δ0 +12δ1)V is not adapted with respect to {0, 1, 2}V . In fact, if X = AV forsome finite alphabet A then any Markov random field which is adapted to X has supp(µ) = X.2.1.3 Zd-Shift Spaces and Shifts of Finite TypeFor the Markov random fields we discuss in this chapter, the set of vertices of the underlying graphis the d-dimensional integer lattice. We identify Zd with the set of vertices of the Cayley graphwith respect to the standard generators. Rephrasing, ~n, ~m ∈ Zd are adjacent iff ‖~n − ~m‖1 = 1,where for all ~n = (n1, . . . , nd) ∈ Zd, ||~n||1 :=∑dr=1 |nr| denotes the l1 norm of ~n. The boundary ofa given finite set F ⊂ Zd is thus given by:∂F = {~m ∈ F c | ||~n− ~m||1 = 1 for some ~n ∈ F} .On the compact space AZd(with the product topology over the discrete set A) the mapsσ~n : AZd→ AZdgiven by(σ~n(x))~m := x~m+~n for all ~m,~n ∈ Zddefine a Zd-action by homeomorphisms, called the shift-action. The pair (AZd, σ) is a topologicaldynamical system called the d-dimensional full shift on the alphabet A. Note that σ acts on theCayley graph of Zd by graph isomorphisms.A Zd-shift space or subshift is a dynamical system (X,σ) where X ⊂ AZdis closed and invariant11under the map σ~n for each ~n ∈ Zd.A Borel probability measure µ on AZdis shift-invariant if µ ◦ σ~n = µ for all ~n ∈ Zd. It followsthat the support of all shift-invariant measures µ is a subshift.For X ⊂ AV and W ⊂ V letLW (X) := {w ∈ AW | there exists x ∈ X such that x|W = w}.The language of X ⊂ AV denoted by L(X) is defined as all finite patterns which occur in theelements of X:L(X) :=⋃W⊂V finiteLW (X).If A,B ⊂ V and x ∈ AA, y ∈ AB such that x|A∩B = y|A∩B then x ∨ y ∈ AA∪B is given by(x ∨ y)n :=xn n ∈ Ayn n ∈ B.An alternative equivalent definition for a subshift is given by forbidden patterns as follows: LetA? 
:=⋃W⊂Zd finiteAW .For all F ⊂ A? letXF = {x ∈ AZd | no translate of a subconfiguration of x belongs to F}.It is well-known that a subset X ⊂ AZdis a subshift if and only if there exists F ⊂ A? such thatX = XF (for d = 1 this is stated in [29] as Theorem 6.1.21; the proof for higher dimensions issimilar). The set F is called the set of forbidden patterns for X. A subshift X is called a shiftof finite type if X = XF for some finite set F . A shift of finite type is called a nearest neighbourshift of finite type if X = XF where F consists of nearest neighbour constraints, i.e. F consistsof patterns on single edges. When d = 1 nearest neighbour shifts of finite type are also calledtopological Markov chains. In fact the study of nearest neighbour shifts of finite type has beenpartly motivated by the fact that the support of stationary Markov chains are one-dimensionalnearest neighbour shifts of finite type.Every nearest neighbour Zd-shift of finite type is a shift-invariant topological Markov field.When d = 1 the converse is also true under the assumption that the subshift is non-wandering [12].Without the non-wandering assumption, one-dimensional shift-invariant topological Markov fieldsare still so-called sofic shifts, but not necessarily of finite type [12]. This does not hold in higherdimensions ([9] and Section 2.8).122.1.4 Gibbs States with Nearest Neighbour InteractionsFor a graph G = (V, E) and A ⊂ V, let diam(A) denote the diameter of A with respect to the graphdistance (denoted by dG) in G, that is, diam(A) = maxi,j∈A d(i, j).Following [47], an interaction on X is a function φ from L(X) to R, satisfying certain summa-bility conditions. Here we will only consider finite range interactions, for which the summabilityconditions are automatically satisfied.An interaction is of range at most k if φ(a) = 0 for a ∈ LA(X) whenever diam(A) > k. We willcall an interaction of range 1 a nearest neighbour interaction. When G = Zd, an interaction φ isshift-invariant if for all ~n ∈ Zd and a ∈ L(X), φ(a) = φ(σ~n(a)). Since the standard Cayley graphof Zd has no triangles, a shift-invariant nearest neighbour interaction is uniquely determined by itsvalues on patterns on {~0} (“single site potentials”) and on patterns on pairs {~0, ~ei}, i = 1, . . . , d(“edge interactions”). We denote these by φ([a]0) and φ([a, b]i) respectively where a, b ∈ A.A Gibbs state with a nearest neighbour interaction φ is a Markov random field µ such that forall x ∈ supp(µ) and A,B ⊂ V finite satisfying ∂A ⊂ B ⊂ Ac,µ([x]A∣∣∣ [x]B)=∏C⊂A∪∂Aeφ(x|C)ZA,x|∂Awhere ZA,x|∂A is the uniquely determined normalising factor so that µ(X∣∣∣ [x]∂A) = 1 for allx ∈ supp(µ).2.1.5 Invariant Spaces, Measures and InteractionsAn automorphism of the graph G = (V, E) is a bijection on the vertex set g : V −→ V whichpreserves the adjacencies, that is, (u, v) ∈ E if and only if (gu, gv) ∈ E . Let the group of allautomorphisms of the graph G be denoted by Aut(G).There is a natural action of Aut(G) on patterns and configurations: given a ∈ AF , x ∈ AV andg ∈ Aut(G) we have ga ∈ AgF and gx ∈ AV given by(ga)gv := av and(gx)v := xg−1v.This induces an action on measures on the space AV given by(gµ)(L) := µ(g−1L)for all measurable sets L ⊂ AV .For a given subgroup G ⊂ Aut(G), a set of configurations X ⊂ AV is said to be G-invariant if13gX = X for all automorphisms g ∈ G. Similarly a measure µ on AV is said to be G-invariant ifgµ = µ for all g ∈ G. 
Note, for any subgroup G ⊂ Aut(G), if µ is a G-invariant probability measurethen supp(µ) is also a G-invariant configuration space. For G = Zd we abuse notation to denotethe group of translations by Zd as well. In this notation σ~n(x) = (−~n)(x).Let X ⊂ AV be a closed configuration space invariant under a subgroup G ⊂ Aut(G). Then Gacts on the interactions on X: Given an interaction φ on X for all a ∈ AF and g ∈ Ggφ([a]F ) := φ([g−1a]g−1F ).2.2 Markov Specifications and Markov CocyclesAny Markov random field µ yields conditional probabilities of the form µ(xF = · | x∂F = δ)for all finite F ⊂ V and admissible δ ∈ A∂F (by admissible we mean µ([δ]∂F ) > 0). We referto such a collection of conditional probabilities as the Markov specification associated with µ. Itmay happen that two distinct Markov random fields have the same specification, as in the case ofthe 2-dimensional Ising model in low temperature [38]. In general it is a subtle and challengingproblem to determine if a given Markov specification admits more than one Markov random field(the problem of uniqueness for the measure of maximal entropy of a Zd-shift of finite type is aninstance of this problem [7]). For the purpose of our study and statement of our results, it wouldbe convenient to have an intrinsic definition for a Markov specification, not involving a particularunderlying Markov random field.Let X ⊂ AV be a topological Markov field. A Markov specification on X is an assignmentfor each finite and non-empty F ⊂ V and x ∈ L∂F (X) of a probability measure ΘF,x on LF (X)satisfying the following conditions:1. Support condition: For all finite and non-empty F ⊂ V, x ∈ L∂F (X) and y ∈ LF (X),x ∨ y ∈ LF∪∂F (X) if and only if ΘF,x(y) > 0. This condition can be written as follows:supp(ΘF,x) = {y ∈ LF (X) | x ∨ y ∈ LF∪∂F (X)}.2. Markovian condition: For all finite and non-empty F ⊂ V and x ∈ L∂F (X), ΘF,x is aMarkov random field on the finite graph induced from V on F .3. Consistency condition: If F ⊂ H ⊂ V are finite and non-empty, x ∈ L∂F (X), y ∈ L∂H(X)and xn = yn for n ∈ ∂F ∩ ∂H, then for all z ∈ LF (X)ΘF,x(z) =ΘH,y([z ∨ x](F∪∂F )∩H)ΘH,y([x]∂F∩H).The definition above has been set up so that for any Markov random field µ with X = supp(µ),14the assignment(F, x) 7→ ΘF,x(a) := µ([a]F | [x]∂F )is a Markov specification. Conversely, given any Markov specification Θ on X there exists aMarkov random field µ on X compatible with Θ in the sense that µ([a]F | [y]∂F ) = ΘF,y(a) for alla ∈ LF (X) whenever µ([y]∂F ) > 0 (Chapter 4 in [22]). Furthermore, when X ⊂ AZdis a subshiftand the specification Θ is shift-invariant, it follows from amenability of Zd that there exists ashift-invariant Markov random field µ compatible with Θ. However, in general it is possible thatfor a given specification Θ the support of any µ satisfying the above is a strict subset of X, inwhich case there exist certain finite F ⊂ V and y ∈ L∂F (X) for which the conditional probabilitiesµ([x]F | [y]∂F ) are meaningless for all x ∈ X. We will provide such examples in Section 2.5. In sucha case, according to our definition, the Markov specification associated with µ is the restriction ofΘ to the support of µ, meaning the collection of conditional probabilities ΘF,x for finite sets F ⊂ Vand x ∈ L∂F (supp(µ)).It will be convenient for our purposes to re-parameterize the set of Markov specifications on agiven topological Markov field X. For this purpose we use the formalism of ∆X -cocycles. 
To thiswell-known formalism we introduce an ad-hoc definition of a Markov cocycle, which we will explainnow. As in [49], a (real-valued) ∆X-cocycle is a function M : ∆X −→ R satisfyingM(x, z) = M(x, y) +M(y, z) whenever (x, y), (y, z) ∈ ∆X . (2.2.1)Given G ⊂ Aut(G), M is a G-invariant ∆X -cocycle if M(x, y) = M(g(x), g(y)) for all g ∈ G. IfG = Zd then Zd-invariant ∆X -cocycles are said to be shift-invariant. We call M a Markov cocycleif it is a ∆X -cocycle and satisfies: For all (x, y) ∈ ∆X such that x|F c = y|F c the value M(x, y) isdetermined by x|F∪∂F and y|F∪∂F .There is a clear bijection between Markov cocycles and Markov specifications on X: If Θ is aMarkov specification on X, the corresponding Markov cocycle is given byM(x, y) := log(ΘF,y|∂F (y|F ))− log(ΘF,x|∂F (x|F )),where (x, y) ∈ ∆X and F ⊂ V finite such that x|F c = y|F c . Clearly M does not depend on thechoice of F : Let F˜ is the set of sites on which x and y differ thenM(x, y) := log(ΘF,y|∂F (y|F ))− log(ΘF,x|∂F (x|F ))= log(ΘF,y|∂F ([y]F˜∣∣∣ [y]F\F˜ ))− log(ΘF,x|∂F ([x]F∣∣∣ [x]F\F˜ ))(Markovian condition) = log(ΘF,y|∂F ([y]F˜∣∣∣ [y]∂F˜∩F ))− log(ΘF,x|∂F ([x]F∣∣∣ [x]∂F˜∩F ))(Consistency condition) = log(ΘF,y|∂F˜ (y|F˜ ))− log(ΘF˜ ,x|∂F˜(x|F˜ )).15Conversely, given a Markov cocycle M on X, the corresponding specification Θ is given byΘF,a(y) =1ZM,F,a,zeM(x∨z,x∨y),where F ⊂ V is a finite set, a ∈ L∂F (X), y, z ∈ LF (X) are such that a ∨ y, a ∨ z ∈ LF∪∂F (X) andx ∈ LF c(X) with x|∂F = a. The normalising constant ZM,F,a,z is given by:ZM,F,a,z =∑y′eM(x∨z,x∨y′),where the sum is over all y′ ∈ LF (X) such that y′ ∨ a ∈ LF∪∂F (X). Note that the expression forthe specification is well defined: Since M is a Markov cocycle on the topological Markov field X theright-hand side does not depend on the particular choice of x. The choice of the auxiliary variablez on the right-hand side changes the normalising constant ZM,F,a,z, but does not change ΘF,a.When X ⊂ AV is G-invariant, this bijection maps G-invariant specifications to G-invariantMarkov cocycles. Thus, a G-invariant Borel probability measure µ is a G-invariant Markov randomfield if and only if X = supp(µ) is a topological Markov field and the Radon-Nikodym cocycle of µwith respect to ∆X is some G-invariant Markov cocycle M , that is, for all (x, y) ∈ ∆Xµ([y]Λ)µ([x]Λ)= eM(x,y)for all Λ ⊃ F ∪ ∂F where F is the set of sites on which x, y differ.Fix a topological Markov field X and a nearest neighbour interaction φ on X. The Gibbs cocyclecorresponding to φ is given by:Mφ(x, y) =∑W⊂V finiteφ(y|W )− φ(x|W ).Note that when φ is a nearest neighbour interaction, there are only finitely many non-zero termsin the sum whenever (x, y) ∈ ∆X and so Mφ is well defined. Examination of the definitions verifiesthat under the above assumptions Mφ is a Markov cocycle. Our point of interest is the converse:When can a Markov cocycle be expressed in this form?Proposition 2.2.1. Let µ be an MRF and M be a Markov cocycle on supp(µ) given byM(x, y) = log(µ([y]Λ)µ([x]Λ))for all (x, y) ∈ ∆supp(µ)for any Λ ⊃ F ∪ ∂F where F is the set of vertices where x and y differ. Then µ is a Gibbs statewith a nearest neighbour interaction φ if and only if M = Mφ.The proof of the proposition follows because µ is a Gibbs state with a nearest neighbour inter-16action φ if and only ifM(x, y) = log(µ ([y]F | [y]∂F )µ ([x]F | [y]∂F ))=∑W⊂V finiteφ(y|W )− φ(x|W ) = Mφ(x, y)for all x, y ∈ ∆supp(µ). Let X be a topological Markov field. 
Denote by MX the set of all Markovcocycles and by GX the set of all nearest neighbour Gibbs cocycles. In case X is G-invariant,we denote by MGX the set of G-invariant Markov cocycles and by GGX the set of Gibbs cocyclescorresponding to a G-invariant nearest neighbour interaction.The set MX of Markov cocycles naturally carries a vector space structure: Given M1,M2 ∈MXand c1, c2 ∈ R, c1M1 + c2M2 ∈ MX . The reader can easily verify that GX is a linear subspaceof MX . If X is a shift-space then the shift-invariant nearest neighbour interactions constitutea finite-dimensional vector space, and the map sending a nearest neighbour interaction φ to thecocycle Mφ is linear, it follows that GZdX ⊂MX has finite dimension.For a topological Markov fieldX defined over a finite graph G = (V, E),MX is finite dimensional;the problem of determining which Markov cocycles are Gibbs amounts to solving a finite (butpossibly large) system of linear equations. The resulting equations are essentially the ‘balancedconditions’ mentioned in [34].2.2.1 The “Safe Symbol Property” and the Hammersley-Clifford TheoremA topological Markov field X ⊂ AZdis said to have a safe symbol if there exists an element ? ∈ Asuch that for all x ∈ X and A ⊂ Zd, y ∈ AZdgiven byyn :=xn for n ∈ A? for n ∈ Acis also an element of X.A formulation of the Hammersley-Clifford Theorem states:Theorem 2.2.2. (Hammersley-Clifford, weak version [24]) Let X be a topological Markovfield with a safe symbol. Then:1. Any Markov random field with supp(µ) = X is a Gibbs state for a nearest neighbour interac-tion.2. If X is G-invariant for some G ⊂ Aut(G) then any G-invariant Markov random field withsupp(µ) = X is a Gibbs state for some G-invariant nearest neighbour interaction.The second statement in the theorem above is not a part of the original formulation, but doesfollow since there is an explicit algorithm to produce a nearest neighbour interaction which isinvariant under all graph automorphisms for which the original Markov random field was invariant17[9]. See also [2, 54, 55]. It is in general false that a G-invariant Markov random field whose(G-invariant) specification is compatible with some nearest neighbour interaction must also becompatible with some G-invariant nearest neighbour interaction (see Corollary 2.4.6 below). Inparticular, for a general topological Markov field X we have GGX ⊂ MGX ∩GX , but the inclusionmay be strict.An inspection of the original proof of the Hammersley-Clifford Theorem actually gives thefollowing a priori stronger result:Theorem 2.2.3. (Hammersley-Clifford Theorem, strong version) Let X be a topologicalMarkov field with a safe symbol. Then:1. Any Markov cocycle on X is a Gibbs cocycle given by a nearest neighbour interaction. In ournotation this is expressed by: MX = GX .2. Given G ⊂ Aut(G), if X is G-invariant then any G-invariant Markov cocycle on X is a Gibbscocycle given by a G-invariant nearest neighbour interaction. In our notation this is expressedby: MGX = GGX .It is easily verified that any topological Markov field X which satisfies one of the conclusionsof the “strong version” immediately satisfies the corresponding conclusion of the “weak version”.We will demonstrate in the following section that the converse implication is false in general. Theproof of the first part of this version follows from Theorem 2.2.2 with the additional knowledgethat given a Markov cocycle on a topological Markov field X with a safe symbol there exists acorresponding MRF µ such that supp(µ) = X. 
This in turn is implied by arguments very similarto those in the proof of the following proposition. The second part of the theorem can be provedusing Theorem 2.0.6 in [9], noting that the conclusion holds even if the MRF is not invariant underG but the corresponding Markov cocycle is.Proposition 2.2.4. Let X be a topological Markov field with a safe symbol. Then any Markovrandom field µ adapted to X has supp(µ) = X.Proof. Let µ be a Markov random field adapted to X. We need to show that for all finite F ⊂ Vand a ∈ LF (X), µ([a]F ) > 0. Denote F ∪ ∂F by F˜ . Let b ∈ LF˜ (X) and c ∈ L∂F˜ (X) satisfyµ([b ∨ c]F˜∪∂F˜ ) > 0. In particular, µ([c]∂F˜ ) > 0. Let b˜ ∈ LF˜ (X) be given byb˜n :=bn n ∈ F? n ∈ ∂F.Note that b˜ ∨ c ∈ LF˜∪∂F˜ (X) because ? is a safe-symbol. Again, by the safe symbol property itfollows that a˜ ∈ LF˜ (X) where:a˜n :=an n ∈ F? n ∈ ∂F.18Since X is a topological Markov field, a˜ ∨ c ∈ LF˜∪∂F˜ (X). Since µ is an adapted Markov randomfield it follows that µ([a˜]F˜ | [c]∂F˜ ) > 0, and since µ([c]∂F˜ ) > 0 it follows that µ([a˜ ∨ c]F˜∪∂F˜ ) > 0; inparticular we get that µ([a]F ) > 0.Remark: Proposition 2.2.4 is a particular instance of the more general fact that all ∆X -nonsingular measures µ satisfy supp(µ) = X, whenever ∆X is a topologically minimal. The lattercondition means that for any x ∈ X, the ∆X -orbit ∆X(x) := {y ∈ X | (x, y) ∈ ∆X} is dense in X.Remark: When the underlying graph is Zd, any shift-invariant topological Markov field Xwith a safe symbol is actually a nearest-neighbour shift of finite type; Proposition 3.1.2 proves amore general fact.2.2.2 The Pivot PropertyWe shall now consider a weaker property than that of having a safe symbol. Let X be a topologicalMarkov field. If x, y ∈ X only differ at a single v ∈ V, then the pair (x, y) will be called a pivotmove in X. A topological Markov field X is said to have the pivot property if for all (x, y) ∈ ∆Xsuch that x 6= y there exists a finite sequence of points x(1) = x, x(2), . . . , x(k) = y ∈ X such thateach (x(i), x(i+1)) is a pivot move. In this case we say x(1) = x, x(2), . . . , x(k) = y is a chain of pivotsfrom x to y. Many properties similar to the pivot property have appeared in the literature (oftenby the name local-move connectedness), for instance look at [6, 36, 39, 53] Here are some examplesof subshifts which have the pivot property:1. Any topological Markov field with a trivial homoclinic relation.2. Any topological Markov field with a safe symbol.3. The r-colourings of Zd (defined below) where r = 3 (generalised in Proposition 2.3.4 andTheorem 4.1.4) or r ≥ 2d+ 2 (Proposition 2.2.5).4. The space of graph homomorphisms from Zd to a “dismantlable graph”, as in [6] (defined inthe examples after the statement of Theorem 3.3.2).Consider a countable graph G = (V, E) without multiple edges and self-loops. Let Coln denotethe proper vertex colourings of G with n colours:Coln :={x ∈ {1, . . . , n}V | xv 6= xw whenever (v, w) ∈ E}.The r-colourings of Zd is the space Colr when G is Zd. The fact that r-colourings of Zd withr ≥ 2d+ 2 have the pivot property is a consequence of the following:Proposition 2.2.5. Let G = (V, E) be a graph with all vertices of degree at most d and n ≥ d+ 2.Then Coln is a topological Markov field that has the pivot property.19Proof. Given x, y ∈ ∆Coln we will describe a sequence of pivot moves from x to y: Let F := {v ∈V | xv 6= yv}. Let x(0) := x. For every i ∈ {1, . . . 
, n}, obtain a point x(i) ∈ Coln by pivots movesstarting from x(i−1) as follows:• Step 1: For every v ∈ F such that x(i−1)v = i, choose j ∈ {1, . . . , n} such thatj 6∈{x(i−1)v | v ∈ ∂{v} ∪ {v}}.Such j exists because by our assumption on n: |∂{v} ∪ {v}| ≤ d+ 1 < n. In particular i 6= j.Make a pivot move by changing x(i−1)v from i to j. After at most |F | pivot moves we obtainthe point z such that zv 6= i for all v ∈ F and zv = x(i−1)v unless v ∈ F and x(i−1)v = i.• Step 2: For every v ∈ F such that yv = i apply a pivot move by changing zv to i:x′w :=zw if w 6= vi if w = v.To see that x′ ∈ Coln, we need to check that zw 6= i for any w ∈ ∂{v}. Indeed if w ∈ ∂{v}∩Fthen zw 6= i by step 1. If w ∈ ∂{v} ∩ F c then zw = yw. Because yv = i it follows that yw 6= i.Now iterate with z replaced by x′. After at most |F | pivot moves we obtain the point x(i).This configuration has the property that x(i)v = yv unless v ∈ F and yv > i.This describes a finite sequence of pivot moves from x = x(0) to x(n) = y.Proposition 2.2.6. Let X ⊂ AZdbe a shift-invariant topological Markov field with the pivotproperty. Then the dimension of MZdX is finite.Proof. Let (x, y) ∈ ∆X . Let x = x(1), x(2), x(3), . . . , x(k) = y be a chain of pivots from x to y. ThenM(x, y) =k−1∑i=1M(x(i), x(i+1)). (2.2.2)If x(i), x(i+1) differ only at ~mi ∈ Zd then M(x(i), x(i+1)) = M(σ−~mix(i), σ−~mix(i+1)) depends only onσ−~mix(i)|{~0}∪∂{~0} and σ−~mix(i+1)|{~0}∪∂{~0}. Therefore the dimension of the space of shift-invariantMarkov cocycles is bounded by |L{~0}∪∂{~0}(X)|2.2.3 Zr-Homomorphisms, 3-Coloured Chessboards and HeightFunctionsRecall that a graph-homomorphism from the graph G1 = (V1, E1) to the graph G2 = (V2, E2) is afunction f : V1 → V2 from the vertex set of G1 to the vertex set of G2 such that f sends edges in20G1 to edges in G2. Namely, if (v, w) ∈ E1 then (f(v), f(w)) ∈ E2. We consider Zd as the vertex setof the standard Cayley graph, where an edge (~n, ~m) is present if and only if ‖~n− ~m‖1 = 1. Also asubset A ⊂ Zd will also denote the induced subgraph of Zd on A.For the purposes of this chapter, a height function on A ⊂ Zd is a graph homomorphism from Ato the standard Cayley graph of Z. We denote the set of height functions on A ⊂ Zd by Ht (d)(A):Ht(d)(A) :={xˆ ∈ ZA | |xˆ~n − xˆ~m| = 1 whenever ~n, ~m ∈ A and ‖~n− ~m‖1 = 1}. (2.3.1)In particular, we denoteHt(d) := Ht(d)(Zd)We now introduce a certain family of Zd-shifts of finite type X(d)r , where r, d ∈ N, and r > 1:Denote by Zr = Z/rZ ∼= {0, . . . , r−1} the finite cyclic group of residues modulo r. Let φr : Ht(d) →(Zr)Zdbe defined by(φr(xˆ))~n := xˆ~n mod r for all ~n ∈ Zd.The Zd-subshift X(d)r is defined by:X(d)r := φr(Ht(d)).For r = 2 it is easily verified that X(d)2 consists precisely of two points xeven, xodd. These are“chessboard configurations”, given by xeven~n = ‖~n‖1 mod 2 and xodd~n = ‖~n‖1 + 1 mod 2.In the following, to avoid cumbersome superscripts, we will fix some dimension d ≥ 2, anddenote Ht := Ht(d), Xr := X(d)r and Ht(A) := Ht(d)(A) for all A ⊂ Zd.For r 6= 1, 4, there is a direct and simple interpretation for the subshift Xr as the set ofgraph homomorphisms from the standard Cayley graph of Zd to the standard Cayley graph of Zr(Proposition 2.3.1 below). In the particular case r = 3 the standard Cayley graph of Zr is thecomplete graph on 3 vertices, and so X3 is the set of proper vertex-colourings of standard Cayleygraph of Zd with colours in {0, 1, 2}. This relation has certainly been noticed and recorded in theliterature. 
For instance, it is stated without proof in [20]. The general method of using heightfunctions is attributed to J.H.Conway [57]. For the sake of completeness, we bring a self-containedproof in Proposition 2.3.1 and Lemma 2.3.2 below. The proofs below essentially follow [48, 50],where the corresponding results are obtained for the case d = 2, r = 3. Also, many ideas going intothe proofs of Propostion 2.3.1, Lemma 2.3.2 and Lemma 2.3.3 come from algebraic topology; wemention the connection briefly at the end of Section 4.4 (where we generalise some of these results).Within the proof we also define a function grad : Xr × Zd → Z, which we use later on.Proposition 2.3.1. For any d ≥ 2 and r ∈ N \ {1, 4}, Xr is a nearest neighbour shift of finite typegiven byXr = {x ∈ (Zr)Zd| x~n − x~m = ±1 mod r, whenever ‖~n− ~m‖1 = 1}.21Proof. When r = 2, by our previous remark X2 = {xodd, xeven}; the proposition is easily verifiedin this case. From now on assume r ∈ N \ {1, 2, 4}. Temporarily, let us denoteYr := {x ∈ (Zr)Zd| x~n − x~m = ±1 mod r, whenever ‖~n− ~m‖1 = 1}.We need to establish that Yr = Xr.For all xˆ ∈ Ht and ~m,~n ∈ Zd with ‖~n− ~m‖1 = 1, by definition of Ht, we have |xˆ~n − xˆ~m| = 1.Thus, (φr(xˆ))~n−(φr(xˆ))~m = ±1 mod r, and so φr(xˆ) ∈ Yr. This establishes the inclusion Xr ⊂ Yr.To complete the proof, given x ∈ Yr we will exhibit xˆ ∈ Ht so that φr(xˆ) = x. Choose ~n0 ∈ Zdand define xˆ~n0 := x~n0 . For all ~n ∈ Zd, choose a path ~n0, ~n1 . . . , ~nk = n from ~n0 to ~n in the standardCayley graph of Zd, meaning, ~ni+1 − ~ni ∈ {±~e1, . . . ,±~ed}. Definexˆ~n := xˆ~0 +k∑i=1[x~ni − x~ni−1 ],where for x ∈ Yr and ~m,~n ∈ Zd with ‖~m− ~n‖1 = 1,[x~n − x~m] :=1 if x~n − x~m = 1 mod r−1 if x~n − x~m = −1 mod r. (2.3.2)We claim that for all x ∈ Yr the value of xˆ~n thus obtained is independent of the path chosenfrom ~n0 to ~n. Another way to express this is as follows:For x ∈ Yr and ~n ∈ {±~e1, . . . ,±~ed} letgrad(x, n) := [x~n − x~0]. (2.3.3)Extend grad to a map grad : Yr × Zd → Z as follows: For an arbitrary ~n ∈ Zd write ~n =∑Mj=1 ~sj ,with ~sj ∈ {±~e1, . . . ,±~ed}. Define:grad(x, ~n) :=M∑k=1grad(σ~nk−1x,~sk), (2.3.4)where ~nk :=∑kj=1 ~sj and the expressions appearing in the sum on the right-hand side of (2.3.4) aredefined by (2.3.3). We will now verify that grad(x, ~n) is well defined, which means it does not dependon the representation ~n =∑Mj=1 ~sj . Specifically, we will check that for all ~t1, . . . ,~tN , ~s1, . . . , ~sk ∈{±~e1, . . . ,±~ed} satisfying∑Ni=1~ti =∑kj=1 ~sj and x ∈ Yrk∑j=1[x~nj − x~nj−1 ] =N∑j=1[x~mj − x~mj−1 ], (2.3.5)22where ~nj =∑ji=1 ~si and ~mj =∑ji=1~ti.From (2.3.2) it follows that [x~n − x~m] = −[x~m − x~n] whenever ‖~m − ~n‖1 = 1. Note that~0, ~n1, ~n2, . . . , ~nk, ~mN−1, . . . , ~m1,~0 forms a loop in Zd. Since loops in Zd are generated by four-cyclesof the type ~0, ~ei, ~ei + ~ej , ~ej ,~0 for all 1 ≤ i, j ≤ d to check (2.3.5), it thus suffices to verify for allx ∈ Yr:[x~ej − x~0] + [x~ej+~ei − x~ej ] = [x~ei − x~0] + [x~ei+~ej − x~ei ]. (2.3.6)From the definition (2.3.2), the equality in (2.3.6) holds modulo r. Also,[x~ej − x~0], [x~ej+~ei − x~ej ], [x~ei − x~0], [x~ej+~ei − x~ei ] ∈ {±1}.Thus (2.3.6) is a consequence of the following simple exercise:For all A1, A2, A3, A4 ∈ {±1} satisfyingA1 +A2 +A3 +A4 = 0 mod r, where r ∈ N \ {1, 2, 4},we haveA1 +A2 +A3 +A4 = 0.It now follows that for all x ∈ Yr and n ∈ Zdgrad(x, ~n) = xˆ~n − xˆ~0. (2.3.7)In particular it follows that for all ~n, ~m ∈ Zd with ‖~n − ~m‖1 = 1, xˆ~m − xˆ~n ∈ {±1}. So indeedxˆ ∈ Ht. 
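The lift constructed in this proof can be computed mechanically on a finite region. The following minimal Python sketch is an illustration added here, not part of the argument: the restriction to d = 2, the dictionary representation of a configuration and the choice of base height are assumptions of the sketch. It recovers a height function from x by accumulating the increments [x_~n − x_~m] along a breadth-first traversal; the assertion inside the loop is the finite analogue of the path-independence verified above.

from collections import deque

def bracket(a, b, r):
    # The increment [b - a] of (2.3.2): +1 if b - a = 1 mod r, -1 if b - a = -1 mod r.
    if (b - a) % r == 1:
        return 1
    if (a - b) % r == 1:
        return -1
    raise ValueError("neighbouring symbols must differ by +-1 mod r")

def lift(x, r):
    # x: a configuration on a finite box in Z^2, given as a dict {(i, j): symbol in Z_r}
    # satisfying the nearest-neighbour condition of Y_r.  Returns a height function
    # xhat (a dict of integers) with xhat = x mod r, built by accumulating increments
    # along a breadth-first traversal from an arbitrary base vertex.
    base = next(iter(x))
    xhat = {base: x[base]}          # any integer congruent to x[base] mod r will do
    queue = deque([base])
    while queue:
        v = queue.popleft()
        for dv in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            w = (v[0] + dv[0], v[1] + dv[1])
            if w not in x:
                continue
            h = xhat[v] + bracket(x[v], x[w], r)
            if w in xhat:
                # path-independence: this is where r not in {1, 2, 4} enters the proof
                assert xhat[w] == h
            else:
                xhat[w] = h
                queue.append(w)
    return xhat

# Example: a 3-colouring (r = 3) of a 3 x 3 box and its lift.
r = 3
x = {(i, j): (i + j) % r for i in range(3) for j in range(3)}
xhat = lift(x, r)
assert all(xhat[v] % r == x[v] for v in x)
assert all(abs(xhat[(i, j)] - xhat[(i + 1, j)]) == 1 for i in range(2) for j in range(3))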
It is straightforward to check that φr(xˆ) = x.Remark: From the proof above we see that map grad : Xr × Zd → Z satisfies the followingrelation:grad(x, ~n+ ~m) = grad(x, ~n) + grad(σ~n(x), ~m). (2.3.8)This means that grad is a cocycle for the shift-action on Xr. See [48, 50] for more on cocycles forXr and other subshifts.In the particular case when r = 3 , X(d)3 is a presentation of a shift of finite type known asthe d-dimensional 3-coloured chessboard. The subshift X(d)3 is the set of proper vertex-colourings ofstandard Cayley graph of Zd with colours in {0, 1, 2}. On the first reading of the following sections,we advise the reader to keep in mind the case r = 3 and d = 2, in which X(d)r is the “2-dimensional3-coloured chessboard”.Remark: For the “exceptional” case r = 4, X4 is still a shift of finite type. This can be directly23deduced from the following formula:X(d)4 = {x ∈ (Zr)Zd | x~n − x~m = ±1 mod 4, whenever ||~n− ~m||1 = 1and [x~p − x~p+~ei ] + [x~p+~ei − x~p+~ei+~ej ] = [x~p − x~p+~ej ] + [x~p+~ej − x~p+~ei+~ei ]for all 1 ≤ i, j ≤ d and ~p ∈ Zd}.However X(d)4 is not a topological Markov field, as we now explain. For simplicity assume d = 2.Let x, y ∈ X(2)4 be the periodic points satisfyingy~n =2∑i=1~ni mod 4 and x~n = 2−2∑i=1~ni mod 4.That is:y =............ ...... 1 2 3 ...... 0 1 2 ...... 3 0¯ 1 ...... 2 3 0 ............... ...and x =............ ...... 1 0 3 ...... 2 1 0 ...... 3 2¯ 1 ...... 0 3 2 ............... ...(2.3.9)Observe that x~0 = 2 and y~0 = 0 and x|∂{~0} = y|∂{~0}. However the configuration z given byz~i =x~0 if~i = ~0y~i otherwiseis not an element of X(2)4 because[z−~e2 − z−~e2+~e1 ] + [z−~e2+~e1 − z~e1 ] = −2while[z−~e2 − z~0] + [z~0 − z~e1 ] = 2.A similar construction works for any d > 2.Lemma 2.3.2. Fix any d ≥ 2 and r ∈ N \ {1, 2, 4}. For x ∈ Xr any two pre-images under φrdiffer by a constant integer multiple of r, that is, if xˆ, yˆ ∈ Ht satisfy φr(xˆ) = φr(yˆ) then there existsM ∈ Z so that xˆ~n − yˆ~n = rM for all ~n ∈ Zd.Proof. Let x ∈ Xr, xˆ, yˆ ∈ Ht satisfy φr(xˆ) = φr(yˆ) = x. We havexˆ~0 ≡ yˆ~0 ≡ x~0 mod r.24Thus there exists M ∈ Z so that xˆ~0 − yˆ~0 = rM . By (2.3.7) it follows that for all n ∈ Zdxˆ~n − xˆ~0 = yˆ~n − yˆ~0 = grad(x, ~n).It follows that for all ~n ∈ Zdxˆ~n − yˆ~n = xˆ~0 − yˆ~0 = rM.Lemma 2.3.3. Let d ≥ 2 and r ∈ N \ {1, 2, 4}. Fix any (x, y) ∈ ∆Xr and a finite F ⊂ Zd so thatx~n = y~n for all ~n ∈ Zd \ F .1. There exists a finite set F˜ so that for all xˆ, yˆ ∈ Ht such that φr(xˆ) = x and φr(yˆ) = y, thereexists M ∈ Z so that xˆ~n − yˆ~n = rM for all ~n ∈ Zd \ F˜ .2. We can choose xˆ, yˆ ∈ Ht so that M = 0, that is, (xˆ, yˆ) ∈ ∆Ht.3. If (xˆ′, yˆ′), (xˆ, yˆ) ∈ ∆Ht and φr(xˆ′) = φr(xˆ) = x, φr(yˆ′) = φr(yˆ) = y then there exists M ∈ Zso that for all ~n ∈ Zd, yˆ′~n = yˆ~n + rM and xˆ′~n = xˆ~n + rM .4. For any (xˆ, yˆ) ∈ ∆Ht and all ~n ∈ Zd, (xˆ~n − yˆ~n) ∈ 2Z.5. If x, y ∈ Xr satisfy x~n = y~n for all ~n ∈ Zd \ {~n0} and x~n0 6= y~n0 then there exists xˆ, yˆ ∈ Htsuch that φr(xˆ) = x, φr(yˆ) = y and|xˆ~n − yˆ~n| =2 if ~n = ~n00 otherwise .Proof. 1. Choose xˆ ∈ φ−1r (x) and yˆ ∈ φ−1r (y). Since the set F is finite, there is an infiniteconnected component A˜ ⊂ Zd \ F in the standard Cayley graph of Zd so that F˜ := Zd \ A˜ isfinite. Fix some ~n0 ∈ A˜. For any ~n ∈ A˜ choose a path ~n0, ~n1 . . . , ~nk = ~n ∈ A˜ from ~n0 to ~n inthe standard Cayley graph of Zd, that is, ~nj+1 − ~nj ∈ {±~e1, . . . , ~ed} for all 1 ≤ j ≤ k− 1 andxj = yj for all 1 ≤ j ≤ k. 
Using (2.3.3) and (2.3.4) we conclude thatgrad(σ~n0(x), ~n− ~n0) = grad(σ~n0(y), ~n− ~n0).It now follows using (2.3.7) thatxˆ~n − xˆ~n0 = yˆ~n − yˆ~n0 .Because x~n0 = y~n0 , xˆ~n0 − yˆ~n0 ∈ rZ, and so there exists M ∈ Z so that xˆ~n − yˆ~n = rM for all~n ∈ A˜ = Zd \ F˜ .252. Define zˆ byzˆ~n := yˆ~n − (yˆ~n0 − xˆ~n0) for all ~n ∈ Zd.Obviously zˆ ∈ Ht. Since x~n0 = y~n0 it follows that yˆ~n0 − xˆ~n0 ∈ rZ. Thus φr(zˆ) = φr(yˆ) = y.Also zˆ~n = yˆ~n for all ~n ∈ A˜. Thus, (xˆ, zˆ) ∈ ∆Ht, proving the second assertion.3. By Lemma 2.3.2, for any choice of xˆ′ ∈ φ−1r (x) there exists M1 ∈ Z so that xˆ′~n = xˆ~n+rM1 forall ~n ∈ Zd. Similarly, for any choice of yˆ′ ∈ φ−1r (y) there exists M2 ∈ Z so that yˆ′~n = yˆ~n+ rM2for all ~n ∈ Zd. But if (xˆ′, yˆ′) ∈ ∆Ht it follows that M1 = M2. This proves the third assertion.4. From (2.3.3) it follows that grad(x,±~ej) = 1 mod 2 for all x ∈ Xr and j ∈ {1, . . . , d}. Thusby (2.3.4) the parity of grad(x, ~n) is equal to the parity of ‖~n‖1 and does not depend on x.Therefore if xˆ~n0 = yˆ~n0 for some ~n0 ∈ Zd, from (2.3.7) it follows that xˆ~n − yˆ~n ∈ 2Z for all~n ∈ Zd.5. Suppose that x, y ∈ Xr satisfy x~n = y~n for all ~n ∈ Zd \ {~n0} and x~n0 6= y~n0 . Choose(xˆ, yˆ) ∈ ∆Ht so that φr(xˆ) = x and φr(yˆ) = y. By the argument above xˆ~n = yˆ~n for all ~n 6= ~n0.Let ~m := ~n0 + ~e1. Since x~m = y~m and x~n0 6= y~n0 it follows that [x~n0 − x~m] 6= [y~n0 − y~m], thus|[x~n0 − x~m]− [y~n0 − y~m]| = 2.Using (2.3.3) and (2.3.7), we conclude that |xˆ~n0 − yˆ~n0 | = 2.It is a known and useful fact that the 3-coloured chessboard has the pivot property. This canbe shown, for instance, using height functions. Essentially the same argument shows that Xr hasthe pivot property for all r ∈ N \ {1, 2, 4}. We include a short proof below. Similar argumentsappear in the proofs of certain claims in the subsequent sections.Proposition 2.3.4. For any d ≥ 2 and r ∈ N \ {1, 2, 4} the subshift Xr has the pivot property. Inother words, given any (x, y) ∈ ∆Xr there exist x = z(0), z(1), . . . , z(N) = y ∈ Xr such that for all0 ≤ k < N , there is a unique nk ∈ Zd for which z(k)nk 6= z(k+1)nk .This will be further generalised as Theorem 4.1.4.Proof. Fix (x, y) ∈ ∆Xr . By Lemma 2.3.3, we can choose (xˆ, yˆ) ∈ ∆Ht with φr(xˆ) = x andφr(yˆ) = y. We will proceed by induction on D =∑~n∈Zd |xˆ~n − yˆ~n|. Note that by Lemma 2.3.3 Dis well defined, that is, the differences (xˆ~n − yˆ~n) in the sum do not depend on the choice of (xˆ, yˆ).When D = 0, then x = y and the claim is trivial. Now, suppose D > 0. LetF+ = {~n ∈ Zd | (xˆ~n − yˆ~n) > 0}F− = {~n ∈ Zd | (xˆ~n − yˆ~n) < 0},26that is, F+ ⊂ Zd is the finite set of sites where xˆ is strictly above yˆ and F− ⊂ Zd is the finite setof sites where yˆ is strictly above xˆ. Without loss of generality assume that F+ is non-empty. Since(xˆ~n − yˆ~n) ∈ 2Z, xˆ~n − yˆ~n ≥ 2 for all ~n ∈ F+. Consider ~n0 ∈ F+ such that xˆ~n0 = max{xˆ~n | ~n ∈ F+}.It follows that xˆ~n0 − xˆ~m = 1 for all ~m neighbouring ~n0. We can thus define zˆ ∈ Ht which is equalto xˆ everywhere except at ~n0, where zˆ~n0 = xˆ~n0 − 2. Now set z(1) = φr(zˆ) and apply the inductionhypothesis on (z(1), y).2.4 Markov Cocycles on XrOur goal in the current section is to describe the space of shift-invariant Markov cocycles on Xr,when r ∈ N \ {1, 2, 4}, and the subspace of Gibbs cocycles for shift-invariant nearest neighbourinteractions.In the following we assume r ∈ N \ {1, 2, 4} and d > 1, unless explicitly stated otherwise.Lemma 2.4.1. Fix r ∈ N \ {1, 2, 4} and d > 1. 
Let F ⊂ Zd be a finite set and x, y, z, w ∈Xr such that x|F c = y|F c, z|F c = w|F c and x|F∪∂F = z|F∪∂F , y|F∪∂F = w|F∪∂F . Consider(xˆ, yˆ), (zˆ, wˆ) ∈ ∆Ht such that they are mapped by φr to the pairs (x, y), (z, w) ∈ ∆Xr respectively.Then xˆ~n − yˆ~n = zˆ~n − wˆ~n for all ~n ∈ Zd.Proof. Let F0 ⊂ Zd denote the infinite connected component of F c. For ~n ∈ F0, we clearly havexˆ~n − yˆ~n = zˆ~n − wˆ~n = 0. We can now prove by induction on the distance from ~n ∈ Zd to F0 thatxˆ~n − yˆ~n = zˆ~n − wˆ~n. Given ~n ∈ Zd \ F0, find a neighbour ~m of ~n which is closer to F0. By theinduction hypothesis, xˆ~m − yˆ~m = zˆ~m − wˆ~m.If either ~n ∈ F or ~m ∈ F , then both ~m and ~n are in F ∪ ∂F and so x~n − x~m = z~n − z~m andy~n − y~m = w~n − w~m. (2.3.2) and (2.3.7) imply xˆ~n − xˆ~m = zˆ~n − zˆ~m and yˆ~n − yˆ~m = wˆ~n − wˆ~m.Subtracting the equations and applying the induction hypothesis, we conclude in this case thatxˆ~n − yˆ~n = zˆ~n − wˆ~n.Otherwise, ~n, ~m ∈ F c and so x~n − x~m = y~n − y~m and z~n − z~m = w~n −w~m, and again by (2.3.2)and (2.3.7) we conclude that xˆ~n − yˆ~n = zˆ~n − wˆ~n.For i ∈ Zr and integers a, b with a− b ∈ 2Z, letNi(a, b) :=|{m ∈ (2Z+ a) ∩ (rZ+ i) | m ∈ [a, b)}| if a ≤ b−Ni(b, a) otherwiseHere Ni is the “net” number of crossings from (i+ rZ) to (i+ 2 + rZ) in a path going from a to bin steps of magnitude 2. Note thatNi(a, b) = Ni(a+ rn, b+ rn) (2.4.1)27for all a, b, n ∈ Z andNi(a, b) = Ni(a+ c, b+ c) (2.4.2)for all a, b, c ∈ Z such that a− b ∈ rZ ∩ 2Z .Proposition 2.4.2. For any r ∈ N \ {1, 2, 4} and d > 1, the space MZdXr of shift-invariant Markovcocycles on Xr has a linear basis{M0,M1, . . . ,Mr−1},where the cocycle Mi is given byMi(x, y) :=∑n∈ZdNi(xˆn, yˆn); (2.4.3)(xˆ, yˆ) ∈ ∆Ht is any pair which is mapped to (x, y) by φr. In particular dim(MZdXr) = r.Proof. By Lemma 2.3.3 different choices (xˆ, yˆ) ∈ ∆Ht which map to (x, y) via φr differ by afixed translation in rZ. Thus by (2.4.1) the values Mi(x, y) are independent of the choice of thecorresponding height functions (xˆ, yˆ) ∈ ∆Ht and hence are well defined.We will show that for i = 0, . . . , r − 1, Mi is indeed a Markov cocycle. Since Ni(a, c) =Ni(a, b) + Ni(b, c) whenever a ≡ b ≡ c mod 2, it follows that Mi(x, z) = Mi(x, y) + Mi(y, z)whenever x, y, z ∈ Xr are homoclinic. Thus Mi is a ∆Xr -cocycle. Clearly, Mi is shift-invariant.We now check that Mi satisfies the Markov property. This is equivalent to showing thatMi(x, y) = Mi(z, w) whenever x, y, z, w ∈ Xr satisfy the assumption in Lemma 2.4.1. In thiscase by Lemma 2.4.1, xˆ~n − yˆ~n = zˆ~n − wˆ~n for all ~n ∈ Zd. Also note that for any ~n ∈ Zd, eitherx~n = z~n and y~n = w~n in which case xˆ~n − zˆ~n = yˆ~n − wˆ~n ∈ rZ or x~n = y~n and z~n = w~n, inwhich case by Lemma 2.3.3 xˆ~n − yˆ~n = zˆ~n − wˆ~n ∈ rZ ∩ 2Z. By (2.4.1) and (2.4.2) in either caseNi(xˆ~n, yˆ~n) = Ni(zˆ~n, wˆ~n) and summing over the ~n’s, we get Mi(x, y) = Mi(z, w) as required.To complete the proof we need to show that all shift-invariant Markov cocycles on Xr can beuniquely written as a linear combination of M0, . . . ,Mr−1. For i ∈ {0, . . . , r−1} let (x(i), y(i)) ∈ ∆Xrsuch that x(i)~0 = i, y(i)~0= i + 2 mod r and x(i)~n = y(i)~n for all ~n ∈ Zd \ {~0}. Given a shift-invariantMarkov cocycle M , letαi := M(x(i), y(i)) (2.4.4)We claim that for all (x, y) ∈ ∆Xr :M(x, y) =r−1∑i=0αi ·Mi(x, y). (2.4.5)Since Xr has the pivot property (Proposition 2.3.4), by (2.2.2) it is sufficient to show that (2.4.5)holds for all (x, y) ∈ ∆Xr which differ only at a single site. 
By shift-invariance of M and the Mi’sit is further enough to show this for (x, y) which differ only at the origin ~0. In this case, we note28that (x, y) coincide with either (x(i), y(i)) or (y(i), x(i)) on the sites {~0} ∪ ∂{~0} for some i. Withoutloss of generality assume that (x, y) coincide with (x(i0), y(i0)) on the sites {~0}∪∂{~0}. Since M andthe Mi’s are Markov cocycles we have M(x, y) = M(x(i0), y(i0)) = αi0 andr−1∑j=0αj ·Mj(x, y) =r−1∑j=0αjMj(x(i0), y(i0)) =r−1∑j=0αjδi0,j = αi0 .Remark: Without the assumption of shift-invariance, a similar argument shows that anyMarkov cocycle on Xr is of the following form:M(x, y) =r−1∑i=0∑~n∈Zdai,~nNi(xˆ~n, yˆ~n) with ai,~n ∈ R for all ~n ∈ Zd, 0 ≤ i ≤ r − 1.We now describe the space GZdXr of Gibbs cocycles corresponding to shift-invariant nearestneighbour interactions for Xr.Proposition 2.4.3. Let r ∈ N \ {1, 2, 4} and d > 1. A shift-invariant Markov cocycle on Xr is aGibbs cocycle corresponding to a shift-invariant nearest neighbour interaction if and only if it is ofthe form M =∑r−1i=0 αiMi, with∑r−1i=0 αi = 0 and Mi’s as in Proposition 2.4.2 and α0, . . . , αr−1given by (2.4.4). In other words,GZdXr ={r−1∑i=0αiMi |r−1∑i=0αi = 0}.In particular, dim(GZdXr) = r − 1.Proof. Let M be a Gibbs cocycle given by a shift-invariant nearest neighbour interaction φ. Choose(x(i), y(i)) ∈ ∆Xr as in the proof of Proposition 2.4.2 so that αi = M(x(i), y(i)). Expanding theGibbs cocycle we have:M(x(i), y(i)) = φ([i+ 2]0)− φ([i]0)+d∑j=1(φ([i+ 2, i+ 1]j)− (φ([i, i+ 1]j) + φ([i+ 1, i+ 2]j)− φ([i+ 1, i]j)) .Summing these equations over i we get:r−1∑i=0αi =r−1∑i=0M(x(i), y(i)) = 029Conversely, for any values αi = M(x(i), y(i)) such that∑r−1i=0 αi = 0 it is easy to see that there isa corresponding nearest neighbour shift-invariant interaction φ: For instance, set φ([i, i+ 1]1) =−∑r−1k=i αk, φ([i, i+ 1]j) = 0 for j = 2, . . . , d and φ([i+ 1, i]j) = φ([i]0) = 0 for j = 1, . . . , d andi = 0, . . . , r − 1.Let Mˆ : ∆Xr → R be the Markov cocycle given byMˆ(x, y) :=∑n∈Zdyˆn − xˆn, (2.4.6)where (xˆ, yˆ) ∈ ∆Ht satisfy φr(xˆ) = x and φr(yˆ) = y. By the following we observe that Mˆ(x, y) =2∑r−1i=0 Mi(x, y) for all (x, y) ∈ ∆Xr :As in the proof of Proposition 2.4.2 it sufficient to verify this for (x, y) = (x(i0), y(i0)) where0 ≤ i0 ≤ r − 1. In that caseMˆ(x(i0), y(i0)) = yˆ(i0)0 − xˆ(i0)0 = 2and2r−1∑i=0Mi(x(i0), y(i0)) = 2r−1∑i=0δi0,i = 2.Corollary 2.4.4. Let r ∈ N \ {1, 2, 4} and d > 1. Any shift-invariant Markov cocycle M on Xrcan be uniquely written asM = M0 + αMˆwhere M0 is some Gibbs cocycle, α ∈ R and Mˆ is given by (2.4.6).Thus, the conclusion of the second part of the strong version of the Hammersley-Clifford The-orem regarding shift-invariant Markov cocycles fails for Xr. Our next proposition asserts thatthe conclusion of the first part of the strong version of the Hammersley-Clifford Theorem stillholds for Xr. This immediately implies the conclusion of the first part of the weak version of theHammersley-Clifford Theorem of Xr.Proposition 2.4.5. (MXr = GXr) Let r ∈ N\{1, 2, 4} and d > 1. Let M : ∆Xr → R be a Markovcocycle. There exists a nearest neighbour interaction φ, which is not necessarily shift-invariant, sothat M = Mφ.Proof. Given M ∈MXr , we will define a compatible nearest neighbour interaction φ as follows:The interaction φ will assign 0 to any single site pattern. For ~n = (n1 . . . , nd) ∈ Zd, and1 ≤ j ≤ d, let φ~n,j(a, b) denote the weight the interaction φ assigns to the pattern (a, b) on the edge(~n, ~n+ ej). Set φ~n,j(a, b) = 0 whenever j ∈ {2, . . . 
, d}. For ~n = (n_1, . . . , n_d) ∈ Z^d and i ∈ Z_r define recursively

  φ_{~n,1}(i, i+1) := 0 if n_1 ≤ 0, and φ_{~n,1}(i, i+1) := M(σ^{~n}y^{(i)}, σ^{~n}x^{(i)}) + φ_{~n−~e_1,1}(i+1, i+2) if n_1 > 0;

  φ_{~n,1}(i+1, i) := 0 if n_1 ≥ 0, and φ_{~n,1}(i+1, i) := M(σ^{~n+~e_1}y^{(i)}, σ^{~n+~e_1}x^{(i)}) + φ_{~n+~e_1,1}(i+2, i+1) if n_1 < 0,

where, as in the proof of Proposition 2.4.2, (x^{(i)}, y^{(i)}) ∈ ∆_{X_r} are such that x^{(i)}_{~0} = i mod r, y^{(i)}_{~0} = i+2 mod r and x^{(i)}_{~n} = y^{(i)}_{~n} for all ~n ∈ Z^d \ {~0}. To see that φ defines a nearest neighbour interaction for M, since X_r has the pivot property (Proposition 2.3.4) it is sufficient by (2.2.2) to verify that

  M(y, x) = ∑_{~n∈Z^d} ∑_{j=1}^{d} φ_{~n,j}(x_{~n}, x_{~n+~e_j}) − φ_{~n,j}(y_{~n}, y_{~n+~e_j})   (2.4.7)

for (y, x) ∈ ∆_{X_r} which differ at a single site ~n′ ∈ Z^d.

Then (y, x) coincide with either (σ^{~n′}y^{(i)}, σ^{~n′}x^{(i)}) or (σ^{~n′}x^{(i)}, σ^{~n′}y^{(i)}) on the sites {~n′} ∪ ∂{~n′} for some 0 ≤ i ≤ r − 1. Without loss of generality assume that (y, x) coincide with (σ^{~n′}y^{(i_0)}, σ^{~n′}x^{(i_0)}) on the sites {~n′} ∪ ∂{~n′} for some 0 ≤ i_0 ≤ r − 1. Since M is a Markov cocycle,

  M(y, x) = M(σ^{~n′}y^{(i_0)}, σ^{~n′}x^{(i_0)})
          = φ_{~n′,1}(i_0, i_0+1) − φ_{~n′−~e_1,1}(i_0+1, i_0+2) if n′_1 > 0, and
          = φ_{~n′−~e_1,1}(i_0+1, i_0) − φ_{~n′,1}(i_0+2, i_0+1) if n′_1 ≤ 0,

and in either case this equals ∑_{~n∈Z^d} ∑_{j=1}^{d} φ_{~n,j}(x_{~n}, x_{~n+~e_j}) − φ_{~n,j}(y_{~n}, y_{~n+~e_j}).

Combining the above results we obtain:

Corollary 2.4.6. Let r ∈ N \ {1, 2, 4} and d > 1. There exists a shift-invariant Markov cocycle on X_r which is given by a nearest neighbour interaction but not by a shift-invariant nearest neighbour interaction, that is, G^{Z^d}_{X_r} ≠ G_{X_r} ∩ M^{Z^d}_{X_r}.

2.5 Markov Random Fields on X_r Are Gibbs

Our main goal is to prove the following result:

Theorem 2.5.1. Let r ∈ N \ {1, 2, 4} and d > 1. Any shift-invariant Markov random field adapted to X_r is a Gibbs state for some shift-invariant nearest neighbour interaction. In particular, any shift-invariant Markov random field µ with supp(µ) = X_r is a Gibbs state for some shift-invariant nearest neighbour interaction.

Theorem 2.5.1 implies that the conclusion of the second part of the weak version of the Hammersley-Clifford Theorem holds for X_r, that is, Theorem 1.1.1, although the argument is very different from the safe-symbol case.

For a subshift X, a point x ∈ X will be called frozen if its homoclinic class is a singleton set. This notion coincides with the notion of frozen colouring in [6]. By Proposition 2.3.4, X_r has the pivot property, so x ∈ X_r is frozen if and only if for every ~n ∈ Z^d, x_{~j} ≠ x_{~k} for some ~j, ~k ∈ ∂{~n}, that is, any site is adjacent to at least two sites with distinct symbols. A subshift X will be called frozen if it consists of frozen points, equivalently if ∆_X is the diagonal. A measure on a subshift X will be called frozen if its support consists of frozen points. Note that the collection of frozen points of a given topological Markov field X is itself a topological Markov field.

We derive Theorem 2.5.1 as an immediate corollary of the following proposition:

Proposition 2.5.2. Let r ∈ N \ {1, 2, 4} and d > 1. Let µ be a shift-invariant Markov random field adapted to X_r with Radon-Nikodym cocycle equal to the restriction of e^M to its support, where M ∈ M^{Z^d}_{X_r} \ G^{Z^d}_{X_r} is a Markov cocycle which is not given by a shift-invariant nearest neighbour interaction. Then µ is frozen.

Note that any frozen probability measure is Gibbs with any nearest neighbour interaction because the homoclinic relation of the support of the measure is trivial. Hence this proposition proves that any shift-invariant Markov random field adapted to X_r is Gibbs for some nearest neighbour interaction and thus implies Theorem 2.5.1.
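The freezing criterion above is easy to test on a finite window. The following minimal Python sketch is purely illustrative — the torus representation, the function name and the two example configurations are assumptions made here, not objects from the thesis. It lists the pivot moves available in a configuration of X_r restricted to an n × n torus, so that a configuration is (locally) frozen precisely when the list is empty.

def pivot_moves(x, r):
    # x: a configuration of X_r restricted to an n x n torus, given as a list of lists of
    # symbols in Z_r.  Returns the available pivot moves as pairs (site, new_symbol).
    # By the characterisation above, x is (locally) frozen iff this list is empty, i.e.
    # iff every site has at least two neighbours carrying distinct symbols.
    n = len(x)
    moves = []
    for i in range(n):
        for j in range(n):
            nbrs = {x[(i - 1) % n][j], x[(i + 1) % n][j],
                    x[i][(j - 1) % n], x[i][(j + 1) % n]}
            if len(nbrs) == 1:              # all four neighbours carry the same symbol c
                c = nbrs.pop()
                for new in ((c + 1) % r, (c - 1) % r):
                    if new != x[i][j]:
                        moves.append(((i, j), new))
    return moves

# r = 3 on a 6 x 6 torus: the striped point x_(i,j) = (i + j) mod 3 is frozen,
# while the two-symbol chessboard-like point admits a pivot at every site.
striped = [[(i + j) % 3 for j in range(6)] for i in range(6)]
chessboard = [[(i + j) % 2 for j in range(6)] for i in range(6)]
assert pivot_moves(striped, 3) == []
assert len(pivot_moves(chessboard, 3)) > 0

For r = 3 the striped point x_{~n} = (n_1 + n_2) mod 3 is frozen, while the two-symbol chessboard-like point admits a pivot at every site.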
The intuition behind the proof of thisproposition is the following: For a Markov cocycle M =∑ri=1 αiMi the condition∑ri=1 αi > 0indicates an inclination to raise the height function. However shift-invariance implies the existenceof a well defined “slope” for the height function in all directions. Unless this slope is extremal, thatis, maximal (±‖~n‖1) in some direction ~n ∈ Zd \ {~0}, this will lead to a contradiction.In preparation for the proof, we set up some auxiliary results.2.5.1 Real Valued Cocycles for Measure-Preserving Zd-ActionsWe momentarily pause our discussion about Markov random fields on Xr to discuss cocycles formeasure-preserving Zd actions. Subcocycles and further generalisations will be discussed in Section4.5. Let (X,F , µ, T ) be an ergodic measure-preserving Zd-action. A measurable function c :X × Zd → R is called a T -cocycle if it satisfies the following equation µ-almost everywhere withrespect to x ∈ X:c(x, ~n+ ~m) = c(x, ~n) + c(T ~nx, ~m) ∀~n, ~m ∈ Zd. (2.5.1)32By (2.3.8) the function grad : Xr×Zd → R defined in (2.3.3) and (2.3.4) is indeed a shift-cocyclewith respect to the shift action on Xr.We will use the following lemma:Lemma 2.5.3. Let (X,F , µ, T ) be an ergodic measure-preserving Zd action and c : X × Zd → Rbe a measurable cocycle such that for all ~n ∈ Zd the function fc,~n(x) := c(x, ~n) is in L1(µ), then forall ~n = (n1, n2, . . . , nd) ∈ Zdlimk→∞c(x, k~n)k=∫c(x, ~n)dµ(x)=d∑i=1ni∫c(x,~ei)dµ(x).The convergence holds almost everywhere with respect to µ and also in L1(µ).Proof. By the cocycle equation (2.5.1) for µ-almost every x ∈ X, any k ∈ N and n ∈ Zd we have:c(x, k · ~n) =k−1∑i=0c(T i~nx, ~n).The existence of almost everywhere and L1 limit f¯(x) := limk→∞c(x,k~n)k follows from the pointwiseand L1 ergodic theorems. To complete the proof we need to show that the limit is constant almosteverywhere. We do this by showing that the limit is T -invariant. By the cocycle equation (2.5.1),for almost every x ∈ X and any m,n ∈ Zd and k ∈ N we have:c(x, k~n) = c(x, ~m+ k~n− ~m)= c(x, ~m) + c(T ~mx, k~n− ~m)= c (x, ~m) + c(T ~mx, k~n)+ c(T ~m+k~nx,−~m).Thus,|f¯(x)−f¯(T ~m(x))| ≤ lim supk→∞1k(|c(x, ~m)|+ |c(T ~m+k·~nx,−~m)|)≤ lim supk→∞1k|fc,~m(x)|+1k|fc,−~m(T~nkT ~mx)|.Since fc,~m, fc,−~m ∈ L1(µ), lim supk→∞1k |fc,~m(x)| and lim supk→∞1k |fc,−~m(Tk~nT ~mx)| are both equalto 0 almost everywhere (the second term vanishes because limk→∞ 1kg(Skx) = 0 a.e for g ∈ L1 andS measure-preserving). Therefore,limk→∞c(x, k · ~n)k=∫c(x, ~n)dµ(x)=d∑i=1ni∫c(x,~ei)dµ(x).33Remark: In the specific case that T is totally ergodic, meaning that the individual action ofeach T ~n is ergodic for all ~n ∈ Zd \ {~0}, the lemma above is completely obvious since c(x,k·~n)k =1k∑k−1j=0 c(Tj~nx, ~n), which is an ergodic average. The point of Lemma 2.5.3 is that ergodicity of theZd-action is sufficient for the limit to be constant.The cocycle grad : Xr × Zd → Z is not only measurable but also continuous. We can use this,along with compactness of Xr and the unit ball in Rd to obtain uniformity of the convergence withrespect to the “direction” on a set of full measure.For convenience we extend the definition of grad : X × Zd → R given by (2.3.3) and (2.3.4) toa function grad : X × Rd −→ R as follows:grad(x, ~w) := grad(x, b~wc) (2.5.2)where ~w = (w1, w2, . . . , wd) ∈ Rd and b~wc denotes (bw1c, bw2c, . . . , bwdc).Lemma 2.5.4. Let r ∈ N\{1, 2, 4} and d > 1. Let µ be an ergodic measure on Xr. Then µ-almostsurelylimk→∞sup‖~w‖1=k1k|grad(x, ~w)− 〈~w,~v〉| = 0wherevj :=∫grad(x,~ej)dµ(x) , ~v := (v1, . 
. . , vd),the supremum is over {~w ∈ Rd | ‖~w‖1 = k}, and 〈~n,~v〉 =∑di=1 nivi is the standard inner product.Proof. LetE :={x ∈ Xr | lim supk→∞sup‖~w‖1=k1k|grad(x, ~w)− 〈~w,~v〉| > }.We will prove the lemma by showing that µ(E) = 0 for all  > 0.Fix  > 0. Since Qd is dense in Rd, using compactness of the unit ball in (Rd, ‖ · ‖1), we canfind a finite F ⊂ Qd which 8 -covers the unit ball. By this we mean that for all ~w ∈ Rd such that‖~w‖1 = 1 there exists ~u ∈ F so that ‖~w − ~u‖1 ≤ 8 . Because F ⊂ Qd is finite, there exists M ∈ Nso that M~u ∈ Zd for all ~u ∈ F . By Lemma 2.5.3, there exists a measurable set X ′ ⊂ Xr withµ(X ′) = 1 so that for all ~u ∈ F , x ∈ X ′limk→∞1Mkgrad(x,Mk~u) = 〈~u,~v〉.To complete the proof we will prove that X ′ ⊂ Ec . Given x ∈ X′ we can find an integer34J > 8M−1 so that for all k > JM and all ~u ∈ F∣∣∣∣1Mkgrad(x,Mk~u)− 〈~u,~v〉∣∣∣∣ <8.Note that for all j > J ∣∣∣∣j −M⌊jM⌋∣∣∣∣ ≤8jConsider some ~w ∈ Rd such that ‖~w‖1 = j > J . We can find ~u ∈ F so that ‖ ~w‖~w‖1 − ~u‖1 ≤8 . Letk˜ := b jM c. Then‖~w − k˜M~u‖1 ≤ ‖~w − j~u‖1 + |j − k˜M |‖~u‖1 ≤4j. (2.5.3)Now observe that |grad(x, ~w)| ≤ ‖~w‖1 for all x ∈ Xr, ~w ∈ Zd. From the cocycle property (2.3.8) itfollows that for all ~n, ~m ∈ Zd, x ∈ Xr:grad(x, ~n) = grad(x, ~m) + grad(σ ~mx, ~n− ~m).Therefore for all ~w′, ~u′ ∈ Rd|grad(x, ~w′)− grad(x, ~u′)| ≤ ‖~w′ − ~u′‖1 + 2d.Applying the above inequality with ~w′ = ~w and ~u′ = k˜M~u it follows using (2.5.3)∣∣∣grad(x, ~w)− grad(x, k˜M~u)∣∣∣ ≤4j + 2d.Also, since ‖~v‖∞ ≤ 1, it follows using (2.5.3) that∣∣∣〈~w,~v〉 − 〈k˜M~u,~v〉∣∣∣ <4jwhich yields that for sufficiently large j,1j|grad(x, ~w)− 〈~w,~v〉| < .This proves that X ′ ⊂ Ec .2.5.2 Maximal Height FunctionsFor xˆ ∈ Ht and a finite F ⊂ Zd, letHtxˆ,F := {yˆ ∈ Ht | yˆ~n = xˆ~n if ~n 6∈ F}.Consider the partial ordering on Htxˆ,F given by yˆ ≥ zˆ if yˆ~n ≥ zˆ~n for all ~n ∈ Zd.35Lemma 2.5.5. Let xˆ ∈ Ht and N ∈ N be given. Then (Htxˆ,F ,≥) has a maximum. If the maximumis attained by the height function yˆ then for all ~n ∈ F :yˆ~n = min{xˆ~k + ||~n−~k||1 | ~k ∈ ∂F}.Proof. We will first prove that given height functions yˆ, zˆ ∈ Htxˆ,F the function wˆ defined bywˆ~i := max(yˆ~i, zˆ~i).is an element of Htxˆ,F .To see that wˆ is a valid height function, we will show that |wˆ~i − wˆ~j | = 1 for all two adjacentsites ~i,~j ∈ Zd. If (wˆ~i, wˆ~j) = (yˆ~i, yˆ~j) or (wˆ~i, wˆ~j) = (zˆ~i, zˆ~j) then |wˆ~i − wˆ~j | = 1 because yˆ, zˆ ∈ Ht.Otherwise, we can assume without the loss of generality that wˆ~i = yˆ~i > zˆ~i and wˆ~j = zˆ~j > yˆ~j . ByLemma 2.3.3, because (yˆ, zˆ) ∈ ∆Ht we haveyˆ~i − zˆ~i, yˆ~j − zˆ~j ∈ 2Z.Thus yˆ~i ≥ zˆ~i + 2 and zˆ~j ≥ yˆ~j + 2. Since yˆ, zˆ ∈ Ht we have:yˆ~j + 1 ≥ yˆ~i ≥ zˆ~i + 2 ≥ zˆ~j + 1 ≥ yˆ~j + 3,a contradiction.We conclude that wˆ ∈ Ht. Also wˆ~i = max(yˆ~i, zˆ~i) = xˆ~i for all~i ∈ F c. Hence wˆ ∈ Htxˆ,F .Since Htxˆ,F is finite, it has a maximum.Suppose the maximum is attained by a height function yˆ. Let ~i ∈ F , ~k ∈ ∂F and (~i1 =i),~i2,~i3, . . . ,~ip, (~ip+1 = ~k) be a shortest path between ~i and ~k. Thenyˆ~i =p∑t=1yˆ~it − yˆ~it+1 + yˆ~k =p∑t=1yˆ~it − yˆ~it+1 + xˆ~k.Therefore yˆ~i ≤ ||~i− ~k||1 + xˆ~k which proves thatyˆ~i ≤ min{xˆ~k + ||~i− ~k||1 | ~k ∈ ∂F}. (2.5.4)For proving the reverse inequality, note that if yˆ has a local minimum at some ~n ∈ F then theheight at ~n can be increased. Since yˆ is the maximum height function, for each ~n ∈ F at least oneof the adjacent sites ~m must satisfy yˆ~n − yˆ~m = 1. Iterating this argument, for all ~i ∈ F , we canchoose a path ~j1,~j2,~j3, . . . 
,~jp+1, with ~j1 =~i, ~j2, . . . ,~jp ∈ F , ~jp+1 ∈ ∂F along which yˆ is increasing:36yˆ~jt − yˆ~jt+1 = 1 for all t ∈ {1, 2, . . . , p}. Thenyˆ~i =p∑t=1yˆ~jt − yˆ~jt+1 + yˆ~jp+1 ≥ ||~i−~jp+1||1 + yˆ~jp+1 .Combining with the inequality (2.5.4), we getyˆ~i = min{xˆ~k + ||~i− ~k||1 | ~k ∈ ∂F}.Consider a shift-invariant Markov cocycle M ∈ MZdXr . Recall that by Corollary 2.4.4 thereexists α ∈ R and a Gibbs cocycle M0 ∈ GZdXr compatible with a shift-invariant nearest neighbourinteraction such that M = M0 + αMˆ . The following lemma is based on the idea that in the “non-Gibbsian” case α 6= 0, whenever yˆ is much bigger than xˆ, M(x, y) is roughly α times the ‘volume’of the (d+ 1)-dimensional ‘shape’ bounded between the graphs of yˆ and xˆ in Zd × Z.For N ∈ N, letDN :={~n ∈ Zd | ‖~n‖1 ≤ N}(2.5.5)be the ball of radius N centered at the origin in the standard Cayley graph of Zd. Also, denote:SN :={~n ∈ Zd | ‖~n‖1 = N}(2.5.6)Note that SN = ∂(DcN ) = ∂DN−1.Lemma 2.5.6. Fix r ∈ N \ {1, 2, 4} and d > 1. Let M = M0 + αMˆ be a shift-invariant Markovcocycle on X(d)r where M0 ∈ GZdX(d)ris a Gibbs cocycle compatible with a shift-invariant nearestneighbour interaction, Mˆ is the Markov cocycle given by (2.4.6) and α > 0. Then there exist apositive constant c1 > 0 (depending only on d) and another positive constant c2 > 0 (dependingonly on d and M0) such that for all N ∈ NM(x, y) ≥ c1α(yˆ~0 − xˆ~0)d+1 − c2 ·Ndfor all (xˆ, yˆ) ∈ ∆Ht satisfying xˆ ≤ yˆ, x = φ(xˆ), y = φ(yˆ) and x|DcN = y|DcN .Proof. Let M = M0 +αMˆ , (x, y) ∈ ∆Xr and (xˆ, yˆ) ∈ ∆Ht be as given in the lemma. First we showthat there exists a suitable constant c1 > 0 (depending on d) so thatMˆ(x, y) ≥ c1(yˆ~0 − xˆ~0)d+1. (2.5.7)Assume that yˆ~0 − xˆ~0 > 0. Denote:K :=yˆ~0 − xˆ~02(2.5.8)37Recall that (xˆ, yˆ) ∈ ∆Ht, so by Lemma 2.3.3, yˆ~n − xˆ~n ∈ 2Z for all ~n ∈ Zd. In particular, K is aninteger. Since yˆ~n − xˆ~n ≥ 0 we have:Mˆ(x, y) ≥∑~n∈DN(yˆ~n − xˆ~n) ≥K∑j=0∑~n∈Sj(yˆ~n − xˆ~n).Since yˆ~n − xˆ~n ≥ yˆ~0 − xˆ~0 − 2‖~n‖1:K∑j=0∑~n∈Sj(yˆ~n − xˆ~n) ≥K∑j=0|Sj |(yˆ~0 − xˆ~0 − 2j).Finally the estimates|Sj | ≥ |{~n ∈ Sj | ~n > 0}| =(j + d− 1d− 1)≥1d!jd−1giveMˆ(x, y) ≥ 1d!∑Kj=0 jd−1(2K − 2j) ≥1d!b2K/3c∑j=dK/3ejd−1(2K − 2j)≥ 1d!K3(K3)d−1 2K3 ≥ (6d)−(d+1)(yˆ0 − xˆ0)d+1proving (2.5.7) with c1 = (6d)−(d+1).Let φ be the shift-invariant nearest neighbour interaction corresponding to M0. We will showthat there exists a suitable constant c2 > 0 (depending on M0 and d) so that |M0(x, y)| ≤ c2Nd.By expressing M0 in terms of its interaction we see that|M0(x, y)| ≤∑C∩DN 6=∅|φ(y|C)− φ(x|C)| ≤∑~n∈DN∑C∩{~n}6=∅|φ(x|C)− φ(y|C)|,where C ranges over all the cliques (edges and vertices) in Zd. It follows that |M0(x, y)| ≤ c′2|DN |wherec′2 := (4d+ 2) sup{|φ(x|C)| | x ∈ Xr and C is a clique in Zd}.Since |DN | ≤ (2N + 1)d, it follows that |M0(x, y)| ≤ c2Nd with c2 := 3dc′2.Putting everything together, we conclude thatM(x, y) ≥ αMˆ(x, y)− |M0(x, y)| ≥ c1α(yˆ~0 − xˆ~0)d+1 − c2 ·Nd.38Under the same hypothesis except with α < 0, we get,M(x, y) ≤ c1α · (yˆ~0 − xˆ~0)d+1 + c2Ndfor the same constants c1, c2 > 0.Lemma 2.5.7. Let r ∈ N \ {1, 2, 4} and d > 1. Let µ be a shift-invariant measure on Xr and~n ∈ Zd such that ‖~n‖1 = 1. If ∣∣∣∣∫grad(x, ~n)dµ(x)∣∣∣∣ = 1,then µ is frozen.Proof. If∣∣∫grad(x, ~n)dµ(x)∣∣ = 1 then either µ({x ∈ Xr | x~0 − x~n = 1 mod r}) = 1 or µ({x ∈Xr | x~0 − x~n = −1 mod r}) = 1. In the first case it follows that µ-almost surely x~m−~n = x~m + 1mod r and x~m+~n = x~m − 1 mod r for all ~m ∈ Zd, so µ-almost surely x is frozen. 
The second caseis similar.In the course of our proof, it will be convenient to restrict to ergodic shift-invariant Markovrandom fields. The following claim justifies this:Theorem 2.5.8. All shift-invariant Markov random fields µ with specification Θ are in the closureof the convex hull of the ergodic shift-invariant Markov random fields with specification Θ.Proof. See Theorem 14.14 in [22].We now proceed to complete the proof of Proposition 2.5.2.Proof. Since a convex combination of frozen measures is frozen, by Theorem 2.5.8 it suffices toprove that any ergodic Markov random field µ adapted to Xr with its Radon-Nikodym cocycleequal to eM on its support where M = M0 + αMˆ (as in Corollary 2.4.4) and α 6= 0 is frozen.Choose any µ satisfying the above assumptions, assuming without loss of generality that α > 0.Letvj :=∫grad(x,~ej)dµ(x) for j = 1, . . . , d. (2.5.9)If |vj | = 1 for some 1 ≤ j ≤ d, it follows from Lemma 2.5.7 that µ is frozen. We can thus assumethat |vj | < 1 for all 1 ≤ j ≤ d. Choose  > 0 satisfying  < 14 min{1− |vj | | 1 ≤ j ≤ d}.For k ∈ N, letAk ={x ∈ Xr | sup‖~w‖1=k1k|grad(x, ~w)− 〈~w,~v〉| < }.39By Lemma 2.5.4 for sufficiently large k, µ(Ak) > 1 − . Consider x ∈ Ak ∩ supp(µ) such thatµ(Ak | [x]∂Dk−1) > 1− 2. Then for all y ∈ Ak satisfying y|Dck−1 = x|Dck−1 and ~n ∈ ∂Dk−1− grad(y, ~n) = yˆ~0 − yˆ~n ≤ −〈~n,~v〉+ k < (1− )k. (2.5.10)Choose zˆ which is maximal in Htxˆ,Dk−1 and let z = φr(zˆ). It follows from Lemma 2.5.5 that forsome ~n ∈ ∂Dk−1,zˆ~0 − xˆ~n = ‖~n‖1 = k. (2.5.11)Since x|Dck−1 = y|Dck−1 = z|Dck−1 we can assume by Lemma 2.3.3 thatxˆ|Dck−1 = yˆ|Dck−1 = zˆ|Dck−1 . (2.5.12)(2.5.10) together with (2.5.11) and (2.5.12) imply that zˆ~0 − yˆ~0 ≥ k for all y ∈ Ak satisfyingy|Dck−1 = z|Dck−1 . Thus, by Lemma 2.5.6M(y, z) > c1α(k · )d+1 − c2kd > c3kd+1,the last inequality holding for some c3 > 0, when k is sufficiently large.It follows thatµ([z]Dk−1 | [x]∂Dk−1) ≥ µ([y]Dk−1 | [x]∂Dk−1)ec3kd+1 .We can write Ak ∩ [x]∂Dk−1 =⋃y([y]Dk−1 ∩ [x]∂Dk−1), where the union is over all y ∈ LDk−1(Xr)such that [y]Dk−1 ∩Ak ∩ [x]∂Dk−1 6= ∅. There are at most |LDk−1∪∂Dk−1(Xr)| = eO(kd) terms in theunion above soµ(Ak | [x]∂Dk−1) ≤ eO(kd)e−c3kd+1µ([z]Dk−1 | [x]∂Dk−1).It follows that µ(Ak | [x]∂Dk−1) → 0 as k → ∞. For k sufficiently large this would contradict ourchoice of x, for which µ(Ak | [x]∂Dk−1) > 1− 2.2.6 Non-Frozen Adapted Shift-Invariant Markov Random Fieldson Xr Are Fully-SupportedWe have concluded that any shift-invariant Markov random field which is adapted with respect toXr is a Gibbs measure for some shift-invariant nearest neighbour interaction. Our next goal is toshow that any such measure must be fully-supported on Xr.Proposition 2.6.1. Let r ∈ N \ {1, 2, 4}, d > 1 and µ be a shift-invariant MRF adapted withrespect to Xr. Then either supp(µ) = Xr or µ is frozen.Roughly speaking we shall show that for non-frozen shift-invariant Markov random fields theheight function corresponding to a typical point is “not very steep”. Given a height function that40is “not very steep”, there is enough flexibility to “deform” the height function while keeping thevalues fixed outside some finite set. For an adapted Markov random field, the “deformed heightfunction” corresponds to a point in the support as well, which will be the key to proving the requiredresult. Somewhat related methods can be found in Section 4.3 of [48]. This is further generalisedin Theorem 4.2.4.We first introduce some more notation. For x ∈ Xr and a finite set F ⊂ Zd denote:RangeF (x) := max~n∈Fgrad(x, ~n)−min~n∈Fgrad(x, ~n). 
(2.6.1)Given A ⊂ Zd, xˆ ∈ Ht(A) and a finite set F ⊂ A ⊂ Zd, we define:RangeF (xˆ) := max~n∈Fxˆ~n −min~n∈Fxˆ~n. (2.6.2)It follows that if xˆ ∈ Ht and x ∈ Xr are such that x = φr(xˆ) then for all finite F ⊂ Zd, RangeF (x) =RangeF (xˆ).Lemma 2.6.2. (“Extremal values of height obtained on the boundary”) Let F ⊂ Zd be afinite set and xˆ ∈ Ht such that Range∂F (xˆ) > 2. Then there exists yˆ ∈ Ht such that yˆ~n = xˆ~n forall ~n ∈ F c andRangeF (yˆ) = Range∂F (yˆ)− 2 = Range∂F (xˆ)− 2.Proof. DenoteT := max~n∈∂Fxˆ~n and B := min~n∈∂Fxˆ~n.Letκ = κ(xˆ, F ) :=∑~n∈Fmax(xˆ~n − T + 1, B − xˆ~n + 1, 0).The number κ is the absolute value for the deviations of xˆ|F from the (open) interval (B, T ).We prove the claim by induction on κ. If κ = 0 then y = x already satisfies the conclusion of thislemma because B + 1 ≤ xˆ~n ≤ T − 1 for all ~n ∈ F which implies thatRangeF (xˆ) = max~m∈Fxˆ~m − min~m∈Fxˆ~m ≤ ( max~m∈∂Fxˆ~m − 1)− ( min~m∈∂Fxˆ~m + 1)= Range∂F (xˆ)− 2.Now suppose κ > 0, and let n ∈ F be a coordinate where xˆ obtains an extremal value forF ∪ ∂F . Without loss of generality suppose,xˆ~n = max~m∈F∪∂Fxˆ~m.Since all neighbours of ~n are in F ∪ ∂F , it follows that xˆ~m = xˆ~n − 1 for all ~m adjacent to ~n.41Therefore we have yˆ ∈ Ht given byyˆ~m :=xˆ~m − 2 for ~m = ~nxˆ~m otherwise.Since Range∂F (xˆ) > 2, it follows that yˆn is neither a minimum nor a maximum for yˆ in F ∪ ∂F .Thus κ(yˆ, F ) < κ(xˆ, F ) and so we can apply the induction hypothesis on yˆ and conclude theproof.Lemma 2.6.3. (“Flat extension of an admissible pattern”) Let xˆ ∈ Ht and N ∈ N. Thenthere exists yˆ ∈ Ht such that yˆ~n = xˆ~n for ~n ∈ DN+1 andRange∂DN+k(yˆ) = Range∂DN (xˆ)− 2k,for all 1 ≤ k ≤Range∂DN (xˆ)2 .Proof. We will prove the following statement by induction on M ∈ N: For all N ∈ N andheight functions xˆ ∈ Ht(DN+1+M ) with Range∂DN xˆ = 2M there exists a height function yˆ ∈Ht(DN+1+M ) such that yˆ~n = xˆ~n for all ~n ∈ DN+1 and 1 ≤ k ≤ M , Range∂DN+k(yˆ) = 2M − 2k.Observe that the height function yˆ satisfies in particular Range∂DN+M (yˆ) = 0. Thus, the outermostboundary of yˆ is flat and it can be extended to a height function on Zd, so the lemma will followimmediately once we prove the statement above for all M ∈ N.For the base case of the induction, there is nothing to prove.Assume the result for some M ∈ N. Let xˆ ∈ Ht be a height function such that Range∂DN xˆ =2(M + 1). Denote N˜ := N + 1 + (M + 1) = N + M + 2. Let ~n ∈ DN˜ \DN+1 be a site where xˆobtains an extremal value for DN˜ \DN . If there is no such site thenRange∂DN+1(xˆ) = max~m∈∂DN+1xˆ~m − min~m∈∂DN+1xˆ~m= ( max~m∈∂DNxˆ~m − 1)− ( min~m∈∂DNxˆ~m + 1)= 2Mproving the induction step for that case. Without loss of generality we assume that it is a maximum,that is,xˆ~n = max~m∈DN˜\DNxˆ~m.Then the function ˆ˜y given byˆ˜y~m =xˆ~n − 2 if ~m = ~nxˆ~m otherwise42is a valid height function on DN˜ . Hence we have lowered the height function at the site ~n of xˆ.Repeating the steps for sites with extremal height (formally, this is another internal induction, seefor example the proof of Lemma 2.6.2), a height function zˆ can be obtained on DN˜ such that zˆ = xˆon DN+1 andRange∂DN+1(zˆ) = 2M.Thus we can apply the induction hypothesis to zˆ, substituting N + 1 for N to obtain a heightfunction yˆ on DN˜ such that yˆ = xˆ on DN+1 andRange∂DN+k(yˆ) = Range∂DN (xˆ)− 2kfor 1 ≤ k ≤Range∂DN (xˆ)2 .This completes the proof of the statement.Lemma 2.6.4. (“Patching an arbitrary finite pattern inside a non-steep point”) Let r 6=1, 2, 4 be a positive integer, d > 1 and N, k ∈ N. 
Choose y ∈ Xr which satisfies Range∂D2N+2r+k+1(y) ≤2k and some x ∈ Xr. Then:1. If either r is odd or x~n − y~n is even for all ~n ∈ Zd, then there exists z ∈ Xr such thatz~n =x~n if ~n ∈ DNy~n if ~n ∈ Dc2N+2r+k+12. If r is even and x~n − y~n is odd for all ~n ∈ Zd , then there exists z ∈ Xr such thatz~n =x~n+~e1 if ~n ∈ DNy~n if ~n ∈ Dc2N+2r+k+1The idea of this proof lies in the use of Lemmata 2.6.2 and 2.6.3. Given any pattern on DNwe can extend it to a pattern on D2N with flat boundary which can be then extended to y a littlefurther away provided the range of y is not too large. There is a slight technical issue we deal within the proof below in case r is even. This is essentially due to the fact that for r even we havex~n − x~m ≡ ‖~n− ~m‖1 mod 2 for all x ∈ Xr and ~n, ~m ∈ Zd.Proof. By applying Lemma 2.6.2 k−1 times we conclude that there exists y′ ∈ Xr such that y′ = yon Dc2N+2r+k+1 andRange∂D2N+2r+2(y′) = 2.43For all ~n, ~m ∈ ∂D2N+2r+2, ||~n− ~m||1 is even, therefore yˆ′~n and yˆ′~m have the same parity. Thus,there exist two values c, d ∈ Zr which y′ takes on ∂D2N+2r+2. Consider a ∈ Zr which is adjacent(for the Cayley graph of Zr) to both c and d. Then the configuration y(1) given byy(1)~n :=y′~n if ‖~n‖1 ≥ 2N + 2r + 3a if ‖~n‖1 ≤ 2N + 2r + 2 is evenc if ‖~n‖1 ≤ 2N + 2r + 2 is oddis an element of Xr. By Lemma 2.6.3 choose x(1) ∈ Xr such that x(1)|DN = x|DN andRange∂D2N−1(x(1)) = 0.Equivalently, there exists b ∈ Zr so that x(1)n = b for all n ∈ ∂D2N−1.If we are in case (1), either r is even and b ≡ a mod 2 or r is odd. Either way, there is someinteger k ∈ [0, . . . r − 1] such that a+ k ≡ b− k mod r. Thus, we can find y(2), x(2) ∈ Xr, so thaty(2) agrees with y(1) in Dc2N+2r, x(2) agrees with x(1) in D2N , and so that both x(2) and y(2) havea common constant value a + k = b + (r − k) mod r on ∂D2N+r−k−1. Thus, we get the requiredz ∈ Xr by settingz~n :=x(2)~n for ~n ∈ D2N+r−k−1y(2)~n for ~n ∈ Dc2N+r−k−1To prove case (2) we follow the same procedure, substituting x by σ~e1(x).We can now conclude the proof of Proposition 2.6.1:Proof. Let µ be a shift-invariant Markov random field and v1, . . . , vd be given by (2.5.9).Assume that supp(µ) is not frozen. Then by Lemma 2.5.7, |vj | < 1 for all 1 ≤ j ≤ d. Again,choose  > 0 satisfying  < 14 min{1− |vj | | 1 ≤ j ≤ d}.We need to show that for all N ∈ N and patterns c ∈ LDN (Xr), µ([c]DN ) > 0.From Lemma 2.5.4 it follows that for sufficiently large k,µ({y ∈ Xr | Range∂Dk(y) ≤ 2(1− )k}) > 1− .Now choose k > (2N + 2r + 1) large enough so that there exists y ∈ supp(µ) withRange∂Dk(y) ≤ 2(1− )k ≤ 2(k − (2N + 2r + 1)).By Lemma 2.6.4, it follows that there exists z ∈ Xr with z~n = y~n for ~n ∈ Zd \ Dk and z~n = c~nfor ~n ∈ DN . Since µ is an adapted Markov random field it follows that z ∈ supp(µ), in particular44µ([c]DN ) > 0.2.7 Fully Supported Shift-Invariant Gibbs Measures on XrNext we demonstrate the existence of a fully-supported shift-invariant Gibbs measure for shift-invariant nearest neighbour interactions on Xr. We will obtain such measures by showing thatequilibrium measures for certain interactions are non-frozen and thus are fully supported by Propo-sition 2.6.1. To state and prove this result, we need to introduce (measure-theoretic) pressure andequilibrium measures and apply a theorem of Lanford and Ruelle relating equilibrium measures andGibbs measures. Our presentation is far from comprehensive, and is aimed to bring only definitionsnecessary for our current results. 
We refer readers seeking background on pressure and equilibriummeasures to the many existing textbooks on the subject, for instance [47, 58].Let µ be a shift-invariant probability measure on a shift of finite type X. The measure theoreticentropy can be defined byhµ := limN→∞1|DN |HDNµ , (2.7.1)where DN was defined in (2.5.5) andHDNµ :=∑a∈LDN (X)−µ([a]DN ) logµ([a]DN ), (2.7.2)with the understanding that 0 log 0 = 0.Given a continuous function f : X → R, the measure-theoretic pressure of f with respect to µis given byPµ(f) :=∫fdµ+ hµ.A shift-invariant probability measure µ is an equilibrium state for f if the maximum of ν 7→ Pν(f)over all shift-invariant probability measures is attained at µ. The existence of an equilibrium statefor any continuous f follows from upper-semi-continuity of the function ν 7→ Pν(f) with respect tothe weak-∗ topology.Let φ be a nearest neighbour interaction on X. As in [47] define a function fφ : X −→ R byfφ(x) :=∑A finite | ~0∈A⊂Zd1|A|φ(x|A). (2.7.3)The following is a restricted case of a classical theorem by Lanford and Ruelle:Theorem 2.7.1. (Lanford-Ruelle Theorem [27, 47]) Let X be a Zd-shift of finite type andφ a shift-invariant nearest neighbour interaction. Then any equilibrium state µ for fφ is a Gibbsstate for the given interaction φ adapted to X.45The statement of the Lanford-Ruelle theorem (in [27, 47]) does not explicitly mention theadaptedness assumption because for them Gibbs states are always adapted. The topological entropyof a Zd-subshift X is given byh(X) := limk→∞1|Dk|log |LDk(X)|.We recall the well known variational principle for topological entropy of Zd-actions, which (inparticular) asserts that h(X) = supν hν whenever X is a Zd-shift space and the supremum is overall probability measures on X (Theorem 3.12 in [47]).Lemma 2.7.2. Let µ be shift-invariant, frozen Markov random field on AZd, then hµ = 0.Proof. Consider Xµ := supp(µ). This is a shift-invariant topological Markov field, consisting offrozen points. Thus for all finite F ⊂ Zd, |LF (Xµ)| ≤ |L∂F (Xµ)| . In particular,log |LDk(Xµ)| ≤ log |L∂Dk(Xµ)| ≤ Ckd−1.It follows that h(Xµ) = 0, so by the variational principle hµ = 0.Lemma 2.7.3. Let M be a Gibbs cocycle on Xr with a shift-invariant nearest neighbour interaction.Then there exists a shift-invariant nearest neighbour interaction φ such that M = Mφ and anyequilibrium measure for fφ is non-frozen.Proof. Let (x(i), y(i)) ∈ ∆Xr be as in the proof of Proposition 2.4.2. If M ∈ GXr then there existsa shift-invariant nearest neighbour interaction φ so thatM(x(i), y(i)) = φ([i+ 2]0)− φ([i]0)+d∑j=1(φ([i+ 2, i+ 1]j)− φ([i+ 1, i]j) + φ([i+ 1, i+ 2]j)− φ([i, i+ 1]j)) .Consider the nearest neighbour interaction φ˜ given byφ˜ ([i+ 1, i]j) := φ˜ ([i, i+ 1]j) :=12dd∑j=1(φ([i+ 1, i]j) + φ([i, i+ 1]j)) + φ([i]0)for all i ∈ Zr and j ∈ {1, . . . , d}, and φ˜([i]0) = 0 for all i ∈ Zr.It follows that M(x(i), y(i)) = Mφ˜(x(i), y(i)) for all i ∈ Zr and so M = Mφ˜.Thus we can assume without loss of generality that φ = φ˜ satisfiesφ([i, i+ 1]j) = φ([i+ 1, i]j) = ai for all i ∈ Zr and j ∈ {1, 2, . . . , d}46and φ([i]0) = 0 for all i ∈ Zr.By (2.7.3):∫fφ(x)dµ(x) =∫12d∑j=1(φ([x−ej , x0]j) + φ([x0, xej ]j))µ(x)Thus:∫fφ(x)dµ(x) =d∑j=1r−1∑i=0φ([i, i+ 1]j)µ([i, i+ 1]j) + φ([i+ 1, i]j)µ([i+ 1, i]j)=d∑j=1r−1∑i=0ai(µ([i, i+ 1]j) + µ([i+ 1, i]j)).Let a = max1≤i≤r ai attained by ai0 . It follows that for any shift-invariant probability measure∫fφ(x)dµ(x) ≤ d · awith equality holding iff µ([i, i+ 1]j) = µ([i+ 1, i]j) = 0 for all ai < a and j = 1, . . . 
, d.

For a frozen measure µ it follows that for some j ∈ {1, 2, . . . , d}, µ([i, i+1]_j) > 0 or µ([i+1, i]_j) > 0 for all i ∈ {0, 1, . . . , r − 1}. Thus if a_i < a for some 0 ≤ i ≤ r − 1, it follows that for any frozen measure µ,

  ∫ f_φ(x) dµ(x) < sup_ν ∫ f_φ(x) dν(x),   (2.7.4)

where the supremum is attained by the measure supported on the orbit of the periodic point x ∈ X_r given by

  x_{~n} := i_0 if ‖~n‖_1 is odd, and x_{~n} := i_0 + 1 if ‖~n‖_1 is even.

By Lemma 2.7.2, if µ is frozen then h_µ = 0. Thus in this case, by (2.7.4), for any frozen probability measure µ

  P_µ(f_φ) = ∫ f_φ(x) dµ(x) < sup_ν ∫ f_φ(x) dν(x) ≤ sup_ν P_ν(f_φ),

and in particular no frozen measure µ can be an equilibrium measure for f_φ.

The remaining case is when a_i = a for all i, in which case f_φ(x) = d · a is constant. Thus, by the variational principle, sup_ν P_ν(f_φ) = d · a + sup_ν h_ν = d · a + h(X_r). Since h(X_r) > 0, it follows that the strict inequality P_µ(f_φ) < sup_ν P_ν(f_φ) holds also in this case for any frozen measure µ. Thus we have the result that for a given Gibbs cocycle with a shift-invariant nearest neighbour interaction there exists an interaction for that cocycle such that the corresponding equilibrium state is not frozen.

Corollary 2.7.4. Let r ∈ N \ {1, 2, 4} and d > 1. For every shift-invariant Gibbs cocycle M on X_r there exists a shift-invariant nearest neighbour interaction φ on X_r with M = M_φ and a corresponding shift-invariant Gibbs state ν with supp(ν) = X_r.

Proof. By Lemma 2.7.3, there exists a shift-invariant nearest neighbour interaction φ on X_r with M = M_φ and an equilibrium measure µ for f_φ which is non-frozen. By the Lanford-Ruelle Theorem such a µ is a Gibbs state for φ, and by Proposition 2.6.1 it is fully supported.

2.8 “Strongly” Non-Gibbsian Shift-Invariant Markov Random Fields

In this section we will prove Theorem 1.1.2, that is, demonstrate the existence of shift-invariant Markov random fields whose specification is not given by any shift-invariant finite range interaction. In contrast to the case of Gibbs measures with shift-invariant finite range interactions, our example shows that in general the specification of a shift-invariant Markov random field cannot be “given by a finite number of parameters”. Our construction is somewhat similar to the checkerboard island system introduced in [44].

Let A be the alphabet consisting of the 18 ‘tiles’ illustrated in Figure 2.1: a blank tile, a “seed” tile (marked with a “0”), 8 “interior arrow tiles” (4 of them have arrows in the coordinate directions and the rest are “corner tiles”) and finally 8 additional “border arrow tiles” (marked by an extra symbol “B”).

Figure 2.1: The Alphabet A

All tiles other than the blank tile will be called square tiles and the tiles with arrows will be called arrow tiles. The arrow tiles with ‘B’ will be called border tiles and those without ‘B’ will be called interior tiles. Configurations of a (2n+1) × (2n+1) square shape whose inner boundary consists of border tiles, as illustrated by the example in Figure 2.2, will be called an n-square-island. A square-island refers to an n-square-island for some n.

Figure 2.2: A 3-Square-Island

Informally, the idea is to have the square tiles form square-islands with the seed tile in the center and border tiles on their boundary, “floating in the sea of the blank tiles”.
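Before listing the adjacency constraints it may help to see a square-island written out explicitly. The following Python sketch is illustrative only: the tile encoding and the particular clockwise orientation are assumptions chosen here to match the informal description, not data read off from Figure 2.2. It places the seed at the centre, concentric clockwise rings of interior arrow tiles around it, and border tiles on the outermost ring.

def square_island(n):
    # Returns a dict mapping each site of {-n, ..., n}^2 to a tile label: the seed at the
    # centre, concentric rings of arrow tiles tracing squares clockwise, and the outermost
    # ring marked as border tiles ('B').  The labels and the particular clockwise
    # orientation are assumptions of this sketch, not data taken from Figure 2.2.
    island = {(0, 0): ('seed',)}
    for m in range(1, n + 1):
        kind = 'B' if m == n else 'I'            # border ring versus interior ring
        for i in range(-m, m + 1):
            for j in range(-m, m + 1):
                if max(abs(i), abs(j)) != m:     # keep only the sites on ring m
                    continue
                if (abs(i), abs(j)) == (m, m):   # the four corner tiles of the ring
                    turn = {(m, m): 'ES', (m, -m): 'SW',
                            (-m, -m): 'WN', (-m, m): 'NE'}[(i, j)]
                    island[(i, j)] = ('corner', turn, kind)
                elif j == m:
                    island[(i, j)] = ('arrow', 'E', kind)   # top edge, pointing east
                elif i == m:
                    island[(i, j)] = ('arrow', 'S', kind)   # right edge, pointing south
                elif j == -m:
                    island[(i, j)] = ('arrow', 'W', kind)   # bottom edge, pointing west
                else:
                    island[(i, j)] = ('arrow', 'N', kind)   # left edge, pointing north
    return island

isl = square_island(3)                           # an analogue of the 3-square-island above
assert len(isl) == 7 * 7
assert all(tile[-1] == 'B' for site, tile in isl.items()
           if max(abs(site[0]), abs(site[1])) == 3)          # outermost ring: border tiles
assert all(isl[s] == ('arrow', d, 'I')
           for s, d in [((0, 1), 'E'), ((1, 0), 'S'), ((0, -1), 'W'), ((-1, 0), 'N')])

Only the clockwise orientation and the border marking matter for the constraints listed below; the labels themselves are immaterial.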
A ‘generic’ configurationcan be seen in Figure 2.3.B B B B BBBBBBBBBBBBB B B0 BBBBB0B B BBBBBB BBBB BB BBBBBBBBBB0Figure 2.3: A ‘Generic’ ConfigurationLet X be the nearest neighbour shift of finite type on the alphabet A with constraints:1. Any “arrow head” must meet an “arrow tail” of matching type (interior or border) and viceversa.2. Adjacent arrow tiles should not point in opposite directions.3. Two corner direction tiles cannot be adjacent to one another.4. The seed tile is only allowed to sit adjacent to straight arrow tiles.5. An interior tile is always surrounded by other square tiles while a border tile has an interiortile on its right and the blank tile on its left (left and right here are taken from the point ofview of the arrow).Notice that the arrow tiles can turn only in the clock-wise direction. By Constraint (1), everyarrow must either be a part of a bi-infinite path or a closed path. Any such path must either tracea straight line (vertical or horizontal) or an “L shape” or a “U shape”, or a closed path which tracesa rectangle. By Constraint (5) we find that tiles forming a rectangular path must have square tilesto their right (in the interior of the rectangle). These are confined to the interior of the rectangleand thus must themselves trace a smaller rectangle (smaller in terms of the area confined by the49rectangle). Note that in constraint (5) we mean in particular that corner direction border tileshave blank tiles on the two sites on their left (where left is taken from the point of view of both theinitial direction and the final direction of the arrow). By Constraints (3) and (2) we can concludeby induction on the length that closed paths must actually trace squares with a seed in the center.It follows that the finite connected components of the square tiles are square-islands. The lengthof these square-islands is 2N + 1 for some N ∈ N because of Constraints (3) and (4). Two suchsquare-islands cannot be adjacent because of Constraint (5).We can also exclude the possibility of a “U shaped” path using Constraints (3) and (2), againby induction on the length of the “base of the U”.Because a border of blank tiles can be filled either with a square-island or with blank tiles, onecan easily deduce that X has positive entropy.For r ∈ N denote by Br ⊂ Z2, the l∞-ball of radius r in Z2, that is,Br := {(i, j) ∈ Z2 | |i|, |j| ≤ r}.Proposition 2.8.1. Let µ be any measure of maximal entropy for X. Then µ is fully-supported.Proof. We will begin by proving that µ-almost surely any square tile is part of a square-island.Since a seed must be surrounded by arrow tiles, it suffices to prove that any arrow tile is part of asquare-island. By the discussion above, any arrow tile is part of a path which is bi-infinite or tracesa square. An infinite path is either “L shaped” or a straight vertical or horizontal line. It is easilyverified that the appearance of an “L shape” in the origin is a transient event with respect to thehorizontal or vertical shifts, so by Poincare´ recurrence the probability of having a “L shaped path”is zero. An infinite horizontal line forces a half space of horizontal straight line. This forces eithera transient event or periodicity. Because X has positive entropy, for a measure of maximal entropyfor X, the measure of periodic points is 0 as well. Thus µ-almost surely, any arrow tile is part of apath which traces a square. By Constraint (5) either the square tile is contained in a square-islandor there is an infinite sequence of nested square paths. 
The latter is again a transient event.Let x ∈ X be such that it does not have any infinite connected component composed of squaretiles. We will now show for all r ∈ N that there exists a finite set Ar such that Br ⊂ Ar ⊂ Z2 andxi is the blank tile for all i ∈ ∂Ar. Let Sq1 ∈ AC1 , Sq2 ∈ AC2 , . . . , Sqk ∈ ACk be an enumeration ofthe square-islands in x such that Ci ∩Br+1 6= ∅ . LetAr :=k⋃i=1Ci ∪Br.Since every square-island is surrounded by the blank tile, Ar has the required properties.Consider some y ∈ X and n ∈ N. We will prove that µ([y]Bn) > 0. Any incomplete square-islandin y|Br can be completed (possibly in multiple ways) in B4r. By completing these square-islands50we can obtain z ∈ X such that it satisfieszi :=yi for i ∈ Brblank tile for i ∈ Bc4r.Now choose any x ∈ supp(µ) which does not have any infinite connected component composedof square tiles. As previously discussed we can find A4r ⊂ Z2 such that B4r ⊂ A4r and xi is theblank tile for all i ∈ ∂A4r. Then z|∂A4r = x|∂A4r . By the Lanford-Ruelle Theorem µ is a Markovrandom field with the uniform specification adapted to X. Thereforeµ([z]A4r∪∂A4r)µ([x]A4r∪∂A4r)= 1proving µ([z]Br) = µ([y]Br ]) > 0.We now describe a subshift of finite type, Y , for which dimension of the space of Markov cocyclesis infinite. Y is a nearest neighbour shift of finite type with the alphabet as in Figure 2.1 but nowwith two types of square tiles, type 1 and 2. The adjacency rules are as in the subshift X but alsoforce adjacent square tiles to be of the same type, that is, any square-island in an element of Y willconsist entirely of tiles of type 1 or of type 2. Let p = (pi)i∈N ∈ (0, 1)N and φ : Y −→ X be themap which forgets the type of square tiles. We will now construct a shift-invariant Markov randomfield µp obtained by picking x ∈ X according to a fixed measure of maximal entropy µ and thenchoosing the type of square-islands in x with the distribution: an i-square-island is of type 1 withprobability pi and 2 with probability 1− pi. Precisely, let F := φ−1(Borel(X)) be the pull-back ofthe Borel sigma-algebra on X. For all y ∈ Y , i ∈ N and Λ ⊂ Z2 finite consider the functionsmiΛ(y) := the number of i-square-islands of type 1 in y intersecting ΛandniΛ(y) := the number of i-square-islands of type 2 in y intersecting Λ.µp is the unique probability measure on Y satisfying µp|F = φ−1µ, the pull-back of the measure µandµp([y]Λ | F)(y) =∞∏i=1pmiΛ(y)i (1− pi)niΛ(y)for all Λ ⊂ Z2 finite, y ∈ Y . In particular, if there are no square-islands in y intersecting both Λand Z2 \Λ then µp([y]Λ) can be written as a product of the distribution φ−1µ and the distributionof colours, that is,µp([y]Λ) = µ([φ(y)]Λ)∞∏i=1pmiΛ(y)i (1− pi)niΛ(y)51and further, for finite sets Γ ⊃ Λ ∪ ∂Λµp([y]Λ∣∣[y]Γ\Λ)= µ([φ(y)]Λ∣∣[φ(y)]∂Λ)∞∏i=1pmiΛ(y)i (1− pi)niΛ(y). (2.8.1)Let (y, y′) ∈ ∆Y and F be the set of sites on which (y, y′) differ. We note that if y|C is aninfinite connected component consisting of square tiles then y|C = y′|C , that is, C ∩ F = ∅. Thuswe can choose a finite set Λ ⊂ Z2 such that F ⊂ Λ and yi = y′i is a blank tile for all i ∈ ∂Λ. 
Sinceµ is a Markov random field with uniform specification, (2.8.1) implies for all finite sets Γ ⊃ Λ∪ ∂Λthatµp ([y′]Γ)µp ([y]Γ)=∞∏i=1pmiΛ(y′)−miΛ(y)i (1− pi)niΛ(y′)−niΛ(y) (2.8.2)=∞∏i=1pmiF (y′)−miF (y)i (1− pi)niF (y′)−niF (y).Let Mp be the shift-invariant cocycle on Y given by the following:Mp(y, y′) =∞∑i=1(miF (y′)−miF (y)) log(pi) + (niF (y′)− niF (y)) log(1− pi)for all (y, y′) ∈ ∆Y and the finite set F ⊂ Z2 on which they differ.It follows from (2.8.2) thatµp([y′]Γ)µp([y]Γ)= eMp(y,y′) (2.8.3)whenever (y, y′) ∈ ∆Y and Γ satisfies the assumptions above. It follows that Mp is the logarithmof the Radon-Nikodym cocycle for µ.We first state a small technical lemma:Lemma 2.8.2. Let (y, y′), (z, z′) ∈ ∆Y and F be the set of sites on which y and y′ differ. Supposey|F = z|F and y′|F = z′|F ,y|F c = y′|F c and z|F c = z′|F c .Let C ⊂ Z2 be a (2i+ 1)× (2i+ 1) square shape such that y|C is an i-square-island of type 1 andC ∩ F 6= ∅. Then z|C = y|C .52For all A ⊂ Z2 let∂2(A) := {i ∈ Z2 \A | there exists j ∈ A such that ‖i− j‖1 ≤ 2}.Proof. Let C1, C2, . . . , Cn ⊂ Z2 be the square shapes such that for all 1 ≤ j ≤ n,1. y′|Cj is a square-island.2. There is a vertex l ∈ Cj ∩ C such that yl is a border tile.If there is no such square shape and E is the union of all the edges of C, then E ⊂ F implyingthat z|C = y|C . Otherwise we note that by Constraint (5) y′|∂Cj consists only of blank tiles for all1 ≤ j ≤ n and divide the proof into the following cases:1. n = 1 and C ⊂ C1: Since C 6= C1 are square shapes and y|∂(Cc) consists of border tiles, Fcompletely contains at least two edges of C. Thus y|F completely determines the square-islandy|C and z|C = y|C .2. For some 1 ≤ j ≤ n, Cj completely contains one of the edges of C and C 6⊂ Cj: Suppose Cjcontains the top edge T of C. Since C 6⊂ Cj , T does not intersect the top edge of Cj and thusT ⊂ F . Then y|F completely determines the square-island y|C and z|C = y|C .3. For all 1 ≤ j ≤ n the square shape Cj does not contain any of the edges of C completely: LetE denote the union of all the edges of C. By Constraint (5), for all 1 ≤ j ≤ n and l ∈ ∂2Cj ,y′l is either a blank tile or a border tile implying that C∩∂2Cj ⊂ F . Also y′|E\⋃nj=1 Cj consistsonly of blank tiles therefore E \ ∪nj=1Cj ⊂ F . Thus we find thatE \n⋃j=1Cj ∪n⋃j=1C ∩ ∂2Cj ⊂ Fis a connected set in Z2 and touches all the edges of C. Thus y|F completely determines thesquare-island y|C proving that y|C = z|C .Proposition 2.8.3. For any p ∈ (0, 1)N, the measure µp defined above is a shift-invariant Markovrandom field with Radon-Nikodym cocycle Mp.Proof. By (2.8.3) we are left to prove that the shift-invariant cocycleMp is Markov. Let (y, y′), (z, z′) ∈∆Y and F ⊂ Z2 be as in Lemma 2.8.2. We will show that miF (y′)−miF (y) = miF (z′)−miF (z): Theinequality miF (z) ≥ miF (y) follows by Lemma 2.8.2. By interchanging z by y the reverse inequalityholds, proving miF (z) = miF (y), and similarly we obtain that miF (z′) = miF (y′). It follows that Mpis Markov.53Proposition 2.8.3 above proves the existence of uncountably many linearly independent shift-invariant Markov cocycles which have corresponding fully-supported shift-invariant Markov randomfields. Since the space of Markov cocycles which come from some shift-invariant finite range in-teraction is a union of finite dimensional vector spaces, this further implies that there exists ashift-invariant Markov random field which is not Gibbs for any shift-invariant finite range interac-tion proving Theorem 1.1.2. 
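On a homoclinic pair the cocycle M_p defined above reduces to a finite sum over the island sizes met by the difference set, and so is easy to evaluate. The following minimal sketch is illustrative only: the island-count differences and the sequence p are supplied by hand here rather than extracted from actual configurations of Y.

import math

def Mp(delta_m, delta_n, p):
    # Evaluates M_p on a homoclinic pair, given, for each island size i = 1, 2, ...,
    # delta_m[i-1] = m_F^i(y') - m_F^i(y) and delta_n[i-1] = n_F^i(y') - n_F^i(y)
    # (only finitely many entries are nonzero, so finite lists suffice).
    return sum(dm * math.log(pi) + dn * math.log(1.0 - pi)
               for dm, dn, pi in zip(delta_m, delta_n, p))

# Retyping a single 3-square-island from type 2 to type 1, all other islands unchanged:
p = [0.5, 0.3, 0.1, 0.05]            # p_1, ..., p_4; an arbitrary illustrative choice
delta_m = [0, 0, +1, 0]
delta_n = [0, 0, -1, 0]
assert abs(Mp(delta_m, delta_n, p) - math.log(p[2] / (1.0 - p[2]))) < 1e-12

Retyping a single i-square-island thus changes the value of M_p by log(p_i/(1 − p_i)), in line with the formula above.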
Alternatively, note that for any Gibbs cocycle with some shift-invariantfinite range interaction the magnitude of the cocycle at a particular homoclinic pair is at most linearin the size of set of sites at which the two configurations differ. We can choose a p ∈ (0, 1)N suchthat this does not happen.A simple variation on the above construction yields topological Markov fields which are notsofic: Choose p ∈ [0, 1]N. If pi = 0 or pi = 1, this would disallow square-islands of certain typesfor specific sizes. Each such p would determine a shift-invariant Markov random field supportedon a shift space contained in Y . Since there are uncountably many such subshifts a majority ofsuch spaces will be not sofic. However it is easy to see from the proofs above that these are globaltopological Markov fields.54Chapter 3Generalisation of theHammersley-Clifford Theorem onBipartite GraphsIn this chapter we will prove Theorem 1.2.1. Most of this chapter is a part of the submittedmanuscript [11]. Taking inspiration from symbolic dynamics we will define n.n.constraint spacesin Section 3.1. Section 3.2 builds up the necessary background for this work. In Subsection 3.2.1we introduce Hammersley-Clifford spaces and in Subsection 3.2.2 we introduce Markov-similarityand V -good pairs. In Subsection 3.2.3 we introduce folding and strong config-folding. Section 3.3states and proves the main results of this chapter. Since the proofs are technical we work out aconcrete example of our results in Subsection 3.3.1.In this chapter G = (V, E) will always denote an undirected locally finite, countable, bipartitegraph without self-loops and multiple edges and A will always denote a finite set and be referredto as the alphabet. H = (VH, EH) will always denote a finite undirected graph with or withoutself-loops. Adjacency in a graph S will be denoted by ∼S . We will often drop the subscript whenthe denotation is clear from the context. We remark that in this chapter given A ⊂ B ⊂ V, givenb ∈ AB the cylinder set (as defined in Section 2.1.1) [b]A also represents the corresponding patternb|A.3.1 N.N.Constraint SpacesThe following definitions take inspiration from symbolic dynamics ([29]): A closed configurationspace is a closed subset of configurations contained in AV . Let F be a given set of patterns onfinite sets. Then the configuration space with constraints F is defined to beXF := {x ∈ AV | patterns from F do not appear in x}.55A set of constraints F is called nearest neighbour if F consists of patterns on cliques, that is, forall [a]F ∈ F , diam(F ) ≤ 1.A n.n.constraint space is a configuration space with nearest neighbour constraints. Note that ifG is bipartite then F consists of patterns on edges and vertices. These spaces correspond to nearestneighbour shifts of finite type as defined in Subsection 2.1.3.Let Hom(G,H) denote the space of all graph homomorphisms (defined in Section 2.3) from Gto H. For example it was mentioned in Section 2.3 that if C3 is the 3-cycle with vertices 0, 1 and2 then the space of 3-colourings is Hom(G, C3).Given a graphs G and H, Hom(G,H) is an n.n.constraint space where the constraint is givenbyF := {[a, b]v,w | a  b ∈ VH and v ∼ w ∈ V}.Then for all x ∈ XF and vertices v ∼ w ∈ V, xv ∼ xw which implies x ∈ Hom(G,H). Converselyfor all homomorphisms x ∈ Hom(G,H) and vertices v ∼ w ∈ V we have [x]{v,w} /∈ F and hencex ∈ XF .N.N.Constraint spaces arise naturally in the study of MRFs as is shown in the following propo-sitions.Proposition 3.1.1. 
Let A be some finite set, G = (V, E) be a graph and X ⊂ AV be an n.n.constraintspace. Then X is a topological Markov field.Proof. Consider A ⊂ V finite and x, y ∈ X such that x|∂A = y|∂A. We want to prove that z ∈ AVdefined byzv :=xv if v ∈ A ∪ ∂Ayv if v ∈ Acis an element of X. Let B ⊂ V be a clique. If B∩A 6= φ then B ⊂ A∪∂A and z|B = x|B ∈ LB(X)else B ∩A = φ implying z|B = y|B ∈ LB(X). Since X is an n.n.constraint space z ∈ X.The following proposition completes the bridge between MRFs and n.n.constraint spaces.Proposition 3.1.2. Let G = (V, E) be a given graph and X be a topological Markov field on thegraph G with a safe symbol. Then X is an n.n.constraint space.Remark: If µ is an MRF then supp(µ) is a topological Markov field. Thus this proposition impliesthat if a measure µ satisfies the hypothesis of the weak Hammersley-Clifford theorem (Theorem2.2.2), that is, if µ is an MRF such that supp(µ) has a safe symbol then supp(µ) is an n.n.constraintspace. The conclusion of this proposition does not hold without assuming presence of a safe symbol.(look at the comments following proof of Proposition 3.5 in [12])Proof. Let ? be a safe symbol for X. Consider the setF := {a ∈ AA |A ⊂ V forms a clique and there does not exist x ∈ X such that x|A = a}.56Note that X ⊂ XF and if A ⊂ V is a clique then LA(XF ) = LA(X). We want to prove thatXF ⊂ X. We will proceed by induction on n ∈ N, the hypothesis being: Given A ⊂ V such that|A| = n, LA(XF ) ⊂ LA(X).The base case follows immediately. Suppose for some n ∈ N, given A ⊂ V satisfying |A| ≤ n,LA(XF ) ⊂ LA(X).For the induction step consider A ⊂ V such that |A| = n+ 1. There are two cases to consider:If A is a clique then LA(XF ) = LA(X). If A is not a clique then there exists v ∈ A such that|∂{v}∩A| < n. Let a ∈ LA(XF ). We will prove that a ∈ LA(X). Now | ({v} ∪ ∂{v})∩A|, |A\{v}| ≤n, thus the induction hypothesis impliesa|({v}∪∂{v})∩A ∈ L({v}∪∂{v})∩A(X)anda|A\{v} ∈ LA\{v}(X).Consider x, y ∈ X such thatx|({v}∪∂{v})∩A := a|({v}∪∂{v})∩Aandy|A\{v} := a|A\{v}.Since ? is a safe symbol for X therefore x?, y? ∈ AV given byx?w :=xw if w ∈ ({v} ∪ ∂{v}) ∩A? otherwiseandy?w :=yw if w ∈ A \ {v}? otherwiseare elements of X. Note that x?w = xw = aw, y?w = yw = aw if w ∈ ∂{v} ∩ A and x?w = y?w = ? ifw ∈ Ac. Therefore x?|∂{v} = y?|∂{v}. Since X is a topological Markov field, z ∈ AV defined byzw :=x?w if w ∈ {v} ∪ ∂{v}y?w otherwiseis an element of X. But zv = x?v = xv = av and zw = y?w = yw = aw if w ∈ A \ {v}. Hencez|A = a ∈ LA(X). This completes the induction. Hence X = XF .N.N.Constraint spaces allow us to change configurations one site at a time provided the edge-constraints are satisfied. To state this rigorously we define the following: given x ∈ AV , and distinct57vertices w1, w2, . . . , wr ∈ V and c1, c2, . . . , cr ∈ A we denote by θw1,w2,...,wrc1,c2,...,cr (x) an element of AVgiven by(θw1,w2,...,wrc1,c2,...,cr (x))u :=xu if u 6= w1, w2 . . . , wrci if u = wi for some 1 ≤ i ≤ r.Proposition 3.1.3. Let G = (V, E) be a bipartite graph. Suppose X ⊂ AV is an n.n.constraintspace and x ∈ X. Let w1, w2, . . . , wr ∈ V be distinct vertices such that wi  wj for 1 ≤ i, j ≤ rand c1, c2, . . . , cr ∈ A such that [ci, xw′ ]{wi,w′} ∈ L{wi,w′}(X) for all w′ ∼ wi and 1 ≤ i ≤ r. Thenθw1,w2,...,wrc1,c2...,cr (x) ∈ X.Specialising to r = 1, if X ⊂ AV is an n.n.constraint space and x ∈ X then for v ∈ V and c ∈ A,θvc (x) ∈ X if and only if [xw, c]{w,v} ∈ L{w,v}(X) for all w ∼ v.Proof. The constraint set for X consists only of patterns on edges and vertices. 
Thus it is sufficientto check for all v ∼ w that[θw1,w2,...,wrc1,c2...,cr (x)]{v,w} ∈ L{v,w}(X).Since wi  wj for all 1 ≤ i, j ≤ r at most one among v and w is wi for some 1 ≤ i ≤ r. If bothof them are not equal to wi then[θw1,w2,...,wrc1,c2...,cr (x)]{v,w} = [x]{v,w} ∈ L{v,w}(X).Otherwise we may assume v = wi for some 1 ≤ i ≤ r giving us[θw1,w2,...,wrc1,c2...,cr (x)]{v,w} = [ci, xw]{wi,w} ∈ L{wi,w}(X).3.2 Hammersley-Clifford Spaces and Strong Config-Folds3.2.1 Hammersley-Clifford SpacesA topological Markov field X ⊂ AV is called Hammersley-Clifford if the space of Markov cocycleson X is equal to the space of Gibbs cocycles on X, that is, MX = GX . If X is invariant under thesome subgroup G ⊂ Aut(G) then X is said to be G-Hammersley-Clifford if MGX = GGX .Examples: (Further examples and explaination follow the statement of Theorem 3.3.2)1. A frozen space of configurations.2. A topological Markov field with a safe symbol.583. Hom(G, Edge) where Edge consists of two vertices 0 and 1 connected by a single edge.4. Hom(Zd, Cn) where Cn is an n-cycle, d > 1 and n 6= 4. (Proposition 2.4.5)This gives examples of Hammersley-Clifford spaces which are not G-Hammersley-Cliffordspaces for some subgroup G ⊂ Aut(Zd). It will follow from Theorem 3.3.2 below and example3 above that Hom(G, C4) is both Hammersley-Clifford and G-Hammersley-Clifford for allbipartite graphs G and subgroups G ⊂ Aut(G).3.2.2 Markov-Similar and V -Good PairsSuppose we are given a closed configuration space X, a Markov cocycle M ∈MX and an interactionV on X. If M is not Gibbs with the interaction V we might be still interested in the extent towhich it is not. An asymptotic pair (x, y) ∈ ∆X is called (M,V)-good ifM(x, y) =∑S⊂V finite(V ([y]S)− V ([x]S)) .In most cases the Markov cocycle M will be fixed, so we will drop M and call a pair V -good insteadof (M,V )-good. An asymptotic pair (x, y) ∈ ∆X is said to be Markov-similar to (z, w) if there isa finite set A ⊂ V such thatxu = yu,zu = wu for u ∈ Acandxu = zu,yu = wu for u ∈ A ∪ ∂A.Being V -good is infectious.Proposition 3.2.1. Let X be an n.n.constraint space, M a Markov cocycle and V a nearestneighbour interaction on X. The set of V -good pairs is an equivalence relation on X. Additionallyif (x, y), (z, w) ∈ ∆X are Markov similar then (x, y) is V -good if and only if (z, w) is V -good.Proof. The reflexivity and symmetry of the relation V -good are immediate; the cocycle conditionimplies that the relation is transitive. Thus the relation is an equivalence relation.Let (x, y), (z, w) ∈ ∆X be Markov-similar pairs. Since M is a Markov cocycleM(x, y) = M(z, w). (3.2.1)59Let A ⊂ V be a finite set such thatxu = zu and yu = wufor u ∈ A ∪ ∂A andxu = yu and zu = wufor u ∈ Ac. If S ⊂ V is a clique then either S ⊂ A ∪ ∂A or S ⊂ Ac. If S ⊂ A ∪ ∂A thenx|S = z|S and y|S = w|SimplyingV ([y]S)− V ([x]S) = V ([w]S)− V ([z]S).If S ⊂ Ac thenx|S = y|S and z|S = w|SimplyingV ([y]S)− V ([x]S) = V ([w]S)− V ([z]S) = 0.Since V is a nearest neighbour interaction∑S⊂V finiteV ([y]S)− V ([x]S) =∑S⊂V finiteV ([w]S)− V ([z]S).Since (x, y) is a V -good pair by (3.2.1)M(z, w) = M(x, y) =∑S⊂V finite(V ([y]S)− V ([x]S)) =∑S⊂V finiteV ([w]S)− V ([z]S)completing the proof.Corollary 3.2.2. Let X be an n.n.constraint space, M a Markov cocycle and V a nearest neighbourinteraction on X. Suppose for some (x, y) ∈ ∆X there exists a chain x = x1, x2, x3, . . . , xn = ysuch that each (xi, xi+1) ∈ ∆X and is Markov similar to a V -good pair. 
Then (x, y) is V -good.This follows from Lemma 3.2.1.3.2.3 Strong Config-FoldingWe shall now introduce graph folding and extract some of its properties so as to define a strongnotion of folding for closed configuration spaces. Graph folding was introduced in [37] and usedin [6] so as to prove a slew of properties which are satisfied by a given graph if and only if it issatisfied by its folds. Fix some finite undirected graph H = (VH, EH) without multiple edges. For602C C4 30 12301Figure 3.1: C4 and C3any vertex a ∈ H we say that H \ {a} is a fold of the graph H if there exists b ∈ H \ {a} such that{c ∈ VH | c ∼ a} ⊂ {c ∈ VH | c ∼ b}.In such a case we say that a is folded into b.For example in the 4-cycle C4 the vertex 3 can be folded into the vertex 1. However no vertexcan be folded in the 3-cycle C3.For any vertex v ∈ V the n-ball around v is given byDn(v) := {w ∈ V | dG(v, w) ≤ n}where dG is the graph distance on G.We wish to generalise the following property:Proposition 3.2.3. Consider a bipartite graph G = (V, E), a graph H = (VH, EH) and verticesa, b ∈ VH where the vertex a can be folded into the vertex b. Let X = Hom(G,H). Then for alledges (v1, v2), (v2, v3) ∈ E and c ∈ VH, [a, c]{v1,v2} ∈ L{v1,v2}(X) implies[b, c]{v1,v2} ∈ L{v1,v2}(X)[c, b]{v2,v3} ∈ L{v2,v3}(X) and[b]∂D1(v1) ∈ L∂D1(v1)(X).Proof. Since a ∼ c and a can be folded into the vertex b we have b ∼ c. Consider partite classesP1, P2 ⊂ V of G such that v1 ∈ P1. Then the configuration x ∈ VVH given byxv :=b if v ∈ P1c if v ∈ P261is an element of Hom(G,H). Thus[b, c]{v1,v2} = [x]{v1,v2} ∈ L{v1,v2}(X)[c, b]{v2,v3} = [x]{v2,v3} ∈ L{v2,v3}(X) and[b]∂D1(v1) = [x]∂D1(v1) ∈ L∂D1(v1)(X).For the rest of the chapter fix a bipartite graph G = (V, E). Let X ⊂ AV be an n.n.constraintspace. Given distinct symbols a, b ∈ A, we say that a can be strongly config-folded into b if for alledges (v1, v2), (v2, v3) ∈ E and c ∈ A, [a, c]{v1,v2} ∈ L{v1,v2}(X) implies[b, c]{v1,v2} ∈ L{v1,v2}(X), (3.2.2)[c, b]{v2,v3} ∈ L{v2,v3}(X) and (3.2.3)[b]∂D1(v1) ∈ L∂D1(v1)(X). (3.2.4)In such a case, X ∩ (A \ {a})V is called a strong config-fold of X and X is called a strong config-unfold of X ∩ (A\ {a})V . Note that X ∩ (A\ {a})V is still an n.n.constraint space and is obtainedby forbidding the symbol a in X. Further if X is invariant under a subgroup G ⊂ Aut(G) thenX ∩ (A \ {a})V is also invariant under G. Let Xa denote the strong config-fold X ∩ (A \ {a})V .The idea of folding is captured by (3.2.2) while (3.2.3), (3.2.4) are reminiscent of homomorphismspaces. Indeed if an n.n.constraint space X satisfies (3.2.2) then for all x ∈ X and v ∈ V such thatxv = a, the configuration θvb (x) ∈ X. Thus if a strongly config-folds into b then any appearance ofa in any configuration in X can be replaced by b. Recall that a safe symbol can replace any othersymbol. Thus the notion of strong config-folding generalises the notion of a safe symbol.Proposition 3.2.4. Let G = (V, E) be a bipartite graph. Let X ⊂ AV be an n.n.constraint spacewith a safe symbol ?. Then any symbol a ∈ A \ {?} can be strongly config-folded into ?. Theresulting strong config-fold Xa is also an n.n.constraint space with the same safe symbol ?.Indeed Xa is obtained just by forbidding the symbol a from X and ? is still a safe symbol. Ingeneral it is not necessary that the symbol being strongly config-folded into has to be a safe symbol.For instance given any bipartite graph G the space Hom(G, C4), can be strongly config-folded in twosteps to Hom(G, Edge), yet C4 does not have any safe symbol. 
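The folding condition just described is easy to check mechanically. The sketch below (Python; the dictionary encoding of a graph and the function names are my own choices for illustration) tests the neighbourhood-containment condition and reproduces the two examples above: C4 folds onto a single edge in two steps, while C3 admits no fold.

```python
def can_fold(H, a, b):
    """a folds into b when every neighbour of a is also a neighbour of b.
    H is an undirected graph given as {vertex: set of neighbours}."""
    return a != b and H[a] <= H[b]

def fold(H, a):
    """Remove a foldable vertex a, returning the folded graph."""
    return {v: {w for w in nbrs if w != a} for v, nbrs in H.items() if v != a}

# The 4-cycle C4: vertex 3 folds into vertex 1 (both have neighbours {0, 2}),
# and in the folded graph vertex 2 folds into vertex 0, leaving a single edge.
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
assert can_fold(C4, 3, 1)
H1 = fold(C4, 3)              # the path 0 - 1 - 2
assert can_fold(H1, 2, 0)
H2 = fold(H1, 2)              # the graph Edge: a single edge between 0 and 1
print(H2)                     # {0: {1}, 1: {0}}

# The 3-cycle C3 has no foldable vertex, matching the remark above.
C3 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
assert not any(can_fold(C3, a, b) for a in C3 for b in C3 if a != b)
```

By Proposition 3.2.3, it is this neighbourhood containment in H which drives the strong config-folds of the homomorphism spaces Hom(G,H) discussed above.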
Note that the strong config-unfoldof an n.n.constraint space with a safe symbol need not have a safe symbol. For example if H isthe graph given by Figure 3.2 then for any bipartite graph G the top vertex is a safe symbol in thespace Hom(G, H).62Figure 3.2: HFigure 3.3: H′However if we attach trees to H to obtain H′ given by Figure 3.3 then Hom(G,H′) does not haveany safe symbol but can be strongly config-folded into Hom(G,H) by folding in the trees attachedto H.Strong config-folding induces a natural map between the spaces of configurations and theircocycles as demonstrated by the following proposition.Proposition 3.2.5. Let G be a bipartite graph and G ⊂ Aut(G) be a subgroup. Suppose X ⊂ AVis a G-invariant n.n.constraint space and let Xa be its strong config-fold. Then the linear mapF : MGX −→MGXa given by F (M) := M |∆Xa is surjective and F (GGX) = GGXa.Proof. If M ∈ GGX then the restriction of the G-invariant nearest neighbour interaction for Mto Xa gives us a G-invariant nearest neighbour interaction for F (M) proving that F (M) ∈ GGXa .Thus F (GGX) ⊂ GGXa . We will construct a map φ? : MGXa −→MGX such that φ?(GGXa) ⊂ GGX andF ◦φ? is the identity map on MGXa . Note that this is sufficient to conclude that F is surjective andF (GGX) = GGXa thereby completing the proof.The strong config-folding induces a mapping φ : X −→ Xa given byφ(x)v :=xv if xv 6= ab if xv = afor all x ∈ X and v ∈ V. Let g ∈ G and x ∈ X. Then(φ(gx))v =(gx)v = xg−1v if xg−1v 6= ab if (gx)v = xg−1v = aand(g(φ(x))v = (φ(x))g−1v =xg−1v if xg−1v 6= ab if xg−1v = a.63Therefore φ commutes with the action of G. Note that φ|Xa is the identity.The map φ in turn induces a map between the cocycles which we shall now describe. LetM ∈MGXa be a Markov cocycle. Consider M′ : ∆X −→ R given byM ′(x, y) := M(φ(x), φ(y)).We will prove that M ′ ∈MGX .Cocycle condition: If (x, y), (y, z) ∈ ∆X thenM ′(x, y) +M ′(y, z) = M(φ(x), φ(y)) +M(φ(y), φ(z)) = M(φ(x), φ(z)) = M ′(x, z).Markov condition: If (x, y), (z, w) ∈ ∆X are Markov-similar then (φ(x), φ(y)), (φ(z), φ(w)) ∈ ∆Xaare Markov-similar as well implying M(φ(x), φ(y)) = M(φ(z), φ(w)) and thusM ′(x, y) = M(φ(x), φ(y))= M(φ(z), φ(w))= M ′(z, w)which verifies the Markov condition for M ′.G-invariance condition: Since φ commutes with the action of G, for all g ∈ GM ′(gx, gy) = M(φ(gx), φ(gy)) = M(g(φ(x)), g(φ(y))) = M(φ(x), φ(y)) = M ′(x, y).Hence M ′ ∈MGX . Moreover if M ∈ GGXa with a G-invariant nearest neighbour interaction V , thenfor all (x, y) ∈ ∆XM ′(x, y) = M(φ(x), φ(y)) =∑A⊂V finiteV ([φ(y)]A)− V ([φ(x)]A)proving that V ◦ φ is a G-invariant nearest neighbour interaction for M ′.Thus the map φ? : MGXa −→MGX given byφ?(M)(x, y) := M(φ(x), φ(y))satisfies φ?(GGXa) ⊂ GGX . Moreover since φ|Xa is the identity map on Xa therefore φ?(M)|∆Xa = Mfor all M ∈MGXa proving F ◦ φ? is the identity map on MXa .Given a G-invariant topological Markov field Y ⊂ X there is always a linear map F : MGX −→MGY given by F (M) := M |∆Y and F (GGX) ⊂ GGY . However if Y cannot be obtained by a sequenceof strong config-folds starting with X, then this map need not be surjective. Indeed, consider thefollowing example:64Let H be the graph given by Figure 3.2. Fix some d ∈ N. Let X := Hom(Zd,H) andY := Hom(Zd, C3). Since there is a graph embedding from the 3-cycle C3 to H it follows thatHom(Zd, C3) ⊂ Hom(Zd,H). Let Zd denote the group of translations of the Zd lattice. 
Since thetop vertex of H is a safe symbol for Hom(Zd,H) it follows from the strong Hammersley-Cliffordtheorem (Theorem 2.2.3) that MZdX = GZdX . Therefore F (MZdX ) ⊂ GZdY . However by Proposition2.4.3, GZdY ( MZdY . It follows that F (MZdX ) ( MZdY .3.3 The Main ResultsTheorem 3.3.1. Let G = (V, E) be a bipartite graph, A a finite alphabet and X ⊂ AV a Hammersley-Clifford n.n.constraint space . Then the strong config-folds and strong config-unfolds of X are alsoHammersley-Clifford.The G-invariant version of Theorem 3.3.1 holds as well.Theorem 3.3.2. Let G = (V, E) be a bipartite graph, A a finite alphabet, G ⊂ Aut(G) a subgroupand X ⊂ AV a G-Hammersley-Clifford n.n.constraint space. Then the strong config-folds and strongconfig-unfolds of X are also G-Hammersley-Clifford.We know that all frozen spaces of configurations are G-Hammersley-Clifford for all subgroupsG ⊂ Aut(G). We can construct many more examples of Hammersley-Clifford spaces by using thesetheorems.1. N.N.Constraint space with a safe symbol.By Proposition 3.2.4 starting with an n.n.constraint space with a safe symbol ? we canstrong config-fold all the symbols one by one into the symbol ? resulting in {?}V which isfrozen. Thus these theorems generalise Theorem 2.2.3 in the case when G is a bipartite graph.Furthermore any closed configuration space which can be strongly config-folded into a spacewith a safe symbol is still Hammersley-Clifford. For instance given the graph H′ in Figure3.3, even though Hom(G,H′) does not have any safe symbol, it is G-Hammersley-Clifford forany subgroup G ⊂ Aut(G).2. Hom(G, Edge) where Edge consists of two vertices 0 and 1 connected by a single edge.By these theorems a closed configuration space which can be strongly config-folded intoHom(G, Edge) is still Hammersley-Clifford. For example if H is the graph given by Figure3.4 then it can be folded to the graph Edge and hence Hom(G,H) is G-Hammersley-Cliffordfor any subgroup G ⊂ Aut(G).3. Consider the space Hom(G,Hn,m) where Hn,m is a graph with vertices VHn,m := {1, 2, . . . , n}and edges given by (i, j) ∈ EHn,m if and only if |i− j| ≤ m.65Figure 3.4: A Graph which Folds to the Edge Graph123456789Figure 3.5: H9,2The sequence of folds 1 to 2, 2 to 3, 3 to 4, . . . , n − 1 to n yields the space {n}G fromHom(G,Hn,m) proving that it is G-Hammersley-Clifford for any subgroup G ⊂ Aut(G). Agraph H is called dismantlable if there exists a sequence of folds on the graph leading to asingle vertex. By these theorems, if H is dismantlable then Hom(G,Hn,m) is G-Hammersley-Clifford for any subgroup G ⊂ Aut(G).Note that although these are homomorphism spaces, the theorems are true in the general settingof closed configuration spaces. These specific examples have been chosen for convenience.3.3.1 A Concrete ExampleWe will first work out the following example to illustrate the key ideas of the proof.Suppose H and H′ are graphs given by Figure 3.6. Let X := Hom(Z2,H). Then by strongconfig-folding the vertex a into the vertex b we obtain the space Xa := Hom(Z2,H′).Note that X does not have any safe symbol but b is a safe symbol for Xa. Let Z2 denote thesubgroup of all translations of Z2. By the strong Hammersley-Clifford theorem (Theorem 2.2.3)Xa is Z2-Hammersley-Clifford. We will prove that X is Z2-Hammersley-Clifford.Let M ∈MZ2X be a shift-invariant Gibbs cocycle. 
Then M |Xa is a shift-invariant Markov cocycleon Xa and hence a Gibbs cocycle with some shift-invariant nearest neighbour interaction, whichbcda bcdFigure 3.6: Graphs H and H′66we will call V .For e, f, g, h, i ∈ VH and ~v ∈ Z2 consider the configuration x =[ ef g hi]~vgiven byxu :=g if u = ~ve if u = ~v + (0, 1)f if u = ~v − (1, 0)h if u = ~v + (1, 0)i if u = ~v − (0, 1)b if ~u ∈ D1(~v)c.For all ~v ∈ Z2 let x~v =[dd a dd]~v. Consider a shift-invariant nearest neighbour interaction V ′ asfollows:1. If ~v ∼ ~w ∈ Z2, [e, f ]{~v,~w} ∈ L{~v,~w}(Xa) thenV ′([e, f ]{~v,~w}) := V ([e, f ]{~v,~w}) andV ′([e]~v) := V ([e]~v). (3.3.1)2. The interaction between a and d is 0, that is, for all ~v ∼ ~w ∈ Z2V ′([a, d]{~v,~w}) := 0. (3.3.2)3. The single site interaction for [a]~v for all ~v ∈ Z2 is given byV ′([a]~v) := M([dd b dd]~v,[dd a dd]~v)+ V ([b]~v) + V ([b, d]{~v,~v+(1,0)}) + V ([b, d]{~v,~v−(1,0)})+V ([b, d]{~v,~v+(0,1)}) + V ([b, d]{~v,~v−(0,1)}).By (3.3.1) and (3.3.2) this implies that the pair([dd b dd]~v,[dd a dd]~v)is V ′-good.4. LetV ′([a, c]{~v,~v+(1,0)}) := M([dd a dd]~v,[dd a cd]~v)+ V ([d]~v+(1,0))− V ([c]~v+(1,0))+V ([d, b]{~v+(1,0),~v+(1,1)}) + V ([d, b]{~v+(1,0),~v+(2,0)})+V ([d, b]{~v+(1,0),~v+(1,−1)})− V ([c, b]{~v+(1,0),~v+(1,1)})−V ([c, b]{~v+(1,0),~v+(2,0)})− V ([c, b]{~v+(1,0),~v+(1,−1)}).67By (3.3.1) and (3.3.2) the previous equation implies that the pair([dd a dd]~v,[dd a cd]~v)is V ′-good. Similarly we can define V ′([a, c]{~v,~v−(1,0)}), V′([a, c]{~v,~v+(0,1)}) and V′([a, c]{~v,~v−(0,1)}),the corresponding expressions of which will imply that the pairs([dd a dd]~v,[dc a dd]~v),([dd a dd]~v,[ cd a dd]~v),([dd a dd]~v,[dd a dc]~v)are V ′-good.Since V and M are shift-invariant it follows that V ′ is also shift-invariant. We want to provethat V ′ is an interaction for M . Equivalently we want to prove that all asymptotic pairs are V ′-good. Let (x, y) ∈ ∆X . Since any appearance of a in the elements of X can be replaced by b, byreplacing all the a’s outside the set of sites where x and y differ and its boundary we can obtain apair (x1, y1) ∈ ∆X which is Markov-similar to (x, y) and has finitely many a’s. Thus by Proposition3.2.1 it is sufficient to prove that pairs (x, y) ∈ ∆X with finitely many a’s are V ′-good. Since thea’s can be replaced by b’s one by one and any pair in ∆Xa is V′-good by Lemma 3.2.2 it is sufficientto prove that pairs in X in which a single a is replaced by b are V ′-good. Since a can be folded intob and ∂{a} = {c, d} any such pair is Markov-similar to a pair of the type([ ef a hi]~v,[ ef b hi]~v)forsome ~v ∈ Z2 and e, f, g, h, i ∈ {c, d}.The pairs([ ef a hi]~v,[ df a hi]~v),([ df a hi]~v,[dd a hi]~v),([dd a hi]~v,[dd a di]~v),([dd a di]~v,[dd a dd]~v)are Markov-similar to([ ed a dd]~v,[dd a dd]~v),([ df a dd]~v,[dd a dd]~v),([dd a hd]~v,[dd a dd]~v),([dd a di]~v,[dd a dd]~v)respectively. Since e, f, g, h, i ∈ {c, d}, these pairs are V ′-good. Thus each adjacent pair in thechain [ ef a hi]~v,[ df a hi]~v,[dd a hi]~v,[dd a di]~v,[dd a dd]~v,[dd b dd]~v,[ ef b hi]~vis V ′-good. By Corollary 3.2.2 the pair([ ef a hi]~v,[ ef b hi]~v)is V ′-good. This completes the proof.3.3.2 Proof of Theorems 3.3.1 and 3.3.2We will now prove Theorems 3.3.1 and 3.3.2. The proof will give an explicit way of computing theinteraction as well. It should also be noted that Theorem 3.3.1 is a special case of Theorem 3.3.2.Yet we separate the proofs so as to separate the various complications.Proof of Theorem 3.3.1. 
The bulk of the proof lies in showing that the strong config-unfolds of68Hammersley-Clifford spaces are Hammersley-Clifford. We will first prove that the strong config-folds of a Hammersley-Clifford space are Hammersley-Clifford. Let X ⊂ AV be Hammersley-Clifford and Xa be its strong config-fold. Using Proposition 3.2.5 in the case where G = {id|G}we obtain a surjective map F : MX −→MXa such that F (GX) = GXa . Since X is Hammersley-Clifford, MX = GX . HenceMXa = F (MX) = F (GX) = GXaproving that Xa is Hammersley-Clifford.Now we will prove that strong config-unfolds of Hammersley-Clifford spaces are Hammersley-Clifford spaces as well. Let X ⊂ AV be an n.n.constraint space and Xa be a strong config-fold ofX where a is strongly config-folded into b. Let the set of nearest neighbour constraints of X begiven by the set FX . Suppose Xa is Hammersley-Clifford.Let M ∈ MX be a Markov cocycle. Since Xa is Hammersley-Clifford M |∆Xa ∈ GXa . LetV be a corresponding nearest neighbour interaction. We shall now construct a nearest neighbourinteraction V ′ for M . The idea is the following:Since we have a nearest neighbour interaction for M |∆Xa we will change asymptotic pairs inX to asymptotic pairs in Xa using the fewest possible distinct single site changes. These distinctsingle site changes will correspond to patterns on edges and vertices helping us build V ′. If we usethe single site changes which involve blindly changing the a’s into b’s we will incur a large numberof such changes; instead we will use a smaller number as described by the following lemma.Lemma 3.3.3 (Construction of special configurations). Let X be an n.n.constraint space and Xabe a strong config-fold of X where the symbol a is strongly config-folded into the symbol b. LetV1 :={v ∈ V | there exists w ∼ v such that [a, a]{v,w} ∈ L{v,w}(X)}andV2 :={v ∈ V \ V1 | [a]{v} ∈ L{v}(X)}.For all v ∈ V1 ∪ V2 there exists xv ∈ X such that1. If v ∈ V1 then xvv := a and xv|D2(v)\{v} := b.2. If v ∈ V2 then xvv := a and xv|∂D1(v) := b.Moreover θvb (xv) ∈ Xa and if w1, w2, w3, . . . wr ∼ v and c1, c2, . . . , cr ∈ A such that [a, ci]{v,wi} ∈L{v,wi}(X) then θw1,w2,...,wrc1,c2,...,cr (xv) ∈ X.Proof. Let v ∈ V1. By (3.2.3) [a, b]{v,w} ∈ L{v,w}(X) for all w ∼ v. Again by (3.2.3) it follows that[b, b]{w,w1} ∈ L{w,w1}(X) for all w,w1 ∈ V such that w ∼ v and w1 ∼ w. Then none of the patterns69from FX , the nearest neighbour constraint set for X appear in αv ∈ AD2(v) given byαvu :=a if u = vb if u ∈ D2(v) \ {v}.For v ∈ V2 there exists x1 ∈ X such that x1v = a. For all w,w1 ∈ V such that w ∼ v and w1 ∼ w(3.2.3) implies that [x1w, b]{w,w1} ∈ L{w,w1}(X). Then none of the patterns from FX appear inαv ∈ AD2(v) given byαvu :=x1u if u ∈ D1(v)b if u ∈ D2(v) \D1(v).Fix v ∈ V1∪V2. By (3.2.4) there exists x ∈ X such that x|∂D1(v) = b. Moreover since a stronglyconfig-folds into b we can assume that x ∈ Xa. Consider xv ∈ AV given byxvu :=αvu if u ∈ D2(v)xu if u ∈ D1(v)c.The configurations xv satisfy Conclusions (1) and (2) of this lemma. Since each edge in G eitherlies completely in D2(v) or in D1(v)c, no subpattern of xv belongs to FX . Therefore xv ∈ X.Let v ∈ V1 ∪ V2. Since x ∈ Xa, a appears in xv only at v . Moreover since a stronglyconfig-folds into b by (3.2.2), θvb (xv) ∈ Xa. Let w1, w2, w3, . . . wr ∼ v and c1, c2, . . . , cr ∈ A suchthat [a, ci]{v,wi} ∈ L{v,wi}(X) for all 1 ≤ i ≤ r. Because the graph is bipartite wi  wj for all1 ≤ i, j ≤ r. By (3.2.3) for all w′ ∼ wi and 1 ≤ i ≤ r, [ci, b]{wi,w′} ∈ L{wi,w′}(X). 
By Proposition3.1.3 θw1,w2,...,wrc1,c2,...,cr (xv) ∈ X.We will now construct an interaction via the following technical lemma.Lemma 3.3.4 (Construction of V ′). Let X be an n.n.constraint space and Xa be a strong config-fold of X where the symbol a is strongly config-folded into the symbol b. Consider sets V1,V2 ⊂ Vand for all v ∈ V1 ∪ V2, configurations xv ∈ X satisfying the conclusions of Lemma 3.3.3. LetM ∈MX be a Markov cocycle on X such that M |∆Xa is a Gibbs cocycle with interaction V . Thenthere exists a unique nearest neighbour interaction V ′ on X which satisfies:If v ∼ w ∈ V and [c, d]{v,w} ∈ L{v,w}(Xa) thenV ′([c, d]{v,w}) = V ([c, d]{v,w}) and (3.3.3)V ′([c]){v} = V ([c]{v}). (3.3.4)For v ∈ V1 ∪ V2 and w ∼ vV ′([xv]{v,w}) = 0. (3.3.5)70such that the following pairs are V ′-good:1. (x˜, y˜) ∈ ∆Xa.2. (θvb (xv), xv) for v ∈ V1 ∪ V2.3. (θwc (xv), xv) for v ∈ V1 ∪ V2, w ∼ v and c ∈ A \ {a} satisfying [a, c]{v,w} ∈ L{v,w}(X).4. (θwa (xv), xv) for all v ∈ V1 ∩ P1, w ∼ v satisfying [a, a]{v,w} ∈ L{v,w}(X).In the following proof the reader is encouraged to refer to the statement of Lemma 3.3.3 forinformation about configurations xv.Proof of Lemma 3.3.4. We will begin by proving uniqueness of the interaction assuming its exis-tence. Consider a nearest neighbour interaction V ′ on X which satisfies the conclusion of thislemma. We will prove the uniqueness by expressing V ′ in terms of the cocycle M and V .Since V ′ satisfies (3.3.3), (3.3.4) and (3.3.5) we have to prove that the following can be expressedin terms of M and V :(a) For all v ∈ V1 ∪ V2, the value V ′([a]v),(b) For all v ∈ V1 ∪ V2, w ∼ v and c ∈ A \ {xvw, a} such that [a, c]{v,w} ∈ L{v,w}(X), the valueV ′([a, c]{v,w}) and(c) For all v ∈ V1 ∩ P1, w ∼ v such that [a, a]{v,w} ∈ L{v,w}(X), the value V′([a, a]{v,w}).Proof for part (a): Let v ∈ V1 ∪V2. Since the pair (θvb (xv), xv) ((2) in the statement of the lemma)is V ′-good by rearranging the expression for M(θvb (xv), xv) we get thatV ′([a]v) = V′([xv]v)= M(θvb (xv), xv) + V ′([θvb (xv)]v) +∑w:w∼vV ′([θvb (xv)]{v,w})−(∑w:w∼vV ′([xv]{v,w})). (3.3.6)Now we will express the right hand side of this expression in terms of M and V . Since θvb (xv) ∈ XaV ′([θvb (xv)]{v,w}) = V ([θvb (xv)]{v,w}) and V′([θvb (xv)]v) = V ([θvb (xv)]v). By (3.3.5), V ′([xv]{v,w}) = 0.Putting all this together we getV ′([a]v) = M(θvb (xv), xv) + V ([θvb (xv)]v) +∑w:w∼vV ([θvb (xv)]{v,w}). (3.3.7)Proof for part (b): Consider v ∈ V1∪V2, w ∼ v and c ∈ A\{a, xvw} such that [a, c]{v,w} ∈ L{v,w}(X).Since the pair (θwc (xv), xv) ((3) in the statement of the lemma) is V ′-good by rearranging the71expression for M(xv, θwc (xv)) we getV ′([a, c]{v,w}) = V′([θwc (xv)]{v,w})= M(xv, θwc (xv)) +∑w′:w′∼wV ′([xv]{w′,w}) + V′([xv]w)−∑w′:w′∼w,w′ 6=vV ′([θwc (xv)]{w′,w})− V ′([θwc (xv)]w). (3.3.8)We will now express the right hand side of this expression in terms of M and V .By (3.3.5), V ′([xv]{v,w}) = 0. We know that (θwc (xv))w, xvw 6= a and if w′ ∼ w, w′ 6= v thenw′ ∈ ∂D1(v) and so (θwc (xv))w′ = xvw′ = b. Therefore by (3.3.3) and (3.3.4)V ′([xv]{w′,w}) = V ([xv]{w′,w}), V′([θwc (xv)]{w′,w}) = V ([θwc (xv)]{w′,w})andV ′([xv]w) = V ([xv]w), V′([θwc (xv)]w) = V ([θwc (xv)]w).Putting all this together we getV ′([a, c]{v,w}) = M(xv, θwc (xv)) +∑w′:w′∼w,w′ 6=v(V ([xv]{w′,w})− V ([θwc (xv)]{w′,w}))+V ([xv]w)− V ([θwc (xv)]w). (3.3.9)Proof for part (c): Consider v ∈ V1 ∩ P1 and w ∼ v such that [a, a]{v,w} ∈ L{v,w}(X). 
Since thepair (θwa (xv), xv) ((4) in the statement of the lemma) is V ′-good by rearranging the expression forM(xv, θwa (xv)) we get thatV ′([a, a]{v,w}) = V′([θwa (xv)]{v,w})= M(xv, θwa (xv)) +∑w′:w′∼wV ′([xv]{w′,w}) + V′([xv]w)−∑w′:w′∼w,w′ 6=vV ′([θwa (xv)]{w′,w})− V ′([θwa (xv)]w). (3.3.10)We will now express the right hand side of this expression in terms of M and V . By (3.3.5),V ′([xv]{v,w}) = 0. Since v ∈ V1, for w′ ∼ w such that w′ 6= v we know that xvw = xvw′ = b 6= a.Therefore by (3.3.3) and (3.3.4)V ′([xv]{w′,w}) = V ([xv]{w′,w})72andV ′([xv]w) = V ([xv]w).Since [a, a] ∈ L{v,w}(X) therefore v, w ∈ V1 and xvw′ = xww′ = b for all w′ ∼ w, w′ 6= v. Then by(3.3.5)V ′([θwa (xv)]{w′,w}) = V′([b, a]{w′,w}) = V′([xw]{w′,w}) = 0.By (3.3.7) we get thatV ′([θwa (xv)]w) = V′([a]w) = M(θwb (xw), xw) + V ([θwb (xw)]w) +∑w′:w′∼wV ([θwb (xw)]{w,w′}).Putting all this together, we getV ′([a, a]{v,w}) = M(xv, θwa (xv)) +∑w′:w′∼w,w′ 6=vV ([xv]{w′,w}) + V ([xv]w)−M(θwb (xw), xw)−V ([θwb (xw)]w)− (∑w′:w′∼wV ([θwb (xw)]{w,w′})). (3.3.11)This completes proof for uniqueness. It follows from the proofs that given an interaction V ′which satisfies (3.3.3), (3.3.4) and (3.3.5), Equations(3.3.7), (3.3.9) and (3.3.11) are satisfied if andonly if the pairs listed in (1), (2), (3) and (4) are V ′-good.Consider a nearest neighbour interaction V ′ on X given by the following:(i) If v ∼ w ∈ V and [c, d]{v,w} ∈ L{v,w}(Xa) then V′([c, d]v,w) is given by (3.3.3)(ii) and V ′([c]v) is given by (3.3.4).(iii) If v ∈ V1 ∪ V2 and w ∼ v, then V ′([xv]{v,w}) is given by (3.3.5).(iv) If v ∈ V1 ∪ V2, the value V ′([a]v) is given by (3.3.7).(v) If v ∈ V1 ∪ V2, w ∼ v and c ∈ A \ {xvw, a} such that [a, c]{v,w} ∈ L{v,w}(X), the valueV ′([a, c]{v,w}) is given by (3.3.9).(vi) If v ∈ V1 ∩ P1, w ∼ v such that [a, a]{v,w} ∈ L{v,w}(X), the value V′([a, a]{v,w}) is given by(3.3.11).By the preceding paragraph the proof is complete.Now we will reap the benefits of the previous lemma. The following lemma explains why theweak conclusions of Lemma 3.3.4 are sufficient and completes the proof of Theorem 3.3.1.73Lemma 3.3.5. Let X be an n.n.constraint space and Xa be a strong config-fold of X where thesymbol a is strongly config-folded into the symbol b. Let P1, P2 be the partite classes of V andconsider V1,V2 ⊂ V and xv ∈ X for all v ∈ V1 ∪ V2 satisfying the conclusion of Lemma 3.3.3.Let M ∈ MX be a Markov cocycle on X such that M |∆Xa is a Gibbs cocycle with some nearestneighbour interaction V and V ′ be an interaction on X as obtained in Lemma 3.3.4. Then M ∈ GXis Gibbs with nearest neighbour interaction V ′.Proof. We will use the V ′-good pairs guaranteed by Lemma 3.3.4 as steps in proving the followingpairs are V ′-good:(a) Let x ∈ X and v ∈ V1 ∪ V2 such that xv = a and xw 6= a for all w ∼ v. Then (x, θvb (x)) isV ′-good.(b) Let x ∈ X and v ∈ V1 ∩ P1 and w ∼ v such that xv = xw = a. Then (x, θvb (x)) is V′-good.(c) All asymptotic pairs (x, y) ∈ ∆X are V ′-good.Given an asymptotic pair, Statements (a) and (b) allow replacement of the a’s by b’s giving us apair in ∆Xa . From Conclusion (1) in Lemma 3.3.4 we know that all pairs in ∆Xa are V′-good.Since the relation V ′-good is an equivalence relation this proves Statement (c) thereby completingthe proof.Consider any v ∈ V1 ∪ V2 and x ∈ X such that xv = a. Let∂{v} = {w1, w2, . . . wn}.Since for all 1 ≤ r ≤ n, [a, xwr ]{v,wr} ∈ L{v,wr}(X), Lemma 3.3.3 implies thatθwr,wr+1...,wnxwr ,xwr+1 ,...,xwn (xv) ∈ X.Let x1 = θw1,w2,...,wnxw1 ,xw2 ,...,xwn (xv). 
The pair (x, θvb (x)) is Markov-similar to (x1, θvb (x1)) with A := {v}.Noteθw1,w2,...,wrxvw1 ,xvw2 ,...,xvwr(x1) = θwr+1...,wnxwr+1 ,...,xwn (xv) ∈ Xand that x1 and xv differ only on ∂{v}.In the following we will remove a’s in configurations from those vertices which are isolated fromother a’s.Proof of Statement (a): Consider the sequencex1, θw1xvw1(x1), θw1,w2xvw1 ,xvw2(x1), . . . , θw1,w2,...,wnxvw1 ,xvw2 ,...,xvwn(x1) = xv, θvb (xv), θvb (x1).Here single site changes have been made on ∂{v} taking us from x1 to xv. Then the symbol at v74has been changed to obtain θvb (xv). In the last step θvb (xv) has been changed on ∂{v} to obtainθvb (x1).Note that each(θw1,w2,...,wrxvw1 ,xvw2 ,...,xvwr(x1), θw1,w2,...,wr+1xvw1 ,xvw2 ,...,xvwr+1(x1))is Markov-similar to (θwr+1x1wr+1(xv), xv) for all 0 ≤ r ≤ n− 1 with A := {wr+1}. By Conclusion (3) inLemma 3.3.4, (θwr+1x1wr+1(xv), xv) is V ′-good for all 0 ≤ r ≤ n−1. Thus by Corollary 3.2.2, we get that(x1, xv) is V ′-good. By Conclusion (2) in Lemma 3.3.4 and symmetry of the relation V ′-good we getthat (xv, θvb (xv)) is V ′-good. Since θvb (xv), θvb (x1) ∈ Xa, Conclusion (1) in Lemma 3.3.4 implies that(θvb (xv), θvb (x1)) is V ′-good. Stringing these together by Corollary 3.2.2 we arrive at (x1, θvb (x1))being V ′-good. But (x1, θvb (x1)) is Markov-similar to (x, θvb (x)). Therefore by Proposition 3.2.1 weget that (x, θvb (x)) is V′-good.In the next step we remove the a’s which are not isolated.Proof of Statement (b): We construct a sequence from x to θvb (x) in three parts. In the first partsingle site changes will be made on ∂{v} taking us from x1 to xv. In the second part the symbolat v will be changed to obtain θvb (xv). In the last part single site changes will be made on ∂{v} toobtain θvb (x1) from θvb (xv).Consider the sequence(x1, θw1xvw1(x1), θw1,w2xvw1 ,xvw2(x1), . . . , θw1,w2,...,wnxvw1 ,xvw2 ,...,xvwn(x1) = xv),(xv, θvb (xv)),(θvb (xv), θw1x1w1(θvb (xv)), θw1,w2x1w1 ,x1w2(θvb (xv)), . . . , θw1,w2,...,wnx1w1 ,x1w2 ,...,x1wn(θvb (xv)) = θvb (x1)).In the first part of the sequence notice that(θw1,w2,...,wrxvw1 ,xvw2 ,...,xvwr(x1), θw1,w2,...,wr+1xvw1 ,xvw2 ,...,xvwr+1(x1))is Markov-similar to (θwr+1x1wr+1(xv), xv) for all 0 ≤ r ≤ n − 1 with A := {wr+1}. If for some 0 ≤r ≤ n − 1, x1wr+1 6= a then by Conclusion (3) in Lemma 3.3.4 we get that (θwr+1x1wr+1(xv), xv) is V ′-good. If for some 0 ≤ r ≤ n − 1, x1wr+1 = a then by Conclusion (4) in Lemma 3.3.4 we get that(θwr+1x1wr+1(xv), xv) is V ′-good. Proposition 3.2.1 implies that(θw1,w2,...,wrxvw1 ,xvw2 ,...,xvwr(x1), θw1,w2,...,wr+1xvw1 ,xvw2 ,...,xvwr+1(x1))is V ′-good for all 0 ≤ r ≤ n − 1. By Corollary 3.2.2, we get that (x1, xv) is V ′-good and we aredone with the first part of the sequence.For the second part of the sequence by Conclusion (2) in Lemma 3.3.4 and symmetry of the75relation V ′-good we get that (xv, θvb (xv)) is V ′-good.For the third part of the sequence the asymptotic pair(θw1,w2,...,wrx1w1 ,x1w2 ,...,x1wr(θvb (xv)), θw1,w2,...,wr+1x1w1 ,x1w2 ,...,x1wr+1(θvb (xv)))is Markov-similar to (θvb (xv), θwr+1x1wr+1(θvb (xv))) for all 0 ≤ r ≤ n− 1 with A = {wr+1}.If for some 0 ≤ r ≤ n−1, x1wr+1 6= a then (θvb (xv), θwr+1x1wr+1(θvb (xv)) ∈ Xa and by Conclusion (1) inLemma 3.3.4 we get that (θvb (xv), θwr+1x1wr+1(θvb (xv))) is V ′-good. Since v ∈ V1, xv|D2(v)\{v} = b. Thusif for some 0 ≤ r ≤ n− 1, x1wr+1 = a then(θwr+1x1wr+1(θvb (xv)), θvb (xv)) = (θwr+1a (θvb (xv)), θwr+1b (θwr+1a (θvb (xv)))and (θwr+1a (θvb (xv)))w′ = b 6= a for all w′ ∼ wr+1. 
By Statement (a) in the proof of this lemma weget that (θwr+1a (θvb (xv)), θwr+1b (θwr+1a (θvb (xv))) is V ′-good. By symmetry of the relation V ′-good weget that (θvb (xv), θwr+1x1wr+1(θvb (xv))) is V ′-good in this case as well.Thus for all 0 ≤ r ≤ n − 1 we find that (θvb (xv), θwr+1x1wr+1(θvb (xv))) is V ′-good. Using Corollary3.2.2 we find that (θvb (xv), θvb (x1)) is V ′-good.So we have proven that (x1, xv), (xv, θvb (xv)), (θvb (xv), θvb (x1)) are V ′-good. Stringing them byCorollary 3.2.2 we get that (x1, θvb (x1)) is V ′-good. But (x1, θvb (x1)) is Markov-similar to (x, θvb (x)).Therefore by Proposition 3.2.1 we get that (x, θvb (x)) is V′-good.The previous two statements give us the freedom to change the a’s into b’s. Now we will usethem to prove the last statement.Proof of Statement (c): Consider an asymptotic pair (x, y) ∈ ∆X . LetF := {v ∈ V | xv 6= yv}and x1, y1 ∈ AV be obtained by replacing the a’s outside F ∪ ∂F by b’s, that isx1u :=xu if u ∈ F ∪ ∂F or xu 6= ab otherwiseandy1u :=yu if u ∈ F ∪ ∂F or yu 6= ab otherwise.By (3.2.2) x1, y1 ∈ X. Since x = y on F c, it follows that (x, y) and (x1, y1) are Markov-similar but76there are only finitely many vertices where x1 and y1 equal a. Let{v1, v2, . . . , vr} := {v ∈ P1 | x1v = a}{w1, w2, . . . , wr′} := {w ∈ P1 | y1w = a}{vr+1, vr+2 . . . , vr+k} := {v ∈ P2 | x1v = a}{wr′+1, wr′+2 . . . , wr′+k′} := {w ∈ P2 | y1w = a}index the vertices with a in x1 and y1. By Lemma 3.3.3 the configurations θv1,v2...,vib,b,...,b (x1) andθw1,w2...,wi′b,b,...,b (y1) are elements of X for all 1 ≤ i ≤ r + k and 1 ≤ i′ ≤ r′ + k′. Therefore we canconsider the sequence (3.3.12 to 3.3.16) in X:We begin by replacing the a’s in x1 from the partite class P1 by b’s.(x1, θv1b (x1), θv1,v2b,b (x1), . . . , θv1,v2,...,vrb,b,...,b (x1)). (3.3.12)In the resulting configuration θv1,v2,...,vrb,b,...,b (x1) adjacent vertices cannot both have the symbol a; thea’s left in the configuration x1 are changed to b’s.(θv1,v2,...,vrb,b,...,b (x1), θv1,v2,...,vr+1b,b,...,b (x1), . . . , θv1,v2,...,vr+kb,b,...,b (x1)). (3.3.13)After removing the a’s from x1 and y1 the configurations obtained are elements of Xa.(θv1,v2,...,vr+kb,b,...,b (x1), θw1,w2...,wr′+k′b,b,...,b (y1)). (3.3.14)Tactics from Sequences 3.3.12 and 3.3.13 are employed in reverse to obtain y1 starting withθw1,w2...,wr′+k′b,b,...,b (y1).(θw1,w2...,wr′+k′b,b,...,b (y1), θw1,w2...,wr′+k′−1b,b,...,b (y1), . . . , θw1,w2...,wr′b,b,...,b (y1)), (3.3.15)(θw1,w2...,wr′b,b,...,b (y1), θw1,w2...,wr′−1b,b,...,b (y1), . . . , θw1b (y1), y1). (3.3.16)For all 1 ≤ i ≤ r, the vertex vi ∈ P1 and the symbol x1vi = a. Thus by Statements (a) and (b)in this proof we get that(θv1,v2,...,vib,b,...,b (x1), (θv1,v2,...,vi+1b,b,...,b (x1))is V ′-good. Thus all adjacent pairs in the Sequence 3.3.12 are V ′-goodNotice that (θv1,v2,...,vrb,b,...,b (x1))v 6= a for all v ∈ P1 and hence (θv1,v2,...,vr+ib,b,...,b (x1))v 6= a for all 1 ≤ i ≤ kand v ∈ P1. Now consider an adjacent pair in the Theorems 3.3.13,(θv1,v2,...,vr+ib,b,...,b (x1), θv1,v2,...,vr+i+1b,b,...,b (x1))for some 0 ≤ i ≤ k − 1. Since vr+i+1 ∈ P2 , (θv1,v2,...,vr+ib,b,...,b (x1))w 6= a for all w ∼ vr+i+1. But77(θv1,v2,...,vr+ib,b,...,b (x1))vr+i+1 = a, therefore by Statement (a) we get that(θv1,v2,...,vr+ib,b,...,b (x1), θv1,v2,...,vr+i+1b,b,...,b (x1))is V ′-good.Notice that (θv1,v2,...,vr+kb,b,...,b (x1)), (θw1,w2,...,wr′+k′b,b,...,b (y1)) ∈ Xa. 
Thus by Conclusion (1) in Lemma3.3.4, we get that the Pair 3.3.14 is V ′-good.The proof that the adjacent pairs listed in Sequences 3.3.15 and 3.3.16 are V ′-good is identicalto the proof for the Sequences 3.3.13 and 3.3.12 with an additional use of the symmetry of therelation V ′-good.Thus all adjacent pairs in Sequences 3.3.12-3.3.16 are V ′-good. By Corollary 3.2.2 we get that(x1, y1) is V ′-good. But (x1, y1) is Markov-similar to (x, y). By Proposition 3.2.1 we have that(x, y) is V ′-good. This completes the proof.If G is finite then Theorem 3.3.2 follows immediately from Theorem 3.3.1: if V ′ is a nearestneighbour interaction for a G-invariant Markov cocycle M then∑g∈G gV′|G|is a G-invariant nearest neighbour interaction for M . We will prove the following result which alongwith Proposition 3.2.5 immediately implies Theorem 3.3.2.Theorem 3.3.6. Let G = (V, E) be a bipartite graph and A a finite alphabet. Let G ⊂ Aut(G)be a subgroup. Let X be a G-invariant n.n.constraint space and Xa be a strong config-fold of X.Suppose M ∈MGX is a G-invariant Markov cocycle. Then M ∈ GGX if and only if M |∆Xa ∈ GGXa.Proof. By Proposition 3.2.5, M ∈ GGX implies M |∆Xa ∈ GGXa . We will prove the converse. LetM ∈ MGX such that M |∆Xa ∈ GGXa . Let V be a G-invariant nearest neighbour interaction forM |∆Xa .Mimicking the proof of Lemma 3.3.3 we will now obtain special configurations xv in aG-invariantway.Lemma 3.3.7. Let G ⊂ Aut(G) be a subgroup, X be a G-invariant n.n.constraint space and Xa bea strong config-fold of X where the symbol a is strongly config-folded into the symbol b. LetV1 := {v ∈ V | there exists w ∼ v such that [a, a]{v,w} ∈ L{v,w}(X)}andV2 := {v ∈ V \ V1 | [a]v ∈ Lv(X)}.Then V1 and V2 are invariant under the action of G. Moreover for all v ∈ V1 ∪ V2 there existsxv ∈ X satisfying the conclusions of Lemma 3.3.3 such that (gxv)|gD2(v) = xgv|gD2(v) for all g ∈ G.78Proof. Since X is G-invariant it follows that the sets V1 and V2 are G-invariant.Consider some v ∈ V1 and g ∈ G. Then by Lemma 3.3.3 there exists xv, xgv ∈ X such thatxvv = xgvgv = a and xv|D2(v)\{v} = xgv|D2(gv)\{gv} = b. Thus we find that (gxv)|gD2(v) = xgv|gD2(v).Let v ∈ V2. Then for all w ∼ v, g ∈ G and c ∈ A \ {a} the pattern [a, c]{v,w} ∈ L{v,w}(X) ifand only if [a, c]{gv,gw} ∈ L{gv,gw}(X). Thus for all w ∼ v we can choose cv,w ∈ A \ {a} such that[a, cv,w]{v,w} ∈ L{v,w}(X) and cv,w = cgv,gw for all g ∈ G. Note that since v ∈ V2 we know thatcv,w 6= a.By (3.2.4) there exists x1,v ∈ X such that x1,v|∂D1(v) = b. Since a can be strongly config-foldedinto the symbol b we can assume that x1,v ∈ Xa. Consider xv ∈ AV defined byxvu :=a if u = vcv,u if u ∼ vx1,vu if u ∈ D1(v)c.Note that a appears in xv only at the vertex v. Any edge (u1, u2) in G lies either completely inD2(v) or in D1(v)c. If the edge lies in D1(v)c then [xv]{u1,u2} = x1,v{u1,u2}∈ L{u1,u2}(X). If the edgeis of the form (v, w) then [xv]{v,w} = [a, cv,w]{v,w} ∈ L{v,w}(X). If the edge is of the form (w,w′)where w ∈ ∂{v} and w′ ∈ ∂D1(v) then [xv]{w,w′} = [cv,w, b]{w,w′}. Since (v, w) and (w,w′) are edgesin the graph G and [a, cv,w] ∈ L{v,w}(X) by (3.2.3) we know that [cv,w, b]{w,w′} ∈ L{w,w′}(X).Thus we have proved for every edge (u1, u2) in G that [xv]{u1,u2} ∈ L{u1,u2}(X). 
Since X is ann.n.constraint space we get that xv ∈ X.Moreover for all v ∈ V2 and g ∈ G(gxv)u =a if u = gvcv,g−1u if u ∼ gvb if u ∈ ∂D1(gv)and(xgv)u =a if u = gvcgv,u = cv,g−1u if u ∼ gvb if u ∈ ∂D1(gv),that is, (gxv)|gD2(v) = xgv|gD2(v).Thus the configurations xv satisfy Conclusions (1) and (2) of Lemma 3.3.3 and (gxv)|gD2(v) =xgv|gD2(v) for all g ∈ G and v ∈ V1∪V2. The rest follows exactly as in the proof of Lemma 3.3.3.Consider sets V1,V2 ⊂ V and for all v ∈ V configurations xv ∈ X as obtained by Lemma 3.3.7.Then by Lemma 3.3.4 there exists a unique nearest neighbour interaction V ′ on X such that thepairs listed in (1), (2), (3) and (4) listed in Lemma 3.3.4 are V ′-good. By Lemma 3.3.5 we get that79V ′ is a nearest neighbour interaction for M . We will prove that the interaction V ′ is G-invariant.For this we will invoke the uniqueness of the interaction satisfying the conclusions of Lemma 3.3.4.Let g ∈ G. gV ′ is a nearest neighbour interaction corresponding to gM = M . Thus the pairslisted in (1), (2), (3) and (4) in Lemma 3.3.4 are gV ′-good. Since V is G-invariant, gV ′|LXa =gV = V . Hence gV ′ satisfies (3.3.3), (3.3.4).If v ∈ V1 ∪ V2 then we know from Lemma 3.3.7 that (gxv)|gD1(v) = xgv|gD1(v). Thus if w ∼ vsince V ′ satisfies (3.3.5) we get thatgV ′([xv]{v,w}) = V′([xvv, xvw]{g−1v,g−1w}) = V′([xg−1vg−1v, xg−1vg−1w]{g−1v,g−1w}) = 0.Thus the interaction gV ′ satisfies (3.3.5). We have seen that the interaction gV ′ is a nearestneighbour interaction which satisfies (3.3.3), (3.3.4) and (3.3.5) such that the pairs listed in (1),(2), (3) and (4) in Lemma 3.3.4 are gV ′-good. By Lemma 3.3.4 we know that such an interactionis unique. Thus gV ′ = V ′ and M ∈ GGX .This leads us to the following corollary:Corollary 3.3.8. Let G = (V, E) be a bipartite graph and A a finite alphabet. Let G ⊂ Aut(G) be asubgroup. Let X be a G-invariant n.n.constraint space and Xa be a strong config-fold of X. ThenMGX/GGX is isomorphic to MGXa/GGXa.Clearly this corollary subsumes Theorem 3.3.2 and implies Theorem 1.2.1. Thereby to under-stand the difference between Markov and Gibbs cocycles it is sufficient to study the cocycles overclosed configuration spaces which cannot be strongly config-folded any further.Also this corollary is most relevant when the dimension of the quotient space MGX/GGX is finite.For example this holds in the following two situations:1. The underlying graph G is finite.2. The underlying graph G is Zd for some dimension d, G is the group of translations on Zd andthe space X has the pivot property (Proposition 2.2.6).Proof. By Proposition 3.2.5 the map F : MGX −→MGXa given byF (M) := M |∆Xa for all M ∈MGXais surjective. By Theorem 3.3.6 we know that for a Markov cocycle M ∈MGX , M ∈ GGX if and onlyif M |∆Xa ∈ GGXa . Thus F−1(GGXa) = GGX .80Via the second isomorphism theorem for vector spaces the mapF˜ : MGX/F−1(GGXa) −→MGXa/GGXagiven byF˜ (M mod F−1(GGXa)) := F (M) mod GGXais an isomorphism. Since F−1(GGXa) = GGX the proof is complete.81Chapter 4Four-Cycle Free Graphs, the PivotProperty and Entropy MinimalityBy H we will always denote an undirected graph without multiple edges and single isolated ver-tices. The main aim of this chapter is to prove Theorem 1.3.2 (Theorem 4.1.4) and Theorem 1.4.1(Theorem 4.1.2). Most of this chapter is part of the submitted manuscript [10].Hom-shifts will be introduced in Section 4.1. Some aspects of thermodynamic formalism willbe stated in Section 4.2. 
Some technical details regarding folding will be discussed in Section 4.3.Universal covers will be defined in Section 4.4 and the generalised height functions, subcocycleswill be described in Section 4.5. The proof of the main results can be found in Section 4.6.4.1 Hom-ShiftsFor a graph H we will denote the adjacency relation by ∼H and the set of vertices of H by H(abusing notation). We identify Zd with the set of vertices of the Cayley graph with respect to thestandard generators ~e1, ~e2, . . . , ~ed, that is, ~i ∼Zd ~j if and only if ‖~i −~j‖1 = 1 where ‖ · ‖1 is the l1norm. We drop the subscript in ∼H when H = Zd. Let Dn and Bn denote the Zd-balls of radius naround ~0 in the l1 and the l∞ norm respectively. The graph Cn will denote the n-cycle where theset of vertices is {0, 1, 2, . . . , n− 1} and i ∼Cn j if and only if i ≡ j ± 1 mod n. The graph Kn willdenote the complete graph with n vertices where the set of vertices is {1, 2, . . . , n} and i ∼Kn j ifand only if i 6= j.A sliding block code from a shift space X to a shift space Y is a continuous map f : X −→ Ywhich commutes with the shifts, that is, σ~i ◦ f = f ◦ σ~i for all ~i ∈ Zd. A surjective sliding blockcode is called a factor map and a bijective sliding block code is called a conjugacy. We note that aconjugacy defines an equivalence relation; in fact, it has a continuous inverse since it is a continuousbijection between compact sets.In this chapter, we will focus on a special class of nearest neighbour shifts of finite type where82the forbidden patterns are the same in every ‘direction’:Given a graph H letXdH := Hom(Zd,H) = {x ∈ HZd| x~i ∼H x~j for all~i ∼ ~j}.Such spaces will be called hom-shifts. We fix some dimension d ≥ 2 and thereafter drop thesuperscript in XdH. If H is finite andFH := {[v, w]~0,~ej | v H w, 1 ≤ j ≤ d}then we noted in Section 3.1 that XH = XFH and hence is nearest neighbour shift of finite type.These are exactly the nearest neighbour shifts of finite type with symmetric and isotropic con-straints. For example if the graph H is given by Figure 4.1 then XH is the hard square shift, thatis, configurations with alphabet {0, 1} such that adjacent symbols cannot both be 1. XKn is thespace of n-colourings of the graph, that is, configurations with alphabet {1, 2, . . . , n} where all ad-jacent colours are distinct. We note that the properties, symmetry and isotropy, are not invariantunder conjugacy. In this new notation the spaces Xn introduced in Chapter 2 are the hom-shiftsXCn for n 6= 1, 4.F will always denote a set of patterns and H will always denote a graph, there will not be anyambiguity in the notations XF , XH.A finite graph H is called four-cycle free if it is finite, it has no self-loops and C4 is not asubgraph of H. For instance K4 is not a four-cycle free graph.0 1Figure 4.1: Graph for the Hard Square ShiftHom-shifts form a special class of shifts of finite type. In general the set of globally allowedpattern is different from the set of locally allowed patterns: Let X be a shift space with a forbiddenlist F . Given a finite set A, a pattern a ∈ AA is said to be locally allowed if no pattern from Fappears in a. 
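As a quick illustration of these definitions, the sketch below (Python; the adjacency-dictionary encoding of H and the function names are my own, introduced only for illustration) tests whether a finite pattern is locally allowed in a two-dimensional hom-shift and brute-force counts locally allowed patterns on small squares for the hard square shift of Figure 4.1. Counts of this kind are closely related to the pattern counts |LBn(X)| entering the entropy formula recalled below; for the hard square shift the locally allowed counts agree with the globally allowed ones, since the symbol 0 may always be placed outside the box.

```python
from itertools import product

def locally_allowed(pattern, H):
    """pattern maps sites (i, j) of Z^2 to vertices of H; H is an adjacency
    dictionary {vertex: set of neighbours}, a self-loop at v meaning v in H[v].
    Returns True when every pair of adjacent sites carries an edge of H."""
    for (i, j), v in pattern.items():
        for nb in ((i + 1, j), (i, j + 1)):
            if nb in pattern and pattern[nb] not in H[v]:
                return False
    return True

def count_locally_allowed(H, n):
    """Brute-force count of locally allowed patterns on an n-by-n square."""
    sites = [(i, j) for i in range(n) for j in range(n)]
    return sum(locally_allowed(dict(zip(sites, choice)), H)
               for choice in product(H, repeat=len(sites)))

# Hard square shift (Figure 4.1): vertices 0 and 1, a self-loop at 0 and the
# edge {0, 1}, so two adjacent sites cannot both carry the symbol 1.
hard_square = {0: {0, 1}, 1: {0}}
print([count_locally_allowed(hard_square, n) for n in (1, 2, 3)])  # [2, 7, 63]
```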
In general it is undecidable for shifts of finite type whether a locally allowed patternbelongs to L(X) [46]; however it is decidable when X is a hom-shift where it is sufficient to checkwhether the pattern extends to a locally allowed pattern on Bn for some n.It is well known that the topological entropy (introduced in Section 2.7) of a shift space X canbe calculated via the following equation:htop(X) = limn−→∞log |LBn(X)||Bn|.The existence of the limit follows from subadditivity arguments via the well-known multivariateversion of Fekete’s Lemma [8]. Moreover the topological entropy is an invariant under conjugacy83(for d = 1 look at Proposition 4.1.9 in [29]; the proof extends to higher dimensions). We remarkthat the computation of this invariant for shifts of finite type in d > 1 is a hard problem andvery little is known [40], however there are algorithms to compute approximating upper and lowerbounds of the topological entropy of the hom-shifts [19, 30]. When H is a finite connected graphwith at least two edges, then htop(X) > 0:Proposition 4.1.1. Let H be a finite graph with vertices a, b and c such that a ∼H b and b ∼H c.Then htop(XH) ≥log 22 .Proof. It sufficient to see this for a graph H with exactly three vertices a, b and c such that a ∼H band b ∼H c. For such a graph any configuration in XH is composed of b on one partite class of Zdand a free choice between a and c for vertices on the other partite class. Then|LBn(X)| = 2b (2n+1)d2 c + 2d(2n+1)d2 eproving that htop(XH) =log 22 .A shift space X is called entropy minimal if for all shift spaces Y ( X, htop(X) > htop(Y ). Inother words, a shift space X is entropy minimal if forbidding any word causes a drop in entropy.From [45] we know that every shift space contains an entropy minimal shift space with the sameentropy and also a characterisation of same entropy factor maps on entropy minimal shifts of finitetype.One of the main results of this chapter is the following:Theorem 4.1.2. Let H be a connected four-cycle free graph. Then XH is entropy minimal.For d = 1 all irreducible shifts of finite type are entropy minimal [29]. A necessary conditionfor the entropy minimality of XH is that H has to be connected.Proposition 4.1.3. Suppose H is a finite graph with connected components H1,H2, . . .Hr. Thenhtop(XH) = max1≤i≤r htop(XHi).This follows from the observation thatmax1≤i≤r|LBn(XHi)| ≤ |LBn(XH)| =r∑i=1|LBn(XHi)| ≤ r max1≤i≤r|LBn(XHi)|.The following theorem is another main result in this chapter. We refer the reader to Section2.2.2 so as to review the pivot property.Theorem 4.1.4. For all four-cycle free graphs H, XH has the pivot property.It is sufficient to prove this theorem for four-cycle free graphs H which are connected becauseof the following proposition:84Proposition 4.1.5. Let X1, X2, . . . , Xn be shift spaces on disjoint alphabets such that each of themhas the pivot property. Then ∪ni=1Xi also has the pivot property.This is true since (x, y) ∈ ∆∪ni=1Xi implies (x, y) ∈ ∆Xi for some 1 ≤ i ≤ n.4.2 Thermodynamic FormalismIn this chapter we will introduce several special cases of theorems introduced in Section 2.7 whichwill be useful in this chapter. We refer to it in case the reader would like to review the definitionsof measure theoretic entropy and equilibrium states. Adapted measures can be reviewed fromSubsection 2.1.2.Given a shift-invariant probability measure ν let hν denote the measure theoretic entropy ofν. 
A shift-invariant probability measure µ is a measure of maximal entropy of X if the maximumof ν 7→ hν over all shift-invariant probability measures on X is obtained at µ. In other words,measures of maximal entropy are equilibrium states for the function f ≡ 0. The well-knownvariational principle for topological entropy of Zd-actions asserts that if µ is a measure of maximalentropy htop(X) = hµ whenever X is a shift space.The following is a well-known characterisation of entropy minimality (it is used for instance inthe proof of Theorem 4.1 in [32]):Proposition 4.2.1. A shift space X is entropy minimal if and only if every measure of maximalentropy for X is fully supported.We understand this by the following: Suppose X is entropy minimal and µ is a measure ofmaximal entropy for X. Then by the variational principle for X and supp(µ) we gethtop(X) = hµ ≤ htop(supp(µ)) ≤ htop(X)proving that supp(µ) = X. To prove the converse, suppose for contradiction that X is not entropyminimal and consider Y ( X such that htop(X) = htop(Y ). Then by the variational principle thereexists a measure µ on Y such that hµ = htop(X). Thus µ is a measure of maximal entropy for Xwhich is not fully supported.Further is known if X is a nearest neighbour shift of finite type: Given a set A ⊂ Zd we denotethe r-boundary of A by ∂rA, that is,∂rA = {~w ∈ Zd \A∣∣∣ ‖~w − ~v‖1 ≤ r for some ~v ∈ A}.Note that ∂1A = ∂A. A uniform Markov random field is a Markov random field µ such thatµ([a]A∣∣∣ [b]∂A) =1nA,b|∂A85where nA,b|∂A = |{a ∈ AA | µ([a]A ∩ [b]∂A) > 0}|.The following is a special case of Theorem 2.7.1:Theorem 4.2.2. All measures of maximal entropy on a nearest neighbour shift of finite type Xare shift-invariant uniform Markov random fields µ adapted to X.The converse is also true under further mixing assumptions on the shift space X (called theD-condition).We will often restrict our proofs to the ergodic case. We can do so via the following standardfacts implied by Theorem 2.5.8 and Theorem 4.3.7 in [26]:Theorem 4.2.3. Let µ be a shift-invariant uniform Markov random field adapted to a shift spaceX. Let its ergodic decomposition be given by a measurable map x −→ µx on X, that is, µ =∫X µxdµ. Then µ-almost everywhere the measures µx are shift-invariant uniform Markov randomfields adapted to X such that supp(µx) ⊂ supp(µ). Moreover∫hµxdµ(x) = hµ.We will prove the following:Theorem 4.2.4. Let H be a connected four-cycle free graph. Then every ergodic probability measureadapted to XH with positive entropy is fully supported.This implies Theorem 4.1.2 by the following: The Lanford-Ruelle theorem implies that everymeasure of maximal entropy on XH is a uniform shift-invariant Markov random field adapted toXH. By Proposition 4.1.1 and the variational principle we know that these measures have positiveentropy. By Theorems 4.2.3 and 4.2.4 they are fully supported. Finally by Proposition 4.2.1, XHis entropy minimal.Alternatively, the conclusion of Theorem 4.2.4 can be obtained via some strong mixing con-ditions on the shift space; we will describe one such assumption. A shift space X is calledstrongly irreducible if there exists g > 0 such that for all x, y ∈ X and A,B ⊂ Zd satisfyingmin~i∈A,~j∈B ‖~i − ~j‖1 ≥ g, there exists z ∈ X such that z|A = x|A and z|B = y|B. For such aspace, the homoclinic relation is minimal implying the conclusion of Theorem 4.2.4 and further,that every probability measure adapted to X is fully supported. 
Note that this does not prove that $X$ is entropy minimal unless we assume that $X$ is a nearest neighbour shift of finite type. Such an argument is used in the proof of Lemma 4.1 in [32], which implies that every strongly irreducible shift of finite type is entropy minimal. A more combinatorial approach was used in [51] to show that shift spaces (and not just shifts of finite type) with a more general mixing property called uniform filling are entropy minimal.

4.3 Folding, Entropy Minimality and the Pivot Property

Recall, as in Subsection 3.2.3, given a graph $H$ we say that $v$ folds into $w$ if and only if $u\sim_H v$ implies $u\sim_H w$. In this case the graph $H\setminus\{v\}$ is called a fold of $H$. This map gives rise to a 'retract', that is, a graph homomorphism from $H$ to $H\setminus\{v\}$ which is the identity on $H\setminus\{v\}$ and sends $v$ to $w$. This was introduced in [37] to help characterise cop-win graphs and used in [6] to establish many properties which are preserved under 'folding' and 'unfolding'. Given a finite tree $H$ with more than two vertices, note that a leaf vertex (vertex of degree 1) can always be folded to some other vertex of the tree. Thus starting with $H$, there exists a sequence of folds resulting in a single edge. In fact, using a similar argument we can prove the following proposition.

Proposition 4.3.1. Let $H\subset H'$ be trees. Then there is a graph homomorphism $f:H'\longrightarrow H$ such that $f|_H$ is the identity map.

To show this, first note that if $H\subsetneq H'$ then there is a leaf vertex in $H'$ which is not in $H$. This leaf vertex can be folded into some other vertex in $H'$. Thus by induction on $|H'\setminus H|$ we can prove that there is a sequence of folds from $H'$ to $H$. Corresponding to this sequence of folds we obtain a graph homomorphism from $H'$ to $H$ which is the identity on $H$.

Here we consider a related notion for shift spaces. Given a nearest neighbour shift of finite type $X\subset\mathcal{A}^{\mathbb{Z}^d}$, the neighbourhood of a symbol $v\in\mathcal{A}$ is given by
$$N_X(v):=\{a\in\mathcal{A}^{\partial\{\vec{0}\}}\mid [v]_{\vec{0}}\cap[a]_{\partial\{\vec{0}\}}\in\mathcal{L}_{D_1}(X)\},$$
that is, the collection of all patterns which can 'surround' $v$ in $X$. We will say that $v$ config-folds into $w$ in $X$ if $N_X(v)\subset N_X(w)$. In such a case we say that $X$ config-folds to $X\cap(\mathcal{A}\setminus\{v\})^{\mathbb{Z}^d}$. Note that $X\cap(\mathcal{A}\setminus\{v\})^{\mathbb{Z}^d}$ is obtained by forbidding $v$ from $X$ and hence it is also a nearest neighbour shift of finite type. Also, if $X=X_H$ for some graph $H$ then $v$ config-folds into $w$ in $X_H$ if and only if $v$ folds into $w$ in $H$. Thus if $H$ is a tree then there is a sequence of config-folds starting at $X_H$ resulting in the two checkerboard configurations with two symbols (the vertices of the edge which $H$ folds into). This property is weaker than the notion of folding introduced in Subsection 3.2.3.

The main thrust of this property in our context is: if $v$ config-folds into $w$ in $X$ then given any $x\in X$, every appearance of $v$ in $x$ can be replaced by $w$ to obtain another configuration in $X$. This replacement defines a factor (surjective, continuous and shift-invariant) map $f:X\longrightarrow X\cap(\mathcal{A}\setminus\{v\})^{\mathbb{Z}^d}$ given by
$$(f(x))_{\vec{i}}:=\begin{cases}x_{\vec{i}}&\text{if }x_{\vec{i}}\neq v\\ w&\text{if }x_{\vec{i}}=v.\end{cases}$$
Note that the map $f$ defines a 'retract' from $X$ to $X\cap(\mathcal{A}\setminus\{v\})^{\mathbb{Z}^d}$. Frequently we will config-fold more than one symbol at once (especially in Section 4.6):

Distinct symbols $v_1,v_2,\ldots,v_n$ config-fold disjointly into $w_1,w_2,\ldots,w_n$ in $X$ if $v_i$ config-folds into $w_i$ and $v_i\neq w_j$ for all $1\leq i,j\leq n$. In this case the symbols $v_1,v_2,\ldots,v_n$ can be replaced by $w_1,w_2,\ldots,w_n$ simultaneously for all $x\in X$. Suppose $v_1,v_2,\ldots,v_n$ is a maximal set of symbols which can be config-folded disjointly in $X$. Then $X\cap(\mathcal{A}\setminus\{v_1,v_2,\ldots,v_n\})^{\mathbb{Z}^d}$ is called a full config-fold of $X$.
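Since, for hom-shifts, config-folding in $X_H$ coincides with folding in $H$, the folding procedure can be carried out directly on the graph. The following small sketch (illustrative only, not from the thesis; the representation and function names are assumptions) finds a fold, i.e. a pair $v\neq w$ with $N_H(v)\subseteq N_H(w)$, and folds repeatedly until no fold remains; for a finite loopless tree the procedure terminates in a single edge, as described above.

```python
def find_fold(adj):
    """Return (v, w) with v != w and N(v) a subset of N(w), if such a fold exists;
    adj maps each vertex of a loopless undirected graph to its set of neighbours."""
    for v in adj:
        for w in adj:
            if v != w and adj[v] <= adj[w]:
                return v, w
    return None

def fold_completely(adj):
    """Fold greedily until no fold remains; returns the vertex set of the resulting graph.
    For a finite tree a single edge (two vertices) remains."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    while (fold := find_fold(adj)) is not None:
        v, _ = fold
        del adj[v]
        for nbrs in adj.values():
            nbrs.discard(v)
    return set(adj)

# A path on five vertices folds down to a single edge.
path5 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
print(fold_completely(path5))  # {4, 5}: a single edge remains
```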
Let $v_i$ config-fold into $w_i$ for all $1\leq i\leq n$. Consider $f_X:\mathcal{A}\longrightarrow\mathcal{A}\setminus\{v_1,v_2,\ldots,v_n\}$ given by
$$f_X(v):=\begin{cases}v&\text{if }v\neq v_j\text{ for all }1\leq j\leq n\\ w_j&\text{if }v=v_j\text{ for some }1\leq j\leq n.\end{cases}$$
This defines a factor map $f_X:X\longrightarrow X\cap(\mathcal{A}\setminus\{v_1,v_2,\ldots,v_n\})^{\mathbb{Z}^d}$ given by $(f_X(x))_{\vec{i}}:=f_X(x_{\vec{i}})$ for all $\vec{i}\in\mathbb{Z}^d$. Here $f_X$ denotes both the factor map and the map on the alphabet; it should be clear from the context which function is being used.

For example, consider a tree $H:=(\mathcal{V},\mathcal{E})$ where $\mathcal{V}:=\{v_1,v_2,v_3,\ldots,v_{n+1}\}$ and $\mathcal{E}:=\{(v_i,v_{n+1})\mid 1\leq i\leq n\}$. Then $\{v_1,v_2,\ldots,v_{n-1}\}$ is a maximal set of symbols which config-folds disjointly into $v_n$ in $X_H$, resulting in the checkerboard patterns with the symbols $v_n$ and $v_{n+1}$. Though the full config-fold is not necessarily unique, we choose a full config-fold for every shift space and use it to construct the corresponding function $f_X$.

In many cases we will fix a configuration on a set $A\subset\mathbb{Z}^d$ and apply a config-fold on the rest. Hence we define the map $f_{X,A}:X\longrightarrow X$ given by
$$(f_{X,A}(x))_{\vec{i}}:=\begin{cases}x_{\vec{i}}&\text{if }\vec{i}\in A\\ f_X(x_{\vec{i}})&\text{otherwise.}\end{cases}$$
The map $f_{X,A}$ can be extended beyond $X$:

Proposition 4.3.2. Let $X\subset Y$ be nearest neighbour shifts of finite type, $Z$ be a full config-fold of $X$ and $y\in Y$ such that for some $A\subset\mathbb{Z}^d$, $y|_{A^c\cup\partial(A^c)}\in\mathcal{L}_{A^c\cup\partial(A^c)}(X)$. Then the configuration $z$ given by
$$z_{\vec{i}}:=\begin{cases}y_{\vec{i}}&\text{if }\vec{i}\in A\\ f_X(y_{\vec{i}})&\text{otherwise}\end{cases}$$
is an element of $Y$. Moreover $z|_{A^c}\in\mathcal{L}_{A^c}(Z)$.

Abusing the notation, in such cases we shall denote the configuration $z$ by $f_{X,A}(y)$.

If $A^c$ is finite, then $f_{X,A}$ changes only finitely many coordinates. These changes can be applied one by one, that is, there is a chain of pivots in $Y$ from $y$ to $f_{X,A}(y)$.

A nearest neighbour shift of finite type which cannot be config-folded is called a stiff shift. As in the case of graphs, where all the stiff graphs obtained by a sequence of folds of a given graph are isomorphic [6], all the stiff shifts obtained by a sequence of config-folds of a given nearest neighbour shift of finite type are topologically conjugate via one-block maps; the proof is similar and we omit it. Starting with a nearest neighbour shift of finite type $X$, the radius of $X$ is the smallest number of full config-folds required to obtain a stiff shift. If $H$ is a tree then the radius of $X_H$ is equal to $\lfloor\frac{diameter(H)}{2}\rfloor$.

Thus for every nearest neighbour shift of finite type $X$ there is a sequence of full config-folds (not necessarily unique) which starts at $X$ and ends at a stiff shift of finite type. Let the radius of $X$ be $r$ and $X=X_0,X_1,X_2,\ldots,X_r$ be a sequence of full config-folds where $X_r$ is stiff. This generates a sequence of maps $f_{X_i}:X_i\longrightarrow X_{i+1}$ for all $0\leq i\leq r-1$. In many cases we will fix a pattern on $D_n$ or $D_n^c$ and apply these maps on the rest of the configuration. Consider the maps $I_{X,n}:X\longrightarrow X$ and $O_{X,n}:X\longrightarrow X$ (for $n>r$) given by
$$I_{X,n}(x):=f_{X_{r-1},D_{n+r-1}}\big(f_{X_{r-2},D_{n+r-2}}(\ldots(f_{X_0,D_n}(x))\ldots)\big)\quad\text{(Inward Fixing Map)}$$
and
$$O_{X,n}(x):=f_{X_{r-1},D^c_{n-r+1}}\big(f_{X_{r-2},D^c_{n-r+2}}(\ldots(f_{X_0,D^c_n}(x))\ldots)\big)\quad\text{(Outward Fixing Map)}.$$
Similarly we consider a map which does not fix anything, $F_X:X\longrightarrow X_r$, given by
$$F_X(x):=f_{X_{r-1}}\big(f_{X_{r-2}}(\ldots(f_{X_0}(x))\ldots)\big).$$
Note that $D_k\cup\partial D_k=D_{k+1}$ and $D^c_k\cup\partial(D^c_k)=D^c_{k-1}$. This, along with repeated application of Proposition 4.3.2, implies that the images of $I_{X,n}$ and $O_{X,n}$ lie in $X$. This also implies the following proposition:

Proposition 4.3.3 (The Onion Peeling Proposition). Let $X\subset Y$ be nearest neighbour shifts of finite type with radius $r$, $Z$ be a stiff shift obtained by a sequence of config-folds starting with $X$, and $y_1,y_2\in Y$ such that $y_1|_{D^c_{n-1}}\in\mathcal{L}_{D^c_{n-1}}(X)$ and $y_2|_{D_{n+1}}\in\mathcal{L}_{D_{n+1}}(X)$.
Let $z_1,z_2\in Y$ be given by
$$z_1:=f_{X_{r-1},D_{n+r-1}}\big(f_{X_{r-2},D_{n+r-2}}(\ldots(f_{X_0,D_n}(y_1))\ldots)\big)$$
$$z_2:=f_{X_{r-1},D^c_{n-r+1}}\big(f_{X_{r-2},D^c_{n-r+2}}(\ldots(f_{X_0,D^c_n}(y_2))\ldots)\big)$$
for $n>r$. The patterns $z_1|_{D^c_{n+r-1}}\in\mathcal{L}_{D^c_{n+r-1}}(Z)$ and $z_2|_{D_{n-r+1}}\in\mathcal{L}_{D_{n-r+1}}(Z)$. If $y_1,y_2\in X$ then in addition
$$z_1|_{D^c_{n+r-1}}=F_X(y_1)|_{D^c_{n+r-1}}\quad\text{and}\quad z_2|_{D_{n-r+1}}=F_X(y_2)|_{D_{n-r+1}}.$$

Abusing the notation, in such cases we shall denote the configurations $z_1$ and $z_2$ by $I_{X,n}(y_1)$ and $O_{X,n}(y_2)$ respectively. Note that $I_{X,n}(y_1)|_{D_n}=y_1|_{D_n}$ and $O_{X,n}(y_2)|_{D^c_n}=y_2|_{D^c_n}$. Also, $O_{X,n}$ is a composition of maps of the form $f_{X,A}$ where $A^c$ is finite; hence there is a chain of pivots in $Y$ from $y$ to $O_{X,n}(y)$.

There are two kinds of stiff shifts which will be of interest to us. A configuration $x\in\mathcal{A}^{\mathbb{Z}^d}$ is called periodic if there exists $n\in\mathbb{N}$ such that $\sigma^{n\vec{e}_i}(x)=x$ for all $1\leq i\leq d$. A configuration $x\in X$ is called frozen if its homoclinic class is a singleton. This notion coincides with the notion of frozen coloring in [6]. A subshift $X$ will be called frozen if it consists of frozen configurations, equivalently if $\Delta_X$ is the diagonal. A measure on $X$ will be called frozen if its support is frozen. Note that any shift space consisting just of periodic configurations is frozen. All frozen nearest neighbour shifts of finite type are stiff.

Proposition 4.3.4. Let $X$ be a nearest neighbour shift of finite type such that a sequence of config-folds starting from $X$ results in the orbit of a periodic configuration. Then every shift-invariant probability measure adapted to $X$ is fully supported.

Proposition 4.3.5. Let $X$ be a nearest neighbour shift of finite type such that a sequence of config-folds starting from $X$ results in a frozen shift. Then $X$ has the pivot property.

Examples:

1. $X:=\{0\}^{\mathbb{Z}^d}\cup\{1\}^{\mathbb{Z}^d}$ is a frozen shift space but not the orbit of a periodic configuration. Clearly the delta measure $\delta_{\{0\}^{\mathbb{Z}^d}}$ is a shift-invariant probability measure adapted to $X$ but not fully supported. A more non-trivial example of a nearest neighbour shift of finite type which is frozen but not the orbit of a periodic configuration is the set of the Robinson tilings [46]. It is well known that it is uniquely ergodic and that the unique measure is an (adapted) uniform Markov random field which is not fully supported.

2. A shift space $X\subset\mathcal{A}^{\mathbb{Z}^d}$ is said to have a safe symbol $\star$ if for all $x\in X$ and $A\subset\mathbb{Z}^d$ the configuration $z\in\mathcal{A}^{\mathbb{Z}^d}$ given by
$$z_{\vec{i}}:=\begin{cases}x_{\vec{i}}&\text{if }\vec{i}\in A\\ \star&\text{if }\vec{i}\in A^c\end{cases}$$
is also an element of $X$. Then any symbol in $X$ can be config-folded into the safe symbol. By config-folding the symbols one by one we obtain the fixed point $\{\star\}^{\mathbb{Z}^d}$. Thus any nearest neighbour shift of finite type with a safe symbol satisfies the hypothesis of both the propositions.

3. Suppose $H$ is a graph which folds into a single edge (denoted by $Edge$) or a single vertex $v$ with a loop. Then the shift space $X_H$ can be config-folded to $X_{Edge}$ (which consists of two periodic configurations) or the fixed point $\{v\}^{\mathbb{Z}^d}$ respectively. In the latter case, the graph $H$ is called dismantlable [37]. Note that finite trees and the graph $C_4$ fold into an edge. Thus in this class of examples $H$ may have $C_4$ as a subgraph or self-loops. For dismantlable graphs $H$, Theorem 4.1 in [6] implies the conclusion of Propositions 4.3.4 and 4.3.5 for $X_H$.

Proof of Proposition 4.3.4. Let $\mu$ be a shift-invariant probability measure adapted to $X$. To prove that $supp(\mu)=X$ it is sufficient to prove that for all $n\in\mathbb{N}$ and $x\in X$, $\mu([x]_{D_n})>0$. Let $X_0=X,X_1,X_2,\ldots,X_r$ be a sequence of full config-folds where $X_r:=\{\sigma^{\vec{i}_1}(p),\sigma^{\vec{i}_2}(p),\ldots,\sigma^{\vec{i}_{k-1}}(p)\}$ is the orbit of a periodic point.
For any two configurations $z,w\in X$ there exists $\vec{i}\in\mathbb{Z}^d$ such that $F_X(z)=F_X(\sigma^{\vec{i}}(w))$. Since $\mu$ is shift-invariant we can choose $y\in supp(\mu)$ such that $F_X(x)=F_X(y)$. Consider the configurations $I_{X,n}(x)$ and $O_{X,n+2r-1}(y)$. By Proposition 4.3.3 they satisfy the equations
$$I_{X,n}(x)|_{D^c_{n+r-1}}=F_X(x)|_{D^c_{n+r-1}}\quad\text{and}\quad O_{X,n+2r-1}(y)|_{D_{n+r}}=F_X(y)|_{D_{n+r}}.$$
Then $I_{X,n}(x)|_{\partial D_{n+r-1}}=O_{X,n+2r-1}(y)|_{\partial D_{n+r-1}}$. Since $X$ is a nearest neighbour shift of finite type, the configuration $z$ given by
$$z|_{D_{n+r}}:=I_{X,n}(x)|_{D_{n+r}},\qquad z|_{D^c_{n+r-1}}:=O_{X,n+2r-1}(y)|_{D^c_{n+r-1}}$$
is an element of $X$. Moreover
$$z|_{D_n}=I_{X,n}(x)|_{D_n}=x|_{D_n},\qquad z|_{D^c_{n+2r-1}}=O_{X,n+2r-1}(y)|_{D^c_{n+2r-1}}=y|_{D^c_{n+2r-1}}.$$
Thus $(y,z)\in\Delta_X$. Since $\mu$ is adapted we get that $z\in supp(\mu)$. Finally,
$$\mu([x]_{D_n})=\mu([z]_{D_n})>0.$$

Note that all the maps being discussed here, $f_X$, $f_{X,A}$, $F_X$, $I_{X,n}$ and $O_{X,n}$, are (not necessarily shift-invariant) single block maps, that is, maps $f$ where $(f(x))_{\vec{i}}$ depends only on $x_{\vec{i}}$. Thus if $f$ is one such map and $x|_A=y|_A$ for some set $A\subset\mathbb{Z}^d$ then $f(x)|_A=f(y)|_A$; they map homoclinic pairs to homoclinic pairs.

Proof of Proposition 4.3.5. Let $X_0=X,X_1,X_2,\ldots,X_r$ be a sequence of full config-folds where $X_r$ is frozen. Let $(x,y)\in\Delta_X$. Since $X_r$ is frozen, $F_X(x)=F_X(y)$. Suppose $x|_{D^c_n}=y|_{D^c_n}$ for some $n\in\mathbb{N}$. Then $O_{X,n+r-1}(x)|_{D^c_n}=O_{X,n+r-1}(y)|_{D^c_n}$. Also, by Proposition 4.3.3,
$$O_{X,n+r-1}(x)|_{D_n}=F_X(x)|_{D_n}=F_X(y)|_{D_n}=O_{X,n+r-1}(y)|_{D_n}.$$
This proves that $O_{X,n+r-1}(x)=O_{X,n+r-1}(y)$. In fact it completes the proof, since for all $z\in X$ there exists a chain of pivots in $X$ from $z$ to $O_{X,n+r-1}(z)$.

4.4 Universal Covers

Most cases will not be as simple as in the proofs of Propositions 4.3.4 and 4.3.5. We wish to prove the conclusions of these propositions for hom-shifts $X_H$ when $H$ is a connected four-cycle free graph. Many ideas carry over from the proofs of these results because of the relationship of such graphs with their universal covers; we describe this relationship next. The results in this section are not original; look for instance in [56]. We mention them for completeness.

Let $H$ be a finite connected graph with no self-loops. We denote by $d_H$ the ordinary graph distance on $H$ and by $D_H(u)$ the ball of radius 1 around $u$. A graph homomorphism $\pi:C\longrightarrow H$ is called a covering map if for some $n\in\mathbb{N}\cup\{\infty\}$ and all $u\in H$, there exist disjoint sets $\{C_i\}_{i=1}^{n}\subset C$ such that $\pi^{-1}(D_H(u))=\cup_{i=1}^{n}C_i$ and $\pi|_{C_i}:C_i\longrightarrow D_H(u)$ is an isomorphism of the induced subgraphs for $1\leq i\leq n$. A covering space of a graph $H$ is a graph $C$ such that there exists a covering map $\pi:C\longrightarrow H$.

A universal covering space of $H$ is a covering space of $H$ which is a tree. Unique up to graph isomorphism [56], these covers can be described in multiple ways. Their standard construction uses non-backtracking walks [1]: A walk on $H$ is a sequence of vertices $(v_1,v_2,\ldots,v_n)$ such that $v_i\sim_H v_{i+1}$ for all $1\leq i\leq n-1$. The length of a walk $p=(v_1,v_2,\ldots,v_n)$ is $|p|=n-1$, the number of edges traversed on that walk. It is called non-backtracking if $v_{i-1}\neq v_{i+1}$ for all $2\leq i\leq n-1$, that is, successive steps do not traverse the same edge. Choose a vertex $u\in H$. The vertex set of the universal cover is the set of all non-backtracking walks on $H$ starting from $u$; there is an edge between two such walks if one extends the other by a single step. The choice of the starting vertex $u$ is arbitrary; choosing a different vertex gives rise to an isomorphic graph. We denote the universal cover by $E_H$. The covering map $\pi:E_H\longrightarrow H$ maps a walk to its terminal vertex.
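To make the construction concrete, the following sketch (illustrative only and not part of the thesis; the encoding of cover vertices as tuples and the function names are assumptions) enumerates the ball of a given radius around the base vertex of $E_H$ by listing non-backtracking walks, together with the tree edges and the covering map.

```python
def universal_cover_ball(adj, u, radius):
    """Vertices of the universal cover E_H within distance `radius` of the base vertex u,
    encoded as non-backtracking walks starting at u, together with the tree edges
    and the covering map (walk -> terminal vertex)."""
    walks = [(u,)]
    frontier = [(u,)]
    for _ in range(radius):
        new = []
        for w in frontier:
            for v in adj[w[-1]]:
                if len(w) == 1 or v != w[-2]:      # non-backtracking step
                    new.append(w + (v,))
        walks.extend(new)
        frontier = new
    edges = [(w[:-1], w) for w in walks if len(w) > 1]   # w extends w[:-1] by one step
    pi_map = {w: w[-1] for w in walks}                   # covering map
    return walks, edges, pi_map

# For a cycle the universal cover is a bi-infinite path, so the ball of radius r
# has exactly 2r + 1 vertices.
C4 = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
walks, edges, pi_map = universal_cover_ball(C4, 1, 3)
print(len(walks))  # 7
```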
Usually, we will denote by $\tilde{u}$, $\tilde{v}$ and $\tilde{w}$ vertices of $E_H$ such that $\pi(\tilde{u})=u$, $\pi(\tilde{v})=v$ and $\pi(\tilde{w})=w$.

This construction shows that the universal cover of a graph is finite if and only if the graph is a finite tree. To see this, if the graph has a cycle then the finite segments of the walk looping around the cycle give us infinitely many vertices for the universal cover. If the graph is a finite tree, then all walks must terminate at the leaves and their length is bounded by the diameter of the tree. In fact, the universal cover of a tree is the tree itself, while the universal cover of a cycle (for instance $C_4$) is $\mathbb{Z}$, obtained by finite segments of the walks $(1,2,3,4,1,2,3,4,\ldots)$ and $(1,4,3,2,1,4,3,2,\ldots)$.

Following the ideas of homotopies in algebraic topology, there is a natural operation on the set of walks: two walks can be joined together if one begins where the other one ends. More formally, given two walks $p=(v_1,v_2,\ldots,v_n)$ and $q=(w_1,w_2,\ldots,w_m)$ where $v_n=w_1$, consider $p\star q=(v_1,v_2,\ldots,v_n,w_2,w_3,\ldots,w_m)$. However, even when $p$ and $q$ are non-backtracking, $p\star q$ need not be non-backtracking. So we consider the walk $[p\star q]$ instead, which erases the backtracking segments of $p\star q$; that is, if for some $i\in\mathbb{N}$, $v_{n-i+1}\neq w_i$ and $v_{n-j+1}=w_j$ for all $1\leq j\leq i-1$ then
$$[p\star q]:=(v_1,v_2,\ldots,v_{n-i+1},w_{i-1},w_i,\ldots,w_m).$$
This operation of erasing the backtracking segments is called reduction; look for instance in [56]. The following proposition is well known (Section 4 of [56]) and shall be useful in our context as well.

Proposition 4.4.1. Let $H$ be a finite connected graph without any self-loops. Then for all $\tilde{v},\tilde{w}\in E_H$ satisfying $\pi(\tilde{v})=\pi(\tilde{w})$ there exists a graph isomorphism $\phi:E_H\longrightarrow E_H$ such that $\phi(\tilde{v})=\tilde{w}$ and $\pi\circ\phi=\pi$.

To see how to construct this isomorphism, consider as an example $(u)$, the empty walk on $H$, and $(v_1,v_2,\ldots,v_n)$, some non-backtracking walk such that $v_1=v_n=u$. Then the map $\phi:E_H\longrightarrow E_H$ given by
$$\phi(\tilde{w}):=[(v_1,v_2,\ldots,v_n)\star\tilde{w}]$$
is a graph isomorphism which maps $(u)$ to $(v_1,v_2,\ldots,v_n)$; its inverse is $\psi:E_H\longrightarrow E_H$ given by
$$\psi(\tilde{w}):=[(v_n,v_{n-1},\ldots,v_1)\star\tilde{w}].$$

The maps $\phi$, $\pi$ described above give rise to natural maps, also denoted by $\phi$ and $\pi$, where
$$\phi:X_{E_H}\longrightarrow X_{E_H}\text{ is given by }\phi(\tilde{x})_{\vec{i}}:=\phi(\tilde{x}_{\vec{i}})$$
and
$$\pi:X_{E_H}\longrightarrow X_H\text{ is given by }\pi(\tilde{x})_{\vec{i}}:=\pi(\tilde{x}_{\vec{i}})$$
for all $\vec{i}\in\mathbb{Z}^d$ respectively. A lift of a configuration $x\in X_H$ is a configuration $\tilde{x}\in X_{E_H}$ such that $\pi\circ\tilde{x}=x$.

Now we shall analyse some consequences of this formalism in our context. More general statements (where $\mathbb{Z}^d$ is replaced by a different graph) are true (under a different hypothesis on $H$), but we restrict to the four-cycle free condition. We noticed in Section 4.3 that if $H$ is a tree then $X_H$ satisfies the conclusions of Theorems 4.2.4 and 4.1.4. Now we will draw a connection between the four-cycle free condition on $H$ and the formalism in Section 4.3.

Proposition 4.4.2 (Existence of Lifts). Let $H$ be a connected four-cycle free graph. For all $x\in X_H$ there exists $\tilde{x}\in X_{E_H}$ such that $\pi(\tilde{x})=x$. Moreover the lift $\tilde{x}$ is unique up to a choice of $\tilde{x}_{\vec{0}}$.

Proof. We will begin by constructing a sequence of graph homomorphisms $\tilde{x}^n:D_n\longrightarrow E_H$ such that $\pi\circ\tilde{x}^n=x|_{D_n}$ and $\tilde{x}^m|_{D_n}=\tilde{x}^n$ for all $m>n$. Then by taking the limit of these graph homomorphisms we obtain a graph homomorphism $\tilde{x}\in X_{E_H}$ such that $\pi\circ\tilde{x}=x$.
It will follow that given $\tilde{x}^0$ the sequence $\tilde{x}^n$ is completely determined, proving that the lifting is unique up to a choice of $\tilde{x}_{\vec{0}}$.

The recursion is the following: Let $\tilde{x}^n:D_n\longrightarrow E_H$ be a given graph homomorphism for some $n\in\mathbb{N}\cup\{0\}$ such that $\pi\circ\tilde{x}^n=x|_{D_n}$. For any $\vec{i}\in D_{n+1}\setminus D_n$, choose a vertex $\vec{j}\in D_n$ such that $\vec{j}\sim\vec{i}$. Then $\pi(\tilde{x}^n_{\vec{j}})=x_{\vec{j}}\sim x_{\vec{i}}$. Since $\pi$ defines a local isomorphism between $E_H$ and $H$, there exists a unique vertex $\tilde{v}_{\vec{i}}\sim\tilde{x}^n_{\vec{j}}$ in $E_H$ such that $\pi(\tilde{v}_{\vec{i}})=x_{\vec{i}}$. Define $\tilde{x}^{n+1}:D_{n+1}\longrightarrow E_H$ by
$$\tilde{x}^{n+1}_{\vec{i}}:=\begin{cases}\tilde{x}^n_{\vec{i}}&\text{if }\vec{i}\in D_n\\ \tilde{v}_{\vec{i}}&\text{if }\vec{i}\in D_{n+1}\setminus D_n.\end{cases}$$
Then clearly $\pi\circ\tilde{x}^{n+1}=x|_{D_{n+1}}$ and $\tilde{x}^{n+1}|_{D_n}=\tilde{x}^n$. Note that the extension $\tilde{x}^{n+1}$ is uniquely defined given $\tilde{x}^n$.

We need to prove that this defines a valid graph homomorphism from $D_{n+1}$ to $E_H$. Let $\vec{i}\in D_{n+1}\setminus D_n$ and $\vec{j}\in D_n$ be chosen as described above. Consider, if possible, any $\vec{j}'\neq\vec{j}\in D_n$ such that $\vec{j}'\sim\vec{i}$. To prove that $\tilde{x}^{n+1}$ is a graph homomorphism we need to verify that $\tilde{x}^{n+1}_{\vec{j}'}\sim\tilde{x}^{n+1}_{\vec{i}}$. Consider $\vec{i}'\in D_n$ such that $\vec{i}'\sim\vec{j}$ and $\vec{j}'$. Then $\vec{i}',\vec{j},\vec{i}$ and $\vec{j}'$ form a four-cycle. Since $H$ is four-cycle free, either $x_{\vec{i}'}=x_{\vec{i}}$ or $x_{\vec{j}'}=x_{\vec{j}}$.

Suppose $x_{\vec{i}'}=x_{\vec{i}}$; the other case is similar. Since $\pi$ is a local isomorphism and $\tilde{x}^{n+1}_{\vec{i}},\tilde{x}^{n+1}_{\vec{i}'}\sim\tilde{x}^{n+1}_{\vec{j}}$, we get that $\tilde{x}^{n+1}_{\vec{i}}=\tilde{x}^{n+1}_{\vec{i}'}$. But $\vec{i}',\vec{j}'\in D_n$ and $\tilde{x}^{n+1}|_{D_n}=\tilde{x}^n$ is a graph homomorphism; therefore $\tilde{x}^{n+1}_{\vec{i}}=\tilde{x}^{n+1}_{\vec{i}'}\sim\tilde{x}^{n+1}_{\vec{j}'}$.

Corollary 4.4.3. Let $H$ be a connected four-cycle free graph and $x,y\in X_H$. Consider some lifts $\tilde{x},\tilde{y}\in X_{E_H}$ such that $\pi(\tilde{x})=x$ and $\pi(\tilde{y})=y$. If for some $\vec{i}_0\in\mathbb{Z}^d$, $\tilde{x}_{\vec{i}_0}=\tilde{y}_{\vec{i}_0}$, then $\tilde{x}=\tilde{y}$ on the connected subset of $\{\vec{j}\in\mathbb{Z}^d\mid x_{\vec{j}}=y_{\vec{j}}\}$ which contains $\vec{i}_0$.

Proof. Let $D$ be the connected component of $\{\vec{i}\in\mathbb{Z}^d\mid x_{\vec{i}}=y_{\vec{i}}\}$ and $\tilde{D}$ be the connected component of $\{\vec{i}\in\mathbb{Z}^d\mid\tilde{x}_{\vec{i}}=\tilde{y}_{\vec{i}}\}$ which contain $\vec{i}_0$.

Clearly $\tilde{D}\subset D$. Suppose $\tilde{D}\neq D$. Since both $D$ and $\tilde{D}$ are non-empty, connected sets, there exist $\vec{i}\in D\setminus\tilde{D}$ and $\vec{j}\in\tilde{D}$ such that $\vec{i}\sim\vec{j}$. Then $x_{\vec{i}}=y_{\vec{i}}$, $x_{\vec{j}}=y_{\vec{j}}$ and $\tilde{x}_{\vec{j}}=\tilde{y}_{\vec{j}}$. Since $\pi$ is a local isomorphism, the lifts must satisfy $\tilde{x}_{\vec{i}}=\tilde{y}_{\vec{i}}$, implying $\vec{i}\in\tilde{D}$. This proves that $D=\tilde{D}$.

The following corollary says that any two lifts of the same graph homomorphism are 'identical'.

Corollary 4.4.4. Let $H$ be a connected four-cycle free graph. Then for all $\tilde{x}^1,\tilde{x}^2\in X_{E_H}$ satisfying $\pi(\tilde{x}^1)=\pi(\tilde{x}^2)=x$ there exists an isomorphism $\phi:E_H\longrightarrow E_H$ such that $\phi\circ\tilde{x}^1=\tilde{x}^2$.

Proof. By Proposition 4.4.1 there exists an isomorphism $\phi:E_H\longrightarrow E_H$ such that $\phi(\tilde{x}^1_{\vec{0}})=\tilde{x}^2_{\vec{0}}$ and $\pi\circ\phi=\pi$. Then $(\phi\circ\tilde{x}^1)_{\vec{0}}=\tilde{x}^2_{\vec{0}}$ and $\pi(\phi\circ\tilde{x}^1)=(\pi\circ\phi)(\tilde{x}^1)=\pi(\tilde{x}^1)=x$. By Proposition 4.4.2, $\phi\circ\tilde{x}^1=\tilde{x}^2$.

It is worth noting at this point the relationship of the universal cover described here with the universal cover in algebraic topology. Undirected graphs can be identified with 1-dimensional CW-complexes where the set of vertices corresponds to the 0-cells, the edges to the 1-cells of the complex, and the attaching map sends the end-points of the edges to their respective vertices. With this correspondence in mind, the (topological) universal covering space coincides with the (combinatorial) universal covering space described above; indeed a 1-dimensional CW-complex is simply connected if and only if it does not have any loops, that is, the corresponding graph does not have any cycles; it is a tree. The results in this section are well known in much greater generality. Look for instance in Chapter 13 in [35] or Chapters 5 and 6 in [31].
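The inductive construction in the proof of Proposition 4.4.2 is effectively an algorithm: each newly reached site receives the unique neighbour in the tree $E_H$ that projects to the required symbol. The sketch below (illustrative only, not from the thesis; it encodes vertices of $E_H$ as non-backtracking walks, as above, rooted at the symbol of an arbitrarily chosen base site) lifts a pattern which is the restriction of a configuration of $X_H$ to a box; the consistency check around plaquettes is exactly where four-cycle freeness is used, and for other graphs it may fail.

```python
from collections import deque

def extend_walk(walk, v):
    """Step from the cover vertex `walk` to its unique neighbour projecting to v:
    either backtrack one edge or append v."""
    if len(walk) >= 2 and walk[-2] == v:
        return walk[:-1]
    return walk + (v,)

def site_neighbours(s):
    """Nearest neighbours of a site of Z^d (sites are integer tuples)."""
    return [tuple(c + d if j == k else c for j, c in enumerate(s))
            for k in range(len(s)) for d in (-1, 1)]

def lift_pattern(pattern):
    """Lift a pattern {site: vertex of H} on a connected set of sites to E_H,
    placing the trivial walk at an arbitrary base site.  For restrictions of
    configurations of X_H with H four-cycle free the assertion never fires
    (Proposition 4.4.2)."""
    base = next(iter(pattern))
    lift = {base: (pattern[base],)}
    queue = deque([base])
    while queue:
        s = queue.popleft()
        for t in site_neighbours(s):
            if t in pattern:
                candidate = extend_walk(lift[s], pattern[t])
                if t in lift:
                    assert lift[t] == candidate  # consistency around plaquettes
                else:
                    lift[t] = candidate
                    queue.append(t)
    return lift
```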
4.5 Generalised Height Functions and Sub-Cocycles

Existence of lifts as described in the previous section enables us to measure the 'rigidity' of configurations. In this section we define generalised height functions and subsequently the slope of configurations, where steepness corresponds to this 'rigidity'.

Fix a connected four-cycle free graph $H$. Given $x\in X_H$ we can define the corresponding generalised height function $h_x:\mathbb{Z}^d\times\mathbb{Z}^d\longrightarrow\mathbb{Z}$ given by $h_x(\vec{i},\vec{j}):=d_{E_H}(\tilde{x}_{\vec{i}},\tilde{x}_{\vec{j}})$ where $\tilde{x}$ is a lift of $x$. It follows from Corollary 4.4.4 that $h_x$ is independent of the lift $\tilde{x}$.

Given a finite subset $A\subset\mathbb{Z}^d$ and $x\in X_H$ we define the range of $x$ on $A$ as
$$Range_A(x):=\max_{\vec{j}_1,\vec{j}_2\in A}h_x(\vec{j}_1,\vec{j}_2).$$
For all $x\in X_H$,
$$Range_A(x)\leq Diameter(A)$$
and more specifically
$$Range_{D_n}(x)\leq 2n \tag{4.5.1}$$
for all $n\in\mathbb{N}$. Since $\tilde{x}\in X_{E_H}$ is a map between bipartite graphs it preserves the parity of the distance function, that is, if $\vec{i},\vec{j}\in\mathbb{Z}^d$ and $x\in X_H$ then the parity of $\|\vec{i}-\vec{j}\|_1$ is the same as that of $h_x(\vec{i},\vec{j})$. As a consequence it follows that $Range_{\partial D_n}(x)$ is even for all $x\in X_H$ and $n\in\mathbb{N}$. We note that
$$Range_A(x)=diameter(Image(\tilde{x}|_A)).$$

The generalised height function $h_x$ is subadditive, that is,
$$h_x(\vec{i},\vec{j})\leq h_x(\vec{i},\vec{k})+h_x(\vec{k},\vec{j})$$
for all $x\in X_H$ and $\vec{i},\vec{j},\vec{k}\in\mathbb{Z}^d$. This is in contrast with the usual height function (as in Subsection 2.3 and [41]) where there is an equality instead of the inequality. This raises some technical difficulties which are partly handled by the subadditive ergodic theorem.

The following terminology is not completely standard: Given a shift space $X$, a sub-cocycle is a measurable map $c:X\times\mathbb{Z}^d\longrightarrow\mathbb{N}\cup\{0\}$ such that for all $\vec{i},\vec{j}\in\mathbb{Z}^d$
$$c(x,\vec{i}+\vec{j})\leq c(x,\vec{i})+c(\sigma^{\vec{i}}(x),\vec{j}).$$
Sub-cocycles arise in a variety of situations; look for instance in [25]. We are interested in the case $c(x,\vec{i})=h_x(\vec{0},\vec{i})$ for all $x\in X_H$ and $\vec{i}\in\mathbb{Z}^d$. The measure of 'rigidity' lies in the asymptotics of this sub-cocycle, the existence of which is provided by the subadditive ergodic theorem. Given a set $X$, if $f:X\longrightarrow\mathbb{R}$ is a function then let $f^+:=\max(0,f)$.

Theorem 4.5.1 (Subadditive Ergodic Theorem). [58] Let $(X,\mathcal{B},\mu)$ be a probability space and let $T:X\longrightarrow X$ be measure preserving. Let $\{f_n\}_{n=1}^{\infty}$ be a sequence of measurable functions $f_n:X\longrightarrow\mathbb{R}\cup\{-\infty\}$ satisfying the conditions:

(a) $f_1^+\in L^1(\mu)$;

(b) for each $m,n\geq 1$, $f_{n+m}\leq f_n+f_m\circ T^n$ $\mu$-almost everywhere.

Then there exists a measurable function $f:X\longrightarrow\mathbb{R}\cup\{-\infty\}$ such that $f^+\in L^1(\mu)$, $f\circ T=f$, $\lim_{n\to\infty}\frac{1}{n}f_n=f$ $\mu$-almost everywhere and
$$\lim_{n\longrightarrow\infty}\frac{1}{n}\int f_n\,d\mu=\inf_n\frac{1}{n}\int f_n\,d\mu=\int f\,d\mu.$$

Given a direction $\vec{i}=(i_1,i_2,\ldots,i_d)\in\mathbb{R}^d$ let $\lfloor\vec{i}\rfloor=(\lfloor i_1\rfloor,\lfloor i_2\rfloor,\ldots,\lfloor i_d\rfloor)$. We define for all $x\in X_H$ the slope of $x$ in the direction $\vec{i}$ as
$$sl_{\vec{i}}(x):=\lim_{n\longrightarrow\infty}\frac{1}{n}h_x(\vec{0},\lfloor n\vec{i}\rfloor)$$
whenever it exists.

If $\vec{i}\in\mathbb{Z}^d$ we note that the sequence of functions $f_n:X_H\longrightarrow\mathbb{N}\cup\{0\}$ given by
$$f_n(x)=h_x(\vec{0},n\vec{i})$$
satisfies the hypothesis of this theorem for any shift-invariant probability measure on $X_H$: $|f_1|\leq\|\vec{i}\|_1$ and the subadditivity condition in the theorem is just a restatement of the sub-cocycle condition described above, that is, if $T=\sigma^{\vec{i}}$ then
$$f_{n+m}(x)=h_x(\vec{0},(n+m)\vec{i})\leq h_x(\vec{0},n\vec{i})+h_{\sigma^{n\vec{i}}x}(\vec{0},m\vec{i})=f_n(x)+f_m(T^n(x)).$$
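With cover vertices encoded as non-backtracking walks from a common base (as in the lifting sketch above), the tree distance $d_{E_H}$, and hence $h_x$, is easy to compute: two such walks branch apart after their longest common prefix. The following minimal sketch (an illustration under that encoding, not part of the thesis) computes $h_x(\vec{i},\vec{j})$ from a lift; by Corollary 4.4.4 the value does not depend on which lift is used.

```python
def cover_distance(p, q):
    """Tree distance in E_H between two cover vertices given as reduced walks
    (tuples of vertices of H) starting from the same base vertex."""
    common = 0
    for a, b in zip(p, q):
        if a != b:
            break
        common += 1
    return (len(p) - common) + (len(q) - common)

def height(lift, i, j):
    """Generalised height h_x(i, j) = d_{E_H}(x~_i, x~_j), computed from a lift
    such as the one returned by lift_pattern above."""
    return cover_distance(lift[i], lift[j])
```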
The asymptotics of the generalised height functions (or more generally the sub-cocycles) are a consequence of the subadditive ergodic theorem, as we will describe next. In the following, by an ergodic measure on $X_H$ we mean a probability measure on $X_H$ which is ergodic with respect to the $\mathbb{Z}^d$-shift action on $X_H$.

Proposition 4.5.2 (Existence of Slopes). Let $H$ be a connected four-cycle free graph and $\mu$ be an ergodic measure on $X_H$. Then for all $\vec{i}\in\mathbb{Z}^d$
$$sl_{\vec{i}}(x)=\lim_{n\longrightarrow\infty}\frac{1}{n}h_x(\vec{0},n\vec{i})$$
exists almost everywhere and is independent of $x$. Moreover if $\vec{i}=(i_1,i_2,\ldots,i_d)$ then
$$sl_{\vec{i}}(x)\leq\sum_{k=1}^{d}|i_k|\,sl_{\vec{e}_k}(x).$$

Proof. Fix a direction $\vec{i}\in\mathbb{Z}^d$. Consider the sequence of functions $\{f_n\}_{n=1}^{\infty}$ and the map $T:X_H\longrightarrow X_H$ as described above. By the subadditive ergodic theorem there exists a function $f:X_H\longrightarrow\mathbb{R}\cup\{-\infty\}$ such that
$$\lim_{n\to\infty}\frac{1}{n}f_n=f\text{ almost everywhere.}$$
Note that $f=sl_{\vec{i}}$. Since for all $x\in X_H$ and $n\in\mathbb{N}$, $0\leq f_n\leq n\|\vec{i}\|_1$, we have $0\leq f(x)\leq\|\vec{i}\|_1$ whenever it exists. Fix any $\vec{j}\in\mathbb{Z}^d$. Then
$$f_n(\sigma^{\vec{j}}(x))=h_{\sigma^{\vec{j}}(x)}(\vec{0},n\vec{i})=h_x(\vec{j},n\vec{i}+\vec{j})$$
and hence
$$-h_x(\vec{j},\vec{0})+h_x(\vec{0},n\vec{i})-h_x(n\vec{i},n\vec{i}+\vec{j})\leq f_n(\sigma^{\vec{j}}(x))\leq h_x(\vec{j},\vec{0})+h_x(\vec{0},n\vec{i})+h_x(n\vec{i},n\vec{i}+\vec{j})$$
implying
$$-2\|\vec{j}\|_1+f_n(x)\leq f_n(\sigma^{\vec{j}}(x))\leq 2\|\vec{j}\|_1+f_n(x)$$
implying
$$f(x)=\lim_{n\longrightarrow\infty}\frac{1}{n}f_n(x)=\lim_{n\longrightarrow\infty}\frac{1}{n}f_n(\sigma^{\vec{j}}x)=f(\sigma^{\vec{j}}(x))$$
almost everywhere. Since $\mu$ is ergodic, $sl_{\vec{i}}=f$ is constant almost everywhere. Let $\vec{i}_k=(i_1,i_2,\ldots,i_k,0,\ldots,0)\in\mathbb{Z}^d$. By the subadditive ergodic theorem
$$sl_{\vec{i}}(x)=\int sl_{\vec{i}}(x)\,d\mu=\lim_{n\longrightarrow\infty}\frac{1}{n}\int h_x(\vec{0},n\vec{i})\,d\mu\leq\sum_{k=1}^{d}\lim_{n\longrightarrow\infty}\frac{1}{n}\int h_{\sigma^{n\vec{i}_{k-1}}(x)}(\vec{0},ni_k\vec{e}_k)\,d\mu=\sum_{k=1}^{d}\lim_{n\longrightarrow\infty}\frac{1}{n}\int h_x(\vec{0},ni_k\vec{e}_k)\,d\mu\leq\sum_{k=1}^{d}|i_k|\lim_{n\longrightarrow\infty}\frac{1}{n}\int h_x(\vec{0},n\vec{e}_k)\,d\mu=\sum_{k=1}^{d}|i_k|\,sl_{\vec{e}_k}(x)$$
almost everywhere.

Corollary 4.5.3. Let $H$ be a connected four-cycle free graph. Suppose $\mu$ is an ergodic measure on $X_H$. Then for all $\vec{i}\in\mathbb{R}^d$
$$sl_{\vec{i}}(x)=\lim_{n\longrightarrow\infty}\frac{1}{n}h_x(\vec{0},\lfloor n\vec{i}\rfloor)$$
exists almost everywhere and is independent of $x$. Moreover if $\vec{i}=(i_1,i_2,\ldots,i_d)$ then
$$sl_{\vec{i}}(x)\leq\sum_{k=1}^{d}|i_k|\,sl_{\vec{e}_k}(x).$$

Proof. Let $\vec{i}\in\mathbb{Q}^d$ and $N\in\mathbb{N}$ such that $N\vec{i}\in\mathbb{Z}^d$. For all $n\in\mathbb{N}$ there exist $k\in\mathbb{N}\cup\{0\}$ and $0\leq m\leq N-1$ such that $n=kN+m$. Then for all $x\in X_H$
$$h_x(\vec{0},kN\vec{i})-N\|\vec{i}\|_1\leq h_x(\vec{0},\lfloor n\vec{i}\rfloor)\leq h_x(\vec{0},kN\vec{i})+N\|\vec{i}\|_1$$
proving
$$sl_{\vec{i}}(x)=\lim_{n\longrightarrow\infty}\frac{1}{n}h_x(\vec{0},\lfloor n\vec{i}\rfloor)=\frac{1}{N}\lim_{k\longrightarrow\infty}\frac{1}{k}h_x(\vec{0},kN\vec{i})=\frac{1}{N}sl_{N\vec{i}}(x)$$
almost everywhere. Since $sl_{N\vec{i}}$ is constant almost everywhere, we have that $sl_{\vec{i}}$ is constant almost everywhere as well; denote the constant by $c_{\vec{i}}$. Also
$$sl_{\vec{i}}(x)\leq\frac{1}{N}\sum_{l=1}^{d}|Ni_l|\,sl_{\vec{e}_l}(x)=\sum_{l=1}^{d}|i_l|\,sl_{\vec{e}_l}(x).$$

Let $X\subset X_H$ be the set of configurations $x$ such that
$$\lim_{n\longrightarrow\infty}\frac{1}{n}h_x(\vec{0},\lfloor n\vec{i}\rfloor)=c_{\vec{i}}$$
for all $\vec{i}\in\mathbb{Q}^d$. We have proved that $\mu(X)=1$.

Fix $x\in X$. Let $\vec{i},\vec{j}\in\mathbb{R}^d$ such that $\|\vec{i}-\vec{j}\|_1<\epsilon$. Then
$$\left|\frac{1}{n}h_x(\vec{0},\lfloor n\vec{i}\rfloor)-\frac{1}{n}h_x(\vec{0},\lfloor n\vec{j}\rfloor)\right|\leq\frac{1}{n}\|\lfloor n\vec{i}\rfloor-\lfloor n\vec{j}\rfloor\|_1\leq\epsilon+\frac{2d}{n}.$$
Thus we can approximate $\frac{1}{n}h_x(\vec{0},\lfloor n\vec{i}\rfloor)$ for $\vec{i}\in\mathbb{R}^d$ by $\frac{1}{n}h_x(\vec{0},\lfloor n\vec{j}\rfloor)$ for $\vec{j}\in\mathbb{Q}^d$ to prove that $\lim_{n\longrightarrow\infty}\frac{1}{n}h_x(\vec{0},\lfloor n\vec{i}\rfloor)$ exists for all $\vec{i}\in\mathbb{R}^d$, is independent of $x\in X$ and satisfies
$$sl_{\vec{i}}(x)\leq\sum_{k=1}^{d}|i_k|\,sl_{\vec{e}_k}(x).$$

The existence of slopes can be generalised from generalised height functions to continuous sub-cocycles; the same proofs work:

Proposition 4.5.4. Let $c:X\times\mathbb{Z}^d\longrightarrow\mathbb{R}$ be a continuous sub-cocycle and $\mu$ be an ergodic measure on $X$. Then for all $\vec{i}\in\mathbb{R}^d$
$$sl^c_{\vec{i}}(x):=\lim_{n\longrightarrow\infty}\frac{1}{n}c(x,\lfloor n\vec{i}\rfloor)$$
exists almost everywhere and is independent of $x$. Moreover if $\vec{i}=(i_1,i_2,\ldots,i_d)$ then
$$sl^c_{\vec{i}}(x)\leq\sum_{k=1}^{d}|i_k|\,sl^c_{\vec{e}_k}(x).$$

Let $C_X$ be the space of continuous sub-cocycles on a shift space $X$. $C_X$ has a natural vector space structure: given $c_1,c_2\in C_X$, $c_1+\alpha c_2$ is also a continuous sub-cocycle on $X$ for all $\alpha\in\mathbb{R}$, where addition and scalar multiplication are point-wise. The following is not hard to prove and follows directly from the definition.

Proposition 4.5.5. Let $X,Y$ be conjugate shift spaces. Then every conjugacy $f:X\longrightarrow Y$ induces an isomorphism $f^\star:C_Y\longrightarrow C_X$ given by
$$f^\star(c)(x,\vec{i}):=c(f(x),\vec{i})$$
for all $c\in C_Y$, $x\in X$ and $\vec{i}\in\mathbb{Z}^d$. Moreover $sl^c_{\vec{i}}(y)=sl^{f^\star(c)}_{\vec{i}}(f^{-1}(y))$ for all $y\in Y$ and $\vec{i}\in\mathbb{R}^d$ for which the slope $sl^c_{\vec{i}}(y)$ exists.

4.6 Proofs of the Main Theorems

Proof of Theorem 4.2.4. If $H$ is a single edge, then $X_H$ is the orbit of a periodic configuration; the result follows immediately. Suppose this is not the case.
The proof follows loosely the proof of Proposition 4.3.4 and morally the ideas from [52]: we prove the existence of two kinds of configurations in $X_H$, ones which are 'poor' (Lemma 4.6.1), in the sense that they are frozen, and others which are 'universal' (Lemma 4.6.2), for which the homoclinic class is dense.

Ideas for the following proof were inspired by discussions with Anthony Quas. A similar result in a special case is contained in Lemma 2.7.2.

Lemma 4.6.1. Let $H$ be a connected four-cycle free graph and $\mu$ be an ergodic probability measure on $X_H$ such that $sl_{\vec{e}_k}(x)=1$ almost everywhere for some $1\leq k\leq d$. Then $\mu$ is frozen and $h_\mu=0$.

Proof. Without loss of generality assume that $sl_{\vec{e}_1}(x)=1$ almost everywhere. By the subadditivity of the generalised height function, for all $k,n\in\mathbb{N}$ and $x\in X_H$ we know that
$$\frac{1}{kn}h_x(\vec{0},kn\vec{e}_1)\leq\frac{1}{kn}\sum_{m=0}^{n-1}h_x(km\vec{e}_1,k(m+1)\vec{e}_1)=\frac{1}{n}\sum_{m=0}^{n-1}\frac{1}{k}h_{\sigma^{km\vec{e}_1}(x)}(\vec{0},k\vec{e}_1)\leq 1.$$
Since $sl_{\vec{e}_1}(x)=1$ almost everywhere, we get that
$$\lim_{n\longrightarrow\infty}\frac{1}{n}\sum_{m=0}^{n-1}\frac{1}{k}h_{\sigma^{km\vec{e}_1}(x)}(\vec{0},k\vec{e}_1)=1$$
almost everywhere. By the ergodic theorem
$$\int\frac{1}{k}h_x(\vec{0},k\vec{e}_1)\,d\mu=1.$$
Therefore $h_x(\vec{0},k\vec{e}_1)=k$ almost everywhere, which implies that
$$h_x(\vec{i},\vec{i}+k\vec{e}_1)=k \tag{4.6.1}$$
for all $\vec{i}\in\mathbb{Z}^d$ and $k\in\mathbb{N}$ almost everywhere. Let $X\subset supp(\mu)$ denote the set of such configurations.

For some $n\in\mathbb{N}$ consider two patterns $a,b\in\mathcal{L}_{B_n\cup\partial_2 B_n}(supp(\mu))$ such that $a|_{\partial_2 B_n}=b|_{\partial_2 B_n}$. We will prove that then $a|_{B_n}=b|_{B_n}$. This will prove that $\mu$ is frozen, and so $|\mathcal{L}_{B_n}(supp(\mu))|=|\mathcal{L}_{\partial_2 B_n}(supp(\mu))|\leq|\mathcal{A}|^{|\partial_2 B_n|}$, implying that $h_{top}(supp(\mu))=0$. By the variational principle this implies that $h_\mu=0$.

Consider $x,y\in X$ such that $x|_{B_n\cup\partial_2 B_n}=a$ and $y|_{B_n\cup\partial_2 B_n}=b$. Noting that $\partial_2 B_n$ is connected, by Corollary 4.4.3 we can choose lifts $\tilde{x},\tilde{y}\in X_{E_H}$ such that $\tilde{x}|_{\partial_2 B_n}=\tilde{y}|_{\partial_2 B_n}$. Consider any $\vec{i}\in B_n$ and choose $k\in-\mathbb{N}$ such that $\vec{i}+k\vec{e}_1,\vec{i}+(2n+2+k)\vec{e}_1\in\partial B_n$. Then by (4.6.1), $d_{E_H}(\tilde{x}_{\vec{i}+k\vec{e}_1},\tilde{x}_{\vec{i}+(2n+2+k)\vec{e}_1})=2n+2$. But
$$(\tilde{x}_{\vec{i}+k\vec{e}_1},\tilde{x}_{\vec{i}+(k+1)\vec{e}_1},\ldots,\tilde{x}_{\vec{i}+(2n+2+k)\vec{e}_1})\quad\text{and}\quad(\tilde{y}_{\vec{i}+k\vec{e}_1},\tilde{y}_{\vec{i}+(k+1)\vec{e}_1},\ldots,\tilde{y}_{\vec{i}+(2n+2+k)\vec{e}_1})$$
are walks of length $2n+2$ from $\tilde{x}_{\vec{i}+k\vec{e}_1}$ to $\tilde{x}_{\vec{i}+(2n+2+k)\vec{e}_1}$. Since $E_H$ is a tree and the walks are of minimal length, they must be the same. Thus $\tilde{x}|_{B_n}=\tilde{y}|_{B_n}$. Taking the image under the map $\pi$ we derive that
$$a|_{B_n}=x|_{B_n}=y|_{B_n}=b|_{B_n}.$$

This partially justifies the claim that steep slopes lead to greater 'rigidity'. We are left to analyse the case where the slope is submaximal in every direction. As in the proof of Proposition 2.6.1, we will now prove a certain mixing result for the shift space $X_H$.

Lemma 4.6.2. Let $H$ be a connected four-cycle free graph and $|H|=r$. Consider any $x\in X_H$ and some $y\in X_H$ satisfying $Range_{\partial D_{(d+1)n+3r+k}}(y)\leq 2k$ for some $n\in\mathbb{N}$. Then:

1. If either $H$ is not bipartite or $x_{\vec{0}},y_{\vec{0}}$ are in the same partite class of $H$ then there exists $z\in X_H$ such that
$$z_{\vec{i}}=\begin{cases}x_{\vec{i}}&\text{if }\vec{i}\in D_n\\ y_{\vec{i}}&\text{if }\vec{i}\in D^c_{(d+1)n+3r+k}.\end{cases}$$

2. If $H$ is bipartite and $x_{\vec{0}},y_{\vec{0}}$ are in different partite classes of $H$ then there exists $z\in X_H$ such that
$$z_{\vec{i}}=\begin{cases}x_{\vec{i}+\vec{e}_1}&\text{if }\vec{i}\in D_n\\ y_{\vec{i}}&\text{if }\vec{i}\in D^c_{(d+1)n+3r+k}.\end{cases}$$

The distance $dn+3r+k$ is not optimal, but sufficient for our purposes.

Proof. We will construct the configuration $z$ only in the case when $H$ is not bipartite. The construction in the other cases is similar; the differences will be pointed out in the course of the proof.

1. Boundary patterns with non-maximal range to monochromatic patterns inside. Let $\tilde{y}$ be a lift of $y$ and $T'$ be the image of $\tilde{y}|_{D_{(d+1)n+3r+k+1}}$. Let $T$ be a minimal subtree of $E_H$ such that
$$Image(\tilde{y}|_{\partial D_{(d+1)n+3r+k}})\subset T\subset T'.$$
Since $Range_{\partial D_{(d+1)n+3r+k}}(y)\leq 2k$, $diameter(T)\leq 2k$.
By Proposition 4.3.1 there exists a graph homomorphism $f:T'\longrightarrow T$ such that $f|_T$ is the identity. Consider the configuration $\tilde{y}^1$ given by
$$\tilde{y}^1_{\vec{i}}=\begin{cases}f(\tilde{y}_{\vec{i}})&\text{if }\vec{i}\in D_{(d+1)n+3r+k+1}\\ \tilde{y}_{\vec{i}}&\text{otherwise.}\end{cases}$$
The pattern
$$\tilde{y}^1|_{D_{(d+1)n+3r+k+1}}\in\mathcal{L}_{D_{(d+1)n+3r+k+1}}(X_T)\subset\mathcal{L}_{D_{(d+1)n+3r+k+1}}(X_{E_H}).$$
Moreover since $f|_T$ is the identity map,
$$\tilde{y}^1|_{D^c_{(d+1)n+3r+k}}=\tilde{y}|_{D^c_{(d+1)n+3r+k}}\in\mathcal{L}_{D^c_{(d+1)n+3r+k}}(X_{E_H}).$$
Since $X_{E_H}$ is given by nearest neighbour constraints, $\tilde{y}^1\in X_{E_H}$.

Recall that the radius of a nearest neighbour shift of finite type (in our case $X_T$) is the total number of full config-folds required to obtain a stiff shift. Since $diameter(T)\leq 2k$, the radius of $X_T$ is at most $k$. Let a stiff shift obtained by a sequence of config-folds starting at $X_T$ be denoted by $Z$. Since $T$ folds into a graph consisting of a single edge, $Z$ consists of two checkerboard patterns in the vertices of an edge in $T$, say $\tilde{v}_1$ and $\tilde{v}_2$. Corresponding to such a sequence of full config-folds, we had defined in Section 4.3 the outward fixing map $O_{X_T,(d+1)n+3r+k}$. By Proposition 4.3.3 the configuration $O_{X_T,(d+1)n+3r+k}(\tilde{y}^1)\in X_{E_H}$ satisfies
$$O_{X_T,(d+1)n+3r+k}(\tilde{y}^1)|_{D_{(d+1)n+3r+1}}\in\mathcal{L}_{D_{(d+1)n+3r+1}}(Z)$$
$$O_{X_T,(d+1)n+3r+k}(\tilde{y}^1)|_{D^c_{(d+1)n+3r+k}}=\tilde{y}^1|_{D^c_{(d+1)n+3r+k}}=\tilde{y}|_{D^c_{(d+1)n+3r+k}}.$$
Note that the pattern $O_{X_T,(d+1)n+3r+k}(\tilde{y}^1)|_{\partial D_{(d+1)n+3r}}$ uses a single symbol, say $\tilde{v}_1$. Let $\pi(\tilde{v}_1)=v_1$. Then the configuration $y'=\pi(O_{X_T,(d+1)n+3r+k}(\tilde{y}^1))\in X_H$ satisfies
$$y'|_{\partial D_{(d+1)n+3r}}=v_1$$
$$y'|_{D^c_{(d+1)n+3r+k}}=y|_{D^c_{(d+1)n+3r+k}}.$$

2. Constant extension of an admissible pattern. Consider some lift $\tilde{x}$ of $x$. We begin by extending $\tilde{x}|_{B_n}$ to a periodic configuration $\tilde{x}^1\in X_{E_H}$. Consider the map $f:[-n,3n]\longrightarrow[-n,n]$ given by
$$f(k)=\begin{cases}k&\text{if }k\in[-n,n]\\ 2n-k&\text{if }k\in[n,3n].\end{cases}$$
Then we can construct the pattern $\tilde{a}\in\mathcal{L}_{[-n,3n]^d}(X_{E_H})$ given by
$$\tilde{a}_{i_1,i_2,\ldots,i_d}=\tilde{x}_{f(i_1),f(i_2),\ldots,f(i_d)}.$$
Given $k,l\in[-n,3n]$, if $|k-l|=1$ then $|f(k)-f(l)|=1$. Thus $\tilde{a}$ is a locally allowed pattern in $X_{E_H}$. Moreover since $f(-n)=f(3n)$ the pattern $\tilde{a}$ is 'periodic', meaning
$$\tilde{a}_{i_1,\ldots,i_{k-1},-n,i_{k+1},\ldots,i_d}=\tilde{a}_{i_1,\ldots,i_{k-1},3n,i_{k+1},\ldots,i_d}$$
for all $i_1,i_2,\ldots,i_d\in[-n,3n]$. Also $\tilde{a}|_{B_n}=\tilde{x}|_{B_n}$. Then the configuration $\tilde{x}^1$ obtained by tiling $\mathbb{Z}^d$ with $\tilde{a}|_{[-n,3n-1]^d}$, that is,
$$\tilde{x}^1_{\vec{i}}=\tilde{a}_{(i_1\bmod 4n,\,i_2\bmod 4n,\,\ldots,\,i_d\bmod 4n)-(n,n,\ldots,n)}\text{ for all }\vec{i}\in\mathbb{Z}^d,$$
is an element of $X_{E_H}$. Moreover $\tilde{x}^1|_{B_n}=\tilde{a}|_{B_n}=\tilde{x}|_{B_n}$ and $Image(\tilde{x}^1)=Image(\tilde{x}|_{B_n})$. Since $diameter(B_n)=2dn$, $diameter(Image(\tilde{x}^1))\leq 2dn$. Let $\tilde{T}=Image(\tilde{x}^1)$. Then $radius(X_{\tilde{T}})\leq dn$. Let a stiff shift obtained by a sequence of config-folds starting at $X_{\tilde{T}}$ be denoted by $Z'$. Since $\tilde{T}$ folds into a graph consisting of a single edge, $Z'$ consists of two checkerboard patterns in the vertices of an edge in $\tilde{T}$, say $\tilde{w}_1$ and $\tilde{w}_2$. Then by Proposition 4.3.3
$$I_{X_{\tilde{T}},n}(\tilde{x}^1)|_{D_n}=\tilde{x}^1|_{D_n}=\tilde{x}|_{D_n}$$
$$I_{X_{\tilde{T}},n}(\tilde{x}^1)|_{D^c_{(d+1)n-1}}\in\mathcal{L}_{D^c_{(d+1)n-1}}(Z').$$
We note that $I_{X_{\tilde{T}},n}(\tilde{x}^1)|_{\partial D_{(d+1)n-1}}$ consists of a single symbol, say $\tilde{w}_1$. Let $\pi(\tilde{w}_1)=w_1$. Then the configuration $x'=\pi(I_{X_{\tilde{T}},n}(\tilde{x}^1))\in X_H$ satisfies
$$x'|_{D_n}=x|_{D_n}\quad\text{and}\quad x'|_{\partial D_{(d+1)n-1}}=w_1.$$

3. Patching of an arbitrary pattern inside a configuration with non-maximal range. We will first prove that there exists a walk on $H$ from $w_1$ to $v_1$, $((w_1=u_1),u_2,\ldots,(u_{3r+2}=v_1))$. Since the graph is not bipartite, it has a cycle $p_1$ such that $|p_1|\leq r-1$ and $|p_1|$ is odd. Let $v'$ be a vertex in $p_1$. Then there exist walks $p_2$ and $p_3$ from $w_1$ to $v'$ and from $v'$ to $v_1$ respectively such that $|p_2|,|p_3|\leq r-1$. Consider any vertex $w'\sim_H v_1$. If $3r+1-|p_2|-|p_3|$ is even then the walk
$$p_2\star p_3\,(\star\,(v_1,w',v_1))^{\frac{3r+1-|p_2|-|p_3|}{2}}$$
and if not then the walk
$$p_2\star p_1\star p_3\,(\star\,(v_1,w',v_1))^{\frac{3r+1-|p_1|-|p_2|-|p_3|}{2}}$$
is a walk of length $3r+1$ in $H$ from $w_1$ to $v_1$. This is the only place where we use the fact that $H$ is not bipartite. If it were bipartite, then we would require that $x'_{\vec{0}}$ and $y'_{\vec{0}}$ be in the same partite class to construct such a walk.

Given such a walk, the configuration $z$ given by
$$z|_{D_{(d+1)n}}=x'|_{D_{(d+1)n}},\qquad z|_{D^c_{(d+1)n+3r}}=y'|_{D^c_{(d+1)n+3r}},\qquad z|_{\partial D_{(d+1)n+i-2}}=u_i\text{ for all }1\leq i\leq 3r+2$$
is an element of $X_H$ for which $z|_{D_n}=x'|_{D_n}=x|_{D_n}$ and $z|_{D^c_{(d+1)n+3r+k}}=y'|_{D^c_{(d+1)n+3r+k}}=y|_{D^c_{(d+1)n+3r+k}}$.

We now return to the proof of Theorem 4.2.4. Let $\mu$ be an ergodic probability measure adapted to $X_H$ with positive entropy.

Suppose $sl_{\vec{e}_i}(x)=\theta_i$ almost everywhere. By Lemma 4.6.1, $\theta_i<1$ for all $1\leq i\leq d$. Let $\theta=\max_i\theta_i$ and $0<\epsilon<\frac{1}{4}(1-\theta)$. Denote by $S^{d-1}$ the sphere of radius 1 in $\mathbb{R}^d$ for the $l^1$ norm. Since $S^{d-1}$ is compact in $\mathbb{R}^d$ we can choose a finite set $\{\vec{v}_1,\vec{v}_2,\ldots,\vec{v}_t\}\subset S^{d-1}$ such that for all $\vec{v}\in S^{d-1}$ there exists some $1\leq i\leq t$ satisfying $\|\vec{v}_i-\vec{v}\|_1<\epsilon$. By Corollary 4.5.3, for all $\vec{v}\in S^{d-1}$
$$\lim_{n\longrightarrow\infty}\frac{1}{n}h_x(\vec{0},\lfloor n\vec{v}\rfloor)\leq\theta$$
almost everywhere. By Egoroff's theorem [18] there exists $N_0\in\mathbb{N}$ such that for all $n\geq N_0$
$$\mu(\{x\in X_H\mid h_x(\vec{0},\lfloor n\vec{v}_i\rfloor)\leq n\theta+n\epsilon\text{ for all }1\leq i\leq t\})>1-\epsilon. \tag{4.6.2}$$
Let $\vec{v}\in\partial D_{n-1}$ and $1\leq i_0\leq t$ be such that $\|\frac{1}{n}\vec{v}-\vec{v}_{i_0}\|_1<\epsilon$. If for some $x\in X_H$ and $n\in\mathbb{N}$
$$h_x(\vec{0},\lfloor n\vec{v}_{i_0}\rfloor)\leq n\theta+n\epsilon$$
then
$$h_x(\vec{0},\lfloor\vec{v}\rfloor)\leq h_x(\vec{0},\lfloor n\vec{v}_{i_0}\rfloor)+\lceil n\epsilon\rceil\leq n\theta+2n\epsilon+1.$$
By Inequality (4.6.2) we get
$$\mu(\{x\in X_H\mid h_x(\vec{0},\lfloor\vec{v}\rfloor)\leq n\theta+2n\epsilon+1\text{ for all }\vec{v}\in\partial D_{n-1}\})>1-\epsilon$$
for all $n\geq N_0$. Therefore for all $n\geq N_0$ there exists $x^{(n)}\in supp(\mu)$ such that
$$Range_{\partial D_{n-1}}(x^{(n)})\leq 2n\theta+4n\epsilon+2<2n(1-\epsilon)+2.$$

Let $x\in X_H$ and $n_0\in\mathbb{N}$. It is sufficient to prove that $\mu([x]_{D_{n_0-1}})>0$. Suppose $r:=|H|$. Choose $k\in\mathbb{N}$ such that
$$n_0(d+1)+3r+k+1\geq N_0$$
$$2\,(n_0(d+1)+3r+k+1)(1-\epsilon)+2\leq 2k.$$
Then by Lemma 4.6.2 there exists $z\in X_H$ such that either
$$z_{\vec{j}}=\begin{cases}x_{\vec{j}}&\text{if }\vec{j}\in D_{n_0}\\ x^{(n_0(d+1)+3r+k+1)}_{\vec{j}}&\text{if }\vec{j}\in D^c_{n_0(d+1)+3r+k}\end{cases}$$
or
$$z_{\vec{j}}=\begin{cases}x_{\vec{j}+\vec{e}_1}&\text{if }\vec{j}\in D_{n_0}\\ x^{(n_0(d+1)+3r+k+1)}_{\vec{j}}&\text{if }\vec{j}\in D^c_{n_0(d+1)+3r+k}.\end{cases}$$
In either case $(z,x^{(n_0(d+1)+3r+k+1)})\in\Delta_{X_H}$. Since $\mu$ is adapted to $X_H$, $z\in supp(\mu)$. In the first case we get that $\mu([x]_{D_{n_0-1}})=\mu([z]_{D_{n_0-1}})>0$. In the second case we get that
$$\mu([x]_{D_{n_0-1}})=\mu(\sigma^{\vec{e}_1}([x]_{D_{n_0-1}}))=\mu([z]_{D_{n_0-1}-\vec{e}_1})>0.$$
This completes the proof.

Every shift space conjugate to an entropy minimal shift space is entropy minimal. However a shift space $X$ which is conjugate to $X_H$ for $H$ connected and four-cycle free need not even be a hom-shift. By following the proof carefully it is possible to extract a condition for entropy minimality which is conjugacy-invariant:

Theorem 4.6.3. Let $X$ be a shift of finite type and $c$ a continuous sub-cocycle on $X$ with the property that $c(\cdot,\vec{i})\leq\|\vec{i}\|_1$ for all $\vec{i}\in\mathbb{Z}^d$ and, for every ergodic probability measure $\mu$ adapted to $X$:

1. If $sl^c_{\vec{e}_i}(x)=1$ almost everywhere for some $1\leq i\leq d$ then $h_\mu<h_{top}(X)$.

2. If $sl^c_{\vec{e}_i}(x)<1$ almost everywhere for all $1\leq i\leq d$ then $supp(\mu)=X$.

Then $X$ is entropy minimal.

Here is a sketch: By Proposition 4.2.1 and Theorems 4.2.2, 4.2.3 it is sufficient to prove that every ergodic measure of maximal entropy is fully supported. If $X$ is a shift of finite type satisfying the hypothesis of Theorem 4.6.3 then it is entropy minimal because every ergodic measure of maximal entropy of $X$ is an ergodic probability measure adapted to $X$; its entropy is either smaller than $h_{top}(X)$ or it is fully supported. To see why the condition is conjugacy invariant, suppose that $f:X\longrightarrow Y$ is a conjugacy and $c\in C_Y$ satisfies the hypothesis of the theorem.
Then by Proposition 4.5.5 it follows that $f^\star(c)\in C_X$ satisfies the hypothesis as well.

Proof of Theorem 4.1.4. By Proposition 4.1.5 we can assume that $H$ is connected. Consider some $(x,y)\in\Delta_{X_H}$. By Corollary 4.4.3 there exist $(\tilde{x},\tilde{y})\in\Delta_{X_{E_H}}$ such that $\pi(\tilde{x})=x$ and $\pi(\tilde{y})=y$. It is sufficient to prove that there is a chain of pivots from $\tilde{x}$ to $\tilde{y}$. We will proceed by induction on $\sum_{\vec{i}\in\mathbb{Z}^d}d_{E_H}(\tilde{x}_{\vec{i}},\tilde{y}_{\vec{i}})$. The induction hypothesis (on $M$) is: if $\sum_{\vec{i}\in\mathbb{Z}^d}d_{E_H}(\tilde{x}_{\vec{i}},\tilde{y}_{\vec{i}})=2M$ then there exists a chain of pivots from $\tilde{x}$ to $\tilde{y}$.

We note that $d_{E_H}(\tilde{x}_{\vec{i}},\tilde{y}_{\vec{i}})$ is even for all $\vec{i}\in\mathbb{Z}^d$, since there exists $\vec{i}'\in\mathbb{Z}^d$ such that $\tilde{x}_{\vec{i}'}=\tilde{y}_{\vec{i}'}$ and hence $\tilde{x}_{\vec{i}}$ and $\tilde{y}_{\vec{i}}$ are in the same partite class of $E_H$ for all $\vec{i}\in\mathbb{Z}^d$.

The base case ($M=1$) occurs exactly when $\tilde{x}$ and $\tilde{y}$ differ at a single site; there is nothing to prove in this case. Assume the hypothesis for some $M\in\mathbb{N}$.

Consider $(\tilde{x},\tilde{y})\in\Delta_{X_{E_H}}$ such that
$$\sum_{\vec{i}\in\mathbb{Z}^d}d_{E_H}(\tilde{x}_{\vec{i}},\tilde{y}_{\vec{i}})=2M+2.$$
Let
$$B=\{\vec{j}\in\mathbb{Z}^d\mid\tilde{x}_{\vec{j}}\neq\tilde{y}_{\vec{j}}\}$$
and fix a vertex $\tilde{v}\in E_H$. Without loss of generality we can assume that
$$\max_{\vec{i}\in B}d_{E_H}(\tilde{v},\tilde{x}_{\vec{i}})\geq\max_{\vec{i}\in B}d_{E_H}(\tilde{v},\tilde{y}_{\vec{i}}). \tag{4.6.3}$$
Consider some $\vec{i}_0\in B$ such that
$$d_{E_H}(\tilde{v},\tilde{x}_{\vec{i}_0})=\max_{\vec{i}\in B}d_{E_H}(\tilde{v},\tilde{x}_{\vec{i}}).$$
Consider the shortest walks $(\tilde{v}=\tilde{v}_1,\tilde{v}_2,\ldots,\tilde{v}_n=\tilde{x}_{\vec{i}_0})$ from $\tilde{v}$ to $\tilde{x}_{\vec{i}_0}$ and $(\tilde{v}=\tilde{v}'_1,\tilde{v}'_2,\ldots,\tilde{v}'_{n'}=\tilde{y}_{\vec{i}_0})$ from $\tilde{v}$ to $\tilde{y}_{\vec{i}_0}$. By Assumption (4.6.3), $n'\leq n$. Since these are shortest walks on a tree, if $\tilde{v}'_k=\tilde{v}_{k'}$ for some $1\leq k\leq n'$ and $1\leq k'\leq n$ then $k=k'$ and $\tilde{v}_l=\tilde{v}'_l$ for $1\leq l\leq k$. Let
$$k_0=\max\{1\leq k\leq n'\mid\tilde{v}'_k=\tilde{v}_k\}.$$
A necessary condition is that they must be topological Markov fields which are the supportof some shift-invariant probability measure. For d = 1 this condition is sufficient as well and thesupport of shift-invariant Markov random fields are further characterised as the non-wanderingnearest neighbour shifts of finite type [12]. However we do not know whether the condition issufficient in higher dimensions. Also: Suppose a shift of finite type is the support of some shift-invariant Markov random field. Must it also be the support of a shift-invariant Gibbs state forsome shift-invariant nearest neighbour interaction?5.1.2 Algorithmic AspectsSuppose we are given a nearest neighbour shift of finite type X ⊂ AZdwith the pivot propertyalong with its globally allowed patterns on {~0} ∪ ∂{~0}. Is there an algorithm to determine thedimension of MZdX ? If so, is there a way to decide which of the shift-invariant Markov cocycles have108a corresponding fully supported shift-invariant probability measure on the subshift? In case thesubshift has a safe symbol, such an algorithm can be derived from the proof of the Hammersley-Clifford Theorem [43] and also from Lemma 3.1 in [15]. More generally, in case the subshift stronglyconfig-folds to {?}Zdfor some ? ∈ A such an algorithm can be derived from the proof of Theorem3.3.2.5.1.3 Markov Random Fields for other ModelsThere are other models for which it would be interesting to get a good description of the Markovrandom fields and Gibbs states. One such family is the r-colourings of Zd with r ≥ 2d+2. The spaceof domino tilings of Z2 is another interesting model. For these examples we know the generalisedpivot property (defined in Subsection 5.2.4) holds, so the space of shift-invariant Markov cocyclesis finite dimensional. We believe that some of the techniques developed Chapters 2 and 3 will beuseful in studying other systems.5.1.4 Changing the Underlying GraphOur results in Chapter 2 were dependent on the structure of the standard Cayley graph of Zd.How does the underlying graph affect our results? What happens if we choose a different set ofgenerators?Further, by Theorems 3.3.1 and 3.3.2 we have generalised the Hammersley-Clifford Theorem,but only when the graph G is bipartite. Can this be generalised beyond the bipartite case? Notethat G being bipartite is used in many critical parts of the proof e.g. the construction of the elementsxv in Lemma 3.3.3, construction of the interaction in Lemma 3.3.4 etc.5.1.5 Mixing Properties of Subshifts and the Dimension of the InvariantMarkov CocyclesIn Section 2.8, we constructed a subshift such that the dimension of the space of shift-invariantMarkov cocycles is uncountable. However this subshift has poor mixing properties (See [4] for adiscussion of some mixing properties of Zd-subshifts). On the other hand the safe symbol assump-tion implies that the space of shift-invariant Markov cocycles is the same as the space of Gibbscocycles with shift-invariant nearest neighbour interactions (and hence finite dimensional). Arethere some natural mixing conditions for shift spaces which imply that the space of shift-invariantMarkov cocycles is finite-dimensional?5.1.6 Identifying Hammersley-Clifford SpacesSuppose a finite graph H can be folded into a single vertex (with or without a loop) or an edge. Wehave proved that for any bipartite graph G the space Hom(G,H) is Hammersley-Clifford. 
We have also shown that the strong config-folds and strong config-unfolds of Hammersley-Clifford spaces are Hammersley-Clifford spaces as well. Following [6] we will call a graph $H$ stiff if it cannot be folded any more. Fixing a particular domain graph, say $\mathbb{Z}^2$, is it possible to classify all stiff graphs $H$ for which $Hom(\mathbb{Z}^2,H)$ is Hammersley-Clifford? What if we want $Hom(\mathcal{G},H)$ to be Hammersley-Clifford for all bipartite graphs $\mathcal{G}$?

5.2 Hom-shifts, Entropy Minimality and the Pivot Property

In Chapter 4 we proved entropy minimality and the pivot property for a special class of hom-shifts, viz. $X_H$ where $H$ is a finite connected four-cycle free graph.

5.2.1 Identification of Hom-Shifts

Can we determine when a shift space is conjugate to a hom-shift?

Being conjugate to a hom-shift lays many restrictions on the shift space, for instance on its periodic configurations. Consider a conjugacy $f:X\longrightarrow X_H$ where $H$ is a finite undirected graph. Let $Z\subset X_H$ be the set of configurations invariant under $\{\sigma^{2\vec{e}_i}\}_{i=1}^{d}$. Then there is a bijection between $Z$ and $\mathcal{L}_A(X_H)$ where $A$ is the rectangular shape
$$A:=\Big\{\sum_{i=1}^{d}\delta_i\vec{e}_i\;\Big|\;\delta_i\in\{0,1\}\Big\}$$
because every pattern in $\mathcal{L}_A(X_H)$ extends to a unique configuration in $Z$. More generally, given a graph $H$ it is not hard to compute the number of periodic configurations for a specific finite-index subgroup of $\mathbb{Z}^d$. Moreover, periodic points are dense in these shift spaces and there are algorithms to compute approximating upper and lower bounds of their entropy [19, 30]. Thus the same holds for the shift space $X$ as well. We are not familiar with nice decidable conditions which imply that a shift space is conjugate to a hom-shift.

5.2.2 Hom-Shifts and Strong Irreducibility

Which hom-shifts are strongly irreducible? We know two such conditions:

1. [6] If $H$ is a finite graph which folds into $H'$ then $X_H$ is strongly irreducible if and only if $X_{H'}$ is strongly irreducible. This reduces the problem to graphs $H$ which are stiff. For instance, if $H$ is dismantlable then $X_H$ is strongly irreducible.

2. [5] $X_H$ is single site fillable. A shift space $X_\mathcal{F}\subset\mathcal{A}^{\mathbb{Z}^d}$ is said to be single site fillable if for all patterns $a\in\mathcal{A}^{\partial\{\vec{0}\}}$ there exists a locally allowed pattern $b\in\mathcal{A}^{D_1}$ in $X_\mathcal{F}$ such that $b|_{\partial\{\vec{0}\}}=a$. In case $X_\mathcal{F}=X_H$ for some graph $H$, it is single site fillable if and only if given vertices $v_1,v_2,\ldots,v_{2d}\in H$ there exists a vertex $v\in H$ adjacent to all of them.

It follows that $X_{K_5}$ is single site fillable and hence strongly irreducible for $d=2$. In fact strong irreducibility has been proved in [5] for shifts of finite type under a property weaker than single site fillability called TSSM. This does not cover all possible examples. For instance it was proved in [5] that $X_{K_4}$ is strongly irreducible for $d=2$ even though it is not TSSM and $K_4$ is stiff. We do not know if it is possible to verify whether a given hom-shift is TSSM.

We remark that results about TSSM are in contrast to the results obtained in Chapter 4. It can be concluded from the results in Chapter 4 that $X_H$ is not even block-gluing (a weaker mixing property than strong irreducibility) when $H$ is four-cycle free.

5.2.3 Hom-Shifts and Entropy Minimality

Given a finite connected graph $H$, when is $X_H$ entropy minimal? We have provided some examples in Chapter 4:

1. $H$ can be folded to a single vertex with a loop or a single edge. (Proposition 4.3.4)

2. $H$ is connected and four-cycle free. (Theorem 4.1.2)

This does not provide the complete picture. For instance $X_{K_4}$ is strongly irreducible when $d=2$ and hence entropy minimal even though $K_4$ is stiff and not four-cycle free.
A possible approach might be via identifying the right sub-cocycle and Theorem 4.6.3.

Conjecture: Let $d=2$ and $H$ be a finite connected graph. Then $X_H$ is entropy minimal.

5.2.4 Hom-Shifts and the Pivot Property

We had provided some examples of graphs $H$ for which the shift space $X_H$ has the pivot property in Section 2.2.2. In Chapter 4 we gave two further sets of examples:

1. $H$ can be folded to a single vertex with a loop or a single edge. (Proposition 4.3.5)

2. $H$ is four-cycle free. (Theorem 4.1.4)

It is not true that all hom-shifts have the pivot property. The following was observed by Brian Marcus: Recall that $K_n$ denotes the complete graph with $n$ vertices. $X_{K_4}$, $X_{K_5}$ do not possess the pivot property if the dimension is 2. For instance, consider a configuration in $X_{K_5}$ which is obtained by tiling the plane with the pattern given in Figure 5.1.

Figure 5.1: Frozen Pattern

1 2 3 4 5
3 4 5 1 2
5 1 2 3 4
2 3 4 5 1
4 5 1 2 3

It is clear that the symbols in the box can be interchanged but no individual symbol can be changed. Therefore $X_{K_5}$ does not have the pivot property. However both $X_{K_4}$ and $X_{K_5}$ satisfy a more general property:

A shift space $X$ is said to have the generalised pivot property if there is an $r\in\mathbb{N}$ such that for all $(x,y)\in\Delta_X$ there exists a chain $(x_1=x),x_2,x_3,\ldots,(y=x_n)\in X$ such that $x_i$ and $x_{i+1}$ differ at most on some translate of $D_r$.

If a shift space $X$ satisfies this generalised property then $\mathcal{M}^{\mathbb{Z}^d}_X$ is a finite-dimensional vector space. It can be shown that any nearest neighbour shift of finite type $X\subset\mathcal{A}^{\mathbb{Z}}$ has the generalised pivot property. In higher dimensions this is not always true: consider the subshift $Y$ constructed in Section 2.8; since $\mathcal{M}^{\mathbb{Z}^d}_Y$ is infinite dimensional it follows that $Y$ does not have the generalised pivot property. It is not hard to prove that any single site fillable nearest neighbour shift of finite type has the generalised pivot property. This can be generalised further: in [5] it is proven that every shift space satisfying TSSM has the generalised pivot property. The space of domino tilings forms another interesting and well-known example of a subshift with the generalised pivot property [17].

For which graphs $H$ does $X_H$ satisfy the pivot property? What about the generalised pivot property?

Bibliography

[1] D. Angluin. Local and global properties in networks of processors (extended abstract). In Proceedings of the 12th Annual ACM Symposium on Theory of Computing, April 28-30, 1980, Los Angeles, California, USA, pages 82-93, 1980.

[2] M. B. Averincev. Description of Markov random fields by means of Gibbs conditional probabilities. Teor. Verojatnost. i Primenen., 17:21-35, 1972.

[3] J. Besag. Spatial interaction and the statistical analysis of lattice systems. J. Roy. Statist. Soc. Ser. B, 36:192-236, 1974. With discussion by D. R. Cox, A. G. Hawkes, P. Clifford, P. Whittle, K. Ord, R. Mead, J. M. Hammersley, and M. S. Bartlett and with a reply by the author.

[4] M. Boyle, R. Pavlov, and M. Schraudner. Multidimensional sofic shifts without separation and their factors. Trans. Amer. Math. Soc., 362(9):4617-4653, 2010.

[5] R. Briceño. The topological strong spatial mixing property and new conditions for pressure approximation. http://arxiv.org/abs/1411.2289, 2014.

[6] G. R. Brightwell and P. Winkler. Gibbs measures and dismantlable graphs. J. Combin. Theory Ser. B, 78(1):141-166, 2000.

[7] R. Burton and J. E. Steif. Non-uniqueness of measures of maximal entropy for subshifts of finite type. Ergodic Theory Dynam. Systems, 14(2):213-235, 1994.

[8] S. Capobianco. Multidimensional cellular automata and generalization of Fekete's lemma. Discrete Math. Theor. Comput. Sci., 10(3):95-104, 2008.

[9] N. Chandgotia. Markov random fields and measures with nearest neighbour potentials. MSc Thesis, Mathematics Department, The University of British Columbia, 2011.

[10] N. Chandgotia. Four-cycle free graphs, the pivot property and entropy minimality. http://arxiv.org/abs/1411.4029, 2014.

[11] N. Chandgotia. Generalisation of the Hammersley-Clifford theorem on bipartite graphs. http://arxiv.org/abs/1406.1849, 2014.

[12] N. Chandgotia, G. Han, B. Marcus, T. Meyerovitch, and R. Pavlov. One-dimensional Markov random fields, Markov chains and topological Markov fields. Proc. Amer. Math. Soc., 142(1):227-242, 2014.

[13] N. Chandgotia and T. Meyerovitch. Markov random fields, Markov cocycles and the 3-colored chessboard. http://arxiv.org/abs/1305.0808, 2013.

[14] E. M. Coven and J. Smítal. Entropy-minimality. Acta Math. Univ. Comenian. (N.S.), 62(1):117-121, 1993.

[15] S. Dachian and B. S. Nahapetian. Description of random fields by means of one-point conditional distributions and some applications. Markov Process. Related Fields, 7(2):193-214, 2001.

[16] R. L. Dobrušin. Description of a random field by means of conditional probabilities and conditions for its regularity. Teor. Verojatnost. i Primenen., 13:201-229, 1968.

[17] N. Elkies, G. Kuperberg, M. Larsen, and J. Propp. Alternating-sign matrices and domino tilings. I. J. Algebraic Combin., 1(2):111-132, 1992.

[18] G. B. Folland. Real analysis. Pure and Applied Mathematics (New York). John Wiley & Sons, Inc., New York, second edition, 1999. Modern techniques and their applications, A Wiley-Interscience Publication.

[19] S. Friedland. On the entropy of Z^d subshifts of finite type. Linear Algebra Appl., 252:199-220, 1997.

[20] D. Galvin. On homomorphisms from the Hamming cube to Z. Israel J. Math., 138:189-213, 2003.

[21] D. Geiger, C. Meek, and B. Sturmfels. On the toric algebra of graphical models. Ann. Statist., 34(3):1463-1492, 2006.

[22] H.-O. Georgii. Gibbs measures and phase transitions, volume 9 of de Gruyter Studies in Mathematics. Walter de Gruyter & Co., Berlin, 1988.

[23] G. R. Grimmett. A theorem about random fields. Bull. London Math. Soc., 5:81-84, 1973.

[24] J. Hammersley and P. Clifford. Markov fields on finite graphs and lattices. Unpublished notes, 1971.

[25] J. M. Hammersley and D. J. A. Welsh. First-passage percolation, subadditive processes, stochastic networks, and generalized renewal theory. In Proc. Internat. Res. Semin., Statist. Lab., Univ. California, Berkeley, Calif., pages 61-110. Springer-Verlag, New York, 1965.

[26] G. Keller. Equilibrium states in ergodic theory, volume 42 of London Mathematical Society Student Texts. Cambridge University Press, Cambridge, 1998.

[27] O. E. Lanford, III and D. Ruelle. Observables at infinity and states with short range correlations in statistical mechanics. Comm. Math. Phys., 13:194-215, 1969.

[28] S. L. Lauritzen. Graphical models, volume 17 of Oxford Statistical Science Series. The Clarendon Press, Oxford University Press, New York, 1996. Oxford Science Publications.

[29] D. Lind and B. Marcus. An introduction to symbolic dynamics and coding. Cambridge University Press, 1995, reprinted 1999.

[30] E. Louidor and B. H. Marcus. Improved lower bounds on capacities of symmetric 2D constraints using Rayleigh quotients. IEEE Trans. Inform. Theory, 56(4):1624-1639, 2010.

[31] W. S. Massey. Algebraic topology: an introduction. Springer-Verlag, New York-Heidelberg, 1977. Reprint of the 1967 edition, Graduate Texts in Mathematics, Vol. 56.

[32] R. Meester and J. E. Steif. Higher-dimensional subshifts of finite type, factor maps and measures of maximal entropy. Pacific J. Math., 200(2):497-510, 2001.

[33] T. Meyerovitch. Gibbs and equilibrium measures for some families of subshifts. Ergodic Theory Dynam. Systems, 33(3):934-953, 2013.

[34] J. Moussouris. Gibbs and Markov random systems with constraints. J. Statist. Phys., 10:11-33, 1974.

[35] J. R. Munkres. Topology: a first course. Prentice-Hall, Inc., Englewood Cliffs, N.J., 1975.

[36] F. Nakano, H. Ono, and T. Sadahiro. Local move connectedness of domino tilings with diagonal impurities. Discrete Math., 310(13-14):1918-1931, 2010.

[37] R. Nowakowski and P. Winkler. Vertex-to-vertex pursuit in a graph. Discrete Math., 43(2-3):235-239, 1983.

[38] L. Onsager. Crystal statistics. I. A two-dimensional model with an order-disorder transition. Phys. Rev. (2), 65:117-149, 1944.

[39] E. Ordentlich and R. M. Roth. When data must satisfy constraints upon writing. 2014.

[40] R. Pavlov. Approximating the hard square entropy constant with probabilistic methods. Ann. Probab., 40(6):2362-2399, 2012.

[41] R. Peled. High-dimensional Lipschitz functions are typically flat. http://arxiv.org/abs/1005.4636, 2010.

[42] K. Petersen and K. Schmidt. Symmetric Gibbs measures. Trans. Amer. Math. Soc., 349(7):2775-2811, 1997.

[43] C. J. Preston. Gibbs states on countable sets. Cambridge University Press, London-New York, 1974. Cambridge Tracts in Mathematics, No. 68.

[44] A. Quas and A. A. Şahin. Entropy gaps and locally maximal entropy in Z^d subshifts. Ergodic Theory Dynam. Systems, 23(4):1227-1245, 2003.

[45] A. N. Quas and P. B. Trow. Subshifts of multi-dimensional shifts of finite type. Ergodic Theory Dynam. Systems, 20(3):859-874, 2000.

[46] R. M. Robinson. Undecidability and nonperiodicity for tilings of the plane. Invent. Math., 12:177-209, 1971.

[47] D. Ruelle. Thermodynamic formalism. Cambridge Mathematical Library. Cambridge University Press, Cambridge, second edition, 2004. The mathematical structures of equilibrium statistical mechanics.

[48] K. Schmidt. The cohomology of higher-dimensional shifts of finite type. Pacific J. Math., 170(1):237-269, 1995.

[49] K. Schmidt. Invariant cocycles, random tilings and the super-K and strong Markov properties. Trans. Amer. Math. Soc., 349(7):2813-2825, 1997.

[50] K. Schmidt. Tilings, fundamental cocycles and fundamental groups of symbolic Z^d-actions. Ergodic Theory Dynam. Systems, 18(6):1473-1525, 1998.

[51] M. Schraudner. Projectional entropy and the electrical wire shift. Discrete Contin. Dyn. Syst., 26(1):333-346, 2010.

[52] M. Schraudner and S. Lightwood. Entropy minimality of Z^d shifts of finite type and the family of wire shifts. Work in progress.

[53] S. Sheffield. Ribbon tilings and multidimensional height functions. Trans. Amer. Math. Soc., 354(12):4789-4813 (electronic), 2002.

[54] S. Sherman. Markov random fields and Gibbs random fields. Israel J. Math., 14:92-103, 1973.

[55] F. Spitzer. Markov random fields and Gibbs ensembles. Amer. Math. Monthly, 78:142-154, 1971.

[56] J. R. Stallings. Topology of finite graphs. Invent. Math., 71(3):551-565, 1983.

[57] W. P. Thurston. Conway's tiling groups. Amer. Math. Monthly, 97(8):757-773, 1990.

[58] P. Walters. An introduction to ergodic theory, volume 79 of Graduate Texts in Mathematics. Springer-Verlag, New York-Berlin, 1982.
