@prefix vivo: <http://vivoweb.org/ontology/core#> .
@prefix edm: <http://www.europeana.eu/schemas/edm/> .
@prefix ns0: .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

vivo:departmentOrSchool "Science, Faculty of"@en, "Mathematics, Department of"@en ;
edm:dataProvider "DSpace"@en ;
ns0:degreeCampus "UBCV"@en ;
dcterms:creator "Chandgotia, Nishant"@en ;
dcterms:issued "2011-08-30T17:56:15Z"@en, "2011"@en ;
vivo:relatedDegree "Master of Science - MSc"@en ;
ns0:degreeGrantor "University of British Columbia"@en ;
dcterms:description "This thesis will discuss the relationship between stationary Markov random fields and probability measures with a nearest neighbour Gibbs potential. While the relationship has been well explored when the measures are fully supported, we shall discuss what happens when we weaken this assumption."@en ;
edm:aggregatedCHO "https://circle.library.ubc.ca/rest/handle/2429/37000?expand=metadata"@en ;
skos:note """Markov Random Fields and Measures with Nearest Neighbour Gibbs Potential

by Nishant Chandgotia
B. Math., Indian Statistical Institute, 2009

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in THE FACULTY OF GRADUATE STUDIES (Mathematics)

The University of British Columbia (Vancouver), August 2011
© Nishant Chandgotia, 2011

Abstract

This thesis will discuss the relationship between stationary Markov random fields and probability measures with a nearest neighbour Gibbs potential. While the relationship has been well explored when the measures are fully supported, we shall discuss what happens when we weaken this assumption.

Preface

This thesis came out of discussions and collaborations with Brian Marcus, Tom Meyerovitch, Guangyue Han, Ronnie Pavlov and Ori Gurel-Gurevich.

Table of Contents

Abstract
Preface
Table of Contents
List of Figures
Acknowledgments
1 Introduction
2 Hammersley-Clifford Theorem
3 Symbolic Dynamics
4 Markov Random Fields in 1 Dimension
5 Markov Random Fields in 2 Dimensions
6 Further Work
  6.1 Generalising Theorem 2.0.4
  6.2 Topological Markov Fields
  6.3 Pivot Property and 3-Checkerboard
7 A Linear Algebra Theorem
Bibliography

List of Figures

Figure 5.1: G
Figure 5.2: The Configurations
Figure 5.3: G'
Figure 5.4: Correspondence between G and G'
Figure 5.5: Correspondence between H and H'
Figure 6.1: The thicker lines represent the subgraph induced by D_3
Figure 6.2: p, x, q, r, y and z

Acknowledgments

I would like to thank my parents and my family for bringing me up and teaching me how to stand on my own feet. I am grateful to Prof. Brian Marcus for helping me understand my abilities and reach beyond my reach. I would not have learnt so much if Tom Meyerovitch had not been so kind with his time and so patient with my faltering steps. It was a joy to be a student in Prof. David Brydges' probability classes and to have him examine my thesis; his humour and knowledge have taught me more than books ever could. Last but not least, I would like to thank my friends Marc Carnovale, Vasu Tewari and Gourab Ray. If it were not for their nonsensical banter, I would not have survived so far away from home.

Chapter 1: Introduction

Stationary Markov random fields are probability measures arising naturally in ergodic theory, image processing and statistical physics, among many other fields. In general, there is no compact way of representing them. However, if they are fully supported, it is known by the Hammersley-Clifford theorem [2] that these measures have a nearest neighbour Gibbs potential. This is one of the avenues for understanding the structure of stationary Markov random fields. There are further generalisations of this result in [7].

We prove that the assumption on the support can be dropped in the Hammersley-Clifford theorem for stationary Markov random fields with a finite state space on the Z lattice. However, this generalisation fails to extend to higher dimensions.

In Chapter 2, we will discuss the classic Hammersley-Clifford theorem, following the treatment in [11]. In Chapter 3, we will introduce some symbolic dynamics; this chapter can be skipped by anyone familiar with the field. In Chapter 4, we prove that every stationary Markov random field on the Z lattice with a finite state space is a Markov chain and consequently a measure with a nearest neighbour Gibbs potential. We begin by proving that the support of a stationary Markov random field is a nearest neighbour shift of finite type; using the specific structure of shifts of finite type and the results of Chapter 2, we prove that the measures are Markov chains. In Chapter 5 we discuss how this result fails to extend to higher dimensions. There we also discuss the pivot property, which gives another "compact" representation of the conditional probabilities of Markov random fields. Chapter 6 discusses some conjectures and ideas for further work.

Chapter 2: Hammersley-Clifford Theorem

An undirected graph is a tuple G = (V, E) where V, the set of vertices, is a countable set and E, the set of edges, is a subset of the set of unordered pairs of distinct elements of V. An undirected graph is called locally finite if for all x ∈ V the set {y | (x, y) ∈ E} is finite. Vertices will often be referred to as sites. We will always assume that the graphs are locally finite. R is some set which we will call the set of symbols or the alphabet. We shall focus on the case where R is finite, but most of the results in Chapter 2 hold for a countable alphabet. We will sometimes assume that R has a special symbol '0'.
We will define probability measures on the space (R^V, F) where F is the sigma-algebra generated by the cylinder sets

[a, A] = {x ∈ R^V | x|_A = a},

where A ⊂ V is finite and a ∈ R^A. Given a set A ⊂ V, elements of R^A will be called configurations on A. So the probability space consists of all ways of putting symbols on sites, and the events which generate the sigma-algebra are all ways in which the symbols at certain sites have been fixed. The topology on the space is also generated by the cylinder sets; it is the same as the product topology of the discrete topology on R. The closure of a set K under this topology is denoted by K̄.

The support of a measure μ is defined as

supp(μ) = R^V - ⋃ [a, A],

where the union is over all cylinder sets with measure 0. Note that, since every cylinder set is open, the support of any measure is always a closed set.

The boundary of a set A ⊂ V is given by

∂A = {x ∈ V - A | (x, y) ∈ E for some y ∈ A}.

In this chapter we will assume that the support of any given measure has a safe symbol '0'. To be precise: given a measure μ on R^V, A ⊂ V and x ∈ supp(μ), the point y defined by

y(t) = x(t) for t ∈ A, and y(t) = 0 for t ∈ A^c,

belongs to supp(μ). This means that if some symbols can be placed at some sites with positive probability, those symbols can be replaced by '0' and the probability is still positive.

Definition. A set S ⊂ R^V is called a topological Markov field with respect to a graph G if for all x, y ∈ S such that x = y on ∂C for some finite C ⊂ V, the point z ∈ R^V defined by

z = x on C ∪ ∂C, z = y on (C ∪ ∂C)^c,

is an element of S.

This means that if we have configurations which agree on the boundary of some finite set then we can paste one onto the other, i.e. outside the boundary we see y and inside it we see x.

Definition. A Markov random field on a graph G = (V, E) with alphabet R is a probability measure μ on (R^V, F) such that for all finite A, B ⊂ V with ∂A ⊂ B ⊂ A^c and all a ∈ R^A, b ∈ R^B satisfying μ([b, B]) > 0,

μ([a, A] | [b, B]) = μ([a, A] | [b|_∂A, ∂A]).

All this says is that the probability of seeing something on a finite set conditioned on its complement is the same as the probability of seeing it conditioned on its boundary.

Lemma 2.0.1. Suppose μ is a Markov random field on a graph G = (V, E) with alphabet R. Then supp(μ) is a topological Markov field.

Proof. Let C ⊂ V be finite. Consider x, y ∈ supp(μ) such that x = y on ∂C. Let z ∈ R^V be such that z = x on C ∪ ∂C and z = y on (C ∪ ∂C)^c. Consider a finite set D ⊂ V such that C ∪ ∂C ⊂ D. We will prove that for any such D, μ([z|_D, D]) > 0. This will prove that z ∈ supp(μ); since the choice of x, y and D is arbitrary, this is sufficient to prove that supp(μ) is a topological Markov field.

μ([z|_D, D]) = μ([z|_C, C] | [z|_{D-C}, D-C]) μ([z|_{D-C}, D-C])
            = μ([z|_C, C] | [z|_∂C, ∂C]) μ([z|_{D-C}, D-C])   (since μ is a Markov random field)
            = μ([x|_C, C] | [x|_∂C, ∂C]) μ([y|_{D-C}, D-C]) > 0.

To simplify notation we will sometimes write

μ(a b c) = μ([a, A] ∩ [b, B] ∩ [c, C])  and  μ(a | b) = μ([a, A] | [b, B]).

The following lemma gives an equivalent way of defining Markov random fields when V is finite. Two sets A, B ⊂ V are independent if A ∩ B = ∂A ∩ B = ∂B ∩ A = ∅, i.e. A ∩ B = ∅ and no edges connect vertices of A to vertices of B.

Lemma 2.0.2. Let μ be a probability measure on R^V where V is finite. Then μ is a Markov random field on the graph G = (V, E) if and only if for all independent A, B and all a, a' ∈ R^A, b, b' ∈ R^B, c ∈ R^C where C = (A ∪ B)^c,

μ(a b c) μ(a' b' c) = μ(a' b c) μ(a b' c).   (2.0.1)

The proof of this lemma can be found in [11].
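The condition of Lemma 2.0.2 is directly checkable on small examples. The following is a minimal sketch of ours (not from the thesis; all names are hypothetical): given a probability table on R^V for a small graph, it tests Equation (2.0.1) over independent singleton pairs, a special case of the full condition.

```python
from itertools import product

def is_mrf(prob, V, E, R):
    """Test the factorization (2.0.1) of Lemma 2.0.2 for singleton
    independent sets A = {u}, B = {w}; prob maps full configurations
    (tuples indexed by V) to probabilities."""
    adj = {v: set() for v in V}
    for (x, y) in E:
        adj[x].add(y); adj[y].add(x)
    for u in V:
        for w in V:
            if u >= w or w in adj[u]:   # need distinct, non-adjacent sites
                continue
            rest = [v for v in V if v not in (u, w)]
            for c in product(R, repeat=len(rest)):
                def p(a, b):
                    x = dict(zip(rest, c)); x[u], x[w] = a, b
                    return prob.get(tuple(x[v] for v in V), 0.0)
                for a, a2, b, b2 in product(R, repeat=4):
                    if abs(p(a, b) * p(a2, b2) - p(a2, b) * p(a, b2)) > 1e-12:
                        return False
    return True

# Toy usage: i.i.d. fair coins on the path 0-1-2 form a Markov random field.
V, E, R = [0, 1, 2], [(0, 1), (1, 2)], [0, 1]
prob = {cfg: 1 / 8 for cfg in product(R, repeat=3)}
print(is_mrf(prob, V, E, R))  # True
```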
Now we shall define measures with a nearest neighbour Gibbs potential. Motivated by topological dynamics, we will call an ordered pair (x, y) of points of R^V homoclinic if x = y on all but finitely many points of V. Suppose μ is a probability measure on R^V. For a graph G = (V, E), an induced subgraph is a tuple H = (V_1, E_1) where V_1 ⊂ V and E_1 = {(a, b) ∈ E | a, b ∈ V_1}. Cliques of a graph G are complete graphs which occur as induced subgraphs of G. Define T to be the set of all configurations on cliques with positive measure, i.e.

T = {c ∈ R^C | C is a clique and μ([c, C]) > 0}.

A nearest neighbour Gibbs potential is a function V: T -> R. A measure μ has a nearest neighbour Gibbs potential V if μ is a probability measure such that for any homoclinic pair (x, y) in supp(μ) and finite sets L, S ⊂ V such that x = y on L^c and L ∪ ∂L ⊂ S,

μ([x|_S, S]) / μ([y|_S, S]) = ∏_{C clique} e^{V(x|_C) - V(y|_C)}.

Lemma 2.0.3. Let μ be a probability measure on R^V. The following are equivalent:
1. μ has a nearest neighbour Gibbs potential V.
2. For every finite A, B ⊂ V satisfying ∂A ⊂ B ⊂ A^c and every x ∈ supp(μ),

μ([x|_A, A] | [x|_B, B]) = ( ∏_{C clique ⊂ A∪∂A} e^{V(x|_C)} ) / Z_{A, x|_B},

where Z_{A, x|_B} is a normalising factor depending on A and x|_B.

The proof is left to the reader. We will now state the Hammersley-Clifford theorem.

Theorem 2.0.4. Suppose μ is a probability measure on R^V such that supp(μ) has a safe symbol '0'. Then μ has a nearest neighbour Gibbs potential if and only if μ is a Markov random field.

Proof. We will first prove that if μ has a nearest neighbour Gibbs potential then it is a Markov random field. This part does not require the assumption on the support. Let A, B ⊂ V be finite with ∂A ⊂ B ⊂ A^c. Consider b ∈ R^B such that μ([b, B]) > 0, and let a ∈ R^A. We will first prove that

μ([a, A] | [b, B]) ≤ μ([a, A] | [b|_∂A, ∂A]) for all a ∈ R^A.

Since [a, A] ∩ [b, B] ⊂ [a, A] ∩ [b|_∂A, ∂A],

μ([a, A] | [b|_∂A, ∂A]) = 0 implies μ([a, A] | [b, B]) = 0.   (2.0.2)

Let us assume that μ([a, A] | [b, B]) > 0 and consider x ∈ supp(μ) ∩ ([a, A] ∩ [b, B]). Then

μ([a, A] | [b, B]) = μ([x|_A, A] | [x|_B, B])
 = ( ∏_{C clique ⊂ A∪∂A} e^{V(x|_C)} ) / Z_{A, x|_B}
 ≤ ( ∏_{C clique ⊂ A∪∂A} e^{V(x|_C)} ) / Z_{A, x|_∂A}   (by Equation (2.0.2))
 = μ([x|_A, A] | [x|_∂A, ∂A])
 = μ([a, A] | [b|_∂A, ∂A]).

Therefore μ([a, A] | [b, B]) ≤ μ([a, A] | [b|_∂A, ∂A]) for all a ∈ R^A. But

1 = Σ_{a ∈ R^A} μ([a, A] | [b, B]) ≤ Σ_{a ∈ R^A} μ([a, A] | [b|_∂A, ∂A]) = 1.

Hence μ([a, A] | [b, B]) = μ([a, A] | [b|_∂A, ∂A]) for all a ∈ R^A, and μ is a Markov random field.

Now assume that μ is a Markov random field such that supp(μ) has a safe symbol '0'. We want to prove that μ has a nearest neighbour Gibbs potential. We will first prove this when V is finite and then generalise to countably infinite V.

Suppose μ is a Markov random field. We will translate the question of whether there exists a Gibbs potential into a linear algebra problem. Consider a matrix A whose rows are indexed by elements of supp(μ) and whose columns are indexed by elements of T, with entries

A_{a,c} = 1 if a|_C = c, and 0 otherwise,

where a ∈ supp(μ) and c ∈ R^C for some clique C. Take the column vector b indexed by supp(μ) such that

b_a = log(μ(a)).

Then finding a nearest neighbour Gibbs potential with normalising constant Z = 1 is equivalent to solving the equation Ax = b. Note that the matrix might have countably many rows and columns; however, since the graph is finite, every row has finitely many non-zero entries.
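Before continuing, here is a toy numerical illustration of ours of this reformulation (the graph, measure and variable names are our own assumptions, not the thesis's): for a single edge with a fully supported Ising-type measure, we assemble A and b and solve Ax = b by least squares; a near-zero residual certifies that a potential exists.

```python
import numpy as np
from itertools import product

# Hypothetical instance: V = {0,1}, one edge, alphabet {0,1};
# the cliques are {0}, {1} and {0,1}.
R = (0, 1)
cliques = [(0,), (1,), (0, 1)]
configs = list(product(R, repeat=2))                 # supp(mu) = all of R^V here
w = np.array([np.exp(0.7 * x0 * x1) for x0, x1 in configs])
mu = w / w.sum()

# Columns of A: configurations on cliques; rows: points of supp(mu).
cols = [(C, c) for C in cliques for c in product(R, repeat=len(C))]
A = np.array([[1.0 if tuple(a[v] for v in C) == c else 0.0
               for (C, c) in cols] for a in configs])
b = np.log(mu)

x, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
print("residual:", np.linalg.norm(A @ x - b))        # ~0: a potential exists
```

The normalising constant is absorbed here into the single-site potential values, which is why Z = 1 can be assumed.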
It is proved in Chapter 7 that

Ax = b has a solution
 <=> (vA = 0 implies vb = 0) for all v with finitely many non-zero entries
 <=> ( Σ_{i=1}^k n_{a_i} A_{a_i,c} = 0 for all c ∈ T implies Σ_{i=1}^k n_{a_i} b_{a_i} = 0 ).   (2.0.3)

Consider A, B independent and finite, a, a' ∈ R^A, b, b' ∈ R^B, d ∈ R^D where D = (A ∪ B)^c. Let

{x} = [a, A] ∩ [b, B] ∩ [d, D]
{y} = [a', A] ∩ [b', B] ∩ [d, D]
{z} = [a', A] ∩ [b, B] ∩ [d, D]
{w} = [a, A] ∩ [b', B] ∩ [d, D].

Then by Equation (2.0.1), μ({x})μ({y}) = μ({z})μ({w}), and it follows that

A_{x,c} = A_{z,c} + A_{w,c} - A_{y,c} for all c ∈ T,  b_x = b_z + b_w - b_y.   (2.0.4)

Thus the Markov random field conditions correspond to instances of Equation (2.0.3) with k = 4, and the heart of the question of whether a Markov random field has a nearest neighbour Gibbs potential is whether the left null space of A is generated by the vectors corresponding to the Markov random field conditions.

Now we will prove the following 'break-up' lemma. It says that any configuration in the support can be broken into configurations whose non-zero symbols sit exactly on a clique. This is exactly where we need the support to have a safe symbol.

Lemma 2.0.5. Suppose μ is a Markov random field such that supp(μ) has a safe symbol '0'. Then for any b ∈ supp(μ) there exist b_1, b_2, ..., b_k ∈ supp(μ) such that for each i, b_i = 0 exactly on C_i^c for some clique C_i, and

A_{b,c} = Σ_{i=1}^k r_i A_{b_i,c} for all c ∈ T,  b_b = Σ_{i=1}^k r_i b_{b_i}.

Proof. We induct on the number of non-zero sites. If b has only one non-zero site there is nothing to prove. Suppose the result is true whenever the number of non-zero sites is at most k, and assume b has k+1 non-zero sites. If these k+1 sites form a clique we are done. Otherwise there exist independent sets A, B ⊂ V, each containing at least one non-zero site of b. Let C = (A ∪ B)^c, so that

{b} = [b|_A, A] ∩ [b|_B, B] ∩ [b|_C, C].

Let x, y and z ∈ supp(μ) be given by

{x} = [b|_A, A] ∩ [0, B] ∩ [b|_C, C]
{y} = [0, A] ∩ [b|_B, B] ∩ [b|_C, C]
{z} = [0, A] ∩ [0, B] ∩ [b|_C, C],

where [0, D] = {x ∈ R^V | x(i) = 0 for all i ∈ D} for any D ⊂ V. By Equations (2.0.4),

A_{b,c} = A_{x,c} + A_{y,c} - A_{z,c} for all c ∈ T,  b_b = b_x + b_y - b_z,

and x, y and z have fewer non-zero sites than b. By induction we are done.

Continuing the proof of Theorem (2.0.4), we can now assume that all the configurations are non-zero exactly on cliques; if not, we break them up using Lemma (2.0.5). The final component of the proof is the following observation. Suppose the a_i are distinct configurations, non-zero exactly on cliques C_i. Then

Σ_{i=1}^k n_{a_i} A_{a_i,c} = 0 for all c ∈ T implies n_{a_i} = 0 for all i.   (2.0.5)

This can be proved by induction on k. It is certainly true when k = 1. Suppose it is true for k < r and Σ_{i=1}^r n_{a_i} A_{a_i,c} = 0 for all c ∈ T. By renumbering we can assume that among the C_i, C_1 is maximal with respect to inclusion. Let c_1 = a_1|_{C_1}. Then A_{a_i,c_1} = 1 only for i = 1, hence n_{a_1} = 0. By induction we are done.

Therefore if Σ_{i=1}^r n_{a_i} A_{a_i,c} = 0 for all c ∈ T, then by Lemma (2.0.5) we can assume that the configurations are non-zero exactly on cliques, and the observation above proves that the n_{a_i} are 0 for all i. Hence the implication in Equation (2.0.3) holds, and the Markov random field has a nearest neighbour Gibbs potential.

Now we will prove the general case, where the graph is infinite. The idea is to reduce it to the finite case. Let 0_V ∈ supp(μ) be the point such that 0_V(v) = 0 for all v ∈ V. Let

H = {a ∈ supp(μ) | (a, 0_V) is a homoclinic pair}.

Let A be a matrix whose rows are indexed by pairs of elements of H and whose columns are indexed by elements of T. H and T are countable, and any two elements of H form a homoclinic pair.
Suppose c ∈ T is a configuration on C and a, b ∈ H. The matrix entries are given by

A_{(a,b),c} = 1 if a|_C = c and b|_C ≠ c; -1 if a|_C ≠ c and b|_C = c; 0 otherwise.

Note that since a and b are homoclinic and the graph is locally finite, every row has at most finitely many non-zero entries. Let b be a column vector indexed by pairs in H with entries

b_{(a,b)} = log( μ([a|_L, L]) / μ([b|_L, L]) ),

where a ≠ b exactly on the finite set S and S ∪ ∂S ⊂ L. The existence of a solution to the equation Ax = b is equivalent to the measure having a nearest neighbour Gibbs potential. This is true because of the following. Take any homoclinic pair (a, b) in supp(μ) such that a and b differ exactly on the finite set S. Then a', b' ∈ supp(μ) defined by

a' = a on S ∪ ∂S, 0 on (S ∪ ∂S)^c;  b' = b on S ∪ ∂S, 0 on (S ∪ ∂S)^c,

satisfy

log( μ([a|_L, L]) / μ([b|_L, L]) ) = log( μ([a'|_L, L]) / μ([b'|_L, L]) ).

Therefore the existence of a nearest neighbour Gibbs potential for pairs in H gives the existence of potentials for homoclinic pairs in general.

A solution to Ax = b exists if and only if vA = 0 implies vb = 0, where v has finitely many non-zero entries; the proof is given in Chapter 7 at the end. Note that A will now have countably many rows and countably many columns. Restating it:

Σ_{i=1}^k n_{(b_i,c_i)} A_{(b_i,c_i),c} = 0 for all c ∈ T implies Σ_{i=1}^k n_{(b_i,c_i)} b_{(b_i,c_i)} = 0.

Assume {(b_i, c_i)}_{i=1}^k is a set of homoclinic pairs satisfying Σ_{i=1}^k n_{(b_i,c_i)} A_{(b_i,c_i),c} = 0 for all c ∈ T. Since the pairs are homoclinic, each pair agrees outside some finite subset of V; let S be the union of these sets and L = S ∪ ∂S. Then A_{(b_i,c_i),c} = 0 for c ∈ R^C where C ⊂ V - L is a clique. Define a probability measure μ̃ on R^{L∪∂L}, where the graph structure is induced by G and the sigma-field is the power set, by

μ̃(a) = μ([a|_L, L] | [0|_∂L, ∂L]).

Then μ̃ is a Markov random field on the finite graph L ∪ ∂L satisfying the safe symbol requirement on its support. Let Ã be its corresponding matrix, whose rows are indexed by elements of supp(μ̃) and whose columns are indexed by the set T̃ of configurations on cliques, with entries

Ã_{a,c} = 1 if a|_C = c, and 0 otherwise,

for a ∈ supp(μ̃), c ∈ R^C and C a clique in L ∪ ∂L. Take the column vector b̃ indexed by supp(μ̃) such that b̃_a = log μ̃(a). Let b̃_i, c̃_i ∈ R^{L∪∂L} be such that b̃_i = b_i|_L and c̃_i = c_i|_L. Now Σ_{i=1}^k n_{(b_i,c_i)} A_{(b_i,c_i),c} = 0 for all c ∈ T; therefore

Σ_{i=1}^k n_{(b_i,c_i)} Ã_{b̃_i,c} - Σ_{i=1}^k n_{(b_i,c_i)} Ã_{c̃_i,c} = 0 for all c ∈ T̃.

By Lemma (2.0.5) we can assume that the configurations are non-zero exactly on cliques, and by Equation (2.0.5) we have

Σ_{i=1}^k n_{(b_i,c_i)} b̃_{b̃_i} - Σ_{i=1}^k n_{(b_i,c_i)} b̃_{c̃_i} = 0.

Now b_{(b_i,c_i)} = b̃_{b̃_i} - b̃_{c̃_i}, hence Σ_{i=1}^k n_{(b_i,c_i)} b_{(b_i,c_i)} = 0, and the proof is complete.

To summarise: we began with the case of a finite graph, breaking our configurations up until they were non-zero exactly on cliques. In the case of an infinite graph, we replaced configurations by configurations which are '0' outside a finite set, which reduced the problem to the finite case. We used the fact that the support of the measure has a safe symbol; in Chapter 5 we will see that the theorem fails otherwise. One deficiency of the proof is that it gives no clear direction for finding the potential. For this one can refer to [11], where the proof uses Möbius inversion. The reason we did not go with the seemingly easier proof is that we believe this setting is the correct one for generalising the result beyond the safe symbol case. Related work on finite graphs can be found in [1].
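For a fully supported measure, the Möbius-inversion route mentioned above gives an explicit formula for the canonical potential: phi_C(c) = Σ_{D ⊆ C} (-1)^{|C-D|} log μ(x_D), where x_D agrees with c on D and carries the safe symbol elsewhere; for a Markov random field, phi_C vanishes unless C is a clique. A minimal sketch of ours under these assumptions (the instance and helper names are hypothetical):

```python
import numpy as np
from itertools import product, combinations

V = [0, 1, 2]
E = {(0, 1), (1, 2)}                      # a path, so {0, 2} is not a clique
R = (0, 1)
configs = list(product(R, repeat=len(V)))
w = np.array([np.exp(0.5*x0*x1 + 0.8*x1*x2 + 0.3*(x0 + x1 + x2))
              for x0, x1, x2 in configs])
mu = dict(zip(configs, w / w.sum()))      # a fully supported Markov random field

def with_zeros(x, D):
    """x with every site outside D replaced by the safe symbol 0."""
    return tuple(x[v] if v in D else 0 for v in V)

def phi(x, C):
    """Canonical potential phi_C(x|_C) by Moebius inversion."""
    return sum((-1) ** (len(C) - k) * np.log(mu[with_zeros(x, set(D))])
               for k in range(len(C) + 1) for D in combinations(C, k))

x = (1, 1, 1)
subsets = [C for k in range(len(V) + 1) for C in combinations(V, k)]
print(np.isclose(sum(phi(x, C) for C in subsets), np.log(mu[x])))        # True
print(np.isclose(phi(x, (0, 2)), 0), np.isclose(phi(x, (0, 1, 2)), 0))   # True True
```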
Now we will consider stationary Markov random fields. Let Γ be a subgroup of the automorphism group of the graph G. Then Γ acts naturally on the configurations on the graph by

(ga)_v = a_{gv} for all g ∈ Γ and v ∈ V.

Thus it also acts on the Borel measurable sets, gA = {ga | a ∈ A}, and on Borel measures on R^V by (gμ)(A) = μ(gA). A measure μ is called stationary under the action of the group Γ if gμ = μ for all g ∈ Γ. A nearest neighbour Gibbs potential is called stationary under the action of Γ if V(c) = V(gc). We will require the following result in the next chapter.

Theorem 2.0.6. Let μ be a Markov random field on a graph G = (V, E) with alphabet R that is stationary with respect to a group Γ and such that supp(μ) has a safe symbol '0'. Then the measure has a nearest neighbour Gibbs potential stationary with respect to Γ.

Proof. We will consider a special class of potentials for the measure and then prove that such a potential is unique and invariant under the action of Γ.

Lemma 2.0.7. Suppose μ is a Markov random field on a graph G such that supp(μ) has a safe symbol '0'. Then there exists a unique nearest neighbour Gibbs potential V: T -> R such that V(c) = 0 whenever c is a configuration on a clique C such that c(i) = 0 for some i ∈ C.

All this says is that the potential is 0 when any of the symbols on the clique concerned is '0'. We will prove this lemma and the theorem for the case where V is finite; the case where V is infinite requires adjustments to the proof similar to those in Theorem 2.0.4.

We begin by making some changes to the matrices A and b. Let T_1 ⊂ T be defined as

T_1 = {c ∈ R^C | C is a clique and c(i) = 0 for some i ∈ C}.

Consider a matrix A whose rows are indexed by elements of supp(μ) ∪ T_1 and whose columns are indexed by elements of T. If a ∈ supp(μ) and c ∈ R^C for some clique C, then

A_{a,c} = 1 if a|_C = c, and 0 otherwise.

If c ∈ T_1 and c' ∈ R^C for some clique C, then A_{c,c'} = 1 if c = c' and 0 otherwise. Take the column vector b indexed by supp(μ) ∪ T_1 defined by

b_a = log(μ(a)) - log(μ(0_V)) for a ∈ supp(μ), and b_a = 0 for a ∈ T_1.

Solving the equation Ax = b is equivalent to finding a nearest neighbour Gibbs potential satisfying the requirements of Lemma 2.0.7 with normalising constant 1/μ(0_V). As before,

Ax = b has a solution
 <=> (vA = 0 implies vb = 0) for all v with finitely many non-zero entries
 <=> ( Σ_{i=1}^k n_{a_i} A_{a_i,c} = 0 for all c ∈ T implies Σ_{i=1}^k n_{a_i} b_{a_i} = 0 ).

Suppose there exist distinct configurations {a_i}_{i=1}^l ⊂ supp(μ), {c_i}_{i=1}^k ⊂ T_1 and reals {n_{a_i}}_{i=1}^l, {n_{c_i}}_{i=1}^k satisfying

Σ_{i=1}^l n_{a_i} A_{a_i,c} + Σ_{i=1}^k n_{c_i} A_{c_i,c} = 0 for all c ∈ T.

We want to prove that

Σ_{i=1}^l n_{a_i} b_{a_i} + Σ_{i=1}^k n_{c_i} b_{c_i} = Σ_{i=1}^l n_{a_i} b_{a_i} = 0.   (2.0.6)

By the 'break-up' lemma (Lemma (2.0.5)), we can assume that the configurations {a_i}_{i=1}^l are non-zero exactly on cliques C_i. We will prove by induction on l that Σ_{i=1}^l n_{a_i} b_{a_i} = 0.

Suppose l = 1. For all c ∈ T - T_1,

n_{a_1} A_{a_1,c} + Σ_{i=1}^k n_{c_i} A_{c_i,c} = 0,

which implies n_{a_1} A_{a_1,c} = 0 for all c ∈ T - T_1, since the rows indexed by elements of T_1 have zero entries in those columns. This implies that either n_{a_1} = 0 or a_1 = 0_V; in either case n_{a_1} b_{a_1} = 0.

Now assume the result for l ≤ t-1 and

Σ_{i=1}^t n_{a_i} A_{a_i,c} + Σ_{i=1}^k n_{c_i} A_{c_i,c} = 0 for all c ∈ T.

By renumbering, we can assume that among the C_i, C_1 is maximal. If C_1 is empty then C_1 = C_2 = ... = C_t = ∅, so a_1 = a_2 = ... = a_t = 0_V, which implies b_{a_i} = 0 for all i. If C_1 is not empty, note that c_0 = a_1|_{C_1} ∈ T - T_1; then A_{a_i,c_0} = A_{c_j,c_0} = 0 for all i > 1 and all j, while A_{a_1,c_0} = 1. Therefore n_{a_1} = 0. By the induction hypothesis the proof of Equation (2.0.6) is complete.
We will now prove, by induction on the number of elements of a clique, that the measure completely determines the potential described in Lemma (2.0.7). Consider a clique C of size 1 and a non-zero configuration c on it. Take the configuration x ∈ R^V given by x = c on C and 0 otherwise. Then V(c) is uniquely determined as log(μ(x)) - log(μ(0_V)). Suppose the potential values of configurations on cliques of size less than r are known. Now take a clique C of size r and a configuration c ∈ R^C with every symbol non-zero, and take the configuration x ∈ R^V with x = c on C and 0 otherwise. Then

V(c) = log(μ(x)) - log(μ(0_V)) - Σ_{D ⊊ C} V(x|_D),

so V(c) is completely determined by the potential on configurations of size less than r (since any induced subgraph of a clique is also a clique). By induction, the measure completely determines the potential given in Lemma (2.0.7).

Suppose the potential V of Lemma (2.0.7) is determined by the Markov random field μ. Consider the potential gV for some g ∈ Γ. It is a potential for gμ. But since the measure is stationary, gμ = μ, so gV is a potential for μ, and it also satisfies the conditions stated in Lemma (2.0.7). Since the potential satisfying those conditions is unique, gV = V. Hence the potential is stationary with respect to Γ.

Chapter 3: Symbolic Dynamics

This chapter shall follow the treatment given in [10]. Let R be a finite set called the alphabet. The full shift on alphabet R is the dynamical system (R^Z, σ). The topology on R^Z is the product of the discrete topology on R, and σ: R^Z -> R^Z is given by the map σ(x)_i = x_{i+1}, where x_i = x(i). In fact R^Z is a compact metrizable space. The map σ, called the shift map, is a homeomorphism of the space R^Z. Let G be the group {σ^i | i ∈ Z}.

A shift space is a dynamical system (X, σ) where X ⊂ R^Z is closed and invariant under G. Since X is invariant under σ, the shift is a homeomorphism of the space X. An equivalent definition is given by forbidden patterns, as follows. Let F ⊂ R^*, where R^* is the set of all finite words in R. Consider the space

X_F = {x ∈ R^Z | no subword of x belongs to F}.

It can be checked that a subset X ⊂ R^Z is a shift space if and only if there exists F ⊂ R^* such that X = X_F. A shift space X is called a shift of finite type if X = X_F for some finite set F. A good example to keep in mind is the golden mean shift, the shift space over the alphabet {0, 1} given by X_{{11}}, that is, all bi-infinite sequences in {0, 1} such that no two 1s are adjacent.

A sliding block code is a continuous map φ from one shift space X to another shift space Y which commutes with the shift map, that is, φ ∘ σ = σ ∘ φ. A sliding block code is called a conjugacy if it is a bijection. This also guarantees that the inverse is a sliding block code, since a bijective continuous map from one compact metric space to another is a homeomorphism. Conjugacy is the notion of "sameness" for shift spaces.

To relate all this to the framework of Chapter 2, Z can be made into a graph (Z, E) where E = {(n, n+1) | n ∈ Z}. Every point in a shift space can thus be considered as a configuration on the graph Z with alphabet R. We shall cheat a little by calling Z both the graph and the vertex set.

The language of a shift space X, denoted by B(X), is defined as the set of all finite words which occur in elements of X. Define B_n(X) = {w ∈ B(X) | the length of w is n}. A shift space is called irreducible if for all a, b ∈ B(X) there exists w ∈ B(X) such that awb ∈ B(X) (w might possibly be the empty word).
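The golden mean shift makes these notions concrete. A small sketch of ours (not from the thesis): its language is governed by the adjacency matrix of allowed 2-blocks, and |B_n(X)| grows like the Fibonacci numbers, whence the name.

```python
import numpy as np

# Golden mean shift X_{11}: words over {0,1} with no "11".
T = np.array([[1, 1],    # T[a][b] = 1 iff the 2-block "ab" is allowed
              [1, 0]])

def allowed(word):
    return all(T[a][b] for a, b in zip(word, word[1:]))

def count_words(n):
    # number of words in B_n(X): sum of the entries of T^(n-1)
    return int(np.linalg.matrix_power(T, n - 1).sum())

print(allowed((0, 1, 0, 1)), allowed((0, 1, 1, 0)))   # True False
print([count_words(n) for n in range(1, 8)])
# [2, 3, 5, 8, 13, 21, 34] -- Fibonacci numbers
```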
A shift space X is called a nearest neighbour shift of finite type if ab, bc ∈ B(X) with b ∈ R implies abc ∈ B(X); that is, if the last symbol of one finite word agrees with the first symbol of another, we can paste them together. This is a shift of finite type since X = X_F where F = R^2 - B_2(X). The golden mean shift is an irreducible nearest neighbour shift of finite type; it is irreducible because if w_1, w_2 ∈ B(X_{{11}}) then w_1 0 w_2 ∈ B(X_{{11}}). The study of nearest neighbour shifts of finite type was motivated by the fact that the supports of stationary Markov chains are nearest neighbour shifts of finite type; in fact they were initially called intrinsic Markov chains [12].

Definition. A shift space X is called non-wandering if for any non-empty open set U ⊂ X there exists n ∈ N such that σ^n(U) ∩ U ≠ ∅.

We need the following result from ergodic theory in the following chapters.

Lemma 3.0.8. Let μ be a stationary probability measure such that supp(μ) = X is a nearest neighbour shift of finite type. Then X is a finite union of disjoint irreducible nearest neighbour shifts of finite type.

Proof. Firstly, the non-wandering property of the support is a consequence of the Poincaré recurrence theorem [8]. Let U be a non-empty open set in X; then μ(U) > 0. If there were no n ∈ N with σ^n(U) ∩ U ≠ ∅, the sets {σ^i(U) | i ∈ N} would be disjoint and all of the same measure, and by stationarity of the measure μ(X) ≥ Σ_i μ(σ^i(U)) > n μ(U) for all n, a contradiction. Therefore there exists n ∈ N such that σ^n(U) ∩ U ≠ ∅.

Consider the relation ~ on R defined by: a ~ b if there exists x ∈ X satisfying x_i = a, x_j = b for some i < j. We claim that ~ is an equivalence relation. By the non-wandering property, a ~ a for all a ∈ R. Let a ~ b; then there exists w such that awb ∈ B(X). By the non-wandering property there exists u such that awbuawb ∈ B(X), hence b ~ a. Let a ~ b and b ~ c; then, since X is a nearest neighbour shift of finite type, a ~ c. Hence ~ is an equivalence relation.

~ partitions the alphabet into R_1, R_2, ..., R_n. Define

X_i = {x ∈ X | x_0 ∈ R_i} = X ∩ R_i^Z.

Then the X_i are shift spaces which partition X, and they are irreducible nearest neighbour shifts of finite type. To prove that the X_i are irreducible, let a, b ∈ B(X_i) for some i, where a ends with the symbol u and b begins with the symbol w. Then u ~ w, so there exists d such that udw ∈ B(X ∩ R_i^Z). Since X is a nearest neighbour shift of finite type, adb ∈ B(X ∩ R_i^Z). Therefore X_i is irreducible.

The period of an element x ∈ X is given by period(x) = min{i ∈ N | σ^i(x) = x}. Suppose an irreducible shift of finite type X is given. The period of X is defined as the greatest common divisor of the periods of the periodic elements of X. It can be proved that periodic points are dense in every irreducible shift of finite type [10], so the definition makes sense. For example, the period of X_{{11}} is 1, because it has an element of period 1, namely 0^∞. An alternative and equivalent description is the following: suppose R_1, R_2, ..., R_p partition R such that if x ∈ X and x_1 ∈ R_1 then x_i ∈ R_j, where j = i - kp ∈ [1, p] for some k ∈ Z. The largest such p is the period of X.

In the full shift, any symbol can appear after a given symbol. Irreducible shifts of finite type do not have the same property, but a related one. The offset of an irreducible nearest neighbour shift of finite type X with period p is the smallest natural number t such that for any a ∈ R_i, b ∈ R_j and T > t, there exists x^T ∈ X such that x^T_1 = a and x^T_{Tp+j-i+1} = b.
It can be proved that every irreducible nearest neighbour shift of finite type has a finite offset [10]. For example, the offset of the golden mean shift is 1: for any a, b ∈ {0, 1} we have a0b ∈ B(X_{{11}}), whereas for a = b = 1 the word ab ∉ B(X_{{11}}).

The entire framework can be generalised from Z to Z^d. The full d-dimensional shift on alphabet R is the dynamical system (R^{Z^d}, σ_1, σ_2, ..., σ_d), where R^{Z^d} has the product topology over the discrete topology on R and the shift maps are defined by

(σ_i(x))_v = x_{v+e_i},

where v ∈ Z^d and e_i is the vector with ith coordinate 1 and all other coordinates 0. As before, the σ_i are homeomorphisms of the space R^{Z^d}. Let G be the group generated by σ_1, σ_2, ..., σ_d. A d-dimensional shift space is a dynamical system (X, σ_1, σ_2, ..., σ_d) where X ⊂ R^{Z^d} is closed and invariant under G. Let

R^* = ⋃ {R^W | W is a finite subset of Z^d}.

For any F ⊂ R^* let

X_F = {x ∈ R^{Z^d} | no translate of a subpattern of x belongs to F}.

It can be checked that X is a shift space if and only if there exists F ⊂ R^* such that X = X_F. A shift space is called a shift of finite type if and only if X = X_F for some finite F ⊂ R^*, and a nearest neighbour shift of finite type if X = X_F for some F ⊂ R^* consisting of patterns on edges of Z^d. A sliding block code is a map φ: X -> Y between d-dimensional shift spaces such that φ ∘ σ_i = σ_i ∘ φ. The map is called a conjugacy if it is bijective; two shift spaces are called conjugate if there exists a conjugacy between them. It can be shown that any shift of finite type is conjugate to a nearest neighbour shift of finite type.

As before, we can look at R^{Z^d} as the space of configurations on the graph Z^d, where there is an edge from x to y if x = y ± e_i for some i. Again, the support of any probability measure on R^{Z^d} stationary with respect to G is a shift space. Define the language of a shift space X on a set A ⊂ Z^d as

B_A(X) = {x|_A | x ∈ X}.

Suppose a stationary probability measure μ on R^{Z^d} is given. Then the following can be proved: for any finite A ⊂ Z^d,

B_A(supp(μ)) = {w ∈ R^A | μ([w, A]) > 0}.
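Returning to the one-dimensional notions for a moment: the period of an irreducible nearest neighbour shift of finite type is computable directly from its allowed-2-blocks matrix, since it equals the gcd of the lengths of cycles in the transition graph. A small sketch of ours (not from the thesis):

```python
import numpy as np
from math import gcd

def period(T):
    """Period of an irreducible 0/1 transition matrix: the gcd of the
    lengths of closed walks of length <= n. Every simple cycle is such a
    walk, and every closed-walk length is a sum of simple-cycle lengths,
    so this gcd equals the gcd of all cycle lengths."""
    n, p = len(T), 0
    Tk = np.eye(n, dtype=int)
    for k in range(1, n + 1):
        Tk = (Tk @ T > 0).astype(int)
        if Tk.trace() > 0:
            p = gcd(p, k)
    return p

golden = np.array([[1, 1], [1, 0]])       # golden mean shift: period 1
alternating = np.array([[0, 1], [1, 0]])  # two strictly alternating symbols
print(period(golden), period(alternating))   # 1 2
```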
Chapter 4: Markov Random Fields in 1 Dimension

Throughout this chapter, μ is a stationary Borel probability measure on the space R^Z and the alphabet R is finite. A probability measure μ is called a Markov chain if

μ([x_0, 0] | [x_{-1}x_{-2}...x_{-r}, {-1, -2, ..., -r}]) = μ([x_0, 0] | [x_{-1}, -1]).

We prove the following result in this chapter.

Theorem 4.0.9. The following are equivalent:
1. μ is a stationary Markov random field.
2. μ is a stationary Markov chain.
3. μ is a stationary measure which has a stationary nearest neighbour Gibbs potential.

The following shorthand notation will often be used:

x_{i_1} x_{i_2} ... x_{i_r} = [x_{i_1}, x_{i_2}, ..., x_{i_r}, {i_1, i_2, ..., i_r}].

The result is not true if R is not finite [10].

Proof. By Lemma (2.0.3), (3) implies (1). It can easily be checked that (2) implies (3). We need to prove that (1) implies (2).

We first consider the case where supp(μ) = R^Z. By Theorem (2.0.6) the measure has a stationary nearest neighbour Gibbs potential V. If the measure has the following 'mixing' condition, namely that for all x_0, x_{-1}, ..., x_{-r} ∈ R,

lim_{n->∞} μ(x_0 | x_{-1}x_{-2}...x_{-r}, (x_n = a)) exists and is independent of a ∈ R,

then

μ(x_0 | x_{-1}x_{-2}...x_{-r})
 = lim_{n->∞} Σ_{a∈R} μ(x_0 x_{-1}...x_{-r}, x_n = a) / Σ_{a∈R} μ(x_{-1}...x_{-r}, x_n = a)
 = lim_{n->∞} μ(x_0 x_{-1}...x_{-r}, x_n = a) / μ(x_{-1}x_{-2}...x_{-r}, x_n = a)   (choosing any a ∈ R)
 = lim_{n->∞} μ(x_0 | x_{-1}, x_n = a)   (since μ is a Markov random field)
 = μ(x_0 | x_{-1})   (by the assumption above).

Therefore we need to prove that lim_{n->∞} μ(x_0 | x_{-1}x_{-2}...x_{-r}, (x_n = a)) exists and is independent of a ∈ R. We require the following version of the Perron-Frobenius theorem [6].

Theorem. Suppose T is an n x n matrix with all entries positive. Then there exist λ > 0 and vectors v, w with all components positive such that

lim_{n->∞} (T^n)_{i,j} / (λ^n v_i w_j) = 1.

Let T be the matrix whose rows and columns are indexed by elements of the alphabet R, with T_{a,b} = e^{V([ab, {0,1}])}, and let λ > 0 and v, w ∈ R^{|R|} be such that lim_{n->∞} (T^n)_{i,j} / (λ^n v_i w_j) = 1. Then

lim_{n->∞} μ(x_0 | x_{-1}x_{-2}...x_{-r}, x_n = a) = lim_{n->∞} μ(x_0 | x_{-1}, x_n = a)
 = lim_{n->∞} μ(x_{-1}x_0, x_n = a) / μ(x_{-1}, x_n = a)
 = lim_{n->∞} [ Σ_{x_1...x_{n-1} ∈ R^{n-1}} ∏_{i=-1}^{n-1} e^{V(x_i x_{i+1})} ] / [ Σ_{x_0'...x_{n-1}' ∈ R^n} ∏_{i=-1}^{n-1} e^{V(x_i' x_{i+1}')} ]
 = lim_{n->∞} e^{V(x_{-1}x_0)} (T^n)_{x_0,a} / (T^{n+1})_{x_{-1},a}
 = e^{V(x_{-1}x_0)} v_{x_0} w_a / (λ v_{x_{-1}} w_a)
 = e^{V(x_{-1}x_0)} v_{x_0} / (λ v_{x_{-1}}),

where x_{-1}' = x_{-1} and x_n' = x_n. Thus if μ is a stationary Markov random field such that supp(μ) = R^Z, then μ is a stationary Markov chain.
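The limit just computed is the transition probability of a Markov chain obtained by "stochasticizing" T. A minimal numerical sketch of ours of this standard construction: take the Perron pair (λ, v) of a positive transfer matrix and form P(a, b) = T_{a,b} v_b / (λ v_a), which is stochastic.

```python
import numpy as np

# Stochasticization of a positive transfer matrix T[a, b] = exp(V(ab)).
rng = np.random.default_rng(1)
Vpot = rng.normal(size=(3, 3))          # an arbitrary nearest neighbour potential
T = np.exp(Vpot)

eigvals, eigvecs = np.linalg.eig(T)
k = np.argmax(eigvals.real)             # Perron eigenvalue: largest, simple, real
lam = eigvals[k].real
v = np.abs(eigvecs[:, k].real)          # Perron eigenvector, strictly positive

P = T * v[None, :] / (lam * v[:, None])
print(np.allclose(P.sum(axis=1), 1.0))  # True: rows sum to 1, since T v = lam v
```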
Now we will consider the general case. Suppose μ is a stationary Markov random field. We first prove that supp(μ) is a nearest neighbour shift of finite type. Let μ(a_1 a_2...a_n), μ(a_n a_{n+1}...a_{n+m}) > 0; we want to prove that μ(a_1 a_2...a_{n+m}) > 0. Let μ̃ be the independent product μ x μ, i.e. the probability measure on (R x R)^Z such that for all c_i, d_i ∈ R,

μ̃( (c_1,d_1)(c_2,d_2)...(c_k,d_k) ) = μ(c_1 c_2...c_k) μ(d_1 d_2...d_k).

It can be checked that μ̃ is a stationary probability measure. We can choose e_1...e_m and f_1...f_{n-1} in R such that

μ(a_1 a_2...a_n e_1 e_2...e_m) μ(f_1 f_2...f_{n-1} a_n...a_{m+n}) > 0.

By stationarity of μ̃, its support has the non-wandering property, so we can choose g_1...g_r and h_1...h_r in R such that

μ(a_1...a_n e_1...e_m g_1...g_r a_1...a_n e_1...e_m) > 0,
μ(f_1...f_{n-1} a_n...a_{m+n} h_1...h_r f_1...f_{n-1} a_n...a_{m+n}) > 0.

By the first equation, μ([a_1 a_2...a_n, {1, 2, ..., n}] ∩ [a_n, 2n+m+r]) > 0, and by the second,

μ([a_n a_{n+1}...a_{m+n} h_1...h_r f_1...f_{n-1} a_n, {n, n+1, ..., 2n+m+r}]) > 0.

Since μ is a stationary Markov random field, by Lemma (2.0.1) we get

μ([a_1 a_2...a_n a_{n+1}...a_{m+n} h_1...h_r f_1...f_{n-1} a_n, {1, 2, ..., 2n+m+r}]) > 0.

Therefore μ(a_1 a_2...a_{m+n}) > 0. Thus supp(μ) is a nearest neighbour shift of finite type and consequently, by Lemma 3.0.8, it is a union of finitely many disjoint irreducible nearest neighbour shifts of finite type. We can therefore assume that supp(μ) is an irreducible nearest neighbour shift of finite type X with period p and offset t. Let A_1, A_2, ..., A_p be a partition of R such that if x ∈ X and x_1 ∈ A_1 then x_i ∈ A_i (indices taken mod p), as described in Chapter 3.

Define, for every r ∈ N and every i,

B_r^i(X) = {a_1 a_2...a_r ∈ B_r(X) | a_1 ∈ A_i}.

Let x_0 ∈ R, r ∈ N, i ∈ N large, and x_{-r}...x_{-2}x_{-1} ∈ B_r^1(X). Then

μ(x_0 | x_{-1}x_{-2}...x_{-r})
 = Σ_{a_1 a_2...a_r ∈ B_r^1(X)} μ( (x_{itp-r} = a_1)...(x_{itp-1} = a_r), x_0 | x_{-1}x_{-2}...x_{-r} )
 = Σ μ(x_0 | x_{-1}x_{-2}...x_{-r} x_{itp-r}...x_{itp-1}) μ(x_{itp-r}...x_{itp-1} | x_{-1}x_{-2}...x_{-r})   (by Bayes' theorem)
 = Σ μ(x_0 | x_{-1} x_{itp-r}) μ(x_{itp-r}...x_{itp-1} | x_{-1}x_{-2}...x_{-r})   (by the Markov random field property).

To understand this expression, consider the map φ: X -> {B_r^1(X)}^Z given by

(φ(x))_i = (x_{itp-r-j+1} ... x_{itp-1-j+1}) if x_{-r} ∈ A_j.

Then φ is surjective and continuous. Let μ' be the probability measure on {B_r^1(X)}^Z given by the push-forward of μ by φ, i.e. μ'(U) = μ(φ^{-1}(U)). It can be checked that μ' is a stationary Markov random field such that supp(μ') = {B_r^1(X)}^Z. Therefore μ' is a stationary Markov chain. Let π be its stationary distribution. Then the following is true [4]:

lim_{i->∞} μ(x_{itp-r}...x_{itp-1} | x_{-1}x_{-2}...x_{-r}) = lim_{i->∞} μ'(x_{itp-r}...x_{itp-1} | x_{-1}x_{-2}...x_{-r}) = π(x_{itp-r}...x_{itp-1}).

Take a sequence {i_k}_{k=1}^∞ ⊂ N such that lim_{k->∞} μ(x_0 | x_{-1} (x_{i_k tp-r} = a_1)) exists for all a_1 ∈ A_1. Then

μ(x_0 | x_{-1}x_{-2}...x_{-r}) = lim_{k->∞} Σ μ(x_0 | x_{-1} x_{i_k tp-r}) μ(x_{i_k tp-r}...x_{i_k tp-1} | x_{-1}x_{-2}...x_{-r})
 = Σ π(a_1 a_2...a_r) lim_{k->∞} μ(x_0 | x_{-1} x_{i_k tp-r}),

which is independent of x_{-2}, x_{-3}, ..., x_{-r}. Therefore μ(x_0 | x_{-1}x_{-2}...x_{-r}) = μ(x_0 | x_{-1}), which proves that μ is a stationary Markov chain.

The theorem does not hold if the measure is not assumed to be stationary. Consider any probability measure μ such that supp(μ) is the shift space consisting of the points 0^∞, 0^∞1^∞, 1^∞, 1^∞02^∞ and 2^∞ together with all their shifts. To clarify the notation, 1^∞02^∞ refers to the point x such that x_i = 1 for i < 0, x_i = 0 for i = 0 and x_i = 2 for i > 0. The shift space is countable. It can be checked that any such measure is a Markov random field, but the measure cannot be a Markov chain of any order, because the support is a shift space which is not a shift of finite type. This is an elaboration of an example given in [3].

Chapter 5: Markov Random Fields in 2 Dimensions

In the last chapter we saw that stationary Markov random fields in 1 dimension are measures with nearest neighbour Gibbs potentials. This chapter intends to show how this result fails in 2 dimensions. One of the key observations in the proof of Theorem 4.0.9 was that the support of any stationary Markov random field in 1 dimension is a nearest neighbour shift of finite type. Our first example is a stationary Markov random field in 2 dimensions whose support is not a nearest neighbour shift of finite type; in fact it is not a shift of finite type at all.

Let ν be some stationary probability measure on {0,1}^Z. Consider the probability measure μ on {0,1}^{Z^2} such that, for all finite A and a ∈ {0,1}^A,

μ([a, A]) = 0 if a(m,n) ≠ a(m,n+1) for some m and n, and otherwise μ([a, A]) = ν([b, B]),

where B = {x | there exists y such that (x,y) ∈ A} and b(t) = a(t,m) whenever (t,m) ∈ A. The measure constrains the symbols to remain constant vertically and to behave in accordance with ν horizontally. Then

μ([a, A] | [b, B]) = 1 if a(i,j) = b(i,k) whenever (i,j) ∈ A and (i,k) ∈ B, and 0 otherwise,

provided ∂A ⊂ B ⊂ A^c and μ([b, B]) > 0. That is, given a boundary configuration with positive measure, there is a single way to fill it in so as to remain in the support of μ. It follows that

μ([a, A] | [b, B]) = μ([a, A] | [b|_∂A, ∂A]).

This proves that μ is a stationary Markov random field.
If we choose ν such that supp(ν) is not a nearest neighbour shift of finite type, then μ is a stationary Markov random field whose support is not a nearest neighbour shift of finite type. There are many such examples.

We will now give an example showing that the equivalence of (1) and (3) in Theorem (4.0.9) fails in 2 dimensions. We take an example on a finite graph given in [11] and extend it to the Z^2 lattice. Recall the characterisation of Markov random fields on finite graphs from Equation (2.0.1): μ is a Markov random field on a finite graph G = (V, E) with alphabet R if and only if for all independent A, B and all a, a' ∈ R^A, b, b' ∈ R^B, c ∈ R^C where C = (A ∪ B)^c,

μ(a b c) μ(a' b' c) = μ(a b' c) μ(a' b c).

Consider G = (V, E) with V = {0, 1, 2, ..., 5} and E given by Figure 5.1.

Figure 5.1: G

The alphabet is R = {a, b}. Consider the configurations given in Figure 5.2. It can be checked that any pair of the configurations H, I, J, K, L and M differ on a connected subset of the vertices. Consider any probability measure μ on R^V such that

supp(μ) = {H, I, J, K, L, M}   (5.0.1)

and

μ({H}) μ({J}) μ({L}) ≠ μ({I}) μ({K}) μ({M}).   (5.0.2)

We will use Equation (5.0.1) to prove that μ is a Markov random field and Equation (5.0.2) to prove that it does not have a nearest neighbour Gibbs potential.

Figure 5.2: The Configurations

Since any pair of the configurations H, I, J, K, L and M differ on a connected subset of the vertices, the measure μ is a Markov random field. To see this, take two independent sets A, B ⊂ V and let C = (A ∪ B)^c. Consider configurations x, x' ∈ R^A, y, y' ∈ R^B and z ∈ R^C. Then (x y z), (x' y' z) ∈ supp(μ) implies x = x' or y = y'. Therefore

μ(x y z) μ(x' y' z) = μ(x' y z) μ(x y' z),

so the measure is a Markov random field. Suppose the measure did have a nearest neighbour Gibbs potential V. Then

μ({H}) μ({J}) μ({L}) = ∏_{C cliques in G} e^{V(H|_C)} e^{V(J|_C)} e^{V(L|_C)}
 = ∏_{C cliques in G} e^{V(I|_C)} e^{V(K|_C)} e^{V(M|_C)}
 = μ({I}) μ({K}) μ({M}),

since H, J and L together have the same collection of configurations on every clique as I, K and M. But since we have chosen a measure such that μ({H})μ({J})μ({L}) ≠ μ({I})μ({K})μ({M}), μ has no nearest neighbour Gibbs potential.

We will transport this example to one on the Z^2 lattice by stretching out edges and introducing some new symbols. Consider the graph G' with vertices V' = {1, 2, 3, ..., 25} and edges as shown in Figure 5.3.

Figure 5.3: G'

The new alphabet shall be R' = (R ∪ (R x R) ∪ {f}) x {1, 2, 3, ..., 25}. We will construct configurations H', I', J', K', L' and M' corresponding to the configurations H, I, J, K, L and M given before. The graph G' is identified with the graph G as in Figure 5.4.

Figure 5.4: Correspondence between G and G'

The darkened edges act like wires carrying information between the marked vertices. For the configurations in the support, the symbol on vertex 11 is always f, and the vertices 3, 7, 16, 24, 13 and 14 take symbols as on 0, 1, 3, 2, 4 and 5 respectively. The darkened edges carry information about the symbols from one end to the other. For example, look at Figure 5.5 to see how H' corresponds to H.

Figure 5.5: Correspondence between H and H'

To illustrate how the darkened edges act like wires, note that in H vertex 0 has symbol b and vertex 4 has symbol a. Consequently, in H' vertex 3 has symbol b and vertex 13 has symbol a. Going clockwise, vertex 13 comes before vertex 3, so the vertices in between have symbols (a, b).
As in the last example, we consider a probability measure μ' on R'^{V'} such that

μ'({H'}) μ'({J'}) μ'({L'}) ≠ μ'({I'}) μ'({K'}) μ'({M'})

and supp(μ') = {H', J', L', I', K', M'}. For the same reasons as before, the measure is a Markov random field which does not have a nearest neighbour Gibbs potential.

Now divide the Z^2 lattice into disjoint 5x5 grids. There are 25 ways of doing this; choose the one in which the origin is the upper-left corner of one of the grids. Now place the configurations H', I', J', K', L', M' independently on the grids with probability μ'. This gives a measure μ̃ on R'^{Z^2}. Consider the measure ν on R'^{Z^2} given by

ν(U) = (1/25) Σ_{0≤i,j≤4} μ̃(σ_1^i σ_2^j (U)) for any measurable set U.

Proposition 5.0.10. ν is a stationary Markov random field that does not have a nearest neighbour Gibbs potential.

Note that in this example supp(ν) is a shift of finite type but not a nearest neighbour shift of finite type.

Proof. We will first prove that the measure is stationary. Take any cylinder set [a, A] such that μ̃([a, A]) > 0, where a ∈ (R')^A and A ⊂ Z^2 is finite. While setting up the configurations, we began by partitioning Z^2 into 5x5 blocks; let the partitioning sets be denoted by {T_i}_{i=1}^∞. This also gives a partition of A, say {A_i}_{i=1}^n. Since the configurations are placed independently on the grids with probability μ' on the 5x5 boxes,

μ̃([a, A]) = μ̃( ⋂_{i=1}^n [a|_{A_i}, A_i] ) = ∏_{i=1}^n μ̃([a|_{A_i}, A_i]) = ∏_{i=1}^n μ̃(σ_k^5([a|_{A_i}, A_i])) = μ̃(σ_k^5([a, A])) = μ̃(σ_k^{-5}([a, A])) for k = 1, 2.

Therefore, for any cylinder set [a, A] ⊂ (R')^{Z^2},

μ̃([a, A]) = μ̃(σ_1^5([a, A])) = μ̃(σ_2^5([a, A])).

Since cylinder sets generate the Borel sigma-algebra of R'^{Z^2}, for any measurable set U,

μ̃(U) = μ̃(σ_1^5(U)) = μ̃(σ_2^5(U)).

Therefore, for any measurable set U ⊂ R'^{Z^2},

ν(σ_1(U)) = (1/25) Σ_{0≤i,j≤4} μ̃(σ_1^i σ_2^j(σ_1(U))) = (1/25) Σ_{0≤i,j≤4} μ̃(σ_1^i σ_2^j(U)) = ν(U),

and similarly ν(σ_2(U)) = ν(U). Therefore ν is stationary.

Now we will prove that ν is a Markov random field. Suppose A is some non-empty finite subset of Z^2. Let x ∈ supp(ν) and let a be some configuration on A. Let B be another finite set such that ∂A ⊂ B ⊂ A^c. We want to prove that

ν([a, A] | [x|_B, B]) = ν([a, A] | [x|_∂A, ∂A]).

To construct configurations on Z^2 we began by dividing Z^2 into 5x5 blocks and placing the configurations on them. Each symbol carries a number between 1 and 25 which records how this division has been done. By looking at x(j,k) for any (j,k) ∈ ∂A one can determine how the division has been made: take some (j,k) ∈ ∂A; Z^2 can be partitioned into 5x5 blocks T_i, numbered as in Figure 5.3, such that the number on (j,k) is the same as the number designated to x(j,k). This gives a partition of A, say {A_i}_{i=1}^n, and by renumbering the T_i we can assume that each A_i ⊂ T_i. Since the configurations are placed independently on the grids with probability μ' on the 5x5 boxes,

ν([a, A] | [x|_B, B]) = ∏_{i=1}^n ν([a|_{A_i}, A_i] | [x|_B, B])   (by independence)
 = ∏_{i=1}^n ν([a|_{A_i}, A_i] | [x|_{B∩T_i}, B∩T_i])
 = ∏_{i=1}^n ν([a|_{A_i}, A_i] | [x|_{∂A_i∩T_i}, ∂A_i∩T_i])   (by the Markov random field property of μ')
 = ν([a, A] | [x|_∂A, ∂A]).

Now we will prove that the measure does not have a nearest neighbour Gibbs potential. Let A ⊂ Z^2 be a 5x5 block, and let H_o, I_o, J_o, K_o, L_o, M_o be the configurations on A corresponding to H', I', J', K', L', M'.
Now consider a configuration a on ∂A such that ν([H_o, A] ∩ [a, ∂A]) > 0. This assumption implies that

ν([H_o, A] ∩ [a, ∂A]) ν([J_o, A] ∩ [a, ∂A]) ν([L_o, A] ∩ [a, ∂A]) > 0,
ν([I_o, A] ∩ [a, ∂A]) ν([K_o, A] ∩ [a, ∂A]) ν([M_o, A] ∩ [a, ∂A]) > 0.

Then

ν([H_o, A] | [a, ∂A]) ν([J_o, A] | [a, ∂A]) ν([L_o, A] | [a, ∂A]) = μ'({H'}) μ'({J'}) μ'({L'})
 ≠ μ'({I'}) μ'({K'}) μ'({M'}) = ν([I_o, A] | [a, ∂A]) ν([K_o, A] | [a, ∂A]) ν([M_o, A] | [a, ∂A]).

Therefore the measure does not have a nearest neighbour Gibbs potential.

We were prompted by this example to study Markov random fields from another direction. It turns out that if the support of a Markov random field has the pivot property (defined below), then there is another compact way of representing its conditional probabilities.

Let D be a set of finite subsets of Z^2. Let A ∈ D, let X ⊂ R^{Z^2} be a shift space, and let x, y be distinct elements of B_{A∪∂A}(X) such that x = y on ∂A. Then a pivot from x to y in the shift space X is a sequence of points x = x_1, x_2, x_3, ..., x_n = y ∈ B_{A∪∂A}(X) such that x_i|_∂A = x_j|_∂A for all i and j and

|{r ∈ A | x_i(r) ≠ x_{i+1}(r)}| = 1.

Definition. A shift space X ⊂ R^{Z^2} is said to have the D-pivot property if for all A ∈ D and x, y ∈ B_{A∪∂A}(X) such that x = y on ∂A there is a pivot from x to y in X.

This means that any configuration can be changed site by site into any other configuration if they agree on the boundary of an element of D. Note that a shift space with a safe symbol, and in particular one with full support, has the D-pivot property where D is the set of all finite subsets of Z^2. This property was initially explored by us to give another proof of the Hammersley-Clifford theorem. However, the property has some other consequences, which will be discussed in this chapter and the next.

Suppose a stationary Markov random field μ is given on R^{Z^2}, and let D be the set of all finite subsets of Z^2. Let supp(μ) = X. The specification of μ is the function Δ: {A x B_{A∪∂A}(X) | A ∈ D} -> R given by

Δ(A, x) = μ([x|_A, A] | [x|_∂A, ∂A]).

The infralocal specification of μ is the function δ: B_{{(0,0)}∪∂{(0,0)}}(X) -> R given by

δ(x) = μ([x|_{(0,0)}, {(0,0)}] | [x|_{∂{(0,0)}}, ∂{(0,0)}]).

Proposition 5.0.11. Suppose D is a set of finite subsets of Z^2 such that any finite subset of Z^2 is contained in an element of D. Let X be a shift space with the D-pivot property and let μ be a stationary Markov random field such that supp(μ) = X. Then there is an expression for the specification of μ in terms of the infralocal specification of μ.

The proposition can be restated with Z^2 replaced by any graph.

Proof. Let D, X and μ be as stated in the proposition, and suppose the infralocal specification of μ is given; we will determine the specification of the measure. Let A ⊂ Z^2 be finite and let x, y ∈ B_{A∪∂A}(X) be distinct such that x = y on ∂A. Let B ∈ D be such that A ∪ ∂A ⊂ B, and let z ∈ B_{B∪∂B}(X) be such that z = x on ∂A. Since μ is a Markov random field, supp(μ) is a topological Markov field, so there exist x_1, y_1 ∈ B_{B∪∂B}(X) such that x_1 = x and y_1 = y on A ∪ ∂A and x_1 = y_1 = z on (B ∪ ∂B) - (A ∪ ∂A). Let (x_1, x_2, ..., x_n = y_1) be a B-pivot, with x_i ≠ x_{i+1} at v_i = (a_i, b_i) for each i. Then

Δ(A, x) / Δ(A, y) = μ([x|_A, A] | [x|_∂A, ∂A]) / μ([y|_A, A] | [y|_∂A, ∂A])
 = μ([x, A∪∂A]) / μ([y, A∪∂A])
 = μ([x_1, B∪∂B]) / μ([x_n, B∪∂B])   (since μ is a Markov random field)
 = ∏_{i=1}^{n-1} μ([x_i, B∪∂B]) / μ([x_{i+1}, B∪∂B])
 = ∏_{i=1}^{n-1} μ([x_i|_{{v_i}∪∂{v_i}}, {v_i}∪∂{v_i}]) / μ([x_{i+1}|_{{v_i}∪∂{v_i}}, {v_i}∪∂{v_i}])   (since μ is a Markov random field)
 = ∏_{i=1}^{n-1} δ(σ_1^{a_i} σ_2^{b_i}(x_i)) / δ(σ_1^{a_i} σ_2^{b_i}(x_{i+1})).

Therefore the specification is completely determined by the infralocal specification.
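To see the pivot mechanism concretely, one can test single-site connectivity by breadth-first search. The following sketch of ours (an illustration only, under the assumption that locally proper patterns on a box extend globally) enumerates proper 3-colourings of a small box with a fixed boundary, anticipating the 3-checkerboard of Chapter 6, and reports how many completions are reachable by pivots.

```python
from itertools import product
from collections import deque

N = 5
interior = [(i, j) for i in range(1, N-1) for j in range(1, N-1)]
border = [(i, j) for i in range(N) for j in range(N) if (i, j) not in interior]

def neighbours(v):
    i, j = v
    return [(a, b) for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
            if 0 <= a < N and 0 <= b < N]

def proper(col):
    return all(col[v] != col[w] for v in col for w in neighbours(v) if w in col)

boundary = {v: (v[0] + v[1]) % 3 for v in border}    # a proper boundary condition
completions = {c for c in product(range(3), repeat=len(interior))
               if proper({**boundary, **dict(zip(interior, c))})}

start = next(iter(completions))                      # BFS over single-site moves
seen, queue = {start}, deque([start])
while queue:
    c = queue.popleft()
    for k in range(len(interior)):
        for s in range(3):
            d = c[:k] + (s,) + c[k+1:]
            if d in completions and d not in seen:
                seen.add(d); queue.append(d)
print(len(seen), "of", len(completions), "completions reachable by pivots")
```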
The proof gives an expression for the specification in terms of the infralocal specification. However, the expression depends on the pivot chosen between the two configurations. This indicates that the infralocal specification is not arbitrary and depends on the shift space X. To understand the constraints satisfied by the infralocal specification, we have to gain deeper insight into the pivoting process, that is, into pivoting at a single site. One might ask whether nearest neighbour shifts of finite type always have the pivot property with respect to sets of some fixed finite size; however, there exist shifts of finite type which do not admit pivots of any fixed finite size. We will continue this discussion at the end of the next chapter.

Chapter 6: Further Work

In this chapter we will discuss some questions which arose in this study and which, to the author's knowledge, are still open. This chapter lacks proofs; the emphasis is on a quick introduction to the various questions.

6.1 Generalising Theorem 2.0.4

In Theorem (2.0.4) we proved the equivalence of Markov random fields and measures with a nearest neighbour Gibbs potential under the assumption that the support of the measure has a safe symbol. We were given a graph G = (V, E) and a Markov random field on R^V such that its support had a safe symbol '0'. Let

H = {a ∈ supp(μ) | (a, 0_V) is a homoclinic pair}

and let A be the matrix whose rows are indexed by pairs in H and whose columns are indexed by elements of T, i.e. configurations on cliques with positive probability. For c ∈ T a configuration on C and a, b ∈ H, the matrix entries were given by

A_{(a,b),c} = 1 if a|_C = c and b|_C ≠ c; -1 if a|_C ≠ c and b|_C = c; 0 otherwise.

An essential part of the proof was to show that

V' = {v | vA = 0 and v has finitely many non-zero entries}

is generated by

Ṽ = {v | vA = 0 and v has at most 2 non-zero entries}.

Now let a set X ⊂ R^V be given and let H be the set of homoclinic pairs in X. As before, A is a matrix whose rows are indexed by pairs in H and whose columns by elements of T, i.e. configurations on cliques in X. For c ∈ T a configuration on C and a, b ∈ H, the matrix entries are given by

A_{(a,b),c} = 1 if a|_C = c and b|_C ≠ c; -1 if a|_C ≠ c and b|_C = c; 0 otherwise.

Then X is called Gibbs-feasible if V' = {v | vA = 0 and v has finitely many non-zero entries} is generated by Ṽ = {v | vA = 0 and v has at most 2 non-zero entries}. The proof of Theorem (2.0.4) then shows the following.

Theorem 6.1.1. Let μ be a Markov random field such that supp(μ) is Gibbs-feasible. Then μ has a nearest neighbour Gibbs potential.

Question 1. What are nice conditions on a shift space X under which it is Gibbs-feasible? One such nice condition is that X has a safe symbol.

6.2 Topological Markov Fields

Topological Markov fields are central to this thesis. They played an important role in the proof of Theorem 4.0.9, which states that stationary Markov random fields in 1 dimension are Markov chains. In the proof we showed that topological Markov fields on Z which support a stationary probability measure are nearest neighbour shifts of finite type. A natural generalisation of the class of shifts of finite type is the class of sofic shifts, defined as shift spaces which are factors of shifts of finite type. The following is true.

Lemma 6.2.1. Suppose X ⊂ R^Z is a shift space which is a topological Markov field. Then X is a sofic shift.

With the help of this lemma, the following is a consequence of the proof of Theorem (4.0.9).

Theorem 6.2.2. Suppose X ⊂ R^Z is a shift space. The following are equivalent:
1. X is the support of a stationary Markov chain.
2. X is the support of a stationary Markov random field.
3. X is a topological Markov field and X x X is non-wandering.
4. X is a topological Markov field and X is non-wandering.
5. X is a disjoint union of finitely many irreducible nearest neighbour shifts of finite type.

However, there does not seem to be any such theorem in dimensions greater than 1. The following aspect has not yet been explored but is of interest to the author. X ⊂ R^{Z^2} is called a global topological Markov field if for all x, y ∈ X such that x = y on ∂C for some C ⊂ V which is not necessarily finite, the point z ∈ R^V defined by

z = x on C ∪ ∂C, z = y on (C ∪ ∂C)^c,

is an element of X.

Question 2. Suppose X is a global topological Markov field and a shift space. Does this imply that it is sofic?

Another point of interest is the characterisation of the supports of stationary Markov random fields, which are necessarily shift spaces. Suppose X ⊂ R^Z is a shift space. Then X is the support of a stationary Markov random field if and only if it is the support of a stationary probability measure and is a topological Markov field. This led us to the following question.

Question 3. Let X ⊂ R^{Z^2}. If X is the support of a stationary Markov random field, then X is the support of a stationary probability measure and X is a topological Markov field. Is the converse true?

6.3 Pivot Property and 3-Checkerboard

The n-checkerboard is the shift space given by

X_n = {x ∈ {0, 1, 2, ..., n-1}^{Z^2} | x(i,j) ≠ x(i,j+1) and x(i,j) ≠ x(i+1,j) for all i and j}.

Then, as in [5], X_3 and X_n for n ≥ 6 have the D-pivot property where D is the set of diamonds D_n,

D_n = {(x, y) ∈ Z^2 | |x| + |y| ≤ n};

D can be replaced by the set of finite simply-connected subsets of Z^2.

Figure 6.1: The thicker lines represent the subgraph induced by D_3

We will work with the 3-checkerboard; however, the methods employed are more general. Suppose μ is a stationary Markov random field such that supp(μ) = X_3. It should be noted that we do not know whether such a measure exists. In the remainder of this chapter we shall derive some features of such a measure. By the proof of Proposition 5.0.11 there is an expression for the specification Δ in terms of the infralocal specification δ. In X_3, the symbol at a site can change if and only if its neighbourhood is monochromatic, i.e. a single symbol occupies the sites in the neighbourhood. Let p, x, q, r, y and z ∈ B_{D_2}(X_3) be as in Figure 6.2.

Figure 6.2: p, x, q, r, y and z

Let

v_1 = log(δ(p)) - log(δ(x)),
v_2 = log(δ(q)) - log(δ(y)),
v_3 = log(δ(r)) - log(δ(z)).

Then v_1, v_2 and v_3 completely determine the specification Δ in the following way. Let d = (d_1, d_2, ..., d_m) be a D_n-pivot. Define

a_d = |{i | d_i(j) = 1, d_{i+1}(j) = 2 for some j ∈ D_n}| - |{i | d_i(j) = 2, d_{i+1}(j) = 1 for some j ∈ D_n}|
b_d = |{i | d_i(j) = 2, d_{i+1}(j) = 0 for some j ∈ D_n}| - |{i | d_i(j) = 0, d_{i+1}(j) = 2 for some j ∈ D_n}|
c_d = |{i | d_i(j) = 0, d_{i+1}(j) = 1 for some j ∈ D_n}| - |{i | d_i(j) = 1, d_{i+1}(j) = 0 for some j ∈ D_n}|

These record the numbers of switches of each kind. Suppose A ⊂ Z^2 is finite. Let x, y ∈ B_{A∪∂A}(X_3) be distinct such that x = y on ∂A. Let z ∈ X_3 be such that z = x on ∂A, and let k be such that A ∪ ∂A ⊂ D_k. Then, as in the proof of Proposition 5.0.11, we consider points x_1, y_1 ∈ B_{D_k∪∂D_k}(X_3) such that x_1 = x and y_1 = y on A ∪ ∂A and x_1 = y_1 = z on D_k ∪ ∂D_k - (A ∪ ∂A). Now consider the pivot d = (d_1, d_2, ..., d_n) such that d_1 = x_1 and d_n = y_1. Then

Δ(A, x) / Δ(A, y) = e^{v_1 a_d + v_2 b_d + v_3 c_d}.   (6.3.1)
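The switch counts a_d, b_d, c_d are straightforward to accumulate along a pivot. A small sketch of ours (hypothetical data structures: configurations as dicts from sites to symbols in {0, 1, 2}):

```python
def switch_counts(pivot):
    """Signed switch counts (a_d, b_d, c_d) of a pivot: each step changes
    exactly one site; count transitions 1->2, 2->0, 0->1 minus their reverses."""
    a = b = c = 0
    for d, e in zip(pivot, pivot[1:]):
        (j,) = [site for site in d if d[site] != e[site]]  # the pivoted site
        s, t = d[j], e[j]
        a += (s, t) == (1, 2); a -= (s, t) == (2, 1)
        b += (s, t) == (2, 0); b -= (s, t) == (0, 2)
        c += (s, t) == (0, 1); c -= (s, t) == (1, 0)
    return a, b, c

# Toy pivot at a single site with monochromatic surroundings: 1 -> 2 -> 0
d1 = {(0, 0): 1}; d2 = {(0, 0): 2}; d3 = {(0, 0): 0}
print(switch_counts([d1, d2, d3]))   # (1, 1, 0)
```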
Suppose μ has a stationary nearest neighbour Gibbs potential V. We can assume that the potential is zero on configurations on single sites. Then,

v_1 = V(01) + V(10) + V(01) + V(10) − V(02) − V(20) − V(02) − V(20),
v_2 = V(12) + V(21) + V(21) + V(12) − V(01) − V(10) − V(01) − V(10),
v_3 = V(02) + V(20) + V(20) + V(02) − V(21) − V(12) − V(21) − V(12).

Therefore, v_1 + v_2 + v_3 = 0. The converse is also true.

Proposition 6.3.1. Suppose μ is a stationary Markov random field such that supp(μ) = X_3. Let v_1, v_2 and v_3 be as described above. Then μ has a stationary nearest neighbour Gibbs potential if and only if v_1 + v_2 + v_3 = 0.

Given this proposition, we would like to know if the infralocal specifications satisfy any relations. For this we refer to the proof of Proposition 5.0.11. There we derived an expression for the specification in terms of the infralocal specification. However, this expression depended upon the pivot chosen between the two configurations. Different pivots might lead to different expressions. This leads to equations which the infralocal specification must satisfy. To be more formal, let X, μ and D be as in Proposition 5.0.11. Relationships arise in the following two ways:

1. Suppose A ⊂ Z² is finite and A ∉ D. Let x, y ∈ B_{A∪∂A}(X) be distinct such that x = y on ∂A. Let z_1, z_2 ∈ X be such that z_1 = z_2 = x on ∂A, and let B ∈ D be such that A ∪ ∂A ⊂ B. Then, as in the proof of the lemma, we consider points x_1, y_1 ∈ B_{B∪∂B}(X) such that x_1 = x and y_1 = y on A ∪ ∂A and x_1 = y_1 = z_1 on B ∪ ∂B − (A ∪ ∂A). Similarly, x_2, y_2 ∈ B_{B∪∂B}(X) are chosen such that x_2 = x and y_2 = y on A ∪ ∂A and x_2 = y_2 = z_2 on B ∪ ∂B − (A ∪ ∂A). Now consider pivots a = (a_1, a_2, ..., a_m) and b = (b_1, b_2, ..., b_n) such that a_1 = x_1, a_m = y_1, b_1 = x_2 and b_n = y_2. Let a_i ≠ a_{i+1} at v_i = (l_i, m_i) and b_i ≠ b_{i+1} at w_i = (n_i, k_i) for all i. Then,

Δ(A, x) / Δ(A, y) = ∏_{i=1}^{m−1} δ(σ_1^{l_i} σ_2^{m_i}(a_i)|_{D_2}) / δ(σ_1^{l_i} σ_2^{m_i}(a_{i+1})|_{D_2}) = ∏_{i=1}^{n−1} δ(σ_1^{n_i} σ_2^{k_i}(b_i)|_{D_2}) / δ(σ_1^{n_i} σ_2^{k_i}(b_{i+1})|_{D_2}).

Taking logarithms on both sides we get

∑_{i=1}^{m−1} (log δ(σ_1^{l_i} σ_2^{m_i}(a_i)|_{D_2}) − log δ(σ_1^{l_i} σ_2^{m_i}(a_{i+1})|_{D_2})) = ∑_{i=1}^{n−1} (log δ(σ_1^{n_i} σ_2^{k_i}(b_i)|_{D_2}) − log δ(σ_1^{n_i} σ_2^{k_i}(b_{i+1})|_{D_2})).

2. Suppose B ∈ D. Let x, y ∈ B_{B∪∂B}(X) be distinct such that x = y on ∂B. Consider pivots a = (a_1, a_2, ..., a_m) and b = (b_1, b_2, ..., b_n) such that a_1 = b_1 = x and a_m = b_n = y. Let a_i ≠ a_{i+1} at v_i = (l_i, m_i) and b_i ≠ b_{i+1} at w_i = (n_i, k_i) for all i. Then,

Δ(B, x) / Δ(B, y) = ∏_{i=1}^{m−1} δ(σ_1^{l_i} σ_2^{m_i}(a_i)|_{D_2}) / δ(σ_1^{l_i} σ_2^{m_i}(a_{i+1})|_{D_2}) = ∏_{i=1}^{n−1} δ(σ_1^{n_i} σ_2^{k_i}(b_i)|_{D_2}) / δ(σ_1^{n_i} σ_2^{k_i}(b_{i+1})|_{D_2}).

Taking logarithms on both sides we get

∑_{i=1}^{m−1} (log δ(σ_1^{l_i} σ_2^{m_i}(a_i)|_{D_2}) − log δ(σ_1^{l_i} σ_2^{m_i}(a_{i+1})|_{D_2})) = ∑_{i=1}^{n−1} (log δ(σ_1^{n_i} σ_2^{k_i}(b_i)|_{D_2}) − log δ(σ_1^{n_i} σ_2^{k_i}(b_{i+1})|_{D_2})).

Note that these relationships depend on the shift space X rather than on the measure μ. Also, the constraints of type (2) can be viewed as a special case of the constraints of type (1); however, checking constraints of type (1) presents an extra level of difficulty. Suppose a shift space X is fixed. Let

U = {μ | μ is a stationary Markov random field and supp(μ) = X}.

Given a probability measure μ ∈ U, let δ be its infralocal specification. Consider the function

d_μ : {(x, y) | x, y ∈ B_{D_2}(X) and x = y on ∂{(0,0)}} → R

given by d_μ(x, y) = log(δ(x)) − log(δ(y)). Then the set {d_μ | μ ∈ U} is contained in a finite dimensional vector space satisfying the equations of type (1) and (2).

Question 4. Given a shift space X, is there a finite procedure to determine the linear space generated by the equations of type (1) and (2)?
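To make the linear-algebraic content of Question 4 concrete: if one could enumerate finitely many constraint vectors of types (1) and (2), each a linear equation on the finitely many values d_μ(x, y), then determining the space of functions compatible with them is a routine null-space computation. The sketch below is a hedged illustration; the constraint matrix is a stand-in, since producing it from a given shift space X is exactly the open part of the question.

```python
# Hedged sketch for Question 4: given a (hypothetical) finite list of
# constraint vectors of types (1) and (2), compute an orthonormal basis
# of the space of value-assignments they allow.
import numpy as np

def solution_space_basis(constraints, tol=1e-10):
    """Rows of `constraints` are constraint vectors; returns an orthonormal
    basis (as rows) of the space annihilated by all of them."""
    M = np.atleast_2d(np.asarray(constraints, dtype=float))
    _, s, vt = np.linalg.svd(M)
    rank = int((s > tol).sum())
    return vt[rank:]        # right-singular vectors with singular value 0

# Toy stand-in: three coordinates (v1, v2, v3) and one relation v1+v2+v3 = 0,
# echoing Proposition 6.3.1 (not an actual type-(1)/(2) constraint).
basis = solution_space_basis([[1.0, 1.0, 1.0]])
print(basis)                # a 2-dimensional space, as expected
```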
These constraints depend on how much configurations in X can pivot. For example, if X is the full shift then pivoting is very easy; hence the infralocal specification of a stationary Markov random field supported on it is very constrained. In X_3, however, the pivoting is very constrained and hence there are no relations of type (1) or (2). To be more formal, suppose x = (x_1, x_2, x_3, ..., x_m) is a D_n-pivot in X_3 for some m. The following proposition establishes a consistency result between various pivots between two given configurations.

Proposition 6.3.2. Let x = (x_1, x_2, ..., x_n) and y = (y_1, y_2, ..., y_r) be D_m-pivots for some m with x_1 = y_1 and x_n = y_r. Then a_x = a_y, b_x = b_y and c_x = c_y.

This rules out any constraints of type (2). The following is a stronger version of the proposition given above; it rules out any constraints of type (1).

Proposition 6.3.3. Let x = (x_1, x_2, ..., x_n) and y = (y_1, y_2, ..., y_r) be D_m-pivots for some m. Suppose there exists a set A ⊂ D_m such that x_1 = y_1 and x_n = y_r on A ∪ ∂A, and x_1 = x_n and y_1 = y_r on D_m − A. Then a_x = a_y, b_x = b_y and c_x = c_y.

Thus there are no constraints on v_1, v_2 and v_3 of type (1) or (2).

Question 5. Let D′ be the set of all finite subsets of Z² and let v_1, v_2, v_3 ∈ R. Then, using the expression in Equation (6.3.1), a function

Δ : {(A, x) | A ∈ D′, x ∈ B_{A∪∂A}(X_3)} → [0, 1]

can be constructed such that for any finite A ⊂ Z² and y ∈ B_{∂A}(X_3),

∑_{x ∈ B_{A∪∂A}(X_3), x = y on ∂A} Δ(A, x) = 1.

Does there exist a stationary Markov random field μ such that supp(μ) = X_3 and Δ is its specification? We are close to proving that there exists a stationary Markov random field μ with supp(μ) = X_3 in the case v_1 = v_2 = v_3 = 0.

Chapter 7 A Linear Algebra Theorem

Typically, linear algebra deals with finite dimensional vector spaces. However, in this thesis we have encountered matrices with countably many rows and columns. This chapter is devoted to clarifying the relevant notions. Functional analysis might provide a better foundation for this material; however, since the concepts required are so simple, we shall keep to this elementary setting.

Given arbitrary non-empty sets M and N, a matrix of size M × N is a function A : M × N → R such that {y | A(x, y) ≠ 0} is finite for all x ∈ M. This is what is meant by a matrix whose rows are indexed by the elements of M and whose columns are indexed by the elements of N. Suppose we are given matrices A and B of size M × N and N × K respectively. Then AB is the matrix of size M × K such that

AB(m, k) = ∑_{n∈N} A(m, n) B(n, k)

for all m ∈ M and k ∈ K, whenever the sums are well defined.

Theorem. Suppose M and N are countable sets and K is a singleton. Let A be a matrix of size M × N and b a matrix of size M × K. Then there exists a matrix x of size N × K such that Ax = b if and only if

vA = 0 ⟹ vb = 0,

where v is any matrix of size K × M with finitely many non-zero entries and 0 is the matrix of size K × N with 0(a, b) = 0 for all a ∈ K and b ∈ N.

If M is finite, this is a well-known linear algebra fact; one can, for instance, look in [9]. We shall assume the finite case and prove the theorem. The idea is to take solutions of finite parts of the system and then stitch them together to get a solution of the entire system. The theorem holds even if the restrictions on the cardinality of M, N and K are removed.
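For readers who want to see the finite case in action, here is a small numerical illustration (not part of the proof): for a finite real matrix A, the system Ax = b is solvable exactly when vb = 0 for every v with vA = 0, i.e. when b is orthogonal to the left kernel of A.

```python
# Numerical illustration of the finite case of the theorem, via SVD.
import numpy as np

def solvable(A, b, tol=1e-9):
    """Check the condition vA = 0 implies vb = 0."""
    u, s, _ = np.linalg.svd(A)
    rank = int((s > tol).sum())
    left_kernel = u[:, rank:].T        # rows form a basis of {v : vA = 0}
    return bool(np.all(np.abs(left_kernel @ b) < tol))

A = np.array([[1.0, 2.0], [2.0, 4.0]])       # a rank-1 matrix
print(solvable(A, np.array([1.0, 2.0])))     # True: Ax = b has x = (1, 0)
print(solvable(A, np.array([1.0, 0.0])))     # False: v = (2, -1) has vA = 0, vb = 2
```

The content of the theorem is that this equivalence survives when the index sets become countable, provided the rows of A have finite support.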
Proof. Suppose A and b are as stated in the theorem. Consider an increasing sequence of finite subsets {M_i} of M such that ⋃_i M_i = M. Let A_i be the submatrix of A of size M_i × N and b_i the submatrix of b of size M_i × K given by A_i = A|_{M_i×N} and b_i = b|_{M_i×K}. Since M_i is finite and vA_i = 0 ⟹ vb_i = 0, for every i there exists a solution to the equation A_i x = b_i.

Take an increasing sequence of finite subsets {N_j} of N such that ⋃_j N_j = N. Let

S^i_j = {x ∈ R^{N_j×K} | there exists y ∈ R^{N×K} such that A_i y = b_i and y|_{N_j×K} = x}

be the projection of the solution space onto the N_j coordinates. Note that each S^i_j is non-empty, S^{i+1}_j ⊂ S^i_j, and the projection of S^i_{j+1} onto the N_j coordinates gives us S^i_j:

S^i_j = {x ∈ R^{N_j×K} | there exists y ∈ S^i_{j+1} such that y|_{N_j×K} = x}.   (7.0.1)

Therefore {S^i_j}_{i=1}^∞ is a nested sequence of affine subspaces of a finite dimensional vector space. Hence the dimension of S^i_j is eventually constant, and therefore for any j we have S^i_j = S^{i+1}_j for all large enough i. In particular, ⋂_{i∈N} S^i_j is non-empty for all j. We will now prove a consistency result for these solution sets by induction.

Lemma 7.0.4. There exists a sequence {x_t}_{t∈N} such that x_t ∈ ⋂_{i∈N} S^i_t and x_t|_{N_{t−1}×K} = x_{t−1} for all t.

Proof. Choose x_1 ∈ ⋂_{i∈N} S^i_1. By equation (7.0.1), for every i there exists x^i_2 ∈ S^i_2 such that x^i_2|_{N_1×K} = x_1. Since the sequence {S^i_2}_{i∈N} is eventually constant, there exists x_2 ∈ ⋂_{i∈N} S^i_2 such that x_2|_{N_1×K} = x_1. Now let x_1, x_2, ..., x_r be chosen such that x_t ∈ ⋂_{i∈N} S^i_t for 1 ≤ t ≤ r and x_t|_{N_{t−1}×K} = x_{t−1} for 2 ≤ t ≤ r. For every i there exists x^i_{r+1} ∈ S^i_{r+1} such that x^i_{r+1}|_{N_r×K} = x_r. Since the sequence {S^i_{r+1}}_{i∈N} is eventually constant, there exists x_{r+1} ∈ ⋂_{i∈N} S^i_{r+1} such that x_{r+1}|_{N_r×K} = x_r. Thus the induction is complete and the lemma is proved.

Let the sequence {x_t}_{t∈N} be as in the lemma, and let x ∈ R^{N×K} be such that x|_{N_r×K} = x_r for all r ∈ N. Take an arbitrary i ∈ N. Since every row of A_i has finitely many non-zero entries, for a given z ∈ R^{N×K} the product A_i z is independent of z|_{(N_r)^c×K} for all large enough r. Choose such an r = r_0. We have x_{r_0} ∈ ⋂_{l∈N} S^l_{r_0} ⊂ S^i_{r_0}. By the definition of S^i_{r_0}, there exists x^i_{r_0} such that A_i x^i_{r_0} = b_i and x^i_{r_0}|_{N_{r_0}×K} = x_{r_0}. Therefore A_i x = A_i x^i_{r_0} = b_i. Since i was arbitrary, Ax = b.

Bibliography

[1] Mathias Drton; Bernd Sturmfels; Seth Sullivant. Lectures on Algebraic Statistics. Oberwolfach Seminars, vol. 39, Birkhäuser Verlag, Basel, 2009.
[2] J. M. Hammersley; P. Clifford. Markov fields on finite graphs and lattices. Unpublished manuscript, 1971.
[3] R. L. Dobruschin. Description of a random field by means of conditional probabilities and conditions for its regularity. Teor. Verojatnost. i Primenen., 13:201–229, 1968.
[4] Rick Durrett. Probability: Theory and Examples. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge, 4th edition, 2010.
[5] Kari Eloranta. Diamond ice. J. Statist. Phys., 96(5-6):1091–1109, 1999.
[6] F. R. Gantmacher. The Theory of Matrices, volume 2. AMS Chelsea Publishing, Providence, RI, 1998.
[7] Hans-Otto Georgii. Gibbs Measures and Phase Transitions. de Gruyter Studies in Mathematics, Walter de Gruyter & Co., Berlin, 1988.
[8] Paul R. Halmos. Lectures on Ergodic Theory. Chelsea Publishing Co., New York, 1960.
[9] Kenneth Hoffman; Ray Kunze. Linear Algebra. Prentice-Hall Mathematics Series, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1961.
[10] Douglas Lind; Brian Marcus. An Introduction to Symbolic Dynamics and Coding. Cambridge University Press, Cambridge, 1995.
[11] John Moussouris. Gibbs and Markov random systems with constraints. Journal of Statistical Physics, 10(1), 1974.
[12] William Parry. Intrinsic Markov chains. Trans. Amer. Math. Soc., 112:55–66, 1964.
53"@en ; edm:hasType "Thesis/Dissertation"@en ; vivo:dateIssued "2011-11"@en ; edm:isShownAt "10.14288/1.0072112"@en ; dcterms:language "eng"@en ; ns0:degreeDiscipline "Mathematics"@en ; edm:provider "Vancouver : University of British Columbia Library"@en ; dcterms:publisher "University of British Columbia"@en ; dcterms:rights "Attribution 3.0 Unported"@en ; ns0:rightsURI "http://creativecommons.org/licenses/by/3.0/"@en ; ns0:scholarLevel "Graduate"@en ; dcterms:title "Markov random fields and measures with nearest neighbour Gibbs potential"@en ; dcterms:type "Text"@en ; ns0:identifierURI "http://hdl.handle.net/2429/37000"@en .