Hypercube Coloring and the Structure of Binary Codes

by

James Gregory Rix

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The College of Graduate Studies (Interdisciplinary Studies)

The University of British Columbia (Okanagan)

July 2008

© James Gregory Rix 2008

Abstract

A coloring of a graph is an assignment of colors to its vertices so that no two adjacent vertices are given the same color. The chromatic number of a graph is the least number of colors needed to color all of its vertices. Graph coloring problems can be applied to many real world applications, such as scheduling and register allocation. Computationally, the decision problem of whether a general graph is m-colorable is NP-complete for m ≥ 3.

The graph studied in this thesis is a well-known combinatorial object, the k-dimensional hypercube, Q_k. The hypercube itself is 2-colorable for all k; however, coloring the square of the cube is a much more interesting problem. This is the graph in which the vertices are the binary vectors of length k, and two vertices are adjacent if and only if the Hamming distance between the two vectors is at most 2. Any color class in a coloring of Q_k^2 is a binary (k, M, 3) code.

This thesis will begin with an introduction to binary codes and their structure. One of the most fundamental combinatorial problems is finding optimal binary codes, that is, binary codes of maximum cardinality for a specified length and minimum distance. Many upper and lower bounds have been produced, and we will analyze and apply several of these. This leads to many interesting results about the chromatic number of the square of the cube.

The smallest k for which the chromatic number of Q_k^2 is unknown is k = 8; however, it can be determined that this value is either 13 or 14. Computational approaches to determine the chromatic number of Q_8^2 were undertaken. We were unable to determine whether 13 or 14 is the true value; however, much valuable insight was gained into the structure of this graph and the computational difficulty that lies within. Since between 9 and 12 of the color classes of a 13-coloring of Q_8^2 must be (8, 20, 3) binary codes, this led to a thorough investigation of the structure of such binary codes.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
Chapter 1. Coding Theory
  1.1 Preliminaries
  1.2 An Introduction to Codes
  1.3 Optimal Binary Codes
    1.3.1 The Sphere-Packing Bound
    1.3.2 Delsarte's Linear Programming Bound
  1.4 Applications of Delsarte's Linear Programming Bound
Chapter 2. Graph Theory
  2.1 Elementary Concepts
  2.2 Graph Isomorphisms and Automorphisms
  2.3 Vertex Coloring
  2.4 Algorithms to Solve Vertex Coloring Problems
    2.4.1 An Introduction to Algorithms and Efficiency
    2.4.2 Integer Programming
    2.4.3 Column Generation
    2.4.4 The Petford-Welsh Algorithm
Chapter 3. The Hypercube
  3.1 Automorphisms of the Cube and its Square
  3.2 The Chromatic Number of the Cube and its Square
    3.2.1 The Asymptotic Behavior of the Chromatic Number of the Square of the Cube
  3.3 Applying Computational Methods to Determine the Chromatic Number of the Square of the 8-Dimensional Cube
    3.3.1 Integer Programming
    3.3.2 Column Generation
    3.3.3 The Petford-Welsh Algorithm
Chapter 4. The Structure of Optimal Binary Codes
  4.1 Computationally Determining all Weight Distributions
  4.2 Structural Results
    4.2.1 On the Size of Shortened Codes
    4.2.2 Bounds on the Weight Distributions of Color Classes
  4.3 The Number of Nonequivalent Codes
  4.4 Another Definition of Equivalence
Chapter 5. Conclusion
Bibliography
Appendices
  Appendix A. Current Status of Coloring the Square of the Cube
  Appendix B. A 14-coloring of the Square of the 8-Cube
  Appendix C. Weight Distributions of (8, 20, 3) Binary Codes
  Appendix D. Hougardy.cpp
  Appendix E. CPDist.cpp

List of Tables

Table 3.1: A 4-coloring of Q_3^2 using Wan's construction
Table 4.1: Distance Distributions of (8, 20, 3) Binary Codes
Table A.1: Current Status of Coloring the Square of the Cube
Table B.1: A 14-coloring of Q_8^2
Table C.1: Weight Distributions of (8, 20, 3) Binary Codes

List of Figures

Figure 2.1: Venn Diagram of the Complexity Classes
Figure 3.1: Computation Times Using Integer Programming
Figure 4.1: Computation Times for Feasible Distributions
Figure 4.2: Computation Times for Infeasible Distributions

Acknowledgements

First and foremost, I would like to thank my supervisor, Dr. Donovan Hare, both for introducing me to the beauty of optimization from a real world perspective, and for giving me the opportunity to pursue my research in this field. His infinite patience was extremely helpful over the course of my educational experience. In addition, I would like to give thanks to everyone who has had an invaluable impact on my education: Dr. Wayne Broughton, Dr. Heinz Bauschke, Dr. Shawn Wang, Dr. Yves Lucet, Dr. Yong Gao, and Dr. Blair Spearman.

This thesis was partially supported by the Natural Sciences and Engineering Research Council of Canada, the Pacific Institute for the Mathematical Sciences and the U.B.C. Okanagan College of Graduate Studies.
Chapter 1

Coding Theory

In this chapter, we recall background material from coding theory that will be used in subsequent chapters. The chapter will conclude with the sphere-packing bound and Delsarte's linear programming bound, two very useful bounds for determining optimal binary code size. We will also include a proof of A(9, 4) = 20 that uses Delsarte's linear programming bound. This value will be very important in later chapters. We will now introduce the notation and definitions that will be used throughout, which closely follow those used in [20] and [13]. It will be assumed throughout this thesis that the reader has a basic knowledge of field theory, set theory, linear and integer programming, and probability.

1.1 Preliminaries

Notation 1.1. Let Z denote the set of all integers, and Z+ denote the set of all positive integers.

Notation 1.2. For any positive integer n, let [n] denote the set of integers {1, 2, . . . , n}.

Consider a finite field F with addition and multiplication defined, satisfying |F| = q. Recall that all fields have an additive identity, 0, and a multiplicative identity, 1. Now consider the vector space F^n. Clearly, |F^n| = q^n. A vector u in F^n is a column vector with coordinates u_1, u_2, . . . , u_n. For notational convenience, it will also be written as u_1 u_2 . . . u_n. We will refer to u_i as the ith coordinate of u.

Notation 1.3. Let 0^n and 1^n denote the vectors in F^n in which every coordinate is 0 and 1, respectively.

Notation 1.4. Let µ_i^n denote the vector in F^n in which the ith coordinate is 1 and every other coordinate is 0.

Definition 1.5. Consider two vectors u and v in F^n. The (Hamming) distance between u and v, denoted d(u, v), is defined to be the number of coordinates in which u and v differ.

Let u, v, w ∈ F^n. We state without proof the following properties of the Hamming distance function:

• 0 ≤ d(u, v) ≤ n,
• d(u, v) = 0 if and only if u = v,
• d(u, v) = d(v, u),
• d(u, w) ≤ d(u, v) + d(v, w) (the triangle inequality).

Definition 1.6. Let u ∈ F^n and t be a positive integer. The (Hamming) sphere of radius t centered at u, denoted S_t(u), is the following set of vectors:

S_t(u) = {v ∈ F^n | d(u, v) ≤ t}.

Definition 1.7. Consider a vector u in F^n. The (Hamming) weight of u, denoted wt(u), is defined to be the number of coordinates of u that are nonzero. The parity of u is even if wt(u) is even, and odd if wt(u) is odd.

Consider two vectors u and v in F^n. The following properties of the Hamming weight function are immediate:

• 0 ≤ wt(u) ≤ n,
• wt(u) = 0 if and only if u = 0^n,
• wt(u) = n if and only if u = 1^n (in the binary case),
• wt(u − v) = d(u, v).

Definition 1.8. Consider a vector u in F^n. We define p_u and q_u to be the following elements of F:

p_u = 0 if wt(u) is even, and p_u = 1 if wt(u) is odd;
q_u = 1 if wt(u) is even, and q_u = 0 if wt(u) is odd.

Lemma 1.9. Let u, v, w ∈ F^n. Then d(u, v) = d(u + w, v + w).

Proof.

d(u + w, v + w) = wt((u + w) − (v + w)) = wt((u − v) + (w − w)) = wt(u − v) = d(u, v).

Notation 1.10. Let S_n, for all positive integers n, denote the symmetric group of degree n (i.e. the set of all permutations of the integers [n]).

Definition 1.11. Let σ ∈ S_n and u ∈ F^n with coordinates defined by u = u_1 u_2 . . . u_n. Then σ(u) ∈ F^n is defined as follows:

σ(u) = u_{σ(1)} u_{σ(2)} . . . u_{σ(n)}.

For any vectors u and v in F^n, and permutation σ ∈ S_n, the following properties are immediate:

• wt(σ(u)) = wt(u),
• d(σ(u), σ(v)) = d(u, v),
• σ(u) + σ(v) = σ(u + v).
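When binary vectors are stored as machine words, the weight and distance of Definitions 1.5 and 1.7 reduce to single bit operations. The following C++ sketch (our own illustration, not part of the thesis; all names are ours) makes this concrete:

#include <bitset>
#include <cstdint>
#include <iostream>

// Hamming weight (Definition 1.7): the number of nonzero coordinates of u.
int weight(std::uint32_t u) {
    return static_cast<int>(std::bitset<32>(u).count());
}

// Hamming distance (Definition 1.5): for binary vectors, d(u, v) = wt(u XOR v),
// the bitwise form of the property wt(u - v) = d(u, v).
int distance(std::uint32_t u, std::uint32_t v) {
    return weight(u ^ v);
}

int main() {
    std::uint32_t u = 0b000011110;  // a binary vector of length 9
    std::uint32_t v = 0b000111001;
    std::cout << distance(u, v) << "\n";  // prints 4
}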
Definition 1.12. Let u ∈ F^m and v ∈ F^n have coordinates defined by u = u_1 u_2 . . . u_m and v = v_1 v_2 . . . v_n. We define u concatenated with v, denoted u v, to be:

u v = u_1 u_2 . . . u_m v_1 v_2 . . . v_n,

an element of the vector space F^{m+n}.

Let u, u′ ∈ F^m and v, v′ ∈ F^n. Clearly, from the distance functions of their respective vector spaces:

d(u v, u′ v′) = d(u, u′) + d(v, v′).

One technique used often in the study of codes is to concatenate a vector u ∈ F^n with a parity coordinate, that is, to create the vector u p_u. It is clear that wt(u p_u) is even for any u ∈ F^n.

Definition 1.13. Consider two vectors u and v in F^n with their coordinates defined by u = u_1 u_2 . . . u_n and v = v_1 v_2 . . . v_n. The inner product of u and v is defined as follows:

<u, v> = Σ_{i=1}^{n} u_i v_i.

Note: the inner product of u and v is an element of the field F.

Let u, v, w ∈ F^n and c ∈ F. As this is an inner product, it satisfies several key properties:

• <u, v> = <v, u>,
• <u, v + w> = <u, v> + <u, w>,
• c<u, v> = <cu, v>.

In this thesis, we will often be dealing with the field Z_2 = {0, 1}, with binary addition and multiplication defined normally. When dealing with this field, we can define the intersection of two vectors. This will give rise to several useful results.

Definition 1.14. Let u and v be binary vectors in {0, 1}^n with their coordinates defined by u = u_1 u_2 . . . u_n and v = v_1 v_2 . . . v_n. The intersection of u and v is defined to be the coordinatewise product:

u ∩ v = (u_1 v_1) (u_2 v_2) . . . (u_n v_n).

Thus the ith coordinate of u ∩ v is 1 if and only if the ith coordinates of both u and v are 1.

Let u and v be binary vectors in {0, 1}^n. We have the following properties:

• wt(u ∩ v) ≤ wt(u) and wt(u ∩ v) ≤ wt(v),
• u ∩ 0^n = 0^n and u ∩ 1^n = u.

Lemma 1.15. Let u and v be binary vectors in {0, 1}^n. Then d(u, v) = wt(u) + wt(v) − 2 wt(u ∩ v).

Proof. Let U be the number of coordinates in which u is 1 and v is 0, and let V be the number of coordinates in which v is 1 and u is 0. Then d(u, v) = U + V. Since wt(u ∩ v) represents the number of coordinates in which u and v are both 1, we have:

d(u, v) = U + V = (wt(u) − wt(u ∩ v)) + (wt(v) − wt(u ∩ v)) = wt(u) + wt(v) − 2 wt(u ∩ v).

Corollary 1.16. Let u and v be binary vectors in {0, 1}^n. Then u and v are of the same parity if and only if d(u, v) is even.

Proof.

u and v are of the same parity
⇔ wt(u) + wt(v) is even
⇔ wt(u) + wt(v) − 2 wt(u ∩ v) is even
⇔ d(u, v) is even (by Lemma 1.15).

For notational convenience, we will often represent a binary vector by its integer representation.

Definition 1.17. Let v be a binary vector in {0, 1}^n whose coordinates are defined by v_1 v_2 . . . v_n. Consider the nonnegative integer i satisfying:

i = 2^{n−1} v_1 + 2^{n−2} v_2 + . . . + 2 v_{n−1} + v_n.

This integer i is the integer representation of the vector v.

Definition 1.18. For any positive integer k, the binary representation matrix of order k is the ⌈log_2(k + 1)⌉ × k matrix in which column i, for i ∈ [k], is the binary vector of length ⌈log_2(k + 1)⌉ whose integer representation is i. This matrix is denoted BRM_k.

For example, the binary representation matrix of order 6 is defined as follows:

BRM_6 =
  0 0 0 1 1 1
  0 1 1 0 0 1
  1 0 1 0 1 0

1.2 An Introduction to Codes

We will now define a code over a finite field F.

Definition 1.19. Let F be a field. A code C over F is a subset of F^n. The elements u ∈ C are the codewords of C. If F = {0, 1}, then C is a binary code.
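As a small illustration of Definitions 1.17 and 1.18 (ours, not from the thesis appendices), the sketch below builds BRM_k by reading off the binary representation of each column index, most significant coordinate first:

#include <cmath>
#include <iostream>
#include <vector>

// Build the binary representation matrix BRM_k (Definition 1.18): column i,
// for i in [k], is the vector of length ceil(log2(k + 1)) whose integer
// representation (Definition 1.17) is i.
std::vector<std::vector<int>> brm(int k) {
    int rows = static_cast<int>(std::ceil(std::log2(k + 1)));
    std::vector<std::vector<int>> m(rows, std::vector<int>(k));
    for (int i = 1; i <= k; ++i)
        for (int r = 0; r < rows; ++r)
            m[r][i - 1] = (i >> (rows - 1 - r)) & 1;  // coordinate r of column i
    return m;
}

int main() {
    for (const auto& row : brm(6)) {  // reproduces the BRM_6 example above
        for (int b : row) std::cout << b << ' ';
        std::cout << '\n';
    }
}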
Definition 1.20. Let C = {u_1, u_2, . . . , u_M}, where u_i ∈ F^n for all i ∈ [M]. The (code) length is the length of the vectors, n, whereas the (code) size is the number of codewords in the code, |C| = M.

In addition to code length and size, a third parameter is usually specified when referring to a code.

Definition 1.21. Let C be an (n, M) code over a field F. The minimum distance of C, denoted d(C), is defined to be:

d(C) = min{d(u, v) | u, v ∈ C, u ≠ v}.

If M = 1 then d(C) = ∞.

Notation 1.22. An (n, M) code is a code with length n and size M. An (n, M, d) code is a code with length n, size M, and minimum distance at least d.

Definition 1.23. Let C be an (n, M) code over a field F, and let σ ∈ S_n. Then σ(C) is defined to be:

σ(C) = {σ(u) | u ∈ C}.

Clearly, σ(C) is an (n, M) code over F. Moreover, d(σ(C)) = d(C).

Definition 1.24. Let C be an (n, M) code over a field F, and let w ∈ F^n. Then C + w is the (n, M) code over F satisfying:

C + w = {u + w | u ∈ C}.

Proposition 1.25. Let C be an (n, M) code over a field F, and let w ∈ F^n. Then d(C + w) = d(C).

Proof. To determine the minimum distance of C + w, we have:

d(C + w) = min{d(u, v) | u, v ∈ C + w, u ≠ v}
         = min{d(x + w, y + w) | x, y ∈ C, x ≠ y}
         = min{d(x, y) | x, y ∈ C, x ≠ y} (by Lemma 1.9)
         = d(C).

In the following definitions, we will introduce techniques to create longer or shorter codes from a given code.

Definition 1.26. Let C be an (n, M) code over a field F, and let v ∈ F^m for some m ∈ Z+. Then C v is the (n + m, M) code over F defined as follows:

C v = {u v | u ∈ C}.

It is clear that d(C v) = d(C).

Definition 1.27. Let C be an (n, M, 2) code over a field F, and let i ∈ [n]. The punctured code C_(i) is defined to be the (n − 1, M) code over F containing all codewords of C with the ith coordinate removed. That is:

C_(i) = {u_1 u_2 . . . u_{i−1} u_{i+1} . . . u_n | u ∈ C}.

Clearly, d(C) − 1 ≤ d(C_(i)) ≤ d(C).

Definition 1.28. Let C be an (n, M) code over a field F. Moreover, let x ∈ F and i ∈ [n]. The shortened code C_{i,x} is defined to be the (n − 1, M′) code over F, with M′ ≤ M, containing all codewords of C with the field element x in coordinate i, then punctured at i. That is:

C_{i,x} = {u_1 u_2 . . . u_{i−1} u_{i+1} . . . u_n | u_1 u_2 . . . u_{i−1} x u_{i+1} . . . u_n ∈ C}.

Clearly, d(C_{i,x}) ≥ d(C).

Codes are used in real world applications every day. In practice, information is sent in the form of codewords from some (n, M) code C over a field F, where different codewords represent different meanings. The required codeword u ∈ C is sent over a channel, over which errors can occur in any and all of the coordinates, changing the coordinate to a different field element. Therefore, it is possible that one or more errors have transformed the original codeword into a vector in F^n that is not in the code. When a vector x ∈ F^n is received at the other end of this channel, x is then decoded to some codeword v ∈ C, generally the one that maximizes the likelihood of receiving x (in the hope that u = v). Assuming the probability of error in every coordinate is the same, this codeword v will be the v ∈ C that minimizes d(x, v), with ties broken arbitrarily. Formally, this is expressed as in the following definition.

Definition 1.29. Let C be an (n, M) code over a field F. A decoder is a function D: F^n → C. The decoder D is a nearest codeword decoder if it satisfies:

D(x) = v ⇒ d(x, v) = min{d(x, v′) | v′ ∈ C}.
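Definition 1.29 translates directly into a search over the code. Below is a minimal nearest codeword decoder of our own for binary codes whose codewords are stored as integer representations; ties are broken by first occurrence, one of the arbitrary tie-breaking rules the definition allows:

#include <bitset>
#include <cstdint>
#include <iostream>
#include <vector>

// Nearest codeword decoder (Definition 1.29): return a codeword v in C
// minimizing d(x, v).
std::uint32_t decode(std::uint32_t x, const std::vector<std::uint32_t>& code) {
    std::uint32_t best = code.front();
    int bestDist = static_cast<int>(std::bitset<32>(x ^ best).count());
    for (std::uint32_t v : code) {
        int dist = static_cast<int>(std::bitset<32>(x ^ v).count());
        if (dist < bestDist) { best = v; bestDist = dist; }
    }
    return best;
}

int main() {
    std::vector<std::uint32_t> code = {30, 57, 77, 83};  // a few codewords
    std::cout << decode(31, code) << "\n";  // 31 is one error away from 30: prints 30
}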
It is clear that the higher the minimum distance of a code, the higher the probability that a received vector will be decoded correctly. The result of Corollary 1.31 will specify explicitly the error-correcting capability of a code with fixed minimum distance. The following two proofs are similar to those used in [13, Ch. 1, Theorem 2].

Proposition 1.30. Let C be an (n, M, d) code over a field F, and let t = ⌊(d − 1)/2⌋. For any u, v ∈ C such that u ≠ v, S_t(u) ∩ S_t(v) = ∅.

Proof. Suppose, by way of contradiction, that there exists w ∈ F^n such that d(u, w) ≤ t and d(v, w) ≤ t. We have:

d(u, v) ≤ d(u, w) + d(w, v) (by the triangle inequality)
        ≤ ⌊(d − 1)/2⌋ + ⌊(d − 1)/2⌋
        ≤ (d − 1)/2 + (d − 1)/2
        = d − 1,

a contradiction to the minimum distance of C.

Corollary 1.31. Let C be an (n, M, d) code over a field F. A nearest codeword decoder will correctly recover the sent codeword under any pattern of up to ⌊(d − 1)/2⌋ errors.

Proof. Let C be an (n, M, d) code over F. Suppose codeword u ∈ C is sent over a channel and t ≤ ⌊(d − 1)/2⌋ errors occur to yield vector w ∈ F^n. Thus w ∈ S_t(u). Take any v ∈ C \ {u}. By the result of Proposition 1.30, w ∉ S_{⌊(d−1)/2⌋}(v), implying:

d(v, w) > ⌊(d − 1)/2⌋ ≥ t = d(u, w).

Therefore a nearest codeword decoder will decode w to u.

Definition 1.32. A constant weight code C is an (n, M, d) code in which every codeword in C has the same weight w. C can then be referred to as an (n, M, d, w) code.

Definition 1.33. An even weight code C is a code in which every codeword u ∈ C is of even weight. Similarly, an odd weight code C is a code in which every codeword u ∈ C is of odd weight.

For the remainder of this thesis, we will be strictly using binary codes. For binary vectors u and v, note that u − v = u + v.

Corollary 1.34. Even weight binary codes and odd weight binary codes have even minimum distance.

Proof. Since every codeword is of the same parity, the proof of this corollary follows directly from Corollary 1.16.

Definition 1.35. The weight distribution of an (n, M) binary code C with respect to a vector u ∈ {0, 1}^n is the sequence of nonnegative integers (W_i(u))_{i=0}^{n}, where W_i(u), for i = 0, 1, . . . , n, is the number of codewords v ∈ C such that d(u, v) = i. Symbolically, this definition may be represented as:

W_i(u) = Σ_{v∈C, d(u,v)=i} 1 = Σ_{v∈C, wt(u+v)=i} 1.

When u is not specified, it will be assumed that u = 0^n. We will denote (W_i(0^n))_{i=0}^{n} as simply (W_i)_{i=0}^{n}.

Let C be an (n, M) binary code, u ∈ {0, 1}^n, and σ ∈ S_n. Moreover, let (W_i(u))_{i=0}^{n} denote the weight distribution of C with respect to u. The weight distribution satisfies the following properties:

• Σ_{i=0}^{n} W_i(u) = M,
• the weight distribution of σ(C) with respect to σ(u) is also (W_i(u))_{i=0}^{n},
• letting (W′_i)_{i=0}^{n} denote the weight distribution of C + u, (W′_i)_{i=0}^{n} = (W_i(u))_{i=0}^{n}.

Definition 1.36. The distance distribution of an (n, M) binary code C is the list of nonnegative rational numbers (A_i)_{i=0}^{n}, where A_i represents the average number of pairs of codewords in C at distance i from each other. Thus each A_i satisfies:

A_i = (1/M) Σ_{u,v∈C, d(u,v)=i} 1 = (1/M) Σ_{u∈C} Σ_{v∈C, d(u,v)=i} 1 = (1/M) Σ_{u∈C} W_i(u).

The following statements are clear for any (n, M, d) binary code with distance distribution (A_i)_{i=0}^{n}:

• A_0 = 1,
• A_i ≥ 0,
• A_i = 0 for i ∈ [d − 1],
• Σ_{i=0}^{n} A_i = M, or equivalently, Σ_{i=d}^{n} A_i = M − 1.

Definition 1.37. Let C and D be (n, M, d) binary codes. C and D are equivalent if C = σ(D) + v for some σ ∈ S_n and v ∈ {0, 1}^n. C is unique if every (n, M, d) binary code is equivalent to C.
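The distance distribution of Definition 1.36 is equally easy to compute by machine. The sketch below (ours) tallies ordered pairs of codewords at each distance; for the even weight code {000, 011, 101, 110} it reports A_0 = 1 and A_2 = 3:

#include <bitset>
#include <cstdint>
#include <iostream>
#include <vector>

// Distance distribution (Definition 1.36): A_i = (1/M) * #{(u, v) : d(u, v) = i},
// counting ordered pairs of codewords.
std::vector<double> distanceDistribution(const std::vector<std::uint32_t>& code, int n) {
    std::vector<double> a(n + 1, 0.0);
    for (std::uint32_t u : code)
        for (std::uint32_t v : code)
            a[std::bitset<32>(u ^ v).count()] += 1.0;
    for (double& x : a) x /= static_cast<double>(code.size());
    return a;
}

int main() {
    std::vector<std::uint32_t> c = {0b000, 0b011, 0b101, 0b110};
    std::vector<double> a = distanceDistribution(c, 3);
    for (int i = 0; i <= 3; ++i)
        std::cout << "A_" << i << " = " << a[i] << "\n";  // A_0 = 1, A_2 = 3
}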
It is clear that code equivalence is reflexive, symmetric, and transitive. Therefore, it is an equivalence relation on the set of all binary codes. The notion of equivalent codes will be very important throughout this thesis. One very important property of equivalent codes is that they share the same distance distribution.

Lemma 1.38. Let C and D be equivalent (n, M, d) binary codes. Then C and D have the same distance distributions.

Proof. Let σ ∈ S_n and w ∈ {0, 1}^n satisfy D = σ(C) + w. Moreover, let (A_i^C)_{i=0}^{n} and (A_i^D)_{i=0}^{n} denote the distance distributions of C and D, respectively. For any i ∈ [n], we have:

A_i^C = (1/M) Σ_{u,v∈C, d(u,v)=i} 1
      = (1/M) Σ_{u,v∈C, d(σ(u),σ(v))=i} 1
      = (1/M) Σ_{u,v∈C, d(σ(u)+w,σ(v)+w)=i} 1 (by Lemma 1.9)
      = (1/M) Σ_{x,y∈D, d(x,y)=i} 1
      = A_i^D.

1.3 Optimal Binary Codes

Finding binary codes of optimal size when length and minimum distance are held fixed has become a well-known and difficult combinatorial problem. Such optimal codes allow a maximum number of possible messages to be sent efficiently, while maintaining a specified error-correcting ability.

Notation 1.39. The size of the largest possible binary code of length n and minimum distance at least d is denoted A(n, d). An (n, M, d) binary code in which M = A(n, d) is optimal.

The following trivial values of A(n, d) can easily be verified for any positive integers n and d:

• A(n, 1) = 2^n,
• A(n, 2) = 2^n/2 = 2^{n−1},
• A(n, d) = 1 if d > n.

When d is sufficiently large (2n/3 < d ≤ n), it is known that A(n, d) = 2. Proposition 1.40 proves this result.

Proposition 1.40. A(n, d) = 2 for n, d ∈ Z+ with 2n/3 < d ≤ n.

Proof. Fix n and d as specified. First, we will show the existence of an (n, 2, d) binary code. Consider any v ∈ {0, 1}^n with wt(v) = d (if d = n then v = 1^n). The code {0^n, v} is an example of an (n, 2, d) binary code.

We now show that an (n, 3, d) code cannot exist. Suppose, by way of contradiction, that it does, and is defined by C = {u, v, w}. Using the triangle inequality and Lemma 1.15, we have:

d(u, v) ≤ d(u, w + 1^n) + d(v, w + 1^n)
        = wt(u + w + 1^n) + wt(v + w + 1^n)
        = d(u + w, 1^n) + d(v + w, 1^n)
        = wt(u + w) + wt(1^n) − 2 wt((u + w) ∩ 1^n) + wt(v + w) + wt(1^n) − 2 wt((v + w) ∩ 1^n)
        = wt(u + w) + wt(1^n) − 2 wt(u + w) + wt(v + w) + wt(1^n) − 2 wt(v + w)
        = 2n − d(u, w) − d(v, w)
        < 2n − 2n/3 − 2n/3
        = 2n/3,

a contradiction to the minimum distance of C.

In fact, this result is tight with respect to d, as will be proven in Proposition 1.41.

Proposition 1.41. A(n, d) ≥ 3 for n, d ∈ Z+ with d ≤ 2n/3.

Proof. Fix n and d as specified, and define t = ⌈n/3⌉. We will prove this result by exhibiting an (n, 3, d) binary code. Consider C = {u_1, u_2, u_3}, where the vectors u_i are defined as follows:

u_1 = Σ_{i=1}^{t} µ_i^n,   u_2 = Σ_{i=t+1}^{2t} µ_i^n,   u_3 = Σ_{i=2t+1}^{n} µ_i^n.

For any i, j ∈ [3] with i ≠ j we have u_i ∩ u_j = 0^n, and hence by Lemma 1.15:

d(u_i, u_j) = wt(u_i) + wt(u_j).

The minimum distance of C is calculated as follows:

d(C) = min{d(u_1, u_2), d(u_1, u_3), d(u_2, u_3)}
     = min{wt(u_1) + wt(u_2), wt(u_1) + wt(u_3), wt(u_2) + wt(u_3)}
     = min{2⌈n/3⌉, ⌈n/3⌉ + (n − 2⌈n/3⌉), ⌈n/3⌉ + (n − 2⌈n/3⌉)}
     = n − ⌈n/3⌉
     = ⌊2n/3⌋ ≥ d.

Therefore C is an (n, 3, d) binary code.

Propositions 1.42 and 1.43 show how values of A(n, d) can be bounded above by A(n + 1, d) and A(n − 1, d).

Proposition 1.42. A(n, d) ≤ A(n + 1, d) for any n, d ∈ Z+.

Proof. Let C be an (n, M, d) binary code of optimal size M = A(n, d). Consider the code C 0, an (n + 1, M, d) binary code.
The existence of this code proves A(n + 1, d) ≥ A(n, d).

Proposition 1.43. A(n, d) ≤ 2A(n − 1, d) for any n, d ∈ Z+.

Proof. Let C be an (n, M, d) binary code of optimal size M = A(n, d). Consider the shortened codes C_{i,0} and C_{i,1} for any i ∈ [n]. It is clear that |C_{i,0}| ≥ M/2 or |C_{i,1}| ≥ M/2. Let D be the code formed by taking the larger of C_{i,0} and C_{i,1}. D is clearly an (n − 1, M′, d) binary code with size M′ ≥ M/2, and hence:

A(n − 1, d) ≥ A(n, d)/2.

The search for the values of A(n, d) for less trivial n and d is a classic combinatorial problem. The result of Theorem 1.44 will show that we need only find A(n, d) for even d.

Theorem 1.44. A(n, d) = A(n + 1, d + 1) for n, d ∈ Z+ with d ≤ n and d odd.

Proof. We will prove equality holds by first proving A(n, d) ≤ A(n + 1, d + 1), then by proving A(n, d) ≥ A(n + 1, d + 1).

Let C be an (n, M, d) binary code of optimal size M = A(n, d). To prove A(n, d) ≤ A(n + 1, d + 1), we will show the existence of an (n + 1, M, d + 1) binary code. Let C p be defined as follows:

C p = {u p_u | u ∈ C}.

It is clear that C p is an (n + 1, M, d) binary code. However, C p is also an even weight code, and therefore d(C p) is even. C p is hence an (n + 1, M, d + 1) binary code, which completes the first half of the proof.

Let D be an (n + 1, M, d + 1) binary code of optimal size M = A(n + 1, d + 1). To prove A(n, d) ≥ A(n + 1, d + 1), we will show the existence of an (n, M, d) binary code. Since d is odd, d + 1 ≥ 2, and hence the punctured code D_(i) for any i ∈ [n + 1] is an (n, M) binary code. In addition, since only one coordinate has been punctured, d(D_(i)) ≥ d. Therefore D_(i) is an (n, M, d) binary code for any i ∈ [n + 1].

A table of values of A(n, d) for n ≤ 28 and d ≤ 14 (for even d) can be found in [2, Table I]. The listed entries of A(n, 4) have been included in Appendix A (included as A(n − 1, 3)). When n ≥ 17, many of these values are currently unknown. That so few values are actually known is a testament to the difficulty of this problem. However, various bounds have been developed. Lower bounds most often arise from the construction of a specific binary code attaining the given parameters, and upper bounds can be proven through a number of different methods, two of which we will show in the following sections.

Furthermore, many optimal binary codes are unique, that is, every binary code attaining the given parameters is equivalent. For example, consider the (16, 256, 6), (15, 256, 5), (15, 128, 6), (14, 128, 5), (14, 64, 6), (13, 64, 5), (13, 32, 6), and (12, 32, 5) binary codes. All are seen to be optimal in [2, Table I]; moreover, all are unique with the exception of the (12, 32, 5) code [13, pp. 74-75].

Östergård, Baicheva, and Kolev developed an algorithm in [18] that can determine the number of nonequivalent binary codes for certain code parameters. The numbers of nonequivalent (n, M, 3) and (n, M, 4) binary codes were then calculated for various values of n and M, as seen in [18, Table I].
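For very small parameters, the values of A(n, d) discussed above can be confirmed by exhaustive search. The backtracking sketch below is our own naive illustration (exponential, feasible only for tiny n) and is not one of the computational methods used later in this thesis:

#include <algorithm>
#include <bitset>
#include <cstdint>
#include <iostream>
#include <vector>

// Naive computation of A(n, d): grow sets of length-n binary vectors with
// pairwise distance >= d, recording the largest size found.
int best = 0;

void grow(int n, int d, std::uint32_t next, std::vector<std::uint32_t>& code) {
    best = std::max(best, static_cast<int>(code.size()));
    for (std::uint32_t v = next; v < (1u << n); ++v) {
        bool ok = true;
        for (std::uint32_t u : code)
            if (static_cast<int>(std::bitset<32>(u ^ v).count()) < d) { ok = false; break; }
        if (ok) {
            code.push_back(v);
            grow(n, d, v + 1, code);
            code.pop_back();
        }
    }
}

int main() {
    std::vector<std::uint32_t> code;
    grow(5, 3, 0, code);
    std::cout << best << "\n";  // prints 4, agreeing with A(5, 3) = 4
}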
In addition to bounds on the size of binary codes, another common problem in coding theory is determining the size of the largest possible constant weight binary code attaining given parameters.

Notation 1.45. The size of the largest possible constant weight binary code of length n, minimum distance at least d, and codeword weight w, for w ≤ n, is denoted A(n, d, w). An (n, M, d, w) constant weight binary code in which M = A(n, d, w) is optimal.

The following properties of A(n, d, w) can be easily verified for any positive integers n and d, and nonnegative integer w ≤ n:

• A(n, 2, w) = (n choose w),
• A(n, d, w) = 1 if d > n,
• A(n, d, w) = A(n, d + 1, w) for d odd,
• A(n, d, w) = A(n, d, n − w).

The final statement is proven to hold by adding 1^n to either optimal code. Due to these properties, the search for A(n, d, w) can be restricted to even d and w ≤ n/2. We also make use of the results of Propositions 1.46 and 1.47 (see [13, Ch. 17, Theorem 1]):

Proposition 1.46. A(n, d, w) = 1 for n, d ∈ Z+ and w ∈ Z+ ∪ {0} satisfying 2w < d ≤ n.

Proof. The existence of a code meeting this bound is trivial (as it contains a single codeword). Thus we need only prove A(n, d, w) ≤ 1. Assume, by way of contradiction, that there are two vectors u, v ∈ {0, 1}^n satisfying wt(u) = wt(v) = w and d(u, v) ≥ d. Then, by the triangle inequality:

d(u, v) ≤ d(u, 0^n) + d(0^n, v)
        = wt(u + 0^n) + wt(v + 0^n)
        = wt(u) + wt(v)
        < d/2 + d/2
        = d,

a contradiction to the minimum distance d, and hence A(n, d, w) = 1.

Proposition 1.47. A(n, 2w, w) = ⌊n/w⌋ for n, w ∈ Z+.

Proof. To show A(n, 2w, w) ≥ ⌊n/w⌋, we need only show the existence of an (n, t, 2w, w) binary code where t = ⌊n/w⌋. Consider C = {v_1, v_2, . . . , v_t}, where v_i ∈ {0, 1}^n is the vector with a 1 in coordinates (i − 1)w + 1, (i − 1)w + 2, . . . , (i − 1)w + w. It is clear that these vectors all exist as, for i ≤ t:

(i − 1)w + w ≤ (⌊n/w⌋ − 1)w + w ≤ n − w + w = n.

Moreover, d(v_i, v_j) = 2w for i ≠ j. Therefore C is an (n, t, 2w, w) binary code.

To show A(n, 2w, w) ≤ ⌊n/w⌋, assume by way of contradiction there exists an (n, t + 1, 2w, w) binary code D, with t defined as above. Let D = {v_1, v_2, . . . , v_{t+1}}. For any i, j ∈ [t + 1], i ≠ j, by Lemma 1.15 we have:

wt(v_i ∩ v_j) = (wt(v_i) + wt(v_j) − d(v_i, v_j))/2 ≤ (w + w − 2w)/2 = 0.

Thus wt(v_i ∩ v_j) = 0, and hence Σ_{i=1}^{t+1} wt(v_i) ≤ n. However:

Σ_{i=1}^{t+1} wt(v_i) = Σ_{i=1}^{t+1} w = (⌊n/w⌋ + 1)w > n,

a contradiction. Therefore A(n, 2w, w) = ⌊n/w⌋.

Agrell, Vardy, and Zeger improved bounds on A(n, d, w) for many values in [1]. This article includes tables of values of, and bounds on, A(n, d, w) for even values of d between 4 and 14. Recall that for odd values of d, A(n, d, w) = A(n, d + 1, w). Later in this thesis, we will refer back to these tables.

The following theorem and corollary show how the weight and distance distributions of any binary code can be bounded above by the sizes of appropriate optimal constant weight codes.

Theorem 1.48. Let C be an (n, M, d) binary code and fix u ∈ {0, 1}^n. Moreover, let (W_i(u))_{i=0}^{n} be the weight distribution of C with respect to u. Then W_i(u) ≤ A(n, d, i) for i = 0, 1, . . . , n.

Proof. Assume, by way of contradiction, that W_i(u) > A(n, d, i) for some i. Let k = W_i(u). Next, let {v_1, v_2, . . . , v_k} ⊆ C be the set of codewords at distance i from u. Consider the set of vectors D = {w_1, w_2, . . . , w_k} defined by w_j = v_j + u for j ∈ [k]. We claim that D is an (n, k, d, i) constant weight binary code with code size k > A(n, d, i), a contradiction.

To show that w_j has weight i for all j ∈ [k], take arbitrary j ∈ [k]. We have:

wt(w_j) = wt(v_j + u) = d(v_j, u) = i.

Thus D is constant weight with weight i. Finally, for j_1, j_2 ∈ [k], j_1 ≠ j_2:

d(w_{j_1}, w_{j_2}) = d(v_{j_1} + u, v_{j_2} + u) = d(v_{j_1}, v_{j_2}) ≥ d (by Lemma 1.9).
Corollary 1.49. Let C be an (n, M, d) binary code, and let (A_i)_{i=0}^{n} be the distance distribution of C. Then A_i ≤ A(n, d, i) for all i ∈ {0, 1, . . . , n}.

Proof. For any i ∈ {0, 1, . . . , n} we have:

A_i = (1/M) Σ_{u∈C} W_i(u)
    ≤ (1/M) Σ_{u∈C} A(n, d, i) (by Theorem 1.48)
    = (1/M)(M) A(n, d, i)
    = A(n, d, i).

We will now introduce two useful upper bounds on A(n, d), the sphere-packing bound and Delsarte's linear programming bound.

1.3.1 The Sphere-Packing Bound

The sphere-packing bound is a well-known bound on optimal code size, and can be useful when d is small. While the bound can be calculated for codes over any field, we will offer the binary simplification. This bound will then give rise to the notion of perfect codes. The proof of the sphere-packing bound used here follows that in [20, Theorem 4.3].

Theorem 1.50 (Sphere-Packing Bound). For any n, d ∈ Z+ with d ≤ n, let t = ⌊(d − 1)/2⌋. Then:

A(n, d) ≤ ⌊ 2^n / Σ_{i=0}^{t} (n choose i) ⌋.

Proof. Let C be any (n, M, d) binary code, with optimal code size M = A(n, d). Proposition 1.30 implies that the spheres S_t(u), for u ∈ C, are pairwise disjoint. Thus we have the following:

2^n ≥ Σ_{u∈C} |S_t(u)|
    = Σ_{u∈C} |{v ∈ {0, 1}^n | d(u, v) ≤ t}|
    = Σ_{u∈C} Σ_{i=0}^{t} (n choose i),

since for a fixed i, there are (n choose i) ways to choose i of the n coordinates of u to be different. Thus we have:

2^n ≥ A(n, d) Σ_{i=0}^{t} (n choose i).

Since A(n, d) must be an integer, we arrive at the desired result.

Definition 1.51. An (n, M, d) binary code C is called perfect if it attains the sphere-packing bound.

One well-known class of codes is the class of (binary) Hamming codes, as defined in [13, p. 25]. The Hamming code is defined for any n ∈ Z+ satisfying n ≥ 2, and is a (2^n − 1, 2^{2^n − n − 1}, 3) binary code. Note that as the Hamming code is a linear code (not defined in this thesis), its second parameter is usually expressed in terms of code dimension, rather than code size. The dimension of a linear code of size M over a field F is defined to be log_{|F|}(M). The Hamming code attains the sphere-packing bound, and therefore gives rise to the following result.

Theorem 1.52. A(2^n − 1, 3) = 2^{2^n − n − 1} for any n ∈ Z+.

Proof. First, note that A(2^1 − 1, 3) = A(1, 3) = 1 = 2^{2^1 − 1 − 1} is trivially satisfied. We now fix n ∈ Z+ with n ≥ 2. The sphere-packing bound (Theorem 1.50) implies:

A(2^n − 1, 3) ≤ ⌊ 2^{2^n − 1} / Σ_{i=0}^{1} (2^n − 1 choose i) ⌋
             = 2^{2^n − 1} / (1 + 2^n − 1)
             = 2^{2^n − n − 1}.

However, the existence of the Hamming code implies that A(2^n − 1, 3) = 2^{2^n − n − 1}.

1.3.2 Delsarte's Linear Programming Bound

Another useful upper bound on A(n, d) is Delsarte's linear programming bound, developed by Delsarte in [8] and [7]. Recall that, for any (n, M, d) binary code with distance distribution (A_i)_{i=0}^{n}:

Σ_{i=d}^{n} A_i = M − 1.

These A_i will thus be treated as variables in Delsarte's linear program. The result of Theorem 1.55 will produce the constraints of this linear program. We closely follow the notation used in [5].

Definition 1.53. The Krawtchouk polynomial K_k(i), for i, k ∈ {0, 1, . . . , n}, is defined to be:

K_k(i) = Σ_{j=0}^{k} (−1)^j (i choose j)(n−i choose k−j).
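Definition 1.53 can be evaluated directly from the defining sum. The sketch below (ours) does exactly this, and reproduces two of the coefficients that will appear in the proof of Theorem 1.60:

#include <iostream>

// Binomial coefficient; returns 0 when k < 0 or k > n.
long long binom(int n, int k) {
    if (k < 0 || k > n) return 0;
    long long b = 1;
    for (int j = 1; j <= k; ++j) b = b * (n - j + 1) / j;
    return b;
}

// Krawtchouk polynomial (Definition 1.53):
// K_k(i) = sum_{j=0}^{k} (-1)^j C(i, j) C(n - i, k - j).
long long krawtchouk(int n, int k, int i) {
    long long sum = 0;
    for (int j = 0; j <= k; ++j)
        sum += ((j % 2 == 0) ? 1 : -1) * binom(i, j) * binom(n - i, k - j);
    return sum;
}

int main() {
    std::cout << krawtchouk(9, 2, 4) << " "    // prints -4
              << krawtchouk(9, 2, 8) << "\n";  // prints 20
}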
The Krawtchouk polynomial can be rewritten as we will see in Theorem 1.54. The proof given is similar to that used in [14, Lemma 3.5.5].

Theorem 1.54. For any w ∈ {0, 1}^n satisfying wt(w) = i, the Krawtchouk polynomial, for k ∈ {0, 1, . . . , n}, satisfies:

K_k(i) = Σ_{x∈{0,1}^n, wt(x)=k} (−1)^{<w,x>}.

Proof. Take arbitrary w ∈ {0, 1}^n with wt(w) = i. Define K ⊆ {0, 1}^n to be:

K = {x ∈ {0, 1}^n | wt(x) = k}.

Next, define K_j ⊆ K for j ∈ {0, 1, . . . , max{i, k}} to be:

K_j = {x ∈ K | wt(w ∩ x) = j}.

It is clear that the K_j form a partition of K. Since <w, x> and wt(w ∩ x) have the same parity, this implies:

Σ_{x∈{0,1}^n, wt(x)=k} (−1)^{<w,x>} = Σ_{x∈{0,1}^n, wt(x)=k} (−1)^{wt(w∩x)}
  = |K_0| − |K_1| + |K_2| − . . .
  = Σ_{j=0}^{max{i,k}} (−1)^j |K_j|.    (1.1)

For any j ∈ {0, 1, . . . , max{i, k}}, we determine |K_j|. Fix j and consider x ∈ K_j. Of the i coordinates where w is 1, we may choose the j coordinates where both w and x are 1 in (i choose j) ways. Then, of the n − i coordinates where w is 0, we may choose the remaining k − j coordinates where x is 1 in (n−i choose k−j) ways. Hence |K_j| = (i choose j)(n−i choose k−j), and from equation (1.1):

Σ_{x∈{0,1}^n, wt(x)=k} (−1)^{<w,x>} = Σ_{j=0}^{max{i,k}} (−1)^j |K_j|
  = Σ_{j=0}^{max{i,k}} (−1)^j (i choose j)(n−i choose k−j)
  = Σ_{j=0}^{k} (−1)^j (i choose j)(n−i choose k−j)
  = K_k(i).

The constraints of Delsarte's linear program are proven to hold with a proof similar to that used in [5, Theorem 3], but rearranged and expressed in the same manner as in [2].

Theorem 1.55. Let C be an (n, M, d) binary code with distance distribution (A_i)_{i=0}^{n}. Then, for k ∈ {0, 1, . . . , n}:

Σ_{i=d}^{n} A_i K_k(i) ≥ −(n choose k).

Proof. Since A_0 = 1 and A_i = 0 for i ∈ [d − 1], we have:

Σ_{i=d}^{n} A_i K_k(i) = Σ_{i=0}^{n} A_i K_k(i) − A_0 K_k(0).

For u, v ∈ C with d(u, v) = i, we have wt(u + v) = d(u, v) = i, so Theorem 1.54 applies with w = u + v; applying it with w = 0^n gives K_k(0) = Σ_{x, wt(x)=k} (−1)^{<0^n,x>} = (n choose k). Therefore:

Σ_{i=d}^{n} A_i K_k(i) = (1/M) Σ_{u,v∈C} Σ_{x∈{0,1}^n, wt(x)=k} (−1)^{<u+v,x>} − (n choose k)
  = (1/M) Σ_{x∈{0,1}^n, wt(x)=k} Σ_{u,v∈C} (−1)^{<u,x>} (−1)^{<v,x>} − (n choose k)
  = (1/M) Σ_{x∈{0,1}^n, wt(x)=k} ( Σ_{u∈C} (−1)^{<u,x>} )^2 − (n choose k)    (1.2)
  ≥ −(n choose k).

In the case where M is odd, these constraints can be strengthened. This will be seen in the following corollary, a result first shown in [5, Theorem 5].

Corollary 1.56. Let C be an (n, M, d) code with M odd. Then, for k ∈ {0, 1, . . . , n}:

Σ_{i=d}^{n} A_i K_k(i) ≥ ((1 − M)/M)(n choose k).

Proof. Recall from equation (1.2) in the proof of Theorem 1.55:

Σ_{i=d}^{n} A_i K_k(i) ≥ (1/M) Σ_{x∈{0,1}^n, wt(x)=k} ( Σ_{u∈C} (−1)^{<u,x>} )^2 − (n choose k).

If M is odd, Σ_{u∈C} (−1)^{<u,x>} is odd and thus nonzero, so its square is at least 1. Hence we reach the following conclusion:

Σ_{i=d}^{n} A_i K_k(i) ≥ (1/M) Σ_{x∈{0,1}^n, wt(x)=k} 1 − (n choose k)
  = (1/M)(n choose k) − (n choose k)
  = ((1 − M)/M)(n choose k).

Delsarte's linear programming bound (Version 1) on A(n, d) is then stated in Theorem 1.57.

Theorem 1.57 (Delsarte's Linear Programming Bound (Version 1)). A(n, d) ≤ 1 + ⌊M*⌋, where M* is the optimal objective value of the following linear program:

Maximize Σ_{i=d}^{n} A_i subject to:
  Σ_{i=d}^{n} A_i K_k(i) ≥ −(n choose k)   for k ∈ {0, 1, . . . , ⌊n/2⌋},
  A_i ≥ 0   for i ∈ {d, d + 1, . . . , n}.

However, this linear program can indeed be further simplified. The following theorem will show how all odd-indexed variables of the linear program can be eliminated.

Theorem 1.58. Let d ∈ Z+ be even. Any (n, M, d) binary code can be transformed into an (n, M, d) even weight binary code.

Proof. Let C be an (n, M, d) binary code, and take any i ∈ [n]. Since d is even, we have d ≥ 2. Thus we consider the (n − 1, M, d − 1) code C_(i). We now (as in the first half of the proof of Theorem 1.44) create the (n, M, d) even weight code C_(i) p that results from concatenating p_w with every codeword w of C_(i). Since d − 1 is odd, C_(i) p is an (n, M, d) even weight binary code.

We now have, by Theorem 1.44, that the search for the values of A(n, d) can be restricted to even d.
Moreover, Theorem 1.58 implies that the search with even d can be restricted to codes in which A_i = 0 for odd i. Therefore, the first version of Delsarte's linear programming bound given in Theorem 1.57 can be restricted as in Theorem 1.59.

Theorem 1.59 (Delsarte's Linear Programming Bound (Version 2)). Let d ∈ Z+ be even. A(n, d) ≤ 1 + ⌊M*⌋, where M* is the optimal objective value of the following linear program:

Maximize Σ_{i=d/2}^{⌊n/2⌋} A_{2i} subject to:
  Σ_{i=d/2}^{⌊n/2⌋} A_{2i} K_k(2i) ≥ −(n choose k)   for k ∈ {0, 1, . . . , ⌊n/2⌋},
  A_{2i} ≥ 0   for i ∈ {d/2, d/2 + 1, . . . , ⌊n/2⌋}.

Additional bounds can also be imposed, which can help lower the bound. For example, if A(n, d, w) is known for some (or all) w, the result of Corollary 1.49 can be imposed as a constraint (or constraints). This will be seen in Theorem 1.60 in the following section.

1.4 Applications of Delsarte's Linear Programming Bound

Delsarte's linear programming bound has been crucial in determining many new bounds for A(n, d). One result that will be used often throughout this thesis is A(9, 4) = 20 (and hence, by Theorem 1.44, A(8, 3) = 20). This result was first proved in [5, Theorem 6], and our proof will be analogous.

Theorem 1.60. A(9, 4) = 20.

Proof. To prove that A(9, 4) ≥ 20, we will exhibit a (9, 20, 4) binary code. Let C be the code in which each codeword is a column of the following matrix:

0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 1 1 1 1 1
0 0 1 1 1 0 0 0 1 1 0 0 0 1 1 0 0 1 1 1
0 1 0 0 1 0 1 1 0 1 0 1 1 0 1 0 0 0 1 1
1 1 0 1 0 1 0 0 0 1 0 0 1 1 0 0 1 1 0 1
1 1 1 0 0 0 0 1 1 1 0 1 0 0 1 0 1 1 0 0
1 0 1 0 1 1 0 1 0 1 0 1 1 1 0 1 0 1 1 0
1 0 0 1 1 0 0 1 1 0 0 0 1 0 1 1 0 1 0 1
0 1 1 1 0 1 0 1 0 0 1 0 1 0 1 0 0 1 1 0

The vectors in C can also be represented by their integer representations as follows:

C = {30, 57, 77, 83, 102, 149, 160, 175, 202, 252, 257, 300, 311, 340, 363, 390, 408, 479, 485, 498}.

It is easily verified that C is a (9, 20, 4) binary code.

We will now apply Version 2 of Delsarte's linear programming bound given in Theorem 1.59. We will also use the result of Proposition 1.46 (with Corollary 1.49) to impose the additional constraint A_8 ≤ 1. We see that A(9, 4) ≤ 1 + ⌊M*⌋, where M* is the optimal objective value of the following linear program:

Maximize A_4 + A_6 + A_8 subject to:
  A_4 + A_6 + A_8 ≥ −1,
  A_4 − 3A_6 − 7A_8 ≥ −9,
  −4A_4 + 20A_8 ≥ −36,
  −4A_4 + 8A_6 − 28A_8 ≥ −84,
  6A_4 − 6A_6 + 14A_8 ≥ −126,
  A_8 ≤ 1,
  A_4, A_6, A_8 ≥ 0.

Upon finding the optimal objective value of this linear program, we obtain:

A(9, 4) ≤ 1 + ⌊61/3⌋ = 21.

We now claim that a (9, 21, 4) binary code cannot exist. Assume, by way of contradiction, that C is a (9, 21, 4) binary code with distance distribution (A_i)_{i=0}^{9}. Since 21 is odd, the result of Corollary 1.56 allows us to transform our first five constraints to:

A_4 K_k(4) + A_6 K_k(6) + A_8 K_k(8) ≥ ((1 − 21)/21)(9 choose k)
⇔ (21/20) A_4 K_k(4) + (21/20) A_6 K_k(6) + (21/20) A_8 K_k(8) ≥ −(9 choose k).

Also, by the definition of A_i, we can similarly transform our sixth constraint. Since Σ_{u,v∈C, d(u,v)=8} 1 must be even (due to the symmetry of every pair of codewords u and v), we have:

A_8 ≤ 1 ⇔ A_8 ≤ 20/21 ⇔ (21/20) A_8 ≤ 1.

Thus, with (21/20)A_i in place of A_i, the new linear program is exactly the original one, and its optimal solution is simply 20/21 times the original optimal solution. Therefore:

A(9, 4) ≤ 1 + ⌊(20/21)(61/3)⌋ = 1 + ⌊1220/63⌋ = 20.
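The statement that C "is easily verified" to be a (9, 20, 4) binary code can be checked mechanically. The following sketch of ours computes the minimum pairwise distance over the integer representations listed above:

#include <algorithm>
#include <bitset>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    // The 20 codewords of the (9, 20, 4) code from the proof of Theorem 1.60.
    std::vector<std::uint32_t> c = {30, 57, 77, 83, 102, 149, 160, 175, 202, 252,
                                    257, 300, 311, 340, 363, 390, 408, 479, 485, 498};
    int minDist = 9;
    for (std::size_t i = 0; i < c.size(); ++i)
        for (std::size_t j = i + 1; j < c.size(); ++j)
            minDist = std::min(minDist,
                               static_cast<int>(std::bitset<32>(c[i] ^ c[j]).count()));
    std::cout << "minimum distance = " << minDist << "\n";  // prints 4
}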
The optimal (9, 20, 4) code is not unique, as 2 nonequivalent (9, 20, 4) codes were exhibited in [13, p. 57]. In fact, there exist exactly 3, as seen in [18, Table I].

Recall the result of Theorem 1.52, A(2^n − 1, 3) = 2^{2^n − n − 1} for n ∈ Z+. This result can be generalized as in Theorem 1.61. This will be stated without proof, but is the main result proved by Best and Brouwer in [4]. The 0, 1, 2, and 3 times shortened Hamming codes are the optimal codes that attain this value, and it was proven to hold as an upper bound through the use of Delsarte's linear programming bound.

Theorem 1.61. Let i, n ∈ Z+ satisfy n ≥ 3 and i ∈ [4]. Then:

A(2^n − i, 3) = 2^{2^n − n − i}.

The result of Theorem 1.61 allows the values of A(n, 3) to be calculated for 28 ≤ n ≤ 31, which are included in Appendix A.

Chapter 2

Graph Theory

In this chapter we will review the parts of graph theory needed throughout this thesis. The definitions and notation closely follow those used in [22]. We will also introduce the concept of vertex coloring, and introduce three algorithms that solve various vertex coloring problems.

2.1 Elementary Concepts

Definition 2.1. A graph G is defined by a vertex set and an edge set, where elements of the edge set are unordered pairs of vertices.

Notation 2.2. For any graph G, V(G) denotes the vertex set of G, and n(G) = |V(G)|, the number of vertices of G. In addition, E(G) denotes the edge set of G, and e(G) = |E(G)|, the number of edges of G.

Definition 2.3. The null graph is the graph whose vertex set and edge set are both empty.

Definition 2.4. Let u, v ∈ V(G) and e = {u, v} ∈ E(G). Then u and v are the endpoints of e, and u and v are incident to e. Equivalently, u and v are adjacent to each other. For notational convenience, we will commonly write an edge {u, v} as uv.

Definition 2.5. In a graph G, a loop is an edge e ∈ E(G) where both endpoints of e are the same vertex v ∈ V(G). Multiple edges are distinct edges e_1, e_2 ∈ E(G) with the same pair of endpoints. If G has no loops or multiple edges, then G is a simple graph.

Definition 2.6. A finite graph is a graph with both its vertex set and edge set finite.

For the remainder of this paper, we will assume all graphs are finite, simple graphs.

Definition 2.7. Consider a graph G, and let v ∈ V(G). The neighbourhood of v, denoted N_G(v), is the set of all vertices adjacent to v. That is:

N_G(v) = {u ∈ V(G) | uv ∈ E(G)}.

The degree of vertex v, denoted d_G(v), is defined to be |N_G(v)|. This is the number of edges in E(G) that are incident to v. If, for some k ∈ Z+ ∪ {0}, d_G(v) = k for every v ∈ V(G), then G is k-regular.

The following proposition allows the number of edges of a k-regular graph to be calculated directly.

Proposition 2.8. Let G be an n-vertex, k-regular graph. Then:

e(G) = nk/2.

Proof. Since summing the degrees of all vertices of a graph counts each edge twice, the following result is clear (and commonly referred to as the degree-sum formula):

2e(G) = Σ_{v∈V(G)} d_G(v).

However, since G is k-regular, we have the following:

2e(G) = Σ_{v∈V(G)} k = nk.

Definition 2.9. A graph G′ is a subgraph of a graph G (denoted G′ ⊆ G) if and only if:

V(G′) ⊆ V(G),
E(G′) ⊆ E(G).

Definition 2.10. Let G be a graph and let V′ ⊆ V(G) be a subset of the vertices of G. The subgraph of G induced by V′, denoted G[V′], is the graph satisfying the following:

V(G[V′]) = V′,
E(G[V′]) = {uv ∈ E(G) | u, v ∈ V′}.
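For the later algorithmic snippets it is convenient to fix a small adjacency-list representation of a graph (our own; vertices are 0-based here, unlike the set [n] used in the text). The example below checks the degree-sum formula from the proof of Proposition 2.8 on a 4-cycle, a 2-regular graph:

#include <iostream>
#include <vector>

// A minimal adjacency-list graph.
struct Graph {
    int n;
    std::vector<std::vector<int>> adj;
    explicit Graph(int n) : n(n), adj(n) {}
    void addEdge(int u, int v) { adj[u].push_back(v); adj[v].push_back(u); }
    int degree(int v) const { return static_cast<int>(adj[v].size()); }
};

int main() {
    Graph g(4);  // the cycle 0-1-2-3-0
    g.addEdge(0, 1); g.addEdge(1, 2); g.addEdge(2, 3); g.addEdge(3, 0);
    int degreeSum = 0;
    for (int v = 0; v < g.n; ++v) degreeSum += g.degree(v);
    std::cout << degreeSum / 2 << "\n";  // e(G) = nk/2 = (4)(2)/2 = 4
}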
Definition 2.11. The complement of a graph G, denoted Ḡ, is the graph that satisfies the following criteria:

V(Ḡ) = V(G),
E(Ḡ) = {uv | u, v ∈ V(G), u ≠ v, uv ∉ E(G)}.

Definition 2.12. A clique in a graph G is a set of vertices V′ ⊆ V(G) such that any two distinct vertices in V′ are adjacent in G. The clique number of G, denoted ω(G), is the number of vertices in a clique of largest cardinality in G.

Definition 2.13. An independent set in a graph G is a set of vertices V′ ⊆ V(G) such that any two distinct vertices in V′ are non-adjacent in G. V′ is a maximal independent set if V′ ∪ {v} is not independent for any v ∈ V(G) \ V′. The independence number of G, denoted α(G), is the number of vertices in an independent set of largest cardinality in G.

Definition 2.14. A graph G is bipartite if and only if the vertex set of G can be expressed as V(G) = V_1 ∪ V_2, where both V_1 and V_2 are independent.

Definition 2.15. The n-vertex complete graph, denoted K_n, is the graph such that any two distinct vertices in V(K_n) are adjacent.

Since every pair of distinct vertices in the complete graph is adjacent, ω(K_n) = n for all n ∈ Z+.

Definition 2.16. Let G be a graph. A walk in G is an alternating sequence of vertices and edges W = (v_0, e_1, v_1, e_2, v_2, . . . , e_n, v_n) such that e_i = v_{i−1}v_i for all i ∈ [n]. The length of W is n, the number of edges in W. W is called a v_0, v_n-walk in G, and the vertices v_0 and v_n are called the endpoints of W. The vertices v_i, for i ∈ [n − 1], are called the internal vertices of W.

Definition 2.17. A path in a graph G is a walk in G such that every edge and every vertex in the walk are distinct. A cycle in G is a walk whose endpoints are the same vertex, and in which every edge and every internal vertex are distinct.

Definition 2.18. Let u, v ∈ V(G) be two vertices in a graph G. The distance from u to v in G, denoted d_G(u, v), is the length of a shortest u, v-path in G (a path of least length). If there exists no u, v-path in G, then d_G(u, v) = ∞.

Let u, v, w ∈ V(G) for some graph G. Similarly to the Hamming distance, the graph theoretical distance satisfies the following properties:

• d_G(u, v) = 0 if and only if u = v,
• d_G(u, v) = d_G(v, u),
• d_G(u, w) ≤ d_G(u, v) + d_G(v, w) (the triangle inequality).

Definition 2.19. The kth power of G is defined for any graph G and k ∈ Z+. Denoted G^k, it is the graph that satisfies:

V(G^k) = V(G),
E(G^k) = {uv | u, v ∈ V(G), u ≠ v, d_G(u, v) ≤ k}.

2.2 Graph Isomorphisms and Automorphisms

Definition 2.20. An isomorphism from a graph G to a graph H is a bijection Φ: V(G) → V(H) that satisfies:

uv ∈ E(G) ⇔ Φ(u)Φ(v) ∈ E(H).

If there exists an isomorphism from G to H, then G is isomorphic to H. This is denoted G ≅ H.

If there is an isomorphism Φ from G to H, then the function Φ^{−1} is an isomorphism from H to G. In addition, ≅ is both reflexive (through the identity function) and transitive (through functional composition). Therefore, ≅ is an equivalence relation on the set of all graphs. It is clear that, in order for G ≅ H, both of the following must hold:

n(G) = n(H),
e(G) = e(H).

Definition 2.21. For any graphs G and H, a copy of H in G is a subgraph G′ ⊆ G such that H ≅ G′.

Definition 2.22. An automorphism of a graph G is an isomorphism from G onto itself. The automorphism group of a graph G is the set of all automorphisms of G, with functional composition as the group operation. It is denoted Aut(G).

Clearly, any permutation of the vertices of the complete graph K_n is an automorphism. Therefore |Aut(K_n)| = |S_n| = n!.
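Definition 2.19 can be realized with a breadth-first search from each vertex, truncated at depth k. The sketch below (ours) builds G^k and, as a preview of Chapter 3, forms the square of the 3-dimensional hypercube:

#include <iostream>
#include <queue>
#include <vector>

// Build the kth power of a simple graph (Definition 2.19) given by adjacency
// lists: uv is an edge of G^k iff u != v and d_G(u, v) <= k.
std::vector<std::vector<int>> power(const std::vector<std::vector<int>>& adj, int k) {
    int n = static_cast<int>(adj.size());
    std::vector<std::vector<int>> result(n);
    for (int s = 0; s < n; ++s) {
        std::vector<int> dist(n, -1);  // BFS distances from s
        std::queue<int> q;
        dist[s] = 0;
        q.push(s);
        while (!q.empty()) {
            int u = q.front(); q.pop();
            if (dist[u] == k) continue;  // vertices beyond depth k are irrelevant
            for (int v : adj[u])
                if (dist[v] == -1) { dist[v] = dist[u] + 1; q.push(v); }
        }
        for (int v = 0; v < n; ++v)
            if (v != s && dist[v] != -1) result[s].push_back(v);
    }
    return result;
}

int main() {
    // Q_3: vertices 0..7, adjacent iff their binary vectors differ in one coordinate.
    std::vector<std::vector<int>> q3(8);
    for (int u = 0; u < 8; ++u)
        for (int b = 0; b < 3; ++b) q3[u].push_back(u ^ (1 << b));
    std::vector<std::vector<int>> q3sq = power(q3, 2);  // the square of the cube
    std::cout << q3sq[0].size() << "\n";  // each vertex of Q_3^2 has degree 6
}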
Definition 2.23. A graph G is vertex transitive if, for all u, v ∈ V(G), there exists an automorphism Φ: V(G) → V(G) such that Φ(u) = v.

The notion of vertex transitivity will be very important later in this thesis. If a graph is vertex transitive, then it "looks the same" from every vertex. What this means is that many times we can prove a result for every vertex in the graph by simply proving the result for a single, arbitrary vertex.

Definition 2.24. Let V′ ⊆ V(G) be a set of vertices of a graph G, and Φ: V(G) → V(G) be a function on the vertices of G. Then the image of V′ under Φ, denoted Φ(V′), is defined to be:

Φ(V′) = {Φ(v) | v ∈ V′}.

We will now prove several results about automorphisms that will be needed later in this thesis.

Proposition 2.25. Let V′ ⊆ V(G) be a set of vertices of a graph G, and Φ: V(G) → V(G) be an automorphism of G. Then V′ is independent if and only if Φ(V′) is independent. Similarly, V′ is a clique if and only if Φ(V′) is a clique.

Proof. For any u, v ∈ V′ we have uv ∉ E(G) ⇔ Φ(u)Φ(v) ∉ E(G), since Φ is an automorphism. As Φ maps V′ bijectively onto Φ(V′), it follows that V′ is independent if and only if Φ(V′) is independent. The result, as applied to a clique rather than an independent set, is proven in an analogous manner.

Theorem 2.26. For any graph G, Aut(G) = Aut(Ḡ).

Proof. We will show that Φ: V(G) → V(G) is an automorphism of G if and only if Φ is an automorphism of Ḡ.

We first assume that Φ is an automorphism of G, and take arbitrary distinct u, v ∈ V(G). We have:

uv ∈ E(Ḡ) ⇔ uv ∉ E(G) ⇔ Φ(u)Φ(v) ∉ E(G) ⇔ Φ(u)Φ(v) ∈ E(Ḡ),

and thus Φ is an automorphism of Ḡ.

We now assume that Φ is an automorphism of Ḡ, and take arbitrary distinct u, v ∈ V(G). We have:

uv ∈ E(G) ⇔ uv ∉ E(Ḡ) ⇔ Φ(u)Φ(v) ∉ E(Ḡ) ⇔ Φ(u)Φ(v) ∈ E(G).

Therefore Φ is an automorphism of G.

Theorem 2.27. Let Φ: V(G_1) → V(G_2) be an isomorphism between graphs G_1 and G_2. Then, for all u, v ∈ V(G_1), d_{G_1}(u, v) = d_{G_2}(Φ(u), Φ(v)).

Proof. This theorem will be proven by induction on d_{G_1}(u, v).

• Base Case: For all u, v ∈ V(G_1) such that d_{G_1}(u, v) = 1, we prove that d_{G_2}(Φ(u), Φ(v)) = 1. Take arbitrary u, v ∈ V(G_1) such that d_{G_1}(u, v) = 1. Thus uv ∈ E(G_1), and this case is trivial by the definition of an isomorphism.

• Inductive Hypothesis: Assume, for some k ∈ Z+, that if u, v ∈ V(G_1) satisfy d_{G_1}(u, v) ≤ k, then d_{G_2}(Φ(u), Φ(v)) = d_{G_1}(u, v). We now prove the result for k + 1. Take arbitrary u, v ∈ V(G_1) such that d_{G_1}(u, v) = k + 1 (if d_{G_1}(u, v) < k + 1, then the statement is true by the inductive hypothesis).

First, to show that d_{G_2}(Φ(u), Φ(v)) ≤ k + 1, we will show the existence of a Φ(u), Φ(v)-path in G_2 of length at most k + 1. Let P be a shortest u, v-path in G_1, having length k + 1, and let w be the vertex on P adjacent to v. We have that d_{G_1}(u, w) = k; otherwise there would be a shorter u, v-path. By the inductive hypothesis, d_{G_2}(Φ(u), Φ(w)) = k. Also, since Φ is an isomorphism, Φ(w)Φ(v) ∈ E(G_2). Therefore, there is a Φ(u), Φ(v)-path of length at most k + 1, and thus d_{G_2}(Φ(u), Φ(v)) ≤ k + 1.

Next, to show that d_{G_2}(Φ(u), Φ(v)) ≥ k + 1, we will show that there exists no Φ(u), Φ(v)-path in G_2 of length less than k + 1. Assume, by way of contradiction, that there exists some Φ(u), Φ(v)-path P′ of length at most k. Let w′ be the vertex on P′ adjacent to Φ(v). Thus we have d_{G_2}(Φ(u), w′) ≤ k − 1, and therefore:
d_{G_1}(u, v) ≤ d_{G_1}(u, Φ^{−1}(w′)) + d_{G_1}(Φ^{−1}(w′), v) (by the triangle inequality)
            = d_{G_2}(Φ(u), w′) + d_{G_2}(w′, Φ(v)) (by the inductive hypothesis)
            ≤ (k − 1) + 1
            = k,

a contradiction to d_{G_1}(u, v) = k + 1. Therefore d_{G_2}(Φ(u), Φ(v)) = k + 1, and the induction proof is complete.

Note that the previous result also holds when G_1 = G_2 and Φ is an automorphism of this graph. This specific case of Theorem 2.27 leads to the following corollary.

Corollary 2.28. Let Φ: V(G) → V(G) be an automorphism of a graph G. Then, for any v ∈ V(G), Φ(N_G(v)) = N_G(Φ(v)).

Proof. Recall that:

N_G(v) = {u ∈ V(G) | d_G(u, v) = 1},

and therefore the result is immediate from Theorem 2.27.

Theorem 2.29. For any graph G, Aut(G) ⊆ Aut(G^k) for all k ∈ Z+.

Proof. Let Φ: V(G) → V(G) be an automorphism of a graph G. Consider the graph G^k for arbitrary k ∈ Z+, and take arbitrary u, v ∈ V(G^k). The following then holds:

uv ∈ E(G^k) ⇔ d_G(u, v) ≤ k ⇔ d_G(Φ(u), Φ(v)) ≤ k (by Theorem 2.27) ⇔ Φ(u)Φ(v) ∈ E(G^k),

and thus Φ is an automorphism of G^k.

2.3 Vertex Coloring

The field of graph coloring is one that is very interesting and heavily studied in discrete mathematics, and it contains many unsolved problems. In addition, many applications, such as timetabling, scheduling, register allocation, and frequency assignment problems, can be formulated as graph coloring problems. Vertex coloring problems are the most common type of graph coloring problem. For definitions of other graph coloring problems, such as edge and total colorings, we refer the reader to [22]. For the remainder of this thesis, it will be assumed that graph coloring refers specifically to vertex coloring.

A coloring of a graph can be thought of as an assignment of a color to each vertex of the graph so that no two adjacent vertices are given the same color. Formally, this can be represented as in the following definition.

Definition 2.30. A k-coloring of a graph G, for any k ∈ Z+, is a function f: V(G) → [k] such that:

uv ∈ E(G) ⇒ f(u) ≠ f(v).

G is k-colorable if and only if a k-coloring of G exists.

Definition 2.31. Let G be a graph with k-coloring f. The ith color class of G (with respect to f) is the set of vertices {v ∈ V(G) | f(v) = i} for i ∈ [k].

It is clear that any color class of a coloring of G is an independent set of vertices.

Proposition 2.32. A graph G is 2-colorable if and only if it is bipartite.

Proof. Assume G is 2-colorable, and let V_1 and V_2 be the color classes of a 2-coloring of G. Then V(G) = V_1 ∪ V_2, the union of two independent sets of vertices. Thus G is bipartite.

Next, assume G is bipartite with V(G) = V_1 ∪ V_2, with V_1 and V_2 independent sets of vertices. Next, define f: V(G) → [2] as follows:

f(v) = 1 if v ∈ V_1, and f(v) = 2 if v ∈ V_2.

Clearly the function f is a 2-coloring of G.

Let G be an n-vertex graph. Since a set consisting of a single vertex is trivially independent, it is clear that G has an n-coloring, the coloring where every vertex of G forms a distinct color class. However, in general, we can find k-colorings of G with k < n.

Definition 2.33. The chromatic number of a graph G, denoted χ(G), is the least integer k such that G is k-colorable.

Theorem 2.34. Let G be an n-vertex graph. Then χ(G) ≥ ⌈n/α(G)⌉.

Proof. Let f: V(G) → [k] be an optimal coloring of G, that is, k = χ(G). We have:

n = Σ_{i=1}^{k} |{v ∈ V(G) | f(v) = i}|
  ≤ Σ_{i=1}^{k} α(G)
  = kα(G).

Rearranging, and applying the fact that k ∈ Z+:

k ≥ ⌈n/α(G)⌉.
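As a preview of how Theorem 2.34 will be applied in Chapter 3: since any color class (indeed, any independent set) of Q_8^2 is an (8, M, 3) binary code, we have α(Q_8^2) = A(8, 3) = 20 by Theorem 1.60, and therefore χ(Q_8^2) ≥ ⌈2^8/20⌉ = ⌈12.8⌉ = 13, which is the lower bound quoted in the abstract.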
The chromatic number of a graph can also be bounded below by its clique number, as we will see in Theorem 2.35.

Theorem 2.35. For any graph G, χ(G) ≥ ω(G).

Proof. Let V′ ⊆ V(G) be an optimal clique of G, that is, |V′| = ω(G). Then, since every pair of vertices in V′ is adjacent, every vertex v ∈ V′ must be in a distinct color class of any coloring of G. Thus χ(G) ≥ ω(G).

Corollary 2.36. For any n ∈ Z+, χ(K_n) = n.

Proof. Since ω(K_n) = n, by the result of Theorem 2.35, χ(K_n) ≥ n. However, as n(K_n) = n, K_n is clearly n-colorable. Hence we have χ(K_n) = n.

2.4 Algorithms to Solve Vertex Coloring Problems

Suppose we wish to determine the chromatic number of a graph, the smallest integer k such that G is k-colorable. This, and a variety of other problems in the field of graph theory, are often solved algorithmically. Much effort is put into the study and development of these algorithms. The desired result is, of course, to determine the solution in as computationally efficient a manner as possible.

2.4.1 An Introduction to Algorithms and Efficiency

Definition 2.37. The running time of an algorithm, expressed as a function of the size of the input, is the maximum possible number of basic computational steps used in the execution of the algorithm.

When dealing with graph theoretic problems, the size of the input refers to the number of vertices in the graph. The running time of an algorithm is commonly measured as follows.

Definition 2.38. Suppose the running time of an algorithm with input size n is bounded by the expression g(n). Let f(n) be an expression such that there exist c, N ∈ Z+ with g(n) ≤ cf(n) for all n > N. Then the running time of this algorithm is O(f(n)).

It is widely considered that an algorithm is computationally efficient if it has a polynomial running time. There are methods to determine if a graph is 2-colorable that are computationally efficient. It is known that a graph is 2-colorable if and only if it contains no odd cycles (for a proof of this statement we refer the reader to [22, Theorem 1.2.18]). Because of this nice characterization of 2-colorability, determining whether any n-vertex graph G is 2-colorable is solvable in polynomial time. One algorithm that accomplishes this is the DSatur algorithm [6, p. 252], whose running time is O(n^2). However, for k ≥ 3, whether the problem of determining if a graph is k-colorable can be solved efficiently is unknown.

Definition 2.39. A P (polynomial) problem is a problem that is solvable in polynomial time, that is, there exists an efficient algorithm to solve it. An NP (non-deterministic polynomial) problem is a problem such that a potential solution can be verified in polynomial time. An NP-hard problem is a problem such that, if it were solvable by a polynomial time algorithm, then this algorithm could be used to construct a polynomial time algorithm for every NP problem. An NP-complete problem is a problem that is both NP and NP-hard.

The class of NP-hard problems can informally be thought of as the class of problems at least as hard as the hardest problems in NP, and possibly harder. The class of NP-hard problems also includes not only decision problems, but problems that are posed as optimization problems. Clearly, all P problems are also NP problems, as a P problem can verify a solution in polynomial time. It is currently unknown if the converse is true, that is, if any of the many NP-complete problems can be solved in polynomial time.
However, if there exists a polynomial time algorithm for one NP-complete problem, then there exists a polynomial time algorithm for all NP problems, and hence the class of NP problems is equal to the class of P problems. This P=NP question has become one of the most famous problems in applied mathematics and computer science. The Clay Mathematics Institute, a nonprofit education foundation in Cambridge, Mass., is currently offering a million dollar prize for a solution to this problem.¹ A Venn diagram for the P, NP, NP-complete, and NP-hard classes of problems is included in Figure 2.1.²

Figure 2.1: Venn Diagram of the Complexity Classes

Determining if an n-vertex graph G is k-colorable, for k ≥ 3, is an NP-complete problem [9, p.191]. A solution to the problem can be verified in polynomial time: given a function f : V (G) → [k], to determine whether it satisfies the definition of a vertex coloring, one merely needs to check every edge of the graph and make sure its endpoints are not in the same color class. Since a simple, finite graph has at most (n choose 2) < n^2 edges, this can be done efficiently. However, the decision problem of finding a k-coloring in a graph is NP-complete. Hence it is not known, for a fixed k ≥ 3, whether there exists a polynomial time algorithm to determine whether or not a graph has a k-coloring. Currently, the only general algorithm to prove that a graph is not k-colorable, for k ≥ 3, is equivalent to an exhaustive search. In fact, one may have to test every function from V (G) to [k], of which there are k^n, to prove the existence or non-existence of a k-coloring.

¹ http://www.claymath.org/millennium/
² Source: http://en.wikipedia.org/wiki/Np-hard/

2.4.2  Integer Programming

Integer programming is also an NP-complete problem. Therefore, as discussed in the previous section, it is computationally equivalent to graph coloring with respect to efficiency. Modeling a graph coloring problem as an integer program (IP), however, can provide some practical computational improvements over an exhaustive search for a given graph. We explore this model here.

Let G be an n-vertex graph (for simplicity, let V (G) = [n]). The first, and simplest, approach to determine if G is k-colorable is a basic integer programming model. The variables of the IP are x_{i,j}, where i ∈ [n] and j ∈ [k]. We let x_{i,j} be a binary variable that equals 1 if vertex i is assigned color j and 0 otherwise. The problem is then equivalent to determining if the following system (which we will refer to as VC) has a feasible solution:

x_{i1,j} + x_{i2,j} ≤ 1          for all i1i2 ∈ E(G) and j ∈ [k],
∑_{j=1}^{k} x_{i,j} ≥ 1          for all i ∈ [n],
x_{i,j} ∈ {0, 1}                 for all i ∈ [n] and j ∈ [k].

The first constraint forces no two adjacent vertices to be assigned the same color. The second forces each vertex to be assigned at least one color (if a vertex is assigned two colors, then either may be chosen arbitrarily). Trivially, χ(G) is the smallest value of k such that this integer program has a feasible solution.

The model is simple to design; however, it can be very computationally inefficient in execution. Graph coloring problems are a very specific kind of integer program, and this basic model has inherent symmetry associated with it. For example, suppose V1 and V2 are color classes in a coloring of G. Assigning V1 color j1 and V2 color j2 is, according to the VC model, a completely different solution than assigning V1 color j2 and V2 color j1.
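For concreteness, the VC system can be assembled in a few lines of C++ using CPLEX's Concert Technology interface (the C++ interface of the CPLEX 9.0 optimizer used in the computations of Chapter 3.3). The following is only a minimal sketch, assuming the graph is supplied as a vertex count n and an edge list; it is not the code used in our experiments.

// Feasibility version of the VC model: is G k-colorable?
#include <ilcplex/ilocplex.h>
#include <vector>
#include <utility>

bool isKColorable(int n, const std::vector<std::pair<int, int>>& edges, int k) {
    IloEnv env;
    bool feasible = false;
    {
        IloModel model(env);
        // x[i][j] = 1 if and only if vertex i is assigned color j.
        IloArray<IloBoolVarArray> x(env, n);
        for (int i = 0; i < n; ++i) x[i] = IloBoolVarArray(env, k);
        // No two adjacent vertices may share a color.
        for (const auto& e : edges)
            for (int j = 0; j < k; ++j)
                model.add(x[e.first][j] + x[e.second][j] <= 1);
        // Every vertex must receive at least one color.
        for (int i = 0; i < n; ++i) {
            IloExpr assigned(env);
            for (int j = 0; j < k; ++j) assigned += x[i][j];
            model.add(assigned >= 1);
            assigned.end();
        }
        IloCplex cplex(model);
        feasible = (cplex.solve() == IloTrue);
    }
    env.end();
    return feasible;
}

The smallest k for which this returns true is then χ(G).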
It becomes apparent that a basic integer program can waste much computational time searching for "new" solutions that are actually the same, or searching branches of the search tree where prior knowledge should imply that those branches contain no feasible solution. The following algorithm attempts to rectify this problem.

2.4.3  Column Generation

In this section, we will introduce a model that avoids some of the symmetry inherent in graph coloring problems. This model is introduced in [15], and we will adopt the same notation.

Consider a graph G and let S be the set of all maximal independent sets of G. Now define x_s to be a binary variable for all s ∈ S that equals 1 if that set is used as a color class in an optimal coloring of G, and 0 otherwise. Since all of these sets are by definition independent, so long as every vertex is in some set s that is assigned a color, we have a coloring of G. Note that a vertex can appear in more than one set s that is assigned a label, as we are able to break the tie of which label it gets arbitrarily. The problem, denoted IS, that determines χ(G) is outlined below.

Minimize ∑_{s∈S} x_s subject to:
∑_{s∈S | i∈s} x_s ≥ 1          for all i ∈ V (G),
x_s ∈ {0, 1}                   for all s ∈ S.

Minimizing this objective function minimizes the number of colors used. The first constraint forces each vertex to be in at least one color class of the optimal coloring found, and the second constraint forces the x_s to be binary. Note that this model treats a coloring as a collection of maximal independent sets with no specific color assigned to each set. Therefore, it avoids the symmetry problem discussed in the integer programming approach. However, the number of variables (one for every maximal independent set) is exponential in general and expensive to determine. Thus, we will look at a column generation method that reduces the number of variables immensely at the start and generates new ones as needed. In this method, S is extended to be all independent sets (rather than only maximal independent sets).

First, we must come up with a collection of sets that produces a coloring. Often there is a previously found coloring that we are trying to improve on, so its color classes could be a good starting point. Alternatively, the simple approach is to let the initial S be all the singleton sets of vertices, which trivially results in an n-coloring of an n-vertex graph.

We then solve a linear relaxation of IS with the restriction that S is our predetermined collection of independent sets. Solving this relaxation generates a dual value, π_i, for each constraint (each of which represents a vertex of G). We then use these dual values as input for the following integer program, denoted MWIS.

Maximize ∑_{i∈V (G)} π_i z_i subject to:
z_{i1} + z_{i2} ≤ 1          for all i1i2 ∈ E(G),
z_i ∈ {0, 1}                 for all i ∈ V (G).

The optimal z_i values correspond to a new independent set of vertices (the vertex i is in this set if and only if z_i = 1). The first constraint forces that no two adjacent vertices can be included in this set, and the second forces the z_i to be binary.

The algorithm solves the IS relaxation with the predetermined collection of independent sets S, and uses the dual values of this linear program as input for MWIS. Solving MWIS generates an independent set, which is added to S. IS is then solved with the new set S, and its dual values are again used as input for MWIS.
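The loop just described can be summarized in a short skeleton. This is only a sketch: solveRestrictedLP and solveMWIS are hypothetical placeholders standing in for the IS relaxation and the MWIS integer program, and the stopping test anticipates the termination condition justified next.

// Skeleton of the IS/MWIS column generation loop.
// solveRestrictedLP: solves the linear relaxation of IS over the current
//   collection S and returns the dual value pi_i of each vertex constraint.
// solveMWIS: solves the maximum-weight independent set IP with weights pi.
// Both helpers are assumed, not part of any library.
#include <vector>

struct MWISResult {
    std::vector<int> independentSet;  // vertices i with z_i = 1
    double objective;                 // sum of pi_i over this set
};

std::vector<double> solveRestrictedLP(const std::vector<std::vector<int>>& S);
MWISResult solveMWIS(const std::vector<double>& pi);

void generateColumns(std::vector<std::vector<int>>& S) {
    for (;;) {
        std::vector<double> pi = solveRestrictedLP(S);  // one dual per vertex
        MWISResult column = solveMWIS(pi);              // price out a new set
        if (column.objective <= 1.0) break;             // no improving column
        S.push_back(column.independentSet);             // add it and resolve
    }
}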
The algorithm terminates when the optimal objective function value of MWIS is less than or equal to 1. Linear programming duality, discussed in the next paragraph, tells us that no new independent set will improve the coloring of the graph. To understand why this algorithm works, suppose the Simplex Algorithm on the relaxation of IS was applied with all independent sets included in S. An independent set (a variable) would enter the basis (thus improving our solution) if and only if it has a negative reduced cost. The reduced cost of a variable s is given by 1 − πAs , where π is the row vector of dual values from the previous Simplex iteration and As is the column of the constraint matrix A corresponding to possible entering set s. The entries of the columns of As are 1 if that vertex is in the set and 0 otherwise. This, however, is exactly what the zi from MWIS represent. Thus we have: 1−  πi zi < 0, or equivalently, i∈v  πi zi > 1. i∈v  40  Chapter 2. Graph Theory Upon termination of the algorithm, we then look at the final IS relaxation solved. If it has an integral solution, we are done the original problem and χ(G) is determined. Otherwise, integrality must be enforced. To enforce integrality, we treat the solved problem as the initial node on a branch and bound tree. We then examine our fractional solution achieved in the algorithm, and consider an independent set s1 such that xs1 is fractional. There must exist another independent set s2 = s1 such that s1 ∩ s2 = ∅; otherwise the vertices in s1 would have their only nonnegative variable being xs1 < 1, an infeasible solution to the relaxed linear program. Consider vertices i1 , i2 ∈ V (G) such that i1 ∈ s1 ∩s2 and i2 ∈ s1 \s2 or i2 ∈ s2 \s1 . We then create two subproblems, denoted DIF F ER(i1 , i2 ) and SAM E(i1 , i2 ). DIF F ER(i1 , i2 ) requires that i1 and i2 are given different colors, whereas SAM E(i1 , i2 ) requires that i1 and i2 are given the same color. It is clear that any feasible coloring (and thus, an optimal coloring) of G occurs in one of these two cases. These subproblems can be thought of graph theoretically: DIF F ER(i1 , i2 ) corresponds to adding the edge i1 i2 to the graph, and SAM E(i1 , i2 ) corresponds to collapsing vertices i1 and i2 into a single vertex j, with NG (j) = NG (i1 ) ∪ NG (i2 ). The column generation algorithm is then solved in each subproblem, in the hopes that an integral solution is found. If not, subproblems are again created and solved through column generation, and the branch and bound tree is explored until an optimal value is found. This optimal value is χ(G). This algorithm is in most cases superior to the straight integer programming approach discussed in Chapter 2.4.2 (see [15] for a comparison on a test bed of graphs). However, the algorithm can still be very computationally expensive. While it is a simpler integer program than the one outlined in Chapter 2.4.2, the MWIS model is still an integer program which are in the class of NP-Complete problems. Since MWIS may be repeated many times over the course of the algorithm, it is vital to be able to solve it in as efficient a manner as possible. Mehrotra and Trick discuss ways of doing this in [15, pp.345–346].  2.4.4  The Petford-Welsh Algorithm  Petford and Welsh introduced an algorithm in [19] that searches for a 3coloring of a graph. The generalization of the algorithm to search for a k-coloring is trivial, and we offer this generalization. Consider an n-vertex 41  Chapter 2. Graph Theory graph G. 
The algorithm first assigns every vertex of G a random number (color) in [k] (note that this is simply a random assignment of k colors and not likely to be a proper k-coloring). At any instant, for all vertices v and every i in [k], we define Si (v) to be the subset of NG (v) that are colored i. In addition, si (v) is defined to be |Si (v)|. A transition function p is also defined that satisfies, for s1 , s2 . . . , sk ∈ [n]: p(s1 , s2 , . . . , sk ; j) ≥ 0  for all j in [k],  k  p(s1 , s2 , . . . , sk ; j) = 1. j=1  Note that this satisfies the definition of a probability mass function P (X = j) of a discrete random variable X that can take on values in [k]. One example of such a transition function is:  1  p(s1 , s2 , . . . , sk ; j) = 1 − sj k−1  −1  k  si   .  i=1  Every iteration, a vertex v is chosen at random out of all “bad” vertices, that is, all vertices adjacent to another vertex currently assigned the same color. Next, a (not necessarily new) color is randomly generated according to the probability mass function defined by p(s1 (v), s2 (v), . . . , sk (v); j), and this color is assigned to the vertex v. The algorithm continues until there are no more bad vertices to choose from, in which case it has found a k-coloring, or the running time goes beyond a predetermined stopping time. If it does not find a k-coloring in the allotted time, then the algorithm is inconclusive. Obviously, this algorithm can’t be used to prove that a graph is not k-colorable. However, for “good” choices of the transition function p, this algorithm was shown to find 3-colorings on randomly generated 3-colorable graphs in a relatively short length of time (see [19] for more details on these graphs and choices of transition functions). The preliminary results for larger k were not very promising, and one possible reason that was given was that much more experimentation was needed to find a good transition function. However, this algorithm will be revisited in Chapter 3.3.3, in which it is applied successfully to find a 14-coloring of Q28 , a graph that will be defined next. 42  Chapter 3  The Hypercube We now discuss the main combinatorial object studied in this thesis. Definition 3.1. For any k ∈ Z+ , the k-dimensional hypercube (k-cube), denoted Qk , is the graph satisfying the following: V (Qk ) = {0, 1}k , E(Qk ) = {uv ∈ {0, 1}k | d(u, v) = 1}. This is the graph in which the vertices are all binary vectors of length k, and two vertices are adjacent if and only if the Hamming distance between them is 1. Often, we will be dealing with the jth power of the k-cube (when j ≤ k), Qjk . This graph satisfies: V (Qjk ) = {0, 1}k , E(Qjk ) = {u, v ∈ {0, 1}k | d(u, v) ≤ j}. It is clear that n(Qjk ) = 2k for any j, k ∈ Z+ . The result of the following proposition will allow us to enumerate e(Qjk ). Proposition 3.2. Let j, k ∈ Z+ with j ≤ k. Then e(Qjk ) = 2k−1 t where t satisfies: j  t= i=1  k . i  Qjk  Proof. First, we note that is t-regular since, by definition, t sums up all binary vectors of distance at most j from any vertex. Thus Proposition 2.8 implies: n(Qjk )t 2 2k t = 2 = 2k−1 t.  e(Qjk ) =  43  Chapter 3. The Hypercube Note that if j ≥ k then every vertex of Qjk is adjacent. Therefore Qjk ∼ = K2k , and hence: e(Qjk ) =  2k 2  =  (2k )(2k − 1) = 2k−1 (2k − 1). 2  This agrees with the result of Proposition 3.2 as: j  2k−1 i=1  k i  k  = 2k−1 i=1  k i  = 2k−1 (2k − 1), by the binomial theorem. We will most often be discussing the case j = 2, the graph Q2k . 
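Before specializing to j = 2, we pause to record a compact sketch of the generalized Petford-Welsh iteration from Chapter 2.4.4, since it will be applied to the square of the 8-cube in Chapter 3.3.3. This is a sketch under stated assumptions: the transition function is the example p given above, the random seed and iteration budget are arbitrary choices, and it is not Hougardy's adaptation from Appendix D.

// Generalized Petford-Welsh heuristic (assumes k >= 2): returns a proper
// k-coloring of the graph given by adjacency lists `adj`, or an empty
// vector if the iteration budget is exhausted (an inconclusive run).
#include <vector>
#include <random>

std::vector<int> petfordWelsh(const std::vector<std::vector<int>>& adj,
                              int k, long long maxIters) {
    const int n = static_cast<int>(adj.size());
    std::mt19937 rng(12345);                        // arbitrary fixed seed
    std::uniform_int_distribution<int> randColor(0, k - 1);
    std::vector<int> color(n);
    for (int v = 0; v < n; ++v) color[v] = randColor(rng);  // random start

    for (long long it = 0; it < maxIters; ++it) {
        // Collect the "bad" vertices: those sharing a color with a neighbor.
        std::vector<int> bad;
        for (int v = 0; v < n; ++v)
            for (int u : adj[v])
                if (color[u] == color[v]) { bad.push_back(v); break; }
        if (bad.empty()) return color;              // proper k-coloring found

        // Choose a bad vertex uniformly at random...
        int v = bad[std::uniform_int_distribution<int>(0, (int)bad.size() - 1)(rng)];
        // ...and recolor it according to p(s_1, ..., s_k; j).
        std::vector<double> s(k, 0.0);
        for (int u : adj[v]) s[color[u]] += 1.0;    // s_j = |S_j(v)|
        double total = 0.0;
        for (int j = 0; j < k; ++j) total += s[j];  // >= 1, since v is bad
        std::vector<double> p(k);
        for (int j = 0; j < k; ++j) p[j] = (1.0 - s[j] / total) / (k - 1);
        std::discrete_distribution<int> pick(p.begin(), p.end());
        color[v] = pick(rng);
    }
    return {};                                      // inconclusive
}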
In this case: t=  k k + 1 2  =  k+1 , 2  and hence e(Q2k ) = 2k−1  k+1 . 2  The number of vertices and edges of Q2k have been calculated for all k ≤ 31 in Appendix A. We now note the following results, and our first connections between Chapter 1 and Chapters 2 − 3. Lemma 3.3. α(Qjk ) = A(k, j + 1) for any j, k ∈ Z+ with j ≤ k. Proof. As two vertices of Qjk are adjacent if and only if their Hamming distance is at most j, any independent set V of Qjk is a set of vertices satisfying d(u, v) ≥ j + 1 for all u, v ∈ V . This is a binary code with minimum distance at least j + 1, which leads to the desired result. This leads to the following proposition. Proposition 3.4. Let Φ : {0, 1}k → {0, 1}k be an automorphism of Qjk , and C be a (k, M, j + 1) binary code. Φ(C) is then a (k, M, j + 1) binary code, where Φ(C) is defined as follows: Φ(C) = {Φ(u) | u ∈ C}. 44  Chapter 3. The Hypercube Proof. Clearly, C is an independent set of Qjk . Moreover, by Proposition 2.25, Φ(C) is an independent set of Qjk . Therefore Φ(C) is a (k, M, j + 1) binary code.  3.1  Automorphisms of the Cube and its Square  In this section, we will define the automorphism groups of Qk and Q2k . The result of Theorem 2.29 implies that Aut(Qk ) ⊆ Aut(Qjk ) for all j ∈ Z+ . Therefore every automorphism of Qk is, in addition, an automorphism of Qjk . Consider the following function. Definition 3.5. Let x ∈ {0, 1}k . Then the vector addition function: Ψx : {0, 1}k → {0, 1}k , is defined to be: Ψx (v) = v + x. This function is an automorphism of Qk , as will be seen in the Theorem 3.6. Theorem 3.6. For any x ∈ {0, 1}k , Ψx is an automorphism of the graph Qk . Proof. Consider an arbitrary x ∈ {0, 1}k . To prove Ψx is an automorphism of Qk , we first prove that Ψx is a permutation of {0, 1}k . To prove Ψx is a permutation, assume (for u, v ∈ {0, 1}k ) that Ψx (u) = Ψx (v). We have: Ψx (u) = Ψx (v) ⇔ u + x = v + x ⇔ u = v, and hence Ψx is one-to-one. Since {0, 1}k is finite, it is therefore onto and thus a permutation. Note that throughout this thesis, we will prove a function from {0, 1}k to {0, 1}k is a permutation by simply proving it is one-toone.  45  Chapter 3. The Hypercube To prove Ψx is an automorphism of Qk , take u, v ∈ {0, 1}k such that uv ∈ E(Qk ). We have: uv ∈ E(Qk ) ⇔ d(u, v) = 1 ⇔ d(u + x, v + x) = 1  (by Lemma 1.9)  ⇔ d(Ψx (u), Ψx (v)) = 1 ⇔ Ψx (u)Ψx (v) ∈ E(Qk ), and therefore Ψx preserves edges and is hence an automorphism. Corollary 3.7. Qjk is vertex transitive for any j, k ∈ Z+ . Proof. Consider arbitrary u, v ∈ {0, 1}k . By Theorem 3.6, the vector addition function Ψu+v is an automorphism of Qk . Theorem 2.29 then implies that Ψu+v ∈ Aut(Qjk ) for all j ∈ Z+ . Moreover: Ψu+v (u) = u + (u + v) = v, and hence, Ψu+v maps u to v. Since this holds for any choices of u, v ∈ {0, 1}k , Qjk is vertex transitive. Permuting the coordinates of the vertices of Qk is also an automorphism, as we will prove in Theorem 3.8. Theorem 3.8. Permuting the vertex coordinates under the permutation σ, for any σ ∈ Sk , is an automorphism of Qk . Proof. To prove σ is an automorphism of Qk , we first prove that permuting the coordinates under σ also permutes the vertices in {0, 1}k . Assume, for u, v ∈ {0, 1}k , that σ(u) = σ(v). Since σ is a permutation, it by definition has an inverse σ −1 , satisfying σ −1 (σ(u)) = u and σ −1 (σ(v)) = v. Now since σ(u)i = σ(v)i for all coordinates i ∈ [k], ui = vi for all coordinates i. Therefore σ is a permutation. To prove σ is an automorphism of Qk , consider u, v ∈ {0, 1}k such that uv ∈ E(Qk ). 
By definition of Qk , d(u, v) = 1. Thus there is exactly one coordinate i ∈ [k] where ui = vi . Therefore σ(u) and σ(v) differ only in coordinate σ(i), and hence d(σ(u), σ(v)) = 1. Clearly σ(u)σ(v) ∈ E(Qk ). Now consider u, v ∈ {0, 1}k such that σ(u)σ(v) ∈ E(Qk ). By definition of Qk , d(σ(u), σ(v)) = 1. Thus there is exactly one coordinate i ∈ [k] where 46  Chapter 3. The Hypercube σ(u)i = σ(v)i . Therefore u and v differ only in coordinate σ −1 (i), and hence d(u, v) = 1. Clearly uv ∈ E(Qk ). Lemma 3.9. Let x, y ∈ {0, 1}k and σ1 , σ2 ∈ Sk such that x = y or σ1 = σ2 . Then: σ1 ◦ Ψx = σ2 ◦ Ψy . Proof. Assume that σ1 ◦ Ψx = σ2 ◦ Ψy and, by way of contradiction, x = y or σ1 = σ2 . Thus the following holds: (σ1 ◦ Ψx )(x) = (σ2 ◦ Ψy )(x) ⇔ σ1 (0) = σ2 (x + y) ⇔ 0 = σ2 (x + y) ⇔0=x+y ⇔ x = y, and therefore σ1 = σ2 , so it must be the case that x = y. Then there is some v ∈ {0, 1}k such that: σ1 (v) = σ2 (v) ⇔ σ1 (Ψx (v + x)) = σ2 (Ψy (v + y)) ⇔ σ1 (Ψx (v + x)) = σ2 (Ψy (v + x))  since x = y  ⇒ σ1 ◦ Ψx = σ2 ◦ Ψy , a contradiction. Since |Sk | = k! and |{0, 1}k | = 2k , the result of Lemma 3.9 implies that we can create 2k k! distinct automorphisms of Qk through composing vector addition and coordinate permutation automorphisms. As we will see in Theorem 3.11, every automorphism of Qk is of this form. The proofs of the following lemma and theorem are of a combinatorial nature similar to that used in the solution manual to [22, Exercise 1.3.29]. An alternative proof of the result that has an algebraic flavor can be found in [16]. Lemma 3.10. Let j, k ∈ Z+ with j ≤ k. Every copy of Qj in Qk is Qk [V ] where V ⊆ {0, 1}k satisfies: V = {v ∈ {0, 1}k | v has specified values in a fixed set of k − j coordinates}. Proof. Let H be any subgraph of Qk such that H is a copy of Qj in Qk . There then exists some isomorphism Φ from Qj (defined normally) to H. Since H is a subgraph of Qk , for every x1 , x2 ∈ V (H), dH (x1 , x2 ) ≥ dQk (x1 , x2 ). Let u = Φ(0j ) and v = Φ(1j ). We will prove by induction on j that d(u, v) = j. 47  Chapter 3. The Hypercube • Base Case(s): If j = 1, then H ∼ = Q1 , which contains simply two vertices connected by an edge. Thus dQk (u, v) = 1 and hence d(u, v) = 1. If j = 2, then u and v have the common neighbour Φ(01) in H since 02 and 12 have the common neighbour 01 in Q2 . Now if d(u, v) = 1 then uv ∈ E(Qk ), and since H ⊆ Qk , uvΦ(01)u would form a cycle of length 3 in the bipartite graph Qk , a contradiction. Thus d(u, v) = 2. • Inductive Hypothesis: Suppose for some positive integer j > 2, if ξ is an isomorphism from Ql , for some l ∈ [j − 1], to a subgraph of Qk , then d(ξ(0l ), ξ(1l )) = l. We now prove the result for j. Consider the isomorphism Φ : Qj → H. For all i ∈ [j], let wi = Φ(µji ). Now since 0j µji ∈ E(Qj ), uwi ∈ E(H), and hence d(u, wi ) = 1. Now let Wi = {x ∈ V (Qj ) | xi = 1}. Then Qj−1 ∼ = Qj [Wi ] under the isomorphism ξi that lengthens the vector by inserting a 1 in the ith coordinate. Thus Φi ◦ ξi is an isomorphism from Qj−1 to a subgraph of Qk where Φi is the restriction of Φ to Wi . By the induction hypothesis, for i ∈ [j]: d(Φi ◦ ξi (0j−1 ), Φi ◦ ξi (1j−1 )) = j − 1. Moreover, ξi (0j−1 ) = µji and ξi (1j−1 ) = 1j . Therefore we have d(wi , v) = j − 1 for i ∈ [j]. Thus for each i ∈ [j], wi is different from v in j − 1 coordinates and is different from u in one coordinate. 
If for some i ∈ [j], the j − 1 coordinates in which wi is different from v do not include the one coordinate in which wi is different from u, then u and v differ in all these j coordinates and hence d(u, v) = j. So, suppose to the contrary that there are distinct i1 , i2 ∈ [j] such that wi1 and wi2 differ from u only in coordinates j1 and j2 , respectively, and additionally that uj1 = vj1 and uj2 = vj2 . Since wi1 differs from v in j − 1 coordinates and differs from u in exactly one of these coordinates, wi1 differs from v in j − 2 coordinates but is the same as u in these same j − 2 coordinates. Thus u and v differ in these j − 2  48  Chapter 3. The Hypercube coordinates. Hence d(u, v) ≥ j − 2. Consider now α = µji1 + µji2 and its image z = Φ(µji1 + µji2 ). Since µji1 α, µji2 α ∈ E(Qj ), wi1 z, wi2 z ∈ E(H). Thus: d(wi1 , z) = 1 = d(wi2 , z). If d(u, z) = 1, then uwi1 zu is a cycle of length 3 in the H. Since H ⊆ Qk this is also a cycle of length 3 in the bipartite graph Qk , a contradiction. Thus d(u, z) = 2. Suppose z differs from u in coordinate j3 where j3 ∈ / {j1 , j2 }. Then u and wi1 , and u and wi2 , all agree in coordinate j3 . However: d(wi1 , z) = 1 = d(wi2 , z), and so wi1 = wi2 , a contradiction. Therefore j3 ∈ {j1 , j2 }. Since d(u, z) = 2, z therefore differs from u only in coordinates j1 and j2 . Since uj1 = vj1 and uj2 = vj2 , z differs from v in coordinates j1 and j2 . Moreover, z differs from v in all the coordinates in which v differs from u, since in those coordinates z agrees with u. Thus: d(v, z) ≥ d(u, z) + d(u, v) ≥ 2 + (j − 2) = j. With α ∈ V (Qj ), we have: j − 2 = dQj (1j , α) = dH (Φ(1j ), Φ(α))  by Theorem 2.27  = dH (v, z) ≥ dQk (v, z)  since H ⊆ Qk  = d(v, z) ≥ j, a contradiction. Thus no such i1 and i2 exist, and d(u, v) = j. Hence by induction, the statement holds for all j.  49  Chapter 3. The Hypercube Next, let S be the set of coordinates in which u and v are the same. By the preceding inductive proof, |S| = k − j. Consider w ∈ V (H). Since Φ−1 (w) is in Qj and for all y1 , y2 ∈ V (Qj ), d(y1 , y2 ) = dQj (y1 , y2 ), we have: dQj (0j , Φ−1 (w)) + dQj (1j , Φ−1 (w)) = d(0j , Φ−1 (w)) + d(1j , Φ−1 (w)) = j. By Theorem 2.27, it then follows that dH (u, w) + dH (v, w) = j. Hence d(u, w) + d(v, w) ≤ j. Moreover, the coordinates in [k]\S (as they are the coordinates in which u and v differ) contribute j to the sum d(u, w)+d(v, w). Thus u and w must be the same in all coordinates in S. Hence every vertex in H has the same values in S and therefore H is of the form Qk [V ] for V defined as in the theorem statement. Theorem 3.11. |Aut(Qk )| = 2k k! for k ∈ Z+ . Proof. As shown earlier, the result of Lemma 3.9 implies that |Aut(Qk )| ≥ 2k k!. Thus, we need only prove that |Aut(Qk )| ≤ 2k k!, that is, that every automorphism of Qk can be expressed as a composition of a vector addition and coordinate permutation automorphism. Let Φ be an automorphism of Qk , and consider 0k . Define v = Φ(0k ). Corollary 2.28 implies that Φ(NQk (0k )) = NQk (v). Consider w ∈ {0, 1}k such that wt(w) ≥ 2, and let j = wt(w). Then 0k and w are vertices in a copy of Qj in Qk , call this copy G. Define H = Qk [Φ(G)]. Since Φ is an automorphism, H ∼ = Qj . By Lemma 3.10, H is a subgraph induced by a set j of 2 vertices having specified values on a fixed set of k − j coordinates, call this set S. Let x be the vector attaining these specified values in S and with a 0 in the j coordinates not in S. Moreover, consider x + µki for all i not in S. 
These vertices are in H since their coordinates in S are as specified, and their images under Φ provide j vertices that differ from v in exactly 1 place. Thus once these images are chosen, the k − j coordinates that are the same in all vertices of H have been determined. Since Φ(w) is different from v in exactly j places, and the k − j coordinates in which v and Φ(w) are the same have already been determined by the choice of the images of NQk (v), Φ(w) is completely determined. Thus every automorphism of Qk has at most 2k choices to map 0k , and at most k! choices to map NQk (0k ), after which the rest of the automorphism is determined. Therefore, |Aut(Qk )| ≤ 2k k!, and we conclude with the desired result. 50  Chapter 3. The Hypercube Definition 3.12. For every i in [k], the function: τi : {0, 1}k → {0, 1}k , satisfies the following, for all v v1 v2 , . . . , v n :   v, τi (v) = v,   v + µki ,  in {0, 1}k with coordinates defined by v =  if wt(v) is even and vi = 0, if wt(v) is odd and vi = 1, otherwise.  Note that if vi = 0, then τi (v) is of even parity. Moreover, if vi = 1, then τi (v) is of odd parity. It is clear that τi is not an automorphism of Qk , as it can’t be expressed as a composition of vector addition and coordinate permutation. However, τi is in fact an automorphism of Q2k , as we will see in Theorem 3.13, first proved by Miller in [16, Lemma 3.1]. Theorem 3.13. The function τi is an automorphism of Q2k for all i ∈ [k]. Proof. Fix i ∈ [k]. To prove τi is an automorphism of Q2k , we first prove that τi is a permutation of {0, 1}k . To prove τi is a permutation, assume by way of contradiction that there exist u, v ∈ {0, 1}k such that τi (u) = τi (v) and u = v. We must have uj = vj for all j ∈ [k] \ i and ui = vi . Assume, without loss of generality, that ui = 0 and vi = 1. Thus d(u, v) = 1, and therefore u and v are of differing parity. If wt(u) is even, then τi (u) = u and τi (v) = v by defintion and hence u = v, a contradiction. Similarly, if wt(u) is odd, then τi (u) = u + µki and τi (v) = v + µki , and hence u + µki = v + µki . Then by Lemma 1.9, u = v, also a contradiction. Thus τi is a permutation. We will now prove that τi is an automorphism of Q2k . Consider u, v ∈ {0, 1}k such that uv ∈ E(Q2k ). We consider two cases: • Case 1: d(u, v) = 1. This case is trivial since τi only alters one coordinate of both u and v. Thus d(τi (u), τi (v)) ≤ 2 and hence τi (u)τi (v) ∈ E(Q2k ). • Case 2: d(u, v) = 2. By Corollary 1.16, u and v are either both of even weight or both of 51  Chapter 3. The Hypercube odd weight. Thus, if u and v differ in coordinate i we have: d(τi (u), τi (v)) = wt(τi (u) + τi (v)) = wt(u + v + µki ) as µki will be added to exactly one of u, v = wt(u + v) − 1  as (u + v)i = 1  = d(u, v) − 1 ≤ 2, and therefore τi (u)τi (v) ∈ E(Q2k ). Now, if u and v are the same in coordinate i, since they are of the same parity, τi performs the same operation (either adding µki or not adding µki ) on both u and v. Therefore d(τi (u), τi (v)) = d(u, v) = 2, and hence τi (u)τi (v) ∈ E(Q2k ). Now consider u, v ∈ {0, 1}k such that τi (u)τi (v) ∈ E(Q2k ). We again consider two cases: • Case 1: d(τi (u), τi (v)) = 1. This case is trivial since τi only alters one coordinate of both u and v. Thus d(u, v) ≤ 2 and hence uv ∈ E(Q2k ). • Case 2: d(τi (u), τi (v)) = 2. Again by Corollary 1.16, τi (u) and τi (v) are either both of even weight or both of odd weight. 
Thus, if τi (u) and τi (v) differ in coordinate i we have: d(u, v) = wt(u + v) = wt(τi (u) + τi (v) + µki ) as µki will be added to exactly one of u, v = wt(τi (u) + τi (v)) − 1  as (τi (u) + τi (v))i = 1  = d(u, v) − 1 ≤ 2, and therefore uv ∈ E(Q2k ).  52  Chapter 3. The Hypercube Now, if τi (u) and τi (v) are the same in coordinate i, since they also are of the same parity, τi performs the same operation (either adding µki or not adding µki ) on both u and v. Thus d(u, v) = d(τi (u), τi (v)) = 2, and hence uv ∈ E(Q2k ). For any i ∈ [k], since τi ∈ Aut(Q2k ) \ Aut(Qk ), we have Aut(Qk ) ⊂ Aut(Q2k ), with equality not holding. In fact, Miller proved in [16] that the automorphism group of the square of the k-cube can be generated by vector addition, coordinate permutation, and a single τi automorphism. Using this, he was then able to prove the following result. Theorem 3.14. For any positive integer k at least 4: |Aut(Q2k )| = (|{0, 1}k |)(|Sk+1 |) = 2k (k + 1)!.  3.2  The Chromatic Number of the Cube and its Square  Determining the chromatic number of a graph, as discussed in Chapter 2.4, can be very computationally expensive. This, however, does not hold for the k-cube, Qk . Theorem 3.15. χ(Qk ) = 2 for all k ∈ Z+ . Proof. Define Ve , Vo ⊆ {0, 1}k as follows: Ve = {v ∈ {0, 1}k |wt(v) is even}, Vo = {v ∈ {0, 1}k |wt(v) is odd}. Trivially, Ve ∪ Vo = {0, 1}k . No two vertices in either set can be adjacent, since d(u, v) = 1 implies that u and v are of differing parity. Thus Qk is bipartite, and hence 2-colorable. Moreover, we can trivially have no 1coloring of Qk as there is at least one pair of adjacent vertices (for example, consider 0k and µk1 ). Therefore χ(Qk ) = 2. The chromatic number of the k-cube is thus 2 for any k ∈ Z+ . However, higher powers of the k-cube are not bipartite (for k ≥ 2). Thus, determining χ(Qjk ) for j, k ≥ 2 and j ≤ k−2 can’t necessarily be done in polynomial time. We consider specifically the square of the k-cube, whose chromatic number is unknown for many k. Included in Appendix A is a table of known 53  Chapter 3. The Hypercube values of and bounds on χ(Q2k ) for k ≤ 31, as well as other relevant information. Due to the result of Lemma 3.3, any coloring of Q2k partitions the vertices into binary codes of length k and minimum distance at least 3. Theorem 2.34 then allows us to determine the lower bound on χ(Q2k ) seen in Proposition 3.16. Proposition 3.16. χ(Q2k ) ≥  2k A(k, 3)  for any k ∈ Z+ .  Proof. The proof of this proposition is immediate from the results of Theorem 2.34 and Lemma 3.3. As discussed in Chapter 1, A(k, 3) is not known for many k ≥ 16. If not known, we can still apply the sphere-packing bound to Proposition 3.16, as will be seen in Theorem 3.19 and Corollary 3.20. However, consider the case where k = 2n − i for some i, n ∈ Z+ with n ≥ 3 and i ∈ [4]. A(k, 3) is known explicitly from the result of Theorem 1.61. This allows for the following corollary to Proposition 3.16. Corollary 3.17. Let k = 2n − i for some i, n ∈ Z+ with n ≥ 3 and i ∈ [4]. Then: χ(Q2k ) ≥ 2n . Proof. By Theorem 1.61 and Proposition 3.16 we have: χ(Q2k )  n −i  ≥  22  22n −n−i = 2n .  Another way to determine a lower bound on χ(Q2k ) is by exhibiting an optimal clique in Q2k , and then applying the result of Theorem 2.35. Lemma 3.18. ω(Q2k ) = k + 1 for any k ∈ Z+ with k ≥ 3. Proof. Let k ≥ 3. We will first exhibit a clique of size k + 1. Consider the set of vertices V ⊆ {0, 1}k defined as follows: V = {0k , µk1 , µk2 , . . . , µkk }. 
It is clear that any two vertices in V have distance at most 2, and thus V is a clique of size k + 1 in Q2k .  54  Chapter 3. The Hypercube Now assume, by way of contradiction, that ω(Q2k ) ≥ k + 2. Let K be a clique of Q2k with |K| = k + 2. Since Q2k is vertex transitive, we can assume 0k ∈ K. Thus every vertex in k has weight at most 2. Since there are at most k vertices in K of weight 1, namely ones from {µk1 , µk2 , . . . , µkk }, some vertex v ∈ K has weight 2. Let v = µki1 +µki2 for i1 , i2 ∈ [k] with i1 < i2 . Now let σ1 ∈ Sk be a permutation that satisfies σ1 (i1 ) = 1 and σ1 (i2 ) = 2. Clearly σ1 (v) = µk1 + µk2 . Define K = σ1 (K) and v = σ1 (v). It is clear that 0k , v ∈ K , |K | = k + 2, and, by Proposition 2.25, K is also a clique of Q2k . Thus µki ∈ / K for i ∈ [k] with i ≥ 3, since then: d(v , µki ) = wt(v ) + wt(µki ) − 2wt(v ∩ µki ) = 2 + 1 + 0 = 3. Therefore, the only vertices of weight 1 possibly contained in K are µk1 and µk2 . Since k + 2 − 4 ≥ 1, K has another vertex of weight 2, w = v . Let w = µkj1 + µkj2 for some j1 , j2 ∈ [k] with j1 < j2 . Since d(v , w ) ≤ 2 and w = v , |{j1 , j2 } ∩ {1, 2}| = 1. Therefore j1 ∈ {1, 2} and j2 ≥ 3. Define j3 ∈ {1, 2} where j3 = j1 . Now let σ2 ∈ Sk be a permutation that satisfies σ2 (j1 ) = 1, σ2 (j3 ) = 2, and σ2 (j2 ) = 3. Clearly σ2 (w ) = µk1 + µk3 and σ2 (v ) = v . We define w = σ2 (w ), v = σ2 (v ) = v , and K = σ2 (K). It is clear that 0k , v ∈ K , |K | = k + 2 and, again by Proposition 2.25, K is a clique of Q2k . Hence, the only vertex of weight 1 possibly contained in K is µk1 as d(w , µk2 ) = 3 and d(v , µki ) = 3 for i ≥ 3. If µk1 ∈ K , then all vertices of weight 2 in K have the form µk1 + µki with 2 ≤ i ≤ k; otherwise we would have two of distance 3 in K . But there are only k − 1 such weight 2 vertices forcing |K | ≤ k + 1, a contradiction. Meanwhile, if µk1 ∈ / K , we consider any two vertices of weight 2, x, y ∈ K \ {v, w} with x = y. Let x = µka + µkb and y = µkc + µkd for some a, b, c, d ∈ [k] with a < b and c < d. Since d(v , x) ≤ 2 and d(v , y) ≤ 2, a, c ∈ {1, 2} and b, d ≥ 3. If a = 2, then since d(x, w ) ≤ 2, it must be the case that b = 3. Similarily, if c = 2, then d = 3. If a = 2 then c = 1; moreover d(x, y) ≤ 2 implies that d = 3 giving y = w , a contradiction. Thus a = 1. A similar argument shows that c = 1. Since a = c = 1, then every vertex of weight 2 in K has the form µk1 + µki for 2 ≤ i ≤ k. Again, there are only k − 1 such weight 2 vertices, and hence |K | ≤ k, a contradiction. Therefore, every clique in Q2k , for k ≥ 3, has at most k + 1 vertices. 55  Chapter 3. The Hypercube Note that for k = 1, 2, since χ(Q2k ) ∼ = K2k , we have ω(Q21 ) = 2 and ω(Q22 ) = 4. The result of Theorem 2.25 then implies that χ(Q2k ) ≥ k + 1 for all k ∈ Z+ . This will be stated again in Theorem 3.19; however, a second proof will be offered that leads to the strengthened bound of Corollary 3.20. Theorem 3.19. χ(Q2k ) ≥ k + 1 for any k ∈ Z+ . Proof 1. The result of Lemma 3.18 proved that ω(Q2k ) = k + 1. Thus, by the result of Theorem 2.35, χ(Q2k ) ≥ k + 1. Proof 2. The result of Proposition 3.16 implies that: 2k A(k, 3) 2k ≥ 2k (k + 1)−1  χ(Q2k ) ≥  by the sphere-packing bound (Theorem 1.50) (3.1)  ≥  2k (k  2k + 1)−1  = k + 1. Corollary 3.20. Consider any k ∈ Z+ such that k = 2n − 1 for any n ∈ Z. Then χ(Q2k ) ≥ k + 2. Proof. Consider line (3.1) in the second proof of Theorem 3.19. We have, if k + 1 = 2n , that 2k (k + 1)−1 is non-integral. We then have the desired result. 
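The chain of inequalities in the second proof is easy to evaluate numerically. The following sketch computes the bound ⌈2^k / ⌊2^k/(k+1)⌋⌉ underlying Theorem 3.19 and Corollary 3.20; it returns k + 1 exactly when k + 1 is a power of 2 (for instance, 8 for k = 7) and at least k + 2 otherwise (for instance, 10 for k = 8). As we will see in Chapter 3.3, the exact value A(8, 3) = 20 sharpens the k = 8 bound further, to 13.

// Lower bound on chi(Q_k^2) via sphere packing:
// chi(Q_k^2) >= 2^k / A(k,3) >= 2^k / floor(2^k / (k+1)), rounded up.
#include <cstdint>
#include <cstdio>

std::uint64_t chiLowerBound(int k) {
    std::uint64_t vertices = std::uint64_t(1) << k;
    std::uint64_t codeBound = vertices / std::uint64_t(k + 1);  // sphere-packing bound on A(k,3)
    return (vertices + codeBound - 1) / codeBound;              // ceiling of 2^k / codeBound
}

int main() {
    for (int k = 3; k <= 16; ++k)
        std::printf("k = %2d : chi(Q_k^2) >= %llu\n", k,
                    static_cast<unsigned long long>(chiLowerBound(k)));
    return 0;
}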
In addition to the aforementioned lower bounds on χ(Q_k^2), it is also possible to determine upper bounds on χ(Q_k^2), as will be seen in the following theorems.

Theorem 3.21. χ(Q_k^2) ≤ χ(Q_{k+1}^2) for any k ∈ Z+.

Proof. Assume, by way of contradiction, that χ(Q_k^2) > χ(Q_{k+1}^2), and let χ(Q_{k+1}^2) = n. Consider the copy of Q_k in Q_{k+1} induced by the vectors with a 0 in the last coordinate. Restricting an n-coloring of Q_{k+1}^2 to this copy yields an m-coloring of the square of this copy for some m ≤ n; since the square of this copy is a copy of Q_k^2, this contradicts χ(Q_k^2) > n.

Another well-known upper bound on χ(Q_k^2) will be proved in Theorem 3.22. Linial, Meshulam, and Tarsi proved this result in [12], and another proof was offered by Wan in [21, Theorem 2] that exhibits an appropriate coloring with the use of the binary representation matrix, as defined in Definition 1.18. We will prove this result in a manner analogous to Wan's, and show an example of this coloring applied to Q_3^2 in Example 3.23. Note that we have defined our binary representation matrix to be the transpose of that used by Wan, as we have represented our vectors as columns rather than rows.

Theorem 3.22. χ(Q_k^2) ≤ 2^⌈log_2(k+1)⌉ for any k ∈ Z+.

Proof. Consider the binary representation matrix of order k, BRM_k, and define n = ⌈log_2(k+1)⌉. Now define the following function for all v ∈ {0, 1}^k:

f(v) = (BRM_k)(v),

with matrix-vector multiplication defined normally over the binary field. Note that (BRM_k)(v) is a binary vector of length n, and hence f : {0, 1}^k → {0, 1}^n. Since |{0, 1}^n| = 2^n, we claim f is a 2^n-coloring of Q_k^2. In fact, f could be an m-coloring of Q_k^2 for some m < 2^n, which still yields the upper bound of 2^n. Consider arbitrary vertices u and v with uv ∈ E(Q_k^2), that is, 1 ≤ d(u, v) ≤ 2. We prove f(u) ≠ f(v). We consider two cases.

If d(u, v) = 1, then u and v differ in exactly one coordinate in [k], which we denote i. We have:

(BRM_k)(u) − (BRM_k)(v) = (BRM_k)(u − v) = (BRM_k)(µ_i^k) = (BRM_k)_i,

where (BRM_k)_i represents the ith column of BRM_k. However, (BRM_k)_i is the binary vector of length n whose integer representation is i. Since i ≠ 0, by definition of the binary representation matrix, (BRM_k)(u) − (BRM_k)(v) ≠ 0^n and hence f(u) ≠ f(v).

If d(u, v) = 2, then u and v differ in exactly two coordinates in [k], which we denote i1 and i2. We have:

(BRM_k)(u) − (BRM_k)(v) = (BRM_k)(u − v) = (BRM_k)(µ_{i1}^k + µ_{i2}^k) = (BRM_k)_{i1} + (BRM_k)_{i2},

which is the sum of the binary vectors of length n with integer representations i1 and i2. However, since i1 ≠ i2, (BRM_k)(u) − (BRM_k)(v) ≠ 0^n. Therefore f(u) ≠ f(v).

Example 3.23. Consider the case k = 3. Theorem 3.22 implies that:

χ(Q_3^2) ≤ 2^⌈log_2(4)⌉ = 2^2 = 4.

We will therefore exhibit a 4-coloring of Q_3^2. The binary representation matrix of order 3 is:

BRM_3 = [ 0 1 1 ]
        [ 1 0 1 ]

The coloring f described in Theorem 3.22 is applied as in Table 3.1 (for simplicity we will convert f(v) to its integer representation i).

Table 3.1: A 4-coloring of Q_3^2 using Wan's construction

v ∈ {0, 1}^3    (BRM_3)(v)    i
000             00            0
100             01            1
010             10            2
110             11            3
001             11            3
101             10            2
011             01            1
111             00            0

This divides {0, 1}^3 into the following 4 color classes:

{v | (BRM_3)(v) = 0} = {000, 111},
{v | (BRM_3)(v) = 1} = {011, 100},
{v | (BRM_3)(v) = 2} = {010, 101},
{v | (BRM_3)(v) = 3} = {001, 110}.
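The construction is also immediate to implement: since column i of BRM_k is the binary representation of i, the color of v is simply the XOR of the (1-based) indices of its nonzero coordinates. The following sketch, an illustration only (vertices encoded as bitmasks with bit i − 1 holding coordinate i, and the GCC popcount builtin assumed), reproduces the four color classes above for k = 3 and checks that vertices at Hamming distance 1 or 2 never share a color.

// Wan's coloring f(v) = (BRM_k)(v) over GF(2), computed bitwise.
#include <cstdio>

int wanColor(unsigned v) {
    int color = 0;
    for (int i = 1; v != 0; ++i, v >>= 1)
        if (v & 1u) color ^= i;  // column i of BRM_k is the binary rep. of i
    return color;
}

int main() {
    const int k = 3;  // k = 3 reproduces Example 3.23; any k >= 1 works
    for (unsigned u = 0; u < (1u << k); ++u)
        std::printf("vertex %u -> color %d\n", u, wanColor(u));
    // Verify the coloring on Q_k^2.
    for (unsigned u = 0; u < (1u << k); ++u)
        for (unsigned w = u + 1; w < (1u << k); ++w) {
            int d = __builtin_popcount(u ^ w);   // Hamming distance
            if (d >= 1 && d <= 2 && wanColor(u) == wanColor(w))
                std::printf("conflict: %u and %u\n", u, w);  // never printed
        }
    return 0;
}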
The Hypercube It is easily verified that this is a 4-coloring of Q23 . In fact, by Theorem 3.19, χ(Q23 ) ≥ 4 so this coloring is optimal. Theorem 3.22 will now allow us to determine exactly χ(Q2k ) when k = 2n − i for some i, n ∈ Z+ with n ≥ 3 and i ∈ [4]. Theorem 3.24. Let k = 2n − i for any i, n ∈ Z+ with n ≥ 3 and i ∈ [4]. Then: χ(Q2k ) = 2n . Proof. The result of Theorem 3.22 implies that: χ(Q2k ) ≤ 2 ≤2  log2 (2n −i+1) log2 (2n )  = 2n , and therefore, in addition with the result of Corollary 3.17, we have the desired result. One useful upper bound on χ(Q2n ) occurs when n is odd. In this case, defining n = 2k+1, χ(Q22k+1 ) ≤ 2χ(Q2k ), as we will prove in Theorem 3.26 by means of the technique used in [17, Theorem 1]. An example of this bound applied to Q23 will be seen in Example 3.27. The u (u + v) construction used in the following lemma and theorem was introduced in [13, p.76], and is a common technique in building longer codes from shorter codes. Lemma 3.25. Let V be a (k, M, 3) binary code for positive integers k and M . Define the following sets (where pu and qu are defined as in Chapter 1): Vp = {u pu (u + v) | u ∈ {0, 1}k , v ∈ V }, Vq = {u qu (u + v) | u ∈ {0, 1}k , v ∈ V }, Vp and Vq are (2k + 1, 2k M, 3) binary codes. Proof. The code length and size of Vp and Vq are trivially 2k + 1 and 2k M , respectively. We will prove the minimum distance of Vp is at least 3. Since Vq = Vp + µ2k+1 k+1 , the result for Vq will follow immediately from Proposition 1.25.  59  Chapter 3. The Hypercube Consider distinct v12k+1 , v22k+1 ∈ Vp . For v1 , v2 ∈ V and u1 , u2 ∈ {0, 1}k , let: v12k+1 = u1 pu1 (u1 + v1 ), v22k+1 = u2 pu2 (u2 + v2 ). We will show that d(v12n+1 , v22n+1 ) ≥ 3. First note the following: d(u1 , u2 ) + d(u1 + v1 , u2 + v2 ) = d(u1 , u2 ) + wt(u1 + v1 + u2 + v2 ) = d(u1 , u2 ) + d(u2 , u1 + v1 + v2 ) ≥ d(u1 , u1 + v1 + v2 ) by the triangle inequality = wt(u1 + v1 + u1 + v2 ) = d(u1 + v1 , u1 + v2 ) = d(v1 , v2 )  by Lemma 1.9.  Therefore, depending on whether or not v1 = v2 is satisfied, we have: d(v12k+1 , v22k+1 ) = d(u1 , u2 ) + d(pu1 , pu2 ) + d(u1 + v1 , u2 + v2 ) ≥ max{2d(u1 , u2 ) + d(pu1 , pu2 ), d(v1 , v2 )}. If v1 = v2 , then u1 = u2 . Moreover, d(u1 , u2 ) = 1 implies d(pu1 , pu2 ) = 1 (by Corollary 1.16), and thus: d(v12k+1 , v22k+1 ) ≥ 2d(u1 , u2 ) + d(pu1 , pu2 ) ≥ 3. If v1 = v2 , then d(v1 , v2 ) ≥ 3 since v1 , v2 ∈ V . Hence: d(v12k+1 , v22k+1 ) ≥ d(v1 , v2 ) ≥ 3. Theorem 3.26. χ(Q22k+1 ) ≤ 2χ(Q2k ) for any positive integer k. Proof. Fix k ∈ Z+ . We will show this upper bound on χ(Q22k+1 ) by creating a 2χ(Q2k )-coloring of Q22k+1 . Take an optimal coloring of Q2k and denote its color classes as Vjk ⊆ {0, 1}k for j ∈ [χ(Q2k )]. Clearly, we have: χ(Q2k )  |Vjk | = 2k . j=1  60  Chapter 3. The Hypercube We will use these color classes to build color classes in a coloring of To create our color classes of Q22k+1 , we define (for j ∈ [χ(Q2k )]):  Q22k+1 .  2k+1 Vp,j = {u pu (u + v) | u ∈ {0, 1}k , v ∈ Vjk }, 2k+1 Vq,j = {u qu (u + v) | u ∈ {0, 1}k , v ∈ Vjk }.  Obviously, we have created 2χ(Q2k ) such sets. We claim these sets are color classes in a 2χ(Q2k )-coloring of Q22k+1 . The sum of the vertices in the sets can be determined as follows: χ(Q2k )  χ(Q2k ) 2k+1 |Vρ,j |=  ρ∈{p,q} j=1  2k |Vjk | ρ∈{p,q} j=1  (2k )(2k )  = ρ∈{p,q}  22k  = ρ∈{p,q}  = (2)(22k ) = 22k+1 = n(Q22k+1 ). Moreover, Lemma 3.25 implies that these sets have minimum distance 3, and are thus independent in Q22k+1 . 
Therefore, we need only prove that they are pairwise disjoint, and as such their union contains every vertex of Q22k+1 . Assume, by way of contradiction, that for some ρ, θ ∈ {p, q} and j1 , j2 ∈ [χ(Q2k )]: 2k+1 2k+1 Vρ,j ∩ Vθ,j = ∅, 1 2 2k+1 2k+1 Vρ,j = Vθ,j . 1 2 2k+1 2k+1 Take v 2k+1 ∈ Vρ,j ∩ Vθ,j . We have, for some u1 , u2 ∈ {0, 1}k , v1 ∈ Vjk1 , 1 2 and v2 ∈ Vjk2 :  u1 ρu1 (u1 + v1 ) = u2 θu2 (u2 + v2 ). Thus we must have u1 = u2 . Trivially, u1 and u2 then are of the same parity; so for ρu1 = θu2 , we must have ρ = θ. Finally, since u1 = u2 and u1 + v1 = u2 + v2 , Lemma 1.9 implies that v1 = v2 . Since v1 = v2 , we must 61  Chapter 3. The Hypercube 2k+1 2k+1 have j1 = j2 as the Vjk form a partition of {0, 1}k . Thus Vρ,j = Vθ,j ,a 1 2 2 contradiction. Therefore these sets form color classes of a 2χ(Qk )-coloring of Q22k+1 .  Example 3.27. Let us look at the case k = 1. χ(Q21 ) = 2, with the optimal coloring having the following trivial color classes: V11 = {0}, V21 = {1}. We use these sets to build our color classes for an optimal coloring of Q23 . So, we apply our general formula: 3 Vp,j = {u pu (u + v)|u ∈ {0, 1}, v ∈ Vj1 }, 3 Vq,j = {u qu (u + v)|u ∈ {0, 1}, v ∈ Vj1 },  to build: 3 Vp,1 = {000, 111}, 3 Vq,1 = {010, 101}, 3 Vp,2 = {001, 110}, 3 Vq,2 = {011, 100}.  It is easily determined that these are the color classes of a 4-coloring of Q23 . In fact, these color classes are the same as those realized in the optimal coloring of Q23 determined in Example 3.23.  3.2.1  The Asymptotic Behavior of the Chromatic Number of the Square of the Cube  The result of Theorem 3.21 states that χ(Q2k ) is nondecreasing as k increases. ¨ In fact, Osterg˚ ard proved the following result in [17, Theorem 5]. Theorem 3.28. χ(Q2k ) = 1. k→∞ k lim  Therefore, as k increases, χ(Q2k ) approaches k. From Theorem 3.24, if k = 2n − i for n ≥ 3 and i ∈ [4], then χ(Q2k ) = 2n . Moreover, if k = 2n for n ∈ Z+ , then Corollary 3.20 yields χ(Q2k ) ≥ 2n +2. Therefore we can think of 62  Chapter 3. The Hypercube 2n − i as a “tail” of values satisfying χ(Q22n −i ) = 2n , with i = 1 representing the uppermost value at the base of the tail. We will now describe the length of this tail, T . Let T : Z+ \ {1, 2} → Z+ be the maximum positive integer i such that χ(Q22n −i ) = 2n . It must be the case that: 4 ≤ T (n) ≤ 2n − 2n−1 = 2n−1 . Based on the bounds in Appendix A we have T (3) = 4, 4 ≤ T (4) ≤ 7, and 4 ≤ T (5) ≤ 14. The result of Theorem 3.28 implies that T (n) gets relatively smaller as n tends to ∞, as we will show in Corollary 3.29. In other words, the lengths of these tails of values of k in which the chromatic number of Q2k is a power of 2 gets relatively smaller as k increases. Corollary 3.29. For every λ ∈ (0, 1), there exists some N ∈ Z+ such that T (n) < λ for all n > N . 2n Proof. Take arbitrary λ ∈ (0, 1) and define λ0 = 2 log2 (λ) . Clearly λ0 ≤ λ. Given a positive integer n, let k = 2n−1 +(1−2λ0 )2n−1 . Clearly, there exists a positive integer N1 such that k is an integer for all n > N1 . Moreover, by Theorem 3.19, we have: χ(Q2k ) k+1 −1≥ − 1 > 0. k k We now apply Theorem 3.28. For all N2 such that n > N2 implies:  > 0 there exists a positive integer  χ(Q2k ) −1< . k  (3.2)  2 −1 and fix the N2 that satisfies (3.2). Let N = max{N1 , N2 }, 2 − 2λ0 and hence n > N implies: Take =  χ(Q2k ) < = =  2 −1 k 2 − 2λ0 2 2n−1 + (1 − 2λ0 )2n−1 2 − 2λ0 2 (2 − 2λ0 ) 2n−1 2 − 2λ0  1+  = (2)2n−1 = 2n . 63  Chapter 3. 
Since χ(Q_k^2) < 2^n, by the definition of T(n), n > N implies:

T(n) < 2^n − k
     = 2^n − (2^{n−1} + (1 − 2λ_0)2^{n−1})
     = 2^n − (2 − 2λ_0)2^{n−1}
     = 2λ_0 · 2^{n−1}
     = λ_0 · 2^n
     ≤ λ · 2^n.

3.3  Applying Computational Methods to Determine the Chromatic Number of the Square of the 8-Dimensional Cube

As seen in Appendix A, the smallest k such that χ(Q_k^2) is unknown is k = 8. This graph has 2^8 = 256 vertices and 2^7 · (9 choose 2) = 4,608 edges. Although the actual value of χ(Q_8^2) is unknown, we know from Theorem 1.60 that A(8, 3) = 20 (by Theorem 1.44 we have A(8, 3) = A(9, 4)). Thus, we can use Proposition 3.16 to determine a lower bound on χ(Q_8^2) as follows:

χ(Q_8^2) ≥ ⌈256/20⌉ = ⌈12.8⌉ = 13.

Moreover, 14-colorings of Q_8^2 are known to exist. This was first exhibited by Hougardy in 1991 [23, p.168] from an adaptation of the Petford-Welsh algorithm described in Chapter 2.4.4. This adaptation will be covered in more detail in Chapter 3.3.3. Another 14-coloring was found independently by Royle in 1993 by means of a randomized search [11, p.157]. Hence, these colorings provide an upper bound on its chromatic number, so χ(Q_8^2) is either 13 or 14. Therefore, in order to determine the chromatic number, one either needs to find a 13-coloring, or prove that one does not exist.

Also, since there can be no color class of size greater than 20, if a 13-coloring exists, its color classes must fall into one of the following 5 cases:

• 12 independent sets of size 20 and 1 of size 16,
• 11 independent sets of size 20, 1 of size 19, and 1 of size 17,
• 11 independent sets of size 20 and 2 of size 18,
• 10 independent sets of size 20, 2 of size 19, and 1 of size 18, and
• 9 independent sets of size 20 and 4 of size 19.

These cases show that, if searching for a 13-coloring of Q_8^2, independent sets of size 15 or less can be ignored, as they could not possibly be a color class of a 13-coloring. The three algorithms introduced in Chapter 2.4 were then applied to the graph Q_8^2 in an effort to determine the chromatic number. However, the computational difficulty associated with this graph turned out to be far greater than expected.

3.3.1  Integer Programming

Consider the IP model to determine the existence of an m-coloring in a graph described in Chapter 2.4.2. We can denote every vertex of Q_k^2 by its integer representation, and create the binary variables x_{i,j} for all i in {0, 1, . . . , 2^k − 1} and j in [m]. A feasible solution to the following IP would then be an m-coloring of Q_k^2 (letting d(i1, i2) denote the Hamming distance between the binary representations of i1 and i2):

x_{i1,j} + x_{i2,j} ≤ 1      for all i1 and i2 in {0, 1, . . . , 2^k − 1} with 1 ≤ d(i1, i2) ≤ 2, and all j in [m],
∑_{j=1}^{m} x_{i,j} ≥ 1      for all i in {0, 1, . . . , 2^k − 1},
x_{i,j} ∈ {0, 1}             for all i in {0, 1, . . . , 2^k − 1} and j in [m].

We then applied this model to determine if Q_8^2 is 13-colorable (i.e. the case where k = 8 and m = 13). Computationally, we modeled this IP in C++ with CPLEX 9.0, a powerful mixed integer program optimizer designed by ILOG, the world's leading combinatorial optimization tool provider. However, the number of variables in this IP is 13 · n(Q_8^2) = (13)(256) = 3,328 and the number of constraints is 13 · e(Q_8^2) + n(Q_8^2) = (13)(4,608) + 256 = 60,160. After CPLEX's internal presolver was applied, the reduced IP matrix contained 13,607 rows, 3,328 columns, and 72,722 nonzero entries. However, it is possible to impose several additional constraints; before describing them, we sketch the adjacency test that generates the edge constraints above.
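A minimal sketch of that test, with vertices as integers and the GCC popcount builtin assumed:

// Enumerate the edges of Q_k^2: integers a and b are adjacent iff their
// binary representations differ in exactly one or two positions.
#include <vector>
#include <utility>

std::vector<std::pair<int, int>> edgesOfSquaredCube(int k) {
    std::vector<std::pair<int, int>> edges;
    for (int a = 0; a < (1 << k); ++a)
        for (int b = a + 1; b < (1 << k); ++b) {
            int d = __builtin_popcount(static_cast<unsigned>(a ^ b));
            if (d == 1 || d == 2) edges.emplace_back(a, b);
        }
    return edges;  // for k = 8 this yields the 4,608 edges counted above
}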
The Hypercube Consider the clique in Q2k defined as V = {08 , µ81 , µ82 , . . . , µ8k } (this was proven to be a clique in Lemma 3.18). It is clear that every vertex in V must be in a separate color class. Without loss of generality, we can therefore add the following constraints:  x0,1 = 1,  x20 ,2 = 1,  x21 ,3 = 1,  x22 ,4 = 1, . . .  ,  x2k ,k+1 = 1.  Including these constraints allowed CPLEX’s presolve to eliminate many more variables and constraints (such the variables representing vertices adjacent to vertex µki in color class i + 1, which must of course equal 0). We applied this IP to find optimal colorings for all 3 ≤ k ≤ 7 (that is, a 4coloring of Q23 and 8-colorings of Q2k for 4 ≤ k ≤ 7). The times to find these colorings are included in Figure 3.1, and were obviously very reasonable. Figure 3.1: Computation Times Using Integer Programming  However, applied to Q28 , the new reduced IP matrix contained 8, 431 rows, 2, 959 columns, and 47, 784 nonzero entries. This problem was unfortunately too large to compute in a reasonable amount of time. After running for over 23 hours, CPLEX had only explored 1, 040 out of a maximum possible 22,959 nodes of the branch and bound tree. No effort was made to help CPLEX determine the variable choice in the branch and bound. Instead, the defaults were used. Given the vertex transitivity of the graph and large number of symmetries of the IP model previously discussed, it seemed more worthwhile to use another computational method.  66  Chapter 3. The Hypercube  3.3.2  Column Generation  We then applied the column generation algorithm described in Chapter 2.4.3, again implemented in C++ with CPLEX 9.0. This was done first with the initial set S being the singleton sets of vertices. Moreover, we could impose the restriction on the MWIS phase of the algorithm that we only needed to generate independent sets of size 16 or greater, as prior knowledge indicated that a 13-coloring of Q2k could not contain any color classes of size less than 16. This approach showed much promise; however, the computation time at each node of the tree turned out to be prohibitive. For example, to solve the initial node of branch and bound tree, it took 2, 388 iterations of the IS-MWIS column generation method. This process took just over 35 minutes on a 2 G.Hz. Pentium processor with 2 G.B. of R.A.M. After approximately 19 hours, 18 nodes had been explored. The branch and bound tree for this method has maximum size of 232,640 nodes = 32, 640). This approach was abandoned since it seemed that (as 256 2 an optimal coloring was unlikely to be determined in a reasonable about of time. Varying the choice of the initial set of independent sets S showed no noticeable improvement. On the other hand, when applied to Q27 , the initial node took only 18 iterations and 1 second to perform on the same machine. It also turned out to find the optimal coloring with the independent sets generated in the first node. Part of the reason we were unable to bring the computation time at each node down to a manageable length of time is due to the immense number of columns available for the MWIS portion of the algorithm to choose from. From Proposition 2.25, we know that, given an independent set V and automorphism Φ, that Φ(V ) is independent. Therefore, given any independent set V , we can generate a collection of independent sets by producing the independent set Φ(V ) for every automorphism Φ. It is unlikely that every Φ(V ) will be distinct. 
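Producing such images is computationally cheap. The following is a minimal sketch for automorphisms of the form σ ∘ Ψ_x (vector addition followed by coordinate permutation, as in Chapter 3.1), with codewords encoded as bitmasks whose bit i holds coordinate i + 1:

// Apply the automorphism v -> sigma(v + x) to every word of a code C.
// sigma[i] in {0, ..., k-1} gives the image of coordinate i.
#include <vector>
#include <cstdint>

std::vector<std::uint32_t> applyAutomorphism(const std::vector<std::uint32_t>& C,
                                             std::uint32_t x,
                                             const std::vector<int>& sigma) {
    const int k = static_cast<int>(sigma.size());
    std::vector<std::uint32_t> image;
    image.reserve(C.size());
    for (std::uint32_t v : C) {
        std::uint32_t shifted = v ^ x;                     // vector addition Psi_x
        std::uint32_t permuted = 0;
        for (int i = 0; i < k; ++i)
            if (shifted & (std::uint32_t(1) << i))
                permuted |= std::uint32_t(1) << sigma[i];  // permute coordinate i
        image.push_back(permuted);
    }
    return image;  // an independent set of the same size, by Proposition 2.25
}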
However, if a graph has many automorphisms, the number of generated independent sets that are distinct will still likely be very large. Recall from Theorem 3.14 that |Aut(Q2k )| = 2k (k + 1)!. Therefore, there exist 28 9! = 92, 897, 280 automorphisms of Q28 , and given a single independent set V , we can immediately generate over 90 million (again, not necessarily distinct) independent sets of the same size.  67  Chapter 3. The Hypercube The challenges associated with the column generation algorithm for determining a 13-coloring of Q28 led us naturally to looking at properties of these independent sets. In Chapter 4, we will attempt to determine their structure.  3.3.3  The Petford-Welsh Algorithm  Recall the Petford-Welsh algorithm described in Chapter 2.4.4. As stated previously, Hougardy exhibited a 14-coloring of Q28 with an adaptation of this algorithm. We were able to reproduce his results, and find a 14-coloring of Q28 with a running time of within several seconds. The C++ code of our implementation of Hougardy’s adaptation is included in Appendix D, under the file name Hougardy.cpp. The 14-coloring generated with this algorithm is included in Appendix B. However, attempts with varying values for the probability input parameter did not yield a 13-coloring. This, of course, does not prove the non-existence of such a coloring.  68  Chapter 4  The Structure of Optimal Binary Codes Recall from the Chapter 3.3 that if a 13-coloring of Q28 exists, it contains at least 9 independent sets of size 20, and possibly as many as 12. Therefore, it contains at least 9 disjoint (8, 20, 3) binary codes. The knowledge of the structure of these codes is thus very important in proving the existence or non-existence of a 13-coloring, or providing MIP cuts to speed up the exploration of the branch and bound (cut) tree.  4.1  Computationally Determining all Weight Distributions  We know from Chapter 1 that every binary code has a weight distribution with respect to any vector u. One may wish to determine how many sequences of integers are a feasible weight distribution of some (n, M, d) binary code C. Since C + u is an (n, M, d) binary code whose weight distribution (with respect to 0n ) is the same as the weight distribution of C with respect to u, a sequence of integers is a feasible weight distribution with respect to any vector u if and only if it is a feasible weight distribution with respect to 0n . Given a specific sequence of integers, (Wi )ni=0 , it is easy to set up an IP that determines if there is some (n, M, d) binary code with that weight distribution. We create a binary variable vj for all j in {0, 1, . . . , 2n − 1}. This variable is set to equal 1 if the binary representation of integer j is included in C, and 0 otherwise. We will let wt(j) and d(j1 , j2 ) be defined with respect to the appropriate binary representations. We then determine if the following system of constraints has a feasible solution:  69  Chapter 4. The Structure of Optimal Binary Codes  2n −1  vj = M, j=0  vj = Wi  for all i in {0, 1, . . . , n},  j|wt(j)=i  vj1 + vj2 ≤ 1  for all j1 and j2 in {0, 1, . . . , n} satisfying 1 ≤ d(j1 , j2 ) ≤ d − 1.  It is clear that if there is a feasible solution to these constraints, then C = {j|vj = 1} is an (n, M, d) binary code. Otherwise, if this IP is infeasible, then no such code exists. However, this can be a very computationally expensive procedure. 
Fortunately, the knowledge of permutations can allow us to fix at least one of the codewords in C and speed this up dramatically. Proposition 4.1. Let C ⊆ {0, 1}n be a binary (n, M, d) code with weight distribution (Wi )ni=0 . Let v ∈ C with wt(v) = w and consider any u in {0, 1}n with wt(u) = w. Then there exists another binary (n, M, d) code C such that u ∈ C and C also has weight distribution (Wi )ni=0 . Proof. Consider the permutation σ in Sk such that σ(v) = u, which exists since wt(u) = wt(v). Define C = σ(C). It is clear that C is an (n, M, d) binary code that also has weight distribution (Wi )ni=0 . In addition, since v ∈ C, we have u ∈ C . In this chapter, we will be determining the feasible weight distributions of optimal binary codes of length 8 and minimum distance 3. It was proven in Theorem 1.60 that these are (8, 20, 3) binary codes. From [1, Table II], it is known that A(8, 6, 4) = 2. Therefore, in determining if (Wi )8i=0 is a feasible weight distribution of an (8, 20, 3) binary code C, consider the case where W4 ≥ 3. In this case, we can in fact fix two codewords to any u and v in C such that wt(u) = wt(v) = 4 and d(u, v) = 4. We will be using the codewords 00001111 and 00110011, whose integer representations are 15 and 51, respectively. The IP to determine if (Wi )8i=0 is a feasible weight distribution of an (8, 20, 3) binary code C is then defined by the following system of constraints, denoted (FD) for feasible distributions:  70  Chapter 4. The Structure of Optimal Binary Codes  255  vj = 20, j=0  vj = Wi  for all i in {0, 1, . . . , 8},  j|wt(j)=i  v j1 + v j2 ≤ 1  for all j1 and j2 in {0, 1, . . . , 8} satisfying 1 ≤ d(j1 , j2 ) ≤ 2,  v15 = 1  if W4 ≥ 1,  v51 = 1  if W4 ≥ 3.  Thus we could run this IP on every sequence of positive integers (Wi )8i=0 , 8  where  Wi = 20. Unfortunately, there are  28 8  = 3, 108, 105 such se-  i=0  quences. However, the result of Theorem 1.48 allows us to impose Wi ≤ A(8, 3, i) = A(8, 4, i) for all i. Proposition 1.46 implies that A(8, 3, 0) = A(8, 3, 1) = 1, and hence A(8, 3, 8) = A(8, 3, 7) = 1. In addition, Proposition 1.47 implies that 8 A(8, 3, 2) = A(8, 4, 2) = = 4, and hence A(8, 3, 6) = 4. Finally, [1, 2 Table I] contains the values A(8, 3, 3) = A(8, 3, 5) = 8 and A(8, 3, 4) = 14. Applying all of these constraints on the Wi dramatically reduces the number of sequences that must be tested. In addition, we will apply the result of the following theorem and corollary. Theorem 4.2. Let C be an (8, M, 3) binary code with distance distribution (Wi )8i=0 . Then W0 + W1 + W2 ≤ 4. Proof. Assume, by way of contradiction, there exists an (8, M, 3) binary code with weight distribution (Wi )8i=0 with W0 + W1 + W2 ≥ 5. If 08 ∈ C, W1 = W2 = 0 as all v in {0, 1}8 with wt(v) ≤ 2 is at distance at most 2 from 08 . Thus 08 ∈ / C as W1 + W2 ≤ 4. Since W1 ≤ A(8, 3, 1) = 1 and W2 ≤ A(8, 3, 2) = 4, we must have W1 = 1 and W2 = 4. Let u, v1 , v2 , v3 , v4 ∈ C with wt(u) = 1 and wt(vi ) = 2 for all i  71  Chapter 4. The Structure of Optimal Binary Codes in [4]. We have wt(vi ∩ vj ) = 0 for i and j in [4] with i = j as otherwise: d(vi , vj ) = wt(vi ) + wt(vj ) − 2wt(vi ∩ vj )  by Lemma 1.15  ≤2+2−2 = 2. 4  vi = 18 , and hence wt(u ∩ vi ) = 1 for some i in [4]. This  Therefore i=1  implies: d(u, vi ) = wt(u) + wt(vi ) − 2wt(u ∩ vi ) = 1 + 2 − 2 = 1, a contradiction. Corollary 4.3. Let C be an (8, M, 3) binary code with weight distribution (Wi )8i=0 . Then W6 + W7 + W8 ≤ 4. Proof. 
Assume, by way of contradiction, that there exists an (8, M, 3) binary code C with weight distribution (W_i)_{i=0}^8 satisfying W_6 + W_7 + W_8 ≥ 5. Let (W′_i)_{i=0}^8 be the weight distribution of C + 1^8. We have W′_0 + W′_1 + W′_2 = W_8 + W_7 + W_6 ≥ 5; however, C + 1^8 is an (8, M, 3) binary code by Proposition 1.25. This contradicts the result of Theorem 4.2.

Therefore, to determine all sequences of integers to test as feasible weight distributions of an (8, 20, 3) binary code, we must enumerate all feasible solutions to the following system, denoted (WD) for weight distributions:

$$\sum_{i=0}^{8} W_i = 20, \qquad (4.1)$$
$$W_0 \le 1, \qquad (4.2)$$
$$W_1 \le 1, \qquad (4.3)$$
$$W_2 \le 4, \qquad (4.4)$$
$$W_3 \le 8, \qquad (4.5)$$
$$W_4 \le 14, \qquad (4.6)$$
$$W_5 \le 8, \qquad (4.7)$$
$$W_6 \le 4, \qquad (4.8)$$
$$W_7 \le 1, \qquad (4.9)$$
$$W_8 \le 1, \qquad (4.10)$$
$$W_0 + W_1 + W_2 \le 4, \qquad (4.11)$$
$$W_6 + W_7 + W_8 \le 4, \qquad (4.12)$$
$$W_0 = 1 \Rightarrow W_1 + W_2 = 0, \qquad (4.13)$$
$$W_8 = 1 \Rightarrow W_6 + W_7 = 0, \qquad (4.14)$$
$$W_i \in \mathbb{Z}_{\ge 0} \qquad \text{for all } i \in \{0, 1, \ldots, 8\}.$$

Constraint (4.1) forces the code to have size 20. Constraints (4.2) through (4.10) enforce the W_i ≤ A(8, 3, i) inequalities. Constraints (4.11) and (4.12) follow from Theorem 4.2 and Corollary 4.3, respectively. Moreover, constraint (4.13) encodes the statement that if W_0 = 1, then W_1 = W_2 = 0, as was determined in the proof of Theorem 4.2. Finally, constraint (4.14) analogously encodes the statement that if W_8 = 1, then W_6 = W_7 = 0. (Implications of this form are allowed in CP models and are implemented quite differently from the standard ways they are modeled in MIPs.)

These constraints were then input into the constraint programming solver ILOG Solver 6.0, and 7,539 candidate weight distributions were enumerated.
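The (WD) system is also small enough to check by exhaustive enumeration without a CP solver. The following standalone sketch (our own code, independent of the ILOG implementation in Appendix E) loops over every sequence within the caps, only 97,200 in total, and should reproduce the count of 7,539:

#include <iostream>

int main() {
    const int cap[9] = {1, 1, 4, 8, 14, 8, 4, 1, 1};  // W_i <= A(8,3,i)
    int W[9], count = 0;
    // Enumerate every capped sequence and keep those satisfying
    // constraints (4.1) and (4.11)-(4.14).
    for (W[0] = 0; W[0] <= cap[0]; W[0]++)
     for (W[1] = 0; W[1] <= cap[1]; W[1]++)
      for (W[2] = 0; W[2] <= cap[2]; W[2]++)
       for (W[3] = 0; W[3] <= cap[3]; W[3]++)
        for (W[4] = 0; W[4] <= cap[4]; W[4]++)
         for (W[5] = 0; W[5] <= cap[5]; W[5]++)
          for (W[6] = 0; W[6] <= cap[6]; W[6]++)
           for (W[7] = 0; W[7] <= cap[7]; W[7]++)
            for (W[8] = 0; W[8] <= cap[8]; W[8]++) {
                int sum = 0;
                for (int i = 0; i < 9; i++) sum += W[i];
                if (sum != 20) continue;                      // (4.1)
                if (W[0] + W[1] + W[2] > 4) continue;         // (4.11)
                if (W[6] + W[7] + W[8] > 4) continue;         // (4.12)
                if (W[0] == 1 && W[1] + W[2] != 0) continue;  // (4.13)
                if (W[8] == 1 && W[6] + W[7] != 0) continue;  // (4.14)
                count++;
            }
    std::cout << count << std::endl;  // expected: 7539
    return 0;
}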
Each of these 7,539 candidate distributions was then input into the (FD) IP and solved for feasibility as a weight distribution using the mixed IP optimizer ILOG CPLEX 9.0. The C++ code that solved both (WD), and every (FD) iteration, is named CPDist.cpp and is included in Appendix E. In the end, 111 of the distributions were proven feasible, whereas 7,428 of the distributions were proven infeasible. All of the feasible weight distributions found are included in Appendix C. All computation was done on a computer with a 2.19 GHz AMD processor and 1.0 GB of RAM.

To show how important the codeword-fixing constraints are in the (FD) model, we re-executed the IP without the two conditional constraints for all 111 feasible distributions, and for 102 of the 7,428 infeasible distributions selected at random. The computation times with and without the conditional constraints for these 111 feasible and 102 infeasible distributions were compared.

The computation times with and without the conditional constraints for the feasible distributions are shown in Figure 4.1.

Figure 4.1: Computation Times for Feasible Distributions

The mean computation time with the constraints was 0.52 seconds, with a maximum of 2.28 seconds. The mean computation time with the constraints removed was 2.16 seconds, with a maximum of 40.49 seconds. Clearly the relative increase in times is significant; however, with only 111 distributions it did not make much difference in absolute terms.

The computation times with and without the conditional constraints for the randomly selected infeasible distributions are shown in Figure 4.2.

Figure 4.2: Computation Times for Infeasible Distributions

Here, it can be seen that including these constraints had a substantial impact on computation time. The mean computation time with the constraints was 12.62 seconds, with a maximum of 425.64 seconds, or approximately 7 minutes. The mean computation time with the constraints removed was 2,064 seconds, or about 34 minutes. The longest computation took 116,872 seconds, or over 32 hours. With over 7,000 distributions in which infeasibility must be determined, it is apparent that doing so for all of them would be extremely impractical without applying the knowledge of permutations to fix codewords.

4.2 Structural Results

4.2.1 On the Size of Shortened Codes

It is easily determined from the table in Appendix C that every (8, 20, 3) binary code contains between 8 and 12 codewords of even weight (and thus between 8 and 12 codewords of odd weight). This knowledge can be used to prove that every (8, 20, 3) binary code has between 8 and 12 codewords with a specified value in each coordinate, as will be proved in Corollary 4.5.

Theorem 4.4. Let C be an (n, M, 3) binary code with n ≥ 3, and fix t = ⌊n/2⌋. For any i in [n] and x in {0, 1}, C can be transformed into an (n, M, 3) binary code D whose weight distribution, denoted (W_ℓ)_{ℓ=0}^n, satisfies

$$\sum_{\ell=0}^{t} W_{2\ell} = |\{u \in C \mid \text{the } i\text{th coordinate of } u \text{ is } x\}|.$$

Proof. First consider the case where x = 0. Fix an arbitrary i in [n], and let D be defined as D = {τ_i(u) | u ∈ C}. Recall from the definition of τ_i that if the ith coordinate of u is 0, then τ_i(u) is of even parity; moreover, if the ith coordinate of u is 1, then τ_i(u) is of odd parity. In addition, by Theorem 3.13, τ_i is an automorphism of Q_n^2. Therefore, by Proposition 3.4, since C is an (n, M, 3) binary code, D is an (n, M, 3) binary code. We then have

$$\sum_{\ell=0}^{t} W_{2\ell} = |\{v \in D \mid \mathrm{wt}(v) \text{ is even}\}| = |\{u \in C \mid \text{the } i\text{th coordinate of } u \text{ is } 0\}|.$$

In the case where x = 1, we instead define D to be D = {τ_i(u) + µ_i^n | u ∈ C}. D is again an (n, M, 3) binary code, and in this case

$$\sum_{\ell=0}^{t} W_{2\ell} = |\{v \in D \mid \mathrm{wt}(v) \text{ is even}\}| = |\{u \in C \mid \text{the } i\text{th coordinate of } u \text{ is } 1\}|.$$

Corollary 4.5. Let C be an (8, 20, 3) binary code. Then, for any i in [8] and x in {0, 1},

$$8 \le |\{u \in C \mid \text{the } i\text{th coordinate of } u \text{ is } x\}| \le 12.$$

Proof. Assume, by way of contradiction, that for some i in [8] and x in {0, 1},

$$|\{u \in C \mid \text{the } i\text{th coordinate of } u \text{ is } x\}| = M',$$

where M′ ≤ 7 or M′ ≥ 13. Then, by Theorem 4.4, C can be transformed into an (8, 20, 3) binary code D with M′ codewords of even weight. Such a code cannot exist, by the computational results of Chapter 4.1.

Another way of expressing this result is that for any (8, 20, 3) binary code C, any i in [8], and any x in {0, 1}, the size of the shortened code C_{i,x} must satisfy 8 ≤ |C_{i,x}| ≤ 12.
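These coordinate counts are cheap to verify on any concrete code. A small sketch (our own helper code; it uses the code C1 listed in Chapter 4.3 below, and reads the ith coordinate of a codeword as bit 8 − i of its integer representation):

#include <iostream>

int main() {
    // C1, an (8, 20, 3) binary code given by integer representations
    // (listed in Chapter 4.3).
    int C1[20] = {0, 7, 25, 30, 42, 53, 75, 84, 97, 108, 114, 127,
                  140, 147, 169, 182, 194, 221, 231, 248};
    // For each coordinate i, count the codewords whose ith coordinate
    // is 1; by Corollary 4.5 that count, and its complement in 20,
    // must both lie between 8 and 12.
    for (int i = 1; i <= 8; i++) {
        int ones = 0;
        for (int c = 0; c < 20; c++)
            ones += (C1[c] >> (8 - i)) & 1;
        std::cout << "coordinate " << i << ": " << 20 - ones
                  << " zeros, " << ones << " ones" << std::endl;
    }
    return 0;
}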
4.2.2 Bounds on the Weight Distributions of Color Classes

The weight distributions listed in Appendix C provide many interesting results about what the color classes of a 13-coloring of Q_8^2 must look like. First and foremost, assume there exists a 13-coloring of Q_8^2 and denote its color classes C_j for j ∈ [13]. Moreover, let (W_i^j)_{i=0}^8 denote the weight distribution of C_j. Since every vector of weight i belongs to exactly one color class, it is clear that

$$\sum_{j=1}^{13} W_i^j = \binom{8}{i} \qquad \text{for all } i \in \{0, 1, \ldots, 8\}.$$

Since between 9 and 12 of these C_j must be of size 20, this greatly restricts the combinations of weight distributions that could form a possible 13-coloring of Q_8^2. In fact, the algorithm described in Chapter 4.1 can trivially be modified to generate the weight distributions of all (8, M, 3) codes for M < 20. Unfortunately, this led to too many cases to reasonably keep track of. However, useful bounds can still be found.

For example, recall from [1, Table I] that A(8, 3, 4) = 14. Therefore, any binary code with length 8 and minimum distance at least 3 can have a maximum of 14 codewords whose weight is 4. However, as can be seen in Appendix C, any optimal (8, 20, 3) binary code C with weight distribution (W_i)_{i=0}^8 must satisfy W_4 ≤ 10. Moreover, if W_4 ≥ 9, then W_0 + W_8 ≥ 1, so at least one of 0^8 and 1^8 must belong to C. Therefore, if a 13-coloring of Q_8^2 exists, at most 2 of the minimum 9 color classes of size 20 can have at least 9 codewords of weight 4. Hence, at least 7 color classes of size 20 must have no more than 8 codewords of weight 4.

Moreover, if W_3 = A(8, 3, 3) = 8, then C follows weight distribution 109 or 110 of Appendix C, and therefore W_0 = 1; hence there can be at most one color class with 8 codewords of weight 3. An analogous argument shows that there can be at most one color class with 8 codewords of weight 5.

Bounds like these can be added to MIPs or other models for finding, or proving the non-existence of, a 13-coloring of Q_8^2. Attempts were made in this direction.

4.3 The Number of Nonequivalent Codes

Consider the following five (8, 20) binary codes (given in terms of their integer representations), which can easily be verified to be (8, 20, 3) binary codes:

C1 = {0, 7, 25, 30, 42, 53, 75, 84, 97, 108, 114, 127, 140, 147, 169, 182, 194, 221, 231, 248},
C2 = {0, 7, 25, 30, 42, 53, 75, 84, 97, 108, 114, 127, 140, 147, 166, 169, 176, 194, 197, 216},
C3 = {0, 43, 53, 62, 79, 83, 92, 102, 121, 135, 154, 157, 172, 179, 201, 214, 229, 234, 240, 255},
C4 = {15, 26, 38, 51, 66, 73, 84, 101, 120, 127, 133, 144, 172, 185, 206, 211, 221, 224, 235, 246},
C5 = {15, 20, 26, 37, 40, 51, 67, 76, 118, 121, 134, 145, 171, 188, 201, 210, 223, 224, 238, 245}.

The distance distributions of these five codes are easily calculated, and are shown in Table 4.1.

Table 4.1: Distance Distributions of (8, 20, 3) Binary Codes

Code   A0   A1   A2   A3    A4    A5    A6    A7    A8
C1     1    0    0    5.8   7.4   3.4   1.4   0.8   0.2
C2     1    0    0    6.1   7.5   2.9   1.5   0.9   0.1
C3     1    0    0    5.6   8     3.2   1.2   0.8   0.2
C4     1    0    0    6     7.2   3     1.8   1     0
C5     1    0    0    5.6   7.6   3.2   1.6   0.8   0.2

As these distance distributions are all distinct, by the result of Lemma 1.38 these codes are pairwise nonequivalent. Therefore there are at least five nonequivalent (8, 20, 3) binary codes. In fact, five was the exact number determined computationally in [18] (Baicheva and Kolev first proved this result in [3]). Thus every (8, 20, 3) binary code is equivalent to one of these five aforementioned codes. Therefore every weight distribution listed in Appendix C is realized in a code (or codes) equivalent to at least one C_i for i in [5]. To determine which C_i, we calculated C_i + v for all i in [5] and v in {0, 1}^8. This yielded 1,280 (not necessarily distinct) (8, 20, 3) binary codes, and the weight distribution of each one was calculated.
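This computation is straightforward to reproduce on the integer representations, where adding v corresponds to a bitwise XOR. A sketch (our own code; if it reproduces the computation faithfully, the distinct distributions it collects should be exactly the 111 of Appendix C):

#include <array>
#include <iostream>
#include <set>

// Number of 1 bits of j.
int wt(int j) {
    int count = 0;
    for (; j > 0; j >>= 1)
        count += j & 1;
    return count;
}

int main() {
    // C1 through C5, as listed above.
    const int codes[5][20] = {
        {0,7,25,30,42,53,75,84,97,108,114,127,140,147,169,182,194,221,231,248},
        {0,7,25,30,42,53,75,84,97,108,114,127,140,147,166,169,176,194,197,216},
        {0,43,53,62,79,83,92,102,121,135,154,157,172,179,201,214,229,234,240,255},
        {15,26,38,51,66,73,84,101,120,127,133,144,172,185,206,211,221,224,235,246},
        {15,20,26,37,40,51,67,76,118,121,134,145,171,188,201,210,223,224,238,245}};
    std::set<std::array<int, 9>> seen;  // the distinct weight distributions
    for (int i = 0; i < 5; i++) {
        for (int v = 0; v < 256; v++) {
            std::array<int, 9> W{};  // W[k] = # codewords of weight k in Ci + v
            for (int c = 0; c < 20; c++)
                W[wt(codes[i][c] ^ v)]++;  // addition of v is a XOR
            seen.insert(W);
        }
    }
    std::cout << seen.size() << " distinct weight distributions" << std::endl;
    return 0;
}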
Each of the 111 weight distributions appeared at least once among these codes, and for each of the 111, the C_i that generate it were also recorded in that appendix. Note that, as some weight distributions can be generated by more than one C_i, two (8, 20, 3) binary codes with the same weight distribution need not be equivalent.

4.4 Another Definition of Equivalence

The hypercube coloring problem leads to another definition of code equivalence.

Definition 4.6. Let C and D be (n, M, d) binary codes. C and D are Q_n^j-equivalent if C = Φ(D) for some Φ in Aut(Q_n^j). C is Q_n^j-unique if every (n, M, d) binary code is Q_n^j-equivalent to C.

Analogous to equivalence in the code-theoretical sense, Q_n^j-equivalence is an equivalence relation on the set of all binary codes. Since coordinate permutation and vector addition functions are automorphisms of Q_n, and hence of Q_n^j, for any choices of positive integers j and n, it is clear that if two codes are equivalent in the code-theoretical sense, then they are also Q_n^j-equivalent. In fact, for j = 1, these two definitions coincide.

While there are five nonequivalent (8, 20, 3) binary codes, Hougardy determined via an exhaustive search that there are only two that are Q_8^2-nonequivalent [10]. The two codes exhibited by Hougardy are C1 and C2 above. Therefore C3, C4, and C5 must each be Q_8^2-equivalent to exactly one of C1 and C2. This can indeed be verified. Let σ_3, σ_4, and σ_5 be the permutations in S_8 defined by (2 6)(5 8), (1 3 2)(4 7 8)(5 6), and (2 8)(3 4 7 6), respectively. We then have

σ_3 ∘ τ_1(C3 + 255) = C2,
σ_4 ∘ τ_1(C4 + 73) = C1,
σ_5 ∘ τ_1(C5 + 67) = C1,

where the addition, of course, corresponds to the addition of the binary vector with the appropriate integer representation. Since the functions used are all automorphisms of the square of the cube (as seen in Chapter 3.1), C2 and C3 are Q_8^2-equivalent. Moreover, so are C1, C4, and C5.

As stated previously, two (8, 20, 3) binary codes with the same weight distribution need not be equivalent from a code-theoretical perspective. In fact, they need not even be Q_8^2-equivalent. Consider, for example, weight distribution 14 in Appendix C. It can be realized in codes equivalent to both C3 and C5. C3 is Q_8^2-equivalent to C2, and C5 is Q_8^2-equivalent to C1. Therefore, these codes are not Q_8^2-equivalent.

Knowing the structure of equivalent codes invites further computational investigation into resolving whether or not a 13-coloring of Q_8^2 exists. We have looked at this only briefly. One can produce a large number of (8, 20, 3) codes chosen at random by first choosing at random one of the C_i for i in [5], then choosing at random a v in [255], and then choosing at random a permutation from S_8; a sketch of this generation step is given below. The set of these codes is then put into a MIP similar to the IS method of the column generation algorithm described in Chapter 2.4.3 to see if they can produce a 13-coloring. A column generation approach could be used to increase the size of the set using this random method. Unfortunately, even if no such 13-coloring is found, it will not prove that one doesn't exist. On the other hand, the only algorithm known to find even a 14-coloring of Q_8^2 was the modified Petford-Welsh randomized algorithm. Therefore, if a 13-coloring exists, then it may very well be found by a random method.
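A minimal sketch of the random generation step (our own code and naming; for brevity it draws translates and coordinate permutations of C1 only, whereas the full scheme also chooses the base code C_i at random). Both operations preserve the (8, 20, 3) parameters, so every output is an (8, 20, 3) code equivalent to C1:

#include <algorithm>
#include <iostream>
#include <random>

// Apply a coordinate permutation to the 8-bit representation of c:
// bit b of the output is bit perm[b] of the input.
int permuteBits(int c, const int perm[8]) {
    int out = 0;
    for (int b = 0; b < 8; b++)
        out |= ((c >> perm[b]) & 1) << b;
    return out;
}

int main() {
    const int C1[20] = {0, 7, 25, 30, 42, 53, 75, 84, 97, 108, 114, 127,
                        140, 147, 169, 182, 194, 221, 231, 248};
    std::mt19937 rng(2008);  // an arbitrary fixed seed, for reproducibility
    int perm[8] = {0, 1, 2, 3, 4, 5, 6, 7};
    std::shuffle(perm, perm + 8, rng);                        // random sigma in S8
    int v = std::uniform_int_distribution<int>(1, 255)(rng);  // random v in [255]
    for (int i = 0; i < 20; i++)
        std::cout << permuteBits(C1[i] ^ v, perm) << " ";     // sigma(C1 + v)
    std::cout << std::endl;
    return 0;
}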
Chapter 5

Conclusion

We introduced the study of graph theory and, specifically, vertex coloring problems. The decision problem of determining whether or not a general graph is m-colorable is NP-complete for m ≥ 3. Our main focus was on determining the chromatic number of the square of the hypercube, Q_k^2. Our computational effort was focused on the smallest value of k for which this is currently unknown, k = 8. To determine the chromatic number of Q_8^2, based on current knowledge, requires either exhibiting a 13-coloring or proving that one cannot exist. While our computational approaches did not determine this value, they did serve to direct our attention to the study of binary codes. The knowledge of binary codes and their structure further strengthened our understanding of the combinatorial nature of this class of graphs. We then directed our computational efforts to determining the structure of optimal binary codes with length 8 and minimum distance at least 3, of which there must be at least 9 representing the color classes of a 13-coloring of Q_8^2.

Future work may focus on the following:

• Investigating further the link between weight and distance distributions of binary codes,
• Applying these results to develop new adaptations of graph coloring algorithms to determine the chromatic number of Q_8^2,
• Generalizing our research to determine values of, or narrow the bounds on, other unknown values of the chromatic number of Q_k^2.

Bibliography

[1] E. Agrell, A. Vardy, and K. Zeger. Upper bounds for constant-weight codes. IEEE Trans. Inform. Theory, 46(7):2373–2395, 2000.
[2] E. Agrell, A. Vardy, and K. Zeger. A table of upper bounds for binary codes. IEEE Trans. Inform. Theory, 47(7):3004–3006, 2001.
[3] T. Baicheva and E. Kolev. Binary codes of length eight, minimum distance three and twenty codewords. In Proc. II Int. Workshop Optimal Codes and Related Topics, pages 5–8, Sozopol, Bulgaria, 1998.
[4] M. R. Best and A. E. Brouwer. The triply shortened binary Hamming code is optimal. Discrete Math., 17(3):235–245, 1977.
[5] M. R. Best, A. E. Brouwer, F. J. MacWilliams, A. M. Odlyzko, and N. J. A. Sloane. Bounds for binary codes of length less than 25. IEEE Trans. Inform. Theory, IT-24(1):81–93, 1978.
[6] D. Brélaz. New methods to color the vertices of a graph. Comm. ACM, 22(4):251–256, 1979.
[7] P. Delsarte. Bounds for unrestricted codes, by linear programming. Philips Res. Rep., 27:272–289, 1972.
[8] P. Delsarte. An algebraic approach to the association schemes of coding theory. Philips Res. Rep. Suppl., (10):vi+97, 1973.
[9] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. A Series of Books in the Mathematical Sciences. W. H. Freeman and Co., San Francisco, Calif., 1979.
[10] S. Hougardy. Personal email communication, 2008.
[11] T. R. Jensen and B. Toft. Graph Coloring Problems. Wiley-Interscience Series in Discrete Mathematics and Optimization. John Wiley & Sons Inc., New York, 1995.
[12] N. Linial, R. Meshulam, and M. Tarsi. Matroidal bijections between graphs. J. Combin. Theory Ser. B, 45(1):31–44, 1988.
[13] F. J. MacWilliams and N. J. A. Sloane. The Theory of Error-Correcting Codes. II. North-Holland Mathematical Library, Vol. 16. North-Holland Publishing Co., Amsterdam, 1977.
[14] S. McKinley. The Hamming codes and Delsarte's linear programming bound. Master's thesis, Portland State University, May 2003.
[15] A. Mehrotra and M. A. Trick. A column generation approach for graph coloring. INFORMS Journal on Computing, 8:344–354, 1996.
[16] Z. Miller and M. Perkel.
A stability theorem for the automorphism groups of powers of the n-cube. Australas. J. Combin., 10:17–28, 1994.
[17] P. R. J. Östergård. On a hypercube coloring problem. J. Combin. Theory Ser. A, 108(2):199–204, 2004.
[18] P. R. J. Östergård, T. Baicheva, and E. Kolev. Optimal binary one-error-correcting codes of length 10 have 72 codewords. IEEE Trans. Inform. Theory, 45(4):1229–1231, 1999.
[19] A. D. Petford and D. J. A. Welsh. A randomised 3-colouring algorithm. Discrete Math., 74(1-2):253–261, 1989. Graph colouring and variations.
[20] R. M. Roth. Introduction to Coding Theory. Cambridge University Press, New York, second edition, 2006.
[21] P.-J. Wan. Near-optimal conflict-free channel set assignments for an optical cluster-based hypercube network. J. Comb. Optim., 1(2):179–186, 1997.
[22] D. B. West. Introduction to Graph Theory. Prentice Hall, Inc., Upper Saddle River, New Jersey, second edition, 2001.
[23] G. M. Ziegler. Coloring Hamming graphs, optimal binary codes, and the 0/1-Borsuk problem in low dimensions. In Computational Discrete Mathematics, volume 2122 of Lecture Notes in Comput. Sci., pages 159–171. Springer, Berlin, 2001.

Appendices

Appendix A

Current Status of Coloring the Square of the Cube

Table A.1: Current Status of Coloring the Square of the Cube (k = 1 to 15; continued below for k = 16 to 31)

 k   n(Q_k^2)   e(Q_k^2)    A(k, 3)     ω(Q_k^2)   n(Q_k^2)/A(k, 3)   χ(Q_k^2)
 1   2          1           1 (a)       2          2                  2
 2   4          6           1 (a)       4          4                  4
 3   8          24          2 (a)       4          4                  4
 4   16         80          2 (a)       5          8                  8 (e)
 5   32         240         4 (a)       6          8                  8 (e)
 6   64         672         8 (a)       7          8                  8 (e)
 7   128        1,792       16 (a)      8          8                  8 (e)
 8   256        4,608       20 (a)      9          12.8               13 (d) - 14 (c)
 9   512        11,520      40 (a)      10         12.8               13 (d) - 16 (g)
10   1,024      28,160      72 (a)      11         14.2               15 (d) - 16 (g)
11   2,048      67,584      144 (a)     12         14.2               15 (d) - 16 (g)
12   4,096      159,744     256 (a)     13         16                 16 (e)
13   8,192      372,736     512 (a)     14         16                 16 (e)
14   16,384     860,160     1,024 (a)   15         16                 16 (e)
15   32,768     1,966,080   2,048 (a)   16         16                 16 (e)

Key to annotations (used in both halves of the table):
(a) [2, Table I].  (b) Theorem 1.61.  (c) Exhibited 14-colorings.  (d) Proposition 3.16.  (e) Theorem 3.24.  (f) Theorem 3.26.  (g) Theorem 3.21.
Table A.1 - continued from previous page (k = 16 to 31)

 k   n(Q_k^2)        e(Q_k^2)          A(k, 3)
16   65,536          4,456,448         2,720 (a) - 3,276 (a)
17   131,072         10,027,008        5,312 (a) - 6,552 (a)
18   262,144         22,413,312        10,496 (a) - 13,104 (a)
19   524,288         49,807,360        20,480 (a) - 26,208 (a)
20   1,048,576       110,100,480       36,864 (a) - 43,689 (a)
21   2,097,152       242,221,056       73,728 (a) - 87,378 (a)
22   4,194,304       530,579,456       147,456 (a) - 173,491 (a)
23   8,388,608       1,157,627,904     294,912 (a) - 344,308 (a)
24   16,777,216      2,516,582,400     524,288 (a) - 599,185 (a)
25   33,554,432      5,452,595,200     1,048,576 (a) - 1,198,370 (a)
26   67,108,864      11,777,605,632    2,097,152 (a) - 2,396,740 (a)
27   134,217,728     25,367,150,592    4,194,304 (a) - 4,793,480 (a)
28   268,435,456     54,492,397,568    8,388,608 (b)
29   536,870,912     116,769,423,360   16,777,216 (b)
30   1,073,741,824   249,644,974,080   33,554,432 (b)
31   2,147,483,648   532,575,944,704   67,108,864 (b)

 k   n(Q_k^2)/A(k, 3)      χ(Q_k^2)
16   20.004884 - 24.094    21 (d) - 28 (g)
17   20.004884 - 24.675    21 (d) - 28 (f)
18   20.004884 - 24.976    21 (d) - 32 (g)
19   20.004884 - 25.6      21 (d) - 32 (g)
20   24.00091556 - 28.4    25 (d) - 32 (g)
21   24.00091556 - 28.4    25 (d) - 32 (g)
22   24.17591691 - 28.4    25 (d) - 32 (g)
23   24.36367438 - 28.4    25 (d) - 32 (g)
24   28.00006008 - 32      29 (d) - 32 (g)
25   28.00006008 - 32      29 (d) - 32 (g)
26   28.00006008 - 32      29 (d) - 32 (g)
27   28.00006008 - 32      29 (d) - 32 (g)
28   32                    32 (e)
29   32                    32 (e)
30   32                    32 (e)
31   32                    32 (e)
Appendix B

A 14-coloring of the Square of the 8-Cube

Table B.1: A 14-coloring of Q_8^2

V1:  13 19 34 62 75 80 103 104 125 142 152 171 183 194 197 223 244 250
V2:  4 24 31 42 53 77 82 97 102 123 147 160 175 182 185 199 202 212 236
V3:  2 17 30 47 56 69 91 99 108 118 137 151 164 178 189 207 220 234 241
V4:  22 27 48 61 78 88 98 105 119 132 143 145 163 168 190 210 221 229 251
V5:  12 18 35 57 65 94 106 116 139 149 166 176 191 196 216 237 243
V6:  3 8 37 59 60 68 87 89 111 112 141 148 154 162 177 193 206 232 246 253
V7:  16 29 39 44 79 83 96 117 130 133 158 169 179 180 204 217 230 255
V8:  5 11 28 50 63 64 86 110 121 134 136 161 205 209 218 235 247 252
V9:  14 23 36 43 49 67 72 84 109 114 129 146 157 167 184 198 219 224 245 254
V10: 6 9 32 55 58 74 95 113 124 144 155 174 195 213 228 233 242
V11: 15 21 40 54 70 73 90 101 127 138 156 173 187 192 215 227 238 248
V12: 0 41 46 51 71 81 92 100 122 131 159 165 188 200 214 226 239 249
V13: 7 10 25 33 52 76 85 107 126 128 150 172 186 201 211 231 240
V14: 1 20 26 38 45 66 93 115 120 135 140 153 170 181 203 208 222 225

The V_i for i in [14] are the color classes of the 14-coloring of Q_8^2 determined by the Hougardy.cpp code of Appendix D, with the vectors given by their integer representations.

Appendix C

Weight Distributions of (8, 20, 3) Binary Codes

Table C.1: Weight Distributions of (8, 20, 3) Binary Codes

Dist.  W0 W1 W2 W3 W4 W5 W6 W7 W8   Realized in code(s) equivalent to
  0     0  0  1  5  5  5  3  1  0   C2
  1     0  0  2  4  6  4  3  1  0   C2
  2     0  0  2  5  6  4  2  1  0   C1, C4
  3     0  0  2  6  4  4  3  1  0   C2
  4     0  0  2  6  5  3  3  1  0   C1, C4
  5     0  0  2  7  3  4  3  1  0   C3
  6     0  0  2  7  4  4  2  1  0   C5
  7     0  0  3  3  4  6  4  0  0   C2
  8     0  0  3  3  6  4  3  1  0   C3
  9     0  0  3  4  4  6  3  0  0   C1
 10     0  0  3  4  6  4  3  0  0   C5
 11     0  0  3  5  3  5  4  0  0   C4
 12     0  0  3  5  4  4  3  1  0   C1, C4
 13     0  0  3  5  5  3  3  1  0   C2
 14     0  0  3  5  6  2  3  1  0   C3, C5
 15     0  0  3  6  3  4  3  1  0   C2
 16     0  0  3  6  4  4  3  0  0   C1
 17     0  0  3  6  5  2  3  1  0   C2
 18     0  0  3  6  5  3  2  1  0   C1, C4
 19     0  0  3  7  2  4  3  1  0   C5
 20     0  0  3  7  3  3  3  1  0   C2
 21     0  0  3  7  3  4  2  1  0   C3
 22     0  0  3  7  4  2  3  1  0   C1, C4
 23     0  0  3  7  4  3  2  1  0   C2
 24     0  0  4  2  4  6  4  0  0   C3
 25     0  0  4  4  2  6  4  0  0   C1
 26     0  0  4  4  3  5  4  0  0   C2
 27     0  0  4  4  4  4  4  0  0   C3, C5
 28     0  0  4  5  3  4  4  0  0   C2
 29     0  0  4  5  3  5  3  0  0   C4
 30     0  0  4  6  2  4  4  0  0   C1
 31     0  0  4  6  3  3  3  1  0   C4
 32     0  0  4  6  4  2  3  1  0   C2
 33     0  0  4  6  4  2  4  0  0   C3
 34     0  0  4  6  4  3  3  0  0   C2
 35     0  1  0  0 10  8  0  0  1   C2
 36     0  1  0  5  5  5  3  1  0   C3
 37     0  1  1  3  7  7  0  0  1   C2
 38     0  1  1  3  8  6  0  0  1   C1, C4
 39     0  1  1  4  6  5  2  1  0   C2
 40     0  1  1  4  6  7  0  0  1   C3
 41     0  1  1  4  7  4  2  1  0   C1, C4
 42     0  1  1  5  5  5  2  1  0   C3
 43     0  1  1  5  6  5  1  1  0   C5
 44     0  1  2  2  7  7  0  0  1   C1, C4
 45     0  1  2  2  8  6  0  0  1   C2
 46     0  1  2  2  9  5  0  0  1   C3, C5
 47     0  1  2  3  4  7  3  0  0   C2
 48     0  1  2  3  5  6  3  0  0   C1, C4
 49     0  1  2  3  6  4  3  1  0   C2
 50     0  1  2  3  6  5  2  1  0   C1, C4
 51     0  1  2  3  6  7  0  0  1   C2
 52     0  1  2  3  7  4  2  1  0   C2
 53     0  1  2  3  8  3  2  1  0   C3, C5
 54     0  1  2  3  8  5  0  0  1   C2
 55     0  1  2  4  3  7  3  0  0   C3
 56     0  1  2  4  4  7  2  0  0   C5
 57     0  1  2  4  5  4  3  1  0   C1, C4
 58     0  1  2  4  5  5  2  1  0   C2
 59     0  1  2  4  5  7  0  0  1   C5
 60     0  1  2  4  6  5  2  0  0   C1, C4
 61     0  1  2  4  7  3  2  1  0   C2
 62     0  1  2  4  7  4  1  1  0   C1, C4
 63     0  1  2  4  7  5  0  0  1   C1, C4
 64     0  1  2  5  4  5  2  1  0   C5
 65     0  1  2  5  5  4  2  1  0   C2
 66     0  1  2  5  5  5  1  1  0   C3
 67     0  1  2  5  6  3  2  1  0   C1, C4
 68     0  1  2  5  6  4  1  1  0   C2
 69     0  1  3  2  4  6  4  0  0   C2
 70     0  1  3  2  4  7  3  0  0   C1, C4
 71     0  1  3  2  5  6  3  0  0   C2
 72     0  1  3  2  6  5  3  0  0   C3, C5
 73     0  1  3  3  3  6  4  0  0   C4
 74     0  1  3  3  3  7  3  0  0   C2
 75     0  1  3  3  5  5  3  0  0   C2
 76     0  1  3  3  5  6  2  0  0   C1, C4
 77     0  1  3  3  6  3  3  1  0   C5
 78     0  1  3  3  6  6  0  0  1   C4
 79     0  1  3  4  2  7  3  0  0   C5
 80     0  1  3  4  3  6  3  0  0   C2
 81     0  1  3  4  3  7  2  0  0   C3
 82     0  1  3  4  4  5  3  0  0   C1, C4
 83     0  1  3  4  4  6  2  0  0   C2
 84     0  1  3  4  5  4  2  1  0   C1, C4
 85     0  1  3  4  6  3  2  1  0   C2
 86     0  1  3  4  6  3  3  0  0   C3
 87     0  1  3  4  6  4  2  0  0   C2
 88     0  1  3  5  5  5  0  0  1   C2
 89     0  1  3  5  5  5  0  1  0   C3
 90     0  1  3  5  5  5  1  0  0   C2
 91     1  0  0  0 10  8  0  0  1   C3
 92     1  0  0  4  8  6  0  0  1   C1
 93     1  0  0  4  9  5  0  0  1   C2
 94     1  0  0  4 10  4  0  0  1   C3, C5
 95     1  0  0  5  5  5  3  1  0   C2
 96     1  0  0  5  7  4  2  1  0   C1, C4
 97     1  0  0  5  8  3  2  1  0   C2
 98     1  0  0  5  9  2  2  1  0   C3, C5
 99     1  0  0  5  9  4  0  0  1   C2
100     1  0  0  6  6  3  3  1  0   C4
101     1  0  0  6  8  2  2  1  0   C2
102     1  0  0  6  8  3  1  1  0   C1, C4
103     1  0  0  6  8  4  0  0  1   C1
104     1  0  0  7  5  4  2  1  0   C5
105     1  0  0  7  6  3  2  1  0   C2
106     1  0  0  7  6  4  1  1  0   C3
107     1  0  0  7  7  2  2  1  0   C1, C4
108     1  0  0  7  7  3  1  1  0   C2
109     1  0  0  8 10  0  0  0  1   C3
110     1  0  0  8 10  0  0  1  0   C2

Appendix D

Hougardy.cpp

#include <ilcplex/ilocplex.h>
#include <cmath>
#include <string>

ILOSTLBEGIN

typedef IloArray<IloIntArray> IloIntMatrix;

IloInt lastTime = 0;
IloInt timePeriod = 5;
int maxTotalColored = 0;

// Hamming distance between two bit strings.
int bitStringDiff(IloIntArray string1, IloIntArray string2){
    int count = 0;
    for(int i = 0; i < string1.getSize(); i++){
        if(string1[i] != string2[i]){
            count++;
        }
    }
    return count;
}

int main()
{
    IloEnv env;

    try{
        IloInt nBits = 8;
        IloInt nVertices = int(pow(2, nBits));
        IloInt nColorClasses = int(pow(2, int(ceilf(log(float(nBits + 1)) / log(2.0)))));
        cout << "number of bits = " << nBits << endl;

        // Build the bit-string representation of each vertex.
        IloIntMatrix bitStrings(env, nVertices);
        for(int v = 0; v < nVertices; v++){
            bitStrings[v] = IloIntArray(env, nBits);
            int remainder = v;
            for(int b = nBits - 1; b >= 0; b--){
                if(remainder >= pow(2, b)){
                    bitStrings[v][nBits - 1 - b] = 1;
                    remainder -= pow(2, b);
                }
                else{
                    bitStrings[v][nBits - 1 - b] = 0;
                }
            }
        }

        // Adjacency lists of the square of the cube: vertices at Hamming
        // distance at most 2 are neighbors.
        IloIntMatrix neighbors(env, nVertices);
        for(int v = 0; v < nVertices; v++){
            neighbors[v] = IloIntArray(env);
            IloIntArray bitStringOfv = bitStrings[v];
            for(int n = 0; n < nVertices; n++){
                if(n != v){
                    if(bitStringDiff(bitStringOfv, bitStrings[n]) <= 2){
                        neighbors[v].add(n);
                    }
                }
            }
        }

        // Geometrically decaying reassignment probabilities: a color that
        // appears m times in the neighborhood gets weight c^m.
        double c = 0.006;
        IloNumArray prob_val(env, nBits + 1);
        prob_val[0] = 1.0;
        for(int i = 1; i < prob_val.getSize(); i++){
            prob_val[i] = prob_val[i - 1] * c;
        }
        IloInt nColors = 14;

        // Start from a uniformly random assignment of the 14 colors.
        IloRandom random(env);
        IloIntArray coloring(env, nVertices);
        for(int i = 0; i < nVertices; i++){
            coloring[i] = random.getInt(nColors);
        }

        IloIntArray num_col(env, nColors);
        IloInt num_bad;
        IloInt num_trials = 0;
        do{
            num_bad = nVertices;
            for(int i = 0; i < nVertices; i++){
                for(int j = 0; j < nColors; j++){
                    num_col[j] = 0;
                }
                // Count how often each color appears in the neighborhood of i.
                for(int ni = 0; ni < neighbors[i].getSize(); ni++){
                    if(neighbors[i][ni] < nVertices){
                        num_col[coloring[neighbors[i][ni]]]++;
                    }
                }
                if(num_col[coloring[i]] == 0){
                    num_bad--;
                }
                else{
                    // Recolor i: choose a color with probability
                    // proportional to prob_val of its neighborhood count.
                    double sum = 0.0;
                    for(int col = 0; col < nColors; col++){
                        sum += prob_val[num_col[col]];
                    }
                    double r = random.getFloat() * sum;
                    double temp = 0.0;
                    int index = -1;
                    do{
                        index++;
                        temp += prob_val[num_col[index]];
                    }while(temp < r);
                    coloring[i] = index;
                }
            }
            num_trials++;
        }while(num_bad > 0);

        // Report and verify the color classes.
        IloIntMatrix colorClasses(env, nColors);
        for(int col = 0; col < nColors; col++){
            colorClasses[col] = IloIntArray(env);
            cout << "color " << col << ": ";
            for(int v = 0; v < nVertices; v++){
                if(coloring[v] == col){
                    cout << v << " ";
                    colorClasses[col].add(v);
                }
            }
            cout << endl;
        }

        bool goodColoring = true;
        for(int col = 0; col < nColors; col++){
            for(int v1 = 0; v1 < colorClasses[col].getSize(); v1++){
                for(int v2 = v1 + 1; v2 < colorClasses[col].getSize(); v2++){
                    if(bitStringDiff(bitStrings[colorClasses[col][v1]],
                                     bitStrings[colorClasses[col][v2]]) <= 2){
                        goodColoring = false;
                    }
                }
            }
        }
        if(goodColoring){
            cout << "assignments are a good coloring" << endl;
        }
        else{
            cout << "assignments are NOT a good coloring" << endl;
        }
    }
    catch(IloException& ex){
        cout << "Error: " << ex << endl;
    }
    env.end();
    return 0;
}

Appendix E

CPDist.cpp

#include <ilcplex/ilocplex.h>
#include <ilsolver/ilosolverint.h>
#include <cmath>
#include <string>
#include <fstream>
#include <time.h>
#include <stdlib.h>
#include <stdio.h>

ILOSTLBEGIN
typedef IloArray<IloIntArray> IloIntMatrix;
typedef IloArray<IloIntVarArray> IloIntVarMatrix;

// Hamming distance between two bit strings.
int bitStringDiff(IloIntArray string1, IloIntArray string2){
    int count = 0;
    for(int i = 0; i < string1.getSize(); i++){
        if(string1[i] != string2[i]){
            count++;
        }
    }
    return count;
}
int main(){

    IloEnv env;

    int nBits = 8;
    int nVertices = int(pow(2, nBits));
    int Dist[10000][9];
    int feasDist[10000][9];
    int feas[10000];
    int solutionCounter = 0;
    int setSize = 20;

    try{
        // The (WD) constraint programming model: enumerate all candidate
        // weight distributions.
        IloModel CPmodel(env);
        IloIntVarArray x(env, nBits + 1, 0, 14);
        CPmodel.add(IloSum(x) == setSize);

        CPmodel.add(x[0] <= 1);
        CPmodel.add(x[1] <= 1);
        CPmodel.add(x[2] <= 4);
        CPmodel.add(x[3] <= 8);
        CPmodel.add(x[4] <= 14);
        CPmodel.add(x[5] <= 8);
        CPmodel.add(x[6] <= 4);
        CPmodel.add(x[7] <= 1);
        CPmodel.add(x[8] <= 1);

        CPmodel.add(x[0] + x[1] + x[2] <= 4);
        CPmodel.add(x[6] + x[7] + x[8] <= 4);

        CPmodel.add(IloIfThen(env, x[0] == 1, x[1] + x[2] == 0));
        CPmodel.add(IloIfThen(env, x[8] == 1, x[6] + x[7] == 0));

        IloSolver solver(CPmodel);
        solver.startNewSearch();
        while(solver.next()){
            solver.out() << "Dist[" << solutionCounter << "] = (";
            for(int i = 0; i < nBits; i++){
                Dist[solutionCounter][i] = solver.getValue(x[i]);
                solver.out() << solver.getValue(x[i]) << ",";
            }
            Dist[solutionCounter][nBits] = solver.getValue(x[nBits]);
            solver.out() << solver.getValue(x[nBits]) << ")";
            solver.out() << endl;
            solutionCounter++;
        }
        solver.endSearch();
        cout << endl << endl << endl << endl;
    }
    catch(IloException& e){
        cerr << "Concert exception caught: " << e << endl;
    }
    catch(...){
        cerr << "Unknown exception caught" << endl;
    }

    // Solve the (FD) feasibility IP for each candidate distribution.
    for(int i = 0; i < solutionCounter; i++){
        try{
            clock_t start, finish;
            long double duration;
            start = clock();

            IloModel MIPmodel(env);
            IloIntArray xDist(env, nBits + 1);
            for(int b = 0; b < nBits + 1; b++){
                xDist[b] = Dist[i][b];
            }

            // Bit-string representation of each vertex.
            IloIntMatrix bitStrings(env, nVertices);
            for(int v = 0; v < nVertices; v++){
                bitStrings[v] = IloIntArray(env, nBits);
                int remainder = v;
                for(int b = nBits - 1; b >= 0; b--){
                    if(remainder >= pow(2, b)){
                        bitStrings[v][nBits - 1 - b] = 1;
                        remainder -= pow(2, b);
                    }
                    else{
                        bitStrings[v][nBits - 1 - b] = 0;
                    }
                }
            }

            // wt[b][v] = 1 if vertex v has weight b, and 0 otherwise.
            IloIntMatrix wt(env, nBits + 1);
            for(int b = 0; b < nBits + 1; b++){
                wt[b] = IloIntArray(env, nVertices);
                for(int v = 0; v < nVertices; v++){
                    if(bitStringDiff(bitStrings[v], bitStrings[0]) == b){
                        wt[b][v] = 1;
                    }
                    else wt[b][v] = 0;
                }
            }

            IloIntVarArray x(env, nVertices, 0, 1);
            MIPmodel.add(IloSum(x) == setSize);
            for(int b = 0; b < nBits + 1; b++){
                MIPmodel.add(IloScalProd(wt[b], x) == xDist[b]);
            }

            // Minimum-distance constraints: no two chosen vertices at
            // Hamming distance 1 or 2.
            for(int v1 = 0; v1 < nVertices; v1++){
                char xName[10];
                sprintf(xName, "x_%d", v1);
                x[v1].setName(xName);
                for(int v2 = v1 + 1; v2 < nVertices; v2++){
                    if(bitStringDiff(bitStrings[v1], bitStrings[v2]) <= 2){
                        MIPmodel.add(x[v1] + x[v2] <= 1);
                    }
                }
            }

            // Codeword-fixing constraints justified by Proposition 4.1
            // and the A(8, 6, 4) = 2 argument.
            if(xDist[4] >= 1){
                MIPmodel.add(x[15] == 1);
            }
            if(xDist[4] >= 3){
                MIPmodel.add(x[51] == 1);
            }

            IloCplex cplex(MIPmodel);
            cplex.setParam(IloCplex::MIPInterval, 10000);
            cplex.setParam(IloCplex::MIPEmphasis, 1);

            if(cplex.solve()){
                finish = clock();
                long double persec = CLOCKS_PER_SEC;
                duration = (finish - start) / persec;
                cout << "************************************************" << endl;
                cout << "This took " << duration << " seconds." << endl;
                feas[i] = 1;
                for(int b = 0; b < nBits + 1; b++){
                    feasDist[i][b] = Dist[i][b];
                }
                cout << endl << endl << "Distribution " << i << " ---> [";
                for(int b = 0; b < nBits; b++){
                    cout << Dist[i][b] << ",";
                }
                cout << Dist[i][nBits] << "]";
                cout << ":" << endl << endl;
                for(int v1 = 0; v1 < nVertices; v1++){
                    if(cplex.getValue(x[v1]) > 0.5){
                        int diff = bitStringDiff(bitStrings[v1], bitStrings[0]);
                        cout << v1 << " ---- " << bitStrings[v1];
                        cout << " " << diff;
                        cout << endl;
                    }
                }
            }
            else{
                finish = clock();
                duration = (finish - start) / (long double)CLOCKS_PER_SEC;
                cout << "************************************************" << endl;
                cout << "This took " << duration << " seconds." << endl;
                feas[i] = 0;
                for(int b = 0; b < nBits + 1; b++){
                    feasDist[i][b] = 0;
                }
                cout << "Distribution " << i << " ---> [";
                for(int b = 0; b < nBits; b++){
                    cout << Dist[i][b] << ",";
                }
                cout << Dist[i][nBits] << "]";
                cout << ":" << " infeasible" << endl << endl << endl;
            }
            cout << endl;
            cout << "************************************************" << endl;
            MIPmodel.end();
        }
        catch(IloException& e){
            cerr << "Concert exception caught: " << e << endl;
        }
        catch(...){
            cerr << "Unknown exception caught" << endl;
        }
    }

    // Final summary of the feasible distributions.
    cout << endl << endl << endl << endl;
    cout << "************************************************" << endl;
    cout << "************************************************" << endl;
    cout << "************************************************" << endl;
    cout << "Feasible Distributions:" << endl << endl;
    int feasSolutionCounter = 0;
    for(int i = 0; i < solutionCounter; i++){
        if(feas[i] > 0.5){
            cout << "Feasible Distribution " << feasSolutionCounter
                 << " (Distribution " << i << ") ---> [";
            for(int b = 0; b < nBits; b++){
                cout << feasDist[i][b] << ",";
            }
            cout << feasDist[i][nBits] << "]" << endl;
            feasSolutionCounter++;
        }
    }
    env.end();
    return 0;
}