Item Metadata
Title | Hypercube coloring and the structure of binary codes
Creator | Rix, James Gregory
Publisher | University of British Columbia
Date Created | 2008-11-24
Date Issued | 2008-11-24
Date | 2008
Description | A coloring of a graph is an assignment of colors to its vertices so that no two adjacent vertices are given the same color. The chromatic number of a graph is the least number of colors needed to color all of its vertices. Graph coloring problems can be applied to many real world applications, such as scheduling and register allocation. Computationally, the decision problem of whether a general graph is m-colorable is NP-complete for m ≥ 3. The graph studied in this thesis is a well-known combinatorial object, the k-dimensional hypercube, Q_k. The hypercube itself is 2-colorable for all k; however, coloring the square of the cube is a much more interesting problem. This is the graph in which the vertices are binary vectors of length k, and two vertices are adjacent if and only if the Hamming distance between the two vectors is at most 2. Any color class in a coloring of Q_k^2 is a binary (k, M, 3) code. This thesis will begin with an introduction to binary codes and their structure. One of the most fundamental combinatorial problems is finding optimal binary codes, that is, binary codes with the maximum cardinality satisfying a specified length and minimum distance. Many upper and lower bounds have been produced, and we will analyze and apply several of these. This leads to many interesting results about the chromatic number of the square of the cube. The smallest k for which the chromatic number of Q_k^2 is unknown is k = 8; however, it can be determined that this value is either 13 or 14. Computational approaches to determine the chromatic number of Q_8^2 were performed. We were unable to determine whether 13 or 14 is the true value; however, much valuable insight was learned about the structure of this graph and the computational difficulty that lies within. Since a 13-coloring of Q_8^2 must have between 9 and 12 color classes being (8, 20, 3) binary codes, this led to a thorough investigation of the structure of such binary codes.
Extent | 716189 bytes |
Subject | Combinatorial Object; K-dimensional Hypercube; Color Class; Binary Codes; Structure; Chromatic Number Of The Square Of The Cube; Graph Coloring; Maximum Cardinality; Chromatic Number; Binary Vectors
Genre | Thesis/Dissertation
Type | Text
File Format | application/pdf
Language | English
Collection | Electronic Theses and Dissertations (ETDs) 2008+
Date Available | 2008-11-24 |
DOI | 10.14288/1.0066806 |
Degree | Master of Science - MSc
Program | Interdisciplinary Studies
Affiliation | Science, Faculty of
Degree Grantor | University of British Columbia
Graduation Date | 2008-11
Campus | UBCO
Scholarly Level | Graduate |
URI | http://hdl.handle.net/2429/2809 |
Aggregated Source Repository | DSpace |
Digital Resource Original Record | https://open.library.ubc.ca/collections/24/items/1.0066806/source |
Full Text
Hypercube Coloring and the Structure of Binary Codes

by James Gregory Rix

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The College of Graduate Studies (Interdisciplinary Studies)

The University of British Columbia (Okanagan)

July 2008

© James Gregory Rix 2008

Abstract

A coloring of a graph is an assignment of colors to its vertices so that no two adjacent vertices are given the same color. The chromatic number of a graph is the least number of colors needed to color all of its vertices. Graph coloring problems can be applied to many real world applications, such as scheduling and register allocation. Computationally, the decision problem of whether a general graph is m-colorable is NP-complete for m ≥ 3.

The graph studied in this thesis is a well-known combinatorial object, the k-dimensional hypercube, Q_k. The hypercube itself is 2-colorable for all k; however, coloring the square of the cube is a much more interesting problem. This is the graph in which the vertices are binary vectors of length k, and two vertices are adjacent if and only if the Hamming distance between the two vectors is at most 2. Any color class in a coloring of Q_k^2 is a binary (k, M, 3) code.

This thesis will begin with an introduction to binary codes and their structure. One of the most fundamental combinatorial problems is finding optimal binary codes, that is, binary codes with the maximum cardinality satisfying a specified length and minimum distance. Many upper and lower bounds have been produced, and we will analyze and apply several of these. This leads to many interesting results about the chromatic number of the square of the cube.

The smallest k for which the chromatic number of Q_k^2 is unknown is k = 8; however, it can be determined that this value is either 13 or 14. Computational approaches to determine the chromatic number of Q_8^2 were performed. We were unable to determine whether 13 or 14 is the true value; however, much valuable insight was learned about the structure of this graph and the computational difficulty that lies within. Since a 13-coloring of Q_8^2 must have between 9 and 12 color classes being (8, 20, 3) binary codes, this led to a thorough investigation of the structure of such binary codes.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
Chapter 1. Coding Theory
  1.1 Preliminaries
  1.2 An Introduction to Codes
  1.3 Optimal Binary Codes
    1.3.1 The Sphere-Packing Bound
    1.3.2 Delsarte's Linear Programming Bound
  1.4 Applications of Delsarte's Linear Programming Bound
Chapter 2. Graph Theory
  2.1 Elementary Concepts
  2.2 Graph Isomorphisms and Automorphisms
  2.3 Vertex Coloring
  2.4 Algorithms to Solve Vertex Coloring Problems
    2.4.1 An Introduction to Algorithms and Efficiency
    2.4.2 Integer Programming
    2.4.3 Column Generation
    2.4.4 The Petford-Welsh Algorithm
Chapter 3. The Hypercube
  3.1 Automorphisms of the Cube and its Square
  3.2 The Chromatic Number of the Cube and its Square
    3.2.1 The Asymptotic Behavior of the Chromatic Number of the Square of the Cube
  3.3 Applying Computational Methods to Determine the Chromatic Number of the Square of the 8-Dimensional Cube
    3.3.1 Integer Programming
    3.3.2 Column Generation
    3.3.3 The Petford-Welsh Algorithm
Chapter 4. The Structure of Optimal Binary Codes
  4.1 Computationally Determining all Weight Distributions
  4.2 Structural Results
    4.2.1 On the Size of Shortened Codes
    4.2.2 Bounds on the Weight Distributions of Color Classes
  4.3 The Number of Nonequivalent Codes
  4.4 Another Definition of Equivalence
Chapter 5. Conclusion
Bibliography
Appendices
Appendix A. Current Status of Coloring the Square of the Cube
Appendix B. A 14-coloring of the Square of the 8-Cube
Appendix C. Weight Distributions of (8, 20, 3) Binary Codes
Appendix D. Hougardy.cpp
Appendix E. CPDist.cpp

List of Tables

Table 3.1: A 4-coloring of Q_3^2 using Wan's construction
Table 4.1: Distance Distributions of (8, 20, 3) Binary Codes
Table A.1: Current Status of Coloring the Square of the Cube
Table B.1: A 14-coloring of Q_8^2
Table C.1: Weight Distributions of (8, 20, 3) Binary Codes

List of Figures

Figure 2.1: Venn Diagram of the Complexity Classes
Figure 3.1: Computation Times Using Integer Programming
Figure 4.1: Computation Times for Feasible Distributions
Figure 4.2: Computation Times for Infeasible Distributions

Acknowledgements

First and foremost, I would like to thank my supervisor, Dr. Donovan Hare, for both introducing me to the beauty of optimization from a real world perspective, and for giving me the opportunity to pursue my research in this field. His infinite patience was extremely helpful over the course of my educational experience. In addition, I would like to give thanks to everyone who has had an invaluable impact on my educational experience: Dr. Wayne Broughton, Dr. Heinz Bauschke, Dr. Shawn Wang, Dr. Yves Lucet, Dr. Yong Gao, and Dr. Blair Spearman.

This thesis was partially supported by the Natural Sciences and Engineering Research Council of Canada, the Pacific Institute for the Mathematical Sciences and the U.B.C. Okanagan College of Graduate Studies.

Chapter 1: Coding Theory

In this chapter, we recall background material from coding theory that will be used in subsequent chapters.
The chapter will conclude with the sphere-packing bound and Delsarte's linear programming bound, two very useful bounds for determining optimal binary code size. We will also include a proof of A(9, 4) = 20 that uses Delsarte's linear programming bound; this value will be very important in later chapters. We will now introduce the notation and definitions that will be used throughout, which closely follow those used in [20] and [13].

It will be assumed throughout this thesis that the reader has a basic knowledge of field theory, set theory, linear and integer programming, and probability.

1.1 Preliminaries

Notation 1.1. Let Z denote the set of all integers, and Z+ the set of all positive integers.

Notation 1.2. For any positive integer n, let [n] denote the set of integers {1, 2, . . . , n}.

Consider a finite field F with addition and multiplication defined, satisfying |F| = q. Recall that all fields have an additive identity, 0, and a multiplicative identity, 1. Now consider the vector space F^n. Clearly, |F^n| = q^n. A vector u in F^n is a column vector with coordinates u_1, u_2, . . . , u_n; for notational convenience, this will also be denoted u_1 u_2 . . . u_n. We will refer to u_i as the ith coordinate of u.

Notation 1.3. Let ~0_n and ~1_n denote the vectors in F^n in which every coordinate is 0 and 1, respectively.

Notation 1.4. Let μ_i^n denote the vector in F^n in which the ith coordinate is 1, and every other coordinate is 0.

Definition 1.5. Consider two vectors u and v in F^n. The (Hamming) distance between u and v, denoted d(u, v), is defined to be the number of coordinates in which u and v differ.

Let u, v, w ∈ F^n. We state without proof the following properties of the Hamming distance function:
• 0 ≤ d(u, v) ≤ n,
• d(u, v) = 0 if and only if u = v,
• d(u, v) = d(v, u),
• d(u, w) ≤ d(u, v) + d(v, w) (the triangle inequality).

Definition 1.6. Let u ∈ F^n and let t be a positive integer. The (Hamming) sphere of radius t centered at u, denoted S_t(u), is the set of vectors S_t(u) = {v ∈ F^n | d(u, v) ≤ t}.

Definition 1.7. Consider a vector u in F^n. The (Hamming) weight of u, denoted wt(u), is defined to be the number of coordinates of u that are nonzero. The parity of u is even if wt(u) is even, and odd if wt(u) is odd.

Consider two vectors u and v in F^n. The following properties of the Hamming weight function are immediate:
• 0 ≤ wt(u) ≤ n,
• wt(u) = 0 if and only if u = ~0_n,
• wt(u) = n if and only if u = ~1_n,
• wt(u − v) = d(u, v).

Definition 1.8. Consider a vector u in F^n. We define p_u and q_u to be the following elements of F: p_u = 0 if wt(u) is even and 1 if wt(u) is odd; q_u = 1 if wt(u) is even and 0 if wt(u) is odd.

Lemma 1.9. Let u, v, w ∈ F^n. Then d(u, v) = d(u + w, v + w).

Proof. d(u + w, v + w) = wt((u + w) − (v + w)) = wt((u − v) + (w − w)) = wt(u − v) = d(u, v).

Notation 1.10. Let S_n, for all positive integers n, denote the symmetric group of degree n (i.e. the set of all permutations of the integers [n]).

Definition 1.11. Let σ ∈ S_n and u ∈ F^n with coordinates u = u_1 u_2 . . . u_n. Then σ(u) ∈ F^n is defined by σ(u) = u_{σ(1)} u_{σ(2)} . . . u_{σ(n)}.

For any vectors u and v in F^n, and any permutation σ ∈ S_n, the following properties are immediate:
• wt(σ(u)) = wt(u),
• d(σ(u), σ(v)) = d(u, v),
• σ(u) + σ(v) = σ(u + v).

Definition 1.12. Let u ∈ F^m and v ∈ F^n have coordinates u = u_1 u_2 . . . u_m and v = v_1 v_2 . . . v_n. We define u concatenated with v, denoted u v, to be u v = u_1 u_2 . . . u_m v_1 v_2 . . . v_n, an element of the vector space F^{m+n}.

Let u, u′ ∈ F^m and v, v′ ∈ F^n. Clearly, from the distance functions of the respective vector spaces: d(u v, u′ v′) = d(u, u′) + d(v, v′).

One technique used often in the study of codes is to concatenate a vector u ∈ F^n with a parity coordinate, that is, to create the vector u p_u. It is clear that wt(u p_u) is even for any u ∈ F^n.
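These basic operations translate directly into code. The following minimal Python sketch is our own illustration (the function names are not from the thesis); vectors are modeled as tuples over {0, 1}.

```python
def wt(u):
    """Hamming weight: the number of nonzero coordinates (Definition 1.7)."""
    return sum(1 for x in u if x != 0)

def dist(u, v):
    """Hamming distance: the number of coordinates where u and v differ."""
    return sum(1 for a, b in zip(u, v) if a != b)

def parity_bit(u):
    """p_u from Definition 1.8: 0 if wt(u) is even, 1 if wt(u) is odd."""
    return wt(u) % 2

def extend(u):
    """Concatenate u with its parity coordinate p_u (Definition 1.12)."""
    return u + (parity_bit(u),)

u, v = (0, 0, 1, 1, 1), (1, 0, 1, 0, 1)
print(wt(u), dist(u, v))        # 3 2
print(wt(extend(u)) % 2 == 0)   # True: u p_u always has even weight
```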
Definition 1.13. Consider two vectors u and v in F^n with coordinates u = u_1 u_2 . . . u_n and v = v_1 v_2 . . . v_n. The inner product of u and v is defined as follows:

  ⟨u, v⟩ = Σ_{i=1}^{n} u_i v_i.

Note: the inner product of u and v is an element of the field F.

Let u, v, w ∈ F^n and c ∈ F. As this is an inner product, it satisfies several key properties:
• ⟨u, v⟩ = ⟨v, u⟩,
• ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩,
• c⟨u, v⟩ = ⟨cu, v⟩.

In this thesis, we will often be dealing with the field Z_2 = {0, 1}, with binary addition and multiplication defined as usual. When dealing with this field, we can define the intersection of two vectors. This will give rise to several useful results.

Definition 1.14. Let u and v be binary vectors in {0, 1}^n with coordinates u = u_1 u_2 . . . u_n and v = v_1 v_2 . . . v_n. The intersection of u and v is defined to be u ∩ v = u_1v_1 u_2v_2 . . . u_nv_n. Thus the ith coordinate of u ∩ v is 1 if and only if the ith coordinates of both u and v are 1.

Let u and v be binary vectors in {0, 1}^n. We have the following properties:
• wt(u ∩ v) ≤ wt(u) and wt(u ∩ v) ≤ wt(v),
• u ∩ ~0_n = ~0_n and u ∩ ~1_n = u.

Lemma 1.15. Let u and v be binary vectors in {0, 1}^n. Then d(u, v) = wt(u) + wt(v) − 2 wt(u ∩ v).

Proof. Let U be the number of coordinates in which u is 1 and v is 0, and let V be the number of coordinates in which v is 1 and u is 0. Then d(u, v) = U + V. Since wt(u ∩ v) is the number of coordinates in which u and v are both 1, we have:

  d(u, v) = U + V = (wt(u) − wt(u ∩ v)) + (wt(v) − wt(u ∩ v)) = wt(u) + wt(v) − 2 wt(u ∩ v).

Corollary 1.16. Let u and v be binary vectors in {0, 1}^n. Then u and v are of the same parity if and only if d(u, v) is even.

Proof. u and v are of the same parity ⇔ wt(u) + wt(v) is even ⇔ wt(u) + wt(v) − 2 wt(u ∩ v) is even ⇔ d(u, v) is even, by Lemma 1.15.
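Both results are easy to sanity-check exhaustively for small n; the following check is our own code, not part of the thesis, and verifies Lemma 1.15 and Corollary 1.16 over every pair of binary vectors of length 5.

```python
from itertools import product

def wt(u):
    return sum(u)

def dist(u, v):
    return sum(a != b for a, b in zip(u, v))

def cap(u, v):
    """Coordinatewise intersection of binary vectors (Definition 1.14)."""
    return tuple(a & b for a, b in zip(u, v))

n = 5
for u in product((0, 1), repeat=n):
    for v in product((0, 1), repeat=n):
        # Lemma 1.15: d(u, v) = wt(u) + wt(v) - 2 wt(u ∩ v)
        assert dist(u, v) == wt(u) + wt(v) - 2 * wt(cap(u, v))
        # Corollary 1.16: same parity  <=>  even distance
        assert ((wt(u) + wt(v)) % 2 == 0) == (dist(u, v) % 2 == 0)
print("Lemma 1.15 and Corollary 1.16 hold for all pairs with n =", n)
```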
For notational convenience, we will often represent a binary vector by its integer representation.

Definition 1.17. Let v be a binary vector in {0, 1}^n with coordinates v_1 v_2 . . . v_n. The integer representation of v is the nonnegative integer

  i = 2^{n−1} v_1 + 2^{n−2} v_2 + · · · + 2 v_{n−1} + v_n.

Definition 1.18. For any positive integer k, the binary representation matrix of order k is the ⌈log_2(k + 1)⌉ × k matrix in which column i, for i ∈ [k], is the binary vector of length ⌈log_2(k + 1)⌉ whose integer representation is i. This matrix is denoted BRM_k. For example, the binary representation matrix of order 6 is:

  BRM_6 =
  0 0 0 1 1 1
  0 1 1 0 0 1
  1 0 1 0 1 0

1.2 An Introduction to Codes

We will now define a code over a finite field F.

Definition 1.19. Let F be a field. A code C over F is a subset of F^n. The elements u ∈ C are the codewords of C. If F = {0, 1}, then C is a binary code.

Definition 1.20. Let C = {u_1, u_2, . . . , u_M}, where u_i ∈ F^n for all i ∈ [M]. The (code) length is the length of the vectors, n, whereas the (code) size is the number of codewords in the code, |C| = M.

In addition to code length and size, a third parameter is usually specified when referring to a code.

Definition 1.21. Let C be an (n, M) code over a field F. The minimum distance of C, denoted d(C), is defined to be d(C) = min{d(u, v) | u, v ∈ C, u ≠ v}. If M = 1 then d(C) = ∞.

Notation 1.22. An (n, M) code is a code with length n and size M. An (n, M, d) code is a code with length n, size M, and minimum distance at least d.

Definition 1.23. Let C be an (n, M) code over a field F, and let σ ∈ S_n. Then σ(C) is defined to be σ(C) = {σ(u) | u ∈ C}. Clearly, σ(C) is an (n, M) code over F. Moreover, d(σ(C)) = d(C).

Definition 1.24. Let C be an (n, M) code over a field F, and let w ∈ F^n. Then C + w is the (n, M) code over F satisfying C + w = {u + w | u ∈ C}.

Proposition 1.25. Let C be an (n, M) code over a field F, and let w ∈ F^n. Then d(C + w) = d(C).

Proof. To determine the minimum distance of C + w, we have:

  d(C + w) = min{d(u, v) | u, v ∈ C + w, u ≠ v}
           = min{d(x + w, y + w) | x, y ∈ C, x ≠ y}
           = min{d(x, y) | x, y ∈ C, x ≠ y}   (by Lemma 1.9)
           = d(C).

In the following definitions, we introduce techniques to create longer or shorter codes from a given code.

Definition 1.26. Let C be an (n, M) code over a field F, and let v ∈ F^m for some m ∈ Z+. Then C v is the (n + m, M) code over F defined by C v = {u v | u ∈ C}. It is clear that d(C v) = d(C).

Definition 1.27. Let C be an (n, M, 2) code over a field F, and let i ∈ [n]. The punctured code C^(i) is defined to be the (n − 1, M) code over F containing all codewords of C with the ith coordinate removed. That is: C^(i) = {u_1 u_2 . . . u_{i−1} u_{i+1} . . . u_n | u ∈ C}. Clearly, d(C) − 1 ≤ d(C^(i)) ≤ d(C).

Definition 1.28. Let C be an (n, M) code over a field F. Moreover, let x ∈ F and i ∈ [n]. The shortened code C_{i,x} is defined to be the (n − 1, M′ ≤ M) code over F containing all codewords of C with the field element x in coordinate i, then punctured at i. That is: C_{i,x} = {u_1 u_2 . . . u_{i−1} u_{i+1} . . . u_n | u_1 u_2 . . . u_{i−1} x u_{i+1} . . . u_n ∈ C}. Clearly, d(C_{i,x}) ≥ d(C).

Codes are used in real world applications every day. In practice, information is sent in the form of codewords from some (n, M) code C over a field F, where different codewords represent different meanings. The required codeword u ∈ C is sent over a channel, over which errors can occur in any and all of the coordinates, changing a coordinate to a different field element. Therefore, it is possible that one or more errors have transformed the original codeword into a vector in F^n that is not in the code. When a vector x ∈ F^n is received at the other end of this channel, x is decoded to some codeword v ∈ C, generally the one that maximizes the likelihood of receiving x (in the hope that u = v). Assuming the probability of error in every coordinate is the same, this codeword is the v ∈ C that minimizes d(x, v), with ties broken arbitrarily. Formally, this is expressed as in the following definition.

Definition 1.29. Let C be an (n, M) code over a field F. A decoder is a function D : F^n → C. The decoder D is a nearest codeword decoder if it satisfies: D(x) = v ⇒ d(x, v) = min{d(x, v′) | v′ ∈ C}.
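A nearest codeword decoder is simple to sketch in code. The example below is our own illustration of Definition 1.29 (ties are broken by whichever codeword min encounters first); the (5, 4, 3) code used here corrects any single error, anticipating Corollary 1.31.

```python
def dist(u, v):
    return sum(a != b for a, b in zip(u, v))

def nearest_codeword(x, code):
    """Definition 1.29: decode x to a codeword at minimum Hamming distance."""
    return min(code, key=lambda c: dist(x, c))

code = [(0,0,0,0,0), (0,1,1,0,1), (1,0,1,1,0), (1,1,0,1,1)]  # a (5, 4, 3) code
sent = (0, 1, 1, 0, 1)
received = (0, 1, 0, 0, 1)                        # one error, in coordinate 3
print(nearest_codeword(received, code) == sent)   # True: the error is corrected
```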
It is clear that the higher the minimum distance of a code, the higher the probability that a received vector will be decoded correctly. The result of Corollary 1.31 specifies explicitly the error-correcting capability of a code with fixed minimum distance. The following two proofs are similar to those used in [13, Ch. 1, Theorem 2].

Proposition 1.30. Let C be an (n, M, d) code over a field F, and let t = ⌊(d − 1)/2⌋. For any u, v ∈ C such that u ≠ v, S_t(u) ∩ S_t(v) = ∅.

Proof. Suppose, by way of contradiction, that there exists w ∈ F^n such that d(u, w) ≤ t and d(v, w) ≤ t. We have:

  d(u, v) ≤ d(u, w) + d(w, v)   (by the triangle inequality)
         ≤ ⌊(d − 1)/2⌋ + ⌊(d − 1)/2⌋
         ≤ (d − 1)/2 + (d − 1)/2 = d − 1,

a contradiction to the minimum distance of C.

Corollary 1.31. Let C be an (n, M, d) code over a field F. A nearest codeword decoder will correctly recover any pattern of up to ⌊(d − 1)/2⌋ errors.

Proof. Suppose codeword u ∈ C is sent over a channel and t ≤ ⌊(d − 1)/2⌋ errors occur to yield the vector w ∈ F^n. Thus w ∈ S_t(u). Take any v ∈ C \ {u}. By the result of Proposition 1.30, w ∉ S_t(v), implying d(v, w) > t ≥ d(u, w). Therefore a nearest codeword decoder will decode w to u.

Definition 1.32. A constant weight code C is an (n, M, d) code in which every codeword has the same weight w. C can then be referred to as an (n, M, d, w) code.

Definition 1.33. An even weight code C is a code in which every codeword u ∈ C is of even weight. Similarly, an odd weight code C is a code in which every codeword u ∈ C is of odd weight.

For the remainder of this thesis, we will be strictly using binary codes. For binary vectors u and v, note that u − v = u + v.

Corollary 1.34. Even weight binary codes and odd weight binary codes have even minimum distance.

Proof. Since every codeword is of the same parity, the proof of this corollary follows directly from Corollary 1.16.

Definition 1.35. The weight distribution of an (n, M) binary code C with respect to a vector u ∈ {0, 1}^n is the sequence of nonnegative integers (W_i(u))_{i=0}^{n}, where W_i(u), for i = 0, 1, . . . , n, is the number of codewords v ∈ C such that d(u, v) = i. Equivalently:

  W_i(u) = |{v ∈ C | d(u, v) = i}| = |{v ∈ C | wt(u + v) = i}|.

When u is not specified, it will be assumed that u = ~0_n, and we will denote (W_i(~0_n))_{i=0}^{n} simply by (W_i)_{i=0}^{n}.

Let C be an (n, M) binary code, u ∈ {0, 1}^n, and σ ∈ S_n. Moreover, let (W_i(u))_{i=0}^{n} denote the weight distribution of C with respect to u. The weight distribution satisfies the following properties:
• Σ_{i=0}^{n} W_i(u) = M,
• the weight distribution of σ(C) with respect to σ(u) is also (W_i(u))_{i=0}^{n},
• letting (W′_i)_{i=0}^{n} denote the weight distribution of C + u, we have (W′_i)_{i=0}^{n} = (W_i(u))_{i=0}^{n}.

Definition 1.36. The distance distribution of an (n, M) binary code C is the list of nonnegative, rational numbers (A_i)_{i=0}^{n}, where A_i represents the average number of pairs of codewords in C at distance i from each other. Thus each A_i satisfies:

  A_i = (1/M) |{(u, v) ∈ C × C | d(u, v) = i}| = (1/M) Σ_{u∈C} W_i(u).

The following statements are clear for any (n, M, d) binary code with distance distribution (A_i)_{i=0}^{n}:
• A_0 = 1,
• A_i ≥ 0,
• A_i = 0 for i ∈ [d − 1],
• Σ_{i=0}^{n} A_i = M, or equivalently, Σ_{i=d}^{n} A_i = M − 1.
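Both distributions are direct to compute; the sketch below is our own illustration of Definitions 1.35 and 1.36, again using the (5, 4, 3) code from the decoder example.

```python
from collections import Counter

def dist(u, v):
    return sum(a != b for a, b in zip(u, v))

def weight_distribution(code, u=None):
    """(W_i(u)): number of codewords at distance i from u (Definition 1.35)."""
    n = len(code[0])
    if u is None:
        u = (0,) * n                       # default u = ~0_n
    counts = Counter(dist(u, v) for v in code)
    return [counts[i] for i in range(n + 1)]

def distance_distribution(code):
    """(A_i): average number of codeword pairs at distance i (Definition 1.36)."""
    M, n = len(code), len(code[0])
    counts = Counter(dist(u, v) for u in code for v in code)
    return [counts[i] / M for i in range(n + 1)]

code = [(0,0,0,0,0), (0,1,1,0,1), (1,0,1,1,0), (1,1,0,1,1)]
print(weight_distribution(code))     # [1, 0, 0, 2, 1, 0]
print(distance_distribution(code))   # [1.0, 0.0, 0.0, 2.0, 1.0, 0.0]; sums to M
```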
Definition 1.37. Let C and D be (n, M, d) binary codes. C and D are equivalent if C = σ(D) + v for some σ ∈ S_n and v ∈ {0, 1}^n. C is unique if every (n, M, d) binary code is equivalent to C.

It is clear that code equivalence is reflexive, symmetric, and transitive; therefore, it is an equivalence relation on the set of all binary codes. The notion of equivalent codes will be very important throughout this thesis. One very important property of equivalent codes is that they share the same distance distribution.

Lemma 1.38. Let C and D be equivalent (n, M, d) binary codes. Then C and D have the same distance distributions.

Proof. Let σ ∈ S_n and w ∈ {0, 1}^n satisfy D = σ(C) + w, and let (A_i^C)_{i=0}^{n} and (A_i^D)_{i=0}^{n} denote the distance distributions of C and D, respectively. For any i ∈ [n], since d(u, v) = d(σ(u), σ(v)) = d(σ(u) + w, σ(v) + w) (the last equality by Lemma 1.9), the pairs of codewords of C at distance i correspond exactly to the pairs of codewords of D at distance i. Hence A_i^C = A_i^D.

1.3 Optimal Binary Codes

Finding binary codes of optimal size when length and minimum distance are held fixed has become a well-known and difficult combinatorial problem. Such optimal codes allow a maximum number of possible messages to be sent efficiently, while maintaining a specified error-correcting ability.

Notation 1.39. The size of the largest possible binary code of length n and minimum distance at least d is denoted A(n, d). An (n, M, d) binary code in which M = A(n, d) is optimal.

The following trivial values of A(n, d) can easily be verified for any positive integers n and d:
• A(n, 1) = 2^n,
• A(n, 2) = 2^n / 2 = 2^{n−1},
• A(n, d) = 1 if d > n.

When d is sufficiently large (2n/3 < d ≤ n), it is known that A(n, d) = 2. Proposition 1.40 proves this result.

Proposition 1.40. A(n, d) = 2 for n, d ∈ Z+ with 2n/3 < d ≤ n.

Proof. Fix n and d as specified. First, we show the existence of an (n, 2, d) binary code. Consider any v ∈ {0, 1}^n with wt(v) = d (if d = n then v = ~1_n). The code {~0_n, v} is an example of an (n, 2, d) binary code.

We now show that an (n, 3, d) code cannot exist. Suppose, by way of contradiction, that it does, and is defined by C = {u, v, w}. We have:

  d(u, v) ≤ d(u, w + ~1_n) + d(v, w + ~1_n)   (by the triangle inequality)
         = wt(u + w + ~1_n) + wt(v + w + ~1_n)
         = d(u + w, ~1_n) + d(v + w, ~1_n)
         = wt(u + w) + n − 2 wt((u + w) ∩ ~1_n) + wt(v + w) + n − 2 wt((v + w) ∩ ~1_n)   (by Lemma 1.15)
         = 2n − wt(u + w) − wt(v + w)
         = 2n − d(u, w) − d(v, w)
         < 2n − 2n/3 − 2n/3 = 2n/3 < d,

a contradiction to the minimum distance of C.

In fact, this result is tight with respect to d, as will be proven in Proposition 1.41.

Proposition 1.41. A(n, d) ≥ 3 for n, d ∈ Z+ with d ≤ ⌊2n/3⌋.

Proof. Fix n and d as specified, and define t = ⌈n/3⌉. We prove this result by exhibiting an (n, 3, d) binary code. Consider C = {u_1, u_2, u_3}, where

  u_1 = Σ_{i=1}^{t} μ_i^n,  u_2 = Σ_{i=t+1}^{2t} μ_i^n,  u_3 = Σ_{i=2t+1}^{n} μ_i^n.

For any i, j ∈ [3] with i ≠ j we have u_i ∩ u_j = ~0_n, and hence by Lemma 1.15: d(u_i, u_j) = wt(u_i) + wt(u_j). The minimum distance of C is calculated as follows:

  d(C) = min{d(u_1, u_2), d(u_1, u_3), d(u_2, u_3)}
       = min{2⌈n/3⌉, 2⌈n/3⌉, ⌈n/3⌉ + (n − 2⌈n/3⌉)}
       = n − ⌈n/3⌉ = ⌊2n/3⌋ ≥ d.

Therefore C is an (n, 3, d) binary code.

Propositions 1.42 and 1.43 show how values of A(n, d) can be bounded above by A(n + 1, d) and A(n − 1, d).

Proposition 1.42. A(n, d) ≤ A(n + 1, d) for any n, d ∈ Z+.

Proof. Let C be an (n, M, d) binary code of optimal size M = A(n, d). Consider the code C 0, an (n + 1, M, d) binary code. The existence of this code proves A(n + 1, d) ≥ A(n, d).

Proposition 1.43. A(n, d) ≤ 2A(n − 1, d) for any n, d ∈ Z+.

Proof. Let C be an (n, M, d) binary code of optimal size M = A(n, d). Consider the shortened codes C_{i,0} and C_{i,1} for any i ∈ [n]. It is clear that |C_{i,0}| ≥ M/2 or |C_{i,1}| ≥ M/2. Let D be the larger of C_{i,0} and C_{i,1}. D is clearly an (n − 1, M′, d) binary code with size M′ ≥ M/2, and hence A(n − 1, d) ≥ A(n, d)/2.
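For very small parameters, A(n, d) can be computed directly: build the graph on {0, 1}^n in which two vectors are adjacent exactly when their distance is at least d, and find a maximum clique. The sketch below is our own illustration using the networkx library (an assumption of ours, not a tool used in the thesis); the search is exponential and only feasible for tiny n.

```python
from itertools import product
import networkx as nx

def dist(u, v):
    return sum(a != b for a, b in zip(u, v))

def A(n, d):
    """Exact A(n, d) by maximum clique search; practical only for small n."""
    V = list(product((0, 1), repeat=n))
    G = nx.Graph()
    G.add_nodes_from(V)
    G.add_edges_from((u, v) for i, u in enumerate(V)
                     for v in V[i + 1:] if dist(u, v) >= d)
    # Every clique of G is a binary code with minimum distance >= d.
    return max(len(c) for c in nx.find_cliques(G))

print(A(4, 1), A(4, 2), A(4, 3))   # 16 8 2: matches the trivial values above
print(A(5, 3))                     # 4
```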
The search for the values of A(n, d) for less trivial n and d is a classic combinatorial problem. The result of Theorem 1.44 shows that we need only find A(n, d) for even d.

Theorem 1.44. A(n, d) = A(n + 1, d + 1) for n, d ∈ Z+ with d ≤ n and d odd.

Proof. We prove equality by first proving A(n, d) ≤ A(n + 1, d + 1), then by proving A(n, d) ≥ A(n + 1, d + 1).

Let C be an (n, M, d) binary code of optimal size M = A(n, d). To prove A(n, d) ≤ A(n + 1, d + 1), we show the existence of an (n + 1, M, d + 1) binary code. Let C p be defined as follows: C p = {u p_u | u ∈ C}. It is clear that C p is an (n + 1, M, d) binary code. However, C p is also an even weight code, and therefore d(C p) is even. C p is hence an (n + 1, M, d + 1) binary code, which completes the first half of the proof.

Let D be an (n + 1, M, d + 1) binary code of optimal size M = A(n + 1, d + 1). To prove A(n, d) ≥ A(n + 1, d + 1), we show the existence of an (n, M, d) binary code. Since d is odd, d + 1 ≥ 2, and hence the punctured code D^(i), for any i ∈ [n + 1], is an (n, M) binary code. In addition, since only one coordinate has been punctured, d(D^(i)) ≥ d. Therefore D^(i) is an (n, M, d) binary code for any i ∈ [n + 1].

A table of values of A(n, d) for n ≤ 28 and even d ≤ 14 can be found in [2, Table I]. The listed entries of A(n, 4) have been included in Appendix A (as A(n − 1, 3)). When n ≥ 17, many of these values are currently unknown; that so few values are actually known is a testament to the difficulty of this problem. However, various bounds have been developed. Lower bounds most often arise from the construction of a specific binary code attaining the given parameters, and upper bounds can be proven through a number of different methods, two of which we show in the following sections.

Furthermore, many optimal binary codes are unique; that is, every binary code attaining the given parameters is equivalent. For example, consider the (16, 256, 6), (15, 256, 5), (15, 128, 6), (14, 128, 5), (14, 64, 6), (13, 64, 5), (13, 32, 6), and (12, 32, 5) binary codes. All are seen to be optimal in [2, Table I]; moreover, all are unique with the exception of the (12, 32, 5) code [13, pp. 74–75].

Östergård, Baicheva, and Kolev developed an algorithm in [18] that can determine the number of nonequivalent binary codes for certain code parameters. The numbers of nonequivalent (n, M, 3) and (n, M, 4) binary codes were then calculated for various values of n and M, as seen in [18, Table I].

In addition to bounds on the size of binary codes, another common problem in coding theory is determining the size of the largest possible constant weight binary code attaining given parameters.

Notation 1.45. The size of the largest possible constant weight binary code of length n, minimum distance at least d, and codeword weight w, for w ≤ n, is denoted A(n, d, w). An (n, M, d, w) constant weight binary code in which M = A(n, d, w) is optimal.

The following properties of A(n, d, w) can be easily verified for any positive integers n and d, and nonnegative integer w ≤ n:
• A(n, 2, w) = C(n, w),
• A(n, d, w) = 1 if d > n,
• A(n, d, w) = A(n, d + 1, w) for d odd,
• A(n, d, w) = A(n, d, n − w).

The final statement is proven to hold by adding ~1_n to every codeword of either optimal code. Due to these properties, the search for A(n, d, w) can be restricted to even d and w ≤ ⌊n/2⌋.
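The first half of Theorem 1.44's proof is constructive and easy to illustrate. The sketch below is our own code: appending a parity coordinate to every codeword of a code with odd minimum distance d yields a code of minimum distance d + 1.

```python
def dist(u, v):
    return sum(a != b for a, b in zip(u, v))

def min_distance(code):
    return min(dist(u, v) for i, u in enumerate(code) for v in code[i + 1:])

def extend_with_parity(code):
    """Append p_u to every codeword: an (n, M, d) code with d odd becomes
    an (n + 1, M, d + 1) even weight code (Theorem 1.44)."""
    return [u + (sum(u) % 2,) for u in code]

code = [(0,0,0,0,0), (0,1,1,0,1), (1,0,1,1,0), (1,1,0,1,1)]   # a (5, 4, 3) code
ext = extend_with_parity(code)
print(min_distance(code), min_distance(ext))   # 3 4
```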
We also make use of the results of Propositions 1.46 and 1.47 (see [13, Ch. 17, Theorem 1]).

Proposition 1.46. A(n, d, w) = 1 for n, d ∈ Z+ and w ∈ Z+ ∪ {0} satisfying 2w < d ≤ n.

Proof. The existence of a code meeting this bound is trivial (it contains a single codeword), so we need only prove A(n, d, w) ≤ 1. Assume, by way of contradiction, that there are two distinct vectors u, v ∈ {0, 1}^n satisfying wt(u) = wt(v) = w and d(u, v) ≥ d. Then, by the triangle inequality:

  d(u, v) ≤ d(u, ~0_n) + d(~0_n, v) = wt(u) + wt(v) < d/2 + d/2 = d,

a contradiction to d(u, v) ≥ d. Hence A(n, d, w) = 1.

Proposition 1.47. A(n, 2w, w) = ⌊n/w⌋ for n, w ∈ Z+.

Proof. To show A(n, 2w, w) ≥ ⌊n/w⌋, we need only show the existence of an (n, t, 2w, w) binary code where t = ⌊n/w⌋. Consider C = {v_1, v_2, . . . , v_t}, where v_i ∈ {0, 1}^n is the vector with a 1 in coordinates (i − 1)w + 1, (i − 1)w + 2, . . . , (i − 1)w + w. These vectors all exist since, for i ≤ t:

  (i − 1)w + w ≤ (⌊n/w⌋ − 1)w + w ≤ n − w + w = n.

Moreover, d(v_i, v_j) = 2w for i ≠ j. Therefore C is an (n, t, 2w, w) binary code.

To show A(n, 2w, w) ≤ ⌊n/w⌋, assume by way of contradiction that there exists an (n, t + 1, 2w, w) binary code D = {v_1, v_2, . . . , v_{t+1}}, with t defined as above. For any i, j ∈ [t + 1], i ≠ j, by Lemma 1.15 we have:

  wt(v_i ∩ v_j) = (wt(v_i) + wt(v_j) − d(v_i, v_j)) / 2 ≤ (w + w − 2w) / 2 = 0.

Thus wt(v_i ∩ v_j) = 0, and hence Σ_{i=1}^{t+1} wt(v_i) ≤ n. However:

  Σ_{i=1}^{t+1} wt(v_i) = (t + 1)w = (⌊n/w⌋ + 1)w > n,

a contradiction. Therefore A(n, 2w, w) = ⌊n/w⌋.

Agrell, Vardy, and Zeger improved bounds on A(n, d, w) for many values in [1]. This article includes tables of values of, and bounds on, A(n, d, w) for even values of d between 4 and 14. Recall that for odd values of d, A(n, d, w) = A(n, d + 1, w). Later in this thesis, we will refer back to these tables.

The following theorem and corollary show how the weight and distance distributions of any binary code can be bounded above by the sizes of appropriate optimal constant weight codes.

Theorem 1.48. Let C be an (n, M, d) binary code and fix u ∈ {0, 1}^n. Moreover, let (W_i(u)) be the weight distribution of C with respect to u. Then W_i(u) ≤ A(n, d, i) for i = 0, 1, . . . , n.

Proof. Assume, by way of contradiction, that W_i(u) > A(n, d, i) for some i. Let k = W_i(u), and let {v_1, v_2, . . . , v_k} ⊆ C be the set of codewords at distance i from u. Consider the set of vectors D = {w_1, w_2, . . . , w_k} defined by w_j = v_j + u for j ∈ [k]. We claim that D is an (n, k, d, i) constant weight binary code with code size k > A(n, d, i), a contradiction.

To show that w_j has weight i for all j ∈ [k], take arbitrary j ∈ [k]. We have: wt(w_j) = wt(v_j + u) = d(v_j, u) = i. Thus D is constant weight with weight i. Finally, for j_1, j_2 ∈ [k], j_1 ≠ j_2:

  d(w_{j_1}, w_{j_2}) = d(v_{j_1} + u, v_{j_2} + u) = d(v_{j_1}, v_{j_2}) ≥ d   (by Lemma 1.9),

so D has minimum distance at least d.

Corollary 1.49. Let C be an (n, M, d) binary code, and let (A_i)_{i=0}^{n} be the distance distribution of C. Then A_i ≤ A(n, d, i) for all i ∈ {0, 1, . . . , n}.

Proof. For any i ∈ {0, 1, . . . , n} we have, by Theorem 1.48:

  A_i = (1/M) Σ_{u∈C} W_i(u) ≤ (1/M) Σ_{u∈C} A(n, d, i) = (1/M) · M · A(n, d, i) = A(n, d, i).

We will now introduce two useful upper bounds on A(n, d): the sphere-packing bound and Delsarte's linear programming bound.
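The construction in Proposition 1.47 is simple enough to code directly; the sketch below is our own illustration (coordinates are 0-based in the code, 1-based in the proof).

```python
def disjoint_blocks_code(n, w):
    """The optimal (n, floor(n/w), 2w, w) code from Proposition 1.47:
    codeword i (0-based) has 1s exactly in coordinates i*w, ..., (i+1)*w - 1."""
    t = n // w
    return [tuple(1 if i * w <= j < (i + 1) * w else 0 for j in range(n))
            for i in range(t)]

code = disjoint_blocks_code(10, 3)
print(len(code))                      # 3 = floor(10/3) codewords
print(min(sum(a != b for a, b in zip(u, v))
          for i, u in enumerate(code) for v in code[i + 1:]))   # 6 = 2w
```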
1.3.1 The Sphere-Packing Bound

The sphere-packing bound is a well-known bound on optimal code size, and can be useful when d is small. While the bound can be calculated for codes over any field, we offer the binary simplification. This bound will then give rise to the notion of perfect codes. The proof of the sphere-packing bound given here follows that in [20, Theorem 4.3].

Theorem 1.50 (Sphere-Packing Bound). For any n, d ∈ Z+ with d ≤ n, let t = ⌊(d − 1)/2⌋. Then:

  A(n, d) ≤ 2^n · ( Σ_{i=0}^{t} C(n, i) )^{−1},

where C(n, i) denotes the binomial coefficient.

Proof. Let C be any (n, M, d) binary code of optimal size M = A(n, d). Proposition 1.30 implies that the spheres S_t(u), for u ∈ C, are pairwise disjoint. Thus we have:

  2^n ≥ Σ_{u∈C} |S_t(u)| = Σ_{u∈C} |{v ∈ {0, 1}^n | d(u, v) ≤ t}| = Σ_{u∈C} Σ_{i=0}^{t} C(n, i),

since for a fixed i, there are C(n, i) ways to choose i of the n coordinates of u to be different. Thus 2^n ≥ A(n, d) Σ_{i=0}^{t} C(n, i). Rearranging, and noting that A(n, d) must be an integer, we arrive at the desired result.

Definition 1.51. An (n, M, d) binary code C is called perfect if it attains the sphere-packing bound.

One well-known class of codes is the class of (binary) Hamming codes, as defined in [13, p. 25]. The Hamming code is defined for any n ∈ Z+ satisfying n ≥ 2, and is a (2^n − 1, 2^{2^n − n − 1}, 3) binary code. Note that as the Hamming code is a linear code (not defined in this thesis), its second parameter is usually expressed in terms of code dimension rather than code size; the dimension of a linear code of size M over a field F is defined to be log_{|F|}(M). The Hamming code attains the sphere-packing bound, and therefore gives rise to the following result.

Theorem 1.52. A(2^n − 1, 3) = 2^{2^n − n − 1} for any n ∈ Z+.

Proof. First, note that A(2^1 − 1, 3) = A(1, 3) = 1 = 2^{2^1 − 1 − 1} is trivially satisfied. Now fix n ∈ Z+ with n ≥ 2. The sphere-packing bound (Theorem 1.50) implies:

  A(2^n − 1, 3) ≤ 2^{2^n − 1} · ( Σ_{i=0}^{1} C(2^n − 1, i) )^{−1} = 2^{2^n − 1} / (1 + 2^n − 1) = 2^{2^n − n − 1}.

The existence of the Hamming code then implies that A(2^n − 1, 3) = 2^{2^n − n − 1}.
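The bound itself is a one-liner to compute. The sketch below is our own code; it uses integer division for the floor and recovers the fact that the length-7 and length-15 Hamming codes meet the bound.

```python
from math import comb

def sphere_packing_bound(n, d):
    """Theorem 1.50: A(n, d) <= 2^n / sum_{i=0}^{t} C(n, i), t = (d-1)//2."""
    t = (d - 1) // 2
    return 2 ** n // sum(comb(n, i) for i in range(t + 1))

print(sphere_packing_bound(7, 3))    # 16 = 2^(2^3 - 3 - 1): met by the Hamming code
print(sphere_packing_bound(15, 3))   # 2048 = 2^(2^4 - 4 - 1): met by the Hamming code
print(sphere_packing_bound(9, 4))    # 51: much weaker than the true A(9, 4) = 20
```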
1.3.2 Delsarte's Linear Programming Bound

Another useful upper bound on A(n, d) is Delsarte's linear programming bound, developed by Delsarte in [8] and [7]. Recall that, for any (n, M, d) binary code with distance distribution (A_i)_{i=0}^{n}:

  Σ_{i=d}^{n} A_i = M − 1.

These A_i will be treated as variables in Delsarte's linear program. The result of Theorem 1.55 will produce the constraints of this linear program. We closely follow the notation used in [5].

Definition 1.53. The Krawtchouk polynomial K_k(i), for i, k ∈ {0, 1, . . . , n}, is defined to be:

  K_k(i) = Σ_{j=0}^{k} (−1)^j C(n − i, k − j) C(i, j).

The Krawtchouk polynomial can be rewritten as we will see in Theorem 1.54. The proof given is similar to that used in [14, Lemma 3.5.5].

Theorem 1.54. For any w ∈ {0, 1}^n satisfying wt(w) = i, the Krawtchouk polynomial, for k ∈ {0, 1, . . . , n}, satisfies:

  K_k(i) = Σ_{x ∈ {0,1}^n, wt(x) = k} (−1)^{⟨w, x⟩}.

Proof. Take arbitrary w ∈ {0, 1}^n with wt(w) = i. Define K = {x ∈ {0, 1}^n | wt(x) = k}, and for j ∈ {0, 1, . . . , max{i, k}} define K_j = {x ∈ K | wt(w ∩ x) = j}. It is clear that the K_j form a partition of K. Since ⟨w, x⟩ = 0 in Z_2 exactly when wt(w ∩ x) is even, this implies:

  Σ_{x ∈ K} (−1)^{⟨w, x⟩} = Σ_{x ∈ K} (−1)^{wt(w ∩ x)} = Σ_{j=0}^{max{i,k}} (−1)^j |K_j|.   (1.1)

For any j, we determine |K_j|. Fix j and consider x ∈ K_j. Of the i coordinates where w is 1, we have C(i, j) choices of the j coordinates where we let x be 1; then, of the n − i coordinates where w is 0, we have C(n − i, k − j) choices of the remaining k − j coordinates where we let x be 1. Hence |K_j| = C(i, j) C(n − i, k − j), and from equation (1.1):

  Σ_{x ∈ K} (−1)^{⟨w, x⟩} = Σ_{j=0}^{max{i,k}} (−1)^j C(i, j) C(n − i, k − j) = Σ_{j=0}^{k} (−1)^j C(i, j) C(n − i, k − j) = K_k(i),

since the terms with j > k vanish.

The constraints of Delsarte's linear program are proven to hold with a proof similar to that used in [5, Theorem 3], but rearranged and expressed in the same manner as in [2].

Theorem 1.55. Let C be an (n, M, d) binary code with distance distribution (A_i)_{i=0}^{n}. Then, for k ∈ {0, 1, . . . , n}:

  Σ_{i=d}^{n} A_i K_k(i) ≥ −C(n, k).

Proof. For each pair u, v ∈ C with d(u, v) = i, apply Theorem 1.54 with w = u + v, noting wt(u + v) = d(u, v) = i. Since A_0 = 1 and K_k(0) = C(n, k):

  Σ_{i=d}^{n} A_i K_k(i) = Σ_{i=0}^{n} A_i K_k(i) − A_0 K_k(0)
   = (1/M) Σ_{i=0}^{n} Σ_{u,v∈C, d(u,v)=i} Σ_{x: wt(x)=k} (−1)^{⟨u+v, x⟩} − C(n, k)
   = (1/M) Σ_{x: wt(x)=k} Σ_{u,v∈C} (−1)^{⟨u, x⟩} (−1)^{⟨v, x⟩} − C(n, k)
   = (1/M) Σ_{x: wt(x)=k} ( Σ_{u∈C} (−1)^{⟨u, x⟩} )² − C(n, k)   (1.2)
   ≥ −C(n, k).

In the case where M is odd, these constraints can be strengthened. This will be seen in the following corollary, a result first shown in [5, Theorem 5].

Corollary 1.56. Let C be an (n, M, d) code with M odd. Then, for k ∈ {0, 1, . . . , n}:

  Σ_{i=d}^{n} A_i K_k(i) ≥ ((1 − M)/M) C(n, k).

Proof. Recall from equation (1.2) in the proof of Theorem 1.55:

  Σ_{i=d}^{n} A_i K_k(i) ≥ (1/M) Σ_{x: wt(x)=k} ( Σ_{u∈C} (−1)^{⟨u, x⟩} )² − C(n, k).

If M is odd, then Σ_{u∈C} (−1)^{⟨u, x⟩} is odd and thus nonzero, so each square is at least 1. Hence:

  Σ_{i=d}^{n} A_i K_k(i) ≥ (1/M) C(n, k) − C(n, k) = ((1 − M)/M) C(n, k).

Delsarte's linear programming bound (Version 1) of A(n, d) is then stated in Theorem 1.57.

Theorem 1.57 (Delsarte's Linear Programming Bound, Version 1). A(n, d) ≤ 1 + ⌊M*⌋, where M* is the optimal objective value of the following linear program:

  Maximize Σ_{i=d}^{n} A_i
  subject to:
    Σ_{i=d}^{n} A_i K_k(i) ≥ −C(n, k)  for k ∈ {0, 1, . . . , ⌊n/2⌋},
    A_i ≥ 0  for i ∈ {d, d + 1, . . . , n}.

However, this linear program can be further simplified. The following theorem shows how all odd-indexed variables of the linear program can be eliminated.

Theorem 1.58. Let d ∈ Z+ be even. Any (n, M, d) binary code can be transformed into an (n, M, d) even weight binary code.

Proof. Let C be an (n, M, d) binary code, and take any i ∈ [n]. Since d is even, we have d ≥ 2; thus we may consider the (n − 1, M, d − 1) punctured code C^(i). We now (as in the first half of the proof of Theorem 1.44) create the code C^(i) p obtained by concatenating p_w with every codeword w of C^(i). Since d − 1 is odd, C^(i) p is an (n, M, d) even weight binary code.

We now have, by Theorem 1.44, that the search for the values of A(n, d) can be restricted to even d. Moreover, Theorem 1.58 implies that this search can be restricted to codes in which A_i = 0 for odd i. Therefore, the first version of Delsarte's linear programming bound given in Theorem 1.57 can be restricted as in Theorem 1.59.

Theorem 1.59 (Delsarte's Linear Programming Bound, Version 2). Let d ∈ Z+ be even. Then A(n, d) ≤ 1 + ⌊M*⌋, where M* is the optimal objective value of the following linear program:

  Maximize Σ_{i=d/2}^{⌊n/2⌋} A_{2i}
  subject to:
    Σ_{i=d/2}^{⌊n/2⌋} A_{2i} K_k(2i) ≥ −C(n, k)  for k ∈ {0, 1, . . . , ⌊n/2⌋},
    A_{2i} ≥ 0  for i ∈ {d/2, d/2 + 1, . . . , ⌊n/2⌋}.
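Theorem 1.55 is easy to check numerically for a concrete code. The sketch below is our own code: it computes K_k(i) from Definition 1.53 and verifies the Delsarte inequalities for the (5, 4, 3) code used earlier.

```python
from math import comb

def kraw(n, k, i):
    """Krawtchouk polynomial K_k(i) of Definition 1.53."""
    return sum((-1) ** j * comb(n - i, k - j) * comb(i, j)
               for j in range(k + 1))

def distance_distribution(code):
    n, M = len(code[0]), len(code)
    A = [0.0] * (n + 1)
    for u in code:
        for v in code:
            A[sum(a != b for a, b in zip(u, v))] += 1 / M
    return A

code = [(0,0,0,0,0), (0,1,1,0,1), (1,0,1,1,0), (1,1,0,1,1)]
n, A = 5, distance_distribution(code)
for k in range(n + 1):
    lhs = sum(A[i] * kraw(n, k, i) for i in range(1, n + 1))
    assert lhs >= -comb(n, k) - 1e-9        # Theorem 1.55
print("Delsarte inequalities hold for every k")
```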
Additional bounds can also be imposed, which can help lower the bound. For example, if A(n, d, w) is known for some (or all) w, the result of Corollary 1.49 can be imposed as a constraint (or constraints). This will be seen in Theorem 1.60 in the following section.

1.4 Applications of Delsarte's Linear Programming Bound

Delsarte's linear programming bound has been crucial in determining many new bounds for A(n, d). One result that will be used often throughout this thesis is A(9, 4) = 20 (and hence, by Theorem 1.44, A(8, 3) = 20). This result was first proved in [5, Theorem 6], and our proof will be analogous.

Theorem 1.60. A(9, 4) = 20.

Proof. To prove that A(9, 4) ≥ 20, we exhibit a (9, 20, 4) binary code. Let C be the code in which each codeword is a column of the following matrix:

  0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
  0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 1 1 1 1 1
  0 0 1 1 1 0 0 0 1 1 0 0 0 1 1 0 0 1 1 1
  0 1 0 0 1 0 1 1 0 1 0 1 1 0 1 0 0 0 1 1
  1 1 0 1 0 1 0 0 0 1 0 0 1 1 0 0 1 1 0 1
  1 1 1 0 0 0 0 1 1 1 0 1 0 0 1 0 1 1 0 0
  1 0 1 0 1 1 0 1 0 1 0 1 1 1 0 1 0 1 1 0
  1 0 0 1 1 0 0 1 1 0 0 0 1 0 1 1 0 1 0 1
  0 1 1 1 0 1 0 1 0 0 1 0 1 0 1 0 0 1 1 0

The vectors in C can also be represented by their integer representations as follows:

  C = {30, 57, 77, 83, 102, 149, 160, 175, 202, 252, 257, 300, 311, 340, 363, 390, 408, 479, 485, 498}.

It is easily verified that C is a (9, 20, 4) binary code.

We now apply Version 2 of Delsarte's linear programming bound given in Theorem 1.59. We also use the result of Proposition 1.46 (together with Corollary 1.49, noting that A(9, 4, 8) = A(9, 4, 1) = 1) to impose the additional constraint A_8 ≤ 1. We see that A(9, 4) ≤ 1 + ⌊M*⌋, where M* is the optimal objective value of the following linear program:

  Maximize A_4 + A_6 + A_8
  subject to:
    A_4 + A_6 + A_8 ≥ −1,
    A_4 − 3A_6 − 7A_8 ≥ −9,
    −4A_4 + 20A_8 ≥ −36,
    −4A_4 + 8A_6 − 28A_8 ≥ −84,
    6A_4 − 6A_6 + 14A_8 ≥ −126,
    A_8 ≤ 1,
    A_4, A_6, A_8 ≥ 0.

The optimal objective value of this linear program is 61/3, and so A(9, 4) ≤ 1 + ⌊61/3⌋ = 21.

We now claim that a (9, 21, 4) binary code cannot exist. Assume, by way of contradiction, that C is a (9, 21, 4) binary code with distance distribution (A_i)_{i=0}^{9}. Since 21 is odd, the result of Corollary 1.56 allows us to transform our first five constraints to:

  A_4 K_k(4) + A_6 K_k(6) + A_8 K_k(8) ≥ ((1 − 21)/21) C(9, k)
  ⇔ ((21/20) A_4) K_k(4) + ((21/20) A_6) K_k(6) + ((21/20) A_8) K_k(8) ≥ −C(9, k).

By the definition of A_i, we can similarly transform our sixth constraint. Since the number of ordered pairs of codewords at distance 8 must be even (every unordered pair u, v is counted twice), 21 A_8 is an even integer, and hence:

  A_8 ≤ 1 ⇔ A_8 ≤ 20/21 ⇔ (21/20) A_8 ≤ 1.

Thus, in the scaled variables (21/20) A_i, the new linear program is exactly the original one, so its optimal objective value is 20/21 times the original optimal value. Therefore:

  A(9, 4) ≤ 1 + ⌊(20/21)(61/3)⌋ = 20.

The optimal (9, 20, 4) code is not unique, as 2 nonequivalent (9, 20, 4) codes were exhibited in [13, p. 57]. In fact, there exist exactly 3, as seen in [18, Table I].

Recall the result of Theorem 1.52: A(2^n − 1, 3) = 2^{2^n − n − 1} for n ∈ Z+. This result can be generalized as in Theorem 1.61, stated without proof; it is the main result proved by Best and Brouwer in [4]. The 0, 1, 2, and 3 times shortened Hamming codes are the optimal codes that attain this value, and it was proven to hold as an upper bound through the use of Delsarte's linear programming bound.

Theorem 1.61. Let i, n ∈ Z+ satisfy n ≥ 3 and i ∈ [4]. Then A(2^n − i, 3) = 2^{2^n − n − i}.
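The first linear program in this proof can be reproduced with any LP solver. The sketch below is our own illustration using scipy (an assumption of ours, not a tool used in the thesis); it builds the Krawtchouk constraints generically and recovers the optimal value 61/3.

```python
from math import comb, floor
from scipy.optimize import linprog

n = 9

def kraw(n, k, i):
    return sum((-1) ** j * comb(n - i, k - j) * comb(i, j)
               for j in range(k + 1))

idx = [4, 6, 8]                        # only even-indexed A_i are needed
c = [-1.0] * len(idx)                  # maximize A_4 + A_6 + A_8
# sum_i A_i K_k(i) >= -C(n, k)  becomes  -sum_i A_i K_k(i) <= C(n, k)
A_ub = [[-kraw(n, k, i) for i in idx] for k in range(n // 2 + 1)]
b_ub = [comb(n, k) for k in range(n // 2 + 1)]
A_ub.append([0.0, 0.0, 1.0])           # the extra constraint A_8 <= 1
b_ub.append(1.0)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(idx))
print(-res.fun)                        # 20.333... = 61/3
print(1 + floor(-res.fun))             # 21, the first bound in the proof
```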
The result of Theorem 1.61 allows the values of A(n, 3) to be calculated for 28 ≤ n ≤ 31; these are included in Appendix A.

Chapter 2: Graph Theory

In this chapter we review the parts of graph theory needed throughout this thesis. The definitions and notation closely follow those used in [22]. We will also introduce the concept of vertex coloring, and introduce three algorithms that solve various vertex coloring problems.

2.1 Elementary Concepts

Definition 2.1. A graph G is defined by a vertex set and an edge set, where elements of the edge set are unordered pairs of vertices.

Notation 2.2. For any graph G, V(G) denotes the vertex set of G, and n(G) = |V(G)| the number of vertices of G. In addition, E(G) denotes the edge set of G, and e(G) = |E(G)| the number of edges of G.

Definition 2.3. The null graph is the graph whose vertex set and edge set are both empty.

Definition 2.4. Let u, v ∈ V(G) and e = {u, v} ∈ E(G). Then u and v are the endpoints of e, and u and v are incident to e. Equivalently, u and v are adjacent to each other. For notational convenience, we will commonly write an edge {u, v} as uv.

Definition 2.5. In a graph G, a loop is an edge e ∈ E(G) whose two endpoints are the same vertex v ∈ V(G). Multiple edges are distinct edges e_1, e_2 ∈ E(G) with the same pair of endpoints. If G has no loops or multiple edges, then G is a simple graph.

Definition 2.6. A finite graph is a graph with both its vertex set and edge set finite.

For the remainder of this thesis, we will assume all graphs are finite, simple graphs.

Definition 2.7. Consider a graph G, and let v ∈ V(G). The neighbourhood of v, denoted N_G(v), is the set of all vertices adjacent to v. That is: N_G(v) = {u ∈ V(G) | uv ∈ E(G)}. The degree of vertex v, denoted d_G(v), is defined to be |N_G(v)|; this is the number of edges in E(G) that are incident to v. If, for some k ∈ Z+ ∪ {0}, d_G(v) = k for every v ∈ V(G), then G is k-regular.

The following proposition allows the number of edges of a k-regular graph to be calculated directly.

Proposition 2.8. Let G be an n-vertex, k-regular graph. Then e(G) = nk/2.

Proof. Since summing the degrees of all vertices of a graph counts each edge twice, we have 2e(G) = Σ_{v∈V(G)} d_G(v); this is commonly referred to as the degree-sum formula. Since G is k-regular, 2e(G) = Σ_{v∈V(G)} k = nk.
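Proposition 2.8 is easy to confirm computationally. The sketch below is our own, using the networkx library (an assumption of ours) and the 4-dimensional hypercube, anticipating the graph studied in Chapter 3.

```python
import networkx as nx

# Q_4: the 4-dimensional hypercube is 4-regular on 2^4 = 16 vertices.
G = nx.hypercube_graph(4)
n, k = G.number_of_nodes(), 4
assert all(deg == k for _, deg in G.degree())
print(G.number_of_edges() == n * k // 2)   # True: Proposition 2.8
```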
Definition 2.9. A graph G′ is a subgraph of a graph G (denoted G′ ⊆ G) if and only if V(G′) ⊆ V(G) and E(G′) ⊆ E(G).

Definition 2.10. Let G be a graph and let V′ ⊆ V(G) be a subset of the vertices of G. The subgraph of G induced by V′, denoted G[V′], is the graph satisfying V(G[V′]) = V′ and E(G[V′]) = {uv ∈ E(G) | u, v ∈ V′}.

Definition 2.11. The complement of a graph G, denoted Ḡ, is the graph satisfying V(Ḡ) = V(G) and E(Ḡ) = {uv | u, v ∈ V(G), uv ∉ E(G)}.

Definition 2.12. A clique in a graph G is a set of vertices V′ ⊆ V(G) such that any two distinct vertices in V′ are adjacent in G. The clique number of G, denoted ω(G), is the number of vertices in a clique of largest cardinality in G.

Definition 2.13. An independent set in a graph G is a set of vertices V′ ⊆ V(G) such that any two distinct vertices in V′ are non-adjacent in G. V′ is a maximal independent set if V′ ∪ {v} is not independent for any v ∈ V(G) \ V′. The independence number of G, denoted α(G), is the number of vertices in an independent set of largest cardinality in G.

Definition 2.14. A graph G is bipartite if and only if the vertex set of G can be expressed as V(G) = V_1 ∪ V_2, where both V_1 and V_2 are independent.

Definition 2.15. The n-vertex complete graph, denoted K_n, is the graph in which any two distinct vertices in V(K_n) are adjacent. Since every pair of distinct vertices in the complete graph is adjacent, ω(K_n) = n for all n ∈ Z+.

Definition 2.16. Let G be a graph. A walk in G is an alternating sequence of vertices and edges W = (v_0, e_1, v_1, e_2, v_2, . . . , e_n, v_n) such that e_i = v_{i−1} v_i for all i ∈ [n]. The length of W is n, the number of edges in W. W is called a v_0, v_n-walk in G, and the vertices v_0 and v_n are the endpoints of W. The vertices v_i, for i ∈ [n − 1], are the internal vertices of W.

Definition 2.17. A path in a graph G is a walk in G in which every edge and every vertex are distinct. A cycle in G is a walk whose edges and internal vertices are distinct and whose endpoints are the same vertex.

Definition 2.18. Let u, v ∈ V(G) be two vertices in a graph G. The distance from u to v in G, denoted d_G(u, v), is the length of a shortest u, v-path in G. If there exists no u, v-path in G, then d_G(u, v) = ∞.

Let u, v, w ∈ V(G) for some graph G. Similarly to the Hamming distance, the graph theoretical distance satisfies the following properties:
• d_G(u, v) = 0 if and only if u = v,
• d_G(u, v) = d_G(v, u),
• d_G(u, w) ≤ d_G(u, v) + d_G(v, w) (the triangle inequality).

Definition 2.19. The kth power of G is defined for any graph G and k ∈ Z+. Denoted G^k, it is the graph satisfying V(G^k) = V(G) and E(G^k) = {uv | u, v ∈ V(G), u ≠ v, d_G(u, v) ≤ k}.
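Definition 2.19 connects directly to the thesis's central object: the square of the hypercube. The sketch below is our own code (using networkx, an assumption of ours): it builds G^2 from BFS distances and confirms that in Q_3^2 two vertices are adjacent exactly when their Hamming distance is 1 or 2.

```python
from itertools import product
import networkx as nx

def graph_power(G, k):
    """The kth power of G (Definition 2.19): join vertices at distance <= k."""
    Gk = nx.Graph()
    Gk.add_nodes_from(G)
    for u in G:
        # BFS distances from u, cut off at depth k
        dist = nx.single_source_shortest_path_length(G, u, cutoff=k)
        Gk.add_edges_from((u, v) for v in dist if v != u)
    return Gk

Q3 = nx.hypercube_graph(3)      # vertices are binary 3-tuples
Q3sq = graph_power(Q3, 2)
for u, v in product(Q3, repeat=2):
    h = sum(a != b for a, b in zip(u, v))
    assert Q3sq.has_edge(u, v) == (1 <= h <= 2)
print("Q_3^2 matches the Hamming-distance description of the square of the cube")
```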
2.2 Graph Isomorphisms and Automorphisms

Definition 2.20. An isomorphism from a graph G to a graph H is a bijection Φ : V(G) → V(H) that satisfies: uv ∈ E(G) ⇔ Φ(u)Φ(v) ∈ E(H). If there exists an isomorphism from G to H, then G is isomorphic to H, denoted G ≅ H.

If there is an isomorphism Φ from G to H, then the function Φ^{−1} is an isomorphism from H to G. In addition, ≅ is both reflexive (through the identity function) and transitive (through functional composition). Therefore, ≅ is an equivalence relation on the set of all graphs. It is clear that, in order for G ≅ H, both n(G) = n(H) and e(G) = e(H) must hold.

Definition 2.21. For any graphs G and H, a copy of H in G is a subgraph G′ ⊆ G such that H ≅ G′.

Definition 2.22. An automorphism of a graph G is an isomorphism from G onto itself. The automorphism group of a graph G is the set of all automorphisms of G, with functional composition as the group operation. It is denoted Aut(G).

Clearly, any permutation of the vertices of the complete graph K_n is an automorphism. Therefore |Aut(K_n)| = |S_n| = n!.

Definition 2.23. A graph G is vertex transitive if, for all u, v ∈ V(G), there exists an automorphism Φ : V(G) → V(G) such that Φ(u) = v.

The notion of vertex transitivity will be very important later in this thesis. If a graph is vertex transitive, then it "looks the same" from every vertex. This means that we can often prove a result for every vertex in the graph by simply proving the result for a single, arbitrary vertex.

Definition 2.24. Let V′ ⊆ V(G) be a set of vertices of a graph G, and Φ : V(G) → V(G) a function on the vertices of G. Then the image of V′ under Φ, denoted Φ(V′), is defined to be Φ(V′) = {Φ(v) | v ∈ V′}.

We will now prove several results about automorphisms that will be needed later in this thesis.

Proposition 2.25. Let V′ ⊆ V(G) be a set of vertices of a graph G, and Φ : V(G) → V(G) an automorphism of G. Then V′ is independent if and only if Φ(V′) is independent. Similarly, V′ is a clique if and only if Φ(V′) is a clique.

Proof. For any u, v ∈ V′ we have uv ∉ E(G) ⇔ Φ(u)Φ(v) ∉ E(G), since Φ is an automorphism. Hence every pair of vertices of V′ is non-adjacent if and only if every pair of vertices of Φ(V′) is non-adjacent; that is, V′ is independent if and only if Φ(V′) is independent. The result for cliques is proven in an analogous manner.

Theorem 2.26. For any graph G, Aut(G) = Aut(Ḡ).

Proof. We show that Φ : V(G) → V(G) is an automorphism of G if and only if Φ is an automorphism of Ḡ. First assume that Φ is an automorphism of G, and take arbitrary distinct u, v ∈ V(G). We have:

  uv ∈ E(Ḡ) ⇔ uv ∉ E(G) ⇔ Φ(u)Φ(v) ∉ E(G) ⇔ Φ(u)Φ(v) ∈ E(Ḡ),

and thus Φ is an automorphism of Ḡ. Conversely, if Φ is an automorphism of Ḡ, then applying the same chain of equivalences to Ḡ (whose complement is G) shows that Φ is an automorphism of G.

Theorem 2.27. Let Φ : V(G_1) → V(G_2) be an isomorphism between graphs G_1 and G_2. Then, for all u, v ∈ V(G_1), d_{G_1}(u, v) = d_{G_2}(Φ(u), Φ(v)).

Proof. This theorem will be proven by induction on d_{G_1}(u, v).

Base case: take arbitrary u, v ∈ V(G_1) such that d_{G_1}(u, v) = 1. Then uv ∈ E(G_1), and this case is trivial by the definition of an isomorphism.

Inductive hypothesis: assume, for some k ∈ Z+, that if u, v ∈ V(G_1) satisfy d_{G_1}(u, v) ≤ k, then d_{G_2}(Φ(u), Φ(v)) = d_{G_1}(u, v). We now prove the result for k + 1. Take arbitrary u, v ∈ V(G_1) such that d_{G_1}(u, v) = k + 1 (if d_{G_1}(u, v) < k + 1, the statement is true by the inductive hypothesis).

First, to show that d_{G_2}(Φ(u), Φ(v)) ≤ k + 1, we show the existence of a Φ(u), Φ(v)-path in G_2 of length at most k + 1. Let P be a shortest u, v-path in G_1, having length k + 1, and let w be the vertex on P adjacent to v. We have d_{G_1}(u, w) = k; otherwise there would be a shorter u, v-path. By the inductive hypothesis, d_{G_2}(Φ(u), Φ(w)) = k. Also, since Φ is an isomorphism, Φ(w)Φ(v) ∈ E(G_2). Therefore, there is a Φ(u), Φ(v)-path of length at most k + 1, and thus d_{G_2}(Φ(u), Φ(v)) ≤ k + 1.

Next, to show that d_{G_2}(Φ(u), Φ(v)) ≥ k + 1, we show that there exists no Φ(u), Φ(v)-path in G_2 of length less than k + 1. Assume, by way of contradiction, that there exists some Φ(u), Φ(v)-path P of length at most k, and let w be the vertex on P adjacent to Φ(v). Thus d_{G_2}(Φ(u), w) ≤ k − 1, and therefore:

  d_{G_1}(u, v) ≤ d_{G_1}(u, Φ^{−1}(w)) + d_{G_1}(Φ^{−1}(w), v)   (by the triangle inequality)
              = d_{G_2}(Φ(u), w) + d_{G_2}(w, Φ(v))   (by the inductive hypothesis)
              ≤ (k − 1) + 1 = k,

a contradiction to d_{G_1}(u, v) = k + 1. Therefore d_{G_2}(Φ(u), Φ(v)) = k + 1, and the induction proof is complete.

Note that the previous result also holds when G_1 = G_2 and Φ is an automorphism of this graph. This specific case of Theorem 2.27 leads to the following corollary.

Corollary 2.28. Let Φ : V(G) → V(G) be an automorphism of a graph G. Then, for any v ∈ V(G), Φ(N_G(v)) = N_G(Φ(v)).

Proof. Recall that N_G(v) = {u ∈ V(G) | d_G(u, v) = 1}; the result is then immediate from Theorem 2.27.

Theorem 2.29. For any graph G, Aut(G) ⊆ Aut(G^k) for all k ∈ Z+.

Proof. Let Φ : V(G) → V(G) be an automorphism of a graph G. Consider the graph G^k for arbitrary k ∈ Z+, and take arbitrary u, v ∈ V(G^k). The following then holds:

  uv ∈ E(G^k) ⇔ d_G(u, v) ≤ k ⇔ d_G(Φ(u), Φ(v)) ≤ k ⇔ Φ(u)Φ(v) ∈ E(G^k),

where the middle equivalence is by Theorem 2.27. Thus Φ is an automorphism of G^k.
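Theorem 2.29 can be observed directly on a small example. The sketch below is our own brute-force check (using networkx, an assumption of ours): every automorphism of C_5 is an automorphism of C_5^2, and the containment can be proper, since C_5^2 = K_5.

```python
from itertools import permutations, combinations
import networkx as nx

def square(G):
    """G^2: join vertices at distance at most 2 (Definition 2.19)."""
    G2 = nx.Graph()
    G2.add_nodes_from(G)
    for u in G:
        dist = nx.single_source_shortest_path_length(G, u, cutoff=2)
        G2.add_edges_from((u, v) for v in dist if v != u)
    return G2

def automorphisms(G):
    """All automorphisms by brute force; only viable for tiny graphs."""
    V = sorted(G)
    autos = []
    for perm in permutations(V):
        f = dict(zip(V, perm))
        if all(G.has_edge(f[u], f[v]) == G.has_edge(u, v)
               for u, v in combinations(V, 2)):
            autos.append(f)
    return autos

C5 = nx.cycle_graph(5)
A1, A2 = automorphisms(C5), automorphisms(square(C5))
print(all(f in A2 for f in A1))   # True, as Theorem 2.29 guarantees
print(len(A1), len(A2))           # 10 120: the containment is proper here
```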
2.3 Vertex Coloring

The field of graph coloring is a very interesting and heavily studied area of discrete mathematics, and it contains many unsolved problems. In addition, many applications, such as timetabling, scheduling, register allocation, and frequency assignment problems, can be formulated as graph coloring problems. Vertex coloring problems are the most common type of graph coloring problem; for definitions of other graph coloring problems, such as edge and total colorings, we refer the reader to [22]. For the remainder of this thesis, it will be assumed that graph coloring refers specifically to vertex coloring.

A coloring of a graph can be thought of as assigning a color to each vertex of the graph so that no two adjacent vertices are given the same color. Formally, this can be represented as in the following definition.

Definition 2.30. A k-coloring of a graph G, for any k ∈ Z+, is a function f : V(G) → [k] such that uv ∈ E(G) ⇒ f(u) ≠ f(v). G is k-colorable if and only if a k-coloring of G exists.

Definition 2.31. Let G be a graph with k-coloring f. The ith color class of G (with respect to f) is the set of vertices {v ∈ V(G) | f(v) = i}, for i ∈ [k].

It is clear that any color class of a coloring of G is an independent set of vertices.

Proposition 2.32. A graph G is 2-colorable if and only if it is bipartite.

Proof. Assume G is 2-colorable, and let V_1 and V_2 be the color classes of a 2-coloring of G. Then V(G) = V_1 ∪ V_2, the union of two independent sets of vertices. Thus G is bipartite.

Next, assume G is bipartite with V(G) = V_1 ∪ V_2, where V_1 and V_2 are independent sets of vertices. Define f : V(G) → [2] by f(v) = 1 if v ∈ V_1 and f(v) = 2 if v ∈ V_2. Clearly the function f is a 2-coloring of G.

Let G be an n-vertex graph. Since a set consisting of a single vertex is trivially independent, it is clear that G has an n-coloring: the coloring in which every vertex of G forms a distinct color class. In general, however, we can find k-colorings of G with k < n.

Definition 2.33. The chromatic number of a graph G, denoted χ(G), is the least integer k such that G is k-colorable.

Theorem 2.34. Let G be an n-vertex graph. Then χ(G) ≥ ⌈n / α(G)⌉.

Proof. Let f : V(G) → [k] be an optimal coloring of G, that is, k = χ(G). We have:

  n = Σ_{i=1}^{k} |{v ∈ V(G) | f(v) = i}| ≤ Σ_{i=1}^{k} α(G) = k α(G).

Rearranging, and applying the fact that k ∈ Z+: k ≥ ⌈n / α(G)⌉.

The chromatic number of a graph can also be bounded below by its clique number, as we will see in Theorem 2.35.

Theorem 2.35. For any graph G, χ(G) ≥ ω(G).

Proof. Let V ⊆ V(G) be an optimal clique of G, that is, |V| = ω(G). Since every pair of vertices in V is adjacent, the vertices of V must lie in pairwise distinct color classes of any coloring of G. Thus χ(G) ≥ ω(G).

Corollary 2.36. For any n ∈ Z+, χ(K_n) = n.

Proof. Since ω(K_n) = n, the result of Theorem 2.35 gives χ(K_n) ≥ n. However, as n(K_n) = n, K_n is clearly n-colorable. Hence χ(K_n) = n.
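The lower bounds of Theorems 2.34 and 2.35 can be illustrated alongside a simple proper coloring. The sketch below is our own, using networkx (an assumption of ours); note that greedy_color is a heuristic, not an exact chromatic number algorithm, but on C_5 it happens to be optimal.

```python
import math
import networkx as nx

G = nx.cycle_graph(5)                       # C_5: an odd cycle, so chi = 3
coloring = nx.greedy_color(G, strategy="largest_first")
k = max(coloring.values()) + 1              # number of colors actually used
# The coloring is proper: no edge is monochromatic (Definition 2.30).
assert all(coloring[u] != coloring[v] for u, v in G.edges())

alpha = 2                                   # independence number of C_5 (known)
omega = max(len(c) for c in nx.find_cliques(G))   # clique number = 2
n = G.number_of_nodes()
print(math.ceil(n / alpha), omega, k)       # 3 2 3: both bounds hold, chi = 3
```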
The desired result is, of course, to determine the solution in as computationally efficient manner as possible. 35 Chapter 2. Graph Theory 2.4.1 An Introduction to Algorithms and Efficiency Definition 2.37. The running time of an algorithm, expressed as a function of the size of the input, is the maximum possible number of basic computa- tional steps used in the execution of the algorithm. When dealing with graph theoretic problems, the size of the input refers to the number of vertices in the graph. The running time of an algorithm is commonly measured as follows. Definition 2.38. Suppose the running time of an algorithm with input size n is bounded by the expression g(n). Let f(n) be an expression such that, for any N ∈ Z+, there exists c ∈ Z+ such that g(n) ≤ cf(n) for all n > N . Then the running time of this algorithm is O(f(n)). It is widely considered that an algorithm is computationally efficient if it has a polynomial running time. There are methods to determine if a graph is 2-colorable that are computationally efficient. It is known that a graph is 2-colorable if and only if it contains no odd cycles (for a proof of this statement we refer the reader to [22, Theorem 1.2.18]). Because of this nice characterization of 2-colorability, determining if any n-vertex graph G is 2-colorable is solvable in polynomial time. One algorithm that accomplishes this is the DSatur Algorithm [6, p.252], whose running time is O(n2). However, for k ≥ 3, whether or not the problem of determining if a graph is k-colorable can be solved efficiently is unknown. Definition 2.39. A P (polynomial) problem is a problem that is solvable in polynomial time, that is, there exists an efficient algorithm to solve it. An NP (non-deterministic polynomial) problem is a problem such that a potential solution can be verified in polynomial time. An NP-hard problem is a problem that, if solvable by a polynomial time algorithm, then this algorithm could be used to construct a polynomial time algorithm for all NP problems. An NP-complete problem is a problem that is both NP and NP-hard. The class of NP-hard problems can informally be thought of as the class of problems at least as hard as the hardest problems in NP, and possibly harder. The class of NP-hard problems also include not only decision prob- lems, but those problems that are posed as optimization problems. Clearly, all P problems are also NP problems, as a P problem can verify a solution in polynomial time. It is currently unknown if that converse is true, that is, if any of the many NP-complete problems can be solved in polynomial time. 36 Chapter 2. Graph Theory However, if there exists a polynomial time algorithm for one then there exists a polynomial time algorithm for all NP problems, and hence the class of NP problems is equal to the class of all P problems. This P=NP question has become one of the most famous problems in applied mathematics and computer science. The Clay Mathematics Institute, a nonprofit education foundation in Cambridge, Mass., is currently offering a million dollar prize for a solution to this problem.1 A Venn diagram for the P, NP, NP-complete, and NP-hard set of problems is included in Figure 2.12. Figure 2.1: Venn Diagram of the Complexity Classes Determining if an n-vertex graph G is k-colorable, for k ≥ 3, is an NP- complete problem[9, p.191]. A solution to the problem can be verified in polynomial time. 
Given a function f : V (G) → [k], to determine if this satisfies the definition of a vertex coloring, one merely needs to check every edge of the graph and make sure its endpoints are not in the same color class. Since a simple, finite graph has at most ( n 2 ) < n2 edges, this can be done efficiently. However, the decision problem of finding a k-coloring in a graph is NP-complete. Hence it is not known, for a fixed k ≥ 3, if there exists a polynomial time algorithm to determine whether or not a graph has a k-coloring. Currently, the only general algorithm to prove that a graph is not k-colorable, for k ≥ 3, is equivalent to an exhaustive search. In fact, one may have to test every function from V (G) to [k], of which there are kn, to prove the existence or non-existence of a k-coloring. 1http://www.claymath.org/millennium/ 2Source: http://en.wikipedia.org/wiki/Np-hard/ 37 Chapter 2. Graph Theory 2.4.2 Integer Programming Integer programming is also an NP-complete problem. Therefore, as dis- cussed in the previous section, it is computationally equivalent to graph coloring with respect to efficiency. Modeling a graph coloring problem as an integer program (IP), however, can provide some practical computational improvements over an exhaustive search for a given graph. We explore this model here. Let G be a n-vertex graph (for simplicity, let V (G) = [n]). The first, and most simple, algorithm to determine if G is k-colorable is a basic integer programming model. The variables of the IP are xi,j where i ∈ [n] and j ∈ [k]. We let xi,j be a binary variable that equals 1 if vertex i is assigned color j and 0 otherwise. The problem is then equivalent to determining if the following system (which we will refer to as VC) has a feasible solution. xi1,j + xi2,j ≤ 1 for all i1i2 ∈ E(G) and j ∈ [k], k∑ j=1 xi,j ≥ 1 for all i ∈ [n], xi,j ∈ {0, 1} for all i ∈ [n] and j ∈ [k]. The first constraint forces that no two adjacent vertices can be assigned the same color. The second constraint forces each vertex to be assigned to at least one color (if it is assigned two colors then it can be assigned either color arbitrarily). Trivially, χ(G) is the smallest value of k such that this integer program has a feasible solution. The model is simple to design; however, it can be very computationally inefficient in execution. Graph coloring problems are a very specific kind of integer program, and this basic model has inherent symmetry associated with it. For example, suppose V1 and V2 are color classes in a coloring of G. Assigning V1 color j1 and V2 color j2, according to the VC model, is a completely different solution than assigning V1 color j2 and V2 color j1. It becomes apparent that a basic integer program can waste much computational time searching for “new” solutions that are actually the same, or search branches of the search tree where prior knowledge should imply that these branches contain no feasible solution. The following algorithm attempts to rectify this problem. 38 Chapter 2. Graph Theory 2.4.3 Column Generation In this section, we will introduce a model that avoids some of the symmetry inherent in graph coloring problems. This model is introduced in [15], and we will adopt the same notation. Consider a graph G and let S be the set of all maximal independent sets of G. Now define xs to be a binary variable for all s ∈ S that equals 1 if that set is used as a color class in an optimal coloring of G, and 0 otherwise. 
Now, since all of these sets are by definition independent, so long as every vertex is in some set s that is assigned a color, we have a coloring of G. Note that a vertex can appear in more than one set s that is assigned a label, as we are able to break the tie of which label it gets arbitrarily. The problem, denoted IS, that determines χ(G) is outlined below. Minimize ∑ s∈S xs subject to: ∑ s∈S|i∈s xs ≥ 1 for all i ∈ V (G), xs ∈ {0, 1} for all i ∈ V (G). Minimizing this objective function minimizes the number of colors used. The first constraint forces each vertex to be in at least one color class of the optimal coloring found, and the second constraint forces the xs to be binary. Note that this model treats a coloring as a collection of maximal indepen- dent sets with no specific color assigned to each set. Therefore, it loses the symmetry problem discussed in the integer programming approach. How- ever, the number of variables (one for every maximal independent set) is exponential in general and expensive to determine. Thus, we will look at a column generation method that reduces the number of variables immensely at the start, and generates new ones as needed. In this method, S is ex- tended to be all independent sets (rather than only maximal independent sets). First, we must come up with a collection of sets that produces a col- oring. Many times there is a coloring previously found that we are trying to improve on, so its color classes could be a good starting point. Or, the simple approach is to let the initial S be all the singleton sets of vertices, which trivially results in an n-coloring of an n-vertex graph. 39 Chapter 2. Graph Theory We then solve a linear relaxation of IS with the restriction that S is our predetermined collection of independent sets. Solving this relaxation generated a dual value, pii, for each constraint (each of which represents a vertex of G). We then use these dual values as input for the following integer program, denoted MWIS, below. Maximize ∑ i∈V (G) piizi subject to: zi1 + zi2 ≤ 1 for all i1i2 ∈ E(G), zi ∈ {0, 1} for all i ∈ V (G). The optimal zi values correspond to a new independent set of vertices (the vertex i is in this set if and only if zi = 1). The first constraint forces that no two adjacent vertices can be included in this set, and the second forces the zi to be binary. The algorithm solves the IS relaxation with the predetermined set of independent sets S, and uses the dual values of this linear program as input for MWIS. Solving MWIS generates an independent set, which is added to S. IS is then solved with the new set S, and its dual values are again used as input for MWIS. The algorithm terminates when the optimal objective function value of MWIS is less than or equal to 1. Linear programming duality, discussed in the next paragraph, tells us that no new independent set will improve the coloring of the graph. To understand why this algorithm works, suppose the Simplex Algorithm on the relaxation of IS was applied with all independent sets included in S. An independent set (a variable) would enter the basis (thus improving our solution) if and only if it has a negative reduced cost. The reduced cost of a variable s is given by 1−piAs, where pi is the row vector of dual values from the previous Simplex iteration and As is the column of the constraint matrix A corresponding to possible entering set s. The entries of the columns of As are 1 if that vertex is in the set and 0 otherwise. 
This, however, is exactly what the zi from MWIS represent. Thus we have:

1 − ∑i∈V(G) πizi < 0, or equivalently, ∑i∈V(G) πizi > 1.

Upon termination of the algorithm, we then look at the final IS relaxation solved. If it has an integral solution, we are done: the original problem is solved and χ(G) is determined. Otherwise, integrality must be enforced. To enforce integrality, we treat the solved problem as the initial node of a branch and bound tree. We then examine the fractional solution achieved in the algorithm, and consider an independent set s1 such that xs1 is fractional. There must exist another independent set s2 ≠ s1 such that s1 ∩ s2 ≠ ∅; otherwise the vertices in s1 would have their only nonzero variable being xs1 < 1, an infeasible solution to the relaxed linear program. Consider vertices i1, i2 ∈ V(G) such that i1 ∈ s1 ∩ s2 and either i2 ∈ s1 \ s2 or i2 ∈ s2 \ s1. We then create two subproblems, denoted DIFFER(i1, i2) and SAME(i1, i2). DIFFER(i1, i2) requires that i1 and i2 are given different colors, whereas SAME(i1, i2) requires that i1 and i2 are given the same color. It is clear that any feasible coloring (and thus, an optimal coloring) of G occurs in one of these two cases. These subproblems can be thought of graph theoretically: DIFFER(i1, i2) corresponds to adding the edge i1i2 to the graph, and SAME(i1, i2) corresponds to collapsing vertices i1 and i2 into a single vertex j, with NG(j) = NG(i1) ∪ NG(i2). The column generation algorithm is then solved in each subproblem, in the hope that an integral solution is found. If not, subproblems are again created and solved through column generation, and the branch and bound tree is explored until an optimal value is found. This optimal value is χ(G).

This algorithm is in most cases superior to the straight integer programming approach discussed in Chapter 2.4.2 (see [15] for a comparison on a test bed of graphs). However, the algorithm can still be very computationally expensive. While it is a simpler integer program than the one outlined in Chapter 2.4.2, the MWIS model is still an integer program, and integer programming is NP-complete. Since MWIS may be solved many times over the course of the algorithm, it is vital to be able to solve it in as efficient a manner as possible. Mehrotra and Trick discuss ways of doing this in [15, pp.345–346].

2.4.4 The Petford-Welsh Algorithm

Petford and Welsh introduced an algorithm in [19] that searches for a 3-coloring of a graph. The generalization of the algorithm to search for a k-coloring is trivial, and we offer this generalization. Consider an n-vertex graph G. The algorithm first assigns every vertex of G a random number (color) in [k] (note that this is simply a random assignment of k colors and is not likely to be a proper k-coloring). At any instant, for all vertices v and every i in [k], we define Si(v) to be the subset of NG(v) that is colored i. In addition, si(v) is defined to be |Si(v)|. A transition function p is also defined that satisfies, for s1, s2, . . . , sk ∈ {0, 1, . . . , n}:

p(s1, s2, . . . , sk; j) ≥ 0 for all j in [k],
∑j∈[k] p(s1, s2, . . . , sk; j) = 1.

Note that this satisfies the definition of a probability mass function P(X = j) of a discrete random variable X that can take on values in [k]. One example of such a transition function is:

p(s1, s2, . . . , sk; j) = (1/(k − 1)) (1 − sj/(s1 + s2 + · · · + sk)).
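To make the sampling step concrete, here is a small sketch (ours, not the implementation from [19]) of drawing a color from this example transition function. It assumes the denominator s1 + · · · + sk is at least 1, which holds for any vertex the algorithm applies it to.

```cpp
// A sketch of sampling a color j in [k] from the example transition
// function above:
//   p(s_1,...,s_k; j) = (1/(k-1)) * (1 - s_j / S),  S = s_1 + ... + s_k.
// Colors occurring often in the neighbourhood get proportionally lower
// probability.  Assumes S >= 1.
#include <random>
#include <vector>

int sampleColor(const std::vector<int>& s, std::mt19937& rng) {
    const int k = static_cast<int>(s.size());
    double S = 0.0;
    for (int si : s) S += si;
    std::vector<double> p(k);
    for (int j = 0; j < k; ++j)
        p[j] = (1.0 - s[j] / S) / (k - 1);   // these weights sum to 1 over j
    std::discrete_distribution<int> pick(p.begin(), p.end());
    return pick(rng) + 1;                    // report a color in [k]
}
```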
Every iteration, a vertex v is chosen at random out of all “bad” vertices, that is, all vertices adjacent to another vertex currently assigned the same color. Next, a (not necessarily new) color is randomly generated according to the probability mass function defined by p(s1(v), s2(v), . . . , sk(v); j), and this color is assigned to the vertex v. The algorithm continues until there are no more bad vertices to choose from, in which case it has found a k-coloring, or the running time goes beyond a predetermined stopping time. If it does not find a k-coloring in the allotted time, then the algorithm is inconclusive. Obviously, this algorithm can’t be used to prove that a graph is not k-colorable. However, for “good” choices of the transition function p, this algorithm was shown to find 3-colorings on randomly generated 3-colorable graphs in a relatively short length of time (see [19] for more details on these graphs and choices of transition functions). The preliminary results for larger k were not very promising, and one possible reason that was given was that much more experimentation was needed to find a good transition function. However, this algorithm will be revisited in Chapter 3.3.3, in which it is applied successfully to find a 14-coloring of Q28, a graph that will be defined next. 42 Chapter 3 The Hypercube We now discuss the main combinatorial object studied in this thesis. Definition 3.1. For any k ∈ Z+, the k-dimensional hypercube (k-cube), denoted Qk, is the graph satisfying the following: V (Qk) = {0, 1}k, E(Qk) = {uv ∈ {0, 1}k | d(u, v) = 1}. This is the graph in which the vertices are all binary vectors of length k, and two vertices are adjacent if and only if the Hamming distance between them is 1. Often, we will be dealing with the jth power of the k-cube (when j ≤ k), Qjk. This graph satisfies: V (Qjk) = {0, 1}k, E(Qjk) = {u, v ∈ {0, 1}k | d(u, v) ≤ j}. It is clear that n(Qjk) = 2 k for any j, k ∈ Z+. The result of the following proposition will allow us to enumerate e(Qjk). Proposition 3.2. Let j, k ∈ Z+ with j ≤ k. Then e(Qjk) = 2k−1t where t satisfies: t = j∑ i=1 ( k i ) . Proof. First, we note that Qjk is t-regular since, by definition, t sums up all binary vectors of distance at most j from any vertex. Thus Proposition 2.8 implies: e(Qjk) = n(Qjk)t 2 = 2kt 2 = 2k−1t. 43 Chapter 3. The Hypercube Note that if j ≥ k then every vertex of Qjk is adjacent. Therefore Qjk ∼= K2k , and hence: e(Qjk) = ( 2k 2 ) = (2k)(2k − 1) 2 = 2k−1(2k − 1). This agrees with the result of Proposition 3.2 as: 2k−1 j∑ i=1 ( k i ) = 2k−1 k∑ i=1 ( k i ) = 2k−1(2k − 1), by the binomial theorem. We will most often be discussing the case j = 2, the graph Q2k. In this case: t = ( k 1 ) + ( k 2 ) = ( k + 1 2 ) , and hence e(Q2k) = 2 k−1 ( k + 1 2 ) . The number of vertices and edges of Q2k have been calculated for all k ≤ 31 in Appendix A. We now note the following results, and our first connections between Chapter 1 and Chapters 2− 3. Lemma 3.3. α(Qjk) = A(k, j + 1) for any j, k ∈ Z+ with j ≤ k. Proof. As two vertices of Qjk are adjacent if and only if their Hamming distance is at most j, any independent set V of Qjk is a set of vertices satisfying d(u, v) ≥ j + 1 for all u, v ∈ V . This is a binary code with minimum distance at least j + 1, which leads to the desired result. This leads to the following proposition. Proposition 3.4. Let Φ : {0, 1}k → {0, 1}k be an automorphism of Qjk, and C be a (k,M, j + 1) binary code. 
Φ(C) is then a (k,M, j + 1) binary code, where Φ(C) is defined as follows: Φ(C) = {Φ(u) | u ∈ C}. 44 Chapter 3. The Hypercube Proof. Clearly, C is an independent set of Qjk. Moreover, by Proposition 2.25, Φ(C) is an independent set of Qjk. Therefore Φ(C) is a (k,M, j + 1) binary code. 3.1 Automorphisms of the Cube and its Square In this section, we will define the automorphism groups of Qk and Q2k. The result of Theorem 2.29 implies that Aut(Qk) ⊆ Aut(Qjk) for all j ∈ Z+. Therefore every automorphism of Qk is, in addition, an automorphism of Qjk. Consider the following function. Definition 3.5. Let x ∈ {0, 1}k. Then the vector addition function: Ψx : {0, 1}k → {0, 1}k, is defined to be: Ψx(v) = v + x. This function is an automorphism of Qk, as will be seen in the Theorem 3.6. Theorem 3.6. For any x ∈ {0, 1}k, Ψx is an automorphism of the graph Qk . Proof. Consider an arbitrary x ∈ {0, 1}k. To prove Ψx is an automorphism of Qk, we first prove that Ψx is a permutation of {0, 1}k. To prove Ψx is a permutation, assume (for u, v ∈ {0, 1}k) that Ψx(u) = Ψx(v). We have: Ψx(u) = Ψx(v)⇔ u+ x = v + x ⇔ u = v, and hence Ψx is one-to-one. Since {0, 1}k is finite, it is therefore onto and thus a permutation. Note that throughout this thesis, we will prove a func- tion from {0, 1}k to {0, 1}k is a permutation by simply proving it is one-to- one. 45 Chapter 3. The Hypercube To prove Ψx is an automorphism of Qk, take u, v ∈ {0, 1}k such that uv ∈ E(Qk). We have: uv ∈ E(Qk)⇔ d(u, v) = 1 ⇔ d(u+ x, v + x) = 1 (by Lemma 1.9) ⇔ d(Ψx(u),Ψx(v)) = 1 ⇔ Ψx(u)Ψx(v) ∈ E(Qk), and therefore Ψx preserves edges and is hence an automorphism. Corollary 3.7. Qjk is vertex transitive for any j, k ∈ Z+. Proof. Consider arbitrary u, v ∈ {0, 1}k. By Theorem 3.6, the vector addi- tion function Ψu+v is an automorphism of Qk. Theorem 2.29 then implies that Ψu+v ∈ Aut(Qjk) for all j ∈ Z+. Moreover: Ψu+v(u) = u+ (u+ v) = v, and hence, Ψu+v maps u to v. Since this holds for any choices of u, v ∈ {0, 1}k, Qjk is vertex transitive. Permuting the coordinates of the vertices of Qk is also an automorphism, as we will prove in Theorem 3.8. Theorem 3.8. Permuting the vertex coordinates under the permutation σ, for any σ ∈ Sk, is an automorphism of Qk. Proof. To prove σ is an automorphism of Qk, we first prove that permuting the coordinates under σ also permutes the vertices in {0, 1}k. Assume, for u, v ∈ {0, 1}k, that σ(u) = σ(v). Since σ is a permutation, it by definition has an inverse σ−1, satisfying σ−1(σ(u)) = u and σ−1(σ(v)) = v. Now since σ(u)i = σ(v)i for all coordinates i ∈ [k], ui = vi for all coordinates i. There- fore σ is a permutation. To prove σ is an automorphism of Qk, consider u, v ∈ {0, 1}k such that uv ∈ E(Qk). By definition of Qk, d(u, v) = 1. Thus there is exactly one coordinate i ∈ [k] where ui 6= vi. Therefore σ(u) and σ(v) differ only in coordinate σ(i), and hence d(σ(u), σ(v)) = 1. Clearly σ(u)σ(v) ∈ E(Qk). Now consider u, v ∈ {0, 1}k such that σ(u)σ(v) ∈ E(Qk). By definition of Qk, d(σ(u), σ(v)) = 1. Thus there is exactly one coordinate i ∈ [k] where 46 Chapter 3. The Hypercube σ(u)i 6= σ(v)i. Therefore u and v differ only in coordinate σ−1(i), and hence d(u, v) = 1. Clearly uv ∈ E(Qk). Lemma 3.9. Let x, y ∈ {0, 1}k and σ1, σ2 ∈ Sk such that x 6= y or σ1 6= σ2. Then: σ1 ◦Ψx 6= σ2 ◦Ψy. Proof. Assume that σ1 ◦Ψx = σ2 ◦Ψy and, by way of contradiction, x 6= y or σ1 6= σ2. 
Thus the following holds: (σ1 ◦Ψx)(x) = (σ2 ◦Ψy)(x)⇔ σ1(~0) = σ2(x+ y) ⇔ ~0 = σ2(x+ y) ⇔ ~0 = x+ y ⇔ x = y, and therefore σ1 6= σ2, so it must be the case that x = y. Then there is some v ∈ {0, 1}k such that: σ1(v) 6= σ2(v)⇔ σ1(Ψx(v + x)) 6= σ2(Ψy(v + y)) ⇔ σ1(Ψx(v + x)) 6= σ2(Ψy(v + x)) since x = y ⇒ σ1 ◦Ψx 6= σ2 ◦Ψy, a contradiction. Since |Sk| = k! and |{0, 1}k| = 2k, the result of Lemma 3.9 implies that we can create 2kk! distinct automorphisms of Qk through composing vector addition and coordinate permutation automorphisms. As we will see in Theorem 3.11, every automorphism of Qk is of this form. The proofs of the following lemma and theorem are of a combinatorial nature similar to that used in the solution manual to [22, Exercise 1.3.29]. An alternative proof of the result that has an algebraic flavor can be found in [16]. Lemma 3.10. Let j, k ∈ Z+ with j ≤ k. Every copy of Qj in Qk is Qk[V ] where V ⊆ {0, 1}k satisfies: V = {v ∈ {0, 1}k | v has specified values in a fixed set of k − j coordinates}. Proof. Let H be any subgraph of Qk such that H is a copy of Qj in Qk. There then exists some isomorphism Φ from Qj (defined normally) to H. Since H is a subgraph of Qk, for every x1, x2 ∈ V (H), dH(x1, x2) ≥ dQk(x1, x2). Let u = Φ(~0 j) and v = Φ(~1j). We will prove by induction on j that d(u, v) = j. 47 Chapter 3. The Hypercube • Base Case(s): If j = 1, then H ∼= Q1, which contains simply two vertices connected by an edge. Thus dQk(u, v) = 1 and hence d(u, v) = 1. If j = 2, then u and v have the common neighbour Φ(01) in H since ~02 and ~12 have the common neighbour 01 in Q2. Now if d(u, v) = 1 then uv ∈ E(Qk), and since H ⊆ Qk, uvΦ(01)u would form a cycle of length 3 in the bipartite graph Qk, a contradiction. Thus d(u, v) = 2. • Inductive Hypothesis: Suppose for some positive integer j > 2, if ξ is an isomor- phism from Ql, for some l ∈ [j − 1], to a subgraph of Qk, then d(ξ(~0l), ξ(~1l)) = l. We now prove the result for j. Consider the isomorphism Φ : Qj → H. For all i ∈ [j], let wi = Φ(µji ). Now since ~0jµji ∈ E(Qj), uwi ∈ E(H), and hence d(u,wi) = 1. Now let Wi = {x ∈ V (Qj) | xi = 1}. Then Qj−1 ∼= Qj [Wi] under the isomorphism ξi that lengthens the vector by inserting a 1 in the ith coordinate. Thus Φi ◦ ξi is an isomorphism from Qj−1 to a subgraph of Qk where Φi is the restriction of Φ to Wi. By the induction hypothesis, for i ∈ [j]: d(Φi ◦ ξi(~0j−1),Φi ◦ ξi(~1j−1)) = j − 1. Moreover, ξi(~0j−1) = µ j i and ξi(~1 j−1) = ~1j . Therefore we have d(wi, v) = j − 1 for i ∈ [j]. Thus for each i ∈ [j], wi is different from v in j − 1 coordi- nates and is different from u in one coordinate. If for some i ∈ [j], the j − 1 coordinates in which wi is different from v do not include the one coordinate in which wi is different from u, then u and v differ in all these j coordinates and hence d(u, v) = j. So, suppose to the contrary that there are distinct i1, i2 ∈ [j] such that wi1 and wi2 differ from u only in coordinates j1 and j2, respectively, and additionally that uj1 = vj1 and uj2 = vj2 . Since wi1 differs from v in j−1 coordinates and differs from u in exactly one of these coordinates, wi1 differs from v in j − 2 coordinates but is the same as u in these same j−2 coordinates. Thus u and v differ in these j−2 48 Chapter 3. The Hypercube coordinates. Hence d(u, v) ≥ j − 2. Consider now α = µji1 + µ j i2 and its image z = Φ(µji1 + µ j i2 ). Since µji1α, µ j i2 α ∈ E(Qj), wi1z, wi2z ∈ E(H). Thus: d(wi1 , z) = 1 = d(wi2 , z). If d(u, z) = 1, then uwi1zu is a cycle of length 3 in the H. 
Since H ⊆ Qk this is also a cycle of length 3 in the bipartite graph Qk, a contradiction. Thus d(u, z) = 2. Suppose z differs from u in coordinate j3 where j3 /∈ {j1, j2}. Then u and wi1 , and u and wi2 , all agree in coordinate j3. However: d(wi1 , z) = 1 = d(wi2 , z), and so wi1 = wi2 , a contradiction. Therefore j3 ∈ {j1, j2}. Since d(u, z) = 2, z therefore differs from u only in coordinates j1 and j2. Since uj1 = vj1 and uj2 = vj2 , z differs from v in coordinates j1 and j2. Moreover, z differs from v in all the coordinates in which v differs from u, since in those coordinates z agrees with u. Thus: d(v, z) ≥ d(u, z) + d(u, v) ≥ 2 + (j − 2) = j. With α ∈ V (Qj), we have: j − 2 = dQj (~1j , α) = dH(Φ(~1j),Φ(α)) by Theorem 2.27 = dH(v, z) ≥ dQk(v, z) since H ⊆ Qk = d(v, z) ≥ j, a contradiction. Thus no such i1 and i2 exist, and d(u, v) = j. Hence by induction, the statement holds for all j. 49 Chapter 3. The Hypercube Next, let S be the set of coordinates in which u and v are the same. By the preceding inductive proof, |S| = k − j. Consider w ∈ V (H). Since Φ−1(w) is in Qj and for all y1, y2 ∈ V (Qj), d(y1, y2) = dQj (y1, y2), we have: dQj (~0 j ,Φ−1(w)) + dQj (~1 j ,Φ−1(w)) = d(~0j ,Φ−1(w)) + d(~1j ,Φ−1(w)) = j. By Theorem 2.27, it then follows that dH(u,w) + dH(v, w) = j. Hence d(u,w) + d(v, w) ≤ j. Moreover, the coordinates in [k]\S (as they are the coordinates in which u and v differ) contribute j to the sum d(u,w)+d(v, w). Thus u and w must be the same in all coordinates in S. Hence every vertex in H has the same values in S and therefore H is of the form Qk[V ] for V defined as in the theorem statement. Theorem 3.11. |Aut(Qk)| = 2kk! for k ∈ Z+. Proof. As shown earlier, the result of Lemma 3.9 implies that |Aut(Qk)| ≥ 2kk!. Thus, we need only prove that |Aut(Qk)| ≤ 2kk!, that is, that every automorphism of Qk can be expressed as a composition of a vector addition and coordinate permutation automorphism. Let Φ be an automorphism of Qk, and consider ~0k. Define v = Φ(~0k). Corollary 2.28 implies that Φ(NQk(~0k)) = NQk(v). Consider w ∈ {0, 1}k such that wt(w) ≥ 2, and let j = wt(w). Then ~0k and w are vertices in a copy of Qj in Qk, call this copy G. Define H = Qk[Φ(G)]. Since Φ is an automorphism, H ∼= Qj . By Lemma 3.10, H is a subgraph induced by a set of 2j vertices having specified values on a fixed set of k− j coordinates, call this set S. Let x be the vector attaining these specified values in S and with a 0 in the j coordinates not in S. Moreover, consider x + µki for all i not in S. These vertices are in H since their coordinates in S are as specified, and their images under Φ provide j vertices that differ from v in exactly 1 place. Thus once these images are chosen, the k − j coordinates that are the same in all vertices of H have been determined. Since Φ(w) is different from v in exactly j places, and the k − j coordinates in which v and Φ(w) are the same have already been determined by the choice of the images of NQk(v), Φ(w) is completely determined. Thus every automorphism of Qk has at most 2k choices to map ~0k, and at most k! choices to map NQk(~0 k), after which the rest of the automorphism is determined. Therefore, |Aut(Qk)| ≤ 2kk!, and we conclude with the desired result. 50 Chapter 3. The Hypercube Definition 3.12. For every i in [k], the function: τi : {0, 1}k → {0, 1}k, satisfies the following, for all v in {0, 1}k with coordinates defined by v = v1v2, . . . , vn: τi(v) = v, if wt(v) is even and vi = 0, v, if wt(v) is odd and vi = 1, v + µki , otherwise. 
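In other words, τi fixes v exactly when wt(v) mod 2 equals vi, and otherwise flips coordinate i. The following sketch (ours) expresses τi on bitmask-encoded vertices and brute-force checks, for one small case, the adjacency-preservation claim of Theorem 3.13 below.

```cpp
// A sketch of tau_i on vertices of {0,1}^k encoded as bitmasks, where bit
// i-1 stores coordinate i (an assumed layout).  tau_i flips coordinate i
// exactly when wt(v) mod 2 differs from v_i; it is its own inverse, hence
// a permutation.  main() checks, for k = 6 and i = 1, that tau_i preserves
// adjacency in Q_k^2, as Theorem 3.13 below asserts.
#include <bit>        // std::popcount (C++20)
#include <cassert>
#include <cstdint>

uint32_t tau(uint32_t v, int i) {            // i is 1-based, as in the text
    uint32_t parity = std::popcount(v) & 1u;
    uint32_t vi = (v >> (i - 1)) & 1u;
    return parity != vi ? v ^ (1u << (i - 1)) : v;
}

int main() {
    const int k = 6;
    auto adj2 = [](uint32_t u, uint32_t v) { // adjacency in Q_k^2
        int d = std::popcount(u ^ v);
        return d >= 1 && d <= 2;
    };
    for (uint32_t u = 0; u < (1u << k); ++u)
        for (uint32_t v = 0; v < (1u << k); ++v) {
            assert(tau(tau(u, 1), 1) == u);                   // involution
            assert(adj2(u, v) == adj2(tau(u, 1), tau(v, 1))); // automorphism
        }
}
```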
Note that if vi = 0, then τi(v) is of even parity. Moreover, if vi = 1, then τi(v) is of odd parity. It is clear that τi is not an automorphism of Qk, as it can’t be expressed as a composition of vector addition and coordinate permutation. However, τi is in fact an automorphism of Q2k, as we will see in Theorem 3.13, first proved by Miller in [16, Lemma 3.1]. Theorem 3.13. The function τi is an automorphism of Q2k for all i ∈ [k]. Proof. Fix i ∈ [k]. To prove τi is an automorphism of Q2k, we first prove that τi is a permutation of {0, 1}k. To prove τi is a permutation, assume by way of contradiction that there exist u, v ∈ {0, 1}k such that τi(u) = τi(v) and u 6= v. We must have uj = vj for all j ∈ [k]\ i and ui 6= vi. Assume, without loss of generality, that ui = 0 and vi = 1. Thus d(u, v) = 1, and therefore u and v are of differing par- ity. If wt(u) is even, then τi(u) = u and τi(v) = v by defintion and hence u = v, a contradiction. Similarly, if wt(u) is odd, then τi(u) = u + µki and τi(v) = v+µki , and hence u+µ k i = v+µ k i . Then by Lemma 1.9, u = v, also a contradiction. Thus τi is a permutation. We will now prove that τi is an automorphism of Q2k. Consider u, v ∈ {0, 1}k such that uv ∈ E(Q2k). We consider two cases: • Case 1: d(u, v) = 1. This case is trivial since τi only alters one coordinate of both u and v. Thus d(τi(u), τi(v)) ≤ 2 and hence τi(u)τi(v) ∈ E(Q2k). • Case 2: d(u, v) = 2. By Corollary 1.16, u and v are either both of even weight or both of 51 Chapter 3. The Hypercube odd weight. Thus, if u and v differ in coordinate i we have: d(τi(u), τi(v)) = wt(τi(u) + τi(v)) = wt(u+ v + µki ) as µki will be added to exactly one of u, v = wt(u+ v)− 1 as (u+ v)i = 1 = d(u, v)− 1 ≤ 2, and therefore τi(u)τi(v) ∈ E(Q2k). Now, if u and v are the same in coordinate i, since they are of the same parity, τi performs the same operation (either adding µki or not adding µki ) on both u and v. Therefore d(τi(u), τi(v)) = d(u, v) = 2, and hence τi(u)τi(v) ∈ E(Q2k). Now consider u, v ∈ {0, 1}k such that τi(u)τi(v) ∈ E(Q2k). We again consider two cases: • Case 1: d(τi(u), τi(v)) = 1. This case is trivial since τi only alters one coordinate of both u and v. Thus d(u, v) ≤ 2 and hence uv ∈ E(Q2k). • Case 2: d(τi(u), τi(v)) = 2. Again by Corollary 1.16, τi(u) and τi(v) are either both of even weight or both of odd weight. Thus, if τi(u) and τi(v) differ in coordinate i we have: d(u, v) = wt(u+ v) = wt(τi(u) + τi(v) + µki ) as µki will be added to exactly one of u, v = wt(τi(u) + τi(v))− 1 as (τi(u) + τi(v))i = 1 = d(u, v)− 1 ≤ 2, and therefore uv ∈ E(Q2k). 52 Chapter 3. The Hypercube Now, if τi(u) and τi(v) are the same in coordinate i, since they also are of the same parity, τi performs the same operation (either adding µki or not adding µki ) on both u and v. Thus d(u, v) = d(τi(u), τi(v)) = 2, and hence uv ∈ E(Q2k). For any i ∈ [k], since τi ∈ Aut(Q2k) \ Aut(Qk), we have Aut(Qk) ⊂ Aut(Q2k), with equality not holding. In fact, Miller proved in [16] that the automorphism group of the square of the k-cube can be generated by vector addition, coordinate permutation, and a single τi automorphism. Using this, he was then able to prove the following result. Theorem 3.14. For any positive integer k at least 4: |Aut(Q2k)| = (|{0, 1}k|)(|Sk+1|) = 2k(k + 1)!. 3.2 The Chromatic Number of the Cube and its Square Determining the chromatic number of a graph, as discussed in Chapter 2.4, can be very computationally expensive. This, however, does not hold for the k-cube, Qk. Theorem 3.15. 
χ(Qk) = 2 for all k ∈ Z+. Proof. Define Ve, Vo ⊆ {0, 1}k as follows: Ve = {v ∈ {0, 1}k|wt(v) is even}, Vo = {v ∈ {0, 1}k|wt(v) is odd}. Trivially, Ve∪Vo = {0, 1}k. No two vertices in either set can be adjacent, since d(u, v) = 1 implies that u and v are of differing parity. Thus Qk is bipartite, and hence 2-colorable. Moreover, we can trivially have no 1- coloring of Qk as there is at least one pair of adjacent vertices (for example, consider ~0k and µk1). Therefore χ(Qk) = 2. The chromatic number of the k-cube is thus 2 for any k ∈ Z+. However, higher powers of the k-cube are not bipartite (for k ≥ 2). Thus, determining χ(Qjk) for j, k ≥ 2 and j ≤ k−2 can’t necessarily be done in polynomial time. We consider specifically the square of the k-cube, whose chromatic num- ber is unknown for many k. Included in Appendix A is a table of known 53 Chapter 3. The Hypercube values of and bounds on χ(Q2k) for k ≤ 31, as well as other relevant in- formation. Due to the result of Lemma 3.3, any coloring of Q2k partitions the vertices into binary codes of length k and minimum distance at least 3. Theorem 2.34 then allows us to determine the lower bound on χ(Q2k) seen in Proposition 3.16. Proposition 3.16. χ(Q2k) ≥ ⌈ 2k A(k, 3) ⌉ for any k ∈ Z+. Proof. The proof of this proposition is immediate from the results of Theo- rem 2.34 and Lemma 3.3. As discussed in Chapter 1, A(k, 3) is not known for many k ≥ 16. If not known, we can still apply the sphere-packing bound to Proposition 3.16, as will be seen in Theorem 3.19 and Corollary 3.20. However, consider the case where k = 2n− i for some i, n ∈ Z+ with n ≥ 3 and i ∈ [4]. A(k, 3) is known explicitly from the result of Theorem 1.61. This allows for the following corollary to Proposition 3.16. Corollary 3.17. Let k = 2n − i for some i, n ∈ Z+ with n ≥ 3 and i ∈ [4]. Then: χ(Q2k) ≥ 2n. Proof. By Theorem 1.61 and Proposition 3.16 we have: χ(Q2k) ≥ ⌈ 22 n−i 22n−n−i ⌉ = 2n. Another way to determine a lower bound on χ(Q2k) is by exhibiting an optimal clique in Q2k, and then applying the result of Theorem 2.35. Lemma 3.18. ω(Q2k) = k + 1 for any k ∈ Z+ with k ≥ 3. Proof. Let k ≥ 3. We will first exhibit a clique of size k + 1. Consider the set of vertices V ⊆ {0, 1}k defined as follows: V = {~0k, µk1, µk2, . . . , µkk}. It is clear that any two vertices in V have distance at most 2, and thus V is a clique of size k + 1 in Q2k. 54 Chapter 3. The Hypercube Now assume, by way of contradiction, that ω(Q2k) ≥ k + 2. Let K be a clique of Q2k with |K| = k + 2. Since Q2k is vertex transitive, we can as- sume ~0k ∈ K. Thus every vertex in k has weight at most 2. Since there are at most k vertices in K of weight 1, namely ones from {µk1, µk2, . . . , µkk}, some vertex v ∈ K has weight 2. Let v = µki1+µki2 for i1, i2 ∈ [k] with i1 < i2. Now let σ1 ∈ Sk be a permutation that satisfies σ1(i1) = 1 and σ1(i2) = 2. Clearly σ1(v) = µk1 + µ k 2. Define K ′ = σ1(K) and v′ = σ1(v). It is clear that ~0k, v′ ∈ K ′, |K ′| = k + 2, and, by Proposition 2.25, K ′ is also a clique of Q2k. Thus µ k i /∈ K ′ for i ∈ [k] with i ≥ 3, since then: d(v′, µki ) = wt(v ′) + wt(µki )− 2wt(v′ ∩ µki ) = 2 + 1 + 0 = 3. Therefore, the only vertices of weight 1 possibly contained in K ′ are µk1 and µk2. Since k + 2 − 4 ≥ 1, K ′ has another vertex of weight 2, w′ 6= v′. Let w′ = µkj1 + µ k j2 for some j1, j2 ∈ [k] with j1 < j2. Since d(v′, w′) ≤ 2 and w′ 6= v′, |{j1, j2} ∩ {1, 2}| = 1. Therefore j1 ∈ {1, 2} and j2 ≥ 3. Define j3 ∈ {1, 2} where j3 6= j1. 
Now let σ2 ∈ Sk be a permutation that satisfies σ2(j1) = 1, σ2(j3) = 2, and σ2(j2) = 3. Clearly σ2(w′) = µk1 + µk3 and σ2(v′) = v′. We define w′′ = σ2(w′), v′′ = σ2(v′) = v′, and K′′ = σ2(K′). It is clear that ~0k, v′′ ∈ K′′, |K′′| = k + 2 and, again by Proposition 2.25, K′′ is a clique of Q2k. Hence, the only vertex of weight 1 possibly contained in K′′ is µk1, as d(w′′, µk2) = 3 and d(v′′, µki) = 3 for i ≥ 3. If µk1 ∈ K′′, then all vertices of weight 2 in K′′ have the form µk1 + µki with 2 ≤ i ≤ k; otherwise we would have two vertices at distance 3 in K′′. But there are only k − 1 such weight 2 vertices, forcing |K′′| ≤ k + 1, a contradiction. Meanwhile, if µk1 ∉ K′′, we consider any two vertices of weight 2, x, y ∈ K′′ \ {v′′, w′′} with x ≠ y. Let x = µka + µkb and y = µkc + µkd for some a, b, c, d ∈ [k] with a < b and c < d. Since d(v′′, x) ≤ 2 and d(v′′, y) ≤ 2, we have a, c ∈ {1, 2} and b, d ≥ 3. If a = 2, then since d(x, w′′) ≤ 2, it must be the case that b = 3. Similarly, if c = 2, then d = 3. If a = 2 then c = 1; moreover, d(x, y) ≤ 2 then implies that d = 3, giving y = w′′, a contradiction. Thus a = 1. A similar argument shows that c = 1. Since a = c = 1, every vertex of weight 2 in K′′ has the form µk1 + µki for 2 ≤ i ≤ k. Again, there are only k − 1 such weight 2 vertices, and hence |K′′| ≤ k, a contradiction. Therefore, every clique in Q2k, for k ≥ 3, has at most k + 1 vertices.

Note that for k = 1, 2, since Q2k ∼= K2^k, we have ω(Q21) = 2 and ω(Q22) = 4. The result of Theorem 2.35 then implies that χ(Q2k) ≥ k + 1 for all k ∈ Z+. This will be stated again in Theorem 3.19; however, a second proof will be offered that leads to the strengthened bound of Corollary 3.20.

Theorem 3.19. χ(Q2k) ≥ k + 1 for any k ∈ Z+.

Proof 1. The result of Lemma 3.18 proved that ω(Q2k) = k + 1. Thus, by the result of Theorem 2.35, χ(Q2k) ≥ k + 1.

Proof 2. The result of Proposition 3.16 implies that:

χ(Q2k) ≥ ⌈2^k/A(k, 3)⌉
≥ ⌈2^k/⌊2^k/(k + 1)⌋⌉ by the sphere-packing bound (Theorem 1.50)    (3.1)
≥ ⌈2^k/(2^k/(k + 1))⌉
= k + 1.

Corollary 3.20. Consider any k ∈ Z+ such that k ≠ 2^n − 1 for any n ∈ Z. Then χ(Q2k) ≥ k + 2.

Proof. Consider line (3.1) in the second proof of Theorem 3.19. If k + 1 ≠ 2^n, then 2^k/(k + 1) is non-integral, so ⌊2^k/(k + 1)⌋ < 2^k/(k + 1) and the ceiling in line (3.1) strictly exceeds k + 1. We then have the desired result.

In addition to the aforementioned lower bounds on χ(Q2k), it is also possible to determine upper bounds on χ(Q2k), as will be seen in the following theorems.

Theorem 3.21. χ(Q2k) ≤ χ(Q2k+1) for any k ∈ Z+.

Proof. Assume, by way of contradiction, that χ(Q2k) > χ(Q2k+1), and let χ(Q2k+1) = n. Consider the copy of Qk in Qk+1 induced by the vectors with a 0 in the last coordinate. Hamming distances within this copy agree with those in Qk, so applying an n-coloring of Q2k+1 to this copy yields an m-coloring of its square for some m ≤ n, a contradiction to χ(Q2k) > n.

Another well-known upper bound on χ(Q2k) will be proved in Theorem 3.22. Linial, Meshulam, and Tarsi proved this result in [12], and another proof was offered by Wan in [21, Theorem 2] that exhibits an appropriate coloring with the use of the binary representation matrix, as defined in Definition 1.18. We will prove this result by a method analogous to Wan's, and show an example of this coloring applied to Q23 in Example 3.23. Note that we have defined our binary representation matrix to be the transpose of that used by Wan, as we have represented our vectors as columns rather than rows.

Theorem 3.22. χ(Q2k) ≤ 2^⌈log2(k+1)⌉ for any k ∈ Z+.

Proof.
Consider the binary representation matrix of order k, BRMk, and define n = dlog2(k+1)e. Now define the following function for all v ∈ {0, 1}k: f(v) = (BRMk)(v), with matrix vector multiplication defined normally over the binary field. Note that (BRMk)(v) is a binary vector of length n, and hence f : {0, 1}k → {0, 1}n. Since |{0, 1}n| = 2n, we claim f is a 2n-coloring of Q2k. In fact, f could be an m-coloring of Q2k for some m < 2 n, which still yields the upper bound of 2n. Consider arbitrary vertices u and v with uv ∈ E(Q2k), that is, 1 ≤ d(u, v) ≤ 2. We prove f(u) 6= f(v). We consider two cases. If d(u, v) = 1, u and v differ in exactly one coordinate in [k], which we denote i. We have: (BRMk)(u)− (BRMk)(v) = (BRMk)(u− v) = (BRMk)(µki ) = (BRMk)i. where (BRMk)i represents the ith column of BRMk. However, (BRMk)i is the binary vector of length n whose integer representation is i. Since i 6= 0 by definition of the binary representation matrix, (BRMk)(u)− (BRMk)(v) 6= ~0n and hence f(u) 6= f(v). If d(u, v) = 2, u and v differ in exactly two coordinates in [k], which we denote i1 and i2. We have: (BRMk)(u)− (BRMk)(v) = (BRMk)(u− v) = (BRMk)(µki1 + µ k i2) = (BRMk)i1 + (BRMk)i2 , 57 Chapter 3. The Hypercube which represents the sum of the binary vectors of length n with integer repre- sentations i1 and i2. However, since i1 6= i2, (BRMk)(u)−(BRMk)(v) 6= ~0n. Therefore f(u) 6= f(v). Example 3.23. Consider the case k = 3. Theorem 3.22 implies that: χ(Q23) ≤ 2dlog2(4)e = 22 = 4. We will therefore exhibit a 4-coloring of Q23. The binary representation matrix of order 3 is: BRM3 = ( 0 1 1 1 0 1 ) . The coloring f described in Theorem 3.22 is applied as in Table 3.1 (for simplicity we will convert f(v) to its integer representation i). Table 3.1: A 4-coloring of Q23 using Wan’s construction v ∈ {0, 1}3 (BRM3)(v) i v ∈ {0, 1}3 (BRM3)(v) i 00 0 ( 0 0 ) 0 00 1 ( 1 1 ) 3 01 0 ( 1 0 ) 2 01 1 ( 0 1 ) 1 10 0 ( 0 1 ) 1 10 1 ( 1 0 ) 2 11 0 ( 1 1 ) 3 11 1 ( 0 0 ) 0 This divides {0, 1}3 into the following 4 color classes: {v | (BRM3)(v) = 0} = {000, 111}, {v | (BRM3)(v) = 1} = {011, 100}, {v | (BRM3)(v) = 2} = {010, 101}, {v | (BRM3)(v) = 3} = {001, 110}. 58 Chapter 3. The Hypercube It is easily verified that this is a 4-coloring of Q23. In fact, by Theorem 3.19, χ(Q23) ≥ 4 so this coloring is optimal. Theorem 3.22 will now allow us to determine exactly χ(Q2k) when k = 2n − i for some i, n ∈ Z+ with n ≥ 3 and i ∈ [4]. Theorem 3.24. Let k = 2n − i for any i, n ∈ Z+ with n ≥ 3 and i ∈ [4]. Then: χ(Q2k) = 2 n. Proof. The result of Theorem 3.22 implies that: χ(Q2k) ≤ 2dlog2(2 n−i+1)e ≤ 2dlog2(2n)e = 2n, and therefore, in addition with the result of Corollary 3.17, we have the desired result. One useful upper bound on χ(Q2n) occurs when n is odd. In this case, defining n = 2k+1, χ(Q22k+1) ≤ 2χ(Q2k), as we will prove in Theorem 3.26 by means of the technique used in [17, Theorem 1]. An example of this bound applied to Q23 will be seen in Example 3.27. The u (u+v) construction used in the following lemma and theorem was introduced in [13, p.76], and is a common technique in building longer codes from shorter codes. Lemma 3.25. Let V be a (k,M, 3) binary code for positive integers k and M . Define the following sets (where pu and qu are defined as in Chapter 1): Vp = {u pu (u+ v) | u ∈ {0, 1}k, v ∈ V }, Vq = {u qu (u+ v) | u ∈ {0, 1}k, v ∈ V }, Vp and Vq are (2k + 1, 2kM, 3) binary codes. Proof. The code length and size of Vp and Vq are trivially 2k + 1 and 2kM , respectively. 
We will prove the minimum distance of Vp is at least 3. Since Vq = Vp + µ2k+1k+1 , the result for Vq will follow immediately from Proposition 1.25. 59 Chapter 3. The Hypercube Consider distinct v2k+11 , v 2k+1 2 ∈ Vp. For v1, v2 ∈ V and u1, u2 ∈ {0, 1}k, let: v2k+11 = u1 pu1 (u1 + v1), v2k+12 = u2 pu2 (u2 + v2). We will show that d(v2n+11 , v 2n+1 2 ) ≥ 3. First note the following: d(u1, u2) + d(u1 + v1, u2 + v2) = d(u1, u2) + wt(u1 + v1 + u2 + v2) = d(u1, u2) + d(u2, u1 + v1 + v2) ≥ d(u1, u1 + v1 + v2) by the triangle inequality = wt(u1 + v1 + u1 + v2) = d(u1 + v1, u1 + v2) = d(v1, v2) by Lemma 1.9. Therefore, depending on whether or not v1 = v2 is satisfied, we have: d(v2k+11 , v 2k+1 2 ) = d(u1, u2) + d(pu1 , pu2) + d(u1 + v1, u2 + v2) ≥ max{2d(u1, u2) + d(pu1 , pu2), d(v1, v2)}. If v1 = v2, then u1 6= u2. Moreover, d(u1, u2) = 1 implies d(pu1 , pu2) = 1 (by Corollary 1.16), and thus: d(v2k+11 , v 2k+1 2 ) ≥ 2d(u1, u2) + d(pu1 , pu2) ≥ 3. If v1 6= v2, then d(v1, v2) ≥ 3 since v1, v2 ∈ V . Hence: d(v2k+11 , v 2k+1 2 ) ≥ d(v1, v2) ≥ 3. Theorem 3.26. χ(Q22k+1) ≤ 2χ(Q2k) for any positive integer k. Proof. Fix k ∈ Z+. We will show this upper bound on χ(Q22k+1) by creating a 2χ(Q2k)-coloring of Q 2 2k+1. Take an optimal coloring of Q 2 k and denote its color classes as V kj ⊆ {0, 1}k for j ∈ [χ(Q2k)]. Clearly, we have: χ(Q2k)∑ j=1 |V kj | = 2k. 60 Chapter 3. The Hypercube We will use these color classes to build color classes in a coloring of Q22k+1. To create our color classes of Q 2 2k+1, we define (for j ∈ [χ(Q2k)]): V 2k+1p,j = {u pu (u+ v) | u ∈ {0, 1}k, v ∈ V kj }, V 2k+1q,j = {u qu (u+ v) | u ∈ {0, 1}k, v ∈ V kj }. Obviously, we have created 2χ(Q2k) such sets. We claim these sets are color classes in a 2χ(Q2k)-coloring of Q 2 2k+1. The sum of the vertices in the sets can be determined as follows: ∑ ρ∈{p,q} χ(Q2k)∑ j=1 |V 2k+1ρ,j | = ∑ ρ∈{p,q} χ(Q2k)∑ j=1 2k|V kj | = ∑ ρ∈{p,q} (2k)(2k) = ∑ ρ∈{p,q} 22k = (2)(22k) = 22k+1 = n(Q22k+1). Moreover, Lemma 3.25 implies that these sets have minimum distance 3, and are thus independent in Q22k+1. Therefore, we need only prove that they are pairwise disjoint, and as such their union contains every vertex of Q22k+1. Assume, by way of contradiction, that for some ρ, θ ∈ {p, q} and j1, j2 ∈ [χ(Q2k)]: V 2k+1ρ,j1 ∩ V 2k+1θ,j2 6= ∅, V 2k+1ρ,j1 6= V 2k+1θ,j2 . Take v2k+1 ∈ V 2k+1ρ,j1 ∩ V 2k+1θ,j2 . We have, for some u1, u2 ∈ {0, 1}k, v1 ∈ V kj1 , and v2 ∈ V kj2 : u1 ρu1 (u1 + v1) = u2 θu2 (u2 + v2). Thus we must have u1 = u2. Trivially, u1 and u2 then are of the same parity; so for ρu1 = θu2 , we must have ρ = θ. Finally, since u1 = u2 and u1 + v1 = u2 + v2, Lemma 1.9 implies that v1 = v2. Since v1 = v2, we must 61 Chapter 3. The Hypercube have j1 = j2 as the V kj form a partition of {0, 1}k. Thus V 2k+1ρ,j1 = V 2k+1θ,j2 , a contradiction. Therefore these sets form color classes of a 2χ(Q2k)-coloring of Q22k+1. Example 3.27. Let us look at the case k = 1. χ(Q21) = 2, with the optimal coloring having the following trivial color classes: V 11 = {0}, V 12 = {1}. We use these sets to build our color classes for an optimal coloring of Q23. So, we apply our general formula: V 3p,j = {u pu (u+ v)|u ∈ {0, 1}, v ∈ V 1j }, V 3q,j = {u qu (u+ v)|u ∈ {0, 1}, v ∈ V 1j }, to build: V 3p,1 = {000, 111}, V 3q,1 = {010, 101}, V 3p,2 = {001, 110}, V 3q,2 = {011, 100}. It is easily determined that these are the color classes of a 4-coloring of Q23. In fact, these color classes are the same as those realized in the optimal coloring of Q23 determined in Example 3.23. 
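Both the construction of Lemma 3.25 and the coloring of Theorem 3.26 are easy to check by machine in this smallest case. The following sketch is ours; it assumes, consistently with Example 3.27, that pu is the parity bit wt(u) mod 2 and qu its complement, and it uses an assumed bit layout for the concatenation u pu (u + v).

```cpp
// A sketch verifying Example 3.27 mechanically: build the classes
// V_{p,j}, V_{q,j} of Q_3^2 from the 2-coloring {0},{1} of Q_1^2 and check
// that all pairwise distances inside each class are >= 3.  Bit layout:
// u in bits 0..k-1, the parity bit in bit k, and u+v in the top k bits.
#include <bit>
#include <cstdint>
#include <iostream>
#include <vector>
using namespace std;

int main() {
    const int k = 1;
    vector<vector<uint32_t>> classesK = {{0u}, {1u}}; // color classes of Q_1^2

    vector<vector<uint32_t>> classes;                 // classes of Q_{2k+1}^2
    for (int flip : {0, 1})                           // flip=0 uses p_u, flip=1 uses q_u
        for (const auto& Vj : classesK) {
            vector<uint32_t> cls;
            for (uint32_t u = 0; u < (1u << k); ++u)
                for (uint32_t v : Vj) {
                    uint32_t par = (popcount(u) & 1u) ^ (uint32_t)flip;
                    cls.push_back(u | (par << k) | ((u ^ v) << (k + 1)));
                }
            classes.push_back(cls);
        }

    for (const auto& cls : classes)                   // minimum distance >= 3?
        for (uint32_t a : cls)
            for (uint32_t b : cls)
                if (a != b && popcount(a ^ b) <= 2) {
                    cout << "not a proper coloring\n";
                    return 1;
                }
    cout << classes.size() << " independent classes of Q_3^2\n"; // prints 4
}
```

Replacing classesK with an optimal coloring of Q2k (and adjusting k) carries out the general doubling step of Theorem 3.26.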
3.2.1 The Asymptotic Behavior of the Chromatic Number of the Square of the Cube

The result of Theorem 3.21 states that χ(Q2k) is nondecreasing as k increases. In fact, Östergård proved the following result in [17, Theorem 5].

Theorem 3.28. limk→∞ χ(Q2k)/k = 1.

Therefore, as k increases, χ(Q2k) approaches k. From Theorem 3.24, if k = 2^n − i for n ≥ 3 and i ∈ [4], then χ(Q2k) = 2^n. Moreover, if k = 2^n for n ∈ Z+, then Corollary 3.20 yields χ(Q2k) ≥ 2^n + 2. Therefore we can think of 2^n − i as a “tail” of values satisfying χ(Q22n−i) = 2^n, with i = 1 representing the uppermost value at the base of the tail. We will now describe the length of this tail, T. Let T : Z+ \ {1, 2} → Z+ be defined by letting T(n) be the maximum positive integer i such that χ(Q22n−i) = 2^n. It must be the case that:

4 ≤ T(n) ≤ 2^n − 2^{n−1} = 2^{n−1}.

Based on the bounds in Appendix A we have T(3) = 4, 4 ≤ T(4) ≤ 7, and 4 ≤ T(5) ≤ 14. The result of Theorem 3.28 implies that T(n) becomes relatively smaller as n tends to ∞, as we will show in Corollary 3.29. In other words, the lengths of these tails of values of k for which the chromatic number of Q2k is a power of 2 become relatively smaller as k increases.

Corollary 3.29. For every λ ∈ (0, 1), there exists some N ∈ Z+ such that T(n)/2^n < λ for all n > N.

Proof. Take arbitrary λ ∈ (0, 1) and define λ0 = 2^⌊log2(λ)⌋. Clearly λ0 ≤ λ. Given a positive integer n, let k = 2^{n−1} + (1 − 2λ0)2^{n−1}. Clearly, there exists a positive integer N1 such that k is an integer for all n > N1. Moreover, by Theorem 3.19, we have:

χ(Q2k)/k − 1 ≥ (k + 1)/k − 1 > 0.

We now apply Theorem 3.28: for all ε > 0 there exists a positive integer N2 such that n > N2 implies:

χ(Q2k)/k − 1 < ε.    (3.2)

Take ε = 2/(2 − 2λ0) − 1, which is positive since λ0 > 0, and fix the N2 that satisfies (3.2). Let N = max{N1, N2}, and hence n > N implies:

χ(Q2k) < (1 + ε)k
= (2/(2 − 2λ0)) (2^{n−1} + (1 − 2λ0)2^{n−1})
= (2/(2 − 2λ0)) (2 − 2λ0) (2^{n−1})
= (2)(2^{n−1})
= 2^n.

Since χ(Q2k) < 2^n, by the definition of T(n), n > N implies:

T(n) < 2^n − k
= 2^n − (2^{n−1} + (1 − 2λ0)2^{n−1})
= 2^n − (2 − 2λ0)(2^{n−1})
= 2^n − 2^n + 2λ0(2^{n−1})
= λ0 2^n
≤ λ 2^n.

3.3 Applying Computational Methods to Determine the Chromatic Number of the Square of the 8-Dimensional Cube

As seen in Appendix A, the smallest k such that χ(Q2k) is unknown is k = 8. This graph has 2^8 = 256 vertices and 2^7 (9 choose 2) = 4608 edges. Although the actual value of χ(Q28) is unknown, we know from Theorem 1.60 that A(8, 3) = 20 (by Theorem 1.44 we have A(8, 3) = A(9, 4)). Thus, we can use Proposition 3.16 to determine a lower bound on χ(Q28) as follows:

χ(Q28) ≥ ⌈256/20⌉ = ⌈12.8⌉ = 13.

Moreover, 14-colorings of Q28 are known to exist. This was first exhibited by Hougardy in 1991 [23, p.168] from an adaptation of the Petford-Welsh algorithm described in Chapter 2.4.4. This adaptation will be covered in more detail in Chapter 3.3.3. Another 14-coloring was found independently by Royle in 1993 by means of a randomized search [11, p.157]. Hence, these colorings provide an upper bound on the chromatic number, so χ(Q28) is either 13 or 14. Therefore, in order to determine the chromatic number, one either needs to find a 13-coloring, or prove that one does not exist. Also, since there can be no color class of size greater than 20, if a 13-coloring exists, its color classes must fall into one of the following 5 cases:

• 12 independent sets of size 20 and 1 of size 16,
• 11 independent sets of size 20, 1 of size 19, and 1 of size 17,
• 11 independent sets of size 20 and 2 of size 18,
• 10 independent sets of size 20, 2 of size 19, and 1 of size 18, and
• 9 independent sets of size 20 and 4 of size 19.

These cases show that, if searching for a 13-coloring of Q28, independent sets of size 15 or less can be ignored, as they could not possibly be a color class of a 13-coloring. The three algorithms introduced in Chapter 2.4 were then applied to the graph Q28 in an effort to determine the chromatic number. However, the computational difficulty associated with this graph turned out to be far greater than expected.

3.3.1 Integer Programming

Consider the IP model to determine the existence of an m-coloring in a graph described in Chapter 2.4.2. We can denote every vertex of Q2k by its integer representation, and create the binary variables xi,j for all i in {0, 1, . . . , 2^k − 1} and j in [m]. A feasible solution to the following IP would then be an m-coloring of Q2k (letting d(i1, i2) denote the Hamming distance between the binary representations of i1 and i2):

xi1,j + xi2,j ≤ 1 for all i1 and i2 in {0, 1, . . . , 2^k − 1} with 1 ≤ d(i1, i2) ≤ 2, and all j in [m],
∑j∈[m] xi,j ≥ 1 for all i in {0, 1, . . . , 2^k − 1},
xi,j ∈ {0, 1} for all i in {0, 1, . . . , 2^k − 1} and j in [m].

We then applied this model to determine if Q28 is 13-colorable (i.e., the case where k = 8 and m = 13). Computationally, we modeled this IP in C++ with CPLEX 9.0, a powerful mixed integer program optimizer designed by ILOG, the world's leading combinatorial optimization tool provider. However, the number of variables in this IP is 13n(Q28) = (13)(256) = 3,328 and the number of constraints is 13e(Q28) + n(Q28) = (13)(4,608) + 256 = 60,160. After CPLEX's internal presolver was applied, the reduced IP matrix contained 13,607 rows, 3,328 columns, and 72,722 nonzero entries. However, it is possible to impose several additional constraints. Consider the clique in Q2k defined as V = {~0k, µk1, µk2, . . . , µkk} (this was proven to be a clique in Lemma 3.18). It is clear that every vertex in V must be in a separate color class. Without loss of generality, we can therefore add the following constraints:

x0,1 = 1, x2^0,2 = 1, x2^1,3 = 1, x2^2,4 = 1, . . . , x2^{k−1},k+1 = 1.

Including these constraints allowed CPLEX's presolve to eliminate many more variables and constraints (such as the variables representing vertices adjacent to vertex µki in color class i + 1, which must of course equal 0). We applied this IP to find optimal colorings for all 3 ≤ k ≤ 7 (that is, a 4-coloring of Q23 and 8-colorings of Q2k for 4 ≤ k ≤ 7). The times to find these colorings are included in Figure 3.1, and were obviously very reasonable.

Figure 3.1: Computation Times Using Integer Programming

However, applied to Q28, the new reduced IP matrix contained 8,431 rows, 2,959 columns, and 47,784 nonzero entries. This problem was unfortunately too large to compute in a reasonable amount of time. After running for over 23 hours, CPLEX had only explored 1,040 out of a maximum possible 2^2,959 nodes of the branch and bound tree. No effort was made to help CPLEX determine the variable choice in the branch and bound. Instead, the defaults were used. Given the vertex transitivity of the graph and the large number of symmetries of the IP model previously discussed, it seemed more worthwhile to use another computational method.

3.3.2 Column Generation

We then applied the column generation algorithm described in Chapter 2.4.3, again implemented in C++ with CPLEX 9.0.
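An aside, not part of the thesis implementation: since the MWIS pricing step is itself NP-hard, a common practical shortcut is to first look for an improving column greedily, scanning vertices in order of decreasing dual value, and to invoke the exact integer program only when the greedy set fails to have total dual value above 1. A sketch of such a warm start:

```cpp
// Not the thesis's method: a greedy warm start for the MWIS pricing step
// on Q_k^2.  pi[v] is the dual value of vertex v from the IS relaxation;
// an improving column exists iff some independent set has total dual
// value > 1, and scanning by decreasing pi[v] often finds one cheaply.
#include <algorithm>
#include <bit>
#include <cstdint>
#include <numeric>
#include <vector>
using namespace std;

vector<uint32_t> greedyColumn(const vector<double>& pi, int k) {
    auto adj2 = [](uint32_t u, uint32_t v) {
        int d = popcount(u ^ v);
        return d >= 1 && d <= 2;           // adjacency in Q_k^2
    };
    vector<uint32_t> order(1u << k);
    iota(order.begin(), order.end(), 0u);
    sort(order.begin(), order.end(),
         [&](uint32_t a, uint32_t b) { return pi[a] > pi[b]; });
    vector<uint32_t> column;               // greedily grown independent set
    for (uint32_t v : order) {
        bool independent = true;
        for (uint32_t u : column)
            if (adj2(u, v)) { independent = false; break; }
        if (independent) column.push_back(v);
    }
    return column;  // caller adds it to S if its dual values sum to > 1
}
```

In the computations reported here, however, MWIS was solved exactly, via the integer program above.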
This was done first with the initial set S being the singleton sets of vertices. Moreover, we could impose the restriction on the MWIS phase of the algorithm that we only needed to generate independent sets of size 16 or greater, as prior knowledge indicated that a 13-coloring of Q2k could not contain any color classes of size less than 16. This approach showed much promise; however, the computation time at each node of the tree turned out to be prohibitive. For example, to solve the initial node of branch and bound tree, it took 2, 388 iterations of the IS-MWIS column generation method. This process took just over 35 minutes on a 2 G.Hz. Pentium processor with 2 G.B. of R.A.M. After approximately 19 hours, 18 nodes had been explored. The branch and bound tree for this method has maximum size of 232,640 nodes (as ( 256 2 ) = 32, 640). This approach was abandoned since it seemed that an optimal coloring was unlikely to be determined in a reasonable about of time. Varying the choice of the initial set of independent sets S showed no noticeable improvement. On the other hand, when applied to Q27, the initial node took only 18 iterations and 1 second to perform on the same machine. It also turned out to find the optimal coloring with the independent sets generated in the first node. Part of the reason we were unable to bring the computation time at each node down to a manageable length of time is due to the immense number of columns available for the MWIS portion of the algorithm to choose from. From Proposition 2.25, we know that, given an independent set V and auto- morphism Φ, that Φ(V ) is independent. Therefore, given any independent set V , we can generate a collection of independent sets by producing the independent set Φ(V ) for every automorphism Φ. It is unlikely that every Φ(V ) will be distinct. However, if a graph has many automorphisms, the number of generated independent sets that are distinct will still likely be very large. Recall from Theorem 3.14 that |Aut(Q2k)| = 2k(k + 1)!. There- fore, there exist 289! = 92, 897, 280 automorphisms of Q28, and given a single independent set V , we can immediately generate over 90 million (again, not necessarily distinct) independent sets of the same size. 67 Chapter 3. The Hypercube The challenges associated with the column generation algorithm for de- termining a 13-coloring of Q28 led us naturally to looking at properties of these independent sets. In Chapter 4, we will attempt to determine their structure. 3.3.3 The Petford-Welsh Algorithm Recall the Petford-Welsh algorithm described in Chapter 2.4.4. As stated previously, Hougardy exhibited a 14-coloring of Q28 with an adaptation of this algorithm. We were able to reproduce his results, and find a 14-coloring of Q28 with a running time of within several seconds. The C++ code of our implementation of Hougardy’s adaptation is included in Appendix D, under the file name Hougardy.cpp. The 14-coloring generated with this algorithm is included in Appendix B. However, attempts with varying values for the probability input parameter did not yield a 13-coloring. This, of course, does not prove the non-existence of such a coloring. 68 Chapter 4 The Structure of Optimal Binary Codes Recall from the Chapter 3.3 that if a 13-coloring of Q28 exists, it contains at least 9 independent sets of size 20, and possibly as many as 12. Therefore, it contains at least 9 disjoint (8, 20, 3) binary codes. 
The knowledge of the structure of these codes is thus very important in proving the existence or non-existence of a 13-coloring, or providing MIP cuts to speed up the exploration of the branch and bound (cut) tree. 4.1 Computationally Determining all Weight Distributions We know from Chapter 1 that every binary code has a weight distribu- tion with respect to any vector u. One may wish to determine how many sequences of integers are a feasible weight distribution of some (n,M, d) binary code C. Since C + u is an (n,M, d) binary code whose weight distri- bution (with respect to ~0n) is the same as the weight distribution of C with respect to u, a sequence of integers is a feasible weight distribution with respect to any vector u if and only if it is a feasible weight distribution with respect to ~0n. Given a specific sequence of integers, (Wi)ni=0, it is easy to set up an IP that determines if there is some (n,M, d) binary code with that weight distribution. We create a binary variable vj for all j in {0, 1, . . . , 2n − 1}. This variable is set to equal 1 if the binary representation of integer j is included in C, and 0 otherwise. We will let wt(j) and d(j1, j2) be defined with respect to the appropriate binary representations. We then determine if the following system of constraints has a feasible solution: 69 Chapter 4. The Structure of Optimal Binary Codes 2n−1∑ j=0 vj = M,∑ j|wt(j)=i vj = Wi for all i in {0, 1, . . . , n}, vj1 + vj2 ≤ 1 for all j1 and j2 in {0, 1, . . . , n} satisfying 1 ≤ d(j1, j2) ≤ d− 1. It is clear that if there is a feasible solution to these constraints, then C = {j|vj = 1} is an (n,M, d) binary code. Otherwise, if this IP is infeasible, then no such code exists. However, this can be a very computationally expensive procedure. Fortunately, the knowledge of permutations can allow us to fix at least one of the codewords in C and speed this up dramatically. Proposition 4.1. Let C ⊆ {0, 1}n be a binary (n,M, d) code with weight distribution (Wi)ni=0. Let v ∈ C with wt(v) = w and consider any u in {0, 1}n with wt(u) = w. Then there exists another binary (n,M, d) code C ′ such that u ∈ C ′ and C ′ also has weight distribution (Wi)ni=0. Proof. Consider the permutation σ in Sk such that σ(v) = u, which exists since wt(u) = wt(v). Define C ′ = σ(C). It is clear that C ′ is an (n,M, d) binary code that also has weight distribution (Wi)ni=0. In addition, since v ∈ C, we have u ∈ C ′. In this chapter, we will be determining the feasible weight distributions of optimal binary codes of length 8 and minimum distance 3. It was proven in Theorem 1.60 that these are (8, 20, 3) binary codes. From [1, Table II], it is known that A(8, 6, 4) = 2. Therefore, in determining if (Wi)8i=0 is a feasible weight distribution of an (8, 20, 3) binary code C, consider the case where W4 ≥ 3. In this case, we can in fact fix two codewords to any u and v in C such that wt(u) = wt(v) = 4 and d(u, v) = 4. We will be using the codewords 00001111 and 00110011, whose integer representations are 15 and 51, respectively. The IP to determine if (Wi)8i=0 is a feasible weight distribution of an (8, 20, 3) binary code C is then defined by the following system of constraints, denoted (FD) for feasible distributions: 70 Chapter 4. The Structure of Optimal Binary Codes 255∑ j=0 vj = 20,∑ j|wt(j)=i vj = Wi for all i in {0, 1, . . . , 8}, vj1 + vj2 ≤ 1 for all j1 and j2 in {0, 1, . . . , 8} satisfying 1 ≤ d(j1, j2) ≤ 2, v15 = 1 if W4 ≥ 1, v51 = 1 if W4 ≥ 3. 
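Before turning to the enumeration, note that this feasibility test does not in principle require an IP solver: a plain backtracking search over {0, 1, . . . , 255} implements the same model. The sketch below is ours (the thesis solved (FD) with CPLEX); the fixed codewords 15 and 51 mirror the last two constraints, as justified by Proposition 4.1 and A(8, 6, 4) = 2. It can be very slow on infeasible distributions, which foreshadows the running times reported below.

```cpp
// A solver-free sketch of the (FD) feasibility test: backtracking search
// for an (8,20,3) binary code with prescribed weight distribution W[0..8].
// Codewords are integers 0..255; the fixed words 15 (00001111) and 51
// (00110011) mirror the conditional constraints of (FD).
#include <bit>
#include <cstdint>
#include <vector>
using namespace std;

static int quota[9];              // remaining codewords allowed per weight
static vector<int> code;          // codewords chosen so far

static bool dfs(int v) {
    if ((int)code.size() == 20) return true;
    if (v == 256) return false;
    int w = popcount((unsigned)v);
    if (quota[w] > 0) {
        bool ok = true;           // distance >= 3 to all chosen codewords?
        for (int c : code)
            if (popcount((unsigned)(c ^ v)) <= 2) { ok = false; break; }
        if (ok) {
            code.push_back(v); --quota[w];
            if (dfs(v + 1)) return true;
            code.pop_back(); ++quota[w];
        }
    }
    return dfs(v + 1);            // also try leaving v out
}

bool feasibleDistribution(const int W[9]) {
    for (int i = 0; i < 9; ++i) quota[i] = W[i];
    code.clear();
    if (quota[4] >= 1) { code.push_back(15); --quota[4]; } // v_15 = 1
    if (quota[4] >= 2) { code.push_back(51); --quota[4]; } // v_51 = 1 (W_4 >= 3)
    return dfs(0);
}
```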
Thus we could run this IP on every sequence of nonnegative integers (W_i)_{i=0}^8 with \sum_{i=0}^{8} W_i = 20. Unfortunately, there are (28 choose 8) = 3,108,105 such sequences. However, the result of Theorem 1.48 allows us to impose W_i <= A(8, 3, i) = A(8, 4, i) for all i. Proposition 1.46 implies that A(8, 3, 0) = A(8, 3, 1) = 1, and hence A(8, 3, 8) = A(8, 3, 7) = 1. In addition, Proposition 1.47 implies that A(8, 3, 2) = A(8, 4, 2) = ⌊8/2⌋ = 4, and hence A(8, 3, 6) = 4. Finally, [1, Table I] contains the values A(8, 3, 3) = A(8, 3, 5) = 8 and A(8, 3, 4) = 14. Applying all of these constraints on the W_i dramatically reduces the number of sequences that must be tested. In addition, we will apply the result of the following theorem and corollary.

Theorem 4.2. Let C be an (8, M, 3) binary code with weight distribution (W_i)_{i=0}^8. Then W_0 + W_1 + W_2 <= 4.

Proof. Assume, by way of contradiction, that there exists an (8, M, 3) binary code C with weight distribution (W_i)_{i=0}^8 satisfying W_0 + W_1 + W_2 >= 5. If the zero vector 0_8 were in C, then W_1 = W_2 = 0, since every v in {0, 1}^8 with wt(v) <= 2 is at distance at most 2 from 0_8; this would give W_0 + W_1 + W_2 = 1. Thus 0_8 ∉ C, so W_0 = 0 and W_1 + W_2 >= 5. Since W_1 <= A(8, 3, 1) = 1 and W_2 <= A(8, 3, 2) = 4, we must have W_1 = 1 and W_2 = 4.

Let u, v_1, v_2, v_3, v_4 ∈ C with wt(u) = 1 and wt(v_i) = 2 for all i in [4]. We have wt(v_i ∩ v_j) = 0 for i and j in [4] with i ≠ j, as otherwise, by Lemma 1.15,

    d(v_i, v_j) = wt(v_i) + wt(v_j) - 2 wt(v_i ∩ v_j) <= 2 + 2 - 2 = 2.

Therefore \sum_{i=1}^{4} v_i = 1_8, the all-ones vector, and hence wt(u ∩ v_i) = 1 for some i in [4]. This implies

    d(u, v_i) = wt(u) + wt(v_i) - 2 wt(u ∩ v_i) = 1 + 2 - 2 = 1,

a contradiction.

Corollary 4.3. Let C be an (8, M, 3) binary code with weight distribution (W_i)_{i=0}^8. Then W_6 + W_7 + W_8 <= 4.

Proof. Assume, by way of contradiction, that there exists an (8, M, 3) binary code C with weight distribution (W_i)_{i=0}^8 satisfying W_6 + W_7 + W_8 >= 5. Let (W'_i)_{i=0}^8 be the weight distribution of C + 1_8. We have W'_0 + W'_1 + W'_2 = W_8 + W_7 + W_6 >= 5; however, C + 1_8 is an (8, M, 3) binary code by Proposition 1.25. This contradicts the result of Theorem 4.2.

Therefore, to determine all sequences of integers to test as feasible weight distributions for an (8, 20, 3) binary code, we must enumerate all feasible solutions to the following system, denoted (WD) for weight distributions:

    \sum_{i=0}^{8} W_i = 20,          (4.1)
    W_0 <= 1,                          (4.2)
    W_1 <= 1,                          (4.3)
    W_2 <= 4,                          (4.4)
    W_3 <= 8,                          (4.5)
    W_4 <= 14,                         (4.6)
    W_5 <= 8,                          (4.7)
    W_6 <= 4,                          (4.8)
    W_7 <= 1,                          (4.9)
    W_8 <= 1,                          (4.10)
    W_0 + W_1 + W_2 <= 4,              (4.11)
    W_6 + W_7 + W_8 <= 4,              (4.12)
    W_0 = 1 ⇒ W_1 + W_2 = 0,          (4.13)
    W_8 = 1 ⇒ W_6 + W_7 = 0,          (4.14)
    W_i a nonnegative integer for all i in {0, 1, ..., 8},
    z_i ∈ {0, 1} for all i in [4].

Constraint (4.1) forces the code to have size 20. Constraints (4.2) through (4.10) enforce the W_i <= A(8, 3, i) inequalities. Constraints (4.11) and (4.12) follow from Theorem 4.2 and Corollary 4.3, respectively. Moreover, constraint (4.13) encodes the statement that if W_0 = 1, then W_1 = W_2 = 0, as was determined in the proof of Theorem 4.2, and constraint (4.14) analogously encodes the statement that if W_8 = 1, then W_6 = W_7 = 0. The z_i are auxiliary binary variables used when the implications (4.13) and (4.14) are modeled in a MIP; implications are allowed in CP models and are implemented quite differently from the standard ways they are modeled in MIPs.

These constraints were then input into the constraint programming solver ILOG Solver 6.0, and 7,539 possible solutions were enumerated. Each of these solutions was then input into the (FD) IP and solved for feasibility as a weight distribution using the mixed IP optimizer ILOG CPLEX 9.0. The C++ code that solved both (WD) and every (FD) iteration is named CPDist.cpp and included in Appendix E.
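As a sanity check, (WD) is small enough to enumerate by brute force without a CP solver. The sketch below (plain C++, ours; the CP/ILOG version appears in Appendix E) recursively assigns W_0 through W_8 subject to (4.1) through (4.14); its count can be compared against the 7,539 solutions reported above.

#include <iostream>

namespace {
const int cap[9] = {1, 1, 4, 8, 14, 8, 4, 1, 1};  // (4.2)-(4.10): W_i <= A(8,3,i)
int W[9];
int solutions = 0;

void search(int i, int used) {
    if (used > 20) return;                            // partial sum already exceeds (4.1)
    if (i == 9) {
        if (used != 20) return;                       // (4.1)
        if (W[0] + W[1] + W[2] > 4) return;           // (4.11)
        if (W[6] + W[7] + W[8] > 4) return;           // (4.12)
        if (W[0] == 1 && W[1] + W[2] != 0) return;    // (4.13)
        if (W[8] == 1 && W[6] + W[7] != 0) return;    // (4.14)
        ++solutions;
        return;
    }
    for (W[i] = 0; W[i] <= cap[i]; ++W[i]) search(i + 1, used + W[i]);
}
}  // namespace

int main() {
    search(0, 0);
    std::cout << solutions << " candidate weight distributions" << std::endl;
}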
In the end, 111 of the distributions were proven feasible, whereas 7,428 of the distributions were proven infeasible. All of the feasible weight distributions found are included in Appendix C. All computation was done on a computer with a 2.19 GHz AMD processor and 1.0 GB of RAM.

To show how important the codeword-fixing constraints are in the (FD) model, we re-executed the IP without the two conditional constraints for all 111 feasible distributions and for 102 of the 7,428 infeasible distributions selected at random. The computation times with and without the conditional constraints for these 111 feasible and 102 infeasible distributions were compared.

The computation times with and without the conditional constraints for the feasible distributions are shown in Figure 4.1.

Figure 4.1: Computation Times for Feasible Distributions

The mean computation time with the constraints was 0.52 seconds, with a maximum of 2.28 seconds. The mean computation time with the constraints removed was 2.16 seconds, with a maximum of 40.49 seconds. Clearly the relative increase in times is significant; however, with only 111 distributions it did not make much difference in absolute terms.

The computation times with and without the conditional constraints for the randomly selected infeasible distributions are shown in Figure 4.2.

Figure 4.2: Computation Times for Infeasible Distributions

Here, it can be seen that including these constraints had a substantial impact on computation time. The mean computation time with the constraints was 12.62 seconds, with a maximum of 425.64 seconds, or approximately 7 minutes. The mean computation time with the constraints removed was 2,064 seconds, or about 34 minutes. The longest computation took 116,872 seconds, or over 32 hours. With over 7,000 distributions in which to determine infeasibility, it is apparent that determining infeasibility for all such distributions would be extremely impractical without applying the knowledge of permutations to fix codewords.

4.2 Structural Results

4.2.1 On the Size of Shortened Codes

It is easily determined from the table in Appendix C that every (8, 20, 3) binary code contains between 8 and 12 codewords of even weight (and thus between 8 and 12 codewords of odd weight). This knowledge can be used to prove that every (8, 20, 3) binary code has between 8 and 12 codewords with a specified value in each coordinate, as will be shown in Corollary 4.5.

Theorem 4.4. Let C be an (n, M, 3) binary code with n >= 3, and fix t = ⌊n/2⌋. For any i in [n] and x in {0, 1}, C can be transformed into an (n, M, 3) binary code D whose weight distribution (W_ℓ)_{ℓ=0}^n satisfies

    \sum_{ℓ=0}^{t} W_{2ℓ} = |{u ∈ C | the ith coordinate of u is x}|.

Proof. First consider the case where x = 0. Fix an arbitrary i in [n], and define D = {τ_i(u) | u ∈ C}. Recall from the definition of τ_i that if the ith coordinate of u is 0, then τ_i(u) has even parity, and if the ith coordinate of u is 1, then τ_i(u) has odd parity. In addition, by Theorem 3.13, τ_i is an automorphism of Q_n^2. Therefore, by Proposition 3.4, since C is an (n, M, 3) binary code, so is D. We then have

    \sum_{ℓ=0}^{t} W_{2ℓ} = |{v ∈ D | wt(v) is even}| = |{u ∈ C | the ith coordinate of u is 0}|.

In the case where x = 1, we instead define D = {τ_i(u) + µ_i^n | u ∈ C}. D is again an (n, M, 3) binary code, and in this case

    \sum_{ℓ=0}^{t} W_{2ℓ} = |{v ∈ D | wt(v) is even}| = |{u ∈ C | the ith coordinate of u is 1}|.

Corollary 4.5. Let C be an (8, 20, 3) binary code. Then, for any i in [8] and x in {0, 1},

    8 <= |{u ∈ C | the ith coordinate of u is x}| <= 12.

Proof. Assume, by way of contradiction, that for some i in [8] and x in {0, 1} we have |{u ∈ C | the ith coordinate of u is x}| = M', where M' <= 7 or M' >= 13. Then, by Theorem 4.4, C can be transformed into an (8, 20, 3) binary code D with M' codewords of even weight. Such a code cannot exist, by the computational results of Chapter 4.1.

Another way of expressing this result is that for any (8, 20, 3) binary code C and any i in [8] and x in {0, 1}, the size of the shortened code C_{i,x} must satisfy 8 <= |C_{i,x}| <= 12.
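Corollary 4.5 is easy to check directly for any concrete code given by integer representations. The sketch below is ours, not the thesis's, and adopts the convention that bit position i - 1 of the integer is coordinate i; under any fixed convention, the bounds must hold for all sixteen (i, x) pairs.

#include <cstdint>
#include <vector>

// Returns true if, for every coordinate i and value x, the number of
// codewords of C whose ith coordinate equals x lies between 8 and 12.
static bool coordinateCountsOk(const std::vector<uint8_t>& C) {
    for (int i = 0; i < 8; ++i) {
        int ones = 0;
        for (uint8_t u : C)
            if (u & (1u << i)) ++ones;        // ith coordinate equals 1
        int zeros = (int)C.size() - ones;     // ith coordinate equals 0
        if (ones < 8 || ones > 12 || zeros < 8 || zeros > 12) return false;
    }
    return true;
}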
4.2.2 Bounds on the Weight Distributions of Color Classes

The weight distributions listed in Appendix C provide many interesting results about what the color classes of a 13-coloring of Q_8^2 must look like. First and foremost, assume there exists a 13-coloring of Q_8^2 and denote its color classes C_j for j ∈ [13]. Moreover, let (W_i^j)_{i=0}^8 denote the weight distribution of C_j. It is clear that

    \sum_{j=1}^{13} W_i^j = (8 choose i)   for all i ∈ {0, 1, ..., 8}.

Since between 9 and 12 of these C_j must be of size 20, this greatly restricts the combinations of weight distributions that could form a possible 13-coloring of Q_8^2. In fact, the algorithm described in Chapter 4.1 can trivially be modified to generate the weight distributions of all (8, M, 3) codes for M < 20. Unfortunately, this led to too many cases to reasonably keep track of. However, useful bounds can still be found.

For example, recall from [1, Table I] that A(8, 3, 4) = 14. Therefore, any binary code with length 8 and minimum distance at least 3 can have a maximum of 14 codewords of weight 4. However, as can be seen in Appendix C, any optimal (8, 20, 3) binary code C with weight distribution (W_i)_{i=0}^8 must satisfy W_4 <= 10. Moreover, if W_4 >= 9, then W_0 + W_8 >= 1, so at least one of 0_8 and 1_8 must belong to C. Since the color classes are disjoint, and each of 0_8 and 1_8 can lie in only one class, if a 13-coloring of Q_8^2 exists, at most 2 of the minimum 9 color classes of size 20 can have at least 9 codewords of weight 4. Therefore, at least 7 color classes of size 20 must have no more than 8 codewords of weight 4.

Moreover, if W_3 = A(8, 3, 3) = 8, then C follows weight distribution 109 or 110, and therefore W_0 = 1. Hence there can be at most one color class with 8 codewords of weight 3. An analogous argument shows that there can be at most one color class with 8 codewords of weight 5. Bounds like these can be added to MIPs or other models for finding, or proving the non-existence of, a 13-coloring of Q_8^2. Attempts were made in this direction.
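One cheap consistency check when experimenting with candidate families of color classes is the counting identity above: across all 13 classes, the number of weight-i codewords must be exactly (8 choose i). The following sketch is ours, under the same integer-representation convention as before.

#include <cstdint>
#include <vector>

// Verify that 13 candidate color classes together contain exactly
// binom(8, i) vectors of each weight i, as any partition of {0,...,255} must.
static bool weightColumnsOk(const std::vector<std::vector<uint8_t> >& classes) {
    static const int binom8[9] = {1, 8, 28, 56, 70, 56, 28, 8, 1};
    int W[9] = {0, 0, 0, 0, 0, 0, 0, 0, 0};
    for (const std::vector<uint8_t>& C : classes)
        for (uint8_t u : C) {
            int w = 0;
            for (uint8_t t = u; t; t = (uint8_t)(t & (t - 1))) ++w;
            ++W[w];
        }
    for (int i = 0; i <= 8; ++i)
        if (W[i] != binom8[i]) return false;
    return true;
}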
4.3 The Number of Nonequivalent Codes

Consider the following five (8, 20) binary codes (given in terms of their integer representations), which can easily be verified to be (8, 20, 3) binary codes:

C1 = {0, 7, 25, 30, 42, 53, 75, 84, 97, 108, 114, 127, 140, 147, 169, 182, 194, 221, 231, 248},
C2 = {0, 7, 25, 30, 42, 53, 75, 84, 97, 108, 114, 127, 140, 147, 166, 169, 176, 194, 197, 216},
C3 = {0, 43, 53, 62, 79, 83, 92, 102, 121, 135, 154, 157, 172, 179, 201, 214, 229, 234, 240, 255},
C4 = {15, 26, 38, 51, 66, 73, 84, 101, 120, 127, 133, 144, 172, 185, 206, 211, 221, 224, 235, 246},
C5 = {15, 20, 26, 37, 40, 51, 67, 76, 118, 121, 134, 145, 171, 188, 201, 210, 223, 224, 238, 245}.

The distance distributions of these five codes are easily calculated and are shown in Table 4.1.

Table 4.1: Distance Distributions of (8, 20, 3) Binary Codes

Code | A_0 | A_1 | A_2 | A_3 | A_4 | A_5 | A_6 | A_7 | A_8
---|---|---|---|---|---|---|---|---|---
C1 | 1 | 0 | 0 | 5.8 | 7.4 | 3.4 | 1.4 | 0.8 | 0.2
C2 | 1 | 0 | 0 | 6.1 | 7.5 | 2.9 | 1.5 | 0.9 | 0.1
C3 | 1 | 0 | 0 | 5.6 | 8 | 3.2 | 1.2 | 0.8 | 0.2
C4 | 1 | 0 | 0 | 6 | 7.2 | 3 | 1.8 | 1 | 0
C5 | 1 | 0 | 0 | 5.6 | 7.6 | 3.2 | 1.6 | 0.8 | 0.2

As these distance distributions are all distinct, by the result of Lemma 1.38 these codes are pairwise nonequivalent. Therefore there are at least five nonequivalent (8, 20, 3) binary codes. In fact, five is the exact number, as determined computationally in [18] (Baicheva and Kolev first proved this result in [3]). Thus every (8, 20, 3) binary code is equivalent to one of these five codes.

Therefore every weight distribution listed in Appendix C is realized in a code (or codes) equivalent to at least one C_i for i in [5]. To determine which C_i, we calculated C_i + v for all i in [5] and all v in {0, 1}^8. This yielded 1,280 (not necessarily distinct) (8, 20, 3) binary codes, and the weight distribution of each one was calculated. Each of the 111 weight distributions appeared at least once among these codes, and for each of the 111, the C_i that generated it were also recorded in the appendix. Note that, as some weight distributions can be generated by more than one C_i, two (8, 20, 3) binary codes with the same weight distribution need not be equivalent.
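The distance distribution used in Table 4.1 is A_i = (1/M) · |{(u, v) ∈ C × C : d(u, v) = i}|, so each row sums to M = 20, and A_1 = A_2 = 0 certifies minimum distance 3. A short sketch of the calculation (ours; the normalization is inferred from the table, where A_0 = 1 for every code):

#include <cstdint>
#include <vector>

// Compute A_0, ..., A_8 for a length-8 code C given by integer
// representations: A_i = (#ordered pairs at distance i) / |C|.
static std::vector<double> distanceDistribution(const std::vector<uint8_t>& C) {
    std::vector<double> A(9, 0.0);
    for (uint8_t u : C)
        for (uint8_t v : C) {
            int w = 0;
            for (uint8_t t = (uint8_t)(u ^ v); t; t = (uint8_t)(t & (t - 1))) ++w;
            A[w] += 1.0;
        }
    for (int i = 0; i <= 8; ++i) A[i] /= (double)C.size();
    return A;
}

Running this on C1 through C5 should reproduce the rows of Table 4.1.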
4.4 Another Definition of Equivalence

The hypercube coloring problem leads to another definition of code equivalence.

Definition 4.6. Let C and D be (n, M, d) binary codes. C and D are Q_n^j-equivalent if C = Φ(D) for some Φ in Aut(Q_n^j). C is Q_n^j-unique if every (n, M, d) binary code is Q_n^j-equivalent to C.

Analogous to equivalence in the code-theoretical sense, Q_n^j-equivalence is an equivalence relation on the set of all binary codes. Since coordinate permutations and vector addition functions are automorphisms of Q_n, and hence of Q_n^j, for any choice of positive integers j and n, it is clear that if two codes are equivalent in the code-theoretical sense, then they are also Q_n^j-equivalent. In fact, for j = 1, these two definitions coincide.

While there are five nonequivalent (8, 20, 3) binary codes, Hougardy determined via an exhaustive search that there are only two that are Q_8^2-nonequivalent [10]. The two codes exhibited by Hougardy are C1 and C2 above. Therefore C3, C4, and C5 must each be Q_8^2-equivalent to exactly one of C1 and C2. This can indeed be verified. Let σ_3, σ_4, and σ_5 be the permutations in S_8 defined by (2 6)(5 8), (1 3 2)(4 7 8)(5 6), and (2 8)(3 4 7 6), respectively. We then have

    σ_3 ∘ τ_1(C3 + 255) = C2,
    σ_4 ∘ τ_1(C4 + 73) = C1,
    σ_5 ∘ τ_1(C5 + 67) = C1,

where the addition, of course, corresponds to addition of the binary vector with the appropriate integer representation. Since the functions used are all automorphisms of the square of the cube (as seen in Chapter 3.1), C2 and C3 are Q_8^2-equivalent. Moreover, so are C1, C4, and C5.

As stated previously, two (8, 20, 3) binary codes with the same weight distribution need not be equivalent from a code-theoretical perspective. In fact, they need not even be Q_8^2-equivalent. Consider, for example, weight distribution 14 in Appendix C. It can be realized in codes equivalent to both C3 and C5. C3 is Q_8^2-equivalent to C2, while C5 is Q_8^2-equivalent to C1. Therefore, these codes are not Q_8^2-equivalent.

Knowing the structure of equivalent codes invites further computational investigation into resolving whether or not a 13-coloring of Q_8^2 exists. We have looked at this only briefly. One can produce a large number of randomly chosen (8, 20, 3) codes by first choosing at random one of the C_i for i in [5], then choosing at random a v in [255], and then choosing at random a permutation from S_8 (a sketch of this sampling procedure follows below). The set of these codes is then put into a MIP similar to the IS method of the column generation algorithm described in Chapter 2.4.3 to see if they can produce a 13-coloring. A column generation approach could be used to increase the size of the set generated by this random method. Unfortunately, even if no 13-coloring is found, this does not prove that one does not exist. On the other hand, the only algorithm known to find even a 14-coloring of Q_8^2 was the modified Petford-Welsh randomized algorithm. Therefore, if a 13-coloring exists, it may very well be found by a random method.
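The sampling procedure just described can be sketched as follows (ours, not the thesis's implementation; translating by v and then permuting coordinates yields the same family of codes as applying the operations in the other order, since both are automorphisms):

#include <algorithm>
#include <cstdint>
#include <random>
#include <vector>

// Draw one random (8, 20, 3) code: pick a representative C_i, a translate
// v in [255], and a coordinate permutation sigma in S_8.
static std::vector<uint8_t> randomCode(
        const std::vector<std::vector<uint8_t> >& reps,  // C_1, ..., C_5
        std::mt19937& rng) {
    const std::vector<uint8_t>& base =
        reps[std::uniform_int_distribution<int>(0, (int)reps.size() - 1)(rng)];
    uint8_t v = (uint8_t)std::uniform_int_distribution<int>(1, 255)(rng);
    int perm[8] = {0, 1, 2, 3, 4, 5, 6, 7};
    std::shuffle(perm, perm + 8, rng);
    std::vector<uint8_t> out;
    for (uint8_t u : base) {
        uint8_t t = (uint8_t)(u ^ v);    // the translation C_i + v
        uint8_t p = 0;
        for (int b = 0; b < 8; ++b)
            if (t & (1u << b)) p |= (uint8_t)(1u << perm[b]);
        out.push_back(p);                // then permute the coordinates
    }
    return out;
}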
Chapter 5

Conclusion

We introduced the study of graph theory and, specifically, vertex coloring problems. The decision problem of determining whether or not a general graph is m-colorable is NP-complete for m >= 3. Our main focus was on determining the chromatic number of the square of the hypercube, Q_k^2. Our computational effort was focused on the smallest value of k for which this is currently unknown, k = 8. Determining the chromatic number of Q_8^2, based on current knowledge, requires either exhibiting a 13-coloring or proving that one cannot exist.

While our computational approaches did not determine this value, they did serve to direct our attention to the study of binary codes. The knowledge of binary codes and their structure further strengthened our understanding of the combinatorial nature of this class of graphs. We then directed our computational efforts toward determining the structure of optimal binary codes with length 8 and minimum distance at least 3, of which there must be at least 9 representing the color classes of a 13-coloring of Q_8^2.

Future work may focus on the following:

- Investigating further the link between weight and distance distributions of binary codes,
- Applying these results to develop new adaptations of graph coloring algorithms to determine the chromatic number of Q_8^2,
- Generalizing our research to determine values of, or narrow bounds on, other unknown values of the chromatic number of Q_k^2.

Bibliography

[1] E. Agrell, A. Vardy, and K. Zeger. Upper bounds for constant-weight codes. IEEE Trans. Inform. Theory, 46(7):2373–2395, 2000.
[2] E. Agrell, A. Vardy, and K. Zeger. A table of upper bounds for binary codes. IEEE Trans. Inform. Theory, 47(7):3004–3006, 2001.
[3] T. Baicheva and E. Kolev. Binary codes of length eight, minimum distance three and twenty codewords. In Proc. II Int. Workshop Optimal Codes and Related Topics, pages 5–8, Sozopol, Bulgaria, 1998.
[4] M. R. Best and A. E. Brouwer. The triply shortened binary Hamming code is optimal. Discrete Math., 17(3):235–245, 1977.
[5] M. R. Best, A. E. Brouwer, F. J. MacWilliams, A. M. Odlyzko, and N. J. A. Sloane. Bounds for binary codes of length less than 25. IEEE Trans. Inform. Theory, IT-24(1):81–93, 1978.
[6] D. Brélaz. New methods to color the vertices of a graph. Comm. ACM, 22(4):251–256, 1979.
[7] P. Delsarte. Bounds for unrestricted codes, by linear programming. Philips Res. Rep., 27:272–289, 1972.
[8] P. Delsarte. An algebraic approach to the association schemes of coding theory. Philips Res. Rep. Suppl., (10):vi+97, 1973.
[9] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Co., San Francisco, 1979.
[10] S. Hougardy. Personal email communication, 2008.
[11] T. R. Jensen and B. Toft. Graph Coloring Problems. Wiley-Interscience Series in Discrete Mathematics and Optimization. John Wiley & Sons, New York, 1995.
[12] N. Linial, R. Meshulam, and M. Tarsi. Matroidal bijections between graphs. J. Combin. Theory Ser. B, 45(1):31–44, 1988.
[13] F. J. MacWilliams and N. J. A. Sloane. The Theory of Error-Correcting Codes. II. North-Holland Mathematical Library, Vol. 16. North-Holland, Amsterdam, 1977.
[14] S. McKinley. The Hamming codes and Delsarte's linear programming bound. Master's thesis, Portland State University, May 2003.
[15] A. Mehrotra and M. A. Trick. A column generation approach for graph coloring. INFORMS Journal on Computing, 8:344–354, 1996.
[16] Z. Miller and M. Perkel. A stability theorem for the automorphism groups of powers of the n-cube. Australas. J. Combin., 10:17–28, 1994.
[17] P. R. J. Östergård. On a hypercube coloring problem. J. Combin. Theory Ser. A, 108(2):199–204, 2004.
[18] P. R. J. Östergård, T. Baicheva, and E. Kolev. Optimal binary one-error-correcting codes of length 10 have 72 codewords. IEEE Trans. Inform. Theory, 45(4):1229–1231, 1999.
[19] A. D. Petford and D. J. A. Welsh. A randomised 3-colouring algorithm. Discrete Math., 74(1–2):253–261, 1989.
[20] R. M. Roth. Introduction to Coding Theory. Cambridge University Press, New York, second edition, 2006.
[21] P.-J. Wan. Near-optimal conflict-free channel set assignments for an optical cluster-based hypercube network. J. Comb. Optim., 1(2):179–186, 1997.
[22] D. B. West. Introduction to Graph Theory. Prentice Hall, Upper Saddle River, New Jersey, second edition, 2001.
[23] G. M. Ziegler. Coloring Hamming graphs, optimal binary codes, and the 0/1-Borsuk problem in low dimensions. In Computational Discrete Mathematics, volume 2122 of Lecture Notes in Comput. Sci., pages 159–171. Springer, Berlin, 2001.

Appendices

Appendix A

Current Status of Coloring the Square of the Cube

Table A.1: Current Status of Coloring the Square of the Cube

k | n(Q_k^2) | e(Q_k^2) | A(k, 3) | ω(Q_k^2) | n(Q_k^2)/A(k, 3) | χ(Q_k^2)
---|---|---|---|---|---|---
1 | 2 | 1 | 1^a | 2 | 2 | 2
2 | 4 | 6 | 1^a | 4 | 4 | 4
3 | 8 | 24 | 2^a | 4 | 4 | 4
4 | 16 | 80 | 2^a | 5 | 8 | 8^e
5 | 32 | 240 | 4^a | 6 | 8 | 8^e
6 | 64 | 672 | 8^a | 7 | 8 | 8^e
7 | 128 | 1,792 | 16^a | 8 | 8 | 8^e
8 | 256 | 4,608 | 20^a | 9 | 12.8 | 13^d – 14^c
9 | 512 | 11,520 | 40^a | 10 | 12.8 | 13^d – 16^g
10 | 1,024 | 28,160 | 72^a | 11 | 14.2 | 15^d – 16^g
11 | 2,048 | 67,584 | 144^a | 12 | 14.2 | 15^d – 16^g
12 | 4,096 | 159,744 | 256^a | 13 | 16 | 16^e
13 | 8,192 | 372,736 | 512^a | 14 | 16 | 16^e
14 | 16,384 | 860,160 | 1,024^a | 15 | 16 | 16^e
15 | 32,768 | 1,966,080 | 2,048^a | 16 | 16 | 16^e

Key: a [2, Table I]. b Theorem 1.61. c Exhibited 14-colorings. d Proposition 3.16. e Theorem 3.24. f Theorem 3.26. g Theorem 3.21.
Table A.1 – continued

k | n(Q_k^2) | e(Q_k^2) | A(k, 3)
---|---|---|---
16 | 65,536 | 4,456,448 | 2,720^a – 3,276^a
17 | 131,072 | 10,027,008 | 5,312^a – 6,552^a
18 | 262,144 | 22,413,312 | 10,496^a – 13,104^a
19 | 524,288 | 49,807,360 | 20,480^a – 26,208^a
20 | 1,048,576 | 110,100,480 | 36,864^a – 43,689^a
21 | 2,097,152 | 242,221,056 | 73,728^a – 87,378^a
22 | 4,194,304 | 530,579,456 | 147,456^a – 173,491^a
23 | 8,388,608 | 1,157,627,904 | 294,912^a – 344,308^a
24 | 16,777,216 | 2,516,582,400 | 524,288^a – 599,185^a
25 | 33,554,432 | 5,452,595,200 | 1,048,576^a – 1,198,370^a
26 | 67,108,864 | 11,777,605,632 | 2,097,152^a – 2,396,740^a
27 | 134,217,728 | 25,367,150,592 | 4,194,304^a – 4,793,480^a
28 | 268,435,456 | 54,492,397,568 | 8,388,608^b
29 | 536,870,912 | 116,769,423,360 | 16,777,216^b
30 | 1,073,741,824 | 249,644,974,080 | 33,554,432^b
31 | 2,147,483,648 | 532,575,944,704 | 67,108,864^b

k | ω(Q_k^2) | n(Q_k^2)/A(k, 3) | χ(Q_k^2)
---|---|---|---
16 | 17 | 20.004884 – 24.094 | 21^d – 28^g
17 | 18 | 20.004884 – 24.675 | 21^d – 28^f
18 | 19 | 20.004884 – 24.976 | 21^d – 32^g
19 | 20 | 20.004884 – 25.6 | 21^d – 32^g
20 | 21 | 24.00091556 – 28.4 | 25^d – 32^g
21 | 22 | 24.00091556 – 28.4 | 25^d – 32^g
22 | 23 | 24.17591691 – 28.4 | 25^d – 32^g
23 | 24 | 24.36367438 – 28.4 | 25^d – 32^g
24 | 25 | 28.00006008 – 32 | 29^d – 32^g
25 | 26 | 28.00006008 – 32 | 29^d – 32^g
26 | 27 | 28.00006008 – 32 | 29^d – 32^g
27 | 28 | 28.00006008 – 32 | 29^d – 32^g
28 | 29 | 32 | 32^e
29 | 30 | 32 | 32^e
30 | 31 | 32 | 32^e
31 | 32 | 32 | 32^e

Appendix B

A 14-coloring of the Square of the 8-Cube

Table B.1: A 14-coloring of Q_8^2

V1 | V2 | V3 | V4 | V5 | V6 | V7 | V8 | V9 | V10 | V11 | V12 | V13 | V14
---|---|---|---|---|---|---|---|---|---|---|---|---|---
13 | 4 | 2 | 22 | 12 | 3 | 16 | 5 | 14 | 6 | 15 | 0 | 7 | 1
19 | 24 | 17 | 27 | 18 | 8 | 29 | 11 | 23 | 9 | 21 | 41 | 10 | 20
34 | 31 | 30 | 48 | 35 | 37 | 39 | 28 | 36 | 32 | 40 | 46 | 25 | 26
62 | 42 | 47 | 61 | 57 | 59 | 44 | 50 | 43 | 55 | 54 | 51 | 33 | 38
75 | 53 | 56 | 78 | 65 | 60 | 79 | 63 | 49 | 58 | 70 | 71 | 52 | 45
80 | 77 | 69 | 88 | 94 | 68 | 83 | 64 | 67 | 74 | 73 | 81 | 76 | 66
103 | 82 | 91 | 98 | 106 | 87 | 96 | 86 | 72 | 95 | 90 | 92 | 85 | 93
104 | 97 | 99 | 105 | 116 | 89 | 117 | 110 | 84 | 113 | 101 | 100 | 107 | 115
125 | 102 | 108 | 119 | 139 | 111 | 130 | 121 | 109 | 124 | 127 | 122 | 126 | 120
142 | 123 | 118 | 132 | 149 | 112 | 133 | 134 | 114 | 144 | 138 | 131 | 128 | 135
152 | 147 | 137 | 143 | 166 | 141 | 158 | 136 | 129 | 155 | 156 | 159 | 150 | 140
171 | 160 | 151 | 145 | 176 | 148 | 169 | 161 | 146 | 174 | 173 | 165 | 172 | 153
183 | 175 | 164 | 163 | 191 | 154 | 179 | 205 | 157 | 195 | 187 | 188 | 186 | 170
194 | 182 | 178 | 168 | 196 | 162 | 180 | 209 | 167 | 213 | 192 | 200 | 201 | 181
197 | 185 | 189 | 190 | 216 | 177 | 204 | 218 | 184 | 228 | 215 | 214 | 211 | 203
223 | 199 | 207 | 210 | 237 | 193 | 217 | 235 | 198 | 233 | 227 | 226 | 231 | 208
244 | 202 | 220 | 221 | 243 | 206 | 230 | 247 | 219 | 242 | 238 | 239 | 240 | 222
250 | 212 | 234 | 229 | 232 | 255 | 252 | 224 | 248 | 249 | 225 | 236 | 241 | 251
246 | 245 | 253 | 254 | | | | | | | | | |

The V_i for i in [14] given are the color classes of the 14-coloring of Q_8^2 determined by the Hougardy.cpp code of Appendix D, with the vectors given by their integer representations.
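A minimal standalone validity check for a table like B.1 (our sketch; Appendix D's Hougardy.cpp performs an equivalent check as part of its run): every class must be independent in Q_8^2, that is, any two of its vectors must be at Hamming distance at least 3, and the classes together must cover all 256 vectors exactly once.

#include <cstdint>
#include <vector>

// Check that the given classes form a proper coloring of Q_8^2.
static bool isValidColoring(const std::vector<std::vector<uint8_t> >& classes) {
    int seen[256] = {0};
    for (const std::vector<uint8_t>& V : classes) {
        for (size_t a = 0; a < V.size(); ++a) {
            ++seen[V[a]];
            for (size_t b = a + 1; b < V.size(); ++b) {
                int w = 0;
                for (uint8_t t = (uint8_t)(V[a] ^ V[b]); t; t = (uint8_t)(t & (t - 1))) ++w;
                if (w <= 2) return false;  // two vectors of one class too close
            }
        }
    }
    for (int v = 0; v < 256; ++v)
        if (seen[v] != 1) return false;    // not a partition of the vertex set
    return true;
}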
Appendix C

Weight Distributions of (8, 20, 3) Binary Codes

Table C.1: Weight Distributions of (8, 20, 3) Binary Codes

Distribution | W0 | W1 | W2 | W3 | W4 | W5 | W6 | W7 | W8 | Realized in code(s) equivalent to
---|---|---|---|---|---|---|---|---|---|---
0 | 0 | 0 | 1 | 5 | 5 | 5 | 3 | 1 | 0 | C2
1 | 0 | 0 | 2 | 4 | 6 | 4 | 3 | 1 | 0 | C2
2 | 0 | 0 | 2 | 5 | 6 | 4 | 2 | 1 | 0 | C1, C4
3 | 0 | 0 | 2 | 6 | 4 | 4 | 3 | 1 | 0 | C2
4 | 0 | 0 | 2 | 6 | 5 | 3 | 3 | 1 | 0 | C1, C4
5 | 0 | 0 | 2 | 7 | 3 | 4 | 3 | 1 | 0 | C3
6 | 0 | 0 | 2 | 7 | 4 | 4 | 2 | 1 | 0 | C5
7 | 0 | 0 | 3 | 3 | 4 | 6 | 4 | 0 | 0 | C2
8 | 0 | 0 | 3 | 3 | 6 | 4 | 3 | 1 | 0 | C3
9 | 0 | 0 | 3 | 4 | 4 | 6 | 3 | 0 | 0 | C1
10 | 0 | 0 | 3 | 4 | 6 | 4 | 3 | 0 | 0 | C5
11 | 0 | 0 | 3 | 5 | 3 | 5 | 4 | 0 | 0 | C4
12 | 0 | 0 | 3 | 5 | 4 | 4 | 3 | 1 | 0 | C1, C4
13 | 0 | 0 | 3 | 5 | 5 | 3 | 3 | 1 | 0 | C2
14 | 0 | 0 | 3 | 5 | 6 | 2 | 3 | 1 | 0 | C3, C5
15 | 0 | 0 | 3 | 6 | 3 | 4 | 3 | 1 | 0 | C2
16 | 0 | 0 | 3 | 6 | 4 | 4 | 3 | 0 | 0 | C1
17 | 0 | 0 | 3 | 6 | 5 | 2 | 3 | 1 | 0 | C2
18 | 0 | 0 | 3 | 6 | 5 | 3 | 2 | 1 | 0 | C1, C4
19 | 0 | 0 | 3 | 7 | 2 | 4 | 3 | 1 | 0 | C5
20 | 0 | 0 | 3 | 7 | 3 | 3 | 3 | 1 | 0 | C2
21 | 0 | 0 | 3 | 7 | 3 | 4 | 2 | 1 | 0 | C3
22 | 0 | 0 | 3 | 7 | 4 | 2 | 3 | 1 | 0 | C1, C4
23 | 0 | 0 | 3 | 7 | 4 | 3 | 2 | 1 | 0 | C2
24 | 0 | 0 | 4 | 2 | 4 | 6 | 4 | 0 | 0 | C3
25 | 0 | 0 | 4 | 4 | 2 | 6 | 4 | 0 | 0 | C1
26 | 0 | 0 | 4 | 4 | 3 | 5 | 4 | 0 | 0 | C2
27 | 0 | 0 | 4 | 4 | 4 | 4 | 4 | 0 | 0 | C3, C5
28 | 0 | 0 | 4 | 5 | 3 | 4 | 4 | 0 | 0 | C2
29 | 0 | 0 | 4 | 5 | 3 | 5 | 3 | 0 | 0 | C4
30 | 0 | 0 | 4 | 6 | 2 | 4 | 4 | 0 | 0 | C1
31 | 0 | 0 | 4 | 6 | 3 | 3 | 3 | 1 | 0 | C4
32 | 0 | 0 | 4 | 6 | 4 | 2 | 3 | 1 | 0 | C2
33 | 0 | 0 | 4 | 6 | 4 | 2 | 4 | 0 | 0 | C3
34 | 0 | 0 | 4 | 6 | 4 | 3 | 3 | 0 | 0 | C2
35 | 0 | 1 | 0 | 0 | 10 | 8 | 0 | 0 | 1 | C2
36 | 0 | 1 | 0 | 5 | 5 | 5 | 3 | 1 | 0 | C3
37 | 0 | 1 | 1 | 3 | 7 | 7 | 0 | 0 | 1 | C2
38 | 0 | 1 | 1 | 3 | 8 | 6 | 0 | 0 | 1 | C1, C4
39 | 0 | 1 | 1 | 4 | 6 | 5 | 2 | 1 | 0 | C2
40 | 0 | 1 | 1 | 4 | 6 | 7 | 0 | 0 | 1 | C3
41 | 0 | 1 | 1 | 4 | 7 | 4 | 2 | 1 | 0 | C1, C4
42 | 0 | 1 | 1 | 5 | 5 | 5 | 2 | 1 | 0 | C3
43 | 0 | 1 | 1 | 5 | 6 | 5 | 1 | 1 | 0 | C5
44 | 0 | 1 | 2 | 2 | 7 | 7 | 0 | 0 | 1 | C1, C4
45 | 0 | 1 | 2 | 2 | 8 | 6 | 0 | 0 | 1 | C2
46 | 0 | 1 | 2 | 2 | 9 | 5 | 0 | 0 | 1 | C3, C5
47 | 0 | 1 | 2 | 3 | 4 | 7 | 3 | 0 | 0 | C2
48 | 0 | 1 | 2 | 3 | 5 | 6 | 3 | 0 | 0 | C1, C4
49 | 0 | 1 | 2 | 3 | 6 | 4 | 3 | 1 | 0 | C2
50 | 0 | 1 | 2 | 3 | 6 | 5 | 2 | 1 | 0 | C1, C4
51 | 0 | 1 | 2 | 3 | 6 | 7 | 0 | 0 | 1 | C2
52 | 0 | 1 | 2 | 3 | 7 | 4 | 2 | 1 | 0 | C2
53 | 0 | 1 | 2 | 3 | 8 | 3 | 2 | 1 | 0 | C3, C5
54 | 0 | 1 | 2 | 3 | 8 | 5 | 0 | 0 | 1 | C2
55 | 0 | 1 | 2 | 4 | 3 | 7 | 3 | 0 | 0 | C3
56 | 0 | 1 | 2 | 4 | 4 | 7 | 2 | 0 | 0 | C5
57 | 0 | 1 | 2 | 4 | 5 | 4 | 3 | 1 | 0 | C1, C4
58 | 0 | 1 | 2 | 4 | 5 | 5 | 2 | 1 | 0 | C2
59 | 0 | 1 | 2 | 4 | 5 | 7 | 0 | 0 | 1 | C5
60 | 0 | 1 | 2 | 4 | 6 | 5 | 2 | 0 | 0 | C1, C4
61 | 0 | 1 | 2 | 4 | 7 | 3 | 2 | 1 | 0 | C2
62 | 0 | 1 | 2 | 4 | 7 | 4 | 1 | 1 | 0 | C1, C4
63 | 0 | 1 | 2 | 4 | 7 | 5 | 0 | 0 | 1 | C1, C4
64 | 0 | 1 | 2 | 5 | 4 | 5 | 2 | 1 | 0 | C5
65 | 0 | 1 | 2 | 5 | 5 | 4 | 2 | 1 | 0 | C2
66 | 0 | 1 | 2 | 5 | 5 | 5 | 1 | 1 | 0 | C3
67 | 0 | 1 | 2 | 5 | 6 | 3 | 2 | 1 | 0 | C1, C4
68 | 0 | 1 | 2 | 5 | 6 | 4 | 1 | 1 | 0 | C2
69 | 0 | 1 | 3 | 2 | 4 | 6 | 4 | 0 | 0 | C2
70 | 0 | 1 | 3 | 2 | 4 | 7 | 3 | 0 | 0 | C1, C4
71 | 0 | 1 | 3 | 2 | 5 | 6 | 3 | 0 | 0 | C2
72 | 0 | 1 | 3 | 2 | 6 | 5 | 3 | 0 | 0 | C3, C5
73 | 0 | 1 | 3 | 3 | 3 | 6 | 4 | 0 | 0 | C4
74 | 0 | 1 | 3 | 3 | 3 | 7 | 3 | 0 | 0 | C2
75 | 0 | 1 | 3 | 3 | 5 | 5 | 3 | 0 | 0 | C2
76 | 0 | 1 | 3 | 3 | 5 | 6 | 2 | 0 | 0 | C1, C4
77 | 0 | 1 | 3 | 3 | 6 | 3 | 3 | 1 | 0 | C5
78 | 0 | 1 | 3 | 3 | 6 | 6 | 0 | 0 | 1 | C4
79 | 0 | 1 | 3 | 4 | 2 | 7 | 3 | 0 | 0 | C5
80 | 0 | 1 | 3 | 4 | 3 | 6 | 3 | 0 | 0 | C2
81 | 0 | 1 | 3 | 4 | 3 | 7 | 2 | 0 | 0 | C3
82 | 0 | 1 | 3 | 4 | 4 | 5 | 3 | 0 | 0 | C1, C4
83 | 0 | 1 | 3 | 4 | 4 | 6 | 2 | 0 | 0 | C2
84 | 0 | 1 | 3 | 4 | 5 | 4 | 2 | 1 | 0 | C1, C4
85 | 0 | 1 | 3 | 4 | 6 | 3 | 2 | 1 | 0 | C2
86 | 0 | 1 | 3 | 4 | 6 | 3 | 3 | 0 | 0 | C3
87 | 0 | 1 | 3 | 4 | 6 | 4 | 2 | 0 | 0 | C2
88 | 0 | 1 | 3 | 5 | 5 | 5 | 0 | 0 | 1 | C2
89 | 0 | 1 | 3 | 5 | 5 | 5 | 0 | 1 | 0 | C3
90 | 0 | 1 | 3 | 5 | 5 | 5 | 1 | 0 | 0 | C2
91 | 1 | 0 | 0 | 0 | 10 | 8 | 0 | 0 | 1 | C3
92 | 1 | 0 | 0 | 4 | 8 | 6 | 0 | 0 | 1 | C1
93 | 1 | 0 | 0 | 4 | 9 | 5 | 0 | 0 | 1 | C2
94 | 1 | 0 | 0 | 4 | 10 | 4 | 0 | 0 | 1 | C3, C5
95 | 1 | 0 | 0 | 5 | 5 | 5 | 3 | 1 | 0 | C2
96 | 1 | 0 | 0 | 5 | 7 | 4 | 2 | 1 | 0 | C1, C4
97 | 1 | 0 | 0 | 5 | 8 | 3 | 2 | 1 | 0 | C2
98 | 1 | 0 | 0 | 5 | 9 | 2 | 2 | 1 | 0 | C3, C5
99 | 1 | 0 | 0 | 5 | 9 | 4 | 0 | 0 | 1 | C2
100 | 1 | 0 | 0 | 6 | 6 | 3 | 3 | 1 | 0 | C4
101 | 1 | 0 | 0 | 6 | 8 | 2 | 2 | 1 | 0 | C2
102 | 1 | 0 | 0 | 6 | 8 | 3 | 1 | 1 | 0 | C1, C4
103 | 1 | 0 | 0 | 6 | 8 | 4 | 0 | 0 | 1 | C1
104 | 1 | 0 | 0 | 7 | 5 | 4 | 2 | 1 | 0 | C5
105 | 1 | 0 | 0 | 7 | 6 | 3 | 2 | 1 | 0 | C2
106 | 1 | 0 | 0 | 7 | 6 | 4 | 1 | 1 | 0 | C3
107 | 1 | 0 | 0 | 7 | 7 | 2 | 2 | 1 | 0 | C1, C4
108 | 1 | 0 | 0 | 7 | 7 | 3 | 1 | 1 | 0 | C2
109 | 1 | 0 | 0 | 8 | 10 | 0 | 0 | 0 | 1 | C3
110 | 1 | 0 | 0 | 8 | 10 | 0 | 0 | 1 | 0 | C2

Appendix D

Hougardy.cpp
#include <ilcplex/ilocplex.h>
#include <cmath>
#include <string>

ILOSTLBEGIN

typedef IloArray<IloIntArray> IloIntMatrix;

IloInt lastTime = 0;
IloInt timePeriod = 5;
int maxTotalColored = 0;

// Hamming distance between two bit strings of equal length.
int bitStringDiff(IloIntArray string1, IloIntArray string2) {
    int count = 0;
    for (int i = 0; i < string1.getSize(); i++) {
        if (string1[i] != string2[i]) {
            count++;
        }
    }
    return count;
}

int main()
{
    IloEnv env;

    try {
        IloInt nBits = 8;
        IloInt nVertices = int(pow(2.0, (int)nBits));
        IloInt nColorClasses =
            int(pow(2.0, int(ceil(log(double(nBits + 1)) / log(2.0)))));
        cout << "number of bits = " << nBits << endl;

        // bitStrings[v] is the length-8 binary representation of vertex v.
        IloIntMatrix bitStrings(env, nVertices);
        for (int v = 0; v < nVertices; v++) {
            bitStrings[v] = IloIntArray(env, nBits);
            int remainder = v;
            for (int b = nBits - 1; b >= 0; b--) {
                if (remainder >= pow(2.0, b)) {
                    bitStrings[v][nBits - 1 - b] = 1;
                    remainder -= int(pow(2.0, b));
                } else {
                    bitStrings[v][nBits - 1 - b] = 0;
                }
            }
        }

        // neighbors[v] lists the vertices within Hamming distance 2 of v,
        // i.e. the neighbours of v in Q_8^2.
        IloIntMatrix neighbors(env, nVertices);
        for (int v = 0; v < nVertices; v++) {
            neighbors[v] = IloIntArray(env);
            IloIntArray bitStringOfv = bitStrings[v];
            for (int n = 0; n < nVertices; n++) {
                if (n != v) {
                    if (bitStringDiff(bitStringOfv, bitStrings[n]) <= 2) {
                        neighbors[v].add(n);
                    }
                }
            }
        }

        // prob_val[m] = c^m: the (unnormalized) probability weight of picking
        // a color that appears m times among a vertex's neighbours. The array
        // is sized past the degree of Q_8^2 (8 + 28 = 36) so that every
        // possible count indexes safely.
        double c = 0.006;
        IloInt maxDegree = nBits + nBits * (nBits - 1) / 2;
        IloNumArray prob_val(env, maxDegree + 1);
        prob_val[0] = 1.0;
        for (int i = 1; i < prob_val.getSize(); i++) {
            prob_val[i] = prob_val[i - 1] * c;
        }

        IloInt nColors = 14;

        IloRandom random(env);

        // Start from a uniformly random color assignment.
        IloIntArray coloring(env, nVertices);
        for (int i = 0; i < nVertices; i++) {
            coloring[i] = random.getInt(nColors);
        }

        // Petford-Welsh sweeps: recolor conflicting vertices at random,
        // strongly favouring colors that are rare in the neighbourhood.
        IloIntArray num_col(env, nColors);
        IloInt num_bad;
        IloInt num_trials = 0;
        do {
            num_bad = nVertices;
            for (int i = 0; i < nVertices; i++) {
                for (int j = 0; j < nColors; j++) {
                    num_col[j] = 0;
                }
                for (int ni = 0; ni < neighbors[i].getSize(); ni++) {
                    if (neighbors[i][ni] < nVertices) {
                        num_col[coloring[neighbors[i][ni]]]++;
                    }
                }
                if (num_col[coloring[i]] == 0) {
                    num_bad--;  // vertex i is not in conflict
                } else {
                    // Sample a new color with probability proportional to c^m.
                    double sum = 0.0;
                    for (int col = 0; col < nColors; col++) {
                        sum += prob_val[num_col[col]];
                    }
                    double r = random.getFloat() * sum;
                    double temp = 0.0;
                    int index = -1;
                    do {
                        index++;
                        temp += prob_val[num_col[index]];
                    } while (temp < r);
                    coloring[i] = index;
                }
            }
            num_trials++;
        } while (num_bad > 0);

        // Report the color classes.
        IloIntMatrix colorClasses(env, nColors);
        for (int col = 0; col < nColors; col++) {
            colorClasses[col] = IloIntArray(env);
            cout << "color " << col << ": ";
            for (int v = 0; v < nVertices; v++) {
                if (coloring[v] == col) {
                    cout << v << " ";
                    colorClasses[col].add(v);
                }
            }
            cout << endl;
        }

        // Double-check: every class must be independent in Q_8^2.
        bool goodColoring = true;
        for (int col = 0; col < nColors; col++) {
            for (int v1 = 0; v1 < colorClasses[col].getSize(); v1++) {
                for (int v2 = v1 + 1; v2 < colorClasses[col].getSize(); v2++) {
                    if (bitStringDiff(bitStrings[colorClasses[col][v1]],
                                      bitStrings[colorClasses[col][v2]]) <= 2) {
                        goodColoring = false;
                    }
                }
            }
        }
        if (goodColoring) {
            cout << "assignments are a good coloring" << endl;
        } else {
            cout << "assignments are NOT a good coloring" << endl;
        }
    }
    catch (IloException& ex) {
        cout << "Error: " << ex << endl;
    }
    env.end();
    return 0;
}

Appendix E

CPDist.cpp

#include <ilcplex/ilocplex.h>
#include <ilsolver/ilosolverint.h>
#include <cmath>
#include <string>
#include <fstream>
#include <time.h>
#include <stdlib.h>
#include <stdio.h>

ILOSTLBEGIN

typedef IloArray<IloIntArray> IloIntMatrix;
typedef IloArray<IloIntVarArray> IloIntVarMatrix;

// Hamming distance between two bit strings of equal length.
int bitStringDiff(IloIntArray string1, IloIntArray string2) {
    int count = 0;
    for (int i = 0; i < string1.getSize(); i++) {
        if (string1[i] != string2[i]) {
            count++;
        }
    }
    return count;
}

int main() {
    IloEnv env;

    int nBits = 8;
    int nVertices = int(pow(2.0, nBits));
    int Dist[10000][9];      // all (WD)-feasible candidate distributions
    int feasDist[10000][9];  // those also feasible for the (FD) IP
    int feas[10000];
    int solutionCounter = 0;
    int setSize = 20;

    // Phase 1: enumerate all solutions of the (WD) system with ILOG Solver.
    try {
        IloModel CPmodel(env);

        IloIntVarArray x(env, nBits + 1, 0, 14);  // x[i] plays the role of W_i

        CPmodel.add(IloSum(x) == setSize);        // (4.1)

        CPmodel.add(x[0] <= 1);                   // (4.2)-(4.10): W_i <= A(8,3,i)
        CPmodel.add(x[1] <= 1);
        CPmodel.add(x[2] <= 4);
        CPmodel.add(x[3] <= 8);
        CPmodel.add(x[4] <= 14);
        CPmodel.add(x[5] <= 8);
        CPmodel.add(x[6] <= 4);
        CPmodel.add(x[7] <= 1);
        CPmodel.add(x[8] <= 1);

        CPmodel.add(x[0] + x[1] + x[2] <= 4);     // (4.11)
        CPmodel.add(x[6] + x[7] + x[8] <= 4);     // (4.12)

        // (4.13) and (4.14), posted directly as CP implications.
        CPmodel.add(IloIfThen(env, x[0] == 1, x[1] + x[2] == 0));
        CPmodel.add(IloIfThen(env, x[8] == 1, x[6] + x[7] == 0));

        IloSolver solver(CPmodel);
        solver.startNewSearch();
        while (solver.next()) {
            solver.out() << "Dist[" << solutionCounter << "] = (";
            for (int i = 0; i < nBits; i++) {
                Dist[solutionCounter][i] = solver.getValue(x[i]);
                solver.out() << solver.getValue(x[i]) << ", ";
            }
            Dist[solutionCounter][nBits] = solver.getValue(x[nBits]);
            solver.out() << solver.getValue(x[nBits]) << ")";
            solver.out() << endl;
            solutionCounter++;
        }
        solver.endSearch();
        cout << endl << endl << endl << endl;
    }
    catch (IloException& e) {
        cerr << "Concert exception caught: " << e << endl;
    }
    catch (...) {
        cerr << "Unknown exception caught" << endl;
    }

    // Phase 2: test each candidate distribution for feasibility via the (FD) IP.
    for (int i = 0; i < solutionCounter; i++) {
        try {
            clock_t start, finish;
            long double duration;
            start = clock();

            IloModel MIPmodel(env);
            IloIntArray xDist(env, nBits + 1);
            for (int b = 0; b < nBits + 1; b++) {
                xDist[b] = Dist[i][b];
            }

            // Binary representation of every vertex.
            IloIntMatrix bitStrings(env, nVertices);
            for (int v = 0; v < nVertices; v++) {
                bitStrings[v] = IloIntArray(env, nBits);
                int remainder = v;
                for (int b = nBits - 1; b >= 0; b--) {
                    if (remainder >= pow(2.0, b)) {
                        bitStrings[v][nBits - 1 - b] = 1;
                        remainder -= int(pow(2.0, b));
                    } else {
                        bitStrings[v][nBits - 1 - b] = 0;
                    }
                }
            }

            // wt[b][v] = 1 exactly when vertex v has weight b.
            IloIntMatrix wt(env, nBits + 1);
            for (int b = 0; b < nBits + 1; b++) {
                wt[b] = IloIntArray(env, nVertices);
                for (int v = 0; v < nVertices; v++) {
                    if (bitStringDiff(bitStrings[v], bitStrings[0]) == b) {
                        wt[b][v] = 1;
                    } else {
                        wt[b][v] = 0;
                    }
                }
            }

            IloIntVarArray x(env, nVertices, 0, 1);  // the v_j of the (FD) IP
            MIPmodel.add(IloSum(x) == setSize);
            for (int b = 0; b < nBits + 1; b++) {
                MIPmodel.add(IloScalProd(wt[b], x) == xDist[b]);
            }

            // Minimum-distance constraints: no two codewords within distance 2.
            for (int v1 = 0; v1 < nVertices; v1++) {
                char xName[10];
                sprintf(xName, "x_%d", v1);
                x[v1].setName(xName);
                for (int v2 = v1 + 1; v2 < nVertices; v2++) {
                    if (bitStringDiff(bitStrings[v1], bitStrings[v2]) <= 2) {
                        MIPmodel.add(x[v1] + x[v2] <= 1);
                    }
                }
            }

            // Codeword-fixing constraints justified by Proposition 4.1.
            if (xDist[4] >= 1) {
                MIPmodel.add(x[15] == 1);   // 00001111
            }
            if (xDist[4] >= 3) {
                MIPmodel.add(x[51] == 1);   // 00110011
            }

            IloCplex cplex(MIPmodel);
            cplex.setParam(IloCplex::MIPInterval, 10000);
            cplex.setParam(IloCplex::MIPEmphasis, 1);  // emphasize feasibility
            if (cplex.solve()) {
                finish = clock();
                duration = (finish - start) / (long double)CLOCKS_PER_SEC;
                cout << "************************************************" << endl;
                cout << "This took " << duration << " seconds." << endl;
                feas[i] = 1;
                for (int b = 0; b < nBits + 1; b++) {
                    feasDist[i][b] = Dist[i][b];
                }
                cout << endl << endl << "Distribution " << i << " ---> [";
                for (int b = 0; b < nBits; b++) {
                    cout << Dist[i][b] << ", ";
                }
                cout << Dist[i][nBits] << "]:" << endl << endl;
                for (int v1 = 0; v1 < nVertices; v1++) {
                    if (cplex.getValue(x[v1]) > 0.5) {
                        int diff = bitStringDiff(bitStrings[v1], bitStrings[0]);
                        cout << v1 << " ---- " << bitStrings[v1] << "  " << diff << endl;
                    }
                }
            } else {
                finish = clock();
                duration = (finish - start) / (long double)CLOCKS_PER_SEC;  // ticks to seconds
                cout << "************************************************" << endl;
                cout << "This took " << duration << " seconds." << endl;
                feas[i] = 0;
                for (int b = 0; b < nBits + 1; b++) {
                    feasDist[i][b] = 0;
                }
                cout << "Distribution " << i << " ---> [";
                for (int b = 0; b < nBits; b++) {
                    cout << Dist[i][b] << ", ";
                }
                cout << Dist[i][nBits] << "]: infeasible" << endl << endl << endl;
            }
            cout << endl;
            cout << "************************************************" << endl;
            MIPmodel.end();
        }
        catch (IloException& e) {
            cerr << "Concert exception caught: " << e << endl;
        }
        catch (...) {
            cerr << "Unknown exception caught" << endl;
        }
    }

    // Summary of all (FD)-feasible weight distributions.
    cout << endl << endl << endl << endl;
    cout << "************************************************" << endl;
    cout << "Feasible Distributions:" << endl << endl;
    int feasSolutionCounter = 0;
    for (int i = 0; i < solutionCounter; i++) {
        if (feas[i] > 0.5) {
            cout << "Feasible Distribution " << feasSolutionCounter
                 << " (Distribution " << i << ") ---> [";
            for (int b = 0; b < nBits; b++) {
                cout << feasDist[i][b] << ", ";
            }
            cout << feasDist[i][nBits] << "]" << endl;
            feasSolutionCounter++;
        }
    }
    env.end();
    return 0;
}