Modeling the 8-Queens Problem and Sudoku using an Algorithm based on Projections onto Nonconvex Sets

by

Jason Schaad

B.Sc., The University of British Columbia, 2000
M.A., Gonzaga University, 2006

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The College of Graduate Studies (Mathematics)

THE UNIVERSITY OF BRITISH COLUMBIA (Okanagan)

August 2010

© Jason Schaad 2010

Abstract

We begin with some basic definitions and concepts from convex analysis and projection onto convex sets (POCS). We next introduce various algorithms for converging to the intersection of convex sets and review related results (Elser's Difference Map is equivalent to the Douglas-Rachford and Fienup Hybrid Input-Output algorithms, which are both equivalent to the Hybrid Projection-Reflection algorithm). Our main contribution is two-fold. First, we show how to model the 8-queens problem and, following Elser, we model Sudoku as well. In both cases we use several very nonconvex sets; although the theory for convex sets does not apply, the algorithm has so far always found a solution. Second, we show that the operator governing the Douglas-Rachford iteration need not be a proximal map even when the two resolvents involved are; this refines an observation of Eckstein.

Table of Contents

Abstract
Table of Contents
List of Figures
Acknowledgements
1 Introduction
2 Background
  2.1 Vector Space
  2.2 Subspaces and Convex Sets
  2.3 The Inner Product and Inner Product Spaces
  2.4 Cauchy Sequences, Completeness and Hilbert Spaces
  2.5 Product Space
  2.6 Lipschitz and Nonexpansivity
  2.7 Projections
  2.8 Properties of the Projector
  2.9 The Reflector
  2.10 Miscellaneous
3 Projection Methods
  3.1 Elser's Difference Map
  3.2 Douglas-Rachford and Fienup HIO
  3.3 Fixed Point Results
  3.4 The Main Result
4 Monotone Operators
  4.1 Definitions
  4.2 Eckstein's Observation
5 8 Queens
  5.1 Queen's Role in Chess
  5.2 8 Queens History
  5.3 Solutions to 8 Queens
  5.4 Modeling 8 Queens
  5.5 Row Constraints
  5.6 Column Constraints
  5.7 Positive Slope Constraints
  5.8 Negative Slope Constraints
  5.9 Projection onto the Nearest Unit Vector
  5.10 Iterations
  5.11 8 Queens Example
6 Sudoku
  6.1 What is Sudoku?
  6.2 Modeling Sudoku
  6.3 Constraints
    6.3.1 North-South Hallways
    6.3.2 East-West Hallways
    6.3.3 Meeting Rooms
    6.3.4 The Given Sudoku
  6.4 Iterations
  6.5 Sudoku Example
  6.6 Sudoku Distance
7 Conclusion
Bibliography
Appendices
A 8 Queens PHP Code
B Sudoku PHP Code
C Eckstein Calculation Details

List of Figures

2.1 Example of Convexity
5.1 Queen moves
5.2 Initial Starting Position
5.3 30 iterations
5.4 60 iterations
5.5 90 iterations
5.6 Solved in 96 iterations
6.1 A Sample Sudoku Puzzle
6.2 A Sample Sudoku Solution
6.3 9 Sudoku Elevator Shafts
6.4 A Sudoku Hallway
6.5 A Sudoku Meeting Room
6.6 Solution to Escargot
6.7 Distance versus number of iterations
C.1 Calculation Details

Acknowledgements

First and foremost, I thank my supervisor Heinz Bauschke. His enthusiasm for mathematics is truly infectious. Without the support of my wife Yuki, the flexibility of my boss Jon Rever, and the help of Liangjin Yao, I would not have been able to complete this thesis; thank you. I also thank Shawn Wang, Yves Lucet, Yong Gao, and Pulkit Bansal for carefully reading my thesis.

Chapter 1

Introduction

In mid-August 2007, I attended the Second Mathematical Programming Society International Conference on Continuous Optimization [ICC07], held at McMaster University in Hamilton, Ontario. There I had the pleasure of attending a talk given by Dr. Veit Elser of Cornell University. Elser spoke very passionately about a new algorithm he had discovered while researching the hybrid input-output (HIO) algorithm of James R. Fienup, a University of Rochester optics researcher. I was intrigued by the variety of problems Elser was able to solve with his difference map. A large number of problems can be formulated as follows: find an element of A ∩ B, with the understanding that finding elements of A and of B separately is easy [ERT07]. "Difficult problems can often be broken down into a collection of smaller, more tractable, subproblems." [GE08]. Elser gives a very easy to understand example. Suppose you want to unscramble the anagram "ilnoostu". We think about two sets, A and B. Set A is the set of 20,160 distinct permutations of the given 8 letters. Set B is the collection of eight-letter English words; the cardinality of set B is approximately 30,000.
The intersection of these two sets contains our answer; in fact, A ∩ B = {solution}. The algorithm projects onto the sets A and B, and these projections need to be computationally quick in order for the difference map to be successful. Convergence to a fixed point has been proved when the sets are convex [BCL02]. This is of course not the case with Sudoku, as we are working with the integers from 1 to 9. Nevertheless, the Sudoku solver has solved every Sudoku problem we have thrown at it. In fact, Science News published an article about the Sudoku solver [Reh], and people from around the world tried to find a Sudoku puzzle that it could not solve, but to no avail. The server that hosted the PHP code for solving Sudoku was overloaded by the interest; the Internet Service Provider complained that the algorithm was using over 99 percent of the virtual server's processor, and we had to move the code to University of British Columbia servers. We had hoped to find a Sudoku puzzle on which the method failed, so that we could understand what makes such a puzzle special. There may be a theorem showing that our method always works on Sudoku; we leave that for future work in this area. Modeling Sudoku is complicated. Perhaps the greatest contribution of this thesis is providing the full details of the Sudoku model so that the difference map can solve it; the model is very ingenious. I want to be very clear: we did not discover the Sudoku model, Dr. Veit Elser did. Talking with him at the International Math Conference in Banff in 2009 [bir09], I asked him how he ever came up with such a clever model. Elser told me simply, "I was motivated". He knew his difference map could solve something as difficult as a Sudoku puzzle, so he was determined to show the world. We wanted to model something simpler first in order to better explain how to model Sudoku.
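Elser's anagram example is a feasibility problem small enough to solve by brute force. The following sketch, in Python with a tiny stand-in word list (the real set B of eight-letter English words has roughly 30,000 entries), builds set A and intersects it with set B:

```python
from itertools import permutations

def anagram_intersection(letters, dictionary):
    """Set A: the distinct permutations of the given letters.
    Set B: a word list. Any element of A ∩ B unscrambles the anagram."""
    A = {"".join(p) for p in permutations(letters)}  # 20,160 distinct strings for "ilnoostu"
    B = set(dictionary)
    return A & B

# Tiny stand-in for the ~30,000-word dictionary mentioned above.
words = {"solution", "notebook", "projects"}
print(anagram_intersection("ilnoostu", words))  # → {'solution'}
```

Note that "ilnoostu" has a repeated letter "o", so the 8! = 40,320 permutations collapse to 8!/2 = 20,160 distinct strings, matching the count in the text.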
Once we explain how to model 8-queens, the modeling of Sudoku is much easier to understand. The modeling of the 8-queens problem is a new application; as far as I know, Elser has not attempted it. As mathematicians we want to know why the method works. The study of projection operators reveals a strong connection to monotone operators. We revisit work by Eckstein, correct a computational error, and provide a much simpler example. The remainder of this thesis is organized as follows. Chapter 2 is a review of well-known definitions and concepts from the broad area of convex analysis; we build up the theory needed to define a Hilbert space and projections. In Chapter 3 we consider an algorithm for converging to the intersection of convex sets and review various related results; we show that Elser's Difference Map is equivalent to the Douglas-Rachford and Fienup Hybrid Input-Output algorithms. In Chapter 4 we show that the operator governing the Lions-Mercier iteration need not be a proximal map even when the two resolvents involved are; this refines an observation of Eckstein. Chapter 5 focuses on modeling the 8-queens problem, and Chapter 6 concerns the Sudoku puzzle. Finally, Chapter 7 concludes this thesis.

Chapter 2

Background

This chapter is included so that the results of this thesis can be understood in one self-contained document. Readers can first familiarize themselves with the well-known facts presented here.

2.1 Vector Space

Definition 2.1 (Vector space). A vector space V over ℝ is a nonempty set of elements (called vectors) with two algebraic operations, called vector addition and multiplication of vectors by scalars. For all vectors x, y, z ∈ V and all α, β ∈ ℝ the following conditions are required to hold:

1. x + y, called the sum of x and y, is a vector in V and is the unique result of addition applied to x and y.

2.
αx, called the scalar multiple of x by α, is a vector in V and is the unique result of scalar multiplication applied to x by α.

3. Addition is commutative: x + y = y + x.

4. Addition is associative: x + (y + z) = (x + y) + z.

5. There is a zero vector 0 in V such that x + 0 = x.

6. Each x has an additive inverse −x such that x + (−x) = 0.

7. Scalar multiplication is distributive: α(x + y) = αx + αy and (α + β)x = αx + βx.

8. Scalar multiplication is compatible with multiplication of scalars: α(βx) = (βα)x.

9. Scalar multiplication by 1 is the identity operator: 1x = Id x = x.

2.2 Subspaces and Convex Sets

Definition 2.2 (Subspace). A subset W of a vector space V is a subspace of V if W is itself a vector space with the same operations of addition and scalar multiplication.

We note that the zero vector 0 lies in every subspace. The smallest possible subspace is {0}, and the largest possible subspace is the vector space V itself. It is tedious and time-consuming to check whether a subset of V is a subspace by applying the definition of a vector space directly. The following two facts are well-known, easier ways to verify that W is indeed a subspace. The proofs are omitted, but can be found in many linear algebra textbooks.

Fact 2.3. [Mey00, page 162] Let V be a real vector space. Then W is a subspace of V if and only if W is a nonempty subset of V and αx + y ∈ W for all x, y ∈ W and all α ∈ ℝ.

Proposition 2.4. If W₁ and W₂ are subspaces of a real vector space V, then W₁ ∩ W₂ is a subspace of V.

Proof. By Fact 2.3 we need only show that W₁ ∩ W₂ ⊆ V and that αx + y ∈ W₁ ∩ W₂ for all x, y ∈ W₁ ∩ W₂ and all α ∈ ℝ. Pick x ∈ W₁ ∩ W₂; then x ∈ W₁, and hence x ∈ V, as W₁ is a subspace. Now pick x, y ∈ W₁ ∩ W₂; this implies that

x ∈ W₁ and x ∈ W₂,   (2.1)
y ∈ W₁ and y ∈ W₂.   (2.2)

Since W₁ and W₂ are both subspaces,

αx + y ∈ W₁,   (2.3)
αx + y ∈ W₂.   (2.4)

We conclude that αx + y ∈ W₁ ∩ W₂.

Definition 2.5 (Affine subspace).
The set W is an affine subspace of V if there exist a vector v ∈ V and a subspace W₀ of V such that W = v + W₀ = {v + w : w ∈ W₀}.

Definition 2.6 (Convex set). Let X be a vector space and let C ⊆ X. We say C is convex if

λC + (1 − λ)C ⊆ C,  ∀λ ∈ [0, 1].   (2.5)

Figure 2.1: Example of Convexity

A geometric characterization of this definition is that a set is convex if it contains every line segment whose endpoints are in the set; see Figure 2.1. It is easy to see that subspaces are convex sets.

Definition 2.7 (Convex function). Let f : X → ]−∞, +∞]. We say f is a convex function if

f((1 − λ)x + λy) ≤ (1 − λ)f(x) + λf(y),  ∀x, y ∈ X, ∀λ ∈ ]0, 1[.   (2.6)

Definition 2.8 (Domain). Let f : X → ]−∞, +∞]. The domain of f is

dom f = {x | f(x) < +∞}.   (2.7)

Definition 2.9 (Epigraph). Let f : X → ]−∞, +∞]. The epigraph of f is the set of points lying on or above its graph, i.e., epi f = {(x, µ) | x ∈ dom f, µ ∈ ℝ, µ ≥ f(x)}. A function is convex if and only if its epigraph is a convex set; this is a geometric characterization of a convex function.

2.3 The Inner Product and Inner Product Spaces

Definition 2.10 (Inner Product). An inner product on a vector space X is a mapping from X × X to ℝ. To every pair of vectors x and y it associates a scalar ⟨x, y⟩, called the inner product of x and y, such that for all x, y, z ∈ X and all α ∈ ℝ we have:

1. ⟨x + y, z⟩ = ⟨x, z⟩ + ⟨y, z⟩
2. ⟨αx, y⟩ = α⟨x, y⟩
3. ⟨x, y⟩ = ⟨y, x⟩
4. ⟨x, x⟩ ≥ 0
5. ⟨x, x⟩ = 0 ⇐⇒ x = 0

Definition 2.11 (Inner Product Space). An inner product space (sometimes called a pre-Hilbert space) is a vector space X with an inner product defined on X.

Definition 2.12 (Orthogonality). An element x of an inner product space X is said to be orthogonal to an element y ∈ X if ⟨x, y⟩ = 0.

Definition 2.13 (Norm). Let V be a vector space.
The function ‖·‖ : V → ℝ is called a norm if the following hold:

1. ‖x‖ ≥ 0, and ‖x‖ = 0 if and only if x = 0
2. ‖x + y‖ ≤ ‖x‖ + ‖y‖,  ∀x, y ∈ V
3. ‖αx‖ = |α| ‖x‖,  ∀α ∈ ℝ, ∀x ∈ V

An inner product on X defines a norm on X, given by

‖x‖ = √⟨x, x⟩ ≥ 0.   (2.8)

Note that

‖x‖² = ⟨x, x⟩ ≥ 0.   (2.9)

We also have a metric on X, given by

d(x, y) = ‖x − y‖ = √⟨x − y, x − y⟩ ≥ 0.   (2.10)

2.4 Cauchy Sequences, Completeness and Hilbert Spaces

Definition 2.14 (Cauchy Sequence). Let X be an inner product space. A sequence (xₙ) in X is said to be a Cauchy sequence if for every ε > 0 there is an N = N(ε) such that ‖xₘ − xₙ‖ < ε for all m, n > N.

Definition 2.15 (Completeness). The space X is said to be complete if every Cauchy sequence in X has a limit which is an element of X.

Definition 2.16 (Hilbert Space). A complete inner product space is said to be a Hilbert space. From now on we assume that X is a Hilbert space.

Example 2.17 (Euclidean space ℝⁿ). The space ℝⁿ is a Hilbert space. This is the primary Hilbert space focussed on in this thesis. [Kre78, Example 1.5-1] shows that ℝⁿ is complete. The inner product is defined by

⟨x, y⟩ = x₁y₁ + ⋯ + xₙyₙ,   (2.11)

where x = (x₁, …, xₙ) and y = (y₁, …, yₙ). From (2.11) we see that

‖x‖ = √⟨x, x⟩ = √(x₁² + ⋯ + xₙ²).   (2.12)

If n = 3, (2.11) gives us the usual dot product

⟨x, y⟩ = x · y = x₁y₁ + x₂y₂ + x₃y₃,   (2.13)

where x = (x₁, x₂, x₃) and y = (y₁, y₂, y₃). The orthogonality condition

⟨x, y⟩ = x · y = 0   (2.14)

agrees with the concept of perpendicularity.

Example 2.18 (ℓ²). [Kre78, 1.2-3] Pronounced "little el 2" and sometimes written ℓ²(I), this is the Hilbert space of sequences indexed by a countable set, usually ℕ (but possibly ℤ). Each element x of the space ℓ² is a sequence x = (xₙ) = (x₁, x₂, …, xₙ, …) with

Σ_{n=1}^∞ xₙ² < +∞.   (2.15)

For two elements (sequences) x = (xₙ) ∈ ℓ² and y = (yₙ) ∈ ℓ², the inner product is defined by

⟨x, y⟩ = Σ_{n=1}^∞ xₙyₙ.   (2.16)

Definition 2.19 (Bounded). A set A in X is said to be bounded if

sup_{x∈A} ‖x‖ < +∞.

Theorem 2.20. Every convergent sequence in a Hilbert space is a bounded Cauchy sequence.

Proof. Suppose xₙ → x. We first show that (xₙ) is Cauchy. By the triangle inequality we have

‖xₙ − xₘ‖ = ‖xₙ − x + x − xₘ‖ ≤ ‖xₙ − x‖ + ‖x − xₘ‖.   (2.17)

Now let ε > 0. Then there exists N such that for n > N and m > N we have

‖xₙ − x‖ < ε/2,   (2.18)
‖x − xₘ‖ < ε/2.   (2.19)

Hence, for m, n > N,

‖xₙ − xₘ‖ ≤ ‖xₙ − x‖ + ‖x − xₘ‖ < ε/2 + ε/2 = ε,   (2.20)

which proves the sequence is Cauchy. Now we show (xₙ) is bounded. Again we have xₙ → x. For ε = 1, we get N ∈ ℕ such that

n > N ⟹ ‖xₙ − x‖ < 1.   (2.21)

We claim that

‖xₙ‖ − ‖x‖ ≤ ‖xₙ − x‖.   (2.22)

Indeed, by the triangle inequality,

‖xₙ‖ = ‖xₙ − x + x‖ = ‖(xₙ − x) + x‖ ≤ ‖x‖ + ‖xₙ − x‖.   (2.23)

Thus, for n > N,

‖xₙ‖ < ‖x‖ + 1.   (2.24)

If we choose M to be

M = max{‖x‖ + 1, ‖x₁‖, ‖x₂‖, …, ‖x_N‖},   (2.25)

we get

‖xₙ‖ ≤ M,  ∀n,   (2.26)

as desired.

Definition 2.21 (Distance to a set). The distance from a vector x ∈ X to a nonempty set A ⊆ X is defined as follows:

d_A(x) := inf_{y∈A} ‖x − y‖.

Theorem 2.22 (Cauchy-Schwarz inequality). For any x and y in a Hilbert space X, we have |⟨x, y⟩| ≤ ‖x‖ ‖y‖.

Proof. We prove this with two cases. Case 1: x = 0 or y = 0. Clear. Case 2: x ≠ 0 and y ≠ 0. Then

0 ≤ ⟨x/‖x‖ − y/‖y‖, x/‖x‖ − y/‖y‖⟩ = ‖x/‖x‖ − y/‖y‖‖²   (2.27)

= ⟨x, x⟩/‖x‖² − 2⟨x, y⟩/(‖x‖‖y‖) + ⟨y, y⟩/‖y‖²   (2.28)

⇔ ‖x‖²/‖x‖² + ‖y‖²/‖y‖² − 2⟨x, y⟩/(‖x‖‖y‖) ≥ 0   (2.29)

⇔ 2 ≥ 2⟨x, y⟩/(‖x‖‖y‖)   (2.30)

⇔ ‖x‖ ‖y‖ ≥ ⟨x, y⟩   (2.31)

⇔ ⟨x, y⟩ ≤ ‖x‖ ‖y‖.   (2.32)

This is true for all x, so we may replace x by −x:

‖−x‖ ‖y‖ ≥ ⟨−x, y⟩   (2.33)

⇔ −⟨x, y⟩ ≤ ‖x‖ ‖y‖.   (2.34)

Now

max{⟨x, y⟩, −⟨x, y⟩} = |⟨x, y⟩|.
(2.35)

Combining (2.32) and (2.34) gives

|⟨x, y⟩| ≤ ‖x‖ ‖y‖,  ∀x, y ∈ X,   (2.36)

as desired.

Theorem 2.23 (Triangle Inequality). For every x, y ∈ X, we have ‖x + y‖ ≤ ‖x‖ + ‖y‖.

Proof. We have

‖x + y‖² = ⟨x + y, x + y⟩   (2.37)
= ⟨x, x + y⟩ + ⟨y, x + y⟩   (2.38)
= ⟨x, x⟩ + ⟨x, y⟩ + ⟨y, x⟩ + ⟨y, y⟩   (2.39)
= ‖x‖² + 2⟨x, y⟩ + ‖y‖²   (2.40)
≤ ‖x‖² + 2|⟨x, y⟩| + ‖y‖².   (2.41)

By the Cauchy-Schwarz inequality we have

‖x + y‖² ≤ ‖x‖² + 2‖x‖‖y‖ + ‖y‖²   (2.42)
= (‖x‖ + ‖y‖)².   (2.43)

We take the square root on both sides to obtain

‖x + y‖ ≤ ‖x‖ + ‖y‖,   (2.44)

as desired.

Theorem 2.24 (Parallelogram law). For every x, y ∈ X,

‖x + y‖² + ‖x − y‖² = 2(‖x‖² + ‖y‖²).

Proof. We have

‖x + y‖² + ‖x − y‖² = ⟨x + y, x + y⟩ + ⟨x − y, x − y⟩   (2.45)
= 2⟨x, x⟩ + 2⟨y, y⟩ + 2⟨x, y⟩ − 2⟨x, y⟩   (2.46)
= 2(‖x‖² + ‖y‖²).   (2.47)

2.5 Product Space

Definition 2.25 (Product Space). Let N ∈ ℕ = {1, 2, 3, …}, and let Xᵢ be Hilbert spaces, with inner products ⟨·, ·⟩ᵢ, for each i ∈ {1, …, N}. The corresponding product space H := X₁ × X₂ × ⋯ × X_N is defined by

H = {x = (xᵢ) = (x₁, x₂, …, x_N) | xᵢ ∈ Xᵢ},   (2.48)

⟨x, y⟩ = Σ_{i=1}^N ⟨xᵢ, yᵢ⟩ᵢ,  ∀x = (xᵢ), y = (yᵢ) ∈ H.   (2.49)

Then

‖x‖ = √(Σ_{i=1}^N ‖xᵢ‖ᵢ²),  ∀x = (xᵢ) ∈ H.   (2.50)

If Xᵢ = X for all i ∈ {1, …, N}, we denote H by X^N. In fact, the well-known space ℝ^N is actually a product space: we can write ℝ^N as ℝ × ⋯ × ℝ.

Proposition 2.26. The product space H is a Hilbert space.

Proof. We first prove that

⟨x, y⟩ = Σ_{i=1}^N ⟨xᵢ, yᵢ⟩ᵢ,  ∀x = (xᵢ), y = (yᵢ) ∈ H,   (2.51)

is an inner product. Let x = (xᵢ), y = (yᵢ), z = (zᵢ) ∈ H, and α ∈ ℝ. As described above, we need to show five properties in order to prove this is an inner product. Until the end of this proof, we abbreviate Σ_{i=1}^N by Σ.

1. ⟨x + y, z⟩ = ⟨(xᵢ) + (yᵢ), (zᵢ)⟩ = ⟨(xᵢ + yᵢ), (zᵢ)⟩ = Σ⟨xᵢ + yᵢ, zᵢ⟩ᵢ.   (2.52)

Since xᵢ, yᵢ, zᵢ ∈ Xᵢ and Xᵢ is a Hilbert space, we can write this as

Σ(⟨xᵢ, zᵢ⟩ᵢ + ⟨yᵢ, zᵢ⟩ᵢ) = Σ⟨xᵢ, zᵢ⟩ᵢ + Σ⟨yᵢ, zᵢ⟩ᵢ = ⟨x, z⟩ + ⟨y, z⟩.   (2.53)

2.
αx, y = α(xi ), (yi ) = (αxi ), (yi )  (2.54) 11  2.5. Product Space Since xi , yi , zi ∈ Xi and Xi is a Hilbert space, we can write αx, y =  αxi , yi  i  =  α xi , yi  i  =α  xi , yi  i  = α x, y .  (2.55)  3. x, y = (xi ), (yi ) =  xi , yi  i  =  yi , xi  i  = (yi ), (xi ) = y, x . (2.56)  4. x, x = (xi ), (xi ) =  xi , xi  i  =  xi  2 i  ≥ 0.  (2.57)  5. From above we know xi  2 i  ≥ 0.  (2.58)  Then x, x = 0 = xi 2i ⇔ xi = 0, ∀i ⇔ x = (0, 0, . . .) = 0. Hence, ·, · as defined above is an inner product. It also induces the following norm, ∀x = (xi ) ∈ H  x =  (2.59)  xi 2i .  x, x =  (2.60)  Finally, to show this is a Hilbert space, we need to show that it is complete. Let (xn ) be a Cauchy sequence in H. Write xn = (xn,i ) = (xn,1 , xn,2 , . . . , xn,N ). Since (xn ) is a Cauchy sequence we know that ∀ε∃N1 such that xn − xm < ε,  ∀n, m > N1  (2.61)  < ε2 .  (2.62)  If we square both sides we get xn − xm  2  Note that, xn − xm  (2.63)  = (xn,1 , xn,2 , . . . , xn,N ) − (xm,1 , xm,2 , . . . , xm,N )  (2.64)  = (xn,1 − xm,1 , xn,2 − xm,2 , . . . , xn,N − xm,N ).  (2.65)  12  2.6. Lipschitz and Nonexpansivity Thus, xn − xm  2  (2.66)  = xn,1 − xm,1  2 1  + xn,2 − xm,2  2 2  + . . . + xn,N − xm,N  2 N.  (2.67)  ≤ ε2 .  (2.68)  By (2.62) xn,1 − xm,1  2 1  + xn,2 − xm,2  2 2  + . . . + xn,N − xm,N )  2 N  So, this implies that xn,i − xm,i  2 i  ≤ ε2 ,  ∀i.  (2.69)  Thus, xn,i − xm,i ≤ ε,  ∀i ∀n, m > N1 ,  (2.70)  i.e., (xn,i ) is a Cauchy sequence for every i, and xn,i is a sequence in Xi , which is complete. So ∃x0,i ∈ Xi such that xn,i −→ x0,i . Therefore, xn −→ (x0,1 , x0,2 , . . . , x0,N ).  (2.71)  We have shown that our product space is indeed complete and thus a Hilbert space. Definition 2.27 (Diagonal space). Let N ∈ N, and let X be a Hilbert space. The diagonal space ∆ in X N is the subspace ∆ = {(x, . . . , x) ∈ X N | x ∈ X}.  2.6  (2.72)  Lipschitz and Nonexpansivity  Definition 2.28 (Lipschitz). Let T : X → X. 
Then T is said to be α-Lipschitz continuous if

(∀x ∈ X)(∀y ∈ X)  ‖Tx − Ty‖ ≤ α‖x − y‖.

Example 2.29 (The Identity). The identity operator on X is 1-Lipschitz; this is very easy to show: ‖x − y‖ = ‖Id(x) − Id(y)‖ ≤ 1 · ‖x − y‖.

Theorem 2.30. Let F₁ : X → X be L₁-Lipschitz and F₂ : X → X be L₂-Lipschitz. Then F₁ + F₂ is (L₁ + L₂)-Lipschitz.

Proof. We start with the definition of Lipschitz continuity:

F₁ is L₁-Lipschitz ⇔ ∀x ∀y  ‖F₁(x) − F₁(y)‖ ≤ L₁ · ‖x − y‖,   (2.73)
F₂ is L₂-Lipschitz ⇔ ∀x ∀y  ‖F₂(x) − F₂(y)‖ ≤ L₂ · ‖x − y‖.   (2.74)

Hence,

‖(F₁ + F₂)(x) − (F₁ + F₂)(y)‖ = ‖(F₁(x) + F₂(x)) − (F₁(y) + F₂(y))‖   (2.75)
= ‖(F₁(x) − F₁(y)) + (F₂(x) − F₂(y))‖,   (2.76)

which, by the triangle inequality, is

≤ ‖F₁(x) − F₁(y)‖ + ‖F₂(x) − F₂(y)‖.   (2.77)

Now since F₁ is L₁-Lipschitz and F₂ is L₂-Lipschitz, this is

≤ L₁ · ‖x − y‖ + L₂ · ‖x − y‖ = (L₁ + L₂) · ‖x − y‖,   (2.78)

as desired.

Definition 2.31 (Nonexpansivity). Let T : X → X. We say T is nonexpansive if

(∀x ∈ X)(∀y ∈ X)  ‖Tx − Ty‖ ≤ ‖x − y‖.

Remark 2.32. We note that T being nonexpansive means precisely that it is 1-Lipschitz continuous. Therefore, the identity is not only Lipschitz continuous but also nonexpansive.

Example 2.33 (The Right Shift Operator). The circular right shift operator

R : Xⁿ → Xⁿ : (x₁, …, xₙ) ↦ (xₙ, x₁, …, x_{n−1})

satisfies ‖Rx‖ = ‖x‖, so that, using linearity of R, ‖R(x − y)‖ = ‖Rx − Ry‖ = ‖x − y‖; hence R is nonexpansive.

Example 2.34 (A linear operator on ℓ²). Recall that an element x of ℓ² is of the form x = (xₙ) = (x₁, x₂, …) with

Σ_{n=1}^∞ xₙ² < +∞.   (2.79)

We define a linear operator T : ℓ² → ℓ² by

Tx := (xₙ/n) = (x₁, x₂/2, x₃/3, …, xₙ/n, …),  ∀x = (xₙ) = (x₁, x₂, …) ∈ ℓ².   (2.80)

So we have, for all x = (xₙ) ∈ ℓ²,

‖Tx‖² = Σ_{i=1}^∞ xᵢ²/i² ≤ Σ_{i=1}^∞ xᵢ² = ‖x‖²  (by (2.79)).   (2.81)

Taking the square root on both sides,

‖Tx‖ ≤ ‖x‖,  ∀x ∈ ℓ².
(2.82)

Now by linearity of T we have

‖Tx − Ty‖ = ‖T(x − y)‖ ≤ ‖x − y‖,  ∀x, y ∈ ℓ².   (2.83)

Therefore, T is nonexpansive.

Definition 2.35 (Continuity). Let T : X → X. T is said to be continuous if for every sequence (xₙ) in X with xₙ → x we have Txₙ → Tx. Equivalently, for every x ∈ X and every ε > 0 there exists δ > 0 such that ‖Tx − Ty‖ < ε whenever y ∈ X with ‖x − y‖ < δ.

Proposition 2.36. If F is L-Lipschitz, then F is continuous.

Proof. The result is clear when L = 0, so assume L > 0. Let c ∈ X. We will show that F is continuous at c. Given ε > 0, we need to find δ > 0 such that ‖F(x) − F(c)‖ < ε when ‖x − c‖ < δ. We now set

δ := ε/L > 0.

Then, for all x ∈ X,

‖x − c‖ < δ ⟹ ‖F(x) − F(c)‖ ≤ L‖x − c‖ < Lδ = ε.   (2.84)

So ‖F(x) − F(c)‖ < ε, as desired.

Theorem 2.37. The composition of nonexpansive operators is nonexpansive.

Proof. Let T and N be nonexpansive, and let x and y be in X. Then

‖T(Nx) − T(Ny)‖ ≤ ‖Nx − Ny‖ ≤ ‖x − y‖.   (2.85)

Definition 2.38 (Firmly nonexpansive). Let T : X → X. We say T is firmly nonexpansive if

(∀x ∈ X)(∀y ∈ X)  ‖Tx − Ty‖² ≤ ⟨x − y, Tx − Ty⟩.

Theorem 2.39. Let T : X → X. Then the following statements are equivalent:

1. T is firmly nonexpansive.
2. ‖Tx − Ty‖² + ‖(x − Tx) − (y − Ty)‖² ≤ ‖x − y‖²  ∀x, ∀y.
3. ‖(2Tx − x) − (2Ty − y)‖ ≤ ‖x − y‖  ∀x, ∀y.
4. Id − T is firmly nonexpansive.
5. T = ½N + ½Id, where N is nonexpansive.

Proof. We rewrite the statements above by letting a = x − y and b = Tx − Ty. The following statements are equivalent:

1. ‖b‖² ≤ ⟨a, b⟩.
2. ‖b‖² + ‖a − b‖² ≤ ‖a‖².
3. ‖2b − a‖ ≤ ‖a‖.
4. Id − T is firmly nonexpansive.
5. T = ½N + ½Id, where N is nonexpansive.

Now

2 ⇔ ‖b‖² + ‖a − b‖² ≤ ‖a‖²   (2.86)
⇔ ‖b‖² + ‖a‖² − 2⟨a, b⟩ + ‖b‖² ≤ ‖a‖²   (2.87)
⇔ 2‖b‖² ≤ 2⟨a, b⟩   (2.88)
⇔ ‖b‖² ≤ ⟨a, b⟩   (2.89)
⇔ 1.   (2.90)

Similarly,

3 ⇔ ‖2b − a‖² ≤ ‖a‖²   (2.91)
⇔ 4‖b‖² − 4⟨a, b⟩ + ‖a‖² ≤ ‖a‖²   (2.92)
⇔ 4‖b‖² ≤ 4⟨a, b⟩   (2.93)
⇔ ‖b‖² ≤ ⟨a, b⟩   (2.94)
⇔ 1.   (2.95)
Next,

4 ⇔ ‖(Id − T)x − (Id − T)y‖² ≤ ⟨x − y, (Id − T)x − (Id − T)y⟩   (2.96)
⇔ ‖a − b‖² ≤ ⟨a, a − b⟩   (2.97)
⇔ ‖a‖² − 2⟨a, b⟩ + ‖b‖² ≤ ⟨a, a⟩ − ⟨a, b⟩   (2.98)
⇔ ‖a‖² + ‖b‖² − 2⟨a, b⟩ ≤ ‖a‖² − ⟨a, b⟩   (2.99)
⇔ ‖b‖² ≤ ⟨a, b⟩   (2.100)
⇔ 1.   (2.101)

Note that

T = ½Id + ½N ⇔ N = 2T − Id.   (2.102)

Hence,

3 ⇔ N is nonexpansive   (2.103)
⇔ 5.   (2.104)

The last equivalence is a tool for creating new firmly nonexpansive mappings from nonexpansive mappings.

Example 2.40. We define A on ℓ² as follows:

A : (x₁, x₂, x₃, …) ↦ (x₁, x₂/2, x₃/3, …).   (2.105)

If we take A + Id we get

A + Id : (x₁, x₂, x₃, …) ↦ (2x₁, (3/2)x₂, (4/3)x₃, …).   (2.106)

We now define F to be the inverse,

F := (A + Id)⁻¹ : (ξ₁, ξ₂, ξ₃, …) ↦ ((1/2)ξ₁, (2/3)ξ₂, (3/4)ξ₃, …).   (2.107)

We will now show that F := (A + Id)⁻¹ is firmly nonexpansive. By (2.107), F is linear. We will show that

‖Fx‖² ≤ ⟨Fx, x⟩  ∀x,   (2.108)

which, by linearity of F, is equivalent to the firm nonexpansivity of F. Let y = Fx. Then x = F⁻¹y = (A + Id)y. Hence ‖Fx‖² = ‖y‖², and since ⟨y, Ay⟩ ≥ 0,

⟨Fx, x⟩ = ⟨y, (A + Id)y⟩ = ⟨y, Ay⟩ + ‖y‖² ≥ ‖y‖² = ‖Fx‖².   (2.109)

We have shown that ⟨Fx, x⟩ ≥ ‖Fx‖², and therefore F is firmly nonexpansive.

Corollary 2.41. All firmly nonexpansive mappings are nonexpansive.

Proof. For T a firmly nonexpansive mapping, we know by the Cauchy-Schwarz inequality that, for all x and y,

‖T(x) − T(y)‖² ≤ ⟨T(x) − T(y), x − y⟩ ≤ ‖Tx − Ty‖ · ‖x − y‖.   (2.110)

We now consider two cases. Case 1: ‖T(x) − T(y)‖ = 0. Then

‖T(x) − T(y)‖ = 0 ≤ ‖x − y‖,   (2.111)

as norms are always nonnegative. So we have

‖T(x) − T(y)‖ ≤ ‖x − y‖,   (2.112)

and therefore T is nonexpansive, as desired. Case 2: ‖T(x) − T(y)‖ > 0. Again we have

‖T(x) − T(y)‖² ≤ ‖T(x) − T(y)‖ · ‖x − y‖.   (2.113)

Divide both sides by ‖T(x) − T(y)‖ to obtain

‖T(x) − T(y)‖ ≤ ‖x − y‖,   (2.114)

as desired.
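The equivalence 5 ⇔ 1 of Theorem 2.39 can be checked numerically. The sketch below (Python, illustrative only) builds T = ½N + ½Id from the circular right shift N of Example 2.33, here on ℝ⁴, and verifies the firm-nonexpansivity inequality of Definition 2.38 on random pairs of points:

```python
import random

def right_shift(x):
    # Circular right shift (Example 2.33): norm-preserving, hence nonexpansive.
    return [x[-1]] + x[:-1]

def T(x):
    # Theorem 2.39(5): T = (1/2)N + (1/2)Id is firmly nonexpansive
    # whenever N is nonexpansive.
    n = right_shift(x)
    return [0.5 * ni + 0.5 * xi for ni, xi in zip(n, x)]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-5, 5) for _ in range(4)]
    y = [random.uniform(-5, 5) for _ in range(4)]
    dT = [a - b for a, b in zip(T(x), T(y))]
    dxy = [a - b for a, b in zip(x, y)]
    # Firm nonexpansivity: ||Tx - Ty||^2 <= <x - y, Tx - Ty>.
    assert inner(dT, dT) <= inner(dxy, dT) + 1e-9
print("firm nonexpansivity verified on 1000 random pairs")
```

For this particular N, which is an isometry, the inequality actually holds with equality; the tolerance only absorbs floating-point error.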
2.7 Projections

The main focus of this section is to provide an introduction to projections onto convex sets.

Definition 2.42. Let C ⊆ X, x ∈ X, and c* ∈ C. Then c* is a projection of x onto C if

(∀c ∈ C)  ‖x − c*‖ ≤ ‖x − c‖.

In other words, c* is a closest point to x in C. The associated operator

P_C : X ⇒ C : x ↦ {c* ∈ C | c* is a projection of x onto C}

is called the projection operator or projector. If P_C x = {c*} we also write c* = P_C x rather than {c*} = P_C x, which is a slight abuse of notation.

Lemma 2.43. Let u and v be in X. Then the following holds:

⟨u, v⟩ ≤ 0 ⇔ (∀α ∈ ℝ₊) ‖u‖ ≤ ‖u − αv‖ ⇔ (∀α ∈ [0, 1]) ‖u‖ ≤ ‖u − αv‖.

Proof. Observe that

(∀α ∈ ℝ)  ‖u − αv‖² − ‖u‖² = α(α‖v‖² − 2⟨u, v⟩).   (2.115)

Hence the forward implications follow immediately. Conversely, if ‖u‖ ≤ ‖u − αv‖ for every α ∈ ]0, 1], then (2.115) implies that ⟨u, v⟩ ≤ α‖v‖²/2. As α ↓ 0, we obtain ⟨u, v⟩ ≤ 0.

Theorem 2.44 (The Projection Theorem). Let C be a nonempty closed convex subset of X. Then for every x ∈ X there exists a unique c* ∈ C such that ‖x − c*‖ = d_C(x). Moreover, c* is characterized by

c* ∈ C   (2.116)

and

(∀c ∈ C)  ⟨c − c*, x − c*⟩ ≤ 0.   (2.117)

Proof. Fix x ∈ X. Pick a sequence (cₙ) in C such that ‖x − cₙ‖ → d_C(x). By the parallelogram law,

‖cₙ − cₘ‖² = 2‖cₙ − x‖² + 2‖cₘ − x‖² − ‖(cₙ + cₘ) − 2x‖²   (2.118)

= 2‖cₙ − x‖² + 2‖cₘ − x‖² − 4‖(cₙ + cₘ)/2 − x‖²   (2.119)

≤ 2‖cₙ − x‖² + 2‖cₘ − x‖² − 4(d_C(x))²,   (2.120)

since (cₙ + cₘ)/2 ∈ C by convexity, and hence d_C(x) ≤ ‖x − (cₙ + cₘ)/2‖. Now as n, m → ∞, the right-hand side

→ 2(d_C(x))² + 2(d_C(x))² − 4(d_C(x))² = 0.   (2.121)

Therefore (cₙ) is Cauchy, and thus convergent, as X is a Hilbert space and thus complete:

cₙ → c* ∈ X.   (2.122)

But (cₙ) lies in C and C is closed, so c* ∈ C as well. Now since cₙ → c*, we also have x − cₙ → x − c* and thus ‖x − cₙ‖ → ‖x − c*‖. On the other hand, ‖x − cₙ‖ → d_C(x) by assumption. Since limits are unique, ‖x − c*‖ = d_C(x), so c* ∈ P_C x. We have shown existence.
Now we need to show uniqueness. Let c′ be another nearest point in P_C x; we need only show that c* = c′. Set (cₙ) = (c*, c′, c*, c′, …); then ‖x − cₙ‖ = d_C(x) for every n, and by (2.118)-(2.120) we deduce that ‖c* − c′‖² = 0, i.e., c* = c′.

Now for the characterization. Pick c ∈ C and set c_α = αc + (1 − α)c* ∈ C, where α ∈ [0, 1]. Then, using Lemma 2.43 (with u = x − c* and v = c − c*), we obtain

‖x − c*‖ = d_C(x)   (2.123)

⇔ (∀c ∈ C)(∀α ∈ [0, 1])  ‖x − c*‖ ≤ ‖x − c_α‖ = ‖(x − c*) − α(c − c*)‖   (2.124)

⇔ (∀c ∈ C)  ⟨x − c*, c − c*⟩ ≤ 0.   (2.125)

We now give a few examples of projections.

Example 2.45 (Hyperplane). (See [Deu01, page 99].) If a ∈ X \ {0}, β ∈ ℝ, and C = {x ∈ X | ⟨a, x⟩ = β}, then

P_C x = x − ((⟨a, x⟩ − β)/‖a‖²) a.   (2.126)

Proof. We shall use the characterization of the Projection Theorem. It can easily be shown that our set C is closed and convex. Next we call our projection candidate c* and check that c* ∈ C:

⟨a, x − ((⟨a, x⟩ − β)/‖a‖²) a⟩ = ⟨a, x⟩ − ((⟨a, x⟩ − β)/‖a‖²) ⟨a, a⟩ = β.   (2.127)

Next we take c ∈ C, which means that ⟨a, c⟩ = β. Then, by simplifying and recalling that ⟨a, c⟩ = β and ⟨a, a⟩ = ‖a‖²,

⟨c − (x − ((⟨a, x⟩ − β)/‖a‖²) a), ((⟨a, x⟩ − β)/‖a‖²) a⟩   (2.128)
= ((⟨a, x⟩ − β)/‖a‖²) (⟨c, a⟩ − ⟨x, a⟩ + ((⟨a, x⟩ − β)/‖a‖²) ⟨a, a⟩)   (2.129)
= ((⟨a, x⟩ − β)/‖a‖²) (β − ⟨x, a⟩ + ⟨a, x⟩ − β)   (2.130)
= ((⟨a, x⟩ − β)/‖a‖²) · 0   (2.131)
= 0   (2.132)
≤ 0.   (2.133)

Since the two conditions for the characterization are satisfied, our candidate c* is indeed the projection P_C x.

Example 2.46 (The unit ball). We provide the projection onto the closed unit ball C := B(0; 1) = {x | ‖x‖ ≤ 1}:

P_C x = x / max{‖x‖, 1},  ∀x ∈ X.   (2.134)

Proof. C is closed and convex, so our projection is a singleton by Theorem 2.44. We denote by c* our projection candidate in (2.134). To prove this is the projection we need to show two things:

1. c* ∈ C;
2. ⟨c − c*, x − c*⟩ ≤ 0  ∀c ∈ C, ∀x ∈ X.

Case 1: Assume that x ∈ C. This means that x is in the unit ball, so max{‖x‖, 1} = 1 and hence c* = x.

1. Trivial by assumption.
2. ⟨c − c*, x − c*⟩ = ⟨c − x, x − x⟩ = ⟨c − c*, 0⟩ = 0 ≤ 0.

Case 2: Assume that x ∉ C.

1. x ∉ C implies ‖x‖ > 1 and c* = x/‖x‖ by definition. We also have ‖c*‖ = 1, so c* ∈ C.
2. For all c ∈ C we have ‖c‖ ≤ 1 by definition.
c − c∗ , x − c∗ = c − x, x − x = c − c∗ , 0 = 0 ≤ 0. Case 2: Assume that x ∈ / C. x 1. x ∈ / C implies x > 1 and c∗ = by definition. We also have that x c∗ = 1 so c∗ ∈ C: 2. ∀c ∈ C we have c ≤ 1 by definition 21  2.7. Projections  x x ,x − x x  c − c∗ , x − c∗ = c − = c−  1 1 x , (1 − )x = c, (1 − )x − x x x  (2.135) x 1 , (1 − )x . (2.136) x x  Now since  (1 − x  1 ) x  1 ) x  (1 − x, x =  x  x  2  = x (1 −  1 ) = x − 1 > 0, (2.137) x  we have that c − c∗ , x − c∗ = 1 1 c, (1 − )x − x (1 − ) x x 1 1 ≤ c x (1 − ) − x (1 − ) x x  (2.138) (2.139) (2.140)  by the Cauchy-Schwarz inequality. So this = ( x − 1)( c − 1) ≤ 0,  (2.141)  as x − 1 > 0 and c − 1 ≤ 0. Example 2.47 (The Product Space). We show that the projection onto the product set, C := C1 × C2 × . . . × Cn is: PC (x1 , x2 , . . . , xn ) = (PC1 (x1 ), PC2 (x2 ), . . . , PCn (xn )).  (2.142)  Proof. Clearly, PCi (xi ) ∈ Ci . Then (PC1 (x1 ), PC2 (x2 ), . . . , PCn (xn )) ∈ C1 × C2 × · · · × Cn = C. Now ∀c ∈ C, let c = (c1 , c2 , . . . , cn ) with ci ∈ Ci . We will show that c − ((PC1 (x1 ), . . . , PCn (xn )), (x1 , x2 , . . . , xn ) − (PC1 (x1 ), . . . , PCn (xn )) ≤ 0 (2.143)  22  2.8. Properties of the Projector Now, (c1 , . . . , cn ) − (PC1 (x1 ), . . . , PCn (xn )), (x1 , . . . , xn ) − (PC1 (x1 ), . . . , PCn (xn ) (2.144) = (c1 − PC1 (x1 ), . . . , cn − PCn (xn )), (x1 − PC1 (x1 ), . . . , xn − PCn (xn )) (2.145) = c1 − PC1 (x1 ), x1 − PC1 (x1 ) + . . . + cn − PCn (xn ), xn − PCn (xn ) ≤ 0, (2.146) because every term above is less than or equal to 0 by the Theorem 2.44. Example 2.48 (The Diagonal Set). We show that the projection onto the Diagonal Set, C := {(x, x, . . . , x) ∈ X n } is: PC (x1 , x2 , . . . , xn ) = (x, x, . . . , x), where x =  1 n  (2.147)  xi .  Proof. C is a closed and convex set. Clearly, (x, . . . , x) ∈ C. Let c ∈ C, say c = (x, x, . . . , x) for some x ∈ X. Then c − (x, . . . , x), (x1 , x2 , . . . , xn ) − (x, x, . . . , x  (2.148)  = (x, x, . . . , x) − (x, . . . 
, x), (x1 , x2 , . . . , xn ) − (x, x, . . . , x  (2.149)  = (x − x, . . . , x − x), (x1 − x, . . . , xn − x)  (2.150)  = x − x, x1 − x + . . . + x − x, xn − x  (2.151)  = x − x, (x1 − x) + . . . + (xn − x)  (2.152)  = x − x, (x1 + . . . + xn ) − nx  (2.153)  = x − x, nx − nx  (2.154)  = x − x, 0  (2.155)  =0  (2.156)  ≤ 0.  (2.157)  The conclusion follows from the Projection theorem.  2.8  Properties of the Projector  Theorem 2.49 (Projections are firmly nonexpansive). Suppose C is a nonempty closed convex set in the Hilbert space X. Then the projection PC onto C is firmly nonexpansive: PC x − PC y  2  ≤ x − y, PC x − PC y ,  ∀x, y ∈ C  (2.158) 23  2.9. The Reflector for all x, y ∈ X. Proof. By (2.117), we have that PC x − PC y  2  = PC x − PC y, PC x − PC y ≤ PC x − PC y, PC x − PC y + x − PC x, PC x − PC y (2.159) = x − PC y, PC x − PC y  (2.160)  ≤ x − PC y, PC x − PC y + PC y − y, PC x − PC y  (2.161)  = x − y, PC x − PC y . So PC is firmly nonexpansive as desired. Corollary 2.50. Suppose C is a nonempty closed convex set in the Hilbert space X. Then PC is nonexpansive. Proof. PC is firmly nonexpansive and therefore by Corollary 2.41 nonexpansive. Remark 2.51. Since PC is nonexpansive, it is 1-Lipschitz and hence continuous. Remark 2.52. Since projection operators are nonexpansive, the composition of projection operators is nonexpansive.  2.9  The Reflector  Definition 2.53. Let B be a subset of X. The reflector is defined as RB := 2PB − Id. Proposition 2.54. Let B ⊆ X be a nonempty closed convex set. Then the reflector RB is nonexpansive. Proof. Recall Theorem 2.39 and Theorem 2.49, so PB = RB + Id is firmly nonexpansive. 2  2PB − Id + Id = 2  This means that RB is nonexpansive if and only if PB is firmly nonexpansive. The good news is that all projections are firmly nonexpansive by Theorem 2.49. Proposition 2.55. The composition of reflectors is nonexpansive. 24  2.10. Miscellaneous Proof. 
We proved that the composition of nonexpansive operators is nonexpansive Theorem 2.37. Since each reflector is nonexpansive by Proposition 2.54, we are done.  2.10  Miscellaneous  Theorem 2.56. We claim if xn → x, then (xn ) is bounded. Proof. For ε = 1, ∃N such that ∀n ≥ N we have xn − x < ε = 1,  (2.162)  by the definition of a limit. By the triangle inequality, xn − x ≤ xn − x ≤ 1  ∀n ≥ N.  (2.163)  Thus, xn ≤ x + 1  ∀n ≥ N.  (2.164)  Next we define M to be max{ x + 1, x1 , x2 , ..., xN −1 }. Thus xn ≤ M,  ∀n.  (2.165)  Theorem 2.57. We claim if (xn ) is bounded, and B is a nonempty closed convex set, then (PB (xn )) is also bounded. Proof. Fix y, since PB is firmly nonexpansive by Theorem 2.49 and thus nonexpansive by Theorem 2.41 we have, PB (xn ) − PB (y) ≤ xn − y ≤ xn + y .  (2.166)  Since (xn ) is bounded we have, xn + y ≤ M + y  (2.167)  for some M > 0. By the triangle inequality, PB (xn ) − PB (y) ≤ PB (xn ) − PB (y) ≤ M + y .  (2.168)  Therefore, P B(xn ) ≤ PB y + M + y and (PB (xn )) is bounded. Definition 2.58 (Weakly convergent sequence). We say xn xn , z −→ x, z ∀z ∈ X.  x ⇐⇒  Definition 2.59 (Weak cluster point). A vector z is a weak cluster point of a sequence (xn ) if there is a subsequence (xkn ) of (xn ) such that xkn z. 25  Chapter 3  Projection Methods In this chapter we show that Elser’s Difference Map algorithm is equivalent to the Douglas-Rachford and Fienup Hybrid Input-Output (HIO) algorithms. Throughout this chapter A and B are nonempty fixed convex and closed sets in X a real Hilbert space, and PA ,PB ,RA , and RB are the associated projections and reflectors onto the sets A and B.  3.1  Elser’s Difference Map  Definition 3.1. Elser’s Difference Map [ERT07, page 419] is defined by D(x) = x + β[PA ◦ gB (x) − PB ◦ fA (x)]  (3.1)  fA (x) = PA (x) − (PA (x) − x)/β  (3.2)  gB (x) = PB (x) + (PB (x) − x)/β,  (3.3)  where  and  where β is a real parameter that is not equal to zero. Proposition 3.2. 
Elser’s Difference Map, for β = 1, can be written in the form D = PA (2PB − Id) + (Id −PB ). (3.4) This is the same as Fienup’s HIO [Fie82, page 2763] and the DouglasRachford algorithm [LM79]. Proof. The Difference Map, when β = 1, becomes D(x) = x + [PA ◦ gB (x) − PB ◦ fA (x)]  (3.5)  fA (x) = PA (x) − (PA (x) − x) = x  (3.6)  where  26  3.2. Douglas-Rachford and Fienup HIO and gB (x) = PB (x) + (PB (x) − x) = 2PB (x) − x.  (3.7)  We now plug into (3.5) giving: D(x) = x + PA (2PB (x) − x) − PB (x).  (3.8)  D = PA (2PB − Id) + (Id −PB ),  (3.9)  Hence as desired. Lemma 3.3. If A is a closed subspace, in such a case PA is linear by [Deu01, 5.13(1) page 79], we can write D = PA (2PB − Id) + (Id −PB ) in a more symmetric fashion: D = PA PB + (Id −PA )(Id −PB ).  (3.10)  Proof. Indeed, D = PA (2PB − Id) + (Id −PB )  (3.11)  = 2PA PB − PA + Id −PB  (3.12)  (2PA PB = PA PB + PA PB )  = PA PB + PA PB − PA + Id −PB  (3.13)  = PA PB + PA (PB − Id) + (Id −PB )  (3.14)  = PA PB − PA (Id −PB ) + (Id −PB )  (3.15)  = PA PB + (−PA + Id)(Id −PB )  (3.16)  = PA PB + (Id −PA )(Id −PB )  (3.17)  as desired.  3.2  Douglas-Rachford and Fienup HIO  Proposition 3.4. The Fienup’s Hybrid Input-Output Algorithm and DouglasRachford Algorithm is described by xn+1 = PA (2PB − Id) + (Id −PB ) (xn ).  (3.18)  For brevity, we set T = PA (2PB − Id) + (Id −PB ).  (3.19)  Then 1 T = (RA RB + Id), 2 where RA = 2PA − Id and RB = 2PB − Id.  (3.20)  27  3.3. Fixed Point Results Proof. Indeed, T = PA (2PB − Id) + (Id −PB )  (3.21)  = (PA RB + Id −PB ) (2PB − Id replaced with RB ) 1 1 = (2PA RB + 2 Id −2PB ) (factor out ) 2 2 1 = (2PA RB + Id + Id −2PB ) (write 2 Id as Id + Id = 2 Id) 2 1 = (2PA RB + Id −RB ) (2PB − Id replaced with RB ) 2 1 = (2PA RB − RB + Id) (rearrange) 2 1 = ((2PA − Id)RB + Id) (factor out RB ) 2 1 = (RA RB + Id) 2  (3.22) (3.23) (3.24) (3.25) (3.26) (3.27) (3.28)  as desired.  3.3  Fixed Point Results  Theorem 3.5. Set T = PA (2PB − Id) + (Id −PB ). 
Then PB (Fix T ) = A ∩ B ⊆ Fix T  (3.29)  Proof. Fix x ∈ X, and write as x = b + q, where b = PB (x) and hence q = x − b. Then x = T (x)  definition of a fixed point  (3.30)  ⇐⇒ x = PA (2PB − Id)(x) + (Id −PB )(x)  (3.31)  ⇐⇒ b + q = PA (2b − (b + q)) + b + q − b substitution  (3.32)  ⇐⇒ b + q = PA (2b − b − q) + q  (3.33)  ⇐⇒ b = PA (b − q)  simplify  recall b = PB (x)  (3.34)  ⇐⇒ PB (x) = PA (PB (x) − (x − PB (x))) ⇐⇒ PB (x) = PA (PB (x) − x + PB (x)) ⇐⇒ PB (x) = PA (2PB (x) − x) ⇐⇒ PB (x) = PA (RB (x)).  substitution simplify  simplify  (3.35) (3.36) (3.37) (3.38)  All together this means that x = T (x) ⇐⇒ PB (x) = PA (RB (x)), PB (x) ∈ B in which case PB (x) ∈ A. So when x ∈ Fix T , we have PB (x) ∈ B ∩ A, and 28  3.4. The Main Result since x = T (x) and x was an arbitrary fixed point we have PB (Fix T ) ⊆ A ∩ B.  (3.39)  We now prove that A ∩ B ⊆ Fix T . ∀x ∈ A ∩ B we have PA (x) = x and PB (x) = x so T (x) = PA (2PB − Id)(x) + (Id −PB )(x)  expand  (3.40)  = PA (2PB (x) − x) + (x − PB (x))  (3.41)  = PA (2x − x) + (x − x)  (3.42)  = PA (x)  (3.43)  = x.  (3.44)  Hence A ∩ B ⊆ Fix T.  (3.45)  Now apply PB to (3.45) to deduce that A ∩ B = PB (A ∩ B) ⊆ PB (Fix T ).  (3.46)  Combining (3.39) and (3.46), we see that PB (Fix T ) = A ∩ B.  3.4  (3.47)  The Main Result  We next present the main convergence result. We show what happens in a finite dimensional Hilbert space and then give an example about why the proof fails when we try to extend to infinite dimensions. Throughout the remainder of this chapter, we set (see Proposition 3.4) 1 T = (RA RB + Id) = PA (2PB − Id) + (Id −PB ). 2  (3.48)  Fact 3.6 (Opial). [Opi67] Suppose that T is a firmly nonexpansive mapping from X to X with Fix T = ∅. Then for every x ∈ X, the sequence (T n x) weakly converges to some point in Fix T .  29  3.4. The Main Result Theorem 3.7. Suppose that A ∩ B = ∅. 
Then T is firmly nonexpansive and the sequence (xn ) generated by 1 xn+1 = (RA RB + Id)(xn ) = T (xn ) 2  (3.49)  converges weakly to some point x ∈ Fix T , and PB (x) ∈ A ∩ B. Moreover, the sequence (PB (xn )) is bounded and every weak cluster point of (PB (xn )) lies in A ∩ B. Proof. Fix T contains A ∩ B = ∅ by Theorem 3.5. The sequence (xn ) = (T n (x0 ))  (3.50)  converges weakly to some fixed point x of T by Fact 3.6. Since (xn ) is bounded, it follows (PB (xn )) is bounded by Theorem 2.57. RA RB is nonexpansive by Proposition 2.54 and Theorem 2.37. This implies that T = Id +R2A RB is firmly nonexpansive by Theorem 2.39. Thus, T (x)−T (y)  2  + (Id −T )(x)−(Id −T )(y)  2  ≤ x−y  2  ∀x, y ∈ X. (3.51)  Pick x = xn and y = x a fixed point. Then we get xn − x  2  2  ≥ T (xn ) − T (x)  + xn − T (xn ) − x + T (x)  = xn+1 − x  2  + xn − xn+1 − x + x  = xn+1 − x  2  + xn − xn+1  So  2  2  (3.52) (3.53)  2  (3.54)  m  m  ( xn − x  2  − xn+1 − x 2 ) ≥  n=1  xn − xn+1 2 .  (3.55)  n=1  The summation on the left is x1 − x  2  − xm+1 − x 2 ,  which is less than or equal to x1 − x  2  (3.56)  < +∞. Hence  m  ∀m  xn − xn+1  2  ≤ x1 − x  2  < +∞  (3.57)  n=1  since x1 − x  2  is a constant we have ∞  xn − xn+1  2  ≤ x1 − x  2  < +∞.  (3.58)  n=1  30  3.4. The Main Result Hence, xn − xn+1  2  → 0, thus, xn − xn+1 → 0 i.e., xn − xn+1 → 0.  (3.59)  xn − xn+1 = xn − PA (2PB − Id)xn − (Id −PB )xn  (3.60)  = PB xn − PA (2PB − Id)xn → 0.  (3.61)  It follows that  In infinite dimensions, it is a fact that if (PB xn )n∈N is bounded then it has weak cluster points by [Deu01, page 205, 9.12] and Definition 2.59, say PB xkn  z.  (3.62)  Now (PB xn )n∈N lies in B and B is weakly closed by [Rud91, page 66, Theorem 3.12 corollaries (a)] (B is weakly closed since B is closed and convex) so its weak limit z ∈ B. We have PB xn = PB xn − PA (2PB − Id)xn + PA (2PB − Id)xn ,  (3.63)  recall that PB xn − PA (2PB − Id)xn → 0. Thus PA (2PB − Id)xkn  z.  
(3.64)  Since A is also weakly closed we have that z ∈ A. Altogether we have z ∈ A ∩ B. The statement of Theorem 3.7 is even simpler in finite dimensions. Theorem 3.8. Suppose that X is finite dimensional and A ∩ B = ∅. Then the sequence (xn ) generated by xn+1 = T xn ,  1 where T := (RA RB + Id) 2  (3.65)  converges to some point x ∈ Fix T , and PB (x) ∈ A ∩ B. Moreover, PB (xn ) → PB (x) ∈ A ∩ B.  (3.66)  Proof. Note that T is a firmly nonexpansive operator, because it is a projection by Theorem 2.49. So xn → x = T x =⇒ PB xn → PB x  (3.67)  as PB is continuous. We know PB x ∈ PB (Fix T ) = A ∩ B by Theorem 3.5.  31  3.4. The Main Result Remark 3.9. The proof of the Theorem 3.8 does not work in infinite dimensions because PB may fail to be weakly continuous. Zarantonello [Zar71, page 245], showed that if xn Let B be the unit-ball in  2.  x  PB xn  PB x.  (3.68)  Then let  xn = e 1 + e n  e1 = PB e1 as e1 ∈ B.  (3.69)  It is a fact that if we are outside the unit ball the closed-form projection is x by Example 2.46. So if n ≥ 2, then x PB xn =  e1 + en e1 + en = √ e1 + en 2  e √1 . 2  (3.70)  So on one hand we have xn  e1 ,  (3.71)  but on the other we have PB xn  e √1 = e1 = PB e1 . 2  (3.72)  Altogether, PB is not weakly continuous.  32  Chapter 4  Monotone Operators Throughout this chapter, X is a finite-dimensional real Hilbert space with inner product ·, · and induced norm · . C and D are two closed convex sets in X such that C ∩ D = ∅. Recall that if A : X → X is linear, then A is automatically continuous (see, e.g., [Kre78, Page 94]).  4.1  Definitions  We first introduce some fundamental definitions. The arrow ”→” is used for a single-valued mapping, whereas ”⇒” denotes a set-valued mapping, i.e. X → P(X), the power set of X. Definition 4.1. Let T : X ⇒ X. We say T is monotone if x∗ − y ∗ , x − y ≥ 0,  (4.1)  whenever x∗ ∈ T (x), y ∗ ∈ T (y). Proposition 4.2. Let A : X → X be linear. Then A is monotone, if and only if, x, Ax ≥ 0  ∀x ∈ X.  (4.2)  Proof. 
“⇒” For every x ∈ X, by monotonicity and linearity of A we have Ax, x = Ax − A0, x − 0 ≥ 0.  (4.3)  (4.3) holds by A0 = 0 (since A is linear). “⇐” For every x, y ∈ X, since x, Ax ≥ 0, we have Ax − Ay, x − y = A(x − y), x − y ≥ 0.  (4.4)  33  4.1. Definitions Definition 4.3 (Adjoint operator). Let A : X → X be continuous and linear. We say A∗ is the adjoint operator of A if x, Ay = A∗ x, y  ∀x, y ∈ X.  (4.5)  Remark 4.4. We know A∗ exists and is unique, see [Kre78, Theorem 3.9-2]. Remark 4.5. We know A∗∗ = A, see [Deu01, Lemma 8.29 (3)]. Remark 4.6. If A ∈ Rn×n then A∗ = AT , see [Mey00, Page 84]. Definition 4.7 (Symmetric). Let A : X → X be linear. We say that A is symmetric if A∗ = A. Definition 4.8 (Symmetric Part). Let A : X → X be linear. We say A+ is the symmetric part of A and is defined as follows: A+ =  A + A∗ . 2  (4.6)  Proposition 4.9. Let A : X → X be linear. Then A+ is symmetric. ∗  ∗ Proof. A∗+ = ( A+A 2 ) =  A∗ +A∗∗ 2  =  A∗ +A 2  = A+ .  Definition 4.10 (Antisymmetric). Let A : X → X be linear. We say that A is antisymmetric if A∗ = −A. Definition 4.11 (Antisymmetric Part). Let A : X → X be linear. We say A0 is the antisymmetric part of A and is defined as follows: A0 =  A − A∗ . 2  (4.7)  Proposition 4.12. Let A : X → X be linear. Then A0 is antisymmetric. ∗  ∗ Proof. A∗0 = ( A−A 2 ) =  A∗ −A∗∗ 2  =  A∗ −A 2  ∗  = −( A−A 2 ) = −A0 .  Lemma 4.13. Let A : X → X be linear. Then A is monotone ⇔ A∗ is monotone. Proof. By Proposition 4.2 we have that, A is monotone ⇐⇒ x, Ax ≥ 0  ∀x  (4.8)  ⇐⇒ Ax, x ≥ 0  ∀x  (4.9)  ∗  ⇐⇒ x, A x ≥ 0 ∗  ∀x  ⇐⇒ A is monotone.  (4.10) (4.11)  34  4.1. Definitions Definition 4.14. Let A : X → X be linear. We set qA : X → R : x →  1 x, Ax . 2  (4.12)  Proposition 4.15. Let A : X → X be linear. Then ∇qA = A+ . Proof. 
For x ∈ X and h ∈ X\{0}, we have, qA (x + h) − qA (x) − A+ x, h h qA (x) + qA (h) + 12 x, Ah + 12 Ax, h − qA (x) − A+ x, h = h 1 1 ∗ qA (h) + 2 A x, h + 2 Ax, h − A+ x, h = h qA (h) + A+ x, h − A+ x, h = h qA (h) = h 1 h, Ah . = 2 h  (4.13) (4.14) (4.15) (4.16) (4.17) (4.18)  Since A is continuous, we have, for M = A , Ah ≤ M h ,  ∀h ∈ X.  (4.19)  Then, by Cauchy-Schwarz we get 0≤ = ≤  qA (x + h) − qA (x) − A+ x, h h 1 2  (4.20)  h, Ah h  1 2  h ·M · h 1 = M h →0 h 2  (4.21) as h → 0+ .  (4.22)  By the Squeeze theorem we get, lim h  →0+  qA (x + h) − qA (x) − A+ x, h =0 h  (4.23)  i.e., ∇qA = A+ . So we have proven that qA is Fr´echet differentiable at x and ∇qA = A+ . 35  4.1. Definitions Proposition 4.16. Let A : X → X be linear. Then A is monotone if and only if qA ≥ 0. Proof. See Proposition 4.2. Proposition 4.17. Let A : X → X be linear. Then qA = qA+ . Proof. Let x ∈ X. Then, 2qA+ (x) = A+ x, x = =  A∗ x 2 ,x  +  A∗ +A 2 x, x Ax x 2 , x = 2 , Ax  +  Ax 2 ,x  = Ax, x = 2qA (x).  Proposition 4.18. Let A : X → X be linear. Then A is monotone if and only if A+ is monotone. Proof. By Proposition 4.2 we have that A is monotone ⇐⇒ qA ≥ 0 ⇐⇒ qA+ ≥ 0  by Proposition 4.16 by Proposition 4.17  ⇐⇒ A+ is monotone by Proposition 4.16 .  (4.24) (4.25) (4.26)  Definition 4.19. Let T : X ⇒ X be monotone. We call T maximal monotone if for every (y, y ∗ ) ∈ / gra T there exists (x, x∗ ) ∈ gra T with x − y, x∗ − y ∗ < 0. Fact 4.20. Let A : X ⇒ X be maximal monotone and (x0 , x∗0 ) ∈ X × X. Let A : X ⇒ X such that gra A = gra A − (x0 , x∗0 ) ( i.e., a rigid translation of gra A). Then A is maximal monotone. Proof. Follows directly from Definition 4.19. Lemma 4.21. Every continuous monotone operator A : X → X is maximal monotone. Proof. [RW98, Example 12.7].  36  4.1. Definitions Definition 4.22 (Subdifferential). Let f : X → ]−∞, +∞] and let x ∈ X be such f (x) < +∞. Then the subdifferential of f at x is ∂f (x) = {s ∈ X | f (y) ≥ f (x) + s, y − x ,  ∀y ∈ X}.  
(4.27)  The elements in ∂f (x) are called subgradients of f at x. See [Roc97, Part V] for basic properties of the subdifferential operator. Fact 4.23. ∂f (x) = ∇f (x) if f is Gateaux differentiable, see [Z˘ al02, Theorem 2.4.4(i)]. Example 4.24. Let X = R and suppose f (x) = |x|. Then,   if x < 0; {−1} ∂f (x) = [−1, +1] if x = 0;   {1} if x > 0.  (4.28)  Proof. Let x < 0 and s ∈ ∂f (x). We have s, y − x ≤ f (y) − f (x) = |y| − |x|.  (4.29)  Let z ∈ X and y = x + tz, where t > 0. By (4.29), s, tz ≤ |x + tz| − |x| = |x + tz| − (−x).  (4.30)  Since x < 0, x + tz < 0 when t is small. Thus by (4.30), s, tz ≤ −(x + tz) − (−x) = −tz  (4.31)  s, z ≤ −z.  (4.32)  so So plug in z = ±1 and we get that s = −1. This shows that when x < 0 the subdifferential is the set {−1}. Now let x = 0 and s ∈ ∂f (x) = ∂f (0). We have s, y − 0 ≤ f (y) − f (0) = |y| ⇐⇒ s, y ≤ |y|  ∀y  (4.33)  ⇐⇒ s, y ≤ |y| ∀y = 0 y ⇐⇒ s, ≤ 1 ∀y = 0 |y| ⇐⇒ max{ s, 1 , s, −1 } ≤ 1  (4.34)  ⇐⇒ max{s, −s } ≤ 1  (4.37)  ⇐⇒ |s| ≤ 1  (4.38)  ⇐⇒ s ∈ [−1, 1].  (4.39)  (4.35) (4.36)  37  4.1. Definitions This shows that when x = 0 the subdifferential is the set [−1, 1]. Finally, let x > 0 and s ∈ ∂f (x). We have s, y − x ≤ f (y) − f (x) = |y| − |x|.  (4.40)  Let z ∈ X and y = x + tz, where t > 0. By (4.40), s, tz ≤ |x + tz| − |x| = |x + tz| − x.  (4.41)  Since x > 0, x + tz > 0 when t is small. Thus by (4.41), s, tz ≤ (x + tz) − x = tz  (4.42)  s, z ≤ z.  (4.43)  so So plug in z = ±1 and we get that s = 1. This shows that when x > 0 the subdifferential is the set {1}. Fact 4.25 (Rockafellar). Let f : X → ]−∞, +∞] be proper lower semicontinuous and convex. Then ∂f is maximal monotone. Proof. See [Sim98, Page 113] or [Roc70]. Definition 4.26 (Positive-semidefinite matrix). The matrix A ∈ Rn×n is positive semidefinitite, written A 0, if A is symmetric, i.e. A = AT and x, Ax ≥ 0  ∀x.  (4.44)  Fact 4.27. In the 2 × 2 case it can be checked that a b b c  0 ⇐⇒ a ≥ 0, c ≥ 0, ac − b2 ≥ 0,  (4.45)  see [Mey00, Page 566]. 
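Fact 4.27 is easy to sanity-check numerically. Below is a minimal Python sketch (the helper names are ours, for illustration only) that compares the determinant criterion with the eigenvalue characterization of positive semidefiniteness for 2 × 2 symmetric matrices; the smallest eigenvalue is computed from the closed-form quadratic formula.

```python
import math

def psd_2x2(a, b, c):
    """Fact 4.27: [a b; b c] is positive semidefinite
    iff a >= 0, c >= 0 and ac - b^2 >= 0."""
    return a >= 0 and c >= 0 and a * c - b * b >= 0

def min_eig_2x2(a, b, c):
    """Smallest eigenvalue of the symmetric matrix [a b; b c]."""
    return ((a + c) - math.sqrt((a - c) ** 2 + 4 * b * b)) / 2

# The determinant test agrees with "smallest eigenvalue is nonnegative"
# on a few sample matrices (including the A and B used later in Section 4.2).
samples = [(2, 1, 2), (2, 0, 1), (1, 2, 1), (-1, 0, 3), (0, 0, 0)]
for a, b, c in samples:
    assert psd_2x2(a, b, c) == (min_eig_2x2(a, b, c) >= -1e-12)
```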
Definition 4.28 (Resolvent). Let M be maximal monotone and JM = (Id +M )−1 . Then JM is called the resolvent of M . Fact 4.29 (Minty). Let T : X → X. Then T is firmly nonexpansive ⇐⇒ T = (Id +M )−1 , where M is maximal monotone, (4.46) see [EB92, Theorem 2]. 38  4.1. Definitions The above fact can also be found in [Min62]. We can see from the above that we have another way to characterize firmly nonexpansive mappings. Definition 4.30 (Indicator Function). Let S ⊆ X, then the indicator function is defined by x → ιS (x) =  0, ∞,  if x ∈ S; otherwise.  Definition 4.31 (Proximal Mapping). Let T : X → X be firmly nonexpansive, so that T = (Id +M )−1 , where M is maximal monotone, see Fact 4.29. If M = ∂f for some function f that is convex, lower semicontinuous and proper, then T is called a proximal mapping. Remark 4.32 (Projection). Let T : X → X be firmly nonexpansive, say T = (Id +M )−1 , where M is maximal monotone, see Fact 4.29. If M = ∂ιC , the subdifferential of the indicator function, then T = PC . Projection mappings are a subset of proximal mappings and proximal mappings are subsets of firmly nonexpansive mappings. Lemma 4.33. Every proximal mapping is a subdifferential operator. Proof. Let T be a proximal mapping. Then by definition, the sum rule [Z˘ al02, Theorem 2.8.7(iii)], and [Roc97, Theorem 23.8], T = (Id +∂f )−1 1 = (∇ · 2 + ∂f )−1 2 1 = (∂( · 2 + f ))−1 2 1 = ∂( · 2 + f )∗ , 2  (4.47) (4.48) (4.49) (4.50)  where g ∗ (x) = supy∈X ( x, y −g(y)) is the Fenchel conjugate of g, see [Roc97, Chapter 12] for basic properties. This clearly shows that the proximal mapping is a subdifferential. Lemma 4.34. The projection operator PC is the gradient of  1 2  · − 12 d2C .  Proof. We use [RW98, Theorem 2.26] throughout this proof. If we set f to be the indicator function and λ = 1, then by [RW98, Definition 1.22] we get  39  4.1. 
Definitions when x ∈ X, e1 ιC (x) = inf ιg (ω) + ω∈X  = inf  ω∈C  1 ω−x 2  1 ω−x 2  2  2  (4.51) (4.52)  1 = d2C (x), 2  (4.53)  where e1 g(x) = inf ω∈X g(ω) + 21 ω − x 2 is the Moreau envelope of g and P1 g(x) is the corresponding set of minimizers. Again by [RW98, Definition 1.22] we get, P1 ιC (x) = argmin ιC + ω∈X  1 ω−x 2  1 ω−x ω∈C 2 1 = argmin d2C (x) ω∈C 2 = argmin  2  2  = PC (x).  (4.54) (4.55) (4.56) (4.57)  Now by [RW98, Theorem 2.26b] we have, ∇e1 ιC = Id −PC .  (4.58)  By rearranging, then using (4.53) we have PC = Id −∇e1 ιC 1 = Id −∇( d2C ) 2 1 1 = ∇( · 2 ) − ∇( d2C ) 2 2 1 1 2 2 = ∇( · − dC ), 2 2  (4.59) (4.60) (4.61) (4.62)  which clearly shows that PC is in fact a gradient as desired. Proposition 4.35. Let B : X → X be linear. Then B = ∂g ⇐⇒ B = B ∗ , in which case B = ∇qB and g = qB + c, where c ∈ R. Proof. ”⇐”: B = B ∗ =⇒ B = B+ now by Proposition 4.15 =⇒ B = B+ = ∇qB . ”⇒”: B = ∇g =⇒ ∇2 g = B is symmetric [MN88, Theorem 4 Page 120]. 40  4.1. Definitions Lemma 4.36. Let T be linear and a proximal mapping. Then T = T ∗ . Proof. By assumption, there exists a lower semicontinuous convex function f such that T = (Id +∂f )−1 . (4.63) By Proposition 4.35, and the fact that Id is symmetric, Id = ∇qId .  (4.64)  T = (∇qId + ∂f )−1 = (∂(f + qId ))−1  (4.65)  By (4.63), by [Roc97, Theorem 23.8]. Let g = (f + qId )∗  (4.66)  ∂g = (∂(f + qId ))−1  (4.67)  then by [Roc97, Corollary 23.5.2]. By (4.65), we have T = ∂g.  (4.68)  Then by Proposition 4.35, T = T ∗ . Below is an alternative proof provided by Liangjin Yao. We first note that if T = (Id +∂f )−1 (4.69) then T −1 = Id +∂f  (4.70)  T −1 − Id = ∂f.  (4.71)  and Since T −1 and Id are linear relations, so is T −1 − Id and hence gra(T −1 − Id) is a linear subspace. Then (∂f )∗ = ∂f (4.72) by [BWY09, Lemma 4.1]. Now T = (Id +∂f )−1  (4.73)  41  4.1. 
Definitions and T ∗ = [(Id +∂f )−1 ]∗  (4.74)  ∗ −1  = [(Id +∂f ) ] ∗  by [Cro98, Proposition III 1.3b]  ∗ −1  = [(Id) + (∂f ) ] −1  = (Id +∂f )  by [Bor83, Theorem 7.4]  by (4.72)  (4.75) (4.76) (4.77)  = T.  (4.78)  Lemma 4.37. Let P =  a b c d  such that P is a proximal mapping. Then  b = c. Proof. By Lemma 4.36, P = P ∗ , hence b = c. Lemma 4.38. (See also [RW98, Example 12.7].) Let A : X → X be continuous and monotone. Then A is maximal monotone. Proof. Let (x, x∗ ) ∈ X × X be such that x − y, x∗ − Ay ≥ 0,  ∀y ∈ X.  (4.79)  It suffices to show that x∗ = Ax. Let z ∈ X and t > 0. Let y = x + tz ∈ X. Then x − (x + tz), x∗ − A(x + tz) ≥ 0  (4.80)  ∗  ⇐⇒ −tz, x − A(x + tz) ≥ 0 ∗  ⇐⇒ A(x + tz) − x , tz ≥ 0  (4.81) divide both sides by t  ∗  ⇐⇒ A(x + tz) − x , z ≥ 0.  (4.82) (4.83)  Let t → 0+ . Since A is continuous, Ax − x∗ , z ≥ 0,  ∀z ∈ X.  (4.84)  Set z = x∗ − Ax. Then (4.84) yields, 0 ≥ Ax − x∗  2  ⇐⇒ Ax − x∗ , x∗ − Ax ≥ 0.  (4.85)  So, Ax − x∗ = 0, i.e., x∗ = Ax. Corollary 4.39. Let A : X → X be linear. Then A is maximal monotone if and only if ∀x x, Ax ≥ 0. Proof. We have ∀x x, Ax ≥ 0 ⇐⇒ A is monotone by Proposition 4.2. Now since X is finite-dimensional, this implies that A is continuous by [Kre78, Page 94]. Therefore, since A monotone and continuous, it is maximal monotone by Lemma 4.38. 42  4.2. Eckstein’s Observation  4.2  Eckstein’s Observation  Recall that RC and RD are the reflectors of given subsets C and D. Definition 4.40. Following Eckstein in his thesis, for any maximal monotone operators A and B, and λ > 0 we define Gλ,A,B = JλA (2JλB − Id) + (Id −JλB ).  (4.86)  Remark 4.41. It is important to note that JλA = (Id +λA)−1 is the resolvent of λA by Definition 4.28 as A is maximal monotone, Fact 4.42. (See [Eck89, Corollary 4.21].) Let G−1 λ,A,B = Sλ,A,B + Id, then Gλ,A,B is firmly nonexpansive. Definition 4.43 (Splitting Operator). (See [Eck89, Page 137].) Sλ,A,B = G−1 λ,A,B − Id .  
(4.87)  In [Eck89, Corollary 4.21] we see a fairly complicated proof of Fact 4.42. However, just like we proved that T is firmly nonexpansive in Theorem 3.7, it may be proved that Gλ,A,B is firmly nonexpansive by replacing projections with appropriate resolvents. See also [Com04] and [EF98] for further information on the Douglas-Rachford and related methods involving (firmly) nonexpansive operators. Eckstein was the first to observe the following [Eck89, Proposition 4.10]. Proposition 4.44. There exist maximal monotone operators A and B on R2 and a constant λ > 0 such that A and B are both subdifferentials of convex functions, but Sλ,A,B is not the subdifferential of any convex function. Proof. Let  By Fact 4.27, A and B A = ∂f,  A=  2 1 1 2  (4.88)  B=  2 0 0 1  (4.89)  1 λ= . 3 0. Then, by Proposition 4.35, 1 where f (x) = xT 2  2 1 x 1 2  (4.90)  (4.91) 43  4.2. Eckstein’s Observation  B = ∂g,  1 where g(x) = xT 2  2 0 x. 0 1  (4.92)  −1 8 5 8  (4.93)  By direct calculation, JλA = (Id +λA)  −1  =  5 8 −1 8  JλB = (Id +λB)−1 =  3 5  0  0  3 4  .  (4.94)  Now Eckstein writes Gλ,A,B = JλA (2JλB − Id) + (Id −JλB ) = and Sλ,A,B = G−1 λ,A,B − Id =  28 27 −8 27  −5 27 13 27  1 2 1 10  .  1 16 11 16  (4.95)  (4.96)  Since Gλ,A,B is not symmetric, it cannot be a proximal mapping: indeed, G is firmly nonexpansive by Fact 4.42 and if G were a proximal map, it would by symmetric by Lemma 4.37, but it is not. Eckstein made a numerical error in (4.95) and as a result (4.96). The matrices in fact are Gλ,A,B = JλA (2JλB − Id) + (Id −JλB ) = and Sλ,A,B = G−1 λ,A,B − Id =  43 47 4 47  −1 16 9 16  21 40 −1 40 10 47 37 47  ,  (4.97)  (4.98)  however, Eckstein’s conclusion remains valid (see appendix for the calculation details). In Example 4.46, we provide an example which isn’t prone to numerical errors as it is much simpler. Lemma 4.45. Let T : X → X be linear. If T =  Id +RD RC 2  (4.99)  then, RD RC is symmetric ⇐⇒ T is symmetric.  (4.100) 44  4.2. 
Proof.

RD RC is symmetric
⇐⇒ (RD RC)* = RD RC  (4.101)
⇐⇒ Id + (RD RC)* = Id + RD RC  (4.102)
⇐⇒ (Id + RD RC)* = Id + RD RC  (4.103)
⇐⇒ (Id + RD RC)*/2 = (Id + RD RC)/2  (4.104)
⇐⇒ T* = T.  (4.105)

Example 4.46. Set

T = (Id + RD RC)/2.  (4.106)

Then we have the implications: if T is a proximal mapping, then T is symmetric ⇐⇒ RD RC is symmetric, by Lemma 4.36 and Lemma 4.45. Consider

C = {(x, 0) | x ∈ R}, the x-axis in R²,  (4.107)
D = {(x, x) | x ∈ R}, the diagonal in R².  (4.108)

We have (writing [a b; c d] for a 2 × 2 matrix listed row by row)

PC = [1 0; 0 0],  PD = [1/2 1/2; 1/2 1/2],  Id = [1 0; 0 1].  (4.109)

Now the reflectors onto the sets C and D are

RC = 2PC − Id = [1 0; 0 −1]  (4.110)

and

RD = 2PD − Id = [0 1; 1 0].  (4.111)

Furthermore,

RD RC = [0 1; 1 0] [1 0; 0 −1] = [0 −1; 1 0],  (4.112)

which is clearly not symmetric! Therefore, T is a firmly nonexpansive mapping that is not a proximal mapping even though it was constructed from two proximal mappings. Now

T = (Id + RD RC)/2 = [1/2 −1/2; 1/2 1/2]  (4.113)

and

T = (Id + M)^{-1},  (4.114)

where M is maximal monotone; hence

M = T^{-1} − Id.  (4.115)

We calculate T^{-1} directly from T:

T^{-1} = [1 1; −1 1].  (4.116)

And finally,

M = T^{-1} − Id = [0 1; −1 0].  (4.117)

This is clearly not symmetric, and hence it does not come from a subdifferential operator, by Proposition 4.35 applied to M. Eckstein pointed out in his thesis that the operator T in the Douglas-Rachford method (also known as the Difference Map or the Lions-Mercier algorithm) is not necessarily a proximal map even when it is constructed from two proximal mappings. (However, his example is more complicated, and contains a numerical error, compared with the simpler example we just presented.)
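The 2 × 2 computations in Example 4.46 are small enough to verify directly; here is a minimal Python sketch (written with plain nested lists so it is self-contained) that builds the reflectors from the projections, forms T = (Id + RD RC)/2, and recovers the antisymmetric operator M = T⁻¹ − Id.

```python
def matmul2(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I2 = [[1.0, 0.0], [0.0, 1.0]]
P_C = [[1.0, 0.0], [0.0, 0.0]]   # projection onto the x-axis C
P_D = [[0.5, 0.5], [0.5, 0.5]]   # projection onto the diagonal D

# Reflectors R = 2P - Id.
R_C = [[2 * P_C[i][j] - I2[i][j] for j in range(2)] for i in range(2)]
R_D = [[2 * P_D[i][j] - I2[i][j] for j in range(2)] for i in range(2)]

RDRC = matmul2(R_D, R_C)
assert RDRC == [[0.0, -1.0], [1.0, 0.0]]   # rotation by 90 degrees: not symmetric

T = [[(I2[i][j] + RDRC[i][j]) / 2 for j in range(2)] for i in range(2)]
assert T == [[0.5, -0.5], [0.5, 0.5]]

# M = T^{-1} - Id is antisymmetric, so it is not a subdifferential operator.
det = T[0][0] * T[1][1] - T[0][1] * T[1][0]
T_inv = [[T[1][1] / det, -T[0][1] / det], [-T[1][0] / det, T[0][0] / det]]
M = [[T_inv[i][j] - I2[i][j] for j in range(2)] for i in range(2)]
assert M == [[0.0, 1.0], [-1.0, 0.0]]
```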
This shows that one truly needs monotone operator theory to understand the Douglas-Rachford iteration (equivalently, the Difference Map), even when starting from projection operators, since the governing operator need not be a proximal mapping (the underlying monotone operator is not a subdifferential operator).

Chapter 5

8 Queens

The eight queens puzzle asks for a placement of eight chess queens on an 8 × 8 chessboard such that no queen can capture another using the standard queen's moves; equivalently, a solution requires that no two queens share the same row, column, or diagonal. The eight queens puzzle is an instance of the more general n queens puzzle of placing n queens on an n × n chessboard, where solutions exist only for n = 1 or n ≥ 4. This is a new application of Elser's Difference Map. From a complexity point of view, the n queens puzzle is easy: in fact, it can be solved in linear time; see [AY89] for details. Our interest in it stems from a modeling perspective; it will make the modeling of Sudoku in the next chapter easier to understand.

5.1  Queen's Role in Chess

The queen can be moved any number of unoccupied squares in a straight line vertically, horizontally, or diagonally, thus combining the moves of the rook and the bishop. The queen captures by occupying the square on which an opponent's piece sits; see Figure 5.1 (from [Que]).

5.2  8 Queens History

The puzzle was originally proposed in 1850 by Franz Nauck, who also extended it to the n queens problem (on an n × n chessboard, how can n queens be placed?) [RBC74, page 166]. In 1874, S. Günther proposed a method of finding solutions using determinants, and J. W. L. Glaisher refined this approach [RBC74, pages 166-167]. The puzzle even appeared in the popular early 1990s computer game The 7th Guest.
Figure 5.1: Queen moves

5.3  Solutions to 8 Queens

The eight queens puzzle has 92 distinct solutions. If solutions that differ only by symmetry operations (rotations and reflections) of the board are counted as one, the puzzle has 12 unique (or fundamental) solutions [RBC74, page 177].

[Figure: the 12 unique solutions to the 8 queens problem, shown as chessboard diagrams.]

5.4  Modeling 8 Queens

We begin by thinking of the 8 queens as an 8 × 8 binary matrix. We represent a queen on a square with a 1 and a blank square with a 0.
Our underlying Hilbert space is X = R^{8×8}, or X = R^64.

5.5 Row Constraints

Each row can have only one queen in it: in each row we allow exactly one 1, and the rest of the entries are zeros. We call this constraint C1,

C1 = {x ∈ R^{8×8} | every row of x is a standard unit vector}.  (5.1)

The row constraint ensures that each row has only one queen in it.

[Diagram: a board with a queen on every square of column a, which satisfies the row constraint.]

5.6 Column Constraints

Each column can have only one queen in it: in each column we allow exactly one 1, and the rest of the entries are zeros. We call this constraint C2,

C2 = {x ∈ R^{8×8} | every column of x is a standard unit vector}.  (5.2)

The column constraint ensures that each column has only one queen in it.

[Diagram: a board with a queen on every square of row 8, which satisfies the column constraint.]

5.7 Positive Slope Constraints

Every positively sloped diagonal must have at most one queen in it: in each positively sloped diagonal we allow at most one 1. We call this constraint C3. We can think of these diagonals as bishop moves. There are 15 positively sloped diagonals.

C3 = {x ∈ R^{8×8} | every positive diagonal of x is a unit vector or the 0 vector}.  (5.3)

The positive slope constraint ensures that each positive diagonal has at most one queen in it.

[Diagram: a board with at most one queen on each positively sloped diagonal, which satisfies C3.]

5.8 Negative Slope Constraints

As with the positive slope constraint, each negatively sloped diagonal must have at most one queen in it; again we allow at most one 1 per diagonal. We call this constraint C4. There are 15 negatively sloped diagonals.

C4 = {x ∈ R^{8×8} | every negative diagonal of x is a unit vector or the 0 vector}.  (5.4)

The negative slope constraint ensures that each negative diagonal has at most one queen in it.
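A candidate 0-1 board can be tested against all four constraint sets directly. The sketch below is a minimal Python illustration (not part of the thesis; the function name is ours, and 0-based indices are used): a board lies in C1 ∩ C2 ∩ C3 ∩ C4 exactly when every row and column sums to 1 and every diagonal sums to at most 1.

```python
def satisfies_constraints(board):
    """Check an n x n 0-1 board against the row, column, and diagonal
    constraint sets C1-C4 (illustrative helper, 0-based indices)."""
    n = len(board)
    rows_ok = all(sum(board[i]) == 1 for i in range(n))
    cols_ok = all(sum(board[i][j] for i in range(n)) == 1 for j in range(n))
    # one diagonal family: cells with constant i + j (2n - 1 diagonals)
    pos_ok = all(
        sum(board[i][s - i] for i in range(n) if 0 <= s - i < n) <= 1
        for s in range(2 * n - 1))
    # the other family: cells with constant i - j
    neg_ok = all(
        sum(board[i][i - d] for i in range(n) if 0 <= i - d < n) <= 1
        for d in range(-(n - 1), n))
    return rows_ok and cols_ok and pos_ok and neg_ok
```

This is exactly the test that drives the monitoring of the iterates below: a rounded iterate is a solution if and only if it passes all four checks.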
[Diagram: a board with at most one queen on each negatively sloped diagonal, which satisfies C4.]

5.9 Projection onto the Nearest Unit Vector

Denote the standard unit vectors in R^n by e_i. Let C = {e_1, . . . , e_n} be a very nonconvex set in X. Then

PC x = {e_i | x_i = max{x_1, . . . , x_n}}.  (5.5)

Proof. The set C has a finite number of elements, so PC x ≠ ∅. Let i ∈ {1, . . . , n}. Then

e_i ∈ PC x ⇐⇒ ‖x − e_i‖ ≤ ‖x − e_j‖ ∀j  (5.6)
⇐⇒ ‖x − e_i‖² ≤ ‖x − e_j‖² ∀j  (5.7)
⇐⇒ ‖x‖² − 2⟨x, e_i⟩ + ‖e_i‖² ≤ ‖x‖² − 2⟨x, e_j⟩ + ‖e_j‖² ∀j  (5.8)
⇐⇒ −2⟨x, e_i⟩ ≤ −2⟨x, e_j⟩ ∀j  (5.9)
⇐⇒ ⟨x, e_j⟩ ≤ ⟨x, e_i⟩ ∀j  (5.10)
⇐⇒ x_j ≤ x_i ∀j.  (5.11)

This projection is very easy to compute, especially for a computer, as it picks the biggest component, sets it to 1 and the others to 0. All the constraints can be represented with unit vectors. For the diagonals we compare the projection with the zero vector and choose the nearer of the two. In general we pick the biggest component of our vector, set it to 1 and the others to 0; in the case of a tie we pick the lowest component. The constraints C1, C2, C3, and C4 can be applied to the 8 queens matrix as projections. For example, with x the matrix on the left, PC1 x is the matrix on the right:

 .3  .8  .3  .2  .4  .5  .5  .1        0 1 0 0 0 0 0 0
 .1  .1  .1  .1  .1  .1  .1  .1        1 0 0 0 0 0 0 0
 .2  .7  .4  .2  .7  .8  .5  .9        0 0 0 0 0 0 0 1
 .9  .5  .5  .5  .5  .5  .4  .5   →    1 0 0 0 0 0 0 0
 .4  .2   0   0   0   0  .2  .3        1 0 0 0 0 0 0 0
  1  .1  .1  .3  .2  .6  .5  .6        1 0 0 0 0 0 0 0
 -6  .8  .1  .2  .3  .4  .7  .9        0 0 0 0 0 0 0 1
 .3  .4  .7  .4  .2   1  .8   2        0 0 0 0 0 0 0 1

The negative slope constraint is exactly the same as the positive slope constraint, except in the negative direction. If we rotate the chessboard 90 degrees in a counterclockwise direction, apply the positive slope constraint, and rotate it 90 degrees clockwise back, we get the negative slope constraint. In fact, this is how the algorithm is implemented. With x the matrix on the left, PC3 x is the matrix on the right:

 .3   0  .3  .2  .4  .5  .5  .1        1 0 1 0 0 0 0 0
  0  .1  .1  .1  .1  .1  .1  .1        0 0 0 0 0 0 0 0
 .2  .7  .4  .2  .7  .8  .7  .9        0 0 0 0 1 0 1 1
 .9  .5  .5  .5  .5  .5  .4  .5   →    1 1 0 0 0 0 0 0
 .4  .2   0   0   0   0  .2   0        0 0 0 0 0 0 0 0
  1  .1  .1  .3  .2  .6   0  .6        1 0 0 0 0 1 0 0
 -6  .9  .1  .2  .3   0  .7  .9        0 1 0 0 0 0 0 1
 .3  .4  .7  .4   0   1  .8   0        0 0 0 0 0 1 0 0

5.10 Iterations

Recall that Elser's Difference Map with β = 1, Lions-Mercier, and Douglas-Rachford can all be defined as

T = (Id + R_A R_B)/2,  (5.12)

where R_A and R_B are the reflectors onto their respective sets. The solution to our 8 queens problem lies in C1 ∩ C2 ∩ C3 ∩ C4. We have 4 constraints and our algorithm can only handle two. In order to address this we use the product space and the diagonal space previously defined by Definitions 2.25 and 2.27 respectively. We work in the Hilbert space

X × X × X × X, recall X = R^{8×8}.  (5.13)

We also define

C = C1 × C2 × C3 × C4 = {(c1, c2, c3, c4) | ci ∈ Ci} ⊆ X^4.  (5.14)

Now the diagonal is

∆ = {(x1, x2, x3, x4) | x1 = x2 = x3 = x4 ∈ X}.  (5.15)

Then

∆ ∩ C = {(c1, c2, c3, c4) | c1 = c2 = c3 = c4, ci ∈ Ci}  (5.16)

and we get that

x ∈ C1 ∩ C2 ∩ C3 ∩ C4 ⇐⇒ (x, x, x, x) ∈ C ∩ ∆.  (5.17)

Our solutions lie in the intersection of the 4 constraints. The reflectors used in the Difference Map are two times the projection minus the identity. In our model we have two projections: the projection onto C, our constraints, and the projection onto the diagonal. The projection onto the diagonal is described by Example 2.48:

P∆(x1, x2, x3, x4) = (y, y, y, y) where y = (x1 + x2 + x3 + x4)/4.  (5.18)

The projection onto the constraints is described by Example 2.47:

PC = P_{C1×C2×C3×C4}(x1, x2, x3, x4) = (PC1 x1, PC2 x2, PC3 x3, PC4 x4).  (5.19)

Finally, we define the reflectors and use the Difference Map,

R∆ := 2P∆ − Id,  (5.20)
RC := 2PC − Id.  (5.21)

The iterations are described as

x_{n+1} = ((Id + RC R∆)/2) x_n.  (5.22)

We round one component of P∆ x_n to a 0-1 matrix and call the result y_n. We then monitor Σ_{i=1}^{4} ‖y_n − PCi y_n‖², which is 0 if and only if y_n is a solution.

5.11 8 Queens Example

In this example we start by placing all 8 queens in the top left corner, a ridiculous starting position. As we see in Figure 5.3, after 30 iterations the board is more spread out: we have a queen in every row, and most diagonal constraints are satisfied. Notice that there are more than 8 queens on the board at this time. This is because the algorithm looks at the entries in each cell and rounds them: if an entry is greater than or equal to 0.5 we place a queen on that square, otherwise it is left empty. Finally, after 96 iterations, we see in Figure 5.6 that we have a solution.

Figure 5.2: Initial Starting Position
Figure 5.3: 30 iterations
Figure 5.4: 60 iterations
Figure 5.5: 90 iterations
Figure 5.6: Solved in 96 iterations

Chapter 6

Sudoku

We have proved some results about projections and convergence when our sets are convex. There is no theory about convergence when our sets are nonconvex; in fact, convergence is known to fail. The method of alternating projections can fail very quickly if our sets are nonconvex: it gets stuck between two points and bounces back and forth endlessly. Sudoku uses integers from 1 to 9, which form a very nonconvex set. However, when we apply Elser's Difference Map to our very nonconvex Sudoku constraint sets, it converges (very quickly, even), which I find amazing. From a complexity point of view, Sudoku is NP-complete, see [Epp05] for details.
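Both the 8 queens iteration above and the Sudoku model below are driven by the same two ingredients: the nearest-unit-vector projection (5.5) and the product-space Douglas-Rachford step (5.22). A minimal NumPy sketch of these (illustrative only; the names are ours, not the thesis's PHP code, and NumPy is assumed):

```python
import numpy as np

def proj_unit(v):
    # Nearest standard unit vector (5.5); ties go to the lowest index,
    # since np.argmax returns the first maximizer.
    e = np.zeros_like(v, dtype=float)
    e[int(np.argmax(v))] = 1.0
    return e

def proj_unit_or_zero(v):
    # Nearest point in {e_1, ..., e_n} together with the 0 vector,
    # as used for the diagonal constraints.
    e = proj_unit(v)
    return e if np.linalg.norm(v - e) <= np.linalg.norm(v) else np.zeros_like(v, dtype=float)

def dr_step(xs, projections):
    # One Douglas-Rachford step (Id + R_C R_Delta)/2 on the product space.
    # xs: list of m copies of the iterate; projections: the m maps P_Ci.
    y = sum(xs) / len(xs)                                  # P_Delta: average
    r = [2.0 * y - x for x in xs]                          # R_Delta = 2 P_Delta - Id
    rc = [2.0 * P(v) - v for P, v in zip(projections, r)]  # R_C = 2 P_C - Id, componentwise
    return [(x + z) / 2.0 for x, z in zip(xs, rc)]         # average with the identity
```

For 8 queens the four projections act row-wise, column-wise, and diagonal-wise with proj_unit and proj_unit_or_zero; for Sudoku they act on the hallways, meeting rooms, and clues of the cube described next.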
Below are the full details on how to model Sudoku and solve it using Elser's Difference Map.

6.1 What is Sudoku?

Sudoku literally means "number single". It was derived from the Japanese phrase "Suuji wa dokushin ni kagiru", which translates as "the numbers must occur only once" or "the numbers must be single". Sudoku is a logic-based number placement puzzle. The objective of Sudoku is to fill a 9 × 9 grid so that each column, each row, and each of the nine 3 × 3 boxes contains the digits from 1 to 9, only one time each. The game is a type of Latin square game with the additional constraint of the 3 × 3 boxes. Because of this, Leonhard Euler is often incorrectly cited as the father of Sudoku. Sudoku was actually invented by an American architect named Howard Garns in 1979; he called it "Number Place". Sudoku is now published and trademarked by Nikoli of Japan, and was given its name in 1986 [BT09, pages 6-10], [Eri10]. A sample Sudoku puzzle and its solution are given below in Figures 6.1 and 6.2 respectively (pictures from [Wik]).

Figure 6.1: A Sample Sudoku Puzzle
Figure 6.2: A Sample Sudoku Solution

6.2 Modeling Sudoku

We begin by thinking of the Sudoku as a 9 × 9 integer matrix. Think of each cell as an elevator shaft, where the integer in the cell is the floor. If the floor is, say, 3, we have 0, 0, 1, 0, 0, 0, 0, 0, 0 on the floors of the elevator shaft. If we do this for every cell, we add another dimension to our 9 × 9 matrix: we now have a 9 × 9 × 9 cube! We set all the blank entries to 1, but the starting position does not matter. Let Q be a cube in X described by

Q(i, j, k), where {i, j, k} ⊆ {1, . . . , 9}.  (6.1)

So an elevator shaft is Q(i, j, :) (i.e., the third component is free, using Matlab notation). We aim to place unit vectors into each elevator shaft. Our space is X = R^{9×9×9}, or X = R^729. We now describe our constraints, which are similar to those in the 8 queens problem.
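The passage between the 9 × 9 integer grid and the 9 × 9 × 9 cube can be sketched as follows (a hypothetical Python illustration, assuming NumPy; the function names are ours, and blanks are simply left at 0 here even though the text initializes them to 1, since the starting value does not matter):

```python
import numpy as np

def lift(grid):
    # 9x9 integer grid (0 for a blank cell) -> 9x9x9 0-1 cube of
    # "elevator shafts": digit d occupies floor d-1 of shaft (i, j).
    Q = np.zeros((9, 9, 9))
    for i in range(9):
        for j in range(9):
            if grid[i][j]:
                Q[i, j, grid[i][j] - 1] = 1.0
    return Q

def flatten(Q):
    # Cube -> 9x9 integer grid: each shaft becomes the floor of its
    # largest entry (the nearest-unit-vector rounding of the shaft).
    return [[int(np.argmax(Q[i, j, :])) + 1 for j in range(9)]
            for i in range(9)]
```

Flattening a lifted grid recovers the grid on all filled cells, which is exactly the conversion used later to display iterates as Sudoku matrices.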
Figure 6.3: 9 Sudoku Elevator Shafts

6.3 Constraints

6.3.1 North-South Hallways

Each cell of every column must not repeat any of the numbers 1 to 9. Let us call these North-South hallways. In each hallway we allow exactly one 1, and the rest of the entries are zeros. There are 81 North-South hallways, 9 per floor. Therefore, our constraint becomes

C1 = {Q ∈ R^{9×9×9} | Q(i, :, k) is a standard unit vector ∀i, k}.  (6.2)

Figure 6.4: A Sudoku Hallway

6.3.2 East-West Hallways

Each cell of every row must not repeat any of the numbers 1 to 9. Let us call these East-West hallways. In each hallway we allow exactly one 1, and the rest of the entries are zeros. There are 81 East-West hallways, 9 per floor. Therefore, our constraint becomes

C2 = {Q ∈ R^{9×9×9} | Q(:, j, k) is a standard unit vector ∀j, k}.  (6.3)

6.3.3 Meeting Rooms

Every 3 × 3 square on each floor must not repeat any of the numbers 1 to 9; we call these meeting rooms. In each meeting room we allow exactly one 1, and the rest of the entries are zeros. There are 81 meeting rooms, 9 per floor. We define the following:

E = {A ∈ R^{3×3} | all entries are zero except one entry, which equals one},  (6.4)

J = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}},  (6.5)

and, for every Q ∈ R^{9×9×9}, I1 ∈ J, I2 ∈ J, and k ∈ {1, . . . , 9}, set

M_{I1,I2,k}(Q) = A ∈ R^{3×3} with A_{i mod 3, j mod 3} = Q(i, j, k) ∀i ∈ I1, ∀j ∈ I2.  (6.6)

Therefore, our constraint set is

C3 = {Q ∈ R^{9×9×9} | ∀k, ∀I1 ∈ J, ∀I2 ∈ J : M_{I1,I2,k}(Q) ∈ E}.  (6.7)

Figure 6.5: A Sudoku Meeting Room

As you can see from the picture, the meeting room is not a typical unit vector. We can think of it as a fancy unit vector, written as a 3 × 3 matrix instead of the usual 1 × 9 vector we are used to.

6.3.4 The Given Sudoku

Constraint 4 is the given Sudoku problem. We call this constraint C4.
This is the set of all cubes Q ∈ R^{9×9×9} such that if the (i, j) entry is prescribed and equal to γ_{i,j}, then Q(i, j, :) is equal to the γ_{i,j}-th standard unit vector; otherwise, Q(i, j, :) is any unit vector.

6.4 Iterations

Recall that Elser's Difference Map with β = 1, Lions-Mercier, and Douglas-Rachford can all be defined as

T = (Id + R_A R_B)/2,  (6.8)

where R_A and R_B are the reflectors onto their respective sets. The solution to our Sudoku problem lies in C1 ∩ C2 ∩ C3 ∩ C4. We have 4 constraints and our algorithm can only handle two. As with 8 queens, we use the product space and the diagonal, which we have previously defined. We work in the Hilbert product space

X × X × X × X, recall X = R^{9×9×9}.  (6.9)

We also define

C = C1 × C2 × C3 × C4 = {(c1, c2, c3, c4) | ci ∈ Ci} ⊆ X^4,  (6.10)

and the diagonal,

∆ = {(x1, x2, x3, x4) | x1 = x2 = x3 = x4 ∈ X}.  (6.11)

Then

∆ ∩ C = {(c1, c2, c3, c4) | c1 = c2 = c3 = c4, ci ∈ Ci}  (6.12)

and hence

x ∈ C1 ∩ C2 ∩ C3 ∩ C4 ⇐⇒ (x, x, x, x) ∈ C ∩ ∆.  (6.13)

Our unique solution (assuming a well-posed Sudoku puzzle) lies in the intersection of the 4 constraints. The reflectors used in the Difference Map are two times the projection minus the identity. In our model we have two projections: the projection onto C, our constraints, and the projection onto the diagonal. The projection onto the diagonal is described by Example 2.48:

P∆(x1, x2, x3, x4) = (y, y, y, y) where y = (x1 + x2 + x3 + x4)/4.  (6.14)

The projection onto the constraints is described by Example 2.47:

PC = P_{C1×C2×C3×C4}(x1, x2, x3, x4) = (PC1 x1, PC2 x2, PC3 x3, PC4 x4).  (6.15)

Finally, we define the reflectors and use the Difference Map,

R∆ := 2P∆ − Id,  (6.16)
RC := 2PC − Id.  (6.17)

The iterations are described as

x_{n+1} = ((Id + RC R∆)/2) x_n.  (6.18)

We now apply the exact same algorithm described in detail in the 8 queens example.
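Two of the cube projections that feed this algorithm can be sketched as follows (a hypothetical NumPy illustration; the thesis's actual implementation is the PHP/online solver, and all names here are ours). P_C4 forces clued shafts to their prescribed unit vectors, and P_C3 projects each meeting room onto its nearest fancy unit vector:

```python
import numpy as np

def proj_unit(v):
    # Nearest standard unit vector; ties go to the lowest index.
    e = np.zeros_like(v)
    e[int(np.argmax(v))] = 1.0
    return e

def P_given(Q, clues):
    # Sketch of P_C4: a clued cell's shaft is forced to its prescribed
    # unit vector; a free cell's shaft goes to its nearest unit vector.
    # clues: 9x9 integer grid, 0 for an unclued cell.
    Y = np.zeros_like(Q)
    for i in range(9):
        for j in range(9):
            k = clues[i][j] - 1 if clues[i][j] else int(np.argmax(Q[i, j, :]))
            Y[i, j, k] = 1.0
    return Y

def P_meeting_rooms(Q):
    # Sketch of P_C3: on each floor k, project each 3x3 block ("meeting
    # room") onto the nearest fancy unit vector, i.e. a 3x3 matrix with
    # a single 1.
    Y = Q.copy()
    for k in range(9):
        for bi in (0, 3, 6):
            for bj in (0, 3, 6):
                block = np.asarray(Q[bi:bi + 3, bj:bj + 3, k])
                Y[bi:bi + 3, bj:bj + 3, k] = proj_unit(block.ravel()).reshape(3, 3)
    return Y
```

The hallway projections P_C1 and P_C2 work the same way, applying proj_unit along Q(i, :, k) and Q(:, j, k) respectively.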
Now we pick a cube from our diagonal and convert the cube into a matrix by projecting onto the nearest unit vector in each elevator shaft. Convergence is monitored similarly. It should be noted that the iterates live in our cube space; in order to view them as a 9 × 9 Sudoku matrix we need to flatten the cube, that is, take the 9 × 9 × 9 binary cube and convert it to a 9 × 9 integer matrix by converting each elevator shaft containing a standard unit vector to its corresponding integer; see Figure 6.3.

6.5 Sudoku Example

Arto Inkala, a Finnish mathematician, is an enthusiastic puzzle and problem solver. He completed his doctoral thesis in applied mathematics in 2001, and he creates very difficult Sudoku puzzles. Inkala claims that his "mathematic background has been great help when creating the most difficult Sudokus by [using] genetic algorithms, different optimizing methods and artificial intelligence". On Inkala's website (www.aisudoku.com), he shares 10 of his difficult Sudoku puzzles. The Finnish puzzle maker calls what is, according to him, the most difficult Sudoku puzzle ever "AI Escargot", because it looks like a snail; the AI comes from Inkala's initials. He describes solving it as an intellectual culinary pleasure.

We used the Difference Map method described above to solve all 10 of Inkala's Sudoku puzzles. The online implementation of the Sudoku solver can be found at www.schaad.ca, the author's website. We see that AI Escargot was solved in 6627 iterations, whereas AI labyrinth took over 12000 iterations. Even though Inkala claims that AI Escargot is the world's hardest Sudoku puzzle, our algorithm solves it faster than some other Sudokus created by Inkala; it managed to solve AI honeypot in only 208 iterations! The Sudoku problems and pictures in the left column are from www.aisudoku.com, whereas the Sudoku solutions in the right column are original work.

AI escargot solved in 6627 iterations
AI killer application solved in 882 iterations
AI lucky diamond solved in 276 iterations
AI worm hole solved in 4998 iterations
AI labyrinth solved in 12025 iterations
AI circles solved in 2410 iterations
AI squadron solved in 3252 iterations
AI honeypot solved in 208 iterations
AI tweezers solved in 688 iterations
AI broken brick solved in 1598 iterations

In the following figure, we show how the online solver solves AI Escargot. It takes 6627 iterations, and we take snapshots every 1000 iterations until we come to a solution.

Figure 6.6: Solution to Escargot

6.6 Sudoku Distance

The next figure is very interesting. At each iteration, we compute a distance from the solution. Our algorithm displays the current iterate as a 9 × 9 matrix which represents an attempt at a Sudoku puzzle solution. If we let the current iterate be the matrix A, and let the solution to the Sudoku puzzle be B, then we can compute the difference, which we call the matrix C. The difference is computed by subtracting entrywise, i.e. c_{ij} = a_{ij} − b_{ij}, where i, j ∈ {1, 2, . . . , 9}. We calculate the distance as

‖C‖ = ( Σ_{i,j} c_{ij}² )^{1/2}.  (6.19)

Figure 6.7: Distance versus number of iterations

The graph is not monotone. It shows that Elser's Difference Map sometimes gets close to a solution but needs to move farther away before actually reaching it. There are a few times, around 1100, 1200, 4800, and 6300 iterations, where we get fairly close to a solution, but then move farther away before finding it. Also notice how quickly we converge to a solution at the end.

Chapter 7

Conclusion

We have shown that Elser's Difference Map is equivalent to the Douglas-Rachford and Fienup's Hybrid Input-Output algorithms, where β = 1. We showed how to model the 8-queens problem and, following Elser, we modeled Sudoku as well.
We hope that the detailed exposition in this thesis will help readers understand the modeling of feasibility problems such as Sudoku and 8-queens. We used several very nonconvex sets and applied an algorithm motivated by the theory of convex sets to converge to a solution. There is no theory for the nonconvex case, although we have demonstrated that the algorithm seems to work. Some future research might include discovering convergence proofs for the nonconvex case, starting with simple examples in the Euclidean plane or building on the paper [LLM09] by Lewis, Luke and Malick. Finally, we showed that the operator governing the Douglas-Rachford iteration need not be a proximal map even when the two involved resolvents are; this refines an observation of Eckstein and underlines the need for monotone operator theory.

Bibliography

[AY89] Bruce Abramson and Moti Yung. Divide and conquer under global constraints: A solution to the n-queens problem. Journal of Parallel and Distributed Computing, 6(3):649-662, 1989. Available from: http://www.sciencedirect.com/science/article/B6WKJ-4BRJHWS-13/2/44424d257adca8566925a5ca897293e9. → pages 47

[BCL02] Heinz H. Bauschke, Patrick L. Combettes, and D. Russell Luke. Phase retrieval, error reduction algorithm, and Fienup variants: a view from convex optimization. J. Opt. Soc. Amer. A, 19(7):1334-1345, 2002. Available from: http://dx.doi.org/10.1364/JOSAA.19.001334. → pages 1

[bir09] Interdisciplinary workshop on fixed-point algorithms for inverse problems in science and engineering, November 2009. Available from: http://www.birs.ca/birspages.php?task=displayevent&event_id=09w5006. → pages 2

[Bor83] J. M. Borwein. Adjoint process duality. Math. Oper. Res., 8(3):403-434, 1983. Available from: http://dx.doi.org/10.1287/moor.8.3.403. → pages 42

[BT09] Seymour S. Block and Santiago A. Tavares. Before Sudoku. Oxford University Press, Oxford, 2009. The world of magic squares. → pages 60

[BWY09] Heinz H. Bauschke, Xianfu Wang, and Liangjin Yao. On Borwein-Wiersma decompositions of monotone linear relations, 2009. Available from: http://www.citebase.org/abstract?id=oai:arXiv.org:0912.2772. → pages 41

[Com04] Patrick L. Combettes. Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization, 53(5-6):475-504, 2004. Available from: http://dx.doi.org/10.1080/02331930412331327157. → pages 43

[Cro98] Ronald Cross. Multivalued linear operators, volume 213 of Monographs and Textbooks in Pure and Applied Mathematics. Marcel Dekker Inc., New York, 1998. → pages 42

[Deu01] Frank Deutsch. Best approximation in inner product spaces. CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC, 7. Springer-Verlag, New York, 2001. → pages 20, 27, 31, 34

[EB92] Jonathan Eckstein and Dimitri P. Bertsekas. On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Programming, 55(3, Ser. A):293-318, 1992. Available from: http://dx.doi.org/10.1007/BF01581204. → pages 38

[Eck89] Jonathan Eckstein. Splitting Methods for Monotone Operators with Applications to Parallel Optimization. PhD dissertation, Massachusetts Institute of Technology, Cambridge, Massachusetts, June 1989. → pages 43

[EF98] Jonathan Eckstein and Michael C. Ferris. Operator-splitting methods for monotone affine variational inequalities, with a parallel application to optimal control. INFORMS J. Comput., 10(2):218-235, 1998. → pages 43

[Epp05] David Eppstein. Nonrepetitive paths and cycles in graphs with application to Sudoku. CoRR, abs/cs/0507053, 2005. → pages 60

[Eri10] Martin Erickson. Pearls of discrete mathematics. Discrete Mathematics and its Applications (Boca Raton). CRC Press, Boca Raton, FL, 2010. → pages 60

[ERT07] V. Elser, I. Rankenburg, and P. Thibault. Searching with iterated maps. Proc. Natl. Acad. Sci. USA, 104(2):418-423, 2007. Available from: http://dx.doi.org/10.1073/pnas.0606359104. → pages 1, 26

[Fie82] J. R. Fienup.
Phase retrieval algorithms: a comparison. Appl. Opt., 21(15):2758-2769, 1982. Available from: http://ao.osa.org/abstract.cfm?URI=ao-21-15-2758. → pages 26

[GE08] Simon Gravel and Veit Elser. Divide and concur: A general approach to constraint satisfaction. Phys. Rev. E, 78(3):036706, Sep 2008. → pages 1

[ICC07] Second mathematical programming society international conference on continuous optimization, August 2007. Available from: http://iccopt-mopta.mcmaster.ca/. → pages 1

[Kre78] Erwin Kreyszig. Introductory functional analysis with applications. John Wiley & Sons, New York-London-Sydney, 1978. → pages 7, 33, 34, 42

[LLM09] A. S. Lewis, D. R. Luke, and J. Malick. Local linear convergence for alternating and averaged nonconvex projections. Found. Comput. Math., 9(4):485-513, 2009. Available from: http://dx.doi.org/10.1007/s10208-008-9036-y. → pages 75

[LM79] P.-L. Lions and B. Mercier. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal., 16(6):964-979, 1979. Available from: http://dx.doi.org/10.1137/0716071. → pages 26

[Mey00] Carl Meyer. Matrix analysis and applied linear algebra. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2000. With 1 CD-ROM (Windows, Macintosh and UNIX) and a solutions manual (iv+171 pp.). → pages 4, 34, 38

[Min62] George J. Minty. Monotone (nonlinear) operators in Hilbert space. Duke Math. J., 29:341-346, 1962. → pages 39

[MN88] Jan R. Magnus and Heinz Neudecker. Matrix differential calculus with applications in statistics and econometrics. Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics. John Wiley & Sons Ltd., Chichester, 1988. → pages 40

[Opi67] Zdzisław Opial. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Amer. Math. Soc., 73:591-597, 1967. → pages 29

[Que] Queen moves. Available from: http://www.chessdryad.com/education/magictheater/queens/index.htm. → pages 47

[RBC74] W. W. Rouse Ball and H. S. M. Coxeter. Mathematical recreations & essays. University of Toronto Press, Toronto, Ont., twelfth edition, 1974. → pages 47, 48

[Reh] Julie Rehmeyer. The Sudoku solution. Science News. Available from: http://sciencenews.org/view/generic/id/39529/title/The_Sudoku_solution. → pages 1

[Roc70] R. T. Rockafellar. On the maximal monotonicity of subdifferential mappings. Pacific J. Math., 33:209-216, 1970. → pages 38

[Roc97] R. Tyrrell Rockafellar. Convex analysis. Princeton Landmarks in Mathematics. Princeton University Press, Princeton, NJ, 1997. Reprint of the 1970 original, Princeton Paperbacks. → pages 37, 39, 41

[Rud91] Walter Rudin. Functional analysis. International Series in Pure and Applied Mathematics. McGraw-Hill Inc., New York, second edition, 1991. → pages 31

[RW98] R. Tyrrell Rockafellar and Roger J-B. Wets. Variational analysis, volume 317 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 1998. → pages 36, 39, 40, 42

[Sim98] Stephen Simons. Minimax and monotonicity, volume 1693 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 1998. → pages 38

[Wik] Sudoku. Available from: http://en.wikipedia.org/wiki/Sudoku. → pages 60

[Zăl02] C. Zălinescu. Convex analysis in general vector spaces. World Scientific Publishing Co. Inc., River Edge, NJ, 2002. → pages 37, 39

[Zar71] Eduardo H. Zarantonello. Projections on convex sets in Hilbert space and spectral theory. I. Projections on convex sets. In Contributions to nonlinear functional analysis (Proc. Sympos., Math. Res. Center, Univ. Wisconsin, Madison, Wis., 1971), pages 237-341. Academic Press, New York, 1971.
→ pages 32

Appendix A

8 Queens PHP Code

<?php
// This code is prototype and is provided as is under the GNU General Public License
// Author: Jason Schaad August 11, 2010

$n = 8;

function checkqueens($matrix) {
  // Round the matrix
  for ($x = 1; $x <= 8; $x++) {
    for ($y = 1; $y <= 8; $y++) {
      if ($matrix[$x][$y] >= 0.5) {
        $matrix[$x][$y] = 1;
      }
    }
  }
  // PrintMatrix($matrix);
  $total = 0;
  // HORIZONTAL
  for ($x = 1; $x <= 8; $x++) {
    $sum = 0;
    for ($y = 1; $y <= 8; $y++) {
      $sum += $matrix[$x][$y];
    }
    if ($sum == 1) {
      $total++;
    }
  }
  // VERTICAL
  for ($x = 1; $x <= 8; $x++) {
    $sum = 0;
    for ($y = 1; $y <= 8; $y++) {
      $sum += $matrix[$y][$x];
    }
    if ($sum == 1) {
      $total++;
    }
  }
  $check = 1;
  // POSITIVE
  if (($matrix[1][1]) > 1) { $check = -1; }
  if (($matrix[2][1] + $matrix[1][2]) > 1) { $check = -1; }
  if (($matrix[3][1] + $matrix[2][2] + $matrix[1][3]) > 1) { $check = -1; }
  if (($matrix[4][1] + $matrix[3][2] + $matrix[2][3] + $matrix[1][4]) > 1) { $check = -1; }
  if (($matrix[5][1] + $matrix[4][2] + $matrix[3][3] + $matrix[2][4] + $matrix[1][5]) > 1) { $check = -1; }
  if (($matrix[6][1] + $matrix[5][2] + $matrix[4][3] + $matrix[3][4] + $matrix[2][5] + $matrix[1][6]) > 1) { $check = -1; }
  if (($matrix[7][1] + $matrix[6][2] + $matrix[5][3] + $matrix[4][4] + $matrix[3][5] + $matrix[2][6] + $matrix[1][7]) > 1) { $check = -1; }
  if (($matrix[8][1] + $matrix[7][2] + $matrix[6][3] + $matrix[5][4] + $matrix[4][5] + $matrix[3][6] + $matrix[2][7] + $matrix[1][8]) > 1) { $check = -1; }
  if (($matrix[8][2] + $matrix[7][3] + $matrix[6][4] + $matrix[5][5] + $matrix[4][6] + $matrix[3][7] + $matrix[2][8]) > 1) { $check = -1; }
  if (($matrix[8][3] + $matrix[7][4] + $matrix[6][5] + $matrix[5][6] + $matrix[4][7] + $matrix[3][8]) > 1) { $check = -1; }
  if (($matrix[8][4] + $matrix[7][5] + $matrix[6][6] + $matrix[5][7] + $matrix[4][8]) > 1) { $check = -1; }
  if (($matrix[8][5] + $matrix[7][6] + $matrix[6][7] + $matrix[5][8]) > 1) { $check = -1; }
  if (($matrix[8][6] + $matrix[7][7] + $matrix[6][8]) > 1) { $check = -1; }
  if (($matrix[8][7] + $matrix[7][8]) > 1) { $check = -1; }
  if (($matrix[8][8]) > 1) { $check = -1; }
  // NEGATIVE
  if (($matrix[8][1]) > 1) { $check = -1; }
  if (($matrix[7][1] + $matrix[8][2]) > 1) { $check = -1; }
  if (($matrix[6][1] + $matrix[7][2] + $matrix[8][3]) > 1) { $check = -1; }
  if (($matrix[5][1] + $matrix[6][2] + $matrix[7][3] + $matrix[8][4]) > 1) { $check = -1; }
  if (($matrix[4][1] + $matrix[5][2] + $matrix[6][3] + $matrix[7][4] + $matrix[8][5]) > 1) { $check = -1; }
  if (($matrix[3][1] + $matrix[4][2] + $matrix[5][3] + $matrix[6][4] + $matrix[7][5] + $matrix[8][6]) > 1) { $check = -1; }
  if (($matrix[2][1] + $matrix[3][2] + $matrix[4][3] + $matrix[5][4] + $matrix[6][5] + $matrix[7][6] + $matrix[8][7]) > 1) { $check = -1; }
  if (($matrix[1][1] + $matrix[2][2] + $matrix[3][3] + $matrix[4][4] + $matrix[5][5] + $matrix[6][6] + $matrix[7][7] + $matrix[8][8]) > 1) { $check = -1; }
  if (($matrix[1][2] + $matrix[2][3] + $matrix[3][4] + $matrix[4][5] + $matrix[5][6] + $matrix[6][7] + $matrix[7][8]) > 1) { $check = -1; }
  if (($matrix[1][3] + $matrix[2][4] + $matrix[3][5] + $matrix[4][6] + $matrix[5][7] + $matrix[6][8]) > 1) { $check = -1; }
  if (($matrix[1][4] + $matrix[2][5] + $matrix[3][6] + $matrix[4][7] + $matrix[5][8]) > 1) { $check = -1; }
  if (($matrix[1][5] + $matrix[2][6] + $matrix[3][7] + $matrix[4][8]) > 1) { $check = -1; }
  if (($matrix[1][6] + $matrix[2][7] + $matrix[3][8]) > 1) { $check = -1; }
  if (($matrix[1][7] + $matrix[2][8]) > 1) { $check = -1; }
  if (($matrix[1][8]) > 1) { $check = -1; }
  if (($total == 16) && ($check == 1)) {
    return 1;
  } else {
    return 0;
  }
}

function chessround($x, $y) {
  if ($y == 1) {
    if ($x >= 0.5) {
      return "<img height=50 width=50 src=3.png>";
    } else {
      return "<img height=50 width=50 src=2.png>";
    }
  } elseif ($y == 0) {
    if ($x >= 0.5) {
      return "<img height=50 width=50 src=4.png>";
    } else {
      return "<img height=50 width=50 src=1.png>";
    }
  }
}

function PrintChessMatrix($matrix) {
  echo "<table border=1>";
  $n = sizeof($matrix);
  for ($x = 1; $x <= $n; $x++) {
    echo "<tr>";
    for ($y = 1; $y <= $n; $y++) {
      if (($x % 2) == 1) {
        if (($y % 2) == 1) {
          echo "<td bgcolor=#999999>" . chessround($matrix[$x][$y], 1) . "</td>";
        } else {
          echo "<td bgcolor=#FFFFFF>" . chessround($matrix[$x][$y], 0) . "</td>";
        }
      } else {
        if (($y % 2) == 0) {
          echo "<td bgcolor=#999999>" . chessround($matrix[$x][$y], 1) . "</td>";
        } else {
          echo "<td bgcolor=#FFFFFF>" . chessround($matrix[$x][$y], 0) . "</td>";
        }
      }
    }
    echo "</tr>";
  }
  echo "</table>";
} // end function

function PrintMatrix($matrix) {
  echo "<table border=1>";
  $n = sizeof($matrix);
  for ($x = 1; $x <= $n; $x++) {
    echo "<tr>";
    for ($y = 1; $y <= $n; $y++) {
      if (($x % 2) == 1) {
        if (($y % 2) == 1) {
          echo "<td bgcolor=#999999>" . $matrix[$x][$y] . "</td>";
        } else {
          echo "<td bgcolor=#FFFFFF>" . $matrix[$x][$y] . "</td>";
        }
      } else {
        if (($y % 2) == 0) {
          echo "<td bgcolor=#999999>" . $matrix[$x][$y] . "</td>";
        } else {
          echo "<td bgcolor=#FFFFFF>" . $matrix[$x][$y] . "</td>";
        }
      }
    }
    echo "</tr>";
  }
  echo "</table>";
} // end function

function makeunitcol($matrix, $colnumber, $entrytomakezero) {
  $n = sizeof($matrix);
  for ($x = 1; $x <= $n; $x++) {
    $matrix[$x][$colnumber] = 0;
  }
  if ($entrytomakezero > 0) {
    $matrix[round($entrytomakezero)][$colnumber] = 1;
  }
  return $matrix;
}

function makeunitrow($matrix, $rownumber, $entrytomakezero) {
  $n = sizeof($matrix);
  for ($y = 1; $y <= $n; $y++) {
    $matrix[$rownumber][$y] = 0;
  }
  if ($entrytomakezero > 0) {
    $matrix[$rownumber][round($entrytomakezero)] = 1;
  }
  return $matrix;
}

function columnproj($matrix) {
  $n = sizeof($matrix);
  for ($y = 1; $y <= $n; $y++) {
    $imax = 1;
    $entrymax = $matrix[1][$y];
    for ($x = 1; $x <= $n; $x++) {
      if ($matrix[$x][$y] > $entrymax) {
        $imax = $x;
        $entrymax = $matrix[$x][$y];
      }
    }
    $matrix = makeunitcol($matrix, $y, $imax);
  }
  return $matrix;
} // end function

function rowproj($matrix) {
  $n = sizeof($matrix);
  for ($x = 1; $x <= $n; $x++) {
    $imax = 1;
    $entrymax = $matrix[$x][1];
    for ($y = 1; $y <= $n; $y++) {
      if ($matrix[$x][$y] > $entrymax) {
        $imax = $y;
        $entrymax = $matrix[$x][$y];
      }
    }
    $matrix = makeunitrow($matrix, $x, $imax);
  }
  return $matrix;
} // end function

function postiveslopeproj($matrix) {
  $v1[1] = $matrix[1][1];
  $v2[1] = $matrix[2][1]; $v2[2] = $matrix[1][2];
  $v3[1] = $matrix[3][1]; $v3[2] = $matrix[2][2]; $v3[3] = $matrix[1][3];
  $v4[1] = $matrix[4][1]; $v4[2] = $matrix[3][2]; $v4[3] = $matrix[2][3]; $v4[4] = $matrix[1][4];
  $v5[1] = $matrix[5][1]; $v5[2] = $matrix[4][2]; $v5[3] = $matrix[3][3]; $v5[4] = $matrix[2][4]; $v5[5] = $matrix[1][5];
  $v6[1] = $matrix[6][1]; $v6[2] = $matrix[5][2]; $v6[3] = $matrix[4][3]; $v6[4] = $matrix[3][4]; $v6[5] = $matrix[2][5]; $v6[6] = $matrix[1][6];
  $v7[1] = $matrix[7][1]; $v7[2] = $matrix[6][2]; $v7[3] = $matrix[5][3]; $v7[4] = $matrix[4][4]; $v7[5] = $matrix[3][5]; $v7[6] = $matrix[2][6]; $v7[7] = $matrix[1][7];
  $v8[1] = $matrix[8][1]; $v8[2] = $matrix[7][2]; $v8[3] = $matrix[6][3]; $v8[4] = $matrix[5][4]; $v8[5] = $matrix[4][5]; $v8[6] = $matrix[3][6]; $v8[7] = $matrix[2][7]; $v8[8] = $matrix[1][8];
  $v9[1] = $matrix[8][2]; $v9[2] = $matrix[7][3]; $v9[3] = $matrix[6][4]; $v9[4] = $ma
tri x [ 5 ] [ 5 ] ; $v9 [ 5 ] = $m atr ix [ 4 ] [ 6 ] ; $v9 [ 6 ] = $ mat rix [ 3 ] [ 7 ] ; $v9 [ 7 ] = $ma tri x [ 2 ] [ 8 ] ; $v10 [ 1 ] = $ma tri x [ 8 ] [ 3 ] ; $v10 [ 2 ] = $m atr ix [ 7 ] [ 4 ] ; $v10 [ 3 ] = $m atr ix [ 6 ] [ 5 ] ; $v10 [ 4 ] = $ma tri x [ 5 ] [ 6 ] ; $v10 [ 5 ] = $m atr ix [ 4 ] [ 7 ] ; $v10 [ 6 ] = $m atr ix [ 3 ] [ 8 ] ; $v11 [ 1 ] = $ma tri x [ 8 ] [ 4 ] ; $v11 [ 2 ] = $m atr ix [ 7 ] [ 5 ] ; $v11 [ 3 ] = $m atr ix [ 6 ] [ 6 ] ; $v11 [ 4 ] = $ma tri x [ 5 ] [ 7 ] ; $v11 [ 5 ] = $m atr ix [ 4 ] [ 8 ] ; $v12 [ 1 ] = $ma tri x [ 8 ] [ 5 ] ; $v12 [ 2 ] = $m atr ix [ 7 ] [ 6 ] ; $v12 [ 3 ] = $m atr ix [ 6 ] [ 7 ] ; $v12 [ 4 ] = $ma tri x [ 5 ] [ 8 ] ; $v13 [ 1 ] = $ma tri x [ 8 ] [ 6 ] ; $v13 [ 2 ] = $m atr ix [ 7 ] [ 7 ] ; $v13 [ 3 ] = $m atr ix [ 6 ] [ 8 ] ;  85  Appendix A. 8 Queens PHP Code $v14 [ 1 ] = $ma tri x [ 8 ] [ 7 ] ; $v14 [ 2 ] = $m atr ix [ 7 ] [ 8 ] ; $v15 [ 1 ] = $ma tri x [ 8 ] [ 8 ] ; i f ( $v1 [ 1 ] > = 0 . 5 ) { $m atr ix [ 1 ] [ 1 ] = 1 ; } else { $m atr ix [ 1 ] [ 1 ] = 0 ; } $diagnames = array ( ’ v2 ’ , ’ v3 ’ , ’ v4 ’ , ’ v5 ’ , ’ v6 ’ , ’ v7 ’ , ’ v8 ’ , ’ v9 ’ , ’ v10 ’ , ’ v11 ’ , ’ v12 ’ , ’ v13 ’ , ’ v14 ’ ) ; $unitnames= array ( ’ w2 ’ , ’ w3 ’ , ’ w4 ’ , ’ w5 ’ , ’ w6 ’ , ’ w7 ’ , ’ w8 ’ , ’ w9 ’ , ’ w10 ’ , ’ w11 ’ , ’ w12 ’ , ’ w13 ’ , ’ w14 ’ ) ; f o r ( $ o u t s i d e c o u n t e r =2; $ o u t s i d e c o u n t e r <=8; $ o u t s i d e c o u n t e r++) { $ l e f t s u m =0; $ r i g h t s u m =0; $ { $unitnames [ $ o u t s i d e c o u n t e r −2]}= u n i t p r o j ( $ { $diagnames [ $ o u t s i d e c o u n t e r −2]} , $ o u t s i d e c o u n t e r ) ; f o r ( $ i n s i d e c o u n t e r =1; $ i n s i d e c o u n t e r <=$ o u t s i d e c o u n t e r ; $ i n s i d e c o u n t e r ++) { $ l e f t s u m=$ l e f t s u m+pow ( $ { $diagnames [ $ o u t s i d e c o u n t e r − 2 ] } [ $ i n s i d e c o u n t e r ] , 2 ) ; $ r i g h t s u m=$ r i g h t s u m+pow ( $ { $diagnames [ $ o u t s i d e c o u n t e r − 
2 ] } [ $ i n s i d e c o u n t e r ]− $ { $unitnames [ $ o u t s i d e c o u n t e r − 2 ] } [ $ i n s i d e c o u n t e r ] , 2 ) ; } // echo ” LS−> ” . $ l e f t s u m . ” <−RS−> ” . $ r i g h t s u m . ”<− ” ; i f ( sqrt ( $ l e f t s u m ) < sqrt ( $ r i g h t s u m ) ) { // echo ” A l l Z e r os ” ; f o r ( $num1=$ o u t s i d e c o u n t e r ; $num1>=1; $num1−−) { f o r ( $num2=1; $num2<=$ o u t s i d e c o u n t e r ; $num2++) { i f ( ( $num1+$num2)==( $ o u t s i d e c o u n t e r +1)) { $m atr ix [ $num1 ] [ $num2 ] = 0 ; } } } } else { // echo ” Unit Vector ” ; $ u n i t v e c t o r c o u n t e r =0; f o r ( $num1=$ o u t s i d e c o u n t e r ; $num1>=1; $num1−−) { f o r ( $num2=1; $num2<=$ o u t s i d e c o u n t e r ; $num2++) { i f ( ( $num1+$num2)==( $ o u t s i d e c o u n t e r +1)) { $ u n i t v e c t o r c o u n t e r ++; $m atr ix [ $num1 ] [ $num2]= $ { $unitnames [ $ o u t s i d e c o u n t e r − 2 ] } [ $ u n i t v e c t o r c o u n t e r ] ; } } } } }  86  Appendix A. 8 Queens PHP Code  // ∗∗∗∗∗∗∗∗∗∗∗∗∗∗∗∗∗∗ NOW WE do t h e r e m a i n i n g 7 d i a g o n a l s ∗∗∗∗∗∗∗∗∗∗∗∗∗∗∗∗∗∗ i f ( $v15 [ 1 ] > = 0 . 
5 ) { $m atr ix [ 8 ] [ 8 ] = 1 ; } else { $m atr ix [ 8 ] [ 8 ] = 0 ; } f o r ( $ o u t s i d e c o u n t e r =14; $ o u t s i d e c o u n t e r >=9; $ o u t s i d e c o u n t e r −−) { $ l e f t s u m =0; $ r i g h t s u m =0; $ { $unitnames [ $ o u t s i d e c o u n t e r −2]}= u n i t p r o j ( $ { $diagnames [ $ o u t s i d e c o u n t e r −2]} ,(16 − $ o u t s i d e c o u n t e r ) ) ; f o r ( $ i n s i d e c o u n t e r =1; $ i n s i d e c o u n t e r <=(16− $ o u t s i d e c o u n t e r ) ; $ i n s i d e c o u n t e r ++) { $ l e f t s u m=$ l e f t s u m+pow ( $ { $diagnames [ $ o u t s i d e c o u n t e r − 2 ] } [ $ i n s i d e c o u n t e r ] , 2 ) ; $ r i g h t s u m=$ r i g h t s u m+pow ( $ { $diagnames [ $ o u t s i d e c o u n t e r − 2 ] } [ $ i n s i d e c o u n t e r ]− $ { $unitnames [ $ o u t s i d e c o u n t e r − 2 ] } [ $ i n s i d e c o u n t e r ] , 2 ) ; } i f ( sqrt ( $ l e f t s u m ) < sqrt ( $ r i g h t s u m ) ) { // echo ” A l l Z e r os ” ; f o r ( $num1=1; $num1<=8; $num1++) { f o r ( $num2=1; $num2<=8; $num2++) { i f ( ( $num1+$num2)==( $ o u t s i d e c o u n t e r +1)) { $m atr ix [ $num2 ] [ $num1 ] = 0 ; } } } } else { // echo ” Unit Vector ” ; $ u n i t v e c t o r c o u n t e r =0; f o r ( $num1=1; $num1<=8; $num1++) { f o r ( $num2=1; $num2<=8; $num2++) { i f ( ( $num1+$num2)==( $ o u t s i d e c o u n t e r +1)) { $ u n i t v e c t o r c o u n t e r ++; $m atr ix [ $num2 ] [ $num1]= $ { $unitnames [ $ o u t s i d e c o u n t e r − 2 ] } [ $ u n i t v e c t o r c o u n t e r ] ; } } } } } RETURN $ma tri x ; } // end f u n c t i o n  87  Appendix A. 
8 Queens PHP Code f u n c t i o n c l o c k w i s e r o t a t i o n ( $m atr ix ) { f o r ( $x =1; $x <=8; $x++) { f o r ( $y =1; $y <=8; $y++) { $newmatrix [ $x ] [ $y ]= $ mat rix [9 − $y ] [ $x ] ; } } RETURN $newmatrix ; } // end f u n c t i o n  f u n c t i o n c o u n t e r c l o c k w i s e r o t a t i o n ( $ mat rix ) { f o r ( $x =1; $x <=8; $x++) { f o r ( $y =1; $y <=8; $y++) { $newmatrix [9− $y ] [ $x ]= $m atr ix [ $x ] [ $y ] ; } } RETURN $newmatrix ; } // end f u n c t i o n  f u n c t i o n u n i t p r o j ( $v , $ s i z e ) { $imax =1; $entrymax = $v [ 1 ] ; f o r ( $ i =2; $ i<=$ s i z e ; $ i ++) { i f ( $v [ $ i ]> $entrymax ) { $imax = $ i ; $entrymax =$v [ $ i ] ; } } RETURN makeunit ( $imax , $ s i z e ) ; } f u n c t i o n makeunit ( $ i , $ s i z e ) { f o r ( $x =1; $x<=$ s i z e ; $x++) { $v [ $x ] = 0 ; } i f ( $ i >0) { $v [ round ( $ i ) ] = 1 ; } r e t u r n $v ; } f u n c t i o n addcubes ($M1 , $M2) { f o r ( $ i =1; $ i <=8; $ i ++) { f o r ( $ j =1; $ j <=8; $ j++) { $S [ $ i ] [ $ j ]=$M1 [ $ i ] [ $ j ] + $M2 [ $ i ] [ $ j ] ;  88  Appendix A. 8 Queens PHP Code } } r e t u r n $S ; } f u n c t i o n s c a l m u l t c u b e ( $alpha , $T ) { f o r ( $ i =1; $ i <=8; $ i ++) { f o r ( $ j =1; $ j <=8; $ j++) { $S [ $ i ] [ $ j ]= $ a l p h a ∗$T [ $ i ] [ $ j ] ; } } r e t u r n $S ; } // end f u n c t i o n  f u n c t i o n d i a g p r o j ( $T1 , $T2 , $T3 , $T4 ) { $T=addcubes ( $T1 , $T2 ) ; $T=addcubes ( $T , $T3 ) ; $T=addcubes ( $T , $T4 ) ; $T=s c a l m u l t c u b e ( 0 . 
2 5 , $T ) ; r e t u r n $T ; } f u n c t i o n new1 ( $T1 , $T2 , $T3 , $T4 ) { $PD=d i a g p r o j ( $T1 , $T2 , $T3 , $T4 ) ; $S1 = s c a l m u l t c u b e ( 2 ,$PD ) ; $S2 = s c a l m u l t c u b e ( −1 ,$T1 ) ; $S3 = addcubes ( $S1 , $S2 ) ; $S4 = c o l u m n p r o j ( $S3 ) ; $S5 = addcubes ( $T1 , s c a l m u l t c u b e ( −1 ,$PD ) ) ; $S = addcubes ( $S4 , $S5 ) ; r e t u r n $S ; } f u n c t i o n new2 ( $T1 , $T2 , $T3 , $T4 ) { $PD=d i a g p r o j ( $T1 , $T2 , $T3 , $T4 ) ; $S1 = s c a l m u l t c u b e ( 2 ,$PD ) ; $S2 = s c a l m u l t c u b e ( −1 ,$T2 ) ; $S3 = addcubes ( $S1 , $S2 ) ; $S4 = r o w p r o j ( $S3 ) ; $S5 = addcubes ( $T2 , s c a l m u l t c u b e ( −1 ,$PD ) ) ; $S = addcubes ( $S4 , $S5 ) ; r e t u r n $S ; } f u n c t i o n new3 ( $T1 , $T2 , $T3 , $T4 ) { $PD=d i a g p r o j ( $T1 , $T2 , $T3 , $T4 ) ;  89  Appendix A. 8 Queens PHP Code $S1 = s c a l m u l t c u b e ( 2 ,$PD ) ; $S2 = s c a l m u l t c u b e ( −1 ,$T3 ) ; $S3 = addcubes ( $S1 , $S2 ) ; $S4 = p o s t i v e s l o p e p r o j ( $S3 ) ; $S5 = addcubes ( $T3 , s c a l m u l t c u b e ( −1 ,$PD ) ) ; $S = addcubes ( $S4 , $S5 ) ; r e t u r n $S ; } f u n c t i o n new4 ( $T1 , $T2 , $T3 , $T4 ) { global $origS ; $PD=d i a g p r o j ( $T1 , $T2 , $T3 , $T4 ) ; $S1 = s c a l m u l t c u b e ( 2 ,$PD ) ; $S2 = s c a l m u l t c u b e ( −1 ,$T4 ) ; $S3 = addcubes ( $S1 , $S2 ) ; $S4 = c o u n t e r c l o c k w i s e r o t a t i o n ( p o s t i v e s l o p e p r o j ( c l o c k w i s e r o t a t i o n ( $S3 ) ) ) ; $S5 = addcubes ( $T4 , s c a l m u l t c u b e ( −1 ,$PD ) ) ; $S = addcubes ( $S4 , $S5 ) ; r e t u r n $S ; } $origMatrix [ 1 ] [ 1 ] = $ $origMatrix [ 1 ] [ 2 ] = $ $origMatrix [ 1 ] [ 3 ] = $ $origMatrix [ 1 ] [ 4 ] = $ $origMatrix [ 1 ] [ 5 ] = $ $origMatrix [ 1 ] [ 6 ] = $ $origMatrix [ 1 ] [ 7 ] = $ $origMatrix [ 1 ] [ 8 ] = $  POST [ POST [ POST [ POST [ POST [ POST [ POST [ POST [  ’ 11 ’ ’ 12 ’ ’ 13 ’ ’ 14 ’ ’ 15 ’ ’ 16 ’ ’ 17 ’ ’ 18 ’  ]; ]; ]; ]; ]; 
]; ]; ];  $origMatrix [ 2 ] [ 1 ] = $ $origMatrix [ 2 ] [ 2 ] = $ $origMatrix [ 2 ] [ 3 ] = $ $origMatrix [ 2 ] [ 4 ] = $ $origMatrix [ 2 ] [ 5 ] = $ $origMatrix [ 2 ] [ 6 ] = $ $origMatrix [ 2 ] [ 7 ] = $ $origMatrix [ 2 ] [ 8 ] = $  POST [ POST [ POST [ POST [ POST [ POST [ POST [ POST [  ’ 21 ’ ’ 22 ’ ’ 23 ’ ’ 24 ’ ’ 25 ’ ’ 26 ’ ’ 27 ’ ’ 28 ’  ]; ]; ]; ]; ]; ]; ]; ];  $origMatrix [ 3 ] [ 1 ] = $ $origMatrix [ 3 ] [ 2 ] = $ $origMatrix [ 3 ] [ 3 ] = $ $origMatrix [ 3 ] [ 4 ] = $ $origMatrix [ 3 ] [ 5 ] = $ $origMatrix [ 3 ] [ 6 ] = $ $origMatrix [ 3 ] [ 7 ] = $ $origMatrix [ 3 ] [ 8 ] = $  POST [ POST [ POST [ POST [ POST [ POST [ POST [ POST [  ’ 31 ’ ’ 32 ’ ’ 33 ’ ’ 34 ’ ’ 35 ’ ’ 36 ’ ’ 37 ’ ’ 38 ’  ]; ]; ]; ]; ]; ]; ]; ];  $ o r i g M a t r i x [ 4 ] [ 1 ] = $ POST [ ’ 41 ’ ] ;  90  Appendix A. 8 Queens PHP Code $origMatrix [ 4 ] [ 2 ] = $ $origMatrix [ 4 ] [ 3 ] = $ $origMatrix [ 4 ] [ 4 ] = $ $origMatrix [ 4 ] [ 5 ] = $ $origMatrix [ 4 ] [ 6 ] = $ $origMatrix [ 4 ] [ 7 ] = $ $origMatrix [ 4 ] [ 8 ] = $  POST [ POST [ POST [ POST [ POST [ POST [ POST [  ’ 42 ’ ’ 43 ’ ’ 44 ’ ’ 45 ’ ’ 46 ’ ’ 47 ’ ’ 48 ’  ]; ]; ]; ]; ]; ]; ];  $origMatrix [ 5 ] [ 1 ] = $ $origMatrix [ 5 ] [ 2 ] = $ $origMatrix [ 5 ] [ 3 ] = $ $origMatrix [ 5 ] [ 4 ] = $ $origMatrix [ 5 ] [ 5 ] = $ $origMatrix [ 5 ] [ 6 ] = $ $origMatrix [ 5 ] [ 7 ] = $ $origMatrix [ 5 ] [ 8 ] = $  POST [ POST [ POST [ POST [ POST [ POST [ POST [ POST [  ’ 51 ’ ’ 52 ’ ’ 53 ’ ’ 54 ’ ’ 55 ’ ’ 56 ’ ’ 57 ’ ’ 58 ’  ]; ]; ]; ]; ]; ]; ]; ];  $origMatrix [ 6 ] [ 1 ] = $ $origMatrix [ 6 ] [ 2 ] = $ $origMatrix [ 6 ] [ 3 ] = $ $origMatrix [ 6 ] [ 4 ] = $ $origMatrix [ 6 ] [ 5 ] = $ $origMatrix [ 6 ] [ 6 ] = $ $origMatrix [ 6 ] [ 7 ] = $ $origMatrix [ 6 ] [ 8 ] = $  POST [ POST [ POST [ POST [ POST [ POST [ POST [ POST [  ’ 61 ’ ’ 62 ’ ’ 63 ’ ’ 64 ’ ’ 65 ’ ’ 66 ’ ’ 67 ’ ’ 68 ’  ]; ]; ]; ]; ]; ]; ]; ];  $origMatrix [ 7 ] [ 1 ] = $ $origMatrix [ 7 ] [ 2 ] = $ $origMatrix [ 7 ] [ 3 ] = $ $origMatrix [ 7 ] [ 4 ] = $ 
$origMatrix [ 7 ] [ 5 ] = $ $origMatrix [ 7 ] [ 6 ] = $ $origMatrix [ 7 ] [ 7 ] = $ $origMatrix [ 7 ] [ 8 ] = $  POST [ POST [ POST [ POST [ POST [ POST [ POST [ POST [  ’ 71 ’ ’ 72 ’ ’ 73 ’ ’ 74 ’ ’ 75 ’ ’ 76 ’ ’ 77 ’ ’ 78 ’  ]; ]; ]; ]; ]; ]; ]; ];  $origMatrix [ 8 ] [ 1 ] = $ $origMatrix [ 8 ] [ 2 ] = $ $origMatrix [ 8 ] [ 3 ] = $ $origMatrix [ 8 ] [ 4 ] = $ $origMatrix [ 8 ] [ 5 ] = $ $origMatrix [ 8 ] [ 6 ] = $ $origMatrix [ 8 ] [ 7 ] = $ $origMatrix [ 8 ] [ 8 ] = $ $ i t e r a t i o n =20000;  POST [ POST [ POST [ POST [ POST [ POST [ POST [ POST [  ’ 81 ’ ’ 82 ’ ’ 83 ’ ’ 84 ’ ’ 85 ’ ’ 86 ’ ’ 87 ’ ’ 88 ’  ]; ]; ]; ]; ]; ]; ]; ];  echo ”<t a b l e b o r d e r=2 c e l l p a d d i n g =5 c e l l s p a c i n g =5>” ; echo ”<t r ><th>I t e r a t i o n </th><th>Chess Board</th><th>” ; echo ” Current Matrix ” ;  91  Appendix A. 8 Queens PHP Code echo ”</th>” ; echo ”</t r >” ; echo ”<t r ><td >0</td><td>” ; PrintChessMatrix ( $origMatrix ) ; echo ”</td><td>” ; PrintMatrix ( $origMatrix ) ; echo”</td>” ; $T1old=$ o r i g M a t r i x ; $T2old=$ o r i g M a t r i x ; $T3old=$ o r i g M a t r i x ; ; $T4old=$ o r i g M a t r i x ; $Tadaold=d i a g p r o j ( $T1old , $T2old , $T3old , $T4old ) ;  $ s o l v e d =0; $ i =0; while ( ! $ s o l v e d ) { $T1=new1 ( $T1old , $T2old , $T3old , $T4old ) ; $T2=new2 ( $T1old , $T2old , $T3old , $T4old ) ; $T3=new3 ( $T1old , $T2old , $T3old , $T4old ) ; $T4=new4 ( $T1old , $T2old , $T3old , $T4old ) ; $Tada=d i a g p r o j ( $T1 , $T2 , $T3 , $T4 ) ; i f ( c h e c k q u e e n s ( $Tada ) ) { $ s o l v e d =1; } echo ”<t r ><td>$ i </td><td>” ; P r i n t C h e s s M a t r i x ( $Tada ) ; echo ”</td><td>” ; P r i n t M a t r i x ( $Tada ) ; echo ”</td>” ; echo ”</t r >” ; $ i ++; $T1old $T2old $T3old $T4old  = = = =  $T1 ; $T2 ; $T3 ; $T4 ;  } echo ”<big >S u c c e s s ! ?>  S o l v e d i n ” . ( $ i −1). ” i t e r a t i o n s . </ big >” ;  92  Appendix B  Sudoku PHP Code We first have sudokufunctions.php, then our sudoku code. 
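Both appendices drive the iteration with the same product-space Douglas-Rachford step: each copy is updated via new1() through new4() as P(2 PD - T) + T - PD, where PD is the diagonal projection (the average of the four copies) and P is the projection onto one constraint set, and the candidate solution is the average of the copies. As a compact illustration of that update, here is a minimal Python sketch of our own (the names `average` and `dr_step` are ours, and the two toy convex constraints are not part of the thesis code):

```python
# A minimal sketch (our illustration, not the thesis code) of the
# product-space Douglas-Rachford step implemented by new1()-new4():
#     T_i  <-  P_i(2*avg - T_i) + (T_i - avg),
# where avg is the diagonal projection (the average of all copies)
# and P_i projects onto the i-th constraint set.  As with $Tada above,
# the candidate solution is the average of the copies.

def average(copies):
    """Diagonal projection: componentwise average of all copies."""
    n = len(copies)
    return [sum(c[k] for c in copies) / n for k in range(len(copies[0]))]

def dr_step(copies, projections):
    """One Douglas-Rachford iteration over all copies."""
    avg = average(copies)
    out = []
    for T, P in zip(copies, projections):
        reflected = [2 * a - t for a, t in zip(avg, T)]
        PT = P(reflected)
        out.append([p + t - a for p, t, a in zip(PT, T, avg)])
    return out

# Two toy convex constraints in R^2: the lines {y = 0} and {x = 1},
# which intersect at the point (1, 0).
P1 = lambda v: [v[0], 0.0]   # project onto {y = 0}
P2 = lambda v: [1.0, v[1]]   # project onto {x = 1}

copies = [[0.0, 3.0], [0.0, 3.0]]
for _ in range(500):
    copies = dr_step(copies, [P1, P2])
solution = average(copies)   # approaches the intersection point (1, 0)
```

For convex sets such as these two lines the averaged iterates converge to a point of the intersection; the appendices apply the identical update to the nonconvex unit-vector constraints of the 8-queens and Sudoku models, where convergence is observed experimentally but not guaranteed by the convex theory.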
<?php // This code i s p r o t o t y p e and i s p r o v i d e d as i s under t h e GNU General P u b l i c L i c e n s e // Author : Jason Schaad August 11 , 2010 f u n c t i o n makeunit ( $ i ) { // r e t u r n s a v e c t o r ( a r r a y ) // $ i s h o u l d be a f l o a t $v [ 1 ] = 0 ; $v [ 2 ] = 0 ; $v [ 3 ] = 0 ; $v [ 4 ] = 0 ; $v [ 5 ] = 0 ; $v [ 6 ] = 0 ; $v [ 7 ] = 0 ; $v [ 8 ] = 0 ; $v [ 9 ] = 0 ; // e a s i e r way t o i n i t i a l i z e an a r r a y i f ( $ i >0) { $v [ round ( $ i ) ] = 1 ; } r e t u r n $v ; }  f u n c t i o n u n i t p r o j ( $v ) { // i n p u t $v i s a v e c t o r // f i n d s b i g g e s t component and makes 1 and t h e r e s t 0 // e . g . [ 0 . 4 , . 6 , . 7 , . 8 , 0 , 0 , 0 , 0 , 0 ] = [ 0 , 0 , 0 , 1 , 0 , 0 , 0 , 0 , 0 ] // r e t u r n s a v e c t o r ( a r r a y ) // r e l i e s on makeunit $imax =1; $entrymax = $v [ 1 ] ; f o r ( $ i =2; $ i <=9; $ i ++) { i f ( $v [ $ i ]> $entrymax ) { $imax = $ i ; $entrymax =$v [ $ i ] ; } } r e t u r n makeunit ( $imax ) ; } f u n c t i o n addcubes ( $T1 , $T2 ) { // add 2 9 x9x9 c u b e s  93  Appendix B. Sudoku PHP Code // r e t u r n s t h e ”sum cube ” f o r ( $ i =1; $ i <=9; $ i ++) { f o r ( $ j =1; $ j <=9; $ j++) { f o r ( $k =1; $k <=9; $k++) { $S [ $ i ] [ $ j ] [ $k ]=$T1 [ $ i ] [ $ j ] [ $k ] + $T2 [ $ i ] [ $ j ] [ $k ] ; } } } r e t u r n $S ;  } f u n c t i o n p r i n t c u b e ( $cube ) { // t h i s f u n c t i o n p r i n t s t h e cube t o t h e s c r e e n // n o t i n c l u d e d i n u n i t t e s t s as i t i s n e v e r used i n my a c t u a l sudoku code // i t was used f o r d e b u g g i n g when I was c r e a t i n g t h e code f o r ( $x =1; $x <=9; $x++) { f o r ( $y =1; $y <=9; $y++) { f o r ( $z =1; $z <=9; $z++) { echo ” ( ” . $x . ” , ” . $y . ” , ” . $z . ” ) =” . $cube [ $x ] [ $y ] [ $z ] . 
”<br>” ; } } } } function printarray ( $square ) { // t h i s f u n c t i o n p r i n t s a 2 d i m e n s i o n a l a r r a y t o t h e s c r e e n ( our ” f l a t t e n e d ” Sudoku ) // t h e r e i s no need f o r a u n i t t e s t f o r t h i s code as i t o u t p u t s t o t h e s c r e e n o n l y f o r ( $x =1; $x <=9; $x++) { f o r ( $y =1; $y <=9; $y++) { echo $ s q u a r e [ $x ] [ $y ] . ” ” ; } echo ”<br />” ; } } f u n c t i o n c o l u m n p r o j ( $Tcube ) { // f i n d s t h e b i g g e s t e n t r y i n e v e r y column o f our sudoku cube // and makes i t i n t o a u n i t v e c t o r − r e t u r n s a cube f o r ( $ j =1; $ j <=9; $ j++) { f o r ( $k =1; $k <=9; $k++) { f o r ( $ i =1; $ i <=9; $ i ++) { $tempv [ $ i ]= $Tcube [ $ i ] [ $ j ] [ $k ] ; } $tempw=u n i t p r o j ( $tempv ) ;  94  Appendix B. Sudoku PHP Code f o r ( $ i =1; $ i <=9; $ i ++) { $ c p r o j T c u b e [ $ i ] [ $ j ] [ $k ]=$tempw [ $ i ] ; } } } return $cprojTcube ; } f u n c t i o n r o w p r o j ( $Tcube ) { // f i n d s t h e b i g g e s t e n t r y i n e v e r y row o f our sudoku cube // and makes i t i n t o a u n i t v e c t o r which i t r e t u r n s f o r ( $ i =1; $ i <=9; $ i ++) { f o r ( $k =1; $k <=9; $k++) { f o r ( $ j =1; $ j <=9; $ j++) { $tempv [ $ j ]= $Tcube [ $ i ] [ $ j ] [ $k ] ; } $tempw=u n i t p r o j ( $tempv ) ; f o r ( $ j =1; $ j <=9; $ j++) { $ r p r o j T c u b e [ $ i ] [ $ j ] [ $k ]=$tempw [ $ j ] ; } } } return $rprojTcube ; } f u n c t i o n m e e t p r o j ( $Tcube ) { // This f u n c t i o n c a l c u l a t e s a ” u n i t v e c t o r ” b a s e d on t h e // m e e t i n g rooms ( t h e 3 x 3 rooms ) i t u s e s meetpro2 f u n c t i o n // t o do a l o t o f t h e work ( l o o p s a r e good . . 
and so i s re−u s i n g your code ) // i n i t i a l i z e mprojTcube t o have a l l 0 ’ s f o r ( $ i =1; $ i <=9; $ i ++) { f o r ( $ j =1; $ j <=9; $ j++) { f o r ( $k =1; $k <=9; $k++) { $mprojTcube [ $ i ] [ $ j ] [ $k ] = 0 ; } } } // 1−1 s u b c u b e $mprojTcube=m e e t p r o j 2 ( $mprojTcube , $Tcube , 1 // 1−2 s u b c u b e $mprojTcube=m e e t p r o j 2 ( $mprojTcube , $Tcube , 4 // 1−3 s u b c u b e $mprojTcube=m e e t p r o j 2 ( $mprojTcube , $Tcube , 7 // 2−1 s u b c u b e $mprojTcube=m e e t p r o j 2 ( $mprojTcube , $Tcube , 1 // 2−2 s u b c u b e $mprojTcube=m e e t p r o j 2 ( $mprojTcube , $Tcube , 4  ,1); ,1); ,1); ,4); ,4);  95  Appendix B. Sudoku PHP Code // 2−3 s u b c u b e $mprojTcube=m e e t p r o j 2 ( $mprojTcube , $Tcube , 7 // 3−1 s u b c u b e $mprojTcube=m e e t p r o j 2 ( $mprojTcube , $Tcube , 1 // 3−2 s u b c u b e $mprojTcube=m e e t p r o j 2 ( $mprojTcube , $Tcube , 4 // 3−3 s u b c u b e $mprojTcube=m e e t p r o j 2 ( $mprojTcube , $Tcube , 7  ,4); ,7); ,7); ,7);  r e t u r n $mprojTcube ; } f u n c t i o n m e e t p r o j 2 ( $mprojTcube , $Tcube , $index1 , $ i n d e x 2 ) { // This f u n c t i o n c a l c u l a t e s a ” u n i t v e c t o r ” b a s e d on t h e // m e e t i n g rooms ( t h e 3 x 3 rooms ) // t h i s i s coded u s i n g l o o p s and t h e v a r i a b l e s $ i n d e x 1 and $ i n d e x // 2 h e l p g e t t h e i n d i c e s c o r r e c t f o r t h e funny ” u n i t v e c t o r ” f o r ( $k =1; $k <=9; $k++) { f o r ( $ j =0; $ j <3; $ j++) { f o r ( $ i =0; $ i <3; $ i ++) { $tempv [1+ $ i +3∗ $ j ]= $Tcube [ $ i n d e x 2+$ j ] [ $ i n d e x 1+$ i ] [ $k ] ; } } $tempw = u n i t p r o j ( $tempv ) ; f o r ( $ j =0; $ j <3; $ j++) { f o r ( $ i =0; $ i <3; $ i ++) { $mprojTcube [ $ i n d e x 2+$ j ] [ $ i n d e x 1+$ i ] [ $k ]=$tempw[1+ $ i +3∗ $ j ] ; } } } r e t u r n $mprojTcube ; }  f u n c t i o n s c a l m u l t c u b e ( $alpha , $T ) { // Takes as i n p u t a cube and a s c a l a r and m u l t 
i p l e s them − o u t p u t i s a c u b e f o r ( $ i =1; $ i <=9; $ i ++) { f o r ( $ j =1; $ j <=9; $ j++) { f o r ( $k =1; $k <=9; $k++) { $S [ $ i ] [ $ j ] [ $k ]= $ a l p h a ∗$T [ $ i ] [ $ j ] [ $k ] ; } } } r e t u r n $S ; } // end f u n c t i o n  96  Appendix B. Sudoku PHP Code f u n c t i o n thesame ( $T1 , $T2 ) { // compares two a r r a y s − i f t h e y a r e t h e same r e t u r n t r u e , o t h e r w i s e f a l s e $ c o u n t e r =0; f o r ( $ i =1; $ i <=9; $ i ++) { f o r ( $ j =1; $ j <=9; $ j++) { i f ( $T1 [ $ i ] [ $ j ] ! = $T2 [ $ i ] [ $ j ] ) { $ c o u n t e r ++; } } } i f ( $ c o u n t e r >0) { r e t u r n FALSE; } else { r e t u r n TRUE; } } f u n c t i o n g i v e n p r o j ( $Tcube , $S ) { // t h i s f u n c t i o n u s e s t h e o r i g i n a l sudoku t h a t was g i v e n t o c a l c u l a t e // a p r o j e c t i o n ( our g i v e n Sudoku c o n s t r a i n t // − which i s j u s t a 9 by 9 a r r a y n o t a cube ) . Returns a p r o j e c t e d c ub e . $gprojTcube=$Tcube ; f o r ( $ i =1; $ i <=9; $ i ++) { f o r ( $ j =1; $ j <=9; $ j++) { i f ( $S [ $ i ] [ $ j ] >0) { $tempw=makeunit ( $S [ $ i ] [ $ j ] ) ; f o r ( $k =1; $k <=9; $k++) { $gprojTcube [ $ i ] [ $ j ] [ $k ]=$tempw [ $k ] ; } } i f ( $S [ $ i ] [ $ j ]==0) { f o r ( $k =1; $k <=9; $k++) { $tempv [ $k ]= $Tcube [ $ i ] [ $ j ] [ $k ] ; } $tempw=u n i t p r o j ( $tempv ) ; f o r ( $k =1; $k <=9; $k++) { $gprojTcube [ $ i ] [ $ j ] [ $k ]=$tempw [ $k ] ; } } } } r e t u r n $gprojTcube ; } // end f u n c t i o n f u n c t i o n makearray ( $Tcube ) { // f l a t t e n s a cube (9 x9x9 ) i n t o an a r r a y (9 x9 ) so we can p r i n t i t t o t h e s c r e e n  97  Appendix B. 
Sudoku PHP Code f o r ( $ i =1; $ i <=9; $ i ++) { f o r ( $ j =1; $ j <=9; $ j++) { $temp =1; $tempval=$Tcube [ $ i ] [ $ j ] [ 1 ] ; f o r ( $k =2; $k <=9; $k++) { i f ( $Tcube [ $ i ] [ $ j ] [ $k]> $tempval ) { $temp=$k ; $tempval=$Tcube [ $ i ] [ $ j ] [ $k ] ; } } $A [ $ i ] [ $ j ]=$temp ; } } r e t u r n $A ; } function stuckinloop ( $i ) { // no need t o t e s t t h i s f u n c t i o n , i t i s t o o s i m p l e . i f ( $ i >100000) { throw new E x c e p t i o n ( ’We a r e s t u c k i n a l o o p with o v e r 10000 i t e r a t i o n s . ’ ) ; }  } f u n c t i o n d i a g p r o j ( $T1 , $T2 , $T3 , $T4 ) { // t a k e f o u r c u b e s and computes t h e d i a g o n a l $T=addcubes ( $T1 , $T2 ) ; $T=addcubes ( $T , $T3 ) ; $T=addcubes ( $T , $T4 ) ; $T=s c a l m u l t c u b e ( 0 . 2 5 , $T ) ; r e t u r n $T ; } f u n c t i o n new1 ( $T1 , $T2 , $T3 , $T4 ) { // c r e a t e s a new cube w i t h t h e column p r o j e c t i o n $PD=d i a g p r o j ( $T1 , $T2 , $T3 , $T4 ) ; $S1 = s c a l m u l t c u b e ( 2 ,$PD ) ; $S2 = s c a l m u l t c u b e ( −1 ,$T1 ) ; $S3 = addcubes ( $S1 , $S2 ) ; $S4 = c o l u m n p r o j ( $S3 ) ; $S5 = addcubes ( $T1 , s c a l m u l t c u b e ( −1 ,$PD ) ) ; $S = addcubes ( $S4 , $S5 ) ; r e t u r n $S ; } f u n c t i o n new2 ( $T1 , $T2 , $T3 , $T4 ) { // c r e a t e s a new cube t h e row p r o j e c t i o n $PD=d i a g p r o j ( $T1 , $T2 , $T3 , $T4 ) ;  98  Appendix B. 
Sudoku PHP Code $S1 = s c a l m u l t c u b e ( 2 ,$PD ) ; $S2 = s c a l m u l t c u b e ( −1 ,$T2 ) ; $S3 = addcubes ( $S1 , $S2 ) ; $S4 = r o w p r o j ( $S3 ) ; $S5 = addcubes ( $T2 , s c a l m u l t c u b e ( −1 ,$PD ) ) ; $S = addcubes ( $S4 , $S5 ) ; r e t u r n $S ; } f u n c t i o n new3 ( $T1 , $T2 , $T3 , $T4 ) { // c r e a t e s a new cube w i t h t h e meet p r o j e c t i o n $PD=d i a g p r o j ( $T1 , $T2 , $T3 , $T4 ) ; $S1 = s c a l m u l t c u b e ( 2 ,$PD ) ; $S2 = s c a l m u l t c u b e ( −1 ,$T3 ) ; $S3 = addcubes ( $S1 , $S2 ) ; $S4 = m e e t p r o j ( $S3 ) ; $S5 = addcubes ( $T3 , s c a l m u l t c u b e ( −1 ,$PD ) ) ; $S = addcubes ( $S4 , $S5 ) ; r e t u r n $S ; } f u n c t i o n new4 ( $T1 , $T2 , $T3 , $T4 ) { // c r e a t e s a new cube w i t h t h e g i v e n p r o j e c t i o n global $origS ; $PD=d i a g p r o j ( $T1 , $T2 , $T3 , $T4 ) ; $S1 = s c a l m u l t c u b e ( 2 ,$PD ) ; $S2 = s c a l m u l t c u b e ( −1 ,$T4 ) ; $S3 = addcubes ( $S1 , $S2 ) ; $S4 = g i v e n p r o j ( $S3 , $ o r i g S ) ; $S5 = addcubes ( $T4 , s c a l m u l t c u b e ( −1 ,$PD ) ) ; $S = addcubes ( $S4 , $S5 ) ; r e t u r n $S ; } f u n c t i o n m e e t p r o j 2 2 ( $Tcube ) { // This f u n c t i o n c a l c u l a t e s a ” u n i t v e c t o r ” b a s e d on t h e // m e e t i n g rooms ( t h e 3 x 3 rooms ) // I coded i t d i f f e r e n t l y ( u s i n g more l o o p s ) , // b u t i t made i t much h a r d e r t o r e a d . We can s e e where an // a c t u a l m e e t i n g room i s l o c a t e d i n our cube t h i s way ! // 1−1 s u b c u b e f o r ( $k =1; $k <=9; $k++) { $tempv [ 1 ] = $Tcube [ 1 ] [ 1 ] [ $k ] ; $tempv [ 2 ] = $Tcube [ 1 ] [ 2 ] [ $k ] ; $tempv [ 3 ] = $Tcube [ 1 ] [ 3 ] [ $k ] ; $tempv [ 4 ] = $Tcube [ 2 ] [ 1 ] [ $k ] ; $tempv [ 5 ] = $Tcube [ 2 ] [ 2 ] [ $k ] ; $tempv [ 6 ] = $Tcube [ 2 ] [ 3 ] [ $k ] ; $tempv [ 7 ] = $Tcube [ 3 ] [ 1 ] [ $k ] ;  99  Appendix B. 
Sudoku PHP Code $tempv [ 8 ] = $Tcube [ 3 ] [ 2 ] [ $k ] ; $tempv [ 9 ] = $Tcube [ 3 ] [ 3 ] [ $k ] ; $tempw = u n i t p r o j ( $tempv ) ; $mprojTcube [ 1 ] [ 1 ] [ $k ] = $tempw [ 1 ] ; $mprojTcube [ 1 ] [ 2 ] [ $k ] = $tempw [ 2 ] ; $mprojTcube [ 1 ] [ 3 ] [ $k ] = $tempw [ 3 ] ; $mprojTcube [ 2 ] [ 1 ] [ $k ] = $tempw [ 4 ] ; $mprojTcube [ 2 ] [ 2 ] [ $k ] = $tempw [ 5 ] ; $mprojTcube [ 2 ] [ 3 ] [ $k ] = $tempw [ 6 ] ; $mprojTcube [ 3 ] [ 1 ] [ $k ] = $tempw [ 7 ] ; $mprojTcube [ 3 ] [ 2 ] [ $k ] = $tempw [ 8 ] ; $mprojTcube [ 3 ] [ 3 ] [ $k ] = $tempw [ 9 ] ; } // 1−2 s u b c u b e f o r ( $k =1; $k <=9; $k++) { $tempv [ 1 ] = $Tcube [ 1 ] [ 4 ] [ $k ] ; $tempv [ 2 ] = $Tcube [ 1 ] [ 5 ] [ $k ] ; $tempv [ 3 ] = $Tcube [ 1 ] [ 6 ] [ $k ] ; $tempv [ 4 ] = $Tcube [ 2 ] [ 4 ] [ $k ] ; $tempv [ 5 ] = $Tcube [ 2 ] [ 5 ] [ $k ] ; $tempv [ 6 ] = $Tcube [ 2 ] [ 6 ] [ $k ] ; $tempv [ 7 ] = $Tcube [ 3 ] [ 4 ] [ $k ] ; $tempv [ 8 ] = $Tcube [ 3 ] [ 5 ] [ $k ] ; $tempv [ 9 ] = $Tcube [ 3 ] [ 6 ] [ $k ] ; $tempw = u n i t p r o j ( $tempv ) ; $mprojTcube [ 1 ] [ 4 ] [ $k ] = $tempw [ 1 ] ; $mprojTcube [ 1 ] [ 5 ] [ $k ] = $tempw [ 2 ] ; $mprojTcube [ 1 ] [ 6 ] [ $k ] = $tempw [ 3 ] ; $mprojTcube [ 2 ] [ 4 ] [ $k ] = $tempw [ 4 ] ; $mprojTcube [ 2 ] [ 5 ] [ $k ] = $tempw [ 5 ] ; $mprojTcube [ 2 ] [ 6 ] [ $k ] = $tempw [ 6 ] ; $mprojTcube [ 3 ] [ 4 ] [ $k ] = $tempw [ 7 ] ; $mprojTcube [ 3 ] [ 5 ] [ $k ] = $tempw [ 8 ] ; $mprojTcube [ 3 ] [ 6 ] [ $k ] = $tempw [ 9 ] ; } // 1−3 s u b c u b e f o r ( $k =1; $k <=9; $k++) { $tempv [ 1 ] = $Tcube [ 1 ] [ 7 ] [ $k ] ; $tempv [ 2 ] = $Tcube [ 1 ] [ 8 ] [ $k ] ; $tempv [ 3 ] = $Tcube [ 1 ] [ 9 ] [ $k ] ; $tempv [ 4 ] = $Tcube [ 2 ] [ 7 ] [ $k ] ; $tempv [ 5 ] = $Tcube [ 2 ] [ 8 ] [ $k ] ; $tempv [ 6 ] = $Tcube [ 2 ] [ 9 ] [ $k ] ; $tempv [ 7 ] = $Tcube [ 3 ] [ 7 ] [ $k ] ; $tempv [ 8 ] = $Tcube [ 3 ] [ 8 ] [ $k ] ; $tempv [ 9 ] = $Tcube [ 3 ] [ 9 ] [ $k ] ; $tempw = u n i t p r o j ( $tempv ) ; $mprojTcube [ 1 ] [ 7 ] [ $k ] = 
$tempw [ 1 ] ; $mprojTcube [ 1 ] [ 8 ] [ $k ] = $tempw [ 2 ] ;  100  Appendix B. Sudoku PHP Code $mprojTcube $mprojTcube $mprojTcube $mprojTcube $mprojTcube $mprojTcube $mprojTcube  [ 1 ] [ 9 ] [ $k ] [ 2 ] [ 7 ] [ $k ] [ 2 ] [ 8 ] [ $k ] [ 2 ] [ 9 ] [ $k ] [ 3 ] [ 7 ] [ $k ] [ 3 ] [ 8 ] [ $k ] [ 3 ] [ 9 ] [ $k ]  = = = = = = =  $tempw [ 3 ] ; $tempw [ 4 ] ; $tempw [ 5 ] ; $tempw [ 6 ] ; $tempw [ 7 ] ; $tempw [ 8 ] ; $tempw [ 9 ] ;  } // 2−1 s u b c u b e f o r ( $k =1; $k <=9; $k++) { $tempv [ 1 ] = $Tcube [ 4 ] [ 1 ] [ $k ] ; $tempv [ 2 ] = $Tcube [ 4 ] [ 2 ] [ $k ] ; $tempv [ 3 ] = $Tcube [ 4 ] [ 3 ] [ $k ] ; $tempv [ 4 ] = $Tcube [ 5 ] [ 1 ] [ $k ] ; $tempv [ 5 ] = $Tcube [ 5 ] [ 2 ] [ $k ] ; $tempv [ 6 ] = $Tcube [ 5 ] [ 3 ] [ $k ] ; $tempv [ 7 ] = $Tcube [ 6 ] [ 1 ] [ $k ] ; $tempv [ 8 ] = $Tcube [ 6 ] [ 2 ] [ $k ] ; $tempv [ 9 ] = $Tcube [ 6 ] [ 3 ] [ $k ] ; $tempw = u n i t p r o j ( $tempv ) ; $mprojTcube [ 4 ] [ 1 ] [ $k ] = $tempw [ 1 ] ; $mprojTcube [ 4 ] [ 2 ] [ $k ] = $tempw [ 2 ] ; $mprojTcube [ 4 ] [ 3 ] [ $k ] = $tempw [ 3 ] ; $mprojTcube [ 5 ] [ 1 ] [ $k ] = $tempw [ 4 ] ; $mprojTcube [ 5 ] [ 2 ] [ $k ] = $tempw [ 5 ] ; $mprojTcube [ 5 ] [ 3 ] [ $k ] = $tempw [ 6 ] ; $mprojTcube [ 6 ] [ 1 ] [ $k ] = $tempw [ 7 ] ; $mprojTcube [ 6 ] [ 2 ] [ $k ] = $tempw [ 8 ] ; $mprojTcube [ 6 ] [ 3 ] [ $k ] = $tempw [ 9 ] ; } // 2−2 s u b c u b e f o r ( $k =1; $k <=9; $k++) { $tempv [ 1 ] = $Tcube [ 4 ] [ 4 ] [ $k ] ; $tempv [ 2 ] = $Tcube [ 4 ] [ 5 ] [ $k ] ; $tempv [ 3 ] = $Tcube [ 4 ] [ 6 ] [ $k ] ; $tempv [ 4 ] = $Tcube [ 5 ] [ 4 ] [ $k ] ; $tempv [ 5 ] = $Tcube [ 5 ] [ 5 ] [ $k ] ; $tempv [ 6 ] = $Tcube [ 5 ] [ 6 ] [ $k ] ; $tempv [ 7 ] = $Tcube [ 6 ] [ 4 ] [ $k ] ; $tempv [ 8 ] = $Tcube [ 6 ] [ 5 ] [ $k ] ; $tempv [ 9 ] = $Tcube [ 6 ] [ 6 ] [ $k ] ; $tempw = u n i t p r o j ( $tempv ) ; $mprojTcube [ 4 ] [ 4 ] [ $k ] = $tempw [ 1 ] ; $mprojTcube [ 4 ] [ 5 ] [ $k ] = $tempw [ 2 ] ; $mprojTcube [ 4 ] [ 6 ] [ $k ] = $tempw [ 3 ] ; $mprojTcube [ 5 ] [ 4 ] [ 
$k ] = $tempw [ 4 ] ; $mprojTcube [ 5 ] [ 5 ] [ $k ] = $tempw [ 5 ] ; $mprojTcube [ 5 ] [ 6 ] [ $k ] = $tempw [ 6 ] ; $mprojTcube [ 6 ] [ 4 ] [ $k ] = $tempw [ 7 ] ;  101  Appendix B. Sudoku PHP Code $mprojTcube [ 6 ] [ 5 ] [ $k ] = $tempw [ 8 ] ; $mprojTcube [ 6 ] [ 6 ] [ $k ] = $tempw [ 9 ] ; } // 2−3 s u b c u b e f o r ( $k =1; $k <=9; $k++) { $tempv [ 1 ] = $Tcube [ 4 ] [ 7 ] [ $k ] ; $tempv [ 2 ] = $Tcube [ 4 ] [ 8 ] [ $k ] ; $tempv [ 3 ] = $Tcube [ 4 ] [ 9 ] [ $k ] ; $tempv [ 4 ] = $Tcube [ 5 ] [ 7 ] [ $k ] ; $tempv [ 5 ] = $Tcube [ 5 ] [ 8 ] [ $k ] ; $tempv [ 6 ] = $Tcube [ 5 ] [ 9 ] [ $k ] ; $tempv [ 7 ] = $Tcube [ 6 ] [ 7 ] [ $k ] ; $tempv [ 8 ] = $Tcube [ 6 ] [ 8 ] [ $k ] ; $tempv [ 9 ] = $Tcube [ 6 ] [ 9 ] [ $k ] ; $tempw = u n i t p r o j ( $tempv ) ; $mprojTcube [ 4 ] [ 7 ] [ $k ] = $tempw [ 1 ] ; $mprojTcube [ 4 ] [ 8 ] [ $k ] = $tempw [ 2 ] ; $mprojTcube [ 4 ] [ 9 ] [ $k ] = $tempw [ 3 ] ; $mprojTcube [ 5 ] [ 7 ] [ $k ] = $tempw [ 4 ] ; $mprojTcube [ 5 ] [ 8 ] [ $k ] = $tempw [ 5 ] ; $mprojTcube [ 5 ] [ 9 ] [ $k ] = $tempw [ 6 ] ; $mprojTcube [ 6 ] [ 7 ] [ $k ] = $tempw [ 7 ] ; $mprojTcube [ 6 ] [ 8 ] [ $k ] = $tempw [ 8 ] ; $mprojTcube [ 6 ] [ 9 ] [ $k ] = $tempw [ 9 ] ; } // 3−1 s u b c u b e f o r ( $k =1; $k <=9; $k++) { $tempv [ 1 ] = $Tcube [ 7 ] [ 1 ] [ $k ] ; $tempv [ 2 ] = $Tcube [ 7 ] [ 2 ] [ $k ] ; $tempv [ 3 ] = $Tcube [ 7 ] [ 3 ] [ $k ] ; $tempv [ 4 ] = $Tcube [ 8 ] [ 1 ] [ $k ] ; $tempv [ 5 ] = $Tcube [ 8 ] [ 2 ] [ $k ] ; $tempv [ 6 ] = $Tcube [ 8 ] [ 3 ] [ $k ] ; $tempv [ 7 ] = $Tcube [ 9 ] [ 1 ] [ $k ] ; $tempv [ 8 ] = $Tcube [ 9 ] [ 2 ] [ $k ] ; $tempv [ 9 ] = $Tcube [ 9 ] [ 3 ] [ $k ] ; $tempw = u n i t p r o j ( $tempv ) ; $mprojTcube [ 7 ] [ 1 ] [ $k ] = $tempw [ 1 ] ; $mprojTcube [ 7 ] [ 2 ] [ $k ] = $tempw [ 2 ] ; $mprojTcube [ 7 ] [ 3 ] [ $k ] = $tempw [ 3 ] ; $mprojTcube [ 8 ] [ 1 ] [ $k ] = $tempw [ 4 ] ; $mprojTcube [ 8 ] [ 2 ] [ $k ] = $tempw [ 5 ] ; $mprojTcube [ 8 ] [ 3 ] [ $k ] = $tempw [ 6 ] ; $mprojTcube [ 9 ] [ 1 ] 
[$k] = $tempw[7];
    $mprojTcube[9][2][$k] = $tempw[8];
    $mprojTcube[9][3][$k] = $tempw[9];
}

// 3-2 subcube
for ($k = 1; $k <= 9; $k++) {
    $tempv[1] = $Tcube[7][4][$k];
    $tempv[2] = $Tcube[7][5][$k];
    $tempv[3] = $Tcube[7][6][$k];
    $tempv[4] = $Tcube[8][4][$k];
    $tempv[5] = $Tcube[8][5][$k];
    $tempv[6] = $Tcube[8][6][$k];
    $tempv[7] = $Tcube[9][4][$k];
    $tempv[8] = $Tcube[9][5][$k];
    $tempv[9] = $Tcube[9][6][$k];
    $tempw = unitproj($tempv);
    $mprojTcube[7][4][$k] = $tempw[1];
    $mprojTcube[7][5][$k] = $tempw[2];
    $mprojTcube[7][6][$k] = $tempw[3];
    $mprojTcube[8][4][$k] = $tempw[4];
    $mprojTcube[8][5][$k] = $tempw[5];
    $mprojTcube[8][6][$k] = $tempw[6];
    $mprojTcube[9][4][$k] = $tempw[7];
    $mprojTcube[9][5][$k] = $tempw[8];
    $mprojTcube[9][6][$k] = $tempw[9];
}

// 3-3 subcube
for ($k = 1; $k <= 9; $k++) {
    $tempv[1] = $Tcube[7][7][$k];
    $tempv[2] = $Tcube[7][8][$k];
    $tempv[3] = $Tcube[7][9][$k];
    $tempv[4] = $Tcube[8][7][$k];
    $tempv[5] = $Tcube[8][8][$k];
    $tempv[6] = $Tcube[8][9][$k];
    $tempv[7] = $Tcube[9][7][$k];
    $tempv[8] = $Tcube[9][8][$k];
    $tempv[9] = $Tcube[9][9][$k];
    $tempw = unitproj($tempv);
    $mprojTcube[7][7][$k] = $tempw[1];
    $mprojTcube[7][8][$k] = $tempw[2];
    $mprojTcube[7][9][$k] = $tempw[3];
    $mprojTcube[8][7][$k] = $tempw[4];
    $mprojTcube[8][8][$k] = $tempw[5];
    $mprojTcube[8][9][$k] = $tempw[6];
    $mprojTcube[9][7][$k] = $tempw[7];
    $mprojTcube[9][8][$k] = $tempw[8];
    $mprojTcube[9][9][$k] = $tempw[9];
}
return
$mprojTcube;
}

function checkMeetingRoom($sudoku, $index_x, $index_y) {
    // sudoku is the 9x9 array; index_x and index_y are the starting points
    // [1][1] [1][2] [1][3]
    // [2][1] [2][2] [2][3]
    // [3][1] [3][2] [3][3]
    // $one = array(1=>1, 1, 1, 2, 2, 2, 3, 3, 3);
    // $two = array(1=>1, 2, 3, 1, 2, 3, 1, 2, 3);
    $one = array(1=>$index_y, $index_y, $index_y, ($index_y+1), ($index_y+1),
        ($index_y+1), ($index_y+2), ($index_y+2), ($index_y+2));
    $two = array(1=>($index_x), ($index_x+1), ($index_x+2), ($index_x), ($index_x+1),
        ($index_x+2), ($index_x), ($index_x+1), ($index_x+2));
    // print_r($one);
    // print_r($two);
    for ($i = 1; $i <= 9; $i++) {
        for ($j = 1; $j <= 9; $j++) {
            if ($i != $j) {
                if ($sudoku[$one[$i]][$two[$i]] == $sudoku[$one[$j]][$two[$j]]) {
                    return 0;
                }
            }
        }
    }
    return 1;
}

function isValidSudoku($original, $sudoku) {
    // checks to see if the sudoku is valid - returns true if valid, false otherwise
    // input is the original constraints and the sudoku to check

    // Check if we meet the ROW constraints (1 to 9 in every row)
    // first check for duplicates
    for ($k = 1; $k <= 9; $k++) {
        for ($i = 1; $i <= 9; $i++) {
            for ($j = 1; $j <= 9; $j++) {
                if ($i != $j) {
                    if ($sudoku[$k][$i] == $sudoku[$k][$j]) {
                        // echo "Failed row constraint<br />";
                        return 0;
                    }
                }
            }
        }
    }

    // Check if we meet the COLUMN constraints (1 to 9 in every column)
    // first check for duplicates
    for ($k = 1; $k <= 9; $k++) {
        for ($i = 1; $i <= 9; $i++) {
            for ($j = 1; $j <= 9; $j++) {
                if ($i != $j) {
                    if ($sudoku[$i][$k] == $sudoku[$j][$k]) {
                        // echo "Failed column constraint<br />";
                        return 0;
                    }
                }
            }
        }
    }

    // Check MEETING ROOM constraints (1 to 9 in every meeting room)
    if (!checkMeetingRoom($sudoku, 1, 1)) { return 0; }
    if (!checkMeetingRoom($sudoku, 4, 1)) { return 0; }
    if (!checkMeetingRoom($sudoku, 7, 1)) { return 0; }
    if (!checkMeetingRoom($sudoku, 1, 4)) { return 0; }
    if (!checkMeetingRoom($sudoku, 4, 4)) { return 0; }
    if (!checkMeetingRoom($sudoku, 7, 4)) { return 0; }
    if (!checkMeetingRoom($sudoku, 1, 7)) { return 0; }
    if (!checkMeetingRoom($sudoku, 4, 7)) { return 0; }
    if (!checkMeetingRoom($sudoku, 7, 7)) { return 0; }

    // Lastly we check to make sure we have the given constraints
    for ($i = 1; $i <= 9; $i++) {
        for ($j = 1; $j <= 9; $j++) {
            $new_array[$i][$j] = $original[$i][$j] * $sudoku[$i][$j];
            $new_array[$i][$j] = sqrt($new_array[$i][$j]);
        }
    }
    // print_array($new_array);
    // echo "<br />";
    // print_array($original);
    if (!thesame($new_array, $original)) {
        return 0;
    }

    return 1;
}
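As an aside (not part of the thesis listing): each pairwise scan above makes 81 comparisons per 9-cell group, and the same all-distinct test can be phrased with a set. A hypothetical Python sketch, using dicts keyed from 1 to mimic the 1-based PHP arrays:

```python
def all_distinct(cells):
    # True when a 9-cell group (row, column, or "meeting room") has no repeats
    return len(set(cells)) == len(cells)

def row_ok(sudoku, k):
    # sudoku[x][y] with 1-based indices, mirroring the PHP arrays above
    return all_distinct([sudoku[k][i] for i in range(1, 10)])

def column_ok(sudoku, k):
    return all_distinct([sudoku[i][k] for i in range(1, 10)])
```

checkMeetingRoom could be handled the same way by collecting the nine cells of a 3x3 block into one list before testing.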
<?php
set_time_limit(0);

// Report simple running errors
error_reporting(E_ERROR | E_WARNING | E_PARSE);

// Verbose mode (shows all details)
$details = $_POST['details'];
if ($details != 1) {
    $details = 0;
}

// Pick up the form variables
for ($x = 1; $x <= 9; $x++) {
    for ($y = 1; $y <= 9; $y++) {
        $origS[$x][$y] = $_POST[$x . $y];
    }
}

// Display the initial Sudoku puzzle we will solve
// We also initialize a GLOBAL variable called origS
// which is our original Sudoku (which is a constraint)
echo "<table border=1>";
for ($x = 1; $x <= 9; $x++) {
    echo "<tr>";
    for ($y = 1; $y <= 9; $y++) {
        echo "<td>";
        if ($origS[$x][$y] == " ") {
            echo "&nbsp";
        } else {
            echo $origS[$x][$y];
        }
        echo "</td>";
    }
    echo "</tr>";
}
echo "</table>";

// include the functions to be used in sudoku
include('sudoku_functions.php');

// initialize the Scube 9x9x9 array to all zeros
for ($x = 1; $x <= 9; $x++) {
    for ($y = 1; $y <= 9; $y++) {
        for ($z = 1; $z <= 9; $z++) {
            $Scube[$x][$y][$z] = 0;
        }
    }
}

// Get a starting position based on the original sudoku given
// (the commented out line is used when we start at random positions - for research)
for ($x = 1; $x <= 9; $x++) {
    for ($y = 1; $y <= 9; $y++) {
        $tempv = makeunit($origS[$x][$y]);
        for ($z = 1; $z <= 9; $z++) {
            $Scube[$x][$y][$z] = $tempv[$z];
            // $Scube[$x][$y][$z] = rand(-10000, 10000);
        }
    }
}
echo "<br /><br />";

// We now start the iteration process
// $Scube is our starting position as defined above
$T1old = $Scube;
$T2old = $Scube;
$T3old = $Scube;
$T4old = $Scube;
$Tadaold = diagproj($T1old, $T2old, $T3old, $T4old);
$zerosinarow = 0;
$i = 1;
echo "working";
while (!isValidSudoku($origS, $mTada)) {
    // the stopping criteria is when there is no change
    // in the cubes for 10 iterations
    // we clone 4 cubes and make the diagonal
    $T1 = new1($T1old, $T2old, $T3old, $T4old);
    $T2 = new2($T1old, $T2old, $T3old, $T4old);
    $T3 = new3($T1old, $T2old, $T3old, $T4old);
    $T4 = new4($T1old, $T2old, $T3old, $T4old);
    $Tada = diagproj($T1, $T2, $T3, $T4);
    $mTada = makearray($Tada);
    if ($details == 1) {
        echo "Current Counter " . $i, "<br>";
        print_array($mTada);
    } else {
        echo ".";
    }
    $T1old = $T1;
    $T2old = $T2;
    $T3old = $T3;
    $T4old = $T4;
    $mTadaold = $mTada;
    $mTadaoldmatrix = $mTadamatrix;
    $i++;

    try {
        stuckinloop($i);
    } catch (Exception $e) {
        $badsudoku = print_r($origS, true);
        error_log($e->getMessage() . "\n", 3, "my-errors.log");
        error_log($badsudoku . "\n", 3, "my-errors.log");
        echo "Too many iterations - we are stuck..
maybe a bad Sudoku?";
        return;
    }
}

if ($details == 0) {
    echo "<br>Solved in " . $i . " iterations:<br>";
    print_array($mTada);
}
?>

Appendix C

Eckstein Calculation Details

In Chapter 4, we showed that Eckstein made a numerical error in (4.95) and, as a result, in (4.96). The matrices in fact are

G_{\lambda,A,B} = J_{\lambda A}(2 J_{\lambda B} - \operatorname{Id}) + (\operatorname{Id} - J_{\lambda B})
  = \begin{pmatrix} 21/40 & -1/16 \\ -1/40 & 9/16 \end{pmatrix}   (C.1)

and

S_{\lambda,A,B} = G_{\lambda,A,B}^{-1} - \operatorname{Id}
  = \begin{pmatrix} 43/47 & 10/47 \\ 4/47 & 37/47 \end{pmatrix}.   (C.2)

Below are the details of the calculation.

Figure C.1: Calculation Details
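The relation in (C.2) can be double-checked in exact rational arithmetic. A short Python sketch (not part of the thesis; it simply redoes the 2x2 inversion behind the hand calculation in Figure C.1, starting from the entries of (C.1)):

```python
from fractions import Fraction as F

# G_{lambda,A,B} from (C.1)
a, b = F(21, 40), F(-1, 16)
c, d = F(-1, 40), F(9, 16)

# 2x2 inverse: (1/det) * [[d, -b], [-c, a]]
det = a * d - b * c
G_inv = [[d / det, -b / det],
         [-c / det, a / det]]

# S = G^{-1} - Id
S = [[G_inv[0][0] - 1, G_inv[0][1]],
     [G_inv[1][0], G_inv[1][1] - 1]]
```

This reproduces the entries of (C.2), with det(G) = 47/160.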
