Open Collections

UBC Theses and Dissertations

Modeling the 8-queens problem and Sudoku using an algorithm based on projections onto nonconvex sets Schaad, Jason 2010


Full Text

Modeling the 8-Queens Problem and Sudoku using an Algorithm based on Projections onto Nonconvex Sets by Jason Schaad B.Sc., The University of British Columbia, 2000 M.A., Gonzaga University, 2006 A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The College of Graduate Studies (Mathematics) THE UNIVERSITY OF BRITISH COLUMBIA (Okanagan) August 2010 c Jason Schaad 2010Abstract We begin with some basic de nitions and concepts from convex analysis and projection onto convex sets (POCS). We next introduce various algorithms for converging to the intersection of convex sets and review various results (Elser’s Di erence Map is equivalent to the Douglas-Rachford and Fienup’s Hybrid Input-Output algorithms which are both equivalent to the Hybrid Projection-Re ection algorithm). Our main contribution is two-fold. First, we show how to model the 8-queens problem and following Elser, we model Sudoku as well. In both cases we use several very nonconvex sets and while the theory for convex sets does not apply, so far the algorithm  nds a so- lution. Second, we show that the operator governing the Douglas-Rachford iteration need not be a proximal map even when the two involved resolvents are; this re nes an observation of Eckstein. iiTable of Contents Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii Table of Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . vi 1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 2.1 Vector Space . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 2.2 Subspaces and Convex Sets . . . . . . . . . . . . . . . . . . . 4 2.3 The Inner Product and Inner Product Spaces . . . . . . . . . 5 2.4 Cauchy Sequences, Completeness and Hilbert spaces . . . . . 6 2.5 Product Space . . . . . . . . . . . . . . . . . . . . . . . . . . 11 2.6 Lipschitz and Nonexpansivity . . . . . . . . . . . . . . . . . . 13 2.7 Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 2.8 Properties of the Projector . . . . . . . . . . . . . . . . . . . 23 2.9 The Re ector . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 2.10 Miscellaneous . . . . . . . . . . . . . . . . . . . . . . . . . . 25 3 Projection Methods . . . . . . . . . . . . . . . . . . . . . . . . 26 3.1 Elser’s Di erence Map . . . . . . . . . . . . . . . . . . . . . . 26 3.2 Douglas-Rachford and Fienup HIO . . . . . . . . . . . . . . . 27 3.3 Fixed Point Results . . . . . . . . . . . . . . . . . . . . . . . 28 3.4 The Main Result . . . . . . . . . . . . . . . . . . . . . . . . . 29 4 Monotone Operators . . . . . . . . . . . . . . . . . . . . . . . . 33 4.1 De nitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 4.2 Eckstein’s Observation . . . . . . . . . . . . . . . . . . . . . 43 iiiTable of Contents 5 8 Queens . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 5.1 Queen’s Role in Chess . . . . . . . . . . . . . . . . . . . . . . 47 5.2 8 Queens History . . . . . . . . . . . . . . . . . . . . . . . . 47 5.3 Solutions to 8 Queens . . . . . . . . . . . . . . . . . . . . . . 48 5.4 Modeling 8 Queens . . . . . . . . . . . . . . . . . . . . . . . 50 5.5 Row Constraints . . . . . . . . . . . . . . . . . . . . . . . . . 51 5.6 Column Constraints . . . . . . . 
. . . . . . . . . . . . . . . . 51 5.7 Positive Slope Constraints . . . . . . . . . . . . . . . . . . . 52 5.8 Negative Slope Constraints . . . . . . . . . . . . . . . . . . . 53 5.9 Projection onto the Nearest Unit Vector . . . . . . . . . . . . 54 5.10 Iterations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 5.11 8 Queens Example . . . . . . . . . . . . . . . . . . . . . . . . 56 6 Sudoku . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60 6.1 What is Sudoku? . . . . . . . . . . . . . . . . . . . . . . . . 60 6.2 Modeling Sudoku . . . . . . . . . . . . . . . . . . . . . . . . 60 6.3 Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 6.3.1 North-South Hallways . . . . . . . . . . . . . . . . . . 63 6.3.2 East-West Hallways . . . . . . . . . . . . . . . . . . . 64 6.3.3 Meeting Rooms . . . . . . . . . . . . . . . . . . . . . 65 6.3.4 The Given Sudoku . . . . . . . . . . . . . . . . . . . . 66 6.4 Iterations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 6.5 Sudoku Example . . . . . . . . . . . . . . . . . . . . . . . . . 67 6.6 Sudoku Distance . . . . . . . . . . . . . . . . . . . . . . . . . 74 7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76 Appendices A 8 Queens PHP Code . . . . . . . . . . . . . . . . . . . . . . . . 80 B Sudoku PHP Code . . . . . . . . . . . . . . . . . . . . . . . . . 93 C Eckstein Calculation Details . . . . . . . . . . . . . . . . . . . 109 ivList of Figures 2.1 Example of Convexity . . . . . . . . . . . . . . . . . . . . . . 5 5.1 Queen moves . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 5.2 Initial Starting Position . . . . . . . . . . . . . . . . . . . . . 57 5.3 30 iterations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 5.4 60 iterations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 5.5 90 iterations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 5.6 Solved in 96 iterations . . . . . . . . . . . . . . . . . . . . . . 59 6.1 A Sample Sudoku Puzzle . . . . . . . . . . . . . . . . . . . . 61 6.2 A Sample Sudoku Solution . . . . . . . . . . . . . . . . . . . 62 6.3 9 Sudoku Elavator Shafts . . . . . . . . . . . . . . . . . . . . 63 6.4 A Sodoku Hallway . . . . . . . . . . . . . . . . . . . . . . . . 64 6.5 A Sodoku Meeting Room . . . . . . . . . . . . . . . . . . . . 65 6.6 Solution to Escargot . . . . . . . . . . . . . . . . . . . . . . . 73 6.7 Distance versus number of iterations . . . . . . . . . . . . . . 74 C.1 Calculation Details . . . . . . . . . . . . . . . . . . . . . . . . 110 vAcknowledgements First and foremost, I thank my supervisor Heinz Bauschke. His enthusiasm for mathematics is truly infectious. Without the support from my wife Yuki, the  exibility of my boss Jon Rever and the help of Liangjin Yao, I would not have been able to complete this thesis- thank you. I also thank Shawn Wang, Yves Lucet, Yong Gao, and Pulkit Bansal for careful reading of my thesis. viChapter 1 Introduction In mid August 2007, I attended the Second Mathematical Programming So- ciety International Conference on Continuous Optimization [ICC07] held at McMaster University in Hamilton Ontario. I had the pleasure of attending a talk given by Dr. Veit Elser from Cornell University. Elser talked very passionately about a new algorithm he discovered while researching the hy- brid input-output (HIO) algorithm of James R. 
Fienup, Ph.D., a University of Rochester optics researcher. I was intrigued by the variety of problems Elser was able to solve with his di erence map. A large number of problems can be formulated as follows:  nd an ele- ment of A \B with the understanding that  nding elements of A and B is easy [ERT07]. \Di cult problems can often be broken down into a collec- tion of smaller, more tractable, subproblems." [GE08]. Elser gives a very easy to understand example: Suppose you want to unscramble the anagram "ilnoostu". We think about two sets, A and B. Set A is the 20,160 distinct permutations of the given 8 letters. Set B, is the collection of eight-letter English words; the cardinality of set B is approximately 30,000. The inter- section of these two sets contains our answer, in fact A \B = fsolutiong. The algorithm projects onto the sets A and B, this projection needs to be computationally quick in order for the di erence map to be successful. Convergence to a  xed point has been proved when our sets are convex [BCL02]. This of course is not the case with Sudoku, as we are working with positive integers from 1 to 9. But, the Sudoku solver has solved every Sudoku problem we have thrown at it. In fact, Science News published an article about the Sudoku Solver [Reh] and people from around the world tried to  nd a Sudoku puzzle that it could not solve, but to no avail. The server that hosted the PHP code to solve Sudoku was overloaded because of the interest. The Internet Service Provider emailed and was upset because the algorithm was using over 99 percent of virtual server’s processor; we had to move the code to the University of British Columbia Servers. We were hoping to  nd a Sudoku puzzle that didn’t work so that we can understand what makes that particular Sudoku special. There may be a theorem out there that shows our method always works on Sudoku. We leave that for 1Chapter 1. Introduction future work in this area. Modeling Sudoku is complicated. Perhaps the greatest contribution of this thesis is providing the full details of modeling Sudoku in order for the di erence map to solve it; it is very ingenious. I want to be very clear, we did not discover the Sudoku model, Dr. Veit Elser did. In fact, talking with him at the International Math Conference in Ban in 2009 [bir09], I asked him how he ever came up with such a clever model. Elser told me simply, "I was motivated". He knew his di erence map could solve something as di cult as Sudoku puzzle, so he was determined to show the world. We wanted to try to model something simpler in order to better explain how to model Sudoku. Once we explain how to model 8-queens, the modeling of Sudoku is much easier to understand. The modeling of the 8-queens problem is a new application, and as far as I know Elser has not attempted it. As mathematicians we want to know why it works. The study of pro- jection operators reveals a strong connection to monotone operators. We revisit work by Eckstein, correct a computational error, and provide a much simpler example. The remainder of this thesis is organized as follows. Chapter 2 is a review of the well known de nitions and concepts from the broad area of convex analysis. We build the theory so we can de ne a Hilbert space and projections. In Chapter 3 we consider an algorithm for converging to the intersection of convex sets and review various related results. We show that Elser’s Di erence Map is equivalent to the Douglas-Rachford and Fienup’s Hybrid Input-Output algorithms. 
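As a concrete preview of this equivalence, the iteration of Chapter 3, T = P_A(2P_B - Id) + (Id - P_B), can be run numerically on two simple convex sets. The sketch below is not taken from the thesis (whose own solver is the PHP code reproduced in Appendices A and B); it assumes NumPy, and the choice of sets (a hyperplane and the closed unit ball, whose projections appear later as Examples 2.45 and 2.46), the starting point, the iteration count, and the function names are all illustrative assumptions.

import numpy as np

def project_hyperplane(x, a, beta):
    # Example 2.45: P_C x = x - ((<a, x> - beta) / ||a||^2) a
    return x - ((a @ x - beta) / (a @ a)) * a

def project_unit_ball(x):
    # Example 2.46: P_B x = x / max(||x||, 1)
    return x / max(np.linalg.norm(x), 1.0)

def douglas_rachford(x, proj_A, proj_B, iterations=200):
    # One step of T = P_A(2 P_B - Id) + (Id - P_B); see Proposition 3.2/3.4.
    for _ in range(iterations):
        pb = proj_B(x)
        x = proj_A(2.0 * pb - x) + (x - pb)
    # Applying P_B to the (near-)fixed point gives a point of the intersection.
    return proj_B(x)

a, beta = np.array([1.0, 1.0]), 1.0            # hyperplane <a, x> = beta
z = douglas_rachford(np.array([2.0, -3.0]),
                     lambda x: project_hyperplane(x, a, beta),
                     project_unit_ball)
print(z, a @ z, np.linalg.norm(z))             # z satisfies both constraints

In finite dimensions, Theorem 3.8 below guarantees that applying P_B to the limit produces a point of A ∩ B, which is what the final line checks.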
In Chapter 4 we show that the operator governing the Lions-Mercier iteration need not be a proximal map even when the two involved resolvents are; this re nes an observation of Eckstein. Chapter 5 focusses on modeling the 8-queens problem, and Chapter 6 concerns the Sudoku puzzle. Finally, Chapter 7 concludes this thesis. 2Chapter 2 Background This section is included so the results of this thesis can be understood in one self contained document. The readers can  rst familiarize themselves with the well known facts presented in this chapter. 2.1 Vector Space De nition 2.1 (Vector space). A vector space V over R is a nonempty set or collection of elements (called vectors) with two algebraic operations. These operations are called vector addition and multiplication of vectors by scalars. For all vectors x,y,z 2 V and  , 2 R the following conditions are required to hold: 1. x + y, called the sum of x and y, is a vector in V and is the unique result of addition applied to x and y. 2.  x, called the scalar multiple of x by  , is a vector in V and is the unique result of scalar multiplication applied to x by  . 3. Addition is commutative, x+ y = y + x. 4. Addition is associative, x+ (y + z) = (x+ y) + z. 5. There is a zero vector 0 in V such that x+ 0 = x. 6. There is an additive inverse  x such that x+ ( x) = 0. 7. Scalar multiplication is distributive,  (x+y) =  x+ y, and ( + )x =  x+  x. 8. Scalar multiplication is commutative,  ( x) = (  )x. 9. Scalar multiplication by 1 is the identity operator, 1x = Idx = x. 32.2. Subspaces and Convex Sets 2.2 Subspaces and Convex Sets De nition 2.2 (subspace). A subset W of a vector space V is a subspace of V if W is a vector space itself with the same operations of addition and scalar multiplication We note that the zero vector, 0, is in every subspace. The smallest possible subspace is f0g. And the largest possible subspace is the vector space V . It is very tedious and time consuming to check if a subset of V is a subspace by applying the de nition of a vector space. The following two facts are well known easier ways to verify if W is indeed a subspace. The proofs are omitted, but can be found in many Linear Algebra textbooks. Fact 2.3. [Mey00, page 162] Let V be a real vector space. Then W is a subspace of V if and only if W  V and  x+y 2W 8x; y 2W; 8 2 R. Proposition 2.4. If W1 and W2 are subspaces of a real vector space V , then W1 \W2 is a subspace of the vector space V . Proof. By Fact 2.3 we need only show that W1 \W2  V and  x + y 2 W1 \W2 8x; y 2 W 8 2 R. Pick x 2 W1 \W2, then x 2 W1 ) x 2 V as W1 is a subspace. Now pick x; y 2W1 \W2 this implies that x 2W1 and x 2W2; (2.1) y 2W1 and y 2W2: (2.2) Since W1 and W2 are both subspaces,  x+ y 2W1 (2.3)  x+ y 2W2: (2.4) We conclude that  x+ y 2W1 \W2. De nition 2.5 (A ne subspace). The set W is an a ne subspace of V if there exists a vector v 2 V and a subspace W0 of V such that W = v+W0 = fv + w : w 2W0g. De nition 2.6 (Convex set). Let X be a vector space and let C  X. We say C is convex if  C + (1  )C  C; 8 2 [0; 1]: (2.5) 42.3. The Inner Product and Inner Product Spaces Figure 2.1: Example of Convexity A geometric characterization of this de nition is that a set is convex if it contains all line segments whose end points are in the set. See Figure 2:1 It is easy to see that subspaces are convex sets. De nition 2.7 (Convex function). Let f : X ! ] 1,+1]. We say f is a convex function if f((1  )x+  y)  (1  )f(x) +  f(y); 8x; y 2 X;8 2 ]0; 1[ : (2.6) De nition 2.8 (Domain). Let f : X ! ] 1;+1]. 
The domain of a function is dom f = fx j f(x) < +1g: (2.7) De nition 2.9 (Epigraph). Let f : X ! ] 1;+1]. Then the epigraph of a function is the set of points lying on or above the graph, i.e., epi f = f(x;  ) jx 2 dom f;  2 R;   f(x)g: A function is convex if and only if its epigraph is a convex set; this is called a geometric characterization of a convex function. 2.3 The Inner Product and Inner Product Spaces De nition 2.10 (Inner Product). An inner product on X is a mapping of X  X (where X is a vector space) to R. For every pair of vectors x and y there is an associated scalar hx; yi, called the inner product of x and y, such that for all x; y; z 2 X and for all  2 R we have: 1. hx+ y; zi = hx; zi+ hy; zi 52.4. Cauchy Sequences, Completeness and Hilbert spaces 2. h x; yi =  hx; yi 3. hx; yi = hy; xi 4. hx; xi  0 5. hx; xi = 0() x = 0 De nition 2.11. (Inner Product Space) An inner product space (some- times called a pre-Hilbert Space) is a vector space X with an inner product de ned on X. De nition 2.12. (Orthogonality) An element x of an inner product space X is said to be orthogonal to an element y 2 X if hx; yi = 0. De nition 2.13. (Norm) Let V be a vector space. The function k k : V ! R is called a norm if the following hold: 1. kxk  0 and kxk = 0 if and only if x = 0 2. kx+ yk  kxk+ kyk; 8x; y 2 V 3. k xk = j jkxk 8 2 R; 8x 2 V An inner product on X de nes a norm on X which is given by kxk = p hx; xi  0: (2.8) Note that kxk2 = hx; xi  0: (2.9) We also have a metric on X which is given by d(x; y) = kx yk = p hx y; x yi  0: (2.10) 2.4 Cauchy Sequences, Completeness and Hilbert spaces De nition 2.14. (Cauchy Sequence) Let X be an inner product space. A sequence (xn) in X is said to be a Cauchy sequence if for every  > 0 there is an N = N( ) such that kxm  xnk <  for every m;n > N . De nition 2.15. (Completeness) The space X is said to be complete if every Cauchy sequence in X has a limit which is an element of X. 62.4. Cauchy Sequences, Completeness and Hilbert spaces De nition 2.16. (Hilbert Space) A complete inner product space is said to be a Hilbert Space. From now on we assume that X is a Hilbert Space. Example 2.17 (Euclidean space Rn). The space Rn is a Hilbert Space. This is the primary Hilbert Space focussed on in this thesis. [Kre78, Example 1.5-1] shows us that Rn is complete. The inner product is de ned by hx; yi = x1y1 +    + xnyn; (2.11) where x = (x1;    ; xn) and y = (y1;    ; yn). Now from (2.11) we see that kxk = p hx; xi = q x21 +    + x 2 n: (2.12) If n = 3, (2.11) gives us the usual dot product hx; yi = x  y = x1y1 + x2y2 + x3y3; (2.13) where x = (x1; x2; x3) and y = (y1; y2; y3). The orthogonality hx; yi = x  y = 0 (2.14) agrees with the concept of perpendicularity. Example 2.18 (‘2). [Kre78, 1.2-3] Pronounced "Little el 2" and sometimes written as ‘2(I) is the Hilbert space of sequences index by a countable set, usually N (but could be Z). Each element x of the space ‘2 is a sequence x = (xn) = (x1; x2; : : : ; xn; : : :) with k(xn)k = v u u t 1X n=1 x2n < +1: (2.15) For two elements (sequences) x = (xn) 2 ‘2 and y = (yn) 2 ‘2, the inner product is de ned by hx; yi = 1X n=1 hxn; yni: (2.16) De nition 2.19 (Bounded). Given a set A in X, A is said to be bounded if we have sup x2A kxk < +1 . 72.4. Cauchy Sequences, Completeness and Hilbert spaces Theorem 2.20. Every convergent sequence in a Hilbert space is a bounded Cauchy Sequence. Proof. Suppose xn  ! x. We  rst show (xn) is Cauchy. 
By the triangle inequality we have kxn  xmk = kxn  x+ x xmk  kxn  xk+ kx xmk: (2.17) We now let " > 0. Then 9N so for n > N and m > N we have kxn  xk < " 2 ; (2.18) kx xmk < " 2 : (2.19) Hence m;n > N kxn  xmk  kxn  xk+ kx xmk = " 2 + " 2 = "; (2.20) which proves the sequence is Cauchy. Now we show (xn) is bounded. Again we have xn  ! x. For " = 1, we get N 2 N such that n > N =) kxn  xk < 1: (2.21) We claim kxnk  kxk  kxn  xk: (2.22) By the triangle inequality kxnk = k  xnk = k  xn + x xk = k(x xn) + ( x)k  kxk+ kx xnk: (2.23) Thus, for n > N , kxnk < kxk+ 1: (2.24) If we choose M to be M = maxfkxk+ 1; kx1k; kx2k; :::; kxNkg (2.25) we get kxnk  M; 8n: (2.26) as desired. 82.4. Cauchy Sequences, Completeness and Hilbert spaces De nition 2.21 (Distance to a set). The distance from a vector x 2 X to a nonempty set A  X is de ned as follows dA(x) := inf y2A kx yk: Theorem 2.22 (Cauchy-Schwarz inequality). For any x and y in a Hilbert space X, we have that jhx; yij  kxkkyk: Proof. We prove this with two cases. Case 1: x = 0 or y = 0. Clear. Case 2: x 6= 0 and y 6= 0. Then 0   x kxk  y kyk ; x kxk  y kyk  (2.27) =     x kxk  y kyk     2 = hx; xi kxk2  2hx; yi kxkkyk + hy; yi kyk2 (2.28) () kxk2 kxk2  2hx; yi kxkkyk + kyk2 kyk2  0 (2.29) () 2  2hx; yi kxkkyk (2.30) () kxkkyk  hx; yi (2.31) () hx; yi  kxkkyk (2.32) This is true for all x. Let x be replaced by  x. Now we have k  xkkyk  h x; yi (2.33) ()  hx; yi  kxkkyk: (2.34) 92.4. Cauchy Sequences, Completeness and Hilbert spaces Now maxfh x; yi; hx; yig = jhx; yij: (2.35) Combine (2.32) and (2.34) to get jhx; yij  kxkkyk; 8x; y 2 X (2.36) as desired. Theorem 2.23 (Triangle Inequality). Then for every x; y 2 X, we have that kx+ yk  kxk+ kyk: Proof. We have kx+ yk2 = hx+ y; x+ yi (2.37) = hx; x+ yi+ hy; x+ yi (2.38) = hx; xi+ hx; yi+ hy; xi+ hy; yi (2.39) = kxk2 + 2hx; yi+ kyk2 (2.40)  kxk2 + 2jhx; yij+ kyk2: (2.41) By the Cauchy-Schwarz inequality we have kx+ yk2  kxk2 + 2kxkkyk+ kyk2 (2.42) = (kxk+ kyk)2: (2.43) We take the square root on both sides to obtain kx+ yk  kxk+ kyk (2.44) as desired. Theorem 2.24 (Parallelogram law). For every x; y 2 X kx+ yk2 + kx yk2 = 2(kxk2 + kyk2): Proof. We have, kx+ yk2 + kx yk2 = hx+ y; x+ yi+ hx y; x yi (2.45) = 2hx; xi+ 2hy; yi+ 2hx; yi  2hx; yi (2.46) = 2(kxk2 + kyk2): (2.47) 102.5. Product Space 2.5 Product Space De nition 2.25 (Product Space). Let N 2 N = f1; 2; 3; : : :g, and Xi be Hilbert spaces, with inner products h ;  ii, for each i 2 f1; : : : ; Ng. The corresponding product space H := X1  X2  : : : XN is de ned by H =  (xi) = (x1; x2; : : : ; xN ) j xi 2 Xi  ; (2.48) hx; yi = NX i=1 hxi; yiii; 8x = (xi); y = (yi) 2 H: (2.49) Then, kxk = v u u t NX i=1 kxik2i ; 8x = (xi) 2 H: (2.50) If Xi = X;8i 2 f1;    ; Ng, we denote H by XN . In fact, the well known space RN is actually a product space. We can write RN as R     R. Proposition 2.26. The Product Space is a Hilbert space. Proof. We  rst prove that hx; yi = NX i=1 hxi; yiii; 8x = (xi); y = (yi) 2 H: (2.51) is an inner product. Let x = (xi), y = (yi), z = (zi) 2 H, and  2 R. As described above, we need to show  ve properties in order to prove this is an inner product. Until the end of this proof, we abbreviate PN i=1 by P . 1. hx+ y; zi = h(xi) + (yi); (zi)i = h(xi + yi); (zi)i = X hxi + yi; ziii (2.52) Since xi; yi; zi 2 Xi and Xi is a Hilbert space, we can write = X (hxi; ziii + hyi; ziii) = X hxi; ziii + X hyi; ziii = hx; zi+ hy; zi: (2.53) 2. h x; yi = h (xi); (yi)i = h( xi); (yi)i (2.54) 112.5. 
Product Space Since xi; yi; zi 2 Xi and Xi is a Hilbert space, we can write h x; yi = X h xi; yiii = X  hxi; yiii =  X hxi; yiii =  hx; yi: (2.55) 3. hx; yi = h(xi); (yi)i = X hxi; yiii = X hyi; xiii = h(yi); (xi)i = hy; xi: (2.56) 4. hx; xi = h(xi); (xi)i = X hxi; xiii = X kxik 2 i  0: (2.57) 5. From above we know X kxik 2 i  0: (2.58) Then hx; xi = 0 = P kxik2i , xi = 0; 8i , x = (0; 0; : : :) = 0. Hence, h ;  i as de ned above is an inner product. It also induces the following norm, 8x = (xi) 2 H (2.59) kxk = p hx; xi = qX kxik2i : (2.60) Finally, to show this is a Hilbert space, we need to show that it is complete. Let (xn) be a Cauchy sequence inH. Write xn = (xn;i) = (xn;1; xn;2; : : : ; xn;N ). Since (xn) is a Cauchy sequence we know that 8"9N1 such that kxn  xmk < "; 8n;m > N1 (2.61) If we square both sides we get kxn  xmk 2 < "2: (2.62) Note that, xn  xm (2.63) = (xn;1; xn;2; : : : ; xn;N ) (xm;1; xm;2; : : : ; xm;N ) (2.64) = (xn;1  xm;1; xn;2  xm;2; : : : ; xn;N  xm;N ): (2.65) 122.6. Lipschitz and Nonexpansivity Thus, kxn  xmk 2 (2.66) = kxn;1  xm;1k 2 1 + kxn;2  xm;2k 2 2 + : : :+ kxn;N  xm;Nk 2 N : (2.67) By (2.62) kxn;1  xm;1k 2 1 + kxn;2  xm;2k 2 2 + : : :+ kxn;N  xm;N )k 2 N  " 2: (2.68) So, this implies that kxn;i  xm;ik 2 i  " 2; 8i: (2.69) Thus, kxn;i  xm;ik  "; 8i 8n;m > N1; (2.70) i.e., (xn;i) is a Cauchy sequence for every i, and xn;i is a sequence in Xi, which is complete. So 9x0;i 2 Xi such that xn;i  ! x0;i. Therefore, xn  ! (x0;1; x0;2; : : : ; x0;N ): (2.71) We have shown that our product space is indeed complete and thus a Hilbert space. De nition 2.27 (Diagonal space). Let N 2 N, and let X be a Hilbert space. The diagonal space  in XN is the subspace  = f(x; : : : ; x) 2 XN j x 2 Xg: (2.72) 2.6 Lipschitz and Nonexpansivity De nition 2.28 (Lipschitz). Let T : X ! X. Then T is said to be  - Lipschitz continuous if (8x 2 X)(8y 2 X) kTx Tyk   kx yk: Example 2.29 (The Identity). The Identity operator on X is 1-Lipschitz; this is very easy to show: kx yk = k Id(x) Id(y)k  1  kx yk: Theorem 2.30. Let F1 : X ! X be L1-Lipschitz and F2 : X ! X be L2-Lipschitz, then F1 + F2 is L1 + L2 Lipschitz. 132.6. Lipschitz and Nonexpansivity Proof. We start with the de nition of being Lipschitz continuous. F1 is L1 Lipschitz() 8x 8y kF1(x) F1(y)k  L1  kx yk (2.73) F2 is L2 Lipschitz() 8x 8y kF2(x) F2(y)k  L2  kx yk (2.74) Hence, k(F1 + F2)(x) (F1 + F2)(y)k = k(F1(x) + F2(x)) (F1(y) + F2(y))k (2.75) = k(F1(x) F1(y)) + (F2(x) F2(y))k; (2.76) which, by the triangle inequality, is  kF1(x) F1(y)k+ kF2(x) F2(y)k: (2.77) Now since F1 is L1-Lipschitz and F2 is L2-Lipschitz we get  L1  kx yk+L2  kx yk = kx yk (L1 +L2) = (L1 +L2)  kx yk; (2.78) as desired. De nition 2.31 (Nonexpansivity). Let T : X ! X. We say T is nonex- pansive if (8x 2 X)(8y 2 X) kTx Tyk  kx yk: Remark 2.32. We note that T being nonexpansive means it is 1-Lipschitz continuous. Therefore, the Identity is not only Lipschitz continuous but it is also nonexpansive. Example 2.33 (The Right Shift Operator). The circular right shift operator R : Xn ! Xn : (x1; : : : ; xn) 7! (xn; x1; : : : ; xn 1) satis es kRxk = kxk so that, using linearity of R, kR(x y)k = kRx Ryk = kx yk; and hence R is nonexpansive. Example 2.34 (Linear operator on ‘2). Recall that an element x in ‘2 is of the form x = (xn) = (x1; x2; : : :) such that k(xn)k = v u u t 1X n=1 xn 2 < +1 (2.79) 142.6. Lipschitz and Nonexpansivity We de ne a linear operator T : ‘2 ! 
‘2 by Tx := ( xn n ) = (x1; x2 2 ; x3 3 ; : : : ; xn n ; : : :); 8x = (xn) = (x1; x2; : : :) 2 ‘2: (2.80) So we have (8x = (xn) 2 ‘2) kTxk2 = 1X i=1 ( x2i i2 )  1X i=1 (x2i ) = kxk 2 (by (2.79)): (2.81) Taking the square root on both sides, kTxk  kxk; 8x 2 ‘2: (2.82) Now by linearity of T we have kTx Tyk = kT (x y)k  kx yk; 8x; y 2 ‘2: (2.83) Therefore, T is nonexpansive. De nition 2.35 (Continuity). Let T : X ! X. T is said to be continuous if for all sequences (xn) 2 X with xn ! x, then Txn ! Tx. Alternatively, given an arbitrary element x 2 X, for all  > 0 there exists  > 0 such that kTx Tyk <  whenever y 2 X with kx yk <  . Proposition 2.36. If F is L-Lipschitz, then F is continuous. Proof. The result is clear when L = 0. So assume L > 0. Let c 2 X. We will show that F is continuous at c. Given " > 0, we need to  nd  > 0 such that kF (x) F (c)k < " when kx ck <  . We now set  := " L > 0. Then (8x 2 X), kx ck <  ) kF (x) F (c)k  Lkx ck  L = ": (2.84) So kF (x) F (c)k < " as desired. Theorem 2.37. The composition of nonexpansive operators is nonexpan- sive. Proof. Let T and N be nonexpansive, and let x and y be in X. Then, kT (Nx) T (Ny)k  kNx Nyk  kx yk: (2.85) 152.6. Lipschitz and Nonexpansivity De nition 2.38 (Firmly nonexpansive). Let T : X ! X. We say T is  rmly nonexpansive if (8x 2 X)(8y 2 X) kTx Tyk2  hx y; Tx Tyi Theorem 2.39. Let T : X ! X. Then the following statements are equiv- alent: 1. T is  rmly nonexpansive. 2. kTx Tyk2 + k(x Tx) (y  Ty)k2  kx yk2 8x;8y. 3. k(2Tx x) (2Ty  y)k  kx yk 8x; 8y. 4. Id T is  rmly nonexpansive 5. T = 12N + 1 2 Id, where N is nonexpansive. Proof. We rewrite the statements above by letting a = x y, b = Tx Ty. The following statements are equivalent: 1. kbk2  ha; bi. 2. kbk2 + ka bk2  kak2. 3. k2b ak  kak. 4. Id T is  rmly nonexpansive. 5. T = 12N + 1 2 Id, where N is nonexpansive 2() kbk2 + ka bk2  kak2 (2.86) () kbk2 + kak2  2ha; bi+ kbk2  kak2 (2.87) () 2kbk2  2ha; bi (2.88) () kbk2  ha; bi (2.89) () 1 (2.90) 3() k2b ak2  kak2 (2.91) () 4kbk2  4ha; bi+ kak2  kak2 (2.92) () 4kbk2  4ha; bi (2.93) () kbk2  ha; bi (2.94) () 1 (2.95) 162.6. Lipschitz and Nonexpansivity 4() k(Id T )x (Id T )yk2  hx y; (Id T )x (Id T )yi (2.96) () ka bk2  ha; a bi (2.97) () kak2  2ha; bi+ kbk2  ha; ai  ha; bi (2.98) () kak2 + kbk2  kak2  ha; bi+ 2ha; bi (2.99) () kbk2  ha; bi (2.100) () 1 (2.101) Note that T = 1 2 Id + 1 2 N () N = 2T  Id : (2.102) Hence, 3() N is nonexpansive (2.103) () 5: (2.104) The last equivalence is a tool to create new  rmly nonexpansive map- pings from nonexpansive mappings. Example 2.40. We de ne A on ‘2 as follows: A : (x1; x2; x3; : : :) 7 ! (x1; x2 2 ; x3 3 ; : : :): (2.105) If we take A+ Id we get A+ Id : (x1; x2; x3; : : :) 7 ! (2x1; 3 2 x2; 4 3 x3; : : :): (2.106) We now de ne F to be the inverse, F := (A+ Id) 1 : ( 1;  2;  3; : : :) 7 ! ( 1 2  1; 2 3  2; 3 4  3; : : :): (2.107) We will now show that F := (A+ Id) 1 is  rmly nonexpansive. By (2.107), F is linear. We will now show that kFxk2  hFx; xi 8x; (2.108) 172.7. Projections which, by linearity of F , is equivalent to the  rm nonexpansivity of F . Let y = Fx. Then x = F 1y = (A + Id)y. Then kFxk2 = kyk2, as they are equal, and since hy;Ayi  0, hFx; xi = hy; (A+ Id)yi = hy;Ayi+ kyk2  kyk2 = kFxk2: (2.109) We have shown that hFx; xi  kFxk2 and therefore F is  rmly nonex- pansive. Corollary 2.41. All  rmly nonexpansive mappings are nonexpansive. Proof. 
For T , a  rmly nonexpansive mapping, we know that 8x;8y kT (x) T (y)k2  hT (x) T (y); x yi  kTx Tyk  kx yk: (2.110) We now consider two cases. Case 1: kT (x) T (y)k = 0. kT (x) T (y)k = 0  kx yk (2.111) as norms are always greater or equal to zero. So we have, kT (x) T (y)k  kx yk; (2.112) therefore T is nonexpansive, as desired. Case 2: kT (x) T (y)k > 0. Again we have kT (x) T (y)k2  kT (x) T (y)k  kx yk: (2.113) Divide both sides by kT (x) T (y)k to obtain, kT (x) T (y)k  kx yk; (2.114) as desired. 2.7 Projections The main focus of this section is to provide an introduction to projections onto convex sets. De nition 2.42. Let C  X, x 2 X, and c 2 C. Then c is the projec- tion of x onto C if (8c 2 C) kx c k  kx ck: 182.7. Projections In other words, c is a closest point to x 2 C. The associated operator PC : X  C : x 7! fc 2 Cj c is a projection of x onto C g is called the projection operator or projector. If PCx = fc g we also write c = PCx rather than fc g = PCx, which is a slight abuse of notation. Lemma 2.43. Let u and v be in X. Then the following holds: hu; vi  0() (8 2 R+)kuk  ku  vk () (8 2 [0; 1])kuk  ku  vk: Proof. Observe that (8 2 R) ku  vk2  kuk2 =  ( kvk2  2hu; vi): (2.115) Hence, for forward implications follow immediately. Conversely, if for every  2]0; 1]; kuk  ku   vk, then (2.115) implies that hu; vi   kvk2=2. As  # 0, we obtain hu; vi  0. Theorem 2.44 (The Projection Theorem). Let C be a nonempty closed convex subset of X. Then 8x 2 X there exists a unique c 2 C such that kx c k = dC(x). Moreover, c is characterized by c 2 C (2.116) and (8c 2 C) hc c ; x c i  0: (2.117) Proof. Fix x 2 X. Pick a sequence (cn) in C such that kx cnk  ! dC(x). By the parallelogram law, kcn  cmk 2 = 2kcn  xk 2 + 2kcm  xk 2  k(cn + cm) 2xk 2 (2.118) = 2kcn  xk 2 + 2kcm  xk 2  4k cn + cm 2  xk2 (2.119)  2kcn  xk 2 + 2kcm  xk 2  4(dC(x)) 2; (2.120) as dC(x)  kx cn + cm 2 k. Now as n;m  !1 192.7. Projections  ! 2(d2C(x)) + 2(dC(x)) 2  4(dC(x)) 2 = 0: (2.121) Therefore (cn) is Cauchy. Hence (cn) is Cauchy and thus convergent as X is a Hilbert space and thus complete: cn  ! c  2 X: (2.122) But (cn) lies in C and C is closed, so c 2 C as well. Now since cn  ! c we also have that x  cn  ! x  c and thus kx  cnk  ! kx  c k. On the other hand, kx cnk ! dC(x) by assumptions. Since limits are unique kx  c k = dC(x) so c 2 PCx. We have shown existence. Now we need to show uniqueness. We let c0 be another nearest point in PCx. We need only show that c = c0. Set (c0n) = (c  ; c0; c ; c0; :::), then kx  c0nk = dC(x). By (2.119) we deduce that kc  c0k2 = 0 i.e., c = c0. Now for the characterization. Pick c 2 C and set c =  c+(1  )c 2 C where  2 [0; 1]. Then, using Lemma 2.43 (with u = x c and v = c c ), we obtain kx c k = dC(x) (2.123) () (8c 2 C)(8 2 [0; 1]) kx c k  kx c k = k(x c  )  (c c )k (2.124) () (8c 2 C)hx c ; c c i  0: (2.125) We now give a few examples of projections. Example 2.45 (Hyperplane). (See [Deu01, page 99].) If a 2 Xnf0g;  2 R and C = fx 2 Xjha; xi =  g then PCx = x (ha; xi   ) kak2 a (2.126) Proof. We shall use the characterization of the Projection theorem. It can be easily shown that our set C is closed and convex. Next we call our projection candidate c and check that c 2 C. ha; x (ha; xi   ) kak2 ai = ha; xi  (ha; xi   ) kak2 ha; ai =  : (2.127) 202.7. Projections Next we take c 2 C, which means that ha; ci =  . 
Then, by simplifying and recalling that ha; ci =  and ha; ai = kak2, hc (x ha; xi   kak2 a); (ha; xi   ) kak2 ai (2.128) = ha; xi   kak2 (hc; ai  hx; ai+ (ha; xi   ) kak2 ha; ai) (2.129) = ha; xi   kak2 (  hx; ai+ ha; xi   ) (2.130) = ha; xi   kak2 (0) (2.131) = 0 (2.132)  0: (2.133) Since the two conditions for the characterization are satis ed, our candidate c is indeed the projection PCx Example 2.46 (The unit ball). We provide the projection onto the unit ball, C := B(0; 1) = fxjkxk  1g: PCx = x maxfkxk; 1g ; 8x 2 X: (2.134) Proof. C is closed and convex this implies that our projection is a singleton by Theorem 2.44. We denote c our projection candidate in (2.134). To prove this is a projection we need to show two things: 1. c 2 C 2. hc c ; x c i  0 8c 2 C; 8x 2 X Case 1: Assume that x 2 C. This means that x is in the unit ball, so maxfkxk; 1g = 1 and hence c = x. 1. Trivial by assumptions. 2. hc c ; x c i = hc x; x xi = hc c ; 0i = 0  0. Case 2: Assume that x =2 C. 1. x =2 C implies kxk > 1 and c = x kxk by de nition. We also have that kc k = 1 so c 2 C: 2. 8c 2 C we have kck  1 by de nition 212.7. Projections hc c ; x c i = hc x kxk ; x x kxk i (2.135) = hc x kxk ; (1 1 kxk )xi = hc; (1 1 kxk )xi  h x kxk ; (1 1 kxk )xi: (2.136) Now since (1 1 kxk ) kxk hx; xi = (1 1 kxk ) kxk kxk2 = kxk(1 1 kxk ) = kxk  1 > 0; (2.137) we have that hc c ; x c i = (2.138) hc; (1 1 kxk )xi  kxk(1 1 kxk ) (2.139)  kckkxk(1 1 kxk ) kxk(1 1 kxk ) (2.140) by the Cauchy-Schwarz inequality. So this = (kxk  1)(kck  1)  0; (2.141) as kxk  1 > 0 and kck  1  0. Example 2.47 (The Product Space). We show that the projection onto the product set, C := C1  C2  : : : Cn is: PC(x1; x2; : : : ; xn) = (PC1(x1); PC2(x2); : : : ; PCn(xn)): (2.142) Proof. Clearly, PCi(xi) 2 Ci. Then (PC1(x1); PC2(x2); : : : ; PCn(xn)) 2 C1  C2      Cn = C. Now 8c 2 C, let c = (c1; c2; : : : ; cn) with ci 2 Ci. We will show that hc ((PC1(x1); : : : ; PCn(xn)); (x1; x2; : : : ; xn) (PC1(x1); : : : ; PCn(xn))i  0 (2.143) 222.8. Properties of the Projector Now, h(c1; : : : ; cn) (PC1(x1); : : : ; PCn(xn)); (x1; : : : ; xn) (PC1(x1); : : : ; PCn(xn)i (2.144) = h(c1  PC1(x1); : : : ; cn  PCn(xn)); (x1  PC1(x1); : : : ; xn  PCn(xn))i (2.145) = hc1  PC1(x1); x1  PC1(x1)i+ : : :+ hcn  PCn(xn); xn  PCn(xn)i  0; (2.146) because every term above is less than or equal to 0 by the Theorem 2.44. Example 2.48 (The Diagonal Set). We show that the projection onto the Diagonal Set, C := f(x; x; : : : ; x) 2 Xng is: PC(x1; x2; : : : ; xn) = (x; x; : : : ; x); (2.147) where x = 1n P xi. Proof. C is a closed and convex set. Clearly, (x; : : : ; x) 2 C. Let c 2 C, say c = (x; x; : : : ; x) for some x 2 X. Then hc (x; : : : ; x); (x1; x2; : : : ; xn) (x; x; : : : ; xi (2.148) = h(x; x; : : : ; x) (x; : : : ; x); (x1; x2; : : : ; xn) (x; x; : : : ; xi (2.149) = h(x x; : : : ; x x); (x1  x; : : : ; xn  x)i (2.150) = hx x; x1  xi+ : : :+ hx x; xn  xi (2.151) = hx x; (x1  x) + : : :+ (xn  x)i (2.152) = hx x; (x1 + : : :+ xn) nxi (2.153) = hx x; nx nxi (2.154) = hx x; 0i (2.155) = 0 (2.156)  0: (2.157) The conclusion follows from the Projection theorem. 2.8 Properties of the Projector Theorem 2.49 (Projections are  rmly nonexpansive). Suppose C is a nonempty closed convex set in the Hilbert space X. Then the projection PC onto C is  rmly nonexpansive: kPCx PCyk 2  hx y; PCx PCyi; 8x; y 2 C (2.158) 232.9. The Re ector for all x; y 2 X. Proof. 
By (2.117), we have that kPCx PCyk 2 = hPCx PCy; PCx PCyi  hPCx PCy; PCx PCyi+ hx PCx; PCx PCyi (2.159) = hx PCy; PCx PCyi (2.160)  hx PCy; PCx PCyi+ hPCy  y; PCx PCyi (2.161) = hx y; PCx PCyi: So PC is  rmly nonexpansive as desired. Corollary 2.50. Suppose C is a nonempty closed convex set in the Hilbert space X. Then PC is nonexpansive. Proof. PC is  rmly nonexpansive and therefore by Corollary 2:41 nonexpan- sive. Remark 2.51. Since PC is nonexpansive, it is 1-Lipschitz and hence contin- uous. Remark 2.52. Since projection operators are nonexpansive, the composition of projection operators is nonexpansive. 2.9 The Re ector De nition 2.53. Let B be a subset of X. The re ector is de ned as RB := 2PB  Id. Proposition 2.54. Let B  X be a nonempty closed convex set. Then the re ector RB is nonexpansive. Proof. Recall Theorem 2.39 and Theorem 2.49, so PB = 2PB  Id + Id 2 = RB + Id 2 is  rmly nonexpansive. This means that RB is nonexpansive if and only if PB is  rmly nonex- pansive. The good news is that all projections are  rmly nonexpansive by Theorem 2.49. Proposition 2.55. The composition of re ectors is nonexpansive. 242.10. Miscellaneous Proof. We proved that the composition of nonexpansive operators is nonex- pansive Theorem 2.37. Since each re ector is nonexpansive by Proposition 2.54, we are done. 2.10 Miscellaneous Theorem 2.56. We claim if xn ! x, then (xn) is bounded. Proof. For " = 1, 9N such that 8n  N we have kxn  xk < " = 1; (2.162) by the de nition of a limit. By the triangle inequality, kxnk  kxk  kxn  xk  1 8n  N: (2.163) Thus, kxnk  kxk+ 1 8n  N: (2.164) Next we de ne M to be maxfkxk+ 1; kx1k; kx2k; :::; kxN 1kg. Thus kxnk  M; 8n: (2.165) Theorem 2.57. We claim if (xn) is bounded, and B is a nonempty closed convex set, then (PB(xn)) is also bounded. Proof. Fix y, since PB is  rmly nonexpansive by Theorem 2.49 and thus nonexpansive by Theorem 2.41 we have, kPB(xn) PB(y)k  kxn  yk  kxnk+ kyk: (2.166) Since k(xn)k is bounded we have, kxnk+ kyk  M + kyk (2.167) for some M > 0. By the triangle inequality, kPB(xn)k  kPB(y)k  kPB(xn) PB(y)k  M + kyk: (2.168) Therefore, kPB(xn)k  kPByk+M +kyk and (PB(xn)) is bounded. De nition 2.58 (Weakly convergent sequence). We say xn * x () hxn; zi  ! hx; zi 8z 2 X. De nition 2.59 (Weak cluster point). A vector z is a weak cluster point of a sequence (xn) if there is a subsequence (xkn) of (xn) such that xkn * z. 25Chapter 3 Projection Methods In this chapter we show that Elser’s Di erence Map algorithm is equiva- lent to the Douglas-Rachford and Fienup Hybrid Input-Output (HIO) al- gorithms. Throughout this chapter A and B are nonempty  xed convex and closed sets in X a real Hilbert space, and PA,PB,RA, and RB are the associated projections and re ectors onto the sets A and B. 3.1 Elser’s Di erence Map De nition 3.1. Elser’s Di erence Map [ERT07, page 419] is de ned by D(x) = x+  [PA  gB(x) PB  fA(x)] (3.1) where fA(x) = PA(x) (PA(x) x)= (3.2) and gB(x) = PB(x) + (PB(x) x)= ; (3.3) where  is a real parameter that is not equal to zero. Proposition 3.2. Elser’s Di erence Map, for  = 1, can be written in the form D = PA(2PB  Id) + (Id PB): (3.4) This is the same as Fienup’s HIO [Fie82, page 2763] and the Douglas- Rachford algorithm [LM79]. Proof. The Di erence Map, when  = 1, becomes D(x) = x+ [PA  gB(x) PB  fA(x)] (3.5) where fA(x) = PA(x) (PA(x) x) = x (3.6) 263.2. 
Douglas-Rachford and Fienup HIO and gB(x) = PB(x) + (PB(x) x) = 2PB(x) x: (3.7) We now plug into (3.5) giving: D(x) = x+ PA(2PB(x) x) PB(x): (3.8) Hence D = PA(2PB  Id) + (Id PB); (3.9) as desired. Lemma 3.3. If A is a closed subspace, in such a case PA is linear by [Deu01, 5.13(1) page 79], we can write D = PA(2PB  Id) + (Id PB) in a more symmetric fashion: D = PAPB + (Id PA)(Id PB): (3.10) Proof. Indeed, D = PA(2PB  Id) + (Id PB) (3.11) = 2PAPB  PA + Id PB (2PAPB = PAPB + PAPB) (3.12) = PAPB + PAPB  PA + Id PB (3.13) = PAPB + PA(PB  Id) + (Id PB) (3.14) = PAPB  PA(Id PB) + (Id PB) (3.15) = PAPB + ( PA + Id)(Id PB) (3.16) = PAPB + (Id PA)(Id PB) (3.17) as desired. 3.2 Douglas-Rachford and Fienup HIO Proposition 3.4. The Fienup’s Hybrid Input-Output Algorithm and Douglas- Rachford Algorithm is described by xn+1 =  PA(2PB  Id) + (Id PB)  (xn): (3.18) For brevity, we set T = PA(2PB  Id) + (Id PB): (3.19) Then T = 1 2 (RARB + Id); (3.20) where RA = 2PA  Id and RB = 2PB  Id. 273.3. Fixed Point Results Proof. Indeed, T = PA(2PB  Id) + (Id PB) (3.21) = (PARB + Id PB) (2PB  Id replaced with RB) (3.22) = 1 2 (2PARB + 2 Id 2PB) (factor out 1 2 ) (3.23) = 1 2 (2PARB + Id + Id 2PB) (write 2 Id as Id + Id = 2 Id) (3.24) = 1 2 (2PARB + Id RB) (2PB  Id replaced with RB) (3.25) = 1 2 (2PARB  RB + Id) (rearrange) (3.26) = 1 2 ((2PA  Id)RB + Id) (factor out RB) (3.27) = 1 2 (RARB + Id) (3.28) as desired. 3.3 Fixed Point Results Theorem 3.5. Set T = PA(2PB  Id) + (Id PB). Then PB(FixT ) = A \B  FixT (3.29) Proof. Fix x 2 X, and write as x = b + q, where b = PB(x) and hence q = x b. Then x = T (x) de nition of a  xed point (3.30) () x = PA(2PB  Id)(x) + (Id PB)(x) (3.31) () b+ q = PA(2b (b+ q)) + b+ q  b substitution (3.32) () b+ q = PA(2b b q) + q simplify (3.33) () b = PA(b q) recall b = PB(x) (3.34) () PB(x) = PA(PB(x) (x PB(x))) substitution (3.35) () PB(x) = PA(PB(x) x+ PB(x)) simplify (3.36) () PB(x) = PA(2PB(x) x) simplify (3.37) () PB(x) = PA(RB(x)): (3.38) All together this means that x = T (x)() PB(x) = PA(RB(x)), PB(x) 2 B in which case PB(x) 2 A. So when x 2 FixT , we have PB(x) 2 B \A, and 283.4. The Main Result since x = T (x) and x was an arbitrary  xed point we have PB(FixT )  A \B: (3.39) We now prove that A \ B  FixT . 8x 2 A \ B we have PA(x) = x and PB(x) = x so T (x) = PA(2PB  Id)(x) + (Id PB)(x) expand (3.40) = PA(2PB(x) x) + (x PB(x)) (3.41) = PA(2x x) + (x x) (3.42) = PA(x) (3.43) = x: (3.44) Hence A \B  FixT: (3.45) Now apply PB to (3.45) to deduce that A \B = PB(A \B)  PB(FixT ): (3.46) Combining (3.39) and (3.46), we see that PB(FixT ) = A \B: (3.47) 3.4 The Main Result We next present the main convergence result. We show what happens in a  nite dimensional Hilbert space and then give an example about why the proof fails when we try to extend to in nite dimensions. Throughout the remainder of this chapter, we set (see Proposition 3.4) T = 1 2 (RARB + Id) = PA(2PB  Id) + (Id PB): (3.48) Fact 3.6 (Opial). [Opi67] Suppose that T is a  rmly nonexpansive mapping from X to X with FixT 6= ?. Then for every x 2 X, the sequence (Tnx) weakly converges to some point in Fix T . 293.4. The Main Result Theorem 3.7. Suppose that A \ B 6= ?. Then T is  rmly nonexpansive and the sequence (xn) generated by xn+1 = 1 2 (RARB + Id)(xn) = T (xn) (3.49) converges weakly to some point x 2 Fix T , and PB(x) 2 A \ B. Moreover, the sequence (PB(xn)) is bounded and every weak cluster point of (PB(xn)) lies in A \B. Proof. Fix T contains A \B 6= ? by Theorem 3.5. 
The sequence (xn) = (T n(x0)) (3.50) converges weakly to some  xed point x of T by Fact 3.6. Since (xn) is bounded, it follows (PB(xn)) is bounded by Theorem 2.57. RARB is nonexpansive by Proposition 2.54 and Theorem 2.37. This implies that T = Id +RARB2 is  rmly nonexpansive by Theorem 2.39. Thus, kT (x) T (y)k2+k(Id T )(x) (Id T )(y)k2  kx yk2 8x; y 2 X: (3.51) Pick x = xn and y = x a  xed point. Then we get kxn  xk 2  kT (xn) T (x)k 2 + kxn  T (xn) x+ T (x)k 2 (3.52) = kxn+1  xk 2 + kxn  xn+1  x+ xk 2 (3.53) = kxn+1  xk 2 + kxn  xn+1k 2 (3.54) So mX n=1 (kxn  xk 2  kxn+1  xk 2)  mX n=1 kxn  xn+1k 2: (3.55) The summation on the left is kx1  xk 2  kxm+1  xk 2; (3.56) which is less than or equal to kx1  xk2 < +1. Hence 8m mX n=1 kxn  xn+1k 2  kx1  xk 2 < +1 (3.57) since kx1  xk2 is a constant we have 1X n=1 kxn  xn+1k 2  kx1  xk 2 < +1: (3.58) 303.4. The Main Result Hence, kxn  xn+1k2 ! 0, thus, kxn  xn+1k ! 0 i.e., xn  xn+1 ! 0: (3.59) It follows that xn  xn+1 = xn  PA(2PB  Id)xn  (Id PB)xn (3.60) = PBxn  PA(2PB  Id)xn ! 0: (3.61) In in nite dimensions, it is a fact that if (PBxn)n2N is bounded then it has weak cluster points by [Deu01, page 205, 9.12] and De nition 2.59, say PBxkn * z: (3.62) Now (PBxn)n2N lies in B and B is weakly closed by [Rud91, page 66, The- orem 3.12 corollaries (a)] (B is weakly closed since B is closed and convex) so its weak limit z 2 B. We have PBxn = PBxn  PA(2PB  Id)xn + PA(2PB  Id)xn; (3.63) recall that PBxn  PA(2PB  Id)xn ! 0. Thus PA(2PB  Id)xkn * z: (3.64) Since A is also weakly closed we have that z 2 A. Altogether we have z 2 A \B. The statement of Theorem 3.7 is even simpler in  nite dimensions. Theorem 3.8. Suppose that X is  nite dimensional and A\B 6= ?. Then the sequence (xn) generated by xn+1 = Txn; where T := 1 2 (RARB + Id) (3.65) converges to some point x 2 FixT , and PB(x) 2 A \B. Moreover, PB(xn)! PB(x) 2 A \B: (3.66) Proof. Note that T is a  rmly nonexpansive operator, because it is a pro- jection by Theorem 2.49. So xn ! x = Tx =) PBxn ! PBx (3.67) as PB is continuous. We know PBx 2 PB(Fix T ) = A \B by Theorem 3.5. 313.4. The Main Result Remark 3.9. The proof of the Theorem 3.8 does not work in in nite dimen- sions because PB may fail to be weakly continuous. Zarantonello [Zar71, page 245], showed that if xn * x; PBxn * PBx: (3.68) Let B be the unit-ball in ‘2. Then let xn = e1 + en * e1 = PBe1 as e1 2 B: (3.69) It is a fact that if we are outside the unit ball the closed-form projection is x kxk by Example 2.46. So if n  2, then PBxn = e1 + en ke1 + enk = e1 + enp 2 * e1p 2 : (3.70) So on one hand we have xn * e1; (3.71) but on the other we have PBxn * e1p 2 6= e1 = PBe1: (3.72) Altogether, PB is not weakly continuous. 32Chapter 4 Monotone Operators Throughout this chapter, X is a  nite-dimensional real Hilbert space with inner product h ;  i and induced norm k  k. C and D are two closed convex sets in X such that C \D 6= ?: Recall that if A : X ! X is linear, then A is automatically continuous (see, e.g., [Kre78, Page 94]). 4.1 De nitions We  rst introduce some fundamental de nitions. The arrow "!" is used for a single-valued mapping, whereas " " denotes a set-valued mapping, i.e. X ! P(X), the power set of X. De nition 4.1. Let T : X  X: We say T is monotone if hx  y ; x yi  0; (4.1) whenever x 2 T (x); y 2 T (y): Proposition 4.2. Let A : X ! X be linear. Then A is monotone, if and only if, hx;Axi  0 8x 2 X: (4.2) Proof. 
\)" For every x 2 X, by monotonicity and linearity of A we have hAx; xi = hAx A0; x 0i  0: (4.3) (4.3) holds by A0 = 0 (since A is linear). \(" For every x; y 2 X, since hx;Axi  0, we have  Ax Ay; x y  =  A(x y); x y   0: (4.4) 334.1. De nitions De nition 4.3 (Adjoint operator). Let A : X ! X be continuous and linear. We say A is the adjoint operator of A if hx;Ayi = hA x; yi 8x; y 2 X: (4.5) Remark 4.4. We know A exists and is unique, see [Kre78, Theorem 3.9-2]. Remark 4.5. We know A  = A, see [Deu01, Lemma 8.29 (3)]. Remark 4.6. If A 2 Rn n then A = AT , see [Mey00, Page 84]. De nition 4.7 (Symmetric). Let A : X ! X be linear. We say that A is symmetric if A = A. De nition 4.8 (Symmetric Part). Let A : X ! X be linear. We say A+ is the symmetric part of A and is de ned as follows: A+ = A+A 2 : (4.6) Proposition 4.9. Let A : X ! X be linear. Then A+ is symmetric. Proof. A + = ( A+A 2 )  = A  +A  2 = A +A 2 = A+. De nition 4.10 (Antisymmetric). Let A : X ! X be linear. We say that A is antisymmetric if A =  A: De nition 4.11 (Antisymmetric Part). Let A : X ! X be linear. We say A0 is the antisymmetric part of A and is de ned as follows: A0 = A A 2 : (4.7) Proposition 4.12. Let A : X ! X be linear. Then A0 is antisymmetric. Proof. A 0 = ( A A 2 )  = A   A  2 = A  A 2 =  ( A A 2 ) =  A0. Lemma 4.13. Let A : X ! X be linear. Then A is monotone , A is monotone. Proof. By Proposition 4.2 we have that, A is monotone () hx;Axi  0 8x (4.8) () hAx; xi  0 8x (4.9) () hx;A xi  0 8x (4.10) () A is monotone: (4.11) 344.1. De nitions De nition 4.14. Let A : X ! X be linear. We set qA : X ! R : x 7! 1 2 hx;Axi: (4.12) Proposition 4.15. Let A : X ! X be linear. Then rqA = A+. Proof. For x 2 X and h 2 Xnf0g, we have, qA(x+ h) qA(x) hA+x; hi khk (4.13) = qA(x) + qA(h) + 12hx;Ahi+ 1 2hAx; hi  qA(x) hA+x; hi khk (4.14) = qA(h) + 12hA  x; hi+ 12hAx; hi  hA+x; hi khk (4.15) = qA(h) + hA+x; hi  hA+x; hi khk (4.16) = qA(h) khk (4.17) = 1 2hh;Ahi khk : (4.18) Since A is continuous, we have, for M = kAk, kAhk  Mkhk; 8h 2 X: (4.19) Then, by Cauchy-Schwarz we get 0      qA(x+ h) qA(x) hA+x; hi khk     (4.20) =      1 2hh;Ahi khk      (4.21)  1 2khk  M  khk khk = 1 2 Mkhk ! 0 as khk ! 0+: (4.22) By the Squeeze theorem we get, lim khk!0+ qA(x+ h) qA(x) hA+x; hi khk = 0 (4.23) i.e., rqA = A+. So we have proven that qA is Fr echet di erentiable at x and rqA = A+. 354.1. De nitions Proposition 4.16. Let A : X ! X be linear. Then A is monotone if and only if qA  0. Proof. See Proposition 4.2. Proposition 4.17. Let A : X ! X be linear. Then qA = qA+. Proof. Let x 2 X. Then, 2qA+(x) = hA+x; xi = h A +A 2 x; xi = hA  x 2 ; xi+ h Ax 2 ; xi = h x 2 ; Axi+ h Ax 2 ; xi = hAx; xi = 2qA(x): Proposition 4.18. Let A : X ! X be linear. Then A is monotone if and only if A+ is monotone. Proof. By Proposition 4.2 we have that A is monotone () qA  0 by Proposition 4.16 (4.24) () qA+  0 by Proposition 4.17 (4.25) () A+ is monotone by Proposition 4.16 : (4.26) De nition 4.19. Let T : X  X be monotone. We call T maximal mono- tone if for every (y; y ) =2 graT there exists (x; x ) 2 graT with hx y; x  y i < 0. Fact 4.20. Let A : X  X be maximal monotone and (x0; x 0) 2 X  X. Let eA : X  X such that gra eA = graA (x0; x 0) ( i.e., a rigid translation of graA). Then eA is maximal monotone. Proof. Follows directly from De nition 4.19. Lemma 4.21. Every continuous monotone operator A : X ! X is maximal monotone. Proof. [RW98, Example 12.7]. 364.1. De nitions De nition 4.22 (Subdi erential). Let f : X ! ] 1;+1] and let x 2 X be such f(x) < +1. 
Then the subdi erential of f at x is @f(x) = fs 2 X j f(y)  f(x) + hs; y  xi; 8y 2 Xg: (4.27) The elements in @f(x) are called subgradients of f at x. See [Roc97, Part V] for basic properties of the subdi erential operator. Fact 4.23. @f(x) = rf(x) if f is Gateaux di erentiable, see [Z al02, Theo- rem 2.4.4(i)]. Example 4.24. Let X = R and suppose f(x) = jxj. Then, @f(x) = 8 >< >: f 1g if x < 0; [ 1;+1] if x = 0; f1g if x > 0: (4.28) Proof. Let x < 0 and s 2 @f(x). We have hs; y  xi  f(y) f(x) = jyj  jxj: (4.29) Let z 2 X and y = x+ tz, where t > 0. By (4.29), hs; tzi  jx+ tzj  jxj = jx+ tzj  ( x): (4.30) Since x < 0, x+ tz < 0 when t is small. Thus by (4.30), hs; tzi   (x+ tz) ( x) =  tz (4.31) so hs; zi   z: (4.32) So plug in z =  1 and we get that s =  1. This shows that when x < 0 the subdi erential is the set f 1g. Now let x = 0 and s 2 @f(x) = @f(0). We have hs; y  0i  f(y) f(0) = jyj () hs; yi  jyj 8y (4.33) () hs; yi  jyj 8y 6= 0 (4.34) () hs; y jyj i  1 8y 6= 0 (4.35) () maxfhs; 1i; hs; 1ig  1 (4.36) () maxfs; sig  1 (4.37) () jsj  1 (4.38) () s 2 [ 1; 1]: (4.39) 374.1. De nitions This shows that when x = 0 the subdi erential is the set [ 1; 1]. Finally, let x > 0 and s 2 @f(x). We have hs; y  xi  f(y) f(x) = jyj  jxj: (4.40) Let z 2 X and y = x+ tz, where t > 0. By (4.40), hs; tzi  jx+ tzj  jxj = jx+ tzj  x: (4.41) Since x > 0, x+ tz > 0 when t is small. Thus by (4.41), hs; tzi  (x+ tz) x = tz (4.42) so hs; zi  z: (4.43) So plug in z =  1 and we get that s = 1. This shows that when x > 0 the subdi erential is the set f1g. Fact 4.25 (Rockafellar). Let f : X ! ] 1;+1] be proper lower semicon- tinuous and convex. Then @f is maximal monotone. Proof. See [Sim98, Page 113] or [Roc70]. De nition 4.26 (Positive-semide nite matrix). The matrix A 2 Rn n is positive semide nitite, written A  0, if A is symmetric, i.e. A = AT and hx;Axi  0 8x: (4.44) Fact 4.27. In the 2 2 case it can be checked that  a b b c   0() a  0; c  0; ac b2  0; (4.45) see [Mey00, Page 566]. De nition 4.28 (Resolvent). Let M be maximal monotone and JM = (Id +M) 1. Then JM is called the resolvent of M . Fact 4.29 (Minty). Let T : X ! X. Then T is  rmly nonexpansive() T = (Id +M) 1; where M is maximal monotone, (4.46) see [EB92, Theorem 2]. 384.1. De nitions The above fact can also be found in [Min62]. We can see from the above that we have another way to characterize  rmly nonexpansive mappings. De nition 4.30 (Indicator Function). Let S  X, then the indicator func- tion is de ned by x 7!  S(x) = ( 0; if x 2 S; 1; otherwise. De nition 4.31 (Proximal Mapping). Let T : X ! X be  rmly nonex- pansive, so that T = (Id +M) 1, where M is maximal monotone, see Fact 4.29. If M = @f for some function f that is convex, lower semicontinuous and proper, then T is called a proximal mapping. Remark 4.32 (Projection). Let T : X ! X be  rmly nonexpansive, say T = (Id +M) 1, where M is maximal monotone, see Fact 4.29. If M = @ C , the subdi erential of the indicator function, then T = PC . Projection mappings are a subset of proximal mappings and proximal mappings are subsets of  rmly nonexpansive mappings. Lemma 4.33. Every proximal mapping is a subdi erential operator. Proof. Let T be a proximal mapping. Then by de nition, the sum rule [Z al02, Theorem 2.8.7(iii)], and [Roc97, Theorem 23.8], T = (Id +@f) 1 (4.47) = (r 1 2 k  k2 + @f) 1 (4.48) = (@( 1 2 k  k2 + f)) 1 (4.49) = @( 1 2 k  k2 + f) ; (4.50) where g (x) = supy2X(hx; yi g(y)) is the Fenchel conjugate of g, see [Roc97, Chapter 12] for basic properties. 
This clearly shows that the proximal map- ping is a subdi erential. Lemma 4.34. The projection operator PC is the gradient of 12k  k  1 2d 2 C . Proof. We use [RW98, Theorem 2.26] throughout this proof. If we set f to be the indicator function and  = 1, then by [RW98, De nition 1.22] we get 394.1. De nitions when x 2 X, e1 C(x) = inf !2X  g(!) + 1 2 k!  xk2 (4.51) = inf !2C 1 2 k!  xk2 (4.52) = 1 2 d2C(x); (4.53) where e1g(x) = inf!2X g(!) + 12k!  xk 2 is the Moreau envelope of g and P1g(x) is the corresponding set of minimizers. Again by [RW98, De nition 1.22] we get, P1 C(x) = argmin !2X  C + 1 2 k!  xk2 (4.54) = argmin !2C 1 2 k!  xk2 (4.55) = argmin !2C 1 2 d2C(x) (4.56) = PC(x): (4.57) Now by [RW98, Theorem 2.26b] we have, re1 C = Id PC : (4.58) By rearranging, then using (4.53) we have PC = Id re1 C (4.59) = Id r( 1 2 d2C) (4.60) = r( 1 2 k  k2) r( 1 2 d2C) (4.61) = r( 1 2 k  k2  1 2 d2C); (4.62) which clearly shows that PC is in fact a gradient as desired. Proposition 4.35. Let B : X ! X be linear. Then B = @g () B = B , in which case B = rqB and g = qB + c, where c 2 R. Proof. "(": B = B =) B = B+ now by Proposition 4:15 =) B = B+ = rqB. ")": B = rg =) r2g = B is symmetric [MN88, Theo- rem 4 Page 120]. 404.1. De nitions Lemma 4.36. Let T be linear and a proximal mapping. Then T = T  . Proof. By assumption, there exists a lower semicontinuous convex function f such that T = (Id +@f) 1: (4.63) By Proposition 4:35, and the fact that Id is symmetric, Id = rqId: (4.64) By (4.63), T = (rqId + @f)  1 = (@(f + qId))  1 (4.65) by [Roc97, Theorem 23.8]. Let g = (f + qId)  (4.66) then @g = (@(f + qId))  1 (4.67) by [Roc97, Corollary 23.5.2]. By (4.65), we have T = @g: (4.68) Then by Proposition 4:35, T = T  . Below is an alternative proof provided by Liangjin Yao. We  rst note that if T = (Id +@f) 1 (4.69) then T 1 = Id +@f (4.70) and T 1  Id = @f: (4.71) Since T 1 and Id are linear relations, so is T 1 Id and hence gra(T 1 Id) is a linear subspace. Then (@f) = @f (4.72) by [BWY09, Lemma 4.1]. Now T = (Id +@f) 1 (4.73) 414.1. De nitions and T  = [(Id +@f) 1] (4.74) = [(Id +@f) ] 1 by [Cro98, Proposition III 1.3b] (4.75) = [(Id) + (@f) ] 1 by [Bor83, Theorem 7.4] (4.76) = (Id +@f) 1 by (4.72) (4.77) = T: (4.78) Lemma 4.37. Let P =  a b c d  such that P is a proximal mapping. Then b = c. Proof. By Lemma 4:36, P = P  , hence b = c. Lemma 4.38. (See also [RW98, Example 12.7].) Let A : X ! X be con- tinuous and monotone. Then A is maximal monotone. Proof. Let (x; x ) 2 X  X be such that hx y; x  Ayi  0; 8y 2 X: (4.79) It su ces to show that x = Ax. Let z 2 X and t > 0. Let y = x+ tz 2 X. Then hx (x+ tz); x  A(x+ tz)i  0 (4.80) () h tz; x  A(x+ tz)i  0 (4.81) () hA(x+ tz) x ; tzi  0 divide both sides by t (4.82) () hA(x+ tz) x ; zi  0: (4.83) Let t! 0+. Since A is continuous, hAx x ; zi  0; 8z 2 X: (4.84) Set z = x  Ax. Then (4.84) yields, 0  kAx x k2 () hAx x ; x  Axi  0: (4.85) So, kAx x k = 0, i.e., x = Ax. Corollary 4.39. Let A : X ! X be linear. Then A is maximal monotone if and only if 8x hx;Axi  0. Proof. We have 8x hx;Axi  0 () A is monotone by Proposition 4.2. Now since X is  nite-dimensional, this implies that A is continuous by [Kre78, Page 94]. Therefore, since A monotone and continuous, it is maxi- mal monotone by Lemma 4.38. 424.2. Eckstein’s Observation 4.2 Eckstein’s Observation Recall that RC and RD are the re ectors of given subsets C and D. De nition 4.40. 
Following Eckstein in his thesis, for any maximal mono- tone operators A and B, and  > 0 we de ne G ;A;B = J A(2J B  Id) + (Id J B): (4.86) Remark 4.41. It is important to note that J A = (Id + A) 1 is the resolvent of  A by De nition 4.28 as A is maximal monotone, Fact 4.42. (See [Eck89, Corollary 4.21].) Let G 1 ;A;B = S ;A;B + Id, then G ;A;B is  rmly nonexpansive. De nition 4.43 (Splitting Operator). (See [Eck89, Page 137].) S ;A;B = G  1  ;A;B  Id : (4.87) In [Eck89, Corollary 4.21] we see a fairly complicated proof of Fact 4.42. However, just like we proved that T is  rmly nonexpansive in Theorem 3.7, it may be proved that G ;A;B is  rmly nonexpansive by replacing projec- tions with appropriate resolvents. See also [Com04] and [EF98] for further information on the Douglas-Rachford and related methods involving ( rmly) nonexpansive operators. Eckstein was the  rst to observe the following [Eck89, Proposition 4.10]. Proposition 4.44. There exist maximal monotone operators A and B on R2 and a constant  > 0 such that A and B are both subdi erentials of convex functions, but S ;A;B is not the subdi erential of any convex function. Proof. Let A =  2 1 1 2  (4.88) B =  2 0 0 1  (4.89)  = 1 3 : (4.90) By Fact 4:27, A and B  0. Then, by Proposition 4.35, A = @f; where f(x) = 1 2 xT  2 1 1 2  x (4.91) 434.2. Eckstein’s Observation B = @g; where g(x) = 1 2 xT  2 0 0 1  x: (4.92) By direct calculation, J A = (Id + A)  1 =  5 8  1 8  1 8 5 8 ! (4.93) J B = (Id + B)  1 =  3 5 0 0 34 ! : (4.94) Now Eckstein writes G ;A;B = J A(2J B  Id) + (Id J B) =  1 2 1 16 1 10 11 16 ! (4.95) and S ;A;B = G  1  ;A;B  Id =  28 27  5 27  8 27 13 27 ! : (4.96) Since G ;A;B is not symmetric, it cannot be a proximal mapping: indeed, G is  rmly nonexpansive by Fact 4.42 and if G were a proximal map, it would by symmetric by Lemma 4.37, but it is not. Eckstein made a numerical error in (4.95) and as a result (4.96). The matrices in fact are G ;A;B = J A(2J B  Id) + (Id J B) =  21 40  1 16  1 40 9 16 ! (4.97) and S ;A;B = G  1  ;A;B  Id =  43 47 10 47 4 47 37 47 ! ; (4.98) however, Eckstein’s conclusion remains valid (see appendix for the calcula- tion details). In Example 4.46, we provide an example which isn’t prone to numerical errors as it is much simpler. Lemma 4.45. Let T : X ! X be linear. If T = Id +RDRC 2 (4.99) then, RDRC is symmetric() T is symmetric. (4.100) 444.2. Eckstein’s Observation Proof. RDRC is symmetric () (RDRC)  = RDRC (4.101) () Id +(RDRC)  = Id +RDRC (4.102) () (Id +RDRC)  = Id +RDRC (4.103) () (Id +RDRC) 2 = Id +RDRC 2 (4.104) () T  = T (4.105) Example 4.46. Set T = Id +RDRC 2 : (4.106) Then we have the implications if T is a proximal mapping then T is sym- metric () RDRC is symmetric, by Lemma 4.36 and Lemma 4.45. Consider, C = f(x; 0)jx 2 Rg the x-axis in R2 (4.107) D = f(x; x)jx 2 Rg the diagonal in R2: (4.108) We have PC =  1 0 0 0  ; PD =  1 2 1 2 1 2 1 2 ! ; Id =  1 0 0 1  : (4.109) Now the re ectors onto set C and D are de ned as follows: RC = 2PC  Id = 2  1 0 0 0    1 0 0 1  =  1 0 0  1  (4.110) RD = 2PD  Id = 2  1 2 1 2 1 2 1 2 !   1 0 0 1  =  0 1 1 0  : (4.111) Furthermore, RDRC =  0 1 1 0   1 0 0  1  =  0  1 1 0  (4.112) which is clearly not symmetric! Therefore, T is a  rmly nonexpansive map- ping that is not a proximal mapping even though it was constructed from two proximal mappings. 454.2. Eckstein’s Observation Now T = Id +RDRC 2 =  1 0 0 1  +  0  1 1 0  2 =  1 2  1 2 1 2 1 2 ! 
Eckstein in his thesis pointed out that the $T$ in Douglas-Rachford (also known as the Difference Map or Lions-Mercier iteration) is not necessarily a proximal map even if it is constructed from two proximal mappings. (However, his example is more complicated and contains a numerical error, whereas the example we just gave is simpler.) This shows that one truly needs monotone operator theory to understand Douglas-Rachford or the Difference Map, even when starting with projection operators, since the resolvent involved is not a proximal mapping (the monotone operator part is not a subdifferential operator).

Chapter 5
8 Queens

The eight queens puzzle is the problem of putting eight chess queens on an 8×8 chessboard such that none of them is able to capture any other using the standard chess queen's moves. The queens must be placed in such a way that no two queens would be able to attack each other. Thus, a solution requires that no two queens share the same row, column, or diagonal. The eight queens puzzle is an example of the more general n queens puzzle of placing n queens on an n×n chessboard, where solutions exist only for n = 1 or n ≥ 4. This is a new application of Elser's Difference Map. From a complexity point of view, the n queens puzzle is easy; in fact, it can be solved in linear time, see [AY89] for details. Our interest in it stems from a modeling perspective; it will make the modeling of Sudoku in the next chapter easier to understand.

5.1 Queen's Role in Chess

The queen can be moved any number of unoccupied squares in a straight line vertically, horizontally, or diagonally, thus combining the moves of the rook and bishop. The queen captures by occupying the square on which an opponent's piece sits; see Figure 5.1 (from [Que]).

5.2 8 Queens History

The puzzle was originally proposed in 1850 by Franz Nauck. Nauck also extended the puzzle to the n queens problem (on an n×n chessboard, how can n queens be placed) [RBC74, page 166]. In 1874, S. Günther proposed a method of finding solutions by using determinants, and J.W.L. Glaisher refined this approach [RBC74, pages 166-167]. This puzzle even appeared in the popular early 1990s computer game The 7th Guest.

Figure 5.1: Queen moves

5.3 Solutions to 8 Queens

The eight queens puzzle has 92 distinct solutions. If solutions that differ only by symmetry operations (rotations and reflections) of the board are counted as one, the puzzle has 12 unique (or fundamental) solutions [RBC74, page 177].

[Chessboard diagrams: the first six of the 12 unique solutions to the 8 Queens problem.]
[Chessboard diagrams: the remaining six of the 12 unique solutions to the 8 Queens problem.]

5.4 Modeling 8 Queens

We begin by thinking of the 8 queens as an 8×8 binary matrix. We represent a queen on a square with a 1 and a blank square with a 0. Our underlying Hilbert space is $X = \mathbb{R}^{8^2}$, or $X = \mathbb{R}^{64}$.

5.5 Row Constraints

Each row can only have one queen in it. In each row we are only allowed one 1; the rest are zeros. We call this constraint $C_1$:

$C_1 = \{x \in \mathbb{R}^{8\times 8} \mid \text{every row of } x \text{ is a unit vector}\}$.   (5.1)

The row constraint ensures that each row has only one queen in it.

[Chessboard diagram: a matrix satisfying the row constraint, with exactly one queen in every row (here all in column a).]

5.6 Column Constraints

Each column can only have one queen in it. In each column we are only allowed one 1; the rest are zeros. We call this constraint $C_2$:

$C_2 = \{x \in \mathbb{R}^{8\times 8} \mid \text{every column of } x \text{ is a unit vector}\}$.   (5.2)

The column constraint ensures that each column has only one queen in it.

[Chessboard diagram: a matrix satisfying the column constraint, with exactly one queen in every column (here all on row 8).]

5.7 Positive Slope Constraints

Every positively sloped diagonal must have at most one queen in it. In each positively sloped diagonal we are only allowed at most one 1. We call this constraint $C_3$. We can think of these as diagonals, or bishop moves. There are 15 positively sloped diagonals.

$C_3 = \{x \in \mathbb{R}^{8\times 8} \mid \text{every positive diagonal of } x \text{ is a unit vector or the 0 vector}\}$.   (5.3)

The positive slope constraint ensures that each positive diagonal has at most one queen in it.

[Chessboard diagram: a matrix satisfying the positive slope constraint, with at most one queen on each positively sloped diagonal.]

5.8 Negative Slope Constraints

As with the positive slope constraint, each negatively sloped diagonal must have at most one queen in it. Again we are only allowed at most one 1. We call this constraint $C_4$. There are 15 negatively sloped diagonals.

$C_4 = \{x \in \mathbb{R}^{8\times 8} \mid \text{every negative diagonal of } x \text{ is a unit vector or the 0 vector}\}$.   (5.4)

The negative slope constraint ensures that each negative diagonal has at most one queen in it.

[Chessboard diagram: a matrix satisfying the negative slope constraint, with at most one queen on each negatively sloped diagonal.]

5.9 Projection onto the Nearest Unit Vector

Denote the standard unit vectors in $\mathbb{R}^n$ by $e_i$. Let $C = \{e_1, \dots, e_n\}$ be a very nonconvex set in $X$. Then

$P_C x = \{e_i \mid x_i = \max\{x_1, \dots, x_n\}\}$.   (5.5)

Proof. The set $C$ has a finite number of elements, so $P_C x \ne \varnothing$. Let $i \in \{1, \dots, n\}$. Then

$e_i \in P_C x \iff \|x - e_i\| \le \|x - e_j\|$ for all $j$   (5.6)
$\iff \|x - e_i\|^2 \le \|x - e_j\|^2$ for all $j$   (5.7)
$\iff \|x\|^2 - 2\langle x, e_i\rangle + \|e_i\|^2 \le \|x\|^2 - 2\langle x, e_j\rangle + \|e_j\|^2$ for all $j$   (5.8)
$\iff -2\langle x, e_i\rangle \le -2\langle x, e_j\rangle$ for all $j$   (5.9)
$\iff \langle x, e_j\rangle \le \langle x, e_i\rangle$ for all $j$   (5.10)
$\iff x_j \le x_i$ for all $j$.   (5.11)

This projection is very easy to compute, especially for a computer, as it picks the biggest component, sets it to 1 and the others to zero; a short sketch of this computation is given below. All the constraints can be represented as unit vectors.
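The following PHP sketch illustrates this projection. It is a simplified variant of the unitproj and makeunit routines listed in Appendix A; the function name and the 1-based indexing are choices made here for exposition, not fixed by the text.

<?php
// Projection of a vector onto the set {e_1, ..., e_n} of standard unit
// vectors: the largest component becomes 1, all others become 0.
function nearest_unit_vector($v, $n) {
    $imax = 1;
    for ($i = 2; $i <= $n; $i++) {
        if ($v[$i] > $v[$imax]) {   // strict ">" keeps the lowest index on ties
            $imax = $i;
        }
    }
    $e = array();
    for ($i = 1; $i <= $n; $i++) {
        $e[$i] = ($i == $imax) ? 1 : 0;
    }
    return $e;
}
// Example: the vector (.3, .8, .3, .2) is sent to (0, 1, 0, 0).
?>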
For the diagonals we compare the projection with the zero vector and choose the nearest one. In general we pick the biggest component in our vector, set it to 1 and the others to 0. In the case of a tie we pick the lowest component. The constraints $C_1$, $C_2$, $C_3$, and $C_4$ can be applied to the 8 queens matrix as projections.

$\begin{pmatrix}
.3 & .8 & .3 & .2 & .4 & .5 & .5 & .1\\
.1 & .1 & .1 & .1 & .1 & .1 & .1 & .1\\
.2 & .7 & .4 & .2 & .7 & .8 & .5 & .9\\
.9 & .5 & .5 & .5 & .5 & .5 & .4 & .5\\
.4 & .2 & 0 & 0 & 0 & 0 & .2 & .3\\
1 & .1 & .1 & .3 & .2 & .6 & .5 & .6\\
-6 & .8 & .1 & .2 & .3 & .4 & .7 & .9\\
.3 & .4 & .7 & .4 & .2 & 1 & .8 & 2
\end{pmatrix}
\mapsto
\begin{pmatrix}
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}$

The matrix on the right is $P_{C_1}x$, where $x$ is the matrix on the left.

The negative slope constraint is exactly the same as the positive constraint except in the negative direction. If we rotate the chessboard 90 degrees in a counterclockwise direction, apply the positive slope constraint, and rotate it back 90 degrees clockwise, we get the negative slope constraint. In fact, this is how the algorithm is implemented.

$\begin{pmatrix}
.3 & 0 & .3 & .2 & .4 & .5 & .5 & .1\\
0 & .1 & .1 & .1 & .1 & .1 & .1 & .1\\
.2 & .7 & .4 & .2 & .7 & .8 & .7 & .9\\
.9 & .5 & .5 & .5 & .5 & .5 & .4 & .5\\
.4 & .2 & 0 & 0 & 0 & 0 & .2 & 0\\
1 & .1 & .1 & .3 & .2 & .6 & 0 & .6\\
-6 & .9 & .1 & .2 & .3 & 0 & .7 & .9\\
.3 & .4 & .7 & .4 & 0 & 1 & .8 & 0
\end{pmatrix}
\mapsto
\begin{pmatrix}
1 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 1 & 1\\
1 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
1 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 1\\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0
\end{pmatrix}$

The matrix on the right is $P_{C_3}x$, where $x$ is the matrix on the left.

5.10 Iterations

Recall that Elser's Difference Map with $\beta = 1$, Lions-Mercier and Douglas-Rachford algorithms can be defined as

$T = \dfrac{\mathrm{Id} + R_A R_B}{2}$,   (5.12)

where $R_A$ and $R_B$ are reflectors onto their respective sets. The solution to our 8 Queens problem is in $C_1 \cap C_2 \cap C_3 \cap C_4$. We have 4 constraints and our algorithm can only handle two. In order to address this we use the product space and the diagonal space previously defined by Definitions 2.25 and 2.27, respectively. We work in the Hilbert space $X \times X \times X \times X$; recall

$X = \mathbb{R}^{8^2}$.   (5.13)

We also define

$C = C_1 \times C_2 \times C_3 \times C_4 = \{(c_1, c_2, c_3, c_4) \mid c_i \in C_i\} \subseteq X^4$.   (5.14)

Now the diagonal,

$\Delta = \{(x_1, x_2, x_3, x_4) \mid x_1 = x_2 = x_3 = x_4 \in X\}$.   (5.15)

Then

$\Delta \cap C = \{(c_1, c_2, c_3, c_4) \mid c_1 = c_2 = c_3 = c_4,\ c_i \in C_i\}$   (5.16)

and we get that

$x \in C_1 \cap C_2 \cap C_3 \cap C_4 \iff (x, x, x, x) \in C \cap \Delta$.   (5.17)

Our solutions lie in the intersection of the 4 constraints. The reflectors used in the Difference Map are two times the projection minus the identity. In our model, we have two projections: the projection onto $C$, our constraints, and the projection onto the diagonal. The projection onto the diagonal is described by Example 2.48:

$P_\Delta(x_1, x_2, x_3, x_4) = (y, y, y, y)$ where $y = \dfrac{x_1 + x_2 + x_3 + x_4}{4}$.   (5.18)

The projection onto the constraints is described by Example 2.47:

$P_C = P_{C_1\times C_2\times C_3\times C_4}(x_1, x_2, x_3, x_4) = (P_{C_1}x_1, P_{C_2}x_2, P_{C_3}x_3, P_{C_4}x_4)$.   (5.19)

Finally, we define the reflectors and use the Difference Map,

$R_\Delta := 2P_\Delta - \mathrm{Id}$,   (5.20)
$R_C := 2P_C - \mathrm{Id}$.   (5.21)

The iterations are described as

$x_{n+1} = \Big(\dfrac{\mathrm{Id} + R_C R_\Delta}{2}\Big)x_n$.   (5.22)

We round one component of $P_\Delta x_n$ to a 0-1 matrix and call the result $y_n$. Then we monitor $\sum_{i=1}^{4}\|y_n - P_{C_i}y_n\|^2$, which is 0 if and only if $y_n$ is a solution. A sketch of this iteration is given below.
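The iteration (5.22) is easy to state componentwise on the product space: each copy of the iterate is updated as $x_i \leftarrow x_i + P_{C_i}(2P_\Delta x - x_i) - P_\Delta x$. The PHP sketch below was written for this exposition and is not the thesis implementation (Appendix A contains the full program); the constraint-projection names passed in are placeholders for the row, column, and diagonal projections.

<?php
// One Douglas-Rachford / Difference Map step (5.22) on the product space.
function diagonal_projection($x1, $x2, $x3, $x4, $n) {
    $y = array();
    for ($i = 1; $i <= $n; $i++) {
        for ($j = 1; $j <= $n; $j++) {
            $y[$i][$j] = ($x1[$i][$j] + $x2[$i][$j] + $x3[$i][$j] + $x4[$i][$j]) / 4;
        }
    }
    return $y;
}

function dr_update_copy($x, $y, $proj, $n) {
    $r = array();
    for ($i = 1; $i <= $n; $i++) {
        for ($j = 1; $j <= $n; $j++) {
            $r[$i][$j] = 2 * $y[$i][$j] - $x[$i][$j];   // reflect through the diagonal
        }
    }
    $p = $proj($r);                                      // project onto the constraint
    for ($i = 1; $i <= $n; $i++) {
        for ($j = 1; $j <= $n; $j++) {
            $x[$i][$j] = $x[$i][$j] + $p[$i][$j] - $y[$i][$j];
        }
    }
    return $x;
}

// One pass of the main loop would then read (proj_c1, ..., proj_c4 are
// hypothetical names for the four constraint projections):
//   $y  = diagonal_projection($x1, $x2, $x3, $x4, 8);
//   $x1 = dr_update_copy($x1, $y, 'proj_c1', 8);
//   $x2 = dr_update_copy($x2, $y, 'proj_c2', 8);
//   $x3 = dr_update_copy($x3, $y, 'proj_c3', 8);
//   $x4 = dr_update_copy($x4, $y, 'proj_c4', 8);
// after which $y is rounded to a 0-1 matrix and the stopping test applied.
?>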
5.11 8 Queens Example

In this example we start by placing 8 queens in the top left corner, a ridiculous starting position. As we see in Figure 5.3, after 30 iterations the board is more spread out. We have a queen in every row. Also notice that most diagonal constraints are satisfied. Notice that there are more than 8 queens on the board at this time. This is because the algorithm looks at the entries in each cell and rounds them: if an entry is greater than or equal to 0.5, we place a queen in that cell; otherwise it is empty. Finally, after 96 iterations, we see in Figure 5.6 that we have a solution.

Figure 5.2: Initial Starting Position
Figure 5.3: 30 iterations
Figure 5.4: 60 iterations
Figure 5.5: 90 iterations
Figure 5.6: Solved in 96 iterations

Chapter 6
Sudoku

We have proved some results about projections and convergence if our sets are convex. There is no theory about convergence if our sets are non-convex. In fact, it is known to fail. The method of alternating projections fails very quickly if our sets are non-convex: it gets stuck between two points and bounces back and forth endlessly. Sudoku uses integers from 1 to 9, which form a very non-convex set. However, when we apply Elser's difference map to our very non-convex Sudoku constraint sets, it converges (very quickly even), which I find amazing. From a complexity point of view, Sudoku is NP-complete; see [Epp05] for details. Below are the full details on how to model Sudoku to solve it using Elser's Difference Map.

6.1 What is Sudoku?

Sudoku literally means "number single". It was derived from the Japanese phrase "Suuji wa dokushin ni kagiru". This phrase translates as "the numbers must occur only once" or "the numbers must be single". Sudoku is a logic-based number placement puzzle. The objective of Sudoku is to fill a 9×9 grid so that each column, each row, and each of the nine 3×3 boxes contains the digits from 1 to 9, only one time each. The game is a type of Latin Square game with the additional constraint of the 3×3 offices. Because of this, Leonhard Euler is often incorrectly cited as the father of Sudoku. Sudoku was actually invented by an American architect named Howard Garns in 1979; he called it "Number Place". Sudoku is now published and trademarked by Nikoli of Japan and was given its name in 1986 [BT09, pages 6-10], [Eri10]. A sample Sudoku puzzle and its solution are given below in Figures 6.1 and 6.2, respectively (pictures from [Wik]).

6.2 Modeling Sudoku

We begin by thinking of the Sudoku as a 9×9 integer matrix. Think of each cell as an elevator shaft, where the integer number of the cell is the floor. If the floor is, say, 3, we have 0,0,1,0,0,0,0,0,0 on the floors of the elevator shaft. If we do this for every cell, we add another dimension to our 9×9 matrix. We now have a 9×9×9 cube! We set all the blank entries to 1, but the starting position does not matter. Let $Q$ be a cube in $X$ described by

$Q(i, j, k)$, where $\{i, j, k\} \subseteq \{1, \dots, 9\}$.   (6.1)

So an elevator shaft is $Q(i, j, :)$ (i.e., the 3rd component is free, using Matlab notation). We aim to place unit vectors into each elevator shaft. Our space is $X = \mathbb{R}^{9^3}$, or $X = \mathbb{R}^{729}$. We now describe our constraints, which are similar to those in the 8 Queens problem.

Figure 6.1: A Sample Sudoku Puzzle
Figure 6.2: A Sample Sudoku Solution
Figure 6.3: 9 Sudoku Elevator Shafts
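As a small illustration of this encoding, the following PHP sketch (written for this exposition and not taken from the thesis code; Appendix B realizes the same idea inside its givenproj routine) converts a 9×9 grid, with 0 for a blank cell, into the cube of elevator shafts.

<?php
// Build the 9x9x9 cube Q from a 9x9 Sudoku grid (1-indexed, 0 = blank cell).
// A given digit d becomes the d-th standard unit vector in the shaft Q(i,j,:);
// shafts of blank cells are filled with all ones, matching the starting
// choice described above.
function grid_to_cube($grid) {
    $q = array();
    for ($i = 1; $i <= 9; $i++) {
        for ($j = 1; $j <= 9; $j++) {
            for ($k = 1; $k <= 9; $k++) {
                if ($grid[$i][$j] == 0) {
                    $q[$i][$j][$k] = 1;                              // blank cell
                } else {
                    $q[$i][$j][$k] = ($grid[$i][$j] == $k) ? 1 : 0;  // clue: unit vector
                }
            }
        }
    }
    return $q;
}
?>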
6.3 Constraints

6.3.1 North-South Hallways

Each cell of every column must not repeat any of the numbers 1 to 9. Let's call these North-South hallways. In each hallway we are only allowed at most one 1, and the rest are zeros. There are 81 North-South hallways, 9 per floor. Therefore, our constraint becomes

$C_1 = \{Q \in \mathbb{R}^{9^3} \mid Q(i, :, k) \text{ is a standard unit vector for all } i, k\}$.   (6.2)

Figure 6.4: A Sudoku Hallway

6.3.2 East-West Hallways

Each cell of every row must not repeat any of the numbers 1 to 9. Let's call these East-West hallways. In each hallway we are only allowed at most one 1, and the rest are zeros. There are 81 East-West hallways, 9 per floor. Therefore, our constraint becomes

$C_2 = \{Q \in \mathbb{R}^{9^3} \mid Q(:, j, k) \text{ is a standard unit vector for all } j, k\}$.   (6.3)

6.3.3 Meeting Rooms

Every 3×3 square on each floor must not repeat any of the numbers 1 to 9; we call these meeting rooms. In each meeting room we are only allowed at most one 1, and the rest are zeros. There are 81 meeting rooms, 9 per floor. We define the following:

$E = \{A \in \mathbb{R}^{3\times 3} \mid \text{all entries are zero except one entry, which equals one}\}$,   (6.4)
$J = \{\{1,2,3\}, \{4,5,6\}, \{7,8,9\}\}$,   (6.5)

and, for every $Q \in \mathbb{R}^{9^3}$, $I_1 \in J$, $I_2 \in J$, and $k \in \{1, \dots, 9\}$, we let $M_{I_1,I_2,k}(Q)$ be the matrix $A \in \mathbb{R}^{3\times 3}$ with

$A_{i \bmod 3,\, j \bmod 3} = Q(i, j, k)$ for all $i \in I_1$ and $j \in I_2$.   (6.6)

Therefore, our constraint set is

$C_3 = \{Q \in \mathbb{R}^{9^3} \mid \forall k, \forall I_1 \in J, \forall I_2 \in J:\ M_{I_1, I_2, k}(Q) \in E\}$.   (6.7)

Figure 6.5: A Sudoku Meeting Room

As you can see from the picture, the meeting room is not a typical unit vector. We can think of it as a fancy unit vector, written as a 3×3 matrix instead of the usual 1×9 vector we are used to.

6.3.4 The Given Sudoku

Constraint 4 is the given Sudoku problem. We call this constraint $C_4$. This is the set of all cubes $Q \in \mathbb{R}^{9^3}$ such that if the $(i, j)$ entry of the puzzle is prescribed and equal to the digit $d_{i,j}$, then $Q(i, j, :)$ is equal to the $d_{i,j}$-th standard unit vector; otherwise, $Q(i, j, :)$ is any unit vector.

6.4 Iterations

Recall that Elser's Difference Map with $\beta = 1$, Lions-Mercier and Douglas-Rachford algorithms can be defined as

$T = \dfrac{\mathrm{Id} + R_A R_B}{2}$,   (6.8)

where $R_A$ and $R_B$ are reflectors onto their respective sets. The solution to our Sudoku problem is in $C_1 \cap C_2 \cap C_3 \cap C_4$. We have 4 constraints and our algorithm can only handle two. As with 8 Queens, we use the product space and the diagonal, which we have previously defined. We work in the Hilbert product space $X \times X \times X \times X$; recall

$X = \mathbb{R}^{9^3}$.   (6.9)

We also define

$C = C_1 \times C_2 \times C_3 \times C_4 = \{(c_1, c_2, c_3, c_4) \mid c_i \in C_i\} \subseteq X^4$,   (6.10)

and the diagonal,

$\Delta = \{(x_1, x_2, x_3, x_4) \mid x_1 = x_2 = x_3 = x_4 \in X\}$.   (6.11)

Then

$\Delta \cap C = \{(c_1, c_2, c_3, c_4) \mid c_1 = c_2 = c_3 = c_4,\ c_i \in C_i\}$   (6.12)

and hence

$x \in C_1 \cap C_2 \cap C_3 \cap C_4 \iff (x, x, x, x) \in C \cap \Delta$.   (6.13)

Our unique solution (assuming a well-posed Sudoku puzzle) lies in the intersection of the 4 constraints. The reflectors used in the Difference Map are two times the projection minus the identity. In our model, we have two projections: the projection onto $C$, our constraints, and the projection onto the diagonal. The projection onto the diagonal is described by Example 2.48:

$P_\Delta(x_1, x_2, x_3, x_4) = (y, y, y, y)$ where $y = \dfrac{x_1 + x_2 + x_3 + x_4}{4}$.   (6.14)

The projection onto the constraints is described by Example 2.47:

$P_C = P_{C_1\times C_2\times C_3\times C_4}(x_1, x_2, x_3, x_4) = (P_{C_1}x_1, P_{C_2}x_2, P_{C_3}x_3, P_{C_4}x_4)$.   (6.15)

Finally, we define the reflectors and use the Difference Map,

$R_\Delta := 2P_\Delta - \mathrm{Id}$,   (6.16)
$R_C := 2P_C - \mathrm{Id}$.   (6.17)

The iterations are described as

$x_{n+1} = \Big(\dfrac{\mathrm{Id} + R_C R_\Delta}{2}\Big)x_n$.   (6.18)

We now apply the exact same algorithm described in detail for the 8-Queens example. We pick a cube from our diagonal and convert the cube into a matrix by projecting onto the nearest unit vector of each elevator shaft. Convergence is monitored similarly.
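For concreteness, here is a minimal sketch of the projection onto $C_4$. It is modeled on, but not identical to, the givenproj function in Appendix B; nearest_unit_vector is the shaft projection sketched in Chapter 5 and is assumed to be available.

<?php
// Projection onto the "given Sudoku" constraint C4.
// $q is the current 9x9x9 cube; $given is the 9x9 grid of clues (0 = blank).
function project_given($q, $given) {
    for ($i = 1; $i <= 9; $i++) {
        for ($j = 1; $j <= 9; $j++) {
            if ($given[$i][$j] > 0) {
                // Clue cell: force the shaft to the prescribed unit vector.
                for ($k = 1; $k <= 9; $k++) {
                    $q[$i][$j][$k] = ($given[$i][$j] == $k) ? 1 : 0;
                }
            } else {
                // Free cell: send the shaft to its nearest unit vector.
                $shaft = array();
                for ($k = 1; $k <= 9; $k++) {
                    $shaft[$k] = $q[$i][$j][$k];
                }
                $e = nearest_unit_vector($shaft, 9);
                for ($k = 1; $k <= 9; $k++) {
                    $q[$i][$j][$k] = $e[$k];
                }
            }
        }
    }
    return $q;
}
?>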
It should be noted that the iterations take place in our cube space, and in order to view them as a 9×9 matrix, that is, as a Sudoku grid, we need to flatten the cube. That is, we take the 9×9×9 binary cube and convert it to a 9×9 integer matrix by converting each elevator shaft containing a standard unit vector to its corresponding integer; see Figure 6.3.

6.5 Sudoku Example

Arto Inkala, a Finnish mathematician, is an enthusiastic puzzle and problem solver. He completed his doctoral thesis in applied mathematics in 2001. He creates very difficult Sudoku puzzles. Inkala claims that his "mathematic background has been great help when creating the most difficult Sudokus by [using] genetic algorithms, different optimizing methods and artificial intelligence". On Inkala's website (www.aisudoku.com), he shares 10 of his difficult Sudoku puzzles. The Finnish puzzle maker calls the most difficult Sudoku puzzle ever "AI Escargot", because it looks like a snail; the "AI" comes from Inkala's initials. He says solving it is like an intellectual culinary pleasure. We used the difference map method described above to solve all 10 of Inkala's Sudoku puzzles. The online implementation of the Sudoku solver can be found at www.schaad.ca, the author's website. We see that AI Escargot was solved in 6627 iterations, whereas AI labyrinth took over 12000 iterations. Even though Inkala claims that AI Escargot is the world's hardest Sudoku puzzle, our algorithm solves it faster than some other Sudokus created by Inkala. The algorithm managed to solve AI honeypot in only 208 iterations! The Sudoku problems and pictures in the left column are from www.aisudoku.com, whereas the Sudoku solutions in the right column are original work.

AI escargot solved in 6627 iterations
AI killer application solved in 882 iterations
AI lucky diamond solved in 276 iterations
AI worm hole solved in 4998 iterations
AI labyrinth solved in 12025 iterations
AI circles solved in 2410 iterations
AI squadron solved in 3252 iterations
AI honeypot solved in 208 iterations
AI tweezers solved in 688 iterations
AI broken brick solved in 1598 iterations

In the following figure, we show how the online solver solves AI Escargot. It takes 6627 iterations and we take snapshots every 1000 iterations until we come to a solution.

Figure 6.6: Solution to Escargot

6.6 Sudoku Distance

The next figure is very interesting. At each iteration, we compute a distance from the solution. At each iteration, our algorithm displays the current iterate as a 9×9 matrix which represents an attempt at a Sudoku puzzle solution. If we let the current iterate be the matrix $A$, and let the solution to the Sudoku puzzle be $B$, then we can compute the difference, which we call matrix $C$. The difference is computed by subtracting each element, i.e. $c_{ij} = a_{ij} - b_{ij}$, where $i, j \in \{1, 2, \dots, 9\}$. We calculate the distance as follows:

$\|C\| = \sqrt{\sum_{i,j} c_{ij}^2}$.   (6.19)

Figure 6.7: Distance versus number of iterations

The graph is not monotone. It shows that Elser's Difference Map sometimes gets close to a solution, but needs to go farther away to actually reach the solution. There are a few times, around 1100, 1200, 4800, and 6300 iterations, where we get fairly close to a solution, but then we get farther away before finding a solution. Also notice how quickly we converge to a solution at the end.
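The distance (6.19) is simply the entrywise Euclidean (Frobenius) norm of the difference matrix; the few lines of PHP below, written here for illustration and not part of the thesis code, compute it.

<?php
// Distance between the current 9x9 iterate $a and the known solution $b,
// as in equation (6.19): sqrt( sum over i,j of (a_ij - b_ij)^2 ).
function sudoku_distance($a, $b) {
    $sum = 0;
    for ($i = 1; $i <= 9; $i++) {
        for ($j = 1; $j <= 9; $j++) {
            $c = $a[$i][$j] - $b[$i][$j];
            $sum += $c * $c;
        }
    }
    return sqrt($sum);
}
?>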
Chapter 7
Conclusion

We have shown that Elser's Difference Map is equivalent to the Douglas-Rachford and Fienup's Hybrid Input-Output algorithms, where $\beta = 1$. We showed how to model the 8-queens problem and, following Elser, we modeled Sudoku as well. We hope that the detailed exposition in this thesis will help readers to understand modeling feasibility problems such as Sudoku and 8-queens. We used several very non-convex sets and applied an algorithm designed for convex sets, and it converged to a solution. There is no theory for the nonconvex case, although we have demonstrated that the algorithm seems to work. Some future research might include discovering convergence proofs for the nonconvex case, starting with simple examples in the Euclidean plane or building on the paper [LLM09] by Lewis, Luke and Malick. Finally, we showed that the operator governing the Douglas-Rachford iteration need not be a proximal map even when the two involved resolvents are; this refines an observation of Eckstein and underlines the need for monotone operator theory.

Bibliography

[AY89] Bruce Abramson and Moti Yung. Divide and conquer under global constraints: A solution to the n-queens problem. Journal of Parallel and Distributed Computing, 6(3):649–662, 1989. Available from: http://www.sciencedirect.com/science/article/B6WKJ-4BRJHWS-13/2/44424d257adca8566925a5ca897293e9. → pages 47

[BCL02] Heinz H. Bauschke, Patrick L. Combettes, and D. Russell Luke. Phase retrieval, error reduction algorithm, and Fienup variants: a view from convex optimization. J. Opt. Soc. Amer. A, 19(7):1334–1345, 2002. Available from: http://dx.doi.org/10.1364/JOSAA.19.001334. → pages 1

[bir09] Interdisciplinary workshop on fixed-point algorithms for inverse problems in science and engineering, November 2009. Available from: http://www.birs.ca/birspages.php?task=displayevent&event_id=09w5006. → pages 2

[Bor83] J. M. Borwein. Adjoint process duality. Math. Oper. Res., 8(3):403–434, 1983. Available from: http://dx.doi.org/10.1287/moor.8.3.403. → pages 42

[BT09] Seymour S. Block and Santiago A. Tavares. Before Sudoku. Oxford University Press, Oxford, 2009. The world of magic squares. → pages 60

[BWY09] Heinz H. Bauschke, Xianfu Wang, and Liangjin Yao. On Borwein-Wiersma decompositions of monotone linear relations, 2009. Available from: http://www.citebase.org/abstract?id=oai:arXiv.org:0912.2772. → pages 41

[Com04] Patrick L. Combettes. Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization, 53(5-6):475–504, 2004. Available from: http://dx.doi.org/10.1080/02331930412331327157. → pages 43

[Cro98] Ronald Cross. Multivalued linear operators, volume 213 of Monographs and Textbooks in Pure and Applied Mathematics. Marcel Dekker Inc., New York, 1998. → pages 42

[Deu01] Frank Deutsch. Best approximation in inner product spaces. CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC, 7. Springer-Verlag, New York, 2001. → pages 20, 27, 31, 34

[EB92] Jonathan Eckstein and Dimitri P. Bertsekas. On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Programming, 55(3, Ser. A):293–318, 1992. Available from: http://dx.doi.org/10.1007/BF01581204. → pages 38

[Eck89] Jonathan Eckstein. Splitting Methods for Monotone Operators with Applications to Parallel Optimization. PhD dissertation, Massachusetts Institute of Technology, Cambridge, Massachusetts, June 1989. → pages 43

[EF98] Jonathan Eckstein and Michael C. Ferris.
Operator-splitting methods for monotone affine variational inequalities, with a parallel application to optimal control. INFORMS J. Comput., 10(2):218–235, 1998. → pages 43

[Epp05] David Eppstein. Nonrepetitive paths and cycles in graphs with application to Sudoku. CoRR, abs/cs/0507053, 2005. → pages 60

[Eri10] Martin Erickson. Pearls of discrete mathematics. Discrete Mathematics and its Applications (Boca Raton). CRC Press, Boca Raton, FL, 2010. → pages 60

[ERT07] V. Elser, I. Rankenburg, and P. Thibault. Searching with iterated maps. Proc. Natl. Acad. Sci. USA, 104(2):418–423, 2007. Available from: http://dx.doi.org/10.1073/pnas.0606359104. → pages 1, 26

[Fie82] J. R. Fienup. Phase retrieval algorithms: a comparison. Appl. Opt., 21(15):2758–2769, 1982. Available from: http://ao.osa.org/abstract.cfm?URI=ao-21-15-2758. → pages 26

[GE08] Simon Gravel and Veit Elser. Divide and concur: A general approach to constraint satisfaction. Phys. Rev. E, 78(3):036706, Sep 2008. → pages 1

[ICC07] Second mathematical programming society international conference on continuous optimization, August 2007. Available from: http://iccopt-mopta.mcmaster.ca/. → pages 1

[Kre78] Erwin Kreyszig. Introductory functional analysis with applications. John Wiley & Sons, New York-London-Sydney, 1978. → pages 7, 33, 34, 42

[LLM09] A. S. Lewis, D. R. Luke, and J. Malick. Local linear convergence for alternating and averaged nonconvex projections. Found. Comput. Math., 9(4):485–513, 2009. Available from: http://dx.doi.org/10.1007/s10208-008-9036-y. → pages 75

[LM79] P.-L. Lions and B. Mercier. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal., 16(6):964–979, 1979. Available from: http://dx.doi.org/10.1137/0716071. → pages 26

[Mey00] Carl Meyer. Matrix analysis and applied linear algebra. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2000. With 1 CD-ROM (Windows, Macintosh and UNIX) and a solutions manual (iv+171 pp.). → pages 4, 34, 38

[Min62] George J. Minty. Monotone (nonlinear) operators in Hilbert space. Duke Math. J., 29:341–346, 1962. → pages 39

[MN88] Jan R. Magnus and Heinz Neudecker. Matrix differential calculus with applications in statistics and econometrics. Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics. John Wiley & Sons Ltd., Chichester, 1988. → pages 40

[Opi67] Zdzisław Opial. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Amer. Math. Soc., 73:591–597, 1967. → pages 29

[Que] Queen moves. Available from: http://www.chessdryad.com/education/magictheater/queens/index.htm. → pages 47

[RBC74] W. W. Rouse Ball and H. S. M. Coxeter. Mathematical recreations & essays. University of Toronto Press, Toronto, Ont., twelfth edition, 1974. → pages 47, 48

[Reh] Julie Rehmeyer. The Sudoku solution. The Science News. Available from: http://sciencenews.org/view/generic/id/39529/title/The_Sudoku_solution. → pages 1

[Roc70] R. T. Rockafellar. On the maximal monotonicity of subdifferential mappings. Pacific J. Math., 33:209–216, 1970. → pages 38

[Roc97] R. Tyrrell Rockafellar. Convex analysis. Princeton Landmarks in Mathematics. Princeton University Press, Princeton, NJ, 1997. Reprint of the 1970 original, Princeton Paperbacks. → pages 37, 39, 41

[Rud91] Walter Rudin. Functional analysis. International Series in Pure and Applied Mathematics. McGraw-Hill Inc., New York, second edition, 1991. → pages 31

[RW98] R.
Tyrrell Rockafellar and Roger J-B. Wets. Variational analysis, volume 317 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer- Verlag, Berlin, 1998. ! pages 36, 39, 40, 42 [Sim98] Stephen Simons. Minimax and monotonicity, volume 1693 of Lec- ture Notes in Mathematics. Springer-Verlag, Berlin, 1998. ! pages 38 [Wik] Queen moves. Available from: http://en.wikipedia.org/ wiki/Sudoku. ! pages 60 [Z al02] C. Z alinescu. Convex analysis in general vector spaces. World Scienti c Publishing Co. Inc., River Edge, NJ, 2002. ! pages 37, 39 [Zar71] Eduardo H. Zarantonello. Projections on convex sets in Hilbert space and spectral theory. I. Projections on convex sets. In Con- tributions to nonlinear functional analysis (Proc. Sympos., Math. Res. Center, Univ. Wisconsin, Madison, Wis., 1971), pages 237{ 341. Academic Press, New York, 1971. ! pages 32 79Appendix A 8 Queens PHP Code <?php // This code i s p ro to t ype and i s prov ided as i s under the GNU General Pub l i c License // Author : Jason Schaad August 11 , 2010 $n=8; func t i on checkqueens ( $matrix ) f // Round the matrix for ( $x=1;$x<=8;$x++) f for ( $y=1;$y<=8;$y++) f i f ( $matrix [ $x ] [ $y ] >=0.5) f $matrix [ $x ] [ $y ]=1; g g g // PrintMatr ix ( $matrix ) ; $ t o t a l =0; // HORIZONTAL for ( $x=1;$x<=8;$x++) f $sum=0; for ( $y=1;$y<=8;$y++) f $sum+=$matrix [ $x ] [ $y ] ; g i f ($sum==1) f $ t o t a l++; g g // VERTICAL 80Appendix A. 8 Queens PHP Code for ( $x=1;$x<=8;$x++) f $sum=0; for ( $y=1;$y<=8;$y++) f $sum+=$matrix [ $y ] [ $x ] ; g i f ($sum==1) f $ t o t a l++; g g $check=1; // POSITIVE i f ( ( $matrix [ 1 ] [ 1 ] ) >1 )f $check= 1;g i f ( ( $matrix [ 2 ] [ 1 ]+ $matrix [ 1 ] [ 2 ] ) >1 )f $check= 1;g i f ( ( $matrix [ 3 ] [ 1 ]+ $matrix [ 2 ] [ 2 ]+ $matrix [ 1 ] [ 3 ] ) > 1)f $check= 1;g i f ( ( $matrix [ 4 ] [ 1 ]+ $matrix [ 3 ] [ 2 ]+ $matrix [ 3 ] [ 3 ]+ $matrix [ 1 ] [ 4 ] ) >1 )f $check= 1;g i f ( ( $matrix [ 5 ] [ 1 ]+ $matrix [ 4 ] [ 2 ]+ $matrix [ 3 ] [ 3 ]+ $matrix [ 2 ] [ 4 ]+ $matrix [ 1 ] [ 5 ] ) >1 )f $check= 1;g i f ( ( $matrix [ 6 ] [ 1 ]+ $matrix [ 5 ] [ 2 ]+ $matrix [ 4 ] [ 3 ]+ $matrix [ 3 ] [ 4 ]+ $matrix [ 2 ] [ 5 ]+ $matrix [ 1 ] [ 6 ] ) > 1 ) f$check= 1;g i f ( ( $matrix [ 7 ] [ 1 ]+ $matrix [ 6 ] [ 2 ]+ $matrix [ 5 ] [ 3 ]+ $matrix [ 4 ] [ 4 ]+ $matrix [ 3 ] [ 5 ]+ $matrix [ 2 ] [ 6 ]+ $matrix [ 1 ] [ 7 ] ) > 1)f $check= 1;g i f ( ( $matrix [ 8 ] [ 1 ]+ $matrix [ 7 ] [ 2 ]+ $matrix [ 6 ] [ 3 ]+ $matrix [ 5 ] [ 4 ]+ $matrix [ 4 ] [ 5 ]+ $matrix [ 3 ] [ 6 ]+ $matrix [ 2 ] [ 7 ]+ $matrix [ 1 ] [ 8 ] ) > 1 ) f$check= 1;g i f ( ( $matrix [ 8 ] [ 2 ]+ $matrix [ 7 ] [ 3 ]+ $matrix [ 6 ] [ 4 ]+ $matrix [ 5 ] [ 5 ]+ $matrix [ 4 ] [ 6 ]+ $matrix [ 3 ] [ 7 ]+ $matrix [ 2 ] [ 8 ] ) > 1) f$check= 1;g i f ( ( $matrix [ 8 ] [ 3 ]+ $matrix [ 7 ] [ 4 ]+ $matrix [ 6 ] [ 5 ]+ $matrix [ 5 ] [ 6 ]+ $matrix [ 4 ] [ 7 ]+ $matrix [ 3 ] [ 8 ] ) > 1 ) f$check= 1;g i f ( ( $matrix [ 8 ] [ 4 ]+ $matrix [ 7 ] [ 5 ]+ $matrix [ 6 ] [ 6 ]+ $matrix [ 5 ] [ 7 ]+ $matrix [ 4 ] [ 8 ] ) > 1 ) f$check= 1;g i f ( ( $matrix [ 8 ] [ 5 ]+ $matrix [ 7 ] [ 6 ]+ $matrix [ 6 ] [ 7 ]+ $matrix [ 5 ] [ 8 ] ) > 1 ) f$check= 1;g i f ( ( $matrix [ 8 ] [ 6 ]+ $matrix [ 7 ] [ 7 ]+ $matrix [ 6 ] [ 8 ] ) >1 )f $check= 1;g i f ( ( $matrix [ 8 ] [ 7 ]+ $matrix [ 7 ] [ 8 ] ) >1 )f $check= 1;g i f ( ( $matrix [ 8 ] [ 8 ] ) >1 )f $check= 1;g // NEGATIVE i f ( ( $matrix [ 8 ] [ 1 ] ) > 1) f$check= 1;g i f ( ( $matrix [ 7 ] [ 1 ]+ $matrix [ 8 ] [ 2 ] ) >1 )f $check= 1;g i f ( ( $matrix [ 6 ] [ 1 ]+ 
$matrix [ 7 ] [ 2 ]+ $matrix [ 8 ] [ 3 ] ) >1 )f $check= 1;g i f ( ( $matrix [ 5 ] [ 1 ]+ $matrix [ 6 ] [ 2 ]+ $matrix [ 7 ] [ 3 ]+ $matrix [ 8 ] [ 4 ] ) >1 )f $check= 1;g i f ( ( $matrix [ 4 ] [ 1 ]+ $matrix [ 5 ] [ 2 ]+ $matrix [ 6 ] [ 3 ]+ $matrix [ 7 ] [ 4 ]+ $matrix [ 8 ] [ 5 ] ) >1 )f $check= 1;g 81Appendix A. 8 Queens PHP Code i f ( ( $matrix [ 3 ] [ 1 ]+ $matrix [ 4 ] [ 2 ]+ $matrix [ 5 ] [ 3 ]+ $matrix [ 6 ] [ 4 ]+ $matrix [ 7 ] [ 5 ]+ $matrix [ 8 ] [ 6 ] ) > 1 ) f$check= 1;g i f ( ( $matrix [ 2 ] [ 1 ]+ $matrix [ 3 ] [ 2 ]+ $matrix [ 4 ] [ 3 ]+ $matrix [ 5 ] [ 4 ]+ $matrix [ 6 ] [ 5 ]+ $matrix [ 7 ] [ 6 ]+ $matrix [ 8 ] [ 7 ] ) >1 )f $check= 1;g i f ( ( $matrix [ 1 ] [ 1 ]+ $matrix [ 2 ] [ 2 ]+ $matrix [ 3 ] [ 3 ]+ $matrix [ 4 ] [ 4 ]+ $matrix [ 5 ] [ 5 ]+ $matrix [ 6 ] [ 6 ]+ $matrix [ 7 ] [ 7 ]+ $matrix [ 8 ] [ 8 ] ) >1 )f $check= 1;g i f ( ( $matrix [ 1 ] [ 2 ]+ $matrix [ 2 ] [ 3 ]+ $matrix [ 3 ] [ 4 ]+ $matrix [ 4 ] [ 5 ]+ $matrix [ 5 ] [ 6 ]+ $matrix [ 6 ] [ 7 ]+ $matrix [ 7 ] [ 8 ] ) >1 )f $check= 1;g i f ( ( $matrix [ 1 ] [ 3 ]+ $matrix [ 2 ] [ 4 ]+ $matrix [ 3 ] [ 5 ]+ $matrix [ 4 ] [ 6 ]+ $matrix [ 5 ] [ 7 ]+ $matrix [ 6 ] [ 8 ] ) >1 )f $check= 1;g i f ( ( $matrix [ 1 ] [ 4 ]+ $matrix [ 2 ] [ 5 ]+ $matrix [ 3 ] [ 6 ]+ $matrix [ 4 ] [ 7 ]+ $matrix [ 5 ] [ 8 ] ) >1 )f $check= 1;g i f ( ( $matrix [ 1 ] [ 5 ]+ $matrix [ 2 ] [ 6 ]+ $matrix [ 3 ] [ 7 ]+ $matrix [ 4 ] [ 8 ] ) >1 )f $check= 1;g i f ( ( $matrix [ 1 ] [ 6 ]+ $matrix [ 2 ] [ 7 ]+ $matrix [ 3 ] [ 8 ] ) > 1 ) f$check= 1;g i f ( ( $matrix [ 1 ] [ 7 ]+ $matrix [ 2 ] [ 8 ] ) >1 )f $check= 1;g i f ( ( $matrix [ 1 ] [ 8 ] ) >1 )f $check= 1;g i f ( ( $ t o t a l==16) && ( $check==1)) f re turn 1 ; g else f re turn 0 ; g g f unc t i on chessround ( $x , $y ) f i f ( $y == 1 ) f i f ( $x >= 0 . 5 ) f re turn "<img he ight=50 width=50 s r c =3.png>" ; g else f re turn "<img he ight=50 width=50 s r c =2.png>" ; g g else i f ( $y ==0) f i f ( $x >= 0 . 5 ) f re turn "<img he ight=50 width=50 s r c =4.png>" ; g else f re turn "<img he ight=50 width=50 s r c =1.png>" ; g g g 82Appendix A. 8 Queens PHP Code f unc t i on PrintChessMatrix ( $matrix ) f echo "<t ab l e border=1>" ; $n=s izeof ( $matrix ) ; for ( $x = 1 ; $x <= $n ; $x++) f echo "<tr>" ; for ( $y = 1 ; $y <= $n ; $y++) f i f ( ( $x % 2)==1) f i f ( ( $y % 2)==1) f echo "<td bgco lo r=#999999>" . chessround ( $matrix [ $x ] [ $y ] , 1 ) . "</td>" ; g else f echo "<td bgco lo r=#FFFFFF>" . chessround ( $matrix [ $x ] [ $y ] , 0 ) . "</td>" ; g g else f i f ( ( $y % 2)==0) f echo "<td bgco lo r=#999999>" . chessround ( $matrix [ $x ] [ $y ] , 1 ) . "</td>" ; g else f echo "<td bgco lo r=#FFFFFF>" . chessround ( $matrix [ $x ] [ $y ] , 0 ) . "</td>" ; g g g echo "</tr>" ; g echo "</table>" ; g //end func t i on f unc t i on PrintMatr ix ( $matrix ) f echo "<t ab l e border=1>" ; $n=s izeof ( $matrix ) ; for ( $x = 1 ; $x <= $n ; $x++) f echo "<tr>" ; for ( $y = 1 ; $y <= $n ; $y++) f i f ( ( $x % 2)==1) f i f ( ( $y % 2)==1) f echo "<td bgco lo r=#999999>" . $matrix [ $x ] [ $y ] . "</td>" ; g else f echo "<td bgco lo r=#FFFFFF>" . $matrix [ $x ] [ $y ] . "</td>" ; g g else f i f ( ( $y % 2)==0) f echo "<td bgco lo r=#999999>" . $matrix [ $x ] [ $y ] . "</td>" ; g 83Appendix A. 8 Queens PHP Code else f echo "<td bgco lo r=#FFFFFF>" . $matrix [ $x ] [ $y ] . 
"</td>" ; g g g echo "</tr>" ; g echo "</table>" ; g //end func t i on f unc t i on makeunitcol ( $matrix , $colnumber , $entrytomakezero ) f $n=s izeof ( $matrix ) ; for ( $x=1; $x<=$n ; $x++) f $matrix [ $x ] [ $colnumber ]=0; g i f ( $entrytomakezero >0) f $matrix [ round( $entrytomakezero ) ] [ $colnumber ]=1; g re turn $matrix ; g f unc t i on makeunitrow ( $matrix , $rownumber , $entrytomakezero ) f $n=s izeof ( $matrix ) ; for ( $y=1; $y<=$n ; $y++) f $matrix [ $rownumber ] [ $y ]=0; g i f ( $entrytomakezero >0) f $matrix [ $rownumber ] [ round( $entrytomakezero ) ]=1 ; g re turn $matrix ; g f unc t i on columnproj ( $matrix ) f $n=s izeof ( $matrix ) ; for ( $y = 1 ; $y <= $n ; $y++) f $imax=1; $entrymax=$matrix [ 1 ] [ $y ] ; for ( $x = 1 ; $x <= $n ; $x++) f i f ( $matrix [ $x ] [ $y]>$entrymax ) f $imax = $x ; $entrymax= $matrix [ $x ] [ $y ] ; g 84Appendix A. 8 Queens PHP Code g $matrix=makeunitcol ( $matrix , $y , $imax ) ; g RETURN $matrix ; g //end func t i on f unc t i on rowproj ( $matrix ) f $n=s izeof ( $matrix ) ; for ( $x = 1 ; $x <= $n ; $x++) f $imax=1; $entrymax=$matrix [ $x ] [ 1 ] ; for ( $y = 1 ; $y <= $n ; $y++) f i f ( $matrix [ $x ] [ $y]>$entrymax ) f $imax = $y ; $entrymax= $matrix [ $x ] [ $y ] ; g g $matrix=makeunitrow ( $matrix , $x , $imax ) ; g RETURN $matrix ; g //end func t i on f unc t i on po s t i v e s l o p ep r o j ( $matrix ) f $v1 [1 ]= $matrix [ 1 ] [ 1 ] ; $v2 [1 ]= $matrix [ 2 ] [ 1 ] ; $v2 [2 ]= $matrix [ 1 ] [ 2 ] ; $v3 [1 ]= $matrix [ 3 ] [ 1 ] ; $v3 [2 ]= $matrix [ 2 ] [ 2 ] ; $v3 [3 ]= $matrix [ 1 ] [ 3 ] ; $v4 [1 ]= $matrix [ 4 ] [ 1 ] ; $v4 [2 ]= $matrix [ 3 ] [ 2 ] ; $v4 [3 ]= $matrix [ 2 ] [ 3 ] ; $v4 [4 ]= $matrix [ 1 ] [ 4 ] ; $v5 [1 ]= $matrix [ 5 ] [ 1 ] ; $v5 [2 ]= $matrix [ 4 ] [ 2 ] ; $v5 [3 ]= $matrix [ 3 ] [ 3 ] ; $v5 [4 ]= $matrix [ 2 ] [ 4 ] ; $v5 [5 ]= $matrix [ 1 ] [ 5 ] ; $v6 [1 ]= $matrix [ 6 ] [ 1 ] ; $v6 [2 ]= $matrix [ 5 ] [ 2 ] ; $v6 [3 ]= $matrix [ 4 ] [ 3 ] ; $v6 [4 ]= $matrix [ 3 ] [ 4 ] ; $v6 [5 ]= $matrix [ 2 ] [ 5 ] ; $v6 [6 ]= $matrix [ 1 ] [ 6 ] ; $v7 [1 ]= $matrix [ 7 ] [ 1 ] ; $v7 [2 ]= $matrix [ 6 ] [ 2 ] ; $v7 [3 ]= $matrix [ 5 ] [ 3 ] ; $v7 [4 ]= $matrix [ 4 ] [ 4 ] ; $v7 [5 ]= $matrix [ 3 ] [ 5 ] ; $v7 [6 ]= $matrix [ 2 ] [ 6 ] ; $v7 [7 ]= $matrix [ 1 ] [ 7 ] ; $v8 [1 ]= $matrix [ 8 ] [ 1 ] ; $v8 [2 ]= $matrix [ 7 ] [ 2 ] ; $v8 [3 ]= $matrix [ 6 ] [ 3 ] ; $v8 [4 ]= $matrix [ 5 ] [ 4 ] ; $v8 [5 ]= $matrix [ 4 ] [ 5 ] ; $v8 [6 ]= $matrix [ 3 ] [ 6 ] ; $v8 [7 ]= $matrix [ 2 ] [ 7 ] ; $v8 [8 ]= $matrix [ 1 ] [ 8 ] ; $v9 [1 ]= $matrix [ 8 ] [ 2 ] ; $v9 [2 ]= $matrix [ 7 ] [ 3 ] ; $v9 [3 ]= $matrix [ 6 ] [ 4 ] ; $v9 [4 ]= $matrix [ 5 ] [ 5 ] ; $v9 [5 ]= $matrix [ 4 ] [ 6 ] ; $v9 [6 ]= $matrix [ 3 ] [ 7 ] ; $v9 [7 ]= $matrix [ 2 ] [ 8 ] ; $v10 [1 ]= $matrix [ 8 ] [ 3 ] ; $v10 [2 ]= $matrix [ 7 ] [ 4 ] ; $v10 [3 ]= $matrix [ 6 ] [ 5 ] ; $v10 [4 ]= $matrix [ 5 ] [ 6 ] ; $v10 [5 ]= $matrix [ 4 ] [ 7 ] ; $v10 [6 ]= $matrix [ 3 ] [ 8 ] ; $v11 [1 ]= $matrix [ 8 ] [ 4 ] ; $v11 [2 ]= $matrix [ 7 ] [ 5 ] ; $v11 [3 ]= $matrix [ 6 ] [ 6 ] ; $v11 [4 ]= $matrix [ 5 ] [ 7 ] ; $v11 [5 ]= $matrix [ 4 ] [ 8 ] ; $v12 [1 ]= $matrix [ 8 ] [ 5 ] ; $v12 [2 ]= $matrix [ 7 ] [ 6 ] ; $v12 [3 ]= $matrix [ 6 ] [ 7 ] ; $v12 [4 ]= $matrix [ 5 ] [ 8 ] ; $v13 [1 ]= $matrix [ 8 ] [ 6 ] ; $v13 [2 ]= $matrix [ 7 ] [ 7 ] ; $v13 [3 ]= $matrix [ 6 ] [ 8 ] ; 85Appendix A. 
8 Queens PHP Code $v14 [1 ]= $matrix [ 8 ] [ 7 ] ; $v14 [2 ]= $matrix [ 7 ] [ 8 ] ; $v15 [1 ]= $matrix [ 8 ] [ 8 ] ; i f ( $v1 [1]>=0.5) f $matrix [ 1 ] [ 1 ] = 1 ; g else f $matrix [ 1 ] [ 1 ] = 0 ; g $diagnames = array ( ’ v2 ’ , ’ v3 ’ , ’ v4 ’ , ’ v5 ’ , ’ v6 ’ , ’ v7 ’ , ’ v8 ’ , ’ v9 ’ , ’ v10 ’ , ’ v11 ’ , ’ v12 ’ , ’ v13 ’ , ’ v14 ’ ) ; $unitnames= array ( ’w2 ’ , ’w3 ’ , ’w4 ’ , ’w5 ’ , ’w6 ’ , ’w7 ’ , ’w8 ’ , ’w9 ’ , ’w10 ’ , ’w11 ’ , ’w12 ’ , ’w13 ’ , ’w14 ’ ) ; for ( $out s idecounte r =2; $outs idecounter <=8; $out s idecounte r++) f $ le f t sum=0; $rightsum=0; $f$unitnames [ $outs idecounter  2]g= un i tp r o j ( $f$diagnames [ $outs idecounter  2]g , $out s idecounte r ) ; for ( $ i n s i d e coun t e r =1; $ in s idecounte r<=$out s idecounte r ; $ i n s i d e coun t e r++) f $ le f t sum=$le f t sum+pow( $f$diagnames [ $outs idecounter  2]g [ $ i n s i d e coun t e r ] , 2 ) ; $rightsum=$rightsum+pow( $f$diagnames [ $outs idecounter  2]g [ $ i n s i d e coun t e r ] $f$unitnames [ $outs idecounter  2]g [ $ i n s i d e coun t e r ] , 2 ) ; g // echo " LS > " . $ l e f t sum ." < RS > " . $r ightsum . "< " ; i f ( sqrt ( $ l e f t sum ) < sqrt ( $rightsum ) ) f // echo " A l l Zeros " ; for ($num1=$out s idecounte r ; $num1>=1; $num1  ) f for ($num2=1; $num2<=$out s idecounte r ; $num2++) f i f ( ( $num1+$num2)==($out s idecounte r +1)) f $matrix [ $num1 ] [ $num2]=0; g g g g else f // echo " Unit Vector " ; $un i tvec to r counte r =0; for ($num1=$out s idecounte r ; $num1>=1; $num1  ) f for ($num2=1; $num2<=$out s idecounte r ; $num2++) f i f ( ( $num1+$num2)==($out s idecounte r +1)) f $un i tvec to r counte r++; $matrix [ $num1 ] [ $num2]= $f$unitnames [ $outs idecounter  2]g [ $un i tvec to r counte r ] ; g g g g g 86Appendix A. 8 Queens PHP Code //                   NOW WE do the remaining 7 d i agona l s                   i f ( $v15 [1]>=0.5) f $matrix [ 8 ] [ 8 ] = 1 ; g else f $matrix [ 8 ] [ 8 ] = 0 ; g for ( $out s idecounte r =14; $outs idecounter >=9; $outs idecounter  ) f $ le f t sum=0; $rightsum=0; $f$unitnames [ $outs idecounter  2]g= un i tp r o j ( $f$diagnames [ $outs idecounter  2]g ,(16 $out s idecounte r ) ) ; for ( $ i n s i d e coun t e r =1; $ in s idecounte r <=(16 $out s idecounte r ) ; $ i n s i d e coun t e r++) f $ le f t sum=$le f t sum+pow( $f$diagnames [ $outs idecounter  2]g [ $ i n s i d e coun t e r ] , 2 ) ; $rightsum=$rightsum+pow( $f$diagnames [ $outs idecounter  2]g [ $ i n s i d e coun t e r ] $f$unitnames [ $outs idecounter  2]g [ $ i n s i d e coun t e r ] , 2 ) ; g i f ( sqrt ( $ l e f t sum ) < sqrt ( $rightsum ) ) f // echo " A l l Zeros " ; for ($num1=1; $num1<=8; $num1++) f for ($num2=1; $num2<=8; $num2++) f i f ( ( $num1+$num2)==($out s idecounte r +1)) f $matrix [ $num2 ] [ $num1]=0; g g g g else f // echo " Unit Vector " ; $un i tvec to r counte r =0; for ($num1=1; $num1<=8; $num1++) f for ($num2=1; $num2<=8; $num2++) f i f ( ( $num1+$num2)==($out s idecounte r +1)) f $un i tvec to r counte r++; $matrix [ $num2 ] [ $num1]= $f$unitnames [ $outs idecounter  2]g [ $un i tvec to r counte r ] ; g g g g g RETURN $matrix ; g //end func t i on 87Appendix A. 
8 Queens PHP Code f unc t i on c l o c kw i s e r o t a t i o n ( $matrix ) f for ( $x=1; $x<=8; $x++) f for ( $y=1; $y<=8; $y++) f $newmatrix [ $x ] [ $y]=$matrix [9 $y ] [ $x ] ; g g RETURN $newmatrix ; g //end func t i on f unc t i on coun t e r c l o ckw i s e r o t a t i on ( $matrix ) f for ( $x=1; $x<=8; $x++) f for ( $y=1; $y<=8; $y++) f $newmatrix [9 $y ] [ $x]=$matrix [ $x ] [ $y ] ; g g RETURN $newmatrix ; g //end func t i on f unc t i on un i tp r o j ( $v , $ s i z e ) f $imax=1; $entrymax = $v [ 1 ] ; for ( $ i =2; $i<=$ s i z e ; $ i++) f i f ( $v [ $ i ]>$entrymax ) f $imax = $ i ; $entrymax =$v [ $ i ] ; g g RETURN makeunit ( $imax , $ s i z e ) ; g f unc t i on makeunit ( $i , $ s i z e ) f for ( $x=1; $x<=$ s i z e ; $x++) f $v [ $x ]=0; g i f ( $i >0) f $v [ round( $ i ) ]=1 ; g re turn $v ; g f unc t i on addcubes ($M1, $M2) f for ( $ i =1; $i<=8; $ i++) f for ( $ j =1; $j<=8; $ j++) f $S [ $ i ] [ $ j ]=$M1[ $ i ] [ $ j ] + $M2[ $ i ] [ $ j ] ; 88Appendix A. 8 Queens PHP Code g g re turn $S ; g f unc t i on sca lmultcube ( $alpha , $T) f for ( $ i =1; $i<=8; $ i++) f for ( $ j =1; $j<=8; $ j++) f $S [ $ i ] [ $ j ]=$alpha  $T [ $ i ] [ $ j ] ; g g re turn $S ; g // end func t i on f unc t i on d iagpro j ($T1 , $T2 , $T3 , $T4) f $T=addcubes ($T1 , $T2 ) ; $T=addcubes ($T , $T3 ) ; $T=addcubes ($T , $T4 ) ; $T=sca lmultcube ( 0 . 2 5 , $T ) ; re turn $T ; g f unc t i on new1 ($T1 , $T2 , $T3 , $T4) f $PD=diagpro j ($T1 , $T2 , $T3 , $T4 ) ; $S1 = sca lmultcube (2 ,$PD) ; $S2 = sca lmultcube ( 1 ,$T1 ) ; $S3 = addcubes ( $S1 , $S2 ) ; $S4 = columnproj ( $S3 ) ; $S5 = addcubes ($T1 , sca lmultcube ( 1 ,$PD) ) ; $S = addcubes ( $S4 , $S5 ) ; r e turn $S ; g f unc t i on new2 ($T1 , $T2 , $T3 , $T4) f $PD=diagpro j ($T1 , $T2 , $T3 , $T4 ) ; $S1 = sca lmultcube (2 ,$PD) ; $S2 = sca lmultcube ( 1 ,$T2 ) ; $S3 = addcubes ( $S1 , $S2 ) ; $S4 = rowproj ( $S3 ) ; $S5 = addcubes ($T2 , sca lmultcube ( 1 ,$PD) ) ; $S = addcubes ( $S4 , $S5 ) ; r e turn $S ; g f unc t i on new3 ($T1 , $T2 , $T3 , $T4) f $PD=diagpro j ($T1 , $T2 , $T3 , $T4 ) ; 89Appendix A. 
8 Queens PHP Code $S1 = sca lmultcube (2 ,$PD) ; $S2 = sca lmultcube ( 1 ,$T3 ) ; $S3 = addcubes ( $S1 , $S2 ) ; $S4 = po s t i v e s l o p ep r o j ( $S3 ) ; $S5 = addcubes ($T3 , sca lmultcube ( 1 ,$PD) ) ; $S = addcubes ( $S4 , $S5 ) ; r e turn $S ; g f unc t i on new4 ($T1 , $T2 , $T3 , $T4) f g l oba l $or igS ; $PD=diagpro j ($T1 , $T2 , $T3 , $T4 ) ; $S1 = sca lmultcube (2 ,$PD) ; $S2 = sca lmultcube ( 1 ,$T4 ) ; $S3 = addcubes ( $S1 , $S2 ) ; $S4 = coun t e r c l o ckw i s e r o t a t i on ( p o s t i v e s l o p ep r o j ( c l o c kw i s e r o t a t i o n ( $S3 ) ) ) ; $S5 = addcubes ($T4 , sca lmultcube ( 1 ,$PD) ) ; $S = addcubes ( $S4 , $S5 ) ; r e turn $S ; g $or igMatr ix [ 1 ] [ 1 ]= $ POST [ ’ 11 ’ ] ; $or igMatr ix [ 1 ] [ 2 ]= $ POST [ ’ 12 ’ ] ; $or igMatr ix [ 1 ] [ 3 ]= $ POST [ ’ 13 ’ ] ; $or igMatr ix [ 1 ] [ 4 ]= $ POST [ ’ 14 ’ ] ; $or igMatr ix [ 1 ] [ 5 ]= $ POST [ ’ 15 ’ ] ; $or igMatr ix [ 1 ] [ 6 ]= $ POST [ ’ 16 ’ ] ; $or igMatr ix [ 1 ] [ 7 ]= $ POST [ ’ 17 ’ ] ; $or igMatr ix [ 1 ] [ 8 ]= $ POST [ ’ 18 ’ ] ; $or igMatr ix [ 2 ] [ 1 ]= $ POST [ ’ 21 ’ ] ; $or igMatr ix [ 2 ] [ 2 ]= $ POST [ ’ 22 ’ ] ; $or igMatr ix [ 2 ] [ 3 ]= $ POST [ ’ 23 ’ ] ; $or igMatr ix [ 2 ] [ 4 ]= $ POST [ ’ 24 ’ ] ; $or igMatr ix [ 2 ] [ 5 ]= $ POST [ ’ 25 ’ ] ; $or igMatr ix [ 2 ] [ 6 ]= $ POST [ ’ 26 ’ ] ; $or igMatr ix [ 2 ] [ 7 ]= $ POST [ ’ 27 ’ ] ; $or igMatr ix [ 2 ] [ 8 ]= $ POST [ ’ 28 ’ ] ; $or igMatr ix [ 3 ] [ 1 ]= $ POST [ ’ 31 ’ ] ; $or igMatr ix [ 3 ] [ 2 ]= $ POST [ ’ 32 ’ ] ; $or igMatr ix [ 3 ] [ 3 ]= $ POST [ ’ 33 ’ ] ; $or igMatr ix [ 3 ] [ 4 ]= $ POST [ ’ 34 ’ ] ; $or igMatr ix [ 3 ] [ 5 ]= $ POST [ ’ 35 ’ ] ; $or igMatr ix [ 3 ] [ 6 ]= $ POST [ ’ 36 ’ ] ; $or igMatr ix [ 3 ] [ 7 ]= $ POST [ ’ 37 ’ ] ; $or igMatr ix [ 3 ] [ 8 ]= $ POST [ ’ 38 ’ ] ; $or igMatr ix [ 4 ] [ 1 ]= $ POST [ ’ 41 ’ ] ; 90Appendix A. 
8 Queens PHP Code $or igMatr ix [ 4 ] [ 2 ]= $ POST [ ’ 42 ’ ] ; $or igMatr ix [ 4 ] [ 3 ]= $ POST [ ’ 43 ’ ] ; $or igMatr ix [ 4 ] [ 4 ]= $ POST [ ’ 44 ’ ] ; $or igMatr ix [ 4 ] [ 5 ]= $ POST [ ’ 45 ’ ] ; $or igMatr ix [ 4 ] [ 6 ]= $ POST [ ’ 46 ’ ] ; $or igMatr ix [ 4 ] [ 7 ]= $ POST [ ’ 47 ’ ] ; $or igMatr ix [ 4 ] [ 8 ]= $ POST [ ’ 48 ’ ] ; $or igMatr ix [ 5 ] [ 1 ]= $ POST [ ’ 51 ’ ] ; $or igMatr ix [ 5 ] [ 2 ]= $ POST [ ’ 52 ’ ] ; $or igMatr ix [ 5 ] [ 3 ]= $ POST [ ’ 53 ’ ] ; $or igMatr ix [ 5 ] [ 4 ]= $ POST [ ’ 54 ’ ] ; $or igMatr ix [ 5 ] [ 5 ]= $ POST [ ’ 55 ’ ] ; $or igMatr ix [ 5 ] [ 6 ]= $ POST [ ’ 56 ’ ] ; $or igMatr ix [ 5 ] [ 7 ]= $ POST [ ’ 57 ’ ] ; $or igMatr ix [ 5 ] [ 8 ]= $ POST [ ’ 58 ’ ] ; $or igMatr ix [ 6 ] [ 1 ]= $ POST [ ’ 61 ’ ] ; $or igMatr ix [ 6 ] [ 2 ]= $ POST [ ’ 62 ’ ] ; $or igMatr ix [ 6 ] [ 3 ]= $ POST [ ’ 63 ’ ] ; $or igMatr ix [ 6 ] [ 4 ]= $ POST [ ’ 64 ’ ] ; $or igMatr ix [ 6 ] [ 5 ]= $ POST [ ’ 65 ’ ] ; $or igMatr ix [ 6 ] [ 6 ]= $ POST [ ’ 66 ’ ] ; $or igMatr ix [ 6 ] [ 7 ]= $ POST [ ’ 67 ’ ] ; $or igMatr ix [ 6 ] [ 8 ]= $ POST [ ’ 68 ’ ] ; $or igMatr ix [ 7 ] [ 1 ]= $ POST [ ’ 71 ’ ] ; $or igMatr ix [ 7 ] [ 2 ]= $ POST [ ’ 72 ’ ] ; $or igMatr ix [ 7 ] [ 3 ]= $ POST [ ’ 73 ’ ] ; $or igMatr ix [ 7 ] [ 4 ]= $ POST [ ’ 74 ’ ] ; $or igMatr ix [ 7 ] [ 5 ]= $ POST [ ’ 75 ’ ] ; $or igMatr ix [ 7 ] [ 6 ]= $ POST [ ’ 76 ’ ] ; $or igMatr ix [ 7 ] [ 7 ]= $ POST [ ’ 77 ’ ] ; $or igMatr ix [ 7 ] [ 8 ]= $ POST [ ’ 78 ’ ] ; $or igMatr ix [ 8 ] [ 1 ]= $ POST [ ’ 81 ’ ] ; $or igMatr ix [ 8 ] [ 2 ]= $ POST [ ’ 82 ’ ] ; $or igMatr ix [ 8 ] [ 3 ]= $ POST [ ’ 83 ’ ] ; $or igMatr ix [ 8 ] [ 4 ]= $ POST [ ’ 84 ’ ] ; $or igMatr ix [ 8 ] [ 5 ]= $ POST [ ’ 85 ’ ] ; $or igMatr ix [ 8 ] [ 6 ]= $ POST [ ’ 86 ’ ] ; $or igMatr ix [ 8 ] [ 7 ]= $ POST [ ’ 87 ’ ] ; $or igMatr ix [ 8 ] [ 8 ]= $ POST [ ’ 88 ’ ] ; $ i t e r a t i o n =20000; echo "<t ab l e border=2 ce l l padd ing=5 c e l l s p a c i n g=5>" ; echo "<tr><th>I t e r a t i on </th><th>Chess Board</th><th>" ; echo "Current Matrix" ; 91Appendix A. 8 Queens PHP Code echo "</th>" ; echo "</tr>" ; echo "<tr><td>0</td><td>" ; PrintChessMatrix ( $or igMatr ix ) ; echo "</td><td>" ; PrintMatr ix ( $or igMatr ix ) ; echo"</td>" ; $T1old=$or igMatr ix ; $T2old=$or igMatr ix ; $T3old=$or igMatr ix ; ; $T4old=$or igMatr ix ; $Tadaold=d iagpro j ( $T1old , $T2old , $T3old , $T4old ) ; $ so lved=0; $ i =0; while ( ! $ so lved ) f $T1=new1( $T1old , $T2old , $T3old , $T4old ) ; $T2=new2( $T1old , $T2old , $T3old , $T4old ) ; $T3=new3( $T1old , $T2old , $T3old , $T4old ) ; $T4=new4( $T1old , $T2old , $T3old , $T4old ) ; $Tada=d iagpro j ($T1 , $T2 , $T3 , $T4 ) ; i f ( checkqueens ( $Tada ) ) f $so lved=1; g echo "<tr><td>$i</td><td>" ; PrintChessMatrix ( $Tada ) ; echo "</td><td>" ; PrintMatr ix ( $Tada ) ; echo "</td>" ; echo "</tr>" ; $ i++; $T1old = $T1 ; $T2old = $T2 ; $T3old = $T3 ; $T4old = $T4 ; g echo "<big>Success ! Solved in " . ( $i  1). " i t e r a t i o n s .</big>" ; ?> 92Appendix B Sudoku PHP Code We  rst have sudokufunctions.php, then our sudoku code. 
<?php // This code i s p ro to t ype and i s prov ided as i s under the GNU General Pub l i c License // Author : Jason Schaad August 11 , 2010 f unc t i on makeunit ( $ i ) f // re turns a vec t o r ( array ) // $ i shou ld be a f l o a t $v [ 1 ]=0 ; $v [ 2 ]=0 ; $v [ 3 ]=0 ; $v [ 4 ]=0 ; $v [ 5 ]=0 ; $v [ 6 ]=0 ; $v [ 7 ]=0 ; $v [ 8 ]=0 ; $v [ 9 ]=0 ; // ea s i e r way to i n i t i a l i z e an array i f ( $i >0) f $v [ round( $ i ) ]=1 ; g re turn $v ; g f unc t i on un i tp r o j ( $v ) f // input $v i s a vec t o r // f i n d s b i g g e s t component and makes 1 and the r e s t 0 //e . g . [ 0 . 4 , . 6 , . 7 , . 8 , 0 , 0 , 0 , 0 , 0 ] = [0 ,0 ,0 , 1 , 0 , 0 , 0 , 0 , 0 ] // re turns a vec t o r ( array ) // r e l i e s on makeunit $imax=1; $entrymax = $v [ 1 ] ; for ( $ i =2; $i<=9; $ i++) f i f ( $v [ $ i ]>$entrymax ) f $imax = $ i ; $entrymax =$v [ $ i ] ; g g re turn makeunit ( $imax ) ; g f unc t i on addcubes ($T1 , $T2) f // add 2 9x9x9 cubes 93Appendix B. Sudoku PHP Code // re turns the "sum cube" for ( $ i =1; $i<=9; $ i++) f for ( $ j =1; $j<=9; $ j++) f for ( $k=1; $k<=9; $k++) f $S [ $ i ] [ $ j ] [ $k]=$T1 [ $ i ] [ $ j ] [ $k ] + $T2 [ $ i ] [ $ j ] [ $k ] ; g g g re turn $S ; g f unc t i on pr intcube ( $cube ) f // t h i s f unc t i on p r i n t s the cube to the screen // not inc l uded in un i t t e s t s as i t i s never used in my ac tua l sudoku code // i t was used f o r debugg ing when I was c r ea t i n g the code for ( $x=1; $x<=9; $x++) f for ( $y=1; $y<=9; $y++) f for ( $z=1; $z<=9; $z++) f echo " ( " . $x . " , " . $y . " , " . $z . " ) =" . $cube [ $x ] [ $y ] [ $z ] . "<br>" ; g g g g f unc t i on p r in ta r r ay ( $square ) f // t h i s f unc t i on p r i n t s a 2 dimensiona l array to the screen ( our " f l a t t e n e d " Sudoku ) // the r e i s no need f o r a un i t t e s t f o r t h i s code as i t ou tpu t s to the screen only for ( $x=1; $x<=9; $x++) f for ( $y=1; $y<=9; $y++) f echo $square [ $x ] [ $y ] . " " ; g echo "<br />" ; g g f unc t i on columnproj ( $Tcube ) f // f i n d s the b i g g e s t en try in every column of our sudoku cube // and makes i t i n t o a un i t v e c t o r  r e turns a cube for ( $ j =1; $j<=9; $ j++) f for ( $k=1; $k<=9; $k++) f for ( $ i =1; $i<=9; $ i++) f $tempv [ $ i ]=$Tcube [ $ i ] [ $ j ] [ $k ] ; g $tempw=un i tp r o j ( $tempv ) ; 94Appendix B. Sudoku PHP Code for ( $ i =1; $i<=9; $ i++) f $cprojTcube [ $ i ] [ $ j ] [ $k]=$tempw [ $ i ] ; g g g re turn $cprojTcube ; g f unc t i on rowproj ( $Tcube ) f // f i n d s the b i g g e s t en try in every row o f our sudoku cube // and makes i t i n t o a un i t v e c t o r which i t r e tu rns for ( $ i =1; $i<=9; $ i++) f for ( $k=1; $k<=9; $k++) f for ( $ j =1; $j<=9; $ j++) f $tempv [ $ j ]=$Tcube [ $ i ] [ $ j ] [ $k ] ; g $tempw=un i tp r o j ( $tempv ) ; for ( $ j =1; $j<=9; $ j++) f $rprojTcube [ $ i ] [ $ j ] [ $k]=$tempw [ $ j ] ; g g g re turn $rprojTcube ; g f unc t i on meetproj ( $Tcube ) f // This f unc t i on c a l c u l a t e s a " un i t v e c t o r " based on the // meeting rooms ( the 3 x 3 rooms ) i t uses meetpro2 func t i on // to do a l o t o f the work ( l oops are good . . 
and so i s re us ing your code ) // i n i t i a l i z e mprojTcube to have a l l 0 ’ s for ( $ i =1; $i<=9; $ i++) f for ( $ j =1; $j<=9; $ j++) f for ( $k=1; $k<=9; $k++) f $mprojTcube [ $ i ] [ $ j ] [ $k ]=0; g g g // 1 1 subcube $mprojTcube=meetproj2 ( $mprojTcube , $Tcube , 1 , 1 ) ; // 1 2 subcube $mprojTcube=meetproj2 ( $mprojTcube , $Tcube , 4 , 1 ) ; // 1 3 subcube $mprojTcube=meetproj2 ( $mprojTcube , $Tcube , 7 , 1 ) ; // 2 1 subcube $mprojTcube=meetproj2 ( $mprojTcube , $Tcube , 1 , 4 ) ; // 2 2 subcube $mprojTcube=meetproj2 ( $mprojTcube , $Tcube , 4 , 4 ) ; 95Appendix B. Sudoku PHP Code // 2 3 subcube $mprojTcube=meetproj2 ( $mprojTcube , $Tcube , 7 , 4 ) ; // 3 1 subcube $mprojTcube=meetproj2 ( $mprojTcube , $Tcube , 1 , 7 ) ; // 3 2 subcube $mprojTcube=meetproj2 ( $mprojTcube , $Tcube , 4 , 7 ) ; // 3 3 subcube $mprojTcube=meetproj2 ( $mprojTcube , $Tcube , 7 , 7 ) ; r e turn $mprojTcube ; g f unc t i on meetproj2 ( $mprojTcube , $Tcube , $index1 , $index2 ) f // This f unc t i on c a l c u l a t e s a " un i t v e c t o r " based on the // meeting rooms ( the 3 x 3 rooms ) // t h i s i s coded us ing l oops and the v a r i a b l e s $ index1 and $ index // 2 he l p g e t the i n d i c e s co r r e c t f o r the funny " un i t v e c t o r " for ( $k=1; $k<=9; $k++) f for ( $ j =0; $j <3; $ j++) f for ( $ i =0; $i <3; $ i++) f $tempv[1+ $ i+3 $ j ]=$Tcube [ $index2+$ j ] [ $ index1+$ i ] [ $k ] ; g g $tempw = un i tp r o j ( $tempv ) ; for ( $ j =0; $j <3; $ j++) f for ( $ i =0; $i <3; $ i++) f $mprojTcube [ $index2+$ j ] [ $ index1+$ i ] [ $k]=$tempw[1+ $ i+3 $ j ] ; g g g re turn $mprojTcube ; g f unc t i on sca lmultcube ( $alpha , $T) f // Takes as input a cube and a s c a l a r and mu l t i p l e s them  output i s a cube for ( $ i =1; $i<=9; $ i++) f for ( $ j =1; $j<=9; $ j++) f for ( $k=1; $k<=9; $k++) f $S [ $ i ] [ $ j ] [ $k]=$alpha  $T [ $ i ] [ $ j ] [ $k ] ; g g g re turn $S ; g // end func t i on 96Appendix B. Sudoku PHP Code f unc t i on thesame ($T1 , $T2) f // compares two arrays  i f they are the same re turn true , o the rw i s e f a l s e $counter=0; for ( $ i =1; $i<=9; $ i++) f for ( $ j =1; $j<=9; $ j++) f i f ($T1 [ $ i ] [ $ j ] !=$T2 [ $ i ] [ $ j ] ) f $counter++; g g g i f ( $counter >0) f re turn FALSE; g else f re turn TRUE; g g f unc t i on g ivenpro j ( $Tcube , $S ) f // t h i s f unc t i on uses the o r i g i n a l sudoku t ha t was g iven to c a l c u l a t e // a p r o j e c t i on ( our g iven Sudoku con s t r a i n t //  which i s j u s t a 9 by 9 array not a cube ) . Returns a p ro j e c t e d cube . $gprojTcube=$Tcube ; for ( $ i =1; $i<=9; $ i++) f for ( $ j =1; $j<=9; $ j++) f i f ( $S [ $ i ] [ $ j ]>0) f $tempw=makeunit ( $S [ $ i ] [ $ j ] ) ; for ( $k=1; $k<=9; $k++) f $gprojTcube [ $ i ] [ $ j ] [ $k]=$tempw [ $k ] ; g g i f ( $S [ $ i ] [ $ j ]==0) f for ( $k=1; $k<=9; $k++) f $tempv [ $k]=$Tcube [ $ i ] [ $ j ] [ $k ] ; g $tempw=un i tp r o j ( $tempv ) ; for ( $k=1; $k<=9; $k++) f $gprojTcube [ $ i ] [ $ j ] [ $k]=$tempw [ $k ] ; g g g g re turn $gprojTcube ; g // end func t i on f unc t i on makearray ( $Tcube ) f // f l a t t e n s a cube (9 x9x9 ) in t o an array (9 x9 ) so we can p r i n t i t to the screen 97Appendix B. 
Sudoku PHP Code for ( $ i =1; $i<=9; $ i++) f for ( $ j =1; $j<=9; $ j++) f $temp=1; $tempval=$Tcube [ $ i ] [ $ j ] [ 1 ] ; for ( $k=2; $k<=9; $k++) f i f ( $Tcube [ $ i ] [ $ j ] [ $k]>$tempval ) f $temp=$k ; $tempval=$Tcube [ $ i ] [ $ j ] [ $k ] ; g g $A [ $ i ] [ $ j ]=$temp ; g g re turn $A ; g f unc t i on s tuck in l oop ( $ i ) f // no need to t e s t t h i s func t ion , i t i s too s imple . i f ( $i >100000) f throw new Exception ( ’We are stuck in a loop with over 10000 i t e r a t i o n s . ’ ) ; g g f unc t i on d iagpro j ($T1 , $T2 , $T3 , $T4) f // take four cubes and computes the d iagona l $T=addcubes ($T1 , $T2 ) ; $T=addcubes ($T , $T3 ) ; $T=addcubes ($T , $T4 ) ; $T=sca lmultcube ( 0 . 2 5 , $T ) ; re turn $T ; g f unc t i on new1 ($T1 , $T2 , $T3 , $T4) f // c r ea t e s a new cube wi th the column p ro j e c t i on $PD=diagpro j ($T1 , $T2 , $T3 , $T4 ) ; $S1 = sca lmultcube (2 ,$PD) ; $S2 = sca lmultcube ( 1 ,$T1 ) ; $S3 = addcubes ( $S1 , $S2 ) ; $S4 = columnproj ( $S3 ) ; $S5 = addcubes ($T1 , sca lmultcube ( 1 ,$PD) ) ; $S = addcubes ( $S4 , $S5 ) ; r e turn $S ; g f unc t i on new2 ($T1 , $T2 , $T3 , $T4) f // c r ea t e s a new cube the row p ro j e c t i on $PD=diagpro j ($T1 , $T2 , $T3 , $T4 ) ; 98Appendix B. Sudoku PHP Code $S1 = sca lmultcube (2 ,$PD) ; $S2 = sca lmultcube ( 1 ,$T2 ) ; $S3 = addcubes ( $S1 , $S2 ) ; $S4 = rowproj ( $S3 ) ; $S5 = addcubes ($T2 , sca lmultcube ( 1 ,$PD) ) ; $S = addcubes ( $S4 , $S5 ) ; r e turn $S ; g f unc t i on new3 ($T1 , $T2 , $T3 , $T4) f // c r ea t e s a new cube wi th the meet p r o j e c t i on $PD=diagpro j ($T1 , $T2 , $T3 , $T4 ) ; $S1 = sca lmultcube (2 ,$PD) ; $S2 = sca lmultcube ( 1 ,$T3 ) ; $S3 = addcubes ( $S1 , $S2 ) ; $S4 = meetproj ( $S3 ) ; $S5 = addcubes ($T3 , sca lmultcube ( 1 ,$PD) ) ; $S = addcubes ( $S4 , $S5 ) ; r e turn $S ; g f unc t i on new4 ($T1 , $T2 , $T3 , $T4) f // c r ea t e s a new cube wi th the g iven p r o j e c t i on g l oba l $or igS ; $PD=diagpro j ($T1 , $T2 , $T3 , $T4 ) ; $S1 = sca lmultcube (2 ,$PD) ; $S2 = sca lmultcube ( 1 ,$T4 ) ; $S3 = addcubes ( $S1 , $S2 ) ; $S4 = g ivenpro j ( $S3 , $or igS ) ; $S5 = addcubes ($T4 , sca lmultcube ( 1 ,$PD) ) ; $S = addcubes ( $S4 , $S5 ) ; r e turn $S ; g f unc t i on meetproj22 ( $Tcube ) f // This f unc t i on c a l c u l a t e s a " un i t v e c t o r " based on the // meeting rooms ( the 3 x 3 rooms ) // I coded i t d i f f e r e n t l y ( us ing more l oops ) , // but i t made i t much harder to read . We can see where an // ac t ua l meeting room i s l o c a t e d in our cube t h i s way ! // 1 1 subcube for ( $k=1; $k<=9; $k++) f $tempv [ 1 ] = $Tcube [ 1 ] [ 1 ] [ $k ] ; $tempv [ 2 ] = $Tcube [ 1 ] [ 2 ] [ $k ] ; $tempv [ 3 ] = $Tcube [ 1 ] [ 3 ] [ $k ] ; $tempv [ 4 ] = $Tcube [ 2 ] [ 1 ] [ $k ] ; $tempv [ 5 ] = $Tcube [ 2 ] [ 2 ] [ $k ] ; $tempv [ 6 ] = $Tcube [ 2 ] [ 3 ] [ $k ] ; $tempv [ 7 ] = $Tcube [ 3 ] [ 1 ] [ $k ] ; 99Appendix B. 
function meetproj22($Tcube) {
  // This function calculates a "unit vector" based on the
  // meeting rooms (the 3 x 3 rooms).
  // I coded it differently (using more loops),
  // but it made it much harder to read. We can see where an
  // actual meeting room is located in our cube this way!
  // 1 1 subcube
  for ($k=1; $k<=9; $k++) {
    $tempv[1] = $Tcube[1][1][$k];
    $tempv[2] = $Tcube[1][2][$k];
    $tempv[3] = $Tcube[1][3][$k];
    $tempv[4] = $Tcube[2][1][$k];
    $tempv[5] = $Tcube[2][2][$k];
    $tempv[6] = $Tcube[2][3][$k];
    $tempv[7] = $Tcube[3][1][$k];
    $tempv[8] = $Tcube[3][2][$k];
    $tempv[9] = $Tcube[3][3][$k];
    $tempw = unitproj($tempv);
    $mprojTcube[1][1][$k] = $tempw[1];
    $mprojTcube[1][2][$k] = $tempw[2];
    $mprojTcube[1][3][$k] = $tempw[3];
    $mprojTcube[2][1][$k] = $tempw[4];
    $mprojTcube[2][2][$k] = $tempw[5];
    $mprojTcube[2][3][$k] = $tempw[6];
    $mprojTcube[3][1][$k] = $tempw[7];
    $mprojTcube[3][2][$k] = $tempw[8];
    $mprojTcube[3][3][$k] = $tempw[9];
  }
  // 1 2 subcube
  for ($k=1; $k<=9; $k++) {
    $tempv[1] = $Tcube[1][4][$k];
    $tempv[2] = $Tcube[1][5][$k];
    $tempv[3] = $Tcube[1][6][$k];
    $tempv[4] = $Tcube[2][4][$k];
    $tempv[5] = $Tcube[2][5][$k];
    $tempv[6] = $Tcube[2][6][$k];
    $tempv[7] = $Tcube[3][4][$k];
    $tempv[8] = $Tcube[3][5][$k];
    $tempv[9] = $Tcube[3][6][$k];
    $tempw = unitproj($tempv);
    $mprojTcube[1][4][$k] = $tempw[1];
    $mprojTcube[1][5][$k] = $tempw[2];
    $mprojTcube[1][6][$k] = $tempw[3];
    $mprojTcube[2][4][$k] = $tempw[4];
    $mprojTcube[2][5][$k] = $tempw[5];
    $mprojTcube[2][6][$k] = $tempw[6];
    $mprojTcube[3][4][$k] = $tempw[7];
    $mprojTcube[3][5][$k] = $tempw[8];
    $mprojTcube[3][6][$k] = $tempw[9];
  }
  // 1 3 subcube
  for ($k=1; $k<=9; $k++) {
    $tempv[1] = $Tcube[1][7][$k];
    $tempv[2] = $Tcube[1][8][$k];
    $tempv[3] = $Tcube[1][9][$k];
    $tempv[4] = $Tcube[2][7][$k];
    $tempv[5] = $Tcube[2][8][$k];
    $tempv[6] = $Tcube[2][9][$k];
    $tempv[7] = $Tcube[3][7][$k];
    $tempv[8] = $Tcube[3][8][$k];
    $tempv[9] = $Tcube[3][9][$k];
    $tempw = unitproj($tempv);
    $mprojTcube[1][7][$k] = $tempw[1];
    $mprojTcube[1][8][$k] = $tempw[2];
    $mprojTcube[1][9][$k] = $tempw[3];
    $mprojTcube[2][7][$k] = $tempw[4];
    $mprojTcube[2][8][$k] = $tempw[5];
    $mprojTcube[2][9][$k] = $tempw[6];
    $mprojTcube[3][7][$k] = $tempw[7];
    $mprojTcube[3][8][$k] = $tempw[8];
    $mprojTcube[3][9][$k] = $tempw[9];
  }
  // 2 1 subcube
  for ($k=1; $k<=9; $k++) {
    $tempv[1] = $Tcube[4][1][$k];
    $tempv[2] = $Tcube[4][2][$k];
    $tempv[3] = $Tcube[4][3][$k];
    $tempv[4] = $Tcube[5][1][$k];
    $tempv[5] = $Tcube[5][2][$k];
    $tempv[6] = $Tcube[5][3][$k];
    $tempv[7] = $Tcube[6][1][$k];
    $tempv[8] = $Tcube[6][2][$k];
    $tempv[9] = $Tcube[6][3][$k];
    $tempw = unitproj($tempv);
    $mprojTcube[4][1][$k] = $tempw[1];
    $mprojTcube[4][2][$k] = $tempw[2];
    $mprojTcube[4][3][$k] = $tempw[3];
    $mprojTcube[5][1][$k] = $tempw[4];
    $mprojTcube[5][2][$k] = $tempw[5];
    $mprojTcube[5][3][$k] = $tempw[6];
    $mprojTcube[6][1][$k] = $tempw[7];
    $mprojTcube[6][2][$k] = $tempw[8];
    $mprojTcube[6][3][$k] = $tempw[9];
  }
  // 2 2 subcube
  for ($k=1; $k<=9; $k++) {
    $tempv[1] = $Tcube[4][4][$k];
    $tempv[2] = $Tcube[4][5][$k];
    $tempv[3] = $Tcube[4][6][$k];
    $tempv[4] = $Tcube[5][4][$k];
    $tempv[5] = $Tcube[5][5][$k];
    $tempv[6] = $Tcube[5][6][$k];
    $tempv[7] = $Tcube[6][4][$k];
    $tempv[8] = $Tcube[6][5][$k];
    $tempv[9] = $Tcube[6][6][$k];
    $tempw = unitproj($tempv);
    $mprojTcube[4][4][$k] = $tempw[1];
    $mprojTcube[4][5][$k] = $tempw[2];
    $mprojTcube[4][6][$k] = $tempw[3];
    $mprojTcube[5][4][$k] = $tempw[4];
    $mprojTcube[5][5][$k] = $tempw[5];
    $mprojTcube[5][6][$k] = $tempw[6];
    $mprojTcube[6][4][$k] = $tempw[7];
    $mprojTcube[6][5][$k] = $tempw[8];
    $mprojTcube[6][6][$k] = $tempw[9];
  }
  // 2 3 subcube
  for ($k=1; $k<=9; $k++) {
    $tempv[1] = $Tcube[4][7][$k];
    $tempv[2] = $Tcube[4][8][$k];
    $tempv[3] = $Tcube[4][9][$k];
    $tempv[4] = $Tcube[5][7][$k];
    $tempv[5] = $Tcube[5][8][$k];
    $tempv[6] = $Tcube[5][9][$k];
    $tempv[7] = $Tcube[6][7][$k];
    $tempv[8] = $Tcube[6][8][$k];
    $tempv[9] = $Tcube[6][9][$k];
    $tempw = unitproj($tempv);
    $mprojTcube[4][7][$k] = $tempw[1];
    $mprojTcube[4][8][$k] = $tempw[2];
    $mprojTcube[4][9][$k] = $tempw[3];
    $mprojTcube[5][7][$k] = $tempw[4];
    $mprojTcube[5][8][$k] = $tempw[5];
    $mprojTcube[5][9][$k] = $tempw[6];
    $mprojTcube[6][7][$k] = $tempw[7];
    $mprojTcube[6][8][$k] = $tempw[8];
    $mprojTcube[6][9][$k] = $tempw[9];
  }
  // 3 1 subcube
  for ($k=1; $k<=9; $k++) {
    $tempv[1] = $Tcube[7][1][$k];
    $tempv[2] = $Tcube[7][2][$k];
    $tempv[3] = $Tcube[7][3][$k];
    $tempv[4] = $Tcube[8][1][$k];
    $tempv[5] = $Tcube[8][2][$k];
    $tempv[6] = $Tcube[8][3][$k];
    $tempv[7] = $Tcube[9][1][$k];
    $tempv[8] = $Tcube[9][2][$k];
    $tempv[9] = $Tcube[9][3][$k];
    $tempw = unitproj($tempv);
    $mprojTcube[7][1][$k] = $tempw[1];
    $mprojTcube[7][2][$k] = $tempw[2];
    $mprojTcube[7][3][$k] = $tempw[3];
    $mprojTcube[8][1][$k] = $tempw[4];
    $mprojTcube[8][2][$k] = $tempw[5];
    $mprojTcube[8][3][$k] = $tempw[6];
    $mprojTcube[9][1][$k] = $tempw[7];
    $mprojTcube[9][2][$k] = $tempw[8];
    $mprojTcube[9][3][$k] = $tempw[9];
  }
  // 3 2 subcube
  for ($k=1; $k<=9; $k++) {
    $tempv[1] = $Tcube[7][4][$k];
    $tempv[2] = $Tcube[7][5][$k];
    $tempv[3] = $Tcube[7][6][$k];
    $tempv[4] = $Tcube[8][4][$k];
    $tempv[5] = $Tcube[8][5][$k];
    $tempv[6] = $Tcube[8][6][$k];
    $tempv[7] = $Tcube[9][4][$k];
    $tempv[8] = $Tcube[9][5][$k];
    $tempv[9] = $Tcube[9][6][$k];
    $tempw = unitproj($tempv);
    $mprojTcube[7][4][$k] = $tempw[1];
    $mprojTcube[7][5][$k] = $tempw[2];
    $mprojTcube[7][6][$k] = $tempw[3];
    $mprojTcube[8][4][$k] = $tempw[4];
    $mprojTcube[8][5][$k] = $tempw[5];
    $mprojTcube[8][6][$k] = $tempw[6];
    $mprojTcube[9][4][$k] = $tempw[7];
    $mprojTcube[9][5][$k] = $tempw[8];
    $mprojTcube[9][6][$k] = $tempw[9];
  }
  // 3 3 subcube
  for ($k=1; $k<=9; $k++) {
    $tempv[1] = $Tcube[7][7][$k];
    $tempv[2] = $Tcube[7][8][$k];
    $tempv[3] = $Tcube[7][9][$k];
    $tempv[4] = $Tcube[8][7][$k];
    $tempv[5] = $Tcube[8][8][$k];
    $tempv[6] = $Tcube[8][9][$k];
    $tempv[7] = $Tcube[9][7][$k];
    $tempv[8] = $Tcube[9][8][$k];
    $tempv[9] = $Tcube[9][9][$k];
    $tempw = unitproj($tempv);
    $mprojTcube[7][7][$k] = $tempw[1];
    $mprojTcube[7][8][$k] = $tempw[2];
    $mprojTcube[7][9][$k] = $tempw[3];
    $mprojTcube[8][7][$k] = $tempw[4];
    $mprojTcube[8][8][$k] = $tempw[5];
    $mprojTcube[8][9][$k] = $tempw[6];
    $mprojTcube[9][7][$k] = $tempw[7];
    $mprojTcube[9][8][$k] = $tempw[8];
    $mprojTcube[9][9][$k] = $tempw[9];
  }
  return $mprojTcube;
}

function checkMeetingRoom($sudoku, $index_x, $index_y) {
  // $sudoku is the 9x9 array and $index_x and $index_y are the starting points
  // [1][1] [1][2] [1][3]
  // [2][1] [2][2] [2][3]
  // [3][1] [3][2] [3][3]
  //$one=array(1=>1,1,1,2,2,2,3,3,3);
  //$two=array(1=>1,2,3,1,2,3,1,2,3);
  $one=array(1=>$index_y, $index_y, $index_y, ($index_y+1), ($index_y+1),
             ($index_y+1), ($index_y+2), ($index_y+2), ($index_y+2));
  $two=array(1=>($index_x), ($index_x+1), ($index_x+2), ($index_x), ($index_x+1),
             ($index_x+2), ($index_x), ($index_x+1), ($index_x+2));
  //print_r($one);
  //print_r($two);
  for ($i=1; $i<=9; $i++) {
    for ($j=1; $j<=9; $j++) {
      if ($i != $j) {
        if ($sudoku[$one[$i]][$two[$i]] == $sudoku[$one[$j]][$two[$j]]) {
          return 0;
        }
      }
    }
  }
  return 1;
}
function isValidSudoku($original, $sudoku) {
  // Checks to see if the Sudoku is valid; returns true if valid, false otherwise.
  // Input is the original constraints and the Sudoku to check.

  // Check if we meet the ROW constraints (1 to 9 in every row)
  // first check for duplicates
  for ($k=1; $k<=9; $k++) {
    for ($i=1; $i<=9; $i++) {
      for ($j=1; $j<=9; $j++) {
        if ($i != $j) {
          if ($sudoku[$k][$i] == $sudoku[$k][$j]) {
            //echo "Failed row constraint<br />";
            return 0;
          }
        }
      }
    }
  }

  // Check if we meet the COLUMN constraints (1 to 9 in every column)
  // first check for duplicates
  for ($k=1; $k<=9; $k++) {
    for ($i=1; $i<=9; $i++) {
      for ($j=1; $j<=9; $j++) {
        if ($i != $j) {
          if ($sudoku[$i][$k] == $sudoku[$j][$k]) {
            //echo "Failed column constraint<br />";
            return 0;
          }
        }
      }
    }
  }

  // Check MEETING ROOM constraints (1 to 9 in every meeting room)
  if (!checkMeetingRoom($sudoku, 1, 1)) { return 0; }
  if (!checkMeetingRoom($sudoku, 4, 1)) { return 0; }
  if (!checkMeetingRoom($sudoku, 7, 1)) { return 0; }
  if (!checkMeetingRoom($sudoku, 1, 4)) { return 0; }
  if (!checkMeetingRoom($sudoku, 4, 4)) { return 0; }
  if (!checkMeetingRoom($sudoku, 7, 4)) { return 0; }
  if (!checkMeetingRoom($sudoku, 1, 7)) { return 0; }
  if (!checkMeetingRoom($sudoku, 4, 7)) { return 0; }
  if (!checkMeetingRoom($sudoku, 7, 7)) { return 0; }

  // Lastly we check to make sure we have the given constraints
  for ($i=1; $i<=9; $i++) {
    for ($j=1; $j<=9; $j++) {
      $new_array[$i][$j]=$original[$i][$j]*$sudoku[$i][$j];
      $new_array[$i][$j]=sqrt($new_array[$i][$j]);
    }
  }
  //printarray($new_array);
  //echo "<br />";
  //printarray($original);
  if (!thesame($new_array, $original)) {
    return 0;
  }
  return 1;
}

<?php
set_time_limit(0);

// Report simple running errors
error_reporting(E_ERROR | E_WARNING | E_PARSE);

// Verbose mode (shows all details)
$details=$_POST['details'];
if ($details != 1) {
  $details=0;
}

// Pick up the form variables
for ($x=1; $x<=9; $x++) {
  for ($y=1; $y<=9; $y++) {
    $origS[$x][$y]=$_POST[$x.$y];
  }
}

// Display the initial Sudoku puzzle we will solve.
// We also initialize a GLOBAL variable called origS
// which is our original Sudoku (which is a constraint).
echo "<table border=1>";
for ($x=1; $x<=9; $x++) {
  echo "<tr>";
  for ($y=1; $y<=9; $y++) {
    echo "<td>";
    if ($origS[$x][$y]=="") {
      echo "&nbsp";
    } else {
      echo $origS[$x][$y];
    }
    echo "</td>";
  }
  echo "</tr>";
}
echo "</table>";

// Include the functions to be used in Sudoku
include('sudokufunctions.php');

// Initialize the Scube 9x9x9 array to all zeros
for ($x=1; $x<=9; $x++) {
  for ($y=1; $y<=9; $y++) {
    for ($z=1; $z<=9; $z++) {
      $Scube[$x][$y][$z]=0;
    }
  }
}

// Get a starting position based on the original Sudoku given
// (the commented out line is when we start at random positions, for research)
for ($x=1; $x<=9; $x++) {
  for ($y=1; $y<=9; $y++) {
    $tempv=makeunit($origS[$x][$y]);
    for ($z=1; $z<=9; $z++) {
      $Scube[$x][$y][$z]=$tempv[$z];
      //$Scube[$x][$y][$z]=rand(-10000, 10000);
    }
  }
}

echo "<br /><br />";
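// Overview of the iteration below: at each pass the four cubes T1old,...,T4old are
// updated by new1()-new4(), their average Tada is formed with diagproj(), and
// makearray() rounds Tada to a candidate 9x9 Sudoku $mTada.  The loop stops as soon
// as isValidSudoku() accepts the candidate, or stuckinloop() throws an exception
// after 100000 iterations.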
// We now start the iteration process.
// $Scube is our starting position as defined above.
$T1old=$Scube;
$T2old=$Scube;
$T3old=$Scube;
$T4old=$Scube;
$Tadaold=diagproj($T1old, $T2old, $T3old, $T4old);
$zerosinarow = 0;
$i=1;
echo "working";
while (!isValidSudoku($origS, $mTada)) {
  // Loop until the rounded iterate $mTada is a valid Sudoku.
  // We clone 4 cubes and make the diagonal.
  $T1=new1($T1old, $T2old, $T3old, $T4old);
  $T2=new2($T1old, $T2old, $T3old, $T4old);
  $T3=new3($T1old, $T2old, $T3old, $T4old);
  $T4=new4($T1old, $T2old, $T3old, $T4old);
  $Tada=diagproj($T1, $T2, $T3, $T4);
  $mTada=makearray($Tada);
  if ($details==1) {
    echo "Current Counter " . $i, "<br>";
    printarray($mTada);
  } else {
    echo ".";
  }
  $T1old = $T1;
  $T2old = $T2;
  $T3old = $T3;
  $T4old = $T4;
  $mTadaold = $mTada;
  $mTadaoldmatrix = $mTadamatrix;
  $i++;
  try {
    stuckinloop($i);
  } catch (Exception $e) {
    $badsudoku = print_r($origS, true);
    error_log($e->getMessage() . "\n", 3, "my_errors.log");
    error_log($badsudoku . "\n", 3, "my_errors.log");
    echo "Too many iterations, we are stuck.. maybe a bad Sudoku?";
    return;
  }
}
if ($details==0) {
  echo "<br>Solved in " . $i . " iterations:<br>";
  printarray($mTada);
}
?>

Appendix C

Eckstein Calculation Details

In Chapter 4, we showed that Eckstein made a numerical error in (4.95) and, as a result, in (4.96). The matrices are in fact

G_{\lambda,A,B} = J_{\lambda A}(2J_{\lambda B} - \mathrm{Id}) + (\mathrm{Id} - J_{\lambda B})
               = \begin{pmatrix} 21/40 & -1/16 \\ -1/40 & 9/16 \end{pmatrix}    (C.1)

and

S_{\lambda,A,B} = G_{\lambda,A,B}^{-1} - \mathrm{Id}
               = \begin{pmatrix} 43/47 & 10/47 \\ 4/47 & 37/47 \end{pmatrix}.    (C.2)

Below are the details of the calculation.

Figure C.1: Calculation Details
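The entries of (C.1) and (C.2) are easy to check numerically. The following short stand-alone PHP snippet (the variable names are ours, and it is independent of the Sudoku code above) inverts the 2 x 2 matrix G from (C.1) and subtracts the identity; it reproduces the matrix S in (C.2), whose off-diagonal entries 10/47 and 4/47 differ, so S is not symmetric.

<?php
// Check that G^{-1} - Id equals the matrix S stated in (C.2).
$G = array(array(21/40, -1/16), array(-1/40, 9/16));
$det = $G[0][0]*$G[1][1] - $G[0][1]*$G[1][0];          // 47/160
$Ginv = array(
    array( $G[1][1]/$det, -$G[0][1]/$det),             // 90/47, 10/47
    array(-$G[1][0]/$det,  $G[0][0]/$det)              // 4/47,  84/47
);
$S = array(
    array($Ginv[0][0]-1, $Ginv[0][1]),                 // 43/47, 10/47
    array($Ginv[1][0],   $Ginv[1][1]-1)                // 4/47,  37/47
);
print_r($S);
?>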
