Convex Relaxations of the Maximin Dispersion Problem

by Sheena Ayla Haines
B.Sc. Hons., The University of British Columbia, 2009

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The College of Graduate Studies (Mathematics)

THE UNIVERSITY OF BRITISH COLUMBIA (Okanagan)

July 2011

(c) Sheena Ayla Haines 2011

Abstract

Recently, convex relaxations have achieved notable success in solving NP-hard optimization problems. This thesis studies semidefinite programming (SDP) and second-order cone programming (SOCP) convex relaxations of the maximin dispersion problem. Since they provide nontrivial approximation bounds, we believe that our SDP and SOCP relaxation methods are useful in large-scale optimization.

The thesis is organized as follows. We begin by recalling some basic concepts from convex analysis, nonsmooth analysis, and optimization. We then introduce the weighted maximin dispersion optimization problem: locating point(s) in a given region $\mathcal{X} \subseteq \mathbb{R}^n$ that are furthest from a given set of $m$ points. We also give several reformulations of the original problem, including a convex relaxation problem and semidefinite and second-order cone programming relaxations. Using well-known results on Lipschitz functions and subdifferentials of Lipschitz functions, we then derive a theoretical characterization of the optimal solutions for a given convex region $\mathcal{X}$ and equal weights. Next, we provide a proof that the weighted maximin dispersion problem is NP-hard even in the case where $\mathcal{X}$ is a box and the weights are equal. We then propose an algorithm for finding an approximate solution using the SDP relaxation, and derive an approximation bound that depends on the subset $\mathcal{X}$. In the case that $\mathcal{X}$ is a box or a product of low-dimensional spheres, we show that the convex relaxation reduces to a second-order cone program, and provide further results on the bound obtained. Lastly, we provide numerical examples using the SDP and SOCP relaxations, and suggest future work.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication
1 Introduction
1.1 Preliminaries
1.2 Existence of Maximizers and Minimizers
1.3 Matrix Algebra
1.4 Convex Analysis
1.4.1 Convex Sets and Convex Functions
1.4.2 Minimizers or Maximizers of Convex Functions
1.5 Nonsmooth Analysis
1.5.1 Locally Lipschitz Functions
1.5.2 Subdifferentials and Critical Points
1.6 Some Convex Optimization Problems
1.7 NP-Complete Problems
1.7.1 The Theory of NP-Completeness
1.7.2 Some NP-Hard Problems
1.7.3 The Complexity of Algorithms for Solving SDP and SOCP
2 The Weighted Maximin Location Problem
2.1 General Problem and Motivation
2.2 Convex Relaxations
2.2.1 SDP Relaxation
2.2.2 SOCP Relaxation
2.2.3 More Properties on the Constraints of Convex Relaxations
2.2.4 Why Does the SDP or SOCP Relaxation Have an Optimal Solution?
3 A Necessary Condition for Optimal Solutions
3.1 Properties of the Maximin Objective Functions
3.2 Characterization of Optimal Solutions
4 NP-Hardness Results
4.1 The Case That X is a Box
4.2 NP-Hardness of Restricted Problems
5 Convex Relaxation Based Algorithms
5.1 Auxiliary Results
5.2 SDP Relaxation Based Algorithm
5.3 Proof of Theorem 5.4: SDP Relaxation Based Algorithm
5.4 Examples on the Lower Bound
6 Numerics and Examples
6.1 Approximation Bounds: Box Case
6.2 Approximation Bounds: Ball Case
6.3 Where the SDP Relaxation Fails
6.3.1 Necessary Condition for Non-trivial Bounds
6.4 An Alternate Approach
6.5 Numerical Results
7 Conclusions and Future Work
7.1 Results
7.2 Future Work
Bibliography
Appendices
A Matlab Code
B Matlab Output

List of Tables

6.1 Maximal value of m yielding a non-trivial approximation.
6.2 Numerical results for n = 5.
6.3 Numerical results for large n and m.

List of Figures

1.1 Examples of convex sets.
1.2 Examples of nonconvex sets.
1.3 The convex hull of some nonconvex sets.
1.4 A hyperplane with normal vector a divides R^2 into two halfspaces.
1.5 The Euclidean norm cone in R^3.
1.6 A polyhedron.
1.7 A convex function.
2.1 Plots of f(x) for X = [-1,1]^2, with the set S on the left and the set T on the right.

Acknowledgements

I would like to thank my supervisors Jason Loeppky and Shawn Wang for their infinite patience and encouragement. Their passion and enthusiasm for mathematics and statistics have both inspired me and given me the motivation to pursue my Master's degree.
My thanks also go to my thesis committee members Heinz Bauschke, Yong Gao and Warren Hare for their insightful comments and suggestions, which have improved the presentation of my thesis. I would also like to thank Richard Taylor for being the first professor to encourage me to pursue mathematics, and Rebecca Tyson for giving me my first opportunity to pursue mathematical research. Most importantly, I want to acknowledge the work of Paul Tseng. The work in this thesis is based on an unpublished paper that Paul was working on at the time of his disappearance. Recognizing the significance of his work, my supervisors suggested that we take Paul's ideas and carry them to their conclusion, the result of which is this thesis. Without Paul's ideas this thesis would never have existed, and I hope that I have done justice to him in this work.

Dedication

This work is dedicated to my mom, who has always believed in me, and the late (Captain) Bob MacAuley, who was the first person to ever spark my interest in mathematics, physics and statistics.

Chapter 1 Introduction

The fundamental building blocks of all studies in optimization can be found in calculus, linear algebra and analysis. This chapter collects background material and known facts which will be used in later chapters. Sections 1.1, 1.3 and 1.4 give a brief overview of the necessary results from these fields, which are used to build the basic theoretical results of optimization. Section 1.5 then introduces concepts which are required specifically for solving nonconvex optimization problems, like the one presented in this thesis.

1.1 Preliminaries

In this work, the geometry of problems is mainly concerned with Euclidean space and with finding solutions that are subsets of $\mathbb{R}^n$. In this section we discuss the geometry and fundamental concepts of the real vector space $\mathbb{R}^n$. We also recall a few very basic definitions from calculus which are required throughout the rest of this thesis.

We denote the set of $n$-dimensional real vectors by $\mathbb{R}^n$, and for any $x \in \mathbb{R}^n$ we denote the $i$th component of $x$ by $x_i$. Thus, each $x \in \mathbb{R}^n$ is a column vector $x = (x_1, x_2, \ldots, x_n)^T$. For any $x, y \in \mathbb{R}^n$ we define the inner product $\langle \cdot, \cdot \rangle$ by
$$\langle x, y \rangle = x^T y = \sum_{i=1}^{n} x_i y_i. \qquad (1.1)$$

Definition 1.1. A norm $\|\cdot\|$ on $\mathbb{R}^n$ is a function that assigns a scalar $\|x\|$ to every $x \in \mathbb{R}^n$ with the following properties:
1. $\|x\| \geq 0$ for all $x \in \mathbb{R}^n$.
2. $\|x\| = 0$ if and only if $x = 0$.
3. $\|\lambda x\| = |\lambda| \, \|x\|$ for all $\lambda \in \mathbb{R}$ and $x \in \mathbb{R}^n$.
4. $\|x + y\| \leq \|x\| + \|y\|$ for all $x, y \in \mathbb{R}^n$. This is known as the triangle inequality.

Throughout this thesis we will be working exclusively with the Euclidean norm
$$\|x\|_2 = (x^T x)^{1/2} = \Big( \sum_{i=1}^{n} x_i^2 \Big)^{1/2} \qquad (1.2)$$
and so we will drop the subscript, so that $\|x\|$ refers to the Euclidean norm.

Based on the definition of a norm we have the notion of distance. In general, the distance between two vectors $x$ and $y$ is given by
$$d(x, y) = \|x - y\|. \qquad (1.3)$$
The Euclidean distance between two vectors $x, y \in \mathbb{R}^n$ is given by
$$\|x - y\|_2 = \sqrt{(x - y)^T (x - y)} = \Big( \sum_{i=1}^{n} (x_i - y_i)^2 \Big)^{1/2}. \qquad (1.4)$$
As this is the only distance measure used throughout this thesis, we will again drop the subscript.

A useful inequality that we will apply throughout this thesis is the Cauchy-Schwarz inequality:

Fact 1.2 (Cauchy-Schwarz Inequality). [9] Let $x, y \in \mathbb{R}^n$. Then
$$|\langle x, y \rangle| \leq \|x\| \, \|y\|, \qquad (1.5)$$
with equality holding if and only if $x$ and $y$ are collinear, i.e., $x$ is a scalar multiple of $y$ or vice versa.

A basic result of analysis is:

Fact 1.3. All norms on $\mathbb{R}^n$ are equivalent.
More precisely, suppose that $\|\cdot\|_a$ and $\|\cdot\|_b$ are norms on $\mathbb{R}^n$. Then there exist $\alpha, \beta > 0$ such that
$$\alpha \|x\|_a \leq \|x\|_b \leq \beta \|x\|_a \quad \forall x \in \mathbb{R}^n.$$

1.2 Existence of Maximizers and Minimizers

Definition 1.4. The domain of a function $f$, denoted $\operatorname{dom}(f)$, is given by $\operatorname{dom}(f) = \{x \in \mathbb{R}^n \mid f(x) < +\infty\}$.

Example 1.5. The domains of some common functions:
- $\operatorname{dom}(x^2) = \mathbb{R}$
- $\operatorname{dom}(\sqrt{x}) = \{x \in \mathbb{R} \mid x \geq 0\}$
- $\operatorname{dom}(1/x) = \{x \in \mathbb{R} \mid x \neq 0\}$

Definition 1.6. We say that the function $f$ is proper if its domain is nonempty and $f > -\infty$.

Definition 1.7. The $\alpha$-sublevel set of a function $f$ is the set
$$\{x \in \operatorname{dom} f \mid f(x) \leq \alpha\} \qquad (1.6)$$
and the epigraph of $f$ is the set
$$\operatorname{epi}(f) = \{(x, r) \in \mathbb{R}^{n+1} \mid x \in \operatorname{dom} f \text{ and } f(x) \leq r\}. \qquad (1.7)$$

The indicator function associated with $S \subseteq \mathbb{R}^n$ is defined by
$$\iota_S(x) = \begin{cases} 0 & \text{if } x \in S, \\ +\infty & \text{otherwise.} \end{cases}$$
Associated with $f : \mathbb{R}^n \to [-\infty, +\infty]$ we let
$$\operatorname{argmin} f = \{x \in \mathbb{R}^n \mid f(x) = \min f\} \qquad (1.8)$$
and
$$\operatorname{argmax} f = \{x \in \mathbb{R}^n \mid f(x) = \max f\}. \qquad (1.9)$$

Recall that:

Definition 1.8. A set $S \subseteq \mathbb{R}^n$ is compact if it is closed and bounded.

A continuous function on a compact set always has maximizers and minimizers.

Theorem 1.9 (Extreme Value Theorem). [9] Assume that $f : \mathbb{R}^n \to \mathbb{R}$ is continuous and $\mathcal{X} \subseteq \mathbb{R}^n$ is compact. Then
$$\operatorname{argmin}(f + \iota_{\mathcal{X}}) = \{x \in \mathcal{X} \mid f(x) = \min_{\mathcal{X}} f\} \neq \emptyset \qquad (1.10)$$
and
$$\operatorname{argmax}(f + \iota_{\mathcal{X}}) = \{x \in \mathcal{X} \mid f(x) = \max_{\mathcal{X}} f\} \neq \emptyset. \qquad (1.11)$$

Building on the vector space $\mathbb{R}^n$, the next section focuses on real matrices, that is, the set of all $p \times q$ matrices with entries $a_{ij} \in \mathbb{R}$. Given the vector space $\mathbb{R}^n$, the Euclidean norm, and the linear algebra presented in the next section, we can define many optimization problems of interest.

1.3 Matrix Algebra

For any matrix $A$, we denote its $(i,j)$th element by $A_{i,j}$. The transpose of $A$, denoted $A^T$, is defined by $A^T_{i,j} = A_{j,i}$. For an $n \times n$ matrix, we write $A \in \mathbb{R}^{n \times n}$.

Definition 1.10. Let $A$ be a square matrix. We say that $A$ is symmetric if
$$A = A^T. \qquad (1.12)$$
The space of all $n \times n$ symmetric matrices is denoted $S^n$.

Definition 1.11. The $n \times n$ identity matrix is a diagonal matrix whose diagonal elements are equal to 1, and is denoted $I$ or sometimes $I_n$. That is, $I_{j,j} = 1$ for $j = 1, \ldots, n$ and $I_{i,j} = 0$ for all $i \neq j$.

Definition 1.12. The trace of an $n \times n$ square matrix $A$ is defined to be the sum of the elements on the main diagonal. That is,
$$\operatorname{trace}(A) = \operatorname{tr}(A) = \sum_{i=1}^{n} A_{i,i}. \qquad (1.13)$$

Definition 1.13. For $A, B \in S^n$ we define the inner product by
$$\langle A, B \rangle = \operatorname{tr}(AB). \qquad (1.14)$$

Fact 1.14. ([24] p. 29 and p. 45) For any $A, B \in S^n$ we have
$$\operatorname{tr}(AB) = \operatorname{tr}(BA) \qquad (1.15)$$
and
$$\operatorname{tr}(A + B) = \operatorname{tr}(A) + \operatorname{tr}(B). \qquad (1.16)$$

Definition 1.15. A matrix $A \in S^n$ is said to be positive definite if
$$x^T A x > 0 \quad \text{for all } x \in \mathbb{R}^n \setminus \{0\} \qquad (1.17)$$
and we write $A \succ 0$. The matrix $A \in S^n$ is negative definite if
$$x^T A x < 0 \quad \text{for all } x \in \mathbb{R}^n \setminus \{0\} \qquad (1.18)$$
and we write $A \prec 0$.

Definition 1.16. A matrix $A \in S^n$ is said to be positive semidefinite if
$$x^T A x \geq 0 \quad \text{for all } x \in \mathbb{R}^n \qquad (1.19)$$
and we write $A \succeq 0$. The matrix $A \in S^n$ is negative semidefinite if
$$x^T A x \leq 0 \quad \text{for all } x \in \mathbb{R}^n \qquad (1.20)$$
and we write $A \preceq 0$.

Definition 1.17. The set of all positive semidefinite symmetric matrices is denoted $S^n_+$.

Definition 1.18. Let $A$ be any $n \times n$ square matrix. A principal submatrix of $A$ is any $m \times m$ submatrix of $A$ obtained by deleting $n - m$ rows and the corresponding columns.

Example 1.19. Let
$$A = \begin{bmatrix} a & b & c & d \\ e & f & g & h \\ i & j & k & l \\ m & n & o & p \end{bmatrix}.$$
One example of a principal submatrix of $A$ is obtained by deleting the second and third columns and rows:
$$\begin{bmatrix} a & d \\ m & p \end{bmatrix}. \qquad (1.21)$$
Another principal submatrix might delete just the fourth column and row, or the first and third, etc.
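The following small numpy sketch (not part of the thesis; the data is illustrative) checks the trace identities of Fact 1.14 numerically and extracts the principal submatrix of Example 1.19 by deleting the second and third rows and columns:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two random symmetric 4x4 matrices.
M = rng.standard_normal((4, 4))
N = rng.standard_normal((4, 4))
A = (M + M.T) / 2
B = (N + N.T) / 2

# Definition 1.13 / Fact 1.14: <A,B> = tr(AB) = tr(BA), tr(A+B) = tr(A) + tr(B).
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
assert np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))

# Example 1.19: keep rows/columns 1 and 4 (0-based indices 0 and 3),
# i.e., delete the second and third rows and the corresponding columns.
keep = [0, 3]
Ap = A[np.ix_(keep, keep)]
print(Ap)
```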
Proposition 1.20. If $A \in S^n$ is positive semidefinite, then every principal submatrix of $A$ is symmetric positive semidefinite.

Proof. Let $A_p$ be any $p \times p$ principal submatrix of $A$. First, note that since we have deleted corresponding rows and columns of $A$, a symmetric matrix, the submatrix $A_p$ must be symmetric. Now, let $x_p$ be any nonzero vector of size $p$. "Enlarge" $x_p$ to a vector $x$ of size $n$ by inserting zeros into the positions corresponding to the rows (and columns) of $A$ which were deleted to form $A_p$. Then we have that
$$x_p^T A_p x_p = x^T A x \geq 0. \qquad (1.22)$$
Hence, $A_p$ is positive semidefinite.

Corollary 1.21. Let $A \in S^n$ be positive semidefinite. Then for each $i = 1, \ldots, n-1$ we have
$$\begin{bmatrix} A_{i,i} & A_{i,n} \\ A_{i,n} & A_{n,n} \end{bmatrix} \succeq 0. \qquad (1.23)$$
This follows directly from Proposition 1.20.

Definition 1.22. The eigenvalues of an $n \times n$ matrix $A$ are the values $\lambda_1, \ldots, \lambda_n$ (not necessarily distinct) such that $\det(A - \lambda I) = 0$.

Fact 1.23. ([24] p. 278) The determinant and trace of an $n \times n$ matrix $A$ can be expressed in terms of the eigenvalues of $A$ by
$$\det(A) = \prod_{i=1}^{n} \lambda_i, \qquad \operatorname{tr}(A) = \sum_{i=1}^{n} \lambda_i. \qquad (1.24)$$

Fact 1.24. ([24] p. 309) An $n \times n$ matrix $A$ is positive semidefinite if and only if all of the eigenvalues of $A$ are nonnegative.

Proposition 1.25. If $A$ is a $2 \times 2$ symmetric matrix, then $A$ is positive semidefinite if and only if $\det(A) \geq 0$ and $\operatorname{trace}(A) \geq 0$.

Proof. Suppose first that $A$ is positive semidefinite. Then by Fact 1.24 we have $\lambda_1, \lambda_2 \geq 0$, and by Fact 1.23
$$\det(A) = \lambda_1 \lambda_2 \geq 0 \qquad (1.25)$$
and
$$\operatorname{trace}(A) = \lambda_1 + \lambda_2 \geq 0, \qquad (1.26)$$
as required. Now, suppose that $\det(A) \geq 0$ and $\operatorname{trace}(A) \geq 0$. By Fact 1.23 we have
$$\det(A) = \lambda_1 \lambda_2 \geq 0, \qquad (1.27)$$
so that $\lambda_1$ and $\lambda_2$ must have the same sign. Furthermore, by Fact 1.23 we also have
$$\operatorname{trace}(A) = \lambda_1 + \lambda_2 \geq 0, \qquad (1.28)$$
and thus we can conclude that $\lambda_1, \lambda_2 \geq 0$; hence $A$ is positive semidefinite by Fact 1.24.

Fact 1.26 (The Cholesky Factorization, [18] page 154). Let $A \in S^n$ be positive definite. Then $A = L^T L$, where $L$ is a nonsingular upper triangular matrix.
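A numpy sketch (illustrative, not from the thesis) of the three tests just stated: the eigenvalue criterion of Fact 1.24, the determinant/trace criterion of Proposition 1.25, and the Cholesky factorization of Fact 1.26:

```python
import numpy as np

rng = np.random.default_rng(1)

# A Gram matrix M^T M is symmetric positive semidefinite, and positive
# definite when M is nonsingular (which holds almost surely here).
M = rng.standard_normal((3, 3))
A = M.T @ M

# Fact 1.24: A is positive semidefinite iff all eigenvalues are nonnegative.
print(np.linalg.eigvalsh(A))                      # all nonnegative

# Proposition 1.25: a symmetric 2x2 matrix is PSD iff det >= 0 and trace >= 0.
A2 = A[:2, :2]                                    # principal submatrix, PSD by Prop. 1.20
print(np.linalg.det(A2) >= 0, np.trace(A2) >= 0)  # True True

# Fact 1.26: a positive definite A factors as A = U^T U with U upper
# triangular; numpy returns the lower factor L with A = L L^T, so U = L.T.
U = np.linalg.cholesky(A).T
print(np.allclose(U.T @ U, A))                    # True
```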
1.4 Convex Analysis

Recent applications of optimization are focused strongly on finding optimal solutions to nonconvex problems. The approach to solving nonconvex problems, however, often involves creating convex relaxation problems which can then be solved by existing optimization methods. The following section provides a brief overview of convex analysis.

1.4.1 Convex Sets and Convex Functions

Definition 1.27. A set $C \subseteq \mathbb{R}^n$ is affine if, for any $x_1, x_2 \in C$ and $\theta \in \mathbb{R}$, we have
$$\theta x_1 + (1 - \theta) x_2 \in C. \qquad (1.29)$$
That is, an affine set contains the linear combination of any two points in the set, provided the coefficients sum to one. A point of the form $\theta_1 x_1 + \cdots + \theta_k x_k$, where $\theta_1 + \cdots + \theta_k = 1$, is called an affine combination of the points $x_1, \ldots, x_k$.

Definition 1.28. A set $C \subseteq \mathbb{R}^n$ is convex if, for any $x_1, x_2 \in C$ and $0 \leq \theta \leq 1$, we have
$$\theta x_1 + (1 - \theta) x_2 \in C. \qquad (1.30)$$

Graphically, a set $C$ is convex if the line segment between any two points in $C$ is also contained in $C$; see Figure 1.1. Every affine set is convex, since if a set contains the line through any two points, it must obviously contain the line segment between those two points. Given $\theta_i \geq 0$ with $\theta_1 + \cdots + \theta_k = 1$ and points $x_1, \ldots, x_k \in C$, we say that the point $\theta_1 x_1 + \cdots + \theta_k x_k$ is a convex combination of the points $x_1, \ldots, x_k$.

[Figure 1.1: Examples of convex sets.]
[Figure 1.2: Examples of nonconvex sets.]

Definition 1.29. The convex hull of a set $C$ is the set of all convex combinations of points in $C$:
$$\operatorname{conv}(C) = \Big\{ \sum_{i=1}^{k} \theta_i x_i \;\Big|\; x_i \in C,\ \theta_i \geq 0,\ i = 1, \ldots, k,\ \sum_{i=1}^{k} \theta_i = 1,\ k \geq 1 \Big\}. \qquad (1.31)$$
The convex hull of the set $C$ is the smallest convex set that contains $C$. That is, if $D$ is any convex set such that $C \subseteq D$, then $\operatorname{conv}(C) \subseteq D$.

[Figure 1.3: The convex hull of some nonconvex sets.]

Fact 1.30. [22] If $C \subseteq \mathbb{R}^n$ is compact, i.e. closed and bounded, then $\operatorname{conv}(C)$ is compact.

Corollary 1.31. Given $x_1, x_2, \ldots, x_k$ in $\mathbb{R}^n$, $\operatorname{conv}\{x_1, \ldots, x_k\}$ is compact.

Definition 1.32. A set $K \subseteq \mathbb{R}^n$ is called a cone if, for any $x \in K$ and $\theta \geq 0$, we have $\theta x \in K$. A set $K$ is a convex cone if it is convex and a cone; that is, for any $x_1, x_2 \in K$ and $\theta_1, \theta_2 \geq 0$, we have
$$\theta_1 x_1 + \theta_2 x_2 \in K. \qquad (1.32)$$

Example 1.33.
- $\mathbb{R}_+ = \{x \mid x \geq 0\}$ is a convex cone in $\mathbb{R}$.
- $\mathbb{R}^n_+ = \{(x_1, \ldots, x_n) \mid x_1 \geq 0, \ldots, x_n \geq 0\}$ is a convex cone in $\mathbb{R}^n$.
- $S^n_+ = \{A \mid A \text{ is an } n \times n \text{ matrix},\ A \text{ is positive semidefinite}\}$ is a convex cone.

A point of the form $\theta_1 x_1 + \cdots + \theta_k x_k$, where $\theta_i \geq 0$ for $i = 1, \ldots, k$, is called a conic combination of the points $x_1, \ldots, x_k$.

Definition 1.34. Let $K \subseteq \mathbb{R}^n$ be a cone. The dual cone of $K$, denoted $K^*$, is the set
$$K^* = \{y \in \mathbb{R}^n \mid y^T x \geq 0 \ \forall x \in K\}. \qquad (1.33)$$

Definition 1.35. A hyperplane is a set of the form
$$\{x \in \mathbb{R}^n \mid a^T x = b\} \qquad (1.34)$$
where $a \in \mathbb{R}^n$, $a \neq 0$ and $b \in \mathbb{R}$.

Analytically, a hyperplane is the solution set of a nontrivial linear equation. Geometrically, the hyperplane $H = \{x \mid a^T x = b\}$ can be interpreted as the set of points with constant inner product to a given vector $a$; that is, a hyperplane with normal vector $a$. Note that a hyperplane is affine (and thus convex).

Definition 1.36. A (closed) halfspace is a set of the form
$$\{x \in \mathbb{R}^n \mid a^T x \leq b\} \qquad (1.35)$$
where $a \neq 0$.

A hyperplane divides $\mathbb{R}^n$ into two halfspaces (see Figure 1.4). Analytically, a halfspace is the solution set of one nontrivial linear inequality. Halfspaces are convex, but not affine.

[Figure 1.4: A hyperplane with normal vector $a$ divides $\mathbb{R}^2$ into the two halfspaces $a^T x \geq b$ and $a^T x \leq b$.]

Some frequently seen convex sets are listed below:
- The closed norm ball in $\mathbb{R}^n$ with center $x_c$ and radius $r$ is given by
$$B_r[x_c] = \{x \in \mathbb{R}^n \mid \|x - x_c\| \leq r\}, \qquad (1.36)$$
where $r > 0$ and $\|\cdot\|$ denotes any norm on $\mathbb{R}^n$.
- The norm cone associated with any norm $\|\cdot\|$ is given by
$$\{(x, t) \mid \|x\| \leq t,\ x \in \mathbb{R}^n,\ t \in \mathbb{R}\} \subseteq \mathbb{R}^{n+1}. \qquad (1.37)$$
The norm cone is a convex cone. See Figure 1.5 for an example of the norm cone associated with the Euclidean norm.
- A polyhedron is the solution set of a finite number of linear equalities and inequalities, and is of the form
$$\{x \in \mathbb{R}^n \mid a_j^T x \leq b_j,\ j = 1, \ldots, m;\ c_k^T x = d_k,\ k = 1, \ldots, p\}. \qquad (1.38)$$
Graphically, a polyhedron is the intersection of a finite number of halfspaces and hyperplanes.

Note that the norm cone associated with the Euclidean norm is referred to as the second-order cone or Lorentz cone.

Definition 1.37. The Lorentz cone is the cone in $\mathbb{R}^n$ defined as
$$L = \Big\{ x \in \mathbb{R}^n \;\Big|\; x_n^2 \geq \sum_{i=1}^{n-1} x_i^2,\ x_n \geq 0 \Big\}. \qquad (1.39)$$

[Figure 1.5: The Euclidean norm cone in $\mathbb{R}^3$.]
[Figure 1.6: A polyhedron, shown as the intersection of halfspaces with normal vectors $a_1, \ldots, a_6$.]

Definition 1.38. A function $f : \mathbb{R}^n \to (-\infty, \infty]$ is said to be a convex function if $\operatorname{dom} f$ is a convex set and, for all $x, y \in \operatorname{dom} f$ and $0 < \theta < 1$, we have
$$f(\theta x + (1 - \theta) y) \leq \theta f(x) + (1 - \theta) f(y). \qquad (1.40)$$

Graphically, a function $f$ is convex if the line segment joining $(x, f(x))$ and $(y, f(y))$ lies above the graph of $f$ (see Figure 1.7). A function $f$ is strictly convex if strict inequality holds in (1.40) whenever $x \neq y$ and $0 < \theta < 1$.
We say that the function $f$ is concave if $-f$ is convex, and strictly concave if $-f$ is strictly convex.

[Figure 1.7: A convex function; the chord joining $(x, f(x))$ and $(y, f(y))$ lies above the graph.]

Remark 1.39. A function $f$ is convex if and only if $\operatorname{epi}(f)$ is a convex set.

Fact 1.40. Every norm is a convex function on $\mathbb{R}^n$.

Proof. Let $0 \leq \theta \leq 1$, set $f(x) = \|x\|$, and consider
$$f(\theta x + (1-\theta) y) = \|\theta x + (1-\theta) y\| \leq \|\theta x\| + \|(1-\theta) y\| = \theta \|x\| + (1-\theta) \|y\| = \theta f(x) + (1-\theta) f(y). \qquad (1.41)\text{--}(1.44)$$
Hence, $f$ is convex. Alternatively, simply note that the epigraph of the function $f(x) = \|x\|$ is given by
$$\operatorname{epi}(f) = \{(x, t) \mid \|x\| \leq t\}, \qquad (1.45)$$
which is a convex set in $\mathbb{R}^{n+1}$ (the norm cone), and so by Remark 1.39 $f$ is a convex function.

Definition 1.41. Let $f : \mathbb{R}^n \to \mathbb{R}^m$ and $x \in \operatorname{int} \operatorname{dom} f$. If $f$ is differentiable at $x$, its Jacobian matrix $\nabla f(x) \in \mathbb{R}^{m \times n}$ is given by
$$\nabla f(x)_{ij} = \frac{\partial f_i(x)}{\partial x_j} \qquad (1.46)$$
for all $i = 1, \ldots, m$, $j = 1, \ldots, n$.

Definition 1.42. For a real-valued function $f : \mathbb{R}^n \to \mathbb{R}$, if $f$ is twice differentiable at $x$, its Hessian matrix $\nabla^2 f(x) \in \mathbb{R}^{n \times n}$ is given by
$$\nabla^2 f(x)_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j} \qquad (1.47)$$
for all $i, j = 1, \ldots, n$.

Convexity of a differentiable function can also be established using the following conditions:

Fact 1.43 (First-order convexity condition). Suppose $f$ is differentiable on an open set $O \subseteq \mathbb{R}^n$. Then $f$ is convex on $O$ if and only if $O$ is convex and
$$f(y) \geq f(x) + \nabla f(x)^T (y - x) \qquad (1.48)$$
for all $x, y \in O$ ([22] page 46).

Fact 1.44 (Second-order convexity condition). Assume $f$ is twice differentiable on an open set $O \subseteq \mathbb{R}^n$. Then $f$ is convex on $O$ if and only if $O$ is convex and $\nabla^2 f$ is positive semidefinite; i.e.,
$$\nabla^2 f(x) \succeq 0 \qquad (1.49)$$
for every $x \in O$ ([22] page 46).

Fact 1.45. The function
$$f_i(x) = w_i \|x - x^i\|^2 \qquad (1.50)$$
is a convex function on $\mathbb{R}^n$, where $w_i > 0$ is a real constant and $x^i \in \mathbb{R}^n$ is fixed. This follows from the second-order convexity condition and the fact that $\nabla^2 f_i(x) = 2 w_i I \succeq 0$.

Definition 1.46. A vector $v \in \mathbb{R}^n$ is said to be a subgradient of the convex function $f$ at the point $x$ if
$$f(y) \geq f(x) + \langle v, y - x \rangle \quad \forall y \in \operatorname{dom} f. \qquad (1.51)$$
The set of all subgradients of $f$ at $x$ is called the subdifferential of $f$ at $x$, and is denoted $\partial f(x)$.

A subdifferential mapping, being a surrogate for the derivative, is fundamentally important in convex analysis and optimization.

Definition 1.47. A vector $x^*$ is said to be normal to the convex set $C$ at the point $x \in C$ if
$$\langle y - x, x^* \rangle \leq 0 \quad \forall y \in C. \qquad (1.52)$$
The set of all vectors $x^*$ that are normal to $C$ at the point $x$ is called the normal cone to $C$ at $x$, and is denoted $N_C(x)$. If $x \notin C$, then $N_C(x) = \emptyset$.

Recall that, for $C \subseteq \mathbb{R}^n$, $x$ is an interior point of $C$ if there exists $\epsilon > 0$ such that $B_\epsilon(x) \subseteq C$. The set of interior points of $C$ will be denoted by $\operatorname{int}(C)$.

Example 1.48. Let $C \subseteq \mathbb{R}^n$ be convex. Immediately from the definition of $\partial \iota_C$, we have $\partial \iota_C = N_C$. Moreover, if $x \in \operatorname{int}(C)$, then $\partial \iota_C(x) = \{0\}$, i.e. $N_C(x) = \{0\}$.

Definition 1.49. Let $f : \mathbb{R}^n \to [-\infty, \infty]$, and let $x$ be a point such that $|f(x)| < \infty$. The one-sided directional derivative of $f$ at $x$ with respect to a vector $y$ is defined to be the limit
$$f'(x; y) := \lim_{\tau \downarrow 0} \frac{f(x + \tau y) - f(x)}{\tau} \qquad (1.53)$$
if it exists (here $\pm \infty$ are allowed as limits).

Definition 1.50. Let $f : \mathbb{R}^n \to [-\infty, \infty]$, and let $x$ be a point such that $|f(x)| < \infty$. We say that $f$ is differentiable at $x$ if and only if there exists a vector $x^*$ with the property
$$\lim_{z \to x} \frac{f(z) - f(x) - \langle x^*, z - x \rangle}{\|z - x\|} = 0. \qquad (1.54)$$
Such an $x^*$, if it exists, is called the gradient of $f$ at $x$ and is denoted $\nabla f(x)$.

Fact 1.51. If $f$ is differentiable at $x$, then $f'(x; y)$ exists and is a linear function of $y$ for all $y$. In particular,
$$f'(x; y) = \langle \nabla f(x), y \rangle \quad \forall y \in \mathbb{R}^n. \qquad (1.55)$$
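Since the functions $f_i$ of Fact 1.45 are central to the rest of the thesis, a quick numerical sanity check may help. The sketch below is illustrative only (the gradient formula $\nabla f_i(x) = 2 w_i (x - x^i)$ is proved later, in Fact 3.3); it compares a finite-difference gradient with the analytic one and displays the eigenvalues of the constant Hessian $2 w_i I$ from Fact 1.45:

```python
import numpy as np

rng = np.random.default_rng(2)
n, w = 4, 2.5
xi = rng.standard_normal(n)            # the fixed anchor point x^i
f = lambda x: w * np.dot(x - xi, x - xi)

x = rng.standard_normal(n)
grad_analytic = 2 * w * (x - xi)       # gradient of w * ||x - x^i||^2

# Central finite differences against the analytic gradient.
eps = 1e-6
grad_fd = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                    for e in np.eye(n)])
assert np.allclose(grad_fd, grad_analytic, atol=1e-5)

# Second-order convexity condition (Fact 1.44): the Hessian 2*w*I is PSD.
print(np.linalg.eigvalsh(2 * w * np.eye(n)))   # all equal to 2w > 0
```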
Fact 1.52. [2, Proposition 2.3.2] Let $\bar{x} \in \operatorname{int}(C)$, $C \subseteq \mathbb{R}^n$. Suppose that continuous functions $g_0, g_1, \ldots, g_m : C \to \mathbb{R}$ are differentiable at $\bar{x}$, that $g$ is the max function
$$g(x) = \max_{i=0,\ldots,m} g_i(x), \qquad (1.56)$$
and define the index set $I = \{i \mid g_i(\bar{x}) = g(\bar{x})\}$. Then for all directions $d$ in $\mathbb{R}^n$, the directional derivative of $g$ is given by
$$g'(\bar{x}; d) = \max_{i \in I} \{\langle \nabla g_i(\bar{x}), d \rangle\}. \qquad (1.57)$$

It turns out that the directional derivative of a convex function can be used to compute its subdifferential.

Fact 1.53. [19, Theorem 23.2] Let $f : \mathbb{R}^n \to (-\infty, +\infty]$ be convex and $x \in \operatorname{dom} f$. Then $f'(x; y)$ exists for every $y \in \mathbb{R}^n$. Moreover,
$$\partial f(x) = \{v \in \mathbb{R}^n \mid f'(x; y) \geq \langle v, y \rangle \ \forall y \in \mathbb{R}^n\}.$$

1.4.2 Minimizers or Maximizers of Convex Functions

Theorem 1.54. [19, page 264] For a convex function $f$, we have that
$$x \in \operatorname{argmin} f \iff 0 \in \partial f(x). \qquad (1.58)$$

Remark 1.55. We say that $x \in D$ is an extremal point of $D$ if
$$x = \theta y + (1 - \theta) z \qquad (1.59)$$
with $0 < \theta < 1$, $y, z \in D$, implies that $y = z$.

Theorem 1.56. [22, Theorem 32.3] Let $f : \mathbb{R}^n \to \mathbb{R}$ be convex, and $K \subseteq \mathbb{R}^n$ be compact and convex. Then
$$\max_{x \in K} f(x) = \max_{x \in \operatorname{ext}(K)} f(x), \qquad (1.60)$$
where $\operatorname{ext}(K)$ denotes the set of extremal points of $K$.

1.5 Nonsmooth Analysis

Since maximin problems are nonconvex in general, in Subsections 1.5.1 and 1.5.2 below we introduce the concept of a Lipschitz function and nonsmooth analysis. They are used in Section 3.2 to derive necessary optimality conditions for the solutions of the nonconvex maximin optimization problem.

1.5.1 Locally Lipschitz Functions

Definition 1.57. Given a function $f : \mathbb{R}^n \to \mathbb{R}$, we say that $f$ is locally Lipschitz continuous at a point $x_0$ if there exist $\kappa > 0$ and $\delta > 0$ such that whenever $\|x - x_0\| < \delta$ and $\|y - x_0\| < \delta$ we have
$$|f(x) - f(y)| \leq \kappa \|x - y\|. \qquad (1.61)$$

Fact 1.58. If $f$ is locally Lipschitz at $x_0$, then $-f$ is locally Lipschitz at $x_0$.

The above fact follows directly from
$$|-f(x) - (-f(y))| = |-(f(x) - f(y))| = |f(x) - f(y)|. \qquad (1.62)$$

Fact 1.59. If $f_1, f_2$ are locally Lipschitz at $x_0$, then
$$\max\{f_1, f_2\} \qquad (1.63)$$
and
$$\min\{f_1, f_2\} \qquad (1.64)$$
are locally Lipschitz at $x_0$.

Proof. Since $f_1$ is locally Lipschitz at $x_0$, there exist $\kappa_1 > 0$ and $\delta_1 > 0$ such that
$$|f_1(x) - f_1(y)| \leq \kappa_1 \|x - y\| \qquad (1.65)$$
whenever $\|x - x_0\| < \delta_1$ and $\|y - x_0\| < \delta_1$. Similarly, there exist $\kappa_2 > 0$ and $\delta_2 > 0$ such that
$$|f_2(x) - f_2(y)| \leq \kappa_2 \|x - y\| \qquad (1.66)$$
whenever $\|x - x_0\| < \delta_2$ and $\|y - x_0\| < \delta_2$. Set $\delta = \min\{\delta_1, \delta_2\}$ and $\kappa = \max\{\kappa_1, \kappa_2\}$, so that whenever $\|x - x_0\| < \delta$ and $\|y - x_0\| < \delta$ we have
$$-\kappa \|x - y\| \leq f_1(x) - f_1(y) \leq \kappa \|x - y\|, \qquad (1.67)$$
$$-\kappa \|x - y\| \leq f_2(x) - f_2(y) \leq \kappa \|x - y\|. \qquad (1.68)$$
From this we have
$$f_1(x) - \max\{f_1(y), f_2(y)\} \leq \kappa \|x - y\|, \qquad (1.69)$$
$$f_2(x) - \max\{f_1(y), f_2(y)\} \leq \kappa \|x - y\|. \qquad (1.70)$$
Combining the above two inequalities we have
$$\max\{f_1(x), f_2(x)\} - \max\{f_1(y), f_2(y)\} = \max\big\{ f_1(x) - \max\{f_1(y), f_2(y)\},\ f_2(x) - \max\{f_1(y), f_2(y)\} \big\} \leq \kappa \|x - y\|. \qquad (1.71)$$
Similarly,
$$f_1(y) - f_1(x) \leq \kappa \|x - y\| \quad \text{and} \quad f_2(y) - f_2(x) \leq \kappa \|x - y\|, \qquad (1.72)\text{--}(1.74)$$
so that
$$f_1(y) - \max\{f_1(x), f_2(x)\} \leq \kappa \|x - y\|, \qquad (1.75)$$
$$f_2(y) - \max\{f_1(x), f_2(x)\} \leq \kappa \|x - y\|, \qquad (1.76)$$
and thus
$$\max\{f_1(y), f_2(y)\} - \max\{f_1(x), f_2(x)\} \leq \kappa \|x - y\|, \quad \text{i.e.} \quad \max\{f_1(x), f_2(x)\} - \max\{f_1(y), f_2(y)\} \geq -\kappa \|x - y\|. \qquad (1.77)$$
Hence, whenever $\|x - x_0\| < \delta$ and $\|y - x_0\| < \delta$ we have that
$$|\max\{f_1(y), f_2(y)\} - \max\{f_1(x), f_2(x)\}| \leq \kappa \|x - y\|, \qquad (1.78)$$
which shows that $\max\{f_1, f_2\}$ is locally Lipschitz at $x_0$. Now,
$$\min\{f_1, f_2\} = -\max\{-f_1, -f_2\}. \qquad (1.79)$$
Using Fact 1.58 we know that $-f_1, -f_2$ are both locally Lipschitz at $x_0$. Then $\max\{-f_1, -f_2\}$ is locally Lipschitz at $x_0$ by the above argument. Applying Fact 1.58 once more, we have that $-\max\{-f_1, -f_2\}$, and thus $\min\{f_1, f_2\}$, is locally Lipschitz at $x_0$.
Remark 1.60. Fact 1.59 can also be seen by using
$$\max\{f_1, f_2\} = \frac{f_1 + f_2}{2} + \frac{|f_1 - f_2|}{2}, \qquad \min\{f_1, f_2\} = \frac{f_1 + f_2}{2} - \frac{|f_1 - f_2|}{2}.$$

Fact 1.61. Assume that $f_i$ is locally Lipschitz at $x_0$ for $i = 1, \ldots, m$. Then
$$\max\{f_1, \ldots, f_m\} \qquad (1.80)$$
and
$$\min\{f_1, \ldots, f_m\} \qquad (1.81)$$
are locally Lipschitz at $x_0$.

Proof. This follows directly from applying induction to the results of Fact 1.59.

1.5.2 Subdifferentials and Critical Points

Let $O \subseteq \mathbb{R}^n$ be an open set. For a locally Lipschitz function $f : O \to \mathbb{R}$ and $x \in O$, we define its Clarke directional derivative with respect to $v \in \mathbb{R}^n$ by
$$f^o(x; v) = \limsup_{y \to x,\ t \downarrow 0} \frac{f(y + tv) - f(y)}{t} \qquad (1.82)$$
and the Clarke subdifferential
$$\partial^o f(x) = \{x^* \in \mathbb{R}^n \mid f^o(x; v) \geq \langle x^*, v \rangle \ \forall v \in \mathbb{R}^n\}. \qquad (1.83)$$
The Clarke subdifferential plays a fundamental role in optimization problems involving nonconvex functions. Some key properties are:

Theorem 1.62. Let $f, g : O \subseteq \mathbb{R}^n \to \mathbb{R}$ be locally Lipschitz and $x \in O$. Then
1. $\partial^o f(x)$ is compact and convex ([7] page 40).
2. $\partial^o (f + g)(x) \subseteq \partial^o f(x) + \partial^o g(x)$ ([7] Proposition 2.3.3).
3. $\partial^o (-f)(x) = -\partial^o f(x)$ ([7] Proposition 2.3.1).
4. If $f$ is a convex function on $O$, then $\partial^o f = \partial f$ ([7] Proposition 2.2.7).
5. If $f$ is continuously differentiable on $O$, then $\partial^o f(x) = \{\nabla f(x)\}$ ([7] Proposition 2.2.7).

Theorem 1.63. [23, Proposition 7.4.7] If $f$ is locally Lipschitz about $\bar{x}$ and $\bar{x}$ is a local minimizer or a local maximizer of $f$, then $0 \in \partial^o f(\bar{x})$.

Theorem 1.64. [23, Proposition 7.4.7] Let $I$ be a finite set, and for all $i \in I$ let $g_i : \mathbb{R}^n \to \mathbb{R}$ be locally Lipschitz continuous around $\bar{x}$. Let $g$ be the max-function defined as
$$g(x) = \max\{g_i(x) \mid i \in I\} \qquad (1.84)$$
and define the active index set at $\bar{x}$ by
$$I(\bar{x}) = \{i \in I \mid g_i(\bar{x}) = g(\bar{x})\}. \qquad (1.85)$$
Suppose that each $g_i$ is differentiable at $\bar{x}$. Then the Clarke subdifferential of $g$ at $\bar{x}$ is given by
$$\partial^o g(\bar{x}) = \operatorname{conv}\{\nabla g_i(\bar{x}) \mid i \in I(\bar{x})\}. \qquad (1.86)$$

1.6 Some Convex Optimization Problems

In this section we give the general and standard forms of a semidefinite programming optimization problem (SDP) and a second-order cone programming problem (SOCP), which we will use in Section 2.2. For a convex cone $K \subseteq \mathbb{R}^p$ and $x \in \mathbb{R}^p$, we write $x \succeq_K 0$ if $x \in K$.

Definition 1.65. ([4]) A conic form program (or conic program) is an optimization problem which has a linear objective function and one inequality constraint function:
$$\min \ c^T x \quad \text{s.t.} \quad Fx + G \preceq_K 0, \quad Ax = b, \qquad (1.87)$$
where $c, x \in \mathbb{R}^p$, $A \in \mathbb{R}^{m \times p}$, $b \in \mathbb{R}^m$, $F \in S^p$, $G \in \mathbb{R}^p$ and $K$ is a convex cone in $\mathbb{R}^p$.

Definition 1.66. When $K$ is $S^n_+$, the cone of positive semidefinite $n \times n$ matrices, the associated conic form problem is called a semidefinite program, and has the form
$$\min \ c^T x \quad \text{s.t.} \quad x_1 F_1 + \cdots + x_m F_m + G \preceq 0, \quad Ax = b, \qquad (1.88)$$
where $c, x \in \mathbb{R}^m$, $G, F_1, \ldots, F_m \in S^n$ and $A \in \mathbb{R}^{p \times m}$. A standard SDP in inequality form is
$$\min \ c^T x \quad \text{s.t.} \quad x_1 A_1 + \cdots + x_m A_m \preceq B, \qquad (1.89)$$
where $A_1, \ldots, A_m, B \in S^n$ and $c, x \in \mathbb{R}^m$. A standard SDP is in the following form:
$$\min \ \langle C, X \rangle \quad \text{s.t.} \quad \langle A_i, X \rangle = b_i,\ i = 1, \ldots, m, \quad X \succeq 0, \qquad (1.90)$$
where $C, X, A_i \in S^n$ and $b_i \in \mathbb{R}$. One can show that (1.90) has a dual given by
$$\max \ b^T x \quad \text{s.t.} \quad x_1 A_1 + \cdots + x_m A_m \preceq C, \qquad (1.91)$$
which is in the form of (1.89); see [4, Example 5.11], [29].

Definition 1.67. ([4]) The second-order cone program (SOCP) is of the form
$$\min \ f^T x \quad \text{s.t.} \quad \|A_i x + b_i\| \leq c_i^T x + d_i,\ i = 1, \ldots, m, \quad Fx = g, \qquad (1.92)$$
where $f, x \in \mathbb{R}^n$, $A_i \in \mathbb{R}^{n \times n}$ and $F \in \mathbb{R}^{p \times n}$.

Constraints of the form $\|Ax + b\| \leq c^T x + d$ are called second-order cone constraints because the affinely defined variables $u = Ax + b$ and $t = c^T x + d$ must belong to the second-order cone.
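As a concrete instance of Definition 1.67, the following cvxpy sketch (cvxpy is one of several modeling tools; this example is not from the thesis) solves a tiny SOCP whose single second-order cone constraint, taking $A = I$, $b = 0$, $c = 0$ and $d = 1$, is the unit Euclidean ball:

```python
import cvxpy as cp
import numpy as np

# minimize f^T x  subject to ||A x + b|| <= c^T x + d
# With A = I, b = 0, c = 0, d = 1 the constraint is the unit ball ||x|| <= 1.
n = 2
f = np.array([1.0, 1.0])
x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(f @ x), [cp.norm(x, 2) <= 1.0])
prob.solve()

print(prob.value)   # optimal value -sqrt(2)
print(x.value)      # attained at x = -(1/sqrt(2)) * (1, 1)
```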
Note that SOCP is a special case of SDP, by using
$$\|A_i x + b_i\| \leq c_i^T x + d_i \iff \begin{bmatrix} (c_i^T x + d_i) I & A_i x + b_i \\ (A_i x + b_i)^T & c_i^T x + d_i \end{bmatrix} \succeq 0.$$
Further reading on SDP and SOCP may be found in [32], [27], [29].

1.7 NP-Complete Problems

In this section we will give a brief overview of the concepts of complexity theory and NP-completeness. In order to begin discussing the complexity of problems, however, we first need a few basic definitions.

Definition 1.68. [12] A problem is a general question to be answered, usually containing several parameters with values left unspecified. A problem is described by giving a general description of all of its parameters, and a statement of what properties the answer (or solution) is required to satisfy. An instance of a problem is obtained by specifying particular values for all the problem parameters.

Definition 1.69. [12] An algorithm is a general, step-by-step procedure for solving a problem.

For concreteness, we often think of algorithms as being computer programs written in a precise computer language. We say that an algorithm solves a problem $\Pi$ if that algorithm can be applied to any instance $I$ of $\Pi$ and is guaranteed to provide a solution for that instance.

Definition 1.70. [12] The input length for an instance $I$ of a problem $\Pi$ is defined to be the number of symbols in the description of $I$ obtained from the encoding scheme for $\Pi$.

Definition 1.71. [12] The time complexity function for an algorithm expresses its time requirement by giving, for each possible input length, the largest amount of time needed by the algorithm to solve a problem instance of that size.

Note that by this definition, this function is not well-defined until the encoding scheme to be used for determining input length and the computer model for a particular problem are fixed. However, the particular choices made for these have little effect on the broad distinctions made in the theory of NP-completeness ([12] p. 6).

1.7.1 The Theory of NP-Completeness

In this section we discuss measures for the complexity of algorithms, and introduce the concept of NP-complete problems.

Definition 1.72. [12] We say that a function $f(n)$ is $O(g(n))$ whenever there exists a constant $c$ such that
$$|f(n)| \leq c \cdot |g(n)| \qquad (1.93)$$
for all values of $n \geq 0$.

Definition 1.73. [12] A polynomial time algorithm is defined to be one whose time complexity function is $O(p(n))$ for some polynomial function $p$, where $n$ is used to denote the input length. Any algorithm whose time complexity function cannot be so bounded is called an exponential time algorithm.

Note that this definition includes certain non-polynomial time complexity functions (like $n^{\log n}$) which are not normally regarded as exponential functions.

Definition 1.74. [12] A problem is called intractable if no polynomial time algorithm can possibly solve it.

The idea of polynomial time reducibility is integral to the theory of NP-completeness, and refers to reductions for which the transformation can be executed by a polynomial time algorithm. This is significant, since if we have a polynomial time reduction from one problem to another, then any polynomial time algorithm for the second problem can be converted into a corresponding polynomial time algorithm for the first problem.

The simplest way to explain the idea of NP-completeness is as follows. First, define two classes of problems:
1. The class of problems with solution algorithms bounded by a polynomial of fixed degree, P [8].
2. The class of problems with solutions that are verifiable in polynomial time, NP. That is, given a solution $S$ to problem $\Pi$, a polynomial time algorithm exists to verify that $S$ is in fact a solution of $\Pi$ [31].

Fact 1.75. [8] $P \subseteq NP$.

One of the most important open questions in theoretical computer science is whether or not P = NP. The notation NP stands for "non-deterministic polynomial time", since the original definition of the class NP is the class of problems which can be solved by a non-deterministic Turing machine in polynomial time [31].

A problem $P_i$ is NP-hard if every problem in NP is polynomially reducible to $P_i$. In other words, if an NP-hard problem $P_i$ can be solved in polynomial time, then P = NP. A problem $P_i$ is NP-complete if it is both NP-hard and an element of NP. In essence, the NP-complete problems are the "hardest" problems in NP [12]. The question of whether NP-complete problems are intractable is still a major open problem in mathematics and computer science (see [31] and references therein).

How does one show that a problem $P_i$ is NP-hard? One often uses a reduction argument:
1. Confirm that $P_i$ is a decision problem.
2. Take one problem $P_j$ that is already known to be NP-hard or NP-complete.
3. Show that the problem $P_j$ is polynomially reducible to $P_i$.
4. This shows that $P_i$ is NP-hard.
This is what we will use in the thesis.

1.7.2 Some NP-Hard Problems

Based on the previous section, we see that the simplest way to prove that a problem is NP-complete is to reduce to it, in polynomial time, another problem which is known to be NP-complete (or NP-hard). In order for this to be possible, we require a compendium of problems which are already known to be NP-complete (or NP-hard). There are many books and papers including lists of problems that have been proven to be NP-complete or NP-hard (for example, see [12], [17], [6]). Here, we provide two examples of NP-hard problems that are integral to the work of this thesis.

Fact 1.76. [6, MP1] It is NP-hard to determine the solvability of the integer linear program
$$By = b, \quad y \in \{-1, 1\}^q, \qquad (1.94)$$
with $B \in \mathbb{Z}^{p \times q}$, $b \in \mathbb{Z}^p$.

Given an undirected graph $G = (V, E)$ with weights $w_{ij} = w_{ji} = 1$ on the edges $(i, j) \in E$, the Max Cut problem is that of finding the set of vertices $S \subseteq V$ that maximizes the weight of the edges in the cut $(S, \bar{S})$. Put $w_{ij} = 0$ if $(i, j) \notin E$. The weight of a cut $(S, \bar{S})$ is
$$w(S, \bar{S}) = \sum_{i \in S,\ j \notin S} w_{ij},$$
and the Max Cut problem is to
$$\max_{S \subseteq V} w(S, \bar{S}).$$
Assume that the graph has $q$ nodes, i.e., $|V| = q$. Define two $q \times q$ symmetric matrices, $A = (a_{ij})$ and $W$, by setting $a_{ij} = a_{ji} = w_{ij}$ and
$$W = \operatorname{Diag}(Ae) - A \in \mathbb{Z}^{q \times q},$$
where $e = (1, \ldots, 1)^T \in \mathbb{R}^q$.

Definition 1.77. (See [21, page 309], [27], [14]) The Max Cut problem on a graph with $q$ nodes may be formulated as
$$\max_{y \in \{-1,1\}^q} y^T W y, \qquad (1.95)$$
where $W \in \mathbb{Z}^{q \times q}$ and $W \succeq 0$.

The Max Cut problem has long been known to be NP-complete.

Fact 1.78. ([14, page 1116], [21]) The Max Cut problem (1.95) is NP-hard.

For further details on complexity theory and NP-hard problems, see [12].
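To make Definition 1.77 concrete, the numpy sketch below (illustrative data, not from the thesis) builds $W = \operatorname{Diag}(Ae) - A$ for a small path graph and maximizes $y^T W y$ by brute force. A short calculation shows $y^T W y = \sum_{i,j} a_{ij}(1 - y_i y_j) = 4\, w(S, \bar{S})$ for the cut $S = \{i \mid y_i = 1\}$, so the maximizing $y$ encodes a maximum cut:

```python
import numpy as np
from itertools import product

# Path graph on q = 4 nodes: edges (0,1), (1,2), (2,3), all with weight 1.
q = 4
A = np.zeros((q, q), dtype=int)
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1

# W = Diag(Ae) - A, as above Definition 1.77.
e = np.ones(q, dtype=int)
W = np.diag(A @ e) - A

# Brute force over y in {-1,1}^q; y^T W y = 4 * w(S, S-bar).
best = max(product([-1, 1], repeat=q),
           key=lambda y: np.array(y) @ W @ np.array(y))
y = np.array(best)
print(y, (y @ W @ y) // 4)   # cut value 3: alternating signs cut every edge
```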
1.7.3 The Complexity of Algorithms for Solving SDP and SOCP

To some extent, SDP is very similar to linear programming. Given any $\epsilon > 0$, SDPs can be solved within an additive error of $\epsilon$ in polynomial time ($\epsilon$ is part of the input, so the running time dependence on $\epsilon$ is polynomial in $\log(1/\epsilon)$) [14]. This can be done through the ellipsoid algorithm and other polynomial algorithms for convex programming, as well as interior point methods. See [14], [15], [1].

If the feasible region is bounded, then SOCPs can also be solved to arbitrary fixed precision in polynomial time [13]. For the current state of the art in SDP, see [32]. Currently available software packages for solving SDPs and SOCPs include SeDuMi (self-dual minimization) [25], CVX (a modeling system for disciplined convex programming) [11] and SDPT3.

Chapter 2 The Weighted Maximin Location Problem

2.1 General Problem and Motivation

Definition 2.1. The problem of finding a point $x$ in a closed set $\mathcal{X} \subseteq \mathbb{R}^n$ ($n \geq 1$) that is furthest from a given set of points $x^1, \ldots, x^m$ in $\mathbb{R}^n$, in a weighted maximin sense, is given by
$$\nu_p := \max_{x \in \mathcal{X}} f(x) \quad \text{with} \quad f(x) := \min_{i=1,\ldots,m} w_i \|x - x^i\|^2, \qquad (2.1)$$
where $w_i > 0$. This is known as the weighted maximin dispersion problem.

In general $\|\cdot\|$ denotes the Euclidean norm; in the case that we consider another norm we will make the distinction explicit.

In the equal-weight case $w_1 = \cdots = w_m$, (2.1) has the geometric interpretation of finding the largest Euclidean sphere with center in $\mathcal{X}$ enclosing no given point. To illustrate, let $B_\rho(x) = \{y \in \mathbb{R}^n \mid \|y - x\| < \rho\}$ and $f(x) = \min_{i=1,\ldots,m} \|x - x^i\|^2$.

Example 2.2. Set $r = \sqrt{f(x)}$. Then $B_r(x) \cap \{x^1, \ldots, x^m\} = \emptyset$.

Proof. Indeed, $r^2 \leq \|x - x^i\|^2$ for all $i = 1, \ldots, m$ by definition. If $x^{i_0} \in B_r(x)$, then $\|x - x^{i_0}\| < r$, so $\|x - x^{i_0}\|^2 < r^2$. Thus we have
$$r^2 \leq \|x - x^{i_0}\|^2 < r^2, \qquad (2.2)$$
which is a contradiction. Hence, $B_r(x) \cap \{x^1, \ldots, x^m\} = \emptyset$.

Our interest in the maximin distance problem arises from deterministic function modeling in statistics [16]. We begin with a deterministic computer code which models a complex physical system. It is assumed that the code is very costly to run, and so our goal is to build a computationally cheap surrogate of the computer code. The statistical approach to this problem is to place a prior belief on the function space, and then use the weighted maximin distance criterion to choose a set of sites in $\mathcal{X}$ at which to run the computer code, so that we may model the resulting deterministic output as a stochastic process (see [26]).

In general, the weighted maximin problem also has many diverse applications in facility location, spatial management and pattern recognition (see [10], [30] and references therein). One applied example in facility location would be determining the location of a new and highly polluting industry within some region $\mathcal{X}$. Suppose that there are $m$ cities within region $\mathcal{X}$, represented by the points $x^1, \ldots, x^m$. One criterion in choosing a location for the facility could be that the amount of pollution reaching any city be minimized. Then the optimization problem (2.1) may be used in choosing the location of the facility, provided we can assume that the amount of pollutant reaching a city is a monotonically decreasing function of the distance between the city and the facility.

For ease of notation throughout this thesis we define
$$f_i(x) = w_i \|x - x^i\|^2, \qquad (2.3)$$
where $w_i > 0$ is a fixed constant and $x^i \in \mathbb{R}^n$. Thus, the maximin dispersion problem becomes
$$\max_{x \in \mathcal{X}} f(x), \qquad (2.4)$$
where, for all $x \in \mathbb{R}^n$,
$$f(x) = \min_{1 \leq i \leq m} f_i(x). \qquad (2.5)$$
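A direct numpy implementation of the objective (2.5) is straightforward. The sketch below (illustrative only, not the thesis's Matlab code) evaluates $f$ on a grid over $\mathcal{X} = [-1,1]^2$ for the five-point set $S$ shown in Figure 2.1; exhaustive search is viable only because $n$ and $m$ are tiny here, and Chapter 4 shows the general box problem is NP-hard:

```python
import numpy as np

def dispersion(x, pts, w):
    """f(x) = min_i w_i * ||x - x^i||^2, as in (2.1)/(2.5)."""
    return np.min(w * np.sum((pts - x) ** 2, axis=1))

# The set S from Figure 2.1: the corners of [-1,1]^2 plus the origin,
# with equal weights.
S = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0]], dtype=float)
w = np.ones(len(S))

# Crude grid search over the box X = [-1,1]^2.
g = np.linspace(-1, 1, 201)
GX, GY = np.meshgrid(g, g)
vals = np.array([dispersion(np.array([a, b]), S, w)
                 for a, b in zip(GX.ravel(), GY.ravel())])
best = np.argmax(vals)
# For this configuration the maximum is 1.0, attained at edge midpoints
# such as (0, 1) or (1, 0).
print(vals[best], GX.ravel()[best], GY.ravel()[best])
```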
In particular, we will focus on the case that $\mathcal{X}$ in (2.1) is convex under componentwise squaring:
$$\mathcal{X} = \{x \in \mathbb{R}^n \mid (x_1^2, \ldots, x_n^2, 1)^T \in K\}, \qquad (2.6)$$
for some closed convex cone $K \subseteq \mathbb{R}^{n+1}$. We will examine two specific cases: the case where $\mathcal{X} = [-1, 1]^n$ (a box), corresponding to
$$K = \{y \in \mathbb{R}^{n+1} \mid y_j \leq y_{n+1},\ j = 1, \ldots, n\}, \qquad (2.7)$$
and the case of a unit Euclidean ball $\mathcal{X} = \{x \in \mathbb{R}^n \mid \|x\|^2 \leq 1\}$, corresponding to
$$K = \{y \in \mathbb{R}^{n+1} \mid y_1 + \cdots + y_n \leq y_{n+1}\}. \qquad (2.8)$$

Even in the simple case of a box, $\mathcal{X} = [-1, 1]^n$, the function $f(x)$ can become complex quite quickly. Figure 2.1 shows $f(x)$ for two sets of points in $\mathcal{X}$: the set $S = \{(-1,-1), (-1,1), (1,-1), (1,1), (0,0)\}$ and the set $T$, a randomly generated set of 10 points in $\mathcal{X}$.

[Figure 2.1: Plots of $f(x)$ for $\mathcal{X} = [-1,1]^2$, with the set $S$ on the left and the set $T$ on the right.]

Remark 2.3. The function
$$f(x) := \min_{i=1,\ldots,m} w_i \|x - x^i\|^2 \qquad (2.9)$$
in equation (2.1) is neither convex nor concave whenever $m \geq 2$ and $x^i \neq x^j$ for some $i \neq j$. This is illustrated in Figure 2.1.

Proposition 2.4. The optimal value of the maximin dispersion problem satisfies $\nu_p > 0$.

Proof. Take $x \in \mathcal{X}$ with $x \neq x^i$ for $i = 1, \ldots, m$. We have $f(x) > 0$. Then $\nu_p \geq f(x) > 0$.

Since the weighted maximin dispersion problem is a nonconvex problem, we cannot apply convex programming methods directly. The goal of this thesis, then, is to consider convex relaxations of the problem, the solutions of which can be found using known techniques. The solutions to the relaxation problems can then provide a nontrivial lower bound for the original nonconvex problem.

2.2 Convex Relaxations

In this section we give several alternative formulations of the original problem (2.1), as well as defining the convex relaxation problems. First note that:

Proposition 2.5. If we let
$$A^i = \begin{bmatrix} I & -x^i \\ -(x^i)^T & \|x^i\|^2 \end{bmatrix} \in S^{n+1} \quad \text{and} \quad X = \begin{bmatrix} xx^T & x \\ x^T & 1 \end{bmatrix} \in S^{n+1},$$
where $I \in \mathbb{R}^{n \times n}$, then
$$\operatorname{tr}[A^i X] = \|x\|^2 - 2 (x^i)^T x + \|x^i\|^2.$$

Proof. We have
$$A^i X = \begin{bmatrix} I & -x^i \\ -(x^i)^T & \|x^i\|^2 \end{bmatrix} \begin{bmatrix} xx^T & x \\ x^T & 1 \end{bmatrix} = \begin{bmatrix} xx^T - x^i x^T & x - x^i \\ -(x^i)^T xx^T + \|x^i\|^2 x^T & -(x^i)^T x + \|x^i\|^2 \end{bmatrix},$$
so that
$$\operatorname{tr} A^i X = \operatorname{tr}(xx^T - x^i x^T) - (x^i)^T x + \|x^i\|^2 = \operatorname{tr}(x^T x) - \operatorname{tr}(x^T x^i) - (x^i)^T x + \|x^i\|^2 = \|x\|^2 - 2 (x^i)^T x + \|x^i\|^2.$$

For any $x \in \mathbb{R}^n$ and $i \in \{1, \ldots, m\}$, by Proposition 2.5 we have
$$\|x - x^i\|^2 = \|x\|^2 - 2 (x^i)^T x + \|x^i\|^2 = \Big\langle A^i, \begin{bmatrix} xx^T & x \\ x^T & 1 \end{bmatrix} \Big\rangle, \qquad (2.10)$$
where
$$A^i := \begin{bmatrix} I & -x^i \\ -(x^i)^T & \|x^i\|^2 \end{bmatrix} \qquad (2.11)$$
and $\langle A, Z \rangle = \operatorname{tr}[AZ]$.

By substituting $X = \begin{bmatrix} xx^T & x \\ x^T & 1 \end{bmatrix}$, we can reformulate the original problem (2.1) as
$$\begin{aligned} \nu_p := \max_{X,\,\nu} \quad & \nu \\ \text{s.t.} \quad & w_i \langle A^i, X \rangle \geq \nu, \quad i = 1, \ldots, m, \\ & (X_{11}, \ldots, X_{nn}, 1)^T \in K, \\ & X_{n+1,n+1} = 1, \\ & X \succeq 0, \\ & \operatorname{rank} X = 1, \end{aligned} \qquad (2.12)$$
where "s.t." abbreviates "subject to", $K$ is given as in either (2.7) or (2.8), and $A^i$ is given in (2.11).

Proposition 2.6. Let $n \geq 2$. The rank function $\operatorname{rank} : \mathbb{R}^{n \times n} \to [0, n] : X \mapsto \operatorname{rank} X$ is neither convex nor Lipschitz.

Proof. It suffices to consider the case when $n = 2$. Consider the matrices
$$X_1 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \qquad X_2 = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}.$$
We have
$$\operatorname{rank}\Big( \tfrac{1}{2} X_1 + \tfrac{1}{2} X_2 \Big) = 2 > 1 = \tfrac{1}{2} \operatorname{rank} X_1 + \tfrac{1}{2} \operatorname{rank} X_2,$$
which implies that $X \mapsto \operatorname{rank} X$ is not convex. Now let $\epsilon > 0$ and
$$Y = \begin{bmatrix} 1 & 0 \\ 0 & \epsilon \end{bmatrix}.$$
We have
$$|\operatorname{rank} X_1 - \operatorname{rank} Y| = |1 - 2| = 1, \qquad \|X_1 - Y\| = \epsilon,$$
which gives
$$\frac{|\operatorname{rank} X_1 - \operatorname{rank} Y|}{\|X_1 - Y\|} = \frac{1}{\epsilon} \to +\infty$$
as $\epsilon \downarrow 0$. Therefore, rank is not Lipschitz at $X_1$.

Since the rank function is neither convex nor Lipschitz, we drop the rank-1 constraint. This yields a convex programming relaxation of (2.1):
$$\begin{aligned} \nu_{cp} := \max_{X,\,\nu} \quad & \nu \\ \text{s.t.} \quad & w_i \langle A^i, X \rangle \geq \nu, \quad i = 1, \ldots, m, \\ & (X_{11}, \ldots, X_{nn}, 1)^T \in K, \\ & X_{n+1,n+1} = 1, \\ & X \succeq 0. \end{aligned} \qquad (2.13)$$

As (2.13), a convex relaxation of (2.12), is obtained by relaxing some of the constraints of (2.12), all possible solutions of (2.12) are feasible for (2.13). Hence:

Proposition 2.7. $\nu_{cp} \geq \nu_p$.

Our SDP and SOCP relaxations are formulated in the next two subsections.
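Before turning to the relaxations themselves, here is a quick numerical check of Proposition 2.5, an illustrative sketch with random data (not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
x = rng.standard_normal(n)
xi = rng.standard_normal(n)

# A^i and X as in Proposition 2.5 / equations (2.10)-(2.11).
Ai = np.block([[np.eye(n),      -xi[:, None]],
               [-xi[None, :],   np.array([[xi @ xi]])]])
X = np.block([[np.outer(x, x),  x[:, None]],
              [x[None, :],      np.array([[1.0]])]])

lhs = np.trace(Ai @ X)
rhs = x @ x - 2 * (xi @ x) + xi @ xi          # = ||x - x^i||^2
assert np.isclose(lhs, rhs) and np.isclose(rhs, np.sum((x - xi) ** 2))
print(lhs, rhs)
```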
2.2.1 SDP Relaxation

We will make the mild assumption that (2.13) has an optimal solution. This holds whenever $\mathcal{X}$ is compact. If $\mathcal{X}$ is convex, then we can strengthen this relaxation by adding the constraints $(X_{1,n+1}, \ldots, X_{n,n+1})^T \in \mathcal{X}$.

Since $\nu_p > 0$, we have $\nu \neq 0$ at an optimal solution of (2.13). By making the substitution $Z = X/\nu$, we can reformulate (2.13) more compactly as
$$\begin{aligned} \frac{1}{\nu_{cp}} = \min_{Z} \quad & Z_{n+1,n+1} \\ \text{s.t.} \quad & w_i \langle A^i, Z \rangle \geq 1, \quad i = 1, \ldots, m, \\ & (Z_{11}, \ldots, Z_{nn}, Z_{n+1,n+1})^T \in K, \\ & Z \succeq 0. \end{aligned} \qquad (2.14)$$
Note that when $K$ is a polyhedral set having a tractable algebraic representation, for example (2.7), then (2.14) reduces to a semidefinite program (SDP), so we will refer to this as the semidefinite programming relaxation of (2.1).

Lemma 2.8. Let $Z^* \in S^{n+1}$ be an optimal solution of (2.14). Then $Z^*_{n+1,n+1} > 0$.

Proof. This follows from
$$\frac{1}{Z^*_{n+1,n+1}} = \nu_{cp} \geq \nu_p > 0.$$

Theorem 2.9. When $K = \{y \in \mathbb{R}^{n+1} \mid y_j \leq y_{n+1},\ j = 1, \ldots, n\}$, we have:
1. $X^*$ solves the convex relaxation problem (2.13) $\iff$ $X^*/\nu_{cp}$ solves (2.14).
2. $Z^*$ solves (2.14) $\iff$ $X^* = Z^*/Z^*_{n+1,n+1}$ solves (2.13).
3. If $Z^*$ solves (2.14), then $\nu_{cp} = 1/Z^*_{n+1,n+1}$.
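A minimal cvxpy sketch of the SDP relaxation (2.14) in the box case follows. It is an illustration under assumed toy data, not the thesis's Matlab/CVX implementation from Appendix A; by Theorem 2.9, $\nu_{cp}$ is recovered as $1/Z^*_{n+1,n+1}$:

```python
import cvxpy as cp
import numpy as np

n, m = 2, 3
pts = np.array([[0.5, 0.5], [-0.5, 0.0], [0.0, -0.75]])   # the points x^i
w = np.ones(m)

def A_mat(xi):
    # A^i from (2.11).
    return np.block([[np.eye(n),     -xi[:, None]],
                     [-xi[None, :],  np.array([[xi @ xi]])]])

Z = cp.Variable((n + 1, n + 1), symmetric=True)
cons = [Z >> 0]                                            # Z is PSD
cons += [w[i] * cp.trace(A_mat(pts[i]) @ Z) >= 1 for i in range(m)]
# Box case of K, cf. (2.7): Z_jj <= Z_{n+1,n+1}.
cons += [Z[j, j] <= Z[n, n] for j in range(n)]             # Z[n, n] is Z_{n+1,n+1}

prob = cp.Problem(cp.Minimize(Z[n, n]), cons)
prob.solve()
print("nu_cp =", 1.0 / Z.value[n, n])   # upper bound on nu_p (Proposition 2.7)
```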
2.2.2 SOCP Relaxation

Lastly, we consider a further relaxation that replaces the semidefinite-cone constraint, $Z \succeq 0$, of (2.14) with the following constraints:
$$\begin{bmatrix} Z_{jj} & Z_{j,n+1} \\ Z_{j,n+1} & Z_{n+1,n+1} \end{bmatrix} \succeq 0, \quad j = 1, \ldots, n. \qquad (2.15)$$

Lemma 2.10. The constraints of equation (2.15),
$$\begin{bmatrix} Z_{j,j} & Z_{j,n+1} \\ Z_{j,n+1} & Z_{n+1,n+1} \end{bmatrix} \succeq 0, \quad j = 1, \ldots, n, \qquad (2.16)$$
are equivalent to second-order cone constraints.

Proof. For each $j = 1, \ldots, n$, let $Z_j$ denote the matrix
$$Z_j = \begin{bmatrix} Z_{j,j} & Z_{j,n+1} \\ Z_{j,n+1} & Z_{n+1,n+1} \end{bmatrix}. \qquad (2.17)$$
Then by Proposition 1.25 we know that the constraints $Z_j \succeq 0$ are equivalent to the constraints $\det(Z_j) \geq 0$ and $\operatorname{trace}(Z_j) \geq 0$ for $j = 1, \ldots, n$. Now,
$$\det(Z_j) \geq 0 \iff Z_{j,j} Z_{n+1,n+1} - (Z_{j,n+1})^2 \geq 0 \iff Z_{j,j} Z_{n+1,n+1} \geq (Z_{j,n+1})^2 \qquad (2.18)$$
and
$$\operatorname{trace}(Z_j) \geq 0 \iff Z_{j,j} + Z_{n+1,n+1} \geq 0. \qquad (2.19)$$
We will show that (2.18) and (2.19) are equivalent to second-order cone constraints, namely
$$\Big\| \begin{pmatrix} 2 Z_{j,n+1} \\ Z_{j,j} - Z_{n+1,n+1} \end{pmatrix} \Big\| \leq Z_{j,j} + Z_{n+1,n+1}, \qquad (2.20)$$
$$0 \leq Z_{j,j} + Z_{n+1,n+1}. \qquad (2.21)$$
To see that, given (2.21), the constraints (2.18) and (2.20) are equivalent, consider the following:
$$\Big\| \begin{pmatrix} 2 Z_{j,n+1} \\ Z_{j,j} - Z_{n+1,n+1} \end{pmatrix} \Big\| \leq Z_{j,j} + Z_{n+1,n+1} \iff \Big\| \begin{pmatrix} 2 Z_{j,n+1} \\ Z_{j,j} - Z_{n+1,n+1} \end{pmatrix} \Big\|^2 \leq (Z_{j,j} + Z_{n+1,n+1})^2$$
$$\iff 4 (Z_{j,n+1})^2 + (Z_{j,j})^2 - 2 Z_{j,j} Z_{n+1,n+1} + (Z_{n+1,n+1})^2 \leq (Z_{j,j})^2 + 2 Z_{j,j} Z_{n+1,n+1} + (Z_{n+1,n+1})^2$$
$$\iff 4 (Z_{j,n+1})^2 \leq 4 Z_{j,j} Z_{n+1,n+1} \iff (Z_{j,n+1})^2 \leq Z_{j,j} Z_{n+1,n+1}. \qquad (2.22)$$
Equation (2.21) is exactly equation (2.19), which is already a second-order cone constraint; that is, we have
$$\|Ax + b\| \leq Z_{j,j} + Z_{n+1,n+1} \qquad (2.23)$$
with $A = 0$ and $b = 0$.

Thus, substituting (2.15) for $Z \succeq 0$ in (2.14) yields the second-order cone programming (SOCP) relaxation of (2.1):
$$\begin{aligned} \frac{1}{\nu_{socp}} = \min_{Z} \quad & Z_{n+1,n+1} \\ \text{s.t.} \quad & w_i \langle A^i, Z \rangle \geq 1, \quad i = 1, \ldots, m, \\ & (Z_{11}, \ldots, Z_{nn}, Z_{n+1,n+1})^T \in K, \\ & \begin{bmatrix} Z_{jj} & Z_{j,n+1} \\ Z_{j,n+1} & Z_{n+1,n+1} \end{bmatrix} \succeq 0, \quad j = 1, \ldots, n. \end{aligned} \qquad (2.24)$$

Theorem 2.11. One always has $\nu_p \leq \nu_{cp} \leq \nu_{socp}$.

Corollary 2.12. Let $Z^* \in S^{n+1}$ be an optimal solution of (2.24). Then $Z^*_{n+1,n+1} > 0$.

Proof. This follows from
$$\frac{1}{Z^*_{n+1,n+1}} = \nu_{socp} \geq \nu_{cp} \geq \nu_p > 0.$$

We have $\nu_{socp} = 1/Z^*_{n+1,n+1}$ if $Z^*$ solves (2.24).

2.2.3 More Properties on the Constraints of Convex Relaxations

(1) The constraint $(Z_{11}, \ldots, Z_{nn}, Z_{n+1,n+1})^T \in K$. In either (2.14) or (2.24), what does the constraint
$$(Z_{11}, \ldots, Z_{nn}, Z_{n+1,n+1})^T \in K \qquad (2.25)$$
look like? We explicitly consider two cases.

Box case: when $K = \{y \in \mathbb{R}^{n+1} \mid y_j \leq y_{n+1},\ j = 1, \ldots, n\}$, (2.25) becomes
$$Z_{jj} \leq Z_{n+1,n+1} \quad \forall j = 1, \ldots, n. \qquad (2.26)$$

Ball case: when $K = \{y \in \mathbb{R}^{n+1} \mid \sum_{j=1}^{n} y_j \leq y_{n+1}\}$, (2.25) becomes
$$\sum_{j=1}^{n} Z_{jj} \leq Z_{n+1,n+1}. \qquad (2.27)$$

In both cases, (2.25) consists of linear inequalities.

(2) The constraint $w_i \langle A^i, Z \rangle \geq 1$. In both (2.14) and (2.24), what does the constraint
$$w_i \langle A^i, Z \rangle \geq 1 \qquad (2.28)$$
look like? We can exploit the structure of $A^i$. For a matrix $Z \in S^{n+1}$, let $z = (Z_{1,n+1}, \ldots, Z_{n,n+1})^T$ and let $\bar{Z}$ denote the upper-left $n \times n$ block with diagonal $Z_{11}, \ldots, Z_{nn}$, so that we can write
$$Z = \begin{bmatrix} \bar{Z} & z \\ z^T & Z_{n+1,n+1} \end{bmatrix} \qquad (2.29)$$
in block form. Now consider the trace:
$$\langle A^i, Z \rangle = \operatorname{tr}\left( \begin{bmatrix} I & -x^i \\ -(x^i)^T & \|x^i\|^2 \end{bmatrix} \begin{bmatrix} \bar{Z} & z \\ z^T & Z_{n+1,n+1} \end{bmatrix} \right) = \operatorname{tr}\left( \begin{bmatrix} \bar{Z} - x^i z^T & z - x^i Z_{n+1,n+1} \\ -(x^i)^T \bar{Z} + \|x^i\|^2 z^T & -(x^i)^T z + \|x^i\|^2 Z_{n+1,n+1} \end{bmatrix} \right)$$
$$= \operatorname{tr}(\bar{Z} - x^i z^T) + \big( -(x^i)^T z + \|x^i\|^2 Z_{n+1,n+1} \big) = \Big( \sum_{j=1}^{n} Z_{jj} - (x^i)^T z \Big) + \big( -(x^i)^T z + \|x^i\|^2 Z_{n+1,n+1} \big)$$
$$= \sum_{j=1}^{n} Z_{jj} - 2 (x^i)^T z + \|x^i\|^2 Z_{n+1,n+1}. \qquad (2.30)$$
Hence (2.28) is equivalent to
$$w_i \Big( \sum_{j=1}^{n} Z_{jj} - 2 (x^i)^T z + \|x^i\|^2 Z_{n+1,n+1} \Big) \geq 1. \qquad (2.31)$$

(3) The feasible region of (2.24) is strictly larger than the one in (2.14). A special case suffices. To this end, let $n = 2$ and consider the matrix
$$Z = \begin{pmatrix} 1 & 2 & 1 \\ 2 & 1 & 0 \\ 1 & 0 & 2 \end{pmatrix}.$$
We have
$$\begin{bmatrix} Z_{11} & Z_{13} \\ Z_{31} & Z_{33} \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix} \in S^2_+ \quad \text{and} \quad \begin{bmatrix} Z_{22} & Z_{23} \\ Z_{32} & Z_{33} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix} \in S^2_+,$$
but $Z \notin S^3_+$ because its principal submatrix
$$\begin{bmatrix} Z_{11} & Z_{12} \\ Z_{21} & Z_{22} \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} \notin S^2_+.$$
This shows that although
$$Z \succeq 0 \implies \begin{bmatrix} Z_{jj} & Z_{j,n+1} \\ Z_{j,n+1} & Z_{n+1,n+1} \end{bmatrix} \succeq 0 \quad \forall j = 1, \ldots, n$$
always holds, the converse implication fails.

The above discussions allow us to conclude:

Theorem 2.13 (box case). Let $K = \{y \in \mathbb{R}^{n+1} \mid y_j \leq y_{n+1},\ j = 1, \ldots, n\}$. Then:
1. The SDP relaxation (2.14) has the explicit form
$$\begin{aligned} \frac{1}{\nu_{cp}} = \min_{Z} \quad & Z_{n+1,n+1} \\ \text{s.t.} \quad & w_i \Big( \sum_{j=1}^{n} Z_{jj} - 2 (x^i)^T z + \|x^i\|^2 Z_{n+1,n+1} \Big) \geq 1, \quad i = 1, \ldots, m, \\ & Z_{jj} \leq Z_{n+1,n+1}, \quad j = 1, \ldots, n, \\ & Z \succeq 0, \end{aligned} \qquad (2.32)$$
where $z = (Z_{1,n+1}, \ldots, Z_{n,n+1})^T$.
2. The SOCP relaxation (2.24) has the explicit form
$$\begin{aligned} \frac{1}{\nu_{socp}} = \min_{Z} \quad & Z_{n+1,n+1} \\ \text{s.t.} \quad & w_i \Big( \sum_{j=1}^{n} Z_{jj} - 2 (x^i)^T z + \|x^i\|^2 Z_{n+1,n+1} \Big) \geq 1, \quad i = 1, \ldots, m, \\ & Z_{jj} \leq Z_{n+1,n+1}, \quad j = 1, \ldots, n, \\ & \begin{bmatrix} Z_{jj} & Z_{j,n+1} \\ Z_{j,n+1} & Z_{n+1,n+1} \end{bmatrix} \succeq 0, \quad j = 1, \ldots, n, \end{aligned} \qquad (2.33)$$
where $z = (Z_{1,n+1}, \ldots, Z_{n,n+1})^T$.

Theorem 2.14 (ball case). Let $K = \{y \in \mathbb{R}^{n+1} \mid \sum_{j=1}^{n} y_j \leq y_{n+1}\}$. Then:
1. The SDP relaxation (2.14) has the explicit form
$$\begin{aligned} \frac{1}{\nu_{cp}} = \min_{Z} \quad & Z_{n+1,n+1} \\ \text{s.t.} \quad & w_i \Big( \sum_{j=1}^{n} Z_{jj} - 2 (x^i)^T z + \|x^i\|^2 Z_{n+1,n+1} \Big) \geq 1, \quad i = 1, \ldots, m, \\ & \sum_{j=1}^{n} Z_{jj} \leq Z_{n+1,n+1}, \\ & Z \succeq 0, \end{aligned} \qquad (2.34)$$
where $z = (Z_{1,n+1}, \ldots, Z_{n,n+1})^T$.
2. The SOCP relaxation (2.24) has the explicit form
$$\begin{aligned} \frac{1}{\nu_{socp}} = \min_{Z} \quad & Z_{n+1,n+1} \\ \text{s.t.} \quad & w_i \Big( \sum_{j=1}^{n} Z_{jj} - 2 (x^i)^T z + \|x^i\|^2 Z_{n+1,n+1} \Big) \geq 1, \quad i = 1, \ldots, m, \\ & \sum_{j=1}^{n} Z_{jj} \leq Z_{n+1,n+1}, \\ & \begin{bmatrix} Z_{jj} & Z_{j,n+1} \\ Z_{j,n+1} & Z_{n+1,n+1} \end{bmatrix} \succeq 0, \quad j = 1, \ldots, n, \end{aligned} \qquad (2.35)$$
where $z = (Z_{1,n+1}, \ldots, Z_{n,n+1})^T$.

Remark 2.15. Note that in both the box case and the ball case, the SOCP relaxation only relies on $(Z_{11}, \ldots, Z_{nn}, Z_{n+1,n+1})$ and $z = (Z_{1,n+1}, \ldots, Z_{n,n+1})^T$.

Remark 2.16. In both the box case and the ball case, the feasible regions are unbounded. Indeed, if $Z$ is feasible, then $kZ$ is also feasible for every $k \geq 1$. Remarkably, in the next subsection we show that both (2.14) and (2.24) nevertheless have optimal solutions.
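A companion cvxpy sketch of the SOCP relaxation in its explicit box form (2.33), again with the same toy data rather than the thesis's code, uses only the variables that Remark 2.15 identifies, together with the second-order cone form of the 2x2 minors from Lemma 2.10:

```python
import cvxpy as cp
import numpy as np

n, m = 2, 3
pts = np.array([[0.5, 0.5], [-0.5, 0.0], [0.0, -0.75]])
w = np.ones(m)

d = cp.Variable(n)   # the diagonal Z_11, ..., Z_nn
z = cp.Variable(n)   # the last column Z_{1,n+1}, ..., Z_{n,n+1}
t = cp.Variable()    # Z_{n+1,n+1}

cons = []
for i in range(m):
    xi = pts[i]
    # Constraint (2.31).
    cons.append(w[i] * (cp.sum(d) - 2 * xi @ z + (xi @ xi) * t) >= 1)
cons += [d[j] <= t for j in range(n)]                  # box case (2.26)
# 2x2 minors (2.15) as SOC constraints (2.20): ||(2 z_j, d_j - t)|| <= d_j + t.
cons += [cp.SOC(d[j] + t, cp.hstack([2 * z[j], d[j] - t])) for j in range(n)]

prob = cp.Problem(cp.Minimize(t), cons)
prob.solve()
print("nu_socp =", 1.0 / t.value)   # >= nu_cp >= nu_p, by Theorem 2.11
```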
2.2.4 Why Does the SDP or SOCP Relaxation Have an Optimal Solution?

For a matrix $Z \in \mathbb{R}^{n \times n}$ we define its norm by
$$\|Z\|_2 = \max\{\|Zu\| \mid \|u\| \leq 1\}.$$

Fact 2.17. For every $Z \in \mathbb{R}^{n \times n}$, we have $\|Z\|_2 = \sqrt{\lambda_{\max}(Z^T Z)}$.

Theorem 2.18 (box case). Let $K = \{y \in \mathbb{R}^{n+1} \mid y_j \leq y_{n+1},\ j = 1, \ldots, n\}$. Then the following hold:
1. The SDP relaxation (2.14) has $\nu_{cp} < +\infty$ and at least one optimal solution.
2. The SOCP relaxation (2.24) has $\nu_{socp} < +\infty$ and at least one optimal solution.

Proof. Part 1. First we show that $\nu_{cp} < \infty$. Assume to the contrary that $\nu_{cp} = \infty$. Then there exists a feasible sequence $(Z^{(k)})$ with $Z^{(k)}_{n+1,n+1} \downarrow 0$ as $k \to \infty$. Since by (2.26)
$$0 \leq Z^{(k)}_{jj} \leq Z^{(k)}_{n+1,n+1} \quad \forall j = 1, \ldots, n,$$
we have $\lim_{k \to \infty} Z^{(k)}_{jj} = 0$. Now the condition $Z^{(k)} \succeq 0$ implies
$$\begin{bmatrix} Z^{(k)}_{jj} & Z^{(k)}_{j,n+1} \\ Z^{(k)}_{j,n+1} & Z^{(k)}_{n+1,n+1} \end{bmatrix} \succeq 0, \quad j = 1, \ldots, n, \qquad (2.36)$$
so that
$$Z^{(k)}_{n+1,n+1} Z^{(k)}_{jj} - (Z^{(k)}_{j,n+1})^2 \geq 0,$$
from which $\lim_{k \to \infty} Z^{(k)}_{j,n+1} = 0$. By (2.31), we obtain that
$$w_i \Big( \sum_{j=1}^{n} Z^{(k)}_{jj} - 2 (x^i)^T z^{(k)} + \|x^i\|^2 Z^{(k)}_{n+1,n+1} \Big) \geq 1, \qquad (2.37)$$
where $(z^{(k)})^T = (Z^{(k)}_{1,n+1}, \ldots, Z^{(k)}_{n,n+1})$. Letting $k \to \infty$, (2.37) gives $0 \geq 1$, a contradiction.

Next we show that $\nu_{cp}$ is attained. Take a feasible sequence $(Z^{(k)})_{k=1}^{\infty}$ such that $Z^{(k)}_{n+1,n+1} \downarrow 1/\nu_{cp}$ as $k \to \infty$. Since $0 \leq Z^{(k)}_{jj} \leq Z^{(k)}_{n+1,n+1}$ for $j = 1, \ldots, n$, the sequence $(Z^{(k)}_{jj})_{k=1}^{\infty}$ is bounded for each $j$. This shows that $(\operatorname{tr} Z^{(k)})_{k=1}^{\infty}$ is bounded. As the largest eigenvalue satisfies $0 \leq \lambda_{\max}(Z^{(k)}) \leq \operatorname{tr} Z^{(k)}$, we see that $(\lambda_{\max}(Z^{(k)}))_{k=1}^{\infty}$ is bounded. Note that $\|Z^{(k)}\|_2 = \lambda_{\max}(Z^{(k)})$ because $Z^{(k)} \in S^{n+1}_+$. Since all norms are equivalent on the finite-dimensional space $S^{n+1}$, this means that $(Z^{(k)})_{k=1}^{\infty}$ is bounded. Therefore, some subsequence of $(Z^{(k)})_{k=1}^{\infty}$ converges to a $Z^* \in S^{n+1}_+$. In particular, $Z^*$ is feasible because each $Z^{(k)}$ is feasible and the feasible set is closed. Without relabeling, let us assume that $Z^{(k)} \to Z^*$. Then $Z^{(k)}_{n+1,n+1} \to Z^*_{n+1,n+1}$ and $Z^*_{n+1,n+1} = 1/\nu_{cp}$. Hence $Z^*$ is an optimal solution.

Part 2. Apply arguments similar to those in Part 1.

Theorem 2.19 (ball case). Let $K = \{y \in \mathbb{R}^{n+1} \mid \sum_{j=1}^{n} y_j \leq y_{n+1}\}$. Then the following hold:
1. The SDP relaxation (2.14) has $\nu_{cp} < +\infty$ and at least one optimal solution.
2. The SOCP relaxation (2.24) has $\nu_{socp} < +\infty$ and at least one optimal solution.

Proof. The proof is similar to that of Theorem 2.18: instead of using (2.26), we use (2.27).

Chapter 3 A Necessary Condition for Optimal Solutions

In this chapter we prove general results about the objective functions of the maximin dispersion problem, as well as deriving a necessary condition for the optimal solutions of (2.1).

3.1 Properties of the Maximin Objective Functions

In this section, we apply the results of Sections 1.4 and 1.5 to the objective functions of the maximin dispersion problem, (2.3). First, we show that $f(x)$ as defined in (2.1) is locally Lipschitz at any $x_0$ in $\mathbb{R}^n$.

Theorem 3.1. The function $f_i$ as defined in (2.3) is locally Lipschitz at any $x_0 \in \mathbb{R}^n$.

Proof. Fix $x_0 \in \mathbb{R}^n$ and choose $x, y \in \mathbb{R}^n$ such that $\|x - x_0\| \leq \delta$ and $\|y - x_0\| \leq \delta$ for some $\delta > 0$. We have
$$|f_i(x) - f_i(y)| = \big| w_i \|x - x^i\|^2 - w_i \|y - x^i\|^2 \big| = w_i \big| \|x - x^i\| - \|y - x^i\| \big| \big( \|x - x^i\| + \|y - x^i\| \big) \leq w_i \|(x - x^i) - (y - x^i)\| \big( \|x - x^i\| + \|y - x^i\| \big) = w_i \|x - y\| \big( \|x - x^i\| + \|y - x^i\| \big), \qquad (3.1)$$
where the inequality uses the reverse triangle inequality. Now,
$$\|x - x^i\| \leq \|x\| + \|x^i\| = \|x - x_0 + x_0\| + \|x^i\| \leq \|x - x_0\| + \|x_0\| + \|x^i\| \leq \delta + \|x_0\| + \|x^i\| =: L_1 < \infty. \qquad (3.2)$$
Similarly, there exists $L_2 < \infty$ such that $\|y - x^i\| \leq L_2$. Then
$$\|x - x^i\| + \|y - x^i\| \leq L_1 + L_2, \qquad (3.3)$$
so that by setting $M = w_i (L_1 + L_2)$ we have
$$|f_i(x) - f_i(y)| \leq w_i \|x - y\| (L_1 + L_2) = M \|x - y\| \qquad (3.4)$$
whenever $\|x - x_0\| \leq \delta$ and $\|y - x_0\| \leq \delta$, and hence $f_i$ is locally Lipschitz at $x_0$.

Corollary 3.2. The function
$$f(x) = \min_{i=1,\ldots,m} f_i(x) = \min_{i=1,\ldots,m} w_i \|x - x^i\|^2 \qquad (3.5)$$
is locally Lipschitz at any $x_0 \in \mathbb{R}^n$. This follows directly from Theorem 3.1 and Fact 1.61.
Lastly, we characterize the gradient of $f_i(x)$ as defined in (2.3), and give a theorem for the existence of solutions.

Fact 3.3. For each $i = 1, \ldots, m$ and every $x \in \mathbb{R}^n$, the function $f_i$ is differentiable, with gradient
$$\nabla f_i(x) = 2 w_i (x - x^i). \qquad (3.6)$$

Proof. This follows from
$$f_i(x) = w_i \langle x - x^i, x - x^i \rangle = w_i \big( \langle x, x \rangle - 2 \langle x^i, x \rangle + \|x^i\|^2 \big).$$

Theorem 3.4. Assume that $\mathcal{X}$ is a compact subset of $\mathbb{R}^n$. Then $\operatorname{argmax}_{x \in \mathcal{X}} f \neq \emptyset$.

Proof. This follows directly from Theorem 1.9.

Remark 3.5. If $\mathcal{X}$ is not compact, then it may happen that $\operatorname{argmax} f = \emptyset$. For example, let $\mathcal{X} = \mathbb{R}^n$. Then $\sup_{x \in \mathbb{R}^n} f = \infty$, so that $\operatorname{argmax}_{x \in \mathbb{R}^n} f = \emptyset$.

3.2 Characterization of Optimal Solutions

In this section we provide a theoretical result that gives a necessary condition for $\bar{x}$ to be an optimal solution of the equally weighted maximin dispersion problem, which is (2.1) with $w_1 = \cdots = w_m = w$. To begin, first rewrite the original problem (2.1) as
$$\min_{x \in \mathcal{X}} \Big( \max_{1 \leq i \leq m} -f_i \Big) \qquad (3.7)$$
and note that this is equivalent to
$$\min_{x} \Big( \max_{1 \leq i \leq m} -f_i + \iota_{\mathcal{X}} \Big). \qquad (3.8)$$

Theorem 3.6. Let $\bar{x}$ belong to the interior of $\mathcal{X}$, and let
$$f(x) = \min_{i=1,\ldots,m} w \|x - x^i\|^2. \qquad (3.9)$$
If $\bar{x}$ is an optimal solution of
$$\max_{x \in \mathcal{X}} f(x), \qquad (3.10)$$
then
$$\bar{x} \in \operatorname{conv}\{x^i \mid i \in I(\bar{x})\}, \qquad (3.11)$$
where
$$I(\bar{x}) = \Big\{ i \;\Big|\; \|\bar{x} - x^i\|^2 = \min_{j=1,\ldots,m} \|\bar{x} - x^j\|^2 \Big\}. \qquad (3.12)$$

Proof. Since $\bar{x}$ is a critical point of $f$ if and only if $\bar{x}$ is a critical point of $-f$, we begin by converting our min-function $f$ into a max-function $-f$ as follows:
$$-f(x) = -\min_{i=1,\ldots,m} w \|x - x^i\|^2 = \max_{i=1,\ldots,m} \big( -w \|x - x^i\|^2 \big) = \max_{i=1,\ldots,m} g_i(x), \qquad (3.13)\text{--}(3.15)$$
where $g_i(x) = -w \|x - x^i\|^2 = -f_i(x)$. Now, by Theorem 3.1, we know that $f_i$ is locally Lipschitz at any $x \in \mathbb{R}^n$, and so applying Fact 1.58 we know that $g_i$ is locally Lipschitz at any $x \in \mathbb{R}^n$. Furthermore, by applying Fact 1.61 we know that the max of a finite family of locally Lipschitz functions is locally Lipschitz, so that $-f = \max_{i=1,\ldots,m} g_i$ is locally Lipschitz at any $x \in \mathbb{R}^n$. Thus, by applying Theorem 1.63, if $\bar{x}$ is a critical point of $-f + \iota_{\mathcal{X}}$, then we must have $0 \in \partial^o(-f + \iota_{\mathcal{X}})(\bar{x}) \subseteq \partial^o(-f)(\bar{x}) + \partial^o \iota_{\mathcal{X}}(\bar{x})$. Thus, in order to derive a necessary condition for $\bar{x}$ to be a critical point of $-f$, we need to characterize $\partial^o(-f)(\bar{x})$.

Since $-f$ is locally Lipschitz at any $\bar{x} \in \mathbb{R}^n$ and $g_i$ is differentiable for each $i$, by Theorem 1.64 the Clarke subdifferential of $-f$ at $\bar{x}$ is given by
$$\partial^o(-f)(\bar{x}) = \operatorname{conv}\{\nabla g_i(\bar{x}) \mid i \in I(\bar{x})\}. \qquad (3.16)$$
Applying Fact 3.3 to the equally weighted case, we have that the gradient of $g_i$ is
$$\nabla g_i(x) = -2w (x - x^i), \qquad (3.17)$$
so that
$$\partial^o(-f)(\bar{x}) = \operatorname{conv}\{-2w (\bar{x} - x^i) \mid i \in I(\bar{x})\}. \qquad (3.18)$$
Thus $0 \in \partial^o(-f)(\bar{x}) + \partial^o \iota_{\mathcal{X}}(\bar{x})$ implies that
$$0 \in \operatorname{conv}\{-2w (\bar{x} - x^i) \mid i \in I(\bar{x})\} + \partial^o \iota_{\mathcal{X}}(\bar{x}).$$
Dividing both sides by $2w$ and using the fact that $\partial^o \iota_{\mathcal{X}}(\bar{x}) = N_{\mathcal{X}}(\bar{x})$ is a cone, we have
$$\iff 0 \in \operatorname{conv}\{-(\bar{x} - x^i) \mid i \in I(\bar{x})\} + \partial^o \iota_{\mathcal{X}}(\bar{x}) \qquad (3.19)$$
$$\iff 0 \in -\bar{x} + \operatorname{conv}\{x^i \mid i \in I(\bar{x})\} + \partial^o \iota_{\mathcal{X}}(\bar{x}). \qquad (3.20)$$
Since $\bar{x} \in \operatorname{int} \mathcal{X}$, we have $\partial^o \iota_{\mathcal{X}}(\bar{x}) = \{0\}$ by Example 1.48, so that
$$\bar{x} \in \operatorname{conv}\{x^i \mid i \in I(\bar{x})\}. \qquad (3.21)$$
Lastly, since $i \in I(\bar{x})$ if and only if
$$g_i(\bar{x}) = -f(\bar{x}) \iff -w \|\bar{x} - x^i\|^2 = -f(\bar{x}) \iff w \|\bar{x} - x^i\|^2 = f(\bar{x}) \iff w \|\bar{x} - x^i\|^2 = \min_{j=1,\ldots,m} w \|\bar{x} - x^j\|^2 \iff \|\bar{x} - x^i\|^2 = \min_{j=1,\ldots,m} \|\bar{x} - x^j\|^2, \qquad (3.22)\text{--}(3.26)$$
we have that $\bar{x}$ is a critical point of $f(x) = \min_{i=1,\ldots,m} w \|x - x^i\|^2$ only if
$$\bar{x} \in \operatorname{conv}\{x^i \mid i \in I(\bar{x})\}, \qquad (3.27)$$
where
$$I(\bar{x}) = \Big\{ i \;\Big|\; \|\bar{x} - x^i\|^2 = \min_{j=1,\ldots,m} \|\bar{x} - x^j\|^2 \Big\}. \qquad (3.28)$$
Thus, we have provided a necessary condition for $\bar{x} \in \mathbb{R}^n$ to be a possible maximizer for the equally weighted maximin dispersion problem.
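The necessary condition of Theorem 3.6 can be checked numerically. In the sketch below (illustrative data, not from the thesis), four equally weighted points sit at the corners of $[-1,1]^2$; the interior maximizer is $\bar{x} = (0,0)$, every point is active, and membership of $\bar{x}$ in $\operatorname{conv}\{x^i \mid i \in I(\bar{x})\}$ is verified as a linear feasibility problem in the convex weights:

```python
import numpy as np
from scipy.optimize import linprog

# Four equally weighted points at the corners of X = [-1,1]^2; the interior
# maximizer of f is xbar = (0,0), and all four points are active there.
pts = np.array([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.]])
xbar = np.array([0., 0.])

# Active index set I(xbar) of (3.12), up to a numerical tolerance.
d2 = np.sum((pts - xbar) ** 2, axis=1)
I = np.flatnonzero(d2 <= d2.min() + 1e-9)
act = pts[I]

# Theorem 3.6: xbar must lie in conv{x^i : i in I(xbar)}.  Hull membership
# is feasibility of: sum_i lam_i x^i = xbar, sum_i lam_i = 1, lam >= 0.
k = len(act)
A_eq = np.vstack([act.T, np.ones(k)])
b_eq = np.append(xbar, 1.0)
res = linprog(c=np.zeros(k), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * k)
print("active:", I, "in convex hull:", res.success)   # all indices, True
```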
Remark 3.7. Let $f$ be given as in (3.9). Then $\bar{x}$ is an optimal solution of $\max_{x\in\mathcal{X}} f(x)$ only if $0 \in \partial^{\circ}(-f)(\bar{x}) + N_{\mathcal{X}}(\bar{x})$; that is, only if
\[ 0 \in \operatorname{conv}\{ -w(\bar{x} - x^i) : i \in I(\bar{x}) \} + N_{\mathcal{X}}(\bar{x}), \tag{3.29} \]
equivalently,
\[ \bar{x} \in \operatorname{conv}\{ x^i : i \in I(\bar{x}) \} + N_{\mathcal{X}}(\bar{x}). \tag{3.30} \]
See Theorem 1.30.

Chapter 4
NP-Hardness Results

Fact 4.1. The general weighted maximin distance problem is known to be NP-hard, even in the case where the distances satisfy the triangle inequality [20].

In this chapter we focus on a particular subset of maximin distance problems, namely those with $w_1 = \dots = w_m$, $\mathcal{X} = [-1,1]^n$, and $x^1,\dots,x^m \in \mathcal{X}$. We show that even in this reduced case of equal weights and $\mathcal{X}$ an $n$-dimensional hypercube, the problem is still NP-hard (see Theorem 4.5). We also examine a suggested heuristic for solving (2.1) that involves solving restricted subproblems, and show that the restricted subproblems are still NP-hard.

4.1 The Case That X is a Box

In this section we show that the equally weighted maximin optimization problem
\[ \max_{x \in [-1,1]^n} \; \min_{i=1,\dots,m} w \|x - x^i\|^2 \tag{4.1} \]
is NP-hard. In order to prove this result, we first provide some preliminary results.

Proposition 4.2. The following are equivalent to solve:

1. $y$ is a solution of the integer linear program
\[ B y = b, \quad y \in \{-1,1\}^q, \tag{4.2} \]
with $B \in \mathbb{Z}^{p \times q}$, $b \in \mathbb{Z}^p$.

2. $x$ is a solution to
\[ C x \le 0, \quad x \in \{-1,1\}^{q+1}, \tag{4.3} \]
with $C := \begin{bmatrix} B & -b \\ -B & b \end{bmatrix}$.

When we say that (4.2) and (4.3) are equivalent to solve, we mean it in the following sense: if $y \in \{-1,1\}^q$ solves (4.2), then $(y,1)$ solves (4.3); and if $x = (x_1,\dots,x_q,x_{q+1})$ solves (4.3), then $\big( \frac{x_1}{x_{q+1}},\dots,\frac{x_q}{x_{q+1}} \big)$ solves (4.2).

Proof. Writing $x' = (x_1,\dots,x_q)$, the inequality $Cx \le 0$ reads
\[ B x' - b\,x_{q+1} \le 0 \quad\text{and}\quad -B x' + b\,x_{q+1} \le 0, \tag{4.4} \]
that is, $b\,x_{q+1} \le B x' \le b\,x_{q+1}$. Thus, since $x_i = \pm 1$ for $i = 1,\dots,q$, we obtain
\[ B x' = x_{q+1} b \iff B\Big( \frac{x'}{x_{q+1}} \Big) = b. \tag{4.5} \]
Since $x_{q+1} = \pm 1$, we have $x'/x_{q+1} \in \{-1,1\}^q$, and $x'/x_{q+1}$ solves (4.2).

Proposition 4.3. The following are equivalent to solve:

1. $x$ is a solution to $Cx \le 0$, $x \in \{-1,1\}^{q+1}$ (4.6), with $C := \begin{bmatrix} B & -b \\ -B & b \end{bmatrix} \in \mathbb{Z}^{2p \times (q+1)}$.

2. $x$ is a solution to $2Cx \le \varepsilon e$, $x \in \{-1,1\}^{q+1}$ (4.7), for any $0 < \varepsilon \le 1$, where $e = (1,\dots,1)^T$.

In this case, when we say that (4.6) and (4.7) are equivalent to solve, we mean that $x$ solves (4.6) $\iff$ $x$ solves (4.7).

Proof. We can rewrite the inequality in item 2 as $Cx \le \frac{\varepsilon}{2} e$, $x \in \{-1,1\}^{q+1}$ (4.8). Now, since $0 < \varepsilon \le 1$, we have $0 < \frac{\varepsilon}{2} \le \frac12$. Furthermore, as $C \in \mathbb{Z}^{2p\times(q+1)}$ and $x \in \{-1,1\}^{q+1}$, clearly $Cx \in \mathbb{Z}^{2p}$, so solving $Cx \le \frac12 e$ is equivalent to solving $Cx \le 0$ (an integer componentwise at most $\frac12$ is at most $0$, and conversely).
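The equivalence of Propositions 4.2 and 4.3 is easy to exercise on toy data. A minimal Matlab sketch, with data and names chosen by us purely for illustration:

% Build C = [B -b; -B b] and test a candidate sign vector x (Props 4.2-4.3).
B = [1 -1; 2 1];  b = [0; 3];   % toy integer data with p = q = 2
C = [B -b; -B b];               % the 2p-by-(q+1) matrix of (4.3)
x = [1; 1; 1];                  % candidate in {-1,1}^(q+1)
if all(C*x <= 0)                % equivalently 2*C*x <= eps*e for 0 < eps <= 1
    y = x(1:end-1) / x(end);    % recover y with B*y = b, as in (4.5)
    disp(y')                    % here y = (1,1) indeed solves B*y = b
end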
Proposition 4.4. The integer linear program (1.94) is reducible in polynomial time to the equally weighted maximin dispersion problem
\[ \max_{x \in [-1,1]^{q+1}} \; \min_{i=1,\dots,2p} \|x - x^i\|^2. \tag{4.9} \]
Moreover,
\[ \max_{x \in [-1,1]^{q+1}} \; \min_{i=1,\dots,2p} \|x - x^i\|^2 \;=\; \max_{x \in \{-1,1\}^{q+1}} \; \min_{i=1,\dots,2p} \|x - x^i\|^2. \tag{4.10} \]

Proof. From Fact 1.76 we know that it is NP-hard to determine the solvability of $By = b$, $y \in \{-1,1\}^q$ (4.11), with $B \in \mathbb{Z}^{p\times q}$, $b \in \mathbb{Z}^p$. By Proposition 4.2 this is equivalent to $Cx \le 0$, $x \in \{-1,1\}^{q+1}$ (4.12), with $C := \begin{bmatrix} B & -b \\ -B & b \end{bmatrix}$, and by Proposition 4.3 it is equivalent to solve
\[ 2Cx \le \varepsilon e, \quad x \in \{-1,1\}^{q+1}, \tag{4.13} \]
for any $0 < \varepsilon \le 1$, where $e = (1,\dots,1)^T$. We assume without loss of generality that $C$ has no zero row. Let $c^i$ denote the $i$th column of $C^T$; then $(c^i)^T$ is the $i$th row of $C$, so $\|c^i\| \ne 0$. Let
\[ x^i := \rho_i c^i \in \mathbb{R}^{q+1} \quad\text{with}\quad \rho_i := \frac{\varepsilon}{\|c^i\|^2}, \qquad i = 1,\dots,2p. \tag{4.14} \]
Then $\rho_i\varepsilon = \varepsilon^2/\|c^i\|^2 = \|x^i\|^2$, since $\|x^i\| = \|\rho_i c^i\| = \frac{\varepsilon}{\|c^i\|^2}\|c^i\| = \frac{\varepsilon}{\|c^i\|}$. Furthermore,
\[ 2Cx \le \varepsilon e \iff 2(c^i)^Tx \le \varepsilon \;\;\forall i \iff 2\rho_i(c^i)^Tx \le \rho_i\varepsilon \;\;\forall i \iff 2(x^i)^Tx \le \|x^i\|^2, \;\; i=1,\dots,2p. \tag{4.15–4.16} \]
Then (4.13) is equivalent to
\[ 2(x^i)^Tx \le \|x^i\|^2, \quad i = 1,\dots,2p, \quad x \in \{-1,1\}^{q+1}. \tag{4.17} \]
Now, from $\|x^i - x\|^2 = \|x^i\|^2 - 2(x^i)^Tx + \|x\|^2$, we have, for $i = 1,\dots,2p$,
\[ \|x^i\|^2 \ge 2(x^i)^Tx \iff \|x^i - x\|^2 \ge \|x\|^2 \iff \|x^i - x\|^2 \ge q+1 \tag{4.18} \]
(since $x \in \{-1,1\}^{q+1}$ implies $\|x\|^2 = q+1$). Hence (4.17) is in turn equivalent to
\[ q+1 \le \|x^i - x\|^2, \quad i = 1,\dots,2p, \quad x \in \{-1,1\}^{q+1}. \tag{4.19} \]
Thus (1.94) has a solution if and only if the optimal value of
\[ \max_{x \in \{-1,1\}^{q+1}} \; \min_{i=1,\dots,2p} \|x^i - x\|^2 \tag{4.20} \]
is no less than $q+1$: indeed, there exists $x \in \{-1,1\}^{q+1}$ with $q+1 \le \|x^i - x\|^2$ for all $i$ if and only if $q+1 \le \max_{x\in\{-1,1\}^{q+1}} \min_{i} \|x^i - x\|^2$ (4.21).

Now we show that (4.20) is equivalent to
\[ \max_{x \in [-1,1]^{q+1}} \; \min_{i=1,\dots,2p} \|x^i - x\|^2. \tag{4.22} \]
Indeed, since $\|x^i\| = \varepsilon/\|c^i\|$, we can take
\[ \varepsilon = \min\big\{ 1,\; \min_i \|c^i\|/3 \big\}, \tag{4.23} \]
so that $\|x^i\| < \frac12$ for all $i$: by (4.23), $\varepsilon \le \|c^i\|/3$ for every $i$, whence $\varepsilon/\|c^i\| \le \frac13 < \frac12$.

We now show that the objective value of (4.22) is strictly improved by moving $x$ to $\{-1,1\}^{q+1}$. For any $x \in [-1,1]^{q+1}$, if $0 \le x_j < 1$ for some $j \in \{1,\dots,q+1\}$, then, for each $i \in \{1,\dots,2p\}$, either

(i) $x^i_j \le x_j$, so that $|x_j - x^i_j| = x_j - x^i_j < 1 - x^i_j = |1 - x^i_j|$; or (4.24)

(ii) $x^i_j > x_j$, so that $|x_j - x^i_j| = x^i_j - x_j \le x^i_j < \frac12 < 1 - x^i_j = |1 - x^i_j|$, (4.25)

using $|x^i_j| \le \|x^i\| < \frac12$ in (ii). Then the objective value of (4.22) is strictly improved by setting $x_j = 1$. Similarly, if $0 \ge x_j > -1$ for some $j \in \{1,\dots,q+1\}$, then the objective value of (4.22) is strictly improved by setting $x_j = -1$.

Lastly, it is readily seen that the size of $x^1,\dots,x^{2p}$ (encoded in binary) is polynomial in the size of $[B\; b]$ ([12], pp. 9–10, 21). Thus the solvability of (1.94) is reducible in polynomial time to the optimization problem (4.22).

We now state the main theorem of this chapter.

Theorem 4.5. Let $w > 0$. The equally weighted maximin optimization problem
\[ \max_{x \in [-1,1]^n} \; \min_{i=1,\dots,m} w\|x - x^i\|^2 \tag{4.26} \]
is NP-hard.

Proof. By Proposition 4.4, the integer linear program (1.94) is reducible in polynomial time to (4.22). Since (1.94) is NP-hard (Fact 1.76), this implies that (4.22) is NP-hard. Lastly, since (4.22) is simply (4.26) with $w = 1$, and a positive constant factor $w$ does not change the maximizers, we conclude that (4.26) is NP-hard.
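The construction of the hard instance in Proposition 4.4 is short enough to write out. A Matlab sketch (variable names and the toy data are ours):

% From integer data (B, b), build the points x^1,...,x^2p of the hard
% maximin instance on [-1,1]^(q+1), following (4.14) and (4.23).
B = [1 -1; 2 1];  b = [0; 3];
C = [B -b; -B b];                     % rows are (c^i)^T, i = 1,...,2p
rownorm = sqrt(sum(C.^2, 2));         % ||c^i||, assumed nonzero
eps0 = min([1; rownorm/3]);           % choice (4.23), gives ||x^i|| <= 1/3
Xpts = diag(eps0 ./ rownorm.^2) * C;  % rows x^i = rho_i c^i, rho_i = eps0/||c^i||^2
% (1.94) is solvable iff the maximin value over {-1,1}^(q+1) is >= q+1.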
4.2 NP-Hardness of Restricted Problems

In [30], the author describes a heuristic approach for solving (2.1) based on partitioning $\mathcal{X}$ into $m$ Voronoi cells
\[ \mathcal{X}^i := \big\{ x \in \mathcal{X} \;:\; w_i\|x - x^i\|^2 \le w_j\|x - x^j\|^2 \;\;\forall j \ne i \big\}, \qquad i = 1,\dots,m, \tag{4.27} \]
and maximizing $\|x - x^i\|^2$ over $x \in \mathcal{X}^i$, for $i = 1,\dots,m$. However, since (2.1) is NP-hard, at least one of these restricted problems (or subproblems) is NP-hard. In fact, we show below that the restricted problem is NP-hard even when $w_1 = \dots = w_m$ and $\mathcal{X} = \mathbb{R}^n$.

Theorem 4.6. We have:

1. \[ f(x) = \min_{1\le i\le m} w_i\|x - x^i\|^2 = \begin{cases} w_1\|x - x^1\|^2 & \text{if } x \in \mathcal{X}^1 \\ w_2\|x - x^2\|^2 & \text{if } x \in \mathcal{X}^2 \\ \;\;\vdots \\ w_m\|x - x^m\|^2 & \text{if } x \in \mathcal{X}^m \end{cases} \tag{4.28} \]
with $\mathcal{X}^i$ as in (4.27).

2. $\displaystyle \max_{x\in\mathcal{X}} f(x) = \max\Big\{ \max_{x\in\mathcal{X}^1} f(x),\dots,\max_{x\in\mathcal{X}^m} f(x) \Big\}$.

3. If $w_1 = \dots = w_m$, then each $\mathcal{X}^i$ is a convex set, since
\[ \big\{ x\in\mathcal{X} : \|x-x^i\|^2 \le \|x-x^j\|^2,\; j\ne i \big\} = \big\{ x\in\mathcal{X} : 2\langle x^j - x^i,\, x\rangle \le \|x^j\|^2 - \|x^i\|^2,\; j\ne i \big\}. \]

Note that, in the equally weighted case, $\max_{x\in\mathcal{X}^i} f(x)$ is
\[ \max_{x\in\mathcal{X}^i} f(x) = \max_x \; \|x-x^i\|^2 \quad\text{s.t.}\quad \|x-x^i\|^2 \le \|x-x^j\|^2, \;\; j\ne i. \tag{4.29} \]

Let $W \in \mathbb{Z}^{q\times q}$, and assume that $W$ is positive semidefinite and symmetric. We have the following lemma.

Lemma 4.7. The MaxCut problem on a graph with $q$ nodes,
\[ \max_{y\in\{-1,1\}^q} y^T W y, \tag{4.30} \]
is equivalent to the following problem:
\[ \max_{y\in[-1,1]^q} y^T (W + D)\, y, \tag{4.31} \]
where $D \in \mathbb{R}^{q\times q}$ is any diagonal matrix chosen so that $W + D$ is positive definite.

To prove Lemma 4.7 we recall Theorem 1.56, which states that if $D \subseteq \mathbb{R}^n$ is compact and convex and $f : \mathbb{R}^n \to \mathbb{R}$ is convex, then
\[ \max_{x\in D} f = \max_{x\in \operatorname{ext}(D)} f, \tag{4.32} \]
where $\operatorname{ext}(D)$ is the set of extremal points of $D$.

Proof. Note that the set of extremal points of $[-1,1]^q$ is $\{-1,1\}^q$, and that $y \mapsto y^T(W+D)y$ is convex since $W+D$ is positive definite. Then
\[ \max_{y\in[-1,1]^q} y^T(W+D)y = \max_{y\in\{-1,1\}^q} \Big( y^TWy + \sum_{j=1}^q y_j^2 D_{jj} \Big) = \max_{y\in\{-1,1\}^q} y^TWy + \sum_{j=1}^q D_{jj}. \tag{4.33} \]
Since $\sum_{j=1}^q D_{jj}$ is constant, it is equivalent to solve
\[ \max_{y\in\{-1,1\}^q} y^TWy. \tag{4.34} \]

Now, since $W+D$ is nonsingular and positive definite, by Fact 1.26 we can decompose $W+D = L^TL$, where $L \in \mathbb{R}^{q\times q}$ is a nonsingular and upper (or lower) triangular matrix, and make the substitution $x = Ly$, which leads to the following lemma.

Lemma 4.8. Solving (4.31) is equivalent to solving
\[ \max_{x\in\mathbb{R}^q} \|x\|^2 \quad\text{s.t.}\quad 2(u^i)^Tx \le 1,\;\; -2(u^i)^Tx \le 1, \quad i=1,\dots,q, \tag{4.35} \]
where $u^i$ denotes the $i$th column of $(L^{-1})^T/2$.

Proof. This follows from $(L^{-1})^T = (2u^1,\dots,2u^q)$, so that $L^{-1}$ has rows $2(u^1)^T,\dots,2(u^q)^T$ (4.36); hence $x = Ly$ gives $y = L^{-1}x$ and $y_i = 2(u^i)^Tx$, $i=1,\dots,q$ (4.37), and
\[ |y_i| \le 1 \;\;\forall i = 1,\dots,q \iff \big| 2(u^i)^Tx \big| \le 1 \;\;\forall i = 1,\dots,q. \tag{4.38} \]
Lastly, it is easily seen that
\[ \|x\|^2 = \langle Ly, Ly\rangle = y^T(L^TL)y = y^T(W+D)y. \tag{4.39} \]

Remark 4.9. $x$ solves (4.35) if and only if $y = L^{-1}x$ solves (4.31).

Lemma 4.10. Problem (4.35) can be written as
\[ \max_{x\in\mathbb{R}^q} \|x\|^2 \quad\text{s.t.}\quad \|x\|^2 \le \|x - x^i\|^2, \quad i = 1,\dots,2q, \tag{4.40} \]
where $u^i$ denotes the $i$th column of $(L^{-1})^T/2$ and
\[ x^i := \beta_i u^i, \quad x^{i+q} := -\beta_i u^i, \quad\text{with}\quad \beta_i := \frac{1}{\|u^i\|^2}, \quad i=1,\dots,q. \tag{4.41} \]

Proof. By definition of $x^i$, we have $\|x^i\| = \beta_i\|u^i\| = \frac{1}{\|u^i\|}$, so that $\|x^i\|^2 = \frac{1}{\|u^i\|^2} = \beta_i$, and hence, for $i = 1,\dots,q$,
\[ \|x\|^2 \le \|x - x^i\|^2 = \|x\|^2 - 2\langle x, x^i\rangle + \|x^i\|^2 \iff 2\langle x, x^i\rangle \le \beta_i \iff 2\langle x, \beta_i u^i\rangle \le \beta_i \iff 2\langle x, u^i\rangle \le 1, \tag{4.42} \]
while the points $x^{i+q} = -\beta_i u^i$ give $-2\langle x, u^i\rangle \le 1$. Together, these are exactly the constraints $|2(u^i)^Tx| \le 1$ of (4.35).
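The chain Lemma 4.7 → 4.8 → 4.10 is constructive, and can be carried out in a few lines. A Matlab sketch (the toy matrix, the choice of D, and all names are ours):

% Turn a MaxCut matrix W into the points x^1,...,x^2q of problem (4.40).
W = [2 -1; -1 2];                  % toy symmetric PSD integer matrix
D = eye(size(W,1));                % any diagonal D making W + D positive definite
L = chol(W + D);                   % upper-triangular L with W + D = L'*L (Fact 1.26)
U = 0.5 * inv(L)';                 % columns are the u^i of Lemma 4.8
beta = 1 ./ sum(U.^2, 1);          % beta_i = 1/||u^i||^2, as in (4.41)
P = [U*diag(beta), -U*diag(beta)]; % columns are x^1,...,x^2q of problem (4.40)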
Theorem 4.11. The MaxCut problem on a graph with $q$ nodes is reducible in polynomial time to the problem (4.40),
\[ \max_{x\in\mathbb{R}^q} \|x\|^2 \quad\text{s.t.}\quad \|x\|^2 \le \|x-x^i\|^2, \quad i=1,\dots,2q. \tag{4.43} \]

Proof. By applying Lemmas 4.7, 4.8 and 4.10, we know that the MaxCut problem is reducible to solving (4.40). Since $W \succeq 0$ and $W \in \mathbb{Z}^{q\times q}$, by choosing $D$ appropriately we can ensure that $x^1,\dots,x^{2q}$ have rational entries and that the size of $x^1,\dots,x^{2q}$ (encoded in binary) is polynomial in the size of $W$ ([12]). Thus MaxCut is reducible in polynomial time to the restricted problem (4.40).

Corollary 4.12. The restricted problems of the heuristic method of [30],
\[ \max_{x\in\mathbb{R}^q} \|x\|^2 \quad\text{s.t.}\quad \|x\|^2 \le \|x-x^i\|^2, \quad i=1,\dots,2q, \tag{4.44} \]
are NP-hard.

Proof. By Theorem 4.11, MaxCut is reducible in polynomial time to the restricted problem (4.40). Since the MaxCut problem is NP-hard (Fact 1.78), (4.40) must be NP-hard.

Corollary 4.13. One of the restricted problems $\max_{x\in\mathcal{X}^i} f(x)$ must be NP-hard.

Proof. Apply Corollary 4.12 and Theorem 4.6 part 2.

Chapter 5
Convex Relaxation Based Algorithms

Given that (2.1) is NP-hard (see Chapter 4), it is in general very important to explore methods for finding approximate solutions to (2.1). In this chapter, we outline an algorithm which can be used to find approximate solutions to (2.1) using our convex SDP (2.14) and SOCP (2.24) relaxations.

5.1 Auxiliary Results

Let $Z^* \in S^{n+1}$ be an optimal solution of the SDP relaxation (2.14). Then $Z^* \in S^{n+1}_+$, so that $Z^*_{jj} \ge 0$ for $j = 1,\dots,n+1$. For $i = 1,\dots,m$, let $b^i \in \mathbb{R}^n$ be given by
\[ b^i_j = \sqrt{Z^*_{jj}}\; x^i_j, \qquad j = 1,\dots,n, \tag{5.1} \]
where $x^i$, $i = 1,\dots,m$, are the given set of points from (2.1). To prove our main result, Theorem 5.4, we require the following tail estimate by Ben-Tal, Nemirovski, and Roos; see [3]. For completeness, we include the proof.

Lemma 5.1. Let $\sigma \in \{-1,1\}^n$ be a random vector, componentwise independent, with
\[ \Pr(\sigma_j = 1) = \Pr(\sigma_j = -1) = \tfrac12, \qquad j = 1,\dots,n. \tag{5.2} \]
Then for any $\theta > 0$,
\[ \Pr\big( (b^i)^T\sigma \ge \sqrt{\theta}\,\|b^i\| \big) \le e^{-\theta/2}, \qquad i = 1,\dots,m. \]

Proof (based on the proof of Lemma A.3 in [3]). First, note that $\sigma$ is random and $b^i$ is fixed for each $i$, so $(b^i)^T\sigma$ is a random variable; since $\theta > 0$ and $\|b^i\|$ are fixed, $(b^i)^T\sigma - \sqrt{\theta}\|b^i\|$ is also a random variable. Now, for any random variable $y$ with density function $f$ and any $\gamma \ge 0$,
\[ E(e^{\gamma y}) = \int_{-\infty}^0 e^{\gamma y} f(y)\,dy + \int_0^\infty e^{\gamma y} f(y)\,dy \ge 0 + \int_0^\infty f(y)\,dy = \Pr(y \ge 0), \tag{5.3} \]
since $e^{\gamma y} \ge 1$ for $y \ge 0$. Hence, substituting $y = (b^i)^T\sigma - \sqrt{\theta}\|b^i\|$, we have
\[ \Pr\big( (b^i)^T\sigma \ge \sqrt{\theta}\|b^i\| \big) \le E\big( e^{\gamma (b^i)^T\sigma} \big)\, e^{-\gamma\sqrt{\theta}\|b^i\|}. \tag{5.4} \]
By independence and (5.2),
\[ E\big( e^{\gamma (b^i)^T\sigma} \big) = \prod_{j=1}^n E\big( e^{\gamma b^i_j\sigma_j} \big) = \prod_{j=1}^n \tfrac12\big( e^{\gamma b^i_j} + e^{-\gamma b^i_j} \big) = \prod_{j=1}^n \cosh(\gamma b^i_j). \tag{5.5} \]
Applying the inequality $\cosh t \le e^{t^2/2}$ (which holds by Taylor's series), we obtain $E(e^{\gamma (b^i)^T\sigma}) \le e^{\frac{\gamma^2}{2}\|b^i\|^2}$. Hence, for $\gamma \ge 0$,
\[ \Pr\big( (b^i)^T\sigma \ge \sqrt{\theta}\|b^i\| \big) \le e^{\frac{\gamma^2}{2}\|b^i\|^2 - \gamma\sqrt{\theta}\|b^i\|}. \tag{5.6} \]
The right-hand side is minimized at $\gamma = \sqrt{\theta}/\|b^i\|$, with value
\[ e^{\frac12(\sqrt{\theta}/\|b^i\|)^2\|b^i\|^2 - (\sqrt{\theta}/\|b^i\|)\sqrt{\theta}\|b^i\|} = e^{-\theta/2}. \tag{5.7} \]
Thus
\[ \Pr\big( (b^i)^T\sigma \ge \sqrt{\theta}\|b^i\| \big) \le e^{-\theta/2}, \tag{5.8} \]
as required.
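The tail estimate of Lemma 5.1 is easy to sanity-check by Monte Carlo simulation. The following Matlab snippet is entirely illustrative (the vector b, dimension, and sample size are arbitrary choices of ours):

% Quick Monte Carlo check of Lemma 5.1.
rng(0);
b = randn(20, 1);  theta = 4;  N = 1e6;
sigma = sign(randn(20, N));                       % i.i.d. uniform {-1,1} entries
lhs = mean(b' * sigma >= sqrt(theta) * norm(b));  % empirical probability
fprintf('empirical %.4g <= bound %.4g\n', lhs, exp(-theta/2));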
5.2 SDP Relaxation Based Algorithm

In this section, we consider the case of (2.6) and propose an algorithm for constructing an approximate solution of (2.1) based on the SDP relaxation (2.14), with an approximation bound of
\[ \frac{1 - \sqrt{\theta\alpha^*}}{2}\,\lambda_{cp}. \tag{5.9} \]
We will see in Section 6.1 that, in the case of $\mathcal{X} = [-1,1]^n$, this yields an approximation ratio that is uniformly bounded away from zero as $n\to\infty$. First we give some elementary properties of optimal solutions to the SDP relaxation.

Lemma 5.2 (box case). Let
\[ K = \{ y \in \mathbb{R}^{n+1} : y_j \le y_{n+1},\; j = 1,\dots,n \}. \tag{5.10} \]
If the SDP
\[ \frac{1}{\lambda_{cp}} = \min_Z \; Z_{n+1,n+1} \quad\text{s.t.}\quad w_i\langle A^i,Z\rangle \ge 1,\; i=1,\dots,m; \quad (Z_{11},\dots,Z_{nn},Z_{n+1,n+1})^T \in K; \quad Z \succeq 0 \]
has a solution, then it has a solution $Z^* \in S^{n+1}$ such that $Z^*_{jj} = Z^*_{n+1,n+1}$ for all $j = 1,\dots,n$. In particular, $Z^*_{jj} > 0$ for $j = 1,\dots,n$.

Proof. The constraint $(Z_{11},\dots,Z_{nn},Z_{n+1,n+1})^T \in K$ gives
\[ Z_{jj} \le Z_{n+1,n+1}, \qquad j = 1,\dots,n, \tag{5.11} \]
while the constraint $w_i\langle A^i,Z\rangle \ge 1$ gives
\[ w_i\Big( \sum_{j=1}^n Z_{jj} + \|x^i\|^2 Z_{n+1,n+1} - 2(x^i)^Tz \Big) \ge 1. \tag{5.12} \]
If $Z$ is a feasible solution, then by (5.11), $\operatorname{Diag}(Z_{n+1,n+1}-Z_{11},\dots,Z_{n+1,n+1}-Z_{nn},0) \succeq 0$. Then
\[ \tilde{Z} := Z + \operatorname{Diag}(Z_{n+1,n+1}-Z_{11},\dots,Z_{n+1,n+1}-Z_{nn},0) \succeq 0 \]
and also verifies (5.11) and (5.12) (the left-hand side of (5.12) only increases), so $\tilde{Z}$ is feasible. Moreover, $\tilde{Z}_{jj} = Z_{n+1,n+1}$ for all $j = 1,\dots,n$, and the objective value is unchanged. Hence any optimal solution $Z^*$ of the SDP can be modified so that $Z^*_{jj} = Z^*_{n+1,n+1}$ for all $j = 1,\dots,n$. The proof is completed by using Lemma 2.8.

Lemma 5.3 (ball case). Let
\[ K = \Big\{ y \in \mathbb{R}^{n+1} : \sum_{j=1}^n y_j \le y_{n+1} \Big\}. \tag{5.13} \]
If the SDP above has a solution, then it has a solution $Z^* \in S^{n+1}$ such that $\sum_{j=1}^n Z^*_{jj} = Z^*_{n+1,n+1}$. In particular, $\sum_{j=1}^n Z^*_{jj} > 0$.

Proof. The constraint $(Z_{11},\dots,Z_{nn},Z_{n+1,n+1})^T \in K$ gives
\[ \sum_{j=1}^n Z_{jj} \le Z_{n+1,n+1}, \tag{5.14} \]
and the constraint $w_i\langle A^i,Z\rangle \ge 1$ gives (5.12) as before (5.15). If $Z$ is a feasible solution, then by (5.14), $\operatorname{Diag}(Z_{n+1,n+1}-\sum_j Z_{jj},\, 0,\dots,0) \succeq 0$. Then
\[ \tilde{Z} := Z + \operatorname{Diag}\Big( Z_{n+1,n+1}-\sum_{j=1}^n Z_{jj},\, 0,\dots,0 \Big) \succeq 0 \]
and also verifies (5.14) and (5.15), so $\tilde{Z}$ is feasible; moreover $\sum_j \tilde{Z}_{jj} = Z_{n+1,n+1}$ and the objective value is unchanged. Hence any optimal solution $Z^*$ of the SDP can be modified so that $\sum_{j=1}^n Z^*_{jj} = Z^*_{n+1,n+1}$. The proof is completed by using Lemma 2.8.
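The diagonal modification used in Lemma 5.2 is a one-line operation. A Matlab sketch (ours; it assumes Z is a numeric feasible matrix for the box case):

function Zt = pad_diagonal_box(Z)
% Lemma 5.2 (box case): pad the diagonal of a feasible Z so that
% Z_jj = Z_{n+1,n+1} for j = 1,...,n, without changing feasibility
% or the objective value Z_{n+1,n+1}.
n = size(Z,1) - 1;
pad = Z(n+1,n+1) - diag(Z(1:n,1:n));   % nonnegative by (5.11)
Zt = Z + diag([pad; 0]);               % stays PSD and feasible
end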
Theorem 5.4. Let $Z^* \in S^{n+1}$ be an optimal solution of the SDP relaxation problem (2.14),
\[ \frac{1}{\lambda_{cp}} = \min_Z \; Z_{n+1,n+1} \quad\text{s.t.}\quad w_i\langle A^i,Z\rangle \ge 1,\; i=1,\dots,m; \quad (Z_{11},\dots,Z_{nn},Z_{n+1,n+1})^T \in K; \quad Z \succeq 0, \]
and define
\[ \alpha^* := \frac{\max_{j=1,\dots,n} Z^*_{jj}}{\sum_{j=1}^n Z^*_{jj}}. \]
Then there exists a feasible solution $\tilde{x}$ of the original optimization problem (2.1) such that:

1. \[ \tilde{x} = \frac{1}{\sqrt{Z^*_{n+1,n+1}}}\Big( \sqrt{Z^*_{11}}\,\sigma_1,\; \sqrt{Z^*_{22}}\,\sigma_2,\;\dots,\; \sqrt{Z^*_{nn}}\,\sigma_n \Big), \tag{5.16} \]
where $\sigma = (\sigma_1,\dots,\sigma_n) \in \{-1,1\}^n$ satisfies $(b^i)^T\sigma < \sqrt{\theta}\|b^i\|$ for all $i$, and $\theta = 2\ln(m/\delta)$;

2. \[ f(\tilde{x}) = \min_{i=1,\dots,m} w_i\|\tilde{x}-x^i\|^2 \ge \frac{1-\sqrt{\theta\alpha^*}}{2}\,\lambda_{cp}; \tag{5.17} \]

3. $f(\tilde{x}) \ge \dfrac{1-\sqrt{\theta\alpha^*}}{2}\,\lambda_p$.

Before proving Theorem 5.4 we provide some propositions and definitions. Recall that in Section 5.1 we let $Z^* \in S^{n+1}$ be an optimal solution of the SDP relaxation (2.14), and for $i=1,\dots,m$ we let $b^i \in \mathbb{R}^n$ be given by
\[ b^i = \big( \sqrt{Z^*_{11}}\,x^i_1,\;\dots,\; \sqrt{Z^*_{nn}}\,x^i_n \big). \tag{5.18} \]

Proposition 5.5. Fix any $0 < \delta < 1$, and set $\theta = 2\ln(m/\delta)$. Then
\[ \Pr\big( (b^i)^T\sigma \ge \sqrt{\theta}\|b^i\| \big) \le \frac{\delta}{m}, \qquad i=1,\dots,m. \tag{5.19} \]

Proof. Indeed, by Lemma 5.1,
\[ \Pr\big( (b^i)^T\sigma \ge \sqrt{\theta}\|b^i\| \big) \le e^{-2\ln(m/\delta)/2} = e^{-\ln(m/\delta)} = e^{\ln(\delta/m)} = \frac{\delta}{m}. \tag{5.20} \]

Proposition 5.6. Fix any $0 < \delta < 1$, and set $\theta = 2\ln(m/\delta)$. We have
\[ \Pr\big( (b^i)^T\sigma < \sqrt{\theta}\|b^i\|,\; i=1,\dots,m \big) \ge 1-\delta > 0, \tag{5.21} \]
so that there exists a $\sigma \in \{-1,1\}^n$ satisfying
\[ (b^i)^T\sigma < \sqrt{\theta}\|b^i\|, \qquad i=1,\dots,m. \tag{5.22} \]

Proof. Consider:
\[ \Pr\big( (b^i)^T\sigma < \sqrt{\theta}\|b^i\| \;\forall i \big) = 1 - \Pr\big( (b^i)^T\sigma \ge \sqrt{\theta}\|b^i\| \text{ for some } i \big) \ge 1 - \sum_{i=1}^m \Pr\big( (b^i)^T\sigma \ge \sqrt{\theta}\|b^i\| \big) \ge 1 - \sum_{i=1}^m \frac{\delta}{m} = 1-\delta > 0, \tag{5.23} \]
where we applied Proposition 5.5.

How do we find a random vector $\sigma$ verifying (5.22)? Such a $\sigma$ can be found by repeated random sampling: by (5.21) the probability that a single sample fails is at most $\delta$, so the probability of finding such a $\sigma$ within $N$ independent samples is at least $1-\delta^N$.

Now, set $\tilde{z} \in \mathbb{R}^n$ and $\tilde{z}_{n+1}$ according to
\[ \tilde{z}_j = \sqrt{Z^*_{jj}}\,\sigma_j, \quad j=1,\dots,n; \qquad \tilde{z}_{n+1} = \sqrt{Z^*_{n+1,n+1}}. \tag{5.24} \]
Then $\tilde{z} = (\sqrt{Z^*_{11}}\sigma_1,\dots,\sqrt{Z^*_{nn}}\sigma_n)$ and
\[ \big( \tilde{z}_1^2,\dots,\tilde{z}_n^2,\tilde{z}_{n+1}^2 \big)^T = \big( Z^*_{11},\dots,Z^*_{nn},Z^*_{n+1,n+1} \big)^T \in K. \tag{5.25} \]
We will show that $\|\tilde{z} - x^i\tilde{z}_{n+1}\|^2$ is not too small for all $i$, yielding a nontrivial approximation bound.

Proposition 5.7. With $\tilde{z}$ given in (5.24), we have, for $i=1,\dots,m$,
\[ \|\tilde{z} - x^i\tilde{z}_{n+1}\|^2 = \sum_{j=1}^n Z^*_{jj} - 2(b^i)^T\sigma\,\sqrt{Z^*_{n+1,n+1}} + \|x^i\|^2 Z^*_{n+1,n+1}. \tag{5.26} \]

Proof. First,
\[ \|\tilde{z} - x^i\tilde{z}_{n+1}\|^2 = \|\tilde{z}\|^2 - 2(x^i)^T\tilde{z}\,\tilde{z}_{n+1} + \|x^i\|^2\tilde{z}_{n+1}^2. \tag{5.27} \]
Then apply (5.1), $b^i_j = \sqrt{Z^*_{jj}}\,x^i_j$, i.e. $x^i_j = b^i_j/\sqrt{Z^*_{jj}}$ (5.28), and
\[ (x^i)^T\tilde{z} = \sum_{j=1}^n x^i_j\tilde{z}_j = \sum_{j=1}^n \frac{b^i_j}{\sqrt{Z^*_{jj}}}\,\sqrt{Z^*_{jj}}\,\sigma_j = (b^i)^T\sigma, \tag{5.29} \]
together with $\|\tilde{z}\|^2 = \sum_j Z^*_{jj}$, to get the desired result.

Now, define
\[ \alpha^* := \frac{\max_{j=1,\dots,n} Z^*_{jj}}{\sum_{j=1}^n Z^*_{jj}}. \tag{5.30} \]

Proposition 5.8. We have
\[ \sum_{j=1}^n Z^*_{jj} - 2(b^i)^T\sigma\sqrt{Z^*_{n+1,n+1}} + \|x^i\|^2Z^*_{n+1,n+1} \;\ge\; \sum_{j=1}^n Z^*_{jj} - 2\sqrt{\theta\alpha^*\sum_{j=1}^n Z^*_{jj}}\;\|x^i\|\sqrt{Z^*_{n+1,n+1}} + \|x^i\|^2Z^*_{n+1,n+1}. \tag{5.31} \]

Proof. First, since $\sigma$ satisfies $(b^i)^T\sigma < \sqrt{\theta}\|b^i\|$ for $i=1,\dots,m$, we have
\[ \sum_{j} Z^*_{jj} - 2(b^i)^T\sigma\sqrt{Z^*_{n+1,n+1}} + \|x^i\|^2Z^*_{n+1,n+1} \ge \sum_{j} Z^*_{jj} - 2\sqrt{\theta}\,\|b^i\|\sqrt{Z^*_{n+1,n+1}} + \|x^i\|^2Z^*_{n+1,n+1}. \tag{5.32} \]
Furthermore, by (5.28) and (5.30),
\[ \|b^i\|^2 = \sum_{j=1}^n Z^*_{jj}\,(x^i_j)^2 \le \Big( \max_{j=1,\dots,n} Z^*_{jj} \Big)\sum_{j=1}^n (x^i_j)^2 = \alpha^*\Big( \sum_{j=1}^n Z^*_{jj} \Big)\|x^i\|^2. \tag{5.33} \]
Combining (5.32) and (5.33) gives the desired result (5.34).

Proposition 5.9. We have
\[ \sum_{j=1}^n Z^*_{jj} - 2\sqrt{\theta\alpha^*\sum_{j} Z^*_{jj}}\;\|x^i\|\sqrt{Z^*_{n+1,n+1}} + \|x^i\|^2Z^*_{n+1,n+1} \;\ge\; \big( 1-\sqrt{\theta\alpha^*} \big)\Big( \sum_{j=1}^n Z^*_{jj} + \|x^i\|^2Z^*_{n+1,n+1} \Big). \tag{5.35} \]

Proof. First,
\[ \sum_{j} Z^*_{jj} - 2\sqrt{\theta\alpha^*\sum_{j} Z^*_{jj}}\;\|x^i\|\sqrt{Z^*_{n+1,n+1}} + \|x^i\|^2Z^*_{n+1,n+1} = \big(1-\sqrt{\theta\alpha^*}\big)\Big(\sum_{j} Z^*_{jj} + \|x^i\|^2Z^*_{n+1,n+1}\Big) + \sqrt{\theta\alpha^*}\Big( \sqrt{\textstyle\sum_{j} Z^*_{jj}} - \|x^i\|\sqrt{Z^*_{n+1,n+1}} \Big)^2, \tag{5.36} \]
which can be seen by simply expanding and grouping terms (5.37). Then simply note that
\[ \sqrt{\theta\alpha^*}\Big( \sqrt{\textstyle\sum_{j} Z^*_{jj}} - \|x^i\|\sqrt{Z^*_{n+1,n+1}} \Big)^2 \ge 0, \tag{5.38} \]
and the claim (5.39) follows.

Next, we provide a result that will help to establish our lower bound.
Proposition 5.10. For each $i = 1,\dots,m$,
\[ \frac{1}{w_i} \le 2\Big( \sum_{j=1}^n Z^*_{jj} + \|x^i\|^2 Z^*_{n+1,n+1} \Big). \tag{5.40} \]

Proof. Recall that the first constraint of (2.14) is $w_i\langle A^i,Z\rangle \ge 1$, i.e. $1/w_i \le \langle A^i,Z^*\rangle$ for $i=1,\dots,m$. Consider the following chain of (in)equalities, justified below:
\[ \frac{1}{w_i} \le \langle A^i,Z^*\rangle = \Big\langle \begin{bmatrix} I & -x^i \\ -(x^i)^T & \|x^i\|^2 \end{bmatrix},\, Z^* \Big\rangle = \sum_{j=1}^n Z^*_{jj} - 2(x^i)^Tz^* + \|x^i\|^2Z^*_{n+1,n+1} \tag{5.41} \]
\[ \le 2\Big( \sum_{j=1}^n Z^*_{jj} + \|x^i\|^2Z^*_{n+1,n+1} \Big). \tag{5.42} \]
For (5.41), let $z^* = (Z^*_{1,n+1},\dots,Z^*_{n,n+1})^T$ and let $\bar{Z}^*$ denote the leading $n\times n$ block of $Z^*$, so that we can write
\[ Z^* = \begin{bmatrix} \bar{Z}^* & z^* \\ (z^*)^T & Z^*_{n+1,n+1} \end{bmatrix}. \tag{5.43} \]
Computing the trace,
\[ \operatorname{Tr}\!\left( \begin{bmatrix} I & -x^i \\ -(x^i)^T & \|x^i\|^2 \end{bmatrix}\begin{bmatrix} \bar{Z}^* & z^* \\ (z^*)^T & Z^*_{n+1,n+1} \end{bmatrix} \right) = \operatorname{Tr}\big( \bar{Z}^* - x^i(z^*)^T \big) + \big( -(x^i)^Tz^* + \|x^i\|^2Z^*_{n+1,n+1} \big) = \sum_{j=1}^n Z^*_{jj} - 2(x^i)^Tz^* + \|x^i\|^2Z^*_{n+1,n+1}. \tag{5.44} \]
For (5.42), we have $Z^* \succeq 0$, so that
\[ \begin{bmatrix} Z^*_{jj} & Z^*_{j,n+1} \\ Z^*_{j,n+1} & Z^*_{n+1,n+1} \end{bmatrix} \succeq 0, \qquad j=1,\dots,n, \tag{5.45} \]
and hence
\[ Z^*_{jj}\,Z^*_{n+1,n+1} - \big(Z^*_{j,n+1}\big)^2 \ge 0, \qquad j=1,\dots,n. \tag{5.46} \]
Thus
\[ 2\big| (x^i)^Tz^* \big| \le 2\|x^i\|\|z^*\| = 2\|x^i\|\sqrt{\sum_{j}\big(Z^*_{j,n+1}\big)^2} \le 2\|x^i\|\sqrt{\sum_{j} Z^*_{jj}Z^*_{n+1,n+1}} = 2\big( \|x^i\|\sqrt{Z^*_{n+1,n+1}} \big)\sqrt{\sum_{j} Z^*_{jj}} \le \|x^i\|^2Z^*_{n+1,n+1} + \sum_{j} Z^*_{jj}, \tag{5.47} \]
using the Cauchy–Schwarz inequality, (5.46), and $2ab \le a^2+b^2$. Hence
\[ \sum_{j} Z^*_{jj} - 2(x^i)^Tz^* + \|x^i\|^2Z^*_{n+1,n+1} \le 2\Big( \sum_{j} Z^*_{jj} + \|x^i\|^2Z^*_{n+1,n+1} \Big). \tag{5.48} \]

5.3 Proof of Theorem 5.4: SDP Relaxation Based Algorithm

Using the results of Section 5.2 we now provide the proof of Theorem 5.4.

Proof. First, by combining Propositions 5.7, 5.8 and 5.9, we have
\[ \|\tilde{z} - x^i\tilde{z}_{n+1}\|^2 \ge \big(1-\sqrt{\theta\alpha^*}\big)\Big( \sum_{j=1}^n Z^*_{jj} + \|x^i\|^2Z^*_{n+1,n+1} \Big). \tag{5.49} \]
Combining (5.49) with Proposition 5.10, we have
\[ \min_{i=1,\dots,m} w_i\|\tilde{z}-x^i\tilde{z}_{n+1}\|^2 \ge \big(1-\sqrt{\theta\alpha^*}\big)\;\frac{\sum_{j} Z^*_{jj} + \|x^i\|^2Z^*_{n+1,n+1}}{2\big( \sum_{j} Z^*_{jj} + \|x^i\|^2Z^*_{n+1,n+1} \big)} = \frac{1-\sqrt{\theta\alpha^*}}{2}. \tag{5.50} \]
Thus $\tilde{x} \in \mathbb{R}^n$ given by
\[ \tilde{x} := \frac{\tilde{z}}{\tilde{z}_{n+1}} \tag{5.51} \]
belongs to $\mathcal{X}$ given by (2.6), since, by (5.24),
\[ \big( \tilde{x}_1^2,\dots,\tilde{x}_n^2 \big) = \Big( \frac{Z^*_{11}}{Z^*_{n+1,n+1}},\dots,\frac{Z^*_{nn}}{Z^*_{n+1,n+1}} \Big), \tag{5.52} \]
and $(Z^*_{11},\dots,Z^*_{nn},Z^*_{n+1,n+1}) \in K$ by feasibility. Since $K$ is a convex cone,
\[ \frac{1}{Z^*_{n+1,n+1}}\big( Z^*_{11},\dots,Z^*_{nn},Z^*_{n+1,n+1} \big) \in K, \tag{5.53} \]
so that $(\tilde{x}_1^2,\dots,\tilde{x}_n^2,1) \in K$ and thus $\tilde{x} \in \mathcal{X}$. This establishes part 1. Furthermore, $\tilde{x}$ satisfies
\[ f(\tilde{x}) = \min_{i=1,\dots,m} w_i\|\tilde{x}-x^i\|^2 = \min_{i=1,\dots,m} \frac{w_i}{\tilde{z}_{n+1}^2}\,\|\tilde{z}-x^i\tilde{z}_{n+1}\|^2 \ge \frac{1-\sqrt{\theta\alpha^*}}{2}\cdot\frac{1}{\tilde{z}_{n+1}^2}. \tag{5.54–5.55} \]
Now, $Z^*$ is an optimal solution of (2.14) and $\tilde{z}_{n+1}^2 = Z^*_{n+1,n+1}$, so that by (2.14), $1/\tilde{z}_{n+1}^2 = 1/Z^*_{n+1,n+1} = \lambda_{cp}$. Thus we have shown that there exists a feasible solution $\tilde{x}$ of the original problem (2.1) such that
\[ f(\tilde{x}) \ge \frac{1-\sqrt{\theta\alpha^*}}{2}\,\lambda_{cp}, \tag{5.56} \]
as required in part 2. Finally, part 3 follows by combining part 2 and Theorem 2.11.

Remark 5.11. Notice that we do not need $Z \succeq 0$, but only (5.45), to establish (5.17). Since (5.45) is equivalent to second-order cone constraints (see Section 2.2.2 or [3, 25]), it suffices to solve a second-order cone program (SOCP) instead of an SDP, which is computationally much more expensive.

To summarize, the key point is to find $(Z^*_{11},\dots,Z^*_{n+1,n+1})$ using the SDP or SOCP relaxations. Upon obtaining $(Z^*_{11},\dots,Z^*_{n+1,n+1})$, we can construct a feasible solution $\tilde{x}$ of (2.1) as follows:

1. Find $(Z^*_{11},\dots,Z^*_{n+1,n+1})$, obtained from a solution $Z^*$ of the SDP (2.14) or SOCP (2.24) relaxation.
2. Generate $\sigma \in \{-1,1\}^n$ as in Proposition 5.6.
3. Set
\[ \tilde{x}(\sigma) = \frac{\tilde{z}}{\tilde{z}_{n+1}} = \frac{1}{\sqrt{Z^*_{n+1,n+1}}}\big( \sqrt{Z^*_{11}}\,\sigma_1,\dots,\sqrt{Z^*_{nn}}\,\sigma_n \big). \tag{5.57} \]

A Matlab sketch of this procedure for the box case is given below.
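The sketch below is a minimal, illustrative implementation of steps 1–3, assuming cvx is on the Matlab path and $\mathcal{X} = [-1,1]^n$; the function name, the constraint formulation, and the resampling loop are our choices, not the thesis' own script.

function xt = sdp_round(X, w, delta)
% Steps 1-3 of Section 5.3 for the box case; rows of X are x^1,...,x^m.
[m, n] = size(X);
cvx_begin quiet
    variable Z(n+1,n+1) semidefinite
    minimize( Z(n+1,n+1) )
    subject to
        for i = 1:m
            Ai = [eye(n), -X(i,:)'; -X(i,:), norm(X(i,:))^2];
            w(i) * trace(Ai * Z) >= 1;
        end
        diag(Z(1:n,1:n)) <= Z(n+1,n+1);  % (Z_11,...,Z_nn,Z_{n+1,n+1}) in K
cvx_end
d = sqrt(diag(Z(1:n,1:n)));
B = X .* repmat(d', m, 1);               % rows are the (b^i)^T of (5.1)
theta = 2 * log(m / delta);
sigma = sign(randn(n,1));                % step 2: resample until (5.22) holds
while any(B * sigma >= sqrt(theta) * sqrt(sum(B.^2,2)))
    sigma = sign(randn(n,1));
end
xt = (d .* sigma) / sqrt(Z(n+1,n+1));    % step 3: the point (5.57)
end

By Proposition 5.6, each sample of sigma succeeds with probability at least $1-\delta$, so the while loop terminates quickly in expectation.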
5.4 Examples on the Lower Bound α*

We would like $\alpha^*$ to be small for the bound (5.17) to be tight. In this section we provide concrete examples of calculating the lower bound $\alpha^*$.

Example 5.12. Let $\mathcal{X} = \{-1,1\}^n$, corresponding to
\[ K = \{ y \in \mathbb{R}^{n+1} : y_j = y_{n+1},\; j=1,\dots,n \}. \tag{5.58} \]
As $(Y^*_{11},\dots,Y^*_{nn},Y^*_{n+1,n+1})^T \in K$, and in particular $Y^* \succeq 0$, we have
\[ Y^*_{jj} = Y^*_{n+1,n+1}, \qquad j=1,\dots,n, \tag{5.59} \]
so that
\[ \alpha^* = \frac{\max_{j=1,\dots,n} Y^*_{jj}}{\sum_{j=1}^n Y^*_{jj}} = \frac{Y^*_{n+1,n+1}}{n\,Y^*_{n+1,n+1}} = \frac1n. \tag{5.60} \]

Example 5.13. Suppose $n$ is even, and let $\mathcal{X} = \{ x \in \mathbb{R}^n : x_{j-1}^2 + x_j^2 = 1,\; j = 2,4,\dots,n \}$. This corresponds to
\[ K = \{ y \in \mathbb{R}^{n+1} : y_{j-1} + y_j = y_{n+1},\; j=2,4,\dots,n \}. \tag{5.61} \]
As $(Y^*_{11},\dots,Y^*_{nn},Y^*_{n+1,n+1})^T \in K$, and in particular $Y^* \succeq 0$, we have
\[ \sum_{j=1}^n Y^*_{jj} = (Y^*_{11}+Y^*_{22}) + \dots + (Y^*_{n-1,n-1}+Y^*_{nn}) = \frac{n}{2}\,Y^*_{n+1,n+1} \tag{5.62} \]
and
\[ Y^*_{jj} \le Y^*_{n+1,n+1}, \qquad j=1,\dots,n. \tag{5.63} \]
Thus,
\[ \alpha^* = \frac{\max_{j} Y^*_{jj}}{\sum_{j} Y^*_{jj}} \le \frac{Y^*_{n+1,n+1}}{\frac{n}{2}\,Y^*_{n+1,n+1}} = \frac2n. \tag{5.64} \]

More examples of $\alpha^*$ will be given in Chapter 6.
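Computing $\alpha^*$ from a relaxation solution is a one-liner; the following illustrative Matlab check (ours) takes the diagonal $(Z^*_{11},\dots,Z^*_{n+1,n+1})$ as input:

alpha_star = @(Zdiag) max(Zdiag(1:end-1)) / sum(Zdiag(1:end-1));
% e.g. Example 5.12: equal diagonal entries give alpha* = 1/n
alpha_star(ones(6,1))   % n = 5 plus the (n+1)st entry; returns 0.2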
Chapter 6
Numerics and Examples

In this chapter we begin by considering two specific cases of the weighted maximin problem, and use the results of Section 5.2 to derive approximation bounds. In the last section, we numerically compute the SDP and SOCP approximation bounds for a few specific examples, and (where applicable) compare these results to an alternate approximate solution based on a grid search method.

6.1 Approximation Bounds: Box Case

Lemma 6.1. If $\mathcal{X} = [-1,1]^n$, so that $K$ is given by (2.7), i.e. $K = \{y\in\mathbb{R}^{n+1} : y_j \le y_{n+1},\; j=1,\dots,n\}$, then the relaxation problem (2.14), given by
\[ \frac{1}{\lambda_{cp}} = \min_Z \; Z_{n+1,n+1} \quad\text{s.t.}\quad w_i\langle A^i,Z\rangle \ge 1,\; i=1,\dots,m; \quad (Z_{11},\dots,Z_{nn},Z_{n+1,n+1})^T \in K; \quad Z \succeq 0, \tag{6.1} \]
has at least one optimal solution $Z^*$ with $Z^*_{jj} = Z^*_{n+1,n+1}$ for $j=1,\dots,n$, corresponding to which $\alpha^* = 1/n$.

Proof. Since $\mathcal{X} = [-1,1]^n$ is compact, (2.14) has an optimal solution $Z^*$. Suppose that $Z^*_{\bar{j}\bar{j}} < Z^*_{n+1,n+1}$ for some $\bar{j} \in \{1,\dots,n\}$. Then, for $i=1,\dots,m$, by (5.44) we have
\[ \langle A^i,Z^*\rangle = \sum_{j=1}^n Z^*_{jj} + \|x^i\|^2 Z^*_{n+1,n+1} - 2(x^i)^Tz^*, \]
which is increasing in $Z^*_{\bar{j}\bar{j}}$. Thus, increasing $Z^*_{\bar{j}\bar{j}}$ to $Z^*_{n+1,n+1}$ maintains all constraints of (2.14) while not changing the objective value. Hence there is an optimal solution with $Z^*_{jj} = Z^*_{n+1,n+1}$ for all $j = 1,\dots,n$, and so $\alpha^* = \frac1n$ as in Example 5.12.

It follows from $\theta = 2\ln(m/\delta)$, (5.17) and Lemma 6.1 that the feasible solution $\tilde{x}$ of (2.1) found by the SDP-based algorithm in Section 5.2 satisfies
\[ f(\tilde{x}) \ge \frac{1-\sqrt{2\ln(m/\delta)/n}}{2}\,\lambda_{cp}. \tag{6.2} \]
Thus $1 \ge f(\tilde{x})/\lambda_{cp} \ge \frac{1-\sqrt{\eta}}{2}$ whenever $2\ln(m/\delta)/n \le \eta < 1$. As we noted at the end of Section 5.2, the approximation bound (6.2) still holds when we further relax the SDP constraint $Z \succeq 0$ in (2.14) to the SOCP constraints. To our knowledge, this is the first nontrivial approximation bound for an NP-hard problem based on SOCP relaxation.

Theorem 6.2. If $K$ is given by (2.7), then
\[ \liminf_{n\to\infty} \frac{f(\tilde{x})}{\lambda_{cp}} \ge \frac12. \tag{6.3} \]
Moreover, $\liminf_{n\to\infty} f(\tilde{x})/\lambda_p \ge \frac12$.

Proof. This follows from Lemma 6.1 and the fact that $2\ln(m/\delta)/n \to 0$ as $n\to\infty$, so that
\[ \liminf_{n\to\infty} \frac{f(\tilde{x})}{\lambda_{cp}} \ge \liminf_{n\to\infty} \frac{1-\sqrt{2\ln(m/\delta)/n}}{2} = \frac12. \tag{6.4} \]
As $\lambda_p \le \lambda_{cp}$ and $f(\tilde{x}) > 0$, we have $\frac{f(\tilde{x})}{\lambda_p} \ge \frac{f(\tilde{x})}{\lambda_{cp}}$, so that
\[ \liminf_{n\to\infty} \frac{f(\tilde{x})}{\lambda_p} \ge \liminf_{n\to\infty} \frac{f(\tilde{x})}{\lambda_{cp}} \ge \frac12. \]

6.2 Approximation Bounds: Ball Case

In this section we consider the case of $\mathcal{X} = \{x\in\mathbb{R}^n : \|x\| \le 1\}$ and derive results for an approximation bound analogous to those of Section 6.1. As noted in Section 5.2, we want $\alpha^*$ to be small for the bound to be tight, so we begin by placing a lower bound on $\alpha^*$.

Lemma 6.3. If $\mathcal{X} = \{x\in\mathbb{R}^n : \|x\| \le 1\}$ is a Euclidean ball, so that $K = \{y\in\mathbb{R}^{n+1} : y_1+y_2+\dots+y_n \le y_{n+1}\}$ as in (2.8), then $\alpha^* \ge 1/n$. Furthermore, $\alpha^* = 1/n$ when $Z^*_{jj} = \frac1n Z^*_{n+1,n+1}$ for $j=1,\dots,n$.

Proof. Since the maximum of the $n$ numbers $Z^*_{11},\dots,Z^*_{nn}$ is at least their average, $\alpha^* = \max_j Z^*_{jj} / \sum_j Z^*_{jj} \ge 1/n$ always. Setting $Z^*_{jj} = \frac1n Z^*_{n+1,n+1}$ for $j=1,\dots,n$ yields $\sum_{j} Z^*_{jj} = Z^*_{n+1,n+1}$, the largest value allowed by $K$, and
\[ \alpha^* = \frac{\max_{j} Z^*_{jj}}{\sum_{j} Z^*_{jj}} = \frac{\frac1n Z^*_{n+1,n+1}}{Z^*_{n+1,n+1}} = \frac1n. \tag{6.5} \]
To see that this choice minimizes $\alpha^*$, suppose that $Z^*_{jj} = \frac{1}{\mu_j} Z^*_{n+1,n+1}$ for $j=1,\dots,n$ with $\sum_{j} Z^*_{jj} = Z^*_{n+1,n+1}$, but $\frac{1}{\mu_j} < \frac1n$ for some $j \in \{1,\dots,n\}$. Then we must have $\frac{1}{\mu_k} > \frac1n$ for some $k \ne j$ (so that $\sum_{j} Z^*_{jj} = Z^*_{n+1,n+1}$ is still satisfied), and hence
\[ \alpha^* = \frac{\max_{j}\frac{1}{\mu_j}\,Z^*_{n+1,n+1}}{Z^*_{n+1,n+1}} = \max_{j}\frac{1}{\mu_j} > \frac1n. \tag{6.6} \]

We then have a result analogous to Lemma 6.1 for the Euclidean ball:

Lemma 6.4. If $K$ is given by (2.8), then (2.14) has an optimal solution $Z^*$ with $Z^*_{jj} = \frac1n Z^*_{n+1,n+1}$ for $j=1,\dots,n$, corresponding to which $\alpha^* = 1/n$.

Proof. Since $\mathcal{X}$ is compact, (2.14) has an optimal solution $Z^*$. We know that
\[ \langle A^i,Z^*\rangle = \sum_{j=1}^n Z^*_{jj} + \|x^i\|^2 Z^*_{n+1,n+1} - 2(x^i)^Tz^* \tag{6.7} \]
is increasing in $\sum_{j} Z^*_{jj}$. Setting $Z^*_{jj} = \frac1n Z^*_{n+1,n+1}$ maximizes $\sum_{j} Z^*_{jj}$ at $Z^*_{n+1,n+1}$, which maintains all constraints of (2.14) while not changing the objective value. Thus, applying Lemma 6.3, we have $\alpha^* = 1/n$.

Then, as in Section 6.1, we can set $\theta = 2\ln(m/\delta)$ and apply (5.17) and Lemma 6.4, so that the feasible solution $\tilde{x}$ of (2.1) found by the SDP-based algorithm in Section 5.2 satisfies
\[ f(\tilde{x}) \ge \frac{1-\sqrt{2\ln(m/\delta)/n}}{2}\,\lambda_{cp}. \tag{6.8} \]

Theorem 6.5. If $K$ is given by (2.8), then
\[ \liminf_{n\to\infty} \frac{f(\tilde{x})}{\lambda_{cp}} \ge \frac12; \tag{6.9} \]
moreover, $\liminf_{n\to\infty} f(\tilde{x})/\lambda_p \ge \frac12$. The proof is analogous to that of Theorem 6.2.
6.3 Where the SDP Relaxation Fails

In this section we consider the approximation bound (6.2) in the case that $n$ is small and $m$ is large. The following example shows that the SDP relaxation (2.14) can be a poor approximation of (2.1) in such a case, so (6.2) cannot be significantly improved upon.

Example 6.6. Suppose $n = 1$ (i.e. we work in $\mathbb{R}$), $w_i = 1$, and $x^i = \frac{2(i-1)}{m} - 1$ for $i = 1,\dots,m+1$. Since $x^1,\dots,x^{m+1}$ form a grid of equal spacing on $\mathcal{X} = [-1,1]$, there is an optimal point (call it $x^*$) at the halfway point between two adjacent $x^i$. For convenience, choose $x^1 = -1$ and $x^2 = -1+\frac{2}{m}$, so that $x^* = -1+\frac1m$. Then the optimal value is
\[ \lambda_p = \|x^*-x^1\|^2 = \Big\| \Big(-1+\frac1m\Big) - (-1) \Big\|^2 = \frac{1}{m^2}. \]
On the other hand, $Z = I$ is a feasible solution of (2.14) with objective value 1. To see that $Z = I_{2\times2}$ is feasible, we check the following:

- $\langle A^i,Z\rangle \ge 1$: recall that $\langle A^i,Z\rangle = \operatorname{trace}(A^iZ)$, and
\[ A^iZ = A^iI = A^i = \begin{bmatrix} 1 & -\big(\frac{2(i-1)}{m}-1\big) \\ -\big(\frac{2(i-1)}{m}-1\big) & \big(\frac{2(i-1)}{m}-1\big)^2 \end{bmatrix} \implies \langle A^i,Z\rangle = 1 + \Big(\frac{2(i-1)}{m}-1\Big)^2 \ge 1. \]

- Here $\mathcal{X} = [-1,1]$, so that $K = \{y\in\mathbb{R}^2 : y_1 \le y_2\}$; then $(Z_{11},Z_{22}) = (1,1) \in K$ is obvious.

- $Z = I \succeq 0$.

Thus $1/\lambda_{cp} = \min_Z Z_{n+1,n+1} \le 1$, so that $\lambda_{cp} \ge 1$. Hence $\lambda_{cp} \ge 1$ is significantly larger than $\lambda_p = \frac{1}{m^2}$; in fact,
\[ \lambda_{cp} \ge m^2\,\lambda_p, \]
which shows that, in the case of $n$ small and $m$ large, $\lambda_{cp}$ is a poor approximation of $\lambda_p$.

6.3.1 Necessary Condition for Non-trivial Bounds

In order for the approximation bounds to be useful, we need to ensure that they yield non-trivial results (bounds $> 0$). In this section we give conditions on the values of $m$ and $n$, for the case that $\mathcal{X}$ is a box or a ball (so that $\alpha^* = 1/n$), which ensure that the approximation bounds provided by the SDP and SOCP relaxation problems are bounded away from 0. To obtain a non-trivial lower bound, we require that
\[ \frac{1-\sqrt{2\ln(m/\delta)/n}}{2}\,\lambda_{cp} > 0. \tag{6.10} \]
Since $\lambda_{cp} > 0$ (because $\lambda_{cp} \ge \lambda_p$), we need only verify that
\[ \frac{1-\sqrt{2\ln(m/\delta)/n}}{2} > 0 \iff 1 > \sqrt{2\ln(m/\delta)/n} \iff \frac{n}{2} > \ln\frac{m}{\delta} \iff \frac{m}{\delta} < \exp\frac{n}{2} \implies m < \exp\frac{n}{2}. \tag{6.11–6.14} \]
Thus, we can guarantee that the approximation bounds provided by the SDP and SOCP based algorithms yield a non-trivial lower bound whenever $n$ and $m$ satisfy $m/\delta < \exp(n/2)$. Table 6.1 shows the largest value of $m$ that yields a non-trivial bound for a given value of $n$ (setting $\delta = 1$): $m = \lfloor \exp(n/2)\rfloor$, i.e. $m$ is the largest integer $\le \exp(n/2)$.

n | 1 | 2 | 3 | 4 |  5 |  6 |  7 |  8 |  9 |  10
m | 1 | 2 | 4 | 7 | 12 | 20 | 33 | 54 | 90 | 148

Table 6.1: Maximal value of m yielding a non-trivial approximation.

Note that this means that we cannot guarantee a non-trivial bound in cases where $m \ge \exp(n/2)$.

Remark 6.7. Consider the function $g : (0,1] \to \mathbb{R} : \delta \mapsto \frac{1-\sqrt{(2/n)\ln(m/\delta)}}{2}$. Since $\delta \mapsto \ln(m/\delta)$ is a decreasing function, $g$ is an increasing function; therefore $g(1) = \max_{\delta\in(0,1]} g(\delta)$.

6.4 An Alternate Approach

Of course, SDP relaxation is not the only way to construct an approximate solution of (2.1). For example, consider the following simple procedure for constructing an approximate solution that is not based on SDP.

Definition 6.8. The ceiling, or least-integer, function is a function from the real numbers $\mathbb{R}$ to the integers $\mathbb{Z}$ that maps any real number $x$ to the smallest integer $y$ such that $y \ge x$. It is denoted by $\lceil x\rceil$. For example, $\lceil 1.5\rceil = 2$, $\lceil -3.2\rceil = -3$ and $\lceil 4\rceil = 4$.

Example 6.9. Let $N := \lceil (m+1)^{1/n}\rceil$. We partition $\mathcal{X} = [-1,1]^n$ into $N^n$ boxes of side length $2/N$ in each dimension. Since $N^n \ge m+1 > m$, at least one box contains none of the points $x^1,\dots,x^m$; let $\hat{x}$ be its center. Since this box contains the Euclidean ball centered at $\hat{x}$ of radius $1/N$, we have $\|\hat{x}-x^i\| \ge 1/N$ for all $i$. Hence
\[ f(\hat{x}) = \min_{i=1,\dots,m} w_i\|\hat{x}-x^i\|^2 \ge \frac{\min_i w_i}{N^2} = \frac{\min_i w_i}{\lceil (m+1)^{1/n}\rceil^2}. \]
Also $\hat{x}\in\mathcal{X}$. Assuming that $x^1,\dots,x^m$ are in $\mathcal{X}$, so that $\|x-x^i\|^2 \le 4n$ for all $x\in\mathcal{X}$, we have $\lambda_p \le 4n\,\min_i w_i$. Therefore
\[ f(\hat{x}) \ge \frac{1}{4n\,\lceil (m+1)^{1/n}\rceil^2}\,\lambda_p. \]
The approximate solution $\hat{x}$ is good when $x^1,\dots,x^m$ are distributed "uniformly" over $\mathcal{X}$, but can be arbitrarily bad when $x^1,\dots,x^m$ are clustered near each other. In particular, unlike (6.2), the above approximation ratio tends to zero as $n\to\infty$; namely,
\[ \lim_{n\to\infty} \frac{1}{4n\,\lceil (m+1)^{1/n}\rceil^2} = 0. \tag{6.15} \]
A short Matlab sketch of this partition construction is given below.
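The following sketch implements the partition argument of Example 6.9; the function name, the cell-indexing scheme, and the brute-force enumeration are ours (it is illustrative only, since enumerating $N^n$ cells is itself expensive in high dimension):

function xhat = grid_center(S)
% Find the center of a subbox of [-1,1]^n containing none of the m points
% (rows of S), as in Example 6.9.
[m, n] = size(S);
N = ceil((m + 1)^(1/n));                      % cells per dimension
cell_of = @(x) min(floor((x + 1) * N / 2), N - 1);  % cell index in 0..N-1
occupied = unique(cell_of(S), 'rows');        % cells that contain a point
for k = 0:N^n - 1                             % enumerate all N^n cells
    idx = mod(floor(k ./ N.^(0:n-1)), N);     % digits of k in base N
    if ~ismember(idx, occupied, 'rows')
        xhat = -1 + (2*idx + 1)/N;            % center of the empty cell
        return
    end
end
end

Since there are at most $m < N^n$ occupied cells, the loop always finds an empty one.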
6.5 Numerical Results

In this section, we use cvx [11], an optimization package for Matlab, to numerically compare the SDP and SOCP approximation bounds. cvx is a modeling system for disciplined convex programming (DCP). DCPs are convex programming problems that are described using a limited set of construction rules. cvx can solve standard problems such as linear programs (LPs), quadratic programs (QPs), second-order cone programs (SOCPs) and semidefinite programs (SDPs), as well as many other more complex convex optimization problems.

Since the original optimization problems in the examples do not have a simple closed-form solution, a Matlab program was written to find an approximate optimal point, yielding an approximate solution to the original weighted maximin dispersion problem (see Appendix A). Note that this code depends on the full factorial (fullfact) function in Matlab, so for high-dimensional problems or very large grid sizes the computational time is very high; this program is not meant to take the place of more sophisticated solvers.

Example 6.10. Suppose that $\mathcal{X} = [-1,1]^5$, $w_i = 1$ for $i=1,\dots,3$, and
\[ x^1 = (1,0,0,0,0), \quad x^2 = (0,0,1,0,0), \quad x^3 = (0,0,0,0,1), \tag{6.16–6.18} \]
so that we have the following equally weighted maximin dispersion problem:
\[ \max_{x\in[-1,1]^5} \; \min_{i=1,2,3} \|x-x^i\|^2. \tag{6.19} \]

Remark 6.11. We can easily derive the exact optimal solution of (6.19). Set $x = (x_1,x_2,x_3,x_4,x_5)$. Then
\[ \min_{i=1,2,3}\|x-x^i\|^2 = \min\big\{ (x_1-1)^2+x_3^2+x_5^2,\;\; x_1^2+(x_3-1)^2+x_5^2,\;\; x_1^2+x_3^2+(x_5-1)^2 \big\} + x_2^2 + x_4^2. \]
Thus, the optimal solution requires $x_1 = x_3 = x_5 = -1$. We can then set $x_1 = x_3 = x_5 = -1$ and $x_2 = \pm1$, $x_4 = \pm1$ to obtain an optimal point, corresponding to the optimal value $\lambda_p = 4+2+2 = 8$.

First, we obtain an approximate solution of (6.19) on a grid using the Matlab code from Appendix A. Using grid sizes of 5, 9 and 11, we obtain an approximate value of
\[ \lambda_p = 8 \tag{6.20} \]
in each case, the optimal value being achieved at one of four points:
\[ x^*_1 = (-1,-1,-1,-1,-1), \quad x^*_2 = (-1,1,-1,-1,-1), \quad x^*_3 = (-1,-1,-1,1,-1), \quad x^*_4 = (-1,1,-1,1,-1). \tag{6.21} \]

Now, we want to solve the SDP and SOCP relaxation problems using cvx and Matlab to compare the approximation bounds. Since $\mathcal{X} = [-1,1]^5$ is a polyhedral set having a tractable algebraic representation, equation (2.14) yields the following semidefinite program:
\[ \frac{1}{\lambda_{cp}} = \min_Z \; Z_{66} \quad\text{s.t.}\quad \langle A^i,Z\rangle \ge 1,\; i=1,2,3; \quad (Z_{11},\dots,Z_{66})^T \in K; \quad Z \succeq 0, \tag{6.22} \]
while equation (2.24) yields the following second-order cone program:
\[ \frac{1}{\lambda_{socp}} = \min_Z \; Z_{66} \quad\text{s.t.}\quad \langle A^i,Z\rangle \ge 1,\; i=1,2,3; \quad (Z_{11},\dots,Z_{66})^T \in K; \quad \begin{bmatrix} Z_{jj} & Z_{j,6} \\ Z_{j,6} & Z_{6,6} \end{bmatrix} \succeq 0,\; j=1,\dots,5, \tag{6.23} \]
where $K = \{y\in\mathbb{R}^6 : y_j \le y_6,\; j=1,\dots,5\}$. Solving both the SDP and SOCP relaxation problems using cvx yields an optimal value of
\[ \frac{1}{\lambda_{cp}} = \frac{1}{\lambda_{socp}} = 0.125; \tag{6.24} \]
see Appendix B. Interestingly, $1/\lambda_{cp} = 0.125$ gives $\lambda_{cp} = 8$, so that in this case $\lambda_{cp} = \lambda_p$. However, this is not the case in general, and from our results in this thesis we can only say with certainty that
\[ \lambda_p \ge \frac{1-\sqrt{2\ln(3/\delta)/5}}{2}\,\lambda_{cp}. \]
Substituting $\lambda_{cp} = 8$ and choosing $\delta = 1$ (recall that $0 < \delta \le 1$), we have the following lower bound on the optimal value:
\[ \lambda_p \ge \frac{1-\sqrt{2\ln 3/5}}{2}\cdot 8 \approx 1.348. \]
Since this is a case where $n$ and $m$ are both quite small, the guaranteed lower bound 1.348 is far below the true value $\lambda_p = 8$. However, in cases where $\mathcal{X}$ is a box and $n$ and $m$ are small, it is certainly feasible to use the Matlab code provided in Appendix A to obtain an approximate solution to (2.1). A cvx sketch of the relaxation (6.22) is given below.
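The following is a minimal sketch of how (6.22) can be posed in cvx; this is our formulation of the constraints (mirroring the earlier rounding sketch), not a reproduction of the actual script whose output appears in Appendix B.

X = [1 0 0 0 0; 0 0 1 0 0; 0 0 0 0 1];  n = 5;  m = 3;
cvx_begin quiet
    variable Z(n+1,n+1) semidefinite
    minimize( Z(n+1,n+1) )
    subject to
        for i = 1:m
            Ai = [eye(n), -X(i,:)'; -X(i,:), norm(X(i,:))^2];
            trace(Ai * Z) >= 1;          % w_i = 1
        end
        diag(Z(1:n,1:n)) <= Z(n+1,n+1);  % (Z_11,...,Z_66)^T in K
cvx_end
lambda_cp = 1 / Z(n+1,n+1)               % expect 8, i.e. 1/lambda_cp = 0.125
% For the SOCP variant (6.23), drop "semidefinite" (declare Z symmetric) and
% instead impose the 2-by-2 minors [Z(j,j) Z(j,6); Z(j,6) Z(6,6)], j = 1,...,5.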
We begin by examining the case $n = 5$, which is still feasible to solve using the code in Appendix A, so that we can compare the resulting approximation to the bounds produced using cvx.

Example 6.12. Suppose that $\mathcal{X} = [-1,1]^5$, $w_i = 1$ for all $i$, and we have $m = 3,4,\dots,12$ points, each point $x^j$ randomly selected from $\mathcal{X}$. Using a grid size of 11, Table 6.2 compares the lower bounds based on the optimal values $\lambda_{cp}$ and $\lambda_{socp}$, obtained by solving the SDP and SOCP relaxation problems with cvx (setting $\delta = 1$), to the approximate optimal value $\lambda_g$ found using the grid solver. Since $n$ is small, we expect the bound not to be very good, and this expectation is confirmed by the results in Table 6.2. However, for small values of $m$ the bound will significantly help in narrowing the search space for finding the optimal solution.

 m | (1-√(2ln(m)/5))/2 λ_cp | (1-√(2ln(m)/5))/2 λ_socp | λ_g (grid)
 3 | 1.2100 | 1.2100 | 6.9436
 4 | 0.8238 | 0.8238 | 5.7219
 5 | 0.7549 | 0.7549 | 6.1027
 6 | 0.5184 | 0.5184 | 5.3748
 7 | 0.5189 | 0.5189 | 5.8145
 8 | 0.3272 | 0.3272 | 5.4333
 9 | 0.2482 | 0.2482 | 5.7362
10 | 0.1341 | 0.1341 | 5.9963
11 | 0.0868 | 0.0868 | 4.9257
12 | 0.0119 | 0.0119 | 5.0090

Table 6.2: Numerical results for n = 5.

What we are more interested in are cases where $n$ and $m$ are large, so that applying the grid program becomes impossible. In particular, we examine a few cases where $n$ is "large" ($n > 5$) and $m < \exp(n/2)$ (setting $\delta = 1$), which guarantees that the SDP and SOCP based algorithms yield a non-trivial lower bound.

Example 6.13. Let $\mathcal{X} = [-1,1]^n$, $w_i = 1$ for $i=1,\dots,m$, and let $x^1,\dots,x^m$ be a set of randomly generated points in $\mathcal{X}$. Table 6.3 gives the lower bound for the original problem (2.1) based on the approximate values $\lambda_{cp}$ and $\lambda_{socp}$ obtained by using cvx to solve the relaxation problems (2.14) and (2.24).

 n |   m | (1-√(2ln(m)/n))/2 λ_cp | (1-√(2ln(m)/n))/2 λ_socp
 6 |  20 | 0.0025 | 0.0025
 7 |  33 | 0.0027 | 0.0027
 8 |  54 | 0.0061 | 0.0061
 9 |  90 | 0.0001 | 0.0001
10 | 148 | 0.0016 | 0.0016
11 | 244 | 0.0016 | 0.0016
12 | 403 | 0.0006 | 0.0006
13 | 665 | 0.0001 | 0.0001

Table 6.3: Numerical results for large n and m.

From this example it is once again apparent that the optimal value of the simpler SOCP relaxation problem is identical to the optimal value of the SDP relaxation. Since the original problem is NP-hard, and the values of $n$ and $m$ are large enough that applying the grid solver is not feasible, we do not have a way to compare these results to the actual solution of the problem. However, since all of the bounds given are strictly greater than zero, Table 6.3 at least confirms that the lower bound for (2.1) resulting from the SDP and SOCP relaxation problems is a non-trivial bound.

Chapter 7
Conclusions and Future Work

7.1 Results

The purpose of this thesis has been to develop SDP and SOCP convex relaxations for tackling the weighted maximin dispersion problem, which is NP-hard.

We began by describing the general weighted maximin dispersion problem in Section 2.1 and defining the convex relaxation problems in Section 2.2. Next, in Section 3.2, we used theoretical results from nonsmooth analysis to derive a necessary condition for the optimal solutions of the equally weighted maximin problem: provided that the set $\mathcal{X}$ is convex, we showed that $\bar{x} \in \operatorname{int}(\mathcal{X})$ is a possible solution of
\[ \max_{x\in\mathcal{X}} \; \min_{i=1,\dots,m} w\|x-x^i\|^2 \tag{7.1} \]
only if $\bar{x} \in \operatorname{conv}\{x^i : i\in I(\bar{x})\}$.

Chapter 4 was dedicated to the theory of NP-completeness.
While it is well known that the general weighted maximin distance problem is NP-hard, we provided a proof showing that even in the case where the weights are equal and $\mathcal{X}$ is a box, the problem is NP-hard; that is, the problem
\[ \max_{x\in[-1,1]^n} \; \min_{i=1,\dots,m} w\|x-x^i\|^2 \tag{7.2} \]
is NP-hard. Also in that chapter, we investigated a heuristic approach based on partitioning the space $\mathcal{X}$ into Voronoi cells and considering subproblems based on restricting $x$ to be in one of the $m$ Voronoi cells. Again, we showed that even in the case of equal weights and $\mathcal{X} = \mathbb{R}^n$, the restricted subproblems are NP-hard.

Finally, in Chapter 5, we used the convex relaxation problems given in Section 2.2 to develop an algorithm to construct an approximate solution to the general weighted maximin dispersion problem. Drawing on results from statistics, convex analysis and matrix algebra, we showed that there exists a feasible solution $\tilde{x}$ of the original problem
\[ \max_{x\in\mathcal{X}} \; \min_{i=1,\dots,m} w_i\|x-x^i\|^2 \tag{7.3} \]
such that
\[ f(\tilde{x}) \ge \frac{1-\sqrt{\theta\alpha^*}}{2}\,\lambda_{cp}, \tag{7.4} \]
where $1/\lambda_{cp}$ is the optimal value of the SDP relaxation problem (2.14), and
\[ \alpha^* = \frac{\max_{j=1,\dots,n} Z^*_{jj}}{\sum_{j=1}^n Z^*_{jj}}, \qquad \theta = 2\ln\frac{m}{\delta}. \tag{7.5} \]

Chapter 6 began by focusing on two specific examples: the case that $\mathcal{X}$ is a box, and the case that $\mathcal{X}$ is a ball. We showed that in either case there exists an optimal solution to problem (2.10) with $\alpha^* = 1/n$ (with $Z^*_{jj} = Z^*_{n+1,n+1}$ for $j=1,\dots,n$ in the box case, and $Z^*_{jj} = \frac1n Z^*_{n+1,n+1}$ in the ball case). Finally, Chapter 6 closed with a few numerical examples applying the SDP relaxation based algorithm developed in Chapter 5. Matlab code was provided for implementing that algorithm, as well as Matlab code developed for finding approximate solutions over a grid.

7.2 Future Work

Possible directions for further investigation into the weighted maximin dispersion problem and the approximation bounds derived in this thesis are:

1. Can we extend Lemma 6.1 or Lemma 6.4 to other $\mathcal{X}$ of the form (2.6)? Or, more generally, to $\mathcal{X}$ invariant under permutation?

2. Is (2.1) NP-hard for other choices of $\mathcal{X}$ besides the box (e.g., $\mathcal{X}$ the Euclidean ball)?

3. Can the above results be extended to locate multiple points in $\mathcal{X}$? And to maxsum dispersion problems?

4. Can we improve the approximation bound by using the rank reduction scheme of Goemans–Williamson [14] (see also [28]) or its variant proposed by Bertsimas and Ye [5]? In particular, instead of generating $\sigma$ uniformly on $\{-1,1\}^n$, generate $\xi \sim N(0,Z^*)$, the multivariate normal distribution, and set $\sigma_j = 1$ if $\xi_j > 0$ and $\sigma_j = -1$ otherwise, for $j=1,\dots,n$. Note that this requires $Z^* \succeq 0$. Analyzing this will likely require new large deviation estimates.

5. Create specialized algorithms to solve the SOCP relaxation more efficiently when $n$ and $m$ are large (and when a large number of points is to be located).

6. Find a lower bound for the performance of the SOCP relaxation.

7. One always has $\lambda_{cp} \ge \lambda_p$. Under what conditions do the convex relaxation problems (2.14) or (2.24) yield the exact solution; that is, under what conditions will we have $\lambda_{cp} = \lambda_p$?

8. Carry out comparisons of our SDP and SOCP relaxations to existing relaxation methods, such as [16, 30, 33].

9. In this thesis, we considered the maximin problem only for $m$ points in $\mathbb{R}^n$. What happens if we consider the maximin problem for $m$ convex sets in $\mathbb{R}^n$?
Bibliography

[1] F. Alizadeh. Interior point methods in semidefinite programming with applications to combinatorial optimization. SIAM Journal on Optimization, 5(1):13–51, 1995.
[2] Jonathan M. Borwein and Adrian S. Lewis. Convex Analysis and Nonlinear Optimization: Theory and Examples. Springer, 2000.
[3] A. Ben-Tal, A. Nemirovski, and C. Roos. Robust solutions of uncertain quadratic and conic-quadratic problems. SIAM Journal on Optimization, 13(2):535–560, 2002.
[4] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[5] D. Bertsimas and Y. Ye. Semidefinite relaxations, multivariate normal distribution, and order statistics. In D.Z. Du and P.M. Pardalos, editors, Handbook of Combinatorial Optimization, volume 3, pages 1–19. Kluwer Academic Publishers, 1998.
[6] Pierluigi Crescenzi and Viggo Kann. A compendium of NP optimization problems, 1998.
[7] Frank H. Clarke. Optimization and Nonsmooth Analysis. Society for Industrial and Applied Mathematics, 1990.
[8] Stephen Cook. The P vs NP problem, 2000.
[9] Kenneth R. Davidson and Allan P. Donsig. Real Analysis and Applications. Springer, 2010.
[10] B. Dasarathy and Lee J. White. A maxmin location problem. Operations Research, 28(6):1385–1401, 1980.
[11] Michael Grant and Stephen Boyd. cvx User's Guide, for cvx version 1.21. User's guide, September 2010.
[12] Michael R. Garey and David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. Bell Telephone Laboratories, Incorporated, 1979.
[13] M. Grötschel, L. Lovász, and A. Schrijver. Geometric Algorithms and Combinatorial Optimization. Springer-Verlag, Berlin, 1998.
[14] Michel X. Goemans and David P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM, 42(6):1115–1145, 1995.
[15] C. Helmberg, F. Rendl, R.J. Vanderbei, and H. Wolkowicz. An interior-point method for semidefinite programming. SIAM Journal on Optimization, 6(2):342–361, 1996.
[16] M. E. Johnson, L. M. Moore, and D. Ylvisaker. Minimax and maximin distance designs. Journal of Statistical Planning and Inference, 26(2):131–148, 1990.
[17] R.M. Karp. Reducibility among combinatorial problems. In R.E. Miller and J.W. Thatcher, editors, Complexity of Computer Computations, pages 85–103. Plenum Press, New York, 1972.
[18] Carl D. Meyer. Matrix Analysis and Applied Linear Algebra. Society for Industrial and Applied Mathematics, 2000.
[19] R.T. Rockafellar. Convex Analysis. Princeton University Press, Princeton, New Jersey, 1970.
[20] S. S. Ravi, D. J. Rosenkrantz, and G. K. Tayi. Heuristic and special case algorithms for dispersion problems. Operations Research, 42(2):299–310, 1994.
[21] F. Rendl, G. Rinaldi, and A. Wiegele. Solving max-cut to optimality by intersecting semidefinite and polyhedral relaxations. Mathematical Programming, 121(2):307–335, 2010.
[22] R.T. Rockafellar and Roger J-B. Wets. Variational Analysis. Springer-Verlag, 1998.
[23] Winfried Schirotzek. Nonsmooth Analysis. Springer-Verlag, Berlin, 2007.
[24] Shayle R. Searle. Matrix Algebra Useful for Statistics. John Wiley and Sons Inc., 1982.
[25] J.F. Sturm. Using SeDuMi 1.02, a Matlab toolbox for optimization over symmetric cones (updated for version 1.05). Technical report, Department of Econometrics, Tilburg University, Tilburg, The Netherlands, August 1998 – October 2001.
[26] Jerome Sacks, William J. Welch, Toby J. Mitchell, and Henry P. Wynn. Design and analysis of computer experiments. Statistical Science, 4(4):409–423, 1989.
[27] M.J. Todd. Semidefinite optimization. Acta Numerica, 10:515–560, 2001.
[28] Paul Tseng. Further results on approximating nonconvex quadratic optimization by semidefinite programming relaxation. SIAM Journal on Optimization, 14(1):268–283, 2003.
[29] L. Vandenberghe and S. Boyd. Semidefinite programming. SIAM Review, 38(1):49–95, 1996.
[30] D.J. White. A heuristic approach to a weighted maxmin dispersion problem. IMA Journal of Management Mathematics, 7(3):219–231, 1996.
[31] Wikipedia. Complexity class.
[32] H. Wolkowicz, R. Saigal, and L. Vandenberghe. Handbook of Semidefinite Programming: Theory, Algorithms, and Applications. Kluwer Academic Publishers, Boston, MA, 2000.
[33] Shuzhong Zhang. Quadratic maximization and semidefinite relaxation. Mathematical Programming, 87(3):453–465, 2000.

Appendix A
Matlab Code

The following Matlab code was used to produce 3-dimensional pictures of the minimum distance function. Note that this script calls the MinNorm helper function, discussed below.

% Plot the minimum distance function over [-1,1]^2

% define the range over which to calculate the minimum distances
x1 = (-1:0.01:1)';
x2 = (-1:0.01:1)';
xmat = [kron(x1, ones(length(x2),1)) kron(ones(length(x1),1), x2)];
[x y] = meshgrid(x1, x2);

% create two sets of points in the 2-D unit square:
% S is a "nice" set of 5 points
S = [[-1,1]; [1,-1]; [1,1]; [-1,-1]; [0,0]];
% T is a random set of 10 points
t = randi([-10,10], [10,2]);
T = t/10;

% get the function values for S and T
fminS = MinNorm(xmat, S);
fminT = MinNorm(xmat, T);
zS = reshape(fminS, length(x1), length(x2));
zT = reshape(fminT, length(x1), length(x2));

figure(1)
surf(x, y, zS, 'edgecolor', 'none')
print('-f1', 'MinDistExS.eps', '-depsc2')

figure(2)
surf(x, y, zT, 'edgecolor', 'none')
print('-f2', 'MinDistExT.eps', '-depsc2')
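The MinNorm listing does not appear in the extracted text; the following is a minimal reconstruction of ours, consistent with how the function is called above (xmat is N-by-2, the rows of S are points), and not necessarily the original:

function fmin = MinNorm(xmat, S)
% For each row x of xmat, return min_i ||x - S(i,:)||^2.
% (A reconstruction; the original MinNorm listing is not shown here.)
N = size(xmat, 1);
fmin = zeros(N, 1);
for ii = 1:N
    d2 = sum((S - repmat(xmat(ii,:), size(S,1), 1)).^2, 2);
    fmin(ii) = min(d2);
end
end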
The following code was written to estimate the optimal value of (2.1) when $\mathcal{X} = [-1,1]^n$ and $w_i = 1$ for all $i$. Using a grid with $q+1$ nodes per dimension, we find the point(s) $x \in \mathcal{X}$ furthest away from the given set of points S, under the maximin distance criterion.

function [Mx, P] = MaximinGrid(q, S)
% This function takes a set of points S and finds the furthest point(s)
% in [-1,1]^n based on the maximin distance criterion, using a grid with
% q+1 nodes per dimension.
% S is an m-by-n array, where n is the number of dimensions and m is
% the number of points (each ROW is a point).

[m, n] = size(S);

% create the grid on [-1,1]^n:
x = (q+1)*ones([n,1]);  % a vector of length n with entries q+1
X = fullfact(x);        % a grid of size (q+1)^n by n, entries in 1..q+1
% scale the grid to lie in [-1,1]^n
r = size(X, 1);
Y = -1 + 2*((X-1)/q);

d = Inf(r,1);  % min squared distance from each grid point to the set S
for ii = 1:r                   % for each point on the grid
    for jj = 1:m               % for each point in S
        D = sum((Y(ii,:) - S(jj,:)).^2);  % squared distance
        if D < d(ii)
            d(ii) = D;         % store the new minimum distance
        end
        if D == 0              % the grid point lies in S; stop early
            break;
        end
    end
end

Mx = max(d);        % the maximin distance
ind = (d == Mx);
P = Y(ind,:);       % all grid points achieving it
end

Appendix B
Matlab Output

In this section we provide the Matlab output from Example 6.10.

First, Example 6.10 is solved using the MaximinGrid function with three different grid sizes, q1 = 4, q2 = 8 and q3 = 10:

% Find the approximate optimal point for Numerics Example 1
% Do a couple of trials with different grid sizes

% Define the given set of points S
x1 = [1,0,0,0,0];
x2 = [0,0,1,0,0];
x3 = [0,0,0,0,1];
S = [x1; x2; x3];

% Choose the different mesh sizes for the grid
q1 = 4; q2 = 8; q3 = 10;

% Find the optimal points using the different grids
[d1 p1] = MaximinGrid(q1, S);

d1 = 8
p1 =
    -1  -1  -1  -1  -1
    -1   1  -1  -1  -1
    -1  -1  -1   1  -1
    -1   1  -1   1  -1

[d2 p2] = MaximinGrid(q2, S);
d2 = 8
[d3 p3] = MaximinGrid(q3, S);
d3 = 8

Output from solving the SDP relaxation for Example 6.10:

Calling sedumi: 29 variables, 8 equality constraints
------------------------------------------------------------
SeDuMi 1.21 by AdvOL, 2005-2008 and Jos F. Sturm, 1998-2003.
Alg = 2: xz-corrector, Adaptive Step-Differentiation, theta = 0.250, beta = 0.500
eqs m = 8, order n = 15, dim = 45, blocks = 2
nnz(A) = 39 + 0, nnz(ADA) = 64, nnz(L) = 36
it :     b*y        gap     delta  rate   t/tP*  t/tD*   feas cg cg  prec
 0 :             4.82E+000  0.000
 1 : -9.09E-002  1.99E+000  0.000  0.4133 0.9000 0.9000   2.47  1  1  2.7E+000
 2 :  1.32E-001  8.58E-001  0.000  0.4308 0.9000 0.9000   4.31  1  1  4.4E-001
 3 :  1.26E-001  2.41E-001  0.000  0.2805 0.9000 0.9000   1.57  1  1  1.0E-001
 4 :  1.25E-001  1.18E-002  0.000  0.0489 0.9900 0.9900   1.22  1  1  4.8E-003
 5 :  1.25E-001  3.71E-005  0.000  0.0032 0.9990 0.9990   1.01  1  1  1.5E-005
 6 :  1.25E-001  7.28E-006  0.000  0.1960 0.9000 0.8959   1.00  1  1  2.9E-006
 7 :  1.25E-001  5.39E-007  0.423  0.0741 0.9900 0.9438   1.00  1  1  2.2E-007
 8 :  1.25E-001  6.57E-009  0.000  0.0122 0.9902 0.9900   1.00  1  1  3.3E-009

iter seconds digits       c*x                b*y
  8     0.8    8.1  1.2500000099e-001  1.2499999993e-001
|Ax-b| = 2.7e-009, [Ay-c]_+ = 3.7E-010, |x| = 5.3e-001, |y| = 3.5e-001

Detailed timing (sec)
Pre          IPM          Post
4.836E-001   7.800E-001   1.560E-001
Max-norms: ||b||=1, ||c|| = 1, Cholesky |add|=0, |skip| = 0, ||L.L|| = 316.544.
------------------------------------------------------------
Status: Solved
Optimal value (cvx_optval): +0.125

Z =
    0.1250  -0.0000   0.1250  -0.0000   0.1250  -0.1250
   -0.0000   0.1250  -0.0000  -0.0000  -0.0000   0.0000
    0.1250  -0.0000   0.1250  -0.0000   0.1250  -0.1250
   -0.0000  -0.0000  -0.0000   0.1250  -0.0000   0.0000
    0.1250  -0.0000   0.1250  -0.0000   0.1250  -0.1250
   -0.1250   0.0000  -0.1250   0.0000  -0.1250   0.1250

Output from solving the SOCP relaxation for Example 6.10:

Calling sedumi: 23 variables, 12 equality constraints
------------------------------------------------------------
SeDuMi 1.21 by AdvOL, 2005-2008 and Jos F. Sturm, 1998-2003.
Alg = 2: xz-corrector, Adaptive Step-Differentiation, theta = 0.250, beta = 0.500
eqs m = 12, order n = 19, dim = 29, blocks = 6
nnz(A) = 47 + 0, nnz(ADA) = 144, nnz(L) = 78
it :     b*y        gap     delta  rate   t/tP*  t/tD*   feas cg cg  prec
 0 :             3.80E+000  0.000
 1 : -1.05E-001  1.62E+000  0.000  0.4249 0.9000 0.9000   2.47  1  1  2.8E+000
 2 :  1.35E-001  6.38E-001  0.000  0.3950 0.9000 0.9000   4.86  1  1  3.3E-001
 3 :  1.24E-001  5.51E-002  0.000  0.0863 0.9900 0.9900   1.45  1  1  2.4E-002
 4 :  1.25E-001  5.14E-004  0.358  0.0093 0.9945 0.9945   1.07  1  1  2.3E-004
 5 :  1.25E-001  1.23E-005  0.000  0.0240 0.9900 0.9900   1.00  1  1  5.9E-006
 6 :  1.25E-001  6.14E-007  0.082  0.0498 0.9900 0.9356   1.00  1  1  3.1E-007
 7 :  1.25E-001  1.11E-008  0.000  0.0182 0.9905 0.9900   1.00  1  1  8.7E-009

iter seconds digits       c*x                b*y
  7     0.1    7.5  1.2500000462e-001  1.2500000087e-001
|Ax-b| = 3.6e-009, [Ay-c]_+ = 2.7E-009, |x| = 5.0e-001, |y| = 3.5e-001

Detailed timing (sec)
Pre          IPM          Post
6.240E-002   6.240E-002   3.120E-002
Max-norms: ||b||=1, ||c|| = 1, Cholesky |add|=1, |skip| = 0, ||L.L|| = 108.52.
------------------------------------------------------------
Status: Solved
Optimal value (cvx_optval): +0.125

Z =
    0.1250  -0.0000   0.1250  -0.0000   0.1250  -0.1250
   -0.0000   0.1250  -0.0000  -0.0000  -0.0000   0.0000
    0.1250  -0.0000   0.1250  -0.0000   0.1250  -0.1250
   -0.0000  -0.0000  -0.0000   0.1250  -0.0000   0.0000
    0.1250  -0.0000   0.1250  -0.0000   0.1250  -0.1250
   -0.1250   0.0000  -0.1250   0.0000  -0.1250   0.1250
