Computational Convex Analysis using Parametric Quadratic Programming

by

Khan Md. Kamall Jakee

B.Sc., Bangladesh University of Engineering and Technology, 2009

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE
in
THE COLLEGE OF GRADUATE STUDIES
(Interdisciplinary Studies - Optimization)

THE UNIVERSITY OF BRITISH COLUMBIA (Okanagan)

September 2013

© Khan Md. Kamall Jakee, 2013

Abstract

The class of piecewise linear-quadratic (PLQ) functions is a very important class of functions in convex analysis since the result of most convex operators applied to a PLQ function is a PLQ function. Although there exists a wide range of algorithms for univariate PLQ functions, recent work has focused on extending these algorithms to PLQ functions with more than one variable. First, we recall a proof in [Convexity, Convergence and Feedback in Optimal Control, PhD thesis, R. Goebel, 2000] that PLQ functions are closed under partial conjugate computation. Then we use recent results on parametric quadratic programming (pQP) to compute the inf-projection of any multivariate convex PLQ function. We implemented the algorithm for bivariate PLQ functions, and modified it to compute conjugates. We provide a complete space and time worst-case complexity analysis and show that for bivariate functions, the algorithm matches the performance of [Computing the Conjugate of Convex Piecewise Linear-Quadratic Bivariate Functions, Bryan Gardiner and Yves Lucet, Mathematical Programming Series B, 2011] while being easier to extend to higher dimensions.

Preface

A part of this thesis (Theorem 3.2) is included in a manuscript co-authored with Dr. Yves Lucet and Bryan Gardiner. The manuscript is submitted for publication in Computational Optimization and Applications [GJL13].

Table of Contents

Abstract
Preface
Table of Contents
List of Figures
Acknowledgements
Dedication
Chapter 1: Introduction
Chapter 2: Convex Analysis Preliminaries
Chapter 3: The Partial Conjugates
Chapter 4: Convex Parametric Quadratic Programming
  4.1 Piecewise quadratic optimization problem
  4.2 Parametric piecewise quadratic optimization problems
    4.2.1 Critical region
    4.2.2 Calculation of the solution
    4.2.3 Convex parametric quadratic program (pQP)
  4.3 Adaptation of pQP in computational convex analysis
    4.3.1 Conjugate computation
    4.3.2 Moreau envelope computation
    4.3.3 Proximal average computation
Chapter 5: Algorithm
  5.1 Data structure
    5.1.1 List representation of bivariate PLQ functions
    5.1.2 Face-constraint adjacency representation of a bivariate PLQ function
    5.1.3 The data structures to store lines, rays, segments and vertexes
    5.1.4 The data structure to store entity-face and entity-constraint adjacency information
    5.1.5 The Half-Edge data structure [CGA, Ope]
  5.2 The Algorithms
  5.3 Complexity Analysis
Chapter 6: Examples
Chapter 7: Conclusion
Bibliography

List of Figures

Figure 2.1 Example of normal cones
Figure 4.1 Polyhedral decomposition of D
Figure 4.2 Polyhedral subdivision of S
Figure 4.3 The index set of x
Figure 4.4 The polyhedral decomposition of D
Figure 4.5 The partitions of D
Figure 4.6 Illustration of the equal index sets of x3 and x4
Figure 4.7 Illustration of GJ8 and DJ8
Figure 5.1 Domain of the function f(x1, x2) = |x1| + |x2|
Figure 5.2 Domain of the function f(x1, x2) = |x1| + |x2|
Figure 5.3 The half-edge data structure
Figure 5.4 The representation of the domain of f(x1, x2) = |x1| + |x2| using the half-edge data structure
Figure 5.5 Computation of the vertexes
Figure 5.6 Computation of the remaining entities by using computed vertexes
Figure 5.7 2D Energy function
Figure 6.1 The l1 norm and its conjugate
Figure 6.2 2D Energy function
Figure 6.3 The function from Example 6.3 and its conjugate
Figure 6.4 The function from Example 6.4 and its conjugate
Figure 6.5 The function from Example 6.5 and its conjugate
Figure 6.6 The function from Example 6.6 and its conjugate
Figure 6.7 The function from Example 6.7 and its conjugate
Figure 6.8 2D Energy function
Figure 6.9 2D Energy function
Figure 6.10 2D Energy function
Figure 6.11 The approximation of the conjugate of f(x1, x2) = x1^4 + x2^4

Acknowledgements

I would like to express my sincere gratitude to my supervisor, Dr. Yves Lucet, for his continuous support throughout my thesis with his patience and knowledge.

Beside my supervisor, I would like to thank my thesis committee members for spending time to read this thesis and agreeing to be a part of this memorable moment of my life.

A special thanks to Dr. Warren Hare for the course Convex Optimization and Non-smooth Analysis, which helped me a lot in my research.

I would like to thank my current and previous lab mates, Shahadat Hossain, Sukanto Mondal, Muhammad Mahmudul Hasan Razib and Dessalegn Amenu Hirpa, for their helping hands.

I thank my parents and my wife for their support and encouragement in my life.

The research is funded by a Discovery Grant from the Natural Sciences and Engineering Research Council (NSERC) of Canada.
It was performed in the Computer-Aided Convex Analysis (CA2) lab funded through a Canada Foundation for Innovation Leaders Opportunity Fund.

Dedication

To my wife, Sadia Sharmin.

Chapter 1
Introduction

Convex analysis is a branch of mathematics that focuses on the study of the properties of convex sets and functions. Numerous applications in engineering and science use convex optimization, a subfield of optimization that studies the minimization of convex functions over convex sets.

Convex analysis provides numerous operators, like the Legendre-Fenchel conjugate (also known as the conjugate in convex analysis), the Moreau envelope (Moreau-Yosida approximation or Moreau-Yosida regularization), and the proximal average. Computational convex analysis focuses on the computation of these operators. It originated with the work of Moreau [Mor65, Paragraph 5c] and was studied intensively after the introduction of the fast Legendre transform algorithm [Bre89, Cor96, Luc96, NV94, SAF92].

Piecewise linear-quadratic (PLQ) functions have very interesting properties. The class of PLQ functions is closed under the conjugate [RW98, Theorem 11.14] and Moreau envelope [RW98, Example 11.28] operators. Moreover, the partial conjugate of a PLQ function is also a PLQ function. When the closed form of a function does not exist or is very complex, a piecewise linear-quadratic approximation is a convenient way to manipulate it. Algorithms based on PLQ functions [Luc06] do not suffer from the numerical errors, caused by composing several convex operators with unbounded domains, that affect algorithms based on piecewise linear functions [Luc06]. These algorithms were restricted to univariate functions until very recently.

Symbolic computation algorithms have been given to compute the Legendre-Fenchel conjugate and related transforms [BM06, BH08, Ham05]. These algorithms can handle univariate [BM06] and multivariate [BH08] functions. When symbolic computation is not applicable, the need for numerical algorithms arises.

The Fast Legendre Transform (FLT) [Bre89] was the beginning of the development of efficient numerical algorithms. Its worst-case time complexity was log-linear, which was improved by the development of the Linear-time Legendre Transform (LLT) algorithm [Luc97]. The LLT algorithm uses a piecewise linear model, and this model can be applied to multivariate functions. The Moreau envelope can be computed from the conjugate, and other linear-time fast algorithms have been developed from this relation [HUL07, Luc06]. Besides the computation of the conjugate and the Moreau envelope, the fast algorithms are used to compute the proximal average [BLT08], which combines several fundamental operations of convex analysis like addition, scalar multiplication, conjugation, and regularization with the Moreau envelope. Other transforms, like the proximal hull transforms, the inf-convolution of convex functions, and the deconvolution of convex functions [HUL93, RW98], can be computed by combining existing fast algorithms.

The FLT and LLT algorithms have been applied beyond computational convex analysis. The LLT is used in network communication [HOVT03], robotics [Koo02], numerical simulation of multiphasic flows [Hel05], pattern recognition [Luc05] and analysis of the distribution of chemical compounds in the atmosphere [LPBL05]. The LLT algorithm also has an explicit application in the computation of distance transforms [Luc05].
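To make the numerical setting concrete, the following sketch evaluates the discrete Legendre-Fenchel conjugate of a sampled univariate function by brute force. It is only an O(n^2) illustration of the operator on an assumed grid, not the FLT or LLT algorithm; those reach log-linear and linear time by exploiting the convexity of the samples.

    import numpy as np

    # Brute-force discrete Legendre-Fenchel conjugate on a grid:
    #   f*(s_j) = max_i [ s_j * x_i - f(x_i) ]
    # Illustration only: O(n^2), unlike the FLT (log-linear) or LLT (linear).
    def discrete_conjugate(x, fx, s):
        # outer(s, x) builds the matrix of products s_j * x_i;
        # subtracting f(x_i) and maximizing over i gives f*(s_j).
        return np.max(np.outer(s, x) - fx, axis=1)

    # Hypothetical sample data: f(x) = x^2 / 2 is its own conjugate.
    x = np.linspace(-2.0, 2.0, 401)
    fx = 0.5 * x**2
    s = np.linspace(-2.0, 2.0, 401)
    print(np.allclose(discrete_conjugate(x, fx, s), 0.5 * s**2))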
The addition and inf-convolution of convex functions are also related to the Linear Cost Network Flow problem on Series-Parallel Networks [TL96]. For a survey of applications, see [Luc10].

The linear dependence between the graph of the subdifferential of most convex operators and the graph of the function gives rise to graph-matrix-calculus (GPH) algorithms with linear time complexity [GL11b]. These algorithms store the subdifferential data to reduce the conjugate computation to a matrix multiplication. Since GPH algorithms are optimized for matrix operations, they are very suitable and efficient for implementation in matrix-based mathematical software like Scilab and MATLAB.

The conjugate computation algorithm for bivariate convex PLQ functions [GL11a] is the first effective algorithm to compute the conjugate of convex bivariate PLQ functions. Its worst-case running time is log-linear. This algorithm stores the function domain in a planar arrangement data structure provided by CGLAB [CGL]. A toolbox for convex PLQ functions based on this algorithm has also been developed, and the operations needed to build it, like addition and scalar multiplication, are implemented.

The full conjugate of a PLQ function can be computed from the partial conjugates (the conjugate with respect to one of the variables). Due to the similarities between the full conjugate and the partial conjugate, an algorithm to compute the partial conjugate of bivariate PLQ functions has been developed based on the conjugate computation algorithm for bivariate convex PLQ functions [GJL13]. Although partial conjugates are PLQ, they are not convex. In fact, they are saddle functions [Roc70].

Intense research has been done on the sensitivity analysis of solutions to nonlinear programs and variational inequalities during the last three decades [Fia83, RW09, KK02, DR09, FP03]. Sensitivity analysis of an optimization problem studies the properties of the solution mapping only in the vicinity of a reference point in the parameter space. Sensitivity analysis is closely related to parametric optimization and can be applied in algorithms for parametric optimization. But unlike sensitivity analysis, parametric optimization computes the solution mapping for every value in the parameter space [BGK+82, PGD07b, PGD07a].

Parametric optimization algorithms for linear and quadratic programs have attracted interest in the last decade [BBM02b, BMDP02, BBM02a]. Ongoing research in parametric programming has resulted in the development of several algorithms for solving convex parametric quadratic programs (pQP) [Bao02, SGD03, TJB03, GBTM04] and parametric linear programs (pLP) [GBTM03]. Research has also been done on the continuity properties of the value function and the optimal solution set of parametric programs [Fia83, Zha97, BGK+83, Bes72, Hog73a, Hog73b]. The continuity of the optimal set mapping and the stability of the optimization problem are closely related to each other. The stability of quadratic programs is studied in [BC90, PY01]. The stability and continuity of parametric programs are mostly derived from set theory [Ber63, DFS67, Hau57]. The goal in parametric programming is to solve a parametric optimization problem for all possible values of the parameters. The solutions of a parameter-dependent optimization problem are piecewise functions defined on a partition of the parameter space.

Parametric programming can be used to formulate several problems in control theory.
Parametric linear programs (pLP), parametric quadratic programs (pQP), parametric mixed integer linear programs (pMILP), parametric nonlinear programs (pNLP), and parametric linear complementarity problems (pLCP) are the different types of parametric programs used in control theory. Parametric optimization gives the exact and explicit solution of the dynamic programming equations for various classes of problems. In [BBM02a, DB02], dynamic programming coupled with parametric programming is applied to min-max optimal control problems of constrained uncertain linear systems with a polyhedral performance index. It is also applied to the finite or infinite horizon optimal control of piecewise affine systems with a polyhedral cost index [BCM06, CBM05]. Dynamic programming coupled with parametric programming is also used for the finite inf-sup optimal control of piecewise affine systems with a polyhedral cost index [KM02, SKMJ09], and for problems with a quadratic cost index [BBBM09].

Parametric programming can be used beyond control theory. Due to the link between parametric programming and geometry, it has applications in computational geometry, e.g. parametric programming can be used to compute Voronoi diagrams and Delaunay triangulations [RGJ04]. In addition, it has applications in convex analysis.

The parametric piecewise linear-quadratic optimization problem [PS11] enables us to use parametric programming in computational convex analysis. As long as the function to be optimized is convex PLQ, the optimization problem can be modeled using parametric programming. Due to this fact, it is possible to model the conjugate computation problem, as well as the Moreau envelope computation problem, of any multivariate convex PLQ function as a parametric optimization problem. Although there is an effective algorithm to compute the conjugate of convex bivariate PLQ functions [GL11a], no work has been done to compute the convex conjugate of bivariate PLQ functions based on parametric programming. We study the first such algorithm. It relies on explicitly storing the partitions of the domain space as well as the associated linear-quadratic functions. The algorithm also stores the adjacency information of the partitions. The conjugate computation problem is modeled as a parametric piecewise quadratic optimization problem, and convex parametric piecewise quadratic optimization is applied to compute the convex conjugate, which is another bivariate PLQ function. PLQ functions have a simple structure and can be represented easily. Although there are several ways to represent a bivariate PLQ function, we discuss different data structures to manipulate bivariate PLQ functions. These data structures are slightly different from the data structure presented in [GJL13], but are efficient and easy to manipulate.

The thesis is organized in several chapters. After introducing the thesis in the current chapter, we discuss the notation and some basic definitions in the following chapter. Chapter 3 recalls partial conjugates. After that, we recall the recent results on convex parametric quadratic programming (pQP) in Chapter 4, and we also discuss the adaptation of pQP to computational convex analysis there. Chapter 5 proposes several data structures to represent a convex bivariate PLQ function. It also presents an algorithm based on pQP to compute the conjugate and the Moreau envelope of any convex bivariate PLQ function.
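As a minimal sketch of the kind of list representation mentioned above (and detailed in Chapter 5), the following code stores a convex bivariate PLQ function as a list of pieces, each holding its quadratic data and polyhedral constraints. The class and function names are hypothetical and the encoding is only illustrative; it is not the data structure implemented in the thesis.

    import numpy as np

    # Hypothetical list representation of a bivariate PLQ function: each piece
    # stores f_k(x) = 0.5*x'Q x + q'x + alpha on C_k = {x : A x <= b}.
    class PLQPiece:
        def __init__(self, Q, q, alpha, A, b):
            self.Q, self.q, self.alpha = np.asarray(Q, float), np.asarray(q, float), alpha
            self.A, self.b = np.asarray(A, float), np.asarray(b, float)

        def contains(self, x, tol=1e-9):
            # membership test for the polyhedral piece C_k
            return bool(np.all(self.A @ x <= self.b + tol))

        def value(self, x):
            return 0.5 * x @ self.Q @ x + self.q @ x + self.alpha

    def plq_eval(pieces, x):
        # evaluate the PLQ function; +infinity outside the union of the pieces
        for piece in pieces:
            if piece.contains(x):
                return piece.value(x)
        return np.inf

    # f(x1, x2) = |x1| + |x2| encoded with one linear piece per quadrant.
    Z = np.zeros((2, 2))
    pieces = [
        PLQPiece(Z, [ 1,  1], 0.0, [[-1, 0], [0, -1]], [0, 0]),  # x1 >= 0, x2 >= 0
        PLQPiece(Z, [-1,  1], 0.0, [[ 1, 0], [0, -1]], [0, 0]),  # x1 <= 0, x2 >= 0
        PLQPiece(Z, [-1, -1], 0.0, [[ 1, 0], [0,  1]], [0, 0]),  # x1 <= 0, x2 <= 0
        PLQPiece(Z, [ 1, -1], 0.0, [[-1, 0], [0,  1]], [0, 0]),  # x1 >= 0, x2 <= 0
    ]
    print(plq_eval(pieces, np.array([0.5, -2.0])))  # prints 2.5

The face-constraint adjacency and half-edge representations of Chapter 5 add, on top of such a list of pieces, the adjacency information between partitions that the conjugate algorithm needs.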
By using the algorithm discussed in Chapter 5,we generate some examples of conjugate and Moreau envelope computation.These examples are given in Chapter 6. Chapter 7 concludes the thesis.4Chapter 2Convex AnalysisPreliminariesWe start with the notations we will use in the present thesis. We applythe standard arithmetic used in Convex Analysis as described in [RW98], and[BC11]. We note R ? {+?} is the set of extended real numbers. The innerproduct of two column vectors x, y ? Rn is expressed by either ?x, y? or xT y.The notation xT denotes the transpose of x i.e. the row vector [x1, . . . , xn].We use ?x? to denote the Euclidean norm of any vector x ? Rn.Definition 2.1. [Open set, closed set] A set S is open if for any x ? S thereexists  > 0 such that B(x) ? S, where B(x) = {y : ?y ? x? < }.A set S is closed if for any sequence of points x1, x2, . . . such that xi ? Sand limi??xi = x we have x ? S.Definition 2.2. [Interior of a set, closure of a set] The interior of a set S isthe largest open set inside S and is denoted by Int(S).The closure of a set S is the smallest closed set containing S. It isexpressed by cl(S).Definition 2.3. [Convex set, convex hull, affine set, affine hull] A set S isa convex set if given any two points x1, x2 ? S, and any ? ? [0, 1] we have?x1 + (1? ?)x2 ? S. The convex hull of a set S ? Rn is the smallest convexset containing S. We use coS to denote it.A set S is an affine set if given any two points x1, x2 ? S, and any ? ? Rwe have ?x1 + (1? ?)x2 ? S. The affine hull of a set S ? Rn is the smallestaffine set containing S. We denote it by aff(S).Definition 2.4. [Open ball] The open ball is defined byB(x) = {y : ?x? y? < },where x, y ? Rn and  ? R+.5Chapter 2. Convex Analysis PreliminariesDefinition 2.5. [Relative interior] The relative interior of a set S ? Rn isthe interior relative to the affine hull of S. We denote it by riS, i.e.riS = {y ? S : ? > 0, B(y) ? aff(S) ? S}.Definition 2.6. [Half-space] Given a ? Rn, a 6= 0 and b ? R the half-spaceHab is defined asHab = {x : ?a, x? ? b}.Definition 2.7. [Hyperplane] Given a ? Rn, a 6= 0 and b ? R the hyper-plane hab is defined ashab = {x : ?a, x? = b}.Definition 2.8. [Polyhedral set] A polyhedral set S ? Rn is a set which canbe expressed as an intersection of some finite collection of closed half-spaces.In other words, it can be specified by finitely many linear constraints, i.e.,constraints fi(x) ? 0 or fi(x) = 0 where fi is a affine function of x ? Rnand i = {1, 2 . . .m}.Definition 2.9. [Dimension of the affine hull of a set] Let S be a non-emptyset. The linear space by S is linS = {?x + ?y : x, y ? S and?, ? ? R} andhas dimension n = dim(linS). For any ?, ? ? R with ? + ? = 1, the affinespace generated by S is aff(S) = {?x + ?y : x, y ? S} and has dimensiondim(aff S) = dim(linS).Definition 2.10. [Dimension of a set] The dimension of a set S ? Rn is thedimension of aff(S). It is denoted by dim(S). If dim(S) = n, then S is saidto be full dimensional.Definition 2.11. [Set difference, set complement] Assume A and B are twosets. The set difference of A and B is denoted by A ? B where A ? B ={x ? A : x /? B}.Let S ? U be a set. The complement of S with respect to U is denotedby Sc with Sc = U ? S.6Chapter 2. Convex Analysis PreliminariesDefinition 2.12. [Edge, vertex, face] An edge is defined by the set E ={x ? Rn : x = ?1x1 + ?2x2, ?1 + ?2 = 1} with x1, x2 ? Rn and x1 6= x2. Itbecomes a segment when ?1, ?2 ? 0. An edge is called a ray if ?1 ? 0 but?2 ? R, or a line when ?1, ?2 ? 
R.A vertex is a starting point of a ray or it is one of the end points of asegment or it is an isolated point.A face is a polyhedral set with nonempty interior.Definition 2.13. [Indicator function] For any set S ? Rn, IS denotes theindicator function, i.e.IS(x) ={0, if x ? S;? otherwise.Definition 2.14. [Effective domain] The effective domain of a function fis the set of all points where f takes a finite value. It is denoted by dom f .Definition 2.15. [Proper function] Let f : Rn ? R ? {??,+?} be afunction. It is proper if it has nonempty domain and is greater than ??everywhere.Definition 2.16. [Epigraph] The epigraph of a function f : Rn ? R ?{??,+?} is defined asepi(f) = {(x, ?) : ? ? f(x)}.It is a set in Rn+1.Definition 2.17. [Convex function] Let f : Rn ? R ? {+?} be a properfunction. Then it is convex iff((1? ?)x1 + ?x2) ? (1? ?)f(x1) + ?f(x2),for all x1, x2 ? dom(f), ? ? [0, 1].Definition 2.18. [Strictly convex function] A function f : Rn ? ?{??,+?}is strictly convex iff((1? ?)x1 + ?x2) < (1? ?)f(x1) + ?f(x2),for all x1, x2 ? dom(f), x1 6= x2, ? ? ]0, 1[.7Chapter 2. Convex Analysis PreliminariesDefinition 2.19. [Concave function] A function f is said to be a concavefunction if the function ?f is convex.Definition 2.20. [Saddle function] Let f : Rm+n ? R ? {??,+?} bea function. It is called a convex-concave function if f(x1, x2) is a convexfunction of x1 ? Rm for each x2 ? Rn and a concave function of x2 ? Rn foreach x1 ? Rm. Concave-convex functions are defined similarly. Both kindsof functions are called saddle functions.Definition 2.21. [Additively separable function, additively non-separablefunction] Let f : Rn ? R ? {??,+?} be a function of n variables. It iscalled an additively separable function if it can be written as f(x1, . . . , xn) =f(x1)+ . . .+f(xn) for some single variable functions f(x1), . . . , f(xn). Oth-erwise f is an additively non-separable function.Definition 2.22. [Convex subdifferential] Let f : Rn ? R ? {+?} be aconvex function. The convex subdifferential of f at x ? Rn is defined by?f(x) = {s : f(y) ? f(x) + ?s, y ? x?, ?y ? Rn}.Definition 2.23. [Partial Subdifferential] Let f : Rm+n ? R ? {+?} be aproper convex function. The partial subdifferential of f(?, x2) at x1 ? Rm isdefined by?1f(x1, x2) = {s : f(x1?, x2) ? f(x1, x2) + ?s, x1? ? x1?,?x1? ? Rm},where x2 ? Rn, s ? Rm.Definition 2.24. [Subdifferential of saddle function] Let f : Rm+n ? R ?{??,+?} be a concave-convex function. We define ?1f(x1, x2) to be theset of all subgradients of the concave function f(?, x2) at x1, i.e.?1f(x1, x2) = {s1 : f(x1?, x2) ? f(x1, x2) + ?s1, x1? ? x1?, ?x1? ? Rm},where x1 ? Rm, x2 ? Rn and s1 ? Rm. Similarly we define ?2f(x1, x2) tobe the set of all subgradients of the convex function f(x1, ?) at x2, i.e.?2f(x1, x2) = {s2 : f(x1, x?2) ? f(x1, x2) + ?s2, x2? ? x2?,?x2? ? Rn},where s2 ? Rn. Then, the subdifferential of the saddle function f at (x1, x2)is defined as?f(x1, x2) = ?1f(x1, x2)? ?2f(x1, x2).8Chapter 2. Convex Analysis PreliminariesDefinition 2.25. [Fenchel conjugate (also named Legendre-Fenchel trans-form, Young-Fenchel transform, Legendre-Fenchel conjugate, or convex con-jugate)] Let f : Rn ? R? {+?} be a function. The Fenchel conjugate of fis defined byf?(s) = supx?Rn[?s, x? ? f(x)],where s ? Rn.Definition 2.26. [Moreau envelope (or Moreau-Yosida approximation)] Given? > 0 the Moreau envelope of a function f : Rn ? R ? {+?} is defined ase?f(s) = infx?Rn[f(x) +12??x? s?2],where s ? Rn.Proposition 2.27. ([Luc06, Proposition 3]) Let f : Rn ? R ? 
{+?} be afunction and assume ? > 0. Thene?f(s) =?s?22??1?g??(s),where g?(x) =?x?22 + ?f(x).Proof.e?f(s) = infx?Rn[f(x) +12??x? s?2]=?s?22?+ infx?Rn[?1??s, x?+?x?22?+ f(x)]=?s?22?+ infx?Rn[?1??s, x?+1?(?x?22+ ?f(x))]=?s?22?+ infx?Rn[?1?(?s, x? ? g?(s))]=?s?22??1?supx?Rn[?s, x? ? g?(x)]=?s?22??1?g??(s).9Chapter 2. Convex Analysis PreliminariesDefinition 2.28. [Partial conjugate] Assume f : Rn ? R ? {+?} is afunction of n variables. The partial conjugate of f is defined as taking theconjugate with respect to a single variable while the other variables remainconstant i.e.f?i(x1, . . . , xi?1, si, xi+1, . . . , xn) = supxi?R[sixi ? f(x1, . . . , xn)] .Fact 2.29. ([RW98, Proposition 11.48]) Assume f : Rm+n ? R? {+?} isa proper convex function. Then,(s1, s2) ? ?f(x1, x2) ??{x1 ? ?1f?1(s1, x2),s2 ? ?2(?f?1)(s1, x2).Definition 2.30. [Piecewise linear-quadratic(PLQ) function] Let f : Rn ?R ? {+?} be a function. Then, f is called a piecewise linear-quadratic(PLQ) function if dom f can be represented as a union of finitely manypolyhedral sets, relative to each of which f(x) is defined by an expressionof the form 12?x,Ax?+ ?a, x?+ ? for some scalar ? ? R, vector a ? Rn andsymmetric matrix A ? Rn?n.Definition 2.31. [Pieces of PLQ function] Let f : Rn ? R ? {+?} be aPLQ function. Assume dom f = ?ki=1Pk, where Pk are polyhedral sets. Wedefine f on each Pk by fk(x) = 12?x,Ax?+ ?a, x?+ ? so thatf(x) =???????????????f1(x), if x ? P1;f2(x), if x ? P1;??fk(x), if x ? Pk.Then the function fk : Pk ? R ? {+?} is a piece of f withfk(x) ={fk(x), if x ? Pk;+?, otherwise.Definition 2.32. [Lower semicontinuous (lsc) function] A proper functionf : Rn ? R ? {+?} is lower semicontinuous (lsc) at a point x? ? dom f if10Chapter 2. Convex Analysis Preliminariesfor every  > 0 there exist a neighborhood U of x? such that f(x) ? f(x?)? for all x ? U .A function is lower semicontinuous if it is lower semicontinuous at everypoint of its domain.Fact 2.33. (Conjugate of convex PLQ functions[RW98, Theorem 11.14])Assume f : Rn ? R ? {+?} is a proper lsc convex function. Then, f isPLQ if and only if f? is PLQ.Definition 2.34. (Proximal average of two PLQ functions) Let f1 : Rn ?R ? {+?} and f2 : Rn ? R ? {+?} be two PLQ functions. The proximalaverage of f1 and f2 for any ? ? [0, 1] is defined asP (f1, ?, f2) =[(1? ?)(f1 +12?x?2)?+ ?(f2 +12?x?2)?]??12?x?2.Definition 2.35. [Normal cone to convex set] The normal cone to a convexset C at x ? C is defined byNC(x) = {z : ?z, x? x? ? 0 for all x ? C}.The normal cone NC(y) = {0} if y /? C.Example 2.36. [Example of normal cones to a convex set] Consider therectangle as shown in Figure 2.1. The normal cone at the point x, NC(x) ={(0, 0)}. For the point y, the normal cone is NC(y) = {(x1, x2) : x1 ? 0, x2 =0}. The normal cone for the point z is NC(z) = {(x1, x2) : x1 ? 0, x2 ? 0}.Figure 2.1: Example of normal conesDefinition 2.37. [Inequality and equality parts of normal cone] A normalcone to a polyhedral set C = {x ? Rn : Ax = 0, Bx ? 0} at a point can be11Chapter 2. Convex Analysis Preliminariesdefined as a set of polyhedral inequalities. Assume a normal cone Nc(x) tothe polyhedral set C at a point x ? Rn is defined asNC(x) = {x ? Rn :[LELI]x ? 0},with LEx = 0 and LIx ? 0 where LE and LI are m1?n and m2?n matrices.Then LE is the equality part and LI is the inequality part of NC(x).Definition 2.38. (Active set [NW06, Definition 12.1], LICQ [NW06, Defi-nition 12.4]) Let a set C be defined byC ={x ? Rn????gi(x) ? 0 if i ? I;gi(x) = 0 if i ? E .}.The active set A(x) at any point x ? 
C is defined asA(x) = E ? {i ? I : gi(x) = 0}.The linear independence constraint qualification (LICQ) holds at x if theset of active constraint gradients {?gi(x), i ? A(x)} is linearly independent.Fact 2.39. ([NW06, Lemma 12.9]) Let a set C be defined byC ={x ? Rn????gi(x) ? 0 if i ? I;gi(x) = 0 if i ? E .},where gi ? C1 for i ? I ? E. Suppose that C is convex and x ? C. AssumeLICQ holds at x, thenNC(x) = {z : z =?i?A(x)?i?gi(x),with?i ? 0 for i ? A(x) ? I}.Definition 2.40 (Degree of a vertex in a graph). Let G = (V,E) be a non-empty graph. The degree of a vertex v ? V is the total number of edges inv. It is denoted by deg(v).Fact 2.41. ([Die12, Page 5]) Let G = (V,E) be a non-empty graph. Wehave?v?V deg(v) = 2nE, where nE is the total number of edges in G.Fact 2.42. (Euler?s formula [Ger07]) In a planar graph with n nodes, aarcs and r regions, we have n? a+ r = 2.12Chapter 3The Partial ConjugatesIn this chapter we recall Goebel?s proof [Goe00, Proposition 3, page 17]that the partial conjugates of convex PLQ functions are also PLQ.Definition 3.1. [Partial conjugate] Assume that f : Rm+n ? R? {+?} isa function of m+ n variables. The partial conjugate of f with respect to avariable x1 ? Rm is defined asf?1(s1, x2) = supx1[?s1, x1? ? f(x1, x2)] .Similarly the partial conjugate of f with respect to another variable x2 ? Rnis defined byf?2(x1, s2) = supx2[?s2, x2? ? f(x1, x2)] .Theorem 3.2. Let f : Rm+n ? R?{+?} be a proper convex PLQ functionof m+ n variables. Assume f?1 is the partial conjugate of f with respect tothe first m variables. Then f?1 is PLQ.The proof is found in [Goe00, Proposition 3, page 17]. For the sake ofcompleteness, we reproduce it here using our notations.Proof. Since x1 7? f?2(x1, s2) is concave and s2 7? f?2(x1, s2) is convex,f?2(x1, s2) is a saddle function. The convex parent of a saddle functionh(x, y) is defined as [Goe00, Formula 2.6, page 10]Cp(p, y) = supx[h(x, y)? ?p, x?] .By putting p = ?s1, y = s2, x = x1 and replacing f?2 with h, we haveCp(?s1, s2) = supx1[f?2(x1, s2) + ?s1, x1?].13Chapter 3. The Partial ConjugatesUsing the definition of f?2 we deduce thatCp(?s1, s2) = supx1[supx2{s2x2 ? f(x1, x2)}+ s1x1]= supx1,x2[s1x1 + s2x2 ? f(x1, x2)]= f?(s1, s2)Since f is PLQ if, and only if, f? is PLQ [Fact 2.33], we deduce Cp is PLQ.Consequently, there exists polyhedral sets Pk with domCp = ?sk=1Pkand for (?s1, s2) ? Pk, Cp is defined byCp(?s1, s2) =12?(?s1, s2), Ak(?s1, s2)?+ ?ak, (?s1, s2)?+ bk.Since Cp is PLQ, ?Cp is piecewise polyhedral [RW98, Proposition 12.30].Note that gph ?Cp is piecewise polyhedral [RW98, Example 9.57]. For any(?x1, x2) ? ?Cp(?s1, s2), we have (?s1, s2,?x1, x2) ? gph ?Cp.Note gph ?Cp = ?sl=1Dl where each Dl is a polyhedral set such that theimage of Dl under the projection (?s1, s2,?x1, x2) 7? (?s1, s2) is containedin some Pk. We have(?s1, s2,?x1, x2) ? gph ?Cp?? (?x1, x2) ? ?Cp(?s1, s2)??{?s1 ? ?1C?1p (?x1, s2)x2 ? ?2(C?1p )(?x1, s2)[Fact 2.29].For any saddle function h(x, y) on Rm ? Rn which is concave in x andconvex in y, the subdifferential is defined in [Roc70, page 374] as ?1h(x, y) =?xh(x, y). It is the set of all subgradients of the concave function h(?, y) atx, i.e. the set of all vectors x? ? Rm such thath(x?, y) ? h(x, y) + ?x?, x? ? x?, ?x? ? Rm.Similarly ?2h(x, y) = ?yh(x, y) is the set of all subgradients of the convexfunction h(x, ?) at y, i.e. the set of all vectors y? ? Rn such thath(x, y?) ? h(x, y) + ?y?, y? ? y?, ?y? ? Rn.The elements (x?, y?) of the set ?h(x, y) = ?1h(x, y)? 
?2h(x, y) are definedas the subgradients of h at (x, y).14Chapter 3. The Partial ConjugatesWe have? s1 ? ?1C?1p (?x1, s2)?? C?1p (?x?1, s2) ? C?1p (?x1, s2) + ??s1,?x?1 ? (?x1)?,?(?x?1)?? C?1p (?x?1, s2) ? ?C?1p (?x1, s2) + ?s1,?x?1 ? (?x1)?,?(?x?1)andx2 ? ?2(?C?1p )(?x1, s2)?? ? C?1p (?x1, s?2) ? ?C?1p (?x1, s2) + ?x2, s?2 ? s2?,?s?2Therefore, we obtain(s1, x2) ? ?(?C?1p )(?x1, s2)?? (?x1, s2, s1, x2) ? gph ?(?C?1p ).We noteM =????0 0 ?I 00 I 0 0I 0 0 00 0 0 I???? ,where I is the m ? m or n ? n identity matrix. We also note that M isinvertible withM?1 =????0 0 I 00 I 0 0?I 0 0 00 0 0 I???? .By applying M to both sides of (?x1, s2, s1, x2) ? gph ?(?C?1p ), we obtainM(?x1, s2, s1, x2) ?M gph ?(?C?1p )?? (?s1, s2,?x1, x2) ?M gph ?(?C?1p ).Therefore, we deducegph ?Cp = M gph ?(?C?1p )?? gph ?(?C?1p ) = M?1 gph ?Cp.Define the polyhedral set El = M?1Dl. We have gph ?(?C?1p ) = ?sl=1El.By applying M?1 to both sides of (?s1, s2,?x1, x2) ? Dl, we deduceM?1(?s1, s2,?x1, x2) ?M?1Dl?? (?x1, s2, s1, x2) ? El.15Chapter 3. The Partial ConjugatesLet Fl be the image of El under the projection (?x1, s2, s1, x2) 7? (?x1, s2).Now by [RW98, Proposition 3.55], the sets El and Fl are polyhedral, anddom ?(?C?1p ) = ?sl=1Fl. Since dom ?(?C?1p ) is dense in dom(?C?1p ) [Roc70,Theorem 37.4], we deduce that dom(?C?1p ) = ?sl=1Fl.It remains to show that ?C?1p is linear or quadratic on each Fl. To provethat ?C?1p is linear or quadratic on Fl it is necessary and sufficient to provethat ?C?1p is linear or quadratic relative to every line segment in Fl [RW98,Lemma 11.15].Pick any two points (?x01, s02), (?x11, s12) in Fl and any (s01, x02) ? ?(?C?1p )(?x01, s02),with (s11, x12) ? ?(?C?1p )(?x11, s12). We have(?x01, s02, s01, x02), (?x11, s12, s11, x12) ? El.Now we define for ? ? [0, 1],(?x?1 , s?2 , s?1 , x?2) = (1? ?)(?x01, s02, s01, x02) + ?(?x11, s12, s11, x12).Since polyhedral sets are convex, we deduce(?x?1 , s?2 , s?1 , x?2) ? El, ?? ? [0, 1] . (3.1)We have?C?1p (?x?1 , s?2) = ? sup?s1[??s1,?x?1? ? Cp(?s1, s?2)]= inf?s1[Cp(?s1, s?2) + ??s1, x?1?] .Define s?1 such that? x?1 ? ?1Cp(?s?1, s?2)?? Cp(?s1, s?2) ? Cp(?s?1, s?2) + ??x?1 ,?s1 + s?1?,?(?s1)?? Cp(?s1, s?2) + ??s1, x?1? ? Cp(?s?1, s?2)? ?s?1 , x?1?, ?(?s1)?? inf?s1[Cp(?s1, s?2) + ??s1, x?1?] ? Cp(?s?1, s?2)? ?s?1 , x?1?.By definitioninf?s1[Cp(?s1, s?2) + ??s1, x?1?] ? Cp(?s?1, s?2)? ?s?1 , x?1?.Therefore, we deduce thatinf?s1[Cp(?s1, s?2) + ??s1, x?1?] = Cp(?s?1, s?2)? ?s?1 , x?1?16Chapter 3. The Partial Conjugatesfor all s?1 such that ?x?1 ? ?1Cp(?s?1, s?2).Since (?x?1 , s?2 , s?1 , x?2) ? El, we have (?x?1 , s?2 , s?1 , x?2) ? gph ?(?C?1p ).From gph ?Cp = M gph ?(?C?1p ), we deduceM(?x?1 , s?2 , s?1 , x?2) ? gph ?Cp?? (?s?1 , s?2 ,?x?1 , x?2) ? gph ?Cp?? (?x?1 , x?2) ? ?Cp(?s?1 , s?2)=? ? x?1 ? ?1Cp(?s?1 , s?2).Consequently, we have?C?1p (?x?1 , s?2) = Cp(?s?1 , s?2) + ??s?1 , x?1?.By applying M to both sides of (3.1), we get?? ? [0, 1] ,M(?x?1 , s?2 , s?1 , x?2) ?MEl?? ?? ? [0, 1] , (?s?1 , s?2 ,?x?1 , x?2) ? Dl=? ?k,?? ? [0, 1] , (?s?1 , s?2) ? Pk.Therefore, we have?C?1p (?x?1 , s?2) =12?(?s?1 , s?2), Ak(?s?1 , s?2)?+?ak, (?s?1 , s?2)?+bk+??s?1 , x?1?.The later expression is linear or quadratic in ? , so the function?C?1p (?x?1 , s?2)is PLQ, which implies that ?C?1p is PLQ.In summary, we proved thatf is PLQ =? (f?)?1 is PLQ. (3.2)By Fact 2.33,f is PLQ ?? f? is PLQ. (3.3)From the previous two implications we derive,f? is PLQ =? (f?)?1 is PLQ.By applying the previous implication to g = f?, we deduceg is PLQ =? 
g?1 is PLQ,which concludes the proof.17Chapter 4Convex ParametricQuadratic ProgrammingIn this chapter, we recall recent results on parametric minimization ofconvex piecewise quadratic functions [PS11], and adapt them to computethe conjugate.4.1 Piecewise quadratic optimization problemDefinition 4.1. ([PS11, Definition 1]) Let ? = {Ck : k ? K} be a collectionof nonempty sets where K is a finite index set.1. If the union of all sets in ? is D ? Rd and the sets are mutually disjoint,then ? is a partition of D.2. If the members of ? are polyhedral sets and (i)?k?K Ck = D, (ii)dimCk = dimD for all k ? K, (iii) riCk? riCl = ?, for k, l ? K, k 6= l,then it is called a polyhedral decomposition of D ? Rd.3. If ? is a polyhedral decomposition and the intersection of any twomembers of ? is either empty or a common proper face of both, thenit is called a polyhedral subdivision.Example 4.2. [Polyhedral decomposition, polyhedral subdivision] Figure4.1 illustrates a collection of polyhedral set ?1 = {D1, D2, D3} such thatD1 ?D2 ?D3 = D. The collection ?1 is a polyhedral decomposition of D.But ? is not a polyhedral subdivision of D because the intersection of D1and D3 is not a common face of both. Similarly the intersection of D2 andD3 is not a common face of both too.Assume another collection of polyhedral sets ?2 = {S1, S2, S3} such thatS1 ? S2 ? S3 = S. Figure 4.2 shows this collection and it is a subdivision ofS.Definition 4.3. ([PS11, Definition 2]) A piecewise linear-quadratic function(PLQ) is a function f : Rn ? R?{+?} such that there exists a polyhedral184.1. Piecewise quadratic optimization problemFigure 4.1: Polyhedral decomposition of DFigure 4.2: Polyhedral subdivision of Sdecomposition ? = {Ck : k ? K} of dom f , f(x) = fk(x) if x ? Ck, k ? K,where fk(x) = (1/2)xTQkx+qkTx+?k with ?k ? R, qk ? Rn and Qk ? Rn?nis a symmetric matrix.One can always build a polyhedral decomposition from a polyhedralsubdivision. Without loss of generality it is assumed that the dimension ofdom f is n. Each Ck, k ? K is defined by Ck = {x ? Rn : Akx ? bk} withAk ? Rmk?n.Definition 4.4. ([PS11, Definition 3]) For any x ? dom f the index set ?(x)is defined by?(x) = {k ? K : x ? Ck}.Example 4.5. Figure 4.3 illustrates the domain of a PLQ function f withdom f = {C1, C2, C3, C4, C5} and a point x ? dom f . Here, ?(x) = {C2, C3}.Definition 4.6. ([PS11, Page 5]) Consider a non-empty polyhedral set C ={x ? Rn : Ax ? b} with b ? Rm. Let the collection of all non-empty polyhe-dral faces be denoted by F(C) and the collection of the relative interiors ofall non-empty faces of C be denoted by G(c), i.e. G(c) = {riF : F ? F(C)}.194.1. Piecewise quadratic optimization problemFigure 4.3: The index set of xAlso let B(C) = {I ? {1, . . . ,m} : ?x s.t. AI(x) = bI , AIc(x) < bIc}, whereIc is the complement of I.Consider the sets FI = {x ? Rn : AI(x) = bI , AIc(x) ? bIc} andGI = {x ? Rn : AI(x) = bI , AIc(x) < bIc}, for every index I ? B(C).Example 4.7. For example, consider a polyhedral set which is defined byC =???????(x1, x2) ? R2????????x1 ? 1?x2 ? 1?x1 ? 1x2 ? 1???????= [?1, 1]2.Therefore, C = {(x1, x2) ? R2 : A[x1x2]? b} where A =????1 00 ?1?1 00 1???? and b =????1111????. Now the index set B(C) = {?, {1}, {2}, {3}, {4}, {1, 2}, {2, 3}, {3, 4}, {1, 4}}.For the index set I1 = ? ? B(C), we deduce Ic1 = {1, 2, 3, 4}. Now the setFI1 is defined byFI1 =???????x ? R2????????x1 ? 1?x2 ? 1?x1 ? 1x2 ? 1???????.204.1. Piecewise quadratic optimization problemThe set GI1 is defined asGI1 =???????x ? 
R2????????x1 < 1?x2 < 1?x1 < 1x2 < 1???????.For any index set I2 = {1} ? B(C), FI1 and GI1 are defined as follows.FI2 =???????x ? R2????????x1 = 1?x2 ? 1?x1 ? 1x2 ? 1???????.GI2 =???????x ? R2????????x1 = 1?x2 < 1?x1 < 1x2 < 1???????.Similarly we define FI and GI for I ? {{2}, {3}, {4}} too. For I6 = {1, 2} ?B(C), FI6 and GI6 is defined as follows.FI6 =???????x ? R2????????x1 = 1?x2 = 1?x1 ? 1x2 ? 1???????.GI6 =???????x ? R2????????x1 = 1?x2 = 1?x1 < 1x2 < 1???????.The sets FI and GI for I ? {{2, 3}, {3, 4}, {1, 4}} are defined similarly.Fact 4.8. ([PS11, Proposition 1]) Let C = {x ? Rn|Ax ? b} be a polyhedralset. Then,1. G(C) is a partition of C,2. F(C) = {FI |I ? B(C)},3. for every I ? B(C),GI = riFI ,4. for any I, I ? ? B(C), I ? I ? ? B(C),214.1. Piecewise quadratic optimization problem5. for any x1, x2 ? GI and I ? B(C), NC(x1) and NC(x2) are equal.The notation NC(FI) is used to denote the normal cone of C at any pointx ? riFI .Consider again Example 4.7. For this example, F(C) = {FI1 ,FI2 , . . . ,FI9}and therefore, G(C) = {riFI1 , riFI2 , . . . , riFI9}. By using the previousfact, we deduce that GI1 = riFI1 , GI2 = riFI2 ,. . .,GI9 = riFI9 . Thesets G(C) is a partition of C. Consider two index sets I6 = {1, 2} andI7 = {2, 3}. We deduce I6 ? I7 = {2} ? B(C). Consider another index setI2 = {1} ? B(C). Take two points x1 = (1, 0.5) and x2 = (1,?0.5) ? GI2 .Then NC(x1) = NC(x2).Definition 4.9. ([PS11, Page 5]) Consider a polyhedral decomposition ? ={Ck : k ? K} of D ? Rd. Assume F(?) = {F(Ck)}k?K , G(?) = {G(Ck)}k?Kand B(?) = {B(Ck)}k?K . For any x ? D, J (x) is defined byJ (x) = {G ? G(?)|x ? G}.Assume Y consists of all sets J ? G(?) such that J = J (x) for some x ? D.Let GJ =?G?J G and DJ = clGJ , for any J ? Y.Fact 4.10. ([PS11, Proposition 2]) Let ? = {Ck : k ? K} be a polyhedraldecomposition of D ? Rd. Then,1. G = {GJ : J ? Y} is a partition of D,2. ?(x1) = ?(x2), for any x1, x2 ? GJ where J ? Y,3. for any J ? Y, there exists a unique Ik(J ) ? B(Ck) where k ? ?(J )such that GJ =?k??(J ) GIk(J ) and DJ =?k??(J )FIk(J ).Example 4.11. Consider a polyhedral decomposition ? = {C1, C2, C3, C4}of D. Assume C1, C2, C3 and C4 are defined byC1 ={(x1, x2) ? R2?????x1 ? 0?x2 ? 0},C2 ={(x1, x2) ? R2????x1 ? 0?x2 ? 0},C3 ={(x1, x2) ? R2????x1 ? 0x2 ? 0},224.1. Piecewise quadratic optimization problemC4 ={(x1, x2) ? R2?????x1 ? 0x2 ? 0}.Figure 4.4 illustrates this polyhedral decomposition. Now B(C1), B(C2),B(C3) and B(C4) are defined asB(C1) = {?1, {11}, {21}, {11, 21}},B(C2) = {?2, {12}, {22}, {12, 22}},B(C3) = {?3, {13}, {23}, {13, 23}},B(C4) = {?4, {14}, {24}, {14, 24}}.Here superscript 1,2,3, or 4 is used to denote whether we are consideringC1, C2, C3, or C4. For example, for any I ? B(C1), we use superscript 1 ineach element of I. Now consider the polyhedral set C1. We could write F?1asF?1 ={(x1, x2) ? R2 : ?x1 ? 0,?x2 ? 0}.We could compute G?1 asG?1 ={(x1, x2) ? R2 : ?x1 < 0,?x2 < 0}.Similarly we could write F{11} asF{11} ={(x1, x2) ? R2 : ?x1 = 0,?x2 ? 0}.We could compute G{11} asG{11} ={(x1, x2) ? R2 : ?x1 = 0,?x2 < 0}.We could write F{21} asF{21} ={(x1, x2) ? R2 : ?x1 ? 0,?x2 = 0}.We could compute G{21} asG{21} ={(x1, x2) ? R2 : ?x1 < 0,?x2 = 0}.The set F{11,21} could be computed asF{11,21} ={(x1, x2) ? R2 : ?x1 = 0,?x2 = 0}.The set G{11,21} is computed asG{11,21} ={(x1, x2) ? R2 : ?x1 = 0,?x2 = 0}.234.1. 
Piecewise quadratic optimization problemTherefore, the set F(C1) could be written asF(C1) = {F?1 ,F{11},F{21},F{11,21}},andG(C1) = {G?1 ,G{11},G{21},G{11,21}}.Now consider another polyhedral set C2. For C2 we could computeF?2 ={(x1, x2) ? R2 : x1 ? 0,?x2 ? 0},G?2 ={(x1, x2) ? R2 : x1 < 0,?x2 < 0},F{12} ={(x1, x2) ? R2 : x1 = 0,?x2 ? 0},G{12} ={(x1, x2) ? R2 : x1 = 0,?x2 < 0},F{22} ={(x1, x2) ? R2 : x1 ? 0,?x2 = 0},G{22} ={(x1, x2) ? R2 : x1 < 0,?x2 = 0},F{12,22} ={(x1, x2) ? R2 : x1 = 0,?x2 = 0},andG{12,22} ={(x1, x2) ? R2 : x1 = 0,?x2 = 0}.The set F(C2) could be computed asF(C2) = {F?2 ,F{12},F{22},F{12,22}},andG(C2) = {G?2 ,G{12},G{22},G{12,22}}.Consider the polyhedral set C3. We haveF?3 ={(x1, x2) ? R2 : x1 ? 0, x2 ? 0},G?3 ={(x1, x2) ? R2 : x1 < 0, x2 < 0},F{13} ={(x1, x2) ? R2 : x1 = 0, x2 ? 0},244.1. Piecewise quadratic optimization problemG{13} ={(x1, x2) ? R2 : x1 = 0, x2 < 0},F{23} ={(x1, x2) ? R2 : x1 ? 0, x2 = 0},G{23} ={(x1, x2) ? R2 : x1 < 0, x2 = 0},F{13,23} ={(x1, x2) ? R2 : x1 = 0, x2 = 0},G{13,23} ={(x1, x2) ? R2 : x1 = 0, x2 = 0},F(C3) = {F?3 ,F{13},F{23},F{13,23}},andG(C3) = {G?3 ,G{13},G{23},G{13,23}}.Finally, consider the remaining polyhedral set C4. We computeF?4 ={(x1, x2) ? R2 : ?x1 ? 0, x2 ? 0},G?4 ={(x1, x2) ? R2 : ?x1 < 0, x2 < 0},F{14} ={(x1, x2) ? R2 : ?x1 = 0, x2 ? 0},G{14} ={(x1, x2) ? R2 : ?x1 = 0, x2 < 0},F{24} ={(x1, x2) ? R2 : ?x1 ? 0, x2 = 0},G{24} ={(x1, x2) ? R2 : ?x1 < 0, x2 = 0},F{14,24} ={(x1, x2) ? R2 : ?x1 = 0, x2 = 0},G{14,24} ={(x1, x2) ? R2 : ?x1 = 0, x2 = 0},F(C4) = {F?4 ,F{14},F{24},F{14,24}},254.2. Parametric piecewise quadratic optimization problemsandG(C4) = {G?4 ,G{14},G{24},G{14,24}}.In this example, we have F{11} = F{12}, F{21} = F{24}, F{22} = F{23},F{13} = F{14} and F{11,21} = F{12,22} = F{13,23} = F{14,24}. We also haveG{11} = G{12}, G{21} = G{24}, G{22} = G{23}, G{13} = G{14} and G{11,21} =G{12,22} = G{13,23} = G{14,24}. Now we could write F(?) asF(?) ={F?1 ,F?2 ,F?3 ,F?4 ,F{11},F{21},F{22},F{13},F{11,21}},and G(?) asG(?) ={G?1 ,G?2 ,G?3 ,G?4 ,G{11},G{21},G{22},G{13},G{11,21}}.Consider the point x1 of Figure 4.4, we have J (x1) = {G?2}. For the pointx2, we have J (x2) = {G{21}}. Now we could write Y asY ={{G?1}, {G?2}, {G?3}, {G?4}, {G{11}}, {G{21}}, {G{22}}, {G{13}}, {G{11,21}}}.Assume J1 = {G?1}, J2 = {G?2}, J3 = {G?3}, J4 = {G?4}, J5 = {G{11}},J6 = {G{21}}, J7 = {G{22}}, J8 = {G{13}}, J9 = {G{11,21}}. Figure 4.5illustrates GJ1 , GJ2 , GJ3 , GJ4 ,GJ5 , GJ6 , GJ7 , GJ8 , GJ9 . The set Y is apartition of D, where Y is defined byY = {GJ1 , GJ2 , GJ3 , GJ4 , GJ5 , GJ6 , GJ7 , GJ8 , GJ9}.Now consider Figure 4.6. It illustrates two points x3, x4 ? GJ6 . We have?(x3) = ?(x4) = {1, 4}. Consider again J6 ? Y. We have ?(J6) = {1, 4}.There exist unique I1(J6) = {21} and I4(J6) = {24} so that GJ6 = G{21} ?G{24} and DJ6 = F{21} ? F{24}. Figure 4.7 illustrates GJ6 and DJ6 .4.2 Parametric piecewise quadratic optimizationproblemsDefine V (p) = infx f(x, p) and X(p) = arg minx f(x, p), where f : Rn ?Rd ? R ? {+?} is a proper convex PLQ function s.t. f(x, p) = fk(x, p) if264.2. Parametric piecewise quadratic optimization problemsFigure 4.4: The polyhedral decomposition of D(x, p) ? Ck. Here,fk(x, p) =12[xp]T [Qk RkRk? Sk] [xp]+[qkrk]T [xp]+ ?k,Ck = {(x, p)|Akx ? Bkp + bk} and ? = {Ck|k ? K} is a polyhedral decom-position of domf .Fact 4.12. ([PS11, Proposition 5]) Let f : Rn?Rd ? R?{+?} be a properconvex PLQ function. Then,1. V is also a proper convex PLQ function,2. X is a polyhedral multifunction,3. 
if f is strictly convex then X is single valued and piecewise affine.4. domX is defined asdomX = {p : ?x, (x, p) ? dom f}?{p : ??k, xk s. t.?xfk(xk, p)+AkT?k = 0, ?k ? K}.4.2.1 Critical regionDefinition 4.13. (Critical region [PS11]). For any index set J ? Y, thecritical region RJ is defined asRJ = {p : ?x ? X(p) s. t.J (x, p) = J }. (4.1)274.2. Parametric piecewise quadratic optimization problemsFigure 4.5: The partitions of DFact 4.14. ([PS11, Page 8]) For an index set J ? Y, we haveXJ (p) ={x?y, (x, p) ? GIk(J ),(0, y)??fk(x, p) ? NCk(FIk(J )), k ? ?(J )}andXJ (p) ={x?y, (x, p) ? FIk(J ),(0, y)??fk(x, p) ? NCk(FIk(J )), k ? ?(J )}.Fact 4.15. ([PS11, Theorem 1(b)]) Assume RJ = clRJ . The closure ofthe critical region of the parametric quadratic program infx{fk(x, p) : (x, p) ?Ck} corresponding to the equality set Ik(J ) ? B(Ck), k ? ?(J ) is defined asRIk(J ) =?????p?(x, ?k),?xfk(x, p) +AkIk(J )T?k = 0, ?k ? 0AkIk(J )x = BkIk(J )p+ bkIk(J )AkIck(J )x ? BkIck(J )p+ bkIck(J )?????. (4.2)284.2. Parametric piecewise quadratic optimization problemsFigure 4.6: Illustration of ?(x3) = ?(x4)Figure 4.7: Illustration of GJ6 and DJ8For an index set J ? Y, RJ is defined byRJ =?k??(J )RIk(J ).4.2.2 Calculation of the solutionFor strictly convex functions, the solution is single valued and affineinside the closure of the critical regionRJ . This solution could be calculatedby solving the following quadratic program (QP)minx{fk(x, p) : AkIk(J )x = BkIk(J )p+ bkIk(J )},for any k ? ?(J ). If the linear independence constraint qualification (LICQ)holds for k ? ?(J ), then we could calculate the Lagrange multipliers from294.2. Parametric piecewise quadratic optimization problems?xfk(x, p) +AkIk(J )T?k = 0, and consequently we obtain RIk(J ). Then, weneed to calculate the intersection of RIk(J ) for all k ? ?(J ) to computethe closure of the critical region RJ . Computing the Moreau envelope is anexample of strictly convex pQP.For merely convex function i.e. the function is not strictly convex, calcu-lation of the critical regions could be done using the normal cone optimalitycondition. The single valued and piecewise expression of the solution is validin each of these critical regions and could be computed by solving a strictlyconvex parametric quadratic program. The rest of the chapter will focus onthese computations.4.2.3 Convex parametric quadratic program (pQP)Consider the optimization problem defined byV (p) = minx?Rnf(x, p), s.t. Ax ? Bp+ b, (4.3)wheref(x, p) =12xTQx+ pTRTx+ q,with p ? Rd. The vector x ? Rn is to be optimized for all values of p ? P ,where P ? Rd is a polyhedral set such that the minimum in (4.3) exists.Here R ? Rn?d, A ? Rq?n, b ? Rq?1, B ? Rq?d and Q ? Rn?n is asymmetric matrix . The parametric quadratic program is strictly convex ifQ is positive definite. Otherwise it is convex when Q ? 0. For Q = 0 we geta special type of parametric program, that is a parametric linear program(pLP). It is assumed that P is a full dimensional set.Definition 4.16. [Feasible set, optimal set] For a given p, the feasible setof the optimization problem (4.3) is defined asX (p) = {x ? Rn : Ax ? Bp+ b}.The optimal set of points of (4.3), for any given p, is defined byX ?(p) = {x ? Rn : Ax ? Bp+ b, f(x, p) = V ?(p)},Definition 4.17. [Active constraints, inactive constraints, active set, opti-mal active set] From a given p, assume x is a feasible point of (4.3) . Theactive constraints are the constraints that satisfy Aix = Bip + bi. Inactiveconstraints satisfy Aix < Bip+ bi.304.2. 
Parametric piecewise quadratic optimization problemsThe active set A(x, p) is defined as the set of indices of the active con-straints, i.e.,A(x, p) = {i ? 1, 2, . . . , q : Aix = Bip+ bi}.The optimal active set A?(p) is defined byA?(p) = {i ? 1, 2, . . . , q : i ? A(x, p), ?x ? X ?(p)} =?x?X ?(p)A(x, p).It is the set of indices of the constraints that are active for all x ? X?(p).Definition 4.18. [Critical region of a convex parametric quadratic program]For a give index set A ? {1, 2, . . . , q}, the critical region of (4.3) associatedwith A is defined asPA = {p ? P : A?(p) = A}.This is the set of parameters for which the optimal active set is equal to A.Fact 4.19. ([STJ07, Theorem 2.1]) Consider the pQP defined in (4.3).1. There exists a piecewise affine minimizer function x : P ? Rn wherep 7? x(p) ? X ?(p) and it is defined on a finite set of full dimensionalpolyhedral sets R = {R1, R2, . . . , RK} with P =?Kk=1Rk, IntRi ?IntRj = ? for all i 6= j and x(p) defined on Rk is affine for all k ? K.2. The function V : P ? R is continuous and PLQ such that it is linearor quadratic on the polyhedral set Rk for all k ? {1, 2, . . . ,K} .Fact 4.20. (Normal cone optimality condition [RW98, Theorem 6.12]) Letf : Rn ? R ? {+?} be a function defined on a closed convex set ? ? Rn.If x is a local minimizer of f in ?, then??xf(x) ? N?(x).If f is convex, then x is a global minimizer.Assume ? is a polyhedral set which is defined by? = {x : Ax ? b},314.2. Parametric piecewise quadratic optimization problemswhere A =??????A1A2??An??????and b =??????b1b2??bn??????. The normal cone N?(x) to the polyhedralset ? at any point x ? ? is defined byN?(x) = {y : LIy ? 0, LEy = 0},where LI , LE are matrices representing the inequality and equality parts ofthe normal cone. We assume A = {i : Aix = bi}. Therefore, for any i /? Awe have Aix < bi. The notation C?(A) is introduced as followC?(A) = {y : LIy ? 0, LEy = 0} = N?(x).By using Fact 4.20, we deduce ??xf(x) ? {y : LIy ? 0, LEy = 0}.Thus, we haveLI?xf(x) ? 0, LE?xf(x) = 0.Now we define the normal cone optimality condition explicitly for (4.3) asLAI (Qx+Rp+ q) ? 0 and LAE(Qx+Rp+ q) = 0,where LAI , LAE matrixes represent the inequality and equality parts of thenormal cone matrix associated with A.Fact 4.21. ([STJ07, Lemma 4.1]) Consider the optimization problem (4.3)for a given p. Let A?(p) be the optimal active set and NX (p)(A?(p)) be thecorresponding normal cone. Then, we have?(Qx? +Rp+ q) ? NX (p)(A?(p)),for any optimal solution x? ? X ?(p).Fact 4.22. ([STJ07, Lemma 4.2]) Consider the optimization problem (4.3).For any optimal active set A, there exists an associated optimal set mappingXA(p) : PA ? 2Rnwhich is defined byXA(p) = {x ? Rn : LAE(Qx+Rp+ q) = 0, LAI (Qx+Rp+ q) ? 0,AAx = BAp+ bA, ANx ? BN p+ bN },where N = {1, 2, . . . , q} ? A, LAE and LAI are the normal cone equality andinequality matrices associated with the optimal active set A.324.2. Parametric piecewise quadratic optimization problemsWe assume XA : clPA ? 2Rndenotes the extension of the mappingXA(p) that is defined on the closure of PA.Fact 4.23. ([STJ07, Lemma 4.3])The minimizer function y? : cl(PA) ?Rn of the following strictly convex pQP is unique,continuous and piecewiseaffine.miny?Rn12yT y, where y ? XA(p). (4.4)We could write (4.4) asminy?Rn12yT y, where A?y ? B?p+ b?, p ? cl(PA). (4.5)We let t denote the number of constraints in (4.5). 
The parametric quadraticprogram defined in (4.5) is a function of the optimal active set A of (4.3).Consequently the optimizer y?(p) and the optimal active set for (4.5) arefunctions of A. We use B?(p) to denote the optimal active set for (4.5).Fact 4.24. ([STJ07, Lemma 4.4]) Given the optimal active sets A for (4.3)and B for (4.5), assume L?BE and L?BE are the equality and inequality part ofthe normal cone to {y ? Rn : A?y ? B?p + a?} defined by B. Assume thefunction X ?(p) is continuous on P . Let the function yA,B(?) be the uniquesolution to the systems of equationsL?BEy = 0, A?By = B?Bp+ b?B. (4.6)Then yA,B(?) is optimal for (4.3) and (4.5) when it is restricted toRA,B = cl{p ? P : A = A?(p),B = B?(p)}= {p ? P : A?N yA,B(p) ? B?N p+ b?N , L?BIyA,B(p) ? 0},where N = {1, 2, . . . , t} ? B.Definition 4.25. ([STJ07]) We define the mapping x : P ? Rn in Fact4.19 asx(p) = yA,B(p), if p ? RA,B.The set of polyhedral sets on which x(?) is defined is represented byR = {RA,B : dim(P ?RA,B) = s}.334.3. Adaptation of pQP in computational convex analysis4.3 Adaptation of pQP in computational convexanalysisWe adapt pQP for computing the conjugate of bivariate PLQ functions.Definition 4.26. (parametric quadratic optimization problem) We defineV(p) = infx ?(x, p) and X(p) = arg minx ?(x, p), where ? : Rn ? Rd ?R ? {+?} is a proper PLQ function s.t. ?(x, p) = ?k(x, p) if (x, p) ? Ck.Here,?k(x, p) =12[xp]T [Qk RkRk? Sk] [xp]+[qkrk]T [xp]+ ?k,Ck = {(x, p) : Akx ? Bkp + bk} and ? = {Ck : k ? K}. The set ? is apolyhedral decomposition of dom?. Now we could write ?k(x, p) as?k(x, p) =12xTQkx+12pTSkp+12pTRTk x+12pTRkx+ qTk x+ rTk p+ ?k.Let f : R2 ? R ? {+?} be a convex bivariate PLQ function suchthat there exists a polyhedral decomposition P = {Ck : k ? K} of dom f ,f(x) = fk(x) if x ? Ck, k ? K, where fk(x) = (1/2)x?Qkx + qk ?x + ?k with?k ? R, qk ? R2 and Qk ? R2?2 is a symmetric matrix. Assume for anyk ? K, Ck = {x ? Rn : Akx ? bk}.4.3.1 Conjugate computationModelingDefine a function g : R2 ? R2 ? R ? {+?} so that g(x, s) = gk(x, s)when x ? Ck and gk(x, s) is defined bygk(x, s) = fk(x)? ?s, x?.Now the conjugate of f could be written asf?(s) = supx[?g(x, s)]= ? infx[g(x, s)].Assume V (s) = infx[g(x, s)]. Therefore, f?(s) = ?V (s). Note that g is notjointly convex. So we could not apply convex pQP results directly.To describe the modelling more precisely, we note x =[x1x2]and s =[s1s2].We also assume that fk(x1, x2) = akx21 + bkx22 + ckx1x2 + dkx1 + ekx2 + ?k344.3. Adaptation of pQP in computational convex analysisfor all k ? K. Therefore, gk(x1, x2; s1, s2) = fk(x1, x2) ? s1x1 ? s2x2. Nowwe define gk for any k ? K asgk(x1, x2; s1, s2) =12[x1x2]T [2ak cc 2bk] [x1x2]+[s1s2]T [?1 00 ?1]T [x1x2]+[de]T [x1x2]+ ?k.For any k ? K, we assume Ak =??????l1k m1kl2k m2k??lnk mnk??????, Bk =??????0 00 0??0 0??????, bk =??????r1kr2k??rnk??????.Then Ck could be expressed asCk ={(x1, x2, s1, s2) : Ak[x1x2]? Bk[s1s2]+ bk}. (4.7)If we look at the decomposition of dom g, we see that we have used thematrix Bk =??????0 00 0??0 0??????in (4.7) to define each Ck from a set of (x1, x2) to(x1, x2, s1, s2). Thus, in our problem model, considering the index sets fordom g means considering the index sets for dom f . Therefore, for any indexset J ? Y, the definition of the critical region which is defined in (4.13)becomes RJ = {s : ?x ? X(s) s. t.J (x) = J }.Note that gk(x, s) is not convex. We have gk(x, s) = fk(x)? ?s, x?. Wedefine f?k(x) = fk(x) + ICk(x). We assume g?k(x, s) = fk(x) + ICk(x)? 
?s, x?so that g(x, s) = g?k(x, s) when x ? Ck. The conjugate of f could be writtenasf?(s) = ? infxg(x, s).Now we writeV (s) = infxg(x, s), X(s) = arg minxg(x, s). (4.8)354.3. Adaptation of pQP in computational convex analysisTheorem 4.27. For any index set J ? Y of the optimization problem de-fined in (4.8), the critical region could be written asRJ =?k??(J )?????s?(x, ?k), 0 = ?xgk(x, s) +AkIk(J )?k, ?k ? 0AkIk(J )x = bkIk(J )AkIck(J )x < bkIck(J )?????.Proof. For any index set J ? Y, the optimality condition for the problemdefined in (4.8) is0 ? ?xfk(x) + ?ICk(x)? s, ?k ? ?(J ). (4.9)We have?ICk(x) = {v : ICk(x) ? ICk(x) + ?v, x? x?, ?x ? Ck}.Therefore, for any x ? Ck we have?ICk(x) = {v : ICk(x) ? ?v, x? x?,?x ? Ck}= {v : 0 ? ?v, x? x?, ?x ? Ck}= NCk(x).Therefore the optimality condition in (4.9) becomes0 ? ?xgk(x, s) +NCk(x), ?k ? ?(J ). (4.10)We have RJ = {s : ?x ? X(s) s. t.J (x) = J }. From Definition 4.6 weknow that, J (x) = J if and only if x ? GJ . Moreover, for any x ? GJ wehave x ? X(s) if and only if (4.10) holds. Therefore, we haveRJ ={s?????x s. t. x ? GJ ,0 ? ?xgk(x, s) +NCk(x), k ? ?(J )}.Since GJ =?k??(J ) GIk(J ) [Definition 4.9], for any x ? GJ we have x ?GIk(J ) for all k ? ?(J ). We know that GIk(J ) = riFIk(J ). For any x ?riFIk(J ), we have NC(x) = NC(FIk(J )) [Fact 4.8]. Thus, we haveRJ ={s?????x s. t. x ? GJ ,0 ? ?xgk(x, s) +NC(FIk(J )), k ? ?(J )}.Since GJ =?k??(J ) riFIk(J ), we deduceRJ =?k??(J ){s?????x s. t. x ? riFIk(J ),0 ? ?xgk(x, s) +NC(FIk(J ))}.364.3. Adaptation of pQP in computational convex analysisFor any x ? riFIk(J ) = GIk(J ), we have AkIk(J )x = bkIk(J ) and AkIck(J )x <bkIck(J ). Now by using Fact 2.39, we obtainNC(FIk(J )) = AkIk(J )T?k,with?k ? 0.Therefore, we deduceRJ =?k??(J )?????s????????(x, ?k), 0 ? ?xgk(x, s) +AkIk(J )T?k, ?k ? 0AkIk(J )x = bkIk(J ),AkIck(J )x < bkIck(J )?????.Computation of the full dimensional critical regionsWe assume that dom f is defined on a polyhedral subdivision. Aftermodeling, we go through each of the index sets for dom g to compute the fulldimensional i.e., in our case two dimensional critical regions and associatedvalue functions. We explain this computation by using an example. Consideragain Example 4.11. Assume a PLQ function f is defined on the polyhedraldecomposition ? = {C1, C2, C3, C4} of D. Note that ? is also a subdivision.The index sets are J1,J2, . . . ,J9.We first assume g(x, s) is a strictly convex function with respect to x.Consider an index set J ? Y. For all k ? ?(J ), we need to solve thefollowing equations for computing the values of x and ?k.?xgk(x, s) +AkIk(J )T?k = 0 (4.11)AkIk(J )x = bkIk(J )(4.12)Assume the solution gives us the function XIk(J )(s) and ?Ik(J )(s). Usingthis solution we get RIk(J ) by using (4.2) asRIk(J ) ={s ? R2?Ik(J )(s) ? 0AkIck(J )XIk(J )(s) ? bkIck(J )}.Consequently, we calculate the closure of the critical region which is definedasRJ =?k?K(J )RIk(J ).374.3. Adaptation of pQP in computational convex analysisBy using x = XIk(J )(s) in gk(x, s) for any k ? ?(J ), we get the valuefunction VJ (s) which is defined on RJ .For example consider the index set J1 in Example 4.11. Now, ?(J1) ={1}. We need to solve the equations (4.11) and (4.12) for k = 1 to getXI1(J )(s) and ?I1(J )(s). Consequently we calculate RI1(J ). Then, we com-pute RJ1 asRJ1 =?k?{1}RIk(J1)=RI1(J1).By substituting x = XI1(J1)(s) in g1(x, s), we get the value function VJ1(s)which is defined on RJ1 . Similarly we compute RJ2 , VJ2(s), RJ3 , VJ3(s),RJ4 , VJ4(s).Consider the index set J5 in Example 4.11. 
Now, ?(J5) = {1, 2}. Weneed to solve equations (4.11) and (4.12) for k = 1 to compute XI1(J5)(s)and ?I1(J5)(s). Using XI1(J5)(s) and ?I1(J5)(s) we deduceRI1(J5). Similarlywe compute RI2(J5). Then, the computation of RJ5 is as followsRJ5 =?k?{1,2}RIk(J5)=RI1(J5) ?RI2(J5).By substituting x = XI1(J5)(s) in g1(x, s) or x = XI2(J5)(s) in g2(x, s), wededuce the value function VJ5(s) which is defined on RJ5 . Similarly wecompute RJ6 , VJ6(s), RJ7 , VJ7(s), RJ8 , VJ8(s).Now consider the index set J9 in Example 4.11. We have ?(J5) ={1, 2, 3, 4}. Therefore, we need to solve equations (4.11) and (4.12) fork ? {1, 2, 3, 4} to compute XI1(J9)(s), ?I1(J9)(s), XI2(J9)(s), ?I2(J9)(s),XI3(J9)(s), ?I3(J9)(s), XI4(J9)(s), ?I4(J9)(s). Consequently, we get RI1(J9),RI2(J9), RI3(J9), RI4(J9). Therefore, we deduce RJ9 as,RJ9 = RI1(J9) ?RI2(J9) ?RI3(J9) ?RI4(J9).To compute VJ9(s) which is defined onRJ5 , we use x = XI1(J9)(s) in g1(x, s)or x = XI2(J9)(s) in g2(x, s) or x = XI3(J9)(s) in g3(x, s) or x = XI4(J9)(s)in g4(x, s).Now we assume that g(x, s) is convex but it is not strictly convex withrespect to x. In this case, we need to consider the optimization problem384.3. Adaptation of pQP in computational convex analysiswhich is defined bymk(s) = infxgk(x, s) s.t. x ? Ck (4.13)as an optimization problem defined in (4.3). Assume the minimum existsin (4.13). Since we are considering that dim(dom f) = 2, the associatedelement with any index set J ? Y is a relative interior of a two dimensionalface, or a relative interior of an edge or a vertex. For any FIk(J ) ? Ckwhere k ? ?(J ), we get an optimal active set A of (4.13). Actually forany k ? ?(J ), the optimal active set A for FIk(J ) is Ik(J ). For example,consider the index set J5. For FI1(J ), we get the optimal active set A = {11}of (4.13) with k = 1 and we have I1(J5) = {11}.For solving the convex case (not strictly convex), we have to go throughevery index sets like the strictly convex case. Consider an index set J ? Y.For any k ? ?(J ), we need to solve (4.13) by applying Fact 4.22. To solveit, at first we need to solve the equationsLIk(J )E (Qkx+Rks+ qk) = 0 (4.14)andAkIk(J )x = bkIk(J )(4.15)to get x. Assume the solution gives us XIk(J )(s). Then, we apply x =XIk(J )(s) in the following inequalities to get RIk(J )LIk(J )I (Qkx+Rks+ qk) ? 0, (4.16)AkIck(J )x ? bkIck(J ). (4.17)We define RIk(J ) asRIk(J ) ={s ? R2?????LIk(J )I (QkXIk(J )(s) +Rks+ qk) ? 0,AkIck(J )XIk(J )(s) ? bkIck(J )}.Actually we have used the definition of optimal set mapping for anyoptimal active set A which is defined in Fact 4.22. This definition givesus the information about optimal set mapping and the critical region inwhich the set mapping is defined. As discussed earlier, an optimal active setA = Ik(J ) is associated with a set FIk(J ) which is a vertex or an edge or aface.394.3. Adaptation of pQP in computational convex analysisCase 1. If an optimal active set is associated with the set FIk(J ) whichis a face of a polyhedral region Ck ? R2, then the normal cone to Ck at anypoint x ? riFIk(J ) isNCk(FIk(J )) = {(0, 0)}. (4.18)Therefore we get the unique solution x ? R2 by using Equation (4.14) only.Since we are considering the relative interior of a face of a polyhedral region,there is no active constraint.Case 2. If an optimal active set is associated with the set FIk(J ) whichis an edge of a polyhedral region Ck ? R2, then the normal cone to Ck atany point x ? riFIk(J ) is defined as,NCk(FIk(J )) = {y ? R2 : LIk(J )E y = 0, LIk(J )I y ? 
0}.Since we are considering an edge, we have an active constraint. Therefore,we could calculate the unique solution x = XIk(J )(s) by solving equations(4.14) and (4.15).Case 3. Now assume an optimal active set is associated with a vertex vof a polyhedral region Ck ? R2. Since we are considering a vertex, we havetwo active constraints. Thus, we get the unique solution x = XIk(J )(s) bysolving Equation (4.15).For each of these three cases, we get the unique solution x = XIk(J )(s)after applying Fact 4.22. Since we are not getting a set of solution, we donot apply Fact 4.23 and Fact 4.24.Case 4. As a special case we may have Qk = 0 and the optimal activeset is associated with a face. In this case we get a set of solutions but thesolutions are defined on a critical region which is not full dimensional i.e.,two dimensional. It happens due to the following equation which we get byusing Qk = 0 in equation (4.14).LIk(J )E (Rks+ qk) = 0.Now we may proceed from here by applying facts 4.23, 4.24 and Definition4.25. Since the set of solutions are defined on a critical region which isnot full dimensional, by applying facts 4.23 and 4.24 we will get uniquesolutions defined on critical regions which are not full dimensional. SinceDefinition 4.25 defined the solution mapping only on the full dimensionali.e., two dimensional polyhedral regions, we actually do not need to applyfacts 4.23, 4.24 and Definition 4.25. We can give another explanation of notproceeding from here. That is, for computing the conjugate, it is enough forus if we can compute the value function defined on a critical region and the404.3. Adaptation of pQP in computational convex analysisvalue function can be computed if we need to consider an unique solutionor a set of solution. Similar scenario happens when Qk = 0 and the optimalactive set is associated with an edge.After computing XIk(J )(s) and RIk(J ) for all k ? ?(J ), we applyRJ =?k?K(J )RIk(J ) to compute RJ . Consequently, we deduce the valuefunction VJ (s) which is defined on RJ as we do for the strictly convex case.The computed conjugate could be written asf?(s) = ?VJ (s) where s ? RJ ,with J ? Y.4.3.2 Moreau envelope computationModelingDefine another function h : R2 ? R2 ? R ? {+?} so that h(x, s) =hk(x, s) when x ? Ck and hk(x, s) is defined byhk(x, s) = fk(x) +12??s? x?2,where ? > 0. Note that hk(x, s) is strictly convex. Now the Moreau envelopeof f could be written ase?f(s) = infxh(x, s).We have hk(x, s) = fk(x1, x2) + 12? [(x1 ? s1)2 + (x2 ? s2)2]. Therefore, weexpress hk ashk(x1, x2; s1, s2) =12[x1x2]T [2(ak + 12?) cc 2(bk + 12?)] [x1x2]+12[s1s2]T [ 1? 00 1?] [s1s2]+[s1s2]T [? 1? 00 ? 1?] [x1x2]+[de]T [x1x2]+ ?k.For any k ? K, Ck could be expressed similarly as expressed in Equation(4.7). We could writee?f(s) = infxh(x, s), X?(s) = arg minxh(x, s). (4.19)Now we apply Fact 4.15 for the optimization problem defined in (4.19).414.3. Adaptation of pQP in computational convex analysisComputation of full dimensional critical regionsThe computation of full dimensional critical regions could be done sim-ilarly to the way it is done for conjugate computation. Replace g with h.During conjugate computation, g may be strictly convex or only convex rel-ative to x. Since h is strictly convex relative to x, the computation is limitedto the strictly convex case only.4.3.3 Proximal average computationWe could also compute the proximal average of two convex bivariatePLQ functions. 
By using Definition 2.34 of the proximal average, we need tosolve three parametric quadratic programs for three conjugate computationas well as we have to compute a PLQ addition for computing the proximalaverage of two convex bivariate PLQ functions. But there is another way tocompute it by using another formulation of the proximal average. Considerthe following definition of the proximal average of two functions.Definition 4.28. [BGLW08, Definition 4.1] Let fi : R2 ? R ? {+?} befunctions where i ? {1, 2}. For ?i > 0 with?i?{1,2} ?i = 1 and for any? > 0, the proximal average of the functions fi is defined asp?(f, ?)(y) =1?[?12?y?2 + inf?i?{1,2} yi=y(?i(?fi(yi/?i) +12?yi/?i?2))].(4.20)By using the previous definition, we just need to solve a parametric quadraticprogram for computing the proximal average of two convex bivariate PLQfunctions. We haveinf?i?{1,2} yi=y(?i(?fi(yi/?i) +12?yi/?i?2))= infy1+y2=y(?1(?f1(y1/?1) +12?y1/?1?2)+ ?2(?f2(y2/?2) +12?y2/?2?2))= infy1(?1(?f1(y1/?1) +12?y1/?1?2)+ ?2(?f2((y ? y1)/?2) +12?(y ? y1)/?2?2))= infy1g(y1, y).Since (y ? y1)/?2 is an affine function and f2 is convex, f2((y ? y1)/?2) isa convex function [RW98, Exercise 2.20(a)]. Similarly f1(y1/?1) is a convexfunction also. Since ?y1/?1?2 and ?(y ? y1)/?2?2 are convex functions and424.3. Adaptation of pQP in computational convex analysisconvex functions are closed under addition and scalar multiplication, g(y1, y)is also a convex function. We define a parametric piecewise quadratic opti-mization problem asV (y) = infy1g(y1, y), X(y) = arg miny1g(y1, y). (4.21)Now we apply Fact 4.15 to solve the optimization problem defined in (4.21).Consequently we compute the proximal average defined in Equation (4.20).43Chapter 5AlgorithmIn this chapter we discuss the full algorithm based on parametric pro-gramming to compute the convex operators (conjugate and Moreau enve-lope) of any convex bivariate PLQ function.5.1 Data structureOur implementation in Scilab [Sci94] represents a convex bivariate PLQfunction using a list of data structures for taking input and giving out-put. But this data structure is converted to another data structure (face-constraint adjacency data structure) internally. We will discuss both of thesedata structures but we will focus mainly on the face-constraint adjacencydata structure to present the algorithm and compute its complexity.5.1.1 List representation of bivariate PLQ functionsRecall that each piece of a PLQ function f is a function defined on apolyhedral setCk = {x ? R2|Akx? bk ? 0},where Ak ? Rmk?2 and bk ? Rmk?1. On Ck, f is equal tofk(x) = (1/2)xTQkx+ qTk x+ ?k,where ?k ? R, qk ? R2 and Qk ? R2?2 is a symmetric matrix.Using these definitions, we store a piece of a bivariate PLQ function in astructure containing the constraint set Ck and function value fk. Thereforewe store the full bivariate PLQ function using a list of such structures.We store Ck and fk in mk ? 3 and 1 ? 6 matrixes respectively. Each row[c1, c2, c3] in Ck represents the constraint c1x1 + c2x2 + c3 ? 0. The row[fk1 , fk2 , fk3 , fk4 , fk5 , fk6 ] in fk stores the function fk(x1, x2) = fk1 x21 + fk2 x22 +fk3 x1x2 + fk4 x1 + fk5 x2 + fk6 .Example 5.1. For example, f(x1, x2) = |x1| + |x2| is represented as a listof four structures where each structure represents each individual piece.445.1. 
Data structureThe first piece is represented asC1 =[?1 0 00 ?1 0], f1 =[0 0 0 1 1 0],and similarlyC2 =[1 0 00 ?1 0], f2 =[0 0 0 ?1 1 0],C3 =[1 0 00 1 0], f3 =[0 0 0 ?1 ?1 0],C4 =[?1 0 00 1 0], f4 =[0 0 0 1 ?1 0].The structure of the first piece informs us that the function x1+x2 is definedon the polyhedral set {(x1, x2)|x1 ? 0, x2 ? 0}. The other pieces are definedsimilarly.5.1.2 Face-constraint adjacency representation of abivariate PLQ functionIn this representation a bivariate PLQ function is defined as (C, f,Ef C)where C is a nc ? 3 matrix (where nc is the number of distinct constraints)storing each constraint, c1x1 + c2x2 + c3 ? 0 as a row, [c1, c2, c3] in C. Wedefine f which is a nf ? 6 matrix where nf is the number of faces. Eachrow [fk1 , fk2 , fk3 , fk4 , fk5 , fk6 ] represents the associated quadratic function witheach face, fk(x1, x2) = fk1 x21 + fk2 x22 + fk3 x1x2 + fk4 x1 + fk5 x2 + fk6 . Theface-constraint adjacency is represented by Ef C which is a nf ? nC binarymatrix.Example 5.2. For example, f(x1, x2) = |x1|+ |x2| is represented asC =?????1 0 00 ?1 01 0 00 1 0???? , f =????0 0 0 1 1 00 0 0 ?1 1 00 0 0 ?1 ?1 00 0 0 1 ?1 0???? , Ef C =????1 1 0 00 1 1 00 0 1 11 0 0 1???? .From C we deduce there are four constraints. Each row in C represents aconstraint. For example, the second row [0,?1, 0] in C represents the con-straint with parameters c1 = 0, c2 = ?1 and c3 = 0 that is ?x2 ? 0. Sincethe matrix f has four rows, we deduce that there are four faces. The first455.1. Data structurerow of f indicates that the function with parameters f11 = 0, f12 = 0, f13 = 0,f14 = 1, f15 = 1 and f16 = 0 is associated with the first face. So x1 + x2 isthat associated function. The remaining rows of f stores other functionsassociated with other faces. The rows in Ef C indicates the face-constraintadjacency information for each of the four faces. The entry 1 in Ef C indi-cates adjacency while 0 indicates non-adjacency between corresponding faceand constraint. For example, the first row in Ef C is [1, 1, 0, 0] meaning thatthe boundary of the first face is defined by the first and second constraints.Similarly the fourth row of Ef C , [1, 0, 0, 1] indicates that the boundary ofthe fourth face is defined by the first and fourth constraints.5.1.3 The data structures to store lines, rays, segments andvertexesWe store the lines in the domain by using a nL ? 3 matrix ML which isdefined asML =????l11 l21 l31? ? ?? ? ?l1nL l2nL l3nL???? .Each row of ML represents a line. Consider the first row [l11, l21, l31] of ML.The row represents the line l11x1 + l21x2 + l31 = 0.Any ray is represented by using a line and a constraint. We use a nR?6matrix MR to represent nR rays in domain. The matrix MR is defined byMR =????r1l1 r1l2 r1l3 r1c1 r1c2 r1c3? ? ? ? ? ?? ? ? ? ? ?rnRl1 rnRl2 rnRl3 rnRc1 rnRc2 rnRc3???? .The first row [r1l1 , r1l2 , r1l3 , r1c1 , r1c2 , r1c3 ] of MR represents a ray which is definedby r1l1x1 + r1l2x2 + r1l3 = 0 and r1c1x1 + r1c2x2 + r1c3 ? 0. Similarly we definethe other rows.For nV vertexes in the domain, we use a nV ? 2 matrix MV to storethem. We define MV asMV =????v11 v21? ?? ?v1nV v2nV???? .465.1. Data structureEach row of MV represents a vertex. For example, the first row of MVrepresents the vertex with x1 = v11 and x2 = v21.A segment is defined by two vertexes. We store nS segments in thedomain by using a nS ? 2 matrix MS which is defined asMS =????r1v1 r1v2??rnSv1 rnSv2???? .Each entry in MS represents a reference to a vertex in MV . 
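To make the indexing concrete, a small Scilab helper along the following lines recovers the endpoint coordinates of the i-th segment; the name and calling convention are ours for illustration and not part of the implementation.

// Sketch (hypothetical helper): return the endpoints of segment i, where row i
// of MS holds two row indices into the vertex matrix MV.
function [v1, v2] = segmentEndpoints(MS, MV, i)
    v1 = MV(MS(i, 1), :);   // first endpoint as a 1 x 2 row [x1 x2]
    v2 = MV(MS(i, 2), :);   // second endpoint
endfunction

Storing row indices into MV instead of copying coordinates keeps every vertex in a single place, which is convenient when the same vertex bounds several segments.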
The row [r1v1 , r1v2 ]represents a segment between the two vertexes which are referenced by r1v1and r1v2 . Other segments are defined similarly.Example 5.3. For example we consider the domain of 1-norm, the functionf(x1, x2) = |x1|+ |x2|. It is illustrated in Figure 5.1. The domain has fourfaces, four rays and one vertex. We store the rays by using a 4 ? 6 matrixMR i.e.,MR =????1 0 0 0 ?1 00 1 0 0 1 01 0 0 0 1 00 1 0 ?1 0 0???? .The vertex is stored asMV =[0 0].Example 5.4. We consider the domain of indicator function I[?1,1]. It isillustrated in Figure 5.2. There are one face, four segments and four vertexesin the domain. The vertexes are stored asMV =????1 1?1 1?1 ?11 ?1???? .We store the segments asMS =????1 22 33 44 1???? .475.1. Data structureFigure 5.1: Domain of the function, f(x1, x2) = |x1|+ |x2|.5.1.4 The data structure to store entity-face andentity-constraint adjacency informationAssume we have nf faces, nL lines, nS segments, nR rays and nV ver-texes. We define a (nf + nL + nS + nR + nV )? nf binary matrix ME f tostore entity-face adjacency information. We use ei to denote any entity withi ? {1, 2, . . . , (nf + nL + nS + nR + nV )}. We denote the entities ei withi ? {1, 2, . . . , nf} are faces. Similarly ei with i ? {nf + 1, . . . , nf + nL} arelines. The segments are ei with i ? {nf + nL + 1, . . . , nf + nL + nS}. Therays are ei with i ? {nf +nL+nS +1, . . . , nf +nL+nS +nR}. Similarly thevertexes are ei where i ? {nf +nL+nS+nR+1, . . . , nf +nL+nS+nR+nV }.We define a (nf + nL + nS + nR + nV ) ? nf binary matrix ME f to storeentity-face adjacency information. The matrix ME f is defined byME f (i, j) ={1, if ei ? ej ;0 otherwise.where i ? {1, 2, . . . , nf + nL + nS + nR + nV } and j ? {1, 2, . . . , nf}.485.1. Data structureFigure 5.2: Domain of indicator function I[?1,1].Assume we have nC constraints. We use Ci with i ? {1, 2, . . . , nC} todenote a constraint. We define a (nf + nL + nS + nR + nV ) ? nC binarymatrix ME C to store the entity-constraint adjacency information which isdefined asME C(i, j) ={1, if ei is bounded by Cj ;0 otherwise.where i ? {1, 2, . . . , nf + nL + nS + nR + nV } and j ? {1, 2, . . . , nC}.Example 5.5. We again consider the domain of the function f(x1, x2) =|x1| + |x2| which is shown in Figure 5.1. There are nine entities in thedomain i.e., four faces, four rays and one vertex. We store computed entity-face adjacency information by using a 9 ? 4 binary matrix ME f definedasME f =??????????????1 0 0 00 1 0 00 0 1 00 0 0 11 1 0 00 1 1 00 0 1 11 0 0 11 1 1 1??????????????.495.1. Data structureThe collection of constraints is represented asMC =?????1 0 00 ?1 01 0 00 1 0???? .There are four constraints in MC . We store computed entity-constraintadjacency information by using a 9?4 binary matrix ME C which is definedasME C =??????????????1 1 0 00 1 1 00 0 1 11 0 0 11 1 1 00 1 1 11 0 1 11 1 0 11 1 1 1??????????????.5.1.5 The Half-Edge data structure [CGA, Ope]The Half-Edge data structure is an edge centered data structure whichcould maintain the incident information of vertexes, edges and faces in atwo dimensional polyhedral subdivision. Each edge is decomposed into twohalf-edges with opposite directions. We describe the data structure by usingan example. Consider Figure 5.3 where each edge is decomposed into twohalf-edges.1. Every vertex references one outgoing half-edge. For example the vertexv1 references the half-edge h1.2. Every face references one of its bounding half-edges. For example theface f1 references h2.3. 
Every half-edge provides references to the following elements.- The vertex it points to i.e., the half-edge h2 provides referenceto v1.- The face in which it belongs. For example the half-edge h2 hasa reference to the face f1.505.1. Data structureFigure 5.3: The half-edge data structure.- The next half-edge in the face in counter-clockwise direction. Forexample h2 references to h3.- The opposite half-edge. For example h2 has a reference to h4.- The previous half-edge in the belonging face. Reference to thiselement is optional. For example h2 has a reference to h5.Example 5.6. For example consider the 1-norm, f(x1, x2) = |x1|+ |x2|. Itsdomain is illustrated using half-edges in Figure 5.4. A representation of thedomain using the half-edge data structure is as follows.1. The vertex v1 references one outgoing half-edge e.g. h1.2. The face f1 references one of its bounding edges e.g. h1.3. The face f2 references one of its bounding edges e.g. h3.4. The face f3 references one of its bounding edges e.g. h5.5. The face f4 references one of its bounding edges e.g. h7.6. The half-edge h1 provides references to the following elements.515.2. The Algorithms- The vertex it points to. It is null.- The face in which it belongs. It is f1.- The next half-edge in the face in counter-clockwise direction. Itis null.- The opposite half-edge. It is h8.- The previous half-edge in the belonging face. It is h2.7. Similarly other half-edges i.e., h2, h3, h4, h5, h6, h7 and h8 providereferences.Figure 5.4: The representation of the domain of f(x1, x2) = |x1| + |x2|using half-edge data structure.5.2 The AlgorithmsFor an index set J ? Y, we compute the closure of the critical regionRJ which is defined asRJ =?k?K(J )RIk(J ),525.2. The Algorithmswhere RIk(J ) for the parametric quadratic program infx{?k(x, p) : (x, p) ?Ck} corresponding to the equality set Ik(J ) ? B(Ck), k ? ?(J ) is definedasRIk(J ) =?????p?(x, ?k),?x?k(x, p) +AkIk(J )T?k = 0, ?k ? 0AkIk(J )x = bkIk(J )AkIck(J )x ? bkIck(J )?????.An entity is a set in R2 which is either a vertex, an edge or a face. We use?(x, s) to denote g(x, s) or h(x, s). Note that g(x, s) is defined in Subsection4.3.1 and h(x, s) is defined in Subsection 4.3.2. We need to consider everyindex set J ? Y of dom?. For each index set J ? Y we have a DJ which isa vertex or edge or face in the domain. Moreover, for any index set J ? Ywe need to consider all k ? ?(J ). Due to these facts, the main idea of thealgorithm is to loop through each entity in the domain of a bivariate convexPLQ function. For each entity, it is needed to consider each of its adjacentfaces. For any index set J ? Y and for any k ? ?(J ), we have FI?(J )which is a vertex, an edge, or a face. Therefore, FI?(J ) represents an entity.We also have GI?(J ) which is the relative interior of the entity FI?(J ). Theentity FI?(J ) is the same entity as DJ for any k ? ?(J ). In our model, forall k ? ?(J ), Ck are the faces. The entity FI?(J ) is a face when I?(J ) = ?k.Actually, F?k = Ck for any k ? ?(J ).Our implementation in Scilab [Sci94] to compute entities andadjacency informationIn our implementation using Scilab [Sci94], we start with List represen-tation of bivariate PLQ functions which is defined in Subsection 5.1. Thisdata structure does not give us all entities of the domain explicitly exceptfaces. We use a brute-force approach to compute the remaining entities (ver-tex, ray, segment, line). At first we collect all constraints together and storethem in a matrix. Then we compute the vertexes. 
For each constraint we loop through all other constraints to compute the intersection points. During the computation of vertexes, we get the adjacency information between vertexes and constraints and also between vertexes and faces. We get lines from those constraints which are not adjacent to any vertex. After that, we compute the segments and rays for each face. For each face, we loop through all of its adjacent constraints, and for each constraint we loop through all of its vertexes adjacent to the current face to compute the associated segments and rays. We also obtain the adjacency information of the entities (segments, rays and lines) with faces and constraints.

An improved strategy to compute entities and adjacency information

It is possible to compute all entities and their adjacency information with faces and constraints more efficiently. Assume we start with the face-constraint adjacency representation of a bivariate PLQ function defined in Subsection 5.1.2. We compute all of the vertexes, together with the adjacency information between vertexes and constraints, using Mulmuley's segment intersection algorithm [Mul88, MS92]. This algorithm finds all intersecting pairs of segments in a set of line segments in the plane. Since we have all constraints in C, we obtain all the lines corresponding to the constraints, and we represent each such line as a line segment by bounding it within a sufficiently large box. Mulmuley's algorithm may return some extra intersection points which are not vertexes. We eliminate these unnecessary intersection points with the planar point location algorithm [Kir83, ST86], which determines the polyhedral region of a subdivision in which a query point lies. As a by-product of eliminating the unnecessary intersection points, we obtain the vertex-face adjacency information at no extra cost.

For example, consider a polyhedral subdivision with three polyhedral sets P1, P2 and P3 in Figure 5.5. The green lines represent the boundaries of the polyhedral sets. After applying Mulmuley's algorithm, we obtain all intersecting pairs of line segments: the red intersection points are the required vertexes, and the blue intersection points are the extra ones. We then apply the planar point location algorithm to every intersection point to eliminate the unnecessary ones. Suppose we are considering the intersection point v1. The planar point location algorithm reports that it does not belong to any polyhedral set, so we eliminate it. Next, consider another intersection point v2, whose intersecting pair of line segments is (l1, l3). The planar point location algorithm returns the polyhedral set P3; since l3 is not a boundary line of P3, we eliminate this intersection point as unnecessary. Finally, consider the intersection point v3, whose intersecting line segments are l1 and l6. The planar point location algorithm again returns P3; since both l1 and l6 are boundary lines of P3, v3 is not eliminated.

Figure 5.5: Computation of the vertexes.

The remaining entities are lines, segments and rays. A line does not contain any vertex, otherwise it would be a ray or a segment. Since we already have the adjacency information between vertexes and constraints, we retrieve the lines in the domain from those constraints which are not adjacent to any vertex. For example, assume a constraint c1x1 + c2x2 + c3 ≤ 0 is not adjacent to any vertex.
Therefore, we get a line c1x1 + c2x2 + c3 = 0.The adjacency information between lines and constraints is very trivial tocompute because lines come from the constraints.We compute the segments and rays using the information we have sofar. We have retrieved all of the vertexes. We also have the adjacencyinformation between constraints and vertexes. But we have no idea aboutthe vertex order on a line corresponding to a constraint. For example,consider again Figure 5.5. We already know that the vertexes v3, v4 andv5 are on the line segment l1. But we do not know the vertex order of thevertexes v3, v4 and v5 on l1.We need to loop through all of the vertexes and for each vertex we needto loop through its adjacent constraints to compute the associated segmentsor rays. We have no information about the order of vertexes along theline corresponding to a constraint. Therefore, we may have to find outthe vertex from a set of vertexes so that this vertex and another particularvertex belongs to the same polyhedral region. We do this by applying againthe planar point location algorithm. Another way to do this by using theparametric representation of the line corresponding to the constraint. Foreach vertex in the set of vertexes, we calculate the value of parameter in555.2. The Algorithmsthe parametric representation of the line. After that we sort the parametervalues to compute the nearest vertex.For example consider Figure 5.6. Suppose we are at vertex v4 and con-sidering the adjacent line segment l1 along the direction from v4 to v3. Fromthe already known adjacency information, we retrieve that the vertexes v3and v6 are on l1 along this direction. Then we need to compute a vertexfrom v3 and v6 such that this computed vertex and v4 belong to the samepolyhedral region. The computed vertex is v3 and this is the nearest vertexfrom v4 along the direction from v4 to v3. Consequently we get the segmentbetween v4 and v3.Figure 5.6: Computation of the remaining entities by using computedvertexes.Since we need to loop through all of the adjacent constraints for eachvertex to compute corresponding rays or segments, we get the adjacencyinformation between segments or rays and the constraints. Since Ef C givesthe face-constraint adjacency information, using Ef C we deduce the adja-cency information of all edges (lines, segments, rays) with faces. Algorithm1 computes the segments and rays.We already know the adjacency information between entities and faces.We just need to extract this information. Algorithm 2 extracts the adjacency565.2. The AlgorithmsAlgorithm 1: Compute segments and raysinput : C (the set of constraints), V (set of vertexes), Ef C(adjacency information between faces and constraints), EV C(adjacency information between vertexes and constraints)output: set of segments (all segments), set of rays (all rays)for v ? V doforeach Ci ? C adjacent to v docompute associated segments and rays;merge the segments into set of segments;merge the rays into set of rays;endendinformation between entities and faces.In addition to all of the previously computed information, we need theadjacency information between all entities and constraints. The matrix Ef cgives the adjacency information between faces and constraints. The adja-cency information between vertexes and constraints is retrieved already. Theadjacent constraints for any ray or segment is also known. The adjacencyinformation between lines and constraints is trivial. 
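The ordering of vertexes along the line of a constraint, used above to find the nearest vertex, amounts to computing and sorting one parameter value per vertex. A minimal Scilab sketch follows; the helper name and input layout are assumptions for illustration, not the thesis code.

// Sketch (hypothetical helper): order the vertexes lying on the line
// c(1)*x1 + c(2)*x2 + c(3) = 0 of a constraint by their parameter value along
// the direction dir = [-c(2); c(1)]; the constant offset of the parametric
// form does not affect the ordering.
function [Vsorted, idx] = orderVertexesOnLine(c, V)
    dir = [-c(2); c(1)];                  // direction vector of the line
    t = V * dir;                          // V is nV x 2; one parameter per vertex
    [t, idx] = gsort(t, "g", "i");        // sort in increasing order
    Vsorted = V(idx, :);
endfunction

Consecutive vertexes in the resulting order then delimit the candidate segments along that line.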
Algorithm 3 extractsthe adjacency information between entities and constraints.Computation of conjugate or the Moreau-envelopeAfter having all of the previous computed informations, we loop throughall of the different entities in the domain. For each entity, we loop through allof its adjacent faces to compute corresponding critical regions and we also getcorresponding solutions. Consequently, for each entity we calculate a criticalregion and define a value function on it. From these critical regions withassociated value functions, we get the conjugate or the Moreau envelope. Forexample, consider the domain of the function f(x1, x2) = |x1|+ |x2| which isillustrated in 5.1. One of its rays is adjacent to two faces. The single vertexis adjacent to all of the four faces. For every ray, the algorithm computes thecritical region as well as the associated solution for each of the two adjacentfaces. Then it deduces a critical region and defines a value function on it.Algorithm 4 represents these tasks. For ease of describing this algorithm,we have used some definitions and notations. We define set of entities asD = {DJ : J ? Y}. For any k ? ?(J ), we have DJ = FI?(J ). For any575.2. The AlgorithmsAlgorithm 2: Extract entity face adjacency informationinput : L (the set of lines), V (the set of vertexes), S (the set ofsegments), R (the set of rays), EL f (the adjacencyinformation between lines and faces), EV f (the adjacencyinformation between vertexes and faces), ES f (theadjacency information between segments and faces), ER f(the adjacency information between rays and faces)output: entity face adjacency ( adjacency information betweenentities and faces)foreach l ? L doextract the adjacency information between l and faces (EL f (l));merge EL f (l) into entity face adjacency;endforeach v ? V doextract the adjacency information between v and faces (EV f (v));merge EV f (v) into entity face adjacency;endforeach s ? S doextract the adjacency information between s and faces (ES f (s));merge ES f (s) into entity face adjacency;endforeach r ? R doextract the adjacency information between r and faces (ER f (r));merge ER f (r) into entity face adjacency;end585.2. The AlgorithmsAlgorithm 3: Extract entity constraint adjacency informationinput : L (the set of lines), V (the set of vertexes), S (the set ofsegments), R (the set of rays), Ef C (the adjacencyinformation between faces and constraints), EL C (theadjacency information between lines and constraints), EV C(the adjacency information between vertexes andconstraints), ES C (the adjacency information betweensegments and constraints), ER C (the adjacency informationbetween rays and constraints)output: entity constraint adjacency ( adjacency informationbetween entities and constraints)entity constraint adjacency ?? Ef C ;foreach l ? L doextract the adjacency information between l and constraints(EL C(l));merge EL C(l) into entity constraint adjacency;endforeach v ? V doextract the adjacency information between v and constraints(EV C(v));merge EV C(v) into entity constraint adjacency;endforeach s ? S doextract the adjacency information between s and constraint(ES C(s));merge ES C(s) into entity constraint adjacency;endforeach r ? R doextract the adjacency information between r and constraints(ER C(r));merge ER C(r) into entity constraint adjacency;end595.2. The Algorithmsentity DJ , the set of adjacent faces are defined by FJ = {F?k : k ? ?(J )}.For any k ? 
?(J ), we have F?k = Ck.Algorithm 4: Conjugate Computationinput : C, D (the set of all entities), {FJ }J?Y (the adjacencyinformation between all entities and faces),entity constraint adjacency (the adjacency informationbetween entities and constraints), f , is strictly convex(whether ?(x, p) is strictly convex or only convex withrespect to x)output: C?, f?, Ef c?foreach DJ ? D doRJ ?? ?;XJ ?? ?;foreach F?k ? FJ doif is strictly convex then(RIk(J ), XIk(J )(s))?? Strictly Convex(?(x, p), F?k ,FIk(J ));endelse(RIk(J ), XIk(J )(s))?? Convex(?(x, p), F?k , FIk(J ));endRJ ?? RJ ?RIk(J );XJ ?? XJ ?XIk(J )(s);endVJ (s) = ?k(XIk(J )(s), s) for any k ? ?(J );endcompute C?, f?, Ef c?;Our algorithm works only for convex functions with respect to x. Wehave divided the computation of the closure of critical regionsRIk(J ) and theassociated solution functions XIk(J )(s) in strictly convex (?(x, p) is strictlyconvex with respect to x) and convex (?(x, p) is convex but not strictlyconvex with respect to x) cases. Function Strictly Convex and FunctionConvex are for the strictly convex and convex cases respectively.After computing a PLQ function V (s) with V (s) = VJ (s) when s ?RJ , we need to compute the information to get a face-constraint adjacencyrepresentation (C?, f?, E?f C). We already know the set of constraints and605.3. Complexity Analysisthe faces in domV . We also know which face is bounded by which constraint.Therefore, it is a matter of extraction to get C?, f?, E?f C .Function Strictly Convexinput : ?(x, p), Ck (the polyhedral set on which ?(x, p) = ?k(x, p)),FIk(J )(an entity)output: RIk(J ) (closure of a critical region), XIk(J )(s) (the solutionfunction on RIk(J ))(XIk(J )(s), ?Ik(J )(s))?? {(x, ?k) : ?x?k(x, s) +AkIk(J )T?k =0, AkIk(J )x = bkIk(J )};RIk(J ) ?? {s : ?Ik(J )(s) ? 0, AkIck(J )XIk(J )(s) ? bkIck(J )};Function Convexinput : ?(x, p), Ck (the polyhedral set on which ?(x, p) = ?k(x, p)),FIk(J )(an entity)output: RIk(J ) (closure of a critical region), XIk(J )(s) (the solutionfunction on RIk(J ))compute the normal cone NCk(FIk(J )) = {y ? R2 :[LIk(J )ELIk(J )I]y ? 0}to Ck at any point x ? riFIk(J );XIk(J )(s)?? {x : LIk(J )E (Qkx+Rks+ qk) = 0, AkIk(J )x = bkIk(J )};RIk(J ) ?? {s : LIk(J )I (QkXIk(J )(s) +Rks+ qk) ?0, AkIck(J )XIk(J )(s) ? bkIck(J )};5.3 Complexity AnalysisAs we noted earlier, we use a brute force implementation using Scilabfor computing entities and adjacency information. For O (nC) constraints,our brute-force implementation computes the vertexes in O(n2C)time. Thecomputation of segments and rays takes O (nfncnV ) time for O (nf ) faces,O (nc) constraints and O (nV ) vertexes.Now we describe the complexity of the algorithm discussed throughoutthis chapter which uses an improved strategy to compute entities and ad-615.3. Complexity Analysisjacency information. The space and time complexity of the algorithm isanalyzed according to the different parts of the algorithm.For nC different constraints and nf different faces, the space complexityof C and f is O (nC) and O (nf ) respectively. Adjacency list representa-tion [CLRS09, Chapter 22] of Ef C takes O (nC + nf ) space. For O (nC) linesegments and O (nV ) vertexes, the Mulmuly?s algorithm takes O (nC + nV )space. The planar point location algorithm which is used to eliminate theunnecessary constraints, needs O (nC) storage for O (nC) line segments. Fornf faces, nC constraints, and nV vertexes, we have O (nf + nC + nV ) enti-ties. 
Since in a planar graph with v vertexes and e edges, e = O(v2), wededuce that we have O (nf + nC) entities. As earlier, it is possible to manip-ulate the adjacency information between entities and faces with O (nC + nf )space complexity if we use adjacency list representation. Similarly, entity-constraint adjacency is represented in O (nC) space. Therefore, the totalspace complexity is O (nC + nf ).The time complexity comes from the three different parts of computa-tion.First part. We compute vertexes using the Mulmuly?s algorithm. It takesO (nC log nC + nV ) time with nC line segments and nV vertexes. For O (nC)line segments, the planar point location algorithm requires O (log nC) timefor each query point. Thus, the elimination of unnecessary constraints takesO (nV log nC) time. Therefore, the total time complexity for computing thevertexes is O (nC log nC + nV + nV log nC). The domain of a PLQ bivariatefunction is represented by a planar graph where vertexes and edges arecomputed from the constraints. In that planar graph we have O (nC) edgesand O (nV ) vertexes. Therefore, we have nC = O(n2V). So we deduce thetotal time complexity to compute the vertexes is O (nC log nC).Second part. We compute the remaining entities. If we consider thedomain as a planar graph, in this stage we actually need to loop through alladjacent edges for every vertex. For each edge, we may need to apply the pla-nar point location algorithm for a set of vertexes. For O (nC) line segmentsand O (nV ) vertexes this computation takes O (nV log nC) time. There-fore, a computation with O (nV log nC) time complexity may be needed ineach adjacent edge of any vertex. Therefore, the total time complexity forcomputing the remaining entities is O(nV log nC?i?{1,2...nV }deg(vi)). Byusing the Fact 2.41, we deduce that time complexity is O (nV nC log nC).Third part. We do the computation of conjugate or Moreau envelopeusing Algorithm 4. For every entity-face adjacency we do a constant timecomputation. An entity like segment, ray or line, is adjacent to at most625.3. Complexity Analysistwo faces in a polyhedral subdivision. For example consider the polyhedralsubdivision which is illustrated in Figure 5.7. Every ray or segment is adja-cent to at most two faces. A vertex is adjacent to more than two faces. Forexample Vertex 2 and Vertex 3 are adjacent to four faces in the polyhedralsubdivision illustrated in Figure 5.7. By using the Euler?s formula which isprovided in Fact 2.42, we could say this type of vertexes are rare or they areadjacent to approximately a constant number of faces. Due to these facts,we deduce that the time complexity of Algorithm 4 remains linear, which isO (nf + nC).Figure 5.7: A polyhedral subdivision.By summing the time complexities from three different parts of compu-tation, we get the overall time complexity asO (nC log nC + nV nC log nC + nf + nC) = O (nV nC log nC) .Finally, the time to build the conjugate data structure is log-linear usingthe same argument as [GL11a].Actually we could compute the conjugate or Moreau envelope in lineartime by using an enriched data structure but by breaking the data structuresimilarity between the input and output of computation.Theorem 5.7. We could compute the conjugate or Moreau envelope of anyconvex bivariate PLQ function in linear time by using an enriched datastructure like the Half-Edge data structure for taking input and without build-ing the output in a similar data structure.Proof. The overall complexity of our algorithm is not linear. 
It is due toa time complexity of O (nc log nc) which arises during the computation of635.3. Complexity Analysisvertexes, and also for O (nV nC log nC) time complexity for computing therays and segments.Since an enriched data structure like the Half-Edge data structure givesall of the vertexes, we do not need to compute them.In our algorithm we do not know the ordering information of vertexesalong a line corresponding to a constraint. For this reason we may need todo a computation of O (nV log nC) time complexity in each adjacent edge ofany vertex during the computation of the rays and segments. In an enricheddata structure like the Half-Edge data structure, a half-edge correspondingto an edge has reference to its pointing vertex. Therefore when we are at avertex, we just need to go through its adjacent edges to get the correspondingrays and segments. Thus, the run time to compute all segments and raysbecomes O(?i?{1,2...nV }deg(vi))= O (nC).Therefore, by using an enriched data structure we could compute theconjugate or Moreau envelope of any convex bivariate PLQ function inO (nC + nf ) time.Theorem 5.7 does not consider the complete computation. The cost ofbuilding the conjugate data structure is still log-linear [GL11a], and thequestion of whether there exists a linear-time algorithm taking as input thefunction f in the half-edge data structure and outputting f? as a half-edgedata format is still open.64Chapter 6ExamplesWe present some examples for which we have computed the conjugateby applying our algorithm. The algorithm is implemented in Scilab [Sci94].We used the SCAT package [BH06, BH08] in the Maple 15 toolbox [MAP]to verify our conjugate computation.Example 6.1. Consider the l1 norm, f(x1, x2) = |x1|+ |x2|. Its conjugateis f?(s1, s2) = I{(s1,s2)|?1?s1?1,?1?s2?1}(s1, s2). The bivariate PLQ functionand its conjugate are illustrated in Figure 6.1.Example 6.2. We next demonstrate the conjugate of the 2D energy func-tion. The function is f(x1, x2) = 12(x21 + x22). It has only one piece withunbounded domain which is the full 2D space. Therefore, it has only oneentity in the domain. The 2D energy function, illustrated in Figure 6.2, isthe only self-conjugate function [Roc70, Page 106].Example 6.3. Consider the following convex PLQ function which is onlyquadratic in x1 and has no linear termf(x1, x2) ={x21, if x1 ? 0, x2 ? 0;? otherwise.Its conjugate isf?(s1, s2) =?????14s21, if s1 ? 0, s2 ? 0;0, if s1 ? 0, s2 ? 0;?, otherwise.The function and its conjugate are shown in Figure 6.3.Example 6.4. All of the previously discussed PLQ functions are additivelyseparable functions. Now we consider an additively non-separable PLQ func-tion f defined byf(x1, x2) ={x21 + x22 + 2x1x2, if x1 ? 0, x2 ? 0;?, otherwise.65Chapter 6. Examples(a) The l1 norm. (b) The conjugate of the l1 norm.Figure 6.1: The l1 norm and its conjugate.Figure 6.2: 2D Energy function.The conjugate of this function is,f?(s1, s2) =?????14s22, if s2 ? 0, s1 ? s2;0, if s1 ? 0, s2 ? 0;14s21, if s1 ? 0, s1 ? s2.The function and its conjugate are shown in Figure 6.4.Example 6.5. Consider the piecewise function f which is defined byf(x1, x2) =?????12(x21 + x22), if x1 ? 0, x2 ? 0;?2x1 + x22, if x1 ? 0, x2 ? 0;?, otherwise.66Chapter 6. Examples(a) The function f(x1, x2) (b) The conjugate f?(s1, s2)Figure 6.3: The function from Example 6.3 and its conjugate(a) The function f(x1, x2) (b) The conjugate f?(s1, s2)Figure 6.4: The function from Example 6.4 and its conjugate67Chapter 6. 
Examples(a) Partion of the domain of f (b) The function f(x1, x2)(c) Partition of the domain of f? (d) The conjugate f?(s1, s2)Figure 6.5: The function from Example 6.5 and its conjugateIt has a strictly convex and merely convex piece. The domain of f containstwo faces, three rays and one vertex. The conjugate of this PLQ function isf?(s1, s2) =???????????????14(s21 + s22), if s1 ? 0, s2 ? 0;14s22, if ?2 ? s1 ? 0, s2 ? 0;0, if ?2 ? s1 ? 0, s2 ? 0;14s21, if s1 ? 0, s2 ? 0;?, otherwise.Its domain has four faces, five rays, one segment and two vertexes. Figure6.5 illustrates f and f? with the partitions of their domains.Example 6.6. Now we demonstrate another PLQ function with two finitevalued pieces whose one piece is quadratic in x1 and another one is quadratic68Chapter 6. Examples(a) The function f(x1, x2) (b) The conjugate f?(s1, s2)Figure 6.6: The function from Example 6.6 and its conjugatein x2. The function is defined byf(x1, x2) =?????x21, if x2 ? 0, x1 ? x2;x22, if x1 ? 0, x2 ? x1;?, otherwise.Its conjugate isf?(s1, s2) =???????????14s21 +14s22 +12s1s2, if s1 ? 0, s2 ? 0;14s22, if ?2 ? s1 ? 0, s2 ? 0;0, if s1 ? 0, s2 ? 0;14s21, if s1 ? 0, s2 ? 0.The function and its conjugate are shown in 6.6.Example 6.7. Consider a PLQ function f with four finite valued pieceswhich is defined byf(x1, x2) =???????????x21 + x22, if x1 ? 0, x2 ? 0;?2x1 + x22, if x1 ? 0, x2 ? 0;?2x1 ? 2x2, if x1 ? 0, x2 ? 0;x21 ? 2x2, if x1 ? 0, x2 ? 0.Like the domain of the 1-norm, its domain has four faces, four rays and one69Chapter 6. Examplesvertex. The conjugate of this PLQ function is defined byf?(s1, s2) =???????????????14(s21 + s22), if s1 ? 0, s2 ? 0;14s22, if ?2 ? s1 ? 0, s2 ? 0;0, if ?2 ? s1 ? 0,?2 ? s2 ? 0;14s21, if s1 ? 0,?2 ? s2 ? 0?, otherwise.There are four faces, four rays, four segments and four vertexes in the domainof f?. Figure 6.7 shows the function and its conjugate. It also illustratesthe partitions of the domains.We also compute the Moreau-envelope of the 2D energy function andthe 2D 1-norm with ? = 1/2. For ? = 1/2, the Moreau-envelope of the 2Denergy function is written ase?f(s1, s2) =13(s21 + s22).Figure 6.8 illustrates the Moreau-envelope of the 2D energy function. For? = 1/2, the Moreau-envelope of the 2D 1-norm is computed ase?f(s1, s2) =???????????????????????????????????s1 + s2 ? 12 , if s1 ?12 , s2 ?12 ;?s1 + s2 ? 12 , if s1 ? ?12 , s2 ?12 ;?s1 ? s2 ? 12 , if s1 ? ?12 , s2 ? ?12 ;s1 ? s2 ? 12 , if s1 ?12 , s2 ? ?12 ;s1 + s22 ?14 , if s1 ?12 ,?12 ? s2 ?12 ;s21 + s2 ?14 , if ?12 ? s1 ?12 , s2 ?12 ;?s1 + s22 ?14 , if s1 ? ?12 ,?12 ? s2 ?12 ;s21 ? s2 ?14 , if ?12 ? s1 ?12 , s2 ? ?12 ;s21 + s22, if ?12 ? s1 ?12 ,?12 ? s2 ?12 .The Moreau-envelope of 2D 1-norm is shown in Figure 6.9.Furthermore, we compute the proximal average of 2D 1-norm and 2Denergy function with ?1 = 1/2, ?2 = 1/2 and ? = 1. The proximal average70Chapter 6. Examples(a) Partion of the domain of f (b) The function f(x1, x2)(c) Partition of the domain of f? (d) The conjugate f?(s1, s2)Figure 6.7: The function from Example 6.7 and its conjugate71Chapter 6. ExamplesFigure 6.8: The Moreau-envelope of 2D energy function with ? = 1/2.is computed asp?(f, ?)(x1, x2) =???????????????????????????????????14(x21 + x22) +14(x1 + x2)?18 , if x1 ?12 , x2 ?12 ;14(x21 + x22)?14(x1 ? x2)?18 , if x1 ? ?12 , x2 ?12 ;14(x21 + x22)?14(x1 + x2)?18 , if x1 ? ?12 , x2 ? ?12 ;14(x21 + x22) +14(x1 ? x2)?18 , if x1 ?12 , x2 ? ?12 ;14x21 +12x22 +14x1 ?116 , if x1 ?12 ,?12 ? x2 ?12 ;12x21 +14x22 +14x2 ?116 , if ?12 ? 
x1 ?12 , x2 ?12 ;14x21 +12x22 ?14x1 ?116 , if x1 ? ?12 ,?12 ? x2 ?12 ;12x21 +14x22 ?14x2 ?116 , if ?12 ? x1 ?12 , x2 ? ?12 ;12(x21 + x22), if ?12 ? x1 ?12 ,?12 ? x2 ?12 .Figure 6.10 illustrates the proximal average of of 2D 1-norm and 2D energyfunction.All of the above examples deal with PLQ functions with few number ofpieces. Now we give an example in which we work with PLQ functions witha large number of pieces. We consider a function f(x1, x2) = x41 +x42. This isan additively separable function. We assume f1(x1) = x41 and f2(x2) = x42.We have f?(s1, s2) = f?1 (s1) + f?2 (s2). We build a univariate PLQ function72Chapter 6. ExamplesFigure 6.9: The Moreau-envelope of 2D 1-norm with ? = 1/2.from f1 by using plq build function of CCA package [CCA] which samplesf1 on a grid of points. We use the points from zero to twelve with theinterval of one and we get a univariate PLQ function with 13 finite valuedpieces. By using this PLQ function we generate a list representation of f asa bivariate PLQ function which has 132 = 169 finite valued pieces. Then weapply our implemented algorithm to compute an approximation of f? andthe computed f? has 144 finite valued pieces.To verify our result we also compute an approximation of f? by using theCCA package. We again use plq build to generate a univariate PLQ functionfrom f1. Then we apply plq lft in the CCA package to compute the conju-gate f?1 . We obtain f? which also has 144 finite valued pieces and we storeit by using the list representation of bivariate PLQ functions. We comparethe coefficients of constrains and function values of the computed conjugateby applying our implemented algorithm and the computed conjugate by ap-plying the CCA package. With a tolerance of 10?4, this comparison resultsequality between the computed conjugates. The conjugates are illustratedin Figure 6.11.73Chapter 6. ExamplesFigure 6.10: The proximal average of 2D 1-norm and 2D energy functionwith ?1 = 1/2, ?2 = 1/2 and ? = 1.(a) An approximation of conjugateof f(x1, x2) = x41+x42 by using ourimplemented algorithm.(b) An approximation of conjugateof f(x1, x2) = x41+x42 by using theCCA package.Figure 6.11: The approximation of conjugate of f(x1, x2) = x41 + x42.74Chapter 7ConclusionWe presented Goebel?s proof [Goe00, Proposition 3, page 3] for provingpartial conjugates of convex PLQ functions are also PLQ. We focused onadapting parametric quadratic programming (pQP) in computational con-vex analysis. We studied parametric quadratic optimization problem from[PS11] and we modeled the conjugate and Moreau envelope computationproblem of bivariate PLQ functions as a parametric quadratic optimizationproblem. We proposed the data structures to represent bivariate PLQ func-tions. We studied an algorithm based on pQP for computing conjugatesand Moreau envelopes of convex bivariate PLQ functions and we analyzedthe space and time complexity of the algorithm. We also implemented ouralgorithm in Scilab [Sci94] and we produced some examples of conjugateand Moreau envelope computation.Although the recent results in parametric minimization of convex PLQfunctions work for convex PLQ functions with n variables, we applied it onlyfor convex bivariate PLQ functions. An interesting future work would be toconsider extending the algorithm to functions of more than two variables.The data structure is one of the main issues to be considered. Always newtype of entities are present in the domain a n variate PLQ function whichare not present in the domain of a n? 1 variate PLQ function. 
One of thechallenging parts will be to give an efficient data structure to represent aPLQ function with n variables.We know if a function f is proper convex PLQ then its conjugate f? isalso a proper convex PLQ function [RW98, Proposition 11.32]. But assumef is proper PLQ but separately convex. What can we say about its conjugatef? and partial conjugate f?1?We computed conjugate and Moreau envelope of PLQ functions whichare convex. To compute the conjugate, we solved an optimization problemwhich was not jointly convex. Future research can focus on computingconjugates of multivariate PLQ functions which are not convex.75Bibliography[Bao02] M. Baoti. An efficient algorithm for multi-parametric quadraticprogramming. Report AUT02-05, Institut fr Automatik, ETHZrich, 2002. ? pages 3[BBBM09] F. Borrelli, M. Baoti, A. Bemporad, and M. Morari. Dynamicprogramming for constrained optimal control of discrete-timelinear hybrid systems. Automatica, 41:1709?1721, 2009.? pages3[BBM02a] A. Bemporad, F. Borrelli, and M. Morari. Min-max control ofconstrained uncertain discrete-time linear systems. IEEE Trans-actions on Automatic Control, 48(9):1600?1606, 2002. ? pages3[BBM02b] A. Bemporad, F. Borrelli, and M. Morari. Model predictive con-trol based on linear programming - the explicit solution. IEEETransactions on Automatic Control, 47(12):1974?1985, 2002. ?pages 3[BC90] M.J. Best and N. Chakravarti. Stability of linearly constrainedconvex quadratic programs. J. Opt. Theory Appl, 64:43?53,1990. ? pages 3[BC11] Heinz H. Bauschke and Patrick L. Combettes. Convex analysisand monotone operator theory in Hilbert spaces. CMS Books inMathematics/Ouvrages de Mathe?matiques de la SMC. Springer,New York, 2011. With a foreword by He?dy Attouch. ? pages 5[BCM06] M. Baoti, F.J. Christophersen, and M. Morari. Constrainedoptimal control of hybrid systems with a linear performanceindex. IEEE Transactions on Automatic Control, 51,12:1903?1919, 2006. ? pages 3[Ber63] C. Berge. Topological spaces. Oliver and Boyd Ltd., 1963. ?pages 376Bibliography[Bes72] M.J. Best. On the continuity of the minimum in parametricquadratic programs. J. Opt. Theory Appl, 86:245?250, 1972. ?pages 3[BGK+82] B. Bank, J. Guddat, D. Klatte, B. Kummer, and K. Tam-mer. Non-Linear Parametric Optimization. Birkhauser, Basel-Boston, 1982. ? pages 3[BGK+83] B. Bank, J. Guddat, D. Klatte, B. Kummer, and K. Tammer.Non-linear parametric optimization. J. Math. Anal. Appl, 1983.? pages 3[BGLW08] Heinz H. Bauschke, Rafal Goebel, Yves Lucet, and XianfuWang. The proximal average: Basic theory. SIAM J. Optim.,19(2):768?785, 2008. ? pages 42[BH06] Jonathan M. Borwein and Chris H. Hamilton. Symbolic compu-tation of multidimensional Fenchel conjugates. In ISSAC 2006,pages 23?30. ACM, New York, 2006. ? pages 65[BH08] Jonathan M. Borwein and Chris H. Hamilton. Symbolic Fenchelconjugation. Math. Program., 116(1):17?35, 2008. ? pages 1,65[BLT08] Heinz H. Bauschke, Yves Lucet, and Michael Trienis. How totransform one convex function continuously into another. SIAMRev., 50(1):115?132, July 2008. ? pages 2[BM06] Heinz H. Bauschke and Martin v. Mohrenschildt. Symbolic com-putation of Fenchel conjugates. ACM Commun. Comput. Alge-bra, 40(1):18?28, 2006. ? pages 1[BMDP02] A. Bemporad, M. Morari, V. Dua, and E.N. Pistikopoulos. Theexplicit linear quadratic regulator for constrained systems. Au-tomatica, 38:3?20, 2002. ? pages 3[Bre89] Y. Brenier. Un algorithme rapide pour le calcul de transforme?esde Legendre?Fenchel discre`tes. 308:587?589, 1989. ? pages 1[CBM05] F.J. 
Christophersen, M. Baotić, and M. Morari. Optimal control of piecewise affine systems: A dynamic programming approach. Lecture Notes in Control and Information Sciences, 322:183–198, 2005. → pages 3
[CCA] The computational convex analysis numerical library. http://atoms.scilab.org/toolboxes/CCA. → pages 73
[CGA] CGAL, Computational Geometry Algorithms Library. http://www.cgal.org. → pages v, 50
[CGL] CGLAB, a Scilab toolbox for geometry based on CGAL. http://cglab.gforge.inria.fr/. → pages 2
[CLRS09] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms. The MIT Press, 3rd edition, 2009. → pages 62
[Cor96] L. Corrias. Fast Legendre–Fenchel transform and applications to Hamilton–Jacobi equations and conservation laws. 33(4):1534–1558, August 1996. → pages 1
[DB02] M. Diehl and J. Bjornberg. Robust dynamic programming for min-max model predictive control of constrained uncertain systems. IEEE Transactions on Automatic Control, 49(12):2253–2257, 2002. → pages 3
[DFS67] G.B. Dantzig, J. Folkman, and N.Z. Shapiro. On the continuity of the minimum set of a continuous function. J. Math. Anal. Appl., 17:519–548, 1967. → pages 3
[Die12] Reinhard Diestel. Graph Theory, 4th edition, volume 173 of Graduate Texts in Mathematics. Springer, 2012. → pages 12
[DR09] A.L. Dontchev and R.T. Rockafellar. Implicit functions and solution mappings: A view from variational analysis. Springer Series in Mathematics, 2009. → pages 3
[Fia83] A.V. Fiacco. Introduction to sensitivity and stability analysis in nonlinear programming. Academic Press, Orlando, 1983. → pages 3
[FP03] F. Facchinei and J.S. Pang. Finite dimensional variational inequalities and complementarity problems. Springer, 1, 2003. → pages 3
[GBTM03] P. Grieder, F. Borrelli, F. Torrisi, and M. Morari. A geometric algorithm for multiparametric linear programming. J. Optim. Theory Appl., 118:515–540, 2003. → pages 3
[GBTM04] P. Grieder, F. Borrelli, F. Torrisi, and M. Morari. Computation of the constrained infinite time linear quadratic regulator. Automatica, 40:701–708, 2004. → pages 3
[Ger07] Judith L. Gersting. Mathematical structures for computer science. Craig Bleyer, 6th edition, 2007. → pages 12
[GJL13] Bryan Gardiner, Khan Jakee, and Yves Lucet. Computing the partial conjugate of convex piecewise linear-quadratic bivariate functions. 2013. Submitted to Computational Optimization and Applications. → pages iii, 2, 4
[GL11a] Bryan Gardiner and Yves Lucet. Computing the conjugate of convex piecewise linear-quadratic bivariate functions. Mathematical Programming Series B, 2011. → pages 2, 4, 63, 64
[GL11b] Bryan Gardiner and Yves Lucet. Graph-matrix calculus for computational convex analysis. In Heinz H. Bauschke, Regina S. Burachik, Patrick L. Combettes, Veit Elser, D. Russell Luke, and Henry Wolkowicz, editors, Fixed-Point Algorithms for Inverse Problems in Science and Engineering, volume 49 of Springer Optimization and Its Applications, pages 243–259. Springer New York, 2011. → pages 2
[Goe00] R. Goebel. Convexity, Convergence and Feedback in Optimal Control. PhD thesis, University of Washington, Seattle, 2000. → pages 13, 75
[Ham05] Chris H. Hamilton. Symbolic convex analysis. Master's thesis, Simon Fraser University, 2005. → pages 1
[Hau57] F. Hausdorff. Set theory. 1957. → pages 3
[Hel05] Philippe Helluy. Simulation numérique des écoulements multiphasiques : de la théorie aux applications. PhD thesis, Institut des Sciences de l'Ingénieur de Toulon et du Var, Laboratoire Modélisation Numérique et Couplages, BP 56, 83162 La Valette CEDEX, France, January 2005. Habilitation à Diriger des Recherches. → pages 2
[Hog73a] W.W. Hogan. The continuity of the perturbation function of a convex program. Oper. Res., 21:351–352, 1973. → pages 3
[Hog73b] W.W. Hogan. Point-to-set maps in mathematical programming. SIAM Rev., 15:591–603, 1973. → pages 3
[HOVT03] Takashi Hisakado, Kohshi Okumura, Vladimir Vukadinovic, and Ljiljana Trajkovic. Characterization of a simple communication network using Legendre transform. In Proc. IEEE Int. Symp. Circuits and Systems, volume 3, pages 738–741, May 2003. → pages 2
[HUL93] Jean-Baptiste Hiriart-Urruty and Claude Lemaréchal. Convex Analysis and Minimization Algorithms, volume 305–306 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 1993. Vol I: Fundamentals, Vol II: Advanced theory and bundle methods. → pages 2
[HUL07] Jean-Baptiste Hiriart-Urruty and Yves Lucet. Parametric computation of the Legendre–Fenchel conjugate. J. Convex Anal., 14(3):657–666, August 2007. → pages 2
[Kir83] David G. Kirkpatrick. Optimal search in planar subdivisions. SIAM J. Comput., 12(1):28–35, 1983. → pages 54
[KK02] D. Klatte and B. Kummer. Nonsmooth equations in optimization: Regularity, calculus, methods and applications. 2002. → pages 3
[KM02] E. C. Kerrigan and D. Q. Mayne. Optimal control of constrained, piecewise affine systems with bounded disturbances. Proc. 42nd IEEE Conf. Decision Control, Las Vegas, NV, 322:1552–1557, 2002. → pages 3
[Koo02] B. Koopen. Contact of bodies in 2D-space: Implementing the Discrete Legendre Transform. AI Master's thesis, Intelligent Autonomous Systems Group, University of Amsterdam, February 2002. → pages 2
[LPBL05] B. Legras, I. Pisso, G. Berthet, and F. Lefèvre. Variability of the Lagrangian turbulent diffusion in the lower stratosphere. Atmospheric Chemistry and Physics, 5:1605–1622, 2005. → pages 2
[Luc96] Yves Lucet. A fast computational algorithm for the Legendre–Fenchel transform. Comput. Optim. Appl., 6(1):27–57, July 1996. → pages 1
[Luc97] Yves Lucet. Faster than the Fast Legendre Transform, the Linear-time Legendre Transform. Numer. Algorithms, 16(2):171–185, January 1997. → pages 1
[Luc05] Yves Lucet. A linear Euclidean distance transform algorithm based on the Linear-time Legendre Transform. In Proceedings of the Second Canadian Conference on Computer and Robot Vision (CRV 2005), pages 262–267, Victoria BC, May 2005. IEEE Computer Society Press. → pages 2
[Luc06] Yves Lucet. Fast Moreau envelope computation I: numerical algorithms. Numerical Algorithms, 43(3):235–249, 2006. → pages 1, 2, 9
[Luc10] Yves Lucet. What shape is your conjugate? A survey of computational convex analysis and its applications. SIAM Rev., 52(3):505–542, 2010. → pages 2
[MAP] Maple. http://www.maplesoft.com/. → pages 65
[Mor65] Jean-Jacques Moreau. Proximité et dualité dans un espace hilbertien. Bull. Soc. Math. France, 93:273–299, 1965. → pages 1
[MS92] Jiří Matoušek and Raimund Seidel. A tail estimate for Mulmuley's segment intersection algorithm. In Werner Kuich, editor, Automata, Languages and Programming, 19th International Colloquium, ICALP'92, Vienna, Austria, July 13–17, 1992, Proceedings, volume 623 of Lecture Notes in Computer Science, pages 427–438. Springer, 1992. → pages 54
[Mul88] Ketan Mulmuley. A fast planar partition algorithm, I (extended abstract). In FOCS, pages 580–589, 1988. → pages 54
[NV94] A. Noullez and M. Vergassola. A fast Legendre transform algorithm and applications to the adhesion model. J. Sci. Comput., 9(3):259–281, 1994. → pages 1
[NW06] Jorge Nocedal and Stephen J. Wright. Numerical optimization. Springer Series in Operations Research and Financial Engineering. Springer, New York, second edition, 2006. → pages 12
[Ope] OpenMesh. http://www.openmesh.org. → pages v, 50
[PGD07a] E.N. Pistikopoulos, M.C. Georgiadis, and V. Dua. Multiparametric Model-Based Control, Process Systems Engineering, volume 2. Wiley-VCH, Weinheim, Germany, 2007. → pages 3
[PGD07b] E.N. Pistikopoulos, M.C. Georgiadis, and V. Dua. Multiparametric Programming, Process Systems Engineering, volume 1. Wiley-VCH, Weinheim, Germany, 2007. → pages 3
[PS11] Panagiotis Patrinos and Haralambos Sarimveis. Convex parametric piecewise quadratic optimization: Theory and algorithms. Automatica, 47(8):1770–1777, 2011. → pages 4, 18, 19, 21, 22, 27, 28, 75
[PY01] H.X. Phu and N.D. Yen. On the stability of solutions to quadratic programming problems. Math. Program., 89:385–394, 2001. → pages 3
[RGJ04] S. V. Rakovic, P. Grieder, and C. N. Jones. Computation of Voronoi diagrams and Delaunay triangulation via parametric linear programming. Technical Report AUT04-10, Automatic Control Laboratory, Swiss Federal Institute of Technology, ETH-Zentrum, CH-8092, Switzerland, 2004. → pages 4
[Roc70] R. T. Rockafellar. Convex Analysis. Princeton University Press, Princeton, New Jersey, 1970. → pages 2, 14, 16, 65
[RW98] R. T. Rockafellar and R. J.-B. Wets. Variational Analysis. Springer-Verlag, Berlin, 1998. → pages 1, 2, 5, 10, 11, 14, 16, 31, 42, 75
[RW09] R. T. Rockafellar and R. J.-B. Wets. Variational Analysis. Springer-Verlag, Berlin, 2009. → pages 3
[SAF92] Zhen-Su She, Erik Aurell, and Uriel Frisch. The inviscid Burgers equation with initial data of Brownian type. Comm. Math. Phys., 148(3):623–641, 1992. → pages 1
[Sci94] Scilab Consortium. Scilab, 1994. http://www.scilab.org. → pages 44, 53, 65, 75
[SGD03] M.M. Seron, G.C. Goodwin, and J.A. De Doná. Characterization of receding horizon control for constrained linear systems. Asian J. Control, 5:271–286, 2003. → pages 3
[SKMJ09] J. Spjøtvold, E.C. Kerrigan, D.Q. Mayne, and T.A. Johansen. Inf-sup control of discontinuous piecewise affine systems. International Journal of Robust and Nonlinear Control, 19(13):1471–1492, 2009. → pages 3
[ST86] N. Sarnak and R.E. Tarjan. Planar point location using persistent search trees. 1986. → pages 54
[STJ07] J. Spjøtvold, P. Tøndel, and T.A. Johansen. Continuous selection and unique polyhedral representation of solutions to convex parametric quadratic programs. Journal of Optimization Theory and Applications, (133):1471–1492, 2007. → pages 31, 32, 33
[TJB03] P. Tøndel, T.A. Johansen, and A. Bemporad. An algorithm for multiparametric quadratic programming and explicit MPC solutions. Automatica, 39:489–497, 2003. → pages 3
[TL96] Paul Tseng and Zhi-Quan Luo. On computing the nested sums and infimal convolutions of convex piecewise-linear functions. J. Algorithms, 21(2):240–266, September 1996. → pages 2
[Zha97] J. Zhao. The lower continuity of optimal solution sets. J. Math. Anal. Appl., 207:240–250, 1997. → pages 3
