Decompositions and representations of monotone operators with linear graphs

by

Liangjin Yao

M.Sc., Yunnan University, 2006

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in The College of Graduate Studies (Interdisciplinary)

The University of British Columbia Okanagan

December, 2007

© Liangjin Yao 2007

Abstract

We consider the decomposition of a maximal monotone operator into the sum of an antisymmetric operator and the subdifferential of a proper lower semicontinuous convex function. This is a variant of the well-known decomposition of a matrix into its symmetric and antisymmetric part. We analyze in detail the case when the graph of the operator is a linear subspace. Equivalent conditions of monotonicity are also provided.

We obtain several new results on auto-conjugate representations, including an explicit formula that is built upon the proximal average of the associated Fitzpatrick function and its Fenchel conjugate. These results are new and they both extend and complement recent work by Penot, Simons and Zălinescu. A nonlinear example shows the importance of the linearity assumption. Finally, we consider the problem of computing the Fitzpatrick function of the sum, generalizing a recent result by Bauschke, Borwein and Wang on matrices to linear relations.

Table of Contents

Abstract
Table of Contents
List of Figures
Acknowledgements
1 Introduction
2 Auxiliary results
2.1 Definitions
2.2 Properties of A†
3 Inverse of linear monotone operators
3.1 Borwein-Wiersma decomposition
3.2 Asplund decomposition
3.3 The Borwein-Wiersma decomposition of the inverse
4 Monotone operators with linear graphs
4.1 Linear graph
4.2 Maximal monotonicity identification
4.3 Constructing a good selection
4.4 The first main result
4.5 Monotonicity of operators with linear graphs
5 Auto-conjugates
5.1 Auto-conjugate representation
5.2 The Fitzpatrick function and the proximal average
5.3 The second main result
5.4 An example
5.5 Relationship among auto-conjugate representations
5.6 Nonuniqueness
6 Calculation of the auto-conjugates of ∂(−ln)
6.1 Fitzpatrick function for ∂(−ln)
6.2 Proximal average of ∂(−ln) and h_{F_∂f}
7 Proximal averages of monotone operators with linear graphs
7.1 Adjoint process of operators with linear graphs
7.2 Fitzpatrick functions of monotone operators with linear graphs
7.3 The third main result
7.4 The Fitzpatrick function of the sum
8 Future work
Bibliography

List of Figures

5.1 The function f (blue) and z = xy (gold), and their intersection line (cyan), gra Id (red).
5.2 The function f (blue) and z = xy (gold), and their intersection line (cyan), gra Id (red).
5.3 The function f (blue) and z = xy (gold), and their intersection line (cyan), gra Id (red), where p = 4.
6.1 The function h_{F_∂f}.

Acknowledgements

I sincerely thank Heinz Bauschke and Shawn Wang for giving me the opportunity to study under their supervision at UBC Okanagan. They have both given me a lot of help and encouragement. I would like to thank Philip Loewen and Yves Lucet for their many pertinent comments on my thesis.

I would also like to thank the many people in the department who have made UBCO into a pleasant environment.

Chapter 1. Introduction

This thesis addresses two issues: decompositions of a monotone operator with a linear graph, and auto-conjugate representations of a monotone operator with a linear graph.

It is well known that every matrix A in R^{n×n} can be decomposed into the sum of a symmetric matrix and an antisymmetric matrix by

A = (A + A⊺)/2 + (A − A⊺)/2,

where (A + A⊺)/2 is a gradient of a quadratic function. Our goal is to decompose more general mappings, namely maximal monotone operators. Both positive semidefinite matrices and gradients of convex functions are maximal monotone.
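The thesis itself contains no code, but the classical splitting above is easy to check numerically. The following small NumPy sketch is added purely as an illustration: it splits an arbitrarily chosen 2×2 matrix into its symmetric and antisymmetric parts and verifies that the symmetric part is the gradient of the quadratic form q_A(x) = ½〈x, Ax〉 (the matrix A and the test point are assumptions made only for this example).

```python
import numpy as np

# Arbitrarily chosen matrix in R^{2x2}; any square matrix works here.
A = np.array([[2.0, 1.0],
              [-3.0, 4.0]])

A_plus = (A + A.T) / 2   # symmetric part
A_circ = (A - A.T) / 2   # antisymmetric part

assert np.allclose(A, A_plus + A_circ)
assert np.allclose(A_plus, A_plus.T) and np.allclose(A_circ, -A_circ.T)

# q_A(x) = 0.5 * <x, Ax>; its gradient is the symmetric part applied to x.
def q_A(x):
    return 0.5 * x @ (A @ x)

x = np.array([1.0, -2.0])
eps = 1e-6
fd_grad = np.array([(q_A(x + eps * e) - q_A(x - eps * e)) / (2 * eps)
                    for e in np.eye(2)])
assert np.allclose(fd_grad, A_plus @ x, atol=1e-5)
print("A = A_plus + A_circ and grad q_A(x) = A_plus x verified.")
```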
At present there are two famous decompositions: the Asplund decomposition and the Borwein-Wiersma decomposition. In 1970, the Asplund decomposition was introduced by E. Asplund, who showed that a maximal monotone and at most single-valued operator A with int dom A ≠ ∅ is Asplund decomposable. In 2006, J. Borwein and H. Wiersma introduced the Borwein-Wiersma decomposition in [12], which is more restrictive. Borwein and Wiersma verified that a maximal monotone operator that is Borwein-Wiersma decomposable is also Asplund decomposable in finite-dimensional spaces. One goal of this thesis is to show that maximal monotone operators with linear graphs are Borwein-Wiersma decomposable.

The idea of representing a set by nice functions is classical. For example, for a closed set C, one can define the distance function

d_C(x) := inf_{c∈C} ‖x − c‖.

Then d_C is 1-Lipschitz and C = {x | d_C(x) = 0}. One can also define the lower semicontinuous function ι_C, where ι_C(x) = 0 if x ∈ C and +∞ otherwise; then C = {x | ι_C(x) = 0}.

In the later part of this thesis, we want to find auto-conjugate representations for a maximal monotone operator with a linear graph. An auto-conjugate function is a convex function. One very important result provided by Penot, Simons and Zălinescu shows that an auto-conjugate function f represents an induced maximal monotone operator A = G(f).

In order to create auto-conjugate representations, we introduce the Fitzpatrick function and the proximal average. In 1988, Simon Fitzpatrick defined in [18] a new convex function F_A, which is called the Fitzpatrick function associated with the monotone operator A. Recently, Fitzpatrick functions have turned out to be a very useful tool in the study of maximal monotone operators; see [2, 4, 8, 9, 18]. The proximal average was first introduced in [6] in the context of fixed point theory. In its simplest form, the proximal average is denoted here by P(f0,f1), where f0 and f1 are proper lower semicontinuous and convex functions. The recent works in [5, 7, 9] give numerous properties that are very attractive to convex analysts.

Now we come back to our question. In [24], Penot and Zălinescu showed that a maximal monotone operator A can be represented by an auto-conjugate function h_{F_A}, using a partial epigraphical sum. In [9], Bauschke and Wang showed that P(F_A, F∗_{A⊺}) is an auto-conjugate representation for a maximal monotone operator A. Until now there has been no clear formula for P(F_A, F∗_{A⊺}), even if A is a linear, continuous and monotone operator. In this thesis, we give an explicit formula for P(F_A, F∗_{A⊺}) associated with a maximal monotone operator A with a linear graph. We find that P(F_A, F∗_{A⊺}) = h_{F_A}. This is a new result.

The thesis is organized as follows.

Chapter 2 contains some auxiliary and basic results on monotone operators, subdifferentials and Moore-Penrose inverses.

In Chapter 3, it is shown that the inverse of a linear and monotone operator is Borwein-Wiersma decomposable.

Chapter 4 contains our first main result: a maximal monotone operator with a linear graph is Borwein-Wiersma decomposable. In addition, the remainder of this chapter gives some equivalent conditions of monotonicity of operators with linear graphs.

Chapter 5 discusses auto-conjugate representations. We give an explicit formula for P(F_A, F∗_{A⊺}) associated with a linear and monotone operator A, which is our second main result. Furthermore, we show that P(F_A, F∗_{A⊺}) = h_{F_A}.

In Chapter 6, we give a specific example of a nonlinear monotone operator, ∂(−ln), such that P(F_{∂(−ln)}, F∗⊺_{∂(−ln)}) ≠ h_{F_{∂(−ln)}}. This illustrates the necessity of the linearity assumption.

Finally, in Chapter 7 we extend the auto-conjugate representation results from linear and monotone operators to monotone operators with linear graphs. Here we also discuss one open question: expressing F_{A+B} in terms of F_A and F_B. We show that F_{A+B} = F_A □₂ F_B (here □₂ denotes the inf-convolution in the second variable). This generalizes one of the results provided by Bauschke, Borwein and Wang in [4].

Throughout this thesis, X denotes a Hilbert space with inner product 〈·,·〉 and norm ‖·‖, and Id is the identity mapping on X. The unit ball is B = {x ∈ X | ‖x‖ ≤ 1}. We further set

R+ = {x ∈ R | x ≥ 0}, R− = {x ∈ R | x ≤ 0}, R++ = {x ∈ R | x > 0}, R−− = {x ∈ R | x < 0}.

For a subset C ⊂ X, the closure of C is denoted by cl C. The arrow "→" is used for a single-valued mapping, whereas "⇉" denotes a set-valued mapping.

Chapter 2. Auxiliary results

2.1 Definitions

We first introduce some fundamental definitions.

Definition 2.1.1 Let T: X ⇉ X. We say T is monotone if

〈x∗ − y∗, x − y〉 ≥ 0

whenever x∗ ∈ T(x) and y∗ ∈ T(y).

Definition 2.1.2 Let A: X → X. We say A is positive semidefinite if 〈x, Ax〉 ≥ 0, ∀x ∈ X.

Example 2.1.3 Let A: X → X be linear. Then A is monotone if and only if A is positive semidefinite.

Proof. "⇒": For every x ∈ X, by monotonicity and linearity of A we have

〈Ax, x〉 = 〈Ax − A0, x − 0〉 ≥ 0,   (2.1)

where (2.1) uses A0 = 0 (since A is linear).

"⇐": For every x, y ∈ X, since A is positive semidefinite, we have

〈Ax − Ay, x − y〉 = 〈A(x − y), x − y〉 ≥ 0.   (2.2)  ∎

Definition 2.1.4 Let A: X → X be linear. We define q_A by

q_A(x) = ½〈x, Ax〉, ∀x ∈ X.   (2.3)

Definition 2.1.5 Let A: X → X be linear and continuous. Then A∗ is the unique linear and continuous operator satisfying

〈Ax, y〉 = 〈x, A∗y〉, ∀x, y ∈ X.   (2.4)

Remark 2.1.6 Let A: R^n → R^n be linear. Then A∗ = A⊺.

Definition 2.1.7 Let A: X → X be linear and continuous. We say that A is symmetric if A∗ = A.

Remark 2.1.8 Let A: X → X be linear and continuous. Then A is monotone ⇔ A∗ is monotone.

Definition 2.1.9 Let A: X → X be linear and continuous. We say that A is antisymmetric if A∗ = −A.

Remark 2.1.10 Let A: X → X be linear and continuous. Then (A + A∗)/2 is symmetric and (A − A∗)/2 is antisymmetric.

Definition 2.1.11 (Symmetric and antisymmetric part) Let A: X → X be linear and continuous. Then A+ = ½A + ½A∗ is the symmetric part of A, and A◦ = A − A+ = ½A − ½A∗ is the antisymmetric part of A.

Remark 2.1.12 Let A: X → X be linear and continuous. Then q_A = q_{A+}.

Proof. Let x ∈ X. Then

2q_{A+}(x) = 〈A+x, x〉 = 〈((A∗ + A)/2)x, x〉 = 〈A∗x/2, x〉 + 〈Ax/2, x〉 = 〈x/2, Ax〉 + 〈Ax/2, x〉 = 〈Ax, x〉 = 2q_A(x). ∎

Here is a basic property.

Fact 2.1.13 Let A: X → X be linear and continuous. Then A is monotone if and only if A+ is monotone.

Proof. By Example 2.1.3 and Remark 2.1.12. ∎

Definition 2.1.14 Let T: X ⇉ X be monotone. We call T maximal monotone if for every (y, y∗) ∉ gra T there exists (x, x∗) ∈ gra T with 〈x − y, x∗ − y∗〉 < 0.

Fact 2.1.15 Let A: X ⇉ X be maximal monotone and (x0, x0∗) ∈ X × X. Let Ã: X ⇉ X be such that gra Ã = gra A − (x0, x0∗) (i.e., a rigid translation of gra A). Then Ã is maximal monotone.

Proof. Follows directly from Definition 2.1.14. ∎

Example 2.1.16 Every continuous monotone operator A: X → X is maximal monotone.

Proof. [26, Example 12.7]. ∎

Let us introduce an essential result that will be used often.

Fact 2.1.17 Let A: X → X be linear, continuous and monotone. Then ker A = ker A∗ and cl(ran A) = cl(ran A∗).

Proof. See [4, Proposition 3.1]. ∎

Fact 2.1.18 Let A: X → X be linear and continuous. Then

q_A is convex ⇔ A is monotone ⇔ q_A(x) ≥ 0, ∀x ∈ X,

and ∇q_A = A+.

Proof. See [3, Theorem 3.6(i)]. ∎

Definition 2.1.19 Let f: X → ]−∞,+∞]. We say f is proper lower semicontinuous and convex if

f(x0) < +∞ for some x0 ∈ X,
f(λx + (1−λ)y) ≤ λf(x) + (1−λ)f(y), ∀λ ∈ ]0,1[, ∀x, y ∈ X, and
liminf_{y→x} f(y) = lim_{δ→0+} inf f(x + δB) ≥ f(x), ∀x ∈ X.

Definition 2.1.20 Let f: X → ]−∞,+∞] be proper lower semicontinuous and convex. The subdifferential mapping ∂f: X ⇉ X is defined by

x ↦ ∂f(x) := {x∗ | 〈x∗, y − x〉 + f(x) ≤ f(y), ∀y}.

One of the motivations for studying monotone operators comes from the following fact.

Fact 2.1.21 (Rockafellar) Let f: X → ]−∞,+∞] be proper lower semicontinuous and convex. Then ∂f is maximal monotone.

Proof. See [28, page 113] or [31, Theorem 3.28]. ∎

Fact 2.1.22 Let A: X → X be linear and monotone. Then A is maximal monotone and continuous.

Proof. See [23, Corollary 2.6 and Proposition 3.2.h]. ∎

Definition 2.1.23 For a set S ⊂ X, ι_S: X → ]−∞,+∞] stands for the indicator function defined by ι_S(x) := 0 if x ∈ S, and +∞ otherwise.

Fact 2.1.24 Suppose that S is a nonempty convex subset of X. Then ι_S is proper lower semicontinuous and convex if and only if S is closed.

Proof. See [22, Example (a)].
squaresolidDefinition 2.1.25 The space ℓ2 consists of all sequences of real numbers9Chapter 2. Auxiliary results(ξ1,ξ2,...) for whichbardbl(ξ1,ξ2,...)bardbl2 < ∞,wherebardbl(ξ1,ξ2,...)bardbl2 := (∞summationdisplayi=1|ξi|2)12,and where〈ξ,γ〉 =∞summationdisplayi=1〈ξi,γi〉, ∀ξ = (ξi)∞i=1, γ = (γi)∞i=1 ∈ ℓ2.Fact 2.1.26 (ℓ2, bardbl·bardbl2) is a Hilbert space.Proof. See [27, Example 3.24]. squaresolidExample 2.1.27 Let X be (ℓ2, bardbl·bardbl2) space and A: X → X: (xn)∞n=1 mapsto→(xnn )∞n=1. Then A is maximal monotone and continuous.Proof. Clearly, A is linear. Now we show A is monotone. Let x = (xn)∞n=1 ∈X. Then〈x, Ax〉 =∞summationdisplayn=1x2nn ≥ 0.By Example 2.1.3, A is monotone. By Fact 2.1.22, A is maximal monotoneand continuous. squaresolidLemma 2.1.28 Let S be a linear subspace of X. Suppose x ∈ X and α ∈Rsatisfy 〈x,s〉 ≤ α,∀s ∈ S. Then x⊥S.10Chapter 2. Auxiliary resultsProof. Let s ∈ S. By assumption, we have〈x, ks〉 ≤ α, ∀k ∈R ⇒〈x, s〉 ≤ 0, if k > 0〈x, s〉 ≥ 0, if k < 0⇒ 〈x,s〉 = 0, ∀s ∈ S⇒ x⊥S.squaresolidFact 2.1.29 Suppose that S is a closed linear subspace of X. Then∂ιS(x) = S⊥, ∀x ∈ S.Proof. Let x ∈ S. We havex∗ ∈ ∂ιS(x) ⇔ 〈x∗,s−x〉 ≤ ιS(s)−ιS(x), ∀s ∈ X⇔ 〈x∗,s−x〉 ≤ 0, ∀s ∈ S⇔ x⊥S (by Lemma 2.1.28).squaresolidFact 2.1.30 Let f,g: X → ]−∞,+∞] be proper lower semicontinuous andconvex. Suppose that f is differentiable everywhere. Then∂(f +g)(x) = ∇f(x)+∂g(x), ∀x ∈ domg.Proof. See [22, Theorem 3.23]. squaresolidExample 2.1.31 Suppose that j(x) = 12bardblxbardbl2, ∀x ∈ X and S ⊂ X is a11Chapter 2. Auxiliary resultsclosed subspace. Then ∂(j +ιS) is maximal monotone. In particular, ∂(j +ιS)(x) = x+S⊥, ∀x ∈ S.Proof. By Fact 2.1.24, ιS is proper lower semicontinuous and convex. Hencej +ιS is proper lower semicontinuous and convex. By Fact 2.1.21, ∂(j +ιS)is maximal monotone.Let x ∈ S. By Fact 2.1.30, Fact 2.1.18 and Fact 2.1.29,∂(j +ιS)(x) = ∇j(x) +∂ιS(x) = ∇qId(x) +∂ιS(x) = x+S⊥. squaresolidFact 2.1.32 Let A: X → X be linear and continuous such that ranA isclosed. Then ∂ιranA(x) = (ranA)⊥ = kerA∗, ∀x ∈ ranA.Proof. Let x ∈ ranA. By Fact 2.1.29, ∂ιranA(x) = (ranA)⊥. Now we showthat (ranA)⊥ = kerA∗. We havex∗ ∈ (ranA)⊥ ⇔ 〈x∗,Ax〉 = 0, ∀x ∈ X⇔ 〈A∗x∗,x〉 = 0, ∀x ∈ X⇔ A∗x∗ = 0 ⇔ x∗ ∈ kerA∗.squaresolidDefinition 2.1.33 Let A: X → X. The set-valued inverse mapping, A−1: X ⇉X, is defined byx ∈ A−1y ⇔ Ax = y.The following is the definition of the Moore-Penrose inverse, which willplay an important role in our Theorems.12Chapter 2. Auxiliary resultsDefinition 2.1.34 Let A: X → X be linear and continuous such that ranAis closed. The Moore-Penrose inverse of A, denoted by A†, is defined byA†b = argminA∗Au=A∗bbardblubardbl, ∀b ∈ X.In the following we always let A† stand for the Moore-Penrose inverse of alinear and continuous operator A.Remark 2.1.35 Let A: X → X be linear and continuous such that ranAis closed. Then by [19, Theorem 2.1.1],A†y ∈ A−1y, ∀y ∈ ranA.In particular, if A is bijective, thenA† = A−1.2.2 Properties of A†By the Remark above, we know that A† |ranA is a selection for A−1. Thisraises some questions: What is the relationship between them? If one ofthem is monotone, can we deduce that the other one is also monotone?Fact 2.2.1 Let A: X → X be linear and continuous. Then ranA is closed,if and only if, ranA∗ is closed.Proof. See [19, Theorem 1.2.4]. squaresolid13Chapter 2. Auxiliary resultsFact 2.2.2 Let A: X → X be linear and continuous such that ranA isclosed. Then A† is linear and continuous.Proof. See [19, Corollary 2.1.3]. 
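Since the Moore-Penrose inverse is central to the later decomposition results, a brief numerical sanity check may help. The sketch below is not part of the thesis; it uses NumPy's pinv and an arbitrarily chosen rank-deficient matrix to illustrate Definition 2.1.34 and Remark 2.1.35: for y ∈ ran A, A†y solves Ax = y and has minimal norm among all solutions, and A† = A⁻¹ when A is bijective.

```python
import numpy as np

# Arbitrarily chosen rank-deficient matrix, so A is not invertible.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
A_dag = np.linalg.pinv(A)               # Moore-Penrose inverse A†

# Remark 2.1.35: for y in ran A, A† y is a particular solution of Ax = y.
y = A @ np.array([3.0, -1.0])           # guarantees y lies in ran A
x_dag = A_dag @ y
assert np.allclose(A @ x_dag, y)

# Definition 2.1.34: A† y is the minimal-norm solution; every other solution
# differs from it by an element of ker A (cf. Corollary 2.2.7).
ker_dir = np.array([2.0, -1.0]) / np.sqrt(5.0)    # spans ker A for this A
for t in np.linspace(-2.0, 2.0, 9):
    assert np.linalg.norm(x_dag) <= np.linalg.norm(x_dag + t * ker_dir) + 1e-12

# If A is bijective, A† coincides with the usual inverse.
B = np.array([[2.0, 1.0], [0.0, 3.0]])
assert np.allclose(np.linalg.pinv(B), np.linalg.inv(B))
print("Moore-Penrose inverse checks passed.")
```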
squaresolidFact 2.2.3 Let A: X → X be linear and continuous such that ranA isclosed. Then ranA† = ranA∗.Proof. See [19, Theorem 2.1.2]. squaresolidFact 2.2.4 Let A: X → X be linear and continuous such that ranA isclosed. Then (A†)∗ = (A∗)†.Proof. See [19, Exercise 11 on page 111] and [21, Exercise 5.12.16 on page428]. squaresolidProposition 2.2.5 Let A: X → X be linear, continuous and monotonesuch that ranA is closed. ThenranA = ranA∗ = ranA† = ran(A†)∗, kerA = kerA∗ = ker(A†) = ker(A†)∗.Proof. By Fact 2.1.17 and Fact 2.2.1,ranA = ranA∗. (2.5)By Fact 2.2.3 and (2.5), we haveranA = ranA∗ = ranA†. (2.6)14Chapter 2. Auxiliary resultsBy Fact 2.2.1, ranA∗ is closed. By Remark 2.1.8, A∗ is monotone. Apply(2.6) with replacing A by A∗, we haveranA∗ = ranA∗∗ = ran(A∗)†. (2.7)By Fact 2.2.4 and (2.6), we haveranA∗ = ranA = ran(A†)∗ = ranA†.Then (ranA∗)⊥ = (ranA)⊥ = parenleftbigran(A†)∗parenrightbig⊥ = (ranA†)⊥,thusby Fact 2.1.32,kerA = kerA∗ = ker(A†) = ker(A†)∗ . squaresolidProposition 2.2.6 Let A: X → X be linear. Suppose y ∈ ranA. ThenA−1y = y∗ + kerA, ∀y∗ ∈ A−1y.Proof. Let y∗ ∈ A−1y and z∗ ∈ kerA. Then Ay∗ = y andA(y∗ +z∗) = Ay∗ +Az∗ = y + 0 = y.Thus y∗ +z∗ ∈ A−1y. Hence y∗ + kerA ⊂ A−1y.On the other hand, let y∗1 ∈ A−1y. Then Ay∗1 = y and for each y∗ ∈ A−1y,A(y∗1 −y∗) = Ay∗1 −Ay∗ = y −y = 0.Thus y∗1 −y∗ ∈ kerA, i.e., y∗1 ∈ y∗ + kerA. Then A−1y ⊂ y∗ + kerA. squaresolidCorollary 2.2.7 Let A: X → X be linear and continuous such that ranAis closed. Then A−1y = A†y + kerA, ∀y ∈ ranA.15Chapter 2. Auxiliary resultsProof. By Remark 2.1.35 and Proposition 2.2.6. squaresolidIn order to further illustrate the relationship between A† and A, weintroduce the concept of Projector.Fact 2.2.8 Let M be a closed subspace of X. For every vector x ∈ X,there is a unique vector m0 ∈ M such that bardblx − m0bardbl ≤ bardblx − mbardbl for allm ∈ M. Furthermore, a necessary and sufficient condition that m0 ∈ M bethe unique minimizing vector is that x−m0 be orthogonal to M.Proof. See [20, Theorem 2 on page 51]. squaresolidThe Fact above ensures that the following mapping is well defined.Definition 2.2.9 (Projector) Let M be a closed subspace of X. The Pro-jector, PM : X → M, is defined byPMx = argminm∈Mbardblx−mbardbl , x ∈ X. (2.8)Here is a result that will be very helpful for our problems.Proposition 2.2.10 Let A: X → X be linear and monotone such thatranA is closed. Then qA† = qA†PranA.16Chapter 2. Auxiliary resultsProof. Let x ∈ X. Then we have2qA†(x) = angbracketleftbigx, A†xangbracketrightbig (2.9)= angbracketleftbigPranAx+PkerAx, A†(PranAx+PkerAx)angbracketrightbig (2.10)= angbracketleftbigPranAx+PkerAx, A†(PranAx)angbracketrightbig (2.11)= angbracketleftbigPranAx, A†(PranAx)angbracketrightbig+angbracketleftbigPkerAx, A†(PranAx)angbracketrightbig (2.12)= angbracketleftbigPranAx, A†(PranAx)angbracketrightbig+angbracketleftbigPranAx, (A†)∗(PkerAx)angbracketrightbig (2.13)= angbracketleftbigPranAx, A†(PranAx)angbracketrightbig (2.14)= 2qA†(PranAx). (2.15)Note that (2.10) holds sinceX = ranA⊕kerA byFact 2.1.32 and Fact 2.1.17.(2.11) holds since PkerAx ∈ kerA = kerA† by Proposition 2.2.5.(2.14) holds by (A†)∗(PkerAx) = 0, since ker(A†)∗ = kerA by Proposi-tion 2.2.5. squaresolidFact 2.2.11 Let A: X → X be linear and continuous such that ranA isclosed. Then AA† = PranA.Proof. See [19, Theorem 2.2.2]. squaresolidCorollary 2.2.12 Let A: X → X be linear, continuous and monotone suchthat ranA is closed. Then A† is monotone.Proof. 
Since A† is linear and continuous by Fact 2.2.2, by Fact 2.1.18 itsuffices to show that qA†(x) ≥ 0,∀x ∈ X.17Chapter 2. Auxiliary resultsLet x ∈ X and y = A†(PranAx). Then Ay = AA†(PranAx) = PranAx byFact 2.2.11. By Proposition 2.2.10, we have2qA†(x) = 2qA†(PranAx) (2.16)= 〈A†(PranAx), PranAx〉 (2.17)= 〈y, Ay〉 (2.18)≥ 0, (2.19)in which (2.19) holds since A is monotone. squaresolidFact 2.2.13 Let A: X → X be linear and continuous such that ranA isclosed. Then A†† = A.Proof. See [19, Exercise 7 on page 110]. squaresolidTheorem 2.2.14 Let A: X → X be linear and continuous such that ranAis closed. Then A is monotone, if and only if, A† is monotone.Proof. “⇒” By Corollary 2.2.12.“⇐” Since ranA† = ranAis closed byProposition 2.2.5, weapplyFact 2.2.13and Corollary 2.2.12 to A† to conclude that A†† = A is monotone. squaresolidHere is a useful result that will be used very often.Proposition 2.2.15 Let A: X → X be linear, symmetric and continuoussuch that ranA is closed. ThenqA†(x+Ay) = qA†(x) +qA(y)+〈PranAx,y〉, ∀x,y ∈ X.18Chapter 2. Auxiliary resultsProof. Let x ∈ X, y ∈ X. ThenqA†(x+Ay) (2.20)= 12〈A†x+A†Ay, x+Ay〉 (2.21)= qA†(x) + 12〈A†Ay,Ay〉+ 12〈A†x,Ay〉+ 12〈A†Ay,x〉 (2.22)= qA†(x) + 12〈AA†Ay,y〉+ 12〈AA†x,y〉+ 12angbracketleftbigy,(A†A)∗xangbracketrightbig (2.23)= qA†(x) + 12〈PranA(Ay),y〉+ 12〈PranAx,y〉+ 12〈y,AA†x〉 (2.24)= qA†(x) +qA(y) + 12〈PranAx,y〉+ 12〈y,PranAx〉 (2.25)= qA†(x) +qA(y) +〈PranAx,y〉, (2.26)in which, (2.24) by Fact 2.2.11 and Fact 2.2.4, (2.25) by Fact 2.2.11. squaresolidCorollary 2.2.16 Let A: X → X be linear, symmetric and continuoussuch that ranA is closed. ThenqA†(Ax) = qA(x), ∀x ∈ X.Proof. Apply Proposition 2.2.15 to A with x replaced by 0 and y replacedby x. squaresolidFact 2.2.17 Let A: X → X be linear and continuous such that ranA isclosed. Then A†A = PranA†.Proof. See [19, Theorem 2.2.2]. squaresolid19Chapter 2. Auxiliary resultsCorollary 2.2.18 Let A: X → X be linear, continuous and monotone suchthat ranA is closed. ThenAA† = A†A = PranA.Proof. By Proposition 2.2.5, ranA = ranA†. Then follows directly fromFact 2.2.11 and Fact 2.2.17. squaresolid20Chapter 3Inverse of linear monotoneoperatorsIt is well known that a linear, continuous and monotone operator A can bedecomposed into the sum of a symmetric operator A+ and an antisymmetricoperator A◦: A = A+ +A◦.By Fact 2.1.18, A is also decomposed into the sum of the subdifferentialof a proper lower semicontinuous and convex function ∇qA and an anti-symmetric operator A◦: A = ∇qA + A◦. Such a decomposition is called aBorwein-Wiersma decomposition.3.1 Borwein-Wiersma decompositionDefinition 3.1.1 (Borwein-Wiersma decomposition) We say A: X ⇉X is Borwein-Wiersma decomposable or simply decomposable ifA = ∂f +S,where f is proper lower semicontinuous and convex, and S is antisymmetric.What kind of operators are Borwein-Wiersma decomposable?21Chapter 3. Inverse of linear monotone operatorsDefinition 3.1.2 We say A: Rn ⇉Rn is skew if there exists a linear andantisymmetric operator B such that B |domA= A |domA .Fact 3.1.3 Let A: Rn ⇉ Rn be maximal monotone and at most single-valued. Suppose that 0 ∈ domA,domA is open and A is Frechet differen-tiable on domA. Then A is Borwein-Wiersma decomposable, if and only if,A−∇fA is skew, wherefA: domA →R : x mapsto→integraldisplay 10〈A(tx),x〉dt.Proof. See [12, Theorem 3]. 
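To make Definition 3.1.1 and Fact 3.1.3 concrete in the simplest setting, here is a small numerical sketch (an illustration only, not taken from [12]): for a linear monotone operator A on R², the function f_A of Fact 3.1.3 reduces to q_A, and A − ∇f_A = A◦ is skew, so A = ∇q_{A+} + A◦ is a Borwein-Wiersma decomposition. The matrix and the test point are arbitrary choices made for the example.

```python
import numpy as np

# An arbitrarily chosen linear monotone operator on R^2.
A = np.array([[2.0, -1.0],
              [3.0, 1.0]])
A_plus = (A + A.T) / 2
A_circ = (A - A.T) / 2
assert np.all(np.linalg.eigvalsh(A_plus) >= -1e-12)   # A is monotone

# f_A(x) = int_0^1 <A(tx), x> dt from Fact 3.1.3; for linear A it equals q_A(x).
x = np.array([1.5, -0.5])
ts = (np.arange(10000) + 0.5) / 10000                 # midpoint rule on [0, 1]
f_A_x = np.mean([(A @ (t * x)) @ x for t in ts])
assert np.isclose(f_A_x, 0.5 * x @ (A @ x), atol=1e-6)

# Borwein-Wiersma decomposition: gradient of a convex quadratic plus a skew part.
assert np.allclose(A, A_plus + A_circ) and np.allclose(A_circ.T, -A_circ)
print("A = grad q_{A+} + A_circ with A_circ skew.")
```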
squaresolid3.2 Asplund decompositionHere we also introduce another famous decomposition: Asplund decompo-sition, see [1].Definition 3.2.1 We say A: X ⇉ X is acyclic with respect to a subset CifA = ∂f +S,where f is proper lower semicontinuous and convex, and S is monotone,which necessarily implies that ∂f is constant on C. If no set C is given,then C = domA.Definition 3.2.2 (Asplund decomposition) We say A: X ⇉ X is As-plund decomposable ifA = ∂f +S,22Chapter 3. Inverse of linear monotone operatorswhere f is proper lower semicontinuous and convex, and S is acyclic withrespect to domA.The following tells us which operators are Asplund decomposable.Fact 3.2.3 (Asplund) Let A: Rn ⇉ Rn be maximal monotone such thatintdom A negationslash= ∅ and A is at most single-valued. Then A is Asplund decom-posable.Proof. See [12, Theorem 13]. squaresolidBy the following result, we can find out the connection between thedecompositions.Fact 3.2.4 Let A: Rn →Rn be antisymmetric. Then A is acyclic.Proof. See [12, Proposition 15]. squaresolidRemark 3.2.5 Let A: Rn ⇉Rn be maximal monotone and Borwein-Wiersmadecomposable via∂f +S,where f is proper lower semicontinuous and convex, and S is antisymmetric.Then such a decomposition is also an Asplund decomposition.3.3 The Borwein-Wiersma decomposition of theinverseAs mentioned earlier, alinear, continuous and monotone operator isBorwein-Wiersma decomposable. It is natural to ask whether its set-valued inverse23Chapter 3. Inverse of linear monotone operatorsmapping is also Borwein-Wiersma decomposable.Theorem 3.3.1 Let A: X → X be linear, continuous and monotone suchthat ranA is closed. ThenA−1 = ∂f + (A†)◦,where f := qA† + ιranA is proper lower semicontinuous and convex, and(A†)◦ is antisymmetric. In particular, A−1 is decomposable.Proof. By Fact 2.2.2 and Corollary 2.2.12, A† is linear, continuous andmonotone. Then by Fact 2.1.18 we have qA† is convex function, differentiableand ∇qA† = (A†)+. Since ranA is a closed subspace of X, by Fact 2.1.24ιranA is proper lower semicontinuous and convex. Hence f is proper lowersemicontinuous and convex.We show that the convex function f satisfies∂f(x) + (A†)◦x =A†x+ kerA, if x ∈ ranA;∅, otherwise.(3.1)Indeed, since f is convex, ∀x ∈ ranA we have∂f(x) = ∂(qA† +ιranA)(x)= ∇qA†(x) +∂ιranA(x) ( by Fact 2.1.30)= (A†)+x+ kerA∗ (3.2)= (A†)+x+ kerA, (3.3)24Chapter 3. Inverse of linear monotone operatorswhere (3.2) holds by Fact 2.1.32, and (3.3) by Proposition 2.2.5.Thus∂f(x)+ (A†)◦x = (A†)+x+ kerA + (A†)◦x = A†x + kerA, ∀x ∈ ranA.If x negationslash∈ ranA = domf, by definition ∂f(x) = ∅. Hence (3.1) holds.By Corollary 2.2.7, we have thatA−1x = A†x+ kerA, ∀x ∈ ranA. (3.4)Thus,A−1x =A†x+ kerA, if x ∈ ranA;∅, otherwise.(3.5)By (3.1) and (3.5), we have A−1 = ∂f +(A†)◦. squaresolidProposition 3.3.2 Assume T : X ⇉ X is monotone, then T−1 is mono-tone. Moreover, if T is maximal monotone, then so too is T−1.Proof. Use Definition 2.1.1 and Definition 2.1.14 directly. squaresolidDue to Phelps and Simons, we obtain the following Proposition.Proposition 3.3.3 Let A: X → X be linear, continuous and monotonesuch that A is one-to-one and symmetric. ThenA−1 = ∂f,where f(x) := supy∈ranAbraceleftbig〈A−1y,x〉− 12〈A−1y,y〉bracerightbig (∀x ∈ X) is proper lower25Chapter 3. Inverse of linear monotone operatorssemicontinuous and convex. If X = Rn, then A−1 = ∇qA−1. In particular,A−1 is decomposable.Proof. By Example 2.1.16, A is maximal monotone. Then by Proposi-tion 3.3.2, A−1 is maximal monotone. 
Since A is linear and one-to-one ,A−1 is single-valued and linear on ranA.In the following we show that〈x,A−1y〉 = 〈y,A−1x〉, ∀x,y ∈ ranA.Let x,y ∈ ranA. Then there exist unique x1,y1 ∈ X such that x = Ax1,y =Ay1. We have〈x,A−1y〉 = 〈Ax1,y1〉 = 〈x1,Ay1〉 = 〈A−1x,y〉.By [23, Theorem 5.1], f is proper lower semicontinuous and convex, andA−1 = ∂f.If x = Rn, we have A is invertible. By assumption, A−1 is symmetric andmonotone. By Fact 2.1.18, A−1 = ∇qA−1. squaresolid26Chapter 4Monotone operators withlinear graphsTheorem 3.3.1 tells us that the set-valued inverse A−1 of a linear, continuousand monotone operator A is Borwein-Wiersma decomposable. Naturally,this raises the following question: Are maximal monotone operators withlinear graphs also Borwein-Wiersma decomposable? This chapter answersthe question above. It also gives some important equivalent conditions ofmonotonicity and maximal monotonicity of operators with linear graphs.Let us first introduce some interesting results about these operators.4.1 Linear graphFact 4.1.1 Let S,M be closed linear subspaces of X. ThenS = M ⇔ S⊥ = M⊥, S negationslash= M ⇔ S⊥ negationslash= M⊥.Proof. Follows directly by S⊥⊥ = S,M⊥⊥ = M. squaresolid27Chapter 4. Monotone operators with linear graphsDefinition 4.1.2 Let A: X ⇉X. We define domA,ranA bydomA := {x | Ax negationslash= ∅}ranA := {x∗ | x∗ ∈ Ax, ∃x ∈ domA}.Proposition 4.1.3 Let A: X ⇉ X such that graA is a linear subspace ofX ×X. For every x,y ∈ domA, the following hold.(i) A0 is a linear subspace of X.(ii) Ax = x∗ +A0, ∀x∗ ∈ Ax.(iii) αAx+βAy = A(αx +βy), ∀α,β ∈R with α negationslash= 0 or β negationslash= 0.(iv) If A is monotone, then domA⊥A0, hence domA ⊂ (A0)⊥,A0 ⊂(domA)⊥.(v) If A is monotone, then〈x,x∗〉 ≥ 0, ∀(x,x∗) ∈ graA.Proof. Obviously,domA =braceleftBigx ∈ X| (x,y) ∈ graA, ∃y ∈ XbracerightBig(4.1)and domA is a linear subspace of X.(i): ∀α,β ∈R, ∀x∗,z∗ ∈ A0 we have(0,x∗) ∈ graA (0,z∗) ∈ graA.28Chapter 4. Monotone operators with linear graphsAs graA is a linear subspace of X ×X,α(0, x∗) +β(0, z∗) = (0, αx∗ +βz∗) ∈ graA.This gives αx∗ +βz∗ ∈ A0. Hence A0 is a linear subspace.(ii): We first show thatx∗ +A0 ⊂ Ax, ∀x∗ ∈ Ax.Take x∗ ∈ Ax, z∗ ∈ A0. Then(x, x∗) ∈ graA and (0, z∗) ∈ graA.Since graA is a linear subspace,(x, x∗ +z∗) ∈ graA.That is, x∗ +z∗ ∈ Ax. Then x∗ +A0 ⊂ Ax.On the other hand, let x∗,y∗ ∈ Ax. We have(x, x∗) ∈ graA, (x, y∗) ∈ graA.Since graA is a linear subspace,(x−x, y∗ −x∗) = (0, y∗ −x∗) ∈ graA.Then y∗−x∗ ∈ A0. That is, y∗ ∈ x∗ +A0. Thus Ax ⊂ x∗ +A0. Hence (ii)29Chapter 4. Monotone operators with linear graphsholds.(iii): Let α,β ∈R. Take x∗ ∈ Ax,y∗ ∈ Ay. Then we have(x, x∗) ∈ graA, (y, y∗) ∈ graA.Since graA is a linear subspace, we have (αx + βy, αx∗ + βy∗) ∈ graA.That is, αx∗ +βy∗ ∈ A(αx +βy).Then by (ii) we haveAx = x∗ +A0, Ay = y∗ +A0, A(αx +βy) = αx∗ +βy∗ +A0. (4.2)Suppose that α negationslash= 0. By (i)αA0 +βA0 = A0. (4.3)Then by (4.2) and (4.3),αAx +βAy = α(x∗ +A0) +β(y∗ +A0)= αx∗ +βy∗ + (αA0 +βA0)= αx∗ +βy∗ +A0= A(αx +βy).(iv): Pick x ∈ domA. By (4.1) there exists x∗ ∈ X such that (x,x∗) ∈graA. Then by monotonicity of A, we have〈x−0, x∗ −z∗〉 ≥ 0, ∀z∗ ∈ A0.30Chapter 4. Monotone operators with linear graphsThat is,〈x, x∗〉 ≥ 〈x,z∗〉, ∀z∗ ∈ A0. (4.4)Since A0 is a linear subspace by (i), by Lemma 2.1.28 and (4.4),x⊥A0, ∀x ∈ domA⇒ domA⊥A0⇒ domA ⊂ (A0)⊥, A0 ⊂ (domA)⊥.(v): Since (0,0) ∈ graA,〈x,x∗〉 = 〈x−0,x∗ −0〉 ≥ 0, ∀(x,x∗) ∈ graA.squaresolidRemark 4.1.4 Proposition 4.1.3(ii) is a useful representation. 
It meansAx = tildewideAx+A0, ∀x ∈ domA, tildewideAx ∈ Ax.Later, we will show the selection map tildewideA can be chosen to be linear!4.2 Maximal monotonicity identificationThe next three results are well known.Fact 4.2.1 Let A: X ⇉ X be maximal monotone. Then Ax is closed andconvex, ∀x ∈ X.31Chapter 4. Monotone operators with linear graphsProof. Fix x ∈ X. If x /∈ domA, then Ax = ∅ is closed and convex. Sosuppose x ∈ domA. Let (x∗n) ⊂ Ax such that x∗n → x∗. In the followingwe show that x∗ ∈ Ax. For every (y,y∗) ∈ graA, by monotonicity of A, wehave〈y −x, y∗ −x∗n〉 ≥ 0. (4.5)Letting n → ∞ in (4.5), we see that〈y −x, y∗ −x∗〉 ≥ 0, ∀(y,y∗) ∈ graA. (4.6)By (4.6) and maximal monotonicity of A, we have (x,x∗) ∈ graA. That is,x∗ ∈ Ax. Hence Ax is closed.Now we show that Ax is convex. Let δ ∈ [0,1]. For every x∗1,x∗2 ∈ Ax, wehave〈y −x, y∗ −x∗1〉 ≥ 0 (4.7)〈y −x, y∗ −x∗2〉 ≥ 0, ∀(y,y∗) ∈ graA. (4.8)Adding (4.7)×δ and (4.8)×(1−δ) yields〈y −x, y∗ −(δx∗1 + (1−δ)x∗2)〉 ≥ 0, ∀(y,y∗) ∈ graA. (4.9)Since A is maximal monotone, (x,δx∗1 + (1−δ)x∗2) ∈ graA, i.e., δx∗1 + (1−δ)x∗2 ∈ Ax. Thus Ax is convex. squaresolidProposition 4.2.2 Let A,B: X ⇉ X be monotone. Then A+B is mono-tone.32Chapter 4. Monotone operators with linear graphsProof. Let (x,x∗),(y,y∗) ∈ gra(A +B). Then there exist(x,x∗1),(y,y∗1) ∈ graA(x,x∗2),(y,y∗2) ∈ graBsuch thatx∗ = x∗1 +x∗2y∗ = y∗1 +y∗2.Then〈x−y, x∗ −y∗〉 = 〈x−y, x∗1 +x∗2 −y∗1 −y∗2〉= 〈x−y, x∗1 −y∗1〉+〈x−y, x∗2 −y∗2〉 ≥ 0.Hence A+B is monotone. squaresolidFact 4.2.3 Let A: X ⇉X be maximal monotone. Then domA is convex.Proof. See [31, Theorem 3.11.12]. squaresolidFact 4.2.4 Let A: X ⇉X be maximal monotone. Then A+∂ιdomA = A.Proof. By Fact 4.2.3 and Fact 2.1.24, ιdomA is proper lower semicontinuousand convex. Then by Proposition 2.1.21, ∂ιdomA is monotone. Then byFact 4.2.2, A+∂ιdomA is monotone.33Chapter 4. Monotone operators with linear graphsSuppose x ∈ domA. Then since 0 ∈ ∂ιdomA(x) ,(A +∂ιdomA)(x) = Ax +∂ιdomA(x) ⊃ Ax.Suppose x /∈ domA. Then since Ax = ∅,(A +∂ιdomA)(x) ⊃ Ax, ∀x ∈ X.Since A is maximal monotone, A +∂ιdomA = A. squaresolidThe following are interesting properties about maximal monotonicity ofmonotone operators with linear graphs.Proposition 4.2.5 Let A: X ⇉X be monotone such that graA is a linearsubspace of X ×X. ThenA is maximal monotone ⇒ domA = (A0)⊥.Proof. Suppose to the contrary that domA negationslash= (A0)⊥. By Proposition 4.1.3(i)and Fact 4.2.1, A0is aclosed subspace. By Fact 4.1.1, (domA)⊥ negationslash= (A0)⊥⊥ =A0. Then by Proposition 4.1.3(iv), we have(domA)⊥ = (domA)⊥ supersetornoteql A0. (4.10)Thus there exists ω∗ ∈ (domA)⊥ \A0. By ω∗ ∈ (domA)⊥, we have〈ω∗,x〉 = 0, ∀x ∈ domA. (4.11)34Chapter 4. Monotone operators with linear graphsSince ω∗ /∈ A0, (0,ω∗) /∈ graA. By maximal monotonicity of A, there exists(x0,x∗0) ∈ graA such that〈x∗0,x0〉−〈ω∗,x0〉 = 〈x∗0 −ω∗,x0 −0〉 < 0. (4.12)By (4.11) and (4.12), 〈x∗0,x0〉 < 0, which is a contradiction to Proposi-tion 4.1.3(v). squaresolidProposition 4.2.6 Let A: X ⇉X be monotone such that graA is a linearsubspace of X ×X and A0 is closed. ThendomA = (A0)⊥ ⇒ A is maximal monotone.Proof. Let (x,x∗) ∈ X ×X satisfy that〈x−y, x∗ −y∗〉 ≥ 0, ∀(y,y∗) ∈ graA. (4.13)In the following we will verify that (x,x∗) ∈ graA.By (4.13) we have〈x, x∗〉 ≥ 〈x, z∗〉, ∀z∗ ∈ A0.Since A0 is a linear subspace by Proposition 4.1.3(i), by Lemma 2.1.28 wehave x⊥A0, i.e., x ∈ (A0)⊥ = domA.Take x∗0 ∈ Ax. For every v∗ ∈ Av, we have x∗0 + v∗ ∈ A(x + v) by Proposi-35Chapter 4. Monotone operators with linear graphstion 4.1.3(iii). 
By (4.13), we have〈x−(x +v), x∗ −(x∗0 +v∗)〉 ≥ 0, ∀(v,v∗) ∈ graA.That is,〈v, v∗〉 ≥ 〈v, x∗ −x∗0〉, ∀(v,v∗) ∈ graA. (4.14)By Proposition 4.1.3(iii), we have 1nv∗ ∈ A(1nv). Then by (4.14),〈1nv, 1nv∗〉 ≥ 〈1nv, x∗ −x∗0〉, ∀(v,v∗) ∈ graA. (4.15)Multiply (4.15) both sides by n and then let n → ∞ to see that〈v, x∗ −x∗0〉 ≤ 0, ∀v ∈ domA. (4.16)Since domA is a linear subspace, by Lemma 2.1.28, (x∗−x∗0)⊥domA. SinceA0 is closed, we have(x∗ −x∗0) ∈ (domA)⊥ = (A0)⊥⊥ = A0. (4.17)According to Proposition 4.1.3(ii), x∗ ∈ x∗0 +A0 = Ax. squaresolidHere is an important result in this chapter.Theorem 4.2.7 Let A: X ⇉ X be monotone such that graA is a linearsubspace of X ×X and domA is closed. ThenA is maximal monotone ⇔ domA = (A0)⊥,A0 is closed.36Chapter 4. Monotone operators with linear graphsProof. Since A is maximal monotone, A0 is closed by Fact 4.2.1. CombineProposition 4.2.5 and Proposition 4.2.6. squaresolidTheorem 4.2.7 gives an equivalent condition in infinite-dimensionalspaces.When we consider it in finite-dimensional spaces, can we get further results?Now we discuss this in detail.Proposition 4.2.8 Let A: Rn ⇉ Rn be monotone such that graA is alinear subspace of Rn ×Rn. Then dim(graA) = dim(domA)+ dimA0.Proof. We shall construct a basis of graA.By Proposition 4.1.3(i), A0 is a linear subspace. LetbraceleftBigx∗1,··· ,x∗kbracerightBigbea basis of A0 andbraceleftBigxk+1,...,xlbracerightBigbe a basis of domA. We show thatbraceleftBig(0,x∗1),...,(0,x∗k),(xk+1,x∗k+1),··· ,(xl,x∗l)bracerightBigisa basisof graA,wherex∗i ∈Axi, i ∈ {k + 1,··· ,l}. We first show thatbraceleftBig(0,x∗1),...,(0,x∗k),(xk+1,x∗k+1),··· ,(xl,x∗l)bracerightBigis linearly independent. Let αi,i ∈ {1,··· ,l}, satisfy thatα1(0,x∗1) +···+αk(0,x∗k) +αk+1(xk+1,x∗k+1) +···+αl(xl,x∗l ) = 0.Henceαk+1xk+1 +···+αlxl = 0 (4.18)α1x∗1 +···+αkx∗k +αk+1x∗k+1 +···+αlx∗l = 0. (4.19)37Chapter 4. Monotone operators with linear graphsSincebraceleftBigxk+1,...,xlbracerightBigis linearly independent, by (4.18) we have αi = 0,i ∈{k+1,··· ,l}. Then sincebraceleftBigx∗1,··· ,x∗kbracerightBigis linearly independent, by (4.19) wehave αi = 0,i ∈ {1,··· ,k}. Thusαi = 0,i ∈ {1,··· ,l}. HencebraceleftBig(0,x∗1),...,(0,x∗k),(xk+1,x∗k+1),··· ,(xl,is linearly independent.Let (x,x∗) ∈ graA. Then there exists βi, i ∈ {k + 1,··· ,l} satisfying thatβk+1xk+1 +···+βlxl = x.Thusβk+1x∗k+1 +···+βlx∗l ∈ Ax.By Proposition 4.1.3(ii), there exists z∗ ∈ A0 such thatβk+1x∗k+1 +···+βlx∗l +z∗ = x∗.Then there exists βi, i ∈ {1,··· ,k} satisfying thatz∗ = β1x∗1 +···+βkx∗k.Thus(x,x∗) = β1(0,x∗1) +···+βk(0,x∗k)+βk+1(xk+1,x∗k+1)+···+βl(xl,x∗l ).HencebraceleftBig(0,x1),...,(0,xk),(xk+1,x∗k+1),··· ,(xl,x∗l )bracerightBigisa basisof graA. Thendim(graA) = dim(domA) + dim(A0).38Chapter 4. Monotone operators with linear graphssquaresolidFrom Proposition 4.2.8, we now get a satisfactory characterization.Proposition 4.2.9 Let A: Rn ⇉ Rn be monotone such that graA is alinear subspace of Rn ×Rn. ThenA is maximal monotone ⇔ dimgraA = n.Proof. Since linear subspaces are closed in finite-dimensional spaces, byProposition 4.1.3(i) and Theorem 4.2.7 we haveA is maximal monotone ⇔ domA = (A0)⊥. (4.20)Assume that A is maximal monotone. ThendomA = (A0)⊥.Then Proposition 4.2.8 impliesdim(graA) = dim(domA) + dim(A0)= dim((A0)⊥) + dim(A0)= n,as (A0)⊥ +A0 = Rn.Conversely, let dim(graA) = n. By Proposition 4.2.8, we have39Chapter 4. Monotone operators with linear graphsdim(domA) = n−dim(A0).As dim((A0)⊥) = n−dim(A0) and domA ⊂ (A0)⊥ by Proposition 4.1.3(iv),we havedomA = (A0)⊥.By (4.20), A is maximal monotone. 
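Proposition 4.2.9 gives a finite-dimensional test that is easy to run numerically. The sketch below is only an illustration: it encodes a linear graph in R² × R² by a basis matrix, checks that its dimension equals n, and samples the graph to confirm monotonicity. The particular graph (essentially the operator of Example 2.1.31 with S = span{e1}) is chosen only for this example.

```python
import numpy as np

n = 2
# Basis of gra A in R^n x R^n; rows are (x, x*). This encodes the operator
# A x = x + S_perp for x in S = span{e1} (essentially Example 2.1.31).
G = np.array([[1.0, 0.0, 1.0, 0.0],     # ((1,0), (1,0))
              [0.0, 0.0, 0.0, 1.0]])    # ((0,0), (0,1))

# Proposition 4.2.9: maximal monotonicity <=> monotone and dim gra A = n.
assert np.linalg.matrix_rank(G) == n

# On a linear graph, monotonicity reduces to <x, x*> >= 0 for (x, x*) in gra A;
# sample random points of the graph as a sanity check.
rng = np.random.default_rng(0)
for _ in range(1000):
    c = rng.normal(size=G.shape[0])
    point = c @ G
    x, xstar = point[:n], point[n:]
    assert x @ xstar >= -1e-12

print("dim gra A = n and A is monotone, hence A is maximal monotone.")
```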
squaresolid4.3 Constructing a good selectionWhen we proved Theorem 3.3.1, most of the much focused on finding a lin-ear, continuous and monotone operator tildewideA such that tildewideA |domA−1 is a selectionof A−1. Now for a maximal monotone operator A with a linear graph, wealso want to find such an operator.Fact 4.3.1 Let S be a nonempty closed convex subset of X. Then for eachx ∈ X there exists a unique s0 ∈ S such thatbardblx−s0bardbl = minbraceleftBigbardblx−sbardbl| s ∈ SbracerightBig.Proof. See [19, Corollary 1.1.5]. squaresolidBy Fact 4.3.1, we can define the projector onto a nonempty closed convexsubset of X.40Chapter 4. Monotone operators with linear graphsDefinition 4.3.2 Let S be a nonempty closed convex subset of X. We definethe projector PS : X → X byPSx = argmins∈Sbardblx−sbardbl, x ∈ X.Fact 4.3.3 Let S be a closed linear subspace of X and x0 ∈ X. Then PS islinear, continuous andPS+x0x = x0 +PS(x−x0), ∀x ∈ X (4.21)PSx+PS⊥x = x (4.22)P∗S = PS. (4.23)Proof. (4.21): Let x ∈ S. By Definition 4.3.2,bardblx−x0 −PS(x−x0)bardbl ≤ bardblx−x0 −sbardbl = bardblx−(x0 +s)bardbl, ∀s ∈ S.By Fact 4.3.1, PS(x−x0) ∈ S. Thus x0 + PS(x−x0) ∈ S + x0. By Defini-tion 4.3.2, PS+x0x = x0 +PS(x−x0).(4.22) holdsbyS⊕S⊥ = X. For theother partssee[27, Theorem 5.51(a)]. squaresolidDefinition 4.3.4 Let A: X ⇉ X such that graA is a linear subspace ofX ×X. We define QA byQAx =PAxx, if x ∈ domA;∅, otherwise.41Chapter 4. Monotone operators with linear graphsProposition 4.3.5 Let A: X ⇉ X be maximal monotone such that graAis a linear subspace of X × X. Then QA is single-valued on domA and aselection of A.Proof. By Fact 4.2.1, Ax is nonempty closed convex, for every x ∈ domA.Then by Fact 4.3.1, QA is single-valued on domA and a selection of A. squaresolidProposition 4.3.6 Let A: X ⇉ X be maximal monotone such that graAis a linear subspace of X×X. Then QA is monotone, and linear on domA.Moreover,QAx = P(A0)⊥(Ax), ∀x ∈ domA. (4.24)Proof. By Proposition 4.1.3(i) and Fact 4.2.1, A0 is a closed subspace. Letx∗ ∈ Ax. ThenQAx = PAxx = Px∗+A0x (4.25)= x∗ +PA0(x−x∗) = x∗ +PA0x−PA0x∗ (4.26)= PA0x +P(A0)⊥x∗ (4.27)= P(A0)⊥x∗ (4.28)= P(A0)⊥(Ax), (4.29)in which, (4.25) holdsbyProposition 4.1.3(ii), (4.26) and(4.27) byFact 4.3.3.(4.28) holds since PA0x = 0 by Proposition 4.1.3(iv).Thus(4.24) holds. SinceQAxis single-valued byRemark4.3.5, thenP(A0)⊥(Ax)is single-valued.42Chapter 4. Monotone operators with linear graphsNow we show that QA is linear on domA. Take x,y ∈ domA and α,β ∈R.If α = β = 0, by Proposition 4.1.3(i), we haveQA(αx +βy) = QA0 = PA00 = 0 = αQAx+βQAy. (4.30)Assume that α negationslash= 0 or β negationslash= 0. By (4.24), we haveQA(αx +βy) = P(A0)⊥A(αx +βy) (4.31)= αP(A0)⊥(Ax) +βP(A0)⊥(Ay) (4.32)= αQAx+βQAy, (4.33)where (4.32) holds by Proposition 4.1.3(iii) and Fact 4.3.3, (4.33) by (4.24).By Proposition 4.3.5, QA is a selection of A. Since A is monotone, QA ismonotone. squaresolidProposition 4.3.7 Let Y be a closed linear subspace of X. Let A: X ⇉Xbe monotone such that A is linear on Y and at most single-valued. ThenPY APY is linear, continuous and maximal monotone.Proof. Clearly, PY APY is linear since PY is linear by Fact 4.3.3 and A islinear on Y. In the following we show that PY APY is monotone.Let x ∈ X. Then〈x, PY APY x〉 = 〈P∗Y x, A(PY x) = 〈PY x, A(PY x)〉 (4.34)≥ 0, (4.35)43Chapter 4. Monotone operators with linear graphswhere (4.34) holds by Fact 4.3.3. Inequality (4.35) holds since A is mono-tone.By Example 2.1.3, PY APY is monotone. 
Then by Fact 2.1.22, we havePY APY is continuous and maximal monotone. squaresolidNow we show that we found the operator we were looking for.Corollary 4.3.8 Let A: X ⇉X be maximal monotone such that graA isa linear subspace of X ×X and domA is closed. Then PdomAQAPdomA islinear, continuous and maximal monotone. Moreover, PdomAQAPdomA =QAPdomA,(PdomAQAPdomA) |domA= QA and Ax = (PdomAQAPdomA)x +A0,∀x ∈ domA.Proof. The former holds by Proposition 4.3.6 and Proposition 4.3.7. ByProposition 4.3.6 and Proposition 4.2.5, we have (QAPdomA)x ∈ (A0)⊥ =domA,∀x ∈ X. Then by Proposition 4.3.5, (PdomAQAPdomA)x = QAx ∈Ax,∀x ∈ domA. By Proposition 4.1.3(ii), Ax = (PdomAQAPdomA)x +A0,∀x ∈ domA. squaresolidRemark 4.3.9 By Corollary 4.3.8, we know that QA |domA is continuouson domA. But if we omit the assumption that domA be closed, then wecan’t guarantee that QA |domA is continuous on domA.Example 4.3.10 Let X be (ℓ2, bardbl·bardbl2) space and A: X → X: (xn)∞n=1 mapsto→(xnn )∞n=1. Then QA−1 |domA−1 is not continuous on domA−1.Proof. We first show that QA−1 = A−1 is maximal monotone with a lineargraph, but domA−1 = ranA is not closed.Clearly, A is linear and one-to-one. Thus QA−1 = A−1 and graA−1 is a44Chapter 4. Monotone operators with linear graphslinear subspace. By Example 2.1.27, A is maximal monotone. By Proposi-tion 3.3.2, A−1 is maximal monotone.By Proposition 4.2.5, ranA = domA−1 = (A−10)⊥ = (0)⊥ = X. Now weshow that ranA is not closed, i.e, ranA negationslash= X.On the contrary, assume ranA = X. Let x = (1/n)∞n=1 ∈ X. Then we haveA−1x = (1)∞n=1 /∈ X. This is a contradiction. Hence ranA is not closed.In the following we show that QA−1 = A−1 is not continuous on ranA =domA−1.Take {1nen} ⊂ ranA, where en = (0,··· ,0,1,0,···) : the nth entry is 1 andthe others are 0. Clearly, 1nen → 0. But bardblA−1(1nen)−0bardbl = bardblenbardblnotarrowright 0. HenceQA−1 = A−1 is not continuous on ranA. squaresolid4.4 The first main resultNow we come to our first main result in this thesis.Theorem 4.4.1 Let A: X ⇉ X be maximal monotone such that graA is alinear subspace of X ×X and domA is closed. ThenA = ∂f + tildewideA◦,where f := qtildewideA + ιdomA is proper lower semicontinuous and convex, tildewideA =PdomAQAPdomA is linear, continuous and maximal monotone, and tildewideA◦ isantisymmetric. In particular, A is decomposable.Proof. By Corollary 4.3.8, tildewideA is linear, continuous and maximal monotone.Then by Fact 2.1.18, qtildewideA is convex, differentiable and ∇qtildewideA = tildewideA+. Since45Chapter 4. Monotone operators with linear graphsdomA is a closed subspace, by Fact 2.1.24 ιdomA is proper lower semicon-tinuous and convex. Hence f is proper lower semicontinuous and convex.By Theorem 4.2.7, (domA)⊥ = (A0)⊥⊥ = A0. Let x ∈ domA. We have∂f(x) = ∂(qtildewideA +ιdomA)(x)= ∇qtildewideA(x) +∂ιdomA(x) (By Fact 2.1.30)= tildewideA+x+ (domA)⊥ (by Fact 2.1.18 and Fact 2.1.29)= tildewideA+x+A0.Thus ∀x ∈ domA,∂f(x) + tildewideA◦x = tildewideA+x+A0 + tildewideA◦x = tildewideAx+A0 (4.36)= Ax, (4.37)where (4.37) holds by Corollary 4.3.8. If x /∈ domA, by definition ∂f(x) =∅ = Ax.Hence we have Ax = ∂f(x) + tildewideA◦x, ∀x ∈ X. squaresolidIn general, a convex cone is not a linear subspace. We wonder if thereexists a maximal monotone operator with a convex cone graph such that itsgraph is not a linear subspace.The following gives a negative answer.Fact 4.4.2 A convex cone K is a linear subspace, if and only if, −K ⊂ K.Proof. See [25, Theorem 2.7]. squaresolid46Chapter 4. 
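Theorem 4.4.1 can be traced through on a two-dimensional example. The following sketch is only an illustration under assumed data: the operator (dom A = span{e1}, A0 = span{e2}, A(t, 0) = (t, 2t) + A0) is a toy example and the helper names are mine. It builds Ã = P_domA Q_A P_domA as in Corollary 4.3.8 and checks that, on dom A, Ax and Ã+x + Ã◦x differ by an element of A0 = (dom A)⊥, which is the content of A = ∂(q_Ã + ι_domA) + Ã◦ for this example.

```python
import numpy as np

# Toy maximal monotone operator with linear graph on R^2 (an assumed example):
# dom A = span{e1}, A0 = span{e2} = (dom A)^perp, A(t, 0) = (t, 2t) + A0.
e1, e2 = np.eye(2)
P_dom  = np.outer(e1, e1)           # projector onto dom A = (A0)^perp
P_perp = np.outer(e2, e2)           # projector onto A0 = (dom A)^perp

def A_rep(x):                       # one representative of Ax, for x in dom A
    return np.array([x[0], 2.0 * x[0]])

def Q_A(x):                         # Q_A x = P_{(A0)^perp}(Ax), Proposition 4.3.6
    return P_dom @ A_rep(x)

# A_tilde = P_domA Q_A P_domA as a matrix (Corollary 4.3.8).
A_tilde = np.column_stack([P_dom @ Q_A(P_dom @ e) for e in (e1, e2)])
A_plus  = (A_tilde + A_tilde.T) / 2        # gradient of q_{A_tilde}
A_circ  = A_tilde - A_plus                 # antisymmetric part

# Theorem 4.4.1: for x in dom A, Ax = (A_plus x + A0) + A_circ x, so the
# representative A_rep(x) differs from A_plus x + A_circ x by an element of A0.
x = 3.0 * e1
residual = A_rep(x) - (A_plus @ x + A_circ @ x)
assert np.allclose(residual, P_perp @ residual)    # residual lies in A0
print("Decomposition of Theorem 4.4.1 verified on dom A for this example.")
```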
Monotone operators with linear graphsProposition 4.4.3 Let A: X ⇉ X be maximal monotone such that graAis a convex cone. Then graA is a linear subspace of X ×X.Proof. By Fact 4.4.2, it suffices to show that−graA ⊂ graA.Assume that (x,x∗) ∈ graA. We show that −(x,x∗) ∈ graA. Let (y,y∗) ∈graA. As graA is a convex cone,(x,x∗) + (y,y∗) = (x +y, x∗ +y∗) ∈ graA.Thus〈x+y, x∗ +y∗〉 ≥ 0. parenleftbig since A is monotone and (0,0) ∈ graAparenrightbigThis means〈−x−y, −x∗ −y∗〉 ≥ 0〈(−x)−y, (−x∗)−y∗〉 ≥ 0, ∀(y,y∗) ∈ graA.Since A is maximal, we conclude that(−x,−x∗) ∈ graA, −(x,x∗) ∈ graA.Hence graA is a linear subspace. squaresolid47Chapter 4. Monotone operators with linear graphsIn [14], Butnariu and Kassay discuss monotone operators with closedconvex graphs. Actually, if such operators are maximal monotone, theirgraphs are affine sets.Fact 4.4.4 C ⊂ X is an affine set ⇔ ∃c0 ∈ C, C −c0 is a linear subspace.Proof. See [31, page 1]. squaresolidProposition 4.4.5 Let A: X ⇉ X be maximal monotone such that graAis a convex subset. Then graA is an affine set.Proof. Let (x0,x∗0) ∈ graA and tildewideA: X ⇉ X such that gra tildewideA = graA −(x0,x∗0). Thus gra tildewideA is convex and (0,0) ∈ gra tildewideA. By Fact 2.1.15, tildewideA ismaximal monotone. By Fact 4.4.4, it suffices to verify that gra tildewideA is a linearsubspace. By Proposition 4.4.3, it suffices to show that gra tildewideA is a cone.Let k ≥ 0 and (x,x∗) ∈ gra tildewideA. We consider two cases.Case 1: k ≤ 1.k(x,x∗) = k(x,x∗) + (1−k)(0,0) ∈ gra tildewideA. (4.38)Case 2: k > 1.Let (y,y∗) ∈ gra tildewideA. By (4.38), 1k(y,y∗) ∈ gra tildewideA. Thus,〈kx−y, kx∗ −y∗〉 = k2〈x− 1ky, x∗ − 1ky∗〉 ≥ 0.Since tildewideA is maximal monotone, k(x,x∗) ∈ gra tildewideA.Hence gra tildewideA is a cone. squaresolid48Chapter 4. Monotone operators with linear graphs4.5 Monotonicity of operators with linear graphsIn general, it is not easy to identify whether an operator is monotone. Butif an operator with a linear graph and a basis is known, then we can uselinear algebra to verify monotonicity and strict monotonicity.Theorem 4.5.1 Let A: X ⇉X and graA = spanbraceleftBig(m1,m∗1),··· ,(mn,m∗n)bracerightBig.Then the following are equivalent(i) A is monotone.(ii)The matrix B :=〈m1,m∗1〉 〈m1,m∗2〉 ··· 〈m1,m∗n〉〈m2,m∗1〉 〈m2,m∗2〉 ··· 〈m2,m∗n〉... ... ... ...〈mn,m∗1〉 〈mn,m∗2〉 ··· 〈mn,m∗n〉is monotone.(iii) B+ is positive semidefinite.Proof. SincegraA = spanbraceleftBig(m1,m∗1),...,(mn,m∗n)bracerightBig,∀(x,x∗) ∈ graA,∃α1,...,αnsuch that(x,x∗) =nsummationdisplayi=1αi(mi,m∗i).49Chapter 4. Monotone operators with linear graphsThen A is monotone⇔ 〈x−y,x∗ −y∗〉 ≥ 0, ∀(x,x∗) ∈ graA,∀(y,y∗) ∈ graA⇔ 〈nsummationdisplayi=1(αi −βi)mi,nsummationdisplayi=1(αi −βi)m∗i〉 ≥ 0where (x,x∗) =nsummationdisplayi=1αi(mi,m∗i) = (nsummationdisplayi=1αimi,nsummationdisplayi=1αim∗i)(y,y∗) =nsummationdisplayi=1βi(mi,m∗i) = (nsummationdisplayi=1βimi,nsummationdisplayi=1βim∗i)⇔ 〈nsummationdisplayi=1γimi,nsummationdisplayi=1γim∗i〉 =nsummationdisplayi=1nsummationdisplayj=1〈mi,m∗j〉γiγj ≥ 0, ∀γi ∈R= (γ1,...,γn)B(γ1,...,γn)⊺ ≥ 0, ∀γi ∈R⇔ ν⊺Bν ≥ 0, ∀ν ∈Rn⇔ B is monotone (by Example 2.1.3)⇔ B+ is positive semidefinite (by Fact 2.1.13 and Example 2.1.3).squaresolidWe also have a way to identify whether an operator with a linear graphis strictly monotone. First we give the definition of strictly monotone.Definition 4.5.2 A strictly monotone operator T : X ⇉ X is a mappingthat satisfies〈x∗ −y∗, x−y〉 > 0,whenever x∗ ∈ T(x), y∗ ∈ T(y) and x negationslash= y.50Chapter 4. 
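Theorem 4.5.1 turns monotonicity of an operator with a linear graph into a finite matrix test. The sketch below is an illustration only (the generators are an arbitrary assumption): it forms B from generators (m_i, m_i∗) of the graph, checks that B+ is positive semidefinite, and cross-checks against a brute-force sampling of the defining inequality.

```python
import numpy as np

# Generators (m_i, m_i*) of a linear graph in R^2 x R^2 (arbitrary choice).
M      = np.array([[1.0, 0.0],      # m_1
                   [0.0, 1.0]])     # m_2
M_star = np.array([[1.0, 1.0],      # m_1*
                   [0.0, 2.0]])     # m_2*

# B_{ij} = <m_i, m_j*> as in Theorem 4.5.1.
B = M @ M_star.T
B_plus = (B + B.T) / 2
psd = bool(np.all(np.linalg.eigvalsh(B_plus) >= -1e-10))

# Sampling <x - y, x* - y*> over random graph points cannot prove monotonicity,
# but it should agree with the matrix test.
rng = np.random.default_rng(1)
sampled_monotone = True
for _ in range(1000):
    c = rng.normal(size=2)                  # coefficients of (x,x*) - (y,y*)
    x_diff, xstar_diff = c @ M, c @ M_star
    if x_diff @ xstar_diff < -1e-10:
        sampled_monotone = False
        break
assert sampled_monotone == psd
print("A is monotone" if psd else "A is not monotone")
```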
Monotone operators with linear graphsDefinition 4.5.3 Let A: X → X. We say A is positive definite if〈x,Ax〉 > 0, ∀x negationslash= 0.Theorem 4.5.4 Let A: X ⇉X and graA = spanbraceleftBig(m1,m∗1),...,(mn,m∗n)bracerightBig.Suppose thatbraceleftBigm1,...,mnbracerightBigis linearly independent. Then A is strictlymonotone, if and only if, the matrixB =〈m1,m∗1〉 〈m1,m∗2〉 ··· 〈m1,m∗n〉〈m2,m∗1〉 〈m2,m∗2〉 ··· 〈m2,m∗n〉... ... ... ...〈mn,m∗1〉 〈mn,m∗2〉 ··· 〈mn,m∗n〉is positive definite.Proof. Since graA = span{(mi,m∗i)}ni=1, A is strictly monotone⇔ 〈x−y,x∗ −y∗〉 > 0, ∀(x,x∗),(y,y∗) ∈ graA with x negationslash= y⇔ 〈nsummationdisplayi=1(αi −βi)mi,nsummationdisplayi=1(αi −βi)m∗i〉 > 0where (x,x∗) =nsummationdisplayi=1αi(mi,m∗i) = (nsummationdisplayi=1αimi,nsummationdisplayi=1αim∗i)(y,y∗) =nsummationdisplayi=1βi(mi,m∗i) = (nsummationdisplayi=1βimi,nsummationdisplayi=1βim∗i).51Chapter 4. Monotone operators with linear graphsSince m1,...,mn are linearly independent,x negationslash= y ⇔ (α1,...,αn) negationslash= (β1,...,βn)⇔ γ := (α1 −β1,...,αn −βn) negationslash= 0,A is strictly monotone⇔ 〈nsummationdisplayi=1γimi,nsummationdisplayi=1γim∗i〉 > 0, for γ negationslash= 0 (4.39)⇔ γ⊺Bγ > 0, ∀γ ∈Rn with γ negationslash= 0 (4.40)⇔ B is positive definite. (4.41)Just as in the proof of Theorem 4.5.1, we see that (4.40) holds. squaresolid52Chapter 5Auto-conjugates5.1 Auto-conjugate representationDefinition 5.1.1 Let f: Rn ×Rn → ]−∞,+∞]. We define f⊺ byf⊺(x,x∗) = f(x∗,x), ∀(x,x∗) ∈Rn ×Rn.Definition 5.1.2 (Fenchel conjugate) Let f: Rn → ]−∞,+∞]. The Fenchelconjugate of f, f∗, is defined byf∗(x∗) = supxbraceleftBig〈x∗,x〉−f(x)bracerightBig, ∀x∗ ∈Rn.Fact 5.1.3 Let f: Rn → ]−∞,+∞] be proper lower semicontinuous andconvex. Then f∗∗ = f.Proof. See [26, Theorem 11.1]. squaresolidProposition 5.1.4 Let f,g: Rn → ]−∞,+∞] satisfy f ≤ g. Then f∗ ≥ g∗.Proof. Follows directly by Definition 5.1.2. squaresolidDefinition 5.1.5 (Auto-conjugate) Let f: Rn × Rn → ]−∞,+∞] be53Chapter 5. Auto-conjugatesproper lower semicontinuous and convex. We say f is auto-conjugate iff∗⊺ = f.Here are some examples of auto-conjugate functions.Example 5.1.6 (Ghoussoub ’06/[17]) Let ϕ: Rn → ]−∞,+∞] be properlower semicontinuous and convex, and A: Rn → Rn be linear and antisym-metric. Thenf(x,x∗) := ϕ(x) +ϕ∗(x∗)f(x,x∗) := ϕ(x) +ϕ∗(−Ax +x∗) (∀(x,x∗) ∈Rn ×Rn)are auto-conjugate.54Chapter 5. Auto-conjugatesProof. The first function is a special case of the second one when A = 0. So,it suffices to show the second case. Let (x,x∗) ∈Rn ×Rn. Then we havef∗(x∗,x)= sup(y,y∗)braceleftBig〈y,x∗〉+〈y∗,x〉−f(y,y∗)bracerightBig= sup(y,y∗)braceleftBig〈y,x∗〉+〈y∗,x〉−ϕ(y)−ϕ∗(−Ay +y∗)bracerightBig= sup(y,y∗)braceleftBig〈y,x∗〉+〈Ay,x〉+〈−Ay +y∗,x〉−ϕ(y)−ϕ∗(−Ay +y∗)bracerightBig= supysupy∗braceleftBig〈y,x∗〉+〈Ay,x〉+〈−Ay +y∗,x〉−ϕ(y)−ϕ∗(−Ay +y∗)bracerightBig= supybraceleftBig〈y,x∗〉+〈y,−Ax〉−ϕ(y) + supy∗braceleftbig〈−Ay +y∗,x〉−ϕ∗(−Ay +y∗)bracerightbigbracerightBig= supybraceleftBig〈y,−Ax+x∗〉−ϕ(y)bracerightBig+ϕ∗∗(x)= ϕ∗(−Ax+x∗)+ϕ(x) parenleftbigby Fact 5.1.3parenrightbig= f(x,x∗).squaresolidNow we introduce some basic properties of auto-conjugate functions.Lemma 5.1.7 (Penot-Simons-Zˇalinescu ’05/[24],[29]) Let f: Rn×Rn →]−∞,+∞] be auto-conjugate. Thenf(x,x∗) ≥ 〈x,x∗〉, ∀(x,x∗) ∈Rn ×Rn.55Chapter 5. Auto-conjugatesProof. Let (x,x∗) ∈Rn ×Rn. Thenf(x,x∗) = f∗(x∗,x) = sup(y,y∗)braceleftBig〈y,x∗〉+〈y∗,x〉−f(y,y∗)bracerightBig≥〈x,x∗〉+〈x∗,x〉−f(x,x∗).Thus 2f(x,x∗) ≥ 2〈x,x∗〉. That is , f(x,x∗) ≥ 〈x,x∗〉. squaresolidProposition 5.1.8 Let f,g: Rn×Rn → ]−∞,+∞] be auto-conjugate suchthat f ≤ g. 
Then f = g.Proof. Let (x,x∗) ∈Rn ×Rn. By assumptions, f(x,x∗) ≤ g(x,x∗).On the other hand, by Proposition 5.1.4, f∗(x∗,x) ≥ g∗(x∗,x). Since f,gare auto-conjugate, f(x,x∗) ≥ g(x,x∗). Hence f(x,x∗) = g(x,x∗). squaresolidProposition 5.1.9 Let f: Rn ×Rn → ]−∞,+∞]. Then f∗⊺ = f⊺∗.Proof. Let (x,x∗) ∈Rn ×Rn. Thenf⊺∗(x∗,x) = sup(y,y∗)braceleftBig〈(y,y∗),(x∗,x)〉−f⊺(y,y∗)bracerightBig= sup(y,y∗)braceleftBig〈(y∗,y),(x,x∗)〉−f(y∗,y)bracerightBig= f∗(x,x∗) = f∗⊺(x∗,x).squaresolidFact 5.1.10 (Fenchel-Young inequality) Let f: Rn → ]−∞,+∞] be proper56Chapter 5. Auto-conjugateslower semicontinuous and convex, and x,x∗ ∈Rn. Thenf(x)+f∗(x∗) ≥ 〈x,x∗〉,and equality holds, if and only if, x∗ ∈ ∂f(x).Proof. See [25, Theorem 23.5] and [25, page 105]. squaresolid.Definition 5.1.11 Let f: Rn ×Rn → ]−∞,+∞]. We define G(f) byx∗ ∈ G(f)x ⇔ f(x,x∗) = 〈x,x∗〉.Here is an important property of auto-conjugates, which provides ourmain motivation for studying them.Fact 5.1.12 (Penot-Simons-Zˇalinescu ’05) Let f: Rn×Rn → ]−∞,+∞]be auto-conjugate. Then G(f) is maximal monotone.Proof. See [29, Theorem 1.4.(a)]. squaresolidDefinition 5.1.13 (Representation) Let f: Rn ×Rn → ]−∞,+∞] andA: Rn ⇉Rn. If A = G(f), we call f a representation for A. If f is auto-conjugate, we call f an auto-conjugate representation for A.Proposition 5.1.14 Let ϕ: Rn → ]−∞,+∞] be proper lower semicon-tinuous and convex, and A: Rn → Rn be linear and antisymmetric. Letf(x,x∗) := ϕ(x) +ϕ∗(−Ax +x∗) (∀(x,x∗) ∈Rn ×Rn). Then f is an auto-conjugate representation for ∂ϕ+A.57Chapter 5. Auto-conjugatesProof. By Example 5.1.6, f is auto-conjugate. Then we have〈x,x∗〉 = f(x,x∗)⇔ 〈x,−Ax+x∗〉 = ϕ(x) +ϕ∗(−Ax +x∗)⇔ x∗ −Ax ∈ ∂ϕ(x) (by Fact 5.1.10)⇔ (x,x∗) ∈ gra(∂ϕ+A).Hence f is an auto-conjugate representation for ∂ϕ+A. squaresolidDefinition 5.1.15 Let f,g: Rn ×Rn → ]−∞,+∞]. We define(fsquare2g)(x,x∗) = infy∗braceleftBigf(x,x∗ −y∗) +g(x,y∗)bracerightBig, ∀(x,x∗) ∈Rn ×Rn.Definition 5.1.16 Let f,g: Rn → ]−∞,+∞]. We define(f ⊕g)(x,x∗) = f(x)+g(x∗), ∀(x,x∗) ∈Rn ×Rn.Definition 5.1.17 We defineπ1: Rn ×Rn →Rn : (x,y) mapsto→ x.Fact 5.1.18 Let f,g: Rn ×Rn → ]−∞,+∞] be proper lower semicontinu-ous and convex. Set ϕ = fsquare2g. Assumeϕ(x,x∗) > −∞, ∀(x,x∗) ∈Rn ×Rn,58Chapter 5. Auto-conjugatesanduniondisplayλ>0λbracketleftbigπ1 domf −π1 domgbracketrightbig,is a linear subspace of Rn. Thenϕ∗(x∗,x) = miny∗braceleftBigf∗(x∗ −y∗,x) +g∗(y∗,x)bracerightBig, ∀(x,x∗) ∈Rn ×Rn.Proof. See [29, Theorem 4.2]. squaresolidProposition 5.1.19 Let f,g: Rn × Rn → ]−∞,+∞] be auto-conjugatesuch that (π1 domf − π1 domg) is a linear subspace of Rn. Suppose M =fsquare2g. ThenM(x,x∗) = miny∗braceleftBigf(x,x∗ −y∗)+g(x,y∗)bracerightBig, ∀(x,x∗) ∈Rn ×Rn.and M is an auto-conjugate representation for G(f) +G(g).Proof. By Lemma 5.1.7,M(x,x∗) ≥ infy∗braceleftBig〈x,y∗〉+〈x,x∗ −y∗〉bracerightBig= 〈x,x∗〉, ∀(x,x∗) ∈Rn ×Rn.(5.1)Since (π1 domf−π1 domg) isa linear subspace,uniontextλ>0 λbracketleftbigπ1 domf−π1 domgbracketrightbig =(π1 domf − π1 domg) is a linear subspace. Let (x,x∗) ∈ Rn × Rn. By59Chapter 5. Auto-conjugatesFact 5.1.18, we haveM∗(x∗,x) = miny∗braceleftBigf∗(x∗ −y∗,x) +g∗(y∗,x)bracerightBig= miny∗braceleftBigf(x,x∗ −y∗) +g(x,y∗)bracerightBig= M(x,x∗).HenceM(x,x∗) = miny∗braceleftBigf(x,x∗ −y∗) +g(x,y∗)bracerightBig, ∀(x,x∗) ∈Rn ×Rn. (5.2)and M is auto-conjugate.In the following we show that M is a representation for G(f)+G(g). Suppose(x,x∗) satisfiesM(x,x∗) = 〈x,x∗〉.For every y∗ ∈Rn, since f,g are auto-conjugate, by Fact 5.1.7 we havef(x,x∗ −y∗) ≥ 〈x,x∗ −y∗〉,g(x,y∗) ≥ 〈x,y∗〉, andM(x,x∗) ≥ 〈x,x∗〉. 
(5.3)60Chapter 5. Auto-conjugatesThen by (5.2) and (5.3),(x,x∗) ∈ graG(M)⇔ M(x,x∗) = 〈x,x∗〉⇔ ∃s∗ such that 〈x,x∗〉 = M(x,x∗) = f(x,x∗ −s∗)+g(x,s∗)⇔ ∃s∗ such that 0 = f(x,x∗ −s∗)−〈x,x∗ −s∗〉+g(x,s∗)−〈x,s∗〉⇔ ∃s∗ such that 〈x,x∗ −s∗〉 = f(x,x∗ −s∗),〈x,s∗〉 = g(x,s∗)⇔ ∃s∗ such that (x,x∗ −s∗) ∈ graG(f), (x,s∗) ∈ graG(g)⇔ x∗ ∈parenleftbigG(f) +G(g))x.squaresolidNow this raises the following question: Given a maximal monotone op-erator A, can we find an auto-conjugate representation for A?Before answering this question, we introduce some definitions.5.2 The Fitzpatrick function and the proximalaverageDefinition 5.2.1 (Fitzpatrick function ’88) Let A: X ⇉ X. The Fitz-patrick function of A isFA: (x,x∗) mapsto→ sup(y,y∗)∈graA〈x,y∗〉+〈y,x∗〉−〈y,y∗〉.Fact 5.2.2 Let A: Rn ⇉ Rn be monotone such that graA is nonempty.Then FA is proper lower semicontinuous and convex.61Chapter 5. Auto-conjugatesProof. See [8, Fact 1.2]. squaresolidFact 5.2.3 Let A: Rn →Rn be linear and monotone. Thenq∗A(x) = ιranA+(x) +q(A+)†(x), ∀x ∈Rn.Proof. See [25, page 108] and Corollary 2.2.18. squaresolidFact 5.2.4 Let A: Rn →Rn be linear and monotone. ThenFA(x,x∗) = ιranA+(x∗ +A∗x) + 12q(A+)†(x∗ +A∗x), ∀(x,x∗) ∈Rn ×Rn.Proof. Let (x,x∗) ∈Rn ×Rn. By [4, Theorem 2.3],FA(x,x∗)= 2q∗A+(12x∗ + 12A∗x)= ιranA+(12x∗ + 12A∗x)+ 2q(A+)†(12x∗ + 12A∗x) (by Fact 5.2.3)= ιranA+(x∗ +A∗x) + 12q(A+)†(x∗ +A∗x).squaresolidFact 5.2.5 Let A: Rn →Rn be linear and monotone. ThenF∗A(x∗,x) = ιgraA(x,x∗) +〈x,Ax〉, ∀(x,x∗) ∈Rn ×Rn.Proof. See [4, Theorem 2.3]. squaresolid62Chapter 5. Auto-conjugatesProposition 5.2.6 Let A: Rn →Rn be linear and monotone. Then A+kIdis invertible, for every k > 0.Proof. Let x satisfy that(A +kId)x = 0.Then we have Ax = −kx. By the monotonicity of A, we havekbardblxbardbl2 = 〈x, kx〉 = 〈x, −Ax〉 = −〈x, Ax〉 ≤ 0.Then x = 0. Hence A+kId is invertible. squaresolidDefinition 5.2.7 (Proximal average) Let f0,f1: Rn ×Rn → ]−∞,+∞]be proper lower semicontinuous and convex. We define P(f0,f1), the prox-imal average of f0 and f1, byP(f0,f1)(x,x∗)= −12bardbl(x,x∗)bardbl2 + inf(y1,y∗1)+(y2,y∗2)=(x,x∗)braceleftBig12f0(2y1,2y∗1)+ 12f1parenleftbig2y2,2y∗2parenrightbig+bardbl(y1,y∗1)bardbl2 +bardbl(y2,y∗2)bardbl2bracerightBig, ∀(x,x∗) ∈Rn ×Rn.Remark 5.2.8 Let f0,f1: Rn×Rn → ]−∞,+∞] be proper lower semicon-tinuous and convex. ThenP(f⊺0 ,f⊺1 ) =parenleftBigP(f0,f1)parenrightBig⊺.Fact 5.2.9 Let f0 and f1: Rn → ]−∞,+∞] be proper lower semicontinuous63Chapter 5. Auto-conjugatesand convex. ThenP(f0,f1)(x,x∗)= inf(y,y∗)braceleftBig12f0(x +y,x∗ +y∗) + 12f1(x−y,x∗ −y∗) + 12bardbl(y,y∗)bardbl2bracerightBig,∀(x,x∗) ∈Rn ×Rn.Proof. Let (x,x∗) ∈Rn ×Rn. ThenP(f0,f1)(x,x∗)= −12bardbl(x,x∗)bardbl2 + inf(y1,y∗1)+(y2,y∗2)=(x,x∗)braceleftBig12f0(2y1,2y∗1)+ 12f1parenleftbig2y2,2y∗2parenrightbig+bardbl(y1,y∗1)bardbl2 +bardbl(y2,y∗2)bardbl2bracerightBig= −12bardbl(x,x∗)bardbl2 + inf(y,y∗)braceleftBig12f0parenleftbig2x+y2 ,2x∗+y∗2parenrightbig+ 12f1parenleftbig2x−y2 ,2x∗−y∗2parenrightbig+vextenddoublevextenddouble(x+y2 , x∗+y∗2 )vextenddoublevextenddouble2 +vextenddoublevextenddouble(x−y2 , x∗−y∗2 )vextenddoublevextenddouble2bracerightBig= inf(y,y∗)braceleftBig12f0(x +y,x∗ +y∗) + 12f1(x−y,x∗ −y∗)+ 12bardbl(y,y∗)bardbl2bracerightBig.squaresolidDefinition 5.2.10 Let f: Rn ×Rn → ]−∞,+∞] be proper lower semicon-tinuous and convex and hf define byhf(x,x∗) = infbraceleftBig12f(x,2x∗1)+12f∗(2x∗2,x) | x∗ = x∗1+x∗2bracerightBig, ∀(x,x∗) ∈Rn×Rn.Now we begin to answer the question above.Fact 5.2.11 (Bauschke-Wang ’07) Let A: X ⇉ X be maximal mono-64Chapter 5. Auto-conjugatestone. 
Then P(FA,F∗A⊺) is an auto-conjugate representation for A.Proof. See [9, Theorem 5.7]. squaresolidFact 5.2.12 (Penot-Zˇalinescu ’05) Let A: X ⇉ X be maximal mono-tone such that aff(domA) is closed. Then hFA is an auto-conjugate repre-sentation for A.Proof. See [24, Proposition 4.2].squaresolid5.3 The second main resultOur main goal is to find a formula for P(FA,F∗A⊺) associated with a linearand monotone operator A. Until now, there was no explicit formula for that.Theorem 5.3.1 Let A: Rn →Rn be linear and monotone. ThenP(FA,F∗A⊺)(x,x∗) = ιranA+(x∗ −Ax) +〈x,x∗〉+q(A+)†(x∗ −Ax),∀(x,x∗) ∈Rn ×Rn.65Chapter 5. Auto-conjugatesProof. Let (x,x∗) ∈Rn ×Rn. By Fact 5.2.2 and Fact 5.2.9, we haveP(FA,F∗A⊺)(x,x∗)= inf(y,y∗)braceleftBig12FA(x +y,x∗ +y∗)+ 12F∗A⊺(x−y,x∗ −y∗) (5.4)+ 12bardbl(y,y∗)bardbl2bracerightBig= inf(y,y∗)braceleftBig12FA(x +y,x∗ +y∗)+ιgraA(x−y,x∗ −y∗) (5.5)+ 12〈x−y,A(x−y)〉+ 12bardbl(y,y∗)bardbl2bracerightBig= infybraceleftBig12FA(x+y,2x∗ −A(x−y)) + 12〈x−y,A(x−y)〉 (5.6)+ 12bardbl(y,x∗ −A(x−y))bardbl2bracerightBig= infybraceleftBigιranA+parenleftbig2x∗ −A(x−y) +A∗x+A∗yparenrightbig (5.7)+ 14q(A+)†parenleftbig2x∗ −A(x−y)+A∗x+A∗yparenrightbig+ 12〈x−y,A(x−y)〉+ 12bardblybardbl2 + 12bardblx∗ −A(x−y)bardbl2bracerightBig,in which, (5.5) holds by Fact 5.2.5, (5.6) by y∗ = x∗ −A(x−y), and (5.7)by Fact 5.2.4.Since2x∗ −A(x−y)+A∗x+A∗y= 2x∗ −2Ax+Ax +Ay +A∗x +A∗y= 2x∗ −2Ax+ (A +A∗)(x +y)= 2x∗ −2Ax+ 2A+(x +y), (5.8)66Chapter 5. Auto-conjugatesThus 2x∗ −A(x−y) +A∗x+A∗y ∈ ranA+ ⇔ x∗ −Ax ∈ ranA+. ThenιranA+(2x∗ −A(x−y) +A∗x +A∗y) = ιranA+(x∗ −Ax). (5.9)We consider two cases.Case 1: x∗ −Ax /∈ ranA+. By (5.7) and (5.9), P(FA,F∗A⊺)(x,x∗) = ∞.Case 2: x∗ − Ax ∈ ranA+. By Proposition 2.2.15 applied to A+ with xreplaced by x∗ −Ax and y replaced by x +y, we have14q(A+)†(2x∗ −A(x−y)+A∗x +A∗y)= 14q(A+)†(2x∗ −2Ax + 2A+(x +y)) (by (5.8))= 14 ·22q(A+)†(x∗ −Ax +A+(x+y))= q(A+)†(x∗ −Ax +A+(x+y))= q(A+)†(x∗ −Ax) +〈PranA+(x∗ −Ax), x+y〉+qA+(x +y)= q(A+)†(x∗ −Ax) +〈x +y, x∗ −Ax〉+ 12〈x +y, A(x +y)〉. (5.10)By (5.7), (5.9) and (5.10), we haveP(FA,F∗A⊺)(x,x∗)= q(A+)†(x∗ −Ax) + infybraceleftBig〈x +y, x∗ −Ax〉+ 12〈x +y, A(x +y)〉+ 12〈x−y,A(x−y)〉+ 12bardblybardbl2 + 12bardblx∗ −A(x−y)bardbl2bracerightBig.67Chapter 5. Auto-conjugatesSince12〈x +y, A(x +y)〉+12〈x−y,A(x−y)〉= 〈x, Ax〉+〈y, Ay〉,we have〈x+y, x∗ −Ax〉+ 12〈x +y, A(x +y)〉+ 12〈x−y,A(x−y)〉= 〈x,x∗〉−〈x, Ax〉+〈y, x∗〉−〈y, Ax〉+〈x, Ax〉+〈y, Ay〉= 〈x,x∗〉+〈y, Ay〉+〈y, x∗〉−〈y, Ax〉.ThenP(FA,F∗A⊺)(x,x∗)= q(A+)†(x∗ −Ax) +〈x,x∗〉+ infybraceleftBig〈y, Ay〉+〈y, x∗〉−〈y, Ax〉+ 12bardblybardbl2 + 12bardblx∗ −Ax+Aybardbl2bracerightBig= q(A+)†(x∗ −Ax) +〈x,x∗〉+ infybraceleftBig〈y, Ay〉+〈y, x∗〉−〈y, Ax〉+ 12bardblybardbl2 + 12bardblx∗ −Axbardbl2 + 12bardblAybardbl2 +〈Ay,x∗ −Ax〉bracerightBig.68Chapter 5. 
Auto-conjugatesSince 〈y, Ay〉+ 12bardblybardbl2 + 12bardblAybardbl2 = 12bardbly +Aybardbl2,P(FA,F∗A⊺)(x,x∗)= q(A+)†(x∗ −Ax) +〈x,x∗〉+ 12bardblx∗ −Axbardbl2+ infybraceleftBig12bardbly +Aybardbl2 +〈y,x∗ −Ax+A∗(x∗ −Ax)〉bracerightBig= q(A+)†(x∗ −Ax) +〈x,x∗〉+ 12bardblx∗ −Axbardbl2+ infybraceleftBig12bardbly +Aybardbl2 +〈y,(Id+A∗)(x∗ −Ax)〉bracerightBig= q(A+)†(x∗ −Ax) +〈x,x∗〉+ 12bardblx∗ −Axbardbl2−supybraceleftBig〈y,(Id+A∗)(Ax−x∗)〉−q(Id+A∗)(Id+A)(y)bracerightBig= q(A+)†(x∗ −Ax) +〈x,x∗〉+ 12bardblx∗ −Axbardbl2−q∗(Id+A∗)(Id+A)parenleftbig(Id+A∗)(Ax−x∗)parenrightbig= q(A+)†(x∗ −Ax) +〈x,x∗〉+ 12bardblx∗ −Axbardbl2−qparenleftbig(Id+A∗)(Id+A)parenrightbig†parenleftbig(Id+A∗)(Ax−x∗)parenrightbig (by Proposition 5.2.3and Proposition 5.2.6)= q(A+)†(x∗ −Ax) +〈x,x∗〉+ 12bardblx∗ −Axbardbl2−q(Id+A)−1(Id+A∗)−1parenleftbig(Id+A∗)(Ax−x∗)parenrightbig (by Remark 2.1.35)= q(A+)†(x∗ −Ax) +〈x,x∗〉+ 12bardblx∗ −Axbardbl2 − 12bardblx∗ −Axbardbl2= 〈x,x∗〉+q(A+)†(x∗ −Ax).69Chapter 5. Auto-conjugatesCombining the results above, we haveP(FA,F∗A⊺)(x,x∗) = ιranA+(x∗ −Ax) +〈x,x∗〉+q(A+)†(x∗ −Ax),∀(x,x∗) ∈Rn ×Rn.squaresolidProposition 5.3.2 Let A: Rn →Rn be linear and monotone. Thenπ1 [domP(FA,F∗A⊺)] =Rn.Proof. By Theorem 5.3.1,P(FA,F∗A⊺)(x,Ax) = 〈x,Ax〉 < ∞, ∀x ∈Rn.Thus (x,Ax) ∈ domP(FA,F∗A⊺),∀x ∈Rn. Henceπ1 [domP(FA,F∗A⊺)] =Rn.squaresolidCorollary 5.3.3 Let A: Rn → Rn be linear, symmetric and monotone.ThenP(FA,F∗A⊺) = qA ⊕(ιranA +qA†).70Chapter 5. Auto-conjugatesProof. Let (x,x∗) ∈Rn ×Rn. By Theorem 5.3.1, we haveP(FA,F∗A⊺)(x,x∗) = ιranA(x∗ −Ax) +〈x,x∗〉+qA†(x∗ −Ax)= ιranA(x∗) +〈x,x∗〉+qA†(x∗ −Ax).Now suppose x∗ ∈ ranA. By Proposition 2.2.15, we haveqA†(x∗ −Ax) = qA(x)+qA†(x∗)−〈x,PranAx∗〉= qA(x)+qA†(x∗)−〈x,x∗〉.ThusP(FA,F∗A⊺)(x,x∗) = qA(x) +qA†(x∗).Combining the conclusions above,P(FA,F∗A⊺)(x,x∗) = ιranA(x∗) +qA(x) +qA†(x∗)= parenleftbigqA ⊕(ιranA +qA†)parenrightbig(x,x∗), ∀(x,x∗) ∈Rn ×Rn.squaresolidCorollary 5.3.4 Let A: Rn →Rn be linear and antisymmetric. ThenP(FA,F∗A⊺) = ιgraA.Proof. Follows directly by Theorem 5.3.1. squaresolid71Chapter 5. Auto-conjugatesCorollary 5.3.5 Let A: Rn →Rn be linear and monotone. Thenparenleftbig∀(x,x∗) ∈Rn ×Rnparenrightbig P(FA,F∗A⊺)(x,x∗) ≥ 〈x,x∗〉P(FA,F∗A⊺)(x,Ax) = 〈x,Ax〉.Proof. Apply Theorem 5.3.1 and Corollary 2.2.12. squaresolidFor a linear and monotone operator A, Fact 5.2.11 shows P(FA,F∗A⊺) isauto-conjugate. Now we give a new proof.Proposition 5.3.6 Let A: Rn →Rn be linear and monotone. Then P(FA,F∗A⊺)is auto-conjugate.Proof. Let (x,x∗) ∈Rn ×Rn. By Theorem 5.3.1, we haveP(FA,F∗A⊺)∗(x∗,x)= sup(y, y∗)braceleftBig〈(x∗,x), (y,y∗)〉−ιranA+(y∗ −Ay)−〈y,y∗〉 (5.11)−q(A+)†(y∗ −Ay)bracerightBig= sup(y, w)braceleftBig〈(y, A+w +Ay), (x∗,x)〉−ιranA+(A+w) (5.12)−〈y, A+w +Ay〉−q(A+)†(A+w)bracerightBig= sup(y, w)braceleftBig〈y,x∗〉+〈A+w +Ay, x〉−〈y, A+w +Ay〉−qA+(w)bracerightBig(5.13)= supysupwbraceleftBig〈y,x∗〉+〈A+w +Ay, x〉−〈y, A+w +Ay〉−qA+(w)bracerightBig= supybraceleftBig〈y,x∗〉+〈Ay, x〉−〈y, Ay〉+ supwbraceleftbig〈w,A+x〉−〈A+y, w〉 (5.14)−qA+(w)bracerightbigbracerightBig.72Chapter 5. 
Auto-conjugates(5.12) holds by y∗ −Ay = A+w and (5.13) by Corollary 2.2.16.By (5.14),P(FA,F∗A⊺)∗(x∗,x)= supybraceleftBig〈y,A∗x+x∗〉−〈y, Ay〉+q∗A+(A+x−A+y)bracerightBig= supybraceleftBig〈y,A∗x+x∗〉−〈y, Ay〉+q(A+)†(A+x−A+y)bracerightBig(5.15)= supybraceleftBig〈y,A∗x+x∗〉−〈y, Ay〉+qA+(x−y)bracerightBig(5.16)= supybraceleftBig〈y,A∗x+x∗ −A+x〉−qA(y)+qA(x)bracerightBig(5.17)= q∗A(A∗x+x∗ −A+x) +qA(x)= ιranA+(A∗x+x∗ −A+x) +q(A+)†(A∗x+x∗ −A+x)+qA(x) (5.18)(5.15) and (5.18) holds by Proposition 5.2.3, (5.16) by Corollary 2.2.16 and(5.17) by Remark 2.1.12.NoteA∗x+x∗ −A+x = x∗ −Ax+Ax+A∗x−A+x = x∗ −Ax+A+x. (5.19)ThusιranA+(A∗x+x∗ −A+x) = ιranA+(x∗ −Ax). (5.20)If x∗ −Ax /∈ ranA+. By (5.18) and (5.20), P(FA,F∗A⊺)∗(x∗,x) = ∞.Now suppose x∗ −Ax ∈ ranA+. By Proposition 2.2.15 applied to A+ with73Chapter 5. Auto-conjugatesx replaced by x∗ −Ax and y replaced by x,q(A+)†(A∗x+x∗ −A+x)= q(A+)†(x∗ −Ax+A+x) (by (5.19))= q(A+)†(x∗ −Ax) +angbracketleftbigPranA+(x∗ −Ax), xangbracketrightbig+qA+(x)= q(A+)†(x∗ −Ax) +〈x∗ −Ax, x〉+qA(x) (by Remark 2.1.12)= q(A+)†(x∗ −Ax) +〈x∗,x〉−qA(x).Then by (5.18) and (5.20), P(FA,F∗A⊺)∗(x∗,x) = q(A+)†(x∗ −Ax) +〈x,x∗〉.Combining the results above, we haveP(FA,F∗A⊺)∗(x∗,x) = ιranA+(x∗ −Ax) +q(A+)†(x∗ −Ax) +〈x,x∗〉= P(FA,F∗A⊺)(x,x∗) (by Theorem 5.3.1),∀(x,x∗) ∈Rn ×Rn.squaresolidProposition 5.3.7 Let B: Rn → Rn be linear, symmetric and monotone.Let x ∈Rn. Then〈x,Bx〉 = 0 ⇔ Bx = 0.74Chapter 5. Auto-conjugatesProof. “⇐” Clear.“⇒” Take y ∈Rn and α > 0. We have0 ≤ 〈αy +x,B(αy +x)〉 (5.21)= 〈x,Bx〉+ 2α〈y,Bx〉+α2〈y,By〉= 2α〈y,Bx〉+α2〈y,By〉, (by 〈x,Bx〉 = 0)⇒ 0 ≤ 2〈y,Bx〉+α〈y,By〉 (5.22)⇒ 0 ≤ 〈y,Bx〉, ∀y ∈Rn (5.23)⇒ Bx = 0,in which, (5.21) holds by monotonicity of B, (5.22) by multiplying 1α in bothsides, and (5.23) by letting α → 0+. squaresolidCorollary 5.3.8 Let B: Rn → Rn be linear, symmetric and monotone.ThenkerB = {x | qB(x) = 0}.Corollary 5.3.9 Let B: Rn →Rn be linear, symmetric and monotone. Letx ∈Rn. Then (ιranB +qB†)(x) = 0, if and only if, x = 0.Proof. “⇐” Clear.“⇒” By assumption, we havex ∈ ranB (5.24)0 = 〈x, B†x〉. (5.25)75Chapter 5. Auto-conjugatesBy Fact 2.2.2, Fact 2.2.4 and Corollary 2.2.12, B† is linear, symmetric andmonotone. By (5.25) and Proposition 5.3.7 applied to B†, B†x = 0. Thenby (5.24) and Fact 2.2.11, x = PranBx = BB†x = 0. squaresolidCorollary 5.3.10 Let A: Rn →Rn be linear and monotone.(∀(x,x∗) ∈Rn ×Rn) P(FA,F∗A⊺)(x,x∗) = 〈x,x∗〉 ⇔ (x,x∗) ∈ graA.Proof. By Theorem 5.3.1 and Corollary 5.3.9. squaresolidCorollary 5.3.11 Let A: Rn →Rn be linear and monotone. ThenP(FA,F∗A⊺) is an auto-conjugate representation for A.Proof. By Proposition 5.3.6 and Corollary 5.3.10. squaresolidFor a linear and monotone operator A, what is hFA?Proposition 5.3.12 Let A: Rn →Rn be linear and monotone. ThenhFA = P(FA,F∗A⊺).76Chapter 5. Auto-conjugatesProof. Let (x,x∗) ∈Rn ×Rn. ThenhFA(x,x∗)= infbraceleftBig12FA(x,2x∗1)+ 12F∗A(2x∗2,x) | x∗ = x∗1 +x∗2bracerightBig= infy∗braceleftBig12FA(x,2(x∗ −y∗)) + 12F∗A(2y∗,x)bracerightBig= infy∗braceleftBig12FA(x,2(x∗ −y∗)) +ιgraA(x,2y∗)+qA(x)bracerightBig(5.26)= 12FA(x,2x∗ −Ax) +qA+(x) (by 2y∗ = Ax and Remark 2.1.12)= ιranA+(2x∗ −Ax+A∗x)+ 14q(A+)†(2x∗ −Ax+A∗x)+qA+(x), (5.27)where (5.26) holds by Fact 5.2.5, and (5.27) by Fact 5.2.4.Note that2x∗ −Ax +A∗x = 2x∗ −2Ax + 2A+x. (5.28)Then 2x∗ −Ax+A∗x ∈ ranA+ ⇔ x∗ −Ax ∈ ranA+. ThusιranA+(2x∗ −Ax +A∗x) = ιranA+(x∗ −Ax). (5.29)If x∗ −Ax /∈ ranA+,hFA(x,x∗) = ∞ by (5.27) and (5.29).Now suppose that x∗ −Ax ∈ ranA+. By Proposition 2.2.15 applied to A+77Chapter 5. 
Auto-conjugateswith x replaced by x∗ −Ax and y replaced by x, we have14q(A+)†(2x∗ −Ax+A∗x)= 14q(A+)†(2x∗ −2Ax + 2A+x) (by (5.28))= q(A+)†(x∗ −Ax+A+x)= q(A+)†(x∗ −Ax)+〈x, PranA+(x∗ −Ax)〉+qA+(x)= q(A+)†(x∗ −Ax)+〈x, x∗ −Ax〉+qA+(x)= q(A+)†(x∗ −Ax)+〈x, x∗〉−qA+(x) (by Remark 2.1.12).Then by (5.27) and (5.29),hFA(x,x∗) = 〈x, x∗〉+q(A+)†(x∗ −Ax).Combining the results above,hFA(x,x∗) = ιranA+(x∗ −Ax) +〈x, x∗〉+q(A+)†(x∗ −Ax)= P(FA,F∗A⊺)(x,x∗) (by Theorem 5.3.1),∀(x,x∗) ∈Rn ×Rn.squaresolidProposition 5.3.13 Let A: Rn ⇉ Rn be monotone such that graA isnonempty. ThenhFA = hF∗⊺A.78Chapter 5. Auto-conjugatesProof. Let (x,x∗) ∈Rn ×Rn. By Definition 5.2.10, we havehF∗⊺A(x,x∗) = infbraceleftBig12F∗A⊺(x,2x∗1) + 12(F∗A⊺)∗(2x∗2,x) | x∗ = x∗1 +x∗2bracerightBig(5.30)= infbraceleftBig12F∗A(2x∗1,x) + 12FA(x,2x∗2) | x∗ = x∗1 +x∗2bracerightBig(5.31)= infbraceleftBig12F∗A(2x∗2,x) + 12(FA(x,2x∗1) | x∗ = x∗1 +x∗2bracerightBig(5.32)= hFA(x,x∗), (5.33)where (5.31) holds by Proposition 5.1.9, Fact 5.2.2 and Fact 5.1.3. squaresolidCorollary 5.3.14 Let A: Rn →Rn be linear and monotone. ThenhFA = hF∗⊺A= P(FA,F∗A⊺).5.4 An exampleIn the following we give an example of Theorem 5.3.1.Example 5.4.1 LetA =cosθ −sinθsinθ cosθbe the rotation of angle θ ∈ [0, pi2[. Then A∗ = A−1 andP(FA,F∗A⊺)(x,x∗) = 12cosθbardblx∗ −Axbardbl2 +〈x,x∗〉= 12cosθbardblx∗ −sinθRxbardbl2 + cosθ2 bardblxbardbl2,79Chapter 5. Auto-conjugateswhereR =0 −11 0.Proof. By assumptions, we haveA+ = cosθId AA∗ = Id R∗R = Id. (5.34)By Theorem 5.3.1 and Remark 2.1.35, we haveP(FA,F∗A⊺)(x,x∗) = 12cosθbardblx∗ −Axbardbl2 +〈x,x∗〉. (5.35)By (5.35), (5.34), and A = cosθId+sinθR, we haveP(FA,F∗A⊺)(x,x∗)= 12cosθparenleftBigbardblx∗bardbl2 +〈A∗Ax,x〉−2〈x∗, Ax〉parenrightBig+〈x,x∗〉= 12cosθparenleftBigbardblx∗bardbl2 +bardblxbardbl2 −2〈x∗, cosθx+ sinθRx〉parenrightBig+〈x,x∗〉= 12cosθparenleftBigbardblx∗bardbl2 +bardblxbardbl2 −2sinθ〈x∗,Rx〉parenrightBig= 12cosθparenleftBigbardblx∗bardbl2 + sin2 θbardblxbardbl2 −2sinθ〈x∗,Rx〉+ cos2 θbardblxbardbl2parenrightBig= 12cosθbardblx∗ −sinθRxbardbl2 + cosθ2 bardblxbardbl2.squaresolidFact 5.4.2 Let A: Rn ⇉ Rn be monotone such that graA is nonempty.Then F⊺A−1 = FA.Proof. See [8, Fact 1.2]. squaresolid80Chapter 5. Auto-conjugatesCorollary 5.4.3 Let A: Rn ⇉Rn be monotone such that graA is nonempty.ThenP(FA−1,F∗⊺A−1) =parenleftBigP(FA,F∗A⊺)parenrightBig⊺.Proof. By assumptions and Proposition 3.3.2, A−1 is monotone and graA−1is nonempty. Then by Proposition 5.1.9, Fact 5.4.2 and Remark 5.2.8,P(FA−1,F∗⊺A−1) = P(F⊺A,F⊺∗A−1) = P(F⊺A,F∗A) =parenleftBigP(FA,F∗A⊺)parenrightBig⊺.squaresolidTheorem 5.4.4 Let A: Rn → Rn be linear, monotone and invertible. Let(x,x∗) ∈Rn ×Rn. Then0 ∈ ∂P(FA,F∗A⊺)(x,x∗)∂x∗ ⇔ x∗ = A◦x (5.36)0 ∈ ∂P(FA,F∗A⊺)(x,x∗)∂x ⇔ x = (A−1)◦x∗. (5.37)Proof. By Fact 2.2.2, Fact 2.2.4, Fact 2.1.13 and Corollary 2.2.12, (A+)† islinear, symmetric and monotone.81Chapter 5. Auto-conjugatesNow (5.36): Follows from Theorem 5.3.1, Fact 2.1.18 and Fact 2.1.30,0 ∈ ∂P(FA,F∗A⊺)(x,x∗)∂x∗⇔ 0 ∈ ∂ιranA+(x∗ −Ax) +x+ (A+)†(x∗ −Ax), x∗ −Ax ∈ ranA+⇔ 0 ∈ kerA+ +x+ (A+)†(x∗ −Ax), x∗ −Ax ∈ ranA+ (by Fact 2.1.32)⇔ 0 ∈ x +(A+)−1(x∗ −Ax), x∗ −Ax ∈ ranA+ (by Corollary 2.2.7)⇔ −x ∈ (A+)−1(x∗ −Ax), x∗ −Ax ∈ ranA+⇔ x∗ −Ax ∈ −A+x⇔ x∗ ∈ Ax−A+x = A◦x.Then (5.37) Follows from Corollary 5.4.3 and (5.36). squaresolidProposition 5.4.5 Let A: Rn →Rn be linear and monotone, and h(x∗) :=P(FA,F∗A⊺)(0,x∗) (∀x∗ ∈Rn). Then ∂h = (A+)−1.Proof. 
By Theorem 5.3.1,h(x∗) = ιranA+(x∗) +q(A+)†(x∗), ∀x∗ ∈Rn.By Fact 2.2.2, Fact 2.2.4, Fact 2.1.13 and Corollary 2.2.12, (A+)† is linear,symmetric and monotone. Thus by Fact 2.1.30 and Fact 2.1.18,∂h(x∗) =∂ιranA+(x∗) + (A+)†x∗, if x∗ ∈ ranA+;∅, otherwise.82Chapter 5. Auto-conjugatesNow suppose x∗ ∈ ranA+. By Fact 2.1.32 and Corollary 2.2.7,∂h(x∗) = kerA+ + (A+)†x∗ = (A+)−1x∗.Next suppose x∗ /∈ ranA+. Clearly, (A+)−1x∗ = ∅ = ∂h(x∗). In all cases,∂h = (A+)−1. squaresolidRemark 5.4.6 In general, let us consider g(x,x∗) := f(x)+f∗(x∗) (∀(x,x∗) ∈Rn ×Rn), where f: Rn → Rn is proper lower semicontinuous and convex.Let h(x∗) = g(0,x∗) = f(0) + f∗(x∗) (∀x∗ ∈ Rn). Thus by [26, Proposition11.3],∂h(x∗) = ∂f∗(x∗) = (∂f)−1(x∗), ∀x∗ ∈Rn.5.5 Relationship among auto-conjugaterepresentationsLet A: Rn → Rn be linear and monotone. Suppose f(x,x∗) = qA(x) +q∗A(−A◦x+x∗) (∀(x,x∗) ∈Rn×Rn). By Proposition 5.1.14 and Fact 2.1.18,f is an auto-conjugate representation for A++A◦ = A. By Corollary 5.3.11,P(FA,F∗⊺A ) is also an auto-conjugate representation for A. The next Propo-sition does that.Proposition 5.5.1 Let B: Rn → Rn be linear, symmetric and monotone,and A: Rn → Rn be linear and antisymmetric. Let f(x,x∗) = qB(x) +83Chapter 5. Auto-conjugatesq∗B(−Ax+x∗) (∀(x,x∗) ∈Rn ×Rn). Thenf = P(F(A+B),F∗⊺(A+B)).Proof. Let (x,x∗) ∈Rn ×Rn. By Fact 5.2.3,f(x,x∗) = qB(x) +ιranB(−Ax +x∗)+qB†(−Ax +x∗). (5.38)By Theorem 5.3.1, we haveP(F(A+B),F∗⊺(A+B))(x,x∗)= ιranBparenleftbigx∗ −(A +B)xparenrightbig+〈x,x∗〉+qB†parenleftbigx∗ −(A +B)xparenrightbig= ιranB(x∗ −Ax) +〈x,x∗〉+qB†(x∗ −Ax−Bx)If x∗ −Ax /∈ ranB, P(F(A+B),F∗⊺(A+B))(x,x∗) = ∞.Now suppose x∗ −Ax ∈ ranB. By Proposition 2.2.15 applied to B with xreplaced by x∗ −Ax and y replaced by −x,qB†(x∗ −Ax−Bx)= 〈PranB(x∗ −Ax), −x〉+qB†(−Ax+x∗)+qB(−x)= 〈x∗ −Ax, −x〉+qB†(−Ax +x∗) +qB(−x)= −〈x,x∗〉+qB†(−Ax+x∗)+qB(x) (by 〈Ax, −x〉 = 0).84Chapter 5. Auto-conjugatesHenceP(F(A+B),F∗⊺(A+B))(x,x∗) = qB(x) +ιranB(−Ax +x∗) +qB†(−Ax +x∗)= f(x,x∗) (by (5.38)), ∀(x,x∗) ∈Rn ×Rn.squaresolidProposition 5.5.2 Let A,B: Rn →Rn be linear and monotone. ThenP(F(A+B),F∗⊺(A+B)) = P(FA,F∗A⊺)square2P(FB,F∗B⊺).Proof. Let (x,x∗) ∈Rn ×Rn. By Theorem 5.3.1,P(FA,F∗A⊺)square2P(FB,F∗B⊺)(x,x∗)= infy∗braceleftBigP(FA,F∗A⊺)(x,x∗ −y∗) +P(FB,F∗B⊺)(x,y∗)bracerightBig= infy∗braceleftBigιranA+(x∗ −y∗ −Ax) +〈x,x∗ −y∗〉+q(A+)†(x∗ −y∗ −Ax)+〈x,y∗〉+ιranB+(y∗ −Bx)+q(B+)†(y∗ −Bx)bracerightBig= 〈x,x∗〉+ infy∗braceleftBigιranA+(x∗ −y∗ −Ax) +q(A+)†(x∗ −y∗ −Ax)+ιranB+(y∗ −Bx)+q(B+)†(y∗ −Bx)bracerightBig≤ 〈x,x∗〉+ιran(A++B+)(x∗ −Ax−Bx) (5.39)+ infy∗braceleftBigιranA+(x∗ −y∗ −Ax) +q(A+)†(x∗ −y∗ −Ax)+ιranB+(y∗ −Bx)+q(B+)†(y∗ −Bx)bracerightBig.85Chapter 5. Auto-conjugatesNow suppose x∗−Ax−Bx ∈ ran(A++B+). Let x∗−Ax−Bx = (A++B+)pand y∗0 := B+p+Bx. Thusx∗ −y∗0 −Ax = x∗ −B+p−Bx−Ax = (A+ +B+)p−B+p = A+p,y∗0 −Bx = B+p.Then by (5.39),P(FA,F∗A⊺)square2P(FB,F∗B⊺)(x,x∗)≤ 〈x,x∗〉+ιranA+(x∗ −y∗0 −Ax) +q(A+)†(x∗ −y∗0 −Ax)+ιranB+(y∗0 −Bx)+q(B+)†(y∗0 −Bx)= 〈x,x∗〉+q(A+)†(A+p)+q(B+)†(B+p)= 〈x,x∗〉+qA+(p) +qB+(p) (5.40)= 〈x,x∗〉+q(A++B+)(p)= 〈x,x∗〉+q(A++B+)†(x∗ −Ax−Bx), (5.41)in which, (5.40) and (5.41) hold by Corollary 2.2.16.Combining the results above,P(FA,F∗A⊺)square2P(FB,F∗B⊺)(x,x∗)≤ ιran(A++B+)(x∗ −Ax−Bx)+〈x,x∗〉+q(A++B+)†(x∗ −Ax−Bx)= P(F(A+B),F∗⊺(A+B))(x,x∗) (by Theorem 5.3.1), ∀(x,x∗) ∈Rn ×Rn.By Proposition 5.3.2, π1 [domP(FA,F∗A⊺)] = π1 [domP(FB,F∗B⊺)] = Rn.Then by Proposition 5.3.6 and Proposition 5.1.19, P(FA,F∗A⊺)square2P(FB,F∗B⊺)86Chapter 5. Auto-conjugatesis auto-conjugate. 
Thus by Proposition 5.3.6 and Proposition 5.1.8,P(FA,F∗A⊺)square2P(FB,F∗B⊺) = P(F(A+B),F∗⊺(A+B)). squaresolidLemma 5.5.3 Let A: Rn →Rn be linear, symmetric and monotone. ThenP(FA,F∗A⊺)(x,Ay) = P(FA,F∗A⊺)(y,Ax), ∀(x,y) ∈Rn ×Rn.Proof. Let (x,y) ∈Rn ×Rn. By Corollary 5.3.3 and Corollary 2.2.16,P(FA,F∗A⊺)(x,Ay) = ιranA(Ay) +qA(x) +qA†(Ay) = qA(x) +qA(y).On the other hand,P(FA,F∗A⊺)(y,Ax) = ιranA(Ax) +qA(y) +qA†(Ax) = qA(x)+qA(y).ThusP(FA,F∗A⊺)(x,Ay) = P(FA,F∗A⊺)(y,Ax).squaresolidProposition 5.5.4 Let A: Rn → Rn be linear, symmetric and monotone.Then f = P(FA,F∗A⊺), if and only if, f is auto-conjugate satisfying f(x,Ay) =f(y,Ax) (∀(x,y) ∈Rn ×Rn) and f(0,0) is finite.Proof. “⇒” By Proposition 5.3.6, Lemma 5.5.3 and Corollary 5.3.5.“⇐” Let (x,x∗) ∈Rn ×Rn. We prove in two steps.Step 1: We will verify that f(x,x∗) = ∞, if x∗ /∈ ranA.Since Rn = ranA ⊕ kerA, x∗ = PranAx∗ + PkerAx∗. Since x∗ /∈ ranA,87Chapter 5. Auto-conjugatesPkerAx∗ negationslash= 0. Thus〈PkerAx∗,x∗〉 = 〈PkerAx∗,PranAx∗ +PkerAx∗〉 = bardblPkerAx∗bardbl2 > 0. (5.42)Thus by assumptions,f(kPkerAx∗,0) = f(kPkerAx∗,A0) = f(0,AkPkerAx∗) = f(0,0), ∀k ∈R.(5.43)Then by Fact 5.1.10,f(x,x∗)+f(0,0) = f(x,x∗) +f(kPkerAx∗,0) = f(x,x∗)+f∗(0,kPkerAx∗)≥ 〈x∗,kPkerAx∗〉 → ∞, as k → ∞. (by (5.42)) (5.44)Since f(0,0) is finite, by (5.44) f(x,x∗) = ∞.Step 2: Suppose that x∗ ∈ ranA. Let x∗ = Ap. By Fact 5.1.10,f(x,Ap) +f(p,Ax) = f(x,Ap)+f∗(Ax,p) ≥ 〈p,Ap〉+〈x,Ax〉⇒ 2f(x,Ap) ≥ 〈p,Ap〉+〈x,Ax〉 (by f(x,Ap) = f(p,Ax))⇒ f(x,x∗) ≥ qA(x) +qA(p) = qA(x) +qA†(x∗), (5.45)in which, (5.45) by x∗ = Ap and Corollary 2.2.16.By conclusions above and Corollary 5.3.3, we havef(x,x∗) ≥ ιranA(x∗)+qA(x) +qA†(x∗) = P(FA,F∗A⊺)(x,x∗), ∀(x,x∗).88Chapter 5. Auto-conjugatesThen by Corollary 5.3.11 and Proposition 5.1.8 , we havef = P(FA,F∗A⊺).squaresolid5.6 NonuniquenessWe now tackle the following question: Given a linear and monotone op-erator, are auto-conjugate representations for A unique? The answer isnegative. We will give several different auto-conjugate representations forId. By Corollary 5.3.3, we haveP(FId,F∗Id⊺) = 12bardbl·bardbl2 ⊕ 12bardbl·bardbl2.Proposition 5.6.1 Let j(x) = 12x2, ∀x ∈R. Assume g is such that g∗(−x) =g(x) ≥ 0, ∀x ∈R. Thenf(x,y) := jparenleftBigx+y√2parenrightBig+gparenleftBigx−y√2parenrightBig parenleftbig∀(x,y) ∈Rn ×Rnparenrightbigis an auto-conjugate representation for Id.89Chapter 5. Auto-conjugatesProof. We first show that f is auto-conjugate. Let (x,y) ∈R×R. Then wehavef∗(y,x)= sup(v,w)braceleftBig〈v,y〉+〈w,x〉−j(v+w√2 )−g(v−w√2 )bracerightBig= sup(v,w)braceleftBig〈v+w2 ,x+y〉−〈v−w2 ,x−y〉−j(v+w√2 )−g(v−w√2 )bracerightBig(5.46)= sup(v,w)braceleftBig〈v+w√2 , x+y√2 〉−〈v−w√2 , x−y√2 〉−j(v+w√2 )−g(v−w√2 )bracerightBig= sup(s,t)braceleftBig〈s, x+y√2 〉−〈t, x−y√2 〉−j(s)−g(t)bracerightBig(5.47)= supsbraceleftBig〈s, x+y√2 〉−j(s)bracerightBig+ suptbraceleftBig−〈t, x−y√2 〉−g(t)bracerightBig= j∗(x+y√2 ) +g∗(−x−y√2 )= j(x+y√2 ) +g(x−y√2 ) parenleftbigsince j∗ = j by [7, Proposition 3.3(i)]parenrightbig= f(x,y).90Chapter 5. Auto-conjugatesHence f is auto-conjugate.Note that (5.46) holds since〈v+w2 ,x+y〉−〈v−w2 ,x−y〉= 12parenleftBig〈v +w,x +y〉−〈v −w,x−y〉parenrightBig= 12parenleftBig〈v,x〉+〈v,y〉+〈w,x〉+〈w,y〉−〈v,x〉+〈v,y〉+〈w,x〉−〈w,y〉parenrightBig= 12parenleftBig2〈v,y〉+ 2〈w,x〉parenrightBig= 〈v,y〉+〈w,x〉.In the following we show that (5.47) holds. 
Clearly, for every (v,w) thereexists (s,t) such that〈v+w√2 , x+y√2 〉−〈v−w√2 , x−y√2 〉−j(v+w√2 )−g(v−w√2 )= 〈s, x+y√2 〉−〈t, x−y√2 〉−j(s)−g(t).On the other hand, for every (s,t), there exists v0 =√22 (s+t),w0 =√22 (s−t)such that〈v0+w0√2 , x+y√2 〉−〈v0−w0√2 , x−y√2 〉−j(v0+w0√2 )−g(v0−w0√2 )= 〈s, x+y√2 〉−〈t, x−y√2 〉−j(s)−g(t).Hence (5.47) holds.We now show that f is a representation for Id. First we show that g(0) = 0.91Chapter 5. Auto-conjugatesBy assumptions, g(0) ≥ 0. On the other hand,g(0) = g∗(−0) = g∗(0) = supv{−g(v)} ≤ 0.Hence g(0) = 0.Then we have(x,y) ∈ G(f)⇔ f(x,y) = 〈x,y〉⇔ 14bardblx +ybardbl2 +g(x−y√2 ) = 〈x,y〉⇔ 14bardblx−ybardbl2 +g(x−y√2 ) = 0⇔ 14bardblx−ybardbl2 = 0, g(x−y√2 ) = 0 (by g ≥ 0)by g(0) = 0⇔ x = y ⇔ (x,y) ∈ graId.Hence f is an auto-conjugate representation for Id. squaresolidRemark 5.6.2 If we set g = j in Proposition 5.6.1, f = P(FId,F∗Id⊺).Now we give three examples based on Proposition 5.6.1.Example 5.6.3 The functiong := ιR+satisfies the conditions of Proposition 5.6.1. Figure 5.1 is corresponding tof.92Chapter 5. Auto-conjugates−2−10 y−42−21100 −1x22−24Figure 5.1: The function f (blue) and z = xy (gold), and their intersectionline (cyan), graId (red).Proof. Let x ∈R. We consider two cases.Case 1: x ≥ 0. We haveg∗(−x) = supvbraceleftBig〈v,−x〉−ιR+(v)bracerightBig= supv≥0braceleftBig〈v,−x〉bracerightBig= 0 = g(x).Case 2: x < 0. Theng∗(−x) = supvbraceleftBig〈v,−x〉−ιR+(v)bracerightBig= supv≥0braceleftBig〈v,−x〉bracerightBig= ∞ = g(x).Hence g∗(−x) = g(x). squaresolid93Chapter 5. Auto-conjugates21x0−1−2−2−1y012−2.50.02.55.07.5Figure 5.2: The function f (blue) and z = xy (gold), and their intersectionline (cyan), graId (red) .Example 5.6.4 Setg(x) :=x2, if x ≥ 0;14x2, if x ≤ 0.Then g satisfies the conditions of Proposition 5.6.1. Figure 5.2 is corre-sponding to f .94Chapter 5. Auto-conjugatesProof. Let x ∈R. We consider two cases.Case 1: x ≥ 0. We haveg∗(−x) = supvbraceleftBig〈v,−x〉−g(v)bracerightBig= supv≤0braceleftBig〈v,−x〉−g(v)bracerightBig(since g ≥ 0,g(0) = 0)= supv≤0braceleftBig〈v,−x〉− 14v2}= supv≤0braceleftBigh0(v)bracerightBig,where h0(v) := 〈v,−x〉− 14v2.Let0 = ∇h0(v) = −x− 12v.Then v0 = −2x ≤ 0 is a critical point of h0. Since h0 is concave on R−, itscritical point is its maximizer. Theng∗(−x) = h0(v0) = 〈−2x,−x〉−x2 = x2 = g(x).Case 2: x < 0. We haveg∗(−x) = supvbraceleftBig〈v,−x〉−g(v)bracerightBig= supv≥0braceleftBig〈v,−x〉−g(v)bracerightBig(since g ≥ 0,g(0) = 0)= supv≥0braceleftBig〈v,−x〉−v2bracerightBig= supv≥0braceleftBigh1(v)bracerightBig,95Chapter 5. Auto-conjugateswhere h1(v) := 〈v,−x〉−v2Let0 = ∇h1(v) = −x−2v.Then v1 = −12x ≥ 0 is a critical point of h1. Since h1 is concave on R+, itscritical point is its maximizer. Theng∗(−x) = h1(v1) = 〈−12x,−x〉− 14x2 = 14x2 = g(x).Hence g∗(−x) = g(x). squaresolidExample 5.6.5 Set p > 1, 1p + 1q = 1.g(x) :=1pxp, if x ≥ 0;1q(−x)q, if x ≤ 0.satisfies the conditions of Proposition 5.6.1. Figure 5.3 is corresponding tof.Proof. Let x ∈R. We consider two cases.Case 1: x ≥ 0. We haveg∗(−x) = supvbraceleftBig〈v,−x〉−g(v)bracerightBig= supv≤0braceleftBig〈v,−x〉−g(v)bracerightBig(since g ≥ 0,g(0) = 0)= supv≤0braceleftBig〈v,−x〉− 1q(−v)qbracerightBig= supv≤0braceleftBigg0(v)bracerightBig,96Chapter 5. 
where g₀(v) := ⟨v,−x⟩ − (1/q)(−v)^q. Setting

0 = ∇g₀(v) = −x + (−v)^{q−1},

we see that v₀ := −x^{1/(q−1)} ≤ 0 is a critical point of g₀. Since ∇²g₀(v) = −(q−1)(−v)^{q−2} ≤ 0 (∀v < 0), by the continuity of g₀, g₀ is concave on R₋, so its critical point is its maximizer. Then

g∗(−x) = g₀(v₀) = ⟨−x^{1/(q−1)}, −x⟩ − (1/q)x^{q/(q−1)} = (1 − 1/q) x^{q/(q−1)} = (1/p) x^p (by 1/p + 1/q = 1) = g(x).

Case 2: x < 0. We have

g∗(−x) = sup_v { ⟨v,−x⟩ − g(v) }
       = sup_{v≥0} { ⟨v,−x⟩ − g(v) } (since g ≥ 0 and g(0) = 0)
       = sup_{v≥0} { ⟨v,−x⟩ − (1/p)v^p } = sup_{v≥0} g₁(v),

where g₁(v) := ⟨v,−x⟩ − (1/p)v^p. Setting

0 = ∇g₁(v) = −x − v^{p−1},

we see that v₁ := (−x)^{1/(p−1)} > 0 is a critical point of g₁. Since ∇²g₁(v) = −(p−1)v^{p−2} ≤ 0 (∀v > 0), by the continuity of g₁, g₁ is concave on R₊, so its critical point is its maximizer. Then

g∗(−x) = g₁(v₁) = ⟨(−x)^{1/(p−1)}, −x⟩ − (1/p)(−x)^{p/(p−1)} = (1 − 1/p)(−x)^{p/(p−1)} = (1/q)(−x)^q = g(x).

Hence g∗(−x) = g(x). ■

[Figure 5.3: The function f (blue) and z = xy (gold), their intersection line (cyan), and gra Id (red), where p = 4.]

Remark 5.6.6 Example 5.6.3, 5.6.4 and 5.6.5 each provide a function f that is an auto-conjugate representation for Id with f ≠ P(F_Id, F_Id^{∗⊺}).

Chapter 6

Calculation of the auto-conjugates of ∂(−ln)

Throughout this chapter, −ln is meant in the extended-real-valued sense, i.e., −ln(x) = ∞ for x ≤ 0.

In Chapter 5, Proposition 5.3.12 shows that h_{F_A} = P(F_A, F_A^{∗⊺}) for a linear and monotone operator A. Now we will show that for the nonlinear monotone operator ∂(−ln) we have P(F_{∂(−ln)}, F_{∂(−ln)}^{∗⊺}) ≠ h_{F_{∂(−ln)}}. Throughout the chapter, we denote

C := { (x,x∗) ∈ R × R | x ≤ −1/x∗ < 0 },
D := { (x,x∗) ∈ R × R | x∗ ≤ −1/(2x) < 0 },
E := { (x,x∗) ∈ R × R | x∗ ≤ −1/(4x) < 0 }.

6.1 Fitzpatrick function for ∂(−ln)

Fact 6.1.1 Let f = −ln. Then

F_{∂f}(x,x∗) = 1 − 2 x^{1/2} (−x∗)^{1/2}, if x ≥ 0 and x∗ ≤ 0;  ∞, otherwise.

Proof. See [8, Example 3.4]. ■

Fact 6.1.2 Let f = −ln. Then

F_{∂f}^∗(x∗,x) = −1 + ι_C(x∗,x).

Proof. See [8, Example 3.4]. ■

Fact 6.1.3 (Rockafellar) Let f = −ln. Then

f∗(x∗) = −1 − ln(−x∗), if x∗ < 0;  ∞, otherwise.

Proof. See [8, Example 3.4]. ■

Remark 6.1.4 Let f = −ln. Recall that

(f ⊕ f∗)(x,x∗) := f(x) + f∗(x∗), ∀(x,x∗) ∈ R × R.

By Fact 6.1.3, dom(f ⊕ f∗) = R₊₊ × R₋₋.

Proposition 6.1.5 D ⊊ E ⊊ R₊₊ × R₋₋.

Proof. We first verify that D ⊊ E. Let (x,x∗) ∈ D. Thus x > 0, and so −1/(2x) ≤ −1/(4x). By (x,x∗) ∈ D,

x∗ ≤ −1/(2x) ≤ −1/(4x) < 0.

Thus (x,x∗) ∈ E, and hence D ⊂ E. On the other hand, (1,−¼) ∈ E but (1,−¼) ∉ D, so D ≠ E. Hence D ⊊ E. It is clear that E ⊊ R₊₊ × R₋₋. Combining the results above, D ⊊ E ⊊ R₊₊ × R₋₋. ■

6.2 Proximal average of ∂(−ln) and h_{F_{∂f}}

Proposition 6.2.1 Let f = −ln. Then dom P(F_{∂f}, F_{∂f}^{∗⊺}) = E.

Proof. By [7, Theorem 4.6],

dom P(F_{∂f}, F_{∂f}^{∗⊺}) = ½ dom F_{∂f} + ½ dom F_{∂f}^{∗⊺}. (6.1)

In the following we will show that

dom P(F_{∂f}, F_{∂f}^{∗⊺}) = ½ dom F_{∂f}^{∗⊺}. (6.2)

By Fact 6.1.1, (0,0) ∈ dom F_{∂f}; then by (6.1), we have

½ dom F_{∂f}^{∗⊺} ⊂ dom P(F_{∂f}, F_{∂f}^{∗⊺}). (6.3)

Next we show that

½ dom F_{∂f}^{∗⊺} = E. (6.4)

Indeed,

(x,x∗) ∈ ½ dom F_{∂f}^{∗⊺} ⇔ (2x∗,2x) ∈ dom F_{∂f}^∗ ⇔ 2x∗ ≤ −1/(2x) < 0 (by Fact 6.1.2) ⇔ x∗ ≤ −1/(4x) < 0 ⇔ (x,x∗) ∈ E.

Hence (6.4) holds. Then by (6.4) and (6.3),

E ⊂ dom P(F_{∂f}, F_{∂f}^{∗⊺}). (6.5)

In the following, we will verify that

½ dom F_{∂f} + E ⊂ E. (6.6)

Let (y,y∗) ∈ ½ dom F_{∂f} and (x,x∗) ∈ E. By Fact 6.1.1 we have

y ≥ 0, y∗ ≤ 0, x > 0, x∗ < 0, 4xx∗ ≤ −1.

Thus x + y ≥ x > 0 and x∗ + y∗ ≤ x∗ < 0. Then 4(x+y)(x∗+y∗) ≤ 4xx∗ ≤ −1, i.e., (x,x∗) + (y,y∗) ∈ E. Hence (6.6) holds. Thus by (6.6), (6.4) and (6.1), dom P(F_{∂f}, F_{∂f}^{∗⊺}) ⊂ E. Then by (6.5), dom P(F_{∂f}, F_{∂f}^{∗⊺}) = E. ■
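Fact 6.1.1 can also be checked numerically by evaluating the supremum in Definition 5.2.1 over a sampled piece of gra ∂(−ln) = {(y, −1/y) : y > 0}. The sketch below is only an illustration (the log-spaced grid and the test points are arbitrary choices, not taken from the text), but it reproduces the closed form 1 − 2√x √(−x∗) on the finite region x ≥ 0, x∗ ≤ 0.

```python
import numpy as np

# gra ∂(−ln) = {(y, -1/y) : y > 0}, sampled on a log-spaced grid.
y = np.exp(np.linspace(-8.0, 8.0, 20001))
ystar = -1.0 / y

def fitz_numeric(x, xs):
    """sup over sampled graph points of <x, y*> + <y, x*> - <y, y*>  (Definition 5.2.1)."""
    return np.max(x * ystar + y * xs - y * ystar)

def fitz_formula(x, xs):
    """Closed form of Fact 6.1.1 on the region x >= 0, x* <= 0."""
    return 1.0 - 2.0 * np.sqrt(x) * np.sqrt(-xs)

for (x, xs) in [(1.0, -1.0), (4.0, -0.25), (0.5, -3.0)]:
    print(f"F({x}, {xs}):  sampled {fitz_numeric(x, xs):.6f}   formula {fitz_formula(x, xs):.6f}")
```

For each test point the sampled supremum agrees with the closed form to within the grid resolution.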
Lemma 6.2.2 Let x, x∗, y∗ ∈ R with y∗ ≤ 0. Then

ι_C(2x∗ − 2y∗, x) = ι_C(2x∗ − 2y∗, x) + ι_D(x,x∗).

Proof. We consider two cases.
Case 1: (2x∗ − 2y∗, x) ∉ C. Clear.
Case 2: (2x∗ − 2y∗, x) ∈ C. By assumption,

2x∗ ≤ 2x∗ − 2y∗ ≤ −1/x < 0 ⇒ x∗ ≤ −1/(2x) < 0 (by y∗ ≤ 0).

Thus (x,x∗) ∈ D, and so ι_D(x,x∗) = 0. ■

Remark 6.2.3 Let x, x∗ ∈ R. Then

ι_{R₊}(x) + ι_D(x,x∗) = ι_D(x,x∗).

Proof. Follows directly from the definition of D. ■

[Figure 6.1: The function h_{F_{∂f}}.]

Proposition 6.2.4 Let f = −ln. Then

h_{F_{∂f}}(x,x∗) = −(−1 − 2xx∗)^{1/2} + ι_D(x,x∗), ∀(x,x∗) ∈ R × R.

Consequently, dom h_{F_{∂f}} = D. Figure 6.1 illustrates h_{F_{∂f}}.

Proof. Let (x,x∗) ∈ R × R. By Fact 6.1.1 and Fact 6.1.2, we have

h_{F_{∂f}}(x,x∗) = inf_{y∗} { ½F_{∂f}(x,2y∗) + ½F_{∂f}^∗(2x∗ − 2y∗, x) }
 = inf_{y∗≤0} { ½ − |x|^{1/2}(−2y∗)^{1/2} + ι_{R₊}(x) + ½F_{∂f}^∗(2x∗ − 2y∗, x) }
 = inf_{y∗≤0} { ½ − |x|^{1/2}(−2y∗)^{1/2} + ι_{R₊}(x) + ι_C(2x∗ − 2y∗, x) − ½ }
 = inf_{y∗≤0} { −|x|^{1/2}(−2y∗)^{1/2} + ι_{R₊}(x) + ι_C(2x∗ − 2y∗, x) + ι_D(x,x∗) } (6.7)
 = inf_{y∗≤0} { −|x|^{1/2}(−2y∗)^{1/2} + ι_C(2x∗ − 2y∗, x) + ι_D(x,x∗) }, (6.8)

where (6.7) holds by Lemma 6.2.2 and (6.8) by Remark 6.2.3.
Now we consider two cases.
Case 1: (x,x∗) ∉ D. Then h_{F_{∂f}}(x,x∗) = ∞.
Case 2: (x,x∗) ∈ D. Then x > 0, and

h_{F_{∂f}}(x,x∗) = inf_{y∗≤0} { −x^{1/2}(−2y∗)^{1/2} + ι_C(2x∗ − 2y∗, x) }
 = inf { −x^{1/2}(−2y∗)^{1/2} | 2x∗ − 2y∗ ≤ −1/x < 0, y∗ ≤ 0 } (6.9)
 = −(2x)^{1/2} sup { (−y∗)^{1/2} | 2x∗ − 2y∗ ≤ −1/x < 0, y∗ ≤ 0 }
 = −(2x)^{1/2} sup { (−y∗)^{1/2} | 0 ≤ −2y∗ ≤ −2x∗ − 1/x }
 = −(2x)^{1/2} (−1/(2x) − x∗)^{1/2} (6.10)
 = −(−1 − 2xx∗)^{1/2} (by x > 0),

where (6.9) holds by restricting to (2x∗ − 2y∗, x) ∈ C, and (6.10) holds because 0 ≤ −1/(2x) − x∗ since (x,x∗) ∈ D.
Combining the results above,

h_{F_{∂f}}(x,x∗) = −(−1 − 2xx∗)^{1/2} + ι_D(x,x∗), ∀(x,x∗) ∈ R × R. ■

Corollary 6.2.5 Let f = −ln. Then P(F_{∂f}, F_{∂f}^{∗⊺}), f ⊕ f∗ and h_{F_{∂f}} are three different functions.

Proof. By Remark 6.1.4, dom(f ⊕ f∗) = R₊₊ × R₋₋. By Proposition 6.2.1 and Proposition 6.2.4, dom P(F_{∂f}, F_{∂f}^{∗⊺}) = E and dom h_{F_{∂f}} = D. By Proposition 6.1.5,

dom h_{F_{∂f}} ⊊ dom P(F_{∂f}, F_{∂f}^{∗⊺}) ⊊ dom(f ⊕ f∗).

Hence P(F_{∂f}, F_{∂f}^{∗⊺}), f ⊕ f∗ and h_{F_{∂f}} are all different. ■

Remark 6.2.6 We do not have an explicit formula for P(F_{∂(−ln)}, F_{∂(−ln)}^{∗⊺}).

Chapter 7

Proximal averages of monotone operators with linear graphs

We have given some auto-conjugate representation results for linear and monotone operators. Now we extend them to monotone operators with linear graphs. Background work on linear relations can be found in the book by Cross [16].

7.1 Adjoint process of operators with linear graphs

Definition 7.1.1 Let C be a nonempty cone of Rⁿ. The polar of C, denoted C⁻, is defined by

C⁻ := { x∗ | ⟨c,x∗⟩ ≤ 0, ∀c ∈ C }.

Remark 7.1.2 If C is a linear subspace of Rⁿ, then by Lemma 2.1.28, C⁻ = C^⊥.
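As a small numerical aside on Remark 7.1.2 (a minimal sketch; the subspace and the test vectors are arbitrary choices of our own): for a linear subspace C, a vector x∗ lies in C⁻ exactly when its orthogonal projection onto C vanishes, i.e. exactly when x∗ ∈ C^⊥.

```python
import numpy as np

# Remark 7.1.2: for a linear subspace C, the polar C^- coincides with C^perp.
# Quick check in R^3 for the subspace C = span{(1,0,1), (0,1,-1)}.
B = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, -1.0]]).T   # columns span C
P = B @ np.linalg.solve(B.T @ B, B.T)                  # orthogonal projector onto C

x_perp = np.array([-1.0, 1.0, 1.0])    # orthogonal to both spanning vectors
x_gen  = np.array([1.0, 0.0, 0.0])     # not orthogonal to C

for xs in (x_perp, x_gen):
    c = P @ xs                          # component of xs inside C
    # <c, xs> = ||c||^2, so xs lies in C^- (i.e. <c', xs> <= 0 for every c' in C)
    # exactly when this component vanishes, that is, exactly when xs is in C^perp.
    print(xs, " in polar:", np.allclose(c, 0), "  <c, xs> =", float(c @ xs))
```

The vector orthogonal to both spanning directions is reported as lying in the polar, while the generic vector is not, since its component inside C pairs with it strictly positively.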
Proximal averages of monotone operators with linear graphsDefinition 7.1.3 Let A: Rn ⇉ Rn be such that graA is a convex cone.The adjoint process of A, A∗, is defined bygraA∗ :=braceleftBig(x,x∗) | (x∗,−x) ∈ (graA)−bracerightBig.Lemma 7.1.4 [16, Proposition III.1.3] Let A: Rn ⇉Rn be such that graAis linear. Suppose k ∈R with k negationslash= 0. Then (kA)∗ = kA∗.Proof. By Remark 7.1.2,(x,x∗) ∈ gra(kA)∗ ⇔ (x∗,−x) ∈ (grakA)− = (grakA)⊥⇔ 〈(x∗,−x),(v,v∗)〉 = 0, ∀(v,v∗) ∈ gra(kA)⇔ 1k〈(x∗,−x),(v,v∗)〉 = 0, ∀(v,v∗) ∈ gra(kA)⇔ 〈(1kx∗,−x),(v, 1kv∗)〉 = 0, ∀(v,v∗) ∈ gra(kA)⇔ 〈(1kx∗,−x),(v,w∗)〉 = 0, ∀(v,w∗) ∈ graA⇔ (x, 1kx∗) ∈ graA∗ ⇔ x∗ ∈ kA∗x.Hence (kA)∗ = kA∗. squaresolidRemark 7.1.5 Let A: Rn ⇉Rn be such that graA is linear. Then graA∗is a linear graph.Remark 7.1.6 Let A: Rn ⇉Rn be such that graA is linear. Then A∗0 =(domA)⊥.Proof. See [16, Proposition III.1.4 (b)]. squaresolid109Chapter 7. Proximal averages of monotone operators with linear graphsDefinition 7.1.7 Let A: Rn ⇉ Rn be such that graA is linear. We saythat A is symmetric if A∗ = A.Definition 7.1.8 Let A: Rn ⇉ Rn be such that graA is linear. We saythat A is antisymmetric if A∗ = −A.Fact 7.1.9 Let A,B: Rn ⇉ Rn be such that graA and graB are linear.Then (A +B)∗ = A∗ +B∗.Proof. See [11, Theorem 7.4]. squaresolidFact 7.1.10 Let A: Rn ⇉ Rn be such that graA is a closed convex cone.Then graA∗∗ = −graA.Proof. See [13, Exercises 7 page 119]. squaresolidCorollary 7.1.11 Let A: Rn ⇉ Rn be such that graA is linear. ThenA∗∗ = A.Proof. SincegraAis alinear subspace,−graA = graA. ThusbyFact 7.1.10,graA∗∗ = graA. Hence A∗∗ = A. squaresolidCorollary 7.1.12 Let A: Rn ⇉Rn be such that graA is a linear subspace.Then domA∗ = (A0)⊥.Proof. By Remark 7.1.5 and Remark 7.1.6, we have (A∗)∗0 = (domA∗)⊥.Then by Corollary 7.1.11, A0 = (domA∗)⊥. Thus domA∗ = (A0)⊥. squaresolidRemark 7.1.13 Let A: Rn ⇉Rn be such that graA is linear. By Fact 7.1.9,Remark 7.1.5, Corollary 7.1.11 and Lemma 7.1.4, A+A∗2 is symmetric andA−A∗2 is antisymmetric.110Chapter 7. Proximal averages of monotone operators with linear graphsDefinition 7.1.14 (Symmetric and antisymmetric part) Let A: Rn ⇉Rn be such that graA is linear. Then A+ = 12A + 12A∗ is the symmetricpart of A, and A◦ = 12A− 12A∗ is the antisymmetric part of A.Remark 7.1.15 Let A: Rn ⇉ Rn be such that graA is a linear subspace.Then by Corollary 7.1.12, domA+ = domA◦ = domA∩(A0)⊥.Corollary 7.1.16 Let A: Rn ⇉Rn be such that graA is a linear subspace.Then A can be decomposed into the sum of a symmetric operator with alinear graph and an antisymmetric operator with a linear graph, if and onlyif, domA = (A0)⊥. In that case, A can be decomposed as : A = A+ +A◦.Proof. “⇒” Let B: Rn ⇉Rn be a symmetric operator with a linear graphand C: Rn ⇉ Rn be an antisymmetric operator with a linear graph suchthat A = B + C. By Fact 7.1.9, A∗ = B∗ + C∗ = B −C. Then domA∗ =domB ∩domC = domA. By Corollary 7.1.12, domA = (A0)⊥.“⇐” By Remark 7.1.15, domA+ = domA◦ = domA. By Corollary 7.1.12,domA∗ = (A0)⊥ = domA. Thus, byRemark7.1.5 and Proposition 4.1.3(iii),A+x+A◦x = 12(Ax +A∗x+Ax−A∗x) = Ax+A∗0= Ax+ (domA)⊥ = Ax+A0 (by Remark 7.1.6)= Ax (by Proposition 4.1.3(iii)), ∀x ∈ domA.squaresolidRemark 7.1.17 Consider an operator A: Rn ⇉Rn with graA = {(0,0)}.Then we have domA = {0} negationslash= Rn = (A0)⊥. Clearly, graA∗ = Rn ×Rn.111Chapter 7. Proximal averages of monotone operators with linear graphsThus (A++A◦)0 = A+0+A◦0 = Rn+Rn = Rn negationslash= A0. 
Thus A negationslash= A++A◦.By Proposition 7.1.16, A can not be decomposed into the sum of a symmetricoperator with a linear graph and an antisymmetric operator with a lineargraph.Remark 7.1.18 Let S be a linear subspace of Rn. Then S is closed.Corollary 7.1.19 Let A: Rn ⇉Rn be maximal monotone such that graAis a linear subspace. Then A = A+ +A◦.Proof. By Remark 7.1.18, Proposition 4.2.5 and Corollary 7.1.16. squaresolidDefinition 7.1.20 Let C be a nonempty convex subset of Rn and x0 ∈Rn.The normal cone of C at x0, NC(x0), is defined byNC(x0) :=braceleftBigx∗ | 〈x∗,c−x0〉 ≤ 0,∀c ∈ CbracerightBig, if x0 ∈ C;∅, otherwise.Fact 7.1.21 Let C be a nonempty convex subset of Rn and x0 ∈ C. ThenNC(x0) = ∂ιC(x0). If C is a linear subspace ofRn, then NC(x0) = ∂ιC(x0) =C⊥ by Fact 2.1.29.Remark 7.1.22 Let A: Rn → Rn be linear and S be a linear subspace ofRn. Then gra(A +NS) is a linear subspace of Rn ×Rn.Fact 7.1.23 Let B: Rn → Rn be linear and S be a linear subspace of Rnsuch that A = B +NS. Then A∗ = B∗ +NS.112Chapter 7. Proximal averages of monotone operators with linear graphsProof. Since graA is a linear subspace by Remark 7.1.22, then by Re-mark 7.1.2 and Fact 7.1.21 we have(x,x∗) ∈ graA∗⇔ (x∗,−x) ∈ (graA)−⇔ (x∗,−x) ∈ (graA)⊥⇔ 〈x∗,y〉−〈x,y∗〉 = 0, ∀y∗ ∈ Ay⇔ 〈x∗,y〉−〈x,By +S⊥〉 = 0, ∀y ∈ S. (7.1)Let y = 0 in (7.1). We have 〈x,S⊥〉 = 0. Thus x ∈ S. Then(x,x∗) ∈ graA∗⇔ x ∈ S, 〈x∗,y〉−〈x,By〉 = 0, ∀y ∈ S⇔ x ∈ S, 〈x∗ −B∗x,y〉 = 0, ∀y ∈ S⇔ x ∈ S, (x∗ −B∗x)⊥S⇔ x ∈ S, x∗ ∈ B∗x+S⊥⇔ x∗ ∈ (B∗ +NS)(x) (by Fact 7.1.21).Hence A∗ = B∗ +NS. squaresolidRemark 7.1.24 Fact 7.1.23 is a special case of [13, Exercises 14 (f) page120].Remark 7.1.25 Let B: Rn → Rn be linear and S be a linear subspace ofRn. Suppose A = B + NS. Then by Fact 7.1.23, A+ = B+ + NS, A◦ =113Chapter 7. Proximal averages of monotone operators with linear graphsB◦ +NS and A = A+ +A◦.Now we recall the definition of QA.Definition 7.1.26 Let A: Rn ⇉Rn be such that graA is a linear subspaceof Rn ×Rn. We define QA byQAx =PAxx, if x ∈ domA;∅, otherwise.Proposition 7.1.27 Let A: Rn ⇉ Rn be such that graA is a linear sub-space. Then QA is single-valued and linear on domA, and QA is a selectionof A.Proof. SinceA0 is aclosed subspacebyProposition 4.1.3(i) and Remark7.1.18,Ax (∀x ∈ domA) is a closed convex by Proposition 4.1.3(ii). By Fact 4.3.1,QA is single-valued on domA and QA is a selection of A. Very similar tothe proof of Proposition 4.3.6, we have QA is linear on domA. squaresolidCorollary 7.1.28 Let A: Rn ⇉Rn be such that graA is a linear subspace.ThenAx =QAPdomAx+A0, if x ∈ domA;∅, otherwise,where QAPdomA is linear.Proof. By Proposition 7.1.27 and Proposition 4.1.3(ii). squaresolidProposition 7.1.29 Let A: Rn ⇉ Rn be such that graA is a linear sub-space. Assume A can be decomposed into the sum of a symmetric operator114Chapter 7. Proximal averages of monotone operators with linear graphswith a linear graph and an antisymmetric operator with a linear graph. Thensuch a decomposition is not unique.Proof. By Corollary 7.1.28, Corollary 7.1.16 and Fact 7.1.21,A = QAPdomA +NdomA.Thus,A = (QAPdomA)++parenleftbig(QAPdomA)◦+NSparenrightbig = ((QAPdomA)++NS)+(QAPdomA)◦.By Fact 7.1.23, (QAPdomA)+,(QAPdomA)++NS are symmetricandparenleftbig(QAPdomA)◦+NSparenrightbig,(QAPdomA)◦ are antisymmetric. Since (QAPdomA)+ negationslash= (QAPdomA)++NS and (QAPdomA)◦ + NSparenrightbig negationslash= (QAPdomA)◦ as S negationslash= Rn, the decompositionis not unique. squaresolidTheorem 7.1.30 Let A,B,C: Rn ⇉ Rn be such that graA,graB andgraC are linear subspaces. 
Assume B is symmetric and C is antisymmetricsuch that A = B +C and domB = domC. Then B = A+,C = A◦.Proof. By Fact 7.1.9, A∗ = B−C. Thus by assumptions, domB = domC =domA = domA∗. Thus domA+ = domA◦ = domB = domC = domA.By Corollary 7.1.12, domB = domB∗ = (B0)⊥,domC = domC∗ = (C0)⊥.Thus (B0)⊥ = (C0)⊥. Since C0,B0 are closed linear subspaces by Propo-sition 4.1.3(i) and Remark 7.1.18, B0 = C0. Let x ∈ domA. Then by115Chapter 7. Proximal averages of monotone operators with linear graphsProposition 4.1.3(iii) and Proposition 4.1.3(ii),A+x = 12(Bx+Cx+Bx−Cx) = Bx+C0 = Bx+B0 = Bx,A◦x = 12(Bx+Cx−Bx+Cx) = B0+Cx = C0+Cx = Cx.Hence B = A+,C = A◦. squaresolidCorollary 7.1.31 Let A: Rn ⇉Rn be maximal monotone such that graAis a linear subspace of Rn×Rn. Then A = PdomAQAPdomA+NdomA, wherePdomAQAPdomA is linear and monotone.Proof. Since domA is a closed linear subspace by Remark 7.1.18, then byTheorem 4.4.1, PdomAQAPdomA is linear and monotone, and A = ∂(qtildewideA +ιdomA)+ tildewideA◦, where tildewideA = PdomAQAPdomA.Then by Fact 2.1.18, Fact 2.1.30 and Fact 7.1.21,A = tildewideA+ +∂ιdomA + tildewideA◦ = PdomAQAPdomA +NdomA.squaresolidRemark 7.1.32 Let A: Rn ⇉ Rn such that graA is a linear subspace ofRn. Then graA−1 is a linear subspace of Rn ×Rn.116Chapter 7. Proximal averages of monotone operators with linear graphs7.2 Fitzpatrick functions of monotone operatorswith linear graphsDefinition 7.2.1 Assume A: Rn ⇉ Rn. The set-valued inverse mapping,A−1: Rn ⇉Rn, is defined byx ∈ A−1y ⇔ y ∈ Ax.Definition 7.2.2 Let A: Rn ⇉ Rn and S be a subset of Rn. Then AS isdefined byAS :=braceleftBigx∗ | x∗ ∈ As, ∃s ∈ SbracerightBig.Proposition 7.2.3 Let B: Rn → Rn be linear and S be a linear subspaceof Rn such that A = B +NS. Then(i) x ∈ ranA ⇔ x+S⊥ ⊂ ranA(ii) A−1x = A−1(x+S⊥).Proof. (i): By Fact 7.1.21, ranA = ran(B |S) + S⊥. Thus S⊥ + ranA =ranA. Thenx ∈ ranA ⇔ x+S⊥ ⊂ S⊥ + ranA = ranA.Hence (i) holds.(ii): Clearly, A−1x ⊂ A−1(x+S⊥). In the following we show A−1(x+S⊥) ⊂A−1x.117Chapter 7. Proximal averages of monotone operators with linear graphsBy Fact 7.1.21,y ∈ A−1(x+S⊥) ⇒ y ∈ A−1(x +t), ∃t ∈ S⊥⇒ x+t ∈ Ay = By +NS(y) = By +S⊥, y ∈ S⇒ x ∈ By +S⊥ = By +NS(y) = Ay⇒ y ∈ A−1x.Thus A−1(x +S⊥) ⊂ A−1x. Hence A−1x = A−1(x+S⊥). squaresolidLemma 7.2.4 Let B: Rn → Rn be linear and symmetric, and S be a sub-space of Rn. Suppose that x ∈ ran(B + NS). Then 〈x, (B + NS)−1x〉 issingle-valued. Moreover, if y0 ∈ (B + NS)−1x, then 〈x, (B + NS)−1x〉 =〈y0,By0〉.Proof. Let x∗1,x∗2 ∈ (B +NS)−1x. Then x∗1,x∗2 ∈ S and by Fact 7.1.21,x ∈ (B +NS)x∗1 = Bx∗1 +S⊥, x ∈ (B +NS)x∗2 = Bx∗2 +S⊥. (7.2)Then we haveB(x∗1 −x∗2) ∈ S⊥. (7.3)118Chapter 7. Proximal averages of monotone operators with linear graphsBy (7.2), there exists t ∈ S⊥ such that x = Bx∗1 +t. Then〈x, x∗1 −x∗2〉 = 〈Bx∗1 +t, x∗1 −x∗2〉= 〈Bx∗1, x∗1 −x∗2〉 (7.4)= 〈x∗1, B(x∗1 −x∗2)〉= 0, (7.5)in which, (7.4) holds by t ∈ S⊥ and x∗1 −x∗2 ∈ S, and (7.5) holds by (7.3)and x∗1 ∈ S.Thus 〈x,x∗1〉 = 〈x,x∗2〉. Hence 〈x, (B +NS)−1x〉 is single-valued.Let y0 ∈ (B + NS)−1x. Then y0 ∈ S and x ∈ (B + NS)y0 = By0 + S⊥ byFact 7.1.21. Let t0 ∈ S⊥ such that x = By0 + t0. Since 〈x, (B + NS)−1x〉is single-valued,〈x, (B+NS)−1x〉 = 〈x,y0〉 = 〈By0+t0,y0〉 = 〈y0,By0〉 (by y0 ∈ S,t0 ∈ S⊥).squaresolidLemma 7.2.5 Let B: Rn →Rn be linear and S be a linear subspace of Rnsuch that A = B +NS. Suppose (x,x∗) ∈ S ×Rn. ThenιranA+(x∗ −Bx) = ιranA+(x∗ −Ax),i.e., x∗ −Bx ∈ ranA+ ⇔ x∗ −Ax ⊂ ranA+.119Chapter 7. 
Proximal averages of monotone operators with linear graphsMoreover if x∗ −Bx ∈ ranA+, thenangbracketleftbigx∗ −Bx, (A+)−1(x∗ −Bx)angbracketrightbig = angbracketleftbigx∗ −Ax, (A+)−1(x∗ −Ax)angbracketrightbig.Proof. By Fact 7.1.21,Ax = Bx+S⊥. (7.6)By Remark 7.1.25 and Proposition 7.2.3(i) applied to A+,ιranA+(x∗ −Bx) = ιranA+parenleftbigx∗ −Bx+S⊥parenrightbig= ιranA+parenleftbigx∗ −Axparenrightbig (by (7.6)).Let x∗ −Bx ∈ ranA+. By Remark 7.1.25, (A+)−1(x∗ −Bx) ⊂ S, then wehaveangbracketleftbigx∗ −Bx, (A+)−1(x∗ −Bx)angbracketrightbig= angbracketleftbigx∗ −Bx+S⊥, (A+)−1(x∗ −Bx)angbracketrightbig= angbracketleftbigx∗ −Bx+S⊥, (A+)−1parenleftbigx∗ −Bx+S⊥parenrightbigangbracketrightbig (7.7)= angbracketleftbigx∗ −Ax, (A+)−1parenleftbigx∗ −Axparenrightbigangbracketrightbig, (7.8)in which, (7.7) holds by Proposition 7.2.3(ii), (7.8) by (7.6). squaresolidRemark 7.2.6 Let B: Rn →Rn be linear and S be a linear subspace of Rnsuch that A = B+NS. Suppose (x,x∗) ∈ S×Rn such that x∗−Bx ∈ ranA+.120Chapter 7. Proximal averages of monotone operators with linear graphsBy Remark 7.1.25, Lemma 7.2.4 and Lemma 7.2.5, we see thatangbracketleftbigx∗ −Ax, (A+)−1(x∗ −Ax)angbracketrightbigis single-valued.Proposition 7.2.7 Let B: Rn → Rn be linear and S be a linear subspaceof Rn such that A = B +NS. Suppose (x,x∗) ∈ S×Rn. Then (x∗−Ax) ⊂ranA+ or (x∗ −Ax)∩ranA+ = ∅.Proof. Suppose that (x∗ −Ax) ∩ranA+ negationslash= ∅. By Fact 7.1.21, there existst ∈ S⊥ such that x∗−Bx+t ∈ ranA+. Then by Fact 7.1.21, Remark 7.1.25and Proposition 7.2.3(i), we obtain x∗ −Ax = x∗ −Bx + S⊥ = x∗ −Bx+t+S⊥ ⊂ ranA+. squaresolidDefinition 7.2.8 Let A: Rn ⇉Rn. We define ΦA: Rn ⇉Rn byΦA(x) =A−1x, if x ∈ ranA;{0}, otherwise.Remark 7.2.9 Let B: Rn → Rn be linear and S be a linear subspace ofRn such that A = B +NS. Then by Proposition 7.2.7 and Remark 7.2.6,angbracketleftbigx∗ −Ax, ΦA+(x∗ −Ax)angbracketrightbig parenleftbig(x,x∗) ∈ S ×Rnparenrightbigis single-valued. By Lemma 7.2.4 and Remark 7.1.25, angbracketleftbigx, ΦA+(x)angbracketrightbig (x ∈Rn) is single-valued.121Chapter 7. Proximal averages of monotone operators with linear graphsLemma 7.2.10 Let A: Rn → Rn such that graA is a linear subspace ofRn ×Rn. Let k ∈R with k negationslash= 0. Then ΦA(kx) = kΦA(x), ∀x ∈Rn.Proof. Let x ∈Rn. We consider two cases.Case 1: x /∈ ranA. Then kx /∈ ranA. Thus we have kΦA(x) = ΦA(kx) = 0.Case 2: x ∈ ranA. Then kx ∈ ranA. Then by Remark 7.1.32 and Proposi-tion 4.1.3(iii), kΦA(x) = kA−1x = A−1(kx) = ΦA(kx). squaresolidCorollary 7.2.11 Let B: Rn →Rn be linear and S be a linear subspace ofRn such that A = B +NS. Let k ∈R. ThenιS(x) +ιranA+(x∗ −Bx)+kangbracketleftbigx∗ −Bx, ΦA+(x∗ −Bx)angbracketrightbig= ιS(x) +ιranA+(x∗ −APSx)+kangbracketleftbigx∗ −APSx, ΦA+(x∗ −APSx)angbracketrightbig,∀(x,x∗) ∈Rn ×Rn.Proof. Combine Lemma 7.2.5 and Remark 7.1.25. squaresolidFact 7.2.12 Let B: Rn ⇉ Rn be linear and monotone, and S be a linearsubspace of Rn. Then B +NS is maximal monotone.Proof. See [28, Theorem 41.2]. squaresolidFact 7.2.13 Let B: Rn ⇉ Rn be linear, symmetric and monotone, and Sbe a linear subspace of Rn. Then ran(B +NS) = ranB +S⊥.Proof. Combine Remark 7.1.25, Fact 7.2.12, [4, Corollary 4.9] and [28, 19.0.3page 70]. squaresolid122Chapter 7. Proximal averages of monotone operators with linear graphsLemma 7.2.14 Let B: Rn →Rn be linear and monotone, and S be a linearsubspace of Rn. Then(qB +ιS)∗(x) = ιran(B++NS)(x) + 12angbracketleftbigx, Φ(B++NS)(x)angbracketrightbig, ∀x ∈Rn.Proof. Let x ∈Rn. 
Then(qB +ιS)∗(x) = supy∈RnbraceleftBig〈y,x〉−qB(y)−ιS(y)bracerightBig.Letg(y) := 〈y,x〉−qB(y)−ιS(y).A point y is a maximizer of g, if and only if, it is a critical point. Then byFact 2.1.30, Fact 2.1.18 and Fact 7.1.21,0 ∈ ∂g(y) = x−B+y −NS(y) = x−(B+ +NS)(y).We consider two cases.Case 1: x ∈ ran(B++NS). Let y0 satisfy that x ∈ (B++NS)y0. Then y0 ∈ Sand x ∈ B+y0 + S⊥ by Fact 7.1.21. Let t ∈ S⊥ such that x = B+y0 + t.123Chapter 7. Proximal averages of monotone operators with linear graphsSince y0 is a critical point,(qB +ιS)∗(x) = g(y0) = 〈y0,x〉− 12〈y0, B+y0〉 (by Remark 2.1.12)= 〈y0,B+y0 +t〉− 12〈y0, B+y0〉 (by x = B+y0 +t)= 〈y0,B+y0〉− 12〈y0, B+y0〉 (by y0 ∈ S and t ∈ S⊥)= 12〈y0, B+y0〉= 12〈x, (B+ +NS)−1x〉 (by Lemma 7.2.4 applied to B+)= 12angbracketleftbigx, Φ(B++NS)(x)angbracketrightbig.Case 2: x /∈ ran(B+ + NS). By Fact 7.2.13, ran(B+ + NS) = ranB+ + S⊥.Thus by Fact 2.1.32,parenleftbigran(B+ +NS)parenrightbig⊥ = (ranB+ +S⊥)⊥ = (ranB+)⊥ ∩(S⊥)⊥ = kerB+ ∩S.Then we have Rn = ran(B+ +NS)⊕(kerB+ ∩S) and x = Pran(B++NS)x+PkerB+∩Sx. Since x /∈ ran(B+ +NS), PkerB+∩Sx negationslash= 0. Thus〈PkerB+∩Sx,x〉 = 〈PkerB+∩Sx,Pran(B++NS)x+PkerB+∩Sx〉= bardblPkerB+∩Sxbardbl2 > 0. (7.9)Then by Fact 5.1.10,(qB +ιS)∗(x) = (qB +ιS)∗(x)+ (qB +ιS)(kPkerB+∩Sx) (7.10)≥ 〈kPkerB+∩Sx,x〉 → ∞, as k → ∞ parenleftbigby (7.9)parenrightbig,124Chapter 7. Proximal averages of monotone operators with linear graphswhere (7.10) holds since (qB + ιS)(kPkerB+∩Sx) = 0 by Remark 2.1.12 andPkerB+∩Sx ∈ kerB+ ∩S.Combining the conclusions above, we have(qB +ιS)∗(x) = ιran(B++NS)(x) + 12angbracketleftbigx, Φ(B++NS)(x)angbracketrightbig, ∀x ∈Rn.squaresolidProposition 7.2.15 Let B: Rn →Rn be linear and monotone, and S be alinear subspace of Rn. ThenF(B+NS)(x,x∗)= ιS(x)+ιran(B++NS)(B∗x+x∗)+ 14angbracketleftbigB∗x+x∗, Φ(B++NS)(B∗x+x∗)angbracketrightbig,∀(x,x∗) ∈Rn ×Rn.Proof. Let (x,x∗) ∈Rn ×Rn. By Fact 7.1.21, we haveF(B+NS)(x,x∗)= sup(y,y∗)∈gra(B+NS)braceleftBig〈y∗,x〉+〈x∗,y〉−〈y,y∗〉bracerightBig= supy∈SbraceleftBig〈By +S⊥,x〉+〈x∗,y〉−〈y, By +S⊥〉bracerightBig= ιS(x) + supy∈SbraceleftBig〈By,x〉+〈x∗,y〉−〈y,By〉bracerightBig(7.11)= ιS(x) + supy∈RnbraceleftBig〈y, B∗x+x∗〉−〈y,By〉−ιS(y)bracerightBig= ιS(x) + 2 supy∈RnbraceleftBig〈y, 12(B∗x+x∗)〉−qB(y)−ιS(y)bracerightBig(7.12)125Chapter 7. Proximal averages of monotone operators with linear graphswhere (7.11) holds by y ∈ S.By (7.12), we haveF(B+NS)(x,x∗) = ιS(x) + 2(qB +ιS)∗parenleftbig12(B∗x+x∗)parenrightbig= ιS(x) +ιran(B++NS)(B∗x +x∗) (7.13)+angbracketleftbig12(B∗x+x∗),Φ(B++NS)(12(B∗x+x∗)parenrightbigangbracketrightbig= ιS(x) +ιran(B++NS)(B∗x +x∗) (7.14)+ 14angbracketleftbig(B∗x+x∗),Φ(B++NS)(B∗x+x∗)angbracketrightbig.(7.13) holds by Lemma 7.2.14 and Remark 7.1.22. (7.14) holds by Re-mark 7.1.22 and Lemma 7.2.10. squaresolidRemark 7.2.16 Let S be a linear subspace of Rn. By Fact 7.2.13, ranNS =S⊥. By Proposition 7.2.15,FNS(x,x∗) = ιS(x)+ιS⊥(x∗), ∀(x,x∗) ∈Rn ×Rn.Corollary 7.2.17 Let A: Rn ⇉Rn be maximal monotone such that graAis a linear subspace. ThenFA(x,x∗)= ιdomA(x) +ιranA+(A∗PdomAx+x∗)+ 14angbracketleftbigA∗PdomAx+x∗,ΦA+(A∗PdomAx+x∗)angbracketrightbig,∀(x,x∗) ∈Rn ×Rn.126Chapter 7. Proximal averages of monotone operators with linear graphsProof. 
By Corollary 7.1.31, there exists a linear and monotone operatorB : Rn →Rn such thatA = B +NdomA.By Proposition 7.2.15 and Remark 7.1.25, we haveFA(x,x∗) = F(B+NdomA)(x,x∗)= ιdomA(x) +ιranA+(B∗x+x∗) + 14angbracketleftbigB∗x+x∗, ΦA+(B∗x +x∗)angbracketrightbig= ιdomA(x) +ιranA+(−B∗(−x) +x∗)+ 14angbracketleftbig−B∗(−x)+x∗, ΦA+(−B∗(−x)+x∗)angbracketrightbig= ιdomA(x) +ιranA+parenleftbig−A∗PdomA(−x) +x∗parenrightbig (7.15)+ 14angbracketleftbig−A∗PdomA(−x)+x∗, ΦA+parenleftbig−A∗PdomA(−x) +x∗parenrightbigangbracketrightbig= ιdomA(x) +ιranA+parenleftbig−A∗(−PdomAx) +x∗parenrightbig (7.16)+ 14angbracketleftbig−A∗(−PdomAx)+x∗, ΦA+parenleftbig−A∗(−PdomAx) +x∗parenrightbigangbracketrightbig= ιdomA(x) +ιranA+(A∗PdomAx+x∗) (7.17)+ 14angbracketleftbigA∗PdomAx+x∗, ΦA+parenleftbigA∗PdomAx+x∗parenrightbigangbracketrightbig,where (7.15) holds by Fact 7.1.23 and Corollary 7.2.11 applied to B∗ andA∗. (7.16) holds by Fact 4.3.3, and (7.17) by Remark 7.1.5 and Proposi-tion 4.1.3(iii). squaresolidProposition 7.2.18 Let B: Rn →Rn be linear and monotone, and S be a127Chapter 7. Proximal averages of monotone operators with linear graphslinear subspace of Rn. ThenF∗(B+NS)(x∗,x) = ιS(x)+ιS⊥(x∗ −Bx)+〈x,Bx〉,∀(x,x∗) ∈Rn ×Rn.Proof. Let (x,x∗) ∈Rn ×Rn. By Proposition 7.2.15,F∗(B+NS)(x∗,x)= sup(y,y∗)braceleftBig〈y,x∗〉+〈y∗,x〉−ιS(y)−ιran(B++NS)(B∗y +y∗)− 14angbracketleftbigB∗y +y∗, Φ(B++NS)(B∗y +y∗)angbracketrightbigbracerightBig= sup(y∈S,w∈S)braceleftBig〈y,x∗〉+〈B+w −B∗y +S⊥, x〉− 14〈B+w,w〉bracerightBig(7.18)= ιS(x) + sup(y∈S,w∈S)braceleftBig〈y,x∗〉+〈B+w −B∗y, x〉− 14〈B+w,w〉bracerightBig= ιS(x) + sup(y∈S, w∈S)braceleftBig〈y,x∗ −Bx〉+〈B+w,x〉− 14〈w,B+w〉bracerightBig= ιS(x) +ιS⊥(x∗ −Bx)+ supw∈SbraceleftBig〈B+w, x〉− 14〈w,B+w〉bracerightBig= ιS(x) +ιS⊥(x∗ −Bx)+ 12 supw∈SbraceleftBig〈w, 2B+x〉−qB(w)bracerightBig(7.19)= ιS(x) +ιS⊥(x∗ −Bx)+ 12 supw∈RnbraceleftBig〈w, 2B+x〉−qB(w)−ιS(w)bracerightBig= ιS(x) +ιS⊥(x∗ −Bx)+ 12(qB +ιS)∗(2B+x)= ιS(x) +ιS⊥(x∗ −Bx)+ιran(B++NS)(2B+x) (7.20)+ 14〈2B+x,Φ(B++NS)(2B+x)〉= ιS(x) +ιS⊥(x∗ −Bx)+〈x,Bx〉, (7.21)128Chapter 7. Proximal averages of monotone operators with linear graphsin which, (7.18) holds by B∗y + y∗ ∈ (B+ + NS)w = B+w + S⊥(w ∈S) by Fact 7.1.21, and by Lemma 7.2.4. (7.19) holds by Remark 2.1.12.(7.20) holds by Lemma 7.2.14, and (7.21) by Lemma 7.2.4 since 2x ∈ (B++NS)−1(2B+x) as x ∈ S by Fact 7.1.21. squaresolidRemark 7.2.19 Let S be a linear subspace of Rn. By Fact 7.2.13, ranNS =S⊥. Then by Proposition 7.2.18,F∗NS(x∗,x) = ιS(x)+ιS⊥(x∗), ∀(x,x∗) ∈Rn ×Rn.Corollary 7.2.20 Let S be a linear subspace of Rn. Then FNS is auto-conjugate.Proof. Combine Remark 7.2.16 and Remark 7.2.19. squaresolidRemark 7.2.21 Remark 7.2.16 and Remark 7.2.19 are special cases of [8,Example 3.1].Remark 7.2.22 Let B: Rn → Rn be linear and monotone, and S be alinear subspace of Rn such that A = B+NS. Suppose x ∈ S. Then 〈Ax,x〉 =〈x,Bx〉.Proof. Apply Ax = Bx+S⊥, which follows from Fact 7.1.21. squaresolidCorollary 7.2.23 Let A: Rn ⇉Rn be maximal monotone such that graAis a linear subspace. ThenF∗A(x∗,x) = ιdomA(x) +ι(domA)⊥(x∗ −APdomAx)+〈x,APdomAx〉,∀(x,x∗) ∈Rn ×Rn.129Chapter 7. Proximal averages of monotone operators with linear graphsProof. Let (x,x∗) ∈ Rn × Rn. By Corollary 7.1.31, there exists a linearand monotone operator B : Rn → Rn such that A = B + NdomA. Then byProposition 7.2.18,F∗A(x∗,x) = ιdomA(x) +ι(domA)⊥(x∗ −Bx)+〈x,Bx〉. (7.22)Suppose x ∈ domA. Since domA is a subspace of Rn andx∗ −Bx ∈ (domA)⊥ ⇔ x∗ −Bx+ (domA)⊥ ⊂ (domA)⊥.By Fact 7.1.21, x∗ −Bx+ (domA)⊥ = x∗ −Ax. 
Thusι(domA)⊥(x∗ −Bx) = ι(domA)⊥(x∗ −Ax) = ι(domA)⊥(x∗ −APdomAx).(7.23)By Remark 7.2.22,〈x,Bx〉 = 〈Ax,x〉 = 〈APdomAx,x〉. (7.24)Thus by (7.22), (7.23) and (7.24),F∗A(x∗,x) = ιdomA(x) +ι(domA)⊥(x∗ −APdomAx)+〈APdomAx,x〉,∀(x,x∗) ∈Rn ×Rn.squaresolid130Chapter 7. Proximal averages of monotone operators with linear graphs7.3 The third main resultLemma 7.3.1 Let B: Rn →Rn be linear and monotone, and S be a linearsubspace of Rn. Suppose that x ∈ S,x∗ ∈Rn and y∗ ∈Rn. Thenιran(B++NS)(B∗x +y∗)+ιS⊥(2x∗ −y∗ −Bx)= ιran(B++NS)(x∗ −Bx)+ιS⊥(2x∗ −y∗ −Bx). (7.25)Proof. We consider two cases.Case 1: 2x∗ −y∗ −Bx /∈ S⊥. Clear.Case 2: 2x∗−y∗−Bx ∈ S⊥. Let t ∈ S⊥ such that y∗ = 2x∗−Bx+t. ThusB∗x +y∗= B∗x+ 2x∗ −Bx+t = Bx+B∗x+ 2x∗ −Bx−Bx+t= 2x∗ −2Bx+ 2B+x+t. (7.26)On the other hand, since t ∈ S⊥, Fact 7.1.21 implies2B+x +t ∈ (B+ +NS)(2x). (7.27)Then by Remark 7.1.22, (7.26) and (7.27), we haveB∗x+y∗ ∈ ran(B+ +NS) ⇔ x∗ −Bx ∈ ran(B+ +NS). (7.28)Thus ιran(B++NS)(B∗x +y∗) = ιran(B++Ns)(x∗ −Bx).Hence (7.25) holds. squaresolid131Chapter 7. Proximal averages of monotone operators with linear graphsCorollary 7.3.2 Let B: Rn → Rn be linear and monotone, and S be alinear subspace of Rn. Suppose that x,x∗,y∗ ∈Rn. ThenιS(x) +ιran(B++NS)(B∗x+y∗) +ιS⊥(2x∗ −y∗ −Bx)= ιS(x) +ιran(B++NS)(x∗ −Bx) +ιS⊥(2x∗ −y∗ −Bx).Proof. Apply Lemma 7.3.1. squaresolidProposition 7.3.3 Let A: Rn ⇉Rn be maximal monotone such that graAis a linear subspace. ThenhFA(x,x∗)= ιdomA(x)+ιranA+(x∗ −APdomAx) +〈x,x∗〉+ 12angbracketleftbigx∗ −APdomAx, ΦA+(x∗ −APdomAx)angbracketrightbig, ∀(x,x∗) ∈Rn ×Rn.Proof. By Corollary 7.1.31, there exists a linear and monotone operatorB : Rn →Rn such thatA = B +NS,132Chapter 7. Proximal averages of monotone operators with linear graphswhere S = domA. Let (x,x∗) ∈Rn ×Rn.By Proposition 7.2.15 and Proposition 7.2.18,hFA(x,x∗)= infy∗braceleftBig12FA(x,2y∗) + 12F∗Aparenleftbig2(x∗ −y∗),xparenrightbigbracerightBig= infy∗braceleftBigιS(x) +ιran(B++NS)(B∗x+ 2y∗)+ 18angbracketleftbigB∗x+ 2y∗, Φ(B++NS)(B∗x+ 2y∗)angbracketrightbig+ιS⊥(2x∗ −2y∗ −Bx)+ 12〈x,Bx〉bracerightBig= ιS(x) + 12〈x,Bx〉+infy∗braceleftBigιran(B++NS)(B∗x+ 2y∗)+ 18angbracketleftbigB∗x+ 2y∗, Φ(B++NS)(B∗x+ 2y∗)angbracketrightbig+ιS⊥(2x∗ −2y∗ −Bx)bracerightBig= ιS(x) +ιran(B++NS)(x∗ −Bx)+ 12〈x,Bx〉 (7.29)+ infy∗braceleftBig18angbracketleftbigB∗x+ 2y∗, Φ(B++NS)(B∗x+ 2y∗)angbracketrightbig+ιS⊥(2x∗ −2y∗ −Bx)bracerightBig= ιS(x) +ιran(B++NS)(x∗ −Bx)+ 12〈x,Bx〉 (7.30)+ inft∈S⊥braceleftBig18angbracketleftbigB∗x+ 2x∗ −Bx+t, Φ(B++NS)(B∗x+ 2x∗ −Bx+t)angbracketrightbigbracerightBig,in which, (7.29) holds by Corollary 7.3.2, (7.30) by 2y∗ = 2x∗ −Bx+t, t ∈S⊥.If x /∈ S or x∗ −Bx /∈ ran(B+ +NS), hFA(x,x∗) = ∞ by (7.30).Now suppose x ∈ S = domA and x∗ − Bx ∈ ran(B+ + NS). Then thereexists y0 ∈ S such that x∗ − Bx ∈ (B+ + NS)y0. Thus by Fact 7.1.21,133Chapter 7. Proximal averages of monotone operators with linear graphsx∗ −Bx ∈ B+y0 +S⊥. Then〈B+x,y0〉 = 〈x,B+y0〉 = 〈x,x∗ −Bx〉 = 〈x,x∗〉−〈x,Bx〉. (7.31)Note thatB∗x+ 2x∗ −Bx+t = Bx+B∗x+ 2x∗ −Bx−Bx+t= 2x∗ −2Bx+ 2B+x+t. (7.32)By Fact 7.1.21,2B+x +t ∈ (B+ +NS)(2x). (7.33)Then by Remark 7.1.22, (7.32) and (7.33),B∗x+ 2x∗ −Bx+t ∈ (B+ +NS)(2y0 + 2x). (7.34)134Chapter 7. 
Proximal averages of monotone operators with linear graphsThen by (7.34), (7.31), (7.30) and Lemma 7.2.4,hFA(x,x∗)= 12〈x,Bx〉+ 18angbracketleftbigB+(2y0 + 2x), 2y0 + 2xangbracketrightbig= 12〈x,Bx〉+ 12angbracketleftbigB+(y0 +x),y0 +xangbracketrightbig= 12〈x,Bx〉+ 12〈B+y0,y0〉+〈B+x,y0〉+ 12〈B+x,x〉= 12〈x,Bx〉+ 12〈B+y0,y0〉+〈x,x∗〉−〈x,Bx〉+ 12〈B+x,x〉= 12〈B+y0,y0〉+〈x,x∗〉 (by Remark 2.1.12)= 〈x,x∗〉+ 12〈x∗ −Bx,(B+ +NS)−1(x∗ −Bx)〉= 〈x,x∗〉+ 12〈x∗ −Bx,(A+)−1(x∗ −Bx)〉 (by Remark 7.1.25)= 〈x,x∗〉+ 12angbracketleftbigx∗ −Ax, (A+)−1(x∗ −Ax)angbracketrightbig (by Lemma 7.2.5)= 〈x,x∗〉+ 12angbracketleftbigx∗ −Ax, ΦA+(x∗ −Ax)angbracketrightbig (by Lemma 7.2.5)= 〈x,x∗〉+ 12angbracketleftbigx∗ −APdomAx, ΦA+(x∗ −APdomAx)angbracketrightbig. (7.35)Thus combining (7.30) and (7.35),hFA(x,x∗)= ιdomA(x) +ιranA+(x∗ −Bx)+〈x,x∗〉 (by Remark 7.1.25)+ 12angbracketleftbigx∗ −APdomAx, ΦA+(x∗ −APdomAx)angbracketrightbig= ιdomA(x) +ιranA+(x∗ −APdomAx)+〈x,x∗〉 (by Lemma 7.2.5)+ 12angbracketleftbigx∗ −APdomAx, ΦA+(x∗ −APdomAx)angbracketrightbig, ∀(x,x∗) ∈Rn ×Rn.squaresolid135Chapter 7. Proximal averages of monotone operators with linear graphsRemark 7.3.4 Let S be a linear subspace of Rn and A: Rn → Rn be in-vertible. Then dimAS = dimS.Proposition 7.3.5 Let S be a linear subspace of Rn and A: Rn → Rn belinear and monotone. Suppose that AS ⊂ S. Then (Id+A)S = S.Proof. By assumptions, (Id+A)S is a linear subspace and(Id+A)S ⊂ S +AS ⊂ S. (7.36)Since (Id+A) isinvertible byProposition 5.2.6, byRemark7.3.4, dim(Id+A)S =dimS. Then by (7.36), (Id+A)S = S. squaresolidCorollary 7.3.6 Let S be a linear subspace of Rn and A: Rn → Rn belinear and monotone. Suppose that ranA ⊂ S. Then (Id+A)−1A+S ⊂ S.Proof. By Fact 2.1.17, ranA∗ = ranA ⊂ S. Thus A+S ⊂ ranA+ ⊂S. By Proposition 7.3.5, A+S ⊂ S = (Id+A)S. Then (Id+A)−1A+S ⊂(Id+A)−1(Id+A)S = S. squaresolidLemma 7.3.7 Let B: Rn →Rn be linear and monotone, and S be a linearsubspace of Rn. Suppose that x,y ∈ S and x∗,y∗ ∈Rn. Thenιran(B++NS)parenleftbigB∗(x +y)+x∗ +y∗parenrightbig+ιS⊥parenleftbigx∗ −y∗ −B(x−y)parenrightbig= ιran(B++NS)(x∗ −Bx)+ιS⊥parenleftbigx∗ −y∗ −B(x−y)parenrightbig. (7.37)Proof. We consider two cases.Case 1: x∗ −y∗ −B(x−y) /∈ S⊥. Clear.136Chapter 7. Proximal averages of monotone operators with linear graphsCase 2: x∗−y∗−B(x−y) ∈ S⊥. Let t ∈ S⊥ such that y∗ = x∗−B(x−y)+t.ThusB∗(x +y)+x∗ +y∗= B∗(x+y) +2x∗ −B(x−y) +t= B∗(x+y) +B(x+y)−B(x+y)+ 2x∗ −B(x−y)+t= 2B+(x +y)+t+ 2x∗ −2Bx. (7.38)On the other hand, since t ∈ S⊥, Fact 7.1.21 implies2B+(x +y)+t ∈ (B+ +NS)(2x + 2y). (7.39)Then by Remark 7.1.22, (7.39) and (7.38), we haveB∗(x +y)+x∗ +y∗ ∈ ran(B+ +NS) ⇔ x∗ −Bx ∈ ran(B+ +NS).Thus ιran(B++NS)parenleftbigB∗(x+y)+x∗+y∗parenrightbig = ιran(B++NS)(x∗−Bx). Hence (7.37)holds. squaresolidCorollary 7.3.8 Let B: Rn → Rn be linear and monotone, and S be alinear subspace of Rn. Suppose that x ∈Rn,y ∈ S and x∗,y∗ ∈Rn. ThenιS(x) +ιran(B++NS)parenleftbigB∗(x+y) +x∗ +y∗parenrightbig+ιS⊥parenleftbigx∗ −y∗ −B(x−y)parenrightbig= ιS(x) +ιran(B++NS)(x∗ −Bx)+ιS⊥parenleftbigx∗ −y∗ −B(x−y)parenrightbig.Proof. Apply Lemma 7.3.7. squaresolid137Chapter 7. Proximal averages of monotone operators with linear graphsTheorem 7.3.9 Let A: Rn ⇉Rn be maximal monotone such that graA isa linear subspace. ThenP(FA,F∗A⊺) = hFA.Proof. By Corollary 7.1.31,A = B +NS,where B = PSQAPS,S = domA. 
Let (x,x∗) ∈Rn ×Rn.By Fact 5.2.2, Fact 5.2.9, Proposition 7.2.15 and Proposition 7.2.18,P(FA,F∗A⊺)(x,x∗)= inf(y,y∗)braceleftBig12F(B+NS)(x+y,x∗ +y∗) + 12F∗⊺(B+NS)(x−y,x∗ −y∗)+ 12bardblybardbl2 + 12bardbly∗bardbl2bracerightBig= inf(y,y∗)braceleftBigιS(x +y)+ιran(B++NS)parenleftbigB∗(x +y)+x∗ +y∗parenrightbig+ιS(x−y)+ 18angbracketleftbigB∗(x+y)+x∗ +y∗, Φ(B++NS)parenleftbigB∗(x +y)+x∗ +y∗parenrightbigangbracketrightbig+ιS⊥parenleftbigx∗ −y∗ −B(x−y)parenrightbig+ 12angbracketleftbigx−y,B(x−y)angbracketrightbig+ 12bardblybardbl2+ 12bardbly∗bardbl2bracerightBig= ιS(x)+ infy∈S, y∗∈RnbraceleftBigιran(B++NS)parenleftbigB∗(x+y) +x∗ +y∗parenrightbig (7.40)+ 18angbracketleftbigB∗(x+y)+x∗ +y∗, Φ(B++NS)parenleftbigB∗(x +y)+x∗ +y∗parenrightbigangbracketrightbig+ιS⊥parenleftbigx∗ −y∗ −B(x−y)parenrightbig+ 12angbracketleftbigx−y,B(x−y)angbracketrightbig+ 12bardblybardbl2+ 12bardbly∗bardbl2bracerightBig,138Chapter 7. Proximal averages of monotone operators with linear graphswhere (7.40) holds by ιS(x+y) = 0,ιS(x−y) = 0 ⇔ x,y ∈ S.By (7.40) and Corollary 7.3.8,P(FA,F∗A⊺)(x,x∗)= ιS(x) +ιran(B++NS)(x∗ −Bx)+ infy∈S, y∗∈RnbraceleftBig18angbracketleftbigB∗(x+y) +x∗ +y∗, Φ(B++NS)parenleftbigB∗(x+y) +x∗ +y∗parenrightbigangbracketrightbig+ιS⊥parenleftbigx∗ −y∗ −B(x−y)parenrightbig+ 12angbracketleftbigx−y,B(x−y)angbracketrightbig+ 12bardblybardbl2+ 12bardbly∗bardbl2bracerightBig= ιS(x) +ιran(B++NS)(x∗ −Bx)+ infy∈S, t∈S⊥braceleftBig18angbracketleftbigC0, Φ(B++NS)(C0)angbracketrightbig(7.41)+ 12angbracketleftbigx−y,B(x−y)angbracketrightbig+ 12bardblybardbl2 + 12bardblx∗ −B(x−y) +tbardbl2bracerightBig,where (7.41) holds by y∗ = x∗ −B(x−y)+t, t ∈ S⊥, where C0 := B∗(x+y) + 2x∗ −B(x−y)+t (B∗(x+y) +x∗ +y∗ = C0).Suppose that x ∈ S and x∗ −Bx ∈ ran(B+ +NS). Then exists y0 ∈ S suchthat x∗−Bx ∈ (B++NS)y0. Thus by Fact 7.1.21, x∗−Bx ∈ (B++NS)y0 =B+y0 +S⊥. Thus〈x,Bx +B+y0〉 = 〈x,x∗〉 (by x ∈ S). (7.42)139Chapter 7. Proximal averages of monotone operators with linear graphsOn one hand,C0 = B∗(x+y) +B(x+y)−B(x+y)+ 2x∗ −B(x−y)+t= 2B+(x +y)+t+ 2x∗ −2Bx. (7.43)On the other hand, since t ∈ S⊥, Fact 7.1.21 implies2B+(x +y)+t ∈ (B+ +NS)(2x + 2y). (7.44)Then by Remark 7.1.22, (7.44) and (7.43)C0 ∈ (B+ +NS)(2x + 2y + 2y0). (7.45)Then by (7.45), (7.41) and Lemma 7.2.4,P(FA,F∗A⊺)(x,x∗)= infy∈S, t∈S⊥braceleftBig18angbracketleftbig2B+(x +y +y0), 2x + 2y + 2y0angbracketrightbig+ 12angbracketleftbigx−y,B(x−y)angbracketrightbig+ 12bardblybardbl2 + 12bardblx∗ −B(x−y)+tbardbl2bracerightBig= infy∈S, t′∈S⊥braceleftBig12angbracketleftbigB+(x+y +y0), x +y +y0angbracketrightbig+ 12angbracketleftbigx−y,B(x−y)angbracketrightbig+ 12bardblybardbl2 + 12bardblB+y0 +By +t′bardbl2bracerightBig≤ infy∈SbraceleftBig12〈B+(x+y +y0), x+y +y0〉+ 12〈x−y,B(x−y)〉+ 12bardblybardbl2 + 12bardblB+y0 +Bybardbl2bracerightBig.140Chapter 7. 
Note that (by Remark 2.1.12)

½〈B+(x + y + y0), x + y + y0〉 + ½〈x − y, B(x − y)〉
= ½〈B+(x + y0) + B+y, (x + y0) + y〉 + ½〈x − y, B+(x − y)〉
= ½〈B+(x + y0), x + y0〉 + 〈y, B+(x + y0)〉 + ½〈y, B+y〉 + ½〈x, B+x〉 + ½〈y, B+y〉 − 〈y, B+x〉
= ½〈B+(x + y0), x + y0〉 + 〈y, B+y0〉 + 〈y, B+y〉 + ½〈x, B+x〉
= ½〈B+x, x〉 + 〈B+y0, x〉 + ½〈B+y0, y0〉 + 〈y, B+y0〉 + 〈y, B+y〉 + ½〈x, B+x〉
= 〈x, Bx〉 + 〈B+y0, x〉 + ½〈B+y0, y0〉 + 〈y, B+y0〉 + 〈y, B+y〉
= 〈x, Bx + B+y0〉 + ½〈B+y0, y0〉 + 〈y, B+y0〉 + 〈y, B+y〉
= 〈x, x∗〉 + ½〈B+y0, y0〉 + 〈y, B+y0〉 + 〈y, B+y〉 (by (7.42)).

Thus

P(FA, F∗A⊺)(x,x∗) ≤ ½〈y0, B+y0〉 + 〈x, x∗〉 (7.46)
+ inf_{y∈S} { 〈y, B+y0〉 + 〈y, B+y〉 + ½‖y‖² + ½‖B+y0 + By‖² }.

Note that

〈y, B+y0〉 + 〈y, B+y〉 + ½‖y‖² + ½‖B+y0 + By‖²
= 〈y, B+y0〉 + 〈y, B+y〉 + ½‖y‖² + 〈y, B∗B+y0〉 + ½〈y, B∗By〉 + ½‖B+y0‖²
= 〈y, (Id + B∗)B+y0〉 + ½〈y, (B + B∗ + Id + B∗B)y〉 + ½‖B+y0‖²
= 〈y, (Id + B∗)B+y0〉 + ½〈y, (Id + B∗)(Id + B)y〉 + ½‖B+y0‖².

Then by (7.46), we have

P(FA, F∗A⊺)(x,x∗)
≤ ½〈y0, B+y0〉 + ½‖B+y0‖² + 〈x, x∗〉 (7.47)
+ inf_{y∈S} { 〈y, (Id + B∗)B+y0〉 + ½〈y, (Id + B∗)(Id + B)y〉 }
≤ ½〈y0, B+y0〉 + 〈x, x∗〉 + ½‖B+y0‖² − ½‖B+y0‖² (7.48)
= ½〈y0, B+y0〉 + 〈x, x∗〉
= 〈x, x∗〉 + ½〈x∗ − Bx, (B+ + NS)⁻¹(x∗ − Bx)〉 (7.49)
= 〈x, x∗〉 + ½〈x∗ − Ax, (A+)⁻¹(x∗ − Ax)〉 (7.50)
= 〈x, x∗〉 + ½〈x∗ − Ax, ΦA+(x∗ − Ax)〉 (by Lemma 7.2.5)
= 〈x, x∗〉 + ½〈x∗ − APdomA x, ΦA+(x∗ − APdomA x)〉, (7.51)

in which (7.48) holds by taking y = −(Id + B)⁻¹B+y0, which lies in S by Corollary 7.3.6; (7.49) holds by Lemma 7.2.4, and (7.50) by Remark 7.1.25 and Lemma 7.2.5.

Combining (7.41) and (7.51),

P(FA, F∗A⊺)(x,x∗)
≤ ιS(x) + ιran(B+ +NS)(x∗ − Bx) + 〈x, x∗〉 + ½〈x∗ − APdomA x, ΦA+(x∗ − APdomA x)〉
= ιdomA(x) + ιranA+(x∗ − APdomA x) + 〈x, x∗〉 (by Remark 7.1.25 and Lemma 7.2.5)
+ ½〈x∗ − APdomA x, ΦA+(x∗ − APdomA x)〉
= hFA(x,x∗) (by Proposition 7.3.3), ∀(x,x∗) ∈ Rn × Rn.

By Fact 5.2.11, Fact 5.2.12 and Proposition 5.1.8, P(FA, F∗A⊺) = hFA. ■

Corollary 7.3.10 Let A: Rn ⇉ Rn be maximal monotone such that gra A is a linear subspace. Then

P(FA, F∗A⊺) = hFA = hF∗A⊺.

Proof. Combine Theorem 7.3.9 and Proposition 5.3.13. ■

Theorem 7.3.11 Let B: Rn → Rn be linear and monotone, and let S be a linear subspace of Rn. Then

P(F(B+NS), F∗⊺(B+NS))(x,x∗) = hF(B+NS)(x,x∗) = hF∗⊺(B+NS)(x,x∗)
= ιS(x) + ιran(B+ +NS)(x∗ − Bx) + 〈x, x∗〉 + ½〈x∗ − Bx, Φ(B+ +NS)(x∗ − Bx)〉, ∀(x,x∗) ∈ Rn × Rn.

Proof. Let A = B + NS and let (x,x∗) ∈ Rn × Rn. By Fact 7.2.12 and Remark 7.1.22, A is maximal monotone with a linear graph. Thus by Proposition 7.3.3, Corollary 7.3.10, Remark 7.1.25 and Corollary 7.2.11,

P(F(B+NS), F∗⊺(B+NS))(x,x∗) = hF(B+NS)(x,x∗) = hF∗⊺(B+NS)(x,x∗)
= ιdomA(x) + ιranA+(x∗ − APdomA x) + 〈x, x∗〉 + ½〈x∗ − APdomA x, ΦA+(x∗ − APdomA x)〉
= ιS(x) + ιran(B+ +NS)(x∗ − Bx) + 〈x, x∗〉 + ½〈x∗ − Bx, Φ(B+ +NS)(x∗ − Bx)〉. ■
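To make the closed form above concrete, here is a small numerical illustration; it is not part of the thesis' development, and the matrix B is an ad hoc choice. In the special case S = Rn, so that NS contributes nothing, and with B+ positive definite, the indicator terms in Theorem 7.3.11 vanish and, reading Φ(B+ +NS) as (B+)⁻¹, the formula reduces to the convex quadratic h(x,x∗) = 〈x,x∗〉 + ½〈x∗ − Bx, (B+)⁻¹(x∗ − Bx)〉. Writing h as ½zᵀHz in the stacked variable z = (x,x∗), auto-conjugacy, h∗(x∗,x) = h(x,x∗), amounts to the matrix identity RH⁻¹R = H, where R swaps the two blocks. The sketch checks this identity and that h agrees with 〈x,x∗〉 on gra B.

    import numpy as np

    # Illustrative check of the reduced formula; B is an arbitrary monotone
    # matrix (its symmetric part diag(2, 1) is positive definite), not an
    # example taken from the thesis.
    B = np.array([[2.0, 1.0],
                  [-1.0, 1.0]])
    n = B.shape[0]
    I = np.eye(n)
    M = np.linalg.inv((B + B.T) / 2)          # (B_+)^{-1}

    # h(x, x*) = <x, x*> + (1/2) <x* - Bx, M (x* - Bx)> = (1/2) z^T H z,
    # where z = (x, x*) is stacked.
    H = np.block([[B.T @ M @ B, I - B.T @ M],
                  [I - M @ B,   M          ]])
    R = np.block([[np.zeros((n, n)), I],
                  [I, np.zeros((n, n))]])     # the swap (x, x*) -> (x*, x)

    # Auto-conjugacy: the conjugate of (1/2) z^T H z is (1/2) z^T H^{-1} z,
    # so h*(x*, x) = h(x, x*) is equivalent to R H^{-1} R = H.
    assert np.allclose(R @ np.linalg.inv(H) @ R, H)

    # h agrees with <x, x*> on the graph of B.
    x = np.array([1.0, -2.0])
    z = np.concatenate([x, B @ x])
    assert np.isclose(0.5 * z @ H @ z, x @ (B @ x))
    print("the reduced closed form is auto-conjugate and represents B")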
7.4 The Fitzpatrick function of the sum

Fact 7.4.1 Let A, B: Rn ⇉ Rn be monotone. Then

FA+B ≤ FA□2FB. (7.52)

Proof. See [8, Proposition 4.2]. ■

In (7.52), equality does not always hold; see [8, Example 4.7]. It would be interesting to characterize the pairs of monotone operators (A,B) that satisfy the identity FA+B = FA□2FB.

Lemma 7.4.2 Let B: Rn → Rn be linear and monotone, and let S be a linear subspace of Rn. Then F(B+NS) = FB□2FNS.

Proof. Let (x,x∗) ∈ Rn × Rn. By Fact 5.2.4 and Remark 7.2.16, we have

(FB□2FNS)(x,x∗)
= inf_{y∗} { FB(x,y∗) + FNS(x, x∗ − y∗) }
= inf_{y∗} { ιranB+(y∗ + B∗x) + ½q(B+)†(y∗ + B∗x) + ιS(x) + ιS⊥(x∗ − y∗) }
= ιS(x) + inf_{y∗} { ιranB+(y∗ + B∗x) + ½q(B+)†(y∗ + B∗x) + ιS⊥(x∗ − y∗) }
≤ ιS(x) + ιran(B+ +NS)(x∗ + B∗x) (7.53)
+ inf_{y∗} { ιranB+(y∗ + B∗x) + ½q(B+)†(y∗ + B∗x) + ιS⊥(x∗ − y∗) }.

Next we will show that (FB□2FNS)(x,x∗) ≤ F(B+NS)(x,x∗).

Now suppose that x ∈ S and x∗ + B∗x ∈ ran(B+ + NS). Then there exists y0 ∈ S such that

x∗ + B∗x ∈ (B+ + NS)y0. (7.54)

By Fact 7.1.21, there exists t ∈ S⊥ such that x∗ + B∗x = B+y0 + t. Let y∗0 = x∗ − t. Then, by x∗ + B∗x = B+y0 + t,

y∗0 + B∗x = x∗ + B∗x − t = B+y0. (7.55)

By (7.53), (7.55) and Lemma 7.2.4,

(FB□2FNS)(x,x∗)
≤ ιranB+(y∗0 + B∗x) + ½q(B+)†(y∗0 + B∗x) + ιS⊥(x∗ − y∗0)
= ½q(B+)†(B+y0)
= ½qB+(y0) (by Corollary 2.2.16)
= ¼〈B∗x + x∗, (B+ + NS)⁻¹(B∗x + x∗)〉 (by (7.54))
= ¼〈B∗x + x∗, Φ(B+ +NS)(B∗x + x∗)〉. (7.56)

Thus, combining (7.53) and (7.56),

(FB□2FNS)(x,x∗)
≤ ιS(x) + ιran(B+ +NS)(x∗ + B∗x) + ¼〈B∗x + x∗, Φ(B+ +NS)(B∗x + x∗)〉
= F(B+NS)(x,x∗) (by Proposition 7.2.15), ∀(x,x∗) ∈ Rn × Rn.

By Fact 7.4.1, F(B+NS) = FB□2FNS. ■

Fact 7.4.3 Let A, B: Rn → Rn be linear and monotone. Then F(A+B) = FA□2FB.

Proof. See [4, Corollary 5.7]. ■

Fact 7.4.4 Let S1, S2 be linear subspaces of Rn. Then F(NS1+NS2) = FNS1□2FNS2.

Proof. See [8, Example 4.4]. ■

Fact 7.4.5 Let S1, S2 be linear subspaces of Rn. Then NS1 + NS2 = NS1∩S2.

Proof. Clearly, dom(NS1 + NS2) = S1 ∩ S2 = dom NS1∩S2. Let x ∈ S1 ∩ S2. By Fact 7.1.21,

(NS1 + NS2)(x) = (S1)⊥ + (S2)⊥ = (S1 ∩ S2)⊥ (by [27, Exercises 3.17]) = NS1∩S2(x). ■
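Fact 7.4.3 is also easy to probe numerically when both symmetric parts are positive definite, since then the indicator term in the Fitzpatrick-function formula quoted in the proof of Lemma 7.4.2 disappears and FC(x,x∗) = ¼〈C∗x + x∗, (C+)⁻¹(C∗x + x∗)〉. The sketch below uses ad hoc matrices and points (not taken from the thesis; the helper name fitz is ours) and evaluates both sides of F(A+B) = FA□2FB at one point, computing the infimal convolution in the second variable by minimizing a strictly convex quadratic in y∗.

    import numpy as np

    # For a linear monotone C with positive definite symmetric part C_+,
    #     F_C(x, x*) = (1/4) <C^T x + x*, C_+^{-1} (C^T x + x*)>.
    def fitz(C, x, xs):
        w = C.T @ x + xs
        return 0.25 * w @ np.linalg.solve((C + C.T) / 2, w)

    A = np.array([[1.0, 0.5], [0.0, 1.0]])   # symmetric parts of A, B and
    B = np.array([[3.0, -1.0], [1.0, 2.0]])  # A + B are positive definite
    x = np.array([0.7, -1.3])
    xs = np.array([-0.4, 2.1])

    # Left-hand side of Fact 7.4.3: F_{A+B}(x, x*).
    lhs = fitz(A + B, x, xs)

    # Right-hand side: (F_A box2 F_B)(x, x*) = inf over y* of
    # F_A(x, y*) + F_B(x, x* - y*).  The objective is a strictly convex
    # quadratic in y*, so the infimum is attained where its gradient
    # vanishes; that stationarity condition is the linear system below.
    MA = np.linalg.inv((A + A.T) / 2)
    MB = np.linalg.inv((B + B.T) / 2)
    ys = np.linalg.solve(MA + MB, MB @ (xs + B.T @ x) - MA @ (A.T @ x))
    rhs = fitz(A, x, ys) + fitz(B, x, xs - ys)

    assert np.isclose(lhs, rhs)
    print("F_{A+B}(x, x*) =", lhs, "=", rhs, "= (F_A box2 F_B)(x, x*)")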
Proposition 7.4.6 Let B1, B2: Rn → Rn be linear and monotone, and let S1, S2 be linear subspaces of Rn such that A1 = B1 + NS1 and A2 = B2 + NS2. Then F(A1+A2) = FA1□2FA2.

Proof. Let (x,x∗) ∈ Rn × Rn. Then we have

(FA1□2FA2)(x,x∗)
= (F(B1+NS1)□2F(B2+NS2))(x,x∗)
= inf_{y∗+z∗=x∗} { F(B1+NS1)(x,y∗) + F(B2+NS2)(x,z∗) }
= inf_{y∗+z∗=x∗} { inf_{y∗1+y∗2=y∗} {FB1(x,y∗1) + FNS1(x,y∗2)} + inf_{z∗1+z∗2=z∗} {FB2(x,z∗1) + FNS2(x,z∗2)} } (by Lemma 7.4.2)
= inf_{y∗1+y∗2+z∗1+z∗2=x∗} { FB1(x,y∗1) + FNS1(x,y∗2) + FB2(x,z∗1) + FNS2(x,z∗2) }.

Thus

(FA1□2FA2)(x,x∗)
= inf_{u∗+v∗=x∗} { inf_{u∗1+u∗2=u∗} {FB1(x,u∗1) + FB2(x,u∗2)} + inf_{v∗1+v∗2=v∗} {FNS1(x,v∗1) + FNS2(x,v∗2)} }
= inf_{u∗+v∗=x∗} { F(B1+B2)(x,u∗) + F(NS1+NS2)(x,v∗) } (by Fact 7.4.3 and Fact 7.4.4)
= inf_{u∗+v∗=x∗} { F(B1+B2)(x,u∗) + FNS1∩S2(x,v∗) } (by Fact 7.4.5)
= F(B1+B2+NS1∩S2)(x,x∗) (by Lemma 7.4.2)
= F(B1+B2+NS1+NS2)(x,x∗) (by Fact 7.4.5)
= F(A1+A2)(x,x∗). ■

Corollary 7.4.7 Let A, B: Rn ⇉ Rn be maximal monotone such that gra A and gra B are linear subspaces. Then F(A+B) = FA□2FB.

Proof. By Corollary 7.1.31, there exist linear and monotone operators A1, B1: Rn → Rn such that A = A1 + NdomA and B = B1 + NdomB. Since dom A and dom B are subspaces of Rn, Proposition 7.4.6 yields F(A+B) = FA□2FB. ■

Remark 7.4.8 Corollary 7.4.7 generalizes the result of Bauschke, Borwein and Wang in [4].

Chapter 8

Future work

Our future work includes the following:

• Simplify some of the earlier technical proofs.

• Extend the main results to a Hilbert space and possibly to a (reflexive) Banach space.

• Since Asplund's decomposition of monotone operators is based on Zorn's Lemma, it would be very interesting to find a constructive proof.

Bibliography

[1] E. Asplund, “A monotone convergence theorem for sequences of nonlinear mappings,” Nonlinear Functional Analysis, Proceedings of Symposia in Pure Mathematics, American Mathematical Society, vol. XVIII Part 1, Chicago, pp. 1–9, 1970.

[2] S. Bartz, H. H. Bauschke, J. M. Borwein, S. Reich, and X. Wang, “Fitzpatrick functions, cyclic monotonicity, and Rockafellar's antiderivative,” Nonlinear Analysis, no. 5, pp. 1198–1223, 2007.

[3] H. H. Bauschke and J. M. Borwein, “Maximal monotonicity of dense type, local maximal monotonicity, and monotonicity of the conjugate are all the same for continuous linear operators,” Pacific Journal of Mathematics, vol. 189, pp. 1–20, 1999.

[4] H. H. Bauschke, J. M. Borwein, and X. Wang, “Fitzpatrick functions and continuous linear monotone operators,” SIAM Journal on Optimization, vol. 18, pp. 789–809, 2007.

[5] H. H. Bauschke, Y. Lucet, and M. Trienis, “How to transform one convex function continuously into another,” SIAM Review, in press, 2008.

[6] H. H. Bauschke, E. Matouskova, and S. Reich, “Projection and proximal point methods: convergence results and counterexamples,” Nonlinear Analysis, vol. 56, pp. 715–739, 2004.

[7] H. H. Bauschke, R. G. Goebel, Y. Lucet, and X. Wang, “The proximal average: basic theory,” submitted.

[8] H. H. Bauschke, D. A. McLaren, and H. S. Sendov, “Fitzpatrick functions: inequalities, examples and remarks on a problem by S. Fitzpatrick,” Journal of Convex Analysis, 13(3+4), pp. 499–523, 2006.

[9] H. H. Bauschke and X. Wang, “The kernel average for two convex functions and its applications to the extension and representation of monotone operators,” to appear in Transactions of the American Mathematical Society, 2007.

[10] H. H. Bauschke and X. Wang, “A convex-analytical approach to extension results for n-cyclically monotone operators,” Set-Valued Analysis, no. 3, pp. 297–306, 2007.

[11] J. M. Borwein, “Adjoint process duality,” Mathematics of Operations Research, vol. 8, no. 3, pp. 403–434, 1983.

[12] J. M. Borwein and H. Wiersma, “Asplund decomposition of monotone operators,” SIAM Journal on Optimization, vol. 18, no. 3, pp. 946–960, 2007.
[13] J. M. Borwein and A. S. Lewis, “Convex Analysis and Nonlinear Optimization,” Springer-Verlag, 2000.

[14] D. Butnariu and G. Kassay, “A proximal-projection method for finding zeros of set-valued operators,” submitted.

[15] F. H. Clarke, “Optimization and Nonsmooth Analysis,” Wiley, New York, 1983.

[16] R. Cross, “Multivalued Linear Operators,” Marcel Dekker, Inc., New York, 1998.

[17] N. Ghoussoub, “Maximal monotone operators are selfdual vector fields and vice-versa,” preprint, 2006.

[18] S. Fitzpatrick, “Representing monotone operators by convex functions,” Workshop/Miniconference on Functional Analysis and Optimization (Canberra 1988), Proceedings of the Centre for Mathematical Analysis, Australian National University, vol. 20, Canberra, Australia, pp. 59–65, 1988.

[19] C. W. Groetsch, “Generalized Inverses of Linear Operators,” Marcel Dekker, Inc., New York, 1977.

[20] D. G. Luenberger, “Optimization by Vector Space Methods,” John Wiley & Sons, Inc., 2000.

[21] C. D. Meyer, “Matrix Analysis and Applied Linear Algebra,” SIAM, New York, 2000.

[22] R. R. Phelps, “Convex Functions, Monotone Operators and Differentiability,” Lecture Notes in Mathematics, Springer-Verlag, Berlin Heidelberg, 1993.

[23] R. R. Phelps and S. Simons, “Unbounded linear monotone operators on nonreflexive Banach spaces,” Journal of Convex Analysis, vol. 5, pp. 303–328, 1998.

[24] J. P. Penot and C. Zălinescu, “Some problems about the representation of monotone operators by convex functions,” The Australian New Zealand Industrial and Applied Mathematics Journal, vol. 47, no. 1, pp. 1–20, 2005.

[25] R. T. Rockafellar, “Convex Analysis,” Princeton University Press, 1970.

[26] R. T. Rockafellar and R. J. B. Wets, “Variational Analysis,” Springer-Verlag, 1998.

[27] B. P. Rynne and M. A. Youngson, “Linear Functional Analysis,” London, 2000.

[28] S. Simons, “Minimax and Monotonicity,” Springer, New York, 1998.

[29] S. Simons and C. Zălinescu, “Fenchel duality, Fitzpatrick functions and maximal monotonicity,” Journal of Nonlinear and Convex Analysis, vol. 6, pp. 1–22, 2005.

[30] M. D. Voisei, “The sum theorem for linear maximal monotone operators,” Mathematical Sciences Research Journal, vol. 10, no. 4, pp. 83–85, 2006.

[31] C. Zălinescu, “Convex Analysis in General Vector Spaces,” World Scientific Publishing, 2002.
