Decompositions and representations of monotone operators with linear graphs by Liangjin Yao M.Sc., Yunnan University, 2006 A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in The College of Graduate Studies (Interdisciplinary) The University Of British Columbia Okanagan December, 2007 c© Liangjin Yao 2007 Abstract We consider the decomposition of a maximal monotone operator into the sum of an antisymmetric operator and the subdifferential of a proper lower semicontinuous convex function. This is a variant of the well-known decom- position of a matrix into its symmetric and antisymmetric part. We analyze in detail the case when the graph of the operator is a linear subspace. Equiv- alent conditions of monotonicity are also provided. We obtain several new results on auto-conjugate representations includ- ing an explicit formula that is built upon the proximal average of the as- sociated Fitzpatrick function and its Fenchel conjugate. These results are new and they both extend and complement recent work by Penot, Simons and Zălinescu. A nonlinear example shows the importance of the linearity assumption. Finally, we consider the problem of computing the Fitzpatrick function of the sum, generalizing a recent result by Bauschke, Borwein and Wang on matrices to linear relations. ii Table of Contents Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii Table of Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . vi 1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 2 Auxiliary results . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.2 Properties of A† . . . . . . . . . . . . . . . . . . . . . . . . . 13 3 Inverse of linear monotone operators . . . . . . . . . . . . . 21 3.1 Borwein-Wiersma decomposition . . . . . . . . . . . . . . . . 21 3.2 Asplund decomposition . . . . . . . . . . . . . . . . . . . . . 22 3.3 The Borwein-Wiersma decomposition of the inverse . . . . . 23 4 Monotone operators with linear graphs . . . . . . . . . . . . 27 4.1 Linear graph . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 4.2 Maximal monotonicity identification . . . . . . . . . . . . . . 31 iii Table of Contents 4.3 Constructing a good selection . . . . . . . . . . . . . . . . . 40 4.4 The first main result . . . . . . . . . . . . . . . . . . . . . . . 45 4.5 Monotonicity of operators with linear graphs . . . . . . . . . 49 5 Auto-conjugates . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 5.1 Auto-conjugate representation . . . . . . . . . . . . . . . . . 53 5.2 The Fitzpatrick function and the proximal average . . . . . . 61 5.3 The second main result . . . . . . . . . . . . . . . . . . . . . 65 5.4 An example . . . . . . . . . . . . . . . . . . . . . . . . . . . 79 5.5 Relationship among auto-conjugate representations . . . . . 83 5.6 Nonuniqueness . . . . . . . . . . . . . . . . . . . . . . . . . . 89 6 Calculation of the auto-conjugates of ∂(− ln) . . . . . . . . . 100 6.1 Fitzpatrick function for ∂(− ln) . . . . . . . . . . . . . . . . . 101 6.2 Proximal average of ∂(− ln) and hF∂f . . . . . . . . . . . . . 102 7 Proximal averages of monotone operators with linear graphs 108 7.1 Adjoint process of operators with linear graphs . . . . . . . . 
108 7.2 Fitzpatrick functions of monotone operators with linear graphs 117 7.3 The third main result . . . . . . . . . . . . . . . . . . . . . . 131 7.4 The Fitzpatrick function of the sum . . . . . . . . . . . . . 144 8 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151 iv List of Figures 5.1 The function f (blue) and z = xy (gold), and their intersec- tion line (cyan), gra Id (red). . . . . . . . . . . . . . . . . . . 93 5.2 The function f (blue) and z = xy (gold), and their intersec- tion line (cyan), gra Id (red) . . . . . . . . . . . . . . . . . . . 94 5.3 The function f (blue) and z = xy (gold) , and their intersec- tion line (cyan), gra Id (red), where p = 4. . . . . . . . . . . . 97 6.1 The function hF∂f . . . . . . . . . . . . . . . . . . . . . . . . . 105 v Acknowledgements I sincerely thank Heinz Bauschke and Shawn Wang for giving me the op- portunity to study with them under their supervision at UBC Okanagan. They have both given me a lot of help and encouragement. I would like to thank Philip Loewen and Yves Lucet for their many pertinent comments on my thesis. I would also like to thank the many people in the department who have made UBCO into a pleasant environment. vi Chapter 1 Introduction This thesis addresses two issues: decompositions of a monotone operator with a linear graph, and auto-conjugate representations of a monotone op- erator with a linear graph. It is well known that every matrix A in Rn×n can be decomposed into the sum of a symmetric matrix and an antisymmetric matrix by A = A+A ⊺ 2 + A−A⊺ 2 , where A+A ⊺ 2 is a gradient of a quadratic function. Our goal is to decompose more general mappings, namely maximal monotone operators. Both pos- itive semidefinite matrices and gradients of convex functions are maximal monotone. At present there are two famous decompositions: Asplund decomposi- tion and Borwein-Wiersma decomposition. In 1970, Asplund decomposition was introduced by E. Asplund who showed that a maximal monotone and at most single-valued operator A with int dom A 6= ∅ is Asplund decompos- able. In 2006, J. Borwein and H. Wiersma introduced the Borwein-Wiersma decomposition in [12], which is more restrictive. Borwein-Wiersma verified that a maximal monotone operator that is Borwein-Wiersma decomposable 1 Chapter 1. Introduction is also Asplund decomposable in finite dimensional spaces. One goal of our thesis is to show that maximal monotone operators with linear graphs are Borwein-Wiersma decomposable. The idea of representing a set by nice functions is classical. For example, for a closed set C, one can define the distance function dC(x) := inf c∈C { ‖x− c‖ } . Then dC is 1-Lipschitz and C = { x | dC(x) = 0 } . One can also define the lower semicontinuous function ιC where ιC(x) = 0 if x ∈ C and +∞ otherwise, then C = { x | ιC(x) = 0 } . In the later part of this thesis, we want to find auto-conjugate repre- sentations for a maximal monotone operator with a linear graph. An auto- conjugate function is a convex function. One very important result provided by Penot, Simons and Zǎlinescu shows that an auto-conjugate f represents an induced maximal monotone operator A = G(f). In order to create auto-conjugate representations, we introduce the Fitz- patrick function and the proximal average. In 1988, Simon Fitzpatrick defined a new convex function FA in [18] which is called the Fitzpatrick function associated with the monotone operator A. 
Recently, Fitzpatrick 2 Chapter 1. Introduction functions have turned out to be a very useful tool in the study of maximal monotone operators, see [2, 4, 8, 9, 18]. The proximal average was first in- troduced in [6] in the context of fixed point theory. In its simplest form, the proximal average is denoted here by P (f0, f1), where f0 and f1 are proper lower semicontinuous and convex functions. The recent works in [5, 7, 9] give numerous properties that are very attractive to Convex Analysts. Now we come back to our question. In [24], Penot and Zǎlinescu showed that a maximal monotone operator A can be represented by an auto-conjugate function hFA , using a partial epigraphical sum. In [9], Bauschke and Wang showed that P (FA, F ∗ A ⊺) is an auto-conjugate representation for a maxi- mal monotone operator A. Until now there has been no clear formula for P (FA, F ∗ A ⊺), even if A is a linear, continuous and monotone operator. In this thesis, we give an explicit formula for P (FA, F ∗ A ⊺) associated with a maximal monotone operator A with a linear graph. We find that P (FA, F ∗ A ⊺) = hFA . This is a new result. The thesis is organized as follows. Chapter 2 contains some auxiliary and basic results on monotone oper- ators, subdifferentials and Moore-Penrose inverses. In Chapter 3, it is shown that the inverse of a linear and monotone operator is Borwein-Wiersma decomposable. Chapter 4 contains our first main result: A maximal monotone operator with a linear graph is Borwein-Wiersma decomposable. In addition, the remainder of this chapter gives some equivalent conditions of monotonicity of operators with linear graphs. Chapter 5 discusses auto-conjugate representations. We give an explicit 3 Chapter 1. Introduction formula for P (FA, F ∗ A ⊺) associated with a linear and monotone operator A, which is our second main result. Furthermore, we show that P (FA, F ∗ A ⊺) = hFA . In Chapter 6, we give a specific example of a nonlinear monotone oper- ator: ∂(− ln) such that P (F∂(− ln), F ∗⊺ ∂(− ln)) 6= hF∂(− ln) . This illustrates the necessity of the linearity assumption. Finally, in Chapter 7 we extend auto-conjugate representation results from linear and monotone operators to monotone operators with linear graphs. Here we also discuss one open question: Expressing FA+B in terms of FA and FB . We show that FA+B = FA2FB (Here 2 means the inf convolution for the second variable). This generalizes one of the results provided by Bauschke, Borwein and Wang in [4]. Throughout this thesis, X denotes a Hilbert space with inner product 〈·, ·〉, norm ‖ · ‖, and Id is the identity mapping in X. The unit ball is B = { x ∈ X | ‖x‖ ≤ 1 } . We further set R+ = { x ∈ R | x ≥ 0 } , R− = { x ∈ R | x ≤ 0 } , R++ = { x ∈ R | x > 0 } , R−− = { x ∈ R | x < 0 } . For a subset C ⊂ X , the closure of C is denoted by C. The arrow “→” is used for a single-valued mapping, whereas “⇉” denotes a set-valued mapping. 4 Chapter 2 Auxiliary results 2.1 Definitions We first introduce some fundamental definitions. Definition 2.1.1 Let T : X ⇉ X. We say T is monotone if 〈x∗ − y∗, x− y〉 ≥ 0, whenever x∗ ∈ T (x), y∗ ∈ T (y). Definition 2.1.2 Let A : X → X. We say A is positive semidefinite if 〈x,Ax〉 ≥ 0, ∀x ∈ X. Example 2.1.3 Let A : X → X be linear. Then A is monotone, if and only if, A is positive semidefinite. Proof. “⇒” For every x ∈ X, by monotonicity and linearity of S we have 〈Ax, x〉 = 〈Ax−A0, x − 0〉 ≥ 0. (2.1) (2.1) holds by A0 = 0 (since A is linear). 
“⇐” For every x, y ∈ X, since A is positive semidefinite, we have 5 Chapter 2. Auxiliary results 〈 Ax−Ay, x− y 〉 = 〈 A(x− y), x− y 〉 ≥ 0. (2.2)  Definition 2.1.4 Let A : X → X be linear. We define qA by qA(x) = 1 2 〈x,Ax〉, ∀x ∈ X. (2.3) Definition 2.1.5 Let A : X → X be linear and continuous. Then A∗ is the unique linear and continuous operator satisfying 〈Ax, y〉 = 〈x, A∗y〉, ∀x, y ∈ X. (2.4) Remark 2.1.6 Let A : Rn → Rn be linear. Then A∗ = A⊺. Definition 2.1.7 Let A : X → X be linear and continuous. We say that A is symmetric if A∗ = A. Remark 2.1.8 Let A : X → X be linear and continuous. Then A is mono- tone ⇔ A∗ is monotone. Definition 2.1.9 Let A : X → X be linear and continuous. We say that A is antisymmetric if A∗ = −A. Remark 2.1.10 Let A : X → X be linear and continuous. Then A+A ∗ 2 is symmetric and A−A ∗ 2 is antisymmetric. 6 Chapter 2. Auxiliary results Definition 2.1.11 (Symmetric and antisymmetric part) Let A : X → X be linear and continuous. Then A+ = 1 2A + 1 2A ∗ is the symmetric part of A, and A◦ = A−A+ = 1 2A− 1 2A ∗ is the antisymmetric part of A. Remark 2.1.12 Let A : X → X be linear and continuous. Then qA = qA+. Proof. Let x ∈ X. 2qA+(x) = 〈A+x, x〉 = 〈 A∗+A 2 x, x〉 = 〈A ∗x 2 , x〉+ 〈 Ax 2 , x〉 = 〈 x 2 , Ax〉 + 〈 Ax 2 , x〉 = 〈Ax, x〉 = 2qA(x).  Here is a basic property. Fact 2.1.13 Let A : X → X be linear and continuous. Then A is mono- tone, if and only if, A+ is monotone. Proof. By Example 2.1.3 and Remark 2.1.12.  Definition 2.1.14 Let T : X ⇉ X be monotone. We call T maximal monotone if for every (y, y∗) /∈ graT there exists (x, x∗) ∈ graT with 〈x− y, x∗ − y∗〉 < 0. Fact 2.1.15 Let A : X ⇉ X be maximal monotone and (x0, x ∗ 0) ∈ X ×X. Let à : X ⇉ X such that gra à = graA− (x0, x ∗ 0) ( i.e., a rigid translation of graA). Then à is maximal monotone. Proof. Follows directly from Definition 2.1.14.  7 Chapter 2. Auxiliary results Example 2.1.16 Every continuous monotone operator A : X → X is max- imal monotone. Proof. [26, Example 12.7].  Let us introduce an essential result that will be used often. Fact 2.1.17 Let A : X → X be linear, continuous and monotone. Then kerA = kerA∗ and ranA = ranA∗. Proof. See [4, Proposition 3.1].  Fact 2.1.18 Let A : X → X be linear and continuous. Then qA is convex⇔ A is monotone⇔ qA(x) ≥ 0, x ∈ X, and ∇qA = A+. Proof. See [3, Theorem 3.6(i)].  Definition 2.1.19 Let f : X → ]−∞,+∞]. We say f is proper lower semicontinuous and convex if f(x0) < +∞, ∃ x0 ∈ X, f(λx+ (1− λ)y) ≤ λf(x) + (1− λ)f(y), ∀λ ∈ ]0, 1[, ∀x, y ∈ X, lim inf y→x f(y) = limδ→0+ inf f(x+ δB) ≥ f(x), ∀x ∈ X. Definition 2.1.20 Let f : X → ]−∞,+∞] be proper lower semicontinuous 8 Chapter 2. Auxiliary results and convex. The subdifferential mapping ∂f : X ⇉ X is defined by x 7→ ∂f(x) := { x∗ | 〈x∗, y − x〉+ f(x) ≤ f(y), ∀y } . One of motivations for studying monotone operators comes from the following Fact. Fact 2.1.21 (Rockafellar) Let f : X → ]−∞,+∞] be proper lower semi- continuous and convex. Then ∂f is maximal monotone. Proof. See [28, page 113] or [31, Theorem 3.28].  Fact 2.1.22 Let A : X → X be linear and monotone. Then A is maximal monotone and continuous. Proof. See [23, Corollary 2.6 and Proposition 3.2.h].  Definition 2.1.23 For a set S ⊂ X, ιS : X → ]−∞,+∞] stands for the indicator function defined by ιS(x) :=   0, if x ∈ S; +∞, otherwise. Fact 2.1.24 Suppose that S is a nonempty convex subset of X. Then ιS is proper lower semicontinuous and convex, if and only if, S is closed. Proof. See [22, Example.(a)].  
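In the matrix case (where A∗ = A⊺ by Remark 2.1.6), the decomposition of Definition 2.1.11 and the monotonicity test of Example 2.1.3 and Fact 2.1.13 can be checked directly. The following is a minimal numerical sketch, assuming NumPy; the matrix A below is only an illustrative choice and does not come from the text.

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [0.0, 3.0]])            # an arbitrary test matrix

    A_plus = (A + A.T) / 2                # symmetric part A_+ (Definition 2.1.11)
    A_circ = (A - A.T) / 2                # antisymmetric part A_o
    assert np.allclose(A, A_plus + A_circ)
    assert np.allclose(A_circ.T, -A_circ)

    # A is monotone  <=>  A_+ is positive semidefinite (Example 2.1.3, Fact 2.1.13);
    # moreover grad q_A = A_+ (Fact 2.1.18).
    eigenvalues = np.linalg.eigvalsh(A_plus)
    print("A monotone:", bool(np.all(eigenvalues >= -1e-12)))
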
Definition 2.1.25 The space ℓ2 consists of all sequences of real numbers 9 Chapter 2. Auxiliary results (ξ1, ξ2, . . .) for which ‖(ξ1, ξ2, . . .)‖2 <∞, where ‖(ξ1, ξ2, . . .)‖2 := ( ∞∑ i=1 |ξi| 2) 1 2 , and where 〈ξ, γ〉 = ∞∑ i=1 〈ξi, γi〉, ∀ξ = (ξi) ∞ i=1, γ = (γi) ∞ i=1 ∈ ℓ 2. Fact 2.1.26 (ℓ2, ‖ · ‖2) is a Hilbert space. Proof. See [27, Example 3.24].  Example 2.1.27 Let X be (ℓ2, ‖ · ‖2) space and A : X → X : (xn) ∞ n=1 7→ (xn n )∞n=1. Then A is maximal monotone and continuous. Proof. Clearly, A is linear. Now we show A is monotone. Let x = (xn) ∞ n=1 ∈ X. Then 〈x, Ax〉 = ∞∑ n=1 x2n n ≥ 0. By Example 2.1.3, A is monotone. By Fact 2.1.22, A is maximal monotone and continuous.  Lemma 2.1.28 Let S be a linear subspace of X. Suppose x ∈ X and α ∈ R satisfy 〈x, s〉 ≤ α,∀s ∈ S. Then x⊥S. 10 Chapter 2. Auxiliary results Proof. Let s ∈ S. By assumption, we have 〈x, ks〉 ≤ α, ∀k ∈ R ⇒〈x, s〉 ≤ 0, if k > 0 〈x, s〉 ≥ 0, if k < 0 ⇒ 〈x, s〉 = 0, ∀s ∈ S ⇒ x⊥S.  Fact 2.1.29 Suppose that S is a closed linear subspace of X. Then ∂ιS(x) = S ⊥, ∀x ∈ S. Proof. Let x ∈ S. We have x∗ ∈ ∂ιS(x)⇔ 〈x∗, s− x〉 ≤ ιS(s)− ιS(x), ∀s ∈ X ⇔ 〈x∗, s− x〉 ≤ 0, ∀s ∈ S ⇔ x⊥S (by Lemma 2.1.28).  Fact 2.1.30 Let f, g : X → ]−∞,+∞] be proper lower semicontinuous and convex. Suppose that f is differentiable everywhere. Then ∂(f + g)(x) = ∇f(x) + ∂g(x), ∀x ∈ dom g. Proof. See [22, Theorem 3.23].  Example 2.1.31 Suppose that j(x) = 12‖x‖ 2, ∀x ∈ X and S ⊂ X is a 11 Chapter 2. Auxiliary results closed subspace. Then ∂(j + ιS) is maximal monotone. In particular, ∂(j + ιS)(x) = x+ S ⊥, ∀x ∈ S. Proof. By Fact 2.1.24, ιS is proper lower semicontinuous and convex. Hence j + ιS is proper lower semicontinuous and convex. By Fact 2.1.21, ∂(j + ιS) is maximal monotone. Let x ∈ S. By Fact 2.1.30, Fact 2.1.18 and Fact 2.1.29, ∂(j + ιS)(x) = ∇j(x) + ∂ιS(x) = ∇qId(x) + ∂ιS(x) = x+ S ⊥.  Fact 2.1.32 Let A : X → X be linear and continuous such that ranA is closed. Then ∂ιranA(x) = (ranA) ⊥ = kerA∗, ∀x ∈ ranA. Proof. Let x ∈ ranA. By Fact 2.1.29, ∂ιranA(x) = (ranA) ⊥. Now we show that (ranA)⊥ = kerA∗. We have x∗ ∈ (ranA)⊥ ⇔ 〈x∗, Ax〉 = 0, ∀x ∈ X ⇔ 〈A∗x∗, x〉 = 0, ∀x ∈ X ⇔ A∗x∗ = 0⇔ x∗ ∈ kerA∗.  Definition 2.1.33 Let A : X → X. The set-valued inverse mapping, A−1 : X ⇉ X, is defined by x ∈ A−1y ⇔ Ax = y. The following is the definition of the Moore-Penrose inverse, which will play an important role in our Theorems. 12 Chapter 2. Auxiliary results Definition 2.1.34 Let A : X → X be linear and continuous such that ranA is closed. The Moore-Penrose inverse of A, denoted by A†, is defined by A†b = argminA∗Au=A∗b‖u‖, ∀b ∈ X. In the following we always let A† stand for the Moore-Penrose inverse of a linear and continuous operator A. Remark 2.1.35 Let A : X → X be linear and continuous such that ranA is closed. Then by [19, Theorem 2.1.1], A†y ∈ A−1y, ∀y ∈ ranA. In particular, if A is bijective, then A† = A−1. 2.2 Properties of A† By the Remark above, we know that A† |ranA is a selection for A−1. This raises some questions: What is the relationship between them? If one of them is monotone, can we deduce that the other one is also monotone? Fact 2.2.1 Let A : X → X be linear and continuous. Then ranA is closed, if and only if, ranA∗ is closed. Proof. See [19, Theorem 1.2.4].  13 Chapter 2. Auxiliary results Fact 2.2.2 Let A : X → X be linear and continuous such that ranA is closed. Then A† is linear and continuous. Proof. See [19, Corollary 2.1.3].  Fact 2.2.3 Let A : X → X be linear and continuous such that ranA is closed. Then ranA† = ranA∗. Proof. 
See [19, Theorem 2.1.2].  Fact 2.2.4 Let A : X → X be linear and continuous such that ranA is closed. Then (A†)∗ = (A∗)†. Proof. See [19, Exercise 11 on page 111] and [21, Exercise 5.12.16 on page 428].  Proposition 2.2.5 Let A : X → X be linear, continuous and monotone such that ranA is closed. Then ranA = ranA∗ = ranA† = ran(A†)∗, kerA = kerA∗ = ker(A†) = ker(A†)∗. Proof. By Fact 2.1.17 and Fact 2.2.1, ranA = ranA∗. (2.5) By Fact 2.2.3 and (2.5), we have ranA = ranA∗ = ranA†. (2.6) 14 Chapter 2. Auxiliary results By Fact 2.2.1, ranA∗ is closed. By Remark 2.1.8, A∗ is monotone. Apply (2.6) with replacing A by A∗, we have ranA∗ = ranA∗∗ = ran(A∗)†. (2.7) By Fact 2.2.4 and (2.6), we have ranA∗ = ranA = ran(A†)∗ = ranA†. Then (ranA∗)⊥ = (ranA)⊥ = ( ran(A†)∗ )⊥ = (ranA†)⊥, thus by Fact 2.1.32, kerA = kerA∗ = ker(A†) = ker(A†)∗ .  Proposition 2.2.6 Let A : X → X be linear. Suppose y ∈ ranA. Then A−1y = y∗ + kerA, ∀y∗ ∈ A−1y. Proof. Let y∗ ∈ A−1y and z∗ ∈ kerA. Then Ay∗ = y and A(y∗ + z∗) = Ay∗ +Az∗ = y + 0 = y. Thus y∗ + z∗ ∈ A−1y. Hence y∗ + kerA ⊂ A−1y. On the other hand, let y∗1 ∈ A −1y. Then Ay∗1 = y and for each y ∗ ∈ A−1y, A(y∗1 − y ∗) = Ay∗1 −Ay ∗ = y − y = 0. Thus y∗1 − y ∗ ∈ kerA, i.e., y∗1 ∈ y ∗ + kerA. Then A−1y ⊂ y∗ + kerA.  Corollary 2.2.7 Let A : X → X be linear and continuous such that ranA is closed. Then A−1y = A†y + kerA, ∀y ∈ ranA. 15 Chapter 2. Auxiliary results Proof. By Remark 2.1.35 and Proposition 2.2.6.  In order to further illustrate the relationship between A† and A, we introduce the concept of Projector. Fact 2.2.8 Let M be a closed subspace of X. For every vector x ∈ X, there is a unique vector m0 ∈ M such that ‖x − m0‖ ≤ ‖x − m‖ for all m ∈M . Furthermore, a necessary and sufficient condition that m0 ∈M be the unique minimizing vector is that x−m0 be orthogonal to M . Proof. See [20, Theorem 2 on page 51].  The Fact above ensures that the following mapping is well defined. Definition 2.2.9 (Projector) Let M be a closed subspace of X. The Pro- jector, PM : X →M , is defined by PMx = argminm∈M‖x−m‖ , x ∈ X. (2.8) Here is a result that will be very helpful for our problems. Proposition 2.2.10 Let A : X → X be linear and monotone such that ranA is closed. Then qA† = qA†PranA. 16 Chapter 2. Auxiliary results Proof. Let x ∈ X. Then we have 2qA†(x) = 〈 x, A†x 〉 (2.9) = 〈 PranAx+ PkerAx, A †(PranAx+ PkerAx) 〉 (2.10) = 〈 PranAx+ PkerAx, A †(PranAx) 〉 (2.11) = 〈 PranAx, A †(PranAx) 〉 + 〈 PkerAx, A †(PranAx) 〉 (2.12) = 〈 PranAx, A †(PranAx) 〉 + 〈 PranAx, (A †)∗(PkerAx) 〉 (2.13) = 〈 PranAx, A †(PranAx) 〉 (2.14) = 2qA†(PranAx). (2.15) Note that (2.10) holds sinceX = ranA⊕kerA by Fact 2.1.32 and Fact 2.1.17. (2.11) holds since PkerAx ∈ kerA = kerA † by Proposition 2.2.5. (2.14) holds by (A†)∗(PkerAx) = 0, since ker(A†)∗ = kerA by Proposi- tion 2.2.5.  Fact 2.2.11 Let A : X → X be linear and continuous such that ranA is closed. Then AA† = PranA. Proof. See [19, Theorem 2.2.2].  Corollary 2.2.12 Let A : X → X be linear, continuous and monotone such that ranA is closed. Then A† is monotone. Proof. Since A† is linear and continuous by Fact 2.2.2, by Fact 2.1.18 it suffices to show that qA†(x) ≥ 0,∀x ∈ X. 17 Chapter 2. Auxiliary results Let x ∈ X and y = A†(PranAx). Then Ay = AA†(PranAx) = PranAx by Fact 2.2.11. By Proposition 2.2.10, we have 2qA†(x) = 2qA†(PranAx) (2.16) = 〈A†(PranAx), PranAx〉 (2.17) = 〈y, Ay〉 (2.18) ≥ 0, (2.19) in which (2.19) holds since A is monotone.  Fact 2.2.13 Let A : X → X be linear and continuous such that ranA is closed. Then A†† = A. Proof. 
See [19, Exercise 7 on page 110].  Theorem 2.2.14 Let A : X → X be linear and continuous such that ranA is closed. Then A is monotone, if and only if, A† is monotone. Proof. “⇒” By Corollary 2.2.12. “⇐” Since ranA† = ranA is closed by Proposition 2.2.5, we apply Fact 2.2.13 and Corollary 2.2.12 to A† to conclude that A†† = A is monotone.  Here is a useful result that will be used very often. Proposition 2.2.15 Let A : X → X be linear, symmetric and continuous such that ranA is closed. Then qA†(x+Ay) = qA†(x) + qA(y) + 〈PranAx, y〉, ∀x, y ∈ X. 18 Chapter 2. Auxiliary results Proof. Let x ∈ X, y ∈ X. Then qA†(x+Ay) (2.20) = 12 〈A †x+A†Ay, x+Ay〉 (2.21) = qA†(x) + 1 2 〈A †Ay,Ay〉+ 12〈A †x,Ay〉+ 12〈A †Ay, x〉 (2.22) = qA†(x) + 1 2 〈AA †Ay, y〉+ 12〈AA †x, y〉+ 12 〈 y, (A†A)∗x 〉 (2.23) = qA†(x) + 1 2 〈PranA(Ay), y〉 + 1 2〈PranAx, y〉+ 1 2〈y,AA †x〉 (2.24) = qA†(x) + qA(y) + 1 2〈PranAx, y〉+ 1 2〈y, PranAx〉 (2.25) = qA†(x) + qA(y) + 〈PranAx, y〉, (2.26) in which, (2.24) by Fact 2.2.11 and Fact 2.2.4, (2.25) by Fact 2.2.11.  Corollary 2.2.16 Let A : X → X be linear, symmetric and continuous such that ranA is closed. Then qA†(Ax) = qA(x), ∀x ∈ X. Proof. Apply Proposition 2.2.15 to A with x replaced by 0 and y replaced by x.  Fact 2.2.17 Let A : X → X be linear and continuous such that ranA is closed. Then A†A = PranA†. Proof. See [19, Theorem 2.2.2].  19 Chapter 2. Auxiliary results Corollary 2.2.18 Let A : X → X be linear, continuous and monotone such that ranA is closed. Then AA† = A†A = PranA. Proof. By Proposition 2.2.5, ranA = ranA†. Then follows directly from Fact 2.2.11 and Fact 2.2.17.  20 Chapter 3 Inverse of linear monotone operators It is well known that a linear, continuous and monotone operator A can be decomposed into the sum of a symmetric operator A+ and an antisymmetric operator A◦: A = A+ +A◦. By Fact 2.1.18, A is also decomposed into the sum of the subdifferential of a proper lower semicontinuous and convex function ∇qA and an anti- symmetric operator A◦: A = ∇qA + A◦. Such a decomposition is called a Borwein-Wiersma decomposition. 3.1 Borwein-Wiersma decomposition Definition 3.1.1 (Borwein-Wiersma decomposition) We say A : X ⇉ X is Borwein-Wiersma decomposable or simply decomposable if A = ∂f + S, where f is proper lower semicontinuous and convex, and S is antisymmetric. What kind of operators are Borwein-Wiersma decomposable? 21 Chapter 3. Inverse of linear monotone operators Definition 3.1.2 We say A : Rn ⇉ Rn is skew if there exists a linear and antisymmetric operator B such that B |domA= A |domA . Fact 3.1.3 Let A : Rn ⇉ Rn be maximal monotone and at most single- valued. Suppose that 0 ∈ domA,domA is open and A is Frechet differen- tiable on domA. Then A is Borwein-Wiersma decomposable, if and only if, A−∇fA is skew, where fA : domA→ R : x 7→ ∫ 1 0 〈A(tx), x〉 dt. Proof. See [12, Theorem 3].  3.2 Asplund decomposition Here we also introduce another famous decomposition: Asplund decompo- sition, see [1]. Definition 3.2.1 We say A : X ⇉ X is acyclic with respect to a subset C if A = ∂f + S, where f is proper lower semicontinuous and convex, and S is monotone, which necessarily implies that ∂f is constant on C. If no set C is given, then C = domA. Definition 3.2.2 (Asplund decomposition) We say A : X ⇉ X is As- plund decomposable if A = ∂f + S, 22 Chapter 3. Inverse of linear monotone operators where f is proper lower semicontinuous and convex, and S is acyclic with respect to domA. The following tells us which operators are Asplund decomposable. 
Fact 3.2.3 (Asplund) Let A : Rn ⇉ Rn be maximal monotone such that int dom A 6= ∅ and A is at most single-valued. Then A is Asplund decom- posable. Proof. See [12, Theorem 13].  By the following result, we can find out the connection between the decompositions. Fact 3.2.4 Let A : Rn → Rn be antisymmetric. Then A is acyclic. Proof. See [12, Proposition 15].  Remark 3.2.5 Let A : Rn ⇉ Rn be maximal monotone and Borwein-Wiersma decomposable via ∂f + S, where f is proper lower semicontinuous and convex, and S is antisymmetric. Then such a decomposition is also an Asplund decomposition. 3.3 The Borwein-Wiersma decomposition of the inverse As mentioned earlier, a linear, continuous and monotone operator is Borwein- Wiersma decomposable. It is natural to ask whether its set-valued inverse 23 Chapter 3. Inverse of linear monotone operators mapping is also Borwein-Wiersma decomposable. Theorem 3.3.1 Let A : X → X be linear, continuous and monotone such that ranA is closed. Then A−1 = ∂f + (A†)◦, where f := qA† + ιranA is proper lower semicontinuous and convex, and (A†)◦ is antisymmetric. In particular, A−1 is decomposable. Proof. By Fact 2.2.2 and Corollary 2.2.12, A† is linear, continuous and monotone. Then by Fact 2.1.18 we have qA† is convex function, differentiable and ∇qA† = (A †)+. Since ranA is a closed subspace of X, by Fact 2.1.24 ιranA is proper lower semicontinuous and convex. Hence f is proper lower semicontinuous and convex. We show that the convex function f satisfies ∂f(x) + (A†)◦x =   A†x+ kerA, if x ∈ ranA; ∅, otherwise. (3.1) Indeed, since f is convex, ∀x ∈ ranA we have ∂f(x) = ∂(qA† + ιranA)(x) = ∇qA†(x) + ∂ιranA(x) ( by Fact 2.1.30) = (A†)+x+ kerA∗ (3.2) = (A†)+x+ kerA, (3.3) 24 Chapter 3. Inverse of linear monotone operators where (3.2) holds by Fact 2.1.32, and (3.3) by Proposition 2.2.5. Thus ∂f(x) + (A†)◦x = (A†)+x+ kerA+ (A†)◦x = A†x+ kerA, ∀x ∈ ranA. If x 6∈ ranA = dom f , by definition ∂f(x) = ∅. Hence (3.1) holds. By Corollary 2.2.7, we have that A−1x = A†x+ kerA, ∀x ∈ ranA. (3.4) Thus, A−1x =   A†x+ kerA, if x ∈ ranA; ∅, otherwise. (3.5) By (3.1) and (3.5), we have A−1 = ∂f + (A†)◦.  Proposition 3.3.2 Assume T : X ⇉ X is monotone, then T−1 is mono- tone. Moreover, if T is maximal monotone, then so too is T−1. Proof. Use Definition 2.1.1 and Definition 2.1.14 directly.  Due to Phelps and Simons, we obtain the following Proposition. Proposition 3.3.3 Let A : X → X be linear, continuous and monotone such that A is one-to-one and symmetric. Then A−1 = ∂f, where f(x) := supy∈ranA { 〈A−1y, x〉− 12 〈A −1y, y〉 } (∀x ∈ X) is proper lower 25 Chapter 3. Inverse of linear monotone operators semicontinuous and convex. If X = Rn, then A−1 = ∇qA−1. In particular, A−1 is decomposable. Proof. By Example 2.1.16, A is maximal monotone. Then by Proposi- tion 3.3.2, A−1 is maximal monotone. Since A is linear and one-to-one , A−1 is single-valued and linear on ranA. In the following we show that 〈x,A−1y〉 = 〈y,A−1x〉, ∀x, y ∈ ranA. Let x, y ∈ ranA. Then there exist unique x1, y1 ∈ X such that x = Ax1, y = Ay1. We have 〈x,A−1y〉 = 〈Ax1, y1〉 = 〈x1, Ay1〉 = 〈A−1x, y〉. By [23, Theorem 5.1], f is proper lower semicontinuous and convex, and A−1 = ∂f. If x = Rn, we have A is invertible. By assumption, A−1 is symmetric and monotone. By Fact 2.1.18, A−1 = ∇qA−1.  26 Chapter 4 Monotone operators with linear graphs Theorem 3.3.1 tells us that the set-valued inverse A−1 of a linear, continuous and monotone operator A is Borwein-Wiersma decomposable. 
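The ingredients of Theorem 3.3.1 can also be illustrated numerically. The sketch below, assuming NumPy (np.linalg.pinv computes the Moore-Penrose inverse A†), uses an illustrative singular, nonsymmetric, monotone matrix: it checks that A† is again monotone (Theorem 2.2.14), that AA† = A†A = P_ranA (Corollary 2.2.18), and that the single-valued part ∇q_{A†} + (A†)◦ of the decomposition selects an element of A−1.

    import numpy as np

    A = np.array([[ 1.0, 1.0, 0.0],
                  [-1.0, 1.0, 0.0],
                  [ 0.0, 0.0, 0.0]])     # monotone: A_+ = diag(1, 1, 0) is PSD; ker A = span{e_3}

    A_dag = np.linalg.pinv(A)            # Moore-Penrose inverse A^dagger
    sym  = (A_dag + A_dag.T) / 2         # (A^dagger)_+ = grad q_{A^dagger}  (Fact 2.1.18)
    skew = (A_dag - A_dag.T) / 2         # (A^dagger)_o, the antisymmetric part

    # A^dagger is monotone as well (Theorem 2.2.14): its symmetric part is PSD.
    print(np.linalg.eigvalsh(sym))

    # A A^dagger = A^dagger A = P_{ran A}  (Corollary 2.2.18).
    P_ran = A @ A_dag
    print(np.allclose(P_ran, A_dag @ A), np.allclose(P_ran @ P_ran, P_ran), np.allclose(P_ran, P_ran.T))

    # Single-valued part of the decomposition of Theorem 3.3.1 at y in ran A:
    # grad q_{A^dagger}(y) + (A^dagger)_o y = A^dagger y, which is a preimage of y;
    # the whole fibre is A^{-1} y = A^dagger y + ker A  (Corollary 2.2.7).
    y = A @ np.array([2.0, -1.0, 5.0])   # guarantees y in ran A
    selection = sym @ y + skew @ y
    print(np.allclose(A @ selection, y))

For this choice of A the antisymmetric part (A†)◦ is nonzero, so both the subdifferential term and the skew term of the decomposition are genuinely present.
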
Naturally, this raises the following question: Are maximal monotone operators with linear graphs also Borwein-Wiersma decomposable? This chapter answers the question above. It also gives some important equivalent conditions of monotonicity and maximal monotonicity of operators with linear graphs. Let us first introduce some interesting results about these operators. 4.1 Linear graph Fact 4.1.1 Let S,M be closed linear subspaces of X. Then S =M ⇔ S⊥ =M⊥, S 6=M ⇔ S⊥ 6=M⊥. Proof. Follows directly by S⊥⊥ = S,M⊥⊥ =M.  27 Chapter 4. Monotone operators with linear graphs Definition 4.1.2 Let A : X ⇉ X. We define domA, ranA by domA := {x | Ax 6= ∅} ranA := {x∗ | x∗ ∈ Ax, ∃x ∈ domA}. Proposition 4.1.3 Let A : X ⇉ X such that graA is a linear subspace of X ×X. For every x, y ∈ domA, the following hold. (i) A0 is a linear subspace of X. (ii) Ax = x∗ +A0, ∀x∗ ∈ Ax. (iii) αAx+ βAy = A(αx+ βy), ∀α, β ∈ R with α 6= 0 or β 6= 0. (iv) If A is monotone, then domA⊥A0, hence domA ⊂ (A0)⊥, A0 ⊂ (domA)⊥. (v) If A is monotone, then 〈x, x∗〉 ≥ 0, ∀(x, x∗) ∈ graA. Proof. Obviously, domA = { x ∈ X| (x, y) ∈ graA, ∃y ∈ X } (4.1) and domA is a linear subspace of X. (i): ∀α, β ∈ R, ∀x∗, z∗ ∈ A0 we have (0, x∗) ∈ graA (0, z∗) ∈ graA. 28 Chapter 4. Monotone operators with linear graphs As graA is a linear subspace of X ×X, α(0, x∗) + β(0, z∗) = (0, αx∗ + βz∗) ∈ graA. This gives αx∗ + βz∗ ∈ A0. Hence A0 is a linear subspace. (ii): We first show that x∗ +A0 ⊂ Ax, ∀x∗ ∈ Ax. Take x∗ ∈ Ax, z∗ ∈ A0. Then (x, x∗) ∈ graA and (0, z∗) ∈ graA. Since graA is a linear subspace, (x, x∗ + z∗) ∈ graA. That is, x∗ + z∗ ∈ Ax. Then x∗ +A0 ⊂ Ax. On the other hand, let x∗, y∗ ∈ Ax. We have (x, x∗) ∈ graA, (x, y∗) ∈ graA. Since graA is a linear subspace, (x− x, y∗ − x∗) = (0, y∗ − x∗) ∈ graA. Then y∗ − x∗ ∈ A0. That is, y∗ ∈ x∗+A0. Thus Ax ⊂ x∗ +A0. Hence (ii) 29 Chapter 4. Monotone operators with linear graphs holds. (iii): Let α, β ∈ R. Take x∗ ∈ Ax, y∗ ∈ Ay. Then we have (x, x∗) ∈ graA, (y, y∗) ∈ graA. Since graA is a linear subspace, we have (αx + βy, αx∗ + βy∗) ∈ graA. That is, αx∗ + βy∗ ∈ A(αx+ βy). Then by (ii) we have Ax = x∗ +A0, Ay = y∗ +A0, A(αx+ βy) = αx∗ + βy∗ +A0. (4.2) Suppose that α 6= 0. By (i) αA0 + βA0 = A0. (4.3) Then by (4.2) and (4.3), αAx+ βAy = α(x∗ +A0) + β(y∗ +A0) = αx∗ + βy∗ + (αA0 + βA0) = αx∗ + βy∗ +A0 = A(αx+ βy). (iv): Pick x ∈ domA. By (4.1) there exists x∗ ∈ X such that (x, x∗) ∈ graA. Then by monotonicity of A, we have 〈x− 0, x∗ − z∗〉 ≥ 0, ∀z∗ ∈ A0. 30 Chapter 4. Monotone operators with linear graphs That is, 〈x, x∗〉 ≥ 〈x, z∗〉, ∀z∗ ∈ A0. (4.4) Since A0 is a linear subspace by (i), by Lemma 2.1.28 and (4.4), x⊥A0, ∀x ∈ domA ⇒ domA⊥A0 ⇒ domA ⊂ (A0)⊥, A0 ⊂ (domA)⊥. (v): Since (0, 0) ∈ graA, 〈x, x∗〉 = 〈x− 0, x∗ − 0〉 ≥ 0, ∀(x, x∗) ∈ graA.  Remark 4.1.4 Proposition 4.1.3(ii) is a useful representation. It means Ax = Ãx+A0, ∀x ∈ domA, Ãx ∈ Ax. Later, we will show the selection map à can be chosen to be linear! 4.2 Maximal monotonicity identification The next three results are well known. Fact 4.2.1 Let A : X ⇉ X be maximal monotone. Then Ax is closed and convex, ∀x ∈ X. 31 Chapter 4. Monotone operators with linear graphs Proof. Fix x ∈ X. If x /∈ domA, then Ax = ∅ is closed and convex. So suppose x ∈ domA. Let (x∗n) ⊂ Ax such that x∗n → x∗. In the following we show that x∗ ∈ Ax. For every (y, y∗) ∈ graA, by monotonicity of A, we have 〈y − x, y∗ − x∗n〉 ≥ 0. (4.5) Letting n→∞ in (4.5), we see that 〈y − x, y∗ − x∗〉 ≥ 0, ∀(y, y∗) ∈ graA. (4.6) By (4.6) and maximal monotonicity of A, we have (x, x∗) ∈ graA. 
That is, x∗ ∈ Ax. Hence Ax is closed. Now we show that Ax is convex. Let δ ∈ [0, 1]. For every x∗1, x ∗ 2 ∈ Ax, we have 〈y − x, y∗ − x∗1〉 ≥ 0 (4.7) 〈y − x, y∗ − x∗2〉 ≥ 0, ∀(y, y ∗) ∈ graA. (4.8) Adding (4.7)×δ and (4.8)×(1− δ) yields 〈y − x, y∗ − (δx∗1 + (1− δ)x ∗ 2)〉 ≥ 0, ∀(y, y ∗) ∈ graA. (4.9) Since A is maximal monotone, (x, δx∗1 + (1− δ)x ∗ 2) ∈ graA, i.e., δx ∗ 1 + (1− δ)x∗2 ∈ Ax. Thus Ax is convex.  Proposition 4.2.2 Let A,B : X ⇉ X be monotone. Then A+B is mono- tone. 32 Chapter 4. Monotone operators with linear graphs Proof. Let (x, x∗), (y, y∗) ∈ gra(A+B). Then there exist (x, x∗1), (y, y ∗ 1) ∈ graA (x, x∗2), (y, y ∗ 2) ∈ graB such that x∗ = x∗1 + x ∗ 2 y∗ = y∗1 + y ∗ 2. Then 〈x− y, x∗ − y∗〉 = 〈x− y, x∗1 + x ∗ 2 − y ∗ 1 − y ∗ 2〉 = 〈x− y, x∗1 − y ∗ 1〉+ 〈x− y, x ∗ 2 − y ∗ 2〉 ≥ 0. Hence A+B is monotone.  Fact 4.2.3 Let A : X ⇉ X be maximal monotone. Then domA is convex. Proof. See [31, Theorem 3.11.12].  Fact 4.2.4 Let A : X ⇉ X be maximal monotone. Then A+ ∂ιdomA = A. Proof. By Fact 4.2.3 and Fact 2.1.24, ιdomA is proper lower semicontinuous and convex. Then by Proposition 2.1.21, ∂ιdomA is monotone. Then by Fact 4.2.2, A+ ∂ιdomA is monotone. 33 Chapter 4. Monotone operators with linear graphs Suppose x ∈ domA. Then since 0 ∈ ∂ιdomA(x) , (A+ ∂ιdomA)(x) = Ax+ ∂ιdomA(x) ⊃ Ax. Suppose x /∈ domA. Then since Ax = ∅, (A+ ∂ιdomA)(x) ⊃ Ax, ∀x ∈ X. Since A is maximal monotone, A+ ∂ιdomA = A.  The following are interesting properties about maximal monotonicity of monotone operators with linear graphs. Proposition 4.2.5 Let A : X ⇉ X be monotone such that graA is a linear subspace of X ×X. Then A is maximal monotone⇒ domA = (A0)⊥. Proof. Suppose to the contrary that domA 6= (A0)⊥. By Proposition 4.1.3(i) and Fact 4.2.1, A0 is a closed subspace. By Fact 4.1.1, (domA)⊥ 6= (A0)⊥⊥ = A0. Then by Proposition 4.1.3(iv), we have (domA)⊥ = (domA)⊥ ' A0. (4.10) Thus there exists ω∗ ∈ (domA)⊥ \A0. By ω∗ ∈ (domA)⊥, we have 〈ω∗, x〉 = 0, ∀x ∈ domA. (4.11) 34 Chapter 4. Monotone operators with linear graphs Since ω∗ /∈ A0, (0, ω∗) /∈ graA. By maximal monotonicity of A, there exists (x0, x ∗ 0) ∈ graA such that 〈x∗0, x0〉 − 〈ω ∗, x0〉 = 〈x∗0 − ω ∗, x0 − 0〉 < 0. (4.12) By (4.11) and (4.12), 〈x∗0, x0〉 < 0, which is a contradiction to Proposi- tion 4.1.3(v).  Proposition 4.2.6 Let A : X ⇉ X be monotone such that graA is a linear subspace of X ×X and A0 is closed. Then domA = (A0)⊥ ⇒ A is maximal monotone. Proof. Let (x, x∗) ∈ X ×X satisfy that 〈x− y, x∗ − y∗〉 ≥ 0, ∀(y, y∗) ∈ graA. (4.13) In the following we will verify that (x, x∗) ∈ graA. By (4.13) we have 〈x, x∗〉 ≥ 〈x, z∗〉, ∀z∗ ∈ A0. Since A0 is a linear subspace by Proposition 4.1.3(i), by Lemma 2.1.28 we have x⊥A0, i.e., x ∈ (A0)⊥ = domA. Take x∗0 ∈ Ax. For every v ∗ ∈ Av, we have x∗0 + v ∗ ∈ A(x + v) by Proposi- 35 Chapter 4. Monotone operators with linear graphs tion 4.1.3(iii). By (4.13), we have 〈x− (x+ v), x∗ − (x∗0 + v ∗)〉 ≥ 0, ∀(v, v∗) ∈ graA. That is, 〈v, v∗〉 ≥ 〈v, x∗ − x∗0〉, ∀(v, v ∗) ∈ graA. (4.14) By Proposition 4.1.3(iii), we have 1 n v∗ ∈ A( 1 n v). Then by (4.14), 〈 1 n v, 1 n v∗〉 ≥ 〈 1 n v, x∗ − x∗0〉, ∀(v, v ∗) ∈ graA. (4.15) Multiply (4.15) both sides by n and then let n→∞ to see that 〈v, x∗ − x∗0〉 ≤ 0, ∀v ∈ domA. (4.16) Since domA is a linear subspace, by Lemma 2.1.28, (x∗−x∗0)⊥ domA. Since A0 is closed, we have (x∗ − x∗0) ∈ (domA) ⊥ = (A0)⊥⊥ = A0. (4.17) According to Proposition 4.1.3(ii), x∗ ∈ x∗0 +A0 = Ax.  Here is an important result in this chapter. 
Theorem 4.2.7 Let A : X ⇉ X be monotone such that graA is a linear subspace of X ×X and domA is closed. Then A is maximal monotone⇔ domA = (A0)⊥, A0 is closed. 36 Chapter 4. Monotone operators with linear graphs Proof. Since A is maximal monotone, A0 is closed by Fact 4.2.1. Combine Proposition 4.2.5 and Proposition 4.2.6.  Theorem 4.2.7 gives an equivalent condition in infinite-dimensional spaces. When we consider it in finite-dimensional spaces, can we get further results? Now we discuss this in detail. Proposition 4.2.8 Let A : Rn ⇉ Rn be monotone such that graA is a linear subspace of Rn × Rn. Then dim(graA) = dim(domA) + dimA0. Proof. We shall construct a basis of graA. By Proposition 4.1.3(i), A0 is a linear subspace. Let { x∗1, · · · , x ∗ k } be a basis of A0 and { xk+1, . . . , xl } be a basis of domA. We show that{ (0, x∗1), . . . , (0, x ∗ k), (xk+1, x ∗ k+1), · · · , (xl, x ∗ l ) } is a basis of graA, where x∗i ∈ Axi, i ∈ {k + 1, · · · , l}. We first show that { (0, x∗1), . . . , (0, x ∗ k), (xk+1, x ∗ k+1), · · · , (xl, x ∗ l ) } is linearly independent. Let αi, i ∈ {1, · · · , l}, satisfy that α1(0, x ∗ 1) + · · ·+ αk(0, x ∗ k) + αk+1(xk+1, x ∗ k+1) + · · ·+ αl(xl, x ∗ l ) = 0. Hence αk+1xk+1 + · · ·+ αlxl = 0 (4.18) α1x ∗ 1 + · · ·+ αkx ∗ k + αk+1x ∗ k+1 + · · ·+ αlx ∗ l = 0. (4.19) 37 Chapter 4. Monotone operators with linear graphs Since { xk+1, . . . , xl } is linearly independent, by (4.18) we have αi = 0, i ∈ {k+1, · · · , l}. Then since { x∗1, · · · , x ∗ k } is linearly independent, by (4.19) we have αi = 0, i ∈ {1, · · · , k}. Thus αi = 0, i ∈ {1, · · · , l}. Hence { (0, x∗1), . . . , (0, x ∗ k), (xk+1, x ∗ k+1), · · · , (xl, is linearly independent. Let (x, x∗) ∈ graA. Then there exists βi, i ∈ {k + 1, · · · , l} satisfying that βk+1xk+1 + · · · + βlxl = x. Thus βk+1x ∗ k+1 + · · ·+ βlx ∗ l ∈ Ax. By Proposition 4.1.3(ii), there exists z∗ ∈ A0 such that βk+1x ∗ k+1 + · · · + βlx ∗ l + z ∗ = x∗. Then there exists βi, i ∈ {1, · · · , k} satisfying that z∗ = β1x∗1 + · · · + βkx ∗ k. Thus (x, x∗) = β1(0, x∗1) + · · ·+ βk(0, x ∗ k) + βk+1(xk+1, x ∗ k+1) + · · ·+ βl(xl, x ∗ l ). Hence { (0, x1), . . . , (0, xk), (xk+1, x ∗ k+1), · · · , (xl, x ∗ l ) } is a basis of graA. Then dim(graA) = dim(domA) + dim(A0). 38 Chapter 4. Monotone operators with linear graphs  From Proposition 4.2.8, we now get a satisfactory characterization. Proposition 4.2.9 Let A : Rn ⇉ Rn be monotone such that graA is a linear subspace of Rn × Rn. Then A is maximal monotone⇔ dimgraA = n. Proof. Since linear subspaces are closed in finite-dimensional spaces, by Proposition 4.1.3(i) and Theorem 4.2.7 we have A is maximal monotone⇔ domA = (A0)⊥. (4.20) Assume that A is maximal monotone. Then domA = (A0)⊥. Then Proposition 4.2.8 implies dim(graA) = dim(domA) + dim(A0) = dim((A0)⊥) + dim(A0) = n, as (A0)⊥ +A0 = Rn. Conversely, let dim(graA) = n. By Proposition 4.2.8, we have 39 Chapter 4. Monotone operators with linear graphs dim(domA) = n− dim(A0). As dim((A0)⊥) = n−dim(A0) and domA ⊂ (A0)⊥ by Proposition 4.1.3(iv), we have domA = (A0)⊥. By (4.20), A is maximal monotone.  4.3 Constructing a good selection When we proved Theorem 3.3.1, most of the much focused on finding a lin- ear, continuous and monotone operator à such that à |domA−1 is a selection of A−1. Now for a maximal monotone operator A with a linear graph, we also want to find such an operator. Fact 4.3.1 Let S be a nonempty closed convex subset of X. Then for each x ∈ X there exists a unique s0 ∈ S such that ‖x− s0‖ = min { ‖x− s‖ | s ∈ S } . 
Proof. See [19, Corollary 1.1.5].  By Fact 4.3.1, we can define the projector onto a nonempty closed convex subset of X. 40 Chapter 4. Monotone operators with linear graphs Definition 4.3.2 Let S be a nonempty closed convex subset of X.We define the projector PS : X → X by PSx = argmins∈S‖x− s‖, x ∈ X. Fact 4.3.3 Let S be a closed linear subspace of X and x0 ∈ X. Then PS is linear, continuous and PS+x0x = x0 + PS(x− x0), ∀x ∈ X (4.21) PSx+ PS⊥x = x (4.22) P ∗S = PS . (4.23) Proof. (4.21): Let x ∈ S. By Definition 4.3.2, ‖x− x0 − PS(x− x0)‖ ≤ ‖x− x0 − s‖ = ‖x− (x0 + s)‖, ∀s ∈ S. By Fact 4.3.1, PS(x− x0) ∈ S. Thus x0 + PS(x− x0) ∈ S + x0. By Defini- tion 4.3.2, PS+x0x = x0 + PS(x− x0). (4.22) holds by S⊕S⊥ = X. For the other parts see [27, Theorem 5.51(a)].  Definition 4.3.4 Let A : X ⇉ X such that graA is a linear subspace of X ×X. We define QA by QAx =   PAxx, if x ∈ domA; ∅, otherwise. 41 Chapter 4. Monotone operators with linear graphs Proposition 4.3.5 Let A : X ⇉ X be maximal monotone such that graA is a linear subspace of X × X. Then QA is single-valued on domA and a selection of A. Proof. By Fact 4.2.1, Ax is nonempty closed convex, for every x ∈ domA. Then by Fact 4.3.1, QA is single-valued on domA and a selection of A.  Proposition 4.3.6 Let A : X ⇉ X be maximal monotone such that graA is a linear subspace of X×X. Then QA is monotone, and linear on domA. Moreover, QAx = P(A0)⊥(Ax), ∀x ∈ domA. (4.24) Proof. By Proposition 4.1.3(i) and Fact 4.2.1, A0 is a closed subspace. Let x∗ ∈ Ax. Then QAx = PAxx = Px∗+A0x (4.25) = x∗ + PA0(x− x∗) = x∗ + PA0x− PA0x∗ (4.26) = PA0x+ P(A0)⊥x ∗ (4.27) = P(A0)⊥x ∗ (4.28) = P(A0)⊥(Ax), (4.29) in which, (4.25) holds by Proposition 4.1.3(ii), (4.26) and (4.27) by Fact 4.3.3. (4.28) holds since PA0x = 0 by Proposition 4.1.3(iv). Thus (4.24) holds. SinceQAx is single-valued by Remark 4.3.5, then P(A0)⊥(Ax) is single-valued. 42 Chapter 4. Monotone operators with linear graphs Now we show that QA is linear on domA. Take x, y ∈ domA and α, β ∈ R. If α = β = 0, by Proposition 4.1.3(i), we have QA(αx+ βy) = QA0 = PA00 = 0 = αQAx+ βQAy. (4.30) Assume that α 6= 0 or β 6= 0. By (4.24), we have QA(αx+ βy) = P(A0)⊥A(αx+ βy) (4.31) = αP(A0)⊥(Ax) + βP(A0)⊥(Ay) (4.32) = αQAx+ βQAy, (4.33) where (4.32) holds by Proposition 4.1.3(iii) and Fact 4.3.3, (4.33) by (4.24). By Proposition 4.3.5, QA is a selection of A. Since A is monotone, QA is monotone.  Proposition 4.3.7 Let Y be a closed linear subspace of X. Let A : X ⇉ X be monotone such that A is linear on Y and at most single-valued. Then PYAPY is linear, continuous and maximal monotone. Proof. Clearly, PYAPY is linear since PY is linear by Fact 4.3.3 and A is linear on Y . In the following we show that PYAPY is monotone. Let x ∈ X. Then 〈x, PYAPY x〉 = 〈P ∗ Y x, A(PY x) = 〈PY x, A(PY x)〉 (4.34) ≥ 0, (4.35) 43 Chapter 4. Monotone operators with linear graphs where (4.34) holds by Fact 4.3.3. Inequality (4.35) holds since A is mono- tone. By Example 2.1.3, PYAPY is monotone. Then by Fact 2.1.22, we have PYAPY is continuous and maximal monotone.  Now we show that we found the operator we were looking for. Corollary 4.3.8 Let A : X ⇉ X be maximal monotone such that graA is a linear subspace of X ×X and domA is closed. Then PdomAQAPdomA is linear, continuous and maximal monotone. Moreover, PdomAQAPdomA = QAPdomA, (PdomAQAPdomA) |domA= QA and Ax = (PdomAQAPdomA)x + A0,∀x ∈ domA. Proof. The former holds by Proposition 4.3.6 and Proposition 4.3.7. 
By Proposition 4.3.6 and Proposition 4.2.5, we have (QAPdomA)x ∈ (A0) ⊥ = domA,∀x ∈ X. Then by Proposition 4.3.5, (PdomAQAPdomA)x = QAx ∈ Ax,∀x ∈ domA. By Proposition 4.1.3(ii), Ax = (PdomAQAPdomA)x + A0,∀x ∈ domA.  Remark 4.3.9 By Corollary 4.3.8, we know that QA |domA is continuous on domA. But if we omit the assumption that domA be closed, then we can’t guarantee that QA |domA is continuous on domA. Example 4.3.10 Let X be (ℓ2, ‖ · ‖2) space and A : X → X : (xn) ∞ n=1 7→ (xn n )∞n=1. Then QA−1 |domA−1 is not continuous on domA −1. Proof. We first show that QA−1 = A −1 is maximal monotone with a linear graph, but domA−1 = ranA is not closed. Clearly, A is linear and one-to-one. Thus QA−1 = A −1 and graA−1 is a 44 Chapter 4. Monotone operators with linear graphs linear subspace. By Example 2.1.27, A is maximal monotone. By Proposi- tion 3.3.2, A−1 is maximal monotone. By Proposition 4.2.5, ranA = domA−1 = (A−10)⊥ = (0)⊥ = X. Now we show that ranA is not closed, i.e, ranA 6= X. On the contrary, assume ranA = X. Let x = (1/n)∞n=1 ∈ X. Then we have A−1x = (1)∞n=1 /∈ X. This is a contradiction. Hence ranA is not closed. In the following we show that QA−1 = A −1 is not continuous on ranA = domA−1. Take { 1 n en} ⊂ ranA, where en = (0, · · · , 0, 1, 0, · · · ) : the nth entry is 1 and the others are 0. Clearly, 1 n en → 0. But ‖A −1( 1 n en)− 0‖ = ‖en‖9 0. Hence QA−1 = A −1 is not continuous on ranA.  4.4 The first main result Now we come to our first main result in this thesis. Theorem 4.4.1 Let A : X ⇉ X be maximal monotone such that graA is a linear subspace of X ×X and domA is closed. Then A = ∂f + Ã◦, where f := q à + ιdomA is proper lower semicontinuous and convex, à = PdomAQAPdomA is linear, continuous and maximal monotone, and Ã◦ is antisymmetric. In particular, A is decomposable. Proof. By Corollary 4.3.8, à is linear, continuous and maximal monotone. Then by Fact 2.1.18, q à is convex, differentiable and ∇q à = Ã+. Since 45 Chapter 4. Monotone operators with linear graphs domA is a closed subspace, by Fact 2.1.24 ιdomA is proper lower semicon- tinuous and convex. Hence f is proper lower semicontinuous and convex. By Theorem 4.2.7, (domA)⊥ = (A0)⊥⊥ = A0. Let x ∈ domA. We have ∂f(x) = ∂(q à + ιdomA)(x) = ∇q à (x) + ∂ιdomA(x) (By Fact 2.1.30) = Ã+x+ (domA) ⊥ (by Fact 2.1.18 and Fact 2.1.29) = Ã+x+A0. Thus ∀x ∈ domA, ∂f(x) + Ã◦x = Ã+x+A0 + Ã◦x = Ãx+A0 (4.36) = Ax, (4.37) where (4.37) holds by Corollary 4.3.8. If x /∈ domA, by definition ∂f(x) = ∅ = Ax. Hence we have Ax = ∂f(x) + Ã◦x, ∀x ∈ X.  In general, a convex cone is not a linear subspace. We wonder if there exists a maximal monotone operator with a convex cone graph such that its graph is not a linear subspace. The following gives a negative answer. Fact 4.4.2 A convex cone K is a linear subspace, if and only if, −K ⊂ K. Proof. See [25, Theorem 2.7].  46 Chapter 4. Monotone operators with linear graphs Proposition 4.4.3 Let A : X ⇉ X be maximal monotone such that graA is a convex cone. Then graA is a linear subspace of X ×X. Proof. By Fact 4.4.2, it suffices to show that − graA ⊂ graA. Assume that (x, x∗) ∈ graA. We show that −(x, x∗) ∈ graA. Let (y, y∗) ∈ graA. As graA is a convex cone, (x, x∗) + (y, y∗) = (x+ y, x∗ + y∗) ∈ graA. Thus 〈x+ y, x∗ + y∗〉 ≥ 0. ( since A is monotone and (0, 0) ∈ graA ) This means 〈−x− y, −x∗ − y∗〉 ≥ 0 〈(−x)− y, (−x∗)− y∗〉 ≥ 0, ∀(y, y∗) ∈ graA. Since A is maximal, we conclude that (−x,−x∗) ∈ graA, −(x, x∗) ∈ graA. Hence graA is a linear subspace.  47 Chapter 4. 
Monotone operators with linear graphs In [14], Butnariu and Kassay discuss monotone operators with closed convex graphs. Actually, if such operators are maximal monotone, their graphs are affine sets. Fact 4.4.4 C ⊂ X is an affine set ⇔ ∃c0 ∈ C, C − c0 is a linear subspace. Proof. See [31, page 1].  Proposition 4.4.5 Let A : X ⇉ X be maximal monotone such that graA is a convex subset. Then graA is an affine set. Proof. Let (x0, x ∗ 0) ∈ graA and à : X ⇉ X such that gra à = graA − (x0, x ∗ 0). Thus gra à is convex and (0, 0) ∈ gra Ã. By Fact 2.1.15, à is maximal monotone. By Fact 4.4.4, it suffices to verify that gra à is a linear subspace. By Proposition 4.4.3, it suffices to show that gra à is a cone. Let k ≥ 0 and (x, x∗) ∈ gra Ã. We consider two cases. Case 1: k ≤ 1. k(x, x∗) = k(x, x∗) + (1− k)(0, 0) ∈ gra Ã. (4.38) Case 2: k > 1. Let (y, y∗) ∈ gra Ã. By (4.38), 1 k (y, y∗) ∈ gra Ã. Thus, 〈kx− y, kx∗ − y∗〉 = k2〈x− 1 k y, x∗ − 1 k y∗〉 ≥ 0. Since à is maximal monotone, k(x, x∗) ∈ gra Ã. Hence gra à is a cone.  48 Chapter 4. Monotone operators with linear graphs 4.5 Monotonicity of operators with linear graphs In general, it is not easy to identify whether an operator is monotone. But if an operator with a linear graph and a basis is known, then we can use linear algebra to verify monotonicity and strict monotonicity. Theorem 4.5.1 Let A : X ⇉ X and graA = span { (m1,m ∗ 1), · · · , (mn,m ∗ n) } . Then the following are equivalent (i) A is monotone. (ii) The matrix B :=   〈m1,m ∗ 1〉 〈m1,m ∗ 2〉 · · · 〈m1,m ∗ n〉 〈m2,m ∗ 1〉 〈m2,m ∗ 2〉 · · · 〈m2,m ∗ n〉 ... . . . . . . ... 〈mn,m ∗ 1〉 〈mn,m ∗ 2〉 · · · 〈mn,m ∗ n〉   is monotone. (iii) B+ is positive semidefinite. Proof. Since graA = span { (m1,m ∗ 1), . . . , (mn,m ∗ n) } , ∀(x, x∗) ∈ graA,∃α1, . . . , αn such that (x, x∗) = n∑ i=1 αi(mi,m ∗ i ). 49 Chapter 4. Monotone operators with linear graphs Then A is monotone ⇔ 〈x− y, x∗ − y∗〉 ≥ 0, ∀(x, x∗) ∈ graA,∀(y, y∗) ∈ graA ⇔ 〈 n∑ i=1 (αi − βi)mi, n∑ i=1 (αi − βi)m ∗ i 〉 ≥ 0 where (x, x∗) = n∑ i=1 αi(mi,m ∗ i ) = ( n∑ i=1 αimi, n∑ i=1 αim ∗ i ) (y, y∗) = n∑ i=1 βi(mi,m ∗ i ) = ( n∑ i=1 βimi, n∑ i=1 βim ∗ i ) ⇔ 〈 n∑ i=1 γimi, n∑ i=1 γim ∗ i 〉 = n∑ i=1 n∑ j=1 〈mi,m ∗ j 〉γiγj ≥ 0, ∀γi ∈ R = (γ1, . . . , γn)B(γ1, . . . , γn) ⊺ ≥ 0, ∀γi ∈ R ⇔ ν⊺Bν ≥ 0, ∀ν ∈ Rn ⇔ B is monotone (by Example 2.1.3) ⇔ B+ is positive semidefinite (by Fact 2.1.13 and Example 2.1.3).  We also have a way to identify whether an operator with a linear graph is strictly monotone. First we give the definition of strictly monotone. Definition 4.5.2 A strictly monotone operator T : X ⇉ X is a mapping that satisfies 〈x∗ − y∗, x− y〉 > 0, whenever x∗ ∈ T (x), y∗ ∈ T (y) and x 6= y. 50 Chapter 4. Monotone operators with linear graphs Definition 4.5.3 Let A : X → X. We say A is positive definite if 〈x,Ax〉 > 0, ∀x 6= 0. Theorem 4.5.4 Let A : X ⇉ X and graA = span { (m1,m ∗ 1), . . . , (mn,m ∗ n) } . Suppose that { m1, . . . ,mn } is linearly independent. Then A is strictly monotone, if and only if, the matrix B =   〈m1,m ∗ 1〉 〈m1,m ∗ 2〉 · · · 〈m1,m ∗ n〉 〈m2,m ∗ 1〉 〈m2,m ∗ 2〉 · · · 〈m2,m ∗ n〉 ... . . . . . . ... 〈mn,m ∗ 1〉 〈mn,m ∗ 2〉 · · · 〈mn,m ∗ n〉   is positive definite. Proof. Since graA = span{(mi,m ∗ i )} n i=1, A is strictly monotone ⇔ 〈x− y, x∗ − y∗〉 > 0, ∀(x, x∗), (y, y∗) ∈ graA with x 6= y ⇔ 〈 n∑ i=1 (αi − βi)mi, n∑ i=1 (αi − βi)m ∗ i 〉 > 0 where (x, x∗) = n∑ i=1 αi(mi,m ∗ i ) = ( n∑ i=1 αimi, n∑ i=1 αim ∗ i ) (y, y∗) = n∑ i=1 βi(mi,m ∗ i ) = ( n∑ i=1 βimi, n∑ i=1 βim ∗ i ) . 51 Chapter 4. 
Monotone operators with linear graphs Since m1, . . . ,mn are linearly independent, x 6= y ⇔ (α1, . . . , αn) 6= (β1, . . . , βn) ⇔ γ := (α1 − β1, . . . , αn − βn) 6= 0, A is strictly monotone ⇔ 〈 n∑ i=1 γimi, n∑ i=1 γim ∗ i 〉 > 0, for γ 6= 0 (4.39) ⇔ γ⊺Bγ > 0, ∀γ ∈ Rn with γ 6= 0 (4.40) ⇔ B is positive definite. (4.41) Just as in the proof of Theorem 4.5.1, we see that (4.40) holds.  52 Chapter 5 Auto-conjugates 5.1 Auto-conjugate representation Definition 5.1.1 Let f : Rn ×Rn → ]−∞,+∞] . We define f⊺ by f⊺(x, x∗) = f(x∗, x), ∀(x, x∗) ∈ Rn × Rn. Definition 5.1.2 (Fenchel conjugate) Let f : Rn → ]−∞,+∞] . The Fenchel conjugate of f , f∗, is defined by f∗(x∗) = sup x { 〈x∗, x〉 − f(x) } , ∀x∗ ∈ Rn. Fact 5.1.3 Let f : Rn → ]−∞,+∞] be proper lower semicontinuous and convex. Then f∗∗ = f. Proof. See [26, Theorem 11.1].  Proposition 5.1.4 Let f, g : Rn → ]−∞,+∞] satisfy f ≤ g. Then f∗ ≥ g∗. Proof. Follows directly by Definition 5.1.2.  Definition 5.1.5 (Auto-conjugate) Let f : Rn × Rn → ]−∞,+∞] be 53 Chapter 5. Auto-conjugates proper lower semicontinuous and convex. We say f is auto-conjugate if f∗⊺ = f. Here are some examples of auto-conjugate functions. Example 5.1.6 (Ghoussoub ’06/[17]) Let ϕ : Rn → ]−∞,+∞] be proper lower semicontinuous and convex, and A : Rn → Rn be linear and antisym- metric. Then f(x, x∗) := ϕ(x) + ϕ∗(x∗) f(x, x∗) := ϕ(x) + ϕ∗(−Ax+ x∗) (∀(x, x∗) ∈ Rn ×Rn) are auto-conjugate. 54 Chapter 5. Auto-conjugates Proof. The first function is a special case of the second one when A = 0. So, it suffices to show the second case. Let (x, x∗) ∈ Rn × Rn. Then we have f∗(x∗, x) = sup (y,y∗) { 〈y, x∗〉+ 〈y∗, x〉 − f(y, y∗) } = sup (y,y∗) { 〈y, x∗〉+ 〈y∗, x〉 − ϕ(y)− ϕ∗(−Ay + y∗) } = sup (y,y∗) { 〈y, x∗〉+ 〈Ay, x〉+ 〈−Ay + y∗, x〉 − ϕ(y)− ϕ∗(−Ay + y∗) } = sup y sup y∗ { 〈y, x∗〉+ 〈Ay, x〉+ 〈−Ay + y∗, x〉 − ϕ(y) − ϕ∗(−Ay + y∗) } = sup y { 〈y, x∗〉+ 〈y,−Ax〉 − ϕ(y) + sup y∗ { 〈−Ay + y∗, x〉 − ϕ∗(−Ay + y∗) }} = sup y { 〈y,−Ax+ x∗〉 − ϕ(y) } + ϕ∗∗(x) = ϕ∗(−Ax+ x∗) + ϕ(x) ( by Fact 5.1.3 ) = f(x, x∗).  Now we introduce some basic properties of auto-conjugate functions. Lemma 5.1.7 (Penot-Simons-Zǎlinescu ’05/[24],[29]) Let f : Rn×Rn → ]−∞,+∞] be auto-conjugate. Then f(x, x∗) ≥ 〈x, x∗〉, ∀(x, x∗) ∈ Rn × Rn. 55 Chapter 5. Auto-conjugates Proof. Let (x, x∗) ∈ Rn × Rn. Then f(x, x∗) = f∗(x∗, x) = sup (y,y∗) { 〈y, x∗〉+ 〈y∗, x〉 − f(y, y∗) } ≥〈x, x∗〉+ 〈x∗, x〉 − f(x, x∗). Thus 2f(x, x∗) ≥ 2〈x, x∗〉. That is , f(x, x∗) ≥ 〈x, x∗〉.  Proposition 5.1.8 Let f, g : Rn×Rn → ]−∞,+∞] be auto-conjugate such that f ≤ g. Then f = g. Proof. Let (x, x∗) ∈ Rn × Rn. By assumptions, f(x, x∗) ≤ g(x, x∗). On the other hand, by Proposition 5.1.4, f∗(x∗, x) ≥ g∗(x∗, x). Since f, g are auto-conjugate, f(x, x∗) ≥ g(x, x∗). Hence f(x, x∗) = g(x, x∗).  Proposition 5.1.9 Let f : Rn ×Rn → ]−∞,+∞]. Then f∗⊺ = f⊺∗. Proof. Let (x, x∗) ∈ Rn × Rn. Then f⊺∗(x∗, x) = sup (y,y∗) { 〈(y, y∗), (x∗, x)〉 − f⊺(y, y∗) } = sup (y,y∗) { 〈(y∗, y), (x, x∗)〉 − f(y∗, y) } = f∗(x, x∗) = f∗⊺(x∗, x).  Fact 5.1.10 (Fenchel-Young inequality) Let f : Rn → ]−∞,+∞] be proper 56 Chapter 5. Auto-conjugates lower semicontinuous and convex, and x, x∗ ∈ Rn. Then f(x) + f∗(x∗) ≥ 〈x, x∗〉, and equality holds, if and only if, x∗ ∈ ∂f(x). Proof. See [25, Theorem 23.5] and [25, page 105]. . Definition 5.1.11 Let f : Rn × Rn → ]−∞,+∞] . We define G(f) by x∗ ∈ G(f)x⇔ f(x, x∗) = 〈x, x∗〉. Here is an important property of auto-conjugates, which provides our main motivation for studying them. Fact 5.1.12 (Penot-Simons-Zǎlinescu ’05) Let f : Rn×Rn → ]−∞,+∞] be auto-conjugate. 
Then G(f) is maximal monotone. Proof. See [29, Theorem 1.4.(a)].  Definition 5.1.13 (Representation) Let f : Rn × Rn → ]−∞,+∞] and A : Rn ⇉ Rn. If A = G(f), we call f a representation for A. If f is auto- conjugate, we call f an auto-conjugate representation for A. Proposition 5.1.14 Let ϕ : Rn → ]−∞,+∞] be proper lower semicon- tinuous and convex, and A : Rn → Rn be linear and antisymmetric. Let f(x, x∗) := ϕ(x) + ϕ∗(−Ax+ x∗) (∀(x, x∗) ∈ Rn × Rn). Then f is an auto- conjugate representation for ∂ϕ+A. 57 Chapter 5. Auto-conjugates Proof. By Example 5.1.6, f is auto-conjugate. Then we have 〈x, x∗〉 = f(x, x∗) ⇔ 〈x,−Ax+ x∗〉 = ϕ(x) + ϕ∗(−Ax+ x∗) ⇔ x∗ −Ax ∈ ∂ϕ(x) (by Fact 5.1.10) ⇔ (x, x∗) ∈ gra (∂ϕ+A). Hence f is an auto-conjugate representation for ∂ϕ+A.  Definition 5.1.15 Let f, g : Rn × Rn → ]−∞,+∞] . We define (f2g)(x, x ∗) = inf y∗ { f(x, x∗ − y∗) + g(x, y∗) } , ∀(x, x∗) ∈ Rn × Rn. Definition 5.1.16 Let f, g : Rn → ]−∞,+∞] . We define (f ⊕ g)(x, x∗) = f(x) + g(x∗), ∀(x, x∗) ∈ Rn × Rn. Definition 5.1.17 We define π1 : Rn × Rn → Rn : (x, y) 7→ x. Fact 5.1.18 Let f, g : Rn × Rn → ]−∞,+∞] be proper lower semicontinu- ous and convex. Set ϕ = f2g. Assume ϕ(x, x∗) > −∞, ∀(x, x∗) ∈ Rn × Rn, 58 Chapter 5. Auto-conjugates and ⋃ λ>0 λ [ π1 dom f − π1 dom g ] , is a linear subspace of Rn. Then ϕ∗(x∗, x) = min y∗ { f∗(x∗ − y∗, x) + g∗(y∗, x) } , ∀(x, x∗) ∈ Rn × Rn. Proof. See [29, Theorem 4.2].  Proposition 5.1.19 Let f, g : Rn × Rn → ]−∞,+∞] be auto-conjugate such that (π1 dom f − π1 dom g) is a linear subspace of Rn. Suppose M = f2g. Then M(x, x∗) = min y∗ { f(x, x∗ − y∗) + g(x, y∗) } , ∀(x, x∗) ∈ Rn × Rn. and M is an auto-conjugate representation for G(f) +G(g). Proof. By Lemma 5.1.7, M(x, x∗) ≥ inf y∗ { 〈x, y∗〉+ 〈x, x∗ − y∗〉 } = 〈x, x∗〉, ∀(x, x∗) ∈ Rn × Rn. (5.1) Since (π1 dom f−π1 dom g) is a linear subspace, ⋃ λ>0 λ [ π1 dom f−π1 dom g ] = (π1 dom f − π1 dom g) is a linear subspace. Let (x, x ∗) ∈ Rn × Rn. By 59 Chapter 5. Auto-conjugates Fact 5.1.18, we have M∗(x∗, x) = min y∗ { f∗(x∗ − y∗, x) + g∗(y∗, x) } = min y∗ { f(x, x∗ − y∗) + g(x, y∗) } =M(x, x∗). Hence M(x, x∗) = min y∗ { f(x, x∗ − y∗) + g(x, y∗) } , ∀(x, x∗) ∈ Rn × Rn. (5.2) and M is auto-conjugate. In the following we show thatM is a representation for G(f)+G(g). Suppose (x, x∗) satisfies M(x, x∗) = 〈x, x∗〉. For every y∗ ∈ Rn, since f, g are auto-conjugate, by Fact 5.1.7 we have f(x, x∗ − y∗) ≥ 〈x, x∗ − y∗〉, g(x, y∗) ≥ 〈x, y∗〉, and M(x, x∗) ≥ 〈x, x∗〉. (5.3) 60 Chapter 5. Auto-conjugates Then by (5.2) and (5.3), (x, x∗) ∈ graG(M) ⇔M(x, x∗) = 〈x, x∗〉 ⇔ ∃s∗ such that 〈x, x∗〉 =M(x, x∗) = f(x, x∗ − s∗) + g(x, s∗) ⇔ ∃s∗ such that 0 = f(x, x∗ − s∗)− 〈x, x∗ − s∗〉+ g(x, s∗)− 〈x, s∗〉 ⇔ ∃s∗ such that 〈x, x∗ − s∗〉 = f(x, x∗ − s∗), 〈x, s∗〉 = g(x, s∗) ⇔ ∃s∗ such that (x, x∗ − s∗) ∈ graG(f), (x, s∗) ∈ graG(g) ⇔ x∗ ∈ ( G(f) +G(g))x.  Now this raises the following question: Given a maximal monotone op- erator A, can we find an auto-conjugate representation for A? Before answering this question, we introduce some definitions. 5.2 The Fitzpatrick function and the proximal average Definition 5.2.1 (Fitzpatrick function ’88) Let A : X ⇉ X. The Fitz- patrick function of A is FA : (x, x ∗) 7→ sup (y,y∗)∈graA 〈x, y∗〉+ 〈y, x∗〉 − 〈y, y∗〉. Fact 5.2.2 Let A : Rn ⇉ Rn be monotone such that graA is nonempty. Then FA is proper lower semicontinuous and convex. 61 Chapter 5. Auto-conjugates Proof. See [8, Fact 1.2].  Fact 5.2.3 Let A : Rn → Rn be linear and monotone. Then q∗A(x) = ιranA+(x) + q(A+)†(x), ∀x ∈ R n. Proof. See [25, page 108] and Corollary 2.2.18.  
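Fact 5.2.3 lends itself to a quick numerical check. In the sketch below (assuming NumPy; the matrix and the test point are illustrative, and the defining supremum of q∗A is only approximated by random sampling), the closed form q_{(A_+)†}(x) agrees with the sampled supremum and with the value at the exact maximizer y = (A_+)†x.

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [-1.0, 1.0]])                  # monotone: A_+ = diag(2, 1) is positive definite
    A_plus = (A + A.T) / 2
    A_plus_dag = np.linalg.pinv(A_plus)
    q = lambda M, v: 0.5 * v @ (M @ v)

    x = A_plus @ np.array([1.0, -2.0])           # a point of ran A_+
    closed_form = q(A_plus_dag, x)               # claimed value of q_A^*(x) by Fact 5.2.3

    # crude sampling of the supremum  q_A^*(x) = sup_y  <x, y> - q_A(y)
    rng = np.random.default_rng(0)
    Y = rng.normal(scale=5.0, size=(100000, 2))
    sampled_sup = np.max(Y @ x - 0.5 * np.sum((Y @ A) * Y, axis=1))

    y_max = A_plus_dag @ x                       # the exact maximizer solves A_+ y = x
    print(closed_form, sampled_sup, x @ y_max - q(A, y_max))   # all approximately 3.0
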
Fact 5.2.4 Let A : Rn → Rn be linear and monotone. Then FA(x, x ∗) = ιranA+(x ∗ +A∗x) + 12q(A+)†(x ∗ +A∗x), ∀(x, x∗) ∈ Rn × Rn. Proof. Let (x, x∗) ∈ Rn × Rn. By [4, Theorem 2.3], FA(x, x ∗) = 2q∗A+( 1 2x ∗ + 12A ∗x) = ιranA+( 1 2x ∗ + 12A ∗x) + 2q(A+)†( 1 2x ∗ + 12A ∗x) (by Fact 5.2.3) = ιranA+(x ∗ +A∗x) + 12q(A+)†(x ∗ +A∗x).  Fact 5.2.5 Let A : Rn → Rn be linear and monotone. Then F ∗A(x ∗, x) = ιgraA(x, x∗) + 〈x,Ax〉, ∀(x, x∗) ∈ Rn × Rn. Proof. See [4, Theorem 2.3].  62 Chapter 5. Auto-conjugates Proposition 5.2.6 Let A : Rn → Rn be linear and monotone. Then A+k Id is invertible, for every k > 0. Proof. Let x satisfy that (A+ k Id)x = 0. Then we have Ax = −kx. By the monotonicity of A, we have k‖x‖2 = 〈x, kx〉 = 〈x, −Ax〉 = −〈x, Ax〉 ≤ 0. Then x = 0. Hence A+ k Id is invertible.  Definition 5.2.7 (Proximal average) Let f0, f1 : Rn×Rn → ]−∞,+∞] be proper lower semicontinuous and convex. We define P (f0, f1), the prox- imal average of f0 and f1, by P (f0, f1)(x, x ∗) = −12‖(x, x ∗)‖2 + inf (y1,y∗1)+(y2,y ∗ 2)=(x,x ∗) { 1 2f0(2y1, 2y ∗ 1) + 1 2f1 ( 2y2, 2y ∗ 2 ) + ‖(y1, y ∗ 1)‖ 2 + ‖(y2, y ∗ 2)‖ 2 } , ∀(x, x∗) ∈ Rn × Rn. Remark 5.2.8 Let f0, f1 : Rn×Rn → ]−∞,+∞] be proper lower semicon- tinuous and convex. Then P (f⊺0 , f ⊺ 1 ) = ( P (f0, f1) ) ⊺ . Fact 5.2.9 Let f0 and f1 : Rn → ]−∞,+∞] be proper lower semicontinuous 63 Chapter 5. Auto-conjugates and convex. Then P (f0, f1)(x, x ∗) = inf (y,y∗) { 1 2f0(x+ y, x ∗ + y∗) + 12f1(x− y, x ∗ − y∗) + 12‖(y, y ∗)‖2 } , ∀(x, x∗) ∈ Rn × Rn. Proof. Let (x, x∗) ∈ Rn × Rn. Then P (f0, f1)(x, x ∗) = −12‖(x, x ∗)‖2 + inf (y1,y∗1)+(y2,y ∗ 2)=(x,x ∗) { 1 2f0(2y1, 2y ∗ 1) + 1 2f1 ( 2y2, 2y ∗ 2 ) + ‖(y1, y ∗ 1)‖ 2 + ‖(y2, y ∗ 2)‖ 2 } = −12‖(x, x ∗)‖2 + inf (y,y∗) { 1 2f0 ( 2x+y2 , 2 x∗+y∗ 2 ) + 12f1 ( 2x−y2 , 2 x∗−y∗ 2 ) + ∥∥(x+y2 , x∗+y∗2 )∥∥2 + ∥∥(x−y2 , x∗−y∗2 )∥∥2} = inf (y,y∗) { 1 2f0(x+ y, x ∗ + y∗) + 12f1(x− y, x ∗ − y∗) + 12‖(y, y ∗)‖2 } .  Definition 5.2.10 Let f : Rn×Rn → ]−∞,+∞] be proper lower semicon- tinuous and convex and hf define by hf (x, x ∗) = inf { 1 2f(x, 2x ∗ 1)+ 1 2f ∗(2x∗2, x) | x ∗ = x∗1+x ∗ 2 } , ∀(x, x∗) ∈ Rn×Rn. Now we begin to answer the question above. Fact 5.2.11 (Bauschke-Wang ’07) Let A : X ⇉ X be maximal mono- 64 Chapter 5. Auto-conjugates tone. Then P (FA, F ∗ A ⊺) is an auto-conjugate representation for A. Proof. See [9, Theorem 5.7].  Fact 5.2.12 (Penot-Zǎlinescu ’05) Let A : X ⇉ X be maximal mono- tone such that aff(domA) is closed. Then hFA is an auto-conjugate repre- sentation for A. Proof. See [24, Proposition 4.2].  5.3 The second main result Our main goal is to find a formula for P (FA, F ∗ A ⊺) associated with a linear and monotone operator A. Until now, there was no explicit formula for that. Theorem 5.3.1 Let A : Rn → Rn be linear and monotone. Then P (FA, F ∗ A ⊺)(x, x∗) = ιranA+(x ∗ −Ax) + 〈x, x∗〉+ q(A+)†(x ∗ −Ax), ∀(x, x∗) ∈ Rn × Rn. 65 Chapter 5. Auto-conjugates Proof. Let (x, x∗) ∈ Rn × Rn. By Fact 5.2.2 and Fact 5.2.9, we have P (FA, F ∗ A ⊺)(x, x∗) = inf (y,y∗) { 1 2FA(x+ y, x ∗ + y∗) + 12F ∗ A ⊺(x− y, x∗ − y∗) (5.4) + 12‖(y, y ∗)‖2 } = inf (y,y∗) { 1 2FA(x+ y, x ∗ + y∗) + ιgraA(x− y, x∗ − y∗) (5.5) + 12〈x− y,A(x− y)〉+ 1 2‖(y, y ∗)‖2 } = inf y { 1 2FA(x+ y, 2x ∗ −A(x− y)) + 12〈x− y,A(x− y)〉 (5.6) + 12‖(y, x ∗ −A(x− y))‖2 } = inf y { ιranA+ ( 2x∗ −A(x− y) +A∗x+A∗y ) (5.7) + 14q(A+)† ( 2x∗ −A(x− y) +A∗x+A∗y ) + 12〈x− y,A(x− y)〉+ 1 2‖y‖ 2 + 12‖x ∗ −A(x− y)‖2 } , in which, (5.5) holds by Fact 5.2.5, (5.6) by y∗ = x∗ − A(x − y), and (5.7) by Fact 5.2.4. 
Since 2x∗ −A(x− y) +A∗x+A∗y = 2x∗ − 2Ax+Ax+Ay +A∗x+A∗y = 2x∗ − 2Ax+ (A+A∗)(x+ y) = 2x∗ − 2Ax+ 2A+(x+ y), (5.8) 66 Chapter 5. Auto-conjugates Thus 2x∗ −A(x− y) +A∗x+A∗y ∈ ranA+ ⇔ x∗ −Ax ∈ ranA+. Then ιranA+(2x ∗ −A(x− y) +A∗x+A∗y) = ιranA+(x ∗ −Ax). (5.9) We consider two cases. Case 1: x∗ −Ax /∈ ranA+. By (5.7) and (5.9), P (FA, F ∗A ⊺)(x, x∗) =∞. Case 2: x∗ − Ax ∈ ranA+. By Proposition 2.2.15 applied to A+ with x replaced by x∗ −Ax and y replaced by x+ y, we have 1 4q(A+)†(2x ∗ −A(x− y) +A∗x+A∗y) = 14q(A+)†(2x ∗ − 2Ax+ 2A+(x+ y)) (by (5.8)) = 14 · 2 2q(A+)†(x ∗ −Ax+A+(x+ y)) = q(A+)†(x ∗ −Ax+A+(x+ y)) = q(A+)†(x ∗ −Ax) + 〈PranA+(x ∗ −Ax), x+ y〉+ qA+(x+ y) = q(A+)†(x ∗ −Ax) + 〈x+ y, x∗ −Ax〉+ 12〈x+ y, A(x+ y)〉. (5.10) By (5.7), (5.9) and (5.10), we have P (FA, F ∗ A ⊺)(x, x∗) = q(A+)†(x ∗ −Ax) + inf y { 〈x+ y, x∗ −Ax〉+ 12〈x+ y, A(x+ y)〉 + 12〈x− y,A(x− y)〉+ 1 2‖y‖ 2 + 12‖x ∗ −A(x− y)‖2 } . 67 Chapter 5. Auto-conjugates Since 1 2〈x+ y, A(x+ y)〉+ 1 2 〈x− y,A(x− y)〉 = 〈x, Ax〉+ 〈y, Ay〉, we have 〈x+ y, x∗ −Ax〉+ 12〈x+ y, A(x+ y)〉+ 1 2 〈x− y,A(x− y)〉 = 〈x, x∗〉 − 〈x, Ax〉+ 〈y, x∗〉 − 〈y, Ax〉+ 〈x, Ax〉+ 〈y, Ay〉 = 〈x, x∗〉+ 〈y, Ay〉+ 〈y, x∗〉 − 〈y, Ax〉. Then P (FA, F ∗ A ⊺)(x, x∗) = q(A+)†(x ∗ −Ax) + 〈x, x∗〉+ inf y { 〈y, Ay〉+ 〈y, x∗〉 − 〈y, Ax〉 + 12‖y‖ 2 + 12‖x ∗ −Ax+Ay‖2 } = q(A+)†(x ∗ −Ax) + 〈x, x∗〉+ inf y { 〈y, Ay〉+ 〈y, x∗〉 − 〈y, Ax〉 + 12‖y‖ 2 + 12‖x ∗ −Ax‖2 + 12‖Ay‖ 2 + 〈Ay, x∗ −Ax〉 } . 68 Chapter 5. Auto-conjugates Since 〈y, Ay〉+ 12‖y‖ 2 + 12‖Ay‖ 2 = 12‖y +Ay‖ 2, P (FA, F ∗ A ⊺)(x, x∗) = q(A+)†(x ∗ −Ax) + 〈x, x∗〉+ 12‖x ∗ −Ax‖2 + inf y { 1 2‖y +Ay‖ 2 + 〈y, x∗ −Ax+A∗(x∗ −Ax)〉 } = q(A+)†(x ∗ −Ax) + 〈x, x∗〉+ 12‖x ∗ −Ax‖2 + inf y { 1 2‖y +Ay‖ 2 + 〈y, (Id+A∗)(x∗ −Ax)〉 } = q(A+)†(x ∗ −Ax) + 〈x, x∗〉+ 12‖x ∗ −Ax‖2 − sup y { 〈y, (Id+A∗)(Ax− x∗)〉 − q(Id +A∗)(Id +A)(y) } = q(A+)†(x ∗ −Ax) + 〈x, x∗〉+ 12‖x ∗ −Ax‖2 − q∗(Id +A∗)(Id +A) ( (Id+A∗)(Ax− x∗) ) = q(A+)†(x ∗ −Ax) + 〈x, x∗〉+ 12‖x ∗ −Ax‖2 − q( (Id +A∗)(Id +A) )†((Id+A∗)(Ax − x∗)) (by Proposition 5.2.3 and Proposition 5.2.6) = q(A+)†(x ∗ −Ax) + 〈x, x∗〉+ 12‖x ∗ −Ax‖2 − q(Id +A)−1(Id +A∗)−1 ( (Id+A∗)(Ax− x∗) ) (by Remark 2.1.35) = q(A+)†(x ∗ −Ax) + 〈x, x∗〉+ 12‖x ∗ −Ax‖2 − 12‖x ∗ −Ax‖2 = 〈x, x∗〉+ q(A+)†(x ∗ −Ax). 69 Chapter 5. Auto-conjugates Combining the results above, we have P (FA, F ∗ A ⊺)(x, x∗) = ιranA+(x ∗ −Ax) + 〈x, x∗〉+ q(A+)†(x ∗ −Ax), ∀(x, x∗) ∈ Rn × Rn.  Proposition 5.3.2 Let A : Rn → Rn be linear and monotone. Then π1 [domP (FA, F ∗ A ⊺)] = Rn. Proof. By Theorem 5.3.1, P (FA, F ∗ A ⊺)(x,Ax) = 〈x,Ax〉 <∞, ∀x ∈ Rn. Thus (x,Ax) ∈ domP (FA, F ∗ A ⊺),∀x ∈ Rn. Hence π1 [domP (FA, F ∗ A ⊺)] = Rn.  Corollary 5.3.3 Let A : Rn → Rn be linear, symmetric and monotone. Then P (FA, F ∗ A ⊺) = qA ⊕ (ιranA + qA†). 70 Chapter 5. Auto-conjugates Proof. Let (x, x∗) ∈ Rn × Rn. By Theorem 5.3.1, we have P (FA, F ∗ A ⊺)(x, x∗) = ιranA(x∗ −Ax) + 〈x, x∗〉+ qA†(x ∗ −Ax) = ιranA(x ∗) + 〈x, x∗〉+ qA†(x ∗ −Ax). Now suppose x∗ ∈ ranA. By Proposition 2.2.15, we have qA†(x ∗ −Ax) = qA(x) + qA†(x ∗)− 〈x, PranAx∗〉 = qA(x) + qA†(x ∗)− 〈x, x∗〉. Thus P (FA, F ∗ A ⊺)(x, x∗) = qA(x) + qA†(x ∗). Combining the conclusions above, P (FA, F ∗ A ⊺)(x, x∗) = ιranA(x∗) + qA(x) + qA†(x ∗) = ( qA ⊕ (ιranA + qA†) ) (x, x∗), ∀(x, x∗) ∈ Rn × Rn.  Corollary 5.3.4 Let A : Rn → Rn be linear and antisymmetric. Then P (FA, F ∗ A ⊺) = ιgraA. Proof. Follows directly by Theorem 5.3.1.  71 Chapter 5. Auto-conjugates Corollary 5.3.5 Let A : Rn → Rn be linear and monotone. Then ( ∀(x, x∗) ∈ Rn × Rn ) P (FA, F ∗ A ⊺)(x, x∗) ≥ 〈x, x∗〉 P (FA, F ∗ A ⊺)(x,Ax) = 〈x,Ax〉. Proof. Apply Theorem 5.3.1 and Corollary 2.2.12.  
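To complement the proof of Theorem 5.3.1, here is a small numerical sanity check (not part of the thesis, assuming NumPy/SciPy). The proximal average is evaluated directly from Fact 5.2.9, using the closed forms of Fact 5.2.4 and Fact 5.2.5 and eliminating y∗ through the graph constraint exactly as in (5.6); the resulting infimum over y is then compared with the closed form of Theorem 5.3.1. The random test matrix and the choice of a pair (x, x∗) with x∗ − Ax ∈ ranA+, which keeps every term finite, are illustrative.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    n = 4
    M, K = rng.standard_normal((n, 2)), rng.standard_normal((n, n))
    A = M @ M.T + (K - K.T)                    # linear monotone test matrix, A+ singular
    Ap = (A + A.T) / 2
    Ap_dag = np.linalg.pinv(Ap)

    def q(S, u):                               # q_S(u) = <u, S u>/2
        return 0.5 * u @ (S @ u)

    def F_A(u, ustar):                         # Fact 5.2.4, assuming ustar + A^T u in ran A+
        return 0.5 * q(Ap_dag, ustar + A.T @ u)

    x = rng.standard_normal(n)
    xstar = A @ x + Ap @ rng.standard_normal(n)    # guarantees x* - Ax in ran A+

    def objective(y):                          # integrand of Fact 5.2.9 after setting
        ystar = xstar - A @ (x - y)            # y* = x* - A(x - y), cf. (5.6) and Fact 5.2.5
        return (0.5 * F_A(x + y, xstar + ystar)
                + 0.5 * (x - y) @ (A @ (x - y))
                + 0.5 * (y @ y + ystar @ ystar))

    numeric = minimize(objective, np.zeros(n), method="BFGS").fun
    closed = x @ xstar + q(Ap_dag, xstar - A @ x)  # right-hand side of Theorem 5.3.1
    print(numeric, closed)                     # the two values agree to solver tolerance

The same script applies verbatim to the rotation matrix considered in Section 5.4 below, where A+ = cos θ Id is invertible and the closed form simplifies further.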
For a linear and monotone operator A, Fact 5.2.11 shows P (FA, F ∗ A ⊺) is auto-conjugate. Now we give a new proof. Proposition 5.3.6 Let A : Rn → Rn be linear and monotone. Then P (FA, F ∗A ⊺) is auto-conjugate. Proof. Let (x, x∗) ∈ Rn × Rn. By Theorem 5.3.1, we have P (FA, F ∗ A ⊺)∗(x∗, x) = sup (y, y∗) { 〈(x∗, x), (y, y∗)〉 − ιranA+(y ∗ −Ay)− 〈y, y∗〉 (5.11) − q(A+)†(y ∗ −Ay) } = sup (y, w) { 〈(y, A+w +Ay), (x ∗, x)〉 − ιranA+(A+w) (5.12) − 〈y, A+w +Ay〉 − q(A+)†(A+w) } = sup (y, w) { 〈y, x∗〉+ 〈A+w +Ay, x〉 − 〈y, A+w +Ay〉 − qA+(w) } (5.13) = sup y sup w { 〈y, x∗〉+ 〈A+w +Ay, x〉 − 〈y, A+w +Ay〉 − qA+(w) } = sup y { 〈y, x∗〉+ 〈Ay, x〉 − 〈y, Ay〉+ sup w { 〈w,A+x〉 − 〈A+y, w〉 (5.14) − qA+(w) }} . 72 Chapter 5. Auto-conjugates (5.12) holds by y∗ −Ay = A+w and (5.13) by Corollary 2.2.16. By (5.14), P (FA, F ∗ A ⊺)∗(x∗, x) = sup y { 〈y,A∗x+ x∗〉 − 〈y, Ay〉+ q∗A+(A+x−A+y) } = sup y { 〈y,A∗x+ x∗〉 − 〈y, Ay〉+ q(A+)†(A+x−A+y) } (5.15) = sup y { 〈y,A∗x+ x∗〉 − 〈y, Ay〉+ qA+(x− y) } (5.16) = sup y { 〈y,A∗x+ x∗ −A+x〉 − qA(y) + qA(x) } (5.17) = q∗A(A ∗x+ x∗ −A+x) + qA(x) = ιranA+(A ∗x+ x∗ −A+x) + q(A+)†(A ∗x+ x∗ −A+x) + qA(x) (5.18) (5.15) and (5.18) holds by Proposition 5.2.3, (5.16) by Corollary 2.2.16 and (5.17) by Remark 2.1.12. Note A∗x+ x∗ −A+x = x∗ −Ax+Ax+A∗x−A+x = x∗ −Ax+A+x. (5.19) Thus ιranA+(A ∗x+ x∗ −A+x) = ιranA+(x ∗ −Ax). (5.20) If x∗ −Ax /∈ ranA+. By (5.18) and (5.20), P (FA, F ∗A ⊺)∗(x∗, x) =∞. Now suppose x∗ − Ax ∈ ranA+. By Proposition 2.2.15 applied to A+ with 73 Chapter 5. Auto-conjugates x replaced by x∗ −Ax and y replaced by x, q(A+)†(A ∗x+ x∗ −A+x) = q(A+)†(x ∗ −Ax+A+x) (by (5.19)) = q(A+)†(x ∗ −Ax) + 〈 PranA+(x ∗ −Ax), x 〉 + qA+(x) = q(A+)†(x ∗ −Ax) + 〈x∗ −Ax, x〉+ qA(x) (by Remark 2.1.12) = q(A+)†(x ∗ −Ax) + 〈x∗, x〉 − qA(x). Then by (5.18) and (5.20), P (FA, F ∗ A ⊺)∗(x∗, x) = q(A+)†(x ∗ −Ax) + 〈x, x∗〉. Combining the results above, we have P (FA, F ∗ A ⊺)∗(x∗, x) = ιranA+(x ∗ −Ax) + q(A+)†(x ∗ −Ax) + 〈x, x∗〉 = P (FA, F ∗ A ⊺)(x, x∗) (by Theorem 5.3.1), ∀(x, x∗) ∈ Rn ×Rn.  Proposition 5.3.7 Let B : Rn → Rn be linear, symmetric and monotone. Let x ∈ Rn. Then 〈x,Bx〉 = 0⇔ Bx = 0. 74 Chapter 5. Auto-conjugates Proof. “⇐” Clear. “⇒” Take y ∈ Rn and α > 0. We have 0 ≤ 〈αy + x,B(αy + x)〉 (5.21) = 〈x,Bx〉+ 2α〈y,Bx〉+ α2〈y,By〉 = 2α〈y,Bx〉 + α2〈y,By〉, (by 〈x,Bx〉 = 0) ⇒ 0 ≤ 2〈y,Bx〉+ α〈y,By〉 (5.22) ⇒ 0 ≤ 〈y,Bx〉, ∀y ∈ Rn (5.23) ⇒ Bx = 0, in which, (5.21) holds by monotonicity of B, (5.22) by multiplying 1 α in both sides, and (5.23) by letting α→ 0+.  Corollary 5.3.8 Let B : Rn → Rn be linear, symmetric and monotone. Then kerB = {x | qB(x) = 0}. Corollary 5.3.9 Let B : Rn → Rn be linear, symmetric and monotone. Let x ∈ Rn. Then (ιranB + qB†)(x) = 0, if and only if, x = 0. Proof. “⇐” Clear. “⇒” By assumption, we have x ∈ ranB (5.24) 0 = 〈x, B†x〉. (5.25) 75 Chapter 5. Auto-conjugates By Fact 2.2.2, Fact 2.2.4 and Corollary 2.2.12, B† is linear, symmetric and monotone. By (5.25) and Proposition 5.3.7 applied to B†, B†x = 0. Then by (5.24) and Fact 2.2.11, x = PranBx = BB †x = 0.  Corollary 5.3.10 Let A : Rn → Rn be linear and monotone. (∀(x, x∗) ∈ Rn × Rn) P (FA, F ∗A ⊺)(x, x∗) = 〈x, x∗〉 ⇔ (x, x∗) ∈ graA. Proof. By Theorem 5.3.1 and Corollary 5.3.9.  Corollary 5.3.11 Let A : Rn → Rn be linear and monotone. Then P (FA, F ∗ A ⊺) is an auto-conjugate representation for A. Proof. By Proposition 5.3.6 and Corollary 5.3.10.  For a linear and monotone operator A, what is hFA? Proposition 5.3.12 Let A : Rn → Rn be linear and monotone. Then hFA = P (FA, F ∗ A ⊺). 76 Chapter 5. Auto-conjugates Proof. Let (x, x∗) ∈ Rn × Rn. 
Then hFA(x, x ∗) = inf { 1 2FA(x, 2x ∗ 1) + 1 2F ∗ A(2x ∗ 2, x) | x ∗ = x∗1 + x ∗ 2 } = inf y∗ { 1 2FA(x, 2(x ∗ − y∗)) + 12F ∗ A(2y ∗, x) } = inf y∗ { 1 2FA(x, 2(x ∗ − y∗)) + ιgraA(x, 2y∗) + qA(x) } (5.26) = 12FA(x, 2x ∗ −Ax) + qA+(x) (by 2y ∗ = Ax and Remark 2.1.12) = ιranA+(2x ∗ −Ax+A∗x) + 14q(A+)†(2x ∗ −Ax+A∗x) + qA+(x), (5.27) where (5.26) holds by Fact 5.2.5, and (5.27) by Fact 5.2.4. Note that 2x∗ −Ax+A∗x = 2x∗ − 2Ax+ 2A+x. (5.28) Then 2x∗ −Ax+A∗x ∈ ranA+ ⇔ x∗ −Ax ∈ ranA+. Thus ιranA+(2x ∗ −Ax+A∗x) = ιranA+(x ∗ −Ax). (5.29) If x∗ −Ax /∈ ranA+, hFA(x, x ∗) =∞ by (5.27) and (5.29). Now suppose that x∗ − Ax ∈ ranA+. By Proposition 2.2.15 applied to A+ 77 Chapter 5. Auto-conjugates with x replaced by x∗ −Ax and y replaced by x, we have 1 4q(A+)†(2x ∗ −Ax+A∗x) = 14q(A+)†(2x ∗ − 2Ax+ 2A+x) (by (5.28)) = q(A+)†(x ∗ −Ax+A+x) = q(A+)†(x ∗ −Ax) + 〈x, PranA+(x ∗ −Ax)〉+ qA+(x) = q(A+)†(x ∗ −Ax) + 〈x, x∗ −Ax〉+ qA+(x) = q(A+)†(x ∗ −Ax) + 〈x, x∗〉 − qA+(x) (by Remark 2.1.12). Then by (5.27) and (5.29), hFA(x, x ∗) = 〈x, x∗〉+ q(A+)†(x ∗ −Ax). Combining the results above, hFA(x, x ∗) = ιranA+(x ∗ −Ax) + 〈x, x∗〉+ q(A+)†(x ∗ −Ax) = P (FA, F ∗ A ⊺)(x, x∗) (by Theorem 5.3.1), ∀(x, x∗) ∈ Rn × Rn.  Proposition 5.3.13 Let A : Rn ⇉ Rn be monotone such that graA is nonempty. Then hFA = hF ∗⊺A . 78 Chapter 5. Auto-conjugates Proof. Let (x, x∗) ∈ Rn × Rn. By Definition 5.2.10, we have hF ∗⊺ A (x, x∗) = inf { 1 2F ∗ A ⊺(x, 2x∗1) + 1 2(F ∗ A ⊺)∗(2x∗2, x) | x ∗ = x∗1 + x ∗ 2 } (5.30) = inf { 1 2F ∗ A(2x ∗ 1, x) + 1 2FA(x, 2x ∗ 2) | x ∗ = x∗1 + x ∗ 2 } (5.31) = inf { 1 2F ∗ A(2x ∗ 2, x) + 1 2(FA(x, 2x ∗ 1) | x ∗ = x∗1 + x ∗ 2 } (5.32) = hFA(x, x ∗), (5.33) where (5.31) holds by Proposition 5.1.9, Fact 5.2.2 and Fact 5.1.3.  Corollary 5.3.14 Let A : Rn → Rn be linear and monotone. Then hFA = hF ∗⊺ A = P (FA, F ∗ A ⊺). 5.4 An example In the following we give an example of Theorem 5.3.1. Example 5.4.1 Let A =  cos θ − sin θ sin θ cos θ   be the rotation of angle θ ∈ [0, pi2 [. Then A ∗ = A−1 and P (FA, F ∗ A ⊺)(x, x∗) = 12 cos θ‖x ∗ −Ax‖2 + 〈x, x∗〉 = 12 cos θ‖x ∗ − sin θRx‖2 + cos θ2 ‖x‖ 2, 79 Chapter 5. Auto-conjugates where R =  0 −1 1 0   . Proof. By assumptions, we have A+ = cos θ Id AA ∗ = Id R∗R = Id . (5.34) By Theorem 5.3.1 and Remark 2.1.35, we have P (FA, F ∗ A ⊺)(x, x∗) = 12 cos θ‖x ∗ −Ax‖2 + 〈x, x∗〉. (5.35) By (5.35), (5.34), and A = cos θ Id+ sin θR, we have P (FA, F ∗ A ⊺)(x, x∗) = 12 cos θ ( ‖x∗‖2 + 〈A∗Ax, x〉 − 2〈x∗, Ax〉 ) + 〈x, x∗〉 = 12 cos θ ( ‖x∗‖2 + ‖x‖2 − 2〈x∗, cos θx+ sin θRx〉 ) + 〈x, x∗〉 = 12 cos θ ( ‖x∗‖2 + ‖x‖2 − 2 sin θ〈x∗, Rx〉 ) = 12 cos θ ( ‖x∗‖2 + sin2 θ‖x‖2 − 2 sin θ〈x∗, Rx〉+ cos2 θ‖x‖2 ) = 12 cos θ‖x ∗ − sin θRx‖2 + cos θ2 ‖x‖ 2.  Fact 5.4.2 Let A : Rn ⇉ Rn be monotone such that graA is nonempty. Then F⊺ A−1 = FA. Proof. See [8, Fact 1.2].  80 Chapter 5. Auto-conjugates Corollary 5.4.3 Let A : Rn ⇉ Rn be monotone such that graA is nonempty. Then P (FA−1 , F ∗⊺ A−1 ) = ( P (FA, F ∗ A ⊺) ) ⊺ . Proof. By assumptions and Proposition 3.3.2, A−1 is monotone and graA−1 is nonempty. Then by Proposition 5.1.9, Fact 5.4.2 and Remark 5.2.8, P (FA−1 , F ∗⊺ A−1 ) = P (F⊺A, F ⊺∗ A−1 ) = P (F⊺A, F ∗ A) = ( P (FA, F ∗ A ⊺) ) ⊺ .  Theorem 5.4.4 Let A : Rn → Rn be linear, monotone and invertible. Let (x, x∗) ∈ Rn × Rn. Then 0 ∈ ∂P (FA,F ∗ A ⊺)(x,x∗) ∂x∗ ⇔ x∗ = A◦x (5.36) 0 ∈ ∂P (FA,F ∗ A ⊺)(x,x∗) ∂x ⇔ x = (A−1)◦x∗. (5.37) Proof. By Fact 2.2.2, Fact 2.2.4, Fact 2.1.13 and Corollary 2.2.12, (A+) † is linear, symmetric and monotone. 81 Chapter 5. 
Auto-conjugates Now (5.36): Follows from Theorem 5.3.1, Fact 2.1.18 and Fact 2.1.30, 0 ∈ ∂P (FA,F ∗ A ⊺)(x,x∗) ∂x∗ ⇔ 0 ∈ ∂ιranA+(x ∗ −Ax) + x+ (A+)†(x∗ −Ax), x∗ −Ax ∈ ranA+ ⇔ 0 ∈ kerA+ + x+ (A+) †(x∗ −Ax), x∗ −Ax ∈ ranA+ (by Fact 2.1.32) ⇔ 0 ∈ x+ (A+) −1(x∗ −Ax), x∗ −Ax ∈ ranA+ (by Corollary 2.2.7) ⇔ −x ∈ (A+) −1(x∗ −Ax), x∗ −Ax ∈ ranA+ ⇔ x∗ −Ax ∈ −A+x ⇔ x∗ ∈ Ax−A+x = A◦x. Then (5.37) Follows from Corollary 5.4.3 and (5.36).  Proposition 5.4.5 Let A : Rn → Rn be linear and monotone, and h(x∗) := P (FA, F ∗ A ⊺)(0, x∗) (∀x∗ ∈ Rn). Then ∂h = (A+)−1. Proof. By Theorem 5.3.1, h(x∗) = ιranA+(x ∗) + q(A+)†(x ∗), ∀x∗ ∈ Rn. By Fact 2.2.2, Fact 2.2.4, Fact 2.1.13 and Corollary 2.2.12, (A+) † is linear, symmetric and monotone. Thus by Fact 2.1.30 and Fact 2.1.18, ∂h(x∗) =   ∂ιranA+(x ∗) + (A+)†x∗, if x∗ ∈ ranA+; ∅, otherwise. 82 Chapter 5. Auto-conjugates Now suppose x∗ ∈ ranA+. By Fact 2.1.32 and Corollary 2.2.7, ∂h(x∗) = kerA+ + (A+)†x∗ = (A+)−1x∗. Next suppose x∗ /∈ ranA+. Clearly, (A+)−1x∗ = ∅ = ∂h(x∗). In all cases, ∂h = (A+) −1.  Remark 5.4.6 In general, let us consider g(x, x∗) := f(x)+f∗(x∗) (∀(x, x∗) ∈ Rn × Rn), where f : Rn → Rn is proper lower semicontinuous and convex. Let h(x∗) = g(0, x∗) = f(0) + f∗(x∗) (∀x∗ ∈ Rn). Thus by [26, Proposition 11.3], ∂h(x∗) = ∂f∗(x∗) = (∂f)−1(x∗), ∀x∗ ∈ Rn. 5.5 Relationship among auto-conjugate representations Let A : Rn → Rn be linear and monotone. Suppose f(x, x∗) = qA(x) + q∗A(−A◦x+x ∗) (∀(x, x∗) ∈ Rn×Rn). By Proposition 5.1.14 and Fact 2.1.18, f is an auto-conjugate representation for A++A◦ = A. By Corollary 5.3.11, P (FA, F ∗⊺ A ) is also an auto-conjugate representation for A. The next Propo- sition does that. Proposition 5.5.1 Let B : Rn → Rn be linear, symmetric and monotone, and A : Rn → Rn be linear and antisymmetric. Let f(x, x∗) = qB(x) + 83 Chapter 5. Auto-conjugates q∗B(−Ax+ x ∗) (∀(x, x∗) ∈ Rn × Rn). Then f = P (F(A+B), F ∗⊺ (A+B)). Proof. Let (x, x∗) ∈ Rn × Rn. By Fact 5.2.3, f(x, x∗) = qB(x) + ιranB(−Ax+ x∗) + qB†(−Ax+ x ∗). (5.38) By Theorem 5.3.1, we have P (F(A+B), F ∗⊺ (A+B))(x, x ∗) = ιranB ( x∗ − (A+B)x ) + 〈x, x∗〉+ qB† ( x∗ − (A+B)x ) = ιranB(x ∗ −Ax) + 〈x, x∗〉+ qB†(x ∗ −Ax−Bx) If x∗ −Ax /∈ ranB, P (F(A+B), F ∗⊺ (A+B))(x, x ∗) =∞. Now suppose x∗ − Ax ∈ ranB. By Proposition 2.2.15 applied to B with x replaced by x∗ −Ax and y replaced by −x, qB†(x ∗ −Ax−Bx) = 〈PranB(x ∗ −Ax), −x〉+ qB†(−Ax+ x ∗) + qB(−x) = 〈x∗ −Ax, −x〉+ qB†(−Ax+ x ∗) + qB(−x) = −〈x, x∗〉+ qB†(−Ax+ x ∗) + qB(x) (by 〈Ax, −x〉 = 0). 84 Chapter 5. Auto-conjugates Hence P (F(A+B), F ∗⊺ (A+B))(x, x ∗) = qB(x) + ιranB(−Ax+ x∗) + qB†(−Ax+ x ∗) = f(x, x∗) (by (5.38)), ∀(x, x∗) ∈ Rn × Rn.  Proposition 5.5.2 Let A,B : Rn → Rn be linear and monotone. Then P (F(A+B), F ∗⊺ (A+B) ) = P (FA, F ∗ A ⊺)2P (FB , F ∗ B ⊺). Proof. Let (x, x∗) ∈ Rn × Rn. By Theorem 5.3.1, P (FA, F ∗ A ⊺)2P (FB , F ∗ B ⊺)(x, x∗) = inf y∗ { P (FA, F ∗ A ⊺)(x, x∗ − y∗) + P (FB , F ∗B ⊺)(x, y∗) } = inf y∗ { ιranA+(x ∗ − y∗ −Ax) + 〈x, x∗ − y∗〉+ q(A+)†(x ∗ − y∗ −Ax) + 〈x, y∗〉+ ιranB+(y ∗ −Bx) + q(B+)†(y ∗ −Bx) } = 〈x, x∗〉+ inf y∗ { ιranA+(x ∗ − y∗ −Ax) + q(A+)†(x ∗ − y∗ −Ax) + ιranB+(y ∗ −Bx) + q(B+)†(y ∗ −Bx) } ≤ 〈x, x∗〉+ ιran(A++B+)(x ∗ −Ax−Bx) (5.39) + inf y∗ { ιranA+(x ∗ − y∗ −Ax) + q(A+)†(x ∗ − y∗ −Ax) + ιranB+(y ∗ −Bx) + q(B+)†(y ∗ −Bx) } . 85 Chapter 5. Auto-conjugates Now suppose x∗−Ax−Bx ∈ ran(A++B+). Let x∗−Ax−Bx = (A++B+)p and y∗0 := B+p+Bx. Thus x∗ − y∗0 −Ax = x ∗ −B+p−Bx−Ax = (A+ +B+)p−B+p = A+p, y∗0 −Bx = B+p. 
Then by (5.39), P (FA, F ∗ A ⊺)2P (FB , F ∗ B ⊺)(x, x∗) ≤ 〈x, x∗〉+ ιranA+(x ∗ − y∗0 −Ax) + q(A+)†(x ∗ − y∗0 −Ax) + ιranB+(y ∗ 0 −Bx) + q(B+)†(y ∗ 0 −Bx) = 〈x, x∗〉+ q(A+)†(A+p) + q(B+)†(B+p) = 〈x, x∗〉+ qA+(p) + qB+(p) (5.40) = 〈x, x∗〉+ q(A++B+)(p) = 〈x, x∗〉+ q(A++B+)†(x ∗ −Ax−Bx), (5.41) in which, (5.40) and (5.41) hold by Corollary 2.2.16. Combining the results above, P (FA, F ∗ A ⊺)2P (FB , F ∗ B ⊺)(x, x∗) ≤ ιran(A++B+)(x ∗ −Ax−Bx) + 〈x, x∗〉+ q(A++B+)†(x ∗ −Ax−Bx) = P (F(A+B), F ∗⊺ (A+B))(x, x ∗) (by Theorem 5.3.1), ∀(x, x∗) ∈ Rn × Rn. By Proposition 5.3.2, π1 [domP (FA, F ∗ A ⊺)] = π1 [domP (FB , F ∗ B ⊺)] = Rn. Then by Proposition 5.3.6 and Proposition 5.1.19, P (FA, F ∗ A ⊺)2P (FB , F ∗ B ⊺) 86 Chapter 5. Auto-conjugates is auto-conjugate. Thus by Proposition 5.3.6 and Proposition 5.1.8, P (FA, F ∗ A ⊺)2P (FB , F ∗ B ⊺) = P (F(A+B), F ∗⊺ (A+B)).  Lemma 5.5.3 Let A : Rn → Rn be linear, symmetric and monotone. Then P (FA, F ∗ A ⊺)(x,Ay) = P (FA, F ∗ A ⊺)(y,Ax), ∀(x, y) ∈ Rn × Rn. Proof. Let (x, y) ∈ Rn × Rn. By Corollary 5.3.3 and Corollary 2.2.16, P (FA, F ∗ A ⊺)(x,Ay) = ιranA(Ay) + qA(x) + qA†(Ay) = qA(x) + qA(y). On the other hand, P (FA, F ∗ A ⊺)(y,Ax) = ιranA(Ax) + qA(y) + qA†(Ax) = qA(x) + qA(y). Thus P (FA, F ∗ A ⊺)(x,Ay) = P (FA, F ∗ A ⊺)(y,Ax).  Proposition 5.5.4 Let A : Rn → Rn be linear, symmetric and monotone. Then f = P (FA, F ∗ A ⊺), if and only if, f is auto-conjugate satisfying f(x,Ay) = f(y,Ax) (∀(x, y) ∈ Rn × Rn) and f(0, 0) is finite. Proof. “⇒” By Proposition 5.3.6, Lemma 5.5.3 and Corollary 5.3.5. “⇐” Let (x, x∗) ∈ Rn × Rn. We prove in two steps. Step 1: We will verify that f(x, x∗) =∞, if x∗ /∈ ranA. Since Rn = ranA ⊕ kerA, x∗ = PranAx∗ + PkerAx∗. Since x∗ /∈ ranA, 87 Chapter 5. Auto-conjugates PkerAx ∗ 6= 0. Thus 〈PkerAx ∗, x∗〉 = 〈PkerAx∗, PranAx∗ + PkerAx∗〉 = ‖PkerAx∗‖2 > 0. (5.42) Thus by assumptions, f(kPkerAx ∗, 0) = f(kPkerAx∗, A0) = f(0, AkPkerAx∗) = f(0, 0), ∀k ∈ R. (5.43) Then by Fact 5.1.10, f(x, x∗) + f(0, 0) = f(x, x∗) + f(kPkerAx∗, 0) = f(x, x∗) + f∗(0, kPkerAx∗) ≥ 〈x∗, kPkerAx∗〉 → ∞, as k →∞. (by (5.42)) (5.44) Since f(0, 0) is finite, by (5.44) f(x, x∗) =∞. Step 2: Suppose that x∗ ∈ ranA. Let x∗ = Ap. By Fact 5.1.10, f(x,Ap) + f(p,Ax) = f(x,Ap) + f∗(Ax, p) ≥ 〈p,Ap〉+ 〈x,Ax〉 ⇒ 2f(x,Ap) ≥ 〈p,Ap〉+ 〈x,Ax〉 (by f(x,Ap) = f(p,Ax)) ⇒ f(x, x∗) ≥ qA(x) + qA(p) = qA(x) + qA†(x ∗), (5.45) in which, (5.45) by x∗ = Ap and Corollary 2.2.16. By conclusions above and Corollary 5.3.3, we have f(x, x∗) ≥ ιranA(x∗) + qA(x) + qA†(x ∗) = P (FA, F ∗A ⊺)(x, x∗), ∀(x, x∗). 88 Chapter 5. Auto-conjugates Then by Corollary 5.3.11 and Proposition 5.1.8 , we have f = P (FA, F ∗ A ⊺).  5.6 Nonuniqueness We now tackle the following question: Given a linear and monotone op- erator, are auto-conjugate representations for A unique? The answer is negative. We will give several different auto-conjugate representations for Id. By Corollary 5.3.3, we have P (FId, F ∗ Id ⊺) = 12‖ · ‖ 2 ⊕ 12‖ · ‖ 2. Proposition 5.6.1 Let j(x) = 12x 2, ∀x ∈ R. Assume g is such that g∗(−x) = g(x) ≥ 0, ∀x ∈ R. Then f(x, y) := j ( x+y√ 2 ) + g ( x−y√ 2 ) ( ∀(x, y) ∈ Rn × Rn ) is an auto-conjugate representation for Id . 89 Chapter 5. Auto-conjugates Proof. We first show that f is auto-conjugate. Let (x, y) ∈ R×R. 
Then we have f∗(y, x) = sup (v,w) { 〈v, y〉+ 〈w, x〉 − j(v+w√ 2 )− g(v−w√ 2 ) } = sup (v,w) { 〈v+w2 , x+ y〉 − 〈 v−w 2 , x− y〉 − j( v+w√ 2 )− g(v−w√ 2 ) } (5.46) = sup (v,w) { 〈v+w√ 2 , x+y√ 2 〉 − 〈v−w√ 2 , x−y√ 2 〉 − j(v+w√ 2 )− g(v−w√ 2 ) } = sup (s,t) { 〈s, x+y√ 2 〉 − 〈t, x−y√ 2 〉 − j(s)− g(t) } (5.47) = sup s { 〈s, x+y√ 2 〉 − j(s) } + sup t { − 〈t, x−y√ 2 〉 − g(t) } = j∗(x+y√ 2 ) + g∗(−x−y√ 2 ) = j(x+y√ 2 ) + g(x−y√ 2 ) ( since j∗ = j by [7, Proposition 3.3(i)] ) = f(x, y). 90 Chapter 5. Auto-conjugates Hence f is auto-conjugate. Note that (5.46) holds since 〈v+w2 , x+ y〉 − 〈 v−w 2 , x− y〉 = 12 ( 〈v + w, x+ y〉 − 〈v − w, x− y〉 ) = 12 ( 〈v, x〉 + 〈v, y〉+ 〈w, x〉 + 〈w, y〉 − 〈v, x〉 + 〈v, y〉 + 〈w, x〉 − 〈w, y〉 ) = 12 ( 2〈v, y〉 + 2〈w, x〉 ) = 〈v, y〉+ 〈w, x〉. In the following we show that (5.47) holds. Clearly, for every (v,w) there exists (s, t) such that 〈v+w√ 2 , x+y√ 2 〉 − 〈v−w√ 2 , x−y√ 2 〉 − j(v+w√ 2 )− g(v−w√ 2 ) = 〈s, x+y√ 2 〉 − 〈t, x−y√ 2 〉 − j(s)− g(t). On the other hand, for every (s, t), there exists v0 = √ 2 2 (s+t), w0 = √ 2 2 (s−t) such that 〈v0+w0√ 2 , x+y√ 2 〉 − 〈v0−w0√ 2 , x−y√ 2 〉 − j(v0+w0√ 2 )− g(v0−w0√ 2 ) = 〈s, x+y√ 2 〉 − 〈t, x−y√ 2 〉 − j(s) − g(t). Hence (5.47) holds. We now show that f is a representation for Id. First we show that g(0) = 0. 91 Chapter 5. Auto-conjugates By assumptions, g(0) ≥ 0. On the other hand, g(0) = g∗(−0) = g∗(0) = sup v {−g(v)} ≤ 0. Hence g(0) = 0. Then we have (x, y) ∈ G(f) ⇔ f(x, y) = 〈x, y〉 ⇔ 14‖x+ y‖ 2 + g(x−y√ 2 ) = 〈x, y〉 ⇔ 14‖x− y‖ 2 + g(x−y√ 2 ) = 0 ⇔ 14‖x− y‖ 2 = 0, g(x−y√ 2 ) = 0 (by g ≥ 0) by g(0) = 0 ⇔ x = y ⇔ (x, y) ∈ gra Id . Hence f is an auto-conjugate representation for Id.  Remark 5.6.2 If we set g = j in Proposition 5.6.1, f = P (FId, F ∗ Id ⊺). Now we give three examples based on Proposition 5.6.1. Example 5.6.3 The function g := ιR+ satisfies the conditions of Proposition 5.6.1. Figure 5.1 is corresponding to f . 92 Chapter 5. Auto-conjugates −2 −1 0 y −4 2 −2 1 1 0 0 −1x 2 2 −2 4 Figure 5.1: The function f (blue) and z = xy (gold), and their intersection line (cyan), gra Id (red). Proof. Let x ∈ R. We consider two cases. Case 1: x ≥ 0. We have g∗(−x) = sup v { 〈v,−x〉 − ιR+(v) } = sup v≥0 { 〈v,−x〉 } = 0 = g(x). Case 2: x < 0. Then g∗(−x) = sup v { 〈v,−x〉 − ιR+(v) } = sup v≥0 { 〈v,−x〉 } =∞ = g(x). Hence g∗(−x) = g(x).  93 Chapter 5. Auto-conjugates 2 1 x0 −1 −2 −2 −1 y 012 −2.5 0.0 2.5 5.0 7.5 Figure 5.2: The function f (blue) and z = xy (gold), and their intersection line (cyan), gra Id (red) . Example 5.6.4 Set g(x) :=   x2, if x ≥ 0; 1 4x 2, if x ≤ 0 . Then g satisfies the conditions of Proposition 5.6.1. Figure 5.2 is corre- sponding to f . 94 Chapter 5. Auto-conjugates Proof. Let x ∈ R. We consider two cases. Case 1: x ≥ 0. We have g∗(−x) = sup v { 〈v,−x〉 − g(v) } = sup v≤0 { 〈v,−x〉 − g(v) } (since g ≥ 0, g(0) = 0) = sup v≤0 { 〈v,−x〉 − 14v 2} = sup v≤0 { h0(v) } , where h0(v) := 〈v,−x〉 − 1 4v 2. Let 0 = ∇h0(v) = −x− 1 2v. Then v0 = −2x ≤ 0 is a critical point of h0. Since h0 is concave on R−, its critical point is its maximizer. Then g∗(−x) = h0(v0) = 〈−2x,−x〉 − x2 = x2 = g(x). Case 2: x < 0. We have g∗(−x) = sup v { 〈v,−x〉 − g(v) } = sup v≥0 { 〈v,−x〉 − g(v) } (since g ≥ 0, g(0) = 0) = sup v≥0 { 〈v,−x〉 − v2 } = sup v≥0 { h1(v) } , 95 Chapter 5. Auto-conjugates where h1(v) := 〈v,−x〉 − v 2 Let 0 = ∇h1(v) = −x− 2v. Then v1 = − 1 2x ≥ 0 is a critical point of h1. Since h1 is concave on R+, its critical point is its maximizer. Then g∗(−x) = h1(v1) = 〈−12x,−x〉 − 1 4x 2 = 14x 2 = g(x). Hence g∗(−x) = g(x).  
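As a quick numerical illustration (not part of the original text, assuming NumPy), the identity g∗(−x) = g(x) claimed in Example 5.6.4 can be checked with a discrete Fenchel conjugate on a grid, and the induced f of Proposition 5.6.1 can be seen to satisfy f ≥ 〈·, ·〉 with equality exactly on gra Id. The grid bounds and tolerances below are ad hoc choices for this check.

    import numpy as np

    def g(v):                                  # the g of Example 5.6.4
        return np.where(v >= 0, v ** 2, 0.25 * v ** 2)

    grid = np.arange(-50.0, 50.0, 1e-3)        # 1-D grid for a discrete Fenchel conjugate
    gvals = g(grid)

    def g_conj(s):                             # g*(s) = sup_v { s v - g(v) }, grid maximum
        return np.max(s * grid - gvals)

    for x in np.linspace(-3, 3, 13):
        assert abs(g_conj(-x) - g(x)) < 1e-4   # the required identity g*(-x) = g(x)

    # the induced f of Proposition 5.6.1 represents Id: f >= <.,.> with equality iff x = y
    j = lambda t: 0.5 * t ** 2
    f = lambda x, y: j((x + y) / np.sqrt(2)) + g((x - y) / np.sqrt(2))
    pts = np.linspace(-2, 2, 9)
    for x in pts:
        assert np.isclose(f(x, x), x * x)
        assert all(f(x, y) > x * y for y in pts if y != x)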
Example 5.6.5 Set p > 1, 1 p + 1 q = 1. g(x) :=   1 p xp, if x ≥ 0; 1 q (−x)q, if x ≤ 0. satisfies the conditions of Proposition 5.6.1. Figure 5.3 is corresponding to f . Proof. Let x ∈ R. We consider two cases. Case 1: x ≥ 0. We have g∗(−x) = sup v { 〈v,−x〉 − g(v) } = sup v≤0 { 〈v,−x〉 − g(v) } (since g ≥ 0, g(0) = 0) = sup v≤0 { 〈v,−x〉 − 1 q (−v)q } = sup v≤0 { g0(v) } , 96 Chapter 5. Auto-conjugates 2 0 1 x 0 5 −1 10 −2 15 Figure 5.3: The function f (blue) and z = xy (gold) , and their intersection line (cyan), gra Id (red), where p = 4. where g0(v) := 〈v,−x〉 − 1 q (−v)q. Then let 0 = ∇g0(v) = −x+ (−v) q−1. Thus v0 := −x 1 q−1 ≤ 0 is a critical point of g0. Since ∇2g0(v) = −(q− 1)(−v) q−2 ≤ 0 (∀v < 0), by the continuity of g0, g0 97 Chapter 5. Auto-conjugates is concave on R−. Then its critical point is its maximizer. Thus g∗(−x) = g0(v0) = 〈−x 1 q−1 ,−x〉 − 1 q x q q−1 = (1− 1 q )x q q−1 = 1 p xp (by 1 p + 1 q = 1) = g(x). Case 2: x < 0. We have g∗(−x) = sup v { 〈v,−x〉 − g(v) } = sup v≥0 { 〈v,−x〉 − g(v) } (since g ≥ 0, g(0) = 0) = sup v≥0 { 〈v,−x〉 − 1 p vp } = sup v≥0 { g1(v) } , where g1(v) := 〈v,−x〉 − 1 p vp. Then let 0 = ∇g1(v) = −x− v p−1. Thus v1 := (−x) 1 p−1 > 0 is a critical point of g1. Since ∇2g1(v) = −(p− 1)v p−2 ≤ 0 (∀v > 0), by the continuity of g1, g1 is 98 Chapter 5. Auto-conjugates concave on R+. Then its critical point is its maximizer. Thus g∗(−x) = g1(v1) = 〈(−x) 1 p−1 ,−x〉 − 1 p (−x) p p−1 = (1− 1 p )(−x) p p−1 = 1 q (−x)q = g(x). Hence g∗(−x) = g(x).  Remark 5.6.6 Example 5.6.3, 5.6.4 and 5.6.5 each provide a function f that is an auto-conjugate representation for Id with f 6= P (FId, F ∗ Id ⊺). 99 Chapter 6 Calculation of the auto-conjugates of ∂(− ln) Throughout this chapter, − ln is meant in the extended real valued sense, i.e., − ln(x) =∞ for x ≤ 0. In Chapter 5, Proposition 5.3.12 shows that hFA = P (FA, F ∗ A ⊺) for a linear and monotone operator A. Now we will show that for the nonlinear mono- tone operator ∂(− ln) we have P (F∂(− ln), F ∗⊺ ∂(− ln)) 6= hF∂(− ln) . Throughout the chapter, we denote C : = { (x, x∗) ∈ R× R | x ≤ − 1 x∗ < 0 } D : = { (x, x∗) ∈ R× R | x∗ ≤ − 12x < 0 } E : = { (x, x∗) ∈ R× R | x∗ ≤ − 14x < 0 } . 100 Chapter 6. Calculation of the auto-conjugates of ∂(− ln) 6.1 Fitzpatrick function for ∂(− ln) Fact 6.1.1 Let f = − ln . Then F∂f (x, x ∗) =   1− 2x 1 2 (−x∗) 1 2 , if x ≥ 0, x∗ ≤ 0; ∞, otherwise. Proof. See [8, Example 3.4].  Fact 6.1.2 Let f = − ln . Then F ∗∂f (x ∗, x) = −1 + ιC(x∗, x). Proof. See [8, Example 3.4].  Fact 6.1.3 (Rockafellar) Let f = − ln . f∗(x∗) =   −1− ln(−x∗), if x∗ < 0; ∞, otherwise, Proof. See [8, Example 3.4].  Remark 6.1.4 Let f = − ln . Recall (f ⊕ f∗)(x, x∗) := f(x) + f∗(x∗), ∀(x, x∗) ∈ R× R. By Fact 6.1.3, dom(f ⊕ f∗) = R++ × R−−. 101 Chapter 6. Calculation of the auto-conjugates of ∂(− ln) Proposition 6.1.5 D $ E $ R++ × R−−. Proof. We first verify that D $ E. Let (x, x∗) ∈ D. Thus x > 0. Then − 12x ≤ − 1 4x . By (x, x ∗) ∈ D, x∗ ≤ − 12x ≤ − 1 4x < 0. Thus (x, x∗) ∈ E. Then D ⊂ E. On the other hand, (1,−14 ) ∈ E, but (1,− 1 4 ) /∈ D. Thus D 6= E. Hence D $ E. It is clear we have E $ R++ × R−−. Thus combining the results above, D $ E $ R++ × R−−.  6.2 Proximal average of ∂(− ln) and hF∂f Proposition 6.2.1 Let f = − ln . Then domP (F∂f , F ∗⊺ ∂f ) = E. Proof. Let By [7, Theorem 4.6], domP (F∂f , F ∗⊺ ∂f ) = 1 2 domF∂f + 1 2 domF ∗⊺ ∂f . (6.1) 102 Chapter 6. Calculation of the auto-conjugates of ∂(− ln) In the following we will show that domP (F∂f , F ∗⊺ ∂f ) = 1 2 domF ∗⊺ ∂f . 
(6.2) By Fact 6.1.1, (0, 0) ∈ domF∂f , then by (6.1), we have 1 2 domF ∗⊺ ∂f ⊂ domP (F∂f , F ∗⊺ ∂f ). (6.3) Next we show that 1 2 domF ∗⊺ ∂f = E. (6.4) Indeed, (x, x∗) ∈ 12 domF ∗⊺ ∂f ⇔ (2x∗, 2x) ∈ domF ∗∂f ⇔ 2x ∗ ≤ − 12x < 0 (by Fact 6.1.2) ⇔ x∗ ≤ − 14x < 0⇔ (x, x ∗) ∈ E. Hence (6.4) holds. Then by (6.4) and (6.3), E ⊂ domP (F∂f , F ∗⊺ ∂f ). (6.5) In the following, we will verify that 1 2 domF∂f + E ⊂ E. (6.6) 103 Chapter 6. Calculation of the auto-conjugates of ∂(− ln) Let (y, y∗) ∈ 12 domF∂f and (x, x ∗) ∈ E. By Fact 6.1.1 we have y ≥ 0, y∗ ≤ 0, x > 0, x∗ < 0, 4xx∗ ≤ −1. Thus x + y ≥ x > 0, x∗ + y∗ ≤ x∗ < 0. Then we have 4(x + y)(x∗ + y∗) ≤ 4xx∗ ≤ −1, i.e., (x, x∗)+(y, y∗) ∈ E. Hence (6.6) holds. Thus by (6.6), (6.4) and (6.1), domP (F∂f , F ∗⊺ ∂f ) ⊂ E. Then by (6.5), domP (F∂f , F ∗⊺ ∂f ) = E.  Lemma 6.2.2 Let x, x∗, y∗ ∈ R with y∗ ≤ 0.Then ιC(2x ∗ − 2y∗, x) = ιC(2x∗ − 2y∗, x) + ιD(x, x∗). Proof. We consider two cases. Case 1: (2x∗ − 2y∗, x) /∈ C. Clear. Case 2: (2x∗ − 2y∗, x) ∈ C. By assumptions, 2x∗ ≤ 2x∗ − 2y∗ ≤ − 1 x < 0⇒ x∗ ≤ − 12x < 0 (by y ∗ ≤ 0). Thus (x, x∗) ∈ D. Then ιD(x, x∗) = 0.  Remark 6.2.3 Let x, x∗ ∈ R. Then ιR+(x) + ιD(x, x ∗) = ιD(x, x∗). Proof. Follows directly from the definition of D .  104 Chapter 6. Calculation of the auto-conjugates of ∂(− ln) -5.0 -2.5 y 0.0 -7 5.0 -5 -3 2.5 -1 2.5 x 0.0 -2.5 5.0 -5.0 Figure 6.1: The function hF∂f . Proposition 6.2.4 Let f = − ln . Then hF∂f (x, x ∗) = −(−1− 2xx∗) 1 2 + ιD(x, x ∗), ∀(x, x∗) ∈ R× R. Consequently, domhF∂f = D. Figure 6.1 illustrates hF∂f . Proof. Let (x, x∗) ∈ R× R. By Fact 6.1.1 and Fact 6.1.2, we have hF∂f (x, x ∗) = inf y∗ { 1 2F∂f (x, 2y ∗) + 12F ∗ ∂f (2x ∗ − 2y∗, x) } = inf y∗≤0 { 1 2 − |x| 1 2 (−2y∗) 1 2 + ιR+(x) + 1 2F ∗ ∂f (2x ∗ − 2y∗, x) } = inf y∗≤0 { 1 2 − |x| 1 2 (−2y∗) 1 2 + ιR+(x) + ιC(2x ∗ − 2y∗, x)− 12 } = inf y∗≤0 { − |x| 1 2 (−2y∗) 1 2 + ιR+(x) + ιC(2x ∗ − 2y∗, x) + ιD(x, x∗) } (6.7) = inf y∗≤0 { − |x| 1 2 (−2y∗) 1 2 + ιC(2x ∗ − 2y∗, x) + ιD(x, x∗) } , (6.8) 105 Chapter 6. Calculation of the auto-conjugates of ∂(− ln) where (6.7) holds by Lemma 6.2.2, (6.8) by Remark 6.2.3. Now we consider two cases. Case 1: (x, x∗) /∈ D. Thus hF∂f (x, x ∗) =∞. Case 2: (x, x∗) ∈ D. Thus x > 0. Then hF∂f (x, x ∗) = inf y∗≤0 { − x 1 2 (−2y∗) 1 2 + ιC(2x ∗ − 2y∗, x) } = inf (2x∗−2y∗≤− 1 x <0, y∗≤0) { − x 1 2 (−2y∗) 1 2 } (6.9) = −(2x) 1 2 sup (2x∗−2y∗≤− 1 x <0, y∗≤0) { (−y∗) 1 2 } = −(2x) 1 2 sup (0≤−2y∗≤−2x∗− 1 x ) { (−y∗) 1 2 } = −(2x) 1 2 (− 12x − x ∗) 1 2 (6.10) = −(−1− 2xx∗) 1 2 (by x > 0), where (6.9) holds by letting 2x∗ − 2y∗ ∈ C. (6.10) holds by 0 ≤ − 12x − x ∗ since (x, x∗) ∈ D. Thus combining the results above, hF∂f (x, x ∗) = −(−1− 2xx∗) 1 2 + ιD(x, x ∗), ∀(x, x∗) ∈ R× R.  Corollary 6.2.5 Let f = − ln . Then P (F∂f , F ∗⊺ ∂f ), f⊕f ∗ and hF∂f are three different functions. 106 Chapter 6. Calculation of the auto-conjugates of ∂(− ln) Proof. By Remark 6.1.4, we have dom(f⊕f∗) = R++×R−−. Then by Propo- sition 6.2.1 and Proposition 6.2.4, domP (F∂f , F ∗⊺ ∂f ) = E and domhF∂f = D. By Proposition 6.1.5, domhF∂f $ domP (F∂f , F ∗⊺ ∂f ) $ dom(f ⊕ f ∗). Hence P (F∂f , F ∗⊺ ∂f ), f ⊕ f ∗ and hF∂f are all different.  Remark 6.2.6 We don’t have an explicit formula for P (F∂(− ln), F ∗⊺ ∂(− ln)). 107 Chapter 7 Proximal averages of monotone operators with linear graphs We have given some auto-conjugate representation results for linear and monotone operators. Now we extend them to monotone operators with linear graphs. Background worked on linear relations can be found in the book by Cross [16]. 
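Before turning to the formal development, the following computational sketch (not part of the thesis, assuming NumPy/SciPy) shows one concrete way to encode an operator with a linear graph, namely as the column space of a matrix in R^{2n×k}, and checks the double-adjoint identity A∗∗ = A of Corollary 7.1.11 below using the graph description of the adjoint given in Definition 7.1.3. The random graph G and the helper names are illustrative choices only.

    import numpy as np
    from scipy.linalg import null_space, orth

    n = 3
    rng = np.random.default_rng(2)
    G = rng.standard_normal((2 * n, n + 1))    # gra A := ran(G), a linear relation on R^3
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])   # the map (x, x*) |-> (x*, -x)

    def adjoint_graph(Ggr):                    # gra A* = { z : J z is orthogonal to gra A },
        return null_space(Ggr.T @ J)           # cf. Definition 7.1.3 and Remark 7.1.2 below

    def projector(Ggr):                        # orthogonal projector onto ran(Ggr)
        Q = orth(Ggr)
        return Q @ Q.T

    G_star = adjoint_graph(G)                  # basis of gra A*
    G_star2 = adjoint_graph(G_star)            # basis of gra A**
    assert np.allclose(projector(G_star2), projector(G))   # A** = A  (Corollary 7.1.11)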
7.1 Adjoint process of operators with linear graphs Definition 7.1.1 Let C be a nonempty cone of Rn. The polar of C, C−, is defined by C− := { x∗ | 〈c, x∗〉 ≤ 0,∀c ∈ C } . Remark 7.1.2 If C is a linear subspace of Rn, then by Lemma 2.1.28, C− = C⊥. 108 Chapter 7. Proximal averages of monotone operators with linear graphs Definition 7.1.3 Let A : Rn ⇉ Rn be such that graA is a convex cone. The adjoint process of A, A∗, is defined by graA∗ := { (x, x∗) | (x∗,−x) ∈ (graA)− } . Lemma 7.1.4 [16, Proposition III.1.3] Let A : Rn ⇉ Rn be such that graA is linear. Suppose k ∈ R with k 6= 0. Then (kA)∗ = kA∗. Proof. By Remark 7.1.2, (x, x∗) ∈ gra(kA)∗ ⇔ (x∗,−x) ∈ (gra kA)− = (gra kA)⊥ ⇔ 〈(x∗,−x), (v, v∗)〉 = 0, ∀(v, v∗) ∈ gra(kA) ⇔ 1 k 〈(x∗,−x), (v, v∗)〉 = 0, ∀(v, v∗) ∈ gra(kA) ⇔ 〈( 1 k x∗,−x), (v, 1 k v∗)〉 = 0, ∀(v, v∗) ∈ gra(kA) ⇔ 〈( 1 k x∗,−x), (v,w∗)〉 = 0, ∀(v,w∗) ∈ graA ⇔ (x, 1 k x∗) ∈ graA∗ ⇔ x∗ ∈ kA∗x. Hence (kA)∗ = kA∗.  Remark 7.1.5 Let A : Rn ⇉ Rn be such that graA is linear. Then graA∗ is a linear graph. Remark 7.1.6 Let A : Rn ⇉ Rn be such that graA is linear. Then A∗0 = (domA)⊥. Proof. See [16, Proposition III.1.4 (b)].  109 Chapter 7. Proximal averages of monotone operators with linear graphs Definition 7.1.7 Let A : Rn ⇉ Rn be such that graA is linear. We say that A is symmetric if A∗ = A. Definition 7.1.8 Let A : Rn ⇉ Rn be such that graA is linear. We say that A is antisymmetric if A∗ = −A. Fact 7.1.9 Let A,B : Rn ⇉ Rn be such that graA and graB are linear. Then (A+B)∗ = A∗ +B∗. Proof. See [11, Theorem 7.4].  Fact 7.1.10 Let A : Rn ⇉ Rn be such that graA is a closed convex cone. Then graA∗∗ = − graA. Proof. See [13, Exercises 7 page 119].  Corollary 7.1.11 Let A : Rn ⇉ Rn be such that graA is linear. Then A∗∗ = A. Proof. Since graA is a linear subspace, − graA = graA. Thus by Fact 7.1.10, graA∗∗ = graA. Hence A∗∗ = A.  Corollary 7.1.12 Let A : Rn ⇉ Rn be such that graA is a linear subspace. Then domA∗ = (A0)⊥. Proof. By Remark 7.1.5 and Remark 7.1.6, we have (A∗)∗0 = (domA∗)⊥. Then by Corollary 7.1.11, A0 = (domA∗)⊥. Thus domA∗ = (A0)⊥.  Remark 7.1.13 Let A : Rn ⇉ Rn be such that graA is linear. By Fact 7.1.9, Remark 7.1.5, Corollary 7.1.11 and Lemma 7.1.4, A+A ∗ 2 is symmetric and A−A∗ 2 is antisymmetric. 110 Chapter 7. Proximal averages of monotone operators with linear graphs Definition 7.1.14 (Symmetric and antisymmetric part) Let A : Rn ⇉ Rn be such that graA is linear. Then A+ = 12A + 1 2A ∗ is the symmetric part of A, and A◦ = 1 2A− 1 2A ∗ is the antisymmetric part of A. Remark 7.1.15 Let A : Rn ⇉ Rn be such that graA is a linear subspace. Then by Corollary 7.1.12, domA+ = domA◦ = domA ∩ (A0) ⊥. Corollary 7.1.16 Let A : Rn ⇉ Rn be such that graA is a linear subspace. Then A can be decomposed into the sum of a symmetric operator with a linear graph and an antisymmetric operator with a linear graph, if and only if, domA = (A0)⊥. In that case, A can be decomposed as : A = A+ +A◦. Proof. “⇒” Let B : Rn ⇉ Rn be a symmetric operator with a linear graph and C : Rn ⇉ Rn be an antisymmetric operator with a linear graph such that A = B + C. By Fact 7.1.9, A∗ = B∗ + C∗ = B − C. Then domA∗ = domB ∩ domC = domA. By Corollary 7.1.12, domA = (A0)⊥. “⇐” By Remark 7.1.15, domA+ = domA◦ = domA. By Corollary 7.1.12, domA∗ = (A0)⊥ = domA. Thus, by Remark 7.1.5 and Proposition 4.1.3(iii), A+x+A◦x = 1 2(Ax+A ∗x+Ax−A∗x) = Ax+A∗0 = Ax+ (domA)⊥ = Ax+A0 (by Remark 7.1.6) = Ax (by Proposition 4.1.3(iii)), ∀x ∈ domA.  Remark 7.1.17 Consider an operator A : Rn ⇉ Rn with graA = {(0, 0)}. 
Then we have domA = {0} 6= Rn = (A0)⊥. Clearly, graA∗ = Rn × Rn. 111 Chapter 7. Proximal averages of monotone operators with linear graphs Thus (A++A◦)0 = A+0+A◦0 = Rn+Rn = Rn 6= A0. Thus A 6= A++A◦. By Proposition 7.1.16, A can not be decomposed into the sum of a symmetric operator with a linear graph and an antisymmetric operator with a linear graph. Remark 7.1.18 Let S be a linear subspace of Rn. Then S is closed. Corollary 7.1.19 Let A : Rn ⇉ Rn be maximal monotone such that graA is a linear subspace. Then A = A+ +A◦. Proof. By Remark 7.1.18, Proposition 4.2.5 and Corollary 7.1.16.  Definition 7.1.20 Let C be a nonempty convex subset of Rn and x0 ∈ Rn. The normal cone of C at x0, NC(x0), is defined by NC(x0) :=   { x∗ | 〈x∗, c− x0〉 ≤ 0,∀c ∈ C } , if x0 ∈ C; ∅, otherwise. Fact 7.1.21 Let C be a nonempty convex subset of Rn and x0 ∈ C. Then NC(x0) = ∂ιC(x0). If C is a linear subspace of Rn, then NC(x0) = ∂ιC(x0) = C⊥ by Fact 2.1.29. Remark 7.1.22 Let A : Rn → Rn be linear and S be a linear subspace of Rn. Then gra(A+NS) is a linear subspace of Rn × Rn. Fact 7.1.23 Let B : Rn → Rn be linear and S be a linear subspace of Rn such that A = B +NS . Then A ∗ = B∗ +NS . 112 Chapter 7. Proximal averages of monotone operators with linear graphs Proof. Since graA is a linear subspace by Remark 7.1.22, then by Re- mark 7.1.2 and Fact 7.1.21 we have (x, x∗) ∈ graA∗ ⇔ (x∗,−x) ∈ (graA)− ⇔ (x∗,−x) ∈ (graA)⊥ ⇔ 〈x∗, y〉 − 〈x, y∗〉 = 0, ∀y∗ ∈ Ay ⇔ 〈x∗, y〉 − 〈x,By + S⊥〉 = 0, ∀y ∈ S. (7.1) Let y = 0 in (7.1). We have 〈x, S⊥〉 = 0. Thus x ∈ S. Then (x, x∗) ∈ graA∗ ⇔ x ∈ S, 〈x∗, y〉 − 〈x,By〉 = 0, ∀y ∈ S ⇔ x ∈ S, 〈x∗ −B∗x, y〉 = 0, ∀y ∈ S ⇔ x ∈ S, (x∗ −B∗x)⊥S ⇔ x ∈ S, x∗ ∈ B∗x+ S⊥ ⇔ x∗ ∈ (B∗ +NS)(x) (by Fact 7.1.21). Hence A∗ = B∗ +NS .  Remark 7.1.24 Fact 7.1.23 is a special case of [13, Exercises 14 (f) page 120]. Remark 7.1.25 Let B : Rn → Rn be linear and S be a linear subspace of Rn. Suppose A = B + NS . Then by Fact 7.1.23, A+ = B+ + NS, A◦ = 113 Chapter 7. Proximal averages of monotone operators with linear graphs B◦ +NS and A = A+ +A◦. Now we recall the definition of QA. Definition 7.1.26 Let A : Rn ⇉ Rn be such that graA is a linear subspace of Rn × Rn. We define QA by QAx =   PAxx, if x ∈ domA; ∅, otherwise. Proposition 7.1.27 Let A : Rn ⇉ Rn be such that graA is a linear sub- space. Then QA is single-valued and linear on domA, and QA is a selection of A. Proof. SinceA0 is a closed subspace by Proposition 4.1.3(i) and Remark 7.1.18, Ax (∀x ∈ domA) is a closed convex by Proposition 4.1.3(ii). By Fact 4.3.1, QA is single-valued on domA and QA is a selection of A. Very similar to the proof of Proposition 4.3.6, we have QA is linear on domA.  Corollary 7.1.28 Let A : Rn ⇉ Rn be such that graA is a linear subspace. Then Ax =   QAPdomAx+A0, if x ∈ domA; ∅, otherwise, where QAPdomA is linear. Proof. By Proposition 7.1.27 and Proposition 4.1.3(ii).  Proposition 7.1.29 Let A : Rn ⇉ Rn be such that graA is a linear sub- space. Assume A can be decomposed into the sum of a symmetric operator 114 Chapter 7. Proximal averages of monotone operators with linear graphs with a linear graph and an antisymmetric operator with a linear graph. Then such a decomposition is not unique. Proof. By Corollary 7.1.28, Corollary 7.1.16 and Fact 7.1.21, A = QAPdomA +NdomA. Thus, A = (QAPdomA)++ ( (QAPdomA)◦+NS ) = ((QAPdomA)++NS)+(QAPdomA)◦. By Fact 7.1.23, (QAPdomA)+, (QAPdomA)++NS are symmetric and ( (QAPdomA)◦+ NS ) , (QAPdomA)◦ are antisymmetric. 
Since (QAPdomA)+ 6= (QAPdomA)++ NS and (QAPdomA)◦ +NS ) 6= (QAPdomA)◦ as S 6= Rn, the decomposition is not unique.  Theorem 7.1.30 Let A,B,C : Rn ⇉ Rn be such that graA, graB and graC are linear subspaces. Assume B is symmetric and C is antisymmetric such that A = B + C and domB = domC. Then B = A+, C = A◦. Proof. By Fact 7.1.9, A∗ = B−C. Thus by assumptions, domB = domC = domA = domA∗. Thus domA+ = domA◦ = domB = domC = domA. By Corollary 7.1.12, domB = domB∗ = (B0)⊥,domC = domC∗ = (C0)⊥. Thus (B0)⊥ = (C0)⊥. Since C0, B0 are closed linear subspaces by Propo- sition 4.1.3(i) and Remark 7.1.18, B0 = C0. Let x ∈ domA. Then by 115 Chapter 7. Proximal averages of monotone operators with linear graphs Proposition 4.1.3(iii) and Proposition 4.1.3(ii), A+x = 1 2(Bx+ Cx+Bx− Cx) = Bx+ C0 = Bx+B0 = Bx, A◦x = 1 2(Bx+ Cx−Bx+ Cx) = B0 + Cx = C0 + Cx = Cx. Hence B = A+, C = A◦.  Corollary 7.1.31 Let A : Rn ⇉ Rn be maximal monotone such that graA is a linear subspace of Rn×Rn. Then A = PdomAQAPdomA+NdomA, where PdomAQAPdomA is linear and monotone. Proof. Since domA is a closed linear subspace by Remark 7.1.18, then by Theorem 4.4.1, PdomAQAPdomA is linear and monotone, and A = ∂(qà + ιdomA) + Ã◦, where à = PdomAQAPdomA. Then by Fact 2.1.18, Fact 2.1.30 and Fact 7.1.21, A = Ã+ + ∂ιdomA + Ã◦ = PdomAQAPdomA +NdomA.  Remark 7.1.32 Let A : Rn ⇉ Rn such that graA is a linear subspace of Rn. Then graA−1 is a linear subspace of Rn × Rn. 116 Chapter 7. Proximal averages of monotone operators with linear graphs 7.2 Fitzpatrick functions of monotone operators with linear graphs Definition 7.2.1 Assume A : Rn ⇉ Rn. The set-valued inverse mapping, A−1 : Rn ⇉ Rn, is defined by x ∈ A−1y ⇔ y ∈ Ax. Definition 7.2.2 Let A : Rn ⇉ Rn and S be a subset of Rn. Then AS is defined by AS := { x∗ | x∗ ∈ As, ∃s ∈ S } . Proposition 7.2.3 Let B : Rn → Rn be linear and S be a linear subspace of Rn such that A = B +NS . Then (i) x ∈ ranA⇔ x+ S⊥ ⊂ ranA (ii) A−1x = A−1(x+ S⊥). Proof. (i): By Fact 7.1.21, ranA = ran(B |S) + S ⊥. Thus S⊥ + ranA = ranA. Then x ∈ ranA⇔ x+ S⊥ ⊂ S⊥ + ranA = ranA. Hence (i) holds. (ii): Clearly, A−1x ⊂ A−1(x+S⊥). In the following we show A−1(x+S⊥) ⊂ A−1x. 117 Chapter 7. Proximal averages of monotone operators with linear graphs By Fact 7.1.21, y ∈ A−1(x+ S⊥)⇒ y ∈ A−1(x+ t), ∃t ∈ S⊥ ⇒ x+ t ∈ Ay = By +NS(y) = By + S ⊥, y ∈ S ⇒ x ∈ By + S⊥ = By +NS(y) = Ay ⇒ y ∈ A−1x. Thus A−1(x+ S⊥) ⊂ A−1x. Hence A−1x = A−1(x+ S⊥).  Lemma 7.2.4 Let B : Rn → Rn be linear and symmetric, and S be a sub- space of Rn. Suppose that x ∈ ran(B + NS). Then 〈x, (B + NS)−1x〉 is single-valued. Moreover, if y0 ∈ (B + NS) −1x, then 〈x, (B + NS)−1x〉 = 〈y0, By0〉. Proof. Let x∗1, x ∗ 2 ∈ (B +NS) −1x. Then x∗1, x ∗ 2 ∈ S and by Fact 7.1.21, x ∈ (B +NS)x ∗ 1 = Bx ∗ 1 + S ⊥, x ∈ (B +NS)x∗2 = Bx ∗ 2 + S ⊥. (7.2) Then we have B(x∗1 − x ∗ 2) ∈ S ⊥. (7.3) 118 Chapter 7. Proximal averages of monotone operators with linear graphs By (7.2), there exists t ∈ S⊥ such that x = Bx∗1 + t. Then 〈x, x∗1 − x ∗ 2〉 = 〈Bx ∗ 1 + t, x ∗ 1 − x ∗ 2〉 = 〈Bx∗1, x ∗ 1 − x ∗ 2〉 (7.4) = 〈x∗1, B(x ∗ 1 − x ∗ 2)〉 = 0, (7.5) in which, (7.4) holds by t ∈ S⊥ and x∗1 − x ∗ 2 ∈ S, and (7.5) holds by (7.3) and x∗1 ∈ S. Thus 〈x, x∗1〉 = 〈x, x ∗ 2〉. Hence 〈x, (B +NS) −1x〉 is single-valued. Let y0 ∈ (B +NS) −1x. Then y0 ∈ S and x ∈ (B + NS)y0 = By0 + S⊥ by Fact 7.1.21. Let t0 ∈ S ⊥ such that x = By0 + t0. Since 〈x, (B +NS)−1x〉 is single-valued, 〈x, (B+NS) −1x〉 = 〈x, y0〉 = 〈By0+t0, y0〉 = 〈y0, By0〉 (by y0 ∈ S, t0 ∈ S⊥).  
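The single-valuedness statement of Lemma 7.2.4 can be illustrated numerically as follows (an aside, not part of the thesis, assuming NumPy). The concrete subspace S, the rank-one symmetric B and the vectors constructed below are ad hoc choices designed to produce two distinct elements of (B + NS)^{-1}x.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 4
    PS = np.diag([1.0, 1.0, 1.0, 0.0])         # projector onto S = span{e1, e2, e3}
    PSp = np.eye(n) - PS                       # projector onto S^perp
    b = rng.standard_normal(n)
    B = np.outer(b, b)                         # linear, symmetric and monotone (rank one)

    y1 = PS @ rng.standard_normal(n)           # y1 in S
    t = PSp @ rng.standard_normal(n)           # t in S^perp
    x = B @ y1 + t                             # so x lies in (B + N_S)(y1), by Fact 7.1.21

    bS = PS @ b                                # component of b inside S
    z = PS @ rng.standard_normal(n)
    z = z - (z @ bS) / (bS @ bS) * bS          # z in S with <b, z> = 0, hence B z = (b.z) b = 0
    y2 = y1 + z                                # a second, distinct element of (B + N_S)^{-1}x

    vals = [x @ y1, x @ y2, y1 @ (B @ y1), y2 @ (B @ y2)]
    assert np.allclose(vals, vals[0])          # <x, (B + N_S)^{-1} x> is single-valued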
Lemma 7.2.5 Let B : Rn → Rn be linear and S be a linear subspace of Rn such that A = B +NS . Suppose (x, x ∗) ∈ S × Rn. Then ιranA+(x ∗ −Bx) = ιranA+(x ∗ −Ax), i.e., x∗ −Bx ∈ ranA+ ⇔ x∗ −Ax ⊂ ranA+. 119 Chapter 7. Proximal averages of monotone operators with linear graphs Moreover if x∗ −Bx ∈ ranA+, then 〈 x∗ −Bx, (A+)−1(x∗ −Bx) 〉 = 〈 x∗ −Ax, (A+)−1(x∗ −Ax) 〉 . Proof. By Fact 7.1.21, Ax = Bx+ S⊥. (7.6) By Remark 7.1.25 and Proposition 7.2.3(i) applied to A+, ιranA+(x ∗ −Bx) = ιranA+ ( x∗ −Bx+ S⊥ ) = ιranA+ ( x∗ −Ax ) (by (7.6)). Let x∗ − Bx ∈ ranA+. By Remark 7.1.25, (A+)−1(x∗ − Bx) ⊂ S, then we have 〈 x∗ −Bx, (A+)−1(x∗ −Bx) 〉 = 〈 x∗ −Bx+ S⊥, (A+)−1(x∗ −Bx) 〉 = 〈 x∗ −Bx+ S⊥, (A+)−1 ( x∗ −Bx+ S⊥ )〉 (7.7) = 〈 x∗ −Ax, (A+)−1 ( x∗ −Ax )〉 , (7.8) in which, (7.7) holds by Proposition 7.2.3(ii), (7.8) by (7.6).  Remark 7.2.6 Let B : Rn → Rn be linear and S be a linear subspace of Rn such that A = B+NS. Suppose (x, x ∗) ∈ S×Rn such that x∗−Bx ∈ ranA+. 120 Chapter 7. Proximal averages of monotone operators with linear graphs By Remark 7.1.25, Lemma 7.2.4 and Lemma 7.2.5, we see that 〈 x∗ −Ax, (A+)−1(x∗ −Ax) 〉 is single-valued. Proposition 7.2.7 Let B : Rn → Rn be linear and S be a linear subspace of Rn such that A = B+NS. Suppose (x, x∗) ∈ S ×Rn. Then (x∗−Ax) ⊂ ranA+ or (x ∗ −Ax) ∩ ranA+ = ∅. Proof. Suppose that (x∗ − Ax) ∩ ranA+ 6= ∅. By Fact 7.1.21, there exists t ∈ S⊥ such that x∗−Bx+ t ∈ ranA+. Then by Fact 7.1.21, Remark 7.1.25 and Proposition 7.2.3(i), we obtain x∗ − Ax = x∗ − Bx+ S⊥ = x∗ − Bx+ t+ S⊥ ⊂ ranA+.  Definition 7.2.8 Let A : Rn ⇉ Rn. We define ΦA : Rn ⇉ Rn by ΦA(x) =   A−1x, if x ∈ ranA; {0}, otherwise. Remark 7.2.9 Let B : Rn → Rn be linear and S be a linear subspace of Rn such that A = B +NS . Then by Proposition 7.2.7 and Remark 7.2.6, 〈 x∗ −Ax, ΦA+(x ∗ −Ax) 〉 ( (x, x∗) ∈ S × Rn ) is single-valued. By Lemma 7.2.4 and Remark 7.1.25, 〈 x, ΦA+(x) 〉 (x ∈ Rn) is single-valued. 121 Chapter 7. Proximal averages of monotone operators with linear graphs Lemma 7.2.10 Let A : Rn → Rn such that graA is a linear subspace of Rn × Rn. Let k ∈ R with k 6= 0. Then ΦA(kx) = kΦA(x), ∀x ∈ Rn. Proof. Let x ∈ Rn. We consider two cases. Case 1: x /∈ ranA. Then kx /∈ ranA. Thus we have kΦA(x) = ΦA(kx) = 0. Case 2: x ∈ ranA. Then kx ∈ ranA. Then by Remark 7.1.32 and Proposi- tion 4.1.3(iii), kΦA(x) = kA −1x = A−1(kx) = ΦA(kx).  Corollary 7.2.11 Let B : Rn → Rn be linear and S be a linear subspace of Rn such that A = B +NS . Let k ∈ R. Then ιS(x) + ιranA+(x ∗ −Bx) + k 〈 x∗ −Bx, ΦA+(x ∗ −Bx) 〉 = ιS(x) + ιranA+(x ∗ −APSx) + k 〈 x∗ −APSx, ΦA+(x ∗ −APSx) 〉 , ∀(x, x∗) ∈ Rn × Rn. Proof. Combine Lemma 7.2.5 and Remark 7.1.25.  Fact 7.2.12 Let B : Rn ⇉ Rn be linear and monotone, and S be a linear subspace of Rn. Then B +NS is maximal monotone. Proof. See [28, Theorem 41.2].  Fact 7.2.13 Let B : Rn ⇉ Rn be linear, symmetric and monotone, and S be a linear subspace of Rn. Then ran(B +NS) = ranB + S⊥. Proof. Combine Remark 7.1.25, Fact 7.2.12, [4, Corollary 4.9] and [28, 19.0.3 page 70].  122 Chapter 7. Proximal averages of monotone operators with linear graphs Lemma 7.2.14 Let B : Rn → Rn be linear and monotone, and S be a linear subspace of Rn. Then (qB + ιS) ∗(x) = ιran(B++NS)(x) + 1 2 〈 x, Φ(B++NS)(x) 〉 , ∀x ∈ Rn. Proof. Let x ∈ Rn. Then (qB + ιS) ∗(x) = sup y∈Rn { 〈y, x〉 − qB(y)− ιS(y) } . Let g(y) := 〈y, x〉 − qB(y)− ιS(y). A point y is a maximizer of g, if and only if, it is a critical point. Then by Fact 2.1.30, Fact 2.1.18 and Fact 7.1.21, 0 ∈ ∂g(y) = x−B+y −NS(y) = x− (B+ +NS)(y). 
We consider two cases. Case 1: x ∈ ran(B++NS). Let y0 satisfy that x ∈ (B++NS)y0. Then y0 ∈ S and x ∈ B+y0 + S ⊥ by Fact 7.1.21. Let t ∈ S⊥ such that x = B+y0 + t. 123 Chapter 7. Proximal averages of monotone operators with linear graphs Since y0 is a critical point, (qB + ιS) ∗(x) = g(y0) = 〈y0, x〉 − 12〈y0, B+y0〉 (by Remark 2.1.12) = 〈y0, B+y0 + t〉 − 1 2 〈y0, B+y0〉 (by x = B+y0 + t) = 〈y0, B+y0〉 − 1 2 〈y0, B+y0〉 (by y0 ∈ S and t ∈ S ⊥) = 12〈y0, B+y0〉 = 12〈x, (B+ +NS) −1x〉 (by Lemma 7.2.4 applied to B+) = 12 〈 x, Φ(B++NS)(x) 〉 . Case 2: x /∈ ran(B+ +NS). By Fact 7.2.13, ran(B+ +NS) = ranB+ + S ⊥. Thus by Fact 2.1.32, ( ran(B+ +NS) )⊥ = (ranB+ + S ⊥)⊥ = (ranB+)⊥ ∩ (S⊥)⊥ = kerB+ ∩ S. Then we have Rn = ran(B+ +NS)⊕ (kerB+ ∩ S) and x = Pran(B++NS)x+ PkerB+∩Sx. Since x /∈ ran(B+ +NS), PkerB+∩Sx 6= 0. Thus 〈PkerB+∩Sx, x〉 = 〈PkerB+∩Sx, Pran(B++NS)x+ PkerB+∩Sx〉 = ‖PkerB+∩Sx‖ 2 > 0. (7.9) Then by Fact 5.1.10, (qB + ιS) ∗(x) = (qB + ιS)∗(x) + (qB + ιS)(kPkerB+∩Sx) (7.10) ≥ 〈kPkerB+∩Sx, x〉 → ∞, as k →∞ ( by (7.9) ) , 124 Chapter 7. Proximal averages of monotone operators with linear graphs where (7.10) holds since (qB + ιS)(kPkerB+∩Sx) = 0 by Remark 2.1.12 and PkerB+∩Sx ∈ kerB+ ∩ S. Combining the conclusions above, we have (qB + ιS) ∗(x) = ιran(B++NS)(x) + 1 2 〈 x, Φ(B++NS)(x) 〉 , ∀x ∈ Rn.  Proposition 7.2.15 Let B : Rn → Rn be linear and monotone, and S be a linear subspace of Rn. Then F(B+NS )(x, x ∗) = ιS(x) + ιran(B++NS)(B ∗x+ x∗) + 14 〈 B∗x+ x∗, Φ(B++NS)(B ∗x+ x∗) 〉 , ∀(x, x∗) ∈ Rn × Rn. Proof. Let (x, x∗) ∈ Rn × Rn. By Fact 7.1.21, we have F(B+NS )(x, x ∗) = sup (y,y∗)∈gra(B+NS) { 〈y∗, x〉+ 〈x∗, y〉 − 〈y, y∗〉 } = sup y∈S { 〈By + S⊥, x〉+ 〈x∗, y〉 − 〈y, By + S⊥〉 } = ιS(x) + sup y∈S { 〈By, x〉+ 〈x∗, y〉 − 〈y,By〉 } (7.11) = ιS(x) + sup y∈Rn { 〈y, B∗x+ x∗〉 − 〈y,By〉 − ιS(y) } = ιS(x) + 2 sup y∈Rn { 〈y, 12 (B ∗x+ x∗)〉 − qB(y)− ιS(y) } (7.12) 125 Chapter 7. Proximal averages of monotone operators with linear graphs where (7.11) holds by y ∈ S. By (7.12), we have F(B+NS )(x, x ∗) = ιS(x) + 2(qB + ιS)∗ ( 1 2(B ∗x+ x∗) ) = ιS(x) + ιran(B++NS)(B ∗x+ x∗) (7.13) + 〈 1 2(B ∗x+ x∗),Φ(B++NS)( 1 2 (B ∗x+ x∗) )〉 = ιS(x) + ιran(B++NS)(B ∗x+ x∗) (7.14) + 14 〈 (B∗x+ x∗),Φ(B++NS)(B ∗x+ x∗) 〉 . (7.13) holds by Lemma 7.2.14 and Remark 7.1.22. (7.14) holds by Re- mark 7.1.22 and Lemma 7.2.10.  Remark 7.2.16 Let S be a linear subspace of Rn. By Fact 7.2.13, ranNS = S⊥. By Proposition 7.2.15, FNS (x, x ∗) = ιS(x) + ιS⊥(x ∗), ∀(x, x∗) ∈ Rn × Rn. Corollary 7.2.17 Let A : Rn ⇉ Rn be maximal monotone such that graA is a linear subspace. Then FA(x, x ∗) = ιdomA(x) + ιranA+(A ∗PdomAx+ x∗) + 14 〈 A∗PdomAx+ x∗,ΦA+(A ∗PdomAx+ x∗) 〉 ,∀(x, x∗) ∈ Rn × Rn. 126 Chapter 7. Proximal averages of monotone operators with linear graphs Proof. By Corollary 7.1.31, there exists a linear and monotone operator B : Rn → Rn such that A = B +NdomA. By Proposition 7.2.15 and Remark 7.1.25, we have FA(x, x ∗) = F(B+Ndom A)(x, x ∗) = ιdomA(x) + ιranA+(B ∗x+ x∗) + 14 〈 B∗x+ x∗, ΦA+(B ∗x+ x∗) 〉 = ιdomA(x) + ιranA+(−B ∗(−x) + x∗) + 14 〈 −B∗(−x) + x∗, ΦA+(−B ∗(−x) + x∗) 〉 = ιdomA(x) + ιranA+ ( −A∗PdomA(−x) + x∗ ) (7.15) + 14 〈 −A∗PdomA(−x) + x∗, ΦA+ ( −A∗PdomA(−x) + x∗ )〉 = ιdomA(x) + ιranA+ ( −A∗(−PdomAx) + x∗ ) (7.16) + 14 〈 −A∗(−PdomAx) + x∗, ΦA+ ( −A∗(−PdomAx) + x∗ )〉 = ιdomA(x) + ιranA+(A ∗PdomAx+ x∗) (7.17) + 14 〈 A∗PdomAx+ x∗, ΦA+ ( A∗PdomAx+ x∗ )〉 , where (7.15) holds by Fact 7.1.23 and Corollary 7.2.11 applied to B∗ and A∗. (7.16) holds by Fact 4.3.3, and (7.17) by Remark 7.1.5 and Proposi- tion 4.1.3(iii).  
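As a numerical cross-check of Proposition 7.2.15 (again an aside, not part of the thesis, assuming NumPy/SciPy), the Fitzpatrick supremum of B + NS can be evaluated directly over the graph, where it reduces to a concave quadratic over S, and then compared with the closed form. The construction of a point with B∗x + x∗ ∈ ran(B+ + NS), together with all names below, is illustrative only.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)
    n, k = 4, 2
    Q = np.linalg.qr(rng.standard_normal((n, k)))[0]   # orthonormal basis of S, dim S = 2
    PS = Q @ Q.T
    M, Km = rng.standard_normal((n, 2)), rng.standard_normal((n, n))
    B = M @ M.T + (Km - Km.T)                  # linear and monotone, with B+ singular
    Bp = (B + B.T) / 2

    s = Q @ rng.standard_normal(k)             # s in S
    u = (np.eye(n) - PS) @ rng.standard_normal(n)      # u in S^perp
    x = Q @ rng.standard_normal(k)             # x in S (otherwise F_{B+N_S}(x, x*) = +infinity)
    w = Bp @ s + u                             # w in ran(B+ + N_S), and s lies in its preimage
    xstar = w - B.T @ x                        # chosen so that B*x + x* = w

    # direct Fitzpatrick supremum over gra(B + N_S): the t in S^perp drops out since x, y in S,
    # leaving sup over y = Qc in S of <By, x> + <x*, y> - <y, By>, a concave quadratic in c
    def neg_obj(c):
        y = Q @ c
        return -((B @ y) @ x + xstar @ y - y @ (B @ y))

    direct = -minimize(neg_obj, np.zeros(k), method="BFGS").fun
    closed = 0.25 * (w @ s)                    # Proposition 7.2.15: (1/4)<w, Phi_{B+ + N_S}(w)>
    print(direct, closed)                      # the two values agree to solver tolerance

Here the value 0.25 * (w @ s) uses Lemma 7.2.4: 〈w, Φ_{B+ + NS}(w)〉 may be evaluated at any element of the preimage, and s is one such element by construction.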
Proposition 7.2.18 Let B : Rn → Rn be linear and monotone, and S be a 127 Chapter 7. Proximal averages of monotone operators with linear graphs linear subspace of Rn. Then F ∗(B+NS )(x ∗, x) = ιS(x) + ιS⊥(x ∗ −Bx) + 〈x,Bx〉, ∀(x, x∗) ∈ Rn × Rn. Proof. Let (x, x∗) ∈ Rn × Rn. By Proposition 7.2.15, F ∗(B+NS )(x ∗, x) = sup (y,y∗) { 〈y, x∗〉+ 〈y∗, x〉 − ιS(y)− ιran(B++NS)(B ∗y + y∗) − 14 〈 B∗y + y∗, Φ(B++NS)(B ∗y + y∗) 〉} = sup (y∈S,w∈S) { 〈y, x∗〉+ 〈B+w −B∗y + S⊥, x〉 − 14〈B+w,w〉 } (7.18) = ιS(x) + sup (y∈S,w∈S) { 〈y, x∗〉+ 〈B+w −B∗y, x〉 − 14〈B+w,w〉 } = ιS(x) + sup (y∈S, w∈S) { 〈y, x∗ −Bx〉+ 〈B+w, x〉 − 14 〈w,B+w〉 } = ιS(x) + ιS⊥(x ∗ −Bx) + sup w∈S { 〈B+w, x〉 − 1 4〈w,B+w〉 } = ιS(x) + ιS⊥(x ∗ −Bx) + 12 sup w∈S { 〈w, 2B+x〉 − qB(w) } (7.19) = ιS(x) + ιS⊥(x ∗ −Bx) + 12 sup w∈Rn { 〈w, 2B+x〉 − qB(w)− ιS(w) } = ιS(x) + ιS⊥(x ∗ −Bx) + 12(qB + ιS) ∗(2B+x) = ιS(x) + ιS⊥(x ∗ −Bx) + ιran(B++NS)(2B+x) (7.20) + 14〈2B+x,Φ(B++NS)(2B+x)〉 = ιS(x) + ιS⊥(x ∗ −Bx) + 〈x,Bx〉, (7.21) 128 Chapter 7. Proximal averages of monotone operators with linear graphs in which, (7.18) holds by B∗y + y∗ ∈ (B+ + NS)w = B+w + S⊥(w ∈ S) by Fact 7.1.21, and by Lemma 7.2.4. (7.19) holds by Remark 2.1.12. (7.20) holds by Lemma 7.2.14, and (7.21) by Lemma 7.2.4 since 2x ∈ (B++ NS) −1(2B+x) as x ∈ S by Fact 7.1.21.  Remark 7.2.19 Let S be a linear subspace of Rn. By Fact 7.2.13, ranNS = S⊥. Then by Proposition 7.2.18, F ∗NS (x ∗, x) = ιS(x) + ιS⊥(x ∗), ∀(x, x∗) ∈ Rn × Rn. Corollary 7.2.20 Let S be a linear subspace of Rn. Then FNS is auto- conjugate. Proof. Combine Remark 7.2.16 and Remark 7.2.19.  Remark 7.2.21 Remark 7.2.16 and Remark 7.2.19 are special cases of [8, Example 3.1]. Remark 7.2.22 Let B : Rn → Rn be linear and monotone, and S be a linear subspace of Rn such that A = B+NS. Suppose x ∈ S. Then 〈Ax, x〉 = 〈x,Bx〉. Proof. Apply Ax = Bx+ S⊥, which follows from Fact 7.1.21.  Corollary 7.2.23 Let A : Rn ⇉ Rn be maximal monotone such that graA is a linear subspace. Then F ∗A(x ∗, x) = ιdomA(x) + ι(domA)⊥(x ∗ −APdomAx) + 〈x,APdomAx〉, ∀(x, x∗) ∈ Rn × Rn. 129 Chapter 7. Proximal averages of monotone operators with linear graphs Proof. Let (x, x∗) ∈ Rn × Rn. By Corollary 7.1.31, there exists a linear and monotone operator B : Rn → Rn such that A = B +NdomA. Then by Proposition 7.2.18, F ∗A(x ∗, x) = ιdomA(x) + ι(domA)⊥(x ∗ −Bx) + 〈x,Bx〉. (7.22) Suppose x ∈ domA. Since domA is a subspace of Rn and x∗ −Bx ∈ (domA)⊥ ⇔ x∗ −Bx+ (domA)⊥ ⊂ (domA)⊥. By Fact 7.1.21, x∗ −Bx+ (domA)⊥ = x∗ −Ax. Thus ι(domA)⊥(x ∗ −Bx) = ι(domA)⊥(x ∗ −Ax) = ι(domA)⊥(x ∗ −APdomAx). (7.23) By Remark 7.2.22, 〈x,Bx〉 = 〈Ax, x〉 = 〈APdomAx, x〉. (7.24) Thus by (7.22), (7.23) and (7.24), F ∗A(x ∗, x) = ιdomA(x) + ι(domA)⊥(x ∗ −APdomAx) + 〈APdomAx, x〉, ∀(x, x∗) ∈ Rn × Rn.  130 Chapter 7. Proximal averages of monotone operators with linear graphs 7.3 The third main result Lemma 7.3.1 Let B : Rn → Rn be linear and monotone, and S be a linear subspace of Rn. Suppose that x ∈ S, x∗ ∈ Rn and y∗ ∈ Rn. Then ιran(B++NS)(B ∗x+ y∗) + ιS⊥(2x ∗ − y∗ −Bx) = ιran(B++NS)(x ∗ −Bx) + ιS⊥(2x ∗ − y∗ −Bx). (7.25) Proof. We consider two cases. Case 1: 2x∗ − y∗ −Bx /∈ S⊥. Clear. Case 2: 2x∗− y∗−Bx ∈ S⊥. Let t ∈ S⊥ such that y∗ = 2x∗−Bx+ t. Thus B∗x+ y∗ = B∗x+ 2x∗ −Bx+ t = Bx+B∗x+ 2x∗ −Bx−Bx+ t = 2x∗ − 2Bx+ 2B+x+ t. (7.26) On the other hand, since t ∈ S⊥, Fact 7.1.21 implies 2B+x+ t ∈ (B+ +NS)(2x). (7.27) Then by Remark 7.1.22, (7.26) and (7.27), we have B∗x+ y∗ ∈ ran(B+ +NS)⇔ x∗ −Bx ∈ ran(B+ +NS). (7.28) Thus ιran(B++NS)(B ∗x+ y∗) = ιran(B++Ns)(x ∗ −Bx). Hence (7.25) holds.  131 Chapter 7. 
Proximal averages of monotone operators with linear graphs Corollary 7.3.2 Let B : Rn → Rn be linear and monotone, and S be a linear subspace of Rn. Suppose that x, x∗, y∗ ∈ Rn. Then ιS(x) + ιran(B++NS)(B ∗x+ y∗) + ιS⊥(2x ∗ − y∗ −Bx) = ιS(x) + ιran(B++NS)(x ∗ −Bx) + ιS⊥(2x ∗ − y∗ −Bx). Proof. Apply Lemma 7.3.1.  Proposition 7.3.3 Let A : Rn ⇉ Rn be maximal monotone such that graA is a linear subspace. Then hFA(x, x ∗) = ιdomA(x) + ιranA+(x ∗ −APdomAx) + 〈x, x∗〉 + 12 〈 x∗ −APdomAx, ΦA+(x ∗ −APdomAx) 〉 , ∀(x, x∗) ∈ Rn ×Rn. Proof. By Corollary 7.1.31, there exists a linear and monotone operator B : Rn → Rn such that A = B +NS , 132 Chapter 7. Proximal averages of monotone operators with linear graphs where S = domA. Let (x, x∗) ∈ Rn × Rn. By Proposition 7.2.15 and Proposition 7.2.18, hFA(x, x ∗) = inf y∗ { 1 2FA(x, 2y ∗) + 12F ∗ A ( 2(x∗ − y∗), x )} = inf y∗ { ιS(x) + ιran(B++NS)(B ∗x+ 2y∗) + 18 〈 B∗x+ 2y∗, Φ(B++NS)(B ∗x+ 2y∗) 〉 + ιS⊥(2x ∗ − 2y∗ −Bx) + 12〈x,Bx〉 } = ιS(x) + 1 2〈x,Bx〉+ infy∗ { ιran(B++NS)(B ∗x+ 2y∗) + 18 〈 B∗x+ 2y∗, Φ(B++NS)(B ∗x+ 2y∗) 〉 + ιS⊥(2x ∗ − 2y∗ −Bx) } = ιS(x) + ιran(B++NS)(x ∗ −Bx) + 12〈x,Bx〉 (7.29) + inf y∗ { 1 8 〈 B∗x+ 2y∗, Φ(B++NS)(B ∗x+ 2y∗) 〉 + ιS⊥(2x ∗ − 2y∗ −Bx) } = ιS(x) + ιran(B++NS)(x ∗ −Bx) + 12〈x,Bx〉 (7.30) + inf t∈S⊥ { 1 8 〈 B∗x+ 2x∗ −Bx+ t, Φ(B++NS)(B ∗x+ 2x∗ −Bx+ t) 〉} , in which, (7.29) holds by Corollary 7.3.2, (7.30) by 2y∗ = 2x∗ −Bx+ t, t ∈ S⊥. If x /∈ S or x∗ −Bx /∈ ran(B+ +NS), hFA(x, x ∗) =∞ by (7.30). Now suppose x ∈ S = domA and x∗ − Bx ∈ ran(B+ + NS). Then there exists y0 ∈ S such that x ∗ − Bx ∈ (B+ + NS)y0. Thus by Fact 7.1.21, 133 Chapter 7. Proximal averages of monotone operators with linear graphs x∗ −Bx ∈ B+y0 + S⊥. Then 〈B+x, y0〉 = 〈x,B+y0〉 = 〈x, x ∗ −Bx〉 = 〈x, x∗〉 − 〈x,Bx〉. (7.31) Note that B∗x+ 2x∗ −Bx+ t = Bx+B∗x+ 2x∗ −Bx−Bx+ t = 2x∗ − 2Bx+ 2B+x+ t. (7.32) By Fact 7.1.21, 2B+x+ t ∈ (B+ +NS)(2x). (7.33) Then by Remark 7.1.22, (7.32) and (7.33), B∗x+ 2x∗ −Bx+ t ∈ (B+ +NS)(2y0 + 2x). (7.34) 134 Chapter 7. Proximal averages of monotone operators with linear graphs Then by (7.34), (7.31), (7.30) and Lemma 7.2.4, hFA(x, x ∗) = 12〈x,Bx〉+ 1 8 〈 B+(2y0 + 2x), 2y0 + 2x 〉 = 12〈x,Bx〉+ 1 2 〈 B+(y0 + x), y0 + x 〉 = 12〈x,Bx〉+ 1 2 〈B+y0, y0〉+ 〈B+x, y0〉+ 1 2〈B+x, x〉 = 12〈x,Bx〉+ 1 2 〈B+y0, y0〉+ 〈x, x ∗〉 − 〈x,Bx〉+ 12〈B+x, x〉 = 12〈B+y0, y0〉+ 〈x, x ∗〉 (by Remark 2.1.12) = 〈x, x∗〉+ 12〈x ∗ −Bx, (B+ +NS)−1(x∗ −Bx)〉 = 〈x, x∗〉+ 12〈x ∗ −Bx, (A+)−1(x∗ −Bx)〉 (by Remark 7.1.25) = 〈x, x∗〉+ 12 〈 x∗ −Ax, (A+)−1(x∗ −Ax) 〉 (by Lemma 7.2.5) = 〈x, x∗〉+ 12 〈 x∗ −Ax, ΦA+(x ∗ −Ax) 〉 (by Lemma 7.2.5) = 〈x, x∗〉+ 12 〈 x∗ −APdomAx, ΦA+(x ∗ −APdomAx) 〉 . (7.35) Thus combining (7.30) and (7.35), hFA(x, x ∗) = ιdomA(x) + ιranA+(x ∗ −Bx) + 〈x, x∗〉 (by Remark 7.1.25) + 12 〈 x∗ −APdomAx, ΦA+(x ∗ −APdomAx) 〉 = ιdomA(x) + ιranA+(x ∗ −APdomAx) + 〈x, x∗〉 (by Lemma 7.2.5) + 12 〈 x∗ −APdomAx, ΦA+(x ∗ −APdomAx) 〉 , ∀(x, x∗) ∈ Rn × Rn.  135 Chapter 7. Proximal averages of monotone operators with linear graphs Remark 7.3.4 Let S be a linear subspace of Rn and A : Rn → Rn be in- vertible. Then dimAS = dimS. Proposition 7.3.5 Let S be a linear subspace of Rn and A : Rn → Rn be linear and monotone. Suppose that AS ⊂ S. Then (Id+A)S = S. Proof. By assumptions, (Id+A)S is a linear subspace and (Id+A)S ⊂ S +AS ⊂ S. (7.36) Since (Id+A) is invertible by Proposition 5.2.6, by Remark 7.3.4, dim(Id+A)S = dimS. Then by (7.36), (Id+A)S = S.  Corollary 7.3.6 Let S be a linear subspace of Rn and A : Rn → Rn be linear and monotone. Suppose that ranA ⊂ S. Then (Id+A)−1A+S ⊂ S. Proof. 
By Fact 2.1.17, ranA∗ = ranA ⊂ S. Thus A+S ⊂ ranA+ ⊂ S. By Proposition 7.3.5, A+S ⊂ S = (Id+A)S. Then (Id+A) −1A+S ⊂ (Id+A)−1(Id+A)S = S.  Lemma 7.3.7 Let B : Rn → Rn be linear and monotone, and S be a linear subspace of Rn. Suppose that x, y ∈ S and x∗, y∗ ∈ Rn. Then ιran(B++NS) ( B∗(x+ y) + x∗ + y∗ ) + ιS⊥ ( x∗ − y∗ −B(x− y) ) = ιran(B++NS)(x ∗ −Bx) + ιS⊥ ( x∗ − y∗ −B(x− y) ) . (7.37) Proof. We consider two cases. Case 1: x∗ − y∗ −B(x− y) /∈ S⊥. Clear. 136 Chapter 7. Proximal averages of monotone operators with linear graphs Case 2: x∗−y∗−B(x−y) ∈ S⊥. Let t ∈ S⊥ such that y∗ = x∗−B(x−y)+t. Thus B∗(x+ y) + x∗ + y∗ = B∗(x+ y) + 2x∗ −B(x− y) + t = B∗(x+ y) +B(x+ y)−B(x+ y) + 2x∗ −B(x− y) + t = 2B+(x+ y) + t+ 2x ∗ − 2Bx. (7.38) On the other hand, since t ∈ S⊥, Fact 7.1.21 implies 2B+(x+ y) + t ∈ (B+ +NS)(2x + 2y). (7.39) Then by Remark 7.1.22, (7.39) and (7.38), we have B∗(x+ y) + x∗ + y∗ ∈ ran(B+ +NS)⇔ x∗ −Bx ∈ ran(B+ +NS). Thus ιran(B++NS) ( B∗(x+y)+x∗+y∗ ) = ιran(B++NS)(x ∗−Bx). Hence (7.37) holds.  Corollary 7.3.8 Let B : Rn → Rn be linear and monotone, and S be a linear subspace of Rn. Suppose that x ∈ Rn, y ∈ S and x∗, y∗ ∈ Rn. Then ιS(x) + ιran(B++NS) ( B∗(x+ y) + x∗ + y∗ ) + ιS⊥ ( x∗ − y∗ −B(x− y) ) = ιS(x) + ιran(B++NS)(x ∗ −Bx) + ιS⊥ ( x∗ − y∗ −B(x− y) ) . Proof. Apply Lemma 7.3.7.  137 Chapter 7. Proximal averages of monotone operators with linear graphs Theorem 7.3.9 Let A : Rn ⇉ Rn be maximal monotone such that graA is a linear subspace. Then P (FA, F ∗ A ⊺) = hFA . Proof. By Corollary 7.1.31, A = B +NS , where B = PSQAPS , S = domA. Let (x, x ∗) ∈ Rn ×Rn. By Fact 5.2.2, Fact 5.2.9, Proposition 7.2.15 and Proposition 7.2.18, P (FA, F ∗ A ⊺)(x, x∗) = inf (y,y∗) { 1 2F(B+NS)(x+ y, x ∗ + y∗) + 12F ∗⊺ (B+NS ) (x− y, x∗ − y∗) + 12‖y‖ 2 + 12‖y ∗‖2 } = inf (y,y∗) { ιS(x+ y) + ιran(B++NS) ( B∗(x+ y) + x∗ + y∗ ) + ιS(x− y) + 18 〈 B∗(x+ y) + x∗ + y∗, Φ(B++NS) ( B∗(x+ y) + x∗ + y∗ )〉 + ιS⊥ ( x∗ − y∗ −B(x− y) ) + 12 〈 x− y,B(x− y) 〉 + 12‖y‖ 2 + 12‖y ∗‖2 } = ιS(x) + inf y∈S, y∗∈Rn { ιran(B++NS) ( B∗(x+ y) + x∗ + y∗ ) (7.40) + 18 〈 B∗(x+ y) + x∗ + y∗, Φ(B++NS) ( B∗(x+ y) + x∗ + y∗ )〉 + ιS⊥ ( x∗ − y∗ −B(x− y) ) + 12 〈 x− y,B(x− y) 〉 + 12‖y‖ 2 + 12‖y ∗‖2 } , 138 Chapter 7. Proximal averages of monotone operators with linear graphs where (7.40) holds by ιS(x+ y) = 0, ιS(x− y) = 0⇔ x, y ∈ S. By (7.40) and Corollary 7.3.8, P (FA, F ∗ A ⊺)(x, x∗) = ιS(x) + ιran(B++NS)(x ∗ −Bx) + inf y∈S, y∗∈Rn { 1 8 〈 B∗(x+ y) + x∗ + y∗, Φ(B++NS) ( B∗(x+ y) + x∗ + y∗ )〉 + ιS⊥ ( x∗ − y∗ −B(x− y) ) + 12 〈 x− y,B(x− y) 〉 + 12‖y‖ 2 + 12‖y ∗‖2 } = ιS(x) + ιran(B++NS)(x ∗ −Bx) + inf y∈S, t∈S⊥ { 1 8 〈 C0, Φ(B++NS)(C0) 〉 (7.41) + 12 〈 x− y,B(x− y) 〉 + 12‖y‖ 2 + 12‖x ∗ −B(x− y) + t‖2 } , where (7.41) holds by y∗ = x∗ −B(x− y) + t, t ∈ S⊥, where C0 := B∗(x+ y) + 2x∗ −B(x− y) + t (B∗(x+ y) + x∗ + y∗ = C0). Suppose that x ∈ S and x∗ −Bx ∈ ran(B+ +NS). Then exists y0 ∈ S such that x∗−Bx ∈ (B++NS)y0. Thus by Fact 7.1.21, x∗−Bx ∈ (B++NS)y0 = B+y0 + S ⊥. Thus 〈x,Bx+B+y0〉 = 〈x, x ∗〉 (by x ∈ S). (7.42) 139 Chapter 7. Proximal averages of monotone operators with linear graphs On one hand, C0 = B ∗(x+ y) +B(x+ y)−B(x+ y) + 2x∗ −B(x− y) + t = 2B+(x+ y) + t+ 2x ∗ − 2Bx. (7.43) On the other hand, since t ∈ S⊥, Fact 7.1.21 implies 2B+(x+ y) + t ∈ (B+ +NS)(2x + 2y). (7.44) Then by Remark 7.1.22, (7.44) and (7.43) C0 ∈ (B+ +NS)(2x+ 2y + 2y0). 
Then by (7.45), (7.41) and Lemma 7.2.4,
\begin{align*}
P(F_A,F_A^{*\intercal})(x,x^*)&=\inf_{y\in S,\ t\in S^\perp}\Big\{\tfrac18\big\langle 2B_+(x+y+y_0),\,2x+2y+2y_0\big\rangle+\tfrac12\big\langle x-y,B(x-y)\big\rangle+\tfrac12\|y\|^2+\tfrac12\|x^*-B(x-y)+t\|^2\Big\}\\
&=\inf_{y\in S,\ t'\in S^\perp}\Big\{\tfrac12\big\langle B_+(x+y+y_0),\,x+y+y_0\big\rangle+\tfrac12\big\langle x-y,B(x-y)\big\rangle+\tfrac12\|y\|^2+\tfrac12\|B_+y_0+By+t'\|^2\Big\}\\
&\le\inf_{y\in S}\Big\{\tfrac12\big\langle B_+(x+y+y_0),\,x+y+y_0\big\rangle+\tfrac12\big\langle x-y,B(x-y)\big\rangle+\tfrac12\|y\|^2+\tfrac12\|B_+y_0+By\|^2\Big\}.
\end{align*}
Note that (by Remark 2.1.12)
\begin{align*}
&\tfrac12\big\langle B_+(x+y+y_0),\,x+y+y_0\big\rangle+\tfrac12\big\langle x-y,B(x-y)\big\rangle\\
&=\tfrac12\big\langle B_+(x+y_0)+B_+y,\,(x+y_0)+y\big\rangle+\tfrac12\big\langle x-y,B_+(x-y)\big\rangle\\
&=\tfrac12\big\langle B_+(x+y_0),x+y_0\big\rangle+\langle y,B_+(x+y_0)\rangle+\tfrac12\langle y,B_+y\rangle+\tfrac12\langle x,B_+x\rangle+\tfrac12\langle y,B_+y\rangle-\langle y,B_+x\rangle\\
&=\tfrac12\big\langle B_+(x+y_0),x+y_0\big\rangle+\langle y,B_+y_0\rangle+\langle y,B_+y\rangle+\tfrac12\langle x,B_+x\rangle\\
&=\tfrac12\langle B_+x,x\rangle+\langle B_+y_0,x\rangle+\tfrac12\langle B_+y_0,y_0\rangle+\langle y,B_+y_0\rangle+\langle y,B_+y\rangle+\tfrac12\langle x,B_+x\rangle\\
&=\langle x,Bx\rangle+\langle B_+y_0,x\rangle+\tfrac12\langle B_+y_0,y_0\rangle+\langle y,B_+y_0\rangle+\langle y,B_+y\rangle\\
&=\big\langle x,Bx+B_+y_0\big\rangle+\tfrac12\langle B_+y_0,y_0\rangle+\langle y,B_+y_0\rangle+\langle y,B_+y\rangle\\
&=\langle x,x^*\rangle+\tfrac12\langle B_+y_0,y_0\rangle+\langle y,B_+y_0\rangle+\langle y,B_+y\rangle\quad(\text{by (7.42)}).
\end{align*}
Thus
\[
P(F_A,F_A^{*\intercal})(x,x^*)\le\tfrac12\langle y_0,B_+y_0\rangle+\langle x,x^*\rangle
+\inf_{y\in S}\Big\{\langle y,B_+y_0\rangle+\langle y,B_+y\rangle+\tfrac12\|y\|^2+\tfrac12\|B_+y_0+By\|^2\Big\}.\tag{7.46}
\]
Note that
\begin{align*}
&\langle y,B_+y_0\rangle+\langle y,B_+y\rangle+\tfrac12\|y\|^2+\tfrac12\|B_+y_0+By\|^2\\
&=\langle y,B_+y_0\rangle+\langle y,B_+y\rangle+\tfrac12\|y\|^2+\langle y,B^*B_+y_0\rangle+\tfrac12\langle y,B^*By\rangle+\tfrac12\|B_+y_0\|^2\\
&=\big\langle y,(\operatorname{Id}+B^*)B_+y_0\big\rangle+\tfrac12\big\langle y,\big(B+B^*+\operatorname{Id}+B^*B\big)y\big\rangle+\tfrac12\|B_+y_0\|^2\\
&=\big\langle y,(\operatorname{Id}+B^*)B_+y_0\big\rangle+\tfrac12\big\langle y,(\operatorname{Id}+B^*)(\operatorname{Id}+B)y\big\rangle+\tfrac12\|B_+y_0\|^2.
\end{align*}
Then by (7.46), we have
\begin{align*}
P(F_A,F_A^{*\intercal})(x,x^*)&\le\tfrac12\langle y_0,B_+y_0\rangle+\tfrac12\|B_+y_0\|^2+\langle x,x^*\rangle
+\inf_{y\in S}\Big\{\big\langle y,(\operatorname{Id}+B^*)B_+y_0\big\rangle+\tfrac12\big\langle y,(\operatorname{Id}+B^*)(\operatorname{Id}+B)y\big\rangle\Big\}\tag{7.47}\\
&\le\tfrac12\langle y_0,B_+y_0\rangle+\langle x,x^*\rangle+\tfrac12\|B_+y_0\|^2-\tfrac12\|B_+y_0\|^2\tag{7.48}\\
&=\tfrac12\langle y_0,B_+y_0\rangle+\langle x,x^*\rangle\\
&=\langle x,x^*\rangle+\tfrac12\big\langle x^*-Bx,\,(B_++N_S)^{-1}(x^*-Bx)\big\rangle\tag{7.49}\\
&=\langle x,x^*\rangle+\tfrac12\big\langle x^*-Ax,\,(A_+)^{-1}(x^*-Ax)\big\rangle\tag{7.50}\\
&=\langle x,x^*\rangle+\tfrac12\big\langle x^*-Ax,\,\Phi_{A_+}(x^*-Ax)\big\rangle\quad\text{(by Lemma 7.2.5)}\\
&=\langle x,x^*\rangle+\tfrac12\big\langle x^*-AP_{\operatorname{dom}A}x,\,\Phi_{A_+}(x^*-AP_{\operatorname{dom}A}x)\big\rangle,\tag{7.51}
\end{align*}
in which (7.48) holds by letting $y=-(\operatorname{Id}+B)^{-1}B_+y_0\in S$, where $y\in S$ by Corollary 7.3.6; (7.49) holds by Lemma 7.2.4; and (7.50) holds by Remark 7.1.25 and Lemma 7.2.5.

Combining (7.41) and (7.51),
\begin{align*}
P(F_A,F_A^{*\intercal})(x,x^*)&\le\iota_S(x)+\iota_{\operatorname{ran}(B_++N_S)}(x^*-Bx)+\langle x,x^*\rangle
+\tfrac12\big\langle x^*-AP_{\operatorname{dom}A}x,\,\Phi_{A_+}(x^*-AP_{\operatorname{dom}A}x)\big\rangle\\
&=\iota_{\operatorname{dom}A}(x)+\iota_{\operatorname{ran}A_+}(x^*-AP_{\operatorname{dom}A}x)+\langle x,x^*\rangle
+\tfrac12\big\langle x^*-AP_{\operatorname{dom}A}x,\,\Phi_{A_+}(x^*-AP_{\operatorname{dom}A}x)\big\rangle\\
&\qquad\text{(by Remark 7.1.25 and Lemma 7.2.5)}\\
&=h_{F_A}(x,x^*)\quad\text{(by Proposition 7.3.3)},\qquad\forall(x,x^*)\in\mathbb{R}^n\times\mathbb{R}^n.
\end{align*}
By Fact 5.2.11, Fact 5.2.12 and Proposition 5.1.8, $P(F_A,F_A^{*\intercal})=h_{F_A}$. $\square$

Corollary 7.3.10 Let $A:\mathbb{R}^n\rightrightarrows\mathbb{R}^n$ be maximal monotone such that $\operatorname{gra}A$ is a linear subspace. Then $P(F_A,F_A^{*\intercal})=h_{F_A}=h_{F_A^{*\intercal}}$.

Proof. Combine Theorem 7.3.9 and Proposition 5.3.13. $\square$

Theorem 7.3.11 Let $B:\mathbb{R}^n\to\mathbb{R}^n$ be linear and monotone, and let $S$ be a linear subspace of $\mathbb{R}^n$. Then
\begin{align*}
P\big(F_{B+N_S},F_{B+N_S}^{*\intercal}\big)(x,x^*)&=h_{F_{B+N_S}}(x,x^*)=h_{F_{B+N_S}^{*\intercal}}(x,x^*)\\
&=\iota_S(x)+\iota_{\operatorname{ran}(B_++N_S)}(x^*-Bx)+\langle x,x^*\rangle
+\tfrac12\big\langle x^*-Bx,\,\Phi_{(B_++N_S)}(x^*-Bx)\big\rangle,\qquad\forall(x,x^*)\in\mathbb{R}^n\times\mathbb{R}^n.
\end{align*}

Proof. Let $A=B+N_S$ and let $(x,x^*)\in\mathbb{R}^n\times\mathbb{R}^n$. By Fact 7.2.12 and Remark 7.1.22, $A$ is maximal monotone with a linear graph. Thus by Proposition 7.3.3, Corollary 7.3.10, Remark 7.1.25 and Corollary 7.2.11,
\begin{align*}
P\big(F_{B+N_S},F_{B+N_S}^{*\intercal}\big)(x,x^*)&=h_{F_{B+N_S}}(x,x^*)=h_{F_{B+N_S}^{*\intercal}}(x,x^*)\\
&=\iota_{\operatorname{dom}A}(x)+\iota_{\operatorname{ran}A_+}(x^*-AP_{\operatorname{dom}A}x)+\langle x,x^*\rangle
+\tfrac12\big\langle x^*-AP_{\operatorname{dom}A}x,\,\Phi_{A_+}(x^*-AP_{\operatorname{dom}A}x)\big\rangle\\
&=\iota_S(x)+\iota_{\operatorname{ran}(B_++N_S)}(x^*-Bx)+\langle x,x^*\rangle
+\tfrac12\big\langle x^*-Bx,\,\Phi_{(B_++N_S)}(x^*-Bx)\big\rangle.\qquad\square
\end{align*}
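As a quick illustration of Theorem 7.3.11 (a sketch only), take $S=\mathbb{R}^n$, so that $N_S\equiv\{0\}$, and let $B$ be skew, that is, $B^*=-B$. Then $B_+=0$, $\operatorname{ran}(B_++N_S)=\{0\}$, and on the set where the indicator functions vanish the quadratic term is $\tfrac12\langle 0,\Phi_{(B_++N_S)}(0)\rangle=0$, so the theorem yields
\[
h_{F_B}(x,x^*)=\iota_{\{0\}}(x^*-Bx)+\langle x,x^*\rangle=\iota_{\operatorname{gra}B}(x,x^*)+\langle x,x^*\rangle=\iota_{\operatorname{gra}B}(x,x^*),
\]
the last equality because $\langle x,Bx\rangle=0$ for skew $B$. Thus the proximal average $P(F_B,F_B^{*\intercal})$ is the indicator of the graph, an auto-conjugate representer of the maximal monotone skew operator $B$ (note that $F_B=\iota_{\operatorname{gra}B}$ in this case).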
7.4 The Fitzpatrick function of the sum

Fact 7.4.1 Let $A,B:\mathbb{R}^n\rightrightarrows\mathbb{R}^n$ be monotone. Then
\[
F_{A+B}\le F_A\,\Box_2\,F_B.\tag{7.52}
\]
Proof. See [8, Proposition 4.2]. $\square$

In (7.52), equality does not always hold; see [8, Example 4.7]. It would be interesting to characterize the pairs of monotone operators $(A,B)$ that satisfy the identity $F_{A+B}=F_A\,\Box_2\,F_B$.

Lemma 7.4.2 Let $B:\mathbb{R}^n\to\mathbb{R}^n$ be linear and monotone, and let $S$ be a linear subspace of $\mathbb{R}^n$. Then $F_{B+N_S}=F_B\,\Box_2\,F_{N_S}$.

Proof. Let $(x,x^*)\in\mathbb{R}^n\times\mathbb{R}^n$. By Fact 5.2.4 and Remark 7.2.16, we have
\begin{align*}
(F_B\,\Box_2\,F_{N_S})(x,x^*)&=\inf_{y^*}\big\{F_B(x,y^*)+F_{N_S}(x,x^*-y^*)\big\}\\
&=\inf_{y^*}\big\{\iota_{\operatorname{ran}B_+}(y^*+B^*x)+\tfrac12q_{(B_+)^\dagger}(y^*+B^*x)+\iota_S(x)+\iota_{S^\perp}(x^*-y^*)\big\}\\
&=\iota_S(x)+\inf_{y^*}\big\{\iota_{\operatorname{ran}B_+}(y^*+B^*x)+\tfrac12q_{(B_+)^\dagger}(y^*+B^*x)+\iota_{S^\perp}(x^*-y^*)\big\}\\
&\le\iota_S(x)+\iota_{\operatorname{ran}(B_++N_S)}(x^*+B^*x)\tag{7.53}\\
&\qquad+\inf_{y^*}\big\{\iota_{\operatorname{ran}B_+}(y^*+B^*x)+\tfrac12q_{(B_+)^\dagger}(y^*+B^*x)+\iota_{S^\perp}(x^*-y^*)\big\}.
\end{align*}
Next we will show that $(F_B\,\Box_2\,F_{N_S})(x,x^*)\le F_{B+N_S}(x,x^*)$.

Now suppose $x\in S$ and $x^*+B^*x\in\operatorname{ran}(B_++N_S)$. Then there exists $y_0\in S$ such that
\[
x^*+B^*x\in(B_++N_S)y_0.\tag{7.54}
\]
By Fact 7.1.21, there exists $t\in S^\perp$ such that $x^*+B^*x=B_+y_0+t$. Let $y_0^*=x^*-t$. Then by $x^*+B^*x=B_+y_0+t$,
\[
y_0^*+B^*x=x^*+B^*x-t=B_+y_0.\tag{7.55}
\]
By (7.53), (7.55) and Lemma 7.2.4,
\begin{align*}
(F_B\,\Box_2\,F_{N_S})(x,x^*)&\le\iota_{\operatorname{ran}B_+}(y_0^*+B^*x)+\tfrac12q_{(B_+)^\dagger}(y_0^*+B^*x)+\iota_{S^\perp}(x^*-y_0^*)\\
&=\tfrac12q_{(B_+)^\dagger}(B_+y_0)\\
&=\tfrac12q_{B_+}(y_0)\quad\text{(by Corollary 2.2.16)}\\
&=\tfrac14\big\langle B^*x+x^*,\,(B_++N_S)^{-1}(B^*x+x^*)\big\rangle\quad\text{(by (7.54))}\\
&=\tfrac14\big\langle B^*x+x^*,\,\Phi_{(B_++N_S)}(B^*x+x^*)\big\rangle.\tag{7.56}
\end{align*}
Thus, combining (7.53) and (7.56),
\begin{align*}
(F_B\,\Box_2\,F_{N_S})(x,x^*)&\le\iota_S(x)+\iota_{\operatorname{ran}(B_++N_S)}(x^*+B^*x)+\tfrac14\big\langle B^*x+x^*,\,\Phi_{(B_++N_S)}(B^*x+x^*)\big\rangle\\
&=F_{B+N_S}(x,x^*)\quad\text{(by Proposition 7.2.15)},\qquad\forall(x,x^*)\in\mathbb{R}^n\times\mathbb{R}^n.
\end{align*}
By Fact 7.4.1, $F_{B+N_S}=F_B\,\Box_2\,F_{N_S}$. $\square$

Fact 7.4.3 Let $A,B:\mathbb{R}^n\to\mathbb{R}^n$ be linear and monotone. Then $F_{A+B}=F_A\,\Box_2\,F_B$.

Proof. See [4, Corollary 5.7]. $\square$

Fact 7.4.4 Let $S_1,S_2$ be linear subspaces of $\mathbb{R}^n$. Then $F_{N_{S_1}+N_{S_2}}=F_{N_{S_1}}\,\Box_2\,F_{N_{S_2}}$.

Proof. See [8, Example 4.4]. $\square$

Fact 7.4.5 Let $S_1,S_2$ be linear subspaces of $\mathbb{R}^n$. Then $N_{S_1}+N_{S_2}=N_{S_1\cap S_2}$.

Proof. Clearly, $\operatorname{dom}(N_{S_1}+N_{S_2})=S_1\cap S_2=\operatorname{dom}N_{S_1\cap S_2}$. Let $x\in S_1\cap S_2$. By Fact 7.1.21,
\[
(N_{S_1}+N_{S_2})(x)=S_1^\perp+S_2^\perp=(S_1\cap S_2)^\perp\quad\text{(by [27, Exercise 3.17])}\quad=N_{S_1\cap S_2}(x).\qquad\square
\]

Proposition 7.4.6 Let $B_1,B_2:\mathbb{R}^n\to\mathbb{R}^n$ be linear and monotone, and let $S_1,S_2$ be linear subspaces of $\mathbb{R}^n$ such that $A_1=B_1+N_{S_1}$ and $A_2=B_2+N_{S_2}$. Then $F_{A_1+A_2}=F_{A_1}\,\Box_2\,F_{A_2}$.

Proof. Let $(x,x^*)\in\mathbb{R}^n\times\mathbb{R}^n$. Then we have
\begin{align*}
(F_{A_1}\,\Box_2\,F_{A_2})(x,x^*)&=\big(F_{B_1+N_{S_1}}\,\Box_2\,F_{B_2+N_{S_2}}\big)(x,x^*)\\
&=\inf_{y^*+z^*=x^*}\big\{F_{B_1+N_{S_1}}(x,y^*)+F_{B_2+N_{S_2}}(x,z^*)\big\}\\
&=\inf_{y^*+z^*=x^*}\Big\{\inf_{y_1^*+y_2^*=y^*}\big\{F_{B_1}(x,y_1^*)+F_{N_{S_1}}(x,y_2^*)\big\}
+\inf_{z_1^*+z_2^*=z^*}\big\{F_{B_2}(x,z_1^*)+F_{N_{S_2}}(x,z_2^*)\big\}\Big\}\quad\text{(by Lemma 7.4.2)}\\
&=\inf_{y_1^*+y_2^*+z_1^*+z_2^*=x^*}\big\{F_{B_1}(x,y_1^*)+F_{N_{S_1}}(x,y_2^*)+F_{B_2}(x,z_1^*)+F_{N_{S_2}}(x,z_2^*)\big\}.
\end{align*}
Thus
\begin{align*}
(F_{A_1}\,\Box_2\,F_{A_2})(x,x^*)&=\inf_{u^*+v^*=x^*}\Big\{\inf_{u_1^*+u_2^*=u^*}\big\{F_{B_1}(x,u_1^*)+F_{B_2}(x,u_2^*)\big\}
+\inf_{v_1^*+v_2^*=v^*}\big\{F_{N_{S_1}}(x,v_1^*)+F_{N_{S_2}}(x,v_2^*)\big\}\Big\}\\
&=\inf_{u^*+v^*=x^*}\big\{F_{B_1+B_2}(x,u^*)+F_{N_{S_1}+N_{S_2}}(x,v^*)\big\}\quad\text{(by Fact 7.4.3 and Fact 7.4.4)}\\
&=\inf_{u^*+v^*=x^*}\big\{F_{B_1+B_2}(x,u^*)+F_{N_{S_1\cap S_2}}(x,v^*)\big\}\quad\text{(by Fact 7.4.5)}\\
&=F_{B_1+B_2+N_{S_1\cap S_2}}(x,x^*)\quad\text{(by Lemma 7.4.2)}\\
&=F_{B_1+B_2+N_{S_1}+N_{S_2}}(x,x^*)\quad\text{(by Fact 7.4.5)}\\
&=F_{A_1+A_2}(x,x^*).\qquad\square
\end{align*}

Corollary 7.4.7 Let $A,B:\mathbb{R}^n\rightrightarrows\mathbb{R}^n$ be maximal monotone such that $\operatorname{gra}A$ and $\operatorname{gra}B$ are linear subspaces. Then $F_{A+B}=F_A\,\Box_2\,F_B$.

Proof. By Corollary 7.1.31, there exist linear and monotone operators $A_1,B_1:\mathbb{R}^n\to\mathbb{R}^n$ such that $A=A_1+N_{\operatorname{dom}A}$ and $B=B_1+N_{\operatorname{dom}B}$. Since $\operatorname{dom}A$ and $\operatorname{dom}B$ are linear subspaces of $\mathbb{R}^n$, Proposition 7.4.6 yields $F_{A+B}=F_A\,\Box_2\,F_B$. $\square$
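To see the identity of Corollary 7.4.7 in a simple concrete setting, the following sketch takes $A$ to be a skew linear operator ($A^*=-A$) on $\mathbb{R}^n$ and $B=N_S$ for a linear subspace $S$; it uses the elementary formulas $F_A=\iota_{\operatorname{gra}A}$ and $F_{N_S}=\iota_S\oplus\iota_{S^\perp}$, both of which follow directly from the definition of the Fitzpatrick function. On the one hand,
\[
(F_A\,\Box_2\,F_{N_S})(x,x^*)=\inf_{y^*}\big\{\iota_{\operatorname{gra}A}(x,y^*)+\iota_S(x)+\iota_{S^\perp}(x^*-y^*)\big\}
=\iota_S(x)+\iota_{S^\perp}(x^*-Ax),
\]
since the only admissible choice is $y^*=Ax$. On the other hand,
\[
F_{A+N_S}(x,x^*)=\sup_{y\in S,\ s\in S^\perp}\big\{\langle x,Ay+s\rangle+\langle y,x^*\rangle-\langle y,Ay+s\rangle\big\}
=\iota_S(x)+\iota_{S^\perp}(x^*+A^*x)=\iota_S(x)+\iota_{S^\perp}(x^*-Ax),
\]
so $F_{A+N_S}=F_A\,\Box_2\,F_{N_S}$, as Lemma 7.4.2 and Corollary 7.4.7 assert.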
Remark 7.4.8 Corollary 7.4.7 generalizes the result of Bauschke, Borwein and Wang in [4].

Chapter 8

Future work

Our future work includes the following:

• Simplify some of the earlier technical proofs.

• Extend the main results to a Hilbert space and possibly to a (reflexive) Banach space.

• Since Asplund's decomposition of monotone operators is based on Zorn's Lemma, it would be very interesting to find a constructive proof.

Bibliography

[1] E. Asplund, "A monotone convergence theorem for sequences of nonlinear mappings," Nonlinear Functional Analysis, Proceedings of Symposia in Pure Mathematics, American Mathematical Society, vol. XVIII Part 1, Chicago, pp. 1–9, 1970.

[2] S. Bartz, H. H. Bauschke, J. M. Borwein, S. Reich, and X. Wang, "Fitzpatrick functions, cyclic monotonicity, and Rockafellar's antiderivative," Nonlinear Analysis, no. 5, pp. 1198–1223, 2007.

[3] H. H. Bauschke and J. M. Borwein, "Maximal monotonicity of dense type, local maximal monotonicity, and monotonicity of the conjugate are all the same for continuous linear operators," Pacific Journal of Mathematics, vol. 189, pp. 1–20, 1999.

[4] H. H. Bauschke, J. M. Borwein, and X. Wang, "Fitzpatrick functions and continuous linear monotone operators," SIAM Journal on Optimization, vol. 18, pp. 789–809, 2007.

[5] H. H. Bauschke, Y. Lucet, and M. Trienis, "How to transform one convex function continuously into another," SIAM Review, in press, 2008.

[6] H. H. Bauschke, E. Matouskova, and S. Reich, "Projection and proximal point methods: convergence results and counterexamples," Nonlinear Analysis, vol. 56, pp. 715–739, 2004.

[7] H. H. Bauschke, R. G. Goebel, Y. Lucet, and X. Wang, "The proximal average: basic theory," submitted.

[8] H. H. Bauschke, D. A. McLaren, and H. S. Sendov, "Fitzpatrick functions: inequalities, examples and remarks on a problem by S. Fitzpatrick," Journal of Convex Analysis, vol. 13, no. 3–4, pp. 499–523, 2006.

[9] H. H. Bauschke and X. Wang, "The kernel average for two convex functions and its applications to the extension and representation of monotone operators," to appear in Transactions of the American Mathematical Society, 2007.

[10] H. H. Bauschke and X. Wang, "A convex-analytical approach to extension results for n-cyclically monotone operators," Set-Valued Analysis, no. 3, pp. 297–306, 2007.

[11] J. M. Borwein, "Adjoint process duality," Mathematics of Operations Research, vol. 8, no. 3, pp. 403–434, 1983.

[12] J. M. Borwein and H. Wiersma, "Asplund decomposition of monotone operators," SIAM Journal on Optimization, vol. 18, no. 3, pp. 946–960, 2007.

[13] J. M. Borwein and A. S. Lewis, "Convex Analysis and Nonlinear Optimization," Springer-Verlag, 2000.

[14] D. Butnariu and G. Kassay, "A proximal-projection method for finding zeros of set-valued operators," submitted.

[15] F. H. Clarke, "Optimization and Nonsmooth Analysis," Wiley, New York, 1983.

[16] R. Cross, "Multivalued Linear Operators," Marcel Dekker, Inc., New York, 1998.

[17] N. Ghoussoub, "Maximal monotone operators are selfdual vector fields and vice-versa," preprint, 2006.
[18] S. Fitzpatrick, "Representing monotone operators by convex functions," Workshop/Miniconference on Functional Analysis and Optimization (Canberra 1988), Proceedings of the Centre for Mathematical Analysis, Australian National University, vol. 20, Canberra, Australia, pp. 59–65, 1988.

[19] C. W. Groetsch, "Generalized Inverses of Linear Operators," Marcel Dekker, Inc., New York, 1977.

[20] D. G. Luenberger, "Optimization by Vector Space Methods," John Wiley & Sons, Inc., 2000.

[21] C. D. Meyer, "Matrix Analysis and Applied Linear Algebra," SIAM, New York, 2000.

[22] R. R. Phelps, "Convex Functions, Monotone Operators and Differentiability," Lecture Notes in Mathematics, Springer-Verlag, Berlin Heidelberg, 1993.

[23] R. R. Phelps and S. Simons, "Unbounded linear monotone operators on nonreflexive Banach spaces," Journal of Convex Analysis, vol. 5, pp. 303–328, 1998.

[24] J. P. Penot and C. Zălinescu, "Some problems about the representation of monotone operators by convex functions," The Australian & New Zealand Industrial and Applied Mathematics Journal, vol. 47, no. 1, pp. 1–20, 2005.

[25] R. T. Rockafellar, "Convex Analysis," Princeton University Press, 1970.

[26] R. T. Rockafellar and R. J. B. Wets, "Variational Analysis," Springer-Verlag, 1998.

[27] B. P. Rynne and M. A. Youngson, "Linear Functional Analysis," Springer, London, 2000.

[28] S. Simons, "Minimax and Monotonicity," Springer, New York, 1998.

[29] S. Simons and C. Zălinescu, "Fenchel duality, Fitzpatrick functions and maximal monotonicity," Journal of Nonlinear and Convex Analysis, vol. 6, pp. 1–22, 2005.

[30] M. D. Voisei, "The sum theorem for linear maximal monotone operators," Mathematical Sciences Research Journal, vol. 10, no. 4, pp. 83–85, 2006.

[31] C. Zălinescu, "Convex Analysis in General Vector Spaces," World Scientific Publishing, 2002.
