Decompositions and representations of monotone operators with linear graphs (Yao, Liangjin, 2007)


Full Text

Decompositions and representations of monotone operators with linear graphs

by

Liangjin Yao

M.Sc., Yunnan University, 2006

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in The College of Graduate Studies (Interdisciplinary)

The University of British Columbia Okanagan

December, 2007

© Liangjin Yao 2007


Abstract

We consider the decomposition of a maximal monotone operator into the sum of an antisymmetric operator and the subdifferential of a proper lower semicontinuous convex function. This is a variant of the well-known decomposition of a matrix into its symmetric and antisymmetric part. We analyze in detail the case when the graph of the operator is a linear subspace. Equivalent conditions of monotonicity are also provided. We obtain several new results on auto-conjugate representations, including an explicit formula that is built upon the proximal average of the associated Fitzpatrick function and its Fenchel conjugate. These results are new and they both extend and complement recent work by Penot, Simons and Zălinescu. A nonlinear example shows the importance of the linearity assumption. Finally, we consider the problem of computing the Fitzpatrick function of the sum, generalizing a recent result by Bauschke, Borwein and Wang on matrices to linear relations.


Table of Contents

Abstract
Table of Contents
List of Figures
Acknowledgements
1  Introduction
2  Auxiliary results
   2.1  Definitions
   2.2  Properties of A†
3  Inverse of linear monotone operators
   3.1  Borwein-Wiersma decomposition
   3.2  Asplund decomposition
   3.3  The Borwein-Wiersma decomposition of the inverse
4  Monotone operators with linear graphs
   4.1  Linear graph
   4.2  Maximal monotonicity identification
   4.3  Constructing a good selection
   4.4  The first main result
   4.5  Monotonicity of operators with linear graphs
5  Auto-conjugates
   5.1  Auto-conjugate representation
   5.2  The Fitzpatrick function and the proximal average
   5.3  The second main result
   5.4  An example
   5.5  Relationship among auto-conjugate representations
   5.6  Nonuniqueness
6  Calculation of the auto-conjugates of ∂(− ln)
   6.1  Fitzpatrick function for ∂(− ln)
   6.2  Proximal average of ∂(− ln) and hF∂f
7  Proximal averages of monotone operators with linear graphs
   7.1  Adjoint process of operators with linear graphs
   7.2  Fitzpatrick functions of monotone operators with linear graphs
   7.3  The third main result
   7.4  The Fitzpatrick function of the sum
8  Future work
Bibliography


List of Figures

5.1  The function f (blue) and z = xy (gold), and their intersection line (cyan), gra Id (red).
5.2  The function f (blue) and z = xy (gold), and their intersection line (cyan), gra Id (red).
5.3  The function f (blue) and z = xy (gold), and their intersection line (cyan), gra Id (red), where p = 4.
6.1  The function hF∂f.


Acknowledgements

I sincerely thank Heinz Bauschke and Shawn Wang for giving me the opportunity to study with them under their supervision at UBC Okanagan. They have both given me a lot of help and encouragement. I would like to thank Philip Loewen and Yves Lucet for their many pertinent comments on my thesis. I would also like to thank the many people in the department who have made UBCO into a pleasant environment.


Chapter 1

Introduction

This thesis addresses two issues: decompositions of a monotone operator with a linear graph, and auto-conjugate representations of a monotone operator with a linear graph.

It is well known that every matrix A in Rn×n can be decomposed into the sum of a symmetric matrix and an antisymmetric matrix by

A = (A + A⊺)/2 + (A − A⊺)/2,

where (A + A⊺)/2 is a gradient of a quadratic function. Our goal is to decompose more general mappings, namely maximal monotone operators. Both positive semidefinite matrices and gradients of convex functions are maximal monotone. At present there are two famous decompositions: the Asplund decomposition and the Borwein-Wiersma decomposition. In 1970, the Asplund decomposition was introduced by E. Asplund, who showed that a maximal monotone and at most single-valued operator A with int dom A ≠ ∅ is Asplund decomposable. In 2006, J. Borwein and H. Wiersma introduced the Borwein-Wiersma decomposition in [12], which is more restrictive. Borwein and Wiersma verified that a maximal monotone operator that is Borwein-Wiersma decomposable is also Asplund decomposable in finite dimensional spaces. One goal of our thesis is to show that maximal monotone operators with linear graphs are Borwein-Wiersma decomposable.

The idea of representing a set by nice functions is classical. For example, for a closed set C, one can define the distance function

dC(x) := inf{ ‖x − c‖ | c ∈ C }.

Then dC is 1-Lipschitz and

C = { x | dC(x) = 0 }.

One can also define the lower semicontinuous function ιC, where ιC(x) = 0 if x ∈ C and +∞ otherwise; then

C = { x | ιC(x) = 0 }.

In the later part of this thesis, we want to find auto-conjugate representations for a maximal monotone operator with a linear graph. An auto-conjugate function is a convex function. One very important result provided by Penot, Simons and Zălinescu shows that an auto-conjugate f represents an induced maximal monotone operator A = G(f). In order to create auto-conjugate representations, we introduce the Fitzpatrick function and the proximal average.
In 1988, Simon Fitzpatrick  defined a new convex function FA in [18] which is called the Fitzpatrick function associated with the monotone operator A. Recently, Fitzpatrick 2  Chapter 1. Introduction functions have turned out to be a very useful tool in the study of maximal monotone operators, see [2, 4, 8, 9, 18]. The proximal average was first introduced in [6] in the context of fixed point theory. In its simplest form, the proximal average is denoted here by P (f0 , f1 ), where f0 and f1 are proper lower semicontinuous and convex functions. The recent works in [5, 7, 9] give numerous properties that are very attractive to Convex Analysts. Now we come back to our question. In [24], Penot and Zˇ alinescu showed that a maximal monotone operator A can be represented by an auto-conjugate function hFA , using a partial epigraphical sum. In [9], Bauschke and Wang showed that P (FA , FA∗ ⊺) is an auto-conjugate representation for a maximal monotone operator A. Until now there has been no clear formula for P (FA , FA∗ ⊺), even if A is a linear, continuous and monotone operator. In this thesis, we give an explicit formula for P (FA , FA∗ ⊺) associated with a maximal monotone operator A with a linear graph. We find that P (FA , FA∗ ⊺) = hFA . This is a new result. The thesis is organized as follows. Chapter 2 contains some auxiliary and basic results on monotone operators, subdifferentials and Moore-Penrose inverses. In Chapter 3, it is shown that the inverse of a linear and monotone operator is Borwein-Wiersma decomposable. Chapter 4 contains our first main result: A maximal monotone operator with a linear graph is Borwein-Wiersma decomposable. In addition, the remainder of this chapter gives some equivalent conditions of monotonicity of operators with linear graphs. Chapter 5 discusses auto-conjugate representations. We give an explicit 3  Chapter 1. Introduction formula for P (FA , FA∗ ⊺) associated with a linear and monotone operator A, which is our second main result. Furthermore, we show that P (FA , FA∗ ⊺) = hFA . In Chapter 6, we give a specific example of a nonlinear monotone oper∗⊺ ator: ∂(− ln) such that P (F∂(− ln) , F∂(− ln) ) = hF∂(− ln) . This illustrates the  necessity of the linearity assumption. Finally, in Chapter 7 we extend auto-conjugate representation results from linear and monotone operators to monotone operators with linear graphs. Here we also discuss one open question: Expressing FA+B in terms of FA and FB . We show that FA+B = FA  2 FB  (Here  2  means the inf  convolution for the second variable). This generalizes one of the results provided by Bauschke, Borwein and Wang in [4]. Throughout this thesis, X denotes a Hilbert space with inner product ·, · , norm  · , and Id is the identity mapping in X. The unit ball is  B= x∈X | x ≤1 .  We further set  R+ = x ∈ R | x ≥ 0 , R++ = x ∈ R | x > 0 ,  R− = x ∈ R | x ≤ 0 , R−− = x ∈ R | x < 0 .  For a subset C ⊂ X , the closure of C is denoted by C. The arrow “→” is used for a single-valued mapping, whereas “⇉” denotes a set-valued mapping.  4  Chapter 2  Auxiliary results 2.1  Definitions  We first introduce some fundamental definitions. Definition 2.1.1 Let T : X ⇉ X. We say T is monotone if  x∗ − y ∗ , x − y ≥ 0, whenever x∗ ∈ T (x), y ∗ ∈ T (y). Definition 2.1.2 Let A : X → X. We say A is positive semidefinite if x, Ax ≥ 0, ∀x ∈ X. Example 2.1.3 Let A : X → X be linear. Then A is monotone, if and only if, A is positive semidefinite. Proof. 
“⇒” For every x ∈ X, by monotonicity and linearity of S we have  Ax, x = Ax − A0, x − 0 ≥ 0.  (2.1)  (2.1) holds by A0 = 0 (since A is linear). “⇐” For every x, y ∈ X, since A is positive semidefinite, we have 5  Chapter 2. Auxiliary results  Ax − Ay, x − y = A(x − y), x − y ≥ 0.  (2.2)  Definition 2.1.4 Let A : X → X be linear. We define qA by  qA (x) =  1 2  x, Ax ,  ∀x ∈ X.  (2.3)  Definition 2.1.5 Let A : X → X be linear and continuous. Then A∗ is the unique linear and continuous operator satisfying Ax, y = x, A∗ y ,  ∀x, y ∈ X.  (2.4)  Remark 2.1.6 Let A : Rn → Rn be linear. Then A∗ = A⊺. Definition 2.1.7 Let A : X → X be linear and continuous. We say that A is symmetric if A∗ = A. Remark 2.1.8 Let A : X → X be linear and continuous. Then A is monotone ⇔ A∗ is monotone. Definition 2.1.9 Let A : X → X be linear and continuous. We say that A is antisymmetric if A∗ = −A. Remark 2.1.10 Let A : X → X be linear and continuous. Then symmetric and  A−A∗ 2  A+A∗ 2  is  is antisymmetric.  6  Chapter 2. Auxiliary results Definition 2.1.11 (Symmetric and antisymmetric part) Let A : X → X be linear and continuous. Then A+ = 12 A + 12 A∗ is the symmetric part of A, and A◦ = A − A+ = 12 A − 12 A∗ is the antisymmetric part of A. Remark 2.1.12 Let A : X → X be linear and continuous. Then qA = qA+ . Proof. Let x ∈ X.  2qA+ (x) = A+ x, x = =  A∗ x 2 ,x  +  A∗ +A 2 x, x Ax 2 ,x  =  x 2 , Ax  +  Ax 2 ,x  = Ax, x = 2qA (x).  Here is a basic property. Fact 2.1.13 Let A : X → X be linear and continuous. Then A is monotone, if and only if, A+ is monotone. Proof. By Example 2.1.3 and Remark 2.1.12. Definition 2.1.14 Let T : X ⇉ X be monotone.  We call T maximal  monotone if for every (y, y ∗ ) ∈ / gra T there exists (x, x∗ ) ∈ gra T with x − y, x∗ − y ∗ < 0. Fact 2.1.15 Let A : X ⇉ X be maximal monotone and (x0 , x∗0 ) ∈ X × X. Let A : X ⇉ X such that gra A = gra A − (x0 , x∗0 ) ( i.e., a rigid translation of gra A). Then A is maximal monotone. Proof. Follows directly from Definition 2.1.14. 7  Chapter 2. Auxiliary results Example 2.1.16 Every continuous monotone operator A : X → X is maximal monotone. Proof. [26, Example 12.7]. Let us introduce an essential result that will be used often. Fact 2.1.17 Let A : X → X be linear, continuous and monotone. Then ker A = ker A∗ and ran A = ran A∗ . Proof. See [4, Proposition 3.1]. Fact 2.1.18 Let A : X → X be linear and continuous. Then  qA is convex ⇔ A is monotone ⇔ qA (x) ≥ 0,  x ∈ X,  and ∇qA = A+ . Proof. See [3, Theorem 3.6(i)]. Definition 2.1.19 Let f : X → ]−∞, +∞]. We say f is proper lower semicontinuous and convex if  f (x0 ) < +∞, ∃ x0 ∈ X, f (λx + (1 − λ)y) ≤ λf (x) + (1 − λ)f (y), ∀λ ∈ ]0, 1[, ∀x, y ∈ X, lim inf f (y) = lim inf f (x + δB) ≥ f (x), y→x  δ→0+  ∀x ∈ X.  Definition 2.1.20 Let f : X → ]−∞, +∞] be proper lower semicontinuous  8  Chapter 2. Auxiliary results and convex. The subdifferential mapping ∂f : X ⇉ X is defined by x → ∂f (x) := x∗ | x∗ , y − x + f (x) ≤ f (y),  ∀y .  One of motivations for studying monotone operators comes from the following Fact. Fact 2.1.21 (Rockafellar) Let f : X → ]−∞, +∞] be proper lower semicontinuous and convex. Then ∂f is maximal monotone. Proof. See [28, page 113] or [31, Theorem 3.28]. Fact 2.1.22 Let A : X → X be linear and monotone. Then A is maximal monotone and continuous. Proof. See [23, Corollary 2.6 and Proposition 3.2.h]. Definition 2.1.23 For a set S ⊂ X, ιS : X → ]−∞, +∞] stands for the indicator function defined by  ιS (x) :=      0,    +∞,  if x ∈ S; otherwise.  
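As a quick numerical illustration of Example 2.1.3, Definition 2.1.11, Remark 2.1.12 and Fact 2.1.18, the following minimal sketch (not from the thesis; the 2×2 matrix and the use of NumPy are arbitrary illustrative choices) checks that a linear operator on R2 is monotone exactly when its symmetric part is positive semidefinite, and that qA only sees that symmetric part:

```python
import numpy as np

# Arbitrary 2x2 example; monotone because its symmetric part is positive semidefinite.
A = np.array([[1.0, -3.0],
              [1.0,  2.0]])
A_plus = (A + A.T) / 2   # symmetric part A+ (Definition 2.1.11)
A_circ = (A - A.T) / 2   # antisymmetric part A°, so A = A_plus + A_circ

# Example 2.1.3 / Fact 2.1.13: monotonicity of the linear map A amounts to
# positive semidefiniteness, i.e. nonnegative eigenvalues of A_plus.
print(np.linalg.eigvalsh(A_plus))           # both eigenvalues are >= 0 here

rng = np.random.default_rng(0)
for x in rng.standard_normal((5, 2)):
    qA  = 0.5 * x @ (A @ x)                 # q_A(x) = (1/2)<x, Ax>  (Definition 2.1.4)
    qAp = 0.5 * x @ (A_plus @ x)
    assert np.isclose(qA, qAp)              # Remark 2.1.12: q_A = q_{A+}
    assert qA >= 0                          # Fact 2.1.18: A monotone <=> q_A >= 0
```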
Fact 2.1.24 Suppose that S is a nonempty convex subset of X. Then ιS is proper lower semicontinuous and convex, if and only if, S is closed. Proof. See [22, Example.(a)]. Definition 2.1.25 The space ℓ2 consists of all sequences of real numbers  9  Chapter 2. Auxiliary results (ξ1 , ξ2 , . . .) for which (ξ1 , ξ2 , . . .)  2  where (ξ1 , ξ2 , . . .)  2  := (  < ∞,  ∞  1  |ξi |2 ) 2 ,  i=1  and where ∞  ξ, γ =  ξi , γi ,  ∞ 2 ∀ξ = (ξi )∞ i=1 , γ = (γi )i=1 ∈ ℓ .  i=1  Fact 2.1.26 (ℓ2 ,  ·  2)  is a Hilbert space.  Proof. See [27, Example 3.24]. Example 2.1.27 Let X be (ℓ2 ,  ·  2)  space and A : X → X : (xn )∞ n=1 →  ( xnn )∞ n=1 . Then A is maximal monotone and continuous. Proof. Clearly, A is linear. Now we show A is monotone. Let x = (xn )∞ n=1 ∈ X. Then  x, Ax =  ∞  x2n n  ≥ 0.  n=1  By Example 2.1.3, A is monotone. By Fact 2.1.22, A is maximal monotone and continuous. Lemma 2.1.28 Let S be a linear subspace of X. Suppose x ∈ X and α ∈ R satisfy x, s ≤ α, ∀s ∈ S. Then x⊥S.  10  Chapter 2. Auxiliary results Proof. Let s ∈ S. By assumption, we have  x, ks ≤ α,  ∀k ∈ R ⇒ x, s ≤ 0,  if  k>0  x, s ≥ 0,  if  k<0  ⇒ x, s = 0,  ∀s ∈ S  ⇒ x⊥S.  Fact 2.1.29 Suppose that S is a closed linear subspace of X. Then ∂ιS (x) = S ⊥ ,  ∀x ∈ S.  Proof. Let x ∈ S. We have x∗ ∈ ∂ιS (x) ⇔ x∗ , s − x ≤ ιS (s) − ιS (x), ⇔ x∗ , s − x ≤ 0, ⇔ x⊥S  ∀s ∈ X  ∀s ∈ S  (by Lemma 2.1.28).  Fact 2.1.30 Let f, g : X → ]−∞, +∞] be proper lower semicontinuous and convex. Suppose that f is differentiable everywhere. Then  ∂(f + g)(x) = ∇f (x) + ∂g(x),  ∀x ∈ dom g.  Proof. See [22, Theorem 3.23]. Example 2.1.31 Suppose that j(x) =  1 2  x 2 , ∀x ∈ X and S ⊂ X is a 11  Chapter 2. Auxiliary results closed subspace. Then ∂(j + ιS ) is maximal monotone. In particular, ∂(j + ιS )(x) = x + S ⊥ ,  ∀x ∈ S.  Proof. By Fact 2.1.24, ιS is proper lower semicontinuous and convex. Hence j + ιS is proper lower semicontinuous and convex. By Fact 2.1.21, ∂(j + ιS ) is maximal monotone. Let x ∈ S. By Fact 2.1.30, Fact 2.1.18 and Fact 2.1.29, ∂(j + ιS )(x) = ∇j(x) + ∂ιS (x) = ∇qId (x) + ∂ιS (x) = x + S ⊥ . Fact 2.1.32 Let A : X → X be linear and continuous such that ran A is closed. Then ∂ιran A (x) = (ran A)⊥ = ker A∗ , ∀x ∈ ran A. Proof. Let x ∈ ran A. By Fact 2.1.29, ∂ιran A (x) = (ran A)⊥ . Now we show that (ran A)⊥ = ker A∗ . We have x∗ ∈ (ran A)⊥ ⇔ x∗ , Ax = 0,  ∀x ∈ X  ⇔ A∗ x∗ , x = 0,  ∀x ∈ X  ⇔ A∗ x∗ = 0 ⇔ x∗ ∈ ker A∗ .  Definition 2.1.33 Let A : X → X. The set-valued inverse mapping, A−1 : X ⇉ X, is defined by x ∈ A−1 y  ⇔  Ax = y.  The following is the definition of the Moore-Penrose inverse, which will play an important role in our Theorems. 12  Chapter 2. Auxiliary results Definition 2.1.34 Let A : X → X be linear and continuous such that ran A is closed. The Moore-Penrose inverse of A, denoted by A† , is defined by A† b = argminA∗ Au=A∗ b u ,  ∀b ∈ X.  In the following we always let A† stand for the Moore-Penrose inverse of a linear and continuous operator A. Remark 2.1.35 Let A : X → X be linear and continuous such that ran A is closed. Then by [19, Theorem 2.1.1],  A† y ∈ A−1 y,  ∀y ∈ ran A.  In particular, if A is bijective, then  A† = A−1 .  2.2  Properties of A†  By the Remark above, we know that A† |ran A is a selection for A−1 . This raises some questions: What is the relationship between them? If one of them is monotone, can we deduce that the other one is also monotone? Fact 2.2.1 Let A : X → X be linear and continuous. Then ran A is closed, if and only if, ran A∗ is closed. Proof. See [19, Theorem 1.2.4].  
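Before developing the properties of A† below, here is a minimal numerical sketch (not from the thesis; the 3×3 matrix is arbitrary, and numpy.linalg.pinv is used for the Moore-Penrose inverse of Definition 2.1.34) previewing the facts established in the rest of this section, namely (A†)∗ = (A∗)† (Fact 2.2.4), AA† = Pran A (Fact 2.2.11), and the monotonicity of A† (Corollary 2.2.12):

```python
import numpy as np

# Arbitrary nonsymmetric, rank-deficient, monotone matrix (its symmetric part is PSD);
# in finite dimensions ran A is automatically closed.
A = np.array([[1.0, -1.0, 0.0],
              [1.0,  1.0, 0.0],
              [0.0,  0.0, 0.0]])
A_dag = np.linalg.pinv(A)                        # Moore-Penrose inverse A†

# Fact 2.2.4: (A†)* = (A*)†.
assert np.allclose(A_dag.T, np.linalg.pinv(A.T))

# Fact 2.2.11: A A† is the orthogonal projector onto ran A (here, the x1-x2 plane).
P = A @ A_dag
assert np.allclose(P, P.T) and np.allclose(P @ P, P) and np.allclose(P @ A, A)

# Corollary 2.2.12 / Theorem 2.2.14: A monotone <=> A† monotone
# (the symmetric part of A† has no negative eigenvalues).
assert np.all(np.linalg.eigvalsh((A_dag + A_dag.T) / 2) >= -1e-12)
```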
13  Chapter 2. Auxiliary results Fact 2.2.2 Let A : X → X be linear and continuous such that ran A is closed. Then A† is linear and continuous. Proof. See [19, Corollary 2.1.3]. Fact 2.2.3 Let A : X → X be linear and continuous such that ran A is closed. Then ran A† = ran A∗ . Proof. See [19, Theorem 2.1.2]. Fact 2.2.4 Let A : X → X be linear and continuous such that ran A is closed. Then (A† )∗ = (A∗ )† . Proof. See [19, Exercise 11 on page 111] and [21, Exercise 5.12.16 on page 428]. Proposition 2.2.5 Let A : X → X be linear, continuous and monotone such that ran A is closed. Then ran A = ran A∗ = ran A† = ran(A† )∗ ,  ker A = ker A∗ = ker(A† ) = ker(A† )∗ .  Proof. By Fact 2.1.17 and Fact 2.2.1, ran A = ran A∗ .  (2.5)  By Fact 2.2.3 and (2.5), we have ran A = ran A∗ = ran A† .  (2.6)  14  Chapter 2. Auxiliary results By Fact 2.2.1, ran A∗ is closed. By Remark 2.1.8, A∗ is monotone. Apply (2.6) with replacing A by A∗ , we have ran A∗ = ran A∗∗ = ran(A∗ )† .  (2.7)  By Fact 2.2.4 and (2.6), we have ran A∗ = ran A = ran(A† )∗ = ran A† . Then (ran A∗ )⊥ = (ran A)⊥ = ran(A† )∗  ⊥  = (ran A† )⊥ , thus by Fact 2.1.32,  ker A = ker A∗ = ker(A† ) = ker(A† )∗ . Proposition 2.2.6 Let A : X → X be linear. Suppose y ∈ ran A. Then A−1 y = y ∗ + ker A, ∀y ∗ ∈ A−1 y. Proof. Let y ∗ ∈ A−1 y and z ∗ ∈ ker A. Then Ay ∗ = y and A(y ∗ + z ∗ ) = Ay ∗ + Az ∗ = y + 0 = y. Thus y ∗ + z ∗ ∈ A−1 y. Hence y ∗ + ker A ⊂ A−1 y. On the other hand, let y1∗ ∈ A−1 y. Then Ay1∗ = y and for each y ∗ ∈ A−1 y, A(y1∗ − y ∗ ) = Ay1∗ − Ay ∗ = y − y = 0. Thus y1∗ − y ∗ ∈ ker A, i.e., y1∗ ∈ y ∗ + ker A. Then A−1 y ⊂ y ∗ + ker A. Corollary 2.2.7 Let A : X → X be linear and continuous such that ran A is closed. Then A−1 y = A† y + ker A,  ∀y ∈ ran A. 15  Chapter 2. Auxiliary results Proof. By Remark 2.1.35 and Proposition 2.2.6. In order to further illustrate the relationship between A† and A, we introduce the concept of Projector. Fact 2.2.8 Let M be a closed subspace of X. For every vector x ∈ X, there is a unique vector m0 ∈ M such that x − m0 ≤ x − m for all m ∈ M . Furthermore, a necessary and sufficient condition that m0 ∈ M be the unique minimizing vector is that x − m0 be orthogonal to M . Proof. See [20, Theorem 2 on page 51]. The Fact above ensures that the following mapping is well defined. Definition 2.2.9 (Projector) Let M be a closed subspace of X. The Projector, PM : X → M , is defined by  PM x = argminm∈M x − m ,  x ∈ X.  (2.8)  Here is a result that will be very helpful for our problems. Proposition 2.2.10 Let A : X → X be linear and monotone such that ran A is closed. Then qA† = qA† Pran A .  16  Chapter 2. Auxiliary results Proof. Let x ∈ X. Then we have 2qA† (x) = x, A† x  (2.9)  = Pran A x + Pker A x, A† (Pran A x + Pker A x)  (2.10)  = Pran A x + Pker A x, A† (Pran A x)  (2.11)  = Pran A x, A† (Pran A x) + Pker A x, A† (Pran A x)  (2.12)  = Pran A x, A† (Pran A x) + Pran A x, (A† )∗ (Pker A x)  (2.13)  = Pran A x, A† (Pran A x)  (2.14)  = 2qA† (Pran A x).  (2.15)  Note that (2.10) holds since X = ran A⊕ker A by Fact 2.1.32 and Fact 2.1.17. (2.11) holds since Pker A x ∈ ker A = ker A† by Proposition 2.2.5. (2.14) holds by (A† )∗ (Pker A x) = 0, since ker(A† )∗ = ker A by Proposition 2.2.5. Fact 2.2.11 Let A : X → X be linear and continuous such that ran A is closed. Then AA† = Pran A . Proof. See [19, Theorem 2.2.2]. Corollary 2.2.12 Let A : X → X be linear, continuous and monotone such that ran A is closed. Then A† is monotone. Proof. 
Since A† is linear and continuous by Fact 2.2.2, by Fact 2.1.18 it suffices to show that qA† (x) ≥ 0, ∀x ∈ X.  17  Chapter 2. Auxiliary results Let x ∈ X and y = A† (Pran A x). Then Ay = AA† (Pran A x) = Pran A x by Fact 2.2.11.  By Proposition 2.2.10, we have  2qA† (x) = 2qA† (Pran A x)  (2.16)  = A† (Pran A x), Pran A x  (2.17)  = y, Ay  (2.18)  ≥ 0,  (2.19)  in which (2.19) holds since A is monotone. Fact 2.2.13 Let A : X → X be linear and continuous such that ran A is closed. Then A†† = A. Proof. See [19, Exercise 7 on page 110]. Theorem 2.2.14 Let A : X → X be linear and continuous such that ran A is closed. Then A is monotone, if and only if, A† is monotone. Proof. “⇒” By Corollary 2.2.12. “⇐” Since ran A† = ran A is closed by Proposition 2.2.5, we apply Fact 2.2.13 and Corollary 2.2.12 to A† to conclude that A†† = A is monotone. Here is a useful result that will be used very often. Proposition 2.2.15 Let A : X → X be linear, symmetric and continuous such that ran A is closed. Then  qA† (x + Ay) = qA† (x) + qA (y) + Pran A x, y ,  ∀x, y ∈ X.  18  Chapter 2. Auxiliary results Proof. Let x ∈ X, y ∈ X. Then  qA† (x + Ay) =  1 2  (2.20)  A† x + A† Ay, x + Ay  (2.21)  = qA† (x) +  1 2  A† Ay, Ay +  1 2  A† x, Ay +  1 2  A† Ay, x  (2.22)  = qA† (x) +  1 2  AA† Ay, y +  1 2  AA† x, y +  1 2  y, (A† A)∗ x  (2.23)  = qA† (x) +  1 2  Pran A (Ay), y +  = qA† (x) + qA (y) +  1 2  1 2  Pran A x, y +  Pran A x, y +  1 2  1 2  y, Pran A x  = qA† (x) + qA (y) + Pran A x, y ,  y, AA† x  (2.24) (2.25) (2.26)  in which, (2.24) by Fact 2.2.11 and Fact 2.2.4, (2.25) by Fact 2.2.11.  Corollary 2.2.16 Let A : X → X be linear, symmetric and continuous such that ran A is closed. Then  qA† (Ax) = qA (x),  ∀x ∈ X.  Proof. Apply Proposition 2.2.15 to A with x replaced by 0 and y replaced by x. Fact 2.2.17 Let A : X → X be linear and continuous such that ran A is closed. Then A† A = Pran A† . Proof. See [19, Theorem 2.2.2].  19  Chapter 2. Auxiliary results Corollary 2.2.18 Let A : X → X be linear, continuous and monotone such that ran A is closed. Then AA† = A† A = Pran A . Proof. By Proposition 2.2.5, ran A = ran A† . Then follows directly from Fact 2.2.11 and Fact 2.2.17.  20  Chapter 3  Inverse of linear monotone operators It is well known that a linear, continuous and monotone operator A can be decomposed into the sum of a symmetric operator A+ and an antisymmetric operator A◦ : A = A+ + A◦ . By Fact 2.1.18, A is also decomposed into the sum of the subdifferential of a proper lower semicontinuous and convex function ∇qA and an antisymmetric operator A◦ : A = ∇qA + A◦ . Such a decomposition is called a Borwein-Wiersma decomposition.  3.1  Borwein-Wiersma decomposition  Definition 3.1.1 (Borwein-Wiersma decomposition) We say A : X ⇉ X is Borwein-Wiersma decomposable or simply decomposable if  A = ∂f + S,  where f is proper lower semicontinuous and convex, and S is antisymmetric. What kind of operators are Borwein-Wiersma decomposable? 21  Chapter 3. Inverse of linear monotone operators Definition 3.1.2 We say A : Rn ⇉ Rn is skew if there exists a linear and antisymmetric operator B such that B |dom A = A |dom A . Fact 3.1.3 Let A : Rn ⇉ Rn be maximal monotone and at most singlevalued. Suppose that 0 ∈ dom A, dom A is open and A is Frechet differentiable on dom A. Then A is Borwein-Wiersma decomposable, if and only if, A − ∇fA is skew, where 1  fA : dom A → R : x →  A(tx), x dt. 0  Proof. See [12, Theorem 3].  
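In the linear case Fact 3.1.3 is easy to see at work: for a linear monotone A on Rn one has fA(x) = ∫₀¹ ⟨A(tx), x⟩ dt = qA(x), so ∇fA = A+ and A − ∇fA = A◦ is skew, which is exactly the decomposition A = ∇qA + A◦ recalled at the beginning of this chapter. The following minimal sketch (not from the thesis; the matrix is an arbitrary choice) checks this numerically:

```python
import numpy as np

# Arbitrary linear monotone operator on R^2 (its symmetric part [[2,1],[1,1]] is PSD).
A = np.array([[2.0, -1.0],
              [3.0,  1.0]])
A_plus, A_circ = (A + A.T) / 2, (A - A.T) / 2

def f_A(x, n=10_000):
    # f_A(x) = \int_0^1 <A(tx), x> dt from Fact 3.1.3, evaluated by the midpoint rule.
    t = (np.arange(n) + 0.5) / n
    return np.mean([(A @ (ti * x)) @ x for ti in t])

x = np.array([0.7, -1.3])
assert np.isclose(f_A(x), 0.5 * x @ (A @ x))   # f_A = q_A for linear A
assert np.allclose(A_circ, -A_circ.T)          # A - grad f_A = A° is skew (antisymmetric)
```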
3.2  Asplund decomposition  Here we also introduce another famous decomposition: Asplund decomposition, see [1]. Definition 3.2.1 We say A : X ⇉ X is acyclic with respect to a subset C if A = ∂f + S, where f is proper lower semicontinuous and convex, and S is monotone, which necessarily implies that ∂f is constant on C. If no set C is given, then C = dom A. Definition 3.2.2 (Asplund decomposition) We say A : X ⇉ X is Asplund decomposable if A = ∂f + S, 22  Chapter 3. Inverse of linear monotone operators where f is proper lower semicontinuous and convex, and S is acyclic with respect to dom A. The following tells us which operators are Asplund decomposable. Fact 3.2.3 (Asplund) Let A : Rn ⇉ Rn be maximal monotone such that int dom A = ∅ and A is at most single-valued. Then A is Asplund decomposable. Proof. See [12, Theorem 13]. By the following result, we can find out the connection between the decompositions. Fact 3.2.4 Let A : Rn → Rn be antisymmetric. Then A is acyclic. Proof. See [12, Proposition 15]. Remark 3.2.5 Let A : Rn ⇉ Rn be maximal monotone and Borwein-Wiersma decomposable via ∂f + S, where f is proper lower semicontinuous and convex, and S is antisymmetric. Then such a decomposition is also an Asplund decomposition.  3.3  The Borwein-Wiersma decomposition of the inverse  As mentioned earlier, a linear, continuous and monotone operator is BorweinWiersma decomposable. It is natural to ask whether its set-valued inverse 23  Chapter 3. Inverse of linear monotone operators mapping is also Borwein-Wiersma decomposable. Theorem 3.3.1 Let A : X → X be linear, continuous and monotone such that ran A is closed. Then A−1 = ∂f + (A† )◦ , where f := qA† + ιran A is proper lower semicontinuous and convex, and (A† )◦ is antisymmetric. In particular, A−1 is decomposable. Proof. By Fact 2.2.2 and Corollary 2.2.12, A† is linear, continuous and monotone. Then by Fact 2.1.18 we have qA† is convex function, differentiable and ∇qA† = (A† )+ . Since ran A is a closed subspace of X, by Fact 2.1.24 ιran A is proper lower semicontinuous and convex. Hence f is proper lower semicontinuous and convex. We show that the convex function f satisfies  ∂f (x) + (A† )◦ x =     A† x + ker A,    ∅,  if x ∈ ran A;  (3.1)  otherwise.  Indeed, since f is convex, ∀x ∈ ran A we have  ∂f (x) = ∂(qA† + ιran A )(x) = ∇qA† (x) + ∂ιran A (x)  ( by Fact 2.1.30)  = (A† )+ x + ker A∗  (3.2)  = (A† )+ x + ker A,  (3.3)  24  Chapter 3. Inverse of linear monotone operators where (3.2) holds by Fact 2.1.32, and (3.3) by Proposition 2.2.5. Thus ∂f (x) + (A† )◦ x = (A† )+ x + ker A + (A† )◦ x = A† x + ker A, ∀x ∈ ran A. If x ∈ ran A = dom f , by definition ∂f (x) = ∅. Hence (3.1) holds. By Corollary 2.2.7, we have that A−1 x = A† x + ker A, ∀x ∈ ran A.  Thus, A−1 x =     A† x + ker A,    ∅,  if x ∈ ran A;  (3.4)  (3.5)  otherwise.  By (3.1) and (3.5), we have A−1 = ∂f + (A† )◦ . Proposition 3.3.2 Assume T : X ⇉ X is monotone, then T −1 is monotone. Moreover, if T is maximal monotone, then so too is T −1 . Proof. Use Definition 2.1.1 and Definition 2.1.14 directly. Due to Phelps and Simons, we obtain the following Proposition. Proposition 3.3.3 Let A : X → X be linear, continuous and monotone such that A is one-to-one and symmetric. Then A−1 = ∂f,  where f (x) := supy∈ran A  A−1 y, x − 12 A−1 y, y  (∀x ∈ X) is proper lower  25  Chapter 3. Inverse of linear monotone operators semicontinuous and convex. If X = Rn , then A−1 = ∇qA−1 . In particular, A−1 is decomposable. Proof. By Example 2.1.16, A is maximal monotone. 
Then by Proposition 3.3.2, A−1 is maximal monotone. Since A is linear and one-to-one , A−1 is single-valued and linear on ran A. In the following we show that x, A−1 y = y, A−1 x ,  ∀x, y ∈ ran A.  Let x, y ∈ ran A. Then there exist unique x1 , y1 ∈ X such that x = Ax1 , y = Ay1 . We have x, A−1 y = Ax1 , y1 = x1 , Ay1 = A−1 x, y .  By [23, Theorem 5.1], f is proper lower semicontinuous and convex, and A−1 = ∂f. If x = Rn , we have A is invertible. By assumption, A−1 is symmetric and monotone. By Fact 2.1.18, A−1 = ∇qA−1 .  26  Chapter 4  Monotone operators with linear graphs Theorem 3.3.1 tells us that the set-valued inverse A−1 of a linear, continuous and monotone operator A is Borwein-Wiersma decomposable. Naturally, this raises the following question: Are maximal monotone operators with linear graphs also Borwein-Wiersma decomposable? This chapter answers the question above. It also gives some important equivalent conditions of monotonicity and maximal monotonicity of operators with linear graphs. Let us first introduce some interesting results about these operators.  4.1  Linear graph  Fact 4.1.1 Let S, M be closed linear subspaces of X. Then S = M ⇔ S⊥ = M ⊥,  S = M ⇔ S⊥ = M ⊥.  Proof. Follows directly by S ⊥⊥ = S, M ⊥⊥ = M.  27  Chapter 4. Monotone operators with linear graphs Definition 4.1.2 Let A : X ⇉ X. We define dom A, ran A by  dom A := {x | Ax = ∅} ran A := {x∗ | x∗ ∈ Ax, ∃x ∈ dom A}.  Proposition 4.1.3 Let A : X ⇉ X such that gra A is a linear subspace of X × X. For every x, y ∈ dom A, the following hold. (i) A0 is a linear subspace of X. (ii) Ax = x∗ + A0,  ∀x∗ ∈ Ax.  (iii) αAx + βAy = A(αx + βy),  ∀α, β ∈ R with α = 0 or β = 0.  (iv) If A is monotone, then dom A⊥A0, hence dom A ⊂ (A0)⊥ , A0 ⊂ (dom A)⊥ . (v) If A is monotone, then x, x∗ ≥ 0,  ∀(x, x∗ ) ∈ gra A.  Proof. Obviously,  dom A = x ∈ X| (x, y) ∈ gra A,  ∃y ∈ X  (4.1)  and dom A is a linear subspace of X. (i): ∀α, β ∈ R, ∀x∗ , z ∗ ∈ A0 we have (0, x∗ ) ∈ gra A (0, z ∗ ) ∈ gra A. 28  Chapter 4. Monotone operators with linear graphs As gra A is a linear subspace of X × X, α(0, x∗ ) + β(0, z ∗ ) = (0, αx∗ + βz ∗ ) ∈ gra A. This gives αx∗ + βz ∗ ∈ A0. Hence A0 is a linear subspace. (ii): We first show that x∗ + A0 ⊂ Ax, ∀x∗ ∈ Ax. Take x∗ ∈ Ax, z ∗ ∈ A0. Then (x, x∗ ) ∈ gra A and  (0, z ∗ ) ∈ gra A.  Since gra A is a linear subspace, (x, x∗ + z ∗ ) ∈ gra A. That is, x∗ + z ∗ ∈ Ax. Then x∗ + A0 ⊂ Ax. On the other hand, let x∗ , y ∗ ∈ Ax. We have (x, x∗ ) ∈ gra A, (x, y ∗ ) ∈ gra A.  Since gra A is a linear subspace, (x − x, y ∗ − x∗ ) = (0, y ∗ − x∗ ) ∈ gra A. Then y ∗ − x∗ ∈ A0. That is, y ∗ ∈ x∗ + A0. Thus Ax ⊂ x∗ + A0. Hence (ii) 29  Chapter 4. Monotone operators with linear graphs holds. (iii): Let α, β ∈ R. Take x∗ ∈ Ax, y ∗ ∈ Ay. Then we have (x, x∗ ) ∈ gra A, (y, y ∗ ) ∈ gra A. Since gra A is a linear subspace, we have (αx + βy, αx∗ + βy ∗ ) ∈ gra A. That is, αx∗ + βy ∗ ∈ A(αx + βy). Then by (ii) we have Ax = x∗ + A0, Ay = y ∗ + A0, A(αx + βy) = αx∗ + βy ∗ + A0.  (4.2)  Suppose that α = 0. By (i)  αA0 + βA0 = A0.  (4.3)  Then by (4.2) and (4.3), αAx + βAy = α(x∗ + A0) + β(y ∗ + A0) = αx∗ + βy ∗ + (αA0 + βA0) = αx∗ + βy ∗ + A0 = A(αx + βy).  (iv): Pick x ∈ dom A. By (4.1) there exists x∗ ∈ X such that (x, x∗ ) ∈ gra A. Then by monotonicity of A, we have x − 0, x∗ − z ∗ ≥ 0,  ∀z ∗ ∈ A0. 30  Chapter 4. Monotone operators with linear graphs That is, x, x∗ ≥ x, z ∗ ,  ∀z ∗ ∈ A0.  
(4.4)  Since A0 is a linear subspace by (i), by Lemma 2.1.28 and (4.4),  x⊥A0,  ∀x ∈ dom A  ⇒ dom A⊥A0 ⇒ dom A ⊂ (A0)⊥ ,  A0 ⊂ (dom A)⊥ .  (v): Since (0, 0) ∈ gra A, x, x∗ = x − 0, x∗ − 0 ≥ 0,  ∀(x, x∗ ) ∈ gra A.  Remark 4.1.4 Proposition 4.1.3(ii) is a useful representation. It means  Ax = Ax + A0,  ∀x ∈ dom A, Ax ∈ Ax.  Later, we will show the selection map A can be chosen to be linear!  4.2  Maximal monotonicity identification  The next three results are well known. Fact 4.2.1 Let A : X ⇉ X be maximal monotone. Then Ax is closed and convex, ∀x ∈ X.  31  Chapter 4. Monotone operators with linear graphs Proof. Fix x ∈ X. If x ∈ / dom A, then Ax = ∅ is closed and convex. So suppose x ∈ dom A. Let (x∗n ) ⊂ Ax such that x∗n → x∗ . In the following we show that x∗ ∈ Ax. For every (y, y ∗ ) ∈ gra A, by monotonicity of A, we have y − x, y ∗ − x∗n ≥ 0.  (4.5)  Letting n → ∞ in (4.5), we see that y − x, y ∗ − x∗ ≥ 0,  ∀(y, y ∗ ) ∈ gra A.  (4.6)  By (4.6) and maximal monotonicity of A, we have (x, x∗ ) ∈ gra A. That is, x∗ ∈ Ax. Hence Ax is closed. Now we show that Ax is convex. Let δ ∈ [0, 1]. For every x∗1 , x∗2 ∈ Ax, we have y − x, y ∗ − x∗1 ≥ 0 y − x, y ∗ − x∗2 ≥ 0,  (4.7) ∀(y, y ∗ ) ∈ gra A.  (4.8)  Adding (4.7)×δ and (4.8)×(1 − δ) yields y − x, y ∗ − (δx∗1 + (1 − δ)x∗2 ) ≥ 0,  ∀(y, y ∗ ) ∈ gra A.  (4.9)  Since A is maximal monotone, (x, δx∗1 + (1 − δ)x∗2 ) ∈ gra A, i.e., δx∗1 + (1 − δ)x∗2 ∈ Ax. Thus Ax is convex. Proposition 4.2.2 Let A, B : X ⇉ X be monotone. Then A + B is monotone. 32  Chapter 4. Monotone operators with linear graphs Proof. Let (x, x∗ ), (y, y ∗ ) ∈ gra(A + B). Then there exist (x, x∗1 ), (y, y1∗ ) ∈ gra A (x, x∗2 ), (y, y2∗ ) ∈ gra B such that x∗ = x∗1 + x∗2 y ∗ = y1∗ + y2∗ . Then x − y, x∗ − y ∗ = x − y, x∗1 + x∗2 − y1∗ − y2∗ = x − y, x∗1 − y1∗ + x − y, x∗2 − y2∗ ≥ 0. Hence A + B is monotone. Fact 4.2.3 Let A : X ⇉ X be maximal monotone. Then dom A is convex. Proof. See [31, Theorem 3.11.12]. Fact 4.2.4 Let A : X ⇉ X be maximal monotone. Then A + ∂ιdom A = A. Proof. By Fact 4.2.3 and Fact 2.1.24, ιdom A is proper lower semicontinuous and convex. Then by Proposition 2.1.21, ∂ιdom A is monotone. Then by Fact 4.2.2, A + ∂ιdom A is monotone.  33  Chapter 4. Monotone operators with linear graphs Suppose x ∈ dom A. Then since 0 ∈ ∂ιdom A (x) , (A + ∂ιdom A )(x) = Ax + ∂ιdom A (x) ⊃ Ax. Suppose x ∈ / dom A. Then since Ax = ∅,  (A + ∂ιdom A )(x) ⊃ Ax,  ∀x ∈ X.  Since A is maximal monotone, A + ∂ιdom A = A. The following are interesting properties about maximal monotonicity of monotone operators with linear graphs. Proposition 4.2.5 Let A : X ⇉ X be monotone such that gra A is a linear subspace of X × X. Then A is maximal monotone ⇒ dom A = (A0)⊥ .  Proof. Suppose to the contrary that dom A = (A0)⊥ . By Proposition 4.1.3(i) and Fact 4.2.1, A0 is a closed subspace. By Fact 4.1.1, (dom A)⊥ = (A0)⊥⊥ = A0. Then by Proposition 4.1.3(iv), we have (dom A)⊥ = (dom A)⊥  A0.  (4.10)  Thus there exists ω ∗ ∈ (dom A)⊥ \ A0. By ω ∗ ∈ (dom A)⊥ , we have ω ∗ , x = 0,  ∀x ∈ dom A.  (4.11)  34  Chapter 4. Monotone operators with linear graphs Since ω ∗ ∈ / A0, (0, ω ∗ ) ∈ / gra A. By maximal monotonicity of A, there exists (x0 , x∗0 ) ∈ gra A such that x∗0 , x0 − ω ∗ , x0 = x∗0 − ω ∗ , x0 − 0 < 0.  (4.12)  By (4.11) and (4.12), x∗0 , x0 < 0, which is a contradiction to Proposition 4.1.3(v). Proposition 4.2.6 Let A : X ⇉ X be monotone such that gra A is a linear subspace of X × X and A0 is closed. Then dom A = (A0)⊥ ⇒ A is maximal monotone. Proof. 
Let (x, x∗ ) ∈ X × X satisfy that x − y, x∗ − y ∗ ≥ 0,  ∀(y, y ∗ ) ∈ gra A.  (4.13)  In the following we will verify that (x, x∗ ) ∈ gra A. By (4.13) we have x, x∗ ≥ x, z ∗ ,  ∀z ∗ ∈ A0.  Since A0 is a linear subspace by Proposition 4.1.3(i), by Lemma 2.1.28 we have x⊥A0, i.e., x ∈ (A0)⊥ = dom A. Take x∗0 ∈ Ax. For every v ∗ ∈ Av, we have x∗0 + v ∗ ∈ A(x + v) by Proposi-  35  Chapter 4. Monotone operators with linear graphs tion 4.1.3(iii). By (4.13), we have x − (x + v), x∗ − (x∗0 + v ∗ ) ≥ 0,  ∀(v, v ∗ ) ∈ gra A.  That is, v, v ∗ ≥ v, x∗ − x∗0 , By Proposition 4.1.3(iii), we have 1 1 ∗ n v, n v  ≥  1 n v,  1 ∗ nv  ∀(v, v ∗ ) ∈ gra A.  (4.14)  ∈ A( n1 v). Then by (4.14),  x∗ − x∗0 ,  ∀(v, v ∗ ) ∈ gra A.  (4.15)  Multiply (4.15) both sides by n and then let n → ∞ to see that v, x∗ − x∗0 ≤ 0,  ∀v ∈ dom A.  (4.16)  Since dom A is a linear subspace, by Lemma 2.1.28, (x∗ −x∗0 )⊥ dom A. Since A0 is closed, we have (x∗ − x∗0 ) ∈ (dom A)⊥ = (A0)⊥⊥ = A0.  (4.17)  According to Proposition 4.1.3(ii), x∗ ∈ x∗0 + A0 = Ax. Here is an important result in this chapter. Theorem 4.2.7 Let A : X ⇉ X be monotone such that gra A is a linear subspace of X × X and dom A is closed. Then A is maximal monotone ⇔ dom A = (A0)⊥ , A0 is closed.  36  Chapter 4. Monotone operators with linear graphs Proof. Since A is maximal monotone, A0 is closed by Fact 4.2.1. Combine Proposition 4.2.5 and Proposition 4.2.6. Theorem 4.2.7 gives an equivalent condition in infinite-dimensional spaces. When we consider it in finite-dimensional spaces, can we get further results? Now we discuss this in detail. Proposition 4.2.8 Let A : Rn ⇉ Rn be monotone such that gra A is a linear subspace of Rn × Rn . Then dim(gra A) = dim(dom A) + dim A0. Proof. We shall construct a basis of gra A. By Proposition 4.1.3(i), A0 is a linear subspace. a basis of A0 and  xk+1 , . . . , xl  Let  be a basis of dom A.  x∗1 , · · · , x∗k  be  We show that  (0, x∗1 ), . . . , (0, x∗k ), (xk+1 , x∗k+1 ), · · · , (xl , x∗l ) is a basis of gra A, where x∗i ∈ Axi , i ∈ {k + 1, · · · , l}. We first show that (0, x∗1 ), . . . , (0, x∗k ), (xk+1 , x∗k+1 ), · · · , (xl , x∗l ) is linearly independent. Let αi , i ∈ {1, · · · , l}, satisfy that α1 (0, x∗1 ) + · · · + αk (0, x∗k ) + αk+1 (xk+1 , x∗k+1 ) + · · · + αl (xl , x∗l ) = 0. Hence  αk+1 xk+1 + · · · + αl xl = 0  (4.18)  α1 x∗1 + · · · + αk x∗k + αk+1 x∗k+1 + · · · + αl x∗l = 0.  (4.19)  37  Chapter 4. Monotone operators with linear graphs Since  xk+1 , . . . , xl  is linearly independent, by (4.18) we have αi = 0, i ∈  {k+1, · · · , l}. Then since x∗1 , · · · , x∗k is linearly independent, by (4.19) we  have αi = 0, i ∈ {1, · · · , k}. Thus αi = 0, i ∈ {1, · · · , l}. Hence (0, x∗1 ), . . . , (0, x∗k ), (xk+1 , x∗k+1 ), · · · , (xl , is linearly independent. Let (x, x∗ ) ∈ gra A. Then there exists βi , i ∈ {k + 1, · · · , l} satisfying that  βk+1 xk+1 + · · · + βl xl = x.  Thus βk+1 x∗k+1 + · · · + βl x∗l ∈ Ax. By Proposition 4.1.3(ii), there exists z ∗ ∈ A0 such that βk+1 x∗k+1 + · · · + βl x∗l + z ∗ = x∗ . Then there exists βi , i ∈ {1, · · · , k} satisfying that z ∗ = β1 x∗1 + · · · + βk x∗k . Thus (x, x∗ ) = β1 (0, x∗1 ) + · · · + βk (0, x∗k ) + βk+1 (xk+1 , x∗k+1 ) + · · · + βl (xl , x∗l ). Hence (0, x1 ), . . . , (0, xk ), (xk+1 , x∗k+1 ), · · · , (xl , x∗l ) is a basis of gra A. Then  dim(gra A) = dim(dom A) + dim(A0).  38  Chapter 4. Monotone operators with linear graphs  From Proposition 4.2.8, we now get a satisfactory characterization. 
Proposition 4.2.9 Let A : Rn ⇉ Rn be monotone such that gra A is a linear subspace of Rn × Rn . Then  A is maximal monotone ⇔ dim gra A = n.  Proof. Since linear subspaces are closed in finite-dimensional spaces, by Proposition 4.1.3(i) and Theorem 4.2.7 we have A is maximal monotone ⇔ dom A = (A0)⊥ .  (4.20)  Assume that A is maximal monotone. Then dom A = (A0)⊥ .  Then Proposition 4.2.8 implies  dim(gra A) = dim(dom A) + dim(A0) = dim((A0)⊥ ) + dim(A0) = n, as (A0)⊥ + A0 = Rn . Conversely, let dim(gra A) = n. By Proposition 4.2.8, we have  39  Chapter 4. Monotone operators with linear graphs  dim(dom A) = n − dim(A0).  As dim((A0)⊥ ) = n−dim(A0) and dom A ⊂ (A0)⊥ by Proposition 4.1.3(iv), we have dom A = (A0)⊥ .  By (4.20), A is maximal monotone.  4.3  Constructing a good selection  When we proved Theorem 3.3.1, most of the much focused on finding a linear, continuous and monotone operator A such that A |dom A−1 is a selection of A−1 . Now for a maximal monotone operator A with a linear graph, we also want to find such an operator. Fact 4.3.1 Let S be a nonempty closed convex subset of X. Then for each x ∈ X there exists a unique s0 ∈ S such that  x − s0 = min  x−s | s∈S .  Proof. See [19, Corollary 1.1.5]. By Fact 4.3.1, we can define the projector onto a nonempty closed convex subset of X.  40  Chapter 4. Monotone operators with linear graphs Definition 4.3.2 Let S be a nonempty closed convex subset of X. We define the projector PS : X → X by  PS x = argmins∈S x − s ,  x ∈ X.  Fact 4.3.3 Let S be a closed linear subspace of X and x0 ∈ X. Then PS is linear, continuous and  PS+x0 x = x0 + PS (x − x0 ), ∀x ∈ X PS x + PS ⊥ x = x  (4.21) (4.22)  PS∗ = PS .  (4.23)  Proof. (4.21): Let x ∈ S. By Definition 4.3.2,  x − x0 − PS (x − x0 ) ≤ x − x0 − s = x − (x0 + s) ,  ∀s ∈ S.  By Fact 4.3.1, PS (x − x0 ) ∈ S. Thus x0 + PS (x − x0 ) ∈ S + x0 . By Definition 4.3.2, PS+x0 x = x0 + PS (x − x0 ). (4.22) holds by S⊕S ⊥ = X. For the other parts see [27, Theorem 5.51(a)]. Definition 4.3.4 Let A : X ⇉ X such that gra A is a linear subspace of X × X. We define QA by  QA x =     PAx x,   ∅,  if x ∈ dom A; otherwise.  41  Chapter 4. Monotone operators with linear graphs Proposition 4.3.5 Let A : X ⇉ X be maximal monotone such that gra A is a linear subspace of X × X. Then QA is single-valued on dom A and a selection of A. Proof. By Fact 4.2.1, Ax is nonempty closed convex, for every x ∈ dom A. Then by Fact 4.3.1, QA is single-valued on dom A and a selection of A. Proposition 4.3.6 Let A : X ⇉ X be maximal monotone such that gra A is a linear subspace of X × X. Then QA is monotone, and linear on dom A. Moreover,  QA x = P(A0)⊥ (Ax),  ∀x ∈ dom A.  (4.24)  Proof. By Proposition 4.1.3(i) and Fact 4.2.1, A0 is a closed subspace. Let x∗ ∈ Ax. Then  QA x = PAx x = Px∗ +A0 x  (4.25)  = x∗ + PA0 (x − x∗ ) = x∗ + PA0 x − PA0 x∗  (4.26)  = PA0 x + P(A0)⊥ x∗  (4.27)  = P(A0)⊥ x∗  (4.28)  = P(A0)⊥ (Ax),  (4.29)  in which, (4.25) holds by Proposition 4.1.3(ii), (4.26) and (4.27) by Fact 4.3.3. (4.28) holds since PA0 x = 0 by Proposition 4.1.3(iv). Thus (4.24) holds. Since QA x is single-valued by Remark 4.3.5, then P(A0)⊥ (Ax) is single-valued. 42  Chapter 4. Monotone operators with linear graphs Now we show that QA is linear on dom A. Take x, y ∈ dom A and α, β ∈ R. If α = β = 0, by Proposition 4.1.3(i), we have  QA (αx + βy) = QA 0 = PA0 0 = 0 = αQA x + βQA y.  (4.30)  Assume that α = 0 or β = 0. 
By (4.24), we have  QA (αx + βy) = P(A0)⊥ A(αx + βy)  (4.31)  = αP(A0)⊥ (Ax) + βP(A0)⊥ (Ay)  (4.32)  = αQA x + βQA y,  (4.33)  where (4.32) holds by Proposition 4.1.3(iii) and Fact 4.3.3, (4.33) by (4.24). By Proposition 4.3.5, QA is a selection of A. Since A is monotone, QA is monotone. Proposition 4.3.7 Let Y be a closed linear subspace of X. Let A : X ⇉ X be monotone such that A is linear on Y and at most single-valued. Then PY APY is linear, continuous and maximal monotone. Proof. Clearly, PY APY is linear since PY is linear by Fact 4.3.3 and A is linear on Y . In the following we show that PY APY is monotone. Let x ∈ X. Then x, PY APY x = PY∗ x, A(PY x) = PY x, A(PY x) ≥ 0,  (4.34) (4.35)  43  Chapter 4. Monotone operators with linear graphs where (4.34) holds by Fact 4.3.3. Inequality (4.35) holds since A is monotone. By Example 2.1.3, PY APY is monotone. Then by Fact 2.1.22, we have PY APY is continuous and maximal monotone. Now we show that we found the operator we were looking for. Corollary 4.3.8 Let A : X ⇉ X be maximal monotone such that gra A is a linear subspace of X × X and dom A is closed. Then Pdom A QA Pdom A is linear, continuous and maximal monotone. Moreover, Pdom A QA Pdom A = QA Pdom A , (Pdom A QA Pdom A ) |dom A = QA and Ax = (Pdom A QA Pdom A )x + A0, ∀x ∈ dom A. Proof. The former holds by Proposition 4.3.6 and Proposition 4.3.7. By Proposition 4.3.6 and Proposition 4.2.5, we have (QA Pdom A )x ∈ (A0)⊥ = dom A, ∀x ∈ X. Then by Proposition 4.3.5, (Pdom A QA Pdom A )x = QA x ∈ Ax, ∀x ∈ dom A. By Proposition 4.1.3(ii), Ax = (Pdom A QA Pdom A )x + A0, ∀x ∈ dom A. Remark 4.3.9 By Corollary 4.3.8, we know that QA |dom A is continuous on dom A. But if we omit the assumption that dom A be closed, then we can’t guarantee that QA |dom A is continuous on dom A. Example 4.3.10 Let X be (ℓ2 ,  ·  2)  space and A : X → X : (xn )∞ n=1 →  −1 ( xnn )∞ n=1 . Then QA−1 |dom A−1 is not continuous on dom A .  Proof. We first show that QA−1 = A−1 is maximal monotone with a linear graph, but dom A−1 = ran A is not closed. Clearly, A is linear and one-to-one. Thus QA−1 = A−1 and gra A−1 is a 44  Chapter 4. Monotone operators with linear graphs linear subspace. By Example 2.1.27, A is maximal monotone. By Proposition 3.3.2, A−1 is maximal monotone. By Proposition 4.2.5, ran A = dom A−1 = (A−1 0)⊥ = (0)⊥ = X. Now we show that ran A is not closed, i.e, ran A = X. On the contrary, assume ran A = X. Let x = (1/n)∞ n=1 ∈ X. Then we have A−1 x = (1)∞ / X. This is a contradiction. Hence ran A is not closed. n=1 ∈ In the following we show that QA−1 = A−1 is not continuous on ran A = dom A−1 . Take { n1 en } ⊂ ran A, where en = (0, · · · , 0, 1, 0, · · · ) : the nth entry is 1 and the others are 0. Clearly,  1 n en  → 0. But A−1 ( n1 en ) − 0 = en  0. Hence  QA−1 = A−1 is not continuous on ran A.  4.4  The first main result  Now we come to our first main result in this thesis. Theorem 4.4.1 Let A : X ⇉ X be maximal monotone such that gra A is a linear subspace of X × X and dom A is closed. Then  A = ∂f + A◦ , where f := qA + ιdom A is proper lower semicontinuous and convex, A = Pdom A QA Pdom A is linear, continuous and maximal monotone, and A◦ is antisymmetric. In particular, A is decomposable. Proof. By Corollary 4.3.8, A is linear, continuous and maximal monotone. Then by Fact 2.1.18, qA is convex, differentiable and ∇qA = A+ . Since 45  Chapter 4. Monotone operators with linear graphs dom A is a closed subspace, by Fact 2.1.24 ιdom A is proper lower semicontinuous and convex. 
Hence f is proper lower semicontinuous and convex. By Theorem 4.2.7, (dom A)⊥ = (A0)⊥⊥ = A0. Let x ∈ dom A. We have  ∂f (x) = ∂(qA + ιdom A )(x) = ∇qA (x) + ∂ιdom A (x) = A+ x + (dom A)⊥  (By Fact 2.1.30)  (by Fact 2.1.18 and Fact 2.1.29)  = A+ x + A0.  Thus ∀x ∈ dom A,  ∂f (x) + A◦ x = A+ x + A0 + A◦ x = Ax + A0 = Ax,  (4.36) (4.37)  where (4.37) holds by Corollary 4.3.8. If x ∈ / dom A, by definition ∂f (x) = ∅ = Ax. Hence we have Ax = ∂f (x) + A◦ x,  ∀x ∈ X.  In general, a convex cone is not a linear subspace. We wonder if there exists a maximal monotone operator with a convex cone graph such that its graph is not a linear subspace. The following gives a negative answer. Fact 4.4.2 A convex cone K is a linear subspace, if and only if, −K ⊂ K. Proof. See [25, Theorem 2.7].  46  Chapter 4. Monotone operators with linear graphs Proposition 4.4.3 Let A : X ⇉ X be maximal monotone such that gra A is a convex cone. Then gra A is a linear subspace of X × X. Proof. By Fact 4.4.2, it suffices to show that  − gra A ⊂ gra A.  Assume that (x, x∗ ) ∈ gra A. We show that −(x, x∗ ) ∈ gra A. Let (y, y ∗ ) ∈ gra A. As gra A is a convex cone, (x, x∗ ) + (y, y ∗ ) = (x + y, x∗ + y ∗ ) ∈ gra A.  Thus x + y, x∗ + y ∗ ≥ 0.  since A is monotone and (0, 0) ∈ gra A  This means −x − y, −x∗ − y ∗ ≥ 0 (−x) − y, (−x∗ ) − y ∗ ≥ 0,  ∀(y, y ∗ ) ∈ gra A.  Since A is maximal, we conclude that (−x, −x∗ ) ∈ gra A,  −(x, x∗ ) ∈ gra A.  Hence gra A is a linear subspace.  47  Chapter 4. Monotone operators with linear graphs In [14], Butnariu and Kassay discuss monotone operators with closed convex graphs. Actually, if such operators are maximal monotone, their graphs are affine sets. Fact 4.4.4 C ⊂ X is an affine set ⇔ ∃c0 ∈ C, C − c0 is a linear subspace. Proof. See [31, page 1]. Proposition 4.4.5 Let A : X ⇉ X be maximal monotone such that gra A is a convex subset. Then gra A is an affine set. Proof. Let (x0 , x∗0 ) ∈ gra A and A : X ⇉ X such that gra A = gra A − (x0 , x∗0 ). Thus gra A is convex and (0, 0) ∈ gra A. By Fact 2.1.15, A is maximal monotone. By Fact 4.4.4, it suffices to verify that gra A is a linear subspace. By Proposition 4.4.3, it suffices to show that gra A is a cone. Let k ≥ 0 and (x, x∗ ) ∈ gra A. We consider two cases. Case 1 : k ≤ 1. k(x, x∗ ) = k(x, x∗ ) + (1 − k)(0, 0) ∈ gra A.  (4.38)  Case 2 : k > 1. Let (y, y ∗ ) ∈ gra A. By (4.38), k1 (y, y ∗ ) ∈ gra A. Thus, kx − y, kx∗ − y ∗ = k2 x − k1 y, x∗ − k1 y ∗ ≥ 0. Since A is maximal monotone, k(x, x∗ ) ∈ gra A. Hence gra A is a cone.  48  Chapter 4. Monotone operators with linear graphs  4.5  Monotonicity of operators with linear graphs  In general, it is not easy to identify whether an operator is monotone. But if an operator with a linear graph and a basis is known, then we can use linear algebra to verify monotonicity and strict monotonicity. Theorem 4.5.1 Let A : X ⇉ X and gra A = span (m1 , m∗1 ), · · · , (mn , m∗n ) . Then the following are equivalent (i) A is monotone. (ii)   m1 , m∗1     m2 , m∗ 1  The matrix B :=  ..  .   mn , m∗1  m1 , m∗2 m2 , m∗2 ..  .  mn , m∗2  ···  m1 , m∗n  ··· .. .  m2 , m∗n  ···  mn , m∗n  .. .         is monotone.     (iii) B+ is positive semidefinite. Proof. Since gra A = span (m1 , m∗1 ), . . . , (mn , m∗n ) , ∀(x, x∗ ) ∈ gra A, ∃α1 , . . . , αn such that n  (x, x∗ ) =  αi (mi , m∗i ). i=1  49  Chapter 4. 
Monotone operators with linear graphs  Then A is monotone ⇔ x − y, x∗ − y ∗ ≥ 0, n  ⇔  ∀(x, x∗ ) ∈ gra A, ∀(y, y ∗ ) ∈ gra A  n  (αi − βi )m∗i ≥ 0  (αi − βi )mi , i=1  i=1 n  where (x, x∗ ) =  n  αi (mi , m∗i ) = ( i=1 n  (y, y ∗ ) =  i=1 n  βi (mi , m∗i ) = ( i=1  n  ⇔  n  i=1  i=1  βi m∗i )  βi mi , i=1  n  mi , m∗j γi γj ≥ 0,  ∀γi ∈ R  i=1 j=1  = (γ1 , . . . , γn )B(γ1 , . . . , γn )⊺ ≥ 0, ⇔ ν ⊺Bν ≥ 0,  αi m∗i ) i=1 n  i=1 n  γi m∗i =  γi mi ,  n  αi mi ,  ∀γi ∈ R  ∀ν ∈ Rn  ⇔ B is monotone  (by Example 2.1.3)  ⇔ B+ is positive semidefinite  (by Fact 2.1.13 and Example 2.1.3).  We also have a way to identify whether an operator with a linear graph is strictly monotone. First we give the definition of strictly monotone. Definition 4.5.2 A strictly monotone operator T : X ⇉ X is a mapping that satisfies x∗ − y ∗ , x − y > 0, whenever x∗ ∈ T (x), y ∗ ∈ T (y) and x = y.  50  Chapter 4. Monotone operators with linear graphs Definition 4.5.3 Let A : X → X. We say A is positive definite if  x, Ax > 0,  ∀x = 0.  Theorem 4.5.4 Let A : X ⇉ X and gra A = span (m1 , m∗1 ), . . . , (mn , m∗n ) . Suppose that  is linearly independent.  m 1 , . . . , mn  Then A is strictly  monotone, if and only if, the matrix   m1 , m∗1  m1 , m∗2     m2 , m∗ 1  B= ..  .   mn , m∗1  m2 , m∗2 ..  .  mn , m∗2  ···  m1 , m∗n  ··· .. .  m2 , m∗n  ···  mn , m∗n  .. .            is positive definite. Proof. Since gra A = span{(mi , m∗i )}ni=1 , A is strictly monotone ⇔ x − y, x∗ − y ∗ > 0, n  ⇔  ∀(x, x∗ ), (y, y ∗ ) ∈ gra A with x = y  n  (αi − βi )m∗i > 0  (αi − βi )mi , i=1  i=1 n ∗  αi (mi , m∗i )  where (x, x ) =  n  =(  i=1 n  (y, y ∗ ) =  αi m∗i )  αi mi , i=1 n  βi (mi , m∗i ) = ( i=1  n i=1 n  βi m∗i )  βi mi , i=1  i=1  .  51  Chapter 4. Monotone operators with linear graphs Since m1 , . . . , mn are linearly independent,  x = y ⇔ (α1 , . . . , αn ) = (β1 , . . . , βn ) ⇔ γ := (α1 − β1 , . . . , αn − βn ) = 0,  A is strictly monotone n  ⇔  n  γi m∗i > 0,  γi mi , i=1  ⇔ γ ⊺Bγ > 0,  for γ = 0  (4.39)  i=1  ∀γ ∈ Rn with γ = 0  ⇔ B is positive definite.  (4.40) (4.41)  Just as in the proof of Theorem 4.5.1, we see that (4.40) holds.  52  Chapter 5  Auto-conjugates 5.1  Auto-conjugate representation  Definition 5.1.1 Let f : Rn × Rn → ]−∞, +∞] . We define f ⊺ by f ⊺(x, x∗ ) = f (x∗ , x),  ∀(x, x∗ ) ∈ Rn × Rn .  Definition 5.1.2 (Fenchel conjugate) Let f : Rn → ]−∞, +∞] . The Fenchel conjugate of f , f ∗ , is defined by f ∗ (x∗ ) = sup x  x∗ , x − f (x) ,  ∀x∗ ∈ Rn .  Fact 5.1.3 Let f : Rn → ]−∞, +∞] be proper lower semicontinuous and convex. Then f ∗∗ = f. Proof. See [26, Theorem 11.1]. Proposition 5.1.4 Let f, g : Rn → ]−∞, +∞] satisfy f ≤ g. Then f ∗ ≥ g∗ . Proof. Follows directly by Definition 5.1.2. Definition 5.1.5 (Auto-conjugate) Let f : Rn × Rn → ]−∞, +∞] be  53  Chapter 5. Auto-conjugates proper lower semicontinuous and convex. We say f is auto-conjugate if f ∗⊺ = f.  Here are some examples of auto-conjugate functions. Example 5.1.6 (Ghoussoub ’06/[17]) Let ϕ : Rn → ]−∞, +∞] be proper lower semicontinuous and convex, and A : Rn → Rn be linear and antisymmetric. Then f (x, x∗ ) := ϕ(x) + ϕ∗ (x∗ ) f (x, x∗ ) := ϕ(x) + ϕ∗ (−Ax + x∗ )  (∀(x, x∗ ) ∈ Rn × Rn )  are auto-conjugate.  54  Chapter 5. Auto-conjugates Proof. The first function is a special case of the second one when A = 0. So, it suffices to show the second case. Let (x, x∗ ) ∈ Rn × Rn . 
Then we have f ∗ (x∗ , x) y, x∗ + y ∗ , x − f (y, y ∗ )  = sup (y,y ∗ )  y, x∗ + y ∗ , x − ϕ(y) − ϕ∗ (−Ay + y ∗ )  = sup (y,y ∗ )  y, x∗ + Ay, x + −Ay + y ∗ , x − ϕ(y) − ϕ∗ (−Ay + y ∗ )  = sup (y,y ∗ )  = sup sup y  = sup y  = sup y  y∗  y, x∗ + Ay, x + −Ay + y ∗ , x − ϕ(y) − ϕ∗ (−Ay + y ∗ )  y, x∗ + y, −Ax − ϕ(y) + sup y∗  −Ay + y ∗ , x − ϕ∗ (−Ay + y ∗ )  y, −Ax + x∗ − ϕ(y) + ϕ∗∗ (x)  = ϕ∗ (−Ax + x∗ ) + ϕ(x)  by Fact 5.1.3  = f (x, x∗ ).  Now we introduce some basic properties of auto-conjugate functions. Lemma 5.1.7 (Penot-Simons-Zˇ alinescu ’05/[24],[29]) Let f : Rn ×Rn → ]−∞, +∞] be auto-conjugate. Then f (x, x∗ ) ≥ x, x∗ ,  ∀(x, x∗ ) ∈ Rn × Rn .  55  Chapter 5. Auto-conjugates Proof. Let (x, x∗ ) ∈ Rn × Rn . Then f (x, x∗ ) = f ∗ (x∗ , x) = sup  y, x∗ + y ∗ , x − f (y, y ∗ )  (y,y ∗ )  ≥ x, x∗ + x∗ , x − f (x, x∗ ). Thus 2f (x, x∗ ) ≥ 2 x, x∗ . That is , f (x, x∗ ) ≥ x, x∗ . Proposition 5.1.8 Let f, g : Rn × Rn → ]−∞, +∞] be auto-conjugate such that f ≤ g. Then f = g. Proof. Let (x, x∗ ) ∈ Rn × Rn . By assumptions, f (x, x∗ ) ≤ g(x, x∗ ). On the other hand, by Proposition 5.1.4, f ∗ (x∗ , x) ≥ g∗ (x∗ , x). Since f, g are auto-conjugate, f (x, x∗ ) ≥ g(x, x∗ ). Hence f (x, x∗ ) = g(x, x∗ ). Proposition 5.1.9 Let f : Rn × Rn → ]−∞, +∞]. Then f ∗⊺ = f ⊺∗ . Proof. Let (x, x∗ ) ∈ Rn × Rn . Then f ⊺∗ (x∗ , x) = sup  (y, y ∗ ), (x∗ , x) − f ⊺(y, y ∗ )  (y,y ∗ )  = sup  (y ∗ , y), (x, x∗ ) − f (y ∗ , y)  (y,y ∗ )  = f ∗ (x, x∗ ) = f ∗⊺(x∗ , x).  Fact 5.1.10 (Fenchel-Young inequality) Let f : Rn → ]−∞, +∞] be proper  56  Chapter 5. Auto-conjugates lower semicontinuous and convex, and x, x∗ ∈ Rn . Then f (x) + f ∗ (x∗ ) ≥ x, x∗ , and equality holds, if and only if, x∗ ∈ ∂f (x). Proof. See [25, Theorem 23.5] and [25, page 105].  .  Definition 5.1.11 Let f : Rn × Rn → ]−∞, +∞] . We define G(f ) by x∗ ∈ G(f )x ⇔ f (x, x∗ ) = x, x∗ .  Here is an important property of auto-conjugates, which provides our main motivation for studying them. Fact 5.1.12 (Penot-Simons-Zˇ alinescu ’05) Let f : Rn ×Rn → ]−∞, +∞] be auto-conjugate. Then G(f ) is maximal monotone. Proof. See [29, Theorem 1.4.(a)]. Definition 5.1.13 (Representation) Let f : Rn × Rn → ]−∞, +∞] and A : Rn ⇉ Rn . If A = G(f ), we call f a representation for A. If f is autoconjugate, we call f an auto-conjugate representation for A. Proposition 5.1.14 Let ϕ : Rn → ]−∞, +∞] be proper lower semicontinuous and convex, and A : Rn → Rn be linear and antisymmetric. Let f (x, x∗ ) := ϕ(x) + ϕ∗ (−Ax + x∗ ) (∀(x, x∗ ) ∈ Rn × Rn ). Then f is an autoconjugate representation for ∂ϕ + A.  57  Chapter 5. Auto-conjugates Proof. By Example 5.1.6, f is auto-conjugate. Then we have x, x∗ = f (x, x∗ ) ⇔ x, −Ax + x∗ = ϕ(x) + ϕ∗ (−Ax + x∗ ) ⇔ x∗ − Ax ∈ ∂ϕ(x)  (by Fact 5.1.10)  ⇔ (x, x∗ ) ∈ gra (∂ϕ + A).  Hence f is an auto-conjugate representation for ∂ϕ + A. Definition 5.1.15 Let f, g : Rn × Rn → ]−∞, +∞] . We define  (f  ∗ 2 g)(x, x )  = inf f (x, x∗ − y ∗ ) + g(x, y ∗ ) , ∗ y  ∀(x, x∗ ) ∈ Rn × Rn .  Definition 5.1.16 Let f, g : Rn → ]−∞, +∞] . We define (f ⊕ g)(x, x∗ ) = f (x) + g(x∗ ),  ∀(x, x∗ ) ∈ Rn × Rn .  Definition 5.1.17 We define π1 : Rn × Rn → Rn : (x, y) → x. Fact 5.1.18 Let f, g : Rn × Rn → ]−∞, +∞] be proper lower semicontinuous and convex. Set ϕ = f  2 g.  Assume  ϕ(x, x∗ ) > −∞,  ∀(x, x∗ ) ∈ Rn × Rn ,  58  Chapter 5. Auto-conjugates and  λ π1 dom f − π1 dom g , λ>0  is a linear subspace of Rn . Then ∀(x, x∗ ) ∈ Rn × Rn .  ϕ∗ (x∗ , x) = min f ∗ (x∗ − y ∗ , x) + g∗ (y ∗ , x) , ∗ y  Proof. See [29, Theorem 4.2]. 
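For a concrete feel for Example 5.1.6, Lemma 5.1.7 and Definition 5.1.11, take φ = (1/2)| · |2 on R, so that φ∗ = φ and f(x, x∗) = (1/2)x2 + (1/2)(x∗)2 is auto-conjugate. The following minimal sketch (not from the thesis) checks on a grid that f majorizes the pairing and that equality recovers G(f) = ∇φ = Id:

```python
import numpy as np

# phi = (1/2)|.|^2 on R, so phi* = phi and f(x, xs) = phi(x) + phi*(xs) is
# auto-conjugate by Example 5.1.6 (with A = 0).
f = lambda x, xs: 0.5 * x**2 + 0.5 * xs**2

xs_grid = np.linspace(-3.0, 3.0, 601)
for x in np.linspace(-3.0, 3.0, 13):
    gap = f(x, xs_grid) - x * xs_grid           # = (1/2)(x - xs)^2
    assert np.all(gap >= -1e-12)                # Lemma 5.1.7: f(x, xs) >= <x, xs>
    # Equality holds only at xs = x, so G(f) = Id = grad phi (Definition 5.1.11),
    # up to the resolution of the grid.
    assert abs(xs_grid[np.argmin(gap)] - x) < 0.02
```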
Proposition 5.1.19 Let f, g : Rn × Rn → ]−∞, +∞] be auto-conjugate such that (π1 dom f − π1 dom g) is a linear subspace of Rn . Suppose M = f  2 g.  Then  f (x, x∗ − y ∗ ) + g(x, y ∗ ) , M (x, x∗ ) = min ∗ y  ∀(x, x∗ ) ∈ Rn × Rn .  and M is an auto-conjugate representation for G(f ) + G(g). Proof. By Lemma 5.1.7, M (x, x∗ ) ≥ inf ∗ y  x, y ∗ + x, x∗ − y ∗  = x, x∗ ,  ∀(x, x∗ ) ∈ Rn × Rn . (5.1)  Since (π1 dom f −π1 dom g) is a linear subspace,  λ>0 λ  π1 dom f −π1 dom g =  (π1 dom f − π1 dom g) is a linear subspace. Let (x, x∗ ) ∈ Rn × Rn . By  59  Chapter 5. Auto-conjugates Fact 5.1.18, we have M ∗ (x∗ , x) = min f ∗ (x∗ − y ∗ , x) + g∗ (y ∗ , x) ∗ y  = min f (x, x∗ − y ∗ ) + g(x, y ∗ ) ∗ y  = M (x, x∗ ).  Hence M (x, x∗ ) = min f (x, x∗ − y ∗ ) + g(x, y ∗ ) , ∗ y  ∀(x, x∗ ) ∈ Rn × Rn .  (5.2)  and M is auto-conjugate. In the following we show that M is a representation for G(f )+G(g). Suppose (x, x∗ ) satisfies M (x, x∗ ) = x, x∗ . For every y ∗ ∈ Rn , since f, g are auto-conjugate, by Fact 5.1.7 we have f (x, x∗ − y ∗ ) ≥ x, x∗ − y ∗ , g(x, y ∗ ) ≥ x, y ∗ , and M (x, x∗ ) ≥ x, x∗ .  (5.3)  60  Chapter 5. Auto-conjugates Then by (5.2) and (5.3), (x, x∗ ) ∈ gra G(M ) ⇔ M (x, x∗ ) = x, x∗ ⇔ ∃s∗ such that x, x∗ = M (x, x∗ ) = f (x, x∗ − s∗ ) + g(x, s∗ ) ⇔ ∃s∗ such that 0 = f (x, x∗ − s∗ ) − x, x∗ − s∗ + g(x, s∗ ) − x, s∗ ⇔ ∃s∗ such that x, x∗ − s∗ = f (x, x∗ − s∗ ), x, s∗ = g(x, s∗ ) ⇔ ∃s∗ such that (x, x∗ − s∗ ) ∈ gra G(f ), (x, s∗ ) ∈ gra G(g) ⇔ x∗ ∈ G(f ) + G(g))x.  Now this raises the following question: Given a maximal monotone operator A, can we find an auto-conjugate representation for A? Before answering this question, we introduce some definitions.  5.2  The Fitzpatrick function and the proximal average  Definition 5.2.1 (Fitzpatrick function ’88) Let A : X ⇉ X. The Fitzpatrick function of A is FA : (x, x∗ ) →  sup  x, y ∗ + y, x∗ − y, y ∗ .  (y,y ∗ )∈gra A  Fact 5.2.2 Let A : Rn ⇉ Rn be monotone such that gra A is nonempty. Then FA is proper lower semicontinuous and convex. 61  Chapter 5. Auto-conjugates Proof. See [8, Fact 1.2]. Fact 5.2.3 Let A : Rn → Rn be linear and monotone. Then ∗ qA (x) = ιran A+ (x) + q(A+ )† (x),  ∀x ∈ Rn .  Proof. See [25, page 108] and Corollary 2.2.18. Fact 5.2.4 Let A : Rn → Rn be linear and monotone. Then FA (x, x∗ ) = ιran A+ (x∗ + A∗ x) + 12 q(A+ )† (x∗ + A∗ x),  ∀(x, x∗ ) ∈ Rn × Rn .  Proof. Let (x, x∗ ) ∈ Rn × Rn . By [4, Theorem 2.3], FA (x, x∗ ) ∗ = 2qA ( 1 x∗ + 12 A∗ x) + 2  = ιran A+ ( 12 x∗ + 12 A∗ x) + 2q(A+ )† ( 12 x∗ + 12 A∗ x)  (by Fact 5.2.3)  = ιran A+ (x∗ + A∗ x) + 12 q(A+ )† (x∗ + A∗ x).  Fact 5.2.5 Let A : Rn → Rn be linear and monotone. Then FA∗ (x∗ , x) = ιgra A (x, x∗ ) + x, Ax ,  ∀(x, x∗ ) ∈ Rn × Rn .  Proof. See [4, Theorem 2.3].  62  Chapter 5. Auto-conjugates Proposition 5.2.6 Let A : Rn → Rn be linear and monotone. Then A+k Id is invertible, for every k > 0. Proof. Let x satisfy that (A + k Id)x = 0. Then we have Ax = −kx. By the monotonicity of A, we have 2  k x  = x, kx = x, −Ax = − x, Ax ≤ 0.  Then x = 0. Hence A + k Id is invertible. Definition 5.2.7 (Proximal average) Let f0 , f1 : Rn × Rn → ]−∞, +∞] be proper lower semicontinuous and convex. We define P (f0 , f1 ), the proximal average of f0 and f1 , by P (f0 , f1 )(x, x∗ ) = − 12 (x, x∗ )  2  + (y1 , y1∗ )  2  +  inf  (y1 ,y1∗ )+(y2 ,y2∗ )=(x,x∗ )  + (y2 , y2∗ )  2  ,  ∗ 1 1 2 f0 (2y1 , 2y1 ) + 2 f1  2y2 , 2y2∗  ∀(x, x∗ ) ∈ Rn × Rn .  Remark 5.2.8 Let f0 , f1 : Rn × Rn → ]−∞, +∞] be proper lower semicontinuous and convex. Then  P (f0⊺, f1⊺) = P (f0 , f1 )  ⊺  .  
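To make Fact 5.2.4 concrete before it is used below, the following minimal sketch (not from the thesis; the matrix and the test point are arbitrary) evaluates the Fitzpatrick function of a linear monotone operator on R2 in two ways: directly from Definition 5.2.1, where the supremum of the concave quadratic y ↦ ⟨x, Ay⟩ + ⟨y, x∗⟩ − ⟨y, Ay⟩ is obtained from its first-order condition, and from the closed form of Fact 5.2.4:

```python
import numpy as np

# Arbitrary linear monotone operator on R^2; A_plus is invertible, so ran A_plus = R^2
# and the indicator term in Fact 5.2.4 vanishes; also (A_plus)^† = (A_plus)^{-1}
# (cf. Remark 2.1.35).
A = np.array([[2.0, -1.0],
              [3.0,  1.0]])
A_plus = (A + A.T) / 2

def F_A_formula(x, xs):
    # Fact 5.2.4: F_A(x, x*) = (1/2) q_{(A_plus)^†}(x* + A* x), with q_B(u) = (1/2)<u, Bu>.
    u = xs + A.T @ x
    return 0.25 * u @ np.linalg.solve(A_plus, u)

def F_A_def(x, xs):
    # Definition 5.2.1: F_A(x, x*) = sup_y  <x, Ay> + <y, x*> - <y, Ay>.
    # The objective is concave in y (A is monotone); its gradient
    # A^T x + x* - 2 A_plus y vanishes at the maximizer.
    y = 0.5 * np.linalg.solve(A_plus, A.T @ x + xs)
    return x @ (A @ y) + y @ xs - y @ (A @ y)

x, xs = np.array([1.0, -0.5]), np.array([0.3, 2.0])
assert np.isclose(F_A_formula(x, xs), F_A_def(x, xs))   # both equal 0.085 here
```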
Fact 5.2.9 Let f0 and f1 : Rn → ]−∞, +∞] be proper lower semicontinuous  63  Chapter 5. Auto-conjugates and convex. Then P (f0 , f1 )(x, x∗ ) = inf  (y,y ∗ )  1 2 f0 (x  + y, x∗ + y ∗ ) + 12 f1 (x − y, x∗ − y ∗ ) +  1 2  2  (y, y ∗ )  ,  ∀(x, x∗ ) ∈ Rn × Rn . Proof. Let (x, x∗ ) ∈ Rn × Rn . Then P (f0 , f1 )(x, x∗ ) = − 12 (x, x∗ )  2  + (y1 , y1∗ )  2  + (y2 , y2∗ )  = − 12 (x, x∗ )  2  + inf  x + ( x+y 2 ,  = inf  (y,y ∗ )  +  inf  (y1 ,y1∗ )+(y2 ,y2∗ )=(x,x∗ )  (y,y ∗ )  ∗ +y ∗  2  1 2 f0 (x  )  2  ∗ 1 1 2 f0 (2y1 , 2y1 ) + 2 f1  2y2 , 2y2∗  2  1 2 f0  x 2 x+y 2 ,2  x + ( x−y 2 ,  ∗ −y ∗  2  )  ∗ +y ∗  2  x + 12 f1 2 x−y 2 ,2  ∗ −y ∗  2  2  + y, x∗ + y ∗ ) + 12 f1 (x − y, x∗ − y ∗ ) +  1 2  (y, y ∗ )  2  .  Definition 5.2.10 Let f : Rn × Rn → ]−∞, +∞] be proper lower semicontinuous and convex and hf define by hf (x, x∗ ) = inf  ∗ ∗ 1 1 ∗ 2 f (x, 2x1 )+ 2 f (2x2 , x)  | x∗ = x∗1 +x∗2 ,  ∀(x, x∗ ) ∈ Rn ×Rn .  Now we begin to answer the question above. Fact 5.2.11 (Bauschke-Wang ’07) Let A : X ⇉ X be maximal mono64  Chapter 5. Auto-conjugates tone. Then P (FA , FA∗ ⊺) is an auto-conjugate representation for A. Proof. See [9, Theorem 5.7]. Fact 5.2.12 (Penot-Zˇ alinescu ’05) Let A : X ⇉ X be maximal monotone such that aff(dom A) is closed. Then hFA is an auto-conjugate representation for A. Proof. See [24, Proposition 4.2].  5.3  The second main result  Our main goal is to find a formula for P (FA , FA∗ ⊺) associated with a linear and monotone operator A. Until now, there was no explicit formula for that. Theorem 5.3.1 Let A : Rn → Rn be linear and monotone. Then P (FA , FA∗ ⊺)(x, x∗ ) = ιran A+ (x∗ − Ax) + x, x∗ + q(A+ )† (x∗ − Ax), ∀(x, x∗ ) ∈ Rn × Rn .  65  Chapter 5. Auto-conjugates Proof. Let (x, x∗ ) ∈ Rn × Rn . By Fact 5.2.2 and Fact 5.2.9, we have P (FA , FA∗ ⊺)(x, x∗ ) = inf  (y,y ∗ )  +  1 2  = inf  (y, y ∗ )  (y,y ∗ )  +  1 2  = inf y  +  1 2  1 2 FA (x  + y, x∗ + y ∗ ) + 12 FA∗ ⊺(x − y, x∗ − y ∗ )  (5.4)  + y, x∗ + y ∗ ) + ιgra A (x − y, x∗ − y ∗ )  (5.5)  2  1 2 FA (x  x − y, A(x − y) + 1 2 FA (x  1 2  2  (y, y ∗ )  + y, 2x∗ − A(x − y)) +  (y, x∗ − A(x − y))  1 2  x − y, A(x − y)  (5.6)  2  = inf ιran A+ 2x∗ − A(x − y) + A∗ x + A∗ y  (5.7)  y  + 14 q(A+ )† 2x∗ − A(x − y) + A∗ x + A∗ y +  1 2  x − y, A(x − y) +  1 2  y  2  +  1 2  x∗ − A(x − y)  2  ,  in which, (5.5) holds by Fact 5.2.5, (5.6) by y ∗ = x∗ − A(x − y), and (5.7) by Fact 5.2.4. Since 2x∗ − A(x − y) + A∗ x + A∗ y = 2x∗ − 2Ax + Ax + Ay + A∗ x + A∗ y = 2x∗ − 2Ax + (A + A∗ )(x + y) = 2x∗ − 2Ax + 2A+ (x + y),  (5.8)  66  Chapter 5. Auto-conjugates Thus 2x∗ − A(x − y) + A∗ x + A∗ y ∈ ran A+ ⇔ x∗ − Ax ∈ ran A+ . Then ιran A+ (2x∗ − A(x − y) + A∗ x + A∗ y) = ιran A+ (x∗ − Ax).  (5.9)  We consider two cases. Case 1: x∗ − Ax ∈ / ran A+ . By (5.7) and (5.9), P (FA , FA∗ ⊺)(x, x∗ ) = ∞. Case 2: x∗ − Ax ∈ ran A+ . By Proposition 2.2.15 applied to A+ with x replaced by x∗ − Ax and y replaced by x + y, we have ∗ 1 4 q(A+ )† (2x  − A(x − y) + A∗ x + A∗ y)  = 14 q(A+ )† (2x∗ − 2Ax + 2A+ (x + y)) (by (5.8)) =  1 4  · 22 q(A+ )† (x∗ − Ax + A+ (x + y))  = q(A+ )† (x∗ − Ax + A+ (x + y)) = q(A+ )† (x∗ − Ax) + Pran A+ (x∗ − Ax), x + y + qA+ (x + y) = q(A+ )† (x∗ − Ax) + x + y, x∗ − Ax +  1 2  x + y, A(x + y) .  (5.10)  By (5.7), (5.9) and (5.10), we have P (FA , FA∗ ⊺)(x, x∗ ) = q(A+ )† (x∗ − Ax) + inf y  +  1 2  x − y, A(x − y) +  x + y, x∗ − Ax + 1 2  y  2  +  1 2  1 2  x + y, A(x + y)  x∗ − A(x − y)  2  .  67  Chapter 5. 
Auto-conjugates Since 1 2  x + y, A(x + y) +  1 2  x − y, A(x − y)  = x, Ax + y, Ay ,  we have x + y, x∗ − Ax +  1 2  x + y, A(x + y) +  1 2  x − y, A(x − y)  = x, x∗ − x, Ax + y, x∗ − y, Ax + x, Ax + y, Ay = x, x∗ + y, Ay + y, x∗ − y, Ax .  Then P (FA , FA∗ ⊺)(x, x∗ ) = q(A+ )† (x∗ − Ax) + x, x∗ + inf y  +  1 2  y  2  +  1 2  x∗ − Ax + Ay  2  = q(A+ )† (x∗ − Ax) + x, x∗ + inf y  +  1 2  y  2  +  1 2  x∗ − Ax  2  +  1 2  y, Ay + y, x∗ − y, Ax  Ay  y, Ay + y, x∗ − y, Ax 2  + Ay, x∗ − Ax  .  68  Chapter 5. Auto-conjugates =  1 2  y + Ay 2 ,  = q(A+ )† (x∗ − Ax) + x, x∗ +  1 2  x∗ − Ax  Since y, Ay +  1 2  y  2  +  1 2  Ay  2  P (FA , FA∗ ⊺)(x, x∗ )  + inf y  1 2  y + Ay  2  + y, x∗ − Ax + A∗ (x∗ − Ax)  = q(A+ )† (x∗ − Ax) + x, x∗ + + inf y  1 2  y + Ay  2  y  1 2  x∗ − Ax  2  + y, (Id +A∗ )(x∗ − Ax)  = q(A+ )† (x∗ − Ax) + x, x∗ + − sup  2  1 2  x∗ − Ax  2  y, (Id +A∗ )(Ax − x∗ ) − q(Id +A∗ )(Id +A) (y)  = q(A+ )† (x∗ − Ax) + x, x∗ +  1 2  x∗ − Ax  2  ∗ ∗ ∗ − q(Id +A∗ )(Id +A) (Id +A )(Ax − x )  = q(A+ )† (x∗ − Ax) + x, x∗ + −q  (Id +A∗ )(Id +A)  †  1 2  x∗ − Ax  2  (Id +A∗ )(Ax − x∗ )  (by Proposition 5.2.3  and Proposition 5.2.6) = q(A+ )† (x∗ − Ax) + x, x∗ +  1 2  x∗ − Ax  2  − q(Id +A)−1 (Id +A∗ )−1 (Id +A∗ )(Ax − x∗ ) = q(A+ )† (x∗ − Ax) + x, x∗ +  1 2  x∗ − Ax  2  (by Remark 2.1.35) −  1 2  x∗ − Ax  2  = x, x∗ + q(A+ )† (x∗ − Ax).  69  Chapter 5. Auto-conjugates Combining the results above, we have P (FA , FA∗ ⊺)(x, x∗ ) = ιran A+ (x∗ − Ax) + x, x∗ + q(A+ )† (x∗ − Ax), ∀(x, x∗ ) ∈ Rn × Rn .  Proposition 5.3.2 Let A : Rn → Rn be linear and monotone. Then π1 [dom P (FA , FA∗ ⊺)] = Rn . Proof. By Theorem 5.3.1, P (FA , FA∗ ⊺)(x, Ax) = x, Ax < ∞,  ∀x ∈ Rn .  Thus (x, Ax) ∈ dom P (FA , FA∗ ⊺), ∀x ∈ Rn . Hence π1 [dom P (FA , FA∗ ⊺)] = Rn .  Corollary 5.3.3 Let A : Rn → Rn be linear, symmetric and monotone. Then P (FA , FA∗ ⊺) = qA ⊕ (ιran A + qA† ).  70  Chapter 5. Auto-conjugates Proof. Let (x, x∗ ) ∈ Rn × Rn . By Theorem 5.3.1, we have P (FA , FA∗ ⊺)(x, x∗ ) = ιran A (x∗ − Ax) + x, x∗ + qA† (x∗ − Ax) = ιran A (x∗ ) + x, x∗ + qA† (x∗ − Ax). Now suppose x∗ ∈ ran A. By Proposition 2.2.15, we have qA† (x∗ − Ax) = qA (x) + qA† (x∗ ) − x, Pran A x∗ = qA (x) + qA† (x∗ ) − x, x∗ . Thus P (FA , FA∗ ⊺)(x, x∗ ) = qA (x) + qA† (x∗ ). Combining the conclusions above, P (FA , FA∗ ⊺)(x, x∗ ) = ιran A (x∗ ) + qA (x) + qA† (x∗ ) = qA ⊕ (ιran A + qA† ) (x, x∗ ),  ∀(x, x∗ ) ∈ Rn × Rn .  Corollary 5.3.4 Let A : Rn → Rn be linear and antisymmetric. Then P (FA , FA∗ ⊺) = ιgra A . Proof. Follows directly by Theorem 5.3.1.  71  Chapter 5. Auto-conjugates Corollary 5.3.5 Let A : Rn → Rn be linear and monotone. Then ∀(x, x∗ ) ∈ Rn × Rn  P (FA , FA∗ ⊺)(x, x∗ ) ≥ x, x∗ P (FA , FA∗ ⊺)(x, Ax) = x, Ax .  Proof. Apply Theorem 5.3.1 and Corollary 2.2.12. For a linear and monotone operator A, Fact 5.2.11 shows P (FA , FA∗ ⊺) is auto-conjugate. Now we give a new proof. Proposition 5.3.6 Let A : Rn → Rn be linear and monotone. Then P (FA , FA∗ ⊺) is auto-conjugate. Proof. Let (x, x∗ ) ∈ Rn × Rn . By Theorem 5.3.1, we have P (FA , FA∗ ⊺)∗ (x∗ , x) (x∗ , x), (y, y ∗ ) − ιran A+ (y ∗ − Ay) − y, y ∗  = sup (y,  y∗ )  (5.11)  − q(A+ )† (y ∗ − Ay) (y, A+ w + Ay), (x∗ , x) − ιran A+ (A+ w)  = sup (y, w)  (5.12)  − y, A+ w + Ay − q(A+ )† (A+ w) y, x∗ + A+ w + Ay, x − y, A+ w + Ay − qA+ (w)  = sup (y, w)  = sup sup y  = sup y  w  y, x∗ + A+ w + Ay, x − y, A+ w + Ay − qA+ (w)  y, x∗ + Ay, x − y, Ay + sup  − qA+ (w)  (5.13)  w  w, A+ x − A+ y, w  (5.14)  .  72  Chapter 5. 
Auto-conjugates (5.12) holds by y ∗ − Ay = A+ w and (5.13) by Corollary 2.2.16. By (5.14), P (FA , FA∗ ⊺)∗ (x∗ , x) = sup y  = sup y  = sup y  = sup y  ∗ y, A∗ x + x∗ − y, Ay + qA (A+ x − A+ y) +  y, A∗ x + x∗ − y, Ay + q(A+ )† (A+ x − A+ y)  (5.15)  y, A∗ x + x∗ − y, Ay + qA+ (x − y)  (5.16)  y, A∗ x + x∗ − A+ x − qA (y) + qA (x)  (5.17)  ∗ = qA (A∗ x + x∗ − A+ x) + qA (x)  = ιran A+ (A∗ x + x∗ − A+ x) + q(A+ )† (A∗ x + x∗ − A+ x) + qA (x)  (5.18)  (5.15) and (5.18) holds by Proposition 5.2.3, (5.16) by Corollary 2.2.16 and (5.17) by Remark 2.1.12. Note A∗ x + x∗ − A+ x = x∗ − Ax + Ax + A∗ x − A+ x = x∗ − Ax + A+ x. (5.19)  Thus ιran A+ (A∗ x + x∗ − A+ x) = ιran A+ (x∗ − Ax).  (5.20)  If x∗ − Ax ∈ / ran A+ . By (5.18) and (5.20), P (FA , FA∗ ⊺)∗ (x∗ , x) = ∞. Now suppose x∗ − Ax ∈ ran A+ . By Proposition 2.2.15 applied to A+ with  73  Chapter 5. Auto-conjugates x replaced by x∗ − Ax and y replaced by x, q(A+ )† (A∗ x + x∗ − A+ x) = q(A+ )† (x∗ − Ax + A+ x)  (by (5.19))  = q(A+ )† (x∗ − Ax) + Pran A+ (x∗ − Ax), x + qA+ (x) = q(A+ )† (x∗ − Ax) + x∗ − Ax, x + qA (x)  (by Remark 2.1.12)  = q(A+ )† (x∗ − Ax) + x∗ , x − qA (x). Then by (5.18) and (5.20), P (FA , FA∗ ⊺)∗ (x∗ , x) = q(A+ )† (x∗ − Ax) + x, x∗ . Combining the results above, we have P (FA , FA∗ ⊺)∗ (x∗ , x) = ιran A+ (x∗ − Ax) + q(A+ )† (x∗ − Ax) + x, x∗ = P (FA , FA∗ ⊺)(x, x∗ )  (by Theorem 5.3.1),  ∀(x, x∗ ) ∈ Rn × Rn .  Proposition 5.3.7 Let B : Rn → Rn be linear, symmetric and monotone. Let x ∈ Rn . Then  x, Bx = 0 ⇔ Bx = 0.  74  Chapter 5. Auto-conjugates Proof. “⇐” Clear. “⇒” Take y ∈ Rn and α > 0. We have  0 ≤ αy + x, B(αy + x)  (5.21)  = x, Bx + 2α y, Bx + α2 y, By = 2α y, Bx + α2 y, By ,  (by x, Bx = 0)  ⇒ 0 ≤ 2 y, Bx + α y, By  (5.22)  ∀y ∈ Rn  (5.23)  ⇒ 0 ≤ y, Bx , ⇒ Bx = 0,  in which, (5.21) holds by monotonicity of B, (5.22) by multiplying  1 α  in both  sides, and (5.23) by letting α → 0+ . Corollary 5.3.8 Let B : Rn → Rn be linear, symmetric and monotone. Then ker B = {x | qB (x) = 0}. Corollary 5.3.9 Let B : Rn → Rn be linear, symmetric and monotone. Let x ∈ Rn . Then (ιran B + qB † )(x) = 0, if and only if, x = 0. Proof. “⇐” Clear. “⇒” By assumption, we have  x ∈ ran B  (5.24)  0 = x, B † x .  (5.25)  75  Chapter 5. Auto-conjugates By Fact 2.2.2, Fact 2.2.4 and Corollary 2.2.12, B † is linear, symmetric and monotone. By (5.25) and Proposition 5.3.7 applied to B † , B † x = 0. Then by (5.24) and Fact 2.2.11, x = Pran B x = BB †x = 0. Corollary 5.3.10 Let A : Rn → Rn be linear and monotone. (∀(x, x∗ ) ∈ Rn × Rn )  P (FA , FA∗ ⊺)(x, x∗ ) = x, x∗ ⇔ (x, x∗ ) ∈ gra A.  Proof. By Theorem 5.3.1 and Corollary 5.3.9. Corollary 5.3.11 Let A : Rn → Rn be linear and monotone. Then P (FA , FA∗ ⊺) is an auto-conjugate representation for A. Proof. By Proposition 5.3.6 and Corollary 5.3.10. For a linear and monotone operator A, what is hFA ? Proposition 5.3.12 Let A : Rn → Rn be linear and monotone. Then hFA = P (FA , FA∗ ⊺).  76  Chapter 5. Auto-conjugates Proof. Let (x, x∗ ) ∈ Rn × Rn . Then hFA (x, x∗ ) = inf  ∗ ∗ 1 1 ∗ 2 FA (x, 2x1 ) + 2 FA (2x2 , x)  = inf ∗  ∗ 1 2 FA (x, 2(x  − y ∗ )) + 12 FA∗ (2y ∗ , x)  = inf ∗  ∗ 1 2 FA (x, 2(x  − y ∗ )) + ιgra A (x, 2y ∗ ) + qA (x)  y  y  = 12 FA (x, 2x∗ − Ax) + qA+ (x)  | x∗ = x∗1 + x∗2  (5.26)  (by 2y ∗ = Ax and Remark 2.1.12)  = ιran A+ (2x∗ − Ax + A∗ x) + 14 q(A+ )† (2x∗ − Ax + A∗ x) + qA+ (x), (5.27) where (5.26) holds by Fact 5.2.5, and (5.27) by Fact 5.2.4. Note that 2x∗ − Ax + A∗ x = 2x∗ − 2Ax + 2A+ x.  (5.28)  Then 2x∗ − Ax + A∗ x ∈ ran A+ ⇔ x∗ − Ax ∈ ran A+ . 
Thus ιran A+ (2x∗ − Ax + A∗ x) = ιran A+ (x∗ − Ax).  (5.29)  / ran A+ , hFA (x, x∗ ) = ∞ by (5.27) and (5.29). If x∗ − Ax ∈ Now suppose that x∗ − Ax ∈ ran A+ . By Proposition 2.2.15 applied to A+  77  Chapter 5. Auto-conjugates with x replaced by x∗ − Ax and y replaced by x, we have ∗ 1 4 q(A+ )† (2x  − Ax + A∗ x)  = 14 q(A+ )† (2x∗ − 2Ax + 2A+ x) (by (5.28)) = q(A+ )† (x∗ − Ax + A+ x) = q(A+ )† (x∗ − Ax) + x, Pran A+ (x∗ − Ax) + qA+ (x) = q(A+ )† (x∗ − Ax) + x, x∗ − Ax + qA+ (x) = q(A+ )† (x∗ − Ax) + x, x∗ − qA+ (x)  (by Remark 2.1.12).  Then by (5.27) and (5.29), hFA (x, x∗ ) = x, x∗ + q(A+ )† (x∗ − Ax). Combining the results above, hFA (x, x∗ ) = ιran A+ (x∗ − Ax) + x, x∗ + q(A+ )† (x∗ − Ax) = P (FA , FA∗ ⊺)(x, x∗ ) (by Theorem 5.3.1), ∀(x, x∗ ) ∈ Rn × Rn .  Proposition 5.3.13 Let A : Rn ⇉ Rn be monotone such that gra A is nonempty. Then  hFA = hF ∗⊺ . A  78  Chapter 5. Auto-conjugates Proof. Let (x, x∗ ) ∈ Rn × Rn . By Definition 5.2.10, we have hF ∗⊺ (x, x∗ ) = inf A  ∗ 1 ∗⊺ 2 FA (x, 2x1 )  + 12 (FA∗ ⊺)∗ (2x∗2 , x) | x∗ = x∗1 + x∗2  (5.30)  = inf  ∗ 1 ∗ 2 FA (2x1 , x)  + 12 FA (x, 2x∗2 ) | x∗ = x∗1 + x∗2  (5.31)  = inf  ∗ 1 ∗ 2 FA (2x2 , x)  + 12 (FA (x, 2x∗1 ) | x∗ = x∗1 + x∗2  (5.32)  = hFA (x, x∗ ),  (5.33)  where (5.31) holds by Proposition 5.1.9, Fact 5.2.2 and Fact 5.1.3. Corollary 5.3.14 Let A : Rn → Rn be linear and monotone. Then hFA = hF ∗⊺ = P (FA , FA∗ ⊺). A  5.4  An example  In the following we give an example of Theorem 5.3.1. Example 5.4.1 Let     cos θ − sin θ A=  sin θ cos θ be the rotation of angle θ ∈ [0, π2 [. Then A∗ = A−1 and 2  P (FA , FA∗ ⊺)(x, x∗ ) =  1 2 cos θ  x∗ − Ax  =  1 2 cos θ  x∗ − sin θRx  + x, x∗ 2  +  cos θ 2  x 2,  79  Chapter 5. Auto-conjugates where     0 −1 R= . 1 0  Proof. By assumptions, we have AA∗ = Id  A+ = cos θ Id  R∗ R = Id .  (5.34)  By Theorem 5.3.1 and Remark 2.1.35, we have P (FA , FA∗ ⊺)(x, x∗ ) =  1 2 cos θ  x∗ − Ax  2  + x, x∗ .  (5.35)  By (5.35), (5.34), and A = cos θ Id + sin θR, we have P (FA , FA∗ ⊺)(x, x∗ ) =  1 2 cos θ  x∗  2  + A∗ Ax, x − 2 x∗ , Ax  =  1 2 cos θ  x∗  2  + x  2  − 2 x∗ , cos θx + sin θRx  =  1 2 cos θ  x∗  2  + x  2  − 2 sin θ x∗ , Rx  =  1 2 cos θ  x∗  2  + sin2 θ x  =  1 2 cos θ  x∗ − sin θRx  2  +  2  + x, x∗ + x, x∗  − 2 sin θ x∗ , Rx + cos2 θ x  cos θ 2  2  x 2.  Fact 5.4.2 Let A : Rn ⇉ Rn be monotone such that gra A is nonempty. Then FA⊺−1 = FA . Proof. See [8, Fact 1.2]. 80  Chapter 5. Auto-conjugates Corollary 5.4.3 Let A : Rn ⇉ Rn be monotone such that gra A is nonempty. Then P (FA−1 , FA∗⊺−1 ) = P (FA , FA∗ ⊺)  ⊺  .  Proof. By assumptions and Proposition 3.3.2, A−1 is monotone and gra A−1 is nonempty. Then by Proposition 5.1.9, Fact 5.4.2 and Remark 5.2.8, P (FA−1 , F ∗ ⊺A−1 ) = P (FA⊺, FA⊺∗−1 ) = P (FA⊺, FA∗ ) = P (FA , FA∗ ⊺)  ⊺  .  Theorem 5.4.4 Let A : Rn → Rn be linear, monotone and invertible. Let (x, x∗ ) ∈ Rn × Rn . Then  0∈  ∗ ⊺ )(x,x∗ ) ∂P (FA ,FA ∂x∗  ⇔ x∗ = A◦ x  (5.36)  0∈  ∗ ⊺ )(x,x∗ ) ∂P (FA ,FA ∂x  ⇔ x = (A−1 )◦ x∗ .  (5.37)  Proof. By Fact 2.2.2, Fact 2.2.4, Fact 2.1.13 and Corollary 2.2.12, (A+ )† is linear, symmetric and monotone.  81  Chapter 5. Auto-conjugates Now (5.36): Follows from Theorem 5.3.1, Fact 2.1.18 and Fact 2.1.30,  0∈  ∗ ⊺ )(x,x∗ ) ∂P (FA ,FA ∂x∗  ⇔ 0 ∈ ∂ιran A+ (x∗ − Ax) + x + (A+ )† (x∗ − Ax), x∗ − Ax ∈ ran A+ ⇔ 0 ∈ ker A+ + x + (A+ )† (x∗ − Ax), x∗ − Ax ∈ ran A+ ⇔ 0 ∈ x + (A+ )−1 (x∗ − Ax), x∗ − Ax ∈ ran A+  (by Fact 2.1.32)  (by Corollary 2.2.7)  ⇔ −x ∈ (A+ )−1 (x∗ − Ax), x∗ − Ax ∈ ran A+ ⇔ x∗ − Ax ∈ −A+ x ⇔ x∗ ∈ Ax − A+ x = A◦ x. 
Then (5.37) Follows from Corollary 5.4.3 and (5.36). Proposition 5.4.5 Let A : Rn → Rn be linear and monotone, and h(x∗ ) := P (FA , FA∗ ⊺)(0, x∗ ) (∀x∗ ∈ Rn ). Then ∂h = (A+ )−1 . Proof. By Theorem 5.3.1, h(x∗ ) = ιran A+ (x∗ ) + q(A+ )† (x∗ ),  ∀x∗ ∈ Rn .  By Fact 2.2.2, Fact 2.2.4, Fact 2.1.13 and Corollary 2.2.12, (A+ )† is linear, symmetric and monotone. Thus by Fact 2.1.30 and Fact 2.1.18,  ∗  ∂h(x ) =     ∂ι  ∗ ran A+ (x )    ∅,  + (A+ )† x∗ , if x∗ ∈ ran A+ ; otherwise.  82  Chapter 5. Auto-conjugates Now suppose x∗ ∈ ran A+ . By Fact 2.1.32 and Corollary 2.2.7, ∂h(x∗ ) = ker A+ + (A+ )† x∗ = (A+ )−1 x∗ . Next suppose x∗ ∈ / ran A+ . Clearly, (A+ )−1 x∗ = ∅ = ∂h(x∗ ). In all cases, ∂h = (A+ )−1 . Remark 5.4.6 In general, let us consider g(x, x∗ ) := f (x)+f ∗ (x∗ ) (∀(x, x∗ ) ∈ Rn × Rn ), where f : Rn → Rn is proper lower semicontinuous and convex. Let h(x∗ ) = g(0, x∗ ) = f (0) + f ∗ (x∗ ) (∀x∗ ∈ Rn ). Thus by [26, Proposition 11.3], ∂h(x∗ ) = ∂f ∗ (x∗ ) = (∂f )−1 (x∗ ),  5.5  ∀x∗ ∈ Rn .  Relationship among auto-conjugate representations  Let A : Rn → Rn be linear and monotone. Suppose f (x, x∗ ) = qA (x) + ∗ (−A x + x∗ ) (∀(x, x∗ ) ∈ Rn × Rn ). By Proposition 5.1.14 and Fact 2.1.18, qA ◦  f is an auto-conjugate representation for A+ +A◦ = A. By Corollary 5.3.11, P (FA , FA∗⊺) is also an auto-conjugate representation for A. The next Proposition does that. Proposition 5.5.1 Let B : Rn → Rn be linear, symmetric and monotone, and A : Rn → Rn be linear and antisymmetric. Let f (x, x∗ ) = qB (x) +  83  Chapter 5. Auto-conjugates ∗ (−Ax + x∗ ) (∀(x, x∗ ) ∈ Rn × Rn ). Then qB  ∗⊺ ). f = P (F(A+B) , F(A+B)  Proof. Let (x, x∗ ) ∈ Rn × Rn . By Fact 5.2.3, f (x, x∗ ) = qB (x) + ιran B (−Ax + x∗ ) + qB † (−Ax + x∗ ).  (5.38)  By Theorem 5.3.1, we have ∗⊺ P (F(A+B) , F(A+B) )(x, x∗ )  = ιran B x∗ − (A + B)x + x, x∗ + qB † x∗ − (A + B)x = ιran B (x∗ − Ax) + x, x∗ + qB † (x∗ − Ax − Bx) ∗⊺ / ran B, P (F(A+B) , F(A+B) )(x, x∗ ) = ∞. If x∗ − Ax ∈  Now suppose x∗ − Ax ∈ ran B. By Proposition 2.2.15 applied to B with x replaced by x∗ − Ax and y replaced by −x, qB † (x∗ − Ax − Bx) = Pran B (x∗ − Ax), −x + qB † (−Ax + x∗ ) + qB (−x) = x∗ − Ax, −x + qB † (−Ax + x∗ ) + qB (−x) = − x, x∗ + qB † (−Ax + x∗ ) + qB (x)  (by Ax, −x = 0).  84  Chapter 5. Auto-conjugates Hence ∗⊺ )(x, x∗ ) = qB (x) + ιran B (−Ax + x∗ ) + qB † (−Ax + x∗ ) P (F(A+B) , F(A+B)  = f (x, x∗ ) (by (5.38)),  ∀(x, x∗ ) ∈ Rn × Rn .  Proposition 5.5.2 Let A, B : Rn → Rn be linear and monotone. Then ∗⊺ ) = P (FA , FA∗ ⊺) P (F(A+B) , F(A+B)  ∗⊺ 2 P (FB , FB ).  Proof. Let (x, x∗ ) ∈ Rn × Rn . By Theorem 5.3.1, P (FA , FA∗ ⊺)  ∗⊺ ∗ 2 P (FB , FB )(x, x )  = inf P (FA , FA∗ ⊺)(x, x∗ − y ∗ ) + P (FB , FB∗ ⊺)(x, y ∗ ) ∗ y  = inf ιran A+ (x∗ − y ∗ − Ax) + x, x∗ − y ∗ + q(A+ )† (x∗ − y ∗ − Ax) ∗ y  + x, y ∗ + ιran B+ (y ∗ − Bx) + q(B+ )† (y ∗ − Bx) ιran A+ (x∗ − y ∗ − Ax) + q(A+ )† (x∗ − y ∗ − Ax) = x, x∗ + inf ∗ y  + ιran B+ (y ∗ − Bx) + q(B+ )† (y ∗ − Bx) ≤ x, x∗ + ιran(A+ +B+ ) (x∗ − Ax − Bx)  (5.39)  + inf ιran A+ (x∗ − y ∗ − Ax) + q(A+ )† (x∗ − y ∗ − Ax) ∗ y  + ιran B+ (y ∗ − Bx) + q(B+ )† (y ∗ − Bx) .  85  Chapter 5. Auto-conjugates Now suppose x∗ −Ax−Bx ∈ ran(A+ +B+ ). Let x∗ −Ax−Bx = (A+ +B+ )p and y0∗ := B+ p + Bx. Thus x∗ − y0∗ − Ax = x∗ − B+ p − Bx − Ax = (A+ + B+ )p − B+ p = A+ p, y0∗ − Bx = B+ p. 
Then by (5.39), P (FA , FA∗ ⊺)  ∗⊺ ∗ 2 P (FB , FB )(x, x )  ≤ x, x∗ + ιran A+ (x∗ − y0∗ − Ax) + q(A+ )† (x∗ − y0∗ − Ax) + ιran B+ (y0∗ − Bx) + q(B+ )† (y0∗ − Bx) = x, x∗ + q(A+ )† (A+ p) + q(B+ )† (B+ p) = x, x∗ + qA+ (p) + qB+ (p)  (5.40)  = x, x∗ + q(A+ +B+ ) (p) = x, x∗ + q(A+ +B+ )† (x∗ − Ax − Bx),  (5.41)  in which, (5.40) and (5.41) hold by Corollary 2.2.16. Combining the results above, P (FA , FA∗ ⊺)  ∗⊺ ∗ 2 P (FB , FB )(x, x )  ≤ ιran(A+ +B+ ) (x∗ − Ax − Bx) + x, x∗ + q(A+ +B+ )† (x∗ − Ax − Bx) ∗⊺ = P (F(A+B) , F(A+B) )(x, x∗ ) (by Theorem 5.3.1),  ∀(x, x∗ ) ∈ Rn × Rn .  By Proposition 5.3.2, π1 [dom P (FA , FA∗ ⊺)] = π1 [dom P (FB , FB∗ ⊺)] = Rn . Then by Proposition 5.3.6 and Proposition 5.1.19, P (FA , FA∗ ⊺)  ∗⊺ 2 P (FB , FB )  86  Chapter 5. Auto-conjugates is auto-conjugate. Thus by Proposition 5.3.6 and Proposition 5.1.8, P (FA , FA∗ ⊺)  ∗⊺ 2 P (FB , FB )  ∗⊺ = P (F(A+B) , F(A+B) ).  Lemma 5.5.3 Let A : Rn → Rn be linear, symmetric and monotone. Then P (FA , FA∗ ⊺)(x, Ay) = P (FA , FA∗ ⊺)(y, Ax),  ∀(x, y) ∈ Rn × Rn .  Proof. Let (x, y) ∈ Rn × Rn . By Corollary 5.3.3 and Corollary 2.2.16, P (FA , FA∗ ⊺)(x, Ay) = ιran A (Ay) + qA (x) + qA† (Ay) = qA (x) + qA (y). On the other hand, P (FA , FA∗ ⊺)(y, Ax) = ιran A (Ax) + qA (y) + qA† (Ax) = qA (x) + qA (y). Thus P (FA , FA∗ ⊺)(x, Ay) = P (FA , FA∗ ⊺)(y, Ax).  Proposition 5.5.4 Let A : Rn → Rn be linear, symmetric and monotone. Then f = P (FA , FA∗ ⊺), if and only if, f is auto-conjugate satisfying f (x, Ay) = f (y, Ax)  (∀(x, y) ∈ Rn × Rn ) and f (0, 0) is finite.  Proof. “⇒” By Proposition 5.3.6, Lemma 5.5.3 and Corollary 5.3.5. “⇐” Let (x, x∗ ) ∈ Rn × Rn . We prove in two steps. Step 1: We will verify that f (x, x∗ ) = ∞, if x∗ ∈ / ran A. Since Rn = ran A ⊕ ker A, x∗ = Pran A x∗ + Pker A x∗ . Since x∗ ∈ / ran A, 87  Chapter 5. Auto-conjugates Pker A x∗ = 0. Thus Pker A x∗ , x∗ = Pker A x∗ , Pran A x∗ + Pker A x∗ = Pker A x∗  2  > 0.  (5.42)  Thus by assumptions, f (kPker A x∗ , 0) = f (kPker A x∗ , A0) = f (0, AkPker A x∗ ) = f (0, 0),  ∀k ∈ R. (5.43)  Then by Fact 5.1.10, f (x, x∗ ) + f (0, 0) = f (x, x∗ ) + f (kPker A x∗ , 0) = f (x, x∗ ) + f ∗ (0, kPker A x∗ ) ≥ x∗ , kPker A x∗ → ∞, as k → ∞. (by (5.42)) (5.44) Since f (0, 0) is finite, by (5.44) f (x, x∗ ) = ∞. Step 2: Suppose that x∗ ∈ ran A. Let x∗ = Ap. By Fact 5.1.10, f (x, Ap) + f (p, Ax) = f (x, Ap) + f ∗ (Ax, p) ≥ p, Ap + x, Ax ⇒  2f (x, Ap) ≥ p, Ap + x, Ax  (by f (x, Ap) = f (p, Ax))  ⇒  f (x, x∗ ) ≥ qA (x) + qA (p) = qA (x) + qA† (x∗ ),  (5.45)  in which, (5.45) by x∗ = Ap and Corollary 2.2.16. By conclusions above and Corollary 5.3.3, we have f (x, x∗ ) ≥ ιran A (x∗ ) + qA (x) + qA† (x∗ ) = P (FA , FA∗ ⊺)(x, x∗ ),  ∀(x, x∗ ).  88  Chapter 5. Auto-conjugates Then by Corollary 5.3.11 and Proposition 5.1.8 , we have f = P (FA , FA∗ ⊺).  5.6  Nonuniqueness  We now tackle the following question: Given a linear and monotone operator, are auto-conjugate representations for A unique? The answer is negative. We will give several different auto-conjugate representations for Id. By Corollary 5.3.3, we have ∗⊺ P (FId , FId )=  1 2  ·  2  ⊕  1 2  ·  2  .  Proposition 5.6.1 Let j(x) = 12 x2 , ∀x ∈ R. Assume g is such that g∗ (−x) = g(x) ≥ 0, ∀x ∈ R. Then  f (x, y) := j  x+y √ 2  +g  x−y √ 2  ∀(x, y) ∈ Rn × Rn  is an auto-conjugate representation for Id .  89  Chapter 5. Auto-conjugates Proof. We first show that f is auto-conjugate. Let (x, y) ∈ R × R. 
Then we have f ∗ (y, x) = sup (v,w)  = sup (v,w)  = sup (v,w)  = sup (s,t)  = sup s  √ ) − g( v−w √ ) v, y + w, x − j( v+w 2 2 v+w 2 ,x  +y −  v+w √ , x+y √ 2 2  v−w 2 ,x  √ ) − g( v−w √ ) − y − j( v+w 2 2  v−w √ , x−y √ 2 2  −  √ ) − g( v−w √ ) − j( v+w 2 2  √ √ s, x+y − t, x−y − j(s) − g(t) 2 2 √ s, x+y − j(s) + sup 2 t  (5.46)  (5.47)  √ − t, x−y − g(t) 2  √ ) + g ∗ (− x−y √ ) = j ∗ ( x+y 2 2 √ ) + g( x−y √ ) = j( x+y 2 2  since j ∗ = j by [7, Proposition 3.3(i)]  = f (x, y).  90  Chapter 5. Auto-conjugates Hence f is auto-conjugate. Note that (5.46) holds since v+w 2 ,x  v−w 2 ,x  +y −  −y  =  1 2  v + w, x + y − v − w, x − y  =  1 2  v, x + v, y + w, x + w, y − v, x + v, y  + w, x − w, y =  1 2  2 v, y + 2 w, x  = v, y + w, x .  In the following we show that (5.47) holds. Clearly, for every (v, w) there exists (s, t) such that v+w √ , x+y √ 2 2  −  v−w √ , x−y √ 2 2  √ ) − g( v−w √ ) − j( v+w 2 2  √ √ = s, x+y − t, x−y − j(s) − g(t). 2 2  On the other hand, for every (s, t), there exists v0 =  √  2 2 (s+t), w0  =  √ 2 2 (s−t)  such that v0√ +w0 x+y , √2 2  −  v0√ −w0 x−y , √2 2  +w0 −w0 − j( v0√ ) − g( v0√ ) 2 2  √ √ = s, x+y − t, x−y − j(s) − g(t). 2 2  Hence (5.47) holds. We now show that f is a representation for Id. First we show that g(0) = 0.  91  Chapter 5. Auto-conjugates By assumptions, g(0) ≥ 0. On the other hand, g(0) = g∗ (−0) = g∗ (0) = sup{−g(v)} ≤ 0. v  Hence g(0) = 0. Then we have  (x, y) ∈ G(f ) ⇔ f (x, y) = x, y ⇔  1 4  x+y  2  √ ) = x, y + g( x−y 2  ⇔  1 4  x−y  2  √ )=0 + g( x−y 2  ⇔  1 4  x−y  2  √ )=0 = 0, g( x−y 2  (by g ≥ 0)  by g(0) = 0 ⇔ x = y ⇔ (x, y) ∈ gra Id .  Hence f is an auto-conjugate representation for Id. ∗ ⊺). Remark 5.6.2 If we set g = j in Proposition 5.6.1, f = P (FId , FId  Now we give three examples based on Proposition 5.6.1. Example 5.6.3 The function  g := ιR+ satisfies the conditions of Proposition 5.6.1. Figure 5.1 is corresponding to f. 92  Chapter 5. Auto-conjugates  4  2  0  −2  −2 −1 0  −4 2  y  1  1 x  0  −1  −2  2  Figure 5.1: The function f (blue) and z = xy (gold), and their intersection line (cyan), gra Id (red).  Proof. Let x ∈ R. We consider two cases. Case 1: x ≥ 0. We have g∗ (−x) = sup v  v, −x − ιR+ (v) = sup  v, −x  = 0 = g(x).  v, −x − ιR+ (v) = sup  v, −x  = ∞ = g(x).  v≥0  Case 2: x < 0. Then g∗ (−x) = sup v  v≥0  Hence g∗ (−x) = g(x).  93  Chapter 5. Auto-conjugates  7.5  5.0  2.5 2 0.0  1 0  −2.5  2  0  1  −1  x −1 −2  −2  y  Figure 5.2: The function f (blue) and z = xy (gold), and their intersection line (cyan), gra Id (red) .  Example 5.6.4 Set  g(x) :=     x2 ,     1 x2 , 4  if x ≥ 0;  .  if x ≤ 0  Then g satisfies the conditions of Proposition 5.6.1. Figure 5.2 is corresponding to f .  94  Chapter 5. Auto-conjugates Proof. Let x ∈ R. We consider two cases. Case 1: x ≥ 0. We have g∗ (−x) = sup  v, −x − g(v)  = sup  v, −x − g(v)  v  (since g ≥ 0, g(0) = 0)  v≤0  = sup v≤0  v, −x − 14 v 2 }  = sup h0 (v) , v≤0  where h0 (v) := v, −x − 14 v 2 . Let 0 = ∇h0 (v) = −x − 12 v. Then v0 = −2x ≤ 0 is a critical point of h0 . Since h0 is concave on R− , its critical point is its maximizer. Then g∗ (−x) = h0 (v0 ) = −2x, −x − x2 = x2 = g(x).  Case 2: x < 0. We have g∗ (−x) = sup  v, −x − g(v)  = sup  v, −x − g(v)  v  (since g ≥ 0, g(0) = 0)  v≥0  = sup  v, −x − v 2  v≥0  = sup h1 (v) , v≥0  95  Chapter 5. Auto-conjugates where h1 (v) := v, −x − v 2 Let 0 = ∇h1 (v) = −x − 2v. Then v1 = − 12 x ≥ 0 is a critical point of h1 . Since h1 is concave on R+ , its critical point is its maximizer. 
Then g∗ (−x) = h1 (v1 ) = − 12 x, −x − 14 x2 = 14 x2 = g(x). Hence g∗ (−x) = g(x). Example 5.6.5 Set p > 1,  1 p  +  g(x) :=  1 q  = 1.      1 xp , p     1 (−x)q , q  if x ≥ 0; if x ≤ 0.  satisfies the conditions of Proposition 5.6.1. Figure 5.3 is corresponding to f. Proof. Let x ∈ R. We consider two cases. Case 1: x ≥ 0. We have g∗ (−x) = sup  v, −x − g(v)  = sup  v, −x − g(v)  v  (since g ≥ 0, g(0) = 0)  v≤0  = sup v≤0  v, −x − 1q (−v)q  = sup g0 (v) , v≤0  96  Chapter 5. Auto-conjugates  15  10  5  0  2  1  0  −1  −2  x  Figure 5.3: The function f (blue) and z = xy (gold) , and their intersection line (cyan), gra Id (red), where p = 4.  where g0 (v) := v, −x − 1q (−v)q . Then let 0 = ∇g0 (v) = −x + (−v)q−1 . 1  Thus v0 := −x q−1 ≤ 0 is a critical point of g0 . Since ∇2 g0 (v) = −(q − 1)(−v)q−2 ≤ 0 (∀v < 0), by the continuity of g0 , g0  97  Chapter 5. Auto-conjugates is concave on R− . Then its critical point is its maximizer. Thus g∗ (−x) = g0 (v0 ) q  1  = −x q−1 , −x − 1q x q−1 q  = (1 − 1q )x q−1 = 1p xp  (by  1 p  +  1 q  = 1)  = g(x).  Case 2: x < 0. We have g∗ (−x) = sup  v, −x − g(v)  = sup  v, −x − g(v)  v  (since g ≥ 0, g(0) = 0)  v≥0  = sup v≥0  v, −x − 1p v p  = sup g1 (v) , v≥0  where g1 (v) := v, −x − 1p v p . Then let 0 = ∇g1 (v) = −x − v p−1 . 1  Thus v1 := (−x) p−1 > 0 is a critical point of g1 . Since ∇2 g1 (v) = −(p − 1)v p−2 ≤ 0 (∀v > 0), by the continuity of g1 , g1 is  98  Chapter 5. Auto-conjugates concave on R+ . Then its critical point is its maximizer. Thus g∗ (−x) = g1 (v1 ) p  1  = (−x) p−1 , −x − 1p (−x) p−1 p  = (1 − 1p )(−x) p−1 = 1q (−x)q = g(x).  Hence g∗ (−x) = g(x). Remark 5.6.6 Example 5.6.3, 5.6.4 and 5.6.5 each provide a function f ∗ ⊺). that is an auto-conjugate representation for Id with f = P (FId , FId  99  Chapter 6  Calculation of the auto-conjugates of ∂(− ln) Throughout this chapter, − ln is meant in the extended real valued sense, i.e., − ln(x) = ∞ for x ≤ 0. In Chapter 5, Proposition 5.3.12 shows that hFA = P (FA , FA∗ ⊺) for a linear and monotone operator A. Now we will show that for the nonlinear mono∗⊺ tone operator ∂(− ln) we have P (F∂(− ln) , F∂(− ln) ) = hF∂(− ln) . Throughout  the chapter, we denote C : = (x, x∗ ) ∈ R × R | x ≤ − x1∗ < 0 1 D : = (x, x∗ ) ∈ R × R | x∗ ≤ − 2x <0 1 E : = (x, x∗ ) ∈ R × R | x∗ ≤ − 4x <0 .  100  Chapter 6. Calculation of the auto-conjugates of ∂(− ln)  6.1  Fitzpatrick function for ∂(− ln)  Fact 6.1.1 Let f = − ln . Then  F∂f (x, x∗ ) =   1 1   1 − 2x 2 (−x∗ ) 2 ,   ∞,  if x ≥ 0, x∗ ≤ 0; otherwise.  Proof. See [8, Example 3.4]. Fact 6.1.2 Let f = − ln . Then ∗ (x∗ , x) = −1 + ιC (x∗ , x). F∂f  Proof. See [8, Example 3.4]. Fact 6.1.3 (Rockafellar) Let f = − ln .  f ∗ (x∗ ) =     −1 − ln(−x∗ ),   ∞,  if x∗ < 0; otherwise,  Proof. See [8, Example 3.4]. Remark 6.1.4 Let f = − ln . Recall (f ⊕ f ∗ )(x, x∗ ) := f (x) + f ∗ (x∗ ),  ∀(x, x∗ ) ∈ R × R.  By Fact 6.1.3, dom(f ⊕ f ∗ ) = R++ × R−− . 101  Chapter 6. Calculation of the auto-conjugates of ∂(− ln) Proposition 6.1.5  D  R++ × R−− .  E  Proof. We first verify that D  E.  1 1 Let (x, x∗ ) ∈ D. Thus x > 0. Then − 2x ≤ − 4x . By (x, x∗ ) ∈ D,  1 1 x∗ ≤ − 2x ≤ − 4x < 0.  Thus (x, x∗ ) ∈ E. Then D ⊂ E. / D. Thus D = E. On the other hand, (1, − 14 ) ∈ E, but (1, − 14 ) ∈ Hence D  E.  It is clear we have E  R++ × R−− .  Thus combining the results above, D  6.2  E  R++ × R−− .  Proximal average of ∂(− ln) and hF∂f  Proposition 6.2.1 Let f = − ln . Then ∗⊺ dom P (F∂f , F∂f ) = E.  Proof. 
Let By [7, Theorem 4.6], ∗⊺ dom P (F∂f , F∂f )=  1 2  ∗⊺ dom F∂f + 12 dom F∂f .  (6.1)  102  Chapter 6. Calculation of the auto-conjugates of ∂(− ln) In the following we will show that ∗⊺ )= dom P (F∂f , F∂f  1 2  ∗⊺ dom F∂f .  (6.2)  By Fact 6.1.1, (0, 0) ∈ dom F∂f , then by (6.1), we have 1 2  ∗⊺ ∗⊺ dom F∂f ⊂ dom P (F∂f , F∂f ).  (6.3)  ∗⊺ dom F∂f = E.  (6.4)  Next we show that 1 2  Indeed, (x, x∗ ) ∈  1 2  ∗⊺ dom F∂f  ∗ 1 ⇔ (2x∗ , 2x) ∈ dom F∂f ⇔ 2x∗ ≤ − 2x < 0 (by Fact 6.1.2) 1 ⇔ x∗ ≤ − 4x < 0 ⇔ (x, x∗ ) ∈ E.  Hence (6.4) holds. Then by (6.4) and (6.3), ∗⊺ ). E ⊂ dom P (F∂f , F∂f  (6.5)  In the following, we will verify that 1 2  dom F∂f + E ⊂ E.  (6.6)  103  Chapter 6. Calculation of the auto-conjugates of ∂(− ln) Let (y, y ∗ ) ∈  1 2  dom F∂f and (x, x∗ ) ∈ E. By Fact 6.1.1 we have y ≥ 0, y ∗ ≤ 0, x > 0, x∗ < 0, 4xx∗ ≤ −1.  Thus x + y ≥ x > 0, x∗ + y ∗ ≤ x∗ < 0. Then we have 4(x + y)(x∗ + y ∗ ) ≤ 4xx∗ ≤ −1, i.e., (x, x∗ ) + (y, y ∗ ) ∈ E. Hence (6.6) holds. Thus by (6.6), (6.4) ∗⊺ ∗⊺ and (6.1), dom P (F∂f , F∂f ) ⊂ E. Then by (6.5), dom P (F∂f , F∂f ) = E.  Lemma 6.2.2 Let x, x∗ , y ∗ ∈ R with y ∗ ≤ 0.Then ιC (2x∗ − 2y ∗ , x) = ιC (2x∗ − 2y ∗ , x) + ιD (x, x∗ ).  Proof. We consider two cases. Case 1: (2x∗ − 2y ∗ , x) ∈ / C. Clear. Case 2: (2x∗ − 2y ∗ , x) ∈ C. By assumptions, 1 2x∗ ≤ 2x∗ − 2y ∗ ≤ − x1 < 0 ⇒ x∗ ≤ − 2x <0  (by y ∗ ≤ 0).  Thus (x, x∗ ) ∈ D. Then ιD (x, x∗ ) = 0. Remark 6.2.3 Let x, x∗ ∈ R. Then ιR+ (x) + ιD (x, x∗ ) = ιD (x, x∗ ). Proof. Follows directly from the definition of D .  104  Chapter 6. Calculation of the auto-conjugates of ∂(− ln)  -1  -3  -5  -7 5.0  2.5  0.0  -2.5  5.0 -5.0  2.5  x  0.0  -5.0 -2.5  y  Figure 6.1: The function hF∂f .  Proposition 6.2.4 Let f = − ln . Then 1  hF∂f (x, x∗ ) = −(−1 − 2xx∗ ) 2 + ιD (x, x∗ ),  ∀(x, x∗ ) ∈ R × R.  Consequently, dom hF∂f = D. Figure 6.1 illustrates hF∂f . Proof. Let (x, x∗ ) ∈ R × R. By Fact 6.1.1 and Fact 6.1.2, we have hF∂f (x, x∗ ) = inf ∗ y  ∗ ∗ 1 1 ∗ 2 F∂f (x, 2y ) + 2 F∂f (2x 1  1  1  1  − 2y ∗ , x)  = inf ∗  1 2  ∗ − |x| 2 (−2y ∗ ) 2 + ιR+ (x) + 12 F∂f (2x∗ − 2y ∗ , x)  = inf ∗  1 2  − |x| 2 (−2y ∗ ) 2 + ιR+ (x) + ιC (2x∗ − 2y ∗ , x) −  = inf ∗  − |x| 2 (−2y ∗ ) 2 + ιR+ (x) + ιC (2x∗ − 2y ∗ , x) + ιD (x, x∗ )  = inf ∗  − |x| 2 (−2y ∗ ) 2 + ιC (2x∗ − 2y ∗ , x) + ιD (x, x∗ ) ,  y ≤0 y ≤0 y ≤0 y ≤0  1  1  1  1  1 2  (6.7) (6.8) 105  Chapter 6. Calculation of the auto-conjugates of ∂(− ln) where (6.7) holds by Lemma 6.2.2, (6.8) by Remark 6.2.3. Now we consider two cases. Case 1: (x, x∗ ) ∈ / D. Thus hF∂f (x, x∗ ) = ∞. Case 2: (x, x∗ ) ∈ D. Thus x > 0. Then hF∂f (x, x∗ ) = inf ∗  y ≤0  =  1  1  − x 2 (−2y ∗ ) 2 + ιC (2x∗ − 2y ∗ , x) 1  inf  1 (2x∗ −2y ∗ ≤− x <0, y ∗ ≤0) 1  = −(2x) 2 1  (−y ∗ ) 2 1  sup 1 (0≤−2y ∗ ≤−2x∗ − x )  1  (6.9) 1  sup 1 (2x∗ −2y ∗ ≤− x <0, y ∗ ≤0)  = −(2x) 2  1  − x 2 (−2y ∗ ) 2  (−y ∗ ) 2  1  1 = −(2x) 2 (− 2x − x∗ ) 2 1  = −(−1 − 2xx∗ ) 2  (6.10)  (by x > 0),  1 − x∗ where (6.9) holds by letting 2x∗ − 2y ∗ ∈ C. (6.10) holds by 0 ≤ − 2x  since (x, x∗ ) ∈ D. Thus combining the results above, 1  hF∂f (x, x∗ ) = −(−1 − 2xx∗ ) 2 + ιD (x, x∗ ),  ∀(x, x∗ ) ∈ R × R.  ∗⊺ Corollary 6.2.5 Let f = − ln . Then P (F∂f , F∂f ), f ⊕f ∗ and hF∂f are three  different functions.  106  Chapter 6. Calculation of the auto-conjugates of ∂(− ln) Proof. By Remark 6.1.4, we have dom(f ⊕f ∗ ) = R++ ×R−− . Then by Propo∗⊺ ) = E and dom hF∂f = D. sition 6.2.1 and Proposition 6.2.4, dom P (F∂f , F∂f  By Proposition 6.1.5,  dom hF∂f  ∗⊺ dom P (F∂f , F∂f )  dom(f ⊕ f ∗ ).  
∗⊺ Hence P (F∂f , F∂f ), f ⊕ f ∗ and hF∂f are all different. ∗⊺ Remark 6.2.6 We don’t have an explicit formula for P (F∂(− ln) , F∂(− ln) ).  107  Chapter 7  Proximal averages of monotone operators with linear graphs We have given some auto-conjugate representation results for linear and monotone operators. Now we extend them to monotone operators with linear graphs. Background worked on linear relations can be found in the book by Cross [16].  7.1  Adjoint process of operators with linear graphs  Definition 7.1.1 Let C be a nonempty cone of Rn . The polar of C, C − , is defined by C − := x∗ | c, x∗ ≤ 0, ∀c ∈ C . Remark 7.1.2 If C is a linear subspace of Rn , then by Lemma 2.1.28, C − = C ⊥.  108  Chapter 7. Proximal averages of monotone operators with linear graphs Definition 7.1.3 Let A : Rn ⇉ Rn be such that gra A is a convex cone. The adjoint process of A, A∗ , is defined by gra A∗ := (x, x∗ ) | (x∗ , −x) ∈ (gra A)− . Lemma 7.1.4 [16, Proposition III.1.3] Let A : Rn ⇉ Rn be such that gra A is linear. Suppose k ∈ R with k = 0. Then (kA)∗ = kA∗ . Proof. By Remark 7.1.2, (x, x∗ ) ∈ gra(kA)∗ ⇔ (x∗ , −x) ∈ (gra kA)− = (gra kA)⊥ ⇔ (x∗ , −x), (v, v ∗ ) = 0, ∀(v, v ∗ ) ∈ gra(kA) ⇔  1 k  (x∗ , −x), (v, v ∗ ) = 0, ∀(v, v ∗ ) ∈ gra(kA)  ⇔ ( k1 x∗ , −x), (v, k1 v ∗ ) = 0, ∀(v, v ∗ ) ∈ gra(kA) ⇔ ( k1 x∗ , −x), (v, w∗ ) = 0, ∀(v, w∗ ) ∈ gra A ⇔ (x, k1 x∗ ) ∈ gra A∗ ⇔ x∗ ∈ kA∗ x. Hence (kA)∗ = kA∗ . Remark 7.1.5 Let A : Rn ⇉ Rn be such that gra A is linear. Then gra A∗ is a linear graph. Remark 7.1.6 Let A : Rn ⇉ Rn be such that gra A is linear. Then A∗ 0 = (dom A)⊥ . Proof. See [16, Proposition III.1.4 (b)].  109  Chapter 7. Proximal averages of monotone operators with linear graphs Definition 7.1.7 Let A : Rn ⇉ Rn be such that gra A is linear. We say that A is symmetric if A∗ = A. Definition 7.1.8 Let A : Rn ⇉ Rn be such that gra A is linear. We say that A is antisymmetric if A∗ = −A. Fact 7.1.9 Let A, B : Rn ⇉ Rn be such that gra A and gra B are linear. Then (A + B)∗ = A∗ + B ∗ . Proof. See [11, Theorem 7.4]. Fact 7.1.10 Let A : Rn ⇉ Rn be such that gra A is a closed convex cone. Then gra A∗∗ = − gra A. Proof. See [13, Exercises 7 page 119]. Corollary 7.1.11 Let A : Rn ⇉ Rn be such that gra A is linear. Then A∗∗ = A. Proof. Since gra A is a linear subspace, − gra A = gra A. Thus by Fact 7.1.10, gra A∗∗ = gra A. Hence A∗∗ = A. Corollary 7.1.12 Let A : Rn ⇉ Rn be such that gra A is a linear subspace. Then dom A∗ = (A0)⊥ . Proof. By Remark 7.1.5 and Remark 7.1.6, we have (A∗ )∗ 0 = (dom A∗ )⊥ . Then by Corollary 7.1.11, A0 = (dom A∗ )⊥ . Thus dom A∗ = (A0)⊥ . Remark 7.1.13 Let A : Rn ⇉ Rn be such that gra A is linear. By Fact 7.1.9, Remark 7.1.5, Corollary 7.1.11 and Lemma 7.1.4, A−A∗ 2  A+A∗ 2  is symmetric and  is antisymmetric. 110  Chapter 7. Proximal averages of monotone operators with linear graphs Definition 7.1.14 (Symmetric and antisymmetric part) Let A : Rn ⇉ Rn be such that gra A is linear. Then A+ =  1 2A  + 12 A∗ is the symmetric  part of A, and A◦ = 12 A − 12 A∗ is the antisymmetric part of A. Remark 7.1.15 Let A : Rn ⇉ Rn be such that gra A is a linear subspace. Then by Corollary 7.1.12, dom A+ = dom A◦ = dom A ∩ (A0)⊥ . Corollary 7.1.16 Let A : Rn ⇉ Rn be such that gra A is a linear subspace. Then A can be decomposed into the sum of a symmetric operator with a linear graph and an antisymmetric operator with a linear graph, if and only if, dom A = (A0)⊥ . In that case, A can be decomposed as : A = A+ + A◦ . Proof. 
“⇒” Let B : Rn ⇉ Rn be a symmetric operator with a linear graph and C : Rn ⇉ Rn be an antisymmetric operator with a linear graph such that A = B + C. By Fact 7.1.9, A∗ = B ∗ + C ∗ = B − C. Then dom A∗ = dom B ∩ dom C = dom A. By Corollary 7.1.12, dom A = (A0)⊥ . “⇐” By Remark 7.1.15, dom A+ = dom A◦ = dom A. By Corollary 7.1.12, dom A∗ = (A0)⊥ = dom A. Thus, by Remark 7.1.5 and Proposition 4.1.3(iii), A+ x + A◦ x = 12 (Ax + A∗ x + Ax − A∗ x) = Ax + A∗ 0 = Ax + (dom A)⊥ = Ax + A0 = Ax  (by Remark 7.1.6)  (by Proposition 4.1.3(iii)),  ∀x ∈ dom A.  Remark 7.1.17 Consider an operator A : Rn ⇉ Rn with gra A = {(0, 0)}. Then we have dom A = {0} = Rn = (A0)⊥ . Clearly, gra A∗ = Rn × Rn . 111  Chapter 7. Proximal averages of monotone operators with linear graphs Thus (A+ + A◦ )0 = A+ 0 + A◦ 0 = Rn + Rn = Rn = A0. Thus A = A+ + A◦ . By Proposition 7.1.16, A can not be decomposed into the sum of a symmetric operator with a linear graph and an antisymmetric operator with a linear graph. Remark 7.1.18 Let S be a linear subspace of Rn . Then S is closed. Corollary 7.1.19 Let A : Rn ⇉ Rn be maximal monotone such that gra A is a linear subspace. Then A = A+ + A◦ . Proof. By Remark 7.1.18, Proposition 4.2.5 and Corollary 7.1.16. Definition 7.1.20 Let C be a nonempty convex subset of Rn and x0 ∈ Rn . The normal cone of C at x0 , NC (x0 ), is defined by  NC (x0 ) :=      x∗ | x∗ , c − x0 ≤ 0, ∀c ∈ C ,   ∅,  if x0 ∈ C; otherwise.  Fact 7.1.21 Let C be a nonempty convex subset of Rn and x0 ∈ C. Then NC (x0 ) = ∂ιC (x0 ). If C is a linear subspace of Rn , then NC (x0 ) = ∂ιC (x0 ) = C ⊥ by Fact 2.1.29. Remark 7.1.22 Let A : Rn → Rn be linear and S be a linear subspace of Rn . Then gra(A + NS ) is a linear subspace of Rn × Rn . Fact 7.1.23 Let B : Rn → Rn be linear and S be a linear subspace of Rn such that A = B + NS . Then A∗ = B ∗ + NS .  112  Chapter 7. Proximal averages of monotone operators with linear graphs Proof. Since gra A is a linear subspace by Remark 7.1.22, then by Remark 7.1.2 and Fact 7.1.21 we have (x, x∗ ) ∈ gra A∗ ⇔ (x∗ , −x) ∈ (gra A)− ⇔ (x∗ , −x) ∈ (gra A)⊥ ⇔ x∗ , y − x, y ∗ = 0,  ∀y ∗ ∈ Ay  ⇔ x∗ , y − x, By + S ⊥ = 0,  ∀y ∈ S.  (7.1)  Let y = 0 in (7.1). We have x, S ⊥ = 0. Thus x ∈ S. Then (x, x∗ ) ∈ gra A∗ ⇔ x ∈ S, x∗ , y − x, By = 0, ⇔ x ∈ S, x∗ − B ∗ x, y = 0,  ∀y ∈ S  ∀y ∈ S  ⇔ x ∈ S, (x∗ − B ∗ x)⊥S ⇔ x ∈ S, x∗ ∈ B ∗ x + S ⊥ ⇔ x∗ ∈ (B ∗ + NS )(x)  (by Fact 7.1.21).  Hence A∗ = B ∗ + NS . Remark 7.1.24 Fact 7.1.23 is a special case of [13, Exercises 14 (f ) page 120]. Remark 7.1.25 Let B : Rn → Rn be linear and S be a linear subspace of Rn . Suppose A = B + NS . Then by Fact 7.1.23, A+ = B+ + NS , A◦ = 113  Chapter 7. Proximal averages of monotone operators with linear graphs B◦ + NS and A = A+ + A◦ . Now we recall the definition of QA . Definition 7.1.26 Let A : Rn ⇉ Rn be such that gra A is a linear subspace of Rn × Rn . We define QA by  QA x =     P  Ax x,    ∅,  if x ∈ dom A; otherwise.  Proposition 7.1.27 Let A : Rn ⇉ Rn be such that gra A is a linear subspace. Then QA is single-valued and linear on dom A, and QA is a selection of A. Proof. Since A0 is a closed subspace by Proposition 4.1.3(i) and Remark 7.1.18, Ax (∀x ∈ dom A) is a closed convex by Proposition 4.1.3(ii). By Fact 4.3.1, QA is single-valued on dom A and QA is a selection of A. Very similar to the proof of Proposition 4.3.6, we have QA is linear on dom A. Corollary 7.1.28 Let A : Rn ⇉ Rn be such that gra A is a linear subspace. 
Then Ax =     QA Pdom A x + A0,   ∅,  if x ∈ dom A; otherwise,  where QA Pdom A is linear.  Proof. By Proposition 7.1.27 and Proposition 4.1.3(ii). Proposition 7.1.29 Let A : Rn ⇉ Rn be such that gra A is a linear subspace. Assume A can be decomposed into the sum of a symmetric operator 114  Chapter 7. Proximal averages of monotone operators with linear graphs with a linear graph and an antisymmetric operator with a linear graph. Then such a decomposition is not unique. Proof. By Corollary 7.1.28, Corollary 7.1.16 and Fact 7.1.21,  A = QA Pdom A + Ndom A .  Thus,  A = (QA Pdom A )+ + (QA Pdom A )◦ +NS = ((QA Pdom A )+ +NS )+(QA Pdom A )◦ . By Fact 7.1.23, (QA Pdom A )+ , (QA Pdom A )+ +NS are symmetric and (QA Pdom A )◦ + NS , (QA Pdom A )◦ are antisymmetric. Since (QA Pdom A )+ = (QA Pdom A )+ + NS and (QA Pdom A )◦ + NS = (QA Pdom A )◦ as S = Rn , the decomposition is not unique. Theorem 7.1.30 Let A, B, C : Rn ⇉ Rn be such that gra A, gra B and gra C are linear subspaces. Assume B is symmetric and C is antisymmetric such that A = B + C and dom B = dom C. Then B = A+ , C = A◦ . Proof. By Fact 7.1.9, A∗ = B −C. Thus by assumptions, dom B = dom C = dom A = dom A∗ . Thus dom A+ = dom A◦ = dom B = dom C = dom A. By Corollary 7.1.12, dom B = dom B ∗ = (B0)⊥ , dom C = dom C ∗ = (C0)⊥ . Thus (B0)⊥ = (C0)⊥ . Since C0, B0 are closed linear subspaces by Proposition 4.1.3(i) and Remark 7.1.18, B0 = C0. Let x ∈ dom A. Then by  115  Chapter 7. Proximal averages of monotone operators with linear graphs Proposition 4.1.3(iii) and Proposition 4.1.3(ii), A+ x = 12 (Bx + Cx + Bx − Cx) = Bx + C0 = Bx + B0 = Bx, A◦ x = 12 (Bx + Cx − Bx + Cx) = B0 + Cx = C0 + Cx = Cx. Hence B = A+ , C = A◦ . Corollary 7.1.31 Let A : Rn ⇉ Rn be maximal monotone such that gra A is a linear subspace of Rn × Rn . Then A = Pdom A QA Pdom A + Ndom A , where Pdom A QA Pdom A is linear and monotone. Proof. Since dom A is a closed linear subspace by Remark 7.1.18, then by Theorem 4.4.1, Pdom A QA Pdom A is linear and monotone, and A = ∂(qA + ιdom A ) + A◦ , where A = Pdom A QA Pdom A . Then by Fact 2.1.18, Fact 2.1.30 and Fact 7.1.21,  A = A+ + ∂ιdom A + A◦ = Pdom A QA Pdom A + Ndom A .  Remark 7.1.32 Let A : Rn ⇉ Rn such that gra A is a linear subspace of Rn . Then gra A−1 is a linear subspace of Rn × Rn .  116  Chapter 7. Proximal averages of monotone operators with linear graphs  7.2  Fitzpatrick functions of monotone operators with linear graphs  Definition 7.2.1 Assume A : Rn ⇉ Rn . The set-valued inverse mapping, A−1 : Rn ⇉ Rn , is defined by x ∈ A−1 y  ⇔  y ∈ Ax.  Definition 7.2.2 Let A : Rn ⇉ Rn and S be a subset of Rn . Then AS is defined by AS := x∗ | x∗ ∈ As,  ∃s ∈ S .  Proposition 7.2.3 Let B : Rn → Rn be linear and S be a linear subspace of Rn such that A = B + NS . Then (i) x ∈ ran A ⇔ x + S ⊥ ⊂ ran A (ii) A−1 x = A−1 (x + S ⊥ ). Proof. (i): By Fact 7.1.21, ran A = ran(B |S ) + S ⊥ . Thus S ⊥ + ran A = ran A. Then x ∈ ran A ⇔ x + S ⊥ ⊂ S ⊥ + ran A = ran A.  Hence (i) holds. (ii): Clearly, A−1 x ⊂ A−1 (x + S ⊥ ). In the following we show A−1 (x + S ⊥ ) ⊂ A−1 x.  117  Chapter 7. Proximal averages of monotone operators with linear graphs By Fact 7.1.21, y ∈ A−1 (x + S ⊥ ) ⇒ y ∈ A−1 (x + t),  ∃t ∈ S ⊥  ⇒ x + t ∈ Ay = By + NS (y) = By + S ⊥ ,  y∈S  ⇒ x ∈ By + S ⊥ = By + NS (y) = Ay ⇒ y ∈ A−1 x. Thus A−1 (x + S ⊥ ) ⊂ A−1 x. Hence A−1 x = A−1 (x + S ⊥ ). Lemma 7.2.4 Let B : Rn → Rn be linear and symmetric, and S be a subspace of Rn . Suppose that x ∈ ran(B + NS ). Then x, (B + NS )−1 x is single-valued. 
Moreover, if y0 ∈ (B + NS )−1 x, then x, (B + NS )−1 x = y0 , By0 . Proof. Let x∗1 , x∗2 ∈ (B + NS )−1 x. Then x∗1 , x∗2 ∈ S and by Fact 7.1.21, x ∈ (B + NS )x∗1 = Bx∗1 + S ⊥ ,  x ∈ (B + NS )x∗2 = Bx∗2 + S ⊥ .  (7.2)  Then we have B(x∗1 − x∗2 ) ∈ S ⊥ .  (7.3)  118  Chapter 7. Proximal averages of monotone operators with linear graphs By (7.2), there exists t ∈ S ⊥ such that x = Bx∗1 + t. Then x, x∗1 − x∗2 = Bx∗1 + t, x∗1 − x∗2 = Bx∗1 , x∗1 − x∗2  (7.4)  = x∗1 , B(x∗1 − x∗2 ) = 0,  (7.5)  in which, (7.4) holds by t ∈ S ⊥ and x∗1 − x∗2 ∈ S, and (7.5) holds by (7.3) and x∗1 ∈ S. Thus x, x∗1 = x, x∗2 . Hence x, (B + NS )−1 x is single-valued. Let y0 ∈ (B + NS )−1 x. Then y0 ∈ S and x ∈ (B + NS )y0 = By0 + S ⊥ by Fact 7.1.21. Let t0 ∈ S ⊥ such that x = By0 + t0 . Since x, (B + NS )−1 x is single-valued, x, (B +NS )−1 x = x, y0 = By0 +t0 , y0 = y0 , By0 (by y0 ∈ S, t0 ∈ S ⊥ ).  Lemma 7.2.5 Let B : Rn → Rn be linear and S be a linear subspace of Rn such that A = B + NS . Suppose (x, x∗ ) ∈ S × Rn . Then ιran A+ (x∗ − Bx) = ιran A+ (x∗ − Ax), i.e., x∗ − Bx ∈ ran A+ ⇔ x∗ − Ax ⊂ ran A+ .  119  Chapter 7. Proximal averages of monotone operators with linear graphs Moreover if x∗ − Bx ∈ ran A+ , then x∗ − Bx, (A+ )−1 (x∗ − Bx) = x∗ − Ax, (A+ )−1 (x∗ − Ax) .  Proof. By Fact 7.1.21, Ax = Bx + S ⊥ .  (7.6)  By Remark 7.1.25 and Proposition 7.2.3(i) applied to A+ , ιran A+ (x∗ − Bx) = ιran A+ x∗ − Bx + S ⊥ = ιran A+ x∗ − Ax  (by (7.6)).  Let x∗ − Bx ∈ ran A+ . By Remark 7.1.25, (A+ )−1 (x∗ − Bx) ⊂ S, then we have x∗ − Bx, (A+ )−1 (x∗ − Bx) = x∗ − Bx + S ⊥ , (A+ )−1 (x∗ − Bx) = x∗ − Bx + S ⊥ , (A+ )−1 x∗ − Bx + S ⊥  (7.7)  = x∗ − Ax, (A+ )−1 x∗ − Ax ,  (7.8)  in which, (7.7) holds by Proposition 7.2.3(ii), (7.8) by (7.6). Remark 7.2.6 Let B : Rn → Rn be linear and S be a linear subspace of Rn such that A = B +NS . Suppose (x, x∗ ) ∈ S ×Rn such that x∗ −Bx ∈ ran A+ .  120  Chapter 7. Proximal averages of monotone operators with linear graphs By Remark 7.1.25, Lemma 7.2.4 and Lemma 7.2.5, we see that x∗ − Ax, (A+ )−1 (x∗ − Ax)  is single-valued. Proposition 7.2.7 Let B : Rn → Rn be linear and S be a linear subspace of Rn such that A = B + NS . Suppose (x, x∗ ) ∈ S × Rn . Then (x∗ − Ax) ⊂ ran A+ or (x∗ − Ax) ∩ ran A+ = ∅. Proof. Suppose that (x∗ − Ax) ∩ ran A+ = ∅. By Fact 7.1.21, there exists t ∈ S ⊥ such that x∗ − Bx + t ∈ ran A+ . Then by Fact 7.1.21, Remark 7.1.25 and Proposition 7.2.3(i), we obtain x∗ − Ax = x∗ − Bx + S ⊥ = x∗ − Bx + t + S ⊥ ⊂ ran A+ . Definition 7.2.8 Let A : Rn ⇉ Rn . We define ΦA : Rn ⇉ Rn by  ΦA (x) =     A−1 x,   {0},  if x ∈ ran A; otherwise.  Remark 7.2.9 Let B : Rn → Rn be linear and S be a linear subspace of Rn such that A = B + NS . Then by Proposition 7.2.7 and Remark 7.2.6, x∗ − Ax, ΦA+ (x∗ − Ax)  (x, x∗ ) ∈ S × Rn  is single-valued. By Lemma 7.2.4 and Remark 7.1.25, x, ΦA+ (x)  (x ∈  Rn ) is single-valued.  121  Chapter 7. Proximal averages of monotone operators with linear graphs Lemma 7.2.10 Let A : Rn → Rn such that gra A is a linear subspace of Rn × Rn . Let k ∈ R with k = 0. Then ΦA (kx) = kΦA (x),  ∀x ∈ Rn .  Proof. Let x ∈ Rn . We consider two cases. Case 1: x ∈ / ran A. Then kx ∈ / ran A. Thus we have kΦA (x) = ΦA (kx) = 0. Case 2: x ∈ ran A. Then kx ∈ ran A. Then by Remark 7.1.32 and Proposition 4.1.3(iii), kΦA (x) = kA−1 x = A−1 (kx) = ΦA (kx). Corollary 7.2.11 Let B : Rn → Rn be linear and S be a linear subspace of Rn such that A = B + NS . Let k ∈ R. 
Then ιS (x) + ιran A+ (x∗ − Bx) + k x∗ − Bx, ΦA+ (x∗ − Bx) = ιS (x) + ιran A+ (x∗ − APS x) + k x∗ − APS x, ΦA+ (x∗ − APS x) , ∀(x, x∗ ) ∈ Rn × Rn .  Proof. Combine Lemma 7.2.5 and Remark 7.1.25. Fact 7.2.12 Let B : Rn ⇉ Rn be linear and monotone, and S be a linear subspace of Rn . Then B + NS is maximal monotone. Proof. See [28, Theorem 41.2]. Fact 7.2.13 Let B : Rn ⇉ Rn be linear, symmetric and monotone, and S be a linear subspace of Rn . Then ran(B + NS ) = ran B + S ⊥ . Proof. Combine Remark 7.1.25, Fact 7.2.12, [4, Corollary 4.9] and [28, 19.0.3 page 70].  122  Chapter 7. Proximal averages of monotone operators with linear graphs Lemma 7.2.14 Let B : Rn → Rn be linear and monotone, and S be a linear subspace of Rn . Then (qB + ιS )∗ (x) = ιran(B+ +NS ) (x) +  1 2  x, Φ(B+ +NS ) (x) ,  ∀x ∈ Rn .  Proof. Let x ∈ Rn . Then (qB + ιS )∗ (x) = sup  y∈Rn  y, x − qB (y) − ιS (y) .  Let g(y) := y, x − qB (y) − ιS (y). A point y is a maximizer of g, if and only if, it is a critical point. Then by Fact 2.1.30, Fact 2.1.18 and Fact 7.1.21,  0 ∈ ∂g(y) = x − B+ y − NS (y) = x − (B+ + NS )(y).  We consider two cases. Case 1: x ∈ ran(B+ +NS ). Let y0 satisfy that x ∈ (B+ +NS )y0 . Then y0 ∈ S and x ∈ B+ y0 + S ⊥ by Fact 7.1.21. Let t ∈ S ⊥ such that x = B+ y0 + t.  123  Chapter 7. Proximal averages of monotone operators with linear graphs Since y0 is a critical point, (qB + ιS )∗ (x) = g(y0 ) = y0 , x −  1 2  y0 , B+ y0  (by Remark 2.1.12)  = y0 , B+ y0 + t −  1 2  y0 , B+ y0  (by x = B+ y0 + t)  = y0 , B+ y0 −  1 2  (by y0 ∈ S and t ∈ S ⊥ )  y0 , B+ y0  =  1 2  y0 , B+ y0  =  1 2  x, (B+ + NS )−1 x  =  1 2  x, Φ(B+ +NS ) (x) .  (by Lemma 7.2.4 applied to B+ )  Case 2: x ∈ / ran(B+ + NS ). By Fact 7.2.13, ran(B+ + NS ) = ran B+ + S ⊥ . Thus by Fact 2.1.32,  ran(B+ + NS )  ⊥  = (ran B+ + S ⊥ )⊥ = (ran B+ )⊥ ∩ (S ⊥ )⊥ = ker B+ ∩ S.  Then we have Rn = ran(B+ + NS ) ⊕ (ker B+ ∩ S) and x = Pran(B+ +NS ) x + Pker B+ ∩S x. Since x ∈ / ran(B+ + NS ), Pker B+ ∩S x = 0. Thus Pker B+ ∩S x, x = Pker B+ ∩S x, Pran(B+ +NS ) x + Pker B+ ∩S x = Pker B+ ∩S x  2  > 0.  (7.9)  Then by Fact 5.1.10, (qB + ιS )∗ (x) = (qB + ιS )∗ (x) + (qB + ιS )(kPker B+ ∩S x) ≥ kPker B+ ∩S x, x → ∞, as k → ∞  (7.10)  by (7.9) ,  124  Chapter 7. Proximal averages of monotone operators with linear graphs where (7.10) holds since (qB + ιS )(kPker B+ ∩S x) = 0 by Remark 2.1.12 and Pker B+ ∩S x ∈ ker B+ ∩ S. Combining the conclusions above, we have (qB + ιS )∗ (x) = ιran(B+ +NS ) (x) +  1 2  x, Φ(B+ +NS ) (x) ,  ∀x ∈ Rn .  Proposition 7.2.15 Let B : Rn → Rn be linear and monotone, and S be a linear subspace of Rn . Then F(B+NS ) (x, x∗ ) = ιS (x) + ιran(B+ +NS ) (B ∗ x + x∗ ) +  1 4  B ∗ x + x∗ , Φ(B+ +NS ) (B ∗ x + x∗ ) ,  ∀(x, x∗ ) ∈ Rn × Rn . Proof. Let (x, x∗ ) ∈ Rn × Rn . By Fact 7.1.21, we have F(B+NS ) (x, x∗ ) =  y ∗ , x + x∗ , y − y, y ∗  sup (y,y ∗ )∈gra(B+NS )  = sup  By + S ⊥ , x + x∗ , y − y, By + S ⊥  y∈S  = ιS (x) + sup  By, x + x∗ , y − y, By  (7.11)  y∈S  = ιS (x) + sup  y∈Rn  = ιS (x) + 2 sup  y∈Rn  y, B ∗ x + x∗ − y, By − ιS (y) y, 12 (B ∗ x + x∗ ) − qB (y) − ιS (y)  (7.12)  125  Chapter 7. Proximal averages of monotone operators with linear graphs where (7.11) holds by y ∈ S. By (7.12), we have F(B+NS ) (x, x∗ ) = ιS (x) + 2(qB + ιS )∗  ∗ 1 2 (B x  + x∗ )  = ιS (x) + ιran(B+ +NS ) (B ∗ x + x∗ ) ∗ 1 2 (B x  +  + x∗ ), Φ(B+ +NS ) ( 12 (B ∗ x + x∗ )  = ιS (x) + ιran(B+ +NS ) (B ∗ x + x∗ ) +  1 4  (7.13)  (7.14)  (B ∗ x + x∗ ), Φ(B+ +NS ) (B ∗ x + x∗ ) .  (7.13) holds by Lemma 7.2.14 and Remark 7.1.22. 
(7.14) holds by Remark 7.1.22 and Lemma 7.2.10. Remark 7.2.16 Let S be a linear subspace of Rn . By Fact 7.2.13, ran NS = S ⊥ . By Proposition 7.2.15, FNS (x, x∗ ) = ιS (x) + ιS ⊥ (x∗ ),  ∀(x, x∗ ) ∈ Rn × Rn .  Corollary 7.2.17 Let A : Rn ⇉ Rn be maximal monotone such that gra A is a linear subspace. Then FA (x, x∗ ) = ιdom A (x) + ιran A+ (A∗ Pdom A x + x∗ ) +  1 4  A∗ Pdom A x + x∗ , ΦA+ (A∗ Pdom A x + x∗ ) , ∀(x, x∗ ) ∈ Rn × Rn .  126  Chapter 7. Proximal averages of monotone operators with linear graphs Proof. By Corollary 7.1.31, there exists a linear and monotone operator B : Rn → Rn such that A = B + Ndom A . By Proposition 7.2.15 and Remark 7.1.25, we have FA (x, x∗ ) = F(B+Ndom A ) (x, x∗ ) = ιdom A (x) + ιran A+ (B ∗ x + x∗ ) +  1 4  B ∗ x + x∗ , ΦA+ (B ∗ x + x∗ )  = ιdom A (x) + ιran A+ (−B ∗ (−x) + x∗ ) +  1 4  − B ∗ (−x) + x∗ , ΦA+ (−B ∗ (−x) + x∗ )  = ιdom A (x) + ιran A+ − A∗ Pdom A (−x) + x∗ +  1 4  − A∗ Pdom A (−x) + x∗ , ΦA+ − A∗ Pdom A (−x) + x∗  = ιdom A (x) + ιran A+ − A∗ (−Pdom A x) + x∗ +  1 4  1 4  (7.16)  − A∗ (−Pdom A x) + x∗ , ΦA+ − A∗ (−Pdom A x) + x∗  = ιdom A (x) + ιran A+ (A∗ Pdom A x + x∗ ) +  (7.15)  (7.17)  A∗ Pdom A x + x∗ , ΦA+ A∗ Pdom A x + x∗ ,  where (7.15) holds by Fact 7.1.23 and Corollary 7.2.11 applied to B ∗ and A∗ . (7.16) holds by Fact 4.3.3, and (7.17) by Remark 7.1.5 and Proposition 4.1.3(iii). Proposition 7.2.18 Let B : Rn → Rn be linear and monotone, and S be a  127  Chapter 7. Proximal averages of monotone operators with linear graphs linear subspace of Rn . Then ∗ (x∗ , x) = ιS (x) + ιS ⊥ (x∗ − Bx) + x, Bx , F(B+N S)  ∀(x, x∗ ) ∈ Rn × Rn . Proof. Let (x, x∗ ) ∈ Rn × Rn . By Proposition 7.2.15, ∗ F(B+N (x∗ , x) S)  = sup (y,y ∗ )  − =  1 4  y, x∗ + y ∗ , x − ιS (y) − ιran(B+ +NS ) (B ∗ y + y ∗ )  B ∗ y + y ∗ , Φ(B+ +NS ) (B ∗ y + y ∗ ) y, x∗ + B+ w − B ∗ y + S ⊥ , x −  sup (y∈S,w∈S)  = ιS (x) +  sup  1 4  B+ w, w  y, x∗ + B+ w − B ∗ y, x −  (y∈S,w∈S)  = ιS (x) +  sup  y, x∗ − Bx + B+ w, x −  (y∈S, w∈S)  = ιS (x) + ιS ⊥ (x∗ − Bx) + sup  w∈S  = ιS (x) + ιS ⊥ (x∗ − Bx) +  1 2  B+ w, x −  sup w∈S  1 4  1 4  1 4  (7.18)  B+ w, w w, B+ w  w, B+ w  w, 2B+ x − qB (w)  (7.19)  = ιS (x) + ιS ⊥ (x∗ − Bx) +  1 2  sup w∈Rn  w, 2B+ x − qB (w) − ιS (w)  = ιS (x) + ιS ⊥ (x∗ − Bx) + 12 (qB + ιS )∗ (2B+ x) = ιS (x) + ιS ⊥ (x∗ − Bx) + ιran(B+ +NS ) (2B+ x) +  1 4  (7.20)  2B+ x, Φ(B+ +NS ) (2B+ x)  = ιS (x) + ιS ⊥ (x∗ − Bx) + x, Bx ,  (7.21)  128  Chapter 7. Proximal averages of monotone operators with linear graphs in which, (7.18) holds by B ∗ y + y ∗ ∈ (B+ + NS )w = B+ w + S ⊥ (w ∈ S) by Fact 7.1.21, and by Lemma 7.2.4. (7.19) holds by Remark 2.1.12. (7.20) holds by Lemma 7.2.14, and (7.21) by Lemma 7.2.4 since 2x ∈ (B+ + NS )−1 (2B+ x) as x ∈ S by Fact 7.1.21. Remark 7.2.19 Let S be a linear subspace of Rn . By Fact 7.2.13, ran NS = S ⊥ . Then by Proposition 7.2.18, FN∗ S (x∗ , x) = ιS (x) + ιS ⊥ (x∗ ),  ∀(x, x∗ ) ∈ Rn × Rn .  Corollary 7.2.20 Let S be a linear subspace of Rn . Then FNS is autoconjugate. Proof. Combine Remark 7.2.16 and Remark 7.2.19. Remark 7.2.21 Remark 7.2.16 and Remark 7.2.19 are special cases of [8, Example 3.1]. Remark 7.2.22 Let B : Rn → Rn be linear and monotone, and S be a linear subspace of Rn such that A = B +NS . Suppose x ∈ S. Then Ax, x = x, Bx . Proof. Apply Ax = Bx + S ⊥ , which follows from Fact 7.1.21. Corollary 7.2.23 Let A : Rn ⇉ Rn be maximal monotone such that gra A is a linear subspace. Then FA∗ (x∗ , x) = ιdom A (x) + ι(dom A)⊥ (x∗ − APdom A x) + x, APdom A x , ∀(x, x∗ ) ∈ Rn × Rn . 129  Chapter 7. 
Proximal averages of monotone operators with linear graphs Proof. Let (x, x∗ ) ∈ Rn × Rn . By Corollary 7.1.31, there exists a linear and monotone operator B : Rn → Rn such that A = B + Ndom A . Then by Proposition 7.2.18, FA∗ (x∗ , x) = ιdom A (x) + ι(dom A)⊥ (x∗ − Bx) + x, Bx .  (7.22)  Suppose x ∈ dom A. Since dom A is a subspace of Rn and x∗ − Bx ∈ (dom A)⊥ ⇔ x∗ − Bx + (dom A)⊥ ⊂ (dom A)⊥ . By Fact 7.1.21, x∗ − Bx + (dom A)⊥ = x∗ − Ax. Thus ι(dom A)⊥ (x∗ − Bx) = ι(dom A)⊥ (x∗ − Ax) = ι(dom A)⊥ (x∗ − APdom A x). (7.23) By Remark 7.2.22,  x, Bx = Ax, x = APdom A x, x .  (7.24)  Thus by (7.22), (7.23) and (7.24), FA∗ (x∗ , x) = ιdom A (x) + ι(dom A)⊥ (x∗ − APdom A x) + APdom A x, x , ∀(x, x∗ ) ∈ Rn × Rn .  130  Chapter 7. Proximal averages of monotone operators with linear graphs  7.3  The third main result  Lemma 7.3.1 Let B : Rn → Rn be linear and monotone, and S be a linear subspace of Rn . Suppose that x ∈ S, x∗ ∈ Rn and y ∗ ∈ Rn . Then ιran(B+ +NS ) (B ∗ x + y ∗ ) + ιS ⊥ (2x∗ − y ∗ − Bx) = ιran(B+ +NS ) (x∗ − Bx) + ιS ⊥ (2x∗ − y ∗ − Bx).  (7.25)  Proof. We consider two cases. / S ⊥ . Clear. Case 1: 2x∗ − y ∗ − Bx ∈ Case 2: 2x∗ − y ∗ − Bx ∈ S ⊥ . Let t ∈ S ⊥ such that y ∗ = 2x∗ − Bx + t. Thus B ∗x + y∗ = B ∗ x + 2x∗ − Bx + t = Bx + B ∗ x + 2x∗ − Bx − Bx + t = 2x∗ − 2Bx + 2B+ x + t.  (7.26)  On the other hand, since t ∈ S ⊥ , Fact 7.1.21 implies  2B+ x + t ∈ (B+ + NS )(2x).  (7.27)  Then by Remark 7.1.22, (7.26) and (7.27), we have B ∗ x + y ∗ ∈ ran(B+ + NS ) ⇔ x∗ − Bx ∈ ran(B+ + NS ).  (7.28)  Thus ιran(B+ +NS ) (B ∗ x + y ∗ ) = ιran(B+ +Ns ) (x∗ − Bx). Hence (7.25) holds.  131  Chapter 7. Proximal averages of monotone operators with linear graphs Corollary 7.3.2 Let B : Rn → Rn be linear and monotone, and S be a linear subspace of Rn . Suppose that x, x∗ , y ∗ ∈ Rn . Then ιS (x) + ιran(B+ +NS ) (B ∗ x + y ∗ ) + ιS ⊥ (2x∗ − y ∗ − Bx) = ιS (x) + ιran(B+ +NS ) (x∗ − Bx) + ιS ⊥ (2x∗ − y ∗ − Bx). Proof. Apply Lemma 7.3.1. Proposition 7.3.3 Let A : Rn ⇉ Rn be maximal monotone such that gra A is a linear subspace. Then hFA (x, x∗ ) = ιdom A (x) + ιran A+ (x∗ − APdom A x) + x, x∗ +  1 2  x∗ − APdom A x, ΦA+ (x∗ − APdom A x) ,  ∀(x, x∗ ) ∈ Rn × Rn .  Proof. By Corollary 7.1.31, there exists a linear and monotone operator B : Rn → Rn such that A = B + NS ,  132  Chapter 7. Proximal averages of monotone operators with linear graphs where S = dom A. Let (x, x∗ ) ∈ Rn × Rn . By Proposition 7.2.15 and Proposition 7.2.18, hFA (x, x∗ ) ∗ 1 2 FA (x, 2y )  = inf ∗ y  + 12 FA∗ 2(x∗ − y ∗ ), x  = inf ιS (x) + ιran(B+ +NS ) (B ∗ x + 2y ∗ ) ∗ y  +  1 8  B ∗ x + 2y ∗ , Φ(B+ +NS ) (B ∗ x + 2y ∗ )  + ιS ⊥ (2x∗ − 2y ∗ − Bx) + = ιS (x) + +  1 8  1 2  1 2  x, Bx  ιran(B+ +NS ) (B ∗ x + 2y ∗ ) x, Bx + inf ∗ y  B ∗ x + 2y ∗ , Φ(B+ +NS ) (B ∗ x + 2y ∗ )  + ιS ⊥ (2x∗ − 2y ∗ − Bx) = ιS (x) + ιran(B+ +NS ) (x∗ − Bx) + + inf ∗ y  1 8  1 2  x, Bx  (7.29)  B ∗ x + 2y ∗ , Φ(B+ +NS ) (B ∗ x + 2y ∗ )  + ιS ⊥ (2x∗ − 2y ∗ − Bx) = ιS (x) + ιran(B+ +NS ) (x∗ − Bx) + + inf  t∈S ⊥  1 8  1 2  x, Bx  (7.30)  B ∗ x + 2x∗ − Bx + t, Φ(B+ +NS ) (B ∗ x + 2x∗ − Bx + t)  ,  in which, (7.29) holds by Corollary 7.3.2, (7.30) by 2y ∗ = 2x∗ − Bx + t, t ∈ S⊥. If x ∈ / S or x∗ − Bx ∈ / ran(B+ + NS ), hFA (x, x∗ ) = ∞ by (7.30). Now suppose x ∈ S = dom A and x∗ − Bx ∈ ran(B+ + NS ). Then there exists y0 ∈ S such that x∗ − Bx ∈ (B+ + NS )y0 . Thus by Fact 7.1.21,  133  Chapter 7. Proximal averages of monotone operators with linear graphs x∗ − Bx ∈ B+ y0 + S ⊥ . Then B+ x, y0 = x, B+ y0 = x, x∗ − Bx = x, x∗ − x, Bx .  
(7.31)  Note that B ∗ x + 2x∗ − Bx + t = Bx + B ∗ x + 2x∗ − Bx − Bx + t = 2x∗ − 2Bx + 2B+ x + t.  (7.32)  By Fact 7.1.21,  2B+ x + t ∈ (B+ + NS )(2x).  (7.33)  Then by Remark 7.1.22, (7.32) and (7.33), B ∗ x + 2x∗ − Bx + t ∈ (B+ + NS )(2y0 + 2x).  (7.34)  134  Chapter 7. Proximal averages of monotone operators with linear graphs Then by (7.34), (7.31), (7.30) and Lemma 7.2.4, hFA (x, x∗ ) =  1 2  x, Bx +  1 8  B+ (2y0 + 2x), 2y0 + 2x  =  1 2  x, Bx +  1 2  B+ (y0 + x), y0 + x  =  1 2  x, Bx +  1 2  B+ y0 , y0 + B+ x, y0 +  =  1 2  x, Bx +  1 2  B+ y0 , y0 + x, x∗ − x, Bx +  =  1 2  B+ y0 , y0 + x, x∗  1 2  B+ x, x 1 2  B+ x, x  (by Remark 2.1.12)  = x, x∗ +  1 2  x∗ − Bx, (B+ + NS )−1 (x∗ − Bx)  = x, x∗ +  1 2  x∗ − Bx, (A+ )−1 (x∗ − Bx)  (by Remark 7.1.25)  = x, x∗ +  1 2  x∗ − Ax, (A+ )−1 (x∗ − Ax)  (by Lemma 7.2.5)  = x, x∗ +  1 2  x∗ − Ax, ΦA+ (x∗ − Ax)  = x, x∗ +  1 2  x∗ − APdom A x, ΦA+ (x∗ − APdom A x) .  (by Lemma 7.2.5) (7.35)  Thus combining (7.30) and (7.35), hFA (x, x∗ ) = ιdom A (x) + ιran A+ (x∗ − Bx) + x, x∗ +  1 2  (by Remark 7.1.25)  x∗ − APdom A x, ΦA+ (x∗ − APdom A x)  = ιdom A (x) + ιran A+ (x∗ − APdom A x) + x, x∗ +  1 2  x∗ − APdom A x, ΦA+ (x∗ − APdom A x) ,  (by Lemma 7.2.5) ∀(x, x∗ ) ∈ Rn × Rn .  135  Chapter 7. Proximal averages of monotone operators with linear graphs Remark 7.3.4 Let S be a linear subspace of Rn and A : Rn → Rn be invertible. Then dim AS = dim S. Proposition 7.3.5 Let S be a linear subspace of Rn and A : Rn → Rn be linear and monotone. Suppose that AS ⊂ S. Then (Id +A)S = S. Proof. By assumptions, (Id +A)S is a linear subspace and  (Id +A)S ⊂ S + AS ⊂ S.  (7.36)  Since (Id +A) is invertible by Proposition 5.2.6, by Remark 7.3.4, dim(Id +A)S = dim S. Then by (7.36), (Id +A)S = S. Corollary 7.3.6 Let S be a linear subspace of Rn and A : Rn → Rn be linear and monotone. Suppose that ran A ⊂ S. Then (Id +A)−1 A+ S ⊂ S. Proof. By Fact 2.1.17, ran A∗ = ran A ⊂ S. Thus A+ S ⊂ ran A+ ⊂ S. By Proposition 7.3.5, A+ S ⊂ S = (Id +A)S. Then (Id +A)−1 A+ S ⊂ (Id +A)−1 (Id +A)S = S. Lemma 7.3.7 Let B : Rn → Rn be linear and monotone, and S be a linear subspace of Rn . Suppose that x, y ∈ S and x∗ , y ∗ ∈ Rn . Then ιran(B+ +NS ) B ∗ (x + y) + x∗ + y ∗ + ιS ⊥ x∗ − y ∗ − B(x − y) = ιran(B+ +NS ) (x∗ − Bx) + ιS ⊥ x∗ − y ∗ − B(x − y) .  (7.37)  Proof. We consider two cases. Case 1: x∗ − y ∗ − B(x − y) ∈ / S ⊥ . Clear. 136  Chapter 7. Proximal averages of monotone operators with linear graphs Case 2: x∗ −y ∗ −B(x−y) ∈ S ⊥ . Let t ∈ S ⊥ such that y ∗ = x∗ −B(x−y)+t. Thus B ∗ (x + y) + x∗ + y ∗ = B ∗ (x + y) + 2x∗ − B(x − y) + t = B ∗ (x + y) + B(x + y) − B(x + y) + 2x∗ − B(x − y) + t = 2B+ (x + y) + t + 2x∗ − 2Bx.  (7.38)  On the other hand, since t ∈ S ⊥ , Fact 7.1.21 implies  2B+ (x + y) + t ∈ (B+ + NS )(2x + 2y).  (7.39)  Then by Remark 7.1.22, (7.39) and (7.38), we have B ∗ (x + y) + x∗ + y ∗ ∈ ran(B+ + NS ) ⇔ x∗ − Bx ∈ ran(B+ + NS ). Thus ιran(B+ +NS ) B ∗ (x+ y)+ x∗ + y ∗ = ιran(B+ +NS ) (x∗ − Bx). Hence (7.37) holds. Corollary 7.3.8 Let B : Rn → Rn be linear and monotone, and S be a linear subspace of Rn . Suppose that x ∈ Rn , y ∈ S and x∗ , y ∗ ∈ Rn . Then ιS (x) + ιran(B+ +NS ) B ∗ (x + y) + x∗ + y ∗ + ιS ⊥ x∗ − y ∗ − B(x − y) = ιS (x) + ιran(B+ +NS ) (x∗ − Bx) + ιS ⊥ x∗ − y ∗ − B(x − y) . Proof. Apply Lemma 7.3.7.  137  Chapter 7. Proximal averages of monotone operators with linear graphs Theorem 7.3.9 Let A : Rn ⇉ Rn be maximal monotone such that gra A is a linear subspace. Then P (FA , FA∗ ⊺) = hFA . Proof. 
Remark 7.3.4  Let S be a linear subspace of R^n and let A : R^n → R^n be invertible. Then dim AS = dim S.

Proposition 7.3.5  Let S be a linear subspace of R^n and let A : R^n → R^n be linear and monotone. Suppose that AS ⊂ S. Then (Id + A)S = S.

Proof. By assumption, (Id + A)S is a linear subspace and

(Id + A)S ⊂ S + AS ⊂ S.   (7.36)

Since Id + A is invertible by Proposition 5.2.6, Remark 7.3.4 gives dim(Id + A)S = dim S. Then by (7.36), (Id + A)S = S.

Corollary 7.3.6  Let S be a linear subspace of R^n and let A : R^n → R^n be linear and monotone. Suppose that ran A ⊂ S. Then (Id + A)^{-1} A_+ S ⊂ S.

Proof. By Fact 2.1.17, ran A* = ran A ⊂ S. Thus A_+S ⊂ ran A_+ ⊂ S. By Proposition 7.3.5, A_+S ⊂ S = (Id + A)S. Then (Id + A)^{-1} A_+ S ⊂ (Id + A)^{-1}(Id + A)S = S.

Lemma 7.3.7  Let B : R^n → R^n be linear and monotone, and let S be a linear subspace of R^n. Suppose that x, y ∈ S and x*, y* ∈ R^n. Then

ι_{ran(B_+ + N_S)}(B*(x + y) + x* + y*) + ι_{S^⊥}(x* − y* − B(x − y)) = ι_{ran(B_+ + N_S)}(x* − Bx) + ι_{S^⊥}(x* − y* − B(x − y)).   (7.37)

Proof. We consider two cases.

Case 1: x* − y* − B(x − y) ∉ S^⊥. Clear.

Case 2: x* − y* − B(x − y) ∈ S^⊥. Let t ∈ S^⊥ be such that y* = x* − B(x − y) + t. Thus

B*(x + y) + x* + y* = B*(x + y) + 2x* − B(x − y) + t
= B*(x + y) + B(x + y) − B(x + y) + 2x* − B(x − y) + t
= 2B_+(x + y) + t + 2x* − 2Bx.   (7.38)

On the other hand, since t ∈ S^⊥, Fact 7.1.21 implies

2B_+(x + y) + t ∈ (B_+ + N_S)(2x + 2y).   (7.39)

Then by Remark 7.1.22, (7.39) and (7.38), we have

B*(x + y) + x* + y* ∈ ran(B_+ + N_S)  ⇔  x* − Bx ∈ ran(B_+ + N_S).

Thus ι_{ran(B_+ + N_S)}(B*(x + y) + x* + y*) = ι_{ran(B_+ + N_S)}(x* − Bx). Hence (7.37) holds.

Corollary 7.3.8  Let B : R^n → R^n be linear and monotone, and let S be a linear subspace of R^n. Suppose that x ∈ R^n, y ∈ S and x*, y* ∈ R^n. Then

ι_S(x) + ι_{ran(B_+ + N_S)}(B*(x + y) + x* + y*) + ι_{S^⊥}(x* − y* − B(x − y)) = ι_S(x) + ι_{ran(B_+ + N_S)}(x* − Bx) + ι_{S^⊥}(x* − y* − B(x − y)).

Proof. Apply Lemma 7.3.7.

Theorem 7.3.9  Let A : R^n ⇉ R^n be maximal monotone such that gra A is a linear subspace. Then P(F_A, F_A^{*⊺}) = h_{F_A}.

Proof. By Corollary 7.1.31,

A = B + N_S, where B = P_S Q_A P_S and S = dom A.

Let (x, x*) ∈ R^n × R^n. By Fact 5.2.2, Fact 5.2.9, Proposition 7.2.15 and Proposition 7.2.18,

P(F_A, F_A^{*⊺})(x, x*)
= inf_{(y, y*)} [ 1/2 F_{B+N_S}(x + y, x* + y*) + 1/2 F_{B+N_S}^{*⊺}(x − y, x* − y*) + 1/2 ‖y‖² + 1/2 ‖y*‖² ]
= inf_{(y, y*)} [ ι_S(x + y) + ι_{ran(B_+ + N_S)}(B*(x + y) + x* + y*) + 1/8 ⟨B*(x + y) + x* + y*, Φ_{(B_+ + N_S)}(B*(x + y) + x* + y*)⟩ + ι_S(x − y) + ι_{S^⊥}(x* − y* − B(x − y)) + 1/2 ⟨x − y, B(x − y)⟩ + 1/2 ‖y‖² + 1/2 ‖y*‖² ]
= ι_S(x) + inf_{y ∈ S, y* ∈ R^n} [ ι_{ran(B_+ + N_S)}(B*(x + y) + x* + y*) + 1/8 ⟨B*(x + y) + x* + y*, Φ_{(B_+ + N_S)}(B*(x + y) + x* + y*)⟩ + ι_{S^⊥}(x* − y* − B(x − y)) + 1/2 ⟨x − y, B(x − y)⟩ + 1/2 ‖y‖² + 1/2 ‖y*‖² ],   (7.40)

where (7.40) holds since ι_S(x + y) = ι_S(x − y) = 0 ⇔ x, y ∈ S.

By (7.40) and Corollary 7.3.8,

P(F_A, F_A^{*⊺})(x, x*)
= ι_S(x) + ι_{ran(B_+ + N_S)}(x* − Bx) + inf_{y ∈ S, y* ∈ R^n} [ 1/8 ⟨B*(x + y) + x* + y*, Φ_{(B_+ + N_S)}(B*(x + y) + x* + y*)⟩ + ι_{S^⊥}(x* − y* − B(x − y)) + 1/2 ⟨x − y, B(x − y)⟩ + 1/2 ‖y‖² + 1/2 ‖y*‖² ]
= ι_S(x) + ι_{ran(B_+ + N_S)}(x* − Bx) + inf_{y ∈ S, t ∈ S^⊥} [ 1/8 ⟨C_0, Φ_{(B_+ + N_S)}(C_0)⟩ + 1/2 ⟨x − y, B(x − y)⟩ + 1/2 ‖y‖² + 1/2 ‖x* − B(x − y) + t‖² ],   (7.41)

where (7.41) holds by writing y* = x* − B(x − y) + t with t ∈ S^⊥, and where C_0 := B*(x + y) + 2x* − B(x − y) + t (so that B*(x + y) + x* + y* = C_0).

Suppose that x ∈ S and x* − Bx ∈ ran(B_+ + N_S). Then there exists y_0 ∈ S such that x* − Bx ∈ (B_+ + N_S)y_0. Thus by Fact 7.1.21, x* − Bx ∈ (B_+ + N_S)y_0 = B_+y_0 + S^⊥. Thus

⟨x, Bx + B_+y_0⟩ = ⟨x, x*⟩   (by x ∈ S).   (7.42)

On one hand,

C_0 = B*(x + y) + B(x + y) − B(x + y) + 2x* − B(x − y) + t = 2B_+(x + y) + t + 2x* − 2Bx.   (7.43)

On the other hand, since t ∈ S^⊥, Fact 7.1.21 implies

2B_+(x + y) + t ∈ (B_+ + N_S)(2x + 2y).   (7.44)

Then by Remark 7.1.22, (7.44) and (7.43),

C_0 ∈ (B_+ + N_S)(2x + 2y + 2y_0).   (7.45)

Then by (7.45), (7.41) and Lemma 7.2.4,

P(F_A, F_A^{*⊺})(x, x*)
= inf_{y ∈ S, t ∈ S^⊥} [ 1/8 ⟨2B_+(x + y + y_0), 2x + 2y + 2y_0⟩ + 1/2 ⟨x − y, B(x − y)⟩ + 1/2 ‖y‖² + 1/2 ‖x* − B(x − y) + t‖² ]
= inf_{y ∈ S, t' ∈ S^⊥} [ 1/2 ⟨B_+(x + y + y_0), x + y + y_0⟩ + 1/2 ⟨x − y, B(x − y)⟩ + 1/2 ‖y‖² + 1/2 ‖B_+y_0 + By + t'‖² ]
≤ inf_{y ∈ S} [ 1/2 ⟨B_+(x + y + y_0), x + y + y_0⟩ + 1/2 ⟨x − y, B(x − y)⟩ + 1/2 ‖y‖² + 1/2 ‖B_+y_0 + By‖² ].

Note that (by Remark 2.1.12)

1/2 ⟨B_+(x + y + y_0), x + y + y_0⟩ + 1/2 ⟨x − y, B(x − y)⟩
= 1/2 ⟨B_+(x + y_0) + B_+y, (x + y_0) + y⟩ + 1/2 ⟨x − y, B_+(x − y)⟩
= 1/2 ⟨B_+(x + y_0), x + y_0⟩ + ⟨y, B_+(x + y_0)⟩ + 1/2 ⟨y, B_+y⟩ + 1/2 ⟨x, B_+x⟩ + 1/2 ⟨y, B_+y⟩ − ⟨y, B_+x⟩
= 1/2 ⟨B_+(x + y_0), x + y_0⟩ + ⟨y, B_+y_0⟩ + ⟨y, B_+y⟩ + 1/2 ⟨x, B_+x⟩
= 1/2 ⟨B_+x, x⟩ + ⟨B_+y_0, x⟩ + 1/2 ⟨B_+y_0, y_0⟩ + ⟨y, B_+y_0⟩ + ⟨y, B_+y⟩ + 1/2 ⟨x, B_+x⟩
= ⟨x, Bx⟩ + ⟨B_+y_0, x⟩ + 1/2 ⟨B_+y_0, y_0⟩ + ⟨y, B_+y_0⟩ + ⟨y, B_+y⟩
= ⟨x, Bx + B_+y_0⟩ + 1/2 ⟨B_+y_0, y_0⟩ + ⟨y, B_+y_0⟩ + ⟨y, B_+y⟩
= ⟨x, x*⟩ + 1/2 ⟨B_+y_0, y_0⟩ + ⟨y, B_+y_0⟩ + ⟨y, B_+y⟩   (by (7.42)).

Thus

P(F_A, F_A^{*⊺})(x, x*) ≤ 1/2 ⟨y_0, B_+y_0⟩ + ⟨x, x*⟩ + inf_{y ∈ S} [ ⟨y, B_+y_0⟩ + ⟨y, B_+y⟩ + 1/2 ‖y‖² + 1/2 ‖B_+y_0 + By‖² ].   (7.46)

Note that

⟨y, B_+y_0⟩ + ⟨y, B_+y⟩ + 1/2 ‖y‖² + 1/2 ‖B_+y_0 + By‖²
= ⟨y, B_+y_0⟩ + ⟨y, B_+y⟩ + 1/2 ‖y‖² + 1/2 ‖B_+y_0‖² + ⟨y, B*B_+y_0⟩ + 1/2 ⟨y, B*By⟩
= ⟨y, (Id + B*)B_+y_0⟩ + 1/2 ⟨y, (B + B* + Id + B*B)y⟩ + 1/2 ‖B_+y_0‖²
= ⟨y, (Id + B*)B_+y_0⟩ + 1/2 ⟨y, (Id + B*)(Id + B)y⟩ + 1/2 ‖B_+y_0‖².

Then by (7.46), we have

P(F_A, F_A^{*⊺})(x, x*)
≤ 1/2 ⟨y_0, B_+y_0⟩ + 1/2 ‖B_+y_0‖² + ⟨x, x*⟩ + inf_{y ∈ S} [ ⟨y, (Id + B*)B_+y_0⟩ + 1/2 ⟨y, (Id + B*)(Id + B)y⟩ ]   (7.47)
≤ 1/2 ⟨y_0, B_+y_0⟩ + ⟨x, x*⟩ + 1/2 ‖B_+y_0‖² − 1/2 ‖B_+y_0‖²   (7.48)
= 1/2 ⟨y_0, B_+y_0⟩ + ⟨x, x*⟩
= ⟨x, x*⟩ + 1/2 ⟨x* − Bx, (B_+ + N_S)^{-1}(x* − Bx)⟩   (7.49)
= ⟨x, x*⟩ + 1/2 ⟨x* − Ax, (A_+)^{-1}(x* − Ax)⟩   (7.50)
= ⟨x, x*⟩ + 1/2 ⟨x* − Ax, Φ_{A_+}(x* − Ax)⟩   (by Lemma 7.2.5)
= ⟨x, x*⟩ + 1/2 ⟨x* − A P_{dom A} x, Φ_{A_+}(x* − A P_{dom A} x)⟩,   (7.51)

in which (7.48) holds by taking y = −(Id + B)^{-1} B_+y_0 ∈ S (where y ∈ S by Corollary 7.3.6), (7.49) holds by Lemma 7.2.4, and (7.50) holds by Remark 7.1.25 and Lemma 7.2.5.

Combining (7.41) and (7.51),

P(F_A, F_A^{*⊺})(x, x*) ≤ ι_S(x) + ι_{ran(B_+ + N_S)}(x* − Bx) + ⟨x, x*⟩ + 1/2 ⟨x* − A P_{dom A} x, Φ_{A_+}(x* − A P_{dom A} x)⟩
= ι_{dom A}(x) + ι_{ran A_+}(x* − A P_{dom A} x) + ⟨x, x*⟩ + 1/2 ⟨x* − A P_{dom A} x, Φ_{A_+}(x* − A P_{dom A} x)⟩   (by Remark 7.1.25 and Lemma 7.2.5)
= h_{F_A}(x, x*)   (by Proposition 7.3.3),   ∀(x, x*) ∈ R^n × R^n.

By Fact 5.2.11, Fact 5.2.12 and Proposition 5.1.8, P(F_A, F_A^{*⊺}) = h_{F_A}.
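Theorem 7.3.9 can also be checked numerically in the simplified setting of the previous sketch (an added illustration under the same assumptions: dom A = R^2, A = B linear monotone, B_+ invertible). The proximal average P(F_A, F_A^{*⊺}) is evaluated from the infimum formula used at the start of the proof, with y* eliminated through the indicator hidden in F_A^{*⊺}, namely y* = x* − B(x − y); the remaining minimization over y is done with scipy.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical check of Theorem 7.3.9 with dom A = R^2 (so N_S = 0,
    # Phi_{A_+} = B_+^{-1}); B below is an assumption of the example.
    B = np.array([[2.0, -1.0], [3.0, 1.0]])        # monotone: B_+ = [[2,1],[1,1]] > 0
    Bp_inv = np.linalg.inv((B + B.T) / 2)

    def F_B(u, ustar):                              # Fitzpatrick function of B
        w = B.T @ u + ustar
        return 0.25 * w @ (Bp_inv @ w)

    def h_closed(x, xstar):                         # closed form of Proposition 7.3.3
        u = xstar - B @ x
        return x @ xstar + 0.5 * u @ (Bp_inv @ u)

    def prox_avg(x, xstar):
        def objective(y):
            ystar = xstar - B @ (x - y)             # forced by F_A^{*T}(x-y, x*-y*) < infinity
            return (0.5 * F_B(x + y, xstar + ystar)
                    + 0.5 * (x - y) @ (B @ (x - y))
                    + 0.5 * y @ y + 0.5 * ystar @ ystar)
        return minimize(objective, np.zeros(2), method="BFGS").fun

    rng = np.random.default_rng(3)
    for _ in range(3):
        x, xstar = rng.standard_normal(2), rng.standard_normal(2)
        print(abs(prox_avg(x, xstar) - h_closed(x, xstar)))   # numerically negligible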
Corollary 7.3.10  Let A : R^n ⇉ R^n be maximal monotone such that gra A is a linear subspace. Then P(F_A, F_A^{*⊺}) = h_{F_A} = h_{F_A^{*⊺}}.

Proof. Combine Theorem 7.3.9 and Proposition 5.3.13.

Theorem 7.3.11  Let B : R^n → R^n be linear and monotone, and let S be a linear subspace of R^n. Then

P(F_{B+N_S}, F_{B+N_S}^{*⊺})(x, x*) = h_{F_{B+N_S}}(x, x*) = h_{F_{B+N_S}^{*⊺}}(x, x*)
= ι_S(x) + ι_{ran(B_+ + N_S)}(x* − Bx) + ⟨x, x*⟩ + 1/2 ⟨x* − Bx, Φ_{(B_+ + N_S)}(x* − Bx)⟩,   ∀(x, x*) ∈ R^n × R^n.

Proof. Let A = B + N_S and let (x, x*) ∈ R^n × R^n. By Fact 7.2.12 and Remark 7.1.22, A is maximal monotone with a linear graph. Thus by Proposition 7.3.3, Corollary 7.3.10, Remark 7.1.25 and Corollary 7.2.11,

P(F_{B+N_S}, F_{B+N_S}^{*⊺})(x, x*) = h_{F_{B+N_S}}(x, x*) = h_{F_{B+N_S}^{*⊺}}(x, x*)
= ι_{dom A}(x) + ι_{ran A_+}(x* − A P_{dom A} x) + ⟨x, x*⟩ + 1/2 ⟨x* − A P_{dom A} x, Φ_{A_+}(x* − A P_{dom A} x)⟩
= ι_S(x) + ι_{ran(B_+ + N_S)}(x* − Bx) + ⟨x, x*⟩ + 1/2 ⟨x* − Bx, Φ_{(B_+ + N_S)}(x* − Bx)⟩.
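The explicit function in Theorem 7.3.11 can be probed numerically in the special case S = R^2 with B_+ invertible, where the indicators vanish and Φ_{(B_+ + N_S)} reduces to B_+^{-1}, so the common value is g(x, x*) = ⟨x, x*⟩ + 1/2 ⟨x* − Bx, B_+^{-1}(x* − Bx)⟩. The sketch below (an added illustration; the matrix B, the positive definiteness of g as a quadratic form for this B, and the standard conjugation identity for quadratic forms are assumptions of the example, not claims of the thesis) checks that g is auto-conjugate in the sense of Chapter 5, i.e. g*(x*, x) = g(x, x*): since g is a positive definite homogeneous quadratic 1/2 zᵀQz on R^4, its conjugate is 1/2 wᵀQ^{-1}w.

    import numpy as np

    # Auto-conjugacy check of the function in Theorem 7.3.11 with S = R^2.
    B = np.array([[1.0, -2.0], [2.0, 3.0]])          # monotone: B_+ = diag(1, 3) > 0
    Bp_inv = np.linalg.inv((B + B.T) / 2)

    def g(z):
        x, xstar = z[:2], z[2:]
        u = xstar - B @ x
        return x @ xstar + 0.5 * u @ (Bp_inv @ u)

    # assemble Q by polarization: Q_ij = g(e_i + e_j) - g(e_i) - g(e_j)
    E = np.eye(4)
    Q = np.array([[g(E[i] + E[j]) - g(E[i]) - g(E[j]) for j in range(4)] for i in range(4)])
    Q_inv = np.linalg.inv(Q)

    rng = np.random.default_rng(1)
    for _ in range(3):
        z = rng.standard_normal(4)                   # z = (x, x*)
        w = np.concatenate([z[2:], z[:2]])           # w = (x*, x)
        print(abs(0.5 * w @ (Q_inv @ w) - g(z)))     # ~1e-15, i.e. g*(x*,x) = g(x,x*)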
7.4  The Fitzpatrick function of the sum

Fact 7.4.1  Let A, B : R^n ⇉ R^n be monotone. Then

F_{A+B} ≤ F_A □₂ F_B.   (7.52)

Proof. See [8, Proposition 4.2].

In (7.52), equality does not always hold; see [8, Example 4.7]. It would be interesting to characterize the pairs of monotone operators (A, B) that satisfy the identity F_{A+B} = F_A □₂ F_B.

Lemma 7.4.2  Let B : R^n → R^n be linear and monotone, and let S be a linear subspace of R^n. Then F_{B+N_S} = F_B □₂ F_{N_S}.

Proof. Let (x, x*) ∈ R^n × R^n. By Fact 5.2.4 and Remark 7.2.16, we have

(F_B □₂ F_{N_S})(x, x*)
= inf_{y*} [ F_B(x, y*) + F_{N_S}(x, x* − y*) ]
= inf_{y*} [ ι_{ran B_+}(y* + B*x) + 1/2 q_{(B_+)^†}(y* + B*x) + ι_S(x) + ι_{S^⊥}(x* − y*) ]
= ι_S(x) + inf_{y*} [ ι_{ran B_+}(y* + B*x) + 1/2 q_{(B_+)^†}(y* + B*x) + ι_{S^⊥}(x* − y*) ]
≤ ι_S(x) + ι_{ran(B_+ + N_S)}(x* + B*x) + inf_{y*} [ ι_{ran B_+}(y* + B*x) + 1/2 q_{(B_+)^†}(y* + B*x) + ι_{S^⊥}(x* − y*) ].   (7.53)

Next we show that (F_B □₂ F_{N_S})(x, x*) ≤ F_{B+N_S}(x, x*). Now suppose x ∈ S and x* + B*x ∈ ran(B_+ + N_S). Then there exists y_0 ∈ S such that

x* + B*x ∈ (B_+ + N_S)y_0.   (7.54)

By Fact 7.1.21, there exists t ∈ S^⊥ such that x* + B*x = B_+y_0 + t. Let y_0* = x* − t. Then, since x* + B*x = B_+y_0 + t,

y_0* + B*x = x* + B*x − t = B_+y_0.   (7.55)

By (7.53), (7.55) and Lemma 7.2.4,

(F_B □₂ F_{N_S})(x, x*)
≤ ι_{ran B_+}(y_0* + B*x) + 1/2 q_{(B_+)^†}(y_0* + B*x) + ι_{S^⊥}(x* − y_0*)
= 1/2 q_{(B_+)^†}(B_+y_0)
= 1/2 q_{B_+}(y_0)   (by Corollary 2.2.16)
= 1/4 ⟨B*x + x*, (B_+ + N_S)^{-1}(B*x + x*)⟩   (by (7.54))
= 1/4 ⟨B*x + x*, Φ_{(B_+ + N_S)}(B*x + x*)⟩.   (7.56)

Thus, combining (7.53) and (7.56),

(F_B □₂ F_{N_S})(x, x*) ≤ ι_S(x) + ι_{ran(B_+ + N_S)}(x* + B*x) + 1/4 ⟨B*x + x*, Φ_{(B_+ + N_S)}(B*x + x*)⟩ = F_{B+N_S}(x, x*)   (by Proposition 7.2.15),   ∀(x, x*) ∈ R^n × R^n.

Combining this with Fact 7.4.1, F_{B+N_S} = F_B □₂ F_{N_S}.

Fact 7.4.3  Let A, B : R^n → R^n be linear and monotone. Then

F_{A+B} = F_A □₂ F_B.

Proof. See [4, Corollary 5.7].

Fact 7.4.4  Let S_1, S_2 be linear subspaces of R^n. Then

F_{N_{S_1} + N_{S_2}} = F_{N_{S_1}} □₂ F_{N_{S_2}}.

Proof. See [8, Example 4.4].

Fact 7.4.5  Let S_1, S_2 be linear subspaces of R^n. Then N_{S_1} + N_{S_2} = N_{S_1 ∩ S_2}.

Proof. Clearly, dom(N_{S_1} + N_{S_2}) = S_1 ∩ S_2 = dom N_{S_1 ∩ S_2}. Let x ∈ S_1 ∩ S_2. By Fact 7.1.21,

(N_{S_1} + N_{S_2})(x) = (S_1)^⊥ + (S_2)^⊥ = (S_1 ∩ S_2)^⊥   (by [27, Exercise 3.17])
= N_{S_1 ∩ S_2}(x).

Proposition 7.4.6  Let B_1, B_2 : R^n → R^n be linear and monotone, and let S_1, S_2 be linear subspaces of R^n such that A_1 = B_1 + N_{S_1} and A_2 = B_2 + N_{S_2}. Then

F_{A_1 + A_2} = F_{A_1} □₂ F_{A_2}.

Proof. Let (x, x*) ∈ R^n × R^n. Then we have

(F_{A_1} □₂ F_{A_2})(x, x*)
= (F_{B_1+N_{S_1}} □₂ F_{B_2+N_{S_2}})(x, x*)
= inf_{y* + z* = x*} [ F_{B_1+N_{S_1}}(x, y*) + F_{B_2+N_{S_2}}(x, z*) ]
= inf_{y* + z* = x*} [ inf_{y_1* + y_2* = y*} ( F_{B_1}(x, y_1*) + F_{N_{S_1}}(x, y_2*) ) + inf_{z_1* + z_2* = z*} ( F_{B_2}(x, z_1*) + F_{N_{S_2}}(x, z_2*) ) ]   (by Lemma 7.4.2)
= inf_{y_1* + y_2* + z_1* + z_2* = x*} [ F_{B_1}(x, y_1*) + F_{N_{S_1}}(x, y_2*) + F_{B_2}(x, z_1*) + F_{N_{S_2}}(x, z_2*) ].

Thus

(F_{A_1} □₂ F_{A_2})(x, x*)
= inf_{u* + v* = x*} [ inf_{u_1* + u_2* = u*} ( F_{B_1}(x, u_1*) + F_{B_2}(x, u_2*) ) + inf_{v_1* + v_2* = v*} ( F_{N_{S_1}}(x, v_1*) + F_{N_{S_2}}(x, v_2*) ) ]
= inf_{u* + v* = x*} [ F_{B_1+B_2}(x, u*) + F_{N_{S_1}+N_{S_2}}(x, v*) ]   (by Fact 7.4.3 and Fact 7.4.4)
= inf_{u* + v* = x*} [ F_{B_1+B_2}(x, u*) + F_{N_{S_1 ∩ S_2}}(x, v*) ]   (by Fact 7.4.5)
= F_{B_1+B_2+N_{S_1 ∩ S_2}}(x, x*)   (by Lemma 7.4.2)
= F_{B_1+B_2+N_{S_1}+N_{S_2}}(x, x*)   (by Fact 7.4.5)
= F_{A_1+A_2}(x, x*).

Corollary 7.4.7  Let A, B : R^n ⇉ R^n be maximal monotone such that gra A and gra B are linear subspaces. Then F_{A+B} = F_A □₂ F_B.

Proof. By Corollary 7.1.31, there exist linear and monotone operators A_1, B_1 : R^n → R^n such that A = A_1 + N_{dom A} and B = B_1 + N_{dom B}. Since dom A and dom B are subspaces of R^n, Proposition 7.4.6 yields

F_{A+B} = F_A □₂ F_B.

Remark 7.4.8  Corollary 7.4.7 generalizes the result of Bauschke, Borwein and Wang in [4].
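For a concrete feel of Corollary 7.4.7 in its single-valued special case (Fact 7.4.3), the sketch below (added as an illustration; the matrices, the invertibility of the symmetric parts and the resulting closed-form Fitzpatrick formula are assumptions of the example) compares F_{A+B} with the partial infimal convolution F_A □₂ F_B, the latter computed by solving the normal equations of the inner minimization over y*.

    import numpy as np

    # Numerical check of F_{A+B} = (F_A box_2 F_B)(x,x*) = inf_{y*} F_A(x,y*) + F_B(x,x*-y*)
    # for linear monotone A, B with invertible symmetric parts; the Fitzpatrick formula
    # below is the one from the proof of Lemma 7.4.2, with (C_+)^dagger = C_+^{-1}.
    A = np.array([[2.0, -1.0], [1.0, 1.0]])
    B = np.array([[1.0,  2.0], [-2.0, 3.0]])

    def fitz(C):
        Cp_inv = np.linalg.inv((C + C.T) / 2)
        return lambda x, xstar: 0.25 * (C.T @ x + xstar) @ (Cp_inv @ (C.T @ x + xstar))

    F_A, F_B, F_AB = fitz(A), fitz(B), fitz(A + B)

    def box2(x, xstar):
        # The objective is a strictly convex quadratic in y*, so the minimizer solves
        # (A_+^{-1} + B_+^{-1}) y* = B_+^{-1}(B^T x + x*) - A_+^{-1} A^T x.
        Ap_inv = np.linalg.inv((A + A.T) / 2)
        Bp_inv = np.linalg.inv((B + B.T) / 2)
        ystar = np.linalg.solve(Ap_inv + Bp_inv,
                                Bp_inv @ (B.T @ x + xstar) - Ap_inv @ (A.T @ x))
        return F_A(x, ystar) + F_B(x, xstar - ystar)

    rng = np.random.default_rng(2)
    for _ in range(3):
        x, xstar = rng.standard_normal(2), rng.standard_normal(2)
        print(abs(F_AB(x, xstar) - box2(x, xstar)))   # ~1e-15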
Chapter 8

Future work

Our future work includes the following:

• Simplify some of the earlier technical proofs.

• Extend the main results to Hilbert spaces and possibly to (reflexive) Banach spaces.

• Since Asplund's decomposition of monotone operators is based on Zorn's Lemma, it would be very interesting to find a constructive proof.

Bibliography

[1] E. Asplund, "A monotone convergence theorem for sequences of nonlinear mappings," Nonlinear Functional Analysis, Proceedings of Symposia in Pure Mathematics, vol. XVIII, Part 1, American Mathematical Society, Chicago, pp. 1–9, 1970.

[2] S. Bartz, H. H. Bauschke, J. M. Borwein, S. Reich, and X. Wang, "Fitzpatrick functions, cyclic monotonicity, and Rockafellar's antiderivative," Nonlinear Analysis, no. 5, pp. 1198–1223, 2007.

[3] H. H. Bauschke and J. M. Borwein, "Maximal monotonicity of dense type, local maximal monotonicity, and monotonicity of the conjugate are all the same for continuous linear operators," Pacific Journal of Mathematics, vol. 189, pp. 1–20, 1999.

[4] H. H. Bauschke, J. M. Borwein, and X. Wang, "Fitzpatrick functions and continuous linear monotone operators," SIAM Journal on Optimization, vol. 18, pp. 789–809, 2007.

[5] H. H. Bauschke, Y. Lucet, and M. Trienis, "How to transform one convex function continuously into another," SIAM Review, in press, 2008.

[6] H. H. Bauschke, E. Matoušková, and S. Reich, "Projection and proximal point methods: convergence results and counterexamples," Nonlinear Analysis, vol. 56, pp. 715–739, 2004.

[7] H. H. Bauschke, R. Goebel, Y. Lucet, and X. Wang, "The proximal average: basic theory," submitted.

[8] H. H. Bauschke, D. A. McLaren, and H. S. Sendov, "Fitzpatrick functions: inequalities, examples and remarks on a problem by S. Fitzpatrick," Journal of Convex Analysis, vol. 13, no. 3–4, pp. 499–523, 2006.

[9] H. H. Bauschke and X. Wang, "The kernel average for two convex functions and its applications to the extension and representation of monotone operators," to appear in Transactions of the American Mathematical Society, 2007.

[10] H. H. Bauschke and X. Wang, "A convex-analytical approach to extension results for n-cyclically monotone operators," Set-Valued Analysis, no. 3, pp. 297–306, 2007.

[11] J. M. Borwein, "Adjoint process duality," Mathematics of Operations Research, vol. 8, no. 3, pp. 403–434, 1983.

[12] J. M. Borwein and H. Wiersma, "Asplund decomposition of monotone operators," SIAM Journal on Optimization, vol. 18, no. 3, pp. 946–960, 2007.

[13] J. M. Borwein and A. S. Lewis, "Convex Analysis and Nonlinear Optimization," Springer-Verlag, 2000.

[14] D. Butnariu and G. Kassay, "A proximal-projection method for finding zeros of set-valued operators," submitted.

[15] F. H. Clarke, "Optimization and Nonsmooth Analysis," Wiley, New York, 1983.

[16] R. Cross, "Multivalued Linear Operators," Marcel Dekker, Inc., New York, 1998.

[17] N. Ghoussoub, "Maximal monotone operators are selfdual vector fields and vice-versa," preprint, 2006.

[18] S. Fitzpatrick, "Representing monotone operators by convex functions," Workshop/Miniconference on Functional Analysis and Optimization (Canberra 1988), Proceedings of the Centre for Mathematical Analysis, Australian National University, vol. 20, Canberra, Australia, pp. 59–65, 1988.

[19] C. W. Groetsch, "Generalized Inverses of Linear Operators," Marcel Dekker, Inc., New York, 1977.

[20] D. G. Luenberger, "Optimization by Vector Space Methods," John Wiley & Sons, 2000.

[21] C. D. Meyer, "Matrix Analysis and Applied Linear Algebra," SIAM, New York, 2000.

[22] R. R. Phelps, "Convex Functions, Monotone Operators and Differentiability," Lecture Notes in Mathematics, Springer-Verlag, Berlin Heidelberg, 1993.

[23] R. R. Phelps and S. Simons, "Unbounded linear monotone operators on nonreflexive Banach spaces," Journal of Convex Analysis, vol. 5, pp. 303–328, 1998.
[24] J.-P. Penot and C. Zălinescu, "Some problems about the representation of monotone operators by convex functions," The Australian and New Zealand Industrial and Applied Mathematics Journal, vol. 47, no. 1, pp. 1–20, 2005.

[25] R. T. Rockafellar, "Convex Analysis," Princeton University Press, 1970.

[26] R. T. Rockafellar and R. J.-B. Wets, "Variational Analysis," Springer-Verlag, 1998.

[27] B. P. Rynne and M. A. Youngson, "Linear Functional Analysis," Springer, London, 2000.

[28] S. Simons, "Minimax and Monotonicity," Springer, New York, 1998.

[29] S. Simons and C. Zălinescu, "Fenchel duality, Fitzpatrick functions and maximal monotonicity," Journal of Nonlinear and Convex Analysis, vol. 6, pp. 1–22, 2005.

[30] M. D. Voisei, "The sum theorem for linear maximal monotone operators," Mathematical Sciences Research Journal, vol. 10, no. 4, pp. 83–85, 2006.

[31] C. Zălinescu, "Convex Analysis in General Vector Spaces," World Scientific Publishing, 2002.
