UBC Theses and Dissertations


On a polar factorization theorem Millien, Pierre 2011

On a Polar Factorization Theorem

by Pierre Millien

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in the Faculty of Graduate Studies (Mathematics), The University of British Columbia (Vancouver), October 2011. © Pierre Millien, 2011

Abstract

We study the link between two different factorization theorems and their proofs: on the one hand, Brenier's theorem, which states that any u ∈ Lp(Ω), where Ω is a bounded domain in Rd and 1 ≤ p ≤ ∞, can be written as u = ∇φ ◦ s, where φ is a convex function and s a measure preserving transformation; on the other hand, Ghoussoub and Moameni's theorem, which states that for any u ∈ L∞(Ω), u(x) = ∇1H(S(x), x), where H is a convex-concave antisymmetric function and S is a measure preserving involution. We then prove that Ghoussoub and Moameni's theorem holds in L2, and compute the decomposition explicitly for a particular example: u(x) = |x − 1/2|.

Contents

Abstract
Contents
List of Figures
Acknowledgments
1 Introduction
2 The mass transportation problem
2.1 Monge-Kantorovich problem
2.1.1 Mass transportation and polar factorization
2.2 Polar factorization via a variational approach
2.2.1 Gangbo's approach
2.2.2 A self-dual polar factorization
3 Polar factorization and Hilbert projections
3.1 Abstract polar factorization
3.2 The self-dual case
4 The self-dual polar factorization in L2
5 Case study
5.1 Finding S
5.2 Finding H(x, Sx)
5.3 Convexity inequalities
5.4 Final expressions for H
6 Conclusion
Bibliography

List of Figures

Figure 5.1 Expression for H

Acknowledgments

I would like to thank my supervisor Nassif Ghoussoub for all his time, his help, and his insightful advice during my research and the writing of this document. I would also like to thank Abbas Moameni for all his help, his patience, and his advice. I would like to show my gratitude to Bernard Maurey for his contribution. I am grateful to Mostafa Fazly for his advice and his support.

Chapter 1 Introduction

This thesis tackles the issue of factorization in analysis. We study two polar decomposition theorems: one by Y. Brenier ([1]), which states that any u ∈ Lp(Ω), with 1 ≤ p ≤ ∞, can be written as u = ∇φ ◦ s, where φ is a convex function and s a measure preserving transformation; and a more recent one by N. Ghoussoub and A. Moameni ([5]), which states that for any u ∈ L∞(Ω), u(x) = ∇1H(S(x), x), where H is a convex-concave antisymmetric function and S is a measure preserving involution.

Brenier's first approach to the polar factorization was a "projection approach", working in the space L2(Ω). His idea was to take the Hilbert projection of u onto the subset S of all measure preserving transformations, and then to deduce the factorization via geometrical arguments. But this approach failed for technical reasons.
So he invented a clever proof based on Monge-Kantorovich mass transportation theory, which is briefly presented in Chapter 2. Three years later, Gangbo ([3]) gave a new proof of Brenier's theorem using a completely different approach. He proved the result for u ∈ L∞(Ω) using only elementary convex analysis tools and a variational resolution, and he deduced the result for u ∈ Lp(Ω) via a density argument. Using a variational method similar to Gangbo's, Ghoussoub and Moameni proved a new version of the polar factorization, the self-dual polar factorization, in L∞(Ω). The goal of this thesis is to study the link between the new self-dual polar factorization theorem and Brenier's geometrical approach, and to extend the self-dual polar factorization theorem to L2(Ω) using a density argument similar to Gangbo's. The last chapter presents an explicit computation of the factorization for a simple function.

Chapter 2 The mass transportation problem

We first review a simplified case of the Monge-Kantorovich mass transportation problem, and explain how it can lead to Brenier's polar factorization.

2.1 Monge-Kantorovich problem

Let X ⊂ Rd and Y ⊂ Rd, and consider two probability spaces (X, λ) and (Y, ν). Consider the class S of all transformations s : X → Y which rearrange λ into ν, i.e. such that s#λ = ν, which means that for all h ∈ C0(Y, Rd),

∫_X h(s(x)) dλ(x) = ∫_Y h(y) dν(y).

We now introduce a cost function c : X × Y → [0, ∞) and a total cost functional

I(s) := ∫_X c(x, s(x)) dλ.

The problem is to find min_{s ∈ S} I(s). It turns out that, for the cost function c(x, y) = (1/2)|x − y|², there is an s* with s*#λ = ν which minimizes I, and s* is such that s* = Dφ* for some convex function φ*. Moreover, s* is unique. The idea of the proof is to introduce a "relaxed" version of the problem, called the Monge-Kantorovich problem, and to solve it via a dual variational principle.
To do so, we introduce a new class:

M := {Radon probability measures π on X × Y | π[A × Y] = λ[A], π[X × B] = ν[B] for all Borel sets A ⊂ X, B ⊂ Y}.

We now have a relaxed cost functional

J[π] := ∫_{X×Y} c(x, y) dπ(x, y),

and we want to find π* such that J[π*] = min_{π ∈ M} J[π]. Under appropriate assumptions on the cost c, a compactness argument gives the existence of an optimal measure. Such a measure need not be generated by a one-to-one mapping s ∈ S. In order to get a better characterization of the optimal plan, Kantorovich, inspired by linear programming, introduced a dual problem by defining

L := {(u, v) | u : X → R, v : Y → R, u(x) + v(y) ≤ c(x, y) for all (x, y) ∈ X × Y}

and a new functional

K[u, v] := ∫_X u(x) dλ(x) + ∫_Y v(y) dν(y).

The dual problem is then to find an optimal pair (u*, v*) ∈ L such that K[u*, v*] = max_{(u,v) ∈ L} K[u, v]. So rather than constructing an s* ∈ S satisfying a nonlinear constraint, we now have to find the optimal pair (u*, v*), which turns out to be a lot easier, and leads to a precise characterization of the optimal plans. For the complete proof, see [8].

2.1.1 Mass transportation and polar factorization

Consider (Ω, λ), where Ω is a bounded subset of Rd and λ is the Lebesgue measure.

Theorem 2.1.1. (Brenier [1]) Let u : Ω → Ω be a non-degenerate L2 vector field. There is a unique pair (∇ψ, s) such that:
ψ : Ω → R is a convex function;
s : Ω → Ω is measure preserving (i.e. s#λ = λ);
u = ∇ψ ◦ s.
Moreover, s is the unique orthogonal L2 projection of u onto S, the set of all measure preserving mappings Ω → Ω.

The idea of the proof is the following. We look for a map s which minimizes ∫_Ω |u(x) − σ(x)|² dλ among all σ ∈ S. We introduce the image measure π = (u × σ)#λ; then we have to find

min_σ { ∫_{Ω×Ω} |x − y|² dπ(x, y) ; π = (u × σ)#λ, σ#λ = λ }.

The idea is that Monge-Kantorovich theory gives a convex φ such that (∇φ ◦ u)#λ = λ, and π = (u × ∇φ ◦ u)#λ is concentrated on the graph of ∇φ and is optimal.
Then setting s := ∇φ ◦ u and ψ = φ*, we get ∇ψ ◦ s = ∇φ* ◦ ∇φ ◦ u = u, hence the polar decomposition.

2.2 Polar factorization via a variational approach

2.2.1 Gangbo's approach

Another proof of Brenier's polar factorization theorem was developed by W. Gangbo in [3]. The idea is to solve a minimization problem whose Euler-Lagrange equation turns out to be u = ∇φ ◦ s. The proof does not rely on any of the Monge-Kantorovich tools, and uses only convex analysis. The proposition is proved for mappings u ∈ L∞, and the Lp version is deduced by an approximation argument. The idea is the following: let R be such that u(Ω) ⊂ B_R. Introduce

E_R = {(φ, ψ) | φ ∈ C(B_R) ∩ L∞(B_R), ψ ∈ C(Ω) ∩ L∞(Ω), φ(y) + ψ(z) ≥ y·z for all (y, z) ∈ B_R × Ω}

and S = {s : Ω → Ω | s is a measure preserving mapping}. The variational problems are the following:

i∞ = inf{ I(φ, ψ) | (φ, ψ) ∈ E_R }, where I(φ, ψ) = ∫_Ω [φ(u(x)) + ψ(x)] dx,

and the dual problem is to find sup{ ∫_Ω u(x)·s(x) dx | s ∈ S }.

2.2.2 A self-dual polar factorization

Using a similar variational approach, Ghoussoub and Moameni recently proved the following:

Theorem 2.2.1. (Ghoussoub, Moameni [5]) Let Ω be an open bounded set in RN such that meas(∂Ω) = 0, and let u ∈ L∞(Ω, RN) be a non-degenerate vector field. Then there exist a globally Lipschitz skew-adjoint saddle function H and a measure preserving involution s such that

u(x) = ∇1H(s(x), x) a.e. x ∈ Ω, and ∫_Ω u(x)·s(x) dx = sup_{f ∈ S} ∫_Ω u(x)·f(x) dx.

If we define L_H(x, p) = sup_{y ∈ Ω} {y·p − H(y, x)}, we have

∫_Ω L_H(x, u(x)) dx = inf_{H̃ ∈ H} ∫_Ω L_{H̃}(x, u(x)) dx, and s = ∇2L_H(x, u(x)).

The question now is to understand the link between the self-dual polar decomposition, Monge-Kantorovich theory, and Brenier's polar factorization.
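As an editor-added numerical sketch (not part of the thesis), the quadratic-cost picture above is easy to check in dimension one, where the optimal plan is the monotone (sorted) matching, the discrete trace of Brenier's map ∇φ. The discretization below, with uniformly weighted atoms and the use of scipy.optimize.linprog, is my own choice of illustration: it solves the Kantorovich relaxation as a linear program and verifies that the optimal coupling is π = I/n.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 6
x = np.sort(rng.uniform(0.0, 1.0, n))  # support of lambda, uniform weights 1/n
y = np.sort(rng.uniform(0.0, 1.0, n))  # support of nu, uniform weights 1/n

# quadratic cost c(x, y) = (1/2)|x - y|^2
C = 0.5 * (x[:, None] - y[None, :]) ** 2

# Kantorovich relaxation as a linear program: minimize <C, pi> over
# couplings pi >= 0 whose row and column marginals are both (1/n, ..., 1/n).
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1.0   # i-th row of pi sums to 1/n
    A_eq[n + i, i::n] = 1.0            # i-th column of pi sums to 1/n
b_eq = np.full(2 * n, 1.0 / n)
res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
pi = res.x.reshape(n, n)

# With distinct sorted points and quadratic cost, the monotone matching is
# the unique optimal plan, so pi must be (1/n) times the identity matrix.
assert np.allclose(pi, np.eye(n) / n, atol=1e-7)
print("optimal transport cost:", res.fun)
```

Uniqueness here follows from the strict Monge condition of the quadratic cost: swapping any two assignments increases the cost by (x2 − x1)(y2 − y1) > 0, so any LP solver must return the sorted matching.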
Chapter 3 Polar factorization and Hilbert projections

3.1 Abstract polar factorization

We saw that regardless of the approach, in each case, to find the measure preserving mapping (or the self-dual measure preserving mapping), one has to find s such that

∫_Ω u(x)·s(x) dx = sup_{σ ∈ S} ∫_Ω u(x)·σ(x) dx,

or equivalently, find min_{σ ∈ S} ∫_Ω |u(x) − σ(x)|² dλ.

It turns out that the study of the projection problem on a closed set S can lead to an abstract polar factorization theorem, which states that if S is a closed semigroup in a real Hilbert space H (here H = L2(Ω), and the semigroup law is the composition of functions), then for "almost every" u ∈ H (in the sense of Baire) there exists a unique projection s of u on S, and there is an element k ∈ K such that u = k ◦ s, where K is the polar cone of S, defined as follows:

Definition 3.1.1. The polar cone K_S of S is

K = {u ∈ H ; ((u, e − s)) ≥ 0 for all s ∈ S}.

K is the set of all u ∈ H for which the identity map e is a Hilbert projection of u on S. This approach is developed in [1]. The problem in our case is that S is a group in neither setting: the set of all measure preserving transformations is a semigroup but not a group (some of its elements are not invertible), and the set of all self-dual measure preserving transformations is not a semigroup, although all its elements are invertible. Since Brenier was still able to prove that the polar cone of the set of all measure preserving transformations is exactly the set {∇ψ ; ψ ∈ W^{1,2}(Ω), ψ convex}, we tried to see what would still hold in the case where S = {self-dual measure preserving transformations}, and to find its polar cone.

3.2 The self-dual case

All the theorems that still hold require the assumption that S be closed. This is indeed the case:

Proposition 3.2.1. Let S be the set of all self-dual measure preserving transformations, i.e. for all s ∈ S: for all f ∈ L1(Ω), f ◦ s ∈ L1(Ω) and ∫_Ω f ◦ s(x) dx = ∫_Ω f(x) dx, and s ◦ s = e. Then S is closed.

Proof. Let (sn)n∈N ∈ S^N with sn → s.
The map s is still measure preserving (see [1]). It is also an involution. For every antisymmetric H ∈ L1 ∩ C0, the dominated convergence theorem and the continuity of H give

∫_Ω H(sn(x), x) dx → ∫_Ω H(s(x), x) dx;

since each sn is a measure preserving involution and H is antisymmetric, ∫_Ω H(sn(x), x) dx = 0, which gives ∫_Ω H(s(x), x) dx = 0. Now for any antisymmetric H ∈ L1 there exists a sequence Hn of continuous antisymmetric functions converging to H. We get

∫_Ω Hn(s(x), x) dx → ∫_Ω H(s(x), x) dx,

which gives ∫_Ω H(s(x), x) dx = 0. Then, taking H(x, y) = |s(x) − y| − |s(y) − x|, we obtain ∫_Ω H(s(x), x) dx = ∫_Ω |s(s(x)) − x| dx = 0, so s ◦ s = e.

The first theorem that still holds is the one giving the existence of the projection.

Theorem 3.2.2. (Edelstein) Let S be a closed bounded subset of a real Hilbert space H. Then the set of all u ∈ H for which there is a unique Hilbert projection π(u) on S contains a dense countable intersection of open sets H\N, defined by

H\N = {u ∈ H ; for all ε > 0 there exists δ > 0 such that ‖s1 − s2‖ ≤ ε whenever s1, s2 ∈ S and ‖si − u‖ ≤ dist(u, S) + δ, i = 1, 2}.

Moreover, π is continuous from H\N into H.

Theorem 3.2.3. Let S be a closed bounded subset of a sphere centred at the origin in a real Hilbert space H. Then the projection operator π : H → S can be characterized as the gradient of the Lipschitz continuous convex function

J(u) = sup_{s ∈ S} ((u, s)).

More precisely, one has ∂J(u) = π(u) for all u ∈ H\N. The proof can be found in [1].

We then managed to determine the polar cone of S:

Proposition 3.2.4. Let K̃ = {u ∈ L2(Ω) ; u = ∇1H(x, x) for some H ∈ H}. Then K̃ is the polar cone of S, i.e. K̃ = K_S.

In the proof, we will use the following lemma, due to Krauss ([6]):

Lemma 3.2.5. For each monotone operator A ⊂ H × H such that D(A) ≠ ∅, there exists a lower closed skew-symmetric saddle function L_A : H × H → R with co D(A) ⊂ Dom L_A ⊂ cl(co D(A)) such that T_{L_A}, defined by f ∈ T_{L_A}x :⇔ [f, −f] ∈ ∂L_A(x, x), is a maximal monotone extension of A.

Proof. First step: K̃ ⊂ K_S. Let u ∈ K̃, u = ∇1H(x, x) with H ∈ H.
For all s ∈ S, convexity of H in the first variable gives

H(s(x), x) ≥ H(x, x) + ∇1H(x, x)·(s(x) − x).

We get that

∫_Ω ∇1H(x, x)·(x − s(x)) dx ≥ ∫_Ω (H(x, x) − H(s(x), x)) dx = 0,

since H vanishes on the diagonal and, s being a measure preserving involution, the change of variables x ↦ s(x) gives ∫_Ω H(s(x), x) dx = ∫_Ω H(x, s(x)) dx = −∫_Ω H(s(x), x) dx = 0. Hence u ∈ K_S.

Second step: K_S ⊂ K̃. Let u ∈ K_S; we first show that u is monotone. Consider a pair (x1, x2) ∈ Ω × Ω. When R is small enough, B(xi, R) ⊂ Ω for i ∈ {1, 2}. We then define the Lebesgue measure preserving involution

s_R(x) = x − x1 + x2 if x ∈ B(x1, R); x − x2 + x1 if x ∈ B(x2, R); x otherwise.

Since u ∈ K_S, we have ∫_Ω u(x)·(x − s_R(x)) dx ≥ 0, which, after the change of variables x = xi + Ry and cancellation of the factor R^d, reads

(x1 − x2)·[ ∫_{B(0,1)} u(x1 + Ry) dy − ∫_{B(0,1)} u(x2 + Ry) dy ] ≥ 0.

Since u is Lebesgue integrable, almost every point x ∈ Ω is a Lebesgue point, which means

u(x) = lim_{R→0} |B(0,1)|^{-1} ∫_{B(0,1)} u(x + Ry) dy.

This leads to (x1 − x2)·(u(x1) − u(x2)) ≥ 0 for a.e. x1, x2 ∈ Ω, so every u ∈ K_S is monotone. We now use Krauss' lemma: the graph {(x, u(x)) ; x ∈ Ω} is contained in {(x, ∇1L(x, x)) ; x ∈ Ω} for some skew-symmetric saddle function L, which gives u ∈ K̃.

We did not succeed in proving the self-dual polar factorization in L2 via the abstract polar factorization theory, but we found similarities between the polar cone of S and the ∇1H that appears in Ghoussoub and Moameni's theorem. Even though this technique did not work, we still managed to prove that the self-dual polar factorization holds in L2, using an approximation argument similar to the one that Gangbo used to prove the polar factorization in L2 from the L∞ result.

Chapter 4 The self-dual polar factorization in L2

Ghoussoub and Moameni proved the self-dual polar factorization theorem (Theorem 2.2.1) for u ∈ L∞(Ω). We want to prove this result in L2. The following proof is joint work with Abbas Moameni; I would like to thank him for all his help. Let u ∈ L2(Ω) be a non-degenerate vector field.
Define un : Ω → Rd by un(x) = rn(u(x)), where rn is a diffeomorphism from Rd onto Bn such that ‖rn(y)‖ ≤ ‖y‖ for all y ∈ Rd and rn(y) → y uniformly on every compact subset of Rd. Then for each n ∈ N, un ∈ L∞(Ω, Rd) and un is non-degenerate. Since sup{‖un(x)‖ ; x ∈ Ω} ≤ n, the self-dual polar factorization theorem gives a measure preserving involution sn and a Lipschitz continuous saddle function Hn (with constant depending on n) such that

un(x) = ∇1Hn(sn(x), x) a.e. x ∈ Ω,

and

∫_Ω un(x)·sn(x) dx = sup_{f ∈ S} ∫_Ω un(x)·f(x) dx.

Let us show that (Hn)n∈N is bounded in L2(Ω × Ω). Since Hn is convex in the first variable, the subgradient inequality at (sn(x), x) gives, for every y,

Hn(y, x) − Hn(sn(x), x) ≥ (y − sn(x))·un(x),

and taking y = x (recall Hn(x, x) = 0 by antisymmetry):

Hn(sn(x), x) ≤ (sn(x) − x)·un(x) ≤ |un(x)||sn(x) − x|.

Similarly, since ∇1Hn(x, sn(x)) = un(sn(x)) (apply the factorization at the point sn(x) and use sn ◦ sn = e), we have

Hn(sn(x), sn(x)) − Hn(x, sn(x)) ≥ (sn(x) − x)·un(sn(x)),

so Hn(x, sn(x)) ≤ (x − sn(x))·un(sn(x)), and by antisymmetry

Hn(sn(x), x) ≥ −|sn(x) − x||un(sn(x))|.

We thus get

−|sn(x) − x||un(sn(x))| ≤ Hn(sn(x), x) ≤ |un(x)||sn(x) − x|.

Now, for every y,

Hn(y, x) ≥ Hn(sn(x), x) + (y − sn(x))·un(x) ≥ −|y − sn(x)||un(x)| − |sn(x) − x||un(sn(x))|,

and by antisymmetry Hn(x, y) ≤ |y − sn(x)||un(x)| + |sn(x) − x||un(sn(x))|. Similarly,

Hn(x, y) ≥ −|x − sn(y)||un(y)| − |sn(y) − y||un(sn(y))|.

Since Ω is bounded, ‖un‖_{L2} ≤ ‖u‖_{L2} and sn is measure preserving, these two bounds imply that (Hn) is a bounded sequence in L2(Ω × Ω).

Up to a subsequence, Hn ⇀ H weakly in L2(Ω × Ω). The weak limit H is still antisymmetric almost everywhere. Let (x0, y0) ∈ Ω² and consider, for ε > 0,

f_ε(x, y) = |B((x0, y0), ε)|^{-1} 1_{B((x0, y0), ε)}(x, y),
g_ε(x, y) = |B((y0, x0), ε)|^{-1} 1_{B((y0, x0), ε)}(x, y).

For ε small enough we have, for all n ∈ N,

∫_{Ω²} f_ε(x, y)Hn(x, y) dx dy + ∫_{Ω²} g_ε(x, y)Hn(x, y) dx dy = 0.

Taking the limit as n → ∞, we get, for all ε > 0,

∫_{Ω²} f_ε(x, y)H(x, y) dx dy = −∫_{Ω²} g_ε(x, y)H(x, y) dx dy.

Letting ε → 0 and using Lebesgue's differentiation theorem, we get H(x0, y0) = −H(y0, x0) a.e.
Since Hn ⇀ H weakly in L2, by Mazur's lemma there is a sequence (H̃n)n∈N of convex combinations, H̃i = Σ_n α_{i,n}Hn (finite sums, with α_{i,n} ≥ 0 and Σ_n α_{i,n} = 1), which converges to H strongly. So, up to a subsequence, H̃n(x, y) → H(x, y) for almost every x ∈ Ω and every y ∈ Ω. For every y ∈ Ω, denote by Gy ⊂ Ω the set of all points x ∈ Ω such that H̃n(x, y) → H(x, y); we have |Gy| = |Ω|.

Now fix y0 ∈ Ω and set, for all x,

H̃(x, y0) := inf{ Σ_i λ_i H(x_i, y0) ; λ_i ≥ 0, Σ_i λ_i = 1, x_i ∈ G_{y0}, x = Σ_i λ_i x_i }.

H̃(·, y0) is a convex function and H̃(x, y0) ≤ H(x, y0). But H̃(·, y0) = H(·, y0) almost everywhere: indeed, for any x ∈ G_{y0} and any convex combination of points (x_i)_i in G_{y0} such that Σ_i λ_i x_i = x, we have H(x, y0) ≤ Σ_i λ_i H(x_i, y0), so, passing to the infimum, H(x, y0) ≤ H̃(x, y0) for all x ∈ G_{y0}.

Since for every y we have H̃(·, y) = H(·, y) almost everywhere, and H(x, y) = −H(y, x) almost everywhere in Ω², it follows that H̃(x, y) = −H̃(y, x) almost everywhere in Ω². Moreover, for every y ∈ Ω the map x ↦ H̃(x, y) is convex; since H̃(·, y) is in L2(Ω), it is finite for almost every x, hence, by convexity, finite everywhere on the interior of Ω, and therefore continuous on the interior of Ω. Putting the last two points together, we get H̃(x, y) = −H̃(y, x) for every x, y ∈ Ω.

Now consider gn(x) = Ln(x, un(x)) = sn(x)·un(x) − Hn(sn(x), x). Since (gn) is bounded in L2(Ω), up to a subsequence gn ⇀ g weakly in L2, which implies

∫_Ω Ln(x, un(x)) dx → ∫_Ω g(x) dx.

For every x, y we have Hn(y, x) + Ln(x, un(x)) ≥ y·un(x), which gives, integrating over Ω,

∫_Ω Ln(x, un(x)) dx ≥ ∫_Ω (y·un(x) − Hn(y, x)) dx.

Now introduce, for y0 ∈ Ω and ε > 0,

f_{ε,y0}(y) := |B(y0, ε)|^{-1} 1_{B(y0, ε)}(y).

For any y0 ∈ Ω, multiply the previous inequality by f_{ε,y0} and integrate over Ω with respect to y:

∫_Ω ∫_Ω Ln(x, un(x)) dx f_{ε,y0}(y) dy ≥ ∫_Ω ∫_Ω (y·un(x) − Hn(y, x)) f_{ε,y0}(y) dx dy.
We can now pass to the limit (Hn being weakly convergent) and get

∫_Ω g(x) dx ≥ ∫_Ω ∫_Ω (y·u(x) − H(y, x)) f_{ε,y0}(y) dx dy.

Since H = H̃ almost everywhere,

∫_Ω g(x) dx ≥ ∫_Ω ∫_Ω (y·u(x) − H̃(y, x)) f_{ε,y0}(y) dx dy.

Now H̃ is continuous, so we can apply Lebesgue's differentiation theorem and let ε → 0. We get, for every y0 ∈ Ω,

∫_Ω g(x) dx ≥ ∫_Ω (y0·u(x) − H̃(y0, x)) dx.

Taking the supremum over y0,

∫_Ω g(x) dx ≥ sup_{y ∈ Ω} ∫_Ω (y·u(x) − H̃(y, x)) dx,

which is

∫_Ω g(x) dx ≥ ∫_Ω L_{H̃}(x, u(x)) dx.

Now, for any Ĥ ∈ H we have

∫_Ω Ln(x, un(x)) dx ≤ ∫_Ω L_{Ĥ}(x, un(x)) dx,

and taking the limit,

∫_Ω L_{H̃}(x, u(x)) dx ≤ ∫_Ω g(x) dx ≤ ∫_Ω L_{Ĥ}(x, u(x)) dx.

So we have found an optimal H̃, antisymmetric and convex-concave. If we now take S(x) ∈ ∂2L_{H̃}(x, u(x)), then S is a self-dual measure preserving map and u(x) = ∇1H̃(S(x), x) (see [5]).

Chapter 5 Case study

In this chapter we study a particular case: Ω = [0, 1] and u(x) = |x − 1/2|. The following computations are due to Bernard Maurey, of the Université Paris VII.

5.1 Finding S

We find S by maximizing ∫_Ω u(x)·S(x) dx. We look for S of the following type: S(x) = α − x if x ∈ [0, α], and S(x) = x for x ∈ [α, 1]. We find α = √2/2.

5.2 Finding H(x, Sx)

Set α = √2/2 and β := α − 1/2. When 0 ≤ x ≤ α, Sx = α − x. Set f(x) = H(x, α − x), so that

f′(x) = ∇1H(x, α − x) − ∇2H(x, α − x) = u(Sx) + u(x) = u(α − x) + u(x).

When 0 ≤ x ≤ β, we have u(α − x) = α − x − 1/2 and u(x) = 1/2 − x, so f′(x) = α − 2x. When β ≤ x ≤ 1/2, we also have β ≤ α − x ≤ 1/2, so f′(x) = 1/2 − (α − x) + 1/2 − x = 1 − α. Finally, when 1/2 ≤ x ≤ α, f′(x) = x − 1/2 + (1/2 − (α − x)) = 2x − α. Since f(α/2) = H(α/2, α/2) = 0, we deduce that f(x) = (1 − α)(x − α/2) = (1 − α)x − β/2 when β < x < 1/2, so f(β) = (1 − α)β − β/2 = β/2 − 1/2 + α/2 = α − 3/4 and f(1/2) = (1 − α)(1/2 − α/2) = −α + 3/4. Then, when 0 ≤ x ≤ β,

f(x) = f(β) + ∫_β^x (α − 2t) dt = −x² + αx + α/2 − 1/2,

and f(0) = −αβ. When 1/2 ≤ x ≤ α,

f(x) = f(1/2) + ∫_{1/2}^x (2t − α) dt = x² − αx + αβ.
When α ≤ x ≤ 1, we have Sx = x and f(x) = H(x, x) = 0.

5.3 Convexity inequalities

We now give lower and upper estimates on H obtained from convexity and concavity. When 0 < y0 < β, we have 1/2 < x0 < α, where x0 = α − y0, and H(x0, y0) = f(x0) = x0² − αx0 + αβ. For any x we get a lower estimate on H by taking the tangent at x0 to the convex function H(·, y0):

C0(x, y0) = H(x0, y0) + (x − x0)∇1H(x0, y0) = H(x0, y0) + (x − x0)u(y0).

We get

C0(x, y0) = x0² − αx0 + αβ + (x − x0)(1/2 − y0) = (α − y0)² − α(α − y0) + (x + y0 − α)(1/2 − y0) + αβ = −xy0 + (x + y0)/2 − β.

So C0(x, y) = −xy + (x + y)/2 − β for 0 ≤ y ≤ β.

When 1/2 < y0 < α, we have 0 < x0 < β and H(x0, y0) = f(x0) = −x0² + αx0 − αβ, and for any x

C0(x, y0) = −x0² + αx0 − αβ − (x − x0)(1/2 − y0).

We get C0(x, y) = xy − (x + y)/2 + β for 1/2 ≤ y ≤ α.

When β < y0 < 1/2, we have β < x0 < 1/2 and H(x0, y0) = f(x0) = (1 − α)x0 − β/2, and for any x

C0(x, y0) = (1 − α)x0 − β/2 + (x − x0)(1/2 − y0) = (1 − α)(α − y0) − β/2 + (x + y0 − α)(1/2 − y0) = −y0² − xy0 + x/2 + (α + β)y0 − 1/4.

So C0(x, y) = −y² − xy + x/2 + (α + β)y − 1/4 for β ≤ y ≤ 1/2.

For y0 ≥ α, we have x0 = y0, f(x0) = 0, and for any x, C0(x, y0) = (x − y0)(y0 − 1/2), so C0(x, y) = (x − y)(y − 1/2) for α ≤ y ≤ 1.

Let us now find the upper estimates C1(x, y), concave (in fact affine) in y. By construction C1(x, y) = −C0(y, x), so we get

C1(x, y) = xy − (x + y)/2 + β, 0 ≤ x ≤ β,
C1(x, y) = x² + xy − y/2 − (α + β)x + 1/4, β ≤ x ≤ 1/2,
C1(x, y) = −xy + (x + y)/2 − β, 1/2 ≤ x ≤ α,
C1(x, y) = x² − xy − x/2 + y/2, α ≤ x ≤ 1.

It appears that C0 ≤ C1, which is necessary if the problem has a solution using this transformation S. The solution H has to satisfy C0 ≤ H ≤ C1. In the square 0 ≤ x ≤ β, 1/2 ≤ y ≤ α, we have C0(x, y) = xy − (x + y)/2 + β = C1(x, y), which shows that we have found at least that part of the definition of H.
The functions f = C0 and f = C1 both satisfy the equations ∇1f(x, Sx) = u(Sx) and ∇2f(x, Sx) = −u(x). Since C0 ≤ C1, any function between C0 and C1 will still satisfy the equations. We can then try to regularize: if C1,0 is the convexification in x of C1, we have C1,0 ≤ C1 by definition and C0 ≤ C1,0 because C0 is convex in x. One could imagine a succession of regularizations in x (convexification) and in y (concavification), but it turns out that here the convexification C1,0 in x of C1 already appears to be convex-concave. (This has been checked only on a computer for now.) And it satisfies both equations. We finish by setting

H(x, y) = (C1,0(x, y) − C1,0(y, x))/2.

We first give what we found for C1,0. If α ≤ y ≤ 1 and 0 < x < x1(y) := √(y − β), then

C1,0(x, y) = −x1(y)² + y/2 + x(2x1(y) − y − 1/2) = −xy + 2x√(y − β) − x/2 − y/2 + β.

If β ≤ y ≤ α and x0(y) := α − √(2βy − β²) ≤ x ≤ α, then

C1,0(x, y) = −x0(y)² + 1/4 − y/2 + x(2x0(y) + y − α − β) = xy − 2x√(2βy − β²) + x/2 − (1/2 + 2β)y + 2α√(2βy − β²) + 1/2 − α.

In all other cases we simply set C1,0(x, y) = C1(x, y). C1,0 is the largest convex-concave function satisfying our equations.

5.4 Final expressions for H

Here are the expressions, assuming 0 ≤ x ≤ y ≤ 1.

(1) H(x, y) = −y²/2 + x√(y − β) − x/2 + β/2 if α ≤ y ≤ 1 and 0 ≤ x ≤ √(y − β);
(2) H(x, y) = x²/2 − y²/2 − x/2 + y/2 if α ≤ y ≤ 1 and √(y − β) ≤ x ≤ y;
(3) H(x, y) = xy − x/2 − y/2 + β if 0 ≤ x ≤ β and 1/2 ≤ y ≤ α;
(4) H(x, y) = x²/2 − (α − y)√(2βx − β²) − y/2 + α/2 − 1/8 if β ≤ x ≤ α + β − √(1 − α) and α − √(2βx − β²) ≤ y ≤ β/2 + (α − x)²/(2β);
(5) H(x, y) = (α − x)√(2βy − β²) − (α − y)√(2βx − β²) + α(x − y) if α + β − √(1 − α) ≤ y ≤ α and α − √(2βy − β²) ≤ x ≤ y;
(6) H(x, y) = −y²/2 + βy + α/2 − 3/8 if 0 ≤ x ≤ β and β ≤ y ≤ 1/2;
(7) H(x, y) = x²/2 − y²/2 − βx + βy if β ≤ x ≤ α + β − √(1 − α) and x ≤ y ≤ α − √(2βx − β²);
(8) H(x, y) = 0 if 0 ≤ y ≤ β and 0 ≤ x ≤ y.
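The value α = √2/2 found in Section 5.1 can be double-checked numerically. The sketch below is an illustration I added (not part of the thesis, and the discretization parameters are my own choices): it verifies that S_α is a measure preserving involution and that, within the one-parameter family of reflections S_a(x) = a − x on [0, a] (identity elsewhere), the functional J(a) = ∫ u(x) S_a(x) dx is maximized near a = √2/2.

```python
import numpy as np

# the case-study data on Omega = [0, 1]
def u(x):
    return np.abs(x - 0.5)

def trapezoid(y, x):
    # plain trapezoidal rule, to stay independent of numpy version details
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0)

def S(a, x):
    # candidate map: reflection x -> a - x on [0, a], identity on (a, 1]
    return np.where(x <= a, a - x, x)

def J(a, m=200_001):
    # J(a) = integral over [0, 1] of u(x) * S_a(x)
    x = np.linspace(0.0, 1.0, m)
    return trapezoid(u(x) * S(a, x), x)

alpha = np.sqrt(2.0) / 2.0

# S_alpha is an involution: S(S(x)) = x
xs = np.linspace(0.0, 1.0, 1001)
assert np.allclose(S(alpha, S(alpha, xs)), xs)

# ... and measure preserving: e.g. the integral of (Sx)^2 equals
# the integral of x^2 over [0, 1], which is 1/3
xf = np.linspace(0.0, 1.0, 200_001)
assert abs(trapezoid(S(alpha, xf) ** 2, xf) - 1.0 / 3.0) < 1e-5

# the maximizer of J over the family of reflections sits near sqrt(2)/2
grid = np.linspace(0.5, 1.0, 501)
a_star = grid[np.argmax([J(a) for a in grid])]
assert abs(a_star - alpha) < 5e-3
print("numerical argmax:", a_star, " alpha =", alpha)
```

Analytically, for a ≥ 1/2 one has J′(a) = ∫_0^a u(x) dx − a u(a), which vanishes exactly when a² = 1/2, consistent with the value α = √2/2 used throughout the chapter.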
Figure 5.1: Expression for H

Chapter 6 Conclusion

Even though it is still not clear whether the self-dual polar factorization can be written as a mass transportation problem, the geometrical approach in L2 via the projection onto the space S of all measure preserving involutions shows many similarities with Brenier's theory; for instance, the polar cone that we find is K̃ = {u ∈ L2(Ω) ; u = ∇1H(x, x) for some H ∈ H}. This approach does not give a proof of the self-dual polar decomposition, but it does give it a new geometrical meaning. The fact that we were able to extend the result to L2(Ω) supports the idea that the measure preserving involution found in the decomposition is the Hilbert projection of u onto S. The explicit computation of H and s for a simple function shows that even though the theorem is a powerful existence result, constructing the decomposition is difficult even for simple functions, because the proof of the decomposition is not constructive. The next step would be to build a numerical scheme to find H and S, in order to better understand the link between u and its decomposition.

Bibliography

[1] Y. Brenier. Polar factorization and monotone rearrangement of vector-valued functions. Comm. Pure Appl. Math., 1991.
[2] L. C. Evans. Partial differential equations and Monge-Kantorovich mass transfer. Lecture notes.
[3] W. Gangbo. An elementary proof of the polar factorization of vector-valued functions. Arch. Rational Mech. Anal., 1994.
[4] N. Ghoussoub. Self-dual partial differential systems and their variational principles. Springer, 2008.
[5] N. Ghoussoub and A. Moameni. A self-dual polar factorization for vector fields. Submitted, 2011.
[6] E. Krauss. A representation of arbitrary maximal monotone operators via subgradients of skew-symmetric saddle functions. Nonlinear Anal., 1985.
[7] R. T. Rockafellar. Convex Analysis. Princeton University Press, 1970.
[8] C. Villani. Topics in Optimal Transportation. AMS, 2003.
