Dynamic and Stochastic Propagation of Brenier's Optimal Mass Transport

by

ALISTAIR BARTON
B.Sc., McGill University, 2016

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Mathematics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

April 2018

© Alistair Barton 2018

Abstract

I present an analysis of how the mass transports that optimize the inner-product cost—considered by Y. Brenier—propagate in time along a given Lagrangian in both deterministic and stochastic settings. While for the minimizing transports one may easily obtain Hopf-Lax formulas on Wasserstein space by inf-convolution, this is not the case for the maximizing transports, which are sup-inf problems. In this case, we assume that the Lagrangian is jointly convex on phase space, which allows us to use Bolza-type duality, a well-known phenomenon in the deterministic case but, as far as I know, novel in the stochastic case. Hopf-Lax formulas help relate optimal ballistic transports to those associated with the dynamic fixed-end transports studied by Bernard-Buffoni and Fathi-Figalli in the deterministic case, and by Mikami-Thieullen in the stochastic setting.

Lay Summary

My work is in the mathematical field of optimal transportation, which examines how to efficiently change one distribution into another, given the cost of transporting one unit of material between different locations in the distributions—a common motivation is efficiently transporting mining material to construction sites. I analyze the combination of two such transportations in sequence, where the second transportation uses a path-dependent cost. This is a natural way to consider the evolution of the first transportation cost function (seen as a function of the initial distribution) along the paths of least resistance given by the second cost.
I also consider the case where the second transportation is stochastic (its paths diffuse in some sense), as well as a cost-maximization version of each case. My main contribution is to demonstrate various equivalent reformulations of this problem, including a method of converting certain stochastic minimizing problems to maximizing problems.

Preface

My thesis research is inspired by a paper authored by my supervising professor, Professor N. Ghoussoub, that considered the deterministic case [12]. He proposed that I study the stochastic case, and directed me to relevant papers. The research program was designed collaboratively, with frequent meetings to discuss possible strategies, techniques, and directions on which to focus.

While the deterministic case—Sections 2.1, 3.1, 4.1—is discussed in a paper on the arXiv authored by my supervisor (mentioned above), we also have a jointly authored paper discussing some of the stochastic content—Sections 2.2 and 4.2—of this thesis uploaded to the arXiv [2]. The two papers are combined in a recent submission to the European Journal of Applied Mathematics for publication later this year (2018) or early next year (2019).

The stochastic sections are entirely my work, excepting some editing and revision by my supervisor. As mentioned, the deterministic results contained in this thesis were previously obtained by my supervisor; however, in Section 2.1 most results are reframed using new techniques from the stochastic section. On the other hand, Sections 3.1 and 4.1 contain primarily my supervisor's work.

No ethics approval was required for this research.

Table of Contents

Abstract
Lay Summary
Preface
Table of Contents
1 Introduction
  1.1 Results
  1.2 Main Results
  1.3 Notation
2 Minimizing Ballistic Costs
  2.1 Deterministic Minimizing Cost
  2.2 Stochastic Minimizing Problem
3 Bolza Duality
  3.1 Deterministic Bolza Duality
  3.2 Stochastic Bolza duality and its applications
4 Maximizing Ballistic Costs
  4.1 Deterministic Maximizing Cost
  4.2 Stochastic Maximizing Cost
  4.3 Final Remarks
Bibliography

Chapter 1: Introduction

1.1 Results

The problem of optimal transportation, as stated by Kantorovich, consists of minimizing the transportation cost between two probability measures ν0, ν1 on Polish spaces X0, X1 for a given cost function c : X0 × X1 → R:

    inf { ∫_{X0×X1} c(x, y) dπ(x, y); π ∈ K(ν0, ν1) }   (1.1)

where K(ν0, ν1) is the set of transport plans: probability measures on X0 × X1 whose marginal on X0 (resp. X1) is ν0 (resp. ν1). Kantorovich also provides us the so-called dual formulation of this cost:

    sup { ∫_{X1} φ1(y) dν1(y) − ∫_{X0} φ0(x) dν0(x); (φ0, φ1) ∈ K(c) }.   (1.2)

Here K(c) is the set of pairs of functions (φ0, φ1) with φ0 ∈ L¹(X0, ν0) and φ1 ∈ L¹(X1, ν1) satisfying the inequality φ1(y) − φ0(x) ≤ c(x, y). By maximizing (resp. minimizing) φ1 (resp. φ0) for a given φ0 (resp. φ1), we may assume that the functions satisfy the relations

    φ1(y) = inf_{x∈X0} { c(x, y) + φ0(x) },   φ0(x) = sup_{y∈X1} { φ1(y) − c(x, y) }.
(1.3)

Since Monge first considered the problem using distance-based costs of the form c(x, y) = |x − y| ([14], [18], [8], [19], [20]), where X0 = X1 is a normed space, the structure of this problem has been studied under several cost functions. We review a couple of cost functions that are relevant for this thesis. Brenier [6] uncovered many connections between convex analysis and transportation when the cost is quadratic, c(x, y) = |x − y|², demonstrating that the optimal transportation plan is given by the gradient of a convex function: π_o = (Id × ∇φ)#ν0 (where f#ν indicates the push-forward of ν by the function f, i.e. the measure µ defined by µ(A) = ν(f⁻¹(A)) for all measurable sets A). Notably, this is the same optimal plan as for the cost function c(x, y) = −⟨x, y⟩ = ½(|x − y|² − |x|² − |y|²). This was followed by a large number of results addressing costs of the form f(x − y), where f is either a convex or a concave function [10].

Bernard and Buffoni [4] considered dynamic costs on a manifold M of the form

    c_T(x, y) := inf { ∫_0^T L(t, γ(t), γ̇(t)) dt; γ ∈ C¹([0, T], M), γ(0) = x, γ(T) = y }   (1.4)

where T is a fixed time and L : [0, T] × TM → R ∪ {+∞} is a Lagrangian convex in the second variable of the tangent bundle. This formulation encompasses cost functions of the form c(x, y) = f(|x − y|) for f convex, which correspond to L(t, x, p) = f(T|p|)/T. Fathi and Figalli [9] eventually dealt with the case where M is a non-compact Finsler manifold.

In this thesis I will consider "ballistic cost functions" b_T : M∗ × M → R ∪ {+∞} (where M∗ is dual to the Banach space M = Rd) derived by propagating the inner-product cost by a Lagrangian L:

    b_T(v, y) := inf { ⟨v, γ(0)⟩ + ∫_0^T L(t, γ(t), γ̇(t)) dt; γ ∈ C¹([0, T], M), γ(T) = y }.   (1.5)

This leads to the minimizing and maximizing transportation problems (written B_T and B̄_T respectively) that will be discussed in Sections 2.1 and 4.1:

    B_T(µ0, νT) := inf { ∫_{M∗×M} b_T(v, y) dπ(v, y); π ∈ K(µ0, νT) }   (1.6)

    B̄_T(µ0, νT) := sup { ∫_{M∗×M} b_T(v, y) dπ(v, y); π ∈ K(µ0, νT) }.
(1.7)

The latter cost is a sup-inf problem; however, it can be made into a sup-sup problem using the Bolza duality theory reviewed in Section 3.1. The ballistic cost may be considered a propagation of the Wasserstein-type cost considered by Brenier [6], as when T = 0 we recover the costs

    W(µ0, ν0) := inf { ∫_{M∗×M} ⟨v, y⟩ dπ(v, y); π ∈ K(µ0, ν0) }   (1.8)

    W̄(µ0, ν0) := sup { ∫_{M∗×M} ⟨v, y⟩ dπ(v, y); π ∈ K(µ0, ν0) }.   (1.9)

We will also consider a stochastic propagation of these cost functions. To do so, we define a stochastic version of the dynamic transportation problem between two random variables Y, Z : Ω → M to be

    c^s_T(Y, Z) := inf { E[∫_0^T L(t, Xt, βt) dt]; (X, β) ∈ A, X0 = Y, XT = Z }   (1.10)

where A denotes the set of stochastic processes X with previsible drift β such that Xt = X0 + ∫_0^t βs ds + Wt, where Wt is σ(Xs; 0 ≤ s ≤ t)-Brownian motion. The stochastic transportation cost between two measures can then be given by

    C^s_T(ν0, νT) := inf { c^s_T(Y, Z); Y ∼ ν0, Z ∼ νT }
                  = inf { E[∫_0^T L(t, Xt, βt) dt]; (X, β) ∈ A, X0 ∼ ν0, XT ∼ νT }   (1.11)

as considered by Mikami and Thieullen [13], where X ∼ ν indicates that the random variable X has law ν.

Similarly, we define the ballistic transportation cost between two random variables as

    b^s_T(V, Y) := inf { E[⟨V, X0⟩ + ∫_0^T L(t, Xt, βt) dt]; (X, β) ∈ A, XT = Y }   (1.12)

allowing us to define the stochastic versions of the ballistic cost as

    B^s_T(µ0, νT) := inf { b^s_T(V, Z); V ∼ µ0, Z ∼ νT }
                  = inf { E[⟨V, X0⟩ + ∫_0^T L(t, Xt, βt) dt]; (X, β) ∈ A, V ∼ µ0, XT ∼ νT }   (1.13)

and

    B̄^s_T(µ0, νT) := sup { b^s_T(V, Z); V ∼ µ0, Z ∼ νT },   (1.14)

which will be analyzed in Sections 2.2 and 4.2 respectively. Once more the maximizing cost is a sup-inf problem, which can be simplified using a stochastic analogue of Bolza duality, as shown in Section 3.2.

1.2 Main Results

The chief results of the analysis are duality formulae that allow us to reformulate the ballistic costs in a manner similar to (1.2). We will show that
the role of K(c) will be played by solutions of the forward Hamilton-Jacobi equation

    ∂tφ + H(t, x, ∇xφ) = 0 on [0, T] × M,   φ(0, x) = f(x),   (1.15)

and the backward Hamilton-Jacobi equation

    ∂tφ + H(t, x, ∇xφ) = 0 on [0, T] × M,   φ(T, x) = f(x),   (1.16)

where the Hamiltonian on [0, T] × M × M∗ is defined by

    H(t, x, q) = sup_{p∈M} { ⟨p, q⟩ − L(t, x, p) }.

In particular, we concern ourselves with the variational solutions to the above equations:

    Φ^t_{f,+}(x) := Φ_{f,+}(t, x) = inf { f(γ(0)) + ∫_0^t L(s, γ(s), γ̇(s)) ds; γ ∈ C¹([0, T], M), γ(t) = x }   (1.17)

and

    Φ^t_{f,−}(x) := Φ_{f,−}(t, x) = sup { f(γ(T)) − ∫_t^T L(s, γ(s), γ̇(s)) ds; γ ∈ C¹([0, T], M), γ(t) = x }   (1.18)

respectively. We will show that we can describe B_T(µ0, νT) by the forward and backward duality formulae

    B_T(µ0, νT) = sup { ∫_M Φ_{f∗,+}(T, x) dνT(x) + ∫_{M∗} f(v) dµ0(v); f concave and in Lip(M∗) }
                = sup { ∫_M g(x) dνT(x) + ∫_{M∗} (Φ^0_{g,−})∗(v) dµ0(v); g ∈ Lip(M) }   (1.19)

where h∗ is the concave Legendre transform of h:

    h∗(v) := inf_{x∈M} { ⟨v, x⟩ − h(x) }.   (1.20)

As to the question of attainment, we use a result by Fathi-Figalli [9] to show that if L is a Tonelli Lagrangian, and if µ0 is absolutely continuous with respect to Lebesgue measure, then there exists a probability measure π0 on M∗ × M and a concave function k : M → R such that

    B_T(µ0, νT) = ∫_{M∗×M} b_T(v, x) dπ0,   (1.21)

and π0 is supported on the graph of the possibly set-valued map v ↦ π∗φ^H_T(∇k∗(v), v), with π∗ : M × M∗ → M being the canonical projection and (x, v) ↦ φ^H_t(x, v) the corresponding Hamiltonian flow.

These results rely on an interpolation formulation of B_T(µ0, νT):

    B_T(µ0, νT) = inf_{ν∈P(M)} { W(µ0, ν) + C_T(ν, νT) }.   (1.22)

The interpolation formula can be seen as an extension of the Hopf-Lax formulas from state space to Wasserstein space. Indeed, for any (initial) function g, the associated value function Φ_{g,+}(t, x) can be written as

    Φ_{g,+}(t, x) = inf { g(y) + c_t(y, x); y ∈ M }.
(1.23)

In the case where the Lagrangian L(x, p) = L0(p) is only a function of p, and if H0 is the associated Hamiltonian, then c_t(y, x) = t L0((x − y)/t) and (1.23) is nothing but the Hopf-Lax formula used to generate solutions of the corresponding Hamilton-Jacobi equations. When g is the linear functional g(x) = ⟨v, x⟩, then b(t, v, x) is itself a solution to the Hamilton-Jacobi equation, since

    b(t, v, x) = inf { ⟨v, y⟩ + c_t(y, x); y ∈ M }.   (1.24)

In other words, (1.22) can now be seen as an extension of (1.24) to the space of probability measures, where the Wasserstein cost fills the role of the scalar product. This interpolative result is extended to the rest of the costs in their respective sections.

Stochastic Minimizing Cost. The stochastic problem in Section 2.2 presents two differences. Firstly, it cannot be formulated as a classical transportation problem (1.1), hence there is no Monge-Kantorovich duality; secondly, the irreversibility of stochastic processes means we only have the one duality formula

    B^s_T(µ0, νT) = sup { ∫_M g(x) dνT(x) + ∫_{M∗} (Ψ^0_{g,−})∗(v) dµ0(v); g ∈ Lip(M) },   (1.25)

where this time Ψ_{g,−} is the solution to the backward Hamilton-Jacobi-Bellman equation (1.26):

    ∂tψ + ½Δψ + H(t, x, ∇xψ) = 0 on [0, T] × M,   ψ(T, x) = g(x),   (1.26)

whose formal variational solutions are given by the formula

    Ψ_{g,−}(t, x) = sup_{(X,β)∈A} { E[ g(XT) − ∫_t^T L(s, Xs, βs) ds | Xt = x ] }.   (1.27)

Bolza Duality. In order to deal with the maximization problems B̄_T(µ0, νT) and B̄^s_T(µ0, νT), we need to use the Bolza-type duality discussed in Chapter 3 to convert the sup-inf problem to a concave maximization problem.
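The Hopf-Lax representation (1.24) can be sanity-checked numerically in the simplest setting. The following is a minimal sketch, not part of the thesis, assuming the quadratic Lagrangian L0(p) = |p|²/2 (so c_t(y, x) = |x − y|²/(2t)) in one dimension; the grid and domain are arbitrary choices.

```python
# Numerical check of the Hopf-Lax formula (1.24) for L0(p) = p^2/2,
# so that c_t(y, x) = (x - y)^2 / (2t).  With the linear datum
# g(y) = v*y one has, in closed form,
#   b(t, v, x) = inf_y { v*y + (x - y)^2/(2t) } = v*x - t*v**2/2,
# which indeed solves d_t b + (d_x b)^2/2 = 0.

def hopf_lax(g, t, x, ys):
    """Evaluate inf_y { g(y) + (x - y)^2/(2t) } over a grid of y's."""
    return min(g(y) + (x - y) ** 2 / (2 * t) for y in ys)

v, t, x = 0.7, 2.0, 1.3
ys = [-10 + 0.001 * k for k in range(20001)]   # y-grid on [-10, 10]
numeric = hopf_lax(lambda y: v * y, t, x, ys)
closed_form = v * x - t * v ** 2 / 2
print(abs(numeric - closed_form))              # small grid error
```

The infimum is attained at y = x − tv, consistent with the straight-line minimizers of the action.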
For that, we shall assume that the Lagrangian L is jointly convex in both variables. We then consider the dual Lagrangian L̃ defined on M∗ × M∗ by

    L̃(t, v, q) := L∗(t, q, v) = sup { ⟨v, y⟩ + ⟨p, q⟩ − L(t, y, p); (y, p) ∈ M × M },

and the corresponding fixed-end costs on M∗ × M∗,

    c̃_T(u, v) := inf { ∫_0^T L̃(t, γ(t), γ̇(t)) dt; γ ∈ C¹([0, T], M∗), γ(0) = u, γ(T) = v },   (1.28)

and its associated transport

    C̃_T(µ0, µT) := inf { ∫_{M∗×M∗} c̃_T(x, y) dπ; π ∈ K(µ0, µT) }.   (1.29)

We then recall the deterministic Bolza duality, and establish a new stochastic Bolza duality, allowing us to write the maximizing costs as

    B̄_T(µ0, νT) = sup { ∫_{M∗×M} b̃_T(v, x) dπ(v, x); π ∈ K(µ0, νT) }   (1.30)

    B̄^s_T(µ0, νT) = sup { E[⟨VT, X⟩ − ∫_0^T L̃(t, Vt, βt) dt]; (V, β) ∈ A, V0 ∼ µ0, X ∼ νT }   (1.31)

respectively, where b̃_T(v, x) := sup { ⟨u, x⟩ − c̃_T(v, u); u ∈ M∗ }.

Maximizing Deterministic Cost. We use this Bolza theory to establish the following duality result for B̄_T(µ0, νT) in Section 4.1:

    B̄_T(µ0, νT) = inf { ∫_M g(x) dνT(x) + ∫_{M∗} (Φ̃^0_{g,−})∗(v) dµ0(v); g convex on M },   (1.32)

where g∗ is the convex Legendre transform of g, i.e., g∗(v) = sup { ⟨v, x⟩ − g(x); x ∈ M }, and Φ̃_{k,−} is a solution of the following dual backward Hamilton-Jacobi equation:

    ∂tφ − H(t, ∇vφ, v) = 0 on [0, T] × M∗,   φ(T, v) = k(v),   (1.33)

whose variational solution is given by

    Φ̃_{k,−}(t, v) = sup { k(γ(T)) − ∫_t^T L̃(s, γ(s), γ̇(s)) ds; γ ∈ C¹([0, T], M∗), γ(t) = v }.   (1.34)

Maximizing Stochastic Cost. We follow this by establishing the duality formula for the stochastic version:

    B̄^s_T(µ0, νT) = inf { ∫_M g(x) dνT + ∫_{M∗} (Ψ̃^0_{g∗,−})∗(v) dµ0; g ∈ C∞_db(M) },   (1.35)

where Ψ̃_k solves the Hamilton-Jacobi-Bellman equation

    ∂tψ + ½Δψ − H(t, ∇vψ, v) = 0 on [0, T] × M∗,   ψ(T, v) = k(v),   (1.36)

whose formal variational solutions are given by the formula

    Ψ̃_{k,−}(t, v) = sup_{(X,β)∈A} { E[ k(XT) − ∫_t^T L̃(s, Xs, βs) ds | Xt = v ] }.   (1.37)

1.3 Notation

We will use the standard notation (Ω, P) to denote a probability space Ω equipped with probability measure P.
We will for convenience denote the probability of an event such as {X < 3} := {ω ∈ Ω; X(ω) < 3} by P(X < 3) := P({X < 3}).

I will use P(M) to indicate the space of probability measures on a space M, situated within the larger space of finite measures M(M). When M = Rd, I will use P1(M) for the subset of probability measures with finite first moment (i.e., satisfying ∫_M |x| dν(x) = E_{X∼ν}[|X|] < ∞, where E_{X∼ν}[·] denotes expectation with X ∼ ν), which is likewise situated within M1(M) := {ν; ∫_M (1 + |x|) dν(x) < ∞}.

The dual space of M1(M) can be identified with the space of uniformly Lipschitz functions Lip(M) under the pairing ⟨f, ν⟩ = ∫ f(x) dν(x). However, we will often be forced to work with the smooth subset of Lipschitz functions, which I will denote C∞_db(M) := C∞(M) ∩ Lip(M) ('db' stands for derivative-bounded functions).

When operating on a vector space V, I will use ν to represent measures on the primal space (i.e., ν ∈ P(V)) and µ to represent measures on the dual space (i.e., µ ∈ P(V∗)) for convenience.

Chapter 2: Minimizing Ballistic Costs

2.1 Deterministic Minimizing Cost

In this section we deal with the standard transportation problem associated to the cost b_T(v, x). We shall assume that the Lagrangian L satisfies the following assumption:

(A0) The Lagrangian (t, x, v) ↦ L(t, x, v) is bounded below, and for all (t, x) ∈ [0, T] × M, v ↦ L(t, x, v) is convex and coercive in the sense that there is a δ > 1 such that

    lim_{|v|→∞} L(t, x, v)/|v|^δ = +∞.   (2.1)

This coercivity condition is inherited by the transportation cost C_T(ν, ·), allowing the attainment of a minimizer in the following theorem.

Theorem 1. Assume that L satisfies (A0) and let µ0 (resp. νT) be a probability measure on M∗ (resp. M). Then the following interpolation formula holds:

    B_T(µ0, νT) = inf { W(µ0, ν) + C_T(ν, νT); ν ∈ P1(M) }.
(2.2)

In the case where νT has finite first moment, the infimum is attained at some probability measure ν0 on M, and the initial Kantorovich potential for C_T(ν0, νT) is concave.

Proof: To prove the formula it suffices to note that

    inf { W(µ0, ν) + C_T(ν, νT) }
      = inf_{ν∈P(M)} { ∫ ⟨v, x⟩ dπ_W(v, x) + ∫ c_T(x, y) dπ_C(x, y); π_W ∈ K(µ0, ν), π_C ∈ K(ν, νT) }
      = inf { ∫ (⟨v, x⟩ + c_T(x, y)) dπ(v, x, y); π1 = µ0, π3 = νT } ≥ B_T(µ0, νT),

where πi denotes the i-th marginal of π. For the reverse inequality, given ε > 0 we may use a selection theorem to find a measurable function y : M∗ × M → M that satisfies ⟨v, y(v, x)⟩ + c_T(y(v, x), x) − ε < b_T(v, x) (e.g. [21]). Fixing an optimal π ∈ K(µ0, νT) and letting π̄ := (Id × Id × y)#(π) ∈ P(M∗ × M × M),

    B_T(µ0, νT) = ∫ b_T(v, x) dπ(v, x) ≥ ∫ (⟨v, y⟩ + c_T(y, x)) dπ̄(v, x, y) − ε;

since ε > 0 is arbitrary, the reverse inequality follows.

To show that the minimizer is achieved, we need to show that C_T satisfies a coercivity condition on the space P1(M) of probabilities on M with finite first moments. For that, we now prove that for any fixed νT ∈ P1(M) and any positive constant N > 0, the set of measures ν ∈ P1(M) satisfying

    C_T(ν, νT) ≤ N ∫_M |x| dν(x)   (2.3)

is tight. Indeed, from (A0) there exist constants A, K such that c_T(x, y) > A|(x − y)/T|^δ − K. Hence for any optimal transport plan π ∈ K(ν, νT),

    C_T(ν, νT) ≥ (A/T^δ) ∫_{M×M} ||x| − |y||^δ dπ(x, y) − K.   (2.4)

We transfer the problem to R+ by using the push-forward π̄ := (|·| × |·|)#π to obtain

    ∫_{M²} ||x| − |y||^δ dπ(x, y) = ∫_{R²+} |x − y|^δ dπ̄(x, y).   (2.5)

We can obtain a lower estimate for this by minimizing over transport plans sharing π̄'s marginals (i.e., γ ∈ K(ν̄, ν̄T), where ν̄ := |·|#ν and ν̄T := |·|#νT). This is a well-known optimal transport problem, whose optimal plan is the monotone Hoeffding-Fréchet coupling [G_ν̄ × G_ν̄T]#λ[0,1], where G_ν(t) := inf{z ∈ R; ν({x ≤ z}) ≥ t} is the quantile function associated with the measure ν [3]. Thus the optimal plan maps each quantile in one measure to the corresponding quantile in the other.
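The Hoeffding-Fréchet fact used above — that on R, matching quantiles is optimal for convex costs — can be illustrated for empirical measures, where the quantile coupling is simply the sorted pairing. An illustrative brute-force sketch (not from the thesis; sample points and δ = 2 are arbitrary choices):

```python
# For empirical measures on R with cost |x - y|^delta (delta >= 1, convex),
# the monotone (quantile-matching) coupling is optimal: pairing the sorted
# samples minimizes the transport cost over all couplings of two n-point
# uniform measures, which here are the n! permutation pairings.

from itertools import permutations

xs = [0.3, -1.2, 2.5, 0.9]          # sample of nu-bar
ys = [1.1, -0.4, 0.2, 3.0]          # sample of nu-bar_T
delta = 2

def cost(pairing):
    return sum(abs(x - y) ** delta for x, y in pairing)

monotone = cost(zip(sorted(xs), sorted(ys)))       # quantile matching
best = min(cost(zip(xs, [ys[i] for i in perm]))
           for perm in permutations(range(len(ys))))
print(monotone, best)                              # the two costs agree
```

Enumerating all 4! couplings confirms that no pairing beats the monotone one, which is what licenses the reduction to the quantile coupling in (2.6).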
Substituting this into the integral and applying Jensen's inequality:

    ∫_{R²+} |x − y|^δ dπ̄(x, y) ≥ ∫_{R²+} |x − y|^δ d[(G_ν̄ × G_ν̄T)#λ[0,1]](x, y) ≥ [ ∫_M |x| dν(x) − b(νT) ]^δ,   (2.6)

where b(νT) := ∫_M |x| dνT, and we assume ν ∈ T_{ε,R} := {ν; ν(B(R, 0)^c) > ε} for some R > b(νT)/ε. It then suffices to find R such that, for all ν ∈ T_{ε,R},

    (A/T^δ) [ (∫_M |x| dν(x) − b(νT))^δ − K ] ≥ N ∫_M |x| dν(x).

Letting I_ν(R) := ∫_{B(R,0)^c} |x| dν(x) (≥ εR for ν ∈ T_{ε,R}), we can weaken the above statement to the condition that

    (A/T^δ) [ (I_ν(R) − b(νT))^δ − K ] ≥ N (I_ν(R) + R(1 − ε))

for ν ∈ T_{ε,R}. Using the fact that I_ν(R) ≥ εR for ν ∈ T_{ε,R}, we can say that this condition is satisfied if R is large enough that

    (A/T^δ) [ (εR)^{1−1/δ} − b(νT)(εR)^{−1/δ} − K/R ] > N/ε,

in addition to our earlier condition that R > b(νT)/ε.

To show that the minimizer is achieved, fix ν0 and note that by coercivity the set of probability measures ν such that

    −b(µ0) ∫ |x| dν + C_T(ν, νT) ≤ W(µ0, ν0) + C_T(ν0, νT)   (2.7)

is tight. Since −b(µ0) ∫ |x| dν + C_T(ν, νT) ≤ W(µ0, ν) + C_T(ν, νT), this implies that the set of measures ν ∈ P1(M) such that

    W(µ0, ν) + C_T(ν, νT) ≤ W(µ0, ν0) + C_T(ν0, νT)   (2.8)

is also tight. Combining this result with the lower semi-continuity of ν ↦ W(µ0, ν) + C_T(ν, νT) guarantees the existence of a minimizing measure.

Remark: Note that (2.6) indicates that when ν1 ∈ P1(M) and ν0 ∈ P(M) \ P1(M), then C_T(ν0, ν1) = C_T(ν1, ν0) = ∞.

Theorem 2. Assume that L satisfies (A0) and let µ0 (resp. νT) be a probability measure on M∗ (resp. M) with finite first moment.

1. If µ0 has compact support, then we have the following duality formula:

    B_T(µ0, νT) = sup { ∫_M g(x) dνT(x) + ∫_{M∗} (Φ^0_{g,−})∗(v) dµ0(v); g ∈ Lip(M) }.   (2.9)

2. If the optimal interpolant ν0 has compact support, then

    B_T(µ0, νT) = sup { ∫_M Φ^T_{f∗,+}(x) dνT(x) + ∫_{M∗} f(v) dµ0(v); f concave and in Lip(M∗) }.   (2.10)
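The concave Legendre transform (1.20) entering these formulae has an inversion property that is used later, in the proof of Theorem 3: for smooth strictly concave k, ∇k∗ inverts ∇k. A numeric sketch (not from the thesis; the test function and grids are arbitrary choices):

```python
# Check numerically that grad k (grad k*(v)) = v for the concave transform
# k*(v) = inf_x { v*x - k(x) } of (1.20), with the arbitrary strictly
# concave choice k(x) = -x^4/4 - x^2/2 in one dimension.

def k(x):
    return -x ** 4 / 4 - x ** 2 / 2

def grad_k(x):
    return -x ** 3 - x

xs = [-3 + 0.001 * i for i in range(6001)]         # x-grid on [-3, 3]

def k_star(v):
    return min(v * x - k(x) for x in xs)           # brute-force infimum

def grad_k_star(v, dv=1e-3):
    return (k_star(v + dv) - k_star(v - dv)) / (2 * dv)   # central difference

for v in [-1.5, -0.2, 0.4, 2.0]:
    assert abs(grad_k(grad_k_star(v)) - v) < 1e-2  # grad k o grad k* = id
print("inversion property verified on sample points")
```

This is the discrete analogue of the identity ∇k ∘ ∇k∗(v) = v invoked at the end of the proof of Theorem 3.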
We shall need the following identifications of Legendre transforms in the Banach space M1(Rn) of measures ν on Rn such that ∫_{Rn} (1 + |x|) dν < ∞, whose dual space under weak convergence is the space of Lipschitz functions Lip(Rn).

Lemma 1. a) For µ0 ∈ P(Rn) with compact support, define W_{µ0} : M1(Rn) → R ∪ {∞} to be

    W_{µ0}(ν) := W(µ0, ν) if ν ∈ P1(M), and +∞ otherwise.

Then the convex Legendre transform of W_{µ0} is given for f ∈ Lip(Rn) by W∗_{µ0}(f) = −∫ f∗ dµ0.

b) For ν0 ∈ P(Rn), define the function C_{ν0} : M1(Rn) → R ∪ {∞} to be

    C_{ν0}(ν) := C_T(ν0, ν) if ν ∈ P1(Rn), and +∞ otherwise.   (2.11)

Then the convex Legendre transform of C_{ν0} is given for f ∈ Lip(Rn) by C∗_{ν0}(f) = ∫ Φ_{f,−}(0, x) dν0(x), where Φ_{f,−}(t, x) is the solution to the backward Hamilton-Jacobi equation (1.16) with final condition Φ_{f,−}(T, x) = f(x).

Proof: Both statements follow from Kantorovich duality. Indeed, both functions are convex and weak∗-lower semi-continuous on M1(Rn). Since µ0 has compact support, Brenier's duality yields

    W_{µ0}(ν) = sup_{g∈Lip(M)} { ∫ g dν + ∫ g∗ dµ0 }.

Note that this holds for all ν ∈ M1(M) (not merely the ν ∈ P1(M) with which the original theorem concerns itself): if ν(Rn) > 1, we can apply the operation g ↦ g + c, resulting in g∗ ↦ g∗ − c, with c arbitrarily large; likewise we may choose c arbitrarily negative if ν(Rn) < 1, so that in either case the supremum is +∞. Compact support is necessary, as Brenier's theorem implies that a non-compactly supported µ0 will require g ∉ Lip(M). We then have

    W∗_{µ0}(f) = sup_{ν∈M1(M)} inf_{g∈Lip(M)} { ∫ f dν − ∫ g dν − ∫ g∗ dµ0 }.   (2.12)

Note that the functional g ↦ −∫ g∗ dµ0 = ∫ (−g)∗ dµ̂0(v) (where dµ̂0(v) := dµ0(−v)) is convex and lower semicontinuous, and we may therefore apply the von Neumann minimax theorem, as the expression is linear in ν and convex in g. We obtain

    W∗_{µ0}(f) = inf_{g∈Lip(M)} sup_{ν∈M1(M)} { ∫ f dν − ∫ g dν − ∫ g∗ dµ0 }.
(2.13)

The infimum must occur at g = f, since otherwise the supremum in ν is +∞; this yields statement a).

The same proof applies to C_{ν0}, in view of the duality formula of Bernard and Buffoni [4, Proposition 21]:

    C_{ν0}(ν) = sup_{g∈Lip(M)} { ∫ g dν − ∫ Φ_{g,−}(0, ·) dν0 }.   (2.14)

As for W_{µ0}(ν), this equation holds for all ν ∈ M1(M). We may again apply the minimax theorem, as the expression is linear in ν and convex in g.

Proof of Theorem 2: We first note that Kantorovich duality yields that ν ↦ B_T(µ0, ν) is weak∗-lower semi-continuous on P1(M) for all µ0 ∈ P1(M∗), and that (µ0, νT) ↦ B_T(µ0, νT) is jointly convex. Let now B_{µ0}(ν) := B_T(µ0, ν) if ν ∈ P1(M) and +∞ otherwise. It follows that

    B_{µ0}(ν) = B∗∗_{µ0}(ν) := sup { ∫_{Rn} f dν − B∗_{µ0}(f); f ∈ Lip(Rn) }.   (2.15)

Now use the Hopf-Lax (interpolation) formula established above to write

    B∗_{µ0}(f) := sup { ∫_M f dν − B_{µ0}(ν); ν ∈ P1(M) }
               = sup { ∫_M f dν − W(µ0, ν′) − C_T(ν′, ν); ν, ν′ ∈ P1(M) }
               = sup { ∫_M Φ_{f,−}(0, ·) dν′ − W(µ0, ν′); ν′ ∈ P1(M) }
               = −∫_M [Φ_{f,−}(0, ·)]∗ dµ0.   (2.16)

This completes the proof of the first duality formula. The second follows in the same way by simply varying the initial measure as opposed to the final measure in B_T(µ, ν); the concavity of f follows from the Kantorovich dual condition (1.3) and the linearity of b_T in v.

We now turn to the structure of the optimal transport plan, for which we shall consider the Tonelli Lagrangians studied in the compact case by Bernard-Buffoni [4], and by Fathi-Figalli [9] in the case of a Finsler manifold.

Definition 2.
We shall say that L is a Tonelli Lagrangian on M × M if it is C² and satisfies (A0), with the additional requirement that the function v ↦ L(x, v) is strictly convex on M.

If L is a Tonelli Lagrangian, the Hamiltonian H : M × M∗ → R is then C¹, and the Hamiltonian vector field X_H on M × M∗ is X_H(x, v) = (∂H/∂v(x, v), −∂H/∂x(x, v)), whose flow φ^H_t solves the associated system of ODEs

    ẋ = ∂H/∂v(x, v),
    v̇ = −∂H/∂x(x, v).   (2.17)

The connection between minimizers γ : [0, T] → M of c_T(x, y) and solutions of (2.17) is as follows. If we write x(t) = γ(t) and v(t) = ∂L/∂p(γ(t), γ̇(t)), then x(t) and v(t) are C¹ with ẋ(t) = γ̇(t), and the Euler-Lagrange equation yields v̇(t) = ∂L/∂x(γ(t), γ̇(t)), from which it follows that t ↦ (x(t), v(t)) satisfies (2.17). Note also that since L is a Tonelli Lagrangian, the Hamiltonian H is actually C², and the vector field X_H is C¹. It therefore defines a (partial) C¹ flow φ^H_t.

There is also a (partial) C¹ flow φ^L_t on M × M such that every speed curve of an L-minimizer is part of an orbit of φ^L_t. This flow, called the Euler-Lagrange flow, is defined by φ^L_t = 𝓛⁻¹ ∘ φ^H_t ∘ 𝓛, where 𝓛 : M × M → M × M∗ is the global Legendre transform (x, p) ↦ (x, ∂L/∂p(x, p)). Note that 𝓛 is a homeomorphism onto its image whenever L is a Tonelli Lagrangian.

Theorem 3. In addition to (A0), assume that L is a Tonelli Lagrangian and that µ0 is absolutely continuous with respect to Lebesgue measure. Then there exists a concave function k : M → R such that

    B_T(µ0, νT) = ∫_{M∗} b_T(v, S_T ∘ ∇k∗(v)) dµ0(v),   (2.18)

where S_T(y) = π∗φ^H_T(y, ∇k(y)), with π∗ : M × M∗ → M the canonical projection and φ^H_t the Hamiltonian flow associated to L. In other words, an optimal map for B_T(µ0, νT) is given by v ↦ π∗φ^H_T(∇k∗(v), v).

Proof: We start with the interpolation formula, B_T(µ0, νT) = C_T(ν0, νT) + W(µ0, ν0) for some probability measure ν0.
By our duality result and Brenier's theorem, there exists a concave function k : M → R and another function h : M → R such that (∇k∗)#µ0 = ν0,

    W(µ0, ν0) = ∫_{M∗} ⟨∇k∗(v), v⟩ dµ0(v),

and

    C_T(ν0, νT) = ∫_M h(x) dνT(x) − ∫_M k(y) dν0(y)

(in fact h(x) = Φ^T_{k,+}(x)). Now use a result of Fathi-Figalli [9] to write C_T(ν0, νT) = ∫_M c_T(y, S_T y) dν0(y), where S_T(y) = π∗φ^H_T(y, d̃_y k). Note that

    B_T(µ0, νT) ≤ ∫_{M∗} b_T(v, S_T ∘ ∇k∗(v)) dµ0(v),   (2.19)

since (∇k∗)#µ0 = ν0 and (S_T)#ν0 = νT, and therefore (I × S_T ∘ ∇k∗)#µ0 belongs to K(µ0, νT).

On the other hand, since b_T(v, x) ≤ c_T(∇k∗(v), x) + ⟨∇k∗(v), v⟩ for every v ∈ M∗, we have

    B_T(µ0, νT) ≤ ∫_{M∗} b_T(v, S_T ∘ ∇k∗(v)) dµ0(v)
                ≤ ∫_{M∗} { c_T(∇k∗(v), S_T ∘ ∇k∗(v)) + ⟨∇k∗(v), v⟩ } dµ0(v)
                = ∫_M c_T(y, S_T y) dν0(y) + ∫_{M∗} ⟨∇k∗(v), v⟩ dµ0(v)
                = C_T(ν0, νT) + W(µ0, ν0)
                = B_T(µ0, νT).

It follows that

    B_T(µ0, νT) = ∫_{M∗} b_T(v, S_T ∘ ∇k∗(v)) dµ0(v) = ∫_{M∗} b_T(v, π∗φ^H_T(∇k∗(v), d̃_{∇k∗(v)}k)) dµ0(v).

Since k is concave, we have d̃_x k = ∇k(x), hence d̃_{∇k∗(v)}k = ∇k ∘ ∇k∗(v) = v, which yields our claim that B_T(µ0, νT) = ∫_{M∗} b_T(v, π∗φ^H_T(∇k∗(v), v)) dµ0(v).

2.2 Stochastic Minimizing Problem

We now turn to the stochastic version of the minimizing cost. The methods of proof are generally similar to those for the deterministic cost; however, there are two complications. The first is that stochastic mass transport does not fit into the framework of cost-minimizing transports, hence Kantorovich duality is not readily available. The second is that stochastic processes are not reversible, and therefore there is only one direction to the transport, hence only one duality formula. In order to deal with the first complication, we rely on the results of Mikami-Thieullen [13] and therefore use the same assumptions that they imposed on the Lagrangian, namely
(A1) L(t, x, v) is continuous, convex in v, and uniformly bounded below by a convex function ℓ(v) that is 2-coercive in the sense that lim_{|v|→∞} ℓ(v)/|v|² > 0.

(A2) (t, x) ↦ log(1 + L(t, x, u)) is uniformly continuous, in the sense that

    Δ_L(ε1, ε2) := sup_{u∈M∗} { (1 + L(t, x, u))/(1 + L(s, y, u)) − 1; |t − s| < ε1, |x − y| < ε2 } → 0 as ε1, ε2 → 0.

(A3) The following boundedness conditions:
(i) sup_{t,x} L(t, x, 0) < ∞.
(ii) |∇_x L(t, x, v)|/(1 + L(t, x, v)) is bounded.
(iii) sup { |∇_v L(t, x, u)|; |u| ≤ R } < ∞ for all R.

We will use the notation X = (X0, β, σ) to refer to an Itô process Xt of the form

    Xt = X0 + ∫_0^t βs ds + ∫_0^t σs dWs.   (2.20)

We will use the notation A^{νT}_{ν0} to refer to the set of stochastic processes X = (X0, βt, Id) with X0 ∼ ν0 and XT ∼ νT. Notably, (A1) implies that E[L(t, Xt, βt)] = ∞ if βt ∉ L²(P).

Our main result is the stochastic counterpart of Theorem 2:

Theorem 4. If L satisfies the assumptions (A1), (A2), and (A3), then:

1. For any given probabilities µ0 ∈ P(M∗) and νT ∈ P(M), we have

    B^s_T(µ0, νT) = inf { W(µ0, ν) + C^s_T(ν, νT); ν ∈ P1(M) }.   (2.21)

Furthermore, this infimum is attained whenever µ0 ∈ P1(M∗) and νT ∈ P1(M).

2. If νT ∈ P1(M) and µ0 ∈ P1(M∗) are such that B^s_T(µ0, νT) < ∞, and if µ0 has compact support, then

    B^s_T(µ0, νT) = sup { ∫_M f(x) dνT(x) + ∫_{M∗} (Ψ^0_{f,−})∗(v) dµ0(v); f convex and in Lip(M) },   (2.22)

where Ψ_{f,−} is the solution to the Hamilton-Jacobi-Bellman equation

    ∂ψ/∂t + ½Δψ(t, x) + H(t, x, ∇ψ) = 0,   ψ(T, x) = f(x).   (HJB)

Proof: 1) First, expand W(µ0, ν) and C^s_T(ν, νT) in the interpolation formula to obtain

    inf { W(µ0, ν) + C^s_T(ν, νT); ν ∈ P1(M) }
      = inf_{ν∈P1(M)} { E[⟨V, X0⟩ + ∫_0^T L(t, Xt, βt) dt]; V ∼ µ0, X0 ∼ ν, (X, β) ∈ A^{νT}_ν }
      ≤ B^s_T(µ0, νT).

To obtain the reverse inequality, let νn be a sequence of measures approximating the infimum in (2.21). Then for each νn there exists a stochastic process (Zn, βn) ∈ A^{νT}_{νn} such that

    E[ ∫_0^T L(t, Zn(t), βn(t)) dt ] < C^s_T(νn, νT) + 1/n.
(2.23)

Similarly, let dγ^x_n(v) ⊗ dνn(x) = dγn(v, x) be the disintegration of a measure γn ∈ K(µ0, νn) such that

    ∫ ⟨v, x⟩ dγn(v, x) < W(µ0, νn) + 1/n,

and define Un : M × Ω → M∗ to be a random variable such that Un[x] ∼ γ^x_n for νn-a.a. x. Thus (Un[Zn(0)], Zn(0)) ∼ γn, and we have constructed a random variable that approximates the interpolation, as

    E[ ⟨Un[Zn(0)], Zn(0)⟩ + ∫_0^T L(t, Zn(t), βn(t)) dt ] ≤ inf { W(µ0, ν) + C^s_T(ν, νT); ν ∈ P1(M) } + 3/n.   (2.24)

To show that the infimum in ν is attained in the set P1(M), we need again to prove the following coercivity property.

Claim: For any fixed νT ∈ P1(M) and N ∈ R, the set of measures ν ∈ P1(M) satisfying C^s_T(ν, νT) ≤ N ∫ |x| dν(x) is tight.

We will assume ν ∈ T_{ε,R} := {ν ∈ P1(M); ν(B(R, 0)^c) > ε} in what follows. We leave R to be determined later, but note that if we define the event ΩR := {|X0| > R}, then our assumption on ν yields P(ΩR) > ε. By positivity of L, this allows us to say that E[∫_0^T L(t, Xt, βt) dt] ≥ E[∫_0^T 1_{ΩR} L(t, Xt, βt) dt]; henceforth we focus only on the event ΩR.

By (A1), there are a convex function ℓ : M∗ → R and constants C, U > 0 such that ℓ(u)/|u|² > C for all |u| > U; recall that ℓ(v) is a lower bound for L(t, x, v). This imposes a lower bound on the expected action of X:

    E[ ∫_0^T L(t, Xt, βt) dt ] ≥ E[ ∫_0^T ℓ(βt) dt ] ≥ T E[ℓ(V)] > CT E[1_{|V|>U} |V|²] ≥ CT [ E[|V|²] − U² ],   (2.25)

where the second inequality is Jensen's and the third uses (A1), and where V := (Y(T) − Y(0))/T = (1/T) ∫_0^T βt dt is the time-average of the drift, with Y(t) := X0 + ∫_0^t βs ds denoting the drift part of X.
Hence the expected action of the stochastic process X is bounded below:

    E[ ∫_0^T 1_{ΩR} L(t, Xt, βt) dt ] ≥ CT E[ |(Y(T) − Y(0))/T|² ] − CU²T > (C/T) E[ ||X(0)| − |X(T)||² ] − CU²T.   (2.26)

This leaves us with the same formulation as in (2.4) of the deterministic coercivity result; the remainder of the proof is identical, and the claim is proven.

The existence of a minimizing ν0 ∈ P1(M) follows similarly, with the only distinction being that the lower semi-continuity of ν ↦ C^s_T(ν, νT) follows from Mikami-Thieullen [13].

Remark: a) The same reasoning as in Section 2.1 yields that C^s_T(ν0, ν1) = C^s_T(ν1, ν0) = ∞ for ν1 ∈ P1(M) and ν0 ∈ P(M) \ P1(M). This implies that it suffices to take the infimum in (2.21) over P1(M).

b) The attainment of a minimizing interpolating measure ν0 is sufficient to show the existence of a minimizing pair (V, X) for B^s_T(µ0, νT) whenever the latter is finite. This is a consequence of the existence of minimizers for both W(µ0, ν0) and C^s_T(ν0, νT) [13, Proposition 2.1].

To establish the duality formula in 2), we will proceed as in the deterministic case and use the Legendre dual of the optimal cost functional ν ↦ C^s_T(ν0, ν), which was derived by Mikami and Thieullen [13]:

Proposition 3. If the Lagrangian satisfies (A1)-(A3), then

    C^s_T(ν0, νT) = sup { ∫_M f(x) dνT − ∫_M Ψ^0_{f,−}(x) dν0; f ∈ C∞_b },   (2.27)

where Ψ_{f,−} is the unique solution to the Hamilton-Jacobi-Bellman equation (1.26), which is given by

    Ψ_{f,−}(t, x) = sup_{(X,β)∈A} { E[ f(XT) − ∫_t^T L(s, Xs, βs) ds | Xt = x ] }.   (2.28)

Moreover, there exists an optimal process X with drift βt(Xt) = argmin_v { v · ∇φ(t, Xt) + L(t, Xt, v) }.

Furthermore, (µ, ν) ↦ C^s_T(µ, ν) is convex and lower semi-continuous under the weak∗-topology.
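The variational formula (2.28) can be probed by Monte Carlo simulation in the simplest instance. A sketch (not from the thesis) under the assumptions L(t, x, β) = β²/2 in one dimension and f(x) = a·x, for which the optimal drift is the constant β ≡ a and Ψ_{f,−}(t, x) = a·x + a²(T − t)/2, since the Brownian term has mean zero; sample size, seed, and parameters are arbitrary choices.

```python
# Monte Carlo check of (2.28) for L = beta^2/2, f(x) = a*x:
# simulate X with the optimal constant drift beta = a by Euler-Maruyama
# and average f(X_T) minus the accumulated action; the result should
# approximate Psi_{f,-}(t, x0) = a*x0 + a^2 (T - t)/2.

import math, random

random.seed(0)
a, t, T, x0 = 1.0, 0.0, 1.0, 0.5
n_paths, n_steps = 20000, 50
dt = (T - t) / n_steps

total = 0.0
for _ in range(n_paths):
    x, action = x0, 0.0
    for _ in range(n_steps):
        action += 0.5 * a * a * dt                        # running cost L(a) = a^2/2
        x += a * dt + math.sqrt(dt) * random.gauss(0, 1)  # dX = a dt + dW
    total += a * x - action                               # f(X_T) - action

estimate = total / n_paths
exact = a * x0 + a * a * (T - t) / 2                      # = 1.0 for these parameters
print(estimate, exact)
```

The Monte Carlo error is of order 1/√n_paths here, so the estimate lands within a few hundredths of the exact value.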
It follows that ν 7→ BsT (µ0, ν) is weak∗-lower semi-continuous on P1(M) for all µ0 ∈ P1(M∗), and that (µ0, νT ) 7→ BsT (µ0, νT )is jointly convex.Remark: Note that integrating Ψf,−(0, x) over dν0 yields the Legendretransform of νT 7→ C(ν0, νT ) for f ∈ C∞db.2) For µ0 ∈ P1(M∗), define the function Bµ0 :M1(M)→ R∪{∞} to beBµ0(ν) :={B(µ0, ν) ν ∈ P1(M)∞ otherwise.Since Bµ0 is convex and weak∗-lower semi-continuous, we haveBµ0(ν) = B∗∗µ0(ν) = supf∈Lip(M){∫f dν −B∗µ0(f)}. (2.29)We break this into two steps. First we show that when f ∈ C∞db the dual isappropriate:B∗µ0(f) := supνT∈P1(M){∫f dνT −B(µ0, νT )}(2.21)= supνT∈P1(M)ν∈P1(M){∫f dνT − C(ν, νT )−W (µ0, ν)}(2.30)(2.28)= supν∈P1(M){∫Ψ0f,−(x) dν(x)−W (µ0, ν)}= W ∗µ0(Ψ0f,−) = −∫(Ψ0f,−)∗ dµ0.Thus, plugging this into our dual formula (2.29) and restricting our supre-mum to C∞db givesBµ0(ν) = B∗∗µ0(ν) ≥ supf∈C∞db{∫f dν +∫(Ψ0f,−)∗ dµ0}.To show the reverse inequality we will adapt the mollification argumentused in [13, Proof of Theorem 2.1]. We assume our mollifier η(x) is suchthat η1(x) is a smooth function on [−1, 1]d that satisfies∫η1(x) dx = 1 and192.2. Stochastic Minimizing Problem∫xη1(x) dx = 0, then define η(x) = −dη1(x/). For Lipschitz f , f := f ∗ηis then smooth with bounded derivatives. We can derive a bound on B∗µ(f)(where µ := µ0∗η) by removing the supremum in (2.30) and fixing a process(X,β) ∈ AνT :E[f(XT )−∫ T0L(s,Xs, βs) ds− 〈X0, V 〉](A2)≤E[f(XT +H)−∫ T0L(s,Xs +H, βs)−∆L(0, )1 + ∆L(0, )ds− 〈X0 +H, V +H〉+ |H|2]≤D∗ (f [1 + ∆L(0, )])1 + ∆L(0, )+ T∆L(0, )1 + ∆L(0, )+ d2,where D(ν) := inf{(1 + ∆L(0, ))W (µ, ν0) + C(ν0, ν); ν0}, H ∼ η is in-dependent of X(·), V , thus XT + H ∼ d(η ∗ νT ). The third line arises bymaximizing over processes (X(·) + H, V + H). Note that 7→ D(ν) islower semi-continuous for the same reason that ν 7→ Bµ(ν) is, and convergesto Bµ0(ν) as → 0.Taking the supremum over (X,β) ∈ Aµ0 of the left side above, we canretrieve a bound on B∗µ0(f). 
This bound allows us to write

∫ f dν − B∗_{µε}(fε) ≥ ∫ f dν − D∗_ε(fε[1 + ∆L(0, ε)]) / (1 + ∆L(0, ε)) − T ∆L(0, ε)/(1 + ∆L(0, ε)) − dε²,

where we use the ε-subscript to indicate convolution of a measure with ηε. Taking the supremum over f ∈ Lip(M), we get the reverse inequality:

sup_{f∈C∞db} { ∫ f dν − B∗_{µε}(f) } ≥ Dε(ν)/(1 + ∆L(0, ε)) − T ∆L(0, ε)/(1 + ∆L(0, ε)) − dε²,

and letting ε ↘ 0, the right-hand side converges to B(µ₀, ν_T), completing the reverse inequality.

In the following corollary, we discuss results pertaining to the convergence of solutions ψᵗₙ(x) := ψₙ(t, x) of the Hamilton–Jacobi–Bellman equation with final conditions ψᵀₙ(x). In some sense ∇ψ is more fundamental than ψ, since our dual is invariant under ψ ↦ ψ + c; thus when discussing the convergence of a sequence of ψ's, we refer to the convergence of their gradients. In the subsequent corollary, we denote by PX the measure on M × [0, T] associated with the process X.

Corollary 4. Suppose the assumptions of Theorem 4.2 are satisfied and that µ₀ is absolutely continuous with respect to Lebesgue measure. Then (V, X_t) minimizes B(µ₀, ν_T) if and only if it is a solution to the stochastic differential equation

dX = ∇pH(t, X_t, ∇ψ(t, X_t)) dt + dW_t   (2.31)
V = ∇ψ̄(X₀),   (2.32)

and hence a (time-inhomogeneous) Markov process, where ∇ψₙ(t, x) → ∇ψ(t, x) PX-a.s. and ∇ψₙ(0, x) → ∇ψ̄(x) ν₀-a.s. for some sequence ψₙ(t, x) of solutions of (HJB) such that ψᵀₙ := ψₙ(T, ·) and (ψ⁰ₙ)∗ := [ψₙ(0, ·)]∗ are maximizing sequences for the dual problem (2.22). Furthermore, ψ̄ is concave.

Proof: First note that there exists such an optimal pair (V, X), in view of Theorem 4.1. Moreover, the pair is optimal if and only if there exists a sequence of solutions ψₙ to (HJB) that is maximizing in (2.22) such that

E[ ∫_0^T L(t, X_t, β_t) dt + ⟨X₀, V⟩ ] = lim_{n→∞} E[ ψᵀₙ(X_T) + (ψ⁰ₙ)∗(V) ],   (2.33)

which we can write as

lim_{n→∞} E[ (ψᵀₙ(X_T) − ψ⁰ₙ(X₀))_(a) + (ψ⁰ₙ(X₀) − (ψ⁰ₙ)∗∗(X₀))_(b) + ((ψ⁰ₙ)∗∗(X₀) + (ψ⁰ₙ)∗(V))_(c) ],   (2.34)

where f∗∗ is the concave hull of f.
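The concave hull f∗∗ appearing in (b) and (c) can be computed on a grid as a double concave conjugate, f∗(v) = inf_x{⟨v, x⟩ − f(x)} and f∗∗(x) = inf_v{⟨v, x⟩ − f∗(v)}. A small numerical sketch; the double-bump f and the grids are illustrative choices:

```python
import numpy as np

x = np.linspace(-2, 2, 2001)
f = -((x**2 - 1) ** 2)                     # not concave: two humps at x = +1 and x = -1
v = np.linspace(-30, 30, 2001)             # slope grid covering f' on [-2, 2]

# discrete concave conjugate f*(v) = inf_x {v*x - f(x)}, then biconjugate f**
f_star = (v[:, None] * x[None, :] - f[None, :]).min(axis=1)
f_hull = (x[:, None] * v[None, :] - f_star[None, :]).min(axis=1)

assert np.all(f_hull >= f - 1e-8)          # f** dominates f ...
assert np.all(np.diff(f_hull, 2) <= 1e-8)  # ... and is concave (second differences <= 0)
assert abs(f_hull[1000]) < 1e-9            # hull bridges the humps: f**(0) = 0 while f(0) = -1
```

The correction terms (b), (c) above vanish exactly when f already equals its hull, i.e. when ψ⁰ₙ is concave.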
Applying Itô's formula to (a), using that ψₙ satisfies (HJB), we get

E[ ψᵀₙ(X_T) − ψ⁰ₙ(X₀) ] = E[ ∫_0^T ⟨β_t, ∇ψᵗₙ(X_t)⟩ − H(t, X_t, ∇ψᵗₙ(X_t)) dt ].

However, by the definition of the Hamiltonian we have ⟨v, b⟩ − H(t, x, v) ≤ L(t, x, b), which means that (2.34) yields the following three inequalities:

⟨β_t, ∇ψᵗₙ(X_t)⟩ − H(t, X_t, ∇ψᵗₙ(X_t)) ≤ L(t, X_t, β_t)   (a)
ψ⁰ₙ(X₀) − (ψ⁰ₙ)∗∗(X₀) ≤ 0   (b)
(ψ⁰ₙ)∗∗(X₀) + (ψ⁰ₙ)∗(V) ≤ ⟨V, X₀⟩.   (c)

In other words, (2.34) breaks the problem into a stochastic and a Wasserstein transport problem (in the flavour of Theorem 4), along with a correction term to account for ψ⁰ₙ not necessarily being concave. Adding (2.33) to the mix allows us to obtain L¹ convergence in the inequalities (a), (b), (c), hence a.s. convergence of a subsequence ψ_{n_k}.

Note that the convergence in (b), (c) means that ψ⁰ₙ converges ν₀-a.s. to a concave function ψ such that x ↦ ∇ψ(x) is the optimal transport map for W(ν₀, µ₀) [6].

To obtain the optimal control for the stochastic process, one needs the uniqueness of the point p achieving equality in (a). This is a consequence of the strict convexity and coercivity of b ↦ L(t, x, b) for all t, x. The differentiability of L further ensures this value is achieved by p = ∇_v L(t, x, b). Hence (a) holds if and only if

∇ψᵗₙ(X_t) → ∇_v L(t, X_t, β_t)   PX-a.s.

Since the ψᵗₙ are deterministic functions, this demonstrates that X_t is a Markov process with drift β_t determined by the inverse transform β_t(X_t) = ∇pH(t, X_t, ∇ψ(t, X_t)), i.e., (2.31).

Remark: It is not possible to conclude from the above work that ψ̄(x) = ψ(0, x) without a regularity result on t ↦ ψ(t, x) for the optimal ψ.
This is because ψ̄ is defined on a PX-null set.

Chapter 3: Bolza Duality

For the rest of this thesis we will assume that the Lagrangian L is convex, proper and lower semi-continuous in both variables, so that we may consider the dual Lagrangian L̃ defined on M∗ × M∗ by

L̃(t, v, q) := L∗(t, q, v) = sup{ ⟨v, y⟩ + ⟨p, q⟩ − L(t, y, p) ; (y, p) ∈ M × M },

the corresponding fixed-end costs on M∗ × M∗,

c̃_T(u, v) := inf{ ∫_0^T L̃(t, γ(t), γ̇(t)) dt ; γ ∈ C¹([0, T), M∗) ; γ(0) = u, γ(T) = v },   (3.1)

and its associated optimal transport

C̃_T(µ₀, µ_T) := inf{ ∫_{M∗×M∗} c̃_T(x, y) dπ ; π ∈ K(µ₀, µ_T) }.   (3.2)

More specifically, we shall assume the following conditions on L, which are weaker than (A1), (A2), (A3) except for the crucial condition that L is convex in both variables.

(B1) L : M × M → R ∪ {+∞} is convex, proper and lower semi-continuous in both variables.

(B2) The set F(x) := {p ; L(x, p) < ∞} is non-empty for all x ∈ M, and for some ρ > 0, we have dist(0, F(x)) ≤ ρ(1 + |x|) for all x ∈ M.

(B3) For all (x, p) ∈ M × M, we have L(x, p) ≥ θ(max{0, |p| − α|x|}) − β|x|, where α, β are constants, and θ is a coercive, proper, non-decreasing function on [0, ∞).

These conditions on the Lagrangian ensure that the Hamiltonian H is finite, concave in x and convex in q, hence locally Lipschitz. Moreover, we have

ψ(x) − (γ|x| + δ)|q| ≤ H(x, q) ≤ φ(q) + (α|q| + β)|x|   for all (x, q) ∈ M × M∗,   (3.3)

where α, β, γ, δ are constants, φ is finite and convex, and ψ is finite and concave (see [17]).

Under these conditions, the cost (x, y) ↦ c(t, x, y) is convex, proper and lower semi-continuous on M × M. But the cost b_T is nicer in many ways: for one, it is everywhere finite and locally Lipschitz continuous on [0, ∞) × M × M∗. However, the main gain in the case of joint convexity of L is the following so-called Bolza duality, which we briefly describe in the deterministic case since it has been studied in depth in various articles by T. Rockafellar [15] and co-authors [16, 17].
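For orientation, here is the standard quadratic model worked against these definitions (an illustrative special case, not an assumption of the thesis): for L(t, x, p) = ½|x|² + ½|p|²,

```latex
\tilde L(t,v,q) := L^{*}(t,q,v)
   = \sup_{(y,p)\in M\times M}\Bigl\{\langle v,y\rangle+\langle p,q\rangle
       -\tfrac12|y|^{2}-\tfrac12|p|^{2}\Bigr\}
   = \tfrac12|v|^{2}+\tfrac12|q|^{2},
\qquad
H(x,q)=\sup_{p}\bigl\{\langle q,p\rangle-L(x,p)\bigr\}
   =\tfrac12|q|^{2}-\tfrac12|x|^{2}.
```

Here L is self-dual (L̃ = L); H is concave in x and convex in q; (B1)–(B3) hold with θ(r) = ½r² and α = β = 0; and (3.3) holds with φ(q) = ½|q|², ψ(x) = −½|x|² and α = β = γ = δ = 0.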
The stochastic counterpart is more recent and has beenestablished by Boroushaki and Ghoussoub [5].3.1 Deterministic Bolza DualityWe consider the path space A2M := A2M [0, T ] = {u : [0, T ] → M ; u˙ ∈ L2M}equipped with the norm‖u‖A2M=(‖u(0)‖2M +∫ T0‖u˙‖2dt) 12.Let L be a convex Lagrangian on M ×M as above, ` be a proper convexlower semi-continuous function on M ×M and consider the minimizationproblems,(P) inf{∫ T0 L(γ(s), γ˙(s)) ds+ `(γ(0), γ(T )); γ ∈ C1([0, T ),M)},(3.4)and(P˜) inf{∫ T0 L˜(γ(s), γ˙(s)) ds+ `∗(γ(0),−γ(T )); γ ∈ C1([0, T ),M∗)}(3.5)where `∗ is the legendre transform of ` in both variables.Theorem 5. Assume L satisfies (B1), (B2) and (B3), and that ` is proper,lsc and convex.1. If there exists ξ such that `(·, ξ) is finite, or there exists ξ′ such that`(ξ′, ·) is finite, theninf(P) = − inf(P˜).This value is not +∞, and if it is also not −∞, then there is an optimalarc v(t) ∈ A2[0, T ] for (P˜).243.1. Deterministic Bolza Duality2. A similar statement holds if we replace ` by `∗ in the above hypothesisand (P˜) by (P) in the conclusion.3. If both conditions are satisfied, then both (P˜) and (P) are attainedrespectively by optimal arcs v(t), x(t) in A2[0, T ].In this case, these arcs satisfy (v˙(t), v(t)) ∈ ∂L(x(t), x˙(t)) for a.e. t,which can also be written in a dual form (x˙(t), x(t)) ∈ ∂L˜(v(t), v˙(t)) for a.e.t, or in a Hamiltonian form asx˙(t) ∈ ∂vH(x(t), v(t)) (3.6)−v˙(t) ∈ ∂˜xH(x(t), v(t)), (3.7)coupled with the boundary conditions(v(0),−v(T )) ∈ ∂`(x(0), x(T )). (3.8)See for example [15]. The above duality has several consequences.Proposition 5. The value function Φg,+(x) = inf{g(y) + c(t, y, x); y ∈M},which is the variational solution of the Hamilton-Jacobi equation (??) start-ing at g, can be expressed in terms of the b and c˜ costs as follows:1. If g is convex and lower semi-continuous, then Φg,+(t, x) = sup{b(t, v, x)−g∗(v); v ∈M∗} is convex lower semi-continuous for every t ∈ [0,+∞).2. 
The convex Legendre transform of Φg,+ is given by the formula

Φ̃g∗,+(t, w) = inf{ g∗(v) + c̃(t, v, w) ; v ∈ M∗ }.

3. For each t, the graph of the subgradient ∂Φg,+(t, ·), i.e., Γg(t) = {(x, v) ; v ∈ ∂Φg,+(t, x)}, is a globally Lipschitzian manifold of dimension n in M × M∗, which depends continuously on t.

4. If a Hamiltonian trajectory (x(t), v(t)) over [0, T] starts with v(0) ∈ ∂g(x(0)), then v(t) ∈ ∂Φg,+(t, x(t)) for all t ∈ [0, T]. Moreover, this happens if and only if x(t) is optimal in the minimization problem that defines Φg,+(t, x) and v(t) is optimal in the minimization problem that defines Φ̃g∗,+(t, w).

Remark: The above shows that when L is jointly convex, the corresponding forward Hamilton–Jacobi equation has convex solutions whenever the initial state is convex, while the corresponding backward Hamilton–Jacobi equation has concave solutions if the final state is concave. Unfortunately, we shall see that in the mass transport problems we are considering, one mostly propagates concave (resp., convex) functions forward (resp., backward), hence losing their concavity (resp., convexity).

This said, the cost functionals c_T, c̃_T, b_T are all value functions Φg starting or ending with an affine function g. Indeed, b(t, v, x) = Φg,+(t, x) when g_v(y) = ⟨v, y⟩. In this case, g∗_v(u) = 0 if u = v and +∞ if u ≠ v, which yields that the Legendre dual of x ↦ Φg,+(t, x) = b(t, v, x) is w ↦ c̃(t, v, w). One can also deduce the following.

Proposition 6. Under assumptions (B1), (B2), (B3) on the Lagrangian L, the costs c and b have the following properties:

1. For each t ≥ 0, (x, y) ↦ c(t, x, y) is convex, proper and lower semi-continuous on M × M.

2. For each t ≥ 0, v ↦ b(t, v, x) is concave upper semi-continuous on M∗, while x ↦ b(t, v, x) is convex lower semi-continuous on M. Moreover, b is locally Lipschitz continuous on [0, ∞) × M × M∗.

3.
The costs b, c and c˜ are dual to each other in the following sense:• For any (v, x) ∈ M∗ × M , we have b(t, v, x) = inf{〈v, y〉 +c(t, y, x); y ∈M}.• For any (y, x) ∈ M × M , we have c(t, y, x) = sup{b(t, v, x) −〈v, y〉; v ∈M∗}.• For any (v, x) ∈ M∗ × M , we have b(t, v, x) = sup{〈w, x〉 −c˜(t, v, w);w ∈M∗}.4. The following properties are equivalent:(a) (−v, w) ∈ ∂y,xcT (y, x);(b) w ∈ ∂xbT (v, x) and y ∈ ∂˜vbT (v, x).(c) There is a Hamiltonian trajectory (γ(t), η(t)) over [0, T ] startingat (y, v) and ending at (x,w).This leads us to the following standard condition in optimal transporttheory.Definition 7. A cost function c satisfies the twist condition if for each y ∈M , we have x = x′ whenever the differentials ∂yc(y, x) and ∂yc(y, x′) existand are equal.In view of the above proposition, cT satisfies the twist condition if thereis at most one Hamiltonian trajectory starting at a given initial state (v, y),while the cost bT satisfies the twist condition if for any given states (v, w),there is at most one Hamiltonian trajectory starting at v and ending at w.263.2. Stochastic Bolza duality and its applications3.2 Stochastic Bolza duality and its applicationsWe define the Itô space ApM consisting of all M -valued processes of the fol-lowing form:ApM ={X :ΩT →M ; Xt = X0 +∫ t0βXs ds+∫ t0σXs dWs,for X0 ∈ L2(Ω,F0,P(; )M), βX ∈ Lp(ΩT ;M), σX ∈ L2(ΩT ;M)},(3.9)where βX and σX are both progressively measurable and ΩT := Ω × [0, T ].The cases of p = 1, 2,∞ will be of interest to us. We equip A2M with thenorm‖X‖2A2M = E(‖X0‖2M +∫ T0‖βXt ‖2M dt+∫ T0‖σXt ‖2M dt),so that it becomes a Hilbert space. Its dual space (A2M )∗ can also be identifiedwith L2(Ω;M)× L2(ΩT ;M)× L2(ΩT ;M). 
In other words, each q ∈ (A²_M)∗ can be represented by a triplet

q = (q₀, q₁(t), Q(t)) ∈ L²(Ω; M) × L²(Ω_T; M) × L²(Ω_T; M),

in such a way that the duality can be written as

⟨X, q⟩_{A²_M × (A²_M)∗} = E{ ⟨q₀, X₀⟩_M + ∫_0^T ⟨q₁(t), β^X_t⟩_M dt + ½ ∫_0^T ⟨Q(t), σ^X_t⟩_M dt }.   (3.10)

Similarly, the dual of A¹_M can be identified with A∞_M.

We shall use the following result, recently established in [5].

Theorem 6. (Boroushaki–Ghoussoub) Let (Ω, F, F_t, P) be a complete probability space with normal filtration, let L(t, ·, ·) and M be two jointly convex Lagrangians on M × M, and assume ℓ is a convex lsc function on M × M. Consider the Lagrangian on A²_M × (A²_M)∗ defined by

L(X, p) = E{ ∫_0^T L(t, X_t − p₁(t), −β^X_t) dt + ℓ(X₀ − p₀, X_T) + ½ ∫_0^T M(σ^X_t − P(t), −σ^X_t) dt }.   (3.11)

Its Legendre dual is then given for each q := (0, q₁, Q) by

L∗(q, Y) = E{ ℓ∗(−Y(0), Y(T)) + ∫_0^T L∗(t, −β^Y_t, Y(t) − q₁(t)) dt + ½ ∫_0^T M∗(−σ^Y_t, σ^Y_t − Q(t)) dt }.

Note that standard duality theory implies that in general

inf_{X∈A²} { L(X, 0) } ≥ sup_{Y∈(A²)∗} { −L∗(0, Y) }.   (3.12)

In our case we shall restrict ourselves to processes of fixed diffusion. This facilitates proving a stochastic analogue of Bolza duality:

Proposition 8. Assume L̃ satisfies (A1) and (A2) in addition to the assumptions in Theorem 6, and that there exists an (a.s.-)unique V₀ ∈ L²(P) such that ℓ∗(V₀, ·) < ∞ and an (a.s.-)unique σ^V ∈ L²(P × λ_[0,T]) such that M∗(σ^V, ·) < ∞. Then there is no duality gap, i.e.

inf_{X∈A²} { L(X, 0) } = sup_{Y∈(A²)∗} { −L∗(0, Y) }.   (3.13)

Note that, unlike the deterministic case, there is no backward condition that works if there is a V_T ∈ L²(P) such that ℓ∗(·, V_T) < ∞; this is due to the irreversibility of stochastic processes.

Proof: We begin by augmenting our space, considering β^V ∈ L¹(P × λ_[0,T])—we call this augmented set A¹. If we can show that there is no duality gap in A¹, our coercivity condition (A2) then implies the same in A².

We proceed by a variational method outlined by Rockafellar [15].
First, wedefineφ(q) := infY ∈(A1)∗{L∗(q, Y )}. (3.14)As the infimum of a jointly convex function, φ itself is convex. The benefitof this definition is thatφ∗(X) = supq,v{〈X, q〉 − L∗(q, v)} = L∗∗(X, 0) = L(X, 0). (3.15)Hence, X minimizes L if and only ifX ∈ ∂φ(0) ⇐⇒ φ(0)+φ∗(X) = 0 ⇐⇒ L(X, 0) = − infY ∈(B2)∗{L∗(0, Y )}.(3.16)283.2. Stochastic Bolza duality and its applicationsIn other words, there is no duality gap if and only if ∂φ(0) is non-empty. Notethat this holds if there is an open (relative to {q;φ(q) <∞}) neighbourhoodN of the origin in A∞ such that L∗(q, Y ) <∞ for q ∈ N .By our assumptions, we may fix Y0, σY to be the unique elements such that`(Y0, ·) < ∞ and M∗(σY , ·) < ∞ (guaranteeing subdifferentiability in thesevariables), and let Y = (Y0, βY , σY ) be such that L∗(0, Y ) < ∞. For aperturbation βV ∈ L∞(P×λ[0,T ]) with∥∥βV ∥∥∞ < , note that (A2) gives forall (t, u) ∈ [0, T ]×M∗,L(t, Yt − βV , u) < (1 + ∆L(0, ))L(t, Yt, u) + ∆L(0, ), (3.17)andφ(V ) = infY ∈A1ML∗(V, Y )≤E`∗(−Y0, YT ) + E∫ T0L˜(t, Yt − βVt , βYt ) dt≤E`∗(−Y0, YT ) + (1 + ∆L˜(0, ))E∫ T0L˜(t, Yt, βYt ) dt+ T∆L˜(0, ),(3.18)which is finite for∥∥βV ∥∥∞ < sufficiently small by (A2). Hence φ is finiteand continuous in a open set of the origin (all relative to its domain), andduality is achieved on A1.To show that this duality is achieved in A2, it suffices to remark thatE∫ T0 L(t, Yt, βY ) ≥ E ∫ L(βY ) dt ≥ CE ∫ ∣∣βY ∣∣2−B dt =∞ for βY ∈ L1(P×λ[0,T ]) \ L2(P× λ[0,T ]) (where C,B are constants determined by L).29Chapter 4Maximizing Ballistic CostsWith Bolza duality in mind, it becomes possible to work with the maximiz-ing costs. We present a non-variational method of achieving duality in thedeterministic case, while the stochastic case becomes more natural to workwith in this setting, as it becomes a case of propagating the cost backwardsin time.4.1 Deterministic Maximizing CostTheorem 7. 
Assume that L satisfies hypothesis (B1), (B2) and (B3), andlet νT be a probability measure with compact support on M , that is alsoabsolutely continuous with respect to Lebesgue measure. Then,1. The following interpolation formula holds:BT (µ0, νT ) = sup{W (νT , µ)− C˜T (µ0, µ); µ ∈ P(M∗)}. (4.1)The supremum is attained at some probability measure µT on M∗, andthe final Kantorovich potential for C˜T (µ0, µT ) is convex.2. We also have the following duality formulae:BT (µ0, νT ) = inf{∫Mh(x) dνT (x) +∫M∗Φ˜0h∗,−(v) dµ0(v); h convex in Lip(M)}.(4.2)andBT (µ0, νT ) = inf{∫M(Φ˜Tg,+)∗(x) dνT (x) +∫M∗g(v) dµ0(v); g in Lip(M∗)}.(4.3)3. There exists a convex function h : M∗ → R such thatBT (µ0, νT ) =∫M∗bT (S∗T ◦ ∇h∗(x), x)dνT (x), (4.4)304.1. Deterministic Maximizing Costwhere S∗T (v) = pi∗φH∗T (v,∇h), and φH∗t the flow associated to the Hamil-tonian H∗(v, x) = −H(−x, v), whose Lagrangian is L∗(v, q) = L∗(−q, v).In other words, an optimal map for BT (µ0, νT ) is given by the inverseof the map x→ pi∗φH∗T (∇h∗(x), x).4. We also haveBT (µ0, νT ) =∫M∗bT (v,∇h ◦ S˜Tv)dµ0(v), (4.5)where S˜T (v) = pi∗φH˜T (v, d˜vh0), and φH˜t being the Hamiltonian flow as-sociated to L˜ (i.e., H˜(v, x) = −H(x, v), and h0 = Φ˜0h∗,−.h0 the solution h(0, v) of the backward Hamilton-Jacobi equation (1.33)with h(T, v) = h(v).Proof: To show (4.1) and (4.2), first note that for any probability mea-sure µ on M∗, we haveBT (µ0, νT ) ≥W (νT , µ)− C˜T (µ0, µ). (4.6)Indeed, since νT is assumed to be absolutely continuous with respect toLebesgue measure, Brenier’s theorem yields a convex function h that isdifferentiable µT -almost everywhere on M such that (∇h)#νT = µ, andW (νT , µ) =∫M 〈x,∇h(x)〉 dνT (x). Let pi0 be an optimal transport plan forC˜T (µ0, µ), that is pi0 ∈ K(µ0, µ) such that C˜T (µ0, µ) =∫M∗×M∗ c˜T (v, w) dpi0(v, w).Let p˜i0 := S#pi0, where S(v, w) = (v,∇h∗(w)), which is a transport plan inK(µ0, νT ). 
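The appeal to Brenier's theorem above (producing a convex h with (∇h)#ν_T = µ) has a transparent one-dimensional discrete analogue: for the correlation cost W, the optimal map is the monotone rearrangement, i.e. pairing order statistics, which is exactly the derivative of a convex function in one dimension. A small numpy sketch; the sample sizes and distributions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=8)                  # sample of the source measure (illustrative)
y = rng.exponential(size=8)             # sample of the target measure (illustrative)

# monotone (Brenier) pairing in 1-d: match order statistics, i.e. T = grad h for h convex
ix, iy = np.argsort(x), np.argsort(y)
T_of_x = np.empty_like(x)
T_of_x[ix] = y[iy]                      # T(k-th smallest x) = k-th smallest y

best = float((x * T_of_x).sum())        # correlation <x, T(x)> under the monotone map
for _ in range(200):                    # rearrangement inequality: no pairing beats it
    perm = rng.permutation(8)
    assert (x * y[perm]).sum() <= best + 1e-12
```

The sorted pairing is monotone non-decreasing, hence the gradient of a convex potential, and the loop checks the rearrangement inequality that makes it optimal for the correlation cost.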
Since bT (v, y) ≥ 〈∇h(x), y〉 − c˜T (v,∇h(x)) for every (y, x, v) ∈M ×M ×M∗, we haveBT (µ0, νT ) ≥∫M∗×MbT (v, x) dp˜i0(v, x)≥∫M∗×M{〈∇h(x), x〉 − c˜T (v,∇h(x))}dp˜i0(v, x)=∫M〈x,∇h(x)〉 dνT (x)−∫M∗×M∗c˜T (v, w)dpi0(v, w)= W (νT , µ)− C˜T (µ0, µ).To prove the reverse inequality, we use standard Monge-Kantorovich theoryto writeBT (µ0, νT ) = sup{∫M∗×MbT (v, x) dpi(v, x); pi ∈ K(µ0, νT )}= inf{∫Mh(x) dνT (x)−∫M∗g(v) dµ0(v); h(x)− g(v) ≥ bT (v, x)},314.1. Deterministic Maximizing Costwhere the infimum is taken over all admissible Kantorovich pairs (g, h) offunctions, i.e. those satisfying the relationsg(v) = infx∈Mh(x)− bT (v, x) and h(x) = supv∈M∗bT (v, x) + g(v)Note that h is convex. Since the cost function bT is continuous, the supre-mum BT (µ0, νT ) is attained at some probability measure pi0 ∈ K(µ0, νT ).Moreover, the infimum in the dual problem is attained at some pair (g, h) ofadmissible Kantorovich functions. It follows that pi0 is supported on the setO := {(v, x) ∈M∗ ×M ; bT (v, x) = h(x)− g(v)}.We now exploit the convexity of h, and use the fact that for each (v, x) ∈ O,the function y 7→ h(y) − g(v) − bT (v, y) attains its minimum at x, whichmeans that ∇h(x) ∈ ∂xbT (v, x). But since c˜T is the Legendre transform ofbT with respect to the x-variable, we then havebT (v, x) + c˜T (v,∇h(x)) = 〈x,∇h(x)〉 on O. (4.7)Integrating with pi0, we get since pi0 ∈ K(µ0, νT ),∫M∗×MbT (v, x) dpi0 +∫M∗×Mc˜T (v,∇h(x))dpi0 =∫M〈x,∇h(x)〉 dνT . (4.8)Letting µT = ∇h#νT , we obtain thatBT (µ0, νT ) +∫M∗×Mc˜T (v,∇h(x))dpi0 = W (νT , µT ), (4.9)where W (νT , µT ) = sup{∫M×M∗〈x, v〉 dpi; pi ∈ K(νT , µT )}. Note that wehave used here that h is convex to deduce thatW (νT , µT ) =∫M 〈x,∇h(x) dµTby the uniqueness in Brenier’s decomposition. We now prove that∫M∗×Mc˜T (v,∇h(x))dpi0 = C˜T (µ0, µT ). (4.10)Indeed, we have∫M∗×M c˜T (v,∇h(x))dpi0 ≥ C˜T (µ0, µT ) since the measurepi = S#pi0, where S(v, x) = (v,∇h(x)) has marginals µ0 and µT respectively.324.1. 
Deterministic Maximizing CostOn the other hand, (4.9) yields∫M∗×Mc˜T (v,∇h(x))dpi0 =∫M〈x,∇h(x)〉 dνT (x)−∫M∗×MbT (v, x) dpi0=∫Mh∗(∇h(x))dνT (x) +∫Mh(x) dνT (x) +∫M∗g(v) dµ0(v)−∫Mh(x) dνT (x)=∫M∗h∗(w)dµT (w) +∫M∗g(v) dµ0(v).Moreover, since h(x) − g(v) ≥ b(v, x), we have h∗(w) + g(v) ≤ c˜T (v, w).Indeed, since for any (v, w) ∈ M∗ ×M∗, we have c˜(t, v, w) = sup{〈w, x〉 −b(t, v, x);x ∈M}, it follows that for any y ∈M ,c˜T (v, w) ≥ 〈w, y〉 − b(t, v, y) ≥ 〈w, y〉+ g(v)− h(y),hence h∗(w) + g(v) ≤ c˜T (v, w), which means that the couple (−g, h∗) is anadmissible Kantorovich pair for the cost c˜T . Hence,C˜T (µ0, µT ) ≤∫M∗×Mc˜T (v,∇h(x))dpi0=∫Mh∗(w)dµT (w) +∫M∗g(v) dµ0(v)≤ sup{∫M∗φT (w) dµT (w)−∫M∗φ0(v) dµ0(v); φT (w)− φ0(v) ≤ c˜T (v, w)}= C˜T (µ0, µT ).It follows that BT (µ0, νT ) = W (νT , µT ) − C˜T (µ0, µT ). In other words, thesupremum in (4.6) is attained by the measure µT . Note that the final opti-mal Kantorovich potential for C˜T (µ0, µT ) is h∗, hence is convex.The first duality formula (4.3) follows since we have established that if (g, h)are an optimal pair of Kantorovich functions for BT (µ0, νT ), then (g, h∗) arean optimal pair of Kantorovich functions for C˜T (µ0, µT ). In other words, theinitial Kantorovich function for BT (µ0, νT ) is g = Φ˜h∗,−(0, ·). This provesformula (4.2).To show (4.3), we can –now that the interpolation (4.1) is established–proceed as in Section 2.1, by identifying the Legendre transform of the func-tionals ν →W (ν, νT ) and µ→ C˜T (µ, µT ).To show part 4), we start with the interpolation inequality and write thatBT (µ0, νT ) = W (νT , µT )− C˜T (µ0, µT ),334.1. Deterministic Maximizing Costfor some probability measure µT . The proof also shows that there existsa convex function h : M∗ → R and another function k : M∗ → R suchthat (∇h)#µT = νT , W (νT , µT ) =∫M 〈∇h(v), v〉dµT (v). and C˜T (µ0, µT ) =∫M∗ h(u) dµT (u)−∫M∗ k(v) dµ0(v). 
Now use the theorem of Fathi-Figalli towriteC˜T (µ0, µT ) =∫M∗cT (v, S˜T v)dµ0(v), (4.11)where S˜T (v) = pi∗φH˜T (v, d˜vk). Note thatBT (µ0, νT ) ≥∫M∗ bT (v,∇h ◦ S˜T (v))dµ0(v), (4.12)since (S˜T )#µ0 = µT and ∇h#µT = νT , and therefore (I × ∇h ◦ S˜T )#µ0belongs to K(µ0, νT ).On the other hand, since bT (u, x) ≥ 〈∇h(v), x〉 − c˜T (u,∇h(v)) for everyv ∈M∗, we haveBT (µ0, νT ) ≥∫M∗bT (v,∇h ◦ S˜T (v))dµ0(v)≥∫M∗{〈∇h ◦ S˜T (v), S˜T (v)〉 − c˜T (v, S˜T (v))} dµ0(v)=∫M∗〈∇h(v), v〉dµT (v)−∫M∗c˜T (v, S˜T (v)) dµ0(v)= W (νT , µT )− C˜T (µ0, µT )= BT (µ0, νT ).It follows that BT (µ0, νT ) =∫M∗ bT (v,∇h ◦ S˜T (v))dµ0(v).To get 3), use the pushforward νT = (∇h ◦ S˜T )#µ0 to write the above interms of the measure νT , using the fact that (∇h)−1 = ∇h∗ and S˜−1T = S∗Twhere S∗T (v) = pi∗φH∗t (v, d˜vh) and φH∗t is the Hamiltonian flow associated tothe hamiltonian H∗(v, x) := −H(−x, v). This gives usBT (µ0, νT ) =∫M∗bT (S∗T ◦∇h∗(x), x)dνT (x) =∫M∗bT (pi∗φH∗t (∇h∗(x), d˜vh), x)dνT (x).Since h is convex, we have that d˜xh = ∇h(x), hence d˜∇h∗(x)h = ∇h ◦∇h∗(x) = x, which yields our claim thatBT (µ0, νT ) =∫MbT (pi∗φH˜T (∇h∗(x), x), x)dνT (x).Remark: While the costs c and c˜T are themselves jointly convex in bothvariables, one cannot deduce much in terms of the convexity or concavity of344.1. Deterministic Maximizing Costthe corresponding Kantorovich potentials. However, we note that the inter-polation (2.2) of BT (µ0, νT ) selects a ν0 such that CT (ν0, νT ) has a concaveinitial Kantorovich potential, while the interpolation (4.1) of BT (µ0, νT ) se-lects a µT such that C˜T (µ0, µT ) has a convex final Kantorovich potential.Furthermore, one wonders whether the formulac(t, y, x) = sup{b(t, v, x)− 〈v, y〉; v ∈M∗}, (4.13)also extends to Wasserstein space. We show it under the condition that theinitial Kantorovich potential of CT (ν0, νT ) is concave, and conjecture that itis also a necessary condition.Theorem 8. Assume M = Rd and that L satisfies hypothesis (B1), (B2)and (B3). 
Assume ν₀ and ν_T are probability measures on M such that ν₀ is absolutely continuous with respect to Lebesgue measure. If the initial Kantorovich potential of C_T(ν₀, ν_T) is concave, then the following holds:

C_T(ν₀, ν_T) = sup{ B_T(µ, ν_T) − W(ν₀, µ) ; µ ∈ P(M∗) },   (4.14)

and the supremum is attained.

Proof: Again, it is easy to show that

C_T(ν₀, ν_T) ≥ sup{ B_T(µ, ν_T) − W(ν₀, µ) ; µ ∈ P(M∗) }.   (4.15)

To prove equality, we assume that the initial Kantorovich potential g is concave and write

C_T(ν₀, ν_T) = inf{ ∫_{M×M} c(y, x) dπ(y, x) ; π ∈ K(ν₀, ν_T) } = sup{ ∫_M h(x) dν_T(x) − ∫_M g(y) dν₀(y) ; h(x) − g(y) ≤ c_T(y, x) }.

Since the cost function c_T is continuous, the infimum C_T(ν₀, ν_T) is attained at some probability measure π₀ ∈ K(ν₀, ν_T). Moreover, the supremum in the dual problem is attained at some pair (g, h) of admissible Kantorovich functions. It follows that π₀ is supported on the set

O := {(y, x) ∈ M × M ; c_T(y, x) = h(x) − g(y)}.

Since g is concave, use the fact that for each (y, x) ∈ O, the function z ↦ h(x) − g(z) − c_T(z, x) attains its maximum at y, to deduce that −∇g(y) ∈ ∂_y c_T(y, x).

Since g is concave and b(t, v, x) = inf{ ⟨v, z⟩ + c(t, z, x) ; z ∈ M }, this means that for (y, x) ∈ O,

c_T(y, x) = b_T(∇g(y), x) − ⟨∇g(y), y⟩.   (4.16)

Integrating against π₀, and since π₀ ∈ K(ν₀, ν_T), we get

∫_{M×M} c_T(y, x) dπ₀ = ∫_{M×M} b_T(∇g(y), x) dπ₀ − ∫_M ⟨∇g(y), y⟩ dν₀.   (4.17)

Letting µ₀ = (∇g)#ν₀, and since g is concave, we obtain that

C_T(ν₀, ν_T) = ∫_{M×M} b_T(∇g(y), x) dπ₀ − W(ν₀, µ₀).   (4.18)

We now prove that

∫_{M×M} b_T(∇g(y), x) dπ₀(y, x) = B_T(µ₀, ν_T).   (4.19)

Indeed, we have ∫_{M×M} b_T(∇g(y), x) dπ₀ ≥ B_T(µ₀, ν_T), since the measure π = S#π₀, where S(y, x) = (∇g(y), x), has µ₀ and ν_T as marginals.
On theother hand, (4.18) yields∫M×MbT (∇g(y), x) dpi0 =∫M×McT (y, x) dpi0 +∫M〈y,∇g(y)〉 dν0(y)=∫Mh(x) dνT (x)−∫Mg(y) dν0(y)−∫M(−g)∗(−∇g(y))dν0(y) +∫Mg(y) dν0(y)=∫Mh(x) dνT (x)−∫M∗(−g)∗(−v)dµ0(v).Moreover, since h(x)−g(y) ≤ cT (y, x), it is easy to see that h(x)−(−g)∗(−v) ≤bT (v, x), that is the couple ((−g)∗(−v), h(x)) is an admissible Kantorovichpair for the cost bT . It follows thatBT (µ0, νT ) ≤∫M×MbT (∇g(y), x) dpi0=∫Mh(x) dνT (x)−∫M(−g)∗(−v)dµ0(v)≤ sup{∫MφT (x) dµT (x)−∫M∗φ0(v) dµ0(v); φT (x)− φ0(v) ≤ bT (v, x)}= BT (µ0, νT ),364.1. Deterministic Maximizing Costand CT (ν0, νT ) = BT (µ0, νT )−W (ν0, µ0). In other words, the supremum in(4.14) is attained by the measure µ0.Corollary 9. Assume M = Rd and that L satisfies hypothesis (B1), (B2)and (B3). Assume ν0 and νT are probability measures on M such that ν0 isabsolutely continuous with respect to Lebesgue measure, and that the initialKantorovich potential of CT (ν0, νT ) is concave. If bT satisfies the twist con-dition, then there exists a map XT0 : M∗ → M and a concave function g onM such thatCT (ν0, νT ) =∫McT (y,XT0 ◦ ∇g(y))dν0(y). (4.20)Proof: In this case, CT (ν0, νT ) = BT (µ0, νT ) −W (ν0, µ0), for some prob-ability measure µ0 on M∗. Let g be the concave function on M such that(∇g)#ν0 = µ0 and W (ν0, µ0) =∫M 〈∇g(y), y〉dν0(y). Since bT satisfies thetwist condition, there exists a map XT0 : M∗ →M such that (XT0 )#µ0 = νTandBT (µ0, νT ) =∫M∗bT (v,XT0 v)dµ0(v). (4.21)Note that the infimum CT (ν0, νT ) is attained at some probability measurepi0 ∈ K(ν0, νT ) and that pi0 is supported on a subset O of M ×M such thatfor (y, x) ∈ O, cT (y, x) = bT (∇g(y), x)−〈∇g(y), y〉. Moreover, CT (ν0, νT ) =∫M×M bT (∇g(y), x) dpi0 −W (ν0, µ0), and∫M×MbT (∇g(y), x) dpi0 = BT (µ0, νT ) =∫M∗bT (v,XT0 v)dµ0(v) =∫MbT (∇g(y), XT0 ◦∇g(y))dν0(y).Since bT satisfies the twist condition, it follows that for any (y, x) ∈ O, wehave that x = XT0 ◦∇g(y) from which follows that CT (ν0, νT ) =∫M cT (y,XT0 ◦∇g(y))dν0(y).Corollary 10. 
Consider the cost c₁(y, x) = c(x − y), where c is a convex function on M, and let ν₀, ν₁ be probability measures on M such that the initial Kantorovich potential associated to C_T(ν₀, ν_T) is concave. Then there exist concave functions φ : M → R and ψ : M∗ → R such that

(∇ψ ◦ ∇φ)#ν₀ = ν₁,   (4.22)

and

C₁(ν₀, ν₁) + K = ∫_M c(∇ψ ◦ ∇φ(y) − y) dν₀(y) = ∫_M ⟨∇ψ∗(y) − ∇φ(y), y⟩ dν₀(y),   (4.23)

where K > 0 is a constant and ψ∗ is the concave Legendre transform of ψ.

Proof: The cost c(x − y) corresponds to c₁(y, x), where the Lagrangian is L(x, v) = c(v); that is,

c₁(y, x) = inf{ ∫_0^1 c(γ̇(t)) dt ; γ ∈ C¹([0, 1), M) ; γ(0) = y, γ(1) = x } = c(x − y).   (4.24)

It follows from (4.14) that there is a probability measure µ₀ on M∗ such that C₁(ν₀, ν₁) = B₁(µ₀, ν₁) − W(ν₀, µ₀). But in this case, b₁(v, x) = inf{ ⟨v, y⟩ + c(x − y) ; y ∈ M } = ⟨v, x⟩ − c∗(v), hence

C₁(ν₀, ν₁) = B₁(µ₀, ν₁) − W(ν₀, µ₀) = W(µ₀, ν₁) − ∫_{M∗} c∗(v) dµ₀(v) − W(ν₀, µ₀).   (4.25)

In other words,

C₁(ν₀, ν₁) + K = W(µ₀, ν₁) − W(ν₀, µ₀),   (4.26)

where K is the constant ∫_{M∗} c∗(v) dµ₀(v).

Apply Brenier's theorem [6] twice to find concave functions φ : M → R and ψ : M∗ → R such that (∇φ)#ν₀ = µ₀, (∇ψ)#µ₀ = ν₁, W(ν₀, µ₀) = ∫_M ⟨y, ∇φ(y)⟩ dν₀(y) and W(µ₀, ν₁) = ∫_{M∗} ⟨v, ∇ψ(v)⟩ dµ₀(v). It follows from the preceding corollary that

C₁(ν₀, ν₁) + K = ∫_M c₁(y, ∇ψ ◦ ∇φ(y)) dν₀(y) = ∫_M c(∇ψ ◦ ∇φ(y) − y) dν₀(y).

Note also that

C₁(ν₀, ν₁) + K = ∫_{M∗} ⟨v, ∇ψ(v)⟩ dµ₀(v) − ∫_M ⟨y, ∇φ(y)⟩ dν₀(y) = ∫_M ⟨∇ψ∗(y), y⟩ dν₀(y) − ∫_M ⟨y, ∇φ(y)⟩ dν₀(y) = ∫_M ⟨∇ψ∗(y) − ∇φ(y), y⟩ dν₀(y).

4.2 Stochastic Maximizing Cost

Define the transportation cost between a random variable V on M∗ and a random variable Z on M by:

bs(V, Z) := inf{ E[ ⟨V, X₀⟩ + ∫ L(t, X_t, β_t) dt ] ; (X, β) ∈ A, X_T = Z a.s. },   (4.27)

where A denotes Itô processes with Brownian diffusion. The minimizing ballistic cost considered earlier is then

BsT(µ₀, ν_T) = inf{ bs(V(·), Y(·)) ; V ∼ µ₀, Y ∼ ν_T },   (4.28)

while the maximizing cost is defined as

BsT(µ₀, ν_T) := sup{ bs(V(·), Y(·)) ; V ∼ µ₀, Y ∼ ν_T }.   (4.29)

Theorem 9.
Assume L is a Lagrangian on M × M∗ such that L and its dual L̃ satisfy (A0)–(A3). Then:

1. The following formula holds:

BsT(µ₀, ν_T) := sup{ E[ ⟨V, X_T⟩ − ∫_0^T L̃(t, X_t, β_t(X_t)) dt ] ; (X, β) ∈ A, X₀ ∼ µ₀, V ∼ ν_T }.   (4.30)

2. The following duality holds:

BsT(µ₀, ν_T) = sup{ W(µ, ν_T) − C̃sT(µ₀, µ) },   (4.31)

where C̃sT is the action corresponding to the Lagrangian L̃. Furthermore, if ν₀ ∈ P1(M) and µ_T ∈ P1(M∗), there exists an optimal interpolant measure.

3. If µ₀ ∈ P1(M∗), ν_T has compact support, and B(µ₀, ν_T) < ∞, then

B(µ₀, ν_T) = inf{ ∫_M g∗ dν_T + ∫_{M∗} Ψ̃g,− dµ₀ ; g ∈ C∞db(M∗) and convex },   (4.32)

where Ψg,− solves the Hamilton–Jacobi–Bellman equation on M∗:

∂ψ/∂t + ½∆ψ − H(t, ∇_vψ, v) = 0,   ψ(T, v) = g(v).   (HJB2)

Remark: The case where ν_T is given by a Dirac measure δ_u is suggestive. Here the maximizing ballistic cost may be interpreted literally in terms of the HJB equation via (2.28), with k : x ↦ ⟨u, x⟩. Thus, in this particular case we can recover

B(µ₀, δ_u) = ∫ Ψ̃k,−(0, x) dµ₀(x),

without the aid of any duality theorem.
Notably,

k∗(v) = sup_z{ ⟨v, z⟩ − ⟨u, z⟩ } = 0 if v = u, and ∞ otherwise.

Moreover, this method indicates a minimizing process, described by the SDE

X_t = X₀ + ∫_0^t ∇pH(s, X, ∇Ψ̃k,−(s, X)) ds + W_t.

Proof: 1) For a fixed pair (V, Y), we consider the Bolza energy L(V,Y) (3.11) associated to L and the two Lagrangians ℓ and M defined as:

ℓ(Y,U)(ω, y, z) := ⟨z, U(ω)⟩ if y = Y(ω), and ∞ otherwise;   M(ξ, ζ) := −ζ − 1 if ξ = 1, and ∞ otherwise.   (4.33)

Note that the minimizing stochastic cost can be written as

Bs(µ₀, ν_T) := inf{ inf{ L(V,Y)(X_t, 0) ; (X, β) ∈ A² } ; V ∼ µ₀, Y ∼ ν_T },   (4.34)

while the maximizing cost is

Bs(µ₀, ν_T) = sup{ inf{ L(V,Y)(X_t, 0) ; (X, β) ∈ A² } ; V ∼ µ₀, Y ∼ ν_T }.   (4.35)

Applying Bolza duality turns the inner infimum into a supremum:

Bs(µ₀, ν_T) = sup{ sup{ −L∗(V,Y)(0, U(t)) ; U ∈ A² } ; V ∼ µ₀, Y ∼ ν_T },   (4.36)

which results in (4.30).

2) The proof of the interpolation result can now follow closely the proof for the minimization problem.

3) We again identify the Legendre transforms of the functionals ν ↦ W(µ, ν) and µ ↦ C̃sT(µ₀, µ). We obtain easily that:

• If µ ∈ P1(M∗) has compact support, then for all f ∈ Lip(M),

sup_{ν∈P1(M)} { ∫_M f dν + W(µ, ν) } = ∫_{M∗} (−f)∗ dµ.

• If g ∈ C∞db(M∗), then

sup_{µ∈P1(M∗)} { ∫_{M∗} g dµ − C̃sT(µ₀, µ) } = ∫_{M∗} Ψ̃g,− dµ₀.

Define Bµ₀ : ν ↦ B(µ₀, ν), and note that the interpolation formula (4.31) and a result of Mikami–Thieullen [13] concerning C̃sT yield that Bµ₀ is a concave function. Furthermore, it is weak∗-upper semi-continuous on P1(M). Thus we have

Bµ₀(ν_T) = −(−Bµ₀)∗∗(ν_T) = inf_{f∈Lip(M)} { −∫_M f dν_T + (−Bµ₀)∗(f) }.   (4.37)

Investigating the dual, we find

(−Bµ₀)∗(f) = sup_{ν∈P1(M)} { ∫_M f dν + Bµ₀(ν) } = sup_{µ∈P1(M∗), ν∈P1(M)} { ∫_M f dν + W(µ, ν) − C̃sT(µ₀, µ) } = sup_{µ∈P1(M∗)} { ∫_{M∗} (−f)∗ dµ − C̃sT(µ₀, µ) }.
(4.38)

Note that in the case where $(-f)^* \in C^\infty_{db}$, this is simply $\int_{M^*} \tilde\Psi_{(-f)^*,-}\, d\mu_0$, giving us
\[
\overline B_{\nu_0}(\mu_T) \le \inf_{(-f)^* \in C^\infty_{db}} \Big\{ -\int_{M^*} f\, d\mu_T + \int_{M^*} \tilde\Psi_{(-f)^*,-}\, d\mu_0 \Big\}.
\]
In either case, we can restrict our $f$ to be concave by noting that if we fix $g = (-f)^*$, then the set of corresponding $\{-f;\ (-f)^* = g\}$ is minimized by the convex function $g^* = (-f)^{**} \le -f$ [7, Proposition 4.1]. Thus it suffices to consider $-f$ convex.

We now show that it is sufficient to consider this infimum over convex $g \in C^\infty_{db}$, by a mollification argument similar to that used for $\underline B$ (note that mollifying preserves convexity). Maintaining the same assumptions and notation as in our earlier argument, we first note a useful application of Jensen's inequality to the Legendre dual of a mollified function:
\[
g_\epsilon^*(v) = \sup_x\big\{\langle v, x\rangle - \mathbb E[g(x + H_\epsilon)]\big\} \overset{(J)}{\le} \sup_x\big\{\langle v, x\rangle - g(x)\big\} = g^*(v).
\]
Mikami [13, Proof of Theorem 2.1] further shows that
\[
(4.38) = C^*_{\nu_0}(g) \le \frac{C^*_{\nu_0 * \eta_\epsilon}\big((1 + \Delta L(0, \epsilon))\, g\big)}{1 + \Delta L(0, \epsilon)} + \frac{T\, \Delta L(0, \epsilon)}{1 + \Delta L(0, \epsilon)}.
\]
Putting these together we get
\[
\int g_\epsilon^*\, d\mu_T + (-\overline B_{\nu_0})^*(g_\epsilon^*) \le \int g^*\, d\mu_T + \frac{C^*_{\nu_0 * \eta_\epsilon}\big((1 + \Delta L(0, \epsilon))\, g\big)}{1 + \Delta L(0, \epsilon)} + \frac{T\, \Delta L(0, \epsilon)}{1 + \Delta L(0, \epsilon)}.
\]
And once we take the infimum over convex $g \in \operatorname{Lip}(M)$, we get
\[
\inf\Big\{\int g^*\, d\mu_T + (-\overline B_{\nu_0})^*(-g^*);\ g \text{ convex in } C^\infty_{db}\Big\} \le \frac{-(-\overline B)^{**}_{\nu_0 * \eta_\epsilon}(\mu_{L,\epsilon})}{1 + \Delta L(0, \epsilon)} + \frac{T\, \Delta L(0, \epsilon)}{1 + \Delta L(0, \epsilon)},
\]
where $d\mu_{L,\epsilon}(v) := d\mu_T\big([1 + \Delta L(0, \epsilon)]\, v\big)$. Taking $\epsilon \searrow 0$, the right side is dominated by $\overline B(\mu_0, \nu_T)$ (where we exploit the upper semi-continuity of $\overline B$), completing the reverse inequality.

Corollary 11 (Optimal Processes for $\overline B$). Suppose the assumptions of Theorem 9 are satisfied, with $d\mu_0 \ll d\lambda$. Then $(V_t, X)$ is an optimal process if and only if it is a solution to the backward stochastic differential equation
\begin{align}
dV &= \nabla_p H(t, \nabla\psi(t, V), V)\, dt + dW_t, \tag{4.39}\\
X &= \nabla\bar\psi(V_T), \tag{4.40}
\end{align}
where $\lim_{n\to\infty} \psi_n(T, x) = \bar\psi(x)$ $\nu_T$-a.s. and $\lim_{n\to\infty} \psi_n(t, x) = \psi(t, x)$ $P_V$-a.s.
for some sequence $\psi_n(t, x)$ that solves (HJB) in such a way that $\psi^0_n = \psi_n(0, \cdot)$ and $\psi^T_n = \psi_n(T, \cdot)$ are a minimizing pair for the dual problem.

Proof: If $(V, X)$ is optimal, then Theorem 9 implies there exists a sequence of solutions $\psi_n(t, v)$ to (HJB) with convex final conditions $\psi^T_n$, such that
\[
\mathbb E\Big[\langle X, V_T\rangle - \int_0^T \tilde L(t, V, \beta^V_t(V))\, dt\Big] = \lim_{n\to\infty} \mathbb E\big[[\psi^T_n]^*(X) + \psi^0_n(V_0)\big], \tag{4.41}
\]
which we write as
\[
\lim_{n\to\infty} \mathbb E\big[[\psi^T_n]^*(X) + \psi^T_n(V_T) - \psi^T_n(V_T) + \psi^0_n(V_0)\big].
\]
Applying Itô's formula to the last two terms, with the knowledge that $\psi_n$ satisfies (HJB), we get
\[
\mathbb E\big[-\psi^T_n(V_T) + \psi^0_n(V_0)\big] = \mathbb E\Big[\int_0^T -\langle \beta^V_t, \nabla\psi^t_n(V_t)\rangle - H(t, \nabla\psi^t_n(V_t), V_t)\, dt\Big].
\]
However, by the definition of the Hamiltonian we have $-\langle q, v\rangle - H(t, x, v) \ge -\tilde L(t, v, q)$; similarly, $\psi^*(v) + \psi(x) \ge \langle v, x\rangle$. These inequalities allow us to separate the limit in (4.41) into two requirements:

(a) $\langle \beta^V_t, \nabla\psi^t_n(V_t)\rangle + H(t, \nabla\psi^t_n(V_t), V_t)$ must converge to $\tilde L(t, V, \beta^V_t(V))$; and

(b) $\psi^T_n(V_T) + [\psi^T_n]^*(X)$ must converge to $\langle X, V_T\rangle$ in $L^1$, hence a subsequence $\psi_{n_k}$ exists such that this convergence is a.e.

The journey from (a) to (4.39) is as in Corollary 4. The only difference from the earlier corollary is that we know that $\psi_n$ must converge to a convex function, so (b) implies $X = \nabla \lim_{n\to\infty} \psi_n(V_T)$.

4.3 Final Remarks

The interpolation formula can be seen as a Hopf-Lax formula on Wasserstein space, since for a fixed $\mu_0$ on $M^*$ (resp. a fixed $\nu_T$ on $M$), as a function of the terminal (resp. initial) measure we have
\[
B_{\mu_0}(t, \nu) = \inf\{U_{\mu_0}(\rho) + C_t(\rho, \nu);\ \rho \in \mathcal P(M)\} \quad\text{and}\quad B_{\nu_T}(t, \mu) = \inf\{U_{\nu_T}(\rho) - \tilde C_t(\rho, \mu);\ \rho \in \mathcal P(M^*)\}, \tag{4.42}
\]
where
\[
U_{\mu_0}(\rho) = W(\mu_0, \rho) \quad\text{and}\quad U_{\nu_T}(\rho) = W(\nu_T, \rho).
\]
The following Eulerian formulation best illustrates how $B_{\mu_0}(t, \nu)$ and $B_{\nu_T}(t, \mu)$ can be represented as value functionals on Wasserstein space. Indeed, lift the Lagrangian $L$ to the tangent bundle of Wasserstein space via the formulas
\[
\mathcal L(\rho, w) := \int_M L(x, w(x))\, d\rho(x) \quad\text{and}\quad \tilde{\mathcal L}(\rho, w) := \int_{M^*} \tilde L(x, w(x))\, d\rho(x),
\]
where $\rho$ is any probability density on $M$ (resp. $M^*$) and $w$ is a vector field on $M$ (resp. $M^*$).

Corollary 12.
Assume $L$ satisfies hypotheses (D0) and (D1), and let $\mu_0$ be a probability measure on $M^*$ with compact support. Then
\[
B_{\mu_0}(T, \nu) := \inf\Big\{U_{\mu_0}(\rho_0) + \int_0^T \mathcal L(\rho_t, w_t)\, dt;\ \partial_t\rho + \nabla\cdot(\rho w) = 0,\ \rho_T = \nu\Big\}, \tag{4.43}
\]
where the pairs $(\rho, w)$ considered above are such that $t \mapsto \rho_t \in \mathcal P(M)$ and $t \mapsto w_t(x) \in \operatorname{Lip}(\mathbb R^n)$ are paths of Borel vector fields.

One can then ask whether these value functionals also satisfy a Hamilton-Jacobi equation on Wasserstein space such as
\[
\begin{cases}
\partial_t B + \mathcal H(t, \nu, \nabla_\nu B(t, \nu)) = 0,\\
B(0, \nu) = W(\mu_0, \nu).
\end{cases} \tag{4.44}
\]
Here the Hamiltonian is defined as
\[
\mathcal H(\nu, \zeta) = \sup\Big\{\int \langle \zeta, \xi\rangle\, d\nu - \mathcal L(\nu, \xi);\ \xi \in T^*_\nu(\mathcal P(M))\Big\}.
\]
We note that Ambrosio-Feng [1] have recently shown that, at least in the case where the Hamiltonian is the square, value functionals on Wasserstein space yield a unique metric viscosity solution for (4.44). As importantly, Gangbo-Swiech [11] have recently shown that, under certain conditions, value functionals yield solutions to the so-called Master equations of mean field games.

Theorem 10 (Gangbo-Swiech). Assume $\mathcal U_0 : \mathcal P(M) \to \mathbb R$ and $U_0 : M \times \mathcal P(M) \to \mathbb R$ are functionals such that $\nabla_x U_0(x, \mu) \equiv \nabla_\mu \mathcal U_0(\mu)(x)$ for all $x \in M$, $\mu \in \mathcal P(M)$, and consider the value functional
\[
\mathcal U(t, \nu) = \inf\Big\{\mathcal U_0(\rho_0) + \int_0^t \mathcal L(\rho, w)\, ds;\ \partial_t\rho + \nabla\cdot(\rho w) = 0,\ \rho_t = \nu\Big\}.
\]
Then there exists $U : [0, T] \times M \times \mathcal P(M) \to \mathbb R$ such that
\[
\nabla_x U_t(x, \nu) \equiv \nabla_\nu \mathcal U_t(\nu)(x) \quad\text{for all } x \in M,\ \nu \in \mathcal P(M),
\]
and $U$ satisfies the Master equation (4.45) below.

Applied to the value functional $B_{\mu_0}(t, \nu) := B_t(\mu_0, \nu)$, this should then yield, for any probabilities $\mu_0, \nu_T$, the existence of a function $\beta : [0, T] \times M \times \mathcal P(M) \to \mathbb R$ such that
\[
\nabla_x \beta(t, x, \nu) \equiv \nabla_\nu B_{\mu_0}(t, \nu)(x) \quad\text{for all } x \in M,\ \nu \in \mathcal P(M),
\]
and $\rho \in AC^2((0, T) \times \mathcal P(M))$ such that
\[
\begin{cases}
\partial_t\beta + \displaystyle\int \big\langle \nabla_\nu\beta(t, x, \nu) \cdot \nabla H(x, \nabla_x\beta)\big\rangle\, d\nu + H(x, \nabla_x\beta(t, x, \nu)) = 0,\\
\partial_t\rho + \nabla\cdot\big(\rho\, \nabla H(x, \nabla_x\beta)\big) = 0,\\
\beta(0, \cdot, \cdot) = \beta_0, \quad \rho(T, \cdot) = \nu_T,
\end{cases} \tag{4.45}
\]
where $\beta_0(x, \rho) = \phi_\rho(x)$, and $\phi_\rho$ is the convex function such that $\nabla\phi_\rho$ pushes $\mu_0$ into $\rho$.

Finally, we mention that one would like to consider value functionals on Wasserstein space that are more general than those starting with the Wasserstein distance.
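The constraint appearing in (4.43) above is the continuity equation $\partial_t\rho + \nabla\cdot(\rho w) = 0$. As a numerical aside (not part of the thesis; the profile, step sizes, and names are my assumptions), for a constant velocity field the translated density $\rho(t, x) = \rho_0(x - wt)$ is an exact solution, which a central-difference residual check confirms in one dimension:

```python
import numpy as np

# For constant w, rho(t, x) = rho0(x - w*t) solves the 1-D continuity
# equation d_t rho + d_x (rho * w) = 0; estimate the residual by central
# differences at a fixed time.
w = 0.7

def rho(t, x):
    return np.exp(-(x - w * t) ** 2)   # transported Gaussian profile

x = np.linspace(-1.0, 1.0, 41)
t, h = 0.3, 1e-5
dt_rho = (rho(t + h, x) - rho(t - h, x)) / (2 * h)
dx_flux = w * (rho(t, x + h) - rho(t, x - h)) / (2 * h)
residual = float(np.max(np.abs(dt_rho + dx_flux)))   # ~ 0 up to O(h^2)
```

The residual is at the level of finite-difference truncation and round-off, consistent with the transport identity.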
One can still obtain such functionals via mass transport by considering more general ballistic costs of the form
\[
b_g(T, v, x) := \inf\Big\{g(v, \gamma(0)) + \int_0^T L(\gamma(t), \dot\gamma(t))\, dt;\ \gamma \in C^1([0, T], M)\Big\}, \tag{4.46}
\]
where $g : M^* \times M \to \mathbb R$ is a suitable function.

Bibliography

[1] L. Ambrosio and J. Feng. On a class of first order Hamilton-Jacobi equations in metric spaces. Journal of Differential Equations, 256(4), 2014.
[2] A. Barton and N. Ghoussoub. On Optimal Stochastic Ballistic Transports. ArXiv e-prints, November 2017.
[3] M. Beiglböck and N. Juillet. On a problem of optimal transport under marginal martingale constraints. ArXiv e-prints, August 2012.
[4] P. Bernard and B. Buffoni. Weak KAM Pairs and Monge-Kantorovich Duality. 2008.
[5] S. Boroushaki and N. Ghoussoub. A Self-dual Variational Approach to Stochastic Partial Differential Equations. ArXiv e-prints, October 2017.
[6] Y. Brenier. Polar factorization and monotone rearrangement of vector-valued functions. Communications on Pure and Applied Mathematics, 44(4):375-417, 1991.
[7] I. Ekeland and R. Témam. Convex Analysis and Variational Problems, chapter 1: Convex Functions. Society for Industrial and Applied Mathematics, Philadelphia, PA, 1999.
[8] L. C. Evans and W. Gangbo. Differential equations methods for the Monge-Kantorovich mass transfer problem. Mem. Amer. Math. Soc., 137, 1999.
[9] A. Fathi and A. Figalli. Optimal transportation on non-compact manifolds. Israel Journal of Mathematics, 175, 2010.
[10] W. Gangbo and R. J. McCann. The geometry of optimal transportation. Acta Math., 177, 1996.
[11] W. Gangbo and A. Swiech. Existence of a solution to an equation arising from the theory of Mean Field Games. J. Differential Equations, 259, 2015.
[12] N. Ghoussoub. Optimal Ballistic Transport and Hopf-Lax Formulae on Wasserstein Space. ArXiv e-prints, May 2017.
[13] T. Mikami and M. Thieullen. Duality theorem for the stochastic optimal control problem. Stochastic Processes and their Applications, 2006.
[14] G.
Monge. Mémoire sur la théorie des déblais et des remblais. De l'Imprimerie Royale, 1781.
[15] R. T. Rockafellar. Existence and duality theorems for convex problems of Bolza. Trans. Amer. Math. Soc., 159, 1971.
[16] R. T. Rockafellar and P. R. Wolenski. Convexity in Hamilton-Jacobi theory I: dynamics and duality. SIAM J. Control and Opt., 39, 2001.
[17] R. T. Rockafellar and P. R. Wolenski. Convexity in Hamilton-Jacobi theory II: envelope representations. SIAM J. Control and Opt., 39, 2001.
[18] V. N. Sudakov. Geometric problems in the theory of infinite-dimensional probability distributions. Proc. Steklov Inst. Math., 141, 1979.
[19] C. Villani. Topics in Optimal Transportation. Graduate Studies in Mathematics. American Mathematical Society, 2003.
[20] C. Villani. Optimal Transport: Old and New. Grundlehren der mathematischen Wissenschaften. Springer, Berlin Heidelberg, 2008.
[21] D. H. Wagner. Survey of measurable selection theorems. SIAM Journal on Control and Optimization, 15(5):859-903, 1977.
Item Metadata
Title | Dynamic and stochastic propagation of Brenier’s optimal mass transport |
Creator | Barton, Alistair |
Publisher | University of British Columbia |
Date Issued | 2018 |
Genre | Thesis/Dissertation |
Type | Text |
Language | eng |
Date Available | 2018-04-25 |
Provider | Vancouver : University of British Columbia Library |
Rights | Attribution-NonCommercial-NoDerivatives 4.0 International |
DOI | 10.14288/1.0365998 |
URI | http://hdl.handle.net/2429/65633 |
Degree | Master of Science - MSc |
Program | Mathematics |
Affiliation | Science, Faculty of; Mathematics, Department of |
Degree Grantor | University of British Columbia |
Graduation Date | 2018-09 |
Campus | UBCV |
Scholarly Level | Graduate |
Rights URI | http://creativecommons.org/licenses/by-nc-nd/4.0/ |
Aggregated Source Repository | DSpace |