UBC Theses and Dissertations


Optimal design of over-the-counter derivatives in a principal-agent framework: an existence result and numerical implementations. Moreno-Bromberg, Santiago, 2008


Full Text


Optimal Design of Over-the-Counter Derivatives in a Principal-Agent Framework: An Existence Result and Numerical Implementations

by Santiago Moreno-Bromberg

B.Sc., National University of Mexico, 2000
M.Sc., CINVESTAV del IPN, 2004

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF Doctor of Philosophy in The Faculty of Graduate Studies (Mathematics)

The University of British Columbia (Vancouver)
August, 2008

© Santiago Moreno-Bromberg 2008

Abstract

This work lies in the intersection of Mathematical Finance, Mathematical Economics and Convex Analysis. In terms of the latter, a new result (to the author's knowledge) on a Lipschitz property of the derivatives of a convex function is presented in the first chapter. An important result on its own, it might also provide a stepping stone to an extension to Hilbert spaces of Alexandrov's theorem on the second derivatives of convex functions.

The second chapter considers the problem of Adverse Selection and optimal derivative design within a Principal-Agent framework. The principal's income is exposed to non-hedgeable risk factors arising, for instance, from weather or climate phenomena. She evaluates her risk using a coherent and law-invariant risk measure and tries to minimize her exposure by selling derivative securities on her income to individual agents. The agents have mean-variance preferences with heterogeneous risk aversion coefficients. An agent's degree of risk aversion is private information and the principal only knows their overall distribution. It is shown that the principal's risk minimization problem has a solution and that, in terms of the pricing schedule, the latter is unique. Finding a solution to the principal's problem requires solving a variational problem with global convexity constraints. In general, this cannot be done in closed form.
To this end, an algorithm to approximate the solutions of variational problems whose set of admissible functions consists of convex functions is presented in the fourth chapter of this work. Several examples are provided.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication

1 Introduction

2 Preliminaries
  2.1 Some results in Convex Analysis
  2.2 U-Convexity
  2.3 Some Contract Theory
    2.3.1 Mechanism Design
    2.3.2 Principal-Agent models and the Revelation and Taxation Principles
    2.3.3 A standard screening model
  2.4 Risk measures in L²

3 Optimal Derivative Design
  3.1 The microeconomic setup
  3.2 Solving the principal's problem
    3.2.1 Minimizing the risk for a given function v
    3.2.2 Minimizing the overall risk
  3.3 A characterization of optimal contracts
    3.3.1 The Lagrangian and the characterization
    3.3.2 An application to entropic risk measures
  3.4 Example: A single benchmark claim
    3.4.1 Description of the constrained claims
    3.4.2 The optimization problem
    3.4.3 A solution to the principal's problem

4 An Algorithm to Solve Variational Problems with Convexity Constraints
  4.1 Setup
  4.2 Description of the algorithm
  4.3 Convergence of the algorithm
  4.4 Examples
    4.4.1 The Mussa-Rosen problem in a square
    4.4.2 The Mussa-Rosen problem with a non-uniform density of types
    4.4.3 An example with non-quadratic cost
    4.4.4 Minimizing risk
  4.5 A catalogue of put options
    4.5.1 An existence result
    4.5.2 An algorithm for approximating the optimal solution

5 Conclusions

Bibliography

Appendices

A Matlab and Maple Codes
  A.1 MatLab code for the example in Section 4.4.1
  A.2 MatLab code for the example in Section 4.4.2
  A.3 MatLab code for the example in Section 4.4.3
  A.4 MatLab code for the example in Section 4.4.4
  A.5
Maple code for the example in Section 4.5.2

List of Tables

4.1 The optimal function v and the strikes. Entropic case
4.2 The optimal function v and the strikes. Case 1
4.3 The optimal function v and the strikes. Case 2

List of Figures

4.1 Optimal solution for uniformly distributed agent types
4.2 Optimal solution for normally distributed agent types
4.3 The optimal function v
4.4 Optimal solution for underwriting call options. Entropic case
4.5 Optimal solution for underwriting put options. Case 1
4.6 Optimal solution for underwriting put options. Case 2

Acknowledgements

I would like to thank Professor Ivar Ekeland for having given me the opportunity to come to UBC, for all that he has taught me, for the guidance he has given me and for being such a magnanimous person. I would also like to acknowledge the support Professor Ulrich Horst has given me, as well as the time he dedicated to the many fruitful discussions we have had. Professor Guillaume Carlier has always been generous in sharing his knowledge and deep insight on many topics, which I sincerely appreciate. Professors Pierre-André Chiappori, Ulrich Haussmann, Yves Lucet and Michael Peters have been great role models for me. I thank them for their kindness, patience and thoroughness when answering my questions. I would like to acknowledge the help and support I have received from the faculty and staff in the Mathematics Department at UBC, the Pacific Institute for the Mathematical Sciences and the Mathematics Department at CINVESTAV, especially Professors Alejandro Adem, Isidoro Gitler and Onésimo Hernández-Lerma. I cannot leave my office mates unmentioned: Ali, Amir, Bruno, Enrique, Germán, Gilles, José, Michele and Ramón. I have discussed Math with them, shared good and not-so-good times together, and learned about the world in the process.
Finally, I would like to express my gratitude to the Mexican Council for Science and Technology for its financial support.

Dedication

To those who, with their love and support, have made this work possible. There is no need to name them; they know who they are.

Chapter 1

Introduction

The origins of the theory of asset pricing can be traced back to the early work of F. Black, M. Scholes and R. Merton. Their work led to the arbitrage paradigm for pricing financial derivatives. Two essential assumptions of this theory are market completeness and the fact that the underlying assets are liquidly traded in the financial market. To a certain extent, products written on companies' equity can be priced in a satisfactory manner using arbitrage arguments. The latter transform the problem of pricing a certain derivative into that of replicating the product's payoffs using the underlying assets. When such replication is not possible, either due to incompleteness of the market or the unavailability of the assets required to form a hedging portfolio, alternative pricing mechanisms must be used. An example of such a scenario is the pricing of weather derivatives.

The history of financial derivatives whose underlying is the weather dates back to 1996, with the famous contract between Aquila Energy and Consolidated Edison Co. From that point on, trading in these kinds of contracts started to gain momentum, first with Enron's EnronOnline unit and then with their introduction on the Chicago Mercantile Exchange in 1999. Other financial products based on non-tradeable risk include Collateralized Mortgage Obligations (first issued by Fannie Mae in 1985), Asset Backed Securities and risk bonds. All these STRUCTURED PRODUCTS are the end result of a process known as SECURITIZATION, which transfers risk into the financial markets.
The idea of pooling and underwriting risk that cannot be hedged through investments in the capital markets alone has long since become a key factor driving the convergence of insurance and financial markets. As these products became more popular, and given that they fall outside the scope of the traditional pricing-by-replication arguments, the need for an adequate pricing theory became evident.

Structured products are often written on non-tradable underlyings, tailored to the issuer's specific needs and traded "over the counter". Insurance companies, for instance, routinely sell weather derivatives or risk bonds to customers that cannot go to the capital markets directly and/or seek financial securities with low correlation with stock indices as additions to diversified portfolios. The market for such claims is generally incomplete and illiquid. In such a market framework, preference-based valuation principles that take into account the characteristics and endowments of trading partners may be more appropriate for designing, pricing and hedging contingent claims. These valuation principles have become a major topic of current research in economics and financial mathematics. They include rules of Pareto optimal risk allocation ([25], [32]), market completion and dynamic equilibrium pricing ([30], [31]) and, in particular, utility indifference arguments ([2], [3], [5], [6], [19], ...). The latter assumes a high degree of market symmetry. For indifference valuation to be a pricing rather than a valuation principle, the demand for a financial security must come from identical agents with known preferences and negligible market impact, while the supply must come from a single principal. When the demand comes from heterogeneous individuals with hidden characteristics, indifference arguments do not always yield an appropriate pricing scheme.

This work deviates from the assumption of investor homogeneity and allows for heterogeneous agents.
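The heterogeneity discussed above can be written schematically. The notation below (θ, X, π) is chosen here for illustration and is not taken from the thesis; in particular the normalization of the variance term is a choice made here. An agent of mean-variance type θ who buys a claim X at price π values the trade as

```latex
% Hedged sketch: mean-variance preferences with a private risk-aversion
% parameter \theta; the symbols and the variance normalization are choices
% made here for illustration.
U_\theta(X, \pi) \;=\; \mathbb{E}[X] \;-\; \frac{\theta}{2}\,\operatorname{Var}[X] \;-\; \pi .
```

Heterogeneity enters only through θ, which is the agent's private information; the principal observes only the distribution of θ across the population.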
The model considers a single principal with a random endowment whose goal is to lay off some of her risk with heterogeneous agents by designing and selling derivative securities on her income. The agents have mean-variance preferences. An agent's degree of risk aversion is private information, i.e. hidden from the principal. However, the principal does know the distribution of the risk aversion coefficients. This setting puts the principal at an informational disadvantage. If all the agents were homogeneous, the principal, when offering a structured product to a single agent, could extract the indifference (maximum) price from each trading partner. In the presence of agent heterogeneity this is no longer possible, either because the agents would hide their characteristics from the principal or prefer another asset offered by the principal but designed and priced for another customer. The latter would render the principal's knowledge of the distribution of the agents' types useless. The principal therefore needs to design a catalogue of derivatives and price them so as to achieve the second best solution. Although the problem of optimal mechanism design in principal-agent games is a standard problem of economic theory, it has only recently been applied to structured products.

The problem of optimal derivative design in a Principal-Agent framework with informed agents and an uninformed principal was first addressed in a recent paper by Carlier, Ekeland and Touzi [14]. With the agents being the informed party, theirs is a screening model. The literature on screening within the Adverse Selection framework can be traced back to Mussa and Rosen [39], where both the principal's allocation rule and the agents' types are one-dimensional. Armstrong [1] relaxes the hypothesis of agents being characterized by a single parameter.
He shows that, unlike in the one-dimensional case, "bunching" of the first type is robust when the types of the agents are multi-dimensional. In their seminal paper, Rochet and Choné [44] further extend this analysis. They provide a characterization of the contracts, determined by the (non-linear) pricing schedule, that maximize the principal's utility under the constraints imposed by the asymmetry of information in the models. Building on their work, Carlier, Ekeland and Touzi [14] study a Principal-Agent model of optimal derivative design where the agents' preferences are of mean-variance type and their multi-dimensional types characterize their risk aversions and initial endowments. They assume that there is a direct cost to the principal when she designs a contract for an agent, and that the principal's aim is to maximize profits.

This work starts from a similar set-up, but the idea that providing products carries a cost is substituted for the idea that traded contracts expose the principal to additional risk, as measured by a convex risk measure, in exchange for a known revenue. This may be viewed as a partial extension of the work by Barrieu and El Karoui ([2], [3]) to an incomplete information framework.

The principal's aim is to minimize her risk exposure by trading with the agents, subject to the standard incentive compatibility and individual rationality conditions on the agents' choices. In order to prove that the principal's risk minimization problem has a solution, incentive compatible catalogues are characterized in terms of U-convex functions. When the impact of a single trade on the principal's revenues is linear, as in Carlier, Ekeland and Touzi [14], the link between incentive compatibility and U-convexity is key to establishing the existence of an optimal solution. In the model presented here, the impact is non-linear, as a single trade has a non-linear impact on the principal's risk assessment.
This non-linearity is reflected in a non-standard variational problem where the objective cannot be written as the integral of a given Lagrangian. Instead, this problem can be decomposed into a standard variational part representing the aggregate income of the principal, plus the minimization of the principal's risk evaluation, which depends on the aggregate of the derivatives traded. This non-standard formulation notwithstanding, the principal's problem can be considered within the framework of variational problems subject to global convexity constraints. Sufficient conditions that guarantee that the principal's optimization problem has a (unique) solution are given, as well as a partial (implicit) characterization of the optimal contracts.

Since the principal's problem can only be solved in closed form in very particular cases, this work also includes an algorithm to tackle variational problems with convexity constraints. Arguably, Newton's problem of the body of minimal resistance is the original problem of this kind. It consists of finding the shape of a solid that encounters the least resistance when moving through a fluid. This is equivalent to finding a convex function from a convex domain (originally a disk) in ℝ² to ℝ that minimizes a certain functional (see Section 4). Newton's original "solution" assumed radial symmetry. This turned out to be false, as shown by Brock, Ferone and Kawohl in [10], which sparked new interest in the study of variational problems with convexity constraints. One can also find these kinds of problems in finance and economics. Starting in 1978 with the seminal paper of Mussa and Rosen [39], the study of non-linear pricing as a means of market screening under Adverse Selection has produced a considerable stream of contributions ([1], [14], [44], ...).
In models where goods are described by a single quality and the set of agents is differentiated by a single parameter, it is in general possible to find closed form solutions for the pricing schedule. This is, however, not the case when multidimensional consumption bundles and agent types are considered. Although Rochet and Choné [44] provided conditions for the existence of an optimal pricing rule and fully characterized the ways in which markets differentiate in a multidimensional setting, they also pointed out that it is only in very special cases that one can expect to find closed form solutions. The same holds true for models where the set of goods lies in an infinite-dimensional space, even when agent types are one-dimensional. Given that a great variety of problems, such as product line design, optimal taxation, structured derivatives design, etc., can be studied within the scope of these models, there is a clear need for robust and efficient numerical methods that approximate their optimal pricing schedules. Note that this also provides an approximation of the optimal "products". Most of the papers mentioned above eventually face solving a variational problem under convexity constraints. This family of problems has lately been studied from different perspectives. Carlier and Lachand-Robert [15] have studied the C¹ regularity of minimizers when the functional is elliptic and the admissible functions satisfy a Dirichlet-type boundary condition. Lachand-Robert and Pelletier [35] characterize the extreme points of a functional depending only on ∇f over a set of convex functions with uniform convex bounds. The algorithm presented in this work is based on the idea of approximating a convex function by an affine envelope in order to solve variational problems under convexity constraints. This deviates from previous work by Choné and Hervé [18] and Carlier, Lachand-Robert and Maury [16], where the authors use finite element methods.
In the former case, a conformal (interior) method is used and a non-convergence result is given. As a consequence, the latter uses an exterior approximation method, which is indeed found to be convergent for the classical projection problem in H¹. Lachand-Robert and Oudet present in [34] an algorithm for minimizing functionals within convex bodies that shares some similarities with the one presented here. For a
It should be noted that the results contained in this thesis stem from work in progress by Ivar Ekeland and the author (Chapter 2) and the papers of the author and Ulrich Horst [29], which will appear in Mathematics and Financial Economics (Chapter 3), and Ivar Ekeland [22], which has been submitted to Numerische Mathematik (Chapter 4). The remainder of this work is organized as follows: Chapter 2 is divided into four sections. First a presentation of properties of convex functions to be used in the following two chapters is given. This section starts with well know facts and ends with the new (to the author’s knowledge) result that the derivatives of a convex function from a Hubert space into the Real num bers are densely Lipschitz. The latter is given as Corollary 2.1.19. Second is some standard theory on U-convex functions. Third a brief crash course in Mechanism Design. Finally a description of Convex Risk measures and some of their more useful (for this work’s purposes) properties. Chapter 3 presents a pricing method for over-the-counter financial derivatives to be 6  Chapter 1. Introduction traded between a single seller (the Principal) and a continuum of buyers (the Agents). The Principal evaluates the performance of her pricing sched ule via a convex risk measure. The main result in this chapter, Theorem 3.1.3, is one of existence and (partial) uniqueness of a pricing rule. Chapter 4 presents an algorithm to estimate the solutions of variational problems subject to convexity constraints. These include, but are not limited to, problems found in the previous chapter. A proof of convergence, Theorem 4.3.7, and various examples are provided. Finally, all the computer code is given in the Appendix.  7  Chapter 2  Preliminaries This work lies in the intersection of the Theory of Contracts and Convex Analysis. The purpose of this chapter is to provide the reader with the sufficient theory from these fields. 
It must be noted that some of the results contained in the section devoted to Convex Analysis are not used in the following chapters. However, they have been included to provide a broader view of some powerful (and beautiful) properties of convex functions. Special attention is called to Corollary 2.1.19, which states that the derivatives of a convex function defined on a Hilbert space are densely locally Lipschitz. Although the theory of Convex Risk Measures can be set within that of Convex Analysis, this topic is treated in a separate section.

2.1 Some results in Convex Analysis

This section contains the theory on convex functions used throughout this work. In this case, the domains considered are subsets of Hilbert spaces. The results on existence of optimal derivatives are set in L² of a particular probability space, whereas the work on numerical methods is set in ℝⁿ. In what follows, H denotes a separable Hilbert space.

Definition 2.1.1 A convex function f : H → ℝ is said to be PROPER if f > −∞ and f is not identically +∞.

Definition 2.1.2 Given a convex function f : H → ℝ, the set Dom(f) := {x ∈ H | |f(x)| < ∞} is called the EFFECTIVE DOMAIN of f.

The following characterization of proper, lower semicontinuous convex functions plays a central role:

Proposition 2.1.3 Let f : H → ℝ be a proper, lower semicontinuous convex function. Then there is another convex, lower semicontinuous function f* such that

  f(x) = sup_{y∈H} {⟨x, y⟩ − f*(y)}.   (2.1)

f* is then called the CONVEX CONJUGATE of f (and vice versa). Any proper, lower semicontinuous convex function f satisfies f = (f*)*. The set

  ∂f(x) := {y ∈ H | f(x) = ⟨x, y⟩ − f*(y)}

is called the SUBDIFFERENTIAL of f at x. In what follows, and unless noted otherwise, f : H → ℝ will be a proper, lower semicontinuous convex function. Notice that the set

  epi(f) := {(x, r) | x ∈ H, r ≥ f(x)},

which is called the EPIGRAPH of f, is a closed and convex set.
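The conjugacy in equation (2.1) can be checked numerically in one dimension. The following sketch (the quadratic example and the grid discretization are choices made here, not part of the text) uses the fact that for f(x) = x²/2 the conjugate is f*(y) = y²/2, so the biconjugate recovers f on the grid.

```python
# Discrete Legendre transform on a finite grid: a 1-D stand-in for
# f*(y) = sup_x { <x, y> - f(x) } from equation (2.1).
def conjugate(f, grid):
    """Return y -> max over the grid of x*y - f(x)."""
    return lambda y: max(x * y - f(x) for x in grid)

grid = [i / 100.0 for i in range(-200, 201)]   # grid points in [-2, 2]
f = lambda x: 0.5 * x * x                      # proper, convex, continuous
f_star = conjugate(f, grid)                    # equals y^2/2 on [-2, 2]
f_bistar = conjugate(f_star, grid)             # (f*)* = f, as in Prop. 2.1.3
```

The maximizing x in the inner supremum is exactly the subdifferential relationship above: the supremum is attained at those x with y ∈ ∂f(x).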
Since H is reflexive, epi(f) is weakly closed.

Proposition 2.1.4 f remains lower semicontinuous with respect to weak (σ(H, H*)) convergence.

Proposition 2.1.4 can be used to prove the following useful result, which will appear in Chapters 3 and 4.

Proposition 2.1.5 Assume C ⊂ H is a closed and bounded convex set. Then the minimization problem

  inf_{x∈C} f(x)

has at least one solution.

PROOF. Let {xₙ} be a minimizing sequence of f on C and let a = inf_{x∈C} f(x). Since C is closed, bounded and convex, there is a subsequence {x_{n_k}} and x̄ ∈ C such that x_{n_k} converges weakly to x̄. It follows from Proposition 2.1.4 that

  liminf_{k→∞} f(x_{n_k}) ≥ f(x̄),

so f(x̄) = a and x̄ is a minimizer. □

Proposition 2.1.6 Let I(f) be the interior of Dom(f). Then

(i) ∂f(x) ≠ ∅ for any x ∈ I(f);
(ii) f is continuous on I(f). In fact, it is locally Lipschitz.

The following proposition shows that the supremum in equation (2.1) can actually be taken over a bounded set for points in I(f).

Proposition 2.1.7 Define, for any a > 0, B(x, a) := {y ∈ H | ‖x − y‖ < a}. Let x ∈ I(f); then there exists ε > 0 such that the set

  {y* ∈ H | f(y) = ⟨y, y*⟩ − f*(y*)}

is bounded for all y ∈ B(x, ε).

PROOF. Note that if x belongs to I(f), then f is continuous at x and ∂f(x) ≠ ∅. Let δ > 0 be such that B(x, δ) ⊂ I(f). It follows from the convexity of f and the fact that ∂f(y) ≠ ∅ for all y ∈ B(x, δ) that for all h such that ‖h‖ ≤ δ/2, all y in B(x, δ/2), and all y* ∈ ∂f(y),

  ⟨h, y*⟩ ≤ f(y + h) − f(y).   (2.2)

Taking the supremum over all {h | ‖h‖ ≤ δ/2} on both sides of expression (2.2) yields

  (δ/2)‖y*‖ ≤ sup_{y∈B(x,δ)} 2|f(y)| =: K < ∞.

Setting ε = δ/2 concludes the proof. □

The following is a well known property of convex functions. It is frequently used in the current literature (see Carlier [12]), but it probably dates back to Mosco or Joly (see for example [40]).

Lemma 2.1.8 Let A ⊂ ℝⁿ be a convex, open set.
Assume the sequence of convex functions {f_k : A → ℝ} converges uniformly to f. Then ∇f_k → ∇f almost everywhere on A.

PROOF. Denote by D_i f the derivative of f in the direction e_i. The convexity of f_k and f implies the existence of a set B ⊂ A, with μ(A∖B) = 0, such that the partial derivatives of f_k and f exist and are continuous in B. Let x ∈ B. To prove that D_i f_k(x) → D_i f(x), consider η > 0 such that x ± ηe_i ∈ A. Since f_k is convex,

  (f_k(x + he_i) − f_k(x))/h ≥ D_i f_k(x) ≥ (f_k(x) − f_k(x − he_i))/h

for all 0 < h < η. Hence

  (f_k(x + he_i) − f_k(x))/h − D_i f(x) ≥ D_i f_k(x) − D_i f(x) ≥ (f_k(x) − f_k(x − he_i))/h − D_i f(x).

The left-hand side of the first inequality is equal to

  (f_k(x + he_i) − f(x + he_i))/h + (f(x) − f_k(x))/h + (f(x + he_i) − f(x))/h − D_i f(x).

For ε > 0 let 0 < δ ≤ η be such that

  |(f(x + he_i) − f(x))/h − D_i f(x)| < ε for 0 < h ≤ δ.

Let N ∈ ℕ be such that |f_k(x) − f(x)| < εδ for all k > N. Hence, taking h = δ,

  (f_k(x + δe_i) − f(x + δe_i))/δ < ε and (f(x) − f_k(x))/δ < ε.

Thus

  D_i f_k(x) − D_i f(x) ≤ 3ε

for all k > N and all x ∈ B. The same argument shows that −3ε ≤ D_i f_k(x) − D_i f(x) for all k > N and all x ∈ B, which concludes the proof. □

Proposition 2.1.9 Let U ⊂ ℝⁿ be a convex, compact set and let g : U → ℝ be a convex function such that, for all x ∈ U, the subdifferentials ∂g(x) are contained in Q for some compact set Q. Then there exists a sequence {g_j : U → ℝ} such that g_j ∈ C¹(U) and g_j → g uniformly on U.

PROOF. Fix δ > 0 and define U_δ := {(1 + δ)x | x ∈ U}. Extend g to U_δ by setting

  g(x) := sup_{y∈U} max_{s_y∈∂g(y)} {g(y) + ⟨x − y, s_y⟩} for x ∈ U_δ.

Notice that in this way the convexity of g and the bounds on ∂g(x) are preserved. Let K_ε be a family of mollifiers (see, for instance, [27]); then the functions

  h_ε := g * K_ε

are convex, smooth, and they converge uniformly to g on U as long as ε is small enough that the set V_ε := {x ∈ U_δ | d(x, ∂U_δ) > ε} contains U. Let n ∈ ℕ be such that the sequence {g_j := h_{1/j}}_{j≥n} has the required properties. □
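Lemma 2.1.8 and Proposition 2.1.9 can be illustrated in one dimension. The sketch below is an assumption-laden stand-in: it uses a box kernel rather than the smooth mollifiers K_ε of the proposition, and the example g(x) = |x| is chosen here. Averaging g over shrinking windows yields smooth convex approximations that converge uniformly to g, and their derivatives converge to g′ at every point where g is differentiable, here every x ≠ 0.

```python
def mollify(g, eps, n=2000):
    """Box-mollification (illustration only, not the K_eps of Prop. 2.1.9):
    g_eps(x) = (1/(2*eps)) * integral of g over [x - eps, x + eps],
    computed with a midpoint rule. It is convex whenever g is."""
    def g_eps(x):
        h = 2.0 * eps / n
        mids = (x - eps + (i + 0.5) * h for i in range(n))
        return sum(g(t) for t in mids) * h / (2.0 * eps)
    return g_eps

def deriv(f, x, h=1e-4):
    """Central finite difference, standing in for the derivative."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

g = abs                      # convex, with a kink at 0
g_wide = mollify(g, 0.5)     # coarse smoothing: g_wide(0) = 0.25, not 0
g_fine = mollify(g, 0.05)    # finer smoothing: uniformly closer to g
```

Away from the kink the smoothed functions agree with g, and their derivatives approach ±1; only near 0, where g is not differentiable, does the approximation round the corner.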
h 3 := } {g E:I  6 if it is the denumerable in A set G c H is called RESIDUAL or dense G tersection of open, dense sets. By the Baire category theorem, a residual set is dense. An Apslund space is a Banach space in which every convex, lower 5 subset. The semicontinuous function is Fréchet differentiable on a dense C 12  Chapter 2. Preliminaries following theorem proved one half of a long standing conjecture that a space is Apsiund if and only if it accepts an equivalent norm that is differentiable off the origin. In what follows f’(x) denotes the Fréchet-derivative of x.  f  at  Theorem 2.1.10 (Ekeland, Lebourg /21]) Let X be a Banach space that  supports a Frechet-differentiable function ‘P : X —* 1 such that ‘I’(O) > 0, ‘P(x) 0 for some ball B(0, cr), ‘P = 0 on the complement of B(0, os). Let Y be an abstract set and f : X x Y —* IR a Frechet-differeritiable function such that for all x e X, inf f(x,y) =: F(x) > —00 yEY  Assume that for every x E X there exist 0> 0 and {f(z,y)  I liz —xli  i,  > 0 so that  f(z,y) F(x)+0}  is a bounded set in X*, and {z—*f(z,y)  I  f(x,y)F(x)+0}  is equicontinuous at x in X. Then there is a residual set G C X where F is Fréchet differentiable. Remark 2.1.11 A space X supports a function ‘P if it accepts an equivalent  norm that is differentiable off the origin. Lemma 2.1.12 Let H be a Hubert space and F: H  R a proper, lower semicontinuous convex function. Then there is a residual set G C H such that F is CONTINUOUSLY Frechét-differentiable on G. —*  PROOF. To see that H supports a function ‘P as described in the statement  is Fréchet-differentiable off the of Theorem 2.1.10 notice that lix Ii = origin. Let f(x, y) = F*(y) — (x, y) and define G(x) := infeH f(x, y). It follows from Proposition 2.1.7 that if G > —oo then for all x e H, there exist 13  Chapter 2. 
η, θ > 0 such that the set

  {f′(z, y) | ‖z − x‖ ≤ η, f(z, y) ≤ G(x) + θ}

is bounded, since it coincides with the set

  {−y ∈ H | ‖x − z‖ ≤ η, f(z, y) ≤ G(x) + θ}.

Moreover, the set of functions

  {z → f′(z, y) | f(x, y) ≤ G(x) + θ}

is clearly equicontinuous at each x ∈ H, since f′(z, y) = −y. Hence the hypotheses of Theorem 2.1.10 are satisfied. It is shown in [20], page 466, that for any ε > 0 the sets of points x for which there exist η, θ > 0 such that

  ‖z₁ − x‖ ≤ η, ‖z₂ − x‖ ≤ η, f(z₁, y₁) ≤ G(x) + θ, f(z₂, y₂) ≤ G(x) + θ  ⟹  ‖f′(z₁, y₁) − f′(z₂, y₂)‖ ≤ ε   (2.3)

are open and dense. In particular, given ε > 0, we can find η, θ > 0 such that

  f(x, y₁) ≤ G(x) + θ, f(z, y₂) ≤ G(z) + θ, ‖x − z‖ ≤ η  ⟹  ‖f′(x, y₁) − f′(z, y₂)‖ ≤ ε/3.   (2.4)

Therefore

  ‖G′(x) − G′(z)‖ ≤ ‖G′(x) − f′(x, y₁)‖ + ‖f′(x, y₁) − f′(z, y₂)‖ + ‖f′(z, y₂) − G′(z)‖.   (2.5)

Since f(x, yₙ) → G(x) implies f′(x, yₙ) → G′(x) (see (61) and (62) in [20]), y₁ and y₂ can be chosen such that

  ‖G′(x) − f′(x, y₁)‖ ≤ ε/3 and ‖f′(z, y₂) − G′(z)‖ ≤ ε/3.   (2.6)

Inserting (2.6) and (2.4) into (2.5) concludes the proof. □

In the case of finite dimensional spaces, the set G is in fact of full measure. The next corollary is used in the proof of convergence of the numerical algorithm in Chapter 4.

Corollary 2.1.13 Let g : ℝⁿ → ℝ be a convex function. Then the mapping x → ∇g(x) is well defined and continuous almost everywhere.
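Corollary 2.1.13 can be seen concretely in one dimension (the example g(x) = max(x, 0) and the sample points below are chosen here for illustration): the one-sided difference quotients agree, so the gradient exists and varies continuously, everywhere except at the single kink x = 0, a set of measure zero.

```python
def right_deriv(g, x, h=1e-7):
    """One-sided forward difference quotient."""
    return (g(x + h) - g(x)) / h

def left_deriv(g, x, h=1e-7):
    """One-sided backward difference quotient."""
    return (g(x) - g(x - h)) / h

g = lambda x: max(x, 0.0)     # convex; differentiable except at 0

# Gap between the one-sided derivatives: 1 at the kink, ~0 elsewhere.
kink_gap = right_deriv(g, 0.0) - left_deriv(g, 0.0)
smooth_gaps = [abs(right_deriv(g, x) - left_deriv(g, x))
               for x in (-1.0, -0.3, 0.2, 1.5)]
```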
Corollary 2.1.13 can be thought of as the first half of Alexandrov's theorem, which states that a convex function f : ℝⁿ → ℝ is twice differentiable (actually C²) almost everywhere in the following sense:

Definition 2.1.14 A convex function f : ℝⁿ → ℝ is said to be twice A-DIFFERENTIABLE at a point x₀ ∈ ℝⁿ if ∇f(x₀) exists and if there is a symmetric, positive definite matrix D²f(x₀) such that for any ε > 0 there is δ > 0 such that if ‖x − x₀‖ < δ, then

  sup { ‖x* − ∇f(x₀) − D²f(x₀)(x − x₀)‖ : x* ∈ ∂f(x) } ≤ ε‖x − x₀‖.   (2.7)

It turns out that in the case of Hilbert spaces, the mapping x → g′(x) (the Fréchet derivative of g at x) is locally Lipschitz on a dense set. In order to prove the latter, the following definitions are necessary:

Definition 2.1.15 A function g : H → ℝ is said to be locally upper supported (l.u.s.) by h : H → ℝ at x ∈ H if:

(i) g(x) = h(x);
(ii) g(y) ≤ h(y) for all y in a neighborhood of x.

The following theorem is a "smooth version" of the Ekeland variational principle. It will be used to characterize the local upper support functions of a convex function.
Consider $\varepsilon > 0$ and $x_0$ such that $g(x_0) < \inf_H g + \varepsilon$. Given that $g$ is continuous on $\bar{B}(x, a)$, we can assume $x_0 \in \bar{B}(x, a)$, since this closed ball is the only region where $g$ is finite. Let $\lambda > 0$ be such that $B(x_0, \lambda)$ is contained in $B(x, a)$. It follows from Theorem 2.1.16 that there exist $\bar{x}$ and $w$ in $H$ such that

    $g(x') + \frac{\varepsilon}{\lambda^2}\|x' - w\|^2 \ge g(\bar{x}) + \frac{\varepsilon}{\lambda^2}\|\bar{x} - w\|^2$ for all $x' \in H$,

while

    $\|x_0 - \bar{x}\| < \lambda$ and $g(\bar{x}) < \inf_H g + \varepsilon.$

Since $-g = f$ on $\bar{B}(x, a)$, the inequality above implies that, as long as $x'$ belongs to $\bar{B}(x, a)$, the function

    $Q(x') := f(\bar{x}) + \frac{\varepsilon}{\lambda^2}\|x' - w\|^2 - \frac{\varepsilon}{\lambda^2}\|\bar{x} - w\|^2$

is a local upper support function of $f$ at $\bar{x}$. $\square$

Theorem 2.1.18 (Ekeland, Moreno) Let $f : H \to \mathbb{R}$ be a proper, lower semicontinuous convex function. If $f$ is l.u.s. by a quadratic function at $\bar{x}$, then $f'$ is locally Lipschitz at $\bar{x}$.

PROOF. Let $Q : H \to \mathbb{R}$ be a quadratic function as in Proposition 2.1.17 such that $f$ is l.u.s. by $Q$ at $\bar{x}$ on $B := B(\bar{x}, \bar{\delta})$ for some $\bar{\delta} > 0$, say

    $Q(y) = f(\bar{x}) + \beta\|y - w\|^2 - \beta\|\bar{x} - w\|^2,$

where $\beta := \varepsilon/\lambda^2$ for notational convenience. It follows from Proposition 2.1.7 that it can be assumed without loss of generality that $\sup_{x \in B} \|f'(x)\| < \infty$. Notice that

    $f'(\bar{x}) = Q'(\bar{x})$ and $Q'(y) = 2\beta(y - w),$

therefore

    $w = \bar{x} - (2\beta)^{-1} f'(\bar{x}).$

This in turn allows $Q$ to be expressed as

    $Q(y) = f(\bar{x}) + \beta\|y - \bar{x} + (2\beta)^{-1} f'(\bar{x})\|^2 - (4\beta)^{-1}\|f'(\bar{x})\|^2.$    (2.8)

Define

    $\varphi(y) := f(y) - \langle y - \bar{x}, f'(\bar{x})\rangle - f(\bar{x}),$

and notice that $\varphi(\bar{x}) = 0$ and $\varphi'(\bar{x}) = 0$. Expanding (2.8) and using the fact that $Q(y) \ge f(y)$ for all $y \in B$,

    $\varphi(y) \le Q(y) - \langle y - \bar{x}, f'(\bar{x})\rangle - f(\bar{x}) = \beta\|y - \bar{x}\|^2.$

Thus, for $y \in B$,

    $\varphi(y) \le \beta\|y - \bar{x}\|^2.$

By convexity, for all $x, y \in B$,

    $\varphi(y) \ge \varphi(x) + \langle y - x, \varphi'(x)\rangle.$    (2.9)

Since $\varphi'(\bar{x}) = 0$, equation (2.9) can be written as

    $\varphi(y) \ge \varphi(x) + \langle y - x, \varphi'(x) - \varphi'(\bar{x})\rangle.$
Let $t > 0$ be such that $y := \bar{x} + t(\varphi'(x) - \varphi'(\bar{x}))$ belongs to $B$ for all $x \in B$. Such a number exists due to the boundedness of $f'$ on $B$. Then, since $\varphi \ge 0$,

    $\beta t^2 \|\varphi'(x) - \varphi'(\bar{x})\|^2 \ge \varphi(y) \ge \varphi(x) + \langle \bar{x} - x + t(\varphi'(x) - \varphi'(\bar{x})),\ \varphi'(x) - \varphi'(\bar{x})\rangle \ge \langle \bar{x} - x, \varphi'(x) - \varphi'(\bar{x})\rangle + t\|\varphi'(x) - \varphi'(\bar{x})\|^2.$

Hence

    $t\|\varphi'(x) - \varphi'(\bar{x})\|^2 \le \langle x - \bar{x}, \varphi'(x) - \varphi'(\bar{x})\rangle + \beta t^2\|\varphi'(x) - \varphi'(\bar{x})\|^2.$

The expression above can be written as

    $\langle x - \bar{x}, \varphi'(x) - \varphi'(\bar{x})\rangle \ge (t - \beta t^2)\|\varphi'(x) - \varphi'(\bar{x})\|^2.$

Using the Cauchy–Schwarz inequality yields

    $\|x - \bar{x}\|\,\|\varphi'(x) - \varphi'(\bar{x})\| \ge (t - \beta t^2)\|\varphi'(x) - \varphi'(\bar{x})\|^2.$    (2.10)

Note that $\varphi'(\bar{x}) = 0$ and $\varphi'(x) = f'(x) - f'(\bar{x})$, so expression (2.10) becomes

    $\|f'(\bar{x}) - f'(x)\| \le (t - \beta t^2)^{-1}\|\bar{x} - x\|.$

Since $t$ can be chosen so that $t - \beta t^2 > 0$, the proof is complete. $\square$

As a direct implication of Proposition 2.1.17 and Theorem 2.1.18 one obtains the following:

Corollary 2.1.19 Let $f : H \to \mathbb{R}$ be a proper, lower semicontinuous convex function. Then the assignment

    $x \mapsto f'(x)$

is locally Lipschitz on a dense subset of $H$.

2.2 U-Convexity

The analysis of optimality in Chapter 3 is based on the notion of U-convex functions. These can be thought of as a generalization of convex functions, and they can be found throughout the literature on optimization within Principal-Agent models (see, for example, [11] and [14]).

Definition 2.2.1 Let two spaces $A$ and $B$ and a function $U : A \times B \to \mathbb{R}$ be given.

(i) The function $f : A \to \mathbb{R}$ is called U-convex if there exists a function $p : B \to \mathbb{R}$ such that

    $f(a) = \sup_{b \in B} \{U(a, b) - p(b)\}.$

(ii) For a given function $p : B \to \mathbb{R}$, the U-conjugate $p^U(a)$ of $p$ is defined by

    $p^U(a) = \sup_{b \in B} \{U(a, b) - p(b)\}.$

(iii) The U-subdifferential of $p$ at $b$ is given by the set

    $\partial_U p(b) := \{a \in A \mid p^U(a) = U(a, b) - p(b)\}.$

(iv) If $a \in \partial_U p(b)$, then $a$ is called a U-subgradient of $p$ at $b$.

The following proposition is analogous to the characterization of proper, lower semicontinuous, convex functions as those that coincide with their twice convex conjugate. It shows that a function is U-convex if and only if it equals the U-conjugate of its U-conjugate.
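The double-conjugation identity can be illustrated numerically on discrete grids. The sketch below is not part of the thesis: the grids standing in for $A$ and $B$, the sample utility $U(a,b) = ab - b^2/2$ and the function $p$ are arbitrary choices made for the example. It builds a U-convex function $f = p^U$ from an arbitrary $p$ and checks that $(f^U)^U$ recovers $f$.

```python
import numpy as np

# Discrete illustration of double U-conjugation (illustrative choices):
A = np.linspace(0.0, 1.0, 60)          # grid standing in for the space A
B = np.linspace(0.0, 2.0, 90)          # grid standing in for the space B
Umat = A[:, None] * B[None, :] - 0.5 * B[None, :] ** 2   # Umat[i, j] = U(A[i], B[j])

p = np.sin(3.0 * B) + B                # an arbitrary function on B
f = np.max(Umat - p[None, :], axis=1)  # f(a) = sup_b {U(a,b) - p(b)}: U-convex by construction
fU = np.max(Umat - f[:, None], axis=0)     # f^U(b)  = sup_a {U(a,b) - f(a)}
fUU = np.max(Umat - fU[None, :], axis=1)   # (f^U)^U(a)

# (f^U)^U <= f holds for any f; equality characterizes U-convexity.
assert np.all(fUU <= f + 1e-12)
assert np.allclose(fUU, f, atol=1e-10)
# The conjugation also gives f^U <= p: p is replaced by a smaller
# "U-convexified" price without changing f.
assert np.all(fU <= p + 1e-12)
```

Because $f$ is defined as a U-conjugate, it is U-convex by construction, which is exactly the case where Proposition 2.2.2 forces $(f^U)^U = f$; starting instead from a non-U-convex $f$ would make the first inequality strict somewhere.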
Proposition 2.2.2 A function $f : A \to \mathbb{R}$ is U-convex if and only if $(f^U)^U = f$.

PROOF.

(i) Let us first assume that $f = (f^U)^U$. Then

    $f(a) = \sup_{b \in B} \{U(a, b) - f^U(b)\},$

and hence $f$ is U-convex.

(ii) Conversely, if $f$ is U-convex, then there is a function $p : B \to \mathbb{R}$ such that $f(a) = \sup_{b \in B}\{U(a, b) - p(b)\}$. Since

    $f^U(b) = \sup_{a \in A} \Big\{U(a, b) - \sup_{b' \in B}\{U(a, b') - p(b')\}\Big\} \le \sup_{a \in A} \{U(a, b) - (U(a, b) - p(b))\} = p(b),$

we see that

    $(f^U)^U(a) = \sup_{b \in B} \{U(a, b) - f^U(b)\} \ge \sup_{b \in B} \{U(a, b) - p(b)\} = f(a).$

On the other hand,

    $f^U(b) \ge U(a, b) - f(a)$ for all $a \in A$.

Thus

    $(f^U)^U(a) = \sup_{b \in B} \{U(a, b) - f^U(b)\} \le \sup_{b \in B} \{U(a, b) - U(a, b) + f(a)\} = f(a).$

This concludes the proof. $\square$

2.3 Some Contract Theory

Contract Theory is a branch of Economics that studies the ways in which contractual agreements between economic actors are made. This is mainly studied under the assumption of ASYMMETRIC INFORMATION, that is, a scenario in which one of the parties involved is better informed. In most cases the models contemplate a firm dealing with a group of agents.

2.3.1 Mechanism Design

The general model in the theory of mechanism design contemplates two parties, the first one consisting of the AGENCIES, who want to trade ACTIONS with a second party: the AGENTS. Actions can represent labor to be hired, public policies to be implemented, products to be sold, etc. The agents, whose behavior is assumed to be rational, evaluate the welfare they obtain from interacting with the agencies via utility functions. These take into consideration the EFFORTS associated with dealing with the agencies, as well as the TRANSFERS from the agencies to the agents for given actions. Depending on the model, the efforts may represent actual physical output from performing a duty, prices to be paid, etc. The caveat from the point of view of the agencies is that the agents' efforts are not directly observable.
The agents may choose to alter the signals sent to the agencies (Moral Hazard), or they may have preferences that are private information (Adverse Selection). The objective of the agencies is to extract as much welfare as possible from the agents given this asymmetry of information. The disutility to the agencies stemming from this asymmetry is usually called the INFORMATION RENT. The agents must be given appropriate incentives to share private information or to reveal unobservable efforts. These are called COMMUNICATION MECHANISMS.

2.3.2 Principal-Agent models and the Revelation and Taxation Principles

The particular case where there is a single agency, which is then called the PRINCIPAL, is referred to as a Principal-Agent model. This work is set within such a framework under Adverse Selection. Perhaps the main appeal of this kind of hidden-information model is the possibility of reducing all feasible communication mechanisms that satisfy the incentive constraints of the agents to DIRECT-REVELATION mechanisms.

Denote by $A$ the set of all possible actions and by $T$ the set of possible transfers. Let the types $\theta$ of the agents belong to some set $\Theta$. The (type-dependent) utility functions of the agents will be denoted by

    $u(x, \theta, t) : A \times \Theta \times T \to \mathbb{R}.$

A mechanism is defined as follows:

Definition 2.3.1 A MECHANISM is a message space $M$ and a mapping $G : M \to A \times T$, which writes as

    $G(m) = (q(m), t(m))$ for all $m \in M$.

It is through elements of $M$ that the agents "inform" the principal of their action of choice. The set $M$ can be quite complex, and dealing directly with a mechanism as defined above can prove hard (or impossible). On the other hand, if the message space could be reduced to the type space, the analysis would be considerably simplified. This is in general possible, as shown by the Revelation and Taxation Principles given below.
Definition 2.3.2 A DIRECT-REVELATION MECHANISM is a mapping $g : \Theta \to A \times T$, which writes as

    $g(\theta) = (q(\theta), t(\theta))$ for all $\theta \in \Theta$.

If a direct-revelation mechanism is such that

    $u(q(\theta), \theta, t(\theta)) \ge u(q(\theta'), \theta, t(\theta'))$ for all $\theta, \theta' \in \Theta$,

then it is called INCENTIVE-COMPATIBLE or TRUTHFUL. Notice that in a truthful, direct-revelation mechanism it is optimal for each agent to report her true type or level of effort. The following theorem is the crux of the theory of mechanism design in Principal-Agent models.

Theorem 2.3.3 (Revelation Principle [7]) To any optimal (from the point of view of the principal), incentive-compatible mechanism of a Principal-Agent game of incomplete information, there corresponds an associated incentive-compatible direct-revelation (IC-DR) mechanism.

If the preferences of the agents are given by quasi-linear utility functions

    $u(q, \theta, t) = U(\theta, q) - t,$

the following theorem provides the necessary setting to allow for a mathematical analysis of these models.

Theorem 2.3.4 (Taxation Principle [28]) An allocation rule $X : \Theta \to A$ can be associated with an IC-DR mechanism if and only if there is a function $\pi : A \to \mathbb{R}$ such that for all $\theta \in \Theta$ the supremum

    $\sup_{q \in A} \{U(\theta, q) - \pi(q)\}$

is attained at $q = X(\theta)$.

2.3.3 A standard screening model

The model described below can be found throughout the literature on Principal-Agent models under Adverse Selection. It is used, for example, in the Rochet model [44], for which numerical solutions are computed in Chapter 4.

Consider an economy with a single principal and a continuum of agents. The latter's preferences are characterized by $n$-dimensional vectors, called the agents' TYPES. The set of all types will be denoted by $\Theta \subset \mathbb{R}^n$. The individual types $\theta$ are private information, but the principal knows their statistical distribution, which has a density $f(\theta)$.
The model takes a HEDONIC approach to product differentiation: goods are characterized by ($n$-dimensional) vectors describing their utility-bearing attributes. The set of TECHNOLOGICALLY FEASIBLE goods that the principal can deliver is denoted by $Q \subset \mathbb{R}^n$, which is assumed to be compact and convex. The cost to the principal of producing one unit of product $p$ is denoted by $C(p)$, which is assumed to be measurable in the sense of Lebesgue.

Products are offered on a take-it-or-leave-it basis, each agent can buy one or zero units of a single product $p$, and it is assumed there is no second-hand market. The (type-dependent) preferences of the agents are represented by the (measurable) function

    $U : \Theta \times Q \to \mathbb{R}.$

The (non-linear) price schedule for the technologically feasible goods is represented by

    $\pi : Q \to \mathbb{R}.$

When purchasing good $q$ at a price $\pi(q)$, an agent of type $\theta$ has net utility

    $U(\theta, q) - \pi(q).$

Each agent solves the problem

    $\max_{q \in Q} \{U(\theta, q) - \pi(q)\}.$

By analyzing the choice of each agent type under a given price schedule $\pi$, the principal screens the market. Let the INDIRECT UTILITY of an agent of type $\theta$ be

    $v(\theta) := U(\theta, q(\theta)) - \pi(q(\theta)),$    (2.11)

where $q(\theta)$ belongs to $\operatorname{argmax}_{q \in Q} \{U(\theta, q) - \pi(q)\}$. Note the use of the Taxation Principle here. For all $q$ in $Q$ we have

    $v(\theta) \ge U(\theta, q) - \pi(q).$    (2.12)

Recall from Section 2.2 that the subset of $Q$ where (2.12) is an equality is the U-SUBDIFFERENTIAL of $v$ at $\theta$, and $v$ is the U-CONJUGATE of $\pi$, i.e.

    $v(\theta) = \pi^U(\theta)$

and

    $\partial_U v(\theta) := \{q \in Q \mid v(\theta) + \pi(q) = U(\theta, q)\}.$

To simplify notation let $\pi(q(\theta)) = \pi(\theta)$. A mechanism is called INDIVIDUALLY RATIONAL if $v(\theta) \ge v_0(\theta)$ for all $\theta \in \Theta$, where $v_0(\theta)$ is type $\theta$'s reservation utility. It is standard to normalize the reservation utility of all agents to zero, and to assume there is always an OUTSIDE OPTION $q_0$ that denotes non-participation. This simplified model then only considers functions $v \ge 0$.
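For intuition, the agent's maximization and the resulting indirect utility can be computed on a grid. The sketch below is illustrative and not from the thesis: it uses the bilinear specification $U(\theta, q) = \theta \cdot q$ of [44] in one dimension, with a hypothetical price schedule $\pi(q) = q^2$, and checks that $v$ is convex in $\theta$ and that the chosen product equals the slope of $v$ (the envelope theorem).

```python
import numpy as np

# Grid illustration of the screening step (hypothetical pi(q) = q**2):
thetas = np.linspace(0.0, 1.0, 200)
qs = np.linspace(0.0, 1.0, 400)

pi = qs ** 2
net = thetas[:, None] * qs[None, :] - pi[None, :]  # net utility of type i buying product j
v = net.max(axis=1)                                # indirect utility v(theta)
q_star = qs[net.argmax(axis=1)]                    # product chosen by each type

# v is a maximum of functions affine in theta, hence convex:
# its finite-difference slopes are non-decreasing.
slopes = np.diff(v) / np.diff(thetas)
assert np.all(np.diff(slopes) >= -1e-9)

# Envelope theorem: v'(theta) ~= q(theta); here q(theta) = theta / 2
# in closed form, since theta*q - q**2 is maximized at q = theta / 2.
assert np.allclose(slopes, q_star[1:], atol=0.02)
assert np.allclose(q_star, thetas / 2.0, atol=0.01)
```

In this example $v(\theta) = \theta^2/4$ in closed form; the assertions merely confirm that the grid computation reproduces the convexity of $v$ and the identity $q(\theta) = v'(\theta)$ up to discretization error.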
The principal's aim is to devise a (non-linear) pricing function $\pi : Q \to \mathbb{R}$ so as to maximize her aggregate surplus

    $\int_\Theta \big(\pi(\theta) - C(q(\theta))\big) f(\theta)\,d\theta.$    (2.13)

Inserting (2.11) into (2.13) yields the alternative representation

    $\int_\Theta \big(U(\theta, q(\theta)) - v(\theta) - C(q(\theta))\big) f(\theta)\,d\theta.$    (2.14)

Expression (2.14) is to be maximized over all pairs $(v, q)$ such that $v$ is U-convex and non-negative and $q(\theta) \in \partial_U v(\theta)$. Characterizing $\partial_U v(\theta)$ in a way that makes the problem tractable can be quite challenging. In the case where $U(\theta, q(\theta)) = \theta \cdot q(\theta)$, as in [44], for a given price schedule $\pi : Q \to \mathbb{R}$ the indirect utility of an agent of type $\theta$ is

    $v(\theta) := \max_{q \in Q} \{\theta \cdot q - \pi(q)\}.$    (2.15)

Since $v$ is defined as a supremum of affine functions, it is a convex function of the types. It follows from the Envelope Theorem that the maximum in equation (2.15) is attained at $q(\theta) = \nabla v(\theta)$ almost everywhere, hence

    $v(\theta) = \theta \cdot \nabla v(\theta) - \pi(\nabla v(\theta)).$    (2.16)

The principal's aggregate surplus is given by

    $\int_\Theta \big(\pi(q(\theta)) - C(q(\theta))\big) f(\theta)\,d\theta.$    (2.17)

Inserting (2.16) into (2.17) transforms the principal's objective into maximizing

    $I[v] := \int_\Theta \big(\theta \cdot \nabla v(\theta) - C(\nabla v(\theta)) - v(\theta)\big) f(\theta)\,d\theta$    (2.18)

over the set

    $C := \{v : \Theta \to \mathbb{R} \mid v$ convex, $v \ge 0$, $\nabla v(\theta) \in Q$ a.e.$\}.$

2.4 Risk measures in $L^2$

This section contains some properties and representation results for risk measures on $L^2$ spaces. These can be found in [17] and [4]. A classical reference for the theory of risk measures on $L^\infty$ is [26]. All random variables are defined on some standard non-atomic probability space $(\Omega, \mathcal{F}, \mathbb{P})$.

Definition 2.4.1 (i) A monetary risk measure on $L^2$ is a function $\rho : L^2 \to \mathbb{R} \cup \{\infty\}$ such that for all $X, Y \in L^2$ the following conditions are satisfied:

• Monotonicity: if $X \le Y$ then $\rho(X) \ge \rho(Y)$.
• Cash Invariance: if $m \in \mathbb{R}$ then $\rho(X + m) = \rho(X) - m$.
(ii) A risk measure is called coherent if it is convex and homogeneous of degree 1, i.e., if the following two conditions hold:

• Convexity: for all $\alpha \in [0, 1]$ and all positions $X, Y \in L^2$:

    $\rho(\alpha X + (1 - \alpha)Y) \le \alpha\rho(X) + (1 - \alpha)\rho(Y).$

• Positive Homogeneity: for all $\lambda \ge 0$,

    $\rho(\lambda X) = \lambda\rho(X).$

(iii) The risk measure is called law invariant if $\rho(X) = \rho(Y)$ for any two random variables $X$ and $Y$ which have the same law.

(iv) The risk measure $\rho$ on $L^2$ has the Fatou property if for any sequence of random variables $X_1, X_2, \ldots$ that converges in $L^2$ to a random variable $X$ we have

    $\rho(X) \le \liminf_n \rho(X_n).$

Given $\lambda \in (0, 1]$, the Average Value at Risk at level $\lambda$ of a position $Y$ is defined as

    $\mathrm{AV@R}_\lambda(Y) := -\frac{1}{\lambda} \int_0^\lambda q_Y(t)\,dt,$

where

    $q_Y(t) := \sup\{x \mid \mathbb{P}[Y < x] \le t\}$

is called the upper $t$-quantile of $Y$. If $Y \in L^\infty$, then $\mathrm{AV@R}_\lambda$ admits the following characterization:

    $\mathrm{AV@R}_\lambda(Y) = \sup_{Q \in \mathcal{Q}_\lambda} \mathbb{E}_Q[-Y],$

where

    $\mathcal{Q}_\lambda := \Big\{Q \ll \mathbb{P} \;\Big|\; \frac{dQ}{d\mathbb{P}} \le \frac{1}{\lambda}\Big\}$

and $Q \ll \mathbb{P}$ means that $Q$ is absolutely continuous with respect to $\mathbb{P}$.

Proposition 2.4.2 For a given financial position $Y \in L^2$, the mapping $\lambda \mapsto \mathrm{AV@R}_\lambda(Y)$ is decreasing in $\lambda$.

PROOF. Differentiate $\mathrm{AV@R}_\lambda(Y)$ with respect to $\lambda$ to get

    $\frac{d}{d\lambda}\,\mathrm{AV@R}_\lambda(Y) = \frac{1}{\lambda}\big({-\mathrm{AV@R}_\lambda(Y)} - q_Y(\lambda)\big).$    (2.19)

Notice that $q_Y(\lambda)$ is increasing in $\lambda$, so

    $\lambda\, q_Y(\lambda) \ge \int_0^\lambda q_Y(t)\,dt.$

This in turn makes the right-hand side of equation (2.19) non-positive. $\square$

It turns out the Average Value at Risk can be viewed as a basis for the space of all law-invariant, coherent risk measures with the Fatou property. More precisely:

Theorem 2.4.3 The risk measure $\rho : L^2 \to \mathbb{R}$ is law invariant, coherent and has the Fatou property if and only if $\rho$ admits a representation of the following form:

    $\rho(Y) = \sup_{\mu \in \mathcal{M}} \int_0^1 \mathrm{AV@R}_\lambda(Y)\,\mu(d\lambda),$

where $\mathcal{M}$ is a set of probability measures on $[0, 1]$.
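Empirically, $\mathrm{AV@R}_\lambda$ can be estimated by averaging the worst $\lambda$-fraction of a sample of the position. The sketch below is a sample-based illustration, not from the thesis (the simulated Gaussian position and the crude floor-based tail size are arbitrary choices); it checks the monotonicity of Proposition 2.4.2 and the boundary identity $\mathrm{AV@R}_1(Y) = \mathbb{E}[-Y]$.

```python
import numpy as np

# Sample-based sketch of Average Value at Risk: AV@R_lam(Y) is estimated
# by averaging the worst lam-fraction of simulated outcomes and flipping
# the sign (losses enter positively).
def avar(samples, lam):
    ys = np.sort(samples)                       # ascending: worst outcomes first
    k = max(1, int(np.floor(lam * len(ys))))    # size of the lam-tail
    return -ys[:k].mean()

rng = np.random.default_rng(0)
Y = rng.normal(size=100_000)                    # a hypothetical position

# lam -> AV@R_lam(Y) is decreasing (Proposition 2.4.2):
levels = [0.01, 0.05, 0.25, 1.0]
vals = [avar(Y, lam) for lam in levels]
assert all(a >= b for a, b in zip(vals, vals[1:]))

# At lam = 1 the whole sample is averaged, so AV@R_1(Y) = E[-Y]:
assert abs(avar(Y, 1.0) - (-Y).mean()) < 1e-9
```

The floor-based tail size ignores the fractional weight of the marginal order statistic, which is fine for an illustration but introduces a small bias at finite sample sizes.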
Proposition 2.4.2 and Theorem 2.4.3, together with the fact that

    $\mathrm{AV@R}_1(Y) = \mathbb{E}[-Y],$

yield the following corollary:

Corollary 2.4.4 If $\rho : L^2 \to \mathbb{R}$ is a law-invariant, coherent risk measure with the Fatou property, then

    $\rho(Y) \ge \mathbb{E}[-Y].$

Corollary 2.4.4 will be essential to establish certain bounds in Chapter 3 that are necessary for the existence of optimal derivatives catalogues.

An important class of risk measures are the comonotone risk measures. These are characterized by the fact that the risk associated with two positions whose payoffs "move in the same direction" is additive.

Definition 2.4.5 A risk measure $\rho$ is said to be comonotone if

    $\rho(X + Y) = \rho(X) + \rho(Y)$

whenever $X$ and $Y$ are comonotone, i.e., whenever

    $(X(\omega) - X(\omega'))(Y(\omega) - Y(\omega')) \ge 0$ for $\mathbb{P}\otimes\mathbb{P}$-almost all $(\omega, \omega')$.

Comonotone, law-invariant and coherent risk measures with the Fatou property admit a representation of the form

    $\rho(Y) = \int_0^1 \mathrm{AV@R}_\lambda(Y)\,\mu(d\lambda)$

for some probability measure $\mu$ on $[0, 1]$.

Chapter 3

Optimal Derivative Design

This chapter studies the problem faced by a monopolist, whose income is exposed to non-hedgeable risk, when she trades over-the-counter derivatives with a set of heterogeneous buyers. The monopolist's aim is to minimize her risk exposure, which she evaluates via a coherent risk measure. The main result in this chapter is Theorem 3.1.3, which states that, when facing agents whose preferences are of the mean-variance type, the monopolist's problem has a solution. Moreover, if the risk measure in use is strictly convex, then the resulting price schedule is unique. The proof of existence and uniqueness is contained in Section 3.2. The optimal contracts can be further characterized; Section 3.3 is devoted to this purpose. Finally, an example that can be solved in closed form is presented in Section 3.4.
3.1 The microeconomic setup

The model considers an economy with a single principal whose income $W$ is exposed to non-hedgeable risk factors arising from, e.g., climate or weather phenomena. The random variable $W$ is defined on a standard, non-atomic probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and it is square integrable:

    $W \in L^2(\Omega, \mathcal{F}, \mathbb{P}).$

The principal's goal is to lay off parts of her risk with individual agents. The agents have heterogeneous mean-variance preferences and their types are represented by their coefficients of risk aversion $\theta \in \Theta$. Given a contingent claim $Y \in L^2(\Omega, \mathcal{F}, \mathbb{P})$, an agent of type $\theta$ enjoys the utility

    $U(\theta, Y) = \mathbb{E}[Y] - \theta\,\mathrm{Var}[Y].$    (3.1)

Types are private information; however, the principal does know their distribution $\mu$. It is assumed that the agents are risk averse and that the risk aversion coefficients are bounded away from zero. More precisely,

    $\Theta = [a, 1]$ for some $a > 0$.

The principal offers a derivative security $X(\theta)$ written on her random income for every type $\theta$. The set of all such securities is denoted by

    $\mathcal{X} := \{X = \{X(\theta)\}_{\theta\in\Theta} \mid X \in L^2(\Omega \times \Theta,\ \mathbb{P}\otimes\mu),\ X$ is $\sigma(W)\otimes\mathcal{B}(\Theta)$-measurable$\},$

where $\mathcal{B}(\Theta)$ denotes the Borel $\sigma$-algebra on $\Theta$. A list of securities $\{X(\theta)\}$ is called a contract. A catalogue is a contract along with prices $\pi(\theta)$ for every available derivative $X(\theta)$. For a given catalogue $(X, \pi)$, the indirect utility of the agent of type $\theta$ is given by

    $v(\theta) = \sup_{\theta'\in\Theta} \{U(\theta, X(\theta')) - \pi(\theta')\}.$    (3.2)

Remark 3.1.1 No assumption is made on the sign of $\pi(\theta)$; the model considers both the case where the principal takes additional risk in exchange for financial compensation and the one where she pays the agents to take part of her risk.

A catalogue $(X, \pi)$ is called incentive compatible (IC) if the agents' interests are best served by revealing their types.
This means that the indirect utility of an agent of type $\theta$ is achieved by the security $X(\theta)$:

    $U(\theta, X(\theta)) - \pi(\theta) \ge U(\theta, X(\theta')) - \pi(\theta')$ for all $\theta, \theta' \in \Theta$.    (3.3)

It is assumed that each agent has an "outside option" (non-participation) that yields a utility of zero. A catalogue is called individually rational (IR) if it yields at least the reservation utility for all agents, i.e., if

    $U(\theta, X(\theta)) - \pi(\theta) \ge 0$ for all $\theta \in \Theta$.    (3.4)

Remark 3.1.2 By offering only incentive compatible contracts, the principal induces the agents to reveal their types. Offering contracts where the IR constraint is binding allows the principal to exclude undesirable agents from participating in the market. It can be shown that under certain conditions the interests of the principal are better served by keeping agents of "lower types" at their reservation utility; Rochet and Choné [44] have shown that in higher dimensions this is always the case.

If the principal issues the catalogue $(X, \pi)$, she receives a cash amount of $\int_\Theta \pi(\theta)\,d\mu(\theta)$ and is subject to the additional liability $\int_\Theta X(\theta)\,\mu(d\theta)$. She evaluates the risk associated with her overall position

    $W + \int_\Theta (\pi(\theta) - X(\theta))\,d\mu(\theta)$

via a coherent and law-invariant risk measure $\rho : L^2(\Omega, \mathcal{F}, \mathbb{P}) \to \mathbb{R} \cup \{\infty\}$ that has the Fatou property. It can be shown that such risk measures can be represented as robust mixtures of Average Value at Risk (Theorem 2.4.3). The principal's risk associated with the catalogue $(X, \pi)$ is given by

    $\rho\Big(W + \int_\Theta (\pi(\theta) - X(\theta))\,d\mu(\theta)\Big).$    (3.5)

Her goal is to devise a catalogue $(X, \pi)$ that minimizes (3.5) subject to the incentive compatibility and individual rationality restrictions:

    $\inf \Big\{\rho\Big(W + \int_\Theta (\pi(\theta) - X(\theta))\,d\mu(\theta)\Big) \;\Big|\; X \in \mathcal{X},\ (X, \pi)$ is IC and IR$\Big\}.$
    (3.6)

The main result in this chapter is the following:

Theorem 3.1.3 If $\rho$ is a coherent and law-invariant risk measure on $L^2(\mathbb{P})$ and $\rho$ has the Fatou property, then the principal's optimization problem has a solution. Moreover, if the risk measure is strictly convex, then there exists a unique pricing schedule $\pi$ that minimizes the overall risk.

For notational convenience all results will be developed for the special case $d\mu(\theta) = d\theta$; the general case follows from straightforward modifications. The proof of Theorem 3.1.3 requires rephrasing the principal's problem as an optimal control problem, as well as several intermediate steps. These are contained in the following section.

3.2 Solving the principal's problem

Let $(X, \pi)$ be a catalogue. It is convenient to assume that the principal offers any square integrable contingent claim and to view the agents' optimization problems as optimization problems over the set $L^2(\mathbb{P})$. This can be achieved by identifying the price list $\{\pi(\theta)\}$ with the pricing scheme

    $\pi : L^2(\mathbb{P}) \to \mathbb{R}$

that assigns the value $\pi(\theta)$ to an available claim $X(\theta)$ and the value $\mathbb{E}[Y]$ to any other claim $Y \in L^2$. Note this reduces the principal's problem to offering products indexed on $\Theta$, and it is an application of the Taxation Principle ([28], [43]) in the particular case of mean-variance preferences. In terms of this pricing scheme the value function $v$ defined in (3.2) satisfies, for any incentive compatible catalogue,

    $v(\theta) = \sup_{Y \in L^2(\mathbb{P})} \{U(\theta, Y) - \pi(Y)\}.$    (3.7)

The value function is U-convex in the sense of Definition 2.2.1. In fact, it turns out to be convex and non-increasing. The goal is therefore to identify the class of IC and IR catalogues with a class of convex and non-increasing functions on the type space. To this end, recall the link between incentive compatible contracts and U-convex functions from Carlier, Ekeland and Touzi [14]:
Proposition 3.2.1 If a catalogue $(X, \pi)$ is incentive compatible, then the function $v$ defined by (3.2) is proper (i.e., never $-\infty$ and not identically $+\infty$) and U-convex, and $X(\theta) \in \partial_U v(\theta)$. Conversely, any proper, U-subdifferentiable and U-convex function induces an incentive compatible catalogue.

PROOF. Incentive compatibility of a catalogue $(X, \pi)$ means that

    $U(\theta, X(\theta)) - \pi(\theta) \ge U(\theta, X(\theta')) - \pi(\theta')$ for all $\theta, \theta' \in \Theta$,

so $v(\theta) = U(\theta, X(\theta)) - \pi(\theta)$ is U-convex and $X(\theta) \in \partial_U v(\theta)$. Conversely, for a proper, U-convex function $v$ and $X(\theta) \in \partial_U v(\theta)$, let

    $\pi(\theta) := U(\theta, X(\theta)) - v(\theta).$

By the definition of the U-subdifferential, the catalogue $(X, \pi)$ is incentive compatible. $\square$

The following is a fundamental lemma. It shows that the U-convex function $v$ is convex and non-increasing, and that any convex and non-increasing function is U-convex, i.e., it allows a representation of the form (3.7). This allows the principal's problem to be restated as an optimization problem over a set of convex functions.

Lemma 3.2.2 (i) Suppose that the value function $v$ as defined by (3.7) is proper. Then $v$ is convex and non-increasing. Any optimal claim $X^*(\theta)$ is a U-subgradient of $v$ at $\theta$ and almost surely

    $-\mathrm{Var}[X^*(\theta)] = v'(\theta).$

(ii) If $\tilde{v} : \Theta \to \mathbb{R}_+$ is proper, convex and non-increasing, then $\tilde{v}$ is U-convex, i.e., there exists a map $\tilde{\pi} : L^2(\mathbb{P}) \to \mathbb{R}$ such that

    $\tilde{v}(\theta) = \sup_{Y \in L^2(\mathbb{P})} \{U(\theta, Y) - \tilde{\pi}(Y)\}.$

Furthermore, any optimal claim $\tilde{X}(\theta)$, that is, any claim for which the sup above is attained, belongs to the U-subdifferential of $\tilde{v}$ at $\theta$ and satisfies $-\mathrm{Var}[\tilde{X}(\theta)] = \tilde{v}'(\theta)$.

PROOF. (i) Let $v$ be a proper, U-convex function. Its U-conjugate is

    $v^U(Y) = \sup_{\theta\in\Theta} \{\mathbb{E}[Y] - \theta\,\mathrm{Var}[Y] - v(\theta)\} = \mathbb{E}[Y] + \sup_{\theta\in\Theta} \{\theta(-\mathrm{Var}[Y]) - v(\theta)\} = \mathbb{E}[Y] + v^*(-\mathrm{Var}[Y]),$

where $v^*$ denotes the convex conjugate of $v$. By Proposition 2.2.2, the map $v$ is characterized by the fact that $v = (v^U)^U$.
Thus

    $v(\theta) = (v^U)^U(\theta) = \sup_{Y \in L^2(\Omega,\mathbb{P})} \{U(\theta, Y) - \mathbb{E}[Y] - v^*(-\mathrm{Var}[Y])\}$
    $= \sup_{Y \in L^2(\Omega,\mathbb{P})} \{\mathbb{E}[Y] - \theta\,\mathrm{Var}[Y] - \mathbb{E}[Y] - v^*(-\mathrm{Var}[Y])\}$
    $= \sup_{y \le 0} \{\theta \cdot y - v^*(y)\},$

where the last equality uses the fact that the agents' consumption set contains claims of any variance. It follows from the preceding representation that $v$ is non-increasing. Furthermore $v = (v^*)^*$, so $v$ is convex.

The U-subdifferential $\partial_U v(\theta)$ is characterized as follows:

    $\partial_U v(\theta) = \{Y \in L^2 \mid v(\theta) = U(\theta, Y) - v^U(Y)\}$
    $= \{Y \in L^2 \mid v(\theta) = \mathbb{E}[Y] - \theta\,\mathrm{Var}[Y] - v^U(Y)\}$
    $= \{Y \in L^2 \mid v(\theta) = \mathbb{E}[Y] - \theta\,\mathrm{Var}[Y] - \mathbb{E}[Y] - v^*(-\mathrm{Var}[Y])\}$
    $= \{Y \in L^2 \mid v(\theta) = \theta(-\mathrm{Var}[Y]) - v^*(-\mathrm{Var}[Y])\}$
    $= \{Y \in L^2 \mid -\mathrm{Var}[Y] \in \partial v(\theta)\}.$

By Corollary 2.1.13, the convexity of $v$ implies it is a.e. differentiable; thus

    $\partial_U v(\theta) = \{Y \in L^2 \mid v'(\theta) = -\mathrm{Var}[Y]\}.$

(ii) Consider a proper, non-negative, convex and non-increasing function $\tilde{v} : \Theta \to \mathbb{R}$. There exists a map $f : \mathbb{R} \to \mathbb{R}$ (see Proposition 2.1.3) such that

    $\tilde{v}(\theta) = \sup_{y \le 0} \{\theta \cdot y - f(y)\}.$

Since $\tilde{v}$ is non-increasing there exists a random variable $Y(\theta) \in L^2$ such that $-\mathrm{Var}[Y(\theta)] \in \partial\tilde{v}(\theta)$, and the definition of the subgradient yields

    $\tilde{v}(\theta) = \sup_{Y \in L^2} \{\theta(-\mathrm{Var}[Y]) - f(-\mathrm{Var}[Y])\}.$

With the pricing scheme on $L^2(\mathbb{P})$ defined by

    $\tilde{\pi}(Y) := \mathbb{E}[Y] + f(-\mathrm{Var}[Y]),$

this yields

    $\tilde{v}(\theta) = \sup_{Y \in L^2} \{U(\theta, Y) - \tilde{\pi}(Y)\}.$

The characterization of the subdifferential follows by analogy to part (i). $\square$

The preceding lemma along with Proposition 3.2.1 shows that any convex, non-negative and non-increasing function $v$ on $\Theta$ induces an incentive compatible catalogue $(X, \pi)$ via

    $X(\theta) \in \partial_U v(\theta)$ and $\pi(\theta) = U(\theta, X(\theta)) - v(\theta).$

Inserting this representation of the pricing function into equation (3.5) and using the cash invariance of the risk measure, the principal's aggregate risk can be expressed as

    $\rho\Big(W + \int_\Theta \big(\mathbb{E}[X(\theta)] - X(\theta)\big)\,d\theta\Big) - \int_\Theta \big(\theta v'(\theta) - v(\theta)\big)\,d\theta.$
    (3.8)

Therefore, it can be assumed with no loss of generality that $\mathbb{E}[X(\theta)] = 0$. In terms of the principal's choice of $v$, her income is given by

    $I(v) = \int_\Theta \big(\theta v'(\theta) - v(\theta)\big)\,d\theta.$

Since $v \ge 0$ is non-increasing and non-negative, the principal needs only to consider functions that satisfy the normalization constraint

    $v(1) = 0.$

The class of all convex, non-increasing and non-negative real-valued functions on $\Theta$ that satisfy the preceding condition is denoted by $\mathcal{C}$:

    $\mathcal{C} = \{v : \Theta \to \mathbb{R} \mid v$ is convex, non-increasing, non-negative and $v(1) = 0\}.$

Conversely, any IC and IR catalogue $(X, \pi)$ can be associated with a non-negative U-convex function of the form (3.7) where the contract satisfies the variance constraint $-\mathrm{Var}[X(\theta)] = v'(\theta)$. In view of the preceding lemma this function is convex and non-increasing, so after normalization it can be assumed that $v$ belongs to the class $\mathcal{C}$. This yields the following:

Theorem 3.2.3 The principal's optimization problem allows the following alternative formulation:

    $\inf \Big\{\rho\Big(W - \int_\Theta X(\theta)\,d\theta\Big) - I(v) \;\Big|\; v \in \mathcal{C},\ \mathbb{E}[X(\theta)] = 0,\ -\mathrm{Var}[X(\theta)] = v'(\theta)\Big\}.$

The next preliminary result states that a principal with no initial endowment will not issue any contracts.

Lemma 3.2.4 If the principal has no initial random income, i.e., if $W = 0$, then $(v, X) = (0, 0)$ solves her optimization problem.

PROOF. Since $\rho$ is a coherent, law-invariant risk measure on $L^2(\mathbb{P})$ that has the Fatou property, it follows from Corollary 2.4.4 that

    $\rho(Y) \ge \mathbb{E}[-Y]$ for all $Y \in L^2(\mathbb{P})$.    (3.9)

For a given function $v \in \mathcal{C}$, the normalization constraint $\mathbb{E}[X(\theta)] = 0$ therefore yields

    $\rho\Big(-\int_\Theta X(\theta)\,d\theta\Big) - I(v) \ge \mathbb{E}\Big[\int_\Theta X(\theta)\,d\theta\Big] - I(v) = -I(v).$

Since $v$ is non-negative and non-increasing, $-I(v) \ge 0$. Taking the infimum in the preceding inequality shows that $v \equiv 0$, and hence $X(\theta) \equiv 0$, is an optimal solution. $\square$

3.2.1 Minimizing the risk for a given function $v$

In the general case the principal's problem is approached in two steps.
At first a function $v$ from the class $\mathcal{C}$ is chosen and the associated risk

    $\rho\Big(W - \int_\Theta X(\theta)\,d\theta\Big)$

subject to the moment conditions $\mathbb{E}[X(\theta)] = 0$ and $-\mathrm{Var}[X(\theta)] = v'(\theta)$ is minimized. To this end, a relaxed optimization problem, where the variance constraint is replaced by the weaker condition

    $\mathrm{Var}[X(\theta)] \le -v'(\theta),$

is shown to have a solution given by optimal contracts $X_v$. In a subsequent step it is shown that, based on $X_v$, the principal can transfer risk exposures among the agents in such a way that the aggregate risk remains unaltered but the variance constraint becomes binding. Since $v$ is a convex function, it is continuous in the interior of its effective domain. It will be assumed that $v$ does not have a jump at $\theta = a$.

The relaxed optimization problem

Let $v \in \mathcal{C}$ and consider the convex set of derivative securities

    $\mathcal{X}_v := \{X \in \mathcal{X} \mid \mathbb{E}[X(\theta)] = 0,\ \mathrm{Var}[X(\theta)] \le -v'(\theta)$ a.e.$\}.$    (3.10)

The function $v$ is called acceptable for the principal if there exists some contract $X \in \mathcal{X}_v$ such that the implementation of $(X, v)$ does not increase the principal's original risk.

Lemma 3.2.5 (i) All functions $v \in \mathcal{C}$ that are acceptable for the principal are uniformly bounded.

(ii) Under the conditions of (i), the set $\mathcal{X}_v$ is closed and $L^2$-bounded. More precisely,

    $\|X\|^2 \le v(a)$ for all $X \in \mathcal{X}_v$.

PROOF. (i) If $v$ is acceptable for the principal, then there exists some $X \in \mathcal{X}_v$ that satisfies

    $\rho\Big(W - \int_\Theta X(\theta)\,d\theta\Big) - I(v) \le \rho(W).$

It follows from (3.9) and the fact that $\mathbb{E}[X(\theta)] = 0$ that

    $-\mathbb{E}[W] - I(v) \le \rho\Big(W - \int_\Theta X(\theta)\,d\theta\Big) - I(v) \le \rho(W),$

so

    $-I(v) \le \mathbb{E}[W] + \rho(W) =: K.$

Integrating by parts and using that $v$ is non-increasing and $v(1) = 0$, we see that

    $K \ge -I(v) = a\,v(a) + 2\int_a^1 v(\theta)\,d\theta \ge a\,v(a).$

This proves the assertion because $a > 0$.

(ii) For $X \in \mathcal{X}_v$ we deduce from the normalization constraint $v(1) = 0$ that

    $\|X\|^2 = \int_\Theta \int_\Omega X^2(\theta, \omega)\,d\mathbb{P}\,d\theta \le -\int_\Theta v'(\theta)\,d\theta = v(a),$

so the assertion follows from part (i).
$\square$

Remark 3.2.6 From this point on it will be assumed, with no loss of generality, that the set $\mathcal{C}$ contains only acceptable functions. This follows from the fact that any choice by the principal of a function $v$ (and hence of a pricing schedule) which does not lie within the set of acceptable functions will result in an increment of her initial risk assessment.

Since $\rho$ is a l.s.c. convex risk measure on $L^2(\mathbb{P})$ and because the set $\mathcal{X}_v$ of contingent claims is convex, closed and bounded in $L^2(\mathbb{P})$, the following result follows from Proposition 2.1.5.

Proposition 3.2.7 If the function $v$ is acceptable for the principal, then there exists a contract $X_v \in \mathcal{X}_v$ such that

    $\rho\Big(W - \int_\Theta X_v(\theta)\,d\theta\Big) = \inf_{X \in \mathcal{X}_v} \rho\Big(W - \int_\Theta X(\theta)\,d\theta\Big).$

The contract $X_v$ along with the pricing scheme associated with $v$ does not yield an incentive compatible catalogue unless the variance constraints happen to be binding. However, it will be shown below that, based on $X_v$, the principal can find a redistribution of risk among the agents such that the resulting contract satisfies the IC condition.

Redistributing risk exposures among agents

Let

    $\partial\mathcal{X}_v := \{X \in \mathcal{X}_v \mid \mathbb{E}[X(\theta)] = 0,\ \mathrm{Var}[X(\theta)] = -v'(\theta)$ a.e.$\}$

be the set of all contracts from the class $\mathcal{X}_v$ where the variance constraint is binding. Clearly,

    $\inf_{X \in \partial\mathcal{X}_v} \rho\Big(W - \int_\Theta X(\theta)\,d\theta\Big) \ge \inf_{X \in \mathcal{X}_v} \rho\Big(W - \int_\Theta X(\theta)\,d\theta\Big).$

Define

    $\Theta_v := \{\theta \in \Theta \mid \mathrm{Var}[X_v(\theta)] < -v'(\theta)\},$

the subset of types for which the variance constraint is not binding. If $\Theta_v = \emptyset$, then $X_v$ yields an incentive compatible contract. Otherwise, fix some type $\hat{\theta}$ and define

    $Y := \frac{X_v(\hat{\theta})}{\sqrt{\mathrm{Var}[X_v(\hat{\theta})]}}.$    (3.11)

The purpose of introducing $Y$ is to offer a set of structured products $Z_v$, based on $X_v$, such that $Z_v$ together with the pricing scheme associated with $v$ yields an incentive compatible catalogue. Choose constants $\bar{\alpha}(\theta)$ for $\theta \in \Theta_v$ such that

    $\mathrm{Var}[X_v(\theta) + \bar{\alpha}(\theta)Y] = -v'(\theta).$

This equation holds for

    $\bar{\alpha}_\pm(\theta) = -\mathrm{Cov}[X_v(\theta), Y] \pm \sqrt{\mathrm{Cov}^2[X_v(\theta), Y] - v'(\theta) - \mathrm{Var}[X_v(\theta)]}.$

For a type $\theta \in \Theta_v$ the variance constraint is not binding.
Hence —v’(O) Var[X(0)] > 0 so that a(0) > 0 and c(0) <0. Notice that Var[X(0)j is a measurable and integrable function by Fubini’s theorem. This, together with Cauchy’s inequality, implies Cov{X(0), Y] is measurable and integrable —  as well. Jensen’s inequality implies  f  Cov[X(0),Y] —v’(8) —Var[X(0)]d0  f(Cov[xv(o)Y1 —v’(O))dO—  The expression above, together with Lemma 3.2.5 and the fact that  2 IIXvII 42  Chapter 3. Optimal Derivative Design is bounded shows that & are j-integrable functions. Therefore, there is a threshold type 0* E G such that  I  Jevn(a,o*l  &(O)d0 +  I  &(0)dO  =  0.  JOVn(0,1]  In terms of 0*, define the function  c(0)  :=  f  c  a(0), if 00* c(0), if 0>0*  and the contract  Z, :=X+oYEOX.  (3J2)  Since f cd0 0 the aggregate risks associated with X, and Z,, are equal. , solves the risk minimization problem 1 As a result, the contract Z  inf p (w XEc3X  3.2.2  .  —  f  (d0). 1 X(0)  (3.13)  ./  e  Minimizing the overall risk  The final step in the proof of the existence part of Theorem 3.1.3 is to show that the minimization problem inf  {  (w —  I  Zv(o)IL(do))  —  I(v)}  has a solution and the infimum is attained. Proposition 3.2.8 The set C is closed with respect to uniform convergence on compact subsets. PROOF.  The functions in C are locally Lipschitz continuous because they are convex. In fact they are uniformly locally Lipschitz: by Lemma 3.2.5 (i) the functions v e C are uniformly bounded and non-increasing so all the elements of Ov(0) are uniformly bounded on compact subsets of types. As a result, 43  Chapter 3. Optimal Derivative Design any sequence {v} C C is a sequence of uniformly bounded and uniformly equicontinuous functions when restricted to compact (strict) subsets of 0. Thus, the Arzela-Ascoli theorem guarantees the existence of a function 15 E C such that, passing to a subsequence if necessary, urn v  n—*oo  =  uniformly on compact subsets.  
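The choice of α(θ) in the redistribution step can be sanity-checked by elementary algebra: since Var[Y] = 1, the condition Var[X(θ) + αY] = −v′(θ) is a quadratic equation in α, and both roots α⁺ ≥ 0 and α⁻ ≤ 0 make the variance constraint binding. The moments below are illustrative numbers, not taken from the thesis.

```python
import numpy as np

# Hedged sketch of the redistribution step: for given moments of X_v(theta)
# and a normalized claim Y (Var[Y] = 1), the roots
#   alpha_pm = -Cov[X, Y] +/- sqrt(Cov[X, Y]^2 - v'(theta) - Var[X])
# make Var[X + alpha*Y] = -v'(theta) binding.  Illustrative data only.
var_X = 0.3        # Var[X_v(theta)]
cov_XY = 0.1       # Cov[X_v(theta), Y]
target = 0.8       # -v'(theta); the constraint is slack since target > var_X

disc = np.sqrt(cov_XY ** 2 + target - var_X)
for alpha in (-cov_XY + disc, -cov_XY - disc):
    # Var[X + alpha*Y] = Var[X] + 2*alpha*Cov[X, Y] + alpha^2 * Var[Y]
    var_new = var_X + 2 * alpha * cov_XY + alpha ** 2
    assert abs(var_new - target) < 1e-12
```

As in the text, the positive root raises the exposure of types whose constraint is slack, while the negative root lowers it; the threshold type θ* balances the two so that the aggregate is unchanged.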
IS  Remark 3.2.9 If the sequence {v}  then it follows from Proposition lim v  n—*oo  .  =  V uniformly on compact subsets, 1.8 that  15’  —>  almost surely.  Proposition 3.2.10 Let Z,, E OX,, be such that p (w  1  Zv(O)iL(dO))  p (w =  +  f  X(O)IL(dO)).  Then the mapping p (w  +10 z(O)(dO))  is convex on C. PROOF.  Let c E [0, 1] and u, v E C and consider corresponding Z E OX and Z,, E OX,,. It follows from the convexity of the risk measure that p (w+ J(zu +(1 _a)Zv)(dO))  Zt(dO)) 0 ap(W+f +(1  The convexity of the variance operator X Var[oZ + (1  —  o)Z,,]  —*  —  o)p (W +  u(dO)). 1 fe Zv  Var[X] implies  aVar[Z] + (1  —  o)Var[Z,,],  44  Chapter 3. Optimal Derivative Design Hence cZ + (1 p (w  +1  —  a)Zu belongs to the set p (w  ZU+(l_)L(dO))  and by definition  +1  (Z + (1  )Z) I1(dO))  Therefore p  (w  +1  Zau+(l_)v(dO))  fe Z(dO))  <ap (W + —  a)p  (w  +  fe Zi(d6))  which concludes the proof.  Lemma 3.2.11 Let Z, be as in Proposition 3.2.10, then the mapping  v  —*  p (w +  I  Z(O)L(dO))  —  I[v]  is lower semi-continuous on C. PROOF.  Consider {v} c C. by Proposition 3.2.8 it can be assumed without loss of generality that there exist ‘Y e C such that urn v  n—+oo  Since —6v(6) + v(O) 1iminf. —I(v) so  =  uniformly on compact subsets.  J  0 it follows from Fatou’s lemma that —I(v)  liminf {p (w —  f  z(0)(do))  —  I(v)}  v(O)IL(d9)) +1iminf—I(v) liminfP(W_f Z 0  liminf p (w  z(o)/L(de))  —  IQu)  45  Chapter 3. Optimal Derivative Design and it remains to analyze the associated risk process. For this, observe that for e OX Fubini’s theorem yields  IIzvnhI  =  ffzc1IPdo  =  _f  v(O)dO  =  v(a).  (3.14)  are Since all the functions in C are uniformly bounded, the contracts 2 bounded, convex set. 
It follows from the reflexiveness contained in an L of L 2 and the Banach-Alaoglu theorem that there is a square integrable random variable Z such that, after passing to a subsequence if necessary, Z—Z  weaklyin  (3.15)  (P®), 2 L  which implies the convergence of aggregate risks:  Jo  Z(O,w)dO  —*  f  Z(O,)dO  weakly in L (IP). 2  It follows from Proposition 2.1.4 and the Fatou property of the risk measure p that  I  liminfp (w —  p (w  Z(O)/L(dO))  —  f  Z(O)It(dO)).  13 e X be an optimal claim associated with the limiting function 3 as Let Z constructed above. If the weak limit Z belongs to X, then (Z, i3) satisfies the principal’s problem because  p (w -  I  Z(O)L(dO))  p (w -  I  Z(O)[(dO)).  (3.16)  If such were not the case, given that Xy is closed, the Hahn-Banach theorem (IP 0 i) and c E IR such that 2 would guarantee the existence of and I E L (l,Z) > c  and  (I,Y) <c  for all  Ye X,  (3.17)  46  Chapter 3. Optimal Derivative Design Since v —* t3’ almost everywhere, Egoroff’s theorem guarantees that for each k N there is a set ek c e such that v  -  ‘5’  uniformly on  (€ \ Ok)  and  ek  <  Note that since the probability space is atomless, there is a k such that  k 0  satisfies (3.18)  lZdlPdp> c and trivially  10k  f  lYd1Pd < c  for all  YE X  By the uniform convergence of z% —* 3’, there is a sequence {€ : [1, oo) } that converges uniformly to one and such that for almost all 0 for all  Var{Y(0)] < —v/(0)c(0)  Y,  (3.19)  e  k 0  e  Then X C Xy  (3.20)  X C X  and  Given Y E X there is a unique Y E Xy such that  II  —  =  mm WEX,  lW  —  2 YII  (3.21)  This is simply the projection of Y onto X, since the latter is convex. Consider Y E OX, then e öX and Y  —  —Y  II”II2  1—  KS  (3.22)  where K is the uniform bound for v(a) and 6, —* 0. Any W e XE either , in which case the minimum in equation (3.21) is zero, or it 3 belongs to X belongs to Xe \ X. 
In such case its distance to X, is smaller than the 13 and the latter’s distance to OXye, distance between its projection onto X  47  Chapter 3. Optimal Derivative Design which has been bounded in expression (3.22). Therefore  II<nII2 Kö This implies, together with the first inequality in (3.20) that for n large enough  10k  f  lYdIPdp < c  Define  l*(w,O)  :=  f  for all  l(,O), 0,  Y E Xc,,  (3.23)  0e otherwise.  Expression (3.23) implies that 1* separates the sequence {X } from its weak limit Z, which is a contradiction. Therefore Z e X, and  p (w —  I  Z(0)/.L(d0))  p (w —  I  Z(0)u(d0)).  Thus  I  liminfp (w -  z(0)(d0))  p (w  -10 Z(0)IL(d0))  Lemma 3.2.12 There exists a catalogue (Zfl, ‘5) that solves the Principal’s  problem inf  {  (w —  I  (d0)) 1 z(9)  —  I(v)}  PROOF.  By Lemma 3.2.5 and Proposition 3.2.8 and the set C is closed and bounded. Moreover, Proposition 3.2.10 and Lemma 3.2.11 imply that the mapping  F(v)  =  p (w -  f  (d0)) 1 Z(O)  -  1(v)  is convex and lower semi-continuous. It follows from Proposition 2.1.5 that F attains its minimum in C. 48  Chapter 3. Optimal Derivative Design  Note that the solution to the principal’s problem is characterized in terms of a convex function v and a contract Z,. Associated with Z, there is a random variable X,, that solves the relaxed optimization problem. The is unique if p is strictly convex. However, there are many ways variable of “pushing X to the boundary”, i.e., of defining Z,.,. As a result, there is no reason to assume that the solution to principal’s problem is unique. However, in terms of the pricing schedule induced by v, the following Lemma follows from Proposition 3.2.10 Lemma 3.2.13 If the risk measure p is strictly convex, then there exists a unique function 13 that minimizes the overall risk. In particular, the prin  cipal’s optimization problem has is unique solution up to the definition of  zij. Lemma 3.2.12 together with Lemma 3.2.13 are precisely Theorem 3.1.3.  
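The convexity of the variance operator, which Proposition 3.2.10 combines with the convexity of ρ, can be verified on samples. The distributions below are illustrative; the inequality holds for any square-integrable pair.

```python
import numpy as np

# Hedged numerical check of the inequality used in Proposition 3.2.10:
#   Var[a*Z_u + (1 - a)*Z_v] <= a*Var[Z_u] + (1 - a)*Var[Z_v].
# Z_u, Z_v are arbitrary square-integrable samples; illustrative data only.
rng = np.random.default_rng(0)
Z_u = rng.normal(size=100_000)
Z_v = rng.gamma(2.0, size=100_000) - 2.0   # centred gamma sample

for a in (0.0, 0.25, 0.5, 0.75, 1.0):
    lhs = np.var(a * Z_u + (1 - a) * Z_v)
    rhs = a * np.var(Z_u) + (1 - a) * np.var(Z_v)
    assert lhs <= rhs + 1e-12
```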
3.3  A characterization of optimal contracts  Typically, the solution to the principal’s problem cannot be given in closed form. However, its structure can be further characterized. More specifically, the following holds. Proposition 3.3.1 If {Y(O)} is an optimal contract, then all the random  variables Y(O) are co-linear. More precisely, there exists a random variable ZeL (IP) and a function a: 0 —* R such that almost surely 2 Y(O,)  =  a(O)Z(w).  To prove Proposition 3.3.1, it is enough to fix a function v e C and charac terize the infimum among all the random variables X e X of the operator R(X)  :=  p (w  -Jo  X(e)dO)  49  Chapter 3. Optimal Derivative Design subject to the moment conditions Ep[X(O)]=O,  and  Var[X(O)]+v’(O)=O.  In order to deal with the constraints consider the operators V, U : X defined by U(X)(O)  f  :=  X(O,w)dJP  and  V(X)(O)  :=  —*  2 () L  fX2(O,w)d1P+v(O),  respectively, so that the minimization problem can be rewritten as inf R(X)  s.t.  U(X)  =  0 and V(X)  =  (3.24)  0.  XEX  3.3.1  The Lagrangian and the characterization  In order to study the Lagrangian associated with the constrained optimiza tion problem notice that the Frechét-differentials of V and U at X in the direction h e X are given by, respectively, V’(X)h =  f  2X(O, w)h(9, w)dlP  U’(X)h  and  =  f  h(O, w)dIP.  The Frechét differential of R at X in the direction h E X is given by R’(X)  =  p’  (w  —19  X(O)dO)  (f  h(O, w)dO).  The derivative is well defined because (1P))* 2 pE(L  and  _fh(O,L)d8eL2(P).  (L IP), the map K —* p’(H)K is linear the Riesz repre Since, for all H E 2 (IP) such that 2 sentation theorem yields a random variable Z E L  p’  (w  —19  X(O)de)  (_  f  h(O, L)dO) =  f  Zx()  (—19  h(O, w)dO)  50  Chapter 3. 
Optimal Derivative Design As a result, the operator R has an extremum at Y under the moment con straints, if there exist Lagrange multipliers A, e 2 L ( ji) such that  0 ff  —Zy(w)h(O, w)dOdlP  210 A(O) for all h  (f  +10 i(O) (f h(O, w)dlP)  h(O, L,J)Y(O)dlP) dO  =  0  e X. This equation can be written as  f 10  h(9, w) (—Zy(w) + i(O) + 2A(O)Y(O,))dOdlP  for all h E X. Since (0, w) function this implies  —*  =  0.  —Zy(w) + ,(0) + 2A(0)Y(0, c) is an integrable  —Zy() + i(O) + 2A(O)Y(O,w)  =  (3.25)  0  due to the DuBois-Reymond lemma. Integrating both sides of this equation with respect to IP, and noting that Ep[Y(O)] 0, yields Ejp [—Zy] + (O)  0.  In particular, does not depend on 0. Thus, with Z (2A(0))’ and Y(0,c’) = a(O)(w).  :=  Zy  —  ij  and a(O)  :=  The function a is well-defined due to the fact that the variance constraint Var[Y(0)] = —v’(O) is binding for all types so A(0) 0 almost surely. As = 1 the variance constraint suming without loss of generality that implies that a(0)  =  This proves the following result which also establishes the characterization of optimal contracts. Proposition 3.3.2 Let Z, be an optimal solution of the problem (3.24).  51  Chapter 3. Optimal Derivative Design Then Z takes the form Z(9,w)  3.3.2  =  An application to entropic risk measures  The preceding considerations show that the optimization problem (3.24) can be reduced to finding inf p (w  -  zf /ZO)d9)  where P  :=  E[Z]  {Z E X  =  0,  IZII  =  1}.  This characterization of the optimal contracts given a price schedule induced by v E C can be taken further in the particular case of the entropic risk measure 1 p(X) = inlEp [exp(—3X)j which is strictly convex. Assuming for simplicity /3 differential of R is given by  =  1, the Frechét  R’(X)h= where 8(X)  =  E[exp(R(X))].  Equation (3.3.2) can be rewritten as  R’(X)h =  ff  S(X)’exp (_w  +10  x(O)dO) h(O, w)dOdlP  Hence R’(X) e PC’ can be identified with exp (_w +  f  X(6)dO),  52  Chapter 3. 
Optimal Derivative Design and problem (3.24) is reduced to finding inf ln  ZEF  In terms of a(v) = of the Lagrangian L(Z, r,  ,)  (1 \jç  —w+Zfe ../—v’(O)dO  fe \/—v’(O)dO the objective is to find a stationary point  =  ln  ,  (f  e_W+Za(dP)  + rf ZdIP +  J  dIP 2 Z  ic(v) belong to R. These are the Lagrange Multipli ers associated with the mean and variance constraint, respectively. Equat 2 (P) yields the following ing the Frechét-derivative to the zero operator in L equation:  where r  =  r(v) and  =  (v) 2 e_W+ )d1P 2 fç e_’  +  r  + 2,cZ  =  0.  (3.26)  The values of the multipliers are obtained from the moment constraints. In particular, this yields an implicit expression for the solution of (3.26): e_+Za(v)  Var  —  E  fe_B+Z(v)  (eW+Za(’))  This equation must hold almost everywhere. It is straightforward to see that it has a unique solution for each realization z of Z and w of W, since the line of negative slope 1(z)  E [e  7+Za(v)] —  intersects the exponential function f(z)  3.4  Var =  (e_W+Za(v))z  only once.  Example: A single benchmark claim  This section considers an example where the principal’s choice of contracts is restricted to a certain class of securities. Namely, a situation where the  53  Chapter 3. Optimal Derivative Design principal offers type-dependent multiples of some benchmark claim. This case is motivated by characterization result of Section 3.3 which states that all the optimal contracts are co-linear. In this case the principal’s problem can be reduced to a constrained variational problem that can be solved in closed form. It is worth mentioning that in terms of “real life” products, simpler contracts like the one described below and those studied numerically in Chapter 4 stand a better chance of being accepted by traders. 3.4.1  Description of the constrained claims  The model contemplates a principal who sells a type-dependent multiple of a benchmark claim f(W) 0 to the agents. 
More precisely, the principal offers contracts of the form  X(O)  =  (3.27)  a(O)f(W).  In order to simplify the notation, the T-bond’s variance will be assumed to be normalized: Var{f(W)] = 1. 3.4.2  The optimization problem  Let (X, ir) be a catalogue where the contract X is of the form (3.27). By anal ogy to the general case it will be convenient to view the agents’ optimization problem as an optimization problem of the set of claims {7f(W) I y E R} so the function a: e —+ R solves  sup {U(O, 7 f(W))  —  ir(O)}.  yER  In view of the variance constraint on the agents’ claims, the principal’s problem can be written as  inf p {(W  -  a(v)f(W))  -  I(v)}  where  a(v)  =10  -v’(O)dO.  (3.28)  Note that E[f(W)] > 0, so the term E[f(W)]—v’(O) must be included 54  Chapter 3. Optimal Derivative Design in the income. Before proceeding with the general case, consider a situation where in addition to being coherent and law invariant, the risk measure p is also comonotone. In this case each security the principal sells to some agent increases her risk by the amount  p  (_(f(w)  + (v(O)  —  —  Ov’(O))  >  0.  This suggests that it is optimal for the principal not to sell a bond whose payoff moves into the same direction as her initial risk exposure.  Proposition 3.4.1 Suppose that p is comonotone additive. If f(W) and W are comonotone, then v = 0 is a solution to the principal’s problem. If W and f(W) are comonotone, then the risk measure in equation (3.28) is additive and the principal needs to solve  PROOF.  p (W) + inf  vEC  Since p (f(W)  —  f and hence v  fe  (v(o) + p (f(W)  E{f(W)])  -  E[f(W)]) -v’(O)  -  Ov’(O)) dO.  0 and —Ov’(O) > 0 then  (v(o) + p (F(W)) /-v’(O)  -  Ov’(O)) dO  0  0 is a minimizer. 0  In view of the preceding proposition the principal needs to design the payoff function f in such a way that W and f(W) are not comonotone. An optimal payoff function is constructed in the following subsection.  
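The comonotone additivity behind Proposition 3.4.1 can be illustrated empirically with Average Value at Risk, which is coherent, law invariant, and comonotone additive. The sketch below uses an illustrative normal income W and a call payoff (which is increasing in W, hence comonotone with it); the empirical AVaR of the combined position then splits additively.

```python
import numpy as np

def avar(x, lam=0.05):
    """Empirical Average Value at Risk: mean of the worst lam-fraction of losses."""
    losses = np.sort(-x)[::-1]
    return losses[: int(lam * len(x))].mean()

# Hedged illustration of comonotone additivity (Proposition 3.4.1): for an
# increasing payoff f, W and f(W) are comonotone, so the risk of the combined
# position splits additively.  The distribution of W is illustrative only.
rng = np.random.default_rng(1)
W = rng.normal(0.5, np.sqrt(0.05), size=200_000)
fW = np.maximum(W - 0.5, 0.0)   # call payoff: increasing in W, hence comonotone
assert np.isclose(avar(W + fW), avar(W) + avar(fW))
```

Additivity means that selling a claim comonotone with W cannot reduce ρ(W), which is why the principal must design the payoff f so that f(W) and W are not comonotone.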
3.4.3  A solution to the principal’s problem  Considering the fact that p(.) is a decreasing function, the principal’s goal must be to make the quantity a(v) as small as possible while keeping the income as large as possible. In a first step she solves, for any constant A e R  55  Chapter 3. Optimal Derivative Design the optimization problem sup a(v)  subject to  (E[f(w)]/_v’(o)  -  v(O) + ov’(o)) dO  f  =  A.  (3.29) The constrained variational problem (3.29) captures the problem of risk minimizing subject to an income constraint. It can be solved in closed form. The associated Euler-Lagrange equation is given by A  =  (_Ao  (3.30)  +  where A is the Lagrange multiplier. The income constraint and boundary conditions are:  v’(a)  =  and  v(l)  =  =  where  0  (AE[f]  —  1).  Integrating ‘both sides of equation (3.30) and taking into account the nor malization condition v(1) 0 yields  v(O)  = =  1  ()2  [2o a  2  (3.3)  a]  Inserting this equation into the constraint yields  A  =  -  E[fJ/()f  ()2  20- a  ía’ {  [2e a  -  + (20  -  a)2  }  dO.  In terms of 11111  1  1  1  1  6  ‘I  o M:__jtL ] o 2 )2IdO +( _ and  1’ dO N:=J 29  the problem is reduced to finding the roots of the quadratic equation  56  Chapter 3. Optimal Derivative Design which are —  NE[f]  —  A  /(NE[f]) 2 2M  —  4AM  (3 32  The root with negative sign is used, as it is required that the problem reduces to p(W) for A 0.  Remark 3.4.2 Notice that the constrained variational problem (3.9) is independent of the risk measure employed by the principal. This is because the risk has been minimized pointwise subject to a constraint on aggregate revenues. In view of the preceding considerations the principal’s problem reduces to a one-dimensional minimization problems over the Reals:  p  (w  —  2 + f(W)y(NE[f])2 f(W)N  —  4AM)  —  A.  
Once the optimal value A* has been determined, the principal offers the securities \4OA—2Aa)  “  at a price  AE[f] 2  —  1 (3EA(20 a) a + 2Aa) (40A 2 —  —  —  —  2 A  1  1 2  —  a  Example 3.4.3 Assume that the principal measures her risk exposure us ing Average Value at Risk at level 0.05. Let W be a normally distributed random variable with mean 1/2 and variance 1/20. W can be thought to represent temperature. Suppose that the principal’s initial income is exposed to temperature risk and it is given by W = 0.1(W 1.1) with associated risk —  p(W)  =  0.0612.  Suppose furthermore that the principal sells units of a put option on W with strike 0.5, i.e., f(W) = (W 0.5) —  57  Chapter 3. Optimal Derivative Design By proceeding as above principal’s new risk exposure is approximately —0.6731 and she offers the security X(9)  =  f(W) 459  to the agent of type 0 for a price 1.1921 8(2—a) 1 —  “  —  (1.1921)0 (0.22)(20 /(2O_a)2 —  —  a)  58  Chapter 4  An Algorithm to Solve Variational Problems with Convexity Constraints This chapter presents a numerical algorithm to approximate the solutions of some variational problems subject to global convexity constraints. The initial motivation for such algorithm was solving optimal derivative design problems as in Chapter 3 where the class of derivatives available to the principal is constrained. However, besides tackling those problems, the al gorithm can be used to approximate solutions of various variational problems under convexity constraints. A classical example of the latter is Newton’s problem of the body of minimal resistance, which, given e a smooth, convex , consists of minimizing 2 subset of R  dO  I  I[v] =  ie  1+  ’ 2 vvI  R}. over the set of convex functions {f 9 The types of problems that the algorithm is designed to solve are de scribed in Section 4.1. The description of the algorithm is given in Section 4.2. The main result in this Chapter is Theorem 4.3.7. 
The proof of the latter makes up Section 4.3, and most of the examples are given in Section 4.4. Finally, Section 4.5 contains an example that is solved using a similar methodology; however, its structure falls outside the scope of the problems described in Section 4.1. Namely, it requires solving a continuous minimax problem. The latter is done using a quasi-Newton descent method.

Chapter 4. An Algorithm to Solve Variational Problems with Convexity Constraints

4.1 Setup

The following notation is used throughout this chapter:

• Θ, Q ⊂ ℝⁿ are convex and compact sets,
• μ denotes the Lebesgue measure on ℝⁿ,
• f is a (probability) density on Θ,
• L(θ, z, p) = z − θ·p + C(p), where C is strictly convex and C¹,
• C := {v : Θ → ℝ | v ≥ 0 is convex, and ∇v ∈ Q a.e.},
• I[v] := ∫_Θ L(θ, v(θ), ∇v(θ)) f(θ) dθ.

The objective is to (numerically) estimate the solution to

P := inf_{v∈C} I[v].  (4.1)

It is assumed that C is such that (4.1) has a unique solution (see, for example, [27]). The properties of L and C yield the following

Proposition 4.1.1 Assume v̄ solves P, then there is θ₀ in Θ such that v̄(θ₀) = min_{θ∈Θ} v̄(θ) = 0 (recall Θ is compact).

PROOF. Let v̄₀ = min_{θ∈Θ} v̄(θ). If v̄₀ > 0, define ṽ(θ) := v̄(θ) − v̄₀; then

I[ṽ] = ∫_Θ (v̄(θ) − v̄₀ − θ·∇v̄(θ) + C(∇v̄(θ))) f(θ) dθ = I[v̄] − v̄₀ < I[v̄].

This contradicts the hypothesis of v̄ being a minimizer of I over C. □

It follows from Proposition 4.1.1 that C can be redefined to include only functions that have a root in Θ. This, together with the compactness of Q and Θ, implies the following proposition, which is used frequently:

Proposition 4.1.2 There exists 0 < K < ∞ such that v ≤ K for all v in C.

It follows from Proposition 4.1.2 and the restriction on the gradients that, for each choice of function C, problem P has a unique solution, since the functional I will be strictly convex, lower semi-continuous and the admissible set is bounded.
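The argument in Proposition 4.1.1 can be illustrated on a one-dimensional grid: subtracting the minimum of v lowers I[v] by exactly that minimum, since the gradient terms are untouched and f integrates to one. The data below are illustrative assumptions, with C(p) = p² as a strictly convex C¹ choice.

```python
import numpy as np

# Minimal 1-d illustration (assumed data) of Proposition 4.1.1, with
# L(t, z, p) = z - t*p + p^2, i.e. the illustrative choice C(p) = p^2.
def trap(y, x):
    """Trapezoidal rule for samples y over grid x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

t = np.linspace(1.0, 2.0, 4001)
f = np.ones_like(t)                  # uniform density on Theta = [1, 2]
v = (t - 1.0) ** 2 + 0.3             # convex, but min v = 0.3 > 0
dv = 2.0 * (t - 1.0)                 # v'(t)

def I(vals):
    # I[v] = int L(t, v, v') f dt
    return trap((vals - t * dv + dv ** 2) * f, t)

shift = v.min()
# Shifting v down by its minimum lowers the objective by exactly that amount.
assert abs(I(v - shift) - (I(v) - shift)) < 1e-9
```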
Remark 4.1.3 The algorithm will still work with more general L's as long as one can prove that the family of feasible minimizers is uniformly bounded.

4.2 Description of the algorithm

From this point on, the use of superscripts refers to vectors; for example, V^k = (V_1^k, . . . , V_n^k). On the other hand, a subscript indicates a function to be evaluated over some closed, convex subset of ℝⁿ of non-empty interior; i.e., {V_k} is a sequence of functions V_k : X → ℝ for some X contained in ℝⁿ.

Assumption 4.2.1 It will be assumed that Θ = [a, b]ⁿ.

The algorithm to find an approximate solution to P works as follows:

1. The domain Θ is discretized in the following way: Partition Θ into Σ_k, which consists of kⁿ equal cubes E_k^i of volume μ(E_k) := ((b − a)/k)ⁿ. Now define Θ_k as the set of centers of the E_k^i's. The elements of Θ_k will be denoted by θ_i. The choice of a uniform partition is made for computational simplicity.

2. Denote f_i = ∫_{E_k^i} f(θ) dθ and associate this weight with θ_i.

3. Associate to each element θ_i of Θ_k a non-negative number v_i and an n-dimensional vector D_i. The former represents the value of v(θ_i) and the latter ∇v(θ_i).

4. Solve the (non-linear) program

P_k: inf μ(E_k) Σ_i L(θ_i, v_i, D_i) f_i  (4.2)

over the set of all vectors of the form v = (v_1, . . . , v_{kⁿ}) and all matrices of the form D = (D_1, . . . , D_{kⁿ}) such that:

(a) v_i ≥ 0 (non-negativity),
(b) D_i ∈ Q for i = 1, . . . , kⁿ (feasibility),
(c) v_i − v_j + D_i · (θ_j − θ_i) ≤ 0 for i, j ∈ {1, . . . , kⁿ}, i ≠ j (convexity).

If the problem in hand includes Dirichlet boundary conditions, these can be included here as linear constraints that the D_i's corresponding to points on the "boundary" of Θ_k must satisfy.

5. Let (v̄^k, D̄^k) be the solution to P_k. Define p_i(θ) = v̄_i^k + D̄_i^k · (θ − θ_i).

6. v̄_k(θ) := max_i p_i(θ) yields an approximation to the minimizer of P.
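The lattice of steps 1 and 3 and the convexity constraints of step 4(c) can be sketched as follows. This is an illustrative check, not the thesis's MatLab implementation: data (v_i, D_i) sampled from the smooth convex function v(θ) = ‖θ‖²/2, with D_i = θ_i its gradient, must satisfy every discrete convexity constraint.

```python
import numpy as np
from itertools import product

# Minimal sketch of the discretization: cell centers for Theta = [a, b]^n and
# the constraints  v_i - v_j + D_i . (theta_j - theta_i) <= 0  for all i != j,
# verified here for v(theta) = |theta|^2 / 2 (so D_i = theta_i); illustrative.
a, b, n, k = 1.0, 2.0, 2, 6
centers_1d = a + (b - a) * (np.arange(k) + 0.5) / k
theta = np.array(list(product(centers_1d, repeat=n)))  # the k^n centers theta_i
v = 0.5 * (theta ** 2).sum(axis=1)                     # v_i = v(theta_i)
D = theta.copy()                                       # D_i = grad v(theta_i)

# plane[i, j] = v_i + D_i . (theta_j - theta_i) is the supporting plane at
# theta_i evaluated at theta_j; convexity requires it to lie below v_j.
plane = v[:, None] + (D @ theta.T) - (D * theta).sum(axis=1)[:, None]
gaps = plane - v[None, :]
assert (gaps <= 1e-12).all()
```

In the actual program the (v_i, D_i) are decision variables and these inequalities, together with non-negativity and D_i ∈ Q, form the feasible set of P_k.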
Remark 4.2.2 The constraints of the non-linear program determine a con  vex set. Remark 4.2.3,4 (c) guarantees that p is a supporting hyperplane of the convex hull of the points {(O, vl),. , (O, vkn)}. Note that Tik is a piecewise .  .  affine convex function.  4.3  Convergence of the algorithm  In this section convergence of the algorithm is proved. Define, for notational convenience Jk(Vk,  Dk)  :=  (v  -  .  D + C(D))  (Ek)f  Proposition 4.3.1 Under the assumptions made on L, the problem Pk has a unique solution.  62  Chapter 4. An Algorithm to Solve Variational Problems with Convexity Constraints PROOF.  The mapping (v’,D’)  —  Jk(v,Dc)  is strictly convex. It follows from proposition 4.1.2 that any feasible vectormatrix pair (vc, D’) must lie in [0, K]k x Qk, which together with Remark 4.2.2 implies Pk consists of minimizing a strictly convex function over a 0 compact and convex set. The result then follows Proposition 2.1.5. Proposition 4.3.2 There exists Y E C such that:  1. The sequence {Yk} generated by the Pk ‘s has a subsequence {13k }that 1 converges uniformly to . 2. limkI[vk]  I[J].  The bounded (Proposition 4.1.2) family {Uj} is uniformly equicon tinuous, as it consists of convex functions with uniformly bounded subgradi ents. The Arzela-Ascoli theorem implies that, upon passing to a subsequence if necessary (which for simplicity will still be denoted by {J, }), there is a PROOF.  non-negative and convex function 7 such that Vk  —*  7  uniformly on  By convexity VYk —p Vi almost everywhere (lemma 2.1.8); since VJk(O) belongs to the bounded set Q, the integrands are dominated. By Lebesgue Dominated Convergence lim I  k—400  []  =  I[]. C  Let ii be the minimizer of I[.] within C. The aim is to show that {ik } is a minimizing sequence of problem P, in other words that lim I [ii,j  k—*oo  =  I  [u]. 63  Chapter 4. 
An Algorithm to Solve Variational Problems with Convexity Constraints To this end, the following auxiliary definition is necessary Definition 4.3.3 Let i be such that infUEc I[u}  =  I[Tz}. Given  ek,  define  fori=1,...,k: 1.  2. G  Vu(O),  3. qj(O) :=+G.(6—O) and  4.  uj(8) :—maxq(O).  Notice that 11 k (0) is constructed as the convex envelope of a family of affine functions. The inequalities jk(Uk,Gk)  Jk(Vk,D)  (43)  I[i]  (4.4)  ‘[Uk]  follow from the definitions of jk (Uk  k)  Uk  and 1 k, as does the following  Proposition 4.3.4 Let i and Uk be as above, then Uk  k  —*  —*  U uniformly as  00.  The following lemma is key to establishing the convergence result at hand.  (€  [a,b] R} be a family of and Q is a compact convex subset of R. Let {fk : 0 convex functions such that Ofk(O) C Q for all 0 E 0, and whose uniform limit is 7. Let Ek be the uniform partition of 0 consisting of k cubes of and Th the elements of k volume p(Ek) := ()‘. Denote by u, 1 j ,  1 Lemma 4.3.5 Consider q(0,z,p) E C  x R x  Q  —  IR), wheree  =  —  let I1(Ek)  q5(0, fk(O), Vfk(9))  64  Chapter 4. An Algorithm to Solve Variational Problems with Convexity Constraints be the corresponding Riemann sum approximating Se q5(x, fk(O), Vfk(O))dO, where O e and E Ek. Then for any e> 0 there is K e N such that  4  fe  4  fk(O),  Vfk(O))dO  -  , fk(O), Vfk(O)) 4 q5(O,  k)  e  (4.5)  K.  for any k  By lemma 2.1.9, for each fk there exists a sequence of continuously differentiable convex functions {g } such that PROOF.  g  —  fk  uniformly.  and IIVhk(O) Let hk be the first element in {g} such that IIhk fkII for all 0 e e where Vfk is continuous. Then hk —f f uniformly, Vfk(O)II and by Lemma 2.1.8 we have that Vhk(O) —÷ VJ(9) a.e. It follows from Egoroff’s theorem that for every n e N there exists a set A C 0 such that: —  —  (0  \ Ar,,)  <1/n  and  Vhk  —f  V7  uniformly on  Let X (.) be the 1-0-valued indicator function of  gk(0)  :=  (0, hk(9),Vkk(0))  A.  
4 and define  -  Fix n, consider 0 e A and let {0 } be a sequence of 0 ‘s converging to o as the partition is refined. By uniform convergence, Vf is continuous on 0  A, hence lim hk(0)  k—*oo  7(0°)  and  lim Vhk(0)  =  k—+oo  ) 0 V7(0  (4.6)  0 almost everywhere It follows from (4.6) and the continuity of that Yk on A,.. The following inequalities follow from the Mean Value Theorem, the —  65  Chapter 4. An Algorithm to Solve Variational Problems with Convexity Constraints compactness of (9 and  Q and the definition of hk forall  II(6,fk(O),Vfk(O))IIK1,  Oee  andsome K >O. 1  and  gk()  fk(O),Vfk(O))  -  for some K 2  fe  >  0 and all 0  -  Xk(O)(9, fk(O)Vfk(O)))  e (9 where Vfk  cb(0, fk(0), Vfk(0))dO  is continuous. Therefore  f(O),  -  Vfk(0))  <  9 K ( 2 I + JYk(0)d8 k Since g gence  —  0 aimost everywhere on A, by Lebesgue Dominated Conver  urn  /  gk(0)d00,  k—ooJA  moreover, the definition of A and the fact that implie 1 2K gk(0)dO  IIc’(O, fk(O), Vfk(0))II  K  f  Given e> 0 take n  e N such that K2H(9II +  and K such that  ‘  f  Then equation (4.5) holds for all k  gK(e)dO  K.  (k) such that 2 Proposition 4.3.6 For each k there exist ci(k) and c jk(vk,D)_I[vkj ei(k)  (4.7)  66  Chapter 4. An Algorithm to Solve Variational Problems with Convexity Constraints  Jk(Uk,Gk)_1[UkJ E (k) 2  (k)—*0 ask 2 andei(k),c  —*  (4.8)  00.  It will be shown that (4.7) holds, the proof for (4.8) is analogous. Define the simple function PROOF.  Wk(O) :=  L(O, i4, D), 0 e  hence Jk(Vk,D) =  (4.9)  fet0d0  The left-hand side of (4.7) can be written as fWk(0)dO  —  (4.10)  I[ik1  It follows from Lemma 4.3.5 that  =  satisfies ei(k)—*0  k—+oo. ci  The main theorem in this chapter can now be proved, namely: Theorem 4.3.7 The sequence {Yk} is minimizing for problem P. PROOF.  It follows from Proposition 4.3.6 and equations (4.3) and (4.4) that (k) +ci(k) 2 I[Zkj +c  Letting k result.  
—*  I[]  I[j  (4.11)  cc in (4.11) and using Proposition 4.3.2 yields the desired ci  67  Chapter 4. An Algorithm to Solve Variational Problems with Convexity Constraints  Remark 4.3.8 Notice that given the kthstep approximation vk(O), an ini tial state for problem Pk+1 can be generated by evaluating Vk and VVk at the elements of 9 k+1• This allows for an iterative approach to solving larger problems, which may speed up running times considerably.  4.4  Examples  This section contains some results from implementing the algorithm. The first two examples to reduce quadratic programs, whereas the third and fourth ones are more general non-linear optimization programs. All the computer coding has been written in MatLab. However, in both cases sup plemental Optimization Toolboxes were used. In the first two examples we used the Mosek 4.0 Optimization Toolbox, whereas in the last two we used Tomlab 6.0. These four examples share the common microeconomic motivation provided in Section 2.3.3. 4.4.1  The Mussa-Rosen problem in a square  The following structures are shared in the first two examples, where n  =  2  • x = (vk, Dk), this structure will determine any possible candidate for a minimizer to Jk(•,•) in the following way: vk is a vector of length 2 that will contain the approximate values of the optimal function k 2 evaluated on the points of the lattice. The vector D’ has length 2* k and it contains what will be the partial derivatives of at the same points . The product hx provides the discretization 2 • h is a vector of length 3* k of the integral fe(° Vv v(O))f(O)dO. —  0 imposes the • B is the matrix of constraints. The inequality Bx non-negativity of v and D’ and the convexity of the resulting Jk•  Remark 4.4.1 The density f(O) is “built into” vector h and the cost func tion C. 68  Chapter 4. An Algorithm to Solve Variational Problems with Convexity Constraints Let 0 = [1, 2]2, C(q) = IIqll, and assume the types are uniformly distributed. 
This is a benchmark problem, since the solution to the principal's problem can be found explicitly [44]. This case reduces to solving the quadratic program

sup hx − xᵗHx subject to Bx ≤ 0.

H is a (3k²) × (3k²) matrix whose first k² columns are zero, since v does not enter the cost function; the four k² × k² blocks towards its lower right corner form a (2k²) × (2k²) identity matrix. Therefore xᵗHx is a discretization of ∫_Θ ‖∇v(θ)‖² dθ. Figure 4.1(a) was produced using a 17 × 17 point lattice and a uniform density, whereas Figure 4.1(b) shows the traded qualities.

4.4.2 The Mussa-Rosen problem with a non-uniform density of types

In this section the cost function from the previous example is kept, but it is now assumed that the types are distributed according to a bivariate normal distribution with mean (1.9, 1) and variance-covariance matrix

[ .3  .2 ]
[ .2  .3 ].

As noted before, the weight assigned to each agent type is built into h and H, so the vector x remains unchanged. The execution of the algorithm yields Figure 4.2(a).

Remark 4.4.2 It is interesting to see that in this case bunching of the second kind, as described by Rochet and Choné in [44], appears to be eliminated as a consequence of the skewed distribution of the agents. This can be seen in the non-linear level curves of the optimizing function v. This is also quite evident in the plot of the qualities traded, which is shown below.

Figure 4.1: Optimal solution for uniformly distributed agent types. (a) The optimal function v. (b) The qualities traded.

The MatLab programs for the two previous examples were run on MatLab (R14) on a Sun Fire V480 (4 × 1.2 GHz Ultra III, 16 GB RAM) computer running Solaris 2.10 OS. In the first example 57.7085 seconds of processing time were required.
The running time in the second example was 81.7280 seconds.

Figure 4.2: Optimal solution for normally distributed agent types. (a) The optimal function v. (b) The qualities traded.

4.4.3 An example with non-quadratic cost

This example approximates a solution to the problem of a principal who is selling over-the-counter financial derivatives to a set of heterogeneous agents. The model is presented by Carlier, Ekeland and Touzi in [14]. They start with a standard probability space (Ω, F, P), and the types of the agents are given by their risk aversion coefficients under the assumption of mean-variance utilities; namely, the set of agent types is Θ = [0, 1], and the utility of an agent of type θ when facing product X is

U(θ, X) = E[X] − θ Var[X].

Under the assumptions of a zero risk-free rate and that the principal has access to a complete market, her cost of delivering product X(θ) is given by −σ√(−v′(θ)), where σ² is the variance of the Radon-Nikodym derivative of the (unique) martingale measure, and Var[X(θ)] = −v′(θ). The principal's problem can be written as

sup_{v∈C} ∫_Θ ( θ v′(θ) + σ√(−v′(θ)) − v(θ) ) dθ,   (4.12)

where C := { v : Θ → ℝ | v convex, v ≥ 0, v′ ≤ 0 and Var[X(θ)] = −v′(θ) }. Figure 4.3 shows an approximation of the maximizing v using 25 agent types.

Figure 4.3: The optimal function v.

4.4.4 Minimizing risk

This example falls within the framework of the risk minimization problem studied in Chapter 3. In this case, the principal underwrites a catalogue of call options on her income and their corresponding prices. Notice that the contracts will be characterized by the different strikes.
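The mean-variance preferences above (which also drive the risk-minimization examples that follow) are straightforward to evaluate on a finite state space. The sketch below (Python, with illustrative data chosen by the editor, not taken from the thesis) shows that a more risk-averse type assigns a lower utility to the same risky payoff.

```python
import numpy as np

def mv_utility(theta, payoffs, probs):
    """U(theta, X) = E[X] - theta * Var[X] on a finite state space."""
    ex = probs @ payoffs
    var = probs @ (payoffs - ex) ** 2
    return ex - theta * var

X = np.array([1.0, -2.0, 0.5])   # illustrative payoff
p = np.array([0.5, 0.3, 0.2])    # illustrative state probabilities

# a more risk-averse agent values the same risky product less
u_low = mv_utility(0.1, X, p)
u_high = mv_utility(0.9, X, p)
print(u_low > u_high)
```

Since Var[X] > 0 whenever X is non-constant, U(θ, X) is strictly decreasing in θ for a fixed risky product, which is what makes the risk-aversion coefficient a natural screening variable.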
An analysis of the agents' problem as done in Chapter 3 yields derivatives of the form

X(θ) = (|W| − K(θ))⁺   with   0 < K(θ) < ‖W‖∞.

The principal evaluates the risk associated with her overall position

W + ∫_Θ (π(θ) − X(θ)) dθ

via the "entropic measure" of her position, i.e.

ρ( W + ∫_Θ (π(θ) − X(θ)) dθ ),

where ρ(X) = log E[exp{−βX}] for some β > 0. The principal's problem is to devise a catalogue so as to minimize her risk exposure. We assume the set of states of the world is finite with cardinality m. Each possible state ω_j can occur with probability p_j. The realizations of the principal's wealth are denoted by W = (W₁, …, W_m). Note that p and W are treated as known data. The objective function F(v, v′, K) of the non-linear program is the discretized entropic risk of the principal's position, net of the income generated by the sale of the catalogue; here K = (K₁, …, K_n) denotes the vector of type-dependent strikes. We denote by ng the total number of constraints. The principal's problem is to find

min_{(v, v′, K)} F(v, v′, K)   subject to   G(v, v′, K) ≤ 0,

where G determines the constraints that keep (v, v′, K) within the set of feasible contracts. Let (1/6, 2/6, …, 1) be the uniformly distributed agent types, and

• W = 4·(−2, −1.7, 1.4, −.7, −.5, 0),

• p = (1/10, 1.5/10, 2.5/10, 2.5/10, 1.5/10, 1/10).

The principal's initial evaluation of her risk is 1.52. The plots for the approximating v and the strikes are given in Figure 4.4. Note that for illustration purposes the scale for the agent types in the second plot has been changed. The interpolates of the approximation to the optimal function v and the strikes are given in the table following the plots.

Figure 4.4: Optimal solution for underwriting call options. Entropic case. (a) The optimal function v. (b) The type-dependent strikes.
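On a finite state space the entropic measure is a one-line computation. The sketch below (Python, by the editor; the value of β is illustrative, and the normalised variant with the 1/β factor is used here) verifies cash invariance, ρ(X + c) = ρ(X) − c, which is the property that lets the deterministic price income be pulled out of the risk evaluation.

```python
import numpy as np

def entropic(position, probs, beta):
    """rho(X) = (1/beta) * log E[exp(-beta X)] (normalised entropic measure)."""
    return np.log(probs @ np.exp(-beta * position)) / beta

# the wealth data of Section 4.4.4
W = 4 * np.array([-2.0, -1.7, 1.4, -0.7, -0.5, 0.0])
p = np.array([1, 1.5, 2.5, 2.5, 1.5, 1]) / 10.0
beta = 0.5  # illustrative; the appendix code uses Beta = .5

rho = entropic(W, p, beta)
# adding a sure amount c to the position reduces risk by exactly c
c = 2.0
print(abs(entropic(W + c, p, beta) - (rho - c)) < 1e-10)
```

By Jensen's inequality ρ(X) ≥ −E[X] always holds, with the gap growing in β, so β plays the role of the principal's risk aversion in this family of measures.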
The principal's valuation of her risk after the exchanges with the agents decreases from 1.52 to −3.56.

Remark 4.4.3 Notice the "bunching" at the bottom.

v₁  4.196344     K₁  1.078869
v₂  3.234565     K₂  0.785079
v₃  2.321529     K₃  0.733530
v₄  1.523532     K₄  0.713309
v₅  0.745045     K₅  0.713309
v₆  0.010025     K₆  0.713309

Table 4.1: The optimal function v and the strikes. Entropic case.

4.5 A catalogue of put options

This section discusses the case where the principal restricts the type of contracts offered to put options underwritten on her income. In other words, the elements of the catalogues that are offered by the principal are of the form

( (K − |W|)⁺, π(K) ).

Notice that the different contracts are determined by the choice of K. As studied in Chapter 3, given a price schedule π, an agent of type θ computes

sup_K { U((K − |W|)⁺, θ) − π(K) }.

It will be assumed that W is a bounded random variable. Then the principal considers contracts of the form

X(θ) = (K(θ) − |W|)⁺   with   0 ≤ K(θ) ≤ ‖W‖∞.

The boundedness assumption on the strikes is made with no loss of generality, as each equilibrium pricing scheme is necessarily non-negative. Note that in this case the risk measure can be defined on L∞(P), so only convergence in probability is required to use the Fatou property. Both the agents' net utilities and the variance of their positions are bounded from above by some constants K₁ and K₂, respectively. Thus, the principal chooses a function v and contract X from the set

{ (X, v) | v ∈ C, v ≤ K₁, −Var[(K(θ) − |W|)⁺] = v′(θ), |v′| ≤ K₂, 0 ≤ K(θ) ≤ ‖W‖∞ }.

The variance constraint v′(θ) = −Var[(K(θ) − |W|)⁺] implies that the strikes can be expressed in terms of a continuous function of v′, i.e.,

K(θ) = F(v′(θ)).
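The agent's problem above can be solved by brute force on a strike grid. The following Python sketch is the editor's illustration, not the thesis code: the linear price schedule π(K) = 0.4·K and the wealth data are hypothetical, chosen only to show the computation of a type-dependent strike.

```python
import numpy as np

def put_payoff(K, absW):
    return np.maximum(K - absW, 0.0)

def best_strike(theta, price, absW, probs, grid):
    """Grid search for sup_K { E[Y] - theta*Var[Y] - price(K) },  Y = (K - |W|)^+."""
    best = None
    for K in grid:
        Y = put_payoff(K, absW)
        ex = probs @ Y
        var = probs @ (Y - ex) ** 2
        u = ex - theta * var - price(K)
        if best is None or u > best[1]:
            best = (K, u)
    return best

absW = np.abs(np.array([-1.0, -2.0]))
p = np.array([0.5, 0.5])
price = lambda K: 0.4 * K            # hypothetical price schedule
grid = np.linspace(0.0, 2.0, 201)    # strikes stay in [0, ||W||_inf]
K_star, u_star = best_strike(0.5, price, absW, p, grid)
print(0.0 <= K_star <= 2.0)
```

The principal's screening problem consists of choosing π so that the strikes selected this way, one per type, minimize her residual risk.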
The risk measure considered is AV@R_λ, and therefore the numerical procedure deviates from the algorithm presented above. This is caused by the fact that solving the "risky" part of the principal's problem involves finding a minimax point. However, the philosophy of approximating a convex function via an envelope of affine functions still applies. The principal's problem can be written as

inf { ρ( W − ∫_Θ { (F(v′(θ)) − |W|)⁺ − E[(F(v′(θ)) − |W|)⁺] } dθ ) − I(v) },

where the infimum is taken over the set of all functions v ∈ C that satisfy v ≤ K₁ and |v′| ≤ K₂.

Remark 4.5.1 Within the current framework the contracts are expressed in terms of the derivative of the principal's choice of v. This reflects the fact that the principal restricts herself to type-dependent put options. This is not always true in the general case.

4.5.1 An existence result

Let {v_n} be a minimizing sequence for the principal's optimization problem. The functions v_n are uniformly bounded and uniformly equicontinuous, so we may with no loss of generality assume that v_n → v̄ uniformly. Recall this also implies a.s. convergence of the derivatives. Lebesgue Dominated Convergence and the continuity of F, along with the fact that W is bounded, yield

∫_Θ (F(v′_n(θ)) − |W|)⁺ dθ → ∫_Θ (F(v̄′(θ)) − |W|)⁺ dθ   P-a.s.

and

lim_n ∫_Θ E[(F(v′_n(θ)) − |W|)⁺] dθ = ∫_Θ E[(F(v̄′(θ)) − |W|)⁺] dθ.

This shows that the principal's positions converge almost surely and hence in probability. Since ρ is lower semi-continuous with respect to convergence in probability, v̄ solves the principal's problem.

4.5.2 An algorithm for approximating the optimal solution

In the following numerical implementation it is assumed that the set of states of the world is finite with cardinality m. Each possible state ω_j can occur with probability p_j. The realizations of the principal's wealth are denoted by W = (W₁, …, W_m). Note that p and W are treated as known data.
The principal evaluates risk via the risk measure

ρ(X) = sup_{q∈Q_λ} − Σ_{j=1}^m X_j p_j q_j,

where

Q_λ := { q ∈ ℝ₊^m | p·q = 1, q_j ≤ λ }.

It is also assumed that the set of agent types is finite with cardinality n, i.e. Θ = (θ₁, …, θ_n). The density of the types is given by M := (M₁, …, M_n). In order to avoid singular points in the principal's objective function, the option's payoff function f(x) = (x − K)⁺ (here x plays the role of the strike K(θ) and K that of the realization |W_j|) is approximated by the differentiable function

T(x, K) =  0,          if x ≤ K − ε,
           S(x, K),    if K − ε < x < K + ε,
           x − K,      if x ≥ K + ε,

where

S(x, K) = x²/(4ε) + (ε − K)x/(2ε) + (K² − 2Kε + ε²)/(4ε).

The algorithm uses a penalized quasi-Newton method, based on Žaković and Pantelides [45], to approximate a minimax point of the function F(v, K, q), which combines the dual representation of the risk measure above with the principal's income and the penalty terms. Here v = (v₁, …, v_n) stands for the values of a convex, non-increasing function, v′ = (v′₁, …, v′_n) are the corresponding values for the derivative, and K = (K₁, …, K_n) denotes the vector of type-dependent strikes. The need for a penalty method arises from having to face the equality constraints v′(θ) = −Var[(K(θ) − |W|)⁺] and p·q = 1. In order to implement a descent method, these constraints are relaxed and a penalty term is added. Let ng be the total number of constraints. The principal's problem is to find

min_{(v,K)} max_{q∈Q_λ} F(v, K, q)   subject to   G(v, K, q) ≤ 0,

where G : ℝ^{2n+m} → ℝ^{ng} determines the constraints that keep (v, K) within the set of feasible contracts and q ∈ Q_λ.

Example 4.5.2 The effects of risk transfer on the principal's position are illustrated in two models with five agent types and two states of the world. In both cases W = (−1, −2), Θ = (1/2, 5/8, 3/4, 7/8, 1) and λ = 1.1. The starting values v₀, q₀ and K₀ are set to (4, 3, 2, 1, 0), (1, 1) and (1, 1, 1, 1, 1), respectively.
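The supremum over Q_λ is attained by loading the upper bound λ onto the worst outcomes until the probability budget p·q = 1 is exhausted. The following Python sketch (the editor's illustration, not the thesis implementation) evaluates ρ this way on the two-state wealth data of Example 4.5.2.

```python
import numpy as np

def avar_dual(X, p, lam):
    """rho(X) = sup_{q in Q_lam} -sum_j q_j p_j X_j, with
    Q_lam = {q : p.q = 1, 0 <= q_j <= lam}.  The sup puts q_j = lam
    on the worst outcomes until the probability budget is spent."""
    order = np.argsort(X)            # worst outcomes (smallest X_j) first
    q = np.zeros_like(p)
    budget = 1.0
    for j in order:
        qj = min(lam, budget / p[j])
        q[j] = qj
        budget -= p[j] * qj
        if budget <= 1e-15:
            break
    return -(q * p) @ X, q

W = np.array([-1.0, -2.0])
p = np.array([0.5, 0.5])
rho, q = avar_dual(W, p, 1.1)
print(rho, q)
```

Here the worst state W = −2 receives the full weight q = 1.1 and the remaining budget goes to the other state, so the measure interpolates between the expectation of the loss (λ = 1) and the worst case (λ large).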
i) Let p = (0.5, 0.5) and the types be uniformly distributed. The principal's initial evaluation of her risk is 1.52. The optimal function v and strikes are:

v₁  0.1055     K₁  1.44
v₂  0.0761     K₂  1.37
v₃  0.0501     K₃  1.07
v₄  0.0342     K₄  1.05
v₅  0.0195     K₅  1.05

Table 4.2: The optimal function v and the strikes. Case 1.

The principal's valuation of her risk after the exchanges with the agents decreases to 0.2279.

Figure 4.5: Optimal solution for underwriting put options. Case 1. (a) The type-dependent strikes. (b) The optimal function v.

ii) In this instance p = (0.25, 0.75) and M = (1/15, 2/15, 3/15, 4/15, 5/15). The principal's initial evaluation of her risk is 1.825. The values for the discretized v and the type-dependent strikes are:

v₁  0.0073     K₁  1.2
v₂  0.0045     K₂  1.16
v₃  0.0029     K₃  1.34
v₄  0.0026     K₄  0.11
v₅  0.0025     K₅  0.12

Table 4.3: The optimal function v and the strikes. Case 2.

The principal's valuation of her risk after the exchanges with the agents is 0.0922.

Figure 4.6: Optimal solution for underwriting put options. Case 2. (a) The type-dependent strikes. (b) The optimal function v.

Chapter 5

Conclusions

This work borrows tools from several fields, namely Mathematical Finance (convex risk measures), Mathematical Economics (contract theory) and Convex Analysis. Chapter 2 includes a new (to the author's knowledge) result stating that a convex function defined on a Hilbert space which admits an equivalent norm that is differentiable off the origin is not only densely Fréchet differentiable (this is a well-known result), but that these derivatives are actually locally Lipschitz on a dense subset.
Chapter 3 analyzes a screening problem where the principal's choice space is infinite dimensional. The main motivation was to present a nonlinear pricing scheme for over-the-counter financial products, which the principal trades with a set of heterogeneous agents with the aim of minimizing the exposure of her income to some non-hedgeable risk. U-convex analysis has been used in order to characterize incentive compatible and individually rational catalogues. To keep the problem tractable the agents have been assumed to have mean-variance utilities, but this is not necessary for the characterization of the problem; considering more general utility functions is an obvious extension of this work. The main result is a proof of existence of a solution to the principal's risk minimization problem in a general setting. The structure of optimal contracts has also been characterized, and it has been shown that the optimal solution is, in some sense, unique.

In Chapter 4 a numerical algorithm to estimate the minimizers of variational problems with convexity constraints has been developed. The main motivation for the study of such an algorithm stems from the fact that in most situations finding closed-form solutions to the principal's problem described in Chapter 3 is not possible. This is an internal method, i.e., at each precision level the approximate minimizer lies within the acceptable set of functions. The examples are developed over one- or two-dimensional sets for illustration reasons, but the algorithm can be implemented in higher dimensions. However, it must be mentioned that, as is the case with the other methods found in the related literature, imposing convexity has a high computational cost which increases exponentially with dimension.
Possible extensions to this work are: showing that the set of points where a convex function defined on a Hilbert space is Fréchet differentiable is a G_δ (this is, hitherto, just a conjecture); agents with heterogeneous initial endowments (or risk exposures); a model of an economy with multiple principals; the study of more restricted products such as swaps; and the characterization of the convex hull of n points in O(n log n) time, to facilitate treating examples with a larger number of agent types.

Bibliography

[1] Armstrong, M.: Multiproduct Nonlinear Pricing, Econometrica, Vol. 64, pp. 51-75, 1996.

[2] Barrieu, P. & El Karoui, N.: Optimal Design of Derivatives in an Illiquid Framework, Quantitative Finance, Vol. 2, pp. 1-8, 2005.

[3] Barrieu, P. & El Karoui, N.: Inf-convolution of risk measures and optimal risk transfer, Finance and Stochastics, Vol. 9, pp. 269-298, 2005.

[4] Bäuerle, N. & Müller, A.: Stochastic Orders and Risk Measures: Consistency and Bounds, Insurance: Mathematics & Economics, Vol. 38, pp. 132-148, 2006.

[5] Becherer, D.: Utility Indifference Hedging and Valuation via Reaction-Diffusion Systems, Proceedings of the Royal Society, Series A, Vol. 460, pp. 27-51, 2004.

[6] Becherer, D.: Bounded Solutions to Backward SDEs with Jumps for Utility Optimization and Indifference Hedging, Annals of Applied Probability, to appear.

[7] Bolton, P. & Dewatripont, M.: Contract Theory, MIT Press, 2005.

[8] Borwein, J. & Noll, D.: Second Order Differentiability of Convex Functions in Banach Spaces, Transactions of the American Mathematical Society, Vol. 342, No. 1, pp. 43-81, 1994.

[9] Borwein, J. & Preiss, D.: A Smooth Variational Principle with Applications to Subdifferentiability and to Differentiability of Convex Functions, Transactions of the American Mathematical Society, Vol. 303, No. 2, pp. 517-527, 1987.

[10] Brock, F. & Ferone, V. & Kawohl, B.: A Symmetry Problem in the Calculus of Variations, Calc. Var.
Partial Differential Equations, Vol. 4, pp. 593-599, 1996.

[11] Carlier, G.: A general existence result for the principal-agent problem with adverse selection, Journal of Mathematical Economics, Vol. 35, pp. 129-150, 2001.

[12] Carlier, G.: Calculus of Variations with Convexity Constraint, Journal of Nonlinear and Convex Analysis, Vol. 3, No. 2, pp. 125-143, 2002.

[13] Carlier, G.: Duality and Existence for a Class of Mass Transportation Problems and Economic Applications, Advances in Mathematical Economics, Vol. 5, pp. 1-21, 2003.

[14] Carlier, G. & Ekeland, I. & Touzi, N.: Optimal Derivatives Design for Mean-Variance Agents under Adverse Selection, Mathematics and Financial Economics, Vol. 1, pp. 57-80, 2007.

[15] Carlier, G. & Lachand-Robert, T.: Regularity of Solutions for some Variational Problems Subject to a Convexity Constraint, Communications on Pure and Applied Mathematics, Vol. 54, No. 5, pp. 583-594, 2001.

[16] Carlier, G. & Lachand-Robert, T. & Maury, B.: A Numerical Approach to Variational Problems Subject to Convexity Constraints, Numerische Mathematik, Vol. 88, pp. 299-318, 2001.

[17] Cheridito, P. & Li, T.: Monetary Risk Measures on Maximal Subspaces of Orlicz Classes, Mathematical Finance, to appear.

[18] Choné, P. & Le Meur, H. V. J.: Non-Convergence Result for Conformal Approximation of Variational Problems Subject to a Convexity Constraint, Numerical Functional Analysis and Optimization, Vol. 22, No. 5, pp. 529-547, 2001.

[19] Davis, M.: Pricing Weather Derivatives by Marginal Value, Quantitative Finance, Vol. 1, pp. 305-308, 2001.

[20] Ekeland, I.: Nonconvex Minimization Problems, Bulletin (New Series) of the American Mathematical Society, Vol. 1, No. 3, pp. 443-474, 1979.

[21] Ekeland, I. & Lebourg, G.: Generic Fréchet-Differentiability and Perturbed Optimization Problems in Banach Spaces, Transactions of the American Mathematical Society, Vol. 224, No. 2, pp. 193-216, 1976.
[22] Ekeland, I. & Moreno-Bromberg, S.: An Algorithm for Computing Solutions of Variational Problems with Global Convexity Constraints, submitted, 2008.

[23] Ekeland, I. & Temam, R.: Convex Analysis and Variational Problems, Classics in Applied Mathematics, 28, SIAM, 1976.

[24] Evans, L.: Partial Differential Equations, Graduate Studies in Mathematics, 19, American Mathematical Society, 2002.

[25] Filipović, D. & Kupper, M.: Equilibrium Prices for Monetary Utility Functions, International Journal of Theoretical & Applied Finance, to appear.

[26] Föllmer, H. & Schied, A.: Stochastic Finance. An Introduction in Discrete Time, de Gruyter Studies in Mathematics, 27, 2004.

[27] Giaquinta, M. & Hildebrandt, S.: Calculus of Variations I, Grundlehren der mathematischen Wissenschaften 310, Springer-Verlag, 1996.

[28] Guesnerie, R.: A Contribution to the Pure Theory of Taxation, Econometrica, Vol. 49, pp. 33-64, 1995.

[29] Horst, U. & Moreno-Bromberg, S.: Risk Minimization and Optimal Derivative Design in a Principal Agent Game, Mathematics and Financial Economics, to appear.

[30] Horst, U. & Müller, M.: On the Spanning Property of Risk Bonds Priced by Equilibrium, Mathematics of Operations Research, Vol. 32, pp. 784-807, 2007.

[31] Horst, U. & Pirvu, T. & Nunes dos Reis, C.: On Securitization, Market Completion and Equilibrium Risk Transfer, Working Paper, 2007.

[32] Hu, Y. & Imkeller, P. & Müller, M.: Market Completion and Partial Equilibrium, International Journal of Theoretical & Applied Finance, to appear.

[33] Jouini, E. & Schachermayer, W. & Touzi, N.: Law Invariant Risk Measures have the Fatou Property, Advances in Mathematical Economics, Vol. 9, pp. 49-71, 2006.

[34] Lachand-Robert, T. & Oudet, E.: Minimizing within Convex Bodies Using a Convex Hull Method, SIAM Journal on Optimization, Vol. 16, No. 2, pp. 368-379, 2005.

[35] Lachand-Robert, T.
& Peletier, A.: Extremal Points of a Functional on the Set of Convex Functions, Proceedings of the American Mathematical Society, Vol. 127, No. 6, pp. 1723-1727, 1999.

[36] Laffont, J.-J. & Martimort, D.: The Theory of Incentives (The Principal-Agent Model), Princeton University Press, 2002.

[37] Lieb, E. & Loss, M.: Analysis: Second Edition, Graduate Studies in Mathematics, 14, American Mathematical Society, 2001.

[38] Luenberger, D.: Optimization by Vector Space Methods, Wiley and Sons, 1969.

[39] Mussa, M. & Rosen, S.: Monopoly and Product Quality, Journal of Economic Theory, Vol. 18, pp. 301-317, 1978.

[40] Mosco, U.: Convergence of convex sets and of solutions of variational inequalities, Advances in Mathematics, Vol. 3, pp. 510-585, 1969.

[41] Niculescu, C. & Persson, L.-E.: Convex Functions and their Applications, CMS Books in Mathematics, Springer Science+Business Media, 2006.

[42] Phelps, R.: Convex Functions, Monotone Operators and Differentiability, Lecture Notes in Mathematics, 1364, Springer-Verlag, 1989.

[43] Rochet, J.-C.: A Necessary and Sufficient Condition for Rationalizability in a Quasi-linear Context, Journal of Mathematical Economics, Vol. 16, pp. 191-200, 1987.

[44] Rochet, J.-C. & Choné, P.: Ironing, Sweeping and Multidimensional Screening, Econometrica, Vol. 66, pp. 783-826, 1998.

[45] Žaković, S. & Pantelides, C.: An Interior Point Algorithm for Computing Saddle Points of Constrained Continuous Minimax, Annals of Operations Research, Vol. 99, pp. 59-77, 2000.

Appendix A

Matlab and Maple Codes

This appendix contains the computer codes used in the numerical examples in Chapter 4. The codes used in Sections 4.4.1 and 4.4.2 are similar, hence only the differences in the latter are presented. Comments in the code are preceded by the symbol %, and the code itself is presented in italics.
A.1 MatLab code for the example in Section 4.4.1

The main bodies of the codes for the examples in Sections 4.4.1 and 4.4.2 use the following function, which generates the envelope of affine functions as described in Section 4.2.

function val = V(x, y, n)
global S X Y
u = zeros(1, n);
sizexy = size(x);
val = zeros(sizexy);
for i = 1:sizexy(1),
for j = 1:sizexy(2),
xval = x(i,j); yval = y(i,j);
for k = 1:n
u(k) = S(k) + S(k+n)*(xval - X(k)) + S(k+2*n)*(yval - Y(k));
end
val(i,j) = max(u);
end
end

The following is the main body of the code:

% The Mussa-Rosen problem in a square
global S X Y
n = 8;
% The matrix H corresponds to the one in Section 4.4.1.
H1 = zeros(n^2, 3*(n^2));
H2 = [zeros(n^2,n^2) eye(n^2,n^2) zeros(n^2,n^2)];
H3 = [zeros(n^2,n^2) zeros(n^2,n^2) eye(n^2,n^2)];
H = [H1; H2; H3];
H = sparse(H);
XN = zeros(n, n);
for i = 1:n
XN(i,:) = (1 + ((i-1)/(n-1)))*ones(1, n);
end
% The vectors X and Y make up the lattice of types.
X = zeros(1, 0);
for i = 1:n
X = [X XN(i,:)];
end
YN = zeros(n, n);
for i = 1:n
YN(:,i) = (1 + ((i-1)/(n-1)))*ones(n, 1);
end
Y = zeros(1, 0);
for i = 1:n
Y = [Y YN(i,:)];
end
% f holds the linear part of the quadratic objective.
f = [ones(1,n^2) -X -Y];
% B1 and B3 will set the positivity constraints on v and its partial derivatives.
B1 = [eye(n^2,n^2) zeros(n^2,n^2) zeros(n^2,n^2)];
B3 = [zeros(n^2,n^2) eye(n^2,n^2) eye(n^2,n^2)];
% B2 sets the convexity constraints.
B2 = zeros(n^4, 3*(n^2));
for j = 1:n^2
for i = 1:n^2
if (i-j == 0)
B2(i+(j-1)*n^2, :) = zeros(1, 3*n^2);
else
B2(i+(j-1)*n^2, j) = 1;
B2(i+(j-1)*n^2, i) = -1;
B2(i+(j-1)*n^2, j+n^2) = X(1,i)-X(1,j);
B2(i+(j-1)*n^2, j+2*n^2) = Y(1,i)-Y(1,j);
end
end
end
B = [B1; B3; -B2];
B = sparse(B);
% There is no a priori upper bound on the function v.
buc = [];
% The vector a corresponds to the lower bound in the constraints of the quadratic program mentioned in Section 4.4.1.
a = zeros(2*(n^2)+n^4, 1);
% This instruction records running times.
t = clock;
% The following is the call to the quadratic solver.
Q = mskqpopt(H, f, B, a, buc);
S = Q.sol.itr.xx;
% The outputs: elapsed time and plot.
etime(clock, t)
xvec = (1:.01:2); yvec = (1:.01:2);
[Xmesh, Ymesh] = meshgrid(xvec, yvec);
xv = (1:.1:2); yv = (1:.1:2);
[Xm, Ym] = meshgrid(xv, yv);
Vmesh = V(Xmesh, Ymesh, n^2);
contour3(Xmesh, Ymesh, Vmesh, 35)
hold on;
surface(Xmesh, Ymesh, Vmesh, 'EdgeColor', [.8 .8 .8], 'FaceColor', 'none')
grid off
view(-15, 25)
colormap cool

A.2 MatLab code for the example in Section 4.4.2

This example differs from the one presented above in the non-uniform distribution of types.

% This part of the code includes the density into the cost-function matrix H.
mu = [1.9 1];
Sigma = [.3 .2; .2 .3];
x1 = 1+1/17:1/17:2; x2 = 1+1/17:1/17:2;
[X1, X2] = meshgrid(x1, x2);
D = mvnpdf([X1(:) X2(:)], mu, Sigma)';
H1 = zeros(n^2, 3*(n^2));
H22 = zeros(n^2, n^2);
H33 = zeros(n^2, n^2);
for i = 1:n^2
H22(i,i) = D(1,i);
H33(i,i) = D(1,i);
end
H2 = [zeros(n^2,n^2) H22 zeros(n^2,n^2)];
H3 = [zeros(n^2,n^2) zeros(n^2,n^2) H33];
H = [H1; H2; H3];
% The density has to be included in the linear part of the objective function.
f = [D -(X.*D) -(Y.*D)];

A.3 MatLab code for the example in Section 4.4.3

This non-linear example requires the objective function to be built outside the main body of the optimization routine. Both this code and the one in Section A.4 employ the following function:

% The one-dimensional version of the function that generates the convex envelope of a family of affine functions.
function val = Vonedim(x, y, X)
sizex = size(x);
sizeX = size(X);
n = sizeX(2);
u = zeros(1, n);
val = zeros(sizex);
for j = 1:sizex(2)
xval = x(1,j);
for k = 1:n
u(k) = y(1,k) + y(1,k+n)*(xval - X(1,k));
end
val(1,j) = max(u);
end

The following is the objective function in the principal's non-linear program.
function val = CEkTomlab(x)
sizeof = size(x);
n = (1/2)*sizeof(1);
X = zeros(n, 1);
for i = 1:n
X(i,1) = i/n;
end
f = zeros(n,1); F = zeros(n,1); G = zeros(n,1);
for i = 1:n
f(i,1) = x(i,1);
F(i,1) = x(n+i,1);
G(i,1) = (-F(i,1))^(1/2);
end
g = -(F.*X);
val = sum(f+g-G);

The following is the main body of the routine.

n = 20;
X = zeros(1,n);
xo = zeros(1,n);
fo = zeros(1,n);
% The partition of the interval and the initial values for the variables vector.
for i = 1:n
X(1,i) = i/n;
xo(1,i) = n-i;
fo(1,i) = -1-(n-i);
end
% The starting point.
X0 = [xo fo];
% The matrix A contains the convexity constraints.
A = zeros(n^2, 2*n);
for j = 1:n
for i = 1:n
if (i-j == 0)
A(i+(j-1)*n,:) = zeros(1, 2*n);
else
A(i+(j-1)*n, j) = 1;
A(i+(j-1)*n, i) = -1;
A(i+(j-1)*n, j+n) = X(1,i)-X(1,j);
end
end
end
% The bounds for the constraints (b_L and b_U) and for the variables (x_L and x_U).
Name = 'CEkT';
b_U = zeros(1, n^2);
b_L = [];
x_L = [zeros(1,n) -10000*ones(1,n)];
x_U = [10000*ones(1,n) zeros(1,n)];
fLowBnd = -100000;
% This is the call to the nonlinear solver.
Prob = conAssign('CEkTomlab', [], [], [], x_L, x_U, Name, X0, [], fLowBnd, A, b_L, b_U);
Prob.NumDiff = 1;
Prob.ConsDiff = 5;
Prob.Solver.Alg = 0;
Result = tomRun('conSolve', Prob);
% The output.
Z = (Result.x_k - [Result.x_k(n)*ones(n,1); zeros(n,1)])';
xvec = (0:.01:1);
Vmesh = Vonedim(xvec, Z, X);
hold on;
plot(xvec, Vmesh);

A.4 MatLab code for the example in Section 4.4.4

The code for this example is separated into several subroutines which are then integrated in the main body.

% This is the payoff of a call option.
function val = Call(W, K)
if (W >= K)
val = W-K;
else
val = 0;
end

The non-linear constraints are built separately from the main body.

% The non-linear (variance) constraints.
function val = NLCHMTomlab2(x)
% The number of types n and states of the world s.
n = 6; s = 6;
% The principal's risky initial endowment W and the corresponding probabilities.
W = 4*[-2; -1.7; 1.4; -.7; -.5; 0];
P = [1/10; 1.5/10; 2.5/10; 2.5/10; 1.5/10; 1/10];
% The initial values of the function and the strikes.
V = zeros(n,1);
K = zeros(n,1);
for j = 1:n
V(j,1) = x(j+n,1);
K(j,1) = x(j+2*n,1);
end
M = zeros(s,n);
for i = 1:s
for j = 1:n
M(i,j) = Call(abs(W(i,1)), K(j,1));
end
end
val = (M.^2)'*P - (M'*P).^2 + V;

The following is the objective function in the principal's non-linear program.

function val = HMTomlab2(x)
n = 6; s = 6;
T = zeros(n,1);
W = 4*[-2; -1.7; 1.4; -.7; -.5; 0];
P = [1/10; 1.5/10; 2.5/10; 2.5/10; 1.5/10; 1/10];
Beta = .5;
% Separating the variables vector into its three components.
v = zeros(n,1); V = zeros(n,1); K = zeros(n,1);
for j = 1:n
v(j,1) = x(j,1);
V(j,1) = x(j+n,1);
K(j,1) = x(j+2*n,1);
end
% Computing the risky part of the principal's problem.
M = zeros(s,n);
for i = 1:s
for j = 1:n
M(i,j) = Call(abs(W(i,1)), K(j,1));
end
end
Y = M*ones(n,1);
B = exp(-Beta*(W+Y));
R = P'*B;
% Computing the income part of the principal's problem.
D = P'*M;
I = sum(-D-(T.*V)'+v');
% The value of the objective function.
val = R + I;

The following is the main body of the optimization routine for the principal's problem in Section 4.4.4.

% The TomLab routine for the one-dimensional HM problem.
n = 6;
X = zeros(1,n);
vo = zeros(1,n);
Vo = zeros(1,n);
Ko = zeros(1,n);
m = 6;
% The partition of the interval and the initial values for the variables vector.
for i = 1:n
X(1,i) = i/n;
vo(1,i) = n-i;
Vo(1,i) = -.1-(n-i);
Ko(1,i) = 1;
end
% The starting point.
X0 = [vo Vo Ko];
% The matrix A contains the convexity constraints.
A = zeros(n^2, 3*n);
for j = 1:n
for i = 1:n
if (i-j == 0)
A(i+(j-1)*n,:) = zeros(1, 3*n);
else
A(i+(j-1)*n, j) = 1;
A(i+(j-1)*n, i) = -1;
A(i+(j-1)*n, j+n) = X(1,i)-X(1,j);
end
end
end
% The bounds for the constraints (b_L and b_U) and for the variables (x_L and x_U).
Name = 'HM';
b_U = zeros(1, n^2);
b_L = [];
x_L = [zeros(1,n) -10000*ones(1,n) zeros(1,n)];
x_U = [10000*ones(1,n) zeros(1,n) 10000*ones(1,n)];
c_L = -ones(n,1);
c_U = ones(n,1);
fLowBnd = -100000;
% This is the call to the nonlinear solver.
Prob = conAssign('HMTomlab2', [], [], [], x_L, x_U, Name, X0, [], fLowBnd, A, b_L, b_U, 'NLCHMTomlab2', [], [], [], c_L, c_U);
Prob.NumDiff = 1;
Prob.ConsDiff = 5;
Prob.Solver.Alg = 0;
Result = tomRun('conSolve', Prob, 2);
Z = (Result.x_k)';
W = Z(1, 1:n);
% The output.
xvec = (0:.01:1);
Vmesh = Vonedim(xvec, Z, X);
hold on;
plot(xvec, Vmesh);

A.5 Maple code for the example in Section 4.5.2

with(LinearAlgebra):
n := 5: m := 2: ng := 2*m+4*n+1:

This section constructs the objective function f and its gradient.

x := Vector(2*n, symbol = xs):
q := Vector(m, symbol = qs):
f := add(-G[j]*p[j]*q[j], j = 1..m)
  + add(add(T(x[i+n], W[j])*M[i]*p[j]*q[j], i = 1..n), j = 1..m)
  - add(add(T(x[i+n], W[j])*M[i]*p[j], i = 1..n), j = 1..m)
  + add((x[i] - t[i]*(x[i+1]-x[i])/(t[i+1]-t[i]))*M[i], i = 1..n-1)
  + x[n] - t[n]*(x[n]-x[n-1])*M[n]/(t[n]-t[n-1]):
TT := (x, K) -> (1/4)*x^2/eps + (1/2)*(eps-K)*x/eps + (1/4)*(K^2-2*K*eps+eps^2)/eps:
T := (x, K) -> piecewise(x <= K-eps, 0, x <= K+eps, TT(x, K), K+eps <= x, x-K):
gradfx := Vector(2*n): gradfq := Vector(m):
for i from 1 to 2*n do gradfx[i] := diff(f, x[i]) end do:
for i from 1 to m do gradfq[i] := diff(f, q[i]) end do:

This section constructs the constraint function g and its gradient.
g := Vector(ng, symbol = tt):
for j from 1 to n do g[j] := -x[j] end do:
for j from 1 to n-1 do g[j+n] := x[j+1]-x[j] end do:
for i from 1 to n-1 do
g[i+2*n-1] := add(T(x[i+n], W[j])^2*p[j], j = 1..m) - add(T(x[i+n], W[j])*p[j], j = 1..m)^2 + (x[i+1]-x[i])/(t[i+1]-t[i]) - eps2
end do:
g[3*n-1] := add(T(x[2*n], W[j])^2*p[j], j = 1..m) - add(T(x[2*n], W[j])*p[j], j = 1..m)^2 + (x[n]-x[n-1])/(t[n]-t[n-1]) - eps2:
for i from 1 to n-1 do
g[i+3*n-1] := -add(T(x[i+n], W[j])^2*p[j], j = 1..m) + add(T(x[i+n], W[j])*p[j], j = 1..m)^2 - (x[i+1]-x[i])/(t[i+1]-t[i]) - eps2
end do:
g[4*n-1] := -add(T(x[2*n], W[j])^2*p[j], j = 1..m) + add(T(x[2*n], W[j])*p[j], j = 1..m)^2 - (x[n]-x[n-1])/(t[n]-t[n-1]) - eps2:
g[4*n] := add(p[i]*q[i], i = 1..m) - 1 - eps3:
g[4*n+1] := -add(p[i]*q[i], i = 1..m) + 1 - eps3:
for i from 1 to m do g[i+4*n+1] := -q[i] end do:
for i from 1 to m do g[i+m+4*n+1] := q[i] - lambda end do:
gradgx := Matrix(ng, 2*n): gradgq := Matrix(ng, m):
for i from 1 to ng do
for j from 1 to 2*n do gradgx[i, j] := diff(g[i], x[j]) end do:
end do:
for i from 1 to ng do
for j from 1 to m do gradgq[i, j] := diff(g[i], q[j]) end do:
end do:

This section constructs the slackness structures.

e := Vector(ng, 1):
s := Vector(ng, symbol = si):
z := Vector(ng, symbol = zi):
S := DiagonalMatrix(s):
Z := DiagonalMatrix(z):

This section initializes the variable and parameter vectors.

x := Vector(2*n, symbol = xs):
q := Vector(m, symbol = qs):
p := Vector(m, symbol = ps):
W := Vector(m, symbol = ws):
G := Vector(m, symbol = gs):
t := Vector(n, symbol = ts):
M := Vector(n, symbol = ms):
chi := convert([x, q, s, z], Vector):

This section constructs the Lagrangian and its Hessian matrix.

F := convert([gradfx + (VectorCalculus[DotProduct])(Transpose(gradgx), z),
gradfq - (VectorCalculus[DotProduct])(Transpose(gradgq), z),
(VectorCalculus[DotProduct])((VectorCalculus[DotProduct])(Z, S), e) - mu*e,
g + s], Vector):
DF := Matrix(2*n+m+2*ng, 2*n+m+2*ng):
for i from 1 to 2*n+m+2*ng
Matlab and Maple Codes do for j from 1 to 2*n+m+2*ng do DF[i, j] :=diff(F[i], chiLj]) end do: end do: This section inputs the initial values of the variables and the values of the param eters. xinit := qinit :=  (4,3, 2, 1, 0, 1, 1, 1, 1, 1): (1, 1):  sinit : (.1 .1, .1): zinit := (1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1): tinit := (1/2, 5/8, 3/4, 7/8,1): pinit := (.5, .5): ginit := (-1, -2): minit := (1,1, 1, 1, 1): winit := (1, 2): tau := .1; mu := .1; rho := .5: lambda := 1.1: eps := .1: eps2 := .1: eps3 xo := xinit; qo :=qinit; so minit; wo : winit; xs := xinit; qs := qinit; Si ws := winit; ts := tinit:  :=  :=  sinit; zo  sinit; zi  :=  :=  zinit;  zinit; ps  p0 :=  :=  pinit; go  pinit; gs  :=  :=  1,  :=  .2:  ginit; mo  ginit; ms  :=  :=  minit;  The following section contains the executable code. i  :=  0;  j  :=  0  normF:=Norm(F,2): while (normF > mu and j <40) do printf(” Inner loop: #Iteration  Solve Linear System (11) ...“); LinearSolve(DF, -Transpose(F)):  printf(” d  :=  %g /n” ,j);  -  printf(” doneN”); 105  Appendix A. Matlab and Maple Codes printf(”  -  Update Params  ...“);  alphaS:=min(seq(select(type,-s[k] /d[2*n+m+k] ,positive), k = 1 Dimension(s))); alphaZ:=min(seq(select(type,-z[k] /d[2*n+m+ng+k] ,positive), k = 1.. Dimension(z))); ..  alphamax:=min(alphaS,alphaZ): alphamax:=min(tau*alphamax,1); printf(” done/n”); chi; chinew := chio1dja1phamax*d; xs := chinew[1 .. 2*n]: ys := chinew[2*n+1 chiold  si  :=  :=  chinew[2*n+m+1.. 2*n+m+ng]: zi  :=  ..  2*n+mJ:  chinew[2*n+m+ng+1.. 2*n+m+2*ng]:  normF:=Norm(F,2); printf(” doneN”); j:=j+1; end do:  106  
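The step-length computation in the loop above is the standard fraction-to-the-boundary rule for interior-point methods: the largest step keeping the slacks s and multipliers z strictly positive is found from the negative components of the Newton direction, then damped by tau and capped at 1. A minimal Python sketch of the same rule (variable names chosen to mirror the Maple code; this is an illustration, not the thesis implementation):

```python
def max_step(v, dv):
    # Largest alpha with v + alpha*dv >= 0; only negative direction
    # components restrict the step.
    ratios = [-vk / dvk for vk, dvk in zip(v, dv) if dvk < 0]
    return min(ratios) if ratios else float("inf")

def step_length(s, ds, z, dz, tau=0.1):
    # Fraction-to-the-boundary rule: damp by tau, cap at a full step.
    alpha_s = max_step(s, ds)
    alpha_z = max_step(z, dz)
    alpha_max = min(alpha_s, alpha_z)
    return min(tau * alpha_max, 1.0)

# Small worked instance with illustrative numbers.
s = [0.1, 0.1, 0.1]
ds = [-0.05, 0.02, -0.2]
z = [1.0, 1.0, 1.0]
dz = [-0.5, -4.0, 0.3]
alpha = step_length(s, ds, z, dz, tau=0.1)
```

With these numbers the binding ratios are 0.1/0.2 = 0.5 for the slacks and 1/4 = 0.25 for the multipliers, so alpha = 0.1 * 0.25 = 0.025; the damping keeps the iterates strictly in the interior of the feasible region, as required for the barrier terms Z S e - mu e to remain well behaved.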

