Essays on Bounding Stochastic Programming Problems, by Nalin Chanaka Perera Edirisinghe (1991)

ESSAYS ON BOUNDING STOCHASTIC PROGRAMMING PROBLEMS

By Nalin Chanaka Perera Edirisinghe
B.Sc. (Mech. Eng.), University of Peradeniya, Sri Lanka, 1980
M.Eng. (Ind. Eng.), Asian Institute of Technology, Thailand, 1985

A thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Faculty of Graduate Studies, Faculty of Commerce and Business Administration. We accept this thesis as conforming to the required standard. The University of British Columbia, July 1991. (c) Nalin Chanaka Perera Edirisinghe, 1991.

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Abstract

Many planning problems involve choosing a set of optimal decisions for a system in the face of uncertainty about elements that may play a central role in the way the system is analyzed and operated. During the past decade, there has been renewed interest in the modelling, analysis, and solution of such problems, owing to a remarkable development of both new theoretical results and novel computational techniques in stochastic optimization. A prominent approach is to develop upper and lower bounding approximations to the problem, along with procedures to sharpen the bounds until an acceptable tolerance is satisfied. The contributions of this dissertation are concerned with the latter approach.

The thesis first studies the stochastic linear programming problem with randomness in both the objective coefficients and the constraints. A convex-concave saddle property of the value function is utilized to derive new bounding techniques which generalize previously known results. These approximations require discretizing bounded domains of the random variables in such a way that tight upper and lower bounds result. Such techniques will prove attractive with the recent advances in large-scale linear programming.

The above results are also extended to obtain new upper and lower bounds when the domains of the random variables are unbounded. While these bounds are tight, the approximating models are large-scale deterministic linear programs. In particular, with a proposed order-cone decomposition of the domains, these linear programs are well structured, enabling one to use efficient techniques for solution, such as parallel computation.

The thesis next considers convex stochastic programs. Using aggregation concepts from the deterministic literature, new bounds are developed for the problem which are computable using standard convex programming algorithms.

Finally, the discussion is focused on a stochastic convex program arising in a certain resource allocation problem. Exploiting the problem structure, bounds are developed via the Karush-Kuhn-Tucker conditions. Rather than discretizing domains, these approximations advocate replacing difficult multidimensional integrals by a series of simple univariate integrals. Such practice allows one to preserve differentiability properties so that smooth convex programming methods can be applied for solution.
Table of Contents

Abstract
List of Tables
List of Figures
Acknowledgement

1 Bounding the Expectation
  1.1 Introduction
  1.2 Preliminaries
  1.3 The Generalized Moment Problem
  1.4 Two-stage Stochastic Programming Recourse Model
    1.4.1 Approximation for SP

2 Two-Stage Stochastic Linear Programs
  2.1 SLP Problems With Random Right-hand Sides
    2.1.1 Lower Bounds
    2.1.2 Upper Bounds
  2.2 SLP Problems With Random Objective Coefficients and Right-hand Side
    2.2.1 Case of Independent Random Vectors
    2.2.2 The Dependent Case
  2.3 Tight Bounds Using First and Cross Moments
  2.4 Relation to Generalized Moment Problems
  2.5 Special Cases
  2.6 First Moment Bounds - Dependent Case
  2.7 Application to Stochastic Programming
  2.8 Monotonicity and Convergence
  2.9 Appendix

3 SLP Problems: Unbounded Domains for Random Vectors
  3.1 Preliminaries
  3.2 Derivation of the Bounds
    3.2.1 Bounds in the Univariate Case
  3.3 Tightness of the Bounds
  3.4 Bounds Using Only First Moments
  3.5 Application to Stochastic Linear Programs
  3.6 An Order Cone Approximation
  3.7 A Numerical Example
  3.8 Discussion
  3.9 Appendix

4 Tight Bounds on Stochastic Convex Programs
  4.1 Preliminaries
  4.2 Error Bounds by Aggregation
    4.2.1 An Aggregated Upper Bound
  4.3 The Tightened Mean Model
  4.4 Upper Bounds Using First Moments
  4.5 Multistage Stochastic Convex Programs
    4.5.1 Under Intertemporal Independence
    4.5.2 Under Intertemporal Dependence

5 Bounds on Stochastic Convex Allocation Problems
  5.1 Model Formulation and Solution Characterization
  5.2 Lower Bounds
  5.3 Tightness of the Lower Bound
  5.4 Upper Bounds
  5.5 An Example

Bibliography

List of Tables

3.1 Conditional first and cross moments
3.2 Summary of bounds
5.1 Upper and lower bounds on the example
5.2 Relative gaps for first moment bounds and new bounds
5.3 Bounds under independence information

List of Figures

1.1 The feasible set Z(Omega) for the example
2.1 Bounds using first and cross moments in the univariate case
2.2 First moment bounds in the univariate case
3.1 Lower and upper bounds in the univariate case
3.2 Partitioning of domains in R^3 by order cones
3.3 Partition of the domains in the example
4.1 Sequential decision problem under uncertainty

Acknowledgement

The author wishes to express his deepest gratitude to his research supervisor, Professor William Ziemba, for his invaluable guidance and constant encouragement throughout this thesis research, which have not only made it a rewarding experience but also have given pleasant memories which the author shall continue to cherish for a long time. The author is also grateful to Professors Derek Atkins and Shelby Brumelle for their constructive comments and critical suggestions at various stages of this research. Sincere thanks are also due to Professors Erwin Diewert, Ulrich Haussmann, and Ilan Vertinsky for serving as members of the program committee. Grateful appreciation is also conveyed to the Faculty of Commerce, UBC, and the Tina and Morris Wagner Foundation for providing financial assistance in the early stages, and to NSERC and the Frank Russell Company, Tacoma, for partial financial assistance in the later stages of the research. Last, but not least, the author is deeply grateful to his wife Suramya, who has been a constant inspiration to him, and twin sons Gayan and Dilan for their tremendous sacrifices and patience during this research.
ix Chapter 1 Bounding the Expectation 1.1 Introduction The presence of uncertainty in real-world decision making problems was recognized very early as is demonstrated by various papers in the 50's, such as Beale [2], Charnes and Cooper [11], Dantzig [13], and Tintner [83]. Since most decision problems can either be formulated as optimization problems or be reduced to them, it is imperative that op-timization models involving uncertain input parameters be analyzed and solved. If the level of uncertainty is low, one may assume that the error caused by deterministic simpli-fications in problem formulation and analysis will be insignificant. However, uncertainty is often critical and pervades all aspects of planning activities. As an example, consider the situation where an energy authority is faced with the problem of determining generation capacities for a potential set of power generating sta-tions, such as hydro-electric plants. Once the plants are commissioned and built, in their operational phase, the plants will generate electricity to satisfy demands occurring at various nodes, being supplied through an existing transmission network. The actual amounts to be distributed to demand nodes from each plant could be determined by modelling the distribution problem as a transportation-type linear program for minimiz-ing transmission costs, to which plant capacities and demands are specified. However, the demand is uncertain at the time of capacity construction and capacity decisions must be taken "here-and-now" before uncertainties are resolved. The problem may be further 1 Chapter 1. Bounding the Expectation 2 enriched with uncertainty in the objective transmission costs. For example, it is not uncommon that equipment such as transformers are subject to more frequent failures, thus tighter maintenance schedules should be in place during times when the demand for electricity is high. When the capacity determination problem is considered in an optimization framework for minimizing capacity construction and transmission costs, it is ill-posed because one cannot minimize an objective function of uncertain costs. Probabilistic statements about the uncertain quantities are possible, such as those based on historical data. Consequently, when uncertainty in such problems is modeled by random variables, one is within the realm of stochastic optimization. What is then required is an appropriate notion of a here-and-now decision (i.e., anticipatory solution) that is not dependent on future observations of the environment. A standard approach is the notion of recourse, which is to specify a contingency plan that would not re-quire (transmission) actions be determined at the outset but rather be chosen adaptively dependent on the (capacity) decisions taken and (demand) outcomes observed. The ex-pected cost of recourse then becomes an essential part of the total costs to be weighed in any choice of anticipatory decisions. This leads to a two-stage stochastic program with recourse in which the objective is to determine a trade-off between the costs associated with long-term anticipatory decisions and the costs associated with short-term recourse actions. For detailed descriptions and motivations of this type of models, see, for exam-ple, Dantzig and Madansky [15], Dempster [16], Ermoliev and Wets [24], Kail [51], and Wets [89,93]. 
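To make the recourse mechanism concrete, the following minimal Python sketch (not taken from the thesis; the plants, transmission costs, and demand scenarios are invented for illustration) evaluates the second-stage transportation cost for a fixed capacity decision and then forms the expected recourse cost over a small discrete demand distribution.

```python
# Minimal sketch of the capacity-planning illustration: x fixes plant capacities
# "here and now"; after demand d is observed, a transportation-type LP chooses
# the recourse (shipping) plan.  All data below are invented for illustration.
import numpy as np
from scipy.optimize import linprog

ship_cost = np.array([[1.0, 3.0],    # cost of shipping plant i -> demand node j
                      [2.5, 1.2]])
shortage_penalty = 20.0              # keeps the recourse problem always feasible

def recourse_cost(x, d):
    """phi(x, d): optimal transmission + shortage cost for capacities x, demands d."""
    # decision vector: [y11, y12, y21, y22, s1, s2]
    c = np.r_[ship_cost.ravel(), shortage_penalty * np.ones(2)]
    # capacity rows: y_i1 + y_i2 <= x_i
    A_ub = np.array([[1, 1, 0, 0, 0, 0],
                     [0, 0, 1, 1, 0, 0]], dtype=float)
    # demand rows: y_1j + y_2j + s_j = d_j
    A_eq = np.array([[1, 0, 1, 0, 1, 0],
                     [0, 1, 0, 1, 0, 1]], dtype=float)
    res = linprog(c, A_ub=A_ub, b_ub=x, A_eq=A_eq, b_eq=d, method="highs")
    return res.fun

# Expected recourse cost for a fixed capacity decision, with demand taking
# finitely many values so that the expectation is a finite sum.
x = np.array([6.0, 5.0])                       # here-and-now capacities
scenarios = [(np.array([3.0, 2.0]), 0.3),
             (np.array([5.0, 4.0]), 0.5),
             (np.array([7.0, 6.0]), 0.2)]      # (demand outcome, probability)
expected_recourse = sum(p * recourse_cost(x, d) for d, p in scenarios)
print("E[phi(x, demand)] =", expected_recourse)
```

An outer optimization over x would then trade off capacity construction cost against this expected recourse cost, which is exactly the two-stage structure described above.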
This concept may be generalized to more than two stages with random events being observed successively in the future and recourse actions being taken to cor-rect infeasibilities at any stage, to obtain a multistage stochastic program. Applications involving such models span many different fields, such as portfolio theory, energy plan-ning, agriculture, engineering, etc. See, for example, Kallberg, White, and Ziemba [55], Kao and Queyranne [56], Kusy and Ziemba [61], and Louveaux [62]. More recently a very Chapter 1. Bounding the Expectation 3 large multiperiod stochastic programming model has been developed and implemented for an application in finance, as briefly discussed in Henriques [41]. For a comprehensive research bibliography in stochastic programming, see Stancu-Minasian and Wets [82]. The difficulty of solution of such models stems from the need for evaluating an ex-pectation functional of the recourse actions, for given first stage anticipatory decisions. The knowledge of the underlying probability distributions may not be complete, being available only through certain moments of the distribution, or the probability space may be continuous. In the latter case, multidimensional numerical integration is generally required to evaluate the expectation functional, and the integrand is yet another opti-mization problem. Furthermore, such integration is required many times as would be needed in an iterative procedure designed to determine an optimal set of (anticipatory) decisions for the problem. There are at least two major approaches for solving such problems when there is no exploitable structure. One is to approximate the expectation of recourse actions by sampling from the underlying (continuous) probability distribution and to use the sample based function and gradient information to guide an optimization algorithm. These stochastic quasi-gradient techniques (see, Ermoliev [22,23], for example) have asymptotic convergence properties, but they are limited by a lack of computable bounds. Another solution strategy is to compute upper and lower bounds for the expectation functional of the recourse actions which can eliminate the need for multidimensional nu-merical integration and then use these bounding functions in the (outer) optimization. This thesis studies various aspects of stochastic programs for the latter purpose. In a major portion of the thesis, we are concerned with the stochastic linear programming re-course problem and approximations are pursued using limited moment information of the underlying probability distribution. The moment conditions utilized are such that the Chapter 1. Bounding the Expectation 4 resulting lower and upper approximations are solvable using deterministic linear program-ming methods. These approximations can be sharpened by applying them sequentially on successively finer partitions of the domain to obtain what is termed a "sequential ap-proximation method", see Ermoliev and Wets [24], to solve the given stochastic program to a desired degree of accuracy. Such a method is based on the availability of the joint probability distribution, because, when applying the bounds on a (given) partition, it is necessary to evaluate conditional moments and probability mass on each cell of the partition. 
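As a small illustration of that bookkeeping (not from the thesis; the distribution and the partition below are invented), the following sketch computes the probability mass and the conditional first moment on each cell of a rectangular partition, here from Monte Carlo draws standing in for the joint distribution.

```python
# Illustrative sketch: the quantities a sequential approximation method needs
# before applying bounds cell by cell are the probability mass P(cell) and the
# conditional mean E[xi | cell] on each cell of the partition.
import numpy as np

rng = np.random.default_rng(0)
xi = rng.uniform(low=[0.0, 2.0], high=[8.0, 10.0], size=(100_000, 2))  # toy 2-D sample

# Split each coordinate interval into two pieces -> four rectangular cells.
edges = [np.array([0.0, 4.0, 8.0]), np.array([2.0, 6.0, 10.0])]

for i in range(2):            # cell index along coordinate 1
    for j in range(2):        # cell index along coordinate 2
        in_cell = ((edges[0][i] <= xi[:, 0]) & (xi[:, 0] < edges[0][i + 1]) &
                   (edges[1][j] <= xi[:, 1]) & (xi[:, 1] < edges[1][j + 1]))
        mass = in_cell.mean()                       # estimate of P(cell)
        cond_mean = xi[in_cell].mean(axis=0)        # estimate of E[xi | cell]
        print(f"cell ({i},{j}): mass ~ {mass:.3f}, conditional mean ~ {cond_mean}")
```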
Another mechanism for alleviating the difficulties associated with computing the ex-pected recourse cost function is to seek lower and upper approximations which allow one to replace the difficult multidimensional numerical integrals with simple univariate integrals. Such methods are usually numerically attractive since, in a sequential approx-imation method under partitioning the domain to sharpen bounds, the computational burden does not grow dramatically. See Birge and Wallace [7] and Birge and Wets [10] for such bounds in the case of stochastic linear programming. We address similar techniques in the context of stochastic convex programming recourse problems. The remainder of this chapter is organized as follows. A section on preliminaries is presented to provide the basic terminology and notation required for the exposition. Then we consider the question of approximating the expectation of a function of random variables as a moment problem to which certain moments of the underlying probability distribution are specified. This serves to lay the groundwork for discussing tightness of approximations based on limited moment information, as is pursued in the subsequent chapters. The chapter concludes by mathematically formulating the stochastic program-ming recourse problem. In Chapter 2, the focus is on two-stage stochastic linear programs under limited Chapter 1. Bounding the Expectation 5 moment information of the underlying probability distributions. First, the case of ran-domness in the constraints is considered and the known bounds are examined in the context of generalized moment problems. A major part of the chapter is concerned with the case where there is randomness in the objective coefficients as well. No specific struc-ture is assumed on the stochastic linear program and random variables are allowed to have arbitrary multivariate probability distributions with bounded domains. Tight upper and lower bounds are derived which are shown to provide a certain generalization of the previously known bounding results. In the third chapter, the results in Chapter 2 are extended to the case of unbounded domains. Tight bounds are developed using first and cross moments of the underlying random parameters, and the computations require only the solution of deterministic finite linear programs. Moreover, a computationally-more-appealing order-cone decomposition scheme is proposed which behaves quadratically in the number of random variables. Numerical examples are used to elucidate the bounds. The remaining chapters deal with stochastic convex programs, and bounds are devel-oped using both limited moments and marginal distributional information. Chapter 4 utilizes variable aggregation and row aggregation procedures to develop bounds for two-stage stochastic convex programs. These bounds are tighter than the usual bounds using limited moment information. This discussion is also extended to multistage stochas-tic convex programming problems to obtain tight approximations under dependence of random parameters across time periods. Finally in Chapter 5, we consider a stochastic convex program arising in a certain resource allocation problem, a class of problems which has received considerable atten-tion over the past few years. Exploiting the problem structure and using a somewhat different approach, tight upper and lower bounds are developed for the problem which are computable under univariate integration. Simple numerical examples are used to Chapter 1. 
Bounding the Expectation 6 illustrate that the new bounds may easily outperform the usual bounds in the literature. 1.2 Preliminaries In this section, we establish the basic notation which will be further developed as is required in the subsequent chapters and also present some terminologies from convex analysis. For detailed expositions, consult Rockafellar [73]. Let 3ft denote the set of real numbers and dtn the ra-dimensional vector space of real ra-tuples. 9£ denotes 3£(J{—oo,+oo}. The transpose of a vector x G is x'. The jih component of a n-vector x is represented by Xj, j' — 1,..., n. The inner product of two vectors x, y G 3£n is represented by either x'y or < x, y >, or simply by xy. For a (m x n)-dimensional matrix M, denotes the ith row, i — 1,... ,m, and M(j) denotes the jth column, j = 1,..., n. A subset S of 3ftn is convex if vx + (1 — v)y G S for all x,y G S and 0 < v < 1. A convex subset S C 3?" which can be expressed as the intersection of finitely many half-spaces is called a polyhedral convex set or simply a polyhedron. A bounded polyhedron is called a polytope. The intersection of all the convex sets containing a given nonempty subset S C 3£n is called the convex hull of S and is denoted by co S which is a convex set. Consequently, a polytope is the convex hull of finitely many points. The set of extreme points (i.e., vertices) of a given polytope S is the set of minimal number of points that belong to the polytope of which the convex hull is 5". A subset C of 3?" is said to be a cone if vx G C when x £ C and v > 0. A cone C is convex polyhedral if it is generated by finitely many half-spaces. The graph of a function / : S -> & is the subset {(x,/(x)) : x G S C 3t"} of ftn+1. The set {(x, v) : x G S, v G 9£, v > f(x)} is called the epigraph of / and is denoted Chapter 1. Bounding the Expectation 7 by epi f, which is a subset of 9£ n + 1 . If epi f is convex, / is said to be a convex function, moreover, proper if it is nowhere —oo and finite somewhere. / is lower semicontinuous if epi f is a closed set in 3? n + 1 . If epi f is a cone, / is said to be positively homogeneous (of degree 1), i.e., for every x £ S, one has f(vx) = vf(x), 0 < v < oo. If epi f is a convex cone, then / is called a sublinear function. Similarly, we define, if epi (—/) is convex, then / is concave and if epi (—/) is a convex cone, then / is superlinear. A function / : S\ x 52 —• 3ft is said to be a convex-concave saddle if f(x, y) is convex in x £ Si for fixed y £ S2 and f(x,y) is concave in y £ S2 for fixed x £ 5 i . If a convex-concave function is positively homogeneous (of degree 1) in the first argument, it is called sublinear-concave. Similarly, we define convex-superlinear functions. The empty set is denoted by 0. For 0 ^  S C 3ftn, d is a direction of recession of S if x + vd £ S for every v > 0 and x £ S. The recession cone of 5, denoted by re 5, is the set of all directions of recession of 5,-which is a convex cone containing the origin. Thus, rc S ^ 0. It is also the set of vectors such that S + rc S C S. For a convex function / on 9ftn, consider the recession cone rc(epif) of the epigraphical set epif. The recession function of /', denoted by/*, is one that satisfies epif* = rc(epif). The lower-case Greek characters £, 77, u, and ( are reserved for random variables (or vectors). The support of a random vector is the smallest set of probability measure one. If the support is closed and bounded, the random vector is said to have a compact support. 
The upper-case characters S, 0, fl are reserved for support or for domain that contain the support. The domain D of a random vector is defined through a (measurable) extension of the support S by attaching zero probability mass to the set D\S, the complement of S with respect to D. The probability measures under concern are Lebesgue-Stieltjes measures and math-ematical expectations are defined by the usual Lebesgue-Stieltjes integrals. For details, Chapter 1. Bounding the Expectation 8 consult Feller [29]. 1.3 The Generalized Moment Problem Let (fi, B, Ptr) be a given probability space where B is the Borel sigma field of events u> in 0 ( C 3ftn). Let E[.] denote mathematical expectation with respect to the true probability measure Ptr. Let /,• : fl —• 3ft, i = 1,... , m + 1, be measurable functions. Suppose knowledge of the true probability distribution of a; is specified in terms of the (generahzed) moments Efi := Jn fi(u)Ptr(du) i = 1,... , m. The question of interest is to determine bounds for Efm+i. A n upper bound to Efm+i is available by solving the generalized moment problem (GMP) given in (1.1): Let V be the set of all probability measures defined on (Tt,B). Then, V(GMP) := sup ( / fm+1(u)P(du) : f fi(u)P(dw) = Efi, » = l , . . . , m } (1.1) provides the upper bounding inequality Efm+i < V(GMP). Similarly, replacing "sup" by "inf" in (1.1) yields a lower bound to Efm+i. Such mo-ment problems have been studied much earlier in the literature in the classical framework of statistical theory, see Krein and Nudelman [60]. Kemperman [58] derived conditions for the existence of a probability measure which solves (1.1). The set of probability measures feasible to (1.1) has been studied extensively. Of particular interest is the fact that extreme points of the set of admissible probability measures of (1.1) are discrete measures involving no more than (m + l ) atoms; see also Karr [57]. The latter result also implies that if there exists a solution to (1.1), then a discrete measure with at most (m+l) support points solves (1.1), i.e., the maximizing measure of (1.1) is discrete. Chapter 1. Bounding the Expectation 9 Determination of solutions to (1.1) is an onerous task in itself, and usually involves column generation techniques with solutions of nonconvex optimization subproblems. For details see, for example, Birge and Wets [8]. However, (1.1) is generally used as a yardstick for investigating tightness of bounds available by other means. To see this, define UB : 3£m —• 3ft as any arbitrary upper bounding strategy satisfying EPfm+i<UB(Epf1,...,Epfm), VPeV (1.2) where V is the set of all probability measures on (£l,B) and, for some P £ V, Epfi denotes the expectation of /,• with respect to P. Suppose by contradiction that UB(Ef\, ... , Efm) is a better upper bound for Efm+i than V(GMP) using only the (generalized) moments Efi, i = 1,.. . , m of the underlying true probability distribution, i.e., UB(Efi, ... ,Efm) < V(GMP). Assume (1.1) is solvable and let an optimal solution of it be denoted by P*. Now applying (1.2) for P* £ V yields V{GMP) = f fm+1(u)P*(dw) < UB(Ef1,...,Efn) < V(GMP), Jn which is a contradiction. Thus, when (1.1) is solvable, it provides the best upper bound, in the sense of (1.2), to Efm+i. If a sufficient number of moment conditions are present in (1.1) together with a pre-scribed shape for 0 , the maximizing measure in (1.1) may trivially be determined. 
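As an illustration of (1.1) (not from the thesis), the following sketch restricts the measure to a finite grid of candidate support points, so that the generalized moment problem reduces to an ordinary linear program in the point masses. This yields only a grid approximation of V(GMP), since the exact problem optimizes over all probability measures and, as noted above, in general requires column generation.

```python
# Grid approximation of the generalized moment problem (1.1): choose point
# masses p_k on a fixed grid to maximize E_p[f_{m+1}] subject to the specified
# moments.  Example data (invented): Omega = [0, 8], one moment constraint
# E[omega] = 4, and f_{m+1}(omega) = max(omega - 3, 0), a convex function.
import numpy as np
from scipy.optimize import linprog

grid = np.linspace(0.0, 8.0, 81)          # candidate support points in Omega
f_obj = np.maximum(grid - 3.0, 0.0)       # f_{m+1} evaluated on the grid
f_mom = grid                              # f_1(omega) = omega
target = 4.0                              # specified moment E[f_1]

# maximize sum_k p_k f_obj_k  <=>  minimize -f_obj @ p
A_eq = np.vstack([f_mom, np.ones_like(grid)])
b_eq = np.array([target, 1.0])
res = linprog(-f_obj, A_eq=A_eq, b_eq=b_eq, bounds=(0.0, None), method="highs")

print("grid upper bound on E[f_{m+1}]:", -res.fun)
# The extremal measure is discrete with very few atoms, as the theory predicts:
print("atoms of the extremal measure:", grid[res.x > 1e-8])
```

Consistent with the theory, the extremal measure found this way has no more than (m+1) atoms; and when enough moment conditions and a prescribed domain shape are specified, no search is needed at all, because the maximizing measure can be written down directly.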
This is the case with Frauendorfer's [31] upper bound when fm+i is convex, fi is a multidi-mensional compact rectangle, and all joint moments of the random vector are specified, see Ka l i [53]. A second strategy is to develop a solution procedure for (1.1) by investi-gating the relationship between (1.1) and the general optimization theory. Consequently, one derives equivalent formulations for V(GMP) which can be solved using linear or nonlinear programming techniques. For example, Chapter 2 shows that when / m + i is convex, with only first moments being used, Jensen's inequality is obtained by solving Chapter 1. Bounding the Expectation 10 the "inf" version of (1.1). When / m + i is convex and 17 is a compact polyhedral set, the Gassmann-Ziemba [38] upper bound, which requires the solution of a linear program involving the first moments, solves the corresponding G M P . This bound was extended by Birge and Wets [9] to include unbounded domains. Ka i l [52] shows that the latter bound is the value of the corresponding moment problem. To use computational tools provided by linear and nonlinear programming toward solving (1.1), what is fundamental is the duality theory for moment problems and semi-infinite programs (SIP). This is because the symmetric dual of the moment problem in (1.1) is the semi-infinite linear program: { m m ~\ j/o + £(£ / , )2 / , - : y0 + Y,fi(")yi>/m+iM, w e n . (i.3) i=i .=i J K a i l [52] investigates this duality relationship in the context of general moment problems. For a development of the duality theory for semi-infinite linear programs from the first principles, see Glashoff and Gustafson [40]. We review the relevant duality relationships in the remainder of this section. Weak duality holds for (1.1) and (1.3), i.e., Theo rem 1.1 V(GMP) < V(SIP). Proof . If either of the problems is infeasible, the theorem holds trivially, by setting V(GMP) — —oo or V(SIP) = +oo. Otherwise, for a feasible probability measure P of (1.1) and a feasible vector (y 0, J/i, • • •, Um) of (1.3), y0 + ^(Ef^i = / P(du)y0 + £ ( / fi(U)P(du))yi = / [yo + E / . H ^ W > ( fm+1{u)P(du). • Ja Strong duality, i.e., V(GMP) = V(SIP), can be ensured in many ways. When f2 is a compact set, under mild requirements, strong duality holds. To show this, define the Chapter 1. Bounding the Expectation 11 set t7(fl) : = { ( / ! ( « ) , . . . , / « + ! ( « ) ) : used} (1.4) and its convex hull by Z(0) := 000(9). (1.5) Since (1.1) can be expressed as V(GMP)= sup { z m + 1 / fm+l(u)P(dw) - Z m + 1 , Per,zm+1 I fi(u)P(du) = Efi, » = l , . . . , m } , (1.6) one obtains the following equivalent representation for (1.1): V(GMP) = sup { z m + 1 e 3ft : z <E £(ft), z,- = £/ , - , i = 1,..., m) . (1.7) (iii) GMP is feasible, then GMP is solvable and V(GMP) = V(SIP). Proof. By condition (iii), V(GMP) > —oo. Moreover, since fl is compact and /,s are continuous, Q(ft) defined in (1.4) is compact, and so is its convex hull Z(Vt) defined in (1.5), see Rockafellar [73, Theorem 17.2], and thus is a closed convex set. Therefore, (1.7), hence GMP, is finite and solvable. Suppose an optimal solution of (1.7) is z*. We have thus defined z* := E[fi] for i = 1,..., m. For some e > 0, define the point z*(e) by z*(e) — z* for i = 1,..., m and zm+i(£) — zm+i + £- Clearly, z*(e) g" Z(fi). 
Since the convex set Z(Ct) is closed and the point z*(e) Z(Q), there exist coefficients a = («i,..., am+i)' and a0 such that Theorem 1.2 (Glashoff and Gustafson [40], Kail [52]) Jf (i) fi, i = 1,..., m + 1, are continuous, (ii) fl is compact, and Chapter 1. Bounding the Expectation 12 (aojCt')' / 0 and the hyperplane a'z = a0 separates the point z*(e) from Z(Q), i.e., a'z*(e) > a0 and a'z < a0, Vz £ Z(Sl) (1.8) hold, by applying the separation theorem for closed convex sets. Combining the two inequalities in (1.8), eam+1 > a'{z - z*), Vz £ Z{Ct). (1.9) In particular, for z* £ Z(Q,), (1.9) implies that eam+i > 0, Ve > 0, and thus a m + i > 0. Now consider any point z(u>) £ where z(u) := (fi(u),...,fm+i(uj)). Since Q(£l) C Z(Cl), a'z(uj) < ct0 holds by the second inequality in (1.8), implying that /m+1(W) < (-p-) + £ ( ^ l ) fi(u) (1.10) \ctm+i J i = 1 \ocm+1J since am+i > 0. Furthermore, by the first inequality in (1.8) / oo \ , A / - a , x implying that V(SIP) < z*+1 + £ since the solution y0 := a Q and y,- := ~ a ' , i = 1,.. . , m, is feasible in (1.3), due to (1.10). Using the weak duality in Theorem 1.1, V(GMP) < V(SIP) < V{GMP) + e, Ve > 0 (1.12) proves the theorem. • However, when Q, is unbounded, it is difficult to prove strong duality without an interior-type condition. Define the set Ai by M := co {(/iH,..., fm(u)) : uen}. (1.13) Theorem 1.3 (Glashoff and Gustafson [40], Ka l i [52]) / / (i) (GMP) is bounded from above, and (ii) the m-dimensional point (Efi,..., Efm)' £ int(A4), where int(A4.) denotes the (relative) interior of the set M., then (SIP) is solvable and V{GMP) = V(SIP). Chapter 1. Bounding the Expectation 13 Proof. Due to the conditions in the theorem, -oo < V(GMP) < +00. Observe that Z(Sl) may not be a closed set where Z(Vt) is defined in (1.5). However, z*, an optimal solution of (1.7), must belong to the boundary of Z(Q), bd(Z(£l)). To see this, if z* £ int(Z(Q)), then there must exist a scalar v > 0 such that z* + vem+\, em+i being the (m+l)-st elementary vector, is feasible in (1.7) with strictly better objective value, thus contradicting the optimality of z*. Since z* £ bd(Z(Q,)), consider the supporting hyperplane a'z = a0 to Cl(Z(Q,)), the closure of Z(fl,), at z* where a = (ai,... ,am+i)' and (ao,"') ' 0- Then a'z<a0, Vz £ C7(Z(ft)). (1.14) Since the equality in (1.14) holds for z — z*, (1.14) implies that m am+1(z*m+1 - zm+1) > - 5>,-(*? - zt), Vz £ Cl(Z(n)). (1.15) i=l In particular, for any z feasible to (1.7), ocm+i{z*m+1 - z m + i ) > 0. (1.16) Since z*^ + 1 — z m + i > 0 holds for such z, it follows that am+i > 0. By contradiction, suppose am+x = 0. Substituting am+1 = 0 in (1.14), yields 771 771 $>,•*,• < £«,(£/;) =: aQ, Vz £ Cl(Z(Sl)). (1.17) i=l i=l Since . M is the projection of Z(il) on the subspace of 3 ? m + 1 defined through the condition Zm+i = 0, (1.17) implies that 771 X>,-2,-<a 0 , V ( z 1 , . . . , z m ) ' £ C / ( A ^ ) , (1.18) Chapter 1. Bounding the Expectation 14 which in turn implies that X)v^i a« z« = t*o is a supporting hyperplane to Cl(Ai) at the point {Efi,...,Efm)', which is a contradiction since the latter point belongs to the interior of M. by condition (ii) in the theorem. Therefore, a m +i > 0. Consider any point z(ui) G £(fi) where z{ui) := (/i(u>),..., / m + i(u;)). Since Q(ft) C Cl(Z(Q,)), a'z(u>) < a0 holds by (1.14), which implies that \am+ij ,-=1 \am+ij since ocm+i > 0. 
Furthermore, by the condition a'z* = an, / \ m / „ ^ \ a m + i 7 , = i \am+ij implying that V(SIP) < z^+1 = V(GMP) since the solution yQ := and y,- := t = l , . . . , m , is feasible in (1.3). Using weak duality in Theorem 1.1, V(GMP) = V(SIP) holds. Solvability of SIP follows from (1.20), thus the proof is completed. • Observe that solvability of G M P does not follow from Theorem 1.3. That is, there may not exist a feasible probability measure which solves G M P when f i is unbounded. However, strong duality and solvability of SIP is ensured by the conditions in the latter theorem. On the contrary, with f i compact, under mild restrictions, see Theorem 1.2, solvability of G M P and strong duality follow. To exemplify the necessity of the conditions in Theorem 1.3, when f i is unbounded, consider for univariate u> with domain f i = [0, oo), the G M P upper bound on E[to2] given the first moment u: (Birge [4]) V(GMP) = sup ( [°°uj2P(du) : (°° LoP(duo) = u\ (1.21) Pev kJo Jo J where V is the set of probability measures on [0, oo). Rewriting, V(GMP) = sup {z2 : (z i ,z 2 ) € Z(Cl), zx = u] (1.22) Chapter 1. Bounding the Expectation 15 Z(O) o Q Figure 1.1: The feasible set Z{£l) for the example. where Z(Q,) is illustrated in Figure 1.1. Observe that Z(Vt) is not closed. Also observe that the semi-infinite dual, V(SIP) — inf\y 0 + uy-i : y0 + uy\ >u>2, 0 < uj < oo (1.23) is never feasible, and thus V(SIP) = oo. Now for uj > 0, it is clear from Figure 1.1 that V(GMP) — oo and thus the condition 1 in Theorem 1.3 is violated and SIP is not solvable. For uj = 0, V(GMP) = 0 < V(SIP), and we notice that the second condition in Theorem 1.3 is violated. 1.4 Two-stage Stochastic Programming Recourse Model We formulate the two-stage stochastic programming recourse model (SP) in a more gen-eral setting here and discuss various specializations and multistage extensions of it in the subsequent chapters. The model is inf (1.24) s.t. i £ l Chapter 1. Bounding the Expectation 16 where <j>(x,u) := inf q(y,u) (1.25) s.t. y £ Y(x,uj). In (1.24), X is a subset of 3ft", c : 3?n —» 3?, and P t r denotes the £rue probability measure of the random vector uj having a domain fi. In (1.25), for fixed x £ X and uj £ 0, l^ (a;,u>) C 3ftm. This formulation also allows the possibility that for certain x £ X and u> £ fi, the set y(x,w) may be empty, i.e., the problem (1.25) is infeasible, in which case one sets <f>(x,u) — +oo. The interpretation of the dynamic formulation (1.24)-(1.25) is that one chooses an initial decision vector x (from the set X) prior to observing any random events and then in the subsequent time period, after observing the random vector uj, a recourse (second stage) decision vector y will be chosen from the feasible set Y(x, uj) such that the recourse costs q(y,u) are minimized. An optimal first stage decision x* corresponds to one which minimizes the sum of the first stage costs c(x) and the second stage expected recourse costs E[4>(x,uj)]. 1.4.1 Approximation for SP There are at least two possibilities for approximating (1.24)-(1.25). First, one may ap-proximate Ptr by a finite discrete measure, so that the integral in (1.24) is available as a finite sum, thus, transforming the stochastic program (SP) into a deterministic mathematical program. On the other hand, <j)(x,uj) may be approximated by a simpler function, so that the integral in (1.24) does not involve multidimensional numerical in-tegration, instead, may be evaluated, for example, using a series of univariate integrals. 
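A minimal sketch of the first possibility follows (invented data, not the thesis's formulation): replacing P^tr by a finite discrete measure with scenarios (h_s, p_s) turns (1.24)-(1.25) into a single deterministic linear program in the stacked variables (x, y_1, ..., y_S).

```python
# Deterministic equivalent (extensive form) of a tiny two-stage linear program:
#   min  c'x + sum_s p_s q'y_s   s.t.  T x + W y_s = h_s,  y_s >= 0,  0 <= x <= u.
# All data are invented for illustration only.
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 2.0])                        # first-stage costs
q = np.array([4.0, 1.0, 2.0])                   # recourse costs
T = np.array([[1.0, 0.0], [0.0, 1.0]])          # technology matrix
W = np.array([[1.0, -1.0, 0.0],                 # fixed recourse matrix
              [1.0,  1.0, 1.0]])
scenarios = [(np.array([2.0, 5.0]), 0.4),       # (h_s, probability p_s)
             (np.array([4.0, 9.0]), 0.6)]

n, m, S = len(c), W.shape[0], len(scenarios)
obj = np.concatenate([c] + [p * q for _, p in scenarios])

A_eq = np.zeros((S * m, n + S * len(q)))        # block row per scenario: T x + W y_s = h_s
b_eq = np.zeros(S * m)
for s, (h_s, _) in enumerate(scenarios):
    rows = slice(s * m, (s + 1) * m)
    A_eq[rows, :n] = T
    A_eq[rows, n + s * len(q): n + (s + 1) * len(q)] = W
    b_eq[rows] = h_s

bounds = [(0.0, 10.0)] * n + [(0.0, None)] * (S * len(q))
res = linprog(obj, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("first-stage decision x* =", res.x[:n], " optimal value =", res.fun)
```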
In either approach, the challenge is to design schemes that yield valid upper and lower approximations.

The general moment problem, if it is solvable, yields a solution which is a discrete probability measure, and it is the best possible bound under the specified moment conditions. Consequently, in the next two chapters, we consider a linear version of (SP) in (1.24)-(1.25), in an attempt to determine upper and lower bounding approximations of the form

$$\inf_{x\in X}\Big\{c(x)+\int_\Omega \phi(x,\omega)\,P^{tr}(d\omega)\Big\} \;\le\; \inf_{x\in X}\Big\{c(x)+\sup_{P\in\mathcal{P}_U}\int_\Omega \phi(x,\omega)\,P(d\omega)\Big\} \qquad (1.26)$$

and

$$\inf_{x\in X}\Big\{c(x)+\int_\Omega \phi(x,\omega)\,P^{tr}(d\omega)\Big\} \;\ge\; \inf_{x\in X}\Big\{c(x)+\inf_{P\in\mathcal{P}_L}\int_\Omega \phi(x,\omega)\,P(d\omega)\Big\} \qquad (1.27)$$

where $\mathcal{P}_U$ and $\mathcal{P}_L$ are sets of probability measures on $\Omega$, characterized by known moments, such that $P^{tr}\in\mathcal{P}_U$ and $P^{tr}\in\mathcal{P}_L$. In the last two chapters, we consider convex versions of SP in (1.24)-(1.25) and develop strategies to approximate the recourse function $\phi(x,\omega)$ directly, such that univariate integration suffices to compute upper and lower bounds.

Chapter 2

Two-Stage Stochastic Linear Programs

We are concerned with the two-stage stochastic linear programming (SLP) problem with fixed recourse; see, for instance, Kall [49] or Wets [91]. The model is

$$Z^* := \min\big\{c'x + E[\phi(x,\xi,\eta)] \;:\; Ax=b,\; x\ge 0\big\} \qquad (2.1)$$

where

$$\phi(x,\xi,\eta) := \min\big\{q(\eta)'y \;:\; Wy = h(\xi)-T(\xi)x,\; y\ge 0\big\}, \qquad (2.2)$$

with $E[\cdot]$ denoting mathematical expectation with respect to the true probability measure of the joint random vector $\omega := (\xi,\eta)$. The domain of the random $K$-vector $\xi$ is denoted by $\Xi\subset\Re^K$ and that of the random $L$-vector $\eta$ by $\Theta\subset\Re^L$. The joint domain of the random $(K+L)$-vector $\omega$ is denoted by $\Omega$. The recourse problem (2.2) is said to have fixed recourse because the $(m_2\times n_2)$ matrix $W$ is deterministic. $A$ is a fixed $(m_1\times n_1)$ matrix and $c$ is a fixed vector in $\Re^{n_1}$. $T$ is an $(m_2\times n_1)$-dimensional stochastic matrix, and the random vectors $h$ and $q$ are, respectively, $m_2$- and $n_2$-dimensional. Define the set

$$X^1 := \{x\in\Re^{n_1} : Ax=b,\; x\ge 0\}. \qquad (2.3)$$

Let the feasibility set of the first stage decisions $x$ induced by the second stage recourse problem (2.2) (for almost sure satisfaction of its constraints) be

$$X^2 := \{x\in\Re^{n_1} : \exists\, y\ge 0 \text{ s.t. } Wy = h(\xi)-T(\xi)x,\ \forall \xi\in\Xi\}. \qquad (2.4)$$

The set of feasible solutions of the stochastic program (2.1)-(2.2) is then

$$X = X^1\cap X^2. \qquad (2.5)$$

Assume that:

(A1) $X\neq\emptyset$,

(A2) $\{\pi\in\Re^{m_2} : \pi'W \le q(\eta),\ \eta\in\Theta\}\neq\emptyset$,

(A3) $q(\eta)=q_0+Q\eta$, $h(\xi)=h_0+H\xi$, $T(\xi)=T_0+\sum_{k=1}^{K}T_k\xi_k$, where the vectors $q_0$, $h_0$ and matrices $Q$, $H$, $T_k$ are all fixed and of appropriate dimensions,

(A4) $\Omega=\Xi\times\Theta$, and

(A5) $\Xi$ and $\Theta$ are convex, compact subsets of $\Re^K$ and $\Re^L$, respectively.

Assumption (A1) ensures that the stochastic program (2.1)-(2.2) is feasible with probability one, while (A2) ensures that the recourse problem (2.2) is never unbounded. Furthermore, due to (A3), the recourse function $\phi(x,\xi,\eta)$ is a convex-concave saddle function in $(\xi,\eta)$ for fixed $x\in X$, as follows from linear programming duality theory; see Chvatal [12]. The convexity assumption in (A5) is not restrictive, for, if it fails, one could work with the convex hulls of the respective domains. In particular, we consider these convex hulls as being generated by a finite set of extreme points, i.e., $\Xi$ and $\Theta$ are polytopes, which we shall assume in order to keep the exposition simple. Properties of the stochastic program (2.1)-(2.2) are well known in the literature, see Kall [49] or Wets [87,88]. For example, the feasible set $X$ is convex.
See also Walkup and Wets [85] and Wets and Walkup [94]. Moreover, since integration with respect to a probability measure preserves order, convexity of <f>(x,£,r)) in x, for fixed £ and 77, implies that the deterministic equivalent program of (2.1)-(2.2) is a convex program. As a convex function in £, (j) is lower semicontinuous, see Wets [89]. Moreover, since a Chapter 2. Two-Stage Stochastic Linear Programs 20 convex function defined on a convex polytope is upper semicontinuous, see Gale, Klee, and Rockafellar [37], continuity of <j> in £ and n can be ensured. There are special cases of the above model arising from the structure of the recourse matrix W. For example, one has the case of complete recourse when X2 = ^f", relatively complete recourse when X1 C X2, or the case of simple recourse when W — [I: —I] (or up to permutations of rows and columns if necessary) where / is the identity matrix of order m?. The simple recourse case has been studied extensively in the literature, for a survey see Ziemba [96]. The interest has mainly been because it is a class of problems which is rich in applications, see Ferguson and Dantzig [30], Kusy and Ziemba [61], and Ziemba, Brooks-Hill, and Parkan [97]. For such problems, the second stage minimization is fictitious and (j>(x,£,r)) becomes separable in the components of £ which also means that correlations among the random variables £j are irrelevant. For algorithms to solve such problems, see Everitt and Ziemba [25], Qi [72], and Wets [92]. This chapter is concerned with determining tight upper and lower bounds to the ex-pectation of the recourse function <f> under limited moment information on the random vector uj and without restricting W to any special forms. However, when devising algo-rithms to solve the model in (2.1)-(2.2), using the approximations to be described, it is desirable to have at least relatively complete recourse. The rest of the chapter comprises two main parts and is organized as follows. In the first part, we consider a special instance of the model (2.1)-(2.2) by restricting the objective coefficients of (2.2) to be deterministic, i.e., n has a degenerate distribution. The major contributions of this chapter are in the second part, i.e., Sections 2.2 and beyond, where the discussion on approximations for SLP is continued in the more general form as presented in (2.1)-(2.2). The notation <f>(x) represents E[(J)(x,uj)] for fixed x £ X. Unless otherwise stated, x belongs to the set X. Chapter 2. Two-Stage Stochastic Linear Programs 21 2.1 S L P Problems W i t h R a n d o m Righ t -hand Sides Consider the SLP model (2.1)-(2.2) with objective coefficients of (2.2) being determin-istic, a class of problems known as SLP problems with random right-hand sides. Thus, the recourse function <f>(x,£) is convex in x(G X) and £(£ E) separately. Bounding techniques for these problems are rather rich in the literature. For bounds using limited moment information of the underlying true distribution, see Birge and Wets [8], Dula [17], Frauendorfer [31], Gassmann and Ziemba [38], Huang, Vertinsky, and Ziemba [42], Huang, Ziemba, and Ben-Tal [43], Ka i l and Stoyan [54], and Ziemba and Butterworth [98]. We formulate the bounding problem as a moment problem involving first moments and show that under a weak independence criterion, referred to as order-one-independence, the upper and lower bounds behave differently. 2.1.1 Lower Bounds When only the first moments are used, applying Jensen's inequality [71] one obtains <f>(x) > <j)(x,£). 
This result follows immediately by convexity of <f> in £, since integration with respect to a probability measure can be viewed as simply taking convex combina-tions. To show that this simple bound is also tight, consider the lower bounding problem as a moment problem in which the only information available of the underlying prob-ability distribution is the first moment £ of the random vector £. Denoting the set of probability measures on E by V, the tight first moment lower bound is obtained by solving the moment problem: Feasibility of the problem in (2.6) is not a question since the measure P such that all the (2.6) Chapter 2. Two-Stage Stochastic Linear Programs 22 Theorem 2.1 For V(JB) defined in (2.6), V(JB) = (f>(x,£), the lower bound due to Jensen's inequality. Proof . Dualizing the moment problem (2.6) and applying strong duality, due to condi-tions in Theorem 1.2 being satisfied, yields V(JB) = sup j y 0 + £ y'ktk : yo + E yk£k < <f>(*, £), f e s l ( 2 J ) • » I *=1 Jb=l J Proper convexity of <f>(x,.) ensures that (2.7) is always feasible. Now, for any feasible solution y of (2.7), yo + Y$=i Vk£k < <f>(x, 0 must hold, and thus V(JB) < <f>(x,£). It only remains to observe that convexity of <j> in £ is sufficient to guarantee that V(JB) = <f>(x,£) can be attained, i.e., consider the supporting hyperplane to the (closed) epigraph of <j>(x,£) at (£,<j>(x,£)), for fixed x. This completes the proof. • L e m m a 2.2 Suppose there exists a moment generating function for the multivariate dis-tribution of the random K-vector £. Then the components of £ are statistically indepen-dent if and only if E II(fc)T jeA = II E WtiY'] f o r all A G B and all = 1,2,... for j = 1,... , K jeA (2.8) where B is the set of all subsets of the index set { 1 , . . . , K}. Proof . See the Appendix. • Using Lemma 2.2 we can define a weak independence relation for a probability dis-tribution by restricting (2.8) to be satisfied only for certain values r, as follows: Def in i t ion 2.3 A random vector £ is said to have "order-one-independence" when the relation in (2.8) is satisfied with rj being restricted to be 1 for all j — 1,. . . , K. Chapter 2. Two-Stage Stochastic Linear Programs 23 Although order-one-independence is a much weaker requirement than (the usual) independence, there exists a class of multivariate distributions for which order-one-independence implies independence. For instance, for bivariate normal distributions, order-one-independence implies independence. Our interest in order-one-independence is due to the following result. Theorem 2.4 Knowledge of order-one-independence does not improve the first moment lower bound V(JB). Proof . By Definition 2.3, order-one-independence implies that / . ( I T ti)PM) = ( I T li) for all A e 5 (2.9) where the set B is defined in Lemma 2.2. We now introduce these constraints into the generalized moment problem in (2.6) and denote its optimal value by V( JB). Considering the dual of this moment problem and due to strong duality, see Theorem 1.2, V(JB) = sup j/o + E ytfk + 5 >A ( I I 6) (2-10) y fc=i Aes jeA K s.t. y 0+ E ( I I ^ W < <Kx,t), f G s. fc=l AG-B i£A Observe that (2.10) is feasible. Since f G H, for any feasible (j/ 0, • • •, Vk) and (y\, A G B), it foUows that y0 + T,kVklk + EAgB(IligA«fj)yA < implying that V(JB) < (f>(x,£). Since the moment problem under order-one-independence is obtained by adding constraints (2.9) to (2.6), V(JB) > V(JB) = <f>(x,£), the last equality being held by Theorem 2.1. 
• Although the compactness of El has been utilized in the preceding proofs, that the foregoing results are valid for unbounded (closed) domains S as well can be claimed easily provided £ £ int(ft). Moreover, not only is Jensen's lower bound tight, but also Chapter 2. Two-Stage Stochastic Linear Programs 24 it does not improve under the additional information of order-one-independence. O n the contrary, it will be shown in a subsequent section that the latter result does not hold for the case of upper bounds. Nevertheless, it is neither proven nor asserted that there does not exist a better lower bound under the usual statistical independence using first moments. In fact, when components of f are independent, one may use univariate integration to obtain a better lower bound as follows. Let us introduce the following notation: • Ck •= (6,---,6-i,6+i,---,60' • £k '•= (6,--->6-i>6+i>---,6:)' • Fk(.) is the marginal distribution and [eijto, Ofci] is the domain of fjt • <f>k(x,Zk) ••= <f>(x,£)\ik=tk for k = l,...,K. Proposition 2.5 Under statistical independence in f, 4>(x)> H <f>k(XiCk)dFk(Ck)>V(JB), Vk=l,...,K. (2.11) Ja.k0 The proof of (2.11) is straightforward. The following example illustrates this bound. Example 2.6 For independent continuous uniform random variables f i and f2 over intervals [0,8] and [2,10], respectively, consider the following recourse problem (suppressing x): «K6,6) = m i n { - y i - 3j/2 : V l + y2 < 6, ~Vi + 2y 2 < 6, yi > 0, y2 > 0} (2.12) Since, 6 = 4 and £2 = 6, we have V( JB) = <6(4,6) = —10.667. The functions <f>k in (2.11) can be obtained by a parametric analysis of (2.12) on the right-hand side, i.e., <M6,6) = - 3 6 if 0 < 6 < 3 I f 6 - f if 2 < 6 < 8 , ^ ( 4 , 6 ) = \ - 1 6 - 4 i f 3 < 6 < 8 { - 1 2 i f 8 < 6 < 1 0 Chapter 2. Two-Stage Stochastic Linear Programs 25 It follows that %[<M6,6)] = /o(-36)s^6 +/38 (-|6 - 4)|dfi = -9.917 and similarly, £ ^ 2 ( 4 , 6 ) ] = -10.5. Therefore the best lower bound in (2.11) is -9.917 > V(JB) = -10.667. • 2.1.2 Upper Bounds In developing upper bounds for the expectation of a convex function, one is usually concerned if the random variables are stochastically independent or not. Under inde-pendence, the Edmundson-Madansky (E-M) upper bound [20,63] for univariate £ and (bounded) interval domain [a0, ai], i.e., can be extended in a straightforward manner to the multivariate case with interval do-mains using independence of random variables, see Huang, Ziemba, and Ben-Tal [43]. In the dependent case, Gassmann and Ziemba [38] derived an upper bound, for S being a polytope, as where ext S denotes the set of extreme points of c In essence, the linear program in (2.13) (2.14) (2.14) determines a discrete probability measure on the extreme points of the polytope H, such that this measure has the same first moments £. The latter bound is tight because it is the solution to the moment problem (2.15) as shown next. Theorem 2.7 (Kail [52]) V(GZ) defined in (2.15) is equal to the upper bound in (2.14). Chapter 2. Two-Stage Stochastic Linear Programs 26 Proof . Upon dualizing the moment problem (2.14) and using strong duality due to Theorem 1.2, V(GZ) = inf \y0 + E yUk : y 0 + E Vktk > <f>{x, 0, £ € s } . ' (2.16) I k=i k=i ) Since ext E c H , V(GZ) > inf \y0 + E With • Vo + E > 4>{x, e), e G ez* -) . (2.17) v I fc=i fc=i J Observe that the right hand side of (2.17) is precisely the bound in (2.14), as follows from linear programming duality. 
Furthermore, observe that the convexity of <f>(x,£) in £ and that H = co(ext E) are sufficient to guarantee that (2.17) is held as an equality, and thus, completing the proof. • Consider the moment problem upper bound using first moments and order-one-independence: ftx) < V(GZ) := sup { / 4>{x,t)P{dt) : / (II ti)P{*t) = (II 6) f o r all A G B (2.18) Trivially, V(GZ) < V(GZ) holds. Consider E as a multidimensional interval in %lK, denoting the extreme points by e J, j = 1,.. . , 2K, hence, the cardinality of ext S is 2K. Define the (2K x 2K)-dimensional matrix T> with the (A.,j)th component being FTfceA ei where A G B (cardinality of B, \B\ = 2K) and j = l,...,2K. Also define the 2K-dimensional vector ( of which the Ath component is lliteA sffc where A G B. Theorem 2.8 The matrix!) is nonsingular. Moreover, V(GZ) = Yl'j=i<f>(xie:')^~1C where T)~x denotes the inverse ofD. Proof . A proof can be gathered by referring to Frauendorfer [31] and Ka i l [53]. • Finally, we give an example to show that V(GZ) < V(GZ) is possible. Chapter 2. Two-Stage Stochastic Linear Programs 27 Example 2.9 Consider the convex function ^(cji,^) defined in (2.12) in Example 2.6 with distributions of (i and 6 as described therein. Using only the first moments 6 = 4 and £2 = 6 and solving the linear program (2.14) yields V(GZ) — —7.3333. If it is also known that f has order-one-independence, then due to Theorem 2.8, one obtains V(GZ) = —8.6667 < V{GZ). m It can be shown, when H is a if-dimensional interval, that V(GZ) is also the K-dimensional Edmundson-Madansky upper bound which has been derived using stochastic independence of the components off, see Huang, Ziemba, and Ben-Tal [43]. Thus, the E-M upper bound is indeed applicable to a much broader class of probability distributions as characterized by order-one-independence. 2.2 SLP Problems With Random Objective Coefficients and Right-hand Side In the remainder of this chapter, we consider the stochastic program defined in (2.1)-(2.2), i.e., the objective coefficients as well as the right-hand side of the recourse problem (2.2) are random, along with the assumptions (A1)-(A5) given in page 19. Consequently, the recourse function <j>(x, f, rf) is a convex-concave saddle function of the random parameters (f, 77) =: CJ, for fixed x 6 X. We are concerned with developing tight upper and lower bounds for the expectation of <j>(x, f, 77) using limited moment information of the random vector u. Literature on bounds for such problems is relatively sparse. Frauendorfer [32,33,34,35] obtained upper and lower bounds for the expectation of the convex-concave recourse function. These bounds have been derived when domains are either multidimensional rectangles or simplices. In the former case, approximating extremal discrete distributions Chapter 2. Two-Stage Stochastic Linear Programs 28 are determined on vertices of the domains, and thus, is exponential in the dimensions of the random vectors, while in the latter case, bounding computations can be afforded within a quadratic amount of function evaluations. An algorithmic implementation of the latter bound is reported recently by Frauendorfer [36]. Following a similar approach, we provide a generalization of the known bounds for such expectation, namely, by requiring only that domains be polytopes. 
We also derive tight upper and lower bounds using only first moments and without any independence assumption, in an attempt to generalize the first moment bounds in the pure convex case, described in Section 2.1, to the convex-concave case. Finally, we derive upper and lower bounding functions which are convex polyhedral in the first stage decisions x, and thus, the bounding computations for the SLP problem only require solving deterministic linear programs. We shall fix the following notation with regard to domains of the random vectors: H is a polytope with extreme points u1,..., u1 and 0 is a polytope with extreme points u 1 , . . . , vJ. The first moments of f and 77, respectively, denoted by f and 77, as well as the cross moments E[£kVi]i k = 1,... ,K; I = 1,..., L, denoted by raw, are all assumed to exist and be finite. The expectation of the recourse function <f>(x,£,r)), for fixed x 6 X, is denoted by <j>(x). Additional notation is introduced as it becomes necessary. One has to be concerned if the random vectors f and 77 are independent or not. We first consider the case of independence and then the dependent case. 2.2.1 Case of Independent Random Vectors If the random vectors ( and 7  are stochastically independent of each other, then Jensen's inequality and the Gassmann-Ziemba inequality can be utilized simultaneously to obtain Chapter 2. Two-Stage Stochastic Linear Programs 29 upper and lower bounding functions for <j>(x), using only first moments, as follows. (f>(x) = E„[Ez[<f>(x,£,rj)] (due to independence) > Ev[(f>(x,£, n)] (due to Jensen's inequality) { j j J "j 53^(X'^'wJ)r*i : E U > i = V, Pi = 1 \ = '• LBind- (2.19) i=i i=i i=i J The last inequahty follows from the Gassmann-Ziemba inequality in (2.14) being applied on the concave function <f>(x,£,.). Similarly, one obtains 4>{x) < maxJE^y^A,- : XVAt = {, = 1} = : UB™*- (2-20) - U=i t=i .=i J Proposition 2.10 Under independence of £ and n, and using the first moments £ and fj, the best lower (resp., upper) bound to (f>(x) is given by LBind in (2.19) (resp., UBind in (2.20)). Proof. The best upper bound, see page 9, using first moments is obtained, due to independence, by solving the moment problem sup { / / 4>((,n)P1(dOP2(dn) : / tf\(dO = £ / * P2(dr]) = fj\ (2.21) where V\ and V2 are the sets of probability measures on domains E and 0, respectively. Since the upper bound UBind was determined using only first moments and independence between £ and n, (2.21) implies that z1 < UBind, i.e., apply the bounding strategy in (2.20) on an optimal solution of (2.21), solvability of (2.21) being ensured by Theorem 1.2. On the other hand, since the degenerate distribution on H with probability mass concen-trated at £ is feasible in (2.21), z1 > sup { / <f>(t,r,)P2{dri) : / nP2{dn) = fj) = UBind, Chapter 2. Two-Stage Stochastic Linear Programs 30 the last equality being held by Theorem 2.7. Hence, z1 = UBind follows. The lower bounding result is proven analogously. • 2.2.2 The Dependent Case If f and n are dependent, one has to proceed differently than in the preceding discussion. We take an approach similar to that used by Birge and Wets [8, section 5] in their work in the pure convex case. The basic idea is to consider each realization of the random vector as a convex combination of the extreme points of the domain. Thus, by the convexity of H, for given f £ H there exist scalars A(f) that satisfy E « i M 0 = f. E M 0 = i . M0> o i = i , . . . , / . 
' (2.22) »'=1 i=l Similarly, by the convexity of 0 , for given 77 £ 0 there exist scalars p(r/) satisfying J2vJN(v) = V, X>;0?) = 1, ^ ) > 0 j = l , . . . , J . (2.23) i=i i=i Then by the saddle property of </>(x,.,.), E ^ O / M * . ^ ) < <K*,t,l) < E A , ( 0 ^ x , « ' ' , / / ) for all (f,r,) £ H x 0 =: fl. i=l «'=i (2.24) Upon integration of (2.24), with respect to the true probability measure, bounding in-equalities for 4>(x) are obtained as LB := E ^ [ ^ ( ^ ) ^ , 6 « 0 ] < ^ ) < E ^ [ A . ( O ^ , « ' ' , ' 7 ) ] =: UB. (2.25) i=l t'=i In the present form, the bounds UB and L73 in (2.25) are difficult to compute since the scalars A and / i are to be determined according to (2.22) and (2.23), respectively, and moreover, product terms involving these scalars and nonlinear functional forms of (j) need to be integrated with respect to the true probability measure. To alleviate this Chapter 2. Two-Stage Stochastic Linear Programs 31 difficulty, we describe a tworstep procedure in the next section. First, each of the functions <f>(x, f, uJ) and <f>(x,77) are linearized at some arbitrary points in the respective domains, similar in spirit to what was done in Frauendorfer [31]. Bounds are then determined by considering the resulting inequalities in expectation. Since the bounds from this procedure would depend on the chosen linearizations, in the second step, we optimize over all possible linearizations to tighten the bounds. Finally, we will show that these bounds solve the respective generalized moment problems, hence, the bounds are tight. 2.3 Tight Bounds Using First and Cross Moments Although other possibilities exist, we have chosen to model the dependence structure between f and 77 by the cross moments mki (for all I), toward determining upper and lower bounds to 4>(x). The resulting bounds may be further sharpened by applying them on subsets of the domain, a procedure which requires conditional first and cross moments, being conditioned on each subset of the domain. Such a scheme can be shown to generate a sequence of bounds that monotonically converge to the true expectation. Considering the lower bounding inequality in (2.25), linearize each of the convex functions 4>(x, f, u J), being convex in f £ E, at some known point fJ £ E, i.e., consider the supporting hyperplane < 7 r J , £ > +o~*, for fixed coefficients 7r J £ $lK and cri £ 3ft, to the epigraph of (f>(x, f, at (£ J, (f>(x, f J, uJ')). Dependence of the latter coefficients on x is suppressed for simplicity. The coefficient vector 7r J is a subgradient of the recourse function at f = <fJ' and may be interpreted as an optimal dual solution of the linear program: ^ ^ ^ n u n ^ ' ) ' ! , : Wy = h(£i) - T(?)x, y > o} . (2.26) Similarly, linearize each of the concave functions <f>(x, u\ rj) in the upper bounding inequal-ity in (2.25), being concave in 77 £ 0, at some known point 17' £ 0 to obtain the supporting Chapter 2. Two-Stage Stochastic Linear Programs 32 hyperplane < a*, 7  > +/?* to the convex set {{n,v) : v < (j>(x,ut,r)), n £ 0 , v £ 3?} at (T7',^ (X,U*, 77')). a ' £ 3?L is a subgradient of <j>(x,u\vi) at 77 = 77' and is interpreted as a primal solution of the linear program: cf>(x, u% 77') = mm {qtfYy : Wy = fc(u') - r(u*>, y > OJ . (2.27) Therefore, < a',77 > +p? > <j>(x,u\n), V77 € 0 and i = I,...,I, (2.28) and < irj, £ > +<r> < 4>{x, vj), V ( G S and j = 1,.. . , J . 
(2.29) Substituting (2.28)-(2.29) in (2.25), the resulting lower and upper bounds to the expec-tation functional are given by [N(r,) < **,t > +^ (77)^ ] < fa) <j^E [A,-(0 < a\n > +A,-(0/9'] (2.30) j=i »=i where A's and p's satisfy (2.22) and (2.23), respectively. Since S is compact, for each component random variable k = 1,. . . , K, there exist (finite) lower and upper bounds denoted by dfc0 and a^i, respectively. Similarly, those for components 77;, / = 1,..., L, are denoted by S/o and an,,respectively. Proposition 2.11 Mx) < max Y,l=1<cti,ti >-\-ppi (2.31) s-t. Ef=i «!*! = m " > Vfc = 1,.. . , A"; VZ = 1,.. . , £ E f = i ^ = i , P . - > O ' Chapter 2. Two-Stage Stochastic Linear Programs 33 Proof . Observe that from every feasible solution A(f) of (2.22), a feasible solution of (2.31) can be constructed by setting Pi:=E[\i{Q] and t) := £[A,-(Ow] for all i = 1,. . . , J and / = 1,... , L. Moreover, under this construction, the upper bound in (2.30) develops to be the objective function in (2.31). Hence, the right-hand side upper bound in (2.30) is obtained as a feasible point lower bound to the maximization in (2.31). • In an analogous manner, the following lower bound is derived. P ropos i t ion 2.12 4> > min E l i < *j,tj > +<rjPj (2.32) t,p J s.t. E / = W 4 = ™ ^ Vfc = l , . . .,K; V / = 1 , . . . , L E /= i4 = 6 , Vfc E /=iPi = l , ft>0 akoPj < t3k<akiPj, VA;,j. The bounds in (2.31) and (2.32) require solving linear programs. Although these are valid upper and lower bounds to <f>(x) using only first and cross moments and without any independence assumption, even tighter bounds may be obtained as follows. More-over, the tightened versions serve to develop the best upper and lower bounds under the linearization procedure. P ropos i t ion 2.13 4>(x) < m a X y ; ( < a\vj > +/3i)pij (2.33) Chapter 2. Two-Stage Stochastic Linear Programs 34 and 4>{x) > min]T(< irj,u' > +<Tj)Pij (2.34) where the feasible set C is the polytope defined by c-.= { p e ^ J : £ ( u > V i = ( ^ ) , Y^ukviPH = mkh Vfc = l,...,K; l = l,...,L, X > t i = l , Pij>0}. (2.35) Proof. For any feasible solution A(£) of (2.22) and p(n) of (2.23), define PH := E[Mt)K(*l)]-For this A(£) and fi(n), defining pi and t\, i = 1,..., / and / = 1,..., L, as in the proof of Proposition 2.11, observe that J2j vJPij = *S 5Zj /'ij = Pt> and pij > 0 hold. Substituting these in the upper bound in (2.31), the upper bound in (2.33) follows. Similarly, one obtains the lower bound in (2.34). • Corollary 2.14 / / the domains H and 0 are multidimensional rectangles of the form H := x^Ljfafco, ati] and 0 := x^Jajo, an], then the upper and lower bounds in (2.31) and (2.32) coincide with those in (2.33) and (2.34), respectively,. Proof. Considering the upper bound in (2.33), from any of its feasible solutions, a feasible solution for the linear program in (2.31) may be constructed having the same objective value, i.e., the bound in (2.33) is never worse than that in (2.31). Consider a feasible solution (£',p,), Vi , of (2.31). Since 0 is a multidimensional interval, ^ E 0 for Pi 7^  0. Thus, for each i, there exist convex combinations Oij such that V = (52jVJ$ij)pi. The construction := piQij results in a feasible solution to (2.33), implying that the upper bound in (2.31) is as good as that in (2.33), hence the result. The lower bounding result follows analogously. • Chapter 2. 
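To make the feasible set C in (2.35) concrete, the following minimal sketch (Python, with purely illustrative data) assembles the moment conditions for univariate random variables on the unit square and solves for a measure in C; the values xi_bar = eta_bar = 0.5 and m11 = 5/18 are assumed only for illustration.

import numpy as np

# Illustrative data: univariate xi and eta on [0,1] x [0,1], with assumed
# moments xi_bar = eta_bar = 0.5 and cross moment m11 = 5/18.
u = np.array([0.0, 1.0])          # extreme points u^1, u^2 of the xi-domain
v = np.array([0.0, 1.0])          # extreme points v^1, v^2 of the eta-domain
xi_bar, eta_bar, m11 = 0.5, 0.5, 5.0 / 18.0

# Unknowns p = (p11, p12, p21, p22), indexed (i, j) over corner pairs.
# Moment conditions defining C in (2.35):
#   sum_ij u_i     p_ij = xi_bar     (first moment of xi)
#   sum_ij v_j     p_ij = eta_bar    (first moment of eta)
#   sum_ij u_i v_j p_ij = m11        (cross moment)
#   sum_ij         p_ij = 1
A = np.array([[u[0], u[0], u[1], u[1]],
              [v[0], v[1], v[0], v[1]],
              [u[0]*v[0], u[0]*v[1], u[1]*v[0], u[1]*v[1]],
              [1.0, 1.0, 1.0, 1.0]])
b = np.array([xi_bar, eta_bar, m11, 1.0])

p = np.linalg.solve(A, b)          # unique in this univariate example
print("p_ij on the corners of [0,1]^2:", p.round(4))
assert (p >= -1e-12).all()         # a probability measure, as required

In this univariate rectangular case the four moment conditions pin the measure down uniquely; with multivariate xi and eta, the set C is in general a polytope, and the bounding linear programs (2.33) and (2.34) optimize the linearized recourse values over it.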
Two-Stage Stochastic Linear Programs 35 The upper and lower bounding linear programs (2.33) and (2.34), respectively, depend on the linearizing points which determine the supporting hyperplanes in (2.28) and (2.29). Therefore, it is of interest to determine those fj*1, i = 1 , . . . , / , that lead to the best linearization upper bound. That is, one needs to determine that linearization of <f>(x, it1, rf) for each i such that the upper bound in (2.33) becomes the smallest of all possible linearizations. Likewise, we obtain that linearization of c£(x,f,-uJ') for each j such that the lower bound in (2.34) becomes the largest of all possible linearizations. For this development, first note the following result. L e m m a 2.15 Let p G C where C is as defined in (2.35). Then, (i) Vi = 1,..., / : E Pii ~ 0 implies E v l P*i = 0' j j E / ^ j > 0 implies — — (E 0 j E j Pij (ii) \/j = 1,. . . , J : E Pij = 0 implies E u% Pij — 0> pu > 0 implies ' " G H. i Et Pij Proof . For p £ C, since is nonnegative, Ej Pij = 0 ( f ° r some i) implies /9,-j = 0 for all j , and thus, Ej v'Pa = 0 f ° r such i . If Ej Pij 0 f ° r some i , then = Ej u J ( E ^ O ) = Ej v'vi where Uj := y f 0 . . Since Ej "j = 1 a n <l vj > 0 f ° r a U j \ Ej u J l / j G 0 proving the first assertion. The second assertion follows similarly. • Theorem 2.16 The best-linearization upper bound from (2.33) is <t>u{x):= max ^Pijtfau'rfip)) (2.36) s.t. ^2vJPij = fjl(p)Y,Pij, i = l , . . . , / j j where C is defined in (2.35). Chapter 2. Two-Stage Stochastic Linear Programs 36 P r o o f . The best-linearization upper bound from (2.33) is obtained from, inf sup { £ ( a V + F)Pii : a'n + ? > <f>(x, u\ n), n G 6 , V i 1 (2.37) In (2.37), transposes have been removed for simplicity. We show that (2.37) is solvable and its value is the same as that of (2.36). Observe that in (2.37), the objective function is bilinear in the two (finite-dimensional) variable vectors (a, /?) and p. Moreover, the set C is compact. Thus, applying Corollary 37.3.1 of Rockafellar [73], it follows that (2.37) is equivalent to: sup pec inf \ V V a V + F)Pij : a*n + /?*' > <f>(x, u\ n), n G 0 , Vi (2.38) For a given feasible solution p G C, consider the inner infimum in (2.38), the value of which is denoted by Z(p). This problem decomposes to / subproblems of the form Z\p) := inf. | a« £ ^ + /?'' £ = <*'v + ? > ttx, u», n), n € 0 I (2.39) and Z(p) = ^ZiZl(p). First consider the case when ^Zj Pij 0- (<*%/?*) he a feasi-ble pair to (2.39), the existence of such being ensured by the concavity of (f>(x,u\r)) in 77. Then multiplying the constraints in (2.39) by YljPijt since fjl(p) := %t—— G 0 (by Lemma 2.15), it follows that Zl(p) > J2j Pij<l>{x->«'»¥(p)) holds. Further-more, the continuity and concavity of <f>(x,u\.) is sufficient to guarantee that Z^p) = J2j Pij<l>(x,u%,fjt(p)) is attained, i.e., choose the coefficients (a*, /?*) corresponding to the supporting hyperplane at the boundary point (fj*(p), <f>(x, u', fj'ip))) of the closed convex set {(n,!;) : v < <f>(x,u,,'n), n G 0 , v G 3ft}. Suppose J2j Pij = 0- Then, by Lemma 2.15, ^ZjV^pij = 0 and (one may choose any T}' G 0 ) thus Z*(/9) = 0 (hence it is not necessary to linearize <f>(x,u\ n)). Noting C is a closed set and thus replacing the outer "sup" by "max" completes the proof. • Chapter 2. Two-Stage Stochastic Linear Programs 37 Theorem 2.17 The best-linearization lower bound from (2.34) *s M*)'-= mj? EPv<t>(x,£JiP),vj) (2.40) s.t. E U%PH = i3 (p) E Pih J =. !> • • •»J i i where C is defined in (2.35). Proof. 
The proof is similar to that of Theorem 2.16. • It is clear from (2.36) and (2.40) that one needs to solve nonlinear programs to obtain the best linearized upper and lower bounds, respectively. However, as follows from the next lemma, these are in fact convex programs. Lemma 2.18 Let g : 3ftn —> 3ft be convex and define f : 3ftn x (0, oo) —+ 3ft by /(i,p) = pg(^). Then f is (jointly) convex. Proof. We show that epi f is a convex cone. epif = {(t,p,6) : 0>pg(t),p>0} = {(t,p,9) : 3v>0 with v0>g(vt), up = 1} = {(t,p,0) : 3 ^ > 0 with v(t,p,0)eS} where S := {(t,p,0) : p — 1, 0 > g(t)}. Since g(.) is convex, E is a convex set. Next observe that epi f = rc£, the recession cone of £, and is thus convex. • Although the bounding expressions <j>u(x) and (f>L(x) are convex programming prob-lems, our intent here is not to suggest solution schemes for computing these bounds in the present form. We will use these bounds to derive finite linear programming formula-tions by using the properties of the recourse function <f>(x,£,r)), which is the discussion in Section 2.7. Chapter 2. Two-Stage Stochastic Linear Programs 38 2.4 Relation to Generalized Moment Problems An immediate concern would be whether or not the bounds <f>v(x) and 4>l{x) derived in the previous section are tight, in the sense of the corresponding generalized moment problems. We will show that this indeed is the case. First, observe that it is possible to attain these bounds as the exact expectation in certain cases, i.e., Theorem 2.19 If the convex-concave recourse function 4>(x,£,n) is bilinear in £ andn, for fixed x, then <j>L(x) = <f>(x) — (j>u(x) holds where ^ v(x) and <f>L(x) are defined in (2.36) and (2.40), respectively. Proof. Bilinearity allows one to simplify the upper bounding formulation in (2.36) so that the constraints defining the set C imply that <f>u(x) = <f>(x). The lower bounding result also follows trivially. • To show that the bounds <f>u(x) and <f>L(x) are the best using first and cross moments, consider upper and lower bounds on <f>(x) by formulating the corresponding moment problems as follows, where the set of all probability measures on fl is denoted by V: (2.41) 1,...,K; 1 = 1,...,L (2.42) 1,...,K; I = 1,...,L. (j>(x) < Vu(x) := sup / (j)(x,£,r})P(d(;,dri) Per Jo s.t. / (P(d(,dr,)=t JQ f riP(dZ,dr,)=rj Jq I £kr}iP(d£,dr)) = mkh k JQ and ${x)>VL(x):= mf jj{x,i,r,)P{d(,dr,) s.t. UP{dZ,dr,)=l Jq j rlP(d£,dr})=f] Jq I €kr)iP(dZ,dri) =mki, k Jq Chapter 2. Two-Stage Stochastic Linear Programs 39 Theorem 2.20 Assuming that (2.41) and (2.42) are feasible, Vv(x) = <f>u(x) andVL(x) = (f>i(x) hold, where (j>v{x), <f>L{x), Vu(x), and VL(x) are defined in (2.36), (2.40), (2.41), and (2.42), respectively. Proof . We shall prove the result for the upper bound. The proof for the lower bound is analogous. Consider the symmetric dual problem of the upper bounding moment problem in (2.41), which is a semi-infinite linear program. By feasibility of (2.41), compactness of Q, and continuity of cf>(x,.,.), it follows due to Theorem 1.2 that strong duality holds and thus, K _ L vv{x) = mf eo + ^ ZeiCk + ^ efrj, + ^2eklmkI k=l 1=1 k,l K L s.t. 6o + '520lZk + 520fol + 'E0kltkril><p(x,t,ri), V ( ^ ) 6 f l k=l 1=1 k,l = inf flo + EflJcjfc + E ^ + ^ f l H m w ( 2 ' 4 3 ) Jt=l (=1 k,l K L s.t. flo + S ^ u i + E f l ? , ? ' + E ^ ' u t , / ' ^ ^ ( x ' u ^ , ? ) Vt = l , . . . , / ; n e e . *=i ;=i k,i The equality in (2.43) is due to convexity of <f>(x,.,rf) and the assumption (A4) that £2 = E x 0 . Consider any feasible solution p G C. 
Due to Lemma 2.15, Yj Pij > 0 implies 3 =: rjx(p) E 0 . If Yj Pij = 0, pick any arbitrary /)'(/?) from the domain 0 . It is easy to verify that the pairs (J2j Pij, i)x{p)), i — •••>!, a r e dual feasible to (2.43), hence Vv(x) > Yi,j Pij<f>(x,ui\V*(p)) f ° r a n v such feasible solution p 6 C. That is, Vv{x) > 4>v{x). To claim that Vv(x) < <f>u(x) holds, note that, due to Theorem 1.2, (2.41) is solvable and thus let P* be a maximizing measure in (2.41). Now, applying the upper bounding technique in the previous section, with the underlying probability measure being replaced Chapter 2. Two-Stage Stochastic Linear Programs 40 by P*, yields / #r,£,i7)i*K,Af)<M*)> (2-44) since P* has the same first and cross moments as the true probability measure. Therefore, Vv(x) < <j>u(x) holds and the proof is completed. • 2.5 Special Cases The upper and lower bounds in (2.36) and (2.40) may be viewed as approximating the true probability distribution by discrete probability distributions defined on the boundary of the joint domain fl. Moreover, these discrete measures have the same first moments and cross moments as the given true measure. However, the approximating measures may depend on the functional form of <f>, and more importantly, on the first stage decisions x. Nonetheless, in certain special cases, when the domains E and 0 are restricted to be of a certain shape, the bounds <f>L(x) and <l>L(x) a r e obtained with unique (i.e., independent of x or <f>) discrete measures on the boundary of fl, i.e., the optimizations in (2.36) and (2.40) become trivial. This indeed is the case when E and 0 are multidimensional simplices since every point in a simplex can be expressed as a unique convex combination of the extreme points of the simplex, i.e., the barycentric coordinates. Consequently, the feasible set C in (2.35) implies that the values of E j Pij-, E j v*Piji f ° r a n y i-i a n d Et P*ji E,«ViJ5 f ° r a n y 3i a r e uniquely determined. That is, the upper and lower bounding discrete probability measures are being determined independently of the function <f> or the decisions x. This special case is discussed in Frauendorfer [35]. When f and rj are univariate random variables, hence the domains are one dimensional simplices, the bounds <f>u(x) and (j>L(x) are simplified as follows: Theorem 2.21 If £ and n are both univariate (with interval domains E := [do, ai], 0 := [ao, 5i] such that a,Q < di and SQ < a,\), then denoting the cross moment C := E[£rj\, Chapter 2. Two-Stage Stochastic Linear Programs 41 the upper bound (j>v(x) simplifies to m*) = (^l) * U * > . + ( * U h , ( 2 - « > \ a i - o o / \ \ a i _ a o / V t-ao J and the lower bound <f>L(x) simplifies to M*) - (^-) * U¥^-,i.) + (*=±) 4 ( . (2.46) \ai — a 0 / \ ai — n / Vai - aQJ \ 77 — a0 J P r o o f . Let us consider the upper bound when f and 77 are univariate. Setting u 1 = do, u2 = d i , v1 = do, and v2 = &i, from the corresponding feasible region C for pu, pi2, p2i, and />22? one can show that (pn + /9t2) and (d0/9,i + di/9j2) are uniquely determined for each z = 1,2 as follows: />n + />i2 = 7 — and p21 + p 2 2 a 5 a l — a 0 al ~~ a0 axr\-C . C - a0rj aopu + axpi2 = — r- and a0p2i + a^pii = -= —. ai - 4 f - a 0 Moreover, since ?*Z&0 > 0 a n d £~-£0 > 0 (assuming nondegeneracy for the distribution), r}1 = ^=2- e 0 and r}2 = e 0 . Substituting' these in the upper bound <f>v{x) = Ya=i EJ=I Pij<f>{xi u \ V*) gives (2.45). Similarly, one can obtain the lower bound in (2.46). 
• The bounds in (2.45) and (2.46) for a saddle function in 3-dimensions are depicted in Figure 2.1. The following example illustrates the univariate bounds. E x a m p l e 2.22 Consider the two-stage stochastic linear program: Z* := min {2xi + 2x 2 + E^r\<j)(xu x 2 , f , 77)] : 2x x — x2 < 1 , xx + x2 < 1} Xl ,X2>0 Chapter 2. Two-Stage Stochastic Linear Programs 42 Figure 2.1: Bounds using first and cross moments in the univariate case. where <f>(xu x 2 , £, n) := min {2rjy! + 3ny2 + ys : 2y1 + 2/2 ~ 3*/3 = 2 + 3£ - 6£xx + 4£x 2, Vl.«2>0 -Vi + 3y2 + J/3 = 4 + 2£ - xx - 3£x2 }. The domain 0 of the bivariate random variable u> : = ( £ , 7 7 ) is [0,1] x [0,1] and the joint distribution is given by mv) = 2r One obtains £ = 0.5 = fj and C := E[£n] — ^ g. Using the upper bound in (2.45), one finds that the discrete distribution a;1 = (0,0.4444), p1 = 0.5 ; uj2 = (1,0.5556), p2 = 0.5 provides the best upper bound under the first and cross moment information. Conse-quently, Z* < 3.6032 and the upper bounding solution is x* = (0.5,0). Using the lower bound in (2.46), the discrete distribution u1 = (0.4444,0), p1 = 0.5 ; uj2 = (0.5556,1), p2 = 0.5 provides the best lower bound, which is 3.0318 and the lower bounding solution Chapter 2. Two-Stage Stochastic Linear Programs 43 is x*L = (0.5,0). Thus, the bounds are 3.0318 < Z* < 3.6032. • Remark. Frauendorfer [33] derived bounds for the expectation of a convex-concave function using the following moments (for the case when E and 0 are multidimensional rectangles): (2.47) m A , i := fe(Uie\Vi)P(dv) for all A 6 B2 mA,2 •= So wfllfceA tk)P(dt, dn) for all / = 1,.. . , L and Aefli m A ) 3 == k{Xlk^ik)P{di) for all AeB, ™ A , 4 := Jh UYlieK ViWt, dn) for all k = 1,. . . , K and A C B2 where Bi is the set of all subsets of { 1 , . . . , K] and B2 is the set of all subsets of { 1 , . . . , L). The moment information in (2.47) allows one to determine upper and lower bounding extremal (discrete) distributions uniquely. For the special case of univariate £ and n, the upper and lower bounds due to Frauendorfer [33] can be shown to coincide with those given in (2.45) and (2.46), respectively. The upper and lower bounds in Theorem 2.21 can be extended to the case of random vectors £ and n having pairs of independent components, in much the same way the E - M upper bound in (2.13) is extended to higher dimensions under independent components. For simplicity, assume that K = L = m (say), and the pair of random variables (£fc,r?fc) is independent of (£i,n{) for k ^ / and k,l = l , . . . , m , along with S and 0 being Tri-dimensional rectangles of the form x™ = 1[afc 0, a*i] and xj^l-ja/o, an], respectively. Under pairwise independence, the upper bounding inequality is 1 1 / m \ m < z••• E n ^ W ( ^ ^ . - " » c . ^ . - - - . ^ ) (2-48) i / 1 = 0 um=0 \fc=l / where i t A f c akiVk — mkk *k mkk — a-koVk d0 := afco, «i := a-kx, e0 := —; =—, ex :— —= ; aw — Qk Qk — aki Chapter 2. Two-Stage Stochastic Linear Programs 44 and ik Qfci - £k ?fc sf* - o*i Cfc SIC rfc Si o 0 •— :—, Oi •— — o-ki — ajto a*i — flfco Under pairwise independence, the lower bounding inequality is 1 l # * ) > £ • • • E n ^ U ^ ^ - . c . s J i . " - ^ ) ( 2 - 4 9 ) where and 1/1=0 i/m=0 \*=i Tfc . Qfclf* - m * * Tjfc . mkk ~ Qfcoffc ~fc . - ~fc . ~ «0 - = ~5 "1 - = = ~ 5 e0 - = °*0? 
e i •= fl*l a*i — rjk rjk — a^i j f c o*i ~ rjk 7-k Vk - Qfci o-ki — ajto — afco It is easy to show that the bounds in (2.48) and (2.49) are precisely the bounds of Frauendorfer [33] when pairwise independence information is utilized. 2.6 First Moment Bounds - Dependent Case When £ and r\ are independent, it was shown before that the lower and upper bounds in (2.19) and (2.20), respectively, are the best bounds using only the first moments f and fj. In this section, we derive the best lower and upper bounds for <^ (x), in the dependent case, under the first moment information. These bounds could also be viewed as generalizations of Jensen's and the Gassmann-Ziemba bounds, from the pure convex case to the convex-concave case. Most results follow trivially from those in the preceding sections. Define the bounded convex polyhedron C by C := \p € : 5>*> J)pij = (<*,fj), E P H = h PH > o\ . (2.50) Then, Chapter 2. Two-Stage Stochastic Linear Programs 45 Theorem 2.23 An upper bound to <j>(x) using only first moments is 4>u{x):= max ^ ^ ( x , u' , i f (p)) (2.51) S.t. £ V3Pii = fj\p) Y,PH> * = 1, • • • , /• i j A lower bound to <[>(x) using only first moments is Mx)- min ( 2 - 5 2 ) s.t. uV«i = (p) j =1> • • •> J-where C is defined in (2.50). Proof. <f>u(x) < 4>u{x) and <j>L{x) > 4>L{X) trivially hold since C C C , where <f>u{x) and < x^(x) are defined in (2.36) and (2.40), respectively, hence the result. • To see that the bounds ^>v(x) and <j>L(x) are tight, formulate the upper and lower bounding problems as generalized moment problems. One obtains Theorem 2.24 Defining V as the set of all probability measures on the domain Q, Mx) = sup If <f>(x,Z,n)P(dt,dn) : I £P{d£,dn) = £, I nP{d£,n) = fj) (2.53) Per U n Jn Jn ) and 4>L{z)= mf [Jj{x,£,n)P{d£,dn) : J^P{d£,dn) = I J^P{d£,n) = f]) (2.54) hold where 4>v(x) and 4>L(x) are defined in (2.51) and (2.52), respectively. Proof. Noting that (2.53) and (2.54) are always feasible, the proof is similar to that in Theorem 2.20. • These first moment bounds inherit the following properties: Chapter 2. Two-Stage Stochastic Linear Programs 46 Theorem 2.25 1. 4>u(x) is convex in x. 2. If (f>(x,£,n) is jointly linear in f and r\, then 4>v(x) = <j>(x) = <f>L(x). 3. If <f>(x,£,n) is only a convex function o/f, i.e., for fixed x and f, <f> is a constant over all Q, then (i) <f>L(x) simplifies to Jensen's lower bound, and (ii) (j>u(x) simplifies to the Gassmann-Ziemba upper bound. Proof. Straightforward. • When first and cross moments are used, it was shown in the previous section, that the resulting bounds are trivially determined for the case when domains are multidimensional simplices. However, with the first moment bounds ^>v(x) and <f>L(x), such is not possible. As illustrated in Figure 2.2, when f and r? are univariate, the upper bound is determined by choosing a "vertical" cross section of the saddle surface through the point of first moments (u>) such that the line joining the points of intersection of the saddle surface has a maximum height at a;. 2.7 Application to Stochastic Programming In this section, the bounds derived using first and cross moments are applied to the stochastic program (2.1)-(2.2) to obtain finite mathematical programming formulations. The intent here is to develop upper and lower bounding formulations which are amenable to efficient solution techniques, such as linear programming. 
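Before turning to these formulations, the following minimal sketch shows how the univariate construction of Theorem 2.21 converts the moment triple (xi_bar, eta_bar, C) into the two-point upper and lower bounding measures of (2.45) and (2.46). The routine is written for illustration only; the data mimic Example 2.22 (unit square, xi_bar = eta_bar = 0.5), with the cross moment taken to be C = 5/18, a value assumed here and consistent with the support points reported in that example.

import numpy as np

def univariate_cross_moment_measures(a0, a1, b0, b1, xi_bar, eta_bar, C):
    """Two-point measures behind the bounds (2.45)-(2.46).

    The xi-domain is [a0, a1], the eta-domain is [b0, b1]; xi_bar and eta_bar
    are the first moments and C = E[xi * eta] is the cross moment.  Returns
    (upper, lower), each a list of ((xi, eta), probability) pairs."""
    p1 = (a1 - xi_bar) / (a1 - a0)          # mass placed at xi = a0
    p2 = (xi_bar - a0) / (a1 - a0)          # mass placed at xi = a1
    upper = [((a0, (a1 * eta_bar - C) / (a1 - xi_bar)), p1),
             ((a1, (C - a0 * eta_bar) / (xi_bar - a0)), p2)]
    q1 = (b1 - eta_bar) / (b1 - b0)         # mass placed at eta = b0
    q2 = (eta_bar - b0) / (b1 - b0)         # mass placed at eta = b1
    lower = [(((b1 * xi_bar - C) / (b1 - eta_bar), b0), q1),
             (((C - b0 * xi_bar) / (eta_bar - b0), b1), q2)]
    return upper, lower

# Data in the spirit of Example 2.22: unit square, xi_bar = eta_bar = 0.5,
# assumed cross moment C = 5/18.
upper, lower = univariate_cross_moment_measures(0.0, 1.0, 0.0, 1.0,
                                                0.5, 0.5, 5.0 / 18.0)
print("upper-bounding measure:", [(np.round(w, 4), round(p, 4)) for w, p in upper])
print("lower-bounding measure:", [(np.round(w, 4), round(p, 4)) for w, p in lower])
# upper: (0, 0.4444) and (1, 0.5556), each with probability 0.5
# lower: (0.4444, 0) and (0.5556, 1), each with probability 0.5

Evaluating the recourse function at the two support points of each measure and weighting by the probabilities gives the bounds in (2.45) and (2.46); discrete scenario sets of this kind are the input that the bounding formulations developed below work with.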
Consider the upper and lower bounds <j>u(x) and <f>L(x) in (2.36) and (2.40), respec-tively, on the expectation <f>(x), using first and cross moments of the random vector. With Z* being defined by (2.1), the resulting upper and lower bounding approximations Chapter 2. Two-Stage Stochastic Linear Programs 47 for Z* are and Figure 2.2: First moment bounds in the univariate case. Z* < Z'v := m\n [c'x + <j>v{x)} Z* > ZI := min {c'x + <j>L{x)} (2.55) (2.56) P ropos i t ion 2.26 For the stochastic program in (2.1)-(2.2), under the assumptions (A1)-(A5), <f>u{%) is convex polyhedral in x G X, where <f>u(x) is. defined by (2.36). Proof . For any i = 1,.. . , / , <f>(x, u \ fj'ip)) := min UftpW : Wy' = % ' ) - T ( u ' > , y{ > o} . (2.57) y' Denote the feasible set on y', z" = 1,.. . , / (simply referred to as y), for any given x, by y(x). Thus, <^(z) =max min G(y,p) := pijqtfipW • pec vev(x) (2.58) Chapter 2. Two-Stage Stochastic Linear Programs 48 Due to assumption (A3) on page 19, G(y,p) = JZi,j(l{v^)'y,Piji i - e - 5 G(y,p) is bilinear in y and p. Moreover, since C and y(x) are polyhedral sets, interchanging maximization and minimization in (2.58), 4>u(x) = min maxG(y,p). (2.59) yey(x) p^c Using linear programming strong duality for the inner maximization in (2.59) max G(y, p) - min w° + w1^ + w2fj + J2k,i mkiwll pec w s.t. w° + w1^ + w2vi + Yk,i ukviwki ^ Q.{v')'y%i Vi , j. Therefore, <f>v(x)= mm w° + w1^ + w2fj + mklw3kl x (2.60) s.t. Wy' = h(u() - T(ul)x, Vi to0 + u^it* + w2yi + Yk,i ukviwki ~ <z(uJ)V — 0> Vi , j y« > 0, Vi and thus is convex polyhedral in x. • Due to the convex polyhedrality of ^(x), as follows from the above proof, the upper bounding approximation Z* in (2.55) can be solved as a linear program. On the contrary, the lower bounding approximation <f>L(x) may fail to be convex. To see this, rewrite <f>L(x) using the recourse function definition in (2.2) as <f>L{x) = ™no^PiMvJYyJ •• Wy* + T(?{p))x - h${p)) = 0, Vj j . (2.61) Due to our assumptions (A2) and (A3): M*)= ™ * E «(«*') V (2-62) p€C,y>0 . s.t. Wy{ + (T0x - h0) £ P i j + J2(Tkx - hk)C£ <Pij) = 0, V j . « fc=l i Chapter 2. Two-Stage Stochastic Linear Programs 49 Although for any given x £ X, <f>L(x) could be evaluated by solving a linear program, observe that it cannot be concluded from (2.62) that <f>L(x) is convex in x. Therefore, in general, the lower bounding approximation Z£ in (2.56) involves nonconvex optimiza-tion. To overcome this computational difficulty, we derive below an alternative convex polyhedral lower bound <f>L(x), using first and cross moments, which is generally weaker than <j>L(x), i.e., <f>L(x) < <j>L(x). Construct the (K x X)-dimensional matrix M of cross moments, with the entry at the kth row and the Ith column being m^/. P ropos i t i on 2.27 4>(x) > <f>L(x) > 4>L(x) where 4>L(x):= min £ a ( * y ) y (2.64) 3 s.t. E W = Mf)-ra> j £vjWy* = h(M{l)) - T(M{l))x + (ho - r0*)(r7, - 1), l = l,...,L 3 yj>0, j = i , . . . , J and M{i) is the Ith column of the cross moment matrix M. Moreover, 4>L(x) is convex polyhedral in x. Proof . Summing the constraints (involving variables y J ) of (2.62) for all j = 1,..., J , J2Wyi = h(0-TU)x (2.65) 3 follows since p € C. Multiplying the constraints (involving variables y1) of (2.62) by vj (2.63) Chapter 2. Two-Stage Stochastic Linear Programs 50 (for some / = 1,..., L) and summing the resulting equalities together for all j = 1,...,«/, K £ v3Wy3 = (h0 - T0x)fji + ^2(hk - Tkx)mkt j k-l = h(M,)- T(M,)x + (h0 - T0x)(fj, - 1). 
(2.66) Since the constraints are aggregated, (2.65)-(2.66) provide a relaxation to the feasible set of (2.62), hence a lower bound which is convex polyhedral in x. • The constraint aggregation procedure above should be viewed as a rather natural way to obtain a convex polyhedral lower bound from the generally nonconvex bound in (2.62), as evident from the next proposition. P r o p o s i t i o n 2.28 IfQ, the domain of the random vector r\, is a L-dimensional simplex, then </>L(x) — $L(x) for all x £ X. P r o o f . Since <f>L(x) > 4>L(x) holds trivially from (2.63), we only need to show that s^i(x) < 4>L{X) holds when 0 is a simplex. For every feasible (L+l)-tuple (Ej yJn, Ej viVi, 1 = 1,..- L) of (2.64), where n = 1,... n 2 , observe that 0 being a simplex implies that yJn, Vj, Vn are completely and uniquely determined. It is easy to verify that this unique solution y\ for the chosen feasible (L+l)-tuple, is feasible in (2.62) since E» Pij a n d Ei u%pii a r e uniquely determined for 0 being a simplex. • Therefore, not only is the lower bound 4>L(x) in (2.64) tight in the sense of Proposition 2.28, but it also allows one to solve the resulting lower bounding approximation Z;L:=rmn{c'x + iL(x)} (2.67) as a linear program. Remark: When solving the foregoing approximating models, explicit specification of the set X may entail some difficulty. A way to circumvent this is discussed with regard to Chapter 2. Two-Stage Stochastic Linear Programs 51 stochastic convex programs in Chapter 4. Nevertheless, in the case of complete recourse, or even relatively complete recourse, one can replace the set X with X1 where X1 is defined in (2.3). Example 2.29 Consider the following two-stage stochastic programming (complete recourse) problem with random right-hand side, technology matrix, and cost coefficients: Z* = min 2xi + 2x2 + 22*,, [min 27712/1 + 37722/2 + 2/3] x>0 y>0 s.t. 2 x 1 - X 2 < l s.t. 22/1 + 2/2 - 3 i / 3 = 2 + 36 -3(6 +6)xi + 4fcs3 xi + x2 < 1 - J / i + 3y2 + 2/3 = 4 + 2£ 2 - xi - 3£ 2 x 2 where E = [0,1] x [0,1] = 0 and the joint distribution of (6, £2, »?i> ^ 2) is given by ' ^ i fn*<6 for = 1,2 with (£i,rji) being independent of (£2,n2). One obtains £k = fjk = 0.5 for k = 1,2 and the cross moments m n = m22 = j$ and m i 2 = m 2 i = 0.25. Using the lower bounding approximation in (2.67), solving only a linear program, one obtains Z^L = 3.6369 together with the approximate solution xL — (0.5,0). Using the upper bounding approximation in (2.55) and solving only a linear program, one obtains Z* = 3.7977 together with an approximate solution xv = (0,0). Thus, the relative error of the bounds is ^^J^^f 3 6 9 < 5%. Moreover, from the dual solution of the upper bounding linear program, one obtains that the true distribution is approximated by a discrete probability measure on the boundary of the joint domain as UJ1 = (0,0,0.4580,0.5420), p{ux) = 0.3306; UJ2 = (1,0,0.5821,0.2538), P(UJ2) = 0.1694; a;3 = (1,1,0.5420,0.6261), p(u3) = 0.3306; and UJ4 = (0,1,0.4180,0.4180), p(uj4) = 0.1694; where p(u) denotes the probability assigned to the point UJ = (6,£2, ni,772). fk(£k,r)k) -Chapter 2. Two-Stage Stochastic Linear Programs 52 Since ( f u T/I) is independent of ( f 2 , V2), the bounds in (2.48) and (2.49) are applicable. Under pairwise independence, one obtains these lower and upper bounds as 3.6369 < Z* < 3.7973. 
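As a numerical check, the following minimal sketch uses the data reported above to verify that the four-point measure recovered from the dual solution of the upper bounding linear program reproduces the first and cross moments it was constructed from.

import numpy as np

# Support points (xi1, xi2, eta1, eta2) and probabilities of the discrete
# measure reported above for the upper bounding approximation.
omega = np.array([[0.0, 0.0, 0.4580, 0.5420],
                  [1.0, 0.0, 0.5821, 0.2538],
                  [1.0, 1.0, 0.5420, 0.6261],
                  [0.0, 1.0, 0.4180, 0.4180]])
prob = np.array([0.3306, 0.1694, 0.3306, 0.1694])

xi, eta = omega[:, :2], omega[:, 2:]
first_xi = prob @ xi                      # should recover (0.5, 0.5)
first_eta = prob @ eta                    # should recover (0.5, 0.5)
cross = xi.T @ (prob[:, None] * eta)      # K x L matrix of E[xi_k eta_l]

print("E[xi]  =", first_xi.round(4))
print("E[eta] =", first_eta.round(4))
print("E[xi_k eta_l] =\n", cross.round(4))
# The off-diagonal entries come out as 0.25 (= m12 = m21) and the diagonal
# entries come out near 0.278, consistent with the common value m11 = m22 of
# the example; up to the rounding of the reported probabilities, the measure
# matches the moment information it was built from.

Matching the prescribed moments is exactly the property that makes this discrete measure a feasible candidate in the moment problem (2.41), while its mass is concentrated on the boundary of the joint domain.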
• 2.8 Monotonicity and Convergence The goal is to show that the bounds derived so far when applied repeatedly on partitions of the support f i , as these partitions become finer the bounds themselves get tighter. This is an important property toward solving stochastic programs efficiently. Let us consider the upper bound <f>u{x) on the expectation function 4>(x), using first and cross moments. Next let us define a partition S on f i , having cells f i r (which are polytopes themselves) with positive probability mass pT, r = 1,..., R as follows: a-HO-' = 0, r , r ' G {1,...,#}, r^r' R \J fi r = fi. r = l The upper bounding technique in Theorem 2.16 when applied to <^ >(f, 77) where (f, 77) G fir, the resulting upper bound is denoted by ^y(x). Also define: f := E[t I (f ,77) € f i r ] ff := JBfol IL] m\x := I (*,!/) e c u V(M)-Theorem 2.30 The upper and lower bounding strategies in Theorems 2.16 and 2.17, respectively, under partitioning the support f i , behave monotonously. Proof. We demonstrate monotonicity for the upper bounding strategy; same follows for the lower bound. Due to our notations, we have to show that 4>(x) < < M*)- (2-68) r = l Chapter 2. Two-Stage Stochastic Linear Programs 53 Considering the generalized moment problem in (2.41) and strong duality with the asso-ciated semi-infinite linear program, due to Theorem 2.20, it follows that # f » = inf y V ) + E ^ 0 + i : « ( r ) + E < / y ^ ) y<r> *=i l=i k,i s.t. y°(r) + £ Ckvl(r) + £ W O O + E & W J C O > i?), Vtf,r?) € flr. it=i /=i fc,/ Since, £ = E r P r £ r , »7 = E r M r , and = E r ^ m ^ , , for ^ ( x ) we get <f>v(x) = inf ^ p r y ° + YPrlkVk + YPrVlVf + YPrmkiyll r k,r l,r k,l,r s.t. y° + £6y£ + E W + E & W * > <K^), *: / it,/ V ( ^ ) 6 f i r , r = l , . . . , J R > inf J > y ° ( r ) + 5>8yJ ( r ) + 5 > t f y ? ( r ) + £ P r ^ y 2 , ( r ) ^ r k,r l,r k,l,r s.t. y°(r) + £ & y j ( r ) + E W (r) l E ^ M ^ W > fc / it,/ V ( { , i ) ) G f i r , r = l , . . . , i 2 r = l thus proving the second inequality in (2.68). To prove the first inequality in (2.68), note that 4>(x) = Y,rPrE[<f>(x,£,n)\Slr]. Since E[<f>(x,£,rj)\£lT} < fa(x) holds for r = 1,.. . , R, the proof is completed. • The strategy of partitioning the domain is an approach generally used to solve stochas-tic programs to e-optimality, see, e.g., Birge and Wallace [6], K a l i [48,50]. A n important issue is whether a sequence of approximate solutions xv obtained by solving the model (2.55) converges to a true solution of the stochastic program (2.1)-(2.2). Similarly, for the lower bounding solutions xL too. What is fundamental to this discussion is the theory of epiconvergence, which can be defined as follows: A sequence of functions {g", v = 1,...} is said to epiconverge to g if and only if the sets {epi gu, v = 1,...} converge to epi g, Chapter 2. Two-Stage Stochastic Linear Programs 54 i.e., l im sup epi g" = epi g = l im inf epi gv. Under epiconvergence, due to Theorem 9 of Wets [90], the set of approximate solutions is guaranteed to belong to the set of optimal solutions, i.e., limsup[argirun<71/] C argmin^. Now, referring to the stochastic program of concern, consider a sequence of partitions S", v — 1,..., where each Sv is described by a collection of cells VlTl/, rv = 1,... , R„, of H such that Su C Su+1. Moreover, suppose m a x i P i r { p , T v ) —» 0 as v —> oo. Now, applying the derived bounding strategies on each Q.Tu of partition S", one determines approximating (discrete) probability measures Pv on ft such that {Pv} weakly converges to Ptr. 
Furthermore, since fl is compact, due to Birge and Wets [8, section 2.11], it follows that Epu [<j>] is both epi- and pointwise convergent to (j>. Consequently, under such a scheme, for the upper (resp., lower) bounding first stage solutions xvv (resp., x£), one has {jy} —• x* (resp., {x^} —• x**) where x* (resp., x**) is a solution to the stochastic program (2.1)-(2.2). What remains to be addressed is the choice of such partitioning schemes so that the computational burden does not become excessive; for instance, how to refine partitions based on information gained from previous partitions. Frauendorfer [33] addressed the issues when domains are rectangles. More recently, simplicial domains have been inves-tigated by Frauendorfer [36]. 2.9 Appendix Proof of Lemma 2.2: The components of £ are independent if and only if, E[e9'^] = ]TJ^i E[ee&] holds for all Chapter 2. Two-Stage Stochastic Linear Programs 5 5 real-valued parameters 8 £ $tK-. Prime (') denotes transposition of a vector. Writing e* = 1 + t + ^; + ... + ^ + ..., this reduces to, 1 + 0'( + ^,E[0'(f + ... + -.E[6'(T + ••• = m 2 ( _ 1 1 1 n {i+Oi t i + 2?^[(^-)a]+• • •+-^ imi)n]+•••} (2.6.9) (2.69) holds for all real 0 £ $lK if and only if for each (fixed) combination of 0/s, say f(0), the coefficients in the left-hand side and right-hand side of (2.69) are equal. Any combination f(8) has the form (Y[jeA.(Qj)Tj) where A and r,- are as defined in the Lemma. The relevant coefficient arising from the right-hand side of (2.69) is Uje\{vi.E[(^)ri})-To obtain the corresponding coefficient from the left-hand side of (2.69) we need to consider the term ^ - £ [ ( ^ ' 0 ^ ] where R = Y^jeKrj- Since each 0j is raised to power rj, in E[(0'£)R], the term (Y[j^A.(0j)Ti) appears <-/ R l r , times. Thus, the relevant coefficient from the left-hand side is (TT-1—i)E[YljeA(^jY j]- This completes the proof. • l l j g A ri' Chapter 3 S L P Problems: Unbounded Domains for Random Vectors In this chapter, we extend the results of the previous chapter on approximating the expectation of the recourse saddle function <f>(x, £, 77), being saddle in (£, 77) =: UJ, of the stochastic program (2.1)-(2.2). We retain the assumptions (A1)-(A4) of Chapter 2, see page 19, but relax the assumption (A5) by requiring only that (A5') E and 0 are closed convex, possibly unbounded, subsets of dlK and %tL, respec-tively. Furthermore, we make the following unrestrictive assumption: (A6) belongs to a single orthant of dtK+L. For instance, if Assumption (A6) is violated, one can always partition the domain such that the latter assumption is satisfied on the partitioning cells, and then carry on the subsequent discussion on such a partition. The ideas in this chapter may be viewed as extensions of those in Birge and Wets [9], Edirisinghe and Ziemba [18], Frauendorfer [33], and Kail [52]. Birge and Wets [9] ob-tained an upper bound to the expectation of a convex function, when the domain is possibly unbounded, as a generalization of the Gassmann-Ziemba upper bound in (2.14), by incorporating the function information as well as the information of the recession function. Kali [52] showed that this approach yields the optimal value of the correspond-ing moment problem using first moments. This chapter combines and applies the ideas in the above references to derive bounds for the expectation of a saddle function, namely 56 Chapter 3. 
SLP Problems: Unbounded Domains for Random Vectors 57 that arising in the SLP problem, using first and cross moments as well as using only first moments. In the sequel, we show that these bounds represent optimal values to the respective moment problems, implying that the bounds are tight. We begin the exposition by dropping the first stage decisions x from the convex-concave recourse function <f>(x,£,ri), hence the saddle function of interest is represented by <f>(£, 77) for simplicity. In the application of the bounds to the SLP problem of concern, the dependence of the saddle function on x is made explicit again, however, at remaining all times being aware that this dependence is implicit. The chapter is organized as follows. First, we derive the new bounds and then these bounds are examined in the context of the respective general moment problems. Application of the bounds to SLP problems is then considered and finite linear programs are formulated. Finally, an order-cone decomposition scheme for the domain is considered to enhance the computations of the bounds. However, detailed developmental and implementational issues of such schemes are beyond the scope of this chapter. A numerical example illustrates the bounds. 3.1 Preliminaries Consider the convex-concave recourse function 77) in (2.2) with the argument x being suppressed for simplicity, along with the assumptions (A1)-(A4) on page 19 and (A5') and (A6) on page 56. The expectation 4>(x) of the recourse function is represented by ^ suppressing the argument x. In the ensuing development, it is implicitly assumed that x 6 X where X is defined in (2.5). This way, for fixed f G S and 77 6 0, <^ (f, 77) is finite. As in Chapter 2, the first moments f, 77, and the cross moments rriki are all assumed to exist and be finite. We use the following notation with regard to domains E and 0 of the random vectors f and 77, respectively: S is a convex polyhedron with the set of extreme points ext H Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 58 and the recession cone rc E. Similarly, 0 is a convex polyhedron with the set of extreme points ext 0 and the recession cone rc 0. We denote ext E = {u1,..., u1} , and (3-1) extS = {vl,...,vJ}. (3.2) Since H is polyhedral, rc E is a polyhedral cone, and its set of generating elements is denoted by ext-rcE = {d 1 , . . . , ^} (3.3) and that of rc 0 by ext-rce = { r 1 , . . . , ^ } . (3.4) We shall only be concerned with developing an upper bound to ^; an analogous procedure is followed to derive a lower bound. Define the function : rc H x 9 -> $ such that (i) <£(£ + d, r,) < V) + $v(d, V), Vf G E, d <E rc E, r, <E 0, (ii) $„ is sublinear in the first argument, and ' (3-5) (iii) $ v is concave in the second argument. J For the saddle function <f>, the existence of a function satisfying the conditions (i), (ii), and (iii) in (3.5) is argued as follows. Considering the inequality in (3.5), for fixed r) G 0 and d G rc S, *v(d,r,) > sup {itf + d,r,)- d>(t,n)} . (3.6) As a convex function in f, by our assumptions, (j) is lower semicontinuous, and thus is closed, i.e., epigraph is closed, for fixed r?. Then the right-hand side supremum of (3.6), which is the so-called recession function of <j>(£,rf) for fixed rf G 0, can be expressed as Ptr(a,n) := sup< >, d £ rc a, (3.7) *>o I f I Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 59 which is independent of £ that is used in the right-hand side of (3.7), see Rockafellar [73, Theorem 8.5]. 
Thus, the choice of $ v in (3.5) must satisfy, for all i ) G 0 , <f>tr{d,v)<*u{d,ri), VdercZ. (3.8) Defining $ v by the equality in (3.8) will satisfy the first two conditions in (3.5), since <f>v(d,n) is sublinear in d, however, it may fail to be concave in 77, in general. In such a case, one may strive for a function $ v satisfying (3.8) and the conditions (ii) and (iii) in (3.5) by setting, for example, $v(d,T)) :=sup <f>Z(d,ri), (3.9) rjee which in fact is only a sublinear function in d and thus the condition (iii) being satisfied trivially. In the application of the bounds for stochastic programming, reported later in the chapter, we will derive $u explicitly, with equality being held in (3.8), implying that saddle functions arising in stochastic programming can be manipulated to yield recession functions that have the required properties. Also noteworthy is that although it may seem at the first glance that using $ v as determined by (3.9) may only give a rather crude upper bound, we will show later that, this indeed is the best approach when one uses only first moments of the random vectors £ and 77 to compute bounds for <j>. Moreover, in the latter case, $u in (3.9) can be deter-mined explicitly, for the stochastic program under consideration, due to the polyhedral nature of 0. Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 60 3.2 Derivation of the Bounds Suppose a function $u satisfying the conditions in (3.5) is in hand. Due to the polyhe-drality of E and 0 , each f G E and each 7  G 0 can be represented by { = EA-(0+E«) (3-io) 1 = 1 3=1 7 = £A 2 M + X>v?M, (3.ii) j=i t=i where EAJ(0 = I, X>?fo) = i ( 3- 1 2) »=i j=i and AHO, A'foj, ^ ( 0 , / ^ ) > 0 . (3.13) For fixed 7  G 0 , therefore, yields \j=l s=l / < ^ EA!((),?)+^(E^1(()1'?) < J2MQ<l>(ui,v) + ilti(t)*A<r,'i)- (3-i4) 1=1 s=l The first term in the inequality (3.14) follows by convexity of <j> in f, while the second term is due to sublinearity being equivalent to subadditivity of convex functions, see Rockafellar [73, Theorem 4.7]. Now suppose 77' and rj" are given fixed points in 0 for all i = 1,..., J and s = 1,..., S. For fixed i and s, linearize the concave functions (f>(u\rj) and ^u(ds,rj) at the known points 7* and fjs, respectively. This procedure yields dominating affine functions, due to concavity of <f>(u*,.) and $j/(d s,.), a*n + /3*' > 4>{u\n), i = 1,...,/ and T T ' H + <r* > ^ ( d ' . n ) , s = 1,..., S, (3.15) Chapter 3. SLP Problems: Unbounded Domains tor Random Vectors 61 where the transposes of the fixed coefficient vectors a'(G 3?L) and 7r*((E 3tL) have been removed for simplicity. Combining (3.14) and (3.15), s=l (3.16) i=l holds, which leads to the upper bounding inequality « ' = 1 s=l where A, A, /t , and /2 are defined by A':= / r)\}(£)Ptr(dZ,dn), A« := / A f t f l F " ^ . * / ) , Vi = 1, Jn / ^ ( O P t r K , ^ ) , / fi)(OPtT(dt,dV), Va = l , Jn and Pir denotes the true probability measure on (Sl,B). Notice that £ f = 1 A'' = 1 and A' , / / s > 0 while A and fi are vector measures (in 3ftL). Integrating both sides of (3.10) with respect to the probability measure P t r , .,5, (3.17) (3.18) (3.19) (3.20) i=i s=l Multiplying both sides of the first equation in (3.12) by 77/ (for some / = 1,..., L) and integrating with respect to Ptr gives *7 = E A \ (3.21) t=i Consider (3.10) for some component 6 and (3.11) for some component 77;. Multiply one equation by the other and integrate the resulting equation with respect to the probability measure Ptr. Denoting Pij ••= J^iO^MP^id^dr,), i = l , . . 
. , / , i = l , . . . , J := / pl(0»2j(rj)PtT(dt, dr,), s = 1,..., S, t = 1,..., r == / p]m2j(l)Pir(d£,dri), s = l,...,S,j = l,...,J := [ X](0^(v)Ptr(d(,dr,), i = 1,...,I, t = 1,...,r, Jn ' Tat S3j l i t (3.22) Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 62 yields mkl = £ £ « W # i + £ £ u * r / 7 f t i j i t + £ £ « ^ + ££<W*t s j S t The following identities hold: * ' = £ " % • + £ ^ 7 * , i = l,...,I % 3 t £* = £ « ' * . ; + £ r * 0 r t , s = l,...,S j t A' = ^2PH, t = 1, i /** = ££»i> 3 = 1,.,.,5. j Substituting the above in (3.20) and (3.21), ^ = £ £ «Vy + £ £ ^*7it-Moreover, E E ^ « i = x> a n d Pij> »^i> 7t't5 #st > 0. (3.23) (3.24) (3.25) (3.26) (3.27) (3.28) Consequently, the inequality in (3.17), under this variable definition, can be expressed as $ ^ E E ^ v ^ + E E o r v s , » j i t + E E ^ / v i + E E ^ . i * t s j =: F(Q>,P,TT,O; />,<$, 7,0). (3.29) (3.30) Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 63 Observe that for any probability measure p and nonnegative measures 8, 7, and 6 satisfy-ing the constraints (3.23) and (3.25)-(3.28), the value of F in (3.30) may not give a valid upper bound to However, a set of measures (p, 8,7,0) that attains the supremum of F in (3.30) is indeed a valid upper bound. Thus, we have proved Proposition 3.1 For a given set of linearizing points rj,,fjs G 0 , the supremum of (3.30) over the (feasible) set of coefficients p, 8,7, and 0 satisfying (3.23), (3.25)-(3.28) is a valid upper bound to <j>. While the bound in Proposition 3.1 uses only the first and cross moments to determine an upper bound, it depends on the chosen points fj' and 77s that were used to linearize « ^ ( 7 i * , 7 7 ) a n d $u{ds,n). Therefore, it is of interest to determine those affine functions in (3.15) that lead to the least upper bound available through this procedure. If such exists, it would be the best upper bound obtainable through the technique of linearization. We show the existence by actually determining the optimal points for linearization. Define the set C := {{p, 8,7,0) : [p, 8,7,0) satisfies (3.23) and (3.25)-(3.28)} . (3.31) Then the upper bound in Proposition 3.1 is 4>< sup F(a,/3,ir,o-; p,8,^,0), (3.32) where F is defined in (3.30). For the upper bound in (3.32), one would have chosen the set of coefficients o r , /? ,7r , and <r such that they satisfy (3.15). Define the feasible set of coefficients A := {(or, /?, ir, cr) : (a, (3, TT, a) satisfies (3.15)} . (3.33) To obtain the best upper bound from (3.32), we wish to solve UB:= inf sup F(a,f3,n,o; p,8,^,0). (3.34) (a,P,ir,(r)GA^s,f,B)eC Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 64 Observe that F is bilinear in the two (finite-dimensional) variable vectors (a, {3, ir, a) euclidean spaces. Furthermore, L e m m a 3.2 If there exists some index 1. k E { 1 , . . . , K} such that d'k has the same sign for all s = 1,. . . , S, or 2. I E { 1 , . . . , L) such that r] has the same sign for all t = 1,. ., r, then, the set C in (3.31) is compact. Proof . See the Appendix. • Observe that Assumption (A6) guarantees that the conditions in Lemma 3.2 are satisfied and thus C is compact. In fact, under much weaker conditions than those in Lemma 3.2, the result holds, which is evident from the proof in the Appendix. Since C is compact, applying Corollary 37.3.1 of Rockafellar [73], and (p, 8,7,6), hence convex-concave, and that A and C are subsets of finite-dimensional (3.35) L e m m a 3.3 Let (p,8,j,0) E C. Then, (3.36) and (3.37) Proof . 
Follows directly from the polyhedrality of O. Theo rem 3.4 (3.38) s.t. i f £ pi:i = £ rfpa + £ rf7«'t> V i = 1,. . . , / v' E = E + E r i e ° t i v5 = 1,..., 5. Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 65 P r o o f . For fixed (/>, 6,7,0) E C, consider the inner infimum in (3.35), denoted by F{p, 6,7,0) := inf F ( a , /?, TT, o; 7, 0). (3.39) Defining 0 , 1 i * 2 s.t. a*'n + pV > <t>{u\ 77), V77 € 0 and F\d°):= inf £ 7 r W s j + £ 7 r V 0 s t + £ < T % -J 1 } s.t. ff*n + <r* > $^ ,77), V i / G 6 , one obtains ^ , 7 , 0 ) = £ ^ V ) + £**(<*•)• First, suppose that Ej Pij > 0 and Ej »^j > 0, and consider fj1 and 77* as defined in Lemma 3.3. The closedness and concavity of ^ (u',77) and $y(da,77) in 77 along with the definitions of 771 and 77S given in (3.36) and (3.37), respectively, guarantee that ^ V ) = £M («w) . j FV) = E ^ M ^ , r ) , i and (3.39) is solvable. So, in this case, U B is a valid upper bound to ^ and the theorem is proved. Now suppose Ej Pij = 0- This implies Ej v^Pij = 0 since p is nonnegative. Thus, F V ) = i n f IEa'"r*7.-t : + > 77), V77 E 01 . (3.40) Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 66 The solution of (3.40) is 0 ifE*r*7i* = 0 —oo otherwise. Similarly, in the case Ej f>sj = 0? thus implying that Ej v3t>sj = 0, one obtains [ — oo otherwise. Since UB = sup c F, a feasible solution in C such that F1(ul) = — oo for some i = 1,..., J or F2(ds) = —oo for some s = 1,..., 5" is never optimal, i.e., in an optimal solution of (3.35), we must have Efti = 0 Er*7ft = 0 (3.41) and E<^  = ° =• X)r*^  = °- (3-42) The constraints in (3.38) indeed guarantee that (3.41)-(3.42) hold. Moreover, in the latter case, F 1 (u ' ) + F2(ds) = 0. This completes the proof. • To n\erive a lower bound using first and cross moments, defining the convex-superlinear function < & x ( . , . ) which satisfies +r) > i/) + $ £(£,r), V T / G O , r e r c G , £ G H , (3.43) it can be proven analogous to Theorem 3.4 that T h e o r e m 3.5 $>LB:= Jni^ jj> (£, ^ ) ^ + £ rf) 7 ; t J (3.44) s.t. ^E/'.'i = E u X ' + E ^ Vj = l , . . . , J t j s I* E 7."t = E «'7ft + E ^ Vt = 1,..., r. Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 67 3.2.1 Bounds in the Univariate Case Since C is a closed (convex polyhedral) set, "sup" and "inf" in (3.38) and (3.44) can be replaced with "max" and "min". When f and rj are univariate, the maxima and the minima can be shown to be uniquely determined. To see this, suppose H = {f : f > d} and 0 = {rj : n > a}. Let E[£rj] be denoted by C. Setting ul = d, d1 = 1, v1 = d, and r 1 = 1, the feasible set C is a singleton with Pn = 1, <$n = f — d, 7n = fj — d, and On = C + dd — (dr7 + df). Thus, we have r}1 = fj, rj1 = < ^~^, f 1 = f, and f 1 = ^r|S giving the bounds as d) + (r? - z)*L (j-^f, i ) < i < + (£ - ( i , Try) • (3-45) The bounds in (3.45) are depicted in Figure 3.1. 3.3 Tightness of the Bounds We pursue the relationship of the upper and lower bounds derived in the preceding section to the corresponding generalized moment problems under first and cross moments. The case of upper bound is considered; analogous results hold for the lower bound. The upper bounding problem for when formulated as a moment problem using the first and cross moments, is V(MP) := sup / <ftttr,)P(dt,dri) t (3.46) s.t. [ (P(dt,dri)={ JQ f rjP(d^dr,)=fj JQ / &njP(d£,dn) = m w , V(fc, /) JQ Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 68 <bu<M> Figure 3.1: Lower and upper bounds in the univariate case. 
where V is the set of probability measures on (fi, B). The symmetric dual of (3.46) is the following semi-infinite linear program: V(DP):= inf y 0 + E ^ + E ^ 2 + E ^ 3 / (3-47) fc=l Z=l k,l s.t. y 0 + E ^ + E^J// 2 + E 6 ^ > ^ , r ; ) , V ( ^ ) e ( l . jt=i (=i k,i Observe that the convexity of <f>(£, rj) in f along with fi = E x 0 and the polyhedrality of E imply that (3.47) is equivalent to: V(DP)= inf y° + E ^ + Ew/2 + E ^ 3 / (3-48) k l k,l s.t. y° + E(«l + vdi)y\ + E Vivf + E K + vdilmyli > + »d>, n), fc / i t , / i = !,...,!, s = 1,.. . , S, rj G 0, v > 0. Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 69 For v > 0, the constraints of (3.48) imply (for fixed indices i, s and n (E 0) that E ^ + E^W/ > sup< «/>o I + vd',r,)-<f>(u\r,) | <f>{u\n) - {y° -rYkUJyl + EiViyf + Ekj<ml)] V V j (3.49) For v = 0, one obtains y° + E <y\ + E W + E «* W i > (3.50) k l k,l Furthermore, by the convexity of </)(., 77), for fixed n, the ratio ^ ( u ' ' + ^ , r ? ) - ^ K ' , r ? ) ] is nondecreasing. This along with the inequality in (3.50) yield V(DP)= inf t/° + E &yl + E Viyf + E mfc^fc< (3-51) fc <• fc,/ E42/fc + E r ffcW/>C(^%'?), 5 = 1,...,5*, 77 G 0 , fc fc,/ where « ( * , , ) := sup / « " ' + • < * ) - * • ' . » ) } _ fe | ^ + » < * ) - « u ' , , ) j ( J 5 2 ) is the recession function of <f> for fixed 77, defined in (3.7). <j>u(ds, n) is sublinear in d". In the pure convex case, i.e., when <f> is only a convex function (of £), Birge and Wets [9] used the sublineariy of the recession function to derive an upper bound using first moments, which was shown by Ka l i [52] to be the solution to the corresponding moment problem. In fact, the analysis in (3.48)-(3.52) resembles that of Ka l i [52] who treated the moment problem for pure convex functions under the first moment information. However, when Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 70 <j> is convex-concave, although <f>*v(ds, 77) is sublinear in d", for fixed 77 G 0, fc m a v ^au ^° be concave in 77. This concavity is essential for our goal. Defining M := co {(6, • • • , fr, 171,..., r\Li • • •, 6T»7L) € $*+I+XL : (£, 77) G 3 x 0 = f)} (3.53) and the point p G 3^+^+^ /* := (tu---,tK,m,...,TjL,rn11,...,mKL), (3.54) T h e o r e m 3.6 7/ 1. p £ int(M) holds, and 2. fc(ds,rf) is concave in TJ(G 0), then <t><UB < V(MP) holds where UB is defined by (3.38) with $ y = <fc. P r o o f . For a given set of coefficients (p, 8, 7,9) G C, where C is defined in (3.31), whenever Ej Pij > 0, define 77' as in Lemma 3.3, and if Ej Pij — 0, let 77' be any point in 0. Similarly, define fj" for all s = 1,...,S. The pairs (EjPij,V*), i = and (Ej <f>«j, »7S), s = 1,..., S, are dual feasible to (3.51), which follows from the constraints defining the set C. Since UB is defined with $ v = fcv, V(DP) > UB follows. If V(MP) = 00, then UB < V(MP) trivially holds. Thus, assume that V(MP) < 00. Then the first condition in the theorem guarantees, due to Theorem 1.3, that V(MP) — V(DP) holds, and thus UB < V(MP) follows. <j) <UB holds since fc(ds, 77) is concave in 77. • The upper bound U B may be viewed as approximating the true probability measure PtT by a probability measure p and nonnegative measures 8,7, and 9. Moreover, U B is never inferior to the moment problem upper bound V(MP), provided the conditions in Theorem 3.6 are satisfied. However, if the moment problem in (3.46) is solvable, with a Chapter 3. 
SLP Problems: Unbounded Domains for Random Vectors 71 solution being denoted by P*, then V(MP) = J n <f>{uj)P*(dw) < UB trivially holds, and thus UB = V(MP) follows. In contrast, for the case of bounded domains, see Chapter 2, the upper bounding result not only is the value of the moment problem, but provides an optimal solution to the moment problem as well. If the recession functions are infinite, i.e., = oo for some ds £ ext-rc3, rj £ 0 , then the upper bound is infinite. Analogous results hold with regard to the lower bound in (3.44). The univariate bounds in (3.45), with $ v and $ £ being replaced by the recession functions <f>^ and <^ *, respectively, can be generalized to multidimensions under pairwise independence of random variables, similar to those in (2.48) and (2.49). For K = L = m (say), and the pair of random variables (£k,i]k) being independent of (f/,77/) for k ^ I and k, I = 1,.. . , m, along with H and 0 being of the form x^_ x[ajt, oo) and xj l i ja / , oo), respectively, the upper bounding inequality is 4>{x) < <f>(a,77) + - aoft (Cfc>77 + (TO|*lff*) e*) (3-55) where ek is the kth elementary m-vector. Under pairwise independence, the lower bound-ing inequality is 4>{x) > 4>&a) + p(fjk - afc)« + (m?lff*) e^ e*) • (3-56) If f and r/ are stochastically independent, one can generally derive tighter bounds than those in Theorems 3.4 and 3.5 by applying Jensen's inequality and the Birge and Wets [9] inequality simultaneously. Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 72 3.4 Bounds Using Only First Moments The effort in this section is to generalize the first moment bounds of Section 2.6 (see Chapter 2) to the case when domains are unbounded and then to investigate the rela-tionship of the resulting bounds to the respective moment problems. We will only be concerned with the case of upper bounds. Lower bounding results can be derived in a similar manner. Define the polyhedral feasible set C by C := {(p,6,f,0) : (p,6,i,9) satisfies (3.25)-(3.28)} . (3.57) Considering the feasible set C defined in (3.31), it is clear that CCC since the constraints involving cross moments have been dropped from C to obtain C. Obviously, 4><UB<UB:= sup |E^(u'^ 0 '^ + E^K ^ S ) ^ [ (3-58) P,s,<y,eeC [ i,j a<j J s.t. fj1 £ PH = £ vJPij + Y r *7 .*> V i = 1,. . . , I j j t Vs £ 8.j = £ + £ r%t, Vs = 1,. . . , S. j j f where UB is defined in (3.38). The upper bound UB in (3.58) requires only the first moments of the random vectors. To see the relationship of this bound to the G M P involving only the first moments, define V(MP) := sup { / r,)P(d^ dr,) : I (£, , dr,) = ( £ fj)} . (3.59) Pev wn Jn ) The dual problem of (3.59) is the following semi-infinite linear program: V(DP):= inf J / 0 + £ 6 y I + EW (3.60) " k i Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 73 Defining <f>*v as in (3.52), V(DP)= inf y° + £ & y l + 5>y,2 (3-61) y k i s.t. y° + 5>'fcy£ + 5>/y,2 > #u',n), t = l , . . . , / , n e e £<**y* > ^ K , T / ) , 5 = i , . . . , 5 , n e 0 k which yields V(DP)= inf y0 + E s ^ + £ W (3.62) s.t. y° + Y,uiyl + J2viy? > <t>(u\ri), i = 1,...,/, n e 0 fc / E « >/ v(d s), 5 = 1,...,5, where we have defined /„(<*•):= sup n). (3.63) Since the degenerate distribution at (£,fj) is feasible in (3.59), following a proof similar to that of Theorem 3.6, it is easy to show that <^> < UB < V(MP) holds when one sets $u(da, rj) = fv(da) for all n e 0, provided that (£ij)em<(«). (3.64) 3 . 
5 Application to Stochastic Linear Programs We apply the bounding techniques in Section 3.2, which use first and cross moments, on the stochastic program (2.1)-(2.2). To do so, one needs an upper bounding function $u(x,da,ri) and a lower bounding function $ L ( X , (, »"*), for the convex-concave recourse function (f>(x,.,.), such that <f>(x,£ + d3,r]) < <£(x,f,7?) + $ a(z,<f ,77) and Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 74 where $u{x,.,.) is sublinear-concave and $L(x,.,.) is convex-superlinear. As follows from the discussions in the previous sections, if these functions are chosen to be the respec-tive recession functions, then tight bounds are available provided that these recession functions have the appropriate convexity and concavity properties, see Theorem 3.6. It develops that for the recourse saddle function (j)(x,£,r)) defined in (2.2), the associated recession functions have the required properties. Proposition 3.7 The recession function fc of the recourse function 4>(x,£,n) for fixed x € X and r, (E 0 is given by fc(x,d,n):= mm q(n)'y (3.65) K s.t. Wy = Y,{H(k)-Tkx)dk k=i y>0. Furthermore, fc(x,.,.) is sublinear-concave for fixed x. Proof. By definition, fc(x, d, n) = J i m — . For fixed x £ X and n G 0 , by the assumptions on page 19, <f)(x,£,n) is finite for all £ £ S, and thus l i m , , - ^ ^<f>(x, £, rf) = 0. Moreover, for v > 0, noting that -4>{x, ( + vd, rf) = min q(n)'y v s.t. Wy = -v (hQ - T0x) + £(#(*) - Tkx)tk + v £(%) - Tkx)dk k k y>o, the result follows by taking the limit when v —• oo. Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 75 Proposition 3.8 The recession function <f>*L of the recourse function <f>(x,£,r)) for fixed x £ X and f £ E is given by <f>*L(x,t,r):= min (Qr)'y (3.66) s.t. Wy = h(Q-T(t)x y>0. Furthermore, <f>1(x,.,.) is convex-superlinear for fixed x. Proof. Analogous to Proposition 3.7. • Consequently, applying the results in Theorems 3.4 and 3.5, the following bounds on <f>(x) are obtained: 4>(x)< max l^Pij<l>(xy^+^Saj<f>*v{x,d%ri')\ =: Qv(x) (3.67) S . t . TJ'^Pij = Y1V3PH + ET*7i*. ^ ' E ^ E ^ + E ^ and rnin J E M + E ^ (*,?V) J =: Gx(*) (3-68) s-t• sf E tti = E « V< j + E d % e*E7ft = E«,"7i* + E ^ where the feasible set of discrete measures C is defined in (3.31), and <f>^ and <f>*L are defined in (3.65) and (3.66), respectively. Furthermore, the upper bounding function Qu(x) is convex polyhedral in rc E X , which can be proven analogous to the proof of Proposition 2.26 in Chapter 2. Hence, the upper bounding formulation Z* < Z* := min {c'x + Qu(x)} (3.69) Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 76 can be solved as a linear program. However, QL(x) may fail to be convex, a property that is desirable for the efficient solution of the lower bounding approximation. Following similar steps as in Proposition 2.27, we obtain below a lower bound QL(x) which is convex polyhedral in x, implying that the lower bounding approximation Z* > Z*L := min \c'x + QL(x)\ (3.70) can be solved as a linear program. Propos i t ion 3.9 £(*) > QL(X) > QL{x) where QL(x):= min ^q(vj)'yj+ ^(Qri)'yi (3.72) j t s.t. £ w = k(i)-r«> j £vjWy* + ^ rjWy* = h(M(l)) - T{M(l))x + (feo - T0x)(fj, - 1), V/ j t y j , y * > 0 . Proof . Follows analogous to that of Proposition 2.27. 
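To see the recession function of Proposition 3.7 at work, here is a minimal numerical sketch for a small fixed-recourse instance; the matrices and coefficients are illustrative choices patterned on Example 2.22 and are not meant as a definitive implementation. The difference quotient [phi(x, xi + nu*d, eta) - phi(x, xi, eta)] / nu is computed for growing nu and compared with the value of the linear program in (3.65).

import numpy as np
from scipy.optimize import linprog

# Illustrative fixed-recourse data (univariate xi and eta), patterned on
# Example 2.22:  q(eta)'y with  q(eta) = (2*eta, 3*eta, 1),
#                h(xi) = h0 + h1*xi,   T(xi) = T0 + T1*xi.
W = np.array([[2.0, 1.0, -3.0],
              [-1.0, 3.0, 1.0]])
h0, h1 = np.array([2.0, 4.0]), np.array([3.0, 2.0])
T0 = np.array([[0.0, 0.0], [1.0, 0.0]])
T1 = np.array([[6.0, -4.0], [0.0, 3.0]])

def q(eta):
    return np.array([2.0 * eta, 3.0 * eta, 1.0])

def phi(x, xi, eta):
    """Recourse function phi(x, xi, eta): min q(eta)'y s.t. Wy = h(xi)-T(xi)x, y>=0."""
    b = h0 + h1 * xi - (T0 + T1 * xi) @ x
    return linprog(q(eta), A_eq=W, b_eq=b).fun

def phi_recession(x, d, eta):
    """Recession function (3.65): the same LP with only the directional
    right-hand side (h1 - T1 x) d (here K = 1)."""
    b = (h1 - T1 @ x) * d
    return linprog(q(eta), A_eq=W, b_eq=b).fun

x, xi0, eta, d = np.array([0.5, 0.0]), 0.5, 0.5, 1.0
rec = phi_recession(x, d, eta)
print("LP value of (3.65):", round(rec, 4))
for nu in (1.0, 10.0, 100.0, 1000.0):
    quotient = (phi(x, xi0 + nu * d, eta) - phi(x, xi0, eta)) / nu
    print(f"nu = {nu:7.1f}   difference quotient = {quotient:.4f}")
# The difference quotients increase toward the LP value, as Proposition 3.7
# asserts.

Because the recourse value function is piecewise linear in xi, the quotient approaches the recession value as nu grows; this is the quantity that enters the upper bound (3.67) through the terms involving the recession function evaluated at the directions d^s.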
• 3.6 A n Order Cone A p p r o x i m a t i o n In the previous section, it was shown that the upper bounding strategy when applied to stochastic programs yields an upper bound that is convex in the first stage decisions, while the lower bound so derived lacks this property. In order to avoid nonconvex opti-mization associated with this lower bound, one may obtain a lower bound such as that in (3.72). The major difficulty is that although the approximating distributions are discrete, (3.71) Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 77 they are only available as solutions to linear programs. This implies that the resulting bounding formulations for the stochastic program may not have a ready-made decompos-able structure such as the dual block angularity. An exploitable structure is important because, as the number of dimensions increases, the sizes of the upper and lower bounding linear programs may become considerably large. We show that it is possible to deter-mine the approximating discrete probability distributions trivially, i.e., without solving linear programs, independently of the first stage decisions or the functional form </>, by restricting the shape of the domain of the random variables. With bounded domains, this happens to be the case when 3 and 17 are simplices, as remarked in Chapter 2, a procedure which results in Frauendorfer's [35] bounds. In the unbounded case, it develops that if one utilizes order cones (i.e., simplicial cones) as domains, the approximating discrete measures in Theorems 3.4 and 3.5 are uniquely determined independently of the recourse function </>, the upper and lower bounding functions $ y and $ i 5 or the first stage decisions x. Furthermore, such a scheme has the desirable features that the bounds are convex polyhedral in the first stage de-cisions and also that the resulting bounding formulations of the stochastic program are amenable to large-scale LP decomposition techniques, such as the L-shaped method, see Van Slyke and Wets [84]. D e f i n i t i o n 3.10 An order cone in 3ftn is a convex polyhedral cone defined by the inter-section of n linearly independent (n — \)-dimensional hyperplanes. An order cone in 3ft™ is n-dimensional, and moreover, it is completely determined by a single "vertex" and n number of linearly independent extreme directions, emanating from the vertex. Consequently, any point that belongs to this order cone can be represented by a unique nonnegative linear combination of the n extremal directions, say d* (i = l , . . . , n ) , together with the vertex, say u. That is, the matrix V £ 3 f t n X n constructed Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 78 with columns 2?(,-j = dx for i = 1,. . . , n, is nonsingular, the inverse being denoted by V'1. Thus, any point u> in this order cone has the (unique) representation u> = u + E ! L i A,(u>)d' where A;(u>) = (P - 1)W(a> — u). Using the latter property, we simphfy the bounds derived in the preceding sections by containing the domains in order cones. Define <fc and <j>*L by (3.65) and (3.66), respectively. Notice that the domains H(C 3?K) and 0 ( C $tL) under consideration (since they are not the entire spaces due to (A6)) can be contained in some order cone in %tK and 3ftL, respectively, herein denoted as E and 0 . That is, ECE :=cone{u,d1,...,dK} (3.73) and 0 C 0 := cone{u,r 1 , . - - , r Z '} . (3.74) Construct the (nonsingular) matrices V(E $tKxK) with columns dx,...,dK and 7C(G dtLxL) with columns r 1 , . . . , r L . 
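As a concrete illustration of the representation omega = u + sum_i lambda_i(omega) d^i with lambda(omega) = V^{-1}(omega - u), the short sketch below (not from the thesis, with arbitrary data) computes the coordinates of a point in an order cone and tests membership by their nonnegativity.

    # Coordinates of a point w in the order cone with vertex u and extreme
    # directions given by the columns of D; w lies in the cone exactly when
    # all coordinates are nonnegative.  Purely illustrative data.
    import numpy as np

    def cone_coordinates(u, D, w):
        lam = np.linalg.solve(D, w - u)          # lambda(w) = D^{-1} (w - u)
        return lam, bool(np.all(lam >= -1e-12))

    u = np.array([0.0, 0.0])
    D = np.array([[1.0, 1.0],                    # columns d^1, d^2
                  [0.0, 1.0]])
    lam, inside = cone_coordinates(u, D, np.array([1.5, 0.5]))
    print(lam, inside)                           # [1.  0.5] True

The weights that appear in the bounds derived next amount to the same single linear solve with omega replaced by the first moment of the random vector.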
Also construct the (K x L)-dimensional matrix M con-taining the cross moments mu for fc = 1,. . . , K and I = 1,. . . , L. Then applying the upper and lower bounding strategies in Theorems 3.4 and 3.5, one obtains Proposition 3.11 4>(x) < ^ . u . r D + Sto^jWK'-u)] (3.75) s=l and fa) > K z ^ + j r ^ l y r ^ i r j - v ) ] (3.76) t=l where (f>s(x) and <f>t(x) are defined by _ \ K (*, d% {v-1)(]Ht_u) [{M> - rju>)(V^)[s)]) if{V-^)u^{V-^)( Ys\X) .— ^ 0 otherwise (3.77) Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 79 and <f>t(x) := < (3.78) otherwise. Proof. Wi th domains H and 0 , the feasible set C in (3.31) is determined by £ < / • * . = £ - « , (3.79) 3 £ r * 7 t = (3.80) t and £ « f c r | 7 T + £ M £ 6 S + ££4 r;^t = m ^ - "*u<- (3.81) t s s t Upon simplification, (3.79)-(3.81) yield £ £ d'krfBst = mkl + ukvi - ukfji - vi(k s t and £ r]0st = ( X r i ) W [ M ( / ) + vw - fj,u - v,t], VZ = 1,.. . , L. t Note that (3.79) implies that S, = ( P - ^ M ^ - u ) . Moreover, noting (3.38), 771 = v+(fj-v) — fj and rjfS. = + £ r ? 0 , t t = ^ ( Z r 1 ) ^ - - «) + (Zr^WfAfy) + w,u - »/,u - vrf) = [ ( J t f ) « - i w u ' ] ( 2 r % ) hold, and thus, rj'6. = (M'-ijv?){V-%). Therefore the upper bound in (3.75) follows. The lower bound in (3.76) can be shown analogously. • Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 80 e Ipartitioning plane Figure 3.2: Partitioning of domains in 3ft3 by order cones. Observe that when K = 1 and L = 1, the bounds in Proposition 3.11 coincide with those in (3.45). Also it is evident from Proposition 3.11 that these upper and lower bounds are convex polyhedral in x. Moreover, one can show that when £ and 0 are order cones, then the lower bound in Proposition 3.11 coincide with that in (3.72). Toward obtaining a better approximation to <f>(x), suppose the domain O is partitioned such that the resulting partitions induced on H and 0 consist only of order cones, see Figure 3.2. This practice leaves the (only) extreme point, for example in E, unaltered throughout the partitioning procedure. Given a cell which is an order cone, a single partition of this cell by a hyperplane results in two cells each of which is an order cone. Furthermore, the corresponding T>~x matrices for the two cells can be obtained by just two pivot updates of the former 2? _ 1 matrix. However, such a decomposition of the domain does not ensure that the diameters of the cells become arbitrarily small as the partitions become finer. Consequently, one may not be assured that the bounds (theoretically) converge to the true expectation. Nevertheless, the procedure is numerically attractive Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 81 because, when applied to the stochastic program under consideration, the resulting upper and lower bounding approximations are staircase-structured linear programs and thus amenable to efficient LP solution techniques, or even parallel computation. The following example illustrates these bounds. 3.7 A Numerical Example Consider the two-stage stochastic linear program with complete fixed recourse: Z* = min 2x1 + 2x2 + E^v[ mm 2niyi + 3n2y2 + 2/3 ] s.t. 2xi - x 2 < 1 s.t. 
2yi + y2 - 3y 3 = 2 + 3 6 - 3(6 + 6 ) * i + 4 6 * 2 xi + x 2 < 1 -yi + 3j/ 2 + 2/3 = 4 + 2 6 - xx - "S£2x2 where the distribution of the 4-dimensional random vector is described by n i ~ E X P ( l ) , f i l n i - E X P f^-(3.82) 772 ~ E X P ( 1 ) , 6 I " 2 ~ E X P Q - ) (6,*?i) is independent of (6,^2) EXP(A) means the exponential distribution with parameter A and 6l"i means the ran dom variable 6 conditioned on n1. One obtains the first moments f = 6 = m = fj2 = 1 and the cross moments mn = 2 = m 2 2 and m\2 — 1 = m2\. Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 82 Defining ^x( x »£> r ) : = ^ 2 r i y i + 3r2j/2 s.t. 2j/x + y2 - 3t/3 = 2 + 36 - 3(6 + 6)x x + 4&x 2 -yi + 3j/2 + J/3 = 4 + 26 - x i - 36x 2 yuy2,V3 > 0 and ^* (x, d, n) := min 2?7Xt/i + 3n2y2 + Jte s.t. 2yx + y2 — 3r/3 = 3di - 3(dx + d 2)xi + 4dix 2 -yi + 3j/2 + y 3 = 2d2 - 3d 2x 2 yi , j /2 ,y 3 >0, one obtains, for the expectation of the recourse function <^ (x, 6> 6, Vii Vi), * n e upper bounding inequality 4>(x) < <f>(x, 0,0,1,1) + &(x, 1,0,2,1) 4- <£* (x, 0,1,1,2) and the lower bounding inequality ^(x) > <f>(x, 1,1,0,0) + <fL(x, 2,1,1,0) + <f>l(x, 1, 2,0,1). The above inequalities yield the bounds 9.329 < Z* < 12.136. (3.83) For example, the upper bound requires the distribution in (3.82) be approximated by a degenerate probability measure at (0,0,1,1) and two nonnegative measures, each having unit weights, along the rays (0,0,2, l) + i/(l, 0,0,0) and (0,0,1,2) + i/(0,1,0,0), for v > 0. Since random variables have pairwise independence, see (3.82), one may use (3.55) and (3.56) to compute valid bounds. However, as can be shown quite easily, under Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 83 0 ® © © ® 7?, Figure 3.3: Partition of the domains in the example pairwise independence and domains as order cones which are entire orthants, the bounds in Proposition 3.11 coincide with those in (3.55) and (3.56). Partitioning: The above bounds may be improved by partitioning the domain of the random vector. We use an order-cone partition as discussed before. Noting that both S and 0 are positive orthants in (the two-dimensional euclidean space), we induce a partition on 17 = S X 0 by the lines 6 ~ 6 = " on S and r)\ — V2 = 0 on 0. See Figure 3.3 for an illustration. Consequently, the cells of the partition are also order-cones and thus, Proposition 3.11 can be used again to compute sharper bounds on Z*. For bounding computations, one needs the conditional moments given in Table 3.1. Applying the bounds in Proposition 3.11, the following upper and lower bounding inequalities are obtained: 4>(x) < 0.375<£(x, 0,0,1.554,0.445)+ 0.125<£(x, 0,0,0.664,1.332) + 0.125<£(x, 0,0,1.332,0.664) + 0.375<£(x, 0,0,0.445,1.554) + 0.582<£* (x, 1,0,2.394,0.589) + 0.115<£* (x, 1,1,2.098,0.870) Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 84 Cell prob. 
6 6 *}2 mn m i 2 m 2 i m 2 2 (1,3) 0.375 1.860 0.307 1.554 0.445 4.362 1.181 0.644 0.267 (1,4) 0.125 1.080 0.412 0.664 1.332 1.200 2.052 0.452 0.800 (2,3) 0.125 0.412 1.080 1.332 0.664 0.800 0.452 2.052 1.200 (2,4) 0.375 0.307 1.860 0.445 1.554 0.267 0.644 1.181 4.362 Table 3.1: Conditional first and cross moments + 0.084<^(x, 1,0,1.120,1.874) + 0.052<^(x, 1,1,1.097,1.942) + 0.052#;(x, 1,1,1.942,1.097) + 0.084<^(x, 0,1,1.874,1.120) + 0.115<£* (x, 1,1,2.098,0.870) + 0.582<£* (x, 0,1,0.589,2.394) and 4>{x) < 0.375<£(x, 1.860,0.307,0,0) + 0.125< (^x, 1.080,0.412,0,0) + 0.125<£(x, 0.412,1.080,0,0) + 0.375<£(x, 0.307,1.860,0,0) + 0.416<£* (x, 2.868,0.340,1,0) + 0.167<^(x, 2.654,0.600,1,1) + 0.083^(x, 1.807,0.681,1,1) + 0.084^* (x, 1.275,0.521,0,1) + 0.084<£* (x, 0.521,1.275,1,0) + 0.083^ (x, 0.681,1.807,1,1) + 0.167^(x, 0.600,2.654,1,1) + 0.416<£* (x, 0.340,2.868,0,1). The upper (UB) and lower (LB) bounds and the approximating upper and lower bounding solutions are summarized in Table 3.2. Relative gap of bounds is denned by U ^ g L B X100%. Observe that the first stage solution x = (0.5,0) and the (mid-point) estimate for Z*, which is 10.890, is within 0.2% of the optimal. Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 85 Domain XL Xu L B U B Relative gap No partition 0.5 0 0.5 0 9.329 12.136 30.09% First partition 0.5 0 0.5 0 10.872 10.908 0.33% Table 3.2: Summary of bounds 3.8 D i s c u s s i o n As indicated in Section 2.8 of Chapter 2, if the domain of the random variables is com-pact, then, as long as the partitioning strategies ensure that the approximating prob-ability measures weakly converge to the original measure, the approximating functions epiconverge to the original expected recourse function. Consequently, one is guaranteed of convergence of approximate x solutions to a true optimal solution of the stochastic program. When the domains are unbounded, to ensure epiconvergence of the approximating sequence of expectation functionals determined by a set of probability measures {Pu} weakly converging to the true probability measure Ptr, one requires additional condi-tions. See Theorem 2.8 of Birge and Wets [8]. In contrast, Wang [86] showed that C2-convergence of the approximating probability measures to the original measure is suf-ficient to ensure epiconvergence of the approximating functions to the expected recourse function. The latter reference also discusses possible schemes to ensure £2-convergence of probability measures. However, in our bounding scheme, since the approximation is provided by a discrete probability measure along with discrete nonnegative measures, it is not clear if these convergence results apply directly. Finally, extension of the bounds to multiperiod stochastic linear programs is almost Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 86 straightforward under intertemporal independence, i.e., the random vectors are indepen-dent across time periods. What is obtained then is a multiperiod model with random components having a discrete set of outcomes. The resulting (approximate) problem can be cast as a large-scale linear program having a dual block angular structure. Numerical solution of such problems is currently an active area of research. Wi th approximat-ing discrete points (of the domain) being considered in an event-tree future, the nested decomposition approach based on the L-shaped method of Van Slyke and Wets [84] is ap-plicable, see Gassmann [39] for an implementation. 
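To make the dual block angular structure referred to above concrete, the sketch below assembles and solves a deterministic equivalent for the simplest (two-stage) case in which the random components have been discretized to a few outcomes. It is only an illustration: the matrices, probabilities, and the use of a dense constraint matrix (rather than the sparse, decomposed form an L-shaped implementation would exploit) are choices made here, not data from the thesis.

    # Deterministic equivalent with dual block angular structure:
    #   min c'x + sum_s p_s q_s' y_s
    #   s.t. A x = b,  T_s x + W y_s = h_s  (one block per outcome),  x, y_s >= 0.
    # Illustrative data only.
    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[1.0, 1.0]]); b = np.array([1.0])
    W = np.array([[2.0, 1.0, -3.0],
                  [-1.0, 3.0, 1.0]])
    c = np.array([2.0, 2.0])
    outcomes = [  # (probability, T_s, h_s, q_s)
        (0.5, np.array([[3.0, -4.0], [1.0, 3.0]]), np.array([5.0, 6.0]), np.array([2.0, 3.0, 1.0])),
        (0.5, np.array([[6.0, -4.0], [1.0, 0.0]]), np.array([2.0, 4.0]), np.array([4.0, 3.0, 1.0])),
    ]

    S, m1, n1, m2, n2 = len(outcomes), A.shape[0], A.shape[1], W.shape[0], W.shape[1]
    n = n1 + S * n2
    cost = np.concatenate([c] + [p * q for p, _, _, q in outcomes])
    Aeq = np.zeros((m1 + S * m2, n)); beq = np.zeros(m1 + S * m2)
    Aeq[:m1, :n1], beq[:m1] = A, b
    for s, (p, T, h, q) in enumerate(outcomes):
        r = m1 + s * m2
        Aeq[r:r + m2, :n1] = T                                  # coupling columns (first stage)
        Aeq[r:r + m2, n1 + s * n2: n1 + (s + 1) * n2] = W       # diagonal recourse block
        beq[r:r + m2] = h
    res = linprog(cost, A_eq=Aeq, b_eq=beq, bounds=[(0, None)] * n, method="highs")
    print(res.fun, res.x[:n1])

The staircase of identical W blocks down the diagonal is exactly the structure that the L-shaped and nested decomposition methods cited above exploit.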
Alternatively, considering a sequence of (approximating) discrete outcomes through time as a "scenario", one may consider the problem in a scenario setting. Such models may be solved via the scenario aggregation method of Rockafellar and Wets [79,80], known as the progressive hedging algorithm. 3.9 Appendix Proof of Lemma 3.2: We first prove the following more general result from which the result in the lemma is obtained as a special case. Lemma 3.12 Define the set S := {(s,y) : Ax + By = 6, x G X, x, y > 0} G 9 £ m + n where X C 3ftm is bounded and A, B are matrices of appropriate dimensions. Then S is bounded if and only if the subspace of 9ftn spanned by the rows of B contains a strictly positive vector. Proof. S is unbounded if and only if there exists a direction (dx, dy) G 3ftm + n such that Adx + Bdy = 0, dx,dy > 0, and (dx,dy) ^ 0. But, the boundedness of X implies that for such directions, we must have dx = 0, for if not, x° + vdx G X for all v > 0 where x° G X, thus contradicting the boundedness of X. Now using Gordon's theorem, see Zoutendijk [103, p.18], the system Bdy = 0, dy > 0, dy ^ 0 is feasible if and only if Chapter 3. SLP Problems: Unbounded Domains for Random Vectors 87 {z : B'z > 0} = 0, i.e., the row space of B does not contain a strictly positive vector, which completes the proof of Lemma 3.12. Now consider S to be the set C defined in (3.31), and the nonnegative variables x and y to be p and (6,7,9), respectively. Due to (3.27), p belongs to a compact set. If the condition 1 of Lemma 3.2 is satisfied for some fc, then considering the row k of constraints (3.25), notice that the condition in Lemma 3.12 is trivially satisfied and thus C is compact. A similar argument applies for the second condition in Lemma 3.2. • Chapter 4 Tight Bounds on Stochastic Convex Programs Consider the following specialization, referred to as a two-stage stochastic convex program (SCP), of the stochastic program introduced in (1.24)-(1.25): Z* := min {c(x) + tp(x) : gi(x) < bi, i = 1,..., m l 5 x > 0} (4.1) where 4>(x) := E^[<f>(x,()] and <l>(x,0 := min {q(y) : ft(x, y) < i = 1, • • •, m2, y > 0} . (4.2) J/fc3t"2 In (4.1)-(4.2), c: 3ft n i ->• 3ft, g4 : 3ft n i - + 3ft, q : 3ft"2 - + 3ft, /,• : 3ft"1 X 3ft"2 3ft are all finite, convex, differentiate functions, f is a m2-dimensional random vector with domain S, which is assumed to be a compact, convex set in 3 f t " 1 2 . The distribution function of f is denoted by F. E{[.] denotes mathematical expectation with respect to f. f denotes the first moment of f. Theoretical aspects of the two-stage stochastic convex program have been researched at great length. See, for example, Rockafellar and Wets [74,75,76,77]. Olsen [66,67,68,69] gave a number of important theoretical results concerning optimality conditions. How-ever, knowledge about the computational procedures for these problems is considerably more sparse. When the random elements are discrete, Louveaux [62] presented a method to solve multistage stochastic programs with recourse having quadratic objective func-tions and linear inequality constraints. Solving such a program is equivalent to solving a nested sequence of piecewise quadratic programs. The latter reference also reports the application to an energy investment problem. The nested decomposition approach 88 Chapter 4. Tight Bounds on Stochastic Convex Programs 89 of O'Neil [70] for deterministic staircase convex programs was extended by Noel and Smeers [65] for the stochastic case with discrete distributions. 
Algorithms based on the Lagrangian and augmented Lagrangian function have been proposed by Rockafel-lar and Wets [78] for the linear-quadratic two-stage stochastic programs with discrete distributions. A n implementation of this technique can be found in King [59]. For modified nonlinear programming approaches to solving stochastic convex programs, see Ziemba [95]. When the underlying probability space is continuous, in order to avoid direct multi-dimensional numerical integration of the recourse function in (4.2), we follow the basic approach of the previous chapters, namely, computing upper and lower bounds for the second stage expected recourse costs. First, we examine variable and constraint aggre-gation procedures from deterministic mathematical programming to develop bounds for stochastic convex programs. Birge [3] addressed the topic of aggregation bounds for stochastic linear programming, where error bounds for Jensen's mean model were ob-tained. The approach basically followed was that proposed by Zipkin [99,100] for aggre-gation in deterministic linear programming. Using a Lagrangian approach, we examine Jensen's bound in the context of stochastic convex programs to develop lower and upper error bounds. The latter approach is similar in spirit to that used by Huberman [44] in which the results of Zipkin were extended to deterministic convex programming. For a state-of-the-art survey on aggregation in optimization, see Rogers, et al [81]. One can construct examples of stochastic convex programs for which Jensen's bound is arbitrarily poor while the aggregation bound remains tighter. This motivates us to address the issue of the best lower bound obtainable for the SCP using only first moments. It also becomes possible to construct cheap upper bounds to the problem using a tightened mean model. The bounds are applied to multiperiod stochastic convex programs with recourse, under the case of intertemporal independence, i.e., the random elements are Chapter 4. Tight Bounds on Stochastic Convex Programs 90 independent across time periods. In the presence of intertemporal dependence, the notion of time-stage aggregation is used to develop bounds. Notation is introduced as needed. 4.1 Preliminaries A vector x in 3ftni is a feasible solution to the SCP in (4.1)-(4.2) if it satisfies the first stage constraints in (4.1) and if the second stage problem in (4.2), given this x, is feasible with probability one. Assume that SCP is solvable, that is, there is a feasible x* at which the minimum of SCP is attained. While the problem in (4.1)-(4.2) is dynamic in nature, there is a static formulation corresponding to this problem given by Z = min c(x) + Et[q(y(0)} (4.3) s.t. gi(x)<bi, i = l,...,m1 fi(x,y(0) ^ 6> » = l , . . . , m 2 (a.s.) x > 0, y(t) > 0 (a.s.) where a.s. means almost surely. The pertinent difference in the interpretations of the dynamic formulation (4.1)-(4.2) and the static version (4.3) is the following: In (4.1)-(4.2), one chooses an initial decision vector x and in the subsequent time period a recourse decision y will be chosen after observing the random vector f. On the contrary, (4.3) requires one to select at the outset of a recourse function f —> y(£) which will specify all decisions for every possible f. Trivially, Z > Z*. Under mild regularity conditions, certainly satisfied if the set D(£) := {y '• fi{x,y) < i = 1,... 
,m2, y > 0} is almost surely bounded for all x satisfying the first stage constraints, the static formulation (4.3) and the corresponding dynamic program (4.1)-(4.2) are equivalent, hence Z = Z*. See, for example, Rockafellar and Wets [75] for details. Moreover, x* is the solution to (4.1)-(4.2) if and only if there exists (x*, j/*(f)) which is the solution to (4.3). Chapter 4. Tight Bounds on Stochastic Convex Programs 91 A lower bound to Z* is obtained by replacing £ with £ in (4.2). Denoting this bound by JB, Z* > JB := min c(x) + q(y(0) (4.4) s.t. gi(x) < bi, i = l,...,mi fi(x,y{0) < ti, i = l,...,m2 x > 0 , y(0 > 0. That JB is a valid lower bound to Z* follows by the convexity of <j>(x,£) in £ for each x and thus applying Jensen's inequality ijj(x) > <f>(x,£). Moreover, due to Theorem 2.1, <f>(x, £) is the best lower bound to ^(x) using only the first moments. We refer to JB in (4.4) as the (usual) mean model bound to SCP. We may also view JB as an aggregation lower bound to Z* by considering the static problem (4.3) as follows. Following the definition of Birge [3] for an aggregation function, as compared to aggregation vectors proposed by Zipkin [99,100], let h(.) : S —> 3? be a nonnegative aggregation function such that fzh(£)d£ = 1. In particular, one may choose the aggregation function to be the joint density function of £, denoted by h(£) as well. Row aggregation of (4.3) leads to a relaxation of the constraints of (4.3), for any nonnegative aggregation function h. Moreover, choosing h to be the joint density function ensures that the aggregated right-hand side of the second stage constraints is the first moment of £. This is the usual technique of considering random constraints in expectation which clearly gives an upper or lower bound depending upon whether the problem is a maximization or minimization, respectively, to the given problem. Since Z = Z* holds, Z* > mmc(x) + Ei[q(y(0)) (4-5) s.t. gi(x) <bi, i - 1,... ,m x Chapter 4. Tight Bounds on Stochastic Convex Programs 92 Et[fi(x,y(t))]<ti, t = l , . . . , m 2 x > 0, y(0 > 0 > min c(x) + q(E[y({)}) (4.6) s.t. gi(x) <bi, i = l,...,m1 fi(x,E[y(0])<ti, i = l , . . . , r o 2 x > 0, £[2/(0] > 0 - JB. (4.6) follows by the convexity of q and / , , thus using Jensen's inequality, E[q(y(£))] > q(E[y(0]) a n d E[fi(x,y(£))] > fi(x,E[y(£)}). By exploring the aggregation steps linking JB and Z, we develop error bounds for JB. Since (Z* — JB) > 0 trivially holds, only strictly positive lower error bounds are of interest. 4.2 Error Bounds by Aggregation The error bounds are developed by considering the Lagrangian dual of (4.3). For non-negative multipliers p(e 3£ m i ) and A(e 3ft™2), the dual is Z = max min \c(x) + I q{y{Z))F{d£) + f>t- (9i(x) - b{) \,H>0 x,y>0 ^ JH 1712 r } + E/ s A.(0(/.Ky(0)-6)m)}- (4-7) Since weak duality holds for (4.3) and (4.7), Z* > Z. Consider the mean model in (4.4) and let its optimal primal solution be (x, j7(f)) and the optimal Lagrange multipliers be (/I, A). For notational convenience, we use y = y(f). Assume that a constraint qualification is satisfied for this problem so that the Karush-Kuhn-Tucker ( K K T ) conditions are satisfied at an optimal solution. Representing the gradient vectors with respect to x and y by V x and V y , respectively, and removing the Chapter 4. 
Tight Bounds on Stochastic Convex Programs 93 transposes of vectors for simplicity, it holds at optimality that 9i(x) <bi, i = 1,..., mi , fi(x, y) < £, i = 1,..., m 2 Pi[gi(x) - k] = 0, \i[fi{x, y) - £] = 0 // > 0, A > 0, x>0, y>0 Till T7l2 Vxc(x) + £ /i.-V,#(x) + 53 A , V x / , ( x , y) > 0 «=i «=i T712 V y ? (y ) + E ^ v v / i ( ^ J/)>0 i = l / " U m 2 \ x Vxc(x) + J2PiVx9i(x) + 5^  ^Vx/t(x, j7) - 0 ( K K T ) t=i t = i 7712 The following characterization of convexity of differentiable functions is useful in the subsequent derivations: A differentiable function h(.) : X —*• di, where X is a convex subset of St", is convex if and only if for any given fixed x E X, h(x) > h(x) + Vxh(x)'(x — x) holds for all x 6 X. Let an optimal solution of (4.3) be denoted by (x*, £/*(£))• Strong duality holds for two-stage stochastic convex programs, for example, see Rockafellar and Wets [74]. Using their Corollary B it follows that there exists an optimal solution of (4.7) with primal solution x*, £/*(£) and multipliers p*, A*(£) such that Z* = Z. However, for our purposes weak duality is sufficient. Let us use y* to denote the variable aggregation fE y*(£)F(d£). Proposition 4.1 Z* — JB > e;(x*,y*,x,y)>0 (4.8) where e1 (x*,y*,x,y) := my + t = l TO2 V x c ( x ) + 53 fii^x9i{x) + 53 A,-V x/,-(x, y) i=i x m 2 Vy?(l7) + E ^ V v / « ( S ^ ) « = 1 (4.9) Chapter 4. Tight Bounds on Stochastic Convex Programs 94 Proof. Z* > Z (by weak duality) > c(x*) + Ag(y*(fl)F(dO + 5>-(*(**) " •/- i=l + E A,- / (/.(**, y*(0) " &) W ) feasibility of //, A) «=i * /-171J + E (/«(«*> J/*) - li) (by Jensen's inequality) t=i > c(x) + Vc(x)'(x* - x) + g(y) + Va(y)'(y* - y) + E + Vp t(x)'(x* - x) - &,] + E Ai[/.(*, y) + V x / , ( x , y)'(x* - x) + V, / , (x , y)'(y* - y) - £•] = J 5 + [Vc(i) + E £.V 5 ,(x) + E AfVx/<(x, y)] x* + [V«(y) + EWyi (x ,y ) ]y ' . The last equality holds by the Karush-Kuhn-Tucker conditions (KKT) given for the problem (4.4). Since x* > 0, y* > 0, by (KKT) conditions it follows that ef > 0. • Unfortunately, the bound ej" is not computable unless the optimal solutions x*, y* are available. An easily computable lower error bound is available if one makes the following assumption which could be easily verified for certain problems, as is illustrated in the example that follows. Assumption 4.2 The constraints of problem (4-l)-(4-2) may be manipulated to imply that there exist nonzero lower bounds to x* and y*(£), i.e., x* > x1, y*(£) > y' (a.s.). Corollary 4.3 Under Assumption 4-2, a computable lower bound to (Z*—JB) is e% (x, y) Chapter 4. Tight Bounds on Stochastic Convex Programs 95 where 0 < e2 (x,y) := «=1 tri2 Vc(x) + £ fiiV9i(x) + £ A,-V,/ ; (x, y) t=i + T712 V ^ y J + ^ A i V ^ ^ y ) i = i (4.10) A necessary condition for e2 > 0 is that not both x and y are strictly positive simulta-neously. Example 4.4 Consider the two-stage stochastic convex program: Z* = min | ( x i + x 2 ) 2 + tb(xi, x2) : x\ — 2xi — x2 < 0, x x > 0, x 2 > 0} where rp(x1,x2) = £[(£(xi,x 2,6,6)] a n d <j>(xu x 2 ,fr ,6) = min y iy 2 - yi st - 2 x i - 6yi + 4y? < fr - x 2 - 2y 2 + y | < fr yi > 0, y 2 > 0 . The random variable fr is uniform in [—3,3] and fr is uniform in [—3,5]. Since 6 = 0 and f 2 = l , Jensen's lower bound JB to Z* is obtained by solving JB = min (x x + x 2 ) 2 + ym - yi st xl — 2xi — x 2 < 0 - 2 x ! - 6yx + 4y\ <0 - x 2 - 2y 2 + y\ < 1 X i > 0, x 2 > 0, yx > 0, y 2 > 0 . Chapter 4. 
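The mean model above is a small smooth program and can be handed to any standard nonlinear programming code. The following sketch (not part of the thesis) does so with a local SLSQP solver; the variable ordering (x1, x2, y1, y2) and the starting point are implementation choices, and since the solver is local the output should be read only as a numerical corroboration of the solution reported next.

    # Numerical corroboration of the mean model of Example 4.4,
    # with variables ordered (x1, x2, y1, y2).
    import numpy as np
    from scipy.optimize import minimize

    def obj(v):
        return (v[0] + v[1]) ** 2 + v[2] * v[3] - v[2]

    cons = [  # scipy's "ineq" convention is fun(v) >= 0
        {"type": "ineq", "fun": lambda v: -(v[0] ** 2 - 2 * v[0] - v[1])},
        {"type": "ineq", "fun": lambda v: -(-2 * v[0] - 6 * v[2] + 4 * v[2] ** 2)},
        {"type": "ineq", "fun": lambda v: 1.0 - (-v[1] - 2 * v[3] + v[3] ** 2)},
    ]
    res = minimize(obj, x0=[0.2, 0.0, 1.5, 0.0], method="SLSQP",
                   bounds=[(0.0, None)] * 4, constraints=cons)
    print(round(res.fun, 4), np.round(res.x, 4))
    # expected: approximately -1.526 at (0.156, 0, 1.550, 0)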
Tight Bounds on Stochastic Convex Programs 96 The solution is JB = -1.5260, xr = 0.1562, x 2 = 0, yi = 1.5504, y 2 = 0, px = 0, Ai = 0.1562, A 2 = 0. With this solution to the mean model, one obtains 2 Vc(x) + i i i Vflfi(i) + £ A,V x / . (x , y) = (0,0.3123)', «=i V?(y) + A,V y / , (x , y) = (0,1.5504)'. t=i Consider the two-stage stochastic program. The second stage problem is not feasible unless xi > | and x 2 > 2. Set xl = (|,2)' and y' = 0. Then, e2 is 0.6246 and the lower error bound is strictly positive. Hence, we have a lower bound to Z* that is tighter than Jensen's bound, using only the first moments and exploiting the problem structure. The new lower bound is JB + e2 = —0.9014, which is strictly better than the usual mean model bound JB = -1.5260. For £2 £ [ai,a2] the second stage induces the constraints x 2 > —1 — a x, hence e2 = —0.3123(1 + ai). So, could be made arbitrarily large by choosing a distribution for 6 with ai arbitrarily small such that 6 = 1- This implies that for such problems Jensen's bound can be made arbitrarily poor. However, the aggregation bound remains —1.8383 — 0.3123 ai, which improves as a a is decreased. • 4 . 2 . 1 A n A g g r e g a t e d U p p e r B o u n d A similar procedure may be used to construct an upper bound to the error term (Z* — JB) if one assumes that there exists a penalty cost vector p(£ 3?"12) for violating the second stage constraints. Suppose the decision making environment is made flexible by requiring that for some given first stage decision x, one may pick a second stage decision y to violate the second stage constraints, but will have to pay a price for such infeasibility at a linear rate of pi for the ith constraint violation. The presence of penalty terms was utilized by Birge [3] to obtain aggregation bounds in the context of stochastic linear programming. Chapter 4. Tight Bounds on Stochastic Convex Programs 97 Consequently, A,(£) < pi follows for the problem with penalty terms, due to the K K T conditions, see Rockafellar and Wets [74, Corollary B]. P ropos i t ion 4.5 Z * - J B < 4(x,g) := j> / Ifi&y) -6] *i(<%) ( 4-u) where Fi is the marginal distribution of Proof . From the dual problem (4.7), Z* = Z (by strong duality) m i 7722 < c( ' ( * ) + « ( y ) + E ^ r - *.•)+E £ wo w*>v) - ^ t=l 1=1 ~ m i m 2 -= JB + E (ft(x) - 6.) + E /_ A?(0 (/*(*, y) - 6) / W « = i « = i •/-Since gi(x) — bi < 0 and / J * > 0, A* > 0, m.2 . Z* - JB < E / ^ ^ O n i a x I f / . ^ y ) - f c ] , 0 } * W rri2  t = i Using A*(£) < p yields the result E x a m p l e 4.6 Consider the stochastic convex program used in Example 4.4 but now with penalty costs for constraint violations. Wi th violations denoted by the variables s i and s2, the problem is Z* = min | ( x i + x2)2 + ip{xi, x2) : xl — 2xi — x2 < 0, xx > 0, x2 > 0j where tp(x1,x2) = £[<£(xi, x2,6,6)] and Chapter 4. Tight Bounds on Stochastic Convex Programs 98 ^ ( x i , X 2 , 6 , 6 ) = min t/ij/2 - J/i + 3^! + 4s 2 s.t. -2xi - 6yi + Ay2 - Si < fr -x2 - 2y2 + y2-s2 < fr 2 / 1 , 2 / 2 , 5 1 , 3 2 > 0 . With £ described as in Example 4.4, since fr = 0 and fr = 1, the solution to the mean model yields the lower bound JB = —1.5260 with xi = 0.1562, x 2 = 0, t/i = 1-5504, j/2 = 0, s i = 0, s i = 0, fii = 0, Ai = 0.1562, and A 2 = 0. Hence, e+ = 3 ^ max{- f r ,0} |d f r + 4 £ m a x { - ( 2 , 0 } ^ 2 yielding 6 + = 4.5. Thus we have an upper bound for the stochastic convex program as Z* < -1.5260 + 4.5 = 2.9740. 
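The two univariate integrals behind this error term are easy to check numerically. A small sketch (again not part of the thesis) using the uniform densities of Example 4.4 follows; the helper name expected_shortfall is introduced here only for illustration.

    # Check of the penalty error bound for Example 4.6: since f1 and f2 equal
    # zero at the mean-model solution, the bound of Proposition 4.5 reduces to
    #   3 E[max(-xi1, 0)] + 4 E[max(-xi2, 0)],  xi1 ~ U[-3,3], xi2 ~ U[-3,5].
    from scipy.integrate import quad

    def expected_shortfall(a, b):
        """E[max(-xi, 0)] for xi uniform on [a, b]."""
        val, _ = quad(lambda t: max(-t, 0.0) / (b - a), a, b, points=[0.0])
        return val

    e_plus = 3 * expected_shortfall(-3.0, 3.0) + 4 * expected_shortfall(-3.0, 5.0)
    print(e_plus)        # 4.5, so Z* <= JB + e_plus = -1.5260 + 4.5 = 2.9740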
• 4.3 The Tightened Mean Model While it is true that Jensen's inequality provides a tight lower bound to the expectation of a convex function of a random vector using first moments, see Theorem 2.1, it may become poor on the overall stochastic program as demonstrated in Example 4.4. Indeed, it was shown using the aggregation error bound that Jensen's bound could be made arbitrarily poor. In this section, we attempt to understand why this is so and eventually to construct a model which provides the sharpest lower bound to the stochastic convex program using the first moments. As in Chapter 2, define the sets X1 := {x G 3ft™1 : < 0, i = 1,... , m i , x > 0}, X2 := {x G 3ft"1 : 3y > 0 such that y) < i = 1,. . . , m 2 , £ G E}, X:=XX[\X2. Therefore, (4.1)-(4.2) may equivalently be expressed as Z* = min{c(x) + (4.12) Chapter 4. Tight Bounds on Stochastic Convex Programs 99 where ip(x) := Ez[<f>(x,£)] is convex in x, which follows by the convexity of <^(x,£) for each £. (4.12) is referred to as the equivalent deterministic program of the stochastic convex program SCP. Define the set X 3 := {x : 3y > 0 such that / ,(x, y) < £ , i = 1,. . . , m 2 } . (4.13) Then, Jensen's bound on Z* has the equivalent representation given by JB= min {c{x) + 4>(x,()}. (4.14) x£Xx f]X3 That (4.14) is a lower bound to (4.12) may be seen in two ways. First, by Jensen's inequality, the objective function of (4.12) is never smaller than that of (4.14) due to convexity of <f>(x,£) in i.e., tp(x) > <j>(x,() holds. Secondly, since H is a closed convex set, X 2 C X 3 . Therefore, X C X 1 f | X 3 holds implying that the feasible region X in (4.12) is relaxed by X x n X 3 in (4.14), hence providing a lower bound. While Jensen's inequality bounds the objective term i/>(x) tightly with <f>(x,£), however, the relaxation of the feasible set makes the overall bound JB become poor in some cases, as evidenced by the aggregation bounds. In this sense, the best lower bound to Z* using first moments should be given by the following model, herein referred to as the tightened mean model: Z M : = m i n { c ( x ) + ^(x,0}. (4.15) x£X Therefore, the ability to specify the set X explicitly immediately gives us a way to compute the tightened mean model. P r o p o s i t i o n 4.7 X is a convex set. P r o o f . Since, X 1 is convex it is only necessary to show that X 2 is convex. Now X 2 = f l^eE^I where X 2 := {x | 3y > 0 such that / ,(x, y) < 6, «' = 1,.. . , m 2 }. Choose x 1 and x 2 in X] such that x 1 ^ x 2 . Let nonnegative yx,y2 be associated with this choice. For Chapter 4. Tight Bounds on Stochastic Convex Programs 100 some scalar a G [0,1], define ya = ay1 + (1 — a)y2. Thus, ya > 0 and by convexity of each / , , xa := ax1 + (1 — a)x2 G so X ^ is convex which implies the convexity of X2. • In the case of stochastic linear programming, a polyhedral characterization of the feasible set X has been given by Wets [89, Corollary 4.13]. This approach is, however, not readily applicable to stochastic convex programs, such as the SCP. Therefore, we omit the details in the latter reference. Instead, we pursue a different approach, under the following assumption. A s s u m p t i o n 4.8 The domain E of the random vector £ is a polytope. Under the Assumption 4.8, let {ye : e G V} be the set of vertices of E . Define the sets: Xe = {x : 3y>0 such that / , - ( x , y ) < uf, i = 1,... , m 2 ) for e G V. (4.16) P ropos i t ion 4.9 X2 = f| Xe. Proof . Pick any x° G C\eev Xe. 
For any e G V, since x° G X e , there exists ye > 0 such that fi(x°,ye) < vf for all i. Now pick any £ in E . By Assumption 4.1, there exists a set of nonnegative scalar multipliers ote for e £ F , adding up to 1, such that £ = Eeev aeVe:. Defining y$ = Eeev aeye, by convexity of each /,• yields / , (x° , y^) < for all i. Moreover, yt > 0. Hence, x° G X2 implying that f]e&v Xe C X2. Picking any x° G X2 implies x° G Xe since u e G E . Thus, x° G f leev^ e implying that X2 C fleev Therefore, X 2 = fleev ^ e follows. • Having expressed the set X2 explicitly, it follows that Chapter 4. Tight Bounds on Stochastic Convex Programs 101 Proposition 4.10 Under the Assumption 4-8, the tightened mean model to (4-l)-(4-2) is given by ZM= min c(x) + q(y) (4.17) s.t. gi(x) < bi, t = l , . . . , m i fi(x,y)<ti, » = l , . . . , m 2 fi(x,ye)<v?, x = l , . . . , m 2 , eeV x > 0, y > 0, y e > 0 (e £ V). If the number of extreme points of the polytope E is large, then the resulting tightened mean model would be a large convex program. For example, if E is a m2-dimensional rectangle then the number of extreme points is 2 m 2 , and thus, the size of the tightened mean model grows exponentially. This difficulty is easily overcome as follows. Partition the set of indices of the extreme points ve (e £ V) of E into two sets S and S' as follows: • e £ S and e' £ S' V < ve' • e £ S and e' £ S => ve £ ve' • eeS' and e' £ S' ve £ ve'. Proposition 4.11 X2 = f ) Xe. Proof. Straightforward following that of Proposition 4.9. • Thus, it follows that Proposition 4.12 If the domain of ( is a multidimensional interval of the form E = x j^a^a j ] , then in the {S, S'} partition of the 2m2 extreme points of 3, the set S is a sin-gleton which corresponds to the extreme point [aj,..., a^J'- Consequently, the tightened Chapter 4. Tight Bounds on Stochastic Convex Programs 102 mean model to (4.1)-(4-2) is min c{x) + q(y) (4.18) s.t. 9i(x) < h, i = l , . . . , m x fi(x,y) < £i ,m2 fi{x,y) < a?, ,m2 x > 0, y > 0 y >o. Due to Proposition 4.12, in the case of a m2-dimensional interval support, only m2 number of additional constraints will have to be introduced to obtain the tightened mean model in comparison to the usual mean model in (4.4). Moreover, one need not consider all the additional constraints simultaneously, at least when the problem is a stochastic linear program, because the additional constraints may be introduced to the usual mean model algorithmically, for example, using the L-shaped method of Van Slyke and Wets [84]. Such a decomposition scheme is not available for convex programs, to our knowledge. Example 4.13 Consider the stochastic convex program given in Example 4.4. We obtained JB = —1.5260 and the aggregation lower bound ZA = —0.9014. Since H = [—3,3] x [—3,5], it follows that S — {1} where vx = [—3, —3]'. Using Proposition 4.12, we obtain the tightened mean model bound as ZM = 4.0246 which is a substantial improvement over the usual mean model bound. • 1. If H is not convex but bounded, one could compute a fairly tight lower bound by considering a convex polyhedron U which is contained in EE. Picking U as close as Remarks: Chapter 4. Tight Bounds on Stochastic Convex Programs 103 possible to H gives the best lower bound. If the support E is convex and compact but not polyhedral, the procedure would still be to find a convex polyhedron U 2. 
On the contrary, the upper bounding approach of Gassmann and Ziemba [38], see (2.14) of Chapter 2, requires finding a convex polyhedron U which encloses E if E is not a convex polyhedron. To make the upper bound tighter one has to consider U as close as possible to S. 3. In Jensen's mean model, the solution x is not guaranteed to be feasible to the given stochastic program. However, the tightened mean model provides a lower bounding solution which is feasible to the original problem. If the SCP has (relatively) complete recourse, then ZM = JB holds and this procedure is redundant. 4.4 Upper Bounds Using First Moments One can again rely on the convexity properties of the recourse function (f>(x, £) to develop upper bounds to (4.1)-(4.2). Under the Assumption 4.8, applying Gassmann-Ziemba inequality, rp(x) < max \ ^ fa, : 53 V"P' = t> 53 P« = 1 f = : ( 4' 1 9) where < (^.,.) is defined by (4.2). The bound in (4.19) requires the solution of a linear program. Moreover, the solution of (4.19) determines a discrete probability measure, having mean fr on the vertices of E. More importantly, observe that this discrete mea-sure is dependent on the value of the vector x, the first stage decision, used in (4.19). Consequently, solution of the Gassmann-Ziemba upper bounding model, that is contained in E. One has to pick U such that it contains fr GZ := min {c(x) + il>u(x)} , (4.20) Chapter 4. Tight Bounds on Stochastic Convex Programs 104 becomes rather difficult since one would have to generate iterates x algorithmically and then evaluate r(>u(x) together with its subgradients, in an attempt to develop an iterative procedure toward solving the model in (4.20). A n exception is when E is a m2-dimensional simplex, in which case, the solution of the linear program in (4.19) is determined trivially as the barycentric coordinates of fr In the case of stochastic /mear programming, one may compute the above upper bound using the L-shaped method of Van Slyke and Wets [84] in a specialized decomposition form, known as the multicut strategy, a method proposed by Birge and Louveaux [5]. We do not discuss details of this procedure because such a decomposition method does not seem possible for the solution of the model in (4.20). Instead, we reformulate this model so that standard convex programming algorithms may be applied for solution. Furthermore, this alternative formulation allows one to compute rather cheap upper bounds to the SCP using the solution of the lower bounding problem. P ropos i t i on 4.14 Z* < GZ = min c(x) + 0 (4.21) s.t. gi(x)<bi, i = l,...,m1 fi{x, ye) < w,e, i = 1,.. •, m 2 ; e G V q{ye) + w{£-ve)-e <o, eev Proof . Denote the feasible set for pe of the maximization problem in (4.19) by V, the set of all discrete probability measures defined on the vertices of S and having mean fr Therefore, ip(x) < ^)v(x) = max p <, ep Eeev 4>(xiv6)Pe holds for all x G X, the feasible set of the SCP. The objective function of this maximization problem is mmfepeqtf) : fi(x,f) < v°, Vi ,Ve G vj . (4.22) Chapter 4. Tight Bounds on Stochastic Convex Programs 105 Using y to represent the vectors ye for all e 6 V and denoting the feasible set for y of the minimization problem in (4.22) by y(x) for some given i G l , yields ^u(x) = max min K(y,p) := ^ peq(ye) (4.23) Observe that the set V :— {p 6 3ft'v' : J2ePe = 1, Pe > 0} is compact. Moreover, the function K(y,p) defined in (4.23) is convex in y and linear in p, and hence K(.,.) is convex-concave. 
Therefore, due to Rockafellar [73, Corollary 37.3.1], the "max" and "min" in (4.23) may be interchanged to obtain ij>u(x) = mrn maxK(y,p), (4.24) yey(x) pet? hence, Z* < GZ = min fc(x) + maxif(y,p) j . (4.25) The inner maximization of (4.25) is a linear program and thus using LP duality, the result follows. • The alternative upper bounding formulation is now amenable to a variety of convex programming techniques, for example, cutting plane methods, and in particular, could even be solved through iterative solution of linear programs. Furthermore, it allows one to compute inexpensive upper bounds to Z* during the solution of the lower bounding problem as follows. Given any feasible solution of the lower bounding problem (4.17), denoted by a;0, yo, and J/Q, a feasible solution to (4.21) may be constructed by using X o , Vo along with any w, unrestricted in sign, and any 0 > maxe^v{q(yo)+w(£~vC)}• Therefore, for any w € 3ftm2, Z* < GZ < c(x0) + max\q(ye) + w{£ - ve)\. (4.26) In particular, setting w = 0, Z* < GZ < c(x0) + max e 6v{g(t/o)} follows. Chapter 4. Tight Bounds on Stochastic Convex Programs 106 stage-1 — • X\ —• stage-2 — • x2 —• • • • * T - I —* stage-T —> X? T T observe fr observe fr* Figure 4.1: Sequential decision problem under uncertainty Since any first stage solution XQ of the tightened lower bounding problem (4.17) is also feasible to the stochastic program SCP, Z* < c(xo) + rj>(x0) < C(XQ) + ijsu(xo) holds due to (4.19), which is a tighter bound than that in (4.26). Example 4.15 Consider the stochastic convex program in Example 4.4. The upper bound in (4.21) is GZ = 5.2075. The tightened mean model bound is ZM — 4.0246, see Example 4.13, compared to Jensen's bound of —1.5260. The optimal solution of the tightened mean model is x0 = [0.375,2.0]', y0 = [1.616,0]', and yg = [0.75,1.00]' for any e = 1,2,3,4. Using this solution, the upper bound in (4.26), for w = 0, is 5.6406. <£(£oX) = -0.0047, <f>(x0,v2) = -0.0049, <t>{x0,v3) = -1.9747, and </>(x0,v4) = -0.7503 where v1 = [-3, -3] ' , v2 = [3, -3] ' , v3 = [3,5]', and v4 = [-3,5]'. Thus, Vv(xo) = —0.3776 which gives the upper bound c(x0) + Vv(x0) = 5.2630. The bounds thus take the form 4.0246 < Z* < 5.2075 < 5.2630 < 5.6406. • 4.5 Multistage Stochastic Convex Programs Suppose a multi-stage extension of the two-stage stochastic convex program in (4.1)-(4.2) is considered corresponding to the sequential decision problem in Figure 4.1. Z* := min \ ci(xi) + min (c 2 (x 2 ) + ^ ^ ( m i n c 3 (x 3 ) + Chapter 4. Tight Bounds on Stochastic Convex Programs 107 +EMh,..*T-i™%1CT s.t. Gx{xx) < ^ <*T)))]} (4.27) Gt(xt-x,xt) < £ t , t = 2,...,T x t > 0, t = 1 T , . . . , j . . In (4.27), xt (E 9ftnt, = 1 , . . . ,T , are the decision vectors and the random vectors £t G 3ftmt, t = 1,...,T are defined on probability spaces (Et,pt,Ft). Assume that tih stage decisions have no effect on the probability space on which £ T , for r = t + 1,.. . , T, is defined. That is, 6 is independent of x\,..., x f _ i . The functions Gt : 3ftn , _ 1 x 3ftnt —» 3ftmt and the functions Q : 3ftnt —> 3ft are convex, differentiate, measurable on H t , and bounded. Moreover, we consider that x t is non-anticipative, namely that xt depends only on £i>. . . ,6 - i a n d x v , . . . , x t _ i . In fact, nonanticipativity is implicitly introduced by the way (?t is specified in the dynamic stochastic program (4.27). For details, see Dempster [16]. 
Therefore, in formulation (4.27) it is understood that xt = x t(xi,...,x t_i,£i>--->6)-It is possible to apply the two-stage bounding results in the preceding sections on the multistage version given in (4.27). In this section, we provide the multistage formulations of these upper and lower bounds. However, one has to be concerned with intertemporal dependence/independence of the random vectors across time periods. The multistage extensions basically follow from the value function <f>t(xi,..., x f _ a , £ 2 » . . . , £<), arising at any stage t, being convex jointly in £ 2 , • • •, where <^t(xi,...,x t_i,6,...,6) (4.28) G>+i(xT, x T + 1 ) < £ •T ) r = i , . . . , r - i Chapter 4. Tight Bounds on Stochastic Convex Programs 108 X T > 0 , T = t,...,T and xi,..., xt-i, fr,..., fr satisfy the constraints in the stages prior to stage t. 4.5.1 Under Intertemporal Independence As in the two-stage case, assume that the domain H t of the random vector fr in stage t is a polytope with extreme points v** for et £ Vt, the index set of extreme points for stage t. Any sequence of occurrence of these extreme points from stage 2 through stage t is denoted by Oj while the set of all such occurrences is denoted by Ot for t = 2 , . . . , T. Proposition 4.16 Under intertemporal independence and using only the first moments fr > • • • > fr"; a tight lower bound to the stochastic convex program in (4-27) is obtained when one solves the following large-scale deterministic convex program: T Z*> min X>0E«) (4.29) t=i s.t. G i (x i ) < h Gt(xt-i,xt) < fr, t — 2,...,T G t(xt-i,yD<"* eS e*eV;, t = 2,...,T xt,yetf>0. Proof. By repeated application of the result in Proposition 4.10 together with convexity of the value function at any stage and intertemporal independence. Observe that one needs to apply the two stage result starting from stage T backwards to obtain (4.29). • Proposition 4.17 Under intertemporal independence and using only the first moments fr? • • •, CTJ a tight upper bound to the stochastic convex program in (4-27) is obtained when one solves the following large-scale deterministic convex program: Z* < min a(xi) + 02 (4.30) Chapter 4. Tight Bounds on Stochastic Convex Programs 109 s.t. Gi(xi) < 61 Gt(x°tL?,x?)<v?, oteOu t = 2,...,T ct{x?) + w?-1{tt-v?)-e°-1+6?+1<o, oteOu t = 2,...,T x°t' > 0 where 9%?+1 = 0 (ior € Oj), and by notation x{x = X\, Q02 = 62 and w^1 = w2. Proof. Similar to that of Proposition 4.16, but now applying the result in Proposi-tion 4.14 together with convexity of the value function at any stage and intertemporal independence. • 4.5.2 Under Intertemporal Dependence If the random vectors are intertemporally dependent, it is not immediately clear whether the usual mean model bound based on Jensen's inequality, using only joint first moments 6» • • • ? £r, is even a valid lower bound to (4.27). First, we show that this indeed is a valid lower bound, which may subsequently be tightened as before. To this end, we consider approximating the general T-stage dynamic problem in (4.27) with one having a less number of time stages, a process which is herein referred to as time-stage aggregation. The idea is to aggregate a number of consecutive stages of the recourse problem to form a single representative stage. Such an aggregation procedure was considered by Kusy and Ziemba [61] in the context of stochastic linear programming with simple recourse. 
Definition 4.18 The k-aggregated version of (4-27) is the problem resulting when the last k <T stages of (4-27) are aggregated to form a single stage. The k-aggregated version of (4.27) would effectively have (T — k + 1) time stages, i.e., Zk := min \ ci(xi) + Ei2 min (c 2(x 2) + . . . + EiT_k\i2 iT_k_x (min cT-k(xT-k) xl ^ L 2 \ \ X T — Chapter 4. Tight Bounds on Stochastic Convex Programs 110 s.t. Gi(xx) < bx Gt{xt-\,xt) <£t, t = 2,...,T x t > 0 , t = l,...,T. Lemma 4.19 The optimal value of (4-31) is a lower bound to the optimal value of (4-27). Proof . A feasible decision sequence of (4.27) is, in general, of the form xi ,x 2 (x i ,4 r 2 )v • • > XT(XI, . • •, XT-I, 61 • • • 5 £r)- A feasible decision sequence of (4.31) would, in general, be of t he fo rmx i ,x 2 (x i , 6 )v • • , XT-k(xi, Z T - f c - 1 , 6 , • • • ,6r-Jfe)> XT-k+i(xi, • • •, £T-fc,6> • • •, £T), • • •, XT{XI, . . . , xr-k, 65 • • • 5 £r)- Observe the implicit difference in the feasible poli-cies for (4.27) and (4.31), in general. For example, the {£} sequence may not even be nonanticipative. But, the {x} sequence satisfies the constraints of (4.31), hence, it is feasible for (4.31). However, it is neither implied nor asserted that there does not exist an optimal policy of (4.31) that is feasible in (4.27). By feasibility of the sequence {x}, the result follows. • A n immediate consequence of Lemma 4.19 is P ropos i t ion 4.20 Even under intertemporal dependence, the usual mean model using 6) • • • >£r is a valid lower bound to (4-27). Proof . From Lemma 4.19, the (T-l)-aggregated problem given below is a valid lower bound to Z* of (4.27): Z* > ZT-i := min X l j c i ( x i ) + %,. . . ,£ T s.t. Gi (x i ) < h Gt(x t _ i ,x t ) <6 , t = 2 , . . . , r x, >0, t. = l , . . . , T . min £ cr-jt+,-(xT-fc+0 • (4.31) mm t=2 (4.32) Chapter 4. Tight Bounds on Stochastic Convex Programs 111 Since (4.32) is a two stage stochastic program as in (4.1), the usual mean model lower bound on (4.32), i.e., T ZT-I > min ^ ^ ( ^ t ) (4.33) t=i s.t. Gi (x i ) < 6i Gt(xt-i,xt) < fr, t = 2,...,T xt > 0, is a lower bound to Z * . • The bound in (4.33) may further be tightened using the approach in Section 4.3, since (4.32) is a two stage stochastic program. For example, if S t is a multidimensional interval with a° < fr < a], for t = 1,. . . , T, then a tightened first moment lower bound for (4.27) under intertemporal dependence is, T Z*> min 5>t(a;t) (4.34) s.t. G i ( x x ) < 6i Gt(xt-i,xt) < fr, t = 2, . . . , T G2(x1,y2) < a2 G t(y«_i,y0<«?, < = 3 , . . . , T «> x t ,y t >0. Nevertheless, under intertemporal dependence and using only the first moments fr,..., fr-> it does not seem possible to construct an upper bound to (4.27). The closest that we could get requires the following conditional first moments. Let the extreme points of the domain S 2 (of fr) be v22 for e2 € V2. Given v22, a vertex of the conditional domain S 3 is denoted by i ^ 3 ' 6 2 while the conditional first moment is denoted by f | 2 . Let an occurrence of such conditional vertices from stage 2 through stage t be represented by ot while all Chapter 4. Tight Bounds on Stochastic Convex Programs 112 such occurrences are denoted by 0*. Hence, u^' 0 t _ 1 represents a vertex of the conditional domain S t (the set of all such indices is denoted by Vf*'1) and £ ° t _ 1 is the conditional first moment of £t> being conditioned on the occurrence of conditional vertices Ot-\. Propos i t ion 4.21 Given the conditional first moments ot-i G Ot-i, for t = 2 , . . . 
, T, the solution of the following convex program is an upper bound to (4-27). Z* < min cl(x1) + 62 (4.35) s.t. G i (x i ) < h Gt(x°t^,x°t*) < vf°<-\ et G Vt0t~\ o t_! G O t _ i , t = 2 , . . . , T E M * ? ) + wtifr-1 - vt^-1)} -02<o xV > o. Proof . The optimal value of (4.27) is, j c i (x i ) + Ei2 min {c 2 (x 2 ) + V>3(x2) : G2(x1,x2) < 6} ] } Z* = min Gi(a;i)<6i ,xi>0 where ip3(.) i s a convex function. Applying the upper bounding result in (4.21) to this two stage stochastic program, yields Z* < min ci(xi) + 02 (4.36) s.t. Gi (x i ) < fei G2(xux?) < v?, e2eV2 c2(x?) + Mxe22) + w2{i2 - v?) - e2 < 0, e2 G V2 x i , x^2 > 0. However, V > 3 ( X ^ ) - ^ 3 K 2 = ^ 2 min{c 3(x 3) + %!>A(x3) : G 3(x22,x 3) < £ 3 } X3 >0 Chapter 4. Tight Bounds on Stochastic Convex Programs 113 st p > c 3(x° 3) + ipA{x°i) + w3(& - v?) G3{x<?,x°3>)<v?°\ Ve3eVj < max {c3(x°33) + V> 4 (*3 3 ) + - v°3*)} (4.37) for any x°33,w3 that satisfy G 3 (x2 2 , xs) < vl > XT > °-The first equality above is by the definition of \j>3 while the second inequality follows by applying the Gassmann-Ziemba inequality, as in the proof of Proposition 4.14. The inequality in (4.37) is a feasible point upper bound. Consider (4.36) and the constraints involving 92. Replacing these constraints with the inequality expression for tp3 in (4.37) Z* < min ci(xi) + 92 s.t. G x(xi) < fel G2{xux?)<v?, o2e02 G3{x°*,x%)<vf°\ o3e03 c2{x?) + c3{x°33) + w2{£2 - v?) 4- w3(& - v°33) + Mx°33) - 2^ < 0, o3 G 03  xii x2 , x3 > 0. Applying this idea repeatedly yields the result in (4.35). • Corollary 4.22 Suppose H t is a multidimensional interval with a° < fr < a] for all t. If a solution of the lower bounding model (4-34), under intertemporal dependence, is x*,y* (Vt), then an upper bound to the stochastic program (4-27) is Z* < Ci(xJ) + Ylt=2ct(y*)-Proof. Setting x°f = y*, Vo t 6 Ot, and wt = 0, for all t = 2, . . . , T , we get from (4.35) that 92 > EL2Ct(y t*). Setting 02 = EL2ct(2/t*)> the result follows since this "hat" solution is feasible in (4.35). • provides a restriction to (4.36), and thus, Chapter 4. Tight Bounds on Stochastic Convex Programs 114 Example 4.23 Consider the following 3-stage stochastic convex program: Z* := min ( i n + x12)2 + ^2(111, xX2) s.t. x\x — 2xn — Xu < 0 xu,x12 > 0 where ^ 2(^ 11, xi2) = Ei2ui22[<f>2{xxx,xi2,£21,£22)), 2^(^ 11, X i 2 , 6 i > 6 2 ) :.= min x 2 i x 2 2 - X21 + V ^ a ^ i , £22) s.t. —6x21 + 4x\x < 6 1 + 2xn -2x22 + x\2 < 6 2 + xX2 £21, £22 — 0 where ^3(^21,x22) = Ei31^32^21ti22[<f>3(x21,x22,61,62)], ^3(^21, £22 , 6 1 , 6 2 ) := ™ n ~X31 + X 3 2 s.t. 2x\x < 6 1 + 4x 2 i + 3x 2 2 - x 3 1 + x\2 < 6 2 + X 22 X31,2 32> 0 . Random components 6 j have interval marginal domains given as 6 1 £ [—3,3], 6 2 £ [—3,5], 6 1 £ [—2,4], and 6 2 £ [-2,6] with first moments 6 1 = 0 and 6 2 = 6 1 = 6 2 = 1-First, let us suppose random vectors are intertemporally independent. The usual mean model bound provides a valid lower bound as Z* > —3.4297. The tightened mean model in (4.29) gives the lower bound 2.5176. The upper bounding model in (4.30) gives Z* < 3.3215. Thus, under intertemporal independence, the sharpest bounds using first moments are 2.5176 < Z* < 3.3215. Now, suppose the random vectors are intertemporally dependent. The usual mean model bound above is still a valid lower bound. However, the tightened mean model Chapter 4. Tight Bounds on Stochastic Convex Programs 115 (4.34), under intertemporal dependence, gives a tighter bound as Z* > 2.0922. 
Upper bounding computations require conditional moments, see the model in (4.35). However, using the first moments, an upper bound is available from Corollary 4.22. Considering the lower bounding solution from the model in (4.34), i.e., x\x = 0.375, x\2 = 2.0, yJi = 0.75, 2/2*2 = 1.0, t/3X = 1.4145, and y^2 = 0, one obtains Z* < c\(x\) + c2(j/5) + c^{Vz) = 4.2258. Thus, under intertemporal dependence and using the first moments, we have 2.0922 < Z* < 4.2258. • Chapter 5 Bounds on Stochastic Convex Allocation Problems In this chapter, a stochastic convex program having a special structure is considered. Consequently, lower and upper bounds are developed on the problem which are tighter than the usual bounds based on limited moment information. The problem of interest is one of stock allocation in a two-echelon distribution system for a consumable resource. The resource is acquired by the central storage which then allocates it amongst a number of different competing activities, i.e., retailers, over two time periods when resource usage in either time period is random. Initially, a fixed quantity of the resource is available. Some or all of it is allocated at the beginning of period one to each of the activities. The initial allocation has to be made before observing the random usage. At the beginning of the second time period, after observing the random demand for resource in the first period and prior to observing the random demand in the second period, any of the resource unallocated initially will be allocated amongst the activities. Resource allocation is a commitment in the sense that resources cannot be removed from an activity prior to the second period, that is, transhipments among activities and stock returns to the central storage are not allowed. Demand that cannot be satisfied at any activity is assumed to be backordered. The problem for the case when the central storage facility acts as a transhipment point that merely performs the role of a central ordering function has been addressed by Eppen and Schrage [21], Federgruen and Zipkin [26,27,28]. The former reference con-siders the case when the central facility, upon receipt of the order, allocates it entirely 116 Chapter 5. Bounds on Stochastic Convex Allocation Problems 117 to the retailers without the redistribution possibility. Federgruen and Zipkin [28] allows redistribution of resource and a myopic allocation procedure is used to set up a dynamic programming model to approximate a multiperiod allocation model. Zipkin [102] ana-lyzed the dynamics of the imbalances of inventories at retailers in such myopic allocation procedures. Jonsson and Silver [47] considers an allocation model where redistribution of resource among retailers is allowed once in the operating cycle. Several papers have investigated the possibility of allowing the central facility to hold resource rather than just being a transhipment point. The motivation is that such practice permits one to observe the random demand over a certain period of time and thus make better informed allocation decisions so as to minimize system-wide costs or to achieve better service performance overall. Such problems have been analyzed by Jackson [45], Jackson and Muckstadt [46]. As an example, consider allocating a fixed budget between competing activities. 
The strategy of keeping a portion of it to allocate at a later date until after some demand is observed is often more practical than trying to recapture allocated funds from program directors when it appears that demand for services has been unbalanced. The model formulated in this chapter is concerned with the latter situation. We present a two-stage stochastic convex programming formulation. Although only a two-period model, it is close to that of Jonsson and Silver in that it approximates a multiperiod model where the receipt of material by the centre is sufficient to eliminate backorders throughout the system and raise all retailer-inventories. Thus, one cycle of the centre corresponds to the two periods here, with an opportunity for reallocation at one time during the depot cycle. Exploiting the special structure in the model, we de-velop new lower and upper bounding strategies. In contrast to limited moment bounds advocated in the previous chapters, the bounds in this chapter are based on approximat-ing the multidimensional integrals by a series of univariate integrals. Simple numerical Chapter 5. Bounds on Stochastic Convex Allocation Problems 118 examples show that the new bounds are tighter. For bounding techniques based on uni-variate integrals, in the case of stochastic linear programming, see Birge and Wallace [7] and Birge and Wets [10]. 5.1 M o d e l Formulation and Solution Characterization Let there be n activities and let W be the (fixed) quantity of resource available initially. Let the random usage of resource by activity i in the first period be £« (i — 1,... ,n) and represent by £ the random vector (£i,•• •, £n)'- The domain of the random vector £ is denoted by H which is a subset of 3ft n . The first moment of £ is denoted by £. E^[.] denotes mathematical expectation with respect to the true probability distribution of £. Initial allocation is denoted by x, and that of the second period by z,-, i = 1,..., n. An initial allocation vector x and a random demand vector £ in the first period incur a cost / 1(x,-, s r t). The vector of the amount of resource leftover after the demand in the first period is x — £. Suppose the random demand in the second period is denoted by n = ( » 7 i , . . . , 7/n)' with its supporting set being 0. A resource availability of r := x — £ + z and a demand of 77 in the second period incur a cost / 2 ( r , 77). We make the following assumption: Assumption 5.1 The function /*(.,£) : 3ftn —> 3ft, for fixed ( £ E , and the function f2(.,n) : 3ftn —> 3ft, for fixed rj £ 0, are continuous and convex. Furthermore, the random vectors £ and 77 have absolutely continuous distribution functions. Define the functions 5, h : 3ftn —• 3ft by . g{x) := Ettfix,®] and fc(r) := Ev]i[f2(r,r,)}. Thus, it follows from Assumption 5.1 that g(.) and h(.) are differentiable convex. The resource allocation problem may be formulated as the following two-stage stochastic Chapter 5. Bounds on Stochastic Convex Allocation Problems 119 convex program: Z* := mm {<?(*) + Et [<f>(x,0] : £ * , < W, x,- > o| (5.1) where, #z,0 := min j - £ + z) : f > = W - f> , - , z,- > ol (5.2) z I t=i i = i J For all initial feasible allocations of (5.1), the second period problem (5.2) is always feasible, and hence, the stochastic program (5.1)—(5.2) has relatively complete recourse. Despite the convex structure and linear constraints, evaluation of Z* is complicated by the multidimensional integration in (5.1) and moreover, (5.2) is not a trivial optimization either. 
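To appreciate this burden, note that a direct attack would nest one convex program inside a multidimensional expectation. The sketch below is only an illustration of that nesting (the cost function h, the demand sampler sample_xi, and all identifiers are hypothetical placeholders, and a general-purpose solver stands in for any particular algorithm): it evaluates the recourse value of (5.2) for a single demand realization and then estimates the expectation by crude Monte Carlo, one convex program per sample.

    import numpy as np
    from scipy.optimize import minimize

    def recourse_value(x, xi, W, h):
        """Sketch of the second-stage problem (5.2): reallocate the W - sum(x)
        units held at the centre after the first-period demand xi is observed."""
        n = len(x)
        budget = W - x.sum()
        z0 = np.full(n, budget / n)                       # feasible start: split evenly
        res = minimize(lambda z: h(x - xi + z), z0,
                       bounds=[(0.0, None)] * n,
                       constraints=[{"type": "eq", "fun": lambda z: z.sum() - budget}],
                       method="SLSQP")
        return res.fun

    def expected_recourse_mc(x, W, h, sample_xi, m=2000, seed=0):
        """Brute-force Monte Carlo estimate of E[phi(x, xi)]: one convex
        program per sampled demand vector."""
        rng = np.random.default_rng(seed)
        return np.mean([recourse_value(x, sample_xi(rng), W, h) for _ in range(m)])

Estimates of this kind are expensive and noisy, which is precisely why the remainder of the chapter works with computable lower and upper bounds instead.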
As a possible solution strategy, one may compute bounds for the expectation of <^(x,£) for fixed x, and then use these bounds to solve the approximated problems so derived. As long as the upper and lower bounds are not far apart, one is usually content with such approximations. For the above goal, first we characterize solutions of the second stage problem (5.2), in an attempt to provide a more explicit representation to the recourse function <^(x,£), in view of Assumption 5.1. The following is a result which is interesting in its own right. Theorem 5.2 1 For differentiable convex function h : dtn —* dt, an optimal solution y* of the convex program min h(y) (5.3) y s.t. f > = W t '=i Vi > h 1 When h is separable additive convex, the result in this theorem coincides with the ranking property discussed in Zipkin [101]. That is, allocate the resource sequentially to the activity (or activities) with highest marginal need until the resource is depleted. Chapter 5. Bounds on Stochastic Convex Allocation Problems 120 (5.4) can be expressed as y* = m a x { / i , a , } , i = 1,... ,n, where a is a feasible solution to: ,V,-&(/9*') = A, Vi = l , . . . , n J^max{/,-,a,-} = W m&x{lj,aj} ifj^i aj if j = i. Proof. Since the constraints of (5.3) are linear, the Karush-Kuhn-Tucker (KKT) con-ditions are necessary for optimality, moreover, as follows by the convexity of h(.), these conditions are sufficient as well. (KKT) conditions may be simplified so that y* is optimal to (5.3) if and only if there exists a scalar A such that V,%*)>A, Vz = l , . . . , n (y.- - U)[VMy') - A] = 0, Vt = 1,..., n "£y* = W, andy,->/,-, V». « Define the points y* and /3* as in the theorem. Noting that V,7i(y) is nondecreasing in y,-, it follows that V,%*) > V.-M/9') = A. (5.5) If (5.5) is a strict inequality, then y* > o:, and thus y* = U. Conversely, if y* > then y* = ai and thus (5.5) is held as an equality. Thus, (y*,/3l,i = l , . . . ,u) satisfies the (KKT) conditions, which completes the proof. • Consequently, one obtains the following characterization for the recourse function in (5.2). Corollary 5 . 3 For <f>(x,£) defined in (5.2), <Kx,t) = h{y*{x,t)) (5.6) where y*(x,£) := max{it- — , Qf t (x,£)} and a(x,£) is a feasible solution to: Chapter 5. Bounds on Stochastic Convex Allocation Problems 121 t=i Vf-M/3*(*,0) = A(*,0, Vi = l,...,n n n X) max{0, a,(x, £) - (xt- - fr)} = - 53 z« i=i maxlxj - fr , a,-(z,0} l / i 7 ^ * (5.7) Proof. Writing 4{x,t) = min \ h(y) : J2yi = W-J2t» Vi ^ x«' - 6, Vt = 1,... n L (5.8) apply the result in Theorem 5.2. 5.2 Lower Bounds Observe that the recourse function <^(x,£) is convex in x and £ separately as can be verified from (5.8). Consequently, one may use Jensen's inequality to derive a lower bound using only the first moments fr i.e., ES(x,t)]>JB:=mm\h(x-t + z) : = W - 5>,-, zt > 0 (5.9) t=i t=i The bound JB in (5.9) is tight in the sense that it solves the corresponding moment prob-lem involving only the first moments, see Chapter 2. Observe that JB is a deterministic convex program with linear constraints. Another easy lower bound is readily available from the recourse function description in (5.8) by dropping the inequality constraints t/t- > x, — fr for all i. Thus, < K x , 0 > m m j % ) : J> = W-c} =: *(C) (5.10) where we have defined the univariate random variable ( := E"=i £«'• Following Theorem 5.2, this lower bound may be expressed as •EMM] > Ec[*{Q] = E([h{3r(C))] (5.11) Chapter 5. 
Bounds on Stochastic Convex Allocation Problems 122 where z**(() is a feasible solution to •  V,-fc(*"(0) •= A'(C) 5>r(0 = w-c. i This lower bound involves only univariate integration with respect to the aggregate de-mand (. Moreover, the latter bound is equivalent to relaxing the nonnegativity con-straints Zi > 0 from the second stage problem (5.2), i.e., resource being freely pulled from activities at the end of the first period and then merely reallocated for the second period as if redistribution were costless. This is an idea similar in spirit to Federgruen and Zipkin [26], but now with a two period rather than a myopic multiperiod model. This procedure essentially makes the problem separable in the two time periods, thus $(C) is independent of the first stage allocation decisions x, which leads to the following redistribution lower bound RB on the stochastic convex'program in (5.1)-(5.2): Z* > RB := EcMO] + m i n 9(x) (5-12) s.t. Y.Xi<W 1=1 Xi > 0, Vi. On the other hand, exploiting the problem structure, and thus using the equivalent representation for </>(x,£) given in Corollary 5.3, one may derive an alternative lower bounding inequality, which is the discussion that follows. We will simplify this inequality to develop easily computable bounds for certain classes of problems and show that bound-ing computations require only the marginal distributional information, thus, correlations in the random components are irrelevant. That is, the multidimensional numerical inte-gration in (5.1) can be approximated by a series of univariate integrations. Moreover, we will show that the new bound is tighter than the bound JB in (5.9). Chapter 5. Bounds on Stochastic Convex Allocation Problems 123 To describe the bound, consider the recourse function representation in (5.6)-(5.7). First we introduce a restriction to the constraints in (5.7) by requiring that ct(x,£) must be independent of fr i.e., it only depends on the first stage decisions x, and thus is denoted by a*(x). Then we consider a relaxation of the constraints in (5.7) by requiring that they be satisfied only in expectation, i.e., row aggregation of the constraints where the aggregation function is the probability density function. The combined effect can be used to derive the following lower bounding inequality. Define: fa) :=Et[h(y(x,0)} where t/,(x,fr) := max{xt- — fr , ct*(x)} and a*(x) is a feasible solution to: Ei[Vih(0i{x,$]=\*{x), V i = l , . . . , r » J2 Eti [max{0, <(x) - (*,- - fr)]} = W - £ x,-«=i maxfxj - fr , OJJ(X)} if j ^ i ct*(x) i f j = i . (5.13) t=i *}(*,0 •={ (5.14) Theorem 5.4 where (5.15) i=l <T,(X) := Cov{V, / i (0 '(x,£)) , max{(xt- - fr), ar,-(x,0} ~ max{(xt- - fr),a*(x)}} , (5.16) a(x,£) is a feasible solution in (5.7), and Cov{.} denotes covariance. Proof. Consider the two points y*(x,£) in (5.6)-(5.7) and y(x,£) in (5.13)-(5.14). For these two points, h(y*(x,0)-h(y(x,0) Chapter 5. Bounds on Stochastic Convex Allocation Problems 124 > Vh(y(x,0)'(y"(^0-y(x,0) = Yl^My(xiO) [max{aj,- - &,a , (x ,£)} - max{x, - £,,a*(x)}] »=i > E V^(^'(x,0) [max{x,- - &,a,(x,0} - max{x, - fc, a*(x)}]. (5.17) »=i The first inequality follows by the characterization of convexity for differentiable func-tions. To see the last inequality in (5.17), observe that due to convexity of h and definition of the two points of concern, V,/i(2/(x, £)) > V,/t(0*(x, £)) holds where the latter equality is held only if a*(x) > x,- — Moreover, o:*(x) < x, — implies that 6i(x,£) := max{x,-— «i(x, £)} — m a x { z f — S5M «*(x)} > 0. 
Thus, the inequahty in (5.17) follows. Using the definition of 6j(x,£) given above, integrating the inequality in (5.17), Edh(y*(x,0)-Hy(*,0)} > ££[V,/i(0*(x,0)fc(*,0] = i2E[Vih(0\x,O)}E[8t(x,O} + f;Cov{vih(9i(x,O) ,*(*,0j t=i i=i = 01+ ! > ( * ) • (5-18) .=1 1=1 Observing that E " = 1 £[<!>,-(x, £)] = 0 holds due to constraints in (5.7) and (5.14), (5.18) yields the result. • Unfortunately, the lower bound in the above theorem is difficult to evaluate since it requires a solution to the problem (5.6)-(5.7). However, the above general result can be used to compute lower bounds in certain special cases, that is, by restricting the func-tional form of h. For example, let h be additively separable. Additive separability of the objective terms is quite common in most applications. The previous literature in resource allocation, referenced at the beginning of the chapter, have considered this case. This means that the individual convex costs at any activity (or retailer), comprising inventory Chapter 5. Bounds on Stochastic Convex Allocation Problems 125 holding and shortage/backorder costs, are summable when the system is considered as a whole. The latter references utilized one or both of the assumptions that the probability distributions are normal and the random variables are independent across different re-tailers, in an attempt to make the analysis of the problem tractable. However, the lower bound that follows does not require any such assumptions. In the remainder of the chapter, the following notation is used. For a differentiable function / : 3ft —> 3ft, the first derivative of /(.) is denoted by / ' ( . ) . Coro l l a ry 5 . 5 Suppose the convex function h(.) in (5.2) is additively separable, i.e., h(y) = 2lLihi(yi). Then, E[4>(x,0] > ^(*):=E^&N»(*,6))] (5-19) .=1 where y(x,£i) := max{x; — £, , a*(x)} and cx*(x) is a feasible solution to: h^(x)) = X'(x), V i = l , . . . , n J2 Eit [max{0, a?(x) - (x< - ii)]} =W - f > . i=i »=i (5.20) Proof . 2 Observe that due to additive separability in h(.), when applying the result in Theorem 5.4, yields V,7i(0'(x,£)) = /^(a*(x)), V i , and thus <7,(x) = 0 holds for all i. Hence, the result follows. • 5 . 3 Tightness of the Lower B o u n d It was shown in Chapter 2 that using only the first moments, the "best" lower bound to the expectation of a convex function is provided by Jensen's inequality. In this section, we compare the new lower bound in (5.19) to the Jensen lower bound in an attempt to attach a measure of tightness to the new lower bound. It is difficult to handle the case of 2See Atkins, Edirisinghe, and Iyogun [1] for a slightly different proof of this result. Chapter 5. Bounds on Stochastic Convex Allocation Problems 126 general convex h, i.e., a comparison of the bound in Theorem 5.4 and the Jensen bound JB in (5.9) is tedious. We prove tightness results for the special case of separable additive objective functions. In the additive separable case, the new lower bound in Corollary 5.5 is never inferior to the Jensen bound. To prove this result, we need the following lemma. The notation in the lemma is limited to itself and not related to that outside the lemma. L e m m a 5.6 Let hi : 3ft —» 3ft (i = l , . . . , n ) be differentiable convex functions, first derivatives being denoted by h\. 
For fixed constants ai (Vi) and b, defining Zi := min { V hi(xi) : V x , - = b, x,- - maxjy,-, a,} = 0, fe-(j/,) - A = 0, V i > (5.21) and { n n "\ y2hi(xi) : V i ( = b, Xi - max{j/,-,a,} > 0, fcj(y,-) - A = 0, V i > , i=i f = i J (5.22) Zi = Z2 holds. P r o o f . If Et'a« > then both (5.21) and (5.22) are infeasible, and Zi = oo = Z2 trivially holds. For the remainder of the proof, assume that Et a% ^ 0 holds. Since (5.22) is a relaxation to (5.21), Z2 < Zi. We will show that Zx < Z2 holds as well. Let an optimum solution of (5.21) be denoted by (x*,t/*, A*). Also let any feasible solution of (5.22) be (x,y, A). By convexity of h(x) := E l = i ^t(xi), considering the two points x* and x, h{x)-h(x*) > EVi%*)(xi-i*) t=i n = £ K (max(2/*> a»)) _ max{2/«*> • i=l Chapter 5. Bounds on Stochastic Convex Allocation Problems 127 Now partition the index set {1 , . . . , n) into I\ and I2 such that h ••= {i • y* > a,} and I2 := {i : y* < a,}. Thus, h(x)-h(x*) > Y - y.*) + £ - «.•) « e / i ieh > Y X*(x< - y^) + H X*(X< - °0 i€h i£h = A* YXi+Yi Xi jeh ieh = \*b-\*b = 0. - A * t'6/i ieh The inequality above follows by the nonnegativity of the terms (£,• — a;) and by /i{(a,-) > = A* for i G / 2 - Note that the latter inequality holds since h\ is nondecreasing, which is due to the convexity of hi. Therefore, h(x) > h(x*) = Zx implying that Z2 > Z\. • Theorem 5.7 Under additive separability on the convex function h, <j>L{,x) > JB holds where 4>L(X) is the lower bound in (5.19)-(5.20) and JB is the Jensen bound in (5.9). Proof. Rewriting, 4>L{X) = min YEZi [^(rnax{x, - ti,a*(x)})] a*(x),X*(x) { = l s.t. Y En - & = w - £ & (5.23) i = l i = l h'i(a*i(x)) = \'(x), V» = l , . . . , n . The minimization in (5.23) is fictitious, since any feasible solution of (5.19)-(5.20) is optimal for (5.23). By convexity of hi, using Jensen's inequality, <PL{?) > rnin £ ^ ( r t ) Chapter 5. Bounds on Stochastic Convex Allocation Problems 128 s.t. ] T r . = K F - X 6 ^ = Ei{ [max{x, - a*(x)}], V i K(a*(x)) = A*(x), V i > min X^ '(r«') s.t. £ r , = W - £ 6 » i r,- > max{x, — a*(x)}, V i / iK(*)) = V i . The last inequality follows by the convexity of max{x,- — £,-,or,(x)} in Now, apply the result in Lemma 5.6 to argue that the last minimization problem above is equivalent to the one with inequality constraints being considered as equalities. It only remains to observe that the resulting equality constrained minimization problem is equivalent to the Jensen bound JB in (5.9) which indeed follows from the solution characterization in Corollary 5.3. • The example that follows shows that the inequality in Theorem 5.7 can become strict, i.e., JB < (J>L{X) is possible. The question of whether the redistribution lower bound R B in (5.12) is better than <f>L(x) or JB cannot be answered, in general. Although the lower bound R B requires correlation information of the random vector £, it develops that the latter bound may become inferior to the lower bound 4>L(x), as shown in the following example. Example 5.8 Suppose a central supply position has an initial amount W of inventory available for the two periods. There are n local users. Local user i experiences a random negative exponential demand with mean /x, each period. Moreover, demand in either period is independent of each other. Each user has an initial inventory allocation of Xi > 0, leaving Chapter 5. Bounds on Stochastic Convex Allocation Problems 129 W — E« x% ^ 0 remaining at the centre for later disbursement. 
It is desired to minimize the expected unfilled demand during period one and period two. Thus,

$$g_i(x_i) = \frac{1}{\mu_i}\int_0^\infty [\xi_i - x_i]_+\, e^{-\xi_i/\mu_i}\, d\xi_i = \mu_i e^{-x_i/\mu_i}.$$

At the start of the second period, each local user has an inventory $x_i - \xi_i$, which may be negative, signifying a backorder to be filled on receipt of replenishments. Now the remaining $W - \sum_i x_i$ is distributed, depleting central inventory to zero. Those users with inventory in excess of $\alpha_i$ get nothing; the others are raised to $\alpha_i$, where the $\alpha_i$'s are defined as in (5.20). Then³

$$h_i(x_i - \xi_i + y_i) = \mu_i e^{-(x_i - \xi_i + y_i)/\mu_i}.$$

³For simplicity, we assume that each retailer is raised to an inventory level large enough to clear all backorders at the beginning of the second period.

When applying the lower bound $\phi_L(x)$ on the two-period problem, the resulting bound is denoted by LB. Jensen's bound on the two-period problem is again denoted by JB. The formulation for JB is

$$JB = \min_x\ \sum_i \left( \mu_i e^{-x_i/\mu_i} + \mu_i e^{-\max\{x_i - \mu_i,\ \mu_i\beta\}/\mu_i} \right) \quad \text{s.t.}\ \ \sum_i \max\{x_i - \mu_i,\ \mu_i\beta\} = W - M,\ \ x_i \ge 0,$$

where $M = \sum_i \mu_i$. Rewriting,

$$JB = \min\left\{ \sum_i \mu_i\, \gamma(x_i/\mu_i)\ :\ \sum_i \mu_i\, \delta(x_i/\mu_i) = W - M \right\}$$

where $\gamma(u) := e^{-u} + e^{-\delta(u)}$ and $\delta(u) := \max\{(u-1),\ \beta\}$. Although piecewise convex (above and below $\beta + 1$), $\gamma$ is not convex. However, the minimization is unconstrained for $u < \beta + 1$ and so this piece will play no role, and we have a solution satisfying $\gamma'(u)/\delta'(u) = \text{constant}$, i.e., $x_i/\mu_i = a$ (say) for all $i$. Thus,

$$JB = \min_a \left\{ M\left(e^{-a} + e^{1-a}\right)\ :\ \max\{a-1,\ \beta\} = \frac{W}{M} - 1 \right\}.$$

That is, $a = W/M$; hence $JB = M e^{-W/M}(1+e) \approx M e^{-W/M}\, 3.718$.

Turning to the lower bound LB,

$$LB = \min_x\ \sum_i \left( \mu_i e^{-x_i/\mu_i} + E_{\xi_i}\!\left[\mu_i \exp\!\big(-\max\{x_i - \xi_i,\ \alpha\mu_i\}/\mu_i\big)\right] \right)$$
$$\text{s.t.}\quad \sum_i \left\{ x_i + \int_{x_i - \alpha\mu_i}^{\infty} \big(\xi_i - (x_i - \alpha\mu_i)\big)\, \frac{e^{-\xi_i/\mu_i}}{\mu_i}\, d\xi_i \right\} = W, \qquad x_i \ge 0.$$

By the same type of argument used to calculate JB above, the minimum must have the form $x_i = \mu_i a$ for all $i$, so

$$LB = \min_a\ M\left( e^{-a} + e^{-\alpha}\!\int_{a-\alpha}^{\infty}\! e^{-x}\, dx + \int_0^{a-\alpha}\! e^{-(a-x)} e^{-x}\, dx \right) = \min_a\ M e^{-a}(2 + a - \alpha)$$
$$\text{s.t.}\quad \alpha + e^{-(a-\alpha)} = W/M, \qquad a \ge 0.$$

Remark: This is only strictly true when $W$ is sufficiently large so that the optimal value of $a - \alpha$ is nonnegative, which we assume here for convenience. Substituting yields

$$LB = \min_{a \ge 0}\ M e^{-a}\big(2 - \ln(W/M - a)\big),$$

which has a minimum at $W/M - a = 0.318$, and thus,

$$LB = M e^{-W/M}\, \frac{e^{\,W/M - a}}{W/M - a} = M e^{-W/M}\, \frac{e^{0.318}}{0.318} \approx M e^{-W/M}\, 4.32.$$

This result is rather curious, in that it implies that the inventory 'held back' at the centre for the second period, measured in units of total system demand, that is $W/M - a$, is independent of $n$, the number of users; independent of $W$, the initial amount; and independent of the $\mu_i$, the variation of mean demand amongst retailers. This result is, however, an artifact of the rather simple problem and does not generalize in any interesting way.

Next consider the redistribution bound RB. Observing that $g_i(\cdot)$ is strictly convex decreasing, an optimal initial allocation satisfies $x_i = \mu_i W/M$, and the second period minimum fractile levels satisfy $\alpha_i = \mu_i (W - \zeta)/M$, where we have defined $\zeta = \sum_i \xi_i$. Thus

$$RB = M e^{-W/M} + M\, E_\zeta\!\left[\exp\!\big(-(W - \zeta)/M\big)\right] = M e^{-W/M}\left(1 + MGF_\zeta\!\left(\tfrac{1}{M}\right)\right),$$

where $MGF_\zeta$ is the moment generating function of $\zeta$. Observe that one needs the distributional information of the aggregate demand $\zeta = \sum_i \xi_i$. For simplicity, we assume that the components of $\xi$ are independent of each other. Then $MGF_\zeta(\tfrac{1}{M}) = \prod_j MGF_{\xi_j}(\tfrac{1}{M}) = \prod_i (1 - \mu_i/M)^{-1}$. Hence $RB = M e^{-W/M}\big(1 + \prod_i (1 - \mu_i/M)^{-1}\big)$.
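The two closed-form constants are easy to confirm numerically. A minimal sketch (the variable $u$ stands for $W/M - a$, the fraction of total mean demand held back): for JB the factor is $1 + e \approx 3.718$, and for LB it is the minimum of $e^{u}(2 - \ln u)$, attained near $u \approx 0.318$.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # JB: with a = W/M the Jensen bound collapses to M e^{-W/M} (1 + e).
    print(1.0 + np.e)                                    # ~3.718

    # LB: minimise e^{u} (2 - ln u) over u = W/M - a, the amount held back.
    res = minimize_scalar(lambda u: np.exp(u) * (2.0 - np.log(u)),
                          bounds=(1e-6, 2.0), method="bounded")
    print(res.x, res.fun)                                # ~0.318 and ~4.32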
Summarising the three lower bounds above as proportions:

$$LB : RB : JB \;=\; 4.32\; :\; \Big(1 + \prod_i (1 - \mu_i/M)^{-1}\Big)\; :\; 3.718.$$

If the means $\mu_i$ are all equal,

$$LB : RB : JB \;=\; 4.32\; :\; 1 + (1 - 1/n)^{-n}\; :\; 3.718.$$

For $1 \le n \le 3$, LB < RB holds, and for all $n \ge 4$ we have LB > RB. Moreover, RB > JB holds for all $n$, with equality being attained as $n \to \infty$. •

5.4 Upper Bounds

We are concerned with developing upper bounds to $E[\phi(x,\xi)]$, $\phi(x,\xi)$ being defined by (5.2), with $h$ in its general convex form. Recall that the lower bounding strategy was based on the solution characterization in (5.6)-(5.7). Subsequently, the constraints were relaxed by considering them in expectation, and then a restriction was added by requiring that $\alpha(x,\xi)$ be deterministic, i.e., $\alpha^*(x)$. It was shown that the relaxation overrides the restriction, thus yielding a lower bound. In this section, we perform similar tricks to develop upper bounds.

Observe that the solution characterization (5.6)-(5.7) for $\phi(x,\xi)$ may equivalently be expressed as

$$\phi(x,\xi) = h(y^*(x,\xi)) \qquad (5.24)$$

where $y_i^*(x,\xi) := \max\{x_i - \xi_i,\ \alpha_i(x,\xi) - \xi_i\}$ and $\alpha(x,\xi)$ is a feasible solution to:

$$\nabla_i h(\beta^i(x,\xi)) = \lambda(x,\xi), \quad \forall i = 1,\ldots,n$$
$$\sum_{i=1}^n \max\{0,\ \alpha_i(x,\xi) - x_i\} = W - \sum_{i=1}^n x_i \qquad (5.25)$$
$$\beta_j^i(x,\xi) := \begin{cases} \max\{x_j - \xi_j,\ \alpha_j(x,\xi) - \xi_j\} & \text{if } j \ne i\\ \alpha_i(x,\xi) - \xi_i & \text{if } j = i.\end{cases}$$

To see that (5.24) holds, perform the variable transformation $\alpha(x,\xi) \leftarrow [\alpha(x,\xi) - \xi]$ on (5.6)-(5.7).

To describe the upper bound, first add the restriction that $\alpha(x,\xi)$ be nonstochastic, being of the form $\alpha^*(x)$, to (5.25). Observe that this restriction transforms the resource constraint (involving $W$) in (5.25) to a deterministic one. Next, consider the remaining constraints being satisfied in expectation. The restriction overrides the relaxation, thus providing an upper bound. To show this, define

$$\phi_U(x) := E_\xi\big[h(\bar{y}(x,\xi))\big] \qquad (5.26)$$

where $\bar{y}_i(x,\xi_i) := \max\{x_i - \xi_i,\ \alpha_i^*(x) - \xi_i\}$ and $\alpha^*(x)$ is a feasible solution to:

$$E_\xi\big[\nabla_i h(\bar{\beta}^i(x,\xi))\big] = \lambda^*(x), \quad \forall i = 1,\ldots,n$$
$$\sum_{i=1}^n \max\{0,\ \alpha_i^*(x) - x_i\} = W - \sum_{i=1}^n x_i \qquad (5.27)$$
$$\bar{\beta}_j^i(x,\xi) := \begin{cases} \max\{x_j - \xi_j,\ \alpha_j^*(x) - \xi_j\} & \text{if } j \ne i\\ \alpha_i^*(x) - \xi_i & \text{if } j = i.\end{cases}$$

Theorem 5.9 For $\phi_U(x)$ defined in (5.26)-(5.27), the upper bounding inequality $E[\phi(x,\xi)] \le \phi_U(x)$ holds.

Proof. For the two points $y^*(x,\xi)$ and $\bar{y}(x,\xi)$ in (5.24) and (5.26), respectively, due to convexity of $h(\cdot)$,

$$h(\bar{y}(x,\xi)) - h(y^*(x,\xi)) \;\ge\; \nabla h(y^*(x,\xi))'\big(\bar{y}(x,\xi) - y^*(x,\xi)\big)$$
$$= \sum_{i=1}^n \nabla_i h(y^*(x,\xi))\big[\max\{x_i - \xi_i,\ \alpha_i^*(x) - \xi_i\} - \max\{x_i - \xi_i,\ \alpha_i(x,\xi) - \xi_i\}\big]$$
$$= \sum_{i=1}^n \nabla_i h(y^*(x,\xi))\big[\max\{0,\ \alpha_i^*(x) - x_i\} - \max\{0,\ \alpha_i(x,\xi) - x_i\}\big]$$
$$\ge \sum_{i=1}^n \nabla_i h(\beta^i(x,\xi))\big[\max\{0,\ \alpha_i^*(x) - x_i\} - \max\{0,\ \alpha_i(x,\xi) - x_i\}\big]$$
$$= \lambda(x,\xi)\left[\sum_{i=1}^n \max\{0,\ \alpha_i^*(x) - x_i\} - \sum_{i=1}^n \max\{0,\ \alpha_i(x,\xi) - x_i\}\right] = 0. \qquad (5.28)$$

Integrating both sides of (5.28), the theorem is proved. •

When $h$ is separable additive, being given as $h(y) = \sum_i h_i(y_i)$, the upper bound in Theorem 5.9 takes the form:

$$\phi_U(x) = \sum_{i=1}^n E_{\xi_i}\big[h_i\big(\max\{x_i - \xi_i,\ \alpha_i^*(x) - \xi_i\}\big)\big] \qquad (5.29)$$

where

$$\sum_{i=1}^n \max\{0,\ \alpha_i^*(x) - x_i\} = W - \sum_{i=1}^n x_i.$$

The upper bounding model for the stochastic program (5.1)-(5.2) using the bound $\phi_U(x)$ in (5.26) is

$$UB := \min\left\{ g(x) + \phi_U(x)\ :\ \sum_i x_i \le W,\ x_i \ge 0,\ \forall i \right\}. \qquad (5.30)$$

Proposition 5.10 If $g(x^1) \le g(x^2)$ holds whenever $x^1 \ge x^2$, then there is an optimal solution $x^*$ of the upper bounding model in (5.30) such that $\sum_i x_i^* = W$.

Proof. For any feasible solution $x$ of (5.30), considering a feasible solution $\alpha^*(x)$ of (5.27), define $\hat{x}_j := \max\{\alpha_j^*(x),\ x_j\}$. Therefore, $\hat{x} \ge x$ holds, which yields $g(\hat{x}) \le g(x)$ due to the hypothesis in the proposition.
Moreover, noting the constraints of (5.27), we have $\phi_U(\hat{x}) = \phi_U(x)$. Moreover, $\sum_i \hat{x}_i = W$ holds. That is, from any feasible solution $x$ of (5.30), a feasible solution $\hat{x}$ can be constructed with a better objective value, hence proving the claim. •

Thus, whenever the first period (objective) function satisfies the condition in the above proposition, the upper bounding strategy $\phi_U(x)$ is to allocate all of the resource in the first period. However, when the functions of interest are costs representing inventory holding costs and backorder/shortage costs, such a property is very unlikely to be satisfied, since larger $x$'s tend to increase inventory costs while decreasing shortage costs; see the example that follows. In contrast, the function $g(\cdot)$ in Example 5.8, being the expected unfilled demand in period one, indeed satisfies the hypothesis in Proposition 5.10.

The lower bound $\phi_L(x)$ derived in the preceding section was shown to be superior to the first moment lower bound due to Jensen's inequality, when $h$ is separable additive. Similarly, we pursue the question of how good the upper bound UB is in comparison to the first moment upper bounds. As shown in Chapter 2, the best upper bound to the expectation of a convex function, using only the first moments, is obtained by applying the Gassmann-Ziemba inequality. With (compact) domain $\Xi$ having extreme points $v^k$, $k \in K$, the set of indices of the vertices, the Gassmann-Ziemba upper bound, denoted by GZ, for $E[\phi(x,\xi)]$ is

$$GZ := \max_{p}\left\{ \sum_{k \in K} \phi(x, v^k)\, p_k\ :\ \sum_{k \in K} v^k p_k = \bar{\xi},\ \ \sum_{k \in K} p_k = 1 \right\}. \qquad (5.31)$$

Observe that if $n = 1$, then $\phi_U(x) = E[\phi(x,\xi)] \le GZ$ trivially holds. Moreover, if $W = 0$, then $\phi_U(x) \le GZ$ holds for any $n$. However, the latter inequality does not seem to generalize for all $n$ and $W$. For example, consider a maximizing measure $p^*$ of (5.31), having the support points as the extreme points of $\Xi$. If the true probability distribution is $p^*$, then due to the upper bounding inequality in Theorem 5.9, we must have $GZ \le \phi_U(x)$. However, it is neither proven nor is a counter-example available to claim that $GZ = \phi_U(x)$ holds in such a case. As the example that follows indicates, the new bound $\phi_U(x)$ can easily outperform the bound GZ, with arbitrary distributions.

5.5 An Example

We illustrate the upper bound on the following hypothetical 2-stage problem. Suppose there are $n$ competing activities and let $W$ be the initial endowment of the resource at the central supply position. Activity $i$ experiences a random demand in the first period which is continuous uniform in $[0,1]$. Activity $i$, in the second period, experiences a negative exponential random demand (with mean 1), which is distributed independently of the first period usage. The first period costs are due to both carrying inventory and experiencing shortages of the resource (both charged at an average rate of 1 per unit per time period). The second period costs are due to shortages and the disposal charges for any resource leftover after the second period usage. Both are charged at an average rate of 1 per unit per time period. It is desired to find an allocation such that the total expected inventory, shortage, and disposal costs are minimized. Hence, the cost functions of interest are $g_i(x_i) = (x_i - 0.5)^2 + 0.25$ and $h_i(r_i) = 2e^{-r_i} + r_i - 1$.
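Both expressions follow from elementary integration against the uniform and exponential densities. As a quick check (a sketch only, with the unit cost rates stated above written out explicitly), one may compare them with direct quadrature:

    import numpy as np
    from scipy.integrate import quad

    def g_numeric(x):
        # E|x - xi| for xi ~ U[0,1]: holding cost on [0, x], shortage cost on [x, 1].
        return quad(lambda xi: x - xi, 0.0, x)[0] + quad(lambda xi: xi - x, x, 1.0)[0]

    def h_numeric(r):
        # E|r - eta| for eta ~ Exp(1): disposal cost on [0, r], shortage cost on [r, inf).
        disposal = quad(lambda eta: (r - eta) * np.exp(-eta), 0.0, r)[0]
        shortage = quad(lambda eta: (eta - r) * np.exp(-eta), r, np.inf)[0]
        return disposal + shortage

    for v in (0.2, 0.5, 0.9):
        assert abs(g_numeric(v) - ((v - 0.5) ** 2 + 0.25)) < 1e-8
        assert abs(h_numeric(v) - (2.0 * np.exp(-v) + v - 1.0)) < 1e-8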
Considering the upper bound UB in (5.30), since $h_i'(r_i) = -2e^{-r_i} + 1$, we obtain that $\alpha_i^*(x) = \text{constant} = \alpha$ (say) for all $i$. Observe that $E_{\xi_i}[h_i(r_i - \xi_i)] = 3.4366\, e^{-r_i} + r_i - 1.5$. Thus,

$$\phi_U(x) = \sum_{i=1}^n 3.4366\, e^{-\max\{x_i,\ \alpha\}} + W - 1.5n,$$

where

$$\sum_{i=1}^n \max\{x_i,\ \alpha\} = W,$$

from which it follows that

$$UB = \begin{cases} 3.4366\, n\, e^{-W/n} + W - 1.25n & \text{if } W/n \ge 0.5\\ n\,(W/n - 0.5)^2 + 3.4366\, n\, e^{-W/n} + W - 1.25n & \text{if } W/n < 0.5.\end{cases}$$

Turning now to GZ, notice that it is not possible to obtain a closed form solution for the GZ bound as in UB, since the weights of the approximating discrete probability distribution (defined on the vertices of the support) are dependent on the first stage decisions. The GZ bound is evaluated through the upper bounding model (4.21) in Chapter 4. To ease the calculations, we fix $n = 2$ and the results for varying $W$ are given in Table 5.1. Also, in the same table we report the two lower bounds JB and LB (discussed in the preceding section) on the problem. As evident from Table 5.1, the GZ bound is inferior to UB throughout. Moreover, the first moment lower bound JB performs poorly compared to the lower bound LB. The relative gaps as percentages, i.e., (upper bound - lower bound)/(lower bound) x 100%, for the first moment bounds and the new bounds (which use marginal distributional information), are reported in Table 5.2.

    W     0.4    0.6    0.8    1.0    1.2    1.4    1.6    1.8
    GZ    4.169  3.689  3.305  3.011  2.781  2.593  2.441  2.323
    UB    3.706  3.272  2.927  2.669  2.472  2.313  2.188  2.094
    LB    3.680  3.233  2.869  2.581  2.358  2.190  2.067  1.982
    JB    3.479  3.066  2.741  2.500  2.319  2.175  2.063  1.981

Table 5.1: Upper and lower bounds on the example.

    W                                       0.4   0.6   0.8   1.0   1.2   1.4   1.6   1.8
    relative gap (%) of 1st-moment bounds   19.8  20.3  20.6  20.4  19.9  19.2  18.3  17.3
    relative gap (%) of new bounds           0.7   1.2   2.0   3.4   4.9   5.6   5.9   5.7

Table 5.2: Relative gaps for first moment bounds and new bounds.

If it were also known that the random components $\xi_i$ are independent of each other, then one may compute a tighter upper bound than GZ utilizing the Edmundson-Madansky inequality in (2.13), denoted by EM. Moreover, under independence, the redistribution lower bound RB in (5.12) can easily be computed. These bounds are presented in Table 5.3. Notice that although these bounds have used the additional information of independence, the new bounds remain attractive.

    W     0.4    0.6    0.8    1.0    1.2    1.4    1.6    1.8
    EM    4.166  3.673  3.261  2.923  2.651  2.436  2.274  2.157
    RB    3.593  3.168  2.834  2.584  2.395  2.244  2.126  2.038

Table 5.3: Bounds under independence information.

Finally, a few remarks are in order. The first moment bounds JB and GZ, on the expectation of the recourse function $\phi(x,\xi)$, are convex in $x$, a property that is useful when devising algorithms to solve the approximated problems. However, such a property
For example, with n = 10 and H being a multidimensional rectangle, in order to compute the GZ bound, one needs to perform 2 1 0 function evaluations, a formidable task. Observe that the GZ bound as well as the bounding strategies advocated in Chapters 2 and 3, which use limited moment information, are based on discretizing the underlying true probability measure, hence, discrete approximations. On the contrary, the bound-ing strategies presented in this chapter are continuous approximations. Note that, with inventory applications, such as the one in the latter example, recourse function <f>(x,£) may not be continuously differentiate in the first stage decisions x. Thus, discrete ap-proximations may not yield a model that is solvable by (smooth) convex programming methods. On the contrary, in the case of stochastic linear programming, discrete ap-proximations are very apt because the approximated problems are then large scale L P problems which have a dual block angular structure. However, in the case of stochastic convex programs, continuous approximations ensure differentiability properties for the approximated expected recourse, and thus are amenable to standard (smooth) convex programming methods. Bibliography [1] Atkins, D.R., Edirisinghe, N.C.P . , and Iyogun, P. (1989). Bounds on the Optimal Two Period Allocation of a Single Committed Resource Under Stochastic Demand. Working Paper No. 89-MSC-026. Faculty of Commerce, U B C . [2] Beale, E . M . (1955). On Minimizing a Convex Function Subject to Linear Inequal-ities. Journal of the Royal Statistical Society 17B 173-184. [3] Birge, J.R. (1985). Aggregation Bounds in Stochastic Linear Programming. Math-ematical Programming 31 25-42. [4] Birge, J.R. personal communication. [5] Birge, J.R. and Louveaux, F . V . (1988). A Multicut Algorithm for Two-stage Stochastic Linear Programs. European Journal of Operational Research 34 384-392. [6] Birge, J.R. and Wallace, S.W. (1986). Refining Bounds for Stochastic Linear Pro-grams with Linearly Transformed Independent Random Variables. Operations Re-search Letters 5 73-77. [7] Birge, J.R. and Wallace, S.W. (1988). A Separable Piecewise Linear Upper Bound for Stochastic Linear Programs. SIAM Journal on Control and Optimization 26 725-739. [8] Birge, J.R. and Wets, R.J -B. (1986). Designing Approximation Schemes for Stochastic Optimization Problems, in particular for Stochastic Programs with Re-course. Mathematical Programming Study 27 54-102. [9] Birge, J.R. and Wets, R.J -B. (1987). Computing Bounds for Stochastic Program-ming Problems by Means of a Generalized Moment Problem. Mathematics of Op-erations Research 12 149-162. [10] Birge, J.R. and Wets, R.J -B. (1989). Sublinear Upper Bounds for Stochastic Pro-grams with Recourse. Mathematical Programming 43 131-149. [11] Charnes, A . and Cooper, W . W . (1959). Chance-constrained Programming. Man-agement Science 5 73-79. [12] Chvatal, V . (1983). Linear Programming. W . H . Freeman & Co., New York. 139 Bibliography 140 [13] Dantzig, G .B. (1955). Linear Programming Under Uncertainty. Management Sci-ence 1 197-206. [14] Dantzig, G .B. (1990). Developments in Programming under Uncertainty. Presen-tation at the T I M S / O R S A meeting in Las Vegas, May 7-9. [15] Dantzig, G .B. and Madansky, A . (1961). On the Solution of Two-stage Linear Programs Under Uncertainty. Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Vol 1, University of California Press, Los Angeles, 165-176. [16] Dempster, M . A . H . (1980). 
Introduction to Stochastic Programming, in M . A . H . Dempster (Ed.), Stochastic Programming, Academic Press, London, 3-59. [17] Dula, J .H. (1988). A n Upper Bound on the Expectation of Sublinear Functions of Multivariate Random Variables. Working Paper, Dept. of Operations Research and Engineering Management, Southern Methodist University, Dallas, Texas. [18] Edirisinghe, N .C.P . and Ziemba, W . T . (1989). Bounds for Two-stage Stochastic Programs Wi th Fixed Recourse. Working Paper No. 89-MSC-024, Faculty of Com-merce, U B C . Accepted in Mathematics of Operations Research. [19] Edirisinghe, N .C.P . and Ziemba, W . T . (1989). Tight Bounds for Stochastic Convex Programs. Working Paper No. 89-MSC-025, Faculty of Commerce, U B C . Accepted in Operations Research. [20] Edmundson, H.P. (1957). Bounds on the Expectation of a Convex Function of a Random Variable. The R A N D Corporation, P-382. [21] Eppen, G. and Schrage, L . (1981). Centralized Ordering Policies in a Multi-Warehouse System Wi th Lead Times and Random Demand, in L . B . Schwarz (Ed.), Multi-Level Production/Inventory Control Systems: Theory and Practice, North-Holland, Amsterdam, 51-69. [22] Ermoliev, Y . (1983). Stochastic Quasigradient Methods and Their Application to Systems Optimization. Stochastics 9 1-36. [23] Ermoliev, Y . (1988). Stochastic Quasigradient Methods, in Y u . Ermoliev and R.J-B. Wets (Eds.), Numerical Techniques for Stochastic Optimization, Springer-Verlag, 141-185. [24] Ermoliev, Y . and Wets, R.J -B. (1988). Numerical Techniques for Stochastic Opti-mization, Springer-Verlag. Bibliography 141 Everitt, R. and Ziemba, W . T . (1979). Two-period Stochastic Programs with Simple Recourse. Operations Research 27 485-502. Federgruen, A . and Zipkin, P. (1984). Approximations of Dynamic, Multilocation Production and Inventory Problems. Management Science 30 69-84. Federgruen, A . and Zipkin, P. (1984). Allocation Policies and Cost Approximations for Multilocation Inventory Systems. Naval Research Logistics Quarterly 31 97-129. Federgruen, A . and Zipkin, P. (1984). Computational Issues in an Infinite-Horizon, Multiechelon Inventory Model. Operations Research 32 818-836. Feller, W . (1966). An Introduction to Probability Theory and Its AppUcations, Vol II, New York, John Wiley. Ferguson, A . and Dantzig, G. (1956). The Allocation of Aircraft to Routes. A n Example of Linear Programming Under Uncertain Demand. Management Science 3 45-73. Frauendorfer, K . (1988). SLP Recourse Problems with Arbitrary Multivariate Dis-tributions - the Dependent Case. Mathematics of Operations Research 13 377-394. Frauendorfer, K . (1988). SLP Problems: Objective and Right-Hand Side Being Stochastically Dependent - Part I. to appear in: Proceedings of the 12th IMACS World Congress in Paris. Frauendorfer, K . (1988). SLP Problems: Objective and Right-hand side being Stochastically Dependent - Part II. Manuscript. IOR, University of Zurich. Frauendorfer, K . (1988). Solving SLP Recourse Problems: The Case of Stochastic Technology Matrix, RHS and Objective, to appear in: Proceedings of the 13th IFIP Conference on System Modelling and Optimization, Springer-Verlag. Frauendorfer, K . (1989). A Simplicial Approximation Scheme for Convex Two-stage Stochastic Programming Problems, presented at the Fifth International Conference on Stochastic Programming, University of Michigan, Ann Arbor, Michigan. Frauendorfer, K . (1991). 
A Barycentric Approximation Scheme for Convex Stochas-tic Optimization Problems (Algorithmic Design and Computational Results). Pre-sentation at ORSA Applied Probability Conference, Monterey, C A , January 9-11. Gale, D. , Klee, V . , and Rockafellar, R. (1968). Convex Functions and Convex Polytopes. Proceedings of the American Mathematical Society 19 867-873. Bibliography 142 [38] Gassmann, H . and Ziemba, W.T. (1986). A Tight Upper Bound for the Expec-tation of a Convex Function of a Multivariate Random Variable. Mathematical Programming Study 27 39-53. [39] Gassmann, H.I. (1990). M S L i P : A Computer Code for the Multistage Stochastic Linear Programming Problem. Mathematical Programming 47 407-423. [40] Glashoff, K . and Gustafson, S-A. (1983). Linear Optimization and Approximation. Springer-Verlag, New York Inc. [41] Henriques, D.B. (1991). A Better Way to Track Your Assets. The New York Times MONEY, March 31, p . l l . [42] Huang, C.C. , Vertinsky, I., and Ziemba, W . T . (1977). Sharp Bounds on the Value of Perfect Information. Operations Research 25 128-139. [43] Huang, C C , Ziemba, W.T. , and Ben-Tal, A . (1977). Bounds on the Expecta-tion of a Convex Function of a Random Variable: with Applications to Stochastic Programming. Operations Research 25 315-325. [44] Huberman, G. (1983). Bounds for the Aggregated Convex Programming Problem. Mathematical Programming 26 100-108. [45] Jackson, P .L. (1988). Stock Allocation in a Two-Echelon Distribution System or "What To Do Until Your Ship Comes In". Management Science 34 880-895. [46] Jackson, P .L. and Muckstadt, J .A. (1989). Risk Pooling in a Two-Period, Two-Echelon Inventory Stocking and Allocation Problem. Naval Research Logistics Quarterly 36 1-26. [47] Jonsson, H . and Silver, E . A . (1987). Analysis of a Two-Echelon Inventory Control System With Complete Redistribution. Management Science 33 215-227. [48] Ka l i , P. (1974). Approximations to Stochastic Programs With Complete Fixed Recourse. Numerische Mathematik 22 333-339. [49] K a i l , P. (1976). Stochastic Linear Programming. Springer-Verlag, Berlin. [50] K a i l , P. (1979). Computational Methods for Solving Two-stage Stochastic Lin-ear Programming Problems. Zeitschrift fur angewandte Mathematik und Physik 30 261-271. [51] Ka l i , P. (1982). Stochastic Programming. European Journal of Operational Re-search 10 125-130. r Bibliography 143 [52] Ka i l , P. (1987). Stochastic Programming with Recourse : Upper Bounds and Mo-ment Problems - A Review, to appear in J . Guddat, P .Kal l , K . Lommatzsch, M . Vlach, and K . Zimmermann (Eds.), Advances in Mathematical Optimization and Related Topics, Akademie-Verlag, Berlin. [53] Ka i l , P. (1987). Stochastic Programs with Recourse: A n Upper Bound and the Related Moment Problem. Zeitschrift fur Operations Research31 A119-A141. [54] K a i l , P. and Stoyan, D. (1982). Solving Stochastic Programming problems With Recourse Including Error Bounds. Mathematische Operationsforschung und Statis-tik, Series Optimization 13 431-447. [55] KaUberg, J .G. , White, R.W., and Ziemba, W . T . (1982). Short Term Financial Planning Under Uncertainty. Management Science 28 670-682. [56] Kao, E .P .C. and Queyranne, M . (1986). Aggregation in a Two-stage Stochastic Program for Manpower Planning in the Service Sector. TIMS Studies in the Man-agement Sciences 22 205-225. [57] Karr, A . F . (1983). Extreme Points of Certain Sets of Probability Measures, With Applications. Mathematics of Operations Research 8 74-85. [58] Kemperman, J . (1968). 
The General Moment Problem. A Geometric Approach. Annals of Mathematical Statistics 39 93-112. [59] King, A . J . (1988). A n Implementation of the Lagrangian Finite Generation Method, in Y u . Ermoliev and R.J-B. Wets (Eds.), Numerical Techniques for Stochastic Optimization, Springer-Verlag, 295-311. [60] Krein, M . and Nudelman, A . (1977). The Markov Moment Problem and Extremal Problems. Transl. Mathematical Monographs 50, American Mathematical Society, Providence. [61] Kusy, M.I . and Ziemba, W.T. (1986). A Bank Asset and Liability Management Model. Operations Research 34 356-376. [62] Louveaux, F . V . (1980). A Solution Method for Multistage Stochastic Programs with Recourse with Application to an Energy Investment Problem. Operations Re-search 28 889-902. [63] Madansky, A . (1959). Bounds on the Expectation of a Convex Function of a Mul-tivariate Random Variable. Annals of Mathematical Statistics 30 743-746. [64] Nazareth, J .L. and Wets, R.J-B. (1986). Algorithms for Stochastic Programs: The Case of Nonstochastic Tenders. Mathematical Programming Study. 28 1-28. Bibliography 144 [65] Noel, M . - C . and Smeers, Y . (1985). Nested Decomposition of Multistage Nonlinear Programs with Recourse. C O R E Working Paper No. 8505, Universite Catholique de Louvain, Louvain, Belgium. [66] Olsen, P. (1976). Multistage Stochastic Programming Wi th Recourse: The Equiv-alent Deterministic Problem. SIAM Journal on Control and Optimization 14 495-517. [67] Olsen, P. (1976). When is a Multistage Stochastic Programming Problem Well Defined?. SIAM Journal on Control and Optimization 14 518-527. [68] Olsen, P. (1976). Multistage Stochastic Programming Wi th Recourse as Mathe-matical Programming in an Lp-space. SIAM Journal on Control and Optimization 14 528-537. [69] Olsen, P. (1976). Discretizations of Multistage Stochastic Programming Problems. Mathematical Programming 6 111-124. [70] O'Neil, R.P . (1976). Nested Decomposition of Multistage Convex Programs. SIAM Journal on Control and Optimization 14 409-418. [71] Perlman, M . D . (1974). Jensen's Inequality for a Convex Vector-valued Function on an Infinite Dimensional Space. Journal of Multivariate Statistics 4 52-65. [72] Qi , L . (1986). A n Alternating Method for Stochastic Linear Programming With Simple Recourse. Mathematical Programming Study 27 183-190. [73] Rockafellar, R .T . (1970). Convex Analysis. Princeton Univ. Press, Princeton, N . J . [74] Rockafellar, R .T . and Wets, R.J-B. (1975). Stochastic Convex Programming: Kuhn-Tucker Conditions. Journal of Mathematical Economics 2 349-370. [75] Rockafellar, R .T . and Wets, R.J -B. (1976). Stochastic Convex Programming: Basic Duality. Pacific Journal of Mathematics 62 173-195. [76] Rockafellar, R .T . and Wets, R.J -B. (1976). Stochastic Convex Programming: Sin-gular Multipliers and Extended Duality. Pacific Journal of Mathematics 62 507-522. [77] Rockafellar, R .T. and Wets, R.J-B. (1976). Stochastic Convex Programming: Rel-atively Complete Recourse and Induced Feasibility. SIAM Journal on Control and Optimization 14 574-589. [78] Rockafellar, R .T. and Wets, R.J -B. (1986). A Lagrangian Finite Generation Tech-nique for Solving Linear-quadratic Problems in Stochastic Programming. Mathe-matical Programming Study 28 63-93. Bibliography 145 [79] Rockafellar, R .T . and Wets, R.J -B. (1990). A Dual Strategy for the Implementa-tion of the Aggregation Principle in Decision Making Under Uncertainty. Working Paper. [80] Rockafellar, R .T . and Wets, R.J -B. (1991). 
Scenarios and Policy Aggregation in Optimization Under Uncertainty. Mathematics of Operations Research 16 119-147. [81] Rogers, D.F. , Plante, R.D. , Wong, R.T. , and Evans, J.R. (1988). Aggregation Tech-niques and Methodology in Optimization. Working Paper No. QA-1988-010, Dept. of Quantitative Analysis and Information Systems, College of Business Adminis-tration, Univ. of Cincinnati, Ohio. (Accepted to Operations Research) [82] Stancu-Minasian, L M . and Wets, R.J -B. (1976). A Research Bibliography in Stochastic Programming, 1955-1975. Operations Research 24 1078-1119. [83] Tintner, G. (1955). Stochastic Linear Programming Wi th Application to Agricul-tural Economics, in H.A. Antosiewicz (Ed.), Proceedings of the Second Symposium in Linear Programming, Washington, 197-207. [84] Van Slyke, R. and Wets, R.J-B. (1969). L-shaped Linear Programs with Applica-tion to Optimal Control and Stochastic Optimization. SIAM Journal on Applied Mathematics 17 638-663. [85] Walkup, D .W. and Wets, R.J -B. (1967). Stochastic Programs With Recourse. SIAM Journal of Applied Mathematics 15 1299-1314. [86] Wang, J . (1986). Lipschitz Continuity of Objective Functions in Stochastic Pro-grams With Fixed Recourse and Its AppUcations. Mathematical Programming Study 27 145-152. [87] Wets, R .J -B. (1966). Programming Under Uncertainty: The Equivalent Convex Programming SIAM Journal on Applied Mathematics 14 89-105. [88] Wets, R .J -B. (1966). Programming Under Uncertainty: The Solution Set. SIAM Journal on Applied Mathematics 14 1143-1151. [89] Wets, R.J -B. (1974). Stochastic Programs with Fixed Recourse: The Equivalent Deterministic Problem. SIAM Review 16 309-339. [90] Wets, R .J -B. (1980). Convergence of Convex Functions, Variational Inequalities and Convex Optimization Problems, in R .W. Cottle, F . Giannesi, and J-L. Lious (Eds.), Variational Inequalities and Complimentary Problems, J . Wiley and Sons, New York, 375-403. Bibliography 146 [91] Wets, R .J -B. (1982). Stochastic Programming: Solution Techniques and Approx-imation Schemes, in A . Bachem, M . Grotschel, B . Korte (Eds.), Mathematical Programming: State-of-the-art, Springer-Verlag, Berlin and New York, 566-603. [92] Wets, R .J -B. (1983). Solving Stochastic Programs With Simple Recourse. Stochas-tics 10 219-242. [93] Wets, R .J -B. (1989). Stochastic Programming, in G.L . Nemhauser, A . H . G . Rinnooy Kan, and M . J . Todd (Eds.), Handbooks in Operations Research and Management Science: Optimization, Vol 1, North-Holland. [94] Wets, R .J -B. and Walkup, D.W. (1969). Stochastic Programs With Recourse: On the Continuity of the Objective. SIAM Journal of Applied Mathematics 17 98-103. [95] Ziemba, W . T . (1970). Computational Algorithms for Convex Stochastic Programs with Simple Recourse. Operations Research 18 414-431. [96] Ziemba, W . T . (1974). Stochastic Programs With Simple Recourse, in P .L. Ham-mer and G. Zoutendijk (Eds.), Mathematical Programming: Theory and Practice, North-Holland, Amsterdam, 213-274. [97] Ziemba, W.T. , Brooks-Hill, F. , and Parkan, C. (1974). Calculating Investment Portfolios When the Returns are Normally Distributed. Management Science 21 209-222. [98] Ziemba, W . T . and Butterworth, J .E. (1975). Bounds on the Value of Information in Uncertain Decision Problems. Stochastics 1 361-378. [99] Zipkin, P .H. (1980). Bounds on the Effect of Aggregating Variables in Linear Pro-grams. Operations Research 28 403-418. [100] Zipkin, P .H. (1980). Bounds for Row Aggregation in Linear Programming. 
Opera-tions Research 28 903-916. [101] Zipkin, P .H . (1980). Simple Ranking Methods for Allocation of One Resource. Management Science 26 34-43. [102] Zipkin, P .H . (1984). On the Imbalance of Inventories in Multi-Echelon Systems. Mathematics of Operations Research 9 402-423. [103] Zoutendijk, G. (1976). Mathematical Programming Methods. North-Holland. 
