The interior point method and its application to the fractional (g,f)-factor problem. Shi, Ping, 1996.

THE INTERIOR POINT METHOD AND ITS APPLICATION TO THE FRACTIONAL (g,f)-FACTOR PROBLEM

By Ping Shi
B. Eng., Beijing University of Posts and Telecommunications, 1985
M. Eng., Beijing University of Posts and Telecommunications, 1988

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF MATHEMATICS, INSTITUTE OF APPLIED MATHEMATICS

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
September 1996
© Ping Shi, 1996

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Mathematics, Institute of Applied Mathematics
The University of British Columbia
2075 Wesbrook Place
Vancouver, Canada V6T 1Z1
Date:

Abstract

This thesis contains six chapters. Chapters 1 through 5 give an exposition of interior-point methods (IPM). Chapter 1 gives a brief survey of recent developments in interior-point methods. Chapter 2 gives a detailed description of the history of the primal-dual path-following methods for linear programming. Chapter 3 introduces several predictor-corrector methods, an efficient and powerful class of interior-point methods that has received a great deal of attention in recent years. Chapter 4 discusses various efficient linear algebraic methods for solving the Newton equation system. In practice, most of the computational effort in implementations of interior-point methods is spent solving the Newton equation system, so an efficient method for solving this system can make the whole algorithm very efficient. Chapter 5 describes the presolve procedure, which is used to reduce the size of the LP problem and to make the system sparser before passing the problem to the optimizer. The presolve procedure is a very efficient adjunct to any LP solver. In Chapter 6, we apply the interior-point method to the fractional (g,f)-factor problem. We first exploit the special structure of the fractional (g,f)-factor problem and propose two IPM algorithms that take advantage of this structure. In each of the two algorithms, we use a different linear programming model of the fractional (g,f)-factor problem in order to make the corresponding algorithm more efficient. We then discuss three starting-point approaches and the strategy used to create our test problems. Finally, we use the higher-order predictor-corrector primal-dual path-following code HOPDM[45] to test the three starting-point approaches; the computational results lead to the conclusion that a starting point specific to the fractional (g,f)-factor problem saves IPM iterations and makes the IPM more efficient for solving the fractional (g,f)-factor problem.
Table of Contents

Abstract
List of Tables
List of Figures
Acknowledgement
1 Introduction to Interior Point Methods
2 Path-following Methods for Linear Programming
  2.1 Preliminaries
    2.1.1 Newton-Raphson's method for solving nonlinear equations
    2.1.2 The Karush-Kuhn-Tucker Conditions
  2.2 The Affine Scaling Direction
    2.2.1 The Primal Affine Scaling Direction
    2.2.2 The Dual Affine Scaling Direction
  2.3 The Centering Direction
    2.3.1 The Primal Centering Direction
    2.3.2 The Dual Centering Direction
  2.4 The Primal Path-following Method
  2.5 The Dual Path-following Method
  2.6 The Primal-Dual Path-following Method
  2.7 The Primal-Dual Infeasible-Interior-Point Method
3 Predictor-Corrector Methods
  3.1 Mehrotra's Predictor-Corrector Method
  3.2 Carpenter's Higher-Order Predictor-Corrector Method
  3.3 Gondzio's Multiple Centrality Corrections Method
  3.4 Starting Point
4 Solving the Newton Equations
  4.1 Direct Methods
    4.1.1 The Normal Equations Approach
    4.1.2 The Augmented System Approach
  4.2 The Adaptive Approach
5 Presolve Analysis
  5.1 Reduction of Problem Size
    5.1.1 Simple Presolve Methods
    5.1.2 Forcing and Redundant Constraints
    5.1.3 Tightening Variable Bounds and Implied Free Variables
    5.1.4 Bounds on Shadow Prices and Dominated Variables
  5.2 Improving Sparsity of the Problem
    5.2.1 Making the Matrix A Sparser
    5.2.2 Splitting Dense Columns
  5.3 Removing Linearly Dependent Rows and Columns
    5.3.1 Linearly Dependent Rows
    5.3.2 Duplicate Columns
  5.4 Postsolve
6 Solving the Fractional (g,f)-factor Problem
  6.1 The Parallel IPM for the Fractional (g,f)-factor Problem
    6.1.1 The Interior Point Algorithm
    6.1.2 Staircase Structure
    6.1.3 The Fractional (g,f)-factor Problem
  6.2 The Preconditioned Conjugate Gradient Algorithm
  6.3 Computational Implementation
    6.3.1 Starting Point
    6.3.2 Test Problems
    6.3.3 Computational Results
Bibliography

List of Tables

6.1 The preconditioned conjugate gradient algorithm
6.2 Original problem size
6.3 Iteration counts for approach 2
6.4 Iteration counts for different starting point approaches

List of Figures

6.1 The layered graph G
6.2 The staircase matrix A of the layered graph G

Acknowledgement

I would like to express my appreciation to my supervisor, Dr. R. P. Anstee, for his guidance and criticism throughout the preparation of this work. I would also like to thank Dr. S. T. McCormick for his helpful suggestions, comments and careful reading of the draft.

Chapter 1
Introduction to Interior Point Methods

Interior point methods for mathematical programming have been around at least since the sixties. In 1967, Dikin[25] introduced an affine-scaling method for linear and quadratic programming. Then Fiacco and McCormick[31] developed the IPM as a tool for nonlinear programming. Although Fiacco and McCormick noted that their proposed methods could be applied in full measure to linear programming, neither they nor any other researchers at that time seriously proposed that interior point methods would provide a viable alternative to the simplex method for actually solving linear programming problems. This is for two reasons. First, due to storage limitations, the size of the problems solved at that time did not exceed a couple of hundred rows and columns, and for such sizes the simplex method is practically unbeatable.
Secondly, there was no numerical linear algebra for solving large, sparse linear equation systems at that time. (This was developed in the '70s and '80s by George and Liu [34].) Therefore the orthogonal projections in IPM were very expensive. Moreover, IPM needs significantly more memory than the simplex method, which was an unacceptable requirement at that time.

Although the simplex method is efficient and elegant, it does not possess a property that became more and more important in the last two decades: polynomial complexity. It was proved that the simplex method has exponential worst-case complexity[61]. In 1978 Khachiyan[60] provided the first polynomial worst-case complexity algorithm for linear programming by showing how to adapt the ellipsoid method for convex optimization of Shor[94] and Yudin and Nemirovskii[116] to linear programming; it requires O(n^4 L) time, where L is the bit size of the problem. This method has significant theoretical implications for mathematical programming but is not competitive with the simplex method in practice.

In 1984, Karmarkar[57] published his projective algorithm, which was proven to require O(nL) iterations and O(n^3.5 L) time overall, lower than Khachiyan's bound. It was also announced to be more efficient than the simplex method in practice. Karmarkar's algorithm is essentially different from the simplex method in that it moves through the relative interior of the feasible set instead of following a sequence of vertices as the simplex method does. Karmarkar's work opened a new research field, the field of interior point methods, which has yielded many theoretical and practical achievements. During the last eleven years a substantial number of contributions have been made in this field, both theoretical and practical. Inspired by Karmarkar's publication, many other polynomial methods were proposed by researchers, sometimes containing new ideas, sometimes containing only slight modifications or embellishments. Numerical results have shown that some interior point methods can solve large linear programming problems faster than the simplex method. For a complete survey, the reader can consult the excellent articles or books by Andersen, Gondzio, Meszaros and Xu[4], Goldfarb and Todd[40], Gonzaga[50], Hertog and Roos[51], Lustig, Marsten and Shanno[68], Todd[98, 99], Wright[108, 109] and Ye[114]. I also refer the reader to the following web sites:

1. Linear Programming - Frequently Asked Questions List: http://www.skypoint.com/subscribers/ashbury/linear-programming-faq.html
2. Interior-Point Methods Online: http://www.mcs.anl.gov/home/otc/InteriorPoint/index.html

All variations of IPM can be classified into four main categories:

1. Projective potential reduction methods;
2. Affine scaling methods;
3. Path-following methods;
4. Affine potential reduction methods.

Karmarkar's algorithm is the original projective potential reduction method. It was studied and improved by many researchers: simplification of the analysis, studies of limiting behavior, removal of the initial assumptions, etc. From a practical point of view the early results, e.g. [100], were disappointing. Later, more efficient algorithms were developed using better formulations of the linear programming problem. Anstreicher [10],[11], Gay[33], De Ghellinck and Vial[36], Gonzaga[48], Yamashita[110], Ye[112], and Ye and Kojima[115] studied the projective potential reduction method.
Actually, these earlier methods provide useful perspective on later path-following approaches, and the projective algorithms soon drifted out of the mainstream of research.

The practical merits of IPM became clear when several researchers proposed and implemented the so-called affine scaling method. In each iteration of the projective potential reduction method a projective transformation is carried out to 'center' the current iterate. Instead of performing this projective transformation, Vanderbei et al.[106] and Barnes[13] proposed a simple affine scaling to center the current iterate. It turned out that this natural simplification of Karmarkar's method had already been proposed by Dikin[25] in 1967. Cavalier and Soyster[20] also studied this method. Several implementations of the primal and dual affine scaling methods (e.g. [2], [14] and [106]) showed promising results.

The affine scaling methods have no known polynomial time complexity and, in fact, Megiddo and Shub[75] have indicated that these methods may require an exponential number of iterations if they are started "close" to the boundary of the feasible region. Also, Vanderbei[103] has pointed out that these methods can make it difficult to recover dual solutions and prove optimality when there is degeneracy. However, these affine methods do work well in practice; see Chen[22] for the primal affine method, and Adler et al.[2],[1], Marsten[71], and Monma and Morton[80] for the dual affine method. Monteiro et al.[83] have obtained a polynomial time bound for a primal-dual affine method.

Most recent research in IPM has concentrated on path-following methods. This class of methods explicitly restricts the iterates to a neighborhood of the central path and follows the path to a solution of the linear programming problem. Both the classical logarithmic barrier method and the center method are called path-following methods. We now describe the main results obtained for this class of methods.

Gill, Murray, Saunders, Tomlin and Wright[38] first built the bridge between Karmarkar's method and the logarithmic barrier method for linear programming. They showed that the search directions used in Karmarkar's method and in the logarithmic barrier method are closely related. They also implemented the logarithmic barrier method and obtained promising results. However, they did not prove polynomiality as Karmarkar did. The logarithmic barrier function approach is usually attributed to Frisch[32] and is formally studied in Fiacco and McCormick[31] in the context of nonlinear optimization. The presentation of a continuous trajectory was first proposed by Karmarkar and extensively studied by Bayer and Lagarias[16], Megiddo[74], and Megiddo and Shub[75]. Megiddo[73] relates these central paths to the classical barrier path in a primal-dual setting. Kojima, Mizuno and Yoshise[62] used this framework to present a primal-dual path-following method which requires O(nL) iterations and O(n^4 L) time overall. Monteiro and Adler[81] improved upon this result by using ideas of Gonzaga[46] and Karmarkar[57] to obtain a primal-dual short-step path-following method which requires O(√n L) iterations and O(n^3 L) time overall. Roos and Vial[90] and Gonzaga[49] independently developed a natural and more practical version of the logarithmic barrier method for linear programming, the long-step path-following method. In this method the barrier parameter may be reduced by an arbitrary factor between 0 and 1.
This means that the iterates need not lie close to the central path and hence long steps can be taken. They also proved that this method has an O(nL) iteration bound. Hertog[53] gave a simple analysis for the logarithmic barrier method and showed that the overall complexity can be reduced by a factor y^ n, using safeguarded line searches. Chapter 1. Introduction to Interior Point Methods 5 Better results are obtained by predictor-corrector methods. The predictor-corrector meth-ods (adaptive primal-dual algorithms) were developed by Mizuno, Todd and Ye[79]. The more practical predictor-corrector algorithms were proposed by Mehrotra[76], Carpenter, Lustig, Mulvey and Shanno[19] and Zhang and Zhang[117]. Nowaday, almost all the existing interior-point codes for general-purpose linear programming problems are based on Mehrotra's predictor-corrector algorithm. These algorithms represent current state-of-the-art IPM. The center method was proposed by Huard[55] in 1967. Compared to the logarithmic barrier method, the center method has.received less attention. Renegar[87] proved in 1987 that a very special version of the center method for linear programming is polynomial. In fact, this was the first polynomial path-following method with an 0(y/nL) iteration bound. In this method very short steps are taken. Although this method for linear programming has an 0(y/nL) iteration bound, which is a factor yfn better than the projective potential reduction method, it is very inefficient in practice. Later on Vaidya[102] reduced the total complexity bound to 0(n3L), using approximate solutions and rank-one updates. Hertog[52] developed a natural and more practical version of the center method for linear programming, in which the relaxation factor may be any number between 0 and 1, and line searches are allowed. The affine potential reduction algorithm was developed by Gonzaga[47]. His algorithm has iteration complexity O(nL). The primal-dual potential function and algorithm were analyzed by Anstreicher and Bosch[12] and Ye[113]. These algorithms possess 0(y/nL) iteration complexity. Ye proved that by doing projected gradient steps (after affine scaling) a constant reduction for the potential function value can be obtained. His method was not a symmetric primal-dual method. Kojima et al.[63] developed a symmetric primal-dual version of this method for linear complementarity problems. The potential reduction methods take steps of the same form as do path-following methods, but they do not explicitly follow the path and can be motivated independently of the path. Chapter 1. Introduction to Interior Point Methods 6 They use a logarithmic potential function to measure the worth of each point in the primal-dual relative interior of the feasible set and aim to achieve a certain fixed reduction in this function at each iteration. Chapter 2 Path-following Methods for Linear Programming 2.1 Preliminaries In this section we will give notation and some relevant theory for linear programming problems and IPM. We also present a conceptual path-following algorithm. We consider the linear programming problem in the standard form minimize cTx, subject to Ax = 6, (2-1) x > 0, where c,x £ lZn, b £ 1Zm, A € lZmxn, n > m. The corresponding dual problem is maximize bTy, subject to ATy < c. It can be changed to the following form (2.2) maximize bTy, subject to ATy + z = c, (2.3) z > 0, where.?/ G Km, z € 1ln. With respect to other forms of linear programs, an analogous analysis can be carried out. 
The linear problem with upper bounded variables will be discussed later. We assume that the primal feasible region TP = {x e Tl\ : Ax = b}, where 11+ = {x e 1Z : x > 0}, (2.4) Chapter 2. Path-following Methods for Linear Programming 8 is bounded and has a nonempty relative interior given by fP = {x e 7e++ : Ax = 6}, where TZ++ = {x e U : x > 0}. (2.5) The corresponding dual feasible region is TD = {(y,z)enmxnn+:ATy + z = c}, (2.6) and the dual relative interior is FD = {(y, z)enmx n\+ -.ATy + z = c}. (2.7) We also use T = T-p x Tv and T = Tp x Tp> to represent primal-dual feasible region and its relative interior, respectively. The duality gap is cTx — bTy and the complementarity gap is xTz. If (x, y, z) is primal and dual feasible then the duality and complementarity gap are identical. Our concern starts from the fact that IPM performs well by avoiding the boundary of the feasible set. It does this with the help of a classical resource first used in optimization by Frisch[32] in 1955: the logarithmic barrier function n n xeTZn,x>0 = » J>(a:) = l o g s , = - l o g JJxj. (2.8) This function grows indefinitely near the boundary of the feasible set T, and can be used as a penalty attached to those points. Combining P(x) with the objective makes points near the boundary expensive, and forces any minimization algorithm to avoid them. The unique minimizer of P(x) in .Fis the analytic center of T, and it coincides with the point that maximizes the product of the variables in T. Newton-Raphson's method can be adapted to determine a center with a prescribed precision with excellent theoretical and practical performance. The main idea behind all interior point method is that costs should try to be decreased and simultaneously move away from the boundary. Thus we examine combinations of cost and barrier function, in a traditional construction known as the internal penalized function: H €ll,x 6 111+ ' => f(x,p) = cTx - /x^jTlogz, = cTx + pP(x). (2.9) Chapter 2. Path-following Methods for Linear Programming 9 where p is a barrier parameter. For a given value of p, (2.1) becomes minimize f(x,p), subject to Ax = b. Similarly, the internal penalized function for (2.3) is (2.10) peTl,zeni+ => g(y,z,p) = & T j /+ / i £ log ^ , (2.11) (2.12) thus transforming (2.3) to maximize g(y,z,p), subject to ATy + z = c. The internal penalized function was extensively studied by Fiacco and McCormick in their book[31], and described since then in all nonlinear programming textbooks. Now associate to each value of the parameter p a primal central point x(p) uniquely denned by x(p) = argmin f(x,p). (2.13) The curve (x(p) : yu € 72.) is the primal central path for the linear programming problem (2.1). The curve is smooth and has the important property that as p goes to zero it converges to an optimal solution of (2.1). In the same way we can define a dual central point (y(p), z(p)) as (»(/*)> *(/*))= argmin g(y,z,p). (2.14) The curve {(y(p),z (/x)) : p £ 11} is the dual central path for the linear prog (2.3). Assume that f ^ 0, i.e. both Tp ^ 0 and TD ^ 0- We define the primal-dual central path as foDows x^ z r = {(x,y, z)e T :Xz = pe where p - — } , (2.15) where X = diag(xi,a;2,.. .,xn), and e denotes the vector of all ones that is e = (1,1, . . . , 1)T. Ye[114] gives the following theorem: Chapter 2. Path-following Methods for Linear Programming 10 Theorem 2.1.1 Let (x(p), y(u-), z(p,)) G T(p) then 1. The central pair (x(fi), z(fx)) is bounded where 0 < /i < /x° /or any given u° > 0. 2. 
For 0 < fj,' < p, cTx(p') < cTx(fi) and bTy(n') > bTy(p). 3. (x(n), Z(H)) converges to an optimal solution pair for (2.1) and (2.3) as \x —• 0. The theorem shows the primal-dual central path is well defined and its limit point is an optimal solution to (2.1) and (2.3). Path-following algorithms follow the central path. Let (r(/x) : p 6 1Z) be a parameterization of the central path. A l l path-following algorithms follow the model below. Algorithm 2.1.Conceptual path-following: given an initial point (x°,y°,z°) £ j7, u° € Ti, with (x°,j/V°) = rv°). k := 0. R E P E A T Choose Hk+i < Hk-Call an internal minimization algorithm to find (xk+1, yk+1, zk+1) = T(pk+i)-k := k + 1. U N T I L convergence. The model simply generates a sequence of independent central points T(/Xfc),(A; = 1, . . . ) . The parameterization and the updating rule characterize different algorithms. In the internal minimization algorithm, Newton-Raphson's algorithm can be used to find the central point for corresponding pk- We will discuss the choices of p, later. Exact central points cannot be exactly computed in finite time. The practical algorithms generate points in a neighborhood of the central path converging towards the optimal solution. Before we present the detailed description about the path-following algorithm, let us review some basic concepts which we will use in the following sections. Chapter 2. Path-following Methods for Linear Programming 11 2.1.1 Newton-Raphson's method for solving nonlinear equations For F(x) = 0, (2.16) where x - (xux2,...,xm)T, F(x) = (fi(x), f2(x), ..., fm(x))T and /,-(x), (i = l , 2 , . . . , m ) are difFerentiable at D C Tlm. The Taylor expansion of fi(x), (i = 1,2,..., m) at xk is / , ( x ) « / I ( ^ ) + f : ^ | ^ A x J f c , (2.17) i 9XJ where Ax j = x j - Xj, (j — 1,2,..., m). Combine (2.16) and (2.17) we get or in vector form f^y^lx* =-Mzk) ( i = l , 2 , . . . , m ) , 7 = 1 ^ F'(xk)Axk = -F(xk), (2.18) (2.19) where and / dA ... Ill ^ F'(xk) = dim d X m d X m I X=Xk Axk = (Axk,...,AxkJT, and F'(x) is Jacobi matrix of F(x). Then = xk + A«*. (2.20) Newton's method can be applied to unconstrained minimization problems. To minimize L(x), we take F(x) = L\x) and use Newton's method to search for a zero of F(x). If L(x) is Chapter 2. Path-following Methods for Linear Programming 12 a real valued function of m variables, a local minimum of L will satisfy the following system of m equations in m variables: dL(x) dxi 8L(x) = 0, = 0. 8x r So the Newton iteration (2.19) becomes V2L(xk)Axk = -VL(xk), (2.21) where VL is the gradient of L and V2L is the Hessian of L. If L is convex, then any local minimizer is also a global minimizer. If x* is a local minimizer of L(x), i.e. V i ( x * ) = 0, and if W2L(x*) has full rank, then Newton's method will converge to x*. 2.1.2 The Karush-Kuhn-Tucker Conditions We consider the NLP of the following form: NLP: minimize {f(x) : gi(x) > 0, for t = 1,..., m}. (2.22) The Karush-Kuhn-Tucker Conditions for NLP are m 1. V / ( z ) + 5>,-Va(a:) = 0. t=i 2. 9i(x) > 0 for i = l , . . . , m ; and y < 0. (2.23) 3. yigi{x) = 0 for i = l , . . . , m . If all the constraints in NLP are equations, the NLP becomes the problem of Lagrange multipliers(PLM). P L M can be stated as follows: P L M : minimize {f(x) : gt(x) = 0, for i = l , . . . , m ) . (2.24) The Lagrange function is m L(x,y) = f(x) + Y,yi9i(*)- (2-25) Chapter 2. 
Path-following Methods for Linear Programming 13 The Optimality Conditions for PLM A vector x € TZn is optimal for PLM only if there exists a vector y € TZm such that m 1. VxL(x,y) = Vf(x) + J2yiVgi{x) = 0. i=i (2.26) 2. gi(x) — 0 for i = 1,..., m; and y is sign free. The Lagrange function (2:25) transforms a constrained optimization problem with equality constraints into an unconstrained optimization problem. To solve (2.24) from the Lagrange function (2.25), we minimize the unconstrained function L(x,y) by solving the system of (n + m) equations in (n + m) variables: dL df(x) A dgt(x) . = - ^ - - £ ^ - ^ = °> forj = l , . . . , n , TT- = -gi(x) = 0, for i = 1,.. .,m. oyi These equations can be solved by Newton's method. 2.2 The Affine Scaling Direction 2.2.1 The Primal Affine Scaling Direction For xk 6 Tp, the affme-scaling algorithm generates an improved feasible solution by scaling xk to e, the vector of ones, and making a step in the projected steepest-descent direction for the transformed objective function in the transformed space. A scaling transformation is a change of variables x = Dx, where D is a positive diagonal matrix. Give a point xk 6 Tp, scaling about xk is the scaling transformation x — XkX, where Xk = d\ag(xk,...,xk). The scaled version of (2.1) is then minimize cFi, subject to Ax = b, (2.27) x > 0, Chapter 2. Path-following Methods for Linear Programming 14 where A = AXk and c = Xkc are obtained by substitution of x — Xkx in (2.1). The feasible solution x — xk of (2.1) corresponds to the feasible solution i = e of (2.27). Dikin's[25] primal affine-scaling algorithm takes a step from x = e in the direction of the negative projected gradient. This direction is —c projected onto the null space of A paff = -PAc = -PAXkXkc, (2.28) where PAXk =1- XkAT(AXk2AT)~1 AXk. The affine-scaling direction in the original space corresponds to moving from xk in the direction P a f f = -XkPAc = -XkPAXkXkc. (2.29) This primal affine-scaling direction is used in the primal affine scaling methods[25, 2, 13, 106]. 2.2.2 T h e D u a l Affine Scaling Di rec t ion Similar to the primal affine scaling direction, the scaled version of (2.3) is maximize bTy, subject to Z " 1 ATy + z = Zllc, (2.30) z > 0. Gonzaga[48] gave the following Lemma which transforms the dual problem to the primal problem. L e m m a 2.2.1 Formulation (2.30) is equivalent to minimize bTz — bo, subject to ATz = c, (2.31) z > 0. where AT = P A Z - i , Chapter 2. Path-following Methods for Linear Programming 15 b = Z-1AT(AZ-2AT)-1b, b0 = bT(AZ-2AT)-1AZ-2c, c = PAZ-I Moreover, given h G 7Zn, consider the direction d? = PArh, then h is equivalent to the direction d for (2.3) given by The dual affine scaling direction is the projected steepest-descent direction for problem (2.31). Use lemma 2.2.1 we get that This direction is used in dual affine scaling methods. 2.3 The Centering Direction 2.3.1 The Primal Centering Direction IP Ms try to avoid the boundary of feasible region T. A good way of avoiding the boundary is obtained by denning a center for the polytope T, and by simultaneous consideration of two objectives: reducing costs and centering. We now ignore the cost and turn to the problem of finding a "center" for the polytope T. Most IPMs use the analytic center which was defined by Sonnevend[95]. The analytic center of Tp is the unique point given by d = -{AZ-2AT)-lAZ~1h. (2.32) daff = (AZ~2AT)- lAZ~1b = {AZ~2AT)-Xb. (2.33) n X = argmin P(x) = argmin(- y^log Xj). 
(2.34) The analytic center is approached by solving the following problem n minimize - 2 log Xj, (2.35) subject to Ax — b. Chapter 2. Path-following Methods for Linear Programming 16 The Lagrangian is n L(x,y) = - ^ o g x j - y T ( A x - b ) . (2.36) The first order conditions are VXL = -X~le- ATy = 0, VyL = - 4 x + 6 = 0. (2.37) Solving these equations by Newton's method, we get pcent = Axk = XkPAXke. (2.38) This is the primal centering direction. 2.3.2 The Dual Centering Direction Similar to the primal centering direction, the dual centering direction is obtained by solving the foDowing problem n minimize - ^ log Zj, j=i (2.39) subject to ATy + z = c. The Newton direction for this problem is deem = -(AZ-2ATy1AZ-1e. (2.40) This is the dual centering direction. 2.4 The Primal Path-following Method The primal path-following method was first studied by Gill et al. [38]. They consider the linear programming problem in the standard form (2.1). Chapter 2. Path-following Methods for Linear Programming 17 Replacing non-negativity of constraints in (2.1) with the primal logarithmic barrier penalty terms in the objective function, we get n minimize cTx — p^T^lnxj = f(x,u-), j=i (2.41) subject to Ax = b, where \x > 0 is the barrier penalty parameter. This technique is well known in the context of genera] constrained optimization problems. One solves the problem penalized by the logarithmic barrier function term for several values of the parameter //, with /x decreasing to zero, and the result is a sequence of feasible points converging to a solution of the original problem. The objective function /(x,/x) of problem (2.41) is a strictly convex function. This implies that this problem has at most one global minimum, and that this global minimum, if it exists, is completely characterized by the Karush-Kuhn-Tucker stationary conditions(first order condi-tions). The function f(x,p) achieves the minimal value in its domain (for fixed p) at a unique point, which is denoted by x([i), and called the p-center. The Lagrangian for (2.41) is 71 L(x, y, p) = cTx - (i ]P m xj - VT{Ax - b), (2.42) i=i arH the first order conditions(KKT conditions) for (2.41) are V X X = c - pX~*e - ATy = 0, VyL = -Ax + b = 0, (2 A3) where y is the vector of Lagrangian multipliers associated with the equality constraints of prob-lem (2.41), and X denotes the diagonal matrix whose diagonal elements are the variables Xj, 1 < j < n, which is denoted by Xk when evaluated at the iterate xk (i.e. X = diag(x\,.. . ,x n )) . Now, we apply one iteration of Newton's method to find an approximate solution to (2.43) for a fixed value of \i. We get the search direction Axk = -(l/flk)XkPAXkXkC + XkPAXke = (l/Pk)Paff + Pcent, (2.44) Chapter 2. Path-following Methods for Linear Programming 18 where PAX, = {I-XkAT{AXlAT)-lAXk). (2.45) A new estimate optimal solution x + is defined by x k + 1 = xk + akAxk, (2.46) where ak is respective step length in the primal spaces chosen to assure the positivity of the variable x. Rather than making several Newton steps to converge to the central trajectory for a fixed value of p, the algorithm reduces p by pk+\ = ppk,0 < p < 1, at each step, and the algorithm continues. Gill et al. [38] and Gay[33] noted that Karmarkar's method is just a special case of primal logarithmic barrier methods. We find that (2.44) consist of a centering term to keep away from the boundary, and an affine term that leads toward an optimal solution. 
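To make the decomposition in (2.44) concrete, here is a minimal NumPy sketch of one damped primal barrier step. It only illustrates the formulas above: the function names, the dense solves with A X^2 A^T, the fixed reduction factor rho for the barrier parameter, and the 0.995 safeguard are assumptions made for the example, not anything prescribed by the thesis.

```python
import numpy as np

def primal_barrier_direction(A, c, x, mu):
    """Search direction (2.44): dx = (1/mu) * p_aff + p_cent,
    where p_aff = -X P_AX X c and p_cent = X P_AX e.
    Dense illustrative sketch: the projection P_AX is applied by solving
    with A X^2 A^T rather than forming P_AX explicitly."""
    X = np.diag(x)
    AX = A @ X                                   # A X
    M = AX @ AX.T                                # A X^2 A^T

    def project(v):                              # apply P_AX = I - (AX)^T M^{-1} (AX)
        return v - AX.T @ np.linalg.solve(M, AX @ v)

    p_aff = -X @ project(X @ c)                  # affine (cost-reducing) term
    p_cent = X @ project(np.ones_like(x))        # centering term
    return p_aff / mu + p_cent

def primal_barrier_step(A, c, x, mu, rho=0.9, safeguard=0.995):
    """Damped step x <- x + alpha*dx keeping x > 0, then mu <- rho*mu."""
    dx = primal_barrier_direction(A, c, x, mu)
    neg = dx < 0
    alpha = 1.0 if not neg.any() else min(1.0, safeguard * np.min(-x[neg] / dx[neg]))
    return x + alpha * dx, rho * mu
```

With mu held fixed, repeated calls approximate the mu-center; reducing mu after each step, as in the short- and long-step variants cited above, traces the central path.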
When pk -* 0 optimality dominates, whereas for large pk, the method proceeds toward the analytic center of the feasible region. Den Hertog and Roos[51] showed that most interior point methods have search directions that are linear combinations of these two vectors. The choice of ak and p is important for the convergence of the algorithm. Gonzaga[46] shows a certain choice which updates the barrier penalty parameter p by short steps, forces the algorithm to trace the central path, so that all points generated are near the central path. This algorithm converges in 0(y/nL) iterations assuming the initial point (a;0, y°) is close to the central path. The overall complexity is 0(n3L). Roos and Vial[91] and Gonzaga[49] propose a long-step path-following method based on the same direction (2.44). In their algorithms, the barrier parameter p is reduced by a large factor if the iterate lies close to the central path. Their algorithms also converges in 0(\/nL) iterations for certain values of p. Chapter 2. Path-following Methods for Linear Programming 19 2.5 The Dual Path-following Method Renegar proposed a special case of dual logarithmic barrier method in[87]. We consider the dual linear programming problem (2.2). We do not use the logarithmic barrier function(2.11). We eliminate the inequality in(2.2) by incorporating it into a logarithmic barrier term appended to the objective to obtain the following transformed problem n minimize bTy + /x ^ T] ln(cj - ajy), (2-47) where aj is the jth column of the matrix A. The first order conditions(KKT conditions) are b-pAZ^e = 0, (2.48) where Z is the n X n diagonal matrix with elements Zj = Cj — ajy- One step of Newton's method yields Ay = ^{AZ-2AT)-lb - (AZ-2AT)-1AZ-1e = ^daff + dcent, (2.49) where the first term in(2.49) represents a step toward optimality and the second term is a centering step in the dual space. 26 The Primal-Dual Path-following Method Most of the new interior-point methods can be seen as variants of the primal-dual logarithmic barrier method due to Megiddo[74] who used logarithmic barrier methods to solve the primal and dual problems simultaneously. Meggiddo's method was developed into a convergent primal-dual logarithmic barrier algorithm by Kojima et al. [62]. This method was shown by Monteiro and Adler[82] to require 0(n3L) overall time and no interior point algorithm has been shown to have a better worst-case complexity bound. In practice the primal-dual algorithm outperforms the primal and dual barrier algorithms. Because in Chapter 6 we will use the primal-dual method to solve the fractional (g,f)-factor problem which has upper bounded variables, now we consider the primal problem in the Chapter 2. Path-following Methods for Linear Programming 20 following form minimize cTx, subject to Ax = b, (2.50) 0 < x < u, where c, x,u G lZn, b G TZm, A G 1ZmXn, n > m. Some or all of the upper bounds u may be infinite. Adding slack variables to x < u, we get minimize cTx, subject to Ax = b, x + s — u, (2.51) x > 0, s > 0, where c, x, s,u£ 72™, and b G TZm, A G Tlmxn, n > m. The corresponding dual problem is (2.52) maximize bTy — uTw, subject to ATy - w + z = c, * > 0, w>0, where y G 1Zm, and z, w G 7271. To derive the primal-dual algorithm, we eliminate the inequality constraints in(2.51) by incorporating them in a logarithmic barrier function as n n minimize cTx — u ^  In Xj — \i ^ In SJ , subject to Ax = 6, (2.53) x + s = u. 
The Lagrangian for(2.53) is then n n L(x, s, y, u;,//) = cTa; - JJL In a;j - u ^  In Sj - 2/T(Ax — 6) + w r (x + s — u). (2.54) i=i j=i Chapter 2. Path-following Methods for Linear Programming 21 The first order optimality conditions ( K K T conditions) for(2.54) are Ax = b, x + s = u, (2-55) A y + pX le — w = c, pS~1e - w — 0. Introducing the dual slack variable vector z = pX-1e, the first order optimality conditions(2.55) give Ax — b, x + s = u, ATy + z - w = c, (2.56) XZe - /lie, SWe = pe, where X, 5, Z, and W are diagonal matrices with the elements XJ,SJ,ZJ and Wj, respectively, e is the n-vector of all ones and p is a barrier parameter. The set of solutions of(2.56) (x(p),s(p)) and (y(p),z(p),w(p)) defines the central path of the primal and dual problem, respectively. The central path is the set of points at which the product of each primal-dual variable pair is identical. For any primal and dual feasible solution (x.s) and (y, z, w) the quality of centrality is measured by n ^ , , 9 , ^ . , ) = g ( ^ - ^ + g , ^ - ^ , ' . (2,7, If 6(x,s,y,z,w,p) = 0 then (x,s) and (y,z,w) are on the central path. The smaller 6 is, the better the points are centered. From (2.56) we find the complementarity gap is rp rp rp rp rp . c x — b y + u w - x z — x w + u w T T = X z + s w = xTpX~1e + sTpS~1e - np + np = 2np. (2.58) Chapter 2. Path-following Methods for Linear Programming 22 Therefore when p —• 0 + the complementarity gap go to 0. It proves (x(p),s(p)) and (y(p), z(p), w(p)) converge to the optimal solution for p —• 0 + . Let us observe that the first three equations of(2.56) are linear and force primal and dual feasibility of the solution. The last two equations are nonlinear and become the complementarity conditions for'// = 0, which together with the feasibility constraints provides optimality of the solutions. A single iteration of the basic primal-dual algorithm makes one step of Newton's method applied to the first order optimality conditions (2.56) with a given p and then p is decreased. The algorithm terminates when the complementarity gap is reduced below a predetermined tolerance. Assuming that the current estimate of the primal and dual variables satisfies both primal and dual feasibility, one step of Newton's method is applied to (2.56) by solving the following system of linear equations A 0 0 0 0 Ax 0 I 0 I 0 0 Ay 0 0 AT 0. I -I As = 0 Z 0 0 X 0 Az pe -XZe 0 0 w 0 5 Aw pe - SWe Solving (2.59) we get the search directions Ay = (AQAT)~1AQp(p), Ax = e(ATAy-p(p)), Az = pX'^e- Ze-X^ZAx, (2-60) Aw = S~l(pe- SWe + WAx), As - -Ax, where 0 = ( S - 1 W.+ X~lZyl and p{p) = p^S'1 - X - 1 )e - (W - Z)e. A new approximation Chapter 2. Path-following Methods for Linear Programming 23 to the minimizer is then determined as i = x + ctpAx, s = s + a p A s , y = y + aDAy, (2.61) z = z + ct£)Az, w = tz; + a n A w , where and Q £ > are step lengths in the primal and dual spaces respectively chosen to assure the positivity of the variables x,s,z and w. As in[66] ap and a® are determined as • <y • £ • I min{—— ,AXJ < 0},min{^- -,AXJ > 0} > , j — AXJ j AXJ J Z' 10' I T { ^ A V A 2 j < °}'T{^'A^' < ° } J ' (2.62) Q P = 0.995ap, = 0.995aD, where the 0.995 factor keeps the iterates from actually touching the boundary. When the minimum index corresponds to the artificial variable, then the relevant a was set equal to one. The barrier parameter /i serves as a centering parameter, pulling the primal variables and dual slacks away from zero. Let <?ap(0) denote the current duality gap. 
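Before turning to how the duality gap evolves, the following sketch pulls (2.60)-(2.62) together into one feasible primal-dual iteration. It is a dense NumPy illustration that assumes a strictly feasible starting point; the helper name and the use of a generic dense solve in place of a sparse Cholesky factorization of A Theta A^T are assumptions of the example, not the thesis's implementation.

```python
import numpy as np

def primal_dual_step(A, x, s, y, z, w, mu, eta=0.995):
    """One feasible primal-dual path-following step following (2.60)-(2.62).
    All products on vectors are elementwise; mu is the barrier parameter."""
    theta = 1.0 / (w / s + z / x)                 # Theta = (S^-1 W + X^-1 Z)^-1
    rho = mu * (1.0 / s - 1.0 / x) - (w - z)      # rho(mu) from (2.60)
    M = (A * theta) @ A.T                         # A Theta A^T
    dy = np.linalg.solve(M, A @ (theta * rho))    # (2.60): dy = (A Theta A^T)^-1 A Theta rho
    dx = theta * (A.T @ dy - rho)
    dz = mu / x - z - (z / x) * dx
    ds = -dx
    dw = (mu - s * w - w * ds) / s

    def max_step(v, dv):                          # ratio test of (2.62)
        neg = dv < 0
        return 1.0 if not neg.any() else min(1.0, np.min(-v[neg] / dv[neg]))

    a_p = eta * min(max_step(x, dx), max_step(s, ds))   # primal step length
    a_d = eta * min(max_step(z, dz), max_step(w, dw))   # dual step length
    return (x + a_p * dx, s + a_p * ds,
            y + a_d * dy, z + a_d * dz, w + a_d * dw)
```

A production code would factorize A Theta A^T once per iteration and reuse that factorization for additional right-hand sides, which is precisely what the predictor-corrector methods of Chapter 3 exploit.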
From (2.58) we know the duality gap is gap(0) — cTx - bTy + uTw = zTx + wTs. (2.63) Let a < m i n { a p , a £ ) } , From (2.59) we know, after one iteration, the new duality gap is gap(a) = (z + aAz)T(x + aAx) + (w + aAw)T(s + aAs) = zTx + wTs + a(ZAx + XAz + WAs + SAw)Te = gap(0) + a(2np - gap^)), Thus the duality gap is reduced at each step as long as 2np - gap(0) < 0, i.e. < gop(O) _ cTx - bTy + uTw 4 ^ 2n 2n Chapter 2. Path-following Methods for Linear Programming 24 In the primal-dual algorithm, we choose p by the algorithm of McShane et al.[72] and Lustig[66], namely, for feasible x and y, cTx — bTy + uTw u- = -f- , 2.65 and n 2 , if n < 5000, 4>(n) = I (2.66) n,y/n, if n > 5000. The algorithm is repeated until the relative duality gap satisfies cTx — bTy + uTw for a user predetermined e. In comparing primal, dual, and primal-dual methods, we find that all search directions consist of a matrix of the form ADAT, where D is a diagonal matrix. The content of D varies, but the computational work does not. There are several advantages for the primal-dual method. First, primal feasibility may be obtained well before optimality. For the dual affine method, a primal feasible solution cannot in general be recovered until optimality. Thus if a run must be stopped short of optimality, no primal feasible solution can be guaranteed to be available for purely dual methods. Second, under primal and dual degeneracy, when large steps are taken, primal methods cannot be guaranteed to yield dual optimal solutions at optimality, nor can dual methods yield primal optimal solution in general[115]. The primal-dual method yields both under any degeneracy conditions. Third, for primal feasible (x, s) and dual feasible (y, z, w), the exact current duality gap (2.67) is always known. Thus an excellent measure of how close the given solution is to the optimal is always available. Finally, the primal-dual method allows for separate step lengths in the primal and. dual spaces as shown in (2.61). McShane, Monma, and Shanno[72] have proven that these separate step lengths significantly reduce the number of iterations to convergence. Chapter 2. Path-following Methods for Linear Programming 25 2.7 The Primal-Dual Infeasible-Interior-Point Method In the discussions so far, we have been assuming that a feasible solution (x°, s°) £ Tp of the primal problem and a feasible solution (y°, z° , w°) € To of the dual problem are available so that we can immediately start the algorithm. In general, it is necessary to provide a means for determining an 5°, y°, z°, w°) 6 Tp X Try as an initial point for the algorithm. Lustig[65] has used the following pair of primal and dual linear programs to find the solution of a problem for which an initial interior feasible point is not known. The augmented primal system is minimize cTx + caxa, subject to Ax + dpxa = b, x + s = u, (2.68) dJ)X + X b = ba, X, 5, X a , X b ^ 0. The dual of this system is maximize bT.y — uTw + baya, subject to ATy + d^ya + z — w = c, dTPy + za = ca, (2.69) ya + *b = 0, 2 , W , Z a , Z b > 0, where dP = b- Ax° and dD - ATy° + z°-w°-c, x° > 0, z° > 0, w° > 0 and y° are arbitrary initial points. We need to take ca and ba satisfying ca>d%y0 = (b-Ax°)Ty0, (2.70) and ba > dTDx° = (ATy° + z°-w°- c)Tx°, (2.71) Chapter 2. 
Path-following Methods for Linear Programming 26 so that (x°,xl,s°,x%) and (y°, z°, z ° , z$, w°) are feasible solution of (2.68) and (2.69), re-spectively, where x° = X , = 1, = ba - dpx° s° = u — x°, y° - if0, y°a - - i , Z° = *°, W° = - ca - dpy° 4 = 1. Therefore, we can apply the algorithm to the artificial problems (2.68) and (2.69) with these initial feasible solutions. In the limit as ba and ca approach infinity, the search directions of (2.68) and (2.69) satisfy the following system AAx = xadp, Ax + As = 0, ATAy + Az - Aw = yadD, (2.72) ZAx + XAz = -XZe + pe, WAs + SAw = -SWe + pie. From (2.68), we get xadp — b—Ax, (2.73) Chapter 2. Path-following Methods for Linear Programming 27 and from (2.69), we get yadD = c - ATy - z + w. (2.74) Lustig[65] proposes a primal-dual infeasible-interior-point method which relaxes the primal and dual feasibility assumption on the primal-dual logarithmic barrier method, and shows this method is equivalent to (2.72), which is derived using artificial vectors when the artificial cost become infinitely large. He applies Newton's method to the first order conditions (2.56) without requiring primal and dual feasibility. He assumes only that x and s satisfy x + s = u with x > 0 and s > 0, which implies that As still satisfies As = —Ax. The defining Newton equations for the change in the variables are then AAx — b - Ax, Ax + As = 0, ATAy + Az-Aw = c - A T y - z + w, (2.75) ZAx + XAz = -XZe + pe, WAs + SAw = -SWe + pe. The resulting search directions are Ay = (AQA^-HAQipi^-r^-iAx-b)}, Ax = Q(ATAy - p(p) + rD), Az = pX~le- Ze- X-^ZAx, (2.76) Aw = S - \ n e - SWe + WAx), As = -Ax, where 0 = {S^W + X'1 Z)'1, p(p) = piS'1 - X~l)e - (W - Z)e, and rD = ATy + z - w - c . In comparing (2.72) and (2.75), we find they are identical. Therefore, we get the conclu-sion that the primal-dual logarithmic barrier method using the limiting feasible direction (i.e. ba-,ca —* oo) leads to the primal-dual infeasible-interior-point method. The infeasible-interior-point method requires only the positivity of (x° , .s°) and (z°,w°) and x ° + s° = u. The generated sequence (xk,sk,yk,zk,wk) is not restricted to the interior of Chapter 2. Path-following Methods for Linear Programming 28 the feasible region. An iterate (xk,sk,yk,zk,wk) is required to satisfy neither the equality constraints Ax = b of primal problem nor ATy - w + z = c of the dual problem, but only the positivity x > 0 and z > 0 and x + s = u. From (2.75) we see that the search direction is still a Newton step toward the point on the central path. It tries to get rid of all the infeasibility in the equality constraints in a single step. If a full step is ever taken (i.e. a = 1), the residuals rp = Ax — b and rp — ATy + z — w - c become zero, and all subsequent iterates remain strictly feasible. The choice of step length parameter ap and a n are same as in primal-dual method described in last section. The barrier parameter p is chosen as: for feasible x and y, cTx - bTy + uTw A*= T 7 ~ \ , (2.77) <p(n) and when feasibility has not been obtained cTx - bTy + uTw + M6i + M62 , . [i- —— , (Z.tb) <p(n) where S1 = \\b- Ax\\/\\b- Ax°\\, (2.79) 62 = \\c-ATy-z + w\\/\\c-ATy0-z0 + w°\\, (2.80) and n2, if n < 5000, ct>(n) = { ~ (2.81) ny/n, if n > 5000. Here M is chosen as in [72], namely M = p</>(n)max< max {|c,|}, max \ , (2.82) {l<j<n l<i<m J and p is a scalar multiplier. The algorithm is repeated until the relative duality gap satisfies cTx — bTy -f uTw 1 + \bTy — uTw\ for a user predetermined e. 
< e, (2.83) Chapter 3 Predictor-Corrector Methods Mizuno, Todd, and Ye[78] developed the adaptive primal-dual algorithm which is also called a predictor-corrector algorithm. In this algorithm the predictor step is a affine scaling (damped Newton) step for problem (2.56), producing a new strictly feasible iterate. This affine scaling step can make a large reduction in complementarity gap. The subsequent corrector step is a centered Newton step to move the point back to the central path. In this corrector step, the choice of the barrier parameter a is based on the predictor step. Both the predictor and corrector steps require essentially the same amount of work, the evaluation and factorization of the Jacobian matrix. Later on, Mehrotra[76] proposed a more practical predictor-corrector algorithm. A common feature in these two predictor-corrector algorithms is that the value of the barrier parameter in the corrector step depends on the predictor step. However, unlike Mizuno, Todd, and Ye's corrector step, Mehrotra's corrector step does not evaluate a new Jacobian matrix. Instead, it reuses the Jacobian matrix used by the predictor step. Mehrotra's predictor-corrector method has been used by almost all LP interior-point im-plementations [76]. In some sense these predictor-corrector methods follow the central path in primal-dual form, but certain of algorithms allow a very loose approximation to the central path. 3.1 Mehrotra's Predictor-Corrector Method Mehrotra[76] proposed a predictor-corrector method which takes a single "corrector" step to the central path.after each "predictor" step to decrease a. Lustig present an extension of 29 Chapter 3. Predictor-Corrector Methods 30 Mehrotra's method in [67]. His method can be derived directly from the first order conditions (2.56). Rather than applying Newton's method to (2.56) to generate correction term to the current estimate, Lustig substitute the new point into (2.56) directly, yielding A(x + Ax) = 6, (x + Ax) + (s + As) = u, AT(y + Ay) + (z + Az) - f> + Aw) = c, (3.1) (X + AX)(Z + AZ)e = /ie, (S + AS)(W + AW)e = pe, where AX, AS, AZ, and AW are diagonal matrices with the elements AXJ,ASJ,AZJ and AWJ, respectively. Simple algebra reduces (3.1) to the equivalent system AAx = b - Ax, Ax + As = u — x — s, ATAy + Az-Aw = c - A T y - z + w, (3.2) ZAx + XAz = pe-XZe- AXAZe, WAs + SAw = /Lie - SWe - ASAWe. The left-hand side of (3.2) is identical to (2.75),while the right-hand side has two distinct differ-ences. The first is the upper bounds equation. In (2.75) we always choose x° and s° satisfying x° + s° = u with x° > 0 and s° > 0. This means the right-hand side of the second equation is always zero. For problems with small upper bounds this requirement will bring computa-tional instability. Therefore we assume only that x° > 0 and s° > 0 but do not require that the upper-bound constraint x° + s° = u be satisfied initially. We allow the method to iter-ate to bound feasibility in precisely the same manner it iterates to primal and dual feasibility. The second difference between (3.2) and (2.75) is the presence of the nonlinear terms AXAZe and ASAWe in the right-hand side of the last two equations in (3.2). Therefore Mehrotra's predictor-corrector technique incorporates higher order information when approximating the Chapter 3. Predictor-Corrector Methods 31 central trajectory and computing the direction step. 
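As a concrete illustration of (4.2)-(4.3), the sketch below assembles the diagonal D^2, forms the normal equations matrix A D^2 A^T, and recovers Delta x from Delta y. The dense Cholesky call and the helper name are assumptions for the example; as the rest of this chapter discusses, practical codes factorize A D^2 A^T (or the augmented system) sparsely with a fill-reducing ordering.

```python
import numpy as np

def normal_equations_direction(A, x, s, z, w, r, h):
    """Solve (A D^2 A^T) dy = A D^2 r + h, the normal equations (4.3),
    with D^2 = (X^-1 Z + S^-1 W)^-1, then recover dx.
    Dense Cholesky is used purely for illustration."""
    d2 = 1.0 / (z / x + w / s)               # diagonal of D^2 (elementwise)
    M = (A * d2) @ A.T                       # A D^2 A^T, symmetric positive definite
    g = A @ (d2 * r) + h                     # right-hand side of (4.3)
    L = np.linalg.cholesky(M)
    dy = np.linalg.solve(L.T, np.linalg.solve(L, g))
    dx = d2 * (A.T @ dy - r)                 # back-substitution consistent with (4.3)
    return dy, dx
```

Forming A D^2 A^T can destroy much of the sparsity of A when A contains dense columns, which is one reason the augmented system approach of Section 4.1.2 is sometimes preferred.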
This method looks for a parametric repre-sentation of the trajectory that goes from a given (infeasible) point in primal and dual spaces to a solution of (2.51) and (2.52). The predictor-corrector technique decomposes direction step A = (Ax, As, Ay, Az, Aw) into two parts A = A a + A c , (3.3) i.e., combine affine-scaling, A a and centering, Ac components. The term A a (affine direction) is obtained by solving AAax = b — Ax, Aax + Aas =: u — x - s, ATAay + Aaz - Aaw = c - ATy - z + w, (3.4) ZAax + XAaz = -XZe, WAas + SAaw = -SWe. Actually the affine scaling direction A a is the predictor direction. It then used in two ways. One is to approximate the nonlinear terms in the right-hand side of (3.2). The other one is to dynamically estimate /x. Mehrotra use the affine scaling direction A a to performs the standard ratio test on both the primal and dual variables to determine the maximum stepsizes 6p and tip which preserve nonnegativity of (x,s) and (z,w). m i n { — — , A a X j < 0 } , m i n { — — , A a S j < 0} j -AaXj j -AaSj min{— - 1 — ,A a Zj < 0 } , m m { — - — , A a W j < 0} j -AaZj j -AaWj bp = 0.99995<5P, 8D = 0.99995^c, where the 0.99995 factor keeps the iterates from actually touching the boundary. Chapter 3. Predictor-Corrector Methods 32 The predicted complementarity gap that would result from a step in the affine direction is 9a = (x + SPAax)T(z + 6DAaz) + (s + SPAas)T(w + 6DAaw). (3.6) Lustig's[67] estimate for p is then where g = xTz + sTw denotes current complementarity gap. The term ga/g measures the achievable progress in the affine scaling direction. Now (3.7) chooses a small p when good progress can be made in the affine direction and a large p when the affine direction produces little improvement. This means the poor progress in the affine direction generally indicates the need for more centering and hence a large value of p. If the affine scaling direction offers considerable progress in the reduction of the complementarity gap, then a more optimistic target is chosen. The actual new step (Ax, Ay, Az, As, Aw) is then chosen as the solution to AAx = b - Ax, Ax + As = u — x — s, ATAy-\-Az-Aw = c - A T y - z + w, (3.8) ZAx + XAz = pe- XZe - AaXAaZe, WAs + SAw = pe - SWe - AaSAaWe. Note that the full primal-dual affine scaling terms, AaXAaZe and AaSAaWe, are added to the right-hand side of (3.8). The step lengths SP and 8p defined by (3.5) are used to compute p but not to modify the affine scaling correction terms. Thus the affine correction added is one that results from taking a full step of length one in the affine direction. Whenever a primal and dual full step of length one can be taken, primal feasibility and dual feasibility are achieved exactly within the numerical accuracy of the computations. So the solution to (3.8) can be Chapter 3. Predictor-Corrector Methods 33 written as Ay = Aay + Acy, Ax = Aax + Acx, Az = Aaz + Acz, (3.9) Aw = Aaw + Acw, As = Aas + Acs, where Aax, Aay, Aaz, Aaw and Aas are the solution to (3.4) and the correction terms Acx, Acy, Acz, Acw and Acs satisfy AAcx = 0, Acx + Acs = 0, ATAcy + Acz - Acw = 0, (3.10) ZAcx + XAcz = fie - AaXAaZe, WAcs + SAcw = fie- AaSAaWe. According to (3.4) xTz + AaxTz + AazTx = sTw + AazTw + AawTs = 0. (3.11) Tf the full step of one were achieved on the affine scaling step, the new complementarity gap would be (ar + Aax)T(z + Aaz) + (s + Aas)T(w + Aaw) rr* rri rrt rrt rri rry f-rt — x z + Aax z + Aaz x + Aax Aaz + s w + Aas w + Aaw s + Aas Aaw = AaxTAaz + AasTAaw. 
(3.12) Therefore (3.10) could describe a centered Newton step from the point x + Aax,y + Aay,z + Aaz,s + Aas and w + Aaw except that the corrections Aax, Aay, Aaz, Aas and Aaw have not been added to the diagonal matrices on the left-hand side of (3.10). Thus the centering correction terms are a Newton step from the point achieved by a full affine step, but using Chapter 3. Predictor-Corrector Methods 34 the second-derivative matrix at the current point x,y,z,w and s as an approximation to the Hessian at the point corresponding to a full affine step. From (3.4) and (3.10), we see that a single iteration of the predictor-corrector primal-dual method needs two solutions of the same large, sparse linear system for two different right hand sides. The iterate obtained from the predictor-corrector method can be viewed as a point on a quadratic path which approximates the central trajectory and pushes an iterate towards an optimum along such an approximation. Tapia et al.[96] demonstrate that Mehrotra's predictor-corrector methods is the level-1 composite Newton variant of the Newton interior-point method and prove that it has a quadratic local convergence rate. The computational results of Lustig[67] show that the predictor-corrector method almost always reduces the iteration count and usually reduces computation time. Furthermore, as problem size and complexity increase, the improvements in both iteration count and execution time become greater. Thus the predictor-corrector method is generally very computationally efficient. 3.2 Carpenter's Higher-Order Predictor-Corrector Method As we described in previous section, Mehrotra's predictor-corrector method first solves (3.4) for the predictor direction A a , then it computes the barrier parameter fi as a function of both the current point (x, s), (y, z, w) and A a , and finally, it uses A a to correct the centered direction that would be obtained by solving (3.10). The systems we solve to obtain the predictor or the corrector direction each involve the same matrix. Each of these directions is obtained based on the evaluation of the same derivative. The idea of the predictor-corrector procedure is to reduce the work required in the primal-dual interior point procedure by reusing the factorization required to solve the Newton system (3.4). The factorization phase is computationally much more expensive than the solve phase. The method in last section performs only one correction in obtaining the direction. It can Chapter 3. Predictor-Corrector Methods 35 easily be extended to perform several corrections in a multiple predictor-corrector procedure. Instead of solving (3.8) once at each step of the primal-dual interior point method, it can be solved repetitively with each direction corrected based on the previous direction. That is, we substitute at each step the Ax, Ay, Az, Aw and As terms found by solving (3.8) back into the right-hand side A 0 . Now, we outline the multiple predictor-corrector algorithm [19] which is invoked at each iteration of the I P M . Algorithm MPC (Multiple Predictor-Corrector) Given vk = (xk, sk, yk, zk, wk) with xk,sk,zk, wk > 0. Step 1: Solve (3.4) for the affine direction Aav. Step 2: Compute p(vk,Aav) using (3.7). Step 3: For i = 1,.. . , nik (mk: the number of corrections in iteration k) do solve the following system for Av1: AAx1 - b - Ax, Ax1 + As1 -- u - x — s, ATAyi + Az1 - Aw{ = c - A T y - z + w, (3.13) ZAxi + XAzi = ue-XZe- AX^AZ^e, . WAs{ + SAwl = u-e-SWe-AS^AW^e. end do. Define Av = A / / " * . 
Step 4: Perform the ratio test to determine primal and dual steplength ap and ap. Chapter 3. Predictor-Corrector Methods 36 Step 5: Move to the new point vk+1 defined by xk+l = xk + otpAx, sk+l = sk + apAs, yh+1 = yk + aDAy, zk+i = zk + aDAz, = w + aoAw. (3-14) The primal and dual step lengths ap and ap> are chosen to insure the nonnegativity of ' the variables x,s,z and w. Given v and a direction Av, the ratio functions ap(v, Av) and ap(v,Av) are defined as ap(v, Av) = min ^ min{—^—, A x j < 0 } , m i n { — ^ — , A S J < 0 } | , aD(v,Av) = min «j mjn{—^j- , AZJ < 0 } , m i n { - ^ - , AWJ < 0} j , ^ ^ aP = 0.99995ap(i/, Av), aD = 0.99995d£>(i/,Ai/), ap and ap> are the steps that are actually taken in M P C . Carpenter et al. [19] give a heuristic strategy for determining m^: the number of corrections in iteration k. They make different consideration on feasible point and infeasible point. For feasible points, let gl be the complementarity that would result from taking a step in the direction obtained after i corrections. If g1 < gl~l and i is less than some maximum number df corrections, then an (z + l)st correction is performed. If gl > gl~l, then we stop correcting and use the direction Av1'1. Therefore, another correction is considered only if the previous one decreased the complementarity. When correcting from an infeasible point, the value rrik is chosen so that the algorithm reduces complementarity while reducing infeasibility. Carpenter et al. [19] define G' = gx + (1 + aD)\\dv\\x + (1 - apXWdplU + \\du\U). (3.16) Chapter 3. Predictor-Corrector Methods 37 where dp, do and du are the primal, dual and upper bound infeasibilities at the current point which are denned as dp — b — Ax, dp, = c - ATy - z + w, (3-17) du — u — x — s. If i is less than the allowable maximum and Gx < Gl~l, then an (z + l)st correction is performed. Carpenter et al. proved that the multiple predictor-corrector algorithm is equivalent to the level-m composite Newton method. Although the use of higher order terms usually results in the reduction of the number of iterations to reach optimality, it does not guarantee the total time savings due to the increased effort in a single iteration. Computational results indicate that on average, the second order methods seem to be the most efficient. In the case that Cholesky factorization is very expensive compared with a single solve of Newton equations, we prefer to use higher (than two) order method to save the number of factorizations. But Carpenter's higher-order method has difficulty of building an accurate higher order approximation of the central trajectory. The method presented in the next section solves this problem. 3.3 Gondzio's Multiple Centrality Corrections Method Gondzio offered a multiple centrality corrections method in [43] . This method combines the multiple centrality correction with a choice of reasonable well centered targets that are supposed to be easier to reach than perfectly centered analytic centers. The algorithm takes special care on restoring centrality of the next iterate and, at the same time, to increase stepsize in the primal and dual spaces. The effort of multiple corrections does not concentrate on reducing the complementarity gap. He believe the complementarity gap will be sufficiently reduced if a long step along a primal-dual affine scaling direction is made. 
Driving the primal-dual point as close to the central path as possible is thus an investment that is expected to pay off in the ability to make a larger step in the next iterate. Chapter 3. Predictor-Corrector Methods 38 This method uses the second order predictor-corrector technique which introduced in Section 3.1 to produce the predictor direction. The following corrector terms concerne improving the centrality of the subsequent iterate and with increasing the stepsizes in primal and dual spaces. For a given point (x, s) and (y, z, w), the quality of centrality is measured by ^ , , , ^ ) . ± ^ . 0 + ± [ ^ . ^ ' , (3,8, or 6(x,s,y,z,w,fi) = \\Xz-pe\\ + \\X~xZ~xe - -e\\ + \\Sw - pe\\ + WS^W^e- -e\\. (3.19) p p What really hurts a primal-dual algorithm is a large discrepancy between complementarity products XjZj and SjWj. Both too small and too large complementarity products, compared with their average pa = (xTz + sTw)/(2n), are undesirable. The step A of (2.75) aims at drawing all complementarity products to the same value p. Usually, there is little hope to reach such a goal. Gondzio's[43] approach combines the choice of targets[56] that are supposed to be easier to reach with the use of multiple correctors. Assume (x, s) and (y, z, w) are primal and dual solutions at a given iteration of the primal-dual algorithm, and x,s,z and w are strictly positive. We also assume that the predictor direction A p at this point, and the maximum stepsizes in primal ap and dual a n are determined. The algorithm looks for a corrector direction A m such that larger stepsizes in primal and dual spaces are allowed for a composite direction A = A p + A m . (3.20) To enlarge these stepsizes from a P and OLD to dtp = min(ap + 6a, 1) and dx> = min(ajr, + 6 a , l ) , respectively, a corrector term A m has to compensate for the negative components in the primal and dual variables (i,S) = (x,s) + ap{Apx,Aps), (y,z,w) = (y,z,w) + a D ( A p y , Apz,Apw). Chapter 3. Predictor-Corrector Methods 39 The algorithm tries to reach the goal by adding the corrector term A m that drives from this exterior trial point to the next iterate (x, s, y, z, w) lying in the vicinity of the central path. However, there is little chance to attain the analytic center in one step, i.e., to reach the point v = (pe,pe) e Tl2n, (3.22) in the space of complementarity products. Hence the algorithm computes the complementarity products i) = (Xz, Sw G TZ2n) and concentrates its effort only on correcting their outliers. Thus, these complementarity products are projected component-wise on a hypercube H = [/?minAi,/3maxM]2"- This gives the following target vt = v(v\H) € n2n. (3.23) The corrector term of the direction is defined by the following system of linear equations A O 0 0 0 7 0 / 0 0 0 AT 0 I -I Z 0 0 X 0 0 0 W 0 s The right hand side in the above system of equations has nonzero elements only in a subset of positions of vt — v that refer to the complementarity products which do not belong to the interval (/Smin/^AnaxA1)-Once corrector term A m is computed, the new stepsizes ap and &D are determined for the composite direction A = A p + A m . (3.25) Then the primal-dual algorithm can move to the next iteration. The correcting process can easily be repeated a desirable number of times. Direction A of (3.25) becomes a new predictor A p for which a new trial point is computed from (3.21) The point in the complementarity products space is then used to define the new target (3.23). Then, Amx 0 Amy A m s 0 0 (3.24) A M 2 Amw vt-v Chapter 3. 
Predictor-Corrector Methods 40 a new modified centering direction Am from (3.24) is computed and added to the predictor term as in (3.25). In this algorithm, correcting terminates when the stepsizes in the primal and dual spaces ap and ap determined for a composite direction (3.25) do not increase sufficiently compared with the stepsizes ap and a n found earlier for a predictor direction. That is, we stop correcting if ap < ap + -ySa or ap, < ao + fSa, (3.26) where 7 is some prescribed tolerance. The computational experience of [43] shows that this method saves significant G P U time over the Mehrotra's predictor-corrector method. The Higher Order Primal-Dual Method (HOPDM) [45] which we will use in Chapter 6 is an implementation of this algorithm. 3.4 S ta r t ing Po in t All interior point methods(including the infeasible IPMs) are very sensitive to the choice of an starting point. The choice of a good starting point for an IPM is still an open question. Good guesses of (a:0, s ° , y°, z°, w°) can reduce the computational effort considerably. One would like the starting point to be well centered and to be as close to primal and dual feasibility as possible. Surprisingly, points that are relatively close to the optimal solution (but are not well centered) often lead to bad performance and/or numerical difficulties. The starting point in most implementations of primal-dual infeasible IPMs [45, 76, 66] are some variation of the approximate solution of the following auxiliary QP problem which is given in [4]: minimize cTx + ^(xTx + sTs), subject to Ax = 6, (3.27) x + s = u, where g is a predetermined weight parameter. A solution of (3.27) can be given by an explicit Chapter 3. Predictor-Corrector Methods 41 formula and can be computed at a cost comparable to a single interior point iteration. It is supposed to minimize the norm of the primal solution (x,s) and promote points that are better in the sense of the LP objective. As the solution of (3.27) may have negative components in x and s, those negative components are pushed towards positive values sufficiently bounded away from zero (all elements smaller than 6 are replaced by 6, say, 6 = 1). Independently, an initial dual solution (y,z,t) is chosen similarly to satisfy y = 0 and the dual constraint. Again, all elements of z and t smaller than 6 are replaced by 6. Chap te r 4 Solv ing the N e w t o n Equat ions In section 2.5, we obtained Newton equations (2.75). If we do not require that the upper bound constraint a;0 + s° = u is satisfied initially, i.e. we allow the method to iterate to bound feasibility in precisely the same manner it iterates to primal and dual feasibility, then we get the Newton equations system A 0 0 0 0 Ax 6> I 0 J 0 0 Ay 0 AT 0 I -I As = Z 0 0 X 0 Az lie - XZe 0 0 w 0 s Aw fie - SWe where - b- Ax, £c = c - ATy - z + w, denote the violation of the primal, the upper bound and the dual constraints, respectively. After elimination of Az = X~\fie- XZe - ZAX), AS = £u - Ax, Aw = S^ifie- SWe- WAs) = S~\fie - SWe- W{u + WAx), 42 Chapter 4. Solving the Newton Equations 43 it reduces to D~2 AT Ax r A 0 Ay h where D2 = (X^Z + S-lW)-\ r = Zc-X-\ue-XZe) + S-1(ue-SWe)-S-1W£u, h = 6-This form of the Newton equations system usually is known as the augmented system. The coefficient matrix of this system is symmetric indefinite and is easier and cheaper to factorize than the original form. We can further reduce (4.2) to the normal equations (AD2AT)Ay = g = AD2r + h (4.3) by eliminating A x from (4.2). 
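To make this reduction concrete, the following sketch assembles the residuals, the diagonal matrix D^2 and the right-hand sides r and h from a current iterate and then solves the normal equations (4.3) by a Cholesky factorization. It is a minimal dense-numpy illustration, not the sparse implementation discussed below; the routine name and argument list are ours.

```python
import numpy as np

def normal_equations_step(A, x, s, y, z, w, b, c, u, mu):
    """One Newton step computed via the normal equations (4.3).

    Illustrative dense sketch: a real IPM code keeps A sparse and reuses
    a sparse Cholesky factorization of A D^2 A^T.
    """
    # Residuals: violation of the primal, upper bound and dual constraints.
    xi_b = b - A @ x
    xi_u = u - x - s
    xi_c = c - A.T @ y - z + w

    # D^2 = (X^{-1} Z + S^{-1} W)^{-1}, stored as a vector of diagonal entries.
    d2 = 1.0 / (z / x + w / s)

    # r = xi_c - X^{-1}(mu e - XZe) + S^{-1}(mu e - SWe) - S^{-1} W xi_u,  h = xi_b.
    r = xi_c - (mu / x - z) + (mu / s - w) - (w / s) * xi_u
    h = xi_b

    # Normal equations: (A D^2 A^T) dy = A D^2 r + h, solved with Cholesky.
    M = (A * d2) @ A.T
    g = A @ (d2 * r) + h
    L = np.linalg.cholesky(M)
    dy = np.linalg.solve(L.T, np.linalg.solve(L, g))

    # Recover the remaining directions by back-substitution.
    dx = d2 * (A.T @ dy - r)
    dz = (mu - x * z) / x - (z / x) * dx
    ds = xi_u - dx
    dw = (mu - s * w) / s - (w / s) * ds
    return dx, dy, ds, dz, dw
```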
The matrix of the normal equations is symmetric positive definite, has smaller size (m x m), and may be more dense.

The solution of the Newton equations system is the computationally most involved task in any interior point method. All IPMs solve an identical system of linear equations; the only difference is in the value of the diagonal matrix D^2 and in the right-hand side. The coefficient matrix in the Newton equations system is usually large and sparse, since the constraint matrix A is itself large and sparse in most applications. Both linear systems (4.2) and (4.3) can be solved using either direct or iterative methods. These two approaches are not completely disjoint; the process of constructing a preconditioner for iterative methods is in many ways similar to direct factorization [41]. The choice of method (direct or iterative) depends on a large number of factors. Karmarkar [58] gave a detailed discussion of these factors and their influence on the choice of method.

Direct methods involve the factorization of the system coefficient matrix, usually obtained through Gaussian elimination. Implementations of methods in this class require specific data structures and special pivoting strategies in an attempt to reduce the amount of fill-in during Gaussian elimination. Notable examples of software using this type of technique are SPARSPAK [34], YSMP [30], and MA27 [29, 28].

Iterative methods generate a sequence of approximate solutions to the system of linear equations, usually involving only matrix-vector multiplications in the computation of each iterate. Methods like Jacobi, Gauss-Seidel, Chebyshev, Lanczos (see [41] for a description of these methods) and the conjugate gradient method are attractive by virtue of their low storage requirements, but they converge slowly unless an effective preconditioner is applied. For IPMs, iterative methods, e.g., conjugate gradient algorithms, are not competitive in the general case due to the difficulty of choosing a good and computationally cheap preconditioner. Some success with iterative methods for special LP problems has been reported [84, 89, 85]. Resende indicated in [88] that the direct method is not appropriate for solving the Newton equations system that occurs in network flow problems, because AD^2A^T and its Cholesky factor are much denser than the matrix A, even if the incidence matrix A has no dense columns. Portugal and Resende [85] used an iterative method to solve the minimum cost network flow problem, with a heuristic to select the preconditioner. The initial selection is the diagonal preconditioner, since it tends to outperform the other preconditioners during the initial interior point iterations. If the number of conjugate gradient iterations exceeds \sqrt{m}, the diagonal preconditioner is changed to the spanning tree preconditioner.

For general purpose IPMs, a direct method is better than an iterative method for solving the Newton equations. Hence, almost all available implementations of general purpose IPMs use a direct approach to solve the Newton equations, and they all use some variant of the symmetric triangular L\Lambda L^T decomposition, where L is a lower triangular matrix and \Lambda is a block diagonal matrix with blocks of size 1 or 2. An alternative direct approach is the QR decomposition of A.
Although this approach uses orthogonal transformations and guarantees high accuracy of solutions, it cannot be used in practice since it is prohibitively expensive. Another more efficient approach for solving Newton equations system is called the adaptive Chapter 4. Solving the Newton Equations 45 approach. This approach combines the advantages of both the direct and iterative methods. It uses an adaptive automated procedure to determine whether to use a direct or iterative method. Computational results show this approach is more efficient than a pure direct method. 4.1 Direct Methods From the previous discussion, we get the conclusion that one practical approach to solving the Newton equations in general purpose IPM codes is the direct method using LALT decomposi-tion. There are two major alternative approaches corresponding to the solving of the normal equations (4.3) and the augmented system (4.2). t. 4.1.1 The Normal Equations Approach Many current implementations [57, 38, 67, 72, 76] solve the normal equations directly by using Gaussian elimination to compute the Cholesky factorization LALT — AD2AT. A major advan-tage of this approach is.that it is based on a symmetric positive definite matrix guaranteed to yield a factorization of the form LALT, where L is lower triangular with unit diagonal and A is diagonal with strictly positive entries. Given the LALT decomposition, A y is easily obtained from (4.3) by solving two triangular systems. Moreover, the sparsity pattern in the decompo-sition is independent of the value of D2 and hence is constant in all IPM iterations. Therefore, a good sparsity-preserving pivot order can be chosen with much care, even if it involves con-siderable computational effort, since it will be used extensively throughout the whole solution process. The sparsity-preserving sequence of pivots (ordering) in which the symmetric Gaussian elimination is performed can be defined in advance, i.e., before the numerical operations start. Thus the analysis phase is completely separated from the factorize phase [27] and the analysis phase has to be done only once, before the whole solution process. The success of the implementation of the Cholesky factorization depends on the quality of its analysis phase, i.e. reordering for sparsity. Its goal is to find a permutation matrix P such that the Cholesky factor of PAD2 A1PT is the sparsest possible. Chapter 4. Solving the Newton Equations 46 The overall solution of the normal equations is typically divided into four distinct indepen-dent phases: 1. Find an appropriate ordering P for AD2AT. 2. Set up a data structure for L, the Cholesky factor of PAD2ATPT. 3. Numerically factor PAD2ATPT into LALT. 4. Solve (LALT)(PAy) = Pg. Step 1 and step 2 are the analysis phase. They depend only on the structure of AD2AT and are independent of its numerical values. Step3 and step 4 are the factorize phase. Finding the permutation that yields the minimum fill-in in the Cholesky factors is an NP-hard problem[lll]. As a practical consequence, we cannot expect to find an efficient algorithm that identifies an optimal ordering. Hence, heuristics are devised that efficiently compute a permutation that approximates the effect of the optimal one. Two such heuristics, namely the minimum degree and the minimum local fill-in orderings [27, 34, 35] are particularly useful in the context of IPM implementations. Now we give a brief description of these two heuristics. 
• Minimum Degree Ordering The minimum degree ordering algorithm follows the Markowitz criterion[69], which is designed for unsymmetric matrices. Markowitz observed that in the kth step of an sparse Gaussian elimination the local best pivot candidate atj is the one that minimizes fa = (n - i ) ( C j - l ) , (4.4) where ri and Cj are the numbers of nonzero entries of row i and column j in the kth Schur complement. The value fa gives the number of floating point operations required by the A;th step of Gaussian elimination and, at the same time, estimates the potential fill-in caused by this step. Chapter 4. Solving the Newton Equations 47 Tinney and Walker[97] applied this strategy to symmetric matrices which results in the minimum degree ordering heuristic[92, 97]. This heuristic can be regarded as an attempt to minimize the number of arithmetic operations in the next stage of the Gaussian elimi-nation algorithm, and as an attempt to select the pivot column that introduces the least fill-in in the next stage. This local minimization strategy does not guarantee a global minimum for either the amount of fill-in or total number of arithmetic operations per-formed during Gaussian elimination. Nevertheless, the strategy has been proved to be very effective in reducing arithmetic operations and fill-ins. When symmetric positive definite systems are considered, pivot selection is restricted to diagonal elements, hence the Markowitz merit function simplifies to f3 = {c3-lf ' " (4.5) which leads to a simple rule that the best candidate for the pivot column is, among the columns in the portion of the matrix still to be factored, the one with the minimum number of nonzero entries. Interpreting this process in terms of the elimination graph[35] one sees that this is equivalent to the choice of the node that has the minimum degree, which gave the name to this heuristic. The minimum degree ordering algorithm can be implemented efficiently both in terms of speed and storage requirements. George and Liu in[35] described several powerful enhancements of the basic algorithm. The various methods that have been developed to improve the performance of the basic minimum degree algorithm are mass elimination, indistinguishable nodes(supernodes), incomplete degree update, element absorption and multiple elimination. George and Liu[35] also discussed the tie-breaking strategy used to select the next pivot column among all the columns matching the minimum number of nonzero entries. • Minimum Fill-in Ordering The minimum fill-in ordering heuristic results from the effort of trying to reduce the Chapter 4. Solving the Newton Equations 48 amount of fill-in further than the Markowitz criterion. The function (4.5) considerably overestimates the expected number of fill-ins in a given iteration of the Gaussian elimina-tion because it does not take into account the fact that in many positions of the predicted fill-in, nonzero entries already exist. It is possible that another node, although more ex-pensive in term of (4.5), would produce less fill-in as the elimination step would mainly update already existing nonzero entries of the the Schur complement. The minimum local fill-in ordering heuristic selects, at each stage of the Gaussian elimination procedure, the pivot element that introduces the minimum amount of fill-in. To count predicted fill-in one has to simulate the elimination step, which is quite an expensive operation. 
Generally, the minimum local fill-in algorithm produces a sparse factorization but at higher initial cost to obtain the ordering. The motivation for introducing the minimum local fill-in heuristic in I P M is straightfor-ward. IPMs solve a sequence of systems of linear equations sharing an identical nonzero structure. Therefore, the ordering procedure will be executed only once at the beginning of the algorithm, and the resulting permutation remains valid for the remaining itera-tions. By contrast, the Gaussian elimination procedure is repeated in every iteration of the I P M and any computational savings achieved by a better ordering is multiplied by the total number of iterations of the algorithm. Moreover, the effort of ordering a matrix according to either heuristic does not constitute a significant portion of the algorithm's total computation effort. Therefore, the extra effort involved in performing the minimum local fill-in heuristic does not impact significantly the algorithm's running time. The normal equations approach has been observed to work well for many linear programs. Nevertheless, it is known to encounter difficulties in two common situations. The first drawback of the normal equations approach is that dense columns in A can create unacceptably dense matrix AD2AT and thus unacceptably dense Cholesky factors L. There are several methods to solve this problem. Adler[2] removes the dense columns from A, and then corrects for their absence by a conjugate gradient method to solve the normal equations using Chapter 4. Solving the Newton Equations 49 the incomplete Cholesky factors as preconditioners. Gi l l et al.[37] and Choi et al.[24] employ the Schur complement algorithm and partition matrix A into sparse and dense columns. Let A = [Ax A 2 ] , (4.6) where A\ G TZmX(n-k) and A2 G TZmxk are matrices built of sparse and dense columns, respec-tively. The Schur complement mechanism based on (4.6) and an explicit decomposition of the matrix AD2AT = AiD2Aj + A2D\AT2 (4.7) into a presumably sparse part A\D\A\ and a significantly denser symmetric rank k update of it. A Cholesky decomposition is then computed for the "sparse" part and the dense rank-A; update is handled via the Sherman-Morrison-Woodbury formula. This method is not guaranteed to work correctly because the sparse part may be rank deficient. Whenever this happens, the Cholesky decomposition of A\D\A\ does not exist and the Sherman-Morrison-Woodbury update is not well defined. Therefore in a practical implementation a small diagonal regularization term is added to A\D\A\ such that the decomposition exists. The method usually works satisfactorily for a small number of dense columns. Andersen[5] proposed a remedy to the rank deficiency arising in the Schur complement mechanism. Lustig[67] proposed an algorithm which combines Adler's and Gill 's method efficiently. This algorithm monitors the spread between the largest and the smallest diagonal elements of the factorization L\AL\ = A\D\Aj. When this spread becomes large, all smaller diagonal elements are set to 1 and the remaining elements of these columns are set to 0. These Cholesky factors are used as preconditioners for a preconditioned conjugate gradient method with the dense columns added to the matrix A. This method eliminates rank deficiency and removes much of the instability. 
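A small sketch of the Schur complement treatment of dense columns just described, assuming the partition A = [A1 A2] into sparse and dense columns is already known. Dense numpy algebra stands in for the sparse Cholesky factorization, the function name is ours, and the small diagonal regularization mentioned above is included to guard against rank deficiency of the sparse part.

```python
import numpy as np

def solve_with_dense_columns(A1, A2, d1, d2, g, reg=1e-10):
    """Solve (A1 D1^2 A1^T + A2 D2^2 A2^T) dy = g.

    d1, d2 hold the diagonals of D1^2 and D2^2.  The presumably sparse
    part A1 D1^2 A1^T is factorized (Cholesky, lightly regularized) and
    the k dense columns in A2 are folded in through the
    Sherman-Morrison-Woodbury formula.
    """
    m, k = A2.shape
    M1 = (A1 * d1) @ A1.T + reg * np.eye(m)          # sparse part, regularized
    L = np.linalg.cholesky(M1)

    def m1_solve(rhs):
        return np.linalg.solve(L.T, np.linalg.solve(L, rhs))

    U = A2 * np.sqrt(d2)                              # rank-k update: M = M1 + U U^T
    Y = np.column_stack([m1_solve(U[:, j]) for j in range(k)])   # M1^{-1} U
    S = np.eye(k) + U.T @ Y                           # small k x k capacitance matrix
    v = m1_solve(g)
    # (M1 + U U^T)^{-1} g = M1^{-1}g - M1^{-1}U (I + U^T M1^{-1} U)^{-1} U^T M1^{-1}g
    return v - Y @ np.linalg.solve(S, U.T @ v)
```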
A more numericaDy sound alternative is based on expanded normal equations formed by "splitting" dense columns[42, 104], but the efficiency of this kind of modification is not fully established. In any case, all of these approaches have the drawback of requiring that dense columns be identified in A in some arbitrary way. Chapter 4. Solving the Newton Equations 50 The second difficulty occurs when the linear program contains "free variables" which are not explicitly bounded above or below. To transform the problem to the standard form, any free variable has to be replaced with a difference of two nonnegative variables: Xf = x+ — x~. The presence of logarithmic terms in the objective function causes very fast growth of both split brothers. Although their difference may be kept relatively close to the optimal value of xj, both x+ and x~ tend to infinity. This results in a serious loss of accuracy in (4.6). A remedy used in many IPM implementations is to prevent excessive growth of x+ and x~. The difficulties of the normal equations motivated several researchers to pay special attention to the augmented system form of the Newton equations which allows more freedom in the pivot choice. 4.1.2 T h e A u g m e n t e d Sys tem A p p r o a c h An alternative approach that avoids the difficulties of the normal equations is to apply elim-ination directly to factor the larger "augmented system" (4.2). This approach is based on performing a Bunch-Parlett[18] factorization to the symmetric indefinite matrix K -D~2 AT A 0 (4.8) This factorization has the form PKPT = LALT, where P is a permutation matrix and A is an symmetric indefinite block diagonal matrix with l x l and 2 x 2 blocks. Because the matrix is symmetric indefinite, the equations (4.2) can not be solved quite so easily as the normal equations. There exists a stable elimination procedure for symmetric indefinite systems, due to Bunch and Parlett[18], but it must allow for certain 2x2 pivots along the diagonal in addition to the usual l x l pivot elements. The factorization LALT is guaranteed to exist[39], but unlike the positive definite case(normal equations approach), the pivot order cannot be computed symbolically because the permutation choice is based on numerical values and both the sparsity and stability of the triangular factor. Thus, the pivot order cannot be fixed once in the analysis phase, but must be recomputed as x and z are updated in each Chapter 4. Solving the Newton Equations 51 iteration. On the other hand, due to the greater freedom in the choice of the pivot order, the augmented system factorization may produce significantly sparser factors than that of the normal equations. Actually the normal equations are a special case of the augmented system (4.8) in which the first n pivots are chosen from the D2 part regardless their stability properties and without any concern about the fill-in they produce. Since D is positive, no zero pivots are encountered, and after the nth stage of symmetric elimination the partially reduced matrix has the form -D-2 0 0 AD2AT Each stage of the elimination performs a rank-one update of the lower right mx m submatrix, and after n stages, AD2AT is explicitly formed. The remaining matrix is positive definite and the factorization Z m . D m £ ^ of AD2AT is guaranteed to exist. Moreover, the last m pivots can be performed in any order, using whatever efficient pivoting strategy is preferred. For example, the minimum degree or the minimum local fill ordering heuristics might be used. 
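To illustrate the greedy ordering idea referred to here, the following sketch applies the basic minimum degree rule to the adjacency graph of a symmetric sparsity pattern. It omits the mass elimination, supernode and multiple elimination refinements of George and Liu, and the set-based graph representation and function name are ours.

```python
import numpy as np

def minimum_degree_ordering(S):
    """Greedy minimum degree ordering for a symmetric sparsity pattern S.

    S is a boolean (or 0/1) n x n matrix; the returned list gives the
    pivots in elimination order.  Eliminating node p makes all of its
    remaining neighbours pairwise adjacent (the fill-in of that pivot).
    """
    n = S.shape[0]
    adj = {i: set(np.nonzero(S[i])[0]) - {i} for i in range(n)}
    remaining = set(range(n))
    order = []
    while remaining:
        # pick the node of minimum current degree (ties broken arbitrarily)
        p = min(remaining, key=lambda i: len(adj[i]))
        nbrs = adj[p] & remaining
        for i in nbrs:                 # simulate the elimination step
            adj[i] |= nbrs - {i}
            adj[i].discard(p)
        remaining.remove(p)
        order.append(p)
    return order

# e.g. perm = minimum_degree_ordering(K != 0) before factorizing K
```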
When the symmetric elimination is complete, we have K = LkDkLj where Dk is an (n + m) X {n + m) diagonal matrix of the form (4.9) -D~2 0 0 Dm (4.10) and (4.11) 7 0 -AD2 Lm Hence, the normal equations approach is embedded in the augmented system approach as one particular pivot strategy. Vanderbei[105] call this pivot strategy the conservative pivot strategy. Solving (4.2) and solving (4.3) are roughly equivalent under the conservative strategy. Observing (4.9), when A has dense column ay, a,aJbecomes a fully dense m x m matrix. Consequently, AD2AT formed in the lower right hand corner is also fully dense. So if A has dense column, the conservative pivot strategy can dramatically impair the efficiency of solving the augmented system. Chapter 4. Solving the Newton Equations 52 The success of the augmented system factorization depends highly on the efficiency of the pivot selection rule. More freedom in the pivot choice opens the possibility of getting sparser factors than in the normal equations approach (e.g., the degrading influence of dense columns can be avoided). A popular strategy for pivot selection rules is to detect "dense" columns and then to pivot out the columns in the diagonal positions of D~2 corresponding to the nondense columns of A. A difficulty arises with the choice of a threshold density used to group columns of A into the sparse and the dense parts in (4.6). A fixed threshold value approach works well only in a case when dense columns are easily identifiable. Whenever more complicated sparsity structure appears in A, a more sophisticated heuristic is needed. Maros and Meszaros [70] give a detailed analysis of this issue. Instead of (4.6), they consider the following partition of the L P constraint matrix r An A12 A2\ A22 where An is supposed to be very sparse and additionally it is assumed to create a sparse adjacency structure AnAu, A\2 is a presumably small set of "difficult" columns, e.g., dense columns or columns referring to free variables, and [^21^22] is a set of "difficult" rows. An efficient heuristic to find such a partition is given in [70]. Therefore (4.2) becomes (4-12) -Di2 0 AT A21 A x i 0 u2 AT A22 A x 2 T2 An Au 0 0 A 2/1 hi A2\ A22 0 0 Aj/2 h2 (4.13) Elimination of 2 causes very limited fill-in and reduces the matrix to •D2-2 AT A12 AT A22 A12 AnD2Au AnDJAl A22 A2lD\An A2lD\Alx (4-14) Chapter 4. Solving the Newton Equations 53 The ehmination of the D2 2 block should be delayed after all attractive pivot candidates from A\\D\A\X and A2\D2A2L blocks are exploited. Fill-in can be reduced dramatically by pivoting out the dense columns of A as late as possible. The normal equations approach makes no such a distinction and pivots out both D,2 and D22 blocks. This technique for delaying the elimination of dense columns is algebraically equivalent to the well-known Schur-complement technique for handling dense columns. Observe that the normal equations (4.15) An An ' D\ 0 AT A21 9i _ ^21 A22 _ 0 AJ2 AT ^22 . A y 2 . 92 _ can be replaced with the following system A^DJAT, AuD2A21 A12D2 A2/1 9i A21D2Aj, A2XD\AT2X A22D2 A2/2 = 92 D2ATl2 D2Ar22 -I A i / 0 (4.16) in which all "difficult" columns are handled as a symmetric rank-k update of an "easy" part (cf.(4.7)) Au A12 D2ilAn A*2} + [D2Al2 D2AT22]. (4.17) A\2D2 A22D2 We can find that the matrix involved in the system (4.16) has exactly the same sparsity pattern as that in (4.14). 
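For comparison with the normal equations sketch given earlier, the following assembles the augmented system (4.2) explicitly and solves it. For brevity the solve uses a general dense LU (np.linalg.solve); a production code would instead compute a sparse symmetric indefinite L\Lambda L^T factorization of the Bunch-Parlett type described above. The routine name is ours.

```python
import numpy as np

def augmented_system_step(A, d2, r, h):
    """Solve the augmented system (4.2)

        [ -D^{-2}  A^T ] [dx]   [ r ]
        [    A      0  ] [dy] = [ h ]

    d2 holds the diagonal of D^2.  A free variable x_j is handled by
    simply putting a zero in the corresponding -D^{-2} position.
    """
    m, n = A.shape
    K = np.zeros((n + m, n + m))
    K[:n, :n] = -np.diag(1.0 / d2)
    K[:n, n:] = A.T
    K[n:, :n] = A
    sol = np.linalg.solve(K, np.concatenate([r, h]))
    return sol[:n], sol[n:]            # dx, dy
```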
In the implementation of an augmented system approach, to save on the expensive analysis phase, the pivot order is reused in subsequent IPM iterations and only occasionally updated when the numerical properties of the Newton equation matrix have changed considerably. In every IPM iteration, the augmented system approach tends to require fewer arithmetic opera-tions than the normal equations approach to solve for (Ax,Aj / ) , because it computes sparser factors. Summing up, the augmented system approach has the following advantages: 1. Good accuracy properties. Chapter 4. Solving the Newton Equations 54 2. Easily generalizable to exploit the sparsity of K K T systems arising in nonseparable quadratic programming and linear complementarity problems. 3. Naturally handles free variables. If Xj £ (—oo,+oo), then D~2 in (4.2) in replaced by zero. 4. Dense columns of A do not degrade its efficiency, do not lead to significant fill-in. It has two important disadvantages: 1. It is more complicated to implement than the normal equations approach. 2. The analysis and factorize phases cannot be separated and the factorization is more expensive than in the normal equations approach. Computational practice [4] shows that both methods are important and have their own advantages. It would be beneficial to have both of them implemented as well as to have an analyzer that is able to determine which of them should be used [70]. 4.2 The Adaptive Approach Wang et al.[107] propose an efficient adaptive approach for solving the Newton equations in IPM. They use an adaptive automated procedure for determining whether to use a direct or iterative solver, whether to reinitialize or update the preconditioner, and how many updates to apply. This algorithm exploits the advantages of both the direct and the iterative methods and avoids the disadvantages of both of them. This algorithm uses normal equations to solve the Newton equations. The ideas in this algorithm can be extended to the augmented system. Now we give a brief description of Wang's algorithm. In the first iteration of the algorithm, the Newton equations are solved using a direct method. Starting from the second iteration, the algorithm uses a preconditioned conjugate gradient method. The preconditioner for each iteration is determined by factoring the current Chapter 4. Solving the Newton Equations 55 matrix K = AD2AT or by updating the current preconditioner. The determination of whether to update the current preconditioner or refactor the matrix AD2AT to obtain a new precondi-tioner based on the cost of determining the previous search direction or the predicted cost of determining the current iteration. If either of the two costs is high, the algorithm reinitializes the preconditioner by factoring the current matrix K = AD2AT. Otherwise, it updates the current preconditioner. The number of Cholesky updates is changed adaptively over the course of the algorithm in order to improve efficiency. It is increased if the previous search direction took many iterations and decreased if the previous search direction took a very small number of iterations. After computing the preconditioner, the algorithm solves the normal equations using the preconditioned conjugate gradient method. It gives a maximum number of iterations allowed. If the preconditioned conjugate gradient iteration number exceeds the maximum number, then the current preconditioner is abandoned and a new preconditioner is determined by Cholesky factorization. 
If this happens twice, the iterative method is not suitable and the algorithm switch to a direct method. The iterative method will continue until the relative duality gap is small enough. In this situation, the iterates are close to the optimal solution and accuracy requirements are high. The elements in matrix D vary significantly and make the matrix K = AD2AT very ill-conditioned. The Cholesky factorization of K may not generate a good preconditioner, even if stable methods are used. For all of these reasons, a direct method is used to determine the final search direction. It also switches to a direct method when a Cholesky factorization has a zero on the diagonal (i.e. the diagonal of the preconditioner is singular). The algorithm prefers to use the preconditioned conjugate gradient method in early itera-tions of IPM because of the following advantages. 1. The conjugate gradient method has better stability than the direct solver. Convergence can be achieved even if the factorization is quite inaccurate. 2. The storage requirement of the preconditioned conjugate gradient method is quite low, Chapter 4. Solving the Newton Equations 56 amounting to a few vectors of length m. 3. Since the accuracy requirements for the search direction in the beginning iterations of the I P M are quite low, only a few conjugate gradient iterations are required. 4. A good preconditioner may dramatically accelerate the convergence rate and gain great computational saving. Computational results in [107] demonstrated that this algorithm can enhance performance of interior point algorithms on large sparse problems. Chapter 5 Presolve Analysis Many LP problem can be simplified to a form that is faster to solve. It is beneficial to implement a presolve procedure in an LP solver to detect redundancy and to remove it. A properly formulated LP problem must contain as few variables, constraints, and nonzeros as possible, it must be well-scaled and the constraints must be linearly independent. The issue of presolve analysis has been addressed by many authors [1], [17], [68], [3], [101]. Andersen[3] and Gondzio[44] proposed the presolve procedures for IPM. We consider the primal and dual LP problems (2.50) and (2.52). The computational complexity of IPM algorithms is dependent on the number of constraints (m) and variables (n) in (2.50). The sparsity of A is more important because only nonzeros in A are stored and the work is dependent on the number of nonzeros. These require that the presolve procedure should reduce the size of A without creating any new nonzeros in A. The presolve procedures for IPM can be categorized into three classes: 1. Reduction of problem size. 2. Improving sparsity of the problem. 3. Removing linear dependent rows and columns. 5.1 Reduction of Problem Size 5.1.1 Simple Presolve Methods • An empty row: 3i:aij = 0,Vj. (5.1) 57 Chapter 5. Presolve Analysis 58 A n empty row must must either be redundant or infeasible. • A n empty column: 3j : dij = 0 ,Vi . . (5.2) Dependent oil the bounds on variable Xj and its objective coefficient Cj, variable Xj is fixed at one of its bounds or the problem is unbounded. • A n infeasible variable: 3j : lj > Uj. , (5.3) The problem is trivially infeasible! • A fixed variable: 3j : lj = Vj. (5.4) Variable Xj can be substituted out of the problem. • A singleton row: 3(i,k) : dij = 0,Vj ^ k,dik ^ 0. (5.5) A n equality type singleton row fixes variable Xk at level bi/dik- If t\/at-fc < Ik or &;/a4fc > Uk, then the problem is infeasible. 
The elimination of an equality type singleton row may create new singleton rows and the process is repeated. A n inequality type singleton row introduces a new bound on an appropriate variable. Such a bound can be redundant, tighter than one already existing or infeasible. 5.1.2 Forcing and Redundant Constraints The simple bounds on the primal variables (2.50) are exploited to detect forcing and redundant constraints. We define the following lower and upper limits for a constraint i Ps= ]C (5.6) Chapter 5. Presolve Analysis 59 and bt = YI aiJuh (5-7) jeP,. where P,» = {j : a,j > 0} and iV,* - {j : a,-j < 0}. We have bi<Y2dijXj < bi, (5.8) J for any Xj that satisfies 0 < XJ < u3. We now consider the "less than or equal to" constraint Y^aiixj < bf (5.9) j ' ' " • A similar analysis applies to a "greater than or equal to" constraint. The following four possibilities may occur. • An infeasible constraint: 3i : b, < bi. (5.10) The problem is infeasible. • An redundant constraint: 3i:b~<bt. (5.11) Constraint i is redundant, row i can be eliminated. • A forcing constraint: 3i:bi = bi. (5.12) If 6i = bi then due to linearity the only feasible value of Xj is 0 if j € Pi*(j € Ni*). Therefore, we can fix all variables in the ith constraint. • An non constructive constraint: 3i:b1<bl <b~i. (5.13) Constraint i cannot be eliminated. This case is nonconstructive. Chapter 5. Presolve Analysis 60 5.1.3 Tightening Variable Bounds and Implied Free Variables The non constructive case bi<bi<Ft (5.14) can often be used to generate implied bounds on variables involved by (5.9). Equation (5.6) and (5.9) imply xk < u'k = (6; - bj)/aik, Vfc G (5.15) and xk>l'k = uk + {bl~bi)laik^k£N^. (5.16) These new bounds can be constructive (tighten appropriate bound), redundant or contradictory to the existing bound. When an implied finite bound is imposed on a free variable, the free variable does not have to be split and represented as the difference of two nonnegative variables. If an upper bound in (5.6) is infinite, i.e. uk = +00 for some k € N{*, then bj_ is infinite. There.is still an implied bound on xk. Gondzio[44] gives Xk > l'k = (bi - ^2 dijUj)/aik, if k e N'i*. (5-17) jeN..-{k) i Implied free variables: If 0 < l'k < u'k < uk, then the original bound 0 < xk < uk can be removed and xk becomes free. • A free singleton column: 3(k,j) : (aij = 0,Vt ^ k,akj ^ 0) A (lj = -oo) A (u3 = -foo). (5.18) For equality constraints, substitute Xj = (bk — ^  akgxq)/akj in to the objective function, and the constraint k can be eliminated. Thus one constraint and one variable are removed from the problem without generating any fill-ins in A, although the objective function is modified. Chapter 5. Presolve Analysis 61 • An implied free singleton column: 3(k,j) : (a0- = 0,V»V k,akj ± 0) A (0 < 1'3 < u) < u3) (5.19) where l'j and u'}- are computed according to (5.15) and (5.16) or (5.17). Then the variable Xj can be treated as a free column singleton and the constraint k and variable Xj are removed from the problem. 5.1.4 Bounds on Shadow Prices and Dominated Variables All presolve procedures discussed in the previous section are concerned with the analysis of a primal formulation (2.50). The similar analysis can be used on the dual problem (2.52). If the upper bound of a primal variable is infinite, we get if UJ = +oo then Wj = 0. (5.20) The associated constraint in (2.52) becomes an inequality al]V < CJ. 
(5.21) If a„j is a singleton column with an entry akj, that is 3(k,j):(ai3 = 0,\/i^k,ak3^0), (5.22) then the inequality (5.21) become a singleton row for the dual problem (2.52) and can be used to produce a bound on the shadow price yk-Assume all dual variables y have explicit (possibly infinite) bounds Pi<Vi<qi, t = l , 2 , - - - , m . (5.23) Define the lower and upper limits for all dual constraints as follows £/= o,ijPi+ 53 °»J9»> (5.24) Chapter 5. Presolve Analysis 62 and rj = X) + Yl aHPi, (5.25) where P*j = {i : a tj > 0} and JV„j = {i : < 0). Any feasible y satisfies Si ^ ]Ca'j2/<- (5.26) i The following possibihties may occur: 1. Cj < Cj (5.21) can not be satisfied, the problem is dual infeasible. 2. cj < CJ The induced cost Zj is strictly positive, so the variable Xj is dominated. It can be fixed at its lower bound Xj = 0 and eliminated from the primal problem. The dual constraint j becomes redundant. 3. c] = Cj (5.21) is always satisfied with Zj > 0. 4. Cj < Cj < cj The variable Xj cannot be eliminated. If the third case cj = Cj occurs, the variable Xj is called weakly dominated. It is claimed in [3] and [44] that under some additional conditions the variable can be eliminated. If Cj < Cj < c], we can apply the same technique as in last section to derive new bounds on the dual variables from (5.21),(5.24) and (5.26), we get: Vk < q'k = Pk + (CJ - Cj)/akj, V7c € P*j, (5-27) and Vk > p'k = Qk + (CJ - cj)/akj, V A r e J V . j , (5.28) for dkj ^ 0. If the new bound improves the older one, we call it constructive. If it is redundant, we omit it. Finally, if it contradicts the older one, the problem is dual infeasible. The new bounds on y are used to recompute Cj and cj which in turn are used to detect more dominated columns. Chapter 5. Presolve Analysis 63 5.2 Improving Sparsity of the Problem 5.2.1 Making the Matrix A Sparser For solving the Newton equation system, both the normal equations and the augmented system are strongly influenced by the sparsity structure of A. Therefore, we want to make the matrix A as sparse as possible. This Sparsity Problem(SP) can be described as: try to find a nonsingular matrix M G TZmXm such that the matrix MA is the sparsest possible. Unless some simplifying assumptions are added, this problem is NP-hard[54]. Some efficient heuristics are proposed in [21]. These methods are relative expensive to reduce the number of nonzeros of A. Gondzio[44] gives an considerably simpler heuristic for improving the sparsity of A. He analyze every equality type row, say, a t« and to look for all LP constraints with the sparsity pattern being the superset of a;*. If ak* is a superset of a;», then for any real a, a'krt = ak* + aa^ has the same sparsity pattern as ak*. If ak* is replaced by a'km and bk is replaced by b'k = bk + abi, the changed linear program is clearly equivalent to the original one. An appropriate,choice of a can cause elimination of at least one nonzero entry from ak*, or more, thus producing a sparser a'kx. Adler et al.[l] also gives a very simple and fast heuristic for improving the sparsity of A. 5.2.2 Splitting Dense Columns Sometimes even very sparse A may produce relative dense factors in the normal equations or the augmented system. If dense columns exist in A, the factors of normal equations can have a lot of fill-ins. Therefore if the number of dense columns is not excessive, we can split them into shorter pieces. A dense column d of A with nonzero elements is replaced with a set of columns d\,d2, ..., dk such that k . 
d = Y,di- (5-29) i=l Chapter 5. Presolve Analysis 64 Adding the linking constraints 4-4+1=0, z= l ,2 , . . . , f c -1 , (5.30) to force the variables xj, x2,,..., x^ associated with all the d t to be the same. We have no longer large dense block of size x created by the dense column d. Instead, the new adjacency structure AQAT of the transformed problem contains k smaller dense windows resulting from pieces rf,-, for which we pay by adding k — 1 constraints (5.30). 5.3 Removing Linearly Dependent Rows and Columns 5.3.1 Linearly Dependent Rows Linearly dependent rows in A are undesirable because in an interior-point algorithm a certain linear system must be solved in each iteration. If A contains linearly dependent rows, the linear system has a singular matrix and does not have a unique solution. To find all the linearly dependent rows in A is computational too expensive. Therefore, most IPM codes try to find only simple linear dependent rows. One simple form of linear dependent rows is called duplicate row by Tomlin and Welch[101]. The definition is 3(i,k) : dij = j / a ; t j i ^ k . (5.31) The heuristic of section 3.6 to make A sparser naturally detects this situation if at least one of the constraints i and k is of the equality type. It must be noticed that due to primal degeneracy, even if the original A has full row rank, the normal equations matrix becomes rank deficient when the optimum is approached. Therefore, it is necessary to correct small rank deficiency of A in the optimization procedure, not only in the presolve procedure. Chapter 5. Presolve Analysis 65 5.3.2 Duplicate Columns We define columns j and k to be duplicate if they satisfy 3(j,k) : a,-j = vaik, j # k,Vi, (5.32) where v is a scalar multipher. We have that t = Cj: - 1/ yiaik i = C j + V(Z% ~W*k- Ck) = Cj - vck + v(z*k - wk). (5.33) • Fixing a duplicate column: Cj--vck^0 (5.34) In this case, under some additional assumptions, one of the columns dominates the other. If Cj > vck then z*-w*>v{zi-wt). (5.35) Now, if uk •= +oo (i.e. w*k = 0) and v > 0, or lk = -oo (i.e. z*k = 0) and v < 0 then (5.35) yields z* - w* > 0. (5.36) Variable Xj can then be fixed to its lower bound lj. If lj = — oo then the problem is dual infeasible. An analogous analysis applies to the opposite case Cj < vck. • Replacing two duplicate columns by one: Cj - vck — 0 (5.37) Chapter 5. Presolve Analysis 66 Let us study the primal problem (2.50), we have minimize h CjXj + ckxk + • • •, subject to h a.jXj -f a.kxk + • • • = b, ij X j ^**^  1Jb j ^ h < X k < UK. (5.38) Define xk = -VXJ + x'k, (5.39) substitute (5.39) into (5.38) we obtain minimize h ckx'k + • • •, subject to (- a kx'u + • • • = b, (5.40) h < %'K - VXJ < uk. Bounds of x'k depend on the sign of v: if v > 0 then l'k = lk + vlj and u'k = uk + VUJ, (5-41) if v < 0 then l'k = lk + VUJ and u'k = uk + vlj. (5-42) Therefore, the duplicate column procedure removes variable £ j from the problem and modifies the bounds on variable xk according to (5.41) and (5.42). 5.4 Postsolve After presolving the original problem (2.50), the reduced problem is solved by an IPM algo-rithm. In general, the optimal solution to the reduced problem is in different variables than the original problem. Therefore, a restoration procedure is needed to "undo" the presolving. To perform the "undo" procedure efficiently, one has to store some trace information on all presolve modifications done to the primal and dual variables, respectively. 
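A minimal sketch of how such trace information can be stored: every presolve reduction pushes a record onto a stack, and postsolve pops the records in reverse order to restore the eliminated variables. Only two reduction types are shown as examples (a fixed variable and the free column singleton substitution), and all names are ours.

```python
# Hypothetical sketch of a presolve trace and its postsolve "undo" pass.
presolve_stack = []

def record_fixed_variable(j, value):
    # variable x_j was fixed (e.g. by an equality type singleton row) and removed
    presolve_stack.append(("fixed", j, value))

def record_singleton_substitution(j, k, row, rhs):
    # free column singleton: x_j = (b_k - sum_{q != j} a_kq x_q) / a_kj,
    # constraint k and variable x_j were removed from the problem
    presolve_stack.append(("singleton", j, k, dict(row), rhs))

def postsolve(x_reduced):
    """Rebuild a solution of the original problem from the reduced one."""
    x = dict(x_reduced)                 # values indexed by original variable index
    while presolve_stack:
        rec = presolve_stack.pop()      # undo the reductions in reverse order
        if rec[0] == "fixed":
            _, j, value = rec
            x[j] = value
        else:
            _, j, k, row, rhs = rec
            x[j] = (rhs - sum(a * x[q] for q, a in row.items() if q != j)) / row[j]
    return x
```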
Andersen[3] and Gondzio [44] gave detailed description about the postsolve procedure. Chapter 6 Solving the Fractional (g, /)-factor Problem A (g, /)-factor of a graph G is a subgraph of G whose valencies are bounded between g and / . More formaUy, let G — (V(G), E(G)) be a graph on m vertices with multiple edges and loops allowed, let A,j denote the multiplicity of the edge G E(G) and let dc(i) denote the degree of vertex i in G. A loop counts 2 towards the degree. Let g = (#,- : i G V(G)) and / = (/,•: i G V'(Cr)) be vectors of positive integers satisfying If we allow the subgraph H to have fractional edges, i.e., a real vector x — (Xij '• S E(G)) The fractional (g, /)-factor problem can be solved using network flow algorithms[6]. In this chapter, we use IPM to solve the fractional (g, /)-factor problem. We will exploit the special structure of the fractional (g,/)-factor problem and propose two IPM algorithms to take advantage of the special structure of the fractional (g, /)-factor problem. In Section 6.1 we develop Choi's parallel interior point algorithm for Network flow problem[23] to the algorithm for the fractional (p,/)-factor problem. In Section 6.2 we propose a preconditioned conjugate gradient based implementation of IPM which uses a efficient preconditioner for the fractional (g, /)-factor problem. Vi G V(G) : 0 < gt < /,• < dG(i). (6.1) A (g, /)-factor is a subgraph H of G with Vi G V(G): gt < d„(i) < /,-. (6.2) 67 Chapter 6. Solving the Fractional (g, f)-factor Problem 68 There are several different ways to formulate the fractional (g, /)-factor problem as a linear programming problem. In order to make the corresponding algorithm in Section 6.1 and Section 6.2 more efficient, we use two different linear programming models of the fractional (g, /)-factor problem in these two sections. The model we use in Section 6.1 has relative fewer variables and is good for the parallel algorithm. The model we use in Section 6.2 has more variables, but for this model we can use a very efficient preconditions and therefore make the algorithm more efficient for solving the fractional (g, /)-factor problem. 6.1 The Parallel IPM for the Fractional (g, /)-factor Problem The fractional (g, /)-factor problem can be formulated as the following linear program: minimize ^ /,• + ^ hi, i=l i-l subject to — ^2 Xij ~ h < ~9i, f ° r i = 1, • • - ,"x, X Xij ~ hi < fi, for i = 1,.. . , m , (i,j)eE(G) o< x o - t o (i,j)e E(G), (6.3) /,• > 0, for i = 1,.. . , m, hi > 0, for i = 1,. . . , m, where X = (Xi j = € E(G)) G TZn, gj G TZm, I = (/,• : i G V(G)) G Tlm and h = (hi : i G V(G)) G TZm (U and hi represent the lower and upper deficiency of vertex i respectively). Here we try to reduce the deficiency to zero. In matrix notation, the above L P problem can be formulated as the foDowing form: minimize cTx, subject to i i < i , _ • (6-4) Xij<*ij, for (i,j)e E(G), x>0, Chapter 6. Solving the Fractional (g, f)-factor Problem 69 where A - -A -I o A 0 - / b = l-9T fTV x = lxT lT hT]T C = tor eL }T g 7£2mx(n+2m) e 7e 2 m, G nn+2m, G 7^ ™+2m -'n c 2m 0n denotes the vector of all zeros in lZn, e2m denotes the vector of all ones in 7Z2m, and A is the mxn node-arc incidence matrix of the graph G = (V(G), E(G)), i.e., for each arc G E(G)y there is an associated column in matrix A with exactly two nonzero entries: an entry 1 in row i and an entry 1 in row j. Choi and Goldfarb propose a parallel interior point algorithm in [23]. They show that their algorithm can exploit various special structures of linear programs. 
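Before turning to that algorithm, the following sketch shows how the data of the linear program (6.4) can be assembled from a small multigraph; the function and variable names are ours, and a dense node-edge incidence matrix is used purely for illustration.

```python
import numpy as np

def gf_factor_lp(m, edges, lam, g, f):
    """Data of the LP (6.4) for the fractional (g, f)-factor problem.

    edges : list of (i, j) pairs (0-based vertices), lam : edge multiplicities,
    g, f  : degree bounds per vertex.
    Returns (A_bar, b, c, upper) with variable order x = (x_e, l, h).
    """
    n = len(edges)
    A = np.zeros((m, n))                     # node-edge incidence matrix
    for e, (i, j) in enumerate(edges):
        A[i, e] += 1.0
        A[j, e] += 1.0                       # a loop (i == j) contributes 2
    I = np.eye(m)
    Z = np.zeros((m, m))
    A_bar = np.block([[-A, -I, Z], [A, Z, -I]])          # constraint matrix of (6.4)
    b = np.concatenate([-np.asarray(g, float), np.asarray(f, float)])
    c = np.concatenate([np.zeros(n), np.ones(2 * m)])    # minimize sum l_i + h_i
    upper = np.concatenate([np.asarray(lam, float),
                            np.full(2 * m, np.inf)])     # x_e <= lambda_e
    return A_bar, b, c, upper

# tiny example: a triangle with unit multiplicities, g_i = 1, f_i = 2
A_bar, b, c, upper = gf_factor_lp(3, [(0, 1), (1, 2), (0, 2)],
                                  [1, 1, 1], [1, 1, 1], [2, 2, 2])
```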
We develop this algorithm for our fractional (g,/)-factor problem by taking advantage of the special structure of the fractional /)-factor problem. For convenience, we reproduce Choi's resultant algorithm in Section 6.1.1 and 6.1.2. In Section 6.1.2, we suggest using George's algorithm[34] to solve the tridiagonal linear equations system in order to save the required storage for the algorithm. This suggestion is not mentioned in [23]. In Section 6.1.3, the layering routine is proposed by Choi and Goldfarb[23] for the network flow problem. We add a modifying routine to construct the staircase matrix A for our fractional (g, /)-factor problem and make Choi's algorithm suitable for the fractional (g, /)-factor problem. . 6.1.1 The Interior Point Algorithm We consider a linear program of the following form: minimize cTx, subject to Ax = 6, Ex < u, x > 0, (6.5) Chapter 6. Solving the Fractional (g, f)-factor Problem 70 where c, x £ Kn, be Tlm, u £ TZk, A £ 7e m X n and JY e l l k x n . Comparing (6.5) with (2.50), we can see the only difference is: in (6.5) we change the upper bound constraints x < u in (2.50) into Hx < u. We study problem in this form because it captures the special structures of the fractional (g, /)-factor problem. We will give detailed description in Section 6.1.3. Adding slack variables to Hx < u, we get minimize cTx, subject to Ax = b, Hx + s = u, (6.6) x > 0, s>0, where c,x € TZn, s,ueTlk,be TZm, A £ TZmxn and H £ TZkxn. The corresponding dual problem is maximize bTy — uTw, subject to ATy — HTw + z = c, (6.7) z > 0, w > 0, where ?/ £ 72m, 2 € Kn and w £ Tlk. Using the same procedure as in Section 2.6, we can get the Newton search directions: Ay = (AQ-*AT)-\AQ-*p(p), Ax = Q-\ATAy-p{p)), . Az = pX^en - Zen -X^ZAx, (6.8) Aw = S-^pek - SWek + WHAx), As = -HAx, where Q = X~*Z + HTS^WH, (6.9) Chapter 6. Solving the Fractional (g, f)-factor Problem 71 and p(p) = pH TS lek - pX 1en- HTWek + Zen. (6.10) Using the Sherman-Morrison-Woodbury updating formula[41], we can write Q 1 as Q-1 = ( X _ 1 Z + ^S^WH)-1 = XZ'1 - XZ~lHTK-lHZ-lX, (6.11) where K = W~lS + HXZ~lHT. (6.12) In the predictor-corrector IPM, the search directions can be computed the same way. Now we consider the case where the matrix H has a block diagonal structure. In particular, let H have t blocks where H{ € TZk,Xn', i — 1, . . . , / , 17Ji=i = k and YA=I ni = n- Then, the matrices HTS~lWH and Q are square block diagonal matrices with n, x diagonal blocks and K is a square block diagonal matrix with k{ x k{ diagonal blocks. Because of the block diagonal structure of Q and K, operating involving Q - 1 requires the solution of several smaller systems of equations, all of which can be done in parallel. Moreover, when H has a large number of rows relative to A, i.e. k > m, the speed-up obtainable by using parallel computation can be substantial. 6.1.2 Staircase Structure IPMs work well for staircase linear programs. If matrix A is staircase then the matrix AAT is block tridiagonal so that no fill-in occurs outside of the diagonal and off-diagonal blocks during Cholesky factorization. Thus, the search direction for such problems can be computed efficiently. H = H2 Chapter 6. Solving the Fractional (</, f)-factor Problem 72 The algorithm described in Section 6.1.1 can take advantage of a constraint matrix A* that has the staircase structure Ai Aii A2i A2 A23; A* = I A32 A3 A34 e 7e(™+*)*". A2q+l,2q A2q+i We assume that A* has an odd number of row blocks. 
We also assume that all the constraints are equalities and we shall use m t to denote the number of rows in the ith row block, i.e., in submatrices of the form A{.. We decompose the constraint matrix into as follows: let A consist of the odd blocks of rows of A* (A G TZmXn where m = J2i=om2i+i) and let H be the even blocks of rows of A* (H € Tlkxn where k - E L i m2i)-Obviously, both A and H are block diagonal. Therefore, K and Q~l are both square block diagonal matrices which can be easily obtained by (6.12) and (6.11). Let the diagonal matrix D = XZ~l be symmetrically partitioned to conform with the column partition (block structure) of A*, i.e., D = diag{Di, Di2, D2, D23, D3,..., D 2 q t 2 q + i , D2q+i}, and let W, S and K be symmetrically partitioned to conform with the row partition of H. Then, the q diagonal blocks of K and the 2q + 1 diagonal blocks of Q~l are Ki = W-1Si + HiDiHT, t = l , . . . , g , (<2-1)2i+i = D2i+i, i = 0,l,...,q, and (Q-1)2t = Di-DlH[K-1HlDi, i = l,...,q, where Hi = [A2i<2i-i A2i A2it2i+i] and Di = diag{D2i-it2i, D2i, D2it2i+i} for i = 1, . . . , q. Moreover, AQ~l AT is block tridiagonal with diagonal blocks (AQ~1AT)i+i •= A2i+it2iD2ii2i+i(I - A2i<2i+1K~lA2it2i+iD2it2i+i)Ali+12i Chapter 6. Solving the Fractional (g, f)-factor Problem 73 +^2t+.1^2t'+l'^2i+l + ^.2«+l,2i+2^2«'+l,2t'+2 X(-f - ^+2,2t+l^t>\^2i+2,2t>1^2t>l ,2t>2)^2t+l ,2J+2> for i = 0 , . . . , q, where the first and last term is vacuous for the cases i = 0 and i = q, and off-diagonal blocks (AQ-1AT)iti+1 = {AQ-lAT)i+u = ^2i - l , 2 t ' -C ) 2t - l ,2 i^2! ' ,2 t - l^T 1 ^21,21+1 ^2t',2t+1^4-2i+l,2t5. * = 1> •••>?• We see that splitting the staircase matrix A* into A and # so that we can apply our algorithm allows us to compute a step direction by factorizing an m X m symmetric block tridiagonal matrix (with diagonal blocks of size m2i+i x m,2i+i,i = 0,...,q and off-diagonal blocks of size m^i-x X mii+\, i = 1,.. . , q) and q smaller matrices K{ of size m,2i x m2i. We note that the A'; factorizations can all be done in parallel. This compares with the straightforward approach which involves factorizing an (m + k) x (m + k) symmetric block tridiagonal matrix of the form A*Q~lA*T where Q is diagonal. This matrix has 2q + 1 diagonal blocks of size m; x mi, i = 1,. . . , 2q -f 1 and off-diagonal blocks of size m; x m;+i, i = l,...,q. Certainly, when m is much smaller than k, substantial savings in computational effort are afforded by our approach. For this purpose, we can control the size of AQ~X AT, making it small at the expense of making the blocks of H large. To solve the linear equations system AQ'1 ATAy = t, where t = AQ'1 /?(/i), we do not need to factorize the matrix AQ-1 AT. We know R = AQ_1AT is a block tridiagonal matrix. George[34] point out that the factorization of such a matrix R can be written in the form fill R12 / Pi R\i #21 #22 #23 #21 f f 1 1 F 2 #23 #32 #33 • _ fi32P2_1 / #3 #34 Rmm ' I P™ where Pi - Rii-, Pi = Rn - Ri,i-iPr-\Ri-i,i, t = 2 , 3 , . . . , m . Chapter 6. Solving the Fractional (g, f)-factor Problem 74 To use this form to solve a set of equations RAy — t requires the forward substitution steps v\ = 1*1, Vi = U-Ri,i-iPr_\vi--i, _ i = 2 ,3 , . . . , ro , (6.13) followed by the back substitution steps Aym — Pm vm, Ayi = P~l{v,;- Rltl+1Ayi+1), i = m - l , . . . , l . (6.14) It therefore suffices to keep the off-diagonal blocks of R unmodified and the diagonal blocks Pi in factorized form Pi = LiUi, i = 1,2,.. . , m . 
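A sketch of this block forward and back substitution, i.e. of (6.13) and (6.14): only the pivot blocks P_i are formed, the off-diagonal blocks are used as given, and dense solves stand in for keeping each P_i in the factorized form L_iU_i. The routine name and the list-of-blocks calling convention are ours.

```python
import numpy as np

def block_tridiag_solve(diag, lower, upper, t):
    """Solve R dy = t for a block tridiagonal R, following (6.13)-(6.14).

    diag[i] : R_{ii},  lower[i] : R_{i+1,i},  upper[i] : R_{i,i+1},
    t       : list of right-hand-side blocks.
    Off-diagonal blocks are never modified, so no fill-in is created
    outside the given blocks.
    """
    m = len(diag)
    P = [None] * m
    v = [None] * m
    P[0] = diag[0]
    v[0] = t[0]
    for i in range(1, m):                      # forward elimination (6.13)
        W = lower[i - 1] @ np.linalg.inv(P[i - 1])   # in practice: reuse LU of P_{i-1}
        P[i] = diag[i] - W @ upper[i - 1]
        v[i] = t[i] - W @ v[i - 1]
    dy = [None] * m
    dy[m - 1] = np.linalg.solve(P[m - 1], v[m - 1])
    for i in range(m - 2, -1, -1):             # back substitution (6.14)
        dy[i] = np.linalg.solve(P[i], v[i] - upper[i] @ dy[i + 1])
    return dy
```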
Therefore, all fill-in is avoided in off-diagonal blocks and the required storage is saved overall. 6.1.3 The Fractional (<7,/)-factor Problem To apply the IPM to the fractional (g,/)-factor problem, we want to construct staircase struc-tured constraints in (6.4). We use the following layering routine on the graph G = (V(G), E(G)), to reorder the node-arc incident matrix A into a staircase matrix. Layering routine: Step 1 Starting from a chosen set of nodes, say JVi, perform undirected breadth first search to construct a layered graph, say with 2q + 1 layers. Step 2 Let Ri be the set of arcs in layer i , and Riti+i be the set of arcs between layer i and i + 1. Furthermore, let TV,- be the set of nodes in layer i. Step 3 Permute the node-arc incidence matrix A of the graph G = (V(G),E(G)) so that the rows of A are ordered as N\, J V 2 , . . . , i V 2 g + 1 , and the columns of A are ordered as R\ i R\,2, R2, R2,3, • • • 5 R2q+1 • Chapter 6. Solving the Fractional (g, f)-factor Problem 75 The node-arc incidence matrix A produced by the above routine has a staircase structure, since this routine creates a layered graph, i.e., there are no arcs between layer i and layer j for j' ^ i + 1 or j i — 1. Note that the choice of N\ greatly affects the number of layers. We would like to choose N\ so that q is large and rh — YA=Q I -^2t'+i I is small. For given N\, we can find the layered structure by performing breadth first search. But finding the best layered graph with respect to the above criteria is difficult because of the exponentially large number of possible starting sets N\. Therefore, we wiD use a heuristic method to find a better (not necessary best) N\. Now we use the following modifying routine to reorder A of (6.4) into a staircase matrix based on A. Modifying routine -A Step 1 Order in (6.4) into a staircase by moving each row in the bottom matrix A to the position right after the corresponding row in the upper matrix —A. Step 2 For each vector Ri : i = 1 , . . . , 2q + 1, move all the columns corresponding to (Ik hk) : VA; € Ni to the position at end of each vector Ri. After this modifying routine we get a new staircase matrix A. For example, the staircase matrix A of the graph in Figure 6.1 is shown in Figure 6.2. Now applying the method described in Section 6.1.2, we decompose the staircase constraint A . . . Adding slack variables r in the constraints corresponding to matrix A, matrix A into therefore (6.4) becomes H minimize T -c x, subject to [A I]x = bi, [H 0]x < b2, Xij<\j, for (i,j)e E(G), x > 0, (6.15) Chapter 6. Solving the Fractional (g, f)-factor Problem 76 elO Figure 6.1: The layered graph G. where x = [xT lT hT rT]T G 72™+" with n = n + 2m and m = 2m, A € 72™ x \ H G 72* x f i, and r G 72™. We can write (6.15) in the following form: where minimize cTx, subject to Ax = b\, Hx < b2, x > 0, A — [A I] H I 0 H 0 0 g ^ m x ( n + m ) G 7£(^+ n ) x ("+" i ) G 72*+" &2 = \b\ \W x G 72" + m 6i G 72™ To get Newton search directions for (6.16), we need to compute (6.16) Q-1 = [X~lZ + HTS-1WH]-\ (6.17) Chapter 6. Solving the Fractional (g, f)-factor Problem 77 R l e l II h i 12 h2 R12 e2 e3 e4 e5 R2 e6 e7 13 h3 14 h4 15 h5 R23 e8 e9 R3 elO 16 h6 17 h7 NI N2 N3 -1 -1 1 1 -1 -1 1 1 -1. -1 1 1 -1 1 -1 -1 -1 1 1 -1 -1 1 -1 -1 1 -1 -1 -1 1 -1 Figure 6.2: The staircase matrix A of the layered graph G. We can see HTS~lWH where H I 0 0 0 s-'Wi 0 0 5 2" 1VF 2 H 0 / 0 S2lW2 0 0 0 0 0 0 H'S^WiH + Sl 0 0 0 0 S2~1W2 0 0 0 Chapter 6. 
Solving the Fractional (g, f)-factor Problem 78 Therefore Q-1 = + X^Zi 0 0 X 2 " 1 Z 2 [X7lZx + jVT5 1- 1VF 1iY + ft]"1 j^s^w iJ f + n o o o o ~\ - i Let =- X f 1 ^ ! + # T 5 f xWi 7i + ft, thus Q-1 = 0 - QHT HQ, where 0 = ( X f 1 ^ + ft.)"1 and if = VPf15j + #0J7 T. Note that 0 is a diagonal matrix and that K is a block diagonal matrix, therefore QT1 is a block diagonal matrix. We have AQ~XAT = [A I] QT1 o ' AT ' 0 Q~2X I = AQT1AT + Q 2 l , where Q^1 = [ X ^ 1 / ^ ] - 1 is diagonal matrix. Therefore, AQ,1 AT and AQ~lAT are block tridiagonal matrices with dimension m. We can use the parallel algorithm which is described in Section 6.1.2 to solve the Newton search directions (6.8). 6.2 The Preconditioned Conjugate Gradient Algorithm In this section, we first use the result of [7] and [8] to reduce the fractional (g, /)-factor problem to a minimum cost network flow(MCNF) problem. Then we use a preconditioned conjugate gra-dient(PCG) based implementation of IPM to solve the MCNF problem. Resende and Veiga[89] have shown that the PCG based IPM makes the IPM very efficient for MCNF problem. The preconditioner we suggest is a combination of Mehrotra and Wang's preconditioner for MCNF problem[77] and Wang and O'Leary's adaptive preconditioner [107]. According to the results of [7] and [8], the network Q = (V,£) for computing the fractional (<7,/)-factor problem is constructed as follows: Chapter 6. Solving the Fractional (g, f)-factor Problem 79 Let the nodes V = {dummy} \j{R{:ieV} U {Si :ie V}, (6.18) and the set E of arcs given as follows V(i,j) £ E(G) (Ri,Sj),(Rj,Si) € El, both with capacity A,j and cost 0, Vz € V(G) (dummy, R{), (Si, dummy) € E2, both with capacity g,- and cost - 1, (dummy, Ri), (Si, dummy) € E3, both with capacity /,• — and cost 0, (dummy, Ri), (Si, dummy) £ EA, both with capacity oo and cost 1. Therefore, the set of arcs € - El U E2 U £ 3 U EA. Proposition 6.2.1 (Anstee[7]) A minimum cost flow in Q yields a minimum deficiency frac-tional subgraph H using V(i,j)eE(G) xo- = ^ ( / ( ^ . ^ ) + /C^,5,-)), where f(p,q) is the flow on the arc (p,q) G £• Let Xij = the flow on arc (Ri, Sj) G El, hi = the flow on arc (dummy, Ri) G E2, hi = the flow on arc (Si, dummy) G E2, mti = the flow on arc (dummy, Ri) £ J53, msi = the flow on arc (Si,dummy) G ES, hri = the flow on arc (dummy, Ri) G EA, hsi = the flow on arc (Si, dummy) £ J54, then the minimum cost network flow problem of Q can be formulated as the following LP Chapter 6. Solving the Fractional (g, f)-factor Problem 80 problem, minimize subject to i=i y) Xij — ' rt — ™>T{ — hri = 0, for i = 1,. («\j)e£i Xij _ ' 5t ~ m s t _ — 0, for i = 1,. (iJ)eEl m m Y2(lri + mri + hri) - Y2(lsi + + hsi) = 0, t=l i=l 0 < x.j < 0 < /r, < g{, for z = 1 0 < /s t < g;, for z = 1 0 < mri < fi — for i = 1 0 < ms; < / , - <7;, for z = 1 /ir, > 0, for i = l hs{ > 0, for i = l , , m , f o r ( z , ; ) G £ ( G ) , (6.19) . , m , . , m , . , m , , , m , . , m , ., m. To solve (6.19), if the optimal objective value is equal to Y^TLi ~9ii w e get a fractional (g,f)-factor H with the edge values given as Proposition 6.2.1. Otherwise, we get a minimum defi-ciency fractional subgraph H. The network flow problem (6.19) can be transformed to the following form: minimize cTx, subject to Ax = b, x < u, (6.20) £ > 0, where. A is the (2m + 1) x (2n + 6m) node-arc incidence matrix of the network Q = ( V , £ ) , i.e. for each arc (i,j) G £ there is an associated column in matrix A with exactly two nonzero entries: an entry 1 in row i and an entry -1 in row j. 
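As a concrete illustration of the construction of the network $\mathcal{G}$ just described and of its node-arc incidence matrix, here is a minimal sketch. The node numbering, argument names, and data layout are assumptions made for the example and are not taken from the thesis code.

```python
import numpy as np
from scipy.sparse import coo_matrix

def build_flow_network(edges, g, f, lam):
    """Build the network G = (V, E) of (6.18) and its node-arc incidence matrix.

    edges -- list of edges (i, j) of the original graph, vertices labelled 1..m
    g, f  -- sequences holding g_i and f_i for i = 1..m (stored 0-based)
    lam   -- dict mapping each edge in `edges` to its capacity lambda_ij
    Node numbering (an illustrative choice): dummy = 0, R_i = i, S_i = m + i.
    Returns (arcs, A) with arcs = [(tail, head, capacity, cost), ...] and A the
    sparse (2m+1) x (2n+6m) node-arc incidence matrix used in (6.20).
    """
    m = len(g)
    dummy = 0
    R = lambda i: i
    S = lambda i: m + i
    arcs = []
    for (i, j) in edges:                                      # E1: capacity lambda_ij, cost 0
        arcs.append((R(i), S(j), lam[(i, j)], 0.0))
        arcs.append((R(j), S(i), lam[(i, j)], 0.0))
    for i in range(1, m + 1):
        arcs.append((dummy, R(i), g[i - 1], -1.0))            # E2: capacity g_i, cost -1
        arcs.append((S(i), dummy, g[i - 1], -1.0))
        arcs.append((dummy, R(i), f[i - 1] - g[i - 1], 0.0))  # E3: capacity f_i - g_i, cost 0
        arcs.append((S(i), dummy, f[i - 1] - g[i - 1], 0.0))
        arcs.append((dummy, R(i), np.inf, 1.0))               # E4: uncapacitated, cost 1
        arcs.append((S(i), dummy, np.inf, 1.0))
    rows, cols, vals = [], [], []
    for k, (tail, head, _, _) in enumerate(arcs):
        rows += [tail, head]                                  # +1 in the tail row,
        cols += [k, k]                                        # -1 in the head row
        vals += [1.0, -1.0]
    A = coo_matrix((vals, (rows, cols)), shape=(2 * m + 1, len(arcs))).tocsr()
    return arcs, A
```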
In (6.20), $x_{ij}$ denotes the flow on edge $(i,j) \in \mathcal{E}$, $c_{ij}$ is the cost of transporting one unit of flow on edge $(i,j) \in \mathcal{E}$, and $u_{ij}$ is the upper bound on the flow (i.e., the capacity) of edge $(i,j) \in \mathcal{E}$. For each vertex $j \in \mathcal{V}$, $b_j$ denotes the flow produced or consumed at vertex $j$. If $b_j > 0$, vertex $j$ is a source; if $b_j < 0$, vertex $j$ is a sink; otherwise ($b_j = 0$) vertex $j$ is a transshipment vertex. In the formulation (6.20), $b_j = 0$ for all $j \in \mathcal{V}$.

As we stated in Chapter 4, the major work in a single iteration of any IPM is to build and update the matrix $AD^2A^T$ and to solve the Newton equations system

$$(AD^2A^T)\Delta y = \sigma \qquad (6.21)$$

that determines the search direction at each iteration of the IPM. There are two methods for solving the Newton equations system: the direct method and the iterative method. Resende and Veiga[89] have shown that the direct method performs poorly on even small instances of network flow problems. This is because, even though the node-arc incidence matrix $A$ is sparse, the factorization of $AD^2A^T$ can produce considerable fill-in, which causes prohibitively high computational effort and storage requirements during the Cholesky factorization in the direct method. Therefore, the most fundamental requirement of an interior point implementation for network flows is an efficient implementation of an iterative method to solve the Newton equations system.

We focus on the conjugate gradient (CG) method. Because the matrices $AD^2A^T$ arising in IPMs tend to be extremely ill-conditioned, a pure conjugate gradient algorithm shows hopelessly slow convergence. Additionally, the use of finite precision results in round-off errors which sometimes make it impossible for the algorithm to converge. To remedy this we use a preconditioner, i.e., a symmetric positive definite matrix $M$ such that the preconditioned matrix $M^{-1}(AD^2A^T)$ is less ill-conditioned than $AD^2A^T$. For this purpose, $M$ should satisfy $M^{-1} \approx (AD^2A^T)^{-1}$, and $M^{-1}$ should be easy to compute. The preconditioner improves the efficiency of the conjugate gradient algorithm by reducing the number of iterations it takes to find a satisfactory direction. In the preconditioned conjugate gradient (PCG) algorithm, we use the chosen preconditioner to solve the system of linear equations

$$M^{-1}(AD^2A^T)\Delta y = M^{-1}\sigma.$$

The pseudo-code of the PCG algorithm is given in Table 6.1.

Procedure PCG($A$, $D$, $\sigma$, $\epsilon$, $\Delta y$)
1. $\Delta y_0 = \Delta y$;
2. $r_0 = \sigma - (AD^2A^T)\Delta y_0$;
3. $v_0 = M^{-1} r_0$;
4. $p_0 = v_0$;
5. $i = 0$;
6. While the stopping criterion is not satisfied do:
7. $\quad q_i = (AD^2A^T) p_i$;
8. $\quad \alpha_i = v_i^T r_i / p_i^T q_i$;
9. $\quad \Delta y_{i+1} = \Delta y_i + \alpha_i p_i$;
10. $\quad r_{i+1} = r_i - \alpha_i q_i$;
11. $\quad v_{i+1} = M^{-1} r_{i+1}$;
12. $\quad \beta_i = v_{i+1}^T r_{i+1} / v_i^T r_i$;
13. $\quad p_{i+1} = v_{i+1} + \beta_i p_i$;
14. $\quad i = i + 1$;
15. end do;
16. $\Delta y = \Delta y_i$
end PCG;

Table 6.1: The preconditioned conjugate gradient algorithm

Resende and Veiga[89] have shown that the special structure of network flow problems can be exploited to determine an easily invertible preconditioner for the PCG algorithm. They proposed a preconditioner which is a combination of diagonal and maximum weighted spanning tree preconditioners. In the earlier iterations of the IPM they use the diagonal preconditioner $M = \mathrm{diag}(AD^2A^T)$, and switch to the maximum weighted spanning tree preconditioner when the diagonal preconditioner becomes ineffective.

• The spanning tree preconditioner

A basis in the network simplex method corresponds to a spanning tree of the graph $\mathcal{G}$. A nondegenerate minimum cost network flow problem has a unique optimal solution and a corresponding optimal spanning tree $T^*$. Let $A = [A_{T^*}\ A_{N^*}]$, where $N^* = \mathcal{E} \setminus T^*$. It is easy to see that $De \to (D_{T^*}e, 0)$ and $AD^2A^T \to A_{T^*} D_{T^*}^2 A_{T^*}^T$ as the IPM approaches the optimal solution. Hence one may expect that $A_T D_T^2 A_T^T$, where $T$ is a maximum weighted spanning tree, would serve as a good preconditioner as the algorithm converges. The maximum weighted spanning tree can be identified using as weights the diagonal elements of the current scaling matrix, $w = De$, where $e$ is the unit vector.

Based on Resende and Veiga's preconditioner, Mehrotra and Wang[77] proposed an improved preconditioner as follows. Let $T$ be a maximum weighted spanning tree of $\mathcal{G}$; then

$$AD^2A^T = A_T D_T^2 A_T^T + A_N D_N^2 A_N^T \approx A_T D_T^2 A_T^T + \Lambda,$$

where $\Lambda$ is a nonnegative diagonal matrix. A simple choice of $\Lambda$ which has worked well in practice is $\Lambda = \rho \cdot \mathrm{diag}(A_N D_N^2 A_N^T)$, where $\rho$ is computed adaptively. Mehrotra and Wang[77] proved that the Cholesky factor of the preconditioner $A_T D_T^2 A_T^T + \Lambda$ can be computed in $O(m)$ computational steps. In fact, this Cholesky factor is an incomplete Cholesky factor of $AD^2A^T$. The computational results given in [77] show that Mehrotra and Wang's preconditioner is better than Resende and Veiga's preconditioner.

Wang and O'Leary[107] proposed an adaptive approach to determining a preconditioner. Following their approach, we use a scheme in which we first perform a procedure to determine whether to update the current preconditioner or to recompute a new preconditioner $A_T D_T^2 A_T^T + \Lambda$ for the matrix $AD^2A^T$. Next, we use the chosen preconditioner in the PCG method.

The updated preconditioner is obtained from the previous preconditioner by a small-rank change as the scaling matrix $D$ changes. Let $D$ be the current diagonal matrix and $\bar{D}$ the one for which we have a preconditioner $A_T \bar{D}_T^2 A_T^T + \Lambda$ together with its Cholesky factor $L$. Define $\Delta D = D^2 - \bar{D}^2$ and let $a_i$ be the $i$-th column of the matrix $A$. Since

$$AD^2A^T = A\bar{D}^2A^T + A\,\Delta D\,A^T \approx A_T \bar{D}_T^2 A_T^T + \Lambda + A\,\Delta D\,A^T = LL^T + \sum_{i=1}^{2n+6m} \Delta d_{ii}\, a_i a_i^T,$$

we may obtain an improved preconditioner $\tilde{L}\tilde{L}^T$ by applying a small-rank update to $LL^T$. This update may be computed as in [15] and [26].

The determination of whether to use the updated preconditioner or to recompute a new preconditioner is based on the approximate cost of the previous iteration, including the cost of any updates that were made to the preconditioner. If the cost of the previous search direction was high, we recompute a new preconditioner for the current matrix $AD^2A^T$. If the cost of the previous search direction was not high, the preconditioner is obtained by updating the previous one. Wang and O'Leary[107] demonstrate that the adaptive preconditioner can enhance the performance of interior point algorithms on large sparse problems.

In the PCG algorithm, the initial direction $\Delta y_0$ is the direction $\Delta y$ produced in the previous call to the PCG algorithm, i.e., during the previous interior point iteration. This is done with the expectation that $D$ and the right-hand side $\sigma$ change little between consecutive interior point iterations, and consequently the direction produced in an iteration should be close to the one produced in the previous iteration. The first time PCG is called, we set $\Delta y_0 = (0, \ldots, 0)$.
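To connect the pieces of this section, the following is a minimal runnable sketch of the PCG procedure of Table 6.1, written so that the preconditioner (diagonal, spanning tree, or adaptive) is supplied as a callback and the warm start $\Delta y_0$ from the previous interior point iteration can be passed in. The names, the matrix-free interface, and the simple relative-residual stopping test are illustrative assumptions; the angle-based stopping test actually used is described next.

```python
import numpy as np

def pcg(apply_AD2AT, apply_Minv, sigma, dy0, tol=1e-8, max_iter=500):
    """Preconditioned conjugate gradients for (A D^2 A^T) dy = sigma (Table 6.1).

    apply_AD2AT(v) -- matrix-vector product with A D^2 A^T (never formed explicitly)
    apply_Minv(v)  -- application of the preconditioner M^{-1}
    dy0            -- warm start, e.g. the direction from the previous IPM iteration
    """
    dy = np.array(dy0, dtype=float)
    r = sigma - apply_AD2AT(dy)          # r_0 = sigma - (A D^2 A^T) dy_0
    v = apply_Minv(r)                    # v_0 = M^{-1} r_0
    p = v.copy()
    rv = r @ v
    for _ in range(max_iter):
        # simple relative-residual test (an assumption; the angle test is given below)
        if np.linalg.norm(r) <= tol * max(1.0, np.linalg.norm(sigma)):
            break
        q = apply_AD2AT(p)               # q_i = (A D^2 A^T) p_i
        alpha = rv / (p @ q)             # alpha_i = v_i' r_i / p_i' q_i
        dy += alpha * p
        r -= alpha * q
        v = apply_Minv(r)
        rv_new = r @ v                   # beta_i = v_{i+1}' r_{i+1} / v_i' r_i
        p = v + (rv_new / rv) * p
        rv = rv_new
    return dy

# Example of the diagonal preconditioner M = diag(A D^2 A^T) used in early iterations,
# with A a scipy.sparse node-arc incidence matrix and d2 the diagonal of D^2:
#   M_diag = np.asarray((A.multiply(d2) @ A.T).diagonal()).ravel()
#   dy = pcg(lambda v: A @ (d2 * (A.T @ v)), lambda v: v / M_diag, sigma, dy_prev)
```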
In this algorithm, we use the stopping criterion suggested by Karmarkar and Ramakrishnan [59] and Resende and Veiga [89]. Let Ay be an approximate solution of (6.21) and . | oT(AD2AT)Ay | cos v — =— . || o || • || (AD2AT)Ay \\ Chapter 6. Solving the Fractional (g, f)-factor Problem 85 where 6 is the angle between (AD2AT)Ay and o. We terminate P C G iterations when | 1 — cos# |< e, i.e., 6 « 0, where e is some small -tolerance (typically e = 10 - 8 ) . This stopping criterion effectively halts the conjugate gradient algorithm when a good enough direction is on hand. 6.3 Computational Implementation We use Gondzio's code H O P D M to solve the fractional (g, /)-factor problem. H O P D M stands for Higher Order Primal Dual Method. It is an implementation of Gondzio's multiple centrality corrections method which has been described in Section 3.3. We use the formulation (6.3) to solve the fractional (g, /)-factor problem. The objective function of (6.3) represents the total deficiency for the current x = (Xij '• (*>J) £ B{G)) € TZn value. If we can find an optimal solution in which the objective value is less than 1, then the current x is a fractional (g, /)-factor. Otherwise the objective value will be an integer K which is equal to or larger than 1. In this case, we can get a (g, /)-barrier (see below) and a minimum deficiency fractional subgraph x-To introduce the definition of (g, /)-barrier, we cite the following theorem. Theorem 6.3.1 (Anstee[6]) There exists a fractional (g, f)-factor in G if and only if Ef>+ E * . - ; > 5 > ( 6 - 2 2 ) for all partitions S,T,U of vertex 1,2,..:, m € V(G). If there is a partition S, T, U violating (6.22), this partition is a (g, /)-barrier. . After executing H O P D M , If the objective value is equal to or larger than 1, we can get a (g, /)-barrier S, T, U as follows: T = {i:lt>0, V i e F ( G ) } , 5 = {i:hi> 0, Vi € V(G)}, Chapter 6. Solving the Fractional (g, f)-factor Problem 86 U = V(G)\(TuS). In this case, the objective value K will be K = ^29i~ 13 Xij-^fi-i€T ieT,j€TuU i£S Actually, for all • i G T,j G T U U, Xij ~ 1- Therefore, 3^ Xij 1 S approximately the ierjeTuu number of edges (i, j): i G T,j G T U U. 6.3.1 Starting Point All interior point methods are sensitive to the starting point. The best choice of a default starting point is still an open question. In this section we use three different starting point approaches to run the H O P D M code for fractional (g, /)-factor problem and compare the results. Now we give a detailed description of these three starting point approaches. • Approach 1 For LP (2.51) and (2.52), the only requirement for the starting point of Gondzio's multiple centrality corrections method is x° > 0 , s ° > 0,z° > 0, and w° > 0. Gondzio generates the starting point by using the solution of a primal least squares problem, minimize | xTs |, subject to Ax = b, (6.23) x + s = u, and a dual least squares problem, minimize | zTw I, ' (6.24) subject to ATy + z — w = c. The Lagrangian for (6.23) is then L(x,s,y,w)=\ xTs | -vT(Ax - b) + tT(x + s - u). (6.25) Chapter 6. Solving the Fractional (g, f)-factor Problem 87 The first order optimality conditions(KKT conditions) for (6.23) are s - ATv + t = 0, x + t = 0, Ax - b = 0, X + 5 - U = 0. We can easily solve the linear equations system (6.26). s = u — x. Similarly, we can find the solution of (6.24) as w —z. 
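The least squares problems (6.23) and (6.24) admit closed-form solutions that follow from the optimality conditions above. The following sketch computes them and applies a positivity shift of the kind given in the formulas that follow; the dense linear algebra, the helper name, and the sign conventions (which may differ from those used in the thesis) are assumptions for illustration only.

```python
import numpy as np

def least_squares_start(A, b, c, u):
    """Sketch of Approach 1: a starting point from the least squares problems
    (6.23)-(6.24), assuming A has full row rank.  Only the first-stage shift is
    applied here; the additional centring terms appear in the text below."""
    AAT = A @ A.T
    # Primal: minimize |x's| s.t. Ax = b, x + s = u  =>  x = (u - A'v)/2 with
    # v solving (AA')v = Au - 2b, and s = u - x.
    v = np.linalg.solve(AAT, A @ u - 2.0 * b)
    x = 0.5 * (u - A.T @ v)
    s = u - x
    # Dual: minimize |z'w| s.t. A'y + z - w = c  =>  y = (AA')^{-1} A c,
    # z = (c - A'y)/2 and w = -z (sign conventions may differ in the thesis).
    y = np.linalg.solve(AAT, A @ c)
    z = 0.5 * (c - A.T @ y)
    w = -z
    # First-stage shift so that x, s, z, w become nonnegative.
    dP = max(-1.5 * x.min(), -1.5 * s.min(), 0.0)
    dD = max(-1.5 * z.min(), -1.5 * w.min(), 0.0)
    return x + dP, s + dP, y, z + dD, w + dD
```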
The stepsizes in primal space dp and dual space do are determined by dp = max( —1.5 X min{xj}, —1.5 x min{sj}, 0), and then do = max( —1.5 X min{zj}, -1.5 X min{u>j}, 0), (a: + dPe)T(z + dDe) + (s + dPe)T(w + dDe) dp = dp + 0.5 X and . ; , n c + dPe)T(z + d£,e) + (s + dPe)T{w + cfoe) « £ ) = « £ > + 0.5 X ; . (6.26) w '= (A/4 T ) - 1 ( J 4u-26) , x = ( « — / l T « ) / 2 , (6.27) y = (AAT)-1Ac, z = (ATy-c)/2, (6.28) Chapter 6. Solving the Fractional (g, f)-factor Problem 88 The starting point is then given by x° = x + dpe, s° = s + dpe, z° = z + dDe, (6.29) w° — w + djje, y° = y, (unaltered). We can prove that the starting point satisfies x° > 0,5° > 0, z° > 0, and w° > 0. • Approach 2 Anstee gives a method for choosing the starting point in [9]. For every vertex i G V(G), the interval and §i,fi are defined as follows 9i fi Ii [9i,fi. LdG(0' dG(i)\ We can see if a fractional subgraph x has Xij 6 li^Ij, V(i , j) G E(G) then x is a fractional (g,/)-factor. Therefore if Ii CI lj ^ 0, we choose Xij 6 7,- n lj. If n Jj = 0 and fi < g~j, we choose Xij — fi + 2 ^ ( ^ ' - ^ ) ' where d(Ii,Ij) is the distance between and 7,-. In (6.3), /2- : i G ^ ( G ) are determined by gi — Yl(i,j)^E{G)Xiji a n d hi : i £ V(G) are determined by £( , \ j ) e£(G) X i j ; - /,-. Now we have determined a;0 = [ x ° T l°T h°T] of (6.4). The starting value of the primal upper bound slack variables s of (6.4) is determined by s° = u - x°, where u is the upper bound vector. We give a threshold value 6, and adjust all a;0 and s° that are below this threshold value to the threshold value 6. This guarantees that all x° and s° are sufficiently bounded away from zero. This threshold value S is also used to bound dual variables z° and w° away from zero. For the dual problem, we set y° = 0. z° and w° are determined as follows Chapter 6. Solving the Fractional (g, f)-factor Problem 89 For Xi unbounded: < c0- if Cj > 6 6 if Cj < 6 (6.30) For Xi bounded: < z° = Cj + u>9, if CJ > 0 w° = z° - Cj if Cj < 0 (6.31) • Approach 3 We give a threshold value 6. For primal problem, we set x° = 6 and s° = 6. The dual variables are determined the same way as approach 2. 6.3.2 Test Problems A merely random (g, /)-factor problem is not good. To show this, let us review some available results. We consider a random graph G on n vertices where each vertex chooses randomly a set of r neighbors. Posa [86] proved that a random graph G with f(n) = crclogn edges is Hamiltonian with probability tending to 1 if c > 3. Obviously, if a graph of even order is Hamiltonian, then the graph has two disjoint 1-factors. In [93] Shamir and TJpfal found a sufficient condition for a 1-factor. They proved that if the minimum vertex degree reaches 6, almost every random graph G has a 1-factor. Komlos and Szemeredi[64] proved that when a random graph is chosen by choosing edges independently with the same probability, Hamiltonian cycles appear (in a probabilistic sense) at the same time that the minimum vertex degree reaches 2. This requires about l/2nlogrc + n log log n edges. This means that if in a random graph G each valency is at least 2, then the graph G contains a Hamiltonian cycle "almost surely". From these results, we get the conclusion that a random graph with enough large number of edges almost surely has a 1-factor. We expect that in a similar way a random graph G with "large enough" degree almost surely has a (<7, /)-factor. A proof could involve showing that the cut conditions (6.22) are almost Chapter 6. 
Solving the Fractional (g, f)-factor Problem 90 surely satisfied. Therefore in order to generate "interesting" (g, /)-factor problems, we do not think it is appropriate to use a wholly random graph. We have generated problems that either contain a (g, /)-barrier or something close a /)-barrier, i.e. a "small" cut. To do this we choose a partition S,T,U of the vertex set V(G) at random and then introduce edges and the data values / , , <7, for every vertex i in V(G) to make the partition S, T, U either violate or nearly violate (6.22), namely equality in (6.22). 6.3.3 Computational Results According to [9], we allow an error of size < 1 for our fractional (g, /)-factor problem. In pur algorithm, when the objective value < 1, we stop the algorithm and get a fractional (g, /)-factor. Otherwise, for a given optimality tolerance e, when the relative duality gap satisfies cTx — bTy + uTw 1+ | bTy — uTw | ' we stop the algorithm. In this case, the algorithm converge to an optimal solution with an integral objective value K > 1 and we get a (g, /)-barrier. In this section, we test fourteen (g, /)-factor problems using the three starting point ap-proaches and compare the computational results. Problem 1 to Problem 10 are feasible (having fractional (g, /)-factor) or infeasible problems with small size. Problem 11, 12 and 13 are disjoint union of Problem 1 and Problem 2, Problem 3 and Problem 4, and Problem 3 and Problem 5, respectively. They are the following three types of union: infeasible and infeasible, feasible and infeasible, and feasible and feasible. Problem 14 is a disjoint union of Problem 1 and Problem 2 plus 9 edges, so that there is a cut of capacity 0, i.e. a cut satisfying equality in (6.22). Table 6.2 gives the sizes of the fourteen (g, /)-factor problems and their optimal objective values. Column 1 and Column 2 of this table indicate the numbers of vertices and edges in the original graph. Column 3 indicates the number of constraints in each problem. Column 4 indicates the number of variables in (6.4) and column 5 indicates the number of variables after adding slack variables to (6.4). Column 6 gives the number of nonzero elements in the Chapter 6. Solving the Fractional (g, f)-factor Problem 91 Problems 1 V(G) | 1 E(G) | Rows Structural cols. Columns Nonzeros Opt. values 1 24 . 55 48 103 151 316 5 2 27 70 54 124 178 388 4 3 22 45 44 89 133 268 0 4 30 70 60 130 190 400 1 5 24 55 48 103 151 316 0 6 27 70 . 54 124 178 388 0 7 9 14 18 32 50 92 1 8 9 14 18 32 50 92 0 9 11 19 22 41 63 120 0 10 8 11 16 27 43 76 0 11 51 125 102 227 329 704 9 12 52 115 104 219 323 668 1 13 46 100 92 192 284 584 0 14 51 134 102 236 338 740 0 Table 6.2: Original problem size coefficient matrix after adding slack variables. Column 7 gives the optimal objective value of every problem. Table 6.3 shows the number of iterations of IPM using several different threshold value <5 in the starting point approach 2. It shows the results for two different optimality tolerance e. From the total iterations in Table 6.3, we can see that the best threshold value of approach 2 is 6 = 0.4. Table 6.4 compares the iteration numbers of the three starting point approaches using two different optimality tolerances e. From the computational results of approach 2 in Table 6.3, we know that the best threshold value is 6 = 0.4, and the second best threshold value is <5 = 0.5. Therefore, we test S = 0.4 and 6 — 0.5 in approach 3 to see the difference between approach 2 and approach 3. 
From the total iterations in Table 6.4, we can see that among the three starting point approaches, the best approach is approach 2 with 6 = 0.4. In Table 6.3 and Table 6.4, the iteration counts for all the feasible problems are relatively small. This is because as soon as the objective value is smaller than one, we stop the algorithm Chapter 6. Solving the Fractional (g, f)-factor Problem 92 and get a fractional (g, /)-factor; otherwise, we stop the algorithm only when the relative duality gap is smaller than the optimality tolerance e. From the computational results in Table 6.3 and Table 6.4, we see that doubling the problem size for both feasible and infeasible problems has little effect on the iteration count. The effect is unpredictable. In some cases the iteration count decreases, and in some cases it increases. But doubling the problem size does not affect our conclusion that the /)-factor problem-specific starting point (approach 2 with 6 = 0.4.) is a good starting point and makes the IPM more efficient for solving the fractional (<?,/)-factor problem. All the test problems were solved extremely fast (in a few seconds for the largest problems), and so these methods might apply to problems much larger than these tested here. Nonetheless we imagine a direct network flow algorithm would outcompete these methods for all but very large problems. It is shown in [89], [85] and [88] that for many classes of large scale prob-lems interior point methods are currently the most efficient computational solution approaches. Therefore, interior point methods provide, in addition to the direct network flow algorithm, new ways to solve large scale fractional (g, /)-factor problems. Chapter 6. Solving the Fractional (g, f)-factor Problem 9.3 Problem 6 --0.05 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 £ = ID'- 8 1 7 8 7 7 7 7 7 7 7 7 2 10 8 7 10 7 8 8 8 9 8 3 3 5 3 3 2 2 2 2 2 2 4 8 9 8 8 7 8 9 9 7 7 5 4 5 4 3 3 3 3 3 3 3 6 4 5 4 3 3 2 2 2 2 2 7 7 6 6 6 7 7 7 7 7 7 8 1 3 2 2 2 2 2 2 2 2 9 1 1 1 1 1 1 1 1 2 2 TO 1 1 1 1 1 2 2 2 2 2 11 8 9 7 7 . 8 8 8 8 8 8 12 8 9 9 7 8 9 9 10 7 . 7 13 5 5 4 3 3 3 3 3 3 3 14 4 6 5 4 3 4 4 4 4 4 Total 71 80 68 65 62 66 67 68 65 64 = ID - 6 1 6 8 6 6 6 6 6 6 7 7 2 9 8 7 .8 7 8 7 8 8 8 3 3 5 3 3 2 2 2 2 2 2 4 •7 8 8 7 ' 6 7 8 7 7 • 7 5 4 5 4 3 3 3 3 3 3 3 6 4 5 4 3 3 2 2 2 2 2 7 6 6 5 5 6 6 6 6 6 6 8 1 3 2 2 2 2 2 2 2 2 9 1 1 1 1 1 1 1 1 2 2 10 1 1 1 1 1 2 2 2 2 2 11 7 8 7 6 7 7 7 8 7 8 12 8 8 8 7 7 8 8 8 7 7-13 5 5 4 3 3 3 3 3 3 3 14 4 6 .5 4 3 4 4 4 4 4 Total 66 77 65 59 57 61 61 62 62 63 Table 6.3: Iteration counts for approach 2 Chapter 6. Solving the Fractional (g, f)-factor Problem 94 Problem Approach 1 Approach 2 Approach 3 <S = 0.4 S = 0.4 6 = 0.5 e = ID - 8 1 8 7 8 8 2 13 7 8 8 3 2 2 2 2 4 8 7 9 8 5 3 3 3 3 6 2 3 3 3 7 8 7 6 7 8 2 2 2 2 9 2 1 2 1 10 1 1 2 2 11 9 8 9 8 12 8 8 9 9 13 3 3 4 3 14 4 3 4 4 Total 73 62 71 68 = 1 2 3 - 6 1 7 6 7 7 2 11 7 8 8 3 2 2 2 2 4 7 6 7 8 5 3 3 3 3 6 2 3 3 3 • 7 7 6 6 6 8 2 2 2 2 9 2 1 2 1 10 1 1 2 2 11 8 7 8 8 . 12 7 7 8 8 13 3 3 4 3 . 14 4 3 4 4 Total 66 57 66 65 Table 6.4: Iteration counts for different starting point approaches Bibliography [1] I. Adler, N. K. Karmarkar, M . G . C. Resende, and G . Veiga. Data structures and pro-gramming techniques for the implementation of Karmarkar's algorithm. ORSA Journal on Computing, 1:84-106, 1989. [2] I. Adler, N. K. Karmarkar, M . G. C. Resende, and G . Veiga. An implementation of Karmarkar's algorithm for linear programming. Mathematical Programming, 44:297-335, 1989. 
Errata in Mathematical Programming, 50:415, 1991. [3] E . D. Andersen and K. D. Andersen. Presolving in linear programming. Technical re-port, Dept. of Math, and Computer Sci., Odense University, 1993. To appear in Math Programming. [4] E . D. Andersen, J . Gondzio, C. Meszaros, and X. Xu. Implementation of interior point methods for large scale linear programming. In T . Terlaky, editor, Interior Point Methods in Mathematical Programming. Kluwer Academic Publisher, 1996. [5] K. D. Andersen. A modified schur complement method for handling dense columns in interior point methods for linear programming. Technical report, Dept. of Math, and Computer Sci., Odense University, 1994. Submitted to A C M Transaction on Mathemat-ical Software. [6] R. P. Anstee. An algorithmic proof of Tutte's /-factor theorem. Journal of Algorithms, 6:112-131, 1985. [7] R. P. Anstee. Matching theory: fractional to integral. New Zealand Journal of Mathe-matics, 12:17-32,1992. [8] R. P. Anstee. Minimum vertex weighted deficiency of (g, /)-factors: A greedy algorithm. Discrete Applied Mathematics, 44:247-260, 1993. [9] R. P. Anstee and Yunsun Nam. More sufficient conditions for a graph to have factors, preprint, Department of Mathematics, The University of British Columbia, Vancouver, B . C . , Canada. [10] K. M . Anstreicher. A monotonic projective algorithm for fractional linear programming. Algorithmica, l(4):483-498, 1986. [11] K. M . Anstreicher. A standard form variant and safeguarded linesearch for the modified Karmarkar algorithm. Mathematical Programming, 47:337-351, 1990. [12] K. M . Anstreicher and R. A . Bosch. Long steps in a 0(nzL) algorithm for linear pro-gramming. Mathematical Programming, 54:251-265, 1992. 95 Bibliography 96 [13] E . R. Barnes. A variation on Karmarkar's algorithm for solving linear programming problems. Mathematical Programming, 36:174-182, 1986. [14] E . R. Barnes, S. Chopra, and D. J . Jensen. The affine scaling method with centering. Technical Report, Dept. of Mathematical Sciences, IBM T . J . Watson Research Center, P. 0. Box 218, Yorktown Heights, N Y 10598, USA, 1988. [15] Richard Bartels and Linda Kaufman. Cholesky factor updating techniques for rank 2 matrix modifications. SIAM Journal on Matrix Analysis and Applications, 10(4):557-592, 1989. [16] D. A . Bayer and J . C. Lagarias. The nonlinear geometry of linear programming, Part I : Affine and projective scaling trajectories. Transactions of the American Mathematical Society, 314(2):499-526, 1989. [17] A . L . Breariey, G . Mitra, and H. P. Williams. Analysis of mathematical programming problems prior to applying the simplex algorithm. Mathematical Programming, 15:54-83, 1975. [18] J. R. Bunch and B. N. Parlett. Direct methods for solving symmetric indefinite systems of linear equations. SIAM J. Numer. Anal., 8:639-655, 1971. [19] T . J . Carpenter, I. J . lustig, J . M . Mulvey, and D. F . Shanno. Higher order predictor-corrector interior point methods with application to quadratic objectives. SIAM Journal on Optimization, 3:696-725, 1993. [20] T . M . Cavalier and A. L. Soyster. Some computational experience and a modification of the Karmarkar algorithm. Working Paper 85-105, Dept. of Industrial and Management Systems Engineering, Pennsylvania State University, University Park, PA 16802, USA, 1985. [21] S.F. Chang and S.T. McCormick. A hierarchical algorithm for making sparse matrices sparser. Mathematical Programming, 56:1-30, 1992. [22] S. Chen. Computational experience with the Karmarkar algorithm. 
Talk held at the O R S A / T I M S Joint National Meeting in Los Angeles, C A , USA, A T & T Bell Laborato-ries, Holmdel, NJ 07733, USA, April 1986. [23] I. C. Choi and D. Goldfarb. Exploiting special structure in a primal-dual path-following algorithm. Mathematical Programming, 58:33-52, 1993. [24] I. C . Choi, C. L. Monma, and D. F. Shanno. Further development of a primal-dual interior point method. ORSA Journal on Computing, 2:304-311, 1990. [25] I. I. Dikin. Iterative solution of problems of linear and quadratic programming. Doklady Akademii Nauk SSSR, 174:747-748, 1967. Translated in : Soviet Mathematics Doklady, 8:674-675, 1967; Bibliography 97 [26] J . J . Dongarra, J . R. Bunch, C. B. Moler, and G . W . Stewart. UNPACK User's Guide. SIAM, Philadelphia, 1979. [27] I. S. Duff, A . M . Erisman, and J . K. Reid. Direct methods for sparse matrices. Oxford University Press, New York, 1989. [28] I. S. Duff and J . K. Reid. Ma27-a set of Fortran subroutines for solving sparse symmetric sets of linear equations. In Report AERE R-10533, Computer Sciences and Systems Division. A E R E Harwell, 1982. [29] I. S. Duff and J . K. Reid. The multifrontal solution of indefinite sparse symmetric linear equations. ACM Transactions on Mathematical Software, 9:302-325, 1983. [30] S. C . Eisenstat, M . C. Gurshy, M . H. Schultz, and A. H. Sherman. The Yale sparse matrix package, i. the symmetric codes. International Journal for Numerical Methods in Engineering, 18:1145-1151, 1982. [31] A . V . Fiacco and G . P. McCormick. Nonlinear Programming : Sequential Unconstrained Minimization Techniques. John Wiley & Sons, New York, 1968. Reprint : Volume 4 of SIAM Classics in Applied Mathematics, SIAM Publications, Philadelphia, PA 19104-2688, USA, 1990. [32] K. R. Frisch. The logarithmic potential method for convex programming. Unpublished manuscript, Institute of Economics, University of Oslo, Oslo, Norway, May 1955. [33] D. M . Gay. A variant of Karmarkar's linear programming algorithm for problems in standard form. Mathematical Programming, 37:81-90, 1987. Errata in Mathematical Programming, 40:111, 1988. [34] A . George and J . W. H. Liu. Computing solution of large sparse positive definite systems. Prentice-Hall, Engliwood Cliffs, NJ, 1981. [35] A . George and J. W. H. Liu. The evolution of the minimum degree ordering algorithm. SIAM Rev., 31(1-19), 1989. [36] G . de Ghellinck and J . P. Vial. A polynomial Newton method for linear programming. Algorithmic^, l(4):425-453, 1986. [37] P. E . Gill, W . Murray, and M . A. Saunders. A single - phase dual barrier method for linear programming. Technical Report SOL 88-10, Systems Optimization Laboratory, Dept. of Operations Research, Stanford University, Stanford, C A 94305, USA, August 1988. [38] P. E . Gill, W . Murray, M . A. Saunders, J . A . Tomlin, and M . H. Wright. On projected Newton barrier methods for linear programming and an equivalence to Karmarkar's pro-jective method. Mathematical Programming, 36:183-209, 1986. Bibliography 98 [39] P. E . Gill, W . Murray, and M . H. Wright. Numerical Linear Algebra and Optimiza-tion, Vol. 2. Addison Wesley Publishing Company, Amsterdam, The Netherlands, 1992. Chapter on interior point methods. [40] D. Goldfarb and M . J . Todd. Linear Programming. In G . L. Nemhauser, A . H . G . Rinnooy Kan, and M . J . Todd, editors, Optimization, volume 1 of Handbooks in Oper-ations Research and Management Science, pages 141-170. North Holland, Amsterdam, The Netherlands, 1989. [41] G . H . Golub and C. F . Van Laon. 
Matrix Computations. The Johns Hopkins University Press, Baltimore, M D , 1989. [42] J . Gondzio. Splitting dense columns of constraint matrix in interior point methods for large scale linear programming. Optimization, 24:285-297, 1992. [43] J . Gondzio. Multiple centrality corrections in primal-dual method for linear program-ming. Technical Report 20, Department of Management Studies, University of Geneva, Switzerland, November 1994. [44] J . Gondzio. Presolve analysis of linear programs prior to applying an interior point method. Technical Report 1994.3, Department of Management Studies, University of Geneva, Switzerland, 1994. [45] J . Gondzio. Hopdm (version 2.12) - a fast lp solver based on a primal-dual interior point method. European Journal of Operational Research, 85:221-225, 1995. [46] C. C. Gonzaga. An algorithm for solving linear programming problems in 0(n3L) oper-ations. In N. Megiddo, editor, Progress in Mathematical Programming : Interior Point and Related Methods, pages 1-28. Springer Verlag, New York, 1989. [47] C. C . Gonzaga. Polynomial affine algorithms for linear programming. Mathematical Programming, 49:7-21, 1990. [48] C. C. Gonzaga. Interior point algorithms for linear programming problems with inequality constraints. Mathematical Programming, 52:209-225, 1991. [49] C. C. Gonzaga. Large steps path-following methods for linear programming, Part I : Barrier function method. SIAM Journal on Optimization, 1:268-279, 1991. [50] C . C. Gonzaga. Path following methods for linear programming. SIAM Review, 34(2):167-227, 1992. [51] D. den Hertog and C. Roos. A survey of search directions in interior point methods for linear programming. Mathematical Programming, 52:481-509, 1991. [52] D. den Hertog, C. Roos, and T . Terlaky. A large-step analytic center method for a class of smooth convex programming problems. SIAM Journal on Optimization, 2:55-70, 1992. Bibliography 99 [53], D. den Hertog, C. Roos, and J . P. Vial. A complexity reduction for the long-step path-following algorithm for linear programming. SIAM Journal on Optimization, 2:71-87, .1992 . [54] A . J . Hoffman and S. T . McCormick. A fast algorithm that makes matrices optimally sparse. Technical Report SOL 82-13, Systems Optimization Laboratory, Stanford Uni-versity, Stanford, C A , USA, 1982. [55] P. Huard. Resolution of mathematical programming with nonlinear constraints by the method of centers. In J . Abadie, editor, Nonlinear Programming, pages 207-219. North Holland, Amsterdam, The Netherlands, 1967. [56] B. Jansen, C. Roos, T . Terlaky, and J . Vial. Primal-dual target following algorithms for linear programming. Technical Report 93-107, Faculty of Technical Mathematics and Informatics, T U Delft, NL-2600 G A Delft, The Netherlands, 1993. [57] N. K. Karmarkar. A new polynomial-time algorithm for linear programming. Combina-torica, 4:373-395, 1984. [58] N. K. Karmarkar and K. G . Ramakrishnan. Implementation and computational aspects of the Karmarkar algorithm for linear programming, using an iterative method for computing projections. Technical Memorandum, A T & T Bell Laboratories, Murray Hill, NJ 07974, USA, 1989. See Karmarkar and Ramakrishnan [59]. [59] N. K. Karmarkar and K. G . Ramakrishnan. Computational results of an interior point algorithm for large scale linear programming. Mathematical Programming, 52:555-586, 1991. [60] L . G . Khachian. A polynomial algorithm for linear programming. Doklady Akad. Nauk USSR, Translated in Soviet Math. Doklady, 244:1093-1096, 1979. '" [01] V . Klee and G. J . 
Minty. How good is the simplex method. In O.Shisha, editor, Inequal-ities III. Academic Press, New York, 1972. [62] M . Kojima, S. Mizuno, and A. Yoshise. A primal-dual interior point algorithm for linear programming. In N. Megiddo, editor, Progress in Mathematical Programming : Interior Point and Related Methods, pages 29-47. Springer Verlag, New York, 1989. [63] M . Kojima, S. Mizuno, and A. Yoshise. An 0(y/nL) iteration potential reduction al-gorithm for linear complementarity problems. Mathematical Programming, 50:331-342, 1991. [64] J . Komlos and E . Szemeredi. Limit distribution for the existence of hamiltonian cycles in random graphs. Discrete Mathematics, 43:55-63, 1983. [65] I. J . Lustig. Feasibility issues in a primal-dual interior point method for linear program-ming. Mathematical Programming, 49:145-162, 1990/91. Bibliography 100 I. J . Lustig, R. E . Marsten, and D. F. Shanno. Computational experience with a primal-dual interior point method for linear programming. Linear Algebra and Its Applications, 152:191-222, 1991. I. J . Lustig, R. E . Marsten, and D. F. Shanno. On implementing Mehrotra's predictor-corrector interior point method for linear programming. SIAM Journal on Optimization, 2:435-449,1992. I. J . Lustig, R. E . Marsten, and D. F. Shanno. Interior point method for linear program-ming: Computational state of the art. ORSA Journal on Computing, 6:1-14, 1994. H. M . Markowitz. The elimination form of the inverse and its application to linear programming. Management Sci., 3:255-269, 1957. I. Maros and Cs. Meszaros. The role of the augmented system in interior point methods. Technical Report TR/06/95, Department of mathematics and statistics, Brunei Univer-sity, London,1995. R. E . Marsten. The IMP System on personal computers. Talk held at the O R S A / T I M S Joint National Meeting in St. Louis, USA, Dept. of MIS, University of Arizona, Tucson, A Z 85721, USA, October 1987. K. A . McShane, C. L. Monma, and D. F. Shanno. An implementation of a primal-dual interior point method for linear programming. ORSA Journal on Computing, 1:70-83, 1989. • N. Megiddo. A variation on Karmarkar's algorithm. Preliminary Report, IBM San Jose Research Laboratory, San Jose, C A 95193, USA, 1984. N. Megiddo. Pathways to the optimal set in linear programming. In N. Megiddo, editor, Progress in Mathematical Programming : Interior Point and Related Methods, pages 131-158. Springer Verlag, New York, 1989. Identical version in : Proceedings of the 6th Mathematical Programming Symposium of Japan, Nagoya, Japan, pages 1-35, 1986. N. Megiddo and M . Shub. Boundary behavior of interior point algorithms in linear programming. Mathematics of Operations Research, 14:97-146, 1989. S. Mehrotraand J . Sun. On the implementation of a (primal-dual) interior point method. SIAM Journal on Optimization, 2(4):575-601, 1992. S. Mehrotra and J . S. Wang. Conjugate gradient based implementation of interior point methods for network flow problems. Technical Report 95-70.1, Department of Indus-trial Engineering and Management Sciences, Northwestern University, Evanston, October 1995. [78] S. Mizuno, M . J . Todd, and Y . Ye. Anticipated behavior of interior point algorithms for linear programming. Technical Report, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, N Y 14853-3801, USA, 1990. Incorrect citation, elsewhere. Bibliography 101 [79] S. Mizuno, M . J . Todd, and Y . Ye. On adaptive step primal-dual interior-point algo-rithms for linear programming. 
Mathematics of Operations Research, 18:964-981, 1993. [80] C. L . Monma and A. J . Morton. Computational experience with the dual affine variant of Karmarkar's method for linear programming. Operations Research Letters, 6:261-267, 1987. [81] R. D. C . Monteiro and I. Adler. Interior path following primal-dual algorithms : Part I : Linear programming. Mathematical Programming, 44:27-41, 1989. [82] R. D. C. Monteiro and I. Adler. An extension of Karmarkar-type algorithm to a class of convex separable programming problems with global linear rate of convergence. Mathe-matics of Operations Research, 15:408-422, 1990. [83] R. D. C . Monteiro, I. Adler, and M . G . C. Resende. A polynomial-time primal-duaj affine scaling algorithm for linear and convex quadratic programming and its power series extension. Mathematics of Operations Research, 15:191-214, 1990. [84] L. Portugal, F . Bastos, J . Jiidice, J . Paixao, and T. Terlaky. An investigation of interior point algorithms for the linear transportation problems. Technical Report 93-100, Fac-ulteit der technische wiskunde en informatica, Technische Universiteit Delft, Nederlands, 1993. [85] L . F . Portugal, M . G. C. Resende, G . Veiga, and J. J . Judice. A truncated primal-infeasible dual-feasible network interior point method. Technical Report, A T &: T Bell Laboratories, Murray Hill, NJ , USA, 1995. [86] L. Posa. Hamiltonian circuits in random graphs. Discrete Mathematics, 14:359-364, 1976. [871 J- Renegar. A polynomial-time algorithm, based on Newton's method, for linear pro-gramming. Mathematical Programming, 40:59-93, 1988. [88] M . G . C. Resende and P. M . Pardalos. Interior point algorithms for network flow problems. Technical report, A T & T Bell Laboratories, Murray Hill, NJ , USA, 1996. [89] M . G . C. Resende and G . Veiga. An efficient implementation of a network interior point method. In D.S. Johnson and C . C . McGeoch, editors, Network Flows and Matching: First DIM ACS Implementation Challenge, DIM ACS Series on.Discrete Mathematics and The-oretical Computer Science, volume 12, pages 299-348. American Mathematical Society, 1993. [90] C . Roos and J . P. Vial. Analytic centers in linear programming. Technical Report 88-74, Faculty of Mathematics and Informatics, T U Delft, NL-2628 B L Delft, The Netherlands, 1988. Bibliography 102 [91] C . Roos and J . P. Vial. Long steps with the logarithmic penalty barrier function in linear programming. In J . Gabszevwicz, J . F . Richard, and L . Wolsey, editors, Economic Decision-Making : Games, Economics and Optimization, dedicated to J. H. Dreze, pages 433-441. Elsevier Science Publisher B.V. , Amsterdam, The Netherlands, 1989. [92] D. J . Rose. Symmetric elimination on sparse positive definite systems and potential flow network problem. PhD thesis, Harvard University, Cambridge, M A , 1970. [93] E . Shamir and E . Upfal. One-factor in random graphs based on vertex choice. Discrete Mathematics, 41:281-286,1982. [94] N. Shor. Utilization of the operation of space dilatation in the minimization of convex functions. Kibernetica, 1:6-12, 1970. English translation: Cybernetics 6, 7-15. [95] G . Sonnevend. An "analytic center" for polyhedrons and new classes of global algorithms for linear (smooth, convex) programming. In A. Prekopa, J . Szelezsan, and B. Strazicky, editors, System Modeling and Optimization : Proceedings of the 12th IFIP-Conference held in Budapest, Hungary, September 1985, volume 84 of Lecture Notes in Control and Information Sciences, pages 866-876. 
Springer Verlag, Berlin, West-Germany, 1986. [96] R. A . Tapia, Y . Zhang, M . Saltzman, and A. Weiser. The predictor-corrector interior-point method as a composite Newton method. Technical Report T R 90-17, Dept. of Mathematical Sciences, Rice University, Houston, T X 77251, USA, 1990. [97] W. F . Tinney and J . W. Walker. Direct solution of sparse network equations by optimally ordered triangular factorization. In Proceedings of IEEE 55, volume 55, pages 1801-1809, 1967. [98] M . J . Todd. Recent developments and new directions in linear programming. In M . Iri and K. Tanabe, editors, Mathematical Programming : Recent Developments and Applications, pages 109-157. Kluwer Academic Press, Dordrecht, The Netherlands, 1989. [99] M . J . Todd. Potential-reduction methods in mathematical programming. Technical Report, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, N Y 14853, 1995. [100] J . A . Tomlin. An experimental approach to Karmarkar's projective method for linear programming. Mathematical Programming Study, .31:175—191, 1987. [101] J . A . Tomlin and J . S. Welch. Finding duplicate rows in a linear program. Operations research letters, 5:7-11, 1986. [102] P. M . Vaidya. An algorithm for linear programming which requires 0((m + n)n2 +(m + n)15nL) arithmetic operations. Mathematical Programming, 47:175-201, 1990. Condensed version in : Proceedings of the 19th Annual ACM Symposium on Theory of Computing, pages 29-38, 1987. Bibliography 103 [103] R. J . Vanderbei. Affine-scaling for linear programs with free variables. Mathematical Programming, 43:31-44, 1989. [104] R. J . Vanderbei. Splitting dense columns in sparse linear systems. Linear Algebra and Its Applications, 152:107-117, 1991. [105] R. J . Vanderbei and T . J . Carpenter. Symmetric indefinite systems for interior point methods. Mathematical Programming, 58:1-32, 1993. [106] R. J . Vanderbei, M . S. Meketon, and B. A . Freedman. A modification of Karmarkar's linear programming algorithm. Algorithmica, l(4):395-407, 1986. [107] W . Wang and D. P. O'Leary. Adaptive use of iterative methods in interior point methods for linear programming. Technical Report, Applied Mathematics Program, University of Maryland, College Park, M D 20742, 1995. [108] M . H . Wright. A brief history of linear programming. SIAM News, 18(3):4, November 1985. [109] M . H . Wright. Interior methods for constrained optimization. In A . Iserles, editor, Acta Numerica 1992, pages 341-407. Cambridge University Press, New York, USA, 1992. [110] H. Yamashita. A polynomially and quadratically convergent method for linear program-ming. Working Paper, Mathematical Systems Institute, Inc., Tokyo, Japan, 1986. [Ill] M . Yannakakis. Computing the minimum fill-in is NP-complete. SIAM Journal on Alge-braic Discrete Methods, 2:77-79, 1981. [112] Y . Ye. A class of potential functions for linear programming. Management Science Work-ing Paper 88-13, Dept. of Management Science, University of Iowa, Iowa City, IA 52242, USA, 1988: [113] Y . Ye. An 0(n3L) potential reduction algorithm for linear programming. Mathematical Programming, 50:239-258, 1991. [114] Y . Ye. Interior point algorithm: Theory and practice, lecture notes, Department of Management Sciences, The University of Iowa, 1995. [115] Y . Ye and M . Kojima. Recovering optimal dual solutions in Karmarkar's polynomial algorithm for linear programming. Mathematical Programming, 39:305-317, 1987. [116] D. Yudin and A . S. Nemirovsky. 
Informational complexity and efficient methods for the solution of convex extremal problems. Ekonomika i Mathematicheskie Metody, 12:357-369, 1976. English translation: Matekon 13(2), 3-25. [117] Y . Zhang and D. Zhang. On polynomiality of the Mehrotra-type predictor-corrector interior-point algorithms. Mathematical Programming, 68:303-318, 1995. 
