TIME INDEXED FORMULATION OF SCHEDULING PROBLEMS

by

DAVID NIRANJAN WILLIAMS

B.Sc.Eng., The University of Peradeniya, 1991
M.Eng.Sc., The University of Melbourne, 1993

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES, FACULTY OF COMMERCE AND BUSINESS ADMINISTRATION

We accept this thesis as conforming to the required standard.

THE UNIVERSITY OF BRITISH COLUMBIA
March 1997
© David Niranjan Williams, 1997

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Commerce and Business Administration
The University of British Columbia
Vancouver, Canada
Date: April 8, 1997

Abstract

In this thesis, we study various approaches that could be used in finding a lower bound for single and parallel machine scheduling problems. These approaches are based on integer programming formulations involving binary variables indexed by (i, t), where i denotes a job and t is a time period. For the single machine case, we provide an approximation scheme and a Lagrangian relaxation procedure, both of which produce good lower bounds. We also present a new column generation algorithm which solves the LP relaxation of the time-indexed formulation using fewer columns than the standard column generation procedure. In Chapter 3 we present a new integer programming formulation for the movie scheduling problem, based on time indexed variables. This formulation led us to investigate the general parallel machine scheduling problem, for which we present a column generation procedure, which is an extension of the work done by van den Akker for the single machine case.

TABLE OF CONTENTS

Abstract
Table of Contents
List of Tables
Acknowledgments

Chapter 1  Introduction
  1.1 Scheduling
  1.2 Complexity
  1.3 Integer Programming Formulations
  1.4 Outline of the Thesis

Chapter 2  Single Machine Scheduling
  2.1 Introduction
  2.2 Time-Indexed Formulation
  2.3 Some Valid Inequalities
  2.4 Simplification of the Model
  2.5 Aggregation and Column Generation
  2.6 Lagrangian Relaxation

Chapter 3  Parallel Machine Scheduling
  3.1 Introduction
  3.2 Movie Scheduling
  3.3 A Column Generation Approach to Parallel Machine Scheduling Problems

Chapter 4  Conclusions

Bibliography

Appendix 1  AMPL Model for the Movie Scheduling Problem
Appendix 2  Data for the AMPL Model

List of Tables

2.1 Empirical performance of approximation scheme (% from optimum)
2.2 Performance of Lagrangian relaxation (% from LP bound)
2.3 Average computational time for Lagrangian relaxation (in sec.)
3.1 Average computational time of column generation algorithm

Acknowledgments
I wish to express my sincere thanks to Prof. Maurice Queyranne, my supervisor, for his guidance, encouragement, advice, inspiration and financial support throughout my period of research for the master's degree. Special thanks are due to Mr. Sanjeev Swami, a PhD student in Marketing, and Prof. Charles Weinberg for sharing their data on the movie scheduling problem and for helping me to understand all the dimensions of this problem. I would also like to thank the Faculty of Commerce and Business Administration for providing me with financial assistance during my first two years of study.

Chapter 1

Introduction

1.1 Scheduling

Sequencing and scheduling are forms of decision-making which play a crucial role in manufacturing as well as in service industries. In the current competitive environment, effective sequencing and scheduling have become a necessity for survival in the marketplace.

Scheduling concerns the allocation of limited resources to tasks over time [4]. It is a decision-making process that has as a goal the optimization of one or more objectives. The resources and tasks may take many forms. The resources may be machines in a workshop, runways at an airport, crews at a construction site, screens in a movie theatre, processing units in a computing environment, beds in a hospital, and so on. The tasks may be operations in a production process, take-offs and landings at an airport, activities in a construction project, movies in a theatre, executions of computer programs, patients in a hospital, and so on. Each task may have a different priority level, arrival date and due date. The objective may also take many forms; minimization of the completion time of the last task, minimization of the number of tasks completed after their committed due dates, and maximization of total revenue are some of them.

Scheduling theory is concerned primarily with mathematical models that relate to the process of scheduling. It is a quantitative approach that attempts to capture the problem structure in mathematical form. This quantitative approach begins with a description of resources and tasks and with the translation of decision-making goals into an explicit objective function. Ideally, the objective function should include all costs (and revenues) in the system that depend on scheduling decisions. In practice, however, such costs are often difficult to measure, or even to identify completely. Nevertheless, three types of decision-making goals seem to be prevalent in scheduling: turnaround, timeliness, and throughput [1]. Turnaround measures the time required to complete a task. Timeliness measures the conformance of a particular task's completion to a given due date. Throughput measures the amount of work completed during a fixed period of time.

The major scheduling models can be categorized by specifying the resource configuration and the nature of the tasks. A model may contain a single machine or several machines, possibly in parallel. The jobs could have a single operation or multiple operations. If the set of jobs available for scheduling does not change over time, the system is called static, in contrast to cases in which new jobs appear over time, where the system is called dynamic.

Three kinds of feasibility constraints are commonly found in scheduling problems.
First, there are limits on the capacity of machines; second, there are technological restrictions on the order in which some jobs can be performed; and third, there may be time limit constraints, such as release dates and deadlines.

We confine ourselves to scheduling problems in which each task requires at most one resource at a time. In this case, scheduling problems are usually seen as problems that concern the scheduling of jobs on machines of limited capacity and availability. Such problems are traditionally referred to as machine scheduling problems. We also limit ourselves to nonpreemptive (no job splitting) schedules. The usual setting for this type of problem is as follows. A set of n jobs has to be scheduled on machines. Each job j (j = 1, ..., n) must be processed without interruption during a period of length p_j. A machine can handle no more than one job at a time and is continuously available from time zero onwards. We are asked to find an optimal feasible schedule, that is, a set of start times such that the capacity, availability and time limit constraints are met and a given objective function is optimized.

It is possible that each job j has a release date r_j, after which it becomes available, and a due date d_j, which indicates the time by which it ideally should be completed. In addition it may have a positive weight w_j, which expresses its importance with respect to the other jobs. Some well known objective functions are the weighted sum of completion times, the weighted sum of lateness (the amount of time by which a job is late or early, i.e. its completion time minus its due date) and the weighted sum of tardiness (the amount of time by which a job exceeds its due date if it is completed after that time, and 0 otherwise).

1.2 Complexity

Scheduling problems are a special class of combinatorial optimization problems, where we have to choose an optimal solution from a finite number of potential solutions. Many combinatorial optimization problems, including the scheduling problems considered in this thesis, are NP-hard. This implies that no algorithm is known that solves each instance of the problem to optimality in polynomial time with respect to the size of the instance. Hence, as we wish to solve the problem to optimality, we shall allow for exponential time algorithms. Although most optimization algorithms for hard combinatorial optimization problems are enumerative in nature, the aim is still to develop algorithms that perform satisfactorily in practice for instances of reasonable size. The main concern is to avoid exhaustive enumeration of the solution space. There are a number of optimization methods, like dynamic programming and branch-and-bound, that aim at implicit enumeration of the solution space.

Branch-and-bound solves a combinatorial optimization problem by breaking up the feasible set of solutions into successively smaller subsets, calculating bounds on the objective function value over each subset, and using them to discard certain subsets from further consideration. If a lower bound on the objective in one part of the solution space is larger than the value of a solution already obtained in a different part of the solution space, the former part of the solution space may be discarded. Therefore, in order to apply branch-and-bound effectively, it is very important to find good lower bounds that can be computed easily.
1.3 Integer Programming Formulations

Machine scheduling problems are basically defined on permutations of a set of tasks. Each permutation produces a schedule that can be associated with a given cost (or revenue). Our goal is to find the cheapest (or highest revenue) feasible schedule. In these terms, we define a discrete optimization problem (P):

  min { f(s) : s ∈ S }                                                   (1.1)

where f is the objective function and s is a general solution in a set S of feasible solutions. Given the discrete character of this problem, it can be formulated as an integer linear program. This means that the problem is described by a decision vector x, a linear objective function cx, and a feasible region for x, represented by a set of linear constraints Ax ≤ b. In addition some or all components of x must be integer (or binary). If we relax the integrality conditions, then we obtain a linear programming problem, which is called the LP-relaxation. The optimal value of the LP-relaxation is a lower bound on the optimal value of the original problem. Solving LP-relaxations can hence be used as a lower bounding strategy in a branch-and-bound algorithm.

For nonpreemptive machine scheduling problems different mixed-integer programming formulations have been proposed. There are formulations based on completion times, start times and positional start times. Below we give a few examples.

A formulation based on completion times:

In nonpreemptive machine scheduling problems a schedule is uniquely determined by the corresponding job completion times C_j (j = 1, ..., n). Most of the performance measures depend on job completion times. To illustrate, consider the single machine, weighted sum of completion times problem. This problem is solved in polynomial time by scheduling the jobs in order of nonincreasing ratio w_j/p_j [4]. This problem can be formulated as follows:

  min Σ_{j=1}^{n} w_j C_j                                                (1.2)

  subject to

  C_j ≥ p_j,                              j = 1, ..., n                  (1.3)

  C_j − C_i ≥ p_j  ∨  C_i − C_j ≥ p_i,    i, j = 1, ..., n, i < j        (1.4)

where the symbol ∨ is the logical "or" (disjunction). Constraints (1.3) state that no job can start before time zero and constraints (1.4) describe machine availability (i.e., no two jobs can overlap in their execution).

The above formulation is not a linear program because of the disjunctive constraints (1.4). The application of branch-and-bound to a disjunctive program is straightforward. First the set of disjunctive constraints is relaxed to obtain an LP-relaxation. If the optimal solution to the LP-relaxation satisfies all the disjunctive constraints, then the solution is optimal for the disjunctive program as well. However, if one of the disjunctive constraints is violated, then two additional LPs are generated by adding one term of the disjunctive constraint to each of them. This is continued until we find an optimal solution that satisfies all the disjunctive constraints.

The main advantage of this formulation is that it contains only n variables. This implies that instances with a large number of jobs or with large processing times can be easily handled by this formulation. A disadvantage is that only problems with a linear objective function in the completion times can be modeled in a straightforward way. In other situations, new variables and constraints have to be introduced.
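To make the w_j/p_j rule mentioned above concrete, the following small Python sketch (Python is used only for illustration here, and the job data are made up) sorts the jobs in nonincreasing w_j/p_j and evaluates the resulting weighted sum of completion times.

# Minimal sketch of the WSPT rule (nonincreasing w_j/p_j) for the single
# machine weighted sum of completion times; the job data are hypothetical.
def wspt(p, w):
    order = sorted(range(len(p)), key=lambda j: w[j] / p[j], reverse=True)
    t, total = 0, 0
    for j in order:
        t += p[j]           # completion time C_j of job j
        total += w[j] * t   # accumulate w_j * C_j
    return order, total

p = [3, 1, 4, 2]            # processing times
w = [2, 5, 3, 4]            # weights
print(wspt(p, w))           # job order and objective value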
A formulation based on start times and sequence determining variables:

Let us consider the same scheduling problem (single machine, weighted sum of completion times) described above. The formulation described here is based on variables t_j (j = 1, ..., n) that denote the start times of the jobs and on variables δ_ij (i, j = 1, ..., n, i ≠ j) such that δ_ij = 1 if job i is processed before job j, and 0 otherwise. This formulation is given by:

  min Σ_{j=1}^{n} w_j (t_j + p_j)                                        (1.5)

  subject to

  t_j − t_i ≥ p_i − M(1 − δ_ij),     i, j = 1, ..., n, i ≠ j             (1.6)

  δ_ij + δ_ji = 1,                   i, j = 1, ..., n, i < j             (1.7)

  δ_ij ∈ {0, 1},                     i, j = 1, ..., n, i ≠ j             (1.8)

  t_j ≥ 0,                           j = 1, ..., n                       (1.9)

where M (≥ Σ_{j=1}^{n} p_j) is a large integer. Constraints (1.6) describe the machine availability and constraints (1.7) ensure that, for each pair of jobs, one of them is processed before the other. Precedence constraints can easily be included in this formulation. If job i has to precede job j, then the constraint δ_ij + δ_ji = 1 is replaced by δ_ij = 1 and δ_ji = 0.

This formulation can be considered as an extension of the formulation based on completion times. This extended formulation is found to be stronger than the completion time one [9]. The modeling of objective functions in this formulation has the same problems as in the completion time formulation.

In this thesis we consider the time-indexed formulation of scheduling problems that has been studied before by Sousa and Wolsey [7] and then by van den Akker [9]. This formulation is based on time discretization. The planning horizon is divided into T time periods, where period t starts at time t − 1 and ends at time t. Computational experiments [9] indicate that the LP-relaxation of the time-indexed formulation yields good lower bounds. A limitation to the applicability of these lower bounds in a branch-and-bound algorithm is the considerable amount of time needed to solve the LP-relaxations. This is mainly due to the large size of this formulation. Our efforts were focused on reducing this computational time by means of efficient lower bounding algorithms.

1.4 Outline of the Thesis

The main goal of our research was to devise better methods to solve the LP-relaxation of the time indexed formulation for single machine and identical parallel machine scheduling problems. The solution to the LP-relaxation gives a good lower bound for the optimal value of the scheduling problem and hence may be used in a branch-and-bound algorithm.

In Chapter 2, we describe the time indexed model for the single machine scheduling problem and the progress made up to now in solving it. We show some simplified forms of this formulation and discuss their solution quality. We also provide a Lagrangian relaxation method for this problem and propose a new column generation algorithm to solve the LP-relaxation.

In Chapter 3, we investigate the time indexed formulation for parallel machine scheduling problems. We first show how this model could be used for the movie scheduling problem described by Swami, Weinberg and Eliashberg [8]. Then we describe a new column generation procedure which could be used to solve the LP relaxation of the parallel machine scheduling problem.

Finally, Chapter 4 concludes this thesis with a brief survey of the main results, and with some topics for further research.

Chapter 2

Single Machine Scheduling

2.1 Introduction

The single machine scheduling problem is the simplest scheduling environment and a special case of all other scheduling environments. In this case each job consists of a single operation. The number of jobs to be processed is assumed to be finite and is known in advance. Furthermore, it is assumed that the machine will be continuously available, without breakdown, until all the jobs are completed.
A job j becomes available for processing on its release date r_j and has a weight w_j and a due date d_j.

2.2 Time-Indexed Formulation

In this section, we introduce the time-indexed formulation that will be investigated in this thesis. This formulation is based on time discretization. Time is divided into periods, where period t starts at time t − 1 and ends at time t. The planning horizon is made up of T time periods. The formulation uses binary variables x_jt, where x_jt = 1 implies that job j starts in period t. The formulation is given by:

  min Σ_{j=1}^{n} Σ_{t=r_j}^{T−p_j+1} c_jt x_jt                          (2.1)

  subject to

  Σ_{t=r_j}^{T−p_j+1} x_jt = 1,               j = 1, ..., n              (2.2)

  Σ_{j=1}^{n} Σ_{s=t−p_j+1}^{t} x_js ≤ 1,     t = 1, ..., T              (2.3)

  x_jt ∈ {0, 1},          j = 1, ..., n;  t = 1, ..., T − p_j + 1        (2.4)

The first set of equations ensures that each job is scheduled exactly once, whereas the second set of inequalities ensures that at each time at most one job is executed.

An advantage of this formulation is that it can be used to model many variants of nonpreemptive single-machine scheduling. We can model all objective functions of the form Σ_{j=1}^{n} f_j(t_j) by setting c_jt = f_j(t − 1). If the objective is to minimize the weighted sum of tardiness, then we set c_jt = w_j(t + p_j − d_j − 1) for t + p_j − 1 > d_j and c_jt = 0 for t + p_j − 1 ≤ d_j. Choosing c_jt = w_j(t + p_j − 1) we minimize the weighted sum of completion times. The weighted number of late jobs is minimized by using c_jt = w_j if t > d_j − p_j + 1 and c_jt = 0 otherwise. To incorporate release dates r_j we simply discard the variables x_jt for the periods t < r_j. This means that these variables are implicitly fixed at 0. Deadlines are incorporated in a similar way.

Precedence constraints can also be handled by the time indexed formulation. If we want job j to precede job k in all feasible schedules, we can add either the inequalities [5]

  x_jt − Σ_{s=t+p_j}^{T−p_k+1} x_ks ≤ 0,      t = 1, ..., T − p_j + 1    (2.5)

or the single inequality [6]

  Σ_{t=1}^{T−p_k+1} (t − 1) x_kt ≥ Σ_{t=1}^{T−p_j+1} (t − 1) x_jt + p_j  (2.6)

The main disadvantage of the time-indexed formulation is that it contains a large number of variables and a large number of constraints. There are n + T constraints and approximately nT variables. As T has to be at least Σ_j p_j, the formulation is pseudo-polynomial (since its size depends on the processing times of the jobs). Therefore, to handle instances with a large number of jobs or large processing times we have to resort to techniques designed for large linear programs.

The LP-relaxation of this integer programming model is obtained by replacing constraints (2.4) by nonnegativity constraints. Therefore, the LP-relaxation is given by:

  min Σ_{j=1}^{n} Σ_{t=r_j}^{T−p_j+1} c_jt x_jt                          (2.7)

  subject to

  Σ_{t=r_j}^{T−p_j+1} x_jt = 1,               j = 1, ..., n              (2.8)

  Σ_{j=1}^{n} Σ_{s=t−p_j+1}^{t} x_js ≤ 1,     t = 1, ..., T              (2.9)

  x_jt ≥ 0,               j = 1, ..., n;  t = 1, ..., T − p_j + 1        (2.10)
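To make the size and structure of this linear program concrete, the following small Python sketch builds the LP-relaxation (2.7)-(2.10) column by column and solves it with a generic LP solver. It assumes SciPy is available, uses weighted completion time costs c_jt = w_j(t + p_j − 1), zero release dates and made-up job data; it illustrates the formulation only, not the specialized algorithms developed later in this chapter.

# Sketch: LP-relaxation (2.7)-(2.10) of the time-indexed formulation, solved
# directly with SciPy's linprog; costs are c_jt = w_j (t + p_j - 1) and all
# release dates are zero.  Job data are illustrative.
import numpy as np
from scipy.optimize import linprog

p = [3, 1, 4, 2]                       # processing times
w = [2, 5, 3, 4]                       # weights
n, T = len(p), sum(p)

cols = [(j, t) for j in range(n) for t in range(1, T - p[j] + 2)]
c = np.array([w[j] * (t + p[j] - 1) for (j, t) in cols], dtype=float)

A_eq = np.zeros((n, len(cols)))        # (2.8): each job starts exactly once
A_ub = np.zeros((T, len(cols)))        # (2.9): at most one job per period
for idx, (j, t) in enumerate(cols):
    A_eq[j, idx] = 1.0
    A_ub[t - 1:t - 1 + p[j], idx] = 1.0    # job j occupies periods t..t+p_j-1

res = linprog(c, A_ub=A_ub, b_ub=np.ones(T), A_eq=A_eq, b_eq=np.ones(n),
              bounds=(0, None), method="highs")
print("LP lower bound:", res.fun)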
2.3 Some Valid Inequalities

The LP-relaxation of the time indexed formulation can be strengthened by adding some valid inequalities. These valid inequalities can contribute to finding a better lower bound for the original scheduling problem or to pruning some branches in a branch-and-bound tree. One such set of valid inequalities, described in [9], is the following. Let j be a job, let t be a time period, and define p_max = max_{k≠j} {p_k}. If we choose Δ ∈ {2, ..., p_max}, it follows from the restriction that jobs are not allowed to overlap that no job k ≠ j can be started in one of the periods t − p_k + Δ, ..., t when the processing of job j starts in one of the periods t − p_j + 1, ..., t + Δ − 1. On the other hand, no two jobs k and l different from j can be started simultaneously in one of the periods t − p_k + Δ, ..., t and t − p_l + Δ, ..., t, respectively. This can be formulated as the following inequality:

  Σ_{s=t−p_j+1}^{t+Δ−1} x_js + Σ_{k≠j} Σ_{s=t−p_k+Δ}^{t} x_ks ≤ 1,    Δ ∈ {2, ..., p_max}    (2.11)

This set of inequalities is not implied by the system (2.8)-(2.10). However, for Δ = 1 we get back inequalities (2.9). All the facet inducing inequalities with right hand side 1 and 2 are given in [9]. Any facet inducing inequality with integral coefficients and with right hand side equal to 1 is of the form x(V) ≤ 1, where x(V) denotes Σ_{(j,s)∈V} x_js. Each facet inducing inequality x(V) ≤ 1 is maximal in the sense that there does not exist any W ⊃ V such that x(W) ≤ 1 is valid. Van den Akker [9] also derived necessary and sufficient conditions for inequalities with right hand side 2 to define facets.

2.4 Simplification of the Model

As pointed out earlier, the major drawback of time indexed formulations arises from the increase in the time horizon and therefore in the whole size of the formulation. The size of this formulation depends on the number and duration of the jobs and on the degree of accuracy used to describe the durations of these jobs. To obtain approximations to the original problem with smaller numbers of variables and constraints, we can apply some kind of rounding before discretizing the time. If we can make the processing times, release dates and due dates all have a common divisor larger than 1, the size of the model can be reduced by dividing all the above quantities by that common divisor before time discretization. For this we round the processing times, release dates and due dates after dividing them by a scaling parameter a. After dividing we can either round the parameters up or round them down. If we round p_j and r_j down and d_j up, the resulting problem (P') is a relaxation of the original problem P. On the other hand, if we round p_j and r_j up and d_j down, the resulting problem (P'') is a restriction of the original problem. Therefore, solving P' produces a lower bound on the optimal integer solution of the original problem P, and solving P'' produces an upper bound on the optimal integer solution of the original problem P.

These approximation procedures were tested on randomly generated problems (minimizing total weighted tardiness) with a = 2, 3, 5, 10 and 20. The computational results suggest that this method gives very good bounds (within 10% of the optimum) when a = 2. In this case the size (total number of nonzero matrix entries) of the formulation is one fourth that of the original problem. The bound deteriorates as a increases. This procedure could be used within a branch-and-bound algorithm to fathom some branches. The generation procedure was as follows [9]. For different values of the number of jobs (n = 10, 20, 30), let the processing times p_j be integers uniformly distributed in [1, p_max]. The weights are in [1, 10] and the release dates are in [0, (Σ_{j=1}^{n} p_j)/2]. We consider instances with p_max = 5, 10 and 20. We used CPLEX 3.0 to solve the linear programs. The results for n = 10 are given in Table 2.1.

Table 2.1: Empirical performance of approximation scheme (% from optimum)

  (n, p_max)    a = 2    a = 3    a = 5    a = 10
  (10, 5)          9       22       82       95
  (10, 10)        10       19       76       90
  (10, 20)         7       27       80       87
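The rounding itself is straightforward. The sketch below builds the relaxed instance P' and the restricted instance P'' from an instance of P for a given scaling parameter a, exactly as described above; the instance data are made up for illustration.

# Sketch: the scaling/rounding scheme of Section 2.4.  P' (p_j, r_j rounded
# down, d_j rounded up) is a relaxation of P; P'' (p_j, r_j rounded up, d_j
# rounded down) is a restriction of P.
import math

def scale_instance(p, r, d, a):
    relaxed = ([pj // a for pj in p],
               [rj // a for rj in r],
               [math.ceil(dj / a) for dj in d])
    restricted = ([math.ceil(pj / a) for pj in p],
                  [math.ceil(rj / a) for rj in r],
                  [dj // a for dj in d])
    return relaxed, restricted

p = [7, 3, 9, 5]                       # illustrative instance of P
r = [0, 2, 5, 1]
d = [12, 8, 20, 15]
inst_relaxed, inst_restricted = scale_instance(p, r, d, a=2)
print("P' :", inst_relaxed)
print("P'':", inst_restricted)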
These results show that, depending on the quality of the bound we need, we can reduce the problem size by using a suitable scaling factor. In the initial steps of a branch-and-bound algorithm we can use a larger a and then gradually reduce it to 1 as the solution improves. This will keep the initial optimization problems small. The computational experiments we did are limited to a small number of problem instances and the conclusions drawn from them are tentative at best.

2.5 Aggregation and Column Generation

In this section we provide a new column generation procedure to solve the LP-relaxation of the time-indexed formulation. Empirical findings indicate that the typical number of iterations needed by the simplex method increases proportionally with the number of rows and only very slowly with the number of columns. The time indexed formulation contains n + T rows, where T ≥ Σ_{j=1}^{n} p_j. Hence, especially for instances with large processing times, we may expect large running times when the LP-relaxation is solved through the simplex method. To reduce the number of constraints we introduce an aggregation scheme which generates a single knapsack type constraint from Δ successive capacity constraints. This reduces the number of constraints to n + ⌈T/Δ⌉. The result is a relaxed version of the original problem and hence its optimal value is a lower bound for the LP-relaxation. To find the optimal value of the LP-relaxation itself, we incorporate column generation and row generation procedures into this aggregation scheme.

In the aggregation phase we generate a 0-1 knapsack inequality by adding the time period constraints over Δ consecutive time periods k, k + 1, ..., k + Δ − 1. This gives the single constraint

  Σ_{j=1}^{n} Σ_{s=k−p_j+1}^{k+Δ−1} a_js x_js ≤ Δ

instead of Δ original constraints. The coefficients a_js are nonnegative with the following properties:

1. For every job j, as s increases a_js increases from 1 by steps of 1 until it attains the value min{p_j, Δ}, stays constant, and then decreases to 1 by steps of 1. That is, a_js = min{p_j, Δ, (k + Δ − s)^+, (s + p_j − k)^+}.

2. If p_j < Δ, then a_js = p_j for s ∈ {k, ..., k + Δ − p_j}.

The reduced cost of a variable x_jt is given by

  c̄_jt = c_jt − u_j − Σ_{h=⌈t/Δ⌉}^{⌈(t+p_j−1)/Δ⌉} a_jth v_h              (2.12)

where u_j is the price associated with job constraint j and v_h is the price associated with aggregated capacity constraint h (both given directly by the current dual variables). Here a_jth is the coefficient of variable x_jt in the aggregated capacity constraint h.
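The coefficient formula above is easy to tabulate. The following sketch evaluates a_js for one job and one aggregated block of periods k, ..., k + Δ − 1 (the values are chosen for illustration), and its output shows the increase, plateau and decrease described in property 1.

# Sketch: coefficients a_js of the aggregated knapsack constraint that replaces
# the capacity constraints of periods k, ..., k + delta - 1.
def a_coeff(p_j, k, delta, s):
    # a_js = min{ p_j, delta, (k + delta - s)^+, (s + p_j - k)^+ }
    return min(p_j, delta, max(0, k + delta - s), max(0, s + p_j - k))

p_j, k, delta = 3, 10, 5                       # illustrative values
for s in range(k - p_j + 1, k + delta):        # start periods that meet the block
    print("s =", s, " a_js =", a_coeff(p_j, k, delta, s))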
The Algorithm:

Step 1. Select Δ (the number of original constraints that are to be aggregated to form one aggregated constraint). Find a few feasible schedules for the original problem and generate the corresponding columns in the aggregated problem.

Step 2. Solve the LP-relaxation with the generated columns. From the dual costs find out whether there are any variables with negative reduced cost. If there are some variables with negative reduced cost go to Step 3, else go to Step 4.

Step 3. Add some columns with negative reduced costs to the LP. Go to Step 2.

Step 4. Check whether the current solution violates any of the original constraints. If some constraints are violated, add those constraints to the LP and go to Step 2. If there are no violated constraints, STOP: the optimal LP solution has been found.

This algorithm is faster than the general column generation algorithm described by Sousa [6], because it does not use all n + T constraints in all the iterations. Constraints are added as the algorithm progresses and hence only a few iterations are done with a large number of constraints. Sousa [6] reported that the average number of columns generated by a general column generation algorithm for the time indexed model is around one third of the total number of columns. His algorithm uses all the rows in every iteration. We tested the above algorithm on 60 problems generated according to the description in Section 2.4. The total number of columns generated by the above algorithm was always less than 20% of the total number of columns. The computational experiments we did are limited to a small number of problem instances and the conclusions drawn from them are tentative at best.

As this algorithm adds rows progressively, the number of columns and the number of rows increase in each step. At the last iteration the LP has about 95% of its rows generated. The main advantage of this algorithm is that during the initial iterations it uses only a few rows (as rows are added progressively) and hence in those iterations the LP can be solved very fast. This, combined with the fact that this algorithm needs about 10% fewer columns than the standard column generation algorithm used by Sousa, makes it a faster algorithm than the standard column generation algorithm for time indexed formulations of single machine scheduling problems. This algorithm can be used to find an LP bound faster, hence speeding up the execution time of any branch-and-bound algorithm making use of LP bounds.
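For comparison, the sketch below implements a standard column generation loop of the kind referred to above as the baseline: the restricted LP keeps all n + T rows from the start, and columns x_jt are priced in while some reduced cost c_jt − u_j − Σ_{s=t}^{t+p_j−1} v_s is negative. It assumes SciPy, zero release dates and weighted completion time costs, and starts from one arbitrary feasible schedule; it is only an illustration of the baseline, not the aggregation-based algorithm of this section, and the details of Sousa's implementation [6] may differ.

# Sketch: standard column generation for the time-indexed LP-relaxation.
# All n + T rows are present; columns are added while a negative reduced cost
# exists.  Job data are illustrative.
import numpy as np
from scipy.optimize import linprog

p = [3, 1, 4, 2]
w = [2, 5, 3, 4]
n, T = len(p), sum(p)
cost = lambda j, t: w[j] * (t + p[j] - 1)          # weighted completion time c_jt

def solve_restricted_master(cols):
    c = np.array([cost(j, t) for (j, t) in cols], dtype=float)
    A_eq = np.zeros((n, len(cols)))
    A_ub = np.zeros((T, len(cols)))
    for idx, (j, t) in enumerate(cols):
        A_eq[j, idx] = 1.0
        A_ub[t - 1:t - 1 + p[j], idx] = 1.0        # periods t..t+p_j-1
    return linprog(c, A_ub=A_ub, b_ub=np.ones(T), A_eq=A_eq, b_eq=np.ones(n),
                   bounds=(0, None), method="highs")

cols, start = [], 1                                # one feasible schedule to start
for j in range(n):
    cols.append((j, start))
    start += p[j]

while True:
    res = solve_restricted_master(cols)
    u = res.eqlin.marginals                        # duals of the job constraints
    v = res.ineqlin.marginals                      # duals of the capacity constraints
    new_cols = []
    for j in range(n):
        for t in range(1, T - p[j] + 2):
            if (j, t) in cols:
                continue
            if cost(j, t) - u[j] - v[t - 1:t - 1 + p[j]].sum() < -1e-9:
                new_cols.append((j, t))
    if not new_cols:
        break                                      # no negative reduced cost: optimal
    cols.extend(new_cols)

print("LP bound:", res.fun, " columns generated:", len(cols))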
The network associated with this matrix is acyclic and for each job j and each time period s, with 5 < T — Pj + 1, it has an arc from s to s + pj that indicates that the machine processes job j from time 5 to time s + pj. Each of these arcs has a length or cost of (CJ + aj). A S  shortest path algorithm could be used to solve the above problem for a given vector of values of Q j , j = 1,..., n. In this case the resulting schedule may have some jobs processed more than once and some others not processed at all. We use a subgradient optimization procedure to update the Lagrangian multipliers so as to improve the lower bound. If we relax the capacity constraints (2.3) the remaining problem is a simple selection problem. Here we have to select the best start time t for each job. The objective function is n T-Pi+1  minE E j= l  t=Tj  T  / n  t  +J> E E 1=1  20  \j=ls=i- ,+l P  \ J°  X  -  (- )  1  2  /  17  The relaxed problem is now subject to T-pj+l  £  x  Jt  = l  j = L...,n  (2.18)  t = Tj  x E {0,1} jt  j = l , . . . , n ; t= 1 , . . . , T -  P j  + 1  (2.19)  Here S > 0 are the Lagrange multipliers. The optimal solution to the t  above problem involves selecting the best (cj + S ) value for each job j. This t  t  solution has each job processed exactly once, but may violate some capacity constraints. The Lagrange multipliers associated with the above two formulations were calculated using the subgradient optimization method.' Subgradient optimization is an iterative procedure which involves generating new Lagrange multipliers from an initial set of multipliers, in a systematic fashion [2]. It attempts to maximize the lower bound obtained from the relaxed problem, by a suitable choice of multipliers. For example, consider the second Lagrangian relaxation described above. The subgradient optimization iterative procedure for that problem will be as follows:  Step 1. Let 7r be a given  parameter satisfying 0 < 7r < 2. Initialize  ZUB,  the upper bound on the objective value by assigning to it the objective value of a feasible heuristic solution (schedule). Set  Step 2. Solve the selection  Z  MAX  =  —oo.  problem (2.17-2.19) with the current set (6 ) of t  multipliers, to get a solution (xj ) with value ZLBt  Step 3.  Define subgradients g for the relaxed constraints, evaluated at the t  21  current solution, by: t Gt=  t =  £ X J . - I s-t-pj + l  l,...,T  Step 4. Define a (scalar) step size A- by:  Thus, this step size depends upon the gap between  ZTJB  and  ZLB  and  the user defined parameter TT. Step 5. Update 6 using 6 — max(0,6 + kG ) for t = 1 , . . . , T , set Z t  max(Z  Tnar  t  t  t  m a I  =  , Z£,a). If the termination rule described below does not ap-  ply, then go to step 2 to re-solve the problem with this new set of multipliers. To terminate the above iterative process we introduce a termination rule based on either limiting the number of iterations that can be done, or on the value of TT (reducing it during the procedure and terminating when TT is small). If  Zmax  has not improved in the last JV subgradient iterations with  the current value of n then we halve the value of - (initially TT is set to 2) [2]. The above two forms of relaxations were tested on some randomly generated problems described in section 2.4. We used N = 3. The computational experiments indicate that the first form of relaxation gives a better lower bound than the second one. The results are presented in Table 2.2. 
The results for the other values of n were very similar to the ones shown in Table 2.2. In the first form of relaxation we have to solve a shortest path problem in 22  Table 2.2: Performance of Lagrangian Relaxation (% from LP bound)  (10,5) (10,10) (10,20)  Relaxed Constraints Job Capacity 74 92 69 90 65 87  Table 2.3: Average Computational Time for Lagrangian Relaxation (in Sec.)  (ft, Pmax)  (10,5) (10,10) (10,20)  Relaxed Constraints Job Capacity 6 2 8 3 12 5  each iteration. Whereas in the second form of relaxation only a selection has to be done in each iteration, making it a faster relaxation algorithm (Table 2.3). The computational experiments we did are limited to a small number of problem instances and the conclusions drawn from them are tentative at best.  23  Chapter 3 Parallel Machine Scheduling Problems 3.1  Introduction  In this chapter we consider the problem of scheduling identical parallel machines. Parallel machine scheduling problems concern the scheduling of n jobs on m parallel machines to minimize some function of the job completion times. We may consider scheduling parallel machines as a two-step process. First, we have to determine which jobs are to be allocated to which machines; second, we have to determine the sequence of jobs allocated to each machine. The single machine model discussed in Chapter 2 could be extended for this case by modifying the capacity constraints.  In this chapter we present a  model for scheduling movies in a multiplex theatre. Then we consider the general identical parallel machine scheduling problem and extend the column generation algorithm presented by van den Akker [9] to this problem.  24  3.2  Movie Scheduling  This portion of the research was motivated by a problem presented by Swami, Weinberg and Eliashberg [8].  They considered the problem of scheduling  movies on screens of a multiplex cinema theatre. Given a set of movies j = 1 , . . . , n and their release dates r, and weekly revenues Rj , t £ {r^,..., T}, t  we want to select a subset of movies to show on our screens in order to maximize the theatre's total revenues over horizon [0, T). The revenue generated by a movie in a given week t depends on its age (t — r,-). Movies generate their maximum revenue in the week of their release and then the revenue typically decays; following [8] we assume revenue decays exponentially. If a movie j is selected for screening, it has to be shown for at least p° weeks, its obligation period. A movie could be shown for any number of weeks up to (T — Tj) weeks. The portion {3 ) of the revenue retained by the theatre, the t  retention revenue varies with the screening age of the movie. Typically, for the first week of screening the exhibitor gets 10% of the total revenue. He or she gets 20% and 30% of the total revenue for the second and third weeks, respectively. After that he gets 40% of the total revenue generated by that movie. The exhibitor is allowed to deduct the cost involved in showing the movie, the house nut H (typically, around $500 per week), from the total revenue before calculating the retention revenue. If the total revenue is less than the house nut, then the exhibitor gets to keep everything. The exhibitor also generates some revenue from the concession stands in the theatre. The revenue from this source is considered to be a certain percentage 6 (around 28%) of the total revenue generated by the movie and the exhibitor keeps all 25  of this revenue. The planning horizon for this problem is fairly small. 
Generally we consider about 10-26 weeks and around 30 movies. The screening period of a movie could start any week after its release date r, and go on up to week T. Therefore in our model we consider k : = T — max(p j,r ) distinct screening (  }  periods p j, I = 0,1,....  J  kj, for each movie j. There are m identical screens  l  in the theatre and the total revenue (from the movie and concession stands) retained by the theatre by showing movie j for p'j weeks starting from week t is given by e) = E ^ t  _ 1  (max(0, R  - H) x 0 + H + 6R ).  js  t  JS  This model  could be formulated as follows: „  k  }  T-p{+l  max  E j'=l 1=0  «**5t  t = Tj  subject to k, T - p J + l  E /=0  EE  E  *5I<1  i = l,---,n  (3.2)  t = l,...,T  (3.3)  t = T,  E  j=i ;=o ,=«-pj+i x e{0,l}  (3.4)  l  jt  where x j = 1 if movie j is screened for p j weeks starting from week t, 0 l  !  t  otherwise. This model was coded in A M P L (a modeling language for mathematical programming) [3] and tested on some real world as well as some randomly generated problems. The real world problems were constructed using the actual weekly revenues and release dates published by the VARIETY  magazine for the movies that were playing in North America (see 26  Appendices 1 and 2 for the A M P L model and the data set, respectively). Some fictitious problems were also created assuming an exponential decay in revenue with the age of the movies. Then we used the model to find the optimum schedules. The computational experiments indicated that the L P relaxation of the above problem gives an integer solution most of the time. The dynamic programming based procedure proposed by Swami, Weinberg and Eliashberg [8] gives the optimum schedule when there is only one screen in the theatre. They also proposed a greedy heuristic which makes use of this dynamic programming procedure, for the multiple screen case. This greedy heuristic allocates movies to screens using the single screen model and moves from screen to screen in a greedy fashion (for more details on this heuristic see [8]). This greedy heuristic does not converge to the global optimum in all problem instances. The integer programming model we described above will always give the optimum schedule. The problem size and running time are reasonably small for all real world problems in the movie industry. The planning horizon is usually about 8 to 10 weeks and during which time there'll be about 25 movies available in the market. This will give rise to a problem with about 2000 variables and 35 constraints. A problem of this size can be solved within a minute using A M P L / C P L E X 3.0 running on a HP 9000-735 workstation (under the HP-UX 9.05 operation system) The problem we considered in this section is a simplification of the real world movie scheduling problem. We simplified the problem by assuming that the future revenues of all the movies are known and all the screens in the theatre have the same seating capacity. Designing a forecasting model for  27  the movie revenues and including the seating constraints in the time indexed formulation could form topics for further research.  3.3  A Column Generation Approach to Parallel Machine Scheduling Problem  The general identical parallel machine scheduling problem may be formulated using time indexed variables, as follows:  min]T j=l  subject  £  cx jt  (3.5)  jt  t=r,  to  ]T  £ j=l  E S=t —  Xjt = 1  Js<m  x  j = l,...,n  (3.6)  * = 1,...,T  (3.7)  Pj+1  x E{0,l} jt  j = l,...,n; < = 1 , . . . 
, T -  P j  + 1  (3.8)  Note that this formulation ignores the assignment of jobs to machines. Given an integer solution to (3.5)-(3.8), it is straightforward to define a corresponding feasible m-machine schedule with equal total cost [5]. Here we describe a Dantzig-Wolfe decomposition approach to solve the L P relaxation of the above formulation. This is an extension of the algorithm proposed by van den Akker [9] for the single machine case. Denote by PS the polytope defined by the inequalities (3.7) and the nonnegativity constraints. Then the LP relaxation could be viewed as (3.5) subject to (3.6) and (xj ) G PS. t  28  The polytope PS is integral since the constraint matrix of the inequalities (3.7) is an interval matrix and therefore totally unimodular. The vertices of PS are pseudo- schedules where each job can be assigned any number of starting times.  But the machines never execute more than one job at a  time. If  the extreme points of PS, each x 6 PS can be  written as a convex combination x = Y^=i ^rX where $3^=i A = 1 and r  r  A > 0. Replacing x in the above formulation by this expression we obtain r  the master problem of the LP relaxation.  m  i  K  I n  £  £  r=l  \j=l  n  T-pj+l  £  c x \X  (3.9)  r  jt  jt  T  t=Tj  subject to K  /T-pj+l  £ r=l  £ \  t=Tj  \  4  U  = l  j = l,...,n  (3.10)  J  £ A  = l  r  (3.11)  A >0  r = l,...,K  r  (3.12)  The master problem has only n + 1 constraints but involves in general a huge number of variables. If we are able to solve the subproblem efficiently so as to find out whether there is a column with negative reduced cost, we can be able to solve the LP relaxation fast. The reduced cost of the variable Ajfc is given by n  £ j= l  T-pj+l  £ t=Tj  „  /T-pj+l  <*xJ,-5>,J= l  £ \  t=l  \  z}  -a,  t  (3.1.3)  /  where TTJ denotes the dual variable of constraint (3.10) for job j, and a denotes 29  the dual variable of constraint (3.11). The above expression simplifies to E  E  j= \  t = Tj  (3.14)  The above pricing problem can be solved by determining an m-unit minimum cost flow from the source to the sink in the acyclic network associated with the interval matrix described earlier. The nodes of the network correspond to time periods 1,2,..., T + 1. The arcs correspond to the use of machine during these time periods.  For each job j and each time period s, with  s < T — Pj 4- 1, there is an arc from s to s + j  that indicates that the  machine processes job j from time s to time 5 + j.  Each of these arcs have  P  P  a capacity of 1 unit and a cost of (CJ  S  — 7Tj). When there are no release dates  (i.e., all the jobs are available at time 0), the source node is connected to node 0 with an arc having capacity m.  In this case the planning horizon  T = Ej=iPj/  p  m  + (m — l)pmax/m,  where  m  a  x  = m&Xi<j< pj.  arcs to the sink from every the node between YJj=\Pjl  n  m  There will be  — {m — l)p i/> > i v  y  ma  and T. We tested the performance of this column generation algorithm on the LP-relaxation of the time indexed formulation with the objective of minimizing the sum of weighted completion times. Our computational experiments have been conducted with C P L E X 3.0 running on a HP 9000-735 workstation (under the H P - U X 9.05 operating system), for both the master and netflow subproblem. It has been observed that a similar column generation algorithm for single machine scheduling problem is much faster than the standard simplex algorithm [9]. 
We tested the performance of this column generation algorithm on the LP-relaxation of the time indexed formulation with the objective of minimizing the weighted sum of completion times. Our computational experiments were conducted with CPLEX 3.0 running on an HP 9000-735 workstation (under the HP-UX 9.05 operating system), for both the master problem and the network flow subproblem. It has been observed that a similar column generation algorithm for the single machine scheduling problem is much faster than the standard simplex algorithm [9]. Therefore this column generation procedure could be incorporated in a branch-and-bound code to rapidly produce an LP-relaxation bound. The computational experiments we did are limited to a small number of problem instances and the conclusions drawn from them are tentative at best. The computational results for problems with 3 machines are presented in Table 3.1.

Table 3.1: Average computational time of column generation algorithm

  (n, p_max)    Time (sec.)
  (10, 5)           21
  (10, 10)          34
  (10, 20)          70

Chapter 4

Conclusions

This thesis provides various methods that may be used in finding quick and good lower bounds on the time indexed formulations of single and parallel machine scheduling problems. Various general techniques, like aggregation, column generation and Lagrangian relaxation, are applied to the time indexed formulations and shown to produce good results. The column generation algorithms presented in this thesis provide a general framework for solving the LP-relaxation of time indexed formulations of scheduling problems. These algorithms could be incorporated into a branch-and-bound scheme to find an optimal integer solution, that is, an optimal schedule. The approximation methods discussed in this thesis provide a starting point to generate initial feasible schedules and to warm start a branch-and-bound procedure.

The work presented in this thesis could be incorporated in a branch-and-bound procedure to find an integer optimal solution to the time indexed formulation. The effects of adding cutting planes to the column generation algorithms proposed here could also be investigated. As pointed out by van den Akker [9], some of these column generation procedures tend to preserve their structure even when some valid inequalities are added. We propose that the general framework and various tools presented in this thesis form a sound basis to design a branch-and-bound algorithm for single and parallel machine scheduling problems. This could form a topic for further research.

Bibliography

[1] K. R. Baker. Elements of sequencing and scheduling. Dartmouth College, Hanover, NH, 1993.

[2] J. E. Beasley. Lagrangian relaxation. Operational research notes, Imperial College, 1995.

[3] R. Fourer, D. M. Gay, and B. W. Kernighan. AMPL: A modeling language for mathematical programming. Duxbury Press, 1993.

[4] M. Pinedo. Scheduling: Theory, algorithms, and systems. Prentice Hall, New Jersey, 1995.

[5] M. Queyranne and A. S. Schulz. Polyhedral approaches to machine scheduling. Preprint 408/1994, Technische Universität Berlin, 1994.

[6] J. P. Sousa. Time indexed formulations of non-preemptive single-machine scheduling problems. PhD thesis, Université Catholique de Louvain, Belgium, 1989.

[7] J. P. Sousa and L. A. Wolsey. A time indexed formulation of non-preemptive single machine scheduling problems. Mathematical Programming, 54:353-367, 1992.

[8] S. Swami, J. Eliashberg, and C. B. Weinberg. Screener: A dynamic programming approach to the scheduling of movies on screens. Talk presented at a joint Marketing/Management Science seminar, UBC, February 27, 1996.

[9] M. van den Akker. LP-based solution methods for single-machine scheduling problems. PhD thesis, Eindhoven University of Technology, Eindhoven, The Netherlands, 1994.
Appendix 1

AMPL Model for the Movie Scheduling Problem:

param T > 0;        # Planning horizon
param n > 0;        # Number of movies
param m > 0;        # Number of screens
param c > 0;        # House nut
param del > 0;      # Maximum share
param f > 0;        # First week share
param a > 0;        # Increment in share

set MOVIE := 1..n;  # Indices of movies

param opd {MOVIE} > 0;   # Obligation period
param r {MOVIE} > 0;     # Release date
param d {MOVIE} > 0;     # Expiry date
param k {MOVIE} >= 0;    # Number of distinct screening periods

param scr {j in MOVIE, i in 0..k[j]} := opd[j] + i;   # Screening period
param rev {j in MOVIE, t in r[j]..d[j]};              # Revenue in week t

# Total revenue retained by the theatre for screening movie j
# for scr[j,i] weeks starting in week t
param tot {j in MOVIE, i in 0..k[j], t in r[j]..d[j]-scr[j,i]+1} :=
    sum {u in t..t+scr[j,i]-1}
        (max(0, rev[j,u]-c) * min(del, f+(u-t)*a) + c + rev[j,u]*0.2857);

var x {j in MOVIE, i in 0..k[j], t in r[j]..d[j]-scr[j,i]+1} binary;

maximize total_revenue:
    sum {j in MOVIE, i in 0..k[j], t in r[j]..d[j]-scr[j,i]+1} tot[j,i,t] * x[j,i,t];

subject to movies {j in MOVIE}:
    sum {i in 0..k[j], t in r[j]..d[j]-scr[j,i]+1} x[j,i,t] <= 1;

subject to screens {t in 1..T}:
    sum {j in MOVIE, i in 0..k[j], s in t-scr[j,i]+1..t:
         s >= r[j] and s <= d[j]-scr[j,i]+1} x[j,i,s] <= m;

Appendix 2

Data for the AMPL Model:

param T := 10;
param n := 13;
param m := 3;
param c := 500;
param del := 0.4;
param f := 0.1;
param a := 0.1;

param:    r    d   opd   k :=
  1       8   10    2    1
  2       1   10    2    8
  3       1   10    2    8
  4       1   10    2    8
  5       1   10    2    8
  6       1   10    2    8
  7       3   10    2    6
  8       2   10    2    7
  9       5   10    2    4
 10       1   10    2    8
 11       6   10    2    3
 12       6   10    2    3
 13       6   10    2    3 ;

param rev:
         1      2      3      4      5      6      7      8      9     10 :=
  1      .      .      .      .      .      .      .  12966   8629   5706
  2   1679   1325   1452    935   1172    696    648    600    500    400
  3   8908   5194   3658   2840   3039   1931   1756   1582   1953   1655
  4   5577   3027   2236   1795   2266    991    700    409   1311   1137
  5   9400   6820   5675   3881   3157   2285   2166   2048   1627   1790
  6   7652   5274   3880   3033   3226   2228   2160   2108   2188   1781
  7      .      .  12762   6003   4532   2288   1864   1440   1582   1226
  8      .   8755   6947   5306   4884   2898   2500   2341   2090   1738
  9      .      .      .      .  18264   6786   5828   4869   3893   3715
 10   4000   2733   2445   2331   2167   1835   1793   1751   1641   2134
 11      .      .      .      .      .  13314  11759  10204   8096   5955
 12      .      .      .      .      .   3576   2808   2040   1501   1282
 13      .      .      .      .      .   7229   5835   4440   3212   1992 ;

