TIME INDEXED FORMULATION OF SCHEDULING PROBLEMS

by

DAVID NIRANJAN WILLIAMS
B.Sc.Eng., The University of Peradeniya, 1991
M.Eng.Sc., The University of Melbourne, 1993

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES, FACULTY OF COMMERCE AND BUSINESS ADMINISTRATION

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
March 1997
© David Niranjan Williams, 1997

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Commerce and Business Administration
The University of British Columbia
Vancouver, Canada
Date: April 8, 1997

Abstract

In this thesis, we study various approaches that could be used in finding a lower bound for single and parallel machine scheduling problems. These approaches are based on integer programming formulations involving binary variables indexed by (i, t), where i denotes a job and t a time period. For the single machine case, we provide an approximation scheme and a Lagrangian relaxation procedure, both of which produce good lower bounds. We also present a new column generation algorithm which solves the LP-relaxation of the time-indexed formulation using fewer columns than the standard column generation procedure. In Chapter 3 we present a new integer programming formulation for the movie scheduling problem, based on time-indexed variables.
This formulation led us to investigate the general parallel machine scheduling problem, for which we present a column generation procedure, which is an extension of the work done by van den Akker for the single machine case.

TABLE OF CONTENTS

Abstract ii
Table of Contents iii
List of Tables v
Acknowledgments vi
Chapter 1 Introduction 1
1.1 Scheduling 1
1.2 Complexity 4
1.3 Integer Programming Formulations 5
1.4 Outline of the Thesis 8
Chapter 2 Single Machine Scheduling 10
2.1 Introduction 10
2.2 Time-Indexed Formulation 10
2.3 Some Valid Inequalities 13
2.4 Simplification of the Model 14
2.5 Aggregation and Column Generation 16
2.6 Lagrangian Relaxation 19
Chapter 3 Parallel Machine Scheduling 24
3.1 Introduction 24
3.2 Movie Scheduling 25
3.3 A Column Generation Approach to Parallel Machine Scheduling Problems 28
Chapter 4 Conclusions 32
Bibliography 34
Appendix 1 AMPL Model for the Movie Scheduling Problem 36
Appendix 2 Data for the AMPL Model 37

List of Tables

2.1 Empirical performance of approximation scheme (% from optimum) 15
2.2 Performance of Lagrangian Relaxation (% from LP bound) 23
2.3 Average Computational Time for Lagrangian Relaxation (in sec.) 23
3.1 Average Computational Time of Column Generation Algorithm 31

Acknowledgments

I wish to express my sincere thanks to Prof. Maurice Queyranne, my supervisor, for his guidance, encouragement, advice, inspiration and financial support throughout my period of research for the master's degree. Special thanks are due to Mr. Sanjeev Swami, a PhD student in Marketing, and Prof. Charles Weinberg for sharing their data on the movie scheduling problem and for helping me to understand all the dimensions of this problem. I would also like to thank the Faculty of Commerce and Business Administration for providing me financial assistance during my first two years of study.
Chapter 1 Introduction

1.1 Scheduling

Sequencing and scheduling are forms of decision-making which play a crucial role in manufacturing as well as in service industries. In the current competitive environment, effective sequencing and scheduling have become a necessity for survival in the market place. Scheduling concerns the allocation of limited resources to tasks over time [4]. It is a decision-making process that has as a goal the optimization of one or more objectives. The resources and tasks may take many forms. The resources may be machines in a workshop, runways at an airport, crews at a construction site, screens in a movie theatre, processing units in a computing environment, beds in a hospital, and so on. The tasks may be operations in a production process, take-offs and landings at an airport, activities in a construction project, movies in a theatre, executions of computer programs, patients in a hospital, and so on. Each task may have a different priority level, arrival date and due date. The objective may also take many forms. Minimization of the completion time of the last task, minimization of the number of tasks completed after their committed due dates and maximization of total revenue are some of them.

Scheduling theory is concerned primarily with mathematical models that relate to the process of scheduling. It is a quantitative approach that attempts to capture the problem structure in mathematical form. This quantitative approach begins with a description of resources and tasks and with the translation of decision-making goals into an explicit objective function. Ideally, the objective function should include all costs (and revenues) in the system that depend on scheduling decisions. In practice, however, such costs are often difficult to measure, or even to identify completely. Nevertheless, three types of decision-making goals seem to be prevalent in scheduling: turnaround, timeliness, and throughput [1].
Turnaround measures the time required to complete a task. Timeliness measures the conformance of a particular task's completion to a given due date. Throughput measures the amount of work completed during a fixed period of time.

The major scheduling models can be categorized by specifying the resource configuration and the nature of the tasks. There may be a single machine, or several machines, possibly in parallel. The jobs could have a single operation or multiple operations. If the set of jobs available for scheduling does not change over time, the system is called static, in contrast to cases in which new jobs appear over time, where the system is called dynamic. Three kinds of feasibility constraints are commonly found in scheduling problems. First, there are limits on the capacity of machines; second, there are technological restrictions on the order in which some jobs can be performed; and third, there may be time limit constraints, such as release dates and deadlines.

We confine ourselves to scheduling problems in which each task requires at most one resource at a time. In this case, scheduling problems are usually seen as problems that concern the scheduling of jobs on machines of limited capacity and availability. Such problems are traditionally referred to as machine scheduling problems. We also limit ourselves to nonpreemptive (no job splitting) schedules. The usual setting for this type of problem is as follows. A set of n jobs has to be scheduled on machines. Each job j (j = 1, …, n) must be processed without interruption during a period of length p_j. A machine can handle no more than one job at a time and is continuously available from time zero onwards. We are asked to find an optimal feasible schedule, that is, a set of start times such that the capacity, availability and time limit constraints are met and a given objective function is optimized.
It is possible that each job j has a release date r_j, after which it becomes available, and a due date d_j, which indicates the time by which it ideally should be completed. In addition it may have a positive weight w_j, which expresses its importance with respect to the other jobs. Some well known objective functions are the weighted sum of completion times, the weighted sum of lateness (the amount of time by which a job is late or early, i.e. its completion time minus its due date) and the weighted sum of tardiness (the amount of time by which a job exceeds its due date if it is completed after that time, and 0 otherwise).

1.2 Complexity

Scheduling problems are a special class of combinatorial optimization problems, where we have to choose an optimal solution from a finite number of potential solutions. Many combinatorial optimization problems, including the scheduling problems considered in this thesis, are NP-hard. This implies that no algorithm is known that solves each instance of the problem to optimality in polynomial time with respect to the size of the instance. Hence, as we wish to solve the problem to optimality, we shall allow for exponential time algorithms. Although most optimization algorithms for hard combinatorial optimization problems are enumerative in nature, the aim is still to develop algorithms that perform satisfactorily well in practice for instances of reasonable size. The main concern is to avoid exhaustive enumeration of the solution space. There are a number of optimization methods, like dynamic programming and branch-and-bound, that aim at implicit enumeration of the solution space. Branch-and-bound solves a combinatorial optimization problem by breaking up the feasible set of solutions into successively smaller subsets, calculating bounds on the objective function value over each subset, and using them to discard certain subsets from further consideration.
If a lower bound on the objective in one part of the solution space is larger than the value of a solution already obtained in a different part of the solution space, the former part of the solution space may be discarded. Therefore, in order to apply branch-and-bound effectively, it is very important to find good lower bounds that can be easily computed.

1.3 Integer Programming Formulations

Machine scheduling problems are basically defined on permutations of a set of tasks. Each permutation produces a schedule that can be associated with a given cost (or revenue). Our goal is to find the cheapest (or most profitable) feasible schedule. In these terms, we define a discrete optimization problem (P):

min {f(s) : s ∈ S}   (1.1)

where f is the objective function and s is a generic solution in a set S of feasible solutions. Given the discrete character of this problem, it can be formulated as an integer linear program. This means that the problem is described by a decision vector x, a linear objective function cx, and a feasible region for x, represented by a set of linear constraints Ax ≤ b. In addition, some or all components of x must be integer (or binary). If we relax the integrality conditions, then we obtain a linear programming problem, which is called the LP-relaxation. The optimal value of the LP-relaxation is a lower bound on the optimal value of the original problem. Solving LP-relaxations can hence be used as a lower bounding strategy in a branch-and-bound algorithm.

For nonpreemptive machine scheduling problems different mixed-integer programming formulations have been proposed. There are formulations based on completion times, start times and positional start times. Below we give a few examples.

A formulation based on completion times:

In nonpreemptive machine scheduling problems a schedule is uniquely determined by the corresponding job completion times C_j (j = 1, …, n). Most of the performance measures depend on job completion times.
To illustrate, consider the single machine, weighted sum of completion times problem. This problem is solved in polynomial time by scheduling the jobs in order of nonincreasing ratio w_j/p_j [4]. It can be formulated as follows:

min Σ_{j=1..n} w_j C_j   (1.2)

subject to

C_j ≥ p_j,   j = 1, …, n   (1.3)

C_j − C_i ≥ p_j  ∨  C_i − C_j ≥ p_i,   i, j = 1, …, n, i < j   (1.4)

where the symbol ∨ is the logical "or" (disjunction). Constraints (1.3) state that no job can start before time zero and constraints (1.4) describe machine availability (i.e., no two jobs can overlap in their execution). The above formulation is not a linear program because of the disjunctive constraints (1.4). The application of branch-and-bound to a disjunctive program is straightforward. First the set of disjunctive constraints is relaxed to obtain an LP-relaxation. If the optimal solution to the LP-relaxation satisfies all the disjunctive constraints, then the solution is optimal for the disjunctive program as well. However, if one of the disjunctive constraints is violated, then two additional LPs are generated by adding one term of the disjunctive constraint to each of them. This is continued until we find an optimal solution that satisfies all the disjunctive constraints.

The main advantage of this formulation is that it contains only n variables. This implies that instances with a large number of jobs or with large processing times can be easily handled by this formulation. A disadvantage is that only problems with a linear objective function in the completion times can be modeled in a straightforward way. In other situations, new variables and constraints have to be introduced.

A formulation based on start times and sequence determining variables:

Let us consider the same scheduling problem (single machine, weighted completion time) described above. The formulation described here is based on variables t_j (j = 1, …, n) that denote the start times of the jobs and on variables δ_ij (i, j = 1, …
, n, i ≠ j) such that δ_ij = 1 if job i is processed before job j, and 0 otherwise. This formulation is given by:

min Σ_{j=1..n} w_j (t_j + p_j)   (1.5)

subject to

t_j − t_i ≥ p_i − M(1 − δ_ij),   i, j = 1, …, n, i ≠ j   (1.6)

δ_ij + δ_ji = 1,   i, j = 1, …, n, i < j   (1.7)

δ_ij ∈ {0, 1},   i, j = 1, …, n, i ≠ j   (1.8)

t_j ≥ 0,   j = 1, …, n   (1.9)

where M (≥ Σ_{j=1..n} p_j) is a large integer. Constraints (1.6) describe the machine availability and constraints (1.7) ensure that, for each pair of jobs, one is processed before the other. Precedence constraints can easily be included in this formulation. If job i has to precede job j then the constraint δ_ij + δ_ji = 1 is replaced by δ_ij = 1 and δ_ji = 0. This formulation can be considered as an extension of the formulation based on completion times. This extended formulation is found to be stronger than the completion time one [9]. The modeling of objective functions in this formulation has the same problems as in the completion time formulation.

In this thesis we consider the time-indexed formulation of scheduling problems that has been studied before by Sousa and Wolsey [7] and then by van den Akker [9]. This formulation is based on time discretization. The planning horizon is divided into T time periods, where period t starts at time t − 1 and ends at time t. Computational experiments [9] indicate that the LP-relaxation of the time-indexed formulation yields good lower bounds. A limitation to the applicability of these lower bounds in a branch-and-bound algorithm is the considerable amount of time needed to solve the LP-relaxations. This is mainly due to the large size of this formulation. Our efforts were focused on reducing this computational time by means of efficient lower bounding algorithms.

1.4 Outline of the Thesis

The main goal of our research was to devise better methods to solve the LP-relaxation of the time-indexed formulation for single machine and identical parallel machine scheduling problems.
The solution to the LP-relaxation gives a good lower bound for the optimal value of the scheduling problem and hence may be used in a branch-and-bound algorithm. In Chapter 2, we describe the time-indexed model for the single machine scheduling problem and the progress made up to now in solving it. We show some simplified forms of this formulation and discuss their solution quality. We also provide a Lagrangian relaxation method for this problem and propose a new column generation algorithm to solve the LP-relaxation. In Chapter 3, we investigate the time-indexed formulation for parallel machine scheduling problems. We first show how this model could be used for the movie scheduling problem described by Swami, Weinberg and Eliashberg [8]. Then we describe a new column generation procedure which could be used to solve the LP-relaxation of the parallel machine scheduling problem. Finally, Chapter 4 concludes this thesis with a brief survey of the main results, and with some topics for further research.

Chapter 2 Single Machine Scheduling

2.1 Introduction

The single machine environment is the simplest, and it is a special case of all other scheduling environments. In this case each job consists of a single operation. The number of jobs to be processed is assumed to be finite and is known in advance. Furthermore, it is assumed that the machine will be continuously available, without breakdown, until all the jobs are completed. A job j becomes available for processing on its release date r_j and has a weight of w_j and a due date of d_j.

2.2 Time-Indexed Formulation

In this section, we introduce the time-indexed formulation that will be investigated in this thesis. This formulation is based on time discretization. Time is divided into periods, where period t starts at time t − 1 and ends at time t. The planning horizon is made up of T time periods. The formulation uses binary variables x_jt, where x_jt = 1 implies that job j starts in period t.
The formulation is given by:

min Σ_{j=1..n} Σ_{t=r_j+1..T−p_j+1} c_jt x_jt   (2.1)

subject to

Σ_{t=r_j+1..T−p_j+1} x_jt = 1,   j = 1, …, n   (2.2)

Σ_{j=1..n} Σ_{s=t−p_j+1..t} x_js ≤ 1,   t = 1, …, T   (2.3)

x_jt ∈ {0, 1},   j = 1, …, n;  t = 1, …, T − p_j + 1   (2.4)

The first set of equations ensures that each job is scheduled exactly once, whereas the second set of inequalities ensures that at each time at most one job is executed. An advantage of this formulation is that it can be used to model many variants of nonpreemptive single-machine scheduling. We can model all objective functions of the form Σ_{j=1..n} f_j(t_j) by setting c_jt = f_j(t − 1). If the objective is to minimize the weighted sum of tardiness, then we set c_jt = w_j(t + p_j − d_j − 1) for t + p_j − 1 > d_j and c_jt = 0 for t + p_j − 1 ≤ d_j. Choosing c_jt = w_j(t + p_j − 1) we minimize the weighted sum of completion times. The weighted number of late jobs is minimized by using c_jt = w_j if t > d_j − p_j + 1 and c_jt = 0 otherwise. To incorporate release dates r_j we simply discard the variables x_jt for t = 1, …, r_j. This means that these variables are implicitly fixed at 0. Deadlines are incorporated in a similar way.

Precedence constraints can also be handled by the time-indexed formulation. If we want job j to precede job k in all feasible schedules we can add either the inequalities [5]

x_jt − Σ_{s=t+p_j..T−p_k+1} x_ks ≤ 0,   t = 1, …, T − p_j + 1   (2.5)

or the inequality [6]

Σ_{t=1..T−p_k+1} (t − 1) x_kt ≥ Σ_{t=1..T−p_j+1} (t − 1) x_jt + p_j   (2.6)

The main disadvantage of the time-indexed formulation is that it contains a large number of variables and a large number of constraints. There are n + T constraints and approximately nT variables. As T has to be at least Σ_{j=1..n} p_j, the formulation is pseudo-polynomial (since its size depends on the processing times of the jobs). Therefore, to handle instances with a large number of jobs or large processing times we have to resort to techniques designed for large linear programs.
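The objective coefficients c_jt described above are straightforward to tabulate. The sketch below is our own illustration, not code from the thesis; it takes a start in period t to mean a start at time t − 1 and a completion at time t + p_j − 1, as in the formulas above.

```python
def cost_coefficient(objective, t, p, w, d=None):
    """Return c_jt for a job with processing time p, weight w and
    due date d that starts in period t (i.e. at time t - 1)."""
    completion = t + p - 1  # completion time when starting in period t
    if objective == "weighted_completion":
        return w * completion
    if objective == "weighted_tardiness":
        return w * (completion - d) if completion > d else 0
    if objective == "weighted_late_jobs":
        return w if completion > d else 0
    raise ValueError("unknown objective: " + objective)
```

For example, with p_j = 3, w_j = 2 and d_j = 5, a start in period 4 completes at time 6 and incurs a weighted tardiness cost of 2, while a start in period 2 completes on time and costs 0.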
The LP-relaxation of this integer programming model is obtained by replacing constraints (2.4) by nonnegativity constraints. Therefore, the LP-relaxation is given by:

min Σ_{j=1..n} Σ_{t=r_j+1..T−p_j+1} c_jt x_jt   (2.7)

subject to

Σ_{t=r_j+1..T−p_j+1} x_jt = 1,   j = 1, …, n   (2.8)

Σ_{j=1..n} Σ_{s=t−p_j+1..t} x_js ≤ 1,   t = 1, …, T   (2.9)

x_jt ≥ 0,   j = 1, …, n;  t = 1, …, T − p_j + 1   (2.10)

2.3 Some Valid Inequalities

The LP-relaxation of the time-indexed formulation can be strengthened by adding some valid inequalities. These valid inequalities can contribute to finding a better lower bound for the original scheduling problem or to pruning some branches in a branch-and-bound tree. One such set of valid inequalities is described here [9]. Let j be a job, t a time period, and define p_max(j) = max_{k≠j} {p_k}. If we choose Δ ∈ {2, …, p_max(j)}, it follows from the restriction that jobs are not allowed to overlap that no job k ≠ j can be started in one of the periods t − p_k + Δ, …, t when the processing of job j starts in one of the periods t − p_j + 1, …, t + Δ − 1. On the other hand, no two jobs k and l different from j can be started simultaneously in one of the periods t − p_k + Δ, …, t and t − p_l + Δ, …, t, respectively. This can be formulated as the following inequality:

Σ_{s=t−p_j+1..t+Δ−1} x_js + Σ_{k≠j} Σ_{s=t−p_k+Δ..t} x_ks ≤ 1,   Δ ∈ {2, …, p_max(j)}   (2.11)

This set of inequalities is not implied by the system (2.8)-(2.10); for Δ = 1, however, we get back inequalities (2.9). All the facet inducing inequalities with right hand side 1 and 2 are given in [9]. Any facet inducing inequality with integral coefficients and with right hand side equal to 1 is of the form x(V) ≤ 1. Here x(V) denotes Σ_{(j,s)∈V} x_js. Each facet inducing inequality x(V) ≤ 1 is maximal in the sense that there does not exist any W ⊃ V such that x(W) ≤ 1 is valid.
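As a quick sanity check on (2.11), the sketch below (our own toy illustration, not from the thesis) evaluates its left-hand side for an integral feasible schedule and confirms that no choice of j, t and Δ pushes it above 1, as validity requires.

```python
def lhs_211(x, p, j, t, delta):
    """Left-hand side of inequality (2.11); x maps (job, start period)
    to 0/1 and p maps each job to its processing time."""
    total = sum(x.get((j, s), 0) for s in range(t - p[j] + 1, t + delta))
    for k in p:
        if k != j:
            total += sum(x.get((k, s), 0) for s in range(t - p[k] + delta, t + 1))
    return total

# A feasible two-job schedule: job 1 (p=2) starts in period 1 and
# occupies periods 1-2; job 2 (p=3) starts in period 3 (periods 3-5).
p = {1: 2, 2: 3}
x = {(1, 1): 1, (2, 3): 1}
violations = [(j, t, d)
              for j in p
              for t in range(1, 8)
              for d in range(2, max(p[k] for k in p if k != j) + 1)
              if lhs_211(x, p, j, t, d) > 1]
```

For this schedule `violations` comes back empty, illustrating that the inequalities cut off only fractional points, never integral feasible ones.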
Van den Akker [9] also derived necessary and sufficient conditions for inequalities with right hand side 2 to define facets.

2.4 Simplification of the Model

As pointed out earlier, the major drawback of time-indexed formulations arises from the increase in time horizon and therefore in the whole size of the formulation. The size of this formulation depends on the number and duration of the jobs and on the degree of accuracy used to describe the durations of these jobs. To obtain approximations to the original problem with smaller numbers of variables and constraints, we can apply some kind of rounding before discretizing the time. If we can make the processing times, release dates and due dates all have a common divisor larger than 1, the size of the model can be reduced by dividing all the above quantities by that common divisor before time discretization. For this we round the processing times, release dates and due dates after dividing them by a scaling parameter α. After dividing we can either round the parameters up or round them down. If we round p_j and r_j down and d_j up, the resulting problem (P′) is a relaxation of the original problem P. On the other hand, if we round p_j and r_j up and d_j down, the resulting problem (P″) is a restriction of the original problem. Therefore, solving P′ produces a lower bound on the optimal integer solution of the original problem P and solving P″ produces an upper bound on the optimal integer solution of the original problem P.

These approximation procedures were tested on randomly generated problems (minimizing total weighted tardiness) with α = 2, 3, 5, 10 and 20. The computational results suggest that this method gives very good bounds (within 10% of the optimum) when α = 2.

Table 2.1: Empirical performance of approximation scheme (% from optimum)

(n, p_max)   α = 2   α = 3   α = 5   α = 10
(10, 5)        9      22      82      95
(10, 10)      10      19      76      90
(10, 20)       7      27      80      87
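The rounding step described above can be sketched as follows; this is our own illustration, with `alpha` playing the role of the scaling parameter α.

```python
import math

def scale_instance(jobs, alpha, relax=True):
    """Divide p_j, r_j, d_j by alpha and round.  relax=True rounds
    p_j, r_j down and d_j up, giving the relaxation P'; relax=False
    rounds p_j, r_j up and d_j down, giving the restriction P''."""
    f_pr, f_d = (math.floor, math.ceil) if relax else (math.ceil, math.floor)
    return [{"p": f_pr(job["p"] / alpha),
             "r": f_pr(job["r"] / alpha),
             "d": f_d(job["d"] / alpha)} for job in jobs]

inst = [{"p": 7, "r": 3, "d": 12}]
lower = scale_instance(inst, 2, relax=True)    # data for P'
upper = scale_instance(inst, 2, relax=False)   # data for P''
```

Solving the instance `lower` then yields a lower bound, and `upper` an upper bound, on the original optimum, as described above.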
In this case the size (total number of nonzero matrix entries) of the formulation is one fourth that of the original problem. The bound deteriorates as α increases. This procedure could be used within a branch-and-bound algorithm to fathom some branches. The generation procedure was as follows [9]. For different values of the number of jobs (n = 10, 20, 30), let the processing times p_j be integers uniformly distributed in [1, p_max]. The weights are in [1, 10] and the release dates are in [0, (Σ_{j=1..n} p_j)/2]. We consider instances with p_max = 5, 10 and 20. We used CPLEX 3.0 to solve the linear programs. The results for n = 10 are given in Table 2.1. The above results show that, depending on the quality of bound we need, we can reduce the problem size by using a suitable scaling factor. In the initial steps of a B&B algorithm we can use a larger α and then gradually reduce it to 1 as the solution improves. This makes the initial optimization problems small. The computational experiments we did are limited to a small number of problem instances and the conclusions drawn from them are tentative at best.

2.5 Aggregation and Column Generation

In this section we provide a new column generation procedure to solve the LP-relaxation of the time-indexed formulation. Empirical findings indicate that the typical number of iterations needed by the simplex method increases proportionally with the number of rows and only very slowly with the number of columns. The time-indexed formulation contains n + T rows, where T ≥ Σ_{j=1..n} p_j. Hence, especially for instances with large processing times, we may expect large running times when the LP-relaxation is solved through the simplex method. To reduce the number of constraints we introduce an aggregation scheme which generates a single knapsack type constraint from λ successive capacity constraints. This reduces the number of constraints to n + ⌈T/λ⌉.
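The coefficient of a variable x_js in such an aggregated constraint is simply the number of the λ original capacity constraints in which x_js appears; a small sketch of our own:

```python
def aggregated_coefficient(s, p_j, k, lam):
    """Coefficient a_js obtained by adding the capacity constraints for
    periods t = k, ..., k + lam - 1: the number of those constraints that
    contain x_js (x_js occurs in constraint t iff t - p_j + 1 <= s <= t)."""
    return sum(1 for t in range(k, k + lam) if t - p_j + 1 <= s <= t)
```

For example, aggregating λ = 4 constraints starting at period k = 5, a job with p_j = 3 gets coefficients 1, 2, 3, 3, 2, 1 for start periods s = 3, …, 8: the coefficients ramp up, plateau at min{p_j, λ}, and ramp back down.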
This is a relaxed version of the original problem and hence the optimal value of this problem will be a lower bound for the LP-relaxation. To find the optimal value of the LP-relaxation we incorporate column generation and row generation procedures with this aggregation scheme. In the aggregation phase we generate a 0-1 knapsack inequality by adding the time period constraints over λ consecutive time periods (k, k + 1, …, k + λ − 1). This gives a single constraint Σ_{j=1..n} Σ_{s=k−p_j+1..k+λ−1} a_js x_js ≤ λ instead of λ original constraints. The coefficients a_js are nonnegative with the following properties:

1. For every job j, as s increases, a_js increases from 1 by steps of 1 until it attains the value min{p_j, λ}, stays constant, and then decreases to 1 by steps of 1. That is, a_js = min{p_j, λ, (k + λ − s)⁺, (s + p_j − k)⁺}.

2. If p_j < λ, a_js = p_j for s ∈ {k, …, k + λ − p_j}.

The reduced cost of a variable x_jt is given by:

c̄_jt = c_jt − u_j − Σ_{h=⌈t/λ⌉..⌈(t+p_j−1)/λ⌉} a_jh v_h   (2.12)

where u_j is the price associated with job constraint j and v_h is the price associated with aggregated capacity constraint h (both given directly by the current dual variables). Here a_jh is the coefficient of variable x_jt in the aggregated capacity constraint h.

The Algorithm:

Step 1. Select λ (the number of original constraints that are to be aggregated to form an aggregated constraint). Find a few feasible schedules in the original problem and generate the corresponding columns in the aggregated problem.

Step 2. Solve the LP-relaxation with the generated columns. From the dual costs find out whether there are any variables with negative reduced cost. If there are some variables with negative reduced cost go to Step 3, else go to Step 4.

Step 3. Add some columns with negative reduced costs to the LP. Go to Step 2.

Step 4. Check whether the current solution violates any of the original constraints. If some constraints are violated, add those constraints to the LP and go to Step 2.
If there are no violated constraints, STOP: the optimal LP solution has been found.

This algorithm is faster than the general column generation algorithm described by Sousa [6], because it does not use all n + T constraints in all the iterations. Constraints are added as the algorithm progresses and hence only a few iterations are done with a large number of constraints. Sousa [6] reported that the average number of columns generated by a general column generation algorithm for the time-indexed model is around one third of the total number of columns. His algorithm uses all the rows in every iteration. We tested the above algorithm on 60 problems generated according to the description in Section 2.4. The total number of columns generated by the above algorithm was always less than 20% of the total number of columns. The computational experiments we did are limited to a small number of problem instances and the conclusions drawn from them are tentative at best. As this algorithm adds rows progressively, the number of columns and the number of rows increase in each step. At the last iteration the LP has about 95% of its rows generated. The main advantage of this algorithm is that during the initial iterations it uses only a few rows (as rows are added progressively) and hence in those iterations the LP can be solved very fast. This, combined with the fact that this algorithm needs about 10% fewer columns than the standard column generation algorithm used by Sousa, makes it a faster algorithm than the standard column generation algorithm for time-indexed formulations of single machine scheduling problems. This algorithm can be used to find an LP bound faster, hence speeding up the execution time of any B&B algorithm making use of LP bounds.

2.6 Lagrangian Relaxation

Another well known technique to find lower bounds is Lagrangian relaxation.
This technique involves taking an integer programming formulation, attaching Lagrange multipliers to some of the constraints, and then relaxing these constraints into the objective function. The resulting integer program is solved and gives a lower bound on the optimal solution to the original problem. One basic reason for using this approach is the fact that many combinatorial optimization problems consist of an easy problem complicated by some extra constraints. By absorbing these complicating constraints into the objective function we are left with an easy problem to solve, and attention can then be turned to choosing numeric values for the Lagrange multipliers. Subgradient optimization and multiplier adjustment are two general techniques used to do so. It is important to choose the right set of constraints to relax, because the quality of the lower bound depends on the constraint set that has been relaxed.

To obtain a lower bound for the time-indexed formulation that is discussed in this chapter, we could either relax the first set of constraints (job constraints) or the second set of constraints (capacity constraints). If we relax the first set of constraints into the objective function we obtain

min Σ_{j=1..n} Σ_{t=r_j+1..T−p_j+1} c_jt x_jt + Σ_{j=1..n} α_j (Σ_{t=r_j+1..T−p_j+1} x_jt − 1)   (2.13)

This can be simplified to:

min Σ_{j=1..n} Σ_{t=r_j+1..T−p_j+1} (c_jt + α_j) x_jt − Σ_{j=1..n} α_j   (2.14)

The relaxed problem is then subject to

Σ_{j=1..n} Σ_{s=t−p_j+1..t} x_js ≤ 1,   t = 1, …, T   (2.15)

x_jt ∈ {0, 1},   j = 1, …, n;  t = 1, …, T − p_j + 1   (2.16)

where the α_j, j = 1, …, n, the Lagrange multipliers associated with the equality constraints (2.2), are unrestricted in sign. This problem can be reduced to a shortest path problem. The constraint matrix associated with (2.15) is an interval matrix, with which we can associate a network with T nodes and almost nT arcs [9].
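This reduction can be sketched as a shortest path computation over time points; the sketch below is our own illustration with made-up data, and it adds zero-cost idle arcs from each time point to the next so that the path may leave the machine idle. By (2.14), subtracting Σ_j α_j from the shortest path value gives the Lagrangian lower bound.

```python
def lagrangian_bound(T, p, cost, alpha):
    """Lower bound from relaxing the job constraints: a shortest path
    from time 0 to time T, where an arc t -> t + p[j] with cost
    cost[j][t] + alpha[j] means "process job j from time t", and a
    zero-cost arc t -> t + 1 means "stay idle".  cost[j][t] is c_jt
    for a start at time t (0-based); alpha[j] is the multiplier."""
    INF = float("inf")
    dist = [0.0] + [INF] * T
    for t in range(T):                      # time points in increasing order
        if dist[t] == INF:
            continue
        dist[t + 1] = min(dist[t + 1], dist[t])          # idle arc
        for j in p:                                      # job arcs
            if t + p[j] <= T:
                dist[t + p[j]] = min(dist[t + p[j]],
                                     dist[t] + cost[j][t] + alpha[j])
    return dist[T] - sum(alpha.values())    # subtract multipliers, cf. (2.14)
```

Note that the path found this way may process a job several times or skip it entirely, which is exactly why this subproblem only yields a bound and the multipliers α_j must then be adjusted.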
The network associated with this matrix is acyclic, and for each job j and each time period s, with s ≤ T − p_j + 1, it has an arc from s to s + p_j that indicates that the machine processes job j from time s to time s + p_j. Each of these arcs has a length or cost of (c_js + α_j). A shortest path algorithm can be used to solve the above problem for a given vector of values of α_j, j = 1, …, n. In this case the resulting schedule may have some jobs processed more than once and some others not processed at all. We use a subgradient optimization procedure to update the Lagrange multipliers so as to improve the lower bound.

If we relax the capacity constraints (2.3), the remaining problem is a simple selection problem. Here we have to select the best start period t for each job. The objective function is

min Σ_{j=1..n} Σ_{t=r_j+1..T−p_j+1} c_jt x_jt + Σ_{t=1..T} δ_t (Σ_{j=1..n} Σ_{s=t−p_j+1..t} x_js − 1)   (2.17)

The relaxed problem is now subject to

Σ_{t=r_j+1..T−p_j+1} x_jt = 1,   j = 1, …, n   (2.18)

x_jt ∈ {0, 1},   j = 1, …, n;  t = 1, …, T − p_j + 1   (2.19)

Here δ_t ≥ 0 are the Lagrange multipliers. The optimal solution to the above problem involves selecting, for each job j, the start period t with the best value of c_jt + Σ_{h=t..t+p_j−1} δ_h. This solution has each job processed exactly once, but may violate some capacity constraints.

The Lagrange multipliers associated with the above two formulations were calculated using the subgradient optimization method. Subgradient optimization is an iterative procedure which involves generating new Lagrange multipliers from an initial set of multipliers, in a systematic fashion [2]. It attempts to maximize the lower bound obtained from the relaxed problem by a suitable choice of multipliers. For example, consider the second Lagrangian relaxation described above. The subgradient optimization procedure for that problem is as follows:

Step 1. Let π be a given parameter satisfying 0 < π ≤ 2.
Initialize Z_UB, an upper bound on the objective value, by assigning to it the objective value of a feasible heuristic solution (schedule). Set Z_max = −∞.

Step 2. Solve the selection problem (2.17)–(2.19) with the current set (δ_t) of multipliers, to get a solution (x_{jt}) with value Z_LB.

Step 3. Define subgradients g_t for the relaxed constraints, evaluated at the current solution, by

$$g_t = \sum_{j=1}^{n} \sum_{s=t-p_j+1}^{t} x_{js} - 1, \quad t = 1, \ldots, T$$

Step 4. Define a (scalar) step size k by

$$k = \frac{\pi \, (Z_{UB} - Z_{LB})}{\sum_{t=1}^{T} g_t^2}$$

Thus the step size depends on the gap between Z_UB and Z_LB and on the user-defined parameter π.

Step 5. Update δ_t using δ_t := max(0, δ_t + k g_t) for t = 1, ..., T, and set Z_max = max(Z_max, Z_LB). If the termination rule described below does not apply, go to Step 2 and re-solve the problem with this new set of multipliers.

To terminate the above iterative process we use a rule based either on limiting the number of iterations or on the value of π (reducing it during the procedure and terminating when π is small). If Z_max has not improved in the last N subgradient iterations with the current value of π, we halve the value of π (initially π is set to 2) [2].

The two forms of relaxation were tested, with N = 3, on the randomly generated problems described in Section 2.4. The computational experiments indicate that the first form of relaxation gives a better lower bound than the second. The results are presented in Table 2.2; the results for the other values of n were very similar to the ones shown there.

Table 2.2: Performance of Lagrangian Relaxation (% from LP bound)

  (n, p_max)    Relaxed: Job    Relaxed: Capacity
  (10, 5)            74                 92
  (10, 10)           69                 90
  (10, 20)           65                 87

Table 2.3: Average Computational Time for Lagrangian Relaxation (in sec.)

  (n, p_max)    Relaxed: Job    Relaxed: Capacity
  (10, 5)             6                  2
  (10, 10)            8                  3
  (10, 20)           12                  5

In the first form of relaxation we have to solve a shortest path problem in each iteration.
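Steps 1–5 above, together with the termination rule, can be sketched for the capacity relaxation as follows. This is an illustrative Python sketch, not the thesis code: the step-size formula k = π(Z_UB − Z_LB)/Σ_t g_t² is the standard choice implied by Step 4, and the instance data (p, c, z_ub) are assumed inputs.

```python
def subgradient(T, p, c, z_ub, max_iter=100, N=3):
    """Subgradient optimization for the capacity-constraint relaxation.

    c[j][t]: cost of starting job j at time t (1-based); z_ub: value of a
    feasible heuristic schedule. Returns the best Lagrangian bound found.
    """
    n = len(p)
    delta = [0.0] * (T + 1)                 # multipliers delta_t, t = 1..T
    pi, z_max, stall = 2.0, float("-inf"), 0
    for _ in range(max_iter):
        # Step 2: selection problem -- best start time per job, where starting
        # j at t also pays the multipliers of the periods the job occupies
        start = []
        z_lb = -sum(delta[1:])              # constant term -sum_t delta_t
        for j in range(n):
            best = min(range(1, T - p[j] + 2),
                       key=lambda t, j=j: c[j][t] + sum(delta[t:t + p[j]]))
            start.append(best)
            z_lb += c[j][best] + sum(delta[best:best + p[j]])
        if z_lb > z_max:
            z_max, stall = z_lb, 0
        else:
            stall += 1
        # Step 3: subgradient g_t = (number of jobs occupying period t) - 1
        g = [0.0] + [-1.0] * T
        for j in range(n):
            for t in range(start[j], start[j] + p[j]):
                g[t] += 1.0
        norm = sum(gt * gt for gt in g[1:])
        if norm == 0.0:                     # current solution is feasible
            break
        # Step 4: step size from the duality gap and the parameter pi
        k = pi * (z_ub - z_lb) / norm
        # Step 5: projected multiplier update
        for t in range(1, T + 1):
            delta[t] = max(0.0, delta[t] + k * g[t])
        if stall >= N:                      # termination rule: halve pi
            pi, stall = pi / 2.0, 0
    return z_max
```

The projection max(0, ·) in Step 5 keeps the multipliers of the relaxed inequality constraints nonnegative, as required for the relaxation to remain a valid lower bound.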
In the second form of relaxation only a selection has to be made in each iteration, which makes it the faster algorithm (Table 2.3). The computational experiments are limited to a small number of problem instances, so the conclusions drawn from them are tentative at best.

Chapter 3
Parallel Machine Scheduling Problems

3.1 Introduction

In this chapter we consider the problem of scheduling identical parallel machines. Parallel machine scheduling problems concern the scheduling of n jobs on m parallel machines to minimize some function of the job completion times. Scheduling parallel machines may be viewed as a two-step process: first, determine which jobs are allocated to which machines; second, determine the sequence of the jobs allocated to each machine. The single machine model discussed in Chapter 2 can be extended to this case by modifying the capacity constraints. In this chapter we first present a model for scheduling movies in a multiplex theatre. We then consider the general identical parallel machine scheduling problem and extend the column generation algorithm presented by van den Akker [9] to this problem.

3.2 Movie Scheduling

This portion of the research was motivated by a problem presented by Swami, Weinberg and Eliashberg [8], who considered the problem of scheduling movies on the screens of a multiplex cinema theatre. Given a set of movies j = 1, ..., n with release dates r_j and weekly revenues R_{jt}, t ∈ {r_j, ..., T}, we want to select a subset of movies to show on our screens so as to maximize the theatre's total revenue over the horizon [0, T). The revenue generated by a movie in a given week t depends on its age (t − r_j). Movies generate their maximum revenue in the week of their release, after which the revenue typically decays; following [8] we assume the revenue decays exponentially. If a movie j is selected for screening, it has to be shown for at least p°_j weeks, its obligation period.
A movie can be shown for any number of weeks, up to (T − r_j) weeks. The portion β_w of the revenue retained by the theatre in the w-th week of screening, the retention revenue, varies with the screening age of the movie. Typically, the exhibitor gets 10% of the total revenue in the first week of screening, 20% and 30% in the second and third weeks respectively, and 40% of the total revenue thereafter. The exhibitor is allowed to deduct the cost involved in showing the movie, the house nut H (typically around $500 per week), from the total revenue before calculating the retention revenue; if the total revenue is less than the house nut, the exhibitor keeps everything. The exhibitor also generates revenue from the concession stands in the theatre. The revenue from this source is taken to be a certain percentage θ (around 28%) of the total revenue generated by the movie, and the exhibitor keeps all of it.

The planning horizon for this problem is fairly small: generally we consider about 10–26 weeks and around 30 movies. The screening period of a movie can start in any week after its release date r_j and continue up to week T. Therefore in our model we consider the distinct screening periods p^l_j = p°_j + l, l = 0, 1, ..., k_j, for each movie j, where k_j = T − r_j − p°_j + 1. There are m identical screens in the theatre, and the total revenue (from the movie and the concession stands) retained by the theatre by showing movie j for p^l_j weeks starting in week t is

$$e^l_{jt} = \sum_{s=t}^{t+p^l_j-1} \left( \max(0, R_{js} - H)\, \beta_{s-t} + H + \theta R_{js} \right)$$

This model can be formulated as follows:

$$\max \sum_{j=1}^{n} \sum_{l=0}^{k_j} \sum_{t=r_j}^{T-p^l_j+1} e^l_{jt} x^l_{jt} \qquad (3.1)$$

subject to

$$\sum_{l=0}^{k_j} \sum_{t=r_j}^{T-p^l_j+1} x^l_{jt} \le 1, \quad j = 1, \ldots, n \qquad (3.2)$$

$$\sum_{j=1}^{n} \sum_{l=0}^{k_j} \sum_{s=t-p^l_j+1}^{t} x^l_{js} \le m, \quad t = 1, \ldots, T \qquad (3.3)$$

$$x^l_{jt} \in \{0,1\} \qquad (3.4)$$

where x^l_{jt} = 1 if movie j is screened for p^l_j weeks starting in week t, and 0 otherwise.
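The retained revenue e^l_{jt} can be computed directly from the weekly revenues. The following Python sketch is illustrative (not from the thesis); the default figures (a $500 house nut, retention shares of 10% rising by 10% per week up to 40%, and a 28.57% concession rate) follow the values used in the text and in the AMPL model of Appendix 1.

```python
def retained_revenue(R_j, t, weeks, H=500.0, theta=0.2857,
                     first=0.10, step=0.10, cap=0.40):
    """Revenue retained by the theatre from showing one movie.

    R_j[s]: total revenue of movie j in week s; t: first screening week;
    weeks: length of the screening period (p_j^l in the text).
    """
    total = 0.0
    for w in range(weeks):
        s = t + w
        share = min(cap, first + step * w)      # retention share beta_w
        total += max(0.0, R_j[s] - H) * share   # film rental above the house nut
        total += H                              # house nut, as in the AMPL model
        total += theta * R_j[s]                 # concession revenue
    return total
```

For example, a movie earning $1000 in its first week and $400 in its second, screened for two weeks from week 1, yields 50 + 500 + 285.70 in the first week and 0 + 500 + 114.28 in the second.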
This model was coded in AMPL, a modeling language for mathematical programming [3], and tested on some real world as well as some randomly generated problems. The real world problems were constructed using the actual weekly revenues and release dates published by VARIETY magazine for movies playing in North America (see Appendices 1 and 2 for the AMPL model and the data set, respectively). Some fictitious problems were also created, assuming an exponential decay of revenue with the age of the movie. We then used the model to find optimal schedules. The computational experiments indicated that the LP relaxation of the above problem gives an integer solution most of the time.

The dynamic programming procedure proposed by Swami, Weinberg and Eliashberg [8] gives the optimal schedule when there is only one screen in the theatre. For the multiple screen case they also proposed a greedy heuristic that makes use of this dynamic programming procedure: it allocates movies to screens using the single screen model, moving from screen to screen in a greedy fashion (for more details on this heuristic see [8]). This greedy heuristic does not converge to the global optimum in all problem instances, whereas the integer programming model described above always gives an optimal schedule. The problem size and running time are reasonably small for all real world problems in the movie industry. The planning horizon is usually about 8 to 10 weeks, during which time about 25 movies are available in the market. This gives rise to a problem with about 2000 variables and 35 constraints, which can be solved within a minute using AMPL/CPLEX 3.0 running on an HP 9000-735 workstation (under the HP-UX 9.05 operating system).

The problem we considered in this section is a simplification of the real world movie scheduling problem.
We simplified the problem by assuming that the future revenues of all the movies are known and that all the screens in the theatre have the same seating capacity. Designing a forecasting model for the movie revenues and including seating constraints in the time indexed formulation could form topics for further research.

3.3 A Column Generation Approach to Parallel Machine Scheduling Problems

The general identical parallel machine scheduling problem may be formulated using time indexed variables as follows:

$$\min \sum_{j=1}^{n} \sum_{t=r_j}^{T-p_j+1} c_{jt} x_{jt} \qquad (3.5)$$

subject to

$$\sum_{t=r_j}^{T-p_j+1} x_{jt} = 1, \quad j = 1, \ldots, n \qquad (3.6)$$

$$\sum_{j=1}^{n} \sum_{s=t-p_j+1}^{t} x_{js} \le m, \quad t = 1, \ldots, T \qquad (3.7)$$

$$x_{jt} \in \{0,1\}, \quad j = 1, \ldots, n; \; t = 1, \ldots, T - p_j + 1 \qquad (3.8)$$

Note that this formulation ignores the assignment of jobs to machines. Given an integer solution to (3.5)–(3.8), it is straightforward to define a corresponding feasible m-machine schedule with equal total cost [5].

Here we describe a Dantzig-Wolfe decomposition approach to solving the LP relaxation of the above formulation. It is an extension of the algorithm proposed by van den Akker [9] for the single machine case. Denote by PS the polytope defined by the inequalities (3.7) and the nonnegativity constraints. Then the LP relaxation can be viewed as minimizing (3.5) subject to (3.6) and (x_{jt}) ∈ PS. The polytope PS is integral, since the constraint matrix of the inequalities (3.7) is an interval matrix and therefore totally unimodular. The vertices of PS are pseudo-schedules, in which each job may be assigned any number of start times, but the machines never execute more than one job at a time. If x^1, ..., x^K are the extreme points of PS, each x ∈ PS can be written as a convex combination x = Σ_{r=1}^{K} λ_r x^r with Σ_{r=1}^{K} λ_r = 1 and λ_r ≥ 0. Replacing x in the above formulation by this expression, we obtain the master problem of the LP relaxation:

$$\min \sum_{r=1}^{K} \left( \sum_{j=1}^{n} \sum_{t=r_j}^{T-p_j+1} c_{jt} x^r_{jt} \right) \lambda_r \qquad (3.9)$$

subject to

$$\sum_{r=1}^{K} \left( \sum_{t=r_j}^{T-p_j+1} x^r_{jt} \right) \lambda_r = 1, \quad j = 1, \ldots, n \qquad (3.10)$$

$$\sum_{r=1}^{K} \lambda_r = 1 \qquad (3.11)$$

$$\lambda_r \ge 0, \quad r = 1, \ldots, K \qquad (3.12)$$

The master problem has only n + 1 constraints, but in general involves a huge number of variables. If we can solve the pricing subproblem efficiently, so as to find out whether there is a column with negative reduced cost, we can solve the LP relaxation quickly. The reduced cost of the variable λ_k is given by

$$\sum_{j=1}^{n} \sum_{t=r_j}^{T-p_j+1} c_{jt} x^k_{jt} - \sum_{j=1}^{n} \pi_j \left( \sum_{t=r_j}^{T-p_j+1} x^k_{jt} \right) - \alpha \qquad (3.13)$$

where π_j denotes the dual variable of constraint (3.10) for job j, and α denotes the dual variable of constraint (3.11). The above expression simplifies to

$$\sum_{j=1}^{n} \sum_{t=r_j}^{T-p_j+1} (c_{jt} - \pi_j)\, x^k_{jt} - \alpha \qquad (3.14)$$

The pricing problem can be solved by determining an m-unit minimum cost flow from the source to the sink in the acyclic network associated with the interval matrix described earlier. The nodes of the network correspond to the time periods 1, 2, ..., T + 1. The arcs correspond to the use of a machine during these time periods: for each job j and each time period s with s ≤ T − p_j + 1, there is an arc from s to s + p_j indicating that a machine processes job j from time s to time s + p_j. Each of these arcs has a capacity of 1 unit and a cost of c_{js} − π_j. When there are no release dates (i.e., all jobs are available at time 0), the source node is connected to the first time node by an arc of capacity m. In this case the planning horizon is T = Σ_{j=1}^{n} p_j/m + (m − 1)p_max/m, where p_max = max_{1≤j≤n} p_j, and there are arcs to the sink from every node between Σ_{j=1}^{n} p_j/m − (m − 1)p_max/m and T.

We tested the performance of this column generation algorithm on the LP relaxation of the time indexed formulation with the objective of minimizing the sum of weighted completion times. Our computational experiments were conducted with CPLEX 3.0 running on an HP 9000-735 workstation (under the HP-UX 9.05 operating system), for both the master problem and the network flow subproblem.
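For m = 1 the m-unit minimum cost flow reduces to a shortest path in this network, which is exactly the pricing step of van den Akker's single machine algorithm. The following Python sketch is illustrative (not the thesis code); zero-cost idle arcs between consecutive time nodes are assumed, and predecessors are recorded so the new column (a pseudo-schedule) can be recovered.

```python
def price_column(T, p, c, pi, alpha):
    """Pricing by shortest path in the acyclic network (single machine case).

    c[j][s]: cost of starting job j at time s (1-based); pi[j]: dual of
    constraint (3.10) for job j; alpha: dual of (3.11). Returns the minimum
    reduced cost and the pseudo-schedule (job, start time) pairs realizing it.
    """
    INF = float("inf")
    dist = [INF] * (T + 2)
    pred = [None] * (T + 2)       # predecessor arcs, to recover the column
    dist[1] = 0.0
    for s in range(1, T + 1):
        if dist[s] == INF:
            continue
        if dist[s] < dist[s + 1]:                  # zero-cost idle arc
            dist[s + 1], pred[s + 1] = dist[s], (s, None)
        for j in range(len(p)):                    # job arcs s -> s + p[j]
            if s + p[j] <= T + 1:
                cost = dist[s] + c[j][s] - pi[j]
                if cost < dist[s + p[j]]:
                    dist[s + p[j]], pred[s + p[j]] = cost, (s, j)
    column, node = [], T + 1                       # walk the predecessors back
    while pred[node] is not None:
        s, j = pred[node]
        if j is not None:
            column.append((j, s))
        node = s
    return dist[T + 1] - alpha, column
```

If the returned reduced cost is negative, the recovered column enters the master problem; otherwise the current LP solution is optimal. For m > 1 the same network would carry m units of flow, e.g. via successive shortest paths.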
It has been observed that a similar column generation algorithm for the single machine scheduling problem is much faster than the standard simplex algorithm [9]. This column generation procedure could therefore be incorporated into a branch-and-bound code to rapidly produce an LP relaxation bound. The computational experiments we performed are limited to a small number of problem instances, so the conclusions drawn from them are tentative at best. The computational results for problems with 3 machines are presented in Table 3.1.

Table 3.1: Average Computational Time of Column Generation Algorithm

  (n, p_max)    Time (s)
  (10, 5)          21
  (10, 10)         34
  (10, 20)         70

Chapter 4
Conclusions

This thesis provides various methods that may be used to find quick and good lower bounds for the time indexed formulations of single and parallel machine scheduling problems. Various general techniques, such as aggregation, column generation and Lagrangian relaxation, are applied to the time indexed formulations and shown to produce good results. The column generation algorithms presented in this thesis provide a general framework for solving the LP relaxation of time indexed formulations of scheduling problems. These algorithms could be incorporated into a branch-and-bound scheme to find an optimal integer solution, that is, an optimal schedule. The approximation methods discussed in this thesis provide a starting point for generating initial feasible schedules and for warm starting a branch-and-bound procedure.

The work presented in this thesis could be incorporated into a branch-and-bound procedure to find an integer optimal solution to the time indexed formulation. The effects of adding cutting planes to the column generation algorithms proposed here could also be investigated. As pointed out by van den Akker [9], some of these column generation procedures tend to preserve their structure even when some valid inequalities are added.
We propose that the general framework and the various tools presented in this thesis form a sound basis for designing a branch-and-bound algorithm for single and parallel machine scheduling problems. This could form a topic for further research.

Bibliography

[1] K.R. Baker. Elements of Sequencing and Scheduling. Dartmouth College, Hanover, NH, 1993.
[2] J.E. Beasley. Lagrangian relaxation. Operational research notes, Imperial College, 1995.
[3] R. Fourer, D.M. Gay, and B.W. Kernighan. AMPL: A Modeling Language for Mathematical Programming. Duxbury Press, 1993.
[4] M. Pinedo. Scheduling: Theory, Algorithms, and Systems. Prentice Hall, New Jersey, 1995.
[5] M. Queyranne and A.S. Schulz. Polyhedral approaches to machine scheduling. Preprint 408/1994, Technische Universität Berlin, 1994.
[6] J.P. Sousa. Time indexed formulations of non-preemptive single-machine scheduling problems. PhD thesis, Université Catholique de Louvain, Belgium, 1989.
[7] J.P. Sousa and L.A. Wolsey. A time indexed formulation of non-preemptive single machine scheduling problems. Mathematical Programming, 54:353-367, 1992.
[8] S. Swami, J. Eliashberg, and C.B. Weinberg. SCREENER: A dynamic programming approach to the scheduling of movies on screens. Talk presented at a joint Marketing/Management Science seminar, UBC, February 27, 1996.
[9] M. van den Akker. LP-based solution methods for single-machine scheduling problems. PhD thesis, Eindhoven University of Technology, Eindhoven, The Netherlands, 1994.

Appendix 1: AMPL Model for the Movie Scheduling Problem

    param T > 0;     # Planning horizon
    param n > 0;     # Number of movies
    param m > 0;     # Number of screens
    param c > 0;     # House nut
    param del > 0;   # Maximum share
    param f > 0;     # First week share
    param a > 0;     # Increment in share

    set MOVIE := 1..n;   # Indices of movies

    param opd {MOVIE} > 0;    # Obligation period
    param r {MOVIE} > 0;      # Release date
    param d {MOVIE} > 0;      # Expiry date
    param k {MOVIE} >= 0;     # Number of distinct screening periods
    param scr {j in MOVIE, i in 0..k[j]} := opd[j] + i;   # Screening period
    param rev {j in MOVIE, t in r[j]..d[j]};              # Revenue in week t
    param tot {j in MOVIE, i in 0..k[j], t in r[j]..d[j]-scr[j,i]+1} :=
        sum {u in t..t+scr[j,i]-1}
            (max(0, rev[j,u]-c) * min(del, f+(u-t)*a)
             + c + rev[j,u]*0.2857);                      # Total revenue

    var x {j in MOVIE, i in 0..k[j], t in r[j]..d[j]-scr[j,i]+1} binary;

    maximize total_revenue:
        sum {j in MOVIE, i in 0..k[j], t in r[j]..d[j]-scr[j,i]+1}
            tot[j,i,t] * x[j,i,t];

    subject to movies {j in MOVIE}:
        sum {i in 0..k[j], t in r[j]..d[j]-scr[j,i]+1} x[j,i,t] <= 1;

    subject to screens {t in 1..T}:
        sum {j in MOVIE, i in 0..k[j], s in t-scr[j,i]+1..t:
             s >= r[j] and s <= d[j]-scr[j,i]+1} x[j,i,s] <= m;

Appendix 2: Data for the AMPL Model

    param T := 10;
    param n := 13;
    param m := 3;
    param c := 500;
    param del := 0.4;
    param f := 0.1;
    param a := 0.1;

    param:    r    d   opd   k :=
       1      8   10    2    1
       2      1   10    2    8
       3      1   10    2    8
       4      1   10    2    8
       5      1   10    2    8
       6      1   10    2    8
       7      3   10    2    6
       8      2   10    2    7
       9      5   10    2    4
      10      1   10    2    8
      11      6   10    2    3
      12      6   10    2    3
      13      6   10    2    3 ;

    param rev:     1      2      3      4      5      6      7      8      9     10 :=
       1           .      .      .      .      .      .      .  12966   8629   5706
       2        1679   1325   1452    935   1172    696    648    600    500    400
       3        8908   5194   3658   2840   3039   1931   1756   1582   1953   1655
       4        5577   3027   2236   1795   2266    991    700    409   1311   1137
       5        9400   6820   5675   3881   3157   2285   2166   2048   1627   1790
       6        7652   5274   3880   3033   3226   2228   2160   2108   2188   1781
       7           .      .  12762   6003   4532   2288   1864   1440   1582   1226
       8           .   8755   6947   5306   4884   2898   2500   2341   2090   1738
       9           .      .      .      .  18264   6786   5828   4869   3893   3715
      10        4000   2733   2445   2167   2331   1835   1793   1751   1641   2134
      11           .      .      .      .      .  13314  11759  10204   8096   5955
      12           .      .      .      .      .   3576   2808   2040   1501   1282
      13           .      .      .      .      .   7229   5835   4440   3212   1992 ;
Item Metadata
Title: Time indexed formulation of scheduling problems
Creator: Williams, David Niranjan
Date Issued: 1997
Extent: 1530610 bytes
Genre: Thesis/Dissertation
Type: Text
File Format: application/pdf
Language: eng
Date Available: 2009-03-10
Provider: Vancouver : University of British Columbia Library
Rights: For non-commercial purposes only, such as research, private study and education. Additional conditions apply; see Terms of Use at https://open.library.ubc.ca/terms_of_use.
DOI: 10.14288/1.0099180
URI: http://hdl.handle.net/2429/5850
Degree: Master of Science - MSc
Program: Business Administration
Affiliation: Business, Sauder School of
Degree Grantor: University of British Columbia
Graduation Date: 1997-05
Campus: UBCV
Scholarly Level: Graduate