Open Collections: UBC Theses and Dissertations

Appointment scheduling with discrete random durations and applications. Begen, Mehmet Atilla, 2010.
Appointment Scheduling with Discrete Random Durations and Applications

by Mehmet Atilla Begen

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate Studies (Business Administration)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

April 2010

© Mehmet Atilla Begen 2010

Abstract

We study scheduling of jobs on a highly utilized resource when the processing durations are stochastic and there are significant underage (resource idle-time) and overage (job waiting and/or resource overtime) costs. Our work is motivated by surgery scheduling and physician appointments. We consider several extensions and applications.

In the first manuscript, we determine an optimal appointment schedule (planned start times) for a given sequence of jobs (surgeries) on a single resource (operating room, surgeon). Random processing durations are integers and given by a discrete probability distribution. The objective is to minimize the expected total underage and overage costs. We show that an optimum solution is integer and can be found in polynomial time.

In the second manuscript, we consider the appointment scheduling problem under the assumption that the duration probability distributions are not known and only a set of independent samples is available, e.g., historical data. We develop a sampling-based approach and determine bounds on the number of independent samples required to obtain a provably near-optimal solution with high probability.

In manuscript three, we focus on determining the number of surgeries for an operating room in an incentive-based environment. We explore the interaction between the hospital and the surgeon in a game-theoretic setting, present empirical findings and suggest incentive schemes that the hospital may offer to the surgeon to reduce its idle-time and overtime costs.

In manuscript four, we consider an application to inventory management in a supply chain context.
We introduce advance multi-period quantity commitment with stochastic characteristics (demand or yield) and describe several real-world applications. We show these problems can be solved as special cases of the appointment scheduling problem. In manuscript five, an appendix, we develop an alternate solution approach for the appointment scheduling problem. We find a lower bound value, obtain a subgradient of the objective function, and develop a special-purpose integer rounding algorithm combining discrete convexity and non-smooth convex optimization methods.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication
Co-Authorship Statement
1 Introduction
  1.1 Motivation and Appointment Scheduling
  1.2 Overview of the Thesis
    1.2.1 Chapter 2: Appointment Scheduling
    1.2.2 Chapter 3: A Sampling-Based Approach
    1.2.3 Chapter 4: Incentive-Based Surgery Scheduling
    1.2.4 Chapter 5: Advance Quantity Commitment
    1.2.5 Appendix A: Minimizing a Discrete-Convex Function
  1.3 Outline of the Thesis
  1.4 Bibliography
2 Appointment Scheduling
  2.1 Introduction and Motivation
  2.2 Related Literature
  2.3 Assumptions and Notation
  2.4 Basic Properties
  2.5 Optimality of an Integer Appointment Vector
  2.6 L-convexity
  2.7 Algorithms
  2.8 Objective Function with a Due Date
  2.9 No-shows and Emergency Jobs
  2.10 Current Work, Future Work and Conclusion
  2.11 Bibliography
3 Sampling Approach
  3.1 Introduction and Motivation
  3.2 Appointment Scheduling Problem
  3.3 Convexity
  3.4 Subdifferential Characterization
  3.5 Sampling Approach
  3.6 Conclusion
  3.7 Proofs
  3.8 Bibliography
4 Incentive-Based Surgery Scheduling
  4.1 Introduction
  4.2 Problem Description and Motivation
  4.3 The Model
  4.4 Misalignment of Incentives
    4.4.1 Deciding the Number of Surgeries
  4.5 Contracts
    4.5.1 Hospital has Complete Information and Coercive Power
    4.5.2 Take-It-or-Leave-It Offer
    4.5.3 Three-Part Contract
    4.5.4 Implementing the Two Contracts
    4.5.5 Welfare Considerations
  4.6 Dependent Surgeries with Identical Realizations
    4.6.1 Preliminaries and Misalignment of Incentives
    4.6.2 Risk-Averse Surgeon
    4.6.3 Intuitive Discussion
    4.6.4 Discrete Time Distribution with Two Values
    4.6.5 Sufficient Conditions on Cost Coefficients
  4.7 Conclusion and Future Directions
  4.8 Proofs
  4.9 Bibliography
5 Advance Quantity Commitments
  5.1 Introduction
  5.2 Description of Appointment Scheduling Problem
  5.3 An Inventory Model with Commitments
  5.4 A Production Model with Commitments
  5.5 Conclusion
  5.6 Bibliography
6 Concluding Remarks
  6.1 Bibliography
Appendices
A Minimizing a Discrete-Convex Function
  A.1 Introduction and Motivation
  A.2 Lower Bound (on the Value) and Computation of F
  A.3 Obtaining a Subgradient in Polynomial Time
    A.3.1 Probability Computations
    A.3.2 Complexity of Subgradient Computations
    A.3.3 Obtaining a Subgradient Fast (in Polynomial Time)
  A.4 Algorithms
  A.5 Conclusion and Future Work
  A.6 Bibliography

List of Tables

Table 4.1 Estimates of Daily Overtime Probability per OR
Table 5.1 Comparison of the Appointment Scheduling and Inventory Models
Table 5.2 Comparison of the Appointment Scheduling and Production Models

List of Figures

Figure 1.1 Surgery Durations by Surgical Specialty
Figure 1.2 Duration Distribution of a Simple Hernia Operation
Figure 1.3 Surgery Scheduling Process
Figure 1.4 An Instance with Three Surgeries
Figure 2.1 Surgery Durations
Figure 2.2 A Three-Job Instance, and a Realization of the Processing Durations
Figure 2.3 An Example Schedule with Emergency Jobs
Figure 4.1 Healthcare Players and Tasks
Figure 4.2 Surgery Scheduling Process
Figure 4.3 The Third Level of the Surgery Scheduling Process
Figure 4.4 Duration Distribution of a Simple Hernia Operation
Figure 4.5 Actual and Scheduled Surgery Durations
Figure 4.6 Daily Average Overtime Minutes and Probability of Overtime per OR
Figure 4.7 Existence of m
Figure 4.8 Illustration of nR ≤ nS with Two t Values
Figure 4.9 Illustration of Condition Eq. (4.16)
Figure 5.1 A Realization of Inventory Levels for 6 Periods
Figure A.1 Event I5 = {1, 2, 3, 5} Visualization
Figure A.2 Event I8 = {3, 5, 6} Visualization

Acknowledgements

This thesis would not have been what it is without the encouragement and generous support from many individuals. I am grateful to every person who has offered help to me and contributed in some way to the process leading to my dissertation. It is difficult to overstate my gratitude to my supervisor, Maurice Queyranne.
With his patience, inspiration, and great efforts to explain things clearly and simply, he has been a great mentor throughout my Ph.D. studies. I would like to thank him for our long research meetings and for all of the advice and insightful direction he has passed along. I hope that one day I can be a researcher like him and supervise my own Ph.D. students in the same way. The second individual I would like to express my appreciation to is Marty Puterman. He persuaded me to come to Canada for the COE program in the first place and encouraged me to pursue a Ph.D. degree. I would like to thank him for all of the advice, encouragement and support he has given during my studies and my employment at UBC, as well as for introducing me to the world of applied operations research. I have also been fortunate enough to have a great thesis committee. Besides Maurice Queyranne and Marty Puterman, I had the privilege to have Ralph Winter and Mahesh Nagarajan as my thesis committee members. I thank them for being supportive of my research, and for their encouragement and constructive critiques that have greatly contributed to my thesis. Furthermore, I thank Ralph Winter for introducing me to the topic of Contract Theory and for planting the seed ideas of Chapter 4 in a term paper. And I thank Mahesh Nagarajan for providing me with advice on my research and other academic issues when it was most needed. Let me also thank a few individuals who contributed to this research. I start with Retsef Levi, who suggested using a sampling-based approach for surgery scheduling during his timely visit to UBC. I also thank him for being a co-author of Chapter 3. Second, my appreciation goes to Chris Ryan, who has been a very good friend, a great research collaborator and a co-author of Chapter 4.
For Chapter 4, I also thank Mati Dubrovinsky, Zhongzhi Song, and Gavin Yang for their interesting and fruitful discussions and feedback, and give my thanks to the management and system analysts at a local hospital for their support and help in obtaining data. Last but not least, I thank Philip D. Loewen and Steven Shechter for their comments on the thesis. Since I first came to UBC, the campus and its tenants offered me a pleasant, friendly and motivating work environment. I would like to thank the OPLOG faculty (in particular Steven Shechter, Derek Atkins, Harish Krishnan, Danny Granot, Anming Zhang and Tom McCormick), Elaine Cho (who has guided me through the university bureaucracy with her endless goodwill), Michelle Medalla, Joel Feldman, Geoffrey Blair, the COE staff and others (in particular Fredrik Odegaard, Jonathan Patrick, Mariel Lavieri, Antoine Saure V., Pablo Santibanez, Vincent Chow, Abelardo Mayoral, Steven Kabanuk, Greg Werker, Anita Parkinson). Furthermore, thanks to UBC and NSERC for their financial support during my studies. Last but not least, I want to thank my mom, dad, and brother. I am forever indebted for their understanding, endless patience and encouragement when it was most required. It is to them that I dedicate this work.

Dedication

To my family, Sündüz, Cevdet and Ali Cengiz.

Co-Authorship Statement

Chapter 2, Chapter 5 and Appendix A are manuscripts co-authored with the candidate's supervisor, Maurice Queyranne. The identification and design of the research program for these papers were carried out jointly. Research, analysis and manuscript preparation were performed by the candidate with close supervision from Maurice Queyranne. Chapter 3 is co-authored with Maurice Queyranne and Retsef Levi. The identification and design of the research program for this paper were carried out jointly.
Research, analysis and manuscript preparation were performed by the candidate with close supervision from Maurice Queyranne and with comments on revisions provided by Retsef Levi. Chapter 4 is co-authored with Maurice Queyranne and Chris Ryan. The design of the research program, research, analysis and manuscript preparation for this paper were carried out jointly with Chris Ryan, with close supervision from Maurice Queyranne. The identification of the initial research question and the preliminary data analysis were performed by the candidate.

1 Introduction

We begin with the motivation for and an introduction to the appointment scheduling problem in Section 1.1, where we also discuss how and where appointment scheduling fits in the context of surgery scheduling and summarize related previous work. Then, in Section 1.2, we give an overview of the thesis. Finally, Section 1.3 concludes the chapter with an outline of the thesis.

1.1 Motivation and Appointment Scheduling

Healthcare is one of the biggest industries in North America. Canada was expected to spend $148 billion on healthcare in 2006 [13], more than 10% of its GDP. The situation is similar in the United States, where healthcare accounted for 15.3% of GDP in 2006 [8]. Healthcare challenges, rising costs on one side and rising demand on the other, are growing not only in Canada but in almost every country in the world [6]. To address these challenges, one may think of increasing available resources (capacity), limiting demand, or finding ways to improve efficiency [24]. In most cases, increasing capacity or limiting demand may not be possible, and even when they are, the challenges may require a deeper analysis and efficiency improvements. One way to improve healthcare operations is through effective scheduling of resources and of the patients who need them. Scheduling issues become more important and challenging when there is uncertainty present in the system.
Uncertainty may be involved with patients (e.g., priority levels and arrivals [25]), resources (e.g., availability of a vaccine) or any other aspect of healthcare operations (e.g., surgery durations [10]). In our applied healthcare projects, we also observed uncertainty in patient arrivals and surgery durations [32]. For instance, Figures 1.1 and 1.2 show how variable surgery durations can be. Figure 1.1 shows an example of surgery durations (operating room (OR) time in minutes) by surgical specialty, and Figure 1.2 depicts the duration distribution of a simple hernia operation. (Data for these figures comes from local hospitals.)

[Figure 1.1: Surgery Durations by Surgical Specialty. OR time in minutes (roughly 0 to 400) across ten surgical specialties: General, Cardiac, Neuro., Ortho., Plastic., Vascu., Urol., Obst/Gyn., Oto., Ophth.]

Uncertainty makes the scheduling and capacity allocation decisions more complex and challenging. In such an environment, one needs to balance the tradeoff between allocating too much capacity (more idle-time but less patient waiting time) and too little (less idle-time but more patient waiting time and overtime).

[Figure 1.2: Duration Distribution of a Simple Hernia Operation. Histogram of actual surgery duration in minutes (bins from 32 to 96+), with relative frequencies up to about 0.30.]

Motivated by surgeries, oncologist consultations and radiation therapy treatments for cancer patients, we take an in-depth look at the appointment scheduling of jobs (e.g., surgeries, exams) of a highly utilized processor (e.g., OR, physician) when the job durations are stochastic and there are significant overage (job waiting and/or processor overtime) and underage (processor idle-time) costs. For a given sequence of jobs on a single processor, we determine an optimal appointment schedule (planned start times) minimizing the expected total underage and overage costs.
Before we get into the details of the appointment scheduling problem, we first look at the surgery scheduling process to see how and where appointment scheduling fits in this context. In practice, scheduling surgeries in a medical facility is a complex and important process, and the choice of schedule directly impacts the overall performance of the system [32]. The surgery scheduling process (for elective cases) is usually considered a three-level process [2, 3, 23]. We can classify these three levels as the strategic, tactical and operational stages of the surgery scheduling process, respectively. Figure 1.3 gives an overview of the process in terms of decisions, decision maker and decision level. The first level defines and assigns the OR time among the surgical specialties, usually called mix planning. A surgical OR block schedule is developed at the second level. Finally, at the third level, individual cases are scheduled on a daily basis, also known as patient mix. It is at this level that variability in surgery durations plays a key role, and where one determines the number of surgeries to perform in a block, the sequence in which the surgeries are performed and the planned start times (appointment times) of the surgeries. Ideally, one should consider all three levels of decisions simultaneously and not in isolation. However, practical applications and mathematical challenges force practitioners and academics to work on these problems individually. In this thesis, we concentrate on level three, and for the appointment scheduling problem we assume that the number of jobs (surgeries) and a sequence are already determined and given. For example, in the case of surgeries, for a given set of surgeries and their sequence, an appointment schedule, i.e., planned start times, needs to be prepared. This is an important and challenging task, since the surgery appointment schedule has a direct impact on the amount of overtime and idle-time of ORs [10].
OR overtime can be costly since it involves staff overtime as well as additional overhead costs; on the other hand, idle-time costs can also be high due to the opportunity cost of unused capacity, which is especially important in a Canadian context given the political and social issues related to the length of surgical waiting lists [34]. (An OR block schedule is simply a table that assigns each specialty surgery time in ORs on each day; the times are called blocks. The OR block schedule is sometimes called the master surgical schedule; see Figure 2 of [32] for a sample OR block schedule.)

[Figure 1.3: Surgery Scheduling Process. Strategic level: the Health Authority decides the budget and surgical mix (specialties and % of time, i.e., capacity per specialty). Tactical level: hospital management decides the block schedule (blocks for each specialty/surgeon). Operational level: surgeons decide the patient schedule (scheduling of patients into a block).]

An appointment schedule assigns an allocated duration by specifying the appointment time of each surgery, at which the required resource(s) (e.g., OR, surgeon, healthcare personnel and equipment) and the patient will be available. However, due to the uncertain surgery durations, some surgeries may finish earlier whereas others may finish later. In the latter case, the next surgery has to wait for the preceding surgery to complete and will start later than its original appointment time. As the appointment times have to be determined in advance, there are only limited recourse options when the actual duration of a procedure differs from its planned value. When a surgery finishes earlier than the next surgery's appointment time, there is under-utilization of the healthcare resources. On the other hand, if a procedure finishes later than the next procedure's appointment time, there may be overtime of the healthcare resources and waiting for the next procedure.
Therefore, there is an important trade-off between under-utilization, overtime and patient waiting times. We are interested in finding a schedule that minimizes the expected total cost of resource under-utilization, resource overtime and patient waiting. Generating such a schedule is more challenging, but also more valuable and useful, when processing durations have more variability. A good schedule is crucial, and the savings from such a schedule may be significant. Figure 1.4 shows an instance with three surgeries G, B, R to be processed in this order, with a given appointment schedule (AG, AB, AR). Once processing starts, due to the random processing durations, some surgeries may be early whereas others may be late, as shown in Figure 1.4. (CG, CB, CR denote the completion times of the surgeries.)

[Figure 1.4: An Instance with Three Surgeries. Top: the appointment schedule AG, AB, AR for surgeries G, B, R. Bottom: one realization of the durations and the completion times CG, CB, CR, showing idle time (I), wait time (W) and overtime (O).]

In the last five decades, there has been tremendous interest in appointment scheduling, not only in healthcare and service industries [7, 5, 35] but also in other areas such as production and transportation [12, 31]. While our goal is to provide an overview of the prior work on appointment scheduling, here we can only give a glimpse of it; in the subsequent chapters, we survey the work related to the problems under discussion in more detail. Weiss [36] recognized that the appointment scheduling problem has a closed-form solution when there is only a single job, and that it coincides with the well-known newsvendor solution [23] from inventory theory. However, the problem departs from newsvendor characteristics and solution methods in the case of multiple jobs [29]. In the multi-period newsvendor problem, decisions are naturally taken sequentially at each period.
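The dynamics behind Figure 1.4 (each surgery starts at the later of its appointment time and its predecessor's completion, so idle time, waiting and overtime fall out of the realization) can be sketched in a few lines. This is an illustrative sketch only: the function names, unit cost values and the day length below are assumptions, not quantities from the thesis.

```python
import random

def realized_cost(appts, durations, day_length, c_idle=1.0, c_wait=1.0, c_over=1.0):
    """Idle, wait and overtime cost of one realization of the durations.

    Job i starts at max(A_i, C_{i-1}): the resource idles if the previous
    job finished before A_i, and the job waits if it finished after A_i.
    """
    completion = 0
    cost = 0.0
    for a, p in zip(appts, durations):
        if completion < a:
            cost += c_idle * (a - completion)   # resource idle-time
            start = a
        else:
            cost += c_wait * (completion - a)   # job waiting time
            start = completion
        completion = start + p
    cost += c_over * max(0, completion - day_length)  # overtime past the due date
    return cost

def expected_cost_mc(appts, dists, day_length, n=10000, seed=0):
    """Monte Carlo estimate of the expected cost; dists[i] = (values, probs)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        durs = [rng.choices(vals, probs)[0] for vals, probs in dists]
        total += realized_cost(appts, durs, day_length)
    return total / n
```

For instance, with appointments (0, 30, 60), realized durations (40, 20, 30) and a 90-minute day, the only cost incurred is 10 minutes of waiting for the second surgery.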
In appointment scheduling, by contrast, one needs to have a schedule before any processing can start: all the decision variables (appointment times) are determined simultaneously at the beginning of the planning horizon, i.e., at time zero. In terms of solution methods, we see studies based on stochastic programming [28, 10], queuing theory [35, 5, 16], simulation and other methods; see [7] and the references therein. Cayirli and Veral [7] classify the literature in terms of the methodologies and modeling aspects considered, and provide a discussion of performance measures. The authors conclude that the existing literature provides very situation-specific solutions and does not offer generally applicable and portable methodologies for appointment-system design in outpatient scheduling. Finally, we point out some differences between appointment scheduling and single-machine scheduling [27]. Unlike machine scheduling, in appointment scheduling a sequence is given and the release dates are the decision variables. Furthermore, the objective function of the appointment scheduling problem differs from the objective functions of classical machine scheduling problems. Processing durations are usually deterministic in machine scheduling problems, but random processing durations have also been studied in the literature [27]. The appointment scheduling problem can be modeled as a multistage stochastic programming problem [28, 10, 29], but there are significant computational difficulties due to the need for multidimensional numerical integration; e.g., even computing the expected cost of a given schedule is difficult. Hence, heuristic methods have to be developed for realistic-size problems. To the best of our knowledge, all the analytical studies on appointment scheduling that we are aware of, even the ones with discrete epochs for job arrivals [16, 5, 35], use continuous job duration distributions.
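Weiss's single-job observation mentioned above can be made concrete. With one job, allocating t units of time incurs an idle cost proportional to (t − D)+ and an overage cost proportional to (D − t)+, and the minimizer of the expected cost is the critical-fractile quantile of the duration distribution, exactly as in the newsvendor problem. A minimal sketch, in which the cost-coefficient names are illustrative assumptions:

```python
def single_job_allocation(values, probs, c_idle, c_over):
    """Smallest duration t in the support with F(t) >= c_over / (c_idle + c_over).

    values/probs give the discrete duration distribution of the single job;
    c_idle is the per-unit idle (underage) cost, c_over the per-unit
    overtime (overage) cost.
    """
    ratio = c_over / (c_idle + c_over)
    cum = 0.0
    for v, p in sorted(zip(values, probs)):
        cum += p                      # running value of the CDF F(v)
        if cum >= ratio - 1e-12:
            return v
    return max(values)
```

With durations {30: 0.2, 45: 0.4, 60: 0.3, 90: 0.1} and overtime three times as expensive as idle time, the critical fractile is 0.75 and the allocation is 60 minutes.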
1.2 Overview of the Thesis

1.2.1 Chapter 2: Appointment Scheduling with Discrete Random Durations

In Chapter 2, we study a discrete-time version of the appointment scheduling problem, i.e., the processing durations are integer and given by a discrete probability distribution. This assumption fits many applications; for example, surgeries and physician appointments are scheduled on a minute basis (usually a block of a certain number of minutes). (For instance, one 20-minute physician appointment could be two blocks of 10 minutes.) We establish discrete convexity [21] properties of the objective function (under a mild condition on the cost coefficients) and show that there exists an optimal integer appointment schedule minimizing the objective. This result is important as it allows us to optimize only over integer appointment schedules without loss of optimality. These results on the objective function and the optimal appointment schedule enable us to develop a polynomial-time algorithm, based on discrete convexity [22], that, for a given processing sequence, finds an appointment schedule minimizing the total expected cost. When processing durations are stochastically independent, we can evaluate the expected cost for a given processing order and an integer appointment schedule in polynomial time; independent processing durations lead to faster algorithms. Our modeling framework can include a given due date for the end of the processing of all jobs (e.g., end of day for an operating room) after which overtime is incurred, instead of letting the model choose an end date. We also extend the analysis to include no-shows and some emergency jobs. Our setting is quite general, and it could be applied to various real-life scenarios (in healthcare and other areas) including surgeries, MRI exams, physician and specialist consultations, radiation therapy, project scheduling, container vessel and terminal operations, and gate and runway scheduling of aircraft in an airport.
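The claim that the expected cost of an integer schedule can be evaluated in polynomial time under independent durations can be illustrated by propagating the discrete distribution of completion times job by job. This is a sketch of the idea under the stated independence assumption, not the thesis's exact algorithm, and the cost coefficients are illustrative:

```python
from collections import defaultdict

def expected_cost_exact(appts, dists, day_length, c_idle=1.0, c_wait=1.0, c_over=1.0):
    """Exact expected cost for independent integer durations.

    dists[i] is a dict {duration: probability} for job i.  For integer
    durations the support of the completion-time distribution is bounded
    by the total duration range, so the evaluation is polynomial.
    """
    comp = {0: 1.0}  # distribution of the previous job's completion time
    cost = 0.0
    for a, dist in zip(appts, dists):
        nxt = defaultdict(float)
        for c, pc in comp.items():
            if c < a:
                cost += pc * c_idle * (a - c)   # expected idle-time contribution
                start = a
            else:
                cost += pc * c_wait * (c - a)   # expected waiting contribution
                start = c
            for d, pd in dist.items():
                nxt[start + d] += pc * pd       # convolve with this job's durations
        comp = nxt
    cost += sum(pc * c_over * max(0, c - day_length) for c, pc in comp.items())
    return cost
```

For point-mass duration distributions this reproduces the deterministic cost, and for a single job it reduces to the expected overage/underage of a newsvendor-style allocation.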
We believe our approach is sufficiently generic and portable to solve the appointment scheduling problem efficiently.

1.2.2 Chapter 3: A Sampling-Based Approach to Appointment Scheduling

In Chapter 2, we assume complete information about the job duration distributions, i.e., there is an underlying discrete probability distribution for the job durations (the true distribution), and this distribution is fully known and available. This may be the case for some applications. However, for others, the true duration distributions may not be known, while (past) realizations or some samples may be available. One good example of such an application comes from healthcare: hospitals and surgeons usually have some historical data available on the length of surgeries, but no one knows the true distribution for a certain type of surgery. In Chapter 3, we consider the problem of appointment scheduling with discrete random durations under the assumption that the true duration probability distributions are not known and only a set of independent samples is available. These samples may correspond to historical data, for example daily observations of surgery durations. We show that the objective function of the appointment scheduling problem is convex (as a function of continuous appointment vectors) under a simple and sufficient condition on the cost coefficients. Under this condition we characterize the subdifferential of the objective function with a closed-form formula. We use this formula to determine bounds on the number of independent samples required to obtain a provably near-optimal solution with high probability, i.e., the cost of the sampling-based optimal schedule is with high probability no more than (1 + ε) times the cost of the optimal schedule computed with respect to the true distribution. Our bound on the number of required samples is polynomial in the number of jobs, the accuracy level, the confidence level and the cost coefficients.
(The subdifferential of a convex function at a point is the set of all its subgradients there [30, 14].)

There has been much interest in studying stochastic models, especially the newsvendor problem and its multiperiod extension, with partial probabilistic characterization. When the true distribution is not fully known, the question is how to find a "good" solution. Depending on how much is known about the true distribution(s), different approaches are possible, e.g., parametric and non-parametric. One may know the family of the true distribution but be uncertain about its parameters; this is called the parametric approach, e.g., see [11, 19] and the references therein. If there are no assumptions on the true distribution, i.e., no prior assumptions on its family or its parameters, then the approach is non-parametric, e.g., see [17, 33, 26, 4]. Our approach is non-parametric, and we employ sample average approximation (SAA) [33] to solve the appointment scheduling problem with samples. In other words, we use the available samples to form an empirical distribution and find an optimal solution with respect to this empirical distribution, i.e., the sampling solution. Then we use the subdifferential characterization of the objective function (Section 3.4) and the well-known Hoeffding inequality [15] to determine the number of samples required to guarantee that, with high probability (i.e., at least the specified confidence level), there exists a sufficiently small (in terms of the specified accuracy level) subgradient at the sampling solution. As a final step, we show that the objective value (w.r.t. the true distribution) of the sampling solution is no more than (1 + the accuracy level) times the true optimal value, with probability at least the confidence level. For our sampling-based approach, the job durations need not be independent, but we require the samples to be independent.
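The chapter's specific sample-size bound is derived from the subdifferential characterization; as a generic illustration of the kind of guarantee Hoeffding's inequality yields, the standard two-sided bound P(|X̄_N − μ| ≥ ε) ≤ 2·exp(−2Nε²/B²) for i.i.d. observations with range B leads to the following sample-size calculation. This is the textbook bound for estimating a single mean, not the chapter's polynomial bound in the number of jobs and cost coefficients:

```python
import math

def hoeffding_sample_size(eps, delta, value_range=1.0):
    """Smallest N with 2 * exp(-2 * N * eps**2 / value_range**2) <= delta.

    eps is the accuracy level, delta the allowed failure probability
    (one minus the confidence level), value_range the width B of the
    interval containing the observations.
    """
    return math.ceil(value_range ** 2 * math.log(2.0 / delta) / (2.0 * eps ** 2))
```

For example, estimating a mean to within eps = 0.1 at 95% confidence (delta = 0.05) needs 185 samples when the observations lie in an interval of length one.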
In other words, each sample is a vector of durations, where each coordinate corresponds to a job duration, and these vectors are independent. An independence assumption on the probability distributions (e.g., of job durations) is common, but we do not require it in our sampling-based analysis. To the best of our knowledge, Chapter 3 is the first to address the appointment scheduling problem when the probability distributions of the durations are unknown. We develop a sampling-based approach for the appointment scheduling problem, which is a stochastic non-linear integer program. Furthermore, we believe Chapter 3 presents the first rigorous analysis of the convexity of the objective function of the appointment scheduling problem under a simple sufficient condition. Last but not least, we characterize the set of all subgradients, i.e., the subdifferential at a given appointment date vector, with a closed-form formula3.

1.2.3  Chapter 4: Incentive-Based Surgery Scheduling: Determining Optimal Number of Surgeries

In Chapter 4, we look at a different but related problem of determining the number of surgeries for an OR block, with a focus on the incentives of the parties involved (hospital and surgeon). We investigate the interaction between the hospital and the surgeon in a game-theoretic setting, present empirical findings on surgery durations, and suggest payment schemes that the hospital may offer to the surgeon to reduce its (idle and especially overtime) costs. In particular, we investigate the situation, commonly reported in the literature [23] and observed empirically (Section 4.2), that surgeons over-schedule their allotted OR time, i.e., they schedule too many surgeries for their OR time. Olivares et al. [23] report that schedule overruns are mainly caused by incentive conflicts and over-confidence. We take a systematic look at this and provide a model by which these incentive conflicts can be identified and effectively analyzed.
3 This is unusual since only a single subgradient may be obtained in most applications. We make use of this subdifferential characterization in finding optimum appointment schedules by using non-smooth optimization methods in Appendix A.

Based on historical data analysis (Section 4.2), we see that in 81% of the surgeries, actual durations were longer than the booked/scheduled durations. This high percentage suggests that the durations of individual surgeries are often underestimated. One may ask how this phenomenon actually affects the daily overall performance of an OR block, i.e., the amount of overtime for an OR as well as the likelihood of an OR going into overtime. To answer this question, we look at the data at the operating-room level. For each OR, we compute the daily averages of scheduled and overtime OR minutes and find the percentage of overtime, i.e., the ratio of overtime OR minutes to scheduled OR minutes. We find that the overtime amount is well over 20% for each OR (Figure 4.6). We also find the percentage of days on which each OR has overtime, to estimate the probability of daily overtime for each OR (Table 4.1). The smallest of these numbers is 75%. These empirical findings, a significant amount and a high likelihood of overtime, suggest that the cost of overtime can be substantial. If an OR can be managed in such a way that overtime is decreased, this may translate into immediate and significant cost savings. Additionally, savings from the reduction in overtime costs may be used to increase hospital resources such as regular OR time, recovery and intensive care beds. We argue that these observations can be explained by the incentive of surgeons to take advantage of the fee-for-service4 payment structure for surgeries performed, combined with the fact that surgeons do not bear overtime costs at the hospital level. This creates a cost which is borne by the hospital, which operates the OR and pays the surgery support staff.
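The per-OR overtime statistics described above (overtime as a percentage of scheduled time, and the fraction of days with any overtime) reduce to simple ratios. A minimal sketch with invented daily records (not the thesis data):

```python
# Hypothetical per-day records for one OR: (scheduled minutes, actual minutes used).
days = [(480, 560), (480, 470), (480, 610), (480, 495), (480, 480)]

overtime_min = [max(actual - sched, 0) for sched, actual in days]
# overtime OR minutes as a percentage of scheduled OR minutes
pct_overtime = sum(overtime_min) / sum(s for s, _ in days) * 100
# fraction of days on which the OR ran any overtime (estimates P(daily overtime))
p_overtime = sum(m > 0 for m in overtime_min) / len(days)

print(round(pct_overtime, 2), p_overtime)
```

In the thesis data these two quantities exceed 20% and 75%, respectively, for every OR.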
Thus we argue that the hospital has an incentive to limit the number of surgeries performed by surgeons in order to reduce overtime expenditures. We explore this misalignment of incentives (for the surgeon, to over-schedule; for the hospital, to control overtime costs) in a game-theoretic setting. We characterize analytically the number of surgeries that minimizes hospital costs, find conditions under which this number is less than the surgeon's preference, and propose contracts that induce the surgeon to schedule a number of surgeries more aligned with the goals of the hospital. Depending on how much power the hospital has over surgeons and how much information is available to the hospital, we suggest several contracts that the hospital might consider.

1.2.4  Chapter 5: Advance Multi-Period Quantity Commitment and Appointment Scheduling

As briefly discussed above, there is a connection between the celebrated newsvendor problem and the appointment scheduling problem. If we have only a single job (surgery), i.e., n = 1, then the appointment scheduling problem becomes the newsvendor problem. This was first recognized by Weiss [36]. In Chapter 5, we investigate this relationship in the case of multiple jobs. We introduce advance multi-period quantity (order or supply) commitment problems with random characteristics (demand or yield). There are underage and overage costs if there is a mismatch between the committed and realized quantities. All quantity decisions (how much to order or supply in each of the next n periods) must be made now, before any realization of demand or yield. The objective is to maximize the total expected profit after n periods. We establish a link between these problems and the appointment scheduling problem (as given in Chapter 2). We show that these problems can be studied and solved as special cases of the appointment scheduling problem.
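The single-job case noted above can be made concrete: with one job, the schedule reduces to choosing an allocated duration t, the cost is u·(t − p)⁺ + o·(p − t)⁺, and the newsvendor solution is the critical-fractile quantile, the smallest t with P(p ≤ t) ≥ o/(u + o). A minimal sketch (distribution and cost rates hypothetical):

```python
def newsvendor_allocation(dist, u, o):
    """Smallest support point t with P(p <= t) >= o/(u + o): the optimal
    allocated duration for a single job with discrete duration distribution
    `dist`, a dict mapping duration -> probability."""
    ratio = o / (u + o)
    cum = 0.0
    for t in sorted(dist):
        cum += dist[t]
        if cum >= ratio - 1e-12:
            return t
    return max(dist)

# hypothetical surgery: duration 60, 90 or 120 minutes
dist = {60: 0.3, 90: 0.5, 120: 0.2}
print(newsvendor_allocation(dist, u=1.0, o=3.0))   # costly overtime -> allocate more
```

With overtime three times as costly as idle time, the critical ratio is 0.75 and the allocation lands at the 90-minute quantile; swapping the rates would shrink it to 60.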
In a supply chain, uncertainty effects (e.g., due to stochastic demand or random yield) are something that players would like to minimize or pass on to others. Consider a buyer and a supplier where the buyer can order any amount from the supplier whenever it is convenient. This may be the case when there are many suppliers competing for buyers. However, the supplier would prefer a contract in which the buyer (who has better information about the demand uncertainty) commits in advance to how much to purchase over a certain period of time. In return, the supplier may offer a discount to the buyer to make this choice attractive. These types of agreements are reported in practice [1, 20, 18]. With such an agreement, the challenge for the buyer becomes determining how much to commit to purchase in advance (e.g., in total for the entire horizon or per period) and how much to order in each period. This problem and its variants (such as finite or infinite horizons, with or without fixed costs, total or individual period commitments) have been well motivated and studied in the literature [1, 20, 9]. These studies mostly (and naturally) use dynamic programming to determine an optimal policy, and in some cases they develop heuristics. Nevertheless, all the previous studies on this topic that we are aware of consider situations where a buyer commits in advance to how much to purchase and decides how much to order in each period consecutively, i.e., the ordering decision for the next period is made after this period's demand realization. In our setting, the buyer needs to decide how much to order for all periods at once, now, before any realization of the random demands.

4 In Canada and the United States, although some doctors are salaried hospital employees, most doctors are private entrepreneurs who have admission privileges at a hospital, work on a fee-for-service basis and appear when the patient needs a cure or treatment [6].
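The upfront-commitment setting can be illustrated with a deliberately simplified cost evaluation (this is a stand-in sketch, not the models of Chapter 5): all order quantities are fixed now, unmet demand is backordered, and expected holding/backorder cost is computed by enumerating independent discrete demands.

```python
from itertools import product

def expected_commitment_cost(q, demand_dists, h, b):
    """Expected holding (h) and backorder (b) cost when the order quantities
    q[t] for all periods are fixed up front; demand_dists[t] maps a demand
    value to its probability (periods assumed independent for brevity)."""
    total = 0.0
    for scenario in product(*[list(d.items()) for d in demand_dists]):
        prob, inv, cost = 1.0, 0, 0.0
        for (demand, p), qt in zip(scenario, q):
            prob *= p
            inv += qt - demand                       # leftover (+) / backordered (-)
            cost += h * max(inv, 0) + b * max(-inv, 0)
        total += prob * cost
    return total

# two periods, demand 1 or 2 each period, commit q = (2, 1) now
dists = [{1: 0.5, 2: 0.5}, {1: 0.5, 2: 0.5}]
print(expected_commitment_cost((2, 1), dists, h=1.0, b=4.0))
```

Searching over commitment vectors q trades holding cost against the (typically larger) shortage cost, the same underage/overage tension as in the scheduling problem.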
There can be situations where the buyer needs to enter such a contract to secure any orders from a strong supplier. We discuss two models and a few examples. The first is a multi-period inventory model for a buyer with a perishable product and backordering. The second is a multi-period production model for a producer with random yield and high inventory and product-shortage costs. The feature distinguishing these models from previous ones reported in the literature is that all quantity commitment (order and supply amount) decisions are to be made at once, before the decision horizon starts. To the best of our knowledge, the problems considered in Chapter 5 have not yet been studied.

1.2.5  Minimizing a Discrete-Convex Function for Appointment Scheduling

The objective function of the appointment scheduling problem, under a simple sufficient condition, is discretely convex as a function of the integer appointment vector (Chapter 2), but it is convex and non-smooth when appointment vectors are continuous (Chapter 3). In Appendix A, we investigate whether we can take advantage of both discrete convexity and non-smooth convex optimization methods to solve the appointment scheduling problem. Our purpose is to find a way to combine both sets of methods to minimize the objective function of the appointment scheduling problem more efficiently and practically. In this appendix, we compute a subgradient of the objective in polynomial time for any given (real-valued) appointment schedule with independent processing durations, by using the subdifferential characterization obtained in Chapter 3. Finding a subgradient in polynomial time is not trivial because the subdifferential formulas include exponentially many terms, and some of the probability computations are complicated. We also extend the computation of the expected total cost (in polynomial time) to any (real-valued) appointment vector.
These results allow us to use non-smooth convex optimization techniques to find an optimal schedule. To combine the discrete and non-smooth algorithms into a hybrid approach, we develop a special-purpose integer rounding method which takes any fractional solution and rounds it to an integer one with the same or an improved objective value. We believe this hybrid approach may perform well in practice.

1.3  Outline of the Thesis

The rest of this thesis, as seen from Section 1.2, is organized as a series of chapters. At the beginning of every chapter, we motivate the problem in question and examine the related work. We then provide our analysis and results. We conclude each chapter with a summary of the main findings. In addition to the chapters discussed in Section 1.2, Chapter 6 summarizes the thesis contributions and provides a brief discussion of future research directions.

1.4  Bibliography

[1] Yehuda Bassok and Ravi Anupindi. Analysis of supply contracts with total minimum commitment. IIE Trans., 29:373–381, 1997.

[2] Jeroen Belien and Erik Demeulemeester. Building cyclic master surgery schedules with leveled resulting bed occupancy. European Journal of Operational Research, 176:1185–1204, 2007.

[3] John Blake and Joan Donald. Mount Sinai Hospital uses integer programming to allocate operating room time. Interfaces, 32(2):63–73, 2002.

[4] James H. Bookbinder and Anne E. Lordahl. Estimation of inventory re-order levels using the bootstrap statistical procedure. IIE Trans., 21(4):302–312, 1989.

[5] Peter M. Vanden Bosch, Dennis C. Dietz, and John R. Simeoni. Scheduling customer arrivals to a stochastic service system. Naval Research Logistics, 46:549–559, 1999.

[6] Mike Carter. Diagnosis: Mismanagement of resources. OR/MS Today, 29(2):26–32, 2002.

[7] Tugba Cayirli and Emre Veral. Outpatient scheduling in health care: A review of literature. Production and Operations Management, 12(4), 2003.

[8] Amitabh Chandra and Jonathan Skinner.
Expenditure and productivity growth in health care. Dartmouth College, February 2008. Forthcoming as an NBER Working Paper.

[9] Ki Ling Cheung and Xue-Ming Yuan. An infinite horizon inventory model with periodic order commitment. EJOR, 146:52–66, 2003.

[10] Brian Denton and Diwakar Gupta. A sequential bounding approach for optimal appointment scheduling. IIE Transactions, 35:1003–1016, 2003.

[11] Xiaomei Ding, Martin L. Puterman, and Arnab Bisi. The censored newsvendor and the optimal acquisition of information. Oper. Res., 50(3):517–527, 2002.

[12] Mohsen Elhafsi. Optimal leadtime planning in serial production systems with earliness and tardiness costs. IIE Transactions, 34:233–243, 2002.

[13] Canadian Institute for Health Information Web Site. http://www.cihi.ca/.

[14] Jean-Baptiste Hiriart-Urruty and Claude Lemaréchal. Convex Analysis and Minimization Algorithms I and II. Springer, 1993.

[15] Wassily Hoeffding. Probability inequalities for sums of bounded random variables. J. American Statistical Assoc., 58(301):13–30, 1963.

[16] Guido C. Kaandorp and Ger Koole. Optimal outpatient appointment scheduling. Health Care Man. Sci., 10:217–229, 2007.

[17] Retsef Levi, Robin O. Roundy, and David B. Shmoys. Provably near-optimal sampling-based policies for stochastic inventory control models. Math. of Oper. Res., 32(4):821–838, 2007.

[18] Zhaotong Lian and Abhijit Deshmukh. Analysis of supply contracts with quantity flexibility. EJOR, 196:526–533, 2009.

[19] Liwan H. Liyanage and J. George Shanthikumar. A practical inventory control policy using operational statistics. Operations Research Letters, 33:341–348, 2005.

[20] Kamran Moinzadeh and Steven Nahmias. Adjustment strategies for a fixed delivery contract. Oper. Res., 48(3):408–423, 2000.

[21] Kazuo Murota. Discrete Convex Analysis. SIAM Monographs on Discrete Mathematics and Applications, Vol. 10. Society for Industrial and Applied Mathematics, 2003.

[22] Kazuo Murota.
On steepest descent algorithms for discrete convex functions. SIAM J. Optim., 14(3):699–707, 2003.

[23] Marcelo Olivares, Christian Terwiesch, and Lydia Cassorla. Structural estimation of the newsvendor model: An application to reserving operating room time. Management Science, 54(1):41–55, 2008.

[24] Jonathan Patrick. Dynamic Patient Scheduling for a Diagnostic Resource. PhD thesis, The University of British Columbia, 2006.

[25] Jonathan Patrick, Martin L. Puterman, and Maurice Queyranne. Dynamic multi-priority patient scheduling for a diagnostic resource. Operations Research, 56:1507–1525, 2008.

[26] Georgia Perakis and Guillaume Roels. Regret in the newsvendor model with partial information. Oper. Res., 56(1):188–203, 2008.

[27] Michael Pinedo. Scheduling: Theory, Algorithms, and Systems. Prentice Hall, 2001.

[28] Lawrence W. Robinson and Rachel R. Chen. Scheduling doctors' appointments: optimal and empirically-based heuristic policies. IIE Transactions, 35:295–307, 2003.

[29] Lawrence W. Robinson, Yigal Gerchak, and Diwakar Gupta. Appointment times which minimize waiting and facility idleness. Working Paper, DeGroote School of Business, McMaster University, 1996.

[30] R. Tyrrell Rockafellar. Theory of Subgradients and its Applications to Problems of Optimization: Convex and Nonconvex Functions. Helderman-Verlag, Berlin, 1981.

[31] F. Sabria and C. F. Daganzo. Approximate expressions for queueing systems with scheduled arrivals and established service order. Transportation Science, 23:159–165, 1989.

[32] Pablo Santibanez, Mehmet Begen, and Derek Atkins. Surgical block scheduling in a system of hospitals: An application to resource and wait list management in a British Columbia health authority. Health Care Man. Sci., 10(3):269–282, 2007.

[33] Alexander Shapiro. Stochastic programming approach to optimization under uncertainty. Math. Programming, 112:183–220, 2007.

[34] Health Canada Web Site.
http://www.hc-sc.gc.ca/hcs-sss/qual/acces/wait-attente/index-eng.php.

[35] P. Patrick Wang. Static and dynamic scheduling of customer arrivals to a single-server system. Naval Research Logistics, 40(3):345–360, 1993.

[36] E. N. Weiss. Models for determining estimated start times and case orderings in hospital operating rooms. IIE Transactions, 22(2):143–150, 1990.

2  Appointment Scheduling with Discrete Random Durations1

We consider the problem of determining an optimal appointment schedule for a given sequence of jobs (e.g., medical procedures) on a single processor (e.g., operating room, examination facility, physician), to minimize the expected total underage and overage costs when each job has a random processing duration given by a joint discrete probability distribution. Simple conditions on the cost rates imply that the objective function is submodular and L-convex. Then there exists an optimal appointment schedule which is integer and can be found in polynomial time. Our model can handle a given due date for the total processing (e.g., end of day for an operating room) after which overtime is incurred, as well as no-shows and some emergencies.

2.1  Introduction and Motivation2

Our research concerns appointment scheduling of jobs on a highly utilized processor when the processing durations are stochastic and jobs are not available before their appointment dates.3 We came across this problem in surgery scheduling and in appointment scheduling of oncologist consultations and radiation therapy treatments for cancer patients. There are many other challenging and important real-life applications of this setting, including healthcare diagnostic operations (such as CAT scans and MRIs), physician appointments, project scheduling, container vessel and terminal operations, and gate and runway scheduling of aircraft at an airport. For example, in surgery scheduling, patients or surgeries

1 A version of this chapter has been submitted for publication. Begen M.A.
and Queyranne M. Appointment Scheduling with Discrete Random Durations.
2 A conference version of this chapter appeared in [1].
3 To conform with scheduling terminology, we use the term "date" to denote a point in time. In most applications of appointment scheduling, the appointment "dates" are actually appointment times within the day for which the jobs are being scheduled.

are the jobs, the operating room (OR) and associated resources are the processor, and the surgeon or the hospital is the scheduler. Figure 2.1 shows an example of surgery durations (OR time in minutes) per surgical specialty. As seen from the box plots of Figure 2.1, surgery durations show variability. This data was obtained during an applied research project [28].

[Figure 2.1: Surgery Durations. Box plots of OR time (minutes, roughly 0–400) by surgical specialty: General, Cardiac, Neuro., Ortho., Plastic., Vascu., Urol., Obst/Gyn., Oto., Ophth.]

Some appointment scheduling applications may have a specific due date for the end of processing, e.g., end of day for an OR, after which an additional cost per time unit, e.g., overtime, is incurred. The need for a good schedule is crucial, and the savings from such a schedule can be significant. In most cases, an appointment schedule needs to be prepared before any processing starts. It assigns each procedure an allocated duration by specifying the appointment date at which the required personnel and equipment, and the job or patient, will be available. However, due to the uncertain processing durations, some jobs may finish earlier, whereas some others may finish later, than the appointment date of the next job. As the appointment dates have to be determined in advance, there are only limited recourse options when the actual duration of a job differs from its planned value. When a procedure finishes earlier than the next procedure's appointment date, the processor and other resources remain idle until the appointment date of the next job.
This results in resource under-utilization. On the other hand, if a job finishes later than the next job's appointment date, the next job has to wait for the preceding procedure to complete and will start later than its original appointment date. This results in waiting for the next job and may cause overtime for the processor and resources at the end of the schedule. Therefore, there is an important trade-off between under-utilization, overtime and job waiting times. We are interested in generating an appointment vector4 that minimizes the expected total cost of resource under-utilization, overtime and job waiting times. Finding such a schedule is more challenging, but also more valuable and useful, when the processing durations have more variability. Figure 2.2 shows an instance with 3 jobs G, B, R to be processed in this order. An appointment schedule (AG, AB, AR) is given. Once processing starts, due to the random processing durations, some jobs may be early whereas others may be late, as shown in Figure 2.2.

[Figure 2.2: A Three-Job Instance, and a Realization of the Processing Durations. Top: the appointment schedule AG, AB, AR for jobs G, B, R. Bottom: a realization of the durations and the completion times CG, CB, CR, showing idle time (I), wait time (W) and overtime (O).]

This problem can be modeled as a multistage stochastic program, but there are significant computational difficulties due to the need for multidimensional numerical integration (see Section 2.2). To the best of our knowledge, all the analytical studies we are aware of, even the ones with discrete epochs for job arrivals, use continuous processing duration distributions. For a given sequence of jobs, only small instances can be solved to optimality; larger instances require heuristics. We study a discrete-time version of the appointment scheduling problem and establish discrete convexity properties of the objective function.
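The bookkeeping behind a realization such as the one in Figure 2.2 is a one-pass recursion: each job starts at the later of its appointment date and the previous job's completion. A minimal sketch with hypothetical dates and durations:

```python
def realize(A, p):
    """Start/completion times and per-job idle (I) and wait (W) amounts
    for one realization p of the durations, given appointment dates A
    (with A[0] = 0 for the first job)."""
    starts, completions, idle, wait = [], [], [], []
    C = 0
    for i, dur in enumerate(p):
        idle.append(max(A[i] - C, 0))   # I: processor idle before job i starts
        S = max(A[i], C)                # job starts at its appointment or when its predecessor ends
        wait.append(S - A[i])           # W: time job i waits past its appointment date
        C = S + dur
        starts.append(S)
        completions.append(C)
    return starts, completions, idle, wait

# appointment dates for three jobs (G, B, R) and one realized duration vector
starts, completions, idle, wait = realize((0, 4, 7), (3, 5, 2))
print(starts, completions, idle, wait)
```

Here job B finishes late, so job R waits past its appointment and the schedule runs past its planned end, exactly the trade-off the objective function prices.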
Discrete convex analysis has been advocated by Murota [18]; for recent developments in the topic, see [20]. We prove that the objective function is L-convex under mild assumptions on the cost coefficients. L-convex functions, introduced by Murota in [17], play a central role in discrete convexity and in our research. Furthermore, we show that there exists an optimal integer appointment schedule minimizing the expected total cost. This result is important as it allows us to optimize only over integer appointment schedules without loss of optimality. These results on the objective function and the optimal appointment schedule enable us to develop a polynomial-time algorithm, based on discrete convexity [19], that, for a given processing sequence, finds an appointment schedule minimizing the total expected cost. This algorithm invokes a sequence of submodular set-function minimizations, for which various algorithms are available; see e.g., [9], [13], [8], [16] and [21]. When the processing durations are stochastically independent, we evaluate the expected cost for a given processing order and an integer appointment schedule efficiently, both in theory (in polynomial time) and in practice (computations are quite fast, as shown in our preliminary computational experiments). Independent processing durations lead to faster algorithms. Our modeling framework can include a given due date for the end of processing (e.g., end of day for an operating room) after which overtime is incurred, instead of letting the model choose an end date. We also extend our analysis to include no-shows and emergency jobs. The expected benefits of this research effort include reduced job waiting times, reduced overtime and improved capacity utilization. Our chapter is organized as follows. We start with a literature summary in Section 2.2.

4 We use appointment schedule and appointment vector interchangeably.
Section 2.3 states our assumptions, introduces our notation and formally defines the problem and the objective function. Section 2.4 gives some basic properties of the objective function and of optimal solutions. In Section 2.5 we show the existence of an optimal appointment vector which is integer. Section 2.6 establishes the submodularity and L-convexity of the objective function under a mild condition on the cost coefficients. We show that the total expected cost can be minimized efficiently and give the complexity of this minimization in Section 2.7. In that section, we also compute the objective function for any integer appointment vector and determine its complexity when the processing durations are stochastically independent; this independence assumption leads to faster algorithms. We extend our analysis to an objective function with a due date for the end of processing in Section 2.8. Section 2.9 shows how to handle no-shows and some emergency jobs within our framework. Section 2.10 discusses the current and future work, and concludes the chapter.

2.2  Related Literature

There have been many studies over the last 50 years on appointment scheduling, especially in healthcare. Here, we present the ones that we believe are the most relevant to our research. The use of appointment systems is not limited to service industries but also extends to other areas, such as project management, production and transportation. Sabria and Daganzo [27] consider scheduling of arrivals of container vessels at a seaport. Weiss [33] recognized that the appointment scheduling problem has a closed-form solution when there are only two jobs, and that it coincides with the well-known newsvendor solution from inventory theory. Robinson et al. [26] extend this result to three jobs by obtaining optimality conditions. Zipkin [34] presents an analysis of the structure of a single-item multi-period inventory system, closely related to the newsvendor problem, by using discrete convexity.
Elhafsi [7] studies a production system of multiple stages with stochastic leadtimes. The objective is to determine planned leadtimes such that the expected total cost (inventory, tardiness and earliness) is minimized. Bendavid and Golany [2] consider project scheduling with stochastic activity durations. They address the problem of determining, for each activity, a gate, i.e., a time before which the activity cannot begin, so as to minimize total expected holding and shortage costs, for which they use a heuristic based on the Cross-Entropy methodology. Cayirli and Veral [5] review the literature on appointment systems for outpatient scheduling. The authors classify the literature in terms of the methodologies and modeling aspects considered, and provide a discussion of performance measures. They conclude that the existing literature provides very situation-specific solutions and does not offer generally applicable and portable methodologies for appointment-system design in outpatient scheduling. Another literature review, by Cardoen et al. [4], on operating room scheduling evaluates papers on either the problem setting, such as performance measures, or technical properties, such as solution methods. In a queueing-based study, Wang [31] develops a model to find appointment dates of jobs in a single-server system to minimize expected customer delay and server completion time, with identical jobs and costs, and exponential processing duration distributions. In his numerical studies, the optimal allocated time for each job shows a "dome" structure, i.e., it increases first and then decreases. In another study, Wang [32] investigates the sequencing problem in the same setting but with distinct exponential distributions. He conjectures that sequencing in order of increasing variance is optimal. Bosch et al. [3] present a model with i.i.d. Erlang processing durations and identical cost coefficients.
In their model, customers can arrive only at discrete potential arrival epochs, which are equally spaced, and the decision variable is the number of customers to be scheduled at each potential arrival epoch. In a related paper, Kaandorp and Koole [14] study outpatient appointment scheduling with exponential processing durations and no-shows. They take advantage of the exponential distribution in their computations and define a neighborhood structure and an exact search method. However, for large instances, they develop a heuristic due to the high computation times of their search method. Another important stream of appointment scheduling research is based on stochastic programming. Denton and Gupta [6] develop a two-stage stochastic linear program to determine optimal appointment dates for a given surgery sequence and a due date for the end of the processing horizon. The authors use general, i.i.d. and continuous processing durations, and identical server idling cost coefficients for all jobs. They infer from stochastic programming results that their model is a convex minimization problem, and they develop an algorithm with sequential bounding for solving small-sized instances. They develop heuristics to solve larger instances. In a related study, Robinson and Chen [25] develop a stochastic linear program for finding appointment dates for a fixed sequence of surgeries and propose a Monte Carlo based solution method. Due to the high computational requirements of Monte Carlo integration, they develop heuristics in which they use the "dome" structure of the optimal policy reported in Wang [31]. Appointment scheduling can be thought of as an operational-level capacity planning problem, since it concerns the scheduling of jobs/patients available on the day of processing/service [28], [22] and [29]. Researchers also study the problem of scheduling patients in advance of the service date.
In this stream of research, e.g., [22], [29], [10], [11] and the references therein, arrivals are random but processing durations are deterministic, and the main decision is how to allocate the available capacity to incoming demand. Different objectives are considered, such as revenue maximization [11] or cost minimization to achieve target waiting times [22]. Luzon et al. [15] use a fluid approximation to minimize the average waiting time. Finally, we would like to point out the similarities between appointment scheduling and single-machine scheduling; see e.g., [24] for machine scheduling. Unlike machine scheduling, in appointment scheduling a sequence is given and the release dates are the decision variables. Furthermore, the objective function of the appointment scheduling problem is quite different from the objective functions of classical machine scheduling problems. Processing durations are usually deterministic in machine scheduling problems, but random processing durations have also been studied in the literature, see e.g., [23] and [24]. In this chapter we develop a sufficiently generic and portable framework to solve the appointment scheduling problem efficiently.

2.3  Assumptions and Notation

There are n + 1 jobs that need to be sequentially processed on a single processor. The processing sequence is given. An appointment schedule needs to be prepared before any processing can start. Jobs are not available before their appointment dates. When a job finishes earlier than the next job's appointment date, the system incurs some cost due to under-utilization; we refer to this cost as the underage cost. On the other hand, if a job finishes later than the next job's appointment date, the system incurs an overage cost due to the overtime of the current job and the waiting of the next job. The processing durations are given by their joint discrete distribution.
In Section 2.7, we will show that assuming independent discrete processing durations leads to faster algorithms. We assume that this joint distribution is known. Complete information about the distributions is reasonable in most settings, but we relax this assumption in Chapter 3. Our next assumption is a natural one: all cost coefficients and processing durations are non-negative and bounded. A key assumption in this work is that the processing durations are integer valued5. Although we obtain some of our results without this assumption, it is important for our main results. We assume job 1 starts on time, i.e., the start time for the first job is zero, and there are n real jobs. The (n + 1)th job is a dummy job with a processing duration of 0. The appointment time for the (n + 1)th job is the total time available for the n real jobs. We use the dummy job to compute the overage or underage cost of the nth job. Let {1, 2, 3, ..., n, n + 1} denote the set of jobs. We denote the random processing duration

5 We can restrict ourselves to integer appointment schedules without loss of optimality by Theorem 2.5.10.
The underage cost may be interpreted as the idling cost and/or opportunity cost of the resources, whereas the overage cost may be thought of as the waiting cost of the next job and/or the overtime of the current job. The overage cost of the last job may include the overtime cost for the whole facility at the end of the schedule after a specified due date.

Next we introduce our decision variables. Let $a_i$ be the allocated duration and $A_i$ the appointment date for job i. Then we have $A_1 = 0$ and $A_{i+1} = A_i + a_i$ for $i = 1, \ldots, n$. Thus we may equivalently use the allocated duration vector $a = (a_1, a_2, ..., a_n)$ or the appointment vector $A = (A_1, A_2, ..., A_n, A_{n+1})$ (with $A_1 = 0$) as our decision variables; we choose to work with the appointment vector A. (We write all vectors as row vectors.)

We introduce additional variables which help define and compute the objective function. Let $S_i$ be the start date and $C_i$ the completion date of job i. Since job 1 starts on-time we have $S_1 = 0$ and $C_1 = p_1$. The other start times and completion times are determined as follows: $S_i = \max\{A_i, C_{i-1}\}$ and $C_i = S_i + p_i$ for $2 \le i \le n + 1$. Note that the dates $S_i$ and $C_i$ are random variables which depend on the appointment vector A. Let $F(A|p)$ be the total cost of appointment vector A given processing duration vector p:

$$F(A|p) = \sum_{i=1}^{n} \left[ o_i (C_i - A_{i+1})^+ + u_i (A_{i+1} - C_i)^+ \right]. \qquad (2.1)$$

The objective to be minimized is the expected total cost $F(A) = E_p[F(A|p)]$, where the expectation is taken with respect to the random processing duration vector p. We simplify notation by defining the lateness $L_i = C_i - A_{i+1}$ of job i, its tardiness $T_i = (L_i)^+$, and its earliness $E_i = (-L_i)^+$. The objective F(A) can now be written as

$$F(A) = E_p\left[ \sum_{i=1}^{n} (o_i T_i + u_i E_i) \right] = \sum_{i=1}^{n} \left( o_i E_p[T_i] + u_i E_p[E_i] \right).$$

2.4 Basic Properties

We start by making an observation about the completion times and expressing the objective function in a different form that is useful for deriving some of our later results. Since $C_i = S_i + p_i = \max\{A_i, C_{i-1}\} + p_i$, the completion time of job i may be seen as the length of the longest (or critical) path from some job j ($j \le i$) to job i + 1 in a corresponding "project network" (Pinedo [24]), namely:

Lemma 2.4.1. (Critical Path) For all jobs $i = 1, \ldots, n$,

$$C_i = \max_{j \le i}\left\{ A_j + \sum_{k=j}^{i} p_k \right\}$$

$$F(A|p) = \sum_{i=1}^{n} \left[ o_i \left( \max_{j \le i}\left\{ A_j + \sum_{k=j}^{i} p_k \right\} - A_{i+1} \right)^+ + u_i \left( A_{i+1} - \max_{j \le i}\left\{ A_j + \sum_{k=j}^{i} p_k \right\} \right)^+ \right].$$

Proof. The claim holds trivially for i = 1. By induction let the claim be true for i = m, i.e., $C_m = \max_{j \le m}\{ A_j + \sum_{k=j}^{m} p_k \}$. Then

$$C_{m+1} = S_{m+1} + p_{m+1} = \max\{A_{m+1}, C_m\} + p_{m+1} \quad \text{(by definition)}$$
$$= \max\left\{ A_{m+1}, \max_{j \le m}\left\{ A_j + \sum_{k=j}^{m} p_k \right\} \right\} + p_{m+1} \quad \text{(by the inductive assumption)}$$
$$= \max\left\{ A_{m+1} + p_{m+1}, \max_{j \le m}\left\{ A_j + \sum_{k=j}^{m+1} p_k \right\} \right\} = \max_{j \le m+1}\left\{ A_j + \sum_{k=j}^{m+1} p_k \right\}.$$

The expression for $F(A|p)$ follows.

The next result is not only important on its own but also crucial for the existence of an optimal solution.

Lemma 2.4.2. (Continuity) Functions $F(\cdot|p)$ and $F(\cdot)$ are continuous.

Proof. By expression Eq. (2.1), $F(\cdot|p)$ is a weighted sum of piecewise linear continuous functions of A, hence is itself piecewise linear continuous. Since we have a finite sample space, the expectation $F(\cdot) = E_p[F(\cdot|p)]$ is also continuous.

We next establish the existence of an optimal solution. The proof follows from the fact that our objective function is continuous (by Lemma 2.4.2), and we can restrict the appointment vector to a compact set without loss of optimality. Let $\underline{A} = (\underline{A}_1, \ldots, \underline{A}_{n+1})$ and $\overline{A} = (\overline{A}_1, \ldots, \overline{A}_{n+1})$ where $\underline{A}_1 = \overline{A}_1 = 0$, $\underline{A}_i = \sum_{j<i} \underline{p}_j$ and $\overline{A}_i = \sum_{j<i} \overline{p}_j$ for $i = 2, \ldots, n+1$. We define the compact set K as the cartesian product of the intervals $[\underline{A}_i, \overline{A}_i]$, i.e., $K = \prod_{i=1}^{n+1} [\underline{A}_i, \overline{A}_i] = [\underline{A}, \overline{A}] \subseteq \mathbb{R}^{n+1}$.

Lemma 2.4.3.
(Existence of an Optimal Vector) There exists an appointment vector A∗ ∈ K such that F (A∗ ) ≤ F (A) for any appointment vector A. Proof. We show that we can restrict, without loss of optimality, the appointment vector A to the compact set K = [A, A] and recall that job 1 starts at time zero, i.e., A1 = 0 = A1 = A1 . Consider any appointment vector A ∈ K with A1 = 0. If A ≥ A then define the appointment vector A′ = A ∨ A with component A′i = max{Ai , Ai }. For any realization p of the processing durations, the completion times Ci′ in the resulting schedule satisfy Ci′ = Ci ≥ Ai+1 . (Indeed, C1′ = p1 = C1 ≥ A2 and, ′ } + p = max{A , A , C by induction Ci′ = max{A′i , Ci−1 i i i−1 } + pi = max{Ai , Ci−1 } + pi = i  Ci ≥ Ai+1 ). Then the resulting tardiness and earliness become: if Ai+1 ≥ Ai+1 then Ti′ = (Ci′ − A′i+1 )+ = (Ci − Ai+1 )+ = Ti and Ei′ = (A′i+1 − Ci′ )+ = (Ai+1 − Ci )+ = Ei ; and, if Ai+1 < Ai+1 then Ti′ = (Ci′ − A′i+1 )+ = (Ci − Ai+1 )+ ≤ (Ci − Ai+1 )+ = Ti and 0 ≤ Ei = (Ai+1 − Ci )+ ≤ (A′i+1 − Ci′ )+ = Ei′ = 0 (so Ei′ = Ei = 0). Since all ui , oi ≥ 0, it follows from Eq(3.1) that F (A′ |p) ≤ F (A|p) and thus F (A′ ) ≤ F (A). We have shown that for every A there exists A′ ≥ A with F (A′ ) ≤ F (A). Now, for any vector A ∈ Rn+1 satisfying A ≥ A, A1 = 0 and A ∈ K, let i(A) denote the smallest index such that Ai > Ai . Let A ∈ Rn+1 be a vector with largest i(A) value satisfying A ≥ A, A1 = 0 and A ∈ K. We claim that there exists A′ satisfying A′ ≥ A, A′1 = 0, F (A′ ) ≤ F (A), and either A′ ∈ K or i(A′ ) > i(A). Then after at most n such changes we obtain A′′ ∈ K satisfying F (A′′ ) ≤ F (A), which is what we wanted to show. We now prove the claim. Let i = i(A), ε = Ai − Ai > 0, and define A′ with A′j = Aj for all j ≤ i − 1 and A′j = Aj − ε for all j ≥ i, so A′i = Ai . For every realization p of the processing durations, the completion time Cj′ in the resulting schedule satisfy Cj′ = Cj for all j ≤ i − 1. 
Note that for all $j \le i - 1$, $A_j \le \overline{A}_j$ implies $C_j \le \overline{A}_{j+1}$. Therefore $C_i = A_i + p_i$ and $C'_i = A'_i + p_i = A'_i + C_i - A_i = C_i - \varepsilon$. It follows that $C'_j = C_j - \varepsilon$ for all $j \ge i$. As a result, $E'_j = E_j$ and $T'_j = T_j$ for all $j \ne i - 1$, while $E'_{i-1} = E_{i-1} - \varepsilon$ and $T'_{i-1} = T_{i-1} = 0$. Since $\varepsilon > 0$ and $u_{i-1} \ge 0$, $F(A'|p) \le F(A|p)$ and thus $F(A') \le F(A)$. Since $A'_j = A_j \le \overline{A}_j$ for all $j \le i - 1$ and $A'_i = \overline{A}_i$, then either $A' \in K$ or $i(A') \ge i + 1 = i(A) + 1$, establishing the claim. This shows that for any $A \notin K$ there exists a vector $A'' \in K$ with $F(A'') \le F(A)$. As a result, since F is continuous, its minimum on the compact set K is attained and is therefore the global minimum.

The next lemma gives bounds on the difference between any two consecutive components of an optimal appointment vector, and from this we obtain a useful and intuitive result in Lemma 2.4.5.

Lemma 2.4.4. There exists an optimal appointment schedule $A^* \in K$ satisfying $\underline{p}_i \le A^*_{i+1} - A^*_i \le \sum_{j \le i} \overline{p}_j - \sum_{j < i} \underline{p}_j$ for all $i = 1, \ldots, n$.

Proof. By Lemma 2.4.3, we immediately obtain $\underline{p}_1 \le A^*_2 - A^*_1 \le \overline{p}_1$ and $A^*_{i+1} - A^*_i \le \sum_{j \le i} \overline{p}_j - \sum_{j < i} \underline{p}_j$ for all $i = 2, \ldots, n$. Next, we show that $\underline{p}_i \le A^*_{i+1} - A^*_i$ holds for all $i = 2, \ldots, n$. By contradiction, suppose $\underline{p}_k + A^*_k > A^*_{k+1}$ for some $k = 2, \ldots, n$; then job k is late at least $(\underline{p}_k + A^*_k - A^*_{k+1})$ time units, so increasing $A^*_{k+1}$ to $\underline{p}_k + A^*_k$ will improve the objective function by $o_k(\underline{p}_k + A^*_k - A^*_{k+1}) \ge 0$. Therefore we must have $\underline{p}_i \le A^*_{i+1} - A^*_i$ for all $i = 2, \ldots, n$.

Lemma 2.4.5. (Non-Decreasing Appointment Dates) There exists an optimal appointment vector $A^* \in K$ with non-decreasing components, i.e., $A^*_i \le A^*_{i+1}$ for all $i = 1, \ldots, n$.

Proof. By Lemma 2.4.4, $A^*_{i+1} - A^*_i \ge \underline{p}_i \ge 0$ ($1 \le i \le n$).

2.5 Optimality of an Integer Appointment Vector

The existence of an optimal appointment vector which is integer is crucial. It implies that we can restrict attention to integer appointment vectors without loss of optimality.
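For tiny instances these structural results can be checked by brute force: the bounds of Lemma 2.4.3 give a finite box of integer schedules, and the expectation can be evaluated exactly by summing over the finite duration support. The following Python sketch is our own illustration (the instance data are hypothetical); the best integer schedule it finds has non-decreasing appointment dates, consistent with Lemma 2.4.5:

```python
def expected_cost(A, dist, u, o):
    """Exact expected total cost of schedule A over a finite joint
    distribution of the job durations (dist: realization -> probability)."""
    total = 0.0
    for p, prob in dist.items():
        C, cost = 0, 0.0
        for i in range(len(p)):
            C = max(A[i], C) + p[i]              # completion time of job i+1
            cost += o[i] * max(C - A[i + 1], 0)  # overage
            cost += u[i] * max(A[i + 1] - C, 0)  # underage
        total += prob * cost
    return total

# Two independent jobs: p1 uniform on {1, 2}, p2 uniform on {2, 4}.
dist = {(p1, p2): 0.25 for p1 in (1, 2) for p2 in (2, 4)}
u, o = [1.0, 1.0], [2.0, 2.0]

# Enumerate integer schedules A = (0, A2, A3) within the box of Lemma 2.4.3.
best_cost, best_A = min(
    (expected_cost((0, A2, A3), dist, u, o), (0, A2, A3))
    for A2 in range(1, 3)   # A2 in [min p1, max p1] = [1, 2]
    for A3 in range(3, 7)   # A3 in [min p1 + min p2, max p1 + max p2] = [3, 6]
)
```

Restricting attention to integer schedules here is justified by the Appointment Vector Integrality Theorem 2.5.10 below.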
We establish this result in the Appointment Vector Integrality Theorem 2.5.10. Its proof is surprisingly non-trivial.

Let $A^*$ be any non-integer appointment vector and $A^*_f$ the first non-integer component of $A^*$. Knowing all the jobs which have the same fractional part as $A^*_f$ is crucial, so we define J to be the set of all jobs $j \ge f$ such that $A^*_j - A^*_f$ is integer. Let $\mathbb{Z}$ denote the set of integers, and $\lfloor x \rfloor = \sup\{n \in \mathbb{Z} : n \le x\}$ and $\lceil x \rceil = \inf\{n \in \mathbb{Z} : n \ge x\}$ for $x \in \mathbb{R}$. Let $\varphi(x)$ be the distance from $x \in \mathbb{R}$ to the nearest integer, i.e., $\varphi(x) = \min(x - \lfloor x \rfloor, \lceil x \rceil - x)$. Let $\Delta$ be a strictly positive scalar satisfying $0 < \Delta < \frac{1}{2}\min(\Delta_1, \Delta_2)$, where $\Delta_1 = \frac{1}{4}\min\{\varphi(|A^*_j - A^*_k|) : j \in J,\ k \notin J\} > 0$ and $\Delta_2 = \frac{1}{4}\min\{\varphi(|A^*_j - A^*_k|) : j \notin J,\ k \notin J,\ A^*_j - A^*_k \notin \mathbb{Z}\} > 0$. We use $\Delta$ to construct two new appointment schedules $A'$ and $A''$ from $A^*$: let $A'_j = A^*_j - \Delta$ if $j \in J$, and $A'_j = A^*_j$ otherwise; similarly, let $A''_j = A^*_j + \Delta$ if $j \in J$, and $A''_j = A^*_j$ otherwise. For any realization of the processing duration vector p, denote the completion times of job j as $C^*_j$, $C'_j$, $C''_j$ in schedules $A^*$, $A'$ and $A''$, respectively.

One of the main ideas in proving the Appointment Vector Integrality Theorem 2.5.10 is that $\Delta$ is small enough so that there is "no event change" when we move from schedule $A^*$ to schedules $A'$ and $A''$. When there is no event change, we show in Lemma 2.5.9 that our objective function changes linearly between schedules $A'$ and $A''$. To make the no event change concept precise, we define the following. Job i ($1 < i \le n + 1$) is late if $C^*_{i-1} > A^*_i$ (strictly positive tardiness), early if $C^*_{i-1} < A^*_i$ (strictly positive earliness), just-on-time if $C^*_{i-1} = A^*_i$, and on-time if $C^*_{i-1} \le A^*_i$. Then no event change means that if any job is late, early or just-on-time, respectively, in schedule $A^*$, then it is also late, early or just-on-time, respectively, in both schedules $A'$ and $A''$.
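The completion-time sandwich that drives the no-event-change argument (Lemma 2.5.1 below: $C^*_j - \Delta \le C'_j \le C^*_j \le C''_j \le C^*_j + \Delta$ for every realization) can be verified numerically on a small example. A hypothetical Python sketch with a fractional schedule in which jobs 2 and 4 share the fractional part of $A^*_f$, so J = {2, 4}:

```python
from itertools import product

def completions(A, p):
    """Completion times C_1, ..., C_{n+1} via C_i = max{A_i, C_{i-1}} + p_i."""
    C, out = 0.0, []
    for a, d in zip(A, p):
        C = max(a, C) + d
        out.append(C)
    return out

A_star = [0.0, 1.5, 3.0, 4.5, 6.0]  # A_5 is the dummy job's appointment
J = [1, 3]                          # 0-based positions of jobs 2 and 4 (the set J)
delta = 0.05                        # small enough for this instance's bound on Delta
A_lo = [a - delta if i in J else a for i, a in enumerate(A_star)]
A_hi = [a + delta if i in J else a for i, a in enumerate(A_star)]

checked = 0
for p in product((1, 2), repeat=4):  # all realizations of 4 integer durations
    p = list(p) + [0]                # the dummy job has duration 0
    Cs = completions(A_star, p)
    Cl = completions(A_lo, p)
    Ch = completions(A_hi, p)
    # sandwich: C* - delta <= C' <= C* <= C'' <= C* + delta (with float slack)
    assert all(cs - delta - 1e-12 <= cl <= cs + 1e-12 and
               cs - 1e-12 <= ch <= cs + delta + 1e-12
               for cs, cl, ch in zip(Cs, Cl, Ch))
    checked += 1
```

The check passes for all 16 realizations, as monotonicity of the critical-path formula in A guarantees.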
We consider all possible realizations r of the random processing duration vector p, so ri is the corresponding realization of the processing duration pi . We start by establishing relationships between the completion times in the schedules A′ and A∗ , and A′′ and A∗ . Lemma 2.5.1. For every realization of the processing durations and every j = 1, . . . , n + 1, Cj∗ + ∆ ≥ Cj′′ ≥ Cj∗ ≥ Cj′ ≥ Cj∗ − ∆. Proof. Let 1 ≤ j ≤ n + 1 and let r be a realization of p. Then A∗j − ∆ ≤ A′j ≤ A∗j ≤ A′′j ≤ A∗j + ∆ by definition of A′ and A′′ . By the Critical Path Lemma 2.4.1, Cj∗ = maxk≤j {A∗k +  j i=k ri },  Cj′ = maxk≤j {A′k +  j i=k ri }  and Cj′′ = maxk≤j {A′′k +  j i=k ri }.  Hence, A′j ≤ A∗j ≤ A′′j implies that Cj′ ≤ Cj∗ ≤ Cj′′ . On the other hand, A∗j − ∆ ≤ A′j implies that Cj∗ − ∆ = maxk≤j {A∗k − ∆ +  j i=k ri }  ≤ maxk≤j {A′k +  j i=k ri }  Cj∗ − ∆ ≤ Cj′ . Similarly A∗j + ∆ ≥ A′′j implies that Cj∗ + ∆ = maxh≤j {A∗k + ∆ + maxk≤j {A′′k +  j i=k ri }  = Cj′ so j i=k ri }  ≥  = Cj′′ so Cj∗ + ∆ ≥ Cj′′ . The result follows.  The next two results are about late and early jobs. Lemma 2.5.2 below implies that if 27  job k is late (resp., early) then its tardiness (resp., earliness) is strictly greater then 2∆. Lemma 2.5.3 implies that if job k is late (resp., early) in schedule A∗ then it is also late (resp., early) in A′ and A′′ . Lemma 2.5.2. For every realization of the processing durations and every k = 2, . . . , n + 1, ∗ ∗ if |Ck−1 − A∗k | > 0 then |Ck−1 − A∗k | > 2∆.  Proof. Let r be a realization of p. Let t be the last on-time job before k (1 ≤ t < k) so ∗ Ck−1 = A∗t +  k−1 i=t ri .  Note that t exists and is well defined since job 1 is always on-time,  i.e., A∗1 = 0. We consider two cases: (A∗k − A∗t ) ∈ Z or (A∗k − A∗t ) ∈ Z. If (A∗k − A∗t ) ∈ Z ∗ then 0 < |Ck−1 − A∗k | = |A∗t +  |A∗t +  k−1 ∗ i=t ri −Ak |  k−1 i=t ri  − A∗k | but since the ri ’s and A∗k − A∗t are integer,  ∗ is a positive integer, hence |Ck−1 −A∗k | = |A∗t +  k−1 ∗ i=t ri −Ak |  ≥ 1 > 2∆.  
If A∗k −A∗t is not integer then ϕ(A∗k −A∗t ) > 2∆, and this implies ⌈A∗k − A∗t ⌉−(A∗k −A∗t ) > 2∆ and (A∗k − A∗t ) − ⌊A∗k − A∗t ⌋ > 2∆. Since ⌊A∗k − A∗t ⌋ or  k−1 i=t ri  k−1 i=t ri  is integer, we must have either  ∗ −A∗ | = |A∗ + ≥ ⌈A∗k − A∗t ⌉. Therefore |Ck−1 t k  k−1 ∗ i=t ri −Ak |  k−1 i=t ri  ≤  > 2∆.  Lemma 2.5.3. For every realization of the processing duration and every k = 2, . . . , n + 1, ′ ∗ ′′ ′ ∗ < A′k and < A∗k then Ck−1 > A′′k , and if Ck−1 > A′k and Ck−1 if Ck−1 > A∗k then Ck−1 ′′ Ck−1 < A′′k . ∗ ∗ Proof. By Lemma 2.5.2, Ck−1 > A∗k implies Ck−1 − A∗k > 2∆. Note that A∗k − ∆ ≤ A′k ≤ ∗ ′′ ∗ ′ ∗ A∗k ≤ A′′k ≤ A∗k + ∆ by definition, and Ck−1 + ∆ ≥ Ck−1 ≥ Ck−1 ≥ Ck−1 ≥ Ck−1 −∆ ∗ ′ ∗ by Lemma 2.5.1. Then Ck−1 − A∗k > 2∆ implies Ck−1 − A′k ≥ Ck−1 − ∆ − A∗k > ∆ ′′ ∗ ∗ and Ck−1 − A′′k ≥ Ck−1 − A∗k − ∆ > ∆. Similarly, by Lemma 2.5.2, Ck−1 < A∗k implies ∗ A∗k − Ck−1 > 2∆. Note that A∗k − ∆ ≤ A′k ≤ A∗k ≤ A′′k ≤ A∗k + ∆ by definition, and ∗ ′′ ∗ ′ ∗ ∗ Ck−1 + ∆ ≥ Ck−1 ≥ Ck−1 ≥ Ck−1 ≥ Ck−1 − ∆ by Lemma 2.5.1. Then A∗k − Ck−1 > 2∆ ∗ ′′ ∗ ′ ≥ A∗k − Ck−1 − ∆ > ∆. The result ≥ A∗k − Ck−1 − ∆ > ∆ and A′′k − Ck−1 implies A′k − Ck−1  follows. Just-on-time jobs require more care, and we need further definitions and results before we can establish similar results as Lemmata 2.5.2 and 2.5.3. Let a block B[t, k] be a sequence of consecutive jobs, [t, t+1, . . . , k] (1 ≤ t < k ≤ n+1) such that either t = 1 or job t is early, i.e., ∗ ∗ = Cj∗ ≥ A∗j+1 for j = t+1, . . . , k; < St∗ = A∗t ; no other job in the block is early, i.e., Sj+1 Ct−1 ∗ ∗ = A∗i } denote = A∗k . Let K = {i : t < i ≤ k and Ci−1 and job k is just-on-time, i.e., Ck−1 ∗ for the set of just-on-time jobs in the block B[t, k]. So we have St∗ = A∗t and Sj∗ = A∗j = Cj−1  28  all j ∈ K. Our next result, Lemma 2.5.4, implies that the first (job t) and all just-on-time jobs in a block (i.e., elements of K) are either all in J or all outside J. Lemma 2.5.4. 
If B[t, k] is a block then either {t} ∪ K ⊆ J or {t} ∪ K ⊆ B[t, k] \ J. ∗ Proof. Let j ∈ K. We have Cj−1 = A∗t +  j−1 i=t ri  since t is on-time, and there is no idle  ∗ time between t and j. We obtain 0 = Cj−1 − A∗j = A∗t +  j−1 i=t ri  − A∗j . Since  j−1 i=t ri  is  integer, A∗j − A∗t must be integer. This implies that if j ∈ J then t ∈ J, and if j ∈ J then t ∈ J. Lemma 2.5.5 will be used to prove Lemmata 2.5.6 and 2.5.7. Lemma 2.5.5. Let k ∈ {2, . . . , n + 1} be such that A∗k ∈ Z. Then for every realization of ∗ the processing durations such that Ck−1 = A∗k there is an early job j < k.  Proof. Let r be a realization of p. Seeking a contradiction, assume there is no early job ∗ before job k. Then Ck−1 = A∗1 + k−1 i=1 ri  k−1 i=1 ri  = A∗k . This implies A∗k ∈ Z (since A∗1 = 0 and  are integer), a contradiction.  In Lemmata 2.5.6 and 2.5.7 below we prove that no event change occurs for any juston-time job. Therefore Lemma 2.5.8 states that no event change occurs for any job. Lemma 2.5.6. Let k ∈ {2, . . . , n + 1}. For every realization of the processing durations ∗ ′ ′′ such that Ck−1 = A∗k , if there exists an early job j < k then Ck−1 = A′k and Ck−1 = A′′k .  Proof. Let r be a realization of p. Let t be the last early job before k, so B[t, k] is a block. ∗ = A∗i } be the set of just-on-time jobs As explained above, let K = {i : t < i ≤ k and Ci−1  between t and k. By Lemma 2.5.4, either (i) {t} ∪ K ⊆ J or (ii) {t} ∪ K ⊆ B[t, k] \ J.  Case (i) {t} ∪ K ⊆ J ′ ∗ ≤ Ct−1 < First, by induction we show that Cj′ = Cj∗ − ∆ for all j ∈ B[t, k]. Indeed, Ct−1  A∗t − 2∆ < A′t (by Lemmata 2.5.1 and 2.5.2) so St′ = A′t = A∗t − ∆ and Ct′ = A∗t − ∆ + rt = ′ ∗ = Cj−1 − ∆. If Ct∗ − ∆. Consider t < j ∈ B[t, k]. By inductive assumption, Cj−1 ′ ∗ j ∈ K then j ∈ J and A′j = A∗j − ∆, so Sj′ = max{Cj−1 , A′j } = max{Cj−1 − A∗j } − ∆ = ∗ ∗ Cj−1 − ∆. Otherwise, j ∈ K, i.e., j is late, then by Lemma 2.5.2 Cj−1 > A∗j + 2∆. So ∗ ′ ′ ∗ − ∆. 
In both cases, = Cj−1 − ∆ and hence Sj′ = Cj−1 − 2∆ = Cj−1 A′j ≤ A∗j < Cj−1 ∗ , A∗ } + r − ∆ = C ∗ − ∆, completing our ∗ − ∆ + rj = max{Cj−1 Cj′ = Sj′ + rj = Cj−1 j j j  29  ′ ∗ inductive proof. This implies that Ck−1 = Ck−1 − ∆ = A∗k − ∆ = A′k since k ∈ K ⊆ J so ′ Ck−1 = A′k as claimed.  Similarly, by induction we show that Cj′′ = Cj∗ + ∆ for all j ∈ B[t, k].  Indeed,  ∗ ′′ + ∆ < A∗t < A∗t + ∆ = A′′t (by Lemmata 2.5.1 and 2.5.2) so St′′ = A′′t = ≤ Ct−1 Ct−1  A∗t + ∆ and Ct′′ = A∗t + ∆ + rt = Ct∗ + ∆. ′′ ∗ tive assumption, Cj−1 = Cj−1 + ∆.  Consider t < j ∈ B[t, k].  By induc-  If j ∈ K then j ∈ J and A′′j = A∗j + ∆, so  ′′ , A′′ } = max{C ∗ , A∗ } + ∆ = C ∗ Sj′′ = max{Cj−1 j−1 + ∆. Otherwise, j ∈ K, i.e., j is j j−1 j ′′ ∗ ∗ − 2∆ − ∆ = Cj−1 > A∗j + 2∆. So A′′j ≤ A∗j + ∆ < Cj−1 late, then by Lemma 2.5.2 Cj−1 ′′ ∗ and hence Sj′′ = Cj−1 = Cj−1 + ∆, completing our inductive proof.  In both cases,  ∗ ∗ , A∗ } + r + ∆ = C ∗ + ∆. This implies Cj′′ = Sj′′ + rj = Cj−1 + ∆ + rj = max{Cj−1 j j j ′′ ∗ ′′ that Ck−1 = Ck−1 + ∆ = A∗k + ∆ = A′′k since k ∈ K ⊆ J so Ck−1 = A′′k as claimed.  Case (ii) {t} ∪ K ⊆ B[t, k] \ J ∗ ′ < ≤ Ct−1 First, by induction we show that Cj′ = Cj∗ for all j ∈ B[t, k]. Indeed, Ct−1  A∗t − 2∆ < A∗t = A′t (by Lemmata 2.5.1 and 2.5.2) so St′ = A′t = A∗t and Ct′ = A∗t + rt = Ct∗ . ′ ∗ . If j ∈ K then j ∈ J and Consider t < j ∈ B[t, k]. By inductive assumption, Cj−1 = Cj−1 ∗ , A∗ } = C ∗ . Otherwise, j ∈ K, i.e., j is ′ , A′j } = max{Cj−1 A′j = A∗j , so Sj′ = max{Cj−1 j−1 j ∗ ∗ −2∆ = C ′ > A∗j +2∆. So A′j ≤ A∗j < Cj−1 late, then by Lemma 2.5.2 Cj−1 j−1 −2∆ and hence ′ ∗ . In both cases, C ′ = S ′ + r = C ∗ ∗ ∗ ∗ Sj′ = Cj−1 = Cj−1 j j j j−1 + rj = max{Cj−1 , Aj } + rj = Cj , ′ ∗ completing our inductive proof. This implies that Ck−1 = Ck−1 = A∗k = A′k since k ∈ K ′ and k ∈ J, so Ck−1 = A′k as claimed. ′′ ≤ Similarly, by induction we show that Cj′′ = Cj∗ for all j ∈ B[t, k]. 
Indeed, Ct−1 ∗ +∆ < A∗ = A′′ (by Lemmata 2.5.1 and 2.5.2) so S ′′ = A′′ = A∗ and C ′′ = A∗ +r = C ∗ . Ct−1 t t t t t t t t t ′′ ∗ . If j ∈ K then j ∈ J and Consider t < j ∈ B[t, k]. By inductive assumption, Cj−1 = Cj−1 ′′ , A′′ } = max{C ∗ , A∗ } = C ∗ . Otherwise, j ∈ K, i.e., j is A′′j = A∗j , so Sj′′ = max{Cj−1 j j−1 j j−1 ∗ ∗ ′′ late, then by Lemma 2.5.2 Cj−1 > A∗j + 2∆. So A′′j ≤ A∗j + ∆ < Cj−1 − ∆ = Cj−1 − ∆ and ∗ , completing our inductive proof. In both cases, C ′′ = S ′′ + r = ′′ = Cj−1 hence Sj′′ = Cj−1 j j j ∗ ∗ ′′ ∗ , A∗ } + r = C ∗ . This implies that C ′′ ∗ + rj = max{Cj−1 Cj−1 j j j k−1 = Ck−1 = Ak = Ak since ′′ k ∈ K and k ∈ K, so Ck−1 = A′′k as claimed.  Lemma 2.5.7. Let k ∈ {2, . . . , n + 1}. For every realization of the processing durations ′′ ′ ∗ = A′′k . = A′k and Ck−1 = A∗k we have Ck−1 such that Ck−1  30  Proof. If there is an early job before k then the result follows from Lemma 2.5.6. Otherwise, ∗ B[1, k] is a block. Therefore Ck−1 = A∗1 +  k−1 i=1 ri  = A∗k . Furthermore, A∗k ∈ Z by  Lemma 2.5.5 so k ∈ J. Therefore {1} ∪ K ⊆ B[1, k] \ J by Lemma 2.5.4, and hence the result follows from Lemma 2.5.6. Our next result establishes that no event change occurs for any job and directly follows from Lemmata 2.5.3 and 2.5.7. We define the sign of a real number x as sign(x) = 1 if x > 0; 0 if x = 0; and −1 if x < 0. Lemma 2.5.8. For every job j = 2, . . . , n + 1 and every realization of the processing ∗ ′′ ′ − A∗j ). − A′′j ) =sign(Cj−1 − A′j ) =sign(Cj−1 durations, sign(Cj−1  Lemma 2.5.9 below gives a consequence on the objective function of this no event change result. Lemma 2.5.9. F changes linearly with ∆ between A′ and A′′ . Proof. There is no event change when moving from A′ to A′′ by Lemma 2.5.8. Therefore for every realization r of the processing duration vector p, F (.|p = r) changes linearly with ∆ between A′ and A′′ . Hence, F (.) = Ep [F (.|p)], F also changes linearly with ∆ between A′ and A′′ . Theorem 2.5.10. 
(Appointment Vector Integrality) If the processing durations are integer random variables then there exists an optimal appointment vector which is integer.

Proof. By Lemma 2.4.3 we know that there exists an optimal appointment schedule in the set $K = \{A \in \mathbb{R}^{n+1} : \underline{A} \le A \le \overline{A}\}$. Let $\mathcal{A}$ denote the set of all such optimal appointment vectors in K, so $\mathcal{A}$ is nonempty, bounded and closed, since by Lemma 2.4.2 F is continuous. For $A \in \mathcal{A}$ let

$$I(A) = \begin{cases} \min\{A_j : j \in \{2, \ldots, n+1\} \text{ and } A_j \notin \mathbb{Z}\} & \text{if } A \notin \mathbb{Z}^{n+1} \\ n p_{\max} + 1 & \text{if } A \in \mathbb{Z}^{n+1}. \end{cases}$$

We claim $I(\cdot)$ is upper semicontinuous (usc) on the compact set $\mathcal{A}$. If $A \in \mathcal{A} \cap \mathbb{Z}^{n+1}$ then $I(A) = n p_{\max} + 1 \ge I(B)$ for all $B \in \mathcal{A}$, implying that $I(\cdot)$ is usc at A. Otherwise $A \in \mathcal{A} \setminus \mathbb{Z}^{n+1}$, and let $I(A) = A_f$. For any $\epsilon > 0$ let $\delta = \min\{\epsilon, I(A) - \lfloor A_f \rfloor, \lceil A_f \rceil - I(A)\} > 0$. For all $B \in \mathcal{A}$, $\|B - A\| < \delta$ implies $B_f > A_f - \delta \ge A_f - (I(A) - \lfloor A_f \rfloor) = \lfloor A_f \rfloor$ and $B_f < A_f + \delta \le A_f + \lceil A_f \rceil - I(A) = \lceil A_f \rceil$. Therefore $B_f$ is fractional, so $I(B) \le B_f \le A_f + \epsilon = I(A) + \epsilon$. Therefore $I(\cdot)$ is usc at $A \in \mathcal{A} \setminus \mathbb{Z}^{n+1}$. This completes the proof that $I(\cdot)$ is usc on $\mathcal{A}$.

The fact that $I(\cdot)$ is usc and $\mathcal{A}$ is compact implies that there exists an element $A^*$ of $\mathcal{A}$ maximizing $I(\cdot)$. Seeking a contradiction, assume $A^* \notin \mathbb{Z}^{n+1}$. Let $f = \min\{i : A^*_i = I(A^*)\}$, so for all $j < f$, $A^*_j < I(A^*)$ and thus $A^*_j \in \mathbb{Z}$. Let $A'$ and $A''$ be the schedules derived from $A^*$ as defined at the beginning of this section. By optimality $F(A^*) \le F(A')$ and $F(A^*) \le F(A'')$. But by Lemma 2.5.9, $F(A^*)$ changes linearly with $\Delta$ between $A'$ and $A''$. Hence we must have $F(A^*) = F(A') = F(A'')$. Note that $A'' \ge A^* \ge \underline{A}$ and, for every $j \in J$, $A''_j = A^*_j + \Delta < \lceil A^*_j \rceil \le \overline{A}_j$, so $A'' \le \overline{A}$. This shows that $A'' \in K$ and therefore $A'' \in \mathcal{A}$. But $I(A^*) = A^*_f < A^*_f + \Delta = A''_f = I(A'')$, i.e., $I(A^*) < I(A'')$, a contradiction with the definition of $A^*$.

Remark 2.5.11. Linear overage and underage costs are essential for the integrality of an optimal appointment vector. Consider the following example with quadratic costs.
Let n = 1 and $F(A) = E_p\left[ o_1 \left( (C_1 - A_2)^+ \right)^2 + u_1 \left( (A_2 - C_1)^+ \right)^2 \right]$ with $o_1 = u_1 = 1$, and $\mathrm{Prob}\{p_1 = 1\} = \mathrm{Prob}\{p_1 = 2\} = \frac{1}{2}$. Then $F(A) = E_p\left[ (C_1 - A_2)^2 \right]$, $C_1 = p_1$, and the optimum is $A^*_2 = E_p(p_1) = \frac{3}{2}$, which is not integer.

2.6 L-convexity

We start by investigating an important property of our objective function, submodularity (see e.g., [9], [30] and [18]).

Definition 2.6.1. A function $f : \mathbb{Z}^q \to \mathbb{R}$ is submodular iff $f(z) + f(y) \ge f(z \vee y) + f(z \wedge y)$ for all $z, y \in \mathbb{Z}^q$, where $z \vee y = (\max(z_i, y_i) : 1 \le i \le q) \in \mathbb{Z}^q$ and $z \wedge y = (\min(z_i, y_i) : 1 \le i \le q) \in \mathbb{Z}^q$ ([18]).

We now define a property of an appointment vector and a realization of the processing durations that will play an important role in this section.

Definition 2.6.2. A quadruple (i, j, k, l) is a submodularity obstacle for appointment schedule A and a realization r of the processing durations if

• $1 \le i < j < k < l \le n + 1$;

• the cost coefficients satisfy $o_{j-1} + u_{j-1} + \sum_{j \le t < k-1} o_t < u_{k-1}$;

and, in schedule A|p = r

• both jobs i and j are on-time;

• job l is the last job which starts on-time before job n + 1;

• there is no idle time between jobs i and j;

• there is positive idle time between jobs j and l; and

• job k is the first early job after j.

Proposition 2.6.3. For any realization r of the processing durations, the function F(·|p = r) is submodular if and only if there is no submodularity obstacle for any integer appointment vector A.

Proof. Let r be any realization of the processing durations p. By the proof of Theorem 6.19 from Murota [18], F(·|p) is submodular if and only if

$$F(A + 1_i + 1_j|p = r) - F(A + 1_i|p = r) \le F(A + 1_j|p = r) - F(A|p = r) \qquad (2.2)$$

for each $A \in \mathbb{Z}^{n+1}$ and $1 \le i < j \le n + 1$. Let $A \in \mathbb{Z}^{n+1}$ and i < j. Let l be the last on-time job before job n + 1. Job l is well defined since job 1 is always on-time with $S_1 = A_1$. We consider the following cases for job l.

(A) $1 \le l < j \le n + 1$, i.e., job j is late.
(B) l = j, i.e., job j is on-time, and all the jobs after job j are late. (C) j < l ≤ n + 1. To ease notation we use (.|r) to denote schedule (.|p = r). We now verify the submodular inequality (2.2) in each case. Case (A) (l < j ≤ n + 1). Job j is late for both schedules (A|r) and (A + 1i |r), and job j remains not early when Aj is replaced with Aj + 1, therefore F (A + 1i + 1j |r) − F (A + 1i |r) = −oj−1 and F (A + 1j |r) − F (A|r) = −oj−1 . As a result, (2.2) holds with equality. Case (B) (l = j ≤ n+1). Job j is the last on-time job for schedule (A|r), and (A + 1j |r) pushes every job after job j − 1 to the right by one unit. Therefore F (A + 1j |r) − F (A|r) = 33  uj−1 + oj + oj+1 + ... + on ≥ 0. If there is an idle slot between i and j then job j will still be on-time in schedules (A + 1i + 1j |r) and (A + 1i |r). Since every job after job j − 1 in (A + 1i + 1j |r) will also be pushed to the right by one unit, F (A + 1i + 1j |r) − F (A + 1i |r) = uj−1 + oj + oj+1 + ... + on and (2.2) holds with equality. Otherwise, there is no idle slot between i and j. Then job j will be late in schedule (A + 1i |r) but on-time in schedule (A + 1i + 1j |r) and all jobs k > j have the same start times in both schedules (A + 1i + 1j |r) and (A + 1i |r). Therefore, F (A + 1i + 1j |r) − F (A + 1i |r) = −oj−1 ≤ 0 and inequality (2.2) holds. Case (C) (j < l ≤ n + 1). If job j is late in schedule (A|r) then it is also late in schedule (A + 1i |r), and it remains not early when Aj is replaced with Aj + 1. Therefore, F (A + 1j |r) − F (A|r) = −oj−1 and F (A + 1i + 1j |r) − F (A + 1i |r) = −oj−1 . As a result, (2.2) holds with equality. Therefore assume that job j is on-time in schedule (A|r). If there is positive idle time between i and j in schedule (A|r), then j remains on-time in schedule (A + 1i |r) hence also remains on time (A + 1i + 1j |r) and (A + 1j |r) and (2.2) holds with equality. Therefore we also assume that there is no idle time between i and j. 
We consider two subcases, CR1 and CR2, for the right hand side F (A + 1j |r) − F (A|r) and three subcases, CL1, CL2 and CL3, for the left hand side F (A + 1i + 1j |r) − F (A + 1i |r): in schedule (A|r), (CR1) there is no idle time between j and l; (CR2) there is positive idle time7 between j and l; (CL1) job i is on-time; (CL2) job i is late, and there is no idle time between j and l; (CL3) job i is late and there is positive idle time between j and l. In CR1, the time interval [Aj , Aj + 1] is idle in schedule (A + 1j |r) and every job j, j + 1, . . . , n incurs one more unit of overtime in schedule (A + 1j |r) than in schedule (A|r) since all jobs between j and l are not early and all jobs after l are late. Hence, in CR1, F (A + 1j |r) − F (A|r) = uj−1 + oj + oj+1 + ... + on . In CR2, there is an early job k between j and l. Choose k to be the first early job after j so j < k < l. Similarly to CR1, the time interval [Aj , Aj + 1] is idle in schedule (A + 1j |r) and every job j, j + 1, . . . , k − 1 incurs one more unit of overtime in schedule 7  This means that there is at least one idle slot available between the jobs under consideration. In this  case, there exists at least one job which starts on-time in the interval.  34  (A + 1j |r) than in schedule (A|r) since all jobs between j and k are not early. Furthermore, job k − 1 incurs one less unit of idle time in schedule (A + 1j |r) than in schedule (A|r) since k remains not late in schedule (A + 1j |r). Hence, in CR2, F (A + 1j |r) − F (A|r) = uj−1 + oj + oj+1 + ... + ok−3 + ok−2 − uk−1 . In CL1, job i remains on-time in both schedules (A + 1i |r) and (A + 1i + 1j |r), and because there is no idle time between i and j, job j is late in schedule (A + 1i |r) but ontime in schedule (A + 1i + 1j |r). Therefore, schedule (A + 1i + 1j |r) will have one unit less overtime (just before job j) than schedule (A + 1i |r). Hence, in CL1, F (A + 1i + 1j |p) − F (A + 1i |p) = −oj−1 . 
In CL2, job j is just-on-time in schedule (A + 1i |r) but one time unit early in schedule (A + 1i + 1j |r). In schedule (A|r), all jobs between j and l are not early (since there is no idle time between j and l), and all jobs after l are late (since l is the last on-time job). Furthermore, all jobs after j are late in schedule (A + 1i |r) and therefore also late in schedule (A + 1i + 1j |r) because there is no idle time between i and j in schedule (A|r). As a result, schedule (A + 1i + 1j |r) has an idle slot just before Aj + 1 and one more unit of overtime for each job j, . . . , n + 1 than schedule (A + 1i |r). Hence, in CL2, F (A + 1i + 1j |r) − F (A + 1i |r) = uj−1 + oj + oj+1 + ... + on . Similarly to CL2, in CL3, job j is just-on-time in schedule (A + 1i |r) but one time unit early in schedule (A + 1i + 1j |r). Furthermore, there is a first early job k between j + 1 and l since there is positive idle time between j and l in schedule (A|r). The time interval [Aj , Aj +1] is idle in schedule (A + 1i + 1j |r) and every job j, j +1, . . . , k−1 incurs one more unit of overtime in schedule (A + 1i + 1j |r) than in schedule (A + 1i |r). Furthermore, job k−1 incurs one less unit of idle time in schedule (A + 1i + 1j |r) than in schedule (A + 1i |r). Hence, in CL3, F (A + 1i + 1j |r) − F (A + 1i |r) = uj−1 + oj + oj+1 + ... + ok−3 + ok−2 − uk−1 . Note that we have the same job k as in CR2. As a result, F (A + 1i + 1j |p) − F (A + 1i |p) − (F (A + 1j |p) − F (A|p))     −oj−1 − (uj−1 + oj + oj+1 + ... + on ) ≤ 0     0 =   −oj−1 − (uj−1 + oj + oj+1 + ... + ok−3 + ok−2 − uk−1 )      0  if CR1 and CL1 if CR1 and CL2 if CR2 and CL1 if CR2 and CL3  If there is no submodularity obstacle then inequality −oj−1 ≤ uj−1 + oj + oj+1 + ... + 35  ok−3 + ok−2 − uk−1 in CR2 and CL1 is satisfied and F (.|p) is submodular. 
Conversely, if F (.|p) is submodular then −oj−1 ≤ uj−1 +oj +oj+1 +...+ok−3 +ok−2 −uk−1 for all jobs i < j < k < l such that j is on-time, there is no idle time between i and j, there is positive idle time between j and l and job i is on-time; i.e., there is no submodularity obstacle for the appointment vector A and processing duration realization r, hence there cannot be a submodularity obstacle. Corollary 2.6.4. If there is no submodularity obstacle for any integer appointment vector A and processing duration realization r then F is submodular. Proof. The result holds since submodularity is preserved under expectation, F (.) = Ep [F (.|p)], and by Proposition 2.6.3 F (.|p) is submodular if there is no submodularity obstacle for any integer appointment vector A and processing duration realization r. A submodularity obstacle is a very specific configuration, and it does not exist with reasonable cost structures such as nonincreasing ui ’s (ui+1 ≤ ui for all i) or nonincreasing (oi + ui )’s (oi+1 + ui+1 ≤ oi + ui for all i). To capture these cost structures we define the following: Definition 2.6.5. The cost coefficients (u, o) are α-monotone if there exists reals αi (1 ≤ i ≤ n) such that 0 ≤ αi ≤ oi and ui + αi are non-increasing in i, i.e., ui + αi ≥ ui+1 + αi+1 for all i = 1, . . . , n − 1. The following Lemma establishes a relation between existence of a submodularity obstacle and α-monotonicity. Proposition 2.6.6. If the cost coefficients (u, o) are α-monotone then there is no submodularity obstacle for any integer appointment vector A and processing duration realization r.  Proof. Assume (u, o) are α-monotone. We will show that for every j ∈ {2, . . . , n} there exists t ≥ j + 1 such that oj−1 + uj−1 +  t−1 r=j  or ≥ ut−1 . For contradiction suppose,  36  oj−1 + uj−1 +  t−1 r=j  or < ut−1 for all t ≥ j + 1. 
Then t−1  αt−1 + oj−1 + uj−1 +  or < ut−1 + αt−1  (add αt−1 to both sides)  or < ut−1 + αt−1  (since αj−1 ≤ oj−1 )  r=j t−1  αt−1 + αj−1 + uj−1 + r=j  αj−1 + uj−1 < ut−1 + αt−1  (since  t−1 r=j  or + αt−1 ≥ 0),  but this is a contradiction to α-monotonicity. Therefore the result follows. Theorem 2.6.7. (Submodularity) If the cost vectors (u, o) are α-monotone then F is submodular. Proof. If the cost vectors (u, o) are α-monotone then by Proposition 2.6.6 there is no submodularity obstacle for any integer appointment vector A and processing duration realization r. Hence the result follows from Corollary 2.6.4. Completion times, start times and tardiness and their expectations are also submodular: Corollary 2.6.8. The tardiness Tk , start time Sk , completion time Ck , and their expected values Ep [Tk ], Ep [Sk ] and Ep [Ck ] are submodular functions of A for every k = 1, . . . , n. Proof. Recall that F (.|p) =  n i=1 (oi Ti  + ui Ei ). Let 1 ≤ k ≤ n, ui = 0 for all i, and oi = 1  if i = k and 0 otherwise. Then Tk = F (.|p). Therefore Tk is submodular whenever F (.|p) is. By Proposition 2.6.3, F (.|p) is submodular if there is no submodularity obstacle. But the chosen ui ’s and oi ’s are α-monotone so no submodularity obstacle exists by Proposition 2.6.6. As a result F (.|p) and hence Tk is submodular. Next we show Sk is submodular. S1 = 0 and Sk = Ak + max{0, Ck−1 − Ak } = Ak + Tk−1 (1 < k ≤ n) by definition. Since Ak is a scalar and Tk is submodular, Sk is also submodular. Similarly, Ck = Sk +pk (1 ≤ k ≤ n) by definition. Since pk is a scalar and Sk is submodular, Ck is submodular. Finally, the expected values Ep [Tk ], Ep [Sk ] and Ep [Ck ] are submodular since submodularity is preserved under expectation and Tk , Sk and Ck are submodular. This completes the proof. Remark 2.6.9. The earliness Ek is not a submodular function of A in general. To see this let A = (0, 3, 5, 6, 9), deterministic processing durations p1 = 3, p2 = 2, p3 = 2, p4 = 1. 
$E_4(A) = (A_5 - C_4)^+ = (9 - 8)^+ = 1$; similarly, $E_4(A + \mathbf 1_1 + \mathbf 1_2) = 0$, $E_4(A + \mathbf 1_1) = 0$ and $E_4(A + \mathbf 1_2) = 0$. Therefore $1 + 0 = E_4(A) + E_4(A + \mathbf 1_1 + \mathbf 1_2) > E_4(A + \mathbf 1_1) + E_4(A + \mathbf 1_2) = 0 + 0$. Hence $E_4$ is not submodular.

The objective function is not only submodular but also L-convex, an important discrete convexity property. Before we show the L-convexity results, we give the definition of L-convexity.

Definition 2.6.10. $f : \mathbb Z^q \to \mathbb R \cup \{\infty\}$ is L-convex iff $f(z) + f(y) \ge f(z \vee y) + f(z \wedge y)$ for all $z, y \in \mathbb Z^q$, and there exists $r \in \mathbb R$ such that $f(z + \mathbf 1) = f(z) + r$ for all $z \in \mathbb Z^q$ ([18]).

Proposition 2.6.11. For any realization $r$ of the processing durations, the function $F(\cdot|p = r)$ is L-convex if and only if there is no submodularity obstacle for any integer appointment vector $A$ and realization $r$.

Proof. Let $r$ be a realization of the processing durations. If there is no submodularity obstacle for any integer appointment vector $A$ and realization $r$ then $F(\cdot|p = r)$ is submodular by Proposition 2.6.3; this is the first property in the definition of L-convexity. Recall that $F(A|p = r) = \sum_{i=1}^n (o_i T_i + u_i E_i)$, with $T_i = (C_i - A_{i+1})^+$ and $E_i = (A_{i+1} - C_i)^+$. Consider $F(A + \mathbf 1|p = r) = \sum_{i=1}^n (o_i T_i^1 + u_i E_i^1)$, where $x_i^1$ denotes the quantity of interest of job $i$ under the appointment vector $A + \mathbf 1$ for $x \in \{S, C, T, E\}$. Then $S_i^1 = S_i + 1$ and $C_i^1 = C_i + 1$, hence $T_i^1 = T_i$ and $E_i^1 = E_i$. Therefore $F(A + \mathbf 1|p = r) - F(A|p = r) = 0$. This gives us the second property of the L-convexity definition. Conversely, if $F(\cdot|p = r)$ is L-convex then $F(\cdot|p = r)$ must be submodular, and by Proposition 2.6.3 there is no submodularity obstacle for any integer appointment vector $A$.

Corollary 2.6.12. If there is no submodularity obstacle for any integer appointment vector $A$ and realization $r$ then $F(\cdot)$ is L-convex.

Proof. The claim holds since L-convexity is preserved under expectation, $F(\cdot) = E_p[F(\cdot|p)]$, and by Proposition 2.6.11 $F(\cdot|p)$ is L-convex if there is no submodularity obstacle for any integer appointment vector $A$ and realization $r$.

Theorem 2.6.13.
(L-convexity) If the cost vectors $(u, o)$ are α-monotone then $F(A)$ is L-convex.

Proof. If the cost coefficients $(u, o)$ are α-monotone then by Proposition 2.6.6 there is no submodularity obstacle for any integer appointment vector $A$ and processing duration realization $r$. Therefore the result follows from Corollary 2.6.12.

2.7  Algorithms

Using algorithmic results for minimizing L-convex functions ([19], [18]), we can minimize the expected cost $F$ in polynomial time, using a polynomial number of expected cost computations and submodular set minimizations. Assume the input to our problem consists of the number $n$ of jobs, the cost vectors $u$ and $o$, and the horizon $h$ over which $F$ is to be minimized. Assume also that the processing times are integer and that we have an oracle which computes the expected cost $F(A)$ for any given integer appointment vector $A$.

Theorem 2.7.1. (Polynomial Time Algorithm 1) If the cost vectors $(u, o)$ are α-monotone and the processing durations are integer then there exists an algorithm which minimizes $F$ using polynomial time and a polynomial number of expected cost evaluations.

Proof. The Appointment Vector Integrality Theorem 2.5.10 implies that to minimize $F$ we only need to consider integer appointment vectors. If the cost vectors $(u, o)$ are α-monotone then $F$ is an L-convex function by the L-convexity Theorem 2.6.13. Then $F$ can be minimized in $O(\sigma(n)\, EO\, n^2 \log(\lceil h/2n \rceil))$ time by Iwata's steepest descent scaling algorithm (Section 10.3.2 of Murota [18]), where $\sigma(n)$ is the number of function evaluations required to minimize a submodular set function over an $n$-element ground set and $EO$ is the time needed for an expected cost evaluation.

When the processing durations are independent, the expected cost of an integer appointment vector can be evaluated efficiently.
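As an aside, α-monotonicity (Definition 2.6.5) is easy to test for given cost vectors: scanning from the last job backwards and choosing each $u_i + \alpha_i$ as small as possible is feasible whenever any choice is. A minimal sketch (the function name is ours, not from the thesis):

```python
def is_alpha_monotone(u, o):
    """Check Definition 2.6.5: do there exist alpha_i in [0, o_i]
    with u_i + alpha_i non-increasing in i?  Backward greedy: take the
    smallest feasible value v_i = u_i + alpha_i at each step."""
    n = len(u)
    v = u[-1]                 # minimal feasible u_n + alpha_n (alpha_n = 0)
    for i in range(n - 2, -1, -1):
        v = max(u[i], v)      # smallest u_i + alpha_i keeping monotonicity
        if v > u[i] + o[i]:   # would require alpha_i > o_i: infeasible
            return False
    return True
```

For example, any non-increasing $u$ is α-monotone with $\alpha = 0$, while $u = (1, 5)$, $o = (0, 0)$ is not.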
We use recursive equations for the probability distributions of the start time, completion time, tardiness and earliness of each job, and compute $F$ at an integer point $A$ in $O(n^2 p_{\max}^2)$ time.

Theorem 2.7.2. If the processing durations are stochastically independent and $A$ is an integer appointment vector then $F(A)$ may be computed in $O(n^2 p_{\max}^2)$ time.

Proof. The first job starts at time zero, so $S_1 = A_1 = 0$ and $C_1 = p_1$, i.e., the distribution of $C_1$ is that of $p_1$. Next, we look at the start times $S_i$ ($2 \le i \le n$). We have $S_i = \max(A_i, C_{i-1})$, so for all $k = 0, 1, \dots, n p_{\max}$,
$$\mathrm{Prob}\{S_i = k\} = \begin{cases} 0 & \text{if } k < A_i,\\ \mathrm{Prob}\{C_{i-1} \le k\} & \text{if } k = A_i,\\ \mathrm{Prob}\{C_{i-1} = k\} & \text{if } k > A_i. \end{cases} \tag{2.3}$$
Note that $S_i$ and $p_i$ are independent because $S_i$ is completely determined by $p_1, p_2, \dots, p_{i-1}$ and $A_1, A_2, \dots, A_i$. Since $C_i = S_i + p_i$, by conditioning on $p_i$ and using the independence of $p_i$ and $S_i$, we obtain for all $k = 0, 1, \dots, n p_{\max}$,
$$\mathrm{Prob}\{C_i = k\} = \sum_{j=0}^{\overline p_i} \mathrm{Prob}\{S_i = k - j\}\,\mathrm{Prob}\{p_i = j\}, \tag{2.4}$$
where $\overline p_i$ denotes the largest possible value of $p_i$, and $\mathrm{Prob}\{C_{i-1} \le k\} = \mathrm{Prob}\{C_{i-1} = k\} + \mathrm{Prob}\{C_{i-1} \le k - 1\}$. For each $i$, the cumulative probabilities $\mathrm{Prob}\{C_{i-1} \le k\}$ may be computed in $O((i-1) p_{\max})$ time. Hence $\mathrm{Prob}\{C_i = k\}$ can be computed once we have the distribution of $S_i$. For each job $i$ and value $k$, computing $\mathrm{Prob}\{S_i = k\}$ by Eq. (2.3) requires a constant number of operations, and computing $\mathrm{Prob}\{C_i = k\}$ by Eq. (2.4) requires $O(\overline p_i + 1)$ operations. Therefore the total number of operations needed for computing the whole start time and completion time distributions for job $i$ is $O(n p_{\max}^2)$. The distributions of $T_i$ and $E_i$ and their expected values $E_p[T_i]$ and $E_p[E_i]$ can then be determined in $O(n p_{\max})$ time. Therefore, the objective value $F(A)$ is obtained in $O(n^2 p_{\max}^2)$ time.

The running time of the algorithm given in Theorem 2.7.1 depends on how the distributions of the processing durations are given.
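For independent durations, the recursion in the proof of Theorem 2.7.2 is straightforward to implement. The sketch below is our own illustrative code (pmfs are dictionaries mapping integer durations to probabilities); it evaluates $F(A)$ for an integer appointment vector $A$ of length $n+1$ with $A_1 = 0$:

```python
def expected_cost(A, pmfs, u, o):
    """F(A) for an integer appointment vector A (A[0] = 0, A[n] = planned
    makespan) and independent integer durations pmfs[i] = {dur: prob},
    following the recursion in the proof of Theorem 2.7.2."""
    n = len(pmfs)
    C = dict(pmfs[0])            # completion-time distribution of job 1
    total = 0.0
    for i in range(n):
        if i > 0:
            a = A[i]
            # Eq (2.3): S_i = max(A_i, C_{i-1})
            S = {a: sum(pr for k, pr in C.items() if k <= a)}
            for k, pr in C.items():
                if k > a:
                    S[k] = S.get(k, 0.0) + pr
            # Eq (2.4): C_i = S_i + p_i (convolution)
            C = {}
            for s, ps in S.items():
                for d, pd in pmfs[i].items():
                    C[s + d] = C.get(s + d, 0.0) + ps * pd
        nxt = A[i + 1]           # appointment of the next job (or makespan)
        total += sum(pr * (o[i] * max(c - nxt, 0) + u[i] * max(nxt - c, 0))
                     for c, pr in C.items())
    return total
```

For instance, with two jobs, $p_1$ uniform on $\{1, 2\}$, $p_2 = 3$, $A = (0, 2, 5)$ and unit costs, the only cost is job 1's expected earliness of $1/2$.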
Under the common assumption of independent durations, the input to the algorithm includes the distribution of each processing duration $p_i$, which specifies $\overline p_i + 1$ probabilities $\mathrm{Prob}\{p_i = x\}$ for $x = 0, 1, \dots, \overline p_i$, where $\overline p_i$ denotes the largest possible value of $p_i$. In this case, $F$ can be minimized in $O(n^9 p_{\max}^2 \log p_{\max})$ time.

Theorem 2.7.3. (Polynomial Time Algorithm 2) If the processing durations are independent, integer-valued random variables and the cost vectors $(u, o)$ are α-monotone then we can minimize $F$ in $O(n^9 p_{\max}^2 \log p_{\max})$ time.

Proof. The horizon $h$ can be taken as $n p_{\max} \ge \sum_{i=1}^n p_i$, so $h$ is polynomially bounded in the input size. Theorem 2.7.2 shows that $EO = O(h^2)$ when processing durations are independent. Theorem 4 of Orlin [21] shows that $\sigma(n) = O(n^5)$. The result follows from Theorem 2.7.1.

2.8  Objective Function with a Due Date

Suppose that we are given a due date $D$ for the end of processing, instead of letting the model choose a planned makespan $A_{n+1}$. We assume $D$ is integer and $0 \le D \le \sum_{i=1}^n \overline p_i$. Define $\tilde A = (A_1, A_2, \dots, A_n)$; then our new objective becomes
$$F^D(\tilde A) = E_p\Big[\sum_{j=1}^{n-1} \big(o_j (C_j - A_{j+1})^+ + u_j (A_{j+1} - C_j)^+\big) + o_n (C_n - D)^+ + u_n (D - C_n)^+\Big].$$
We immediately observe that $F(\tilde A, D) = F^D(\tilde A)$. Like $F$, $F^D$ has many properties such as discrete convexity ($F^D$ is L♮-convex, see Definition 2.8.4), optimal vector integrality and the existence of a polynomial time minimization algorithm.

We verify the properties of $F^D$. Let $\tilde K = \{\tilde A \in \mathbb R^n : \tilde A_1 = 0,\ \sum_{j<i} \underline p_j \le \tilde A_i \le \sum_{j<i} \overline p_j \text{ for all } i = 2, \dots, n\}$, where $\underline p_j$ and $\overline p_j$ denote the smallest and largest possible values of $p_j$. Since $F(\tilde A, D) = F^D(\tilde A)$, by using our previous results on $F$ we obtain the following for $F^D$.

Corollary 2.8.1.
1. The Critical Path Lemma 2.4.1 applies to $F^D$.
2. The function $F^D$ is continuous.
3. There exists an optimal appointment schedule $\tilde A^* \in \tilde K$.
4. There exists an optimal appointment schedule $\tilde A^*$ satisfying $\underline p_i \le \tilde A^*_{i+1} - \tilde A^*_i \le \sum_{j \le i} \overline p_j - \sum_{j < i} \underline p_j$ for $i = 1, \dots, n-1$.
5.
There exists an optimal appointment vector $\tilde A^* \in \tilde K$ with components non-decreasing, i.e., $\tilde A^*_i \le \tilde A^*_{i+1}$ for all $i = 1, \dots, n-1$.
6. $F^D(\tilde A)$ may be computed in $O(n^2 p_{\max}^2)$ time if the processing durations are independent and $\tilde A$ is integer.

Proof.
1. Follows directly from the Critical Path Lemma 2.4.1.
2. Continuity is preserved by projection onto a coordinate subspace. Therefore, the result follows from Lemma 2.4.2.
3. The feasible set for $F^D$ is compact since $0 \le D \le \sum_{i=1}^n \overline p_i$ and compactness is preserved by projection onto a coordinate subspace. Therefore, the result follows from Lemma 2.4.3.
4. Follows from Lemma 2.4.4 (by changing $1 \le i \le n$ to $1 \le i \le n-1$) and the fact that $0 \le D \le \sum_{i=1}^n \overline p_i$.
5. Follows from the Non-Decreasing Appointment Dates Lemma 2.4.5 (by changing $1 \le i \le n$ to $1 \le i \le n-1$) and the fact that $0 \le D \le \sum_{i=1}^n \overline p_i$.
6. Since $F(\tilde A, D) = F^D(\tilde A)$, we can compute $F^D(\tilde A)$ exactly the same way we compute $F(A)$ with $A_{n+1} = D$. Therefore the result follows from Theorem 2.7.2.

We next verify that appointment vector integrality also holds for $F^D$.

Corollary 2.8.2. (Appointment Vector Integrality) If the processing durations are integer random variables and the due date is integer then there exists an optimal appointment vector which is integer.

Proof. Let $\tilde A^*$ be any non-integer appointment vector and $\tilde A^*_f$ the first non-integer component of $\tilde A^*$. As before, we define the set $J$, $\varphi(x)$, $\Delta$, $\tilde A' = (\tilde A'_1, \dots, \tilde A'_n)$ and $\tilde A'' = (\tilde A''_1, \dots, \tilde A''_n)$. We consider any realization $r$ of the processing durations. Then, Lemmata 2.5.1–2.5.9 follow for $F^D$ (either directly or by taking $A^*_{n+1} = D$).

By Corollary 2.8.1 we know that there exists an optimal appointment schedule in the set $\tilde K = \{\tilde A \in \mathbb R^n : \tilde A_1 = 0,\ \sum_{j<i} \underline p_j \le \tilde A_i \le \sum_{j<i} \overline p_j \text{ for all } i = 2, \dots, n\}$. Let $\tilde{\mathcal A}$ denote the set of all such optimal appointment vectors in $\tilde K$; $\tilde{\mathcal A}$ is nonempty, bounded and closed, since by Corollary 2.8.1 $F^D$ is continuous. For $\tilde A \in \tilde{\mathcal A}$, we define $I(\cdot)$ as before but changing $A$ to $\tilde A$ and $\mathbb Z^{n+1}$ to $\mathbb Z^n$.
Then, $I(\cdot)$ is upper semi-continuous (usc) on $\tilde{\mathcal A}$, since upper semi-continuity is preserved by projection onto a coordinate subspace. The fact that $I(\cdot)$ is usc and $\tilde{\mathcal A}$ is compact implies that there exists an element $\tilde A^*$ of $\tilde{\mathcal A}$ maximizing $I(\cdot)$. By contradiction, assume $\tilde A^* \notin \mathbb Z^n$. Let $f = \min\{i : \tilde A^*_i = I(\tilde A^*)\}$, so for all $j < f$, $\tilde A^*_j < I(\tilde A^*)$, and thus $\tilde A^*_f \notin \mathbb Z$. Let $\tilde A'$ and $\tilde A''$ be the schedules derived from $\tilde A^*$ as defined at the beginning of Section 2.5 and this proof. By optimality $F(\tilde A^*) \le F(\tilde A')$ and $F(\tilde A^*) \le F(\tilde A'')$. But $F(\tilde A^*)$ changes linearly with $\Delta$ between $\tilde A'$ and $\tilde A''$, as Lemma 2.5.9 applies to $F^D$. Hence we must have $F(\tilde A^*) = F(\tilde A') = F(\tilde A'')$. Note that $\tilde A''_i \ge \tilde A^*_i \ge \sum_{k<i} \underline p_k$ for all $i = 1, \dots, n$ and, for every $j \in J$, $\tilde A''_j = \tilde A^*_j + \Delta < \lceil \tilde A^*_j \rceil \le \sum_{k<j} \overline p_k$, so $\tilde A''_i \le \sum_{k<i} \overline p_k$ for all $i = 1, \dots, n$. This shows that $\tilde A'' \in \tilde K$ and therefore $\tilde A'' \in \tilde{\mathcal A}$. But $I(\tilde A^*) = \tilde A^*_f < \tilde A^*_f + \Delta = \tilde A''_f = I(\tilde A'')$, i.e., $I(\tilde A^*) < I(\tilde A'')$, a contradiction with the definition of $\tilde A^*$.

Remark 2.8.3. Integrality of $D$ is crucial for an integer optimal appointment vector. Consider the following example:
$$F^D(\tilde A) = E_p\big[o_1 (C_1 - \tilde A_2)^+ + u_1 (\tilde A_2 - C_1)^+ + o_2 (C_2 - D)^+ + u_2 (D - C_2)^+\big]$$
with $o_1 = u_1 = o_2 = u_2 = 1$, $D = \frac 9 2$, $p_2 = 3$ (deterministic $p_2$), and $p_1 = 1$ with probability $\frac 1 2$ and $p_1 = 2$ with probability $\frac 1 2$. Then $F^{9/2}(0, 1) = u_2/4 + o_1/2 + o_2/4$ and $F^{9/2}(0, 2) = o_2/4 + u_1/2 + o_2/4$, but $F^{9/2}(0, \tfrac 3 2) = u_1/4 + o_1/4 + o_2/4$.

The next result is on the submodularity and discrete convexity (L♮-convexity) of $F^D$. Before providing the result we need the definition of L♮-convexity. An L♮-convex function is obtained by restriction of an L-convex function to a coordinate plane [18].

Definition 2.8.4. A function $f : \mathbb Z^q \to \mathbb R \cup \{\infty\}$ is said to be L♮-convex if the function $\tilde f : \mathbb Z^{q+1} \to \mathbb R \cup \{\infty\}$ defined by $\tilde f(z, y) = f(z - y\mathbf 1)$ ($z \in \mathbb Z^q$, $y \in \mathbb Z$) is an L-convex function ([18]).
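The numbers in Remark 2.8.3 are easy to check with exact rational arithmetic. The following throwaway script (all names are ours) reproduces them, using that with unit costs each penalty term reduces to an absolute deviation:

```python
from fractions import Fraction

def F_D(A2):
    """Expected cost in Remark 2.8.3: two jobs, all cost coefficients 1,
    D = 9/2, p2 = 3 deterministic, p1 uniform on {1, 2}.  A2 is job 2's
    appointment date; with o = u = 1 each term is an absolute deviation."""
    D, total = Fraction(9, 2), Fraction(0)
    for p1 in (1, 2):                 # each realization has probability 1/2
        C1 = p1                       # job 1 starts at time 0
        C2 = max(A2, C1) + 3          # job 2 starts at max(A2, C1)
        total += Fraction(1, 2) * (abs(C1 - A2) + abs(C2 - D))
    return total

# F_D(1) == F_D(2) == 1, while F_D(3/2) == 3/4: no integer optimum.
```

So with a half-integer due date the unique optimum sits at the fractional point $\tilde A_2 = \tfrac 3 2$, as the remark asserts.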
Corollary 2.8.5. (L♮-convexity) If the cost coefficients $(u, o)$ are α-monotone then $F^D$ is submodular and L♮-convex.

Proof. Assume that the cost coefficients $(u, o)$ are α-monotone. Then $F$ is submodular (by the Submodularity Theorem 2.6.7) and L-convex (by the L-convexity Theorem 2.6.13). Hence $F^D$ is submodular (since submodularity is preserved by projection onto a coordinate subspace) and L♮-convex (since $F^D(\tilde A) = F(\tilde A, D)$ and $F$ is L-convex).

Similarly to $F$, we can minimize $F^D$ by using algorithmic results for L♮-convexity ([19], [18]), with a polynomial number of expected cost computations and submodular set minimizations. As in the case of $F$, assume the input to our problem consists of the number $n$ of jobs, the cost vectors $u$ and $o$, and the horizon $h$ over which $F^D$ is to be minimized. We also assume that the processing times are integer and that we have an oracle which computes the expected cost $F^D(\tilde A)$ for any given integer appointment vector $\tilde A$.

Corollary 2.8.6. (Polynomial Time Algorithm 1) If the cost vectors $(u, o)$ are α-monotone and the processing durations are integer then there exists an algorithm which minimizes $F^D$ using polynomial time and a polynomial number of expected cost evaluations.

Proof. The Appointment Vector Integrality Corollary 2.8.2 implies that we only need to consider integer appointment vectors to minimize $F^D$. If the cost vectors $(u, o)$ are α-monotone then $F^D$ is an L♮-convex function by the L♮-convexity Corollary 2.8.5. Then $F^D$ can be minimized in $O(\sigma(n)\, EO\, n^2 \log(\lceil h/2n \rceil))$ time by Iwata's steepest descent scaling algorithm (Section 10.3.2 of Murota [18]).

As in the case of $F$, when the processing durations are independent, we can evaluate the expected cost of an integer appointment vector in $O(n^2 p_{\max}^2)$ time by Corollary 2.8.1. In the case of independent processing durations, the input to the algorithm in Corollary 2.8.6 includes the distribution of each processing duration $p_i$, and we can minimize $F^D$ in $O(n^9 p_{\max}^2 \log p_{\max})$ time.
Corollary 2.8.7. (Polynomial Time Algorithm 2) If the processing durations are independent, integer-valued random variables and the cost vectors $(u, o)$ are α-monotone then we can minimize $F^D$ in $O(n^9 p_{\max}^2 \log p_{\max})$ time.

Proof. The horizon $h$ can be taken as $n p_{\max} \ge \sum_{i=1}^n p_i$, so $h$ is polynomially bounded in the input size. Corollary 2.8.1 shows that $EO = O(h^2)$ when processing durations are independent. Theorem 4 of Orlin [21] shows that $\sigma(n) = O(n^5)$. The result follows from Corollary 2.8.6.

2.9  No-shows and Emergency Jobs

No-shows and emergency jobs may have important practical applications and implications. For example, no-shows can be quite important in certain outpatient exams such as MRI scans [12]. Similarly, emergencies, such as emergency surgeries or examinations, can be a huge factor affecting planned appointment schedules. With minor modifications and assumptions, our model can handle no-shows, as well as emergency jobs that arrive while the machine (processor) is busy processing the original jobs, in finding an optimal appointment schedule.

We first discuss no-shows. Suppose that there is some probability $\mathrm{noshow}_i$ that job $i$ will not show up. If job $i$ does not show up then its processing duration becomes zero; if it does show up then its processing duration is determined by its distribution. Therefore, all we need to do is to update the processing duration distribution $p_i$ of job $i$ to take this no-show possibility into account. We can do so by multiplying $\mathrm{Prob}\{p_i = k\}$ by $(1 - \mathrm{noshow}_i)$ for all $k > 0$ and assigning $\mathrm{Prob}\{p_i = 0\} = 1 - \sum_{k>0} \mathrm{Prob}\{p_i = k\}$.

Emergency jobs arrive after processing starts, without any appointments, and they may need to be processed as soon as possible. We take a non-preemptive approach, i.e., we finish processing of the current job first.
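The no-show adjustment just described is a one-line reweighting of the duration pmf; a minimal sketch (the function name is ours):

```python
def with_no_show(pmf, q):
    """Fold a no-show probability q into a duration pmf {dur: prob}:
    with probability q the job's duration is 0, otherwise the duration
    is drawn from the original pmf."""
    new = {k: (1 - q) * pr for k, pr in pmf.items() if k > 0}
    new[0] = 1 - sum(new.values())   # mass at 0 absorbs the no-show
    return new
```

For example, a duration uniform on $\{1, 2\}$ with a 20% no-show rate becomes $\{0{:}\,0.2,\ 1{:}\,0.4,\ 2{:}\,0.4\}$, and the adjusted pmf can be fed unchanged into the algorithms above.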
We assume that emergency jobs may arrive only during the processing of a planned job, i.e., there is no emergency arrival during idle time or during the processing of an emergency job. This is a reasonable assumption when the ratio of total idle time between planned jobs to total processing duration of planned jobs is small and there are not many emergency jobs. Therefore, during the processing of planned job $i$, some emergency jobs may arrive, and these emergency jobs will be processed back to back just after job $i$ and before job $i+1$. Observe that there will be no idle time between the processing of emergency jobs, so we may think of the duration of these emergency jobs' processing as a lengthening of job $i$'s processing time. Therefore, the problem reduces to finding the new processing duration distribution of job $i$. Figure 2.3 shows an example of a schedule with emergency jobs.

[Figure 2.3: An Example Schedule with Emergency Jobs]

We assume that at most a certain number of emergency jobs can arrive during the processing of a planned job, and that the distribution of the number of emergency arrivals (during each planned job) is given by a discrete probability distribution. Furthermore, the processing duration distribution of emergency jobs is also given by a discrete probability distribution. At most $m^i_{\max}$ emergency jobs can arrive during the processing of job $i$. Let $p^e$ be the discrete processing duration distribution of emergency jobs (we use the same processing duration distribution for each emergency job, but one may take $p^e_i$ as the duration distribution of emergency jobs arriving during job $i$). We denote the total processing duration of the emergency jobs that will be processed just after job $i$ by $P^i$. Then, all we need to do is to find the distribution of the new job processing duration $\tilde p_i = p_i + P^i$.
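The lengthened distribution $\tilde p_i$ can be computed by mixing the $k$-fold convolutions of $p^e$ over the arrival-count distribution and then convolving once with $p_i$; as an illustration (all names are ours, pmfs are dictionaries):

```python
def convolve(f, g):
    """Pmf of the sum of two independent integer random variables."""
    h = {}
    for a, pa in f.items():
        for b, pb in g.items():
            h[a + b] = h.get(a + b, 0.0) + pa * pb
    return h

def lengthened_pmf(p_i, p_e, m_pmf):
    """Pmf of p~_i = p_i + P^i, where P^i is the total duration of the
    emergency jobs arriving during job i and m_pmf[k] = Prob{m^i = k}."""
    P = {0: m_pmf.get(0, 0.0)}         # no arrivals -> P^i = 0
    Pk = {0: 1.0}                      # running k-fold convolution of p_e
    for k in range(1, max(m_pmf) + 1):
        Pk = convolve(Pk, p_e)         # P^i_k = P^i_{k-1} + p^e
        for j, pr in Pk.items():       # mix over the number of arrivals
            P[j] = P.get(j, 0.0) + m_pmf.get(k, 0.0) * pr
    return convolve(p_i, P)            # p~_i = p_i + P^i
```

For instance, with $p_i = 2$ deterministic, unit-length emergencies, and either zero or one arrival with probability $\tfrac 1 2$ each, the lengthened duration is 2 or 3 with probability $\tfrac 1 2$ each.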
Once we have the $\tilde p_i$'s, we can minimize $F^D_E(\cdot) = E_{\tilde p}[F^D(\cdot|\tilde p)]$ just as we minimize $F^D(\cdot) = E_p[F^D(\cdot|p)]$, and $F_E(\cdot) = E_{\tilde p}[F(\cdot|\tilde p)]$ just as we minimize $F(\cdot) = E_p[F(\cdot|p)]$, and thereby solve the scheduling problem with emergency jobs.

We now obtain the distribution of $\tilde p_i = p_i + P^i$. Let $m^i$ ($0 \le m^i \le m^i_{\max}$) be the number of emergency jobs arriving during the processing of job $i$. We define $P^i_k = \sum_{j=1}^k p^e$ ($1 \le k \le m^i_{\max}$). The distributions $P^i_k$ ($1 \le k \le m^i_{\max}$) can be computed recursively, starting from $P^i_1 = p^e$ and computing $P^i_k = P^i_{k-1} + p^e$ for $1 < k \le m^i_{\max}$. Once we have the distributions of the $P^i_k$'s, we can find the distributions of $P^i$ ($1 \le i \le n$) as follows:
$$\mathrm{Prob}\{P^i = 0\} = \mathrm{Prob}\{m^i = 0\} + \sum_{k=1}^{m^i_{\max}} \mathrm{Prob}\{m^i = k\}\,\mathrm{Prob}\{P^i_k = 0\}$$
$$\mathrm{Prob}\{P^i = j\} = \sum_{k=1}^{m^i_{\max}} \mathrm{Prob}\{m^i = k\}\,\mathrm{Prob}\{P^i_k = j\} \quad \text{for } j = 1, 2, \dots, m^i_{\max} p_{\max}.$$
The last step is to obtain the distribution of $\tilde p_i = p_i + P^i$, a single convolution of the random variables $p_i$ (already available) and $P^i$ (just obtained).

2.10  Current Work, Future Work and Conclusion

After developing our modeling framework and proving that we can find an optimal appointment schedule in polynomial time, we focus on practical implementation issues. Our objective, as a function of a continuous appointment vector, is non-smooth, but we show that it is convex and characterize its subdifferential in Chapter 3. We also obtain closed-form formulas for the subdifferential as well as for any subgradient. This characterization is useful, as it allows us to develop two important extensions. In the first extension, in Chapter 3, we relax the perfect information assumption on the probability distributions of processing durations, i.e., we assume that the processing duration distributions are not known and can only be statistically estimated on the basis of past data or statistical sampling. Our approach is non-parametric, and we assume no (prior) information about the processing duration distributions.
We develop a sampling-based approach to determine the number of independent samples required to obtain a provably near-optimal solution with high confidence, i.e., the cost of the sampling-based optimal schedule is, with high probability, no more than $(1 + \epsilon)$ times the cost of an optimal schedule determined with knowledge of the true distributions. This result has important practical implications, as the true processing duration distributions are often not known and only their past realizations or some samples are available.

In another study, Appendix A, we use the subdifferential characterization with independent processing durations and compute a subgradient in polynomial time for any given appointment schedule. This is not a trivial task, as the subdifferential formulas include exponentially many terms and some of the probability computations are complicated. We also obtain an easily computable lower bound on the optimal objective value. Furthermore, we extend the computation of the expected total cost (in polynomial time) to any (real-valued) appointment vector. These results allow us to use non-smooth convex optimization techniques to find an optimal schedule. Although we already have a polynomial time algorithm to find an optimal appointment schedule, it is not clear at the moment which technique will work faster in practice. We are also considering hybrid algorithms based on both discrete convexity and non-smooth convex optimization, combined with a special-purpose integer rounding method. Preliminary versions of these algorithms have been developed. The rounding algorithm takes any fractional solution and rounds it to an integer one with the same or improved objective value. We are planning to implement our algorithms and compare the different approaches in computational experiments.

There are many exciting future directions for this research.
One is to find an optimal sequence and appointment schedule simultaneously, i.e., given the jobs, determine a sequence and an appointment schedule minimizing the total expected cost. This problem is likely to be hard, but it may be possible to develop heuristic algorithms with performance guarantees. Studying special cases of this problem may shed light on the general case. Another direction is to put our findings into practice. We are in contact with local healthcare organizations to apply our results to real data and compare the appointment schedules determined by our methods with current practices.

In this chapter, we study a discrete time version of the appointment scheduling problem and establish discrete convexity properties of the objective function. We prove that the objective function is L-convex under mild assumptions on the cost coefficients. Furthermore, we show that there exists an optimal integer appointment schedule minimizing the objective. This result is important as it allows us to optimize only over integer appointment schedules without loss of optimality. These results on the objective function and the optimal appointment schedule enable us to develop a polynomial time algorithm, based on discrete convexity, that, for a given processing sequence, finds an appointment schedule minimizing the total expected cost. When processing durations are stochastically independent, we can evaluate the expected cost for a given processing order and an integer appointment schedule efficiently, both in theory (in polynomial time) and in practice (computations are quite fast, as shown in our preliminary computational experiments). Independent processing durations lead to faster algorithms. Our modeling framework can handle a given due date for the total processing (e.g., end of day for an operating room) after which overtime is incurred, instead of letting the model choose an end date. We also extend our model and framework to include no-shows and emergencies.
We believe that our framework is sufficiently generic that it is portable and applicable to many appointment systems in healthcare as well as in other areas.

2.11  Bibliography

[1] Mehmet A. Begen and Maurice Queyranne. Appointment scheduling with discrete random durations. Proceedings of the 20th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 845–854, 2009.
[2] Illana Bendavid and Boaz Golany. Setting gates for activities in the stochastic project scheduling problem through the cross entropy methodology. Annals of Operations Research, published online, 2009.
[3] Peter M. Vanden Bosch, Dennis C. Dietz, and John R. Simeoni. Scheduling customer arrivals to a stochastic service system. Naval Research Logistics, 46:549–559, 1999.
[4] Brecht Cardoen, Erik Demeulemeester, and Jeroen Beliën. Operating room planning and scheduling: A literature review. Working Paper, Katholieke Universiteit Leuven, Faculty of Business and Economics, Belgium, 2008.
[5] Tugba Cayirli and Emre Veral. Outpatient scheduling in health care: A review of literature. Production and Operations Management, 12(4), 2003.
[6] Brian Denton and Diwakar Gupta. A sequential bounding approach for optimal appointment scheduling. IIE Transactions, 35:1003–1016, 2003.
[7] Mohsen Elhafsi. Optimal leadtime planning in serial production systems with earliness and tardiness costs. IIE Transactions, 34:233–243, 2002.
[8] Lisa Fleischer. Recent progress in submodular function minimization. OPTIMA: Mathematical Programming Society Newsletter, 64:1–11, 2000.
[9] Satoru Fujishige. Submodular Functions and Optimization. Elsevier, 2005.
[10] Linda Green, Sergei Savin, and Ben Wang. Managing patient service in a diagnostic medical facility. Operations Research, 54:11–25, 2006.
[11] Diwakar Gupta and Lei Wang. Revenue management for a primary-care clinic in the presence of patient choice. Operations Research, 56:576–592, 2008.
[12] Refael Hassin and Sharon Mendel.
Scheduling arrivals to queues: A single-server model with no-shows. Management Science, 54(3):565–572, 2008.
[13] Satoru Iwata. Submodular function minimization. Mathematical Programming, 112:45–64, 2008.
[14] Guido C. Kaandorp and Ger Koole. Optimal outpatient appointment scheduling. Health Care Management Science, 10:217–229, 2007.
[15] Yossef Luzon, Avishai Mandelbaum, and Michal Penn. Scheduling appointments via fluids control. Working Paper, Industrial Engineering and Management, Technion, Haifa, Israel, 2009.
[16] S. T. McCormick. Submodular function minimization. A chapter in the Handbook on Discrete Optimization, K. Aardal, G. Nemhauser, and R. Weismantel, eds. Elsevier, 2006.
[17] Kazuo Murota. Discrete convex analysis. Mathematical Programming, 83(3):313–371, 1998.
[18] Kazuo Murota. Discrete Convex Analysis, SIAM Monographs on Discrete Mathematics and Applications, Vol. 10. Society for Industrial and Applied Mathematics, 2003.
[19] Kazuo Murota. On steepest descent algorithms for discrete convex functions. SIAM Journal on Optimization, 14(3):699–707, 2003.
[20] Kazuo Murota. Recent developments in discrete convex analysis. A chapter in Research Trends in Combinatorial Optimization, W. Cook, L. Lovász, and J. Vygen, eds. Springer-Verlag, 2009.
[21] James B. Orlin. A faster strongly polynomial time algorithm for submodular function minimization. Mathematical Programming, 118(2):237–251, 2007.
[22] Jonathan Patrick, Martin L. Puterman, and Maurice Queyranne. Dynamic multipriority patient scheduling for a diagnostic resource. Operations Research, 56:1507–1525, 2008.
[23] Michael Pinedo. Stochastic scheduling with release dates and due dates. Operations Research, 31:559–572, 1993.
[24] Michael Pinedo. Scheduling: Theory, Algorithms, and Systems. Prentice Hall, 2001.
[25] Lawrence W. Robinson and Rachel R. Chen. Scheduling doctors' appointments: optimal and empirically-based heuristic policies. IIE Transactions, 35:295–307, 2003.
[26] Lawrence W. Robinson, Yigal Gerchak, and Diwakar Gupta.
Appointment times which minimize waiting and facility idleness. Working Paper, DeGroote School of Business, McMaster University, 1996.
[27] F. Sabria and C. F. Daganzo. Approximate expressions for queueing systems with scheduled arrivals and established service order. Transportation Science, 23:159–165, 1989.
[28] Pablo Santibanez, Mehmet Begen, and Derek Atkins. Surgical block scheduling in a system of hospitals: An application to resource and wait list management in a British Columbia health authority. Health Care Management Science, 10:269–282, 2007.
[29] Hans-Jörg Schütz and Rainer Kolisch. Capacity allocation for magnetic resonance imaging scanners. Working Paper, TUM Business School, Technische Universität München, Germany, 2008.
[30] Donald M. Topkis. Minimizing a submodular function on a lattice. Operations Research, 26(2):305–321, 1978.
[31] P. Patrick Wang. Static and dynamic scheduling of customer arrivals to a single-server system. Naval Research Logistics, 40(3):345–360, 1993.
[32] P. Patrick Wang. Sequencing and scheduling N customers for a stochastic server. European Journal of Operational Research, 119(3):729–738, 1999.
[33] E. N. Weiss. Models for determining estimated start times and case orderings in hospital operating rooms. IIE Transactions, 22(2):143–150, 1990.
[34] Paul Zipkin. On the structure of lost-sales inventory models. Operations Research, 56(4):937–944, 2008.

3  A Sampling-Based Approach to Appointment Scheduling

We consider the problem of appointment scheduling with discrete random durations of Chapter 2 under the assumption that the duration probability distributions are not known and only a set of independent samples, e.g., historical data, is available. The goal is to determine an optimal planned start schedule, i.e., an optimal appointment schedule for a given sequence of jobs on a single processor such that the expected total underage and overage costs are minimized.
We show that the objective function of the appointment scheduling problem is convex under a simple sufficient condition on the cost coefficients. Under this condition we characterize the subdifferential of the objective function with a closed-form formula. We use this formula to determine bounds on the number of independent samples required to obtain a provably near-optimal solution with high probability. (A version of this chapter has been submitted for publication: Begen M.A., Levi R. and Queyranne M., A Sampling-Based Approach to Appointment Scheduling.)

3.1  Introduction and Motivation

We consider the appointment scheduling problem with discrete random durations introduced in Chapter 2, but under the assumption that the probability distributions of job durations are not known and the only available information on the durations is a set of independent random samples, e.g., historical data. We show that the objective function is convex under a simple condition on the cost parameters, characterize its subdifferential, and determine the number of independent samples required to obtain a provably near-optimal solution with high probability.

In the appointment scheduling problem, jobs are processed on a single processor in a given sequence and one has to decide the planned starting time of each job, also called its appointment date. Jobs are not available before their appointment dates. Moreover, the processing durations are a priori random and are realized only after the appointment dates are set. Due to stochastic processing durations, some jobs may finish earlier, and some others later, than the appointment date of the next job. If a job ends earlier than the next job's appointment date then the system experiences underage cost due to underutilization of the processor. On the other hand, when a job finishes later than the next job's appointment date, the system is exposed to overage cost due to the wait of the next job and/or overtime for the processor.
Therefore there is an important trade-off between underutilization, waiting and overtime, i.e., between underage and overage. The goal is to find an optimal appointment schedule, i.e., an appointment date vector that minimizes the total expected underage and overage cost. There are important real-world applications of this problem, especially in healthcare (e.g., surgery scheduling), transportation and production; see Chapter 2 and the references therein. For example, in surgery scheduling we can think of the surgeries as the jobs, the operating room/surgeon as the processor, and the hospital as the scheduler. As observed in practice, surgery durations show considerable variability (e.g., see Figure 2.1 and Figure 4.4), and determining planned start times, i.e., setting the appointment dates of surgeries, is an important and challenging task [8]. The surgery appointment schedule has a direct impact on the amount of overtime and idle time of the operating room(s). Operating-room overtime can be costly since it involves staff overtime as well as additional overhead costs; on the other hand, idle-time costs can also be high due to the opportunity cost of unused capacity. A similar trade-off exists in scheduling container ship arrivals at a container terminal [34]. Another example comes from a production system with multiple stages and stochastic leadtimes, where the objective is to determine planned leadtimes that minimize expected cost [10].

Researchers have studied the appointment scheduling problem for the last 50 years, e.g., see [6], [8], [19], [5], [39]. The existing literature exclusively uses continuous processing durations with full probabilistic characterization, i.e., the probability distributions of the job processing times are given as part of the input. With continuous processing durations, even computing the expected total cost is difficult: for a given sequence of jobs, only small instances can be solved to optimality, and larger instances require heuristics.

[Footnote 2: To conform with scheduling terminology, we use the term date to denote a point in time. In most applications of appointment scheduling, the appointment "dates" are actually appointment times within the day for which the jobs are being scheduled.]

Chapter 2 studies a discrete-time version of the appointment scheduling problem, i.e., the processing durations are integer and given by a discrete probability distribution. This assumption fits many applications; for example, surgeries and physician appointments are scheduled in blocks of minutes (for instance, one 20-minute physician appointment could be two blocks of 10 minutes). Chapter 2 establishes discrete convexity ([27]) properties of the objective function and proves that the objective function is L-convex ([26]) under a mild assumption on the cost coefficients. Furthermore, it shows that there exists an optimal integer appointment schedule minimizing the objective. This result is important since it makes it possible to optimize only over integer appointment schedules without loss of optimality. These results on the objective function and the optimal appointment schedule lead to a polynomial-time algorithm, based on discrete convexity ([28]), that, for a given processing sequence, finds an appointment schedule minimizing the total expected cost. This algorithm invokes a sequence of submodular set-function minimizations, for which various algorithms are available, see e.g., [11], [25] and [18]. When the processing durations are stochastically independent, Chapter 2 shows that the expected cost of a given processing order and integer appointment schedule can be evaluated in polynomial time; independent processing durations thus lead to faster algorithms. Chapter 2's modeling framework can also include a given due date for the end of the processing of all jobs (e.g., end of day for an operating room), after which overtime is incurred, instead of letting the model choose an end date.
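When the durations are independent, the expected cost of an integer schedule can be evaluated exactly by propagating the discrete distribution of each completion date, in the spirit of the polynomial-time evaluation cited above. A minimal sketch under that independence assumption (the function name and the dict-based encoding of the distributions are ours, not from Chapter 2):

```python
from collections import defaultdict

def expected_cost(A, dists, u, o):
    """Exact expected cost of an integer appointment schedule when the
    integer job durations are independent.  A[i] is the appointment date
    of job i+1 (A[0] == 0); the last entry of A plays the role of the
    dummy job's date.  dists[i] maps each possible duration of job i+1
    to its probability; u and o are the underage/overage cost rates."""
    n = len(dists)
    completion = dict(dists[0])            # distribution of C_1 = p_1
    total = 0.0
    for i in range(n):
        a_next = A[i + 1]                  # appointment date of job i+2
        # Expected underage/overage cost incurred at the boundary A_{i+1}.
        for c, prob in completion.items():
            total += prob * (o[i] * max(c - a_next, 0) +
                             u[i] * max(a_next - c, 0))
        if i + 1 < n:
            # C_{i+2} = max(A_{i+2}, C_{i+1}) + p_{i+2}: shift, then convolve.
            nxt = defaultdict(float)
            for c, prob in completion.items():
                s = max(a_next, c)         # start date of the next job
                for d, q in dists[i + 1].items():
                    nxt[s + d] += prob * q
            completion = nxt
    return total
```

Each completion-date distribution has at most polynomially many support points, so the convolutions above run in polynomial time for bounded integer durations.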
The framework is also extended to include no-shows and emergency jobs. In Chapter 2 the discrete convexity (L-convexity) of the objective function is proved for integer appointment vectors. The definition of L-convexity comprises submodularity and a translation-equivalence property; with integer appointment vectors, Chapter 2 establishes L-convexity by proving that the objective function is submodular (under a simple condition on the cost coefficients) and satisfies the translation-equivalence property. In this chapter, we show that the objective function of the appointment scheduling problem, as a function of a continuous appointment vector, is convex under the same simple condition on the cost coefficients. Convexity of the objective function has been discussed (explicitly or implicitly) in several papers [39, 5, 32, 8], but we believe our analysis is the first rigorous treatment of the subject under this simple condition. Under this simple sufficient condition the objective function is convex but non-smooth: it has kinks. Because of this non-differentiability we work with subgradients instead of derivatives. In fact, we characterize the set of all subgradients, i.e., the subdifferential at a given appointment date vector, with a closed-form formula. This is unusual, since in most applications only a single subgradient can be obtained. We use the subdifferential characterization to relax the perfect-information assumption of Chapter 2 on the probability distribution of processing times by establishing a link between the quality of the sampling-based solution and the number of samples. Chapter 2 assumes complete information on the job duration distributions, i.e., there is an underlying discrete probability distribution for the job durations, and this distribution is fully known. This may be the case for some applications.
However, for others, the true duration distributions may not be known, while (past) realizations or samples may be available. A good example of such an application comes from healthcare: hospitals and surgeons usually have data on the lengths of previous surgeries, but no one knows the true distribution for a certain type of surgery. When the true distribution is not known, the question is how to use these samples to find a "good" solution. We assume that there is an underlying joint discrete distribution for the job durations, but that only a set of independent samples is available. This may correspond to historical data, for example daily observations of surgery durations. In this chapter, we develop a sampling-based approach and determine the number of independent samples required to obtain a provably near-optimal solution with high probability, i.e., the cost of the sampling-based optimal schedule is, with high probability, no more than (1 + ǫ) times the cost of the optimal schedule computed from the true distribution. The job durations within a sample need not be independent, but the samples themselves are: each sample is a vector of durations, one coordinate per job, and these vectors are independent. Assuming independence of probability distributions (e.g., of job durations, or of demands in different periods) is common, but we do not require it in our analysis.

There has been much interest in studying stochastic models with partial probabilistic characterization. Inventory models, especially the newsvendor problem and its multi-period extension, have received particular attention. Depending on how much is known about the true distribution(s), different approaches are possible. One may know the family of the true distribution but be uncertain about its parameters.
This is called the parametric approach; in this case there is usually an initial prior belief on the uncertain parameter values, which is revised with Bayesian updates as realizations of the distribution are observed, e.g., see Ding et al. [9] and the references therein. Liyanage and Shanthikumar [23] introduced operational statistics, an approach that combines parameter estimation and optimization for the case of a known family with unknown parameters and priors; see also [7] for more on operational statistics. If there are no assumptions on the true distribution, i.e., no prior assumptions on its family or its parameters, the approach is non-parametric. Levi et al. [22] use sample average approximation (SAA) (e.g., see [36]) to determine the number of samples required for the SAA solution to be provably near-optimal (with respect to the true demand distribution) with high probability. For the multi-period case, they develop a sampling-based dynamic programming framework and obtain similar results. Levi et al. [22] also establish a link between first-order information and the relative error with respect to the optimal value; samples can then be used to estimate derivatives or, more generally, subgradients. Godfrey and Powell [13] develop a Concave Adaptive Value Estimation (CAVE) algorithm that approximates the value function of a newsvendor problem by successive concave piecewise-linear functions. The CAVE algorithm performs well in numerical experiments, but no convergence result is given in the paper. Powell et al. [31] extend this work and establish convergence results for separable objective functions with integer break points. Huh and Rusmevichientong [17] propose another non-parametric approach to the single- and multi-period newsvendor problem when only sales information is available (so-called censored demand observations).
The authors develop an adaptive policy whose average expected cost converges to the newsvendor cost (determined with knowledge of the true demand distribution) at a rate proportional to the square root of the number of periods. Huh et al. [16] consider a similar model and develop new adaptive policies using the Kaplan-Meier estimator [20]. Another alternative in the non-parametric approach is to work with partial information on the true distribution, e.g., known moments. For the newsvendor problem, the mean and the variance of demand can be used to develop a robust min-max policy; see [35, 12, 30] and the references therein for more on this approach. The bootstrapping method is another "distribution-free" non-parametric approach; Bookbinder and Lordahl [4] use it to estimate a quantile of the lead-time demand distribution and thereby determine the reorder point of a continuous-review inventory system [3].

Besides inventory models, researchers use sampling methods for stochastic programs, in particular the SAA method. SAA is one of the most popular approximation methods for stochastic programs: it replaces the true distribution with an empirical distribution obtained from random samples. Several papers, e.g., [21], [1], [24], [37], [38], [36] (and the references therein), obtain results on convergence and on the number of samples required for the SAA method to give small relative errors with high probability. Our modeling technique is not stochastic programming; we make use of the discrete convexity and polynomial-time algorithm results of Chapter 2 to solve the SAA counterpart of the appointment scheduling problem. Furthermore, our analysis is non-parametric: we characterize the subdifferential of the objective and use this information explicitly to establish a link between the number of samples and the quality of the SAA solution. Appointment scheduling reduces to the well-known newsvendor problem when there is only a single job.
This was first recognized by Weiss [41]. However, the problem departs from newsvendor characteristics and solution methods in the case of multiple jobs ([32], Chapter 2). In the multi-period newsvendor problem, decisions are naturally taken sequentially, period by period. By contrast, in appointment scheduling one needs a complete schedule before any processing can start, i.e., all decision variables (the appointment dates) are determined simultaneously at the beginning of the planning horizon (at time zero).

We employ an SAA approach for the appointment scheduling problem: we use the available (independent) samples to form an empirical distribution and find a solution that is optimal with respect to it. Using the subdifferential characterization (Section 3.4) and the well-known Hoeffding's inequality [15], we determine the number of samples required to guarantee that, with high probability (i.e., at least the specified confidence level), there exists a sufficiently small (in terms of the specified accuracy level) subgradient at the SAA solution. As a final step we show that, with probability at least the confidence level, the objective value (with respect to the true distribution) of the SAA solution is no more than (1 + the accuracy level) times the true optimal value. Our bound on the number of required samples is polynomial in the number of jobs, the accuracy level, the confidence level and the cost coefficients.

To the best of our knowledge, this chapter is the first to address the appointment scheduling problem when the probability distributions of the durations are unknown. We develop a sampling-based approach for the appointment scheduling problem, which is a stochastic non-linear integer program. Furthermore, we believe this chapter presents the first rigorous analysis of the convexity of the objective function of the appointment scheduling problem under a simple condition.
Last but not least, we characterize the set of all subgradients, i.e., the subdifferential at a given appointment date vector, with a closed-form formula. We use this subdifferential characterization to relate the quality of the SAA solution to the number of samples required, and as a result we relax the perfect-information assumption of Chapter 2 on the probability distribution of processing times. We believe the subdifferential characterization will lead to additional applications, e.g., finding optimal appointment schedules by non-smooth optimization methods as in Appendix A.

The rest of this chapter is organized as follows. In Section 3.2, we give the formal description of the appointment scheduling problem. We present the convexity results in Section 3.3. Section 3.4 contains the subdifferential characterization. We provide our sampling analysis in Section 3.5. Finally, Section 3.6 concludes the chapter. All proofs appear either directly after their statements or in Section 3.7.

3.2  Formal Description of the Appointment Scheduling Problem

This section closely follows Chapter 2. There are $n+1$ jobs, numbered $1, 2, \ldots, n+1$, that need to be processed sequentially (in the order $1, 2, \ldots, n+1$) on a single processor. An appointment schedule must be prepared before any processing can start; that is, each job is assigned a planned start date. In particular, job $i$ is not available before its appointment date (planned start date) $A_i$. When a job finishes earlier than the next job's appointment date, the system incurs a cost due to underutilization, i.e., an underage cost. On the other hand, if a job finishes later than the successor job's appointment date, the system incurs an overage cost due to the overtime of the current job and the waiting of the next job. The goal is to find appointment dates $(A_1, \ldots, A_{n+1})$ that minimize the total expected cost. In surgery scheduling, determining good surgery start times is crucial.
This is not a trivial task, due to the randomness in surgery durations: in choosing good surgery start times one must weigh the idleness and overtime of the resource(s) against the patients' waiting times.

If the processing durations were deterministic, the problem would be straightforward to solve. The processing durations, however, are stochastic, and we are given only their joint discrete distribution. We assume, naturally, that all cost coefficients and processing durations are non-negative and bounded, and that the processing durations are integer valued (by the Appointment Vector Integrality Theorem 2.5.10, we may then restrict ourselves to integer appointment schedules without loss of optimality). Job 1 starts on time, i.e., the start time of the first job is zero, and there are $n$ real jobs. The $(n+1)$th job is a dummy job with processing duration 0; its appointment time is the total time available for the $n$ real jobs, and we use it to compute the overage or underage cost of the $n$th job.

Let $\{1, 2, \ldots, n, n+1\}$ denote the set of jobs. We denote the random processing duration of job $i$ by $p_i$ and the random vector of processing durations by $p = (p_1, p_2, \ldots, p_n, 0)$; all vectors are written as row vectors. Let $\bar p_i$ denote the maximum possible value of the processing duration $p_i$, and let $p_{\max} = \max(\bar p_1, \ldots, \bar p_n)$. The underage cost rate $u_i$ of job $i$ is the unit cost (per unit time) incurred when job $i$ is completed at a date $C_i$ before the appointment date $A_{i+1}$ of the next job $i+1$. The overage cost rate $o_i$ of job $i$ is the unit cost incurred when job $i$ is completed at a date $C_i$ after the appointment date $A_{i+1}$. Thus the total cost due to job $i$ completing at date $C_i$ is $u_i (A_{i+1} - C_i)^+ + o_i (C_i - A_{i+1})^+$, where $(x)^+ = \max(0, x)$ is the positive part of the real number $x$. We define $u = (u_1, u_2, \ldots, u_n)$ and $o = (o_1, o_2, \ldots, o_n)$.

Next we introduce our decision variable. Let $A = (A_1, A_2, \ldots, A_n, A_{n+1})$ (with $A_1 = 0$) be the appointment vector, where $A_i$ is the appointment date of job $i$.
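As a concrete special case, with a single real job ($n = 1$) the cost $u_1 (A_2 - C_1)^+ + o_1 (C_1 - A_2)^+$ is a newsvendor cost in $A_2$, and a sampling-based solution is an empirical quantile at the critical ratio $o_1/(o_1 + u_1)$. A minimal sketch (function names ours):

```python
def saa_single_job(samples, u, o):
    """SAA appointment date for one job: the o/(o+u) empirical quantile
    of the observed durations minimizes the empirical expected cost
    u*(A - p)^+ + o*(p - A)^+ over A."""
    xs = sorted(samples)
    ratio = o / (o + u)
    # Smallest sample with empirical CDF (k+1)/len(xs) >= ratio.
    k = next(i for i in range(len(xs)) if (i + 1) / len(xs) >= ratio)
    return xs[k]

def empirical_cost(A, samples, u, o):
    """Average underage/overage cost of date A over the sampled durations."""
    return sum(u * max(A - p, 0) + o * max(p - A, 0) for p in samples) / len(samples)
```

With multiple jobs this quantile structure is lost, which is why the chapter works with subgradients of the full objective instead.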
We introduce additional variables that help define and express the objective function. Let $S_i$ be the start date and $C_i$ the completion date of job $i$. Since job 1 starts on time, we have $S_1 = 0$ and $C_1 = p_1$. The remaining start and completion dates are determined by $S_i = \max\{A_i, C_{i-1}\}$ and $C_i = S_i + p_i$ for $2 \le i \le n+1$. Note that the dates $S_i$ and $C_i$ are random variables which depend on the appointment vector $A$ and the random duration vector $p$. Let $F(A|p)$ be the total cost of appointment vector $A$ given processing duration vector $p$:
\[ F(A|p) = \sum_{i=1}^{n} \left[ o_i (C_i - A_{i+1})^+ + u_i (A_{i+1} - C_i)^+ \right]. \tag{3.1} \]
The objective to be minimized is the expected total cost $F(A) = E_p[F(A|p)]$, where the expectation is taken with respect to the random processing duration vector $p$. We simplify notation by defining the lateness $L_i = C_i - A_{i+1}$ of job $i$, its tardiness $T_i = (L_i)^+$, and its earliness $E_i = (-L_i)^+$. The objective $F(A)$ can now be written as
\[ F(A) = E_p \left[ \sum_{i=1}^{n} (o_i T_i + u_i E_i) \right] = \sum_{i=1}^{n} \left( o_i\, E_p T_i + u_i\, E_p E_i \right). \]
The framework can include a given due date $D$ for the end of processing (e.g., end of day for an operating room), after which overtime is incurred, instead of letting the model choose a planned makespan $A_{n+1}$. We assume $D$ is an integer with $0 \le D \le \sum_{i=1}^{n} \bar p_i$. Define $\tilde A = (A_1, A_2, \ldots, A_n)$; the new objective becomes
\[ F^D(\tilde A) = E_p \left[ \sum_{j=1}^{n-1} \left( o_j (C_j - A_{j+1})^+ + u_j (A_{j+1} - C_j)^+ \right) + o_n (C_n - D)^+ + u_n (D - C_n)^+ \right]. \]
We immediately observe that $F(\tilde A, D) = F^D(\tilde A)$, and our results in this chapter apply equally to both objectives (with or without a due date).

3.3  Convexity

In this section, we provide a simple sufficient condition under which $F$ and $F^D$ are convex as functions of a continuous appointment vector.
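The recursion $S_i = \max\{A_i, C_{i-1}\}$, $C_i = S_i + p_i$ makes $F(A|p)$ straightforward to evaluate for any realized duration vector, and averaging over samples gives the sample average approximation of $F(A)$ used later in the chapter. A minimal sketch (function names ours):

```python
def cost_given_durations(A, p, u, o):
    """F(A|p): total underage/overage cost of appointment vector A
    (A[0] == 0, last entry of A is the dummy job's date) for the
    realized duration vector p."""
    n = len(p)
    total, C = 0.0, 0.0
    for i in range(n):
        S = max(A[i], C)       # job i+1 starts when released and processor free
        C = S + p[i]           # completion date C_{i+1}
        total += o[i] * max(C - A[i + 1], 0) + u[i] * max(A[i + 1] - C, 0)
    return total

def saa_objective(A, samples, u, o):
    """Sample average approximation of F(A) over sampled duration vectors."""
    return sum(cost_given_durations(A, p, u, o) for p in samples) / len(samples)
```

The SAA counterpart of the problem simply minimizes `saa_objective` over integer appointment vectors, which the algorithm of Chapter 2 can do with the empirical distribution in place of the true one.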
We start by rewriting $F(\cdot|p)$ in an equivalent form, which we will need in the convexity proof.

Lemma 3.3.1. (Identity)
\[ F(A|p) = \sum_{i=1}^{n} \left[ \alpha_i (C_i - A_{i+1}) + \beta_i (C_i - A_{i+1})^+ + \gamma_i \left( \max\{C_i, A_{i+1}\} - \sum_{k=1}^{i} p_k \right) \right] \]
for any $\alpha_i \in \mathbb{R}$ ($1 \le i \le n$), where $\beta_i = o_i - \alpha_i$ and $\gamma_i = (u_i + \alpha_i) - (u_{i+1} + \alpha_{i+1})$, with the convention $u_{n+1} = \alpha_{n+1} = 0$ (so that $\gamma_n = u_n + \alpha_n$).

Proof. By definition (Eq(3.1)) we have $F(A|p) = \sum_{i=1}^{n} [o_i (C_i - A_{i+1})^+ + u_i (A_{i+1} - C_i)^+]$. Let $\alpha_i \in \mathbb{R}$ ($1 \le i \le n$). Using the identity $x - (x)^+ + (-x)^+ = 0$ for any $x \in \mathbb{R}$, we can write
\begin{align*}
o_i (C_i - A_{i+1})^+ + u_i (A_{i+1} - C_i)^+ &= o_i (C_i - A_{i+1})^+ + u_i (A_{i+1} - C_i)^+ \\
&\quad + \alpha_i (C_i - A_{i+1}) - \alpha_i (C_i - A_{i+1})^+ + \alpha_i (A_{i+1} - C_i)^+ \\
&= \alpha_i (C_i - A_{i+1}) + (o_i - \alpha_i)(C_i - A_{i+1})^+ + (u_i + \alpha_i)(A_{i+1} - C_i)^+.
\end{align*}
Therefore, for any $\alpha_i \in \mathbb{R}$ ($1 \le i \le n$), $F(A|p)$ can be written as
\[ F(A|p) = \sum_{i=1}^{n} \left[ \alpha_i (C_i - A_{i+1}) + (o_i - \alpha_i)(C_i - A_{i+1})^+ + (u_i + \alpha_i)(A_{i+1} - C_i)^+ \right]. \]
Recall that the earliness of job $i$ is $E_i = (A_{i+1} - C_i)^+$. Define $M_i$ as the total idle time of jobs $1, 2, \ldots, i$. Then $E_i = M_i - M_{i-1}$ with $M_0 = 0$, and $F(A|p)$ can be written as
\begin{align*}
F(A|p) &= \sum_{i=1}^{n} \left[ \alpha_i (C_i - A_{i+1}) + (o_i - \alpha_i)(C_i - A_{i+1})^+ + (u_i + \alpha_i)(M_i - M_{i-1}) \right] \\
&= \sum_{i=1}^{n} \left[ \alpha_i (C_i - A_{i+1}) + (o_i - \alpha_i)(C_i - A_{i+1})^+ + \big[ (u_i + \alpha_i) - (u_{i+1} + \alpha_{i+1}) \big] M_i \right],
\end{align*}
where the second equality is summation by parts (using $M_0 = 0$ and $u_{n+1} = \alpha_{n+1} = 0$).

Next, we prove by induction that $M_i = \max\{C_i, A_{i+1}\} - \sum_{t=1}^{i} p_t$. The result holds for $i = 1$ because $M_1 = E_1 = (A_2 - p_1)^+ = \max\{p_1, A_2\} - p_1 = \max\{C_1, A_2\} - p_1$ (since $S_1 = 0$ we have $C_1 = p_1$). Assume the result holds for $i = k$, i.e., $M_k = \max\{C_k, A_{k+1}\} - \sum_{t=1}^{k} p_t$. We need to show that it also holds for $i = k+1$, i.e., $M_{k+1} = \max\{C_{k+1}, A_{k+2}\} - \sum_{t=1}^{k+1} p_t$:
\begin{align*}
M_{k+1} &= M_k + E_{k+1} = M_k + (A_{k+2} - C_{k+1})^+ && \text{(by definition)} \\
&= \max\{C_k, A_{k+1}\} - \sum_{t=1}^{k} p_t + (A_{k+2} - C_{k+1})^+ && \text{(by the inductive assumption)} \\
&= \max\{C_k, A_{k+1}\} + p_{k+1} - \sum_{t=1}^{k+1} p_t + (A_{k+2} - C_{k+1})^+ && \text{(add and subtract } p_{k+1}\text{)} \\
&= C_{k+1} - \sum_{t=1}^{k+1} p_t + (A_{k+2} - C_{k+1})^+ && (C_{k+1} = \max\{C_k, A_{k+1}\} + p_{k+1}) \\
&= \max\{C_{k+1}, A_{k+2}\} - \sum_{t=1}^{k+1} p_t,
\end{align*}
where the last equality follows from the identity $(x - y)^+ + y = \max\{x, y\}$.
Therefore $M_i = \max\{C_i, A_{i+1}\} - \sum_{t=1}^{i} p_t$ and
\[ F(A|p) = \sum_{i=1}^{n} \left[ \alpha_i (C_i - A_{i+1}) + \beta_i (C_i - A_{i+1})^+ + \gamma_i \left( \max\{C_i, A_{i+1}\} - \sum_{k=1}^{i} p_k \right) \right], \]
where $\beta_i = o_i - \alpha_i$ and $\gamma_i = (u_i + \alpha_i) - (u_{i+1} + \alpha_{i+1})$. This completes the proof.

We recall the definition of $\alpha$-monotonicity (Definition 2.6.5). We prove in Proposition 3.3.3 that $\alpha$-monotonicity is a sufficient condition for the convexity of $F$.

Definition 3.3.2. The cost coefficients $(u, o)$ are $\alpha$-monotone if there exist reals $\alpha_i$ ($1 \le i \le n$) such that $0 \le \alpha_i \le o_i$ and the sequence $u_i + \alpha_i$ is non-increasing.

The condition of $\alpha$-monotonicity is satisfied by many reasonable cost structures, such as non-increasing $u_i$'s ($u_{i+1} \le u_i$ for all $i$) or non-increasing $(o_i + u_i)$'s ($o_{i+1} + u_{i+1} \le o_i + u_i$ for all $i$). The assumption of non-increasing $u_i$'s, in particular, fits many healthcare applications, since idle time is usually a bigger concern earlier in the day than later: if the first patient fails to show up, the surgeon (and other resources) will certainly be idle until the second patient's appointment date, whereas if a later patient fails to show, the surgeon may still be busy with previous patient(s) until the next appointment date. Furthermore, non-increasing $u_i$'s capture an important and commonly used special case, a uniform idle cost rate for all jobs ($u_i = u$ for all $i$).

Proposition 3.3.3. (Convexity) If $(u, o)$ are $\alpha$-monotone, then $F(\cdot|p)$ and $F(\cdot)$ are convex.

Proof. By the Identity Lemma 3.3.1, $F(A|p) = \sum_{i=1}^{n} [\alpha_i (C_i - A_{i+1}) + \beta_i (C_i - A_{i+1})^+ + \gamma_i (\max\{C_i, A_{i+1}\} - \sum_{k=1}^{i} p_k)]$, where $\beta_i = o_i - \alpha_i$ and $\gamma_i = (u_i + \alpha_i) - (u_{i+1} + \alpha_{i+1})$. We first show that $(C_i - A_{i+1})$, $(C_i - A_{i+1})^+$ and $\max\{C_i, A_{i+1}\} - \sum_{k=1}^{i} p_k$ are convex in $A$. These functions are convex in $A$ because $C_i$ is convex in $A$: by the Critical Path Lemma 2.4.1, $C_i = \max_{j \le i} \{A_j + \sum_{k=j}^{i} p_k\}$, i.e., $C_i$ is the maximum of affine functions of $A$ and is therefore convex.
If $(u, o)$ are $\alpha$-monotone, then $\alpha_i \ge 0$, $\beta_i \ge 0$ and $\gamma_i \ge 0$ ($1 \le i \le n$). Since a finite sum of convex functions with non-negative weights is convex, both $F(\cdot|p)$ and its expectation $F(\cdot)$ are convex. This completes the proof.

Remark 3.3.4. $F$ may fail to be convex in the absence of $\alpha$-monotonicity. To see this, consider the following example with two jobs ($n = 2$), deterministic processing times $p_1 > 0$ and $p_2 > 0$, and cost coefficients $o_1 = 0$, $u_1 = 0$ and $o_2 > 0$, $u_2 > 0$. Then $F(A) = F(A|p)$ (the processing times are deterministic) and $F(A) = o_2 (C_2 - A_3)^+ + u_2 (A_3 - C_2)^+$. Let $A' = (0, 0, p_1 + p_2)$; then $S_1 = 0$, $C_1 = S_2 = p_1$ and $C_2 = S_3 = p_1 + p_2$, so $F(A') = 0$. Similarly, let $A'' = (0, 2p_1, 2p_1 + p_2)$; then $S_1 = 0$, $C_1 = p_1$, $S_2 = 2p_1$ and $C_2 = S_3 = 2p_1 + p_2$, so $F(A'') = 0$. Now define $A''' = \frac{1}{2} A' + \frac{1}{2} A'' = (0, p_1, \frac{3}{2} p_1 + p_2)$; then $S_1 = 0$, $C_1 = S_2 = p_1$, $C_2 = p_1 + p_2$ and $S_3 = \frac{3}{2} p_1 + p_2$, so $F(A''') = \frac{1}{2} p_1 u_2 > 0$. This implies that $F(\cdot)$ is not convex, since $\frac{1}{2} p_1 u_2 = F(A''') = F(\frac{1}{2} A' + \frac{1}{2} A'') > \frac{1}{2} F(A') + \frac{1}{2} F(A'') = 0$.

A similar convexity result holds for $F^D$.

Corollary 3.3.5. (Convexity) If $(u, o)$ are $\alpha$-monotone, then $F^D(\cdot|p)$ and $F^D(\cdot)$ are convex.

Proof. Substituting $D$ for $A_{n+1}$ in the Identity Lemma 3.3.1, we obtain
\[ F^D(\tilde A|p) = \sum_{i=1}^{n-1} \left[ \alpha_i (C_i - A_{i+1}) + \beta_i (C_i - A_{i+1})^+ + \gamma_i \Big( \max\{C_i, A_{i+1}\} - \sum_{k=1}^{i} p_k \Big) \right] + \alpha_n (C_n - D) + \beta_n (C_n - D)^+ + \gamma_n \Big( \max\{C_n, D\} - \sum_{k=1}^{n} p_k \Big) \]
for any $\alpha_i \in \mathbb{R}$ ($1 \le i \le n$), where $\beta_i = o_i - \alpha_i$ and $\gamma_i = (u_i + \alpha_i) - (u_{i+1} + \alpha_{i+1})$. The result then follows from the Convexity Proposition 3.3.3, because convexity is preserved by projection onto a coordinate subspace.

3.4  Subdifferential Characterization

We start with the definitions of a subgradient and of the subdifferential.

Definition 3.4.1. A vector $g$ is a subgradient of a convex function $f$ at the point $x$ if $f(y) \ge f(x) + g^T (y - x)$ for all $y$.
The subdifferential at a point $x$ is the set of all subgradients at $x$, i.e., $\partial f(x) = \{g : f(y) \ge f(x) + g^T (y - x) \text{ for all } y\}$ [14].

We find a subgradient of the objective function $F$ and characterize the set of all subgradients, i.e., the subdifferential $\partial F$, at any appointment vector $A$. We start from the alternative representation of $F$ given by the Identity Lemma 3.3.1, in which we can identify the smaller building blocks of $F$: the lateness, the tardiness, and the total idle time of jobs $1, 2, \ldots, j$, for each job $j$. Using Minkowski sums and subdifferential calculus rules, we first obtain the subdifferentials of these smaller blocks and then, using the same rules, assemble them to characterize the subdifferential of $F$ with a closed-form formula. This characterization allows us to link the quality of the sampling solution to the number of independent samples. We also prove that any subgradient of $F(\cdot, D)$ is a subgradient of $F^D$, which extends our results to $F^D$.

By the Identity Lemma 3.3.1,
\[ F(A|p) = \sum_{j=1}^{n} \left[ \alpha_j (C_j - A_{j+1}) + \beta_j (C_j - A_{j+1})^+ + \gamma_j \Big( \max\{C_j, A_{j+1}\} - \sum_{k=1}^{j} p_k \Big) \right] \]
for any $\alpha_i \in \mathbb{R}$ ($1 \le i \le n$), where $\beta_i = o_i - \alpha_i$ and $\gamma_i = (u_i + \alpha_i) - (u_{i+1} + \alpha_{i+1})$ (with $\gamma_n = u_n + \alpha_n$). We assume $\alpha$-monotone $(u, o)$, so that $F(\cdot|p)$ and $F(\cdot)$ are convex by the Convexity Proposition 3.3.3. Recall that $L_j(A|p) = C_j - A_{j+1}$ (lateness), $T_j(A|p) = (C_j - A_{j+1})^+$ (tardiness) and $M_j(A|p) = \max\{C_j, A_{j+1}\} - \sum_{k=1}^{j} p_k$ (total idle time of jobs $1, 2, \ldots, j$). Here we use $(A|p)$ for $L_j$, $T_j$ and $M_j$ to emphasize the fact
Then, we find ∂Lj (A), ∂Tj (A) and ∂Mj (A) where ζj (A) = Ep [ζj (A|p)] for ζ ∈ {L, T, M }. After that we obtain the subdifferential of Fj (A) n j=1 Fj (A))  (by Eq(3.2) below) and ∂(  = ∂F (A).  Fj (A) = αj Lj (A) + βj Tj (A) + γj Mj (A)  (3.2)  We start the analysis with the definition of Minkowski sum (e.g., see [33]) since the sums in our subdifferential derivations are Minkowski sums. The Minkowski sum of sets X1 and X2 is defined as X1 + X2 = x = x1 + x2 : x1 ∈ X1 , x2 ∈ X2 . More generally, if I is a finite set, ri is a real number and Xi is a set (i ∈ I) then the i∈ I ri Xi  Minkowski sum  is  i∈ I ri Xi  = x=  i∈ I ri xi  : xi ∈ Xi for all i . In particular,  for any real r ∈ R, rX = {rx : x ∈ X}. We will use two particular subdifferential calculus rules in our derivations. Rule 1 and Rule 2 follow from Theorem 4.1.1 and Corollary 4.4.4 of [14] respectively. Let f, f1 , f2 , ..., fm be finite convex functions from Rn to R, I be a finite set and ri (i ∈ I) be a non-negative real number. Then the rules are: Rule 1:  ∂(  ri fi ) = i∈ I  Rule 2:  ri ∂fi  (3.3)  i∈ I  If f = max fi and all fi ’s are differentiable then ∂f = co{∇fi : fi = f } 1≤i≤m  where co stands for convex hull and the summation on the right side of the equation in Rule 1 is a Minkowski sum. Lemma 3.4.2 allows us to consider ∂Ψ(A|p) in finding ∂Ψ(A) where Ψ ∈ {Lj , Tj , Mj , F, Fj }. We use the notations ∂Ψ(A) = Ep Ψ(A|p) and P rob{p} to represent the probability of realization p. Since Ψ(.) is, in all these cases, finite and convex, we have the following result. Lemma 3.4.2. The relation ∂(Ep Ψ(A|p)) = Ep [∂Ψ(A|p)] holds where P rob{p}∂Ψ(A|p) = {s ∈ Rn+1 : ∃ sp ∈ ∂Ψ(A|p) ∀p and s =  Ep [∂Ψ(A|p)] = p  P rob{p}sp }. p  64  Proof. Ψ(A|p) is finite and convex everywhere for any p and there are finitely many realizations of p. Therefore by Rule 1 and Rule 2 we obtain the claimed result as follows: ∂(Ep [Ψ(A|p)]) = ∂(  P rob{p}Ψ(A|p)) = p  P rob{p}∂Ψ(A|p) = Ep [∂Ψ(A|p)] . 
Lemma 3.4.2 is useful because it allows us to work with $\partial \Psi(A|p)$ for any $\Psi \in \{L_j, T_j, M_j, F, F_j\}$ and obtain $\partial \Psi(A)$ by taking its expectation. By Eq(3.2) and Lemma 3.4.2,
\[ \partial F(A) = \sum_{j=1}^{n} \big[ \alpha_j\, E_p[\partial L_j(A|p)] + \beta_j\, E_p[\partial T_j(A|p)] + \gamma_j\, E_p[\partial M_j(A|p)] \big], \]
where, as before, all sums are Minkowski sums. So, once we find $\partial L_j(A|p)$, $\partial T_j(A|p)$ and $\partial M_j(A|p)$, we can obtain $\partial F(A)$.

Consider $L_j(A|p) = C_j - A_{j+1}$. By the Critical Path Lemma 2.4.1, $C_j = \max_{k \le j} \{A_k + \sum_{t=k}^{j} p_t\}$, so $L_j(A|p) = \max_{k \le j} \{A_k + P_k^j - A_{j+1}\}$, where $P_k^j = \sum_{t=k}^{j} p_t$. Recall that the notation $(\cdot|p)$ emphasizes that the quantity of interest is deterministic once the job duration vector $p$ is given. In order to find $\partial L_j(A|p)$, we need to know which $k$'s ($k \le j$) maximize $\{A_k + P_k^j\}$. To represent the set of such maximizers for job $j$ we define
\[ I_j = \arg\max_{k \le j} \{A_k + P_k^j\}. \tag{3.4} \]
A remark is in order here: $I_j$ depends on $A$ and $p$; it is deterministic for any given (particular realization of) $p$, and a random variable otherwise.

Let $1_i$ denote the unit vector in $\mathbb{R}^{n+1}$ whose $i$th component is 1 and whose other components are 0. Then $L_j(A|p) = \max_{k \le j} \{A_k + P_k^j - A_{j+1}\}$, and by Rule 2 we obtain
\[ \partial L_j(A|p) = \operatorname{co}\{1_k - 1_{j+1} : k \in I_j\}. \tag{3.5} \]

Similarly to $\partial L_j(A|p)$, we now obtain $\partial T_j(A|p)$. In addition to $I_j$, we also need the sign of $\max_{k \le j} \{A_k + P_k^j\} - A_{j+1}$, since $T_j(A|p) = (L_j(A|p))^+$. Let
\[ I_j^{\varrho} = \{k \in I_j : A_k + P_k^j \mathbin{\varrho} A_{j+1}\}, \quad \text{where } \varrho \in \{>, <, =\}. \tag{3.6} \]
By extending $\partial L_j(A|p)$ with the sign of $\max_{k \le j} \{A_k + P_k^j\} - A_{j+1}$ we obtain $\partial T_j(A|p)$:
\[ \partial T_j(A|p) = \begin{cases} \operatorname{co}\{1_k - 1_{j+1} : k \in I_j^{>}\} & \text{if } \max_{k \le j} \{A_k + P_k^j\} > A_{j+1}, \\ \operatorname{co}(\{0\} \cup \{1_k - 1_{j+1} : k \in I_j^{=}\}) & \text{if } \max_{k \le j} \{A_k + P_k^j\} = A_{j+1}, \\ \{0\} & \text{if } \max_{k \le j} \{A_k + P_k^j\} < A_{j+1}. \end{cases} \]
We note that exactly two of the sets $I_j^{>}, I_j^{=}, I_j^{<}$ are empty. This allows us to represent $\partial T_j(A|p)$ compactly, with the convention $\operatorname{co}(\emptyset) = \{0\}$, as
\[ \partial T_j(A|p) = \operatorname{co}\{1_k - 1_{j+1} : k \in I_j^{>}\} + \operatorname{co}(\{0\} \cup \{1_k - 1_{j+1} : k \in I_j^{=}\}). \tag{3.7} \]
Next, we obtain $\partial M_j(A|p)$.
Recall that $M_j(A|p) = \max\{C_j, A_{j+1}\} - P_1^j$ and $C_j = \max_{k \le j} \{A_k + P_k^j\}$, so $M_j(A|p) = \max\big\{ \max_{k \le j} \{A_k + P_k^j\},\, A_{j+1} \big\} - P_1^j$. Using Rule 2 (as for $\partial T_j(A|p)$), we obtain
\[ \partial M_j(A|p) = \begin{cases} \operatorname{co}\{1_k : k \in I_j^{>}\} & \text{if } \max_{k \le j} \{A_k + P_k^j\} > A_{j+1}, \\ \operatorname{co}(\{1_{j+1}\} \cup \{1_k : k \in I_j^{=}\}) & \text{if } \max_{k \le j} \{A_k + P_k^j\} = A_{j+1}, \\ \{1_{j+1}\} & \text{if } \max_{k \le j} \{A_k + P_k^j\} < A_{j+1}. \end{cases} \]
Since exactly two of the sets $I_j^{>}, I_j^{=}, I_j^{<}$ are empty, we can represent $\partial M_j(A|p)$ compactly (again with the convention $\operatorname{co}(\emptyset) = \{0\}$) as
\[ \partial M_j(A|p) = \operatorname{co}\{1_k - 1_{j+1} : k \in I_j^{>}\} + \operatorname{co}(\{1_{j+1}\} \cup \{1_k : k \in I_j^{=}\}). \tag{3.8} \]

We can now obtain $\partial L_j(A)$, $\partial T_j(A)$ and $\partial M_j(A)$, starting with $\partial L_j(A)$. By Eq(3.5), $\partial L_j(A|p) = \operatorname{co}\{1_k - 1_{j+1} : k \in I_j\}$, and by Lemma 3.4.2 we have $\partial L_j(A) = E_p[\partial L_j(A|p)] = \sum_p \operatorname{Prob}\{p\}\, \partial L_j(A|p)$. Therefore,
\[ \partial L_j(A) = \sum_{p} \operatorname{Prob}\{p\}\, \operatorname{co}\{1_k - 1_{j+1} : k \in I_j\}. \tag{3.9} \]
There are potentially $p_{\max}^{n}$ realizations of $p$, and this number may be very large. However, we observe that all the vectors appearing in the convex hull in $\partial L_j(A|p)$ are of the form $1_k - 1_{j+1}$ for some $k \le j$. Therefore the convex hull in $\partial L_j(A|p)$ consists of convex combinations of the vectors in some subset of $\{(1_1 - 1_{j+1}), (1_2 - 1_{j+1}), \ldots, (1_j - 1_{j+1})\}$. The following result makes this observation precise.

Lemma 3.4.3. Let $r_1, r_2, \ldots, r_m \ge 0$ be reals. If $X$ is a convex set, then $\sum_{i=1}^{m} (r_i X) = (\sum_{i=1}^{m} r_i) X$.

Remark 3.4.4. The convexity of $X$ is essential: for the non-convex set $X = \{0, 1\}$ with $r_1 = r_2 = 1$, we have $\sum_{i=1}^{2} (r_i X) = \{0, 1, 2\} \ne \{0, 2\} = (\sum_{i=1}^{2} r_i) X$.

Lemma 3.4.3 enables us to combine all realizations giving the same convex hull: instead of considering all possible realizations, we consider the non-empty subsets of $\{(1_1 - 1_{j+1}), (1_2 - 1_{j+1}), \ldots, (1_j - 1_{j+1})\}$. We define $[j] = \{1, 2, \ldots, j-1, j\}$ and use $P^*([j])$ to denote the collection of all non-empty subsets of $[j]$. For any $S \in P^*([j])$, let $\operatorname{Prob}\{\Phi = S\} = \sum_{p :\, \Phi = S} \operatorname{Prob}\{p\}$ for $\Phi \in \{I_j, I_j^{=}, I_j^{>}\}$ and $j = 1, \ldots, n$. The next lemma shows how to obtain $\partial L_j(A)$.
Lemma 3.4.5. The subdifferential $\partial L_j(A)$ is given by
\[ \partial L_j(A) = \sum_{S \in P^*([j])} \operatorname{Prob}\{I_j = S\}\, \operatorname{co}\{(1_k - 1_{j+1}) : k \in S\}. \]

Proof. Eq(3.9) gives $\partial L_j(A) = \sum_p \operatorname{Prob}\{p\}\, \operatorname{co}\{1_k - 1_{j+1} : k \in I_j\}$. We obtain the desired result from the equalities
\begin{align*}
\sum_{p} \operatorname{Prob}\{p\}\, \operatorname{co}\{1_k - 1_{j+1} : k \in I_j\} &= \sum_{p} \operatorname{Prob}\{p\} \sum_{S \in P^*([j])} \operatorname{co}\{1_k - 1_{j+1} : k \in S\}\, 1\{I_j = S\} \\
&= \sum_{S \in P^*([j])} \Big[ \sum_{p} \operatorname{Prob}\{p\}\, 1\{I_j = S\} \Big] \operatorname{co}\{1_k - 1_{j+1} : k \in S\} \\
&= \sum_{S \in P^*([j])} \operatorname{Prob}\{I_j = S\}\, \operatorname{co}\{1_k - 1_{j+1} : k \in S\},
\end{align*}
where $1\{I_j = S\}$ is 1 if $I_j = S$ and 0 otherwise (i.e., $1\{\cdot\}$ is the indicator function), and the last equality follows from the definition of $\operatorname{Prob}\{I_j = S\}$ and Lemma 3.4.3.

We obtain similar results for $\partial T_j(A)$ and $\partial M_j(A)$ in the next two lemmata.

Lemma 3.4.6. The subdifferential $\partial T_j(A)$ is given by
\[ \sum_{S \in P^*([j])} \Big[ \operatorname{Prob}\{I_j^{>} = S\}\, \operatorname{co}\{1_k - 1_{j+1} : k \in S\} + \operatorname{Prob}\{I_j^{=} = S\}\, \operatorname{co}(\{0\} \cup \{1_k - 1_{j+1} : k \in S\}) \Big]. \]

Lemma 3.4.7. The subdifferential $\partial M_j(A)$ is given by
\[ \sum_{S \in P^*([j])} \operatorname{Prob}\{I_j^{>} = S\}\, \operatorname{co}\{1_k - 1_{j+1} : k \in S\} + \sum_{S \in P^*([j])} \operatorname{Prob}\{I_j^{=} = S\}\, \operatorname{co}\{1_k : k \in S \cup \{j+1\}\} + \Big( 1 - \sum_{S \in P^*([j])} \operatorname{Prob}\{I_j^{=} = S\} \Big) 1_{j+1}. \]

For later purposes we represent the convex hulls in $\partial L_j(A)$, $\partial T_j(A)$ and $\partial M_j(A)$ in a different form. Using Lemma 3.4.5 above, we may express $\partial L_j(A)$ as
\[ \partial L_j(A) = \Big\{ \sum_{S \in P^*([j])} \operatorname{Prob}\{I_j = S\} \sum_{k \in S} (1_k - 1_{j+1}) X_{kj}^{L}(S) \;:\; \sum_{k \in S} X_{kj}^{L}(S) = 1\ \forall S \in P^*([j]),\ X_{kj}^{L}(S) \ge 0\ \forall S \in P^*([j])\ \forall k \in S \Big\}, \tag{3.10} \]
where $X_{kj}^{L}(S)$ is the non-negative variable representing the weight of the term $(1_k - 1_{j+1})$ in a convex combination determining an element of $\operatorname{co}\{(1_k - 1_{j+1}) : k \in S\}$.
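For a fixed realization $p$, Eq(3.5) says the extreme points of $\partial L_j(A|p)$ are the vectors $1_k - 1_{j+1}$ with $k$ in the critical set $I_j$. A small sketch (helper names ours; jobs and vector components are 0-indexed) that computes $I_j$ and checks the subgradient inequality of Definition 3.4.1 numerically at a few points:

```python
def critical_set(A, p, j):
    """I_j: indices k (0-based, k < j) maximizing A_k + P_k^j."""
    vals = [A[k] + sum(p[k:j]) for k in range(j)]
    best = max(vals)
    return [k for k, v in enumerate(vals) if v == best]

def lateness(A, p, j):
    """L_j(A|p) = C_j - A_{j+1} = max_{k<=j}(A_k + P_k^j) - A_{j+1}."""
    return max(A[k] + sum(p[k:j]) for k in range(j)) - A[j]

# g = 1_k - 1_{j+1} for any k in I_j is a subgradient of L_j(.|p):
# L_j(B|p) >= L_j(A|p) + g.(B - A) for every appointment vector B.
A, p, j = [0, 2, 5, 9], [3, 1, 4], 3
k = critical_set(A, p, j)[0]
g = [0.0] * len(A)
g[k], g[j] = 1.0, -1.0
for B in ([0, 1, 6, 8], [0, 4, 4, 12], [0, 0, 0, 0]):
    lhs = lateness(B, p, j)
    rhs = lateness(A, p, j) + sum(gi * (b - a) for gi, b, a in zip(g, B, A))
    assert lhs >= rhs - 1e-9
```

Averaging one such extreme choice per sampled realization gives an element of $\partial L_j(A)$, exactly as Lemma 3.4.5 combines realizations by their critical sets.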
Similarly to Eq(3.10), by using Lemma 3.4.6 we obtain
\[
\begin{aligned}
\partial T_j(A) = \Big\{&\sum_{S \in \mathcal{P}^*([j])} \Big[\mathrm{Prob}\{I_j^> = S\} \sum_{k \in S} (\mathbf{1}_k - \mathbf{1}_{j+1})\,X^{T>}_{kj}(S) + \mathrm{Prob}\{I_j^= = S\} \sum_{k \in S} (\mathbf{1}_k - \mathbf{1}_{j+1})\,X^{T=}_{kj}(S)\Big] \;:\\
&\sum_{k \in S} X^{T>}_{kj}(S) = 1\ \forall S \in \mathcal{P}^*([j]),\quad \sum_{k \in S} X^{T=}_{kj}(S) \le 1\ \forall S \in \mathcal{P}^*([j]),\\
&X^{T>}_{kj}(S),\, X^{T=}_{kj}(S) \ge 0\ \forall S \in \mathcal{P}^*([j]),\ \forall k \in S\Big\}, \tag{3.11}
\end{aligned}
\]
where $X^{T>}_{kj}(S)$ and $X^{T=}_{kj}(S)$ are non-negative variables representing the weight of the terms $(\mathbf{1}_k - \mathbf{1}_{j+1})$ in a convex combination determining an element of $\mathrm{co}\{\mathbf{1}_k - \mathbf{1}_{j+1} : k \in S\}$ and of $\mathrm{co}\big(\{0\} \cup \{\mathbf{1}_k - \mathbf{1}_{j+1} : k \in S\}\big)$ respectively. Note that the convexity constraint in the second line, $\sum_{k \in S} X^{T=}_{kj}(S) \le 1$, is an inequality since 0 may be a subgradient.

Similarly to Eq(3.10) and Eq(3.11), by using Lemma 3.4.7 we express $\partial M_j(A)$ as
\[
\begin{aligned}
\partial M_j(A) = \Big\{&\sum_{S \in \mathcal{P}^*([j])} \Big[\mathrm{Prob}\{I_j^> = S\} \sum_{k \in S} (\mathbf{1}_k - \mathbf{1}_{j+1})\,X^{M>}_{kj}(S) + \mathrm{Prob}\{I_j^= = S\} \sum_{k \in S \cup \{j+1\}} \mathbf{1}_k\,X^{M=}_{kj}(S \cup \{j+1\})\Big]\\
&+ \Big(1 - \sum_{S \in \mathcal{P}^*([j])} \mathrm{Prob}\{I_j^= = S\}\Big)\mathbf{1}_{j+1} \;:\\
&\sum_{k \in S} X^{M>}_{kj}(S) = 1\ \forall S \in \mathcal{P}^*([j]),\quad \sum_{k \in S \cup \{j+1\}} X^{M=}_{kj}(S \cup \{j+1\}) = 1\ \forall S \in \mathcal{P}^*([j]),\\
&X^{M>}_{kj}(S) \ge 0\ \forall S \in \mathcal{P}^*([j]),\ \forall k \in S,\quad X^{M=}_{kj}(S \cup \{j+1\}) \ge 0\ \forall S \in \mathcal{P}^*([j]),\ \forall k \in S \cup \{j+1\}\Big\}, \tag{3.12}
\end{aligned}
\]
where $X^{M>}_{kj}(S)$ and $X^{M=}_{kj}(S)$ are non-negative variables representing the weight of the terms $(\mathbf{1}_k - \mathbf{1}_{j+1})$ and $\mathbf{1}_k$ in a convex combination determining an element of $\mathrm{co}\{\mathbf{1}_k - \mathbf{1}_{j+1} : k \in S\}$ and of $\mathrm{co}\{\mathbf{1}_k : k \in S \cup \{j+1\}\}$ respectively.

For clarity, we collect the variables $X^L_{ij}(S)$, $X^{T>}_{ij}(S)$, $X^{T=}_{ij}(S)$, $X^{M>}_{ij}(S)$, $X^{M=}_{ij}(S)$ into a single vector $X_j$, and express the feasible set $\Theta_j$ of $X_j$ in a compact form:
\[
X_j = \Big((X^{\upsilon}_{ij}(S)),\,(X^{M=}_{kl}(S \cup \{l+1\})) : \upsilon \in \{L, T{>}, T{=}, M{>}\},\ 1 \le i \le j \le n+1,\ 1 \le k < l \le n+1,\ S \in \mathcal{P}^*([j]),\ i \in S,\ k \in S \cup \{l+1\}\Big),
\]
\[
\Theta_j = \Big\{X_j \ge 0 : \sum_{i \in S} X^{\upsilon}_{ij}(S) = 1\ \forall \upsilon \in \{L, T{>}, M{>}\},\quad \sum_{i \in S} X^{T=}_{ij}(S) \le 1,\quad \sum_{k \in S \cup \{l+1\}} X^{M=}_{kl}(S \cup \{l+1\}) = 1,\quad \forall S \in \mathcal{P}^*([j])\Big\}.
\]
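The set algebra behind these representations (scaling convex sets and adding them via Minkowski sums, as licensed by Lemma 3.4.3) can be checked on small examples. The sketch below is illustrative only: intervals stand in for convex subsets of $\mathbb{R}$, and the finite set $\{0, 1\}$ reproduces the counterexample of Remark 3.4.4.

```python
# Minkowski sum of finite sets: A + B = {a + b : a in A, b in B}.
def scale(s, r):
    return {r * x for x in s}

def msum(a, b):
    return {x + y for x in a for y in b}

# Non-convex X = {0, 1} with r1 = r2 = 1 (Remark 3.4.4):
X = {0, 1}
assert msum(scale(X, 1), scale(X, 1)) == {0, 1, 2}
assert scale(X, 2) == {0, 2}          # so r1*X + r2*X != (r1 + r2)*X

# Convex X: represent an interval [lo, hi] by its endpoints.
def iscale(iv, r):          # r >= 0 keeps the endpoints ordered
    lo, hi = iv
    return (r * lo, r * hi)

def iadd(a, b):             # Minkowski sum of two intervals
    return (a[0] + b[0], a[1] + b[1])

I = (0.0, 1.0)
r1, r2 = 0.5, 1.5
# r1*I + r2*I = (r1 + r2)*I, as Lemma 3.4.3 asserts for convex sets.
assert iadd(iscale(I, r1), iscale(I, r2)) == iscale(I, r1 + r2)
```

The interval case is exactly why Eqs (3.10)–(3.12) may pull the probabilities outside the convex hulls: each hull is convex, so scaling by the probabilities and summing collapses by Lemma 3.4.3.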
We next collect all $X_j$ vectors into a single vector $X$ and express the feasible set $\Theta$ of $X$:
\[
X = (X_j)_{j \in [n+1]}, \qquad \Theta = \times_{j \in [n+1]} \Theta_j = \big\{X = (X_j)_{j \in [n+1]} : X_j \in \Theta_j\ \forall j \in [n+1]\big\}. \tag{3.13}
\]

Now we obtain $\partial F(A)$.

Proposition 3.4.8. We may express $\partial F(A)$ in the closed-form formula given by Eq(3.15).

Proof. Since $\alpha_j, \beta_j, \gamma_j \ge 0$ ($j = 1, \ldots, n+1$), by Rule 1 and Eq(3.2) we get $\partial F_j(A) = \alpha_j\,\partial L_j(A) + \beta_j\,\partial T_j(A) + \gamma_j\,\partial M_j(A)$. We obtain $\partial F(A)$ as the Minkowski sum of the $\partial F_j(A)$'s:
\[
\partial F(A) = \sum_{j=1}^n \partial F_j(A) = \sum_{j=1}^n \big[\alpha_j\,\partial L_j(A) + \beta_j\,\partial T_j(A) + \gamma_j\,\partial M_j(A)\big]. \tag{3.14}
\]
We gather the values of $\partial L_j(A)$, $\partial T_j(A)$ and $\partial M_j(A)$ from Eq(3.10), Eq(3.11) and Eq(3.12) respectively and use Eq(3.14) to obtain the closed-form expression:
\[
\begin{aligned}
\partial F(A) = \Big\{&\sum_{j=1}^n \alpha_j \sum_{S \in \mathcal{P}^*([j])} \mathrm{Prob}\{I_j = S\}\Big(\sum_{i \in S} \mathbf{1}_i\,X^L_{ij}(S) - \mathbf{1}_{j+1}\Big)\\
&+ \sum_{j=1}^n \beta_j \sum_{S \in \mathcal{P}^*([j])} \mathrm{Prob}\{I_j^> = S\}\Big(\sum_{i \in S} \mathbf{1}_i\,X^{T>}_{ij}(S) - \mathbf{1}_{j+1}\Big)\\
&+ \sum_{j=1}^n \beta_j \sum_{S \in \mathcal{P}^*([j])} \mathrm{Prob}\{I_j^= = S\}\Big(\sum_{i \in S} \mathbf{1}_i\,X^{T=}_{ij}(S) - \mathbf{1}_{j+1} \sum_{i \in S} X^{T=}_{ij}(S)\Big)\\
&+ \sum_{j=1}^n \gamma_j \sum_{S \in \mathcal{P}^*([j])} \mathrm{Prob}\{I_j^> = S\}\Big(\sum_{i \in S} \mathbf{1}_i\,X^{M>}_{ij}(S) - \mathbf{1}_{j+1}\Big)\\
&+ \sum_{j=1}^n \gamma_j \sum_{S \in \mathcal{P}^*([j])} \mathrm{Prob}\{I_j^= = S\} \sum_{i \in S \cup \{j+1\}} \mathbf{1}_i\,X^{M=}_{ij}(S \cup \{j+1\})\\
&+ \sum_{j=1}^n \gamma_j \Big(1 - \sum_{S \in \mathcal{P}^*([j])} \mathrm{Prob}\{I_j^= = S\}\Big)\mathbf{1}_{j+1} \;:\; X \in \Theta\Big\}. \tag{3.15}
\end{aligned}
\]

We next express $\partial F(A)$ component by component for a particular $X \in \Theta$, i.e., a coordinate of a subgradient at the point $A$ for a particular $X \in \Theta$. Let $g(X, A)$ be the element of $\partial F(A)$ defined by the vector $X$. Then $g(X, A) = (g_1(X, A), g_2(X, A), \ldots, g_{n+1}(X, A))$, where $g_k(X, A)$ is the $k$th component of $g(X, A)$. Corollary 3.4.9 gives an expression for $g_k(X, A)$.

Corollary 3.4.9. We may express $g(X, A)$ in the closed-form formula given by Eq(3.16).

Proof. We observe that all vectors appearing in the convex combination defining $\partial F(A)$ are $(\mathbf{1}_i - \mathbf{1}_{j+1})$ for some $1 \le i < j+1 \le n+1$, and $\mathbf{1}_i$ for some $1 \le i \le n+1$. Therefore the $k$th component $g_k$ of $g \in \partial F(A)$ may only get nonzero contributions from the vectors $(\mathbf{1}_k - \mathbf{1}_{j+1})$ for all $j+1 > k$, the vectors $(\mathbf{1}_i - \mathbf{1}_k)$ for all $i < k$, and the vector $\mathbf{1}_k$.
To see this from a different perspective, consider $A_k$: it appears in the terms $(C_{k-1} - A_k)$, $(C_{k-1} - A_k)^+$ and $\max\{C_{k-1}, A_k\} - \sum_{i=1}^{k-1} p_i$, and may appear in $(C_{j-1} - A_j)$ and $(C_{j-1} - A_j)^+$ for $j > k$. We derive $g_k$ by using Eq(3.15). Let $X \in \Theta$; then the $k$th component of $g(X, A)$, namely $g_k(X, A)$, is
\[
\begin{aligned}
g_k(X, A) = \;&\sum_{j=k}^n \alpha_j \sum_{S \in \mathcal{P}^*([j])} \mathrm{Prob}\{I_j = S\}\,X^L_{kj}(S) - \alpha_{k-1} \sum_{S \in \mathcal{P}^*([k-1])} \mathrm{Prob}\{I_{k-1} = S\}\\
&+ \sum_{j=k}^n \beta_j \sum_{S \in \mathcal{P}^*([j])} \mathrm{Prob}\{I_j^> = S\}\,X^{T>}_{kj}(S) - \beta_{k-1} \sum_{S \in \mathcal{P}^*([k-1])} \mathrm{Prob}\{I_{k-1}^> = S\}\\
&+ \sum_{j=k}^n \beta_j \sum_{S \in \mathcal{P}^*([j])} \mathrm{Prob}\{I_j^= = S\}\,X^{T=}_{kj}(S) - \beta_{k-1} \sum_{S \in \mathcal{P}^*([k-1])} \mathrm{Prob}\{I_{k-1}^= = S\} \sum_{i \in S} X^{T=}_{i,k-1}(S)\\
&+ \sum_{j=k}^n \gamma_j \sum_{S \in \mathcal{P}^*([j])} \mathrm{Prob}\{I_j^> = S\}\,X^{M>}_{kj}(S) - \gamma_{k-1} \sum_{S \in \mathcal{P}^*([k-1])} \mathrm{Prob}\{I_{k-1}^> = S\}\\
&+ \sum_{j=k}^n \gamma_j \sum_{S \in \mathcal{P}^*([j])} \mathrm{Prob}\{I_j^= = S\}\,X^{M=}_{kj}(S \cup \{j+1\})\\
&+ \gamma_{k-1} \Big(1 - \sum_{S \in \mathcal{P}^*([k-1])} \mathrm{Prob}\{I_{k-1}^= = S\}\Big). \tag{3.16}
\end{aligned}
\]

Remark 3.4.10. Note that $\sum_{S \in \mathcal{P}^*([k-1])} \mathrm{Prob}\{I_{k-1} = S\} = 1$. For our analysis in this chapter, we do not require the values of the probabilities $\mathrm{Prob}\{I_j = S\}$, $\mathrm{Prob}\{I_j^> = S\}$ and $\mathrm{Prob}\{I_j^= = S\}$ (for $S \in \mathcal{P}^*([j])$ and $j \in [n+1]$). However, these values may be needed for other research; indeed, these probabilities are computed and used in Appendix A.

Subgradients for $F^D$. Proposition 3.4.11 allows us to use any subgradient of $F(\tilde{A}, D)$ for $F^D(\tilde{A})$.
Proposition 3.4.11. $\mathrm{proj}\big(\partial F(\tilde{A}, D)\big) \subseteq \partial F^D(\tilde{A})$, where $\mathrm{proj}$ is the projection given by $\mathrm{proj}(x_1, x_2, \ldots, x_n, x_{n+1}) = (x_1, x_2, \ldots, x_n)$.

Therefore $\mathrm{proj}(g(X, (\tilde{A}, D)))$ is a subgradient for $F^D(\tilde{A})$, and hence we can extend our results to $F^D$.

Remark 3.4.12. One may wish to find a minimum norm subgradient at a point $A$, as it provides an optimality test (a point $A^*$ is optimal if and only if $0 \in \partial F(A^*)$) and the negative minimum norm subgradient is a descent direction (e.g., see [2]). By Eq(3.16) the minimum norm subgradient may be computed with a linear program (LP) in the $l_1$ norm and with a quadratic program (QP) in the $l_2$ norm:
\[
\text{(LP)}\qquad \min \sum_{k=1}^{n+1} z_k \quad\text{subject to}\quad z_k \ge g_k(X, A),\ \ z_k \ge -g_k(X, A)\ \ (1 \le k \le n+1),\quad X \in \Theta.
\]
The decision variable $z_k$ represents the absolute value $|g_k|$ in the $l_1$ norm. The QP formulation is
\[
\text{(QP)}\qquad \min \sum_{k=1}^{n+1} g_k^2(X, A) \quad\text{subject to}\quad X \in \Theta.
\]
This QP has linear constraints but a quadratic objective function.

3.5 Sampling Approach

In this section, we relax the perfect information assumption on the job duration distribution of Chapter 2. We assume that there exists an underlying (true) discrete joint distribution for the job durations, but the distribution is not known; instead, a set of independent samples is available. For example, in many practical scenarios one has daily historical data on surgery durations. Job durations may not necessarily be independent, but samples are (each sample is a vector of the durations of all jobs on a given day). We develop a sampling-based approach to determine the number of independent samples required to obtain a provably near-optimal solution with high probability. That is, with high probability the cost (w.r.t. the true distribution) of the sampling-based schedule is no more than $(1+\epsilon)$ times the cost of an optimal schedule computed with respect to the true distribution.

Let $\epsilon$ be the accuracy level, $1-\delta$ the confidence level and $N = N(\epsilon, \delta, u, o)$ the number of samples. Define $p^k = (p^k_1, p^k_2, \ldots, p^k_n)$ as the $k$th observation of the $N$ samples. We use "$\hat{\ }$" to denote quantities obtained from samples. Let $\hat{p} = \hat{p}(N)$ be the empirical joint probability distribution obtained from $N$ independent observations of $p$, i.e., $\mathrm{Prob}\{\hat{p} = p^k\} = 1/N$ for $1 \le k \le N$. We denote a true optimal appointment vector by $A^*$, i.e., $A^*$ is a minimizer of $F_p(A) = E_p(F(A|p))$; the subscript $p$ emphasizes that these quantities are obtained with respect to the true distribution $p$. Similarly, let $\hat{A} = \hat{A}(N)$ be a minimizer of $F_{\hat{p}}(A) = E_{\hat{p}}(F(A|\hat{p}))$.
Again the subscript $\hat{p}$ emphasizes that the quantities are obtained with respect to the sampling distribution $\hat{p}$. For subgradients, we write the $k$th component as $g_k(X, A)_p$ for $F_p(\cdot)$ and $g_k(X, A)_{\hat{p}}$ for $F_{\hat{p}}(\cdot)$ at the point $A$.

We start our analysis by proving that we can minimize $F_{\hat{p}}(\cdot)$ (and $F^D_{\hat{p}}(\cdot)$) in polynomial time; this follows from Theorem 2.7.1 (and Corollary 2.8.6) of Chapter 2. Then, with an application of Hoeffding's inequality, we establish a connection between the probability of an event with respect to $p$ and with respect to $\hat{p}$, as a function of the sample size $N$, for a given accuracy level $\varepsilon'$ (the absolute difference of the probabilities w.r.t. $p$ and $\hat{p}$) and confidence level $1-\delta'$. After that we provide a similar result for a family of events $\mathcal{F}$ to hold simultaneously. Our next result uses the subdifferential characterization to show the existence of a $g \in \partial F_p(\hat{A})$ such that $|g_k| < \varepsilon' K'$ with high probability, where $K' = K'(n, u, o)$ is some constant. Then we prove that if there exists $g \in \partial F_p(\hat{A})$ such that $|g_k| < \epsilon\nu/(3(n+1)n)$ for all $1 \le k \le n+1$, then $F_p(\hat{A}) \le (1+\epsilon)F_p(A^*)$, where $0 < \epsilon \le 1$ and $\nu = \min\{u_1, \ldots, u_n, o_1, \ldots, o_n\}$. This is achieved with an application of Jensen's inequality and a version of Lemma 5.1 of [22] (Lemma 3.7.2). We conclude by stating our main result, which determines the number of samples required to achieve a $(1+\epsilon)$ approximation with probability at least $1-\delta$.

Corollary 3.5.1. (Polynomial Time Algorithm) If the cost vectors $(u, o)$ are $\alpha$-monotone and the processing durations are integer, then $F_{\hat{p}}(\cdot)$ (and $F^D_{\hat{p}}(\cdot)$) can be minimized in $O\big(n^5\,(nN)\,n^2 \log(\lceil p_{\max}/2 \rceil)\big)$ time.

Proof. Theorem 2.7.1 implies that $F_{\hat{p}}(\cdot)$ can be minimized in $O(\sigma(n)\,\mathrm{EO}\,n^2 \log(\lceil h/2n \rceil))$ time, where $\sigma(n)$ is the number of function evaluations required to minimize a submodular set function over an $n$-element ground set and EO is the time needed for an expected cost evaluation. We can evaluate the expected cost for $F_{\hat{p}}(\cdot)$ in $O(nN)$ by computing the total cost of each realization (which takes $O(n)$ time) and then taking the average of the $N$ total cost realizations, i.e., the sample average approximation (SAA). Finally, Theorem 4 of [29] shows that $\sigma(n) = O(n^5)$. The result for $F^D_{\hat{p}}(\cdot)$ follows similarly from Corollary 2.8.6.

Corollary 3.5.1 shows that, given $N$ samples of job durations, we can solve the SAA counterpart of the appointment scheduling problem efficiently. The remaining task is to find a sufficient number of samples $N$ (for given accuracy and confidence levels) such that, with probability at least the confidence level, the SAA optimal solution has a cost (w.r.t. the true distribution) no more than (1 + the accuracy level) times the optimal cost.

Let $O$ be any event depending on the processing times $p = (p_1, p_2, \ldots, p_n)$, $O = O(p)$. Let $\mathrm{Prob}_p\{O(p)\}$ denote the true probability of $O$, and let $\mathrm{Prob}_{\hat{p}}\{O(p)\}$ denote an estimate of $\mathrm{Prob}_p\{O(p)\}$ when the true distribution of $p$ is not known and the empirical probability distribution $\hat{p}$, based on $N$ independent samples, is used in the estimation. We define the indicator function
\[
\mathbf{1}\{O(p^k)\} = \begin{cases} 1 & \text{if event } O \text{ occurs with realization } p^k,\\ 0 & \text{otherwise;} \end{cases}
\]
then $\mathbf{1}\{O(p^k)\}$ is Bernoulli distributed with parameter $\mathrm{Prob}_p\{O(p)\}$. We define our estimate as
\[
\mathrm{Prob}_{\hat{p}}\{O(p)\} = \frac{1}{N} \sum_{k=1}^N \mathbf{1}\{O(p^k)\}.
\]

Remark 3.5.2. Note that $N\,\mathrm{Prob}_{\hat{p}}\{O(p)\}$ is the sum of $N$ independent Bernoulli random variables with parameter $\mathrm{Prob}_p\{O(p)\}$; therefore $N\,\mathrm{Prob}_{\hat{p}}\{O(p)\}$ is binomially distributed with parameters $\mathrm{Prob}_p\{O(p)\}$ and $N$.

We use Hoeffding's inequality to obtain the number of samples $N$ required such that
\[
\mathrm{Prob}\Big\{\big|\mathrm{Prob}_{\hat{p}}\{O(p)\} - \mathrm{Prob}_p\{O(p)\}\big| \le \varepsilon'\Big\} > 1 - \delta'
\]
for any given accuracy level $\varepsilon' > 0$ and confidence level $0 < \delta' < 1$. Direct application of Hoeffding's inequality for Bernoulli random variables (Theorem 4.5 in [40]) yields
\[
N > \frac{1}{2(\varepsilon')^2} \ln(2/\delta').
\]
Using union bounds we obtain a similar result for a family of events to hold simultaneously.

Lemma 3.5.3. Let $\mathcal{F}$ be a set of (possibly dependent) events $O_1, O_2, \ldots, O_{|\mathcal{F}|}$, where each $O_k \in \mathcal{F}$ depends on the processing times $p = (p_1, p_2, \ldots, p_n)$. Let $0 < \varepsilon', \delta' < 1$. If $N > \frac{1}{2(\varepsilon')^2} \ln(2/\delta')$, then
\[
\mathrm{Prob}\Big\{\big|\mathrm{Prob}_{\hat{p}}\{O_k(p)\} - \mathrm{Prob}_p\{O_k(p)\}\big| \le \varepsilon'\ \forall k = 1, 2, \ldots, |\mathcal{F}|\Big\} > 1 - |\mathcal{F}|\delta'.
\]

We characterized the subdifferential $\partial F_p(\cdot)$ in Section 3.4 with the closed-form expression Eq(3.15), and derived the formula Eq(3.16) for a component $g_k(X, \cdot)_p$ of any subgradient. The formulas for $\partial F_{\hat{p}}(\cdot)$ and $g_k(X, \cdot)_{\hat{p}}$ are identical to Eq(3.15) and Eq(3.16) respectively, except that each $\mathrm{Prob}_p\{\cdot\}$ term is replaced by the corresponding $\mathrm{Prob}_{\hat{p}}\{\cdot\}$ term. We show in Lemma 3.5.4 that if we take a sufficiently large number of samples, then $|g_k(X, \hat{A})_p - g_k(X, \hat{A})_{\hat{p}}|$ will be small with high probability for some $X \in \Theta$. This implies that there exists a small $g \in \partial F_p(\hat{A})$. Recall that $\hat{A}$ is an optimal appointment vector for $F_{\hat{p}}$; therefore there exists $X \in \Theta$ such that $g_k(X, \hat{A})_{\hat{p}} = 0$ for all $1 \le k \le n+1$. If we show that $|g_k(X, \hat{A})_p - g_k(X, \hat{A})_{\hat{p}}| < \varepsilon' K'$, then $|g_k(X, \hat{A})_p - 0| < \varepsilon' K'$, and hence there exists $g \in \partial F_p(\hat{A})$ such that $|g_k| < \varepsilon' K'$ for all $1 \le k \le n+1$. We now show that $|g_k(X, \hat{A})_p - g_k(X, \hat{A})_{\hat{p}}| < \varepsilon' K'$ with probability at least $1 - |\mathcal{F}|\delta'$, where $|\mathcal{F}| = 5n^2 + 5n$ and $K' = n(9 o_{\max} + 4 u_{\max})$, with $o_{\max} = \max(o_1, o_2, \ldots, o_n)$ and $u_{\max} = \max(u_1, u_2, \ldots, u_n)$.

Lemma 3.5.4. If $N > \frac{1}{2}(1/\varepsilon')^2 \ln(2/\delta')$, then $|g_k(X, \hat{A})_p| < \varepsilon' K'$ with probability at least $1 - |\mathcal{F}|\delta'$, where $X \in \Theta$, $|\mathcal{F}| = 5n^2 + 5n$ and $K' = n(9 o_{\max} + 4 u_{\max})$.

Remark 3.5.5. If $u_i = u$ for all $i = 1, 2, \ldots, n$, then $K' = n(4 o_{\max} + 2u) + 2u$, i.e., $|g_k(X, \hat{A})_p - g_k(X, \hat{A})_{\hat{p}}| \le \varepsilon'\big(n(4 o_{\max} + 2u) + 2u\big)$ ($1 \le k \le n+1$) with probability at least $1 - |\mathcal{F}|\delta'$, where $|\mathcal{F}| = (n+1)(4n+2) = 4n^2 + 6n + 2$.
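The $O(nN)$ empirical cost evaluation used in Corollary 3.5.1 is straightforward to implement: compute completion times by the critical-path recursion $C_i = \max(C_{i-1}, A_i) + p_i$, price each realization, and average over the $N$ samples. The following is a minimal sketch (the function names are my own, not from the thesis); the numeric check uses the deterministic two-job example of Remark 3.5.7 below, where durations $(1, 4)$, unit costs and $A = (0, 4, 8)$ give cost 3.

```python
def cost_one_realization(A, p, u, o):
    """Total underage/overage cost F(A|p) for one duration realization p.

    A: planned start times (A_1, ..., A_{n+1}); A_{n+1} closes the horizon.
    p: realized durations (p_1, ..., p_n); u, o: underage/overage unit costs.
    """
    n = len(p)
    total = 0.0
    C = float("-inf")
    for i in range(n):
        C = max(C, A[i]) + p[i]                  # completion time recursion
        total += o[i] * max(C - A[i + 1], 0.0)   # overage: job i runs past A_{i+1}
        total += u[i] * max(A[i + 1] - C, 0.0)   # underage: idle before A_{i+1}
    return total

def empirical_cost(A, samples, u, o):
    """Sample average approximation of E_p[F(A|p)] over N samples: O(nN)."""
    return sum(cost_one_realization(A, p, u, o) for p in samples) / len(samples)

# Deterministic check (Remark 3.5.7): p = (1, 4), unit costs, A = (0, 4, 8).
A, u, o = (0, 4, 8), (1, 1), (1, 1)
print(empirical_cost(A, [(1, 4)], u, o))   # -> 3.0
```

With historical data, `samples` would simply be the list of $N$ observed duration vectors $p^1, \ldots, p^N$; each realization is priced in $O(n)$ and the average taken over $N$, matching the evaluation cost claimed in the proof of Corollary 3.5.1.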
The last piece we need is a connection between $F_p(\hat{A})$ and $F_p(A^*)$ when there exists $g \in \partial F_p(\hat{A})$ with small components. Before this result we need a lemma providing a lower bound function for $F_p$.

Lemma 3.5.6. Let $\bar{p}_i = E[p_i]$, $\bar{p} = (\bar{p}_1, \bar{p}_2, \ldots, \bar{p}_n)$, $\bar{C}_1 = \bar{p}_1$ and $\bar{C}_i = \max(\bar{C}_{i-1}, A_i) + \bar{p}_i$. Define $f(A) = \nu\big(\sum_{i=1}^n [(\bar{C}_i - A_{i+1})^+ + (A_{i+1} - \bar{C}_i)^+]\big)$ and let $\bar{A} \in \arg\min_A f(A)$. If the cost coefficients $(u, o)$ are $\alpha$-monotone, then $\bar{A} = (0, \bar{p}_1, \bar{p}_1 + \bar{p}_2, \ldots, \sum_{j=1}^n \bar{p}_j)$ and $F_p(A) \ge f(A) \ge \frac{\nu}{n}\|A - \bar{A}\|_1$.

Remark 3.5.7. The following example with $n = 2$ jobs shows that this lower bound is tight; that is, we may have $F_p(A) = \frac{\nu}{n}\|A - \bar{A}\|_1$. Let the processing times $p = (1, 4)$ be deterministic and $u_1 = u_2 = o_1 = o_2 = 1$ (therefore $\nu = 1$). Then $\bar{A} = (0, 1, 5)$, and for $A = (0, 4, 8)$ we have $F(A) = 3 = \frac{1}{2}\sum_{i=1}^{n+1} |A_i - \bar{A}_i|$.

The last step we need before our main result is to prove that for a suitably chosen $\hat{A}$ we obtain $F_p(\hat{A}) \le (1+\epsilon)F_p(A^*)$ for any $0 < \epsilon \le 1$. This follows easily from a version of Lemma 5.1 of [22] (Lemma 3.7.2 in Section 3.7).

Lemma 3.5.8. Let $0 < \epsilon \le 1$. If there exists $g \in \partial F_p(\hat{A})$ such that $|g_k| < \epsilon\nu/(3(n+1)n)$ for all $1 \le k \le n+1$, then $F_p(\hat{A}) \le (1+\epsilon)F_p(A^*)$.

Proof. If $|g_k| < \epsilon\nu/(3(n+1)n)$ for all $1 \le k \le n+1$, then $\|g\|_1 \le \epsilon\nu/(3n)$. Since $F_p(A) \ge \frac{\nu}{n}\|A - \bar{A}\|_1$ by Lemma 3.5.6, we can directly apply Lemma 3.7.2 with $\bar{f}(A) = \frac{\nu}{n}\|A - \bar{A}\|_1$ and $\alpha = \epsilon\nu/(3n)$ to obtain the desired result.

Combining Lemmata 3.5.3, 3.5.4 and 3.5.8 yields our main result for the sampling method.

Theorem 3.5.9. Let $0 < \epsilon \le 1$ (accuracy level) and $0 < \delta < 1$ (confidence level) be given. If
\[
N > 4.5\,(1/\epsilon)^2\, n^2 \big((n+1)(9 o_{\max} + 4 u_{\max})/\nu\big)^2 \ln\big(2(5n^2 + 5n)/\delta\big),
\]
then $F_p(\hat{A}) \le (1+\epsilon)F_p(A^*)$ with probability at least $1 - \delta$.

Remark 3.5.10. In the case of uniform underage cost coefficients ($u_i = u$ for all $i$), the bound in Theorem 3.5.9 becomes $4.5(1/\epsilon)^2 n^2 \big((n+1)((4 o_{\max} + 2u) + 2u)/\nu\big)^2 \ln\big(2(5n^2 + 5n)/\delta\big)$. Furthermore, the bound is similar to the bound obtained for the multi-period newsvendor problem in [22], but with a slightly higher polynomial dependence on the number of jobs $n$ (versus the number of periods $T$ in [22]). This is expected, since in the appointment scheduling problem one must make all the decisions (i.e., determine the planned start times of all jobs) at once, before any processing starts, whereas in the inventory problem one decides sequentially at each period.

3.6 Conclusion

We consider the appointment scheduling problem with discrete random durations of Chapter 2, but without assuming any prior knowledge of the probability distribution of job durations. We show that the objective function is convex under a simple sufficient condition. Because the objective is non-differentiable, we work with its subgradients; in fact, we characterize the set of all subgradients, i.e., the subdifferential at a given appointment vector, with a closed-form formula. This is unusual, since in most applications only a single subgradient can be obtained. We use the subdifferential characterization to relax the perfect information assumption of Chapter 2 on the distribution of processing times. We assume that there is an underlying (true) joint discrete distribution for the job durations, of which only independent samples are available, e.g., daily historical observations of surgery durations; job durations need not be independent, but samples are. We develop a sampling-based approach to determine the number of independent samples required to obtain a provably near-optimal solution with high probability: with high probability, the cost of the sampling-based optimal schedule is no more than $(1+\epsilon)$ times the cost of the optimal schedule that would be obtained if the true distribution were known.

3.7 Proofs

Proof.
(Lemma 3.4.3) If at most one of $r_1, r_2, \ldots, r_m$ is non-zero, then there is nothing to prove. So suppose that at least two $r_i > 0$, and w.l.o.g. assume $r_1, r_2 > 0$. We first prove the result for $m = 2$ and then generalize by induction. By definition, $r_1 X = \{r_1 x : x \in X\}$ and $r_1 X + r_2 X = \{r_1 x + r_2 y : x, y \in X\}$.

We first show that $r_1 X + r_2 X \subseteq (r_1 + r_2)X$. Let $a + b \in r_1 X + r_2 X$, where $a \in r_1 X$ and $b \in r_2 X$; then $\frac{a}{r_1} \in X$ and $\frac{b}{r_2} \in X$. Since $X$ is convex, $\frac{a}{r_1}\lambda + \frac{b}{r_2}(1-\lambda) \in X$ for any $0 \le \lambda \le 1$. Taking $\lambda = \frac{r_1}{r_1+r_2}$ gives $\frac{a}{r_1}\frac{r_1}{r_1+r_2} + \frac{b}{r_2}\frac{r_2}{r_1+r_2} = \frac{a+b}{r_1+r_2} \in X$, which implies $a + b \in (r_1+r_2)X$; therefore $r_1 X + r_2 X \subseteq (r_1+r_2)X$.

Next, we show that $(r_1+r_2)X \subseteq r_1 X + r_2 X$. Let $a \in (r_1+r_2)X$; then $\frac{a}{r_1+r_2} \in X$, and therefore $r_1\frac{a}{r_1+r_2} \in r_1 X$ and $r_2\frac{a}{r_1+r_2} \in r_2 X$. Hence $r_1\frac{a}{r_1+r_2} + r_2\frac{a}{r_1+r_2} \in r_1 X + r_2 X$. But $r_1\frac{a}{r_1+r_2} + r_2\frac{a}{r_1+r_2} = a$, so $(r_1+r_2)X \subseteq r_1 X + r_2 X$. This completes the proof for $m = 2$.

Next, assume the result holds for $m = k \ge 2$, i.e., $\sum_{i=1}^k (r_i X) = (\sum_{i=1}^k r_i)X$. We show that it also holds for $m = k+1$. Let $r = \sum_{i=1}^k r_i$. Then
\[
\sum_{i=1}^{k+1} (r_i X) = \sum_{i=1}^{k} (r_i X) + r_{k+1} X = \Big(\sum_{i=1}^{k} r_i\Big)X + r_{k+1} X = rX + r_{k+1}X = (r + r_{k+1})X = \Big(\sum_{i=1}^{k+1} r_i\Big)X,
\]
where the second equality follows from the inductive assumption and the fourth equality from the $m = 2$ case. Therefore the proof is complete.

Proof. (Lemma 3.4.6) By Eq(3.7), the fact that $r(X+Y) = rX + rY$ (for $r \in \mathbb{R}$ and sets $X$ and $Y$) and Lemma 3.4.2, we obtain
\[
\begin{aligned}
\partial T_j(A) &= \sum_p \mathrm{Prob}\{p\}\Big[\mathrm{co}\{\mathbf{1}_k - \mathbf{1}_{j+1} : k \in I_j^>\} + \mathrm{co}\big(\{0\} \cup \{(\mathbf{1}_k - \mathbf{1}_{j+1}) : k \in I_j^=\}\big)\Big]\\
&= \sum_p \mathrm{Prob}\{p\}\,\mathrm{co}\{\mathbf{1}_k - \mathbf{1}_{j+1} : k \in I_j^>\} + \sum_p \mathrm{Prob}\{p\}\,\mathrm{co}\big(\{0\} \cup \{\mathbf{1}_k - \mathbf{1}_{j+1} : k \in I_j^=\}\big).
\end{aligned}
\]
Then, by using the definition of $\mathrm{Prob}\{I_j^> = S\}$ and Lemma 3.4.3 (similarly to Lemma 3.4.5), we get
\[
\begin{aligned}
\sum_p \mathrm{Prob}\{p\}\,\mathrm{co}\{\mathbf{1}_k - \mathbf{1}_{j+1} : k \in I_j^>\}
&= \sum_p \mathrm{Prob}\{p\} \sum_{S \in \mathcal{P}^*([j])} \mathrm{co}\{\mathbf{1}_k - \mathbf{1}_{j+1} : k \in S\}\,\mathbf{1}\{I_j^> = S\}\\
&= \sum_{S \in \mathcal{P}^*([j])} \sum_p \mathrm{Prob}\{p\}\,\mathbf{1}\{I_j^> = S\}\,\mathrm{co}\{\mathbf{1}_k - \mathbf{1}_{j+1} : k \in S\}\\
&= \sum_{S \in \mathcal{P}^*([j])} \mathrm{Prob}\{I_j^> = S\}\,\mathrm{co}\{\mathbf{1}_k - \mathbf{1}_{j+1} : k \in S\}.
\end{aligned}
\]
Next, splitting the sum over $p$ according to whether $I_j^= = \emptyset$ (note that when $I_j^= = \emptyset$ the hull reduces to $\{0\}$), and using the definition of $\mathrm{Prob}\{I_j^= = S\}$ and Lemma 3.4.3, we rewrite $\sum_p \mathrm{Prob}\{p\}\,\mathrm{co}(\{0\} \cup \{\mathbf{1}_k - \mathbf{1}_{j+1} : k \in I_j^=\})$ as
\[
\begin{aligned}
&\sum_{p:\,I_j^= = \emptyset} \mathrm{Prob}\{p\}\,\{0\} + \sum_{p:\,I_j^= \ne \emptyset} \mathrm{Prob}\{p\}\,\mathrm{co}\big(\{0\} \cup \{\mathbf{1}_k - \mathbf{1}_{j+1} : k \in I_j^=\}\big)\\
&= \sum_{S \in \mathcal{P}^*([j])} \sum_p \mathrm{Prob}\{p\}\,\mathbf{1}\{I_j^= = S\}\,\mathrm{co}\big(\{0\} \cup \{\mathbf{1}_k - \mathbf{1}_{j+1} : k \in S\}\big)\\
&= \sum_{S \in \mathcal{P}^*([j])} \mathrm{Prob}\{I_j^= = S\}\,\mathrm{co}\big(\{0\} \cup \{\mathbf{1}_k - \mathbf{1}_{j+1} : k \in S\}\big).
\end{aligned}
\]
Therefore we finally obtain
\[
\partial T_j(A) = \sum_{S \in \mathcal{P}^*([j])} \Big[\mathrm{Prob}\{I_j^> = S\}\,\mathrm{co}\{\mathbf{1}_k - \mathbf{1}_{j+1} : k \in S\} + \mathrm{Prob}\{I_j^= = S\}\,\mathrm{co}\big(\{0\} \cup \{\mathbf{1}_k - \mathbf{1}_{j+1} : k \in S\}\big)\Big].
\]

Proof. (Lemma 3.4.7) Similarly to Lemma 3.4.6, by Eq(3.8) and Lemma 3.4.2 we obtain
\[
\partial M_j(A) = \sum_p \mathrm{Prob}\{p\}\,\mathrm{co}\{\mathbf{1}_k - \mathbf{1}_{j+1} : k \in I_j^>\} + \sum_p \mathrm{Prob}\{p\}\,\mathrm{co}\big(\{\mathbf{1}_{j+1}\} \cup \{\mathbf{1}_k : k \in I_j^=\}\big).
\]
As in Lemma 3.4.6, $\sum_p \mathrm{Prob}\{p\}\,\mathrm{co}\{\mathbf{1}_k - \mathbf{1}_{j+1} : k \in I_j^>\} = \sum_{S \in \mathcal{P}^*([j])} \mathrm{Prob}\{I_j^> = S\}\,\mathrm{co}\{\mathbf{1}_k - \mathbf{1}_{j+1} : k \in S\}$, and we can rewrite $\sum_p \mathrm{Prob}\{p\}\,\mathrm{co}(\{\mathbf{1}_{j+1}\} \cup \{\mathbf{1}_k : k \in I_j^=\})$ as
\[
\begin{aligned}
&\sum_{p:\,I_j^= \ne \emptyset} \mathrm{Prob}\{p\}\,\mathrm{co}\{\mathbf{1}_k : k \in I_j^= \cup \{j+1\}\} + \sum_{p:\,I_j^= = \emptyset} \mathrm{Prob}\{p\}\,\mathbf{1}_{j+1}\\
&= \sum_{S \in \mathcal{P}^*([j])} \sum_p \mathrm{Prob}\{p\}\,\mathbf{1}\{I_j^= = S\}\,\mathrm{co}\{\mathbf{1}_k : k \in S \cup \{j+1\}\} + \Big(1 - \sum_{S \in \mathcal{P}^*([j])} \mathrm{Prob}\{I_j^= = S\}\Big)\mathbf{1}_{j+1}\\
&= \sum_{S \in \mathcal{P}^*([j])} \mathrm{Prob}\{I_j^= = S\}\,\mathrm{co}\{\mathbf{1}_k : k \in S \cup \{j+1\}\} + \Big(1 - \sum_{S \in \mathcal{P}^*([j])} \mathrm{Prob}\{I_j^= = S\}\Big)\mathbf{1}_{j+1}.
\end{aligned}
\]
Hence we obtain
\[
\partial M_j(A) = \sum_{S \in \mathcal{P}^*([j])} \Big[\mathrm{Prob}\{I_j^> = S\}\,\mathrm{co}\{\mathbf{1}_k - \mathbf{1}_{j+1} : k \in S\} + \mathrm{Prob}\{I_j^= = S\}\,\mathrm{co}\{\mathbf{1}_k : k \in S \cup \{j+1\}\}\Big] + \Big(1 - \sum_{S \in \mathcal{P}^*([j])} \mathrm{Prob}\{I_j^= = S\}\Big)\mathbf{1}_{j+1}.
\]

Proof.
(Proposition 3.4.11) Let $y \in \partial F(\tilde{A}, D)$. Then by the subgradient inequality we have $F(B, B_{n+1}) \ge F(\tilde{A}, D) + (B - \tilde{A},\, B_{n+1} - D)\,y^t$ for all $B = (B_1, \ldots, B_n) \in \mathbb{R}^n$ and $B_{n+1} \in \mathbb{R}$. This inequality holds for all $(B, B_{n+1}) \in \mathbb{R}^{n+1}$, and in particular for $(B, D)$. Thus we obtain $F(B, D) \ge F(\tilde{A}, D) + (B - \tilde{A},\, D - D)\,y^t$. This gives $F^D(B) \ge F^D(\tilde{A}) + (B - \tilde{A})\,g^t$, since $F(B, D) = F^D(B)$ and $F(\tilde{A}, D) = F^D(\tilde{A})$, where $g = \mathrm{proj}(y) = (y_1, \ldots, y_n)$. Finally, $F^D(B) \ge F^D(\tilde{A}) + (B - \tilde{A})g^t$ implies that $g = \mathrm{proj}(y) \in \partial F^D(\tilde{A})$.

Proof. (Lemma 3.5.3) The proof is by induction on $|\mathcal{F}|$. For $1 \le k \le |\mathcal{F}|$, let $Y_k$ denote the event $\big\{|\mathrm{Prob}_{\hat{p}}\{O_k(p)\} - \mathrm{Prob}_p\{O_k(p)\}| \le \varepsilon'\big\}$. For $|\mathcal{F}| = 2$ the result holds since
\[
\begin{aligned}
\mathrm{Prob}\{Y_1 \cap Y_2\} &= 1 - \mathrm{Prob}\{\overline{Y_1 \cap Y_2}\} = 1 - \mathrm{Prob}\{\overline{Y_1} \cup \overline{Y_2}\}\\
&\ge 1 - \big(\mathrm{Prob}\{\overline{Y_1}\} + \mathrm{Prob}\{\overline{Y_2}\}\big) \quad (\text{since } \mathrm{Prob}\{\overline{Y_1} \cup \overline{Y_2}\} \le \mathrm{Prob}\{\overline{Y_1}\} + \mathrm{Prob}\{\overline{Y_2}\})\\
&\ge 1 - 2\delta' \quad (\text{since } \mathrm{Prob}\{\overline{Y_1}\}, \mathrm{Prob}\{\overline{Y_2}\} < \delta').
\end{aligned}
\]
Suppose the result is true for $|\mathcal{F}| = k$, i.e., $\mathrm{Prob}\{\cap_{i=1}^k Y_i\} \ge 1 - k\delta'$. Let $Y = \cap_{i=1}^k Y_i$. Then
\[
\mathrm{Prob}\{Y \cap Y_{k+1}\} = 1 - \mathrm{Prob}\{\overline{Y} \cup \overline{Y_{k+1}}\} \ge 1 - \big(\mathrm{Prob}\{\overline{Y}\} + \mathrm{Prob}\{\overline{Y_{k+1}}\}\big) \ge 1 - (k+1)\delta',
\]
as $\mathrm{Prob}\{\overline{Y}\} \le k\delta'$ and $\mathrm{Prob}\{\overline{Y_{k+1}}\} < \delta'$. Therefore the result is also true for $|\mathcal{F}| = k+1$, and hence the proof is complete.

Proof. (Lemma 3.5.4) Since $\hat{A}$ is an optimal appointment vector for $F_{\hat{p}}$, there exists $X \in \Theta$ such that $g_k(X, \hat{A})_{\hat{p}} = 0$ for all $1 \le k \le n+1$. If $|g_k(X, \hat{A})_p - g_k(X, \hat{A})_{\hat{p}}| < \varepsilon' K'$, then $|g_k(X, \hat{A})_p - 0| < \varepsilon' K'$, and hence there exists $g \in \partial F_p(\hat{A})$ such that $|g_k| < \varepsilon' K'$ for all $1 \le k \le n+1$. We now show that $|g_k(X, \hat{A})_p - g_k(X, \hat{A})_{\hat{p}}| < \varepsilon' K'$. We start by taking the difference term by term and factoring out the $X$ terms by using Eq(3.16).
By Eq(3.16), the difference $g_k(X, \hat{A})_p - g_k(X, \hat{A})_{\hat{p}}$ consists of the same groups of terms as Eq(3.16), with each probability replaced by the corresponding difference of probabilities. For brevity, write $\Delta_j(S) = \mathrm{Prob}_p\{I_j = S\} - \mathrm{Prob}_{\hat{p}}\{I_j = S\}$, and define $\Delta^>_j(S)$ and $\Delta^=_j(S)$ analogously for $I_j^>$ and $I_j^=$. The term $-\alpha_{k-1}\sum_{S \in \mathcal{P}^*([k-1])} \Delta_{k-1}(S)$ disappears, since $\sum_{S \in \mathcal{P}^*([k-1])} \mathrm{Prob}_p\{I_{k-1} = S\} = 1 = \sum_{S \in \mathcal{P}^*([k-1])} \mathrm{Prob}_{\hat{p}}\{I_{k-1} = S\}$. By the triangle inequality we obtain
\[
\begin{aligned}
\big|g_k(X, \hat{A})_p - g_k(X, \hat{A})_{\hat{p}}\big| \le\;
&\sum_{j=k}^n \alpha_j \Big|\sum_{S \in \mathcal{P}^*([j])} X^L_{kj}(S)\,\Delta_j(S)\Big|
+ \sum_{j=k}^n \beta_j \Big|\sum_{S \in \mathcal{P}^*([j])} X^{T>}_{kj}(S)\,\Delta^>_j(S)\Big|
+ \beta_{k-1} \Big|\sum_{S \in \mathcal{P}^*([k-1])} \Delta^>_{k-1}(S)\Big|\\
&+ \sum_{j=k}^n \beta_j \Big|\sum_{S \in \mathcal{P}^*([j])} X^{T=}_{kj}(S)\,\Delta^=_j(S)\Big|
+ \beta_{k-1} \Big|\sum_{S \in \mathcal{P}^*([k-1])} \sum_{i \in S} X^{T=}_{i,k-1}(S)\,\Delta^=_{k-1}(S)\Big|\\
&+ \sum_{j=k}^n \gamma_j \Big|\sum_{S \in \mathcal{P}^*([j])} X^{M>}_{kj}(S)\,\Delta^>_j(S)\Big|
+ \gamma_{k-1} \Big|\sum_{S \in \mathcal{P}^*([k-1])} \Delta^>_{k-1}(S)\Big|\\
&+ \sum_{j=k}^n \gamma_j \Big|\sum_{S \in \mathcal{P}^*([j])} X^{M=}_{kj}(S \cup \{j+1\})\,\Delta^=_j(S)\Big|
+ \gamma_{k-1} \Big|\sum_{S \in \mathcal{P}^*([k-1])} \Delta^=_{k-1}(S)\Big|. \tag{3.17}
\end{aligned}
\]
We now find an upper bound for $|g_k(X, \hat{A})_p - g_k(X, \hat{A})_{\hat{p}}|$ by bounding each $|\cdot|$ term in Eq(3.17), using the fact that $X \in \Theta$ and rewriting some of the probability terms. We show this for the first and the third terms, as the remaining bounds are obtained similarly to one of these two. We start with the first term in Eq(3.17). Since $X^L_{kj}(S) = 0$ if $k \notin S$, we may insert the indicator $\mathbf{1}\{k \in S\}$ into $\Delta_j(S)$. Let $\mathcal{P}^+([j]) = \{S \in \mathcal{P}^*([j]) : \Delta_j(S) \ge 0\}$. Then, by the definition of $\mathcal{P}^+([j])$, the fact that $0 \le X^L_{kj}(S) \le 1$, and the triangle inequality, we obtain
\[
\begin{aligned}
\sum_{j=k}^n \alpha_j \Big|\sum_{S \in \mathcal{P}^*([j])} X^L_{kj}(S)\,\mathbf{1}\{k \in S\}\,\Delta_j(S)\Big|
&\le \sum_{j=k}^n \alpha_j \Big|\sum_{S \in \mathcal{P}^+([j])} X^L_{kj}(S)\,\mathbf{1}\{k \in S\}\,\Delta_j(S)\Big|
\le \sum_{j=k}^n \alpha_j \Big|\sum_{S \in \mathcal{P}^+([j])} \mathbf{1}\{k \in S\}\,\Delta_j(S)\Big|\\
&= \sum_{j=k}^n \alpha_j \big|\mathrm{Prob}_p\{k \in I_j,\ I_j \in \mathcal{P}^+([j])\} - \mathrm{Prob}_{\hat{p}}\{k \in I_j,\ I_j \in \mathcal{P}^+([j])\}\big|\\
&\le \sum_{j=k}^n \alpha_j\,\varepsilon' \le \varepsilon' \alpha_{\max}\, n.
\end{aligned}
\]
Similarly, the analogous terms with $X^{T>}_{kj}(S)$, $X^{T=}_{kj}(S)$, $X^{M>}_{kj}(S)$ and $X^{M=}_{kj}(S \cup \{j+1\})$ are bounded by $\varepsilon' \beta_{\max} n$, $\varepsilon' \beta_{\max} n$, $\varepsilon' \gamma_{\max} n$ and $\varepsilon' \gamma_{\max} n$ respectively.

We now find an upper bound for the third term in Eq(3.17). Writing the event $\{I^>_{k-1} = S\}$ as $\{I_{k-1} = S \text{ and } P_{i,k-1} > A_k - A_i : i \in S\}$, inserting the indicator $\mathbf{1}\{i \in S\}$ and summing over $i$, we obtain
\[
\begin{aligned}
\beta_{k-1} \Big|\sum_{S \in \mathcal{P}^*([k-1])} \Delta^>_{k-1}(S)\Big|
&= \beta_{k-1} \sum_{i=1}^{k-1} \big|\mathrm{Prob}_p\{i \in I_{k-1} \text{ and } P_{i,k-1} > A_k - A_i\} - \mathrm{Prob}_{\hat{p}}\{i \in I_{k-1} \text{ and } P_{i,k-1} > A_k - A_i\}\big|\\
&= \beta_{k-1} \sum_{i=1}^{k-1} \big|\mathrm{Prob}_p\{i \in I^>_{k-1}\} - \mathrm{Prob}_{\hat{p}}\{i \in I^>_{k-1}\}\big|
\le \beta_{k-1} \sum_{i=1}^{k-1} \varepsilon' \le \varepsilon' \beta_{\max}\, n.
\end{aligned}
\]
Similarly, the fifth term is bounded by $\varepsilon' \beta_{\max} n$, and the seventh and ninth terms by $\varepsilon' \gamma_{\max} n$ each.

Therefore we can bound the difference from above:
\[
\big|g_k(X, \hat{A})_p - g_k(X, \hat{A})_{\hat{p}}\big| \le \varepsilon'\, n\,(\alpha_{\max} + 4\beta_{\max} + 4\gamma_{\max}) \qquad (1 \le k \le n+1).
\]
Since the cost coefficients $(u, o)$ are $\alpha$-monotone, we have $0 \le \alpha_i \le o_{\max}$, $\beta_i \le o_{\max}$ and $\gamma_i \le u_{\max} + o_{\max}$. Therefore $\alpha_{\max} + 4\beta_{\max} + 4\gamma_{\max} \le 9 o_{\max} + 4 u_{\max}$, so we can take $K' = n(9 o_{\max} + 4 u_{\max})$. We also determine $|\mathcal{F}|$, the maximum number of events needed to compute $|g_k(X, \hat{A})_p - g_k(X, \hat{A})_{\hat{p}}|$ for all $k$: for each $k$ we have $5(n-k+1) + 4(k-1)$, hence at most $5n$, events. Since $k \le n+1$, we have $|\mathcal{F}| \le (n+1)(5n) = 5n^2 + 5n$. This completes the proof.

Proof. (Lemma 3.5.6) Fix $A$. Let $h(p) = F(A|p) = \sum_{i=1}^n [o_i(C_i - A_{i+1})^+ + u_i(A_{i+1} - C_i)^+]$. We claim that $h$ is convex. Recall that by the Identity Lemma 3.3.1, we can rewrite $F(A|p)$, and hence $h(p)$, as
\[
h(p) = F(A|p) = \sum_{i=1}^n \Big[\alpha_i(C_i - A_{i+1}) + \beta_i(C_i - A_{i+1})^+ + \gamma_i\Big(\max\{C_i, A_{i+1}\} - \sum_{k=1}^i p_k\Big)\Big]
\]
for any $\alpha_i \in \mathbb{R}$ ($1 \le i \le n$), where $\beta_i = o_i - \alpha_i$ and $\gamma_i = (u_i + \alpha_i) - (u_{i+1} + \alpha_{i+1})$. Recall that $C_i = \max_{k \le i}\{A_k + \sum_{t=k}^i p_t\}$ (by the Critical Path Lemma 2.4.1), so $C_i$ is convex in $p$.
By $\alpha$-monotonicity $\alpha_i, \beta_i \ge 0$, hence the terms $\alpha_i(C_i - A_{i+1})$ and $\beta_i(C_i - A_{i+1})^+$ are convex in $p$. Furthermore, the term $\gamma_i(\max\{C_i, A_{i+1}\} - \sum_{k=1}^i p_k)$ is convex in $p$. Therefore $h(p)$ is convex.

Recall that $\nu = \min\{u_1, \ldots, u_n, o_1, \ldots, o_n\}$ and that the $\bar{C}_i$ are completion times, but deterministic ones, since we are using the expected values $\bar{p}_i$ for the processing times. We next show that $F_p(A) \ge f(A)$ by applying Jensen's inequality to $h(p)$ and the Identity Lemma 3.3.1 to $F(A|\bar{p})$:
\[
\begin{aligned}
F_p(A) = E_p[h(p)] \ge h(E_p[p]) = F(A|\bar{p})
&= \sum_{i=1}^n \Big[\alpha_i(\bar{C}_i - A_{i+1}) + \beta_i(\bar{C}_i - A_{i+1})^+ + \gamma_i\Big(\max\{\bar{C}_i, A_{i+1}\} - \sum_{k=1}^i \bar{p}_k\Big)\Big]\\
&= \sum_{i=1}^n \big[o_i(\bar{C}_i - A_{i+1})^+ + u_i(A_{i+1} - \bar{C}_i)^+\big]\\
&\ge \sum_{i=1}^n \nu\big[(\bar{C}_i - A_{i+1})^+ + (A_{i+1} - \bar{C}_i)^+\big] = f(A).
\end{aligned}
\]

Next we obtain $\bar{A} \in \arg\min_A f(A)$. Note that $f(A) \ge 0$ for all $A$. Set $A_1 = 0, \ldots, A_{i+1} = \sum_{j=1}^i \bar{p}_j, \ldots, A_{n+1} = \sum_{j=1}^n \bar{p}_j$, i.e., $A_{i+1} - A_i = \bar{p}_i$ and $A_{i+1} = \sum_{k=1}^i \bar{p}_k$ for $1 \le i \le n$. Then $A_{i+1} = \bar{C}_i$ for all $i = 1, \ldots, n$ and $f(A) = 0$. Therefore $\bar{A} = (0, \bar{p}_1, \ldots, \sum_{j=1}^n \bar{p}_j)$ is indeed optimal for $f$.

We next show $F_p(A) \ge \frac{\nu}{n}\|A - \bar{A}\|_1$ by showing $f(A) \ge \frac{\nu}{n}\|A - \bar{A}\|_1$. Note that $\sum_{i=1}^n [(\bar{C}_i - A_{i+1})^+ + (A_{i+1} - \bar{C}_i)^+] = \sum_{i=1}^n |\bar{C}_i - A_{i+1}|$, and the result follows if we show $\sum_{i=1}^j |\bar{C}_i - A_{i+1}| \ge |A_{j+1} - \bar{A}_{j+1}| = |A_{j+1} - \sum_{t=1}^j \bar{p}_t|$ for all $j = 1, 2, \ldots, n$. We distinguish two cases.

First, suppose that $A_{j+1} \le \sum_{t=1}^j \bar{p}_t$. Since $\sum_{t=1}^j \bar{p}_t \le \bar{C}_j$, we have $A_{j+1} \le \bar{C}_j$. Therefore
\[
\sum_{i=1}^j |\bar{C}_i - A_{i+1}| \ge |\bar{C}_j - A_{j+1}| \ge \Big|\sum_{t=1}^j \bar{p}_t - A_{j+1}\Big| = |A_{j+1} - \bar{A}_{j+1}|.
\]
The second case is $A_{j+1} > \sum_{t=1}^j \bar{p}_t$. Then
\[
\sum_{i=1}^j |\bar{C}_i - A_{i+1}| \ge \sum_{i=1}^j (A_{i+1} - \bar{C}_i)^+ = \max(\bar{C}_j, A_{j+1}) - \sum_{t=1}^j \bar{p}_t \ge A_{j+1} - \sum_{t=1}^j \bar{p}_t = |A_{j+1} - \bar{A}_{j+1}|,
\]
where the equality follows from the Identity Lemma 3.3.1. Hence we obtain $\sum_{i=1}^j |\bar{C}_i - A_{i+1}| \ge |A_{j+1} - \bar{A}_{j+1}|$ for all $1 \le j \le n$.
Therefore, for every $j = 1, \ldots, n$,
\[
\sum_{i=1}^n |\bar{C}_i - A_{i+1}| \ge \sum_{i=1}^j |\bar{C}_i - A_{i+1}| \ge |A_{j+1} - \bar{A}_{j+1}|,
\]
and hence
\[
n f(A) = n\nu \sum_{i=1}^n \big[(\bar{C}_i - A_{i+1})^+ + (A_{i+1} - \bar{C}_i)^+\big] = n\nu \sum_{i=1}^n |\bar{C}_i - A_{i+1}| \ge \nu \|A - \bar{A}\|_1,
\]
as desired. Therefore $F_p(A) \ge \frac{\nu}{n}\|A - \bar{A}\|_1$. This completes the proof.

Definition 3.7.1. (Definition 3.3 of [22]) Let $f: \mathbb{R}^m \to \mathbb{R}$ be convex. A point $y$ is an $\alpha$-point if there exists $g \in \partial f(y)$ such that $\|g\|_1 \le \alpha$.

Lemma 3.7.2. (A version of Lemma 5.1 of [22]) Let $f: \mathbb{R}^m \to \mathbb{R}$ be convex and finite, with a global minimizer $y^*$. Assume that there exists $\bar{f}$ such that $f \ge \bar{f} = \lambda\|y - \bar{y}\|_1$ for some $\lambda > 0$ and $\bar{y} \in \mathbb{R}^m$. If $\hat{y}$ is an $\alpha$-point for $\alpha = \lambda\epsilon/3$, then $f(\hat{y}) \le (1+\epsilon)f(y^*)$.

Proof. Let $L = f(y^*)/\lambda$ and consider the $l_1$ norm ball $B = B(\bar{y}, L)$; then $y^* \in B(\bar{y}, L) = \{y : \lambda\|y - \bar{y}\|_1 \le f(y^*)\}$. The subgradient inequality at $\hat{y}$, combined with the Cauchy-Schwarz inequality (which also holds for the $l_1$ norm), yields $f(\hat{y}) - f(y^*) \le \alpha\|\hat{y} - y^*\|_1$. We also have $\|\hat{y} - y^*\|_1 \le \|\hat{y} - \bar{y}\|_1 + \|\bar{y} - y^*\|_1 \le f(\hat{y})/\lambda + L = f(\hat{y})/\lambda + f(y^*)/\lambda$. So we obtain $f(\hat{y}) - f(y^*) \le \alpha(f(\hat{y})/\lambda + f(y^*)/\lambda)$, and hence $f(\hat{y}) \le f(y^*)(\lambda + \alpha)/(\lambda - \alpha)$. If we choose $\alpha \le \lambda\epsilon/3$, the result follows, since $(\lambda + \alpha)/(\lambda - \alpha) \le (1 + \epsilon/3)/(1 - \epsilon/3) \le 1 + \epsilon$ for $0 < \epsilon \le 1$.

3.8 Bibliography

[1] Shabbir Ahmed and Alexander Shapiro. The sample average approximation method for stochastic programs with integer recourse. Optimization Online, 2002.
[2] Mokhtar S. Bazaraa, Hanif D. Sherali, and C. M. Shetty. Nonlinear Programming: Theory and Algorithms. John Wiley & Sons, 2006.
[3] Dennis Blumenfeld. Operations Research Calculations Handbook. CRC Press, 2001.
[4] James H. Bookbinder and Anne E. Lordahl. Estimation of inventory re-order levels using the bootstrap statistical procedure. IIE Transactions, 21(4):302-312, 1989.
[5] Peter M. Vanden Bosch, Dennis C. Dietz, and John R. Simeoni. Scheduling customer arrivals to a stochastic service system. Naval Research Logistics, 46:549-559, 1999.
[6] Tugba Cayirli and Emre Veral.
Outpatient scheduling in health care: A review of literature. Production and Operations Management, 12(4), 2003.

[7] Leon Yang Chu, J. George Shanthikumar, and Zuo-Jun Max Shen. Solving operational statistics via a Bayesian analysis. Operations Research Letters, pages 110–116, 2008.

[8] Brian Denton and Diwakar Gupta. A sequential bounding approach for optimal appointment scheduling. IIE Transactions, 35:1003–1016, 2003.

[9] Xiaomei Ding, Martin L. Puterman, and Arnab Bisi. The censored newsvendor and the optimal acquisition of information. Oper. Res., 50(3):517–527, 2002.

[10] Mohsen Elhafsi. Optimal leadtime planning in serial production systems with earliness and tardiness costs. IIE Transactions, 34:233–243, 2002.

[11] Satoru Fujishige. Submodular Functions and Optimization. Elsevier, 2005.

[12] Guillermo Gallego and Ilkyeong Moon. The distribution free newsboy problem: Review and extensions. J. Oper. Res. Soc., 44(8):825–834, 1993.

[13] Gregory A. Godfrey and Warren B. Powell. An adaptive, distribution-free algorithm for the newsvendor problem with censored demands, with application to inventory and distribution problems. Man. Sci., 47(8):1101–1112, 2001.

[14] Jean-Baptiste Hiriart-Urruty and Claude Lemaréchal. Convex Analysis and Minimization Algorithms I and II. Springer, 1993.

[15] Wassily Hoeffding. Probability inequalities for sums of bounded random variables. J. American Statistical Assoc., 58(301):13–30, 1963.

[16] Woonghee Tim Huh, Retsef Levi, Paat Rusmevichientong, and James B. Orlin. Adaptive data-driven inventory control policy based on Kaplan-Meier estimator. Working Paper, 2008.

[17] Woonghee Tim Huh and Paat Rusmevichientong. A non-parametric asymptotic analysis of inventory planning with censored demand. Math. of Oper. Res., to appear, 2009.

[18] Satoru Iwata. Submodular function minimization. Math. Program., 112:45–64, 2008.

[19] Guido C. Kaandorp and Ger Koole. Optimal outpatient appointment scheduling. Health Care Man.
Sci., 10:217–229, 2007.

[20] E. L. Kaplan and Paul Meier. Nonparametric estimation from incomplete observations. J. American Statistical Assoc., 53(282):457–481, 1958.

[21] Anton J. Kleywegt, Alexander Shapiro, and Tito Homem-de-Mello. The sample average approximation method for stochastic discrete optimization. SIAM J. Optim., 12:479–502, 2001.

[22] Retsef Levi, Robin O. Roundy, and David B. Shmoys. Provably near-optimal sampling-based policies for stochastic inventory control models. Math. of Oper. Res., 32(4):821–838, 2007.

[23] Liwan H. Liyanage and J. George Shanthikumar. A practical inventory control policy using operational statistics. Operations Research Letters, 33:341–348, 2005.

[24] James Luedtke and Shabbir Ahmed. A sample approximation approach for optimization with probabilistic constraints. SIAM J. Optim., 19:674–699, 2008.

[25] S. T. McCormick. Submodular function minimization. A chapter in the Handbook on Discrete Optimization, Elsevier, K. Aardal, G. Nemhauser, and R. Weismantel, eds., 2006.

[26] Kazuo Murota. Discrete convex analysis. Math. Programming, 83(3):313–371, 1998.

[27] Kazuo Murota. Discrete Convex Analysis, SIAM Monographs on Discrete Mathematics and Applications, Vol. 10. Society for Industrial and Applied Mathematics, 2003.

[28] Kazuo Murota. On steepest descent algorithms for discrete convex functions. SIAM J. Optim., 14(3):699–707, 2003.

[29] James B. Orlin. A faster strongly polynomial time algorithm for submodular function minimization. Math. Programming, 118(2):237–251, 2007.

[30] Georgia Perakis and Guillaume Roels. Regret in the newsvendor model with partial information. Oper. Res., 56(1):188–203, 2008.

[31] Warren B. Powell, Andrzej Ruszczynski, and Huseyin Topaloglu. Learning algorithms for separable approximations of discrete stochastic optimization problems. Math. of Oper. Res., 29(4):814–836, 2004.

[32] Lawrence W. Robinson, Yigal Gerchak, and Diwakar Gupta.
Appointment times which minimize waiting and facility idleness. Working Paper, DeGroote School of Business, McMaster University, 1996.

[33] R. Tyrrell Rockafellar. Theory of Subgradients and Its Applications to Problems of Optimization: Convex and Nonconvex Functions. Helderman-Verlag, Berlin, 1981.

[34] F. Sabria and C. F. Daganzo. Approximate expressions for queueing systems with scheduled arrivals and established service order. Transportation Science, 23:159–165, 1989.

[35] Herbert Scarf. A min-max solution to an inventory problem. In K. J. Arrow, S. Karlin, and H. Scarf, editors, Studies in the Mathematical Theory of Inventory and Production, 1958.

[36] Alexander Shapiro. Stochastic programming approach to optimization under uncertainty. Math. Programming, 112:183–220, 2007.

[37] Alexander Shapiro and Arkadi Nemirovski. On complexity of stochastic programming problems. A chapter in Continuous Optimization: Current Trends and Applications, Springer, V. Jeyakumar and A. M. Rubinov, eds., 2005.

[38] Chaitanya Swamy and David B. Shmoys. Algorithms column: Approximation algorithms for 2-stage stochastic optimization problems. ACM SIGACT News, 37(1):33–46, 2006.

[39] P. Patrick Wang. Static and dynamic scheduling of customer arrivals to a single-server system. Naval Research Logistics, 40(3):345–360, 1993.

[40] Larry Wasserman. All of Statistics: A Concise Course in Statistical Inference. Springer Texts in Statistics, 2004.

[41] E. N. Weiss. Models for determining estimated start times and case orderings in hospital operating rooms. IIE Transactions, 22(2):143–150, 1990.

4 Incentive-Based Surgery Scheduling: Determining Optimal Number of Surgeries1

We study the problem of determining the number of surgeries for an operating room (OR) block where surgery durations are random, there are significant idle and overtime costs for running an OR, and the incentives of the parties involved (hospital and surgeon) are not aligned.
We explore the interaction between the hospital and the surgeon in a game-theoretic setting, present empirical findings on surgery durations and suggest, under reasonable assumptions, payment schemes that the hospital may offer to the surgeon to reduce its (idle and especially overtime) costs.

4.1 Introduction

Healthcare is one of the biggest industries in North America. Canada was expected to spend $148 billion on healthcare in 2006 [8], which accounts for more than 10% of its GDP. The situation in the United States is similar: in 2006, healthcare accounted for 15.3% of GDP [6]. Healthcare challenges, including rising costs and demand, are continually becoming more acute not only in Canada and the United States, but in almost every country in the world [5]. In [10], Glouberman and Mintzberg develop a novel framework to analyze healthcare management. According to this framework, we can think of healthcare as an industry like any other, but with some additional unique characteristics. Unlike other businesses, either private or public, no one is in complete charge (e.g., of a hospital), and there are several decision makers with conflicting objectives. For instance, managers make resource allocation decisions, but it is doctors who decide what to do with those resources.

Healthcare is more easily categorized as belonging to the public sector than to the private sector in most countries. In his article [7], Dixit states the following about public sector agencies: "Public sector agencies have some special features, most notably a multiplicity of dimensions - of tasks, of the stake holders and their often conflicting interests about the ends and the means, and of the tiers of management and front-line workers".

1 A version of this chapter will be submitted for publication: Begen, M.A., Ryan, C. and Queyranne, M. Incentive-Based Surgery Scheduling: Determining Optimal Number of Surgeries.
The framework in [10] makes it easier to understand this statement and why the healthcare system is difficult to manage. We can think of hospitals as essentially independent blocks of a healthcare system. According to [10], there are four different management groups – four worlds, if you will – within a hospital, as shown in Figure 4.1.

[Figure 4.1: Healthcare Players and Tasks – the four groups and their tasks (managers: control; doctors: cure; nurses: care; trustees: community), arranged along an up/down (administrative/clinical) axis and an in/out (inside/outside the hospital) axis]

Each group in Figure 4.1 has its own objectives, and has some scope to make its own decisions. Doctors and nurses deliver clinical operations, and hence focus on downstream considerations; that is, closer to dealing with the actual health of patients. Managers and trustees are responsible for budgeting and raising funds for the hospital, so their concerns are upstream. On the other hand, managers and nurses are employees who work in the hospital, while doctors and trustees work out of the hospital, since they are not technically employees of the hospital. In Canada and the United States, although some doctors are salaried hospital employees, most doctors are private entrepreneurs who have admission privileges at a hospital, work on a fee-for-service basis and appear when the patient needs a cure or treatment [5].

The significance of this framework for our study is the fact that in order to provide a medical service, such as a surgery, all four groups have a unique contribution to make. Conversely, the actions of any one of the groups have an effect on the ability of the others to perform their duties. The picture gets even more complicated when we think of government and insurance companies. In [10], Glouberman and Mintzberg conclude that these decision-makers must achieve a certain level of integration to provide effective healthcare management. Other studies, including Calmes and Shusterich [3] and Marco [15], reach a similar conclusion for operating room management.
Operating rooms are among the most essential parts of a hospital, and also among the costliest. These authors, among others (see for instance [11, 13]), point out that ORs are one of the most difficult places to manage in a hospital, and that it is imperative to improve collaboration between the players of an OR for any advancement in OR management. In this chapter, we focus on a small but important part of healthcare operations – surgery scheduling and, more specifically, setting the number of surgeries for an OR. In particular, we study the interactions between a surgeon and a hospital in determining the number of surgeries for an OR when surgery durations are random and there are significant costs for idle time and overtime. More specifically, we explore the situation, reported in the literature (Olivares et al. [18]) and observed empirically (Section 4.2), that surgeons over-schedule their allotted OR time, i.e., they schedule too many surgeries for their OR time. We argue that this observation can be explained by the incentive of surgeons to take advantage of the fee-for-service payment structure for surgeries performed, combined with the fact that surgeons do not bear overtime costs at the hospital level. This creates a cost which is borne by the hospital, which operates the OR and pays the surgery support staff. Thus, in our model the hospital has an incentive to limit the number of surgeries performed by surgeons in order to reduce overtime expenditures. We explore this misalignment of incentives – for the surgeon to over-schedule and the hospital to control overtime costs – in a game-theoretic setting. Only recently has some systematic attention been given in the literature to incentive issues in health care management. For instance, researchers have studied physician-patient, government-physician, and hospital-physician relationships in the context of a principal-agent modeling framework [23].
There are also studies such as [19] that look at a bigger picture and explore patient-physician-third party payer relationships. A more recent study [12] develops a framework to empirically estimate the parameters of a principal-agent model to design a payment system for dialysis providers. Other authors focus on expansion of ORs [13] and stakeholder interactions in ORs [14] in basic game-theoretic environments. In [14], it is stated that many interactions between surgeons, anesthetists, nurses and hospital management can be seen as a repeated game. We also mention two empirical studies [12, 18] which estimate cost parameters in a principal-agent model and a newsvendor model, respectively, for making decisions about compensation and capacity. Of particular interest to the present work is a suggestion raised in Olivares et al. [18] that schedule overruns are mainly caused by incentive conflicts and over-confidence. Our chapter takes a systematic look at this very question, providing a model by which these incentive conflicts can be identified and effectively analyzed. To the best of our knowledge, our chapter presents the first systematic study of determining the number of surgeries for an OR block, investigating the interaction between the surgeon and the hospital (management) in a game-theoretic setting. Our research has been motivated by our observations from applied healthcare projects such as [20], the literature, e.g., [18], and empirical findings (Section 4.2). The organization of the chapter is as follows. In Section 4.2, we define and motivate the problem, give an overview of the surgery scheduling process, present data and discuss findings (empirical, literature-based and anecdotal) on the underestimation of surgery durations, and hence on overtime in an OR (block). Section 4.3 presents our model, including notation and a thorough discussion of assumptions.
In this section, we define the objective functions of the surgeon and the hospital and explore their properties. In Section 4.4, we demonstrate a misalignment of incentives between the hospital and the surgeon. Section 4.5 provides alternative contracting scenarios whereby these incentives can be realigned, and characterizes sufficient conditions under which these schemes are cost-effective for the hospital. This section also includes a discussion of welfare considerations from the perspective of an upstream planner (in the Canadian system, the provincial government) who is concerned with maximizing social welfare, including impacts on patients. We consider a different formulation of our model in Section 4.6 which makes alternative assumptions about the independence of surgeries, and in this framework we explore the impact on some of our results in the presence of a risk-averse surgeon. All the proofs of our analytical results are placed in Section 4.8.

4.2 Problem Description and Motivation

The objective of our study is to determine the number of elective (i.e., non-emergency, scheduled) surgeries for an OR block where surgery durations are random, there are significant idle and overtime costs, and the incentives of the hospital and surgeon (the parties involved in the scheduling process) are not necessarily aligned. We start with an overview of the surgery scheduling process. In practice, scheduling surgeries in a medical facility is a complex and important process, and the choice of schedule directly impacts the number of patients treated for each specialty, cancelation of surgeries, utilization of resources, wait times, and the overall performance of the system [20]. The surgery scheduling process for elective cases is usually considered as a three-level process [1, 2, 18], which we now describe. The first level defines and assigns the OR time among the surgical specialties, usually called mix planning.
A surgical OR block schedule is developed at the second level. An OR block schedule is simply a table that assigns each specialty surgery time in ORs on each day. The times are called blocks. The OR block schedule is sometimes called the master surgical schedule (see Figure 2 of [20] for a sample OR block schedule). Finally, in the third level we schedule individual cases on a daily basis, also known as patient mix. We can classify these levels as the strategic, tactical and operational stages of the surgery scheduling process, respectively. Figure 4.2 gives an overview of the process in terms of decisions, decision makers and decision levels.

[Figure 4.2: Surgery Scheduling Process – decision levels, decision makers and decisions: Strategic – Health Authority – budget and surgical mix (specialties and % of time, i.e., capacity per specialty); Tactical – Hospital Management – block schedule (blocks for each specialty/surgeon); Operational – Surgeons – patient schedule (scheduling of patients into a block)]

In the first level, the budget often determines the available OR time, and there could be several factors determining the proportion of time to be assigned to each surgical specialty. For instance, waiting times (or the number of patients waiting for a certain type of surgery) and seniority of surgeons might be used to define the amount of OR time required by each specialty. In the second level, the OR time assigned to each specialty is used to build the surgical block schedule, assigning days of the week and operating rooms, taking into consideration the availability of both ORs and post-surgery resources such as recovery beds [20, 1]. The third level has more of an operational focus. Individual surgeries are scheduled within an assigned block in the overall OR block schedule. It is at this level that one determines the number of surgeries to perform in a block, the sequence of the surgeries performed and the planned start times (appointment times) of the surgeries.
It is primarily at this level that variability in surgery durations plays a key role. It is also important to note that surgeons may have surgical privileges in one or more hospitals and they are often the decision-makers who manage the third stage of the surgical scheduling process [20]. Figure 4.3 shows the decisions taken, and their usual order, in this operational level of surgery scheduling.

[Figure 4.3: The Third Level of Surgery Scheduling Process – in order: (1) number of surgeries, (2) sequence of the surgeries, (3) planned start times (appointment times) of the surgeries]

Ideally, one should consider all three decisions not in isolation but within a unified framework of analysis. However, practical applications and mathematical challenges force practitioners and academics to work on these problems individually. For instance, even the problem of determining planned start times of surgeries is difficult on its own (e.g., see Chapter 2 and the references therein). In this chapter, we address the first decision of level 3, namely determining the number of surgeries. We do not consider sequencing (which is a challenging problem in itself) since we will assume identical surgeries. We also do not consider appointment scheduling, because our main focus is to explain the incentive issues between the surgeon and the hospital in setting the number of surgeries, not their schedule. A practical justification for this is the common practice for all patients expecting a surgery on a given day to arrive at the hospital in the morning and await their surgeries. Thus, the surgeon has no idle time between surgeries scheduled in an OR block. We consider the problem in a Canadian context, where hospitals receive funding to make ORs and supporting staff and equipment available. As explained in Blake and Donald [2], governments in Canada have historically managed the amount of healthcare services by putting rigid constraints on hospital budgets.
By doing so they control hospital spending and resources, and therefore indirectly the actions of physicians. In our setting, the hospital is a cost minimizer, and it receives funding (e.g., from a provincial government) to run ORs. We consider two types of cost for a hospital: idle time and overtime costs. Idle time cost may be seen as an opportunity cost of the OR being idle; this is especially important in a Canadian context due to important political and social issues related to the length of surgical waiting lists [22]. Overtime costs are also significant, since operating beyond the regular OR time is often quite costly. One source of cost is in paying nurses, who are paid overtime by the hospital when they work beyond shift hours. On the other hand, the surgeon is a private entrepreneur who has privileges at the hospital and works on a fee-for-service basis. It is commonly believed in practice that surgeons tend to underestimate surgery durations and hence perform many surgeries in their allotted OR time, more than what may be ideal for the hospital. Empirical findings provide some evidence for this belief, as we show next.

Due to the randomness of surgery durations2 (Figure 4.4 and [24]), one cannot predict the precise time required for a surgery; instead it must be estimated. Usual practice is to use the surgeon's surgery duration predictions in OR bookings. In Canada, usually (if not always) it is the surgeons who keep track of the patients that require a surgical procedure, decide on the order in which they will be performed and determine the schedule of their OR block.

[Figure 4.4: Duration Distribution of a Simple Hernia Operation – frequency histogram of actual surgery durations in minutes]

Anecdotal evidence (our discussions with surgeons, anesthetists and OR booking managers, and observations in projects with hospitals and health authorities) and data-based evidence ([18], Figure 4.5) suggest that surgeons are often overly optimistic about the duration of surgeries that they perform, i.e., surgeons think that they can perform surgeries quicker or they tend to underestimate their durations. The hospital may have historical data on the surgeon and a specific type of surgery, and the hospital's OR booking manager may sometimes interfere with the surgeon's predictions if the manager thinks that the surgeon is underestimating the durations. However, this is not common, since "surgeons are the most mobile and least easily replaced healthcare professionals" as stated in [13]. This is due in part to the fact that surgeons are a highly mobile and scarce resource in the Canadian healthcare industry and thus wield a lot of power. In a city with multiple hospitals or private clinics their power is further enhanced, due to their mobility between hospitals.

Figure 4.5 depicts a comparison of the actual and booked/scheduled durations of surgeries. If surgery durations were perfectly estimated, we would expect all surgeries to lie on the 45-degree line. However, in the majority of cases (81% in the data we collected) actual durations were longer than booked/scheduled durations. Figure 4.5 shows that durations of individual surgeries are often underestimated. One may ask how this phenomenon actually affects the daily overall performance of an OR block, i.e., the amount of overtime for an OR as well as the likelihood of an OR going overtime. To answer this question, we look at the data at an operating room level. Our data comes from a hospital with seven ORs and on average five ORs run per day.

2 The data used in this chapter come from a local hospital in Vancouver, BC. The data cover a period in 2007 and 2008 with over 5000 elective surgeries. The hospital has over fifteen surgical specialties and seven ORs.
For each OR, we compute the daily average of scheduled and overtime OR minutes.

[Figure 4.5: Actual and Scheduled Surgery Durations – scatter plot of actual versus scheduled surgery duration in minutes; most points lie above the 45-degree line]

We summarize our findings in Figure 4.6. The figure also shows the percentage of overtime, i.e., the ratio of overtime OR minutes to scheduled OR minutes. We see from this figure that the overtime amount is well over 20% for each OR. Total average daily overtime minutes from all ORs add up to 167 minutes. We also find the percentage of days on which each OR has overtime, to estimate the probability of daily overtime for each OR. We give these probabilities in Table 4.1. These numbers are at or above 75%, suggesting that overtime for an OR is very likely.

Table 4.1: Estimates of Daily Overtime Probability per OR

Operating room:  A     B     C     D     E     F     G
Probability:     0.91  0.95  0.97  0.77  0.75  0.93  0.91

These empirical findings – a significant amount and a high likelihood of overtime – suggest that the cost of overtime can be substantial; overtime pay rates (of hospital personnel) are more expensive than regular pay rates. In addition, excessive overtime can cause losses in job satisfaction, fatigue and other work-related problems for hospital employees. If an
Additionally, savings from reduction in overtime costs may be used to increase hospital resources such as regular OR time, recovery and intensive care beds. Then the question becomes how can we reduce overtime? Before we propose a solution, we discuss reasons for overtime. The current method of assigning surgeries to OR time works roughly as follows: the surgeon provides an OR booking manager with a list of surgeries with estimated durations. If the estimated durations are less than the allotted time then the booking manager accepts the schedule and coordinates the appropriate surgical support staff for each individual operation. Now, Figure 4.5 shows underestimation of individual surgery durations by surgeons, i.e., surgeons are quite optimistic on how quickly they perform surgeries. By underestimating the duration of surgeries, the surgical plan presented to the booking manager may contain more surgeries than can be actually accommodated in the OR block. Thus, reducing overtime crucially depends on the way that surgical plans are devised by surgeons and approved by hospital management staff. 99  A first step in remedying this situation is understanding why surgeons book more surgeries than they can realistically complete in a given OR block. One reason is that surgeons have a desire to serve as many of their patients as possible for altruistic reasons – a surgeon sees his patients suffer and hopes they can be healed as promptly as possible. Another more structural reason derives from the remuneration scheme for surgeons. In Canada, most surgeons are paid by the provincial government based on the number and type of surgeries they perform, irrespective of how much time each surgery takes. If we assume surgeons are profit maximizers this payment scheme puts emphasis of performing as many surgeries as possible and devalues costs associated with the overuse of hospital resources. 
The intuition is simple: suppose having an additional hour in the OR allows the surgeon to take on one more surgery. The surgeon may be quite willing to take on this overtime, since the benefit of performing the surgery is a fixed amount that may be worth significantly more than the disutility of one hour of working overtime for the surgeon. This may often be the case, especially since the surgeon need not consider the overtime costs of support staff and materials when deciding if working one more hour is desirable. We argue that the optimism we see in the data regarding the estimated duration of surgeries reflects this incentive, since it directly effects the number of surgeries they are able to book and execute in their block. Hospitals have an interest in influencing surgeons to perform fewer surgeries and more accurately estimate their durations, since this would mean less overtime costs. This raises two important considerations for the hospital. The first is to decide themselves how many surgeries they would prefer be performed in the OR block. It is reasonable to assume that hospital would be better off with a surgeon who optimizes the use of resources (and the most important resource is the OR time) rather than one who is interested in profit maximizing, i.e., hospital’s ideal number of surgeries may be less than the profit maximizer surgeon’s number. We propose that the hospital should determine its own ideal number of surgeries in order to minimize its own cost. Thus leads us, however, to the second key issue: how can the surgical booking procedure be adjusted so that hospitals can influence surgeons to decide on a surgery plan that better reflects the costs of the hospital. 
Indeed, the surgeon may not cooperate with being told to do fewer surgeries (e.g., she/he has a better outside alternative) and thus the hospital must consider how to design a contract that will be accepted by the surgeon and also save money for the hospital (with respect to the current "high-overtime" situation). In this chapter, we characterize analytically the number of surgeries that minimizes hospital costs, find conditions under which this number is less than the surgeon's preference, and suggest contracts to remedy this misalignment between the hospital and the surgeon in determining the number of surgeries in a given OR block. We consider the surgeon as an agent and the hospital as a principal and use a simple principal-agent model of analysis common in applied economics. (For a review of principal-agent theory we refer the reader to [9] and the references therein.) In principal-agent theory, when there is no information asymmetry between the players, the first best (i.e., the best possible outcome) can be achieved with a "forcing contract", i.e., the principal can dictate to the agent what to do (i.e., how many surgeries to perform). With or without information asymmetry between the players, a residual claimancy contract can be used to achieve the first best if the agent is risk neutral and such a contract is feasible (e.g., there is a single agent who has unlimited wealth). We can think of a residual claimancy contract in our setting as the hospital renting the operating room to the surgeon, at the hospital's opportunity cost of the room. The surgeon then makes his or her scheduling decision having internalized all costs and benefits of the decision. The incentive problem is completely resolved.
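The alignment effect of a residual claimancy contract can be illustrated with a small simulation. Everything below is a stylized toy of our own construction (the lognormal durations, the fee, the per-minute overtime and idle costs, and the booking cap are all hypothetical, and the fixed rent is omitted since it does not affect the surgeon's choice): a fee-for-service surgeon who bears no OR costs books as many surgeries as the cap allows, while a surgeon who rents the OR and internalizes its idle and overtime costs chooses an interior number of surgeries.

```python
import random

def simulate_costs(n, d=480, sims=4000, seed=0):
    """Monte Carlo estimates of expected overtime and idle minutes when n
    independent surgeries (illustrative lognormal durations, mean ~59 min)
    are performed in an OR block of length d minutes."""
    rng = random.Random(seed)
    ot = idle = 0.0
    for _ in range(sims):
        total = sum(rng.lognormvariate(4.0, 0.4) for _ in range(n))
        ot += max(0.0, total - d)
        idle += max(0.0, d - total)
    return ot / sims, idle / sims

r = 1000.0            # hypothetical fee per surgery
c_o, c_i = 30.0, 5.0  # hypothetical overtime and idle cost per OR minute
cap = 12              # hypothetical cap on the number of bookable surgeries

def fee_only_profit(n):
    # fee-for-service surgeon: paid per surgery, bears no OR costs
    return r * n

def internalized_profit(n):
    # residual claimant: bears the OR's idle and overtime costs
    ot, idle = simulate_costs(n)
    return r * n - c_o * ot - c_i * idle

n_fee = max(range(1, cap + 1), key=fee_only_profit)
n_internal = max(range(1, cap + 1), key=internalized_profit)
print(n_fee, n_internal)  # the fee-only surgeon books the cap;
                          # the residual claimant stops earlier
```

The design point is that internalization changes the surgeon's marginal calculation: once the expected overtime cost of one more surgery exceeds the fee, adding it no longer pays.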
In our analysis, both players are risk neutral (although we briefly study the case of a risk-averse surgeon in Section 4.6.2), and we consider two cases regarding information asymmetry: no information asymmetry (the hospital has access to the surgery duration distribution) and information asymmetry (the hospital has access only to the mean of surgery durations). We propose two payment schemes that achieve the first best, depending on how much information the hospital has on surgery durations. If the hospital has access only to the mean duration of surgeries then it may choose a three-part contract (Section 4.5.3), and if the hospital has access to the entire distribution of surgery durations then it may choose either a take-it-or-leave-it offer at the optimal number of surgeries or a three-part contract (Section 4.5.2). The three-part contract can be seen as a residual claimancy contract, whereas the take-it-or-leave-it offer can be thought of as a forcing contract.

4.3 The Model

We start with a short description of our notation and define the objective functions of the hospital and the surgeon. We will make several important assumptions to refine our model, help focus on the incentive issues involved and avoid over-complication. Each assumption will be discussed and motivated, and the more restrictive assumptions will be noted. As discussed, the scenario is a surgeon working out of a hospital, using its OR facilities and support staff. The hospital receives funding (e.g., from a provincial government) to make its operating rooms available for surgeries at the minimum possible cost. The surgeon works in an operating room reserved (determined by an OR block schedule) for her/his use in the hospital. The scheduled time, i.e., the length of the OR block allocated to this surgeon, will be denoted d and is given exogenously in our model. The surgeon is paid a fee-for-service rate of r dollars per surgery directly by the provincial government.
It is important to stress that we assume that the surgeon is not directly paid by the hospital (as in the Canadian health care system). Indeed, this is a distinctive feature of our model as compared to a classical principal-agent framework where the principal compensates the agent. The surgeon decides the number n of surgeries to schedule during her/his allotted time. We assume that there is a long list of people waiting to receive the given surgery, and thus no shortage of demand for operations. This is quite reasonable for most (if not all) types of (elective) surgeries [21]. The fact that the surgeon chooses n and not an “effort level” is another distinctive feature of our setting which is not considered in standard principal-agent problems. Here we may think of the surgeon’s choice as a rough proxy for effort. The number of surgeries n is the key decision variable, and exploring precisely how it is determined is the distinguishing feature of our analysis. As is common in practice, we assume that every surgery scheduled must be performed on the scheduled day even if this causes the total duration of all n surgeries to exceed d. An important extension of our analysis, which is not addressed here, would be to consider the possibility of cancelations after a certain cut-off time. Each surgery i has a random duration ti. We assume that the support of each duration’s distribution is contained in the positive real line R+ and each has identical finite mean µ. We also assume that the random variables ti are independent. Let T(n) = t1 + · · · + tn denote the random duration of n surgeries. It is a random real-valued function of n. The (random) overtime of n surgeries can thus be expressed as max{0, T(n) − d} and similarly max{0, d − T(n)} represents the (random) idle time. The above two assumptions – that of identical mean for each surgery and independence – are worthy of further discussion.
First, by assuming identical means we may think of the surgeon scheduling n elective surgeries of a similar type; for instance, all hernia operations. We see this practice in certain specializations of surgeons, e.g., ophthalmology. Another motivation for this assumption is to simplify the model to avoid consideration of surgery sequence. If surgeries have varying means, the question of sequence becomes paramount and the problem becomes more combinatorial in nature. Nonetheless, since we allow surgeries to have different distributions (provided they have the same mean), sequence is still an issue. For instance, a schedule of five “high” variance surgeries will have different properties from a sequence of “low” variance surgeries. Thus, we assume that the sequence is given and the surgeon simply chooses n consecutive elements of that sequence. An alternative assumption is that the ti are independent and identically distributed (iid), in which case sequence is irrelevant. The results for this case are essentially identical to those found here, and so we adopt the former assumption. In either case, the goal is to focus on the incentives which drive the choice of the number of surgeries and to avoid extraneous complexities at this point. Second, we address the assumption of independence of surgery durations. Indeed, one might argue that surgeon fatigue creates a dependence in surgery durations, and this is a relevant criticism of our model. By assuming independence we effectively assume that all variation in surgery duration depends on the specifics of each surgery case. We can derive similar results to those found here under the assumption that all surgeries have the same (random) duration t, in other words, that there is complete dependence among surgery durations. In that case, the total duration of n surgeries performed in one day by the surgeon has random value nt.
The results we derive in this setting are similar in spirit to those discussed below and yield many of the same general findings. However, our approach to the analysis is different and, we believe, of separate interest. Details are included in Section 4.6. Thus, our analysis covers the extreme cases of independence and complete dependence, and one might imagine that similar insights arise for intermediate cases. We start our analysis by assuming both the hospital and surgeon are risk neutral (we extend our analysis with a risk averse surgeon in the case where all surgeries have the same duration t in Section 4.6). The hospital is not-for-profit and closer to being a public rather than a private organization. Therefore we believe risk-neutrality for the hospital is a reasonable assumption. We begin by assuming that the surgeon is risk neutral for simplicity of our arguments and analytical tractability. One reason this assumption might make sense is that the surgeon performs surgeries on many days during a month or year, so the profit deriving from a single day is small in comparison to her overall compensation. On the other hand, since our model concerns the decisions of a surgeon for a single day, it is reasonable to consider that a surgeon might be risk averse. We assume that the hospital’s expected cost of opening an OR to a surgeon for duration d is a function of the form

C(n) = ET(n)[oH max{0, T(n) − d} + uH max{0, d − T(n)}]

where ET(n)[·] denotes the expectation operator on the random variable T(n). The cost coefficient oH is the cost per unit time of going overtime after the regular working duration d, whereas the cost coefficient uH is the per-unit-time idle cost of the OR. We can think of oH as the overtime cost rate of the hospital, e.g., staffing and equipment costs after regular working hours. On the other hand, uH may be thought of as the opportunity cost of an idle OR.
Besides facility and operating costs, this cost may include a component to reflect the utility loss of patients waiting for a surgery (especially when there are long surgical waiting lists, as in Canada). This, however, is modeled more directly when we consider welfare considerations in Section 4.5. Underage costs may also be seen to include costs related to the possibility that unused capacity might motivate budget cuts to the hospital from the provincial government. Note that our cost function does not track normal operating costs for running the operating room within the scheduled d hours, and in particular there is no direct per-unit cost incurred by the hospital per surgery. Since the hospital will incur these costs in any instance, we focus on the problem of minimizing the costs related to the over- and under-utilization of resources. This cost function is reminiscent of standard newsvendor costs, having a cost for overage and underage. The difference here is that in the standard newsvendor setting, demand is random and the newsvendor chooses capacity. Here the situation is reversed: the capacity d is fixed and the choice variable n impacts demand on that capacity (in this case, OR time). Similar models have been explored in the literature, most notably the “inverse newsvendor” model in [4]. The difference in our model is that demand is not chosen directly, but through the choice of n, which determines the number of random values that sum to the total demand. Using linearity of expectation and the identity x = max{0, x} − max{0, −x}, we can express C(n) as:

C(n) = −uHµn + (oH + uH)ET(n)[max{0, T(n) − d}] + uHd.  (4.1)

This form is more useful in the analysis that follows in Section 4.4. The expected profit of the surgeon for performing n surgeries in an operating room scheduled for duration d is assumed to be of the form

π(n) = rn − ET(n)[oS max{0, T(n) − d} + uS max{0, d − T(n)}].
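The cost function and the rewritten form in Eq (4.1) can be checked numerically for discrete durations. The following Python sketch is illustrative only: the duration distribution and the values of d, oH and uH are hypothetical choices, not data from this thesis.

```python
# Hypothetical iid integer-valued durations (value -> probability) and
# illustrative cost parameters; none of these numbers come from the thesis.
dist = {1: 0.3, 2: 0.5, 3: 0.2}
mu = sum(t * p for t, p in dist.items())
d, o_H, u_H = 5, 3.0, 1.0

def t_dist(n):
    """Exact distribution of T(n) = t1 + ... + tn via repeated convolution."""
    p = {0: 1.0}
    for _ in range(n):
        q = {}
        for a, pa in p.items():
            for b, pb in dist.items():
                q[a + b] = q.get(a + b, 0.0) + pa * pb
        p = q
    return p

def theta(n):
    """Expected overtime E[max{0, T(n) - d}]."""
    return sum(max(0, t - d) * p for t, p in t_dist(n).items())

def C_direct(n):
    """C(n) as defined: expected overtime cost plus expected idle-time cost."""
    return sum((o_H * max(0, t - d) + u_H * max(0, d - t)) * p
               for t, p in t_dist(n).items())

def C_eq41(n):
    """C(n) in the rewritten form of Eq (4.1)."""
    return -u_H * mu * n + (o_H + u_H) * theta(n) + u_H * d
```

For every n the two expressions agree, confirming the max{0, x} − max{0, −x} identity used in the derivation.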
The quantity oS is the cost per unit time of performing surgeries in excess of the scheduled surgery time d. This can represent an opportunity cost for the surgeon of working outside scheduled hours, possibly reflecting alternate sources of income or leisure time. The quantity uS is the cost per unit time worked less than the scheduled time, and can represent lost revenues from surgeries that might have been scheduled and, indirectly, a loss of goodwill amongst patients due to longer wait times. A transformation similar to the one above yields the following more amenable form of π(n):

π(n) = (r + uSµ)n − (oS + uS)ET(n)[max{0, T(n) − d}] − uSd.  (4.2)

One important feature of the expected profit function of the surgeon is that when the total duration of the surgeries is precisely d, the surgeon experiences no costs. The same is true for the hospital, making d a significant value for both the hospital and surgeon. Clearly, this is a special case of a more general setting where we might imagine that the surgeon experiences costs when the total duration differs from some other value, say l, where l ≠ d. We assume for analytical tractability that l = d, although this assumption might not hold in practice. Considering how these results extend to the case l ≠ d is one possible direction for future study. Clearly, all the cost coefficients oH, uH, oS and uS can be challenging to estimate, which is a common problem for models of this type. We assume that the hospital knows all these cost coefficients and the surgeon knows only his or hers. This is, undoubtedly, a strong assumption of this model. Finally, we assume that all cost coefficients and the fee-for-service rate r are nonnegative.

4.4  Misalignment of Incentives

We now discuss the process of determining the number of surgeries to perform in time d under the assumptions stated above.
Our goal is to understand why surgeons tend to underestimate surgery durations and hence schedule more surgeries in their allotted time than might be ideal for the hospital. We do so by demonstrating that the ideal number of surgeries for the surgeon is (under some stated conditions) larger than the number preferred by the hospital, which must take overtime costs into account. We now proceed with the analysis.

4.4.1  Deciding the Number of Surgeries

First we focus on the decision of the surgeon. Let nS be the preferred number of surgeries scheduled by the surgeon when unrestricted by the hospital. In other words, nS is chosen to optimize the profit function π. We begin by describing the properties of the surgeon’s profit function π. For our first result we need the following definitions. The first definition concerns convexity properties of π. We treat n as an integer variable, and thus use the following notion of discrete convexity:

Definition 4.4.1. Let f : Z → R. The first differences of f are denoted ∆f(n) = f(n + 1) − f(n) and second differences by ∆²f(n) = ∆f(n + 1) − ∆f(n). Then f is discretely convex if its first differences are nondecreasing or, equivalently, if its second differences are nonnegative, i.e., ∆²f(n) ≥ 0 for all n ∈ Z. We say f is discretely concave if ∆²f(n) ≤ 0 for all n ∈ Z.

The second definition concerns the distribution of surgery durations and is due to [16]:

Definition 4.4.2. A random variable X is new better than used in expectation (NBUE) if E[X] ≥ E[X − k|X ≥ k] for all k.

Proposition 4.4.3 (Discrete concavity of π). The surgeon’s profit function π is discretely concave when ti is NBUE for all i.

Having established the discrete concavity of π, we use the following necessary and sufficient condition for nS to be integer optimal: π(nS) ≥ π(nS − 1) and π(nS) ≥ π(nS + 1), i.e., ∆π(nS − 1) ≥ 0 and ∆π(nS) ≤ 0.
Necessary and sufficient conditions for the optimality of nS in terms of the cost data of the surgeon now follow. We use the following convenient notation. Recall that O(n) = max{0, T(n) − d} is the (random) overtime for scheduling n surgeries. Let θ(n) = ET(n)O(n) be the expected overtime. Then, by using Eq(4.2) we may write π(n) = (r + uSµ)n − (oS + uS)θ(n) − uSd. Thus, nS is optimal for maxn≥0 π(n) if and only if:

∆θ(nS − 1) ≤ (r + uSµ)/(oS + uS) ≤ ∆θ(nS).  (4.3)

The condition for optimality in Eq(4.3) is reminiscent of the classic newsvendor solution based on critical fractiles. We can interpret r + uSµ as the expected marginal benefit of undertaking an additional surgery. On the other hand, (oS + uS)∆θ(nS) may be interpreted as the marginal expected overtime cost of an additional surgery. Thus, Eq(4.3) says that at the optimal choice of nS the expected marginal cost and benefit of surgery nS must be comparable (indeed, if n is allowed to be continuous then they must be equal). Turning now to the hospital’s decision, let nH be the preferred number of surgeries scheduled by the hospital when it knows the surgery duration mean µ of t and can force the surgeon to perform the number of surgeries it prefers. In other words, nH is chosen to minimize the cost function C. Our goal is to provide optimality conditions for nH similar to Eq(4.3). The following result describes the convexity properties of C.

Corollary 4.4.4 (Discrete convexity of C). The hospital’s cost function C is discretely convex when ti is NBUE for all i.

Since C is discretely convex, we use the following optimality conditions to characterize nH: C(nH) ≤ C(nH − 1) and C(nH) ≤ C(nH + 1), i.e., ∆C(nH − 1) ≤ 0 and ∆C(nH) ≥ 0. Next we state the necessary and sufficient conditions for the optimality of nH in terms of the cost data of the hospital:

∆θ(nH − 1) ≤ uHµ/(oH + uH) ≤ ∆θ(nH).  (4.4)

This condition can be interpreted similarly to the one above.
Note that the value uHµ can be interpreted as the marginal expected benefit of undertaking an additional surgery (assuming the total duration with the additional surgery does not exceed d), which does not depend on r. We now turn to one of the motivating questions of this study: how do the incentives of the hospital and surgeon interact, and is there a misalignment of these incentives? We give conditions under which there truly is a misalignment of incentives between the parties, and attempt to explain why this misalignment arises. The main result describes conditions where nS ≥ nH; in other words, when each party has a different optimal number of surgeries and the surgeon prefers to perform more surgeries than the hospital. The surgeon’s objective is to maximize his/her profits, whereas the hospital’s objective can be thought of as utilizing the OR time as much as possible, i.e., minimizing the cost of idle time and overtime.

Theorem 4.4.5. The optimal number of surgeries from the hospital’s perspective nH is less than or equal to the surgeon’s preferred number of surgeries nS, i.e., nH ≤ nS, if and only if

(r + uSµ)/(oS + uS) ≥ uHµ/(oH + uH).  (4.5)

The resulting condition Eq(4.5) has a straightforward interpretation. We may think of ρS = (r + uSµ)/(oS + uS) as the ratio of the expected marginal benefit for the surgeon of performing a surgery to the per-unit-time cost of overtime. A similar interpretation holds for the hospital’s ratio ρH = uHµ/(oH + uH), where uHµ is the impact on cost when one more surgery is scheduled and oH + uH represents a per-unit-time cost of overtime. Thus, when the “marginal ratio” ρS of the surgeon exceeds that of the hospital ρH, the surgeon schedules more surgeries than the hospital prefers. Since in practice it is likely that nS ≥ nH, this indicates that the marginal ratios of the surgeon and hospital satisfy ρS ≥ ρH.
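The first-difference conditions Eq (4.3)–(4.4) and the comparison in Theorem 4.4.5 can be illustrated by a brute-force search over n. The sketch below uses hypothetical parameters (chosen so that r < oSµ, which keeps π bounded above); none of the numbers are thesis data.

```python
# Hypothetical iid durations and cost parameters; illustrative only.
dist = {1: 0.3, 2: 0.5, 3: 0.2}
mu = sum(t * p for t, p in dist.items())          # 1.9
d = 8
r, o_S, u_S = 1.5, 1.0, 2.0                       # surgeon's parameters
o_H, u_H = 3.0, 1.0                               # hospital's parameters

def t_dist(n):
    """Exact distribution of T(n) via repeated convolution."""
    p = {0: 1.0}
    for _ in range(n):
        q = {}
        for a, pa in p.items():
            for b, pb in dist.items():
                q[a + b] = q.get(a + b, 0.0) + pa * pb
        p = q
    return p

def theta(n):
    """Expected overtime E[max{0, T(n) - d}]."""
    return sum(max(0, t - d) * p for t, p in t_dist(n).items())

def profit(n):
    """Surgeon's expected profit, Eq (4.2)."""
    return (r + u_S * mu) * n - (o_S + u_S) * theta(n) - u_S * d

def cost(n):
    """Hospital's expected cost, Eq (4.1)."""
    return -u_H * mu * n + (o_H + u_H) * theta(n) + u_H * d

N = range(25)
n_S = max(N, key=profit)                 # surgeon's preferred number
n_H = min(N, key=cost)                   # hospital's preferred number
rho_S = (r + u_S * mu) / (o_S + u_S)     # surgeon's marginal ratio
rho_H = u_H * mu / (o_H + u_H)           # hospital's marginal ratio
```

With these numbers ρS ≥ ρH, and the search confirms nH ≤ nS as well as the sandwich conditions Eq (4.3) and Eq (4.4) at the respective optima.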
It is reasonable to assume that in practice hospitals are less sensitive to undertime than overtime, i.e., uH ≤ oH, and surgeons are more sensitive to undertime than overtime, i.e., uS ≥ oS. This observation makes it easier to see why ρS ≥ ρH holds and why surgeons’ preferred number of surgeries may be higher than hospitals’.

4.5  Contracts

We now turn to the question of how we might address the misalignment of incentives described in the previous section. Thus, we turn from explaining some of the observations detailed empirically in Section 4.2 towards considering ways to reduce over-use of operating room resources. This can be achieved through the alignment of incentives of the hospital and surgeon via designing mechanisms or contracts. The type of mechanism required to align incentives depends on several important factors. We discuss them briefly. One important factor in designing mechanisms is the amount of information that each of the players has. There are several types of information involved in this problem, which are essentially knowledge about the cost coefficients, surgery durations and the functional form of the utilities. As above, we assume both the hospital and surgeon have common knowledge of the mean surgery duration µ and the functional form of the utilities of each player, as well as their own cost coefficients. One thing that differentiates the later scenarios we discuss is whether the hospital has complete information about the distributions of the surgeries. In all of our models we assume that the hospital has knowledge of the surgeon’s cost coefficients. A second important factor is the degree to which the hospital can monitor the actions of the surgeon. In other words, whether the hospital can observe n, the number of surgeries booked by the surgeon, and also the overtime and idle time.
We are aware that in some hospitals the surgeon must present their schedules to an OR manager, who observes n and then schedules the support staff and equipment for the surgeon. Nonetheless, one might well imagine a scenario where hospital management is less informed as to the number of surgeries. We assume here that the hospital can always observe the number of surgeries planned by the surgeon and any idle time or overtime. This is a reasonable assumption in any well-run hospital. A third important factor is consideration of whether a third party – possibly a government or health authority – has some control over the design of the contract. Indeed, one factor missing in the discussion to this point is a very important one – the effective treatment of patients. We have mentioned the possibility that the hospital or surgeon’s concern for patients can be captured in their cost coefficients, but this is a rather indirect way to understand the impact on patient care. At the end of this section we explore welfare considerations that attempt to look explicitly at the impact on patient care and the overall efficiency of the system. In the following subsections we describe several types of contracts that arise under different assumptions on information, monitoring and power, and the role of government.

4.5.1  Hospital has Complete Information and Coercive Power

The best situation from the point of view of the hospital is when the surgeon performs nH surgeries. However, the requirements to ensure this outcome are quite strong. First of all, in order to compute nH the hospital needs full knowledge of the distribution of the ti’s. In particular, we would need to know the distribution of T(n) for each n, and this depends on the joint distribution of t1, . . . , tn. As we can see, this is a strong informational requirement for the hospital.
Nonetheless, we may assume this to be the case, since the hospital has access to historical information about the surgeries, possibly at least as much information as the surgeon has. Certainly, the hospital tracks OR usage by various physicians and likely tracks surgery types and durations (such as in the data set we used in Section 4.2). Nonetheless, one might argue that the surgeon herself/himself has private information about the specifics of each case which is independent of the historical information and is thus not available to the hospital. This represents an information asymmetry between surgeon and hospital. This issue is partially addressed below in another contracting scenario. Studying further how asymmetry of information would impact our results is an area for future research. The other factor, besides information, that may prevent the surgeon from taking the hospital’s recommendation of nH surgeries is that the surgeon may have some power in determining how many surgeries get scheduled. In general, the surgeon has a best outside alternative to performing surgeries at the hospital in question, which yields her some level of utility π0, which we can safely assume is less than π(nS) (since otherwise our surgeon should quit!). We assume that this outside alternative is common knowledge to both the surgeon and the hospital, and it may, for instance, be the option that a surgeon can work in some other hospital or possibly a private medical clinic. If π0 ≤ π(nH) then the surgeon would be willing to perform nH surgeries if granted use of the operating room, because her/his next best alternative leaves her/him worse off, and so the hospital achieves its desired number of surgeries. In the other case, i.e., π0 ≥ π(nH), the situation is more complicated. If the hospital has the power to force the surgeon to perform exactly nH surgeries, then the hospital is most happy, whereas the surgeon is less well off compared to his/her alternative.
This is only a sustainable option if the hospital has strong coercive power.

4.5.2  Take-It-or-Leave-It Offer

We now suppose that the hospital cannot coerce the surgeon into performing nH surgeries. This is observed in practice when the skills of a surgeon are in great demand. As mentioned above, “surgeons are the most mobile and least easily replaced health care professionals” [13], and we assume that the hospital has a shortage of surgeons and cannot easily replace one surgeon with another. Furthermore, suppose that π(nH) < π0 ≤ π(nS) and hence the surgeon’s most profitable activity is to perform surgeries at the given hospital, just not as few as nH surgeries. We ask the following basic question: Can the hospital offer some incentive to induce the surgeon to perform fewer surgeries than nS in a way that is cost-effective for the hospital? We answer this question in two settings. The first, described in this subsection, is where the hospital retains complete information about surgery durations but no longer has coercive power. The second setting, described in the next section, is where the hospital no longer has complete information about surgery durations and must induce the surgeon to make an appropriate choice simply through adjusting her/his compensation. In both cases we assume that the hospital can monitor the choice of n by the surgeon, knows the expected surgery duration µ, and knows the surgeon’s cost coefficients uS and oS. The first setting can be modeled as a simple bilateral externality, a standard model in the microeconomics literature (see Chapter 11.B of [17]). The overtime and idle time costs of the hospital are influenced by the decision of the surgeon, who need not consider these costs when making her/his optimal choice. As we showed in the previous section, if the surgeon is unconstrained in her/his choice, she/he opts for nS surgeries, which is usually not optimal for the hospital.
The hospital experiences a loss of C(nS) − C(nH) relative to its optimum under this scenario. Assuming, as we do here, that the surgeon has the right to schedule surgeries as she/he sees fit (i.e., the hospital has no coercive power over the surgeon’s decision), the hospital will need to offer some compensation B > 0 to induce the surgeon to adjust her/his surgeries. The surgeon will agree to performing n surgeries if and only if π(n) + B ≥ π(nS). Since we may assume that the hospital will offer the smallest bonus possible to achieve the reduction of surgeries to n in number, we have B = π(nS) − π(n) ≥ 0. Thus, the hospital’s cost under this bonus scheme is precisely:

minn≥0,B {C(n) + B : π(n) + B ≥ π(nS)} = minn≥0 {C(n) − π(n)} + π(nS).  (4.6)

Let nG ∈ arg minn≥0 {C(n) − π(n)} denote an optimal solution to the above problem. Note that this can be seen as a kind of socially optimal choice, since it maximizes the overall welfare π(n) − C(n) of the two parties (of course, it does not consider the direct impact on patients, which is discussed below). Since the bonus B = π(nS) − π(nG) compensates the surgeon sufficiently, the surgeon will always choose to perform nG surgeries and take the bonus B. To clarify, the end result of this analysis is that the hospital computes nG based on information at its disposal by solving Eq(4.6). Then the offer to the surgeon is simple: if the surgeon schedules nG surgeries, s/he receives a compensation of B dollars from the hospital. Otherwise, the surgeon is free to determine the number of surgeries s/he prefers but will not receive the bonus (however, the bonus is defined so that this latter case never occurs). One may see this as a “top-down” approach to surgery scheduling: all the computational work (computing nG and B) is undertaken by the hospital, and the surgeon’s choice is straightforward.
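A small numerical sketch of the take-it-or-leave-it offer follows; every parameter value (the duration distribution, d, r and the cost rates) is a hypothetical illustration rather than thesis data.

```python
# Hypothetical parameters; illustrative only, not thesis data.
dist = {1: 0.3, 2: 0.5, 3: 0.2}
mu = sum(t * p for t, p in dist.items())
d = 8
r, o_S, u_S = 1.5, 1.0, 2.0
o_H, u_H = 3.0, 1.0

def t_dist(n):
    """Exact distribution of T(n) via repeated convolution."""
    p = {0: 1.0}
    for _ in range(n):
        q = {}
        for a, pa in p.items():
            for b, pb in dist.items():
                q[a + b] = q.get(a + b, 0.0) + pa * pb
        p = q
    return p

def theta(n):
    """Expected overtime E[max{0, T(n) - d}]."""
    return sum(max(0, t - d) * p for t, p in t_dist(n).items())

def profit(n):
    """Surgeon's expected profit, Eq (4.2)."""
    return (r + u_S * mu) * n - (o_S + u_S) * theta(n) - u_S * d

def cost(n):
    """Hospital's expected cost, Eq (4.1)."""
    return -u_H * mu * n + (o_H + u_H) * theta(n) + u_H * d

N = range(25)
n_S = max(N, key=profit)                          # surgeon's unconstrained choice
n_H = min(N, key=cost)                            # hospital's ideal
n_G = min(N, key=lambda n: cost(n) - profit(n))   # minimizer in Eq (4.6)
B = profit(n_S) - profit(n_G)                     # smallest acceptable bonus
```

By construction B ≥ 0 and the hospital's total outlay C(nG) + B never exceeds C(nS), so the offer is (weakly) beneficial to both parties.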
4.5.3  Three-Part Contract

The contract in the previous subsection was predicated on the assumption that the hospital has complete information about the duration of surgeries. In addition, the hospital needs to undertake the computation of the preferred number of surgeries and bonus, with the possibility (due to inaccurate calculation) that the take-it-or-leave-it offer may still be rejected. As mentioned previously, it is probable that surgeons have some private information about the duration of surgeries due to their personal knowledge of the patients and their histories. Thus, it may be preferable to consider a “decentralized” contract which attempts to align the incentives of the surgeon with those of the hospital. In other words, the hospital may design a compensation scheme whereby the surgeon’s decisions themselves weigh the importance of the hospital’s cost structure. This can be achieved by the following three-part contract, which is specified up to some policy parameter we denote as α > 0. The three parts to the contract are as follows:

1. a fixed sum Bα which passes from hospital to surgeon,

2. a surgery unit cost γα which is charged by the hospital for each surgery booked by the surgeon,

3. a per-unit-time overtime penalty ωα which is charged by the hospital for each unit of overtime incurred by the surgeon.

Ranges of values for these parameters which achieve an alignment of incentives are discussed below. Note that to administer this contract the hospital needs to be able to monitor the actions of the surgeon. Indeed, to charge the surgery unit cost, the number of surgeries needs to be observed, and overtime fees can only be calculated if the hospital carefully monitors overtime. This latter monitoring should, of course, already be a practice of the hospital, since it needs to compensate nurses for overtime and thus has an interest in monitoring this quantity. One interpretation of the contract is as follows.
The surgeon is given some budget Bα; she/he can use this budget to rent time for a surgery at rate γα and is penalized for overuse of resources at a rate of ωα per unit time. Thus, we see that the cost implications of a surgery for the hospital are in some sense passed to the surgeon, who is in turn compensated at the budget level Bα to defray these cost considerations and still remain interested in performing surgeries. The payoff of the surgeon under this contract will be

πα(n) = Bα − γαn − ωαET(n)[max{0, T(n) − d}] + π(n).

We now specify the three elements of the contract, which are related to the choice of the parameter α, with which the hospital can ensure that the surgeon performs nH surgeries. Of course, the surgeon will need to be compensated in order to participate. The participation constraint of the surgeon is given by πα(n) ≥ π0, where n is the optimal number of surgeries for the surgeon under a contract with parameter α. The values of the three contract parameters which can align incentives are as follows: γα = r + (uS − αuH)µ, ωα = α(oH + uH) − (oS + uS), and any fixed sum Bα which lies in the range Bα ≥ π0 + (uS − αuH)d + αC(nH). These values are chosen so that the surgeon’s profit has the form

πα(n) = Bα − (uS − αuH)d − αC(n),

which is simply a linear transformation of the hospital’s objective C(n). It is then straightforward to see that a surgeon facing profit function πα(n) will choose n = nH. This is precisely the number of surgeries which minimizes the hospital’s costs, and the hospital has achieved its goal. The surgeon will participate in this three-part contract for any value α > 0 since the surgeon’s profit will be

πα(nH) = Bα − (uS − αuH)d − αC(nH) ≥ π0 + (uS − αuH)d + αC(nH) − (uS − αuH)d − αC(nH) = π0.

The result derives from the bound which we put on the fixed sum: Bα ≥ π0 + (uS − αuH)d + αC(nH). A few comments are in order about this contract.
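As a sanity check before those comments, the alignment claimed above can be verified numerically. All parameters below, including the policy weight α and the outside option π0, are hypothetical (α is chosen large enough that ωα ≥ 0); none are thesis data.

```python
# Hypothetical parameters; illustrative only, not thesis data.
dist = {1: 0.3, 2: 0.5, 3: 0.2}
mu = sum(t * p for t, p in dist.items())
d = 8
r, o_S, u_S = 1.5, 1.0, 2.0
o_H, u_H = 3.0, 1.0
alpha, pi_0 = 1.0, 1.0               # assumed policy weight and outside option

def t_dist(n):
    """Exact distribution of T(n) via repeated convolution."""
    p = {0: 1.0}
    for _ in range(n):
        q = {}
        for a, pa in p.items():
            for b, pb in dist.items():
                q[a + b] = q.get(a + b, 0.0) + pa * pb
        p = q
    return p

def theta(n):
    """Expected overtime E[max{0, T(n) - d}]."""
    return sum(max(0, t - d) * p for t, p in t_dist(n).items())

def profit(n):
    """Surgeon's expected profit, Eq (4.2)."""
    return (r + u_S * mu) * n - (o_S + u_S) * theta(n) - u_S * d

def cost(n):
    """Hospital's expected cost, Eq (4.1)."""
    return -u_H * mu * n + (o_H + u_H) * theta(n) + u_H * d

n_H = min(range(25), key=cost)        # hospital's preferred number of surgeries

gamma_a = r + (u_S - alpha * u_H) * mu                    # per-surgery fee
omega_a = alpha * (o_H + u_H) - (o_S + u_S)               # overtime penalty rate
B_a = pi_0 + (u_S - alpha * u_H) * d + alpha * cost(n_H)  # smallest fixed sum

def profit_contract(n):
    """Surgeon's payoff under the three-part contract."""
    return B_a - gamma_a * n - omega_a * theta(n) + profit(n)
```

Algebraically profit_contract(n) equals Bα − (uS − αuH)d − αC(n) for every n, so the surgeon's maximizer coincides with nH and the surgeon earns exactly π0 there.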
First, note that the variable fees γα and ωα can be determined without knowing the full surgery time distributions. Thus under this contract, the hospital can ensure nH surgeries are performed without full information about those surgeries. In contrast to the take-it-or-leave-it offer described in the previous subsection, the computational burden now rests with the surgeon and not the hospital. We can thus see this as a “bottom-up” approach to surgery scheduling – the hospital passes the necessary information to the surgeon via the three components of the contract, and the surgeon is left to decide. One further consideration is necessary here, which is whether the hospital benefits from offering these contracts. Note that the total cost to the hospital is now:

Cα(nH) = Bα + C(nH) − γαnH − ωαET(nH)[max{0, T(nH) − d}].

We can rewrite Cα(nH) by plugging in the values of γα and ωα as

Cα(nH) = Bα + (1 − α)C(nH) − π(nH) + (αuH − uS)d.

The hospital benefits from the new scheme if Cα(nH) ≤ C(nS), where we assume nS surgeries are performed if no intervention is made. This implies an upper bound on the bonus, i.e.,

Bα ≤ C(nS) − (1 − α)C(nH) + π(nH) − (αuH − uS)d.

The possible values of Bα, i.e.,

π0 + (uS − αuH)d + αC(nH) ≤ Bα ≤ C(nS) − (1 − α)C(nH) + π(nH) − (αuH − uS)d,

yield a range of fixed-fee compensations that yield feasible contracts. We note that the bonus could be determined through bargaining by the two parties, and its value would fall somewhere between these bounds.

4.5.4  Implementing the Two Contracts

We now discuss briefly some of the challenges that might be faced when implementing either of these two contracts. First of all, there is an important challenge in specifying the parameters of the model. Specifically, information on the cost parameters of both the surgeon and hospital may not be explicitly known, and particularly not to both parties (which is assumed under both contracts).
The task of determining oH and uH, because it deals with the overall costs of the hospital, may present challenges. Indeed, there may be a lack of consensus on the values of these parameters amongst the various decision-makers in the hospital. Overtime costs, nonetheless, seem more accessible than idle time costs. Overtime costs may be approximated by the direct costs of overtime wages. Idle time costs are more indirect, and take into account lost value from idle resources. To implement either contract, we foresee that important discussions would need to be held to establish consensus on the values of these parameters. A second issue, which is mostly unrelated to specifying the parameters of the model, is that of the political feasibility of adopting these contracts. One attractive feature of the take-it-or-leave-it offer is that the contract is relatively simple to understand and has the appearance of a “win-win” situation. The surgeon receives a bonus for performing nG surgeries and there are no fees involved. On the other hand, since most of the computation in this setting rests with the hospital, surgeons may feel disempowered in having the hospital in some sense decide the preferred surgery level. The potential disutility that may arise due to a sense of disempowerment is not covered by our model, but may be a consideration in practice. On the other hand, in the three-part contract surgeons retain their decision-making role. However, the downside here is that the surgeon now experiences fees and penalties for overtime, and so it seems less clearly a “win-win” situation than the alternate contract. The three-part contract proposes to treat surgeons much like independent entrepreneurs who must rent, and pay for overuse of, facilities and resources, which seems a less advantageous setup than the current situation where surgeons retain “privileges” in the operating room.
Thus, there may be some disutility deriving from this perceived loss of privilege that is not considered in our model, but again it may be significant in practice.

4.5.5  Welfare Considerations

One element missing from the above analysis is a concern for the welfare of patients. When attention is restricted to the incentives of hospitals and surgeons, it is possible that patients are adversely affected. In the Canadian healthcare system another agent, the provincial government, is responsible for the health care system as a whole and is interested in balancing the interests of patients, hospitals and physicians. We assume that the provincial government is a social welfare maximizer with the following utility function:

W(n) = π(n) − C(n) + δn

where δ is a measure of the per unit "social value" of a performed surgery. This is consistent with the earlier assumptions in our model that the surgeries are elective surgeries of a similar type, hence having identical mean µ and remuneration r. It is true that some surgeries may have more social value than others; for instance, saving the life of a child may carry more social value than operating on someone with many other health complications whose quality of life after the surgery would only be marginally improved. This is of course an ethical discussion and a value judgement. We avoid such discussions, and take the view that the provincial government is not privy to the details of each individual case and thus takes δ as its valuation of each surgery. Letting uW = uH + uS and oW = oH + oS, we can rewrite W(n) as:

W(n) = (r + δ + uW µ)n − (oW + uW) ET(n)[max{0, T(n) − d}] − uW d.

As above, W is discretely concave, and an optimal number of surgeries from a social welfare perspective, nW, satisfies:

∆θ(nW − 1) ≤ (r + δ + uW µ) / (oW + uW) ≤ ∆θ(nW).   (4.7)

Note that nW can be bigger than, smaller than, or equal to nH and nS depending on the cost parameters.
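To make condition Eq(4.7) concrete, the sketch below estimates θ(n) = E[max{0, T(n) − d}] by Monte Carlo and locates an optimizer through the discrete marginal condition ∆θ(n − 1) ≤ ratio ≤ ∆θ(n). The choice of i.i.d. exponential durations and all parameter values are hypothetical, chosen purely for illustration.

```python
import random

random.seed(0)

# Hypothetical parameters (illustration only, not taken from the model's data)
mu, d = 1.0, 8.0                      # mean surgery duration and day length
r, delta = 0.6, 0.3                   # remuneration and social value per surgery
oH, uH, oS, uS = 2.0, 1.0, 1.0, 0.5   # cost coefficients
oW, uW = oH + oS, uH + uS

SAMPLES, NMAX = 20000, 30
# Monte Carlo estimate of theta(n) = E[max{0, T(n) - d}], where T(n) is the
# sum of n i.i.d. exponential durations of mean mu.
theta = [0.0] * (NMAX + 1)
for _ in range(SAMPLES):
    total = 0.0
    for n in range(1, NMAX + 1):
        total += random.expovariate(1.0 / mu)
        theta[n] += max(0.0, total - d) / SAMPLES

def optimizer(ratio):
    # Largest n whose marginal term satisfies Delta theta(n - 1) <= ratio.
    n = 1
    while n < NMAX and theta[n] - theta[n - 1] <= ratio:
        n += 1
    return n - 1

nS = optimizer((r + uS * mu) / (oS + uS))          # surgeon's optimum
nW = optimizer((r + delta + uW * mu) / (oW + uW))  # social optimum, Eq(4.7)
print(nW, nS)
```

Because the welfare ratio is smaller than the surgeon's ratio for these particular (assumed) coefficients, this run illustrates the case nW ≤ nS.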
For instance, if the central planner places a high value on surgeries (due, for instance, to political pressures), δ may be large enough that nW is in fact greater than nS. It is straightforward to find bounds on δ that guarantee this to be the case. The case with the most intuitive appeal is nH ≤ nW ≤ nS, which indicates that surgeons do more surgeries than is socially optimal, while hospitals would prefer to perform fewer surgeries than the socially optimal value. To align the surgeon's incentives and induce his/her cooperation, the central planner could again offer a three-part contract similar to the one above. The major difference is that in this case the contract needs to ensure the participation of both the surgeon and the hospital. Also, different contracts arise depending on whether revenues from overuse of the facilities, or per-unit surgery charges, accrue to the hospital or directly to the central planner. Precise details are omitted, but the analysis follows very similar reasoning to that in Section 4.5. A social planner, the provincial government in our case, can use Eq(4.7) with its estimate of δ to judge whether and how it would like to intervene, via designing a contract, to manage the relationship between hospitals and surgeons.

4.6  Dependent Surgeries with Identical Realizations

In this section we return to one of the important assumptions in our model, that of independent surgery durations. Here we assume instead that surgery durations are fully dependent and given by the outcome of a single random variable. Although this setting is also restrictive, it can be seen as the opposite extreme of the independent case. Of interest is the fact that we obtain similar results, so that under both models the conclusions and insights are similar. This suggests that the findings of our analysis could apply to intermediate cases of surgery duration dependence. Another reason for considering this case is that in this framework we are able to say something about risk aversion.
Using the previous model we were unable to establish results in the case of risk aversion, and this can be remedied here. In this section, we assume that each surgery has the same random duration t, a random variable with probability density function (pdf) f and cumulative distribution function (cdf) F. We assume that the support of the pdf is contained in the positive real line R+. Thus, the total duration of the n surgeries performed in one day by the surgeon is the random value nt. Furthermore, we assume that n is a continuous decision variable and can take any value in R+. This is an abstraction from reality, where only an integer number of surgeries ought to be considered. However, this is not a restrictive assumption here: since we have a single-dimensional decision variable n, one can always take ⌊n⌋ (round down) or ⌈n⌉ (round up) after the analysis and choose the better of the two.

4.6.1  Preliminaries and Misalignment of Incentives

The cost function of the hospital and the profit function of the surgeon are defined as before; the only difference is that the total duration is now nt, and Et[·] denotes the expectation operator with respect to the random variable t. As before, µ is the mean of t. We assume that the hospital and surgeon have the following objectives – the cost of the hospital

C(n) = Et[oH max{0, nt − d} + uH max{0, d − nt}]   (4.8)

and the profit of the surgeon

π(n) = rn − Et[oS max{0, nt − d} + uS max{0, d − nt}].   (4.9)

In this section we assume that the hospital knows f and all the cost coefficients, whereas the surgeon knows f and only his/her own cost coefficients. As before, we characterize nH and nS. To state the results we first define the function

ϕ(x) = ∫_{0}^{x} t f(t) dt = xF(x) − G(x)   (4.10)

where G(x) = ∫_{0}^{x} F(t) dt.

Proposition 4.6.1. (Convexity of C, characterization of nH) 1. The hospital's cost function C is (strictly) convex when oH + uH > 0. 2.
The optimal solution, nH, to the optimization problem min{C(n) : n ≥ 0} is the unique solution in n of the following equation:

ϕ(d/n) = oH µ / (oH + uH)   (4.11)

Proposition 4.6.2. (Concavity of π, characterization of nS) 1. The surgeon's profit function π is strictly concave when oS + uS > 0. 2. The optimal solution, nS, to the optimization problem max{π(n) : n ≥ 0} is the unique solution in n of the following equation:

ϕ(d/n) = (oS µ − r) / (oS + uS)   (4.12)

We next obtain a result similar to Theorem 4.4.5.

Theorem 4.6.3. The optimal number of surgeries from the hospital's perspective, nH, is less than or equal to the surgeon's preferred number of surgeries nS, i.e., nH ≤ nS, if and only if

oH / (oH + uH) ≥ (oS − r/µ) / (oS + uS).   (4.13)

A remark is in order here. The conditions of Theorems 4.4.5 and 4.6.3 are the same. To see this, reorganize the terms in Eq(4.13) and rewrite them as given in Eq(4.5):

oH / (oH + uH) ≥ (oS − r/µ) / (oS + uS)   (by equation 4.13)
oH µ / (oH + uH) ≥ (oS µ − r) / (oS + uS)   (multiply both sides by µ)
µ + (r − oS µ) / (oS + uS) ≥ µ − oH µ / (oH + uH)   (add µ to both sides and switch the terms)
(r + uS µ) / (oS + uS) ≥ uH µ / (oH + uH)   (condition given in Theorem 4.4.5)

We may also rewrite inequality Eq(4.13) by dividing the numerator and denominator of the left-hand side by oH and those of the right-hand side by oS, i.e.,

1 / (1 + uH/oH) ≥ (1 − r/(µ oS)) / (1 + uS/oS).

Note that µ oS is the average cost to the surgeon of doing a surgery after time d, and r is the revenue from a surgery. The ratio r/(µ oS) is clearly strictly positive. Furthermore, this ratio should be less than one, because otherwise, i.e., if r/(µ oS) > 1, the surgeon would never stop doing surgeries. On the other hand, we argue that the ratios uH/oH and uS/oS may be comparable even though the individual cost coefficients may not be, i.e., uH vs. uS and oH vs. oS. If they are comparable, then as r/(µ oS) approaches 1 it becomes more attractive for the surgeon to work overtime, suggesting nH ≤ nS.
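For a concrete special case, when t is exponential with mean µ (a hypothetical choice), G(x) = x − µ(1 − e^{−x/µ}) and therefore ϕ(x) = (x + µ)(1 − e^{−x/µ}) − x, which increases from 0 to µ; hence Eq(4.11) and Eq(4.12) each have a unique root whenever their right-hand sides lie in (0, µ). The sketch below solves both equations by bisection and checks the conclusion of Theorem 4.6.3. All parameter values are assumed for illustration.

```python
import math

# Solve Eq.(4.11) and Eq.(4.12) for n_H and n_S when t ~ Exponential(mean mu),
# for which phi(x) = x*F(x) - G(x) = (x + mu)*(1 - exp(-x/mu)) - x.
mu, d = 1.0, 8.0                       # hypothetical mean duration, day length
r = 0.6                                # hypothetical per-surgery remuneration
oH, uH, oS, uS = 2.0, 1.0, 1.0, 0.5    # hypothetical cost coefficients

def phi(x):
    return (x + mu) * (1.0 - math.exp(-x / mu)) - x

def solve(target):
    # phi(d/n) is decreasing in n; bisect for the unique n with
    # phi(d/n) = target (assumes 0 < target < mu, so a root exists).
    lo, hi = 1e-9, 1e9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(d / mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

nH = solve(oH * mu / (oH + uH))
nS = solve((oS * mu - r) / (oS + uS))
cond = oH / (oH + uH) >= (oS - r / mu) / (oS + uS)   # Eq.(4.13)
print(round(nH, 2), round(nS, 2), cond)
```

With these assumed coefficients condition Eq(4.13) holds, and the computed roots indeed satisfy nH ≤ nS, consistent with Theorem 4.6.3.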
4.6.2  Risk-Averse Surgeon

We now revisit our assumption that the surgeon is risk neutral and extend some of our findings to the risk-averse setting. When the surgeon is risk neutral, his/her expected profit from undertaking n surgeries is:

π(n) = rn − Et[oS max{0, nt − d} + uS max{0, d − nt}].

When we consider risk aversion, the expected profit function will involve a (strictly) concave increasing utility function v, a one-variable real-valued function mapping a dollar amount to utility. The expected utility of undertaking n surgeries is assumed to be of the form Et[v(π(n, t))] where

π(n, t) = rn − uS(d − nt)  if nt ≤ d,
π(n, t) = rn − oS(nt − d)  if nt > d

is the dollar amount of profit for n surgeries with time realization t. A remark on how the utility function is constructed is in order. The utility of undertaking n surgeries when the time is realized as t is v(π(n, t)). In other words, utility is a function of the dollar amount of profit π(n, t). Note, however, that in defining π(n, t) we already made a conversion of the opportunity cost of time into dollars when we defined the coefficients uS and oS. The fact that the surgeon now has a general increasing concave utility function v does not change this determination of the coefficients uS and oS. That is, we maintain, even under risk aversion, that each time unit is worth uS dollars before time d and oS dollars after. In other words, the dollar value of time is a piecewise linear function under both risk neutrality and risk aversion. The fundamental question we consider in this setting is how a risk-averse surgeon would choose his/her optimal number of surgeries, and whether this value would, for instance, be less than that of a risk-neutral surgeon with the same profit function π(n, t). To make things precise, we let nR denote the optimal number of surgeries a risk-averse surgeon would plan for a time period d.
That is,

nR ∈ arg max_n Et[v(π(n, t))]   (4.14)

and our question of interest is whether nR ≤ nS, where nS is as defined in Proposition 4.6.2. Finding nR directly might be quite challenging depending on the structure of the utility function v, so this bound in itself, if true, can be quite illuminating. Another reason for our interest in the question "is nR ≤ nS?" is its possible implication for the compensation B. If nR is in fact smaller than nS, then this may indicate that a risk-averse surgeon needs a smaller compensation B than a risk-neutral surgeon with identical costs in order to align his/her incentives with the hospital. The intuition is simple: the fewer surgeries a surgeon schedules of his/her own volition, the less it would take to compensate him/her to perform nH surgeries. Despite this intuition we were unable to prove the result analytically, and plan to take it up in future research. We feel, nonetheless, that the question of whether nR ≤ nS has independent interest, and thus pursue it further here. We will establish some sufficient conditions under which nR ≤ nS. First we give an intuitive discussion of why this might indeed be the case; however, this intuitive line of reasoning yields only motivation and not concrete proof. Next we establish the result when the time distribution takes on only two values – tL and tH – where tL is the time for a "routine" surgery and tH for a surgery with "complications". This simplification of the surgery duration distribution yields attractive conditions under which nR ≤ nS holds. Finally, we present a sufficient condition on the cost coefficients for nR ≤ nS in the general case, i.e., for a general surgery duration distribution.
4.6.3  Intuitive Discussion

The basis of our intuition for nR ≤ nS comes from the following oversimplified notion of risk aversion: given the same expected monetary return from two lotteries, a risk-averse decision maker would favor the lottery with the smaller variance. In particular, if one lottery has a smaller expected return and a greater variance than another, then intuitively the latter is preferred by both a risk-averse and a risk-neutral decision maker. Of course, in general, the validity of this statement depends on more than just the first two moments of the lottery distribution (and often on first or second order stochastic dominance) and possibly on the degree of concavity of the utility function [17], but the intuition is nonetheless clear. As a brief aside, we point out the well-known result that if the utility function exhibits constant absolute risk aversion (CARA), i.e., is a (negative) exponential utility function, and the payoff of a lottery is normally distributed, then maximizing the expected utility is the same as maximizing a term involving the mean and the variance (and some other given/known parameters); i.e., it is determined entirely by the first two moments of the payoff distribution (so-called mean-variance utility). This result, however, does not apply to our setting. Indeed, even if we assume the utility function v has CARA, it is clear that the payoffs are not normal even if we assume normality or lognormality of the surgery duration t. Turning to the problem at hand, the first fact to note is that, by definition, nS maximizes the expected profit function π(n). In particular, this means that

π(nR) ≤ π(nS)   (4.15)

for any choice of nR. The next result establishes that if nR > nS then the variance of payoffs under nR surgeries is greater than under nS surgeries.

Lemma 4.6.4. The variance of π(n, t), Var π(n, t), is proportional to n².
From this our intuition strongly suggests that nR ≤ nS, since by performing more than nS surgeries one's expected profit would be lower and the variance of payoffs higher (i.e., there is more risk), making this option less attractive to a risk-averse surgeon.

4.6.4  Discrete Time Distribution with Two Values

In order to verify analytically the intuitive discussion in the previous subsection, we begin by assuming a simple setting where the time distribution has two outcomes, tL and tH, with probabilities pL and pH respectively. We will see how our intuition leads us to define sufficient conditions for nR ≤ nS. By using only the two values tL and tH for a surgery duration, we are making a rough approximation to the real distribution. We think of the low time tL as the duration of a routine/easy surgery in which there are no complications; tH, on the other hand, corresponds to a high duration associated with the occurrence of complications during surgery. We now present some simple conditions on the cost coefficients for when nR ≤ nS. The argument essentially follows the intuition presented in the previous subsection. We begin by assuming that nR > nS and derive conditions under which this cannot occur. The first step is to show that there exists a number of surgeries m, with m ≤ nS, having the same expected profit as nR.

[Figure 4.7: Existence of m – a plot of π(n) showing m ≤ nS < nR with π(m) = π(nR), and π(0) = −uS d.]

Lemma 4.6.5. Let nR > nS. Then there exists an m with 0 ≤ m ≤ nS such that π(m) = π(nR).

Figure 4.7 provides an illustration of this result. The next step is to demonstrate that the expected utility of performing m surgeries is greater than that of performing nR surgeries, thus contradicting the definition of nR (see Eq(4.14)). The following result shows that when certain conditions on the profits at tL and tH hold, we can indeed establish the contradiction.
These conditions imply that the outcomes associated with nR surgeries are "more spread out" than those associated with m surgeries.

Proposition 4.6.6. The expected utility from undertaking m surgeries is greater than the expected utility of undertaking nR surgeries when the following condition holds:

π(nR, tH) ≤ π(m, tL) ≤ π(m, tH) ≤ π(nR, tL)   (4.16)

The idea of the proof is illustrated in Figure 4.8. The choice of either nR or m determines one of two lotteries for the surgeon. The first lottery, associated with the choice nR, is as follows: earn profit π(nR, tH) with probability pH and earn profit π(nR, tL) with probability pL. The expected profit of the first lottery is π(nR) = pH π(nR, tH) + pL π(nR, tL). Similarly, the second lottery has outcome π(m, tL) with probability pL and outcome π(m, tH) with probability pH. Both lotteries have the same expected value, π(nR) = π(m). Then, it can be seen graphically that Et[v(π(nR, t))] < Et[v(π(m, t))]. Details of the proof can be found in Section 4.8.

[Figure 4.8: Illustration of nR ≤ nS with two t values – the concave utility v plotted against profit, with outcomes ordered π(nR, tH) ≤ π(m, tL) ≤ π(m, tH) ≤ π(nR, tL), the common mean π(m) = π(nR), and E[v(π(m, t))] lying above E[v(π(nR, t))].]

A few comments on the previous result are in order. First, we give a brief interpretation of inequality Eq(4.16). When the surgery duration is short (i.e., tL is realized) it is intuitive that we would like to do more surgeries, thus motivating the condition π(m, tL) ≤ π(nR, tL). When surgeries are long (i.e., tH) the opposite holds: the surgeon would favor fewer surgeries, thus motivating the condition π(nR, tH) ≤ π(m, tH). The remaining conditions implied by inequality Eq(4.16) are less straightforward to motivate. Figure 4.9 gives an illustration of time values tL and tH which satisfy these conditions. The two functions shown in the figure are the profit functions π(m, t) and π(nR, t).
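The mean-preserving-spread argument behind Proposition 4.6.6 is easy to check numerically. The sketch below uses hypothetical profit levels satisfying condition Eq(4.16) with a common mean, and the concave increasing utility v(x) = √x; all numerical choices are for illustration only.

```python
import math

# Two lotteries with the same mean: the (nR)-lottery pays pi(nR,tL) w.p. pL
# and pi(nR,tH) w.p. pH; the (m)-lottery pays pi(m,tL) w.p. pL and
# pi(m,tH) w.p. pH.  Hypothetical profits satisfying condition (4.16):
# pi(nR,tH) <= pi(m,tL) <= pi(m,tH) <= pi(nR,tL).
pL, pH = 0.7, 0.3
pi_nR_tL, pi_nR_tH = 10.0, 0.0
pi_m_tL, pi_m_tH = 6.4, 8.4

v = math.sqrt  # an illustrative concave increasing utility

mean_nR = pL * pi_nR_tL + pH * pi_nR_tH
mean_m = pL * pi_m_tL + pH * pi_m_tH          # equal means by construction
Ev_nR = pL * v(pi_nR_tL) + pH * v(pi_nR_tH)   # expected utility, nR lottery
Ev_m = pL * v(pi_m_tL) + pH * v(pi_m_tH)      # expected utility, m lottery
print(round(mean_nR, 6), round(mean_m, 6), Ev_m > Ev_nR)
```

The narrower (m) lottery gives the higher expected utility, exactly the comparison Figure 4.8 depicts graphically.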
Since nR > m by assumption, it follows that π(nR, t) peaks (at t = d/nR) to the left of the peak of π(m, t) (at t = d/m). From this figure we can also see why conditions similar to inequality Eq(4.16) would not yield our desired result. Indeed, a picture similar to Figure 4.8 could be drawn under the conditions

π(nR, tL) ≤ π(m, tL) ≤ π(m, tH) ≤ π(nR, tH)   (4.17)

and it might appear that a similar result would hold. However, it can be readily seen from Figure 4.9 that condition Eq(4.17) does not hold under the natural restriction that tL ≤ tH.

[Figure 4.9: Illustration of condition Eq(4.16) – the profit functions π(nR, t) and π(m, t) plotted against t, peaking at t = d/nR and t = d/m respectively, with tL to the left of d/nR and tH to the right of d/m.]

4.6.5  Sufficient Conditions on Cost Coefficients

In the previous subsection we made assumptions on the time distribution in order to find conditions under which nR ≤ nS. Now we relax these restrictions and allow a general continuous distribution of time, restricting only the relative sizes of the cost coefficients. We derive the following result.

Proposition 4.6.7. If r nS > d(oS + uS) then nR ≤ nS. Thus, in particular, if r > d(oS + uS) then nR ≤ nS.

This result states that if the total revenue from performing nS surgeries is large enough (i.e., greater than d(oS + uS)) then nR ≤ nS.

4.7  Conclusion and Future Directions

We have presented a model and analysis for the problem of determining the number of surgeries to schedule in an OR block of fixed length, taking into consideration the competing incentives of the hospital and the surgeon. By proposing contracts that induce the surgeon to schedule a number of surgeries more aligned with the goals of the hospital, the hope is that this alignment of incentives leads to a reduction in costs (especially overtime costs) and an overall improvement in the working environment. Savings garnered in this scheme could be used to open up other ORs or intensive-care beds and further improve OR throughput.
Depending on how much power the hospital has over surgeons and how much information is available to it, we proposed in Section 4.5 several implementable contracts that the hospital might consider. In that section we also discussed the problem from the perspective of a social planner, e.g., a provincial government. Our analysis is based on some important assumptions. We argue that many of them are quite reasonable and are made for tractability, to remove complexity from the analysis and demonstrate as simply as possible the incentives involved. Two of the stronger assumptions in our basic setting are the risk neutrality of the surgeon and the fact that all surgeries are identically and independently distributed. In Section 4.6 we provide a framework that relaxes these two assumptions but introduces another set of restrictive conditions. Although both models are restrictive, the fact that they are different in character yet yield similar insights testifies to the robustness of our approach. We now point out some other possible extensions of our model that may be promising directions for future research. Firstly, we have modeled the interaction between the hospital and surgeon as a single-period game. However, the hospital and surgeon have a long-term working relationship, and it may add more insight to explore this setting in a repeated game structure. One direction is that the bonus structure may be used as a "carrot" by the hospital to induce cooperation from the surgeon at each stage of a repeated game. Secondly, there is scope to examine more closely how the size of the bonus would change with the degree of risk aversion; our results on the magnitude of nR with respect to nS could be a foundation for this study. Thirdly, one could study the asymmetric information case in more detail, in which both the surgeon's and the hospital's cost coefficients are private and not known by the other party.

4.8  Proofs

Proof of Proposition 4.4.3.
It suffices to show that θ(n) is discrete convex. Note that π(n) = (r + uS µ)n − (oS + uS)θ(n) − uS d is discrete concave when each of the first two terms is discrete concave. The linear term (r + uS µ)n is both discrete convex and discrete concave, and since oS + uS ≥ 0 by assumption, the second term −(oS + uS)θ(n) is discrete concave precisely when θ(n) is discrete convex. Thus, it suffices to show that θ(n) is discrete convex; that is, ∆²θ(n) ≥ 0. To establish this we consider the position of the normal day duration d relative to the durations T(n), T(n + 1), etc. There are four important ranges for d: 0 ≤ d < T(n); T(n) ≤ d < T(n + 1); T(n + 1) ≤ d < T(n + 2); and d ≥ T(n + 2). We compute the expectation in the definition of θ(n) over these ranges. The following table contains the necessary information:

Range for d:   [0, T(n))            [T(n), T(n+1))            [T(n+1), T(n+2))   [T(n+2), ∞)
∆O(n):         t_{n+1}              T(n+1) − d                0                  0
∆O(n+1):       t_{n+2}              t_{n+2}                   T(n+2) − d         0
∆²O(n):        t_{n+2} − t_{n+1}    t_{n+2} − (T(n+1) − d)    T(n+2) − d         0

Thus, using conditional expectation, we write

∆²θ(n) = E[∆²O(n) | d < T(n)] Pr(d < T(n))
  + E[∆²O(n) | T(n) ≤ d < T(n+1)] Pr(T(n) ≤ d < T(n+1))
  + E[∆²O(n) | T(n+1) ≤ d < T(n+2)] Pr(T(n+1) ≤ d < T(n+2))
  + E[∆²O(n) | d ≥ T(n+2)] Pr(d ≥ T(n+2))
≥ E[t_{n+2} − t_{n+1} | d < T(n)] Pr(d < T(n))   (a)
  + E[t_{n+2} − (T(n+1) − d) | T(n) ≤ d < T(n+1)] Pr(T(n) ≤ d < T(n+1))   (b)

where the inequality drops the last two (nonnegative) terms. The first term (a) simplifies (using linearity of expectations and independence) as:

(a) = (E[t_{n+2} | d < T(n)] − E[t_{n+1} | d < T(n)]) Pr(d < T(n))
    = (E[t_{n+2}] − E[t_{n+1}]) Pr(d < T(n))
    = (µ − µ) Pr(d < T(n)) = 0.

The second term (b) can be written as:

(b) = (E[t_{n+2}] − E[t_{n+1} − (d − T(n)) | T(n) ≤ d, t_{n+1} > d − T(n)]) Pr(T(n) ≤ d < T(n+1))
    = (µ − E[t_{n+1} − k | t_{n+1} ≥ k, k ≥ 0]) Pr(T(n) ≤ d < T(n+1)) ≥ 0

where k = d − T(n) ≥ 0. The last inequality holds since, by the NBUE property, E[ti − k | ti ≥ k] ≤ E[ti]. Thus, ∆²θ(n) ≥ 0.
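A quick exact check of this discrete convexity is possible for i.i.d. exponential durations (an NBUE distribution), since then T(n) is Erlang and θ(n) = E[max{0, T(n) − d}] has a closed form. The parameter values below are hypothetical.

```python
import math

# Exact check that theta(n) = E[max{0, T(n) - d}] is discretely convex when
# durations are i.i.d. exponential with rate lam, so T(n) ~ Erlang(n, lam).
# Uses E[(T(n) - d)^+] = (n/lam)*S(n+1) - d*S(n), where
# S(k) = P(Erlang(k, lam) > d) = sum_{j<k} exp(-lam*d)*(lam*d)^j / j!.
lam, d = 1.0, 8.0  # hypothetical rate and day length

def S(k):
    return sum(math.exp(-lam * d) * (lam * d) ** j / math.factorial(j)
               for j in range(k))

def theta(n):
    return (n / lam) * S(n + 1) - d * S(n)

# Second differences Delta^2 theta(n); discrete convexity means these are >= 0.
second_diffs = [theta(n + 2) - 2.0 * theta(n + 1) + theta(n) for n in range(25)]
print(min(second_diffs))
```

The minimum second difference stays nonnegative (up to floating-point noise), in line with the NBUE argument above.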
Proof of Corollary 4.4.4. The proof follows from Proposition 4.4.3 and is thus omitted.

Proof of Theorem 4.4.5. Since θ(n) is discrete convex (shown above), we have ∆θ(n) − ∆θ(n′) ≥ 0 for n ≥ n′. Thus,

∆θ(nS) − ∆θ(nH − 1) ≥ (r + uS µ)/(oS + uS) − uH µ/(oH + uH) > 0.

Now, if nS ≤ nH − 1 then ∆θ(nS) − ∆θ(nH − 1) ≤ 0; thus we can conclude that nS ≥ nH.

Proof of Proposition 4.6.1. We first establish (i) by considering second order conditions. The first and second derivatives of C (given in Eq(4.8)) with respect to n are, respectively,

C′(n) = oH ∫_{d/n}^{∞} t f(t) dt − uH ∫_{0}^{d/n} t f(t) dt   (4.18)

and

C″(n) = (d²/n³) f(d/n) (uH + oH)   (4.19)

using judicious appeals to Leibniz's rule for differentiating integrals and some basic housekeeping. Observe that C″(n) > 0 for all n ≥ 0 (and hence C is (strictly) convex) provided the sum of the cost coefficients oH + uH is positive. As for (ii), since C is convex, a sufficient condition for optimality is C′(n) = 0. Thus the equation C′(n) = 0 characterizes nH. From Eq(4.18) this yields the equivalent relation:

oH ∫_{d/n}^{∞} t f(t) dt = uH ∫_{0}^{d/n} t f(t) dt.   (4.20)

Noting the fact that

µ = ∫_{0}^{∞} t f(t) dt = ∫_{0}^{d/n} t f(t) dt + ∫_{d/n}^{∞} t f(t) dt,

we remove the improper integral on the left-hand side of Eq(4.20) by substitution and simplify Eq(4.20) to:

oH / uH = (∫_{0}^{d/n} t f(t) dt) / (µ − ∫_{0}^{d/n} t f(t) dt).   (4.21)

We simplify this expression further by expanding the integral ∫_{0}^{d/n} t f(t) dt using integration by parts, yielding:

∫_{0}^{d/n} t f(t) dt = (d/n) F(d/n) − ∫_{0}^{d/n} F(t) dt.

Using our notation G(y) = ∫_{0}^{y} F(t) dt and ϕ(y) = yF(y) − G(y), we obtain an equivalent expression for Eq(4.21) as follows:

oH / uH = ϕ(d/n) / (µ − ϕ(d/n)).

A simple rearrangement yields the characterization of nH:

ϕ(d/nH) = oH µ / (oH + uH),   (4.22)

thus establishing (ii).

Proof of Proposition 4.6.2. The details are similar to those in the proof of Proposition 4.6.1, so we are brief. We establish (i) by considering second order conditions.
The first and second derivatives of π (given in Eq(4.9)) with respect to n are, respectively,

π′(n) = r − oS ∫_{d/n}^{∞} t f(t) dt + uS ∫_{0}^{d/n} t f(t) dt   (4.23)

and

π″(n) = −(d²/n³) f(d/n) (uS + oS).   (4.24)

Observe that π″(n) < 0 for all n ≥ 0 (and hence π is (strictly) concave) provided the sum of the cost coefficients oS + uS is positive. As for (ii), since π is concave, an optimal solution is achieved at π′(n) = 0. Thus the equation π′(n) = 0 characterizes nS. Using manipulations similar to those above, this yields an equivalent characterization of nS given by:

ϕ(d/nS) = (oS µ − r) / (oS + uS),   (4.25)

thus establishing (ii).

The following lemma is useful in the proofs of the next two theorems. It reveals a useful property of the function ϕ, which plays a common role in the characterizations of nH and nS.

Lemma 4.8.1. The function ϕ(x) = xF(x) − G(x) is an increasing function of x for x ≥ 0.

Proof. We show that ϕ′(x) ≥ 0 for x ≥ 0. Note that:

ϕ′(x) = F(x) + xf(x) − G′(x) = F(x) + xf(x) − F(x) = xf(x) ≥ 0

where the inequality holds since the pdf f is non-negative and x ≥ 0.

Proof of Theorem 4.6.3. Note the characterizations of nH and nS given in Eq(4.11) and Eq(4.12), reproduced here for convenience:

ϕ(d/nH) = oH µ / (oH + uH)

and

ϕ(d/nS) = (oS µ − r) / (oS + uS).

Since ϕ is an increasing function, ϕ(d/n) is a decreasing function of n. Thus showing nH ≤ nS is equivalent to showing ϕ(d/nH) ≥ ϕ(d/nS). By the above characterizations this is in turn equivalent to establishing:

oH µ / (oH + uH) ≥ (oS µ − r) / (oS + uS).

We obtain the desired result by dividing both sides by µ.

Proof of Lemma 4.6.4. Recall that π(n, t) = rn − oS max{0, nt − d} − uS max{0, d − nt}. Then Var π(n, t) = Var(oS max{0, nt − d} + uS max{0, d − nt}). Since we are only interested in how the variance of π(n, t) depends on n, w.l.o.g. we may assume oS = uS = 1. Also note that max{0, nt − d} + max{0, d − nt} = |nt − d|. Therefore we just need to find Var |nt − d|.
But since d is a constant we obtain Var |nt − d| = Var |nt| = n² Var |t|, and since t > 0 we obtain n² Var |t| = n² Var t. Hence Var π(n, t) is proportional to n².

Proof of Lemma 4.6.5. The expected profit function π (see Eq(4.9)) is continuous. We know π(0) = −uS d, and by inequality Eq(4.15) π(nR) < π(nS), so by the Intermediate Value Theorem there exists an m with 0 ≤ m ≤ nS such that π(m) = π(nR).

Proof of Proposition 4.6.6. Condition Eq(4.16) and the fact that v is increasing imply

v(π(nR, tH)) ≤ v(π(m, tL)) ≤ v(π(m, tH)) ≤ v(π(nR, tL))

as illustrated in Figure 4.8. We know that the point (π(nR), Et[v(π(nR, t))]) lies on the line segment between the points (π(nR, tH), v(π(nR, tH))) and (π(nR, tL), v(π(nR, tL))), and is in fact the convex combination of these points given by:

pH (π(nR, tH), v(π(nR, tH))) + pL (π(nR, tL), v(π(nR, tL))).

By the concavity of v, this line segment lies below the line segment joining the points (π(m, tL), v(π(m, tL))) and (π(m, tH), v(π(m, tH))) at every profit level at which both line segments are defined, and in particular at the expected profit level π(nR) = π(m). It then follows that Et[v(π(nR, t))] ≤ Et[v(π(m, t))], which can be seen graphically in Figure 4.8.

Proof of Proposition 4.6.7. Recall our assumption that v is a strictly increasing, twice differentiable and concave utility function, i.e., v′ > 0 and v″ < 0. Define η(n) as the expected utility function, i.e., η(n) = Et[v(π(n, t))]. Since π(n, t) and v are (strictly) concave and concavity is preserved by the expectation operator, η is (strictly) concave. Thus nR ∈ arg max_{n ≥ 0} η(n). We can compute nR by finding η′(n) with Leibniz's rule and solving for n in

η′(n) = ∫_{0}^{d/n} (r + uS t) v′(rn − uS d + uS nt) f(t) dt + ∫_{d/n}^{∞} (r − oS t) v′(rn + oS d − oS nt) f(t) dt = 0.

We know that π′(nS) = 0 by Proposition 4.6.2. Our strategy is to show η′(nS) ≤ 0, which implies nR ≤ nS. We will start from the expression for π′(nS) and obtain η′(nS) ≤ 0.
Recall, by Eq(4.23),

π′(nS) = ∫_{0}^{d/nS} (r + uS t) f(t) dt + ∫_{d/nS}^{∞} (r − oS t) f(t) dt = 0.

Since r nS > d(oS + uS) and nS ≥ 1, we have r/oS > d/nS and can rewrite π′(nS) so that Eq(4.23) is equivalent to

π′(nS) = ∫_{0}^{d/nS} (r + uS t) f(t) dt + ∫_{d/nS}^{r/oS} (r − oS t) f(t) dt + ∫_{r/oS}^{∞} (r − oS t) f(t) dt = 0.   (4.26)

Now we multiply each term in Eq(4.26) by v′(oS d), a nonnegative constant, and rewrite it as

∫_{0}^{d/nS} v′(oS d)(r + uS t) f(t) dt (part 1) + ∫_{d/nS}^{r/oS} v′(oS d)(r − oS t) f(t) dt (part 2) + ∫_{r/oS}^{∞} v′(oS d)(r − oS t) f(t) dt (part 3) = 0.   (4.27)

Note that part 1 and part 2 are non-negative, and part 3 is non-positive. Next we obtain inequalities individually for each of these three parts. First recall that v is strictly concave, so v″ < 0, i.e., v′ is strictly decreasing. Also recall that we assume r nS > d(oS + uS) and nS ≥ 1.

We start with part 1. Since r > d(oS + uS)/nS and v′ is decreasing, we have 0 ≤ v′(r nS − uS d + uS nS t) ≤ v′(oS d) for 0 ≤ t ≤ d/nS. Therefore

∫_{0}^{d/nS} v′(oS d)(r + uS t) f(t) dt ≥ ∫_{0}^{d/nS} v′(r nS − uS d + uS nS t)(r + uS t) f(t) dt ≥ 0.   (4.28)

Next we obtain a similar result for part 2. Since r > d(oS + uS)/nS and v′ is decreasing, we have 0 ≤ v′(r nS + oS d − oS nS t) ≤ v′(oS d) for d/nS ≤ t ≤ r/oS. Hence

∫_{d/nS}^{r/oS} v′(oS d)(r − oS t) f(t) dt ≥ ∫_{d/nS}^{r/oS} v′(r nS + oS d − oS nS t)(r − oS t) f(t) dt ≥ 0.   (4.29)

Finally, we obtain an inequality for part 3. Since r > d(oS + uS)/nS and v′ is decreasing, we have 0 ≤ v′(oS d) ≤ v′(r nS + oS d − oS nS t) for t ≥ r/oS, and hence

0 ≥ ∫_{r/oS}^{∞} v′(oS d)(r − oS t) f(t) dt ≥ ∫_{r/oS}^{∞} v′(r nS + oS d − oS nS t)(r − oS t) f(t) dt.   (4.30)

Looking at Eq(4.28), Eq(4.29) and Eq(4.30), we see that when v′(oS d) is replaced with the corresponding v′(·) terms, the non-negative parts (part 1 and part 2) become less positive, and the non-positive part (part 3) becomes more negative.
Putting Eq(4.26), Eq(4.27), Eq(4.28), Eq(4.29) and Eq(4.30) together, we obtain

0 = v′(oS d) π′(nS)
  = ∫_{0}^{d/nS} v′(oS d)(r + uS t) f(t) dt + ∫_{d/nS}^{r/oS} v′(oS d)(r − oS t) f(t) dt + ∫_{r/oS}^{∞} v′(oS d)(r − oS t) f(t) dt
  ≥ ∫_{0}^{d/nS} v′(r nS − uS d + uS nS t)(r + uS t) f(t) dt
  + ∫_{d/nS}^{r/oS} v′(r nS + oS d − oS nS t)(r − oS t) f(t) dt
  + ∫_{r/oS}^{∞} v′(r nS + oS d − oS nS t)(r − oS t) f(t) dt
  = η′(nS).

Hence η′(nS) ≤ 0, implying that nR ≤ nS as claimed.

4.9  Bibliography

[1] Jeroen Beliën and Erik Demeulemeester. Building cyclic master surgery schedules with leveled resulting bed occupancy. European Journal of Operational Research, 176:1185–1204, 2007.

[2] John Blake and Joan Donald. Mount Sinai Hospital uses integer programming to allocate operating room time. Interfaces, 32(2):63–73, 2002.

[3] Selma Harrison Calmes and Kurt M. Shusterich. Operating room management: what goes wrong and how to fix it. Physician Executive, 18(6), 1992.

[4] Scott Carr and William S. Lovejoy. The inverse newsvendor problem: Choosing an optimal demand portfolio for capacitated resources. Management Science, 46(7):912–927, 2000.

[5] Mike Carter. Diagnosis: Mismanagement of resources. OR/MS Today, 29(2):26–32, 2002.

[6] Amitabh Chandra and Jonathan Skinner. Expenditure and productivity growth in health care. Dartmouth College, February. Forthcoming as an NBER Working Paper, 2008.

[7] Avinash Dixit. Incentives and organizations in the public sector: An interpretative review. The Journal of Human Resources, 37(4):696–727, 2002.

[8] Canadian Institute for Health Information Web Site. http://www.cihi.ca/.

[9] Robert Gibbons. Incentives between firms (and within). Management Science, 51(1):2–17, 2005.

[10] Sholom Glouberman and Henry Mintzberg. Managing the care of health and the cure of disease: Part I: Differentiation, Part II: Integration. Health Care Management Review, 26(1):56–84, 2001.

[11] Erwin Hans and Tim Nieberg. Operating room manager game. INFORMS Transactions on Education, 8(1):25–36, 2007.
5 Advance Multi-Period Quantity Commitment and Appointment Scheduling¹

¹ A version of this chapter will be submitted for publication: Begen M.A. and Queyranne M. Advance Multi-Period Quantity Commitment and Appointment Scheduling.

We introduce advance multi-period quantity (order or supply) commitment problems with stochastic characteristics (demand or yield) and several real-world applications. There are underage and overage costs if there is a mismatch between committed and realized quantities. Decisions are needed now; they are the order or supply amounts for the next n periods. The objective is to maximize the total expected profit over the n periods. We establish a link between these advance multi-period quantity commitment problems and the appointment scheduling problem studied in Chapter 2. We show that these problems can be thought of, and solved efficiently, as special cases of the appointment scheduling problem.

5.1 Introduction

We introduce and study advance multi-period quantity (order or supply) commitment problems with random characteristics (demand or yield), explore their relationship with the appointment scheduling problem given in Chapter 2, and provide several real-world applications. All quantity decisions (how much to order or supply in each of the next n periods) are needed now, i.e., before any realization of demand or yield. We show that these problems can be modeled and solved as special cases of the appointment scheduling problem.

In a supply chain, the consequences of uncertainty (e.g., due to stochastic demand or random yield) are something that players would like to minimize and, when possible, pass on to others. Consider a buyer and a supplier where the buyer can order any amount from the supplier whenever it is convenient.
This may be the case when there are many suppliers competing for buyers. However, when possible, a supplier would prefer a contract in which the buyer (who has better information about the demand uncertainty) commits in advance to how much to purchase over a certain period of time. In return, the supplier may offer a discount to the buyer to make this choice attractive. These types of agreements are reported in practice, e.g., [2], [8] and [7]. With such an agreement, the challenge for the buyer becomes to determine how much to commit to purchase in advance (e.g., in total for the entire horizon or per period) and how much to order in each period. This problem and its variants (such as finite or infinite horizons, with or without fixed costs, total or individual period commitments) have been well motivated and studied in the literature, e.g., [2], [8], [5], [4], [3], [7] and [1]. These studies mostly (and naturally) use dynamic programming to determine an optimal policy, and in some cases they develop heuristics. Nevertheless, all the previous studies on this topic that we are aware of consider situations where a buyer commits in advance on how much to purchase, but decides how much to order period by period, i.e., the ordering decision for the next period is made after this period's demand realization. In our setting, the buyer needs to decide how much to order for all periods at once and now, i.e., before any realization of the random demands. There can be situations where the buyer needs to enter such a contract to secure any orders from a strong supplier. Alternatively, we think of a producer who is subject to random yield and needs to determine now how much to supply for each of the n periods before any production levels are known. In this case, the producer may be subject to stiff competition and needs to promise customers supply quantities for each of the next n periods before the production horizon starts.
If there is any product shortage in a period, then the producer will obtain the product by other means, e.g., purchase it from a competitor. Furthermore, the producer has a high inventory holding cost, so building inventory in advance to compensate for product shortages may not be profitable; inventory must nevertheless be held when there is excess production. We first provide the details of the advance multi-period quantity (order or supply) commitment problems and then show that they have a very strong connection with the appointment scheduling problem given in Chapter 2. We then use the algorithmic and convex optimization results obtained in Chapter 2 and Chapter 3 (for the appointment scheduling problem) to determine optimal levels of quantity commitments (order or supply) before the first period. To the best of our knowledge, the problems considered in this chapter have not yet been studied.

We use the same assumptions and notation as in Chapter 2 and Chapter 3. We provide a description of the appointment scheduling problem, and introduce notation, in Section 5.2. The rest of the chapter is organized as follows. In Section 5.3, we introduce a multi-period inventory model for a perishable product with advance order commitments, provide a few real-world examples and show that it has a one-to-one correspondence with the appointment scheduling problem. Therefore, it may be solved efficiently as a special case of the appointment scheduling problem. We will refer to this model as "the inventory model". Section 5.4 introduces the model for the production problem with advance supply commitments and random yield. In this section, we establish a link between "the production problem" and the appointment scheduling problem. We show that (under a mild condition on the cost coefficients) the objective function of this problem is L-concave (see Definition 5.4.1) if production quantities are integer, and concave if production quantities are real.
Furthermore, if the yield distributions are independent then the production problem can also be solved efficiently, as in the case of appointment scheduling. Finally, we conclude the chapter in Section 5.5.

5.2 Description of Appointment Scheduling Problem

This section closely follows Chapter 2 and Chapter 3. There are n + 1 jobs numbered 1, 2, ..., n + 1 that need to be sequentially processed (in the order 1, 2, ..., n + 1) on a single processor. An appointment schedule, i.e., a processing duration allocation ai for each job i, is needed before any processing can start. That is, each job is assigned a planned start date, i.e., an appointment date Ai, where A1 = 0 and Ai = Ai−1 + ai−1 for i = 2, 3, ..., n + 1. The processing durations are stochastic and we are only given their joint discrete distribution. When a job finishes later than the next job's appointment date, the system experiences an overage cost due to the overtime of the current job and the waiting of the next job. On the other hand, if a job finishes earlier than the next job's appointment date, the system experiences some cost due to under-utilization, i.e., an underage cost. The goal is to find appointment dates, (A1, ..., An), that minimize the total expected cost.

There are n real jobs. The (n + 1)th job is a dummy job with a processing duration of 0. The appointment time for the (n + 1)th job is the total time available for the n real jobs. We use the dummy job to compute the overage or underage cost of the nth job. We denote the random processing duration of job i by pi, and the random vector of processing durations by p = (p1, p2, ..., pn, 0) (we write all vectors as row vectors). Let p̄i denote the maximum possible value of the processing duration pi. The maximum of these p̄i's is p̄max = max(p̄1, ..., p̄n). The underage cost rate ui of job i is the cost (per unit time) incurred when job i is completed at a date Ci before the appointment date Ai+1 of the next job i + 1.
The overage cost rate oi of job i is the unit cost incurred when job i is completed at a date Ci after the appointment date Ai+1. Thus the total cost due to job i completing at date Ci is ui(Ai+1 − Ci)+ + oi(Ci − Ai+1)+, where (x)+ = max(0, x) is the positive part of the real number x. We define u = (u1, u2, ..., un) and o = (o1, o2, ..., on). We assume, naturally, that all cost coefficients and processing durations are non-negative and bounded. We also assume that processing durations are integer valued.

Next we introduce our decision variable for the appointment scheduling problem. Let A = (A1, A2, ..., An, An+1) (with A1 = 0) be the appointment vector, where Ai is the appointment date for job i. We introduce additional variables which help define and express the objective function. Let Si be the start date and Ci the completion date of job i. Since job 1 starts on time, we have S1 = 0 and C1 = p1. The other start and completion times are determined as follows: Si = max{Ai, Ci−1} and Ci = Si + pi for 2 ≤ i ≤ n + 1. Note that the dates Si and Ci are random variables which depend on the appointment vector A and the random duration vector p. Let F(A|p) be the total cost of appointment vector A given processing duration vector p:
\[
F(A\,|\,p) = \sum_{i=1}^{n} \left[ o_i (C_i - A_{i+1})^+ + u_i (A_{i+1} - C_i)^+ \right].
\]
The objective to be minimized is the expected total cost F(A) = Ep[F(A|p)], where the expectation is taken with respect to the random processing duration vector p.

Our framework can include a given due date D for the end of processing (e.g., end of day for an operating room in the appointment scheduling problem, or a quota set by the supplier in the inventory model) after which overtime is incurred, instead of letting the model choose a planned makespan An+1. We assume D is an integer and that 0 ≤ D ≤ \sum_{i=1}^{n} p̄i. Define
Ã = (A1, A2, ..., An); then the new objective becomes
\[
F^D(\tilde A) = \mathbb{E}_p\left[ \sum_{j=1}^{n-1} \left( o_j (C_j - A_{j+1})^+ + u_j (A_{j+1} - C_j)^+ \right) + o_n (C_n - D)^+ + u_n (D - C_n)^+ \right].
\]
We immediately observe that F(Ã, D) = F^D(Ã). (By the Appointment Vector Integrality Theorem 2.5.10 of Chapter 2, we can restrict ourselves to integer appointment schedules without loss of optimality.) We end this section with two definitions that we need later in the chapter. The first definition is a mild condition on the cost coefficients and is due to Definition 2.6.5 of Chapter 2.

Definition 5.2.1. The cost coefficients (u, o) are α-monotone if there exist reals αi (1 ≤ i ≤ n) such that 0 ≤ αi ≤ oi and the ui + αi are non-increasing in i, i.e., ui + αi ≥ ui+1 + αi+1 for all i = 1, . . . , n − 1.

The condition of α-monotonicity is automatically satisfied if all underage cost coefficients are identical, i.e., ui = u for all i. Let Z denote the set of integers, and let 1 denote the vector in R^{n+1} with each component equal to 1. The next definition is that of an L-convex function.

Definition 5.2.2. f : Z^q → R ∪ {∞} is L-convex iff f(z) + f(y) ≥ f(z ∨ y) + f(z ∧ y) for all z, y ∈ Z^q and there exists r ∈ R such that f(z + 1) = f(z) + r for all z ∈ Z^q [9].

5.3 A Multi-Period Inventory Model for a Perishable Product with Advance Commitments

Consider a buyer who has to make ordering decisions for the next n periods at time zero, for a perishable product with stochastic demand. Since the product is perishable, excess (unsold) items at the end of a period cannot be used in the next period and need to be disposed of. On the other hand, unsatisfied demand is backordered. Furthermore, there may be a quota on the total purchases (orders and backorders) such that it is more costly to order beyond this quota. The objective of the buyer is to determine how much to commit now for the next n periods to maximize his/her expected profit.
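Before specializing to the inventory model, the appointment-scheduling objective of Section 5.2 can be sketched in code: the recursion S_i = max{A_i, C_{i-1}}, C_i = S_i + p_i gives F(A|p) for one realization, and averaging over sampled duration vectors estimates F(A). All durations, cost rates and the schedule below are illustrative assumptions, not data from the thesis.

```python
import random

# Monte Carlo evaluation of the expected total cost F(A) from Section 5.2.
# The duration distribution, cost rates and schedule are ILLUSTRATIVE choices.

def total_cost(A, p, u, o):
    """F(A|p): cost of appointment vector A = (A_1, ..., A_{n+1}), A_1 = 0,
    for one realization p of the n job durations, via S_i = max(A_i, C_{i-1})
    and C_i = S_i + p_i."""
    cost, completion = 0.0, 0.0
    for i, dur in enumerate(p):
        start = max(A[i], completion)
        completion = start + dur
        cost += o[i] * max(completion - A[i + 1], 0) + u[i] * max(A[i + 1] - completion, 0)
    return cost

def expected_cost(A, sample, u, o, runs=5000):
    """Sample-average estimate of F(A) = E_p[F(A|p)]."""
    return sum(total_cost(A, sample(), u, o) for _ in range(runs)) / runs

random.seed(42)
n = 3
u, o = [1.0] * n, [2.0] * n                 # identical u_i, so alpha-monotone
sample = lambda: [random.randint(1, 4) for _ in range(n)]  # integer durations
A = [0, 3, 6, 9]                            # appointment dates; A_4 is the planned makespan

f_hat = expected_cost(A, sample, u, o)
assert f_hat >= 0.0
```

For example, for the single realization p = (4, 2, 3) this schedule incurs only one unit of overtime on job 1, so total_cost returns 2.0 with the cost rates above.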
We think of profit as revenue − costs, where revenue is simply the number of products sold times the contribution factor (including the unit product cost). In our setting, the number of products sold is equal to the total demand, since we assume backordering and satisfy all the demand. The costs, on the other hand, consist of holding (and/or disposal) costs and backorder costs. Revenue becomes a constant after taking the expectation with respect to each period's demand; therefore we can treat this profit maximization problem as a cost minimization, i.e., minimization of the total expected holding (and/or disposal) and backorder costs.

There are many real-world examples of this setting. For example, consider a retailer who is under heavy competition and has a single supplier. Demand for the retailer's product is stochastic, and the supplier requires the retailer to commit its orders for the next n periods and may set a quota on the total amount of purchases. The retailer has little negotiating power with the supplier, and wants to keep his reputation by satisfying all the demand, using backordering when needed. In this setup, we look at the problem from the retailer's perspective and have to decide on orders for the next n periods.

This inventory model and the appointment scheduling model as defined in Chapter 2 have a one-to-one correspondence in terms of their structure, data, decision variables and objective function. In the inventory model we have n periods, whereas in the appointment scheduling model there are n jobs. The random component pi in the inventory model is the demand for period i, whereas in the appointment scheduling model it is the processing duration of job i. The costs are u and o; in the appointment scheduling model u is the underage (earliness) cost and o is the overage (waiting and/or overtime) cost, whereas in the inventory model u is the holding (and/or disposal) cost and o is the backorder cost.
The decision variable in the appointment scheduling model is the appointment date Ai of job i, or the amount of time ai allocated to job i. On the other hand, the decision in the inventory model is how much to order for period i, ai, and as given in Section 5.2 we have the relationship ai = Ai+1 − Ai. The start time Si of job i in the appointment scheduling model corresponds to the total purchase (orders and backorders) up to period i (not including period i), and the completion time Ci of job i is the total purchase up to period i + 1 (including period i). Table 5.1 gives a comparison summary between the appointment scheduling model and the described inventory problem in terms of data and decision variables.

Table 5.1: Comparison of the Appointment Scheduling and Inventory Models

       appointment scheduling          inventory
  n    jobs                            periods
  i    job                             period
  pi   processing duration of job i    demand for period i
  u    underage cost                   holding and/or disposal cost
  o    overage cost                    backorder cost
  ai   allocated time for job i        order for period i
  Ai   appointment date for job i      cumulative orders up to period i
  Si   start date for job i            total purchase (orders and backorders) up to period i
  Ci   completion date for job i       total purchase (orders and backorders) up to period i + 1

In Figure 5.1 we provide an example. The figure is a graph of periods and order levels for a demand realization (without a quota D), showing inventory levels (excess inventory or backorders) for each period. Figure 5.1 shows the demands (pi's), orders (ai's), backorders (square blocks) and excess units (diagonal blocks) for a six-period instance. The x axis is the cumulative orders (the Ai's) and the y axis is time, i.e., periods. As discussed earlier, the buyer has to decide the order levels a1, . . . , an for the next n periods now such that the total expected cost (holding and/or disposal and backorder costs) is minimized. We find the cost for period i first.
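The correspondence in Table 5.1 can be checked numerically. One consistent reading of the inventory dynamics is: each period, the ordered units a_i serve the carried backorder plus the new demand, leftovers are disposed of (cost u each), and the outstanding backorder is charged o per unit. The sketch below (all parameters illustrative) verifies on random instances that this direct simulation matches the appointment-scheduling cost recursion exactly.

```python
import random

# Check the Table 5.1 correspondence: the direct inventory simulation and the
# appointment-scheduling cost recursion give identical totals. The cost rates
# and instance sizes below are ILLUSTRATIVE assumptions.

def inventory_cost(a, p, u, o):
    """Direct simulation: a[i] units arrive in period i and serve the carried
    backorder plus demand p[i]; leftovers are disposed of (cost u per unit),
    the outstanding backorder is charged o per unit per period."""
    cost, backorder = 0.0, 0.0
    for ai, pi in zip(a, p):
        short = backorder + pi - ai
        if short >= 0:
            backorder = short              # unmet demand carried over
            cost += o * short
        else:
            backorder = 0.0
            cost += u * (-short)           # excess perishable units disposed of
    return cost

def appointment_cost(a, p, u, o):
    """Section 5.2 recursion with A_{i+1} = A_i + a_i and C_i = max(A_i, C_{i-1}) + p_i."""
    cost, A, C = 0.0, 0.0, 0.0
    for ai, pi in zip(a, p):
        C = max(A, C) + pi
        A += ai
        cost += o * max(C - A, 0.0) + u * max(A - C, 0.0)
    return cost

random.seed(0)
u, o = 1.0, 3.0
for _ in range(200):
    n = random.randint(1, 8)
    a = [random.randint(0, 5) for _ in range(n)]
    p = [random.randint(0, 5) for _ in range(n)]
    assert abs(inventory_cost(a, p, u, o) - appointment_cost(a, p, u, o)) < 1e-9
```

The agreement follows from max(A_i, C_{i-1}) = A_i + (C_{i-1} - A_i)^+, so that C_i - A_{i+1} = B_{i-1} + p_i - a_i, where B_{i-1} is the outstanding backorder.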
Recall that the total purchases up to period i are Si, the total orders up to period i are Ai, and the demand for period i is pi. Suppose we had ordered ai for period i. Then Si + pi is the total purchases up to period i + 1, and Ai + ai is the total orders up to period i + 1. Therefore we pay holding and/or disposal cost u on (Ai + ai − Si − pi)+ and backorder cost o on (Si + pi − Ai − ai)+; hence the total cost due to ordering ai units in period i is u(Ai + ai − Si − pi)+ + o(Si + pi − Ai − ai)+. By definition Ai+1 = Ai + ai and Ci = Si + pi, therefore we can represent the cost of period i as u(Ai+1 − Ci)+ + o(Ci − Ai+1)+. Then the total expected cost for the n periods of the inventory problem becomes
\[
G(A) = \mathbb{E}_p\left[ \sum_{i=1}^{n} \left( o (C_i - A_{i+1})^+ + u (A_{i+1} - C_i)^+ \right) \right]. \tag{5.1}
\]
We see that Eq(5.1) is precisely the definition of F(A) (given in Section 5.2) when ui = u and oi = o for all i (1 ≤ i ≤ n). Therefore G(A) = F(A) with ui = u and oi = o for all i (1 ≤ i ≤ n), and hence the inventory model is a special case of the appointment scheduling model. Furthermore, since ui = u for all i (1 ≤ i ≤ n), α-monotonicity is automatically satisfied. Therefore, if the demand distributions are independent and integer-valued then G(A) can be minimized in O(n^9 p̄_max^2 log p̄_max) time by Theorem 2.7.3 of Chapter 2. In addition, we see that with real-valued demands G is convex (by Corollary 3.3.5 of Chapter 3), and we can use the sampling approach developed in Chapter 3 to obtain
Last but not least, by our results in Appendix A we can use non-smooth convex optimization methods and a hybrid algorithm5 to find an optimal order plan for the buyer. When there is a limit (or quota) D set by the supplier on the total number of purchases then we may represent the total expected cost for n periods of the inventory problem as   n−1  ˜ = Ep  GD (A)  o(Cj − Aj+1 )+ + u(Aj+1 − Cj )+  j=1  + on (Cn − D)+ + u(D − Cn )+  .  ˜ D) = GD (A) ˜ and furthermore, GD (A) ˜ = F D (A) ˜ (with We immediately observe that G(A, un−1 = ui = u and oi = o for all i (1 ≤ i ≤ n − 1), and possibly different on ). Similar to G, for GD α-monotonicity is satisfied and hence it can be minimized in O(n9 p2max log pmax ) time by the Corollary 2.8.7 of Chapter 2 if the demand distributions are independent and integervalued. Furthermore, like G, with real-valued demands GD is convex (by the Corollary 3.3.5 of Chapter 3), and by Proposition 3.4.11 of Chapter 3 we can use the sampling approach 5  It is based on combination of discrete and non-smooth convex methods with the special purpose rounding  algorithm, see Section A.4 of Appendix A.  143  developed in Chapter 3 to obtain a provable optimal order plan if demand distributions are not known but only a set of independent samples is available. In this case, again we do not require independence of demands between periods. Last but not least, by using our results in Appendix A and Chapter 3 (Remark A.3.11 of Appendix A and Proposition 3.4.11 of Chapter 3) we can use non-smooth convex optimization methods and the hybrid algorithm to find an optimal order plan for the buyer. Remark 5.3.1. We may use distinct ui ’s and oi ’s as long as they are α-monotone for the inventory model. 
However, for simplicity, and since it makes sense to have the same backorder and holding (and/or disposal) cost for all periods, we use the same u and o for all periods, except possibly for on (to capture the extra cost incurred when there is a quota and it is exceeded). When all the ui's are the same, α-monotonicity is satisfied and the functions G and G^D are automatically discretely convex in the case of integer-valued demands, and convex when demand is real-valued.

We provide a real-world example for the inventory model; it comes from the high-tech industry. Consider an Internet company that needs a large amount of bandwidth daily. The demand for bandwidth changes from day to day and is stochastic. The company has to make prior minimum commitments with an Internet service provider, at the beginning of each month, on how much bandwidth to buy for each of the next 30 days. Any unused amount is lost, since the company has already agreed to buy some minimum quantity of bandwidth daily, whereas the company can purchase more in a day if needed. However, there is a quota D that the Internet provider sets for the total purchase over the 30 days. If this limit is exceeded then the company must pay a penalty for any amount over D. The company's objective is to determine how much bandwidth purchase to commit to for the next 30 days to minimize the expected unused-amount and overuse costs. (We do not consider the cost of the service itself, since the company will pay that amount in any case.) We can write this company's objective as
\[
\mathbb{E}_p\left[ \sum_{j=1}^{n-1} u (A_{j+1} - C_j)^+ + u (D - C_n)^+ + o (C_n - D)^+ \right],
\]
where u is the unused-amount cost rate (which can be thought of as the opportunity cost of unused bandwidth) and o is the overuse cost for exceeding the limit D. Note that this function is the same as G^D(Ã) with oi = 0 for 1 ≤ i ≤ n − 1 and on = o.

We end this section with another application of appointment scheduling in the context of the inventory model.
Consider a project manager who is responsible for budget allocation decisions for multiple serial phases of a project. The challenge is that the funding requirement of each project phase is stochastic, and the funds need to be secured before the project starts. If the funding requirement (for a phase) turns out to be more than what was allocated, then it costs more (at a rate o) to secure the remaining portion (e.g., the manager needs to pay higher interest to obtain additional funds). On the other hand, if the allocation (for a phase) is more than the required funds, then some opportunity cost is incurred (at a rate u). Moreover, there may be a total quota D for the entire project, beyond which it costs even more to obtain any funding. The objective is to determine funding allocations for each phase before the project starts such that the total expected cost is minimized. We can think of this problem as the same inventory problem, where the funding requirements play the role of demand, the budget allocations play the role of order commitments, and the cost structure with u and o is the same.

5.4 A Multi-Period Production Model with Random Yield and Advance Commitments

Consider a production manager who is responsible for manufacturing and selling a product that is subject to random yield and a high inventory holding cost. In order to secure customer contracts, this manager needs to commit in advance how much to supply in each of the next n periods, before any production starts. Due to random yield, the production amount is subject to uncertainty; however, the supply commitments must be met regardless. For example, if production falls short and the producer cannot satisfy the current period's supply commitment from its inventory, then it needs to provide the missing product amount by other means (e.g., purchase from other manufacturers). On the other hand, if there is more product available than what was committed, then the excess amount is placed in inventory and can be used in future periods.
However, keeping inventory is expensive due to the product's high inventory holding cost. The manager is interested in maximizing the total expected profit after n periods. Revenue consists of the number of products sold times the contribution factor r per product (including the unit production cost). In our setting, the number of products sold is the total number of products committed. Costs include a product shortage cost ui per product and an inventory holding cost oi per product for period i (1 ≤ i ≤ n). We think of ui as the additional cost of obtaining a product (compared to in-house production) when the producer falls short of satisfying the current period's commitment. On the other hand, oi can be thought of as the inventory holding cost for periods 1, ..., n − 1, while on (in addition to the holding cost) may include the disposal cost of products remaining at the end of the n periods.

Similar to the inventory model described in Section 5.3, this production model and the appointment scheduling model given in Chapter 2 have many similarities with respect to their structure, data, decision variables and objective function. In this production model we have n periods, and in the appointment scheduling model there are n jobs. The stochastic component pi in the production model is the production level of period i, whereas in the appointment scheduling model it is the processing duration of job i. The costs are ui and oi; in the appointment scheduling model ui is the underage (earliness) cost and oi is the overage (waiting and/or overtime) cost of job i, while in the production model ui is the product shortage cost and oi the inventory holding (and possibly disposal) cost. The decision variable in the appointment scheduling model is the appointment date Ai of job i, whereas the decision in the production model is how much to commit to supply in period i, ai, and we have the relationship ai = Ai+1 − Ai.
The completion time Ci of job i in the appointment scheduling model represents the total number of products supplied (produced in-house and purchased from outside) until the end of period i, before purchases (if any) in period i, and the start time Si of job i is the total number of products supplied (produced in-house and purchased from outside) until the end of period i − 1, after purchases (if any) in period i − 1. Table 5.2 gives a comparison summary between the appointment scheduling model and the described production problem in terms of data and decision variables.

We can interpret Figure 5.1 of the inventory model similarly for the production model as well. Now the x axis is the cumulative supply commitments and, as before, the y axis is time, i.e., periods. The figure shows the supply commitment levels (ai's), a sample production realization (pi's), units in inventory (square blocks) and units that are short (diagonal blocks) for each period.

The manager, as mentioned above, has to decide the supply commitment levels a1, . . . , an now for the next n periods such that the total expected profit is maximized. We first look at the revenue for period i. The producer sells precisely the committed amount, ai, in each period; if there is a shortage then the manager needs to obtain the missing amount from outside to fulfill the supply commitment, and if there is a surplus then the excess goes to inventory. Therefore the revenue for period i is r ai. Next, we look at the costs for period i. Recall that
Recall that 146  Table 5.2: Comparison of the Appointment Scheduling and Production Models appointment scheduling  production  n  jobs  periods  i  job  period  pi  processing duration of job i  production for period i  ui  underage cost for job i  product shortage cost for period i  oi  overage cost for job i  inventory holding cost for period i  ai  allocated time for job i  supply commitment for period i  Ai  appointment date for job i  cumulative supply commitments upto period i  Si  start date for job i  total products upto i with purchase in i − 1  Ci  completion date for job i  total products upto i + 1 without purchase in i  Ai is the cumulative supply commitment up to period i. Suppose the manager promises to supply ai for period i. Then Ai +ai is the cumulative supply commitment up to period i+1. Also note that Ci is the total products supplied (produced in-house and purchased from outside) up to period i + 1 before purchase (if any) in period i. Therefore if Ci − Ai − ai > 0 then there is a product surplus and the producer pays oi (Ci − Ai − ai ) as inventory holding cost else there is a product shortage and the producer pays ui (Ai + ai − Ci ) to purchase the missing products. Hence the profit for period i will be rai − oi (Ci −Ai+1 )+ +ui (Ai+1 −Ci )+ . Then the total expected profit for n periods of the production problem becomes n  rai − oi (Ci − Ai+1 )+ − ui (Ai+1 − Ci )+  H(A) = Ep  (5.2)  i=1 n  n  oi (Ci − Ai+1 )+ + ui (Ai+1 − Ci )+  ai − Ep  = r i=1  (5.3)  i=1  = rAn+1 − F (A)  (5.4)  where Eq(5.2) is the total expected profit for all n periods. In Eq(5.3) we take the nonrandom part out of the expectation and finally we obtain Eq(5.4) by noting  n i=1 ai  = An+1  and recognizing F (A)’s definition as given in Section 5.2. Therefore H(A) = rAn+1 − F (A) and hence the production model has a close connection with the appointment scheduling model. We can think of H as a special case of F . 
Before we formalize this relationship with our next result, we need a few definitions. The first is the definition of L-concavity: a function f is L-concave if −f is L-convex. The formal definition is below.

Definition 5.4.1. f : Z^q → R ∪ {∞} is L-concave iff f(z) + f(y) ≤ f(z ∨ y) + f(z ∧ y) for all z, y ∈ Z^q and there exists r ∈ R such that f(z + 1) = f(z) + r for all z ∈ Z^q [9].

The next two definitions concern the subgradient and subdifferential of a convex function, and the supgradient and supdifferential of a concave function.

Definition 5.4.2. A vector g is a subgradient of a convex function f at the point x if f(y) ≥ f(x) + g^T(y − x) for all y. The subdifferential of f at a point x is the set of all subgradients at the point x, i.e., ∂f(x) = {g : f(y) ≥ f(x) + g^T(y − x) for all y} [6].

Definition 5.4.3. A vector g is a supgradient of a concave function f at the point x if f(y) ≤ f(x) + g^T(y − x) for all y. The supdifferential of f at a point x is the set of all supgradients at the point x, i.e., ∂̄f(x) = {g : f(y) ≤ f(x) + g^T(y − x) for all y} [6].

Now we are ready for our results on H, the objective function of the production model.

Corollary 5.4.4.
1. If production levels are integer-valued and the cost coefficients (u, o) are α-monotone, then H is L-concave. In addition, if the production level distributions are independent, then H can be maximized in O(n^9 p̄_max^2 log p̄_max) time.
2. If production amounts are real-valued and the cost coefficients (u, o) are α-monotone, then
   • H is concave,
   • if g is a subgradient of F at A then r1_{n+1} − g is a supgradient of H at A, i.e., if g ∈ ∂F(A) then (r1_{n+1} − g) ∈ ∂̄H(A), and
   • the supdifferential of H at A is ∂̄H(A) = r1_{n+1} − ∂F(A).

Proof. 1. By definition H(A) = r A_{n+1} − F(A). The first term r A_{n+1} is linear in A, and by the L-convexity Theorem 2.6.13, F(A) is L-convex when the cost vectors (u, o) are α-monotone. Therefore H(A) is L-concave.
In the case of independent production level distributions, for a given A, the complexity of computing H(A) is the same as that of computing F(A), and by Theorem 2.7.3 of Chapter 2, F and hence H can be optimized in O(n^9 p_max^2 log p_max) time.

2. • F is convex by Corollary 3.3.5 of Chapter 3 when (u, o) are α-monotone, and rA_{n+1} is linear in A; therefore H(A) = rA_{n+1} − F(A) is concave when (u, o) are α-monotone.
• If g ∈ ∂F(A) then F(B) ≥ F(A) + g^T (B − A). Since F(A) = rA_{n+1} − H(A), we obtain rB_{n+1} − H(B) ≥ rA_{n+1} − H(A) + g^T (B − A). Reorganizing the terms gives −H(B) ≥ −H(A) − (r1_{n+1} − g)^T (B − A). Finally, multiplying both sides by −1 yields H(B) ≤ H(A) + (r1_{n+1} − g)^T (B − A). Therefore r1_{n+1} − g is a supgradient of H at A, i.e., (r1_{n+1} − g) ∈ ∂̄H(A).
• The result above shows that if g ∈ ∂F(A) then (r1_{n+1} − g) ∈ ∂̄H(A), i.e., ∂̄H(A) ⊇ r1_{n+1} − ∂F(A). By the same arguments, if h ∈ ∂̄H(A) then r1_{n+1} − h is a subgradient of F at A, showing ∂̄H(A) ⊆ r1_{n+1} − ∂F(A). Hence ∂̄H(A) = r1_{n+1} − ∂F(A).

Corollary 5.4.4 allows us to solve the production problem with the tools developed for the appointment scheduling problem in Chapters 2 and 3. In the case of independent and integer-valued production level distributions we can maximize H in polynomial time. Furthermore, with real-valued production levels H is concave, and we obtain a supgradient of H easily once we have a subgradient of F. Therefore we can utilize the results in Appendix A and use non-smooth concave optimization methods and the hybrid algorithm to find an optimal production plan for the manager. Last but not least, we characterize the supdifferential of H and can use the sampling approach developed in Chapter 3 to obtain a provably near-optimal production plan when the production level distributions are not known and only a set of independent samples is available.
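The supgradient relation in part 2 can be sanity-checked numerically. The sketch below is our own toy construction, not the thesis algorithm: it builds a convex F as a pointwise maximum of affine functions, takes the slope vector of an attaining plane as a subgradient g, and verifies the supgradient inequality H(B) ≤ H(A) + (r1_{n+1} − g)^T (B − A), reading 1_{n+1} as the unit vector in coordinate n + 1:

```python
import random

# toy convex F: pointwise max of affine functions a^T x + b (made-up coefficients)
planes = [([1.0, 2.0, 0.5], 0.0), ([-1.0, 0.0, 3.0], 1.0), ([0.5, -2.0, 1.0], -0.5)]
r = 2.0

def F(x):
    return max(sum(ai * xi for ai, xi in zip(a, x)) + b for a, b in planes)

def subgrad_F(x):
    # the slope vector of any plane attaining the max is a subgradient of F at x
    a, b = max(planes, key=lambda ab: sum(ai * xi for ai, xi in zip(ab[0], x)) + ab[1])
    return a

def H(x):
    # H(A) = r * A_{n+1} - F(A)
    return r * x[-1] - F(x)

random.seed(0)
A = [0.0, 1.0, 2.0]
g = subgrad_F(A)
s = [(r if i == len(A) - 1 else 0.0) - gi for i, gi in enumerate(g)]  # r e_{n+1} - g
for _ in range(1000):
    B = [ai + random.uniform(-5, 5) for ai in A]
    gap = H(A) + sum(si * (bi - ai) for si, bi, ai in zip(s, B, A)) - H(B)
    assert gap >= -1e-9  # supgradient inequality H(B) <= H(A) + s^T (B - A)
```

The check passes for every B because F is convex and g is a genuine subgradient, exactly mirroring the algebra in the proof above.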
As discussed earlier, in this case we do not require independence of production levels from one period to another.

5.5 Conclusion

We introduce two advance multi-period quantity (order or supply) commitment problems with random demand or yield. The feature distinguishing these models from those previously reported in the literature is that all quantity commitment (order and supply amount) decisions are made at once, before the planning interval starts. We show that there is a close relationship between these problems and the appointment scheduling problem studied in Chapter 2. Therefore, we can solve these types of quantity commitment problems efficiently, e.g., in polynomial time in the case of integer-valued and independent (demand or yield) distributions, or by using the non-smooth convex optimization methods developed in Appendix A. Furthermore, in the case of unknown demand or yield distributions we can use the sampling approach developed in Chapter 3.

5.6 Bibliography

[1] Ravi Anupindi and Yehuda Bassok. Supply contracts with quantity commitments and stochastic demand. A chapter in Quantitative Models in Supply Chain Management. Kluwer Academic Publishers, S. Tayur, M. Magazine, and R. Ganeshan, eds., 1998.
[2] Yehuda Bassok and Ravi Anupindi. Analysis of supply contracts with total minimum commitment. IIE Trans., 29:373–381, 1997.
[3] Yehuda Bassok and Ravi Anupindi. Analysis of supply contracts with commitments and flexibility. Naval Research Logistics, 55(5), 2008.
[4] Aharon Ben-Tal, Boaz Golany, Arkadi Nemirovski, and Jean-Philippe Vial. Retailer-supplier flexible commitments contracts: A robust optimization approach. MSOM, 7(3):248–271, 2005.
[5] Ki Ling Cheung and Xue-Ming Yuan. An infinite horizon inventory model with periodic order commitment. EJOR, 146:52–66, 2003.
[6] Jean-Baptiste Hiriart-Urruty and Claude Lemarechal. Convex Analysis and Minimization Algorithms I and II. Springer, 1993.
[7] Zhaotong Lian and Abhijit Deshmukh.
Analysis of supply contracts with quantity flexibility. EJOR, 196:526–533, 2009.
[8] Kamran Moinzadeh and Steven Nahmias. Adjustment strategies for a fixed delivery contract. Oper. Res., 48(3):408–423, 2000.
[9] Kazuo Murota. Discrete Convex Analysis, SIAM Monographs on Discrete Mathematics and Applications, Vol. 10. Society for Industrial and Applied Mathematics, 2003.

6 Concluding Remarks

In this thesis, we take an in-depth look at the appointment scheduling problem [2, 1, 7]. In Chapter 2, we study a discrete time version of the appointment scheduling problem and develop a polynomial time algorithm, based on discrete convexity, that, for a given processing sequence, finds an appointment schedule minimizing the total expected cost. To the best of our knowledge this is the first polynomial time algorithm for the appointment scheduling problem. In addition, our framework can handle a given due date for the total processing (e.g., end of day for an operating room) after which overtime is incurred, instead of letting the model choose an end date. We also extend our model and framework to include no-shows (e.g., patient no-shows) and some emergencies (e.g., emergency surgeries). We believe that our framework is sufficiently generic that it is portable and applicable to many appointment systems in healthcare and other areas, including surgery scheduling, healthcare diagnostic operations (such as CAT scan, MRI) and physician appointments, as well as project scheduling, container vessel and terminal operations, and gate and runway scheduling of aircraft in an airport. After developing our modeling framework and proving that we can find an optimal appointment schedule in polynomial time, we focus on practical implementation issues in Chapter 3 and Appendix A. The objective function of the appointment scheduling problem, as a function of the continuous appointment vector, is non-smooth, but in Chapter 3 we show that it is convex, and we characterize its subdifferential.
We obtain closed-form formulas for the subdifferential as well as for a subgradient. This characterization is very useful, as it allows us to develop two very important extensions. In Chapter 3, we relax the perfect information assumption on the probability distributions of processing durations. We develop a sample-based approach to determine the number of independent samples required to obtain a provably near-optimal solution with high confidence. This result has important practical implications, as the true processing duration distributions are often not known and only their past realizations or some samples are available. We believe this is the first sampling approach developed for the appointment scheduling problem. In Appendix A, we use the subdifferential characterization with independent processing durations to develop a hybrid approach based on both discrete convexity [4] and non-smooth convex optimization [3, 6], combined with a special-purpose rounding algorithm which takes any fractional solution and rounds it to an integer one with the same or improved objective value. We believe the hybrid approach may perform well in practice. Again motivated by surgery scheduling, in Chapter 4 we look at the problem of determining the number of surgeries for an OR block with a focus on the incentives of the parties involved (hospital and surgeon). In particular, we investigate the situation, reported in the literature and observed empirically, that surgeons over-schedule their allotted OR time, i.e., they schedule too many surgeries for their OR time and cause excessive overtime. We argue that this can be explained by the incentive of surgeons to take advantage of the fee-for-service payment structure for surgeries performed, combined with the fact that surgeons do not bear overtime costs at the hospital level. This creates a cost which is borne by the hospital, which operates the OR and pays surgery support staff.
We propose contracts that induce the surgeon to schedule a number of surgeries more aligned with the goals of the hospital and thus reduce overtime. If an OR can be managed in such a way that overtime is decreased, this may translate into immediate and significant cost savings, which may be used to increase hospital resources such as regular OR time, recovery and intensive care beds. Depending on how much power the hospital has over surgeons and how much information is available to the hospital, we suggest several contracts that the hospital might consider. There is a connection between the celebrated newsvendor problem and the appointment scheduling problem: if we have only a single job (surgery), i.e., n = 1, then the appointment scheduling problem becomes the newsvendor problem [8]. In Chapter 5, we introduce a new set of advance multi-period quantity (order or supply) commitment problems with random characteristics (demand or yield), and underage and overage costs if there is a mismatch between committed and realized quantities. We show that these multi-period quantity commitment problems can be modeled and solved as special cases of the appointment scheduling problem. To the best of our knowledge, the problems introduced in Chapter 5 have not yet been studied. There are exciting future directions and improvement possibilities for this research.
One possibility is to find an optimal sequence and appointment schedule simultaneously, i.e., given the jobs, determine a sequence and a job appointment schedule minimizing the total expected cost. This problem is likely to be hard [5], but it may be possible to develop heuristic algorithms with performance guarantees. Studying some special cases of this problem may shed light on the general case. In the near future, we are planning to implement the algorithms in Appendix A and develop a computational engine for the appointment scheduling problem. Besides testing and comparing the discrete, non-smooth and hybrid algorithms in computational experiments, we plan to put our findings into practice. We are in contact with local healthcare organizations to apply our results to real data and compare the appointment schedules determined by our methods with current practices. Furthermore, we may test the performance of various heuristic methods for both the appointment scheduling and the sequencing problem once the computational engine is built. Another research avenue that we consider is an extension in which both job arrivals and durations are random, and jobs belong to different priority classes. The goal is to determine a booking policy such that the target waiting times of each priority class are met at minimum cost; alternatively, one can determine the waiting times attainable for a given budget. Last but not least, there are many interesting incentive problems in healthcare. For example, in generating an OR block schedule there is an issue of allocating available OR time between different specialties as well as between surgeons within a specialty. Each surgeon has her/his own number of patients waiting for surgery. Furthermore, there are some other non-medical factors to consider, such as the rank of the surgeon and how well the specialty is represented in the OR block allocation process. We consider studying this problem and developing a mechanism that is fair and transparent for everyone, with the overall objective of reducing surgical waiting times.

6.1 Bibliography

[1] Tugba Cayirli and Emre Veral. Outpatient scheduling in health care: A review of literature. Production and Operations Management, 12(4), 2003.
[2] Brian Denton and Diwakar Gupta. A sequential bounding approach for optimal appointment scheduling. IIE Transactions, 35:1003–1016, 2003.
[3] Jean-Baptiste Hiriart-Urruty and Claude Lemarechal. Convex Analysis and Minimization Algorithms I and II. Springer, 1993.
[4] Kazuo Murota.
Discrete Convex Analysis, SIAM Monographs on Discrete Mathematics and Applications, Vol. 10. Society for Industrial and Applied Mathematics, 2003.
[5] Lawrence W. Robinson, Yigal Gerchak, and Diwakar Gupta. Appointment times which minimize waiting and facility idleness. Working Paper, DeGroote School of Business, McMaster University, 1996.
[6] R. Tyrrell Rockafellar. Theory of subgradients and its applications to problems of optimization: convex and nonconvex functions. 1981.
[7] P. Patrick Wang. Static and dynamic scheduling of customer arrivals to a single-server system. Naval Research Logistics, 40(3):345–360, 1993.
[8] E. N. Weiss. Models for determining estimated start times and case orderings in hospital operating rooms. IIE Transactions, 22(2):143–150, 1990.

A Minimizing a Discrete-Convex Function for Appointment Scheduling¹

We consider the appointment scheduling problem with discrete random durations studied in Chapter 2. Under a simple sufficient condition, the objective of the appointment scheduling problem is discretely convex as a function of the integer appointment vector (Chapter 2), but it is convex yet non-smooth when appointment vectors are continuous (Chapter 3). In this chapter, we compute a subgradient of the objective function in polynomial time for any given (real-valued) appointment schedule with independent processing durations. We also extend the computation of the expected total cost (in polynomial time) to any (real-valued) appointment vector. Furthermore, we develop a special-purpose integer rounding algorithm that allows us to develop a hybrid approach combining both discrete convexity and non-smooth convex optimization methods. We plan to implement these algorithms and compare the different approaches in computational experiments.

A.1 Introduction and Motivation

We consider the appointment scheduling problem with discrete random durations studied in Chapter 2.
The goal of appointment scheduling is to determine an optimal planned start schedule, i.e., an optimal appointment schedule for a given sequence of jobs on a single processor, such that the expected total underage and overage cost is minimized. In Chapter 2, we showed that the objective function of the appointment scheduling problem is discretely convex (under α-monotonicity) and that there exists an optimal integer appointment schedule minimizing the objective over integer appointment vectors. These results on the objective function and optimal appointment schedule enabled us to develop a polynomial time algorithm, based on discrete convexity, that, for a given processing sequence, finds an appointment schedule minimizing the total expected cost. On the other hand, in Chapter 3 we considered the same appointment scheduling problem under the assumption that the duration probability distributions are not known and only a set of independent samples is available, e.g., historical data. We showed that, under a simple sufficient condition, the same objective function is convex (as a function of the continuous appointment vector) and non-smooth. Under this condition we characterized the subdifferential of the objective function with a closed-form formula. This characterization is useful; it allows us to develop two very important extensions. First, we used it in Chapter 3 to determine bounds on the number of independent samples required to obtain a provably near-optimal solution with high probability. Second, we use it in this chapter to obtain a subgradient in polynomial time, so that subgradient methods (together with discrete methods) can be used to optimize the appointment scheduling objective.

¹ A version of this chapter will be submitted for publication. Begen M.A. and Queyranne M. Minimizing a Discrete-Convex Function for Appointment Scheduling.
In this chapter, we use the subdifferential characterization of Chapter 3 with independent processing durations and compute a subgradient in polynomial time for any given appointment schedule. The reason we want a quickly obtainable subgradient is to be able to use non-smooth convex optimization methods to find an optimal appointment schedule. From Chapter 2 we already have a polynomial time algorithm to minimize the objective and obtain an optimal appointment schedule; however, it is not clear at the moment which technique (discrete or non-smooth) will be faster in practice. Finding a subgradient in polynomial time is not trivial because the subdifferential formulas include exponentially many terms, and some of the probability computations are complicated. In addition to a subgradient, we obtain an easily computable lower bound on the optimal objective value. Furthermore, we extend the computation of the expected total cost (in polynomial time) to any (real-valued) appointment vector. These results allow us to use non-smooth convex optimization techniques to find an optimal schedule. To combine the discrete and non-smooth algorithms, we develop a special-purpose integer rounding method which takes any fractional solution and rounds it to an integer one with the same or improved objective value. This rounding algorithm enables us to develop a hybrid approach combining both discrete convexity and non-smooth convex optimization methods. In the near future, we are planning to implement our algorithms and compare the different approaches in computational experiments.

This chapter is organized as follows.² We start by finding a lower bound on the objective functions F and F^D in Section A.2. In this section, we also extend the computation of F(A) (and hence F^D(Ã)) to any real appointment vector A. In Section A.3, we find a subgradient of F (and F^D) in polynomial time.
We first compute the probabilities required to obtain a subgradient from the subdifferential ∂F(A) and discuss the complexity of this computation. Then we show how to find a subgradient in polynomial time. In Section A.4, we develop the rounding algorithm and discuss how it can be used with the existing discrete and non-smooth algorithms to build a hybrid approach. Finally, we conclude the chapter in Section A.5.

A.2 Lower Bounds (on the Value) and Computation of F(A) and F^D(Ã) for any Real A

In this section, we find an easily computable lower bound on the values of F(A) and F^D(Ã). When the underage cost coefficients u_i (1 ≤ i ≤ n) are identical, F(A) has additional interesting properties. In this case, α-monotonicity is automatically satisfied, and hence F(A) is convex by Convexity Proposition 3.3.3 of Chapter 3 (the same result for F^D(Ã) follows from Corollary 3.3.5 of Chapter 3). Furthermore, a lower bound on the value of F(A) can be easily computed. A remark is in order here. If the underage cost coefficients are not identical for F(A), then define u = min{u_1, u_2, ..., u_n} and

f(A) = E_p [ \sum_{j=1}^{n} ( o_j (C_j − A_{j+1})^+ + u (A_{j+1} − C_j)^+ ) ],

i.e., replace u_i with u for all i (1 ≤ i ≤ n), and find a lower bound for f(A) as described in this section. Since f(A) ≤ F(A), the obtained lower bound on f(A) is also a lower bound for F(A) with non-identical u_i.

We start by expressing F(A) in a different but equivalent way that will be essential in finding a lower bound on the value of F(A) (and F^D(Ã)). Our first result is a corollary to Lemma 3.3.1 in Chapter 3.

² Since this chapter is included in the thesis as an appendix, we omit the notation introduction and the formal description of the appointment problem, and refer the reader to Chapter 2 and Chapter 3.

Corollary A.2.1.
If u_i = u for all i (1 ≤ i ≤ n) then

F(A) = E_p [ \sum_{j=1}^{n} ( o_j (C_j − A_{j+1})^+ + u (A_{j+1} − C_j)^+ ) ]
     = E_p [ \sum_{j=1}^{n} o_j (C_j − A_{j+1})^+ + u ( max{A_{n+1}, C_n} − \sum_{k=1}^{n} p_k ) ].

Proof. The proof is an application of Lemma 3.3.1 with specific α_i (1 ≤ i ≤ n). Choose α_i = 0 (1 ≤ i ≤ n). Then β_i = o_i (1 ≤ i ≤ n), γ_i = 0 (1 ≤ i < n) and γ_n = u_n = u. The result now follows directly from Lemma 3.3.1.

We need the following definition before computing a lower bound on F(A).

Definition A.2.2. (Single Period) Newsvendor Problem [8]. A newsvendor must decide the number of units Q to purchase before the demand Y is realized. The newsvendor pays c_h for each unit remaining unsold and c_p for each unit of unsatisfied demand. The objective of the newsvendor is to choose the Q minimizing the expected cost. This problem is well studied and has a closed-form solution: if H(Y) is the cumulative distribution of Y and Q* the optimum solution, then

H(Q*) = c_p / (c_h + c_p).

We may think of each job j (1 ≤ j ≤ n) as a single-period newsvendor problem, as if we had only that job to process, and find its solution as given in Definition A.2.2. We use this idea to obtain a lower bound on F(A) in our next result.

Proposition A.2.3. Let u_i = u (1 ≤ i ≤ n), let A* be an optimal appointment vector for F(A) and a*_j the (single-period) newsvendor solution for job j (1 ≤ j ≤ n). Then \sum_{j=1}^{n} E_p [ o_j (p_j − a*_j)^+ + u (a*_j − p_j)^+ ] is a lower bound for F(A*).

Proof. Consider the following optimization problem:

OPT1:  min_A  E_p [ \sum_{j=1}^{n} o_j (C_j^p − A_{j+1})^+ + u ( max{A_{n+1}, C_n^p} − \sum_{k=1}^{n} p_k ) ]
       subject to  A_1 = 0,  C_1^p = p_1,  C_j^p = max(A_j, C_{j−1}^p) + p_j  for 2 ≤ j ≤ n, for all p.

OPT1 is the stochastic program for minimizing F(A), hence its optimum value is F(A*). We need all the constraints of OPT1, as they capture the essential dynamics of our scheduling problem.
We rewrite the constraints C_j^p = max(A_j, C_{j−1}^p) + p_j as C_j^p ≥ A_j + p_j and C_j^p ≥ C_{j−1}^p + p_j for all p, and obtain OPT2:

OPT2:  min_A  E_p [ \sum_{j=1}^{n} o_j (C_j^p − A_{j+1})^+ + u ( max{A_{n+1}, C_n^p} − \sum_{k=1}^{n} p_k ) ]
       subject to  A_1 = 0,  C_1^p = p_1,  C_j^p ≥ A_j + p_j  and  C_j^p ≥ C_{j−1}^p + p_j  for 2 ≤ j ≤ n, for all p.

Since the objective function coefficients o_j (1 ≤ j ≤ n − 1), o_n, u are all non-negative and the objective function is non-decreasing in the C_j^p, OPT1 and OPT2 are equivalent. Consequently, A* is also an optimal appointment vector for OPT2.

Now we relax the constraints C_j^p ≥ C_{j−1}^p + p_j in OPT2 and obtain the relaxation OPT3:

OPT3:  min_A  E_p [ \sum_{j=1}^{n} o_j (C_j^p − A_{j+1})^+ + u ( max{A_{n+1}, C_n^p} − \sum_{k=1}^{n} p_k ) ]
       subject to  A_1 = 0,  C_1^p = p_1,  C_j^p ≥ A_j + p_j  for 2 ≤ j ≤ n, for all p.

Observe that OPT3 is a relaxation of OPT2, and it decomposes into n independent optimization problems, one for each j. Furthermore, the constraints C_j^p ≥ A_j + p_j are binding in any optimal solution (since the objective coefficients are all non-negative and the objective is non-decreasing in the C_j^p). Let a_j = A_{j+1} − A_j for 1 ≤ j ≤ n; then, using C_j^p = A_j + p_j for 1 ≤ j ≤ n, we can rewrite OPT3 as

min_a  E_p [ \sum_{j=1}^{n} o_j (p_j − a_j)^+ + u ( max{A_{n+1}, A_{n+1} − a_n + p_n} − \sum_{k=1}^{n} p_k ) ].

By Corollary A.2.1, OPT3 may be written equivalently as

min_a  E_p [ \sum_{k=1}^{n} ( o_k (p_k − a_k)^+ + u (a_k − p_k)^+ ) ],

which is nothing but the sum of n independent newsvendor problems, so we can minimize it by setting a*_i = P_i^{−1}( o_i / (o_i + u) ) for 1 ≤ i ≤ n, where P_i^{−1}(·) is the inverse cumulative distribution of the duration of job i (1 ≤ i ≤ n). Therefore the result follows.

Remark A.2.4. Let f(A) = E_p [ \sum_{k=1}^{n} ( o_k (p_k − (A_{k+1} − A_k))^+ + u ((A_{k+1} − A_k) − p_k)^+ ) ]; then f(A) is not necessarily a lower bound for F(A). Consider the following example with deterministic processing times.
Let n = 4, p_1 = 4, p_2 = 6, p_3 = 1, p_4 = 1, A_1 = 0, A_2 = 3, A_3 = 6, A_4 = 9 and A_5 = 13. Then f(A) = o_1 + 3o_2 + 2u + 3u and F(A) = o_1 + 4o_2 + 2o_3 + u. So for u = o_2 = o_3, f(A) > F(A). However, we find a different lower bound function (as a function of A) for F(A) in Lemma 3.5.6 of Chapter 3.

Next we find a lower bound on the value of the objective with a due date D, i.e., on the value of F^D(Ã). We obtain a lower bound similar to that in Proposition A.2.3.

Corollary A.2.5. Let u_i = u (1 ≤ i ≤ n), let Ã* be an optimal appointment vector for F^D(Ã) and a*_j the (single-period) newsvendor solution for job j (1 ≤ j ≤ n). Then \sum_{j=1}^{n} E_p [ o_j (p_j − a*_j)^+ + u (a*_j − p_j)^+ ] is a lower bound for F^D(Ã*).

Proof. Consider the following optimization problem:

OPT^D:  min_A  E_p [ \sum_{j=1}^{n} o_j (C_j^p − A_{j+1})^+ + u ( max{A_{n+1}, C_n^p} − \sum_{k=1}^{n} p_k ) ]
        subject to  A_{n+1} = D,  A_1 = 0,  C_1^p = p_1,  C_j^p = max(A_j, C_{j−1}^p) + p_j  for 2 ≤ j ≤ n, for all p.

If we relax the constraint A_{n+1} = D of OPT^D, we immediately obtain the optimization problem for F(A) (of course with u_i = u). Therefore F(A*) ≤ F^D(Ã*). On the other hand, \sum_{j=1}^{n} E_p [ o_j (p_j − a*_j)^+ + u (a*_j − p_j)^+ ] is a lower bound for F(A*) by Proposition A.2.3. Hence

\sum_{j=1}^{n} E_p [ o_j (p_j − a*_j)^+ + u (a*_j − p_j)^+ ] ≤ F(A*) ≤ F^D(Ã*).

This completes the proof.

We now show how to compute F(A) (and F^D(Ã)) when the processing durations are independent and A is not integer. The following result follows from Theorem 2.7.2 in Chapter 2 together with an observation on keeping track of the previous (potential) fractional points. This extra bookkeeping in the case of non-integer appointment vectors costs a factor of n in the complexity of computing F(A).

Corollary A.2.6. If the processing durations are stochastically independent and A is a real appointment vector then F(A) may be computed in O(n^3 p_max^2) time.

Proof.
The first job starts at time zero, so S_1 = A_1 = 0 and C_1 = p_1, i.e., the distribution of C_1 is that of p_1. Next, we look at the start times S_i (2 ≤ i ≤ n). We have S_i = max(A_i, C_{i−1}), so for all k,

Prob{S_i = k} =  0                     if k < A_i,
                 Prob{C_{i−1} ≤ k}     if k = A_i,
                 Prob{C_{i−1} = k}     if k > A_i.          (A.1)

Note that S_i and p_i are independent because S_i is completely determined by p_1, p_2, ..., p_{i−1} and A_1, A_2, ..., A_i. A remark is in order here. If A is integer then k ranges over 0, 1, ..., np_max; however, when A is not integer then, in addition to these integer values, we also need to consider the (possibly distinct) fractional values arising from the non-integer A_i. A_1 = 0, so it contributes at most np_max integer values for k; A_2 contributes at most (n − 1)p_max, A_3 at most (n − 2)p_max, and so on. Therefore in total we need to consider at most n^2 p_max distinct values (only np_max when A is integer). Since C_i = S_i + p_i, by conditioning on p_i and using the independence of p_i and S_i, we obtain for all k

Prob{C_i = k} = Prob{S_i + p_i = k} = \sum_{j=0}^{p_max} Prob{S_i = k − j} Prob{p_i = j},          (A.2)

and Prob{C_{i−1} ≤ k} = Prob{C_{i−1} = k} + Prob{C_{i−1} ≤ k − 1}. For each i − 1, Prob{C_{i−1} ≤ k} may be computed in O((i − 1)^2 p_max) time. Hence Prob{C_i = k} can be computed once we have the distribution of S_i. For each job i and value k, computing Prob{S_i = k} by Eq. (A.1) requires a constant number of operations, and computing Prob{C_i = k} by Eq. (A.2) requires O(p_i + 1) operations. Therefore the total number of operations needed for computing the whole start time and completion time distributions of job i is O(n^2 p_max^2). The distributions of T_i and E_i, and their expected values E_p T_i and E_p E_i, can then be determined in O(n^2 p_max^2) time. Therefore the objective value F(A) is obtained in O(n^3 p_max^2) time.
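The recursion in Eqs. (A.1)–(A.2) translates directly into code. Below is a minimal Python sketch (the naming is ours), using dictionaries keyed by the support points so that fractional A_i values are handled exactly as in the proof:

```python
def start_dist(c_prev, A_i):
    """Eq. (A.1): S_i = max(A_i, C_{i-1}); all mass at or below A_i collapses onto A_i."""
    s = {A_i: sum(pr for k, pr in c_prev.items() if k <= A_i)}
    for k, pr in c_prev.items():
        if k > A_i:
            s[k] = s.get(k, 0.0) + pr
    return s

def convolve(s, p_dist):
    """Eq. (A.2): C_i = S_i + p_i, with S_i and p_i independent."""
    c = {}
    for k, pk in s.items():
        for d, pd in p_dist.items():
            c[k + d] = c.get(k + d, 0.0) + pk * pd
    return c

def F_of_A(A, dists, u, o):
    """Expected total cost F(A); dists[j] = {duration value: probability} for job j."""
    c = dict(dists[0])  # S_1 = A_1 = 0, so C_1 is distributed as p_1
    total = 0.0
    for j in range(len(dists)):
        if j > 0:
            c = convolve(start_dist(c, A[j]), dists[j])
        total += sum(pr * (o[j] * max(k - A[j + 1], 0) + u[j] * max(A[j + 1] - k, 0))
                     for k, pr in c.items())
    return total
```

For instance, with a fractional appointment vector A = (0, 1.5, 3) and deterministic durations p_1 = 1, p_2 = 2, job 1 leaves 0.5 units of idle time and job 2 runs 0.5 units past A_3, so F(A) = 0.5 u_1 + 0.5 o_2.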
When the appointment vector is not integer, the number of possible values (for completion and start times) grows as n^2 p_max, compared with np_max in the integer case shown above. However, note that in practice some components may share the same fractional parts, and this speeds up the computation.

A.3 Obtaining a Subgradient in Polynomial Time

In Section 3.4, we characterized the subdifferential of F(A) and obtained the closed-form formula given in Eq. (3.15) for ∂F(A). We also expressed ∂F(A) component by component for a given convex hull weight vector X, i.e., g_k(X, A), the k-th coordinate of a subgradient at the point A for a particular X. By Eq. (3.16), recall that g_k(X, A) is given by

g_k(X, A) = \sum_{j=k}^{n} \alpha_j \sum_{S \in P^*([j])} Prob{I_j = S} X^{L}_{kj}(S) − \alpha_{k−1} \sum_{S \in P^*([k−1])} Prob{I_{k−1} = S}
          + \sum_{j=k}^{n} \beta_j \sum_{S \in P^*([j])} Prob{I_j^> = S} X^{T_>}_{kj}(S) − \beta_{k−1} \sum_{S \in P^*([k−1])} Prob{I_{k−1}^> = S}
          + \sum_{j=k}^{n} \beta_j \sum_{S \in P^*([j])} Prob{I_j^= = S} X^{T_=}_{kj}(S) − \beta_{k−1} \sum_{S \in P^*([k−1])} Prob{I_{k−1}^= = S} \sum_{i \in S} X^{T_=}_{i,k−1}(S)
          + \sum_{j=k}^{n} \gamma_j \sum_{S \in P^*([j])} Prob{I_j^> = S} X^{M_>}_{kj}(S) − \gamma_{k−1} \sum_{S \in P^*([k−1])} Prob{I_{k−1}^> = S}
          + \sum_{j=k}^{n} \gamma_j \sum_{S \in P^*([j])} Prob{I_j^= = S} X^{M_=}_{kj}(S ∪ {j+1}) + \gamma_{k−1} ( 1 − \sum_{S \in P^*([k−1])} Prob{I_{k−1}^= = S} ).          (A.3)

To obtain a subgradient of F, as seen in Eq. (A.3), we need to compute certain probability terms, e.g., Prob{I_{k−1}^= = S}, choose an appropriate (i.e., feasible) X vector, and find a way to deal with the exponentially many terms S ∈ P^*([j]). Our strategy is to compute the probabilities first. Then we discuss the complexity of obtaining a subgradient. Finally, we show a way to obtain a subgradient (in fact, two subgradients) fast, i.e., in polynomial time.

A.3.1 Probability Computations

In this section we compute the probabilities Prob{I_j = S} and Prob{I_j^η = S} for S ∈ P^*([j]), j ∈ [n + 1] and η ∈ {>, =, <}.
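As a concrete reference for the definitions above, the sketch below (0-indexed and with our own naming) computes I_j = arg max_{k ≤ j} {A_k + P_{kj}} for a realized duration vector and reports which relation η holds between the common maximum value and A_{j+1}; by the observation leading to Eq. (A.4), I_j^η is then either all of I_j or empty:

```python
def critical_set(A, p, j):
    """Return I_j and the relation eta between max_k (A_k + P_{kj}) and A_{j+1}.
    Jobs are 0-indexed here, so P_{kj} = p_k + ... + p_j."""
    vals = [A[k] + sum(p[k:j + 1]) for k in range(j + 1)]
    m = max(vals)  # this common maximum is also the completion time C_j
    I = [k for k, v in enumerate(vals) if v == m]
    eta = '>' if m > A[j + 1] else ('=' if m == A[j + 1] else '<')
    return I, eta
```

For example, with A = (0, 2, 4) and p = (3, 1), `critical_set([0, 2, 4], [3, 1], 1)` returns `([0], '=')`: job 0 alone is critical and the chain finishes exactly at A_3's slot, so I_1^= = I_1 while I_1^> and I_1^< are empty.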
Recall that, by Eq. (3.4) and Eq. (3.6), we have

I_j = arg max_{k ≤ j} { A_k + P_{kj} },
I_j^η = { k ∈ I_j : A_k + P_{kj} η A_{j+1} }.

For easier computation and for later purposes, we rewrite I_j^η. Let max I_j = max{ k : k ∈ I_j }; then

I_j^η = { k ∈ I_j : A_k + P_{kj} η A_{j+1} } = { k ∈ I_j : A_{max I_j} + P_{max I_j, j} η A_{j+1} }.          (A.4)

Eq. (A.4) follows from the fact that A_k + P_{kj} is the same quantity for every k ∈ I_j, in particular for k = max I_j. Let i_s = max{ i : i ∈ S }; then by using Eq. (A.4) we get

Prob{I_j^η = S} = Prob{ I_j = S and A_{i_s} + P_{i_s, j} η A_{j+1} }.          (A.5)

We first compute Prob{I_j = S}. Let S ∈ P^*([j]); then

Prob{I_j = S} = Prob( ∩_{i ∈ S} (i ∈ I_j)  ∩  ∩_{k ∈ [j]−S} (k ∉ I_j) ).          (A.6)

Eq. (A.6) simply means that for two sets to be equal they must agree element by element, i.e., each element of S must be an element of I_j, and an element which is not in S must not be in I_j. We now provide an intuitive result which is also crucial in computing Prob{I_j = S}.

Lemma A.3.1. If i ∈ I_j then job i starts on time, i.e., S_i = A_i.

Proof. For i = 1 the result holds since job 1 always starts on time (i.e., at time A_1 = 0). For 2 ≤ i ≤ n + 1 we show the contrapositive. If job i does not start on time then job i is late, i.e., the completion date of job i − 1 is strictly greater than the appointment date of job i (C_{i−1} > A_i). Therefore job i cannot be on the critical path of C_j, and hence i ∉ I_j.

We give an example to illustrate Eq. (A.6) and the application of Lemma A.3.1.

Example A.3.2. Let S = {1, 2, 3, 5}; then Prob{I_5 = {1, 2, 3, 5}} = Prob{1 ∈ I_5 and 2 ∈ I_5 and 3 ∈ I_5 and 4 ∉ I_5 and 5 ∈ I_5}. We can visualize the event I_5 = {1, 2, 3, 5} as depicted in Figure A.1.

[Figure A.1: Visualization of the event I_5 = {1, 2, 3, 5}]

We use Lemma A.3.1 in the following arguments to deduce that if i ∈ I_j then job i starts on time. Let us examine the probability Prob{I_5 = {1, 2, 3, 5}} by starting from the end. 5 ∈ I_5 tells us that job 5 starts at time A_5.
4 ∉ I_5 and 3 ∈ S imply that job 3 starts on time and that we must have p_3 > A_4 − A_3 (if p_3 = A_4 − A_3 then 4 ∈ I_5; if p_3 < A_4 − A_3 then 3 ∉ I_5), and P_{34} = A_5 − A_3 since both 3 and 5 ∈ I_5 (if P_{34} > A_5 − A_3 then 5 ∉ I_5; if P_{34} < A_5 − A_3 then 3 ∉ I_5). Similarly, 2 ∈ I_5 gives us that job 2 starts on time and p_2 = A_3 − A_2. Finally, 1 ∈ I_5 implies that p_1 = A_2 − A_1 (note that A_1 = 0); otherwise either 1 ∉ I_5 (if p_1 < A_2 − A_1) or 2 ∉ I_5 (if p_1 > A_2 − A_1). Therefore,

Prob{I_5 = {1, 2, 3, 5}}
= Prob{1 ∈ I_5 and 2 ∈ I_5 and 3 ∈ I_5 and 4 ∉ I_5 and 5 ∈ I_5}
= Prob{1 ∈ I_5 and 2 ∈ I_5 and p_3 > A_4 − A_3 and P_{34} = A_5 − A_3}
= Prob{1 ∈ I_5 and p_2 = A_3 − A_2 and p_3 > A_4 − A_3 and P_{34} = A_5 − A_3}
= Prob{p_1 = A_2 − A_1 and p_2 = A_3 − A_2 and p_3 > A_4 − A_3 and P_{34} = A_5 − A_3}
= Prob{p_1 = A_2 − A_1} Prob{p_2 = A_3 − A_2} Prob{p_3 > A_4 − A_3 and P_{34} = A_5 − A_3}.

The last equality follows from the independence of the job duration distributions. Furthermore, convolutions such as P_{34} and their corresponding probabilities, such as Prob{p_3 > A_4 − A_3 and P_{34} = A_5 − A_3}, can be computed efficiently since the job duration distributions are discrete and independent. We compute these probabilities with the RCA (Recursive Computation Algorithm) developed later in this section. We provide another example to show the other possibilities that may occur in the computation of Prob{I_j = S}.

Example A.3.3. Let S = {3, 5, 6}. Then

Prob{I_8 = {3, 5, 6}} = Prob{1 ∉ I_8 and 2 ∉ I_8 and 3 ∈ I_8 and 4 ∉ I_8 and 5 ∈ I_8 and 6 ∈ I_8 and 7 ∉ I_8 and 8 ∉ I_8}.

The event I_8 = {3, 5, 6} may be visualized as shown in Figure A.2.

[Figure A.2: Visualization of the event I_8 = {3, 5, 6}]

As in Example A.3.2, we use Lemma A.3.1 to deduce that if i ∈ I_j then job i starts on time. We compute Prob{I_8 = {3, 5, 6}} by starting from the end and by considering blocks of jobs between successive elements of S. 6 ∈ I_8 gives us that job 6 starts on time.
7 ∉ I_8 implies A_6 + p_6 > A_7 (if A_6 + p_6 = A_7 then 7 ∈ I_8; if A_6 + p_6 < A_7 then 6 ∉ I_8). Similarly, 8 ∉ I_8 implies A_6 + P_{67} > A_8. 5 ∈ I_8 means that job 5 starts on time, and hence A_5 + p_5 = A_6 (if A_5 + p_5 > A_6 then 6 ∉ I_8; if A_5 + p_5 < A_6 then 5 ∉ I_8). 3 ∈ I_8 implies that job 3 starts on time. 4 ∉ I_8 tells us that A_3 + p_3 > A_4 (if A_3 + p_3 = A_4 then 4 ∈ I_8; if A_3 + p_3 < A_4 then 3 ∉ I_8). Furthermore, 3 ∈ I_8 also implies that A_3 + P_{34} = A_5 (if A_3 + P_{34} < A_5 then 3 ∉ I_8; if A_3 + P_{34} > A_5 then 5 ∉ I_8). The remaining jobs, jobs 1 and 2, are not in I_8. This implies that the completion time C_2 of job 2 is strictly less than A_3 (if C_2 > A_3 then 3 ∉ I_8; if C_2 = A_3 then 1 ∈ I_8 or 2 ∈ I_8 or both). Collecting the arguments above, we obtain:

Prob{I_8 = {3, 5, 6}}
= Prob{1 ∉ I_8 and 2 ∉ I_8 and 3 ∈ I_8 and 4 ∉ I_8 and 5 ∈ I_8 and 6 ∈ I_8 and 7 ∉ I_8 and 8 ∉ I_8}
= Prob{1 ∉ I_8 and 2 ∉ I_8 and 3 ∈ I_8 and 4 ∉ I_8 and 5 ∈ I_8 and A_6 + p_6 > A_7 and A_6 + P_{67} > A_8}
= Prob{1 ∉ I_8 and 2 ∉ I_8 and 3 ∈ I_8 and 4 ∉ I_8 and A_5 + p_5 = A_6 and A_6 + p_6 > A_7 and A_6 + P_{67} > A_8}
= Prob{1 ∉ I_8 and 2 ∉ I_8 and A_3 + p_3 > A_4 and A_3 + P_{34} = A_5 and A_5 + p_5 = A_6 and A_6 + p_6 > A_7 and A_6 + P_{67} > A_8}
= Prob{C_2 < A_3 and A_3 + p_3 > A_4 and A_3 + P_{34} = A_5 and A_5 + p_5 = A_6 and A_6 + p_6 > A_7 and A_6 + P_{67} > A_8}
= Prob{C_2 < A_3} Prob{A_3 + p_3 > A_4 and A_3 + P_{34} = A_5} Prob{A_5 + p_5 = A_6} Prob{A_6 + p_6 > A_7 and A_6 + P_{67} > A_8}.

As in Example A.3.2, the last equality follows from the independence of the job duration distributions. We next obtain a formula for computing Prob{I_j = S} for any S ∈ P*([j]). The nice feature of this probability is that it breaks down into blocks of independent events along the elements of S, as seen in Examples A.3.2 and A.3.3. We now prove this result.

Lemma A.3.4. Write any subset S ∈ P*([j]) as an ordered list S = {i_1, ..., i_s}, where i_l < i_{l+1} for l = 1, 2, ..., s − 1 and s = |S|.
Then P rob{Ij = S} = P rob{Ci1 −1 < Ai1 } s−1  ·  Pil k > Ak+1 − Ail  P rob  ∩  Pil il+1 −1 = Ail+1 − Ail  il ≤k<il+1 −1  l=1  ·P rob  Pis k > Ak+1 − Ais is ≤k<j  where P rob{C0 < A1 } = 1. Proof. As in Examples A.3.2 and A.3.3, we consider blocks of jobs between successive elements of S. We start our derivation from the largest element of S (i.e., is ) and use Lemma A.3.1 (viz, if i ∈ Ij then job i starts on time, i.e., it starts at Ai ). The largest element of Ij is is hence is is the last job, before j + 1, which starts on time (indeed, otherwise there must exist a job is < k ≤ j either with Ck−1 < Ak implying that As + Psj < Ak + Pkj and is ∈ Ij , or with Ck−1 = Ak implying that As + Psj = Ak + Pkj and is < k ∈ Ij ). Therefore all jobs after is must be late, i.e., event  is ≤k<j  Ais +Pis k > Ak+1  holds. Next, we consider il . Since il , il+1 ∈ Ij both of them are on time and we must have Ail + Pil ,il+1 −1 = Ail+1 (indeed, if Ail + Pil ,il+1 −1 < Ail+1 then il ∈ Ij , if Ail + Pil ,il −1 > Ail then il ∈ Ij ). Furthermore, all jobs between il and il+1 must be late, i.e., Ail + Pil ,k > Ak+1 for il ≤ k < il+1 − 1 (if Ail + Pil ,k = Ak+1 then k + 1 ∈ Ij , if Ail + Pil ,k < Ak+1 then il ∈ Ij ). Therefore, the following event must hold: s−1  Ail +Pil k > Ak+1 l=1  ∩  Ail +Pil il+1 −1 = Ail+1  ∩  Ais +Pis k > Ak+1 is ≤k<j  il ≤k<il+1 −1  Finally, we consider the jobs before i1 . Since i1 ∈ Ij , i1 starts on time, i.e., Si1 = Ai1 . Furthermore, completion time of job i1 − 1 must be strictly less than Ai1 , i.e., Ci1 −1 < Ai1 (indeed, if Ci1 −1 = Ai1 then there exist at least one job k such that k ≤ i1 − 1 and k ∈ Ij , if Ci1 −1 > Ai1 then i1 ∈ Ij ). Therefore the following event must hold: Ci1 −1 < Ai1 part 1 s−1  ∩  Ail + Pil k > Ak+1 l=1  ∩  Ail + Pil il+1 −1 = Ail+1  il ≤k<il+1 −1 part 2  ∩  Ais + Pis k > Ak+1  .  is ≤k<j part 3  167  .  This proves that Ij = S ⊆ part 1 ∩ part 2 ∩ part 3 . 
Conversely, assume that outcome p is such that the event  part 1 ∩ part 2 ∩ part 3  holds. part 1 implies m ∈ Ij for 1 ≤ m < i1 since Am + Pm i1 −1 ≤ Ci1 −1 < Ai1 (i.e., job m cannot be on the critical path of Ij ). part 3 implies m ∈ Ij for is < m ≤ j since Ais + Pis m > Am+1 (i.e., job m cannot be on the critical path of Ij ), and Ais + Pis j−1 > Aj implies is ∈ Ij . Ail + Pil il+1 −1 = Ail+1 (1 ≤ l ≤ s − 1) of part 2 implies that either il , il+1 ∈ Ij or il , il+1 ∈ Ij (i.e., they are either on the critical path of Ij or not). But as shown above is ∈ Ij , therefore i1 , i2 , ..., is ∈ Ij . The only remaining thing to show is that m ∈ Ij for il < m < il+1 and 1 ≤ l ≤ s − 1. But this follows from  i1 ≤k<il+1 −1 Ail  + Pil k > Ak+1 of  part 2 and the fact that il ∈ Ij . Therefore {Ij = S} = part 1 ∩ part 2 ∩ part 3 and P rob{Ij = S} = P rob Ci1 −1 < Ai1 part 1 s−1  ∩  Ail + Pil k > Ak+1 l=1  ∩  Ail + Pil il+1 −1 = Ail+1  il ≤k<il+1 −1 part 2  ∩  Ais + Pis k > Ak+1  .  (A.7)  is ≤k<j part 3  We now carefully look at part 1, part 2 and part 3. In terms of processing times (i.e., only pi ’s), part 1 is a function of pk for 1 ≤ k < i1 , part 2 is a function of pk for i1 ≤ k < is , and part 3 is a function of pk for is ≤ k < ij . Therefore part 1, part 2 and part 3 are independent of each other. Furthermore, we may break part 2 into s − 1 independent smaller parts (i.e., one for each l in 1 ≤ l ≤ s − 1) since each term Ail + Pil k > Ak+1  ∩  Ail + Pil il+1 −1 = Ail+1  il ≤k<il+1 −1  is (in terms of processing times) a function of only pk for il ≤ k ≤ il+1 − 1. Then by  168  independence we obtain P rob{Ij = S}  = P rob{Ci1 −1 < Ai1 } s−1  ·  P rob  ∩  Ail + Pil k > Ak+1  Ail + Pil il+1 −1 = Ail+1  il ≤k<il+1 −1  l=1  · P rob  Ais + Pis k > Ak+1 is ≤k<j  This completes the proof. Remark A.3.5. Note that the probability of a set which is an intersection of sets over an empty set of indices is 1. 
Therefore, if S = {1} then by Lemma A.3.4, Pis k > Ak+1 − Ais  P rob Ij = {1} = P rob is ≤k<j  and if S = {j} then by Lemma A.3.4, P rob Ij = {j} = P rob Cj−1 < Aj . Probability P rob{Ci1 −1 < Ai1 } can be directly obtained from the distribution of completion times as we compute the entire distribution for completion times in the expected cost computations. The remaining probabilities can be computed efficiently in a recursive manner. Probabilities that we need (for current and later purposes) are in the form of P rob ξ, η (l, m) = P rob  Pl,v ξ Av+1 − Al  ∩ Pl,m−1 η Am − Al  (A.8)  l≤v<m−1  for all 1 ≤ l < m ≤ n + 1 where ξ ∈ {>, ≥} and η ∈ {>, =, <}. We now give a recursive algorithm RCA (Recursive Computation Algorithm) to compute P rob ξ, η (l, m). For 1 ≤ i < k < m and (k − i + 1)pmax ≥ t ξ (Ak − Ai ) define ξ (t) = P rob Pik = t ∩ Ji,k  Piv ξ Av+1 − Ai  (A.9)  i≤v≤k−1 ξ Ji,i (t) = P rob Pii = t = P rob pi = t .  (A.10)  If we can compute these probabilities then ξ Jl,m−1 (t) : t η Am − Al  P rob ξ, η (l, m) = t  169  Recall that pmax is the maximum possible value for any job distribution. Initial conditions ξ for the recursion is given by Eq(A.10). We now provide the recursion for computing Ji,k (t)’s. ξ Ji,k (t)  = P rob (Pi,k = t) ∩  Pi,v ξ Av+1 − Ai  (A.11)  i≤v≤k−1  P rob (pk = u) ∩ Pi,k−1 = (t − u) ∩  =  =  Pi,v ξ Av+1 − Ai  P rob pk = u} P rob  Pi,k−1 = (t − u) ∩  P rob pk = u} P rob  =  Pi,v ξ Av+1 − Ai  (A.13)  i≤v≤k−1  u∈[pmax ]∩[t]  Pi,k−1 = (t − u) ∩  Pi,v ξ Av+1 − Ai  (A.14)  i≤v≤k−2  u∈[pmax ]∩[t] (t−u) ξ Ak −Ai  ξ P rob pk = u} Ji,k−1 (t − u) : (t − u) ξ Ak − Ai and u ∈ [pmax ] ∩ [t]  =  (A.12)  i≤v≤k−1  u∈[pmax ]∩[t]  (A.15)  u  ξ Eq(A.11) is just the definition of Ji,k (t) given by Eq(A.9). Then we write Pi,k as the  convolution of pk and Pi,k−1 in Eq(A.12). By recognizing the fact that pk and Pi,k−1 are independent, and pk does not appear in the remaining terms, we obtain Eq(A.13). 
Next, in ξ (t) and therefore Eq(A.14) we place the condition Pi,k−1 ξ Ak −Ai in the sum to obtain Ji,k−1  the recursion as shown in Eq(A.15). We now take a close look to the P rob ξ, η (l, m) computation. ξ Jl,l+1 (t)  = P rob pl ξ Al+1 − Al and Pl,l+1 = t  ξ Jl,l+2 (t)  = P rob pl ξ Al+1 − Al and Pl,l+1 ξ Al+2 − Al and Pl,l+2 = t  ... = .... ξ Jl,m−1 (t)  P rob ξ, η (l, m)  = P rob pl ξ Al+1 − Al and ... and Pl,m−2 ξ Am−1 − Al and Pl,m−1 = t = P rob pl ξ Al+1 − Al and ... and Pl,m−2 ξ Am−1 − Al and Pl,m−1 η Am − Al ξ Jl,m−1 (t)  =  (A.16)  t η Am −Al  We can now give the formula for P rob{Ij = S}. Write S ∈ P ∗ ([j]) as an ordered list: S = {i1 , ..., is } where il < il+1 for l = 1, 2, ..., s − 1 and s = |S|. Then by Lemma A.3.4 and Eq(A.16) we get: s−1  P rob>,= (il , il+1 ) P rob>,> (is , j) (A.17)  P rob{Ij = S} = P rob{Ci1 −1 < Ai1 } l=1  To compute P rob{Ijη = S} for η ∈ {>, =, <}, we need a Corollary. 170  Corollary A.3.6. s−1  P rob{Ijη  P rob>,= (il , il+1 ) P rob>,η (is , j + 1).  = S} = P rob{Ci1 −1 < Ai1 } l=1  Proof. By Eq(A.7) of Lemma A.3.4 we obtain P rob{Ijη = S} = P rob Ci1 −1 < Ai1 part 1 s−1  ∩  Ail + Pil k > Ak+1  ∩  Ail + Pil il+1 −1 = Ail+1  il ≤k<il+1 −1  l=1  part 2  ∩  ∩ Ais + Pis j η Aj+1  Ais + Pis k > Ak+1  .  is ≤k<j part 3  As in Lemma A.3.4, by independence we obtain P rob{Ijη = S} = P rob{Ci1 −1 < Ai1 } s−1  ·  P rob l=1  Ail + Pil k > Ak+1  ∩  Ail + Pil il+1 −1 = Ail+1  il ≤k<il+1 −1  ·P rob  Ais + Pis k > Ak+1  ∩ Ais + Pis j η Aj+1  .  is ≤k<j  Finally we use Eq(A.16) and obtain P rob{Ijη = S} as s−1  P rob{Ijη = S} = P rob{Ci1 −1 < Ai1 }  P rob>,= (il , il+1 ) P rob>,η (is , j + 1).  (A.18)  l=1  This completes the proof.  A.3.2  Complexity of Subgradient Computations  We first look at the complexity of the required probability computations. We see that the complexity of P rob{Ij = S}, P rob{Ij> = S} and P rob{Ij= = S} are the same. 
For Prob{I_j = S}, we need to compute and multiply the probabilities given in Eq(A.17), namely Prob{C_{i_1−1} < A_{i_1}}, ∏_{l=1}^{s−1} Prob^{>,=}(i_l, i_{l+1}) and Prob^{>,>}(i_s, j). Since ∏_{l=1}^{s−1} Prob^{>,=}(i_l, i_{l+1}) is a product of |S| − 1 terms Prob^{>,=}(·,·), the complexity of Prob{I_j = S} is that of computing Prob{C_{i_1−1} < A_{i_1}} together with |S| terms of the form Prob^{>,>}(i, j). We start with Prob{C_{i_1−1} < A_{i_1}}. The complexity of this probability computation is O((n p_max)²) when A is integer and O(n(n p_max)²) otherwise; this follows directly from the expected cost computations. Recall that h = n p_max, so we can represent the worst case as O(nh²). Next, we look at the complexity of Prob^{>,>}(i, j). We need to compute Prob^{>,>}(i, j) for every i (i < j), so we have a factor of n. Furthermore, in the worst case we need to compute J^>_{1,n}; for this convolution we may have as many as n p_max values and O(p_max) operations for each value. Finally, |S| may be at most n, so altogether the complexity of the |S| terms Prob^{>,>}(i, j) becomes O(n · n · n p_max · p_max). Since h = n p_max, this is O(nh²). Hence, for a given S ∈ P*([j]), the complexity of Prob{I_j = S} is O(nh²) for a particular j, and O(n²h²) over all jobs. However, because S ranges over P*([j]) (in both ∂F(A) and Prob{I_j = S}), we have an additional factor of O(2^n) in the complexity of the subgradient computations.

A.3.3  Obtaining a Subgradient Fast (in Polynomial Time)

The preceding characterizations of ∂F(A) may not be efficient enough for convex analysis methods to find an optimal appointment schedule, due to the O(2^n) factor in obtaining ∂F(A). Instead of fully characterizing ∂F(A), we may obtain a single subgradient g ∈ ∂F(A) quickly. We do so not by computing the convex hulls (in ∂L_j(A), ∂T_j(A) and ∂M_j(A)) but by choosing a particular element, of smallest (or largest) index, for each convex hull (i.e., setting a particular X variable to 1 and all others to zero in every convex combination).
Recall that the O(2n ) factor comes from the fact that S ∈ P ∗ ([j]) where [j] = {1, 2, ..., j}. We are now after just one subgradient, but not the whole subdifferential, so instead of considering all possible (non-empty) subsets of [j] (i.e., P ∗ ([j])) for S and computing the corresponding convex hull, we just choose the vector corresponding to the smallest (or the largest) element of each S. In other words, when we choose the smallest element of each S, (.)  (.)  we eliminate variables Xi,j (S) by constraining Xi,j (S) = 1 if i = min S = min{t : t ∈ S} (.)  and Xi,j (S) = 0 otherwise. Similarly, when we choose the largest element of each S, we (.)  (.)  eliminate variables Xi,j (S) by constraining Xi,j (S) = 1 if i = max S = max{t : t ∈ S} and (.)  Xi,j (S) = 0 otherwise. We illustrate this idea with an example. Example A.3.7. Let F (A) = L3 (A) then ∂F (A) = ∂L3 (A). We will find two subgradi-  172  ′  ′′  ents, g (A), g (A) ∈ ∂L3 (A). Recall that [3] = {1, 2, 3} and by Eq(3.10) of Chapter 3 L (1k − 14 )Xk3 (S) :  P rob{I3 = S}  ∂L3 (A) =  k∈ S  S ∈ P ∗ ([3])  L Xk3 (S) = 1 ∀S ∈ P ∗ ([3]) k∈ S L (S) ≥ 0 ∀S ∈ P ∗ ([3]) ∀k ∈ S Xk3  then we obtain g ′ (A) and g ′′ (A) as g ′ (A) =  P rob{1 ∈ I3 }(11 − 14 ) + P rob{2 ∈ I3 , 1 ∈ I3 }(12 − 14 ) + P rob{3 ∈ I3 , 2 ∈ I3 , 1 ∈ I3 }(13 − 14 ) 3  P rob{i = min I3 }(1i − 14 ) and  = i=1  g ′′ (A) =  P rob{3 ∈ I3 }(13 − 14 ) + P rob{2 ∈ I3 , 3 ∈ I3 }(12 − 14 ) + P rob{1 ∈ I3 , 2 ∈ I3 , 3 ∈ I3 }(11 − 14 ) 3  P rob{i = max I3 }(1i − 14 ).  = i=1  The nice thing about g ′ (A) and g ′′ (A) is that the probabilities appearing in the equations above may be computed efficiently by RCA. Recall that I3 = arg maxk≤3 {Ak + Pk3 }. For example, P rob{1 ∈ I3 } = P rob{A1 + p1 ≥ A2 and A1 + p1 + p2 ≥ A3 }.  P rob{2 ∈ I3 and 1 ∈ I3 } = P rob{A2 + p2 ≥ A3 and A1 + p1 < A2 } = P rob{A2 + p2 ≥ A3 }P rob{A1 + p1 < A2 }  (A.19)  (A.20) (A.21)  = P rob{A2 + p2 ≥ A3 }P rob{C1 < A2 }.  
P rob{3 ∈ I3 and 2 ∈ I3 and 1 ∈ I3 } = P rob{A2 + p2 < A3 and A1 + p1 + p2 < A3 } = P rob{C2 < A3 }.  (A.22)  In this paragraph’s arguments we use the Critical Path Lemma 2.4.1 of Chapter 2 for C3 . Eq(A.19) follows from the fact that 1 ∈ I3 if and only if A1 is on the critical path. In Eq(A.20), {2 ∈ I3 and 1 ∈ I3 } implies that A2 is on the critical path, but A1 is not on 173  the critical path, and this can only happen if and only if A2 + p2 ≥ A3 and A1 + p1 < A2 . Eq(A.21) is just the result of p1 and p2 ’s independence. {3 ∈ I3 and 2 ∈ I3 and 1 ∈ I3 } if and only if A3 is on the critical path, and A2 and A1 are not on the critical path. This can only happen if and only if C2 < A3 , and Eq(A.22) gives this result. In general, for ∀j ∈ [n + 1] and i ≤ j we need to compute probabilities P rob{i = min Ij } = P rob{1 ∈ Ij and 2 ∈ Ij and ... and i − 1 ∈ Ij and i ∈ Ij }, P rob{i = min Ij> } = P rob{1 ∈ Ij> and 2 ∈ Ij> and ... and i − 1 ∈ Ij> and i ∈ Ij> } for g ′ (A), and for g ′′ (A) we need probabilities P rob{i = max Ij } = P rob{i ∈ Ij and i + 1 ∈ Ij and ... and j − 1 ∈ Ij and j ∈ Ij }, P rob{i = max Ij> } = P rob{i ∈ Ij> and i + 1 ∈ Ij> and ... and j − 1 ∈ Ij> and j ∈ Ij> }. Recall that by Eq(A.5) we have P rob{Ij> = S} = P rob{Ij = S and Ais + Pis ,j > Aj+1 } where is = max{i : i ∈ S}. Therefore, P rob{i = max Ij> }  = P rob{i ∈ Ij> and i + 1 ∈ Ij> and ... and j ∈ Ij> } = P rob{i ∈ Ij and i + 1 ∈ Ij and ... and j ∈ Ij and Ai + Pij > Aj+1 }. (A.23)  Let i1 = min{i : i ∈ S}. Since Ai1 + Pi1 ,j = Ais + Pis ,j we can rewrite P rob{Ij> = S} as P rob{Ij> = S} = P rob{Ij = S and Ais + Pis ,j > Aj+1 } = P rob{Ij = S and Ai1 + Pi1 ,j > Aj+1 } therefore, P rob{i = min Ij> }  = P rob{1 ∈ Ij> and ... and i − 1 ∈ Ij> and i ∈ Ij> } = P rob{1 ∈ Ij and ... and i − 1 ∈ Ij and i ∈ Ij and Ai + Pij > Aj+1 }. (A.24)  We next compute P rob{i = min Ij }, P rob{i = max Ij }, P rob{i = min Ij> } and P rob{i = max Ij> }. We start with the computation of P rob{i = min Ij }. Lemma A.3.8. 
P rob{i = min Ij } = P rob{Ci−1 < Ai } P rob{Ai + Pit ≥ At+1 ∀t = i, i + 1, ..., j − 1} = P rob{Ci−1 < Ai }P rob≥,≥ (i, j) ≥ Ji,j−1 (m).  = P rob{Ci−1 < Ai } m≥Aj −Ai  174  Proof. By definition {i = min Ij } = {1 ∈ Ij and ... and i − 1 ∈ Ij and i ∈ Ij }, i.e., this is the event that A1 , A2 , ..., Ai−1 are not on the critical path of Cj but Ai is. Then, {i = min Ij } =  {1 ∈ Ij and ... and i − 1 ∈ Ij and i ∈ Ij } ⇔  max Ak + Pk,i−1 < Ai and Ai + Pit ≥ At+1 ∀t = i, i + 1, ..., j − 1  k≤i−1  ⇔ Ci−1 < Ai and Ai + Pit ≥ At+1 ∀t = i, i + 1, ..., j − 1 . Now, since Ci−1 is a function of only p1 , p2 , ..., pi−1 (in terms of processing times) and Pit is the sum of pi + pi+1 + ... + pt (i ≤ t ≤ j − 1), events {Ci−1 < Ai } and {Ai + Pit ≥ At+1 ∀t = i, i + 1, ..., j − 1} are independent. Therefore, P rob{i = min Ij } = P rob{Ci−1 < Ai } P rob{Ai + Pit ≥ At+1 ∀t = i, i + 1, ..., j − 1} but by Eq(A.8) and Eq(A.16) we have, ≥ Ji,j−1 (m).  P rob{Ai + Pit ≥ At+1 ∀t = i, i + 1, ..., j − 1} = P rob≥,≥ (i, j) = m≥Aj −Ai  Therefore the result follows. Similarly to Lemma A.3.8, we have another Lemma for P rob{i = max Ij }.  Lemma A.3.9. P rob{i = max Ij } = P rob{Ci−1 ≤ Ai } P rob{Ai + Pit > At+1 ∀t = i, i + 1, ..., j − 1} = P rob{Ci−1 ≤ Ai }P rob>,> (i, j) > Ji,j−1 (m).  = P rob{Ci−1 ≤ Ai } m>Aj −Ai  Proof. By definition {i = max Ij } = {i ∈ Ij and i + 1 ∈ Ij and ... and j ∈ Ij }, i.e., this is the event that Ai+1 , Ai+2 , ..., Aj are not on the critical path of Cj but Ai is. Then, {i = max Ij } =  {i ∈ Ij and i + 1 ∈ Ij and ... and j − 1 ∈ Ij and j ∈ Ij } ⇔  max Ak + Pk,i−1 ≤ Ai and Ai + Pit > At+1 ∀t = i, i + 1, ..., j − 1  k≤i−1  ⇔ Ci−1 ≤ Ai and Ai + Pit > At+1 ∀t = i, i + 1, ..., j − 1 .  175  Now, since Ci−1 is a function of only p1 , p2 , ..., pi−1 (in terms of processing times) and Pit is the sum of pi + pi+1 + ... + pt (i ≤ t ≤ j − 1), events {Ci−1 ≤ Ai } and {Ai + Pit > At+1 ∀t = i, i + 1, ..., j − 1} are independent. 
Therefore, P rob{i = max Ij } = P rob{Ci−1 ≤ Ai } P rob{Ai + Pit > At+1 ∀t = i, i + 1, ..., j − 1} but by Eq(A.8) and Eq(A.16) we have, > Ji,j−1 (m).  P rob{Ai + Pit > At+1 ∀t = i, i + 1, ..., j − 1} = P rob>,> (i, j) = m>Aj −Ai  Therefore the result follows. Then by Lemmata A.3.8 and A.3.9, and Eq(A.24) and Eq(A.23) it follows that P rob{i = min Ij> } = P rob{Ci−1 < Ai }P rob≥,> (i, j + 1) ≥ Ji,j (m)  = P rob{Ci−1 < Ai }  and  m>Aj+1 −Ai  P rob{i = max Ij> } = P rob{Ci−1 ≤ Ai }P rob>,> (i, j + 1) > Ji,j (m).  = P rob{Ci−1 ≤ Ai } m>Aj+1 −Ai  As mentioned before, completion time distributions are already available to us from ex. (.) is computed efficiently by the recursive algorithm RCA. pected cost computations and J.,.  Therefore we can compute all the required probabilities for finding g ′ and g ′′ . We now go back to computation of g ′ (A) and g ′′ (A). We compute g ′ (A) first. We will find g ′ (A) by computing contributions gj′L (A), gj′T (A) and gj′M (A) of ∂Lj (A), ∂Tj (A) and ∂Mj (A) to g ′ (A) ∈ ∂F (A) respectively, and obtain g ′ (A) by Rule 1 (Eq(3.3) of Chapter 3) as  n  (αj gj′L (A) + βj gj′T (A) + γj gj′M (A)).  g ′ (A) =  (A.25)  j=1  We start with  gj′L (A),  contribution of Lj (A) to g ′ (A). Recall that by Eq(3.10) of  Chapter 3, L (1k − 1j+1 )Xkj (S) :  P rob{Ij = S}  ∂Lj (A) = S ∈P ∗ ([j])  L Xkj (S)  k∈ S  = 1 ∀S ∈ P ∗ ([j])  k∈ S L Xkj (S) ≥ 0 ∀S ∈ P ∗ ([j]) ∀k ∈ S  176  then by choosing the smallest index for each S in every convex combination we obtain: j  gj′L (A)  [P rob{1 ∈ Ij and 2 ∈ Ij and ... and i − 1 ∈ Ij and i ∈ Ij }(1i − 1j+1 )]  = i=1 j  [P rob{i = min Ij }(1i − 1j+1 )].  =  (A.26)  i=1  Next, we obtain gj′T (A). ∂Tj (A) is given by Eq(3.11) of Chapter 3 as T> (1k − 1j+1 )Xkj (S)  P rob{Ij> = S}  ∂Tj (A) =  k∈ S  S∈P ∗ ([j])  + P rob{Ij=  =  T (1k − 1j+1 )Xkj (S) :  = S} k∈ S  T> Xkj (S)  = 1 ∀S ∈ P ∗ ([j])  k∈ S T= Xkj (S) ≤ 1 ∀S ∈ P ∗ ([j]) k∈ S T> T= Xkj (S), Xkj (S) ≥ 0 ∀S ∈ P ∗ ([j]) ∀k ∈ S . 
= (S) We will eliminate all the terms in the second line of ∂Tj (A) by assigning 0 to all Xkj  variables. We may do so due to the  k∈ S  T = (S) ≤ 1 inequality. Then by choosing the Xkj  smallest index for each S in every convex combination for the remaining of the terms we obtain j  gj′T (A) =  [P rob{1 ∈ Ij> and 2 ∈ Ij> and ... and i − 1 ∈ Ij> and i ∈ Ij> }(1i − 1j+1 )] i=1 j  [P rob{i = min Ij> }(1i − 1j+1 )].  =  (A.27)  i=1  177  Finally, we obtain gj′M (A). Recall that by Eq(3.12) of Chapter 3, M> (1k − 1j+1 )Xkj (S)  P rob{Ij> = S}  ∂Mj (A) =  k∈ S  S∈P ∗ ([j])  + P rob{Ij= = S}  (1k )XkMj = (S ∪ {j + 1}) k∈ S∪{j+1}  P rob{Ij= = S} 1j+1 :  + 1− S∈P ∗ ([j])  M> Xkj (S) = 1 ∀S ∈ P ∗ ([j]) k∈ S  XkMj = (S ∪ {j + 1}) = 1 ∀S ∈ P ∗ ([j]) k∈ S∪{j+1} M> Xkj (S) ≥ 0 ∀S ∈ P ∗ ([j]) ∀k ∈ S M= (S ∪ {j + 1}) ≥ 0 ∀S ∈ P ∗ ([j]) ∀k ∈ S ∪ {j + 1} . Xkj  Here we choose the smallest index for each S in the first convex combination (i.e., with M> terms) and choose j + 1 (note that j + 1 is always in S ∪ {j + 1}) for each S in the Xkj M = terms). Then, second convex combination (i.e., with Xkj j  gj′M (A) =  P rob{1 ∈ Ij> and ... and i − 1 ∈ Ij> and i ∈ Ij> }(1i − 1j+1 )] + 1j+1 i=1 j  [P rob{i = min Ij> }(1i − 1j+1 )] + 1j+1 .  =  (A.28)  i=1  Next we obtain g ′ (A) by using Eq(A.25) and collecting gj′L (A), gj′T (A) and gj′M (A) terms together. n  (αj gj′L (A) + βj gj′T (A) + γj gj′M (A))  ′  g (A) = j=1  j  n  [P rob{i = min Ij }(1i − 1j+1 )]  αj  = j=1  i=1 j  +  [P rob{i = min Ij> }(1i − 1j+1 )]  βj i=1 j  +  [P rob{i = min Ij> }(1i − 1j+1 )] + 1j+1  γj  .  (A.29)  i=1  We can write g ′ (A) component by component. We derive the formula for gk′ (A), the  178  k th component of g ′ (A), directly from Eq(A.29). 
n  k−1  gk′ (A)  = −αk−1  αj P rob{k = min Ij }  P rob{i = min Ik−1 } + i=1  j=k  k−1  n > P rob{i = min Ik−1 }+  −βk−1 i=1  βj P rob{k = min Ij> } j=k n  k−1  γj P rob{k = min Ij> }  > P rob{i = min Ik−1 }−1 +  −γk−1 i=1  j=k n  k−1  = −αk−1  αj P rob{k = min Ij } + γk−1  P rob{i = min Ik−1 } + i=1  j=k k−1  n > P rob{i = min Ik−1 }+  −(βk−1 + γk−1 ) i=1  (βj + γj )P rob{k = min Ij> }. (A.30) j=k  By using Eq(A.25), Eq(A.26), Eq(A.27), Eq(A.28), Eq(A.29) and Eq(A.30) we can easily obtain g ′′ (A) and gk′′ (A) as below. j  n ′′  [P rob{i = max Ij }(1i − 1j+1 )]  αj  g (A) = j=1  i=1 j  +  [P rob{i = max Ij> }(1i − 1j+1 )]  βj i=1 j  +  [P rob{i = max Ij> }(1i − 1j+1 )] + 1j+1  γj  .  (A.31)  i=1  n  k−1  gk′′ (A)  = −αk−1  αj P rob{k = max Ij }  P rob{i = max Ik−1 } + i=1  j=k  k−1  −βk−1  n  P rob{i =  > max Ik−1 }  βj P rob{k = max Ij> }  +  i=1  j=k  k−1  n > P rob{i = max Ik−1 }−1 +  −γk−1 i=1  j=k n  k−1  = −αk−1  γj P rob{k = max Ij> }  αj P rob{k = max Ij } + γk−1  P rob{i = max Ik−1 } + i=1  j=k n  k−1  (βj + γj )P rob{k = max Ij> }. (A.32)  > P rob{i = max Ik−1 }+  −(βk−1 + γk−1 ) i=1  j=k  Once we know the probabilities P rob{i = max Ij }, P rob{i = min Ij }, P rob{i = max Ij> } and P rob{i = min Ij> } for all j then we can find g ′ (A) and g ′′ (A) in O(n2 ). And these probabilities can be computed in O(nh2 ) by the recursive algorithm RCA. Therefore we can obtain a subgradient of F (in fact two) in polynomial time, i.e., O(nh2 ). (We have 179  implemented a preliminary program to compute subgradients g ′ (A) and g ′′ (A) for any appointment vector A (real or integer)). Remark A.3.10.  n+1 ′ k=1 gk  =  n+1 ′′ k=1 gk  =  n+1 k=2 γk−1  = γ1 = u1 + α1 . This follows from  equations Eq(A.29) and Eq(A.31), due to (1i − 1j+1 ) vectors all terms but  n j=1 γj 1j+1  dis-  appear. This is useful in implementations since it provides an easy test to check subgradient computations. Another observation is that the norms of the subgradients depend on α. 
Therefore, one may want to choose α (among the possible α's) such that the subgradient norm is minimized.

Remark A.3.11. By Proposition 3.4.11 of Chapter 3 we can easily extend our results to F_D and obtain a subgradient for F_D. We just need to find a subgradient in ∂F(Ã, D), i.e., the last component of the appointment vector is set to D in F.

A.4  Algorithms

The objective function of the appointment scheduling problem has useful and interesting properties that allow us to minimize it efficiently. These properties fall into two main groups, depending on whether we work with integer or non-integer appointment vectors. In the case of integer appointment vectors, we can limit our search for an optimal appointment schedule to integer appointment vectors without loss of optimality, by the Appointment Vector Integrality Theorem 2.5.10. Furthermore, under a mild condition on the cost coefficients, α-monotonicity (Definition 2.6.5), we can find an optimal appointment schedule with discrete algorithms using a polynomial amount of time and a polynomial number of expected-cost evaluations (Theorem 2.7.1). In the case of independent processing durations we can minimize F in O(n^9 p_max^2 log p_max) time, as shown in Theorem 2.7.3. Similar results hold for F_D (Section 2.8). The results above use algorithms based on L-convexity and submodular set-function minimization (e.g., see Section 10.3.2 of Murota [7], [6], [5], [9]). Besides these (discrete) algorithms, the authors of [2, 3] propose using the minimum-norm-point algorithm [11] for submodular set-function minimization to reduce the complexity of existing discrete algorithms. Computational results reported in [3] show that the proposed algorithm may perform better than existing polynomial algorithms. On the other hand, if we work with non-integer appointment vectors then the objective is convex by Proposition 3.3.3, under a mild condition on cost coefficients (α-monotonicity).
Furthermore, as shown in this chapter, we can obtain a subgradient (in fact two subgradients) of the objective in O(nh²) time, which is the same complexity as computing the objective at a non-integer point. Therefore we can use non-smooth convex optimization algorithms (e.g., see [4], [10], [1]) to find an optimal appointment vector efficiently. Both approaches, discrete and non-smooth convex optimization, have their advantages and disadvantages. For example, discrete methods have polynomial complexity with guaranteed optimality, but they may be slow and more difficult to implement (although the minimum-norm-point algorithm proposed in [3] has the potential to be fast). On the other hand, non-smooth convex optimization methods may have a fast start (they can take larger steps toward a nearby optimal vector) and can be easier to implement; however, finding an optimal integer solution may be a challenge, and finding a good solution can again be slow. It is not clear at this point which methods will run faster and which implementations will be easier in practice. A third approach, besides using only discrete or only non-smooth convex optimization methods, is to combine the two and develop a hybrid algorithm. The idea is to start with non-smooth methods to get close to an optimal vector quickly, and to switch to discrete methods once the improvement from the non-smooth methods slows down. However, non-smooth methods work with non-integer appointment vectors, whereas discrete algorithms work with integer appointment schedules. Therefore, to combine the two methods in a meaningful way we must be able to pass the current solution from one to the other without worsening the objective. To achieve this we develop a rounding algorithm which takes any fractional solution (e.g., from a non-smooth optimization method) and rounds it to an integer one with the same or improved objective value (for a discrete algorithm).
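The hybrid strategy just described can be sketched as a short driver loop. The callables `F`, `subgradient` and `round_to_integer` are hypothetical placeholders (an expected-cost oracle, any g ∈ ∂F(A), and the rounding algorithm of this section, respectively); the diminishing step size and the stall test are one standard choice among many, not a prescription from the thesis.

```python
def hybrid_schedule(A0, F, subgradient, round_to_integer,
                    step0=0.25, max_iters=200, stall_tol=1e-6):
    """Sketch of the hybrid strategy: run a subgradient descent while it
    keeps improving, then hand the (possibly fractional) iterate to the
    rounding step to finish with a discrete method."""
    A, best = list(A0), F(A0)
    for it in range(1, max_iters + 1):
        g = subgradient(A)
        step = step0 / it                   # classical diminishing step size
        cand = [a - step * gi for a, gi in zip(A, g)]
        cand[0] = 0.0                       # job 1 always starts at A_1 = 0
        val = F(cand)
        if best - val < stall_tol:          # improvement has lost its steam
            break
        A, best = cand, val
    return round_to_integer(A)              # finish with an integer schedule
```

Feasibility handling (e.g., keeping the schedule nondecreasing) is deliberately omitted here; a full implementation would project or repair the candidate before evaluating it.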
With this rounding algorithm we can combine the non-smooth and discrete methods into a hybrid algorithm for our appointment scheduling problem. Next we provide the details of the rounding algorithm. The rounding algorithm is based on the Appointment Vector Integrality Theorem and its supporting Lemmata; see Section 2.5 for the details of these results. We use the same notation as in Section 2.5. Recall that for a given non-integer appointment vector A there exists a positive scalar ∆ with which we construct two new appointment schedules A′ and A′′. The key idea behind the rounding algorithm is that an integer optimal appointment schedule exists by the Appointment Vector Integrality Theorem 2.5.10, and the objective function changes linearly between A′ and A′′, so either min{F(A′), F(A′′)} < F(A) or F(A′) = F(A′′) = F(A). In other words, the objective never worsens at any iteration, and we run the algorithm until we obtain an integer A. We now describe the rounding algorithm. The algorithm starts with an appointment vector A and computes F(A). If A is integer then the algorithm stops; otherwise it finds ∆ as above, constructs the appointment schedule A′ and computes its cost F(A′). If F(A′) < F(A) then it sets A to A′ and returns to the start; otherwise it generates A′′, sets A to A′′ and returns to the start. We present the rounding algorithm as Algorithm 1.
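Alongside the formal listing in Algorithm 1, the loop can be sketched in Python. Here `find_delta`, `shift_up` (producing A′) and `shift_down` (producing A′′) are hypothetical placeholders for the constructions of Section 2.5, and `F` is an expected-cost oracle.

```python
def rounding_algorithm(A, F, find_delta, shift_up, shift_down):
    """Sketch of the rounding loop: while A has a fractional component,
    build the two neighbouring schedules A' and A'' from the scalar Delta
    and move to whichever does not worsen the objective."""
    cost = F(A)
    # Invariant: cost == F(A), and F(A) never exceeds its initial value.
    while any(not float(a).is_integer() for a in A):
        delta = find_delta(A)
        A_up = shift_up(A, delta)          # candidate A'
        up_cost = F(A_up)
        if up_cost < cost:
            A, cost = A_up, up_cost
        else:
            # By linearity of F between A' and A'', F(A'') <= F(A) here.
            A_down = shift_down(A, delta)  # candidate A''
            A, cost = A_down, F(A_down)
    return A, cost
```

With exact arithmetic each pass makes at least one more component integer, so the loop terminates at an integer schedule whose cost is no worse than F(A); in floating point, durations and shifts should be kept at exactly representable values.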
Algorithm 1 Rounding Algorithm
  start with a given (non-integer) A
  compute F(A)
  while A is not integer do
    find ∆
    generate A′ and compute F(A′)
    if F(A′) < F(A) then
      A ⇐ A′; F(A) ⇐ F(A′)
    else
      generate A′′ and compute F(A′′)
      A ⇐ A′′; F(A) ⇐ F(A′′)
    end if
  end while

A.5  Conclusion and Future Work

In this chapter, we use the subdifferential characterization of the objective function for the appointment scheduling problem with independent processing durations and compute a subgradient in polynomial time for any given appointment schedule. Finding a subgradient in polynomial time is not trivial, because the subdifferential characterization includes exponentially many terms and some of the probability computations are complicated. We also obtain an easily computable lower bound on the optimal objective value. Furthermore, we extend the computation of the expected total cost (in polynomial time) to any real-valued appointment vector. These results allow us to use non-smooth convex optimization techniques to find an optimal schedule. We previously showed that there exists a polynomial-time algorithm to find an optimal appointment schedule; however, it is not clear at the moment which technique (discrete or non-smooth) will work faster in practice. Besides the discrete convexity and non-smooth convex optimization approaches, we also develop a hybrid method in which we combine both approaches with a special-purpose integer rounding method, which takes any fractional solution and rounds it to an integer one with the same or improved objective value. We believe this hybrid approach may perform well in practice. In the near future, we plan to implement all these algorithms and methods and develop a computational engine for the appointment scheduling problem.
Besides testing and comparing the discrete, non-smooth and hybrid algorithms in computational experiments, we plan to evaluate the performance of various heuristic methods for both the appointment scheduling and the sequencing problems, and to apply them to real-world appointment scheduling problems in healthcare. (A preliminary version of the rounding algorithm and the subgradient computations has been implemented.)

A.6  Bibliography

[1] J. Frédéric Bonnans, J. Charles Gilbert, Claude Lemaréchal, and Claudia A. Sagastizábal. Numerical Optimization: Theoretical and Practical Aspects. Springer, 2006.
[2] Satoru Fujishige. Submodular systems and related topics. Math. Program. Stud., 22:113–131, 1984.
[3] Satoru Fujishige, Takumi Hayashi, and Shigueo Isotani. The minimum-norm-point algorithm applied to submodular function minimization and linear programming. Working paper, Kyoto University (available at http://www.kurims.kyoto-u.ac.jp/preprint/file/RIMS1571.pdf), 2006.
[4] Jean-Baptiste Hiriart-Urruty and Claude Lemaréchal. Convex Analysis and Minimization Algorithms I and II. Springer, 1993.
[5] Satoru Iwata. Submodular function minimization. Math. Program., 112:45–64, 2008.
[6] S. T. McCormick. Submodular function minimization. A chapter in the Handbook on Discrete Optimization, K. Aardal, G. Nemhauser, and R. Weismantel, eds. Elsevier, 2006.
[7] Kazuo Murota. Discrete Convex Analysis. SIAM Monographs on Discrete Mathematics and Applications, Vol. 10. Society for Industrial and Applied Mathematics, 2003.
[8] Marcelo Olivares, Christian Terwiesch, and Lydia Cassorla. Structural estimation of the newsvendor model: An application to reserving operating room time. Management Science, 54(1):41–55, 2008.
[9] James B. Orlin. A faster strongly polynomial time algorithm for submodular function minimization. Math. Programming, 118(2):237–251, 2007.
[10] R. Tyrrell Rockafellar. Theory of Subgradients and Its Applications to Problems of Optimization: Convex and Nonconvex Functions. 1981.
[11] Philip Wolfe. Finding the nearest point in a polytope. Math. Programming, 11:128–149, 1976.
