{"http:\/\/dx.doi.org\/10.14288\/1.0070919":{"http:\/\/vivoweb.org\/ontology\/core#departmentOrSchool":[{"value":"Business, Sauder School of","type":"literal","lang":"en"}],"http:\/\/www.europeana.eu\/schemas\/edm\/dataProvider":[{"value":"DSpace","type":"literal","lang":"en"}],"https:\/\/open.library.ubc.ca\/terms#degreeCampus":[{"value":"UBCV","type":"literal","lang":"en"}],"http:\/\/purl.org\/dc\/terms\/creator":[{"value":"Begen, Mehmet Atilla","type":"literal","lang":"en"}],"http:\/\/purl.org\/dc\/terms\/issued":[{"value":"2010-04-09T14:36:10Z","type":"literal","lang":"en"},{"value":"2010","type":"literal","lang":"en"}],"http:\/\/vivoweb.org\/ontology\/core#relatedDegree":[{"value":"Doctor of Philosophy - PhD","type":"literal","lang":"en"}],"https:\/\/open.library.ubc.ca\/terms#degreeGrantor":[{"value":"University of British Columbia","type":"literal","lang":"en"}],"http:\/\/purl.org\/dc\/terms\/description":[{"value":"We study scheduling of jobs on a highly utilized resource when the processing durations are stochastic and there are significant underage (resource idle-time) and overage (job waiting and\/or resource overtime) costs. Our work is motivated by surgery scheduling and physician appointments. We consider several extensions and applications.\n\nIn the first manuscript, we determine an optimal appointment schedule (planned start times) for a given sequence of jobs (surgeries) on a single resource (operating room, surgeon). Random processing durations are integers and given by a discrete probability distribution. The objective is to minimize the expected total underage and overage costs. We show that an optimum solution is integer and can be found in polynomial time.\n\nIn the second manuscript, we consider the appointment scheduling problem under the assumption that the duration probability distributions are not known and only a set of independent samples is available, e.g., historical data. 
We develop a sampling-based approach and determine bounds on the number of independent samples required to obtain a provably near-optimal solution with high probability.\n\nIn manuscript three, we focus on determining the number of surgeries for an operating room in an incentive-based environment. We explore the interaction between the hospital and the surgeon in a game theoretic setting, present empirical findings and suggest incentive schemes that the hospital may offer to the surgeon to reduce its idle time and overtime costs.\n\nIn manuscript four, we consider an application to inventory management in a supply chain context. We introduce advance multi-period quantity commitment with stochastic characteristics (demand or yield) and describe several real-world applications. We show these problems can be solved as special cases of the appointment scheduling problem.\n\nIn manuscript five, an appendix, we develop an alternate solution approach for the appointment scheduling problem. We find a lower bound value, obtain a subgradient of the objective function, and develop a special-purpose integer rounding algorithm combining discrete convexity and non-smooth convex optimization methods.","type":"literal","lang":"en"}],"http:\/\/www.europeana.eu\/schemas\/edm\/aggregatedCHO":[{"value":"https:\/\/circle.library.ubc.ca\/rest\/handle\/2429\/23332?expand=metadata","type":"literal","lang":"en"}],"http:\/\/www.w3.org\/2009\/08\/skos-reference\/skos.html#note":[{"value":"Appointment Scheduling with Discrete Random Durations and Applications by Mehmet Atilla Begen A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate Studies (Business Administration) THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver) April 2010 © Mehmet Atilla Begen 2010 \fAbstract We study scheduling of jobs on a highly utilized resource when the processing durations are stochastic and there are significant underage (resource idle-time)
and overage (job waiting and\/or resource overtime) costs. Our work is motivated by surgery scheduling and physician appointments. We consider several extensions and applications. In the first manuscript, we determine an optimal appointment schedule (planned start times) for a given sequence of jobs (surgeries) on a single resource (operating room, surgeon). Random processing durations are integers and given by a discrete probability distribution. The objective is to minimize the expected total underage and overage costs. We show that an optimum solution is integer and can be found in polynomial time. In the second manuscript, we consider the appointment scheduling problem under the assumption that the duration probability distributions are not known and only a set of independent samples is available, e.g., historical data. We develop a sampling-based approach and determine bounds on the number of independent samples required to obtain a provably near-optimal solution with high probability. In manuscript three, we focus on determining the number of surgeries for an operating room in an incentive-based environment. We explore the interaction between the hospital and the surgeon in a game theoretic setting, present empirical findings and suggest incentive schemes that the hospital may offer to the surgeon to reduce its idle time and overtime costs. In manuscript four, we consider an application to inventory management in a supply chain context. We introduce advance multi-period quantity commitment with stochastic characteristics (demand or yield) and describe several real-world applications. We show these problems can be solved as special cases of the appointment scheduling problem. In manuscript five, an appendix, we develop an alternate solution approach for the appointment scheduling problem. 
We find a lower bound value, obtain a subgradient of the objective function, and develop a special-purpose integer rounding algorithm combining discrete convexity and non-smooth convex optimization methods.

\fTable of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication
Co-Authorship Statement

1 Introduction
  1.1 Motivation and Appointment Scheduling
  1.2 Overview of the Thesis
    1.2.1 Chapter 2: Appointment Scheduling
    1.2.2 Chapter 3: A Sampling-Based Approach
    1.2.3 Chapter 4: Incentive-Based Surgery Scheduling
    1.2.4 Chapter 5: Advance Quantity Commitment
    1.2.5 Appendix A: Minimizing a Discrete-Convex Function
  1.3 Outline of the Thesis
  1.4 Bibliography

2 Appointment Scheduling
  2.1 Introduction and Motivation
  2.2 Related Literature
  2.3 Assumptions and Notation
  2.4 Basic Properties
  2.5 Optimality of an Integer Appointment Vector
  2.6 L-convexity
  2.7 Algorithms
  2.8 Objective Function with a Due Date
  2.9 No-shows and Emergency Jobs
  2.10 Current Work, Future Work and Conclusion
  2.11 Bibliography

3 Sampling Approach
  3.1 Introduction and Motivation
  3.2 Appointment Scheduling Problem
  3.3 Convexity
  3.4 Subdifferential Characterization
  3.5 Sampling Approach
  3.6 Conclusion
  3.7 Proofs
  3.8 Bibliography

4 Incentive-Based Surgery Scheduling
  4.1 Introduction
  4.2 Problem Description and Motivation
  4.3 The Model
  4.4 Misalignment of Incentives
    4.4.1 Deciding the Number of Surgeries
  4.5 Contracts
    4.5.1 Hospital has Complete Information and Coercive Power
    4.5.2 Take-It-or-Leave-It Offer
    4.5.3 Three-Part Contract
    4.5.4 Implementing the Two Contracts
    4.5.5 Welfare Considerations
  4.6 Dependent Surgeries with Identical Realizations
    4.6.1 Preliminaries and Misalignment of Incentives
    4.6.2 Risk-Averse Surgeon
    4.6.3 Intuitive Discussion
    4.6.4 Discrete Time Distribution with Two Values
    4.6.5 Sufficient Conditions on Cost Coefficients
  4.7 Conclusion and Future Directions
  4.8 Proofs
  4.9 Bibliography

5 Advance Quantity Commitments
  5.1 Introduction
  5.2 Description of Appointment Scheduling Problem
  5.3 An Inventory Model with Commitments
  5.4 A Production Model with Commitments
  5.5 Conclusion
  5.6 Bibliography

6 Concluding Remarks
  6.1 Bibliography

Appendices

A Minimizing a Discrete-Convex Function
  A.1 Introduction and Motivation
  A.2 Lower Bound (on the Value) and Computation of F
  A.3 Obtaining a Subgradient in Polynomial Time
    A.3.1 Probability Computations
    A.3.2 Complexity of Subgradient Computations
    A.3.3 Obtaining a Subgradient Fast (in Polynomial Time)
  A.4 Algorithms
  A.5 Conclusion and Future Work
  A.6 Bibliography

\fList of Tables

Table 4.1 Estimates of Daily Overtime Probability per OR
Table 5.1 Comparison of the Appointment Scheduling and Inventory Models
Table 5.2 Comparison of the Appointment Scheduling and Production Models

\fList of Figures

Figure 1.1 Surgery Durations by Surgical Specialty
Figure 1.2 Duration Distribution of a Simple Hernia Operation
Figure 1.3 Surgery Scheduling Process
Figure 1.4 An Instance with Three Surgeries
Figure 2.1 Surgery Durations
Figure 2.2 A Three-Job Instance, and A Realization of the Processing Durations
Figure 2.3 An Example Schedule with Emergency Jobs
Figure 4.1 Healthcare Players and Tasks
Figure 4.2 Surgery Scheduling Process
Figure 4.3 The Third Level of Surgery Scheduling Process
Figure 4.4 Duration Distribution of a Simple Hernia Operation
Figure 4.5 Actual and Scheduled Surgery Durations
Figure 4.6 Daily Average Overtime Minutes and Probability of Overtime per OR
Figure 4.7 Existence of m
Figure 4.8 Illustration of nR \u2264 nS with Two t Values
Figure 4.9 Illustration of Condition Eq. (4.16)
Figure 5.1 A Realization of Inventory Levels for 6 Periods
Figure A.1 Event I5 = {1, 2, 3, 5} Visualization
Figure A.2 Event I8 = {3, 5, 6} Visualization

\fAcknowledgements

This thesis would not have been what it is without the encouragement and generous support from many individuals. I am grateful to every person who has offered help to me and contributed in some way to the process leading to my dissertation. It is difficult to overstate my gratitude to my supervisor, Maurice Queyranne. With his patience, inspiration, and great efforts to explain things clearly and simply, he has been a great mentor throughout my Ph.D. studies. I would like to thank him for our long research meetings and for all of the advice and insightful direction he has passed along. I hope that one day I can be a researcher like him and supervise my own Ph.D. students in the same way. The second individual I would like to express my appreciation to is Marty Puterman. He persuaded me to come to Canada for the COE program in the first place and encouraged me to pursue a Ph.D. degree.
I would like to thank him for all of the advice, encouragement and support he has given during my studies and my employment at UBC, as well as for introducing me to the world of applied operations research. I have also been fortunate enough to have a great thesis committee. Besides Maurice Queyranne and Marty Puterman, I had the privilege to have Ralph Winter and Mahesh Nagarajan as my thesis committee members. I thank them for being supportive of my research, and for their encouragement and constructive critiques that have greatly contributed to my thesis. Furthermore, I thank Ralph Winter for introducing me to the topic of Contract Theory and for getting the seed ideas of Chapter 4 started in a term paper. And I thank Mahesh Nagarajan for providing me with advice on my research and other academic issues when it was most needed. Let me also thank a few individuals who contributed to this research. I start with Retsef Levi, who suggested using a sampling-based approach for surgery scheduling during his timely visit to UBC. I also thank him for being a co-author of Chapter 3. Second, my appreciation goes to Chris Ryan, who has been a very good friend, a great research collaborator and a co-author of Chapter 4. For Chapter 4, I also thank Mati Dubrovinsky, Zhongzhi Song and Gavin Yang for their interesting and fruitful discussions and feedback, and give my thanks to the management and system analysts at a local hospital for their support and help in obtaining data. Last but not least, I thank Philip D. Loewen and Steven Shechter for their comments on the thesis. Since I first came to UBC, the campus and its tenants offered me a pleasant, friendly and motivating work environment.
I would like to thank the OPLOG faculty (in particular Steven Shechter, Derek Atkins, Harish Krishnan, Danny Granot, Anming Zhang and Tom McCormick), Elaine Cho (who has guided me through the university bureaucracy with her endless goodwill), Michelle Medalla, Joel Feldman, Geoffrey Blair, the COE staff and others (in particular Fredrik Odegaard, Jonathan Patrick, Mariel Lavieri, Antoine Saure V., Pablo Santibanez, Vincent Chow, Abelardo Mayoral, Steven Kabanuk, Greg Werker and Anita Parkinson). Furthermore, thanks to UBC and NSERC for their financial support during my studies. Last but not least, I want to thank my mom, dad, and brother. I am forever indebted for their understanding, endless patience and encouragement when it was most required. It is to them that I dedicate this work.

\fTo my family, Sündüz, Cevdet and Ali Cengiz.

\fCo-Authorship Statement

Chapter 2, Chapter 5 and Appendix A are manuscripts co-authored with the candidate\u2019s supervisor, Maurice Queyranne. The identification and design of the research program for these papers were carried out jointly. Research, analysis and manuscript preparation were performed by the candidate with close supervision from Maurice Queyranne. Chapter 3 is co-authored with Maurice Queyranne and Retsef Levi. The identification and design of the research program for this paper were carried out jointly. Research, analysis and manuscript preparation were performed by the candidate with close supervision from Maurice Queyranne and with comments on revisions provided by Retsef Levi. Chapter 4 is co-authored with Maurice Queyranne and Chris Ryan. The design of the research program, research, analysis and manuscript preparation for this paper were carried out jointly with Chris Ryan with close supervision from Maurice Queyranne. The identification of the initial research question and preliminary data analysis were performed by the candidate.
\f1 Introduction

We begin with the motivation for and an introduction to the appointment scheduling problem in Section 1.1. In Section 1.1, we also discuss how and where appointment scheduling fits in the context of surgery scheduling and give a summary of related previous work. Then in Section 1.2, we give an overview of the thesis. Finally, Section 1.3 concludes the chapter with an outline of the thesis.

1.1 Motivation and Appointment Scheduling

Healthcare is one of the biggest industries in North America. Canada was expected to spend $148 billion on healthcare in 2006 [13], which accounts for more than 10% of its GDP. In the United States the situation is similar: in 2006, healthcare accounted for 15.3% of GDP [8]. Healthcare challenges, rising costs on one side and growing demand on the other, are mounting not only in Canada but in almost every country in the world [6]. To address these challenges, one may think of either increasing available resources (capacity), limiting demand or finding ways to improve efficiency [24]. In most cases, increasing capacity or limiting demand may not be possible, and even when they are possible the challenges may require deeper analysis and efficiency improvements. One way to improve healthcare operations is through effective scheduling of resources and of the patients who need them. Scheduling issues become more important and challenging when there is uncertainty in the system. Uncertainty may involve patients (e.g., priority levels and arrivals [25]), resources (e.g., availability of a vaccine) or any other aspect of healthcare operations (e.g., surgery durations [10]). In our applied healthcare projects, we also observed uncertainty in patient arrivals and surgery durations [32]. For instance, Figures 1.1 and 1.2 show how variable surgery durations can be. Figure 1.1 shows an example of surgery durations (operating room (OR) time in minutes) by surgical specialty, and Figure 1.2 depicts the duration distribution of a simple hernia operation.
(Data for these figures comes from local hospitals.)

[Figure 1.1: Surgery Durations by Surgical Specialty. OR time in minutes for General, Cardiac, Neuro., Ortho., Plastic., Vascu., Urol., Obst\/Gyn., Oto. and Ophth. cases.]

Uncertainty makes the scheduling and capacity allocation decisions more complex and challenging. In such an environment, one needs to find a balance in the trade-off between allocating too much capacity (more idle-time but less patient waiting time) and too little (more patient waiting time and overtime but less idle-time).

[Figure 1.2: Duration Distribution of a Simple Hernia Operation. Frequency of actual surgery durations in minutes.]

Motivated by surgeries, oncologist consultations and radiation therapy treatments for cancer patients, we take an in-depth look at the appointment scheduling of jobs (e.g., surgeries, exams) of a highly utilized processor (e.g., OR, physician) when the job durations are stochastic and there are significant overage (job waiting and\/or processor overtime) and underage (processor idle-time) costs. For a given sequence of jobs on a single processor, we determine an optimal appointment schedule (planned start times) minimizing the expected total underage and overage costs. Before we get into the details of the appointment scheduling problem, we first take a look at the surgery scheduling process to see how and where appointment scheduling fits in this context. In practice, scheduling surgeries in a medical facility is a complex and important process, and the choice of schedule directly impacts the overall performance of the system [32]. The surgery scheduling process (for elective cases) is usually considered as a three-level process [2, 3, 23]. We can classify these three levels as the strategic, tactical and operational stages of the surgery scheduling process, respectively.
Figure 1.3 gives an overview of the process in terms of decisions, decision maker and decision level. The first level defines and assigns the OR time among the surgical specialties, usually called mix planning. A surgical OR block schedule is developed at the second level. (An OR block schedule is simply a table that assigns each specialty surgery time in ORs on each day. The times are called blocks. The OR block schedule is sometimes called the master surgical schedule; see Figure 2 of [32] for a sample OR block schedule.) Finally, in the third level, individual cases are scheduled on a daily basis, also known as patient mix. It is at this level that variability in surgery durations plays a key role and where one determines the number of surgeries to perform in a block, the sequence in which the surgeries are performed and the planned start times (appointment times) of the surgeries. Ideally, one should consider all three levels of decisions simultaneously and not in isolation. However, practical applications and mathematical challenges force practitioners and academics to work on these problems individually. In this thesis, we concentrate on level three, and for the appointment scheduling problem we assume that the number of jobs (surgeries) and a sequence are already determined and given. For example, in the case of surgeries, for a given set of surgeries and their sequence, an appointment schedule, i.e., planned start times, needs to be prepared. This is an important and challenging task since the surgery appointment schedule has a direct impact on the amount of overtime and idle-time of ORs [10]. OR overtime can be costly since it involves staff overtime as well as additional overhead costs; on the other hand, idle-time costs can also be high due to the opportunity cost of unused capacity, especially in a Canadian context with its political and social issues related to the length of surgical waiting lists [34].

[Figure 1.3: Surgery Scheduling Process. Strategic level: the Health Authority decides the budget and surgical mix (specialties and % of time, i.e., capacity per specialty). Tactical level: hospital management decides the block schedule (blocks for each specialty\/surgeon). Operational level: surgeons decide the patient schedule (scheduling of patients into a block).]

An appointment schedule assigns an allocated duration by specifying the appointment time of each surgery, at which the required resource(s) (e.g., OR, surgeon, healthcare personnel and equipment) and the patient will be available. However, due to the uncertain surgery durations, some surgeries may finish earlier whereas some others may finish later. In the latter case, the next surgery has to wait for the preceding surgery to complete and will start later than its original appointment time. As the appointment times have to be determined in advance, there are only limited recourse options when the actual duration of a procedure differs from its planned value. When a surgery finishes earlier than the next surgery\u2019s appointment time, there is under-utilization of the healthcare resources. On the other hand, if a procedure finishes later than the next procedure\u2019s appointment time, there may be some overtime of the healthcare resources and waiting of the next procedure. Therefore, there is an important trade-off between under-utilization, overtime and patient waiting times. We are interested in finding a schedule that minimizes the expected total cost of resource under-utilization, resource overtime and patient waiting. Generating such a schedule is more challenging but more valuable and useful when processing durations have more variability.
The need for a good schedule is crucial, and savings from such a schedule may be significant. Figure 1.4 shows an instance with three surgeries G, B, R to be processed in this order. An appointment schedule (AG, AB, AR) is given. Once the processing starts, due to the random processing durations, some surgeries may be early whereas some others may be late, as shown in Figure 1.4. (CG, CB, CR denote the completion times of the surgeries.)

[Figure 1.4: An Instance with Three Surgeries. The top bar shows an appointment schedule AG, AB, AR; the bottom bar shows a realization of the durations and the completion times CG, CB, CR, with I denoting idle time, W wait time and O overtime.]

In the last five decades, there has been tremendous interest in appointment scheduling not only in healthcare and service industries [7, 5, 35] but also in other areas such as production and transportation [12, 31]. While our goal is to provide an overview of the prior work on appointment scheduling, here we can only give a glimpse of it. In the subsequent chapters, we survey the work related to the problems under discussion in more detail. Weiss [36] recognized that the appointment scheduling problem has a closed-form solution when there is only a single job, and that it coincides with the well-known newsvendor [23] solution from inventory theory. However, the problem departs from newsvendor characteristics and solution methods in the case of multiple jobs [29]. In the multi-period newsvendor problem, decisions are naturally taken sequentially at each period, whereas in appointment scheduling one needs to have a schedule before any processing can start, i.e., one determines all the decision variables (appointment times) simultaneously at the beginning of the planning horizon, at time zero. In terms of solution methods, we see studies based on stochastic programming [28, 10], queuing theory [35, 5, 16], simulation and other methods; see [7] and the references therein.
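The single-job observation above can be made concrete with a small sketch (the function names and example data are ours, for illustration only, not from the thesis). With per-unit underage (idle) cost c_idle and overage (overtime) cost c_over, the optimal allowance is the smallest duration at which the cumulative distribution reaches the critical ratio c_over / (c_over + c_idle):

```python
def optimal_allowance(durations, probs, c_idle, c_over):
    """Newsvendor solution for a single job: the smallest allowance a
    with P(D <= a) >= c_over / (c_over + c_idle)."""
    ratio = c_over / (c_over + c_idle)
    cdf = 0.0
    for d, p in sorted(zip(durations, probs)):
        cdf += p
        if cdf >= ratio - 1e-12:
            return d
    return max(durations)

def expected_cost(a, durations, probs, c_idle, c_over):
    """Expected underage (idle) plus overage (overtime) cost of allowance a."""
    return sum(p * (c_idle * max(a - d, 0) + c_over * max(d - a, 0))
               for d, p in zip(durations, probs))
```

For example, with durations 30, 45, 60, 90 minutes occurring with probabilities 0.2, 0.4, 0.3, 0.1, c_idle = 1 and c_over = 3, the critical ratio is 0.75 and the optimal allowance is 60 minutes.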
Cayirli and Veral [7] classify the literature in terms of the methodologies and modeling aspects considered, and provide a discussion of performance measures. The authors conclude that the existing literature provides very situation-specific solutions and does not offer generally applicable and portable methodologies for appointment systems design in outpatient scheduling. Finally, we would like to point out some differences between appointment scheduling and single machine scheduling [27]. Unlike machine scheduling, in appointment scheduling a sequence is given and the release dates are the decision variables. Furthermore, the objective function of the appointment scheduling problem is different from the objective functions of classical machine scheduling problems. Processing durations are usually deterministic in machine scheduling problems, but random processing durations have also been studied in the literature [27]. The appointment scheduling problem can be modeled as a multistage stochastic programming problem [28, 10, 29], but there are significant computational difficulties due to the need for multidimensional numerical integration; e.g., even computing the expected cost of a given schedule is difficult. Hence, heuristic methods have to be developed for realistic-size problems. To the best of our knowledge, all the analytical studies about appointment scheduling that we are aware of, even the ones with discrete epochs [16, 5, 35] for job arrivals, use continuous job duration distributions.

1.2 Overview of the Thesis

1.2.1 Chapter 2: Appointment Scheduling with Discrete Random Durations

In Chapter 2, we study a discrete time version of the appointment scheduling problem, i.e., the processing durations are integer and given by a discrete probability distribution. This assumption fits many applications; for example, surgeries and physician appointments are scheduled on a minute basis (usually in blocks of a certain number of minutes).
(For instance, one 20-minute physician appointment could be two blocks of 10 minutes.) We establish discrete convexity [21] properties of the objective function (under a mild condition on the cost coefficients) and show that there exists an optimal integer appointment schedule minimizing the objective. This result is important as it allows us to optimize only over integer appointment schedules without loss of optimality. All these results on the objective function and the optimal appointment schedule enable us to develop a polynomial time algorithm, based on discrete convexity [22], that, for a given processing sequence, finds an appointment schedule minimizing the total expected cost. When processing durations are stochastically independent, we evaluate the expected cost for a given processing order and an integer appointment schedule in polynomial time. Independent processing durations lead to faster algorithms. Our modeling framework can include a given due date for the end of the processing of all jobs (e.g., end of day for an operating room) after which overtime is incurred, instead of letting the model choose an end date. We also extend the analysis to include no-shows and some emergency jobs. Our setting is quite general, and it could be applied to various real-life scenarios (in healthcare and other areas) including surgeries, MRI exams, physician and specialist consultations, radiation therapy, project scheduling, container vessel and terminal operations, and gate and runway scheduling of aircraft at an airport. We believe our approach is sufficiently generic and portable to solve the appointment scheduling problem efficiently.

1.2.2 Chapter 3: A Sampling-Based Approach to Appointment Scheduling

In Chapter 2, we assume complete information about the job duration distributions, i.e., there is an underlying discrete probability distribution for the job durations (the true distribution), and this distribution is fully known and available. This may be the case for some applications.
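The expected-cost evaluation for independent durations mentioned in the Chapter 2 overview can be illustrated with a short sketch (a simplified illustration with hypothetical names and cost structure, not the algorithm of Chapter 2): propagate the exact distribution of each job's completion time through the schedule, accumulating expected idle-time, waiting and overtime costs.

```python
from collections import defaultdict

def expected_schedule_cost(appts, duration_dists, c_idle, c_wait, c_over, due):
    """Exact expected cost of an integer appointment schedule when job
    durations are independent discrete random variables.

    appts          -- planned start times A_1 <= ... <= A_n
    duration_dists -- one {duration: probability} dict per job
    due            -- due date (e.g., end of the OR day) after which
                      overtime is charged at rate c_over
    """
    prev = {appts[0]: 1.0}  # distribution of when the next job can start
    cost = 0.0
    for i, dist in enumerate(duration_dists):
        nxt = defaultdict(float)
        for ready, p in prev.items():
            if i > 0:
                # resource idles if the previous job ends before A_i;
                # job i waits if the previous job ends after A_i
                cost += p * (c_idle * max(appts[i] - ready, 0)
                             + c_wait * max(ready - appts[i], 0))
            start = max(appts[i], ready) if i > 0 else ready
            for d, q in dist.items():
                nxt[start + d] += p * q
        prev = dict(nxt)
    cost += sum(p * c_over * max(c - due, 0) for c, p in prev.items())
    return cost
```

Because completion times live on an integer grid bounded by the total support of the durations, each job's pass touches polynomially many (completion time, probability) pairs, which is the intuition behind the polynomial-time evaluation.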
However, for others, the true duration distributions may not be known; only (past) realizations or some samples may be available. A good example of such an application comes from healthcare: hospitals and surgeons usually have some historical data available on the length of surgeries, but no one knows the true distribution for a certain type of surgery. In Chapter 3, we consider the problem of appointment scheduling with discrete random durations under the assumption that the true duration probability distributions are not known and only a set of independent samples is available. These samples may correspond to historical data, for example daily observations of surgery durations. We show that the objective function of the appointment scheduling problem is convex (as a function of continuous appointment vectors) under a simple and sufficient condition on cost coefficients. Under this condition we characterize the subdifferential² of the objective function with a closed-form formula. We use this formula to determine bounds on the number of independent samples required to obtain a provably near-optimal solution with high probability, i.e., the cost of the sampling-based optimal schedule is with high probability no more than (1 + ε) times the cost of the optimal schedule computed from the true distribution. Our bound on the number of required samples is polynomial in the number of jobs, the accuracy level, the confidence level and the cost coefficients.

² The set of all subgradients at a point of a convex function [30, 14].

There has been much interest in studying stochastic models with partial probabilistic characterization, especially the newsvendor problem and its multi-period extension. When the true distribution is not fully known, the question is how to find a "good solution". Depending on how much is known about the true distribution(s), different approaches are possible, e.g., parametric and non-parametric.
One may know the family of the true distribution but be uncertain about its parameters; this is the parametric approach, e.g., see [11, 19] and the references therein. If there are no assumptions on the true distribution, i.e., no prior assumptions on its family or its parameters, then the approach is non-parametric, e.g., see [17, 33, 26, 4]. Our approach is non-parametric, and we employ sample average approximation (SAA) [33] to solve the appointment scheduling problem with samples. In other words, we use the available samples to form an empirical distribution and find an optimal solution with respect to this empirical distribution, i.e., the sampling solution. Then we use the subdifferential characterization of the objective function (Section 3.4) and the well-known Hoeffding's inequality [15] to determine the number of samples required to guarantee that, with high probability (i.e., at least the specified confidence level), there exists a (sufficiently) small (in terms of the specified accuracy level) subgradient at the sampling solution. As a final step we show that the objective value (w.r.t. the true distribution) of the sampling solution is no more than (1 + the accuracy level) times the true optimal value with probability at least the confidence level. For our sampling-based approach, job durations need not be independent, but we require the samples to be independent. In other words, each sample is a vector of durations where each coordinate corresponds to a job duration, and these vectors are independent across samples. The independence assumption on probability distributions (e.g., of job durations) is common, but we do not require it in our sampling-based analysis. To the best of our knowledge, Chapter 3 is the first to address the appointment scheduling problem when the probability distributions of durations are unknown. We develop a sampling-based approach for the appointment scheduling problem, which is a stochastic non-linear integer program.
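To make the SAA step concrete, here is a minimal, illustrative sketch (not the thesis algorithm): it evaluates the sample-average cost of candidate integer schedules by brute force, which is only feasible for tiny instances. All function names, the cost bookkeeping and the toy data are our own assumptions, not taken from the thesis.

```python
import itertools

def schedule_cost(A, p, u, o):
    """Total underage + overage cost of integer appointment vector A
    (A[0] = 0, A[n] = end of horizon) for one duration realization p."""
    cost, C = 0.0, 0
    for i, duration in enumerate(p):
        C = max(A[i], C) + duration  # job i starts no earlier than its appointment date
        cost += u[i] * max(A[i + 1] - C, 0) + o[i] * max(C - A[i + 1], 0)
    return cost

def saa_best_schedule(samples, u, o, horizon):
    """Pick the schedule minimizing the sample-average (empirical) cost.
    Brute force over all integer schedules: only sensible for tiny instances."""
    n = len(samples[0])
    best = None
    for gaps in itertools.product(range(horizon + 1), repeat=n):
        if sum(gaps) > horizon:
            continue
        A = [0]
        for g in gaps:
            A.append(A[-1] + g)
        avg = sum(schedule_cost(A, p, u, o) for p in samples) / len(samples)
        if best is None or avg < best[0]:
            best = (avg, A)
    return best

samples = [(2, 3), (3, 2), (2, 2), (4, 3)]  # hypothetical duration samples, 2 jobs
print(saa_best_schedule(samples, u=(1, 1), o=(2, 2), horizon=6))
```

The brute-force search stands in for the polynomial-time minimization developed in the thesis; the point here is only the replacement of the true distribution by the empirical one.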
Furthermore, we believe Chapter 3 presents the first rigorous analysis of the convexity of the objective function of the appointment scheduling problem under the simple sufficient condition. Last but not least, we characterize the set of all subgradients, i.e., the subdifferential at a given appointment date vector, with a closed-form formula³.

1.2.3 Chapter 4: Incentive-Based Surgery Scheduling: Determining Optimal Number of Surgeries

In Chapter 4, we look at a different but related problem of determining the number of surgeries for an OR block, with a focus on the incentives of the parties involved (hospital and surgeon). We investigate the interaction between the hospital and the surgeon in a game theoretic setting, present empirical findings on surgery durations and suggest payment schemes that the hospital may offer to the surgeon to reduce its (idle and especially overtime) costs. In particular, we investigate the situation, commonly reported in the literature [23] and observed empirically (Section 4.2), that surgeons over-schedule their allotted OR time, i.e., they schedule too many surgeries for their OR time. Olivares et al. [23] report that schedule overruns are mainly caused by incentive conflicts and over-confidence. We take a systematic look at this and provide a model by which these incentive conflicts can be identified and effectively analyzed. Based on historical data analysis (Section 4.2) we see that in 81% of the surgeries, actual durations were longer than booked/scheduled durations. This high percentage suggests that durations of individual surgeries are often underestimated. One may ask how this phenomenon actually affects the daily overall performance of an OR block, i.e., the amount of overtime for an OR as well as the likelihood of an OR going overtime. To answer this question, we look at the data at the operating-room level.
For each OR, we compute the daily averages of scheduled and overtime OR minutes and find the percentage of overtime, i.e., the ratio of overtime OR minutes to scheduled OR minutes. We find that the overtime amount is well over 20% for each OR (Figure 4.6). We also find the percentage of days on which each OR incurs overtime, to estimate the probability of daily overtime for each OR (Table 4.1). The smallest of these numbers is 75%. These empirical findings, a significant amount and a high likelihood of overtime, suggest that the cost of overtime can be substantial. If an OR can be managed in such a way that overtime is decreased, this may translate into immediate and significant cost savings. Additionally, savings from the reduction in overtime costs may be used to increase hospital resources such as regular OR time, recovery and intensive care beds.

³ This is unusual since only a single subgradient can be obtained in most applications. We make use of this subdifferential characterization in finding optimum appointment schedules by using non-smooth optimization methods in Appendix A.

We argue that these observations can be explained by the incentive of surgeons to take advantage of the fee-for-service⁴ payment structure for surgeries performed, combined with the fact that surgeons do not bear overtime costs at the hospital level. This creates a cost which is borne by the hospital, which operates the OR and pays the surgery support staff. Thus we argue that the hospital has an incentive to limit the number of surgeries performed by surgeons to reduce overtime expenditures. We explore this misalignment of incentives (for the surgeon to over-schedule, and for the hospital to control overtime costs) in a game theoretic setting.
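For illustration, the OR-level overtime measures just described can be computed from daily records along the following lines; the record format and the numbers are hypothetical, not the study's data.

```python
def overtime_summary(daily):
    """daily: (scheduled_minutes, actual_minutes) per day for one OR.
    Returns the overtime ratio (overtime minutes / scheduled minutes)
    and the estimated probability of daily overtime (fraction of days
    on which the OR ran over)."""
    overtime = [max(actual - scheduled, 0) for scheduled, actual in daily]
    ratio = sum(overtime) / sum(scheduled for scheduled, _ in daily)
    p_overtime = sum(minutes > 0 for minutes in overtime) / len(daily)
    return ratio, p_overtime

# hypothetical records: (scheduled, actual) OR minutes for four days
records = [(480, 600), (480, 510), (480, 470), (480, 590)]
ratio, p = overtime_summary(records)
print(f"overtime ratio = {ratio:.0%}, P(daily overtime) = {p:.0%}")
```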
We characterize analytically the number of surgeries that minimizes hospital costs, find conditions under which this number is less than the surgeon's preference, and propose contracts that induce the surgeon to schedule a number of surgeries more aligned with the goals of the hospital. Depending on how much power the hospital has over surgeons and how much information is available to the hospital, we suggest several contracts that the hospital might consider.

1.2.4 Chapter 5: Advance Multi-Period Quantity Commitment and Appointment Scheduling

As discussed briefly above, there is a connection between the celebrated newsvendor problem and the appointment scheduling problem. If we have only a single job (surgery), i.e., n = 1, then the appointment scheduling problem becomes the newsvendor problem. This was first recognized by Weiss [36]. In Chapter 5, we investigate this relationship in the case of multiple jobs. We introduce advance multi-period quantity (order or supply) commitment problems with random characteristics (demand or yield). There are underage and overage costs if there is a mismatch between committed and realized quantities. All quantity decisions (how much to order or supply in each of the next n periods) are needed now, before any realization of demand or yield. The objective is to maximize the total expected profit after n periods. We establish a link between these problems and the appointment scheduling problem (as given in Chapter 2). We show that these problems can be studied and solved as special cases of the appointment scheduling problem.

⁴ In Canada and the United States, although some doctors are salaried hospital employees, most doctors are private entrepreneurs who have admission privileges at a hospital, work on a fee-for-service basis and appear when the patient needs a cure or treatment [6].

In a supply chain, uncertainty effects (e.g., due to stochastic demand or random yield) are something that players would like to minimize or pass on to others. Consider a buyer and a supplier where the buyer can order any amount from the supplier whenever it is convenient. This may be the case when there are many suppliers competing for buyers. However, the supplier would prefer a contract in which the buyer (who has better information about the demand uncertainty) commits in advance to how much to purchase over a certain period of time. In return, the supplier may offer a discount to the buyer to make this choice attractive. These types of agreements are reported in practice [1, 20, 18]. With such an agreement, the challenge for the buyer becomes determining how much to commit to purchasing in advance (e.g., in total for the entire horizon or per period) and how much to order in each period. This problem and its variants (such as finite or infinite horizons, with or without fixed costs, and total or individual period commitments) have been well motivated and studied in the literature [1, 20, 9]. These studies mostly (and naturally) use dynamic programming to determine an optimal policy, and in some cases they develop heuristics. Nevertheless, all the previous studies on this topic that we are aware of consider situations where a buyer commits in advance to how much to purchase and then decides how much to order in each period consecutively, i.e., the ordering decision for the next period is made after the current period's demand realization. In our setting, the buyer needs to decide how much to order for all periods at once, now, before any realization of the random demands. There can be situations where the buyer needs to enter such a contract to secure any orders from a strong supplier. We discuss two models and a few examples. The first is a multi-period inventory model for a buyer with a perishable product and backordering.
The second is a multi-period production model for a producer with random yield and high inventory and product-shortage costs. The feature distinguishing these models from previous ones reported in the literature is that all quantity commitment (order and supply amount) decisions are to be made at once, before the decision horizon starts. To the best of our knowledge, the problems considered in Chapter 5 have not yet been studied.

1.2.5 Minimizing a Discrete-Convex Function for Appointment Scheduling

The objective function of the appointment scheduling problem, under a simple sufficient condition, is discretely convex as a function of the integer appointment vector (Chapter 2), but it is convex and non-smooth when appointment vectors are continuous (Chapter 3). In Appendix A, we investigate whether we can take advantage of both discrete convexity and non-smooth convex optimization methods to solve the appointment scheduling problem. Our purpose is to find a way to combine both sets of methods to minimize the objective function of the appointment scheduling problem more efficiently and practically. In this appendix, we compute a subgradient of the objective in polynomial time for any given (real-valued) appointment schedule with independent processing durations, using the subdifferential characterization obtained in Chapter 3. Finding a subgradient in polynomial time is not trivial because the subdifferential formulas include exponentially many terms, and some of the probability computations are complicated. We also extend the computation of the expected total cost (in polynomial time) to any (real-valued) appointment vector. These results allow us to use non-smooth convex optimization techniques to find an optimal schedule. To combine the discrete and non-smooth algorithms into a hybrid approach, we develop a special-purpose integer rounding method which takes any fractional solution and rounds it to an integer one with the same or an improved objective value.
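To illustrate the flavor of such a rounding step (a simplification for a toy objective, not the special-purpose method of Appendix A): for the stand-in convex objective below, examining every floor/ceiling combination of a fractional point and keeping the best one recovers an integer solution.

```python
import itertools
import math

def best_integer_rounding(x, f):
    """Among all floor/ceiling roundings of the fractional point x,
    return the one with the smallest objective value under f."""
    corners = [(math.floor(v), math.ceil(v)) for v in x]
    return min(itertools.product(*corners), key=f)

# a stand-in convex objective; the thesis objective is the expected total cost
f = lambda z: (z[0] - 1.6) ** 2 + (z[1] - 3.2) ** 2
print(best_integer_rounding((1.6, 3.2), f))  # → (2, 3)
```

This exhaustive check costs 2^n objective evaluations; the thesis's routine is more careful and additionally guarantees an objective value no worse than that of the fractional point.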
We believe this hybrid approach may perform well in practice.

1.3 Outline of the Thesis

The rest of this thesis, as seen from Section 1.2, is organized as a series of chapters. At the beginning of every chapter, we motivate the problem under discussion and examine the related work. We then provide our analysis and results. We conclude each chapter with a summary of the main findings. In addition to the chapters discussed in Section 1.2, Chapter 6 summarizes the thesis contributions and provides a brief discussion of future research directions.

1.4 Bibliography

[1] Yehuda Bassok and Ravi Anupindi. Analysis of supply contracts with total minimum commitment. IIE Trans., 29:373–381, 1997.
[2] Jeroen Belien and Erik Demeulemeester. Building cyclic master surgery schedules with leveled resulting bed occupancy. European Journal of Operational Research, 176:1185–1204, 2007.
[3] John Blake and Joan Donald. Mount Sinai Hospital uses integer programming to allocate operating room time. Interfaces, 32(2):63–73, 2002.
[4] James H. Bookbinder and Anne E. Lordahl. Estimation of inventory re-order levels using the bootstrap statistical procedure. IIE Trans., 21(4):302–312, 1989.
[5] Peter M. Vanden Bosch, Dennis C. Dietz, and John R. Simeoni. Scheduling customer arrivals to a stochastic service system. Naval Research Logistics, 46:549–559, 1999.
[6] Mike Carter. Diagnosis: Mismanagement of resources. OR/MS Today, 29(2):26–32, 2002.
[7] Tugba Cayirli and Emre Veral. Outpatient scheduling in health care: A review of literature. Production and Operations Management, 12(4), 2003.
[8] Amitabh Chandra and Jonathan Skinner. Expenditure and productivity growth in health care. Dartmouth College, February 2008. Forthcoming as an NBER Working Paper.
[9] Ki Ling Cheung and Xue-Ming Yuan. An infinite horizon inventory model with periodic order commitment. EJOR, 146:52–66, 2003.
[10] Brian Denton and Diwakar Gupta.
A sequential bounding approach for optimal appointment scheduling. IIE Transactions, 35:1003–1016, 2003.
[11] Xiaomei Ding, Martin L. Puterman, and Arnab Bisi. The censored newsvendor and the optimal acquisition of information. Oper. Res., 50(3):517–527, 2002.
[12] Mohsen Elhafsi. Optimal leadtime planning in serial production systems with earliness and tardiness costs. IIE Transactions, 34:233–243, 2002.
[13] Canadian Institute for Health Information Web Site. http://www.cihi.ca/.
[14] Jean-Baptiste Hiriart-Urruty and Claude Lemarechal. Convex Analysis and Minimization Algorithms I and II. Springer, 1993.
[15] Wassily Hoeffding. Probability inequalities for sums of bounded random variables. J. American Statistical Assoc., 58(301):13–30, 1963.
[16] Guido C. Kaandorp and Ger Koole. Optimal outpatient appointment scheduling. Health Care Man. Sci., 10:217–229, 2007.
[17] Retsef Levi, Robin O. Roundy, and David B. Shmoys. Provably near-optimal sampling-based policies for stochastic inventory control models. Math. of Oper. Res., 32(4):821–838, 2007.
[18] Zhaotong Lian and Abhijit Deshmukh. Analysis of supply contracts with quantity flexibility. EJOR, 196:526–533, 2009.
[19] Liwan H. Liyanage and J. George Shanthikumar. A practical inventory control policy using operational statistics. Operations Research Letters, 33:341–348, 2005.
[20] Kamran Moinzadeh and Steven Nahmias. Adjustment strategies for a fixed delivery contract. Oper. Res., 48(3):408–423, 2000.
[21] Kazuo Murota. Discrete Convex Analysis. SIAM Monographs on Discrete Mathematics and Applications, Vol. 10. Society for Industrial and Applied Mathematics, 2003.
[22] Kazuo Murota. On steepest descent algorithms for discrete convex functions. SIAM J. Optim., 14(3):699–707, 2003.
[23] Marcelo Olivares, Christian Terwiesch, and Lydia Cassorla. Structural estimation of the newsvendor model: An application to reserving operating room time.
Management Science, 54(1):41–55, 2008.
[24] Jonathan Patrick. Dynamic Patient Scheduling for a Diagnostic Resource. PhD thesis, The University of British Columbia, 2006.
[25] Jonathan Patrick, Martin L. Puterman, and Maurice Queyranne. Dynamic multi-priority patient scheduling for a diagnostic resource. Operations Research, 56:1507–1525, 2008.
[26] Georgia Perakis and Guillaume Roels. Regret in the newsvendor model with partial information. Oper. Res., 56(1):188–203, 2008.
[27] Michael Pinedo. Scheduling: Theory, Algorithms, and Systems. Prentice Hall, 2001.
[28] Lawrence W. Robinson and Rachel R. Chen. Scheduling doctors' appointments: optimal and empirically-based heuristic policies. IIE Transactions, 35:295–307, 2003.
[29] Lawrence W. Robinson, Yigal Gerchak, and Diwakar Gupta. Appointment times which minimize waiting and facility idleness. Working Paper, DeGroote School of Business, McMaster University, 1996.
[30] R. Tyrrell Rockafellar. Theory of Subgradients and Its Applications to Problems of Optimization: Convex and Nonconvex Functions. Helderman-Verlag, Berlin, 1981.
[31] F. Sabria and C. F. Daganzo. Approximate expressions for queueing systems with scheduled arrivals and established service order. Transportation Science, 23:159–165, 1989.
[32] Pablo Santibanez, Mehmet Begen, and Derek Atkins. Surgical block scheduling in a system of hospitals: An application to resource and wait list management in a British Columbia health authority. Health Care Man. Sci., 10(3):269–282, 2007.
[33] Alexander Shapiro. Stochastic programming approach to optimization under uncertainty. Math. Programming, 112:183–220, 2007.
[34] Health Canada Web Site. http://www.hc-sc.gc.ca/hcs-sss/qual/acces/wait-attente/index-eng.php.
[35] P. Patrick Wang. Static and dynamic scheduling of customer arrivals to a single-server system. Naval Research Logistics, 40(3):345–360, 1993.
[36] E. N. Weiss.
Models for determining estimated start times and case orderings in hospital operating rooms. IIE Transactions, 22(2):143–150, 1990.

2 Appointment Scheduling with Discrete Random Durations¹

We consider the problem of determining an optimal appointment schedule for a given sequence of jobs (e.g., medical procedures) on a single processor (e.g., operating room, examination facility, physician), to minimize the expected total underage and overage costs when each job has a random processing duration given by a joint discrete probability distribution. Simple conditions on the cost rates imply that the objective function is submodular and L-convex. Then there exists an optimal appointment schedule which is integer and can be found in polynomial time. Our model can handle a given due date for the total processing (e.g., end of day for an operating room) after which overtime is incurred, as well as no-shows and some emergencies.

2.1 Introduction and Motivation²

Our research concerns appointment scheduling of jobs on a highly utilized processor when the processing durations are stochastic and jobs are not available before their appointment dates.³ We came across this problem in surgery scheduling and in appointment scheduling of oncologist consultations and radiation therapy treatments for cancer patients. There are many other challenging and important real-life applications for this setting, including healthcare diagnostic operations (such as CAT scans and MRIs) and physician appointments, as well as project scheduling, container vessel and terminal operations, and gate and runway scheduling of aircraft in an airport. For example, in surgery scheduling, patients or surgeries are the jobs, the operating room (OR) and associated resources are the processor, and the surgeon or the hospital is the scheduler.

¹ A version of this chapter has been submitted for publication. Begen M.A. and Queyranne M. Appointment Scheduling with Discrete Random Durations.
² A conference version of this chapter appeared in [1].
³ To conform with scheduling terminology, we use the term "date" to denote a point in time. In most applications of appointment scheduling, the appointment "dates" are actually appointment times within the day for which the jobs are being scheduled.

Figure 2.1 shows an example of surgery durations (OR time in minutes) per surgical specialty. As seen from the box plots of Figure 2.1, surgery durations show variability. These data were obtained during an applied research project [28].

[Figure 2.1: Surgery Durations. Box plots of OR time (min), from 0 to 400, for the surgical specialties General, Cardiac, Neuro., Ortho., Plastic., Vascu., Urol., Obst/Gyn., Oto. and Ophth.]

Some appointment scheduling applications may have a specific due date for the end of processing, e.g., end of day for an OR, after which an additional cost per time unit, e.g., overtime, is incurred. The need for a good schedule is crucial, and savings from such a schedule can be significant. In most cases, an appointment schedule needs to be prepared before any processing starts. It assigns each procedure an allocated duration by specifying the appointment date at which the required personnel and equipment, and the job or patient, will be available. However, due to the uncertain processing durations, some jobs may finish earlier, whereas others may finish later, than the appointment date of the next job. As the appointment dates have to be determined in advance, there are only limited recourse options when the actual duration of a job differs from its planned value. When a procedure finishes earlier than the next procedure's appointment date, the processor and other resources remain idle until the appointment date of the next job. This results in resource under-utilization.
On the other hand, if a job finishes later than the next job's appointment date, the next job has to wait for the preceding procedure to complete and will start later than its original appointment date. This results in waiting for the next job and may cause overtime for the processor and resources at the end of the schedule. Therefore, there is an important trade-off between under-utilization, overtime and job waiting times. We are interested in generating an appointment vector⁴ that minimizes the expected total cost of resource under-utilization, overtime and job waiting times. Finding such a schedule is more challenging, but also more valuable and useful, when processing durations have more variability. Figure 2.2 shows an instance with 3 jobs G, B, R to be processed in this order. An appointment schedule (AG, AB, AR) is given. Once the processing starts, due to the random processing durations, some jobs may be early whereas some others may be late, as shown in Figure 2.2.

[Figure 2.2: A Three-Job Instance, and a Realization of the Processing Durations. Top: the appointment schedule AG, AB, AR. Bottom: a realization of the durations and the completion times CG, CB, CR, with idle time (I), wait time (W) and overtime (O) marked.]

This problem can be modeled as a multistage stochastic program, but there are significant computational difficulties due to the need for multidimensional numerical integration (see Section 2.2). To the best of our knowledge, all the analytical studies we are aware of, even the ones with discrete epochs for job arrivals, use continuous processing duration distributions. For a given sequence of jobs, only small instances can be solved to optimality; larger instances require heuristics. We study a discrete-time version of the appointment scheduling problem and establish discrete convexity properties of the objective function. Discrete convex analysis has been advocated by Murota [18]; for recent developments in the topic see [20].
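The bookkeeping behind a realization like the one in Figure 2.2 is mechanical; the following sketch (with made-up appointment dates, durations and due date) computes the idle time I, wait time W and overtime O for a given appointment vector.

```python
def realize(A, p, due):
    """Completion time of the last job, total idle time, total job waiting
    and overtime, for appointment dates A, realized durations p and a due
    date for the end of processing."""
    C, idle, wait = 0, 0, 0
    for a, d in zip(A, p):
        idle += max(a - C, 0)  # processor idles until the appointment date
        wait += max(C - a, 0)  # job waits for its predecessor to finish
        C = max(a, C) + d      # job starts at the later of the two, then runs
    return C, idle, wait, max(C - due, 0)

# jobs G, B, R with appointments (0, 4, 7), realized durations (5, 2, 4), 10-unit day
print(realize((0, 4, 7), (5, 2, 4), 10))  # completion 11, idle 0, wait 1, overtime 1
```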
⁴ We use appointment schedule and appointment vector interchangeably.

We prove that the objective function is L-convex under mild assumptions on cost coefficients. L-convex functions, introduced by Murota in [17], play a central role in discrete convexity and in our research. Furthermore, we show that there exists an optimal integer appointment schedule minimizing expected total costs. This result is important as it allows us to optimize only over integer appointment schedules without loss of optimality. All these results on the objective function and optimal appointment schedule enable us to develop a polynomial time algorithm, based on discrete convexity [19], that, for a given processing sequence, finds an appointment schedule minimizing the total expected cost. This algorithm invokes a sequence of submodular set function minimizations, for which various algorithms are available; see e.g., [9], [13], [8], [16] and [21]. When processing durations are stochastically independent, we evaluate the expected cost for a given processing order and an integer appointment schedule efficiently, both in theory (in polynomial time) and in practice (computations are quite fast, as shown in our preliminary computational experiments). Independent processing durations lead to faster algorithms. Our modeling framework can include a given due date for the end of processing (e.g., end of day for an operating room) after which overtime is incurred, instead of letting the model choose an end date. We also extend our analysis to include no-shows and emergency jobs. The expected benefits of this research effort include reduced job waiting times, reduced overtime and improved capacity utilization. Our chapter is organized as follows. We start with a literature summary in Section 2.2. Section 2.3 states our assumptions, introduces our notation and formally defines the problem and objective function. Section 2.4 gives some basic properties of the objective function and optimal solutions.
In Section 2.5 we show the existence of an optimal appointment vector which is integer. Section 2.6 establishes the submodularity and L-convexity of the objective function under a mild condition on cost coefficients. We show that the total expected cost can be minimized efficiently and give the complexity of this minimization in Section 2.7. In this section, we also compute the objective function for any integer appointment vector and determine its complexity when the processing durations are stochastically independent. This independence assumption leads to faster algorithms. We extend our analysis to an objective function with a due date for the end of processing in Section 2.8. Section 2.9 shows how to handle no-shows and some emergency jobs within our framework. Section 2.10 discusses the current work and future work, and concludes the chapter.

2.2 Related Literature

There have been many studies of appointment scheduling over the last 50 years, especially in healthcare. Here, we present those that we believe are the most relevant to our research. The use of appointment systems is not limited to service industries but also extends to other areas, such as project management, production and transportation. Sabria and Daganzo [27] consider scheduling the arrivals of container vessels at a seaport. Weiss [33] recognized that the appointment scheduling problem has a closed-form solution when there are only two jobs, and that it coincides with the well-known newsvendor solution from inventory theory. Robinson et al. [26] extend this result to three jobs by obtaining optimality conditions. Zipkin [34] presents an analysis of the structure of a single-item multi-period inventory system, closely related to the newsvendor problem, using discrete convexity. Elhafsi [7] studies a production system of multiple stages with stochastic leadtimes.
The objective is to determine planned leadtimes such that the expected total cost (inventory, tardiness and earliness) is minimized. Bendavid and Golany [2] consider project scheduling with stochastic activity durations. They address the problem of determining for each activity a gate, i.e., a time before which the activity cannot begin, so as to minimize total expected holding and shortage costs, for which they use a heuristic based on the Cross-Entropy methodology. Cayirli and Veral [5] review the literature on appointment systems for outpatient scheduling. The authors classify the literature in terms of the methodologies and modeling aspects considered, and provide a discussion of performance measures. They conclude that the existing literature provides very situation-specific solutions and does not offer generally applicable and portable methodologies for appointment systems design in outpatient scheduling. Another literature review, by Cardoen et al. [4], on operating room scheduling evaluates papers on either the problem setting, such as performance measures, or technical properties, such as solution methods. In a queueing-based study, Wang [31] develops a model to find appointment dates of jobs in a single-server system to minimize expected customer delay and server completion time, with identical jobs and costs and exponential processing duration distributions. In his numerical studies, the optimal allocated time for each job shows a "dome" structure, i.e., it increases first and then decreases. In another study, Wang [32] investigates the sequencing problem in the same setting but with distinct exponential distributions. He conjectures that sequencing in order of increasing variance is optimal. Bosch et al. [3] present a model with i.i.d. Erlang processing durations and identical cost coefficients.
In their model, customers can arrive only at discrete potential arrival epochs, which are equally spaced, and the decision variable is the number of customers to be scheduled at each potential arrival epoch. In a related paper, Kaandorp and Koole [14] study outpatient appointment scheduling with exponential processing durations and no-shows. They take advantage of the exponential distribution in their computations and define a neighborhood structure and an exact search method. However, for large instances they develop a heuristic, due to the high computation times of their search method. Another important stream of appointment scheduling research is based on stochastic programming. Denton and Gupta [6] develop a two-stage stochastic linear program to determine optimal appointment dates for a given surgery sequence and a due date for the end of the processing horizon. The authors use general, i.i.d. and continuous processing durations, and identical server idling cost coefficients for all jobs. They infer from stochastic programming results that their model is a convex minimization problem, and they develop an algorithm with sequential bounding for solving small-sized instances. They develop heuristics to solve larger instances. In a related study, Robinson and Chen [25] develop a stochastic linear program for finding appointment dates for a fixed sequence of surgeries and propose a Monte Carlo based solution method. Due to the high computational requirements of Monte Carlo integration, they develop heuristics in which they use the "dome" structure of the optimal policy as reported in Wang [31]. Appointment scheduling can be thought of as an operational-level capacity planning problem, since it concerns scheduling jobs/patients available on the day of processing/service [28], [22] and [29]. Researchers also study the problem of scheduling patients in advance of the service date.
In this stream of research, e.g., [22], [29], [10], [11] and the references therein, arrivals are random but processing durations are deterministic, and the main decision is how to allocate available capacity to incoming demand. Different objectives are considered, such as revenue maximization [11] or cost minimization to achieve target waiting times [22]. Luzon et al. [15] use a fluid approximation to minimize average waiting time.

Finally, we would like to point out the similarities between appointment scheduling and single machine scheduling; see, e.g., [24] for machine scheduling. Unlike machine scheduling, in appointment scheduling the sequence is given and the release dates are the decision variables. Furthermore, the objective function of the appointment scheduling problem is quite different from the objective functions of classical machine scheduling problems. Processing durations are usually deterministic in machine scheduling problems, but random processing durations are also studied in the literature; see, e.g., [23] and [24]. In this chapter we develop a sufficiently generic and portable framework to solve the appointment scheduling problem efficiently.

2.3 Assumptions and Notation

There are n + 1 jobs that need to be sequentially processed on a single processor. The processing sequence is given. An appointment schedule needs to be prepared before any processing can start, and jobs are not available before their appointment dates. When a job finishes earlier than the next job's appointment date, the system experiences a cost due to under-utilization; we refer to this cost as the underage cost. On the other hand, if a job finishes later than the next job's appointment date, the system experiences an overage cost due to the overtime of the current job and the waiting of the next job. The processing durations are given by their joint discrete distribution.
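For concreteness, such a joint discrete duration distribution can be written down as an explicit list of scenario-probability pairs. The sketch below uses made-up numbers for three jobs; note that the scenarios may be correlated, since only the joint law is assumed:

```python
# Hypothetical joint distribution for the durations of three jobs.
# Each entry is ((p1, p2, p3), probability); the model needs only this
# joint law, so the components may be arbitrarily correlated.
joint = [
    ((1, 2, 2), 0.4),
    ((1, 4, 3), 0.3),
    ((3, 4, 5), 0.3),
]
assert abs(sum(prob for _, prob in joint) - 1.0) < 1e-12

# Componentwise minimum and maximum possible durations (denoted with
# underlines and overlines later in the chapter) and their overall maximum:
p_lo = [min(s[i] for s, _ in joint) for i in range(3)]
p_hi = [max(s[i] for s, _ in joint) for i in range(3)]
p_max = max(p_hi)
```

Here `joint`, `p_lo` and `p_hi` and all numbers are purely illustrative; any finite list of integer scenario vectors with probabilities summing to one fits the assumptions introduced next.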
In Section 2.7 we will show that assuming independent discrete processing durations leads to faster algorithms. We assume that this joint distribution is known; complete information on the distributions is reasonable in most settings, but we relax this assumption in Chapter 3. Our next assumption is a natural one: all cost coefficients and processing durations are non-negative and bounded. A key assumption in this work is that processing durations are integer valued (by Theorem 2.5.10, we can then restrict ourselves to integer appointment schedules without loss of optimality). Although we obtain some of our results without this assumption, it is important for our main results. We assume job 1 starts on time, i.e., the start time of the first job is zero, and there are n real jobs. The (n + 1)th job is a dummy job with a processing duration of 0; its appointment time is the total time available for the n real jobs, and we use it to compute the overage or underage cost of the nth job. Let {1, 2, ..., n, n + 1} denote the set of jobs. We denote the random processing duration of job i by $p_i$ and the random vector of processing durations by $p = (p_1, p_2, \ldots, p_n, 0)$ (we write all vectors as row vectors). Let $\underline{p}_i$ and $\overline{p}_i$ denote the minimum and maximum possible values of the processing duration $p_i$, respectively, and let $p_{\max} = \max(\overline{p}_1, \ldots, \overline{p}_n)$. The underage cost rate $u_i$ of job i is the unit cost (per time unit) incurred when job i is completed at a date $C_i$ before the appointment date $A_{i+1}$ of the next job i + 1. The overage cost rate $o_i$ of job i is the unit cost incurred when job i is completed at a date $C_i$ after the appointment date $A_{i+1}$. Thus the total cost due to job i completing at date $C_i$ is $u_i (A_{i+1} - C_i)^+ + o_i (C_i - A_{i+1})^+$, where $(x)^+ = \max(0, x)$ denotes the positive part of a real number x. We define $u = (u_1, u_2, \ldots, u_n)$ and $o = (o_1, o_2, \ldots, o_n)$.
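As a minimal sketch (all instance data hypothetical, and the brute-force search merely illustrative, not the chapter's solution method), the cost structure just defined can be evaluated by simulating the start/completion dynamics introduced below ($S_i = \max\{A_i, C_{i-1}\}$, $C_i = S_i + p_i$), taking the expectation over a finite scenario set, and enumerating integer schedules:

```python
# Toy instance (hypothetical): n = 2 real jobs plus the dummy job of
# duration 0; A_1 = 0, and A_3 acts as the end-of-horizon due date.
joint = [((1, 2), 0.35), ((1, 4), 0.15), ((3, 2), 0.35), ((3, 4), 0.15)]
o = [1.0, 1.0]   # overage cost rates o_i (overtime / next job's waiting)
u = [2.0, 2.0]   # underage cost rates u_i (resource idle time)

def cost_given_p(A, p):
    """Total cost of schedule A for one duration realization p: simulate
    S_i = max(A_i, C_{i-1}), C_i = S_i + p_i and accumulate
    o_i*(C_i - A_{i+1})^+ + u_i*(A_{i+1} - C_i)^+ over the real jobs."""
    C_prev, total = 0.0, 0.0
    for i, dur in enumerate(p):
        C = max(A[i], C_prev) + dur      # job cannot start before its appointment
        lateness = C - A[i + 1]
        total += o[i] * max(lateness, 0.0) + u[i] * max(-lateness, 0.0)
        C_prev = C
    return total

def expected_cost(A):
    """Expected total cost over the finite scenario set."""
    return sum(prob * cost_given_p(A, p) for p, prob in joint)

# Exhaustive search over integer schedules 0 = A_1 <= A_2 <= A_3 <= 7
# (7 is the maximal total workload); restricting to non-decreasing integer
# schedules is justified later in the chapter (Lemmas 2.4.5 and Theorem 2.5.10).
best = min(((0, a2, a3) for a2 in range(8) for a3 in range(a2, 8)),
           key=expected_cost)
```

For example, `expected_cost((0, 2, 5))` evaluates one particular schedule; the enumerated `best` is by construction an integer appointment vector, in line with the integrality result established later in the chapter.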
We denote by $1_i$ the unit vector in $\mathbb{R}^{n+1}$ whose ith component is 1 and all other components are 0. The underage cost may be interpreted as the idling cost and/or opportunity cost of the resources, whereas the overage cost may be thought of as the waiting cost of the next job and/or the overtime of the current job. The overage cost of the last job may include the overtime cost for the whole facility at the end of the schedule, after a specified due date.

Next we introduce our decision variables. Let $a_i$ be the allocated duration and $A_i$ the appointment date of job i. Then $A_1 = 0$ and $A_{i+1} = A_i + a_i$ for $i = 1, \ldots, n$. Thus we may equivalently use the allocated duration vector $a = (a_1, a_2, \ldots, a_n)$ or the appointment vector $A = (A_1, A_2, \ldots, A_{n+1})$ (with $A_1 = 0$) as our decision variables; we choose to work with the appointment vector A.

We introduce additional variables which help define and compute the objective function. Let $S_i$ be the start date and $C_i$ the completion date of job i. Since job 1 starts on time, we have $S_1 = 0$ and $C_1 = p_1$. The other start and completion times are determined as follows: $S_i = \max\{A_i, C_{i-1}\}$ and $C_i = S_i + p_i$ for $2 \le i \le n+1$. Note that the dates $S_i$ and $C_i$ are random variables which depend on the appointment vector A. Let $F(A|p)$ be the total cost of appointment vector A given processing duration vector p:

\[ F(A|p) = \sum_{i=1}^{n} \big[ o_i (C_i - A_{i+1})^+ + u_i (A_{i+1} - C_i)^+ \big]. \tag{2.1} \]

The objective to be minimized is the expected total cost $F(A) = E_p[F(A|p)]$, where the expectation is taken with respect to the random processing duration vector p. We simplify notation by defining the lateness $L_i = C_i - A_{i+1}$ of job i, its tardiness $T_i = (L_i)^+$, and its earliness $E_i = (-L_i)^+$. The objective can now be written as

\[ F(A) = E_p\Big[ \sum_{i=1}^{n} (o_i T_i + u_i E_i) \Big] = \sum_{i=1}^{n} \big( o_i\, E_p T_i + u_i\, E_p E_i \big). \]
2.4 Basic Properties

We start by making an observation about the completion times and by expressing the objective function in a different form that is useful for deriving some of our later results. Since $C_i = S_i + p_i = \max\{A_i, C_{i-1}\} + p_i$, the completion time of job i may be seen as the length of the longest (or critical) path from some job j ($j \le i$) to job i + 1 in a corresponding "project network" (Pinedo [24]), namely:

Lemma 2.4.1. (Critical Path) For all jobs $i = 1, \ldots, n$,
\[ C_i = \max_{j \le i}\Big\{ A_j + \sum_{k=j}^{i} p_k \Big\}, \]
and consequently
\[ F(A|p) = \sum_{i=1}^{n} \left[ o_i \Big( \max_{j \le i}\Big\{ A_j + \sum_{k=j}^{i} p_k \Big\} - A_{i+1} \Big)^{+} + u_i \Big( A_{i+1} - \max_{j \le i}\Big\{ A_j + \sum_{k=j}^{i} p_k \Big\} \Big)^{+} \right]. \]

Proof. The claim holds trivially for i = 1. By induction, let the claim be true for i = m, i.e., $C_m = \max_{j \le m}\{A_j + \sum_{k=j}^{m} p_k\}$. Then
\begin{align*}
C_{m+1} &= S_{m+1} + p_{m+1} = \max\{A_{m+1}, C_m\} + p_{m+1} && \text{by definition} \\
&= \max\Big\{A_{m+1},\ \max_{j \le m}\Big\{A_j + \sum_{k=j}^{m} p_k\Big\}\Big\} + p_{m+1} && \text{by the inductive assumption} \\
&= \max\Big\{A_{m+1} + p_{m+1},\ \max_{j \le m}\Big\{A_j + \sum_{k=j}^{m+1} p_k\Big\}\Big\} = \max_{j \le m+1}\Big\{A_j + \sum_{k=j}^{m+1} p_k\Big\}.
\end{align*}
The expression for F(A|p) follows. $\Box$

The next result is not only important on its own but also crucial for the existence of an optimal solution.

Lemma 2.4.2. (Continuity) The functions F(·|p) and F(·) are continuous.

Proof. By Eq. (2.1) and Lemma 2.4.1, F(·|p) is a weighted sum of piecewise linear continuous functions of A, hence is itself piecewise linear and continuous. Since the sample space is finite, the expectation $F(\cdot) = E_p F(\cdot|p)$ is also continuous. $\Box$

We next establish the existence of an optimal solution. The proof follows from the fact that the objective function is continuous (by Lemma 2.4.2), and that we can restrict the appointment vector to a compact set without loss of optimality. Let $\underline{A} = (\underline{A}_1, \ldots, \underline{A}_{n+1})$ and $\overline{A} = (\overline{A}_1, \ldots, \overline{A}_{n+1})$, where $\underline{A}_1 = \overline{A}_1 = 0$ and, for $i = 2, \ldots, n+1$, $\underline{A}_i = \sum_{j<i} \underline{p}_j$ and $\overline{A}_i = \sum_{j<i} \overline{p}_j$; let $K = \{A \in \mathbb{R}^{n+1} : \underline{A} \le A \le \overline{A}\}$.
Lemma 2.4.3. (Existence) The minimum of F over appointment vectors with $A_1 = 0$ is attained at some $A \in K$.

Proof. Consider any A with $A_1 = 0$. First, we may raise every component with $A_i < \underline{A}_i$ up to $\underline{A}_i$: since $C_{i-1} \ge \underline{A}_i$ in every realization, this changes no start time, creates no earliness, and does not increase any tardiness cost; hence we may assume $A \ge \underline{A}$. For a vector $A \in \mathbb{R}^{n+1}$ satisfying $A \ge \underline{A}$ and $A_1 = 0$ but $A \notin K$, let $i(A) = \min\{i : A_i > \overline{A}_i\}$ be the first component exceeding its upper bound. We claim that there exists A′ satisfying $A' \ge \underline{A}$, $A'_1 = 0$, $F(A') \le F(A)$, and either $A' \in K$ or $i(A') > i(A)$. Then after at most n such changes we obtain $A'' \in K$ satisfying $F(A'') \le F(A)$, which is what we wanted to show.

We now prove the claim. Let $i = i(A)$, $\varepsilon = A_i - \overline{A}_i > 0$, and define A′ by $A'_j = A_j$ for all $j \le i-1$ and $A'_j = A_j - \varepsilon$ for all $j \ge i$, so $A'_i = \overline{A}_i$. For every realization p of the processing durations, the completion times $C'_j$ in the resulting schedule satisfy $C'_j = C_j$ for all $j \le i-1$. Note that for all $j \le i-1$, $A_j \le \overline{A}_j$ implies $C_j \le \overline{A}_{j+1}$. Therefore $C_{i-1} \le \overline{A}_i < A_i$, so $C_i = A_i + p_i$ and $C'_i = A'_i + p_i = A'_i + C_i - A_i = C_i - \varepsilon$. It follows that $C'_j = C_j - \varepsilon$ for all $j \ge i$. As a result, $E'_j = E_j$ and $T'_j = T_j$ for all $j \ne i-1$, while $E'_{i-1} = E_{i-1} - \varepsilon$ and $T'_{i-1} = T_{i-1} = 0$. Since $\varepsilon > 0$ and $u_{i-1} \ge 0$, $F(A'|p) \le F(A|p)$ and thus $F(A') \le F(A)$. Since $A'_j = A_j \le \overline{A}_j$ for all $j \le i-1$ and $A'_i = \overline{A}_i$, either $A' \in K$ or $i(A') \ge i + 1 = i(A) + 1$ (raising any component of A′ that fell below $\underline{A}$ back up to it, which, as above, does not increase F), establishing the claim.

This shows that for any $A \notin K$ with $A_1 = 0$ there exists a vector $A'' \in K$ with $F(A'') \le F(A)$. As a result, since F is continuous, its minimum on the compact set K is attained and is therefore the global minimum. $\Box$

The next lemma gives bounds on the difference between any two consecutive components of an optimal appointment vector, and from it we obtain a useful and intuitive result in Lemma 2.4.5.

Lemma 2.4.4.
There exists an optimal appointment schedule $A^* \in K$ satisfying
\[ \underline{p}_i \;\le\; A^*_{i+1} - A^*_i \;\le\; \sum_{j \le i} \overline{p}_j - \sum_{j < i} \underline{p}_j \qquad \text{for all } i = 1, \ldots, n. \]

Proof. Let $A^* \in K$ be an optimal schedule. The upper bound follows directly from $A^* \in K$: $A^*_{i+1} - A^*_i \le \overline{A}_{i+1} - \underline{A}_i = \sum_{j \le i} \overline{p}_j - \sum_{j < i} \underline{p}_j$. For the lower bound, if $\underline{p}_k + A^*_k > A^*_{k+1}$ for some $k = 2, \ldots, n$, then job k is late by at least $\underline{p}_k + A^*_k - A^*_{k+1}$ time units in every realization, so increasing $A^*_{k+1}$ to $\underline{p}_k + A^*_k$ changes no start time and improves the objective function by $o_k(\underline{p}_k + A^*_k - A^*_{k+1}) \ge 0$. Therefore we may assume $\underline{p}_i \le A^*_{i+1} - A^*_i$ for all $i = 2, \ldots, n$; for i = 1 the bound follows from $A^*_2 \ge \underline{A}_2 = \underline{p}_1$ and $A^*_1 = 0$. $\Box$

Lemma 2.4.5. (Non-Decreasing Appointment Dates) There exists an optimal appointment vector $A^* \in K$ with non-decreasing components, i.e., $A^*_i \le A^*_{i+1}$ for all $i = 1, \ldots, n$.

Proof. By Lemma 2.4.4, $A^*_{i+1} - A^*_i \ge \underline{p}_i \ge 0$ for $1 \le i \le n$. $\Box$

2.5 Optimality of an Integer Appointment Vector

The existence of an integer optimal appointment vector is crucial: it implies that we can restrict attention to integer appointment vectors without loss of optimality. We establish this result in the Appointment Vector Integrality Theorem 2.5.10, whose proof is surprisingly non-trivial.

Let $A^*$ be any non-integer appointment vector and $A^*_f$ the first non-integer component of $A^*$. Knowing all the jobs whose appointment dates have the same fractional part as $A^*_f$ is crucial, so we define J to be the set of all jobs $j \ge f$ such that $A^*_j - A^*_f$ is integer. Let $\mathbb{Z}$ denote the set of integers, and let $\lfloor x \rfloor = \sup\{n \in \mathbb{Z} : n \le x\}$ and $\lceil x \rceil = \inf\{n \in \mathbb{Z} : n \ge x\}$ for $x \in \mathbb{R}$. Let $\varphi(x) = \min(x - \lfloor x \rfloor,\ \lceil x \rceil - x)$ denote the distance from $x \in \mathbb{R}$ to the nearest integer. Finally, let $\Delta$ be a strictly positive scalar satisfying $0 < \Delta < \tfrac{1}{2}\min(\Delta_1, \Delta_2)$, where $\Delta_1 = \tfrac{1}{4}\min\{\varphi(|A^*_j - A^*_k|) : j \in J,\ k \notin J\} > 0$ and $\Delta_2 = \tfrac{1}{4}\min\{\varphi(|A^*_j - A^*_k|) : j \notin J,\ k \notin J,\ A^*_j - A^*_k \notin \mathbb{Z}\} > 0$.
We use $\Delta$ to construct two new appointment schedules A′ and A′′ from $A^*$: let $A'_j = A^*_j - \Delta$ if $j \in J$ and $A'_j = A^*_j$ otherwise; similarly, let $A''_j = A^*_j + \Delta$ if $j \in J$ and $A''_j = A^*_j$ otherwise. For any realization of the processing duration vector p, denote the completion times of job j as $C^*_j$, $C'_j$ and $C''_j$ in schedules $A^*$, A′ and A′′, respectively.

One of the main ideas in proving the Appointment Vector Integrality Theorem 2.5.10 is that $\Delta$ is small enough that there is "no event change" when we move from schedule $A^*$ to schedules A′ and A′′. When there is no event change, we show in Lemma 2.5.9 that our objective function changes linearly between schedules A′ and A′′. To make the no-event-change concept precise, we define the following. Job i ($1 < i \le n+1$) is late if $C^*_{i-1} > A^*_i$ (strictly positive tardiness), early if $C^*_{i-1} < A^*_i$ (strictly positive earliness), just-on-time if $C^*_{i-1} = A^*_i$, and on-time if $C^*_{i-1} \le A^*_i$. Then no event change means that if any job is late, early or just-on-time, respectively, in schedule $A^*$ then it is also late, early or just-on-time, respectively, in both schedules A′ and A′′. We consider all possible realizations r of the random processing duration vector p, so $r_i$ is the corresponding realization of the processing duration $p_i$.

We start by establishing relationships between the completion times in schedules A′ and $A^*$, and A′′ and $A^*$.

Lemma 2.5.1. For every realization of the processing durations and every $j = 1, \ldots, n+1$,
\[ C^*_j + \Delta \;\ge\; C''_j \;\ge\; C^*_j \;\ge\; C'_j \;\ge\; C^*_j - \Delta. \]

Proof. Let $1 \le j \le n+1$ and let r be a realization of p.
Then $A^*_j - \Delta \le A'_j \le A^*_j \le A''_j \le A^*_j + \Delta$ by the definition of A′ and A′′. By the Critical Path Lemma 2.4.1, $C^*_j = \max_{k \le j}\{A^*_k + \sum_{i=k}^{j} r_i\}$, $C'_j = \max_{k \le j}\{A'_k + \sum_{i=k}^{j} r_i\}$ and $C''_j = \max_{k \le j}\{A''_k + \sum_{i=k}^{j} r_i\}$. Hence $A'_k \le A^*_k \le A''_k$ (for every k) implies $C'_j \le C^*_j \le C''_j$. On the other hand, $A^*_k - \Delta \le A'_k$ implies $C^*_j - \Delta = \max_{k \le j}\{A^*_k - \Delta + \sum_{i=k}^{j} r_i\} \le \max_{k \le j}\{A'_k + \sum_{i=k}^{j} r_i\} = C'_j$, so $C^*_j - \Delta \le C'_j$. Similarly, $A^*_k + \Delta \ge A''_k$ implies $C^*_j + \Delta = \max_{k \le j}\{A^*_k + \Delta + \sum_{i=k}^{j} r_i\} \ge \max_{k \le j}\{A''_k + \sum_{i=k}^{j} r_i\} = C''_j$, so $C^*_j + \Delta \ge C''_j$. The result follows. $\Box$

The next two results are about late and early jobs. Lemma 2.5.2 below implies that if job k is late (resp., early) then its tardiness (resp., earliness) is strictly greater than $2\Delta$. Lemma 2.5.3 implies that if job k is late (resp., early) in schedule $A^*$ then it is also late (resp., early) in A′ and A′′.

Lemma 2.5.2. For every realization of the processing durations and every $k = 2, \ldots, n+1$, if $|C^*_{k-1} - A^*_k| > 0$ then $|C^*_{k-1} - A^*_k| > 2\Delta$.

Proof. Let r be a realization of p. Let t be the last on-time job before k ($1 \le t < k$), so $C^*_{k-1} = A^*_t + \sum_{i=t}^{k-1} r_i$. Note that t exists and is well defined since job 1 is always on-time, i.e., $A^*_1 = 0$. We consider two cases: $(A^*_k - A^*_t) \in \mathbb{Z}$ or $(A^*_k - A^*_t) \notin \mathbb{Z}$.
If $(A^*_k - A^*_t) \in \mathbb{Z}$ then $0 < |C^*_{k-1} - A^*_k| = |A^*_t + \sum_{i=t}^{k-1} r_i - A^*_k|$; since the $r_i$'s and $A^*_k - A^*_t$ are integer, $|A^*_t + \sum_{i=t}^{k-1} r_i - A^*_k|$ is a positive integer, hence $|C^*_{k-1} - A^*_k| \ge 1 > 2\Delta$. If $A^*_k - A^*_t$ is not integer then $\varphi(A^*_k - A^*_t) > 2\Delta$, which implies $\lceil A^*_k - A^*_t \rceil - (A^*_k - A^*_t) > 2\Delta$ and $(A^*_k - A^*_t) - \lfloor A^*_k - A^*_t \rfloor > 2\Delta$. Since $\sum_{i=t}^{k-1} r_i$ is integer, we must have either $\sum_{i=t}^{k-1} r_i \le \lfloor A^*_k - A^*_t \rfloor$ or $\sum_{i=t}^{k-1} r_i \ge \lceil A^*_k - A^*_t \rceil$. Therefore $|C^*_{k-1} - A^*_k| = |A^*_t + \sum_{i=t}^{k-1} r_i - A^*_k| > 2\Delta$. $\Box$

Lemma 2.5.3. For every realization of the processing durations and every $k = 2, \ldots, n+1$: if $C^*_{k-1} > A^*_k$ then $C'_{k-1} > A'_k$ and $C''_{k-1} > A''_k$; and if $C^*_{k-1} < A^*_k$ then $C'_{k-1} < A'_k$ and $C''_{k-1} < A''_k$.

Proof. By Lemma 2.5.2, $C^*_{k-1} > A^*_k$ implies $C^*_{k-1} - A^*_k > 2\Delta$. Note that $A^*_k - \Delta \le A'_k \le A^*_k \le A''_k \le A^*_k + \Delta$ by definition, and $C^*_{k-1} + \Delta \ge C''_{k-1} \ge C^*_{k-1} \ge C'_{k-1} \ge C^*_{k-1} - \Delta$ by Lemma 2.5.1. Then $C^*_{k-1} - A^*_k > 2\Delta$ implies $C'_{k-1} - A'_k \ge C^*_{k-1} - \Delta - A^*_k > \Delta$ and $C''_{k-1} - A''_k \ge C^*_{k-1} - A^*_k - \Delta > \Delta$.
Similarly, by Lemma 2.5.2, $C^*_{k-1} < A^*_k$ implies $A^*_k - C^*_{k-1} > 2\Delta$. Using the same bounds $A^*_k - \Delta \le A'_k \le A^*_k \le A''_k \le A^*_k + \Delta$ and $C^*_{k-1} + \Delta \ge C''_{k-1} \ge C^*_{k-1} \ge C'_{k-1} \ge C^*_{k-1} - \Delta$ (by definition and Lemma 2.5.1), $A^*_k - C^*_{k-1} > 2\Delta$ implies $A'_k - C'_{k-1} \ge A^*_k - \Delta - C^*_{k-1} > \Delta$ and $A''_k - C''_{k-1} \ge A^*_k - C^*_{k-1} - \Delta > \Delta$. The result follows. $\Box$

Just-on-time jobs require more care, and we need further definitions and results before we can establish results similar to Lemmata 2.5.2 and 2.5.3. A block B[t, k] is a sequence of consecutive jobs $[t, t+1, \ldots, k]$ ($1 \le t < k \le n+1$) such that: either t = 1 or job t is early, i.e., $C^*_{t-1} < S^*_t = A^*_t$; no other job in the block is early, i.e., $S^*_{j+1} = C^*_j \ge A^*_{j+1}$ for $j = t, \ldots, k-1$; and job k is just-on-time, i.e., $C^*_{k-1} = A^*_k$. Let $K = \{i : t < i \le k \text{ and } C^*_{i-1} = A^*_i\}$ denote the set of just-on-time jobs in the block B[t, k]. So we have $S^*_t = A^*_t$ and $S^*_j = A^*_j = C^*_{j-1}$ for all $j \in K$. Our next result, Lemma 2.5.4, implies that job t and all just-on-time jobs in a block (i.e., the elements of K) are either all in J or all outside J.

Lemma 2.5.4. If B[t, k] is a block then either $\{t\} \cup K \subseteq J$ or $\{t\} \cup K \subseteq B[t,k] \setminus J$.

Proof. Let $j \in K$. We have $C^*_{j-1} = A^*_t + \sum_{i=t}^{j-1} r_i$, since job t starts on time at $A^*_t$ and there is no idle time between t and j. We obtain $0 = C^*_{j-1} - A^*_j = A^*_t + \sum_{i=t}^{j-1} r_i - A^*_j$. Since $\sum_{i=t}^{j-1} r_i$ is integer, $A^*_j - A^*_t$ must be integer.
This implies that if $j \in J$ then $t \in J$, and if $j \notin J$ then $t \notin J$. $\Box$

Lemma 2.5.5 will be used to prove Lemmata 2.5.6 and 2.5.7.

Lemma 2.5.5. Let $k \in \{2, \ldots, n+1\}$ be such that $A^*_k \notin \mathbb{Z}$. Then for every realization of the processing durations such that $C^*_{k-1} = A^*_k$ there is an early job $j < k$.

Proof. Let r be a realization of p. Seeking a contradiction, assume there is no early job before job k. Then $C^*_{k-1} = A^*_1 + \sum_{i=1}^{k-1} r_i = A^*_k$. This implies $A^*_k \in \mathbb{Z}$ (since $A^*_1 = 0$ and $\sum_{i=1}^{k-1} r_i$ is integer), a contradiction. $\Box$

In Lemmata 2.5.6 and 2.5.7 below we prove that no event change occurs for any just-on-time job; Lemma 2.5.8 then states that no event change occurs for any job.

Lemma 2.5.6. Let $k \in \{2, \ldots, n+1\}$. For every realization of the processing durations such that $C^*_{k-1} = A^*_k$, if there exists an early job $j < k$ then $C'_{k-1} = A'_k$ and $C''_{k-1} = A''_k$.

Proof. Let r be a realization of p. Let t be the last early job before k, so B[t, k] is a block. As explained above, let $K = \{i : t < i \le k \text{ and } C^*_{i-1} = A^*_i\}$ be the set of just-on-time jobs between t and k. By Lemma 2.5.4, either (i) $\{t\} \cup K \subseteq J$ or (ii) $\{t\} \cup K \subseteq B[t,k] \setminus J$.

Case (i): $\{t\} \cup K \subseteq J$.

First, by induction we show that $C'_j = C^*_j - \Delta$ for all $j \in B[t,k]$. Indeed, $C'_{t-1} \le C^*_{t-1} < A^*_t - 2\Delta < A'_t$ (by Lemmata 2.5.1 and 2.5.2), so $S'_t = A'_t = A^*_t - \Delta$ and $C'_t = A^*_t - \Delta + r_t = C^*_t - \Delta$. Consider $t < j \in B[t,k]$.
By the inductive assumption, $C'_{j-1} = C^*_{j-1} - \Delta$. If $j \in K$ then $j \in J$ and $A'_j = A^*_j - \Delta$, so $S'_j = \max\{C'_{j-1}, A'_j\} = \max\{C^*_{j-1}, A^*_j\} - \Delta = C^*_{j-1} - \Delta$. Otherwise $j \notin K$, i.e., j is late, and by Lemma 2.5.2 $C^*_{j-1} > A^*_j + 2\Delta$; so $A'_j \le A^*_j < C^*_{j-1} - 2\Delta < C'_{j-1}$ and hence $S'_j = C'_{j-1} = C^*_{j-1} - \Delta$. In both cases $C'_j = S'_j + r_j = C^*_{j-1} - \Delta + r_j = \max\{C^*_{j-1}, A^*_j\} + r_j - \Delta = C^*_j - \Delta$, completing our inductive proof. This implies that $C'_{k-1} = C^*_{k-1} - \Delta = A^*_k - \Delta = A'_k$ since $k \in K \subseteq J$, so $C'_{k-1} = A'_k$ as claimed.

Similarly, by induction we show that $C''_j = C^*_j + \Delta$ for all $j \in B[t,k]$. Indeed, $C''_{t-1} \le C^*_{t-1} + \Delta < A^*_t < A^*_t + \Delta = A''_t$ (by Lemmata 2.5.1 and 2.5.2), so $S''_t = A''_t = A^*_t + \Delta$ and $C''_t = A^*_t + \Delta + r_t = C^*_t + \Delta$. Consider $t < j \in B[t,k]$. By the inductive assumption, $C''_{j-1} = C^*_{j-1} + \Delta$. If $j \in K$ then $j \in J$ and $A''_j = A^*_j + \Delta$, so $S''_j = \max\{C''_{j-1}, A''_j\} = \max\{C^*_{j-1}, A^*_j\} + \Delta = C^*_{j-1} + \Delta$. Otherwise $j \notin K$, i.e., j is late, and by Lemma 2.5.2 $C^*_{j-1} > A^*_j + 2\Delta$; so $A''_j \le A^*_j + \Delta < C^*_{j-1} - \Delta \le C''_{j-1}$ and hence $S''_j = C''_{j-1} = C^*_{j-1} + \Delta$. In both cases $C''_j = S''_j + r_j = C^*_{j-1} + \Delta + r_j = \max\{C^*_{j-1}, A^*_j\} + r_j + \Delta = C^*_j + \Delta$, completing our inductive proof. This implies that $C''_{k-1} = C^*_{k-1} + \Delta = A^*_k + \Delta = A''_k$ since $k \in K \subseteq J$, so $C''_{k-1} = A''_k$ as claimed.

Case (ii): $\{t\} \cup K \subseteq B[t,k] \setminus J$.

First, by induction we show that $C'_j = C^*_j$ for all $j \in B[t,k]$. Indeed, $C'_{t-1} \le C^*_{t-1} < A^*_t - 2\Delta < A^*_t = A'_t$ (by Lemmata 2.5.1 and 2.5.2), so $S'_t = A'_t = A^*_t$ and $C'_t = A^*_t + r_t = C^*_t$. Consider $t < j \in B[t,k]$. By the inductive assumption, $C'_{j-1} = C^*_{j-1}$. If $j \in K$ then $j \notin J$ and $A'_j = A^*_j$, so $S'_j = \max\{C'_{j-1}, A'_j\} = \max\{C^*_{j-1}, A^*_j\} = C^*_{j-1}$. Otherwise $j \notin K$, i.e., j is late, and by Lemma 2.5.2 $C^*_{j-1} > A^*_j + 2\Delta$; so $A'_j \le A^*_j < C^*_{j-1} - 2\Delta = C'_{j-1} - 2\Delta$ and hence $S'_j = C'_{j-1} = C^*_{j-1}$. In both cases $C'_j = S'_j + r_j = \max\{C^*_{j-1}, A^*_j\} + r_j = C^*_j$, completing our inductive proof. This implies that $C'_{k-1} = C^*_{k-1} = A^*_k = A'_k$ since $k \in K$ and $k \notin J$, so $C'_{k-1} = A'_k$ as claimed.

Similarly, by induction we show that $C''_j = C^*_j$ for all $j \in B[t,k]$. Indeed, $C''_{t-1} \le C^*_{t-1} + \Delta < A^*_t = A''_t$ (by Lemmata 2.5.1 and 2.5.2), so $S''_t = A''_t = A^*_t$ and $C''_t = A^*_t + r_t = C^*_t$. Consider $t < j \in B[t,k]$. By the inductive assumption, $C''_{j-1} = C^*_{j-1}$. If $j \in K$ then $j \notin J$ and $A''_j = A^*_j$, so $S''_j = \max\{C''_{j-1}, A''_j\} = \max\{C^*_{j-1}, A^*_j\} = C^*_{j-1}$. Otherwise $j \notin K$, i.e., j is late, and by Lemma 2.5.2 $C^*_{j-1} > A^*_j + 2\Delta$; so $A''_j \le A^*_j + \Delta < C^*_{j-1} - \Delta \le C''_{j-1}$ and hence $S''_j = C''_{j-1} = C^*_{j-1}$. In both cases $C''_j = S''_j + r_j = \max\{C^*_{j-1}, A^*_j\} + r_j = C^*_j$, completing our inductive proof. This implies that $C''_{k-1} = C^*_{k-1} = A^*_k = A''_k$ since $k \in K$ and $k \notin J$, so $C''_{k-1} = A''_k$ as claimed. $\Box$

Lemma 2.5.7. Let $k \in \{2, \ldots, n+1\}$. For every realization of the processing durations such that $C^*_{k-1} = A^*_k$, we have $C'_{k-1} = A'_k$ and $C''_{k-1} = A''_k$.

Proof. If there is an early job before k then the result follows from Lemma 2.5.6. Otherwise B[1, k] is a block, so $C^*_{k-1} = A^*_1 + \sum_{i=1}^{k-1} r_i = A^*_k$. Furthermore, $A^*_k \in \mathbb{Z}$ by Lemma 2.5.5, so $k \notin J$. Therefore $\{1\} \cup K \subseteq B[1,k] \setminus J$ by Lemma 2.5.4, and hence the result follows as in case (ii) of the proof of Lemma 2.5.6. $\Box$

Our next result establishes that no event change occurs for any job; it follows directly from Lemmata 2.5.3 and 2.5.7. We define the sign of a real number x as sign(x) = 1 if x > 0; 0 if x = 0; and −1 if x < 0.

Lemma 2.5.8. For every job $j = 2, \ldots, n+1$ and every realization of the processing durations, $\mathrm{sign}(C'_{j-1} - A'_j) = \mathrm{sign}(C''_{j-1} - A''_j) = \mathrm{sign}(C^*_{j-1} - A^*_j)$.

Lemma 2.5.9 below gives a consequence of this no-event-change result for the objective function.

Lemma 2.5.9. F changes linearly with $\Delta$ between A′ and A′′.

Proof.
There is no event change when moving from A′ to A′′, by Lemma 2.5.8. Therefore, for every realization r of the processing duration vector p, F(·|p = r) changes linearly with $\Delta$ between A′ and A′′. Hence $F(\cdot) = E_p[F(\cdot|p)]$ also changes linearly with $\Delta$ between A′ and A′′. $\Box$

Theorem 2.5.10. (Appointment Vector Integrality) If the processing durations are integer random variables then there exists an optimal appointment vector which is integer.

Proof. By Lemma 2.4.3 we know that there exists an optimal appointment schedule in the set $K = \{A \in \mathbb{R}^{n+1} : \underline{A} \le A \le \overline{A}\}$. Let $\mathcal{A}$ denote the set of all such optimal appointment vectors in K, so $\mathcal{A}$ is nonempty, bounded and closed, since by Lemma 2.4.2 F is continuous. For $A \in \mathcal{A}$ let
\[ I(A) = \begin{cases} \min\{A_j : j \in \{2, \ldots, n+1\} \text{ and } A_j \notin \mathbb{Z}\} & \text{if } A \notin \mathbb{Z}^{n+1}, \\ n\,p_{\max} + 1 & \text{if } A \in \mathbb{Z}^{n+1}. \end{cases} \]
We claim I(·) is upper semicontinuous (usc) on the compact set $\mathcal{A}$. If $A \in \mathcal{A} \cap \mathbb{Z}^{n+1}$ then $I(A) = n\,p_{\max} + 1 \ge I(B)$ for all $B \in \mathcal{A}$, implying that I(·) is usc at A. Otherwise $A \in \mathcal{A} \setminus \mathbb{Z}^{n+1}$; let $A_f = I(A)$. For any $\epsilon > 0$ let $\delta = \min\{\epsilon,\ I(A) - \lfloor A_f \rfloor,\ \lceil A_f \rceil - I(A)\} > 0$. For all $B \in \mathcal{A}$, $\|B - A\| < \delta$ implies $B_f > A_f - \delta \ge A_f - (I(A) - \lfloor A_f \rfloor) = \lfloor A_f \rfloor$ and $B_f < A_f + \delta \le A_f + \lceil A_f \rceil - I(A) = \lceil A_f \rceil$. Therefore $B_f$ is fractional, so $I(B) \le B_f \le A_f + \epsilon = I(A) + \epsilon$. Therefore I(·) is usc at $A \in \mathcal{A} \setminus \mathbb{Z}^{n+1}$. This completes the proof that I(·) is usc on $\mathcal{A}$.

Since I(·) is usc and $\mathcal{A}$ is compact, there exists an element $A^*$ of $\mathcal{A}$ maximizing I(·). Seeking a contradiction, assume $A^* \notin \mathbb{Z}^{n+1}$. Let $f = \min\{i : A^*_i = I(A^*)\}$, so for all $j < f$, $A^*_j < I(A^*)$ and thus $A^*_j \in \mathbb{Z}$.
Let A′ and A′′ be the schedules derived from $A^*$ as defined at the beginning of this section. By optimality, $F(A^*) \le F(A')$ and $F(A^*) \le F(A'')$. But by Lemma 2.5.9, F changes linearly with $\Delta$ between A′ and A′′; hence we must have $F(A^*) = F(A') = F(A'')$. Note that $A'' \ge A^* \ge \underline{A}$ and, for every $j \in J$, $A''_j = A^*_j + \Delta < \lceil A^*_j \rceil \le \overline{A}_j$, so $A'' \le \overline{A}$. This shows that $A'' \in K$ and therefore $A'' \in \mathcal{A}$. But $I(A^*) = A^*_f < A^*_f + \Delta = A''_f = I(A'')$, i.e., $I(A^*) < I(A'')$, a contradiction with the definition of $A^*$. $\Box$

Remark 2.5.11. Linear overage and underage costs are essential for the integrality of an optimal appointment vector. Consider the following example with quadratic costs. Let n = 1 with $o_1 = u_1 = 1$, $\mathrm{Prob}\{p_1 = 1\} = \mathrm{Prob}\{p_1 = 2\} = \tfrac{1}{2}$, and $F(A) = E_p\big[ o_1 \big((C_1 - A_2)^+\big)^2 + u_1 \big((A_2 - C_1)^+\big)^2 \big]$. Then $F(A) = E_p\big[(C_1 - A_2)^2\big]$ with $C_1 = p_1$, and the optimum is $A^*_2 = E_p(p_1) = \tfrac{3}{2}$, which is not integer.

2.6 L-convexity

We start by investigating an important property of our objective function, submodularity (see, e.g., [9], [30] and [18]).

Definition 2.6.1. A function $f : \mathbb{Z}^q \to \mathbb{R}$ is submodular iff $f(z) + f(y) \ge f(z \vee y) + f(z \wedge y)$ for all $z, y \in \mathbb{Z}^q$, where $z \vee y = (\max(z_i, y_i) : 1 \le i \le q) \in \mathbb{Z}^q$ and $z \wedge y = (\min(z_i, y_i) : 1 \le i \le q) \in \mathbb{Z}^q$ ([18]).

We now define a property of an appointment vector and a realization of the processing durations that will play an important role in this section.

Definition 2.6.2.
A quadruple (i, j, k, l) is a submodularity obstacle for appointment schedule A and a realization r of the processing durations if:
• $1 \le i < j < k < l \le n+1$;
• the cost coefficients satisfy $o_{j-1} + u_{j-1} + \sum_{j \le t \le k-2} o_t < u_{k-1}$;
• in schedule (A|r), jobs i and j are on-time, there is no idle time between i and j, job k is the first early job after j, and job l, with $j < l$, is the last on-time job (so there is positive idle time between j and l).

Theorem 2.6.3. For a given realization r of the processing durations, F(·|r) is submodular if and only if there is no submodularity obstacle for any integer appointment vector A and the realization r.

Proof. Since F(·|r) is defined on integer vectors, it is submodular if and only if, for every integer appointment vector A and all $1 \le i < j \le n+1$,
\[ F(A + 1_i + 1_j|r) - F(A + 1_i|r) \;\le\; F(A + 1_j|r) - F(A|r). \tag{2.2} \]
Let l denote the last on-time job in schedule (A|r). In cases (A) and (B), where $l \le j$, all jobs after j have the same start times in both schedules $(A + 1_i + 1_j|r)$ and $(A + 1_i|r)$; therefore $F(A + 1_i + 1_j|r) - F(A + 1_i|r) = -o_{j-1} \le 0$ and inequality (2.2) holds.

Case (C) ($j < l \le n+1$). If job j is late in schedule (A|r) then it is also late in schedule $(A + 1_i|r)$, and it remains not early when $A_j$ is replaced with $A_j + 1$. Therefore $F(A + 1_j|r) - F(A|r) = -o_{j-1}$ and $F(A + 1_i + 1_j|r) - F(A + 1_i|r) = -o_{j-1}$; as a result, (2.2) holds with equality. Therefore assume that job j is on-time in schedule (A|r). If there is positive idle time between i and j in schedule (A|r), then j remains on-time in schedule $(A + 1_i|r)$, hence also in $(A + 1_i + 1_j|r)$ and in $(A + 1_j|r)$, and (2.2) holds with equality. Therefore we may also assume that there is no idle time between i and j.

We consider two subcases, CR1 and CR2, for the right-hand side $F(A + 1_j|r) - F(A|r)$, and three subcases, CL1, CL2 and CL3, for the left-hand side $F(A + 1_i + 1_j|r) - F(A + 1_i|r)$. In schedule (A|r): (CR1) there is no idle time between j and l; (CR2) there is positive idle time between j and l (positive idle time between two jobs means at least one idle slot is available between them; in that case at least one job in the interval starts on time); (CL1) job i is on-time; (CL2) job i is late and there is no idle time between j and l; (CL3) job i is late and there is positive idle time between j and l.

In CR1, the time interval $[A_j, A_j + 1]$ is idle in schedule $(A + 1_j|r)$, and every job $j, j+1, \ldots, n$ incurs one more unit of overtime in schedule $(A + 1_j|r)$ than in schedule (A|r), since all jobs between j and l are not early and all jobs after l are late. Hence, in CR1, $F(A + 1_j|r) - F(A|r) = u_{j-1} + o_j + o_{j+1} + \cdots + o_n$. In CR2, there is an early job k between j and l.
Choose k to be the first early job after j, so j < k < l. (Positive idle time between two jobs means that there is at least one idle slot available between the jobs under consideration; in this case, there exists at least one job which starts on-time in the interval.) Similarly to CR1, the time interval [A_j, A_j + 1] is idle in schedule (A + 1_j|r) and every job j, j + 1, . . . , k − 1 incurs one more unit of overtime in schedule (A + 1_j|r) than in schedule (A|r), since all jobs between j and k are not early. Furthermore, job k − 1 incurs one less unit of idle time in schedule (A + 1_j|r) than in schedule (A|r), since k remains not late in schedule (A + 1_j|r). Hence, in CR2, F(A + 1_j|r) − F(A|r) = u_{j−1} + o_j + o_{j+1} + ... + o_{k−2} − u_{k−1}.

In CL1, job i remains on-time in both schedules (A + 1_i|r) and (A + 1_i + 1_j|r), and because there is no idle time between i and j, job j is late in schedule (A + 1_i|r) but on-time in schedule (A + 1_i + 1_j|r). Therefore, schedule (A + 1_i + 1_j|r) will have one unit less overtime (just before job j) than schedule (A + 1_i|r). Hence, in CL1, F(A + 1_i + 1_j|r) − F(A + 1_i|r) = −o_{j−1}.

In CL2, job j is just-on-time in schedule (A + 1_i|r) but one time unit early in schedule (A + 1_i + 1_j|r). In schedule (A|r), all jobs between j and l are not early (since there is no idle time between j and l), and all jobs after l are late (since l is the last on-time job). Furthermore, all jobs after j are late in schedule (A + 1_i|r) and therefore also late in schedule (A + 1_i + 1_j|r), because there is no idle time between i and j in schedule (A|r). As a result, schedule (A + 1_i + 1_j|r) has an idle slot just before A_j + 1 and one more unit of overtime for each job j, . . . , n + 1 than schedule (A + 1_i|r). Hence, in CL2, F(A + 1_i + 1_j|r) − F(A + 1_i|r) = u_{j−1} + o_j + o_{j+1} + ... + o_n.
Similarly to CL2, in CL3, job j is just-on-time in schedule (A + 1_i|r) but one time unit early in schedule (A + 1_i + 1_j|r). Furthermore, there is a first early job k between j + 1 and l, since there is positive idle time between j and l in schedule (A|r). The time interval [A_j, A_j + 1] is idle in schedule (A + 1_i + 1_j|r) and every job j, j + 1, . . . , k − 1 incurs one more unit of overtime in schedule (A + 1_i + 1_j|r) than in schedule (A + 1_i|r). Furthermore, job k − 1 incurs one less unit of idle time in schedule (A + 1_i + 1_j|r) than in schedule (A + 1_i|r). Hence, in CL3, F(A + 1_i + 1_j|r) − F(A + 1_i|r) = u_{j−1} + o_j + o_{j+1} + ... + o_{k−2} − u_{k−1}. Note that we have the same job k as in CR2. As a result,

F(A + 1_i + 1_j|r) − F(A + 1_i|r) − (F(A + 1_j|r) − F(A|r)) =
  −o_{j−1} − (u_{j−1} + o_j + o_{j+1} + ... + o_n) ≤ 0               if CR1 and CL1
  0                                                                   if CR1 and CL2
  −o_{j−1} − (u_{j−1} + o_j + o_{j+1} + ... + o_{k−2} − u_{k−1})      if CR2 and CL1
  0                                                                   if CR2 and CL3.

If there is no submodularity obstacle then the inequality −o_{j−1} ≤ u_{j−1} + o_j + o_{j+1} + ... + o_{k−2} − u_{k−1} in case CR2 and CL1 is satisfied, and F(·|p) is submodular. Conversely, if F(·|p) is submodular then −o_{j−1} ≤ u_{j−1} + o_j + o_{j+1} + ... + o_{k−2} − u_{k−1} for all jobs i < j < k < l such that jobs i and j are on-time, there is no idle time between i and j, and there is positive idle time between j and l; i.e., there is no submodularity obstacle for the appointment vector A and processing duration realization r. □

Corollary 2.6.4. If there is no submodularity obstacle for any integer appointment vector A and processing duration realization r then F is submodular.

Proof.
The result holds since submodularity is preserved under expectation, F(·) = E_p[F(·|p)], and by Proposition 2.6.3 F(·|p) is submodular if there is no submodularity obstacle for any integer appointment vector A and processing duration realization r. □

A submodularity obstacle is a very specific configuration, and it does not exist under reasonable cost structures such as nonincreasing u_i's (u_{i+1} ≤ u_i for all i) or nonincreasing (o_i + u_i)'s (o_{i+1} + u_{i+1} ≤ o_i + u_i for all i). To capture these cost structures we define the following:

Definition 2.6.5. The cost coefficients (u, o) are α-monotone if there exist reals α_i (1 ≤ i ≤ n) such that 0 ≤ α_i ≤ o_i and the u_i + α_i are nonincreasing in i, i.e., u_i + α_i ≥ u_{i+1} + α_{i+1} for all i = 1, . . . , n − 1.

The following proposition establishes a relation between the existence of a submodularity obstacle and α-monotonicity.

Proposition 2.6.6. If the cost coefficients (u, o) are α-monotone then there is no submodularity obstacle for any integer appointment vector A and processing duration realization r.

Proof. Assume (u, o) are α-monotone. We will show that for every j ∈ {2, . . . , n} there exists t ≥ j + 1 such that o_{j−1} + u_{j−1} + Σ_{r=j}^{t−1} o_r ≥ u_{t−1}. For contradiction, suppose o_{j−1} + u_{j−1} + Σ_{r=j}^{t−1} o_r < u_{t−1} for all t ≥ j + 1. Then

α_{t−1} + o_{j−1} + u_{j−1} + Σ_{r=j}^{t−1} o_r < u_{t−1} + α_{t−1}  (add α_{t−1} to both sides),
α_{t−1} + α_{j−1} + u_{j−1} + Σ_{r=j}^{t−1} o_r < u_{t−1} + α_{t−1}  (since α_{j−1} ≤ o_{j−1}),
α_{j−1} + u_{j−1} < u_{t−1} + α_{t−1}  (since Σ_{r=j}^{t−1} o_r + α_{t−1} ≥ 0),

but this contradicts α-monotonicity. Therefore the result follows. □

Theorem 2.6.7.
(Submodularity) If the cost vectors (u, o) are α-monotone then F is submodular.

Proof. If the cost vectors (u, o) are α-monotone then by Proposition 2.6.6 there is no submodularity obstacle for any integer appointment vector A and processing duration realization r. Hence the result follows from Corollary 2.6.4. □

Completion times, start times and tardiness, as well as their expectations, are also submodular:

Corollary 2.6.8. The tardiness T_k, start time S_k, completion time C_k, and their expected values E_p[T_k], E_p[S_k] and E_p[C_k] are submodular functions of A for every k = 1, . . . , n.

Proof. Recall that F(·|p) = Σ_{i=1}^n (o_i T_i + u_i E_i). Let 1 ≤ k ≤ n, u_i = 0 for all i, and o_i = 1 if i = k and 0 otherwise. Then T_k = F(·|p). Therefore T_k is submodular whenever F(·|p) is. By Proposition 2.6.3, F(·|p) is submodular if there is no submodularity obstacle. But the chosen u_i's and o_i's are α-monotone, so no submodularity obstacle exists by Proposition 2.6.6. As a result F(·|p), and hence T_k, is submodular. Next we show that S_k is submodular. By definition, S_1 = 0 and S_k = A_k + max{0, C_{k−1} − A_k} = A_k + T_{k−1} (1 < k ≤ n). Since A_k is a scalar term and T_{k−1} is submodular, S_k is also submodular. Similarly, C_k = S_k + p_k (1 ≤ k ≤ n) by definition. Since p_k is a scalar and S_k is submodular, C_k is submodular. Finally, the expected values E_p[T_k], E_p[S_k] and E_p[C_k] are submodular since submodularity is preserved under expectation and T_k, S_k and C_k are submodular. This completes the proof. □

Remark 2.6.9. The earliness E_k is not a submodular function of A in general. To see this, let A = (0, 3, 5, 6, 9) with deterministic processing durations p_1 = 3, p_2 = 2, p_3 = 2, p_4 = 1. Then E_4(A) = (A_5 − C_4)+ = (9 − 8)+ = 1 while, similarly, E_4(A + 1_1 + 1_2) = 0, E_4(A + 1_1) = 0 and E_4(A + 1_2) = 0. Therefore 1 + 0 = E_4(A) + E_4(A + 1_1 + 1_2) > E_4(A + 1_1) + E_4(A + 1_2) = 0 + 0. Hence E_4 is not submodular.
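The counterexample of Remark 2.6.9 is small enough to check mechanically. The following Python sketch (our own illustrative code with hypothetical helper names, not part of the thesis) recomputes the four earliness values and exhibits the violated submodular inequality:

```python
def completion_times(A, p):
    """Completion times for appointments A and deterministic durations p:
    each job starts at the later of its appointment and its predecessor's end."""
    end = 0
    C = []
    for a, d in zip(A, p):
        end = max(a, end) + d
        C.append(end)
    return C

def E4(A, p=(3, 2, 2, 1)):
    """Earliness of job 4: E4(A) = (A_5 - C_4)^+."""
    C = completion_times(A[:4], p)
    return max(A[4] - C[3], 0)

A     = (0, 3, 5, 6, 9)
A11   = (1, 3, 5, 6, 9)   # A + 1_1
A12   = (0, 4, 5, 6, 9)   # A + 1_2
A1112 = (1, 4, 5, 6, 9)   # A + 1_1 + 1_2 = A11 v A12, while A = A11 ^ A12

# Submodularity would require E4(A11) + E4(A12) >= E4(A1112) + E4(A), but:
print(E4(A11) + E4(A12), E4(A1112) + E4(A))   # prints: 0 1
```

Here A + 1_1 + 1_2 and A are exactly the join and meet of A + 1_1 and A + 1_2, so the printed values witness the failure of the inequality in Definition 2.6.1.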
The objective function is not only submodular but also L-convex, an important discrete convexity property. Before we present the L-convexity results, we give the definition of L-convexity.

Definition 2.6.10. f : Z^q → R ∪ {∞} is L-convex iff f(z) + f(y) ≥ f(z ∨ y) + f(z ∧ y) for all z, y ∈ Z^q and there exists r ∈ R such that f(z + 1) = f(z) + r for all z ∈ Z^q ([18]).

Proposition 2.6.11. For any realization r of the processing durations, the function F(·|p = r) is L-convex if and only if there is no submodularity obstacle for any integer appointment vector A and realization r.

Proof. Let r be a realization of the processing durations. If there is no submodularity obstacle for any integer appointment vector A and realization r then F(·|p = r) is submodular by Proposition 2.6.3, which is the first property in the definition of L-convexity. Recall that F(A|p = r) = Σ_{i=1}^n (o_i T_i + u_i E_i), with T_i = (C_i − A_{i+1})+ and E_i = (A_{i+1} − C_i)+. Consider F(A + 1|p = r) = Σ_{i=1}^n (o_i T_i^1 + u_i E_i^1), where x_i^1 denotes the quantity of interest of job i under appointment vector A + 1, for x ∈ {S, C, T, E}. Then S_i^1 = S_i + 1 and C_i^1 = C_i + 1, hence T_i^1 = T_i and E_i^1 = E_i. Therefore F(A + 1|p = r) − F(A|p = r) = 0. This gives us the second property of the L-convexity definition. Conversely, if F(·|p = r) is L-convex then F(·|p) must be submodular, and by Proposition 2.6.3 there is no submodularity obstacle for any integer appointment vector A. □

Corollary 2.6.12. If there is no submodularity obstacle for any integer appointment vector A and realization r then F(·) is L-convex.

Proof. The claim holds since L-convexity is preserved under expectation, F(·) = E_p[F(·|p)], and by Proposition 2.6.11 F(·|p) is L-convex if there is no submodularity obstacle for any integer appointment vector A and realization r. □

Theorem 2.6.13. (L-convexity) If the cost vectors (u, o) are α-monotone then F(A) is L-convex.

Proof.
If the cost coefficients (u, o) are α-monotone then by Proposition 2.6.6 there is no submodularity obstacle for any integer appointment vector A and processing duration realization r. Therefore the result follows from Corollary 2.6.12. □

2.7 Algorithms

Using algorithmic results for minimizing L-convex functions, [19] and [18], we can minimize the expected cost F in polynomial time, using a polynomial number of expected cost computations and submodular set minimizations. Assume the input to our problem consists of the number n of jobs, the cost vectors u and o, and the horizon h over which F is to be minimized. Assume also that the processing times are integer and that we have an oracle which computes the expected cost F(A) for any given integer appointment vector A.

Theorem 2.7.1. (Polynomial Time Algorithm 1) If the cost vectors (u, o) are α-monotone and the processing durations are integer then there exists an algorithm which minimizes F using polynomial time and a polynomial number of expected cost evaluations.

Proof. The Appointment Vector Integrality Theorem 2.5.10 implies that to minimize F we only need to consider integer appointment vectors. If the cost vectors (u, o) are α-monotone then F is an L-convex function by the L-convexity Theorem 2.6.13. Then F can be minimized in O(σ(n) · EO · n² log⌈h/2n⌉) time by Iwata's steepest descent scaling algorithm (Section 10.3.2 of Murota [18]), where σ(n) is the number of function evaluations required to minimize a submodular set function over an n-element ground set and EO is the time needed for an expected cost evaluation. □

When the processing durations are independent, the expected cost of an integer appointment vector can be evaluated efficiently. We use recursive equations for the probability distributions of the start time, completion time, tardiness and earliness of each job and compute F at an integer point A in O(n² p_max²) time.

Theorem 2.7.2.
If the processing durations are stochastically independent and A is an integer appointment vector then F(A) may be computed in O(n² p_max²) time.

Proof. The first job starts at time zero, so S_1 = A_1 = 0 and C_1 = p_1, i.e., the distribution of C_1 is that of p_1. Next, we look at the start times S_i (2 ≤ i ≤ n). We have S_i = max(A_i, C_{i−1}), so for all k = 0, 1, . . . , n·p_max,

Prob{S_i = k} =
  0                  if k < A_i,
  Prob{C_{i−1} ≤ k}  if k = A_i,      (2.3)
  Prob{C_{i−1} = k}  if k > A_i.

Note that S_i and p_i are independent because S_i is completely determined by p_1, p_2, ..., p_{i−1} and A_1, A_2, ..., A_i. Since C_i = S_i + p_i, by conditioning on p_i and using the independence of p_i and S_i, we obtain for all k = 0, 1, ..., n·p_max,

Prob{C_i = k} = Σ_{j=0}^{p̄_i} Prob{S_i = k − j} Prob{p_i = j},      (2.4)

where p̄_i denotes the largest possible value of p_i, and Prob{C_{i−1} ≤ k} = Prob{C_{i−1} = k} + Prob{C_{i−1} ≤ k − 1}. For each i − 1, Prob{C_{i−1} ≤ k} may be computed in O((i − 1) p_max) time. Hence Prob{C_i = k} can be computed once we have the distribution of S_i. For each job i and value k, computing Prob{S_i = k} by Eq. (2.3) requires a constant number of operations, and computing Prob{C_i = k} by Eq. (2.4) requires O(p̄_i + 1) operations. Therefore the total number of operations needed for computing the whole start time and completion time distributions for job i is O(n p_max²). The distributions of T_i and E_i and their expected values E_p[T_i] and E_p[E_i] can then be determined in O(n p_max) time. Therefore, the objective value F(A) is obtained in O(n² p_max²) time. □

The running time of the algorithm given in Theorem 2.7.1 depends on how the distributions of the processing durations are given. Under the common assumption of independent durations, the input to the algorithm includes the distribution of each processing duration p_i, which specifies p̄_i + 1 probabilities Prob{p_i = x} for x = 0, 1, . . . , p̄_i. In this case, F can be minimized in O(n⁹ p_max² log p_max) time.

Theorem 2.7.3. (Polynomial Time Algorithm 2) If the processing durations are independent, integer-valued random variables and the cost vectors (u, o) are α-monotone then we can minimize F in O(n⁹ p_max² log p_max) time.

Proof. The horizon h can be taken as n·p_max ≥ Σ_{i=1}^n p_i, so h is polynomially bounded in the input size. Theorem 2.7.2 shows that EO = O(h²) when processing durations are independent. Theorem 4 of Orlin [21] shows that σ(n) = O(n⁵). The result follows from Theorem 2.7.1. □

2.8 Objective Function with a Due Date

Suppose that we are given a due date D for the end of processing, instead of letting the model choose a planned makespan A_{n+1}. We assume D is integer and 0 ≤ D ≤ Σ_{i=1}^n p_i. Define Ã = (A_1, A_2, ..., A_n); then our new objective becomes

F^D(Ã) = E_p[ Σ_{j=1}^{n−1} ( o_j (C_j − A_{j+1})+ + u_j (A_{j+1} − C_j)+ ) + o_n (C_n − D)+ + u_n (D − C_n)+ ].

We immediately observe that F(Ã, D) = F^D(Ã). Like F, F^D has many properties such as discrete convexity (F^D is L♮-convex, see Definition 2.8.4), optimal vector integrality and the existence of a polynomial time minimization algorithm. We verify the properties of F^D. Let K̃ = {Ã ∈ R^n : Ã_1 = 0, …}.

2.9 No-Shows and Emergency Jobs

To model no-shows, we allow Prob{p_i = 0} > 0 and assign Prob{p_i = 0} = 1 − Σ_{k>0} Prob{p_i = k}. Emergency jobs arrive after processing starts, without any appointments, and they may need to be processed as soon as possible. We take a non-preemptive approach, i.e., we finish processing the current job first. We assume that emergency jobs may arrive only during the processing of a planned job, i.e., there is no emergency job arrival during idle time or during the processing of an emergency job.
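As an illustration of the recursions (2.3) and (2.4) in Theorem 2.7.2, the expected cost of an integer appointment vector can be evaluated by a short dynamic program over duration distributions. This is our own sketch under the independence assumption, not the thesis code; all function and variable names are illustrative, and a no-show is represented simply as probability mass at duration zero:

```python
def expected_cost(A, dists, o, u):
    """Expected cost F(A) via the recursions of Theorem 2.7.2.

    A     : integer appointments (A_1, ..., A_{n+1}) with A[0] == 0
    dists : dists[i] maps duration -> probability for job i+1;
            mass at duration 0 models a no-show of that job
    o, u  : per-job overage (tardiness) and underage (earliness) costs
    """
    n = len(dists)
    F = 0.0
    C = None                                   # distribution of C_{i-1}
    for i in range(n):
        if i == 0:
            S = {0: 1.0}                       # S_1 = A_1 = 0
        else:                                  # Eq. (2.3): S_i = max(A_i, C_{i-1})
            S = {A[i]: sum(p for k, p in C.items() if k <= A[i])}
            for k, p in C.items():
                if k > A[i]:
                    S[k] = S.get(k, 0.0) + p
        C = {}                                 # Eq. (2.4): C_i = S_i + p_i (convolution)
        for s, ps in S.items():
            for d, pd in dists[i].items():
                C[s + d] = C.get(s + d, 0.0) + ps * pd
        # accumulate o_i * E[T_i] + u_i * E[E_i] against the next appointment
        F += o[i] * sum(p * max(k - A[i + 1], 0) for k, p in C.items())
        F += u[i] * sum(p * max(A[i + 1] - k, 0) for k, p in C.items())
    return F

# One job, duration 1 or 2 with equal probability, planned makespan 2:
# cost = 0.5 * u_1 * 1 (one unit early) + 0.5 * 0 = 0.5
print(expected_cost((0, 2), [{1: 0.5, 2: 0.5}], o=(1,), u=(1,)))  # prints: 0.5
```

With distributions stored as dictionaries, the work per job is proportional to the support sizes, broadly in line with the O(n² p_max²) bound of Theorem 2.7.2.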
This is a reasonable assumption when the ratio of the total idle time between planned jobs to the total processing duration of planned jobs is small and there are not many emergency jobs. During the processing of planned job i, some emergency jobs may arrive, and these emergency jobs will be processed back to back just after job i and before job i + 1. Observe that there will be no idle time between the processing of emergency jobs, so we may think of the processing of these emergency jobs as a lengthening of job i's processing time. Therefore, the problem reduces to finding the new processing duration distribution of job i. Figure 2.3 shows an example of a schedule with emergency jobs.

Figure 2.3: An Example Schedule with Emergency Jobs

We assume that at most a certain number of emergency jobs can arrive during the processing of a planned job, and that the number of emergency job arrivals during each planned job is given by a discrete probability distribution. Furthermore, the processing durations of emergency jobs are also given by a discrete probability distribution. At most m_i^max emergency jobs can arrive during the processing of job i. Let p_e be the discrete processing duration distribution of emergency jobs (we use the same processing duration distribution for each emergency job, but one may take p_e^i as the distribution of emergency jobs arriving during job i). We denote by P^i the total processing duration of the emergency jobs that will be processed just after job i. Then all we need to do is to find the distribution of the new job processing duration p̃_i = p_i + P^i. Once we have the p̃_i's, we can minimize F_E^D(·) = E_p̃[F^D(·|p̃)] as we minimize F^D(·) = E_p[F^D(·|p)], and F_E(·) = E_p̃[F(·|p̃)] as we minimize F(·) = E_p[F(·|p)], and thereby solve the scheduling problem with emergency jobs.

We now obtain the distribution of p̃_i = p_i + P^i. Let m_i (0 ≤ m_i ≤ m_i^max) be the number of emergency jobs arriving during the processing of job i. We define P_k^i = Σ_{j=1}^k p_e (1 ≤ k ≤ m_i^max). The distributions of the P_k^i (1 ≤ k ≤ m_i^max) can be computed recursively, starting from P_1^i = p_e and computing P_k^i = P_{k−1}^i + p_e for 1 < k ≤ m_i^max. Once we have the distributions of the P_k^i we can find the distributions of P^i (1 ≤ i ≤ n) as follows:

Prob{P^i = 0} = Prob{m_i = 0} + Σ_{k=1}^{m_i^max} Prob{m_i = k} Prob{P_k^i = 0},
Prob{P^i = j} = Σ_{k=1}^{m_i^max} Prob{m_i = k} Prob{P_k^i = j}   for j = 1, 2, ..., m_i^max · p_max.

The last thing we need to do is to obtain the distribution of p̃_i = p_i + P^i, a single convolution of the random variables p_i (already available) and P^i (just obtained).

2.10 Current Work, Future Work and Conclusion

After developing our modeling framework and proving that we can find an optimal appointment schedule in polynomial time, we focus on practical implementation issues. Our objective as a function of a continuous appointment vector is non-smooth, but in Chapter 3 we show that the objective is convex and characterize its subdifferential. We also obtain closed-form formulas for the subdifferential as well as for any subgradient. This characterization is useful: it allows us to develop two important extensions. In the first extension, in Chapter 3, we relax the perfect information assumption on the probability distributions of processing durations, i.e., we assume that processing duration distributions are not known and can only be statistically estimated on the basis of past data or statistical sampling. Our approach is non-parametric, and we assume no (prior) information about processing duration distributions.
We develop a sample-based approach to determine the number of independent samples required to obtain a provably near-optimal solution with high confidence, i.e., the cost of the sample-based optimal schedule is, with high probability, no more than (1 + ε) times the cost of an optimal schedule determined with knowledge of the true distributions. This result has important practical implications, as the true processing duration distributions are often not known and only their past realizations or some samples are available.

In another study, Appendix A, we use the subdifferential characterization with independent processing durations and compute a subgradient in polynomial time for any given appointment schedule. This is not a trivial task, as the subdifferential formulas include exponentially many terms and some of the probability computations are complicated. We also obtain an easily computable lower bound on the optimal objective value. Furthermore, we extend the computation of the expected total cost (in polynomial time) to any (real-valued) appointment vector. These results allow us to use non-smooth convex optimization techniques to find an optimal schedule. Although we already have a polynomial time algorithm to find an optimal appointment schedule, it is not clear at the moment which technique will work faster in practice. We are also considering hybrid algorithms based on both discrete convexity and non-smooth convex optimization, combined with a special-purpose integer rounding method. Preliminary versions of these algorithms have been developed. The rounding algorithm takes any fractional solution and rounds it to an integer one with the same or improved objective value. We are planning to implement our algorithms and compare the different approaches in computational experiments.

There are many exciting future directions for this research.
One is to find an optimal sequence and appointment schedule simultaneously, i.e., given the jobs, determine a sequence and a job appointment schedule minimizing the total expected cost. This problem is likely to be hard, but it may be possible to develop heuristic algorithms with performance guarantees. Studying some special cases of this problem may shed light on the general case. Another direction is to put our findings into practice. We are in contact with local healthcare organizations to apply our results with real data and compare the appointment schedules determined by our methods with current practices.

In this chapter, we study a discrete time version of the appointment scheduling problem and establish discrete convexity properties of the objective function. We prove that the objective function is L-convex under mild assumptions on the cost coefficients. Furthermore, we show that there exists an optimal integer appointment schedule minimizing the objective. This result is important as it allows us to optimize only over integer appointment schedules without loss of optimality. All these results on the objective function and the optimal appointment schedule enable us to develop a polynomial time algorithm, based on discrete convexity, that, for a given processing sequence, finds an appointment schedule minimizing the total expected cost. When processing durations are stochastically independent, we evaluate the expected cost for a given processing order and an integer appointment schedule efficiently, both in theory (in polynomial time) and in practice (computations are quite fast, as shown in our preliminary computational experiments). Independent processing durations lead to faster algorithms. Our modeling framework can handle a given due date for the total processing (e.g., end of day for an operating room), after which overtime is incurred, instead of letting the model choose an end date. We also extend our model and framework to include no-shows and emergencies.
We believe that our framework is sufficiently generic so that it is portable and applicable to many appointment systems in healthcare as well as in other areas.

2.11 Bibliography

[1] Mehmet A. Begen and Maurice Queyranne. Appointment scheduling with discrete random durations. Proceedings of the 20th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 845–854, 2009.
[2] Illana Bendavid and Boaz Golany. Setting gates for activities in the stochastic project scheduling problem through the cross entropy methodology. Annals of Operations Research, published online, 2009.
[3] Peter M. Vanden Bosch, Dennis C. Dietz, and John R. Simeoni. Scheduling customer arrivals to a stochastic service system. Naval Research Logistics, 46:549–559, 1999.
[4] Brecht Cardoen, Erik Demeulemeester, and Jeroen Beliën. Operating room planning and scheduling: A literature review. Working paper, Katholieke Universiteit Leuven, Faculty of Business and Economics, Belgium, 2008.
[5] Tugba Cayirli and Emre Veral. Outpatient scheduling in health care: A review of literature. Production and Operations Management, 12(4), 2003.
[6] Brian Denton and Diwakar Gupta. A sequential bounding approach for optimal appointment scheduling. IIE Transactions, 35:1003–1016, 2003.
[7] Mohsen Elhafsi. Optimal leadtime planning in serial production systems with earliness and tardiness costs. IIE Transactions, 34:233–243, 2002.
[8] Lisa Fleischer. Recent progress in submodular function minimization. OPTIMA: Mathematical Programming Society Newsletter, 64:1–11, 2000.
[9] Satoru Fujishige. Submodular Functions and Optimization. Elsevier, 2005.
[10] Linda Green, Sergei Savin, and Ben Wang. Managing patient service in a diagnostic medical facility. Operations Research, 54:11–25, 2006.
[11] Diwakar Gupta and Lei Wang. Revenue management for a primary-care clinic in the presence of patient choice. Operations Research, 56:576–592, 2008.
[12] Refael Hassin and Sharon Mendel. Scheduling arrivals to queues: A single-server model with no-shows. Management Science, 54(3):565–572, 2008.
[13] Satoru Iwata. Submodular function minimization. Mathematical Programming, 112:45–64, 2008.
[14] Guido C. Kaandorp and Ger Koole. Optimal outpatient appointment scheduling. Health Care Management Science, 10:217–229, 2007.
[15] Yossef Luzon, Avishai Mandelbaum, and Michal Penn. Scheduling appointments via fluids control. Industrial Engineering and Management, Technion, Haifa, Israel, 2009.
[16] S. T. McCormick. Submodular function minimization. A chapter in the Handbook on Discrete Optimization, K. Aardal, G. Nemhauser, and R. Weismantel, eds. Elsevier, 2006.
[17] Kazuo Murota. Discrete convex analysis. Mathematical Programming, 83(3):313–371, 1998.
[18] Kazuo Murota. Discrete Convex Analysis. SIAM Monographs on Discrete Mathematics and Applications, Vol. 10. Society for Industrial and Applied Mathematics, 2003.
[19] Kazuo Murota. On steepest descent algorithms for discrete convex functions. SIAM Journal on Optimization, 14(3):699–707, 2003.
[20] Kazuo Murota. Recent developments in discrete convex analysis. A chapter in Research Trends in Combinatorial Optimization, W. Cook, L. Lovász and J. Vygen, eds. Springer-Verlag, 2009.
[21] James B. Orlin. A faster strongly polynomial time algorithm for submodular function minimization. Mathematical Programming, 118(2):237–251, 2007.
[22] Jonathan Patrick, Martin L. Puterman, and Maurice Queyranne. Dynamic multi-priority patient scheduling for a diagnostic resource. Operations Research, 56:1507–1525, 2008.
[23] Michael Pinedo. Stochastic scheduling with release dates and due dates. Operations Research, 31:559–572, 1993.
[24] Michael Pinedo. Scheduling: Theory, Algorithms, and Systems. Prentice Hall, 2001.
[25] Lawrence W. Robinson and Rachel R. Chen. Scheduling doctors' appointments: optimal and empirically-based heuristic policies. IIE Transactions, 35:295–307, 2003.
[26] Lawrence W. Robinson, Yigal Gerchak, and Diwakar Gupta. Appointment times which minimize waiting and facility idleness. Working paper, DeGroote School of Business, McMaster University, 1996.
[27] F. Sabria and C. F. Daganzo. Approximate expressions for queueing systems with scheduled arrivals and established service order. Transportation Science, 23:159–165, 1989.
[28] Pablo Santibáñez, Mehmet Begen, and Derek Atkins. Surgical block scheduling in a system of hospitals: An application to resource and wait list management in a British Columbia health authority. Health Care Management Science, 10:269–282, 2007.
[29] Hans-Jörg Schütz and Rainer Kolisch. Capacity allocation for magnetic resonance imaging scanners. Working paper, TUM Business School, Technische Universität München, Germany, 2008.
[30] Donald M. Topkis. Minimizing a submodular function on a lattice. Operations Research, 26(2):305–321, 1978.
[31] P. Patrick Wang. Static and dynamic scheduling of customer arrivals to a single-server system. Naval Research Logistics, 40(3):345–360, 1993.
[32] P. Patrick Wang. Sequencing and scheduling n customers for a stochastic server. European Journal of Operational Research, 119(3):729–738, 1999.
[33] E. N. Weiss. Models for determining estimated start times and case orderings in hospital operating rooms. IIE Transactions, 22(2):143–150, 1990.
[34] Paul Zipkin. On the structure of lost-sales inventory models. Operations Research, 56(4):937–944, 2008.

3 A Sampling-Based Approach to Appointment Scheduling¹

We consider the problem of appointment scheduling with discrete random durations of Chapter 2 under the assumption that the duration probability distributions are not known and only a set of independent samples is available, e.g., historical data. The goal is to determine an optimal planned start schedule, i.e., an optimal appointment schedule for a given sequence of jobs on a single processor, such that the expected total underage and overage cost is minimized.
We show that the objective function of the appointment scheduling problem is convex under a simple sufficient condition on the cost coefficients. Under this condition we characterize the subdifferential of the objective function with a closed-form formula. We use this formula to determine bounds on the number of independent samples required to obtain a provably near-optimal solution with high probability.

3.1 Introduction and Motivation

We consider the appointment scheduling problem with discrete random durations introduced in Chapter 2, but under the assumption that the probability distributions of job durations are not known and the only available information on the durations is a set of independent random samples, e.g., historical data. We show that the objective function is convex under a simple condition on the cost parameters, characterize its subdifferential, and determine the number of independent samples required to obtain a provably near-optimal solution with high probability.

In the appointment scheduling problem, jobs are processed on a single processor in a given sequence and one has to decide the planned starting time of each job, also called its appointment date.² Jobs are not available before their appointment dates. Moreover, the processing durations are a priori random and are realized only after the appointment dates are set. Due to stochastic processing durations, some jobs may finish earlier, whereas some others may finish later, than the appointment date of the next job. If a job ends earlier than the next job's appointment date then the system experiences underage cost due to underutilization of the processor.

¹A version of this chapter has been submitted for publication: Begen M.A., Levi R. and Queyranne M. A Sampling-Based Approach to Appointment Scheduling.
On the other hand, when a job finishes later than the next job's appointment date, the system incurs overage cost due to the waiting of the next job and/or overtime for the processor. There is therefore an important trade-off between underutilization, waiting and overtime, i.e., between underage and overage. The goal is to find an optimal appointment schedule, i.e., an appointment date vector which minimizes the total expected underage and overage costs.

There are important real-world applications of this problem, especially in healthcare (such as surgery scheduling), transportation and production; e.g., see Chapter 2 and the references therein. For example, in surgery scheduling we can think of the surgeries as the jobs, the operating room/surgeon as the processor and the hospital as the scheduler. As observed in practice, surgery durations show variability (e.g., see Figure 2.1 and Figure 4.4), and determining planned start times, i.e., setting the appointment dates of surgeries, is an important and challenging task [8]. The surgery appointment schedule has a direct impact on the amount of overtime and idle time of the operating room(s). Operating room overtime can be costly since it involves staff overtime as well as additional overhead costs; on the other hand, idle-time costs can also be high due to the opportunity cost of unused capacity. A similar trade-off exists in the scheduling of container ship arrivals at a container terminal [34]. Another example comes from a production system with multiple stages and stochastic lead times, where the objective is to determine planned lead times minimizing the expected cost [10].

Researchers have studied the appointment scheduling problem for the last 50 years; e.g., see [6], [8], [19], [5], [39]. The existing literature exclusively uses continuous processing durations with a full probabilistic characterization, i.e., the probability distributions of the processing times of the jobs are given as part of the input.
Due to the continuous processing durations there are computational difficulties in evaluating the expected total cost. (To conform with scheduling terminology, we use the term "date" to denote a point in time. In most applications of appointment scheduling, the appointment "dates" are actually appointment times within the day for which the jobs are being scheduled.) For a given sequence of jobs, only small instances can be solved to optimality; larger instances require heuristics.

Chapter 2 studies a discrete-time version of the appointment scheduling problem, i.e., the processing durations are integer and given by a discrete probability distribution. This assumption fits many applications; for example, surgeries and physician appointments are scheduled on a minute basis, usually in blocks of a certain number of minutes. (For instance, one 20-minute physician appointment could be two blocks of 10 minutes.) Chapter 2 establishes discrete convexity ([27]) properties of the objective function and proves that the objective function is L-convex ([26]) under a mild assumption on cost coefficients. Furthermore, it shows that there exists an optimal integer appointment schedule minimizing the objective. This result is important as it makes it possible to optimize only over integer appointment schedules without loss of optimality. All these results on the objective function and optimal appointment schedule lead to a polynomial-time algorithm, based on discrete convexity ([28]), that, for a given processing sequence, finds an appointment schedule minimizing the total expected cost. This algorithm invokes a sequence of submodular set function minimizations, for which various algorithms are available, see e.g., [11], [25] and [18]. When processing durations are stochastically independent, the expected cost for a given processing order and an integer appointment schedule is evaluated in polynomial time in Chapter 2.
Independent processing durations lead to faster algorithms. Chapter 2's modeling framework can include a given due date for the end of the processing of all jobs (e.g., end of day for an operating room) after which overtime is incurred, instead of letting the model choose an end date. The framework is also extended to include no-shows and some emergency jobs.

In Chapter 2 the discrete convexity, L-convexity, of the objective function is proved by assuming integer appointment vectors. The definition of L-convexity includes submodularity and a translation equivalence property. With integer appointment vectors, Chapter 2 establishes L-convexity by proving that the objective function is submodular (under a simple condition on cost coefficients) and that the translation equivalence property is satisfied. In this chapter, we show that the objective function of the appointment scheduling problem (as a function of a continuous appointment vector) is convex under the same simple condition on the cost coefficients. Convexity of the objective function has been discussed (explicitly or implicitly) in several papers in the literature [39, 5, 32, 8], but we believe our analysis is the first rigorous treatment of the subject with this simple condition. Under this simple sufficient condition, the objective function is convex but non-smooth due to kinks. Due to the non-differentiability of the objective function we work with subgradients (instead of derivatives). In fact, we characterize the set of all subgradients, i.e., the subdifferential at a given appointment date vector, with a closed-form formula. This is unusual, since only a single subgradient may be obtained in most applications. We use the subdifferential characterization to relax the perfect information assumption of Chapter 2 on the probability distribution of processing times by establishing a link between the sampling-based solution quality and the number of samples.
Chapter 2 assumes complete information on the job duration distributions, i.e., there is an underlying discrete probability distribution for job durations, and this distribution is fully known. This may be the case for some applications. For others, however, the true duration distributions may not be known, but their (past) realizations or some samples may be available. A good example of such an application comes from healthcare: hospitals and surgeons usually have some data available on the length of previous surgeries, but no one knows the true distribution for a certain type of surgery. When the true distribution is not known, the question is how to use these samples to find a "good" solution.

We assume that there is an underlying joint discrete distribution for the job durations, but that only a set of independent samples is available. This may correspond to historical data, for example daily observations of surgery durations. In this chapter, we develop a sampling-based approach and determine the number of independent samples required to obtain a provably near-optimal solution with high probability, i.e., the cost of the sampling-based optimal schedule is, with high probability, no more than (1 + ε) times the cost of the optimal schedule computed based on the true distribution. The job durations themselves need not be independent, but the samples are: each sample is a vector of durations, where each coordinate corresponds to a job duration, and these vectors are independent. Assuming independence of probability distributions (e.g., of job durations, or of demands in different periods) is common, but we do not require it in our analysis.

There has been much interest in studying stochastic models with partial probabilistic characterization. Inventory models, especially the newsvendor problem and its multi-period extension, have received particular attention.
Depending on how much is known about the true distribution(s), different approaches are possible. One may know the family of the true distribution but be uncertain about its parameters. This is called the parametric approach, and in this case there is usually an initial prior belief on the uncertainty of the parameter values. This belief is revised with Bayesian updates as realizations of the distribution are observed, e.g., see Ding et al. [9] and the references therein. Liyanage and Shanthikumar [23] introduced operational statistics, an approach that combines parameter estimation and optimization for the case of a known family but unknown parameters and priors; see also [7] for more on operational statistics.

If there are no assumptions on the true distribution, i.e., no prior assumptions on its family or its parameters, then the approach becomes non-parametric. Levi et al. [22] use sample average approximation (SAA) (e.g., see [36]) to determine the number of samples required for the SAA solution to be provably near-optimal (with respect to the true demand distribution) with high probability. For the multi-period case, they develop a sampling-based dynamic programming framework and obtain similar results. Levi et al. [22] also establish a link between first-order information and relative error with respect to the optimal value. Samples can then be used to estimate derivatives or, more generally, subgradients. Godfrey and Powell [13] develop a Concave Adaptive Value Estimation (CAVE) algorithm to approximate the value function of a newsvendor problem by successive concave piecewise-linear functions. The CAVE algorithm performs well in numerical experiments, but no convergence result is given in the paper. Powell et al. [31] extend this work and establish convergence results for separable objective functions with integer break points.
Huh and Rusmevichientong [17] propose another non-parametric approach to the single- and multi-period newsvendor problem for the case where only sales information is available (so-called censored demand observations). The authors develop an adaptive policy whose average expected cost converges to the newsvendor cost (determined with knowledge of the true demand distribution) at a rate proportional to the square root of the number of periods. Huh et al. [16] consider a similar model and develop new adaptive policies using the Kaplan-Meier estimator [20]. Another alternative in the non-parametric approach is to work with some partial information on the true distribution, e.g., known moments. For the newsvendor problem, the mean and the variance of demand can be used to develop a robust min-max policy; see [35, 12, 30] and the references therein for more on this approach. The bootstrap method is another "distribution-free" non-parametric approach. Bookbinder and Lordahl [4] use this method to estimate a quantile of the lead-time demand distribution in order to determine the reorder point for a continuous-review inventory system [3].

Besides inventory models, researchers use sampling methods for stochastic programs, in particular the SAA method. SAA is one of the most popular approximation methods for stochastic programs, replacing the true distribution with an empirical distribution obtained from random samples. Several papers, e.g., [21], [1], [24], [37], [38], [36] (and the references therein), obtain results on convergence and on the number of samples required for the SAA method to give small relative errors with high probability. Our modeling technique is not stochastic programming; rather, we make use of the discrete convexity and polynomial-time algorithm results of Chapter 2 to solve the SAA counterpart of the appointment scheduling problem.
Furthermore, our analysis is non-parametric: we characterize the subdifferential of the objective and use this information explicitly to establish a link between the number of samples and the quality of the SAA solution.

Appointment scheduling reduces to the well-known newsvendor problem when there is only a single job. This was first recognized by Weiss [41]. However, the problem departs from newsvendor characteristics and solution methods in the case of multiple jobs ([32], Chapter 2). In the multi-period newsvendor problem, decisions are naturally taken sequentially at each period. By contrast, in appointment scheduling one needs to have a schedule before any processing can start, i.e., one determines all the decision variables (the appointment dates) simultaneously at the beginning of the planning horizon (at time zero).

We employ an SAA approach for the appointment scheduling problem. We use the available (independent) samples to form an empirical distribution and find an optimal solution for it. For the SAA problem, using the subdifferential characterization (Section 3.4) and the well-known Hoeffding inequality [15], we determine the number of samples required to guarantee that, with high probability (i.e., at least the specified confidence level), there exists a (sufficiently) small (in terms of the specified accuracy level) subgradient at the SAA solution. As a final step we show that the objective value (with respect to the true distribution) of the SAA solution is no more than (1 + the accuracy level) times the true optimal value with probability at least the confidence level. Our bound on the number of required samples is polynomial in the number of jobs, the accuracy level, the confidence level and the cost coefficients.

To the best of our knowledge, this chapter is the first to address the appointment scheduling problem when the probability distributions of durations are unknown.
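The sample-size argument above rests on Hoeffding's inequality. As a rough, self-contained illustration (not the chapter's actual bound, which also uses the subdifferential characterization), the following sketch computes the number of i.i.d. samples that makes an empirical mean of a quantity bounded in a known interval ε-accurate with confidence 1 − δ; the function name and interface are ours, not the chapter's:

```python
import math

def hoeffding_sample_count(width, eps, delta):
    """Hoeffding's inequality: for N i.i.d. samples of a random variable
    taking values in an interval of length `width`, the sample mean deviates
    from the true mean by more than eps with probability at most delta once
    N >= width**2 * ln(2/delta) / (2 * eps**2)."""
    return math.ceil(width ** 2 * math.log(2.0 / delta) / (2.0 * eps ** 2))

# e.g., a quantity bounded by 60 (say, minutes), accuracy 5, confidence 95%
N = hoeffding_sample_count(60, 5, 0.05)  # 266 samples suffice
```

Roughly speaking, in the chapter the inequality is applied to the quantities that define subgradient coordinates rather than to the objective directly, which is where the polynomial dependence on the number of jobs and the cost coefficients enters.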
We develop a sampling-based approach for the appointment scheduling problem, which is a stochastic non-linear integer program. Furthermore, we believe this chapter presents the first rigorous analysis of the convexity of the objective function of the appointment scheduling problem under a simple condition. Last but not least, we characterize the set of all subgradients, i.e., the subdifferential at a given appointment date vector, with a closed-form formula. We use the subdifferential characterization to relate the SAA solution quality to the number of samples required. As a result, we relax the perfect information assumption of Chapter 2 on the probability distribution of processing times. We believe this subdifferential characterization will lead to additional applications, e.g., finding optimal appointment schedules by using non-smooth optimization methods as in Appendix A.

The rest of this chapter is organized as follows. In Section 3.2, we give the formal description of the appointment scheduling problem. We present the convexity results in Section 3.3. Section 3.4 contains the subdifferential characterization. We provide our sampling analysis in Section 3.5. Finally, Section 3.6 concludes the chapter. We provide all the proofs either just after their statements or in Section 3.7.

3.2 Formal Description of the Appointment Scheduling Problem

This section closely follows Chapter 2. There are n + 1 jobs numbered 1, 2, ..., n + 1 that need to be processed sequentially (in the order 1, 2, ..., n + 1) on a single processor. An appointment schedule needs to be prepared before any processing can start. That is, each job is assigned a planned start date. In particular, job i will not be available before its appointment date (planned start date) Ai. When a job finishes earlier than the next job's appointment date, the system experiences some cost due to under-utilization, i.e., underage cost.
On the other hand, if a job finishes later than the successor job's appointment date, the system experiences overage cost due to the overtime of the current job and the waiting of the next job. The goal is to find appointment dates (A1, ..., An) that minimize the total cost. In surgery scheduling, determining good surgery start times is crucial. This is not a trivial task due to randomness in surgery durations; one needs to consider the trade-off between idleness and overtime of the resource(s) as well as patients' waiting times.

If the processing durations were deterministic, the problem would be straightforward to solve. However, the processing durations are stochastic and we are only given their joint discrete distribution. We assume, naturally, that all cost coefficients and processing durations are non-negative and bounded. We also assume that processing durations are integer valued. Job 1 starts on time, i.e., the start time for the first job is zero, and there are n real jobs. The (n + 1)th job is a dummy job with a processing duration of 0. The appointment time for the (n + 1)th job is the total time available for the n real jobs. We use the dummy job to compute the overage or underage cost of the nth job.

Let {1, 2, 3, ..., n, n + 1} denote the set of jobs. We denote the random processing duration of job i by pi and the random vector of processing durations by p = (p1, p2, ..., pn, 0). Let p̄i denote the maximum possible value of the processing duration pi, and let pmax = max(p̄1, ..., p̄n). The underage cost rate ui of job i is the unit cost (cost per unit time) incurred when job i is completed at a date Ci before the appointment date Ai+1 of the next job i + 1. The overage cost rate oi of job i is the unit cost incurred when job i is completed at a date Ci after the appointment date Ai+1.
Thus the total cost due to job i completing at date Ci is ui(Ai+1 − Ci)+ + oi(Ci − Ai+1)+, where (x)+ = max(0, x) is the positive part of the real number x. We define u = (u1, u2, ..., un) and o = (o1, o2, ..., on).

Next we introduce our decision variable. Let A = (A1, A2, ..., An, An+1) (with A1 = 0) be the appointment vector, where Ai is the appointment date for job i. (We can restrict ourselves to integer appointment schedules without loss of optimality by the Appointment Vector Integrality Theorem 2.5.10. We write all vectors as row vectors.) We introduce additional variables which help define and express the objective function. Let Si be the start date and Ci the completion date of job i. Since job 1 starts on time, we have S1 = 0 and C1 = p1. The other start times and completion times are determined as follows: Si = max{Ai, Ci−1} and Ci = Si + pi for 2 ≤ i ≤ n + 1. Note that the dates Si and Ci are random variables which depend on the appointment vector A and the random duration vector p. Let F(A|p) be the total cost of appointment vector A given processing duration vector p:

F(A|p) = Σ_{i=1}^{n} [ oi(Ci − Ai+1)+ + ui(Ai+1 − Ci)+ ].   (3.1)

The objective to be minimized is the expected total cost F(A) = Ep[F(A|p)], where the expectation is taken with respect to the random processing duration vector p. We simplify notation by defining the lateness Li = Ci − Ai+1 of job i, its tardiness Ti = (Li)+, and its earliness Ei = (−Li)+. The objective F(A) can now be written as

F(A) = Ep[ Σ_{i=1}^{n} (oi Ti + ui Ei) ] = Σ_{i=1}^{n} (oi Ep Ti + ui Ep Ei).

The framework can include a given due date D for the end of processing (e.g., end of day for an operating room) after which overtime is incurred, instead of letting the model choose a planned makespan An+1. We assume D is an integer and that 0 ≤ D ≤ Σ_{i=1}^{n} p̄i.
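The recursion Si = max{Ai, Ci−1}, Ci = Si + pi together with Eq (3.1) makes F(A|p) straightforward to evaluate for a single duration realization. A minimal sketch (our own illustrative code, 0-indexed, so A has n + 1 entries with A[0] = 0 and the last entry playing the role of An+1):

```python
def cost_given_durations(A, p, u, o):
    """Total cost F(A|p) of Eq (3.1) for one realized duration vector p.
    A: appointment dates, n+1 entries with A[0] = 0 (last entry = makespan);
    p, u, o: durations, underage and overage cost rates, n entries each."""
    total, C = 0.0, 0.0              # C tracks the previous completion date
    for i in range(len(p)):
        S = max(A[i], C)             # a job cannot start before its appointment
        C = S + p[i]
        L = C - A[i + 1]             # lateness w.r.t. the next appointment
        total += o[i] * max(L, 0.0) + u[i] * max(-L, 0.0)
    return total
```

F(A) is then the expectation of this quantity over p; under an empirical (SAA) distribution it is simply the average over the sample vectors.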
Define Ã = (A1, A2, ..., An); then the new objective becomes

F^D(Ã) = Ep[ Σ_{j=1}^{n−1} ( oj(Cj − Aj+1)+ + uj(Aj+1 − Cj)+ ) + on(Cn − D)+ + un(D − Cn)+ ].

We immediately observe that F(Ã, D) = F^D(Ã), and our results in this chapter apply to both objectives (with or without a due date) equally.

3.3 Convexity

In this section, we provide a simple sufficient condition under which F and F^D are convex as functions of the continuous appointment vector. We start by rewriting F(.|p) in an equivalent form, which we will need in our convexity proof.

Lemma 3.3.1. (Identity)

F(A|p) = Σ_{i=1}^{n} [ αi(Ci − Ai+1) + βi(Ci − Ai+1)+ + γi(max{Ci, Ai+1} − Σ_{k=1}^{i} pk) ]

for any αi ∈ R (1 ≤ i ≤ n), where βi = (oi − αi) and γi = [(ui + αi) − (ui+1 + αi+1)].

Proof. By definition (Eq (3.1)) we have F(A|p) = Σ_{i=1}^{n} [ oi(Ci − Ai+1)+ + ui(Ai+1 − Ci)+ ]. Let αi ∈ R (1 ≤ i ≤ n); then by using the identity x − (x)+ + (−x)+ = 0 for any x ∈ R we can write

oi(Ci − Ai+1)+ + ui(Ai+1 − Ci)+
= oi(Ci − Ai+1)+ + ui(Ai+1 − Ci)+ + αi(Ci − Ai+1) − αi(Ci − Ai+1)+ + αi(Ai+1 − Ci)+
= αi(Ci − Ai+1) + (oi − αi)(Ci − Ai+1)+ + (ui + αi)(Ai+1 − Ci)+.

Therefore, for any αi ∈ R (1 ≤ i ≤ n), F(A|p) can be written as

F(A|p) = Σ_{i=1}^{n} [ αi(Ci − Ai+1) + (oi − αi)(Ci − Ai+1)+ + (ui + αi)(Ai+1 − Ci)+ ].

Recall that the earliness of job i is Ei = (Ai+1 − Ci)+. Define Mi as the total idle time of jobs 1, 2, ..., i.
Then Ei = Mi − Mi−1 with M0 = 0, and F(A|p) can be written as

F(A|p) = Σ_{i=1}^{n} [ αi(Ci − Ai+1) + (oi − αi)(Ci − Ai+1)+ + (ui + αi)(Mi − Mi−1) ]
       = Σ_{i=1}^{n} [ αi(Ci − Ai+1) + (oi − αi)(Ci − Ai+1)+ + [(ui + αi) − (ui+1 + αi+1)] Mi ].

Next, we prove by induction that Mi = max{Ci, Ai+1} − Σ_{t=1}^{i} pt. The result holds for i = 1 because M1 = max{C1, A2} − p1 = max{p1, A2} − p1 = (A2 − p1)+ = E1 (since S1 = 0 we have C1 = p1). Assume that the result holds for i = k, i.e., Mk = max{Ck, Ak+1} − Σ_{t=1}^{k} pt. We need to show it also holds for i = k + 1, i.e., Mk+1 = max{Ck+1, Ak+2} − Σ_{t=1}^{k+1} pt. Indeed,

Mk+1 = Mk + Ek+1 = Mk + (Ak+2 − Ck+1)+                              (by definition)
     = max{Ck, Ak+1} − Σ_{t=1}^{k} pt + (Ak+2 − Ck+1)+               (by the inductive assumption)
     = max{Ck, Ak+1} + pk+1 − Σ_{t=1}^{k+1} pt + (Ak+2 − Ck+1)+      (add and subtract pk+1)
     = Ck+1 − Σ_{t=1}^{k+1} pt + (Ak+2 − Ck+1)+                      (Ck+1 = max{Ck, Ak+1} + pk+1)
     = max{Ck+1, Ak+2} − Σ_{t=1}^{k+1} pt,

where the last equality follows from the identity (x − y)+ + y = max{x, y}. Therefore Mi = max{Ci, Ai+1} − Σ_{t=1}^{i} pt and

F(A|p) = Σ_{i=1}^{n} [ αi(Ci − Ai+1) + βi(Ci − Ai+1)+ + γi(max{Ci, Ai+1} − Σ_{k=1}^{i} pk) ],

where βi = (oi − αi) and γi = [(ui + αi) − (ui+1 + αi+1)]. This completes the proof. □

We recall the definition of α-monotonicity, Definition 2.6.5. We prove in Proposition 3.3.3 that α-monotonicity is a sufficient condition for the convexity of F.

Definition 3.3.2. The cost coefficients (u, o) are α-monotone if there exist reals αi (1 ≤ i ≤ n) such that 0 ≤ αi ≤ oi and the sequence ui + αi is non-increasing.
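The induction above is easy to sanity-check numerically: accumulating the earliness terms E1, ..., Ei reproduces max{Ci, Ai+1} − (p1 + ... + pi) at every i. A small sketch (our own code, 0-indexed):

```python
def idle_identity_holds(A, p):
    """Check M_i = E_1 + ... + E_i = max{C_i, A_{i+1}} - (p_1 + ... + p_i)
    for every i, where S_i = max{A_i, C_{i-1}}, C_i = S_i + p_i and
    E_i = (A_{i+1} - C_i)^+. A has n+1 entries with A[0] = 0; p has n."""
    C, M, cum = 0.0, 0.0, 0.0
    for i, dur in enumerate(p):
        S = max(A[i], C)
        C = S + dur
        cum += dur
        M += max(A[i + 1] - C, 0.0)          # add earliness E_i
        if abs(M - (max(C, A[i + 1]) - cum)) > 1e-9:
            return False
    return True
```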
The condition of α-monotonicity is satisfied by many reasonable cost structures, such as non-increasing ui's (ui+1 ≤ ui for all i) or non-increasing (oi + ui)'s (oi+1 + ui+1 ≤ oi + ui for all i). The assumption of non-increasing ui's fits especially well with many healthcare applications, since idle time is usually a bigger concern earlier in the day than later: if the first patient fails to show up, then the surgeon (and other resources) will certainly be idle until the second patient's appointment date, whereas if a later patient fails to show, the surgeon may still be busy with previous patient(s) until the next appointment date. Furthermore, non-increasing ui's capture an important and commonly used special case, a uniform idle cost rate for all jobs (ui = u for all i).

Proposition 3.3.3. (Convexity) If (u, o) are α-monotone then F(.|p) and F(.) are convex.

Proof. By the Identity Lemma 3.3.1,

F(A|p) = Σ_{i=1}^{n} [ αi(Ci − Ai+1) + βi(Ci − Ai+1)+ + γi(max{Ci, Ai+1} − Σ_{k=1}^{i} pk) ],

where βi = (oi − αi) and γi = [(ui + αi) − (ui+1 + αi+1)]. We first show that (Ci − Ai+1), (Ci − Ai+1)+ and max{Ci, Ai+1} − Σ_{k=1}^{i} pk are convex in A. These functions are convex in A because Ci is convex in A: Ci = max_{j≤i} {Aj + Σ_{k=j}^{i} pk} (by the Critical Path Lemma 2.4.1), i.e., Ci is the maximum of convex (affine) functions of A and is therefore convex. If (u, o) are α-monotone then αi ≥ 0, βi ≥ 0, γi ≥ 0 (1 ≤ i ≤ n). Since a finite sum of convex functions with non-negative weights is convex, both F(.|p) and its expectation F(.) are convex. This completes the proof. □

Remark 3.3.4. F may fail to be convex in the absence of α-monotonicity.
To see this, consider the following example with two jobs (n = 2), deterministic processing times p1 > 0 and p2 > 0, and cost coefficients o1 = 0, u1 = 0 and o2 > 0, u2 > 0. Then F(A) = F(A|p) (due to deterministic processing times) and F(A) = o2(C2 − A3)+ + u2(A3 − C2)+. Let A′ = (0, 0, p1 + p2); then S1 = 0, C1 = S2 = p1 and C2 = S3 = p1 + p2, so F(A′) = 0. Similarly, let A″ = (0, 2p1, 2p1 + p2); then S1 = 0, C1 = p1, S2 = 2p1, and C2 = S3 = 2p1 + p2, therefore F(A″) = 0. Now define A‴ = ½A′ + ½A″ = (0, p1, (3/2)p1 + p2); then S1 = 0, C1 = S2 = p1, C2 = p1 + p2 and S3 = (3/2)p1 + p2, so F(A‴) = ½p1u2 > 0. But this implies that F(.) is not convex, since ½p1u2 = F(A‴) = F(½A′ + ½A″) > ½F(A′) + ½F(A″) = 0.

A similar convexity result holds for F^D.

Corollary 3.3.5. (Convexity) If (u, o) are α-monotone then F^D(.|p) and F^D(.) are convex.

Proof. We substitute An+1 with D in the Identity Lemma 3.3.1 and obtain

F^D(Ã|p) = Σ_{i=1}^{n−1} [ αi(Ci − Ai+1) + βi(Ci − Ai+1)+ + γi(max{Ci, Ai+1} − Σ_{k=1}^{i} pk) ]
          + αn(Cn − D) + βn(Cn − D)+ + γn(max{Cn, D} − Σ_{k=1}^{n} pk)

for any αi ∈ R (1 ≤ i ≤ n), where βi = (oi − αi) and γi = [(ui + αi) − (ui+1 + αi+1)]. The result then follows from the Convexity Proposition 3.3.3 because convexity is preserved by projection onto a coordinate subspace. □

3.4 Subdifferential Characterization

We start with the definition of a subgradient and of the subdifferential.

Definition 3.4.1. A vector g is a subgradient of a convex function f at the point x if f(y) ≥ f(x) + gᵀ(y − x) for all y.
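Remark 3.3.4's counterexample can be verified numerically. The sketch below (our own code) takes p1 = p2 = 2 and o2 = u2 = 1 (with o1 = u1 = 0, so α-monotonicity fails) and shows that F at the midpoint of A′ and A″ exceeds the average of the endpoint values:

```python
def F(A, p, u, o):
    """Deterministic cost F(A|p) as in Eq (3.1), 0-indexed."""
    total, C = 0.0, 0.0
    for i in range(len(p)):
        S = max(A[i], C)
        C = S + p[i]
        L = C - A[i + 1]
        total += o[i] * max(L, 0.0) + u[i] * max(-L, 0.0)
    return total

p, u, o = [2, 2], [0, 1], [0, 1]          # o1 = u1 = 0: not alpha-monotone
A1, A2 = [0, 0, 4], [0, 4, 6]             # A' and A'' from the remark
mid = [(a + b) / 2 for a, b in zip(A1, A2)]
print(F(A1, p, u, o), F(A2, p, u, o), F(mid, p, u, o))  # 0.0 0.0 1.0
```

Since F(mid) = 1 > 0 = ½F(A′) + ½F(A″), convexity fails, matching the remark.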
The subdifferential at a point x is the set of all subgradients at x, i.e., ∂f(x) = {g : f(y) ≥ f(x) + gᵀ(y − x) for all y} [14].

We find a subgradient of the objective function F and characterize the set of all subgradients, i.e., the subdifferential ∂F of F, at any appointment vector A. We start with the alternative representation of F given by the Identity Lemma 3.3.1, from which we can identify the smaller building blocks of F: the lateness, the tardiness, and the total idle time of jobs 1, 2, ..., j, for each job j. Using Minkowski sums and subdifferential calculus rules, we first obtain the subdifferentials of these smaller blocks and then, again by these rules, put the smaller subdifferential sets together to characterize the subdifferential of F with a closed-form formula. This characterization allows us to link the quality of the sampling solution with the number of independent samples. We also prove that any subgradient of F(., D) is a subgradient of F^D, allowing us to extend our results to F^D.

By the Identity Lemma 3.3.1,

F(A|p) = Σ_{j=1}^{n} [ αj(Cj − Aj+1) + βj(Cj − Aj+1)+ + γj(max{Cj, Aj+1} − Σ_{k=1}^{j} pk) ]

for any αi ∈ R (1 ≤ i ≤ n), where βi = (oi − αi) and γi = [(ui + αi) − (ui+1 + αi+1)] (with γn = (un + αn)). We assume α-monotone (u, o) (as in the Convexity Proposition 3.3.3) for the convexity of F(.|p) and F(.). Recall that Lj(A|p) = (Cj − Aj+1) (lateness), Tj(A|p) = (Cj − Aj+1)+ (tardiness) and Mj(A|p) = max{Cj, Aj+1} − Σ_{k=1}^{j} pk (total idle time of jobs 1, 2, ..., j). Here we use (A|p) for Lj, Tj and Mj to emphasize that these quantities are for a given (particular) p, hence deterministic. Similarly, we will use (A) to denote their expected values, i.e., Lj(A), Tj(A) and Mj(A).
We can rewrite F(A|p) as Σ_{j=1}^{n} [ αj Lj(A|p) + βj Tj(A|p) + γj Mj(A|p) ], where αj, βj, γj ≥ 0 by α-monotonicity. To characterize the subdifferential ∂F(A), we first derive the subdifferentials of Lj(A|p), Tj(A|p) and Mj(A|p). Then we find ∂Lj(A), ∂Tj(A) and ∂Mj(A), where ζj(A) = Ep[ζj(A|p)] for ζ ∈ {L, T, M}. After that we obtain the subdifferential of Fj(A) (defined by Eq (3.2) below) and ∂(Σ_{j=1}^{n} Fj(A)) = ∂F(A).

Fj(A) = αj Lj(A) + βj Tj(A) + γj Mj(A).   (3.2)

We start the analysis with the definition of the Minkowski sum (e.g., see [33]), since the sums in our subdifferential derivations are Minkowski sums. The Minkowski sum of sets X1 and X2 is defined as

X1 + X2 = { x = x1 + x2 : x1 ∈ X1, x2 ∈ X2 }.

More generally, if I is a finite set, ri is a real number and Xi is a set (i ∈ I), then the Minkowski sum Σ_{i∈I} ri Xi is

Σ_{i∈I} ri Xi = { x = Σ_{i∈I} ri xi : xi ∈ Xi for all i }.

In particular, for any real r ∈ R, rX = {rx : x ∈ X}.

We will use two particular subdifferential calculus rules in our derivations. Rule 1 and Rule 2 follow from Theorem 4.1.1 and Corollary 4.4.4 of [14], respectively. Let f, f1, f2, ..., fm be finite convex functions from Rⁿ to R, let I be a finite set and let ri (i ∈ I) be non-negative real numbers. Then the rules are:

Rule 1: ∂( Σ_{i∈I} ri fi ) = Σ_{i∈I} ri ∂fi,   (3.3)

Rule 2: if f = max_{1≤i≤m} fi and all fi's are differentiable, then ∂f = co{∇fi : fi = f},

where co stands for convex hull and the summation on the right-hand side of Rule 1 is a Minkowski sum.

Lemma 3.4.2 allows us to consider ∂Ψ(A|p) in finding ∂Ψ(A), where Ψ ∈ {Lj, Tj, Mj, F, Fj}. We use the notation Ψ(A) = Ep[Ψ(A|p)], and Prob{p} to denote the probability of realization p. Since Ψ(.)
is, in all these cases, finite and convex, we have the following result.

Lemma 3.4.2. The relation ∂(Ep[Ψ(A|p)]) = Ep[∂Ψ(A|p)] holds, where

Ep[∂Ψ(A|p)] = Σ_p Prob{p} ∂Ψ(A|p) = { s ∈ R^{n+1} : ∃ s_p ∈ ∂Ψ(A|p) ∀p and s = Σ_p Prob{p} s_p }.

Proof. Ψ(A|p) is finite and convex everywhere for any p, and there are finitely many realizations of p. Therefore by Rule 1 and Rule 2 we obtain the claimed result as follows:

∂(Ep[Ψ(A|p)]) = ∂( Σ_p Prob{p} Ψ(A|p) ) = Σ_p Prob{p} ∂Ψ(A|p) = Ep[∂Ψ(A|p)]. □

Lemma 3.4.2 is useful as it allows us to work with ∂Ψ(A|p) for any Ψ ∈ {Lj, Tj, Mj, F, Fj} and obtain ∂Ψ(A) by taking its expectation. By Eq (3.2) and Lemma 3.4.2,

∂F(A) = Σ_{j=1}^{n} [ αj Ep[∂Lj(A|p)] + βj Ep[∂Tj(A|p)] + γj Ep[∂Mj(A|p)] ],

where, as before, all sums are Minkowski sums. So, once we find ∂Lj(A|p), ∂Tj(A|p) and ∂Mj(A|p), we can obtain ∂F(A).

Consider Lj(A|p) = (Cj − Aj+1). By the Critical Path Lemma 2.4.1, Cj = max_{k≤j} {Ak + Σ_{t=k}^{j} pt}, so Lj(A|p) = max_{k≤j} {Ak + Pkj − Aj+1}, where Pkj = Σ_{t=k}^{j} pt. Recall that the notation (.|p) is used to emphasize that the quantity of interest is deterministic once the job duration vector p is given. In order to find ∂Lj(A|p), we need to know which k's (k ≤ j) maximize {Ak + Pkj}. To represent the set of such maximizers for job j we define

Ij = arg max_{k≤j} {Ak + Pkj}.   (3.4)

A remark is in order here: Ij depends on A and p, and it is deterministic for any given (particular realization of) p and a random variable otherwise. Let 1i denote the unit vector in R^{n+1} whose ith component is 1 and all other components are 0. Then Lj(A|p) = max_{k≤j} {Ak + Pkj − Aj+1} and by Rule 2 we obtain

∂Lj(A|p) = co{1k − 1j+1 : k ∈ Ij}.   (3.5)
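Both Cj and the maximizer set Ij of Eq (3.4) are directly computable from the Critical Path Lemma. A minimal sketch (our own illustrative code, 0-indexed, so the text's job k corresponds to index k − 1):

```python
def completion_and_argmax(A, p, j):
    """Critical Path Lemma: C_j = max_{k<=j} (A_k + P_kj) with
    P_kj = p_k + ... + p_j. Returns (C_j, I_j), where I_j is the set of
    maximizing indices k as in Eq (3.4)."""
    vals = [A[k] + sum(p[k:j + 1]) for k in range(j + 1)]
    Cj = max(vals)
    Ij = {k for k, v in enumerate(vals) if v == Cj}
    return Cj, Ij
```

Each k ∈ Ij then contributes the vector 1k − 1j+1 to the convex hull in Eq (3.5).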
Similarly to ∂Lj(A|p), we now obtain ∂Tj(A|p). In addition to Ij, we also need the sign of max_{k≤j} {Ak + Pkj} − Aj+1, since Tj(A|p) = (Lj(A|p))+. Let

Ij⋈ = { k ∈ Ij : Ak + Pkj ⋈ Aj+1 }, where the relation ⋈ ∈ {>, <, =}.   (3.6)

By extending ∂Lj(A|p) with the sign of max_{k≤j} {Ak + Pkj} − Aj+1, we obtain ∂Tj(A|p):

∂Tj(A|p) = co{1k − 1j+1 : k ∈ Ij>}              if max_{k≤j} {Ak + Pkj} > Aj+1,
∂Tj(A|p) = co({0} ∪ {1k − 1j+1 : k ∈ Ij=})      if max_{k≤j} {Ak + Pkj} = Aj+1,
∂Tj(A|p) = {0}                                  if max_{k≤j} {Ak + Pkj} < Aj+1.

We note that exactly two of the sets Ij>, Ij=, Ij< are empty. This allows us to represent ∂Tj(A|p) as

∂Tj(A|p) = co({0} ∪ {1k − 1j+1 : k ∈ Ij>}) + co({0} ∪ {1k − 1j+1 : k ∈ Ij=}).   (3.7)

Next, we obtain ∂Mj(A|p). Recall that Mj(A|p) = max{Cj, Aj+1} − P1j and Cj = max_{k≤j} {Ak + Pkj}; then Mj(A|p) = max{ max_{k≤j} {Ak + Pkj}, Aj+1 } − P1j. By using Rule 2 (similarly to ∂Tj(A|p)), we obtain ∂Mj(A|p):

∂Mj(A|p) = co{1k : k ∈ Ij>}                     if max_{k≤j} {Ak + Pkj} > Aj+1,
∂Mj(A|p) = co({1j+1} ∪ {1k : k ∈ Ij=})          if max_{k≤j} {Ak + Pkj} = Aj+1,
∂Mj(A|p) = {1j+1}                               if max_{k≤j} {Ak + Pkj} < Aj+1.

Since exactly two of the sets Ij>, Ij=, Ij< are empty, we can represent ∂Mj(A|p) compactly as

∂Mj(A|p) = co({0} ∪ {1k − 1j+1 : k ∈ Ij>}) + co({1j+1} ∪ {1k : k ∈ Ij=}).   (3.8)

We can now obtain ∂Lj(A), ∂Tj(A) and ∂Mj(A), starting with ∂Lj(A). Recall that by Eq (3.5), ∂Lj(A|p) = co{1k − 1j+1 : k ∈ Ij}, and by Lemma 3.4.2 we have ∂Lj(A) = Ep[∂Lj(A|p)] = Σ_p Prob{p} ∂Lj(A|p). Therefore,

∂Lj(A) = Σ_p Prob{p} co{1k − 1j+1 : k ∈ Ij}.   (3.9)
(3.9)

There are potentially $p_{\max}^{n}$ realizations of $p$, and this number may be very large. However, we observe that all the vectors appearing in the convex hull in $\partial L_j(A|p)$ are of the form $(1_k - 1_{j+1})$ for some $k \le j$. Therefore that convex hull is the convex hull of the vectors in some subset of $\{(1_1 - 1_{j+1}), (1_2 - 1_{j+1}), \ldots, (1_j - 1_{j+1})\}$. The following result makes this observation precise.

Lemma 3.4.3. Let $r_1, r_2, \ldots, r_m \ge 0$ be reals. If $X$ is a convex set, then $\sum_{i=1}^{m} (r_i X) = \big(\sum_{i=1}^{m} r_i\big) X$.

Remark 3.4.4. The convexity of $X$ is essential. To see this, take the non-convex set $X = \{0, 1\}$ with $r_1 = r_2 = 1$; then $\sum_{i=1}^{2} (r_i X) = \{0, 1, 2\} \ne \{0, 2\} = \big(\sum_{i=1}^{2} r_i\big) X$.

Lemma 3.4.3 enables us to combine all realizations giving the same convex hull $X$: instead of considering all possible realizations, we consider the non-empty subsets of $\{(1_1 - 1_{j+1}), (1_2 - 1_{j+1}), \ldots, (1_j - 1_{j+1})\}$. We define $[j] = \{1, 2, \ldots, j-1, j\}$ and use $P^*([j])$ to denote the collection of all non-empty subsets of $[j]$. For any $S \in P^*([j])$, let $\mathrm{Prob}\{\Phi = S\} = \sum_{p : \Phi = S} \mathrm{Prob}\{p\}$ for $\Phi \in \{I_j, I_j^{=}, I_j^{>}\}$ and $j = 1, \ldots, n$. The next lemma shows how to obtain $\partial L_j(A)$.

Lemma 3.4.5. The subdifferential $\partial L_j(A)$ is given by
\[
\sum_{S \in P^*([j])} \mathrm{Prob}\{I_j = S\}\, \mathrm{co}\{(1_k - 1_{j+1}) : k \in S\}.
\]

Proof. Eq(3.9) gives $\partial L_j(A) = \sum_p \mathrm{Prob}\{p\}\, \mathrm{co}\{1_k - 1_{j+1} : k \in I_j\}$. We obtain the desired result by the equalities below:
\[
\begin{aligned}
\sum_p \mathrm{Prob}\{p\}\, \mathrm{co}\{1_k - 1_{j+1} : k \in I_j\}
&= \sum_p \mathrm{Prob}\{p\} \sum_{S \in P^*([j])} \mathrm{co}\{1_k - 1_{j+1} : k \in S\}\, 1\{I_j = S\} \\
&= \sum_{S \in P^*([j])} \sum_p \mathrm{Prob}\{p\}\, 1\{I_j = S\}\, \mathrm{co}\{1_k - 1_{j+1} : k \in S\} \\
&= \sum_{S \in P^*([j])} \mathrm{Prob}\{I_j = S\}\, \mathrm{co}\{1_k - 1_{j+1} : k \in S\}
\end{aligned}
\]
where $1\{I_j = S\}$ is 1 if $I_j = S$ and 0 otherwise (i.e., $1\{\cdot\}$ is the indicator function), and the last equality follows from the definition of $\mathrm{Prob}\{I_j = S\}$ and Lemma 3.4.3.
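The counterexample of Remark 3.4.4 is easy to check numerically. The sketch below (our illustration, not part of the thesis; function names are ours) verifies the failure for the non-convex set $\{0,1\}$, and checks the containment $(r_1+r_2)X \subseteq r_1X + r_2X$ of Lemma 3.4.3 on a discretized segment, where the full equality of the lemma holds for the true (continuous) convex set.

```python
def scale(r, X):
    """The set rX = {r*x : x in X}."""
    return {r * x for x in X}

def minkowski(X, Y):
    """The Minkowski sum X + Y = {x + y : x in X, y in Y}."""
    return {x + y for x in X for y in Y}

# Non-convex X = {0, 1} with r1 = r2 = 1 (Remark 3.4.4):
X = {0.0, 1.0}
assert minkowski(scale(1, X), scale(1, X)) == {0.0, 1.0, 2.0}
assert scale(2, X) == {0.0, 2.0}          # so r1*X + r2*X != (r1 + r2)*X

# Convex X: the segment [0, 1], discretized on a grid. On the grid we can
# still verify the containment (r1 + r2)*X subset-of r1*X + r2*X exactly.
X = {i / 10 for i in range(11)}
assert scale(2, X) <= minkowski(scale(1, X), scale(1, X))
```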
$\Box$

We obtain similar results for $\partial T_j(A)$ and $\partial M_j(A)$ in the next two lemmata.

Lemma 3.4.6. The subdifferential $\partial T_j(A)$ is given by
\[
\sum_{S \in P^*([j])} \Big[ \mathrm{Prob}\{I_j^{>} = S\}\, \mathrm{co}\{1_k - 1_{j+1} : k \in S\} + \mathrm{Prob}\{I_j^{=} = S\}\, \mathrm{co}(\{0\} \cup \{1_k - 1_{j+1} : k \in S\}) \Big].
\]

Lemma 3.4.7. The subdifferential $\partial M_j(A)$ is given by
\[
\sum_{S \in P^*([j])} \Big[ \mathrm{Prob}\{I_j^{>} = S\}\, \mathrm{co}\{1_k - 1_{j+1} : k \in S\} + \mathrm{Prob}\{I_j^{=} = S\}\, \mathrm{co}\{1_k : k \in S \cup \{j+1\}\} \Big] + \Big( 1 - \sum_{S \in P^*([j])} \mathrm{Prob}\{I_j^{=} = S\} \Big) 1_{j+1}.
\]

For later purposes we represent the convex hulls in $\partial L_j(A)$, $\partial T_j(A)$ and $\partial M_j(A)$ in a different form. Using Lemma 3.4.5 above, we may express $\partial L_j(A)$ as
\[
\partial L_j(A) = \Big\{ \sum_{S \in P^*([j])} \mathrm{Prob}\{I_j = S\} \sum_{k \in S} (1_k - 1_{j+1})\, X^{L}_{kj}(S) \;:\; \sum_{k \in S} X^{L}_{kj}(S) = 1\ \forall S \in P^*([j]),\ X^{L}_{kj}(S) \ge 0\ \forall S \in P^*([j])\ \forall k \in S \Big\} \qquad (3.10)
\]
where $X^{L}_{kj}(S)$ is the non-negative variable representing the weight of the term $(1_k - 1_{j+1})$ in a convex combination determining an element of $\mathrm{co}\{(1_k - 1_{j+1}) : k \in S\}$. Similarly to Eq(3.10), by using Lemma 3.4.6 we obtain:
\[
\begin{aligned}
\partial T_j(A) = \Big\{ \sum_{S \in P^*([j])} \Big[ &\mathrm{Prob}\{I_j^{>} = S\} \sum_{k \in S} (1_k - 1_{j+1})\, X^{T>}_{kj}(S) + \mathrm{Prob}\{I_j^{=} = S\} \sum_{k \in S} (1_k - 1_{j+1})\, X^{T=}_{kj}(S) \Big] : \\
&\sum_{k \in S} X^{T>}_{kj}(S) = 1\ \forall S \in P^*([j]),\quad \sum_{k \in S} X^{T=}_{kj}(S) \le 1\ \forall S \in P^*([j]), \\
&X^{T>}_{kj}(S),\, X^{T=}_{kj}(S) \ge 0\ \forall S \in P^*([j])\ \forall k \in S \Big\}
\end{aligned} \qquad (3.11)
\]
where $X^{T>}_{kj}(S)$ and $X^{T=}_{kj}(S)$ are non-negative variables representing the weight of the term $(1_k - 1_{j+1})$ in a convex combination determining an element of $\mathrm{co}\{1_k - 1_{j+1} : k \in S\}$ and of $\mathrm{co}(\{0\} \cup \{1_k - 1_{j+1} : k \in S\})$, respectively. Note that the convexity constraint in the second
line of Eq(3.11), $\sum_{k\in S} X^{T=}_{kj}(S) \le 1$, is an inequality since 0 may be a subgradient.

Similarly to Eq(3.10) and Eq(3.11), by using Lemma 3.4.7 we express $\partial M_j(A)$ as follows:
\[
\begin{aligned}
\partial M_j(A) = \Big\{ \sum_{S \in P^*([j])} \Big[ &\mathrm{Prob}\{I_j^{>} = S\} \sum_{k \in S} (1_k - 1_{j+1})\, X^{M>}_{kj}(S) + \mathrm{Prob}\{I_j^{=} = S\} \sum_{k \in S \cup \{j+1\}} 1_k\, X^{M=}_{kj}(S \cup \{j+1\}) \Big] \\
&+ \Big( 1 - \sum_{S \in P^*([j])} \mathrm{Prob}\{I_j^{=} = S\} \Big) 1_{j+1} : \\
&\sum_{k \in S} X^{M>}_{kj}(S) = 1\ \forall S \in P^*([j]),\quad \sum_{k \in S \cup \{j+1\}} X^{M=}_{kj}(S \cup \{j+1\}) = 1\ \forall S \in P^*([j]), \\
&X^{M>}_{kj}(S) \ge 0\ \forall S \in P^*([j])\ \forall k \in S,\quad X^{M=}_{kj}(S \cup \{j+1\}) \ge 0\ \forall S \in P^*([j])\ \forall k \in S \cup \{j+1\} \Big\}
\end{aligned} \qquad (3.12)
\]
where $X^{M>}_{kj}(S)$ and $X^{M=}_{kj}(S \cup \{j+1\})$ are non-negative variables representing the weights of the terms $(1_k - 1_{j+1})$ and $(1_k)$ in convex combinations determining elements of $\mathrm{co}\{1_k - 1_{j+1} : k \in S\}$ and $\mathrm{co}\{1_k : k \in S \cup \{j+1\}\}$, respectively.

For clarity, we collect the variables $X^{L}_{ij}(S), X^{T>}_{ij}(S), X^{T=}_{ij}(S), X^{M>}_{ij}(S), X^{M=}_{ij}(S)$ into a single vector $X_j$, and express the feasible set $\Theta_j$ of $X_j$ in a compact form:
\[
X_j = \big( (X^{\upsilon}_{ij}(S)),\ (X^{M=}_{kl}(S \cup \{l+1\})) : \upsilon \in \{L, T^>, T^=, M^>\},\ 1 \le i \le j \le n+1,\ 1 \le k < l \le n+1,\ S \in P^*([j]),\ i \in S,\ k \in S \cup \{l+1\} \big),
\]
\[
\Theta_j = \Big\{ X_j \ge 0 : \sum_{i \in S} X^{\upsilon}_{ij}(S) = 1,\ \sum_{i \in S} X^{T=}_{ij}(S) \le 1,\ \sum_{k \in S \cup \{l+1\}} X^{M=}_{kl}(S \cup \{l+1\}) = 1,\ \forall \upsilon \in \{L, T^>, M^>\},\ \forall S \in P^*([j]),\ \forall i \in S,\ \forall k \in S \cup \{l+1\} \Big\}.
\]
We next collect all $X_j$ vectors into a single vector $X$ and express the feasible set $\Theta$ of $X$:
\[
X = (X_j)_{j \in [n+1]}, \qquad \Theta = \times_{j \in [n+1]} \Theta_j = \big\{ X = (X_j)_{j \in [n+1]} : X_j \in \Theta_j\ \forall j \in [n+1] \big\}. \qquad (3.13)
\]
Now we obtain $\partial F(A)$.

Proposition 3.4.8.
We may express $\partial F(A)$ in a closed-form formula given by Eq(3.15).

Proof. Since $\alpha_j, \beta_j, \gamma_j \ge 0$, by Rule 1 and Eq(3.2) we get $\partial F_j(A) = \alpha_j\,\partial L_j(A) + \beta_j\,\partial T_j(A) + \gamma_j\,\partial M_j(A)$. We obtain $\partial F(A)$ as the Minkowski sum of the $\partial F_j(A)$'s ($j = 1, \ldots, n$):
\[
\partial F(A) = \sum_{j=1}^{n} \partial F_j(A) = \sum_{j=1}^{n} \big[ \alpha_j\,\partial L_j(A) + \beta_j\,\partial T_j(A) + \gamma_j\,\partial M_j(A) \big]. \qquad (3.14)
\]
We gather the values of $\partial L_j(A)$, $\partial T_j(A)$ and $\partial M_j(A)$ from Eq(3.10), Eq(3.11) and Eq(3.12), respectively, and use Eq(3.14) to obtain the closed-form expression:
\[
\begin{aligned}
\partial F(A) = \bigg\{ &\sum_{j=1}^{n} \alpha_j \sum_{S \in P^*([j])} \mathrm{Prob}\{I_j = S\} \Big( \sum_{i \in S} 1_i\, X^{L}_{ij}(S) - 1_{j+1} \Big) \\
&+ \sum_{j=1}^{n} \beta_j \sum_{S \in P^*([j])} \mathrm{Prob}\{I_j^{>} = S\} \Big( \sum_{i \in S} 1_i\, X^{T>}_{ij}(S) - 1_{j+1} \Big) \\
&+ \sum_{j=1}^{n} \beta_j \sum_{S \in P^*([j])} \mathrm{Prob}\{I_j^{=} = S\} \Big( \sum_{i \in S} 1_i\, X^{T=}_{ij}(S) - 1_{j+1} \sum_{i \in S} X^{T=}_{ij}(S) \Big) \\
&+ \sum_{j=1}^{n} \gamma_j \sum_{S \in P^*([j])} \mathrm{Prob}\{I_j^{>} = S\} \Big( \sum_{i \in S} 1_i\, X^{M>}_{ij}(S) - 1_{j+1} \Big) \\
&+ \sum_{j=1}^{n} \gamma_j \sum_{S \in P^*([j])} \mathrm{Prob}\{I_j^{=} = S\} \sum_{i \in S \cup \{j+1\}} 1_i\, X^{M=}_{ij}(S \cup \{j+1\}) \\
&+ \sum_{j=1}^{n} \gamma_j \Big( 1 - \sum_{S \in P^*([j])} \mathrm{Prob}\{I_j^{=} = S\} \Big) 1_{j+1} \;:\; X \in \Theta \bigg\}.
\end{aligned} \qquad (3.15)
\]
$\Box$

We next express $\partial F(A)$ component by component for a particular $X \in \Theta$, i.e., a coordinate of a subgradient at the point $A$ for a particular $X \in \Theta$. Let $g(X, A)$ be the element of $\partial F(A)$ defined by the vector $X$. Then $g(X, A) = (g_1(X, A), g_2(X, A), \ldots, g_{n+1}(X, A))$, where $g_k(X, A)$ is the $k$th component of $g(X, A)$. Corollary 3.4.9 gives an expression for $g_k(X, A)$.

Corollary 3.4.9. We may express $g(X, A)$ in a closed-form formula given by Eq(3.16).

Proof. We observe that all vectors appearing in the convex combination defining $\partial F(A)$ are $(1_i - 1_{j+1})$ for some $1 \le i < j+1 \le n+1$ and $(1_i)$ for some $1 \le i \le n+1$.
Therefore the $k$th component $g_k$ of $g \in \partial F(A)$ may only get nonzero contributions from the vectors $(1_k - 1_{j+1})$ for $j+1 > k$, the vectors $(1_i - 1_k)$ for $i < k$, and the vector $(1_k)$. To see this from a different perspective, consider $A_k$: it appears in the terms $(C_{k-1} - A_k)$, $(C_{k-1} - A_k)^+$ and $\max\{C_{k-1}, A_k\} - \sum_{i=1}^{k-1} p_i$, and may appear in $(C_{j-1} - A_j)$ and $(C_{j-1} - A_j)^+$ for $j > k$. We derive $g_k$ by using Eq(3.15). Let $X \in \Theta$; then the $k$th component of $g(X, A)$, namely $g_k(X, A)$, is
\[
\begin{aligned}
&\sum_{j=k}^{n} \alpha_j \sum_{S \in P^*([j])} \mathrm{Prob}\{I_j = S\}\, X^{L}_{kj}(S) \;-\; \alpha_{k-1} \sum_{S \in P^*([k-1])} \mathrm{Prob}\{I_{k-1} = S\} \\
&+ \sum_{j=k}^{n} \beta_j \sum_{S \in P^*([j])} \mathrm{Prob}\{I_j^{>} = S\}\, X^{T>}_{kj}(S) \;-\; \beta_{k-1} \sum_{S \in P^*([k-1])} \mathrm{Prob}\{I_{k-1}^{>} = S\} \\
&+ \sum_{j=k}^{n} \beta_j \sum_{S \in P^*([j])} \mathrm{Prob}\{I_j^{=} = S\}\, X^{T=}_{kj}(S) \;-\; \beta_{k-1} \sum_{S \in P^*([k-1])} \mathrm{Prob}\{I_{k-1}^{=} = S\} \sum_{i \in S} X^{T=}_{i,k-1}(S) \\
&+ \sum_{j=k}^{n} \gamma_j \sum_{S \in P^*([j])} \mathrm{Prob}\{I_j^{>} = S\}\, X^{M>}_{kj}(S) \;-\; \gamma_{k-1} \sum_{S \in P^*([k-1])} \mathrm{Prob}\{I_{k-1}^{>} = S\} \\
&+ \sum_{j=k}^{n} \gamma_j \sum_{S \in P^*([j])} \mathrm{Prob}\{I_j^{=} = S\}\, X^{M=}_{kj}(S \cup \{j+1\}) \\
&+ \gamma_{k-1} \Big( 1 - \sum_{S \in P^*([k-1])} \mathrm{Prob}\{I_{k-1}^{=} = S\} \Big).
\end{aligned} \qquad (3.16)
\]
$\Box$

Remark 3.4.10. Note that $\sum_{S \in P^*([k-1])} \mathrm{Prob}\{I_{k-1} = S\} = 1$. For our analysis in this chapter we do not require the values of the probabilities $\mathrm{Prob}\{I_j = S\}$, $\mathrm{Prob}\{I_j^{>} = S\}$ and $\mathrm{Prob}\{I_j^{=} = S\}$ (for $S \in P^*([j])$ and $j \in [n+1]$). However, these values may be needed for other research; indeed, these probabilities are computed and used in Appendix A.

Subgradients for $F^D$. Proposition 3.4.11 allows us to use any subgradient of $F(\tilde{A}, D)$ for $F^D(\tilde{A})$.

Proposition 3.4.11.
$\mathrm{proj}(\partial F(\tilde{A}, D)) \subseteq \partial F^D(\tilde{A})$, where $\mathrm{proj}$ is the projection given by $\mathrm{proj}(x_1, x_2, \ldots, x_n, x_{n+1}) = (x_1, x_2, \ldots, x_n)$.

Therefore $\mathrm{proj}(g(\tilde{A}, D))$ is a subgradient of $F^D(\tilde{A})$, and hence we can extend our results to $F^D$.

Remark 3.4.12. One may wish to find a minimum-norm subgradient at a point $A$, as it provides an optimality test (a point $A^*$ is optimal if and only if $0 \in \partial F(A^*)$) and the negative of the minimum-norm subgradient is a descent direction (e.g., see [2]). By Eq(3.16), the minimum-norm subgradient may be computed with a linear program (LP) in the $l_1$ norm and with a quadratic program (QP) in the $l_2$ norm.
\[
\text{(LP)} \qquad \min \sum_{k=1}^{n+1} z_k \quad \text{subject to} \quad z_k \ge g_k(X, A),\ \ z_k \ge -g_k(X, A) \ \ (1 \le k \le n+1), \quad X \in \Theta.
\]
The decision variable $z_k$ represents the absolute value $|g_k|$ in the $l_1$ norm. Now we give the QP formulation:
\[
\text{(QP)} \qquad \min \sum_{k=1}^{n+1} g_k(X, A)^2 \quad \text{subject to} \quad X \in \Theta.
\]
This QP has linear constraints but a quadratic objective function.

3.5 Sampling Approach

In this section, we relax the perfect-information assumption on the job duration distribution made in Chapter 2. We assume that there exists an underlying (true) discrete joint distribution for the job durations but that this distribution is not known; instead, a set of independent samples is available. For example, in many practical scenarios one has daily historical data on surgery durations. The job durations are not necessarily independent of one another, but the samples are. (Each sample is a vector of all job durations, e.g., all surgery durations on one day.) We develop a sampling-based approach to determine the number of independent samples required to obtain a provably near-optimal solution with high probability.
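In the simplest case, when the subdifferential reduces to a segment $\mathrm{co}\{g^1, g^2\}$, the QP of Remark 3.4.12 has a closed-form solution: project the origin onto the segment. The sketch below is our illustration of that special case only (names are ours); the general problem over $\Theta$ requires an LP/QP solver.

```python
def min_norm_on_segment(g1, g2):
    """Minimum l2-norm point of the segment co{g1, g2}: the projection of
    the origin onto the segment, with the projection parameter clamped to
    [0, 1]. Illustrates the QP of Remark 3.4.12 for a two-point hull."""
    d = [b - a for a, b in zip(g1, g2)]
    dd = sum(x * x for x in d)
    if dd == 0:                       # degenerate segment: a single point
        return list(g1)
    t = max(0.0, min(1.0, -sum(a * x for a, x in zip(g1, d)) / dd))
    return [a + t * x for a, x in zip(g1, d)]

# If the minimum-norm element is 0, the point is optimal (0 in dF(A*)).
print(min_norm_on_segment((-1, 0), (1, 0)))  # origin lies on the segment
```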
That is, with high probability the cost (w.r.t. the true distribution) of the sampling-based schedule is no more than $(1+\epsilon)$ times the cost of an optimal schedule computed with respect to the true distribution.

Let $\epsilon$ be the accuracy level, $1-\delta$ the confidence level, and $N = N(\epsilon, \delta, u, o)$ the number of samples. Define $p^k = (p^k_1, p^k_2, \ldots, p^k_n)$ as the $k$th of the $N$ sample observations. We use "$\hat{\ }$" to denote quantities obtained from samples. Let $\hat{p} = \hat{p}(N)$ be the empirical joint probability distribution obtained from the $N$ independent observations of $p$, i.e., $\mathrm{Prob}\{\hat{p} = p^k\} = \frac{1}{N}$ for $1 \le k \le N$. We denote a true optimal appointment vector by $A^*$, i.e., $A^*$ is a minimizer of $F_p(A) = E_p(F(A|p))$; the subscript $p$ emphasizes that these quantities are defined with respect to the true distribution $p$. Similarly, let $\hat{A} = \hat{A}(N)$ be a minimizer of $F_{\hat{p}}(A) = E_{\hat{p}}(F(A|\hat{p}))$; again, the subscript $\hat{p}$ emphasizes that these quantities are defined with respect to the sampling distribution $\hat{p}$. For subgradients, we write the $k$th component as $g_k(X, A)_p$ for $F_p(\cdot)$ and $g_k(X, A)_{\hat{p}}$ for $F_{\hat{p}}(\cdot)$ at the point $A$.

We start our analysis by proving that we can minimize $F_{\hat{p}}(\cdot)$ (and $F^D_{\hat{p}}(\cdot)$) in polynomial time. This follows from Theorem 2.7.1 (and Corollary 2.8.6) of Chapter 2. Then, with an application of Hoeffding's inequality, we establish a connection between the probabilities of an event with respect to $p$ and $\hat{p}$ as a function of the sample size $N$, for a given accuracy level $\varepsilon'$ (the absolute difference of the probabilities w.r.t. $p$ and $\hat{p}$) and confidence level $1 - \delta'$. After that we provide a similar result holding simultaneously for a family of events $\mathcal{F}$.
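The sample average approximation is mechanically simple: weight each of the $N$ observed duration vectors by $1/N$ and average the per-realization cost. The sketch below is our illustration (not the thesis's code), using a simplified stand-in cost with per-job underage (idle) and overage (lateness) coefficients $u_j$, $o_j$; all names are ours.

```python
def cost_given_p(A, p, u, o):
    """Simplified per-realization cost, a stand-in for F(A|p): for each job
    j, charge u_j per unit of idle time (A_{j+1} - C_j)^+ and o_j per unit
    of lateness (C_j - A_{j+1})^+, with C_j from the critical-path recursion."""
    total, C = 0.0, 0.0
    for j, pj in enumerate(p):
        C = max(C, A[j]) + pj                  # completion time C_j
        total += u[j] * max(A[j + 1] - C, 0)   # underage (idle)
        total += o[j] * max(C - A[j + 1], 0)   # overage (lateness)
    return total

def saa_cost(A, samples, u, o):
    """F_phat(A): empirical expectation over N equally weighted samples
    (Prob{phat = p^k} = 1/N), i.e., the sample average approximation."""
    return sum(cost_given_p(A, p, u, o) for p in samples) / len(samples)
```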
Our next result uses the subdifferential characterization to show the existence of a $g \in \partial F_p(\hat{A})$ such that $|g_k| < \varepsilon' K'$ with high probability, where $K' = K'(n, u, o)$ is a constant. Then we prove that if there exists $g \in \partial F_p(\hat{A})$ such that $|g_k| < \epsilon\nu/(3(n+1)n)$ for all $1 \le k \le n+1$, where $0 < \epsilon \le 1$ and $\nu = \min\{u_1, u_2, \ldots, u_n, o_1, o_2, \ldots, o_n\}$, then $F_p(\hat{A}) \le (1+\epsilon) F_p(A^*)$. This is achieved with an application of Jensen's inequality and a version of Lemma 5.1 of [22] (Lemma 3.7.2). We conclude by stating our main result, which determines the number of samples required to achieve a $(1+\epsilon)$-approximation with probability at least $1-\delta$.

Corollary 3.5.1. (Polynomial-Time Algorithm) If the cost vectors $(u, o)$ are $\alpha$-monotone and the processing durations are integer, then $F_{\hat{p}}(\cdot)$ (and $F^D_{\hat{p}}(\cdot)$) can be minimized in $O(n^5 (nN)\, n^2 \log(\lceil p_{\max}/2 \rceil))$ time.

Proof. Theorem 2.7.1 implies that $F_{\hat{p}}(\cdot)$ can be minimized in $O(\sigma(n)\, \mathrm{EO}\, n^2 \log(\lceil h/2n \rceil))$ time, where $\sigma(n)$ is the number of function evaluations required to minimize a submodular set function over an $n$-element ground set and $\mathrm{EO}$ is the time needed for one expected-cost evaluation. We can evaluate the expected cost for $F_{\hat{p}}(\cdot)$ in $O(nN)$ time by computing the total cost of each realization (which takes $O(n)$ time) and then taking the average of the $N$ total-cost realizations, i.e., the sample average approximation. Finally, Theorem 4 of [29] shows that $\sigma(n) = O(n^5)$. The result for $F^D_{\hat{p}}(\cdot)$ follows similarly from Corollary 2.8.6. $\Box$

Polynomial Time Algorithm. Corollary 3.5.1 shows that, given $N$ samples of job durations, we can solve the SAA counterpart of the appointment scheduling problem efficiently. The remaining task is to find a sufficient number of samples $N$ (for a given accuracy level and confidence level) such that the SAA optimal solution (w.r.t.
the true distribution) will have a cost no more than (1 + the accuracy level) times the optimal cost, with probability at least the confidence level.

Let $O$ be any event depending on the processing times $p = (p_1, p_2, \ldots, p_n)$: $O = O(p_1, p_2, \ldots, p_n) = O(p)$. Let $\mathrm{Prob}_p\{O(p)\}$ denote the true probability of $O$, and let $\mathrm{Prob}_{\hat{p}}\{O(p)\}$ denote an estimate of $\mathrm{Prob}_p\{O(p)\}$ obtained when the true distribution of $p$ is not known and the empirical probability distribution $\hat{p}$, based on $N$ independent samples, is used in the estimation. We define an indicator function as
\[
1\{O(p^k)\} = \begin{cases} 1 & \text{if event } O \text{ occurs with realization } p^k \\ 0 & \text{otherwise;} \end{cases}
\]
then $1\{O(p^k)\}$ is Bernoulli distributed with parameter $\mathrm{Prob}_p\{O(p)\}$. We define our estimate $\mathrm{Prob}_{\hat{p}}\{O(p)\}$ as
\[
\mathrm{Prob}_{\hat{p}}\{O(p)\} = \frac{1}{N} \sum_{k=1}^{N} 1\{O(p^k)\}.
\]

Remark 3.5.2. Note that $N\,\mathrm{Prob}_{\hat{p}}\{O(p)\}$ is the sum of $N$ independent Bernoulli random variables with parameter $\mathrm{Prob}_p\{O(p)\}$; therefore $N\,\mathrm{Prob}_{\hat{p}}\{O(p)\}$ is binomially distributed with parameters $\mathrm{Prob}_p\{O(p)\}$ and $N$.

We use Hoeffding's inequality to obtain the number of samples $N$ required so that
\[
\mathrm{Prob}\big\{ \big| \mathrm{Prob}_p\{O(p)\} - \mathrm{Prob}_{\hat{p}}\{O(p)\} \big| \le \varepsilon' \big\} > 1 - \delta'
\]
for any given accuracy level $\varepsilon' > 0$ and confidence level $0 < \delta' < 1$. Direct application of Hoeffding's inequality for Bernoulli random variables (Theorem 4.5 in [40]) yields
\[
N > \frac{1}{2} \frac{1}{(\varepsilon')^2} \ln(2/\delta').
\]
Using union bounds, we obtain a similar result for a family of events to hold simultaneously.

Lemma 3.5.3. Let $\mathcal{F}$ be a set of (possibly dependent) events $O_1, O_2, \ldots, O_{|\mathcal{F}|-1}, O_{|\mathcal{F}|}$, where each $O_k \in \mathcal{F}$ depends on the processing times $p = (p_1, p_2, \ldots, p_n)$. Let $0 < \varepsilon', \delta' < 1$. If $N > \frac{1}{2} \frac{1}{(\varepsilon')^2} \ln(2/\delta')$, then
\[
\mathrm{Prob}\big\{ \big| \mathrm{Prob}_p\{O_k(p)\} - \mathrm{Prob}_{\hat{p}}\{O_k(p)\} \big| \le \varepsilon' \ \ \forall k = 1, 2, \ldots, |\mathcal{F}| \big\} > 1 - |\mathcal{F}|\delta'.
\]
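The Hoeffding bound and the union-bound extension of Lemma 3.5.3 translate directly into a sample-size calculation. A small sketch (our code, names ours):

```python
import math

def hoeffding_samples(eps, delta):
    """Smallest integer N satisfying N > (1/2)(1/eps^2) ln(2/delta), the
    single-event bound from Hoeffding's inequality for Bernoulli variables."""
    return math.floor(0.5 * (1.0 / eps ** 2) * math.log(2.0 / delta)) + 1

def family_samples(eps, delta, num_events):
    """Union bound as in Lemma 3.5.3: to make all |F| probability estimates
    eps-accurate simultaneously with overall confidence 1 - delta, apply
    the single-event bound at per-event failure level delta' = delta/|F|."""
    return hoeffding_samples(eps, delta / num_events)

# Halving eps quadruples N; shrinking delta costs only logarithmically.
print(hoeffding_samples(0.1, 0.05), family_samples(0.1, 0.05, 5))
```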
We characterized the subdifferential of $F$, $\partial F_p(\cdot)$, in Section 3.4 with the closed-form expression Eq(3.15). We also derived a formula, Eq(3.16), representing a component of any subgradient, $g_k(X, \cdot)_p$. The formulas for $\partial F_{\hat{p}}(\cdot)$ and $g_k(X, \cdot)_{\hat{p}}$ are identical to Eq(3.15) and Eq(3.16), respectively, except that each $\mathrm{Prob}_p\{\cdot\}$ term is replaced by the corresponding $\mathrm{Prob}_{\hat{p}}\{\cdot\}$ term.

We show in Lemma 3.5.4 that if we take a sufficiently large number of samples, then $|g_k(\hat{X}, \hat{A})_p - g_k(\hat{X}, \hat{A})_{\hat{p}}|$ will be small with high probability for some $\hat{X} \in \Theta$. This implies that there exists a small $g \in \partial F_p(\hat{A})$.

Recall that $\hat{A}$ is an optimal appointment vector for $F_{\hat{p}}$. Therefore there exists $\hat{X} \in \Theta$ such that $g_k(\hat{X}, \hat{A})_{\hat{p}} = 0$ for all $1 \le k \le n+1$. If we show that $|g_k(\hat{X}, \hat{A})_p - g_k(\hat{X}, \hat{A})_{\hat{p}}| < \varepsilon' K'$, then $|g_k(\hat{X}, \hat{A})_p - 0| < \varepsilon' K'$, and hence there exists $g \in \partial F_p(\hat{A})$ such that $|g_k| < \varepsilon' K'$ for all $1 \le k \le n+1$. We now show that $|g_k(\hat{X}, \hat{A})_p - g_k(\hat{X}, \hat{A})_{\hat{p}}| < \varepsilon' K'$ with probability at least $1 - |\mathcal{F}|\delta'$, where $|\mathcal{F}| = 5n^2 + 5$ and $K' = n(9o_{\max} + 4u_{\max})$, with $o_{\max} = \max(o_1, o_2, \ldots, o_n)$ and $u_{\max} = \max(u_1, u_2, \ldots, u_n)$.

Lemma 3.5.4. If $N > \frac{1}{2}(1/\varepsilon')^2 \ln(2/\delta')$, then $|g_k(\hat{X}, \hat{A})_p| < \varepsilon' K'$ with probability at least $1 - |\mathcal{F}|\delta'$, where $\hat{X} \in \Theta$, $|\mathcal{F}| = 5n^2 + 5$ and $K' = n(9o_{\max} + 4u_{\max})$.

Remark 3.5.5. If $u_i = u$ for all $i = 1, 2, \ldots, n$, then $K' = n(4o_{\max} + 2u) + 2u$, i.e.,
\[
|g_k(\hat{X}, \hat{A})_p - g_k(\hat{X}, \hat{A})_{\hat{p}}| \le \varepsilon'\big(n(4o_{\max} + 2u) + 2u\big) \quad (1 \le k \le n+1)
\]
with probability at least $1 - |\mathcal{F}|\delta'$, where $|\mathcal{F}| = (n+1)(4n+2) = 5n^2 + 5$.

The last piece we need is a connection between $F_p(\hat{A})$ and $F_p(A^*)$ if there exists $g \in$
$\partial F_p(\hat{A})$ such that $|g_k| < \epsilon$. Before this result we need a lemma to obtain a lower-bound function for $F_p$.

Lemma 3.5.6. Let $\tilde{p}_i = E[p_i]$, $\tilde{p} = (\tilde{p}_1, \tilde{p}_2, \ldots, \tilde{p}_n)$, $\tilde{C}_1 = \tilde{p}_1$ and $\tilde{C}_i = \max(\tilde{C}_{i-1}, A_i) + \tilde{p}_i$. We define $\tilde{f}(A) = \nu\big(\sum_{i=1}^{n} [(\tilde{C}_i - A_{i+1})^+ + (A_{i+1} - \tilde{C}_i)^+]\big)$ and $\tilde{A} \in \arg\min_A \tilde{f}(A)$. If the cost coefficients $(u, o)$ are $\alpha$-monotone, then $\tilde{A} = (0, \tilde{p}_1, \tilde{p}_1 + \tilde{p}_2, \ldots, \sum_{j=1}^{n} \tilde{p}_j)$ and $F_p(A) \ge \tilde{f}(A) \ge \frac{\nu}{n} \|A - \tilde{A}\|_1$.

Remark 3.5.7. The following example with $n = 2$ jobs shows that this lower bound is tight; that is, we may have $F_p(A) = \frac{\nu}{n} \|A - \tilde{A}\|_1$. Let the processing times $p = (1, 4)$ be deterministic, and let $u_1 = u_2 = o_1 = o_2 = 1$ (therefore $\nu = 1$). Then $\tilde{A} = (0, 1, 5)$, and for $A = (0, 4, 8)$ we have $F(A) = 3 = \frac{1}{2} \sum_{i=1}^{n+1} |A_i - \tilde{A}_i|$.

The last step we need before our main result is to prove that for a suitably chosen $\hat{A}$ we can obtain $F_p(\hat{A}) \le (1+\epsilon) F_p(A^*)$ for any $0 < \epsilon \le 1$. This result follows easily from a version of Lemma 5.1 of [22] (Lemma 3.7.2 in Section 3.7).

Lemma 3.5.8. Let $0 < \epsilon \le 1$. If there exists $g \in \partial F_p(\hat{A})$ such that $|g_k| < \epsilon\nu/(3(n+1)n)$ for all $1 \le k \le n+1$, then $F_p(\hat{A}) \le (1+\epsilon) F_p(A^*)$.

Proof. If $|g_k| < \epsilon\nu/(3(n+1)n)$ for all $1 \le k \le n+1$, then $\|g\|_1 \le \epsilon\nu/(3n)$. We then directly apply Lemma 3.7.2 with $f(A) = \frac{\nu}{n}\|A - \tilde{A}\|_1$ (noting that $F_p(A) \ge \frac{\nu}{n}\|A - \tilde{A}\|_1$ by Lemma 3.5.6) and $\alpha = \epsilon\nu/(3n)$, and obtain the desired result. $\Box$

Combining Lemmata 3.5.3, 3.5.4 and 3.5.8 yields our main result for the sampling method.

Theorem 3.5.9. Let $0 < \epsilon \le 1$ (accuracy level) and $0 < \delta < 1$ (confidence level) be given. If
\[
N > 4.5\, (1/\epsilon)^2 \Big( n^2 (n+1)(9o_{\max} + 4u_{\max})/\nu \Big)^2 \ln\big(2(5n^2+5)/\delta\big),
\]
then $F_p(\hat{A}) \le (1+\epsilon) F_p(A^*)$ with probability at least $1 - \delta$.
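Theorem 3.5.9's bound is explicit and easy to evaluate; note its polynomial growth in $n$ (through the squared factor) and only logarithmic growth in $1/\delta$. A small sketch (our code, names ours):

```python
import math

def theorem_3_5_9_samples(eps, delta, n, o_max, u_max, nu):
    """Smallest integer N exceeding the bound of Theorem 3.5.9:
    4.5 (1/eps)^2 (n^2 (n+1)(9 o_max + 4 u_max)/nu)^2 ln(2(5n^2+5)/delta)."""
    inner = n ** 2 * (n + 1) * (9 * o_max + 4 * u_max) / nu
    bound = (4.5 * (1.0 / eps) ** 2 * inner ** 2
             * math.log(2 * (5 * n ** 2 + 5) / delta))
    return math.floor(bound) + 1
```

As the text notes, the polynomial degree in $n$ is higher than in the multi-period newsvendor bound of [22], reflecting that all appointment times must be fixed up front.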
Remark 3.5.10. In the case of uniform underage cost coefficients ($u_i = u$ for all $i$), the bound in Theorem 3.5.9 becomes
\[
4.5\, (1/\epsilon)^2 \Big( n(n+1)\big(n(4o_{\max} + 2u) + 2u\big)/\nu \Big)^2 \ln\big(2(5n^2+5)/\delta\big).
\]
Furthermore, the bound is similar to, but has a slightly higher polynomial dependence on the number of jobs $n$ than, the bound obtained for the multi-period newsvendor problem in [22] (w.r.t. the number of periods $T$). This is expected, since in the appointment scheduling problem one must make all the decisions (i.e., determine the planned start times of all jobs) at once, before any processing starts, whereas in the inventory problem one decides sequentially at each period.

3.6 Conclusion

We consider the appointment scheduling problem with discrete random durations of Chapter 2, but without assuming any prior knowledge of the probability distribution of job durations. We show that the objective function is convex under a simple sufficient condition. Because the objective is non-differentiable, we work with its subgradients; in fact, we characterize the set of all subgradients, i.e., the subdifferential at a given appointment date vector, with a closed-form formula. This is unusual, since in most applications only a single subgradient can be obtained.

We use the subdifferential characterization to relax the perfect-information assumption of Chapter 2 on the probability distribution of processing times. We assume that there is an underlying (true) joint discrete distribution for the job durations, and that only independent samples of it are available, e.g., daily historical observations of surgery durations. The job durations are not necessarily independent of one another, but the samples are. In other words, we assume that the job duration distribution is not known: no prior information about the distribution is available except independent samples.
We develop a sampling-based approach to determine the number of independent samples required to obtain a provably near-optimal solution with high probability, i.e., with high probability the cost of the sampling-based optimal schedule is no more than $(1+\epsilon)$ times the cost of the schedule that would be optimal if the true distribution were known.

3.7 Proofs

Proof. (Lemma 3.4.3) If at most one of $r_1, r_2, \ldots, r_m$ is non-zero, there is nothing to prove. Now suppose that at least two $r_i > 0$ and, w.l.o.g., that $r_1, r_2 > 0$. We first prove the result for $m = 2$ and then generalize it by induction. By definition, $r_1 X = \{r_1 x : x \in X\}$ and $r_1 X + r_2 X = \{r_1 x + r_2 y : x, y \in X\}$.

We first show that $r_1 X + r_2 X \subseteq (r_1 + r_2) X$. Let $(a + b) \in (r_1 X + r_2 X)$, where $a \in r_1 X$ and $b \in r_2 X$. If $a \in r_1 X$ then $\frac{a}{r_1} \in X$; similarly, if $b \in r_2 X$ then $\frac{b}{r_2} \in X$. Since $X$ is convex, $\frac{a}{r_1}\lambda + \frac{b}{r_2}(1-\lambda) \in X$ for any $0 \le \lambda \le 1$. Now let $\lambda = \frac{r_1}{r_1 + r_2}$; then
\[
\frac{a}{r_1}\lambda + \frac{b}{r_2}(1-\lambda) = \frac{a}{r_1}\cdot\frac{r_1}{r_1+r_2} + \frac{b}{r_2}\cdot\frac{r_2}{r_1+r_2} = \frac{a+b}{r_1+r_2} \in X,
\]
which implies $a + b \in (r_1 + r_2) X$, and therefore $r_1 X + r_2 X \subseteq (r_1 + r_2) X$.

Next, we show that $(r_1 + r_2) X \subseteq r_1 X + r_2 X$. Let $a \in (r_1 + r_2) X$; then $\frac{a}{r_1+r_2} \in X$, and therefore $r_1 \frac{a}{r_1+r_2} \in r_1 X$ and $r_2 \frac{a}{r_1+r_2} \in r_2 X$. Hence $r_1 \frac{a}{r_1+r_2} + r_2 \frac{a}{r_1+r_2} \in (r_1 X + r_2 X)$. But $r_1 \frac{a}{r_1+r_2} + r_2 \frac{a}{r_1+r_2} = a$, therefore $(r_1 + r_2) X \subseteq r_1 X + r_2 X$. This completes the proof for $m = 2$.

Next, assume that the result holds for $m = k \ge 2$, i.e., $\sum_{i=1}^{k} (r_i X) = \big(\sum_{i=1}^{k} r_i\big) X$. We need to show that it also holds for $m = k+1$, i.e., $\sum_{i=1}^{k+1} (r_i X) = \big(\sum_{i=1}^{k+1} r_i\big) X$. Let $r = \sum_{i=1}^{k} r_i$. Then
\[
\sum_{i=1}^{k+1} (r_i X) = \sum_{i=1}^{k} (r_i X) + r_{k+1} X = \Big(\sum_{i=1}^{k} r_i\Big) X + r_{k+1} X = rX + r_{k+1} X = (r + r_{k+1}) X = \Big(\sum_{i=1}^{k+1} r_i\Big) X,
\]
where the second equality follows from the inductive assumption and the fourth equality is due to our result for $m = 2$.
Therefore the proof is complete. $\Box$

Proof. (Lemma 3.4.6) By Eq(3.7), the fact that $r(X + Y) = rX + rY$ (for $r \in \mathbb{R}$ and sets $X$ and $Y$), and Lemma 3.4.2, we obtain
\[
\begin{aligned}
\partial T_j(A) &= \sum_p \mathrm{Prob}\{p\} \big[ \mathrm{co}\{1_k - 1_{j+1} : k \in I_j^{>}\} + \mathrm{co}(\{0\} \cup \{(1_k - 1_{j+1}) : k \in I_j^{=}\}) \big] \\
&= \sum_p \mathrm{Prob}\{p\}\, \mathrm{co}\{1_k - 1_{j+1} : k \in I_j^{>}\} + \sum_p \mathrm{Prob}\{p\}\, \mathrm{co}(\{0\} \cup \{1_k - 1_{j+1} : k \in I_j^{=}\}).
\end{aligned}
\]
Then, by using the definition of $\mathrm{Prob}\{I_j^{>} = S\}$ and Lemma 3.4.3 (similarly to Lemma 3.4.5), we get
\[
\begin{aligned}
\sum_p \mathrm{Prob}\{p\}\, \mathrm{co}\{1_k - 1_{j+1} : k \in I_j^{>}\}
&= \sum_p \mathrm{Prob}\{p\} \sum_{S \in P^*([j])} \mathrm{co}\{1_k - 1_{j+1} : k \in S\}\, 1\{I_j^{>} = S\} \\
&= \sum_{S \in P^*([j])} \sum_p \mathrm{Prob}\{p\}\, 1\{I_j^{>} = S\}\, \mathrm{co}\{1_k - 1_{j+1} : k \in S\} \\
&= \sum_{S \in P^*([j])} \mathrm{Prob}\{I_j^{>} = S\}\, \mathrm{co}\{1_k - 1_{j+1} : k \in S\}.
\end{aligned}
\]
Next, by using the identity $\sum_x \mathrm{Prob}\{x\} = \sum_x \mathrm{Prob}\{x : x \in X\} + \sum_x \mathrm{Prob}\{x : x \notin X\}$, the definition of $\mathrm{Prob}\{I_j^{=} = S\}$ and Lemma 3.4.3, we rewrite $\sum_p \mathrm{Prob}\{p\}\, \mathrm{co}(\{0\} \cup \{1_k - 1_{j+1} : k \in I_j^{=}\})$ as
\[
\begin{aligned}
&\sum_p \mathrm{Prob}\{p : I_j^{=} = \emptyset\}\{0\} + \sum_p \mathrm{Prob}\{p : I_j^{=} \ne \emptyset\}\, \mathrm{co}(\{0\} \cup \{1_k - 1_{j+1} : k \in I_j^{=}\}) \\
&= \sum_p \mathrm{Prob}\{p : I_j^{=} \ne \emptyset\} \sum_{S \in P^*([j])} \mathrm{co}(\{0\} \cup \{1_k - 1_{j+1} : k \in S\})\, 1\{I_j^{=} = S\} \\
&= \sum_{S \in P^*([j])} \sum_p \mathrm{Prob}\{p\}\, 1\{I_j^{=} = S\}\, \mathrm{co}(\{0\} \cup \{1_k - 1_{j+1} : k \in S\}) \\
&= \sum_{S \in P^*([j])} \mathrm{Prob}\{I_j^{=} = S\}\, \mathrm{co}(\{0\} \cup \{1_k - 1_{j+1} : k \in S\}).
\end{aligned}
\]
Therefore we finally obtain
\[
\partial T_j(A) = \sum_{S \in P^*([j])} \big[ \mathrm{Prob}\{I_j^{>} = S\}\, \mathrm{co}\{1_k - 1_{j+1} : k \in S\} + \mathrm{Prob}\{I_j^{=} = S\}\, \mathrm{co}(\{0\} \cup \{1_k - 1_{j+1} : k \in S\}) \big]. \qquad \Box
\]

Proof. (Lemma 3.4.7) Similarly to Lemma 3.4.6, by Eq(3.8) and Lemma 3.4.2 we obtain
\[
\begin{aligned}
\partial M_j(A) &= \sum_p \mathrm{Prob}\{p\} \big[ \mathrm{co}\{1_k - 1_{j+1} : k \in I_j^{>}\} + \mathrm{co}(\{1_{j+1}\} \cup \{1_k : k \in I_j^{=}\}) \big] \\
&= \sum_p \mathrm{Prob}\{p\}\, \mathrm{co}\{1_k - 1_{j+1} : k \in I_j^{>}\} + \sum_p \mathrm{Prob}\{p\}\, \mathrm{co}(\{1_{j+1}\} \cup \{1_k : k \in I_j^{=}\}).
\end{aligned}
\]
As in Lemma 3.4.6,
\[
\sum_p \mathrm{Prob}\{p\}\, \mathrm{co}\{1_k - 1_{j+1} : k \in I_j^{>}\} = \sum_{S \in P^*([j])} \mathrm{Prob}\{I_j^{>} = S\}\, \mathrm{co}\{1_k - 1_{j+1} : k \in S\},
\]
and we can rewrite $\sum_p \mathrm{Prob}\{p\}\, \mathrm{co}(\{1_{j+1}\} \cup \{1_k : k \in I_j^{=}\})$ as
\[
\begin{aligned}
&\sum_p \mathrm{Prob}\{p : I_j^{=} \ne \emptyset\}\, \mathrm{co}\{1_k : k \in I_j^{=} \cup \{j+1\}\} + \sum_p \mathrm{Prob}\{p : I_j^{=} = \emptyset\}\, 1_{j+1} \\
&= \sum_{S \in P^*([j])} \sum_p \mathrm{Prob}\{p\}\, 1\{I_j^{=} = S\}\, \mathrm{co}\{1_k : k \in S \cup \{j+1\}\} + \Big( 1 - \sum_{S \in P^*([j])} \mathrm{Prob}\{I_j^{=} = S\} \Big) 1_{j+1} \\
&= \sum_{S \in P^*([j])} \mathrm{Prob}\{I_j^{=} = S\}\, \mathrm{co}\{1_k : k \in S \cup \{j+1\}\} + \Big( 1 - \sum_{S \in P^*([j])} \mathrm{Prob}\{I_j^{=} = S\} \Big) 1_{j+1}.
\end{aligned}
\]
Hence we obtain
\[
\partial M_j(A) = \sum_{S \in P^*([j])} \big[ \mathrm{Prob}\{I_j^{>} = S\}\, \mathrm{co}\{1_k - 1_{j+1} : k \in S\} + \mathrm{Prob}\{I_j^{=} = S\}\, \mathrm{co}\{1_k : k \in S \cup \{j+1\}\} \big] + \Big( 1 - \sum_{S \in P^*([j])} \mathrm{Prob}\{I_j^{=} = S\} \Big) 1_{j+1}. \qquad \Box
\]

Proof. (Proposition 3.4.11) Let $y \in \partial F(\tilde{A}, D)$. By the subgradient inequality we have $F(B, B_{n+1}) \ge F(\tilde{A}, D) + (B - \tilde{A}, B_{n+1} - D)\, y^t$ for all $B = (B_1, \ldots, B_n) \in \mathbb{R}^n$ and $B_{n+1} \in \mathbb{R}$. This inequality holds for all $(B, B_{n+1}) \in \mathbb{R}^{n+1}$, and in particular for $(B, D)$. Thus we obtain $F(B, D) \ge F(\tilde{A}, D) + (B - \tilde{A}, D - D)\, y^t$. This gives $F^D(B) \ge F^D(\tilde{A}) + (B - \tilde{A})\, g^t$, since $F(B, D) = F^D(B)$ and $F(\tilde{A}, D) = F^D(\tilde{A})$, where $g = \mathrm{proj}(y) = (y_1, \ldots, y_n)$. The inequality $F^D(B) \ge F^D(\tilde{A}) + (B - \tilde{A})\, g^t$ implies that $g = \mathrm{proj}(y) \in \partial F^D(\tilde{A})$. $\Box$

Proof. (Lemma 3.5.3) The proof is by induction on $|\mathcal{F}|$. Let $1 \le k \le |\mathcal{F}|$ and $Y_k = \big\{ |\mathrm{Prob}_p\{O_k(p)\} - \mathrm{Prob}_{\hat{p}}\{O_k(p)\}| \le \varepsilon' \big\}$.
For $|\mathcal{F}| = 2$ the result holds, since
\[
\mathrm{Prob}\{Y_1 \cap Y_2\} = 1 - \mathrm{Prob}\{\overline{Y_1 \cap Y_2}\} = 1 - \mathrm{Prob}\{\overline{Y_1} \cup \overline{Y_2}\} \ge 1 - (\mathrm{Prob}\{\overline{Y_1}\} + \mathrm{Prob}\{\overline{Y_2}\}) \ge 1 - 2\delta',
\]
where the first inequality uses $\mathrm{Prob}\{\overline{Y_1} \cup \overline{Y_2}\} \le \mathrm{Prob}\{\overline{Y_1}\} + \mathrm{Prob}\{\overline{Y_2}\}$ and the second uses $\mathrm{Prob}\{\overline{Y_1}\}, \mathrm{Prob}\{\overline{Y_2}\} < \delta'$. Suppose the result is true for $|\mathcal{F}| = k$, i.e., $\mathrm{Prob}\big\{\bigcap_{i=1}^{k} Y_i\big\} \ge 1 - k\delta'$. Let $Y = \bigcap_{i=1}^{k} Y_i$. Then
\[
\mathrm{Prob}\{Y \cap Y_{k+1}\} = 1 - \mathrm{Prob}\{\overline{Y \cap Y_{k+1}}\} = 1 - \mathrm{Prob}\{\overline{Y} \cup \overline{Y_{k+1}}\} \ge 1 - (\mathrm{Prob}\{\overline{Y}\} + \mathrm{Prob}\{\overline{Y_{k+1}}\}) \ge 1 - (k+1)\delta',
\]
since $\mathrm{Prob}\{\overline{Y} \cup \overline{Y_{k+1}}\} \le \mathrm{Prob}\{\overline{Y}\} + \mathrm{Prob}\{\overline{Y_{k+1}}\}$, $\mathrm{Prob}\{\overline{Y}\} \le k\delta'$ and $\mathrm{Prob}\{\overline{Y_{k+1}}\} < \delta'$. Therefore the result is also true for $|\mathcal{F}| = k+1$, and hence the proof is complete. $\Box$

Proof. (Lemma 3.5.4) Since $\hat{A}$ is an optimal appointment vector for $F_{\hat{p}}$, there exists $\hat{X} \in \Theta$ such that $g_k(\hat{X}, \hat{A})_{\hat{p}} = 0$ for all $1 \le k \le n+1$. If $|g_k(\hat{X}, \hat{A})_p - g_k(\hat{X}, \hat{A})_{\hat{p}}| < \varepsilon' K'$, then this implies that $|g_k(\hat{X}, \hat{A})_p - 0| < \varepsilon' K'$, and hence there exists $g \in \partial F_p(\hat{A})$ such that $|g_k| < \varepsilon' K'$ for all $1 \le k \le n+1$. We now show that $|g_k(\hat{X}, \hat{A})_p - g_k(\hat{X}, \hat{A})_{\hat{p}}| < \varepsilon' K'$. We start by taking the difference $|g_k(\hat{X}, \hat{A})_p - g_k(\hat{X}, \hat{A})_{\hat{p}}|$ term by term, factoring out the $\hat{X}$ terms by using Eq(3.16).
\[
\begin{aligned}
|g_k(\hat{X}, \hat{A})_p - g_k(\hat{X}, \hat{A})_{\hat{p}}| = \bigg| &\sum_{j=k}^{n} \alpha_j \sum_{S \in P^*([j])} \hat{X}^{L}_{kj}(S) \big( \mathrm{Prob}_p\{I_j = S\} - \mathrm{Prob}_{\hat{p}}\{I_j = S\} \big) - \alpha_{k-1} \sum_{S \in P^*([k-1])} \big( \mathrm{Prob}_p\{I_{k-1} = S\} - \mathrm{Prob}_{\hat{p}}\{I_{k-1} = S\} \big) \\
&+ \sum_{j=k}^{n} \beta_j \sum_{S \in P^*([j])} \hat{X}^{T>}_{kj}(S) \big( \mathrm{Prob}_p\{I_j^{>} = S\} - \mathrm{Prob}_{\hat{p}}\{I_j^{>} = S\} \big) - \beta_{k-1} \sum_{S \in P^*([k-1])} \big( \mathrm{Prob}_p\{I_{k-1}^{>} = S\} - \mathrm{Prob}_{\hat{p}}\{I_{k-1}^{>} = S\} \big) \\
&+ \sum_{j=k}^{n} \beta_j \sum_{S \in P^*([j])} \hat{X}^{T=}_{kj}(S) \big( \mathrm{Prob}_p\{I_j^{=} = S\} - \mathrm{Prob}_{\hat{p}}\{I_j^{=} = S\} \big) - \beta_{k-1} \sum_{S \in P^*([k-1])} \sum_{i \in S} \hat{X}^{T=}_{i,k-1}(S) \big( \mathrm{Prob}_p\{I_{k-1}^{=} = S\} - \mathrm{Prob}_{\hat{p}}\{I_{k-1}^{=} = S\} \big) \\
&+ \sum_{j=k}^{n} \gamma_j \sum_{S \in P^*([j])} \hat{X}^{M>}_{kj}(S) \big( \mathrm{Prob}_p\{I_j^{>} = S\} - \mathrm{Prob}_{\hat{p}}\{I_j^{>} = S\} \big) - \gamma_{k-1} \sum_{S \in P^*([k-1])} \big( \mathrm{Prob}_p\{I_{k-1}^{>} = S\} - \mathrm{Prob}_{\hat{p}}\{I_{k-1}^{>} = S\} \big) \\
&+ \sum_{j=k}^{n} \gamma_j \sum_{S \in P^*([j])} \hat{X}^{M=}_{kj}(S \cup \{j+1\}) \big( \mathrm{Prob}_p\{I_j^{=} = S\} - \mathrm{Prob}_{\hat{p}}\{I_j^{=} = S\} \big) - \gamma_{k-1} \sum_{S \in P^*([k-1])} \big( \mathrm{Prob}_p\{I_{k-1}^{=} = S\} - \mathrm{Prob}_{\hat{p}}\{I_{k-1}^{=} = S\} \big) \bigg|.
\end{aligned}
\]
The term $-\alpha_{k-1} \sum_{S \in P^*([k-1])} \big( \mathrm{Prob}_p\{I_{k-1} = S\} - \mathrm{Prob}_{\hat{p}}\{I_{k-1} = S\} \big)$ vanishes, since $\sum_{S \in P^*([k-1])} \mathrm{Prob}_p\{I_{k-1} = S\} = 1 = \sum_{S \in P^*([k-1])} \mathrm{Prob}_{\hat{p}}\{I_{k-1} = S\}$.
By using the triangle inequality we obtain
\[
\begin{aligned}
|g_k(\hat{X}, \hat{A})_p - g_k(\hat{X}, \hat{A})_{\hat{p}}| \le{} &\sum_{j=k}^{n} \alpha_j \sum_{S \in P^*([j])} \hat{X}^{L}_{kj}(S)\, \big| \mathrm{Prob}_p\{I_j = S\} - \mathrm{Prob}_{\hat{p}}\{I_j = S\} \big| \\
&+ \sum_{j=k}^{n} \beta_j \sum_{S \in P^*([j])} \hat{X}^{T>}_{kj}(S)\, \big| \mathrm{Prob}_p\{I_j^{>} = S\} - \mathrm{Prob}_{\hat{p}}\{I_j^{>} = S\} \big| + \beta_{k-1} \sum_{S \in P^*([k-1])} \big| \mathrm{Prob}_p\{I_{k-1}^{>} = S\} - \mathrm{Prob}_{\hat{p}}\{I_{k-1}^{>} = S\} \big| \\
&+ \sum_{j=k}^{n} \beta_j \sum_{S \in P^*([j])} \hat{X}^{T=}_{kj}(S)\, \big| \mathrm{Prob}_p\{I_j^{=} = S\} - \mathrm{Prob}_{\hat{p}}\{I_j^{=} = S\} \big| + \beta_{k-1} \sum_{S \in P^*([k-1])} \sum_{i \in S} \hat{X}^{T=}_{i,k-1}(S)\, \big| \mathrm{Prob}_p\{I_{k-1}^{=} = S\} - \mathrm{Prob}_{\hat{p}}\{I_{k-1}^{=} = S\} \big| \\
&+ \sum_{j=k}^{n} \gamma_j \sum_{S \in P^*([j])} \hat{X}^{M>}_{kj}(S)\, \big| \mathrm{Prob}_p\{I_j^{>} = S\} - \mathrm{Prob}_{\hat{p}}\{I_j^{>} = S\} \big| + \gamma_{k-1} \sum_{S \in P^*([k-1])} \big| \mathrm{Prob}_p\{I_{k-1}^{>} = S\} - \mathrm{Prob}_{\hat{p}}\{I_{k-1}^{>} = S\} \big| \\
&+ \sum_{j=k}^{n} \gamma_j \sum_{S \in P^*([j])} \hat{X}^{M=}_{kj}(S \cup \{j+1\})\, \big| \mathrm{Prob}_p\{I_j^{=} = S\} - \mathrm{Prob}_{\hat{p}}\{I_j^{=} = S\} \big| + \gamma_{k-1} \sum_{S \in P^*([k-1])} \big| \mathrm{Prob}_p\{I_{k-1}^{=} = S\} - \mathrm{Prob}_{\hat{p}}\{I_{k-1}^{=} = S\} \big|.
\end{aligned} \qquad (3.17)
\]
We now find an upper bound for $|g_k(\hat{X}, \hat{A})_p - g_k(\hat{X}, \hat{A})_{\hat{p}}|$ by obtaining an upper bound for each $|\cdot|$ term in Eq(3.17). We do so by using the fact that $\hat{X} \in \Theta$ and by rewriting some of the probability terms. We show this for the first and the third terms, as the remaining bounds are obtained similarly to one of these two. We start with the first term in Eq(3.17):
\[
\sum_{j=k}^{n} \alpha_j \sum_{S \in P^*([j])} \hat{X}^{L}_{kj}(S)\, \big| \mathrm{Prob}_p\{I_j = S\} - \mathrm{Prob}_{\hat{p}}\{I_j = S\} \big|
= \sum_{j=k}^{n} \alpha_j \sum_{S \in P^*([j])} \hat{X}^{L}_{kj}(S)\, \big| 1\{k \in S\}\mathrm{Prob}_p\{I_j = S\} - 1\{k \in S\}\mathrm{Prob}_{\hat{p}}\{I_j = S\} \big|,
\]
Let P + ([j]) = {S \u2208 P \u2217 ([j]) | P robp {Ij = S} \u2212 P robpb {Ij = S} \u2265 since X kj b L (S) \u2264 1 and triangular inequality 0}. Then by definition of P + ([j]), the fact that 0 \u2264 X kj 81 \fwe obtain n X \u03b1j j=k \u2264 \u2264 S \u2208 P \u2217 ([j]) n X X \u03b1j j=k S \u2208 P + ([j]) n X X \u03b1j S \u2208 P + ([j]) j=k = \u0012 \u0013 L b Xkj (S) 1{k \u2208 S}P robp {Ij = S} \u2212 1{k \u2208 S}P robpb {Ij = S} X \u0012 \u0013 L b Xkj (S) 1{k \u2208 S}P robp {Ij = S} \u2212 1{k \u2208 S}P robpb {Ij = S} \u0012 \u0013 1{k \u2208 S}P robp {Ij = S} \u2212 1{k \u2208 S}P robpb {Ij = S} \u0012 \u0013 n X + + \u03b1j P robp {k \u2208 Ij , Ij \u2208 P ([j])} \u2212 P robpb {k \u2208 Ij , Ij \u2208 P ([j])} j=k \u2264 \u2264 n X j=k n X \u03b1j P robp {k \u2208 Ij , Ij \u2208 P + ([j])} \u2212 P robpb {k \u2208 Ij , Ij \u2208 P + ([j])} \u03b1j \u03b5\u2032 \u2264 \u03b5\u2032 \u03b1max n. j=k Similarly, n X \u03b2j j=k S \u2208 P \u2217 ([j]) n X X \u03b2j j=k S \u2208 P \u2217 ([j]) n X X \u03b3j j=k n X j=k \u03b3j X X S \u2208 P \u2217 ([n]) S \u2208 P \u2217 ([j]) \u0012 \u0013 T> > > b Xkj (S) P robp {Ij = S} \u2212 P robpb {Ij = S} \u0012 \u0013 T= = = b Xkj (S) P robp {Ij = S} \u2212 P robpb {Ij = S} \u0012 \u0013 T> > > b Xkj (S) P robp {Ij = S} \u2212 P robpb {Ij = S} \u0012 \u0013 b M = (S \u222a {n + 1}) P robp {In= = S} \u2212 P robpb {In= = S} X kn \u2264 \u03b5\u2032 \u03b2max n, \u2264 \u03b5\u2032 \u03b2max n, \u2264 \u03b5\u2032 \u03b3max n, \u2264 \u03b5\u2032 \u03b3max n. 82 \fWe now find an upper bound for the third term in Eq(3.17). 
X \u03b2k\u22121 \u0012 S \u2208 P \u2217 ([k\u22121]) = \u03b2k\u22121 \u0013 > > P robp {Ik\u22121 = S} \u2212 P robpb {Ik\u22121 = S} X \u0012 X \u0012 S \u2208 P \u2217 ([k\u22121]) = \u03b2k\u22121 S \u2208 P \u2217 ([k\u22121]) \u03b2k\u22121 = \u03b2k\u22121 \u2212 P robpb {Ik\u22121 = S and Pi,k\u22121 \u0013 \u2212 P robpb {Ik\u22121 = S and Pi,k\u22121 > Ak \u2212 Ai }1{i \u2208 S} \u0013 P robp {i \u2208 Ik\u22121 and Pi,k\u22121 > Ak \u2212 Ai } \u2212 P robpb {i \u2208 Ik\u22121 and Pi,k\u22121 > Ak \u2212 Ai } k\u22121 X\u0012 P robp {i \u2208 > Ik\u22121 } i=1 \u2264 ok\u22121 k\u22121 X \u2264 \u03b2k\u22121 \u2212 P robpb {i \u2208 \u0013 > Ik\u22121 } > > P robp {i \u2208 Ik\u22121 } \u2212 P robpb {i \u2208 Ik\u22121 } i=1 k\u22121 X \u0013 > Ak \u2212 Ai : i \u2208 S} P robp {Ik\u22121 = S and Pi,k\u22121 > Ak \u2212 Ai }1{i \u2208 S} k\u22121 X\u0012 i=1 = P robp {Ik\u22121 = S and Pi,k\u22121 > Ak \u2212 Ai : i \u2208 S} \u03b5\u2032 \u2264 \u03b5\u2032 \u03b2max n. i=1 Similarly, we get \u03b2k\u22121 X X S \u2208 P \u2217 ([k\u22121]) i\u2208 S \u03b3k\u22121 \u0012 \u0013 T= = = b Xi k\u22121 (S) P robp {Ik\u22121 = S} \u2212 P robpb {Ik\u22121 = S} \u2264 \u03b5\u2032 \u03b2max n, S\u2208 \u03b3k\u22121 X P \u2217 ([k\u22121]) X S \u2208 P \u2217 ([k\u22121]) \u0012 > P robp {Ik\u22121 \u0012 = S} \u2212 > P robpb {Ik\u22121 \u0013 = S} \u2264 \u03b5\u2032 \u03b3max n, \u0013 = = P robp {Ik\u22121 = S} \u2212 P robpb {Ik\u22121 = S} \u2264 \u03b5\u2032 \u03b3max n. b A) b p \u2212 gk (X, b A) b pb )| from above: Therefore we can bound |gk (X, b A) b p \u2212 gk (X, b A) b pb )| \u2264 \u03b5\u2032 n(\u03b1max + 4\u03b2max + 4\u03b3max ) (1 \u2264 k \u2264 n + 1). |gk (X, Since the cost coefficients (u, o) are \u03b1-monotone we have 0 \u2264 \u03b1i \u2264 omax , \u03b2i \u2264 omax and \u03b3i \u2264 umax + omax . Therefore (\u03b1max + 4\u03b2max + 4\u03b3max ) \u2264 (9omax + 4umax ) so we can take K \u2032 = n(9omax + 4umax ). 
We also determine $|F|$, the maximum number of events whose probabilities we need in order to compute $|g_k(\hat{X},\hat{A})_p - g_k(\hat{X},\hat{A})_{\hat{p}}|$ for all $k$. For each $k$, we have $5(n-k+1) + 4(k-1) \le 5n$ events. Since $k \le n+1$, we have $|F| \le (n+1)(5n) = 5n^2 + 5n$. This completes the proof. □

Proof. (Lemma 3.5.6) Fix $A$. Let $h(p) = F(A|p) = \sum_{i=1}^{n} [\,o_i (C_i - A_{i+1})^+ + u_i (A_{i+1} - C_i)^+\,]$. We claim that $h$ is convex. Recall that by the Identity Lemma 3.3.1, we can rewrite $F(A|p)$, and hence $h(p)$, as
\[
h(p) = F(A|p) = \sum_{i=1}^{n} \Big[ \alpha_i (C_i - A_{i+1}) + \beta_i (C_i - A_{i+1})^+ + \gamma_i \Big( \max\{C_i, A_{i+1}\} - \sum_{k=1}^{i} p_k \Big) \Big]
\]
for any $\alpha_i \in \mathbb{R}$ ($1 \le i \le n$), where $\beta_i = (o_i - \alpha_i)$ and $\gamma_i = [(u_i + \alpha_i) - (u_{i+1} + \alpha_{i+1})]$. Recall that $C_i = \max_{k \le i}\{A_k + \sum_{t=k}^{i} p_t\}$ (by the Critical Path Lemma 2.4.1), so $C_i$ is convex in $p$. By $\alpha$-monotonicity $\alpha_i, \beta_i \ge 0$, hence the terms $\alpha_i (C_i - A_{i+1})$ and $\beta_i (C_i - A_{i+1})^+$ are convex in $p$. Furthermore, the term $\gamma_i (\max\{C_i, A_{i+1}\} - \sum_{k=1}^{i} p_k)$ is convex (in fact linear) in $p$. Therefore $h(p)$ is convex.

Recall that $\nu = \min\{u_1, u_2, \ldots, u_n, o_1, o_2, \ldots, o_n\}$ and that the $\tilde{C}_i$'s are the completion times; they are deterministic since we are using the expected values, the $\tilde{p}_i$'s, for the processing times. We next show that $F_p(A) \ge \tilde{f}(A)$ by applying Jensen's inequality to $h(p)$ and applying the Identity Lemma 3.3.1 to $F(A|\tilde{p})$:
\begin{align*}
F_p(A) = E_p[h(p)] \ge F(A|E p) = F(A|\tilde{p})
&= \sum_{i=1}^{n} \Big[ \alpha_i (\tilde{C}_i - A_{i+1}) + \beta_i (\tilde{C}_i - A_{i+1})^+ + \gamma_i \Big( \max\{\tilde{C}_i, A_{i+1}\} - \sum_{k=1}^{i} \tilde{p}_k \Big) \Big] \\
&= \sum_{i=1}^{n} \big[ o_i (\tilde{C}_i - A_{i+1})^+ + u_i (A_{i+1} - \tilde{C}_i)^+ \big] \\
&\ge \sum_{i=1}^{n} \nu \big[ (\tilde{C}_i - A_{i+1})^+ + (A_{i+1} - \tilde{C}_i)^+ \big] = \tilde{f}(A).
\end{align*}

Next we obtain $\tilde{A} \in \arg\min_A \tilde{f}(A)$. Note that $\tilde{f}(A) \ge 0$ for all $A$. Set $\tilde{A}_1 = 0$ and $\tilde{A}_{i+1} = \sum_{j=1}^{i} \tilde{p}_j$ for $1 \le i \le n$, i.e., $\tilde{A}_{i+1} - \tilde{A}_i = \tilde{p}_i$ and $\tilde{A}_{n+1} = \sum_{j=1}^{n} \tilde{p}_j$. Then $\tilde{A}_{i+1} = \tilde{C}_i$ for all $i = 1, \ldots, n$ and $\tilde{f}(\tilde{A}) = 0$. Therefore $\tilde{A} = (0, \tilde{p}_1, \ldots, \sum_{j=1}^{n} \tilde{p}_j)$ is indeed optimal for $\tilde{f}$.

We next show $F_p(A) \ge \frac{\nu}{n} \|A - \tilde{A}\|_1$ by showing $\tilde{f}(A) \ge \frac{\nu}{n} \|A - \tilde{A}\|_1$. Note that $\sum_{i=1}^{n} [(\tilde{C}_i - A_{i+1})^+ + (A_{i+1} - \tilde{C}_i)^+] = \sum_{i=1}^{n} |\tilde{C}_i - A_{i+1}|$, and the result would follow if we show $\sum_{i=1}^{j} |\tilde{C}_i - A_{i+1}| \ge |A_{j+1} - \tilde{A}_{j+1}| = |A_{j+1} - \sum_{t=1}^{j} \tilde{p}_t|$ for all $j = 1, 2, \ldots, n$. We distinguish two cases.

First, suppose that $A_{j+1} \le \sum_{t=1}^{j} \tilde{p}_t$. Since $\sum_{t=1}^{j} \tilde{p}_t \le \tilde{C}_j$, we have $A_{j+1} \le \tilde{C}_j$. Therefore
\[
\sum_{i=1}^{j} |\tilde{C}_i - A_{i+1}| \ge |\tilde{C}_j - A_{j+1}| \ge \Big| \sum_{t=1}^{j} \tilde{p}_t - A_{j+1} \Big| = |\tilde{A}_{j+1} - A_{j+1}|.
\]

The second case is where $A_{j+1} > \sum_{t=1}^{j} \tilde{p}_t$. Then
\[
\sum_{i=1}^{j} |\tilde{C}_i - A_{i+1}| \ge \sum_{i=1}^{j} (A_{i+1} - \tilde{C}_i)^+ = \max\{\tilde{C}_j, A_{j+1}\} - \sum_{t=1}^{j} \tilde{p}_t \ge A_{j+1} - \sum_{t=1}^{j} \tilde{p}_t = |A_{j+1} - \tilde{A}_{j+1}|,
\]
where the first equality follows from the Identity Lemma 3.3.1. Hence we obtain $\sum_{i=1}^{j} |\tilde{C}_i - A_{i+1}| \ge |A_{j+1} - \tilde{A}_{j+1}|$ for all $1 \le j \le n$.

Therefore, for every $j = 1, \ldots, n$, $\sum_{i=1}^{n} |\tilde{C}_i - A_{i+1}| \ge \sum_{i=1}^{j} |\tilde{C}_i - A_{i+1}| \ge |A_{j+1} - \tilde{A}_{j+1}|$, and hence
\[
n \tilde{f}(A) = n \nu \sum_{i=1}^{n} \big[ (\tilde{C}_i - A_{i+1})^+ + (A_{i+1} - \tilde{C}_i)^+ \big] = n \nu \sum_{i=1}^{n} |\tilde{C}_i - A_{i+1}| \ge \nu \|A - \tilde{A}\|_1,
\]
as desired. Therefore $F_p(A) \ge \frac{\nu}{n} \|A - \tilde{A}\|_1$. This completes the proof. □

Definition 3.7.1. (Definition 3.3 of [22]) Let $f : \mathbb{R}^m \to \mathbb{R}$ be convex. A point $y$ is an $\alpha$-point if there exists $g \in \partial f(y)$ such that $\|g\|_1 \le \alpha$.

Lemma 3.7.2.
(A version of Lemma 5.1 of [22]) Let $f : \mathbb{R}^m \to \mathbb{R}$ be convex and finite with a global minimizer $y^*$. Assume that there exists $\bar{f}$ such that $f \ge \bar{f} = \lambda \|y - \tilde{y}\|_1$ for some $\lambda > 0$ and $\tilde{y} \in \mathbb{R}^m$. If $\hat{y}$ is an $\alpha$-point for $\alpha = \lambda \epsilon / 3$, then $f(\hat{y}) \le (1 + \epsilon) f(y^*)$.

Proof. Let $L = f(y^*)/\lambda$. Consider the $l_1$-norm ball $B = B(\tilde{y}, L)$; then $y^* \in B(\tilde{y}, L) = \{y : \lambda \|y - \tilde{y}\|_1 \le f(y^*)\}$. The subgradient inequality at $\hat{y}$ combined with the Cauchy-Schwarz inequality yields $f(\hat{y}) - f(y^*) \le \alpha \|\hat{y} - y^*\|_1$ (since the Cauchy-Schwarz inequality also holds for the $l_1$ norm). We also have $\|\hat{y} - y^*\|_1 \le \|\hat{y} - \tilde{y}\|_1 + \|\tilde{y} - y^*\|_1 \le f(\hat{y})/\lambda + L = f(\hat{y})/\lambda + f(y^*)/\lambda$. So we obtain $f(\hat{y}) - f(y^*) \le \alpha (f(\hat{y})/\lambda + f(y^*)/\lambda)$ and hence $f(\hat{y}) \le f(y^*)(\lambda + \alpha)/(\lambda - \alpha)$. If we choose $\alpha \le \lambda \epsilon / 3$, the result follows. □

3.8 Bibliography

[1] Shabbir Ahmed and Alexander Shapiro. The sample average approximation method for stochastic programs with integer recourse. Optimization Online, 2002.
[2] Mokhtar S. Bazaraa, Hanif D. Sherali, and C. M. Shetty. Nonlinear Programming: Theory and Algorithms. John Wiley & Sons, 2006.
[3] Dennis Blumenfeld. Operations Research Calculations Handbook. CRC Press, 2001.
[4] James H. Bookbinder and Anne E. Lordahl. Estimation of inventory re-order levels using the bootstrap statistical procedure. IIE Transactions, 21(4):302–312, 1989.
[5] Peter M. Vanden Bosch, Dennis C. Dietz, and John R. Simeoni. Scheduling customer arrivals to a stochastic service system. Naval Research Logistics, 46:549–559, 1999.
[6] Tugba Cayirli and Emre Veral. Outpatient scheduling in health care: A review of literature.
Production and Operations Management, 12(4), 2003.
[7] Leon Yang Chu, J. George Shanthikumar, and Zuo-Jun Max Shen. Solving operational statistics via a Bayesian analysis. Operations Research Letters, pages 110–116, 2008.
[8] Brian Denton and Diwakar Gupta. A sequential bounding approach for optimal appointment scheduling. IIE Transactions, 35:1003–1016, 2003.
[9] Xiaomei Ding, Martin L. Puterman, and Arnab Bisi. The censored newsvendor and the optimal acquisition of information. Oper. Res., 50(3):517–527, 2002.
[10] Mohsen Elhafsi. Optimal leadtime planning in serial production systems with earliness and tardiness costs. IIE Transactions, 34:233–243, 2002.
[11] Satoru Fujishige. Submodular Functions and Optimization. Elsevier, 2005.
[12] Guillermo Gallego and Ilkyeong Moon. The distribution free newsboy problem: Review and extensions. J. Oper. Res. Soc., 44(8):825–834, 1993.
[13] Gregory A. Godfrey and Warren B. Powell. An adaptive, distribution-free algorithm for the newsvendor problem with censored demands, with application to inventory and distribution problems. Man. Sci., 47(8):1101–1112, 2001.
[14] Jean-Baptiste Hiriart-Urruty and Claude Lemarechal. Convex Analysis and Minimization Algorithms I and II. Springer, 1993.
[15] Wassily Hoeffding. Probability inequalities for sums of bounded random variables. J. American Statistical Assoc., 58(301):13–30, 1963.
[16] Woonghee Tim Huh, Retsef Levi, Paat Rusmevichientong, and James B. Orlin. Adaptive data-driven inventory control policy based on Kaplan-Meier estimator. Working Paper, 2008.
[17] Woonghee Tim Huh and Paat Rusmevichientong. A non-parametric asymptotic analysis of inventory planning with censored demand. Math. of Oper. Res., to appear, 2009.
[18] Satoru Iwata. Submodular function minimization. Math. Program., 112:45–64, 2008.
[19] Guido C. Kaandorp and Ger Koole. Optimal outpatient appointment scheduling. Health Care Man. Sci., 10:217–229, 2007.
[20] E.
L. Kaplan and Paul Meier. Nonparametric estimation from incomplete observations. J. American Statistical Assoc., 53(282):457–481, 1958.
[21] Anton J. Kleywegt, Alexander Shapiro, and Tito Homem-De-Mello. The sample average approximation method for stochastic discrete optimization. SIAM J. Optim., 12:479–502, 2001.
[22] Retsef Levi, Robin O. Roundy, and David B. Shmoys. Provably near-optimal sampling-based policies for stochastic inventory control models. Math. of Oper. Res., 32(4):821–838, 2007.
[23] Liwan H. Liyanage and J. George Shanthikumar. A practical inventory control policy using operational statistics. Operations Research Letters, 33:341–348, 2005.
[24] James Luedtke and Shabbir Ahmed. A sample approximation approach for optimization with probabilistic constraints. SIAM J. Optim., 19:674–699, 2008.
[25] S. T. McCormick. Submodular function minimization. A chapter in the Handbook on Discrete Optimization, K. Aardal, G. Nemhauser, and R. Weismantel, eds. Elsevier, 2006.
[26] Kazuo Murota. Discrete convex analysis. Math. Programming, 83(3):313–371, 1998.
[27] Kazuo Murota. Discrete Convex Analysis, SIAM Monographs on Discrete Mathematics and Applications, Vol. 10. Society for Industrial and Applied Mathematics, 2003.
[28] Kazuo Murota. On steepest descent algorithms for discrete convex functions. SIAM J. Optim., 14(3):699–707, 2003.
[29] James B. Orlin. A faster strongly polynomial time algorithm for submodular function minimization. Math. Programming, 118(2):237–251, 2007.
[30] Georgia Perakis and Guillaume Roels. Regret in the newsvendor model with partial information. Oper. Res., 56(1):188–203, 2008.
[31] Warren B. Powell, Andrzej Ruszczynski, and Huseyin Topaloglu. Learning algorithms for separable approximations of discrete stochastic optimization problems. Math. of Oper. Res., 29(4):814–836, 2004.
[32] Lawrence W. Robinson, Yigal Gerchak, and Diwakar Gupta.
Appointment times which minimize waiting and facility idleness. Working Paper, DeGroote School of Business, McMaster University, 1996.
[33] R. Tyrrell Rockafellar. Theory of subgradients and its applications to problems of optimization: convex and nonconvex functions. Helderman-Verlag, Berlin, 1981.
[34] F. Sabria and C. F. Daganzo. Approximate expressions for queueing systems with scheduled arrivals and established service order. Transportation Science, 23:159–165, 1989.
[35] Herbert Scarf. A min-max solution to an inventory problem. In K. J. Arrow, S. Karlin, and H. Scarf, editors, Studies in the Mathematical Theory of Inventory and Production, 1958.
[36] Alexander Shapiro. Stochastic programming approach to optimization under uncertainty. Math. Programming, 112:183–220, 2007.
[37] Alexander Shapiro and Arkadi Nemirovski. On complexity of stochastic programming problems. A chapter in Continuous Optimization: Current Trends and Applications, V. Jeyakumar and A. M. Rubinov, eds. Springer, 2005.
[38] Chaitanya Swamy and David B. Shmoys. Algorithms column: Approximation algorithms for 2-stage stochastic optimization problems. ACM SIGACT News, 37(1):33–46, 2006.
[39] P. Patrick Wang. Static and dynamic scheduling of customer arrivals to a single-server system. Naval Research Logistics, 40(3):345–360, 1993.
[40] Larry Wasserman. All of Statistics: A Concise Course in Statistical Inference. Springer Texts in Statistics, 2004.
[41] E. N. Weiss. Models for determining estimated start times and case orderings in hospital operating rooms. IIE Transactions, 22(2):143–150, 1990.

4 Incentive-Based Surgery Scheduling: Determining Optimal Number of Surgeries1

We study the problem of determining the number of surgeries for an operating room (OR) block where surgery durations are random, there are significant idle and overtime costs for running an OR, and the incentives of the parties involved (hospital and surgeon) are not aligned.
We explore the interaction between the hospital and the surgeon in a game theoretic setting, present empirical findings on surgery durations and suggest, under reasonable assumptions, payment schemes that the hospital may offer to the surgeon to reduce its (idle and especially overtime) costs.

4.1 Introduction

Healthcare is one of the biggest industries in North America. Canada was expected to spend $148 billion on healthcare in 2006 [8], which accounts for more than 10% of its GDP. The situation is similar in the United States, where healthcare accounted for 15.3% of GDP in 2006 [6]. Healthcare challenges, including rising costs and demand, are continually becoming more acute not only in Canada and the United States, but in almost every country in the world [5].

In [10], Glouberman and Mintzberg develop a novel framework to analyze healthcare management. According to this framework, we can think of healthcare as an industry like any other, but with some additional unique characteristics. Unlike other businesses, either private or public, no one is in complete charge (e.g., of a hospital), and there are several decision makers with conflicting objectives. For instance, managers make resource allocation decisions, but it is doctors who decide what to do with those resources.

1 A version of this chapter will be submitted for publication. Begen, M. A., Ryan, C. and Queyranne, M. Incentive-Based Surgery Scheduling: Determining Optimal Number of Surgeries.

Healthcare is more easily categorized as belonging to the public sector than the private sector in most countries. In his article [7], Dixit states the following about public sector agencies: “Public sector agencies have some special features, most notably a multiplicity of dimensions - of tasks, of the stake holders and their often conflicting interests about the ends and the means, and of the tiers of management and front-line workers”.
The framework in [10] makes it easier to understand this statement and why the healthcare system is difficult to manage. We can think of hospitals as essentially independent blocks of a healthcare system. According to [10], there are four different management groups (four worlds, if you will) within a hospital, as shown in Figure 4.1.

[Figure 4.1 depicts the four groups and their tasks: managers (control), doctors (cure), nurses (care) and trustees (community), arranged along an up/down axis (managerial vs. clinical work) and an in/out axis (inside vs. outside the hospital).]
Figure 4.1: Healthcare Players and Tasks

Each group in Figure 4.1 has its own objectives, and has some scope to make its own decisions. Doctors and nurses deliver clinical operations, and hence focus on downstream considerations; that is, closer to dealing with the actual health of patients. Managers and trustees are responsible for budgeting and raising funds for the hospital, so their concerns are upstream. On the other hand, the hospital's employees (managers and nurses) work in the hospital, while doctors and trustees work out of the hospital, since they are not technically employees of the hospital. In Canada and the United States, although some doctors are salaried hospital employees, most doctors are private entrepreneurs who have admission privileges at a hospital, work on a fee-for-service basis and appear when the patient needs a cure or treatment [5].

The significance of this framework for our study is the fact that, in order to provide a medical service such as a surgery, all four groups have a unique contribution to make. Conversely, the actions of any one of the groups have an effect on the ability of the others to perform their duties. The picture gets even more complicated when we consider government and insurance companies. In [10], Glouberman and Mintzberg conclude that these decision-makers must achieve a certain level of integration to provide effective healthcare management. Other studies, including Calmes and Shusterich [3] and Marco [15], reach a similar conclusion for operating room management.
Operating rooms are among the most essential parts of a hospital, and also among the costliest. These authors, among others (see for instance [11, 13]), point out that ORs are one of the most difficult places to manage in a hospital, and that it is imperative to improve collaboration between the players (of an OR) for any advancement in OR management. In this chapter, we focus on this small but important part of healthcare operations: surgery scheduling and, more specifically, setting the number of surgeries for an OR. In particular, we study the interactions between a surgeon and a hospital in determining the number of surgeries for an OR where surgery durations are random and there are significant costs for idle time and overtime. More specifically, we explore the situation, reported in the literature (Olivares et al. [18]) and observed empirically (Section 4.2), that surgeons over-schedule their allotted OR time, i.e., they schedule too many surgeries for their OR time. We argue that this observation can be explained by the incentive of surgeons to take advantage of the fee-for-service payment structure for surgeries performed, combined with the fact that surgeons do not bear overtime costs at the hospital level. This creates a cost which is borne by the hospital, which operates the OR and pays the surgery support staff. Thus, in our model, the hospital has an incentive to limit the number of surgeries performed by surgeons in order to reduce overtime expenditures. We explore this misalignment of incentives (for the surgeon to over-schedule and for the hospital to control overtime costs) in a game theoretic setting.

Only recently has systematic attention been given in the literature to incentive issues in healthcare management. For instance, researchers have studied physician-patient, government-physician, and hospital-physician relationships in the context of a principal-agent modeling framework [23].
There are also studies, such as [19], that look at a bigger picture and explore patient-physician-third party payer relationships. A more recent study [12] develops a framework to empirically estimate the parameters of a principal-agent model to design a payment system for dialysis providers. Other authors focus on the expansion of ORs [13] and stakeholder interactions in ORs [14] in basic game theoretic environments. In [14], it is stated that many interactions between surgeons, anesthetists, nurses and hospital management can be seen as a repeated game. We also mention two empirical studies [12, 18], which estimate cost parameters in a principal-agent model and a newsvendor model, respectively, for making decisions about compensation and capacity. Of particular interest to the present work is a suggestion raised in Olivares et al. [18] that schedule overruns are mainly caused by incentive conflicts and overconfidence. Our chapter takes a systematic look at this very question, providing a model by which these incentive conflicts can be identified and effectively analyzed. To the best of our knowledge, our chapter presents the first systematic study of determining the number of surgeries for an OR block that investigates the interaction between the surgeon and the hospital (management) in a game-theoretic setting. Our research has been motivated by our observations from applied healthcare projects such as [20], the literature (e.g., [18]), and empirical findings (Section 4.2).

The organization of the chapter is as follows. In Section 4.2, we define and motivate the problem, give an overview of the surgery scheduling process, present data, and discuss findings (empirical, literature-based and anecdotal) on the underestimation of surgery durations, and hence on overtime in an OR (block). Section 4.3 presents our model, including notation and a thorough discussion of assumptions.
In that section, we define the objective functions of the surgeon and the hospital and explore their properties. In Section 4.4, we demonstrate a misalignment of incentives between the hospital and the surgeon. Section 4.5 provides alternative contracting scenarios whereby these incentives can be realigned, and takes care to characterize sufficient conditions under which these schemes are cost effective for the hospital. This section also includes a discussion of welfare considerations from the perspective of an upstream planner (in the Canadian system, the provincial government) who is concerned with maximizing social welfare, including impacts on patients. In Section 4.6, we consider a different formulation of our model which makes alternative assumptions about the independence of surgeries, and in this framework we explore the impact on some of our results in the presence of a risk-averse surgeon. All the proofs of our analytical results are placed in Section 4.8.

4.2 Problem Description and Motivation

The objective of our study is to determine the number of elective (i.e., non-emergency, scheduled) surgeries for an OR block where surgery durations are random, there are significant idle and overtime costs, and the incentives of the hospital and the surgeon (the parties involved in the scheduling process) are not necessarily aligned. We start with an overview of a surgery scheduling process. In practice, scheduling surgeries in a medical facility is a complex and important process, and the choice of schedule directly impacts the number of patients treated for each specialty, cancellation of surgeries, utilization of resources, wait times, and the overall performance of the system [20]. The surgery scheduling process for elective cases is usually considered as a three-level process [1, 2, 18], which we now describe. The first level defines and assigns the OR time among the surgical specialties, usually called mix planning.
A surgical OR block schedule is developed at the second level. An OR block schedule is simply a table that assigns each specialty surgery time in ORs on each day. The times are called blocks. The OR block schedule is sometimes called the master surgical schedule (see Figure 2 of [20] for a sample OR block schedule). Finally, at the third level we schedule individual cases on a daily basis, also known as patient mix. We can classify these levels as the strategic, tactical and operational stages of the surgery scheduling process, respectively. Figure 4.2 gives an overview of the process in terms of decision levels, decision makers and decisions.

Decision level | Decision maker      | Decision
Strategic      | Health Authority    | Budget and surgical mix (specialties and % of time, i.e., capacity per specialty)
Tactical       | Hospital management | Block schedule (blocks for each specialty/surgeon)
Operational    | Surgeons            | Patient schedule (scheduling of patients into a block)

Figure 4.2: Surgery Scheduling Process

At the first level, the budget often determines the available OR time, and there could be several factors determining the proportion of time to be assigned to each surgical specialty. For instance, waiting times (or the number of patients waiting for a certain type of surgery) and the seniority of surgeons might be used to define the amount of OR time required by each specialty. At the second level, the OR time assigned to each specialty is used to build the surgical block schedule, assigning days of the week and operating rooms, taking into consideration the availability of both ORs and post-surgery resources such as recovery beds [20, 1]. The third level has more of an operational focus. Individual surgeries are scheduled within an assigned block in the overall OR block schedule. It is at this level that one determines the number of surgeries to perform in a block, the sequence of the surgeries performed, and the planned start times (appointment times) of the surgeries.
It is primarily at this level that variability in surgery durations plays a key role. It is also important to note that surgeons may have surgical privileges in one or more hospitals, and they are often the decision-makers who manage the third stage of the surgical scheduling process [20]. Figure 4.3 shows the decisions taken, and their usual order, at this operational level of surgery scheduling.

number of surgeries → sequence of the surgeries → planned start times (appointment times) of the surgeries

Figure 4.3: The Third Level of Surgery Scheduling Process

Ideally, one should consider all three decisions not in isolation but within a unified framework of analysis. However, practical applications and mathematical challenges force practitioners and academics to work on these problems individually. For instance, even the problem of determining the planned start times of surgeries is difficult on its own (e.g., see Chapter 2 and the references therein). In this chapter, we address the first decision of level 3, namely determining the number of surgeries. We do not consider sequencing (which is a challenging problem in itself) since we will assume identical surgeries. We also do not consider appointment scheduling, because our main focus is to explain the incentive issues between the surgeon and the hospital in setting the number of surgeries, not their schedule. A practical justification for this is the common practice for all patients expecting a surgery on a given day to arrive at the hospital in the morning and await their surgeries. Thus, the surgeon has no idle time between surgeries scheduled in an OR block.

We consider the problem in a Canadian context, where hospitals receive funding to make ORs and supporting staff and equipment available. As explained in Blake and Donald [2], governments in Canada have historically managed the amount of healthcare services by putting rigid constraints on hospital budgets.
By doing so they control hospital spending and resources, and therefore indirectly the actions of physicians. In our setting, the hospital is a cost minimizer, and it receives funding (e.g., from a provincial government) to run ORs. We consider two types of cost for a hospital: idle time and overtime costs. Idle time cost may be seen as an opportunity cost of the OR being idle; this is especially important in a Canadian context due to important political and social issues related to the length of surgical waiting lists [22]. Overtime costs are also significant, since operating beyond the regular OR time is often quite costly. One source of cost is in paying nurses, who are paid overtime by the hospital when they work beyond shift hours. On the other hand, the surgeon is a private entrepreneur who has privileges at the hospital and works on a fee-for-service basis.

It is commonly believed in practice that surgeons tend to underestimate surgery durations and hence perform many surgeries in their allotted OR time, more than what may be ideal for the hospital. Empirical findings provide some evidence for this belief, as we show next. Due to the randomness of surgery durations2 (Figure 4.4 and [24]), one cannot predict the precise time required for a surgery; instead, it must be estimated. The usual practice is to use the surgeon's surgery duration predictions in OR bookings. In Canada, usually (if not always) it is the surgeons who keep track of the patients that require a surgical procedure, decide on the order in which they will be performed, and determine the schedule of their OR block.

2 The data used in this chapter come from a local hospital in Vancouver, BC. The data cover a period in 2007 and 2008 with over 5000 elective surgeries. The hospital has over fifteen surgical specialties and seven ORs.

[Figure 4.4 shows the empirical frequency distribution of the actual duration (in minutes) of a simple hernia operation; durations range from about 32 minutes to over 96 minutes.]
Figure 4.4: Duration Distribution of a Simple Hernia Operation

Anecdotal evidence (our discussions with surgeons, anesthetists and OR booking managers, and observations in projects with hospitals and health authorities) and data-based evidence ([18], Figure 4.5) suggest that surgeons are often overly optimistic about the duration of the surgeries that they perform, i.e., surgeons think that they can perform surgeries more quickly, or they tend to underestimate their durations. The hospital may have historical data on the surgeon and a specific type of surgery, and the hospital's OR booking manager may sometimes interfere with the surgeon's predictions if the manager thinks that the surgeon is underestimating the durations. However, this is not common, since "surgeons are the most mobile and least easily replaced healthcare professionals" as stated in [13]. This is due in part to the fact that surgeons are a highly mobile and scarce resource in the Canadian healthcare industry and thus wield a lot of power. In a city with multiple hospitals or private clinics their power is even more enhanced, due to their mobility between hospitals.

Figure 4.5 depicts a comparison of the actual and booked/scheduled durations of surgeries. If surgery durations were perfectly estimated, we would expect all surgeries to be on the 45 degree line. However, we see that in the majority of cases (81%, based on the data we collected) actual durations were longer than booked/scheduled durations. Figure 4.5 shows that the durations of individual surgeries are often underestimated. One may ask how this phenomenon actually affects the daily overall performance of an OR block, i.e., the amount of overtime for an OR as well as the likelihood of an OR going into overtime. To answer this question, we look at the data at an operating room level. Our data come from a hospital with seven ORs, and on average five ORs run per day. For each OR, we
For each OR, we compute the daily average of scheduled and overtime OR minutes.

Figure 4.5: Actual and Scheduled Surgery Durations

We summarize our findings in Figure 4.6. The figure also shows the percentage of overtime, i.e., the ratio of overtime OR minutes to scheduled OR minutes. We see from this figure that the overtime amount is well over 20% for each OR. Total average daily overtime minutes from all ORs add up to 167 minutes. We also compute the percentage of days on which each OR ran overtime, to estimate the probability of daily overtime for each OR. We give these probabilities in Table 4.1. These numbers are well above 75%, suggesting that overtime for an OR is very likely.

Table 4.1: Estimates of Daily Overtime Probability per OR

  Operating Room   A     B     C     D     E     F     G
  Probability      0.91  0.95  0.97  0.77  0.75  0.93  0.91

These empirical findings – the significant amount and high likelihood of overtime – suggest that the cost of overtime can be substantial; overtime pay rates (of hospital personnel) are higher than regular pay rates. In addition, excessive overtime can cause losses in job satisfaction, fatigue and other work-related problems among hospital employees.

Figure 4.6: Daily Average Overtime Minutes and Probability of Overtime per OR (overtime percentages per OR range from 23% to 51%)

If an OR can be managed in such a way that overtime is decreased, it is hoped that this will translate into immediate and significant cost savings. Additionally, savings from reduced overtime costs may be used to increase hospital resources such as regular OR time and recovery and intensive care beds.
The question then becomes: how can we reduce overtime? Before we propose a solution, we discuss the reasons for overtime. The current method of assigning surgeries to OR time works roughly as follows: the surgeon provides an OR booking manager with a list of surgeries with estimated durations. If the total of the estimated durations is less than the allotted time, the booking manager accepts the schedule and coordinates the appropriate surgical support staff for each individual operation. Figure 4.5, however, shows underestimation of individual surgery durations by surgeons, i.e., surgeons are quite optimistic about how quickly they perform surgeries. Because of this underestimation, the surgical plan presented to the booking manager may contain more surgeries than can actually be accommodated in the OR block. Thus, reducing overtime crucially depends on the way that surgical plans are devised by surgeons and approved by hospital management staff.

A first step in remedying this situation is understanding why surgeons book more surgeries than they can realistically complete in a given OR block. One reason is that surgeons desire to serve as many of their patients as possible for altruistic reasons – a surgeon sees his patients suffer and hopes they can be healed as promptly as possible. Another, more structural, reason derives from the remuneration scheme for surgeons. In Canada, most surgeons are paid by the provincial government based on the number and type of surgeries they perform, irrespective of how much time each surgery takes. If we assume surgeons are profit maximizers, this payment scheme puts the emphasis on performing as many surgeries as possible and devalues the costs associated with the overuse of hospital resources. The intuition is simple: suppose having an additional hour in the OR allows the surgeon to take on one more surgery.
The surgeon may be quite willing to take on this overtime, since the benefit of performing the surgery is a fixed amount that may be worth significantly more than the disutility of one hour of working overtime. This may often be the case, especially since the surgeon need not consider the overtime costs of support staff and materials when deciding whether working one more hour is desirable. We argue that the optimism we see in the data regarding estimated surgery durations reflects this incentive, since it directly affects the number of surgeries surgeons are able to book and execute in their blocks.

Hospitals have an interest in influencing surgeons to perform fewer surgeries and to estimate their durations more accurately, since this would mean lower overtime costs. This raises two important considerations for the hospital. The first is to decide how many surgeries the hospital itself would prefer to be performed in the OR block. It is reasonable to assume that the hospital would be better off with a surgeon who optimizes the use of resources (the most important resource being OR time) rather than one who maximizes profit, i.e., the hospital’s ideal number of surgeries may be less than the profit-maximizing surgeon’s number. We propose that the hospital should determine its own ideal number of surgeries in order to minimize its own cost. This leads us, however, to the second key issue: how can the surgical booking procedure be adjusted so that hospitals can influence surgeons to decide on a surgery plan that better reflects the costs of the hospital? Indeed, the surgeon may not cooperate with being dictated to perform fewer surgeries (e.g., she/he has a better outside alternative), and thus the hospital must consider how to design a contract that will be accepted by the surgeon and also save money for the hospital (with respect to the current “high-overtime” situation).
In this chapter, we characterize analytically the number of surgeries that minimizes hospital costs, find conditions under which this number is less than the surgeon’s preference, and suggest contracts to remedy this misalignment between the hospital and surgeon in determining the number of surgeries in a given OR block. We consider the surgeon as an agent and the hospital as a principal, and use a simple principal-agent model of analysis common in applied economics. (For a review of principal-agent theory we refer the reader to [9] and the references therein.) In principal-agent theory, when there is no information asymmetry between players, the first best (i.e., the best possible outcome) can be achieved with a “forcing contract”, i.e., the principal can dictate to the agent what to do (here, how many surgeries to perform). With or without information asymmetry between the players, a residual claimancy contract can be used to achieve the first best if the agent is risk neutral and such a contract is feasible (e.g., there is a single agent who has unlimited wealth). We can think of a residual claimancy contract in our setting as the hospital renting the operating room to the surgeon at the hospital’s opportunity cost of the room. The surgeon then makes his or her scheduling decision having internalized all costs and benefits of the decision, and the incentive problem is completely resolved. In our analysis, both players are risk neutral (although we briefly study the case of a risk-averse surgeon in Section 4.6.2), and we consider two cases regarding information asymmetry: no information asymmetry (the hospital has access to the surgery duration distribution) and information asymmetry (the hospital has access only to the mean of surgery durations). We propose two payment schemes that achieve the first best, depending on how much information the hospital has on surgery durations.
If the hospital has access only to the mean duration of surgeries, then it may choose a three-part contract (Section 4.5.3); if the hospital has access to the entire distribution of surgery durations, then it may choose either a take-it-or-leave-it offer at the optimal number of surgeries or a three-part contract (Section 4.5.2). The three-part contract can be seen as a residual claimancy contract, whereas the take-it-or-leave-it offer can be thought of as a forcing contract.

4.3 The Model

We start with a short description of our notation and define the objective functions of the hospital and the surgeon. We make several important assumptions to refine our model, help focus on the incentive issues involved and avoid over-complication. Each assumption will be discussed and motivated, and the more restrictive assumptions will be noted. As discussed, the scenario is a surgeon working out of a hospital, using its OR facilities and support staff. The hospital receives funding (e.g., from a provincial government) to make its operating rooms available for surgeries at minimum possible cost. The surgeon works in an operating room reserved (determined by an OR block schedule) for her/his use in the hospital. The scheduled time, i.e., the length of the OR block allocated to this surgeon, is denoted d and is given exogenously in our model. The surgeon is paid a fee-for-service rate of r dollars per surgery directly by the provincial government. It is important to stress that we assume the surgeon is not directly paid by the hospital (as in the Canadian health care system). Indeed, this is a distinctive feature of our model as compared to a classical principal-agent framework, where the principal compensates the agent. The surgeon decides the number n of surgeries to schedule during her/his allotted time. We assume that there is a long list of people waiting to receive the given surgery, and thus no shortage of demand for operations.
This is quite reasonable for most (if not all) types of (elective) surgeries [21]. The fact that the surgeon chooses n and not an “effort level” is another distinctive feature of our setting that is not considered in standard principal-agent problems. Here we may think of the surgeon’s choice as a rough proxy for effort. The number of surgeries n is the key decision variable, and exploring precisely how it is determined is the distinguishing feature of our analysis. As is common in practice, we assume that every surgery scheduled must be performed on the scheduled day, even if this causes the total duration of all n surgeries to exceed d. An important extension of our analysis, which is not addressed here, would be to consider the possibility of cancelations after a certain cut-off time.

Each surgery i has a random duration t_i. We assume that the support of each duration distribution is contained in the positive real line R+ and that each has identical finite mean µ. We also assume that the random variables t_i are independent. Let T(n) = t_1 + ... + t_n denote the random duration of n surgeries; it is a random real-valued function of n. The (random) overtime of n surgeries can thus be expressed as max{0, T(n) − d}, and similarly max{0, d − T(n)} represents the (random) idle time.

The above two assumptions – identical mean for each surgery and independence – are worthy of further discussion. First, by assuming identical means we may think of the surgeon scheduling n elective surgeries of a similar type, for instance, all hernia operations. We see this practice in certain surgical specializations, e.g., ophthalmology. Another motivation for this assumption is to simplify the model and avoid consideration of the surgery sequence. If surgeries have varying means, the question of sequence becomes paramount and the problem becomes more combinatorial in nature.
Nonetheless, since we allow surgeries to have different distributions (under the condition that they have the same mean), sequence is still an issue. For instance, a schedule of five “high”-variance surgeries will have different properties from a sequence of “low”-variance surgeries. Thus, we assume that the sequence is given and the surgeon simply chooses n consecutive elements of that sequence. An alternative assumption is that the t_i are independent and identically distributed (iid), in which case sequence is irrelevant. The results for this case are essentially identical to those found here, and so we adopt the former assumption. In either case, the goal is to focus on the incentives that drive the choice of the number of surgeries and to avoid extraneous complexities at this point.

Second, we address the assumption of independence of surgery durations. Indeed, one might argue that surgeon fatigue creates a dependence in surgery durations, and this is a relevant criticism of our model. By assuming independence we effectively assume that all variation in surgery duration depends on the specifics of each surgery case. We can derive similar results to those found here under the assumption that all surgeries have the same (random) duration t, in other words, that there is complete dependence among surgery durations. In that case, the total duration of n surgeries performed in one day by the surgeon is the random value nt. The results we derive in this setting are similar in spirit to those discussed below and yield many of the same general findings. However, our approach to the analysis is different and, we believe, of separate interest. Details are included in Section 4.6. Thus, our analysis covers the extreme cases of independence and complete dependence, and one might imagine that similar insights arise for intermediate cases.
We start our analysis by assuming both the hospital and surgeon are risk neutral (we extend our analysis to a risk-averse surgeon in the case where all surgeries have the same duration t in Section 4.6). The hospital is not-for-profit and closer to being a public than a private organization; therefore we believe risk neutrality for the hospital is a reasonable assumption. We begin by assuming that the surgeon is risk neutral for simplicity of our arguments and analytical tractability. One reason this assumption might make sense is that the surgeon performs surgeries on many days during a month or year, so the profit deriving from a single day is small in comparison to her overall compensation. On the other hand, since our model concerns the decisions of a surgeon for a single day, it is also reasonable to consider that a surgeon might be risk averse.

We assume that the hospital’s expected cost of opening an OR to a surgeon for duration d is a function of the form

C(n) = E_{T(n)}[o_H max{0, T(n) − d} + u_H max{0, d − T(n)}],

where E_{T(n)}[·] denotes the expectation operator with respect to the random variable T(n). The cost coefficient o_H is the cost per unit time of going overtime beyond the regular working duration d, whereas the cost coefficient u_H is the cost per unit time of OR idle time. We can think of o_H as the overtime cost rate of the hospital, e.g., staffing and equipment costs after regular working hours. On the other hand, u_H may be thought of as the opportunity cost of an idle OR. Besides facility and operating costs, this cost may include a component to reflect the utility loss of patients waiting for a surgery (especially when there are long surgical waiting lists, as in Canada). This, however, is modeled more directly when we consider welfare considerations in Section 4.5.
Underage costs may also be seen to include costs related to the possibility that unused capacity might motivate budget cuts to the hospital by the provincial government. Note that our cost function does not track normal operating costs for running the operating room within the scheduled d hours; in particular, there is no direct per-unit cost incurred by the hospital per surgery. Since the hospital will incur these costs in any instance, we focus on the problem of minimizing the costs related to the over- and under-utilization of resources. This cost function is reminiscent of standard newsvendor costs, with a cost for overage and a cost for underage. The difference is that in the standard newsvendor setting, demand is random and the newsvendor chooses capacity. Here the situation is reversed: the capacity d is fixed and the choice variable n impacts the demand on that capacity (in this case, OR time). Similar models have been explored in the literature, most notably the “inverse newsvendor” model in [4]. The difference in our model is that demand is not chosen directly, but through the choice of n, which determines the number of random values that make up total demand.

Using linearity of expectation and the identity x = max{0, x} − max{0, −x}, we can express C(n) as:

C(n) = −u_H µ n + (o_H + u_H) E_{T(n)}[max{0, T(n) − d}] + u_H d.   (4.1)

This form is more useful in the analysis that follows in Section 4.4. The expected profit of the surgeon for performing n surgeries in an operating room scheduled for duration d is assumed to be of the form

π(n) = r n − E_{T(n)}[o_S max{0, T(n) − d} + u_S max{0, d − T(n)}].

The quantity o_S is the cost per unit time of performing surgeries in excess of the scheduled surgery time d. This can represent an opportunity cost for the surgeon of working outside scheduled hours, possibly reflecting alternate sources of income or leisure time.
The quantity u_S is the cost per unit time worked less than the scheduled time; it can represent lost revenues from surgeries that might have been scheduled and, indirectly, the surgeon’s loss of goodwill among patients due to longer wait times. A transformation similar to the one above yields the following more amenable form of π(n):

π(n) = (r + u_S µ) n − (o_S + u_S) E_{T(n)}[max{0, T(n) − d}] − u_S d.   (4.2)

One important feature of the surgeon’s expected profit function is that when the total duration of the surgeries is precisely d, the surgeon experiences no costs. The same is true for the hospital, making d a significant value for both parties. Clearly, this is a special case of a more general setting in which the surgeon experiences costs when the total duration differs from some other value, say l, where l ≠ d. We assume for analytical tractability that l = d, although this assumption might not hold in practice. Considering how these results extend to the case l ≠ d is one possible direction for future study.

Clearly, all the cost coefficients o_H, u_H, o_S and u_S can be challenging to estimate, which is a common problem for models of this type. We assume that the hospital knows all these cost coefficients and that the surgeon knows only his or her own. This is, undoubtedly, a strong assumption of the model. Finally, we assume that all cost coefficients and the fee-for-service rate r are nonnegative.

4.4 Misalignment of Incentives

We now discuss the process of determining the number of surgeries to perform in time d under the assumptions stated above.
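Before proceeding, the two objective functions Eq(4.1) and Eq(4.2) can be evaluated numerically. The following is a minimal Monte Carlo sketch; the exponential duration model and all parameter values are assumed illustrative choices, not estimates from the chapter's data.

```python
import random

# Minimal Monte Carlo sketch of Eq (4.1) and Eq (4.2).
# All parameter values and the exponential duration model are
# assumed illustrative choices, not the chapter's actual data.
random.seed(0)

MU, D = 60.0, 480.0             # mean surgery duration and OR block length (minutes)
O_H, U_H = 4.0, 2.0             # hospital overtime / idle-time cost rates
R, O_S, U_S = 40.0, 1.0, 3.0    # surgeon fee per surgery, overtime / undertime rates
SAMPLES = 20000

def expected_overtime(n):
    """Estimate E[max{0, T(n) - d}] with T(n) a sum of n iid
    exponential(mean MU) durations."""
    total = 0.0
    for _ in range(SAMPLES):
        t = sum(random.expovariate(1.0 / MU) for _ in range(n))
        total += max(0.0, t - D)
    return total / SAMPLES

def hospital_cost(n, theta):
    # Eq (4.1): C(n) = -u_H*mu*n + (o_H + u_H)*theta(n) + u_H*d
    return -U_H * MU * n + (O_H + U_H) * theta + U_H * D

def surgeon_profit(n, theta):
    # Eq (4.2): pi(n) = (r + u_S*mu)*n - (o_S + u_S)*theta(n) - u_S*d
    return (R + U_S * MU) * n - (O_S + U_S) * theta - U_S * D

for n in (6, 8, 10, 12):
    th = expected_overtime(n)
    print(n, round(hospital_cost(n, th), 1), round(surgeon_profit(n, th), 1))
```

The sketch makes the structure of Eq(4.1)–(4.2) concrete: both objectives depend on the duration distribution only through the expected overtime term.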
Our goal is to understand why surgeons tend to underestimate surgery durations and hence schedule more surgeries in their allotted time than might be ideal for the hospital. We do so by demonstrating that the ideal number of surgeries for the surgeon is (under some stated conditions) larger than the number preferred by the hospital, which must take overtime costs into account. We now proceed with the analysis.

4.4.1 Deciding the Number of Surgeries

First we focus on the decision of the surgeon. Let n_S be the number of surgeries the surgeon prefers to schedule when unrestricted by the hospital. In other words, n_S is chosen to maximize the profit function π. We begin by describing the properties of the surgeon’s profit function π. For our first result we need the following definitions. The first concerns convexity properties of π. We treat n as an integer variable, and thus use the following notion of discrete convexity:

Definition 4.4.1. Let f: Z → R. The first differences of f are denoted Δf(n) = f(n + 1) − f(n) and the second differences Δ²f(n) = Δf(n + 1) − Δf(n). Then f is discretely convex if its first differences are nondecreasing, or equivalently if its second differences are nonnegative, i.e., Δ²f(n) ≥ 0 for all n ∈ Z. We say f is discretely concave if Δ²f(n) ≤ 0 for all n ∈ Z.

The second definition concerns the distribution of surgery durations and is due to [16]:

Definition 4.4.2. A random variable X is new better than used in expectation (NBUE) if E[X] ≥ E[X − k | X ≥ k] for all k.

Proposition 4.4.3 (Discrete concavity of π). The surgeon’s profit function π is discretely concave when t_i is NBUE for all i.
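Proposition 4.4.3 can be illustrated numerically. The sketch below uses the complete-dependence special case T(n) = nt (treated in Section 4.6), where max{0, nt − d} is convex in n along every sample path, so the sign of the second differences is exact even under simulation. The durations and cost coefficients are assumed example values.

```python
import random

# Sketch of Definition 4.4.1 applied to the surgeon's profit: we check
# Delta^2 pi(n) <= 0 numerically.  For tractability we use the complete-
# dependence special case T(n) = n*t from Section 4.6, where
# max{0, n*t - d} is convex in n along every sample path, so the check
# is exact under simulation.  All parameter values are assumed examples.
random.seed(2)
MU, D = 60.0, 480.0
R, O_S, U_S = 40.0, 1.0, 3.0
SAMPLES = [random.expovariate(1.0 / MU) for _ in range(5000)]

def theta(n):
    # E[max{0, T(n) - d}] with T(n) = n*t (complete dependence)
    return sum(max(0.0, n * t - D) for t in SAMPLES) / len(SAMPLES)

def pi(n):
    # Eq (4.2)
    return (R + U_S * MU) * n - (O_S + U_S) * theta(n) - U_S * D

first = [pi(n + 1) - pi(n) for n in range(15)]          # Delta pi(n)
second = [first[n + 1] - first[n] for n in range(14)]   # Delta^2 pi(n)
print("discretely concave:", all(s <= 1e-9 for s in second))
```

Under independence the per-path argument no longer applies and the NBUE condition of Proposition 4.4.3 does the work instead.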
Having established the discrete concavity of π, we use the following necessary and sufficient condition for n_S to be integer optimal: π(n_S) ≥ π(n_S − 1) and π(n_S) ≥ π(n_S + 1), i.e., Δπ(n_S − 1) ≥ 0 and Δπ(n_S) ≤ 0. Necessary and sufficient conditions for the optimality of n_S in terms of the surgeon’s cost data now follow. We use the following convenient notation. Recall that O(n) = max{0, T(n) − d} is the (random) overtime from scheduling n surgeries. Let θ(n) = E_{T(n)}[O(n)] be the expected overtime. Then, by Eq(4.2), we may write π(n) = (r + u_S µ) n − (o_S + u_S) θ(n) − u_S d. Thus, n_S is optimal for max_{n≥0} π(n) if and only if:

Δθ(n_S − 1) ≤ (r + u_S µ) / (o_S + u_S) ≤ Δθ(n_S).   (4.3)

The optimality condition Eq(4.3) is reminiscent of the classic newsvendor solution based on critical fractiles. We can interpret r + u_S µ as the expected marginal benefit of undertaking an additional surgery. On the other hand, (o_S + u_S) Δθ(n_S) may be interpreted as the marginal expected overtime cost of an additional surgery. Thus, Eq(4.3) says that at the optimal choice n_S the expected marginal cost and benefit of surgery n_S must be comparable (indeed, if n is allowed to be continuous, they must be equal).

Turning now to the hospital’s decision, let n_H be the number of surgeries the hospital prefers when it knows the mean surgery duration µ and can force the surgeon to perform the number of surgeries it prefers. In other words, n_H is chosen to minimize the cost function C. Our goal is to provide optimality conditions for n_H similar to Eq(4.3). The following result describes the convexity properties of C.

Corollary 4.4.4 (Discrete convexity of C). The hospital’s cost function C is discretely convex when t_i is NBUE for all i.
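As a sketch, the critical-fractile condition Eq(4.3) can be located by scanning the marginal expected overtime Δθ(n) until it reaches (r + u_S µ)/(o_S + u_S). The simulated exponential durations and all coefficients below are assumed example values; since Δθ(n) cannot exceed µ and approaches µ for large n, a finite crossing requires (r + u_S µ)/(o_S + u_S) < µ, i.e., r < o_S µ, which the example values satisfy.

```python
import random

# Sketch of locating n_S from Eq (4.3): scan n until the marginal
# expected overtime Delta theta(n) reaches the critical fractile
# (r + u_S*mu)/(o_S + u_S).  Exponential durations and coefficients are
# assumed example values, chosen with r < o_S*mu so a finite crossing exists.
random.seed(1)
MU, D = 60.0, 480.0
R, O_S, U_S = 40.0, 1.0, 3.0
SAMPLES = 5000

def theta(n):
    """E[max{0, T(n) - d}] estimated by simulation (iid exponential)."""
    acc = 0.0
    for _ in range(SAMPLES):
        t = sum(random.expovariate(1.0 / MU) for _ in range(n))
        acc += max(0.0, t - D)
    return acc / SAMPLES

def surgeon_optimum(n_max=20):
    target = (R + U_S * MU) / (O_S + U_S)   # critical fractile in Eq (4.3)
    prev = theta(0)
    for n in range(n_max):
        cur = theta(n + 1)
        if cur - prev >= target:            # Delta theta(n) >= target: stop
            return n
        prev = cur
    return n_max

n_S = surgeon_optimum()
print("estimated n_S:", n_S)
```

The scan relies on Δθ(n) being nondecreasing, which is exactly the discrete convexity of θ guaranteed under the NBUE assumption; Monte Carlo noise can shift the detected crossing by an index or two.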
Since C is discretely convex, we use the following optimality conditions to characterize n_H: C(n_H) ≤ C(n_H − 1) and C(n_H) ≤ C(n_H + 1), i.e., ΔC(n_H − 1) ≤ 0 and ΔC(n_H) ≥ 0. Next we state the necessary and sufficient conditions for the optimality of n_H in terms of the hospital’s cost data:

Δθ(n_H − 1) ≤ u_H µ / (o_H + u_H) ≤ Δθ(n_H).   (4.4)

This condition can be interpreted similarly to the one above. Note that the value u_H µ can be interpreted as the marginal expected benefit of undertaking an additional surgery (assuming the total duration with the additional surgery does not exceed d); it does not depend on r.

We now turn to one of the motivating questions of this study: how do the incentives of the hospital and surgeon interact, and is there a misalignment of these incentives? We give conditions under which there truly is a misalignment of incentives between the parties, and attempt to explain why this misalignment arises. The main result describes conditions under which n_S ≥ n_H; in other words, when each party has a different optimal number of surgeries and the surgeon prefers to perform more surgeries than the hospital. The surgeon’s objective is to maximize his/her profit, whereas the hospital’s objective can be thought of as utilizing the OR time as fully as possible, i.e., minimizing the cost of idle time and overtime.

Theorem 4.4.5. The optimal number of surgeries from the hospital’s perspective, n_H, is less than or equal to the surgeon’s preferred number of surgeries n_S, i.e., n_H ≤ n_S, if and only if

(r + u_S µ) / (o_S + u_S) ≥ u_H µ / (o_H + u_H).   (4.5)

The condition Eq(4.5) has a straightforward interpretation. We may think of ρ_S = (r + u_S µ)/(o_S + u_S) as the ratio of the surgeon’s expected marginal benefit of performing a surgery to the per-unit-time cost of overtime.
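A quick numeric check of the ratio condition Eq(4.5), with assumed example coefficients (not estimates from the chapter's data):

```python
# Numeric check of the ratio condition Eq (4.5) in Theorem 4.4.5.
# All coefficients are assumed example values, not estimates.
MU = 60.0                         # mean surgery duration (minutes)
R, O_S, U_S = 40.0, 1.0, 3.0      # surgeon: fee, overtime rate, undertime rate
O_H, U_H = 4.0, 2.0               # hospital: overtime rate, idle-time rate

rho_S = (R + U_S * MU) / (O_S + U_S)   # surgeon's marginal ratio
rho_H = (U_H * MU) / (O_H + U_H)       # hospital's marginal ratio

# Eq (4.5): n_H <= n_S if and only if rho_S >= rho_H
print(rho_S, rho_H, rho_S >= rho_H)    # 55.0 20.0 True
```

With these example values the surgeon's fee alone nearly matches the hospital's entire marginal benefit u_H µ, so ρ_S exceeds ρ_H comfortably and misalignment is expected.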
A similar interpretation holds for the hospital’s ratio ρ_H = u_H µ/(o_H + u_H), where u_H µ is the impact on cost when one more surgery is scheduled and o_H + u_H represents a per-unit-time cost of overtime. Thus, when the “marginal ratio” ρ_S of the surgeon exceeds that of the hospital, ρ_H, the surgeon schedules more surgeries than the hospital prefers. Since in practice it is likely that n_S ≥ n_H, this indicates that the marginal ratios of the surgeon and hospital satisfy ρ_S ≥ ρ_H. It is reasonable to assume that in practice hospitals are less sensitive to undertime than to overtime, i.e., u_H ≤ o_H, while surgeons are more sensitive to undertime than to overtime, i.e., u_S ≥ o_S. This observation makes it easier to see why ρ_S ≥ ρ_H holds and why surgeons’ preferred number of surgeries may be higher than the hospital’s choice.

4.5 Contracts

We now turn to the question of how we might address the misalignment of incentives described in the previous section. Thus, we turn from explaining some of the observations detailed empirically in Section 4.2 towards considering ways to reduce the overuse of operating room resources. This can be achieved by aligning the incentives of the hospital and surgeon through the design of mechanisms or contracts. The type of mechanism required to align incentives depends on several important factors, which we discuss briefly.

One important factor in designing mechanisms is the amount of information that each of the players has. There are several types of information involved in this problem, essentially knowledge of the cost coefficients, the surgery durations and the functional form of the utilities. As above, we assume both the hospital and surgeon have common knowledge of the mean surgery duration µ and the functional form of each player’s utility, as well as their own cost coefficients.
One thing that differentiates the scenarios we discuss later is whether the hospital has complete information about the surgery duration distributions. In all of our models we assume that the hospital knows the surgeon’s cost coefficients. A second important factor is the degree to which the hospital can monitor the actions of the surgeon, in other words, whether the hospital can observe n, the number of surgeries booked by the surgeon, as well as the overtime and idle time. We are aware that in some hospitals the surgeon must present their schedule to an OR manager, who observes n and then schedules the support staff and equipment for the surgeon. Nonetheless, one might well imagine a scenario where hospital management is less informed as to the number of surgeries. We assume here that the hospital can always observe the number of surgeries planned by the surgeon and any idle time or overtime; this is a reasonable assumption in any well-run hospital. A third important factor is whether a third party – possibly a government or health authority – has some control over the design of the contract. Indeed, one factor missing in the discussion to this point is a very important one – the effective treatment of patients. We have mentioned the possibility that the hospital’s or surgeon’s concern for patients can be captured in their cost coefficients, but this is a rather indirect way to understand the impact on patient care. At the end of this section we explore welfare considerations that look explicitly at the impact on patient care and the overall efficiency of the system. In the following subsections we describe several types of contracts that arise under different assumptions about information, monitoring and power, and the role of government.

4.5.1 Hospital has Complete Information and Coercive Power

The best situation from the point of view of the hospital is when the surgeon performs n_H surgeries.
However, the requirements to ensure this outcome are quite strong. First of all, in order to compute n_H the hospital needs full knowledge of the distributions of the t_i’s. In particular, it needs the distribution of T(n) for each n, which depends on the joint distribution of t_1, ..., t_n. This is a strong informational requirement for the hospital. Nonetheless, we may assume it to be the case, since the hospital has access to historical information about the surgeries, possibly at least as much information as the surgeon has. Certainly, the hospital tracks OR usage by various physicians and likely tracks surgery types and durations (as in the data set we used in Section 4.2). Still, one might argue that the surgeon herself/himself has private information about the specifics of each case which is independent of the historical information and thus not available to the hospital. This represents an information asymmetry between surgeon and hospital. The issue is partially addressed below in another contracting scenario; studying further how asymmetry of information would impact our results is an area for future research.

The other factor, besides information, that may prevent the surgeon from following the hospital’s recommendation of n_H surgeries is that the surgeon may have some power in determining how many surgeries get scheduled. In general, the surgeon has a best outside alternative to performing surgeries at the hospital in question, which yields her some level of utility π_0; we can safely assume π_0 is less than π(n_S) (since otherwise our surgeon should quit!). We assume that this outside alternative is common knowledge to both the surgeon and the hospital; it may, for instance, be the option of working at some other hospital or possibly a private medical clinic.
If π_0 ≤ π(n_H), then the surgeon is willing to perform n_H surgeries if granted use of the operating room, because her/his next best alternative leaves her/him worse off, and so the hospital achieves its desired number of surgeries. In the other case, i.e., π_0 ≥ π(n_H), the situation is more complicated. If the hospital has the power to force the surgeon to perform exactly n_H surgeries, then the hospital is best off, whereas the surgeon is worse off than under his/her alternative. This is only a sustainable option if the hospital has strong coercive power.

4.5.2 Take-It-or-Leave-It Offer

We now suppose that the hospital cannot coerce the surgeon into performing n_H surgeries. This is observed in practice when the skills of a surgeon are in great demand. As mentioned above, “surgeons are the most mobile and least easily replaced health care professionals” [13], and we assume that the hospital has a shortage of surgeons and cannot easily replace one surgeon with another. Furthermore, suppose that π(n_H) < π_0 ≤ π(n_S), so that the surgeon’s most profitable activity is to perform surgeries at the given hospital, just not as few as n_H of them. We ask the following basic question: can the hospital offer some incentive to induce the surgeon to perform fewer surgeries than n_S in a way that is cost-effective for the hospital? We answer this question in two settings. The first, described in this subsection, is where the hospital retains complete information about surgery durations but no longer has coercive power. The second, described in the next subsection, is where the hospital no longer has complete information about surgery durations and must induce the surgeon to make an appropriate choice simply by adjusting her/his compensation.
In both cases we assume that the hospital can monitor the choice of $n$ by the surgeon, knows the expected surgery duration $\mu$, and knows the surgeon's cost coefficients $u_S$ and $o_S$. The first setting can be modeled as a simple bilateral externality, a standard model in the microeconomics literature (see Chapter 11.B of [17]). The overtime and idle time costs of the hospital are influenced by the decision of the surgeon, who need not consider these costs when making her/his optimal choice. As we showed in the previous section, if the surgeon is unconstrained in her/his choice, she/he opts for $n_S$ surgeries, which is usually not optimal for the hospital. The hospital experiences a loss of $C(n_S) - C(n_H)$ from its optimum under this scenario. Assuming, as we do here, that the surgeon has the right to schedule surgeries as she/he sees fit (i.e., the hospital has no coercive power over the surgeon's decision), the hospital will need to offer some compensation $B > 0$ to induce the surgeon to adjust her/his number of surgeries. The surgeon will agree to perform $n$ surgeries if and only if $\pi(n) + B \ge \pi(n_S)$. Since we may assume that the hospital will offer the smallest bonus possible to achieve the reduction to $n$ surgeries, we have $B = \pi(n_S) - \pi(n) \ge 0$. Thus, the cost to the hospital under this bonus scheme is precisely:

$$\min_{n \ge 0,\, B}\ \{C(n) + B : \pi(n) + B \ge \pi(n_S)\} \;=\; \min_{n \ge 0}\ \{C(n) - (\pi(n) - \pi(n_S))\} \qquad (4.6)$$

Let $n_G \in \arg\min_{n \ge 0}\ \{C(n) - \pi(n)\}$ denote an optimal solution to the above problem. Note that this can be seen as a kind of socially optimal choice, since it maximizes the overall welfare $\pi(n) - C(n)$ of the two parties (of course, it does not consider the direct impact on patients, which is discussed below).
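As an illustration, Eq(4.6) can be evaluated numerically once $C(n)$ and $\pi(n)$ are estimated. The following minimal sketch assumes i.i.d. exponential surgery durations and illustrative cost parameters (all numerical values here are our own assumptions, not data from the thesis); it estimates both objectives by Monte Carlo and recovers $n_S$, $n_G$ and the bonus $B = \pi(n_S) - \pi(n_G)$:

```python
import random

random.seed(0)

# Illustrative parameters (assumed for this sketch, not taken from the thesis data):
r, mu, d = 6.0, 2.0, 8.0   # revenue per surgery, mean duration, regular day length
oH, uH = 3.0, 2.0          # hospital overtime / idle-time cost rates
oS, uS = 4.0, 0.5          # surgeon overtime / idle-time cost rates

def total_duration_samples(n, trials=20000):
    """Monte Carlo samples of T(n), the total duration of n i.i.d. Exp(1/mu) surgeries."""
    return [sum(random.expovariate(1.0 / mu) for _ in range(n)) for _ in range(trials)]

def C_hat(samples):
    """Estimate of the hospital's expected overtime plus idle-time cost."""
    return sum(oH * max(0.0, T - d) + uH * max(0.0, d - T) for T in samples) / len(samples)

def pi_hat(n, samples):
    """Estimate of the surgeon's expected profit for n surgeries."""
    return r * n - sum(oS * max(0.0, T - d) + uS * max(0.0, d - T) for T in samples) / len(samples)

costs, profits = {}, {}
for n in range(0, 13):
    s = total_duration_samples(n)
    costs[n], profits[n] = C_hat(s), pi_hat(n, s)

n_S = max(profits, key=profits.get)                    # surgeon's preferred number
n_G = min(costs, key=lambda n: costs[n] - profits[n])  # minimizer in Eq (4.6)
B = profits[n_S] - profits[n_G]                        # smallest bonus that works
print(n_S, n_G, round(B, 2))
```

Because $n_S$ maximizes the estimated $\pi(n)$ over the same grid of candidates, the computed bonus is nonnegative by construction.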
Since the bonus $B = \pi(n_S) - \pi(n_G)$ compensates the surgeon sufficiently, the surgeon will always choose to perform $n_G$ surgeries and take bonus $B$. To clarify, the end result of this analysis is that the hospital computes $n_G$, based on information at its disposal, by solving Eq(4.6). The offer to the surgeon is then simple: if the surgeon schedules $n_G$ surgeries, s/he receives a compensation of $B$ dollars from the hospital. Otherwise, the surgeon is free to choose the number of surgeries s/he prefers but will not receive the bonus (though, again, the bonus is defined so that this latter case never occurs). One may see this as a "top-down" approach to surgery scheduling: all the computational work (computing $n_G$ and $B$) is undertaken by the hospital, and the surgeon's choice is straightforward.

4.5.3 Three-Part Contract

The contract in the previous subsection was predicated on the assumption that the hospital has complete information about the duration of surgeries. In addition, the hospital needs to undertake the computation of the preferred number of surgeries and bonus, with the possibility (due to inaccurate calculation) that the take-it-or-leave-it offer may still be rejected. As mentioned previously, it is probable that surgeons have some private information about the duration of surgeries due to their personal knowledge of the patients and their histories. Thus, it may be preferable to consider a "decentralized" contract which attempts to align the incentives of the surgeon with those of the hospital. In other words, the hospital may design a compensation scheme under which the surgeon's own decisions weigh the importance of the hospital's cost structure. This can be achieved by the following three-part contract, which is specified up to a policy parameter we denote as $\alpha > 0$. The three parts of the contract are as follows:

1. a fixed sum $B_\alpha$ which passes from the hospital to the surgeon;

2. a surgery unit cost $\gamma_\alpha$ which is charged by the hospital for each surgery booked by the surgeon;

3. a per-unit-time overtime penalty $\omega_\alpha$ which is charged by the hospital for each unit of overtime incurred by the surgeon.

Ranges of parameter values which achieve an alignment of incentives are discussed below. Note that to administer this contract the hospital needs to be able to monitor the actions of the surgeon. Indeed, to charge the surgery unit cost, the number of surgeries needs to be observed, and overtime fees can only be calculated if the hospital carefully monitors overtime. The latter monitoring should, of course, already be a practice of the hospital, since it needs to compensate nurses for overtime and thus has an interest in tracking this quantity. One interpretation of the contract is as follows. The surgeon is given some budget $B_\alpha$, can use this budget to rent time for surgeries at rate $\gamma_\alpha$, and is penalized for overuse of resources at a rate of $\omega_\alpha$ per unit time. Thus, the cost implications of a surgery for the hospital are in some sense passed to the surgeon, who is in turn compensated at the budget level $B_\alpha$ to defray these costs and still remain interested in performing surgeries. The payoff of the surgeon under this contract is

$$\pi_\alpha(n) = B_\alpha - \gamma_\alpha n - \omega_\alpha\, E_{T(n)}[\max\{0, T(n) - d\}] + \pi(n).$$

We now proceed to specify the three elements of the contract, which are related to the choice of the parameter $\alpha$, through which the hospital can ensure that the surgeon performs $n_H$ surgeries. Of course, the surgeon will need to be compensated in order to participate. The participation constraint of the surgeon is $\pi_\alpha(n) \ge \pi_0$, where $n$ is the optimal number of surgeries for the surgeon under a contract with parameter $\alpha$.
The values of the three contract parameters which align incentives are as follows:

$$\gamma_\alpha = r + (u_S - \alpha u_H)\mu, \qquad \omega_\alpha = \alpha(o_H + u_H) - (o_S + u_S),$$

and any fixed sum $B_\alpha$ in the range $B_\alpha \ge \pi_0 + (u_S - \alpha u_H)d + \alpha C(n_H)$. These values are chosen so that the surgeon's profit has the form

$$\pi_\alpha(n) = B_\alpha - (u_S - \alpha u_H)d - \alpha C(n),$$

which is simply a linear transformation of the hospital's objective $C(n)$. It is then straightforward to see that a surgeon facing profit function $\pi_\alpha(n)$ will choose $n = n_H$. This is precisely the number of surgeries which minimizes the hospital's costs, and the hospital has achieved its goal. The surgeon will participate in this three-part contract for any value $\alpha > 0$, since the surgeon's profit will be

$$\pi_\alpha(n_H) = B_\alpha - (u_S - \alpha u_H)d - \alpha C(n_H) \;\ge\; \pi_0 + (u_S - \alpha u_H)d + \alpha C(n_H) - (u_S - \alpha u_H)d - \alpha C(n_H) \;=\; \pi_0.$$

The result follows from the lower bound we placed on the fixed sum: $B_\alpha \ge \pi_0 + (u_S - \alpha u_H)d + \alpha C(n_H)$. A few comments are in order about this contract. First, note that the variable fees $\gamma_\alpha$ and $\omega_\alpha$ can be determined without knowing the full surgery duration distributions. Thus, under this contract the hospital can ensure that $n_H$ surgeries are performed without full information about those surgeries, and, in contrast to the take-it-or-leave-it offer described in the previous subsection, the computational burden now rests with the surgeon rather than the hospital. We can thus see this as a "bottom-up" approach to surgery scheduling: the hospital passes the necessary information to the surgeon via the three components of the contract, and the surgeon is left to decide.
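The algebra behind this alignment can be checked numerically. The sketch below (with assumed exponential durations and illustrative cost values of our own choosing; $\alpha$ is picked so that $\omega_\alpha \ge 0$) verifies that the contract payoff $B_\alpha - \gamma_\alpha n - \omega_\alpha\theta(n) + \pi(n)$ collapses to $B_\alpha - (u_S - \alpha u_H)d - \alpha C(n)$, so the surgeon's maximizer coincides with the hospital's minimizer:

```python
import random

random.seed(1)

# Illustrative parameters (our assumptions, not thesis data):
r, mu, d = 6.0, 2.0, 8.0
oH, uH = 3.0, 2.0
oS, uS = 4.0, 0.5
alpha = 1.2                                # alpha*(oH+uH) >= oS+uS keeps omega >= 0

gamma = r + (uS - alpha * uH) * mu         # per-surgery fee gamma_alpha
omega = alpha * (oH + uH) - (oS + uS)      # per-unit-time overtime penalty omega_alpha
B = 100.0                                  # any fixed sum above the participation bound

def theta(n, trials=40000):
    """Monte Carlo estimate of theta(n) = E[max{0, T(n) - d}] for exponential durations."""
    return sum(max(0.0, sum(random.expovariate(1.0 / mu) for _ in range(n)) - d)
               for _ in range(trials)) / trials

ns = range(0, 10)
th = {n: theta(n) for n in ns}
C = {n: (oH + uH) * th[n] + uH * d - uH * mu * n for n in ns}           # hospital cost
pi = {n: (r + uS * mu) * n - (oS + uS) * th[n] - uS * d for n in ns}    # surgeon profit
pi_a = {n: B - gamma * n - omega * th[n] + pi[n] for n in ns}           # contract payoff

# The contract payoff should equal B - (uS - alpha*uH)*d - alpha*C(n) identically:
gap = max(abs(pi_a[n] - (B - (uS - alpha * uH) * d - alpha * C[n])) for n in ns)
print(gap)
```

Since $\pi_\alpha(n)$ is an affine, decreasing transformation of $C(n)$, maximizing it over any grid of candidate values selects exactly the hospital's cost minimizer.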
One further consideration is whether the hospital benefits from offering this contract. Note that the total cost to the hospital is now

$$C_\alpha(n_H) = B_\alpha + C(n_H) - \gamma_\alpha n_H - \omega_\alpha\, E_{T(n_H)}[\max\{0, T(n_H) - d\}].$$

Plugging in the values of $\gamma_\alpha$ and $\omega_\alpha$, we can rewrite $C_\alpha(n_H)$ as

$$C_\alpha(n_H) = B_\alpha + (1 - \alpha)C(n_H) - \pi(n_H) + (\alpha u_H - u_S)d.$$

The hospital benefits from the new scheme if $C_\alpha(n_H) \le C(n_S)$, where we assume $n_S$ surgeries are performed if no intervention is made. This implies an upper bound on the fixed sum:

$$B_\alpha \le C(n_S) - (1 - \alpha)C(n_H) + \pi(n_H) - (\alpha u_H - u_S)d.$$

The possible values of $B_\alpha$, i.e.,

$$\pi_0 + (u_S - \alpha u_H)d + \alpha C(n_H) \;\le\; B_\alpha \;\le\; C(n_S) - (1 - \alpha)C(n_H) + \pi(n_H) - (\alpha u_H - u_S)d,$$

give a range of fixed-fee compensations that yield feasible contracts. We note that the fixed sum could be determined through bargaining between the two parties, with its value falling somewhere between these bounds.

4.5.4 Implementing the Two Contracts

We now briefly discuss some of the challenges that might be faced when implementing either of these two contracts. First of all, there is an important challenge in specifying the parameters of the model. Specifically, the cost parameters of both the surgeon and the hospital may not be explicitly known, and particularly not to both parties (as is assumed under both contracts). Determining $o_H$ and $u_H$ may present challenges because these deal with the overall costs of the hospital; indeed, there may be a lack of consensus on the values of these parameters amongst the various decision-makers in the hospital. Overtime costs, nonetheless, seem more accessible than idle time costs, as they may be approximated by the direct cost of staffing overtime wages.
Idle time costs are more indirect, taking into account the lost value of idle resources. To implement either contract, we foresee that important discussions would need to be held to establish consensus on the values of these parameters. A second issue, mostly unrelated to specifying the parameters of the model, is the political feasibility of adopting these contracts. One attractive feature of the take-it-or-leave-it offer is that the contract is relatively simple to understand and has the appearance of a "win-win" situation: the surgeon receives a bonus for performing $n_G$ surgeries and there are no fees involved. On the other hand, since most of the computation in this setting rests with the hospital, surgeons may feel disempowered in having the hospital, in some sense, decide the preferred surgery level. The potential disutility that may arise from a sense of disempowerment is not covered by our model, but it may be a consideration in practice. In the three-part contract, by contrast, surgeons retain their decision-making role. The downside here is that the surgeon now faces fees and penalties for overtime, and so it seems a less clear "win-win" situation than the alternate contract. The three-part contract proposes to treat surgeons much like independent entrepreneurs who must rent, and pay for overuse of, facilities and resources, which seems a less advantageous setup than the current situation in which surgeons retain "privileges" in the operating room. Thus, there may be some disutility deriving from this perceived loss of privilege that is not considered in our model but, again, may be significant in practice.

4.5.5 Welfare Considerations

One element missing from the above analysis is a concern for the welfare of patients. When attention is isolated on the incentives of hospitals and surgeons, it is possible that patients are adversely affected.
In the Canadian healthcare system another agent, the provincial government, is responsible for the health care system as a whole and is interested in balancing the interests of patients, hospitals and physicians. We assume that the provincial government is a social welfare maximizer with the utility function

$$W(n) = \pi(n) - C(n) + \delta n,$$

where $\delta$ is a measure of the per-unit "social value" of a performed surgery. This is consistent with the earlier assumptions of our model that the surgeries are elective surgeries of a similar type, hence having identical mean $\mu$ and remuneration $r$. It is true that some surgeries may have more social value than others; for instance, saving the life of a child may carry more social value than operating on someone with many other health complications whose quality of life after surgery would only be marginally improved. This is, of course, an ethical discussion and a value judgement. We avoid such discussions and take the view that the provincial government is not privy to the details of each individual case and thus takes $\delta$ as its valuation of each surgery. Letting $u_W = u_H + u_S$ and $o_W = o_H + o_S$, we can rewrite $W(n)$ as

$$W(n) = (r + \delta + u_W \mu)n - (o_W + u_W)\, E_{T(n)}[\max\{0, T(n) - d\}] - u_W d.$$

As above, $W$ is discretely concave, and an optimal number of surgeries from a social welfare perspective, $n_W$, satisfies

$$\Delta\theta(n_W - 1) \;\le\; \frac{r + \delta + u_W \mu}{o_W + u_W} \;\le\; \Delta\theta(n_W). \qquad (4.7)$$

Note that $n_W$ can be bigger than, smaller than, or equal to $n_H$ and $n_S$, depending on the cost parameters. For instance, if the central planner places a high value on surgeries (due, say, to political pressures), $\delta$ may be large enough that $n_W$ is in fact greater than $n_S$. It is straightforward to find bounds on $\delta$ that guarantee this to be the case.
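Condition (4.7), together with the analogous discrete first-order conditions for $n_H$ and $n_S$ (whose thresholds appear in the proof of Theorem 4.4.5), can be applied numerically by estimating the increments $\Delta\theta(n)$. A minimal sketch, assuming exponential durations and illustrative parameter values of our own choosing:

```python
import random

random.seed(2)

# Illustrative parameters (assumed; not from the thesis data):
r, mu, d, delta = 6.0, 2.0, 8.0, 3.0
oH, uH = 3.0, 2.0
oS, uS = 4.0, 0.5
uW, oW = uH + uS, oH + oS

def theta(n, trials=20000):
    """Monte Carlo estimate of theta(n) = E[max{0, T(n) - d}] for exponential durations."""
    return sum(max(0.0, sum(random.expovariate(1.0 / mu) for _ in range(n)) - d)
               for _ in range(trials)) / trials

th = [theta(n) for n in range(0, 15)]
dtheta = [th[n + 1] - th[n] for n in range(14)]     # increments; nondecreasing in expectation

def first_crossing(threshold):
    """Smallest n with dtheta(n) >= threshold, i.e., the discrete first-order condition."""
    return next(n for n, dt in enumerate(dtheta) if dt >= threshold)

n_H = first_crossing(uH * mu / (oH + uH))                   # hospital's optimum
n_S = first_crossing((r + uS * mu) / (oS + uS))             # surgeon's optimum
n_W = first_crossing((r + delta + uW * mu) / (oW + uW))     # social optimum, Eq (4.7)
print(n_H, n_W, n_S)
```

With these particular values the thresholds are ordered $u_H\mu/(o_H+u_H) < (r+\delta+u_W\mu)/(o_W+u_W) < (r+u_S\mu)/(o_S+u_S)$, so the computed optima satisfy $n_H \le n_W \le n_S$, the intuitive case discussed below.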
The case with the most intuitive appeal is $n_H \le n_W \le n_S$, which indicates that surgeons prefer more surgeries than the social optimum while hospitals prefer fewer. To align the surgeon's incentives and induce her/his cooperation, the central planner could again offer a three-part contract similar to the one above. The major difference is that the contract now needs to ensure the participation of both the surgeon and the hospital. Also, different contracts arise depending on whether revenues from overuse of the facilities or per-unit surgery charges accrue to the hospital or to the central planner directly. Precise details are omitted, but the analysis follows very similar reasoning to that in Section 4.5. A social planner, the provincial government in our case, can use Eq(4.7) with its estimate of $\delta$ to judge if and how it would like to intervene, via designing a contract, to manage the relationship between hospitals and surgeons.

4.6 Dependent Surgeries with Identical Realizations

In this section we return to one of the important assumptions of our model, that of independent surgery durations. Here we assume instead that surgery durations are fully dependent and given by the outcome of a single random variable. Although this setting is also restrictive, it can be seen as the opposite extreme of the independent case. Of interest is the fact that we obtain similar results, and thus under both models the conclusions and insights are similar. This suggests that the findings of our analysis could apply to intermediate cases of surgery duration dependence. Another reason for considering this case is that within this framework we are able to say something about risk aversion: using the previous model we were unable to establish results for a risk-averse surgeon, and this is remedied here.
In this section, we assume that each surgery has the same random duration $t$, a random variable with probability density function (pdf) $f$ and cumulative distribution function (cdf) $F$. We assume that the support of the pdf is contained in the positive real line $\mathbb{R}_+$. Thus, the total duration of $n$ surgeries performed in one day by the surgeon is the random value $nt$. Furthermore, we assume that $n$ is a continuous decision variable and can take any value in $\mathbb{R}_+$. This is an abstraction from reality, where only an integer number of surgeries ought to be considered. However, it is not a restrictive assumption in this case, since we have a single-dimensional decision variable $n$ and one can always take $\lfloor n \rfloor$ (round down) or $\lceil n \rceil$ (round up) after the analysis and choose the better of the two.

4.6.1 Preliminaries and Misalignment of Incentives

The cost function of the hospital and the profit function of the surgeon are defined as before; the only difference is that the total duration is now $nt$, and $E_t[\cdot]$ denotes the expectation operator over the random variable $t$. As before, $\mu$ is the mean of $t$. We assume that the hospital and surgeon have the following objectives: the cost of the hospital

$$C(n) = E_t[o_H \max\{0, nt - d\} + u_H \max\{0, d - nt\}] \qquad (4.8)$$

and the profit of the surgeon

$$\pi(n) = rn - E_t[o_S \max\{0, nt - d\} + u_S \max\{0, d - nt\}]. \qquad (4.9)$$

In this section we assume that the hospital knows $f$ and all the cost coefficients, whereas the surgeon knows $f$ and only his/her own cost coefficients. As before, we characterize $n_H$ and $n_S$. To state the results we first define the function

$$\varphi(x) = \int_0^x t f(t)\, dt = xF(x) - G(x), \qquad (4.10)$$

where $G(x) = \int_0^x F(t)\, dt$.

Proposition 4.6.1. (Convexity of $C$, characterization of $n_H$)
1. The hospital's cost function $C$ is strictly convex when $o_H + u_H > 0$.
2. The optimal solution $n_H$ to the optimization problem $\min\{C(n) : n \ge 0\}$ is the unique solution in $n$ of the equation

$$\varphi\!\left(\frac{d}{n}\right) = \frac{o_H \mu}{o_H + u_H}. \qquad (4.11)$$

Proposition 4.6.2. (Concavity of $\pi$, characterization of $n_S$)
1. The surgeon's profit function $\pi$ is strictly concave when $o_S + u_S > 0$.
2. The optimal solution $n_S$ to the optimization problem $\max\{\pi(n) : n \ge 0\}$ is the unique solution in $n$ of the equation

$$\varphi\!\left(\frac{d}{n}\right) = \frac{o_S \mu - r}{o_S + u_S}. \qquad (4.12)$$

We next obtain a result similar to Theorem 4.4.5.

Theorem 4.6.3. The optimal number of surgeries from the hospital's perspective, $n_H$, is less than or equal to the surgeon's preferred number of surgeries $n_S$, i.e., $n_H \le n_S$, if and only if

$$\frac{o_H}{o_H + u_H} \ge \frac{o_S - r/\mu}{o_S + u_S}. \qquad (4.13)$$

A remark is in order here. The conditions of Theorems 4.4.5 and 4.6.3 are the same. To see this, reorganize the terms in Eq(4.13) and rewrite them as given in Eq(4.5):

$$\frac{o_H}{o_H + u_H} \ge \frac{o_S - r/\mu}{o_S + u_S} \qquad \text{(by equation 4.13)}$$

$$\frac{o_H \mu}{o_H + u_H} \ge \frac{o_S \mu - r}{o_S + u_S} \qquad \text{(multiply both sides by } \mu\text{)}$$

$$\mu - \frac{o_H \mu}{o_H + u_H} \le \mu - \frac{o_S \mu - r}{o_S + u_S} \qquad \text{(subtract both sides from } \mu \text{ and switch the terms)}$$

$$\frac{u_H \mu}{o_H + u_H} \le \frac{r + u_S \mu}{o_S + u_S} \qquad \text{(condition given in Theorem 4.4.5)}$$

We may also rewrite inequality Eq(4.13) as

$$\frac{1}{1 + u_H/o_H} \;\ge\; \frac{1 - r/(\mu o_S)}{1 + u_S/o_S}.$$

Note that $\mu o_S$ is the surgeon's average cost of doing a surgery after time $d$, and $r$ is the revenue from a surgery. The ratio $r/(\mu o_S)$ is clearly strictly positive. Furthermore, this ratio should be less than one, because otherwise, i.e., if $r/(\mu o_S) > 1$, the surgeon would never stop doing surgeries.
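For a concrete distribution, Eq(4.11) and Eq(4.12) can be solved directly. For exponential durations, $\varphi$ has the closed form $\varphi(x) = \mu(1 - e^{-\lambda x}) - x e^{-\lambda x}$ with $\lambda = 1/\mu$, and since $\varphi$ increases from $0$ to $\mu$, a bisection on $x = d/n$ recovers the optima. A sketch with illustrative parameter values of our own choosing (we need $o_S\mu > r$ so that Eq(4.12) has a positive right-hand side):

```python
import math

# Illustrative parameters (assumed; not from the thesis data):
r, mu, d = 6.0, 2.0, 8.0
oH, uH = 3.0, 2.0
oS, uS = 4.0, 0.5      # oS*mu > r, so Eq (4.12) has a positive right-hand side
lam = 1.0 / mu         # exponential rate, E[t] = mu

def phi(x):
    """phi(x) = integral_0^x t f(t) dt for t ~ Exp(lam), in closed form."""
    return mu * (1.0 - math.exp(-lam * x)) - x * math.exp(-lam * x)

def solve_n(target):
    """Solve phi(d/n) = target for n by bisecting on x = d/n (phi is increasing)."""
    lo, hi = 1e-9, 200.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(mid) < target:
            lo = mid
        else:
            hi = mid
    return d / (0.5 * (lo + hi))

n_H = solve_n(oH * mu / (oH + uH))          # Eq (4.11)
n_S = solve_n((oS * mu - r) / (oS + uS))    # Eq (4.12)
print(round(n_H, 3), round(n_S, 3))
```

With these values condition (4.13) holds ($o_H/(o_H+u_H) = 0.6 \ge 0.22 \approx (o_S - r/\mu)/(o_S+u_S)$), and the computed $n_H$ indeed falls below $n_S$.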
On the other hand, we argue that the ratios $u_H/o_H$ and $u_S/o_S$ may be comparable even though the individual cost coefficients ($u_H$ vs. $u_S$ and $o_H$ vs. $o_S$) may not be. If they are comparable, then as $r/(\mu o_S)$ approaches 1 it becomes more attractive for the surgeon to work overtime, suggesting $n_H \le n_S$.

4.6.2 Risk-Averse Surgeon

We now revisit our assumption that the surgeon is risk neutral and extend some of our findings to the risk-averse setting. When the surgeon is risk neutral, his/her expected profit from undertaking $n$ surgeries is

$$\pi(n) = rn - E_t[o_S \max\{0, nt - d\} + u_S \max\{0, d - nt\}].$$

When we consider risk aversion, the expected profit function will include a strictly concave increasing utility function $v$, a one-variable real-valued function mapping a dollar amount to utility. The expected utility of undertaking $n$ surgeries is assumed to be of the form $E_t[v(\pi(n, t))]$, where

$$\pi(n, t) = \begin{cases} rn - u_S(d - nt) & \text{if } nt \le d \\ rn - o_S(nt - d) & \text{if } nt > d \end{cases}$$

is the dollar amount of profit for $n$ surgeries with duration realization $t$. A remark on how the utility function is constructed is in order. The utility of undertaking $n$ surgeries when the duration is realized as $t$ is $v(\pi(n, t))$; in other words, utility is a function of the dollar profit $\pi(n, t)$. Note, however, that in defining $\pi(n, t)$ we already converted the opportunity cost of time into dollars when we defined the coefficients $u_S$ and $o_S$. The fact that the surgeon now has a general increasing concave utility function $v$ does not change this determination of the coefficients. That is, we maintain, even under risk aversion, that each time unit is worth $u_S$ dollars before time $d$ and $o_S$ dollars after. In other words, the dollar value of time is a piecewise linear function under both risk neutrality and risk aversion.
The fundamental question we consider in this setting is how a risk-averse surgeon would choose his/her optimal number of surgeries, and whether this value would, for instance, be less than that of a risk-neutral surgeon with the same profit function $\pi(n, t)$. To make things precise, we let $n_R$ denote the optimal number of surgeries a risk-averse surgeon would plan for a time period $d$. That is,

$$n_R \in \arg\max_n\ E_t[v(\pi(n, t))] \qquad (4.14)$$

and our question of interest is whether $n_R \le n_S$, where $n_S$ is as defined in Proposition 4.6.2. Finding $n_R$ directly might be quite challenging depending on the structure of the utility function $v$, so this bound in itself, if true, can be quite illuminating. Another reason for our interest in the question "is $n_R \le n_S$?" is its possible implication for the compensation $B$. If $n_R$ is in fact smaller than $n_S$, this may indicate that a risk-averse surgeon needs a smaller compensation $B$ than a risk-neutral surgeon with identical costs in order to align his/her incentives with the hospital's. The intuition is simple: the fewer surgeries a surgeon would schedule of his/her own volition, the less it takes to compensate him/her to perform $n_H$ surgeries. Despite this intuition, we were unable to prove the result analytically and plan to take it up in future research. We feel, nonetheless, that the question of whether $n_R \le n_S$ has independent interest, and thus pursue it further here. We will establish some sufficient conditions under which $n_R \le n_S$. First we give an intuitive discussion of why this might indeed be the case; this line of reasoning, however, yields only motivation, not a proof. Next we establish the result when the duration distribution takes on only two values, $t_L$ and $t_H$, where $t_L$ is the duration of a "routine" surgery and $t_H$ that of a surgery with "complications".
This simplification of the surgery duration distribution yields attractive conditions under which $n_R \le n_S$ holds. Finally, we present a sufficient condition on the cost coefficients for $n_R \le n_S$ in the general case, i.e., for a general surgery duration distribution.

4.6.3 Intuitive Discussion

The basis of our intuition for $n_R \le n_S$ comes from the following oversimplified notion of risk aversion: given the same expected monetary return from two lotteries, a risk-averse decision maker favors the lottery with smaller variance. In particular, if one lottery has a smaller expected return and a greater variance than another, then intuitively the latter is preferred by both risk-averse and risk-neutral decision makers. Of course, in general, the validity of this statement depends on more than just the first two moments of the lottery distribution (often on first- or second-order stochastic dominance) and possibly on the degree of concavity of the utility function [17], but the intuition is nonetheless clear. As a brief aside, we point out the well-known result that if the utility function exhibits constant absolute risk aversion (CARA), i.e., is a (negative) exponential utility function, and the payoff of a lottery is normally distributed, then maximizing expected utility is equivalent to maximizing a term involving the mean and the variance (and some other known parameters); that is, the choice is determined entirely by the first two moments of the payoff distribution (so-called mean-variance utility). This result, however, does not apply to our setting: even if we assume the utility function $v$ has CARA, the payoffs are clearly not normal, even under normality or lognormality of the surgery duration $t$. Turning to the problem at hand, the first fact to note is that, by definition, $n_S$ maximizes the expected profit function $\pi(n)$.
In particular, this means that

$$\pi(n_R) \le \pi(n_S) \qquad (4.15)$$

for any choice of $n_R$. The next result establishes that if $n_R > n_S$ then the variance of payoffs under $n_R$ surgeries is greater than under $n_S$ surgeries.

Lemma 4.6.4. The variance of $\pi(n, t)$, $\mathrm{Var}\,\pi(n, t)$, is proportional to $n^2$.

From this, our intuition strongly suggests that $n_R \le n_S$: by performing more than $n_S$ surgeries, one's expected profit would be lower and the variance of payoffs higher (i.e., there is more risk), which is less attractive to a risk-averse surgeon.

4.6.4 Discrete Time Distribution with Two Values

In order to verify analytically the intuitive discussion of the previous subsection, we begin by assuming a simple setting where the duration distribution has two outcomes, $t_L$ and $t_H$, with probabilities $p_L$ and $p_H$ respectively. We will see how our intuition leads us to sufficient conditions for $n_R \le n_S$. By using only the two values $t_L$ and $t_H$ for a surgery duration we are making a rough approximation to the real distribution. We think of the low duration $t_L$ as that of a routine/easy surgery with no complications, whereas $t_H$ corresponds to the high duration associated with the occurrence of complications during surgery. We now present some simple conditions on the cost coefficients under which $n_R \le n_S$. The argument essentially follows the intuition presented in the previous subsection. We begin by assuming that $n_R > n_S$ and derive conditions under which this cannot occur. The first step is to show that there exists a number of surgeries $m \le n_S$ with the same expected profit as $n_R$.

Lemma 4.6.5. Let $n_R > n_S$. Then there exists an $m$ with $0 \le m \le n_S$ such that $\pi(m) = \pi(n_R)$.

Figure 4.7 (a plot of $\pi(n)$ marking $m$, $n_S$ and $n_R$) provides an illustration of this result.
The next step is to demonstrate that the expected utility of performing $m$ surgeries is greater than that of performing $n_R$ surgeries, thus contradicting the definition of $n_R$ (see Eq(4.14)). The following proposition shows that when certain conditions on the profits at $t_L$ and $t_H$ hold, we can indeed establish the contradiction. These conditions imply that the outcomes associated with $n_R$ surgeries are "more spread out" than those associated with $m$ surgeries.

Proposition 4.6.6. The expected utility from undertaking $m$ surgeries is greater than the expected utility of undertaking $n_R$ surgeries when the following condition holds:

$$\pi(n_R, t_H) \le \pi(m, t_L) \le \pi(m, t_H) \le \pi(n_R, t_L) \qquad (4.16)$$

The idea of the proof is illustrated in Figure 4.8, which plots the utility $v$ against profit, marking the outcomes $\pi(n_R, t_H)$, $\pi(m, t_L)$, $\pi(m) = \pi(n_R)$, $\pi(m, t_H)$ and $\pi(n_R, t_L)$. The choice of either $n_R$ or $m$ determines one of two lotteries for the surgeon. The first lottery, associated with the choice $n_R$, is as follows: earn profit $\pi(n_R, t_H)$ with probability $p_H$ and profit $\pi(n_R, t_L)$ with probability $p_L$; its expected profit is $\pi(n_R) = p_H \pi(n_R, t_H) + p_L \pi(n_R, t_L)$. Similarly, the second lottery has outcome $\pi(m, t_L)$ with probability $p_L$ and outcome $\pi(m, t_H)$ with probability $p_H$. Both lotteries have the same expected value, $\pi(n_R) = \pi(m)$. It can then be seen graphically that $E_t[v(\pi(n_R, t))] < E_t[v(\pi(m, t))]$. Details of the proof can be found in Section 4.8. A few comments on the previous proposition are in order. First, we give a brief interpretation of inequality Eq(4.16). When the surgery duration is short (i.e., $t_L$ is realized), it is intuitive that the surgeon would like to do more surgeries, motivating the condition $\pi(m, t_L) \le \pi(n_R, t_L)$.
When surgeries are long (i.e., $t_H$ is realized), the opposite holds: the surgeon would favor fewer surgeries, motivating the condition $\pi(n_R, t_H) \le \pi(m, t_H)$. The remaining conditions implied by inequality Eq(4.16) are less straightforward to motivate. Figure 4.9 illustrates values of $t_L$ and $t_H$ which satisfy these conditions; the two functions shown there are the profit functions $\pi(m, t)$ and $\pi(n_R, t)$, plotted against $t$ with $t_L < d/n_R < d/m < t_H$. Since $n_R > m$ by assumption, $\pi(n_R, t)$ peaks (at $t = d/n_R$) to the left of the peak of $\pi(m, t)$ (at $t = d/m$). From this figure we can also see why some conditions similar to inequality Eq(4.16) would not give our desired result. Indeed, a picture similar to Figure 4.8 could be drawn under the conditions

$$\pi(n_R, t_L) \le \pi(m, t_L) \le \pi(m, t_H) \le \pi(n_R, t_H) \qquad (4.17)$$

and it might appear that a similar result would hold. However, it can readily be seen from Figure 4.9 that condition Eq(4.17) cannot hold under the natural restriction that $t_L \le t_H$.

4.6.5 Sufficient Conditions on Cost Coefficients

In the previous subsection we made assumptions on the duration distribution in order to find conditions under which $n_R \le n_S$. We now relax these restrictions and allow a general continuous duration distribution, restricting only the relative sizes of the cost coefficients. We derive the following result.

Proposition 4.6.7. If $r n_S > d(o_S + u_S)$ then $n_R \le n_S$. Thus, in particular, if $r > d(o_S + u_S)$ then $n_R \le n_S$.

This result says that if the total revenue from performing $n_S$ surgeries is large enough (i.e., greater than $d(o_S + u_S)$), then $n_R \le n_S$.
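The two-point setting also lends itself to a direct numerical check. The sketch below (all parameter values and the CARA utility $v(x) = 1 - e^{-ax}$ are our own illustrative assumptions, not values from the thesis) computes the risk-neutral optimum $n_S$ and the risk-averse optimum $n_R$ over a fine grid of continuous $n$ and compares them:

```python
import math

# Two-point duration distribution and cost parameters (illustrative assumptions,
# not values from the thesis):
tL, tH, pL, pH = 1.0, 3.0, 0.7, 0.3    # routine vs. complicated durations
r, d = 5.0, 8.0                        # revenue per surgery, day length
oS, uS = 4.0, 0.5                      # surgeon overtime / idle-time cost rates
a = 0.2                                # CARA coefficient in v(x) = 1 - exp(-a*x)

def profit(n, t):
    """pi(n, t): dollar profit of n surgeries when each takes time t."""
    if n * t <= d:
        return r * n - uS * (d - n * t)
    return r * n - oS * (n * t - d)

def expected_profit(n):                # risk-neutral objective pi(n)
    return pL * profit(n, tL) + pH * profit(n, tH)

def expected_utility(n):               # risk-averse objective E[v(pi(n, t))]
    return pL * (1.0 - math.exp(-a * profit(n, tL))) \
         + pH * (1.0 - math.exp(-a * profit(n, tH)))

grid = [i / 100.0 for i in range(0, 2001)]   # n in [0, 20], treated as continuous
n_S = max(grid, key=expected_profit)         # risk-neutral optimum
n_R = max(grid, key=expected_utility)        # risk-averse optimum
print(n_R, n_S)
```

With these values the risk-neutral surgeon books up to the kink at $n = d/t_L$, while the risk-averse surgeon stops far earlier, consistent with $n_R \le n_S$.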
4.7 Conclusion and Future Directions

We have presented a model and analysis for the problem of determining the number of surgeries to schedule in an OR block of fixed length, taking into consideration the competing incentives of hospital and surgeon. By proposing contracts that induce the surgeon to schedule a number of surgeries more aligned with the goals of the hospital, the hope is that this alignment of incentives leads to a reduction in costs (especially overtime costs) and an overall improvement in the working environment. Savings garnered in this scheme could be used to open up other ORs or intensive-care beds and further improve OR throughput. Depending on how much power the hospital has over surgeons and how much information is available to it, we propose in Section 4.5 several implementable contracts that the hospital might consider. In that section we also discuss the problem from the perspective of a social planner, e.g., a provincial government. Our analysis is based on some important assumptions. We argue that many of them are quite reasonable, made for tractability to remove complexity from the analysis and to demonstrate as simply as possible the incentives involved. Two of the stronger assumptions in our basic setting are the risk neutrality of the surgeon and the fact that all surgeries are identically and independently distributed. In Section 4.6 we provide a framework that relaxes these two assumptions, but introduces another set of restrictive conditions. Although both models are restrictive, the fact that they are different in character yet yield similar insights testifies to the robustness of our approach. We now point out some other possible extensions of our model that may be promising directions for future research. Firstly, we have modeled the interaction between the hospital and surgeon as a single-period game.
However, the hospital and surgeon have a long-term working relationship, and it may add more insight to explore this setting in a repeated-game structure. One direction is that the bonus structure may be used as a "carrot" by the hospital to induce cooperation from the surgeon at each stage of a repeated game. Secondly, there is scope to examine more closely how the size of the bonus would change with the degree of risk aversion. Our results on the magnitude of nR with respect to nS could be a foundation for this study. Thirdly, one can study the asymmetric-information case in more detail, in which both the surgeon's and the hospital's cost coefficients are private and not known by the other party.

4.8 Proofs

Proof of Proposition 4.4.3. It suffices to show that θ(n) is discrete convex. Note that π(n) = (r + uS µ)n − (oS + uS)θ(n) − uS d is discrete concave when each of its first two terms is discrete concave. The linear term (r + uS µ)n is both discrete convex and discrete concave, and since oS + uS ≥ 0 by assumption the second term −(oS + uS)θ(n) is discrete concave precisely when θ(n) is discrete convex. Thus, it suffices to show that θ(n) is discrete convex; that is, Δ²θ(n) ≥ 0. To establish this we consider various cases for the durations T(n), T(n+1), etc., relative to the normal day duration d. There are four important ranges for d: 0 ≤ d < T(n); T(n) ≤ d < T(n+1); T(n+1) ≤ d < T(n+2); and d ≥ T(n+2). We will compute the expectation in the definition of θ(n) over these ranges.
The following table contains the necessary information:

Range for d          | ΔO(n)        | ΔO(n+1)      | Δ²O(n)
[0, T(n))            | t_{n+1}      | t_{n+2}      | t_{n+2} − t_{n+1}
[T(n), T(n+1))       | T(n+1) − d   | t_{n+2}      | t_{n+2} − (T(n+1) − d)
[T(n+1), T(n+2))     | 0            | T(n+2) − d   | T(n+2) − d
[T(n+2), ∞)          | 0            | 0            | 0

Thus, using conditional expectation, we write

Δ²θ(n) = E[Δ²O(n) | d < T(n)] Pr(d < T(n))
  + E[Δ²O(n) | T(n) ≤ d < T(n+1)] Pr(T(n) ≤ d < T(n+1))
  + E[Δ²O(n) | T(n+1) ≤ d < T(n+2)] Pr(T(n+1) ≤ d < T(n+2))
  + E[Δ²O(n) | d ≥ T(n+2)] Pr(d ≥ T(n+2))
  ≥ E[t_{n+2} − t_{n+1} | d < T(n)] Pr(d < T(n))   (a)
  + E[t_{n+2} − (T(n+1) − d) | T(n) ≤ d < T(n+1)] Pr(T(n) ≤ d < T(n+1))   (b)

where the last two terms are dropped because Δ²O(n) is non-negative on those ranges. The first term (a) simplifies (using linearity of expectation and independence) as

(a) = (E[t_{n+2} | d < T(n)] − E[t_{n+1} | d < T(n)]) Pr(d < T(n))
    = (E[t_{n+2}] − E[t_{n+1}]) Pr(d < T(n))
    = (µ − µ) Pr(d < T(n)) = 0.

The second term (b) can be written as

(b) = (E[t_{n+2}] − E[t_{n+1} − (d − T(n)) | T(n) ≤ d, t_{n+1} > d − T(n)]) Pr(T(n) ≤ d < T(n+1))
    = (µ − E[t_{n+1} − k | t_{n+1} ≥ k, k ≥ 0]) Pr(T(n) ≤ d < T(n+1)) ≥ 0,

where k = d − T(n) > 0. The last inequality holds since by NBUE we have E[t_i − k | t_i ≥ k] ≤ E[t_i]. Thus Δ²θ(n) ≥ 0. □

Proof of Corollary 4.4.4. The proof follows from Proposition 4.4.3 and is thus omitted. □

Proof of Theorem 4.4.5. Since θ(n) is discrete convex (shown above), we have Δθ(n) − Δθ(n′) ≥ 0 for n ≥ n′. Thus

Δθ(nS) − Δθ(nH − 1) ≥ (r + uS µ)/(oS + uS) − uH µ/(oH + uH) > 0.

Now, if nS ≤ nH − 1 then Δθ(nS) − Δθ(nH − 1) ≤ 0, a contradiction; thus we can conclude that nS ≥ nH.
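The discrete convexity of θ(n) established in this proof can be checked by simulation. A minimal sketch, under assumed values: we take θ(n) to be the expected overtime E[(T(n) − d)^+], consistent with the case analysis above, and use mean-1 exponential durations (the exponential distribution is NBUE, so the proposition applies); d and the grid size are our illustrative choices:

```python
import random

random.seed(1)
d = 4.0          # normal day length (illustrative)
n_max = 8
samples = 50000

# theta(n) = E[(T(n) - d)^+], T(n) = t_1 + ... + t_n, with mean-1 exponential
# durations; common random numbers keep the second differences low-noise.
theta = [0.0] * (n_max + 1)
for _ in range(samples):
    T = 0.0
    for n in range(1, n_max + 1):
        T += random.expovariate(1.0)
        theta[n] += max(0.0, T - d)
theta = [x / samples for x in theta]

second_diff = [theta[n + 2] - 2.0 * theta[n + 1] + theta[n] for n in range(n_max - 1)]
print(second_diff)  # discrete convexity: all entries (approximately) nonnegative
```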
□

Proof of Proposition 4.6.1. We first establish (i) by considering second-order conditions. The first and second derivatives of C (given in Eq(4.8)) with respect to n are, respectively,

C′(n) = oH ∫_{d/n}^∞ t f(t) dt − uH ∫_0^{d/n} t f(t) dt   (4.18)

and

C′′(n) = (d²/n³) f(d/n) (uH + oH),   (4.19)

using judicious appeals to Leibniz's rule for differentiating integrals and some basic housekeeping. Observe that C′′(n) > 0 for all n ≥ 0 (and hence C is strictly convex) provided the sum of the cost coefficients oH + uH is positive. As for (ii), since C is convex a sufficient condition for optimality is C′(n) = 0; thus the equation C′(n) = 0 characterizes nH. From Eq(4.18) this yields the equivalent relation

oH ∫_{d/n}^∞ t f(t) dt = uH ∫_0^{d/n} t f(t) dt.   (4.20)

Noting the fact

µ = ∫_0^∞ t f(t) dt = ∫_0^{d/n} t f(t) dt + ∫_{d/n}^∞ t f(t) dt,

we remove the improper integral on the left-hand side of Eq(4.20) by substitution and simplify Eq(4.20) to

oH/uH = ∫_0^{d/n} t f(t) dt / (µ − ∫_0^{d/n} t f(t) dt).   (4.21)

We simplify this expression further by expanding ∫_0^{d/n} t f(t) dt using integration by parts, yielding

∫_0^{d/n} t f(t) dt = (d/n) F(d/n) − ∫_0^{d/n} F(t) dt.

Using our notation G(y) = ∫_0^y F(t) dt and ϕ(y) = y F(y) − G(y), we obtain an equivalent expression for Eq(4.21):

oH/uH = ϕ(d/n) / (µ − ϕ(d/n)).

A simple rearrangement yields the characterization of nH:

ϕ(d/nH) = oH µ / (oH + uH),   (4.22)

thus establishing (ii). □

Proof of Proposition 4.6.2. The details are similar to those of the proof of Proposition 4.6.1, so we are brief here. We establish (i) by considering second-order conditions.
The first and second derivatives of π (given in Eq(4.9)) with respect to n are, respectively,

π′(n) = r − oS ∫_{d/n}^∞ t f(t) dt + uS ∫_0^{d/n} t f(t) dt   (4.23)

and

π′′(n) = −(d²/n³) f(d/n) (uS + oS).   (4.24)

Observe that π′′(n) < 0 for all n ≥ 0 (and hence π is strictly concave) provided the sum of the cost coefficients oS + uS is positive. As for (ii), since π is concave an optimal solution is achieved at π′(n) = 0; thus the equation π′(n) = 0 characterizes nS. Using manipulations similar to those above, this yields an equivalent characterization of nS:

ϕ(d/nS) = (oS µ − r) / (oS + uS),   (4.25)

thus establishing (ii). □

The following lemma is useful in the proofs of the next two theorems. It reveals a useful property of the function ϕ, which plays a common role in the characterizations of nH and nS.

Lemma 4.8.1. The function ϕ(x) = xF(x) − G(x) is an increasing function of x for x ≥ 0.

Proof. We show that ϕ′(x) ≥ 0 for x ≥ 0. Note that

ϕ′(x) = F(x) + x f(x) − G′(x) = F(x) + x f(x) − F(x) = x f(x) ≥ 0,

where the inequality holds since the pdf f is non-negative and x ≥ 0. □

Proof of Theorem 4.6.3. Note the characterizations of nH and nS given in Eq(4.11) and Eq(4.12), reproduced here for convenience:

ϕ(d/nH) = oH µ / (oH + uH)   and   ϕ(d/nS) = (oS µ − r) / (oS + uS).

Since ϕ is an increasing function, ϕ(d/n) is a decreasing function of n. Thus, to show nH ≤ nS it is equivalent to show ϕ(d/nH) ≥ ϕ(d/nS). By the above characterizations it is in turn equivalent to establish

oH µ / (oH + uH) ≥ (oS µ − r) / (oS + uS).

We obtain the desired result by dividing both sides by µ. □

Proof of Lemma 4.6.4.
Recall that π(n, t) = rn − oS max{0, nt − d} − uS max{0, d − nt}. Then Var π(n, t) = Var(oS max{0, nt − d} + uS max{0, d − nt}). Since we are interested only in how n is related to the variance of π(n, t), w.l.o.g. we may assume oS = uS = 1. Also note that max{0, nt − d} + max{0, d − nt} = |nt − d|. Therefore we just need to find Var |nt − d|. But since d is a constant we obtain Var |nt − d| = Var |nt| = n² Var |t|, and since t > 0 we obtain n² Var |t| = n² Var t. Hence Var π(n, t) is proportional to n². □

Proof of Lemma 4.6.5. The expected profit function π (see Eq(4.9)) is continuous. We know π(0) = −uS d and, by inequality Eq(4.15), π(nR) < π(nS); so by the Intermediate Value Theorem there exists an m with 0 ≤ m ≤ nS such that π(m) = π(nR). □

Proof of Proposition 4.6.6. Condition Eq(4.16) and the fact that v is increasing imply

v(π(nR, tH)) ≤ v(π(m, tL)) ≤ v(π(m, tH)) ≤ v(π(nR, tL)),

as illustrated in Figure 4.8. We know that the point (π(nR), Et[v(π(nR, t))]) lies on the line segment between the points (π(nR, tH), v(π(nR, tH))) and (π(nR, tL), v(π(nR, tL))), and is in fact the convex combination of these points given by

pH (π(nR, tH), v(π(nR, tH))) + pL (π(nR, tL), v(π(nR, tL))).

By the concavity of v, this line segment lies below the line segment joining the points (π(m, tL), v(π(m, tL))) and (π(m, tH), v(π(m, tH))) at every profit level at which both line segments are defined, and in particular at the expected profit level π(nR) = π(m). It then follows that Et[v(π(nR, t))] ≤ Et[v(π(m, t))], which can be seen graphically in Figure 4.8. □

Proof of Proposition 4.6.7.
Recall our assumption that v is a strictly increasing, twice differentiable and concave utility function, i.e., v′ > 0 and v′′ < 0. Define η(n) as the expected utility function, i.e., η(n) = Et[v(π(n, t))]. Since π(n, t) and v are (strictly) concave and concavity is preserved by the expectation operator, η is (strictly) concave. Thus nR ∈ arg max_{n≥0} η(n). We can compute nR by finding η′(n) with Leibniz's rule and solving for n in

η′(n) = ∫_0^{d/n} (r + uS t) v′(rn − uS d + uS nt) f(t) dt + ∫_{d/n}^∞ (r − oS t) v′(rn + oS d − oS nt) f(t) dt = 0.

We know that π′(nS) = 0 by Proposition 4.6.2. Our strategy is to show η′(nS) ≤ 0, which implies nR ≤ nS. We will start from the expression for π′(nS) and obtain η′(nS) ≤ 0. Recall, by Eq(4.23),

π′(nS) = ∫_0^{d/nS} (r + uS t) f(t) dt + ∫_{d/nS}^∞ (r − oS t) f(t) dt = 0.

Since rnS > d(oS + uS), i.e., r > d(oS + uS)/nS, and nS ≥ 1, we have r/oS > d/nS and can rewrite π′(nS) so that Eq(4.23) is equivalent to

π′(nS) = ∫_0^{d/nS} (r + uS t) f(t) dt + ∫_{d/nS}^{r/oS} (r − oS t) f(t) dt + ∫_{r/oS}^∞ (r − oS t) f(t) dt = 0.   (4.26)

Now we multiply each term in Eq(4.26) by v′(oS d), a nonnegative constant, and rewrite it as

∫_0^{d/nS} v′(oS d)(r + uS t) f(t) dt  [part 1]  + ∫_{d/nS}^{r/oS} v′(oS d)(r − oS t) f(t) dt  [part 2]  + ∫_{r/oS}^∞ v′(oS d)(r − oS t) f(t) dt  [part 3]  = 0.   (4.27)

Note that part 1 and part 2 are non-negative, and part 3 is non-positive. Next we obtain inequalities individually for each of these three parts. First we recall that v is strictly concave, so v′′ < 0, i.e., v′ is strictly decreasing. Also recall that we assume rnS > d(oS + uS) and nS ≥ 1. We start with part 1.
Since r > d(oS + uS)/nS and v′ is decreasing, we have 0 ≤ v′(rnS − uS d + uS nS t) ≤ v′(oS d) for 0 ≤ t ≤ d/nS. Therefore

∫_0^{d/nS} v′(oS d)(r + uS t) f(t) dt ≥ ∫_0^{d/nS} v′(rnS − uS d + uS nS t)(r + uS t) f(t) dt ≥ 0.   (4.28)

Next we obtain a similar result for part 2. Since r > d(oS + uS)/nS and v′ is decreasing, we have 0 ≤ v′(rnS + oS d − oS nS t) ≤ v′(oS d) for d/nS ≤ t ≤ r/oS. Hence

∫_{d/nS}^{r/oS} v′(oS d)(r − oS t) f(t) dt ≥ ∫_{d/nS}^{r/oS} v′(rnS + oS d − oS nS t)(r − oS t) f(t) dt ≥ 0.   (4.29)

Finally, we obtain an inequality for part 3. Since r > d(oS + uS)/nS and v′ is decreasing, we have 0 ≤ v′(oS d) ≤ v′(rnS + oS d − oS nS t) for t ≥ r/oS, and we obtain

0 ≥ ∫_{r/oS}^∞ v′(oS d)(r − oS t) f(t) dt ≥ ∫_{r/oS}^∞ v′(rnS + oS d − oS nS t)(r − oS t) f(t) dt.   (4.30)

Looking at Eq(4.28), Eq(4.29) and Eq(4.30), we see that the non-negative parts (part 1 and part 2) become less positive, and the non-positive part (part 3) becomes more negative, when we replace v′(oS d) with the corresponding v′(·) terms. Putting Eq(4.26), Eq(4.27), Eq(4.28), Eq(4.29) and Eq(4.30) together, we obtain

π′(nS) = 0
  = ∫_0^{d/nS} (r + uS t) f(t) dt + ∫_{d/nS}^{r/oS} (r − oS t) f(t) dt + ∫_{r/oS}^∞ (r − oS t) f(t) dt
  ≥ ∫_0^{d/nS} v′(rnS − uS d + uS nS t)(r + uS t) f(t) dt
  + ∫_{d/nS}^{r/oS} v′(rnS + oS d − oS nS t)(r − oS t) f(t) dt
  + ∫_{r/oS}^∞ v′(rnS + oS d − oS nS t)(r − oS t) f(t) dt
  = η′(nS).

Hence η′(nS) ≤ 0, implying that nR ≤ nS as claimed. □

4.9 Bibliography

[1] Jeroen Beliën and Erik Demeulemeester. Building cyclic master surgery schedules with leveled resulting bed occupancy.
European Journal of Operational Research, 176:1185–1204, 2007.

[2] John Blake and Joan Donald. Mount Sinai Hospital uses integer programming to allocate operating room time. Interfaces, 32(2):63–73, 2002.

[3] Selma Harrison Calmes and Kurt M. Shusterich. Operating room management: what goes wrong and how to fix it. Physician Executive, 18(6), 1992.

[4] Scott Carr and William S. Lovejoy. The inverse newsvendor problem: Choosing an optimal demand portfolio for capacitated resources. Management Science, 46(7):912–927, 2000.

[5] Mike Carter. Diagnosis: Mismanagement of resources. OR/MS Today, 29(2):26–32, 2002.

[6] Amitabh Chandra and Jonathan Skinner. Expenditure and productivity growth in health care. Dartmouth College, February 2008. Forthcoming as an NBER working paper.

[7] Avinash Dixit. Incentives and organizations in the public sector: An interpretative review. The Journal of Human Resources, 37(4):696–727, 2002.

[8] Canadian Institute for Health Information web site. http://www.cihi.ca/.

[9] Robert Gibbons. Incentives between firms (and within). Management Science, 51(1):2–17, 2005.

[10] Sholom Glouberman and Henry Mintzberg. Managing the care of health and the cure of disease: Part I: Differentiation, Part II: Integration. Health Care Management Review, 26(1):56–84, 2001.

[11] Erwin Hans and Tim Nieberg. Operating room manager game. INFORMS Transactions on Education, 8(1):25–36, 2007.

[12] Donald K.K. Lee and Stefanos A. Zenios. Evidence-based incentive systems with an application in health care delivery. Submitted to Management Science, 2007.

[13] William S. Lovejoy and Ying Li. Hospital operating room capacity expansion. Management Science, 48(11):1369–1387, 2002.

[14] Alan P. Marco. Game theory in the operating room environment. The American Surgeon, 67(1):92–96, 2001.

[15] Alan P. Marco. Game theoretic approaches to operating room management. The American Surgeon, 68(5):454–462, 2002.

[16] A.W.
Marshall and F. Proschan. Classes of distributions applicable in replacement, with renewal theory implications. In Proc. 6th Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 395–415, 1972.

[17] Andreu Mas-Colell, Michael D. Whinston, and Jerry Green. Microeconomic Theory. Oxford University Press, 1995.

[18] Marcelo Olivares, Christian Terwiesch, and Lydia Cassorla. Structural estimation of the newsvendor model: An application to reserving operating room time. Management Science, 54(1):41–55, 2008.

[19] Thomas W. Samuel, Stephen G. Raleigh, Judith M. Hower, and Richard W. Schwartz. The next stage in the health care economy: Aligning the interests of patients, providers, and third-party payers through consumer-driven health care plans. The American Journal of Surgery, 186:117–124, 2003.

[20] Pablo Santibáñez, Mehmet A. Begen, and Derek Atkins. Surgical block scheduling in a system of hospitals: an application to resource and wait list management in a British Columbia health authority. Health Care Management Science, 10(3):269–282, 2007.

[21] CBC web site. http://www.cbc.ca/health/story/2007/10/15/waittimes-fraser.html.

[22] Health Canada web site. http://www.hc-sc.gc.ca/hcs-sss/qual/acces/wait-attente/index-eng.php.

[23] Peter C. Smith, Adolf Stepan, Vivian Valdmanis, and Piet Verheyen. Principal-agent problems in health care systems: An international perspective. Health Policy, 41:37–60, 1997.

[24] D.P. Strum, J.H. May, and L.G. Vargas. Modeling the uncertainty of surgical procedure times: comparison of log-normal and normal models. Anesthesiology, 92(4):1160–1167, 2000.

5 Advance Multi-Period Quantity Commitment and Appointment Scheduling1

We introduce advance multi-period quantity (order or supply) commitment problems with stochastic characteristics (demand or yield) and several real-world applications.
There are underage and overage costs if there is a mismatch between committed and realized quantities. Decisions are needed now: the order or supply amounts for each of the next n periods. The objective is to maximize the total expected profit over the n periods. We establish a link between these advance multi-period quantity commitment problems and the appointment scheduling problem studied in Chapter 2, and show that these problems can be thought of and solved (efficiently) as special cases of the appointment scheduling problem.

5.1 Introduction

We introduce and study advance multi-period quantity (order or supply) commitment problems with random characteristics (demand or yield), explore their relationship with the appointment scheduling problem given in Chapter 2 and provide several real-world applications. All quantity decisions (how much to order or supply in each of the next n periods) are needed now, i.e., before any realization of demand or yield. We show that these problems can be modeled and solved as special cases of the appointment scheduling problem. In a supply chain, the consequences of uncertainty (e.g., due to stochastic demand or random yield) are something that players would like to minimize and, when possible, pass on to others. Consider a buyer and a supplier where the buyer can order any amount from the supplier whenever it is convenient. This may be the case when there are many suppliers and they are competing for buyers. However, when possible a supplier would prefer a contract in which the buyer (who has better information about the demand uncertainty) commits in advance to how much to purchase over a certain period of time. In return, the supplier may offer a discount to the buyer to make this choice attractive. These types of agreements are reported in practice, e.g., [2], [8] and [7].

1 A version of this chapter will be submitted for publication. Begen, M.A. and Queyranne, M. Advance Multi-Period Quantity Commitment and Appointment Scheduling.
With such an agreement, the challenge for the buyer becomes determining how much to commit to purchase in advance (e.g., in total for the entire horizon or per period) and how much to order in each period. This problem and its variants (such as finite or infinite horizons, with or without fixed costs, total or individual period commitments) have been well motivated and studied in the literature, e.g., [2], [8], [5], [4], [3], [7] and [1]. These studies mostly (and naturally) use dynamic programming to determine an optimal policy, and in some cases they develop heuristics. Nevertheless, all the previous studies on this topic that we are aware of consider situations where a buyer commits in advance to how much to purchase and decides how much to order period by period, i.e., the ordering decision for the next period is made after this period's demand realization. In our setting, the buyer needs to decide how much to order for all periods at once and now, i.e., before any realization of the random demands. There may be situations where the buyer needs to enter such a contract to secure any orders from a strong supplier. Alternatively, we may think of a producer who is subject to random yield and needs to determine now how much to supply in each of the n periods before any production levels are known. In this case, the producer may be subject to stiff competition and needs to promise customers supply quantities for each of the next n periods before the production horizon starts. If there is any product shortage in a period then the producer will obtain the product by other means, e.g., purchase it from a competitor. Furthermore, the producer has a high inventory holding cost, so building inventory in advance to compensate for product shortages may not be profitable; holding costs are nevertheless incurred whenever there is excess inventory.
We first provide the details of the advance multi-period quantity (order or supply) commitment problems and then show that they have a very strong connection with the appointment scheduling problem given in Chapter 2. We then use the algorithmic and convex optimization results obtained in Chapter 2 and Chapter 3 (for the appointment scheduling problem) to determine optimal levels of quantity commitment (order or supply) before the first period. To the best of our knowledge, the problems considered in this chapter have not yet been studied. We use the same assumptions and notation as in Chapter 2 and Chapter 3. We provide a description of the appointment scheduling problem, and introduce notation, in Section 5.2. The rest of the chapter is organized as follows. In Section 5.3, we introduce a multi-period inventory model for a perishable product with advance order commitments, provide a few real-world examples and show that it has a one-to-one correspondence with the appointment scheduling problem; therefore, it may be solved efficiently as a special case of the appointment scheduling problem. We will refer to this model as "the inventory model". Section 5.4 introduces the model for the production problem with advance supply commitments and random yield. In this section, we establish a link between "the production problem" and the appointment scheduling problem. We show that (under a mild condition on the cost coefficients) the objective function of this problem is L-concave2 (if production quantities are integer) and concave (if production quantities are real). Furthermore, if the yield distributions are independent then the production problem can also be solved efficiently, as in the case of appointment scheduling. Finally, we conclude the chapter in Section 5.5.

5.2 Description of the Appointment Scheduling Problem

This section closely follows Chapter 2 and Chapter 3.
There are n + 1 jobs numbered 1, 2, ..., n + 1 that need to be processed sequentially (in the order 1, 2, ..., n + 1) on a single processor. An appointment schedule, i.e., a processing duration allocation ai for each job i, is needed before any processing can start. That is, each job is assigned a planned start date, i.e., an appointment date Ai, where A1 = 0 and Ai = Ai−1 + ai−1 for i = 2, 3, ..., n + 1. The processing durations are stochastic and we are only given their joint discrete distribution. When a job finishes later than the next job's appointment date, the system incurs an overage cost due to the overtime of the current job and the waiting of the next job. On the other hand, if a job finishes earlier than the next job's appointment date, the system incurs a cost due to under-utilization, i.e., an underage cost. The goal is to find appointment dates (A1, ..., An) that minimize the total expected cost. There are n real jobs. The (n + 1)th job is a dummy job with a processing duration of 0; the appointment time for the (n + 1)th job is the total time available for the n real jobs. We use the dummy job to compute the overage or underage cost of the nth job. We denote the random processing duration of job i by pi and the random vector3 of processing durations by p = (p1, p2, ..., pn, 0). Let p̄i denote the maximum possible value of processing duration pi, and let pmax = max(p̄1, ..., p̄n). The underage cost rate ui of job i is the cost (per unit time) incurred when job i is completed at a date Ci before the appointment date Ai+1 of the next job i + 1. The overage cost rate oi of job i is the unit cost incurred when job i is completed at a date Ci after the appointment date Ai+1. Thus the total cost due to job i completing at date Ci is ui(Ai+1 − Ci)+ + oi(Ci − Ai+1)+, where (x)+ = max(0, x) is the positive part of the real number x.

2 See Definition 5.4.1.
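As a concrete illustration, the per-job cost just defined can be computed for one realized duration vector. A minimal sketch (the function name and the sample numbers are ours; job i is assumed to start at the later of its appointment date and the previous job's completion, as in the recursion Si = max{Ai, Ci−1} given below):

```python
def schedule_cost(A, p, u, o):
    """Total cost of appointment vector A for one realized duration vector p.
    A = (A_1, ..., A_{n+1}) with A_1 = 0; p lists the n real durations and the
    dummy job's duration 0 last. Job i pays u_i on earliness and o_i on
    lateness relative to the next appointment date A_{i+1}."""
    n = len(p) - 1          # number of real jobs
    cost, C = 0, 0          # C = completion time of the previous job
    for i in range(n):
        S = max(A[i], C)    # S_i = max{A_i, C_{i-1}} (job 1 starts on time)
        C = S + p[i]        # C_i = S_i + p_i
        cost += o[i] * max(0, C - A[i + 1]) + u[i] * max(0, A[i + 1] - C)
    return cost

# Two real jobs with realized durations 3 and 2, appointments at 0 and 2,
# planned makespan 5: job 1 overruns its slot by 1, job 2 ends exactly on time.
print(schedule_cost([0, 2, 5], [3, 2, 0], [1, 1], [2, 2]))  # → 2
```

Averaging this quantity over sampled duration vectors estimates the objective F(A) defined next.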
We define u = (u1, u2, ..., un) and o = (o1, o2, ..., on). We assume, naturally, that all cost coefficients and processing durations are non-negative and bounded. We also assume that processing durations are integer-valued.4 Next we introduce our decision variable for the appointment scheduling problem. Let A = (A1, A2, ..., An, An+1) (with A1 = 0) be the appointment vector, where Ai is the appointment date for job i. We introduce additional variables which help define and express the objective function. Let Si be the start date and Ci the completion date of job i. Since job 1 starts on time we have S1 = 0 and C1 = p1. The other start and completion times are determined as follows: Si = max{Ai, Ci−1} and Ci = Si + pi for 2 ≤ i ≤ n + 1. Note that the dates Si and Ci are random variables which depend on the appointment vector A and the random duration vector p. Let F(A|p) be the total cost of appointment vector A given processing duration vector p:

F(A|p) = Σ_{i=1}^n [oi(Ci − Ai+1)+ + ui(Ai+1 − Ci)+].

The objective to be minimized is the expected total cost F(A) = Ep[F(A|p)], where the expectation is taken with respect to the random processing duration vector p. Our framework can include a given due date D for the end of processing (e.g., the end of day for an operating room in the appointment scheduling problem, or a quota set by the supplier in the inventory model) after which overtime is incurred, instead of letting the model choose a planned makespan An+1. We assume D is an integer and that 0 ≤ D ≤ Σ_{i=1}^n p̄i.

3 We write all vectors as row vectors.
4 We can restrict ourselves to integer appointment schedules without loss of optimality by the Appointment Vector Integrality Theorem 2.5.10 of Chapter 2.

Define
Ã = (A1, A2, ..., An); then the new objective becomes

F^D(Ã) = Ep[ Σ_{j=1}^{n−1} (oj(Cj − Aj+1)+ + uj(Aj+1 − Cj)+) + on(Cn − D)+ + un(D − Cn)+ ].

We immediately observe that F(Ã, D) = F^D(Ã). We end this section with two definitions that we need later in the chapter. The first definition is a mild condition on the cost coefficients and is due to Definition 2.6.5 of Chapter 2.

Definition 5.2.1. The cost coefficients (u, o) are α-monotone if there exist reals αi (1 ≤ i ≤ n) such that 0 ≤ αi ≤ oi and ui + αi is non-increasing in i, i.e., ui + αi ≥ ui+1 + αi+1 for all i = 1, ..., n − 1.

The condition of α-monotonicity is automatically satisfied if all underage cost coefficients are identical, i.e., ui = u for all i. Let Z denote the set of integers and let 1 be the vector in R^{n+1} with each component equal to 1. The next definition is that of an L-convex function.

Definition 5.2.2. f : Z^q → R ∪ {∞} is L-convex iff f(z) + f(y) ≥ f(z ∨ y) + f(z ∧ y) for all z, y ∈ Z^q and there exists r ∈ R such that f(z + 1) = f(z) + r for all z ∈ Z^q [9].

5.3 A Multi-Period Inventory Model for a Perishable Product with Advance Commitments

Consider a buyer who has to make ordering decisions for the next n periods at time zero for a perishable product with stochastic demand. Since the product is perishable, excess (unsold) items at the end of a period cannot be used in the next period and need to be disposed of. On the other hand, unsatisfied demand is backordered. Furthermore, there may be a quota on total purchases (orders and backorders) such that it is more costly to order beyond this quota. The objective of the buyer is to determine how much to commit to now for the next n periods so as to maximize his/her expected profit.
We think of the profit as revenue − costs, where revenue is simply the number of products sold times the contribution factor (including unit product cost). In our setting the number of products sold is equal to the total demand, since we assume backorders and satisfy all the demand. The costs, on the other hand, consist of holding (and/or disposal) costs and backorder costs. Revenue becomes a constant after taking the expectation with respect to the demand of each period; therefore we can think of this profit maximization problem as a cost minimization, i.e., minimization of the total expected holding (and/or disposal) and backorder costs. There are many real-world examples of this setting. For example, consider a retailer who is under heavy competition and has a single supplier. Demand for the retailer's product is stochastic, and the supplier requires the retailer to commit to its orders for the next n periods and may set a quota on the total amount of purchases. The retailer has little negotiating power with the supplier, and wants to keep his reputation by satisfying all the demand, using backordering when needed. In this setup, we look at the problem through the retailer's eyes and have to decide on orders for the next n periods. This inventory model and the appointment scheduling model as defined in Chapter 2 have a one-to-one correspondence in terms of their structure, data, decision variables and objective functions. In the inventory model we have n periods, whereas in the appointment scheduling model there are n jobs. The random component pi is the demand for period i in the inventory model, whereas in the appointment scheduling model it is the processing duration of job i. The costs are u and o; in the appointment scheduling model u is the underage (earliness) cost and o is the overage (waiting and/or overtime) cost, whereas in the inventory model u is the holding (and/or disposal) cost and o is the backorder cost.
The decision variable in the appointment scheduling model is the appointment date Ai of job i, or equivalently how much time ai to allocate to job i. On the other hand, the decision in the inventory model is how much to order for period i, ai, and as given in Section 5.2 we have the relationship ai = Ai+1 − Ai. The start time Si of job i in the appointment scheduling model corresponds to the total purchases (orders and backorders) up to period i (not including period i), and the completion time Ci of job i is the total purchases up to period i + 1 (including period i). Table 5.1 gives a comparison summary between the appointment scheduling model and the described inventory problem in terms of data and decision variables.

Table 5.1: Comparison of the Appointment Scheduling and Inventory Models

     | appointment scheduling        | inventory
n    | jobs                          | periods
i    | job                           | period
pi   | processing duration of job i  | demand for period i
u    | underage cost                 | holding and/or disposal cost
o    | overage cost                  | backorder cost
ai   | allocated time for job i      | order for period i
Ai   | appointment date for job i    | cumulative orders up to period i
Si   | start date for job i          | total purchase (orders and backorders) up to period i
Ci   | completion date for job i     | total purchase (orders and backorders) up to period i + 1

In Figure 5.1 we provide an example. The figure is a graph of periods and order levels for a demand realization (without a quota D), showing the inventory level (excess inventory or backorders) for each period. Figure 5.1 shows the demands (pi's), orders (ai's), backorders (square blocks) and excess units (diagonal blocks) for a six-period instance. The x axis is the cumulative orders (the Ai's) and the y axis is time, i.e., periods. As discussed earlier, the buyer has to decide the order levels a1, ..., an for the next n periods now such that the total expected cost (holding and/or disposal, and backorder) is minimized. We find the cost for period i first.
Recall that the total purchase up to period i is S_i, the total orders up to period i is A_i, and the demand for period i is p_i. Suppose we had ordered a_i for period i. Then S_i + p_i is the total purchase up to period i + 1, and A_i + a_i is the total orders up to period i + 1. Therefore we pay holding and/or disposal cost u on (A_i + a_i − S_i − p_i)^+ and backorder cost o on (S_i + p_i − A_i − a_i)^+; hence the total cost due to ordering a_i units for period i is u(A_i + a_i − S_i − p_i)^+ + o(S_i + p_i − A_i − a_i)^+. By definition A_{i+1} = A_i + a_i and C_i = S_i + p_i, therefore we can represent the cost of period i as u(A_{i+1} − C_i)^+ + o(C_i − A_{i+1})^+. Then the total expected cost for the n periods of the inventory problem becomes

    G(A) = E_p[ sum_{i=1}^{n} ( o(C_i − A_{i+1})^+ + u(A_{i+1} − C_i)^+ ) ].    (5.1)

We see that Eq. (5.1) is precisely the definition of F(A) (given in Section 5.2) when u_i = u and o_i = o for all i (1 ≤ i ≤ n). Therefore G(A) = F(A) with u_i = u and o_i = o for all i, and hence the inventory model is a special case of the appointment scheduling model. Furthermore, since u_i = u for all i, α-monotonicity is automatically satisfied. Therefore, if the demand distributions are independent and integer-valued then G(A) can be minimized in O(n^9 p_max^2 log p_max) time by Theorem 2.7.3 of Chapter 2. In addition, with real-valued demands G is convex (by Corollary 3.3.5 of Chapter 3), and we can use the sampling approach developed in Chapter 3 to obtain a provably near-optimal order plan if the demand distributions are not known but only a set of independent samples is available. In this case, we do not require independence of demands from one period to another.

Figure 5.1: A Realization of Inventory Levels for 6 Periods. [The figure plots periods 1–6 on the y axis against cumulative orders A_1, ..., A_7 on the x axis, showing the orders a_1, ..., a_6, a demand realization p_1, ..., p_6, backorders (square blocks) and excess units (diagonal blocks).]
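The expected cost G(A) above can be estimated by simulation when demand samples are available. A minimal Monte Carlo sketch (variable names are ours, not from the thesis; it assumes A_1 = 0, A_{i+1} = a_1 + ... + a_i, and the Chapter 2 recursion S_i = max(A_i, C_{i−1}), C_i = S_i + p_i for total purchases):

```python
import numpy as np

def expected_inventory_cost(a, demand_samples, u, o):
    """Monte Carlo estimate of G(A) for per-period orders a_1..a_n.

    Illustrative sketch: A_1 = 0, A_{i+1} = a_1 + ... + a_i, and total
    purchases follow S_i = max(A_i, C_{i-1}), C_i = S_i + p_i
    (all demand is met, via backorders if needed)."""
    a = np.asarray(a, dtype=float)
    n = len(a)
    A = np.concatenate(([0.0], np.cumsum(a)))   # A_1, ..., A_{n+1}
    samples = np.atleast_2d(np.asarray(demand_samples, dtype=float))
    total = 0.0
    for p in samples:
        C_prev = 0.0
        for i in range(n):
            C = max(A[i], C_prev) + p[i]        # C_i
            # holding u on (A_{i+1} - C_i)^+, backorder o on (C_i - A_{i+1})^+
            total += u * max(A[i + 1] - C, 0.0) + o * max(C - A[i + 1], 0.0)
            C_prev = C
    return total / len(samples)
```

With a single sample this returns the realized cost; averaging over historical demand samples gives the sample-average objective used in the Chapter 3 approach.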
Last but not least, by our results in Appendix A we can use non-smooth convex optimization methods and a hybrid algorithm (based on a combination of discrete and non-smooth convex methods with the special-purpose rounding algorithm; see Section A.4 of Appendix A) to find an optimal order plan for the buyer. When there is a limit (or quota) D set by the supplier on the total amount of purchases, we may represent the total expected cost for the n periods of the inventory problem as

    G^D(Ã) = E_p[ sum_{j=1}^{n−1} ( o(C_j − A_{j+1})^+ + u(A_{j+1} − C_j)^+ ) + o_n(C_n − D)^+ + u(D − C_n)^+ ].

We immediately observe that G^D(Ã) = G(Ã, D), and furthermore G^D(Ã) = F^D(Ã) (with u_i = u for all i (1 ≤ i ≤ n), o_i = o for all i (1 ≤ i ≤ n − 1), and a possibly different o_n). Similar to G, α-monotonicity is satisfied for G^D, and hence it can be minimized in O(n^9 p_max^2 log p_max) time by Corollary 2.8.7 of Chapter 2 if the demand distributions are independent and integer-valued. Furthermore, like G, with real-valued demands G^D is convex (by Corollary 3.3.5 of Chapter 3), and by Proposition 3.4.11 of Chapter 3 we can use the sampling approach developed in Chapter 3 to obtain a provably near-optimal order plan if the demand distributions are not known but only a set of independent samples is available. In this case, again we do not require independence of demands between periods. Last but not least, by using our results in Appendix A and Chapter 3 (Remark A.3.11 of Appendix A and Proposition 3.4.11 of Chapter 3) we can use non-smooth convex optimization methods and the hybrid algorithm to find an optimal order plan for the buyer.

Remark 5.3.1. We may use distinct u_i's and o_i's for the inventory model as long as they are α-monotone.
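The quota objective G^D(Ã) can likewise be estimated by simulation; it differs from G only in the terminal terms for period n. A minimal sketch (illustrative names, not thesis code; assumes A_1 = 0 and the Chapter 2 recursion C_i = max(A_i, C_{i−1}) + p_i for total purchases):

```python
import numpy as np

def expected_cost_with_quota(a, demand_samples, u, o, o_n, D):
    """Monte Carlo estimate of G^D: regular holding/backorder terms for
    periods 1..n-1, plus terminal terms o_n*(C_n - D)^+ + u*(D - C_n)^+.

    Illustrative sketch; a gives orders a_1..a_n, D is the supplier quota."""
    a = np.asarray(a, dtype=float)
    n = len(a)
    A = np.concatenate(([0.0], np.cumsum(a)))   # A_1, ..., A_{n+1}
    samples = np.atleast_2d(np.asarray(demand_samples, dtype=float))
    total = 0.0
    for p in samples:
        C_prev = 0.0
        for i in range(n):
            C = max(A[i], C_prev) + p[i]        # C_i
            if i < n - 1:
                total += u * max(A[i + 1] - C, 0.0) + o * max(C - A[i + 1], 0.0)
            else:                               # terminal period: compare C_n to D
                total += o_n * max(C - D, 0.0) + u * max(D - C, 0.0)
            C_prev = C
    return total / len(samples)
```

Setting o = 0 and o_n = o recovers the bandwidth-commitment objective discussed later in this section.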
However, for simplicity, and because it makes sense to have the same backorder and holding (and/or disposal) cost for all periods, we use the same u and o for all periods, except possibly for o_n (to capture the extra cost incurred when there is a quota and it is exceeded). When all the u_i's are the same, α-monotonicity is satisfied and the functions G and G^D are automatically discretely convex in the case of integer-valued demands, and convex when demand is real-valued. We provide a real-world example for the inventory model, from the high-tech industry. Consider an Internet company that needs a large amount of bandwidth daily. The demand for bandwidth changes from day to day and is stochastic. At the beginning of each month, the company has to make prior minimum commitments with an Internet service provider on how much bandwidth to buy for each of the next 30 days. Any unused amount is a loss, since the company has already agreed to buy some minimum quantity of bandwidth daily, whereas the company can purchase more in a day if needed. However, there is a quota D that the Internet provider sets for the total purchase over the 30 days. If this limit is exceeded then the company must pay a penalty for any amount over D. The company's objective is to determine how much bandwidth purchase to commit for the next 30 days so as to minimize the expected unused-amount and overuse costs. (We do not consider the cost of the service itself since the company will pay that amount in any case.) We can write this company's objective as

    E_p[ sum_{j=1}^{n−1} u(A_{j+1} − C_j)^+ + u(D − C_n)^+ + o(C_n − D)^+ ]

where u is the unused-amount cost rate (this cost can be thought of as the opportunity cost of unused bandwidth) and o is the overuse cost for exceeding the limit D. Note that this function is the same as G^D(Ã) with o_i = 0 for 1 ≤ i ≤ n − 1 and o_n = o.
We end this section with another application of appointment scheduling in the context of the inventory model. Consider a project manager who is responsible for budget allocation decisions for the multiple, serial phases of a project. The challenge is that the funding requirement of each project phase is stochastic, and the funds need to be secured before the project starts. If the funding requirement (for a phase) turns out to be more than what was allocated, then it costs more (at a rate of o) to secure the remaining portion (e.g., the manager needs to pay a higher interest rate to obtain additional funds). On the other hand, if the allocation (for a phase) exceeds the required funds, then there is a lost opportunity cost (at a rate of u). Moreover, there may be a total quota D for the entire project after which it costs even more to obtain any funding. The objective is to determine funding allocations for each phase before the project starts such that the total expected cost is minimized. We can view this as the same inventory problem, where the demands are the funding requirements, the order commitments are the budget allocations, and the cost structure is the same, with u and o.

5.4 A Multi-Period Production Model with Random Yield and Advance Commitments

Consider a production manager who is responsible for manufacturing and selling a product that is subject to random yield and a high inventory holding cost. In order to secure customer contracts, this manager needs to commit in advance, before any production starts, how much to supply in each of the next n periods. Due to random yield the production amount is uncertain; however, the supply commitments must be met regardless. For example, if production falls short and the producer cannot satisfy the current period's supply commitment from its inventory, then it needs to provide the missing amount of product by other means (e.g., purchase from other manufacturers).
On the other hand, if there is more product available than what was committed, the excess amount is placed in inventory and can be used in future periods. However, keeping inventory is expensive due to the product's high inventory holding cost. The manager is interested in maximizing the total expected profit after n periods. Revenue is the number of products sold times the contribution factor r per product (including unit production cost). In our setting, the number of products sold is the total number of products committed. Costs include a product shortage cost u_i per product and an inventory holding cost o_i per product for period i (1 ≤ i ≤ n). We think of u_i as the additional cost of obtaining a product (compared to in-house production) when the producer falls short of satisfying the current period's commitment. On the other hand, o_i can be thought of as the inventory holding cost for periods 1, ..., n − 1, and o_n (in addition to holding cost) may include the disposal cost of products remaining at the end of the n periods. Similar to the inventory model described in Section 5.3, this production model and the appointment scheduling model of Chapter 2 have many similarities with respect to their structure, data, decision variables and objective function. In the production model we have n periods, and in the appointment scheduling model there are n jobs. The stochastic component p_i is the production level of period i in the production model, whereas in the appointment scheduling model it is the processing duration of job i. The costs are u_i and o_i; in the appointment scheduling model u_i is the underage (earliness) cost and o_i is the overage (waiting and/or overtime) cost of job i, whereas in the production model u_i is the product shortage cost and o_i the inventory holding (and possibly disposal) cost.
The decision variable in the appointment scheduling model is the appointment date A_i of job i, whereas the decision in the production model is how much to commit to supply in period i, a_i, and we have the relationship a_i = A_{i+1} − A_i. The completion time of job i, C_i, in the appointment scheduling model represents the total number of products supplied (produced in-house and purchased from outside) until the end of period i before purchases (if any) in period i, and the start time of job i, S_i, is the total number of products supplied (produced in-house and purchased from outside) until the end of period i − 1 after purchases (if any) in period i − 1. Table 5.2 summarizes the correspondence between the appointment scheduling model and the described production problem in terms of data and decision variables. We can interpret Figure 5.1 of the inventory model similarly for the production model. Now the x axis is cumulative supply commitments and, as before, the y axis is time, i.e., periods. The figure shows the supply commitment levels (a_i's), a sample production realization (p_i's), units in inventory (square blocks) and units that are short (diagonal blocks) for each period. The manager, as mentioned above, has to decide the supply commitment levels a_1, ..., a_n now for the next n periods such that the total expected profit is maximized. We first look at the revenue for period i. The producer sells precisely the committed amount a_i in each period; if there is a shortage then the manager obtains the missing amount from outside to fulfill the supply commitment, and if there is a surplus then the excess goes to inventory. Therefore the revenue for period i is r·a_i. Next, we look at the costs for period i.
Table 5.2: Comparison of the Appointment Scheduling and Production Models

       appointment scheduling           production
  n    jobs                             periods
  i    job                              period
  p_i  processing duration of job i     production for period i
  u_i  underage cost for job i          product shortage cost for period i
  o_i  overage cost for job i           inventory holding cost for period i
  a_i  allocated time for job i         supply commitment for period i
  A_i  appointment date for job i       cumulative supply commitments up to period i
  S_i  start date for job i             total products up to period i, with purchase in period i − 1
  C_i  completion date for job i        total products up to period i + 1, without purchase in period i

Recall that A_i is the cumulative supply commitment up to period i. Suppose the manager promises to supply a_i in period i. Then A_i + a_i is the cumulative supply commitment up to period i + 1. Also note that C_i is the total number of products supplied (produced in-house and purchased from outside) up to period i + 1 before purchases (if any) in period i. Therefore, if C_i − A_i − a_i > 0 there is a product surplus and the producer pays o_i(C_i − A_i − a_i) as inventory holding cost; otherwise there is a product shortage and the producer pays u_i(A_i + a_i − C_i) to purchase the missing products. Hence the profit for period i is r·a_i − ( o_i(C_i − A_{i+1})^+ + u_i(A_{i+1} − C_i)^+ ). Then the total expected profit for the n periods of the production problem becomes

    H(A) = E_p[ sum_{i=1}^{n} ( r·a_i − o_i(C_i − A_{i+1})^+ − u_i(A_{i+1} − C_i)^+ ) ]    (5.2)
         = r sum_{i=1}^{n} a_i − E_p[ sum_{i=1}^{n} ( o_i(C_i − A_{i+1})^+ + u_i(A_{i+1} − C_i)^+ ) ]    (5.3)
         = r·A_{n+1} − F(A)    (5.4)

where Eq. (5.2) is the total expected profit for all n periods. In Eq. (5.3) we take the non-random part out of the expectation, and finally we obtain Eq. (5.4) by noting that sum_{i=1}^{n} a_i = A_{n+1} and recognizing the definition of F(A) as given in Section 5.2. Therefore H(A) = r·A_{n+1} − F(A), and hence the production model has a close connection with the appointment scheduling model.
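The identity H(A) = r·A_{n+1} − F(A) can be checked numerically on simulated yield realizations. A minimal sketch (our illustrative code, not from the thesis; it assumes A_1 = 0 and the Chapter 2 recursion C_i = max(A_i, C_{i−1}) + p_i):

```python
import numpy as np

def profit_and_cost(a, yield_samples, r, u, o):
    """Return Monte Carlo estimates (H, F) for supply commitments a_1..a_n
    under production-yield samples p. Illustrative names only; uses
    A_1 = 0, A_{i+1} = a_1 + ... + a_i and C_i = max(A_i, C_{i-1}) + p_i."""
    a = np.asarray(a, dtype=float)
    n = len(a)
    A = np.concatenate(([0.0], np.cumsum(a)))      # A_1, ..., A_{n+1}
    samples = np.atleast_2d(np.asarray(yield_samples, dtype=float))
    H = F = 0.0
    for p in samples:
        C_prev = 0.0
        for i in range(n):
            C = max(A[i], C_prev) + p[i]           # C_i via the scheduling recursion
            over = max(C - A[i + 1], 0.0)          # surplus -> holding cost o
            under = max(A[i + 1] - C, 0.0)         # shortage -> purchase cost u
            F += o * over + u * under
            H += r * a[i] - o * over - u * under
            C_prev = C
    m = len(samples)
    return H / m, F / m
```

On any instance the two estimates satisfy H = r·A_{n+1} − F exactly, mirroring Eq. (5.4), since the revenue term r·A_{n+1} is non-random.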
We can think of H as a special case of F. Before we formalize this relationship with our next result, we need a few definitions. The first is the definition of L-concavity: a function f is L-concave if −f is L-convex. The formal definition is below.

Definition 5.4.1. f : Z^q → R ∪ {−∞} is L-concave iff f(z) + f(y) ≤ f(z ∨ y) + f(z ∧ y) for all z, y ∈ Z^q, and there exists r ∈ R such that f(z + 1) = f(z) + r for all z ∈ Z^q, where 1 denotes the all-ones vector [9].

The next two definitions concern the subgradient and subdifferential of a convex function, and the supgradient and supdifferential of a concave function.

Definition 5.4.2. A vector g is a subgradient of a convex function f at the point x if f(y) ≥ f(x) + g^T(y − x) for all y. The subdifferential of f at a point x is the set of all subgradients at x, i.e., ∂f(x) = {g : f(y) ≥ f(x) + g^T(y − x) for all y} [6].

Definition 5.4.3. A vector g is a supgradient of a concave function f at the point x if f(y) ≤ f(x) + g^T(y − x) for all y. The supdifferential of f at a point x is the set of all supgradients at x, i.e., ∂̄f(x) = {g : f(y) ≤ f(x) + g^T(y − x) for all y} [6].

Now we are ready for our results on H, the objective function of the production model.

Corollary 5.4.4.
1. If production levels are integer-valued and the cost coefficients (u, o) are α-monotone, then H is L-concave. If, in addition, the production level distributions are independent, then H can be maximized in O(n^9 p_max^2 log p_max) time.
2. If production amounts are real-valued and the cost coefficients (u, o) are α-monotone, then
   • H is concave,
   • if g is a subgradient of F then r·1_{n+1} − g is a supgradient of H, i.e., if g ∈ ∂F(A) then (r·1_{n+1} − g) ∈ ∂̄H(A), and
   • the supdifferential of H at A is ∂̄H(A) = r·1_{n+1} − ∂F(A).

Proof. 1. By definition H(A) = r·A_{n+1} − F(A).
The first term r·A_{n+1} is linear in A, and by the L-convexity Theorem 2.6.13, F(A) is L-convex when the cost vectors (u, o) are α-monotone. Therefore H(A) is L-concave. In the case of independent production level distributions, for a given A, the computational complexity of evaluating H(A) is the same as that of evaluating F(A), and by Theorem 2.7.3 of Chapter 2, F and H can be optimized in O(n^9 p_max^2 log p_max) time.

2. • F is convex by Corollary 3.3.5 of Chapter 3 if (u, o) are α-monotone, and r·A_{n+1} is linear in A; therefore H(A) = r·A_{n+1} − F(A) is concave if (u, o) are α-monotone.
• If g ∈ ∂F(A) then F(B) ≥ F(A) + g^T(B − A). Since F(A) = r·A_{n+1} − H(A), we obtain r·B_{n+1} − H(B) ≥ r·A_{n+1} − H(A) + g^T(B − A). Reorganizing terms gives −H(B) ≥ −H(A) − (r·1_{n+1} − g)^T(B − A). Finally, multiplying both sides by −1 yields H(B) ≤ H(A) + (r·1_{n+1} − g)^T(B − A). Therefore (r·1_{n+1} − g) is a supgradient of H at A, i.e., (r·1_{n+1} − g) ∈ ∂̄H(A).
• The result above shows that if g ∈ ∂F(A) then (r·1_{n+1} − g) ∈ ∂̄H(A), i.e., ∂̄H(A) ⊇ r·1_{n+1} − ∂F(A). By the same arguments, if g ∈ ∂̄H(A) then (r·1_{n+1} − g) is a subgradient of F, showing ∂̄H(A) ⊆ r·1_{n+1} − ∂F(A). Hence ∂̄H(A) = r·1_{n+1} − ∂F(A). □

Corollary 5.4.4 allows us to solve the production problem with the tools developed for the appointment scheduling problem in Chapter 2 and Chapter 3. In the case of independent and integer-valued production level distributions, we can maximize H in polynomial time. Furthermore, with real-valued production levels H is concave, and we obtain a supgradient for H easily once we have a subgradient for F.
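The supgradient map in part 2 of Corollary 5.4.4 is mechanical once a subgradient of F is in hand. A minimal sketch (our naming; it assumes 1_{n+1} denotes the (n+1)-st unit vector, consistent with the term r·A_{n+1} in the proof):

```python
import numpy as np

def supgradient_of_H(g_F, r, n):
    """Map a subgradient g of F at A (a vector with n+1 components,
    one per A_1..A_{n+1}) to a supgradient of H(A) = r*A_{n+1} - F(A).

    Illustrative sketch assuming 1_{n+1} is the (n+1)-st unit vector,
    so the map is g -> r*e_{n+1} - g."""
    e = np.zeros(n + 1)
    e[n] = r                  # r * e_{n+1} (position n+1, 0-indexed n)
    return e - np.asarray(g_F, dtype=float)
```

Any supgradient obtained this way can then drive a standard supgradient-ascent step on the concave function H.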
Therefore we can utilize the results in Appendix A and use non-smooth concave optimization methods and the hybrid algorithm to find an optimal production plan for the manager. Last but not least, we characterize H's supdifferential, and we can use the sampling approach developed in Chapter 3 to obtain a provably near-optimal production plan if the production level distributions are not known but only a set of independent samples is available. As discussed earlier, in this case we do not require independence of production levels from one period to another.

5.5 Conclusion

We introduce two advance multi-period quantity (order or supply) commitment problems with random demand or yield. The feature distinguishing these models from those previously reported in the literature is that all quantity commitment (order and supply amount) decisions are to be made at once, before the planning interval starts. We show that there is a close relationship between these problems and the appointment scheduling problem studied in Chapter 2. Therefore, we can solve these types of quantity commitment problems efficiently, e.g., in polynomial time in the case of integer-valued and independent (demand or yield) distributions, or by using the non-smooth convex optimization methods developed in Appendix A. Furthermore, in the case of unknown demand or yield distributions we can use the sampling approach developed in Chapter 3.

5.6 Bibliography

[1] Ravi Anupindi and Yehuda Bassok. Supply contracts with quantity commitments and stochastic demand. A chapter in Quantitative Models for Supply Chain Management, S. Tayur, M. Magazine, and R. Ganeshan, eds. Kluwer Academic Publishers, 1998.
[2] Yehuda Bassok and Ravi Anupindi. Analysis of supply contracts with total minimum commitment. IIE Trans., 29:373–381, 1997.
[3] Yehuda Bassok and Ravi Anupindi. Analysis of supply contracts with commitments and flexibility. Naval Research Logistics, 55(5), 2008.
[4] Aharon Ben-Tal, Boaz Golany, Arkadi Nemirovski, and Jean-Philippe Vial. Retailer-supplier flexible commitments contracts: A robust optimization approach. MSOM, 7(3):248–271, 2005.
[5] Ki Ling Cheung and Xue-Ming Yuan. An infinite horizon inventory model with periodic order commitment. EJOR, 146:52–66, 2003.
[6] Jean-Baptiste Hiriart-Urruty and Claude Lemaréchal. Convex Analysis and Minimization Algorithms I and II. Springer, 1993.
[7] Zhaotong Lian and Abhijit Deshmukh. Analysis of supply contracts with quantity flexibility. EJOR, 196:526–533, 2009.
[8] Kamran Moinzadeh and Steven Nahmias. Adjustment strategies for a fixed delivery contract. Oper. Res., 48(3):408–423, 2000.
[9] Kazuo Murota. Discrete Convex Analysis, SIAM Monographs on Discrete Mathematics and Applications, Vol. 10. Society for Industrial and Applied Mathematics, 2003.

6 Concluding Remarks

In this thesis, we take an in-depth look at the appointment scheduling problem [2, 1, 7]. In Chapter 2, we study a discrete-time version of the appointment scheduling problem and develop a polynomial time algorithm, based on discrete convexity, that, for a given processing sequence, finds an appointment schedule minimizing the total expected cost. To the best of our knowledge this is the first polynomial time algorithm for the appointment scheduling problem. In addition, our framework can handle a given due date for the total processing (e.g., end of day for an operating room) after which overtime is incurred, instead of letting the model choose an end date. We also extend our model and framework to include no-shows (e.g., patient no-shows) and some emergencies (e.g., emergency surgeries).
We believe that our framework is sufficiently generic to be portable and applicable to many appointment systems in healthcare and other areas, including surgery scheduling, healthcare diagnostic operations (such as CAT scans and MRIs) and physician appointments, as well as project scheduling, container vessel and terminal operations, and gate and runway scheduling of aircraft at an airport. After developing our modeling framework and proving that we can find an optimal appointment schedule in polynomial time, we focus on practical implementation issues in Chapter 3 and Appendix A. The objective function of the appointment scheduling problem, as a function of the continuous appointment vector, is non-smooth, but in Chapter 3 we show that it is convex and we characterize its subdifferential. We obtain closed-form formulas for the subdifferential as well as for a subgradient. This characterization is very useful, as it allows us to develop two very important extensions. In Chapter 3, we relax the perfect information assumption on the probability distributions of processing durations. We develop a sample-based approach to determine the number of independent samples required to obtain a provably near-optimal solution with high confidence. This result has important practical implications, as the true processing duration distributions are often not known and only their past realizations or some samples are available. We believe this is the first sampling approach developed for the appointment scheduling problem. In Appendix A, we use the subdifferential characterization with independent processing durations to develop a hybrid approach based on both discrete convexity [4] and non-smooth convex optimization [3, 6], combined with a special-purpose rounding algorithm which takes any fractional solution and rounds it to an integer one with the same or improved objective value. We believe the hybrid approach may perform well in practice.
Again motivated by surgery scheduling, in Chapter 4 we look at the problem of determining the number of surgeries for an OR block with a focus on the incentives of the parties involved (hospital and surgeon). In particular, we investigate the situation, commonly reported in the literature and observed empirically, that surgeons over-schedule their allotted OR time, i.e., they schedule too many surgeries for their OR time and cause excessive overtime. We argue that this can be explained by the incentive of surgeons to take advantage of the fee-for-service payment structure for surgeries performed, combined with the fact that surgeons do not bear overtime costs at the hospital level. This creates a cost which is borne by the hospital, which operates the OR and pays the surgery support staff. We propose contracts that induce the surgeon to schedule a number of surgeries more aligned with the goals of the hospital and thus reduce overtime. If an OR can be managed in such a way that overtime is decreased, this may translate to immediate and significant cost savings, which may be used to increase hospital resources such as regular OR time, recovery and intensive care beds. Depending on how much power the hospital has over surgeons and how much information is available to the hospital, we suggest several contracts that the hospital might consider. There is a connection between the celebrated newsvendor problem and the appointment scheduling problem: if we have only a single job (surgery), i.e., n = 1, then the appointment scheduling problem becomes the newsvendor problem [8]. In Chapter 5, we introduce a new set of advance multi-period quantity (order or supply) commitment problems with random characteristics (demand or yield), and underage and overage costs if there is a mismatch between committed and realized quantities. We show that these multi-period quantity commitment problems can be modeled and solved as special cases of the appointment scheduling problem.
To the best of our knowledge, the problems introduced in Chapter 5 have not yet been studied. There are exciting future directions and improvement possibilities for this research. One possibility is to find an optimal sequence and appointment schedule simultaneously, i.e., given the jobs, determine a sequence and a job appointment schedule minimizing the total expected cost. This problem is likely to be hard [5], but it may be possible to develop heuristic algorithms with performance guarantees. Studying some special cases of this problem may shed light on the general case. In the near future, we are planning to implement the algorithms in Appendix A and develop a computational engine for the appointment scheduling problem. Besides testing and comparing the discrete, non-smooth and hybrid algorithms in computational experiments, we plan to put our findings into practice. We are in contact with local healthcare organizations to apply our results with real data and compare the appointment schedules determined by our methods with current practice. Furthermore, we may test the performance of various heuristic methods for both the appointment scheduling and sequencing problems once the computational engine is built. Another research avenue that we are considering is an extension in which both job arrivals and durations are random, and jobs belong to different priority classes. The goal is to determine a booking policy such that the target waiting times of each priority class are satisfied at minimum cost. Alternatively, one can determine what the waiting times would be for a given budget. Last but not least, there are many interesting incentive problems in healthcare. For example, in generating an OR block schedule there is the issue of allocating available OR time between different specialties as well as between surgeons within a specialty. Each surgeon has her/his own number of patients waiting for surgery.
Furthermore, there are other non-medical factors to consider, such as the rank of the surgeon and how well the specialty is represented in the OR block allocation process. We are considering studying this problem and developing a mechanism that will be fair and transparent for everyone, with the overall objective of reducing surgical waiting times.

6.1 Bibliography

[1] Tugba Cayirli and Emre Veral. Outpatient scheduling in health care: A review of literature. Production and Operations Management, 12(4), 2003.
[2] Brian Denton and Diwakar Gupta. A sequential bounding approach for optimal appointment scheduling. IIE Transactions, 35:1003–1016, 2003.
[3] Jean-Baptiste Hiriart-Urruty and Claude Lemaréchal. Convex Analysis and Minimization Algorithms I and II. Springer, 1993.
[4] Kazuo Murota. Discrete Convex Analysis, SIAM Monographs on Discrete Mathematics and Applications, Vol. 10. Society for Industrial and Applied Mathematics, 2003.
[5] Lawrence W. Robinson, Yigal Gerchak, and Diwakar Gupta. Appointment times which minimize waiting and facility idleness. Working Paper, DeGroote School of Business, McMaster University, 1996.
[6] R. Tyrrell Rockafellar. Theory of subgradients and its applications to problems of optimization: convex and nonconvex functions. 1981.
[7] P. Patrick Wang. Static and dynamic scheduling of customer arrivals to a single-server system. Naval Research Logistics, 40(3):345–360, 1993.
[8] E. N. Weiss. Models for determining estimated start times and case orderings in hospital operating rooms. IIE Transactions, 22(2):143–150, 1990.

A Minimizing a Discrete-Convex Function for Appointment Scheduling^1

We consider the appointment scheduling problem with discrete random durations studied in Chapter 2.
Under a simple sufficient condition, the objective of the appointment scheduling problem is discretely convex as a function of the integer appointment vector (Chapter 2), but it is convex and non-smooth when appointment vectors are continuous (Chapter 3). In this chapter, we compute a subgradient of the objective function in polynomial time for any given (real-valued) appointment schedule with independent processing durations. We also extend the computation of the expected total cost (in polynomial time) to any (real-valued) appointment vector. Furthermore, we develop a special-purpose integer rounding algorithm that allows us to develop a hybrid approach combining discrete convexity and non-smooth convex optimization methods. We plan to implement these algorithms and compare the different approaches in computational experiments.

A.1 Introduction and Motivation

We consider the appointment scheduling problem with discrete random durations studied in Chapter 2. The goal of appointment scheduling is to determine an optimal planned start schedule, i.e., an optimal appointment schedule for a given sequence of jobs on a single processor such that the expected total underage and overage cost is minimized. In Chapter 2, we showed that the objective function of the appointment scheduling problem is discretely convex (under α-monotonicity) and that there exists an optimal integer appointment schedule minimizing the objective over integer appointment vectors. These results on the objective function and optimal appointment schedule enabled us to develop a polynomial time algorithm, based on discrete convexity, that, for a given processing sequence, finds an appointment schedule minimizing the total expected cost.

^1 A version of this chapter will be submitted for publication: Begen M.A. and Queyranne M., Minimizing a Discrete-Convex Function for Appointment Scheduling.
On the other hand, in Chapter 3 we considered the same appointment scheduling problem as in Chapter 2 under the assumption that the duration probability distributions are not known and only a set of independent samples is available, e.g., historical data. We showed that, under a simple sufficient condition, the same objective function is convex (as a function of the continuous appointment vector) and non-smooth. Under this condition we characterized the subdifferential of the objective function with a closed-form formula. This characterization is useful, as it allows us to develop two very important extensions. First, we used it in Chapter 3 to determine bounds on the number of independent samples required to obtain a provably near-optimal solution with high probability. Second, we use it in this chapter to obtain a subgradient in polynomial time, so that subgradient methods (together with discrete methods) can be used to optimize the appointment scheduling objective. In this chapter, we use the subdifferential characterization of Chapter 3 with independent processing durations and compute a subgradient in polynomial time for any given appointment schedule. The reason we seek a quickly obtainable subgradient is to use non-smooth convex optimization methods to find an optimal appointment schedule. From Chapter 2 we already have a polynomial time algorithm to minimize the objective and obtain an optimal appointment schedule; however, it is not clear at the moment which technique (discrete or non-smooth) will work faster in practice. Finding a subgradient in polynomial time is not trivial, because the subdifferential formulas include exponentially many terms and some of the probability computations are complicated. In addition to a subgradient, we obtain an easily computable lower bound on the optimal objective value. Furthermore, we extend the computation of the expected total cost (in polynomial time) to any (real-valued) appointment vector.
These results allow us to use non-smooth convex optimization techniques to find an optimal schedule. To combine the discrete and non-smooth algorithms, we develop a special-purpose integer rounding method that takes any fractional solution and rounds it to an integer one with the same or an improved objective value. This rounding algorithm enables us to develop a hybrid approach combining discrete convexity and non-smooth convex optimization methods. In the near future, we plan to implement our algorithms and compare the different approaches in computational experiments.

This chapter is organized as follows.^2 We start by finding a lower bound on the objective functions F and F^D in Section A.2. In that section, we also extend the computation of F(A) (and hence F^D(Ã)) to any real appointment vector A. In Section A.3, we find a subgradient of F (and F^D) in polynomial time. We first compute the probabilities required to obtain a subgradient from the subdifferential ∂F(A) and discuss the complexity of this computation. Then we show how to find a subgradient in polynomial time. In Section A.4, we develop the rounding algorithm and discuss how it can be used with the existing discrete and non-smooth algorithms to build a hybrid approach. Finally, we conclude the chapter in Section A.5.

A.2 Lower Bounds (on the Value) and Computation of F(A) and F^D(Ã) for any Real A

In this section, we find an easily computable lower bound on the values of F(A) and F^D(Ã). When the underage cost coefficients u_i (1 ≤ i ≤ n) are identical, F(A) has additional interesting properties. In this case, α-monotonicity is (automatically) satisfied and hence F(A) is convex by Convexity Proposition 3.3.3 of Chapter 3 (and the same result follows for F^D(Ã) from Corollary 3.3.5 of Chapter 3). Furthermore, a lower bound on the value of F(A) can be computed easily. A remark is in order here.
If the underage cost coefficients are not identical, then for F(A) define u = min{u_1, u_2, ..., u_n} and

f(A) = E_p[ Σ_{j=1}^n ( o_j (C_j − A_{j+1})^+ + u (A_{j+1} − C_j)^+ ) ],

i.e., replace u_i with u for all i (1 ≤ i ≤ n), and find a lower bound for f(A) as described in this section. Since f(A) ≤ F(A), the lower bound obtained for f(A) is also a lower bound for F(A) with non-identical u_i's.

We start by expressing F(A) in a different but equivalent way that will be essential in finding a lower bound on the value of F(A) (and F^D(Ã)). Our first result is a corollary to Lemma 3.3.1 in Chapter 3.

^2 Since this chapter is included in the thesis as an appendix, we omit the introduction of notation and the formal description of the appointment problem, and refer the reader to Chapters 2 and 3.

Corollary A.2.1. If u_i = u for all i (1 ≤ i ≤ n) then

F(A) = E_p[ Σ_{j=1}^n ( o_j (C_j − A_{j+1})^+ + u (A_{j+1} − C_j)^+ ) ]
     = E_p[ Σ_{j=1}^n o_j (C_j − A_{j+1})^+ + u ( max{A_{n+1}, C_n} − Σ_{k=1}^n p_k ) ].

Proof. The proof is an application of Lemma 3.3.1 with a specific choice of α_i (1 ≤ i ≤ n). Choose α_i = 0 (1 ≤ i ≤ n). Then β_i = o_i (1 ≤ i ≤ n), γ_i = 0 (1 ≤ i < n) and γ_n = u_n = u. The result now follows directly from Lemma 3.3.1. □

We need the following definition before computing a lower bound on F(A).

Definition A.2.2. (Single-Period) Newsvendor Problem [8]. A newsvendor must decide the number of units Q to purchase before the demand Y is realized. The newsvendor pays c_h for each unit remaining unsold and c_p for each unit of unsatisfied demand. The objective of the newsvendor is to choose the Q minimizing the expected cost. This problem is well studied and has a closed-form solution.
Let H(Y) be the cumulative distribution function of Y and Q* the optimal order quantity; then

H(Q*) = c_p / (c_h + c_p).

We may think of each job j (1 ≤ j ≤ n + 1) as a single-period newsvendor problem, as if we had only that job to process, and find its solution as given in Definition A.2.2. We use this idea to obtain a lower bound on F(A) in our next result.

Proposition A.2.3. Let u_i = u (1 ≤ i ≤ n), let A* be an optimal appointment vector for F(A), and let a*_j be the (single-period) newsvendor solution for job j (1 ≤ j ≤ n + 1). Then Σ_{j=1}^n E_p[ o_j (p_j − a*_j)^+ + u (a*_j − p_j)^+ ] is a lower bound for F(A*).

Proof. Consider the following optimization problem:

OPT1:  min_A E_p[ Σ_{j=1}^n o_j (C_j^p − A_{j+1})^+ + u ( max{A_{n+1}, C_n^p} − Σ_{k=1}^n p_k ) ]
       subject to A_1 = 0, C_1^p = p_1, C_j^p = max(A_j, C_{j−1}^p) + p_j for 2 ≤ j ≤ n, all p.

OPT1 is the stochastic program minimizing F(A); hence its optimal value is F(A*). We need all the constraints of OPT1, as they capture the essential dynamics of our scheduling problem. We rewrite the constraints C_j^p = max(A_j, C_{j−1}^p) + p_j as C_j^p ≥ A_j + p_j and C_j^p ≥ C_{j−1}^p + p_j for all p, and obtain OPT2:

OPT2:  min_A E_p[ Σ_{j=1}^n o_j (C_j^p − A_{j+1})^+ + u ( max{A_{n+1}, C_n^p} − Σ_{k=1}^n p_k ) ]
       subject to A_1 = 0, C_1^p = p_1, C_j^p ≥ A_j + p_j and C_j^p ≥ C_{j−1}^p + p_j for 2 ≤ j ≤ n, all p.

Since the objective coefficients o_j (1 ≤ j ≤ n) and u are all non-negative and the objective is a non-decreasing function of the C_j^p's, OPT1 and OPT2 are equivalent. Since OPT1 and OPT2 are equivalent, A* is also an optimal appointment vector for OPT2.
Now we relax the constraints C_j^p ≥ C_{j−1}^p + p_j in OPT2 and obtain the relaxation OPT3:

OPT3:  min_A E_p[ Σ_{j=1}^n o_j (C_j^p − A_{j+1})^+ + u ( max{A_{n+1}, C_n^p} − Σ_{k=1}^n p_k ) ]
       subject to A_1 = 0, C_1^p = p_1, C_j^p ≥ A_j + p_j for 2 ≤ j ≤ n, all p.

Observe that OPT3 is a relaxation of OPT2 and that it decomposes into n independent optimization problems, one for each j. Furthermore, the constraints C_j^p ≥ A_j + p_j are binding in any optimal solution (since the objective coefficients are all non-negative and the objective is a non-decreasing function of the C_j^p's). Let a_j = A_{j+1} − A_j for 1 ≤ j ≤ n; then, using C_j^p = A_j + p_j for 1 ≤ j ≤ n, we can rewrite OPT3 as

min_a E_p[ Σ_{j=1}^n o_j (p_j − a_j)^+ + u ( max{A_{n+1}, A_{n+1} − a_n + p_n} − Σ_{k=1}^n p_k ) ].

By Corollary A.2.1, OPT3 may be written equivalently as

min_a E_p[ Σ_{k=1}^n ( o_k (p_k − a_k)^+ + u (a_k − p_k)^+ ) ],

and this is nothing but the sum of n independent newsvendor problems, so we can minimize it by setting a*_i = P_i^{−1}( o_i / (o_i + u) ) for 1 ≤ i ≤ n, where P_i^{−1}(·) is the inverse cumulative distribution of the duration of job i (1 ≤ i ≤ n). The result follows. □

Remark A.2.4. Let f(A) = E_p[ Σ_{k=1}^n ( o_k (p_k − (A_{k+1} − A_k))^+ + u ((A_{k+1} − A_k) − p_k)^+ ) ]; then f(A) is not necessarily a lower bound for F(A). Consider the following example with deterministic processing times. Let n = 4, p_1 = 4, p_2 = 6, p_3 = 1, p_4 = 1, A_1 = 0, A_2 = 3, A_3 = 6, A_4 = 9 and A_5 = 13. Then f(A) = o_1 + 3o_2 + 2u + 3u and F(A) = o_1 + 4o_2 + 2o_3 + u. So for u = o_2 = o_3 we have f(A) > F(A). However, we found a different lower-bound function (as a function of A) for F(A) in Lemma 3.5.6 in Chapter 3.
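To make the decomposition concrete, the following is a minimal Python sketch (function names are ours, for illustration only) that solves each job's single-period newsvendor problem by direct search over integer allowances a and sums the optimal per-job costs, yielding the lower bound of Proposition A.2.3 for discrete duration distributions given as pmf dictionaries (value -> probability):

```python
def newsvendor_cost(pmf, o, u, a):
    """E[ o*(p - a)^+ + u*(a - p)^+ ] for a discrete duration pmf."""
    return sum(pr * (o * max(p - a, 0) + u * max(a - p, 0))
               for p, pr in pmf.items())

def job_newsvendor_opt(pmf, o, u):
    """Optimal single-period newsvendor cost for one job, found by
    direct search; equivalently, the smallest a whose CDF reaches
    o/(o+u) is an optimal allowance a*."""
    return min(newsvendor_cost(pmf, o, u, a) for a in range(max(pmf) + 1))

def schedule_lower_bound(duration_pmfs, o_coeffs, u):
    """Sum of independent per-job newsvendor optima: a lower bound on
    F(A*) when all underage coefficients equal u (Proposition A.2.3)."""
    return sum(job_newsvendor_opt(pmf, o, u)
               for pmf, o in zip(duration_pmfs, o_coeffs))
```

For instance, a single job with duration 1 or 3 (each with probability 0.5) and o = u = 1 has optimal newsvendor cost 1, so 1 is a lower bound on the optimal schedule cost for that instance.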
Next we find a lower bound on the value of the objective with a due date D, i.e., on the value of F^D(Ã). We obtain a lower bound similar to that of Proposition A.2.3.

Corollary A.2.5. Let u_i = u (1 ≤ i ≤ n), let Ã* be an optimal appointment vector for F^D(Ã), and let a*_j be the (single-period) newsvendor solution for job j (1 ≤ j ≤ n + 1). Then Σ_{j=1}^n E_p[ o_j (p_j − a*_j)^+ + u (a*_j − p_j)^+ ] is a lower bound for F^D(Ã).

Proof. Consider the following optimization problem:

OPT F^D:  min_A E_p[ Σ_{j=1}^n o_j (C_j^p − A_{j+1})^+ + u ( max{A_{n+1}, C_n^p} − Σ_{k=1}^n p_k ) ]
          subject to A_{n+1} = D, A_1 = 0, C_1^p = p_1, C_j^p = max(A_j, C_{j−1}^p) + p_j for 2 ≤ j ≤ n, all p.

If we relax the constraint A_{n+1} = D of OPT F^D, we immediately obtain the optimization problem for F(A) (of course with u_i = u). Therefore F(A*) ≤ F^D(Ã*). On the other hand, Σ_{j=1}^n E_p[ o_j (p_j − a*_j)^+ + u (a*_j − p_j)^+ ] is a lower bound for F(A*) by Proposition A.2.3. Hence

Σ_{j=1}^n E_p[ o_j (p_j − a*_j)^+ + u (a*_j − p_j)^+ ] ≤ F(A*) ≤ F^D(Ã*).

This completes the proof. □

We now show how to compute F(A) (and F^D(Ã)) when the processing durations are independent and A is not integer. The following result follows from Theorem 2.7.2 in Chapter 2, together with an observation that keeps track of previous (potential) fractional points. This extra bookkeeping in the case of non-integer appointment vectors costs a factor of n in the complexity of the F(A) computation.

Corollary A.2.6. If the processing durations are stochastically independent and A is a real appointment vector, then F(A) may be computed in O(n^3 p_max^2) time.

Proof. The first job starts at time zero, so S_1 = A_1 = 0 and C_1 = p_1, i.e., the distribution of C_1 is that of p_1.
Next, we look at the start times S_i (2 ≤ i ≤ n). We have S_i = max(A_i, C_{i−1}), so for all k,

Prob{S_i = k} = 0 if k < A_i;  Prob{C_{i−1} ≤ k} if k = A_i;  Prob{C_{i−1} = k} if k > A_i.   (A.1)

Note that S_i and p_i are independent, because S_i is completely determined by p_1, p_2, ..., p_{i−1} and A_1, A_2, ..., A_i. A remark is in order here. If A is integer then k ranges over 0, 1, ..., n p_max; when A is not integer, however, in addition to these integer values we also need to consider the (possibly distinct) fractional values arising from the non-integer A_i. Since A_1 = 0, there are at most n p_max integer values for k; A_2 contributes at most (n − 1) p_max further values, A_3 at most (n − 2) p_max, and so on. Therefore, in total we need to consider at most n^2 p_max distinct values (only n p_max when A is integer).

Since C_i = S_i + p_i, by conditioning on p_i and using the independence of p_i and S_i, we obtain for all k

Prob{C_i = k} = Σ_{j=0}^{p_max} Prob{S_i = k − j} Prob{p_i = j},   (A.2)

and Prob{C_{i−1} ≤ k} = Prob{C_{i−1} = k} + Prob{C_{i−1} ≤ k − 1}. For each i − 1, Prob{C_{i−1} ≤ k} may be computed in O((i − 1)^2 p_max) time. Hence Prob{C_i = k} can be computed once we have the distribution of S_i. For each job i and value k, computing Prob{S_i = k} by Eq(A.1) requires a constant number of operations, and computing Prob{C_i = k} by Eq(A.2) requires O(p_i + 1) operations. Therefore, the total number of operations needed to compute the entire start-time and completion-time distributions for job i is O(n^2 p_max^2). The distributions of T_i and E_i, and their expected values E_p T_i and E_p E_i, can then be determined in O(n^2 p_max^2) time. Therefore, the objective value F(A) is obtained in O(n^3 p_max^2) time. □
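The recursion S_i = max(A_i, C_{i−1}), C_i = S_i + p_i of Eq(A.1)–(A.2) can be sketched as follows in Python. This is our own minimal implementation for illustration, not the thesis's code: distributions are dictionaries mapping support points to probabilities, so fractional appointment times are handled automatically (their fractional support points simply appear as dictionary keys).

```python
from collections import defaultdict

def expected_total_cost(A, pmfs, o, u):
    """Expected total cost of appointment vector A (length n+1, A[0] = 0)
    with independent discrete durations pmfs[i]; o and u are the per-job
    overage and underage cost coefficients."""
    C = dict(pmfs[0])  # distribution of C_1 is that of p_1
    total = 0.0
    for i in range(len(pmfs)):
        if i > 0:
            # start time S_i = max(A_i, C_{i-1})  -- cf. Eq(A.1)
            S = defaultdict(float)
            for c, pr in C.items():
                S[max(A[i], c)] += pr
            # completion C_i = S_i + p_i by convolution -- cf. Eq(A.2)
            C = defaultdict(float)
            for s, ps in S.items():
                for d, pd in pmfs[i].items():
                    C[s + d] += ps * pd
        # expected overage/underage of job i against appointment A[i+1]
        total += sum(pr * (o[i] * max(c - A[i + 1], 0)
                           + u[i] * max(A[i + 1] - c, 0))
                     for c, pr in C.items())
    return total
```

For example, one deterministic job of duration 2 scheduled against A = (0, 3) incurs only one unit of idle time, so the expected cost is u_1 · 1.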
When the appointment vector is not integer, the number of possible values (for completion and start times) grows by a factor of n^2, compared with n in the integer case shown above. Note, however, that in practice some components may share the same fractional parts, which speeds up the computation.

A.3 Obtaining a Subgradient in Polynomial Time

In Section 3.4, we characterized the subdifferential of F(A) and obtained the closed-form formula given in Eq(3.15) for ∂F(A). We also expressed ∂F(A) component by component for a given convex-hull weight vector X, i.e., g_k(X, A), the k-th coordinate of a subgradient at the point A for a particular X. Recall from Eq(3.16) that g_k(X, A) is given by

g_k(X, A) = Σ_{j=k}^n α_j Σ_{S ∈ P*([j])} Prob{I_j = S} X^L_{kj}(S) − α_{k−1} Σ_{S ∈ P*([k−1])} Prob{I_{k−1} = S} Σ_{i ∈ S} X^{T_{k−1}}_i(S)
  + Σ_{j=k}^n β_j Σ_{S ∈ P*([j])} Prob{I_j^> = S} X^{T>}_{kj}(S) − β_{k−1} Σ_{S ∈ P*([k−1])} Prob{I_{k−1}^> = S}
  + Σ_{j=k}^n β_j Σ_{S ∈ P*([j])} Prob{I_j^= = S} X^{T=}_{kj}(S) − β_{k−1} Σ_{S ∈ P*([k−1])} Prob{I_{k−1}^= = S}
  + Σ_{j=k}^n γ_j Σ_{S ∈ P*([j])} Prob{I_j^> = S} X^{M>}_{kj}(S) − γ_{k−1} Σ_{S ∈ P*([k−1])} Prob{I_{k−1}^> = S}
  + Σ_{j=k}^n γ_j Σ_{S ∈ P*([j])} Prob{I_j^= = S} X^{M=}_{kj}(S ∪ {j + 1}) + γ_{k−1} ( 1 − Σ_{S ∈ P*([k−1])} Prob{I_{k−1}^= = S} ).   (A.3)

To obtain a subgradient of F, as seen in Eq(A.3), we need to compute certain probability terms, e.g., Prob{I_{k−1}^= = S}, choose an appropriate (i.e., feasible) X vector, and find a way to deal with the exponentially many terms indexed by S ∈ P*([j]). Our strategy is to compute the probabilities first. We then discuss the complexity of obtaining a subgradient. Finally, we show a way to obtain a subgradient (in fact, two subgradients) fast, i.e., in polynomial time.
A.3.1 Probability Computations

In this section we compute the probabilities Prob{I_j = S} and Prob{I_j^η = S} for S ∈ P*([j]), j ∈ [n + 1] and η ∈ {>, =, <}. Recall that, by Eq(3.4) and Eq(3.6), we have

I_j = arg max_{k ≤ j} {A_k + P_{kj}},    I_j^η = {k ∈ I_j : A_k + P_{kj} η A_{j+1}}.

For easier computation and later purposes, we rewrite I_j^η. Let max I_j = max{k : k ∈ I_j}; then

I_j^η = {k ∈ I_j : A_k + P_{kj} η A_{j+1}} = {k ∈ I_j : A_{max I_j} + P_{max I_j, j} η A_{j+1}}.   (A.4)

[Figure A.1: Event I_5 = {1, 2, 3, 5} visualization]

Eq(A.4) follows from the fact that A_k + P_{kj} is the same quantity for every k ∈ I_j, and in particular for k = max I_j. Let i_s = max{i : i ∈ S}; then using Eq(A.4) we get

Prob{I_j^η = S} = Prob{I_j = S and A_{i_s} + P_{i_s, j} η A_{j+1}}.   (A.5)

We first compute Prob{I_j = S}. Let S ∈ P*([j]); then

Prob{I_j = S} = Prob{ ( ∩_{i ∈ S} (i ∈ I_j) ) ∩ ( ∩_{k ∈ [j]∖S} (k ∉ I_j) ) }.   (A.6)

Eq(A.6) simply says that for two sets to be equal they must agree element by element, i.e., each element of S must be an element of I_j, and any element not in S must not be in I_j. We now provide an intuitive result which is also crucial in computing Prob{I_j = S}.

Lemma A.3.1. If i ∈ I_j then job i starts on time, i.e., S_i = A_i.

Proof. For i = 1 the result holds since job 1 always starts on time (i.e., at time A_1 = 0). For 2 ≤ i ≤ n + 1 we show the contrapositive. If job i does not start on time, then job i is late, i.e., the completion time of job i − 1 is strictly greater than the appointment time of job i (C_{i−1} > A_i). Therefore job i cannot be on the critical path of C_j, and hence i ∉ I_j. □

We give an example to illustrate Eq(A.6) and the application of Lemma A.3.1.

Example A.3.2.
Let S = {1, 2, 3, 5}. Then

Prob{I_5 = {1, 2, 3, 5}} = Prob{1 ∈ I_5 and 2 ∈ I_5 and 3 ∈ I_5 and 4 ∉ I_5 and 5 ∈ I_5}.

The event {I_5 = {1, 2, 3, 5}} is depicted in Figure A.1. In the following arguments we use Lemma A.3.1 to deduce that if i ∈ I_j then job i starts on time.

Let us examine the probability Prob{I_5 = {1, 2, 3, 5}} by starting from the end. 5 ∈ I_5 tells us that job 5 starts at time A_5. 4 ∉ I_5 and 3 ∈ S imply that job 3 starts on time, and we must have p_3 > A_4 − A_3 (if p_3 = A_4 − A_3 then 4 ∈ I_5; if p_3 < A_4 − A_3 then 3 ∉ I_5) and P_{34} = A_5 − A_3, since both 3 and 5 ∈ I_5 (if P_{34} > A_5 − A_3 then 5 ∉ I_5; if P_{34} < A_5 − A_3 then 3 ∉ I_5). Similarly, 2 ∈ I_5 gives us that job 2 starts on time and p_2 = A_3 − A_2. Finally, 1 ∈ I_5 implies that p_1 = A_2 − A_1 (note that A_1 = 0); otherwise either 1 ∉ I_5 (if p_1 < A_2 − A_1) or 2 ∉ I_5 (if p_1 > A_2 − A_1). Therefore,

Prob{I_5 = {1, 2, 3, 5}}
 = Prob{1 ∈ I_5 and 2 ∈ I_5 and 3 ∈ I_5 and 4 ∉ I_5 and 5 ∈ I_5}
 = Prob{1 ∈ I_5 and 2 ∈ I_5 and p_3 > A_4 − A_3 and P_{34} = A_5 − A_3}
 = Prob{1 ∈ I_5 and p_2 = A_3 − A_2 and p_3 > A_4 − A_3 and P_{34} = A_5 − A_3}
 = Prob{p_1 = A_2 − A_1 and p_2 = A_3 − A_2 and p_3 > A_4 − A_3 and P_{34} = A_5 − A_3}
 = Prob{p_1 = A_2 − A_1} Prob{p_2 = A_3 − A_2} Prob{p_3 > A_4 − A_3 and P_{34} = A_5 − A_3}.

The last equality follows from the independence of the job duration distributions. Furthermore, convolutions such as P_{34}, and the corresponding probabilities such as Prob{p_3 > A_4 − A_3 and P_{34} = A_5 − A_3}, can be computed efficiently since our job duration distributions are discrete and independent.

[Figure A.2: Event I_8 = {3, 5, 6} visualization]
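On small instances, the event probabilities in Examples like A.3.2 can be cross-checked by brute-force enumeration of the (finite) sample space. The following Python sketch (our own illustration, with 1-indexed jobs packed into 0-indexed lists) computes Prob{I_j = S} directly from the definition I_j = arg max_{k ≤ j} (A_k + P_{kj}):

```python
import itertools

def critical_set_prob(A, pmfs, j, S):
    """Brute-force Prob{I_j = S}, where I_j = argmax_{k<=j} (A_k + P_{kj})
    and P_{kj} = p_k + ... + p_j.  A[0] = A_1 = 0; pmfs[i] is the pmf of
    job i+1 as a dict value -> probability."""
    target = frozenset(S)
    total = 0.0
    for combo in itertools.product(*[sorted(m) for m in pmfs]):
        pr = 1.0
        for m, v in zip(pmfs, combo):
            pr *= m[v]
        # A_k + P_{kj} for k = 1..j (0-indexed: A[k0] + p_{k0+1..j})
        vals = [A[k0] + sum(combo[k0:j]) for k0 in range(j)]
        best = max(vals)
        Ij = frozenset(k0 + 1 for k0, v in enumerate(vals) if v == best)
        if Ij == target:
            total += pr
    return total
```

For instance, with A = (0, 2), p_1 ∈ {1, 3} (each with probability 0.5) and p_2 = 1 deterministic, job 1 is critical for C_2 exactly when p_1 = 3, so Prob{I_2 = {1}} = 0.5.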
We will compute these probabilities with the RCA (Recursive Computation Algorithm) developed later in this section. We provide another example to show the other possibilities that may occur in the computation of Prob{I_j = S}.

Example A.3.3. Let S = {3, 5, 6}. Then

Prob{I_8 = {3, 5, 6}} = Prob{1 ∉ I_8 and 2 ∉ I_8 and 3 ∈ I_8 and 4 ∉ I_8 and 5 ∈ I_8 and 6 ∈ I_8 and 7 ∉ I_8 and 8 ∉ I_8}.

The event {I_8 = {3, 5, 6}} may be visualized as shown in Figure A.2. As in Example A.3.2, we use Lemma A.3.1 to deduce that if i ∈ I_j then job i starts on time. We compute Prob{I_8 = {3, 5, 6}} by starting from the end and by considering the blocks of jobs between successive elements of S.

6 ∈ I_8 gives us that job 6 starts on time. 7 ∉ I_8 implies A_6 + p_6 > A_7 (if A_6 + p_6 = A_7 then 7 ∈ I_8; if A_6 + p_6 < A_7 then 6 ∉ I_8). Similarly, 8 ∉ I_8 implies A_6 + P_{67} > A_8. 5 ∈ I_8 means that job 5 starts on time and hence A_5 + p_5 = A_6 (if A_5 + p_5 > A_6 then 6 ∉ I_8; if A_5 + p_5 < A_6 then 5 ∉ I_8). 3 ∈ I_8 implies that job 3 starts on time. 4 ∉ I_8 tells us that A_3 + p_3 > A_4 (if A_3 + p_3 = A_4 then 4 ∈ I_8; if A_3 + p_3 < A_4 then 3 ∉ I_8). Furthermore, 3 ∈ I_8 also implies that A_3 + P_{34} = A_5 (if A_3 + P_{34} < A_5 then 3 ∉ I_8; if A_3 + P_{34} > A_5 then 5 ∉ I_8). The remaining jobs, jobs 1 and 2, are not in I_8. This implies that the completion time C_2 of job 2 is strictly less than A_3 (if C_2 > A_3 then 3 ∉ I_8; if C_2 = A_3 then 1 ∈ I_8 or 2 ∈ I_8 or both).
Collecting the arguments above, we obtain:

Prob{I_8 = {3, 5, 6}}
 = Prob{1 ∉ I_8 and 2 ∉ I_8 and 3 ∈ I_8 and 4 ∉ I_8 and 5 ∈ I_8 and 6 ∈ I_8 and 7 ∉ I_8 and 8 ∉ I_8}
 = Prob{1 ∉ I_8 and 2 ∉ I_8 and 3 ∈ I_8 and 4 ∉ I_8 and 5 ∈ I_8 and A_6 + p_6 > A_7 and A_6 + P_{67} > A_8}
 = Prob{1 ∉ I_8 and 2 ∉ I_8 and 3 ∈ I_8 and 4 ∉ I_8 and A_5 + p_5 = A_6 and A_6 + p_6 > A_7 and A_6 + P_{67} > A_8}
 = Prob{1 ∉ I_8 and 2 ∉ I_8 and A_3 + p_3 > A_4 and A_3 + P_{34} = A_5 and A_5 + p_5 = A_6 and A_6 + p_6 > A_7 and A_6 + P_{67} > A_8}
 = Prob{C_2 < A_3 and A_3 + p_3 > A_4 and A_3 + P_{34} = A_5 and A_5 + p_5 = A_6 and A_6 + p_6 > A_7 and A_6 + P_{67} > A_8}
 = Prob{C_2 < A_3} Prob{A_3 + p_3 > A_4 and A_3 + P_{34} = A_5} Prob{A_5 + p_5 = A_6} Prob{A_6 + p_6 > A_7 and A_6 + P_{67} > A_8}.

As in Example A.3.2, the last equality follows from the independence of the job duration distributions.

We next obtain a formula for computing Prob{I_j = S} for any S ∈ P*([j]). The convenient feature of this probability is that it breaks into blocks of independent events delimited by the elements of S, as seen in Examples A.3.2 and A.3.3. We now prove this result.

Lemma A.3.4. Write any subset S ∈ P*([j]) as an ordered list S = {i_1, ..., i_s}, where i_l < i_{l+1} for l = 1, 2, ..., s − 1 and s = |S|. Then

Prob{I_j = S} = Prob{C_{i_1 − 1} < A_{i_1}}
  · Π_{l=1}^{s−1} Prob{ ( ∩_{i_l ≤ k < i_{l+1} − 1} P_{i_l k} > A_{k+1} − A_{i_l} ) ∩ ( P_{i_l, i_{l+1} − 1} = A_{i_{l+1}} − A_{i_l} ) }
  · Prob{ ∩_{i_s ≤ k < j} P_{i_s k} > A_{k+1} − A_{i_s} }.

Proof. Assume I_j = S. First consider i_s = max{i : i ∈ S}. Since i_s ∈ I_j, job i_s starts on time (Lemma A.3.1), and since no job after i_s belongs to I_j, all jobs after i_s must be late, i.e., A_{i_s} + P_{i_s k} > A_{k+1} for i_s ≤ k < j holds. Next, we consider i_l. Since i_l, i_{l+1} ∈ I_j, both of them start on time, and we must have A_{i_l} + P_{i_l, i_{l+1} − 1} = A_{i_{l+1}} (indeed, if A_{i_l} + P_{i_l, i_{l+1} − 1} < A_{i_{l+1}} then i_l ∉ I_j; if A_{i_l} + P_{i_l, i_{l+1} − 1} > A_{i_{l+1}} then i_{l+1} ∉ I_j). Furthermore, all jobs between i_l and i_{l+1} must be late, i.e., A_{i_l} + P_{i_l k} > A_{k+1} for i_l ≤ k < i_{l+1} − 1 (if A_{i_l} + P_{i_l k} = A_{k+1} then k + 1 ∈ I_j; if A_{i_l} + P_{i_l k} < A_{k+1} then i_l ∉ I_j).
Therefore the following event must hold:

∩_{l=1}^{s−1} { ( ∩_{i_l ≤ k < i_{l+1} − 1} A_{i_l} + P_{i_l k} > A_{k+1} ) ∩ ( A_{i_l} + P_{i_l, i_{l+1} − 1} = A_{i_{l+1}} ) }  ∩  { ∩_{i_s ≤ k < j} A_{i_s} + P_{i_s k} > A_{k+1} }.

Finally, consider the jobs before i_1. They are not in I_j while i_1 is, so we must have C_{i_1 − 1} < A_{i_1} (if C_{i_1 − 1} > A_{i_1} then i_1 ∉ I_j). Therefore the following event must hold:

{ C_{i_1 − 1} < A_{i_1} }  (part 1)
 ∩ ∩_{l=1}^{s−1} { ( ∩_{i_l ≤ k < i_{l+1} − 1} A_{i_l} + P_{i_l k} > A_{k+1} ) ∩ ( A_{i_l} + P_{i_l, i_{l+1} − 1} = A_{i_{l+1}} ) }  (part 2)
 ∩ { ∩_{i_s ≤ k < j} A_{i_s} + P_{i_s k} > A_{k+1} }  (part 3).

This proves that {I_j = S} ⊆ part 1 ∩ part 2 ∩ part 3.

Conversely, assume that the outcome p is such that the event part 1 ∩ part 2 ∩ part 3 holds. Part 1 implies m ∉ I_j for 1 ≤ m < i_1, since A_m + P_{m, i_1 − 1} ≤ C_{i_1 − 1} < A_{i_1} (i.e., job m cannot be on the critical path of C_j). Part 3 implies m ∉ I_j for i_s < m ≤ j, since A_{i_s} + P_{i_s m} > A_{m+1} (i.e., job m cannot be on the critical path of C_j), and A_{i_s} + P_{i_s, j−1} > A_j implies i_s ∈ I_j. The equalities A_{i_l} + P_{i_l, i_{l+1} − 1} = A_{i_{l+1}} (1 ≤ l ≤ s − 1) of part 2 imply that either i_l, i_{l+1} ∈ I_j or i_l, i_{l+1} ∉ I_j (i.e., they are either both on the critical path or both off it). But, as shown above, i_s ∈ I_j; therefore i_1, i_2, ..., i_s ∈ I_j. The only remaining thing to show is that m ∉ I_j for i_l < m < i_{l+1} and 1 ≤ l ≤ s − 1. This follows from ∩_{i_l ≤ k < i_{l+1} − 1} A_{i_l} + P_{i_l k} > A_{k+1} of part 2 and the fact that i_l ∈ I_j.

Therefore {I_j = S} = part 1 ∩ part 2 ∩ part 3, and

Prob{I_j = S} = Prob{ part 1 ∩ part 2 ∩ part 3 }.   (A.7)

We now look carefully at part 1, part 2 and part 3.
In terms of processing times (i.e., the p_i's only), part 1 is a function of p_k for 1 ≤ k < i_1, part 2 is a function of p_k for i_1 ≤ k < i_s, and part 3 is a function of p_k for i_s ≤ k < j. Therefore part 1, part 2 and part 3 are independent of each other. Furthermore, we may break part 2 into s − 1 independent smaller parts (one for each l with 1 ≤ l ≤ s − 1), since each term

( ∩_{i_l ≤ k < i_{l+1} − 1} A_{i_l} + P_{i_l k} > A_{k+1} ) ∩ ( A_{i_l} + P_{i_l, i_{l+1} − 1} = A_{i_{l+1}} )

involves only p_k for i_l ≤ k < i_{l+1}. For convenience, for l < m, ξ ∈ {>, ≥} and η ∈ {>, ≥, =, <}, define

Prob^{ξ,η}(l, m) = Prob{ P_{l v} ξ (A_{v+1} − A_l) for l ≤ v ≤ m − 2, and P_{l, m−1} η (A_m − A_l) }.   (A.8)

We now give a recursive algorithm, RCA (Recursive Computation Algorithm), to compute Prob^{ξ,η}(l, m). For 1 ≤ i < k < m and (k − i + 1) p_max ≥ t ξ (A_k − A_i), define

J^ξ_{i,k}(t) = Prob{ (P_{ik} = t) ∩ ( ∩_{i ≤ v ≤ k−1} P_{i v} ξ (A_{v+1} − A_i) ) },   (A.9)
J^ξ_{i,i}(t) = Prob{P_{ii} = t} = Prob{p_i = t}.   (A.10)

If we can compute these probabilities, then

Prob^{ξ,η}(l, m) = Σ{ J^ξ_{l,m−1}(t) : t η (A_m − A_l) }.

Recall that p_max is the maximum possible value of any job duration. The initial conditions for the recursion are given by Eq(A.10). We now provide the recursion for computing the J^ξ_{i,k}(t)'s.
J^ξ_{i,k}(t) = Prob{ (P_{i,k} = t) ∩ ( ∩_{i ≤ v ≤ k−1} P_{i,v} ξ (A_{v+1} − A_i) ) }   (A.11)
 = Σ_{u ∈ [p_max] ∩ [t]} Prob{ (p_k = u) ∩ (P_{i,k−1} = t − u) ∩ ( ∩_{i ≤ v ≤ k−1} P_{i,v} ξ (A_{v+1} − A_i) ) }   (A.12)
 = Σ_{u ∈ [p_max] ∩ [t]} Prob{p_k = u} Prob{ (P_{i,k−1} = t − u) ∩ ( ∩_{i ≤ v ≤ k−1} P_{i,v} ξ (A_{v+1} − A_i) ) }   (A.13)
 = Σ_{u ∈ [p_max] ∩ [t], (t−u) ξ (A_k − A_i)} Prob{p_k = u} Prob{ (P_{i,k−1} = t − u) ∩ ( ∩_{i ≤ v ≤ k−2} P_{i,v} ξ (A_{v+1} − A_i) ) }   (A.14)
 = Σ_u { Prob{p_k = u} J^ξ_{i,k−1}(t − u) : (t − u) ξ (A_k − A_i) and u ∈ [p_max] ∩ [t] }.   (A.15)

Eq(A.11) is just the definition of J^ξ_{i,k}(t) given by Eq(A.9). We then write P_{i,k} as the convolution of p_k and P_{i,k−1} in Eq(A.12). Recognizing that p_k and P_{i,k−1} are independent and that p_k does not appear in the remaining terms, we obtain Eq(A.13). Next, in Eq(A.14) we move the condition P_{i,k−1} ξ (A_k − A_i) into the sum, which yields J^ξ_{i,k−1}(t − u) and therefore the recursion shown in Eq(A.15).

We now take a closer look at the Prob^{ξ,η}(l, m) computation:

J^ξ_{l,l+1}(t) = Prob{ p_l ξ (A_{l+1} − A_l) and P_{l,l+1} = t }
J^ξ_{l,l+2}(t) = Prob{ p_l ξ (A_{l+1} − A_l) and P_{l,l+1} ξ (A_{l+2} − A_l) and P_{l,l+2} = t }
 ...
J^ξ_{l,m−1}(t) = Prob{ p_l ξ (A_{l+1} − A_l) and ... and P_{l,m−2} ξ (A_{m−1} − A_l) and P_{l,m−1} = t }
Prob^{ξ,η}(l, m) = Prob{ p_l ξ (A_{l+1} − A_l) and ... and P_{l,m−2} ξ (A_{m−1} − A_l) and P_{l,m−1} η (A_m − A_l) }
 = Σ_{t η (A_m − A_l)} J^ξ_{l,m−1}(t).   (A.16)

We can now give the formula for Prob{I_j = S}.
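The recursion Eq(A.10)–(A.16) can be sketched as follows in Python (our own minimal implementation for illustration; jobs and appointments are 1-indexed via dictionaries, and the comparison symbols ξ, η are passed as strings):

```python
import operator

OPS = {">": operator.gt, ">=": operator.ge, "=": operator.eq, "<": operator.lt}

def rca(pmfs, A, l, m, xi, eta):
    """Prob^{xi,eta}(l, m) = Prob{ P_{l,v} xi (A_{v+1} - A_l) for
    v = l..m-2, and P_{l,m-1} eta (A_m - A_l) }, computed by the
    RCA recursion; pmfs[i] and A[i] are 1-indexed."""
    cmp_xi, cmp_eta = OPS[xi], OPS[eta]
    J = dict(pmfs[l])          # J_{l,l}(t) = Prob{p_l = t}  -- Eq(A.10)
    for k in range(l + 1, m):  # build J_{l,k} from J_{l,k-1} -- Eq(A.15)
        Jnext = {}
        for t_prev, pr_prev in J.items():
            # keep only paths with P_{l,k-1} xi (A_k - A_l)
            if not cmp_xi(t_prev, A[k] - A[l]):
                continue
            for u, pu in pmfs[k].items():
                Jnext[t_prev + u] = Jnext.get(t_prev + u, 0.0) + pr_prev * pu
        J = Jnext
    # final filter with eta against A_m - A_l  -- Eq(A.16)
    return sum(pr for t, pr in J.items() if cmp_eta(t, A[m] - A[l]))
```

For example, with A_1 = 0, A_2 = 1, A_3 = 3, p_1 ∈ {1, 2} (each with probability 0.5) and p_2 = 1 deterministic, Prob^{>,=}(1, 3) requires p_1 > 1 and p_1 + p_2 = 3, which happens exactly when p_1 = 2, giving 0.5.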
Write S ∈ P*([j]) as an ordered list S = {i_1, ..., i_s}, where i_l < i_{l+1} for l = 1, 2, ..., s − 1 and s = |S|. Then by Lemma A.3.4 and Eq(A.16) we get

Prob{I_j = S} = Prob{C_{i_1 − 1} < A_{i_1}} ( Π_{l=1}^{s−1} Prob^{>,=}(i_l, i_{l+1}) ) Prob^{>,>}(i_s, j).   (A.17)

To compute Prob{I_j^η = S} for η ∈ {>, =, <}, we need a corollary.

Corollary A.3.6.

Prob{I_j^η = S} = Prob{C_{i_1 − 1} < A_{i_1}} ( Π_{l=1}^{s−1} Prob^{>,=}(i_l, i_{l+1}) ) Prob^{>,η}(i_s, j + 1).

Proof. By Eq(A.7) of Lemma A.3.4 and Eq(A.5) we obtain

Prob{I_j^η = S} = Prob{ part 1 ∩ part 2 ∩ ( ∩_{i_s ≤ k < j} A_{i_s} + P_{i_s k} > A_{k+1} ) ∩ ( A_{i_s} + P_{i_s j} η A_{j+1} ) }.

The last two events depend only on p_k for i_s ≤ k ≤ j and together have probability Prob^{>,η}(i_s, j + 1). Proceeding as in the derivation of Eq(A.17), we get

Prob{I_j^η = S} = Prob{C_{i_1 − 1} < A_{i_1}} ( Π_{l=1}^{s−1} Prob^{>,=}(i_l, i_{l+1}) ) Prob^{>,η}(i_s, j + 1).   (A.18)

This completes the proof. □

A.3.2 Complexity of Subgradient Computations

We first look at the complexity of the required probability computations. The complexities of Prob{I_j = S}, Prob{I_j^> = S} and Prob{I_j^= = S} are the same. For Prob{I_j = S}, we need to compute and multiply the probabilities given in Eq(A.17), namely Prob{C_{i_1 − 1} < A_{i_1}}, Π_{l=1}^{s−1} Prob^{>,=}(i_l, i_{l+1}) and Prob^{>,>}(i_s, j). Since Π_{l=1}^{s−1} Prob^{>,=}(i_l, i_{l+1}) is the product of |S| − 1 Prob^{>,=}(·,·) terms, the complexity of Prob{I_j = S} is that of Prob{C_{i_1 − 1} < A_{i_1}} plus that of |S| Prob^{>,>}(i, j) terms.

We start with Prob{C_{i_1 − 1} < A_{i_1}}. The complexity of this probability computation is O((n p_max)^2) when A is integer and O(n (n p_max)^2) otherwise; these bounds follow directly from the expected-cost computations. Recall that h = n p_max, so we can represent the worst case as O(nh^2). Next, we look at the complexity of Prob^{>,>}(i, j). We need to compute Prob^{>,>}(i, j) for every i (i < j), so we have a factor of n. Furthermore, in the worst case we need to compute J^>_{1,n}. For this convolution, we may have as many as n p_max values and O(p_max) operations per value. Finally, |S| may be as large as n, so altogether the complexity of the |S| Prob^{>,>}(i, j) terms becomes O(n · n · n p_max · p_max). Since h = n p_max, this is O(nh^2). Hence, for a given S ∈ P*([j]), the complexity of Prob{I_j = S} is O(nh^2) for a particular j, and O(n^2 h^2) over all jobs. However, because S ranges over P*([j]) (in both ∂F(A) and Prob{I_j = S}), we incur an additional factor of O(2^n) in the complexity of subgradient computations.

A.3.3 Obtaining a Subgradient Fast (in Polynomial Time)

The preceding characterizations of ∂F(A) may not be efficient enough for convex-analysis methods to find an optimal appointment schedule, because of the O(2^n) factor in obtaining ∂F(A). Instead of fully characterizing ∂F(A), we may obtain a single subgradient g ∈ ∂F(A) quickly. We do so not by computing the convex hulls (in ∂L_j(A), ∂T_j(A) and ∂M_j(A)) but by choosing a particular element (the smallest- or largest-index one) of each convex hull, i.e., by setting one particular X variable to 1 and all others to zero in every convex combination.

Recall that the O(2^n) factor comes from the fact that S ranges over P*([j]), where [j] = {1, 2, ..., j}. We are now after just one subgradient rather than the whole subdifferential, so instead of considering all possible (non-empty) subsets of [j] (i.e., P*([j])) for S and computing the corresponding convex hull, we simply choose the vector corresponding to the smallest (or largest) element of each S. In other words, when we choose the smallest element of each S, we eliminate the variables X^{(·)}_{i,j}(S) by constraining X^{(·)}_{i,j}(S) = 1 if i = min S = min{t : t ∈ S} and X^{(·)}_{i,j}(S) = 0 otherwise. Similarly, when we choose the largest element of each S, we eliminate the variables X^{(·)}_{i,j}(S) by constraining X^{(·)}_{i,j}(S) = 1 if i = max S = max{t : t ∈ S} and X^{(·)}_{i,j}(S) = 0 otherwise. We illustrate this idea with an example.

Example A.3.7. Let F(A) = L_3(A); then ∂F(A) = ∂L_3(A). We will find two subgradients, g′(A), g″(A) ∈ ∂L_3(A). Recall that [3] = {1, 2, 3} and, by Eq(3.10) of Chapter 3,

∂L_3(A) = { Σ_{S ∈ P*([3])} Prob{I_3 = S} Σ_{k ∈ S} (1_k − 1_4) X^L_{k3}(S) :
  Σ_{k ∈ S} X^L_{k3}(S) = 1 for all S ∈ P*([3]),  X^L_{k3}(S) ≥ 0 for all S ∈ P*([3]) and k ∈ S }.

We then obtain g′(A) and g″(A) as

g′(A) = Prob{1 ∈ I_3}(1_1 − 1_4) + Prob{2 ∈ I_3, 1 ∉ I_3}(1_2 − 1_4) + Prob{3 ∈ I_3, 2 ∉ I_3, 1 ∉ I_3}(1_3 − 1_4)
      = Σ_{i=1}^{3} Prob{i = min I_3}(1_i − 1_4),  and

g″(A) = Prob{3 ∈ I_3}(1_3 − 1_4) + Prob{2 ∈ I_3, 3 ∉ I_3}(1_2 − 1_4) + Prob{1 ∈ I_3, 2 ∉ I_3, 3 ∉ I_3}(1_1 − 1_4)
      = Σ_{i=1}^{3} Prob{i = max I_3}(1_i − 1_4).

The convenient feature of g′(A) and g″(A) is that the probabilities appearing above may be computed efficiently by RCA. Recall that I_3 = arg max_{k ≤ 3} {A_k + P_{k3}}. For example,

Prob{1 ∈ I_3} = Prob{A_1 + p_1 ≥ A_2 and A_1 + p_1 + p_2 ≥ A_3}.   (A.19)

Prob{2 ∈ I_3 and 1 ∉ I_3} = Prob{A_2 + p_2 ≥ A_3 and A_1 + p_1 < A_2}   (A.20)
 = Prob{A_2 + p_2 ≥ A_3} Prob{A_1 + p_1 < A_2}   (A.21)
 = Prob{A_2 + p_2 ≥ A_3} Prob{C_1 < A_2}.

Prob{3 ∈ I_3 and 2 ∉ I_3 and 1 ∉ I_3} = Prob{A_2 + p_2 < A_3 and A_1 + p_1 + p_2 < A_3} = Prob{C_2 < A_3}.   (A.22)

In this paragraph's arguments we use the Critical Path Lemma 2.4.1 of Chapter 2 for C_3. Eq(A.19) follows from the fact that 1 ∈ I_3 if and only if A_1 is on the critical path. In Eq(A.20), {2 ∈ I_3 and 1 ∉ I_3} means that A_2 is on the critical path but A_1 is not, which happens if and only if A_2 + p_2 ≥ A_3 and A_1 + p_1 < A_2.
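On small instances, the probabilities Prob{i = min I_j} and Prob{i = max I_j} that drive the two subgradients can again be cross-checked by brute-force enumeration. The following Python sketch (our own illustration; the function name is hypothetical) tabulates them directly from the definition of I_j:

```python
import itertools

def min_max_critical_probs(A, pmfs, j):
    """Brute-force Prob{i = min I_j} and Prob{i = max I_j} for i = 1..j,
    with I_j = argmax_{k<=j} (A_k + P_{kj}); A[0] = A_1 = 0 and pmfs[i]
    is the pmf of job i+1.  Returns two lists indexed by i (index 0 unused)."""
    pmin = [0.0] * (j + 1)
    pmax = [0.0] * (j + 1)
    for combo in itertools.product(*[sorted(m) for m in pmfs]):
        pr = 1.0
        for m, v in zip(pmfs, combo):
            pr *= m[v]
        vals = [A[k0] + sum(combo[k0:j]) for k0 in range(j)]
        best = max(vals)
        crit = [k0 + 1 for k0, v in enumerate(vals) if v == best]
        pmin[min(crit)] += pr   # smallest-index critical job
        pmax[max(crit)] += pr   # largest-index critical job
    return pmin, pmax
```

In the tie case A = (0, 1) with deterministic p_1 = p_2 = 1, the critical set is I_2 = {1, 2}, so min I_2 = 1 while max I_2 = 2, illustrating how g′ and g″ can pick different coordinates.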
Eq(A.21) is just the result of the independence of p_1 and p_2. Finally, {3 ∈ I_3 and 2 ∉ I_3 and 1 ∉ I_3} holds if and only if A_3 is on the critical path while A_2 and A_1 are not, which happens if and only if C_2 < A_3; Eq(A.22) states this.

In general, for every j ∈ [n + 1] and i ≤ j we need to compute the probabilities

    Prob{i = min I_j} = Prob{1 ∉ I_j and 2 ∉ I_j and ... and i − 1 ∉ I_j and i ∈ I_j},
    Prob{i = min I_j^>} = Prob{1 ∉ I_j^> and ... and i − 1 ∉ I_j^> and i ∈ I_j^>}

for g′(A), and for g′′(A) we need the probabilities

    Prob{i = max I_j} = Prob{i ∈ I_j and i + 1 ∉ I_j and ... and j − 1 ∉ I_j and j ∉ I_j},
    Prob{i = max I_j^>} = Prob{i ∈ I_j^> and i + 1 ∉ I_j^> and ... and j − 1 ∉ I_j^> and j ∉ I_j^>}.

Recall that by Eq(A.5) we have Prob{I_j^> = S} = Prob{I_j = S and A_{i_s} + P_{i_s,j} > A_{j+1}}, where i_s = max{i : i ∈ S}. Therefore,

    Prob{i = max I_j^>} = Prob{i ∈ I_j^> and i + 1 ∉ I_j^> and ... and j ∉ I_j^>}
                        = Prob{i ∈ I_j and i + 1 ∉ I_j and ... and j ∉ I_j and A_i + P_{ij} > A_{j+1}}.   (A.23)

Let i_1 = min{i : i ∈ S}. Since A_{i_1} + P_{i_1,j} = A_{i_s} + P_{i_s,j}, we can rewrite Prob{I_j^> = S} as

    Prob{I_j^> = S} = Prob{I_j = S and A_{i_s} + P_{i_s,j} > A_{j+1}} = Prob{I_j = S and A_{i_1} + P_{i_1,j} > A_{j+1}},

and therefore

    Prob{i = min I_j^>} = Prob{1 ∉ I_j^> and ... and i − 1 ∉ I_j^> and i ∈ I_j^>}
                        = Prob{1 ∉ I_j and ... and i − 1 ∉ I_j and i ∈ I_j and A_i + P_{ij} > A_{j+1}}.   (A.24)

We next compute Prob{i = min I_j}, Prob{i = max I_j}, Prob{i = min I_j^>} and Prob{i = max I_j^>}, starting with Prob{i = min I_j}.

Lemma A.3.8.

    Prob{i = min I_j} = Prob{C_{i−1} < A_i} Prob{A_i + P_{it} ≥ A_{t+1} ∀t = i, i + 1, ..., j − 1}
                      = Prob{C_{i−1} < A_i} Prob^{≥,≥}(i, j)
                      = Prob{C_{i−1} < A_i} Σ_{m ≥ A_j − A_i} J^{≥}_{i,j−1}(m).

Proof. By definition {i = min I_j} = {1 ∉ I_j and ... and i − 1 ∉ I_j and i ∈ I_j}, i.e., this is the event that A_1, A_2, ..., A_{i−1} are not on the critical path of C_j but A_i is. Then,

    {i = min I_j} = {1 ∉ I_j and ... and i − 1 ∉ I_j and i ∈ I_j}
    ⇔ {max_{k ≤ i−1}(A_k + P_{k,i−1}) < A_i and A_i + P_{it} ≥ A_{t+1} ∀t = i, i + 1, ..., j − 1}
    ⇔ {C_{i−1} < A_i and A_i + P_{it} ≥ A_{t+1} ∀t = i, i + 1, ..., j − 1}.

Now, since C_{i−1} is a function of only p_1, p_2, ..., p_{i−1} (in terms of processing times) and P_{it} is the sum p_i + p_{i+1} + ... + p_t (i ≤ t ≤ j − 1), the events {C_{i−1} < A_i} and {A_i + P_{it} ≥ A_{t+1} ∀t = i, i + 1, ..., j − 1} are independent. Therefore,

    Prob{i = min I_j} = Prob{C_{i−1} < A_i} Prob{A_i + P_{it} ≥ A_{t+1} ∀t = i, i + 1, ..., j − 1},

and by Eq(A.8) and Eq(A.16) we have

    Prob{A_i + P_{it} ≥ A_{t+1} ∀t = i, i + 1, ..., j − 1} = Prob^{≥,≥}(i, j) = Σ_{m ≥ A_j − A_i} J^{≥}_{i,j−1}(m).

The result follows. □

Similarly to Lemma A.3.8, we have the following lemma for Prob{i = max I_j}.

Lemma A.3.9.

    Prob{i = max I_j} = Prob{C_{i−1} ≤ A_i} Prob{A_i + P_{it} > A_{t+1} ∀t = i, i + 1, ..., j − 1}
                      = Prob{C_{i−1} ≤ A_i} Prob^{>,>}(i, j)
                      = Prob{C_{i−1} ≤ A_i} Σ_{m > A_j − A_i} J^{>}_{i,j−1}(m).

Proof. By definition {i = max I_j} = {i ∈ I_j and i + 1 ∉ I_j and ... and j ∉ I_j}, i.e., this is the event that A_{i+1}, A_{i+2}, ..., A_j are not on the critical path of C_j but A_i is. Then,

    {i = max I_j} = {i ∈ I_j and i + 1 ∉ I_j and ... and j − 1 ∉ I_j and j ∉ I_j}
    ⇔ {max_{k ≤ i−1}(A_k + P_{k,i−1}) ≤ A_i and A_i + P_{it} > A_{t+1} ∀t = i, i + 1, ..., j − 1}
    ⇔ {C_{i−1} ≤ A_i and A_i + P_{it} > A_{t+1} ∀t = i, i + 1, ..., j − 1}.
Now, since C_{i−1} is a function of only p_1, p_2, ..., p_{i−1} (in terms of processing times) and P_{it} is the sum p_i + p_{i+1} + ... + p_t (i ≤ t ≤ j − 1), the events {C_{i−1} ≤ A_i} and {A_i + P_{it} > A_{t+1} ∀t = i, i + 1, ..., j − 1} are independent. Therefore,

    Prob{i = max I_j} = Prob{C_{i−1} ≤ A_i} Prob{A_i + P_{it} > A_{t+1} ∀t = i, i + 1, ..., j − 1},

and by Eq(A.8) and Eq(A.16) we have

    Prob{A_i + P_{it} > A_{t+1} ∀t = i, i + 1, ..., j − 1} = Prob^{>,>}(i, j) = Σ_{m > A_j − A_i} J^{>}_{i,j−1}(m).

The result follows. □

Then by Lemmata A.3.8 and A.3.9, and Eq(A.24) and Eq(A.23), it follows that

    Prob{i = min I_j^>} = Prob{C_{i−1} < A_i} Prob^{≥,>}(i, j + 1) = Prob{C_{i−1} < A_i} Σ_{m > A_{j+1} − A_i} J^{≥}_{i,j}(m),  and
    Prob{i = max I_j^>} = Prob{C_{i−1} ≤ A_i} Prob^{>,>}(i, j + 1) = Prob{C_{i−1} ≤ A_i} Σ_{m > A_{j+1} − A_i} J^{>}_{i,j}(m).

As mentioned before, the completion time distributions are already available to us from the expected cost computations, and J_{.,.}(.) is computed efficiently by the recursive algorithm RCA. Therefore we can compute all the probabilities required to find g′ and g′′.

We now return to the computation of g′(A) and g′′(A), starting with g′(A). We will find g′(A) by computing the contributions g′^L_j(A), g′^T_j(A) and g′^M_j(A) of ∂L_j(A), ∂T_j(A) and ∂M_j(A) to g′(A) ∈ ∂F(A), respectively, and obtain g′(A) by Rule 1 (Eq(3.3) of Chapter 3) as

    g′(A) = Σ_{j=1}^{n} (α_j g′^L_j(A) + β_j g′^T_j(A) + γ_j g′^M_j(A)).   (A.25)

We start with g′^L_j(A), the contribution of L_j(A) to g′(A).
Recall that by Eq(3.10) of Chapter 3,

    ∂L_j(A) = { Σ_{S ∈ P*([j])} Prob{I_j = S} Σ_{k ∈ S} (1_k − 1_{j+1}) X^L_{kj}(S) :
                Σ_{k ∈ S} X^L_{kj}(S) = 1  ∀S ∈ P*([j]),
                X^L_{kj}(S) ≥ 0  ∀S ∈ P*([j]), ∀k ∈ S }.

Then, by choosing the smallest index for each S in every convex combination, we obtain

    g′^L_j(A) = Σ_{i=1}^{j} Prob{1 ∉ I_j and 2 ∉ I_j and ... and i − 1 ∉ I_j and i ∈ I_j}(1_i − 1_{j+1})
              = Σ_{i=1}^{j} Prob{i = min I_j}(1_i − 1_{j+1}).   (A.26)

Next, we obtain g′^T_j(A). ∂T_j(A) is given by Eq(3.11) of Chapter 3 as

    ∂T_j(A) = { Σ_{S ∈ P*([j])} [ Prob{I_j^> = S} Σ_{k ∈ S} (1_k − 1_{j+1}) X^{T>}_{kj}(S)
                + Prob{I_j^= = S} Σ_{k ∈ S} (1_k − 1_{j+1}) X^{T=}_{kj}(S) ] :
                Σ_{k ∈ S} X^{T>}_{kj}(S) = 1  ∀S ∈ P*([j]),
                Σ_{k ∈ S} X^{T=}_{kj}(S) ≤ 1  ∀S ∈ P*([j]),
                X^{T>}_{kj}(S), X^{T=}_{kj}(S) ≥ 0  ∀S ∈ P*([j]), ∀k ∈ S }.

We eliminate all the terms in the second line of ∂T_j(A) by assigning 0 to all the X^{T=}_{kj}(S) variables; we may do so because of the inequality Σ_{k ∈ S} X^{T=}_{kj}(S) ≤ 1. Then, by choosing the smallest index for each S in every convex combination for the remaining terms, we obtain

    g′^T_j(A) = Σ_{i=1}^{j} Prob{1 ∉ I_j^> and 2 ∉ I_j^> and ... and i − 1 ∉ I_j^> and i ∈ I_j^>}(1_i − 1_{j+1})
              = Σ_{i=1}^{j} Prob{i = min I_j^>}(1_i − 1_{j+1}).   (A.27)

Finally, we obtain g′^M_j(A).
Recall that by Eq(3.12) of Chapter 3,

    ∂M_j(A) = { Σ_{S ∈ P*([j])} ( Prob{I_j^> = S} Σ_{k ∈ S} (1_k − 1_{j+1}) X^{M>}_{kj}(S)
                + Prob{I_j^= = S} Σ_{k ∈ S ∪ {j+1}} 1_k X^{M=}_{kj}(S ∪ {j + 1}) )
                + (1 − Σ_{S ∈ P*([j])} Prob{I_j^= = S}) 1_{j+1} :
                Σ_{k ∈ S} X^{M>}_{kj}(S) = 1  ∀S ∈ P*([j]),
                Σ_{k ∈ S ∪ {j+1}} X^{M=}_{kj}(S ∪ {j + 1}) = 1  ∀S ∈ P*([j]),
                X^{M>}_{kj}(S) ≥ 0  ∀S ∈ P*([j]), ∀k ∈ S,
                X^{M=}_{kj}(S ∪ {j + 1}) ≥ 0  ∀S ∈ P*([j]), ∀k ∈ S ∪ {j + 1} }.

Here we choose the smallest index for each S in the first convex combination (i.e., the one with the X^{M>}_{kj} terms) and choose j + 1 (note that j + 1 is always in S ∪ {j + 1}) for each S in the second convex combination (i.e., the one with the X^{M=}_{kj} terms). Then,

    g′^M_j(A) = Σ_{i=1}^{j} Prob{1 ∉ I_j^> and ... and i − 1 ∉ I_j^> and i ∈ I_j^>}(1_i − 1_{j+1}) + 1_{j+1}
              = Σ_{i=1}^{j} Prob{i = min I_j^>}(1_i − 1_{j+1}) + 1_{j+1}.   (A.28)

Next we obtain g′(A) by using Eq(A.25) and collecting the g′^L_j(A), g′^T_j(A) and g′^M_j(A) terms together:

    g′(A) = Σ_{j=1}^{n} (α_j g′^L_j(A) + β_j g′^T_j(A) + γ_j g′^M_j(A))
          = Σ_{j=1}^{n} ( α_j Σ_{i=1}^{j} Prob{i = min I_j}(1_i − 1_{j+1})
            + β_j Σ_{i=1}^{j} Prob{i = min I_j^>}(1_i − 1_{j+1})
            + γ_j [ Σ_{i=1}^{j} Prob{i = min I_j^>}(1_i − 1_{j+1}) + 1_{j+1} ] ).   (A.29)

We can also write g′(A) component by component. We derive the formula for g′_k(A), the k-th component of g′(A), directly from Eq(A.29).
    g′_k(A) = −α_{k−1} Σ_{i=1}^{k−1} Prob{i = min I_{k−1}} + Σ_{j=k}^{n} α_j Prob{k = min I_j}
              − β_{k−1} Σ_{i=1}^{k−1} Prob{i = min I_{k−1}^>} + Σ_{j=k}^{n} β_j Prob{k = min I_j^>}
              − γ_{k−1} ( Σ_{i=1}^{k−1} Prob{i = min I_{k−1}^>} − 1 ) + Σ_{j=k}^{n} γ_j Prob{k = min I_j^>}
            = −α_{k−1} Σ_{i=1}^{k−1} Prob{i = min I_{k−1}} + Σ_{j=k}^{n} α_j Prob{k = min I_j} + γ_{k−1}
              − (β_{k−1} + γ_{k−1}) Σ_{i=1}^{k−1} Prob{i = min I_{k−1}^>} + Σ_{j=k}^{n} (β_j + γ_j) Prob{k = min I_j^>}.   (A.30)

By using Eq(A.25), Eq(A.26), Eq(A.27), Eq(A.28), Eq(A.29) and Eq(A.30) we can easily obtain g′′(A) and g′′_k(A) as below.

    g′′(A) = Σ_{j=1}^{n} ( α_j Σ_{i=1}^{j} Prob{i = max I_j}(1_i − 1_{j+1})
             + β_j Σ_{i=1}^{j} Prob{i = max I_j^>}(1_i − 1_{j+1})
             + γ_j [ Σ_{i=1}^{j} Prob{i = max I_j^>}(1_i − 1_{j+1}) + 1_{j+1} ] ).   (A.31)

    g′′_k(A) = −α_{k−1} Σ_{i=1}^{k−1} Prob{i = max I_{k−1}} + Σ_{j=k}^{n} α_j Prob{k = max I_j}
               − β_{k−1} Σ_{i=1}^{k−1} Prob{i = max I_{k−1}^>} + Σ_{j=k}^{n} β_j Prob{k = max I_j^>}
               − γ_{k−1} ( Σ_{i=1}^{k−1} Prob{i = max I_{k−1}^>} − 1 ) + Σ_{j=k}^{n} γ_j Prob{k = max I_j^>}
             = −α_{k−1} Σ_{i=1}^{k−1} Prob{i = max I_{k−1}} + Σ_{j=k}^{n} α_j Prob{k = max I_j} + γ_{k−1}
               − (β_{k−1} + γ_{k−1}) Σ_{i=1}^{k−1} Prob{i = max I_{k−1}^>} + Σ_{j=k}^{n} (β_j + γ_j) Prob{k = max I_j^>}.   (A.32)

Once we know the probabilities Prob{i = max I_j}, Prob{i = min I_j}, Prob{i = max I_j^>} and Prob{i = min I_j^>} for all j, we can find g′(A) and g′′(A) in O(n²), and these probabilities can be computed in O(nh²) by the recursive algorithm RCA. Therefore we can obtain a subgradient of F (in fact two) in polynomial time, namely O(nh²).
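The chain of computations above can be exercised end to end on a small instance. The sketch below uses hypothetical toy data (i.i.d. durations on {1, 2}, not from the thesis); a direct completion-time recursion and a truncated convolution over partial sums stand in for RCA and J_{i,j−1}, and we take β = γ = 0 so that only the α terms of Eq(A.30) remain. Everything is checked against brute-force enumeration of I_j:

```python
from itertools import product
from fractions import Fraction

# Hypothetical toy instance: n = 3 jobs, appointment vector A_1..A_{n+1},
# i.i.d. integer durations p_i on {1, 2}, underage weights alpha only.
n = 3
A = [0, 2, 3, 6]                                  # A[k-1] stores A_k
pmf = {1: Fraction(1, 2), 2: Fraction(1, 2)}
alpha = [Fraction(1), Fraction(2), Fraction(1)]   # alpha_1..alpha_n (beta = gamma = 0)

def prob_min_brute(i, j):
    """Prob{i = min I_j} by enumerating I_j = argmax_{k<=j} {A_k + P_{kj}}."""
    tot = Fraction(0)
    for p in product(pmf, repeat=j):
        w = Fraction(1)
        for v in p:
            w *= pmf[v]
        vals = [A[k] + sum(p[k:j]) for k in range(j)]     # A_{k+1} + P_{k+1,j}
        S = [k + 1 for k in range(j) if vals[k] == max(vals)]
        if min(S) == i:
            tot += w
    return tot

def prob_C_below(i):
    """Prob{C_{i-1} < A_i} via the recursion C_j = max(C_{j-1}, A_j) + p_j."""
    if i == 1:
        return Fraction(1)                        # no earlier job: event is sure
    dist = {}
    for p, wp in pmf.items():                     # C_1 = A_1 + p_1
        dist[A[0] + p] = dist.get(A[0] + p, Fraction(0)) + wp
    for j in range(2, i):                         # build C_2 .. C_{i-1}
        new = {}
        for c, wc in dist.items():
            for p, wp in pmf.items():
                v = max(c, A[j - 1]) + p
                new[v] = new.get(v, Fraction(0)) + wc * wp
        dist = new
    return sum((w for c, w in dist.items() if c < A[i - 1]), Fraction(0))

def prob_tail(i, j):
    """Prob{A_i + P_{it} >= A_{t+1} for all t = i..j-1} by truncated convolution
    over the partial sums P_{it} (the role J_{i,j-1} plays in the text)."""
    dist = {0: Fraction(1)}
    for t in range(i, j):                         # 1-based t
        new = {}
        for s, ws in dist.items():
            for p, wp in pmf.items():
                v = s + p                         # v = P_{it}
                if A[i - 1] + v >= A[t]:          # A_i + P_{it} >= A_{t+1}
                    new[v] = new.get(v, Fraction(0)) + ws * wp
        dist = new
    return sum(dist.values(), Fraction(0))

def prob_min(i, j):
    """Lemma A.3.8 factorization: Prob{C_{i-1} < A_i} * tail probability."""
    return prob_C_below(i) * prob_tail(i, j)

for j in range(1, n + 1):
    for i in range(1, j + 1):
        assert prob_min(i, j) == prob_min_brute(i, j)

# Eq(A.30) with beta = gamma = 0:
#   g'_k = -alpha_{k-1} sum_{i<k} Prob{i = min I_{k-1}}
#          + sum_{j=k..n} alpha_j Prob{k = min I_j}
g = []
for k in range(1, n + 2):
    v = Fraction(0)
    if k >= 2:
        v -= alpha[k - 2] * sum(prob_min(i, k - 1) for i in range(1, k))
    for j in range(k, n + 1):
        v += alpha[j - 1] * prob_min(k, j)
    g.append(v)

assert sum(g) == 0     # consistency check in the spirit of Remark A.3.10 (gamma = 0)
print(g)
```

The exact-rational assertions confirm the Lemma A.3.8 factorization on every (i, j) pair and the zero-sum property of the assembled subgradient components.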
(We have implemented a preliminary program to compute the subgradients g′(A) and g′′(A) for any appointment vector A, real or integer.)

Remark A.3.10. Σ_{k=1}^{n+1} g′_k = Σ_{k=1}^{n+1} g′′_k = Σ_{k=2}^{n+1} γ_{k−1} = γ_1 = u_1 + α_1. This follows from Eq(A.29) and Eq(A.31): since each (1_i − 1_{j+1}) vector sums to zero, all terms but Σ_{j=1}^{n} γ_j 1_{j+1} disappear. This is useful in implementations since it provides an easy test of the subgradient computations. Another observation is that the norms of the subgradients depend on α; therefore, one may want to choose α (among the possible α's) such that the subgradient norm is minimized.

Remark A.3.11. By Proposition 3.4.11 of Chapter 3 we can easily extend our results to F^D and obtain a subgradient for F^D. We just need to find a subgradient in ∂F((Ã, D)), i.e., the last component, A_{n+1}, is set to D in F.

A.4 Algorithms

The objective function of the appointment scheduling problem has useful and interesting properties that allow us to minimize it efficiently. We can divide these properties into two main groups, depending on whether we work with integer or non-integer appointment vectors.

In the case of integer appointment vectors, we can limit our search for an optimal appointment schedule to integer appointment vectors without loss of optimality, by the Appointment Vector Integrality Theorem 2.5.10. Furthermore, under a mild condition on the cost coefficients, namely α-monotonicity (Definition 2.6.5), we can find an optimal appointment schedule with discrete algorithms using polynomial time and a polynomial number of expected cost evaluations (Theorem 2.7.1). In the case of independent processing durations we can minimize F in O(n⁹ p_max² log p_max) time, as shown in Theorem 2.7.3. Similar results hold for F^D (Section 2.8). The results above use algorithms based on L-convexity and submodular set function minimization (e.g., see Section 10.3.2 of Murota [7], and [5], [6], [9]).
Besides these discrete algorithms, the authors of [2, 3] propose using the minimum-norm-point algorithm [11] for submodular set function minimization to reduce the complexity of the existing discrete algorithms. Computational results reported in [3] show that the proposed algorithm may perform better than the existing polynomial algorithms.

On the other hand, if we work with non-integer appointment vectors then the objective is convex by Proposition 3.3.3, under a mild condition on the cost coefficients (α-monotonicity). Furthermore, as shown in this chapter, we can obtain a subgradient of the objective (in fact two subgradients) in O(nh²) time, which is the same complexity as computing the objective at a non-integer point. Therefore we can use non-smooth convex optimization algorithms (e.g., see [1], [4], [10]) to find an optimal appointment vector efficiently.

Both approaches, discrete and non-smooth convex optimization, have their advantages and disadvantages. Discrete methods have polynomial complexity with guaranteed optimality, but they may be slow and more difficult to implement (although the minimum-norm-point algorithm proposed in [3] has the potential to be fast). Non-smooth convex optimization methods may have a fast start (they can take larger steps toward a nearby optimal vector) and can be easier to implement; however, finding an optimal integer solution may be a challenge, and finding a good solution can again be slow. It is not clear at this point which method will be faster and which implementation will be easier in practice.

A third approach, besides using only discrete or only non-smooth convex optimization methods, is to combine the two and develop a hybrid algorithm. The idea is to start with non-smooth methods to get close to an optimal vector quickly, and to switch to discrete methods once the improvement from the non-smooth methods slows down.
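For concreteness, the non-smooth route can be prototyped with a generic diminishing-step subgradient loop. The cost F below is a hypothetical piecewise-linear convex stand-in, not the appointment objective; in an actual implementation the subgradients g′(A) or g′′(A) from Eq(A.29)–(A.32) would take the place of the simple sign-rule subgradient used here:

```python
def F(a, c=(0.0, 3.0, 5.0), w=(1.0, 2.0, 1.0)):
    """Stand-in piecewise-linear convex cost (hypothetical), not the thesis's F."""
    return sum(wi * abs(ai - ci) for ai, ci, wi in zip(a, c, w))

def subgrad(a, c=(0.0, 3.0, 5.0), w=(1.0, 2.0, 1.0)):
    """One subgradient of F at a (any value in [-w_i, w_i] is valid at a kink)."""
    return [wi if ai > ci else (-wi if ai < ci else 0.0)
            for ai, ci, wi in zip(a, c, w)]

def subgradient_descent(a, steps=200):
    """Classical subgradient method with diminishing step sizes 1/k."""
    best, fbest = list(a), F(a)
    for k in range(1, steps + 1):
        gk = subgrad(a)
        a = [ai - (1.0 / k) * gi for ai, gi in zip(a, gk)]
        if F(a) < fbest:                 # track the best iterate (the method
            best, fbest = list(a), F(a)  # is not monotone in the objective)
    return best, fbest

a0 = [4.0, 0.5, 9.0]
a_best, f_best = subgradient_descent(a0)
print(a_best, f_best)
```

This illustrates the "fast start" behaviour mentioned above: the iterates approach the minimizer quickly, while the final iterates are fractional, which is exactly why a rounding step (next section) is needed before handing off to a discrete method.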
However, non-smooth methods work with non-integer appointment vectors, whereas discrete algorithms work with integer appointment schedules. Therefore, to combine the two methods in a meaningful way we must be able to pass the current solution from one to the other without worsening the objective. To achieve this we develop a rounding algorithm that takes any fractional solution (e.g., from a non-smooth optimization method) and rounds it to an integer one with the same or an improved objective value (for a discrete algorithm). With this rounding algorithm we can combine non-smooth and discrete methods into a hybrid algorithm for our appointment scheduling problem. Next we provide the details of the rounding algorithm.

The rounding algorithm is based on the Appointment Vector Integrality Theorem and its supporting Lemmata; see Section 2.5 for the details of these results. We use the same notation as in Section 2.5. Recall that for a given non-integer appointment vector A there exists a positive scalar ∆ from which we construct two new appointment schedules A′ and A′′. The important idea behind the rounding algorithm is that an integer optimal appointment schedule exists by the Appointment Vector Integrality Theorem 2.5.10, and that the objective function changes linearly between A′ and A′′, i.e., either min{F(A′), F(A′′)} < F(A) or F(A′) = F(A′′) = F(A). In other words, at each iteration (unless we have found an optimum) we strictly improve the objective, and we run the algorithm until we obtain an integer A.

We now describe the rounding algorithm. The algorithm starts with an appointment vector A and computes F(A). If A is integer then the algorithm stops; otherwise it finds ∆ as given above. Using ∆, the algorithm constructs the appointment schedule A′ and computes its cost F(A′).
If F(A′) < F(A) then it sets A to A′ and returns to the start; otherwise it generates A′′, sets A to A′′, and returns to the start. We present the rounding algorithm as Algorithm 1.

Algorithm 1 Rounding Algorithm
    start with a given (non-integer) A
    compute F(A)
    while A is not integer do
        find ∆
        generate A′
        compute F(A′)
        if F(A′) < F(A) then
            A ⇐ A′
            F(A) ⇐ F(A′)
        else
            generate A′′
            compute F(A′′)
            A ⇐ A′′
            F(A) ⇐ F(A′′)
        end if
    end while

A.5 Conclusion and Future Work

In this chapter, we used the subdifferential characterization of the objective function of the appointment scheduling problem with independent processing durations to compute a subgradient in polynomial time for any given appointment schedule. Finding a subgradient in polynomial time is not trivial, because the subdifferential characterization includes exponentially many terms and some of the probability computations are complicated. We also obtained an easily computable lower bound on the optimal objective value. Furthermore, we extended the polynomial-time computation of the expected total cost to any real-valued appointment vector. These results allow us to use non-smooth convex optimization techniques to find an optimal schedule. We previously showed that there exists a polynomial time algorithm to find an optimal appointment schedule; however, it is not clear at the moment which technique (discrete or non-smooth) will be faster in practice. Besides the discrete convexity and non-smooth convex optimization approaches, we also developed a hybrid method that combines both approaches via a special-purpose integer rounding method, which takes any fractional solution and rounds it to an integer one with the same or an improved objective value. We believe this hybrid approach may perform well in practice.
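The loop of Algorithm 1 can be sketched in code. The construction of ∆, A′ and A′′ comes from Section 2.5 and is not reproduced here; as a stand-in, the sketch below shifts the set U of coordinates with the largest fractional part up to the next integer (A′) or down (A′′), and the demo cost is a hypothetical separable piecewise-linear function with integer breakpoints, for which F is indeed linear between A′′ and A′, exactly as the rounding argument requires:

```python
import math
from fractions import Fraction

def is_integer(A):
    return all(a.denominator == 1 for a in A)

def round_schedule(A, F):
    """Algorithm 1 skeleton: round a fractional A to an integer schedule
    without worsening F, assuming F is linear between A'' and A'."""
    A = list(A)
    fA = F(A)
    while not is_integer(A):
        frac = [a - math.floor(a) for a in A]
        top = max(f for f in frac if f != 0)
        U = {i for i, f in enumerate(frac) if f == top}   # stand-in choice of shift set
        delta = 1 - top                                   # stand-in Delta: up to an integer
        A_up = [a + delta if i in U else a for i, a in enumerate(A)]   # A'
        f_up = F(A_up)
        if f_up < fA:
            A, fA = A_up, f_up                    # strict improvement: keep A'
        else:
            A_dn = [a - top if i in U else a      # A''; by linearity F(A'') <= F(A)
                    for i, a in enumerate(A)]
            A, fA = A_dn, F(A_dn)
    return A, fA

# Demo cost (hypothetical): separable piecewise-linear with integer breakpoints.
def F(A):
    targets = [0, 3, 5]
    return sum(abs(a - t) for a, t in zip(A, targets))

A0 = [Fraction(1, 2), Fraction(13, 4), Fraction(19, 4)]
A_int, f_int = round_schedule(A0, F)
print(A_int, f_int)
```

Each iteration makes every coordinate in U integral and, by the linearity of F between A′′ and A′, never worsens the objective, so the loop terminates with an integer schedule at least as good as the fractional input.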
In the near future, we plan to implement all of these algorithms and methods and develop a computational engine for the appointment scheduling problem. Besides testing and comparing the discrete, non-smooth and hybrid algorithms in computational experiments, we plan to test the performance of various heuristic methods for both the appointment scheduling and the sequencing problem, and to apply them to real-world appointment scheduling problems in healthcare. (A preliminary version of the rounding algorithm and of the subgradient computations has been implemented.)

A.6 Bibliography

[1] J. Frederic Bonnans, Jean Charles Gilbert, Claude Lemarechal, and Claudia A. Sagastizabal. Numerical Optimization: Theoretical and Practical Aspects. Springer, 2006.

[2] Satoru Fujishige. Submodular systems and related topics. Math. Program. Stud., 22:113–131, 1984.

[3] Satoru Fujishige, Takumi Hayashi, and Shigueo Isotani. The minimum-norm-point algorithm applied to submodular function minimization and linear programming. Working Paper, Kyoto University (available at http://www.kurims.kyoto-u.ac.jp/preprint/file/RIMS1571.pdf), 2006.

[4] Jean-Baptiste Hiriart-Urruty and Claude Lemarechal. Convex Analysis and Minimization Algorithms I and II. Springer, 1993.

[5] Satoru Iwata. Submodular function minimization. Math. Program., 112:45–64, 2008.

[6] S. T. McCormick. Submodular function minimization. A chapter in the Handbook on Discrete Optimization (K. Aardal, G. Nemhauser, and R. Weismantel, eds.). Elsevier, 2006.

[7] Kazuo Murota. Discrete Convex Analysis. SIAM Monographs on Discrete Mathematics and Applications, Vol. 10. Society for Industrial and Applied Mathematics, 2003.

[8] Marcelo Olivares, Christian Terwiesch, and Lydia Cassorla. Structural estimation of the newsvendor model: an application to reserving operating room time. Management Science, 54(1):41–55, 2008.

[9] James B. Orlin. A faster strongly polynomial time algorithm for submodular function minimization. Math. Program., 118(2):237–251, 2007.

[10] R. Tyrrell Rockafellar. The Theory of Subgradients and its Applications to Problems of Optimization: Convex and Nonconvex Functions. 1981.

[11] Philip Wolfe. Finding the nearest point in a polytope. Math. Program., 11:128–149, 1976.