UBC Theses and Dissertations

Scheduling techniques for flexible semiconductor manufacturing tools. Rostami, Shadi. 2002.
Scheduling Techniques For Flexible Semiconductor Manufacturing Tools

by

Shadi Rostami
B.S., Sharif University of Technology, 1997

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Doctor of Philosophy in THE FACULTY OF GRADUATE STUDIES (Department of Electrical and Computer Engineering)

We accept this thesis as conforming to the required standard

The University of British Columbia
December 2002

© Shadi Rostami, 2002

UBC Rare Books and Special Collections - Thesis Authorisation Form

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of
The University of British Columbia
Vancouver, Canada
Date

Abstract

In this thesis, we introduce residency constraints for cluster tools. A residency constraint is a limit on how long raw material can reside on either a processing module or the robot after the process is finished. Like a deadline, it specifies a point in time by which a task must be finished.
However, unlike a deadline, the point in time is no longer a fixed point; it is determined dynamically while the system runs, based on the completion times of other tasks. Our work is the first to attempt to address different forms of residency constraints in cluster tools. We investigate different models of cluster tools, and provide different scheduling techniques for each model.

The first model that we investigate is a single-route model, which assumes that all wafers go through the same processing visits. First, we assume that a residency constraint is only allowed on processing modules, and provide a scheduling technique that searches the time and resource domains for an optimal solution. Then, we allow residency constraints on both the processing modules and the robot. For this model, we introduce another technique that uses a special kind of linear programming and a branch-and-bound method to search for the optimal schedule. In the next model, we use the buffer module to hold the partially processed materials and free the resources to improve the performance of the system. Our experiments show that these techniques find the optimal schedule in a reasonable amount of time.

The last model that we investigate is a multi-route model that supports the production of different products in the same tool configuration. We provide a greedy algorithm that finds a near-optimal schedule for residency-free cases. For the residency-aware model, we provide both optimal and near-optimal scheduling techniques. We use simulated annealing for finding the near-optimal schedule, and provide several optimization techniques to improve the quality of the near-optimal solution.

Contents

Abstract
Contents
List of Figures
Acknowledgements
Dedication
1 Introduction
1.1 Thesis Contribution
1.2 Thesis Organization
2 Model and Definitions
2.1 Single-route Model
2.2 Multi-route Model
3 Related Work
4 Single-Route Model with PM Residency Constraint
4.1 Algorithm Description
4.1.1 Search
4.1.2 Search Space
4.1.3 Optimization Techniques
4.2 Experimental Evaluation
4.2.1 Experiment Design
4.2.2 Experiment Result
5 Single-Route Model with PM and TM Residency Constraints
5.1 Algorithm Description
5.1.1 Time-Space-Search Algorithm
5.1.2 Linear Programming with Shortest Path Algorithm
5.1.3 Pruning Techniques
5.2 Experimental Evaluation
5.2.1 Experiment Design
5.2.2 Experimental Results
6 Single-Route Model with Buffer Module
6.1 Algorithm Description
6.1.1 Phase I: A triple-arm solution
6.1.2 Phase II: Changing the third arm with the buffer module
6.2 Experimental Evaluation
6.2.1 Experiment Design
6.2.2 Experimental Results
7 Multi-Route Model
7.1 Algorithm Description
7.1.1 Calculating the Ideal FP
7.1.2 Residency-Free Scheduler
7.1.3 Residency-Aware Approaches
7.2 Analytical and Experimental Evaluation
7.2.1 Analytical Evaluation of Residency-Free Approaches
7.2.2 Residency-Aware Schedulers
8 Conclusions and Future Work
8.1 Future Work
Bibliography

List of Figures

1.1 A typical cluster tool
2.1 The schedule of a two-visit cluster tool. The second visit has two parallel modules
3.1 Perkinson algorithm for a four-process-module cluster tool
3.2 PCB electroplating line
3.3 Topology of a cluster tool
3.4 A schedule for a cluster tool with a single-arm robot
4.1 A parallel-module visit with two identical processing modules, with t_proc2 equal to 10 and t_eff2 equal to 4
4.2 Timing diagram of a three-visit cluster tool with only one wafer
4.3 Timing diagram of a simple periodic schedule for a three-visit cluster tool with two wafers
4.4 Search window. It is enough to resolve all the conflicts between one wafer (the black one) and its previous and next wafers (gray blocks)
4.5 The timing diagram of the system after moving the first and second blocks one unit to the left
4.6 Search tree
4.7 Three different topologies with 4 PMs and 3 visits
4.8 Performance in terms of fundamental period, with respect to cluster factor
5.1 A topology for which the Time-Space-Search algorithm took about 18 hours to find the solution (t_pick = t_place = 1)
5.2 A route configuration
5.3 Six possible situations should be investigated for each permutation of the above configuration
5.4 A topology for which the LPSP algorithm with only pruning technique one took about 35 minutes to find the solution (t_pick = t_place = 1)
5.5 Algorithm running times of the Time-Space-Search algorithm with respect to the cluster factor. It takes the algorithm several hours to find the solution for cases with both PM and TM residency constraints
5.6 Algorithm running times of the LPSP algorithm with respect to the cluster factor. The algorithm takes less than 35 minutes to find the solution
5.7 Algorithm running times of the LPSP algorithm with pruning technique one with respect to the cluster factor. Pruning technique one is useful when the cluster factor is greater than 1
5.8 Algorithm running times of the LPSP algorithm with both pruning techniques with respect to the cluster factor. The solution was found in at most 4 seconds for all cases
7.1 A two-route configuration. Each node represents a visit, and visit V2 is common between the two routes (i.e. R1: 1-2-3, and R2: 4-2-5). The number on top of a node represents that visit's process time
7.2 A two-route, disjointed configuration, and its schedule. The arrows represent the pick and place actions. The boxes represent the processing at the PMs
7.3 Schedule of the configuration shown in Figure 7.1. The number on the arrows is the route of the wafer that is being picked or placed
7.4 A two-route configuration with one common visit
7.5 A symmetric three-route configuration
7.6 Performance comparison between a primitive solution and multiple runs of Simulated Annealing
7.7 Comparison between a binary-split and a linear-search node evaluation

Acknowledgements

First, I wish to thank my advisor, Dr. Hamidzadeh, for all of his guidance. Also, I would like to thank my committee members and examiners, Dr. Wilton, Dr. Vuong, Dr. Kirkpatrick, and Dr. Sassani, for their very helpful comments and suggestions. Additionally, I would like to thank all of my friends at HP Lab who were always helpful and supportive. This work has been partially funded by Brooks Automation Canada, and the Natural Sciences and Engineering Research Council (NSERC).

SHADI ROSTAMI

The University of British Columbia
December 2002

To my husband, Babak, for all his love, support, patience, and encouragement. To my mother, who taught me the most important lessons of my life.

Chapter 1

Introduction

Cluster tools provide a flexible, reconfigurable and efficient environment [3][4] for several manufacturing processes, especially semiconductor manufacturing. They are like small factories: a number of process modules (PM), each responsible for a special kind of operation, gathered around a transport module (TM). Raw materials (e.g.
wafers) enter the system through the cassette modules (CM), which interface with the environment external to the tool (Figure 1.1). There is also another module available in the cluster tool, called the buffer module (BM). The BM is similar to a PM, except that it does not do any processing and only holds the wafer. This module is usually available for maintenance reasons (e.g. when one of the PMs fails, the BM can hold its wafer until the PM comes back on-line).

When a wafer enters the system through the CM, the TM, which consists of one or more robot arms, moves it from one module to another. After all the necessary process steps are done on a wafer, it returns to the CM that it originated from. When all wafers of a CM are processed, the CM gets unloaded and replaced with another CM containing a new set of raw wafers.

Figure 1.1: A typical cluster tool

One of the important benefits of using a cluster tool is the fact that it is very easily reconfigurable. To reconfigure a tool that was producing a product of type 1 to produce a type-2 product, all that needs to be done is to replace the PMs. The PMs are designed in a way that they can be very easily detached from the tool and replaced with a new PM. Therefore, the same tool with the same TM and controller can be used to produce the new product.

Scheduling a cluster tool involves specifying a sequence of TM actions and their times, applied to material units, to move the units between PMs in order to satisfy a number of constraints and objectives. Due to the fact that the same TM with its controller can be used to produce very different kinds of product, the scheduler must be flexible enough to schedule different models of cluster tool. Also, it should not penalize a simpler model because it can schedule a more complicated model. In other words, it should provide the best schedule for different models of cluster tool.
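The modules and roles described above can be summarized in a small data model. The following is an illustrative sketch only; all class names, field names, and the example route are invented here, not taken from the thesis:

```python
from dataclasses import dataclass

@dataclass
class Visit:
    """One process visit; may be served by several identical PMs."""
    name: str
    num_pms: int      # identical process modules serving this visit
    t_proc: float     # minimum time a wafer must stay in a PM
    t_res_pm: float   # max extra time allowed in the PM after processing
    t_res_tm: float   # max time allowed on the robot arm after pickup

@dataclass
class ClusterTool:
    """A route is an ordered list of visits; wafers enter and leave via the CM."""
    visits: list
    num_arms: int = 2   # dual-arm transport module
    t_pick: float = 1.0
    t_place: float = 1.0

# a hypothetical three-visit route
tool = ClusterTool(visits=[
    Visit("deposit", num_pms=1, t_proc=10, t_res_pm=2, t_res_tm=5),
    Visit("bake",    num_pms=2, t_proc=20, t_res_pm=3, t_res_tm=5),
    Visit("cool",    num_pms=1, t_proc=8,  t_res_pm=4, t_res_tm=5),
])
```

A scheduler for any of the models discussed later would take such a description as its input.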
Due to stringent timing and throughput requirements, the scheduling problems in these tools can be quite complex. There are two important timing constraints that should be satisfied. The first one is the processing time, which is a lower bound on how long a wafer should stay in a PM. The second timing constraint, which is called the residency constraint, is a limit on how long a wafer can stay in a PM after its process is finished. An example of a module with a residency constraint is a module that acts like an always-on oven, warming up the wafers to facilitate the next chemical operation that must be performed on the wafer. For example, if the wafer stays in that module for 10 seconds, it gets to the desired temperature. If the wafer stays there for another couple of seconds, it may still be acceptable. However, if the wafer stays longer, it is burned, and the product is not of an acceptable quality. Therefore, the PM residency constraint for that module is 2 seconds. On the other hand, after the wafer is picked up by the arm, if it stays on the arm for a long time, it cools down and the next operation cannot be performed properly. Therefore, there is also a limit on how long the wafer can reside on the TM after it is picked up from this PM. We call this the TM residency constraint.

A residency constraint is similar to a deadline in the sense that both specify a point in time by which a task must be finished. However, a deadline is a fixed point in time and is known in advance, while a residency constraint
If residency constraints are violated in the manufacturing process of a product, the product becomes worthless. Therefore, a high-performance schedule that does not satisfy the residency constraint produces many unusable wafers, and cannot be used in practice. In any residency-constraint-aware scheduler, the complete schedule of any wafer must be planned before its entrance into the system. Otherwise, the wafer may have to stay in a module for longer than its residency, because other resources are busy processing other wafers. In other words, only algo-rithms that plan ahead, based on a global view of the system, can be used for scheduling, and local-decision-based algorithms, such as "greedy" [6], are not applicable. Some effort has been put into modeling and scheduling clus-ter tools, but to the best of our knowledge, none of it addresses any form of residency constraint for cluster tools. However, in practice there are some pro-cesses, such as chemical vapor deposition and rapid thermal processing that have residency constraints. Scheduling of a cluster tool can be done in two modes: on-line and off-line. Off-line scheduling happens before the cluster tool is operational at 4 the configuration step. At this time, we may be willing to let the scheduler spend as much time as needed in order to find the optimal schedule. In other words, in off-line mode the running time of the scheduler may not matter much. However, when the tool is operational, if one of the modules fail or a new module is added to the system, the tool has to be rescheduled to adopt the new configuration. In this situation, on-line mode, we can not afford to spend a long time for finding the schedule, and the running time of the scheduler becomes important. In on-line mode, we may have to trade-off the optimality of the schedule with the time that the scheduler takes to come up with that schedule. 1 . 
1.1 Thesis Contribution

In this thesis, we introduce residency constraints for cluster tools, and provide solutions to several open scheduling problems in different cluster tool models. The constraints and assumptions in each model are derived from actual manufacturing processes.

The first model that we investigate is a single-route model, which assumes that all wafers go through almost the same path. In this model, we assume that the robot module has two arms, and we assume residency constraints may exist on the PMs. For this model, we provide a scheduling technique that searches the time and resource domains looking for an optimal and periodic solution. Although the search technique is an exhaustive search based on a branch-and-bound technique, the optimal solution is found in a short time, because we reduce the size of the search tree significantly by assuming that we are looking for a periodic schedule, and with the help of several powerful pruning techniques. Our experiments show that for 90% of the cases, the optimal solution is found in less than a second, and in the worst test case it takes approximately 19 seconds on a Sun Ultra-10 SPARC station with 256 MB of memory.

The next model that we consider is a single-route model that allows residency constraints on both the processing modules and the transport module. In practice, there are many process visits that may impose a limit on how long a wafer is allowed to reside on the TM after being picked up from a PM. For this model, we provide two different scheduling techniques, one of which is an extension of the technique that we use for the PM-residency-only model. The complexity of this technique is exponential in both the processing time and the number of visits. The number of visits is usually less than 7.
However, due to the fact that the scheduler may be used to schedule different processing modules, the processing time can be quite large depending on the nature of the process, and this technique may take several hours to find the optimal schedule, which makes it impractical.

In the second technique, we remove the dependency of the algorithm's complexity on the processing time. Since a robot can only perform one pick or place action at a time, we define the state of the system to be the permutation of all the robot's actions within a period in the steady state. In our search, we try to find a permutation such that, if the right times are assigned to the robot's actions, all constraints are satisfied and we have an optimal schedule. To ensure that we find the optimal schedule, we start from a period that can be achieved in the absence of residency constraints, and investigate all permutations that have the potential of leading to the optimal schedule. If we cannot find such a permutation, we increase the period and restart the search. In a model with 6 visits, we may need to look at 14! ≈ 8.7 × 10^10 permutations, which is a very large number. However, we use an intelligent generation process that takes our requirements into account, and significantly reduces the number of permutations that need to be investigated. To evaluate each state (permutation), we create a linear-programming system. Instead of solving this LP system using general techniques, such as simplex [7], that may take a few seconds, we alter the LP system to make it a special kind of LP system (a system of difference constraints), so it can be solved using the Bellman-Ford shortest path algorithm [6]. In this model, with the use of several pruning techniques, the optimal schedule is found in less than 4 seconds (in the same case where the first technique spent several hours).
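A system of difference constraints (inequalities of the form x_j - x_i <= c) can be solved by running the Bellman-Ford relaxation on a constraint graph that has one edge (i, j) of weight c per inequality, as in [6]. The following sketch illustrates that standard construction; it is not the thesis implementation:

```python
def solve_difference_constraints(n, constraints):
    """Find x[0..n-1] with x[j] - x[i] <= c for every (i, j, c),
    or return None if the system is infeasible (negative cycle)."""
    # A virtual source with 0-weight edges to all nodes gives dist = 0 everywhere.
    dist = [0.0] * n
    for _ in range(n):                 # n relaxation passes over all edges
        for i, j, c in constraints:
            if dist[i] + c < dist[j]:
                dist[j] = dist[i] + c
    for i, j, c in constraints:        # one more pass: any change => negative cycle
        if dist[i] + c < dist[j]:
            return None
    return dist                        # dist is one feasible assignment

# x1 - x0 <= 3, x2 - x1 <= -2, x0 - x2 <= 0 (feasible)
sol = solve_difference_constraints(3, [(0, 1, 3), (1, 2, -2), (2, 0, 0)])
```

In the scheduling setting, the variables would be the pick and place times of the robot's actions, and the constants come from the process, residency, and robot-motion bounds.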
Our experiments on the first two models show that in some cases, due to the presence of residency constraints, we may need to increase the period (i.e. decrease throughput). In the next model, we try to use the buffer module to improve performance in these cases. Buffer modules (BM) are usually available in the cluster tool for maintenance purposes, and can be used to temporarily hold the wafer to release other resources, such as the robot arm. However, there is a trade-off in using the buffer module: it can be used to free resources, but it introduces two extra arm actions (i.e. pick up from the buffer, and place into the buffer). Therefore, use of this module does not help when the TM is the bottleneck. For this model, we first apply a scheduling technique that is an extension of the technique we used for the no-buffer model, assuming that there are three arms available in the tool. Next, we try to replace the third arm with the buffer module, if possible. Our experiments show that in 57% of the cases where the analytical lower bound on the period cannot be achieved due to residency constraints, a better schedule can be found by using the BM, as compared to a technique that does not use the buffer resource.

The last model that we investigate is a more general model, called a multi-route model. In this model, different wafers can go through different routes, and different products can be produced with the same tool configuration, simultaneously. One might ask: if we are providing a technique for a multi-route model, why bother providing different techniques for a single-route model? After all, a single-route model is simply a special case of a multi-route model, and if we have a box that solves the multi-route model, we can use it to solve the single-route model as well. The answer is that there is a cost in scheduling a more general model; that cost is the time that it takes our scheduling technique to come up with the solution.
Therefore, if we know that we only want to produce one product with the cluster tool, the single-route model should be used.

Multi-route models are becoming more popular in flexible manufacturing systems, where different products are to be produced with the same tool. The multi-route models that are of most interest in industry are usually special cases that allow a fixed number of products to be produced simultaneously.

We divide the multi-route model into two categories: residency-free and residency-aware models. For the residency-free model, we provide a greedy algorithm [6] to schedule the robot actions in the absence of residency constraints. We prove that this greedy algorithm finds the optimal schedule in the more common cases; however, there are some cases where the greedy algorithm does not find the optimal schedule. For the residency-aware model, we present two techniques: optimal and near-optimal. The optimal technique is an extension of the single-route model technique; it looks at all possible permutations of robot actions in a period in search of the optimal schedule. Due to the fact that one product of each route is to be produced in each period, the number of actions in a period becomes large, and as our experiments show, this scheduling technique might take a long time to find the optimal schedule. Therefore, it can only be used in off-line mode.

However, in on-line mode, where the tool must be rescheduled immediately, we have to provide a scheduling technique that finds the schedule in a reasonable amount of time, even if the schedule is only near-optimal. We use Simulated Annealing (SA) to search for the near-optimal schedule.
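As a generic illustration of this approach, a minimal SA loop over permutation states might look as follows. This is a sketch only: the fitness function here is a toy stand-in, whereas the thesis evaluates the minimum feasible period of each permutation of robot actions:

```python
import math
import random

def simulated_annealing(initial, fitness, neighbor,
                        t0=10.0, cooling=0.95, moves_per_temp=50, t_min=0.01):
    """Generic SA minimizer over permutation states (lower fitness is better)."""
    state, best = list(initial), list(initial)
    temp = t0
    while temp > t_min:
        for _ in range(moves_per_temp):
            cand = neighbor(state)
            delta = fitness(cand) - fitness(state)
            # accept improvements always, worsenings with Boltzmann probability
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                state = cand
                if fitness(state) < fitness(best):
                    best = list(state)
        temp *= cooling
    return best

def swap_two(perm):
    """Neighborhood move: swap two actions in the sequence."""
    i, j = random.sample(range(len(perm)), 2)
    out = list(perm)
    out[i], out[j] = out[j], out[i]
    return out

# toy fitness: count of adjacent out-of-order pairs (stand-in for period evaluation)
actions = list(range(8))
random.shuffle(actions)
fit = lambda p: sum(1 for a, b in zip(p, p[1:]) if a > b)
best = simulated_annealing(actions, fit, swap_two)
```

The thesis's refinements (multiple shorter SA rounds, and a binary-like search over candidate periods during state evaluation) would plug into the `fitness` and outer-loop parts of such a loop.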
We define the state of the system to be a permutation of the robot's actions within a period, and the fitness value to be the minimum possible period in which a feasible schedule exists for that permutation. Like any other simulated annealing method, we need a fast way of evaluating each state to find its fitness value. To find the fitness value, we first check the feasibility of the permutation. If we have not over-utilized the TM (e.g. not having three pick actions in a row) or the common PMs, we check all periods from the ideal period (i.e. the period that can be achieved in the absence of residency constraints) to the so-far-best-found period.¹

We modified the basic SA algorithm significantly to improve performance. One version of the algorithm performs the SA technique more than once but allows fewer moves at each temperature. This method improves performance because, after the first round of SA is solved, the best-found period is improved, and less time is needed for state evaluation in the second round. In another technique, instead of linearly searching all periods from the ideal period to the best-found period, we perform a binary-like search. This technique reduces state evaluation significantly, but it introduces an inaccuracy in the fitness value. However, the time that it saves enables us to evaluate a large number of states to find a better schedule. As a result of combining the different techniques, our experiments show that we can find a near-optimal schedule with a period that in 92% of the cases is only 20% larger than the optimal period, in less than one-tenth of the time that it took the optimal scheduler to find the schedule.

¹ At the beginning, the best-found period is the period of a schedule that only allows one wafer into the system.

1.2 Thesis Organization

The remainder of this thesis is organized as follows. Chapter Two describes different cluster tool models in detail.
Chapter Three provides a literature survey of this area. In Chapter Four, we present a scheduling technique for single-route cluster tools that allows residency constraints on processing modules. Chapter Five presents a scheduling technique that can be applied to single-route cluster tools with residency constraints on both the processing module and the transport module. Chapter Six provides a scheduling technique that uses the buffer module to improve the cluster tool's performance. Chapter Seven introduces the multi-route model, and provides several scheduling techniques to solve problem instances in this model, with and without residency constraints. Chapter Eight provides concluding remarks.

Chapter 2

Model and Definitions

A manufacturing process consists of a number of process visits, which together form a route. We only consider a single-wafer model for cluster tools, which means that each PM can only process one wafer at a time. (In a batch-processing model, by contrast, a PM can process multiple wafers simultaneously.) A process visit (V) can consist of one or more identical PMs. A wafer that needs to be processed in that visit can be processed by any of the identical PMs. To manufacture one product unit, all visits in its route are assumed to be performed once, in a predefined order. We also assume that each visit can appear only once on a wafer's route (i.e. wafers cannot re-visit a process visit in their route).

In a visit V_i, PM_{i,j} denotes the jth PM of that visit. Three numbers are assigned to each V_i: t_proc_i, t_resP_i, and t_resT_i. The process time, t_proc_i, is the time that a wafer must remain on one of the PM_{i,j}s to be completely processed. We define t_BN to be the maximum of t_proc_i over all visits. The residency constraint on the PM_{i,j}s, t_resP_i, is the maximum time that the wafer can stay in a PM_{i,j} after it is processed. The residency constraint on the TM, t_resT_i, is the maximum time that the wafer can reside on the TM after it is picked up from one of the PM_{i,j}s and before it is placed in one of the PMs of the next visit. If a wafer leaves one of the PM_{i,j}s before t_proc_i, it is premature; if it stays more than t_proc_i + t_resP_i on a PM_{i,j}, or more than t_resT_i on the TM, it violates its residency constraints.

A TM transfers wafers from one module to another. It performs two kinds of actions: picking up wafers from a source module, and placing wafers into a destination module. This module can consist of one or more robot arms. A dual-arm TM has two blades pointing in opposite directions, each capable of carrying a wafer. The two blades are tightly coupled by construction; that is, at any time only one of the two blades can pick up or place a wafer from or into a module [37]. One of the advantages of dual-arm TMs is that one of the arms can be used as a temporary buffer.

Wafers enter the system through the cassette modules (CM), which are loaded alternately into the system, so that while the wafers in one CM are processed, the other CM is loaded into the system. Using this model, we can assume that, at any time, wafers exist that are ready for processing.

One of the common metrics of cluster tool performance is throughput. This is defined as the processing rate of the cluster tool, expressed in the number of wafers completed per hour.

Our models are divided into two categories: single-route and multi-route. In a single-route model, all wafers go through the same path and visit the same process visits. A multi-route model allows multiple routes to exist in the tool at the same time (i.e. it supports the manufacturing of more than one product in the same configuration).

2.1 Single-route Model

In a single-route model, all wafers visit the same processing visits.
However, because we allow parallel-module visits, wafers do not have to visit exactly the same processing modules. A wafer that wants to visit a parallel-module visit can do so at any of the identical modules. Having process visits that consist of multiple identical PMs represents the notion of parallelism in a route, and provides some degree of flexibility. Usually, parallel modules are used for visits with long processing times, to achieve better throughput.

For each V_i, an effective process time, t_eff_i, is defined. For a single-module visit, this time is equal to t_proc_i, whereas for a parallel-module visit, it is equal to the processing time of a single module that would have the same effect on cluster tool throughput (in an analytical study in the absence of residency constraints) as all the parallel modules.

Figure 2.1 shows the schedule of two wafers in a single-route cluster tool. In this example, each route consists of two visits, and the second visit has two identical PMs. Upon making the second visit, a wafer can visit any of the PMs (PM_{2,1} and PM_{2,2}) of that visit. In this figure, wafer one visits the first PM (PM_{2,1}) of visit 2, and wafer two visits the second PM (PM_{2,2}) of that visit.

Figure 2.1: The schedule of a two-visit cluster tool. The second visit has two parallel modules.

Except for using different PMs for visit 2, the schedules of wafers one and two are the same. This schedule is periodic, meaning the same schedule repeats for wafers in subsequent periods. As shown in the figure, a new wafer enters the system every fundamental period (FP).

For quality assurance reasons, all wafers should stay for the same amount of time in the same visits, to make sure they are all of the same quality. Therefore, we are looking for a periodic solution with a fundamental period FP.
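To make these timing quantities concrete, the following sketch (with invented numbers, echoing the oven example of Chapter 1) checks one wafer's place/pick times against the process-time window and the PM and TM residency limits:

```python
def check_wafer(schedule, visits):
    """schedule: list of (place_time, pick_time) per visit, in route order.
    visits: list of (t_proc, t_res_pm, t_res_tm) triples.
    Returns True iff process times and PM/TM residency limits are respected."""
    for i, ((pl, pk), (t_proc, t_res_pm, t_res_tm)) in enumerate(zip(schedule, visits)):
        stay = pk - pl
        if not (t_proc <= stay <= t_proc + t_res_pm):
            return False              # premature pick, or PM residency violated
        if i + 1 < len(schedule):
            next_place = schedule[i + 1][0]
            if next_place - pk > t_res_tm:
                return False          # wafer sat on the robot arm too long
    return True

# hypothetical route: the 10 s oven with a 2 s PM residency, then an 8 s step
visits = [(10, 2, 5), (8, 3, 5)]
ok  = check_wafer([(0, 11), (13, 21)], visits)   # stays 11 s and 8 s: feasible
bad = check_wafer([(0, 14), (16, 24)], visits)   # 14 s > 10 + 2: burned wafer
```

A periodic schedule additionally requires that these times repeat with offset FP for every subsequent wafer, which is what the formal constraints below capture.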
In such a schedule, if we find the times at which a wafer should enter and leave the visits in a route, those relative times are repeated for the subsequent wafers in the following periods. Each periodic schedule has an initial, steady, and terminate state. The initial and terminate states are transients. Because we use two CMs, we can assume that there is always a wafer ready to be processed, so the initial and terminate transient states are small compared to the steady state. We are looking for a periodic schedule that has the best throughput in the steady state (i.e. a schedule with the minimum FP).

If we define PK_{w,i} to be the time of picking up wafer w from V_i, and PL_{w,i} to be the time of placing wafer w into V_i, a residency-aware feasible schedule with fundamental period FP should satisfy the following constraints:

- For each wafer w, and each visit V_i, we should have the following:
  t_proc_i <= PK_{w,i} - PL_{w,i} <= t_proc_i + t_resP_i, and
  PL_{w,i+1} - PK_{w,i} <= t_resT_i.

- The schedule must be periodic with a period equal to FP. Therefore, we should have this:
  PK_{w,i} = PK_{w-1,i} + FP, and
  PL_{w,i} = PL_{w-1,i} + FP.

- The robot can perform only one action (pick up or place) at any time; therefore, there should not be any integers w, i, m, and a that satisfy any of the following equations:
  PK_{w,i} = PK_{m,i} + a * FP,
  PK_{w,i} = PL_{m,i} + a * FP, and
  PL_{w,i} = PL_{m,i} + a * FP.

- Only two robot arms are available; therefore, there should not be any three visits V_i, V_j, and V_k and three wafers w, m, and n that satisfy the following inequalities:
  PK_{w,i} < PK_{m,j} < PL_{w,i+1},
  PK_{w,i} < PK_{n,k} < PL_{w,i+1}, and
  PK_{m,j} < PK_{n,k} < PL_{m,j+1}.

Table 2.1 lists the terms that we use for a single-route model.

Table 2.1: Single-route model term definitions
  V_i        Visit i
  PM_{i,j}   jth process module of V_i
  TM         Transport module
  CM         Cassette module
  N          Number of visits
  M_i        Number of PMs of V_i
  t_proc_i   Processing time of V_i
  t_resP_i   Residency constraint of V_i on the PM
  t_resT_i   Residency constraint of V_i on the TM
  t_BN       Maximum of the t_proc_i
  t_pick     Pick time
  t_place    Place time
  FP         Fundamental period
  PK_{w,i}   Time of picking up wafer w from V_i
  PL_{w,i}   Time of placing wafer w into V_i

2.2 Multi-route Model

In a multi-route model, wafers can have different routes. In other words, different kinds of products can be produced simultaneously with the same tool configuration. In our model we assume that there are r routes available in the tool, and 1/r of the wafers belong to each route (i.e. R_1, R_2, ..., R_r). We define N_i to be the number of visits on route i. We also define O_j to be the number of times that visit j occurs in different routes. In other words, O_j shows how many route recipes share V_j.

All wafers belonging to the same product type should undergo the same processing steps, and the time that they remain in their visits should be the same. Otherwise, the quality of wafers belonging to the same product type differs, making quality assurance very difficult. Therefore, if an R_1 wafer stays in a process visit for s seconds, all of the R_1 wafers should stay in that visit for s seconds. This behaviour can be achieved in a periodic schedule.

Our objective is to find a periodic schedule with a minimum fundamental period (FP). By periodic we mean that if we find the times at which the first r wafers enter and leave PMs, those relative times will be repeated for the subsequent wafers in the following periods. As in a single-route model, we are looking for a periodic schedule that has the best throughput in steady state (i.e. a schedule with the minimum FP). Table 2.2 lists the terms that we use for a multi-route model.
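The route-sharing count O_j can be sketched directly (route recipes here are made-up illustrative data):

```python
# O_j is the number of route recipes that include visit j. With r routes
# given as lists of visit ids, count how many routes share each V_j.
from collections import Counter

def occurrence_counts(routes):
    # set() guards against counting a visit twice within one route
    return Counter(v for route in routes for v in set(route))

routes = [[1, 2, 3], [1, 3, 4]]   # r = 2 example routes
o = occurrence_counts(routes)
print(o[3])  # V_3 is shared by both routes -> 2
print(o[4])  # V_4 appears on one route    -> 1
```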
Table 2.2: Multi-route model term definitions
  V_i        Visit i
  TM         Transport module
  CM         Cassette module
  PM         Process module
  r          Number of routes
  R_i        Route i
  N_i        Number of visits of route i
  O_j        Number of occurrences of visit j
  t_proc_i   Processing time of V_i
  t_resP_i   PM residency constraint of V_i
  t_resT_i   TM residency constraint of V_i
  t_pick     Pick time
  t_place    Place time
  FP         Fundamental period

Chapter 3

Related Work

Cluster tools combine multiple processes in one machine, like small factories, giving the advantage of low operational costs per wafer [2]. Cluster tools improve throughput, reduce contamination, achieve better utilization of floor space and reduce human intervention, while providing rapid reconfiguration [35]. Recently, research in the cluster tool area has focused on single-wafer cluster tools. Wood [39] compared batch and single-wafer processing, and showed that single-wafer processing improves performance significantly when the lot size is small (which is the case for flexible manufacturing systems).

Previous work on the modeling and scheduling of cluster tools can be divided into two categories. The work in the first category focuses on finding the optimal schedule of a single cluster tool. The work in the second category addresses the scheduling of a number of cluster tools that work in collaboration with each other. In the second category, each wafer may visit some of these cluster tools to get processed. Because the operation of these cluster tools must be synchronized, the model becomes very complicated, and the optimal solution can not be easily found. [43][42][19][8][17][18] tried to address simple cases of these models. Wood [19][18] considered a very simple model, and assumed that all the modules within one cluster tool perform the same action. Yim [43][42] and Dummler [8] considered a more generic model and introduced near-optimal solutions using simulated annealing and genetic algorithms.
In this thesis, we focus on scheduling a single cluster tool, and cover the literature in that area. Perkinson et al. [23] and Wood [41][40] developed a model for cluster tools with a single-arm TM, and assumed that all visits are single-module (i.e. no parallelism). In their model, all wafers must visit exactly the same modules. Perkinson assumed that the processing times of these modules are the same. In practice, however, different modules have different processing times. In their model, processing modules do not have residency constraints, and we can therefore assume that the processing time of all modules is equal to the maximum processing time (t_BN). It can be easily shown that this assumption does not affect the throughput of the tool. Hence, their periodic schedule can be used to schedule any single-wafer, single-arm, single-route cluster tool that does not have residency constraints.

Perkinson calculated a lower bound on the period of the tool (Ideal FP). There are two kinds of resources available in the tool: PMs and the TM. The period must be large enough to let each of these resources complete its operation. In each period every PM must process a wafer. After the process is finished, a robot arm must pick up the wafer from that module, and place it in the next module in the route-recipe. Then, it can pick up the next wafer that is supposed to be processed in this module from the source, and place it in this module. While the TM is performing these moves (which require 4T time units, assuming t_pick = t_place = T), the PM remains idle. Therefore, each visit needs t_BN + 4T time units in each period. On the other hand, a wafer must be placed in each visit and in the CM, and a wafer must be picked up from each visit and from the CM in each period. Therefore, if there are N visits in the route, the TM must perform (N + 1) pick up actions and (N + 1) place actions. As a result, the TM needs 2T(N + 1) time units in each period.
Therefore, the best possible period becomes the following:

Ideal FP = max(4T + t_BN, 2T(N + 1)).

If the first term of the formula is dominant, the processing time determines the throughput, and the process module with the maximum processing time can have at most 4T idle time (while waiting for the TM to replace the wafer). In this situation, the tool is in the process-bound region. On the other hand, if the second term of the formula is greater, the TM is the bottleneck, and it determines the period. Therefore, to get the best period, the TM must be kept busy all of the time. In this situation, the tool is referred to as being in the transport-bound region. Perkinson showed that while the tool remains in the transport-bound region, changing the process times of the modules does not affect the period.

Perkinson et al. also provided a simple schedule with the "Ideal" period.

Figure 3.1: Perkinson algorithm for a four-process-module cluster tool (timing chart not reproduced).

In their schedule, every wafer remains in every visit for a time equal to Ideal FP - 4T. The period starts when a wafer moves from the CM to the first visit. The robot then moves the wafers in the following order: V_{N-1} to V_N, V_{N-2} to V_{N-1}, ..., V_1 to V_2. Figure 3.1 shows an example of such a schedule. Filled boxes show the time during which the module processes the wafer, and empty boxes show the time that each module remains idle, to make sure that wafers remain in all modules for the same amount of time. Arrows represent the times at which the TM is moving the wafers between modules.

Perkinson [22] presented a model that handles parallel-module visits as well. His solution assigns the wafers in a round-robin fashion to the parallel modules of a visit. In other words, when there are m parallel modules in a visit, the first wafer gets processed in the first module, the second one in the second module, the mth one gets processed in the mth module, and the (m + 1)th wafer goes back to the first module.
Therefore, each of these modules processes a wafer every m wafers. To find the effect of the parallel modules on the cluster tool performance, for each visit i with parallel modules, he defined t_eff_i to be equal to the processing time of a single module that has the same effect on the tool performance as all of the parallel modules. He proved that

t_eff_i = (t_proc_i + 4T) / M_i - 4T,

where M_i is the number of parallel modules in visit i.

Oh [21][20] proposed another scheduling technique for single-wafer, single-route, single-arm cluster tools that do not have any residency constraints. In his technique, a new wafer enters the system every sent-period time units, which is determined by the throughput requirement. The new wafer is processed through the system as quickly as possible. However, the schedule may have some conflicts between the new and previous wafers (e.g. two wafers attempt to use the same PM or TM). In the next step, the algorithm resolves the conflicts by increasing the time that some wafers spend in a visit. This algorithm basically gets the desired performance (sent-period) from the customer and tries to provide a greedy schedule that works with that performance. However, Perkinson proved that the period can not be less than the Ideal FP, even if the user asks for a smaller sent-period. In this situation, Oh's scheduler enters wafers into the tool every sent-period, but when resolving the conflicts, it has to increase the period. In Perkinson's algorithm, each wafer remains in each module for the same amount of time. Because there is a post-processing effect on the wafers that remain in the modules after processing, products of Perkinson's algorithm would be of the same quality. However, even if Oh's algorithm finds the optimal solution using its greedy algorithm (which is not guaranteed), different wafers may stay for different amounts of time in the same module.
Therefore, the tool would create products with different qualities, which makes quality assurance difficult.

Vankatesh et al. [37] proposed an optimal solution for a dual-arm cluster tool that does not have any form of residency constraints. They note that in a dual-arm cluster tool, when wafer i is moving from one visit to another, the destination can be full (e.g. occupied by wafer j). The TM picks up wafer i from the first visit with one arm, moves to the next visit, picks up wafer j with the other arm, and places wafer i in that visit. They refer to this set of actions as a new action, called "exchange". An advantage of a dual-arm tool over a single-arm one is the ability to use the arms as a buffer. Therefore, if the tool is in the transport-bound region, where the number of actions determines the period, having a dual arm does not improve the throughput. However, when the tool is in the process-bound region, using the exchange action we can reduce the amount of time that the process modules remain idle from 4T to 2T, and FP becomes t_BN + 2T.

Srinivasan [36] and Zuberek [46][45] used timed Petri nets to model cluster tool behavior. They also focused on single-wafer, single-route cluster tools with no residency constraints. In their model, they assume that they know the sequence of the robot actions (which is their state transition) for drawing the Petri net. They use the Petri net to analyze the tool and find the optimal period and the times of the robot actions. Their model can not be used when a residency constraint is present, because they assume that the sequence of the robot actions is known beforehand when drawing the graph. However, one of the important components of the scheduler's attempt to address residency constraints is to come up with a feasible sequence of actions.

Herrmann [9], Poolsup [25], LeBaron [14] and Koehler [11] provided simulators for cluster tools.
All of them focused on simulating a single-wafer, single-route cluster tool that does not have any residency constraints. Given a predefined sequence of robot moves (which is only possible in residency-free models), Herrmann tried to show the impact of a process change on tool performance with the use of a simulator. He claims that the simulation may help the cluster tool designer to choose process-module times that would result in the best utilization of all the resources.

Although we have not found references to any work that addresses residency constraints in cluster tools, there exists literature that deals with manufacturing processes with a form of residency constraints [24][34][16][1][13][44]. In their model, used for creating electronic circuit boards (a PCB electroplating line), a number of tanks are responsible for performing a special process on the board. These tanks are arranged in a row (Figure 3.2). Boards are mounted on a carrier at the end of the row (load station). The carrier is then moved from tank to tank by a single hoist. Each carrier must go through the same sequence of tanks (i.e. a single-route model).

Figure 3.2: PCB electroplating line (schematic not reproduced; tank 0 is the load/unload station, tanks 1 to N perform processes 1 to N, and a single hoist moves the carrier along the row).

In their model, because the tanks are located in a row, the time that the hoist spends moving from one tank to another depends on the positions of the source and destination. However, in cluster tools, because all processing modules are placed around the robot arm, the arm spends almost the same amount of time when moving from one module to another. Lei et al. [15] proved that even if only one hoist is available, the scheduling problem is NP-complete. In [24][34][1] the authors use only one material handler (as a TM with one robot arm). In [16] two material handlers are used, which, however, operate in a completely disjointed manner from each other.
This prevents their algorithm from using the material handler as a temporary buffer. Also, they consider the action of picking up from one process module, moving to the destination, and placing the material in the destination module as an atomic action (i.e. they do not consider exchange [37] as a possible TM sequence of actions). In our model, we consider pick up and place to be different actions, so wafers can stay on an arm after being picked up and before being placed in the destination. In such a model, we can use the arms as buffers. In fact, our model extends the exchange operation to its general form, where in the interval between picking up a wafer and placing it in its destination, more than one TM move might exist.

To be able to compare our model with their approach, we assume that our robot has only one arm. Consider a case with the configuration shown in Figure 3.3. Each node in that figure represents a PM, and a wafer should visit these PMs from left to right (i.e. each PM is connected to its previous and next PM with an arc). Assume that t_pick = t_place = 1. The best possible period according to Perkinson [23] is t_BN + 4T = 6 + 4 = 10. In a model where move actions are atomic, it is easy to see that a periodic schedule with that period does not exist. However, if we do not force moves to be atomic, we get Figure 3.4, which shows a schedule whose period is equal to 10. As the figure shows, there are some periods of time during which an arm is used as a buffer.
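The Perkinson lower bound used in this example, Ideal FP = max(4T + t_BN, 2T(N + 1)), can be checked with a quick sketch (helper name is ours):

```python
# Ideal FP = max(4T + t_BN, 2T(N + 1)): the period must cover both the
# bottleneck PM (process time plus 4 moves) and the TM's N+1 picks and
# N+1 places per period, with t_pick = t_place = T.

def ideal_fp(t_bn, n_visits, t_move):
    process_bound = 4 * t_move + t_bn               # PM side
    transport_bound = 2 * t_move * (n_visits + 1)   # TM side
    return max(process_bound, transport_bound)

# Figure 3.3 case: three visits, t_BN = 6, T = 1 -> process-bound.
print(ideal_fp(t_bn=6, n_visits=3, t_move=1))   # 10
# A short-process, many-move case is transport-bound instead.
print(ideal_fp(t_bn=2, n_visits=5, t_move=2))   # 24
```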
During our research we published a number of papers that address different models of cluster tools with residency constraints [5][31][29][30][26][27][28][32]. To the best of our knowledge, these works are the only ones that try to address residency constraints in cluster tools.

Figure 3.3: Topology of a cluster tool (three PMs in a row, with process times 4, 4, and 6, and PM residency constraints of 0).

Figure 3.4: A schedule for a cluster tool with a single-arm robot (timing chart not reproduced; the arm is used as a buffer during some intervals).

Chapter 4

Single-Route Model with PM Residency Constraint

Scheduling cluster tools involves specifying a sequence of TM actions and their times, and applying these to wafers to move them between PMs in order to satisfy a number of constraints and objectives. Due to stringent timing and throughput requirements, the scheduling problem in these tools can be quite complex. In this chapter, we start from the simplest model: a single-wafer, single-route and dual-arm model with parallel-module visits that allows residency constraints on the PM. In the next chapters we look at more complicated models.

As mentioned before, for each V_i, an effective process time, t_eff_i, is defined. For a single-module visit, this time is equal to the processing time t_proc_i, whereas for a parallel-module V_i, it is equal to the processing time of a single module that has the same effect on cluster tool throughput (analytical study in the absence of the residency constraint) as all the parallel modules.

Figure 4.1: A parallel-module visit with two identical processing modules with t_proc_2 equal to 10, and t_eff_2 equal to 4 (timing chart not reproduced).

We assign wafers to parallel-module visits in a round-robin fashion, so each of the M_i parallel modules of V_i processes a wafer every M_i wafers. Therefore, while one of the PM_{i,j} modules is processing a wafer, M_i - 1 exchanges occur in that visit (exchanges with the other modules).
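A small sketch of this round-robin assignment and the resulting effective process time (with t_pick = t_place = 1, as in the Figure 4.1 example; helper names are ours):

```python
# Round-robin assignment to the M_i identical modules of a parallel
# visit (wafer w uses module w mod M_i), and the effective process time
# t_eff_i = (t_proc_i - (M_i - 1) * (t_pick + t_place)) / M_i.

def module_for_wafer(w, m_i):
    return w % m_i

def t_eff(t_proc, m_i, t_pick=1, t_place=1):
    return (t_proc - (m_i - 1) * (t_pick + t_place)) / m_i

print([module_for_wafer(w, 2) for w in range(6)])  # [0, 1, 0, 1, 0, 1]
print(t_eff(10, 2))  # Figure 4.1 example: 4.0
```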
The effective time of the parallel-module visit is equal to the time from placing a wafer into that visit to picking up a wafer from that visit (as shown in Figure 4.1). Therefore,

t_proc_i = M_i * t_eff_i + (M_i - 1) * (t_pick + t_place),

or, equivalently,

t_eff_i = (t_proc_i - (M_i - 1) * (t_pick + t_place)) / M_i.

In the example in Figure 4.1, the second visit has two modules with a processing time of 10 units. Because these two modules can process wafers simultaneously, their effect on the throughput is similar to a single-module visit with a processing time of 4 units, and t_eff_2 would be 4. It is important to note that these numbers are only valid when we do not have any residency constraints. When a residency constraint is present, we can not determine its effect with a simple calculation. The whole algorithm must run to find the feasible schedule, but this number can be used to calculate the Ideal FP (the FP that can be achieved in the absence of residency constraints). Lemma 1 proves that we can assign wafers to the identical modules of a parallel-module visit in a round-robin fashion without losing the optimality of our schedule.

Lemma 1: If a feasible solution with period FP exists, there is always a solution with that period that assigns the wafers to the identical modules of the parallel-module visit in a round-robin fashion.

Proof: Assume that the parallel-module visit has m identical process modules. Consider the first m + 1 wafers; two of these wafers must have been processed in the same module. Because one wafer enters the system in each period, the time that the schedule lets a wafer stay in the visit is less than m * FP. We proceed by induction.

Base step: For the first m wafers, we leave their schedule as it is, but make sure that they are assigned to the identical modules of the parallel-module visit in a round-robin fashion. The time that they spend in that visit remains as before.
Because, at the beginning, all of these modules are empty, we can assign the wafers to the modules. Also, because all of the parallel modules are identical, if in our original schedule we have satisfied the process and residency timing constraints, in this new schedule we are still satisfying those constraints.

Induction step: For any k > m, if all wafers from 1 to k have been assigned to those modules in a round-robin fashion, we can assign the (k + 1)th wafer in the same fashion. We assign the (k + 1)th wafer to the same module as the (k + 1 - m)th wafer. The (k + 1 - m)th wafer was assigned to that module m * FP time units earlier. Because the time that a wafer stays in that visit is less than m * FP, the (k + 1 - m)th wafer must already have been picked up, so the module is free at this time, and the place action can take place. Therefore, all wafers can be assigned to that visit in a round-robin fashion without losing the optimality of our schedule. •

As we said before, we are looking for a periodic schedule with a period FP that has the best throughput in the steady state (i.e. a schedule with the minimum FP). We define the time that it takes a wafer to get processed completely and return back to the CM as t_total. We prefer a schedule with a shorter FP, although it may have a longer t_total. The amount by which we increase t_total is equal to the time that each wafer remains idle either on PMs or the TM.

To represent a schedule, we define all the times that the first wafer (W_1) enters and leaves a visit; E_i shows the time that W_1 enters V_i, and L_i shows the time that it leaves V_i. L_0 is the time that it leaves the CM, and E_{N+1} is the time that it is placed back into the CM. The other wafers enter and leave visits at the same relative times as the first wafer, with a multiple of FP added to all the times. So for W_k, we must add (k - 1) * FP to all the W_1 times.

In a single-route cluster tool, because all the wafers go through the same route, the visit whose effective processing time is greater than the others is important in determining the throughput of the system. We call this visit the bottleneck. The bottleneck time, t_BN, is equal to t_eff_i of that visit.

Figure 4.2 shows the timing diagram of a cluster tool that has three single-module visits. The schedule of the first wafer (W_1) is shown. Filled boxes show the time intervals during which the wafer stays in the PM of V_i, which is equal to t_proc_i of that visit in this diagram. After that time, the wafer is moved to the
In a single-route cluster tool, because all the wafers go through the same route, the visit whose effective processing time is more than the others is important in determining the throughput of the system. We call this visit the bottleneck. The bottleneck time, tBN, is equal to £e//i of that visit. Figure 4.2 shows the timing diagram of a cluster tool that has three single-module visits. The schedule of the first wafer (Wi) is shown. Fil led boxes show the time intervals during which the wafer stays in the P M of K , which is equal to tproct of that visit in this diagram. After that time, the wafer is moved to the 34 0 2 3 4 5 6 6 7 8 9 10 11 12 13 Time Figure 4.2: Timing diagram of a three-visit cluster tools with only one wafer next visit. Each of these boxes, with their start and end arrows, are called a block (tBiocki = tproci + tpick -f- t p i a c e ) . The arrows show the transport times. The arrow at the beginning of the block represents a place action, and the arrow at the end of the block represents a pick-up action. We define Bitj to be the jth block of Wi. Bito and Bi>N+i represent a pick up from, and a place into the CMs. Our technique searches in the time and resource domains for a feasible schedule with a maximum throughput. It operates in two main phases. In the first phase, a simple periodic schedule is computed with low complexity. For a large number of problem instances, the simple periodic schedule feasibly solves the problem. If a feasible schedule is not found in the first phase, the scheduler enters phase two (with a higher degree of complexity) to compute a feasible schedule. During this phase, the periodic nature of the schedule is preserved. The scheduler incrementally increases the period only if necessary in order to keep the throughput at a maximum. 
Several optimization techniques are 35 Notation Definition Vi Visit i jth process module of Vi N Number of visits Mi Number of PMs of Vi tproci Processing Time of Vi tresPi residency constraint Vi on the P M teffi Effective processing time of Vi tp,N Maximum of teff{ tpick Pick up time tplace Place time F P Fundamental Period Wi Wafer i Bi,3 jth Block of Wi Ei The time that Wx enters Vi u The time that W\ leaves Vi ttotal tadd The amount that must be added to ttotai to reach a feasible solution Table 4.1: Term Definitions 36 A l g o r i t h m 1 Algorithm Overview Calculate Ideal FP w h i l e not DONE do Schedule <— get-periodic-schedule(FP). i f conflict-free (Schedule) t h e n Done e n d i f search(FP, Schedule). i f a conflict-free schedule was found during the search t h e n Done else FP <•• FP + 1. e n d i f e n d w h i l e designed and added to reduce the complexity of the scheduling algorithm. The resulting schedules are deadlock free, since resources are scheduled according to the times that they are available. Analytical and experimental analyses are presented to demonstrate the correctness and efficiency of the proposed technique. Table 4.1 shows some terms that we use in this chapter. 4 .1 Algorithm Description Algorithm 1 shows an overview of the algorithm. For any single-module Vi , FP must be greater than or equal to its tBiocki, for parallel-module visits it must be greater than or equal to the effective block time of the equivalent single module visit. Considering the definition oitBN, the best possible period (Ideal FP) is at least tBN +tPick + tpiace • Any smaller period is less than the bottleneck block, so no feasible solution exists with that period. The first line of Algorithm 1 calculates the Ideal FP. Then, a simple 37 A l g o r i t h m 2 Pseudo code of the Simple Periodic Schedule for i = 1 to N do Ei <- L j _ ! 
Li Ei + tpiace + tpr0Ci + tpici~ e n d for EN+I <— L N 0 1 2 3 4 5 y 6 7 8 9 10 11 12 13 14 15 16 17 18 19 A Conflict Time Figure 4.3: Timing diagram of a simple periodic schedule for a three-visit cluster tools with two wafers periodic schedule with that F P is made. Algorithm 2 shows the algorithm for finding the timing of a simple periodic schedule, and Figure 4.3 shows its timing diagram (only 2 wafers are shown here). In a feasible schedule these conditions must be satisfied: 1. The time a wafer spends in a P M must be lower bounded by tproCi and upper bounded by tproCi + tresPi. 2. A t each time there must be at most one wafer in each P M . 38 0 1 2 3 4 5 6 7 8 9 10 11 12 13 Time Figure 4.4: Search window. It is enough to resolve all the conflicts between one wafer (the black one) and its previous and next wafers (gray blocks) 3. It must be a conflict-free schedule. In a conflict-free schedule T M s are scheduled such that no two transport actions are scheduled at one time, and at most two wafers reside on the two arms. In the simple periodic schedule, condition one arid two are satisfied, but con-flicts may occur between the time slots during which T M s are scheduled to move wafers from one module to another. A n example of such conflict is shown in Figure 4.3. If the initial schedule is not conflict-free, we try to search and find a conflict-free solution. In our search we start from that simple periodic schedule and try to find a conflict-free solution with that fixed period, but with a larger ttotai- If no solution can be found with that F P , F P would be increased one unit and a new search must be performed. 
Figure 4.5: The timing diagram of the system after moving the first and second blocks one unit to the left (chart not reproduced; a new conflict is marked).

4.1.1 Search

Due to the periodic nature of this schedule, if we can schedule a wafer such that it does not have any conflict with the previous or next wafers, we have found the optimal schedule. To find a conflict-free schedule, it suffices to search in a search window (SW) whose length is equal to t_total. This is due to the fact that the pattern in the SW is repeated over time. In the SW we have to consider all the portions (blocks) of the previous and next wafers that can take place in that time interval (Figure 4.4). To resolve any conflicts in the SW, the transport moves involved in the conflict must be rescheduled, and at least one of the transport actions in the conflict must be moved. Because the relation between the blocks is important, and not their absolute times, we only move blocks to the left (i.e. to an earlier time).

There are two ways of rescheduling a transport action associated with a block:

1. Slide: Move a whole block some units of time to the left. Through this action, both the pick up and the place action associated with the block occur at earlier times. In Figure 4.3, if we slide both B_{1,0} and B_{1,1} one unit, some conflicts are resolved, but some new ones are created. Figure 4.5 shows the state after these moves. As shown in this figure, the time needed for creating a single wafer is increased from 13 to 14, but the period still remains at 6.

2. Stretch: Stretch a block to the left, so the size of the block becomes bigger. Through this action, the place action associated with the block occurs sooner, but the pick up does not change. The meaning of stretching a block is to force a wafer to remain in a PM longer than t_proc_i.
Considering the definition of t_resP_i, this block can increase by, at most, t_resP_i. By performing either of these two actions for one unit of time, the time needed to produce a wafer, t_total, is increased by one, and the increased time is equal to the time that we use the TMs (in slide) or the PMs (in stretch) as buffers. Neither of these actions changes FP.

Before sliding or stretching a block, the previous block must have been moved, so sliding the first block (B_{1,0}) determines which configurations we can see in the search space, and how complex the algorithm would be. Lemma 2 and Lemma 3 show a bound on the sliding of the first block (i.e. a maximum for t_add). By searching in the SW, and sliding B_{1,0} by, at most, t_add, we should be able to look at all the possible configurations. If none of the resulting states is conflict-free, then there is no answer with that period. So, we increase the fundamental period by one, and start the search again with the new period.

Lemma 2: In a case where all the t_resP_i are equal to zero, no feasible solution can be found with t_add > 2 * FP - (t_pick + t_place) * (N + 1).

Proof: In the first simple periodic schedule (Algorithm 2), a wafer does not have any idle time (it is being processed or moved all the time). t_add is the time that each wafer would remain idle. During this idle time, the wafer must be buffered either by the PMs or the TMs. Because the t_resP_i are equal to zero, the PMs can not be used as buffers. Thus, the wafer should spend all of its idle time, t_add, on the TMs, using the arms as buffers. On the other hand, if there are M wafers in the system, the total time that the system is operating is equal to:

System time = Initial state + steady state + terminate state
System time = Initial state + M * FP + terminate state ≈ M * FP

If we keep both arms busy the entire time the system is operating, we can use them for 2 * FP * M time units.
Considering the fact that we are looking for a periodic schedule, all wafers use the robot arms for the same amount of time. Therefore, each wafer can use the robot arms for at most 2 * FP time units. We know that each wafer uses the robot arms for (t_pick + t_place) * (N + 1) time units for transport actions. Therefore, each wafer can use the robot arms for at most 2 * FP - (t_pick + t_place) * (N + 1) time units as a buffer. Therefore, if t_add exceeds that bound, no feasible solution can be found. •

Lemma 3: In a case where the t_resP_i are not equal to zero, the following provides a bound for t_add:

t_add <= 2 * FP - (t_pick + t_place) * (N + 1) + Σ_{i=1}^{N} min(t_resP_i, FP - (t_proc_i + t_pick + t_place)).

Proof: As mentioned before, we can either use the PMs or the TMs as buffers. The size of each block can be, at most, equal to FP, so after being processed in a V_i, the wafer can remain, at most, for a time equal to min(t_resP_i, FP - (t_proc_i + t_pick + t_place)) in that PM. Hence, we can use all visits as buffers for a time equal to Σ_{i=1}^{N} min(t_resP_i, FP - (t_proc_i + t_pick + t_place)). The first part, as in Lemma 2, is the time that we can use the TMs as buffers. •

4.1.2 Search Space

To search for a conflict-free schedule with the minimum FP, we must examine all the possible configurations in the SW (Figure 4.4). We have defined two actions, slide and stretch. Lemma 4 shows a way that we can reach any possible state (configuration of blocks) from the first simple schedule by applying slide and stretch in a specific order.

Lemma 4: Any schedule with a period equal to FP can be reached by applying slide and stretch actions starting from the simple periodic schedule.

Proof: Suppose there is a target configuration and we want to find the series
tadd <— ttotai of the target configuration —ttotai of the Current-state. Current-state <— Slide J3 1 ) 0 of the Current-state t a d d units, for i = l to N do strech-len <— (L{ — Ei) in the target configuration —(Li — Ei) in the Current-state. Current-state <— stretch Bij for the current-state strech-len units. Slide-len <— Ei in target configuration —Ei in Current-state Current-state <— Slide Biti of the Current-state for slide-len units, e n d for A l g o r i t h m 4 pseudo code of the search algorithm State-list simple-periodic schedule whi le not empty(state-list) do Current-state <— getNewState(state-list) i f conflict-free (current-state) t h e n return (current-state) else State-list. add (expand(Current-state)) e n d i f e n d whi le Return Not found of the actions that needs to be performed on the simple periodic schedule to reach that target configuration. The procedure in Algorithm 3 shows the series of the actions that we have to perform on the simple periodic schedule to reach the target state.* Figure 4.6 shows the search tree. A node in the search tree represents the state of the system, which consists of the start and end time of the blocks of the first wafer (E(S and LJS). The edges represent state transitions. By applying a transition to the current state we reach a new state in the tree. 44 Figure 4.6: Search Tree There are three elements in a state transition: block number, which shows the block that should be changed to reach the new state; action number ( A l means slide, and A2 means stretch); and a number that shows the number of time units the action should be applied to. For example, if the triplet is (B2, A l , 5), it means that we have to slide the second block 5 units. The nodes of this tree are the configurations of blocks that we have to check to see if any of them is a conflict-free one. From Lemma 4 we can conclude that all of the possible configurations can be found in this tree. Algorithm 4 shows the search pseudo code. 
There is a general list (state-list) that keeps all the nodes that have not been expanded yet. Each time we take a new node from that list, if it is conflict-free, we are done; otherwise, the expand function finds all the possible child nodes of this state and adds them to the state-list. If all the nodes have been expanded and the state-list becomes empty, it means that we have traversed the whole tree and no conflict-free schedule with the current FP exists. Therefore, we increase the FP and perform a new search.

4.1.3 Optimization Techniques

Because in the worst case we have to check every node of the tree to see whether it is a conflict-free state, the complexity of the algorithm is of the same order as the number of nodes, and it is easy to show that the number of nodes in this tree is exponential in the number of visits. In this section, some optimization techniques that reduce the complexity of the algorithm are presented. Using these techniques does not affect the correctness or completeness of the solution; in other words, the resulting schedule is conflict-free and performs at the minimum fundamental period.

Optimization Technique 1: As mentioned, we start our search from a period equal to t_BN + t_pick + t_place. However, if the system is in the transport-bound region, the best possible period is (t_pick + t_place) * (N + 1). This technique calculates these two periods and starts from the greater of the two, so we save the time that we would have spent searching the smaller periods without finding any solution.

Optimization Technique 2: In our algorithm, after the two actions of B_{1,j} have been examined, if that block still has any conflict with B_{k,j}, block B_{k,j} must move. So, if two blocks have had their actions examined and they still conflict with each other, the conflicts cannot be resolved by moving other blocks.
Thus, the internal node of the tree representing this situation cannot lead to any conflict-free node. Therefore, we can prune the sub-tree from that node, and there is no need to expand it further.

Optimization Technique 3: If we have two parallel modules whose blocks are greater than the fundamental period, they may be in conflict with each other. The only way of resolving this conflict is by stretching the block. So, if the blocks of a parallel-module visit that are in conflict with each other cannot stretch, we cannot find any solution down that path, and the tree can be pruned from that internal node.

4.2 Experimental Evaluation

4.2.1 Experiment Design

Each configuration of a cluster tool consists of a number of visits. For each visit, the number of PMs of that visit must be known. Having more than one PM in a visit represents parallelism in that visit. The number of visits and the number of PMs per visit determine the topology of the configuration. Figure 4.7 shows three possible topologies with 3 visits and 4 PMs (Figure 4.7: Three different topologies with 4 PMs and 3 visits). Due to the physical structure of the cluster tool, a maximum of six PMs is considered in a configuration. So, we run the experiment on all topologies (i.e. all possible numbers of visits and numbers of PMs per visit) that can be made with 1 to 6 PMs.

To assign t_proc_i, t_resP_i, and t_resT_i to each visit, we assume that t_proc_i = M_i * k, where k is a random number generated according to a normal distribution N(μ, σ).¹ In this distribution, μ is a number between 1 and 30, and σ is equal to μ. For each topology, we generate three hundred different cases (ten for each μ). The maximum amount of time that a wafer stays in any visit i is usually less than or equal to the time that it stays in the visit with the maximum processing time; otherwise, visit i would become the bottleneck visit.
For example, if t_BN is equal to 5, t_proc_i is equal to 2, and t_resP_i is equal to 1000, we would usually use at most three units of V_i's residency. Therefore, we can say that for a single-module visit, infinite residency is the same as a residency of t_BN - t_proc_i. Therefore, we assign each V_i a value of t_resP_i that is a uniform random number between 0 and (t_BN - t_proc_i) * M_i. In our experiments, we assume t_pick = t_place = 1.

¹We only accept positive k's; if the normal function returns a negative number, we generate a new number until it becomes positive.

Scheduling technique                        Worst-case running time
Without any optimization techniques         Over 7 days
With optimization technique 1               80,000 (s)
With optimization techniques 1 and 2        250 (s)
With optimization techniques 1, 2, and 3    19 (s)

Table 4.2: Comparison of the worst-case running times of the algorithms

By these arrangements, 23,099 cases are generated and run on a Sun SPARCstation (Ultra 10) with 256 MB of memory running Solaris 5.7. To evaluate the system performance, we define a term that we refer to as the cluster factor. The cluster factor, calculated as (t_BN + t_pick + t_place) / ((N + 1) * (t_pick + t_place)), provides a notion of processing over transport: if it is less than one, the cluster tool is in a transport-bound region; otherwise, it is in a process-bound region. One metric for measuring cluster tool performance is the FP, which shows how fast the system can accept a new wafer. The other metric is the time, in seconds, that it takes the algorithm to find a solution.

4.2.2 Experiment Results

To find out the effects of the optimization techniques on the scheduling algorithm's performance, we first ran the test cases on the scheduling technique that did not use any optimization techniques. Table 4.2 shows the results of those experiments. The scheduling algorithm that does not use any optimization technique could not complete all the cases after 7 days.
After running 1,800 of the 23,099 cases, the algorithm encountered a test case for which it could not find the solution after running for 7 days. After applying the first optimization technique, the optimal solution for all the cases was found, but in the worst case it took 80,000 seconds. By applying both the first and the second optimization techniques, the worst-case running time to find the optimal solution was reduced to 250 seconds; in other words, the worst-case running time was improved more than 300 times as compared to the worst-case running time of the algorithm with only the first optimization technique. After applying all the optimization techniques, in 90 percent of the cases the optimal solution was found in less than a second, and in the worst case it took approximately 19 seconds to find the optimal solution (an improvement of more than 4,000 times as compared to the worst-case running time of the algorithm with only the first optimization technique).

Figure 4.8 shows the FP over a range of cluster factors for the algorithm utilizing all 3 optimization techniques. The solid lines show the Ideal FP, drawn according to the analytical models proposed in [23] and [22]. The other points refer to the cases where the Ideal FP must be increased, due to the residency constraints, in order to find a feasible solution. In cases with the Ideal FP, when a cluster tool goes into a transport-bound region (cluster factor less than one), the FP remains constant no matter what t_BN is.

Figure 4.8: Performance in terms of fundamental period, with respect to cluster factor (FP versus cluster factor for topologies with 1 to 6 visits; the solid lines are the Ideal FPs)

Most of the cases that require the Ideal FP to be increased in order to find a feasible solution are in the transport-bound region.
As is shown in the figure, the Ideal FP is increased by at most two units in the transport-bound region, and when the cluster tool is in the process-bound region, the Ideal FP is increased by at most one unit. For cluster-factor values greater than 7, a feasible solution with the Ideal FP is found for all cases.

Chapter 5

Single-route model with PM and TM Residency Constraints

In cluster tools, there is sometimes a dependency between the operations of two consecutive visits (e.g. the first one warms the wafer so that the next one can perform its chemical operation). In these cases, if the next operation does not happen within a time limit from the completion of the first operation, we would not get a product of acceptable quality. We call this time limit a residency constraint on the TM, t_resT_i. In this chapter we allow visits to have a TM residency constraint as well as PM residency constraints. In other words, we are looking for an optimal periodic schedule for a single-visit, single-route, single-wafer, dual-arm cluster tool with parallel-module visits that satisfies both PM and TM residency constraints. We will introduce two classes of algorithms for solving cluster tool cases with both PM and TM residency constraints present simultaneously. The first class of algorithms is an extension of the technique in Chapter 4. A major point for consideration in these algorithms is their computational complexity. The search space of these algorithms grows exponentially with the number of cluster tool visits and the maximum processing time. Due to the physical structure of the cluster tool, the number of visits is typically small (less than 7), but the maximum processing time depends on the nature of the processes and can become very large. In most cases with only PM residency constraints, several solutions with a minimum period exist for large processing times.
Therefore, the algorithm does not need to search the entire search space, and although the search space is large, the solution is always found within a reasonable time. However, in cases with both PM and TM residency constraints, because of the new TM timing constraints, solutions with minimum periods are more scarce. Applying the first class of algorithms to these cases can lead to long running times. This can be a disadvantage in practice, particularly if the scheduling algorithms are executed on-line and in a dynamic fashion. The second class of algorithms reduces complexity by eliminating the dependency on the processing times. These algorithms are still exponential, but only in the number of visits and not in the processing time. Therefore, in practical cases where the number of visits is small, with the help of several powerful pruning techniques, the optimal solution can be found quickly.

5.1 Algorithm Description

5.1.1 Time-Space-Search Algorithm

In this technique, similar to the technique in Chapter 4, we start the search from a simple periodic schedule with a period equal to the Ideal FP. If the simple periodic schedule has some conflicts, we try to resolve them in the search phase. In the search phase, we investigate all the possible configurations that have the potential of being a conflict-free schedule. While evaluating each configuration (the nodes of the search tree), we take TM residency constraints into account (i.e. a node that was acceptable in the previous model, but violates the TM residency constraint of one of its visits, is no longer acceptable). If no feasible solution for this period exists, we increase the period and perform the whole search with the new period. All the optimization techniques that we provided in Chapter 4 are still applicable and reduce the running time of the algorithm significantly.
Also, while expanding any internal node of the tree, if the moves (slide and stretch) of a visit and of its next visit are completed, and the wafer is scheduled to stay on the arm (after being picked up from that visit) for a period longer than the visit's TM residency constraint, we can say for sure that no feasible solution down that path can be found, and thus prune the search tree from that node.

The complexity of this algorithm is O(t_BN^N). In practice, we can assume that N is less than 7, but t_BN can be very large depending on the nature of the process. Therefore, the search space can become very large, and as a result, the algorithm running time may become very long. Figure 5.1 shows a topology of a cluster tool for which it took the algorithm 18 hours to find the optimal solution.

Figure 5.1: A topology for which the Time-Space-Search algorithm took about 18 hours to find the solution (t_pick = t_place = 1). The five visits have process times 26, 47, 41, 55, and 54; PM residencies 20, 3, 11, 0, and 0; and TM residencies 10, 5, 2, 2, and 2.

Algorithm 5 Pseudo code of the LPSP algorithm
  FP ← Calculate Ideal FP
  for each permutation of all the actions within one period do
    Create an LP system for this permutation
    if there is a solution for the LP system then
      return the solution
    end if
  end for
  Increment FP

5.1.2 Linear Programming with Shortest Path Algorithm

In the steady state of any simple periodic schedule, in each period, each visit should receive a new wafer and finish processing a wafer; also, a wafer must be picked up from the CM, and one must be placed into the CM. Therefore, during each period, the robot should perform (N + 1) pick up actions and (N + 1) place actions.
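As a quick illustration of the size of this ordering space (a trivial helper of ours, not from the thesis): once the first action is fixed as the pick up from the CM, the remaining pick up and place actions can be ordered in (2N + 1)! ways.

```python
from math import factorial

def action_orders(N):
    """Number of orderings of the 2N+2 pick/place actions in one period
    when the first action is fixed as the pick up from the CM."""
    return factorial(2 * N + 1)
```

For N = 5 this gives 11! = 39,916,800, the figure quoted below for five-visit tools.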
If we can find the order of these 2N + 2 actions within a period, and their relative times, we can repeat the same actions with the same relative times in the subsequent periods. Therefore, we only need to schedule one period. Algorithm 5 shows the pseudo code of the algorithm. To use this algorithm, we have to solve the following problems:

A. How to generate all the possible permutations, so that the elimination of the non-feasible ones can be done at an early step.

B. How to create a linear program for each of these permutations based on our constraints, so that its solution is a feasible schedule.

C. How to solve the linear programming problem efficiently.

A. How to generate permutations: If we assume that each period starts with a pick up from the CM, the other 2N + 1 actions can happen in any order. Each of these (2N + 1)! permutations should be investigated to see whether any of them can lead to a conflict-free solution with the given FP. Considering the fact that we only have two robot arms available, many of these permutations can be eliminated early. For example, none of the permutations that start with a pick up from the CM, followed by a pick up from visit i, and then a pick up from visit j can lead to a solution (all of these permutations need three arms to perform these three actions). One way to eliminate these permutations is to first generate them and then eliminate the non-feasible ones (those that use more than two arms). In this approach, we need to look at all the (2N + 1)! permutations once to find out whether they use more than two arms. In another approach, during the generation procedure, we start from the first position and assign the positions one at a time to the actions. In the assignment step, we can check how many arms are being used, and if assigning an action to position i means using more than two arms, no permutation starting with these i actions can lead to a solution.
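This prefix check can be sketched as follows (an illustrative model of ours, not the thesis implementation; we assume an arm is occupied from the start of a pick until the start of the matching place, and that a place whose matching pick has not yet occurred uses a wafer carried over from the previous period, so it occupies an arm from the start of the period):

```python
def generate_orders(N):
    """Generate all orders of the 2N+2 pick/place actions of one period,
    starting with the pick up from the CM, that never require more than
    two robot arms. ('pick', j) lifts the wafer leaving stage j (stage 0
    is the CM); ('place', j) delivers the wafer entering stage j (stage
    N+1 is the CM). Transfer j pairs ('pick', j) with ('place', j + 1)."""
    actions = [('pick', j) for j in range(N + 1)] + \
              [('place', j) for j in range(1, N + 2)]

    def load_ok(prefix):
        # For every slot of the prefix, count how many arms hold a wafer.
        m = len(prefix)
        pos = {a: i for i, a in enumerate(prefix)}
        load = [0] * m
        for j in range(N + 1):
            s = pos.get(('pick', j))
            e = pos.get(('place', j + 1))
            spans = []
            if s is not None and e is not None and s < e:
                spans.append(range(s, e))      # picked, then placed
            else:
                if e is not None:              # place first: the wafer was
                    spans.append(range(0, e))  # carried over from last period
                if s is not None:              # picked but not yet placed
                    spans.append(range(s, m))
            for span in spans:
                for t in span:
                    load[t] += 1
        return all(x <= 2 for x in load)

    def extend(prefix, remaining):
        if not remaining:
            yield tuple(prefix)
            return
        for a in sorted(remaining):
            prefix.append(a)
            remaining.remove(a)
            if load_ok(prefix):                # prune: 3 arms would be needed
                yield from extend(prefix, remaining)
            remaining.add(a)
            prefix.pop()

    first = ('pick', 0)
    return list(extend([first], set(actions) - {first}))
```

For a two-visit tool, the serial order pick CM, place V1, pick V1, place V2, pick V2, place CM survives the check, while any order whose second and third actions are two further pick ups is pruned as soon as the third action is appended.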
Therefore, we do not assign that action to this position, and thereby eliminate many permutations at this point. To clarify this approach, consider the following example. Suppose we want to find the optimal schedule for the configuration shown in Figure 5.2. Assume that we have assigned pick up from the CM to the first position and pick up from V1 to the second position, and we want to assign an action to the third position. Here is the list of all the possible actions that can be assigned to this position: pick up from V2, place into V1, place into V2, and place into the CM. If we assign pick up from V2 to this position, we need 3 arms: the first arm has picked up from the CM, the second arm has picked up from V1, and because both arms are busy holding a wafer, we need a third arm to be able to pick up from V2. Also, if we assign place into the CM to the third position, we need three arms: one arm picks up from the CM, another one picks up from V1, and to be able to place a wafer into the CM, that wafer should have been picked up beforehand from V2 by a third arm. Therefore, at this step, we can remove these two actions from the list of possible actions for the third position, and eliminate many permutations that would not result in a feasible solution. This approach speeds up the algorithm significantly. In cases with 5 visits, there are 11! = 39,916,800 possible permutations, but there are only 732 permutations that use two or fewer arms. In the LPSP algorithm, we only generate these cases and then check whether we can find any solution using these permutations.

Figure 5.2: A route configuration (the first visit has process time 26, PM residency 20, and TM residency 10; the second has process time 47, PM residency 3, and TM residency 5)

B.
How to create a linear programming problem for each permutation: To explain this algorithm without loss of generality, we assume that t_pick = t_place = 1. The algorithm can easily be generalized to address cases with other pick up and place times. First, we assume that all visits are single-module; later, we explain how we handle parallel-module visits. Consider the permutation p_1, p_2, p_3, ..., p_m of the set of actions, where m = 2N + 2, p_1 denotes the pick up action from the CM, and p_i (i > 1) denotes the subsequent actions in the schedule. If we can assign start times a_1, a_2, a_3, ..., a_m to these m actions such that the following five constraints are satisfied, these actions, along with their corresponding times within a period, are the solution.

1. Arm availability: Only two arms should be used. Because we have considered this condition while generating the permutations, no extra checking for this condition is needed.
Therefore, FP + aj — ak — 1 > tproa should be added to the set of constraints. 4. P M residency constraint: Wafers can stay in Vi for a maximum time of tproci + UesPf If is a pick up from Vi, and pk is a place into Vi, there will be the following two different cases: 5 9 • if k < j , a,j — ak — 1 < tpr0Ci + trespi should be added to the set of constraints. • if k > j , FP + Oj; — ak — 1 < tpr0Ci + tresPi should be added to the set of constraints. 5. T M residency constraint: After picking up a wafer from Viy it can re-side on the arm for at most, UesP^ If pj is a pick up from Vit and pi is a place into Vi+i, there would be the following two different cases: • if j < I, ai — aj — 1 < tresTi should be added to the set of constraints. • if Z > j , FP + ai — aj — 1 < t^Tishould be added to the set of constraints. Using the above constraints, we form a linear programming system. C.How to solve the linear programming problem: The problem of checking the feasibility of a permutation is a linear programming problem with M variables, and 2 M + 2 + M + M - | - M = 5 M - | - 2 constraints. The ob-jective is to find out if there exists some values for a i , a 2 , . . . , am that satisfy all of the constraints. It is important to note that we do not want to maximize or minimize an objective function; we only want to know if we can find some val-ues for our variables that satisfy all the constraints. Due to the characteristics of this linear programming problem, we do not need to use a general algorithm such as the simplex method [7] for solving our problem. W i t h a small change to the inequalities, we can use a technique to solve the problem that is of the order 0(M2). It is known that in a linear programming problem with m 60 variables, where all of the constraints are in the form a,- — ai < bk (difference constraints), we can create a graph with m nodes (a node for each variable), and for each constraint — aj < bk, add an edge of weight bk between nodes a; and aj. 
We can then apply a single-source shortest path algorithm, namely the Bellman-Ford algorithm [6]. If the Bellman-Ford algorithm returns true, the weight of the shortest path to each node is the time value of that action within the period¹ that satisfies all constraints, and these values form the conflict-free solution. If we look at our inequalities, we can see that all of them, except for the second constraint, are in the form of difference constraints. We can change the second constraint into difference constraints as follows:

a_1 - a_2 ≤ -1
a_2 - a_3 ≤ -1
a_3 - a_4 ≤ -1
...
a_{m-1} - a_m ≤ -1
a_m - a_1 ≤ FP

In this case, the graph has M = 2N + 2 nodes and approximately 5M edges, and because the complexity of the Bellman-Ford algorithm is O(VE), where V is the number of nodes and E is the number of edges, the complexity of our problem is O(M^2) or O(N^2).

¹Because we assume that the first action happens at time zero, all times should be reduced by the time of the first action to obtain correct relative times.

Figure 5.3: Six possible situations should be investigated for each permutation of the above configuration (three visits with process times 17, 20, and 18; PM residencies 60, 5, and 40; and TM residencies 10, 2, and 2).

Parallel-Module Visits

In a single-module visit, the time that a wafer can reside in that module is less than a period, but in a parallel-module visit, a wafer can remain in one of the modules of the visit for more than a period. For example, in a three-module visit, the time that the wafer stays in the module can be less than a period, more than a period but less than two periods, or more than two periods but less than three periods, depending on the processing time and the PM residency time. The third and fourth constraints are different for each of these cases.
For example, if the wafer resides in a PM for less than a period, the inequalities do not change; but if the wafer resides in a PM for more than a period and less than two, we must add FP to the left side of the inequalities. If the wafer resides in a PM for more than two periods and less than three, 2*FP must be added to the left side of the inequalities.

For every parallel-module visit in the configuration, we must find out what the possible time ranges are for a wafer to reside in a PM of that visit. We then consider all the possible combinations. For example, consider the configuration shown in Figure 5.3. There are three visits in that configuration. The first visit has three parallel PMs, the second one has only one PM, and the third visit has two parallel PMs (i.e. all the nodes that are in the same vertical line are parallel PMs of the same visit). Suppose we want to see whether there exists any feasible solution with FP = 22. Because of the process and PM-residency times, a wafer can stay in the first visit for less than 20 seconds, more than 20 seconds but less than 42 seconds, or more than 42 seconds. For each of these cases we get different inequalities. On the other hand, the third visit is a two-module visit. Because of the process and PM-residency times, a wafer can stay in that visit for less than 20 seconds or more than 20 seconds, and for each of these cases we get different inequalities. To make sure that we consider all the cases, we have to check whether any of the 2*3 = 6 possible situations can lead to a conflict-free solution.

5.1.3 Pruning techniques

To decrease the running time of the algorithm, several pruning techniques are added. It is important to note that the use of these pruning techniques does not affect the correctness or completeness of the solution; namely, adding any of the following pruning techniques does not exclude a feasible solution from consideration.
Figure 5.4: A topology for which the LPSP algorithm with only Pruning Technique 1 took about 35 minutes to find the solution (t_pick = t_place = 1). The six visits have process times 5, 3, 3, 5, 1, and 5; PM residencies 0, 0, 0, 0, 1, and 0; and TM residencies 2, 3, 3, 2, 4, and 2.

Pruning Technique 1: Any single-module visit V_i whose process time satisfies the equation FP = t_proc_i + t_pick + t_place is considered a throughput bottleneck and should be kept busy at all times. To accommodate this requirement, we must provision an immediate exchange action [37] when picking up a wafer from and placing a wafer into that module. The corresponding actions are pick up from V_i and place into V_i, and they should happen one after another without any delay. Therefore, we can consider these two actions as atomic, and generate only the permutations in which they appear in this order. For a six-visit topology, in general, we need to look at 11,352 permutations; applying this technique when a visit is the bottleneck results in considering only 480 permutations. Although this technique helps reduce the running time of the algorithm, there are still some cases where the algorithm takes about 35 minutes to find the solution. One such case is depicted in Figure 5.4.

Pruning Technique 2: To explain the idea behind this technique, let us consider the following example. Suppose that in a configuration with 5 visits, there is a single-module V_i such that t_proc_i = 2 and t_resP_i = 0. Assume that we have generated the first five actions in the sequence, a place into V_i is the second action, and a pick up from V_i is not among the first five actions. We can say that no matter what the order of the actions after the fifth one is, none of the permutations starting with those 5 actions can lead to a solution.
This is because each action takes one second, and there are at least three actions between the place into V_i and the pick up from V_i. Therefore, in all of these permutations, the wafer remains in V_i for at least three seconds, which violates its PM residency constraint. Confident that none of these can lead to a conflict-free solution, we can eliminate all of those permutations. In the permutation generation procedure, when the positions of both of the actions of a visit in the permutation are known, we can figure out the minimum and maximum bounds on the time that the wafer stays in that module. If the minimum bound is more than t_proc_i + t_resP_i, or the maximum is less than t_proc_i, none of the permutations starting with these actions can lead to a conflict-free solution, and we eliminate all of them. Also, when the positions of a pick up from a visit and a place into the next visit are known, if the number of actions in between is more than the TM residency of that visit, the sequence cannot lead to a useful permutation, and we can terminate the generation of the remainder of the sequence at this point. When we use the second pruning technique, we find the solution for the case in Figure 5.4 in 0.09 seconds. Because the second pruning technique eliminates many permutations at an early stage, it has a great effect on improving the running time of the algorithm.

5.2 Experimental Evaluation

5.2.1 Experiment Design

Each configuration of the cluster tool consists of a number of visits. For each visit, the number of PMs of that visit must be known. Having more than one PM in a visit represents parallelism in that visit. The number of visits and the number of PMs per visit determine the topology of the configuration. Due to the physical structure of cluster tools, a maximum of six PMs is considered in a configuration. So, we run the experiment on all topologies (i.e.
all possible numbers of visits and numbers of PMs per visit) that can be made with 1 to 6 PMs. To assign t_proc_i, t_resP_i, and t_resT_i to each visit, we assume that t_proc_i = M_i * k, where k is a random number generated according to a normal distribution N(μ, σ).² In this distribution, μ is a number between 1 and 30, and σ is equal to μ. For each topology, we generate three hundred different cases (ten for each μ). As in the previous chapter, we assign each V_i a value of t_resP_i that is a uniform random number between 0 and (t_BN - t_proc_i) * M_i. Also, for each V_i, we assign a value to t_resT_i that is a uniform random number between 0 and (t_BN - t_proc_i - t_resP_i) * M_i. In our experiments, we assume t_pick = t_place = 1.

²We only accept positive k's; if the normal function returns a negative number, we generate a new number until it becomes positive.

By these arrangements, 23,099 cases are generated and run on a Sun SPARCstation (Ultra 10) with 256 MB of memory running Solaris 5.7. We tested these cases with both the Time-Space-Search algorithm and the LPSP algorithm. To compare different scheduling algorithms, we have to evaluate two characteristics: first, how optimal or near-optimal the solutions are, and second, how fast they are found. Both of our scheduling algorithms find the optimal solution, and are the same in this sense. However, the times that they take to find the solution are very different. As in Chapter 4, the cluster factor, calculated as (t_BN + t_pick + t_place) / ((N + 1) * (t_pick + t_place)), provides a notion of processing over transport. If it is less than one, the cluster tool is in a transport-bound region; if it is more than one, the tool is in a process-bound region.

5.2.2 Experimental Results

All of our algorithms found the same result, the optimal solution. However, they found it in different running times.
Sometimes, when for example a module fails or a new module is added while the cluster tool is running, rescheduling has to be done immediately. In this situation (on-line mode), we cannot afford to wait a long time for the scheduler to come up with the schedule, and the time that it takes to find the optimal schedule is what differentiates the scheduling techniques.

Algorithm name                            Maximum processing time
Time-Space-Search algorithm               18 hours
LPSP without any pruning techniques       35 minutes
LPSP with Pruning Technique 1             35 minutes
LPSP with both pruning techniques         4 seconds

Table 5.1: Maximum processing time of the algorithms

Figure 5.5: Algorithm running times of the Time-Space-Search algorithm with respect to the cluster factor. It takes the algorithm several hours to find the solution for cases with both PM and TM residency constraints.

Performance in Terms of Efficiency

Table 5.1 shows the maximum processing time of the different algorithms.

Time-Space-Search Algorithm: Figure 5.5 shows the running times of the Time-Space-Search algorithm versus a range of cluster factors.

Figure 5.6: Algorithm running times of the LPSP algorithm with respect to the cluster factor. The algorithm takes less than 35 minutes to find the solution.

From Figure 5.5 we can conclude that the Time-Space-Search algorithm may take several hours to find the solution for a configuration (especially when the cluster factor is
LPSP Algorithm: Figure 5.6 shows the running times of the LPSP algorithm without any pruning techniques versus a range of cluster factors. As shown in the figure, the worst-case running time is reduced approximately 30 times: in the worst case the solution is found in less than 35 minutes, which is a significant improvement over the 18 hours that the Time-Space-Search algorithm may take to find a solution in the worst case. Most of the cases that take a long time are in the transport-bound region. This is because in cases with a smaller FP, it is more likely that no solution with the Ideal FP exists. In cases where the Ideal FP must be increased, we first examine all the possible permutations with the Ideal FP, and because none of them leads to a solution, we increase the FP and examine the permutations with the new FP; therefore, it takes much longer to find the solution. It is interesting to note that although the worst-case running time of this algorithm is better than that of the Time-Space-Search algorithm, there are 371 cases that take more than 10 seconds, all of which have six visits in their configurations. When the cluster factor is around one, the tool is between the transport-bound and process-bound regions, which means that most of the resources (transport and process modules) are busy most of the time, and we are getting the best utilization from the system; therefore, we can expect a majority of cases in practice to be of this sort. The LPSP algorithm may take about 2,100 seconds to find the solution for these cases, which is still a long time. Therefore, we need optimization techniques to improve the running time, so that the technique can be used in actual practice.

Pruning Technique One: Figure 5.7 shows the running times of the LPSP algorithm with Pruning technique One.
As the figure shows, the first pruning technique improves running times when the cluster factor is greater than one. This technique is useful when one of the process modules is the bottleneck (i.e., when the tool is in a process-bound region and the cluster factor is > 1), but when the cluster tool is in the transport-bound region and the TM is the bottleneck, this technique is not very helpful, and the worst-case running time remains at 2,100 seconds.

Figure 5.7: Algorithm running times of the LPSP algorithm with Pruning technique One with respect to the cluster factor. Pruning technique One is useful when the cluster factor is greater than 1.

Pruning Technique Two: Figure 5.8 shows the impact of the second pruning technique. As shown in the figure, Pruning technique Two has a major impact on the running time of the program, and reduces it 500 times as compared to the cases where only Pruning technique One is used. The algorithm that uses both techniques finds the solution in 4 seconds at most. It is important to note that although the algorithm and the pruning techniques are designed to address cases with both PM and TM residency constraints, they can also be applied to cases that have only one form of residency constraint (either PM or TM). To compare this approach with the work in Chapter 4, we ran this new approach on the experiment cases of that chapter. The optimal solution is always found in less than 1.2 seconds, which is much less than the 19 seconds that the algorithm in Chapter 4 may take to find the solution.

Figure 5.8: Algorithm running times of the LPSP algorithm with both pruning techniques with respect to the cluster factor.
The solution was found in at most 4 seconds for all cases.

Chapter 6

Single-Route Model with Buffer Module

In this chapter, we present a scheme that explores the use of a buffer resource to achieve better throughput in the presence of PM and TM residency constraints. Buffer modules (BM) are usually available in the cluster tool for maintenance purposes. As shown in the previous chapters, to obtain an optimal and feasible solution, at some specific time, wafer W_i must be picked up from a PM (either due to the PM's residency constraint, or to place the next wafer into that PM). If the PM that W_i intends to visit next on its route is busy, the wafer cannot be placed in that module right away, and must reside on the arm for some time. During this time, this wafer keeps the arm busy, and the arm cannot be used to pick up other wafers from, or place them into, other modules. The buffer module can be used to free the arm. After picking up W_i from the PM, we place it into the BM; later on, we pick the wafer up from the BM and place it into the destination module. Therefore, we can perform some other actions during that time using the freed TM resource. It is important to note that there is a trade-off in using the buffer module. On the one hand, using the BM helps to free the TM for other actions. On the other hand, it adds two extra actions (a pick up from the BM and a place into the BM) to the list of actions that the arms must perform in each period.

6.1 Algorithm Description

The BM can only be helpful when the TM has some free time within each period to place into and pick up from the BM. Therefore, in cases where the tool is in a transport-bound region, the BM cannot help us find a conflict-free solution. For those cases, we apply the algorithm that does not use the buffer. In other words, we only apply the buffer algorithm if the condition FP > (t_pick + t_place) * (N + 2) holds.
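This applicability test is trivially mechanical; a one-line sketch (the function name is ours):

```python
def buffer_applicable(fp, n_visits, t_pick=1, t_place=1):
    """The BM needs TM slack for one extra pick and one extra place per
    period, hence the (N + 2) factor: the buffer algorithm is only tried
    when FP > (t_pick + t_place) * (N + 2)."""
    return fp > (t_pick + t_place) * (n_visits + 2)
```

With unit pick and place times and N = 5 visits, any FP above 14 admits the buffer algorithm; an FP of 14 or less falls back to the buffer-free algorithm of Chapter 5.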
Like the techniques in previous chapters, our algorithm is incremental: it starts with FP = IdealFP and searches for a feasible solution with that FP. If no solution with that FP exists, it increments FP and performs the search with the new FP. The search works in two phases. In the first phase, we assume that we have three arms in the model, and we search for a conflict-free solution with the given FP that may use all three arms. If we find such a solution, in the next phase of the algorithm, we start from that solution and try to use the BM instead of the third arm, to come up with a solution that only uses two arms.

6.1.1 Phase I: A triple-arm solution

In this phase we perform a search algorithm similar to the one we used to find a double-arm solution in Chapter 5, with a few changes to make it applicable to the triple-arm case. We know that in the steady state of any simple periodic schedule, in each period, each visit should receive a new wafer and finish processing a wafer; also, a wafer must be picked up from the CM, and one must be placed into the CM. Therefore, during each period, the robot should perform (N + 1) pick up actions and (N + 1) place actions. If we can find the order of these 2N + 2 actions within a period, along with their relative times, we can repeat the same actions with the same relative times in the following periods. If we assume that each period starts with a pick up from the CM, the other 2N + 1 actions can happen in any order, and each of these (2N + 1)! permutations should be investigated to see whether we can assign times to these actions to get a conflict-free solution with the given FP. The important point here is that we can eliminate many of these permutations at early stages, considering that we only have three robot arms available.
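The arm-availability pruning can be sketched as a simple counter over an action prefix. This is our own representation, not the thesis code, and it ignores wafers carried over from the previous period (each action is a tuple `('pick', visit)` or `('place', visit)`):

```python
def prune_by_arms(prefix, max_arms=3):
    """Return True if the given action prefix already requires more than
    max_arms robot arms: every pick puts a wafer on an arm, every place
    frees one. A sketch of the Phase-I pruning rule."""
    in_hand = 0
    for kind, _visit in prefix:
        in_hand += 1 if kind == 'pick' else -1
        if in_hand > max_arms:
            return True          # four wafers in hand at once: prune
    return False
```

A prefix of four consecutive pick ups is rejected immediately, so none of the (2N − 2)! permutations extending it is ever generated.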
For example, none of the permutations that start with a pick up from the CM, a pick up from visit i, a pick up from visit j, and a pick up from visit k can lead to a solution (all of these permutations need four arms to perform these four actions). In the permutation-generating phase, we must take into account that we only have three arms available. Therefore, if in the middle of one permutation we find that, so far, the permutation is using more than three arms (e.g., a pick up from the CM, a pick up from V_i, a pick up from V_j, and a pick up from V_k), we stop extending that permutation at that point. This approach speeds up the algorithm significantly. During the generation of permutations we apply some pruning techniques (as in Chapter 5) that improve the running time of the algorithm. Similar to the technique in the previous chapter, for each permutation we construct a linear programming system. Consider the permutation p_1, p_2, p_3, ..., p_m of a set of actions, where m = 2N + 2, and p_1 denotes a pick up action from the CM. If we can assign start times a_1, a_2, a_3, ..., a_m to these m actions such that the following five constraints are satisfied, these actions, along with their corresponding times within a period, constitute a solution.

1. Arm availability: only three arms may be used. Because we have considered this condition while generating the permutations, no extra checking for this condition is needed.

2. Action overlap: only one action can occur at a time. We model it by the inequalities 0 ≤ a_1 < a_2 < a_3 < ... < a_m < FP, which mean that the start time of each action is at least one second after the start of the previous action; because each action takes only one second, if the start times satisfy these constraints, no two actions overlap.

3.
Process time constraint: wafers should stay in V_i for a minimum time of t_proc_i. If p_j is a pick up from V_i, and p_k is a place into V_i, the following two cases need consideration:

• If k < j, the pick up occurs after the place; the same wafer is placed and picked up within each period, so the time that a wafer stays in this visit is a_j − a_k − 1. Therefore, a_j − a_k − 1 ≥ t_proc_i should be added to the set of constraints.

• If k > j, the pick up happens before the place; if wafer w is placed in this period, it is picked up in the next period, so the time that a wafer stays in this visit is FP + a_j − a_k − 1. Therefore, FP + a_j − a_k − 1 ≥ t_proc_i should be added to the set of constraints.

4. PM residency constraint: wafers can stay in V_i for a maximum time of t_proc_i + t_resP_i. If p_j is a pick up from V_i, and p_k is a place into V_i, there are two cases, as follows:

• If k < j, a_j − a_k − 1 ≤ t_proc_i + t_resP_i should be added to the set of constraints.

• If k > j, FP + a_j − a_k − 1 ≤ t_proc_i + t_resP_i should be added to the set of constraints.

5. TM residency constraint: after picking up a wafer from V_i, it can reside on the arm for at most t_resT_i. If p_j is a pick up from V_i, and p_l is a place into V_{i+1}, there are two cases:

• If j < l, a_l − a_j − 1 ≤ t_resT_i should be added to the set of constraints.

• If l < j, FP + a_l − a_j − 1 ≤ t_resT_i should be added to the set of constraints.

Similar to what we did in Chapter 5, if we rewrite the second constraint as a_1 − a_2 ≤ −1, a_2 − a_3 ≤ −1, a_3 − a_4 ≤ −1, ..., a_{m−1} − a_m ≤ −1, and a_m − a_1 ≤ FP, all of our constraints are in the form of difference constraints. Therefore, we can create a graph whose nodes are the variables of this linear system: for each constraint of the form a_i − a_j ≤ b_k, we add an edge of weight b_k from node a_j to node a_i.
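Such a system of difference constraints can be solved by a single-source shortest-path computation over the constraint graph. A minimal sketch (variable names and the triple encoding are ours): each constraint a_i − a_j ≤ b is given as a triple `(i, j, b)`, and a virtual source reaches every variable so the graph need not be connected.

```python
def solve_difference_constraints(n, constraints):
    """Bellman-Ford on the constraint graph: (i, j, b) encodes
    a_i - a_j <= b as an edge j -> i of weight b. Returns a feasible
    assignment a[0..n-1], or None if the system is infeasible
    (i.e., the graph contains a negative-weight cycle)."""
    INF = float('inf')
    edges = [(j, i, b) for (i, j, b) in constraints]
    edges += [(n, v, 0) for v in range(n)]   # virtual source node n
    dist = [INF] * n + [0]
    for _ in range(n):                       # n + 1 nodes => n relaxation rounds
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                    # still relaxable => negative cycle
        if dist[u] + w < dist[v]:
            return None
    return dist[:n]
```

Any returned shortest-path labels satisfy every constraint, and a uniform shift of all labels is equally valid, so the schedule can be normalized to start at time zero.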
We can solve this linear programming problem by running the Bellman-Ford single-source shortest path algorithm on this graph. In this case, the graph has M = 2N + 2 nodes and approximately 5M edges; because the complexity of the Bellman-Ford algorithm is O(VE), where V is the number of nodes and E is the number of edges, the complexity of our problem is O(M²), or O(N²). From the result of the Bellman-Ford algorithm, we can determine the time at which each action takes place within a period.

Action       K_0(0)  K_1(1)  K_2(3)  L_1(4)  L_2(7)  L_3(8)
Arms used      1       2       3       3       2       1

Table 6.1: K_i represents a pick up from V_i, and L_i represents a place into V_i. The first row shows the actions with their corresponding times in the period. The second row shows how many arms are being used while performing each action. This solution uses three arms, but we can use the BM instead of the third arm if we place into the BM at time 2 and pick from the BM at either time 5 or 6.

6.1.2 Phase II: Changing the third arm with the buffer module

We represent each schedule with a matrix, as shown in Table 6.1. In this table, the first row shows the actions with their corresponding times in the period. The second row shows how many arms are used while performing each action. If the number of arms used is always less than three, the solution is feasible, and we do not have to do anything. However, if there are some durations where we are using three arms (in Table 6.1, from time 3 to 4, inclusive), we try to use a buffer to turn the solution into a double-arm solution. We refer to the regions where three arms are used as conflict boxes. We also refer to a time as an empty time if no action (i.e., pick up or place) occurs at that time (e.g., times 2, 5, and 6 in Table 6.1). If there is more than one of these boxes, we try to resolve one box at a time, starting from the leftmost one. There are three cases in which a box can be resolved.
Case 1: If a pick up action from V_i occurs before the beginning of the box, and the place action into V_{i+1} happens after its end, the box can be resolved if there is an empty time before the box begins (a place into the BM can happen at this time), and there is an empty time after the end of the box and before the place into V_{i+1} (to perform a pick up from the BM).

Case 2: If the place action into V_{j+1} happens after the end of the box but before the pick up from V_j, that box may be resolved. If an empty time exists after the end of the box but before the place into V_{j+1} (a pick up from the buffer can happen at that time), and an empty time exists after the pick up from V_j or before the start of the box (a place into the buffer can happen during that time), the box can be resolved.

Case 3: If the place into V_{k+1} happens before the end of the box and also before the pick up from V_k, and if the pick up from V_k happens before the beginning of the box, the box may be resolved. If we can find an empty time before the place into V_{k+1} or after the end of the box (a pick up from the buffer can happen at that time), and another empty time after the pick up from V_k but before the beginning of the box (a place into the buffer can happen at that time), the box will be resolved.

These are the only ways to resolve a conflict box.

6.2 Experimental Evaluation

6.2.1 Experiment Design

As in previous chapters, we use the cluster factor (CF), calculated as max_i((t_proc_i + 2) / M_i) / (2N + 2), to capture a notion of processing over transport. If it is less than one, the cluster tool is in a transport-bound region. The greater the cluster factor, the more idle time the TMs have. The lower the cluster factor, the more idle time the PMs have. The experiment runs on all the topologies (i.e., all possible numbers of visits and numbers of PMs per visit) that can be made with 1 to 6 PMs. In practice, because we want to get the best utilization of our resources (i.e.
PMs and TMs), we try to keep the process times close to each other, and in balance with the transport time. For example, if the cluster factor is 5, one of the processing modules is the bottleneck. Even if the other processing modules also take a long time to process and are busy processing, the transport module has a lot of idle time. On the other hand, if our process times are very small, the transport module becomes the bottleneck, and while the TM is busy performing the moves, our PMs may have some idle time. The cluster tool designer usually wants to get the best utilization of all the resources in the tool, keeping the cluster factor around one and the processing times close to each other. In our experiments, we want to produce cases that are more practical, so we tried to keep the cluster factor between 1 and 1.5. We assume that t_proc_i = M_i * k + 2 * (N + 1), where k is a random number generated according to a normal distribution N(μ, σ). In this distribution, μ is a number between 0 and (N + 1), and σ is equal to μ (we generate ten cases for each μ). Using this formula, we can say that the CF is greater than one, because all the processing times are at least 2 * (N + 1). Also, in most cases, the processing times are less than 3 * (N + 1). Therefore, in most of these cases, the cluster factor is between 1 and 1.5. In our experiments, we assume t_pick = t_place = 1; for all visits, t_resP_i = 0, and t_resT_i is a large number. With these arrangements, 6,339 cases are generated and run on a Sun SPARC station (Ultra 10) with 256 MB of memory running Solaris 5.7. We tested these cases with both an algorithm that uses a buffer and one that does not (the algorithm in the previous chapter).

6.2.2 Experimental Results

Performance in Terms of Fundamental Period

The algorithm that does not use a buffer could not find a solution with the IdealFP in 311 of these 6,339 cases (about 5% of the cases).
Among these 311 cases, using a buffer helped 178 cases (about 57%) to achieve a better FP. Since a BM usually exists in cluster tools, the only cost we have to pay for this improvement is the algorithm running time. The algorithm that does not use a buffer finds the solution in less than 3 seconds. However, the algorithm that uses a buffer may take up to 80 seconds to find the solution. It is important to note that this worst case happens very rarely: in 99.9% (6,332 out of 6,339) of the cases, the solution was found in less than 3 seconds. We also ran these test cases on a model that uses three robot arms (instead of two), and we obtained the same throughput as with a model that uses two arms and has a buffer module. Based on this experiment we can say that the effect on performance of adding a new arm to the tool is almost the same as adding a buffer module.¹ However, a buffer module is usually available in the tool, and its use is virtually free, whereas adding another robot arm is very expensive and difficult.

¹In cases where the tool is in the transport-bound region, we cannot use the buffer, but if we had three arms we could have used the third arm and obtained better performance. However, in our cases, where the tool was always in the process-bound region, the buffer had the same effect as the third arm.

Chapter 7

Multi-Route Model

Previously, all the work on analyzing cluster tools focused on single-route models. This means that all the wafers go through the same route and visit the same processing visits. In this model, if two different products are to be produced with the same tool, one alternative is to process all the wafers of Route One first. After processing all the wafers with the same route recipe, the tool must be rescheduled and reconfigured (e.g., by adding some new PMs) so that the wafers of Route Two can be processed. There are many cases where the tool can support more PMs than are needed for one route.
Therefore, if we allow the wafers of two or more routes to be processed simultaneously in the same tool configuration, more PMs are used simultaneously, and better overall throughput can be achieved. In this chapter, we introduce a multi-route model that allows wafers of different routes to be processed simultaneously with the same tool configuration. This flexibility is needed in more unconventional manufacturing systems, such as flexible manufacturing systems. We provide a greedy scheduling technique for finding a schedule for cases with no residency constraints. In most common cases, this greedy algorithm finds the optimal schedule; however, there are some cases for which the greedy approach finds a near-optimal solution. We provide a branch-and-bound technique to find the optimal schedule for cases with residency constraints. The branch-and-bound technique is optimal, but can take a long time to find a solution. Therefore, we use it primarily as a benchmark for comparison with near-optimal scheduling techniques based on Simulated Annealing, which we propose for solving multi-route cases with residency constraints. To evaluate each state of the system in the simulated annealing technique, a linear programming system must be solved. Conventional linear programming techniques may take several seconds. Because many states of the system must be evaluated in Simulated Annealing, we cannot afford to spend several seconds on each evaluation. We provide a technique for finding the fitness value faster, and use several heuristics to improve the performance of the scheduler. In a multi-route model, wafers can have different routes. In other words, different kinds of products can be produced simultaneously with the same tool configuration. As mentioned in Chapter 2, in a multi-route model we assume that we have r routes, and 1/r of the wafers belong to each route (i.e., R_1, R_2, ..., R_r). We define N_i to be the number of visits of route i. We also define O_j to be the number of times that visit j occurs in different routes. In
We also define Oj to be the number of times that visit j occurrs in different routes. In 85 other words, Oj shows how many route recipes share Vj. A l l wafers belonging to the same product type should undergo the same processing steps, and the time that they remain in their visits should be the same. Otherwise, the quality of wafers belonging to the same type would be different, which makes quality assurance very difficult. Therefore, if an Rx wafer stays in a P M for s seconds, all of the Ri wafers should stay in that P M for s seconds; this can be achieved in a periodic schedule. Our objective is to find a periodic schedule with a minimum Fundamen-tal Period (FP). By periodic we mean that if we find the times that the first r wafers enter and leave the P M s , those relative times will be repeated for the subsequent wafers in the following periods. Table 2.2 lists the terms that we use throughout this chapter, along with their definition. 7.1 Algorithm Description 7.1.1 Calculating the Ideal F P The IdealFP is the best possible period for a specific tool configuration; the-oretically, no solution with a smaller F P can be found for that configuration. In practice, however, there is no guarantee that a feasible solution with this F P exists; in some cases, because of the configuration of the tool or residency constraints, we may have to increase the F P to find a feasible solution. To explain our approach, we assume that tvick = tpiace = 1. Our approaches can be easily applied to cases with other pick up and place times. 86 There are two important kinds of resources in a cluster tool: transport module ( T M ) , and process module (PM). If the T M becomes the bottleneck, to get the best performance, we have to keep the T M as busy as possible. In this situation, the tool is in the transport-bound region. In cases where some of the processing times are long and they become the bottleneck, the tool is said to be in the process-bound region. 
Lemma 1: When the tool is in the transport-bound region, 2 * Σ_{i=1}^{r} (N_i + 1) is a lower bound for the period.

Proof: In each period, for each route R_i, a wafer undergoing R_i should be picked up from, and another R_i wafer should be placed into, each visit and the CM. Therefore, 2 * (N_1 + 1) + 2 * (N_2 + 1) + ... + 2 * (N_r + 1) TM actions occur in each period. Because of the structure of the tool, only one action can happen at any time. Therefore, in the transport-bound region, IdealFP ≥ 2 * Σ_{i=1}^{r} (N_i + 1). □

For example, if there are only two routes, the Ideal FP should be greater than or equal to 2(N_1 + N_2 + 2).

Lemma 2: When the tool is in the process-bound region, a lower bound on the period is max((t_proc_i + 2) * O_i) over all visits V_i.

Proof: Consider a visit V_i that is not common between different routes. In each period, one wafer should be picked up from it, and one should be placed into it. Therefore, IdealFP ≥ 2 + t_proc_i. On the other hand, if visit V_j is common between O_j routes, in each period O_j wafers should be picked up from it and O_j wafers placed into it, and IdealFP ≥ O_j * (2 + t_proc_j). Therefore, IdealFP ≥ max((t_proc_i + 2) * O_i) over all visits V_i. □

We calculate the Ideal FP from both the transport-bound and process-bound formulae (i.e., Lemmas 1 and 2), and the maximum of the two is the Ideal FP. For example, consider the configuration shown in Figure 7.1. In this configuration, there are two routes: 1-2-3 and 4-2-5. From the transport point of view, the Ideal FP should be at least 2 * (3 + 1 + 3 + 1) = 16. From the process point of view, it should be at least the maximum of the following:

V_1: t_proc_1 + t_pick + t_place = 11 + 2 = 13,
V_3: t_proc_3 + t_pick + t_place = 10 + 2 = 12,
V_4: t_proc_4 + t_pick + t_place = 12 + 2 = 14,
V_5: t_proc_5 + t_pick + t_place = 8 + 2 = 10, and
V_2: (t_proc_2 + t_pick + t_place) * O_2 = 2 * (6 + 2) = 16.
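Lemmas 1 and 2 combine into a short computation. A sketch in Python (the function name and data layout are ours; the routes and process times in the usage are those of Figure 7.1):

```python
def ideal_fp(routes, t_proc, t_pick=1, t_place=1):
    """Ideal FP for a multi-route tool, per Lemmas 1 and 2.
    routes: list of visit-id lists; t_proc: dict visit -> process time."""
    # Lemma 1 (transport bound): 2 * sum(N_i + 1) TM actions per period.
    transport = (t_pick + t_place) * sum(len(r) + 1 for r in routes)
    # Lemma 2 (process bound): O_i * (t_proc_i + t_pick + t_place) per visit,
    # where O_i counts how many routes share visit i.
    occurrences = {}
    for r in routes:
        for v in r:
            occurrences[v] = occurrences.get(v, 0) + 1
    process = max(o * (t_proc[v] + t_pick + t_place)
                  for v, o in occurrences.items())
    return max(transport, process)
```

For the configuration of Figure 7.1, `ideal_fp([[1, 2, 3], [4, 2, 5]], {1: 11, 2: 6, 3: 10, 4: 12, 5: 8})` returns 16, matching the hand calculation above.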
Therefore, the Ideal FP becomes 16, and the tool is on the boundary between the transport-bound region and the process-bound region. In other words, both the transport module and Module 2 are bottlenecks, and we have to keep them as busy as possible to get an optimal schedule with this period.

Figure 7.1: A two-route configuration. Each node represents a visit, and visit V_2 is common between the two routes (i.e., R_1: 1-2-3, and R_2: 4-2-5). The number on top of a node represents that visit's process time.

7.1.2 Residency-free Scheduler

In this section, we present an algorithm for finding a feasible solution for cases that do not have residency constraints. The period of the solution may not be equal to the Ideal FP (i.e., the solution may be near-optimal). Note that for some tool configurations, no solution with the Ideal FP exists. There are still others for which a solution with the Ideal FP exists, but our algorithm does not find it. In the following, we first consider cases with disjoint routes. Later, we explain the algorithm for configurations in which routes share common visits.

Figure 7.2: A two-route, disjoint configuration and its schedule. The arrows represent the pick and place actions; the boxes represent the processing at the PMs.

Disjoint Routes

In a configuration where all the routes are disjoint, the algorithm starts with a pick up action of an R_1 wafer from the CM. For all the visits in Route One, we perform an exchange, one after another, and finally place the wafer that is picked up from the last module into the CM. Then, we perform a pick up action of an R_2 wafer from the CM, and after performing exchange actions for all its modules, we place the R_2 wafer into the CM. We perform the same kind of actions for all the routes. When all the routes are finished, if the time is less than the Ideal FP (i.e.
the tool is in the process-bound region), we leave the arm idle for the time difference. It is easy to show that this scheduler always finds the optimal, feasible solution with the Ideal FP. Figure 7.2 shows a configuration of two disjoint routes, and its schedule.

Routes with Common Visits

Algorithm 6 shows the pseudocode of our algorithm.

Algorithm 6 Pseudocode for residency-free cases where multiple routes have common visits.

t ← 0, i ← 1
while i ≤ r do
    Pick an R_i wafer from the CM.  t ← t + t_pick
    v ← first visit of R_i
    while v ≤ N_i do
        if place[v] ≠ 0 and (t − place[v] < t_proc_v) then
            t ← place[v] + t_proc_v        {wait until the wafer in v is processed}
        end if
        k ← i − 1
        if k = 0 then
            k ← r
        end if
        if O_v = 1 then
            k ← i                          {v is not shared; exchange within the same route}
        end if
        Pick an R_k wafer from v.  t ← t + t_pick
        Place the R_i wafer into v.  t ← t + t_place
        place[v] ← t
        i ← k
        v ← next visit of route R_k
    end while
    Place the R_k wafer into the CM.  t ← t + t_place
end while
Increase FP if any of the common visits needs more time for processing.

The algorithm starts a period by picking up an R_1 wafer from the CM. After picking up the wafer, the TM places it in the next visit via an exchange action; that is, it first picks up the wafer residing in that module, and then places the wafer residing on the arm. It is important to note that for visits that are common between two or more routes, when we want to place a wafer from route R_i, we pick up a wafer from route R_{i−1}.¹ For example, in the configuration shown in Figure 7.1, which is a two-route configuration, if we place a wafer from route R_1 into Visit 2, a wafer from route R_2 is picked up, and vice versa.

Figure 7.3: Schedule of the configuration shown in Figure 7.1. The number on each arrow is the route of the wafer that is being picked or placed.
For the visits that are not common between routes, because we always perform an exchange action, the wafer stays in them for a time equal to FP − 2, which is always greater than the processing time of that module (because of how we have chosen our FP). For visits that are common between routes, two or more exchange actions occur in each period. Therefore, when picking up a wafer from a common visit, we must make sure that the wafer has stayed in that PM long enough to be processed. If not, we leave the arm idle for the time difference. When all the actions within one period are finished, the algorithm checks whether it should keep the arm idle for some time, so that all the PMs have enough time to process their wafers. If so, the TM remains idle for that time. Figure 7.3 shows the schedule derived from this algorithm for the configuration shown in Figure 7.1.

¹For i = 0, we pick up a wafer of route R_r.

7.1.3 Residency-Aware Approaches

In any residency-aware scheduler, the complete schedule of any wafer must be planned before its entrance into the system. Otherwise, the wafer may have to stay in a module for longer than its residency constraint allows, because other resources are busy processing other wafers. In other words, only algorithms that plan ahead based on a global view of the system can be used. Therefore, local-decision-based algorithms, such as the greedy technique that we used for the residency-free scheduler, are no longer applicable. Our approaches can be divided into two important categories: optimal and near-optimal. When looking for the optimal solution, we are willing to spend as much running time as needed in order to get the best performance. This approach can be used in the off-line mode. For example, when a new cluster tool needs to be scheduled to process a large number of wafers, we may be willing to run our scheduler for several hours to find the optimal solution. However, in some cases a schedule must be found immediately, and we cannot afford to spend several hours finding the optimal schedule. In these
However, in some cases where a schedule must be found immediately, we can not afford to spend several hours finding the optimal schedule. In these scheduler that finds a near-optimal schedule in a reasonable time is needed. In such cases, we discover a near-optimal schedule using different techniques, such as 93 Simulated Annealing. O p t i m a l S c h e d u l i n g techniques In a periodic schedule, because all the wafers of the same type remain in each P M and T M for the same amount of time, they would all be of the same quality. For quality assurance reasons, it is usually required that all products of the same type are of the same quality; therefore, we are only in-terested in periodic schedules. In a periodic schedule, if the actions of the robot in a period are known, these actions repeat over time. This assumption makes the search space much smaller. If there are Ni modules in route i, A = YT=i~r°utes 2 ( N i + 1) actions should be scheduled in each period. There-fore, any of the A\ permutaions can be a possible solution, and should be investigated. A permutation only specifies the order of the actions in a period, and for any permutation, another search must be performed to assign an exact time to these actions that satisfy all of the constraints. As in previous chapters, we perform a branch-and-bound technique for generating and investigating all the permutations. In this technique, we try to prune the tree as much as possible to reduce the number of permutations that should be investigated. Each node of the tree represents a partially generated permutation. The first i actions of nodes at level i are assigned (e.g. no actions are set for the root, and the permutation of all the actions are known for the leaves of the tree). We traverse this tree in a depth-first manner to get to the leaves sooner (one of them may be the optimal schedule). 94 P r u n i n g T e c h n i q u e s The first i actions within a period are known for a node at level i. 
To find out if any leaf of the sub-tree rooted at this node can be the optimal solution, we investigate these i actions. If there is already a known conflict within these actions (e.g., both actions of a visit are set, but the visit violates either its processing-time or its residency-time constraint), we prune the tree at that node, and do not investigate any of the (A − i)! leaves in that sub-tree.

Arm Availability: If we have over-utilized the robot (i.e., used more than two arms) in the first i actions, no solution down this path can be found, and this node can be pruned.

Module Availability: For visits that are common between different paths, we have to make sure that we are not assigning wafers (of different routes) to the same PM simultaneously. If that condition is violated at node x, the node can be pruned.

Time-Constraint Violations: When the positions of both actions of a visit in the permutation are known, we can compute the minimum and the maximum bound on the time that the wafer stays in that module. If the minimum bound is more than tproc_i + tresP_i, or the maximum is less than tproc_i, none of the permutations starting with these actions can lead to a conflict-free solution, and we eliminate such permutations by pruning that node. Also, when the positions of the pick up from a visit and the place into the next visit are known, if the number of actions in between is more than the TM residency of that visit, the sequence cannot lead to a useful permutation, and we can terminate the generation of the remainder of the sequence at this point.

Leaf Evaluation: Each leaf of the tree is a potential schedule. Consider the permutation p1, p2, p3, …, pA of a set of actions, where p1 denotes a pick up action from the CM, and pi (i > 1) denotes the subsequent actions in the schedule. If we can assign starting times a1, a2, a3, …
, aA to these actions such that the following constraints are satisfied, these actions, along with their corresponding times within a period, would be a feasible schedule. Considering the way that we have pruned the tree, we know that leaves do not over-utilize robot arms or process modules, so no such checking is necessary at this stage.

Action overlap: only one action can happen at a time. We model it by the following inequalities: 0 < a1 < a2 < a3 < … < aA < FP.

Process time constraint: wafers should stay in Vi for a minimum time of tproc_i. If pj is a pick up from Vi, and pk is a place into Vi, the following two different cases need consideration:

• if k < j, the pick up happens after the place; the same wafer is placed and picked up in each period, so the time that a wafer stays in this visit is aj − ak − 1. Therefore, aj − ak − 1 ≥ tproc_i should be added to the set of constraints.

• if k > j, the pick up happens before the place; if wafer w is placed in this period, it is picked up in the next period, so the time that a wafer stays in this visit is FP + aj − ak − 1. Therefore, FP + aj − ak − 1 ≥ tproc_i should be added to the set of constraints.

PM residency constraint: wafers can stay in Vi for a maximum time of tproc_i + tresP_i. If pj is a pick up from Vi, and pk is a place into Vi, there are the following two different cases:

• if k < j, aj − ak − 1 ≤ tproc_i + tresP_i should be added to the set of constraints.

• if k > j, FP + aj − ak − 1 ≤ tproc_i + tresP_i should be added to the set of constraints.

TM residency constraint: after picking up a wafer from Vi, it can reside on the arm for at most tresT_i. If pj is a pick up from Vi, and pl is a place into Vi+1, there are the following two different cases:

• if j < l, al − aj − 1 ≤ tresT_i should be added to the set of constraints.

• if l < j, FP + al − aj − 1 ≤ tresT_i should be added to the set of constraints.
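All of these constraints have the form a_x − a_y ≤ c for known constants c, i.e., a system of difference constraints, which is why, as the surrounding text notes, the system can be checked with a shortest-path algorithm. A minimal sketch of that check using Bellman-Ford on the constraint graph (the function name and the toy instance are ours, not the thesis'):

```python
def solve_difference_constraints(n, constraints):
    """Each constraint (i, j, c) means x_j - x_i <= c, i.e. an edge i -> j of weight c.
    Bellman-Ford from a virtual source finds a feasible assignment, or detects a
    negative cycle, which means the system (hence the permutation) is infeasible."""
    source = n
    edges = [(source, v, 0) for v in range(n)]
    edges += [(i, j, c) for (i, j, c) in constraints]
    dist = [0] * (n + 1)
    for _ in range(n + 1):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            return dist[:n]   # dist[j] - dist[i] <= c holds for every constraint
    return None               # negative cycle: no feasible times exist

# Toy instance: two actions with t_proc = 3 and t_resP = 2, so 4 <= a1 - a0 <= 6,
# encoded as a0 - a1 <= -4 and a1 - a0 <= 6.
feasible = solve_difference_constraints(2, [(1, 0, -4), (0, 1, 6)])
infeasible = solve_difference_constraints(2, [(1, 0, -4), (0, 1, 3)])
```

Tightening the second constraint to a1 − a0 ≤ 3 creates a negative cycle (−4 + 3 = −1), which is exactly how an over-constrained permutation shows up in this formulation.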
As we have shown in the previous chapters, for a fixed FP, this system can be solved using the shortest path problem in O(A²). To be able to use this method, we have to fix FP. There are two ways that we can perform the branch-and-bound technique. In the first approach, we start from the IdealFP and search the whole tree (using our pruning techniques) to find out whether a solution with that IdealFP exists. If no solution is found, we increase the FP by one, and perform the search again. We repeat this step until we find a feasible solution. In the second approach, we traverse the tree once, and while evaluating a leaf, we try to find the minimum period for which a feasible schedule exists for that permutation. We know that a feasible schedule with the primitive solution (a schedule that only allows one wafer in the system) exists, and we assign its period as the best-found FP at the beginning. When investigating a leaf, we check all the values from the IdealFP to the currently best-found FP for that node. The best-found FP gets updated whenever a new schedule is found. Our search terminates either when we finish investigating all the potential leaves of the tree, or when we find a solution with the IdealFP. In this approach, when we prune the tree we are not searching for a specific FP; therefore, our pruning techniques have to be changed slightly, and we must assume that a period can be any number between the IdealFP and the best-found FP. As a result, the pruning techniques are not as tight as those of the first approach. If a solution with an optimal FP exists, approach 1 searches the whole search tree (optimalPeriod − IdealFP + 1) times. At each iteration, it searches for one FP. Therefore, it may have to solve as many as (size of search tree) × (optimalPeriod − IdealFP + 1) shortest path problems.
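Approach 1's outer loop is simply an incremental search over candidate periods. A sketch (the `schedule_exists` predicate is our own placeholder standing in for one full pruned branch-and-bound search at a fixed period):

```python
def approach_one(ideal_fp, primitive_fp, schedule_exists):
    """Try each fixed period from IdealFP upward; the first period for which
    the pruned branch-and-bound finds a schedule is optimal, because every
    smaller period has already been ruled out."""
    for fp in range(ideal_fp, primitive_fp + 1):
        if schedule_exists(fp):
            return fp
    return primitive_fp   # the primitive one-wafer schedule always exists

# Hypothetical tool where the smallest feasible period happens to be 9.
fp = approach_one(5, 20, lambda p: p >= 9)
```

The price of this optimality guarantee is the repeated full-tree search described above.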
However, approach 2 traverses the search tree only once, but while evaluating a node, it considers all possible periods (from the IdealFP to the bestFoundFP). Therefore, it may need to solve as many as (size of search tree) × (bestFoundFP − IdealFP + 1) shortest path problems. Because the bestFoundFP is always greater than or equal to the optimal FP, and the size of the search tree is smaller in the first approach (it uses tighter pruning techniques), the first approach needs to solve fewer shortest path problems, and would be faster. However, approach 2 can be used for finding near-optimal solutions. For example, assume that in a case where no solution with the IdealFP exists, a user is willing to spend t seconds searching for a near-optimal solution. We can perform approach 2, stop the search after t seconds, and report the best-found FP as the near-optimal FP. In this case, the algorithm may have found a reasonably good solution. However, if we use approach 1, after t seconds we may still be busy searching the whole tree for the IdealFP, and therefore, the only feasible solution that is known is the primitive solution, with unacceptable performance. The results of our experiments show that if the total number of actions in a period is equal to or more than 16 (e.g., there are 16 actions in two 3-visit routes with one common visit), we may need to spend a long time searching the huge search space for the optimal solution.

Near-optimal Scheduling Techniques

As we said before, for cases where we have to schedule immediately (because a module fails, or a new module is added), it is not acceptable to spend several hours finding the optimal solution. In these cases, we would rather get a near-optimal schedule in a reasonable amount of time. Although approach 2 can be used for finding a near-optimal solution, we get a better result using a randomized search algorithm, for example, Simulated Annealing.
Simulated Annealing is a stochastic search algorithm that is used to find near-optimal solutions for many NP-complete optimization problems, such as the traveling salesman problem [10], the quadratic assignment problem [38], job shop scheduling [12], and assembly line balancing [33]. In a typical Simulated Annealing [10] approach, we start with an initial state of the system and an initial temperature. For each state, we define a fitness value that measures the closeness of that state to the desired solution. The temperature decreases by a factor α, and the search stops when the temperature reaches a threshold. At each temperature, we perform a number of moves. After performing a move, if the fitness value of the resulting state is better than that of the original state, we accept the move. If not, based on the fitness values and the temperature, we may accept or reject the move (i.e., the higher the temperature, the more likely we are to accept it). A permutation represents a state of the system. Its fitness value is equal to the minimum possible period for which a feasible schedule exists for that permutation. Because each state is a permutation of actions, calculating the fitness value is the same as the leaf evaluation of our branch-and-bound technique. Therefore, instead of solving a general LP system (which usually takes several seconds), we perform a number of shortest-path algorithms. We linearly check all the values between the IdealFP and the best-found FP. For permutations for which no feasible schedule exists, we define the fitness value to be the best-found FP plus the number of times that the permutation violates the timing constraints (i.e., the processing time, PM residency time, and TM residency time). For any Simulated Annealing approach, we have to find out what values of the parameters (e.g., the initial temperature, α) yield the best result.
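The acceptance rule just described is the standard Metropolis criterion. A generic sketch of the loop, with defaults matching the cooling schedule used in this work (the function names and the inversion-counting demo problem are illustrative, not from the thesis):

```python
import math
import random

def simulated_annealing(initial, fitness, neighbor,
                        t0=1.0, alpha=0.8, t_min=0.05, moves0=64):
    """Minimize `fitness` by random moves; worse moves are accepted with
    probability exp(-delta / T), so exploration shrinks as T cools."""
    cur, cur_f = initial, fitness(initial)
    best, best_f = cur, cur_f
    t, moves = t0, moves0
    while t > t_min:
        for _ in range(moves):
            cand = neighbor(cur)
            cand_f = fitness(cand)
            delta = cand_f - cur_f
            if delta <= 0 or random.random() < math.exp(-delta / t):
                cur, cur_f = cand, cand_f
                if cur_f < best_f:
                    best, best_f = cur, cur_f
        t *= alpha
        moves = max(1, moves // 2)   # halve the move budget as the temperature drops
    return best, best_f

# Demo: "schedule" a permutation by minimizing its inversion count via random swaps.
def inversions(p):
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def swap_two(p):
    q = list(p)
    i, j = random.sample(range(len(q)), 2)
    q[i], q[j] = q[j], q[i]
    return q

random.seed(0)
start = list(range(8))[::-1]
best, best_f = simulated_annealing(start, inversions, swap_two)
```

In the scheduler itself, the state is an action permutation and `fitness` is the leaf evaluation above; only the acceptance and cooling machinery is generic.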
For our case, we found that we get the best results with the following parameter values: the initial temperature is equal to one, the α factor is equal to 0.8, and the threshold is equal to 0.05. Also, we perform A² moves at the initial temperature. As we decrease the temperature, we halve the number of moves.

Heuristic I: Early Restart

Our Simulated Annealing approach may not find the globally optimal schedule; therefore, if we re-apply the Simulated Annealing, we can investigate more nodes, and find a better solution. One may suggest that, instead of redoing the search, we perform the Simulated Annealing once but allow twice as many moves at each temperature. As our experiments show, we get almost the same result; however, the running time of the algorithm is less when restarting the Simulated Annealing, as compared to doubling the number of moves. This is due to the fact that when investigating a state, we have to perform (bestFoundFP − IdealFP) shortest path algorithms. If we perform Simulated Annealing once, the best-found FP usually improves, so in redoing the search with the improved best-found FP, we spend less time investigating each node.

Heuristic II: Binary Split

Most of the search time is spent finding the fitness value of a node; if we decrease that time, we can investigate more nodes, and find a better schedule. Therefore, instead of linearly searching from the IdealFP to the best-found FP, we use a binary-like searching method. In other words, we first check a period equal to midFP = (IdealFP + bestFoundFP) / 2. If a solution for midFP exists, we limit our search to the range from the IdealFP to the midFP. If no solution for midFP exists, we limit the search to the range from the midFP to the bestFoundFP. Therefore, at each step, after checking a period, we divide the search interval in half. The search stops when the interval size is zero.
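A sketch of this binary split over candidate periods (the `feasible` predicate is our placeholder for one shortest-path evaluation of the permutation at a fixed period; feasibility need not be monotone in the period, so the search can skip some feasible periods):

```python
def binary_split(ideal_fp, best_found_fp, feasible):
    """Halve the [IdealFP, bestFoundFP] interval on each probe instead of
    scanning it linearly; stops when the interval size reaches zero."""
    lo, hi = ideal_fp, best_found_fp
    best = best_found_fp
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            best = min(best, mid)
            hi = mid              # keep searching below midFP
        else:
            lo = mid + 1          # keep searching above midFP
    return best

# With a monotone predicate the result is exact: the smallest feasible period, 17.
fp = binary_split(10, 30, lambda p: p >= 17)
```

Each probe replaces up to half of the remaining (bestFoundFP − IdealFP) shortest-path evaluations, which is the source of the speed-up measured later.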
If during this search we find a solution with a period less than the best-found FP, we update the best-found FP. We know that the existence of a schedule with period p does not imply the existence of a feasible schedule with period p + 1; therefore, by using a binary search, we may lose some feasible solutions. However, our experiments confirm that the time we save using this method outweighs the inaccuracy that it introduces in our fitness function.

Figure 7.4: A two-route configuration with one common visit.

7.2 Analytical and Experimental Evaluation

7.2.1 Analytical Evaluation of Residency-Free Approaches

The residency-free algorithm always finds a feasible schedule for cases without any residency constraint. However, in some cases, the solution period is more than the Ideal FP. In this section, we explain when it finds the optimal schedule. To do so, we consider different cases. First, we focus on having only one common visit between different routes. We start by considering only two routes.

One common visit, and two routes

Consider the configuration shown in Figure 7.4. In this configuration, in route R1, there are a1 visits before the common visit, and b1 visits after the common visit. In route R2, there are a2 visits before the common visit, and b2 visits after the common visit. Visits that are not common between routes have only one exchange in each period; therefore, they always have enough time to process their wafer. However, this may not be the case for common visits. If we look at the formula that we used to calculate the Ideal FP, we see that in a two-route configuration, we assumed that a common visit spends half of the period processing an R1 wafer, and the other half processing an R2 wafer.
Therefore, if we let one of the wafers spend more time in that visit, the next one may not have enough time to get processed, and we may need to increase the period to achieve a feasible schedule. In other words, if our algorithm schedules the actions in a way that the time R1 wafers spend in the visits is equal to the time R2 wafers spend, the schedule is optimal.

Lemma 3: Our algorithm finds the optimal schedule if N1 = N2.

Proof: In our algorithm, if we pick up an R1 wafer from the CM, an R1 wafer is placed into the common visit, and an R2 wafer is picked up from that visit. Therefore, while the R1 wafer is processed in that module, R2 wafers are exchanged in b2 modules. After an R2 wafer is placed in the CM, we have to pick up a new wafer from the CM. Because last time we picked up an R1 wafer, we will pick an R2 wafer this time. While the R2 wafers are exchanged in a2 visits, the R1 wafer is still being processed. Then, an R2 wafer is placed into the common visit. Therefore, the time that the R1 wafer has spent in the common visit is equal to (a2 + b2 + 1) · (tpick + tplace) = N2 · (tpick + tplace). The time that the R2 wafer spends, similarly, is equal to N1 · (tpick + tplace). If these two times are equal, it is guaranteed that the algorithm finds the optimal schedule. In other words, if N1 = N2, the algorithm finds the optimal schedule. If not, based on the processing time of the common visit, the Ideal FP may be increased by at most max(N1, N2) − min(N1, N2). ∎

Figure 7.5: A symmetric three-route configuration.

One common visit, and r routes

As in the two-route case, if the time that the R1 wafer stays in the common visit is the same as the time that the R2, …, Rr wafers stay in that visit, our algorithm finds the optimal solution.
It is easy to show that for the above condition to be true, the following should hold:

a_i + b_{(i mod r)+1} = a_j + b_{(j mod r)+1} for all 1 ≤ i, j ≤ r.

A special case of the above formula is a_i = a_j and b_i = b_j for all i, j.

N common visits, and r routes

We assume that N visits are common to all routes. Also, we assume that all routes visit the common visits in the same order (i.e., our algorithm does not solve a two-route configuration where one route is V1, V2, and the other route is V2, V1). An approach similar to the "one common visit and r routes" approach can be used to find, for each N and r, the formula that shows when our algorithm finds the optimal solution. However, here we look at a special configuration that is more common in practice. We define a configuration to be symmetric if the number of visits between any two consecutive common visits is the same in all routes. Figure 7.5 shows a symmetric configuration with 3 common visits. We claim that our algorithm finds the optimal solution for symmetric configurations with any number of routes and visits. This is due to the fact that when a wafer is processed in a common visit such as C1, b_i, c_j, d_k, and a_l exchanges happen before the next wafer is placed in that common visit. If a_i = a_j, b_i = b_j, c_i = c_j, and d_i = d_j for all i, j, which is the condition for a symmetric configuration, the time that a wafer stays in a common visit is the same for all routes. Therefore, our algorithm finds the optimal solution.

7.2.2 Residency-Aware Schedulers

Experiment Design

The topology of a multi-route cluster tool is defined by the number of routes and the location of common visits in the different routes. The optimal scheduling techniques take a very long time (i.e., several days) to find the optimal solution for cases that have more than 16 actions in a period.
Due to the fact that we want to find out how good the results of our near-optimal techniques are, we limit the complexity of our generated topologies, so that the optimal algorithms can complete within reasonable times. We performed our experiments only on 2-route models, and we allowed the routes to have either two or three visits; we generated all such topologies. To assign tproc_i, tresP_i, and tresT_i to each visit, we assume that tproc_i = M_i · k, where M_i is a random number generated according to a normal distribution N(μ, σ). In this distribution, μ is a number between 1 and 30, and σ is equal to μ. For each topology, we generate three hundred different cases (ten for each μ). After assigning the tproc_i's, the maximum processing time is determined. We assign to each Vi a value of tresP_i that is a uniform random number between 0 and (maximumProcessingTime − tproc_i) · M_i. Also, for each Vi, we assign a value of tresT_i that is a uniform random number between 0 and (maximumProcessingTime − tproc_i − tresP_i) · M_i. In our experiments, we assume tpick = tplace = 1. By these arrangements, 2699 cases are generated and run on a Sun SPARC station (Ultra 10) with 256 MB of memory running Solaris 5.7. To measure the throughput we define a factor, the performance factor, equal to (foundFP − optimalFP)/optimalFP. The performance factor shows how close the schedule is to the optimal schedule. The optimal FP is determined by the optimal algorithms.

Experiment Results

Optimal Scheduling Techniques

Approach 2 took at most 1400 seconds to find the optimal solution for our cases. However, for 50% of the cases, it took at most one second, and for 85% of the cases, it took less than two minutes.

Near-optimal Scheduling Techniques

Figure 7.6 shows a comparison between the primitive schedule (i.e.
the schedule that allows only one wafer in the system at any time) and the result of multiple runs of our Simulated Annealing technique. The X axis represents the performance factor, and the Y axis shows the percentage of cases that have better or equal performance. As the figure shows, if we restart Simulated Annealing once, we get a 10% improvement in performance on average. It may take up to 140 seconds to run Simulated Annealing three times with our parameters. When we used a binary search instead of a linear search, we found that it reduces the algorithm's running time by a factor of 5.5. Therefore, to compare the effect of a binary search, we decided to allow five times more moves in our binary search. Figure 7.7 compares the performance of the linear search and the binary search. As shown, we get an improvement of 20% with the binary approach. The running time of the binary search is at most 125 seconds, while the running time of the linear search is 140 seconds.

Figure 7.6: Performance comparison between a primitive solution and multiple runs of Simulated Annealing.

Chapter 8

Conclusions and Future Work

In this thesis, we addressed and solved several open problems in cluster tool scheduling. We introduced a new timing constraint for cluster tools, called the residency constraint. The residency constraint is a limit on how long a wafer can remain in a processing module, or reside on the robot arm, after being processed; it depends on the nature of the process and differs significantly from one process module to another. This timing constraint complicates the scheduling problem significantly.
In this thesis we have considered several models of cluster tools, ranging from a simpler model, a single-route model with only PM residency constraints, to more complicated ones, such as a single-route model with PM and TM residency constraints and a buffer module, to the most complicated one, a multi-route model that allows different wafers to visit different PMs in their paths. For the single-route model with only PM residency constraints, we provide a scheduling technique that performs an exhaustive search in the time and resource domains. We add several powerful optimization techniques, so that the optimal schedule is found in a reasonable time. In a model where both PM residency and TM residency are present, an extension of the previous approach, with a complexity depending on the processing times, may take a very long time to find the optimal schedule and is not practical. Therefore, we decided to look for a scheduling technique whose complexity is independent of processing times. We devised a technique that schedules all the robot actions in a period in the steady state. Due to the periodic nature of the schedules, if these robot actions do not conflict with each other and satisfy our constraints, a feasible schedule is found. To make sure that we find the optimal schedule, we start from the period that can be achieved in the absence of residency constraints (Ideal FP), and increase it when no solution with that period exists. To find a feasible permutation, all permutations of robot actions must be considered and investigated. For a six-visit configuration, there can be as many as 14! ≈ 8.7 × 10^10 permutations that should be investigated. However, we perform an intelligent permutation-generation process that takes into account our requirements, and eliminates infeasible permutations as early as possible.
For example, any permutation that starts with three pick actions cannot be a feasible schedule, because we only have two robot arms available. To evaluate each state for a fixed period, we have to solve a linear-programming system. We alter the LP system to make it a system of difference constraints so that it can be solved with shortest path algorithms. Our experiments show that this technique finds the optimal schedule in 4 seconds for the worst generated case. In another model, we introduced the use of a buffer module (BM), which is usually available for maintenance reasons, to free up the resources and achieve a better throughput. There exists a trade-off in using the BM: on one hand, it frees up the resources, but on the other hand, it adds two extra robot actions. Therefore, it can only be used when the PMs are the bottleneck and not the TM. To find the schedule that uses the BM, we first perform a search similar to the previous techniques but assume that there are three arms, and then we try to replace the third arm with the BM. Our experiments show that in 57% of the cases where the period has to be increased due to residency constraints, use of the BM helps us achieve a better throughput. The last model that we considered is the multi-route model. In this model, more than one product can be produced simultaneously with the same tool configuration. For this model, we provide both residency-free and residency-aware scheduling techniques. In the residency-free approach, we use a greedy algorithm that finds the optimal schedule in many common cases. However, there are some cases in which the greedy algorithm finds a near-optimal schedule. For the residency-aware model, we provide two techniques: optimal and near-optimal. The optimal technique is an extension of the single-route approach and can take a long time; therefore, it should only be used when the tool is scheduled off-line.
However, during tool operation, when a change in the tool configuration occurs (e.g., a module fails, or a new module is added), we should be able to re-schedule the tool in a reasonable time and cannot afford to spend a long time looking for the optimal schedule. In this situation, we would rather have a near-optimal schedule, and we decided to use Simulated Annealing (SA) to find the schedule. We modified the SA approach significantly to arrive at good schedules. In one version, the algorithm performs the SA technique more than once but allows fewer moves at each temperature. This technique improves performance because, after the first round of SA is solved, less time is needed to evaluate each state in the second round. Another technique also tries to reduce the time needed for state evaluation. In this technique, instead of linearly searching all periods from the Ideal period to the best-found period and solving an LP system for each of them, we perform a binary-like search. This technique reduces the state-evaluation time significantly, but it introduces an inaccuracy in the fitness value. However, the time that it saves enables us to evaluate more states, and find a better schedule. As a result of combining all these techniques, our experiments show that we can find a near-optimal schedule, with a period that in 92% of the cases is only 20% larger than the optimal period, in less than one-tenth of the time.

8.1 Future Work

We defined the residency constraint to be a window of time within which a wafer can be picked up from the module and still be of acceptable quality. In our approaches, we do not favor a schedule that picks up the wafer sooner over one that picks up the wafer later. However, even if we satisfy the residency constraint, for some processing modules there can be a difference in product quality depending on when the wafer is picked up.
In the future, we can investigate the nature of processing modules to find out if there are such processing modules, and provide a model that takes that into account. As mentioned before, when a change in the cluster tool's configuration occurs (e.g., a new module is added), we need to re-schedule the tool. In our schedules, we assume that we always start from an empty configuration (i.e., there is no wafer in the tool). Therefore, to be able to apply the new schedule, we have to wait for the wafers that are already in the system to finish their paths and exit the tool before we can apply the new schedule and start processing the new wafers. In the future, we can investigate a model that allows the tool to be non-empty when the first wafers enter the system, to address this case. In the multi-route model, we used Simulated Annealing as the base of our near-optimal scheduling technique. In the future, we can investigate other near-optimal scheduling techniques to find out if they help us achieve a better solution. Also, in our Simulated Annealing technique, we assumed that the temperature is reduced at a fixed rate. Adaptive cooling techniques may help us achieve a better schedule and can be investigated in the future. In our multi-route model, we assumed that we have R routes available, and that 1/R of the wafers belong to each route. Although this has been the model that was of interest to industry, in the future we can investigate whether there are cases where more general multi-route models are needed, and provide scheduling techniques for those models. Additional models of cluster tools can be investigated as well. Models such as a multi-visit model, which allows the same visit to be re-visited in the wafer's route, add more flexibility to the cluster tool model and may be required for some production lines. For these models we may have to provide both optimal and near-optimal scheduling techniques.
In this thesis, we have only focused on scheduling one cluster tool, and have not looked at models that allow multiple cluster tools to collaborate with each other. In the future, we can look into scheduling multiple collaborative cluster tools.

Bibliography

[1] H. Chen, C. Chu, and J.M. Proth. Cyclic scheduling of a hoist with time window constraints. IEEE Trans. Robotics and Automation, 14:144-152, 1998.

[2] L.F. Atherton and R.W. Atherton. Wafer Fabrication: Factory Performance and Analysis. Boston: Kluwer Academic, ch. 8, 1995.

[3] M.E. Bader, R.P. Hall, and G. Strasser. Integrated processing equipment. Solid State Technol., 33:149-154, 1990.

[4] P. Burggraaf. Coping with the high cost of wafer fabs. Semicond. Int., 38:45-50, 1995.

[5] D. Camporese, B. Hamidzadeh, and S. Rostami. Residency constrained cluster tool scheduling. Proceedings of the AEC/APEC Symposium, 1999.

[6] T.H. Cormen, C.E. Leiserson, and R.L. Rivest. Introduction to Algorithms. The MIT Press, 1990.

[7] G.B. Dantzig. Linear Programming and Extensions. Princeton University Press, 1963.

[8] M.A. Dummler. Using simulation and genetic algorithms to improve cluster tool performance. Proceedings of the 1999 Winter Simulation Conference, 1999.

[9] J.W. Herrmann, N. Chandrasekaran, B.F. Conaghan, M. Nguyen, and G.W. Rubloff. Evaluating the impact of process changes on cluster tool performance. IEEE Trans. on Semiconductor Manufacturing, 13(2):181-192, May 2000.

[10] S. Kirkpatrick, C.D. Gelatt, and M.P. Vecchi. Optimization by simulated annealing. Science, 220:671-680, May 1983.

[11] E.J. Koehler and M.S. Seppanen. Evaluation of cluster tool throughput for thin film head production. Proceedings of the 1999 Winter Simulation Conference, 1999.

[12] P.J.M. van Laarhoven, E.H.L. Aarts, and J.K. Lenstra. Job shop scheduling by simulated annealing. Operations Research, 40:113-125, 1992.

[13] J. Lamothe, M. Correge, and J. Delmas.
A dynamic heuristic for the real-time hoist scheduling problem. Proceedings of the International Conference on Emerging Technologies and Factory Automation, 2:161-168, 1995.

[14] H.T. LeBaron and M. Pool. The simulation of cluster tools: a new semiconductor manufacturing technology. Proceedings of the 1994 Winter Simulation Conference, 1994.

[15] L. Lei and T.J. Wang. A note: The single-hoist cyclic scheduling problem is NP-complete. Rutgers University, GSM Working Paper No. 1989-16, 1989.

[16] L. Lei and T.J. Wang. The minimum common-cycle algorithm for cyclic scheduling of two material handling hoists with time window constraints. Management Science, 37:1629-1639, 1991.

[17] B. Lemmen, E.J.J. van Campen, H. Roede, and J.E. Rooda. Cluster tool optimization through scheduling rules. Proceedings of the 1999 IEEE International Symposium on Semiconductor Manufacturing Conference, pages 89-92, 1999.

[18] M.J. Lopez and S.C. Wood. Performance models of systems of multiple cluster tools. Proceedings of the IEEE International Electronics Manufacturing Technology Symposium, 1996.

[19] M.J. Lopez and S.C. Wood. Systems of multiple cluster tools: Configuration and performance under perfect reliability. IEEE Transactions on Semiconductor Manufacturing, 11(3):465-474, 1998.

[20] H.L. Oh. Reducing complexity of wafer flow to improve quality and throughput in a single-wafer cluster tool. Proceedings of the IEEE/CPMT Electronics Manufacturing Technology Symposium, pages 378-388, 1999.

[21] H.L. Oh. Conflict resolving algorithm to improve productivity in single-wafer processing. Proceedings of the International Conference on Modeling and Analysis of Semiconductor Manufacturing (MASM 2000), 2000.

[22] T.L. Perkinson, R.S. Gyurcsik, and P.K. McLarty. Single-wafer cluster tool performance: an analysis of the effects of redundant chambers and revisitation sequences on throughput. IEEE Trans. Semiconductor Manufacturing, 9:384-400, 1996.

[23] T.L.
Perkinson, P . K . McLarty, R.S. Gyurcsik, and R . K . Cavin III. Single-wafer cluster tool performance: an analysis of throughput. IEEE Trans. Semiconductor Manufacturing, 7:369-373, 1994. [24] L . W . Phillips and P.S. Unger. Mathematical programming solution of a hoist scheduling program. AIIE Trans., 8:219-225, 1976. [25] S. Poolsup and S. Deshpande. Cluster tool simulation assists the system design. Proceedings of the 2000 Winter Simulation Conference, 2000. [26] S. Rostami and B. Hamidzadeh. A n optimal residency-aware scheduling technique for cluster tools with buffer module. Submitted to IEEE Trans, on Semiconductor Manufacturings. [27] S. Rostami and B . Hamidzadeh. Simulated annealling approach for scheduling semiconductor manufacturing tools that support mutiple prod-ucts. Submitted to IEEE Trans, on Systems, Man, and Cybernetics. [28] S. Rostami and B . Hamidzadeh. Optimal scheduling techniques for cluster tools with process-module, and transport-module residency constraints. To appear in IEEE Trans, on Semiconductor Manufacturing, 15(3), A u -gust 2002. [29] S. Rostami and B . Hamidzadeh. Optimal scheduling techniques for cluster tools with process-module, and transport-module residency constraints. Proceedings of IEEE International Conference on Robotics, and Automa-tion, May 2001. 119 [30] S. Rostami and B. Hamidzadeh. A n optimal scheduling technique for dual-arm cluster tools with buffer modules. Proceedings of IEEE International Conference on System, Man, and Cybernetics, October 2001. [31] S. Rostami, B . Hamidzadeh, and D . Camporese. A n optimal periodic scheduling technique for dual-arm robots in cluster tools with process-module residency constraint. Proceedings of the 39th IEEE Conference on Decision and Control, December 2000. [32] S. Rostami, B . Hamidzadeh, and D . Camporese. A n optimal periodic scheduler for dual-arm robots in cluster tools with residency constraints. IEEE Trans, on Robotics and Automation, 17(5):609-618, October 2001. 
[33] S. Sahu S. Suresh. Stochastic assembly line balancing using simulated annealling. International Journal of Production Research, 32:1801-1810, 1994. [34] G . W . Shapiro and H . L . W . Nuttle. Hoist scheduling for a pcb electroplat-ing facility. HE Trans., 20:157-167, 1988. [35] Singer. The driving forces in cluster tool improvement. Seconductor Int., 18(8):113-118, 1995. [36] R.S. Srinivasan. Modeling and performance analysis of cluster tools using petri nets. IEEE Trans. Semiconductor Manufacturing, 11:304-403, 1998. [37] S. Venkatesh, R. Davenport, P.Foxhoven, and J . Nulman. A steady state throughput analysis of cluster tools: Dual-blade versus single-blade robots. IEEE Trans. Semiconductor Manufacturing, 10:418-424, 1997. [38] M . R . Wilhelm and T . L . Ward. Solving quadratic assignment problems by simulated annealing. HE Transactions, 19:107-119, 1987. [39] S .C. Wood. The impact of single-wafer processing on fab cycle time. Proceedings of the IEMT Symposium, 1995. [40] S.C. Wood. Simple performance models for integrated processing tools. IEEE Trans. Semiconductor Manufacturing, 9:320-328, 1996. 120 [41] S .C. Wood, S. Tripathi, and F . Moghadam. A generic model of cluster tools throughput time and capacity. Proceedings of Advanced Semicon-ductor Manufacturing, pages 194-199, 1994. [42] S.J Y i m and D . Y . Lee. Scheduling cluster tools in wafer fabrication using candidate list and simulated annealing. Proceeding of IEEE Int. Confer-ence on Systems, Man and Cybernetics, 1998. " [43] S.J Y i m and D . Y . Lee. Scheduling cluster tools in wafer fabrication using candidate list and simulated annealing. Journal of Intelligent Manufac-turing, 10:531-540, 1999. [44] G . H . Young and C . Chan. Single-vehicle scheduling with time window constraints. Journal of Scheduling, 2(4):175-87, 1999. [45] W . M . Zuberek. Petri net modeling and performance analysis of cluster tools with chamber revisiting. Proceedings. 
2001 8th IEEE International Conference on Emerging Technologies and Factory Automation, 2, 2001. [46] W . M . Zuberek. Timed petri nets in modeling and analysis of cluster tools. IEEE Trans. Robotics, and Automation, 17(5):562-575, Oct. 2001. 121 
