On-line optimal and adaptive control of a queuing system. Yuan, Joseph Sze-Chiang, 1972.

Full Text

ON-LINE OPTIMAL AND ADAPTIVE CONTROL OF A QUEUING SYSTEM

by

JOSEPH SZE-CHIANG YUAN

B.A.Sc., University of British Columbia, 1970

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in the Department of Electrical Engineering

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA

August 1972

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my Department or by his representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical Engineering, The University of British Columbia, Vancouver 8, Canada

ABSTRACT

A number of on-line control methods have been studied for the operational control of a queuing system. Time-series models have been used, in contrast to the probability models usual in the traditional approach to such problems. It is shown that most queuing processes can be formulated as multistage control problems to which modern control theory can be applied. The various control techniques applicable to a queuing system can be divided into two classes: decision and regulator control. In obtaining the control strategies, this thesis draws heavily on dynamic programming, least-squares estimation, the discrete maximum principle and gradient techniques. The uncertainties encountered in the queuing system can be overcome with an adaptive control method. The open-loop-feedback-optimal control technique has been stressed here due to its simplicity. Applications of the methods to various fields have also been studied. Extension of the method to long-interval control is immediate in all the cases. Although the optimal control of a queuing system has been discussed, the methods are general enough to be applied to other areas.

TABLE OF CONTENTS

ABSTRACT
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
ACKNOWLEDGEMENT
1. INTRODUCTION
1.1 Queuing System, Queuing Models and the Classical Queuing Theory
1.2 Outline of Thesis
2. A GENERALIZED QUEUING MODEL
2.1 Introduction
2.2 A Time-series Model for the Queuing System
2.3 Operational Control of Queuing Systems
2.4 Adaptive Control of Queuing Systems
2.5 Discussion
3. OPTIMAL DECISION CONTROL FOR QUEUING SYSTEMS
3.1 Introduction
3.2 Mathematical Formulation of the Optimal Decision Control Problem
3.3 An Optimal Decision Control Algorithm
3.4 An Illustrative Example
3.5 Application I
3.5.1 An Optimal Decision Control Problem for Queuing Channels
3.5.2 Example - Optimal Queuing of Computer Batch Jobs
3.5.3 Remarks
3.6 Application II
3.6.1 An Optimal Decision Control Problem for Service Channels
3.6.2 Example - Optimal Control of Traffic Congestion at an Intersection
3.6.3 Remarks
3.7 Discussion
4. OPTIMAL AND ADAPTIVE REGULATOR CONTROL FOR QUEUING SYSTEMS
4.1 Introduction
4.2 An On-line Identifier for Queuing Systems
4.3 An On-line Adaptive Controller for Queuing Systems
4.4 Application I
4.4.1 Optimal Traffic Control for a Highway Merging Problem
4.4.2 Remarks
4.5 Application II
4.5.1 Optimal and Adaptive Scheduling of Operations in a Hospital
4.5.2 Remarks
4.6 Discussion
5. CONCLUSIONS AND SUGGESTIONS
5.1 Conclusions
5.2 Suggestions
APPENDICES
I. The Discrete Maximum (Minimum) Principle
II. Dynamic Programming Formulation of the Optimal Decision Control Problem
III. Matrix Least Squares State Estimation
IV. An Adaptive Feedback Control Algorithm via Dynamic Programming
REFERENCES

LIST OF TABLES
3.1 Complete Dynamic Programming Solution for Queuing-Channel Decision Control Example
3.2 Optimal Decision Control Solution for Example in 3.4
3.3 Optimal Queuing Policies for Computer Batch Jobs
4.1 Optimal Control for the Highway Merging Problem with p = 10
4.2 Optimal Control for the Highway Merging Problem with p = 25

LIST OF FIGURES
2.1 A Fundamental Queuing System
2.2 A Simple Adaptive Control System
2.3 Adaptive Control for a Simple Queuing System
3.1 Convergence of the Performance Function for the Computer-job Queuing Problem
3.2 Convergence of the Performance Function for the Traffic Intersection Control Problem
3.3 Complete Result for the Traffic Intersection Control Problem
3.4 Open-loop Control Cost versus Noise Index
4.1 Merging Points on a Highway
4.2 The Relationship between the Total Cost and Junction Delay Time for the Highway Merging Problem
4.3 The Queuing Phenomenon for a Surgery Process in a Hospital
4.4 The Arrival Sequences for the Optimal and Adaptive Operation-Scheduling Problem
4.5 On-line Identification of C_k
4.6 Comparison Between the Optimal Feedback Control State Trajectory and the Adaptive Control State Trajectory
4.7 Comparison Between the Optimal Feedback Control Gain and the Adaptive Control Gain
4.8 Optimal Scheduling of Operations with C_k Known
4.9 Adaptive Scheduling of Operations with Unknown C_k
4.10 Suboptimal Scheduling of Surgery with Unknown Arrivals {λ_k}
4.11 Suboptimal Control State Trajectory with Unknown Arrivals {λ_k}
4.12 Suboptimal Control Feedback Gain with Unknown Arrivals {λ_k}

ACKNOWLEDGEMENT

The author wishes to express his sincere gratitude to Professors E.V. Bohn and M.S. Davies, who supervised this research. The financial assistance given by the National Research Council of Canada in the form of Scholarships awarded in 1971 and 1972 is gratefully acknowledged. Thanks are also due to Miss Emily Chow, who typed the initial draft, and Miss Norma Duggan, who prepared the manuscript for the thesis.

1. INTRODUCTION

Queuing occurs in many facets of our daily lives.
To catch a bus, we join a queue. To place a telephone call, we join a queue. Indeed, it is impossible to survive today without involuntarily participating in some sort of queuing process. As society gets more complex, so do the queuing processes one encounters. Consequently, the study of queuing processes is an important feature of the way man interacts with his environment.

1.1 Queuing System, Queuing Models and the Classical Queuing Theory

A queuing system, or a stochastic service system (to include cases where the theory of queues has been applied to situations in which no physical queues actually exist), is a process of "customers" waiting for service from "servers". The terms "customers" and "servers" were inherited from earlier researchers whose problems were mostly in connection with the telephone business. The theory of queues was established about half a century ago to study the behaviour of the mathematical models developed for the queuing process.

All queuing theory models are based upon the three major characteristics of the queuing processes: the input process, the service mechanism and the queue discipline. With the field of statistical analysis already firmly established, it is not surprising to find that all the classical queuing models are probability models, that is, models in which emphasis is placed upon the probability distribution itself, in contrast to time-series models where the actual values of some parameters are considered (9). Subsequently, mathematicians have been able to apply queuing theory successfully to obtain elegant analytic results for a wide variety of queuing processes.

Nevertheless, as Saaty has said: "The subject of queuing is not directly concerned with optimization. Rather it attempts to explore, understand, and compare various queuing situations and thus indirectly achieve optimization approximately." The traditional spirit of the prolific amount of research on queuing systems has been towards performance analysis rather than the control aspect.

An inefficiently controlled queuing process can be very frustrating and exasperating. From the engineer's point of view, it is obviously desirable to obtain a compatible optimization technique for the queuing processes one encounters. Just as the classical queuing theory works towards better control through understanding the behaviour of a queuing process, today's control engineer should aim at implementing better control through his knowledge of the well-developed theory of optimal control.

1.2 Outline of Thesis

This thesis is an attempt to apply modern control theory to a queuing process. Particular emphasis is placed upon two different types of on-line control strategies: decision and regulator control. Several applications are included to demonstrate the versatility of the approach. The possibility of extending the method to adaptive control systems is also investigated. The structure of the thesis is as follows.
The purpose of the first part of Chapter 2 is to demonstrate the real-time approach to the queue-modelling problem, where the classical approach has been via probability modelling. A time-series model is developed for a simple queuing process. Operational control strategies, in particular the decision and regulator types, are also discussed qualitatively. An adaptive control system is also proposed.

Chapter 3 is devoted to a detailed discussion of decision control for queuing systems. The discrete Maximum (Minimum) Principle will be applied extensively to obtain the necessary conditions for optimality. A gradient technique is used to compute the solution of the resulting two-point boundary value problem. This control algorithm is then applied to two optimal ordering problems in queuing and service channelling. Several numerical examples, a traffic control problem and a computer queuing problem, are presented. Extension of the method to long-term applications is discussed, using an adaptive technique known as the "open-loop-feedback-optimal" strategy.

In Chapter 4 the regulator type of control is discussed. This class of control is mainly confined to server-rate control. An on-line adaptive control method is proposed: dynamic programming is employed for optimization while the system is simultaneously identified by least-squares methods. A suboptimal, but feedback, control is derived. The control strategy is then applied to a traffic control problem and an operation-scheduling problem in hospitals.

The major conclusions and some suggestions for further research in this area are presented in Chapter 5.

2. A GENERALIZED QUEUING MODEL

2.1 Introduction

The main concern of this section is to develop a mathematical model for a general queuing system with provision for on-line control. Most of the classical queuing models have been probability models that lead to results in terms of probabilities and expected values. In recent years, there has been a tendency towards time-series modelling of queuing systems. However, the major attempts have been aimed at performance analysis rather than operational control of queuing systems. Although a number of researchers have applied control theory to the classical probability model (2, 3, 4), the results they obtained were more tailored for system design: off-line rather than on-line control.

2.2 A Time-series Model for the Queuing System

It is impractical to develop a highly generalized mathematical model that includes all queuing situations. However, it is important to obtain a generalized queuing model which includes all the general characteristics of a queuing process, i.e. input process, queue discipline and service mechanism. The simplest queuing process consists of:

1. M parallel streams of arriving customers
2. N parallel queuing channels or waiting lines
3. P parallel service channels or servers

The queuing phenomenon for such a system is depicted in Fig. 2.1. We shall refer to it as an elementary system unit because it contains all the essential features of a queuing system. More sophisticated systems can then be represented as a cascade or network of such queuing units.

[Fig. 2.1 A Fundamental Queuing System: M arrival streams feeding N parallel queues served by P servers]
A mathematical model for such a unit will be developed shortly. But first of all, the following remarks are appropriate:

1) In the classical queuing model, to obtain useful analytical results, certain probability distribution functions must be assumed on the input (Poisson) and the service processes (negative-exponential). No such assumptions will be made here.

2) For on-line control purposes, it is suitable to have a mathematical model whose dynamics are depicted in discrete time units.

3) Define the following quantities:

{..., k-1, k, k+1, ...}: discrete sampling time instants, not necessarily equally spaced in time

$q_k \in R^N$: the number of customers waiting in the N queuing channels at time k

$s_k \in R^P$: the number of customers each of the P servers is capable of servicing during [k, k+1)

$\lambda_k \in R^M$: the number of customers arriving from the M input streams during the interval [k, k+1)

Thus, the dynamics of the queuing process unit (Fig. 2.1) can be represented by the model

$$q_{k+1} = q_k + B_k \lambda_k - C_k s_k \qquad (2\text{-}1)$$

where $B_k$ is an (N x M) matrix $[b_{ij}]_k$ in which $b_{ij}$ is the fraction of the arriving customers from the j-th input channel who joined the i-th queuing channel during [k, k+1), and $C_k$ is an (N x P) matrix $[c_{il}]_k$ in which $c_{il}$ is the fraction of the customers served by the l-th server during [k, k+1) that had come from the i-th queuing channel.

One can detect almost immediately the difference between this generalized queuing model and the classical probability model. Most classical models treat the evolution of the state of a queuing process as a stochastic process. For instance, instead of using a queue size $q_k$ as the state, $P(q_k \le n)$ would be used as the state of the model. For real-time control, a deterministic formulation of the stochastic queuing process would be more convenient. However, as the queuing process is basically a highly stochastic process, the deterministic model would be inadequate for describing the process over a long time period. Over a long span of time, in order to preserve a correct representation of the actual process, corrective adjustments would have to be made to the model. Thus arises the need for an adaptive control model.
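To make (2-1) concrete, the short sketch below propagates the deterministic model for a hypothetical elementary unit with two arrival streams, two queues and one server. The routing matrices, the arrival and service sequences, and the clipping of queues at zero are assumptions introduced purely for illustration; they are not values taken from the thesis.

```python
import numpy as np

def simulate_unit(q0, arrivals, services, B, C):
    """Propagate q_{k+1} = q_k + B λ_k - C s_k for a single queuing unit."""
    q = np.asarray(q0, dtype=float)
    history = [q.copy()]
    for lam, s in zip(arrivals, services):
        q = q + B @ lam - C @ s
        q = np.maximum(q, 0.0)      # a queue cannot go negative; unused capacity is simply lost
        history.append(q.copy())
    return np.array(history)

# Hypothetical unit: M = 2 input streams, N = 2 queues, P = 1 server.
B = np.array([[1.0, 0.3],           # stream 1 joins queue 1; 30% of stream 2 joins queue 1
              [0.0, 0.7]])          # the remaining 70% of stream 2 joins queue 2
C = np.array([[0.5],
              [0.5]])               # the single server splits its effort evenly between the queues
arrivals = [np.array([3.0, 2.0])] * 10
services = [np.array([6.0])] * 10
print(simulate_unit([5.0, 5.0], arrivals, services, B, C))
```

In a statistical reading of the model each column of B sums to one, since every arriving customer joins some queue; the sketch keeps that convention.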
For instance, there may be a cost associated with the queue.size^.in .another s i t u a t i o n we might want to maximize the use of a server though assigning 8 a cost to each i d l e moment of the s e r v e r . However, to define a cost at t h i s stage would cause our g e n e r a l i z e d queuing model to l o s e much of i t s g e n e r a l i t y . In the context of the g e n e r a l i z e d queuing model, the f o l l o w i n g remarks are app r o p r i a t e : 1) The matrices B, , C, i n f a c t represent the queuing and s e r v i c e d i s c i p l i n e of .the system. For example, b „ = 0 None of the customers from j * " * 1 input . . .th j o i n s the l queue c ^ = 0 =v* 1 se r v e r not s e r v i n g customers c *.u -th from the l queue In a s t a t i s t i c a l sense, b ^ i s the p r o b a b i l i t y of an a r r i v a l r , . th . . . . ... . th . , customer from the j input j o i n i n g the I queue; c ^ i s the p r o b a b i l i t y of a customer "in the queue bein g served "by the . t h 1 s e r v e r 2) From the c o n t r o l p o i n t of view, any type of d e c i s i o n c o n t r o l on the queuing or s e r v i c e d i s c i p l i n e would d i r e c t l y a f f e c t the t r a n s i t i o n matrices B^ and C^. 3) In our g e n e r a l i z e d model, { X ^ } , the a r r i v a l sequence i s represented as a disturbance i n p u t whose dynamics are unknown to the c o n t r o l l e r . However, as f a r as the c o n t r o l of queues i s concerned, such knowledge i s not e s s e n t i a l . Consequently, throughout the r e -mainder of t h i s t h e s i s , i t s h a l l be assumed that i n f o r m a t i o n on the a r r i v a l sequence {X^\ ( e i t h e r p r e d i c t e d or scheduled) w i l l be a v a i l a b l e to the c o n t r o l l e r . A) In the case of t o t a l l y unknown {X }, the question of op t i m a l K. 9 prediction can e a s i l y be studied through time-series analysis and a number of operations research techniques (9>10,11) w k i c h are beyond the scope of t h i s thesis • -2.4 Adaptive Control of Queuing Systems The philosophy behind the generalized queuing model (2-1) aids i n the implementation of different types of control to a queuing system. As mentioned e a r l i e r , the ..deterministic nature .of the .model i s inade-quate to represent the actual queuing process which i s b a s i c a l l y a sto-chastic system. I n i t i a l information on such a system i s usually absent, or at best based on estimations. An e f f e c t i v e control would be the adaptive control process, i n which the state of knowledge about the sys-tem improves as i t evolves. This updating of information i s carried out by a learning mechanism. A simple adaptive control system (Fig. 2.2) would consist of a sys'tem i d e n t i f i e r , which i d e n t i f i e s the unknown system parameters i n the system> and a feedback controller which optimizes the performance of the system. A parametrized control i s usually obtained. A schematic representation of an adaptive control system for the queuing model i s shown i n Fig. 2.3. The two classes of control, regulator and decision type, are applied simultaneously to achieve an o v e r a l l on-line control for the queuing process. There, are many instants i n the control of a queuing system where an adaptive scheme would be useful. Typical examples are found . i n cases where the actual a r r i v a l sequence may d i f f e r s i g n i f i c a n t l y from the predicted values due to some non-anticipated disturbances. 
In the case where the regulator type of control i s carried out simultaneously but separately from decision control, knowledge of the effect of each control on the model parameters helps to establish an ov e r a l l optimal control input UNKNOWN S Y S T E M system output FEEDBACK CONTROLLER T S Y S T E M IDENTIFIER <3— F i g . 2 . 2 A Simple Adaptive Control System system -measurements -system in put Q U E U I N G S Y S T E M r queuing channel decision control queues SYSTEM CONTROL service channel decision control servers 5 1 servic e rate control system identifier <3-F i g . 2 . 3 Adaptive Control f o r a Simple Queuing System 11 control for the system. 2.5 Discussion A generalized queuing model which includes a l l the basic fea-tures of a t y p i c a l queuing process has been developed. Using such a model, i t i s possible to obtain various control strategies to the queuing system. The complexity of most queuing systems makes i t necessary to separate the applicable control strategies into two major types: decision and regulator control. In the remainder of this t h esis, these two types of control w i l l be considered separately. Adaptive control techniques can be applied to both methods. 12 3. OPTIMAL DECISION CONTROL FOR QUEUING SYSTEMS 3.1 I n t r o d u c t i o n D e c i s i o n c o n t r o l i n a queuing system a r i s e s whenever a s e l e c t i o n process e x i s t s i n the queuing order or s e r v i c e d i s c i p l i n e . Customers e n t e r i n g i n t o a queuing system are faced w i t h a number of p o s s i b l e queues; customers emerging from queues are faced w i t h the choice of a s e r v e r . A good example i s provided by a shopper checking out through a number of ca s h i e r s i n a supermarket. Before j o i n i n g any queue, each shopper makes a d e c i s i o n as to which queue would provide the f a s t e s t s e r v i c e . Very o f t e n a short queue does not imply f a s t s e r v i c e . This process of o p t i m a l s e l e c t i o n can be formulated as a d e c i s i o n c o n t r o l problem. I r e l a n d e,t _ a i . and Fsogbue^"^ have handled .problems of .this nature through the a p p l i c a t i o n of c o n t r o l to the c l a s s i c a l p r o b a b i l i t y model. A t i m e - s e r i e s model w i t h o n - l i n e c o n t r o l features w i l l be des-c r i b e d here. 3.2 Mathematical Formulation of the Optimal D e c i s i o n C o n t r o l Problem Consider a d e c i s i o n c o n t r o l system which can be expressed as x k + l ~ X k = f ^ \ ' \ ^ k = 1,...,K (3-1) where x^ e R , i s the system s t a t e and " k E L m /o 0 1 • > • 0 0 \ 0/ \ o l I0\ 0 eR m i s the (3-2) d e c i s i o n c o n t r o l . As i s u s u a l , the performance measure for. the o p t i m a l s e l e c t i o n 13 process w i l l be expressed i n the form K J = 2L I(x , u ) + 0 ( x R , K+l) (3-3) j = l J Hence, we can formulate the decision control problem as a multistage de-c i s i o n problem: Given: x k + 1 - x f c = f( x f c , u^ .) ; c, x^ . e R n, e R™ (3-1) with x^ = c and Uj^ . e L m (3-2) Find a c o n t r o l sequence {u.} . _ which would minimize the cost i i — I , . . . K f u n c t i o n a l J . (3-3) 3.3 An Optimal Decision Control Algorithm Looking at the problem stated, one tends to conclude that the (12) s o l u t i o n n a t u r a l l y involves e i t h e r dynamic programming or the Discrete (22) (Maximum) Minimum P r i n c i p l e . 
The high dimensionality of the problem practically eliminates the possibility of applying the exact dynamic programming technique (Appendix II). The peculiar form of the control constraint (3-2) makes any dynamic programming technique (19), such as linear feedback control, formidably awkward to implement.

When applying the Discrete Minimum Principle, there are certain convexity conditions the system (3-1), (3-2), (3-3) has to satisfy. Because of the peculiar structure of the admissible control set $L_m$, such convexity conditions are not satisfied here. Nevertheless, the convexity condition can be relaxed in defining the Hamiltonian function and obtaining the necessary conditions for optimality. With this in mind, the Hamiltonian function is defined:

$$H(x_k, u_k, p_{k+1}, k) = I(x_k, u_k) + p_{k+1}' f(x_k, u_k) \qquad (3\text{-}4)$$

where $p_k$ is the costate vector. According to the Discrete Minimum Principle (Appendix I), if $\{u_k^*\}_{k=1,\ldots,K}$ is the optimal control sequence and $\{x_k^*\}_{k=1,\ldots,K+1}$ is the optimal trajectory, the following relations hold:

(i) Canonical equations:

$$x_{k+1}^* - x_k^* = \frac{\partial H}{\partial p_{k+1}} = f(x_k^*, u_k^*) \qquad (3\text{-}5)$$

$$p_{k+1}^* - p_k^* = -\frac{\partial H}{\partial x_k}\Big|_* \qquad (3\text{-}6)$$

where the derivative is evaluated along the optimal trajectory.

(ii) Boundary conditions:

$$x_1^* = c \qquad (3\text{-}7)$$

$$p_{K+1}^* = \frac{\partial}{\partial x_{K+1}^*}\, \theta(x_{K+1}^*, K+1) \qquad (3\text{-}8)$$

(iii) Necessary condition of optimality:

$$H(x_k^*, u_k^*, p_{k+1}^*, k) \le H(x_k^*, u_k, p_{k+1}^*, k) \qquad \forall u_k \in L_m, \quad k = 1, 2, \ldots, K \qquad (3\text{-}9)$$

An iterative solution to this non-linear two-point boundary-value problem (TPBVP) can be obtained through gradient techniques (25). The algorithm is initialized with a nominal control sequence $\{u_k^i\}_{k=1,\ldots,K}$. This is used to generate $\{x_k^i\}_{k=1,\ldots,K+1}$ in the forward iteration of (3-5), (3-7), and $\{p_k^i\}_{k=1,\ldots,K+1}$ in the backward iteration of (3-6), (3-8). The Hamiltonian values $\{H(x_k^i, u_k^i, p_{k+1}^i, k)\}_{k=1,\ldots,K}$ can then be evaluated from (3-4). The new control sequence is generated by changing components in the current control so as to decrease the Hamiltonian function. The number of control elements at which an adjustment is made is determined by a predetermined step-size factor. The complete algorithm takes the form:

1) Start with a control function $\{u_k^i\}_{k=1,\ldots,K}$.

2) Determine $\{x_k^i\}_{k=1,\ldots,K+1}$ from (3-5), (3-7).

3) Determine $\{p_k^i\}_{k=1,\ldots,K+1}$ from (3-6), (3-8).

4) Determine the Hamiltonian values $\{H(x_k^i, u_k^i, p_{k+1}^i, k)\}_{k=1,\ldots,K}$ from (3-4).

5) Determine the "optimal" Hamiltonian values $\{\bar{H}_k^i\}_{k=1,\ldots,K}$, where $\bar{H}_k^i = H(x_k^i, \bar{u}_k^i, p_{k+1}^i, k)$ and $\bar{u}_k^i = \arg\min_{u \in L_m} H(x_k^i, u, p_{k+1}^i, k)$; here "arg" denotes the value of the argument at which the minimization is achieved.

6) Determine the integer "step size" g as the largest integer satisfying

$$\sum_{k=1}^{g} \big| H(x_k^i, u_k^i, p_{k+1}^i, k) - \bar{H}_k^i \big| \;\le\; \delta h \sum_{k=1}^{K} \big| H(x_k^i, u_k^i, p_{k+1}^i, k) \big|$$

where $\delta h$ is a predetermined step-size factor and $|\cdot|$ denotes the absolute value.

7) Update the control function: $u_k^{i+1} = \bar{u}_k^i$ for $k = 1, \ldots, g$ and $u_k^{i+1} = u_k^i$ for $k = g+1, \ldots, K$.

8) Return to step (2).

Convergence occurs when

$$\Big| \sum_{k=1}^{K} H(x_k^i, u_k^i, p_{k+1}^i, k) - \sum_{k=1}^{K} H(x_k^{i+1}, u_k^{i+1}, p_{k+1}^{i+1}, k) \Big| \le \epsilon$$

where $\epsilon$ is a small positive number. It should be pointed out here that the integer step size g is in fact adjusted to limit the overall change in the Hamiltonian to within the predetermined step-size factor $\delta h$. Further refinements such as a variable step size can also be included.

It is also interesting to note that the control constraint (3-2) is automatically satisfied in iteration step (5) of the algorithm. Hence the extremal solution for our relaxed problem will also be an extremal solution for the original problem.
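The eight-step procedure can be condensed into a short program. The sketch below applies it to a small hypothetical instance of (3-1)-(3-3), with two queues, additive dynamics f(x, u) = u, a linear stage cost and a quadratic terminal cost; the instance data are invented, and the adaptive step-size rule of steps (5)-(6) is simplified to switching a fixed number of stages per sweep, so this is an illustration of the mechanics rather than a transcription of the thesis algorithm.

```python
import numpy as np

# Decision set L_2: unit vectors picking one of two queues (assumed instance).
L2 = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

K = 6
q = np.array([2.0, 0.0])                 # customers already waiting (assumption)
F = np.eye(2)

def dynamics(x, u):        return u                       # f(x_k, u_k) in (3-1)
def stage_cost(x, u):      return float(u @ (x + q))      # I(x_k, u_k) in (3-3)
def dH_dx(x, u, p_next):   return u                       # ∂H/∂x for this instance
def terminal_grad(x):      return 2.0 * F @ x             # p_{K+1} from (3-8)

def hamiltonian(x, u, p_next):                            # (3-4)
    return stage_cost(x, u) + float(p_next @ dynamics(x, u))

def min_h_sweep(x0, controls, g):
    """One pass of steps (2)-(7); the step-size rule is simplified to
    switching the first g stages to their Hamiltonian-minimising decisions."""
    xs = [np.asarray(x0, dtype=float)]
    for u in controls:                                     # forward pass, (3-5)/(3-7)
        xs.append(xs[-1] + dynamics(xs[-1], u))
    ps = [None] * (len(controls) + 1)
    ps[-1] = terminal_grad(xs[-1])                         # (3-8)
    for k in range(len(controls) - 1, -1, -1):             # backward pass, (3-6)
        ps[k] = ps[k + 1] + dH_dx(xs[k], controls[k], ps[k + 1])
    new_controls = list(controls)
    for k in range(g):                                     # steps (5)-(7)
        new_controls[k] = min(L2, key=lambda u: hamiltonian(xs[k], u, ps[k + 1]))
    return new_controls

def total_cost(x0, controls):
    x, J = np.asarray(x0, dtype=float), 0.0
    for u in controls:
        J += stage_cost(x, u)
        x = x + dynamics(x, u)
    return J + float(x @ F @ x)

controls = [L2[0]] * K                                     # nominal control: always pick queue 1
print("nominal cost:", total_cost(np.zeros(2), controls))
for sweep in range(4):
    controls = min_h_sweep(np.zeros(2), controls, g=3)
    print(f"sweep {sweep}: decisions {[int(np.argmax(u)) + 1 for u in controls]},"
          f" cost {total_cost(np.zeros(2), controls):.1f}")
```

On this instance the first sweep already moves the early customers to the empty queue and roughly halves the nominal cost, after which the sweeps leave the sequence unchanged; with a larger step size the pointwise minimisation can overshoot, which is exactly why the thesis limits the number of stages changed per iteration.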
3.4 An Illustrative Example

We shall demonstrate the technique by a simple numerical example for which the exact solution is easily obtainable by dynamic programming. This example is a simple decision control for queuing channels, to be discussed in a following section. Consider the cost

$$J = \sum_{i=1}^{K} u_i'[\alpha_i(x_i + q) - \beta] + (x_{K+1} + q)' F (x_{K+1} + q)$$

with dynamics $x_{i+1} = x_i + u_i$, where $x_i, u_i \in R^2$, $u_i \in L_2$,

$$q = \begin{pmatrix} 10 \\ 20 \end{pmatrix}, \qquad F = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.$$

[Table 3.1 Complete Dynamic Programming Solution for the Queuing-Channel Decision Control Example: the admissible states, the minimum cost of reaching the final state at K = 6 from each state, and the optimal control (or controls where both are optimal) at each state]

[Table 3-2 Optimal Decision Control Solution for the Example in 3.4: the initial trajectory with total cost 90 and the optimal trajectory with total cost 62; convergence after 3 iterations]

The complete solution via dynamic programming (Appendix II) is shown in Table 3-1. The optimal solution via the Discrete Minimum Principle and the gradient technique is shown in Table 3-2.
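Because the decision set is finite, a tiny instance like this one can also be checked exactly by enumerating every admissible decision sequence, which is what the dynamic-programming table does implicitly. The sketch below performs that enumeration for a hypothetical two-queue instance; the queuing costs and rewards are invented for illustration, since only the structure of the example carries over.

```python
import itertools
import numpy as np

K = 6
L2 = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]   # admissible decisions
q = np.array([10.0, 20.0])                          # customers already queued
F = np.eye(2)
alpha = [1.0] * K                                   # per-customer queuing cost (assumption)
beta = np.array([0.0, 0.0])                         # queue "reward" (assumption)

def cost(decisions):
    """Evaluate J for one decision sequence under x_{i+1} = x_i + u_i."""
    x, J = np.zeros(2), 0.0
    for a, u in zip(alpha, decisions):
        J += float(u @ (a * (x + q) - beta))
        x = x + u
    return J + float((x + q) @ F @ (x + q))

best = min(itertools.product(L2, repeat=K), key=cost)
print("optimal decisions:", [int(np.argmax(u)) + 1 for u in best])
print("optimal cost:     ", cost(best))
```

With only 2^6 sequences the search is trivial; the point of the gradient algorithm of Section 3.3 is to approach such solutions without enumeration once K and m grow.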
3.5 Application I

3.5.1 An Optimal Decision Control Problem for Queuing Channels

The type of decision control required in a queuing problem is basically that of selecting an appropriate queue for an incoming customer. Let us assume that associated with each customer is a queuing cost $\alpha$, such that the cost of joining a queue of size $q$ will be $q\alpha$. We shall also assign a "reward" $\beta$ to each queuing channel. For instance, a high reward will be assigned to a fast-moving queue. In general, a customer has to decide how much "reward" he can obtain from joining a certain queue after paying the queuing cost.

Consider the situation where K customers arrive at n parallel queuing channels. Let $x_k \in R^n$ denote the number of customers who have already joined the queues on the arrival of the k-th customer; then the queue selection process can be written as

$$x_{k+1} = x_k + u_k \qquad (3\text{-}10)$$

where $u_k \in L_n$ is the decision vector and $L_n$ is the set of n-dimensional unit vectors defined in (3-2).

The total cost of joining any one queue for the k-th customer would be $u_k'[\alpha_k(x_k + q) - \beta]$, where $\alpha_k$ is the queuing cost of the customer, $\beta \in R^n$ is the reward from the queuing channels, and $q$ is the queue size before the arrival of the K customers. The overall cost of assigning the K customers to the queuing channels would be

$$J = \sum_{i=1}^{K} u_i'[\alpha_i(x_i + q) - \beta] + (x_{K+1} + q)' F (x_{K+1} + q) \qquad (3\text{-}11)$$

Applying the Discrete Minimum Principle, we obtain for the Hamiltonian:

$$H(x_k, u_k, p_{k+1}, k) = u_k'[\alpha_k(x_k + q) - \beta] + p_{k+1}' u_k \qquad (3\text{-}12)$$

$$p_k = p_{k+1} + \alpha_k u_k \qquad (3\text{-}13)$$

$$p_{K+1} = 2 F (x_{K+1} + q) \qquad (3\text{-}14)$$

$$H(x_k^*, u_k^*, p_{k+1}^*, k) \le H(x_k^*, u_k, p_{k+1}^*, k) \qquad \forall u_k \in L_n \qquad (3\text{-}15)$$

where $x_k$, $u_k$, $p_k$ are evaluated along the optimal trajectory. The algorithm from section 3.3 can then be applied.

3.5.2 Example - Optimal Queuing of Computer Batch Jobs

An interesting application of decision control is found in the optimal queuing of computer batch jobs. Programs with different priorities are submitted for service from a number of peripheral devices. Various devices have different execution speeds. The priority of a job is a function of its waiting cost. It is desirable to match the relative priorities of different jobs to the efficiency of the devices so as to achieve an overall optimal system performance.

Applying the model discussed in the last section, $\alpha_k$ denotes the priority of the k-th job submitted, $\beta$ reflects the efficiency of a device, and $q$ is the number of jobs already lined up for service from the devices before the arrival of the batch of, say, K jobs. F would measure the total cost of running the jobs.

A numerical example is shown below for the optimal queuing of 10 jobs to 3 devices. The values of some of the parameters are:

$$\{\alpha_k\}_{k=1,\ldots,10} = (1, 4, 10, 5, 2, 6, 2, 8, 4, 12), \qquad \beta = \begin{pmatrix} 10 \\ 20 \\ 30 \end{pmatrix}$$

with $q$ a 3-vector of initial queue sizes and $F$ a (3 x 3) diagonal cost matrix.

A number of runs with different initial control functions were made. The complete results are shown in Fig. 3-1 and Table 3-3. In all the runs, convergence is achieved in fewer than ten iterations. In addition, it is noted that the final optimal policies are different in all four cases. This is due to the ability of the gradient technique to track local optimalities.

[Table 3-3 Optimal Queuing Policies for Computer Batch Jobs: initial and final policies for four runs assigning the 10 jobs to the 3 devices; the runs converge in 10, 7, 6 and 8 iterations, reducing the initial costs of 89.0, 312.0, 312.0 and 101.0 to final costs of 29.0, 29.0, 26.0 and 29.0]

[Fig. 3.1 Convergence of the Performance Function for the Computer-job Queuing Problem: performance function versus iteration number for the four runs]

3.5.3 Remarks

The control obtained above is obviously an open-loop control. An assumption made is that $\{\alpha_k\}_{k=1,\ldots,K}$ must be known. For on-line operation, one would prefer a feedback type of control. However, one can readily extend the above algorithm to an on-line application through an open-loop-feedback-optimal (OLFO) strategy (14, 21): at stage k, only the first element $u_k$ of the optimal open-loop control sequence $\{u_i\}_{i=k,\ldots,K}$ is transmitted. At the next stage k+1, information on the system parameters, such as $\alpha$, $\beta$ or $q$, can be updated. A new open-loop control sequence $\{u_i\}_{i=k+1,\ldots,K}$ is thus generated.

For long-term optimizations, where $K \to \infty$, a "shifting interval" can be used. In this case it is assumed that at all times the controller can only see K intervals ahead, and an open-loop control is obtained for that interval. The procedure is repeated as the stage progresses. Of course, the overall decision control obtained will be suboptimal, but from a computational point of view such a scheme is desirable for on-line operations.
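The OLFO idea is easier to see as a loop than as prose: re-solve the open-loop problem from the current state at every stage, apply only the first decision, then refresh the parameters. The sketch below uses a deliberately crude inner solver (a one-step greedy choice) as a stand-in for the gradient algorithm; the parameter-update hook and the instance data are assumptions made for illustration.

```python
import numpy as np

L3 = [np.eye(3)[i] for i in range(3)]        # three queues/devices to choose from

def greedy_plan(x, q, alpha, beta, horizon):
    """Stand-in open-loop solver: myopically pick the cheapest queue at each stage."""
    plan, x = [], x.copy()
    for a in alpha[:horizon]:
        u = min(L3, key=lambda v: float(v @ (a * (x + q) - beta)))
        plan.append(u)
        x = x + u
    return plan

def olfo(alpha, q0, beta, lookahead=5):
    """Open-loop-feedback-optimal loop with a shifting interval of `lookahead` stages."""
    x, q, applied = np.zeros(3), q0.copy(), []
    for k in range(len(alpha)):
        plan = greedy_plan(x, q, alpha[k:], beta, lookahead)   # re-solve from the current state
        u = plan[0]                                            # transmit only the first decision
        applied.append(int(np.argmax(u)) + 1)
        x = x + u
        # updated measurements of alpha, beta or q would be folded in here before the next pass
    return applied

alpha = [1, 4, 10, 5, 2, 6, 2, 8, 4, 12]       # job priorities from the example above
print(olfo(alpha, q0=np.zeros(3), beta=np.array([10.0, 20.0, 30.0])))
```

Replacing `greedy_plan` with the full gradient solver of Section 3.3 gives the scheme described in the remarks; the shifting interval simply caps how far ahead each re-solve looks.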
3.6 Application II

3.6.1 An Optimal Decision Control Problem for Service Channels

A different kind of decision control problem occurs in the operation of service channels. Here the choice is usually between operating or shutting down a server. Keeping a server active requires both operational and maintenance costs. On the other hand, shutting down a server at the expense of customers waiting in queues incurs another cost. Hence, the system performance function would consist of two terms:

Total cost to the server system = (Idling cost) + (Operational cost)  (3-16)

This cost structure will be applied to a practical example in the following section.

3.6.2 Example - Optimal Control of Traffic Congestion at an Intersection

A large amount of work on the traffic control of an intersection has been done previously (5, 7). The classical probability model is used in practically all of the cases. This example will adequately demonstrate the more realistic type of control one gets when the time-series model is employed.

Consider an intersection of two one-way traffic streams. Let:

$x_k \in R^2$ denote the queue sizes at the intersection at time k;

$\lambda_k \in R^2$ denote the number of vehicles arriving in the interval [k, k+1);

s denote the number of vehicles allowed to pass the intersection in each green cycle of period equal to the interval [k, k+1).

If $u_k$ is the decision control, where $u_k \in L_2 = \left\{ \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\}$, the dynamics of the process can thus be expressed as

$$x_{k+1} = x_k + \lambda_k - s\, u_k \qquad (3\text{-}17)$$

Adopting the cost structure established in (3-16), the following cost functions are defined:

$c_k \in R^2$: the cost each traffic stream "pays" for a green cycle. This is the operational cost of the system.

$Q_k$: a (2 x 2) diagonal matrix whose elements are the queuing costs of each stream. This represents the cost each stream "pays" for a red cycle, which also reflects the idling cost of the intersection light.

Hence, the cost functional can be written as:

$$J = \sum_{i=1}^{K-1} [x_i' Q_i x_i + u_i' c_i] + x_K' T x_K \qquad (3\text{-}18)$$
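The decision here is simply which stream gets the green cycle, so the model and its cost (3-18) can be exercised directly. The simulation below applies a naive longest-queue-first rule to made-up arrival data; the rule and the arrival record are assumptions used only to show how a candidate green/red sequence is scored, while the cost weights, saturation flow s and initial queues follow the simulation values quoted later in this section.

```python
import numpy as np

rng = np.random.default_rng(0)
K, s = 20, 10                                   # horizon and vehicles released per green cycle
Q = np.eye(2)                                   # red-cycle (queuing) cost
c = np.array([1.0, 1.0])                        # green-cycle (operational) cost
T = np.eye(2)                                   # terminal cost
x0 = np.array([10.0, 30.0])                     # initial queues
arrivals = rng.integers(2, 9, size=(K, 2)).astype(float)   # made-up arrival record

def run(policy):
    """Simulate x_{k+1} = x_k + λ_k - s u_k and accumulate the cost (3-18)."""
    x, J, greens = x0.copy(), 0.0, []
    for k in range(K - 1):
        u = policy(x, k)                        # unit vector: which stream gets the green cycle
        J += float(x @ Q @ x + u @ c)
        x = np.maximum(x + arrivals[k] - s * u, 0.0)   # queues cannot go negative (assumption)
        greens.append(int(np.argmax(u)) + 1)
    return J + float(x @ T @ x), greens

longest_queue_first = lambda x, k: np.eye(2)[int(np.argmax(x))]
cost, greens = run(longest_queue_first)
print("green assignment:", greens)
print("total cost:      ", round(cost, 1))
```

Any other candidate policy, including the open-loop sequence returned by the gradient algorithm, can be scored by passing it in place of `longest_queue_first`, which is one way the noise-index comparison reported below could be reproduced.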
Before applying the optimization algorithm of 3.3, it will be assumed that a predicted record of vehicle arrivals $\{\lambda_k\}_{k=1,\ldots,K-1}$ is available K time instants ahead. A large number of current time-series analysis techniques (9, 10, 11) are available for such a prediction.

Application of the Discrete Minimum Principle yields the Hamiltonian

$$H(x_k, u_k, p_{k+1}, k) = x_k' Q_k x_k + p_{k+1}'(\lambda_k - s\, u_k) + u_k' c_k \qquad (3\text{-}19)$$

and the costate equation

$$p_k = p_{k+1} + 2 Q_k x_k \qquad (3\text{-}20)$$

with boundary conditions

$$x_1 = x_0 \qquad (3\text{-}21)$$

$$p_K = 2 T x_K \qquad (3\text{-}22)$$

In the simulation below, the parameters assume the following values:

$$Q_k = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = T, \qquad c_k = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad x_0 = \begin{pmatrix} 10 \\ 30 \end{pmatrix}, \qquad s = 10$$

In addition, in order to simulate the effect of having inaccurate information on the arrivals $\{\lambda_k\}$, a noisy measurement of $\{\lambda_k\}$ is assumed:

$$\hat{\lambda}_k = \lambda_k + \gamma \xi_k$$

where $\gamma$ is a noise index and $\xi_k$ is a random sequence with $|\xi_k| \le 1$.

Simulation results for 20 time units are shown in Figures 3.2 and 3.3. In Fig. 3.4, the effect of an open-loop control based on inaccurate prediction of the arrivals is studied. It is apparent that for a noise index $\gamma < 2$, the control resembles the optimal control where $\gamma = 0$. It should also be noted that with a zero noise index, the control obtained is the optimal feedback control. This property is demonstrated in the complete result shown in Fig. 3.3.

[Fig. 3.2 Convergence of the Performance Function for the Traffic Intersection Control Problem]

[Fig. 3.3 Complete Result for the Traffic Intersection Control Problem: the red/green cycle assignment for the two streams over the planning horizon]

[Fig. 3.4 Open-loop Control Cost versus Noise Index]

3.6.3 Remarks

Remarks similar to those found in 3.5.3 apply to this section; that is, a "short-term open-loop-feedback-optimal" control strategy can be employed in long-term on-line applications. Here, the information needed for updating would be the vehicle arrival sequence $\{\lambda_k\}$. In addition, other parameters such as $Q_k$ and s can also be updated as time progresses to accommodate the varying traffic demands of the day. Finally, one can easily extend the above application to a traffic merging problem involving more than two streams of traffic.

3.7 Discussion

The decision control problem in queuing systems has been formulated and the necessary conditions of optimality have been derived by application of the Discrete Minimum Principle. Solutions of the resulting TPBVP are obtained using gradient techniques. The simplicity of the suggested algorithm offers a feasible, if suboptimal, solution to a problem which otherwise would have been formidable using dynamic programming. A fringe benefit of the algorithm is the possibility of extending it to on-line control applications. Using the "shifting interval" and open-loop-feedback-optimal technique, a short-term adaptive scheme can be obtained.
Finally, one must observe the generality of the decision control model. The versatility of the model lies in the definition of the performance index (3-3). One can easily adapt the algorithm to accommodate different decision control situations by defining an appropriate cost model. This has been exemplified by the two applications in this chapter.

4. OPTIMAL AND ADAPTIVE REGULATOR CONTROL FOR QUEUING SYSTEMS

4.1 Introduction

The previous chapter treated the problem of decision control in queuing systems. As mentioned in Chapter 2, in queuing systems we are faced with another class of control, namely the regulator type of control. This type of control attempts to optimize the overall system performance by manipulation of a control mechanism, usually the service mechanism. The rest of this chapter is devoted to such a problem. The generalized queuing model developed in Chapter 2 will be adopted, and both the optimal and adaptive control strategies will be discussed.

The generalized queuing model (2-1) is rewritten

$$q_{k+1} = q_k + B_k \lambda_k - C_k u_k$$

where $q_k \in R^n$ is the queue size at time k; $\lambda_k \in R^m$ is the arrival in [k, k+1), assumed known or predictable; $u_k \in R^p$ is the server capacity control; and $B_k$, $C_k$ are (n x m) and (n x p) matrices respectively.

The results of decision control of the queuing or service channels will be registered in $B_k$ and $C_k$. For instance, $b_{ij}$ of $B_k$ denotes the fraction of customers from the j-th arrival stream that selected the i-th queue, and $c_{il}$ of $C_k$ denotes the fraction of customers from the i-th queuing channel that was served by the l-th server. As far as the server control is concerned, $B_k$ and $C_k$ must be known, or identified if unknown, before an optimal solution can be evaluated.

In the following sections, the question of identifying $B_k$ and $C_k$ will be treated separately from the optimal control problem. Combining the two schemes, an adaptive control algorithm is obtained.

4.2 An On-Line Identifier for Queuing Systems

Assume that $B_k$ and $C_k$ are either constant or quasi-stationary. Rewriting the queuing model,

$$q_{k+1}' = \theta_{k+1}' S \qquad (4\text{-}1)$$

where $S' = [I \;\vdots\; B \;\vdots\; C]$ is an n x (n + m + p) matrix, I is an (n x n) unit matrix, and $\theta_{k+1} = (q_k',\, \lambda_k',\, -u_k')'$ is an (n + m + p)-vector.

After making k observations of $q_k$, we have

$$Q_k = \Theta_k S \qquad (4\text{-}2)$$

where $(\cdot)'$ denotes the transpose, $Q_k$ is a (k x n) matrix and $\Theta_k$ is a (k x (n + m + p)) matrix.

Note the similarity of (4-2) to the observation equation in state estimation problems (23):

$$Z_k = H_k x \qquad (4\text{-}3)$$

where $Z_k$ is the observation vector, $H_k$ the observation matrix and x the unknown state to be estimated. Using the least-squares state estimation technique on (4-3) gives

$$\hat{x}_k = (H_k' H_k)^{-1} H_k' Z_k$$

where $\hat{x}_k$ is the estimated value of x after k observations. A matrix version of the above least-squares estimation (MLSE) can similarly be derived (Appendix III) for (4-2). We have

$$\hat{S}_k = (\Theta_k' \Theta_k)^{-1} \Theta_k' Q_k \qquad (4\text{-}4)$$

which minimizes the estimation error functional

$$J = \mathrm{Tr}\,[(Q_k - \Theta_k \hat{S}_k)(Q_k - \Theta_k \hat{S}_k)'] \qquad (4\text{-}5)$$

where Tr(.) denotes the trace of a matrix.

A sequential form of (4-4) can be derived; this makes the identifier an on-line identifier for the system:

$$\hat{S}_{k+1} = \hat{S}_k + P_{k+1} \theta_{k+1} (q_{k+1}' - \theta_{k+1}' \hat{S}_k) \qquad (4\text{-}6)$$

$$P_{k+1} = P_k - P_k \theta_{k+1} (1 + \theta_{k+1}' P_k \theta_{k+1})^{-1} \theta_{k+1}' P_k \qquad (4\text{-}7)$$

where $\hat{S}_k$ is an (n + m + p) x n matrix and $P_k$ is an (n + m + p) x (n + m + p) matrix. $P_0$ is unknown; however, a number of simulation runs with $P_0 = I$ yield fairly rapid convergence of the algorithm. Of course there are better ways of choosing $P_0$ (23), but such sophistication is beyond the scope of this thesis.

A fringe benefit of the MLSE technique is that $(1 + \theta_{k+1}' P_k \theta_{k+1})$ is a scalar quantity. The rather arduous job of matrix inversion in vector LSE is eliminated here.

Finally, it has been assumed that the queue sizes $q_k$ can be measured directly. This is not too unreasonable an assumption in queuing systems. For a low system noise level and small fluctuations of the model parameters, the MLSE can give fairly satisfactory results, as will be demonstrated by an application below.
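The identifier is easy to exercise on synthetic data: simulate (2-1) with known B and C, feed the transitions to the recursion (4-6)-(4-7), and watch the estimate approach [I B C]'. Every numerical choice below (dimensions, the true matrices, the excitation signals, P_0 = I) is an assumption made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p = 2, 2, 2
B_true = np.array([[1.0, 0.3], [0.0, 0.7]])
C_true = np.array([[0.7, 0.2], [0.3, 0.8]])

S_hat = np.zeros((n + m + p, n))             # estimate of S, where S' = [I  B  C]
P = np.eye(n + m + p)                        # P_0 = I, as suggested in the text

q = np.array([5.0, 5.0])
for k in range(200):
    lam = rng.uniform(0.0, 4.0, size=m)                       # exciting arrivals (assumption)
    u = 0.8 * q + rng.uniform(-1.0, 1.0, size=p)              # crude service policy plus probing noise
    q_next = q + B_true @ lam - C_true @ u                    # no clipping, so the data follow (2-1) exactly
    theta = np.concatenate([q, lam, -u])                      # regressor (q_k, λ_k, -u_k)

    denom = 1.0 + theta @ P @ theta                           # a scalar, so no matrix inversion is needed
    P = P - np.outer(P @ theta, theta @ P) / denom            # (4-7)
    S_hat = S_hat + np.outer(P @ theta, q_next - theta @ S_hat)   # (4-6)
    q = q_next

B_hat, C_hat = S_hat[n:n + m].T, S_hat[n + m:].T
print("identified B:\n", np.round(B_hat, 3))
print("identified C:\n", np.round(C_hat, 3))
```

On this noise-free data the printed estimates land close to the true matrices; the small residual bias comes from the arbitrary choice P_0 = I and shrinks as more transitions are processed.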
4.3 An On-Line Adaptive Controller for Queuing Systems

The stochastic service-rate control problem may be stated: find a service-rate sequence $\{u_k\}_{k=0,\ldots,K-1}$ which minimizes the performance index

$$J = E\Big\{ \sum_{i=0}^{K-1} (q_i' Q_i q_i + u_i' R_i u_i) + q_K' T q_K \Big\} \qquad (4\text{-}8)$$

where $E\{f(x)\}$ denotes the expected value of the function f(.) of the random variable x, $Q_k$ is an (n x n) matrix representing the cost of queuing in the system, $R_k$ is a (p x p) matrix representing the cost of operating the servers, and $q_k \in R^n$, $u_k \in R^p$.

Due to the unknown parameters in the system, the predicted system model will be

$$\hat{q}_{k+1|k} = q_k + \hat{B}_{k|k} \lambda_k - \hat{C}_{k|k} u_k \qquad (4\text{-}9)$$

where $\hat{f}_{j|i}$ denotes the expected value of the function f at time j, based on the information available up to and including time i:

$$\hat{q}_{k+1|k} = E(q_{k+1} \mid q_k, \ldots, q_0;\; u_k, \ldots, u_0;\; \lambda_k, \ldots, \lambda_0) = E(q_{k+1} \mid q_k, u_k, \lambda_k) \qquad (4\text{-}10)$$

$\hat{B}_{k|k}$ and $\hat{C}_{k|k}$ are the identified values of the $B_k$ and $C_k$ matrices after k measurements. Equation (4-10) is a direct consequence of the Markovian nature of the model (4-9). $\{\lambda_k\}_{k=0,1,\ldots,K-1}$ will be assumed known or predictable.

Using dynamic programming (Appendix IV), a stochastic feedback control algorithm

$$u_j = A_{j|k}\, q_j + b_{j|k}, \qquad j = k, k+1, \ldots, K-1 \qquad (4\text{-}11)$$

can be obtained, where the control gain $A_{j|k}$ is a (p x n) matrix and the bias $b_{j|k}$ is a p-vector. For $j = k, k+1, \ldots, K-1$,

$$A_{j|k} = \Lambda_{j|k}^{-1} \hat{C}_{k|k}' L_{j+1|k} \qquad (4\text{-}12)$$

$$b_{j|k} = \Lambda_{j|k}^{-1} \hat{C}_{k|k}' (m_{j+1|k} + L_{j+1|k} \hat{B}_{k|k} \lambda_j) \qquad (4\text{-}13)$$

with

$$\Lambda_{j|k} = R_j + \hat{C}_{k|k}' L_{j+1|k} \hat{C}_{k|k} \qquad (4\text{-}14)$$

$$\hat{B}_{k|k} = E(B_k \mid q_k, \ldots, q_0), \qquad \hat{C}_{k|k} = E(C_k \mid q_k, \ldots, q_0) \qquad (4\text{-}15)$$

$L_{j|k}$ and $m_{j|k}$ are obtained from the following recursive relations:

$$L_{j|k} = Q_j + [I - L_{j+1|k} \hat{C}_{k|k} \Lambda_{j|k}^{-1} \hat{C}_{k|k}'] L_{j+1|k} \qquad (4\text{-}16)$$

$$m_{j|k} = [I - L_{j+1|k} \hat{C}_{k|k} \Lambda_{j|k}^{-1} \hat{C}_{k|k}'] (m_{j+1|k} + L_{j+1|k} \hat{B}_{k|k} \lambda_j) \qquad (4\text{-}17)$$

with the boundary conditions

$$L_{K|k} = T > 0 \qquad (4\text{-}18)$$

$$m_{K|k} = 0 \qquad (4\text{-}19)$$

It is noted that the sequence $\{\lambda_k\}$, treated as a known disturbance input in (4-9), is assumed available at any stage, as shown in Equation (4-17). This assumption is essential if the control (4-11) is to be optimal; otherwise, a suboptimal control is obtained. An adaptive control is obtained when the feedback control (4-11) is applied simultaneously with the least-squares identifier of 4.2 to the system.
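Under certainty equivalence the recursion (4-12)-(4-19) is an affine-LQ backward pass computed with the current estimates in place of the true matrices. The sketch below codes such a pass and then rolls the resulting feedback law forward on the true system; the dimensions, cost weights, arrival profile and the deliberately imperfect estimate of C are all assumptions made for the demonstration, and the formulas follow the standard affine-LQ recursion consistent with (4-12)-(4-19) rather than a transcription of the thesis appendix.

```python
import numpy as np

def backward_pass(B_hat, C_hat, lam_seq, Q, R, T):
    """Affine-LQ backward recursion in the spirit of (4-12)-(4-19), with the
    current estimates held fixed over the remaining horizon (certainty equivalence)."""
    K, n = len(lam_seq), T.shape[0]
    L, m = T.copy(), np.zeros(n)                 # boundary conditions (4-18), (4-19)
    gains = [None] * K
    for j in range(K - 1, -1, -1):
        Lam = R + C_hat.T @ L @ C_hat            # (4-14)
        Li_Ct = np.linalg.solve(Lam, C_hat.T)    # Λ^{-1} Ĉ'
        d = B_hat @ lam_seq[j]                   # predicted arrival load at stage j
        A = Li_Ct @ L                            # gain, (4-12)
        b = Li_Ct @ (m + L @ d)                  # bias, (4-13)
        shrink = np.eye(n) - L @ C_hat @ Li_Ct
        L, m = Q + shrink @ L, shrink @ (m + L @ d)   # (4-16), (4-17)
        gains[j] = (A, b)
    return gains

def rollout(C_true, B_true, lam_seq, gains, q0):
    """Apply u_j = A_j q_j + b_j on the true system and return the queue trajectory."""
    q, traj = np.asarray(q0, dtype=float), []
    for (A, b), lam in zip(gains, lam_seq):
        u = np.maximum(A @ q + b, 0.0)           # (4-11), clipped at zero service (assumption)
        q = q + B_true @ lam - C_true @ u
        traj.append(q.copy())
    return np.array(traj)

# Hypothetical two-queue, two-server system; the identifier's estimate of C is slightly wrong.
n, p, K = 2, 2, 15
B_true = B_hat = np.eye(n)
C_true = np.array([[0.9, 0.2], [0.1, 0.8]])
C_hat  = np.array([[1.0, 0.3], [0.0, 0.7]])
Q, R, T = 10.0 * np.eye(n), 1.0 * np.eye(p), 30.0 * np.eye(n)
lam_seq = [np.array([4.0, 6.0])] * K

gains = backward_pass(B_hat, C_hat, lam_seq, Q, R, T)
print(np.round(rollout(C_true, B_true, lam_seq, gains, q0=[20.0, 10.0]), 2))
```

Re-identifying C after each measured transition and recomputing the gains from the new estimate turns this open-loop-feedback scheme into the adaptive controller of this section.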
This assumption i s essential i f the control (4-11) i s to be op-timal; otherwise, a suboptimal control would be obtained. An adaptive control i s obtained when the feedback control (4-11) i s applied simultaneously with the least-squares i d e n t i f i e r of 4.2 to the system. 4.4 Application I In section 3.6, we considered the t r a f f i c control problem using decision control. In th i s section, we s h a l l investigate the application of regulator control to another t r a f f i c problem: the merging problem. 4.4.1 Optimal T r a f f i c Control for a Highway Merging Problem Consider a section of a highway consisting of •n-merging-points located i n series (Fig. 4.1), assuming that a l l the t r a f f i c i s i n the same direc i t o n . Furthermore, the merge points are assumed equally spaced; and a l l the vehicles to move at the same constant speed, or the same constant average speed. The dynamics involved i n vehicle acceleration w i l l be ignored. Let: x, E R n denote the t o t a l number of vehicles at each of the k merging points at time k. X^_ E R n denote the t o t a l number ,of vehicles a r r i v i n g at the merging points i n the time i n t e r v a l [k, k+1). s, E R n denotes the number of vehicles released from each k merge point during [k, k+1). p > 0, denotes the number of time units each vehicle takes to tra v e l from one merge point to the next. In other words, the 37 38 average speed of the vehicles i s given by: Average vehicle speed = Distance between successive merge points P (4-20) Hence, at time k, the t o t a l input to the i ^ 1 merge point w i l l be Aj^ + s^_p where the superscript i denotes the i * * * 1 element of a vector. .th The o v e r a l l merging process can be represented by x, + A. + s, - s, where s "k+1 "k A k-p (4-21) (4-22) k ' "k-p "k 0 1 k-p 2 s k-p n-1 k-p denotes the input into any junction from the preceding one. To obtain an optimal o v e r a l l control of the vehicle merge, we s h a l l formulate the following control problem: Given, the system dynamics represented by (4-21), (4-22), f i n d a control sequence {s, } , which w i l l minimize the quadratic cost function K, K. X ^ • • • 9 ix K J = i= l (x! Q. x. + s. R. s.) + x,' T x. 1 1 1 1 1 1 K+1 K+1 (4-23) where Q^ , R^,'which are (n x n) diagonal matrices, denote respectively the vehicle waiting cost and the cost of releasing vehicles at the junc-tions. T, a p o s i t i v e d e f i n i t e (n x n) matrix, i s the cost of incomplete vehicle release at the end of the planning horizon [1, K] . Employing the control algorithm discussed i n 4.3, we obtain the following feedback control strategy: . s k = [ \ + W 1 [ m k + i + pk+i ( x k + ( 4 ~ 2 4 ) 39 where 6, = X, + s. k k k-p (4-25) P k 5 mk a r e c o i n P u t e d from the recursive relations P k " \ + I'1 " P k + 1 Pk+1 m k = [ I " Pk+1 A " k ] [ m k+1 + Pk+1 6 k ] where A = \ + pk+i and the boundary conditons PK+1 " T > ° (4-26) (4-27) (4-28) (4-29) (4-30) "K+l = ° A system with 4 merging junctions and different a r r i v a l i n -te n s i t i e s at each junciton i s simulated with the following parameter values: 10.0 .0 0 0 0 10.0 0 0 0 0 10.0 0 0 0 0 10.0 20.0 0 0 0 0 20.0 0 0 0 0 20.0 0 0 0 0 20.0 T = 30.0 0 0 0 •0 30.0 0 0 0 0 30.0 0 0 0 0 30.0 K = 24 Simulation results for p = 10 and p = 25 are shown i n Table 4-1 and Table 4-2 respectively. In Fig. 4.2, the r e l a t i o n between the junction delay time p and the t o t a l cost i s studied. I t appears that for a l l p >. K, the t o t a l cost stays constant. 
A system with 4 merging junctions and different arrival intensities at each junction is simulated with the following parameter values:

$$Q = 10.0\, I_4, \qquad R = 20.0\, I_4, \qquad T = 30.0\, I_4, \qquad K = 24$$

Simulation results for p = 10 and p = 25 are shown in Table 4-1 and Table 4-2 respectively. In Fig. 4.2, the relation between the junction delay time p and the total cost is studied. It appears that for all p ≥ K the total cost stays constant. In other words, the junctions are in effect decoupled as far as the overall control is concerned. However, for p < K, the control algorithm (4-24) to (4-30) is in fact repeated once every p time units in order to update $\hat{s}_{k-p}$. Consequently, a suboptimal control is obtained. In addition, the cost parameters $Q_k$, $R_k$, T can be made time-varying to reflect the fluctuating traffic demands of the day.

[Table 4.1 Optimal Control for the Highway Merging Problem with p = 10: time, arrivals, queues and control for each of the four merging points over the 24 periods; total cost 5.3067 x 10^6]

[Table 4.2 Optimal Control for the Highway Merging Problem with p = 25: the same quantities for p = 25; total cost 2.9000 x 10^6]

[Fig. 4.2 The Relationship between the Total Cost and Junction Delay Time for the Highway Merging Problem (K = 24)]
4.4.2 Remarks

Once again the familiar assumption of a known arrival sequence $\{\lambda_k\}$ holds in this application. For long-term operations (K → ∞), $\{\lambda_k\}$ will have to be updated constantly in order to account for any prediction error. The "shifting interval" concept discussed in the last chapter can again be applied here. An on-line adaptive control results from the use of the open-loop-feedback-optimal control technique combined with the prediction of $\{\lambda_k\}$.

4.5 Application II

In 4.3, we mentioned the possibility of an on-line adaptive control through simultaneous application of the system identifier of 4.2 and the controller of 4.3. In this section, an application of this technique to a scheduling problem will be discussed.

4.5.1 Optimal and Adaptive Scheduling of Operations in a Hospital

Consider a hospital with p operating rooms and enough surgical facilities for n types of operations. Patients demanding operations are first transferred from the hospital ward into the operating ward, where they are screened for surgery or rejection. In the case of surgery, they are allowed to join a queue for the type of operation (one of the n types) each requires. If a patient's case is a "false alarm" or a relatively mild case compared to other emergency cases, he is rejected and returned to the hospital ward. The complete queuing phenomenon involved in the surgery process is shown in Fig. 4.3.

[Fig. 4.3 The Queuing Phenomenon for a Surgery Process in a Hospital: applications from the hospital ward are screened, with rejections returned to the ward and the remainder queued in the surgery ward for the operating rooms]

In view of the number of patients in the surgery queue and the cost of operations, the nurse has to decide on an optimal operation schedule at the beginning of each period (say a month). In addition, the type of operations performed in each operating room may vary from time to time (say every week) due to complications arising from the surgeons' schedules or the relocation of equipment. Hence the operation schedule decided on earlier will have to be revised constantly in order to cope with the varying situation. Under such circumstances, an on-line adaptive scheduling strategy would be most desirable.
4.5 Application II

In 4.3, we mentioned the possibility of an on-line adaptive control through simultaneous application of the system identifier of 4.2 and the controller of 4.3. In this section, an application of this technique to a scheduling problem will be discussed.

4.5.1 Optimal and Adaptive Scheduling of Operations in a Hospital

Consider a hospital with p operating rooms and enough surgical facilities for n types of operations. Patients demanding operations are first transferred from the hospital ward into the operating ward, where they are screened for surgery or rejection. In the case of surgery, they are allowed to join a queue for the type of operation (one of the n types) each requires. If a patient's case is a "false alarm" or a relatively mild case compared to other emergency cases, he is rejected and returned to the hospital ward. The complete queuing phenomenon involved in the surgery process is shown in Fig. 4.3.

[Fig. 4.3  The Queuing Phenomenon for a Surgery Process in a Hospital: hospital ward, applications, screening (with rejections returned to the hospital ward), surgery ward, surgery rooms.]

In view of the number of patients in the surgery queue and the cost of operations, the nurse has to decide on an optimal operation schedule at the beginning of each period (say a month). In addition, the type of operations performed in each operating room may vary from time to time (say every week) due to complications arising from the surgeon's schedule or the relocation of equipment. Hence the operation schedule decided on earlier will have to be revised constantly in order to cope with the varying situation. Under such circumstances, an on-line adaptive scheduling strategy would be most desirable.

The optimal scheduling problem can be formulated as a control problem as follows. Let

$x_k \in R^n$ denote the number of patients waiting for the different types of operations in the operating ward at time k,

$\lambda_k \in R^n$ denote the number of new applications for operations, before the screening process, during the period [k, k+1),

$u_k \in R^p$ denote the number of operations performed in the operating rooms during the period [k, k+1).

Thus, the surgical queuing process can be expressed as

$x_{k+1} = x_k + \lambda_k - C_k u_k$   (4-31)

where $C_k$, an (n x p) matrix, represents the fractional amount of each type of operation performed in the different operating rooms. In effect, $(C_k u_k)_i = \sum_j c^k_{ij} u^k_j$ is the total number of type-$i$ operations performed in the operating rooms during the period [k, k+1).

The mathematical statement of the scheduling problem is as follows: given the process dynamics (4-31), find a schedule $\{u_k\}_{k=0,\dots,K-1}$ which will minimize the quadratic cost function

$J = \sum_{i=0}^{K-1} \big[ x_i' Q_i x_i + u_i' R_i u_i \big] + x_K' T x_K$   (4-32)

where $Q_i$, an (n x n) matrix, denotes the cost of waiting for operation in the surgery ward ($x_i > 0$) or the cost of rejecting patients ($x_i < 0$); $R_i$, a (p x p) matrix, denotes the cost of surgery; and $T$, an (n x n) matrix, is the cost of incomplete operations at the end of the scheduling period.

In the case of adaptive scheduling, when $C_k$ is unknown, the cost function (4-32) will be replaced by

$J = E_{\{x_k\}_{k=0,\dots,K}} \Big\{ \sum_{i=0}^{K-1} \big[ x_i' Q_i x_i + u_i' R_i u_i \big] + x_K' T x_K \Big\}$   (4-33)

where $x_k$ is now a random variable.

The on-line adaptive control algorithm (4-11) to (4-19) can then be applied with $\hat B_{k|k}$ replaced by I, a unit matrix. At any time k the optimal control sequence $\{u_i\}_{i=k,k+1,\dots,K-1}$ can be computed from the following equations:

$u_i = A_i x_i + b_i$   (4-34)

where

$A_i = \Lambda_i^{-1} \hat C_k' P_{i+1}$   (4-35)

$b_i = \Lambda_i^{-1} \hat C_k' \big[ m_{i+1} + P_{i+1} \lambda_i \big]$   (4-36)

$\Lambda_i = R_i + \hat C_k' P_{i+1} \hat C_k$   (4-37)

$P_i = Q_i + \big[ I - P_{i+1} \hat C_k \Lambda_i^{-1} \hat C_k' \big] P_{i+1}$   (4-38)

$m_i = \big[ I - P_{i+1} \hat C_k \Lambda_i^{-1} \hat C_k' \big] \big[ m_{i+1} + P_{i+1} \lambda_i \big]$   (4-39)

Here $(\cdot)'$ denotes the transpose of a matrix, and $\hat C_k$ is the identified value of $C_k$, which is assumed constant during the interval [k, ..., K-1]. The on-line identifier of 4.2 can be used for this purpose.

In the case of unknown $C_k$, only the first element of the sequence $\{u_i\}_{i=k,\dots,K-1}$ is transmitted at time k. A new sequence is generated at k+1, and the same procedure continues. Hence, this is in fact an open-loop-feedback-optimal control strategy. In the case where $C_k$ is known, the complete sequence can be used, and the resulting control function will be an optimal feedback.

In the simulation below, the following values have been used:

n = 2;  p = 3;  K = 30;

$C_k = \begin{bmatrix} 0.4 & 0.3 & 0.6 \\ 0.6 & 0.7 & 0.4 \end{bmatrix}$;  $Q_i = \begin{bmatrix} 10.0 & 0.0 \\ 0.0 & 10.0 \end{bmatrix}$;  $T = \begin{bmatrix} 50.0 & 0.0 \\ 0.0 & 50.0 \end{bmatrix}$;  $R_i = \begin{bmatrix} 30.0 & 0.0 & 0.0 \\ 0.0 & 30.0 & 0.0 \\ 0.0 & 0.0 & 30.0 \end{bmatrix}$;  $x_0 = \begin{bmatrix} 50 \\ 50 \end{bmatrix}$

The arrival sequence $\{\lambda_k\}$, k = 1, 2, ..., is assumed known (Fig. 4.4).
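The sketch below is a direct transcription of the recursion (4-34) to (4-39) together with the dynamics (4-31), for the known-$C_k$ case and the simulation values quoted above. The arrival sequence used here is an arbitrary stand-in for the recorded data of Fig. 4.4, the terminal conditions $P_K = T$, $m_K = 0$ are those implied by the terminal cost in (4-32), and no integrality or non-negativity restrictions are placed on the controls.

```python
import numpy as np

n, p, K = 2, 3, 30
C = np.array([[0.4, 0.3, 0.6],
              [0.6, 0.7, 0.4]])        # (n x p) service matrix, assumed known here
Q = 10.0 * np.eye(n)                   # waiting / rejection cost
R = 30.0 * np.eye(p)                   # cost of surgery
T = 50.0 * np.eye(n)                   # cost of incomplete operations
x0 = np.array([50.0, 50.0])
rng = np.random.default_rng(0)
lam = rng.integers(5, 30, size=(K, n)).astype(float)   # stand-in for the arrivals of Fig. 4.4

# Backward sweep for the feedback gains A_i and biases b_i.
P, m = T, np.zeros(n)                                   # P_K = T, m_K = 0
A, b = [None] * K, [None] * K
for i in range(K - 1, -1, -1):
    Lam = R + C.T @ P @ C                               # (4-37)
    A[i] = np.linalg.solve(Lam, C.T @ P)                # (4-35)
    b[i] = np.linalg.solve(Lam, C.T @ (m + P @ lam[i])) # (4-36)
    G = np.eye(n) - P @ C @ np.linalg.solve(Lam, C.T)
    P, m = Q + G @ P, G @ (m + P @ lam[i])              # (4-38), (4-39)

# Forward pass: apply u_i = A_i x_i + b_i to the queue dynamics.
x, J = x0.copy(), 0.0
for i in range(K):
    u = A[i] @ x + b[i]                                 # (4-34)
    J += x @ Q @ x + u @ R @ u
    x = x + lam[i] - C @ u                              # (4-31)
J += x @ T @ x
print("total cost:", round(float(J), 1))
```

In the adaptive case, $C$ would simply be replaced by the current estimate $\hat C_k$ at each stage, the sweep repeated over the remaining interval [k, K-1], and only the first control applied, which is the open-loop-feedback-optimal strategy described above.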
Several runs have been made on the same system for both the optimal case ($C_k$ known) and the adaptive case ($C_k$ unknown and estimated through a least-squares estimator). The identified values of $C_k$ are shown in Fig. 4.5. Fig. 4.6 gives a comparison between the optimal and adaptive state trajectories. A similar comparison can be made for the feedback control gain $A_k$ and control bias $b_k$. Only one element from each is plotted in Fig. 4.7; the other elements behaved similarly. The complete control schedules are shown in Fig. 4.8 and Fig. 4.9.

[Fig. 4.4  The Arrival Sequences for the Optimal and Adaptive Operation-scheduling Problem (two arrival components plotted against time).]

[Fig. 4.6  Comparison Between the Optimal Feedback Control State Trajectory and the Adaptive Control State Trajectory.]

[Fig. 4.7  Comparison Between the Optimal Feedback Control Gain ($C_k$ known) and the Adaptive Control Gain ($C_k$ unknown).]

Examination of the plots will demonstrate the nature of an adaptive system. During the initial stages ($0 \le k \le 7$), the adaptive control behaves rather erratically. The uncertainty in the control arises from a lack of sufficient information on the system. The identifier gradually reacts properly as the number of measurements increases. The adaptive control collects information on the system through the identifier and, as time progresses, converges onto the optimal control. Hence, although the identifier is separated from the controller, there is an indirect interaction between the two.

Finally, it is noted that in both the optimal and the adaptive cases above, the arrival sequence $\{\lambda_k\}$ has been assumed known. In the case of unknown $\{\lambda_k\}$, a suboptimal control can be obtained if the $\lambda_j$ in (4-36) and (4-39) is replaced by $\lambda_k$ for all $K-1 \ge j \ge k$. In other words, all future arrivals are assumed to be the same as the present arrival. The results for this suboptimal strategy are shown in Fig. 4.10, Fig. 4.11 and Fig. 4.12.

[Fig. 4.10  Suboptimal Scheduling of Surgery with Unknown Arrivals $\{\lambda_k\}$.]

[Fig. 4.11  Suboptimal Control State Trajectory with Unknown Arrivals $\{\lambda_k\}$ ($x_2$ behaves similarly).]

[Fig. 4.12  Suboptimal Control Feedback Gain with Unknown Arrivals $\{\lambda_k\}$.]

4.5.2 Remarks

An on-line adaptive control method has been demonstrated for the operation-scheduling problem. This method is in fact of the open-loop-feedback-optimal (OLFO) type discussed in the last chapter. The same strategy can of course be extended to long-term scheduling problems (K → ∞) if the infinite interval is approximated by a number of finite intervals. The resulting overall control would of course be suboptimal. Nevertheless, on-line control is achieved.

4.6 Discussion

Systems with time-varying parameters cannot be treated with the identification algorithm discussed in 4.2, unless the parameters vary slowly with time. However, there are always other more sophisticated identification techniques in the literature that can be applied.
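One such refinement, given here purely as an illustration and not developed in the thesis, is to discount old data exponentially in the least-squares identifier so that slowly drifting parameters can be tracked. The sketch follows the conventions of Appendix III (observations $x_k' = \psi_k' S$); the forgetting factor $\rho$ and the initial value of $P$ are assumed tuning choices.

```python
import numpy as np

class ForgettingRLS:
    """Sequential least squares with exponential forgetting (0 < rho <= 1).

    With rho = 1 this reduces to the estimator (A3-7), (A3-8) of Appendix III;
    rho < 1 weights recent observations more heavily, which allows slowly
    time-varying parameters to be followed.
    """

    def __init__(self, m, n, rho=0.98, p0=1.0e3):
        self.S = np.zeros((m, n))          # parameter estimate S_hat
        self.P = p0 * np.eye(m)            # inverse "information" matrix
        self.rho = rho

    def update(self, psi, x):
        psi = psi.reshape(-1, 1)           # regressor, m x 1
        x = x.reshape(1, -1)               # observation, 1 x n
        denom = self.rho + float(psi.T @ self.P @ psi)
        gain = self.P @ psi / denom                        # m x 1
        self.S = self.S + gain @ (x - psi.T @ self.S)      # innovation correction
        self.P = (self.P - gain @ psi.T @ self.P) / self.rho
        return self.S
```

In the setting of 4.2 the regressor $\psi_k$ would presumably be formed from the measured queue states and controls; the choice of $\rho$ trades tracking speed against noise sensitivity.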
Finally, in view of the generalized queuing model discussed in Chapter 2, the system parameters $C_k$ and $B_k$ in fact represent the results from decision control in the system. Hence, simultaneous application of the regulator and decision control on the same system is possible. The coupling between the two controllers will be through the on-line identifier.

5. CONCLUSIONS AND SUGGESTIONS

5.1 Conclusions

Two different techniques have been demonstrated for the on-line control of a queuing system. Time-series models have been used. This is in direct contrast to the traditional approach, where probability models are widely used.

It has been shown that decision control problems can be formulated as a control-constrained optimization problem. Application of the Discrete Maximum Principle and gradient techniques yields satisfactory results. Extension of the decision control strategy to the adaptive case is immediate through the open-loop-feedback-optimal approach.

Regulator control is mainly applied to the control of a service mechanism. It has been shown that dynamic programming can be used to obtain an optimal feedback solution for such problems. An on-line adaptive control method has been demonstrated by applying the control and an identification scheme simultaneously but separately. A direct coupling of the regulator and decision control is thus possible through this identification process.

Least-squares identification has been used because it does not require any statistical information on the system. More sophisticated techniques can of course be used to obtain better results. However, the philosophy behind the present work has been to demonstrate the feasibility of applying an on-line control strategy to a queuing system, rather than attempting an elaborate solution to any practical problem.

5.2 Suggestions

The techniques demonstrated here can theoretically be applied, without any major modification, to the control of large-scale systems. However, in any large-scale system there is always the problem of undesirable interactions among the variables. In addition, the optimality for the overall system is doubtful. Therefore, other large-scale system optimization techniques should be considered.

Finally, the successful application of on-line control to a queuing system can be the start of a trend for controlling other socio-economic systems. In view of the current awareness of the environment by scientists and engineers, it is apparent that considerable prospects exist in the field of environmental control. Successful application of modern control theory to such problems can be most rewarding.

APPENDIX I

The Discrete Maximum (Minimum) Principle

The Principle will be merely stated here for reference. A complete proof can be obtained from the literature (22, 18).

Consider the system of discrete equations

$x_{k+1} = f_k(x_k, u_k), \qquad k = 0, 1, 2, \dots, N-1$   (A1-1)

with $u_k \in L$, $x_k \in R^n$ and $u_k \in R^m$. Consider the scalar cost functional

$J = \sum_{k=0}^{N-1} L_k(x_k, u_k) + \phi(x_N)$   (A1-2)

Assume that for every $k = 0, 1, \dots, N-1$, and for every $x_k \in R^n$, the set

$\{ f_k(x_k, u_k) : u_k \in L \}$   (A1-3)

is convex, or at least directionally convex (18). Define the Hamiltonian function

$H(x_k, p_{k+1}, u_k) = L_k(x_k, u_k) + p_{k+1}' f_k(x_k, u_k)$   (A1-4)

where $p_k$ is the costate vector. The Minimum Principle states that along the optimal trajectory $\{x_k^*\}_{k=0,\dots,N}$ and optimal control $\{u_k^*\}_{k=0,\dots,N-1}$ the following relations hold:

i) Canonical equations

$x_{k+1}^* = \dfrac{\partial H}{\partial p_{k+1}} = f_k(x_k^*, u_k^*)$   (A1-5)

$p_k = \dfrac{\partial H}{\partial x_k}$   (A1-6)

ii) Boundary conditions

$x_0^* = x_0$   (A1-7)

$p_N = \dfrac{\partial \phi}{\partial x_N}$   (A1-8)

iii) Minimization of the Hamiltonian

$H(x_k^*, p_{k+1}, u_k^*) \le H(x_k^*, p_{k+1}, u_k)$   (A1-9)

for every $u_k \in L$. In the case of unconstrained control, (A1-9) yields

$\dfrac{\partial H}{\partial u_k} = 0$   (A1-10)
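As a small computational illustration, not part of the thesis, the sketch below applies a first-order gradient method to an assumed scalar example: the costates are generated backward from (A1-6) and (A1-8), and the control sequence is corrected along $-\partial H/\partial u_k$ from (A1-10). All numerical data are arbitrary.

```python
import numpy as np

# Scalar example: x_{k+1} = x_k + u_k, stage cost q*x_k**2 + r*u_k**2,
# terminal cost s*x_N**2.
N, q, r, s, x0 = 10, 1.0, 0.5, 1.0, 10.0
u = np.zeros(N)                          # initial guess for the control sequence
step = 0.01                              # gradient step size

for _ in range(2000):
    x = np.empty(N + 1)
    x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + u[k]                      # forward pass, (A1-5)
    p = np.empty(N + 1)
    p[N] = 2.0 * s * x[N]                           # boundary condition (A1-8)
    for k in range(N - 1, -1, -1):
        p[k] = 2.0 * q * x[k] + p[k + 1]            # costate equation (A1-6)
    grad = 2.0 * r * u + p[1:]                      # dH/du_k, cf. (A1-10)
    u -= step * grad

print("controls for the first five stages:", np.round(u[:5], 3))
```

For a control-constrained problem, the plain gradient step would be replaced by a projection onto the admissible set L, in keeping with (A1-9).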
APPENDIX II

Dynamic Programming Formulation of the Optimal Decision Control Problem

The illustrative example shown in 3.4 will be formulated below as a multistage decision process.

Given

$x_{k+1} = x_k + u_k$   (A2-1)

with $x_k \in R^2$, $x_1 = 0$, and $u_k \in E_2$, the set of admissible decisions, find a control sequence $\{u_k \in E_2\}_{k=1,\dots,5}$ which will minimize the cost

$J = \sum_{i=1}^{5} L(x_i, u_i, i) + \phi(x_6, 6)$   (A2-2)

where

$L(x_i, u_i, i) = u_i' \big[ \alpha_i (x_i + q) - \beta \big]$,

$\phi(x_6, 6) = (x_6 + q)'\, F\, (x_6 + q)$.

Define an optimal value function

$I(x_k, k) = \min_{u_k,\dots,u_5 \in E_2} \Big\{ \sum_{i=k}^{5} L(x_i, u_i, i) + \phi(x_6, 6) \Big\}$   (A2-3)

Because the cost at any stage is independent of any previous stage, we can write

$I(x_k, k) = \min_{u_k \in E_2} \Big[ L(x_k, u_k, k) + \min_{u_{k+1},\dots,u_5 \in E_2} \Big\{ \sum_{i=k+1}^{5} L(x_i, u_i, i) + \phi(x_6, 6) \Big\} \Big]$

By the definition of $I(x_k, k)$ in (A2-3), we have

$I(x_k, k) = \min_{u_k \in E_2} \big[ L(x_k, u_k, k) + I(x_{k+1}, k+1) \big] = \min_{u_k \in E_2} \big[ L(x_k, u_k, k) + I(x_k + u_k, k+1) \big]$   (A2-4)

Starting from the terminal condition

$I(x_6, 6) = \phi(x_6, 6) = (x_6 + q)'\, F\, (x_6 + q)$

the functional equation (A2-4) can be solved completely. The optimal trajectory can be obtained for any initial point at any stage $1 \le k \le 5$. Finally, it should be noted that (A2-4) is in fact the mathematical formulation of Bellman's Principle of Optimality.
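A small computational sketch of the functional equation (A2-4) follows. The admissible set $E_2$ and the weights $\alpha_i$, $\beta$, $q$ and $F$ belong to the example of Section 3.4 and are not reproduced in this appendix, so arbitrary stand-in values are used; only the memoized backward recursion itself is the point of the sketch.

```python
import numpy as np
from functools import lru_cache

# Stand-in data: the true E2, alpha_i, beta, q and F are those of Section 3.4.
E2 = [np.array(v) for v in ((0, 0), (1, 0), (0, 1), (1, 1))]   # assumed decision set
alpha = {i: 1.0 + 0.1 * i for i in range(1, 6)}                # assumed alpha_i
beta = np.array([2.0, 2.0])                                    # assumed beta
q = np.array([1.0, 1.0])                                       # assumed q
F = np.eye(2)                                                  # assumed F

def L(x, u, i):
    return float(u @ (alpha[i] * (x + q) - beta))              # stage cost L(x_i, u_i, i)

def phi(x):
    return float((x + q) @ F @ (x + q))                        # terminal cost phi(x_6, 6)

@lru_cache(maxsize=None)
def I(x_tuple, k):
    """Optimal value function I(x_k, k) of (A2-3); returns (cost, best u_k)."""
    x = np.array(x_tuple, dtype=float)
    if k == 6:
        return phi(x), None
    best = None
    for u in E2:                                   # minimization over E2 in (A2-4)
        cost = L(x, u, k) + I(tuple(x + u), k + 1)[0]
        if best is None or cost < best[0]:
            best = (cost, tuple(int(v) for v in u))
    return best

value, u1 = I((0.0, 0.0), 1)                        # start from x_1 = 0
print("optimal cost:", value, " first decision:", u1)
```

Because the state enters only through the accumulated sum $x_k + u_k$, few distinct states are visited from a fixed starting point, and the memoized recursion terminates quickly.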
APPENDIX III

Matrix Least Squares State Estimation

Consider the observation equation

$x_k = S^T \psi_k$   (A3-1)

or equivalently

$x_k' = \psi_k' S$   (A3-2)

where $x_k \in R^n$, $\psi_k \in R^m$ and $S$ is an (m x n) matrix. $(\cdot)^T$ denotes the transpose of a matrix, and $(\cdot)'$ denotes the transpose of a vector. After k observations we have

$Z_k = H_k S$   (A3-3)

where $Z_k = [x_1 \; x_2 \; \cdots \; x_k]'$ is now a (k x n) matrix and $H_k = [\psi_1 \; \psi_2 \; \cdots \; \psi_k]'$ is a (k x m) matrix.

Consider finding a least-squares estimate of $S$ which would minimize the error function

$J(\hat S_k) = \mathrm{Tr}\big[ (Z_k - H_k \hat S_k)^T (Z_k - H_k \hat S_k) \big]$   (A3-4)

where $\hat S_k$ denotes the estimate of $S$ after k measurements and $\mathrm{Tr}(\cdot)$ denotes the trace of a matrix. Expanding (A3-4), we have

$J(\hat S_k) = \mathrm{Tr}\big\{ Z_k^T Z_k - Z_k^T H_k \hat S_k - \hat S_k^T H_k^T Z_k + \hat S_k^T H_k^T H_k \hat S_k \big\}$

and setting

$\dfrac{\partial J(\hat S_k)}{\partial \hat S_k} = -2 H_k^T Z_k + 2 H_k^T H_k \hat S_k = 0$

gives

$\hat S_k = (H_k^T H_k)^{-1} H_k^T Z_k$   (A3-5)

Some of the gradient matrix identities used above are: if A: n x m, B: m x n, C: p x n, then

$\dfrac{\partial\, \mathrm{Tr}(AB)}{\partial A} = B^T, \qquad \dfrac{\partial\, \mathrm{Tr}(BA)}{\partial A} = B^T, \qquad \dfrac{\partial\, \mathrm{Tr}(CAB)}{\partial A} = C^T B^T$

Now consider taking another measurement $x_{k+1}$:

$Z_{k+1} = \begin{bmatrix} Z_k \\ x_{k+1}' \end{bmatrix} = H_{k+1} S, \qquad H_{k+1} = \begin{bmatrix} H_k \\ \psi_{k+1}' \end{bmatrix}$

so that

$\hat S_{k+1} = (H_{k+1}^T H_{k+1})^{-1} H_{k+1}^T Z_{k+1} = P_{k+1} H_{k+1}^T Z_{k+1}$   (A3-6)

where $P_{k+1} = (H_{k+1}^T H_{k+1})^{-1}$ and $H_{k+1}^T H_{k+1} = H_k^T H_k + \psi_{k+1} \psi_{k+1}'$. Applying the Matrix Inversion Lemma,

$P_{k+1} = P_k - P_k \psi_{k+1} \big(1 + \psi_{k+1}' P_k \psi_{k+1}\big)^{-1} \psi_{k+1}' P_k$   (A3-7)

where $(1 + \psi_{k+1}' P_k \psi_{k+1})$ is a scalar quantity. Substituting (A3-7) into (A3-6) and expanding the terms, we have

$\hat S_{k+1} = \hat S_k + P_k \psi_{k+1} \big(1 + \psi_{k+1}' P_k \psi_{k+1}\big)^{-1} \big[ x_{k+1}' - \psi_{k+1}' \hat S_k \big]$

From the Matrix Inversion Lemma,

$P_{k+1} \psi_{k+1} = P_k \psi_{k+1} \big(1 + \psi_{k+1}' P_k \psi_{k+1}\big)^{-1}$

Hence we obtain the sequential form of (A3-5):

$\hat S_{k+1} = \hat S_k + P_{k+1} \psi_{k+1} \big( x_{k+1}' - \psi_{k+1}' \hat S_k \big)$   (A3-8)

with $P_{k+1}$ given by (A3-7).
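As a quick numerical illustration, the sketch below exercises the sequential estimator (A3-7), (A3-8) on synthetic data; the large initial value of $P$ simply stands in for a batch start-up via (A3-5) and is an assumption of the sketch.

```python
import numpy as np

m, n, N = 3, 2, 200
rng = np.random.default_rng(1)
S_true = rng.normal(size=(m, n))                 # unknown (m x n) parameter matrix

P = 1.0e3 * np.eye(m)                            # large initial P in place of a batch start
S_hat = np.zeros((m, n))

for k in range(N):
    psi = rng.normal(size=(m, 1))                                # regressor psi_{k+1}
    x = (S_true.T @ psi).ravel() + 0.01 * rng.normal(size=n)     # noisy observation x_{k+1}

    denom = 1.0 + float(psi.T @ P @ psi)                         # scalar (1 + psi' P psi)
    P = P - (P @ psi @ psi.T @ P) / denom                        # (A3-7)
    S_hat = S_hat + P @ psi @ (x.reshape(1, -1) - psi.T @ S_hat) # (A3-8)

print("estimation error:", np.linalg.norm(S_hat - S_true))
```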
APPENDIX IV

An Adaptive Feedback Control Algorithm via Dynamic Programming

Consider the optimization problem: find

$\min_{\{u_k\}_{k=0,\dots,N-1},\ \{x_k\}_{k=0,\dots,N}} \; \sum_{k=0}^{N-1} \big( x_k' Q_k x_k + u_k' R_k u_k \big) + x_N' T x_N$   (A4-1)

subject to

$x_{k+1} = x_k + B_k \lambda_k - C_k u_k$   (A4-2)

where $x_k \in R^n$, $\lambda_k \in R^m$ and $u_k \in R^p$.

If the parameters $B_k$ and $C_k$ are unknown, (A4-2) will be replaced by

$\hat x_{k+1|k} = x_k + \hat B_{k|k} \lambda_k - \hat C_{k|k} u_k$   (A4-3)

where $\hat f_{i|j}$ denotes the estimate of the value of the function $f$ at time $i$ based on the total information available at time $j$; $\hat B_{k|k}$ and $\hat C_{k|k}$ are the estimated values of $B_k$ and $C_k$ respectively, and $\hat x_{k+1|k} = E\{x_{k+1} \mid \Gamma_k\}$ is the predicted value of $x_{k+1}$ at $k$. Here $E\{f(x) \mid g\}$ denotes the expected value of the random function $f(x)$ over the random variable $x$, given the information set $g$, and $\Gamma_k$ is the information set available at time $k$. For the above problem, a sufficient set of information at $k$ would be

$\Gamma_k = \{ x_k,\ \hat B_{k|k},\ \hat C_{k|k} \}$   (A4-4)

Define the optimal value function

$I(x_k, k) = \min_{u_k(\cdot),\dots,u_{N-1}(\cdot)} E\Big\{ \sum_{j=k}^{N-1} \big( x_j' Q_j x_j + u_j' R_j u_j \big) + x_N' T x_N \;\Big|\; \Gamma_k \Big\}$   (A4-5)

Applying Bellman's Principle of Optimality (A2-4) to (A4-5), we obtain

$I(x_k, k) = \min_{u_k} \Big\{ x_k' Q_k x_k + u_k' R_k u_k + E\big\{ I(\hat x_{k+1|k},\, k+1) \mid \Gamma_k \big\} \Big\}$   (A4-6)

Assuming a linear feedback structure for the optimal control, we obtain a linear-quadratic optimal cost

$I(x_k, k) = x_k' P_k x_k + 2\, m_k' x_k + b_k$   (A4-7)

where $P_k$ is an (n x n) symmetric matrix, $m_k$ an n-vector, and $b_k$ a scalar quantity. The reason for including $m_k$ and $b_k$ in (A4-7) is to account for the random input term $\lambda_k$ in (A4-2).

Substituting (A4-7) and (A4-3) into (A4-6) and differentiating with respect to $u_k$, we obtain the optimal control

$u_{k|k} = \big( R_k + \hat C_{k|k}' P_{k+1} \hat C_{k|k} \big)^{-1} \hat C_{k|k}' \big[ m_{k+1} + P_{k+1} ( x_k + \hat B_{k|k} \lambda_k ) \big]$   (A4-8)

Replacing $u_k$ in (A4-6) by the optimal control (A4-8), and replacing $I(x_k, k)$ by (A4-7), we obtain after equating the coefficients of the linear and quadratic terms in $x_k$:

$P_k = Q_k + \big[ I - P_{k+1} \hat C_{k|k} \big( R_k + \hat C_{k|k}' P_{k+1} \hat C_{k|k} \big)^{-1} \hat C_{k|k}' \big] P_{k+1}$   (A4-9)

$m_k = \big[ I - P_{k+1} \hat C_{k|k} \big( R_k + \hat C_{k|k}' P_{k+1} \hat C_{k|k} \big)^{-1} \hat C_{k|k}' \big] \big[ m_{k+1} + P_{k+1} \hat B_{k|k} \lambda_k \big]$   (A4-10)

$P_N = T$   (A4-11)

$m_N = 0$   (A4-12)

The complete control sequence is as follows:

$u_{j|k} = \alpha_{j|k}\, x_{j|k} + \beta_{j|k}, \qquad j = k, k+1, \dots, N-1; \quad k = 0, 1, 2, \dots, N-1$   (A4-13)

$\alpha_{j|k} = \Lambda_{j|k}^{-1}\, \hat C_{k|k}' P_{j+1|k}$   (A4-14)

$\beta_{j|k} = \Lambda_{j|k}^{-1}\, \hat C_{k|k}' \big[ m_{j+1|k} + P_{j+1|k} \hat B_{k|k} \lambda_j \big]$   (A4-15)

$\Lambda_{j|k} = R_j + \hat C_{k|k}' P_{j+1|k} \hat C_{k|k}$   (A4-16)

where $P_{j|k}$ and $m_{j|k}$ are obtained from the recursive equations

$P_{j|k} = Q_j + \big[ I - P_{j+1|k} \hat C_{k|k} \Lambda_{j|k}^{-1} \hat C_{k|k}' \big] P_{j+1|k}$   (A4-17)

$m_{j|k} = \big[ I - P_{j+1|k} \hat C_{k|k} \Lambda_{j|k}^{-1} \hat C_{k|k}' \big] \big[ m_{j+1|k} + P_{j+1|k} \hat B_{k|k} \lambda_j \big]$   (A4-18)

$P_{N|N-1} = T, \qquad m_{N|N-1} = 0$   (A4-19)

The above algorithm is in effect an approximation to the optimal control. At each stage $k$, the system parameters $B_k$ and $C_k$ are assumed equal to $\hat B_{k|k}$, $\hat C_{k|k}$ for all remaining stages $[k, k+1, \dots, N-1]$. This fact is illustrated in equations (A4-14) to (A4-18), where $\hat B_{k|k}$ and $\hat C_{k|k}$ are used instead of the $\hat B_{j|k}$ and $\hat C_{j|k}$ of the optimal case. A further approximation can of course be made, when the random input $\{\lambda_j\}$ is unknown, by replacing $\lambda_j$ with $\lambda_k$; in this case, all future random inputs are assumed to be the same as $\lambda_k$.

REFERENCES

Queuing Theory: Analysis and Applications

1. Chan, W.C., "Computer-controlled Queuing System with Constant-access Cycle and General Service Times", Proc. IEE, Vol. 117, No. 5, pp. 927-930, May 1970.

2. El-Bardai, M.T., "Load Regulation of a Finite Size Queue", Int. J. Systems Sci., Vol. 2, No. 2, pp. 163-170, 1971.

3. Esogbue, A.M.O., "Optimal and Adaptive Control of a Stochastic Service System with Applications to Hospitals", Ph.D. Dissertation, University of Southern California, June 1968.

4. Ireland, R.J. and Thomas, M.E., "Optimal Control of Customer-flow Through a System of Parallel Queues", Int. J. Systems Sci., Vol. 2, No. 4, pp. 401-410, 1972.

5. Neymark, Yu. I. and Fedotkin, M.A., "On the Operation of an Automaton Controlling Traffic at an Intersection", Automation and Remote Control (U.S.S.R.), Vol. 27, pp. 415-424, 1966.

6. Saaty, T.L., Elements of Queuing Theory with Applications, New York: McGraw-Hill Book Co., Inc., 1961.

7. Uematu, Tosio, "On the Traffic Control at an Intersection Controlled by the Repeated Fixed-cycle Traffic Lights", Ann. Inst. Statist. Math., Vol. 9, No. 2, pp. 87-107, 1958.

8. Zacks, S. and Yadin, M., "Analytic Characterization of the Optimal Control of a Queuing System", J. Appl. Prob., Vol. 7, pp. 617-633, December 1970.

Statistical Analysis and Operations Research Techniques

9. Brown, R.G., Smoothing, Forecasting and Prediction of Discrete Time Series, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1963.

10. Hillier, F.S. and Lieberman, G.J., Introduction to Operations Research, Holden-Day, Inc., Calif., 1967.

11. Vishwakarma, K.P., "Prediction of Economic Time-series by means of the Kalman Filter", Int. J. Systems Sci., Vol. 1, No. 1, pp. 25-32, 1970.

Optimal and Adaptive Control: Theory and Applications

12. Aoki, M., "Dynamic Programming and Numerical Experimentation as Applied to Adaptive Control Systems", Ph.D. Dissertation, University of California, Los Angeles, February 1960.

13. Åström, K.J. and Eykhoff, P., "System Identification - A Survey", Automatica, Vol. 7, No. 2, pp. 123-162, March 1971.
14. Dreyfus, S.E., "Some Types of Optimal Control of Stochastic Systems", J. SIAM Control, Ser. A, Vol. 2, No. 1, pp. 120-134, 1962.

15. Eykhoff, P., "Process Parameter and State Estimation", Automatica, Vol. 4, pp. 205-233, 1968.

16. Farison, J.B., Graham, R.E. and Shelton, R.C., "Identification and Control of Linear Discrete Systems", IEEE Trans. Automatic Control, Vol. AC-12, pp. 438-442, August 1967.

17. Heffes, H. and Horing, S., "Optimal Allocation of Tracking Pulses for an Array Radar", IEEE Trans. Automatic Control, Vol. AC-15, pp. 81-87, February 1970.

18. Holtzman, J.M., "On the Maximum Principle for Nonlinear Discrete Time Systems", IEEE Trans. Automatic Control, Vol. AC-11, pp. 273-274, April 1966.

19. Larson, R.E., "A Survey of Dynamic Programming Computational Procedures", IEEE Trans. Automatic Control, Vol. AC-12, pp. 767-774, December 1967.

20. Meier, L., III, Larson, R.E. and Tether, A.J., "Dynamic Programming for Stochastic Control of Discrete Systems", IEEE Trans. Automatic Control, Vol. AC-16, pp. 767-775, December 1971.

21. Mendes, M., "An On-line Adaptive Control Method", Automatica, Vol. 7, No. 3, pp. 323-332, May 1971.

22. Sage, A.P., Optimum Systems Control, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1968.

23. Sage, A.P. and Melsa, J.L., Estimation Theory with Applications to Communications and Control, McGraw-Hill Book Co., Inc., 1971.

24. Tse, E. and Athans, M., "Adaptive Stochastic Control for a Class of Linear Systems", IEEE Trans. Automatic Control, Vol. AC-17, pp. 38-51, February 1972.

25. Wilde, D.J. and Beightler, C.S., Foundations of Optimization, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1967.
