Sampled-Data Generalized Predictive Control (SDGPC)

by

Guoqiang Lu
B.Sc., Beijing Institute of Technology, 1984
M.Sc., Beijing Institute of Technology, 1987

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF ELECTRICAL ENGINEERING

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
January 1996
© Guoqiang Lu, 1996

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical Engineering
The University of British Columbia
Vancouver, Canada

Abstract

This thesis develops a novel predictive control strategy called Sampled-Data Generalized Predictive Control (SDGPC). SDGPC is based on a continuous-time model yet assumes the projected control profile to be piecewise constant, i.e. compatible with a zero-order-hold circuit. It thus enjoys both the advantage of continuous-time modeling and the flexibility of digital implementation. SDGPC is shown to be equivalent to an infinite-horizon LQ control law under certain conditions. For well-damped open-loop stable systems, the piecewise-constant projected control scenario adopted in SDGPC is shown to have benefits such as reduced computational burden and increased numerical robustness. When extending SDGPC to tracking design, it is shown that future knowledge of the setpoint significantly improves tracking performance.
A two-degree-of-freedom SDGPC based on the optimization of two performance indices is proposed. Actuator constraints are considered in an anti-windup framework, and it is shown that the nonlinear control problem is equivalent to a linear time-varying problem. The proposed anti-windup algorithm is also shown to have attractive stability properties. Time-delay systems are treated next; it is shown that the Laguerre-filter-based adaptive SDGPC has excellent performance controlling systems with varying time delay. An algorithm for continuous-time system parameter estimation based on sampled input-output data is presented. The effectiveness and the advantages of continuous-time model estimation and the SDGPC algorithm over the pure discrete-time approach are highlighted by an inverted pendulum experiment.

Table of Contents

Abstract
List of Tables
List of Figures
Acknowledgment
1 Introduction
  1.1 Background and Motivation
  1.2 Literature Review
  1.3 Contribution of the Thesis
  1.4 Outline of the Thesis
2 Sampled-Data Generalized Predictive Control (SDGPC)
  2.1 Formulation of SDGPC
  2.2 Stability Properties of SDGPC
    2.2.1 Stability of SDGPC with control execution time T_exe = T_m
    2.2.2 Property of SDGPC with control execution time T_exe < T_m
  2.3 Interpretation and Stability Property of the Integral Control Law
  2.4 Simulations and Tuning Guidelines of SDGPC
  2.5 Conclusion
3 SDGPC Design for Tracking Systems
  3.1 The Servo SDGPC Problem
  3.2 The Model Following SDGPC Problem
  3.3 The Tracking SDGPC Problem
  3.4 The Feedforward Design of SDGPC
  3.5 Conclusion
4 Control of Time-delay Systems and Laguerre Filter Based Adaptive SDGPC
  4.1 The Direct Approach
  4.2 The Laguerre Filter Modelling Approach
  4.3 Conclusion
5 Anti-windup Design of SDGPC by Optimizing Two Performance Indices
  5.1 SDGPC Based on Two Performance Indices
    5.1.1 Optimizing Servo Performance
    5.1.2 Optimizing Disturbance Rejection Performance
  5.2 Anti-windup Scheme
  5.3 Conclusion
6 Continuous-time System Identification Based on Sampled Data
  6.1 The Regression Model for Continuous-time Systems
  6.2 The EFRA
  6.3 Dealing with Fast Time-varying Parameters
  6.4 Identification and Control of an Inverted Pendulum
    6.4.1 System Model
    6.4.2 Parameter Estimation
    6.4.3 Controller Design
  6.5 Conclusion
7 Conclusions
A Stability Results of Receding Horizon Control
  A.1 The Finite and Infinite Horizon Regulator
  A.2 The Receding Horizon Regulator
References

List of Tables
2.1 Comparison of SDGPC and discrete-time receding horizon LQ control

List of Figures
2.1 The projected control derivative
2.2 The implementation scheme
2.3 The integral control law
2.4 Zero placing strategy
2.5 Comparison of SDGPC and GPC strategy
2.6 Zero placement in SDGPC
2.7 The projected control derivative
2.8 Step response of example 1
2.9 Simulation 1 of example 1
2.10 Simulation 2 of example 1
2.11 Simulation 3 of example 1
2.12 Simulation 4 of example 1: infinite horizon LQR
2.13 Simulation of plant (2.68)
2.14 Simulation of plant (2.69)
2.15 Simulation of plant (2.69)
2.16 Simulation of plant (2.73)
3.17 The projected control derivative
3.18 The servo SDGPC controller
3.19 The dynamic feedforward controller implementation
3.20 Servo SDGPC of plant (3.110): double integrator
3.21 Servo SDGPC of plant (3.110): single integrator
3.22 Desired trajectory for model-following problem
3.23 Model following control of unstable third-order system
3.24 Model following control of stable third-order system
3.25 Servo SDGPC of plant (3.128)
3.26 Tracking SDGPC of plant (3.128)
3.27 Servo SDGPC of plant (3.128)
3.28 Tracking SDGPC of plant (3.128)
3.29 Disturbance feedforward design
3.30 The effect of disturbance model
4.31 Graphical illustration of SDGPC for systems with delay
4.32 Laguerre Filter Network
4.33 Simulation of plant (4.169)
4.34 Simulation of plant (4.169) with measurement noise
5.35 Graphical illustration of (5.189)
5.36 Control subject to actuator constraints
5.37 Example 5.1: Control law (5.247) without anti-windup compensation
5.38 Example 5.1: Control law (5.247) with anti-windup compensation
5.39 Example 5.2: Control law (5.251) with anti-windup compensation
5.40 Example 5.2: Control law (5.251) without anti-windup compensation
5.41 Conventional anti-windup
5.42 Conventional anti-windup scheme vs proposed
6.43 Graphical illustration of numerical integration
6.44 Estimation of time-varying parameters
6.45 The inverted pendulum experimental setup
6.46 Downward pendulum
6.47 Upward pendulum
6.48 Parameter estimation of model (6.283)
6.49 Parameter estimation of model (6.283)
6.50 Step response of the estimated discrete-time model (6.292)
6.51 Changing dynamics
6.52 SDGPC of pendulum subject to disturbance

Acknowledgment

I would like to express my deep gratitude to Professor Guy A. Dumont for giving me the opportunity to pursue a PhD degree in process control under his supervision. I wish to thank him for his invaluable guidance and instruction throughout my study at UBC, and for helping me select a project for his self-tuning control course which led to the research reported in this thesis. I would also like to thank Professor Michael S. Davies, Dr.
Ye Fu and other members of the Control Group at the Pulp and Paper Centre for their helping hands. I also appreciate the constructive suggestions and criticisms of the members of my examination committee, which made the draft more readable. Financial support from Professor Guy A. Dumont and the Woodpulp Network of Centres of Excellence is gratefully acknowledged. I would like to thank my parents, brother and sister for their love and care; they were always there, especially during the hard times. I am grateful to my wife Weiling, who always believed in me and offered unlimited support. I would like to thank her for her love and patience during those years.

Chapter 1
Introduction

1.1 Background and Motivation

Model Based Predictive Control (MBPC) has achieved a significant level of success in industrial applications during the last ten years. This has inspired the academic community to investigate the theoretical foundations of MBPC, and as a result a wealth of exciting stability results has been obtained over the last few years. It is safe to say that a solid theoretical foundation for model predictive control has now been established. One of the many explanations for the success of MBPC is that predictive control is an open methodology: within the framework of predictive control, the controller can be closely tailored to meet the requirements of a particular problem. As a result, quite a few predictive controllers have been proposed. Some of the well-known predictive controllers are GPC (Generalized Predictive Control [13]), DMC (Dynamic Matrix Control [15]), Model Predictive Heuristic Control [61], etc. All of these controllers are developed in a discrete-time context; that is, all the controller designs start with a discrete-time model, which can be obtained either by direct identification from discrete input-output data or by discretizing a continuous-time model.
Although most industrial processes are continuous in nature, the discrete-time approach to MBPC is a natural choice since most MBPC algorithms require computer implementation. However, the selection of the sampling interval in digital control is not a trivial task. Moreover, it has been pointed out that in applications where fast sampling is needed, the discrete-time model in the z-domain is not a good description of the underlying continuous-time process, since the poles and zeros of the continuous-time system are mapped to the unit circle as the sampling interval Δ goes to zero. It is thus not a surprise to see a resurgence of interest in continuous-time model based methods [33] [32]. Efforts have also recently been made to unify discrete-time and continuous-time methods under the name of the δ-operator [50].

One of the few continuous-time results on MBPC is the work by H. Demircioglu and P. J. Gawthrop [18], in which Continuous-time Generalized Predictive Control (CGPC) was developed based on a Laplace transfer function model. Multivariable CGPC [19] and a modified CGPC with guaranteed stability are also available [16]. However, results on continuous-time MBPC are still very limited compared with the discrete-time counterpart, and to the best of the author's knowledge there is still no reported real-life application of continuous-time MBPC. This is perhaps partly because it assumes the projected future control inputs to be of a polynomial type, which is not compatible with the widely used zero-order-hold device in digital control equipment. As a result, the digital implementation of CGPC unavoidably introduces approximations which often demand a small sampling interval, and this demand will result in computational difficulties in some applications.
Nonetheless, continuous-time modelling is still appealing even for the purpose of digital control, since the physical relevance of the model parameters is retained and it is easier to identify partially known systems in a continuous-time setting. This motivates us to develop an MBPC algorithm based on continuous-time modelling while assuming the projected future control scenario to be piecewise constant, i.e. compatible with the zero-order-hold device. The model form is chosen to be a continuous-time state-space equation instead of a continuous-time transfer function for two reasons. First, it is easier to deal with time delay in the time domain. Second, the Laguerre network naturally has a state-space form in the time domain. Actuator constraints are not considered in the initial problem formulation; rather, they are incorporated into the scheme later in the framework of anti-windup design.

1.2 Literature Review

The historical background as well as current trends in Model Based Predictive Control (MBPC) are reviewed in this section. The concept of predictive control originated in the late seventies with the seminal papers on DMC [15] by Cutler and Ramaker and on Model Predictive Heuristic Control [61] by Richalet et al. The common features of predictive control are:

1. At each "present moment" t, a forecast of the process output over a long-range time horizon is made. This forecast is based on a mathematical model of the process dynamics, and on the future control scenario one proposes to apply from now on.
2. The control strategy is selected such that it brings the predicted process output back to the setpoint in the "best" way according to a specific control objective. Most often this is done by minimizing a quadratic performance index.
3. The resulting control is then applied to the process input, but only at the present time.
At the next sampling instant the whole procedure is repeated, leading to an updated control action with corrections based on the latest measurements. This is called a receding horizon strategy.

Another school of thought in predictive control, whose objective is to design the underlying controllers in an adaptive control context, emerged almost independently at about the same time. Peterka's predictive controller [58], Ydstie's extended-horizon control [84], Mosca et al.'s MUSMAR [53] and the GPC [13] of Clarke et al. are all in this category. The continuous-time counterpart of GPC, called CGPC, is reported in [18]; however, its completely continuous-time design seems to limit its applicability. The structures of all the MBPC algorithms are the same, but they differ in detail. For example, DMC [15] uses a finite step response model and Model Predictive Heuristic Control [61] uses an impulse response model, while GPC [13] uses an ARIMAX model. Many applications of MBPC are reported in the literature, and several companies offer MBPC software. The survey paper by Garcia et al. [31] examines the relationships between several MBPC algorithms, and industrial applications are also reported there. A more recent paper by Richalet [62] presented two classical applications of MBPC. By the late eighties, MBPC had secured widespread acceptance in the process industry despite the lack of a firm theoretical foundation, which is quite remarkable. It is acknowledged [51] that there are no useful general stability results for the original formulation of MBPC. In fact, it was shown in [4] that GPC has difficulty controlling systems with nearly cancelled unstable poles and zeros. Although such systems are difficult for any control method, this nonetheless showed that GPC has some serious shortcomings. Bitmead et al. [4] suggested using the traditional infinite-horizon LQG instead.
The infinite-horizon approach, albeit with a guaranteed stability property, is less appealing in applications where input and/or state constraints exist. A finite horizon with terminal state constraints was proposed independently by a group of researchers [14, 60, 52, 54]. The survey paper [11] by Clarke covers the most recent advances in MBPC; a bibliography of MBPC and related topics from 1965 to 1993 is also appended in that paper. A book entitled "Advances in Model-Based Predictive Control" [11], edited by Clarke, is based on the presentations made at a conference wholly devoted to recent advances in MBPC; it is a complete collection of the latest results on MBPC. As pointed out by Clarke [11], MBPC can handle real-time state and actuator constraints in a natural way. This is an active research topic with important practical implications. It is predicted [51] that MBPC will emerge as a versatile tool with many desirable properties and a solid theoretical foundation. It is worth pointing out that most MBPC algorithms are not robust synthesis methods, in the sense that there is no explicit incorporation of a realistic plant uncertainty description in the problem formulation. Recent developments in the theory and application (to control) of convex optimization involving Linear Matrix Inequalities (LMIs) [7] have opened a new avenue for research in MBPC. Much of the existing robust control theory can be recast in the framework of LMIs, and the resulting convex optimization problems can be solved very efficiently using recent interior-point methods. It is thus not surprising that results on MBPC using convex optimization (as opposed to conventional linear or quadratic programs) have begun to appear in the literature [40, 75]. This is certainly a promising research field for MBPC.
Literature reviews on related topics, such as receding horizon LQ control, Laguerre filter based modelling and control, anti-windup schemes, control of time-delay systems, and continuous-time system identification based on sampled input-output data, will be given when these topics are introduced.

1.3 Contribution of the Thesis

The contributions of this thesis can be summarized as follows.

1. A new predictive control strategy is developed in a sampled-data framework. The resulting algorithm, SDGPC, has a guaranteed stability property, and its relationship with the infinite-horizon LQ regulator is established clearly. SDGPC enjoys the advantage of continuous-time modeling and the flexibility of digital implementation.
2. A two-degree-of-freedom SDGPC based on the optimization of two performance indices is proposed. Its servo performance and disturbance rejection performance can be tuned separately. Based on this design, an anti-windup scheme is developed with guaranteed stability properties. The novel approach used here is to transform the nonlinear problem into a time-varying linear problem. This scheme has important practical implications as well as theoretical interest.
3. The one-degree-of-freedom SDGPC is extended to tracking system design.
4. Control of time-delay systems is treated in detail, and a practically appealing Laguerre filter based adaptive SDGPC algorithm is developed.
5. An algorithm to estimate the parameters of a continuous-time system based on sampled input-output data is presented. Fast time-varying parameters can also be estimated under this framework. The effectiveness and the advantages of continuous-time model estimation and the SDGPC algorithm over the pure discrete-time approach are highlighted by an inverted pendulum experiment.

1.4 Outline of the Thesis

Chapter 2 presents the Sampled-Data Generalized Predictive Control algorithm SDGPC.
Its relationship with the infinite-horizon LQ regulator and its stability property are analyzed in detail. Simulation and tuning guidelines are also given by example.

Chapter 3 extends the One-Degree-of-Freedom (ODF) SDGPC to tracking problems, resulting in a Two-Degree-of-Freedom (TDF) design formulation. The TDF-SDGPC can track non-constant reference trajectories and/or disturbances with zero steady-state error. When future setpoint information is available, the TDF-SDGPC has a concise form and the tracking performance can be improved dramatically.

Chapter 4 considers control of time-delay systems. The direct approach, in which the time delay appears explicitly in the model, and the Laguerre filter modeling approach are proposed. The Laguerre filter based adaptive SDGPC is particularly appealing in that its computational burden is independent of the prediction horizon.

Chapter 5 deals with another important issue in process control: actuator constraints. An SDGPC algorithm based on two performance indices is proposed. The control problem is interpreted as a nominal servo performance design plus an integrator compensation for disturbances and modeling error. This algorithm, under the framework of anti-windup design, effectively transforms the constrained control problem into an unconstrained time-varying control problem whose stability can be guaranteed, which is a pleasant result. Examples are presented to show the effectiveness of the algorithm.

Chapter 6 proposes a method to estimate the parameters of a continuous-time model based on sampled input-output data. It is argued that even if the controller design is based on a discrete-time model, it is always desirable to estimate the continuous-time model before discretization. An inverted pendulum is successfully controlled by SDGPC based on a continuous-time model estimated using the algorithm developed in this chapter.
Chapter 7 summarizes the thesis and gives suggestions for future research.

Chapter 2
Sampled-Data Generalized Predictive Control (SDGPC)

The poor numerical properties of discrete-time models based on the shift operator in fast sampling applications were shown by Middleton and Goodwin [50, pp. 44]. This is no surprise, since the discrete-time model coefficients can be badly conditioned under fast sampling [50, pp. 46]. One solution is to use the δ operator, which offers superior numerical properties and whose model coefficients closely resemble those of the continuous-time counterpart [50, pp. 46]. Gawthrop [32], on the other hand, argued that a continuous-time process is best represented by a continuous-time model and took the completely continuous-time approach, for example in the formulation of Continuous-time Generalized Predictive Control (CGPC) [18], in which the user-selected future control scenario is of a polynomial form. This approach requires approximation in digital implementation and may cause unacceptable errors for a large sampling interval. The SDGPC approach given in this chapter is based on continuous-time modeling while assuming a piecewise-constant projected control scenario, thus enjoying the advantages of both sides.

This chapter is organized as follows. SDGPC is formulated in section 2.1. Section 2.2 studies the stability properties of SDGPC. Section 2.3 gives interpretations of the SDGPC law in its integral form. Simulations are presented in section 2.4 to give tuning guidelines for SDGPC. Section 2.5 concludes the chapter. The work in this chapter was summarized in [46].

2.1 Formulation of SDGPC

In order to highlight the basic ideas behind SDGPC, we consider only SISO systems here. However, the extension to MIMO systems is straightforward.
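The numerical point made at the opening of this chapter can be illustrated with a small sketch (ours, not from the thesis; the first-order pole is illustrative): as the sampling interval shrinks, the shift-operator pole exp(aΔ) crowds toward 1 on the unit circle, while the δ-operator pole (exp(aΔ) − 1)/Δ stays near the continuous-time pole a.

```python
import math

a = -1.0  # continuous-time pole of dx/dt = a*x (illustrative value)

for delta in (0.1, 0.001, 1e-6):           # progressively faster sampling
    q_pole = math.exp(a * delta)           # shift-operator pole: exp(a*Delta)
    d_pole = (q_pole - 1.0) / delta        # delta-operator pole: (q - 1)/Delta
    # As Delta -> 0 the q-pole loses distinguishability near 1,
    # while the delta-pole remains close to the continuous-time pole a.
    print(f"Delta={delta:g}: q-pole={q_pole:.6f}, delta-pole={d_pole:.6f}")
```

Under fast sampling all stable q-poles cluster near 1 regardless of the underlying dynamics, which is exactly the conditioning problem cited from [50].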
The system being considered is described by the state-space equation

\dot{x}(t) = A x(t) + B u(t)
y(t) = c^T x(t),    dim(x) = n    (2.1)

In order to introduce integral action in the control law, an integrator is inserted before the plant to give the augmented system

\dot{x}_f = A_f x_f + B_f u_d
y_f = c_f^T x_f,    dim(x_f) = n_f = n + 1    (2.2)

where

x_f = [x_d; e],    x_d(t) = \dot{x}(t),    u_d(t) = \dot{u}(t),    e(t) = y(t) - w
A_f = [A 0; c^T 0],    B_f = [B; 0],    c_f^T = [0 ... 0 1]    (2.3)

and w is the constant setpoint. We further assume that the projected future control derivative u_d(t) is piecewise constant over periods of T_m = T_p / N_u, with values u_d(1), u_d(2), ..., u_d(N_u), as in Fig. 2.1. The benefit of assuming a piecewise-constant control derivative is that it results in a continuous control signal.

Figure 2.1: The projected control derivative

We call T_p the prediction horizon (or prediction time) and N_u the control order, i.e. the number of allowable control maneuvers over the prediction horizon. In Fig. 2.1, T_m is called the design sampling interval since the resulting SDGPC law, as will be shown in section 2.2, is equivalent to a discrete-time receding horizon control law based on (2.2) with sampling interval T_m, provided that the first control u_d(1) is injected into the plant for a duration of T_m. However, this is not necessarily so: the first control u_d(1) can actually be injected into the plant for a shorter time interval T_exe, which we will call the execution sampling interval. T_exe is the implementation sampling interval (in contrast to the design sampling interval) and can take any value on [0, T_m].
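The augmentation (2.2)-(2.3) is mechanical enough to sketch in code. The following minimal illustration uses plain nested lists; the helper name `augment` is ours, not the thesis's:

```python
def augment(A, B, c):
    """Build the integrator-augmented system of (2.2)-(2.3):
    x_f = [x_d; e],  A_f = [[A, 0], [c^T, 0]],  B_f = [B; 0],  c_f = [0, ..., 0, 1].
    A is an n x n nested list, B and c are length-n lists."""
    n = len(A)
    A_f = [A[i] + [0.0] for i in range(n)]        # top block rows [A | 0]
    A_f.append([c[i] for i in range(n)] + [0.0])  # bottom row [c^T | 0]
    B_f = [B[i] for i in range(n)] + [0.0]        # [B; 0]
    c_f = [0.0] * n + [1.0]                       # picks out e = c_f^T x_f
    return A_f, B_f, c_f

# Double-integrator example: A = [[0,1],[0,0]], B = [0,1], c = [1,0]
A_f, B_f, c_f = augment([[0.0, 1.0], [0.0, 0.0]], [0.0, 1.0], [1.0, 0.0])
```

The augmented state dimension is n_f = n + 1, as in (2.2).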
Similar to all other model based predictive control approaches, SDGPC is based on minimizing a performance index:

J(t) = \int_0^{T_p} [ e^2(t+T) + \lambda u_d^2(t+T) ] dT    (2.4)

subject to:

x_f(t + T_p) = 0    (2.5)

Note that the above optimization problem is a standard finite-time linear quadratic regulator problem in terms of the augmented plant model (2.2). One of the key concepts in the formulation of model based predictive control is the receding horizon strategy. Special to SDGPC, however, is that there are two ways to implement the receding horizon strategy. That is, after the projected control vector [u_d(1), u_d(2), ..., u_d(N_u)] is obtained, either of the following strategies can be used:

1. The first control u_d(1) is applied to the plant for a time duration of T_m.
2. The first control u_d(1) is applied to the plant for a time duration of T_exe, which is a fraction of the design sampling interval T_m.

The first case is equivalent to a digital control law with sampling interval T_m, as will be shown in the next section. In the second case, T_exe can be smaller than T_m, and as the execution time T_exe → 0 it becomes a continuous-time control law. This approach thus has the potential to solve the numerical problems of the pure discrete-time approach, mentioned at the beginning of this chapter, in fast sampling applications. With the above preparations, we are in a position to derive the SDGPC law.
The projected future control derivative in Fig. 2.1 can be described mathematically as

u_d(t) = H(t) u_d    (2.6)

where

H(t) = [H_1(t)  H_2(t)  ...  H_i(t)  ...  H_{N_u}(t)]
u_d = [u_d(1)  u_d(2)  ...  u_d(i)  ...  u_d(N_u)]^T    (2.7)

H_i(t) = 1 for (i-1) T_m \le t < i T_m and 0 otherwise,   i = 1, 2, ..., N_u,   T_m = T_p / N_u    (2.8)

Based on the system model (2.2) and the projected control scenario (2.6), we have the following T-ahead state prediction:

x_d(t+T) = e^{AT} x_d(t) + \int_0^T e^{A(T-\tau)} B u_d(t+\tau) d\tau
         = e^{AT} x_d(t) + \left( \int_0^T e^{A(T-\tau)} B H(\tau) d\tau \right) u_d
         = e^{AT} x_d(t) + \left[ \int_0^T e^{A(T-\tau)} B H_1(\tau) d\tau  \cdots  \int_0^T e^{A(T-\tau)} B H_{N_u}(\tau) d\tau \right] u_d
         = e^{AT} x_d(t) + \Gamma(T) u_d    (2.9)

where

\Gamma(T)_{n \times N_u} = \left[ \int_0^T e^{A(T-\tau)} B H_1(\tau) d\tau  \cdots  \int_0^T e^{A(T-\tau)} B H_{N_u}(\tau) d\tau \right]    (2.10)

or, written out over the sub-intervals of the prediction horizon,

\Gamma(T) = [ \int_0^T e^{A(T-\tau)} d\tau B,  0,  ...,  0 ],                                              0 \le T < T_m
\Gamma(T) = [ \int_0^{T_m} e^{A(T-\tau)} d\tau B,  \int_{T_m}^T e^{A(T-\tau)} d\tau B,  0,  ...,  0 ],    T_m \le T < 2 T_m
  ...
\Gamma(T) = [ \int_0^{T_m} e^{A(T-\tau)} d\tau B,  ...,  \int_{(N_u-1) T_m}^T e^{A(T-\tau)} d\tau B ],    (N_u - 1) T_m \le T \le T_p    (2.11)

With x_d(t+T), e(t+T) can be obtained:

e(t+T) = e(t) + \int_0^T c^T x_d(t+\tau) d\tau
       = e(t) + \int_0^T c^T e^{A\tau} d\tau\, x_d(t) + c^T \Gamma_0(T) u_d    (2.12)

where

\Gamma_0(T) = \int_0^T \Gamma(\tau) d\tau    (2.13)

Recalling the cost (2.4), we define the Hamiltonian

H(t, \eta) = J(t) + \eta^T x_f(t + T_p)
           = \int_0^{T_p} \{ [ e(t) + c^T A^{-1} (e^{AT} - I) x_d(t) + c^T \Gamma_0(T) u_d ]^2 + \lambda u_d^T H^T(T) H(T) u_d \} dT
             + \eta^T [ e^{A T_p} x_d(t) + \Gamma(T_p) u_d ;\; e(t) + c^T A^{-1} (e^{A T_p} - I) x_d(t) + c^T \Gamma_0(T_p) u_d ]    (2.14)

Setting \partial H / \partial u_d = 0 and enforcing the terminal constraint, we have the optimal solution for u_d:

u_d = K_d x_d(t) + K_e e(t)    (2.15)

where
T_d = \int_0^{T_p} \Gamma_0^T(T) c\, c^T (e^{AT} - I) A^{-1} dT,    T_e = \int_0^{T_p} \Gamma_0^T(T) c\, dT
K_1 = \int_0^{T_p} [ \Gamma_0^T(T) c\, c^T \Gamma_0(T) + \lambda H^T(T) H(T) ] dT
\Gamma_g^T = [ \Gamma(T_p) ;\; c^T \Gamma_0(T_p) ],    K_2 = ( \Gamma_g^T K_1^{-1} \Gamma_g )^{-1},    K_3 = I - \Gamma_g K_2 \Gamma_g^T K_1^{-1}
K_d = -K_1^{-1} ( K_3 T_d + \Gamma_g K_2 [ e^{A T_p} ;\; c^T A^{-1} (e^{A T_p} - I) ] )
K_e = -K_1^{-1} ( K_3 T_e + \Gamma_g K_2 [ 0 ;\; 1 ] )    (2.16)

Fig. 2.2 shows the SDGPC law (2.15) in block-diagram form.

Figure 2.2: The implementation scheme (observer-based state feedback around the plant \dot{x} = Ax + Bu, y = c^T x)

As mentioned earlier, the control law (2.15) does not necessarily need to be implemented with the design sampling interval T_m. When the execution sampling interval T_exe goes to zero, we can integrate both sides of (2.15) to obtain an integral control law in terms of the state and the control signal of the original system (2.1):

u(t) = K_d x(t) + K_e \int e(\tau) d\tau + \eta_0    (2.17)

The block diagram of control law (2.17) is shown in Fig. 2.3.

Figure 2.3: The integral control law

The constant term \eta_0 in (2.17) is unspecified and has no bearing on the problem, in the sense that it affects neither the closed-loop eigenvalues nor the asymptotic property e(t) → 0 as t → ∞, provided that the integral control law (2.17) is stabilizing. However, we can make use of this fact and let \eta_0 be proportional to the constant setpoint w. The effect is that a system zero can be placed at a desired location to improve the transient response of the closed-loop system under control law (2.17). The scheme is depicted in Fig. 2.4. Details on how to select the feedforward gain K_w in Fig. 2.4 will be discussed in section 2.3.

Figure 2.4: Zero placing strategy

SDGPC was developed above by minimizing the cost (2.4) subject to the end-point state constraint (2.5).
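The integral control law (2.17) can be exercised on a scalar plant by forward-Euler simulation. The gains below are hand-picked stabilizing values for illustration, not the SDGPC gains of (2.16):

```python
# Euler simulation of the integral control law (2.17) on a scalar plant
# dx/dt = a*x + b*u, y = x, with setpoint w.  K_d, K_e are assumed
# stabilizing values chosen for this sketch, *not* computed from (2.16).
a, b, w = 0.5, 1.0, 1.0        # open-loop unstable scalar plant, unit setpoint
K_d, K_e = -3.0, -2.0          # illustrative stabilizing gains
dt, x, z = 1e-3, 0.0, 0.0      # z accumulates the integral of e(t)

for _ in range(20000):          # simulate 20 seconds
    e = x - w                   # tracking error e(t) = y(t) - w
    u = K_d * x + K_e * z       # control law (2.17), with eta_0 = 0
    x += dt * (a * x + b * u)   # forward-Euler plant step
    z += dt * e                 # integrate the error

print(round(x, 3))              # settles at the setpoint w = 1.0
```

The integrator state z forces e(t) → 0 in steady state regardless of the plant gain, which is exactly the integral action introduced by the augmentation (2.2).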
Another approach is to include the end-point state in the cost functional:

J(t) = \int_0^{T_p} [ e^2(t+T) + \lambda u_d^2(t+T) ] dT + \gamma\, x_f^T(t + T_p)\, x_f(t + T_p)    (2.18)

Substituting equations (2.6), (2.9) and (2.12) into the performance index (2.18), we have

J(t) = \int_0^{T_p} \left\{ \left[ e(t) + \int_0^T c^T e^{A\tau} d\tau\, x_d(t) + c^T \Gamma_0(T) u_d \right]^2 + \lambda u_d^T H^T(T) H(T) u_d \right\} dT
       + \gamma \left[ e^{A T_p} x_d(t) + \Gamma(T_p) u_d \right]^T \left[ e^{A T_p} x_d(t) + \Gamma(T_p) u_d \right]
       + \gamma \left[ e(t) + \int_0^{T_p} c^T e^{A\tau} d\tau\, x_d(t) + c^T \Gamma_0(T_p) u_d \right]^2    (2.19)

Setting \partial J / \partial u_d = 0, we have the solution for u_d:

u_d = -K ( T_d x_d(t) + T_e e(t) )

K = \left[ \int_0^{T_p} \Gamma_0^T(T) c\, c^T \Gamma_0(T) dT + \lambda \int_0^{T_p} H^T(T) H(T) dT + \gamma \Gamma^T(T_p) \Gamma(T_p) + \gamma \Gamma_0^T(T_p) c\, c^T \Gamma_0(T_p) \right]^{-1}

T_d = \int_0^{T_p} \Gamma_0^T(T) c\, c^T A^{-1} (e^{AT} - I) dT + \gamma \Gamma^T(T_p) e^{A T_p} + \gamma \Gamma_0^T(T_p) c\, c^T A^{-1} (e^{A T_p} - I)

T_e = \int_0^{T_p} \Gamma_0^T(T) c\, dT + \gamma \Gamma_0^T(T_p) c    (2.20)

where \Gamma, \Gamma_0 and H are defined by equations (2.10), (2.13) and (2.7) respectively. Obviously, as \gamma → ∞, the control laws (2.20) and (2.15) become equivalent.

The main point of this section is that, by selecting the projected control derivative scenario to be piecewise constant, a predictive control law (SDGPC) can be designed based on continuous-time modelling without causing any difficulty for digital implementation. This is in sharp contrast to CGPC.

2.2 Stability Properties of SDGPC

Stability results for GPC with terminal state constraints (or weighting) are available both in discrete time [11, 12, 52] and in continuous time [16]. A natural question is whether SDGPC possesses such stability properties; this question is answered in this section. The basic idea is to show that SDGPC is equivalent to a stabilizing discrete-time receding horizon LQ control law. The important work of Bitmead et al. [4] is included in Appendix A for completeness. Those results are used to establish the stability property of SDGPC in section 2.2.1.
2.2.1 Stability of SDGPC with control execution time T_exe = T_m

The SDGPC stability problem is attacked by first applying a transformation to convert the SDGPC problem to a discrete-time receding horizon problem, then making use of the stability results summarized in Theorem A.10 and Corollary A.2. The transformation is based on the work of Levis et al. [43], in which the infinite-horizon problem was treated.

Recall the state-augmented system described by equation (2.2):

\dot{x}_f = A_f x_f + B_f u_d,    dim(x_f) = n_f = n + 1    (2.21)

Assuming that the execution sampling interval T_exe under SDGPC control is the same as the design sampling interval T_m, the discrete-time equivalent of the augmented system (2.21) is

x_f(i+1) = \Phi x_f(i) + \Gamma u_d(i)
y_f(i) = c_f^T x_f(i)    (2.22)

with

\Phi = e^{A_f T_m},    \Gamma = \int_0^{T_m} e^{A_f \tau} d\tau\, B_f    (2.23)

Recall the cost functional (2.4) with Q = c_f c_f^T:

J(t) = \int_0^{T_p} [ x_f^T(t+T) Q x_f(t+T) + u_d^T(t+T) R u_d(t+T) ] dT    (2.24)

subject to x_f(t + T_p) = 0. With the projected control scenario described by (2.6), as in Fig. 2.1, the cost (2.24) can be expressed as the sum of N_u integrals:

J(t) = \sum_{i=0}^{N_u - 1} \int_{i T_m}^{(i+1) T_m} [ x_f^T(t+T) Q x_f(t+T) + u_d^T(t+T) R u_d(t+T) ] dT    (2.25)

Define

x_f(i) = x_f(t + i T_m),    u_d(i) = u_d(t + i T_m),    i = 0, 1, ..., N_u - 1    (2.26)

The integrals in (2.25) can then be expressed as

\int_{i T_m}^{(i+1) T_m} [ x_f^T(t+T) Q x_f(t+T) + u_d^T(t+T) R u_d(t+T) ] dT
  = \int_0^{T_m} [ x_f^T(i+\tau) Q x_f(i+\tau) + u_d^T(i) R u_d(i) ] d\tau    (2.27)

The inter-sample behavior x_f(i+\tau) of system (2.22) is a function of x_f(i) and u_d(i) as follows:

x_f(i+\tau) = e^{A_f \tau} x_f(i) + \int_0^\tau e^{A_f (\tau - s)} B_f ds\, u_d(i)    (2.28)

Substituting equation (2.28) into equation (2.27), we have

\int_{i T_m}^{(i+1) T_m} [ x_f^T(t+T) Q x_f(t+T) + u_d^T(t+T) R u_d(t+T) ] dT
  = x_f^T(i) \bar{Q} x_f(i) + 2 x_f^T(i) M u_d(i) + u_d^T(i) \bar{R} u_d(i)    (2.29)

where

\bar{Q} = \int_0^{T_m} e^{A_f^T \tau} Q e^{A_f \tau} d\tau
M = \int_0^{T_m} e^{A_f^T \tau} Q \left[ \int_0^\tau e^{A_f s} ds\, B_f \right] d\tau
\bar{R} = T_m R + \int_0^{T_m} \left[ \int_0^\tau e^{A_f s} ds\, B_f \right]^T Q \left[ \int_0^\tau e^{A_f s} ds\, B_f \right] d\tau    (2.30)

Finally the continuous-time cost (2.24) has the form

J(t) = \sum_{i=0}^{N_u - 1} \left[ x_f^T(i) \bar{Q} x_f(i) + 2 x_f^T(i) M u_d(i) + u_d^T(i) \bar{R} u_d(i) \right]    (2.31)

Remarks:
1. These weighting matrices are time-invariant as long as T_m is constant. The symmetric and positive semi-definite (or positive definite) properties of Q, R are preserved in \bar{Q}, \bar{R}.
2. Even if the control weighting R = 0 in the original cost functional, there is always a non-zero weighting term \bar{R} in the equivalent discrete-time cost.

Note that the discrete-time cost (2.31) contains a cross-product term involving x_f(i) and u_d(i). However, by a transformation [43], the cross-product term can be removed to form a standard discrete-time cost. Define

\tilde{Q} = \bar{Q} - M \bar{R}^{-1} M^T,    \tilde{\Phi} = \Phi - \Gamma \bar{R}^{-1} M^T,    v(i) = \bar{R}^{-1} M^T x_f(i) + u_d(i)    (2.32)

By substituting equation (2.32) into the system equation (2.22) and the associated cost (2.31), we obtain

x_f(i+1) = \tilde{\Phi} x_f(i) + \Gamma v(i)
y_f(i) = c_f^T x_f(i),    dim(x_f) = n_f = n + 1    (2.33)

and the cost functional

J(t) = \sum_{i=0}^{N_u - 1} \left[ x_f^T(i) \tilde{Q} x_f(i) + v^T(i) \bar{R} v(i) \right],    x_f(N_u) = 0    (2.34)

For clarity, the above derivation is summarized in Table 2.1.

Table 2.1: Comparison of SDGPC and discrete-time receding horizon LQ control

                         SDGPC                                                     Discrete-time receding horizon LQ control
System equation          \dot{x}_f = A_f x_f + B_f u_d,  y_f = c_f^T x_f,          x_f(i+1) = \tilde{\Phi} x_f(i) + \Gamma v(i),  y_f(i) = c_f^T x_f(i),
                         dim(x_f) = n_f = n + 1                                    dim(x_f) = n_f = n + 1
Performance index        J(t) = \int_0^{T_p} [x_f^T Q x_f + u_d^T R u_d] dT        J(t) = \sum_{i=0}^{N_u-1} [x_f^T(i) \tilde{Q} x_f(i) + v^T(i) \bar{R} v(i)]
Final state constraint   x_f(T_p) = 0                                              x_f(N_u) = 0
Relationships            \Phi = e^{A_f T_m},  \Gamma = \int_0^{T_m} e^{A_f \tau} d\tau B_f,  \tilde{Q} = \bar{Q} - M \bar{R}^{-1} M^T,  \tilde{\Phi} = \Phi - \Gamma \bar{R}^{-1} M^T,  v(i) = \bar{R}^{-1} M^T x_f(i) + u_d(i)
R- M x (i) 1 fe^dt J Q J T f T 3 , + u (t) d Table 2.1 Comparison of S D G P C and discrete-time receding horizon L Q control We summarize the above results as follows: lemma 2.1 When the execution time interval T exe is equal to the design sampling interval T , the SDG m problem can be transformed to a standard discrete-time receding horizon LQ control proble summarized in Table 2.1. Chapter 2: Sampled-Data Generalized Predictive Control (SDGPC) F r o m lemma 2.1, it is clear that the stability problem of S D G P C boils down to finding the conditions i n terms of system (2.1) under which Theorem A.10 holds. We have following results to serve this purpose. Lemma 2.2 investigates the controllability and observability of the integrator augmented system (2.2). The proof of the controllability part can be found in [59]. The proof of the observability part is straightforward as given below. lemma 2.2 ( Power et al. [59] ) If the original system (2.1) with triple (A,B,c ) is T a. both controllable and observable b. there is no system zeros at the origin then the augmented system (2.2) with triple (^Af, Bf, c ^j is also controllable and observable. T Proof: The proof for controllability of (Af,Bf) observability matrix of (A, c ) T is 0 T Ac = [ c ; Ac ; T T / rp\ The observability matrix of (Af,c ) is 0 r T â€¢â€¢â€¢ A~c] , n 1 T nxn 1 with rank(OAc ) T / . Obviously, rank\0 TJ AjC = n - \ = 0â€žxij [OAC T / is observable. 2 â€” AfC T Ac Olxn T n + 1, and the pair ^Af,c ^j can be found i n Power and Porter [59]. The â€¢ Remark: Condition b is intuitively obvious. If violated, there is no way that the system output of (2.1) can be driven to a nonzero setpoint. Or i n terms of the augmented system (2.2), the state e(t) with nonzero initial value can not be driven to the origin. The following theorem is due to Kalman et al [38]. Theorem 2.1 ( Kalman et al [38] ) Let the continuous-time system (2.2) be controllable. 
Then the discrete-time system (2.22) is completely controllable if: I (\i{A] m - \j{A}) * npJ n = Â±1,Â±2,.... whenever R (Xi{A} e - Xj{A}) = 0. 19 m (2.40) Chapter 2: Sampled-Data Generalized Predictive Control (SDGPC) lemma 2.3 ( Anderson et al. [3, pp. 354] ) Assume ( $ , T ) given by equation then ($,r) (2.23) is controllable, given by (2.32) is also controllable. Proof: The proof is obvious. Recall $ = $ - TR~ M , 1 pair ($,r) lemma Q can not be changed by state feedback. the controllability o f a controllable T â€¢ 2.4 ( Levis et al. [43] ) > o. Proof: Since Q > 0, R > 0, so every integrand i n (2.25) (i+l)T I i= j m [xJ(t + T)Qx (t + T) + uT(t + T)Ru (t f = xj(i)Qx (i) + 2x (i)MvL (i) T f f A is nonnegative for any u (i). Let u<i(i) = â€”R~ M Xf(i), 1 + uJ(i)Ru (i) d We have from equation (2.41) T d = + T)]dT d + 2xJ(i)Mu (z') + d uJ(i)Ru (i) d = x (i) (Q - ir M )x (i) T l f = xJ(i)Qx (i) f for any Xf(i). S o Q > 0. (2.42) T f > 0 â€¢ Lemma 2.5 establishes the observability of the pair ( $ , <5) and the observability of the augmented system (2.2). This is a special case o f the results for periodic time-varying systems given by A l Rahmani and Franklin [2] in which multi-rate control strategy is used. A simpler proof based on the if-controllability and observability concept [6] [37] is given i n the following. lemma 2.5 Assume the controllability if the pair (^Af,cJ^j conditions of Theorem 2.1 hold, then ($, Q) is observable if and only of equation (2.2) is observable . Sufficiency: Assume ($,<Q) is observable but (Af,Q) is not, then there exists an eigenvalue A o f $ associated with a nonzero eigenvector z such that <3>z = Xz and Qe z Aft 20 = 0 for any r > 0 Chapter 2: Sampled-Data Generalized Predictive Control (SDGPC) [6]. It then follows from equation (2.30) that Qz = 0, M z = 0. F r o m equation (2.32), we have T Â®z = Xz, Qz = 0. So A is unobservable i n ( $ , Q ) l 3 7 ! - This contradicts the assumption. 
Necessity: Assume (A_f, Q) is observable but (\tilde{\Phi}, \tilde{Q}) is not. Let \lambda be an unobservable eigenvalue of \tilde{\Phi} and z \neq 0 an associated eigenvector. We have \tilde{\Phi} z = \lambda z, \tilde{Q} z = 0. Recalling equation (2.41), let x_f(i) = z, u_d(i) = -\bar{R}^{-1} M^T z; we have

I_i = \int_{i T_m}^{(i+1) T_m} [x_f^T(t+T) Q x_f(t+T) + u_d^T(t+T) R u_d(t+T)] dT
    = x_f^T(i) \bar{Q} x_f(i) + 2 x_f^T(i) M u_d(i) + u_d^T(i) \bar{R} u_d(i) = z^T \tilde{Q} z = 0    (2.43)

Since Q \geq 0, R > 0, equation (2.43) implies

\int_0^{T_m} x_f^T(\tau) Q x_f(\tau) d\tau = \int_0^{T_m} u_d^T(\tau) R u_d(\tau) d\tau = 0

Further, \int_0^{T_m} u_d^T(\tau) R u_d(\tau) d\tau = z^T M \bar{R}^{-1} (T_m R) \bar{R}^{-1} M^T z = 0. Since \bar{R}^{-1} (T_m R) \bar{R}^{-1} > 0, we have M^T z = 0. From equation (2.32), z^T \tilde{Q} z = z^T \bar{Q} z - z^T M \bar{R}^{-1} M^T z = z^T \bar{Q} z = 0 and \tilde{\Phi} z = \Phi z = \lambda z. From equation (2.30), z^T \bar{Q} z = 0 implies Q e^{A_f \tau} z = 0. But the existence of z \neq 0 such that \Phi z = \lambda z and Q e^{A_f \tau} z = 0 contradicts the observability assumption of (A_f, Q). ∎

Now, we are in a position to state the main stability property of SDGPC.

Theorem 2.2: For systems described by equation (2.1), if
a. the triple (A, B, c^T) is both controllable and observable,
b. there is no system zero at the origin, and
c. the control execution time T_exe = T_m is selected such that the condition in Theorem 2.1 is fulfilled,
then the resulting closed loop system under SDGPC is asymptotically stable for N_u \geq n + 1.

Proof: According to Lemma 2.1, SDGPC of system (2.1) is equivalent to receding horizon control of the discrete-time system (2.33). Thus we need only prove the stability of the receding horizon control problem for system (2.33) with performance index (2.34). Conditions a and b guarantee the controllability and observability of the integrator augmented system (2.2) according to Lemma 2.2. From condition c and Theorem 2.1, it is apparent that the discrete-time counterpart of (2.2) given by (2.22) is also controllable and observable.
Applying Lemmas 2.3-2.5, it follows that \tilde{Q} \geq 0 and \bar{R} > 0 in (2.34), (\tilde{\Phi}, \Gamma) is controllable and (\tilde{\Phi}, \tilde{Q}) is observable. Applying Theorem A.10 proves the theorem. ∎

2.2.2 Property of SDGPC with control execution time T_exe < T_m

In section 2.1, we mentioned that the execution sampling time interval T_exe, i.e. the time interval with which the plant is actually being sampled, can take any value on [0, T_m]. The case of T_exe < T_m will be analyzed in this section. This strategy is very similar in spirit to the well known GPC design practice of selecting a smaller control horizon than the prediction horizon, in which case the computational burden can be greatly reduced. Fig. 2.5 illustrates these two closely related strategies.

Figure 2.5: Comparison of SDGPC and GPC strategy

GPC [13] design is based on minimization of the following performance index

J(N) = \sum_{i=N_1}^{N_2} (y(t+i) - w(t+i))^2 + \lambda \sum_{i=0}^{N_u - 1} [\Delta u(t+i)]^2    (2.44)

N_1 can always be selected as zero. The prediction horizon N_2 corresponds to the prediction time T_p in SDGPC. The control weighting \lambda has the same meaning in SDGPC, but the control horizon N_u has a quite different interpretation, as is clearly illustrated in Fig. 2.5. In SDGPC, N_u is the number of controls that cover the whole prediction horizon T_p, while in GPC it is the number of controls that cover only a portion of the prediction horizon, after which the control is kept constant, or the increment of the controls is kept zero. And in SDGPC the control execution time T_exe is not necessarily equal to the design sampling time T_m. It is also possible to assume that the projected controls in SDGPC design have the same form as that of GPC, or any other form, say a polynomial up to a certain degree.
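As an aside, the transformation of Section 2.2.1 — the ZOH pair (2.23) and the equivalent weights \bar{Q}, M, \bar{R} of (2.30) — is straightforward to carry out numerically. The sketch below uses midpoint quadrature; the double-integrator matrices are hypothetical, chosen only because the integrals have easy closed forms to check against.

```python
import numpy as np

def expm(A, terms=40):
    # Small-matrix exponential: scaling-and-squaring with a Taylor series.
    s = max(0, int(np.ceil(np.log2(max(1.0, np.linalg.norm(A, np.inf))))))
    X = A / 2.0**s
    E, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ X / k
        E = E + term
    for _ in range(s):
        E = E @ E
    return E

def discrete_equivalents(Af, Bf, Q, R, Tm, npts=1000):
    # Phi, Gamma of (2.23) and Qbar, M, Rbar of (2.30) by midpoint quadrature.
    n, m = Af.shape[0], Bf.shape[1]
    dt = Tm / npts
    Ed, Eh = expm(Af * dt), expm(Af * dt / 2.0)
    E = np.eye(n)                  # e^{Af t} at the left edge of each slice
    G = np.zeros((n, m))           # int_0^t e^{Af s} ds Bf at the left edge
    Qb, M = np.zeros((n, n)), np.zeros((n, m))
    Rb = Tm * R                    # the Tm * R term of (2.30)
    for _ in range(npts):
        Em = E @ Eh                         # e^{Af t} at the slice midpoint
        Gm = G + Em @ Bf * (dt / 2.0)       # Gamma(t) at the midpoint (approx.)
        Qb += Em.T @ Q @ Em * dt
        M  += Em.T @ Q @ Gm * dt
        Rb += Gm.T @ Q @ Gm * dt
        G  += Em @ Bf * dt
        E   = E @ Ed
    return expm(Af * Tm), G, Qb, M, Rb

# Hypothetical double integrator, Q = I, R = 1, Tm = 1
Af = np.array([[0.0, 1.0], [0.0, 0.0]])
Bf = np.array([[0.0], [1.0]])
Phi, Gamma, Qb, M, Rb = discrete_equivalents(Af, Bf, np.eye(2), np.eye(1), 1.0)
# Closed forms here: Qbar = [[1, 1/2], [1/2, 4/3]], M = [[1/6], [5/8]], Rbar = [[83/60]]
```

As the remarks after (2.31) predict, \bar{R} comes out strictly positive even though a zero R would be admissible in the continuous cost.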
However, the advantage of choosing piecewise constant, equally spaced controls over the entire prediction horizon is that in doing so the SDGPC problem can be transformed into a discrete-time receding horizon LQ problem, for which powerful stability analysis methods in optimal control theory can be utilized, and improved numerical properties can be expected because a larger design sampling interval T_m is used.

Referring to Fig. 2.5, both SDGPC and GPC use N_u = 4. Both SDGPC and GPC update their control every T_exe seconds. Both SDGPC and GPC use the same prediction horizon: N_2 T_exe = T_p. The difference is that the design sampling interval in SDGPC is T_m = 4 T_exe, i.e. four times as large as the execution time. Both strategies have the effect of reducing the computational burden and damping the control action. However, SDGPC will have superior numerical properties when T_exe is small, because SDGPC is computed based on the larger design sampling interval T_m while GPC is still based on T_exe. Another advantage of SDGPC is that although neither of them has a stability claim when N_u < N_2 or T_exe < T_m, we know that the same SDGPC law does have guaranteed stability when T_exe = T_m. While it is very natural to choose T_exe < T_m in SDGPC design under the framework of the receding horizon strategy, it is almost unthinkable for any other controller synthesis method to design a stabilizing control law for one sampling interval but to apply it to the process with another sampling interval. It is well known that discrete-time design methods based on the z-transform will encounter numerical problems when the sampling interval is small [50]. In SDGPC, a larger design sampling interval T_m can be used to improve numerical properties while implementing the controller with a shorter sampling interval T_exe.

Although there are no general stability results for T_exe < T_m, extensive simulation examples will be presented in the next section to offer guidelines for selecting T_m and T_exe. As a by-product, those simulations will also shed some light on the selection of the sampling interval in digital control in general.

2.3 Interpretation and Stability Property of the Integral Control Law

The integral control law (2.17) was obtained by integrating both sides of (2.15) under the assumption that T_exe -> 0. However, (2.17) itself can be interpreted as the solution of a well formulated predictive control problem for system (2.1). Define the integral I_e = \int^t (y(\tau) - w) d\tau, where the lower limit of the integral is left blank to indicate that I_e can take any initial value, as a new state of system (2.1); the augmented system becomes:

\dot{x}_I = A_f x_I + B_f u + B_v w
y = [c^T  0] x_I    (2.45)
dim(x_I) = n + 1

where

x_I = [x ; I_e],  A_f = [A  0 ; c^T  0],  B_f = [B ; 0],  B_v = [0 ; -1]    (2.46)

and w is the constant setpoint. Notice that the augmented system matrices A_f, B_f are exactly the same as those in (2.2). The objective of the control is to let the output y(t) of system (2.1) track the constant setpoint w without steady state error. Thus at equilibrium, the following relations hold:

lim_{t->inf} y(t) = y_inf = w
lim_{t->inf} u(t) = u_inf
lim_{t->inf} I_e(t) = I_{e,inf}    (2.47)
lim_{t->inf} x(t) = x_inf

and

y_inf = w = c^T x_inf
0 = A x_inf + B u_inf    (2.48)

where u_inf, x_inf, I_{e,inf} are constants whose values cannot be determined a priori based on the nominal plant parameter matrices (A, B, c^T) and the setpoint, because of the unavoidable modelling errors.
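On the nominal model, the equilibrium (2.47)-(2.48) is just one linear solve; the practical point of the discussion above is that modelling errors make these computed values unreliable, which is what motivates the shifted and derivative formulations that follow. A sketch with a hypothetical second-order plant:

```python
import numpy as np

# Hypothetical plant (A, B, c) and setpoint w, used only to illustrate (2.48)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
c = np.array([[1.0, 0.0]])
w = 2.5

# (2.48): 0 = A x_inf + B u_inf and c^T x_inf = w, one linear system in (x_inf, u_inf).
# Solvable when the plant has no zero at the origin (nonzero dc gain), cf. Lemma 2.2.
M = np.block([[A, B], [c, np.zeros((1, 1))]])
sol = np.linalg.solve(M, np.array([0.0, 0.0, w]))
x_inf, u_inf = sol[:2], sol[2]

print(float(c @ x_inf))   # the output sits exactly at the setpoint w
```

With perturbed (A, B) the same solve returns a different (x_inf, u_inf), which is exactly why a formulation that does not contain these constants explicitly is preferred below.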
A sensible approach is thus to define the shifted input, state, integral and output respectively as

u'(t) = u(t) - u_inf,  x'(t) = x(t) - x_inf,  I_e'(t) = I_e(t) - I_{e,inf},  y'(t) = y(t) - w    (2.49)

Solving (2.49) for u, x, I_e, y, substituting the results into (2.45), and using (2.48), it is not difficult to find that the shifted variables satisfy the equations

\dot{x}_I' = A_f x_I' + B_f u'    (2.50)

The shifted equilibrium of (2.50) is at zero, as in (2.2), and a predictive control problem can be well formulated by minimizing a quadratic performance index (2.51) in the shifted variables, while at the end of the prediction horizon T_p the state of (2.50) is constrained to be zero, that is x_I'(t + T_p) = 0.

Although the above problem is well defined, it is still very inconvenient, to say the least, to obtain the control law due to the unknown equilibrium point u_inf, x_inf, I_{e,inf}. A more effective formulation should thus have a model which accommodates the fact that at equilibrium the input, output and state are all constant, but at the same time should not explicitly contain those unknown constants in the model. Taking the derivative of both sides of the first equation of (2.45) does just that. The resulting equivalent system model has the form

\dot{x}_f = A_f x_f + B_f u_d + B_v \dot{w}    (2.52)

where

x_d(t) = \dot{x}(t),  u_d(t) = \dot{u}(t),  e(t) = y(t) - w,  x_f = [x_d ; e]    (2.53)

For a constant setpoint, as we assumed, \dot{w} = 0 and (2.52) is exactly the same as (2.2). The only modification that needs to be made is that the observation matrix should be c_f^T in (2.3). The SDGPC problem for (2.2) and the associated performance index (2.4) can thus be interpreted as a sensible way to circumvent the unknown-equilibrium difficulty encountered in the control problem defined by (2.50) and (2.51). According to Theorem 2.2, the control law (2.15) stabilizes system (2.1).
Similar results can be stated about control law (2.17):

Theorem 2.3: For systems described by equation (2.1) and the integral control law (2.17), if
a. the triple (A, B, c^T) is both controllable and observable,
b. there is no system zero at the origin,
c. the control execution time T_exe is equal to the design sampling time T_m and is selected such that the condition in Theorem 2.1 is fulfilled, and
d. a zero-order hold is used when applying (2.17) to system (2.1),
then the resulting closed loop system under the integral control law (2.17) is asymptotically stable for N_u \geq n + 1.

Proof: When the integral control law (2.17) is applied to (2.1) with zero-order hold, the resulting closed loop system matrix will be the same as that obtained by applying (2.15) to (2.1). This can be seen readily by comparing equations (2.2) and (2.45), considering the fact that the disturbance term B_v w in (2.45) does not affect the stability of the closed loop system. Since system (2.1) is stable under the control of (2.15) according to Theorem 2.2, it will be stable as well under the control of (2.17). ∎

We mentioned in section 2.1 that the unspecified term \eta_0 in (2.17) can be used to place a zero to improve the transient response of the closed loop system. In the following we show that there is a sound mathematical basis for doing so. Consider system (2.52) and the cost (2.18); the T-ahead state predictor is described by (2.54), with u_d given by (2.7).
Without detailed derivation, the optimal control to system (2.52) can be obtained as K = x + K f Le(<) (2.55) Kpw{t) where Kxt = KH XJ Kp = KHp J K = T (T)c cjT{T)dT \J + T f \ Â° +7 r H (T)H(T)dT T T (T )r(T ) p J 0 Hx â€” â€” J T (T)c cJe f dT T t HB = + A T f - J T (T)c c^D (T)dT + T f p u N*xN, (2.56) r (T )e ' Â» T 7 A T p jT (T )D (T ) T p u p The counterpart of the optimal control sequences (2.55) with respect to system (2.45) is given by u* = K Xf x(t) (2.57) + l<0w(t) lJe(T)dT_ The first control which is the only one being applied to the plant is u*(l) = K x(t) x where K x +K j e e{r)dT + K (l)w(t) denotes the first n entries of the first row of the N u the last element o f the first row of K r 27 (2.58) 0 x ( n + 1) matrix K , Xf K e is Chapter 2: Sampled-Data Generalized Predictive Control (SDGP The effect o f Kp{l) i n (2.58) is to add a zero at K /Kp(l) e from reference w(s) to output y(s) [28 , p.559]. Since Kp(l) does not affect the eigenvalues of the closed loop system matrix, meaning that it can take any value i n addition to the one being computed by equation (2.56). This provides one extra degree o f freedom i n the design. Example 2.3.1: In this simulation, the plant with transfer function G{s) = 3 _ (2.59) is being controlled using control law (2.58) with the following design parameters N =l u T â€” 6s p T exe = 0.1s A = 10" (2.60) 4 7 = 100 The resulting feedback gains are: K x = -[0.2839 0.8628 0.8776] (2.61) K = -0.2993 e The eigenvalues o f the augmented closed-loop matrix Af + Bf[K x K ] are at: e â€¢1.0523 Â± 0 . 0 6 5 3 t -0.8697 -0.3096 28 (2.62) Chapter 2: Sampled-Data Generalized Predictive Control (SDGP F i g . 2.6 shows the control results for two different values of the feedforward term Kp, i.e. Kp = 0 and Kp = _ * 0 c 0 9 6 = 0.9668. The latter Kp places a zero which cancels the last pole - 0 . 3 0 9 6 of the closed-loop system matrix resulting a faster response which can be seen from F i g . 2.6. 
Figure 2.6: Zero placement in SDGPC

2.4 Simulations and Tuning Guidelines of SDGPC

Referring to Fig. 2.7, the design parameters of SDGPC are: prediction time T_p, design sampling interval T_m, execution sampling interval T_exe, and control weighting \lambda. The control order N_u is related to the prediction time and design sampling interval by N_u = T_p / T_m. If final-state weighting is used instead of the final-state constraint, as in performance index (2.18), there is an additional design parameter \gamma. This is the approach used in [17], where \gamma served as the tuning parameter to damp the control action. However, in SDGPC we would rather fix \gamma to a very large value, which corresponds to the state-constraint case, since this is crucial to guarantee stability. The task of reducing excessive control action can be accomplished by selecting T_exe < T_m, which is equivalent to putting infinite weighting on controls with sampling interval T_m and only allowing the control to vary every T_exe time units. This will be shown later by example.

Figure 2.7: The projected control derivative

Example 1: The aim of the first example is to show the effects of the SDGPC design parameters on the control performance, and to compare SDGPC with infinite horizon LQ control. The process being controlled is given by (2.63). It is assumed that this process has to be controlled with a relatively fast sampling interval T_exe = 0.2 s in order to have a fast disturbance rejection property.
It is also assumed that the states of the process are available for measurement and that the derivatives of the states are computed by the state-space equivalent of the system model (2.63)

\dot{x}_d(t) = \begin{bmatrix} -3 & -3 & -1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} x(t) + \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} u(t)    (2.64)

Fig. 2.8 shows the step response of the plant.

Figure 2.8: Step response of example 1

Simulation 1: SDGPC of plant (2.63) with the following design parameters

N_u = 6,  T_p = 1.2 s,  T_m = T_exe = 0.2 s,  \lambda = 10^{-5}, 0.01, 0.1, 0.5, 10^{10}    (2.65)

According to Theorem 2.2 in section 2.2, N_u should not be smaller than 4 to ensure stability. Also, from Fig. 2.8, the final prediction horizon T_p = 1.2 s is very short for this plant. Fig. 2.9 shows the control results.

Figure 2.9: Simulation 1 of example 1

It is obvious from Fig. 2.9 that this control law is unacceptable in practice because of the large magnitude of the control action. Also notice that increasing the control weighting is not effective in damping the control, since the prediction horizon T_p is too short. It can be seen from Fig. 2.9 that between 40 s and 50 s even a control weighting of 10^{10} cannot penalize the control action. This is because when T_p is small, the end-point state constraints dominate the control law calculation, whereas the performance index (2.4) has little effect on the controls. Since we cannot reduce the control order N_u because of the stability requirements, the only option is to increase the prediction horizon T_p. Simulation 2 shows the results.

Simulation 2: SDGPC of plant (2.63) with

N_u = 21,  T_p = 4.2 s,  T_m = T_exe = 0.2 s,  \lambda = 10^{-5}, 0.01, 0.1, 0.5, 2    (2.66)

The prediction horizon is selected to cover the significant part of the step response; see Fig. 2.8.
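The ZOH discretization underlying these simulations can be sketched directly from the state matrix of (2.64); `expm` below is a minimal Taylor-series matrix exponential, adequate for this small stable matrix, and the pole check uses the sampled-data pole mapping z = e^{sT}.

```python
import numpy as np

def expm(A, terms=30):
    # Plain Taylor series; fine for the well-scaled 3x3 matrix used here.
    E, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    return E

# State matrix and input vector of (2.64)
A = np.array([[-3.0, -3.0, -1.0],
              [ 1.0,  0.0,  0.0],
              [ 0.0,  1.0,  0.0]])
B = np.array([[1.0], [0.0], [0.0]])
Texe = 0.2

Phi = expm(A * Texe)
# Since A is nonsingular here, Gamma = A^{-1} (Phi - I) B
Gamma = np.linalg.solve(A, Phi - np.eye(3)) @ B

# (2.64) has a triple pole at s = -1, which ZOH maps to z = e^{-0.2}
print(np.linalg.eigvals(A))
print(np.linalg.eigvals(Phi))
```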
The design sampling interval and the execution sampling interval are the same as in simulation 1. Fig. 2.10 shows the results.

Figure 2.10: Simulation 2 of example 1 (\lambda = 10^{-5}, 0.01, 0.1, 0.5, 2)

The results shown in Fig. 2.10 are good, except that the control law involves the inversion of a 21 x 21 matrix, a significant increase in computational burden compared with simulation 1.

Simulation 3: Simulation 3 shows the SDGPC of plant (2.63) with

N_u = 6,  T_p = 4.2 s,  T_m = 0.7 s,  T_exe = 0.2 s,  \lambda = 10^{-5}, 0.01, 0.1, 0.5, 2    (2.67)

The prediction horizon is the same as in simulation 2 but the control order equals the one in simulation 1. That means the design sampling interval is T_m = T_p / N_u = 0.7 s, while the execution sampling interval remains 0.2 s as in simulations 1 and 2.

Figure 2.11: Simulation 3 of example 1

The good results in Fig. 2.11 suggest that selecting T_m > T_exe is a useful strategy to reduce the computational burden while at the same time damping the control action. More simulations will be presented to support this claim in example 2.

Simulation 4: It is interesting to compare SDGPC with an infinite horizon LQR with performance index

J = \sum_{i=0}^{\infty} [ (y(i) - w)^2 + \lambda \Delta u(i)^2 ]

in which the only tuning parameter is the control weighting \lambda, since an infinite horizon is used. Plant (2.63) is discretized with sampling interval 0.2 s as in the previous simulations. The control weighting varies as indicated in Fig. 2.12.

Figure 2.12: Simulation 4 of example 1: infinite horizon LQR

Comparing Fig. 2.12 with Fig. 2.10 and Fig. 2.11, it can be seen that the infinite horizon LQR has visible overshoot for small control weighting \lambda, whereas increasing \lambda slows down the response significantly.
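Simulation 4's controller can be reproduced in outline: discretize (2.64) with ZOH at 0.2 s and iterate the discrete-time Riccati recursion to convergence. The output map c = [0 0 1]^T and the weighting choice Q = c c^T, R = \lambda are assumptions made for this sketch, not values taken from the thesis.

```python
import numpy as np

def expm(A, terms=30):
    E, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    return E

# ZOH discretization of (2.64) at the execution interval 0.2 s
A = np.array([[-3.0, -3.0, -1.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
B = np.array([[1.0], [0.0], [0.0]])
T = 0.2
Phi = expm(A * T)
Gam = np.linalg.solve(A, Phi - np.eye(3)) @ B

# Assumed output map and weights (illustrative only)
c = np.array([[0.0, 0.0, 1.0]])
Q, R = c.T @ c, np.array([[0.1]])          # R plays the role of lambda

# Fixed-point iteration of the discrete algebraic Riccati equation
P = Q.copy()
for _ in range(2000):
    K = np.linalg.solve(R + Gam.T @ P @ Gam, Gam.T @ P @ Phi)
    P = Q + Phi.T @ P @ (Phi - Gam @ K)

# The converged infinite-horizon gain K stabilizes the sampled plant
rho = max(abs(np.linalg.eigvals(Phi - Gam @ K)))
print(rho)   # spectral radius < 1
```

Sweeping R over the \lambda values of (2.66) and stepping the closed loop would reproduce the overshoot/slow-down trade-off visible in Fig. 2.12.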
However, this is not to suggest that SDGPC has an inherent advantage over infinite horizon LQR; after all, they are the same, as analyzed in section 2.2.1. It might, however, be easier to tune SDGPC than LQR, since there are fewer design parameters in SDGPC (prediction horizon, control order, etc.) than in LQR (all entries of the weighting matrices).

Example 2: Two plants are simulated in this example. The first one, G_1(s) of (2.68), is a non-minimum phase, well damped, open loop stable system. The second one, G_2(s) of (2.69), is an open loop unstable system with lightly damped complex poles; its denominator is (s - 1)(s^2 + 0.4 s + 9).

Simulation 1: Plant (2.68) is controlled by SDGPC with the following design parameters:

N_u = 5,  T_p = 5 s,  T_m = 1 s,  \lambda = 0.1    (2.70)

In the first 15 seconds, the execution sampling interval T_exe is equal to T_m, and after that T_exe is reduced every 15 seconds as illustrated in Fig. 2.13 (T_exe = 1 s, 0.8 s, 0.5 s, 0.2 s, 0.01 s).

Figure 2.13: Simulation of plant (2.68)

This simulation shows that for a well damped stable system (low pass plant), when fast sampling is needed, SDGPC can offer both a low computational load and a high implementation sampling rate by selecting T_exe < T_m.

Simulation 2: Plant (2.69) is studied. First the following group of design parameters is used:

N_u = 5,  T_p = 5 s,  T_m = 1 s,  \lambda = 0.1    (2.71)

The execution time T_exe varies as illustrated in Fig. 2.14 (T_exe = 1 s, 0.5 s, 0.2 s).

Figure 2.14: Simulation of plant (2.69)

Fig. 2.14 shows that when the execution sampling interval T_exe is equal to the design sampling interval T_m, the performance is good. As T_exe decreases, the performance deteriorates, and the system becomes unstable when T_exe = 0.2 s.
Considering that the plant (2.69) has an unstable pole with a time constant of 1 second and a lightly damped mode with a resonance frequency of 0.4759 Hz, the design sampling interval T_m = 1 s is relatively large. Two things can be told from the results in Fig. 2.14 for unstable and/or lightly damped systems. First, when the sampling interval T_m is relatively large, selecting T_exe < T_m can cause performance deterioration or even instability. Second, even T_exe = T_m is not a good choice for such a system. Since changing the sampling interval can be viewed as a perturbation to the sampled-data system, the performance deterioration in Fig. 2.14 means that the closed-loop system under SDGPC with sampling interval T_m = 1 s is sensitive to plant uncertainties.

The next simulation suggests that for systems with unstable and/or lightly damped poles, the design sampling interval T_m should be at most one third of the unstable pole time constant, or the sampling rate should be 6 times the resonant frequency.

Simulation 3: Plant (2.69) is controlled with the following design parameters:

N_u = 7,  T_p = 2.1 s,  T_m = 0.3 s,  \lambda = 0.1    (2.72)

The sampling interval T_m is reduced to one third of the unstable pole time constant. Fig. 2.15 shows the results (T_exe = 0.3 s, 0.1 s, 0.01 s).

Figure 2.15: Simulation of plant (2.69)

It can be seen from Fig. 2.15 that when T_m is reduced to 0.3 s for this plant, good results are obtained regardless of the changes in the execution time.

The conclusion drawn from these two examples is that for stable, well damped systems, the design parameters can be selected primarily based on performance and computational load considerations, and the execution time can be selected flexibly.
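The sampling-interval rule of thumb applied in Simulation 3 amounts to two one-line bounds. Using the figures quoted above for plant (2.69) (unstable time constant 1 s, resonance 0.4759 Hz):

```python
tau = 1.0        # unstable pole time constant of (2.69), seconds
f_res = 0.4759   # resonance frequency of (2.69), Hz

Tm_pole = tau / 3.0           # Tm at most one third of the time constant
Tm_res = 1.0 / (6.0 * f_res)  # sampling rate at least 6x the resonance

print(round(Tm_pole, 3), round(Tm_res, 3))  # 0.333 0.35
# Both bounds admit the Tm = 0.3 s used in (2.72), while ruling out Tm = 1 s.
```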
For unstable and/or lightly damped systems, in addition to performance and computational load considerations, there is an upper bound on the design sampling interval T_m restricted by the unstable pole time constant or the resonant frequency. There is no explicit formula available for the bound yet, but a rule of thumb is to select T_m less than one third of the unstable time constant, or to make 1/T_m larger than 6 times the highest resonance frequency.

Example 3: This example shows the ability of SDGPC to control systems with nearly cancelled unstable zeros and poles. GPC will encounter difficulty controlling this kind of system [4, pp. 102]. The plant being controlled is

G(s) = (s - 0.9999) / [(s - 1)(2s + 1)^2]    (2.73)

Design parameters:

N_u = 6,  T_p = 2.4 s,  T_m = T_exe = 0.4 s,  \lambda = 0.5    (2.74)

Figure 2.16: Simulation of plant (2.73)

SDGPC can control this plant without any difficulty.

2.5 Conclusion

At the beginning of this chapter, we mentioned that there are some problems with the pure discrete-time approach when fast sampling is needed. A new predictive control algorithm, SDGPC, was thus formulated based on continuous-time modeling while assuming a piecewise constant control scenario. Better numerical properties can be expected since, when fast sampling is needed, the control law can actually be designed based on a larger sampling interval. SDGPC relates continuous-time control and discrete-time control in a natural way and thus enjoys the advantages of continuous-time modeling and the flexibility of digital implementation at the same time. Under mild conditions, SDGPC is shown to be equivalent to an infinite horizon LQ control law; thus it has the inherent robustness properties of an infinite horizon LQ regulator. However, the finite horizon formulation of SDGPC makes it convenient to handle various input and state constraints.
Moreover, like other predictive control methods, the tuning of SDGPC is easy and intuition based. The design of SDGPC in this chapter is a one-degree-of-freedom design. Various extensions will be made in the following chapters.

Chapter 3
SDGPC Design for Tracking Systems

In process control, the main objective is to regulate the process output at a constant level subject to various types of disturbances, which is known as the regulator problem. There is another type of control problem, known as the tracking or servo problem, where it is required that the output of a system follow a desired trajectory in some optimal sense. This servo problem does occur in the process industry, albeit not as often, such as the change of paper grade from a basis weight of 80 g/m^2 to 100 g/m^2 in paper production. Another important class of problem in process control which also fits into the framework of tracking control is the feedforward design problem when disturbance information is available.

The optimal tracking problem in the linear quadratic optimal control context was well formulated [41] [3], but either in a continuous-time or a discrete-time framework. It is thus worthwhile to formulate the tracking problem in the context of sampled-data generalized predictive control. The SDGPC algorithm we developed in chapter 2 is a special case of tracking system design in which the desired trajectory is a constant setpoint. A wider class of trajectories will be considered here.

Trajectory following problems were classified in three categories in Anderson and Moore [3]. We follow the same treatment in this chapter.
If the plant outputs are to follow a class of desired trajectories, for example all polynomials up to a certain order, the problem is referred to as a servo problem; if the desired trajectory is a particular prescribed function of time, the problem is called a tracking problem. When the outputs of the plant are to follow the response of another plant (or model), it is referred to as the model-following problem. The differences between them are, however, rather subtle in principle.

3.1 The Servo SDGPC Problem

Given the n-dimensional SISO linear system having state equations

\dot{x}(t) = A x(t) + B u(t)
y(t) = c^T x(t)        (3.75)

The augmented system is described by

\dot{x}_f = A_f x_f + B_f u_d
y_f = c_f^T x_f,   dim(x_f) = n_f = n + 1        (3.76)

where

x_f = [x_d; y],  x_d(t) = \dot{x}(t),  u_d(t) = \dot{u}(t),
A_f = [A 0; c^T 0],  B_f = [B; 0],  c_f^T = [0 ... 0 1]        (3.77)

Note that the system output y(t), rather than the tracking error e(t) = y(t) - r(t), is augmented as a system state, since the setpoint r(t) is no longer constrained to be constant. Suppose the reference signal is the output of a p-dimensional linear reference model

\dot{w} = F w
r(t) = R^T w        (3.78)

with the pair [F, R^T] completely observable. Assume that the future projected control derivative is piecewise constant in the time interval [t, t + T_p] as illustrated in Fig. 3.17. The SDGPC servo problem is to find the optimal control vector u_d = [u_d(1) u_d(2) ... u_d(N_u)]^T such that the following performance index is minimized:

J = \gamma [(y(t+T_p) - r(t+T_p))^2 + x_d^T(t+T_p) x_d(t+T_p)]
  + \int_0^{T_p} [(y(t+T) - r(t+T))^2 + \lambda u_d^2(t+T)] dT        (3.79)

To solve this optimization problem, we need the T-ahead state prediction for both the plant (3.76) and the reference (3.78). Recall that the projected control scenario in Fig. 3.17 can be written as

u_d(t) = H(t) u_d        (3.80)

where

H(t) = [H_1(t) H_2(t) ... H_i(t) ... H_{N_u}(t)]
u_d = [u_d(1) u_d(2) ... u_d(i) ... u_d(N_u)]^T        (3.81)

Figure 3.17: The projected control derivative

H_i(t) = 1 for (i-1)T_m <= t < i T_m and 0 otherwise,  i = 1, 2, ..., N_u,  T_m = T_p / N_u        (3.82)

We have

x_f(t+T) = e^{A_f T} x_f(t) + \int_0^T e^{A_f (T-\tau)} B_f u_d(\tau) d\tau
         = e^{A_f T} x_f(t) + \Gamma(A_f, B_f, T) u_d        (3.83)

where the i-th column of \Gamma(A_f, B_f, T) is

\Gamma_i(A_f, B_f, T) = \int_{(i-1)T_m}^{min(T, i T_m)} e^{A_f (T-\tau)} B_f d\tau for T > (i-1)T_m, and 0 otherwise        (3.84)

And the reference state prediction is simply

w(t+T) = e^{FT} w(t)        (3.85)

Consequently the output predictions are as follows:

y(t+T) = c_f^T x_f(t+T)        (3.86)
x_d(t+T) = C_d x_f(t+T),  C_d = [I_{n x n} 0]
r(t+T) = R^T w(t+T)        (3.87)

Substituting equations (3.86) and (3.87) into the performance index (3.79) gives

J = \int_0^{T_p} [c_f^T e^{A_f T} x_f(t) + c_f^T \Gamma(T) u_d - R^T e^{FT} w(t)]^2 dT
  + \lambda \int_0^{T_p} u_d^T H^T(T) H(T) u_d dT
  + \gamma [c_f^T e^{A_f T_p} x_f(t) + c_f^T \Gamma(T_p) u_d - R^T e^{F T_p} w(t)]^2
  + \gamma || C_d (e^{A_f T_p} x_f(t) + \Gamma(T_p) u_d) ||^2        (3.88)

(The arguments A_f, B_f are dropped from \Gamma(A_f, B_f, T) for clarity.) Taking the derivative of J with respect to u_d,

(1/2) \partial J / \partial u_d =
 [\int_0^{T_p} \Gamma^T(T) c_f c_f^T \Gamma(T) dT + \lambda \int_0^{T_p} H^T(T) H(T) dT + \gamma \Gamma^T(T_p) \Gamma(T_p)] u_d
 + \int_0^{T_p} \Gamma^T(T) c_f [c_f^T e^{A_f T} x_f(t) - R^T e^{FT} w(t)] dT
 + \gamma \Gamma^T(T_p) [e^{A_f T_p} x_f(t) - c_f R^T e^{F T_p} w(t)]        (3.89)

Setting (3.89) to zero, the optimal SDGPC servo control solution u_d^* is given by

u_d^* = K_w w(t) + K_{x_f} [x_d(t); y(t)]        (3.90)

where

K_w = K H_w,  K_{x_f} = K H_{x_f}
K = [\int_0^{T_p} \Gamma^T(T) c_f c_f^T \Gamma(T) dT + \lambda \int_0^{T_p} H^T(T) H(T) dT + \gamma \Gamma^T(T_p) \Gamma(T_p)]^{-1}   (N_u x N_u)
H_w = \int_0^{T_p} \Gamma^T(T) c_f R^T e^{FT} dT + \gamma \Gamma^T(T_p) c_f R^T e^{F T_p}   (N_u x p)
H_{x_f} = -[\int_0^{T_p} \Gamma^T(T) c_f c_f^T e^{A_f T} dT + \gamma \Gamma^T(T_p) e^{A_f T_p}]   (N_u x n_f)        (3.91)

As shown in Fig. 3.18, the servo SDGPC law (3.90) has a feedforward term in addition to the usual feedback term of the regulator case. This is what is known as a two-degree-of-freedom design. Also note that equation (3.91) clearly shows that the feedback gain K_{x_f} is independent of the trajectory reference model (3.78).

Figure 3.18: The servo SDGPC controller

So far we have assumed that the state w of the reference signal is available for measurement. In practice, however, often only an incoming signal is at hand. In this case a state estimator may be constructed, since the pair [F, R^T] is completely observable. The state estimator and the static feedforward gain K_w can then be combined to give a dynamic feedforward controller as illustrated in Fig. 3.19.

Figure 3.19: The dynamic feedforward controller implementation

When the reference model (3.78) is given by

F = [0 0 ... 0 0; 1 0 ... 0 0; 0 1 ... 0 0; ...; 0 0 ... 1 0]_{p x p},  R^T = [0 ... 0 1]        (3.92)

the reference trajectory will be the class of signals consisting of all polynomials of degree (p-1). The state w(t) will consist of the incoming signal r(t) and its derivatives up to order (p-1):

w(t) = [r^{(p-1)}(t) ... \dot{r}(t) r(t)]^T        (3.93)

Clearly, the SDGPC algorithm developed in chapter 2 is the special case F = 0, R^T = 1, that is, \dot{r}(t) = 0. As mentioned earlier, for general F and R^T a state estimator may be needed to reconstruct the state of the incoming signal. However, when F and R^T are given by (3.92) with p = 2, a simple structure can be obtained.
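The prediction matrix \Gamma(A_f, B_f, T) of (3.83)-(3.84) has no closed form for a general A_f, but its columns are just integrals of e^{A_f(T-\tau)} B_f over the sub-intervals of the piecewise constant control profile, so it can be built numerically. The following sketch (Python with NumPy/SciPy; it is not part of the original development, and the function name gamma_cols and the quadrature step count are illustrative choices) evaluates each column by the composite trapezoid rule:

```python
import numpy as np
from scipy.linalg import expm

def gamma_cols(Af, Bf, T, Nu, Tp, steps=200):
    """Numerically build Gamma(Af, Bf, T) of eq. (3.84).

    Column i holds the integral of e^{Af(T - tau)} Bf over the part of
    the i-th control interval [(i-1)Tm, i*Tm) that lies below the
    prediction time T; columns whose interval starts after T stay zero.
    """
    nf = Af.shape[0]
    Tm = Tp / Nu
    G = np.zeros((nf, Nu))
    for i in range(Nu):
        lo, hi = i * Tm, min((i + 1) * Tm, T)
        if hi <= lo:
            continue  # this control interval starts after T
        taus = np.linspace(lo, hi, steps)
        h = (hi - lo) / (steps - 1)
        for k, tau in enumerate(taus):
            w = 0.5 if k in (0, steps - 1) else 1.0  # trapezoid weights
            G[:, i] += w * h * (expm(Af * (T - tau)) @ Bf)[:, 0]
    return G
```

For a scalar system \dot{x} = a x + b u the first column reduces to b (e^{aT} - 1)/a, which gives a quick sanity check of the quadrature.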
According to the receding horizon strategy, only u_d^*(1), the first element of the optimal control u_d^*, is applied to the plant. For

F = [0 0; 1 0],  R^T = [0 1]        (3.94)

equation (3.90) gives

u_d^*(1) = [K_w(1,1) K_w(1,2)] [\dot{r}(t); r(t)] + K_{x_f}(1,:) [x_d(t); y(t)]        (3.95)

where K_{x_f}(1,:) denotes the first row of the matrix K_{x_f} in equation (3.91). A closer look at H_{x_f} and H_w in equation (3.91) reveals that the last columns of e^{FT} and e^{A_f T} are both [0 ... 0 1]^T, because the last columns of F and A_f are all zeros. Thus the last column of c_f R^T e^{FT} in H_w is equal to the last column of c_f c_f^T e^{A_f T} in H_{x_f}. As an immediate consequence, K_w(1,2) and K_{x_f}(1, n_f) in equation (3.95) have the same amplitude but opposite signs. Let K_{\dot{r}} = K_w(1,1), K_{ry} = K_w(1,2) = -K_{x_f}(1, n_f) and K_{x_d} = K_{x_f}(1, 1:n); then

u_d^*(1) = K_{\dot{r}} \dot{r}(t) + K_{ry} [r(t) - y(t)] + K_{x_d} x_d(t)        (3.96)

Here K_{x_f}(1, 1:n) denotes the first n elements of the first row of matrix K_{x_f}. For small execution sampling time, the optimal servo SDGPC law (3.96) can be written in another form by integrating both sides:

u^*(1) = K_{\dot{r}} r(t) + K_{ry} \int_0^t [r(\tau) - y(\tau)] d\tau + K_{x_d} x(t)        (3.97)

Compared with the optimal SDGPC solution (2.17) in chapter 2, the control law (3.97) has an additional feedforward term K_{\dot{r}} r(t). It should be noted that although the servo SDGPC solution (3.97) is derived for a ramp signal (\ddot{r}(t) = 0), it does not necessarily yield zero tracking error for a ramp, even asymptotically. The reason is that there is only one integrator in the controller (3.97), which can only track a constant reference with zero steady state error [73]. According to the internal model principle, a model of the exogenous signal must be included in the control law for robust zero-error tracking and disturbance rejection. Like most discrete-time predictive control algorithms, SDGPC has the ability to track a general class of reference trajectories with zero steady state error.
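The sign relation used in deriving (3.96), namely that K_w(1,2) and K_{x_f}(1, n_f) have equal amplitude and opposite signs, can be checked numerically. The following sketch (Python with NumPy/SciPy; not from the thesis, servo_gains is an illustrative helper) evaluates the integrals of (3.91) by trapezoid quadrature for a first-order plant with the p = 2 reference model (3.94); it uses the fact that the integral of H^T(T)H(T) over the horizon is exactly T_m I:

```python
import numpy as np
from scipy.linalg import expm

def servo_gains(A, B, c, F, R, Tp, Nu, lam, gam):
    """Servo SDGPC gains K_w = K H_w and K_xf = K H_xf of eq. (3.91),
    with all horizon integrals evaluated by composite trapezoid quadrature."""
    n = A.shape[0]
    nf = n + 1
    # Augmented matrices of eq. (3.77): x_f = [x_d; y], u_d = du/dt.
    Af = np.zeros((nf, nf)); Af[:n, :n] = A; Af[n, :n] = c
    Bf = np.zeros((nf, 1));  Bf[:n, 0] = B
    cf = np.zeros(nf);       cf[n] = 1.0
    Tm = Tp / Nu

    def Gamma(T):  # eq. (3.84), built column by column
        G = np.zeros((nf, Nu))
        for i in range(Nu):
            lo, hi = i * Tm, min((i + 1) * Tm, T)
            if hi <= lo:
                continue
            taus = np.linspace(lo, hi, 40)
            h = (hi - lo) / (len(taus) - 1)
            for k, tau in enumerate(taus):
                w = 0.5 if k in (0, len(taus) - 1) else 1.0
                G[:, i] += w * h * (expm(Af * (T - tau)) @ Bf)[:, 0]
        return G

    steps = 100
    Ts = np.linspace(0.0, Tp, steps)
    hT = Tp / (steps - 1)
    M = np.zeros((Nu, Nu)); Hw = np.zeros((Nu, F.shape[0])); Hx = np.zeros((Nu, nf))
    for k, T in enumerate(Ts):
        w = 0.5 if k in (0, steps - 1) else 1.0
        G = Gamma(T)
        gc = G.T @ cf                       # Gamma^T c_f
        M  += w * hT * np.outer(gc, gc)     # Gamma^T c_f c_f^T Gamma
        Hw += w * hT * np.outer(gc, R @ expm(F * T))
        Hx -= w * hT * np.outer(gc, cf @ expm(Af * T))
    Gp = Gamma(Tp)
    M += lam * Tm * np.eye(Nu)              # integral of H^T H is Tm * I
    M += gam * (Gp.T @ Gp)                  # terminal state weighting
    Hw += gam * np.outer(Gp.T @ cf, R @ expm(F * Tp))
    Hx -= gam * Gp.T @ expm(Af * Tp)
    K = np.linalg.inv(M)
    return K @ Hw, K @ Hx
```

Because the last entries of R^T e^{FT} and c_f^T e^{A_f T} are both exactly 1, the last columns of H_w and -H_{x_f} coincide term by term under the same quadrature, so the sign identity holds to machine precision.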
The spirit of this approach is state augmentation. That is, by including the equations satisfied by the external signal in the system model, a new system model in the error space [28], with new coordinates, can be obtained, and the SDGPC design procedure can then be applied. In the following, the servo SDGPC problem which incorporates double integrators in the control law is presented to show the procedure. Consider the plant described by

\dot{x}(t) = A x(t) + B u(t)
y(t) = c^T x(t),  dim(x) = n        (3.98)

The augmented system is described by

\dot{x}_z = A_z x_z + B_z u_z,  dim(x_z) = n_z = n + 2        (3.99)

where

u_z(t) = \ddot{u}(t),  x_z = [x_d; \dot{y}; y],  x_d(t) = \ddot{x}(t),
A_z = [A 0_{n x 1} 0_{n x 1}; c^T 0 0; 0_{1 x n} 1 0],  B_z = [B; 0; 0],  c_z^T = [0 ... 0 0 1]        (3.100)

Assume that the reference trajectory is given by

\dot{w} = F w,  r(t) = R^T w        (3.101)

where

F = [0 0 0; 1 0 0; 0 1 0],  R^T = [0 0 1],  w(t) = [\ddot{r}(t); \dot{r}(t); r(t)]        (3.102)

Similarly, assume that the future projected u_z(t) over [t, t+T_p] is piecewise constant as illustrated in Fig. 3.17. The objective is to find the optimal control vector u_z = [u_z(1) u_z(2) ... u_z(N_u)]^T such that the following performance index is minimized:

J = \gamma [(y(t+T_p) - r(t+T_p))^2 + (\dot{y}(t+T_p) - \dot{r}(t+T_p))^2 + x_d^T(t+T_p) x_d(t+T_p)]
  + \int_0^{T_p} [(y(t+T) - r(t+T))^2 + \lambda u_z^2(t+T)] dT        (3.103)

Again we need the T-ahead state predictions for both the plant (3.99) and the reference (3.101), which are given by

x_z(t+T) = e^{A_z T} x_z(t) + \int_0^T e^{A_z (T-\tau)} B_z u_z(\tau) d\tau = e^{A_z T} x_z(t) + \Gamma(A_z, B_z, T) u_z        (3.104)

and

w(t+T) = e^{FT} w(t)        (3.105)

where \Gamma(A_z, B_z, T) is given by equation (3.84) with A_f, B_f replaced by A_z, B_z. The optimal solution minimizing the performance index (3.103) is given below without detailed derivation:

u_z^* = K_w [\ddot{r}(t); \dot{r}(t); r(t)] + K_{x_z} [x_d(t); \dot{y}(t); y(t)]        (3.106)
where

K_w = K H_w,  K_{x_z} = K H_{x_z}
K = [\int_0^{T_p} \Gamma^T(T) c_z c_z^T \Gamma(T) dT + \lambda \int_0^{T_p} H^T(T) H(T) dT + \gamma \Gamma^T(T_p) \Gamma(T_p)]^{-1}   (N_u x N_u)
H_w = \int_0^{T_p} \Gamma^T(T) c_z R^T e^{FT} dT + \gamma \Gamma^T(T_p) P e^{F T_p},  P = [0_{n x 3}; 0 1 0; 0 0 1]   (N_u x 3)
H_{x_z} = -[\int_0^{T_p} \Gamma^T(T) c_z c_z^T e^{A_z T} dT + \gamma \Gamma^T(T_p) e^{A_z T_p}]   (N_u x n_z)        (3.107)

We are concerned only with the first rows of K_w and K_{x_z}, since only the first element of the optimal control vector u_z^* is applied to the plant. Consider the equalities K_w(1,2) = -K_{x_z}(1, n_z - 1) and K_w(1,3) = -K_{x_z}(1, n_z), and let K_{\ddot{r}} = K_w(1,1), K_{\dot{r}y} = K_w(1,2), K_{ry} = K_w(1,3), K_x = K_{x_z}(1, 1:n); then the first control in (3.106) has the simplified form

u_z^*(1) = K_{\ddot{r}} \ddot{r}(t) + K_{\dot{r}y} [\dot{r}(t) - \dot{y}(t)] + K_{ry} [r(t) - y(t)] + K_x x_d(t)        (3.108)

Or, in terms of the control input to the original plant (3.98): when the execution time goes to zero, integrating both sides of equation (3.108) twice gives

u^*(1) = K_{\ddot{r}} r(t) + K_{\dot{r}y} \int_0^t [r(\tau) - y(\tau)] d\tau + K_{ry} \int_0^t \int_0^v [r(\tau) - y(\tau)] d\tau dv + K_x x(t)        (3.109)

The following is an example of servo SDGPC design to track a ramp reference signal.

Example 3.1.1 The plant being controlled has the transfer function

G(s) = 1 / (s + 1)^3        (3.110)

Control law (3.109) is used with the design parameters

T_exe = 0.1 s,  T_p = 3 s,  N_u = 6,  \lambda = 0.001,  \gamma = 1000        (3.111)

Fig. 3.20 shows the reference and the output, the tracking error and the control input. Clearly, zero steady state error is obtained.

Figure 3.20: Servo SDGPC of plant (3.110), double integrator

For comparison, the servo SDGPC law with a single integrator (3.97) was designed with the same group of design parameters given in (3.111). The control results are illustrated in Fig. 3.21, in which a steady state error can be clearly observed.
Figure 3.21: Servo SDGPC of plant (3.110), single integrator

3.2 The Model Following SDGPC Problem

There is another kind of tracking system design called the model following problem. It is a mild generalization of the servo problem of section 3.1. In the framework of SDGPC, the problem is to find the control vector for the system (3.76) which minimizes the performance index (3.79), where r(t) is the response of a linear system model

\dot{z}_1(t) = A_1 z_1(t) + B_1 i(t)
r(t) = C_1^T z_1(t)        (3.112)

to a command input i(t), which in turn is the zero-input response of the system

\dot{z}_2(t) = A_2 z_2(t)
i(t) = C_2^T z_2(t)        (3.113)

as indicated in Fig. 3.22.

Figure 3.22: Desired trajectory for model-following problem

The two systems described by (3.112) and (3.113) can be combined into a single linear system with state space equation

\dot{z}(t) = A z(t)
r(t) = C^T z(t)        (3.114)

where

A = [A_1 B_1 C_2^T; 0 A_2],  z = [z_1; z_2],  C^T = [C_1^T 0]        (3.115)

With equation (3.114), the model following problem is identical to the servo problem of section 3.1. The following example shows the design procedure for the model following problem and the control results.

Example 3.2.1 The plant being controlled is an unstable third order process with transfer function

G(s) = 1 / (s^3 - 1)        (3.116)

The reference model has the transfer function

G_m(s) = 2 / (s^2 + 4.5 s + 2)        (3.117)

The step response r(t) of the reference model to an input \omega is given by

\dot{w}(t) = [-4.5 -2; 1 0] w(t) + [2; 0] \omega
r(t) = [0 1] w(t)        (3.118)

Or, in the form of equation (3.114), the above state space equation can be rewritten as

\dot{x}(t) = [0 0 0; 2 -4.5 -2; 0 1 0] x(t)
r(t) = [0 0 1] x(t)        (3.119)

with x(t) = [\omega \dot{r} r]^T. Now the servo control law (3.90) can be applied.
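As a quick check of the generator (3.119) just constructed: since the reference model (3.117) has unit DC gain, the zero-input response from the initial state [\omega, 0, 0]^T must settle at r = \omega. A short sketch (Python with NumPy/SciPy; not part of the thesis, names are illustrative):

```python
import numpy as np
from scipy.linalg import expm

# Combined step-response generator of eq. (3.119); state x = [omega, rdot, r].
A_ref = np.array([[0.0,  0.0,  0.0],
                  [2.0, -4.5, -2.0],
                  [0.0,  1.0,  0.0]])
C_ref = np.array([0.0, 0.0, 1.0])

def reference(t, omega=1.0):
    """Zero-input response r(t) of (3.119) to a step command of size omega."""
    x0 = np.array([omega, 0.0, 0.0])
    return float(C_ref @ expm(A_ref * t) @ x0)
```

The nonzero eigenvalues of A_ref are -0.5 and -4, the poles of (3.117), so the response settles in a few seconds.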
The design parameters are

T_p = 4 s,  N_u = 20,  \lambda = 0.001,  \gamma = 1000,  T_m = T_exe = 0.2 s        (3.120)

where T_p is the prediction horizon, N_u is the control order, and \lambda, \gamma are the control weighting and the final state weighting respectively. The execution sampling interval T_exe is set equal to the design sampling interval T_m since the plant being controlled is unstable. Fig. 3.23 shows the control results.

Figure 3.23: Model following control of unstable third-order system (setpoint, model response and plant output)

Example 3.2.2 The second example is a third order stable plant with transfer function

G(s) = 1 / (s + 1)^3        (3.121)

The reference model is a second order underdamped plant

G_m(s) = 1 / (4 s^2 + 2.4 s + 1)        (3.122)

The design parameters are again

T_p = 4 s,  N_u = 20,  \lambda = 0.001,  \gamma = 1000,  T_m = T_exe = 0.2 s        (3.123)

Fig. 3.24 shows the setpoint, reference model response, plant output and the control signal.

Figure 3.24: Model following control of stable third-order system

3.3 The Tracking SDGPC Problem

It is well known that when the future setpoint is available, the tracking performance can be improved radically. Similarly, future values of the disturbance can be utilized for better disturbance rejection. Practical examples for which future setpoints are available can be found in areas such as robot manipulator applications, high speed machining of complex shaped work pieces and vehicle lateral guidance control problems [77, 76, 57]. Predictive control is a natural candidate in these applications since it explicitly accommodates the future values of the setpoint in its formulation.
However, the setpoint preview capability of predictive control has not been fully exploited before, since in process control applications, where predictive control has blossomed, disturbance rejection is the major concern and the future disturbances are often unknown. The SDGPC tracking problem is formulated as follows. For the system (3.75) and its augmented plant (3.76), with the desired trajectory r(t) available over the range [t, t + T_p], the SDGPC tracking problem is to find the optimal control minimizing the performance index (3.79). Assuming that the projected u_d(t) in [t, t + T_p] is given by (3.80) as illustrated in Fig. 3.17, the performance index (3.79) can be written as

J = \int_0^{T_p} [c_f^T e^{A_f T} x_f(t) + c_f^T \Gamma(T) u_d - r(t+T)]^2 dT
  + \lambda \int_0^{T_p} u_d^T H^T(T) H(T) u_d dT
  + \gamma [c_f^T e^{A_f T_p} x_f(t) + c_f^T \Gamma(T_p) u_d - r(t+T_p)]^2
  + \gamma || C_d (e^{A_f T_p} x_f(t) + \Gamma(T_p) u_d) ||^2        (3.124)

Setting the derivative of J with respect to u_d to zero gives

[\int_0^{T_p} \Gamma^T(T) c_f c_f^T \Gamma(T) dT + \lambda \int_0^{T_p} H^T(T) H(T) dT + \gamma \Gamma^T(T_p) \Gamma(T_p)] u_d
 + \int_0^{T_p} \Gamma^T(T) c_f [c_f^T e^{A_f T} x_f(t) - r(t+T)] dT
 + \gamma \Gamma^T(T_p) [e^{A_f T_p} x_f(t) - c_f r(t+T_p)] = 0        (3.125)

The optimal SDGPC tracking control solution u_d^* is given by

u_d^* = f_r(t) + K_{x_f} [x_d(t); y(t)]        (3.126)

where

f_r(t) = K H_r,  K_{x_f} = K H_{x_f}
K = [\int_0^{T_p} \Gamma^T(T) c_f c_f^T \Gamma(T) dT + \lambda \int_0^{T_p} H^T(T) H(T) dT + \gamma \Gamma^T(T_p) \Gamma(T_p)]^{-1}
H_r = \int_0^{T_p} \Gamma^T(T) c_f r(t+T) dT + \gamma \Gamma^T(T_p) c_f r(t+T_p)
H_{x_f} = -[\int_0^{T_p} \Gamma^T(T) c_f c_f^T e^{A_f T} dT + \gamma \Gamma^T(T_p) e^{A_f T_p}]        (3.127)

With the receding horizon strategy, the feedforward term f_r(t) needs to be computed at every time instant. A simple numerical integration algorithm such as the Euler approximation can be used without compromising the performance of the controller.
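The preview integral in H_r of (3.127) is the only part of the gain computation that changes at every instant. A sketch of its evaluation by the composite trapezoid rule (Python/NumPy; not from the thesis, and feedforward_term, gamma_cf and r_future are illustrative names, with gamma_cf(T) standing for the vector \Gamma^T(T) c_f):

```python
import numpy as np

def feedforward_term(gamma_cf, r_future, Tp, gam, steps=100):
    """H_r of eq. (3.127): the integral of Gamma^T(T) c_f r(t+T) over [0, Tp]
    by the trapezoid rule, plus the terminal term gam * Gamma^T(Tp) c_f r(t+Tp).

    gamma_cf : callable T -> Gamma^T(T) c_f  (an N_u-vector)
    r_future : callable T -> r(t+T), the previewed setpoint
    """
    Ts = np.linspace(0.0, Tp, steps)
    h = Tp / (steps - 1)
    Hr = np.zeros_like(gamma_cf(0.0))
    for k, T in enumerate(Ts):
        w = 0.5 if k in (0, steps - 1) else 1.0
        Hr += w * h * gamma_cf(T) * r_future(T)
    return Hr + gam * gamma_cf(Tp) * r_future(Tp)
```

For a constant preview r(t+T) = r_0 this reduces to r_0 times a fixed vector, recovering the constant-setpoint (regulator) case of chapter 2; H_r is also linear in the previewed trajectory, as the quadrature makes explicit.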
As mentioned at the beginning of this section, use of the future setpoint information can improve the tracking performance, sometimes significantly. The following example compares the tracking performance of two controllers, one of which utilizes the future setpoint information while the other does not.

Example 3.3.1 The plant of Example 3.1.1 is used again, with transfer function

G(s) = 1 / (s + 1)^3        (3.128)

First, control law (3.97) is used with the design parameters

T_exe = 0.2 s,  T_p = 6 s,  N_u = 10,  \lambda = 0.001,  \gamma = 10000        (3.129)

Fig. 3.25 shows the setpoint and the output, the tracking error and the control input under control law (3.97).

Figure 3.25: Servo SDGPC of plant (3.128)

Next, the tracking control (3.126), which utilizes the future setpoint information, is designed with the same design parameters given in (3.129). Fig. 3.26 shows the results.

Figure 3.26: Tracking SDGPC of plant (3.128)

Comparing Fig. 3.25 and Fig. 3.26, the improvement in tracking error is obvious. Also notice that the control effort in Fig. 3.26 is smoother due to the preview ability. The improvements can be explained as follows. At the current time t, knowing the future setpoint information r(t+T) is equivalent to knowing the current setpoint and all its derivatives up to an arbitrarily large order. Indeed, any future setpoint value r(t+T) can be calculated using the Maclaurin series expansion r(t+T) = r(t) + \sum_{k=1}^{\infty} r^{(k)}(t) T^k / k!. In control law (3.97), it was assumed that the future setpoint is a ramp; in other words, only the first derivative of the setpoint is assumed to be available. It is thus natural to expect performance improvements for complex setpoints when the tracking control law (3.126) is used.
However, these two control laws will not differ from each other for a ramp signal. This can be confirmed by comparing Fig. 3.27 and Fig. 3.28, which show the results of plant (3.128) being controlled by (3.97) and (3.126) respectively. It can be seen that the tracking errors are the same for these two control laws at steady state, while the tracking errors under control (3.126) in the transition region around time 10 s are smaller, since the setpoint there is no longer a pure ramp signal.

Figure 3.28: Tracking SDGPC of plant (3.128)

It should be pointed out that the tracking performance of the servo control law (3.97) cannot be improved significantly by simply increasing the order of the reference model (3.94) without knowing the future setpoint information. For model order p > 2, the setpoint derivatives will be needed in the computation of control law (3.97). In that case a state observer for the reference model (3.94) can be constructed with the desired dynamics. However, no matter how fast the dynamics of the state observer are, there is still no anticipation ability in this approach, and thus the transient tracking error cannot be reduced efficiently. On the other hand, knowledge of the future setpoint can also be used in the design of control law (3.97), in which case the setpoint derivatives can be estimated from the Maclaurin series expansion r(t+T) = r(t) + \sum_{k=1}^{\infty} r^{(k)}(t) T^k / k! in a least squares sense.

3.4 The Feedforward Design of SDGPC

When the disturbances can be measured, the control performance can be improved radically by utilizing this information, compared with the use of feedback only. The reason is that there are inherent delays in all dynamic systems; it is always better to cancel the disturbance before it is observed at the output. Feedforward disturbance rejection also alleviates the burden of feedback disturbance rejection, so that the design of the feedback loop can concentrate on robustness issues.

Here is the formulation of the feedforward design of SDGPC. Given the n-dimensional SISO linear system having state equations

\dot{x}(t) = A x(t) + B u(t) + B_v v(t)
y(t) = c^T x(t)        (3.130)

where v(t) is a measurable disturbance satisfying the state space equation

\dot{\beta}(t) = W \beta(t)
v(t) = D^T \beta(t)        (3.131)

with dimension n_\beta. The integrator augmented system is described by

\dot{x}_f = A_f x_f + B_f u_d + B_{fv} v_d
y_f = c_f^T x_f,  dim(x_f) = n_f = n + 1        (3.132)

where

x_d(t) = \dot{x}(t),  u_d(t) = \dot{u}(t),  v_d(t) = \dot{v}(t),  x_f = [x_d; y],
A_f = [A 0; c^T 0],  B_f = [B; 0],  B_{fv} = [B_v; 0],  c_f^T = [0 ... 0 1]        (3.133)

Following the arguments of section 3.1, the T-ahead state predictor based on equation (3.132) can be written as

x_f(t+T) = e^{A_f T} x_f(t) + \int_0^T e^{A_f (T-\tau)} B_{fv} v_d(t+\tau) d\tau + \Gamma(A_f, B_f, T) u_d        (3.134)

where \Gamma(A_f, B_f, T) is given by equation (3.84) and u_d is piecewise constant as illustrated in Fig. 3.17. The future disturbance derivative v_d(t+\tau) can be obtained from the state equation (3.131) as

\beta(t+\tau) = e^{W \tau} \beta(t),  v_d(t+\tau) = D^T W e^{W \tau} \beta(t)        (3.135)

The T-ahead state predictor is obtained by substituting equation (3.135) into (3.134):

x_f(t+T) = e^{A_f T} x_f(t) + D_v(T) \beta(t) + \Gamma(A_f, B_f, T) u_d,
D_v(T) = \int_0^T e^{A_f (T-\tau)} B_{fv} D^T W e^{W \tau} d\tau        (3.136)

With the above state predictor, the feedforward SDGPC problem is to minimize the performance index (3.79) subject to the measurable disturbance v(t).
Based on (3.136) and (3.87), the performance index (3.79) can be written as

J = \int_0^{T_p} [c_f^T e^{A_f T} x_f(t) + c_f^T D_v(T) \beta(t) + c_f^T \Gamma(T) u_d - R^T e^{FT} w(t)]^2 dT
  + \lambda \int_0^{T_p} u_d^T H^T(T) H(T) u_d dT
  + \gamma [c_f^T e^{A_f T_p} x_f(t) + c_f^T D_v(T_p) \beta(t) + c_f^T \Gamma(T_p) u_d - R^T e^{F T_p} w(t)]^2
  + \gamma || C_d (e^{A_f T_p} x_f(t) + D_v(T_p) \beta(t) + \Gamma(T_p) u_d) ||^2        (3.137)

Setting the derivative of J with respect to u_d to zero gives

[\int_0^{T_p} \Gamma^T(T) c_f c_f^T \Gamma(T) dT + \lambda \int_0^{T_p} H^T(T) H(T) dT + \gamma \Gamma^T(T_p) \Gamma(T_p)] u_d
 + \int_0^{T_p} \Gamma^T(T) c_f [c_f^T e^{A_f T} x_f(t) + c_f^T D_v(T) \beta(t) - R^T e^{FT} w(t)] dT
 + \gamma \Gamma^T(T_p) [e^{A_f T_p} x_f(t) + D_v(T_p) \beta(t) - c_f R^T e^{F T_p} w(t)] = 0        (3.138)

The optimal SDGPC control solution is given by

u_d^* = K_w w(t) + K_{x_f} [x_d(t); y(t)] + K_\beta \beta(t)        (3.139)

where

K_w = K H_w,  K_{x_f} = K H_{x_f},  K_\beta = K H_\beta
K = [\int_0^{T_p} \Gamma^T(T) c_f c_f^T \Gamma(T) dT + \lambda \int_0^{T_p} H^T(T) H(T) dT + \gamma \Gamma^T(T_p) \Gamma(T_p)]^{-1}
H_w = \int_0^{T_p} \Gamma^T(T) c_f R^T e^{FT} dT + \gamma \Gamma^T(T_p) c_f R^T e^{F T_p}
H_{x_f} = -[\int_0^{T_p} \Gamma^T(T) c_f c_f^T e^{A_f T} dT + \gamma \Gamma^T(T_p) e^{A_f T_p}]
H_\beta = -[\int_0^{T_p} \Gamma^T(T) c_f c_f^T D_v(T) dT + \gamma \Gamma^T(T_p) D_v(T_p)]        (3.140)

Notice again that, like the setpoint feedforward gain K_w, the inclusion of the disturbance feedforward gain K_\beta does not affect the state feedback gain K_{x_f}. The control action given by (3.139) is the derivative of the control to the original plant. For a small sampling interval, the direct control action can be obtained by integrating both sides of (3.139), resulting in

u^*(1) = K_{x_f}(1, 1:n) x(t) + \int_0^t [K_w(1,:) w(\tau) + K_{x_f}(1, n+1) y(\tau)] d\tau + K_\beta(1,:) \beta(t)        (3.141)

Here K_{x_f}(1, 1:n) denotes the first n elements of the first row of matrix K_{x_f}. Normally, the states \beta(t) of the disturbance model (3.131) are not available.
A state observer with gain L can be constructed to give the state estimate \hat{\beta}(t) as follows:

\dot{\hat{\beta}}(t) = W \hat{\beta}(t) + L (v(t) - D^T \hat{\beta}(t))        (3.142)

The observer gain L is selected such that the matrix W - L D^T is stable and has the desired dynamics. The following examples show the effects of feedforward disturbance rejection.

Example 3.4.1 The plant being controlled has the transfer function given in (3.143). The disturbance is generated by white noise passed through an integrator. In the design of the control law, however, the disturbance is assumed to be constant, that is, \dot{v}(t) = 0. The design parameters are

T_exe = 0.2 s,  T_p = 6 s,  \lambda = 0.001,  \gamma = 1000        (3.144)

The control law takes the form of (3.141). Fig. 3.29 shows the control results. The effect of the disturbance feedforward can be seen clearly by comparing the first 50 seconds of the figure, where the feedforward gain K_\beta was set to zero, with the rest of the figure, where K_\beta is set to the value as computed.

Figure 3.29: Disturbance feedforward design (output and setpoint with and without feedforward, and the control signal)

This example shows how disturbance feedforward can improve the control performance dramatically even with a simple and inaccurate disturbance model. The next example shows that further performance improvements can be made with a more accurate disturbance model.

Example 3.4.2 The plant being controlled has the transfer function given in (3.145). The disturbance is sinusoidal with known frequency. The state space model of the disturbance is

\dot{\beta}(t) = [0 -4; 1 0] \beta(t),  v(t) = [0 1] \beta(t)        (3.146)

The observer gain L^T = [1 4.5] is selected such that the eigenvalues of the observer closed loop matrix W - L D^T are set to -2 and -2.5 respectively.
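The pole placement just stated can be verified directly: with L^T = [1 4.5], the matrix W - L D^T for the sinusoid model (3.146) has characteristic polynomial s^2 + 4.5 s + 5, whose roots are -2 and -2.5. A sketch (Python/NumPy; not from the thesis, observer_step is an illustrative Euler discretization of (3.142)):

```python
import numpy as np

# Disturbance model (3.146): a sinusoid of frequency 2 rad/s.
W = np.array([[0.0, -4.0],
              [1.0,  0.0]])
D = np.array([0.0, 1.0])   # v = D^T beta
L = np.array([1.0, 4.5])   # observer gain for eigenvalues -2 and -2.5

def observer_step(beta_hat, v, dt):
    """One Euler step of the observer (3.142)."""
    return beta_hat + dt * (W @ beta_hat + L * (v - D @ beta_hat))
```

In a receding horizon implementation this update would run once per execution interval, feeding the estimate into the feedforward term K_beta of (3.141).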
The control law design parameters are

T_exe = 0.2 s,  T_p = 3.5 s,  N_u = 10,  \lambda = 0.001,  \gamma = 1000        (3.147)

First, the correct disturbance model (3.146) was used to design the control law (3.141); then a constant disturbance model \dot{v}(t) = 0 was used. The corresponding control results can be seen in Fig. 3.30. As expected, the performance deteriorates in the time span between 50 and 100 seconds, where the wrong disturbance model is in use.

Figure 3.30: The effect of disturbance model (tracking error with the sinusoidal model versus the constant model)

3.5 Conclusion

In this chapter, various tracking problems were formulated in the framework of SDGPC. Tracking control problems generally have a two-degree-of-freedom design structure. However, the feedback part of the tracking problem is equivalent to the regulator problem, which has a one-degree-of-freedom design structure, provided that the design parameters are the same. When information about the future setpoint is available, the tracking performance can be radically improved. This is because knowing the future setpoint is equivalent to knowing the exact state of the state equation describing the setpoint. When the disturbance is available for measurement, the disturbance rejection performance can be improved dramatically by feedforward design.
Chapter 4
Control of Time-delay Systems and Laguerre Filter Based Adaptive SDGPC

Time-delay, or dead time, occurs frequently in industrial processes and is, in some cases, time-varying. Time-delay poses one of the major challenges for the design of robust process control systems. In discrete time, time-delay systems have the form

G(q) = q^{-k} B(q^{-1}) / A(q^{-1})        (4.148)

where k = integer(T_d / \Delta) is the delay in samples, and T_d, \Delta are the delay time and the sampling time respectively. For unknown time-delay, k can either be estimated directly [24] or via the extended B polynomial approach, in which the leading coefficients of the B polynomial in (4.148) up to order k would all be zero. In continuous time, a time-delay can be approximated by a low order rational approximation such as the Padé approximation [66]. The Laguerre filter was first introduced into systems theory by Wiener in the fifties [82] and has become popular recently [87, 80, 48, 49]. In particular, it can approximate time-delay systems efficiently. With the time-delay known or modeled properly, model based predictive control strategies provide an effective way of controlling such systems. In this chapter, we give two approaches to the control of time-delay systems. The direct approach of section 4.1 is based on a general state space model and assumes the time-delay is known. Emphasis will be placed on the Laguerre filter based adaptive control given in section 4.2.

4.1 The Direct Approach

The SDGPC approach to time-delay systems is formulated as follows.
The system model considered is: x(t) = Ax(t)+Bu(t y(t) = c x(t) T dim(x) A n d the augmented system is given by 67 = n - r) d (4.149) Chapter 4: Control of Time-delay Systems and Laguerre Filter Based Adaptive = A Xf(t) Xf(t) + BfU (t f d y (t) = Where x (t) = x(t), d T d = f - u(t - T ), e(t) = d 'A ~x ~ x f = rif â€” n + 1 u (t - r ) d 0' 'B' Â» f 0. w is the constant setpoint and r y(t)-w d = B _e (4.150) c x (t) f dim(xf) - Td) ,c = {0 T 0 1] (4.151) .0. is the time delay i n the system. d Consider the performance index J(t) = J [e (t + T) + Xu (t-rT)]dT-rjx (t 2 + T )x (t T d p + T) f p (4.152) Setpoint w Predicted output Projected control derivatives Past controls \ -Td Xd Tp-Td Time Tp Figure 4.31: Graphical illustration of SDGPC for systems with delay Assume the projected control signal to be piecewise constant as illustrated i n Fig.4.31. F o r simplicity, we assume the dme delay r d T. m That is N d = has an integral number N o f the design sampling interval d A t present time t = 0, for a prediction horizon T , define p â€¢* m u (T) = H(T)u u {T) = H {T)u d 0 < T < T d p - r d (4.153) d Td â€” T <T < 0 Ti d 68 Chapter 4: Control of Time-delay Systems and Laguerre Filter Based Adaptive where H(T) H (T) = [H (T) H (T) 1 = [H _ {T) Ti ( â€¢â€¢â€¢H {T)---H ST)} 2 i H _ (T) Nd) ( N ...2T (r) â€¢.â€¢*â€¢<_!)( T)] Nj+1) (0 (4.154) U d = h i ( i ) u (2) â€¢ â€¢ â€¢ u (i) â€¢ â€¢ â€¢ u ( 7 V ) ] d Ur =[u d Td (-N ) u d r d d {-N +l) (t - l ) T (0 i = T u (i) â€¢ â€¢ â€¢ u T d T d (-1)] T <T<iT m m otherwise 1,2,---N U - f 1 H {T) â€¢â€¢ â€¢u d 1 Hi(T) = d T P ~ T (4.155) T iT <T < m (i+l)T n (i) .0 otherwise -N + 1, i=-N , d 2,-1 d W i t h the system equation (4.150) and the control scenario (4.153), the T-ahead states prediction can be obtained: x (t + T) = e x (t) AT d d +r T d (T)u T d + r (T)u u (4.156) d where u rr,(r) = J u J e ^) e ^BH _ {r)dr... 
    Γ_{τ_d}(T) = [∫_0^T e^{A(T−τ)} B H_{(−N_d)}(τ − τ_d) dτ  ...  ∫_0^T e^{A(T−τ)} B H_{(−1)}(τ − τ_d) dτ]    (4.157)

    Γ(T) = [∫_0^{T−τ_d} e^{A(T−τ_d−τ)} B H_1(τ) dτ  ...  ∫_0^{T−τ_d} e^{A(T−τ_d−τ)} B H_{N_u}(τ) dτ],   T ≥ τ_d
    Γ(T) = [0 0 ... 0]_{n×N_u},   0 ≤ T < τ_d    (4.158)

The predicted error between the system output and the setpoint is:

    e(t + T) = e(t) + c^T ∫_0^T e^{Aτ} dτ x_d(t) + c^T Γ_{(τ_d)o}(T) u_{τ_d} + c^T Γ_{(u)o}(T) u_d    (4.159)

where the subscript o denotes the integrated version of each matrix,

    Γ_{(·)o}(T) = ∫_0^T Γ_{(·)}(τ) dτ    (4.160)

Substituting equations (4.156) and (4.159) into the cost function (4.152), the optimal control vector is obtained as

    u_d = −K (Γ_e x_d(t) + Γ_1 e(t) + Γ_{τ_d,e} u_{τ_d})    (4.161)

    K = [∫_0^{T_p} Γ_{(u)o}^T(T) c c^T Γ_{(u)o}(T) dT + λ T_m I_{N_u} + γ Γ^T(T_p) Γ(T_p) + γ Γ_{(u)o}^T(T_p) c c^T Γ_{(u)o}(T_p)]^{-1}

    Γ_e = ∫_0^{T_p} Γ_{(u)o}^T(T) c c^T A^{-1}(e^{AT} − I) dT + γ Γ^T(T_p) e^{AT_p} + γ Γ_{(u)o}^T(T_p) c c^T A^{-1}(e^{AT_p} − I)

    Γ_1 = ∫_0^{T_p} Γ_{(u)o}^T(T) dT c + γ Γ_{(u)o}^T(T_p) c

    Γ_{τ_d,e} = ∫_0^{T_p} Γ_{(u)o}^T(T) c c^T Γ_{(τ_d)o}(T) dT + γ Γ^T(T_p) Γ_{τ_d}(T_p) + γ Γ_{(u)o}^T(T_p) c c^T Γ_{(τ_d)o}(T_p)

Remark: Systems with delay can also be treated in an LQR setting. For example, the continuous-time system model (4.150) can first be transformed into a discrete one without delay by augmenting the past controls up to time τ_d as new system states, resulting in a system of order N_d + n_f as in equation (4.162), where N_d = τ_d/T_m, τ_d is the time-delay and T_m is the design sampling interval as illustrated in Fig. 4.31:

    x_f(k + 1) = e^{A_f T_m} x_f(k) + (∫_0^{T_m} e^{A_f η} B_f dη) u_d(k − N_d)
    u_{τ_d}(k + 1) = S u_{τ_d}(k) + [0 ... 0 1]^T u_d(k)    (4.162)
    y_f(k) = [c_f^T 0] [x_f(k); u_{τ_d}(k)]

where u_{τ_d}^T(k) = [u_d(k − N_d)  u_d(k − N_d + 1)  ...  u_d(k − 1)] and S is the shift matrix that moves each stored past control one position forward. Based on the system equation (4.162), either a finite horizon or an infinite horizon discrete-time LQR solution can be found. The problem with the infinite horizon case is the singularity of the transition matrix of system (4.162) [55]. For the finite horizon case, the computation time can increase significantly due to the increase of the system order.
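The delay augmentation in the remark above is straightforward to construct numerically. The following is a minimal sketch (not the thesis code), using a hypothetical first-order augmented model: it builds the delay-free discrete system of (4.162) by storing the N_d past controls as shift-register states, with the zero-order-hold discretization computed via the standard matrix-exponential identity.

```python
import numpy as np
from scipy.linalg import expm

def augment_delay(Af, Bf, Tm, Nd):
    """Delay-free discrete system in the spirit of (4.162): the Nd past
    controls u_d(k-Nd)..u_d(k-1) become extra shift-register states."""
    nf = Af.shape[0]
    # ZOH discretization over one design interval Tm:
    # Phi = e^{Af Tm}, Gam = int_0^Tm e^{Af eta} Bf deta (Van Loan trick).
    M = np.zeros((nf + 1, nf + 1))
    M[:nf, :nf] = Af * Tm
    M[:nf, nf:] = Bf * Tm
    E = expm(M)
    Phi, Gam = E[:nf, :nf], E[:nf, nf:]
    n = nf + Nd
    A = np.zeros((n, n))
    B = np.zeros((n, 1))
    A[:nf, :nf] = Phi
    A[:nf, nf] = Gam[:, 0]              # oldest stored control drives the plant
    A[nf:n - 1, nf + 1:] = np.eye(Nd - 1)  # shift the delay line forward
    B[n - 1, 0] = 1.0                   # new control enters the back of the line
    return A, B

# Hypothetical first-order augmented model [A 0; c^T 0], delay Nd = 3 samples.
Af = np.array([[-1.0, 0.0], [1.0, 0.0]])
Bf = np.array([[1.0], [0.0]])
A, B = augment_delay(Af, Bf, Tm=0.5, Nd=3)
print(A.shape)   # (5, 5): n_f + N_d states
```

As noted in the remark, the price of this route is the larger state dimension and the singular (nilpotent) shift block in the transition matrix.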
Also notice in Fig. 4.31 that although the projected controls in the time interval [T_p − τ_d, T_p] have no effect on the performance index (4.152), the reverse-time iteration of the associated Riccati difference equation must still run through the whole horizon, whereas in the SDGPC approach only the controls in the time interval [0, T_p − τ_d] are computed.

4.2 The Laguerre Filter Modelling Approach

The use of orthogonal series expansions, particularly the Laguerre expansion, has become increasingly popular in system identification and control, especially for the control of systems with long and time-varying time delay. Briefly speaking, given an open-loop stable system with transfer function G(s), its Laguerre filter expansion is

    G(s) = Σ_{i=1}^{∞} c_i (√(2p)/(s + p)) ((s − p)/(s + p))^{i−1}    (4.163)

The truncated expansion of equation (4.163) can be expressed in network form as depicted in Fig. 4.32.

[Figure 4.32: Laguerre Filter Network — a first-order low-pass filter √(2p)/(s+p) driven by u(t), followed by a chain of identical all-pass sections (s−p)/(s+p); the states x_1(t), x_2(t), ..., x_N(t) are weighted in a summing circuit to produce y(t).]

Here p is the time scale selected by the user. The Laguerre network consists of a first-order low-pass filter followed by a bank of identical all-pass filters. Its input u(t) is the process input. The Laguerre states defined in Fig. 4.32 as x_1(t), x_2(t), ..., x_N(t) are weighted to produce an output which matches the output of the process being modeled. The set of Laguerre functions is particularly appealing because it is simple to represent, is similar to transient signals, and closely resembles Padé approximants. The continuous Laguerre functions form a complete orthonormal set in L²[0, ∞), and thus allow us to represent any stable system with arbitrary precision [21]. Any stable process can be expanded exactly in an infinite Laguerre series regardless of the value of the time scale p.
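The efficiency of the Laguerre basis at representing dead time can be illustrated by direct projection. The sketch below is illustrative only: it assumes a hypothetical delayed lag with impulse response g(t) = e^{−(t−1)} for t ≥ 1, uses the continuous Laguerre functions l_i(t) = √(2p) e^{−pt} L_{i−1}(2pt) with time scale p = 1, and shows the L² truncation error shrinking as terms are added.

```python
import numpy as np
from scipy.special import eval_laguerre

p = 1.0
t = np.linspace(0.0, 30.0, 30001)
dt = t[1] - t[0]
# Hypothetical plant: first-order lag with one second of dead time.
g = np.where(t >= 1.0, np.exp(-(t - 1.0)), 0.0)

def laguerre_fn(i, t, p):
    """i-th continuous Laguerre function, orthonormal on [0, inf)."""
    return np.sqrt(2 * p) * np.exp(-p * t) * eval_laguerre(i - 1, 2 * p * t)

errors = []
for N in range(1, 9):
    basis = np.array([laguerre_fn(i, t, p) for i in range(1, N + 1)])
    c = basis @ g * dt          # spectrum c_i = <g, l_i> by quadrature
    g_hat = c @ basis           # truncated expansion of (4.163)
    errors.append(np.sqrt(np.sum((g - g_hat) ** 2) * dt))
print(np.round(errors, 3))      # error decreases as N grows
```

The same experiment with a poorly chosen p converges much more slowly, which motivates the time-scale selection procedures discussed next.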
However, when a truncated series with N terms is used, an immediate problem is the choice of the time scale needed to ensure a fast convergence rate. Parks [56] gave a procedure to determine the optimal value of the time scale based on two measurements of the impulse response of the process being approximated. For open-loop stable, non-oscillatory systems with possibly long and time-varying time delays, a real number p is sufficient to provide good convergence results. Fu and Dumont [29] gave an optimal time scale for the discrete-time Laguerre network and proposed an optimization algorithm to search for the optimal complex time scale p when the process being modeled is highly underdamped. Since the Laguerre network has a state space representation, any state space design method can be applied to the controller design. Dumont and Zervos [22] proposed a simple one-step-ahead predictive controller based on discrete-time Laguerre filters. This algorithm has been commercialized and is routinely used in pulp and paper mills [21]. Dumont, Fu and Lu [23] proposed a GPC algorithm based on a nonlinear Laguerre model in which the linear dynamic part is represented by a series of Laguerre filters and the nonlinear part is a memoryless nonlinear mapping. Simulations show that it has superior performance over the linear approach for systems with severe nonlinearity, such as the chip refiner motor load control problem and the pH control problem. Recently, Finn, Wahlberg and Ydstie [27] reformulated Dynamic Matrix Control (DMC) based on a discrete-time Laguerre expansion. In this section, we propose the SDGPC algorithm based on continuous-time Laguerre filter modelling. It is shown that the resulting control law is particularly suitable for adaptive control applications in that its computational burden is independent of the prediction horizon used in the SDGPC design.
Define x^T(t) = [x_1(t) x_2(t) ... x_N(t)]. From Fig. 4.32, we have the following state space expression of the Laguerre network:

    ẋ(t) = A x(t) + B u(t)
    y(t) = c^T x(t)    (4.164)

where

    A = [−p 0 ... 0; −2p −p ... 0; ...; −2p −2p ... −p]_{N×N}    (4.165)

    B = √(2p) [1 1 ... 1]^T_{N×1}    (4.166)

    c^T = [c_1 c_2 ... c_N]_{1×N}

With the time scale p properly selected, the Laguerre spectrum c^T = [c_1 c_2 ... c_N] can be estimated from the input and output of the process. In fact, notice that the equation y(t) = c^T x(t) in (4.164) is already in regression model form, so various Recursive Least Squares algorithms can be applied. We settled for the recent EFRA (Exponential Forgetting and Resetting Algorithm) [68], which will be described in chapter 6, section 6.1.

Now the SDGPC design procedure can be readily applied to the Laguerre state space equation (4.164). The intended application of Laguerre filter based SDGPC is adaptive control, where the Laguerre coefficients c^T = [c_1 c_2 ... c_N] are estimated on-line using EFRA [68]. We show in the following that the on-line computational burden is actually independent of the prediction horizon T_p. The main computation involved in the calculation of (2.16) is integration, for example the integral ∫_0^{T_p} Γ^T(T) c c^T Γ(T) dT in K^{-1}. By some simple manipulations, we have, element by element:

    [∫_0^{T_p} Γ^T(T) c c^T Γ(T) dT]_{ij} = c^T (∫_0^{T_p} γ_i(T) γ_j^T(T) dT) c    (4.167)

where γ_i, γ_j are the i-th and j-th columns of Γ(T) = [γ_1 γ_2 ... γ_{N_u}]. The integrals ∫_0^{T_p} γ_i(T) γ_j^T(T) dT can be computed off-line and stored. The other integrals in the calculation of (2.16) can be treated similarly, so that the on-line computational burden of the control law (2.15) is independent of the prediction horizon T_p.
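The construction (4.165)-(4.166) and the regression-form estimation can be sketched in a few lines. The RLS with a forgetting factor below is a plain stand-in for the EFRA algorithm of chapter 6 (the resetting terms are omitted), and the "true" spectrum and excitation signal are hypothetical.

```python
import numpy as np

def laguerre_ss(p, N):
    """State-space pair (A, B) of the continuous-time Laguerre network
    (4.165)-(4.166): -p on the diagonal, -2p below it, B = sqrt(2p)*ones."""
    A = -p * np.eye(N) + np.tril(-2.0 * p * np.ones((N, N)), k=-1)
    B = np.sqrt(2.0 * p) * np.ones((N, 1))
    return A, B

A, B = laguerre_ss(p=1.2, N=4)
print(float(np.round(B[0, 0], 4)))   # 1.5492, matching the model (4.170)

# y = c^T x is a linear regression in the spectrum c, so RLS applies.
def rls_step(theta, P, x, y, lam=0.98):
    x = x.reshape(-1, 1)
    k = P @ x / (lam + float(x.T @ P @ x))        # gain vector
    theta = theta + k * (y - float(x.T @ theta))  # coefficient update
    P = (P - k @ x.T @ P) / lam                   # covariance update
    return theta, P

rng = np.random.default_rng(0)
c_true = np.array([0.8, -0.3, 0.1, 0.05])         # hypothetical spectrum
theta, P = np.zeros((4, 1)), 100.0 * np.eye(4)
for _ in range(500):
    x = rng.standard_normal(4)                    # Laguerre states (excitation)
    y = float(c_true @ x) + 0.01 * rng.standard_normal()
    theta, P = rls_step(theta, P, x, y)
print(np.round(theta.ravel(), 2))                 # close to c_true
```

Note that B having identical entries √(2p) is what produces the constant 1.5492 column in the example model (4.170) below, since √(2 × 1.2) ≈ 1.5492.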
Although Laguerre filter modeling is known for its capability of dealing with dead time, a long time-delay requires a large number of Laguerre filters, causing problems such as a slow convergence rate in the Laguerre coefficient estimation and an increased control law computation burden. An effective way of dealing with this problem is to use the delayed control u(t − τ_d) as the input to the Laguerre network in Fig. 4.32, resulting in the system model:

    ẋ(t) = A x(t) + B u(t − τ_d)
    y(t) = c^T x(t)    (4.168)

where τ_d can be either estimated or simply a rough guess based on prior knowledge about the process. Only the remaining uncertainty in the time-delay then needs to be taken care of by the Laguerre network modeling. Based on the system model (4.168), the SDGPC for time-delay systems of section 4.1 can be applied.

Example 4: Adaptive Laguerre based SDGPC is shown in this example. The plant is given by

    G_4(s) = e^{−sT} / (0.5s + 1)    (4.169)

where the dead time T varies from 5.5 s to 4.5 s as illustrated in Fig. 4.33. Four Laguerre filters with pole p = 1.2 are used, and the delay is assumed to be 5 s in the design. Thus the system model can be described as

    ẋ(t) = [−1.2 0 0 0; −2.4 −1.2 0 0; −2.4 −2.4 −1.2 0; −2.4 −2.4 −2.4 −1.2] x(t) + [1.5492; 1.5492; 1.5492; 1.5492] u(t − 5)
    y(t) = c^T x(t)    (4.170)

where c^T is the Laguerre filter coefficient vector, which will be estimated using RLS. The algorithm developed in section 4.1 will be used. The design parameters are:

    τ_d = 5 s,  T_exe = 0.25 s,  T_p = 11 s,  N_u = 6,  λ = 1,  γ = 1000    (4.171)

where γ is the end-point state weighting. Fig. 4.33 shows the plant output and the reference, the control input to the plant and the control derivative, and the estimated Laguerre coefficients. Between 0 and 80 seconds the dead time is 5.5 seconds, between 80 and 160 seconds T changes to 5 seconds, and after 160 seconds it is reduced to 4.5 seconds.
As can be seen, the performance is very good in all three cases. By using the prior delay knowledge, fewer Laguerre filters can be used, resulting in quicker convergence of the coefficient estimation and thus fewer transients in the response.

[Figure 4.33: Simulation of plant (4.169) — output and setpoint, control input and control derivative, and estimated Laguerre coefficients over 200 s, with the dead time stepping from T = 5.5 s to T = 5 s to T = 4.5 s.]

The simulation was then repeated under the same conditions except that measurement noise was added at the output. Fig. 4.34 shows the system output, the setpoint and the estimated Laguerre parameters. The algorithm still performs well in the presence of noise.

[Figure 4.34: Simulation of plant (4.169) with measurement noise — output and setpoint, and estimated Laguerre coefficients with forgetting factor 0.98.]

4.3 Conclusion

Model based predictive control can deal with time-delay systems effectively, and SDGPC is no exception. The Laguerre network requires little a priori information about the system and is able to model varying dead times. The adaptive Laguerre filter based SDGPC developed in section 4.2 is a suitable candidate for most process control applications.

Chapter 5
Anti-windup Design of SDGPC by Optimizing Two Performance Indices

Actuation limits exist in almost all practical control systems. For example, a motor has limited speed, and a valve can only operate between fully open and fully closed. Other than these physical actuator limitations, there are constraints which are imposed by production requirements. On the other hand, most of the available controller design methods ignore the existence of the saturation nonlinearity.
When large disturbances occur, or the operating conditions change over a wide range, it may happen that the theoretical controller output goes beyond the actuator limits. As a result, the control system is effectively operating in open loop, since the input to the plant remains at its limit regardless of the controller output. When this happens, the controller states are wrongly updated. For example, if the controller has integral action, the error will continue to be integrated, resulting in a very large integral term. This effect is called controller windup. The windup difficulty was first experienced in the integrator part of PID controllers. It was recognized later that integrator windup is only a special case of a more general problem: in fact, any controller with relatively slow or unstable modes will experience windup problems if there are actuator constraints [20]. The consequence of windup is significant performance deterioration, large overshoots, or sometimes even instability [8]. Various compensation schemes have been proposed. The anti-reset windup method was proposed by Fertik and Ross [26]. Åström and Wittenmark [63, pp. 184-185] suggested resetting the integrator at saturation to prevent integrator windup for PID controllers. A general approach where an observer is introduced into the controller structure to prevent windup was proposed by Åström and Wittenmark [63, pp. 369-373]. The "conditioning technique" was proposed by Hanus [36], and it was found that the conditioned controller can be derived as a special case of the observer-based approach [8]. However, as pointed out in [39], many of these schemes are by and large intuition based; rigorous stability analyses are rare, and there is no general analysis and synthesis theory. Several attempts have been made to unify anti-windup schemes, notably by Walgama and Sternby [81] and Kothare et al. [39].
Since the SDGPC control law (2.17) has integral action, it will also encounter windup difficulties in the case of actuator saturation unless measures are taken. A systematic approach that takes the input constraints into account right from the start of the problem formulation is constrained model based predictive control [69]. This is in fact one of the main advantages of using a model based predictive control strategy. However, one difficulty which comes with this approach is the increased complexity. The more common approach, which most of the aforementioned schemes adopt, is the two-step paradigm: the linear controller is first designed ignoring the control input constraints, and then an anti-windup algorithm is added to compensate for the performance degradation due to the nonlinearities. This will also be the method of attack used here.

This chapter is organized as follows. In section 5.1, an SDGPC algorithm based on two performance indices is given, in which the "nominal" response of the closed-loop system and the integral compensation performance can be designed independently. This was motivated by the work reported in [1, 34, 35, 30], in which a control algorithm with the structure of state feedback plus integral action was developed where the state feedback and the integral feedback gain can be tuned separately. However, that work was in the framework of continuous-time infinite horizon LQ control. The SDGPC approach has a quite different formulation procedure and an interpretation which naturally leads to the novel anti-windup compensation scheme presented in section 5.2. The importance of this anti-windup scheme is that, under some reasonable assumptions, the overall "two-degree-of-freedom" SDGPC and the anti-windup scheme guarantee closed-loop stability. Section 5.3 concludes the chapter.
5.1 SDGPC Based on Two Performance Indices

The primary goal of introducing integral action in the design of SDGPC is to ensure zero static error for systems tracking a non-zero constant setpoint subject to constant disturbances and, to some degree, modeling error. If there were neither modeling error nor disturbances, there would be no need to introduce an additional integral state. However, models are inevitably wrong and there are always disturbances acting on the plant, so integral action is always needed. Nevertheless, the argument is that the controller can be designed for good servo step response performance assuming perfect modeling and no disturbance; integral action can then be added on to compensate for step and impulse disturbances and for modeling error. The key idea is to have the servo performance and the disturbance rejection performance tuned independently. In other words, changing the servo performance shall not affect the disturbance rejection performance, and vice versa.

5.1.1 Optimizing Servo Performance

Consider system (2.1):

    ẋ(t) = A x(t) + B u(t)
    y(t) = c^T x(t),   dim(x) = n    (5.172)

The system is required to track a constant setpoint r_0. If there is no system zero at the origin, a constant u_0 can be found for any r_0 to hold the system state at x_0 such that [41]:

    0 = A x_0 + B u_0
    y_0 = c^T x_0 = r_0    (5.173)

Define the shifted input, the shifted state and the shifted output respectively as

    ū(t) = u(t) − u_0
    x̄(t) = x(t) − x_0    (5.174)
    ȳ(t) = y(t) − r_0

Substituting (5.174) into (5.172) and using (5.173), the shifted variables satisfy the equations

    dx̄(t)/dt = A x̄(t) + B ū(t)
    ȳ(t) = c^T x̄(t),   dim(x̄) = n    (5.175)

A sensible control objective for the system (5.175) is to minimize the following performance index

    J(t) = γ x̄^T(t + T_p) x̄(t + T_p) + ∫_0^{T_p} {[ȳ(t + T)]² + λ[ū(t + T)]²} dT    (5.176)

where ū(τ), τ ∈ [t, t + T_p], is confined to be piecewise constant like u_d(t) in (3.76), which is depicted in Fig. 3.17. Following the same arguments as for the servo SDGPC in chapter 3, section 3.1, the optimal control vector ū* which minimizes (5.176) can be written as

    ū* = K_x x̄(t)    (5.177)

with the gain K_x given by (5.178), where Γ(T) is the n × N_u matrix given by (2.11) and H(T) is given by (2.7); N_u is the control order as defined before. Since only the first element of ū* is applied to the plant according to the receding horizon strategy, define F as the first row of the N_u × n feedback matrix K_x:

    F = K_x(1, 1:n)    (5.179)

The time-invariant control law for the shifted system is thus

    ū(t) = F x̄(t)    (5.180)

Control law (5.180) has guaranteed stability when applied to (5.175) with sampling interval T_m, according to Theorem 2.3. When the design sampling interval T_m is small, the continuous-time control law also stabilizes the system, as shown by simulation in chapter 2, section 2.4. This can be thought of as designing a continuous-time control law from a sampled-data design, the reverse of the conventional approach of designing a discrete-time control law from a continuous-time design method. For the sake of clarity, (5.180) will be applied to (5.175) with T_exe → 0.
This way, the spirit of the anti-windup scheme which will be given in the next section can be shown more clearly, and comparisons can be made with the familiar three-term PID control law. In terms of the original system variables, control law (5.180) can be written from (5.174) as:

    u(t) = F x(t) + u_0 − F x_0    (5.181)

It is easy to see from (5.181) that there is a constant term u_0' = u_0 − F x_0 in the control law. It can be shown [41, pp. 270-276] that

    u_0' = [c^T (−Ā)^{-1} B]^{-1} r_0    (5.182)

where Ā is the closed-loop system matrix

    Ā = A + B F    (5.183)

The term c^T(−Ā)^{-1}B in (5.182) is the static gain of the closed-loop system with transfer function

    H_c(s) = c^T (sI − Ā)^{-1} B    (5.184)

from the constant term u_0' to the output. That is,

    H_c(0) = c^T (−Ā)^{-1} B    (5.185)

The nonexistence of a system zero at the origin ensures that H_c(0) is nonsingular, and thus guarantees the existence of the constant term u_0' given by (5.182) which makes (5.173) hold at steady state. The transfer function from the constant setpoint r_0 to the output is

    y(s)/r_0(s) = H_c^{-1}(0) c^T (sI − Ā)^{-1} B = H_c^{-1}(0) H_c(s)    (5.186)

whose steady state gain is one. Thus the optimal control law without integral action for (5.172) is

    u(t) = F x(t) + H_c^{-1}(0) r_0    (5.187)

The control law (5.187) is not "robust" in the sense that when there are either disturbances or modeling errors, the output of system (5.172) at steady state will differ from the setpoint r_0. This is where the integral control given in the next subsection kicks in.

5.1.2 Optimizing Disturbance Rejection Performance

Define the integral state z(t):

    z(t) = ∫_0^t [r_0 − y(τ)] dτ    (5.188)

The integral-state augmented system of (5.172) is

    [ẋ(t); ż(t)] = [A 0; −c^T 0] [x(t); z(t)] + [B; 0] u(t) + [0; 1] r_0
    y(t) = [c^T 0] [x(t); z(t)]    (5.189)
Compared with (2.3), where the integrator was inserted before the plant, the integrator in (5.189) is added after the plant; see Fig. 5.35.

[Figure 5.35: Graphical illustration of (5.189) — the plant ẋ = Ax + Bu followed by an integrator acting on the tracking error r_0 − y.]

The last term [0 1]^T r_0 in (5.189) can be ignored, since the control law for (5.189) will have an integral term which will eliminate any constant disturbance like [0 1]^T r_0. Thus we will work on the disturbance rejection control law based on the following equations:

    [ẋ(t); ż(t)] = [A 0; −c^T 0] [x(t); z(t)] + [B; 0] u(t)
    y(t) = [c^T 0] [x(t); z(t)]    (5.190)

What we are looking for is a control law that consists of two terms, u_n(t) and v(t). The first term u_n(t) is the nominal control given by (5.187), which ensures nominal servo performance. The second term v(t) is responsible for disturbance rejection and modeling error compensation. That is,

    u(t) = u_n(t) + v(t) = F x(t) + H_c^{-1}(0) r_0 + v(t)    (5.191)

Substituting (5.191) into (5.190), we have

    [ẋ(t); ż(t)] = [A + BF 0; −c^T 0] [x(t); z(t)] + [B; 0] (H_c^{-1}(0) r_0 + v(t))    (5.192)

Again ignoring the constant disturbance term [B; 0] H_c^{-1}(0) r_0, we have from (5.192)

    [ẋ(t); ż(t)] = [A + BF 0; −c^T 0] [x(t); z(t)] + [B; 0] v(t)    (5.193)

The disturbance rejection state feedback control law for (5.193) will have the form

    v(t) = [L_1 L_2] [x(t); z(t)]    (5.194)

Without loss of generality, assume L_2 = ξ and L_1 = ξL, i.e.

    v(t) = ξ [L 1] [x(t); z(t)]    (5.195)

The closed-loop system of (5.193) under control (5.195) is

    [ẋ(t); ż(t)] = [A + BF + ξBL  ξB; −c^T  0] [x(t); z(t)]    (5.196)

The criteria for the disturbance rejection control (5.195) are that
1. It should not alter the nominal servo performance given by (5.187), i.e. the eigenvalues of the closed-loop system (5.196) should contain the eigenvalues of Ā given by (5.183).
2.
It should at the same time give good disturbance rejection properties.

Defining the new state ζ(t) as

    ζ(t) = L x(t) + z(t)    (5.197)

and substituting (5.197) into (5.193), the new state equation is

    [ẋ(t); ζ̇(t)] = [A + BF  0; L(A + BF) − c^T  0] [x(t); ζ(t)] + [B; LB] v(t)    (5.198)

Substituting the control law (5.195), where ξ and L are yet to be decided, into (5.198), the closed-loop system is

    [ẋ(t); ζ̇(t)] = [A + BF  ξB; L(A + BF) − c^T  ξLB] [x(t); ζ(t)]    (5.199)

To fulfill criterion 1, it is obvious that L(A + BF) − c^T has to be a zero vector, i.e. L should be given by

    L = c^T (A + BF)^{-1}    (5.200)

Since (A + BF) is the nominal closed-loop system matrix, which is stable, it is always possible to find L from (5.200). The closed-loop system equation is thus

    [ẋ(t); ζ̇(t)] = [A + BF  ξB; 0  ξLB] [x(t); ζ(t)]    (5.201)

The eigenvalues of (5.201) are the solutions of

    det [sI_n − (A + BF)  −ξB; 0  s − ξLB] = det[sI_n − (A + BF)] (s − ξLB) = 0    (5.202)

That is, the eigenvalues of (5.201) are those of (A + BF) and one at ξLB. Given a desired pole location p_z, the feedback gain ξ can be easily decided by

    ξ = (LB)^{-1} p_z    (5.203)

The gain ξ can also be obtained by applying the SDGPC method to the following equation:

    ζ̇(t) = LB v(t)    (5.204)

The performance index is

    J = γ_r ζ²(t + T_p) + ∫_0^{T_p} [ζ²(t + T) + λ_r v²(t + T)] dT    (5.205)

Applying the formula (5.178) with design parameters

    N_u = 1,  λ_r = 0,  γ_r = 0    (5.206)

to (5.205), we have

    ξ = −(3/(2T_p)) (LB)^{-1}    (5.207)

The closed-loop pole is at p_z = −3/(2T_p). The overall control law (5.191), which takes into account both servo performance and disturbance rejection performance, is thus

    u(t) = (F + ξL) x(t) + ξ z(t) + K_r r_0    (5.208)

where F is given by (5.179) and

    L = c^T (A + BF)^{-1}
    ξ = −(3/(2T_p)) (LB)^{-1}    (5.209)
    K_r = H_c^{-1}(0) = −[c^T (A + BF)^{-1} B]^{-1} = −(LB)^{-1}

We summarize the above results in the following theorem.
Theorem 5.1: For the system given by (5.172) and the augmented system (5.190):
1. The eigenvalues of the closed-loop system under control law (5.208) are those of A + BF together with p_z = ξLB, where F and ξ are obtained by minimizing the two performance indices (5.176) and (5.205) respectively using the SDGPC method, and L is given by (5.209).
2. The system transfer function from the reference r_0 to the system output y(t) under control law (5.208) is K_r c^T (sI_n − (A + BF))^{-1} B, the same as that of the nominal case (5.186).

Proof:
1. The closed-loop system equations of (5.190) under control law (5.208) are:

    [ẋ(t); ż(t)] = [A + BF + ξBL  ξB; −c^T  0] [x(t); z(t)] + [B K_r; 0] r_0
    y(t) = [c^T 0] [x(t); z(t)]    (5.210)

Apply the similarity transformation defined by the nonsingular matrix T = [I_n 0; −L 1] to the closed-loop system matrix

    A_close = [A + BF + ξBL  ξB; −c^T  0]    (5.211)

    Ā_close = T^{-1} A_close T = [A + BF + ξBL − ξBL  ξB; L(A + BF) − c^T + ξLBL − ξLBL  ξLB]
            = [A + BF  ξB; 0  ξLB]    (5.212)

The eigenvalues of Ā_close are given by

    det[sI_{n+1} − Ā_close] = det [sI_n − (A + BF)  −ξB; 0  s − ξLB]
                            = det[sI_n − (A + BF)] det(s − ξLB) = 0    (5.213)

So the eigenvalues of Ā_close are the eigenvalues of A + BF and ξLB. Since a similarity transformation does not change the eigenvalues of a matrix, it is clear that the eigenvalues of A_close are those of Ā_close. This completes the proof of statement 1.

2. Substituting control law (5.208) into the system equations (5.189), we have

    [ẋ(t); ż(t)] = [A + BF + ξBL  ξB; −c^T  0] [x(t); z(t)] + [B K_r; 1] r_0
    y(t) = [c^T 0] [x(t); z(t)]    (5.214)

Applying the transformation [I_n 0; −L 1] to (5.214), we have

    [ẋ(t); ζ̇(t)] = [A + BF  ξB; 0  ξLB] [x(t); ζ(t)] + [B K_r; LB K_r + 1] r_0
    y(t) = [c^T 0] [x(t); ζ(t)]    (5.215)
Using (5.209), we have LB K_r + 1 = 0, so the closed-loop transfer function from r_0 to y(t) is

    y(s)/r_0(s) = [c^T 0] [sI_n − (A + BF)  −ξB; 0  s − ξLB]^{-1} [B K_r; 0]
                = c^T [sI_n − (A + BF)]^{-1} B K_r    (5.216)

This completes the proof of statement 2. ∎

Remark: Statement 2 implies that the integral term in control law (5.208) adds a system pole as well as a system zero at ξLB. However, the integral term adds a blocking zero at the origin from the reference (or disturbance) to the tracking error e(t) = r_0 − y(t), thus ensuring zero steady state error for a constant reference signal and/or a constant disturbance.

Theorem 5.1 says that for a changing gain ξ_t, the eigenvalues of the closed-loop system are those of A + BF, which are constant, and a time-varying pole at p_{z_t} = ξ_t LB. For stable A + BF and a stable p_z given by ξ, as long as the sign of ξ_t stays the same as that of ξ, the eigenvalues of the closed-loop system matrix (5.211) are stable at any instant of time. However, this does not guarantee the stability of the time-varying system with A_close^{ξ_t} = [A + BF + ξ_t BL  Bξ_t; −c^T  0]. That is, for a time-varying system ẋ(t) = A(t)x(t), even if the eigenvalues of A(t) have negative real parts for each t, the system is not necessarily stable [9, pp. 411]. For this particular A_close^{ξ_t}, we have the following theorem.

Theorem 5.2: For the system (5.172) and the augmented system (5.190) under control law (5.208) with constant setpoint, with constant F which stabilizes A + BF, and with time-varying ξ_t, if

1. ξ_t is such that the time-varying pole p_{z_t} = ξ_t LB is always negative, i.e. p_{z_t} = ξ_t LB < 0, and
2. ξ_t(t) e^{∫_0^t LB ξ_t(τ) dτ} is bounded and lim_{t→∞} ξ_t(t) e^{∫_0^t LB ξ_t(τ) dτ} = constant, the limit being approached exponentially,

then the closed-loop system (5.214) is exponentially stable.

Proof:
For the closed-loop system

    [ẋ(t); ζ̇(t)] = [A + BF  B ξ_t(t); 0  LB ξ_t(t)] [x(t); ζ(t)]    (5.217)

we have

    dζ/ζ = LB ξ_t(t) dt,   ζ(t) = ζ(0) e^{∫_0^t LB ξ_t(τ) dτ}    (5.218)

Condition 1 ensures that lim_{t→∞} ∫_0^t LB ξ_t(τ) dτ is a negative constant (not necessarily −∞), which means

    lim_{t→∞} ζ(t) = constant    (5.219)

from (5.218). For the states x(t), we have

    ẋ(t) = (A + BF) x(t) + B ξ_t(t) ζ(t)
    x(t) = e^{(A+BF)t} x(0) + ∫_0^t e^{(A+BF)(t−τ)} B ξ_t(τ) ζ(τ) dτ    (5.220)

Because A + BF is stable and lim_{t→∞} ξ_t(t)ζ(t) = constant (or 0) exponentially according to condition 2, we have a stable system subject to an input which either vanishes exponentially or goes to a nonzero constant exponentially. Thus

    lim_{t→∞} x(t) = x_∞ (constant)    (5.221)

i.e. the system is exponentially stable at its equilibrium point. ∎

Theorem 5.2 will be used to construct an anti-windup scheme in the next subsection. In the process industries, most processes are well-damped open-loop stable systems. For such systems, "mean level" control [13], i.e. a control law which does not alter the process poles, may be desirable, since more aggressive control requires large input amplitudes. The following theorem gives the SDGPC version of mean level control.

Theorem 5.3: For system (5.172) with stable A and the augmented system (5.190), the mean level control law (5.223), i.e. the one with F = 0, is obtained with the design parameters

    N_u = 1,  T_p → ∞,  λ = 0,  γ = 0    (5.222)

The mean level control law is:

    u(t) = ξL x(t) + ξ z(t) + K_r r_0    (5.223)

where ξ is given by (5.207) and K_r = −[c^T A^{-1} B]^{-1}.

Proof: For stable A and N_u = 1, Γ(T) in (5.178) has the concise form:

    Γ(A, T) = (e^{AT} − I) A^{-1} B    (5.224)
the one with F = 0, is obtained with design parameters 1 u OO (5.222) A 0 7 0 The mean level control law is: u(t) = {(L)x(t) where Â£ is given by (5.207) and K r = - + (z(t) [c A~ B] T l + Kr r 0 (5.223) -l Proof: For stable A and iVâ€ž = 1, F ( T ) in (5.178) has a concise form: r(,4,T) = (e AT 90 - ^A^B (5.224) Chapter 5: Anti-windup Design of SDGPC by Optimizing Two Performance I Substitute (5.224) into (5.178), we have -l K = J [(trV^A^B) 2 - 2c e A- Bc A- B T AT 1 T = (h -I + hy 2 Since A is stable, + 1 {<FA~ B) l A dT (5.225) 1 l i m e * = 0, thus both the first and the second integrals AT i n (5.225) T ->oo r approach constant while 1$ approaches infinity as T â€”â€¢ 0 0 . Thus l i m K = 0. Similarly, it can be T â€”Â»oo shown that H approaches constant as T â€”+ 0 0 . So l i m F = l i m K * H = 0. â€¢ p p x Remark: v x Although the mean level control law (5.223) does not change the system dynamics of the setpoint response from the open loop one, its disturbance rejection response can be tuned by selecting different Â£. Larger Â£ results i n faster disturbance rejection rate. 5.2 A n t i - w i n d u p Scheme F i g . 5.36 shows the control law (5.208) applied to a system subject to actuator constraints where ' u(t), U i m n < u(t) < Umax u(t) = sat[u(t)] = < max: l â€”' (5.226) "â€¢max â€” u T c x=Ax+Bu Figure 5.36: Control subject to actuator constraints A s we mentioned earlier, u(t) consists of the nominal control term u (t) n rejection control term v(t). and the disturbance i.e. u(t) = u (t) + v(t) n u (t) n = Fx(t) + K ro r v(t) = Â£(Lx(t) 91 + z(t)) (5.227) Chapter 5: Anti-windup Design of SDGPC by Optimizing Two Performance I We only consider open loop stable plants here as it is meaningless to design anti-wind up scheme for strictly unstable systems. It is impossible to stabilize a strictly unstable system regardless of whatever control strategy is applied when the system disturbance causes the input to saturate [47]. 
We first consider the case when u(t) is over the upper limit u m a x . Case 1: Do not reset integrator when both u(t) and u (t) exceed control limit. That is: n u(t) > u { Z <( ) = Â«Â«Â»el(0 a n d u (t) > u m a x n m u(t)=u f J i l t e o theni f< J u(t) < u a n d u (t) < u m i n n or u (t) > u m i n n m a l (*) Z < r e s e , I m ( 5 - 2 2 8 ) u(t)=u min where u(r) is defined as the new controller output after integrator reset. Case 1 happens either when an unreachable setpoint is asked or the system suffers too large a disturbance. The system is effectively operated i n open loop. If the input is saturated long enough, the output o f the system w i l l be â€”c A~ Bun . T 1 m Case 2: Reset integrator state when u(t) exceeds control limit but u (t) does not. Stop reset n integrator when saturation 1. When saturation is over. That is: occurs: i u{t) > u m a x u(t) < u m i n and u and u m i n m i n < uâ€ž(t) < u < u (t) < u H m a x m a x *re..i(Â«) = *(*) + m ( , theni , thenl {u(t) = u (t) + ttLx(t) + n 2. " V t ) z (t)) re3Ct After saturation is over, the control should be updated according to: z(t) = z(t) (5.230) u(t) = u (t) + Â£(Lx(t) n + z(t)) We have the following theorem regarding scheme (5.229). Theorem 5.4 Scheme (5.229) guarantees u(t) = u(t) where u(t) = sat[u(t)] Proof: 92 is defined in (5.226). Chapter 5: Anti-windup Design of SDGPC by Optimizing Two Performance I Consider the case when u(t) > u m a x and u m i n < u (t) < u n m a x u(t) max u u(t) = u (t) + Â£ (^Lx(t) + z(t) + n i J = Umax + U (t) + Lx(t) + z(t) - z(t) = (5.231) U n Thus u(t) = sat[u(t)] , by direct calculation we have max = u(t). â€¢ . Remark: Theorem 5.4 essentially claims that the nonlinear control problem with saturated u(t) can be transformed into a linear control problem by scheme (5.229). Case 3: Reset the feedback gain f but keep the integrator state unchanged when u(t) exceeds control limit but u (t) does not. 
After the saturation is over, ξ(t) is changed according to equation (5.233). That is:

1. When saturation occurs:

    if u(t) > u_max and u_min ≤ u_n(t) ≤ u_max, then ξ_reset(t) = ξ + (u_max − u(t))/(Lx(t) + z(t));
    if u(t) < u_min and u_min ≤ u_n(t) ≤ u_max, then ξ_reset(t) = ξ + (u_min − u(t))/(Lx(t) + z(t));    (5.232)
    in either case ū(t) = u_n(t) + ξ_reset(t)(Lx(t) + z(t)).

2. After the saturation is over, the disturbance feedback gain ξ(t) should fulfill the equation

    ξ(t) e^{∫_{t_0}^{t} LBξ(τ)dτ} = ξ_reset(t_0) e^{ξLB(t − t_0)},    (5.233)

where t_0 denotes the time at which the saturation is just over, ξ is the originally designed feedback gain without considering saturation, and ξ_reset(t_0) is the reset ξ at t_0 obtained from (5.232).

Lemma 5.1: The integrator reset algorithm given by (5.229) and (5.230) is equivalent to the gain reset algorithm given by (5.232) and (5.233).

Proof: We only prove the case when the control exceeds the upper limit u_max; the case u < u_min can be proved in the same manner.

1. When the control is saturated, i.e. u(t) > u_max, Theorem 5.4 showed that (5.229) gives ū(t) = u_max, while in (5.232) we have

    ū(t) = u_n(t) + ξ_reset(t)(Lx(t) + z(t)) = u_n(t) + ξ(Lx(t) + z(t)) + u_max − u(t) = u_max.    (5.234)

So both (5.229) and (5.232) give the same control action when u(t) > u_max.

2. After the saturation is over — denote that moment t_0 — write ζ(t) = Lx(t) + z(t) and ζ_reset(t) = Lx(t) + z_reset(t). Then

    u_1(t_0) = u_n(t_0) + ξ(Lx(t_0) + z_reset(t_0)) = u_n(t_0) + ξζ_reset(t_0),
    u_2(t_0) = u_n(t_0) + ξ_reset(t_0)(Lx(t_0) + z(t_0)) = u_n(t_0) + ξ_reset(t_0)ζ(t_0),    (5.235)

where u_1(t_0), u_2(t_0) are the controls obtained from (5.229) and (5.232) respectively.
Since u_1(t_0) = u_2(t_0), as we just proved, we have

    ξζ_reset(t_0) = ξ_reset(t_0)ζ(t_0).    (5.236)

The control given by (5.229) after t_0 is

    u_1(t) = u_n(t) + ξζ_reset(t),    (5.237)

where, by (5.201), ζ_reset(t) satisfies

    ζ̇_reset(t) = LBξ ζ_reset(t),  so that  u_1(t) = u_n(t) + ξζ_reset(t_0) e^{ξLB(t − t_0)}.    (5.238)

Similarly, the control given by (5.232) after t_0 is

    u_2(t) = u_n(t) + ξ(t)ζ(t),    (5.239)

where at time t_0, ξ(t_0) = ξ_reset(t_0), and after t_0, ξ(t) is described by (5.233). Since ζ(t) = ζ(t_0) e^{∫_{t_0}^{t} LBξ(τ)dτ}, we have

    u_2(t) = u_n(t) + ξ(t)ζ(t_0) e^{∫_{t_0}^{t} LBξ(τ)dτ}.    (5.240)

For ξ(t) given by (5.233), it is easy to show that u_1(t) = u_2(t) using (5.236).

We conclude that the integral state reset algorithm described by (5.229) and (5.230) and the gain reset algorithm described by (5.232) and (5.233) give the same unsaturated control action during the saturation period and thereafter, and are thus equivalent. □

Remark: Lemma 5.1 claims that the nonlinear control problem with saturated u(t) is equivalent to a linear time-varying control problem with the time-varying feedback gain given by (5.232) and (5.233).

Lemma 5.2: For the gain reset scheme (5.232) and (5.233), the sign of ξ_reset(t) during saturation and the sign of ξ(t) after saturation are the same as the sign of the originally designed ξ.

Proof:

1. During saturation, from (5.232) we have

    ξ_reset(t) = ξ + (u_max − u(t))/(Lx(t) + z(t)) = (u_max − u_n(t))/(Lx(t) + z(t)).    (5.241)

Since u_max − u_n(t) > 0 by assumption, we have

    ξ_reset(t)(Lx(t) + z(t)) > 0.    (5.242)

We also have u(t) = u_n(t) + ξ(Lx(t) + z(t)) > u_max, thus

    ξ(Lx(t) + z(t)) > 0.    (5.243)

Equations (5.242) and (5.243) hold at the same time only if ξ and ξ_reset(t) have the same sign.
2. After the saturation, from (5.233) it is obvious that

    ξ(t) = ξ_reset(t_0) e^{ξLB(t − t_0)} / e^{∫_{t_0}^{t} LBξ(τ)dτ},    (5.244)

and since both e^{ξLB(t − t_0)} and e^{∫_{t_0}^{t} LBξ(τ)dτ} are greater than zero, ξ(t) has the same sign as ξ_reset(t_0).

We conclude that the signs of ξ, ξ_reset(t) and ξ(t) are all the same. □

Theorem 5.5: The gain reset anti-windup scheme described by (5.232) and (5.233), and the equivalent integrator reset anti-windup scheme described by (5.229) and (5.230), are exponentially stable and give zero steady-state error for a constant setpoint subject to a constant disturbance.

Proof: According to Lemma 5.2, the time-varying ξ(t) has the same sign as ξ. Since ξLB < 0 by design, we have p_1 = ξ(t)LB < 0 during all stages of the algorithm. Further, ξ(t) is bounded during saturation, and after saturation, according to (5.233),

    lim_{t→∞} ξ(t) e^{∫_{t_0}^{t} LBξ(τ)dτ} = lim_{t→∞} ξ_reset(t_0) e^{ξLB(t − t_0)} = 0.

Thus both condition 1 and condition 2 of Theorem 5.2 are met. The resulting time-varying linear control law is exponentially stable for the overall system, i.e. the linear system (5.172) with the saturation nonlinearity as depicted in Fig. 5.36. The equivalent integrator state reset algorithm (5.229) and (5.230) is therefore also exponentially stable. Further, since lim_{t→∞} x(t) is constant and lim_{t→∞} v(t) = lim_{t→∞} ξ(Lx(t) + z(t)) is constant, we have that lim_{t→∞} z(t) = lim_{t→∞} ∫_0^t (r_0 − y(τ))dτ is constant, which means lim_{t→∞} e(t) = lim_{t→∞} (r_0 − y(t)) = 0. □

Some examples are presented here to show the effectiveness of the TDF-SDGPC and the anti-windup algorithm.

Example 5.1: The first example is a simple integrator process,

    G(s) = 1/s.    (5.245)

The actuator limits are ±0.1.
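Before working through the design, the integrator-reset logic of Cases 1 and 2 ((5.228)-(5.229)) can be sketched in scalar form; the function and argument names below are illustrative, not from the text:

```python
def antiwindup_step(u_n, Lx, z, xi, u_min, u_max):
    """One evaluation of the integrator-reset rule (5.228)-(5.229).

    u_n: nominal control F x + K_r r0;  Lx: the term L x(t);
    z:   integrator state;              xi: disturbance gain.
    Returns (applied control, new integrator state)."""
    u = u_n + xi * (Lx + z)              # unconstrained control (5.227)
    if u_min <= u <= u_max:
        return u, z                      # no saturation, nothing to reset
    limit = u_max if u > u_max else u_min
    if u_min <= u_n <= u_max:
        # Case 2 (5.229): reset z so the linear law reproduces the limit
        z_reset = z + (limit - u) / xi
        return u_n + xi * (Lx + z_reset), z_reset
    # Case 1 (5.228): u_n itself saturates; keep z, output the limit
    return limit, z
```

Per Theorem 5.4, the Case 2 branch returns exactly the limit, so the saturated loop behaves as an unconstrained linear one.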
Designing the SDGPC algorithm with two performance indices, using the design parameters

    N_u = 1,  T_p = 1.5 sec,  λ = 0,  γ = 0,    (5.246)

gives the control law

    u(t) = u_n(t) + v(t),
    u_n(t) = −y(t) + r_0,    (5.247)
    v(t) = −y(t) + z(t).

Fig. 5.37 shows the control results of control law (5.247) without anti-windup compensation. The overshoot in Fig. 5.37(a) can be clearly observed.

Figure 5.37: Example 5.1: Control law (5.247) without anti-windup compensation. ((a) setpoint and output; (b) control, before and after saturation; (c) integral of tracking error; (d) nominal control u_n(t).)

Fig. 5.38 shows the control results of control law (5.247) with the anti-windup scheme. From Fig. 5.38(c), the integral state is reset just after the 20 second mark, at which time the nominal control comes within the control limits. The effectiveness of the algorithm is obvious.

Figure 5.38: Example 5.1: Control law (5.247) with anti-windup compensation. ((a) setpoint and output; (b) control; (c) integral of tracking error, showing the integrator reset; (d) nominal control.)

Example 5.2: The process for the second example is a third-order state-space model

    ẋ(t) = Ax(t) + Bu(t),
    y(t) = Cx(t),    (5.248)

and the actuator limits are ±3.5. For the nominal control law u_n(t), the design parameters are

    N_u = 10,  T_p = 2 sec,  λ = 0.0001,  γ = 1000.    (5.249)

The resulting nominal control law is given by

    u_n(t) = Fx(t) + K_r r_0,
    F = [−4.7599  −25.9369  −51.4516],  K_r = 52.4516.    (5.250)

For the control v(t), the prediction horizon T_p is selected such that the pole is placed at −2, and ξ = 104.9031.
The overall control law is

    u(t) = (F + ξL)x(t) + K_r r_0 + ξz(t)
         = [−6.7599  −41.4567  −109.3254]x(t) + 52.4516 r_0 + 104.9031 z(t).    (5.251)

The control results for control law (5.251) with the anti-windup algorithm are shown in Fig. 5.39. The effectiveness of the anti-windup algorithm can again be observed by comparing Fig. 5.39 with Fig. 5.40, in which no anti-windup algorithm is used. Notice the integral state reset in Fig. 5.39(c) after each setpoint change.

Figure 5.39: Example 5.2: Control law (5.251) with anti-windup compensation. (setpoint and output; control; integral of tracking error; nominal control.)

Figure 5.40: Example 5.2: Control law (5.251) without anti-windup compensation. (setpoint and output; control; integral of tracking error; nominal control.)

The integrator reset algorithm has a strong similarity to the conventional anti-windup PID controller, whose structure is depicted in Fig. 5.41.

Figure 5.41: Conventional anti-windup structure.

The performances of the conventional anti-windup algorithm and the proposed scheme are compared in the next example.

Example 5.3: The plant being controlled is a simple integrator with a random disturbance injected at the input. The saturation limits are ±0.1, and the control law is given in equation (5.252):

    G(s) = 1/s,  −0.1 ≤ u(t) ≤ 0.1,
    u(t) = −1.9998 y(t) + 0.9998 r_0 + 0.9998 ∫_0^t (r_0 − y(τ))dτ,
    u_n(t) = −0.9998 y(t) + 0.9998 r_0,    (5.252)
    v(t) = −y(t) + 0.9998 ∫_0^t (r_0 − y(τ))dτ.

The reset gain K depicted in Fig. 5.41 is selected to ensure good performance for the conventional algorithm. Fig. 5.42 shows the results.
The first three plots are for the conventional method with different reset gains (K = 0.02, K = 0.5 and K = 10). The observation is that although good performance can be obtained for a properly selected gain K, finding it is nonetheless a nontrivial trial-and-error procedure. Moreover, the effect of changing K on the performance is "non-monotonic", which makes the tuning more difficult. On the other hand, the proposed scheme gives excellent results in a straightforward manner, as can be seen from the fourth plot of Fig. 5.42.

Figure 5.42: Conventional anti-windup scheme (reset gains K = 0.02, 0.5 and 10) vs the proposed scheme.

We make some remarks to conclude this section.

1. The stability result stated in Theorem 5.5 also applies to unstable systems, provided that u_min < u_n(t) < u_max holds. However, for unstable plants it is always possible to find a system state x_0 such that u_n(t) = Fx_0 + K_r r_0 > u_max or u_n(t) = Fx_0 + K_r r_0 < u_min. This will drive the system into open-loop mode and thus destabilize it.

2. For open-loop stable systems, it is also desirable to have

    u_min ≤ u_n(t) = Fx(t) + K_r r_0 ≤ u_max    (5.253)

hold. This way, the anti-windup scheme leads to a graceful performance degradation compared with the unconstrained linear design. However, to fulfill (5.253), a smaller feedback gain F is required. The extreme case is the "mean-level" control given by Theorem 5.3, where F = 0, i.e. no control effort is made to make the servo response faster. This means that when input limits exist, a trade-off must be made to balance servo performance and disturbance rejection performance even for a two-degree-of-freedom design.

3.
Although the integral state reset scheme (5.229), (5.230) is equivalent to the gain reset scheme (5.232), (5.233), the integrator reset scheme is clearly much easier to implement. The gain reset scheme remains important, however, since it clearly shows that the original time-invariant nonlinear problem is equivalent to a time-varying linear problem, which possesses nice stability properties under reasonable assumptions.

4. Integral reset can also be used for bumpless transfer in process control. For a manual control signal u_man(t), the integral z(t) in control law (5.208) can be set as

    z(t) = [u_man(t) − (F + ξL)x(t) − K_r r_0] / ξ.    (5.254)

This way, the control law (5.208) tracks the operator action in manual mode and, whenever auto mode is engaged, takes over the control task from the operator based on the last operator-entered value, thus realizing bumpless transfer.

5.3 Conclusion

In this chapter, we developed a two-degree-of-freedom SDGPC algorithm based on two performance indices. Based on this TDF-SDGPC algorithm, an anti-windup scheme was proposed in section 5.2. It was shown that the linear control law plus the saturation nonlinearity is effectively equivalent to an unconstrained linear time-varying control law, which leads to graceful performance degradation compared with the original linear unconstrained design while guaranteeing stability. It also clearly poses a trade-off design problem between servo performance and disturbance rejection performance. The attractive features of this anti-windup scheme are its elegant simplicity, its effectiveness and, most importantly, its guaranteed stability property.

Chapter 6
Continuous-time System Identification Based on Sampled Data

The system model is the centerpiece of all Model Based Predictive Control algorithms.
In the case of the various SDGPC algorithms developed in the previous chapters, the system models being used are continuous-time state-space (or high-order differential equation) models. Continuous-time Laguerre filter identification and the associated Laguerre-filter-based adaptive SDGPC problem were treated in chapter 4. The parameter estimation problem for general (stable or unstable) continuous-time differential equations is discussed in this chapter.

There is no doubt that discrete-time models have received more attention than their continuous-time counterparts in the development of both identification theory and techniques. The theoretical results and algorithms available for discrete-time model parameter estimation are overwhelming [65, 25, 45, 44, 72]. The continuous-time model identification problem using digital computers, on the other hand, has yet to reach the same level, although the relevance and importance of continuous-time system identification have been increasingly recognized in recent years. An early survey solely devoted to this subject can be found in P. C. Young [85]. A comprehensive review of recent developments in the identification of continuous-time systems was given by Unbehauen and Rao [79], and a book by the same authors [78] attempts to present a simple and unifying account of a broad class of identification techniques for continuous-time models.

Least squares (LS) is a basic technique for parameter estimation. The least squares principle, formulated by Gauss at the end of the eighteenth century, says that the unknown parameters of a mathematical model should be chosen in such a way that the sum of the squares of the differences between the actually observed and the computed values, multiplied by numbers that measure the degree of precision, is a minimum [64]. The method is particularly simple if the model is linear in the parameters.
That is, a model of the form

    y(t) = φ_1(t)θ_1 + φ_2(t)θ_2 + ⋯ + φ_n(t)θ_n = φ^T(t)θ,    (6.255)

where y is the observed variable, θ_1, θ_2, ⋯, θ_n are unknown parameters, and φ_1, φ_2, ⋯, φ_n are known functions that may depend on other known variables. The model is indexed by t, which often denotes time. The variables φ_i are called the regression variables or regressors, and the model described by (6.255) is called a regression model. The vectors φ^T(t) and θ are defined as

    φ^T(t) = [φ_1(t)  φ_2(t)  ⋯  φ_n(t)],
    θ = [θ_1  θ_2  ⋯  θ_n]^T.    (6.256)

With pairs of observations and regressors {(y(i), φ(i)), i = 1, 2, ⋯, t}, the parameter estimation problem is to determine the parameters in such a way that the outputs computed from the model (6.255) agree as closely as possible with the measured variables y(i). The solution has the analytical form

    θ̂(t) = (Φ^T(t)Φ(t))^{-1} Φ^T(t) Y(t),    (6.257)

where

    Y(t) = [y(1)  y(2)  ⋯  y(t)]^T,   Φ(t) = [φ^T(1); φ^T(2); ⋯; φ^T(t)],    (6.258)

and the symbol "^" denotes estimates throughout the thesis. In view of real-time application, the computation based on (6.257) can also be done recursively, resulting in the so-called Recursive Least Squares (RLS) algorithm.

The regression model (6.255) is an algebraic (non-dynamic) equation which has two important properties: it is linear in the parameters θ, and it contains only realizable functions φ(t) of the data. Usually there are two phases in applying the above least squares method to the parameter estimation of dynamic models. The primary phase involves converting the dynamic equation into an algebraic equation, and the secondary phase involves solving these algebraic equations for the unknown parameters, which is given by (6.257) or its recursive version.

For systems described by a difference equation

    y(t)/u(t) = (b_0 q^n + b_1 q^{n−1} + ⋯ + b_n) / (a_0 q^n + a_1 q^{n−1} + ⋯ + a_n),    (6.259)
For systems described by a difference equation y(') u(t) acq + aiq ~ H 71 n 106 l (6.259) h a n Chapter 6: Continuous-time System Identification Based on Sample or equivalently y(t) _ frp + feig *H +bq n H b aq n u(t) aa + aiq- 1 : n (6.260) n where q is the difference operator, i.e. qy(t) â€” y(t + 1), the solution to the primary phase is obvious since the dynamic equation (6.260) already fulfills the requirement of the primary phase. For a continuous-time model given by y(t) _ bs u(t) as + bs- n n 0 + â€¢â€¢â€¢ + &, 'n 1 1 + a!*"" -| n 1 0 (6.261) \-a, â€¢n where s is the differential operator, i.e. sy(t) = dy(t)/dt (also loosely interpreted here as the Laplace operator), the solution to the primary phase is not as simple since derivative operation s y(t) n = d y(i)/d t n is not feasible. One way to circumvent this difficulty is to make continuous-to- n discrete conversion of the continuous-time model (6.261) first, and then estimate the parameters of the resulting discrete-time model. The parameters of the original continuous-time model can then be obtained by a discrete-to-continuous time transformation. However, obtaining a continuous-time model from its identified discrete-time form is not without difficulties [70, 71] as the choice o f sampling interval is not trivial. O n the other hand several methods are available to make the continuous-time model (6.261) compatible with the requirement of the primary phase without changing the parameters (an â€¢ â€¢ â€¢ a â€ž ,bo---b ) [79, 79, 85]. Perhaps the most direct of these is the low-pass filter approach. n The key idea is to choose a low-pass filter H(s) with a transfer function of sufficient relative degree to make s H(s) n proper so that all the signals s y(t), n s ~ y(t), n 1 â€¢ â€¢ â€¢, y(t), s u(t), involved i n (6.261) w i l l be feasible by passing through H(s). n - 1 u ( t ) , â€¢ â€¢ â€¢, u(t) Thus a realizable, linear-in-the- n s parameters form o f equation (6.261) can be obtained. 
One particularly simple choice of the filter is the multiple integration filter 1/s^n. The initial condition problem associated with the integration operation can be overcome by integrating the input/output signals over a moving time interval [t_0, t_0 + T] [67]. In section 6.1, the continuous-time model (6.261) is transformed into a regression model by passing s^n y(t), s^{n−1} y(t), ⋯, y(t), s^n u(t), s^{n−1} u(t), ⋯, u(t) through a multiple integration filter 1/s^n, and the numerical integration formulae are given. Section 6.2 introduces the recursive least squares algorithm EFRA [68], which stands for Exponential Forgetting and Resetting Algorithm. In section 6.3, we develop a new algorithm to estimate fast time-varying parameters. A real-life inverted pendulum experiment in section 6.4 shows the effectiveness of the algorithms presented in this chapter.

6.1 The Regression Model for Continuous-time Systems

Consider the system model (6.261), and assume the leading coefficient a_0 is equal to 1. Letting y^{(n)}(t) denote the n-th derivative of y(t), the system model being considered has the form

    y^{(n)}(t) + a_1 y^{(n−1)}(t) + ⋯ + a_n y(t) = b_0 u^{(n)}(t) + b_1 u^{(n−1)}(t) + ⋯ + b_n u(t).    (6.262)

Define the multiple integral of a signal y(t) over [t_0, t_0 + T] as

    I_j y(t_0) = ∫_{t_0}^{t_0+T} ∫_{t_1}^{t_1+T} ⋯ ∫_{t_{j−1}}^{t_{j−1}+T} y(t_j) dt_j dt_{j−1} ⋯ dt_1,   j = 1, 2, ⋯, n.    (6.263)

Applying I_n as defined in (6.263) to both sides of (6.262), the resulting regression model is

    I_n y^{(n)}(t_0) = φ^T(t_0) θ,    (6.264)

where

    φ^T(t_0) = [−I_n y^{(n−1)}(t_0)  ⋯  −I_n y(t_0)   I_n u^{(n)}(t_0)  ⋯  I_n u(t_0)],
    θ = [a_1  ⋯  a_n   b_0  ⋯  b_n]^T.    (6.265)

As can be seen from (6.265), all the regressor entries are numerically feasible, since differentiation is no longer necessary.
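To illustrate how (6.264)-(6.265) are used, consider the first-order case n = 1, where I_1 ẏ(t_0) = y(t_0 + T) − y(t_0): each window t_0 yields one linear equation y(t_0 + T) − y(t_0) = −a I_1 y(t_0) + b I_1 u(t_0). The sketch below (with an invented system ẏ + 2y = u and a step input; all names are illustrative) recovers a and b from samples alone:

```python
import numpy as np

a_true, b_true = 2.0, 1.0          # invented system  y' + a y = b u
dt, T = 0.001, 0.1                 # sampling interval and integration span
l = int(round(T / dt))
t = np.arange(0.0, 3.0, dt)
u = np.ones_like(t)                                     # step input
y = (b_true / a_true) * (1.0 - np.exp(-a_true * t))     # exact step response

def I1(sig, i0):
    """Trapezoidal integral of sig over [t0, t0 + T], with t0 = i0*dt."""
    w = sig[i0:i0 + l + 1]
    return dt * (w.sum() - 0.5 * (w[0] + w[-1]))

rows, rhs = [], []
for i0 in range(0, len(t) - l - 1, 50):     # one equation per window t0
    rows.append([-I1(y, i0), I1(u, i0)])
    rhs.append(y[i0 + l] - y[i0])           # I1 y'(t0): no differentiation
theta, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
a_hat, b_hat = theta
```

Note that the left-hand side I_1 ẏ(t_0) is formed purely from two output samples, which is the whole point of the moving-interval integration filter.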
Since we are interested in computer implementation of the algorithm, the input/output data are only available at discrete sampling instants; this is, however, enough to compute the regressors numerically. The trapezoidal rule will be used for its simplicity of form and its satisfactory accuracy. Rather than giving the general formulae to compute the integrals in (6.265), we consider only the case n = 2, so as not to obscure the basic principle. Assume l + 1 samples of the signal y(t), y(0), y(1), ⋯, y(l), are available on the interval [t_0, t_0 + T] with sampling interval Δ = T/l, as illustrated in Fig. 6.43. The integral I_1 y(t_0) can then be given numerically by the trapezoidal rule as

    I_1 y(t_0) = ∫_{t_0}^{t_0+T} y(t_1) dt_1 ≈ Δ [ Σ_{i=0}^{l} y(i) − 0.5(y(0) + y(l)) ].    (6.266)

Figure 6.43: Graphical illustration of the numerical integration.

Similarly, the double integration of y(t) over [t_0, t_0 + T] can be given by

    I_2 y(t_0) = ∫_{t_0}^{t_0+T} I_1 y(t_1) dt_1 ≈ Δ [ Σ_{i=0}^{l} I_1 y(t_0 + iΔ) − 0.5 I_1 y(t_0) − 0.5 I_1 y(t_0 + T) ],    (6.267)

where each I_1 y(t_0 + iΔ), i = 0, 1, ⋯, l, in (6.267) can be computed using (6.266). The double integration of ÿ(t) over [t_0, t_0 + T] is given by

    I_2 ÿ(t_0) = ∫_{t_0}^{t_0+T} dt_1 ∫_{t_1}^{t_1+T} ÿ(t_2) dt_2 = ∫_{t_0}^{t_0+T} (ẏ(t_1 + T) − ẏ(t_1)) dt_1 = y(t_0 + 2T) − 2y(t_0 + T) + y(t_0).    (6.268)

The double integration of ẏ(t) over [t_0, t_0 + T] is given by

    I_2 ẏ(t_0) = ∫_{t_0}^{t_0+T} dt_1 ∫_{t_1}^{t_1+T} ẏ(t_2) dt_2 = ∫_{t_0}^{t_0+T} (y(t_1 + T) − y(t_1)) dt_1 = I_1 y(t_0 + T) − I_1 y(t_0).    (6.269)

With formulae (6.266)-(6.269), the regressors in (6.265) can be computed for n ≤ 2; for n > 2, the corresponding equations can be obtained in a similar fashion. It is obvious that both the sampling interval Δ and the integration interval T affect the estimates.
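A sketch of formulae (6.266)-(6.267) (hypothetical helper names; numpy assumed):

```python
import numpy as np

def I1(y, delta):
    """Trapezoidal rule (6.266): integral over one span T = l*delta,
    given the l+1 samples y[0], ..., y[l]."""
    return delta * (np.sum(y) - 0.5 * (y[0] + y[-1]))

def I2(y, delta, l):
    """Double integral (6.267) over [t0, t0+T]; y must supply samples
    on [t0, t0+2T], i.e. 2l+1 points, since each inner integral needs
    the l+1 samples starting at its own t1."""
    inner = np.array([I1(y[i:i + l + 1], delta) for i in range(l + 1)])
    return delta * (np.sum(inner) - 0.5 * (inner[0] + inner[-1]))
```

The trapezoidal rule is exact for signals that are linear between samples; the formulas (6.268)-(6.269) then require no quadrature at all, only sampled values of y.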
The sampling interval Δ directly affects the accuracy of the numerical integration (6.266), so Δ should be kept small. However, too small a Δ may also lead to inaccurate estimates due to round-off error. As a rule of thumb, the sampling interval should be chosen as a small fraction of the Shannon maximum sampling period

    T_s = π/ω_max,    (6.270)

where ω_max is the highest frequency of interest [86].

In obtaining the regression model (6.264), the multiple integral operation I_n defined in (6.263), which functions as a pre-filter, is applied to both sides of the system model (6.262). Intuitively, the bandwidth of this pre-filter should match that of the system (6.262), so that the noise in the measurement data is suppressed while the "richness" of the input/output signals, which carries the information about the dynamics of the system (6.262), is not rounded off. The Laplace transform of the multiple integrator (6.263) over a time length T is [67]

    L[I_n(·)] = ((1 − e^{−sT})/s)^n L[(·)] = E(s) L[(·)].    (6.271)

It is clear from (6.271) that the integration span T of the multiple integrator I_n should be selected such that the bandwidth of the multiple-integrator transfer function E(s) matches the bandwidth of the system being identified.

6.2 The EFRA

For the regression model (6.264), the standard RLS algorithm is given by the following formulae [64]:

    θ̂(t_0) = θ̂(t_0 − 1) + K(t_0)( I_n y^{(n)}(t_0) − φ^T(t_0)θ̂(t_0 − 1) ),
    K(t_0) = P(t_0 − 1)φ(t_0) / (1 + φ^T(t_0)P(t_0 − 1)φ(t_0)),    (6.272)
    P(t_0) = (I − K(t_0)φ^T(t_0))P(t_0 − 1),

where P(0) is a large enough positive definite matrix. The statistical interpretation of the least-squares method is that the initial covariance of the parameters is proportional to P(0).

The RLS algorithm (6.272) cannot track time-varying parameters effectively. A simple extension of (6.272) is to use a forgetting factor 0 < λ < 1 to give more recent data more weight.
The modified algorithm is given by

    θ̂(t_0) = θ̂(t_0 − 1) + K(t_0)( I_n y^{(n)}(t_0) − φ^T(t_0)θ̂(t_0 − 1) ),
    K(t_0) = P(t_0 − 1)φ(t_0) / (λ + φ^T(t_0)P(t_0 − 1)φ(t_0)),    (6.273)
    P(t_0) = (I − K(t_0)φ^T(t_0))P(t_0 − 1)/λ.

A disadvantage of exponential forgetting (6.273) is that the covariance matrix P may eventually blow up when the excitation is poor. The Exponential Forgetting and Resetting Algorithm (EFRA) of Salgado et al. [68] has been shown to have superior performance: it guarantees a bounded covariance matrix P even when the excitation is poor. The EFRA is given by

    θ̂(t_0) = θ̂(t_0 − 1) + K(t_0)( I_n y^{(n)}(t_0) − φ^T(t_0)θ̂(t_0 − 1) ),
    K(t_0) = α P(t_0 − 1)φ(t_0) / (1 + φ^T(t_0)P(t_0 − 1)φ(t_0)),    (6.274)
    P(t_0) = (1/λ)[ P(t_0 − 1) − K(t_0)φ^T(t_0)P(t_0 − 1) ] + βI − δP²(t_0 − 1).

There are four parameters in this algorithm to be chosen by the user; it is, however, straightforward to select them in practice. The general guidelines are:

1. α adjusts the gain of the estimator, typically α ∈ [0.1, 0.5];
2. β is small and directly related to the smallest eigenvalue of P, typically β ∈ [0, 0.01];
3. λ is the usual forgetting factor, λ ∈ [0.9, 0.99];
4. δ is small and inversely proportional to the maximum eigenvalue of P, typically δ ∈ [0, 0.01].

The desirable features of EFRA are:

1. exponential forgetting and resetting;
2. an upper bound for P, i.e. a nonzero lower bound for P^{-1};
3. an upper bound for P^{-1}, i.e. a nonzero lower bound for P.

6.3 Dealing with Fast Time-varying Parameters

RLS with a forgetting factor can deal with slowly time-varying parameters effectively, but encounters difficulties with fast time-varying parameters. In such cases it is advantageous to assume the parameters to be time-varying right from the start of the problem formulation.
Xie and Evans [83] proposed an algorithm in a discrete-time setting, assuming that the parameters have the form of an offset linear ramp. Here, the moving-horizon multiple integrator approach developed in section 6.1 is used to deal with the time-varying parameter case. Again for simplicity, a second-order differential equation with time-varying parameters is considered:

    ÿ + a_1(t)ẏ + a_2(t)y = b_1(t)u.    (6.275)

Assume that the time-varying coefficients a_i(t), b_i(t) have the form

    a_i(t) = a_{i0} + a_{i1}t,   b_i(t) = b_{i0} + b_{i1}t,   t ∈ [0, T_res],    (6.276)

over an interval [t_0, t_0 + T_res]. Obviously, equation (6.276) is a very good approximation of a_i(t), b_i(t) if T_res is reasonably small. Note that T_res is not necessarily the same as the integration span T in (6.263); usually T_res > T. Substituting equation (6.276) into equation (6.275), we have

    ÿ + a_{10}ẏ + a_{20}y + a_{11}tẏ + a_{21}ty = b_{10}u + b_{11}tu.    (6.277)

Applying I_2 as defined in (6.263) to both sides of (6.277) over [t_0, t_0 + T], the following regression model is obtained:

    I_2 ÿ(t_0) = φ^T(t_0)θ,    (6.278)

where

    φ^T(t_0) = [−I_2 ẏ(t_0)  −I_2 y(t_0)  −I_2 tẏ(t_0)  −I_2 ty(t_0)   I_2 u(t_0)  I_2 tu(t_0)],
    θ = [a_{10}  a_{20}  a_{11}  a_{21}  b_{10}  b_{11}]^T.    (6.279)

The integrals I_2 ẏ(t_0), I_2 y(t_0), I_2 u(t_0) can be computed using formulae (6.266)-(6.269), while I_2 tẏ(t_0), I_2 ty(t_0), I_2 tu(t_0) are given as follows:

    I_2 tẏ(t_0) = ∫_{t_0}^{t_0+T} dt_1 ∫_{t_1}^{t_1+T} τ (dy(τ)/dτ) dτ = ∫_{t_0}^{t_0+T} [ (t_1 + T)y(t_1 + T) − t_1 y(t_1) − I_1 y(t_1) ] dt_1,
    I_2 ty(t_0) = ∫_{t_0}^{t_0+T} dt_1 ∫_{t_1}^{t_1+T} τ y(τ) dτ,    (6.280)
    I_2 tu(t_0) = ∫_{t_0}^{t_0+T} dt_1 ∫_{t_1}^{t_1+T} τ u(τ) dτ.

With the regression model (6.278), either the standard RLS (6.272) or the EFRA (6.274) can be used. Recalling equation (6.276), the offset-linear-ramp approximation of the time-varying coefficients is valid only when T_res is small. Define T_res as the resetting period.
At each kT_res, k = 0, 1, 2, ⋯, it is necessary to reset the parameter vector and the covariance matrix as follows:

    a_{10}(k + 1) = a_{10}(k) + T_res a_{11}(k),   a_{11}(k + 1) = a_{11}(k),
    a_{20}(k + 1) = a_{20}(k) + T_res a_{21}(k),   a_{21}(k + 1) = a_{21}(k),
    b_{10}(k + 1) = b_{10}(k) + T_res b_{11}(k),   b_{11}(k + 1) = b_{11}(k),    (6.281)
    P(k + 1) = K_1 P(k),   K_1 > 1.

It is important to select the resetting period T_res properly. The principle is that T_res must be chosen large enough to allow reasonable convergence of the parameters, while the variation of the real parameters over a period of T_res should stay small enough that the offset linear ramp remains a good approximation. The following example shows the effectiveness of this algorithm.

Example 6.3.1: Consider system (6.275) with parameters described by

    b(t) = 2 + 0.1t,   0 ≤ t ≤ 30 sec,
    a_1(t) = 2 + 1.5 sin(0.2πt),   0 ≤ t ≤ 30 sec,    (6.282)
    a_2 = 1,   0 ≤ t ≤ 30 sec.

The simulation is performed in open loop with a PRBS signal as input and the following settings: sampling interval Δ = 0.01 sec, integration interval T = 0.05 sec and resetting period T_res = 0.08 sec. The standard RLS algorithm (6.272) is used. The results are depicted in Fig. 6.44.

Figure 6.44: Estimation of time-varying parameters.

As can be seen from Fig. 6.44, even the sinusoidally time-varying parameter can be tracked satisfactorily. This verifies the validity of the assumption in equation (6.276).

6.4 Identification and Control of an Inverted Pendulum

The control of an inverted pendulum is a classic topic in control engineering. There are many solutions available to this problem, for example PID, LQG, fuzzy logic etc. The SDGPC solution will be given in this section.
The advantage of continuous-time parameter estimation over the discrete-time method is highlighted by a comparison between the two approaches. Fig. 6.45 is a picture of the experimental setup, which is built on a used printer.² The pendulum rod is mounted on the printer head and can freewheel 360 degrees around its axis. The printer head is driven by a DC motor along the x axis. Both the printer head position x and the pendulum angle θ are available for measurement, through an LVDT and an encoder attached to the printer head. The control input to the system is the voltage applied to the DC motor. The purpose of the control is to keep the pendulum rod upright and at the same time keep the printer head at the center position.

² The author thanks Dr. A. Elnaggar, who was then a research engineer at the Pulp & Paper Centre, for making this experimental setup available.

Figure 6.45: The inverted pendulum experimental setup.

6.4.1 System Model

The printer head position is proportional to the angular displacement of the DC motor. Thus the transfer function from the input voltage u(t) to the printer head position x(t) has the form

    G_m(s) = x(s)/u(s) = k_m / (s(τ_m s + 1)) = b_0 / (s² + a_1 s).    (6.283)

Only the two parameters b_0, a_1 need to be estimated. As for the relation between the printer head position x(t) and the pendulum angle θ(t), let us consider the downward pendulum first. Fig. 6.46 is an idealized sketch of the downward pendulum. Notice that at the equilibrium point θ_0 = 0, only the acceleration ẍ(t) of the printer head M will break the equilibrium. To an observer moving with M, the effect of the head moving with acceleration ẍ(t) is that of a force mẍ(t) apparently applied to the pendulum mass in the opposite direction. Applying Newton's law in the θ direction as indicated in Fig.
6.46, the torque balance is

-mg L \sin\theta - \epsilon\dot{\theta} + m\ddot{x}(t) L \cos\theta = mL^2 \ddot{\theta}    (6.284)

where L and m are the length and the mass of the idealized pendulum respectively, g is the gravity constant and \epsilon is the friction coefficient. Assuming small \theta and linearizing around \theta_0 = 0, equation (6.284) becomes

\ddot{\theta} + a_1\dot{\theta} + a_2\theta = b\,\ddot{x}(t),  G_{down}(s) = \frac{\theta(s)}{x(s)} = \frac{b s^2}{s^2 + a_1 s + a_2}    (6.285)

Figure 6.46: Downward pendulum

Similarly, the torque balance for the upward pendulum case is

mg L \sin\theta - \epsilon\dot{\theta} + m\ddot{x}(t) L \cos\theta = mL^2 \ddot{\theta}    (6.286)

and the linearized model around \theta_0 = 0 can be written as

\ddot{\theta} + a_1\dot{\theta} - a_2\theta = b\,\ddot{x}(t)    (6.287)

Figure 6.47: Upward pendulum

Comparing G_down(s) in (6.285) and G_up(s) in (6.287), it is easy to see that the parameters are the same except for a sign difference in a_2. This important a priori information is only preserved in the continuous-time model! Since the upward pendulum is open-loop unstable, it is very difficult, if not impossible, to estimate G_up(s) directly without stabilizing it first. However, with continuous-time modeling, it is possible to estimate a_1, a_2 and b from the downward pendulum, which is open-loop stable. The estimation results will be presented in the next subsection.

6.4.2 Parameter Estimation

The experiment was conducted on the downward pendulum. The input voltage applied to the DC motor is a PRBS (Pseudo Random Binary Sequence) signal with an amplitude of \pm 1 volt and a length of 2^8 - 1 = 255 samples. The sampling interval \Delta = 0.1 sec. Both the printer head position x(t) and the angle \theta(t) are measured. For the model structure given by (6.283), the continuous-time regression model (6.264) and the standard RLS (6.272) can be readily applied. The integration interval T = 0.3 sec.
The input/output data and the estimated parameters b_0, a_1 are depicted in Fig. 6.48, where \hat{b}_0 = 10250, \hat{a}_1 = 12.59.

Figure 6.48: Parameter estimation of model (6.283)

The estimated model from input voltage to printer head position is thus

G_m(s) = \frac{10250}{s^2 + 12.59 s} = \frac{814.09}{s(0.0794 s + 1)}    (6.288)

Since the time constant \tau_m = 0.0794 sec is very small, (6.288) can be reasonably approximated by

G_m(s) = \frac{814.09}{s}    (6.289)

For the identification of the system model from printer head position to pendulum angle, a 255 sample PRBS signal with an amplitude of 1 V is applied to the motor. The sampling interval is again \Delta = 0.1 sec. The model parameters a_1, a_2 and b of G_down(s) in (6.285) are estimated using the algorithms given in the previous sections of this chapter. The printer head position, the angle output data and the estimated parameters are depicted in Fig. 6.49, with \hat{a}_1 = 0.01418, \hat{a}_2 = 43.9627, \hat{b} = 0.1125.

Figure 6.49: Parameter estimation of model (6.285)

The identified model G_down(s) is thus

G_{down}(s) = \frac{0.1125 s^2}{s^2 + 0.01418 s + 43.9627}    (6.290)

with poles at -0.0071 \pm 6.6304i. The open-loop unstable upward pendulum model (6.287) can be readily written as

G_{up}(s) = \frac{0.1125 s^2}{s^2 + 0.01418 s - 43.9627}    (6.291)

with poles at -6.6375 and 6.6233, which correspond to a time constant of approximately 0.15 sec. The sampling interval of \Delta = 0.1 sec used in the experiment is relatively large for this system, for either the downward or the upward case. Unfortunately, that was the smallest sampling interval we could obtain due to computer system limitations. It is interesting to see how discrete-time estimation will perform with the same experimental data.
The discrete-time counterpart of system model (6.285) has the form

\theta(k) + a_{d1}\theta(k-1) + a_{d2}\theta(k-2) = b_{d1}x(k) + b_{d2}x(k-1) + b_{d3}x(k-2)

\theta(k) = \frac{b_{d1} + b_{d2}q^{-1} + b_{d3}q^{-2}}{1 + a_{d1}q^{-1} + a_{d2}q^{-2}} x(k)    (6.292)

The MATLAB function arx in the System Identification Toolbox was used on the same data set to identify the parameters in (6.292), i.e.

arx([\theta\ x], [n_a\ n_b\ n_k]),  n_a = 2, n_b = 3, n_k = 0    (6.293)

The parameter estimates are

[b_{d1}\ b_{d2}\ b_{d3}] = [0.093\ -0.1863\ 0.0923]
[a_{d1}\ a_{d2}] = [-1.5685\ 0.9663]    (6.294)

Fig. 6.50 shows the step response of model (6.292) with the estimated parameters.

Figure 6.50: Step response of the estimated discrete-time model (6.292)

Fig. 6.50 tells us that the system has a natural undamped frequency of 1.05 Hz. This agrees quite satisfactorily with what we have from the continuous-time identification approach; see (6.290), where f = \sqrt{43.9627}/(2\pi) = 1.0553 Hz. However, the damping factor is quite different from what we obtained in (6.290). From Fig. 6.50, the pendulum should settle in about 30 seconds when there is a step position input. Experiment shows that the pendulum oscillation will last about 400 seconds after a hit on the printer head, which agrees quite well with the continuous-time estimation results. Also notice from Fig. 6.50 that there is a nonzero steady-state gain in the estimated discrete-time model. This is obviously wrong: it has been shown that the system has two zeros at the origin, see (6.285). This important a priori information is also lost in the discrete-time modelling. As a matter of fact, the continuous-time counterpart of the estimated discrete-time model with sampling interval \Delta = 0.1 sec and zero-order hold is

\frac{0.093 s^2 + 0.2004 s - 0.1036}{s^2 + 0.3428 s + 41.91}    (6.295)

Comparing (6.295) with (6.290), the superior performance of continuous-time identification for this example is obvious.
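The pole correspondence can be cross-checked with a few lines of NumPy (a sketch added here for illustration; the thesis used MATLAB's arx). Mapping the poles of the ARX denominator (6.294) into the s-plane by s = ln(z)/\Delta recovers the damping of (6.295), about -0.17, more than twenty times the damping -0.0071 of the continuous-time estimate (6.290), while the natural frequencies agree within a few percent:

```python
import numpy as np

delta = 0.1                                    # sampling interval (sec)

# Poles of the ARX model (6.294): 1 + a_d1*q^-1 + a_d2*q^-2
z = np.roots([1.0, -1.5685, 0.9663])
s_mapped = np.log(z.astype(complex)) / delta   # s = ln(z) / Delta

# Poles of the continuous-time estimate (6.290)
s_ct = np.roots([1.0, 0.01418, 43.9627])
```

Here s_mapped comes out near -0.171 \pm 6.47i, consistent with the denominator of (6.295), whereas s_ct is -0.0071 \pm 6.6304i as quoted after (6.290).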
6.4.3 Controller Design

Define the states of the pendulum system as [\dot{\theta}(t)\ \theta(t)\ \dot{x}(t)\ x(t)]^T and the input as u_d(t) = \dot{u}(t), the derivative of the input voltage to the DC motor. The state-space description of the system can then be written, based on (6.289) and (6.291), as

\frac{d}{dt}\begin{bmatrix}\dot{\theta}\\ \theta\\ \dot{x}\\ x\end{bmatrix} = \begin{bmatrix}-0.01418 & 43.9627 & 0 & 0\\ 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0\end{bmatrix}\begin{bmatrix}\dot{\theta}\\ \theta\\ \dot{x}\\ x\end{bmatrix} + \begin{bmatrix}-91.5851\\ 0\\ 814.09\\ 0\end{bmatrix} u_d(t)    (6.296)

In the SDGPC framework, system (6.296) can be regarded as the integrator augmented system of

\frac{d}{dt}\begin{bmatrix}\dot{\theta}\\ \theta\\ x\end{bmatrix} = \begin{bmatrix}-0.01418 & 43.9627 & 0\\ 1 & 0 & 0\\ 0 & 0 & 0\end{bmatrix}\begin{bmatrix}\dot{\theta}\\ \theta\\ x\end{bmatrix} + \begin{bmatrix}-91.5851\\ 0\\ 814.09\end{bmatrix} u(t),  \quad x(t) = [0\ 0\ 1]\begin{bmatrix}\dot{\theta}\\ \theta\\ x\end{bmatrix}    (6.297)

Apply the final state weighting SDGPC (2.20) to (6.296) with design parameters

N_u = 6,  T_p = 1.2 sec,  \lambda = 1,  \gamma = 1    (6.298)

The resulting control law is

u_d(t) = -[0.1905\ 1.2643\ -0.0069\ -0.0120]\,[\dot{\theta}\ \theta\ \dot{x}\ x]^T    (6.299)

The control law (6.299) is implemented in the following form:

u(t) = -0.1905\,\theta(t) - 1.2643\int_0^t \theta(\tau)d\tau + 0.0069\,(x(t) - 1100) + 0.0120\int_0^t (x(\tau) - 1100)d\tau    (6.300)

The number 1100 in (6.300) is the LVDT reading corresponding to the center position. The picture in Fig. 6.52 shows the pendulum being successfully controlled by (6.300). Notice that the control law is fairly robust against a disturbance (a plastic bag was placed on the top of the pendulum after (6.300) was applied). The same control law can also stabilize the pendulum when the rod is stretched to twice the original length; see Fig. 6.51.

Figure 6.52: SDGPC of pendulum subject to disturbance

It was found in the experiment that it is not difficult to stabilize the pendulum, i.e. to keep it upward, but it is not easy to keep the printer head exactly in the center position while keeping the pendulum upward.
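The implementation (6.300) reduces to two gains on the measured signals plus two running integrals. A minimal digital sketch follows; the class and variable names, the sample time, and the rectangular-rule integration are assumptions of this sketch, not details from the experiment:

```python
class PendulumController:
    """Digital realization of control law (6.300): the state-feedback gains
    of (6.299) acting on [theta_dot, theta, x_dot, x] are applied through one
    integration, so only theta and x need be measured.  Integrals are
    accumulated by the rectangular rule (a sketch only; any anti-windup
    handling on the real rig is not modelled here)."""

    X_CENTER = 1100.0          # LVDT reading at the track center, from (6.300)

    def __init__(self, dt=0.01):
        self.dt = dt
        self.int_theta = 0.0   # running integral of theta
        self.int_x = 0.0       # running integral of (x - X_CENTER)

    def update(self, theta, x):
        """One control update from the measured angle and head position."""
        self.int_theta += theta * self.dt
        self.int_x += (x - self.X_CENTER) * self.dt
        return (-0.1905 * theta - 1.2643 * self.int_theta
                + 0.0069 * (x - self.X_CENTER) + 0.0120 * self.int_x)
```

At the equilibrium theta = 0, x = 1100 the controller output is exactly zero, as (6.300) requires.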
Also notice that the system model developed in subsection 6.4.1 is an idealized one, with the assumption that the pendulum rod is a rigid body. As the length of the rod increases, so does its flexibility. An adaptive version of SDGPC would have been more interesting; unfortunately, the experimental setup was only available for a limited period of time. Nevertheless, the experimental results show that the continuous-time model parameter estimation algorithm and the SDGPC algorithm are quite effective in solving practical control problems.

6.5 Conclusion

Continuous-time system identification based on sampled data is considered in this chapter. The moving horizon integration approach given in section 6.1 is a simple yet powerful method for parameter estimation of a continuous-time model. Based on the regression model (6.264), various available recursive estimation algorithms, such as the EFRA developed in the discrete-time context, can be readily applied. The algorithm proposed in section 6.3 can deal with fast time-varying parameters, as was shown by simulation. A real-life inverted pendulum experiment in section 6.4 showed the benefits of continuous-time identification, namely effective use of a priori information. The identification method in this chapter, together with the SDGPC algorithm, offers an effective way of solving complicated control problems. In this way, the insight about the underlying continuous-time process is never lost during the whole design process. It is the author's belief that even if the control law is designed in the discrete-time domain, it is always beneficial to identify the underlying continuous-time process first and then discretize it. The experiment presented in section 6.4 can be regarded as a supportive example.

Chapter 7 Conclusions

A new predictive control approach was taken in this thesis.
The important practical issue of actuator saturation was taken into account. The resulting algorithms are of considerable practical interest and have attractive theoretical properties. The work can be summarized as follows.

1. A new predictive control algorithm, SDGPC, has been developed. It possesses the inherent robustness (gain and phase margins) and stability property of the infinite horizon LQ regulator and, at the same time, has the constraint handling flexibility of the finite horizon formulation, a feature unique to MBPC. SDGPC distinguishes itself from the rest of the MBPC family in that it is based on continuous-time modelling yet assumes digital implementation. This formulation stresses the connection rather than the differences between continuous-time and discrete-time control. It has been shown by simulation that for a stable, well-damped process, the execution time T_exe can vary from 0, which corresponds to continuous-time control, to the design sampling interval T_m, which can be quite large, without affecting the servo performance significantly. This means that for a given prediction horizon T_p and desired sampling interval T_m, a larger T_exe can be selected to reduce the computational burden in adaptive applications. For unstable and/or lightly damped processes, however, T_exe should be equal to T_m.

2. SDGPC for tracking systems has a two-degree-of-freedom design structure. This is achieved by assuming different models for the reference signal and the disturbance. However, only one performance index is used to obtain the control law. Tracking performance can be improved radically when future setpoint information is available. This is because knowing the future setpoint is equivalent to knowing the complete state information of the reference signal at the present time.

3. Another two-degree-of-freedom design extension to SDGPC was made.
Contrary to the approach taken in the tracking system design, in which different models for reference and disturbance were assumed, here two performance indices are used but the reference and the disturbance are assumed to have the same model (in this thesis, a simple constant). The servo performance and the disturbance rejection performance can be tuned separately by using different design parameters (prediction horizon, control order, control weighting etc.) for the two performance indices. The nonlinearity due to actuator constraints is considered in the framework of anti-windup design. The resulting scheme effectively transforms the nonlinear problem into a time-varying linear problem and was shown to have a guaranteed stability property. Simulation results confirmed the effectiveness of the scheme.

4. Control of time-delay systems was considered. Laguerre-filter-based adaptive SDGPC was shown to be particularly effective in dealing with time-delay systems. A priori information about the time-delay can be utilized to improve the control performance significantly.

5. A continuous-time model parameter estimation algorithm based on sampled data was developed. Numerical integration on a moving interval was used to eliminate the initial condition problem. It was argued that even if the controller design is purely in discrete time, it is always beneficial to identify the underlying continuous-time model first before discretizing. A priori information about the physical system is best utilized in continuous-time modelling. The continuous-time model identification method and the SDGPC algorithm were applied to an inverted pendulum experiment. The results confirmed the benefits of continuous-time modelling and identification.

Some future research suggestions are:

1. Extend the work to multi-input multi-output systems.
Although the author sees no major obstacles in doing so for most of the topics covered in the thesis, some effort is needed to formulate the anti-windup scheme for the MIMO case.

2. Deal with the trade-off between good tracking and disturbance rejection performance on the one hand, and good noise suppression performance on the other. This is a basic trade-off in any control system design [3, pp. 112]. SDGPC was formulated in a deterministic framework. Deterministic disturbances such as impulse, step, ramp, sinusoid etc. can all be handled in a straightforward manner in the framework of SDGPC. The trade-off between good noise suppression and good disturbance rejection can be obtained by proper tuning of SDGPC to give the desired closed-loop system bandwidth. For stochastic noise, the well-known Kalman filter theory can be applied to estimate the system states. The deterministic treatment of SDGPC does not prevent it from using these results, because of the Separation Theorem or Certainty Equivalence Principle [3, pp. 218]. For systems with colored noise, which is more often the case than not, the optimal Kalman filter can be designed based on the noise model, provided that the noise model is known [4, pp. 54]. Unfortunately, the noise model is often unknown and difficult to estimate. Estimation of the "true" system states subject to unknown colored noise poses one of the biggest challenges in process control applications. Thus how to utilize the available results, and develop new ones, in the framework of SDGPC is certainly a topic worth pursuing.

3. Adaptive SDGPC. Only adaptive Laguerre-filter-based SDGPC was considered in the thesis. Since the parameter estimation algorithm has been developed, it would be desirable to see an adaptive version of SDGPC based on a general transfer function description of systems.

4. Apply the SDGPC algorithm to practical problems.
Although the initial experiment on an inverted pendulum showed the effectiveness of SDGPC and the associated continuous-time identification algorithm, only large scale industrial applications can be the final judge.

Appendix A Stability Results of Receding Horizon Control

Appendix A.1 and appendix A.2 are based on [4].

A.1 The Finite and Infinite Horizon Regulator

Given a state-space model of a linear plant

x(k+1) = F x(k) + G u(k)    (A.301a)
y(k) = H x(k)    (A.301b)

where F, G, H have proper dimensions, the finite horizon LQ regulator problem can be posed as follows. The performance index is

J(N, x(k)) = x^T(k+N) P_0 x(k+N) + \sum_{j=0}^{N-1} \{ x^T(k+j) Q x(k+j) + u^T(k+j) R u(k+j) \}    (A.302)

The solution to the above optimal LQ problem may be given by iterating the Riccati Difference Equation (RDE),

P_{j+1} = F^T P_j F - F^T P_j G (G^T P_j G + R)^{-1} G^T P_j F + Q,  j = 0, 1, ..., N-2    (A.303)

from the initial condition P_0, and implementing the feedback control sequence given by

u(k+N-j) = -(G^T P_{j-1} G + R)^{-1} G^T P_{j-1} F x(k+N-j) = K_{j-1} x(k+N-j),  j = 1, 2, ..., N    (A.304)

where P_j is the matrix solution of the RDE (A.303). Notice from the control sequence (A.304) that it iterates backwards in time compared with the direction of evolution of the plant (A.301). That is, in order to obtain the current control u(k), P_{N-1} has to be solved first by iterating (A.303). The accumulated cost of (A.302) is given by P_N, which itself does not appear in the control law:

J(N, x(k)) = x^T(k) P_N x(k)    (A.305)

Similarly, the infinite horizon LQ regulator problem may be posed as the limiting case of the finite horizon LQ problem (A.302),

J(x(k)) = \lim_{N \to \infty} J(N, x(k))    (A.306)

and the optimal solution can be obtained by iterating (A.303) indefinitely.
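The backward-in-time character of (A.303)-(A.304) and the cost identity (A.305) can be checked numerically. The following sketch uses an illustrative double-integrator plant (not one from the thesis): it iterates the RDE forward to P_N, simulates the optimal control sequence, and verifies that the accumulated cost equals x^T(k) P_N x(k):

```python
import numpy as np

# Illustrative plant: discrete double integrator with unit sample time.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
G = np.array([[0.5], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
P0 = np.eye(2)      # terminal weight
N = 5               # horizon length

# Iterate the RDE (A.303): P[0] = P0, ..., P[N].
P = [P0]
for _ in range(N):
    Pj = P[-1]
    S = G.T @ Pj @ G + R
    P.append(F.T @ Pj @ F - F.T @ Pj @ G @ np.linalg.solve(S, G.T @ Pj @ F) + Q)

def gain(Pj):
    """Feedback gain L of (A.304), applied as u = -L x."""
    return np.linalg.solve(G.T @ Pj @ G + R, G.T @ Pj @ F)

# Simulate the optimal sequence and accumulate the cost (A.302).
# With N-j steps to go, the control uses P_{N-j-1}, per (A.304).
x0 = np.array([[1.0], [-1.0]])
x = x0
cost = 0.0
for i in range(N):
    u = -gain(P[N - 1 - i]) @ x
    cost += (x.T @ Q @ x + u.T @ R @ u).item()
    x = F @ x + G @ u
cost += (x.T @ P0 @ x).item()   # terminal term of (A.302)
```

The accumulated cost agrees with x0^T P_N x0, which is exactly the identity (A.305).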
Under mild assumptions, P_j converges to its limit P_\infty, which is the maximal solution of the Algebraic Riccati Equation (ARE),

P_\infty = F^T P_\infty F - F^T P_\infty G (G^T P_\infty G + R)^{-1} G^T P_\infty F + Q    (A.307)

and a stationary control law is obtained as

u(k) = -(G^T P_\infty G + R)^{-1} G^T P_\infty F x(k) = K x(k)    (A.308)

The following theorem regarding the stability property of the infinite horizon LQ control law (A.308) is due to De Souza et al. [74].

Theorem A.6 (De Souza et al. [74]) Consider an infinite horizon LQ regulator problem with plant (A.301) and performance index (A.306). For the associated ARE,

P = F^T P F - F^T P G (G^T P G + R)^{-1} G^T P F + Q    (A.309)

if

• [F, G] is stabilizable,
• [F, Q^{1/2}] has no unobservable modes on the unit circle,
• Q \ge 0 and R > 0,

then

• there exists a unique, maximal, nonnegative definite symmetric solution P;
• P is the unique stabilizing solution, i.e.

F - G (G^T P G + R)^{-1} G^T P F    (A.310)

has all its eigenvalues strictly within the unit circle.

The solution P above is called the stabilizing solution of the ARE (A.309). Also note that the matrix (A.310) is the state transition matrix of the closed-loop system when the stationary control law (A.308) is applied to plant (A.301). Theorem A.6 is the fundamental closed-loop stability result for infinite horizon LQ control; it will be utilized to prove the stability result of receding horizon control in the following, and the stability property of SDGPC thereafter.

A.2 The Receding Horizon Regulator

From the discussion in appendix A.1, a number of facts about the finite horizon and infinite horizon discrete-time LQ regulator problems are clear. For the finite horizon case, the optimization task with cost function (A.302) is merely to find N control values, which, in principle, may be found by finite-dimensional optimization; this is referred to as the "one shot" algorithm in most model based predictive controllers.
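The convergence of the RDE (A.303) to the stabilizing ARE solution (A.307), and the stability guarantee of Theorem A.6, can be illustrated numerically with a small sketch (the plant below is illustrative, not from the thesis; 200 iterations is an assumption that is ample for this example):

```python
import numpy as np

# Illustrative plant; the open loop is unstable (eigenvalues 1.2 and 1.0).
F = np.array([[1.2, 1.0], [0.0, 1.0]])
G = np.array([[0.0], [1.0]])
Q = np.eye(2)                 # Q > 0, so [F, Q^(1/2)] is detectable
R = np.array([[1.0]])

def rde_step(P):
    """One iteration of the Riccati difference equation (A.303)."""
    S = G.T @ P @ G + R
    return F.T @ P @ F - F.T @ P @ G @ np.linalg.solve(S, G.T @ P @ F) + Q

P = np.zeros((2, 2))          # start from P0 = 0
for _ in range(200):
    P = rde_step(P)

# Stationary gain (A.308) and closed-loop matrix (A.310):
K = np.linalg.solve(G.T @ P @ G + R, G.T @ P @ F)
F_cl = F - G @ K
```

After convergence P is (numerically) a fixed point of the RDE, i.e. a solution of the ARE (A.307), and the eigenvalues of F_cl lie strictly inside the unit circle, as Theorem A.6 asserts.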
The control sequence may also be obtained by iterating the RDE (A.303) explicitly for j = 0, ..., N-2 using simple linear algebra. The resulting control law in feedback form (A.304) is time-varying even if the plant being controlled is time-invariant. By contrast, the infinite horizon problem involves an infinite-dimensional optimization, or the solution of an ARE (A.307), which is computationally burdensome, especially in adaptive applications. However, the control law of the infinite horizon problem is stationary and has guaranteed stability properties under mild assumptions. Receding horizon control is one method proposed to inherit the simplicity of the finite horizon LQ method while addressing an infinite horizon implementation and preserving the time-invariance of the infinite horizon feedback. In this formulation, only the first element u(k) in the control sequence u(k), u(k+1), ..., u(k+N-1) is applied to the plant at time k; at time k+1 the first control u(k+1) in the control sequence u(k+1), u(k+2), ..., u(k+N) is applied, and so on. In terms of the finite horizon feedback control law (A.304), one has for the receding horizon strategy

u(k) = -(G^T P_{N-1} G + R)^{-1} G^T P_{N-1} F x(k) = K_{N-1} x(k)    (A.311)

which is a stationary control law. Note that nothing has yet been said about the stability of control law (A.304); in fact, the receding horizon strategy does not guarantee stability by itself. Motivated by the facts that the infinite horizon LQ control law has a guaranteed stabilizing property and that there are strong similarities between the receding horizon control law (A.304) and the infinite horizon LQ control law (A.308), i.e. both are stationary and have the same form, one has good reason to wonder whether the stability result of infinite horizon LQ control, summarized as Theorem A.6, could be of any help for the stability problem of receding horizon control.
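A scalar numerical sketch of this question (the numbers are illustrative, not from the thesis): starting the RDE (A.303) from a very large P_0 produces a monotonically non-increasing solution P_j, and every resulting receding horizon gain of the form (A.311) is then stabilizing, even though the plant is open-loop unstable and the horizon is finite. The fake algebraic Riccati equation and Theorems A.7-A.10 developed in the remainder of this appendix explain why this works:

```python
# Scalar illustration (numbers are mine, not from the thesis):
F, G, Q, R = 1.2, 1.0, 1.0, 1.0     # open-loop unstable scalar plant
P0 = 1e6                             # very heavy terminal weight

# Iterate the RDE (A.303) from P0 and record the solution sequence.
Ps = [P0]
for _ in range(20):
    P = Ps[-1]
    Ps.append(F * P * F - (F * P * G) ** 2 / (G * P * G + R) + Q)

# Closed-loop factor F - G*K_j for the gain built from each P_j, as in (A.311):
F_cls = [F - G * (G * P * F) / (G * P * G + R) for P in Ps]
```

Here the P_j decrease monotonically from P_0 toward the ARE solution (about 1.952 for these numbers), and every closed-loop factor stays strictly inside the unit circle.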
For this we go to the important work of Bitmead et al. [4]. Consider the RDE (A.303):

P_{j+1} = F^T P_j F - F^T P_j G (G^T P_j G + R)^{-1} G^T P_j F + Q,  j = 0, 1, ..., N-2    (A.312)

Define

\bar{Q}_j = Q - (P_{j+1} - P_j)    (A.313)

Then the RDE (A.312) takes the form of an ARE,

P_j = F^T P_j F - F^T P_j G (G^T P_j G + R)^{-1} G^T P_j F + \bar{Q}_j    (A.314)

which is called the Fake Algebraic Riccati Equation (FARE) [4]. From Theorem A.6, the stability property of the solution of the above FARE can be immediately established as follows.

Theorem A.7 (Bitmead et al. [4, pp. 87]) Consider the FARE (A.314), with (A.313) defining the matrix \bar{Q}_j. If \bar{Q}_j \ge 0, R > 0, [F, G] is stabilizable and [F, \bar{Q}_j^{1/2}] is detectable, then P_j is stabilizing, i.e.

F_j = F - G (G^T P_j G + R)^{-1} G^T P_j F    (A.315)

has all its eigenvalues strictly within the unit circle.

Clearly, if the conditions in Theorem A.7 are met for j = N-1, then the receding horizon control law (A.304) will be stabilizing. However, further work needs to be done to relate the design parameters, i.e. the matrices P_0, Q, R in the performance index (A.302), to the conditions in Theorem A.7. The following results from [4] serve this purpose.

Lemma A.3 (Bitmead et al. [4, pp. 88]) Given two nonnegative definite symmetric matrices Q_1 and Q_2 satisfying Q_1 \le Q_2, then [F, Q_1^{1/2}] detectable implies [F, Q_2^{1/2}] detectable.

The following corollary [5], which is an immediate result of Lemma A.3, tells us that if the solution of the RDE is decreasing at time j, then the closed-loop state transition matrix (A.315) is stable.

Corollary A.1 (Bitmead et al. [5]) If, for the RDE (A.312), [F, G] is stabilizable, [F, Q^{1/2}] is detectable and P_j in (A.312) is non-increasing at j, i.e. P_{j+1} \le P_j, then F_j defined by (A.315) is stable.

Also from [5], we have the following theorem regarding the monotonicity properties of the solution of the RDE (A.312).

Theorem A.8 (Bitmead et al.
[5]) If the nonnegative definite solution P_j of the RDE (A.312) is monotonically non-increasing at one time, i.e. P_{j+1} \le P_j for some j, then P_j is monotonically non-increasing for all subsequent times: P_{j+k+1} \le P_{j+k} for all k \ge 0.

The following result is immediate by combining Corollary A.1 and Theorem A.8.

Theorem A.9 (Bitmead et al. [4, pp. 90]) Consider the RDE (A.312). If

• [F, G] is stabilizable,
• [F, Q^{1/2}] is detectable,
• P_{j+1} \le P_j for some j,

then F_k given by (A.315) with P_k is stable for all k \ge j.

As an immediate consequence of Theorem A.9, we see that if P_0 in the design of the receding horizon controller is selected in such a way that one iteration of the RDE results in P_1 \le P_0, then \bar{Q}_0 = Q - (P_1 - P_0) \ge Q and \bar{Q}_j \ge Q for any subsequent j \ge 0; this implies that F_j given by (A.315) is stable for any j \ge 0. A clever choice of P_0 which guarantees a monotonically non-increasing solution of the RDE is to let P_0 = \infty, as first proposed by Kwon and Pearson [42], albeit in a very different framework. The result can be summarized as follows.

Theorem A.10 (Kwon et al. [42] and Bitmead et al. [4, pp. 97]) Consider the system

x(k+1) = F x(k) + G u(k)    (A.316a)
y(k) = H x(k)    (A.316b)

and the associated receding horizon control problem, i.e. minimize the performance index

J(N, x(k)) = \sum_{j=0}^{N-1} \{ x^T(k+j) Q x(k+j) + u^T(k+j) R u(k+j) \}    (A.317)

subject to the final state constraint

x(k+N) = 0    (A.318)

Assume Q \ge 0, R > 0, F is nonsingular, [F, G] is controllable and [F, Q] is observable. Then the optimal solution exists and stabilizes the system (A.316) whenever N \ge n, where n is the dimension of system (A.316).

The nonsingularity condition on F was removed in a recent paper by Chisci and Mosca [10]. The following corollary is a natural consequence of Theorem A.10, using the argument given by Demircioglu et al. [16].
Corollary A.2 For the system described by equation (A.316) with performance index

J(N, x(k)) = x^T(k+N) P_0 x(k+N) + \sum_{j=0}^{N-1} \{ x^T(k+j) Q x(k+j) + u^T(k+j) R u(k+j) \}    (A.319)

there exists a positive number \gamma such that for P_0 \ge \gamma I, the closed-loop system under the control law obtained by minimizing (A.319) is stable.

Proof: Since the pole locations of the closed-loop system under the optimal control law obtained by minimizing (A.319) are continuous functions of P_0, the closed-loop poles can be made arbitrarily close to the limiting case P_0 = \infty, which is stable according to Theorem A.10, by increasing P_0. Thus there always exists a positive number \gamma such that for P_0 \ge \gamma I the closed-loop system is stable.

Theorem A.10 is used to investigate the stability properties of SDGPC in section 2.2.1.

References

[1] Aida, K. and T. Kitamori (1990). 'Design of a PI-type state feedback optimal servo system'. Int. J. Control, Vol. 52, No. 3.
[2] Al-Rahmani, H. M. and G. F. Franklin (1992). 'Multirate control: A new approach'. Automatica, Vol. 28, No. 1.
[3] Anderson, B. D. O. and J. B. Moore (1990). Optimal Control: Linear Quadratic Methods. Prentice Hall, Englewood Cliffs, New Jersey.
[4] Bitmead, R. R., M. Gevers and V. Wertz (1990). Adaptive Optimal Control: The Thinking Man's GPC. Prentice Hall.
[5] Bitmead, R. R., M. Gevers, I. R. Petersen and R. J. Kaye (1985). 'Monotonicity and stabilizability properties of solutions of the Riccati difference equation: Propositions, lemmas, theorems, fallacious conjectures and counterexamples'. Systems and Control Letters, Vol. 5.
[6] Bittanti, S., P. Colaneri and G. Guardabassi (1984). 'H-controllability and observability of linear periodic systems'. SIAM Journal on Control and Optimization.
[7] Boyd, S., L. El Ghaoui, E. Feron and V. Balakrishnan (June, 1994). Linear Matrix Inequalities in System and Control Theory. Volume 15 of Studies in Applied Mathematics. SIAM, Philadelphia, PA.
[8] Campo, P. J. and M. Morari (1990). 'Robust control of processes subject to saturation nonlinearities'. Computers in Chemical Engineering, Vol. 14, No. 4/5.
[9] Chen, C. T. (1984). Linear System Theory and Design. Holt, Rinehart and Winston, New York.
[10] Chisci, L. and E. Mosca (September, 1993). Stabilizing predictive control: The singular transition matrix case. In 'Advances in Model-Based Predictive Control', Oxford, England.
[11] Clarke, D. W. (September, 1993). Advances in model-based predictive control. In 'Workshop on Model-Based Predictive Control, Oxford University, England'.
[12] Clarke, D. W. and R. Scattolini (July, 1991). 'Constrained receding-horizon predictive control'. IEE Proceedings D, Vol. 138, No. 4.
[13] Clarke, D. W., C. Mohtadi and P. S. Tuffs (1987). 'Generalized predictive control - part I. The basic algorithm'. Automatica, Vol. 23, No. 2.
[14] Clarke, D. W., E. Mosca and R. Scattolini (December, 1991). Robustness of an adaptive predictive controller. In 'Proceedings of the 30th Conference on Decision and Control', Brighton, England.
[15] Cutler, C. R. and B. C. Ramaker (1980). Dynamic matrix control - a computer control algorithm, paper WP5-B. In 'JACC, San Francisco'.
[16] Demircioglu, H. and D. W. Clarke (July, 1992). 'CGPC with guaranteed stability properties'. IEE Proceedings D, Vol. 139, No. 4.
[17] Demircioglu, H. and D. W. Clarke (July, 1993). 'Generalised predictive control with end-point state weighting'. IEE Proceedings D, Vol. 140, No. 4.
[18] Demircioglu, H. and P. J. Gawthrop (1991). 'Continuous-time generalized predictive control (CGPC)'. Automatica, Vol. 27, No. 1.
[19] Demircioglu, H. and P. J. Gawthrop (1992). 'Multivariable continuous-time generalized predictive control (MCGPC)'. Automatica.
[20] Doyle, J. C., R. S. Smith and D. F. Enns (1987). Control of plants with input saturation nonlinearities.
In '1987 ACC'.
[21] Dumont, G. A. (1992). Fifteen years in the life of an adaptive controller. In 'IFAC Adaptive Systems in Control and Signal Processing', Grenoble, France.
[22] Dumont, G. A. and C. C. Zervos (1986). Adaptive controllers based on orthonormal series representation. In '2nd IFAC Workshop on Adaptive Control and Signal Processing', Lund, Sweden.
[23] Dumont, G. A., Y. Fu and G. Lu (September, 1993). Nonlinear adaptive generalized predictive control and its applications. In 'Workshop on Model-Based Predictive Control, Oxford University, England'.
[24] Elnaggar, A., G. Dumont and A. Elshafei (December, 1990). System identification and adaptive control based on a variable regression for systems having unknown delay. In 'Proceedings of the 29th Conference on Decision and Control', Honolulu, Hawaii.
[25] Eykhoff, P. (1974). System Identification. Wiley, New York.
[26] Fertik, H. A. and C. W. Ross (1967). 'Direct digital control algorithm with anti-windup feature'. ISA Transactions, 6(4):317-328.
[27] Finn, C. K., B. Wahlberg and B. E. Ydstie (1993). 'Constrained predictive control using orthogonal expansion'. AIChE Journal, Vol. 39, No. 11.
[28] Franklin, G. F., J. D. Powell and A. Emami-Naeini (1994). Feedback Control of Dynamic Systems. Addison-Wesley Publishing Company.
[29] Fu, Y. and G. A. Dumont (June, 1993). 'An optimal time scale for discrete Laguerre network'. IEEE Trans. on Auto. Control, AC-38, No. 6, pp. 934-938.
[30] Furutani, E., T. Hagiwara and M. Araki (December, 1994). Two-degree-of-freedom design method of state-predictive LQI systems. In 'Proceedings of the 33rd Conference on Decision and Control', Lake Buena Vista, FL.
[31] Garcia, C. E., D. M. Prett and M. Morari (1989). 'Model predictive control: Theory and practice - a survey'. Automatica.
[32] Gawthrop, P. J. (1987). Continuous-time Self-tuning Control, Volume I - Design.
Research Studies Press, England.
[33] Goodwin, G. C. and D. Q. Mayne (1987). 'A parameter estimation perspective of continuous time model reference adaptive control'. Automatica, 23(1), 57-70.
[34] Hagiwara, T., T. Yamasaki and M. Araki (July, 1993a). Two-degree-of-freedom design method of LQI servo systems, part I: Disturbance rejection by constant feedback. In 'The 12th IFAC World Congress'.
[35] Hagiwara, T., T. Yamasaki and M. Araki (July, 1993b). Two-degree-of-freedom design method of LQI servo systems, part II: Disturbance rejection by dynamic feedback. In 'The 12th IFAC World Congress'.
[36] Hanus, R., M. Kinnaert and J. L. Henrotte (1987). 'Conditioning technique, a general anti-windup and bumpless transfer method'. Automatica, Vol. 23, 729-739.
[37] Hautus, M. L. J. (1969). 'Controllability and observability conditions of linear autonomous systems'. Indagationes Mathematicae, Vol. 72, pp. 443-448.
[38] Kalman, R. E., Y. C. Ho and K. S. Narendra (1963). Contributions to Differential Equations, Vol. I. Interscience, New York.
[39] Kothare, M., P. J. Campo and M. Morari (1993). A unified framework for the study of anti-windup designs. Technical report CIT-CDS 93-011, California Institute of Technology.
[40] Kothare, M., V. Balakrishnan and M. Morari (1995). Robust constrained model predictive control using linear matrix inequalities. Technical report, Chemical Engineering 210-41, California Institute of Technology.
[41] Kwakernaak, H. and R. Sivan (1972). Linear Optimal Control Systems. Wiley-Interscience, New York.
[42] Kwon, W. H. and A. E. Pearson (1978). 'On feedback stabilization of time-varying discrete linear systems'. IEEE Trans. AC-23, (3), pp. 479-481.
[43] Levis, A. H., R. A. Schlueter and M. Athans (1971). 'On the behaviour of optimal linear sampled-data regulators'. Int. J. Control, Vol. 13, No. 2.
[44] Ljung, L.
(1987). SYSTEM IDENTIFICATION: Theory for the User. Prentice-Hall. [45] Ljung, L . and T . Soderstrdm (1983). Thoery and Practice of Recursive Parameter Estimation. M I T Press, London. [46] L u , G . and G . A . Dumont ( D e c , 1994). Sampled-Data G P C with Integral A c t i o n : The State Space Approach. In 'Proceedings of the 33rd C D C , L a k e Buena Vista, F L ' . [47] M a , C . C . H . (December, 1991). 'Unstabilizability of linear unstable systems w i t h input l i m i t s ' . Transactions [48] of the ASME, MMkila, P . M . (1990). Automatica, Vol. 113. 'Laguerre series approximation of infinite dimensional Vol. 26, No. 6. systems'. [49] M a k i l a , P. M . (1991). ' O n identification of stable systems and optimal approximation'. Automatica, [50] Middleton, R . H . and G . C . G o o d w i n (1990). Digital Approach.. [51] Vol. 27 No. 4. Estimation and Control: A Unified Englewood Cliffs, N J : Prentice-Hall,. M o r a r i , M . (September,1993). M o d e l predictive control: Multivariable control technique of choice i n the 1990s?. In 'Workshop on Model-Based Predictive Control, Oxford University, England'. [52] M o s c a , E . and J . Zhang (1992). 'Stable redesign of predictive control'. Automatica, No. 6, pp. [53] 1229-1233. M o s c a , E . , G . Zappa and J. M . Lemos (1989). 'Robustness regulators: M u s m a r ' . Automatica, [54] Vol. 28, Vol. 25 of multipredictor adaptive No.4. Nicolao, G . D . and R . Scattolini (September, 1993). Stability and output terminal constraints in predictive control. In 'Workshop on Model-Based Predictive C o n t r o l ' . [55] Pappas, T., A . J. Laub and N . R . Sandell (1980). ' O n the numerical solution of the discrete-time algebraic riccati equation'. IEEE Trans. Auto. Control, [56] Parks, T . W . (1971). 'Choice of time scale i n Laguerre approximations measurements'. IEEE Trans. Auto. Control, [57] AC-25. using signal AC-16. Peng, H . and M . Tomizuka (1991). 
Preview control for vehicle lateral guidance i n highway. In 'Proceedings of the 1991 American Control Conference'. [58] Peterka, V . (1984). 'Predictor based self-tuning control'. Automatica, [59] Power, H . M . and B . Porter (1970, 6.). 'Necessary and sufficient conditions for controllability of multivariable systems incorporating integral feedback'. Electron. [60] Lett. Rawlings, J . and K . R . M u s k e (1993). 'The stability of constrained receding horizon control,'. IEEE [61] Vol. 20. Trans. Auto. Contr., 38. Richalet, J . , A . Rault, J. L . Testud and J . Papon (1978a). ' M o d e l predictive heuristic control: Applications to Industrial Processes'. Automatica. Vol. 14. [62] Richalet, J . , A . Rault, J. L . Testud and J . Papon (1978ft). ' M o d e l predictive heuristic control: Applications to Industrial Processes'. Automatica. [63] Astrom, K . J. and B . Wittenmark (1984). Computer Vol. 14. Controlled Systems-Theory and Design. Englewood Cliffs, N J : Prentice H a l l . [64] Astrom, K . J . and B . Wittenmark (1989). Adaptive Control. Addison-Wesley Publishing Company. [65] Astrom, K . J . and P . Eykhoff (1971). 'System identification - a survey'. Automatica, [66] Robinson, W . R . and A . C . Soudak (1970). ' A method for the identification of time-delays i n linear systems'. IEEE Tran. Aut. [67] Control. Sagara, S. and Zhen-Yu Zhao (1990). 'Numerical integration approach to on-line identification of continuous-time systems'. [68] Vol. 7. Automatica. Salgado, M . . E . , G . C . G o o d w i n and R . H . Middleton (1988). 'Modified least squares algorithm incorporating exponential resetting and forgetting'. Int. J. Control, Vol. 47, No. 2. [69] Scokaert, P . O . M . and D . W . Clarke (1994). Stability and feasibility i n constrained predictive control. In 'Advances i n M o d e l Based Predictive Control'. Oxford Science Publications. [70] Sinha, N . (1972). 'Estimation of the transfer function of a continuous-time systems from sampled data'. 
IEE Proceedings [71] Sinha, N . and S. Puthenpura Part D, Vol. 119. (November, 1985). ' C h o i c e of the sampling interval for the identification of continuous-time systems from samples of input/output data'. Proceedings [72] IEE Part D, Vol. 132(6). Soderstrom, T . and P . G . Stoica (1989). System Identification. Prentice-Hall, H e m e l Hempstead, * U.K. [73] Soeterboek, R . (1992). Predictive [74] Souza, C . D . , M . R . Gevers and G . C . G o o d w i n (Sep. 1986). 'Riccati equations i n optimal filtering Control - A Unified Approach. Prentice-Hall. of nonstabilizable systems having singular state transition matrices'. IEEE Auto. Contr. , Vol. AC-31, No. 9. Trans. [75] Sznaier, M . and F . Blanchini (Vol. 5, 1995). 'Robust control of constrained systems v i a convex optimization'. International [76] Journal of Robust and Nonlinear Control. Tomizuka, M . and D . E . Whitney (Dec. 1975). 'The discrete optimal finite preview control problem ( why and how is future information important?)'. ASME Systems, Measurement [77] Journal of and Control, Vol. 97, No.4. Tomizuka, M . , D . Dornfeld, X . Q . B i a n and H . C . C a i (Mar. 1984). 'Experimental evaluation of the preview servo scheme for a two-axis positioning system'. ASME Journal of Systems, Measurement and Control, Vol. 106, [78] Dynamic Dynamic No.l. Unbehauen, H . and G . P . Rab (1987). Identification of Continuous Systems. North-Holland, Amsterdam. [79] Unbehauen, H . and G . P. Rao (1990). 'Continuous-time approaches to system identificationa survey'. [80] Automatica. Wahlberg, B . ( M a y 1991). 'System identification using Laguerre models'. IEEE on Automatic [81] Control, Vol. 36, No. 5. Walgama, K . S. and J . Sternby (1990). 'Inherent observer property i n a class of anti-windup compensators'. Int. J. Control, Vol. 52, No. 3, [82] Transactions 705-724. Wiener, N . (1956). The theory of Prediction. Modern Mathematics for Engineers,. N e w York, McGraw-Hill. [83] X i e , X . and R . J . 
Evans (1984). 'Discrete-time adaptive control for deterministic time-varying systems'. Automatica, [84] Vol. 20, No. 3. Ydstie, B . (1984). Extended horizon adaptive control. In 'Proceedings of the 9th I F A C W o r l d Congress, Budapest, Hungary'. [85] Young, P . C . (1981). 'Parameter estimation for continuous-time models- a survey.'. Automatica, [86] Vol. 17. Young, P . C . and A . Jakeman (1980). 'Refined instrumental variable methods of recursive time-series analysis, part i i i . extensions.'. Int. J. Control, Vol. 31. Zervos, C . C . and G . A . Dumont (1988). 'Deterministic adaptive control based on Laguerre series representation'. Int. J. Control, Vol. 48, No. 6. 145
Sampled-data generalized predictive control (SDGPC) Lu, Guoqiang 1996-12-31
Item Metadata
Title | Sampled-data generalized predictive control (SDGPC) |
Creator | Lu, Guoqiang |
Date | 1996 |
Date Issued | 2009-02-19 |
Description | This thesis develops a novel predictive control strategy called Sampled-Data Generalized Predictive Control (SDGPC). SDGPC is based on a continuous-time model yet assumes the projected control profile to be piecewise constant, i.e. compatible with a zero-order-hold circuit. It thus enjoys both the advantages of continuous-time modeling and the flexibility of digital implementation. SDGPC is shown to be equivalent to an infinite-horizon LQ control law under certain conditions. For well-damped open-loop stable systems, the piecewise-constant projected control scenario adopted in SDGPC is shown to have benefits such as reduced computational burden and increased numerical robustness. When extending SDGPC to tracking design, it is shown that future knowledge of the setpoint significantly improves tracking performance. A two-degree-of-freedom SDGPC based on optimization of two performance indices is proposed. Actuator constraints are considered in an anti-windup framework. It is shown that the nonlinear control problem is equivalent to a linear time-varying problem. The proposed anti-windup algorithm is also shown to have attractive stability properties. Time-delay systems are then treated. It is shown that the Laguerre-filter-based adaptive SDGPC has excellent performance controlling systems with varying time delay. An algorithm for continuous-time system parameter estimation based on sampled input-output data is presented. The effectiveness and the advantages of continuous-time model estimation and the SDGPC algorithm over the pure discrete-time approach are highlighted by an inverted pendulum experiment. |
Extent | 6343712 bytes |
Genre | Thesis/Dissertation |
Type | Text |
File Format | application/pdf |
Language | eng |
Collection | Retrospective Theses and Dissertations, 1919-2007 |
Series | UBC Retrospective Theses Digitization Project |
Date Available | 2009-02-19 |
Provider | Vancouver : University of British Columbia Library |
Rights | For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use. |
DOI | 10.14288/1.0065201 |
URI | http://hdl.handle.net/2429/4788 |
Degree | Doctor of Philosophy - PhD |
Program | Electrical and Computer Engineering |
Affiliation | Applied Science, Faculty of; Electrical and Computer Engineering, Department of |
Degree Grantor | University of British Columbia |
Graduation Date | 1996-05 |
Campus | UBCV |
Scholarly Level | Graduate |
Aggregated Source Repository | DSpace |