"Applied Science, Faculty of"@en . "Electrical and Computer Engineering, Department of"@en . "DSpace"@en . "UBCV"@en . "Lu, Guoqiang"@en . "2009-02-19T00:00:00"@en . "1996"@en . "Doctor of Philosophy - PhD"@en . "University of British Columbia"@en . "This thesis develops a novel predictive control strategy called Sampled-Data Generalized Predictive\r\nControl (SDGPC). SDGPC is based on a continuous-time model yet assumes the projected\r\ncontrol profile to be piecewise constant, i.e. to be compatible with zero order hold circuit. It thus\r\nenjoys both the advantage of continuous-time modeling and the flexibility of digital implementation.\r\nSDGPC is shown to be equivalent to an infinite horizon LQ control law under certain conditions.\r\nFor well-damped open-loop stable systems, the piecewise constant projected control scenario adopted\r\nin SDGPC is shown to have benefits such as reduced computational burden, increased numerical\r\nrobustness etc. When extending SDGPC to tracking design, it is shown that future knowledge of\r\nthe setpoint significandy improves tracking performance. A two-degree-of-freedom SDGPC based\r\non optimization of two performance indices is proposed. Actuator constraints are considered in an\r\nanti-windup framework. It is shown that the nonlinear control problem is equivalent to a linear\r\ntime-varying problem. The proposed anti-windup algorithm is also shown to have attractive stability\r\nproperties. Time-delay systems are treated later. It is shown that the Laguerre-filter-based adaptive\r\nSDGPC has excellent performance controlling systems with varying time-delay. An algorithm for\r\ncontinuous-time system parameter estimation based on sampled input output data is presented. The\r\neffectiveness and the advantages of continuous-time model estimation and the SDGPC algorithm over\r\nthe pure discrete-time approach are highlighted by an inverted pendulum experiment."@en . 
"https://circle.library.ubc.ca/rest/handle/2429/4788?expand=metadata"@en . "6343712 bytes"@en . "application/pdf"@en . "Sampled-Data Generalized Predictive Control (SDGPC) by Guoqiang Lu B.Sc. Beijing Institute of Technology, 1984 M.Sc. Beijing Institute of Technology, 1987 A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES DEPARTMENT OF ELECTRICAL ENGINEERING We accept this thesis as conforming to the required standard THE UNIVERSITY OF BRITISH COLUMBIA January 1996 \u00C2\u00A9 Guoqiang Lu, 1996 In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission. Department of EUyty^L % M * W / < j \u00C2\u00BB The University of British Columbia Vancouver, Canada Date Tel. % \u00C2\u00AB* ; (9? & Abstract This thesis develops a novel predictive control strategy called Sampled-Data Generalized Pre-dictive Control (SDGPC). SDGPC is based on a continuous-time model yet assumes the projected control profile to be piecewise constant, i.e. to be compatible with zero order hold circuit. It thus enjoys both the advantage of continuous-time modeling and the flexibility of digital implementation. SDGPC is shown to be equivalent to an infinite horizon LQ control law under certain conditions. For well-damped open-loop stable systems, the piecewise constant projected control scenario adopted in SDGPC is shown to have benefits such as reduced computational burden, increased numerical robustness etc. 
When extending SDGPC to tracking design, it is shown that future knowledge of the setpoint significantly improves tracking performance. A two-degree-of-freedom SDGPC based on optimization of two performance indices is proposed. Actuator constraints are considered in an anti-windup framework. It is shown that the nonlinear control problem is equivalent to a linear time-varying problem. The proposed anti-windup algorithm is also shown to have attractive stability properties. Time-delay systems are treated later. It is shown that the Laguerre-filter-based adaptive SDGPC has excellent performance controlling systems with varying time-delay. An algorithm for continuous-time system parameter estimation based on sampled input-output data is presented. The effectiveness and the advantages of continuous-time model estimation and the SDGPC algorithm over the pure discrete-time approach are highlighted by an inverted pendulum experiment.

Table of Contents

Abstract ii
List of Tables v
List of Figures vi
Acknowledgment ix
1 Introduction 1
1.1 Background and Motivation 1
1.2 Literature Review 2
1.3 Contribution of the Thesis 4
1.4 Outline of the Thesis 5
2 Sampled-Data Generalized Predictive Control (SDGPC) 7
2.1 Formulation of SDGPC 7
2.2 Stability Properties of SDGPC 14
2.2.1 Stability of SDGPC with control execution time Texe = Tm 14
2.2.2 Property of SDGPC with control execution time Texe < Tm

At the end of the prediction horizon, the tracking error satisfies the terminal constraint

e(t) + cᵀA^{-1}(e^{A Tp} − I) x_d(t) + cᵀΓ_0(Tp) u_d = 0    (2.14)

Let ∂J(t)/∂u_d = 0; we have the optimal solution for u_d:

u_d = K_d x_d(t) + K_e e(t)    (2.15)

where

K_d = −K1^{-1} ( K3 T_d + T_g K2 [ e^{A Tp} ; cᵀA^{-1}(e^{A Tp} − I) ] )
K_e = −K1^{-1} ( K3 T_e + T_g K2 [ 0 ; 1 ] )

and the quantities T_d, T_e, T_g, K1, K2 and K3 are defined in (2.16).
T_d = ∫_0^{Tp} Γ_0ᵀ(T) c cᵀ A^{-1}(e^{AT} − I) dT,   T_e = ∫_0^{Tp} Γ_0ᵀ(T) c dT

T_g = Γᵀ(Tp) (formed for the augmented pair (A_f, B_f))

K1 = ∫_0^{Tp} Γ_0ᵀ(T) c cᵀ Γ_0(T) dT + λ ∫_0^{Tp} Hᵀ(T) H(T) dT

K2 = ( T_gᵀ K1^{-1} T_g )^{-1},   K3 = I − T_g K2 T_gᵀ K1^{-1}    (2.16)

Figure 2.2: The implementation scheme

Fig. 2.2 shows the SDGPC law (2.15) in block-diagram form. As we mentioned earlier, the control law (2.15) does not necessarily need to be implemented with the design sampling interval Tm. When the execution sampling interval Texe goes to zero, we can integrate both sides of (2.15) to obtain an integral control law (2.17) in terms of the state and the control signal of the original system (2.1):

u(t) = K_d x(t) + K_e ∫ᵗ e(τ) dτ + η0    (2.17)

The block diagram of control law (2.17) is shown in Fig. 2.3.

Figure 2.3: The integral control law

The constant term η0 in (2.17) is unspecified and has no bearing on the problem, in the sense that it affects neither the closed-loop eigenvalues nor the asymptotic property e(t) → 0 as t → ∞, provided that the integral control law (2.17) is stabilizing. However, we can make use of this fact and let η0 be proportional to the constant setpoint w. The effect is that a system zero can be placed in a desired location to improve the transient response of the closed-loop system under control law (2.17). The scheme is depicted in Fig. 2.4. Details on how to select the feedforward gain Kw in Fig. 2.4 will be discussed in section 2.3.

Figure 2.4: Zero placing strategy

SDGPC was developed above by minimizing the cost (2.4) subject to the end-point state constraint (2.5).
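The end-point-constrained formulation can be illustrated numerically. The sketch below is not the thesis's multivariable derivation: it takes an illustrative scalar plant ẋ = a·x + b·u (a ≠ 0), Nu piecewise-constant moves tiling [0, Tp], and computes the minimum-effort move sequence satisfying the terminal constraint x(t + Tp) = 0, i.e. the special case of minimizing (2.4) subject to (2.5) with no output weighting:

```python
import math

def endpoint_constrained_moves(a, b, x0, Tp, Nu):
    """Minimum-norm piecewise-constant input driving the scalar plant
    xdot = a*x + b*u (a != 0) from x(0) = x0 to x(Tp) = 0.
    Each of the Nu moves is held for Tm = Tp/Nu (zero-order hold)."""
    Tm = Tp / Nu
    g = (b / a) * (math.exp(a * Tm) - 1.0)           # one-interval input gain
    # phi[i] = influence of move i on the terminal state x(Tp)
    phi = [math.exp(a * (Nu - 1 - i) * Tm) * g for i in range(Nu)]
    free = math.exp(a * Tp) * x0                     # unforced terminal state
    # minimum-norm solution of the single linear constraint free + phi.u = 0
    scale = -free / sum(p * p for p in phi)
    return [p * scale for p in phi], Tm

def simulate_zoh(a, b, x0, moves, Tm):
    """Exact zero-order-hold simulation of xdot = a*x + b*u."""
    x = x0
    for u in moves:
        x = math.exp(a * Tm) * x + (b / a) * (math.exp(a * Tm) - 1.0) * u
    return x
```

Even for an unstable pole (a > 0) the computed moves drive x exactly to zero at t = Tp; a receding horizon implementation would apply only the first move and re-solve at the next sample.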
Another approach is to include the end-point state in the cost functional:

J(t) = ∫_0^{Tp} [ e²(t + T) + λ u̇²(t + T) ] dT + γ x_fᵀ(t + Tp) x_f(t + Tp)    (2.18)

Substituting equations (2.6), (2.9) and (2.12) into the performance index (2.18) gives

J(t) = ∫_0^{Tp} { [ e(t) + cᵀA^{-1}(e^{AT} − I) x_d(t) + cᵀΓ_0(T) u_d ]² + λ [ H(T) u_d ]² } dT
     + γ [ e^{A Tp} x_d(t) + Γ(Tp) u_d ]ᵀ [ e^{A Tp} x_d(t) + Γ(Tp) u_d ]
     + γ [ e(t) + cᵀA^{-1}(e^{A Tp} − I) x_d(t) + cᵀΓ_0(Tp) u_d ]²    (2.19)

Setting ∂J(t)/∂u_d = 0, we have the solution for u_d:

u_d = −K ( T_d x_d(t) + T_e e(t) )    (2.20)

where

K = [ ∫_0^{Tp} Γ_0ᵀ(T) c cᵀ Γ_0(T) dT + λ ∫_0^{Tp} Hᵀ(T) H(T) dT + γ Γᵀ(Tp) Γ(Tp) + γ Γ_0ᵀ(Tp) c cᵀ Γ_0(Tp) ]^{-1}

T_d = ∫_0^{Tp} Γ_0ᵀ(T) c cᵀ A^{-1}(e^{AT} − I) dT + γ Γᵀ(Tp) e^{A Tp} + γ Γ_0ᵀ(Tp) c cᵀ A^{-1}(e^{A Tp} − I)

T_e = ∫_0^{Tp} Γ_0ᵀ(T) c dT + γ Γ_0ᵀ(Tp) c

and Γ, Γ_0, H are defined by equations (2.10), (2.13) and (2.7) respectively. Obviously, when γ → ∞, control laws (2.20) and (2.15) become equivalent.

The main point of this section is that by selecting the projected control derivative scenario to be piecewise constant, a predictive control law, SDGPC, can be designed based on continuous-time modelling without causing any difficulty for digital implementation. This is in sharp contrast to CGPC.

2.2 Stability Properties of SDGPC

Stability results for GPC with terminal state constraints (or weighting) are available both for discrete-time [11, 12, 52] and continuous-time [16] formulations. A natural question is whether SDGPC possesses such stability properties. This question will be answered in this section. The basic idea is to show that SDGPC is equivalent to a stabilizing discrete-time receding horizon LQ control law. The important work of Bitmead et al. [4] is included in Appendix A for completeness. Those results are used to establish the stability property of SDGPC in section 2.2.1.
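The target form of that equivalence, a finite-horizon discrete-time LQ problem, is easy to exercise on its own. The sketch below (an illustrative scalar example, not part of the thesis) computes the first gain of a receding horizon LQ law by a backward Riccati recursion, with the terminal constraint x(N) = 0 approximated by a very large terminal weight:

```python
def receding_horizon_lq_gain(phi, gamma, q, r, N, terminal_weight=1e12):
    """First gain of an N-step scalar discrete-time LQ problem
    x(i+1) = phi*x(i) + gamma*u(i); the end-point constraint x(N) = 0
    is approximated by a huge terminal state weight."""
    p = terminal_weight
    k0 = 0.0
    for _ in range(N):                      # backward Riccati recursion
        k0 = (gamma * p * phi) / (r + gamma * p * gamma)
        p = q + phi * p * phi - phi * p * gamma * k0
    return k0                               # receding horizon law: u = -k0 * x

k = receding_horizon_lq_gain(phi=1.1, gamma=0.1, q=1.0, r=0.01, N=8)
```

With N = 1 the huge terminal weight forces an essentially deadbeat gain k ≈ phi/gamma; for larger N the gain settles to a stabilizing value, which is the behavior the stability results of this section formalize.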
2.2.1 Stability of SDGPC with control execution time Texe = Tm

The SDGPC stability problem is attacked by first applying a transformation to convert the SDGPC problem to a discrete-time receding horizon problem, then making use of the stability results summarized in Theorem A.10 and Corollary A.2. The transformation is based on the work of Levis et al. [43] in which the infinite horizon problem was treated. Recall the state augmented system described by equation (2.2):

ẋ_f = A_f x_f + B_f u_d,   dim(x_f) = n_f = n + 1    (2.21)

Assuming that the execution sampling interval Texe under SDGPC control is the same as the design sampling interval Tm, the discrete-time equivalent of the augmented system (2.21) is then

x_f(i+1) = Φ x_f(i) + Γ u_d(i)
y_f(i) = c_fᵀ x_f(i)    (2.22)

with

Φ = e^{A_f Tm},   Γ = ∫_0^{Tm} e^{A_f τ} B_f dτ    (2.23)

Recall the cost functional (2.4) with Q = c_f c_fᵀ:

J(t) = ∫_0^{Tp} [ x_fᵀ(t + T) Q x_f(t + T) + u_dᵀ(t + T) R u_d(t + T) ] dT    (2.24)

subject to: x_f(t + Tp) = 0.

With the projected control scenario described by (2.6) as in Fig. 2.1, the cost (2.24) can be expressed as the sum of Nu integrals:

J(t) = ∫_0^{Tp} [ x_fᵀ(t + T) Q x_f(t + T) + u_dᵀ(t + T) R u_d(t + T) ] dT
     = Σ_{i=0}^{Nu−1} ∫_{iTm}^{(i+1)Tm} [ x_fᵀ(t + T) Q x_f(t + T) + u_dᵀ(t + T) R u_d(t + T) ] dT    (2.25)

Define

x_f(i) = x_f(t + iTm),   u_d(i) = u_d(t + iTm),   i = 0, 1, ..., Nu − 1    (2.26)

The integrals in (2.25) can be expressed as

∫_{iTm}^{(i+1)Tm} [ x_fᵀ(t + T) Q x_f(t + T) + u_dᵀ(t + T) R u_d(t + T) ] dT
  = ∫_0^{Tm} [ x_fᵀ(i + τ) Q x_f(i + τ) + u_dᵀ(i + τ) R u_d(i + τ) ] dτ    (2.27)

The inter-sampling behavior x_f(i + τ) of system (2.22) is a function of x_f(i) and u_d(i) as follows:

x_f(i + τ) = e^{A_f τ} x_f(i) + ∫_0^τ e^{A_f (τ − s)} B_f ds u_d(i)    (2.28)

Substituting equation (2.28) into equation (2.27), we have
∫_{iTm}^{(i+1)Tm} [ x_fᵀ(t + T) Q x_f(t + T) + u_dᵀ(t + T) R u_d(t + T) ] dT
  = x_fᵀ(i) Q̄ x_f(i) + 2 x_fᵀ(i) M u_d(i) + u_dᵀ(i) R̄ u_d(i)    (2.29)

where

Q̄ = ∫_0^{Tm} e^{A_fᵀ τ} Q e^{A_f τ} dτ
M = ∫_0^{Tm} e^{A_fᵀ τ} Q [ ∫_0^τ e^{A_f t} dt ] dτ B_f
R̄ = Tm R + B_fᵀ ∫_0^{Tm} [ ∫_0^τ e^{A_fᵀ t} dt ] Q [ ∫_0^τ e^{A_f t} dt ] dτ B_f    (2.30)

Finally the continuous-time cost (2.24) has the form

J(t) = ∫_0^{Tp} [ x_fᵀ(t + T) Q x_f(t + T) + u_dᵀ(t + T) R u_d(t + T) ] dT
     = Σ_{i=0}^{Nu−1} [ x_fᵀ(i) Q̄ x_f(i) + 2 x_fᵀ(i) M u_d(i) + u_dᵀ(i) R̄ u_d(i) ]    (2.31)

Remarks:
1. These weighting matrices are time-invariant as long as Tm is constant. The symmetric and positive semi-definite or positive definite properties of Q, R are preserved in Q̄, R̄.
2. Even if the control weighting R = 0 in the original cost functional, there always is a non-zero weighting term R̄ in the equivalent discrete-time cost.

Note that there is a cross-product term in the discrete-time cost (2.31) involving x_f(i) and u_d(i). However, by a transformation [43], the cross-product term can be removed to form a standard discrete-time cost. Define

Q̃ = Q̄ − M R̄^{-1} Mᵀ
Φ̃ = Φ − Γ R̄^{-1} Mᵀ
v(i) = R̄^{-1} Mᵀ x_f(i) + u_d(i)    (2.32)

By substituting equation (2.32) into the system equation (2.22) and the associated cost (2.31), we obtain

x_f(i+1) = Φ̃ x_f(i) + Γ v(i)
y_f(i) = c_fᵀ x_f(i)
dim(x_f) = n_f = n + 1    (2.33)

and cost functional

J(t) = Σ_{i=0}^{Nu−1} [ x_fᵀ(i) Q̃ x_f(i) + vᵀ(i) R̄ v(i) ],   x_f(Nu) = 0    (2.34)

For clarity, the above derivation is summarized in Table 2.1:

SDGPC problem:
  System equation: ẋ_f = A_f x_f + B_f u_d,  dim(x_f) = n_f = n + 1
  Performance index: J(t) = ∫_0^{Tp} [ x_fᵀ(t+T) Q x_f(t+T) + u_dᵀ(t+T) R u_d(t+T) ] dT
  Final state constraint: x_f(Tp) = 0
Discrete-time receding horizon LQ problem:
  System equation: x_f(i+1) = Φ̃ x_f(i) + Γ v(i),  y_f(i) = c_fᵀ x_f(i),  dim(x_f) = n_f = n + 1
  Performance index: J(t) = Σ_{i=0}^{Nu−1} [ x_fᵀ(i) Q̃ x_f(i) + vᵀ(i) R̄ v(i) ]
  Final state constraint: x_f(Nu) = 0
Relationships:
  Φ = e^{A_f Tm},  Γ = ∫_0^{Tm} e^{A_f τ} B_f dτ,  Q̄, M, R̄ as in (2.30)
  Q̃ = Q̄ − M R̄^{-1} Mᵀ,  Φ̃ = Φ − Γ R̄^{-1} Mᵀ,  v(i) = R̄^{-1} Mᵀ x_f(i) + u_d(i)
Table 2.1: Comparison of SDGPC and discrete-time receding horizon LQ control

We summarize the above results as follows:

Lemma 2.1 When the execution time interval Texe is equal to the design sampling interval Tm, the SDGPC problem can be transformed to a standard discrete-time receding horizon LQ control problem as summarized in Table 2.1.

From Lemma 2.1, it is clear that the stability problem of SDGPC boils down to finding the conditions, in terms of system (2.1), under which Theorem A.10 holds. We have the following results to serve this purpose. Lemma 2.2 investigates the controllability and observability of the integrator augmented system (2.2). The proof of the controllability part can be found in [59]. The proof of the observability part is straightforward as given below.

Lemma 2.2 (Power et al. [59]) If the original system (2.1) with triple (A, B, cᵀ) is
a. both controllable and observable, and
b. has no system zero at the origin,
then the augmented system (2.2) with triple (A_f, B_f, c_fᵀ) is also controllable and observable.

Proof: The proof for controllability of (A_f, B_f) can be found in Power and Porter [59]. The observability matrix of (A, cᵀ) is the n × n matrix

O_{A,c} = [ cᵀ ; cᵀA ; cᵀA² ; ... ; cᵀA^{n−1} ],   rank(O_{A,c}) = n

The observability matrix of (A_f, c_fᵀ) is

O_{A_f,c_f} = [ 0_{1×n}  1 ; O_{A,c}  0_{n×1} ]

Obviously rank(O_{A_f,c_f}) = n + 1, and the pair (A_f, c_fᵀ) is observable. ∎

Remark: Condition b is intuitively obvious. If it is violated, there is no way that the output of system (2.1) can be driven to a nonzero setpoint; or, in terms of the augmented system (2.2), the state e(t) with nonzero initial value cannot be driven to the origin.

The following theorem is due to Kalman et al. [38].

Theorem 2.1 (Kalman et al. [38]) Let the continuous-time system (2.2) be controllable.
Then the discrete-time system (2.22) is completely controllable if

Im( λ_i{A_f} − λ_j{A_f} ) ≠ 2nπ / Tm,   n = ±1, ±2, ...    (2.40)

whenever Re( λ_i{A_f} − λ_j{A_f} ) = 0.

Lemma 2.3 (Anderson et al. [3, pp. 354]) Assume (Φ, Γ) given by equation (2.23) is controllable; then (Φ̃, Γ) given by (2.32) is also controllable.

Proof: The proof is obvious. Recall Φ̃ = Φ − Γ R̄^{-1} Mᵀ; the controllability of a controllable pair (Φ, Γ) cannot be changed by state feedback. ∎

Lemma 2.4 (Levis et al. [43]) Q̃ ≥ 0.

Proof: Since Q ≥ 0, R ≥ 0, every integrand in (2.25),

I_i = ∫_{iTm}^{(i+1)Tm} [ x_fᵀ(t + T) Q x_f(t + T) + u_dᵀ(t + T) R u_d(t + T) ] dT
    = x_fᵀ(i) Q̄ x_f(i) + 2 x_fᵀ(i) M u_d(i) + u_dᵀ(i) R̄ u_d(i)    (2.41)

is nonnegative for any u_d(i). Let u_d(i) = −R̄^{-1} Mᵀ x_f(i); then I_i = x_fᵀ(i)(Q̄ − M R̄^{-1} Mᵀ) x_f(i) = x_fᵀ(i) Q̃ x_f(i) ≥ 0 for any x_f(i). So Q̃ ≥ 0. ∎

Lemma 2.5 establishes the equivalence between the observability of the pair (Φ̃, Q̃) and the observability of the augmented system (2.2). This is a special case of the results for periodic time-varying systems given by Al-Rahmani and Franklin [2], in which a multi-rate control strategy is used. A simpler proof based on the controllability and observability concepts of [6], [37] is given in the following.

Lemma 2.5 Assume the controllability conditions of Theorem 2.1 hold; then (Φ̃, Q̃) is observable if and only if the pair (A_f, c_fᵀ) of equation (2.2) is observable.

Sufficiency: Assume there exist λ and z ≠ 0 such that Φz = λz and Q e^{A_f τ} z = 0 for any τ > 0
Recal l equation (2.41), let Xf(i) = z, \id(i) = \u00E2\u0080\u0094R~lMTz, we have (t+i)r,\u00E2\u0080\u009E Ii= J [xT(t + T)Qxf{t + T) + uT(t + T)Rud(t-rT)]dT \u00C2\u00AB T \u00C2\u00BB (2.43) = xT(i)Qxf(i) + 2xT(i)Mud(i) + uJ(i)Rud(i) = zTQz = 0 Since Q > 0, R > 0, equation (2.43) implies / xT(T)Qxf(r)dT = J uT(T)Rud(T)dT = 0. o o Tm Further, J uT(T)Rud(r)dT = z T M R ~ 1 (TmR)R~1MT2 = 0. Since \u00C2\u00A3 - 1 ( T m i ? ) . R - 1 > 0, we 0 have M T 2 = 0. From equation (2.32), zTQz - zTMRr1MTz - zTQz = 0 and $ z = - TR-^M^z = $ z = A* . From equation (2.30), z r Q z = 0 implies QeAt*z = 0. But the existence of z ^ 0 such that $ 2 = A2, QeAftz = 0 contradicts the observability assumption o f ( A / , Q ) . \u00E2\u0080\u00A2 N o w , we are in a position to state the main stability property of S D G P C . Theorem 2.2 For systems described by equation (2.1), if a. The triple (A,B,cT) is both controllable and observable. b. There is no system zero at the origin. c. The control execution time Tm is selected such that the condition in Theorem 2.1 is fulfilled, then the resulting closed loop system under SDGPC is asymptotically stable for Nu > n + 1. Proof: According to lemma 2.1, S D G P C of system (2.1) is equivalent to receding horizon control of discrete-time system (2.33). Thus we need only to prove the stability of the receding horizon control problem for system (2.33) with performance index (2.34). Conditions a. and b. guarantee the controllability and observability of the integrator augmented system (2.2) according to 21 Chapter 2: Sampled-Data Generalized Predictive Control (SDGPC) lemma 2.2. From condition c. and Theorem 2.1, it is apparent that the discrete-time counterpart of (2.2) given by (2.22) is also controllable and observable. Applying lemma 2.3-2.5, it is obvious that Q > 0, R > 0 in (2.34), is controllable and is observable. A p p l y Theorem A.10 proves the theorem. 
2.2.2 Property of SDGPC with control execution time Texe < Tm

In section 2.1, we mentioned that the execution sampling time interval Texe, i.e. the time interval with which the plant is actually being sampled, can take any value on [0, Tm]. The case of Texe < Tm will be analyzed in this section. This strategy is very similar in spirit to the well-known GPC design practice of selecting a smaller control horizon than the prediction horizon, in which case the computational burden can be greatly reduced. Fig. 2.5 illustrates these two closely related strategies.

Figure 2.5: Comparison of SDGPC and GPC strategy

GPC [13] design is based on minimization of the following performance index:

J(N) = Σ_{i=N1}^{N2} [ y(t + i) − w(t + i) ]² + λ Σ_{i=0}^{Nu−1} [ Δu(t + i) ]²    (2.44)

N1 can always be selected as zero. The prediction horizon N2 corresponds to the prediction time Tp in SDGPC. The control weighting λ has the same meaning in SDGPC, but the control horizon Nu has a quite different interpretation, as is clearly illustrated in Fig. 2.5. In SDGPC, Nu is the number of controls that will cover the whole prediction horizon Tp, while in GPC it is the number of controls that only cover a portion of the prediction horizon, after which the control is kept constant or the increment of the controls is kept zero. And in SDGPC the control execution time Texe is not necessarily equal to the design sampling time Tm. It is also possible to assume the projected controls in SDGPC design have the same form as that of GPC, or any other form, say a polynomial up to a certain degree.
However, the advantage of choosing piecewise constant, equally spaced controls over the entire prediction horizon is that in doing so the SDGPC problem can be transformed into a discrete-time receding horizon LQ problem, for which powerful stability analysis methods from optimal control theory can be utilized, and improved numerical properties can be expected because a larger design sampling interval Tm is used. Referring to Fig. 2.5, both SDGPC and GPC use Nu = 4. Both SDGPC and GPC update their control every Texe seconds. Both SDGPC and GPC use the same prediction horizon: N2 · Texe = Tp. The difference is that the design sampling interval in SDGPC is Tm = 4 · Texe, i.e. four times as large as the execution time. Both strategies have the effect of reducing the computational burden and damping the control action. However, SDGPC will have superior numerical properties when Texe is small, because SDGPC is computed based on the larger design sampling interval Tm while GPC is still based on Texe. Another advantage of SDGPC is that although neither strategy has a stability guarantee when Nu < N2 or Texe < Tm, we know that the same SDGPC law does have guaranteed stability when Texe = Tm. While it is very natural to choose Texe < Tm in SDGPC design under the framework of the receding horizon strategy, it is almost unthinkable for any other controller synthesis method to design a stabilizing control law for one sampling interval but to apply it to the process with another sampling interval. It is well known that discrete-time design methods based on the z-transform will encounter numerical problems when the sampling interval is small [50]. In SDGPC, a larger design sampling interval Tm can be used to improve numerical properties while implementing the controller with a shorter sampling interval Texe.
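The numerical argument for designing at Tm while executing at Texe can be seen from the pole locations themselves: a continuous-time pole a maps to the discrete pole e^{a·T} under sampling interval T, so as T shrinks the poles of a stable plant crowd against the unit circle, which is the classic ill-conditioning of fast-sampled z-domain designs mentioned above. A minimal numerical illustration (not from the thesis):

```python
import math

def discrete_pole(a, T):
    """Map a continuous-time pole a to its discrete-time image exp(a*T)."""
    return math.exp(a * T)

# A stable pole at a = -1: at a fast execution rate the discrete pole is
# nearly on the unit circle; at a larger design interval it is well inside.
pole_at_Texe = discrete_pole(-1.0, 0.01)   # fast execution sampling
pole_at_Tm = discrete_pole(-1.0, 0.7)      # slower design sampling
```

With Texe = 0.01 s the pole sits at about 0.990, barely distinguishable from 1 in finite precision, while the design-rate pole at Tm = 0.7 s is near 0.497.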
Although there are no general stability results for Texe < Tm, extensive simulation examples will be presented in the next section to offer guidelines for selecting Tm and Texe. As a by-product, those simulations will also shed some light on the selection of the sampling interval in digital control in general.

2.3 Interpretation and Stability Property of the Integral Control Law

The integral control law (2.17) was obtained by integrating both sides of (2.15) under the assumption that Texe → 0. However, (2.17) itself can be interpreted as the solution of a well-formulated predictive control problem for system (2.1). Define the integral Ie = ∫ᵗ (y(τ) − w) dτ, where the lower limit of the integral is left blank to indicate that Ie can take any initial value, as the new state of system (2.1); the augmented system becomes:

ẋ_I = A_f x_I + B_f u + B_v w
y = [ cᵀ  0 ] x_I
dim(x_I) = n + 1    (2.45)

where

x_I = [ x ; Ie ],   A_f = [ A  0 ; cᵀ  0 ],   B_f = [ B ; 0 ],   B_v = [ 0 ; −1 ]    (2.46)

and w is the constant setpoint. Notice that the augmented system matrices A_f, B_f are exactly the same as those in (2.2). The objective of the control is to let the output y(t) of system (2.1) track the constant setpoint w without steady-state error. Thus at equilibrium, the following relations hold:

lim_{t→∞} y(t) = y_∞ = w    (2.47)

and

lim_{t→∞} u(t) = u_∞,   lim_{t→∞} Ie(t) = I_∞,   lim_{t→∞} x(t) = x_∞
y_∞ = w = cᵀ x_∞,   0 = A x_∞ + B u_∞    (2.48)

where u_∞, x_∞, I_∞ are constants whose values cannot be determined a priori from the nominal plant parameter matrices (A, B, cᵀ) and the setpoint, because of unavoidable modelling errors.
A sensible approach is thus to define the shifted input, state, integral state and output respectively as

u′(t) = u(t) − u_∞
x′(t) = x(t) − x_∞
Ie′(t) = Ie(t) − I_∞
y′(t) = y(t) − w    (2.49)

Solving (2.49) for u, x, Ie, y, substituting the results into (2.45), and using (2.48), it is not difficult to find that the shifted variables satisfy the equations

ẋ_I′ = A_f x_I′ + B_f u′,   y′ = [ cᵀ  0 ] x_I′    (2.50)

The shifted equilibrium of (2.50) is at zero, as in (2.2), and a predictive control problem can be well formulated by minimizing a quadratic performance index

J(t) = ∫_0^{Tp} [ y′²(t + T) + λ u′²(t + T) ] dT    (2.51)

And at the end of the prediction horizon Tp, the state of (2.50) should be constrained to be zero, that is x_I′(t + Tp) = 0. Although the above problem is well defined, it is still very inconvenient, to say the least, to obtain the control law, due to the unknown equilibrium point u_∞, x_∞, I_∞. A more effective formulation should thus have a model which accommodates the fact that at the equilibrium the input, output and state are all constant, but at the same time should not explicitly contain those unknown constants. Taking the derivative of both sides of the first equation of (2.45) does just that. The resulting equivalent system model has the form

ẋ_f = A_f x_f + B_f u_d + B_v ẇ    (2.52)

where

x_d(t) = ẋ(t),   u_d(t) = u̇(t),   e(t) = y(t) − w,   x_f = [ x_d ; e ]    (2.53)

For a constant setpoint, as we assumed, ẇ = 0 and (2.52) is exactly the same as (2.2). The only modification that needs to be made is that the observation matrix should be c_fᵀ in (2.3). The SDGPC problem for (2.2) and the associated performance index (2.4) can thus be interpreted as a sensible way to circumvent the unknown-equilibrium difficulty encountered in the control problem defined by (2.50) and (2.51). According to Theorem 2.2, the control law (2.15) stabilizes system (2.1).
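The integrator augmentation used in (2.2) and (2.45) is mechanical to build. A sketch with plain nested lists (the function name and the 2-state example values are illustrative, not from the thesis):

```python
def augment_with_integrator(A, B, c):
    """Build A_f = [[A, 0], [c^T, 0]] and B_f = [B; 0] as in (2.2)/(2.45):
    the extra state integrates the output (after shifting, the error)."""
    Af = [row[:] + [0.0] for row in A]     # plant dynamics, decoupled from Ie
    Af.append(list(c) + [0.0])             # d(Ie)/dt = c^T x (setpoint enters via Bv)
    Bf = [row[:] for row in B]
    Bf.append([0.0])                       # the input does not drive Ie directly
    return Af, Bf

# Illustrative 2-state plant in companion form.
Af, Bf = augment_with_integrator([[0.0, 1.0], [-2.0, -3.0]], [[0.0], [1.0]], [1.0, 0.0])
```

The returned pair has dimension n + 1, matching dim(x_f) = n_f = n + 1 in (2.45).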
Similar results can be stated about control law (2.17):

Theorem 2.3 For systems described by equation (2.1) and the integral control law (2.17), if
a. the triple (A, B, cᵀ) is both controllable and observable,
b. there is no system zero at the origin,
c. the control execution time Texe is equal to the design sampling time Tm and is selected such that the condition in Theorem 2.1 is fulfilled, and
d. a zero-order hold is used when applying (2.17) to system (2.1),
then the resulting closed-loop system under the integral control law (2.17) is asymptotically stable for Nu ≥ n + 1.

Proof: When the integral control law (2.17) is applied to (2.1) with a zero-order hold, the resulting closed-loop system matrix will be the same as that obtained by applying (2.15) to (2.1). This can be seen readily by comparing equations (2.2) and (2.45), considering the fact that the disturbance term B_v w in (2.45) does not affect the stability of the closed-loop system. Since system (2.1) is stable under the control of (2.15) according to Theorem 2.2, it will be stable as well under the control of (2.17). ∎

We mentioned in section 2.1 that the unspecified term η0 in (2.17) can be used to place a zero to improve the transient response of the closed-loop system. In the following we show that there is a sound mathematical basis for doing so. Consider system (2.52) and the cost (2.18); the T-ahead state predictor is described by (2.54) with u_d given by (2.7):

x_f(t + T) = e^{A_f T} x_f(t) + D_u(T) w(t) + Γ(A_f, B_f, T) u_d
D_u(T) = ∫_0^T e^{A_f (T − τ)} B_v dτ    (2.54)

where Γ(A_f, B_f, T) is given by (2.11) with A, B replaced by A_f, B_f.
Without detailed derivation, the optimal control for system (2.52) can be obtained as

u*_d = K_{xf} x_f(t) + K_p w(t)    (2.55)

where K_{xf} = K H_{xf}, K_p = K H_p, and

K = [ ∫_0^{Tp} Γᵀ(T) c_f c_fᵀ Γ(T) dT + λ ∫_0^{Tp} Hᵀ(T) H(T) dT + γ Γᵀ(Tp) Γ(Tp) ]^{-1}  (an Nu × Nu matrix)

H_{xf} = −[ ∫_0^{Tp} Γᵀ(T) c_f c_fᵀ e^{A_f T} dT + γ Γᵀ(Tp) e^{A_f Tp} ]

H_p = −[ ∫_0^{Tp} Γᵀ(T) c_f c_fᵀ D_u(T) dT + γ Γᵀ(Tp) D_u(Tp) ]    (2.56)

The counterpart of the optimal control sequence (2.55) with respect to system (2.45) is given by

u* = K_{xf} [ x(t) ; ∫ᵗ e(τ) dτ ] + K_p w(t)    (2.57)

The first control, which is the only one applied to the plant, is

u*(1) = K_x x(t) + K_e ∫ᵗ e(τ) dτ + K_p(1) w(t)    (2.58)

where K_x denotes the first n entries of the first row of the Nu × (n + 1) matrix K_{xf}, and K_e is the last element of the first row of K_{xf}.

The effect of K_p(1) in (2.58) is to add a zero at K_e/K_p(1) from the reference w(s) to the output y(s) [28, p. 559]. Since K_p(1) does not affect the eigenvalues of the closed-loop system matrix, it can take any value in addition to the one computed by equation (2.56). This provides one extra degree of freedom in the design.

Example 2.3.1: In this simulation, the plant with the third-order transfer function

G(s) = 3/(…)    (2.59)

is controlled using control law (2.58) with the following design parameters:

Nu = 1,   Tp = 6s,   Texe = 0.1s,   λ = 10^{-4},   γ = 100    (2.60)

The resulting feedback gains are:

K_x = −[ 0.2839  0.8628  0.8776 ],   K_e = −0.2993    (2.61)

The eigenvalues of the augmented closed-loop matrix A_f + B_f [ K_x  K_e ] are at:

−1.0523 ± 0.0653i,   −0.8697,   −0.3096    (2.62)

Fig. 2.6 shows the control results for two different values of the feedforward term K_p, i.e. K_p = 0 and K_p = −0.2993/(−0.3096) = 0.9668. The latter K_p places a zero which cancels the pole at −0.3096 of the closed-loop system matrix, resulting in a faster response, which can be seen from Fig. 2.6.
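The feedforward choice in Example 2.3.1 follows directly from the zero location K_e/K_p(1): to cancel a chosen real closed-loop pole p, set K_p(1) = K_e/p. A one-line check with the gains of the example (the helper name is illustrative):

```python
def feedforward_gain(Ke, pole):
    """Kp(1) that places the reference-to-output zero Ke/Kp on a real pole."""
    return Ke / pole

# Gains from Example 2.3.1: Ke = -0.2993, slowest closed-loop pole at -0.3096.
Kp = feedforward_gain(-0.2993, -0.3096)
```

This reproduces the value Kp = 0.9668 used in Fig. 2.6; by construction the zero Ke/Kp then sits exactly on the pole at −0.3096.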
Figure 2.6: Zero placement in SDGPC (setpoint, outputs and controls for Kp = 0.9668 and Kp = 0)

2.4 Simulations and Tuning Guidelines of SDGPC

Referring to Fig. 2.7, the design parameters of SDGPC are: prediction time Tp, design sampling interval Tm, execution sampling interval Texe, and control weighting λ. The control order Nu is related to the prediction time and the design sampling interval by Nu = Tp/Tm. If final-state weighting is used instead of the final-state constraint, as in performance index (2.18), there is an additional design parameter γ. This is the approach used in [17], where γ served as the tuning parameter to damp the control action. However, in SDGPC we would rather fix γ to a very large value, which corresponds to the state-constraint case, since this is crucial to guarantee stability. The task of reducing excessive control action can be accomplished by selecting Texe < Tm, which is equivalent to putting infinite weighting on controls with sampling interval Texe and only allowing the control to vary every Tm time units. This will be shown later by example.

Figure 2.7: The projected control derivative

Example 1: The aim of the first example is to show the effects of the SDGPC design parameters on the control performance, and to compare SDGPC with infinite horizon LQ control. The process being controlled is the third-order plant G(s) of equation (2.63); its state-space realization is given in (2.64) below. It is assumed that this process has to be controlled with a relatively fast sampling interval Texe = 0.2s in order to have a fast disturbance rejection property.
It is also assumed that the states of the process are available for measurement, and the derivatives of the states are computed by the state-space equivalent of system model (2.63):

ẋ(t) = [ −3 −3 −1 ; 1 0 0 ; 0 1 0 ] x(t) + [ 1 ; 0 ; 0 ] u(t)    (2.64)

Fig. 2.8 shows the step response of the plant.

Figure 2.8: Step response of example 1

Simulation 1: SDGPC of plant (2.63) with the following design parameters:

Nu = 6,   Tp = 1.2s,   Tm = Texe = 0.2s,   λ = 10^{-5}, 0.01, 0.1, 0.5, 10^{10}    (2.65)

According to Theorem 2.2 in section 2.2, Nu should not be smaller than 4 to ensure stability. Also, from Fig. 2.8, the final prediction horizon Tp = 1.2s is very short for this plant. Fig. 2.9 shows the control results.

Figure 2.9: Simulation 1 of example 1

It is obvious from Fig. 2.9 that this control law is unacceptable in practice because of the large magnitude of the control action. Also notice that increasing the control weighting is not effective in damping the control, since the prediction horizon Tp is too short. It can be seen from Fig. 2.9 that between 40 s and 50 s even a control weighting of 10^{10} cannot penalize the control action. This is because when Tp is small, the end-point state constraints dominate the control law calculation, whereas the performance index (2.4) has little effect on the controls. Since we cannot reduce the control order Nu because of the stability requirements, the only option is to increase the prediction horizon Tp. Simulation 2 shows the results.

Simulation 2: SDGPC of plant (2.63) with

Nu = 21,   Tp = 4.2s,   Tm = Texe = 0.2s,   λ = 10^{-5}, 0.01, 0.1, 0.5, 2    (2.66)

The prediction horizon is selected to cover the significant part of the step response; see Fig. 2.8.
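The horizon bookkeeping used in these simulations can be wrapped in two small helpers (the function names are mine; the rules are Nu = Tp/Tm from this section and the order condition of Theorem 2.2):

```python
def control_order(Tp, Tm, tol=1e-9):
    """Nu = Tp / Tm; the Nu moves must tile the prediction horizon exactly."""
    Nu = Tp / Tm
    if abs(Nu - round(Nu)) > tol:
        raise ValueError("Tp should be an integer multiple of Tm")
    return int(round(Nu))

def order_condition_ok(Nu, n):
    """Theorem 2.2: stability is guaranteed for Nu >= n + 1 (plant order n)."""
    return Nu >= n + 1
```

Simulation 1 (Tp = 1.2s, Tm = 0.2s) gives Nu = 6 and simulation 2 (Tp = 4.2s, Tm = 0.2s) gives Nu = 21; both satisfy Nu ≥ 4 for the third-order plant (2.63).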
The design sampling interval and the execution sampling interval are the same as in Simulation 1. Fig. 2.10 shows the results.

Figure 2.10: Simulation 2 of example 1 (λ = 0.00001, 0.01, 0.1, 0.5, 2).

The results shown in Fig. 2.10 are good, except that the control law involves the inversion of a 21 × 21 matrix, a significant increase in computational burden compared with Simulation 1.

Simulation 3: Simulation 3 shows the SDGPC of plant (2.63) with

Nu = 6, Tp = 4.2 s, Tm = 0.7 s, Texe = 0.2 s, λ = 10⁻⁵, 0.01, 0.1, 0.5, 2    (2.67)

The prediction horizon is the same as in Simulation 2, but the control order equals the one in Simulation 1. That means the design sampling interval is Tm = Tp/Nu = 0.7 s, while the execution sampling interval remains 0.2 s, as in Simulations 1 and 2.

Figure 2.11: Simulation 3 of example 1 (λ = 0.00001, 0.01, 0.1, 0.5, 2).

The good results in Fig. 2.11 suggest that selecting Tm > Texe is a useful strategy for reducing the computational burden while at the same time damping the control action. More simulations will be presented to support this claim in Example 2.

Simulation 4: It is interesting to compare SDGPC with an infinite-horizon LQR designed for a quadratic performance index penalizing the output deviation (y − w) and the weighted control, in which the only tuning parameter is the control weighting λ, since an infinite horizon is used. Plant (2.63) is discretized with sampling interval 0.2 s as in the previous simulations.
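The infinite-horizon LQR benchmark of Simulation 4 can be reproduced in outline. The sketch below is our own code (assuming NumPy and SciPy are available, and using the reconstructed model above): it discretizes the plant with a zero-order hold at h = 0.2 s and iterates the discrete-time Riccati recursion to the fixed point defining the stationary LQR gain.

```python
import numpy as np
from scipy.linalg import expm

# Reconstructed plant (2.64) and the sampling interval of Simulation 4.
A = np.array([[-3.0, -3.0, -1.0],
              [ 1.0,  0.0,  0.0],
              [ 0.0,  1.0,  0.0]])
B = np.array([[1.0], [0.0], [0.0]])
C = np.array([[0.0, 0.0, 1.0]])
h, lam = 0.2, 0.1          # sampling interval (s), control weighting

# Zero-order-hold discretization via the augmented matrix exponential:
# expm([[A, B], [0, 0]] * h) = [[Ad, Bd], [0, I]].
n = A.shape[0]
M = expm(np.block([[A, B], [np.zeros((1, n)), np.zeros((1, 1))]]) * h)
Ad, Bd = M[:n, :n], M[:n, n:]

# Infinite-horizon LQR: iterate the Riccati recursion to its fixed point.
Q = C.T @ C                # output weighting on the regulated output y
P = Q.copy()
for _ in range(2000):
    S = lam + (Bd.T @ P @ Bd).item()
    P = Q + Ad.T @ P @ Ad - (Ad.T @ P @ Bd) @ (Bd.T @ P @ Ad) / S

K = (Bd.T @ P @ Ad) / (lam + (Bd.T @ P @ Bd).item())   # feedback gain
```

For a stabilizable and detectable pair the recursion converges, and the resulting closed-loop matrix Ad − Bd K is Schur stable.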
The control weighting is varied as indicated in Fig. 2.12.

Figure 2.12: Simulation 4 of example 1: infinite-horizon LQR (λ = 0.00001, 0.01, 0.1, 0.5, 2).

Comparing Fig. 2.12 with Fig. 2.10 and Fig. 2.11, it can be seen that the infinite-horizon LQR has visible overshoot for small control weighting λ, whereas increasing λ slows down the response significantly. However, this does not suggest that SDGPC has an inherent advantage over infinite-horizon LQR; after all, they are the same, as analyzed in section 2.2.1. It might, however, be easier to tune SDGPC than LQR, since there are fewer design parameters in SDGPC (prediction horizon, control order, etc.) than in LQR (all entries of the weighting matrices).

Example 2: Two plants are simulated in this example. The first one, G₁(s) in (2.68), is a non-minimum-phase, well-damped, open-loop stable system. The second one, G₂(s) in (2.69), is an open-loop unstable system with lightly damped complex poles, with denominator (s − 1)(s² + 0.4s + 9).

Simulation 1: Plant (2.68) is controlled by SDGPC with the following design parameters:

Nu = 5, Tp = 5 s, Tm = 1 s, λ = 0.1    (2.70)

In the first 15 seconds the execution sampling interval Texe is equal to Tm; after that, Texe is reduced every 15 seconds as illustrated in Fig. 2.13 (Texe = 1 s, 0.8 s, 0.5 s, 0.2 s, 0.01 s).

Figure 2.13: Simulation of plant (2.68).

This simulation shows that for a well-damped, stable (low-pass) plant, when fast sampling is needed, SDGPC can offer both a low computational load and a high implementation sampling rate by selecting Texe < Tm.

Simulation 2: Plant (2.69) is studied. First the following group of design parameters is used.
Nu = 5, Tp = 5 s, Tm = 1 s, λ = 0.1    (2.71)

The execution time Texe varies as illustrated in Fig. 2.14 (Texe = 1 s, 0.5 s, 0.2 s).

Figure 2.14: Simulation of plant (2.69).

Fig. 2.14 shows that when the execution sampling interval Texe is equal to the design sampling interval Tm, the performance is good. As Texe decreases, the performance deteriorates, and the system becomes unstable when Texe = 0.2 s. Considering that plant (2.69) has an unstable pole with a time constant of 1 second and a lightly damped mode with resonance frequency 0.4759 Hz, the design sampling interval Tm = 1 s is relatively large. The results in Fig. 2.14 tell us two things about unstable and/or lightly damped systems. First, when the sampling interval Tm is relatively large, selecting Texe < Tm can cause performance deterioration or even instability. Second, even Texe = Tm is not a good choice for such a system. Since changing the sampling interval can be viewed as a perturbation to the sampled-data system, the performance deterioration in Fig. 2.14 means that the closed-loop system under SDGPC with sampling interval Tm = 1 s is sensitive to plant uncertainties.

The next simulation suggests that for systems with unstable and/or lightly damped poles, the design sampling interval Tm should be at most one third of the unstable pole time constant, or the sampling rate should be at least 6 times the resonant frequency.

Simulation 3: Plant (2.69) is controlled with the following design parameters:

Nu = 7, Tp = 2.1 s, Tm = 0.3 s, λ = 0.1    (2.72)

The sampling interval Tm is reduced to one third of the unstable pole time constant. Fig. 2.15 shows the results (Texe = 0.3 s, 0.1 s, 0.01 s).
Figure 2.15: Simulation of plant (2.69).

It can be seen from Fig. 2.15 that when Tm is reduced to 0.3 s for this plant, good results are obtained regardless of the changes in the execution time.

The conclusion drawn from these two examples is that for stable, well-damped systems, the design parameters can be selected primarily on the basis of performance and computational load, and the execution time can be selected flexibly. For unstable and/or lightly damped systems, in addition to performance and computational load considerations, there is an upper bound on the design sampling interval Tm set by the unstable pole time constant or the resonant frequency. There is no explicit formula available for this bound yet, but a rule of thumb is to select Tm less than one third of the unstable time constant, or to make 1/Tm larger than 6 times the highest resonance frequency.

Example 3: This example shows the ability of SDGPC to control systems with nearly cancelled unstable zeros and poles. GPC will encounter difficulty controlling this kind of system [4, pp. 102]. The plant being controlled is

G₂(s) = (s − 0.9999) / [(s − 1)(2s + 1)]    (2.73)

Design parameters:

Nu = 6, Tp = 2.4 s, Tm = Texe = 0.4 s, λ = 0.5    (2.74)

Figure 2.16: Simulation of plant (2.73).

SDGPC can control this plant without any difficulty.

2.5 Conclusion

At the beginning of this chapter, we mentioned that there are some problems with the pure discrete-time approach when fast sampling is needed. A new predictive control algorithm, SDGPC, was thus formulated based on continuous-time modeling while assuming a piecewise constant control scenario. Better numerical properties can be expected since, when fast sampling is needed, the control law can actually be designed based on a larger sampling interval.
SDGPC relates continuous-time control and discrete-time control in a natural way, and thus enjoys the advantage of continuous-time modeling and the flexibility of digital implementation at the same time. Under mild conditions, SDGPC is shown to be equivalent to an infinite-horizon LQ control law, so it has the inherent robustness properties of an infinite-horizon LQ regulator. However, the finite-horizon formulation of SDGPC makes it convenient to handle various input and state constraints. Moreover, like other predictive control methods, the tuning of SDGPC is easy and intuition-based. The design of SDGPC in this chapter is a one-degree-of-freedom design. Various extensions will be made in the following chapters.

Chapter 3
SDGPC Design for Tracking Systems

In process control, the main objective is to regulate the process output at a constant level subject to various types of disturbances; this is known as the regulator problem. There is another type of control problem, known as the tracking or servo problem, where the output of a system is required to follow a desired trajectory in some optimal sense. This servo problem does occur in the process industry, albeit not as often, for example in the change of paper grade from a basis weight of 80 g/m² to 100 g/m² in paper production. Another important class of problem in process control which also fits into the framework of tracking control is the feedforward design problem, applicable when disturbance information is available. The optimal tracking problem in the linear quadratic optimal control context was well formulated in [41] [3], but in either a purely continuous-time or a purely discrete-time framework. It is thus worthwhile to formulate the tracking problem in the context of sampled-data generalized predictive control.
The SDGPC algorithm we developed in chapter 2 is a special case of tracking system design in which the desired trajectory is a constant setpoint. A wider class of trajectories will be considered here. Trajectory-following problems were classified into three categories by Anderson and Moore [3], and we follow the same treatment in this chapter. If the plant outputs are to follow a class of desired trajectories, for example all polynomials up to a certain order, the problem is referred to as a servo problem; if the desired trajectory is a particular prescribed function of time, the problem is called a tracking problem. When the outputs of the plant are to follow the response of another plant (or model), it is referred to as the model-following problem. The differences between them are, however, rather subtle in principle.

3.1 The Servo SDGPC Problem

Given the n-dimensional SISO linear system having state equations

ẋ(t) = A x(t) + B u(t)
y(t) = cᵀ x(t)    (3.75)

the augmented system is described by

ẋf = Af xf + Bf ud
yf = cfᵀ xf    (3.76)
dim(xf) = nf = n + 1,  xd(t) = ẋ(t),  ud(t) = u̇(t)

where

xf = [ xd ; y ],  Af = [ A 0 ; cᵀ 0 ],  Bf = [ B ; 0 ],  cfᵀ = [0 0 ... 1]    (3.77)

Note that the system output y(t), rather than the tracking error e(t) = y(t) − r(t), is augmented as a system state, since the setpoint r(t) is no longer constrained to be constant. Suppose the reference signal is the output of a p-dimensional linear reference model

ẇ = F w,  r(t) = Rᵀ w    (3.78)

with the pair [F, Rᵀ] completely observable. Assume that the future projected control derivative is piecewise constant on the time interval [t, t + Tp], as illustrated in Fig. 3.17. The SDGPC servo problem is to find the optimal control vector ud = [ud(1) ud(2) ... ud(Nu)]ᵀ such that the following performance index is minimized:
J = γ[y(t + Tp) − r(t + Tp)]² + γ xdᵀ(t + Tp) xd(t + Tp) + ∫₀^Tp { [y(t + T) − r(t + T)]² + λ ud²(t + T) } dT    (3.79)

To solve this optimization problem, we need the T-ahead state predictions for both the plant (3.76) and the reference (3.78). Recall that the projected control scenario in Fig. 3.17 can be written as

ud(t) = H(t) ud    (3.80)

where

H(t) = [H₁(t) H₂(t) ... Hi(t) ... H_Nu(t)],  ud = [ud(1) ud(2) ... ud(i) ... ud(Nu)]ᵀ    (3.81)

Figure 3.17: The projected control derivative (setpoint and piecewise constant projected controls ud(t) over [t, t + Tp]).

We have

Hi(t) = 1 for (i − 1)Tm < t ≤ i Tm, and Hi(t) = 0 otherwise,  i = 1, 2, ..., Nu,  Tm = Tp/Nu    (3.82)

so that

xf(t + T) = e^{Af T} xf(t) + ∫₀^T e^{Af (T − τ)} Bf ud(τ) dτ = e^{Af T} xf(t) + Γ(Af, Bf, T) ud    (3.83)

where Γ(Af, Bf, T) is the 1 × Nu block row whose i-th block is

Γi(Af, Bf, T) = ∫_{(i−1)Tm}^{min(i Tm, T)} e^{Af (T − τ)} Bf dτ if (i − 1)Tm < T, and Γi(Af, Bf, T) = 0 otherwise    (3.84)

And the reference state prediction is simply w(t + T) = e^{F T} w(t). Substituting these predictions into (3.79) and taking the derivative of J with respect to ud gives

(1/2) ∂J/∂ud = ∫₀^Tp [ Γᵀ(T) cf cfᵀ Γ(T) ud + Γᵀ(T) cf (cfᵀ e^{Af T} xf(t) − Rᵀ e^{F T} w(t)) ] dT + λ ∫₀^Tp Hᵀ(T) H(T) dT ud + γ [ Γᵀ(Tp) cf cfᵀ Γ(Tp) ud + Γᵀ(Tp) cf (cfᵀ e^{Af Tp} xf(t) − Rᵀ e^{F Tp} w(t)) ] + γ [ Γᵀ(Tp) Cd Cdᵀ Γ(Tp) ud + Γᵀ(Tp) Cd Cdᵀ e^{Af Tp} xf(t) ]    (3.89)

where Cd Cdᵀ selects the xd part of xf. Setting this derivative to zero, the optimal SDGPC servo solution ud* is given by

ud* = Kw w(t) + K_{xf} [ xd(t) ; y(t) ]    (3.90)

(Af and Bf are dropped from Γ(Af, Bf, T) for clarity.) Here

Kw = K Hw,  K_{xf} = K H_{xf}

K = [ ∫₀^Tp Γᵀ(T) cf cfᵀ Γ(T) dT + λ ∫₀^Tp Hᵀ(T) H(T) dT + γ Γᵀ(Tp) Γ(Tp) ]⁻¹  (Nu × Nu)

Hw = ∫₀^Tp Γᵀ(T) cf Rᵀ e^{F T} dT + γ Γᵀ(Tp) cf Rᵀ e^{F Tp}  (Nu × p)

H_{xf} = −[ ∫₀^Tp Γᵀ(T) cf cfᵀ e^{Af T} dT + γ Γᵀ(Tp) e^{Af Tp} ]  (Nu × nf)    (3.91)

As shown in Fig.
3.18, the servo SDGPC law (3.90) has a feedforward term in addition to the usual feedback term of the regulator case. This is what is known as a two-degree-of-freedom design. Also note that equation (3.91) clearly shows that the feedback gain K_{xf} is independent of the trajectory reference model (3.78).

Figure 3.18: The servo SDGPC controller (reference model ẇ = Fw, r = Rᵀw; feedforward gain Kw; feedback gain K_{xf}; observer; zero-order hold; plant ẋ = Ax + Bu).

So far we have assumed that the state w of the reference signal is available for measurement. In practice, however, often only the incoming signal itself is at hand. In this case a state estimator may be constructed, since the pair [F, Rᵀ] is completely observable. The state estimator and the static feedforward gain Kw can then be combined to give a dynamic feedforward controller, as illustrated in Fig. 3.19.

Figure 3.19: The dynamic feedforward controller implementation.

When the reference model (3.78) is given by

F = [ 0 0 ... 0 0 ; 1 0 ... 0 0 ; ... ; 0 0 ... 1 0 ]  (p × p),  Rᵀ = [0 0 ... 1]    (3.92)

the reference trajectory will be the class of signals consisting of all polynomials of degree (p − 1), and the state w(t) will consist of the incoming signal r(t) and its derivatives up to order (p − 1). Because the last columns of F and Af are all zeros, the last column of cf Rᵀ e^{F T} in Hw equals the last column of the matrix cf cfᵀ e^{Af T} in H_{xf}. As an immediate consequence, Kw(1,2) and K_{xf}(1, nf) in equation (3.95) have the same amplitude but opposite signs.
Let K_{rd} = Kw(1,1), K_{r,y} = Kw(1,2), and K_{xd} = K_{xf}(1, 1:n); then we have

ud*(1) = K_{rd} ṙ(t) + K_{r,y} [r(t) − y(t)] + K_{xd} xd(t)    (3.96)

Here K_{xf}(1, 1:n) denotes the first n elements of the first row of the matrix K_{xf}. For a small execution sampling time, the optimal servo SDGPC law (3.96) can be integrated to give the equivalent form

u*(1) = K_{rd} r(t) + K_{r,y} ∫₀ᵗ [r(τ) − y(τ)] dτ + K_{xd} x(t)    (3.97)

Compared with the optimal SDGPC solution (2.17) of chapter 2, control law (3.97) has an additional feedforward term K_{rd} r(t). It should be noted that although the servo SDGPC solution (3.97) is derived for a ramp signal (r̈(t) = 0), it does not necessarily yield zero tracking error for a ramp, even asymptotically. The reason is that there is only one integrator in the controller (3.97), which can only track a constant reference with zero steady-state error [73]. According to the internal model principle, a model of the exogenous signal must be included in the control law for robust zero-error tracking and disturbance rejection. Like most discrete-time predictive control algorithms, SDGPC has the ability to track a general class of reference trajectories with zero steady-state error. The spirit of this approach is state augmentation: by including the equations satisfied by the external signal in the system model, a new system model in the error space [28], with new coordinates, is obtained, and the SDGPC design procedure can then be applied. In the following, the servo SDGPC problem which incorporates double integrators in the control law is presented to show the procedure.

Consider the plant described by

ẋ(t) = A x(t) + B u(t),  y(t) = cᵀ x(t),  dim(x) = n    (3.98)

The augmented system is described by

ẋz = Az xz + Bz uz,  dim(xz) = nz = n + 2    (3.99)

where uz(t) = ü(t), xz = [ ẍ ; ẏ ; y ], and

Az = [ A 0_{n×1} 0_{n×1} ; cᵀ 0 0 ; 0_{1×n} 1 0 ],  Bz = [ B ; 0 ; 0 ]
, czᵀ = [0 0 ... 0 1]    (3.100)

Assume that the reference trajectory is given by

ẇ = F w,  r(t) = Rᵀ w    (3.101)

where

F = [ 0 0 0 ; 1 0 0 ; 0 1 0 ],  Rᵀ = [0 0 1],  w(t) = [ r̈(t) ; ṙ(t) ; r(t) ]    (3.102)

Similarly, assume that the future projected uz(t) over [t, t + Tp] is piecewise constant, as illustrated in Fig. 3.17. The objective is to find the optimal control vector uz = [uz(1) uz(2) ... uz(Nu)]ᵀ such that the following performance index is minimized:

J = γ[y(t + Tp) − r(t + Tp)]² + γ[ẏ(t + Tp) − ṙ(t + Tp)]² + γ ẍᵀ(t + Tp) ẍ(t + Tp) + ∫₀^Tp { [y(t + T) − r(t + T)]² + λ uz²(t + T) } dT    (3.103)

Again, we need the T-ahead state predictions for both the plant (3.99) and the reference (3.101), which are given by

xz(t + T) = e^{Az T} xz(t) + ∫₀^T e^{Az (T − τ)} Bz uz(τ) dτ = e^{Az T} xz(t) + Γ(Az, Bz, T) uz    (3.104)

and

w(t + T) = e^{F T} w(t)    (3.105)

Γ(Az, Bz, T) is given by equation (3.84) with Af, Bf replaced by Az, Bz. The optimal solution to the performance index (3.103) is given below without detailed derivation:

uz* = Kw w(t) + K_{xz} [ ẍ(t) ; ẏ(t) ; y(t) ]    (3.106)

where

Kw = K Hw,  K_{xz} = K H_{xz}

K = [ ∫₀^Tp Γᵀ(T) cz czᵀ Γ(T) dT + λ ∫₀^Tp Hᵀ(T) H(T) dT + γ Γᵀ(Tp) Γ(Tp) ]⁻¹  (Nu × Nu)

Hw = ∫₀^Tp Γᵀ(T) cz Rᵀ e^{F T} dT + γ Γᵀ(Tp) [ 0_{nz×1}  cẏ  cz ] e^{F Tp}  (Nu × 3)

H_{xz} = −[ ∫₀^Tp Γᵀ(T) cz czᵀ e^{Az T} dT + γ Γᵀ(Tp) e^{Az Tp} ]  (Nu × nz)    (3.107)

with cẏᵀ = [0_{1×n} 1 0] selecting ẏ. We are concerned with the first rows of Kw and K_{xz}, since only the first element of the optimal control vector uz* is applied to the plant.
Consider the equalities Kw(1,2) = −K_{xz}(1, nz − 1) and Kw(1,3) = −K_{xz}(1, nz), and let K_r = Kw(1,1), K_{ṙ,y} = Kw(1,2), K_{r,y} = Kw(1,3), and K_x = K_{xz}(1, 1:n). The first control in (3.106) then has the simplified form

uz*(1) = K_r r̈(t) + K_{ṙ,y} [ṙ(t) − ẏ(t)] + K_{r,y} [r(t) − y(t)] + K_x ẍ(t)    (3.108)

Or, in terms of the control input to the original plant (3.98), as the execution time goes to zero we have, by integrating both sides of equation (3.108) twice,

u*(1) = K_r r(t) + K_{ṙ,y} ∫₀ᵗ [r(τ) − y(τ)] dτ + K_{r,y} ∫₀ᵗ ∫₀^v [r(τ) − y(τ)] dτ dv + K_x x(t)    (3.109)

The following is an example of servo SDGPC design to track a ramp reference signal.

Example 3.1.1 The plant being controlled has the transfer function

G(s) = 1/(s + 1)³    (3.110)

Control law (3.109) is used with the design parameters

Texe = 0.1 s, Tp = 3 s, Nu = 6, λ = 0.001, γ = 1000    (3.111)

Fig. 3.20 shows the reference and the output, the tracking error and the control input. Clearly, zero steady-state error is obtained.

Figure 3.20: Servo SDGPC of plant (3.110) with double integrator.

For comparison, the servo SDGPC law with a single integrator (3.97) is designed with the same group of design parameters given in (3.111). The control results are illustrated in Fig. 3.21, in which a steady-state error can be clearly observed.

Figure 3.21: Servo SDGPC of plant (3.110) with single integrator.

3.2 The Model Following SDGPC Problem

There is another kind of tracking system design called the model-following problem. It is a mild generalization of the servo problem of section 3.1.
In the framework of SDGPC, the problem is to find the control vector for the system (3.76) which minimizes the performance index (3.79), where r(t) is the response of a linear system model

ż₁(t) = A₁ z₁(t) + B₁ i(t),  r(t) = C₁ᵀ z₁(t)    (3.112)

to a command input i(t), which, in turn, is the zero-input response of the system

ż₂(t) = A₂ z₂(t),  i(t) = C₂ᵀ z₂(t)    (3.113)

as indicated in Fig. 3.22.

Figure 3.22: Desired trajectory for the model-following problem (the command signal generator ż₂ = A₂z₂, i = C₂ᵀz₂ drives the reference model ż₁ = A₁z₁ + B₁i, r = C₁ᵀz₁).

The two systems described by (3.112) and (3.113) can be combined into a single linear system with state-space equation

ż(t) = A z(t),  r(t) = Cᵀ z(t)    (3.114)

where

z = [ z₁ ; z₂ ],  A = [ A₁ B₁C₂ᵀ ; 0 A₂ ],  Cᵀ = [ C₁ᵀ 0 ]    (3.115)

With equation (3.114), the model-following problem is identical to the servo problem of section 3.1. The following example shows the design procedure for the model-following problem and the control results.

Example 3.2.1 The plant being controlled is an unstable third-order process with transfer function

G(s) = 1/(s³ − 1)    (3.116)

The reference model has the transfer function

G(s) = 2/(s² + 4.5s + 2)    (3.117)

The step response r(t) of the reference model to input ω is given by

ẇ(t) = [ −4.5 −2 ; 1 0 ] w(t) + [ 2 ; 0 ] ω,  r(t) = [0 1] w(t)    (3.118)

Or, in the form of equation (3.114), the above state-space equation can be rewritten as

ẋ(t) = [ 0 0 0 ; 2 −4.5 −2 ; 0 1 0 ] x(t),  r(t) = [0 0 1] x(t)    (3.119)
The execution sampling interval Texe is set to equal to the design sampling interval Tm since the plant being controlled is unstable. F ig . 3.23 shows the control results. Setpoint and outputs setpoint model response \u00E2\u0080\u0094 plant output 80 (S) Figure 3.23: Model following control of unstable third-order system 53 Chapter 3: SDGPC Design for Tracking Systems Example 3.2.1 The second example is a third order stable plant with transfer function 1 ( * + l ) a The reference model is a second order under damped plant G(s) = (3.122) K J 4s 2 + 2.4s + 1 The design parameters are again Tp = 4s Nu = 20 A = 0.001 ( 3 1 2 3 ) 7 = 1000 Tm = Texe = 0.2s Fig. 3.24 shows the setpoint, reference model response, plant output and the control signal. 54 Chapter 3: SDGPC Design for Tracking Systems Setpoint, reference model and plant outputs 0 10 20 30 40 50 60 70 80(s) Control signal 51 1 1 1 1 1 1 1 _51 1 1 1 1 1 1 1 1 0 10 20 30 40 50 60 70 80 (s) Figure 3.24: Model following control of stable third-order system 3.3 The T r a c k i n g S D G P C Prob lem It is we l l known that when the future setpoint is available the tracking performance can be improved radically. Similarly, future values of disturbance can be utilized for better disturbance rejection. Practical examples for which future setpoints are available can be found in areas such as robot manipulator applications, high speed machining of complex shaped work pieces and vehicle lateral guidance control problems [77, 76, 57]. Predictive control is a natural candidate in these applications since it explicitly accommodates the future values of setpoint in its formulation. However, the setpoint preview capacity of predictive control has not been fully exploited before since i n process control applications, where predictive control has blossomed, disturbance rejection is the major concern and the future disturbances are often unknown. The S D G P C tracking problem is formulated as follows. 
For the system (3.75) and its augmented plant (3.76), with the desired trajectory r(t) available over the range [t, t + Tp], the SDGPC tracking problem is to find the optimal control by minimizing the performance index (3.79). Assuming that the projected ud(t) in [t, t + Tp] is given by (3.80), as illustrated in Fig. 3.17, the performance index (3.79) can be written as

J = ∫₀^Tp [ cfᵀ e^{Af T} xf(t) + cfᵀ Γ(Af, Bf, T) ud − r(t + T) ]² dT + λ ∫₀^Tp udᵀ Hᵀ(T) H(T) ud dT + γ [ cfᵀ e^{Af Tp} xf(t) + cfᵀ Γ(Af, Bf, Tp) ud − r(t + Tp) ]² + γ [ Cdᵀ e^{Af Tp} xf(t) + Cdᵀ Γ(Af, Bf, Tp) ud ]ᵀ [ Cdᵀ e^{Af Tp} xf(t) + Cdᵀ Γ(Af, Bf, Tp) ud ]    (3.124)

Taking the derivative of J with respect to ud, we have

(1/2) ∂J/∂ud = ∫₀^Tp [ Γᵀ(T) cf cfᵀ Γ(T) ud + Γᵀ(T) cf (cfᵀ e^{Af T} xf(t) − r(t + T)) ] dT + λ ∫₀^Tp Hᵀ(T) H(T) dT ud + γ [ Γᵀ(Tp) cf cfᵀ Γ(Tp) ud + Γᵀ(Tp) cf (cfᵀ e^{Af Tp} xf(t) − r(t + Tp)) ] + γ [ Γᵀ(Tp) Cd Cdᵀ Γ(Tp) ud + Γᵀ(Tp) Cd Cdᵀ e^{Af Tp} xf(t) ]    (3.125)

The optimal SDGPC tracking control solution ud* is given by

ud* = f_r(t) + K_{xf} [ xd(t) ; y(t) ]    (3.126)

where

f_r(t) = K H_t,  K_{xf} = K H_{xf}

K = [ ∫₀^Tp Γᵀ(T) cf cfᵀ Γ(T) dT + λ ∫₀^Tp Hᵀ(T) H(T) dT + γ Γᵀ(Tp) Γ(Tp) ]⁻¹

H_t = ∫₀^Tp Γᵀ(T) cf r(t + T) dT + γ Γᵀ(Tp) cf r(t + Tp)

H_{xf} = −[ ∫₀^Tp Γᵀ(T) cf cfᵀ e^{Af T} dT + γ Γᵀ(Tp) e^{Af Tp} ]    (3.127)

With the receding horizon strategy, the feedforward term f_r(t) needs to be computed at every time instant. A simple numerical integration algorithm, such as the Euler approximation, can be used without compromising the performance of the controller. As we mentioned at the beginning of this section, use of the future setpoint information can improve the tracking performance, sometimes significantly. The following example compares the tracking performance of two controllers, one of which utilizes the future setpoint information while the other does not.

Example 3.3.1 The plant of Example 3.1.1 is used again, with transfer function

G(s) = 1/(s + 1)³    (3.128)

First, control law (3.97) is used with the design parameters Texe
= 0.2 s, Tp = 6 s, Nu = 10, λ = 0.001, γ = 10000    (3.129)

Fig. 3.25 shows the setpoint and the output, the tracking error and the control input under control law (3.97).

Figure 3.25: Servo SDGPC of plant (3.128).

Now the tracking control (3.126), which utilizes the future setpoint information, is designed with the same design parameters given in (3.129). Fig. 3.26 shows the results.

Figure 3.26: Tracking SDGPC of plant (3.128).

Comparing Fig. 3.25 and Fig. 3.26, the improvements in tracking error are obvious. Also notice that the control effort in Fig. 3.26 is smoother due to the preview ability. The improvements can be explained as follows. At the current time t, knowing the future setpoint information r(t + T) is equivalent to knowing the current setpoint and all its derivatives up to an arbitrarily large order. Indeed, any future setpoint value r(t + T) can be calculated using the Maclaurin series expansion r(t + T) = r(t) + Σ_{k=1}^∞ r^{(k)}(t) Tᵏ / k!. In control law (3.97), it was assumed that the future setpoint is a ramp; in other words, only the first derivative of the setpoint is assumed to be available. It is thus natural to expect performance improvements for complex setpoints when tracking control law (3.126) is used. However, these two control laws will not differ from each other for a ramp signal. This can be confirmed by comparing Fig. 3.27 and Fig. 3.28, which show the results of plant (3.128) being controlled by (3.97) and (3.126), respectively. It can be seen that the tracking errors of the two control laws are the same at steady state, while the tracking errors under control (3.126) in the transition region around time 10 s are smaller, since the setpoint there is no longer a pure ramp signal.
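The Maclaurin-expansion argument above can be illustrated directly: given the derivatives of r at the current time t, the future value r(t + T) is recovered by the truncated series. A small sketch (illustrative only; the function name is ours):

```python
import math

def preview(r_derivs, T):
    """Approximate r(t+T) = sum_k r^{(k)}(t) T^k / k!  from the derivatives
    r_derivs = [r(t), r'(t), r''(t), ...] known at the current time t."""
    return sum(d * T**k / math.factorial(k) for k, d in enumerate(r_derivs))
```

With r(t) = sin t at t = 0, the derivative list [0, 1, 0, −1, ...] reproduces sin T; for a ramp, only [r(t), ṙ(t)] are nonzero, which is exactly the information assumed by the servo law (3.97), so the two laws coincide there.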
Figure 3.28: Tracking SDGPC of plant (3.128) (reference and output, tracking error, control input).

It should be pointed out that the tracking performance of the servo control law (3.97) cannot be improved significantly by simply increasing the order of the reference model (3.94) without knowing the future setpoint information. For model order p > 2, the setpoint derivatives will be needed in the computation of control law (3.97). In that case a state observer of the reference model (3.94) can be constructed with the desired dynamics. However, no matter how fast the dynamics of the state observer are, there is still no anticipation ability in this approach, and thus the transient tracking error cannot be reduced efficiently. On the other hand, the knowledge of the future setpoint can also be used in the design of control law (3.97), in which case the setpoint derivatives can be estimated in a least squares sense using the Maclaurin series expansion r(t + T) = r(t) + Σ_{k=1}^∞ r^{(k)}(t) Tᵏ / k!.

3.4 The Feedforward Design of SDGPC

When disturbances can be measured, the control performance can be improved radically by utilizing this information, compared with the use of feedback only. The reason is that there are inherent delays in all dynamic systems; it is always better to cancel a disturbance before it is observed at the output. Feedforward disturbance rejection also alleviates the burden of feedback disturbance rejection, so that the design of the feedback loop can concentrate on robustness issues. Here is the formulation of the feedforward design of SDGPC.
Given the n-dimensional SISO linear system having state equations

ẋ(t) = A x(t) + B u(t) + Bv v(t),  y(t) = cᵀ x(t)    (3.130)

where v(t) is a measurable disturbance satisfying the state equation

β̇(t) = W β(t),  v(t) = Dᵀ β(t)    (3.131)

with dimension nβ, the integrator-augmented system is described by

ẋf = Af xf + Bf ud + Bfv vd,  yf = cfᵀ xf    (3.132)
dim(xf) = nf = n + 1,  xd(t) = ẋ(t),  ud(t) = u̇(t),  vd(t) = v̇(t)

where

xf = [ xd ; y ],  Af = [ A 0 ; cᵀ 0 ],  Bf = [ B ; 0 ],  Bfv = [ Bv ; 0 ],  cfᵀ = [0 ... 0 1]    (3.133)

Following the arguments of section 3.1, the T-ahead state predictor based on equation (3.132) can be written as

xf(t + T) = e^{Af T} xf(t) + ∫₀^T e^{Af (T − τ)} Bf ud(τ) dτ + ∫₀^T e^{Af (T − τ)} Bfv vd(τ) dτ = e^{Af T} xf(t) + ∫₀^T e^{Af (T − τ)} Bfv vd(t + τ) dτ + Γ(Af, Bf, T) ud    (3.134)

where Γ(Af, Bf, T) is given by equation (3.84), and ud is piecewise constant as illustrated in Fig. 3.17. vd(t + τ) can be obtained from the state equation (3.131) as

β(t + τ) = e^{W τ} β(t),  vd(t + τ) = Dᵀ e^{W τ} β(t)    (3.135)

The T-ahead state predictor is obtained by substituting equation (3.135) into (3.134):

xf(t + T) = e^{Af T} xf(t) + Dv(T) β(t) + Γ(Af, Bf, T) ud

Dv(T) = ∫₀^T e^{Af (T − τ)} Bfv Dᵀ e^{W τ} dτ    (3.136)

With the above state predictor, the feedforward SDGPC problem is to minimize performance index (3.79) subject to the measurable disturbance v(t).
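The matrix-exponential integrals appearing in Γ(Af, Bf, T) of (3.84) and in Dv(T) of (3.136) can both be evaluated without explicit quadrature, using the standard augmented-matrix-exponential identity. A sketch of this computation (our own function names, assuming SciPy's expm is available):

```python
import numpy as np
from scipy.linalg import expm

def phi_and_gamma_block(Af, Bf, T):
    """e^{Af T} and  int_0^T e^{Af (T - tau)} Bf dtau,  i.e. one block of
    Gamma(Af, Bf, T) in (3.84), from a single augmented matrix exponential:
    expm([[Af, Bf], [0, 0]] T) = [[e^{Af T}, integral], [0, I]]."""
    n, m = Bf.shape
    M = expm(np.block([[Af, Bf],
                       [np.zeros((m, n)), np.zeros((m, m))]]) * T)
    return M[:n, :n], M[:n, n:]

def disturbance_integral(Af, Bfv, D, W, T):
    """D_v(T) = int_0^T e^{Af (T - tau)} Bfv D^T e^{W tau} dtau of (3.136),
    read off the top-right block of expm([[Af, Bfv D^T], [0, W]] T)."""
    n, p = Af.shape[0], W.shape[0]
    M = expm(np.block([[Af, Bfv @ D.T],
                       [np.zeros((p, n)), W]]) * T)
    return M[:n, n:]
```

For scalar checks, Af = −1, Bf = 1 gives the integral 1 − e^{−T}, and Af = W = 0 with Bfv Dᵀ = 1 gives Dv(T) = T.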
Based on (3.136) and (3.87), the performance index (3.79) can be written as

J = ∫₀^Tp [ cfᵀ e^{Af T} xf(t) + cfᵀ Dv(T) β(t) + cfᵀ Γ(T) ud − Rᵀ e^{F T} w(t) ]² dT + λ ∫₀^Tp udᵀ Hᵀ(T) H(T) ud dT + γ [ cfᵀ e^{Af Tp} xf(t) + cfᵀ Dv(Tp) β(t) + cfᵀ Γ(Tp) ud − Rᵀ e^{F Tp} w(t) ]² + γ [ Cdᵀ e^{Af Tp} xf(t) + Cdᵀ Dv(Tp) β(t) + Cdᵀ Γ(Tp) ud ]ᵀ [ Cdᵀ e^{Af Tp} xf(t) + Cdᵀ Dv(Tp) β(t) + Cdᵀ Γ(Tp) ud ]    (3.137)

Taking the derivative of J with respect to ud and setting it to zero,

(1/2) ∂J/∂ud = ∫₀^Tp [ Γᵀ(T) cf cfᵀ Γ(T) ud + Γᵀ(T) cf (cfᵀ e^{Af T} xf(t) + cfᵀ Dv(T) β(t) − Rᵀ e^{F T} w(t)) ] dT + λ ∫₀^Tp Hᵀ(T) H(T) dT ud + γ [ Γᵀ(Tp) cf cfᵀ Γ(Tp) ud + Γᵀ(Tp) cf (cfᵀ e^{Af Tp} xf(t) + cfᵀ Dv(Tp) β(t) − Rᵀ e^{F Tp} w(t)) ] + γ [ Γᵀ(Tp) Cd Cdᵀ Γ(Tp) ud + Γᵀ(Tp) Cd Cdᵀ e^{Af Tp} xf(t) + Γᵀ(Tp) Cd Cdᵀ Dv(Tp) β(t) ] = 0    (3.138)

The optimal feedforward SDGPC control solution is given by

ud* = Kw w(t) + K_{xf} [ xd(t) ; y(t) ] + Kβ β(t)    (3.139)

where

Kw = K Hw,  K_{xf} = K H_{xf},  Kβ = K Hβ

K = [ ∫₀^Tp Γᵀ(T) cf cfᵀ Γ(T) dT + λ ∫₀^Tp Hᵀ(T) H(T) dT + γ Γᵀ(Tp) Γ(Tp) ]⁻¹  (Nu × Nu)

Hw = ∫₀^Tp Γᵀ(T) cf Rᵀ e^{F T} dT + γ Γᵀ(Tp) cf Rᵀ e^{F Tp}  (Nu × p)

H_{xf} = −[ ∫₀^Tp Γᵀ(T) cf cfᵀ e^{Af T} dT + γ Γᵀ(Tp) e^{Af Tp} ]  (Nu × nf)

Hβ = −[ ∫₀^Tp Γᵀ(T) cf cfᵀ Dv(T) dT + γ Γᵀ(Tp) Dv(Tp) ]  (Nu × nβ)    (3.140)

Notice again that, like the setpoint feedforward gain Kw, the inclusion of the disturbance feedforward gain Kβ does not affect the state feedback gain K_{xf}. The control action given by (3.139) is the derivative of the control to the original plant. For a small sampling interval, the direct control action can be obtained by integrating both sides of (3.139), resulting in

u*(1) = K_{xf}(1, 1:n) x(t) + ∫₀ᵗ [ Kw w(τ) + K_{xf}(1, n + 1) y(τ) ] dτ + Kβ β(t)    (3.141)

Here K_{xf}(1, 1:n) denotes the first n elements of the first row of the matrix K_{xf}. Normally, the states β(t) of the disturbance model (3.131) are not available. A state observer with gain L can be constructed to give the state estimates β̂(t) as follows:

β̂̇(t) = W β̂(t) + L ( v(t) − Dᵀ β̂(t) )    (3.142)

The observer gain L is selected such that the matrix W − L Dᵀ is stable and has the desired dynamics.
The following examples show the effects of feedforward disturbance rejection.

Example 3.4.1 The plant being controlled is

G(s) = \quad (3.143)

The disturbance is generated by white noise passed through an integrator. However, in the design of the control law, the disturbance is assumed to be constant, that is, \dot\nu(t) = 0. The design parameters are

T_{exe} = 0.2s, \quad T_p = 6s, \quad \lambda = 0.001, \quad \gamma = 1000 \quad (3.144)

The control law takes the form of (3.141). Fig. 3.29 shows the control results. The effect of the disturbance feedforward can be seen clearly by comparing the first 50 seconds of the figure, where the feedforward gain K_\beta was set to zero, with the rest of the figure, where K_\beta is set to the value as computed.

Figure 3.29: Disturbance feedforward design (output and setpoint; control signal: without feedforward for the first 50 s, with feedforward thereafter)

This example shows how disturbance feedforward can improve the control performance dramatically even with a simple and inaccurate disturbance model. The next example shows that further performance improvement can be made with a more accurate disturbance model.

Example 3.4.2 The plant being controlled is

G(s) = \quad (3.145)

The disturbance is sinusoidal with known frequency. The state space model of the disturbance is:

\dot\beta(t) = \begin{bmatrix} 0 & -4 \\ 1 & 0 \end{bmatrix}\beta(t), \quad \nu(t) = [0 \ 1]\beta(t) \quad (3.146)

The observer gain L^T = [1 \ 4.5] is selected such that the eigenvalues of the observer closed-loop matrix W - LD^T are placed at -2 and -2.5 respectively. The control law design parameters are

T_{exe} = 0.2s, \quad T_p = 3.5s, \quad N_u = 10, \quad \lambda = 0.001, \quad \gamma = 1000 \quad (3.147)

First, the correct disturbance model (3.146) was used to design the control law (3.141), and then a constant disturbance model \dot\nu(t) = 0 was used.
The corresponding control results can be seen in Fig. 3.30. As expected, the performance deteriorates in the time span between 50 seconds and 100 seconds, when the wrong disturbance model is used.

Figure 3.30: The effect of disturbance model (sinusoidal model for the first 50 s, constant model thereafter)

3.5 Conclusion

In this chapter, various tracking problems are formulated in the framework of SDGPC. Tracking control problems generally have a two-degree-of-freedom design structure. However, the feedback part of the tracking problem is equivalent to the regulator problem, which has a one-degree-of-freedom design structure, provided that the design parameters are the same. When information about the future setpoint is available, tracking performance can be radically improved. This is because knowing the future setpoint is equivalent to knowing the exact state information of the state equation describing the setpoint. When the disturbance is available for measurement, the disturbance rejection performance can be improved dramatically by using feedforward design.
Chapter 4 Control of Time-delay Systems and Laguerre Filter Based Adaptive SDGPC

Time-delay, or dead time, occurs frequently in industrial processes and, in some cases, is time-varying. Time-delay poses one of the major challenges for the design of robust process control systems. In discrete time, time-delay systems have the form:

G(q) = \frac{q^{-k} B(q)}{A(q)} \quad (4.148)

where k = \mathrm{integer}(T_d/\Delta) is the delay in samples, and T_d, \Delta are the delay time and sampling time respectively. For unknown time-delay, k can be either estimated directly [24] or handled via the extended B polynomial approach, in which the leading coefficients of the B polynomial in (4.148) up to order k would all be zero. In continuous time, a time-delay can be approximated by a low-order rational approximation such as the Padé approximation [66]. The Laguerre filter was first introduced into systems theory by Wiener in the fifties [82] and has been popular recently [87, 80, 48, 49]. In particular, it can approximate time-delay systems efficiently. With the time-delay known or modeled properly, model based predictive control strategies provide an effective way of controlling such systems. In this chapter, we give two approaches to the control of time-delay systems. The direct approach in section 4.1 is based on a general state space model and assumes the time-delay is known. Emphasis will be placed on the Laguerre filter based adaptive control given in section 4.2.

4.1 The Direct Approach

The SDGPC approach to deal with time-delay systems is formulated as follows.
The system model considered is:

\dot{x}(t) = Ax(t) + Bu(t - \tau_d), \quad y(t) = c^T x(t), \quad \dim(x) = n \quad (4.149)

And the augmented system is given by

\dot{x}_f(t) = A_f x_f(t) + B_f u_d(t - \tau_d), \quad y_f(t) = c_f^T x_f(t), \quad \dim(x_f) = n_f = n + 1 \quad (4.150)

where

x_d(t) = \dot{x}(t), \quad u_d(t - \tau_d) = \dot{u}(t - \tau_d), \quad e(t) = y(t) - w

x_f = \begin{bmatrix} x_d \\ e \end{bmatrix}, \quad A_f = \begin{bmatrix} A & 0 \\ c^T & 0 \end{bmatrix}, \quad B_f = \begin{bmatrix} B \\ 0 \end{bmatrix}, \quad c_f^T = [0 \ \cdots \ 0 \ 1] \quad (4.151)

w is the constant setpoint and \tau_d is the time delay in the system. Consider the performance index

J(t) = \int_0^{T_p} \left[ e^2(t+T) + \lambda u_d^2(t+T) \right] dT + \gamma\, x_f^T(t+T_p)\, x_f(t+T_p) \quad (4.152)

Figure 4.31: Graphical illustration of SDGPC for systems with delay (past controls, setpoint, predicted output and projected control derivatives over the horizon, with the delay interval \tau_d marked)

Assume the projected control signal to be piecewise constant as illustrated in Fig. 4.31. For simplicity, we assume the time delay \tau_d is an integral number N_d of the design sampling interval T_m, that is, N_d = \tau_d / T_m. At the present time t = 0, for a prediction horizon T_p, define

u_d(T) = H(T)\, u_d, \quad 0 \le T < T_p - \tau_d

u_d(T) = H_{\tau_d}(T)\, u_{\tau_d}, \quad T_p - \tau_d \le T \le T_p

where H(T) is the piecewise-constant selection vector introduced earlier and H_{\tau_d}(T) is its counterpart over the delayed portion of the horizon.

4.2 Laguerre Filter Based Adaptive SDGPC

A stable transfer function can be expanded on the Laguerre basis as

G(s) = \sum_{i=1}^{\infty} c_i \frac{\sqrt{2p}}{s+p}\left(\frac{s-p}{s+p}\right)^{i-1} \quad (4.163)

The truncated expression of equation (4.163) can be expressed in network form as depicted in Fig. 4.32.

Figure 4.32: Laguerre Filter Network (a low-pass stage \sqrt{2p}/(s+p) followed by identical all-pass stages (s-p)/(s+p), whose states x_1(t), \ldots, x_N(t) feed a summing circuit)

where p is the time scale selected by the user. The Laguerre network consists of a first-order low-pass filter followed by a bank of identical all-pass filters. Its input u(t) is the process input. The Laguerre states, defined in Fig. 4.32 as x_1(t), x_2(t), \ldots, x_N(t), are weighted to produce the output which matches the output of the process being modeled.
The set of Laguerre functions is particularly appealing because it is simple to represent, is similar to transient signals, and closely resembles Padé approximants. The continuous Laguerre functions form a complete orthonormal set in L_2[0, \infty), and thus allow us to represent any stable system with arbitrary precision [21]. Any stable process can be expanded exactly in an infinite Laguerre series regardless of the value of the time scale p. However, when a truncated series with expansion number N is used, an immediate problem is the choice of the time scale to ensure a fast convergence rate. Parks [56] gave a procedure to determine the optimal value of the time scale based on two measurements of the impulse response of the process being approximated. For open-loop stable, non-oscillatory systems with possibly long and time-varying time delays, a real number p is sufficient to provide good convergence results. Fu and Dumont [29] gave an optimal time scale for the discrete-time Laguerre network and proposed an optimization algorithm to search for the optimal complex time scale p when the process being modeled is highly underdamped. Since the Laguerre network has a state space representation, any state space design method can be applied to the controller design. Dumont and Zervos [22] proposed a simple one-step-ahead predictive controller based on discrete-time Laguerre filters. This algorithm has been commercialized and is routinely used in pulp and paper mills [21]. Dumont, Fu and Lu [23] proposed a GPC algorithm based on a nonlinear Laguerre model in which the linear dynamic part is represented by a series of Laguerre filters and the nonlinear part is a memoryless nonlinear mapping. Simulation shows it has superior performance over the linear approach for systems with severe nonlinearity, such as the chip refiner motor load control problem and the pH control problem.
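For comparison with these rational approximations of dead time, the sketch below evaluates a first-order Padé approximant of a delay, e^{-s\tau} \approx (1 - s\tau/2)/(1 + s\tau/2), against the exact delay on the imaginary axis. This is an illustration of the general idea only, not a computation from the thesis:

```python
# Illustrative sketch (not from the thesis): first-order Pade approximation
# of a dead time, e^{-s*tau} ~ (1 - s*tau/2) / (1 + s*tau/2), compared with
# the exact delay at s = j*w.  Both are all-pass; the error is pure phase error.
import cmath

def delay_exact(w, tau):
    return cmath.exp(-1j * w * tau)

def delay_pade1(w, tau):
    s = 1j * w
    return (1 - s * tau / 2) / (1 + s * tau / 2)

tau = 1.0
for w in (0.1, 0.5, 2.0):
    err = abs(delay_exact(w, tau) - delay_pade1(w, tau))
    print(f"w={w}: |exact - pade| = {err:.4f}")
```

The approximation is excellent at low frequency and degrades as w\tau grows, which is why low-order rational approximations of long delays are of limited use, and one of the motivations for the Laguerre-based modeling pursued in this chapter.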
Recently, Finn, Wahlberg and Ydstie [27] reformulated Dynamic Matrix Control (DMC) based on a discrete-time Laguerre expansion. In this section, we propose the SDGPC algorithm based on continuous-time Laguerre filter modelling. It is shown that the resulting control law is particularly suitable for adaptive control applications in that its computational burden is independent of the prediction horizon used in the SDGPC design. Define x^T(t) = [x_1(t) \ x_2(t) \ \ldots \ x_N(t)]. From Fig. 4.32, we have the following state space expression of the Laguerre network:

\dot{x}(t) = Ax(t) + Bu(t), \quad y(t) = c^T x(t) \quad (4.164)

where

A = \begin{bmatrix} -p & 0 & \cdots & 0 \\ -2p & -p & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ -2p & -2p & \cdots & -p \end{bmatrix}_{N \times N}, \quad B = \sqrt{2p}\begin{bmatrix} 1 \\ \vdots \\ 1 \end{bmatrix}_{N \times 1} \quad (4.165)

c^T = [c_1 \ c_2 \ \cdots \ c_N]_{1 \times N} \quad (4.166)

With the time scale p properly selected, the Laguerre spectrum c^T = [c_1 \ c_2 \ \cdots \ c_N] can be estimated based on the input-output data of the process. In fact, notice that the equation y(t) = c^T x(t) in (4.164) is already in regression model form, so various Recursive Least Squares algorithms can be applied. We settled for the recent EFRA (Exponential Forgetting and Resetting Algorithm) [68], which will be described in chapter 6, section 6.1. Now the SDGPC design procedure can be readily applied to the Laguerre state space equation (4.164). The intended application of Laguerre filter based SDGPC is adaptive control, where the Laguerre coefficients c^T = [c_1 \ c_2 \ \cdots \ c_N]_{1 \times N} are estimated on-line using EFRA [68]. We show in the following that the on-line computational burden is actually independent of the prediction horizon T_p. The main computation involved in the calculation of (2.16) is integration, for example the integration \int_0^{T_p} \Gamma^T(T)cc^T\Gamma_0(T)\,dT in K_1.
By some simple manipulations, we have:

\left[\int_0^{T_p} \Gamma^T(T)\, c\, c^T \Gamma_0(T)\, dT\right]_{ij} = c^T\left(\int_0^{T_p} \gamma_i \gamma_j^T\, dT\right) c \quad (4.167)

where \gamma_i, \gamma_j are the ith and jth columns of \Gamma_0(T) = [\gamma_1 \ \gamma_2 \ \cdots \ \gamma_{N_u}]_{n \times N_u}. The integrals \int_0^{T_p} \gamma_i \gamma_j^T\, dT can be computed off-line and stored. The other integrals in the calculation of (2.16) can be treated similarly, so that the on-line computational burden of the control law (2.15) is independent of the prediction horizon T_p. Although Laguerre filter modeling is known for its capability of dealing with dead time, for a long time-delay it requires a large number of Laguerre filters, causing problems such as a slow convergence rate in the Laguerre coefficient estimation and an increased control law computation burden. An effective way of dealing with this problem is to use the delayed control u(t - \tau_d) as the input to the Laguerre network in Fig. 4.32, resulting in the system model:

\dot{x}(t) = Ax(t) + Bu(t - \tau_d), \quad y(t) = c^T x(t) \quad (4.168)

where \tau_d can be either estimated or just a rough guess based on prior knowledge of the process. Only the uncertainty of the time-delay needs to be taken care of by the Laguerre network modeling. Based on the system model (4.168), the SDGPC for time-delay systems in section 4.1 can be applied.

Example 4: Adaptive Laguerre based SDGPC is shown in this example. The plant is given by

G_4(s) = \frac{e^{-sT}}{0.5s} \quad (4.169)

where the dead time T varies from 5.5s to 4.5s as illustrated in Fig. 4.33. Four Laguerre filters with pole p = 1.2 are used. The delay is assumed to be 5s in the design. Thus the system model can be described as

\dot{x}(t) = \begin{bmatrix} -1.2 & 0 & 0 & 0 \\ -2.4 & -1.2 & 0 & 0 \\ -2.4 & -2.4 & -1.2 & 0 \\ -2.4 & -2.4 & -2.4 & -1.2 \end{bmatrix} x(t) + \begin{bmatrix} 1.5492 \\ 1.5492 \\ 1.5492 \\ 1.5492 \end{bmatrix} u(t - 5), \quad y(t) = c^T x(t) \quad (4.170)

where c^T is the Laguerre filter coefficient vector, which will be estimated using RLS.
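The Laguerre state-space matrices of equations (4.165)-(4.166) are straightforward to construct for any N and p; the following small sketch builds them and checks the values against (4.170) for N = 4, p = 1.2:

```python
# Sketch of the continuous-time Laguerre network matrices of eqs. (4.164)-(4.166):
# A is lower triangular with -p on the diagonal and -2p below it, and
# B = sqrt(2p) * [1, ..., 1]^T.
import math

def laguerre_state_space(N, p):
    A = [[-p if i == j else (-2.0 * p if j < i else 0.0) for j in range(N)]
         for i in range(N)]
    B = [math.sqrt(2.0 * p)] * N
    return A, B

A, B = laguerre_state_space(4, 1.2)
print(B[0])   # ~1.5492, matching eq. (4.170)
```

Only the output row c^T depends on the process being modeled; A and B are fixed once the time scale p and the number of filters N are chosen, which is what makes the on-line estimation problem a plain linear regression.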
The algorithm developed in section 4.1 will be used. The design parameters are:

T_d = 5s, \quad T_{exe} = 0.25s, \quad T_p = 11s, \quad N_u = 6, \quad \lambda = 1, \quad \gamma = 1000 \quad (4.171)

where \gamma is the end-point state weighting. Fig. 4.33 shows the plant output and the reference, the control input to the plant and the control derivative, and the estimated Laguerre coefficients. Between 0 and 80 seconds the dead time is 5.5 seconds, between 80 and 160 seconds T changes to 5 seconds, and after 160 seconds it is reduced to 4.5 seconds. As can be seen, the performance is very good in all three cases. By using the prior delay knowledge, fewer Laguerre filters can be used, resulting in quick convergence of the coefficient estimates and thus fewer transients in the response.

Figure 4.33: Simulation of plant (4.169)

The simulation was repeated under the same conditions as the previous run except that measurement noise was added at the output. Fig. 4.34 shows the system output, the setpoint, and the estimated Laguerre parameters (with forgetting factor 0.98). The algorithm still performs well in the presence of noise.

Figure 4.34: Simulation of plant (4.169) with measurement noise

4.3 Conclusion

Model based predictive control can deal with time-delay systems effectively, and SDGPC is no exception. The Laguerre network requires little a priori information about the system and is able to model varying dead times. The adaptive Laguerre filter based SDGPC developed in section 4.2 is a suitable candidate for most process control applications.
Chapter 5 Anti-windup Design of SDGPC by Optimizing Two Performance Indices

Actuation limits exist in almost all practical control systems. For example, a motor has limited speed, a valve can only operate between fully open and fully closed, etc. Beyond these physical actuator limitations, there are constraints imposed by production requirements. On the other hand, most of the available controller design methods ignore the existence of the saturation nonlinearity. When large disturbances occur, or the operating conditions change over a wide range, it may happen that the theoretical controller output goes beyond the actuator limits. As a result, the control system is effectively operating in open loop, as the input to the plant remains at its limit regardless of the controller output. When this happens, the controller output is wrongly updated. For example, if the controller has integral action, the error will continue to be integrated, resulting in a very large integral term. This effect is called controller windup. The windup difficulty was first experienced in the integrator part of PID controllers. It was recognized later that integrator windup is only a special case of a more general problem. In fact, any controller with relatively slow or unstable modes will experience windup problems if there are actuator constraints [20]. The consequence of windup is significant performance deterioration, large overshoots, or sometimes even instability [8]. Various compensation schemes have been proposed. The anti-reset windup method was proposed by Fertik and Ross [26]. Astrom and Wittenmark [63, pp. 184-185] suggested resetting the integrator at saturation to prevent integrator windup for PID controllers. A general approach where an observer is introduced into the controller structure to prevent windup was proposed by Astrom and Wittenmark [63, pp.
369-373]. The "conditioning technique" was proposed by Hanus [36], and it was found that the conditioned controller can be derived as a special case of the observer-based approach [8]. However, as pointed out in [39], many of these schemes are by and large intuition based. Rigorous stability analyses are rare and there is no general analysis and synthesis theory. Several attempts have been made to unify anti-windup schemes, notably by Walgama and Sternby [81] and Kothare et al. [39]. Since the SDGPC control law (2.17) has integral action, it will also encounter windup difficulties in the case of actuator saturation unless measures are taken. A systematic approach that takes the input constraints into account right from the start of the problem formulation is constrained model based predictive control [69]. This is in fact one of the main advantages of using a model based predictive control strategy. However, one difficulty which comes with this approach is the increased complexity. The more common approach, which most of the aforementioned schemes adopt, is the two-step paradigm: the linear controller is first designed ignoring control input nonlinearities, and an anti-windup algorithm is then added to compensate for the performance degradation due to constraints. This will also be the method of attack used here. This chapter is organized as follows. In section 5.1, an SDGPC algorithm based on two performance indices is given, in which the "nominal" response of the closed-loop system and the integral compensation performance can be designed independently. This was motivated by the work reported in [1, 34, 35, 30], in which a control algorithm with the structure of state feedback plus integral action was developed where the state feedback and the integral feedback gain can be tuned separately.
However, that work was in the framework of continuous-time infinite horizon LQ control. The SDGPC approach has a quite different formulation procedure and an interpretation which naturally leads to the novel anti-windup compensation scheme presented in section 5.2. The importance of this anti-windup scheme is that, under some reasonable assumptions, the overall "two-degree-of-freedom" SDGPC and the anti-windup scheme guarantee closed-loop stability. Section 5.3 concludes the chapter.

5.1 SDGPC Based on Two Performance Indices

The primary goal of introducing integral action in the design of SDGPC is to ensure zero static error for systems tracking a non-zero constant setpoint subject to constant disturbances and, to some degree, modeling error. If there were neither modeling error nor disturbances, there would be no need to introduce an additional integral state. However, models are inevitably wrong and there are always disturbances acting on the plant, thus integral action is always needed. Nevertheless, the argument is that the controller can be designed for good servo step response performance assuming perfect modeling and no disturbance. Integral action, on the other hand, can be added on to compensate for step and impulse disturbances and for modeling error. The key idea is to have the servo performance and the disturbance rejection performance tuned independently. In other words, changing the servo performance shall not affect the disturbance rejection performance, and vice versa.

5.1.1 Optimizing Servo Performance

Consider system (2.1)

\dot{x}(t) = Ax(t) + Bu(t), \quad y(t) = c^T x(t), \quad \dim(x) = n \quad (5.172)

The system is required to track a constant setpoint r_0.
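To make the constant-setpoint target concrete: for a setpoint r_0, the steady-state pair (x_0, u_0) that holds the output at r_0 satisfies 0 = A x_0 + B u_0 together with c^T x_0 = r_0, which is a single (n+1)-dimensional linear system. The sketch below solves it for the third-order plant of Example 5.2 later in this chapter, using a plain Gaussian-elimination routine (the solver itself is an illustration, assuming the augmented matrix is invertible, i.e. no system zero at the origin):

```python
# Sketch: solve [[A, B], [c^T, 0]] [x0; u0] = [0; r0] for the steady-state
# pair (x0, u0).  Plant matrices are those of Example 5.2 in this chapter.
def solve(mat, rhs):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(mat)
    aug = [row[:] + [rhs[i]] for i, row in enumerate(mat)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, n):
            f = aug[r][col] / aug[col][col]
            for k in range(col, n + 1):
                aug[r][k] -= f * aug[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (aug[r][n] - sum(aug[r][k] * x[k] for k in range(r + 1, n))) / aug[r][r]
    return x

A = [[-3.0, -3.0, -1.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
B = [1.0, 0.0, 0.0]
c = [0.0, 0.0, 1.0]
r0 = 1.0

M = [A[i] + [B[i]] for i in range(3)] + [c + [0.0]]
sol = solve(M, [0.0, 0.0, 0.0, r0])
x0, u0 = sol[:3], sol[3]
print(x0, u0)   # for this plant: x0 = [0, 0, r0], u0 = r0
```

For this particular plant the answer can be read off by hand (the integrator chain forces x_1 = x_2 = 0, so u_0 = x_3 = r_0), which is a useful sanity check on the solver.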
If there is no system zero at the origin, a constant u_0 can be found for any r_0 to hold the system state at x_0 such that [41]:

0 = Ax_0 + Bu_0, \quad y_0 = c^T x_0 = r_0 \quad (5.173)

Define the shifted input, the shifted state and the shifted output respectively as

\tilde u(t) = u(t) - u_0, \quad \tilde x(t) = x(t) - x_0, \quad \tilde y(t) = y(t) - r_0 \quad (5.174)

Substituting (5.174) into (5.172) and using (5.173), the shifted variables satisfy the equations

\dot{\tilde x}(t) = A\tilde x(t) + B\tilde u(t), \quad \tilde y(t) = c^T \tilde x(t), \quad \dim(\tilde x) = n \quad (5.175)

A sensible control objective for the system (5.175) is to minimize the following performance index

J(t) = \beta\, \tilde x^T(t+T_p)\tilde x(t+T_p) + \int_0^{T_p} \left\{ [\tilde y(t+T)]^2 + \lambda [\tilde u(t+T)]^2 \right\} dT \quad (5.176)

where \tilde u(\tau), \tau \in [t, t+T_p], is confined to be piecewise constant. As T_p \to \infty, the weighting integral grows without bound while the cross term approaches a constant, so \lim_{T_p\to\infty} K = 0. Similarly, it can be shown that H_x approaches a constant as T_p \to \infty, so \lim_{T_p\to\infty} F = \lim_{T_p\to\infty} K H_x = 0.

Remark: Although the mean level control law (5.223) does not change the setpoint response dynamics from the open-loop ones, its disturbance rejection response can be tuned by selecting a different \xi. A larger \xi results in a faster disturbance rejection rate.

5.2 Anti-windup Scheme

Fig. 5.36 shows the control law (5.208) applied to a system subject to actuator constraints, where

\bar u(t) = \mathrm{sat}[u(t)] = \begin{cases} u(t), & u_{min} \le u(t) \le u_{max} \\ u_{max}, & u(t) > u_{max} \\ u_{min}, & u(t) < u_{min} \end{cases} \quad (5.226)

Figure 5.36: Control subject to actuator constraints

As we mentioned earlier, u(t) consists of the nominal control term u_n(t) and the disturbance rejection control term v(t), i.e.
u(t) = u_n(t) + v(t)

u_n(t) = Fx(t) + K_r r_0 \quad (5.227)

v(t) = \xi\left( Lx(t) + z(t) \right)

We only consider open-loop stable plants here, as it is meaningless to design an anti-windup scheme for strictly unstable systems: it is impossible to stabilize a strictly unstable system, regardless of the control strategy applied, when the system disturbance causes the input to saturate [47]. We first consider the case when u(t) is over the upper limit u_{max}.

Case 1: Do not reset the integrator when both u(t) and u_n(t) exceed the control limit. That is:

if u(t) > u_{max} and u_n(t) > u_{max}, then z_{reset}(t) = z(t) and \bar u(t) = u_{max};
if u(t) < u_{min} and u_n(t) < u_{min}, then z_{reset}(t) = z(t) and \bar u(t) = u_{min} \quad (5.228)

where \bar u(t) is defined as the new controller output after integrator reset. Case 1 happens either when an unreachable setpoint is requested or when the system suffers too large a disturbance. The system is effectively operated in open loop. If the input is saturated long enough, the output of the system will be -c^T A^{-1} B u_{max}.

Case 2: Reset the integrator state when u(t) exceeds the control limit but u_n(t) does not. Stop resetting the integrator when the saturation is over. That is:

1. When saturation occurs:

if u(t) > u_{max} and u_{min} < u_n(t) < u_{max}, then z_{reset}(t) = z(t) + \frac{u_{max} - u(t)}{\xi};
if u(t) < u_{min} and u_{min} < u_n(t) < u_{max}, then z_{reset}(t) = z(t) + \frac{u_{min} - u(t)}{\xi} \quad (5.229)

with \bar u(t) = u_n(t) + \xi\left( Lx(t) + z_{reset}(t) \right).

2. After the saturation is over, the control is updated according to:

z(t) = z(t), \quad \bar u(t) = u_n(t) + \xi\left( Lx(t) + z(t) \right) \quad (5.230)

We have the following theorem regarding scheme (5.229).

Theorem 5.4 Scheme (5.229) guarantees \bar u(t) = \mathrm{sat}[u(t)], where sat is defined in (5.226) and \bar u(t) is the controller output after the integrator reset.
Proof:

Consider the case when u(t) > u_{max} and u_{min} < u_n(t) < u_{max}. By direct calculation we have

\bar u(t) = u_n(t) + \xi\left( Lx(t) + z(t) + \frac{u_{max} - u(t)}{\xi} \right)
         = u_{max} + u_n(t) + \xi\left( Lx(t) + z(t) \right) - u(t) = u_{max} \quad (5.231)

Thus \bar u(t) = \mathrm{sat}[u(t)] = u_{max}. \blacksquare

Remark: Theorem 5.4 essentially claims that the nonlinear control problem with saturated u(t) can be transformed into a linear control problem by scheme (5.229).

Case 3: Reset the feedback gain \xi, but keep the integrator state unchanged, when u(t) exceeds the control limit but u_n(t) does not. After the saturation is over, change \xi according to equation (5.233). That is:

1. When saturation occurs:

if u(t) > u_{max} and u_{min} < u_n(t) < u_{max}, then \xi_{reset} = \xi + \frac{u_{max} - u(t)}{Lx(t) + z(t)};
if u(t) < u_{min} and u_{min} < u_n(t) < u_{max}, then \xi_{reset} = \xi + \frac{u_{min} - u(t)}{Lx(t) + z(t)} \quad (5.232)

with \bar u(t) = u_n(t) + \xi_{reset}\left( Lx(t) + z(t) \right).

2. After the saturation is over, the disturbance feedback gain \xi(t) should fulfill the following equation:

\xi(t)\, e^{\int_{t_0}^{t} LB\,\xi(\tau)\, d\tau} = \xi_{reset}(t_0)\, e^{\xi LB (t - t_0)} \quad (5.233)

where t_0 denotes the time at which the saturation is just over, \xi is the originally designed feedback gain without considering saturation, and \xi_{reset}(t_0) is the reset \xi at t_0 obtained from (5.232).

Lemma 5.1 The integrator reset algorithm given by (5.229) and (5.230) is equivalent to the gain reset algorithm given by (5.232) and (5.233).

We only prove the case when the control exceeds the upper limit u_{max}; the case when u(t) < u_{min} can be proved in the same manner.

Proof:

1. When the control is saturated, i.e.
u(t) > u_{max}: from Theorem 5.4, it was shown that (5.229) gives \bar u(t) = u_{max}, while in (5.232) we have

\bar u(t) = u_n(t) + \xi\left( Lx(t) + z(t) \right) + u_{max} - u(t) = u_{max} \quad (5.234)

So both (5.229) and (5.232) give the same control action when u(t) > u_{max}.

2. After the saturation is over, denote that moment as t_0. We have

u_1(t_0) = u_n(t_0) + \xi\left( Lx(t_0) + z_{reset}(t_0) \right)

u_2(t_0) = u_n(t_0) + \xi_{reset}(t_0)\left( Lx(t_0) + z(t_0) \right) \quad (5.235)

where u_1(t_0), u_2(t_0) denote the control actions of the integrator reset algorithm and the gain reset algorithm respectively at t_0. For t > t_0, the two controls evolve as u_1(t) = u_n(t) + \xi\left( Lx(t) + z(t) \right) and u_2(t) = u_n(t) + \xi(t)\left( Lx(t) + z(t) \right), and for \xi(t) given by (5.233) it is easy to show that u_1(t) = u_2(t) using (5.236). We conclude that the integral state reset algorithm described by (5.229) and (5.230) and the gain reset algorithm described by (5.232) and (5.233) give the same unsaturated control action during the saturation period and thereafter, and are thus equivalent. \blacksquare

Remark: Lemma 5.1 claims that the nonlinear control problem with saturated u(t) is equivalent to a linear time-varying control problem with the time-varying feedback gain given by (5.232) and (5.233).

Lemma 5.2 For the gain reset scheme (5.232) and (5.233), the sign of \xi_{reset}(t) during saturation and the sign of \xi(t) after saturation are the same as the sign of the originally designed \xi.

Proof:

1. During saturation, from (5.232), we have

\xi_{reset}(t) = \xi + \frac{u_{max} - u(t)}{Lx(t) + z(t)} = \frac{u_{max} - u_n(t)}{Lx(t) + z(t)} \quad (5.241)

Since u_{max} - u_n(t) > 0 by assumption, we have

\xi_{reset}(t)\left( Lx(t) + z(t) \right) > 0 \quad (5.242)

We also have u(t) = u_n(t) + \xi\left( Lx(t) + z(t) \right) > u_{max}, thus

\xi\left( Lx(t) + z(t) \right) > u_{max} - u_n(t) > 0 \quad (5.243)

Equations (5.242) and (5.243) hold at the same time only if \xi and \xi_{reset}(t) have the same sign.

2.
After the saturation, from (5.233), it is obvious that

\frac{\xi(t)}{\xi_{reset}(t_0)} = \frac{e^{\xi LB (t - t_0)}}{e^{\int_{t_0}^{t} LB\,\xi(\tau)\, d\tau}} > 0 \quad (5.244)

since both e^{\xi LB (t - t_0)} and e^{\int_{t_0}^{t} LB\,\xi(\tau)\, d\tau} are greater than zero. Thus \xi(t) has the same sign as \xi_{reset}(t_0), and therefore the same sign as \xi. Moreover, \lim_{t\to\infty} \xi(t) = \xi after the saturation, according to (5.233). Thus both condition 1 and condition 2 of Theorem 5.2 are met. The resulting time-varying linear control law is exponentially stable for the overall system, i.e. the linear system (5.172) with the saturation nonlinearity as depicted in Fig. 5.36. The equivalent integrator state reset algorithm (5.229) and (5.230) is also exponentially stable. Further, since \lim_{t\to\infty} v(t) = \lim_{t\to\infty} \xi\left( Lx(t) + z(t) \right) = \mathrm{constant} and \lim_{t\to\infty} x(t) = \mathrm{constant}, we have \lim_{t\to\infty} z(t) = \lim_{t\to\infty} \int_0^t (r_0 - y(\tau))\, d\tau = \mathrm{constant}, which means \lim_{t\to\infty} e(t) = \lim_{t\to\infty} (r_0 - y(t)) = 0. \blacksquare

Some examples are presented here to show the effectiveness of the TDF-SDGPC and the anti-windup algorithm.

Example 5.1 The first example is a simple integrator process.

G(s) = \frac{1}{s} \quad (5.245)

The actuator limits are \pm 0.1. Designing the SDGPC algorithm with two performance indices, using the same design parameters

N_u = 1, \quad T_p = 1.5\,\mathrm{sec}, \quad \lambda = \gamma = 0 \quad (5.246)

gives the control law

u(t) = u_n(t) + v(t), \quad u_n(t) = -y(t) + r_0, \quad v(t) = -y(t) + z(t) \quad (5.247)

Fig. 5.37 shows the control results of control law (5.247) without anti-windup compensation. The overshoot in Fig. 5.37(a) can be clearly observed.

Figure 5.37: Example 5.1: Control law (5.247) without anti-windup compensation ((a) setpoint and output; (b) control; (c) integral of tracking error; (d) nominal control u_n(t))

Fig.
5.38 shows the control results of control law (5.247) with the anti-windup scheme. From Fig. 5.38(c), the integral state is reset just after the 20 second mark, at which time the nominal control is within the control limits. The effectiveness of the algorithm is obvious.

Figure 5.38: Example 5.1: Control law (5.247) with anti-windup compensation ((a) setpoint and output; (b) control, before and after saturation; (c) integral of tracking error, showing the integrator reset; (d) nominal control)

Example 5.2 The process for the second example is described by

\dot{x}(t) = \begin{bmatrix} -3 & -3 & -1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} x(t) + \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} u(t), \quad y(t) = [0 \ 0 \ 1]\, x(t) \quad (5.248)

The actuator limits are \pm 3.5. For the nominal control law u_n(t), the design parameters are

N_u = 10, \quad T_p = 2\,\mathrm{sec}, \quad \lambda = 0.0001, \quad \gamma = 1000 \quad (5.249)

The resulting nominal control law is given by

u_n(t) = Fx(t) + K_r r_0, \quad F = [-4.7599 \ -25.9369 \ -51.4516], \quad K_r = 52.4516 \quad (5.250)

For the control v(t), the prediction horizon T_p is selected such that the pole is placed at -2, and \xi = 104.9031. The overall control law is

u(t) = (F + \xi L)x(t) + K_r r_0 + \xi z(t) = [-6.7599 \ -41.4567 \ -109.3254]\, x(t) + 52.4516\, r_0 + 104.9031\, z(t) \quad (5.251)

The control results for the control law (5.251) with the anti-windup algorithm are shown in Fig. 5.39. The effectiveness of the anti-windup algorithm can again be observed by comparing Fig. 5.39 with Fig. 5.40, in which no anti-windup algorithm is used. Notice the integral state reset in Fig. 5.39(c) after each setpoint change.
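The Case-2 integrator reset of equation (5.229) and the claim of Theorem 5.4 can be checked numerically. The sketch below uses the saturation function of (5.226) and a hypothetical operating point (the numbers Lx, z and the limits are illustrative choices, not values from the examples):

```python
# Hedged sketch of the Case-2 integrator reset (5.229): when the raw control
# u = u_n + xi*(L x + z) exceeds u_max while the nominal part u_n does not,
# reset z so that the recomputed control sits exactly at the limit
# (Theorem 5.4).
def sat(u, u_min, u_max):
    """Saturation nonlinearity of eq. (5.226)."""
    return max(u_min, min(u_max, u))

def control(u_n, Lx, z, xi):
    return u_n + xi * (Lx + z)

u_min, u_max, xi = -0.1, 0.1, 0.9998
u_n, Lx, z = 0.05, 0.3, 0.2            # hypothetical operating point
u_raw = control(u_n, Lx, z, xi)        # well above u_max
assert u_raw > u_max and u_min < u_n < u_max

z_reset = z + (u_max - u_raw) / xi     # integrator reset of (5.229)
u_new = control(u_n, Lx, z_reset, xi)
print(u_new, sat(u_raw, u_min, u_max))  # both equal u_max
```

The algebra behind the check is exactly the calculation of (5.231): the reset term cancels the excess, so the recomputed control equals sat[u(t)] and the loop never "winds up" beyond the limit.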
Figure 5.39: Example 5.2: Control law (5.251) with anti-windup compensation (panels: (a) setpoint and output, (b) control, (c) integral of tracking error, (d) nominal control)

Figure 5.40: Example 5.2: Control law (5.251) without anti-windup compensation (panels: (a) setpoint and output, (b) control, (c) integral of tracking error, (d) nominal control)

The integrator reset algorithm has a strong similarity with the conventional anti-windup PID controller, whose structure is depicted in Fig. 5.41.

Figure 5.41: Conventional anti-windup

The performance of the conventional anti-windup algorithm and that of the proposed scheme are compared in the next example.

Example 5.3 The plant being controlled is a simple integrator with a random disturbance injected at the input. The saturation limits are ±0.1 and the control law is given in equation (5.252):

G(s) = 1/s, −0.1 ≤ u(t) ≤ 0.1

u(t) = −1.9998 y(t) + 0.9998 r_0 + 0.9998 ∫_0^t (r_0 − y(τ))dτ
u_n(t) = −0.9998 y(t) + 0.9998 r_0    (5.252)
v(t) = −y(t) + 0.9998 ∫_0^t (r_0 − y(τ))dτ

The reset gain K as depicted in Fig. 5.41 is selected to ensure good performance for the conventional algorithm. Fig. 5.42 shows the results. The first three plots are for the conventional method with different reset gains. The observation is that although good performance can be obtained for a properly selected gain K, finding it is nonetheless a nontrivial trial and error procedure. Moreover, the effect of changing K on the performance is "non-monotonic". This makes the tuning more difficult.
On the other hand, the proposed scheme gives excellent results in a straightforward manner, as can be seen from the fourth plot of Fig. 5.42.

Figure 5.42: Conventional anti-windup scheme (K = 0.02, K = 0.5, K = 10) vs the proposed scheme; the accumulated cost (1/N)Σy²(t) is shown on each plot

We make some remarks to conclude this section.

1. The stability result stated in Theorem 5.5 also applies to unstable systems provided that u_min < u_n(t) < u_max holds. However, for unstable plants it is always possible to find a system state x_0 such that u_n(t) = Fx_0 + K_r r_0 > u_max or u_n(t) < u_min. This will drive the system into open loop mode and thus destabilize it.

2. For open loop stable systems, it is also desirable to have

u_n(t) = Fx(t) + K_r r_0    (5.253)

with u_min ≤ u_n(t) ≤ u_max.

y(t) = φ_1(t)θ_1 + φ_2(t)θ_2 + ... + φ_n(t)θ_n = φ(t)^T θ    (6.255)

where y is the observed variable and θ_1, θ_2, ..., θ_n are unknown parameters. For n > 2, the corresponding equations can be obtained in a similar fashion. It is obvious that both the sampling interval Δ and the integration interval T affect the estimates. The sampling interval Δ directly affects the accuracy of the numerical integration (6.266), thus Δ should be kept small. However, too small a Δ may also lead to inaccurate estimates due to round-off error. As a rule of thumb, the sampling interval should be chosen according to (6.270), i.e. as a suitable fraction of T_s, where T_s = π/ω_max is the Shannon maximum sampling period [86].
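The moving-window integration idea behind the regression model can be made concrete with a small numerical experiment. The sketch below uses a first-order model dy/dt + a y = b u: integrating both sides over a sliding window [t_0, t_0 + T] removes the unknown initial conditions and yields one algebraic regression row per window, from which (a, b) follow by least squares. This is a single-integration, batch least-squares stand-in for the thesis's n-fold integral I_n with recursive estimation; the true parameters (a = 2, b = 3), the square-wave input and the step sizes are all illustrative assumptions.

```python
# Moving-window integration turns  dy/dt + a*y = b*u  into the algebraic
# regression  y(t0+T) - y(t0) = -a * Int[y] + b * Int[u]  over each
# window [t0, t0+T]; sliding t0 gives many rows, solved by least squares.

a_true, b_true = 2.0, 3.0
dt, T_win, t_end = 0.001, 0.5, 10.0
n, w = int(t_end / dt), int(T_win / dt)

# simulate dy/dt = -a*y + b*u with a square-wave input (forward Euler)
u = [1.0 if (k // 1000) % 2 == 0 else -1.0 for k in range(n)]
y = [0.0] * n
for k in range(n - 1):
    y[k + 1] = y[k] + dt * (-a_true * y[k] + b_true * u[k])

def cumtrapz(s):
    """Trapezoidal running integral of a sampled signal."""
    out = [0.0] * len(s)
    for k in range(1, len(s)):
        out[k] = out[k - 1] + 0.5 * dt * (s[k] + s[k - 1])
    return out

Y, U = cumtrapz(y), cumtrapz(u)

# normal equations for theta = (a, b) with regressors (-Int[y], Int[u])
s11 = s12 = s22 = r1 = r2 = 0.0
for k in range(0, n - w, w):
    p1, p2 = -(Y[k + w] - Y[k]), U[k + w] - U[k]
    d = y[k + w] - y[k]
    s11 += p1 * p1; s12 += p1 * p2; s22 += p2 * p2
    r1 += p1 * d;   r2 += p2 * d
det = s11 * s22 - s12 * s12
a_hat = (s22 * r1 - s12 * r2) / det
b_hat = (-s12 * r1 + s11 * r2) / det
print(f"a_hat = {a_hat:.3f}, b_hat = {b_hat:.3f}")  # close to (2, 3)
```

Note that no derivative of y is ever computed; only windowed integrals of the measured signals appear, which is what makes the approach tolerant of measurement noise.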
In obtaining the regression model (6.264), the multiple integral operation I_n defined in (6.263), which functions as a pre-filter, is applied to both sides of the system model (6.262). Intuitively, the bandwidth of this pre-filter should match that of the system (6.262), so that the noise in the measurement data is suppressed while, at the same time, the "richness" of the input/output signal, which contains the information about the dynamics of the system (6.262), is not rounded off. The Laplace transform of the multiple integrator (6.263) over a time length of T is [67]

L[I_n(·)] = ((1 − e^{−sT})^n / s^n) L[(·)] = Ξ(s) L[(·)]    (6.271)

It is clear from (6.271) that the integration span T of the multiple integrator I_n should be selected such that the bandwidth of the multiple integrator transfer function Ξ(s) matches the bandwidth of the system being identified.

6.2 The EFRA

For the regression model (6.264), the standard RLS algorithm is given by the following formulae:

θ̂(t_0) = θ̂(t_0 − 1) + K(t_0) (I_n y(t_0) − φ^T(t_0) θ̂(t_0 − 1))
K(t_0) = P(t_0 − 1) φ(t_0) / (λ + φ^T(t_0) P(t_0 − 1) φ(t_0))    (6.272)
P(t_0) = (I − K(t_0) φ^T(t_0)) P(t_0 − 1)

Substituting equation (6.276) into equation (6.275), we have

ÿ + a_{10} ẏ + a_{20} y + a_{11} t ẏ + a_{21} t y = b_{10} u + b_{11} t u    (6.277)

Applying I_2 as defined in (6.263) to both sides of (6.277) over [t_0, t_0 + T], a regression model of the form (6.264) can be obtained, now with the augmented parameter vector (a_{10}, a_{20}, a_{11}, a_{21}, b_{10}, b_{11}). It is important to select the resetting period T_res properly; the principle is that T_res must be chosen large enough to allow reasonable convergence of the parameters, while the variation of the real parameters over the period T_res should stay small enough that the offset linear ramp is still a good approximation. The following example shows the effectiveness of this algorithm.
Example 6.3.1 Consider system (6.275) with parameters described by

a_1(t) = 2 + 1.5 sin(0.2πt), 0 ≤ t ≤ 30 sec
a_2(t) = 1, 0 ≤ t ≤ 30 sec    (6.282)
b(t) = 2 + 0.1t, 0 ≤ t ≤ 30 sec

The simulation is performed in open loop with a PRBS signal as input and the following settings: sampling interval Δ = 0.01 sec, integration interval T = 0.05 sec and resetting period T_res = 0.08 sec. The standard RLS algorithm (6.272) is used. The results are depicted in Fig. 6.44.

Figure 6.44: Estimation of time-varying parameters

As can be seen from Fig. 6.44, even the sinusoidally time-varying parameter can be tracked satisfactorily. This verifies the validity of our assumption in equation (6.276).

6.4 Identification and Control of an Inverted Pendulum

The control of an inverted pendulum is a classic topic in control engineering. There are many solutions available to this problem, for example PID, LQG, fuzzy logic etc. The SDGPC solution will be given in this section. The advantage of continuous-time parameter estimation over the discrete-time method is highlighted by a comparison between the two approaches. Fig. 6.45 is a picture of the experimental setup², which is built on a used printer. The pendulum rod is mounted on the printer head and is able to freewheel 360 degrees around its axis. The printer head is driven by a DC motor along the x axis. Both the printer head position x and the pendulum angle θ are available for measurement through an LVDT and an encoder attached to the printer head. The control input to the system is the voltage applied to the DC motor. The purpose of the control is to keep the pendulum rod upward and at the same time keep the printer head at the center position.

² The author thanks Dr. A. Elnaggar, who was then a research engineer at the Pulp & Paper Centre, for making this experimental setup available.
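The tracking behavior in Example 6.3.1 can be illustrated with a minimal scalar recursion. The sketch below uses forgetting-factor RLS, a simpler stand-in for the thesis's EFRA/covariance-resetting machinery, to follow a drifting parameter in the noise-free model y_k = θ_k u_k. The ramp θ(t) = 2 + 0.1t mirrors b(t) in the example; the forgetting factor, initial covariance and switching input are illustrative choices, not thesis values.

```python
# Scalar RLS with forgetting factor tracking a drifting parameter,
# a minimal stand-in for the resetting scheme of Example 6.3.1.
# Model: y_k = theta_k * u_k, theta(t) = 2 + 0.1*t (cf. b(t) in the example).

dt, lam = 0.01, 0.95          # step size and forgetting factor (illustrative)
theta_hat, P = 0.0, 100.0     # initial estimate and covariance
t = 0.0
for k in range(3000):         # 30 seconds
    t = k * dt
    u = 1.0 if (k // 50) % 2 == 0 else -1.0   # PRBS-like switching input
    theta = 2.0 + 0.1 * t     # true time-varying parameter
    yk = theta * u
    K = P * u / (lam + u * u * P)             # RLS gain
    theta_hat += K * (yk - u * theta_hat)     # prediction-error update
    P = (P - K * u * P) / lam                 # discounted covariance update
print(f"true theta = {2.0 + 0.1 * t:.3f}, estimate = {theta_hat:.3f}")
```

With λ = 0.95 the effective memory is about 20 samples, so the estimate lags the ramp by only a few hundredths, consistent with the satisfactory tracking reported for Fig. 6.44.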
Figure 6.45: The inverted pendulum experimental setup

6.4.1 System Model

The printer head position is proportional to the angular displacement of the DC motor. Thus the transfer function from the input voltage u(t) to the printer head position x(t) has the form

G_m(s) = x(s)/u(s) = k_m/(s(τ_m s + 1)) = b_0/(s² + a_1 s)    (6.283)

Only two parameters b_0, a_1 need to be estimated. As for the relation between the printer head position x(t) and the pendulum angle θ(t), let us consider the downward pendulum first. Fig. 6.46 is an idealized sketch of the downward pendulum. Notice that at the equilibrium point θ_0 = 0, only the acceleration of the printer head M will break the equilibrium. Suppose M is moving with acceleration ẍ(t); observing from the moving M, the effect of ẍ(t) on m is as though a force mẍ(t) were applied in the opposite direction. Applying Newton's law in the θ direction as indicated in Fig. 6.46, the torque balance is

−mg·L sin θ − εθ̇ + mẍ(t)·L cos θ = mL²θ̈    (6.284)

where L, m are the length and the mass of the idealized pendulum respectively, g is the gravity constant and ε is the friction coefficient. Assuming small θ and linearizing around θ_0 = 0, equation (6.284) becomes

θ̈ + a_1θ̇ + a_2θ = bẍ(t)
G_down(s) = θ(s)/x(s) = bs²/(s² + a_1 s + a_2)    (6.285)

Figure 6.46: Downward pendulum

Similarly, the torque balance for the upward pendulum case is

mg·L sin θ − εθ̇ + mẍ(t)·L cos θ = mL²θ̈    (6.286)

and the linearized model around θ_0 = 0 can be written as

θ̈ + a_1θ̇ − a_2θ = bẍ(t)    (6.287)

Figure 6.47: Upward pendulum

Comparing G_down(s) in (6.285) and G_up(s) in (6.287), it is easy to see that the parameters in
G_down(s) and G_up(s) are the same except for a sign difference in a_2. This important a priori information is only preserved in the continuous-time model! Since the upward pendulum is open loop unstable, it is very difficult, if not impossible, to estimate G_up(s) directly without stabilizing it first. However, with continuous-time modeling, it is possible to estimate a_1, a_2 and b for the downward pendulum, which is open-loop stable. The estimation results will be presented in the next subsection.

6.4.2 Parameter Estimation

The experiment was conducted on the downward pendulum. The input voltage applied to the DC motor is a PRBS (Pseudo Random Binary Sequence) signal with an amplitude of ±1 volt and a length of 2⁸ − 1 = 255 samples. The sampling interval Δ = 0.1 sec. Both the printer head position x(t) and the angle θ(t) are measured. For the model structure given by (6.283), the continuous-time regression model (6.264) and the standard RLS (6.272) can be readily applied. The integration interval T = 0.3 sec. The input/output data and the estimated parameters b_0, a_1 are depicted in Fig. 6.48, where b̂_0 = 10250 and â_1 = 12.59.

Figure 6.48: Parameter estimation of model (6.283) (panels: input voltage to DC motor, printer head position, parameter estimates)

The estimated model from input voltage to printer head position is thus

G_m = 10250/(s² + 12.59s) = 814.09/(s(0.0794s + 1))    (6.288)

Since the time constant τ_m = 0.0794 sec is very small, (6.288) can be reasonably approximated by

G_m = 814.09/s    (6.289)

For the identification of the system model from printer head position to pendulum angle, a 255 sample PRBS signal with an amplitude of 1 V is applied to the motor. The sampling interval is again Δ = 0.1 sec. The model parameters a_1, a_2 and b of G_down(s) in (6.285) are estimated using the algorithms given in previous sections of this chapter.
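The value of the preserved a priori information can be checked directly: given the estimates (â_1, â_2) reported in this section for the downward model, the poles of both (6.285) and (6.287) follow from the quadratic formula, and the single sign flip in a_2 converts a lightly damped complex pair into a stable/unstable real pair. A short sketch (the numbers are the section's reported estimates):

```python
# The only difference between the downward model (6.285) and the upward
# model (6.287) is the sign of a_2.  With the estimates reported in this
# section (a1 = 0.01418, a2 = 43.9627), the quadratic formula shows the
# effect of that single sign flip on the poles of s^2 + a1*s +/- a2.

import cmath

def poles(a1, a2):
    """Roots of s^2 + a1*s + a2 = 0."""
    disc = cmath.sqrt(a1 * a1 - 4.0 * a2)
    return (-a1 + disc) / 2.0, (-a1 - disc) / 2.0

a1, a2 = 0.01418, 43.9627
down = poles(a1, a2)    # approx -0.0071 +/- 6.6304j (lightly damped)
up = poles(a1, -a2)     # approx +6.62 and -6.64 (one unstable real pole)
print("downward poles:", down)
print("upward poles:  ", up)
```

This is exactly why the downward (stable) pendulum can be identified in open loop and the result transferred to the unstable upward model.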
The printer head position, the angle output data and the estimated parameters a_1, a_2, b are depicted in Fig. 6.49, with â_1 = 0.01418, â_2 = 43.9627, b̂ = 0.1125.

Figure 6.49: Parameter estimation of model (6.285) (panels: printer head position, pendulum angle, parameter estimates)

The identified model G_down(s) is thus

G_down(s) = 0.1125s²/(s² + 0.01418s + 43.9627)    (6.290)

with poles at −0.0071 ± 6.6304i. The open loop unstable upward pendulum model (6.287) can be readily written as

G_up(s) = 0.1125s²/(s² + 0.01418s − 43.9627)    (6.291)

with poles at −6.6375 and 6.6233, which correspond to a time constant of approximately 0.15 sec. The sampling interval of Δ = 0.1 sec used in the experiment is relatively large for this system, in either the downward or the upward case. Unfortunately, that was the smallest sampling interval we could obtain due to computer system limitations.

It is interesting to see how discrete-time estimation performs with the same experimental data. The discrete-time counterpart of system model (6.285) has the form

θ(k) + a_{d1}θ(k − 1) + a_{d2}θ(k − 2) = b_{d1}x(k) + b_{d2}x(k − 1) + b_{d3}x(k − 2)
θ(k)/x(k) = (b_{d1} + b_{d2}q^{−1} + b_{d3}q^{−2})/(1 + a_{d1}q^{−1} + a_{d2}q^{−2})    (6.292)

The MATLAB function ARX in the System Identification Toolbox was used on the same data set to identify the parameters in (6.292), i.e.

arx([θ x], [na nb nk]), na = 2, nb = 3, nk = 0    (6.293)

The parameter estimates are

[b_{d1} b_{d2} b_{d3}] = [0.093 −0.1863 0.0923]
[a_{d1} a_{d2}] = [−1.5685 0.9663]    (6.294)

Fig. 6.50 shows the step response of model (6.292) with the estimated parameters.

Figure 6.50: Step response of the estimated discrete-time model (6.292)

Fig. 6.50 tells us that the system has a natural undamped frequency of 1.05.
This agrees quite satisfactorily with what we have from the continuous-time identification approach; see (6.290), where f = √43.9627/(2π) = 1.0553. However, the damping factor is quite different from what we obtained in (6.290). From Fig. 6.50, the pendulum should settle in about 30 seconds after a step position input, whereas experiment shows that the pendulum oscillation lasts about 400 seconds after a hit on the printer head, which agrees quite well with the continuous-time estimation results. Also notice from Fig. 6.50 that there is a nonzero steady state gain in the estimated discrete-time model. This is obviously wrong: it has been shown that the system has two zeros at the origin, see (6.285). This important a priori information is also lost in the discrete-time modelling. As a matter of fact, the continuous-time counterpart of the estimated discrete-time model with sampling interval Δ = 0.1 sec and zero-order hold is

G(s) = (0.093s² + 0.2004s − 0.1036)/(s² + 0.3428s + 41.91)    (6.295)

Comparing (6.295) with (6.290), the superior performance of continuous-time identification for this example is obvious.

6.4.3 Controller Design

Define the states of the pendulum system and the input voltage to the DC motor as [θ̇(t) θ(t) ẋ(t) x(t)]^T and u(t) respectively. The state space description of the system can be written, based on (6.289) and (6.291), as

d/dt [θ̇; θ; ẋ; x] = [−0.01418 43.9627 0 0; 1 0 0 0; 0 0 0 0; 0 0 1 0][θ̇; θ; ẋ; x] + [−91.5851; 0; 814.09; 0] u_d(t)    (6.296)

where u_d(t) = u̇(t).

In the SDGPC framework, system (6.296) can be regarded as the integrator augmented system of

d/dt [θ̇; θ; x] = [−0.01418 43.9627 0; 1 0 0; 0 0 0][θ̇; θ; x] + [−91.5851; 0; 814.09]
u(t)    (6.297)
x(t) = [0 0 1] [θ̇; θ; x]

Apply the final state weighting SDGPC (2.20) to (6.296) with design parameters

N_u = 6, T_p = 1.2 sec, λ = 1, γ = 1    (6.298)

The resulting control law is

u_d(t) = −[0.1905 1.2643 −0.0069 −0.0120] [θ̇; θ; ẋ; x]    (6.299)

The control law (6.299) is implemented in the following form:

u(t) = −0.1905 θ(t) − 1.2643 ∫_0^t θ(τ)dτ + 0.0069 (x(t) − 1100) + 0.0120 ∫_0^t (x(τ) − 1100)dτ    (6.300)

The number 1100 in (6.300) is the LVDT reading corresponding to the center position. The picture in Fig. 6.52 shows the pendulum being successfully controlled by (6.300). Notice that the control law is fairly robust against a disturbance (a plastic bag was placed on top of the pendulum after (6.300) was applied). The same control law can also stabilize the pendulum when the rod is stretched to twice its original length; see Fig. 6.51.

Figure 6.52: SDGPC of pendulum subject to disturbance

It was found in the experiment that it is not difficult to stabilize the pendulum, i.e. to keep it upward, but it is not easy to keep the printer head exactly in the center position while keeping the pendulum upward. Also notice that the system model developed in subsection 6.4.1 is an idealized one, with the assumption that the pendulum rod is a rigid body; as the length of the rod increases, so does its flexibility. An adaptive version of the SDGPC would have been more interesting, but unfortunately the experimental setup was only available for a limited period of time. Nevertheless, the experiment results show that the continuous-time model parameter estimation algorithm and the SDGPC algorithm are quite effective in solving practical control problems.
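A straightforward way to realize the control law (6.300) on sampled hardware is to accumulate the two integral terms at each sample. The sketch below uses the gains and the LVDT center reading (1100) from (6.300); the rectangular integration rule, the 0.1 sec step and the constant synthetic sensor readings are illustrative assumptions, not details from the thesis.

```python
# Discretized realization of control law (6.300): the two integral terms
# are accumulated with a rectangular rule at the sampling interval.
# Gains and the center reading 1100 are from (6.300); the integration
# rule and the synthetic test signals below are illustrative.

DT = 0.1           # sampling interval used in the experiment (sec)
X_CENTER = 1100.0  # LVDT reading at the center position

class PendulumController:
    def __init__(self):
        self.int_theta = 0.0   # running integral of theta
        self.int_xerr = 0.0    # running integral of (x - 1100)

    def update(self, theta, x):
        self.int_theta += theta * DT
        self.int_xerr += (x - X_CENTER) * DT
        return (-0.1905 * theta
                - 1.2643 * self.int_theta
                + 0.0069 * (x - X_CENTER)
                + 0.0120 * self.int_xerr)

ctrl = PendulumController()
# feed 10 samples of a constant small tilt with the head at center
for _ in range(10):
    u = ctrl.update(theta=0.01, x=X_CENTER)
print(f"u after 1 sec = {u:.6f}")  # -0.1905*0.01 - 1.2643*0.01 = -0.014548
```

Note that u here is the motor voltage itself; the SDGPC design produced u_d = u̇ in (6.299), and (6.300) is its integrated form, which is why only integrals (and no derivatives) of the measurements appear.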
6.5 Conclusion

Continuous-time system identification based on sampled data was considered in this chapter. The moving horizon integration approach given in section 6.1 is a simple yet powerful method for parameter estimation of a continuous-time model. Based on the regression model (6.264), various available recursive estimation algorithms, such as the EFRA developed in the discrete-time context, can be readily applied. The algorithm proposed in section 6.3 can deal with fast time-varying parameters, as was shown by simulation. A real-life inverted pendulum experiment in section 6.4 showed the benefits of continuous-time identification, namely the effective use of a priori information. The identification method in this chapter together with the SDGPC algorithm offers an effective way of solving complicated control problems: the insight into the underlying continuous-time process is never lost during the whole design process. It is the author's belief that even if the control law is designed in the discrete-time domain, it is always beneficial to identify the underlying continuous-time process first and then discretize it. The experiment presented in section 6.4 can be regarded as a supporting example.

Chapter 7
Conclusions

A new predictive control approach was taken in this thesis. Important issues in practical applications, such as actuator saturation, were taken into account. The resulting algorithms are of considerable practical interest and have attractive theoretical properties. The work can be summarized as follows.

1. A new predictive control algorithm, SDGPC, has been developed. It possesses the inherent robustness (gain and phase margin) and stability properties of the infinite horizon LQ regulator and, at the same time, has the constraint handling flexibility of the finite horizon formulation, a feature unique to MBPC.
SDGPC distinguishes itself from the rest of the MBPC family in that it is based on continuous-time modelling yet assumes digital implementation. This formulation stresses the connection rather than the differences between continuous-time and discrete-time control. It has been shown by simulation that for a stable, well damped process, the execution time T_exe can vary from 0, which corresponds to continuous-time control, to the design sampling interval T_m, which can be quite large, without affecting the servo performance significantly. This means that for a given prediction horizon T_p and desired sampling interval T_exe, a larger T_m can be selected to reduce the computational burden in adaptive applications. For unstable and/or lightly damped processes, however, T_exe should be equal to T_m.

2. SDGPC for tracking systems has a two-degree-of-freedom design structure. This is achieved by assuming different models for the reference signal and the disturbance; however, only one performance index is used to obtain the control law. Tracking performance can be improved radically when future setpoint information is available, because knowing the future setpoint is equivalent to knowing the complete state information of the reference signal at the present time.

3. Another two-degree-of-freedom design extension to SDGPC was made. Contrary to the approach taken in the tracking system design, in which different models for reference and disturbance were assumed, two performance indices are used while assuming that the reference and the disturbance
The resulting scheme effectively transforms the nonlinear problem into a time-varying linear problem and was shown to have guaranteed stability property. Simulation results confirmed the effectiveness of the scheme. 4. Control of time-delay systems was considered. Laguerre filter based adaptive S D G P C was shown to be particularly effective in dealing with time-delay systems. A priori information about the time-delay can be utilized to improve the control performance significandy. 5. A continuous-time model parameter estimation algorithm based on sampled data was developed. Numerical integration on a moving interval was used to eliminate the initial condition problem. It was argued that even i f the controller design is purely in discrete-time, it is always beneficial to identify the underlying continuous-time model first before discretizing. A priori information about the physical system is best utilized in continuous-time modelling. The continuous-time model identification method and the S D G P C algorithm were applied to an inverted pendulum experiment. The results confirmed the benefits of continuous-time modelling and identification. Some future research suggestions are: 1. Extend the work to multi-input multi-output systems. Although the author sees no major obstacles in doing so for most of the topics covered in the thesis, some efforts are needed to formulate the anti-windup scheme for M D M O case. 2. Dealing with the trade-off between good tracking and disturbance rejection performance and, good noise suppression performance. This is a basic trade-off in any control systems design [3, pp. 112]. S D G P C was formulated in a deterministic framework. Deterministic disturbances such as impulse, step, ramp, sinusoidal etc. can all be handled in a straightforward manner i n the framework of S D G P C . 
The trade-off between good noise suppression performance and good disturbance rejection performance can be obtained by proper tuning of SDGPC to give the desired closed-loop system bandwidth. For stochastic noise, the well known Kalman filter theory can be applied to estimate the system states. The deterministic treatment of SDGPC does not prevent it from using these results, because of the Separation Theorem or Certainty Equivalence Principle [3, pp. 218]. For systems with colored noise, which is more often the case than not, the optimal Kalman filter can be designed based on the noise model, provided that the noise model is known [4, pp. 54]. Unfortunately, the noise model is often unknown and difficult to estimate. Estimation of the "true" system states subject to unknown colored noise poses one of the biggest challenges in process control applications. Thus how to utilize the available results, and develop new ones, in the framework of SDGPC is certainly a topic worth pursuing.

3. Adaptive SDGPC. Only adaptive Laguerre filter based SDGPC was considered in the thesis. Since the parameter estimation algorithm has been developed, it would be nice to see an adaptive version of SDGPC based on a general transfer function description of systems.

4. Apply the SDGPC algorithm to practical problems. Although the initial experiment on an inverted pendulum showed the effectiveness of SDGPC and the associated continuous-time identification algorithm, only large scale industrial applications can be the final judge.

Appendix A
Stability Results of Receding Horizon Control

Appendix A.1 and appendix A.2 are based on [4].

A.1 The Finite and Infinite Horizon Regulator

Given a state-space model of a linear plant

x(k + 1) = Fx(k) + Gu(k)    (A.301a)
y(k) = Hx(k)    (A.301b)

where F, G, H have proper dimensions, the finite horizon LQ regulator problem can be posed as follows.
The performance index is

J(N, x(k)) = x^T(k + N) P_0 x(k + N) + Σ_{j=0}^{N−1} { x^T(k + j) Q x(k + j) + u^T(k + j) R u(k + j) }    (A.302)

The solution to the above optimal LQ problem may be given by iterating the Riccati Difference Equation (RDE),

P_{j+1} = F^T P_j F − F^T P_j G (G^T P_j G + R)^{−1} G^T P_j F + Q,  j = 0, 1, ..., N − 2    (A.303)

from the initial condition P_0, and implementing the feedback control sequence given by

u(k + N − j) = −(G^T P_{j−1} G + R)^{−1} G^T P_{j−1} F x(k + N − j) = K_{j−1} x(k + N − j),  j = 1, 2, ..., N    (A.304)

where P_j is the matrix solution of the RDE (A.303). Notice from the control sequence (A.304) that it iterates backwards in time compared with the direction of evolution of the plant (A.301); that is, in order to obtain the current control u(k), P_{N−1} has to be solved first by iterating (A.303). The accumulated cost of (A.302) is given by P_N, which itself does not appear in the control law,

J(N, x(k)) = x^T(k) P_N x(k)    (A.305)

Similarly, the infinite horizon LQ regulator problem may be posed as the limiting case of the finite horizon LQ problem (A.302),

J(x(k)) = lim_{N→∞} J(N, x(k))    (A.306)

and the optimal solution can be obtained by iterating (A.303) indefinitely. Under mild assumptions, P_j converges to its limit P_∞, which is the maximal solution of the Algebraic Riccati Equation (ARE),

P_∞ = F^T P_∞ F − F^T P_∞ G (G^T P_∞ G + R)^{−1} G^T P_∞ F + Q    (A.307)

and a stationary control law is obtained as

u(k) = −(G^T P_∞ G + R)^{−1} G^T P_∞ F x(k) = Kx(k)    (A.308)

The following theorem regarding the stability property of the infinite horizon LQ control law (A.308) is due to De Souza et al [74].
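The passage from the RDE (A.303) to the stationary law (A.308) can be checked numerically in the scalar case, where all the matrix quantities reduce to numbers. The sketch below iterates the scalar RDE to convergence and verifies that the limit satisfies the ARE (A.307) and that the resulting closed loop map, the scalar version of (A.310), is stable; the numbers F = 1.2 (unstable plant), G = Q = R = 1 are illustrative choices.

```python
# Scalar infinite horizon LQ: iterate the RDE (A.303) to its limit P_inf,
# then check the ARE (A.307) residual and the closed loop map (A.310).
# F = 1.2 (unstable), G = Q = R = 1 are illustrative numbers.

F, G, Q, R = 1.2, 1.0, 1.0, 1.0

def rde_step(P):
    # scalar form of (A.303)
    return F * P * F - (F * P * G) ** 2 / (G * P * G + R) + Q

P = 0.0
for _ in range(200):
    P = rde_step(P)

residual = abs(P - rde_step(P))           # ARE (A.307) fixed-point residual
K = (G * P * G + R) ** -1 * G * P * F     # stationary gain, cf. (A.308)
F_cl = F - G * K                          # closed loop map, cf. (A.310)
print(f"P_inf = {P:.6f}, ARE residual = {residual:.2e}, |F_cl| = {abs(F_cl):.4f}")
```

The iteration is monotone from P = 0 and contracts near the fixed point, so a couple of hundred iterations suffice here; the closed loop magnitude comes out well inside the unit circle, as Theorem A.6 predicts.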
Theorem A.6 (De Souza et al [74]) Consider an infinite horizon LQ regulator problem with plant (A.301) and performance index (A.306), and the associated ARE

P = F^T P F − F^T P G (G^T P G + R)^{−1} G^T P F + Q    (A.309)

If

• [F, G] is stabilizable,
• [F, Q^{1/2}] has no unobservable modes on the unit circle,
• Q ≥ 0 and R > 0,

then

• there exists a unique, maximal, nonnegative definite symmetric solution P;
• P is the unique stabilizing solution, i.e.

F − G (G^T P G + R)^{−1} G^T P F    (A.310)

has all its eigenvalues strictly within the unit circle.

The solution P above is called the stabilizing solution of the ARE (A.309). Also note that the matrix (A.310) is the state transition matrix of the closed loop system when the stationary control law (A.308) is applied to plant (A.301). Theorem A.6 is the fundamental closed loop stability result for infinite horizon LQ control, which will be utilized to prove the stability result of receding horizon control in the following and the stability property of SDGPC thereafter.

A.2 The Receding Horizon Regulator

From the discussion in appendix A.1, a number of facts about the finite horizon and infinite horizon discrete-time LQ regulator problems are clear. For the finite horizon case, the optimization task with cost function (A.302) is merely to find N control values which, in principle, may be found by finite-dimensional optimization; this is referred to as the "one shot" algorithm in most model based predictive controllers. The control sequence may also be obtained by iterating the RDE (A.303) explicitly from P_0 to P_{N−2} using simple linear algebra. The resulting control law in feedback form (A.304) is time-varying even if the plant being controlled is time invariant.
By contrast, the infinite horizon problem involves an infinite-dimensional optimization or the solution of an ARE (A.307), which is computationally burdensome, especially in adaptive applications. However, the control law of the infinite horizon problem is stationary and has guaranteed stability properties under mild assumptions. Receding horizon control is one method proposed to inherit the simplicity of the finite horizon LQ method while addressing an infinite horizon implementation and preserving the time-invariance of the infinite horizon feedback. In this formulation only the first element u(k) of the control sequence u(k), u(k + 1), ..., u(k + N − 1) is applied to the plant at time k; at time k + 1 the first control u(k + 1) of the control sequence u(k + 1), u(k + 2), ..., u(k + N) is applied, and so on. In terms of the finite horizon feedback control law (A.304), one has for the receding horizon strategy

u(k) = K_{N−1} x(k) = −(G^T P_{N−1} G + R)^{−1} G^T P_{N−1} F x(k)    (A.311)

which is a stationary control law. Note that nothing has yet been said about the stability of control law (A.311); in fact, the receding horizon strategy does not guarantee stability by itself. Motivated by the facts that the infinite horizon LQ control law has a guaranteed stabilizing property and that there are strong similarities between the receding horizon control law (A.311) and the infinite horizon LQ control law (A.308), i.e. both are stationary and have the same form, one has enough reason to wonder whether the stability result of infinite horizon LQ control summarized as Theorem A.6 could be of any help with the stability problem of receding horizon control. For this we go to the important work of Bitmead et al [4].
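That the receding horizon strategy does not guarantee stability by itself, and that the terminal weight P_0 is the decisive tuning knob, can be seen already in the scalar case with horizon N = 1, where the gain in (A.311) uses P_0 directly. The sketch below compares P_0 = 0, which leaves an unstable plant effectively in open loop, with a large P_0 in the spirit of the P_0 → ∞ choice of Kwon and Pearson discussed below; the numbers F = 1.2, G = R = Q = 1 are illustrative.

```python
# Scalar illustration of the role of the terminal weight P0 in the
# receding horizon law (A.311).  For horizon N the gain uses P_{N-1},
# obtained by iterating the RDE from P0; with N = 1 it uses P0 itself.
# F = 1.2 (unstable), G = R = 1, Q = 1 are illustrative numbers.

F, G, R = 1.2, 1.0, 1.0

def closed_loop(P0, N):
    """|F - G*K| for the scalar receding horizon law with horizon N."""
    P = P0
    for _ in range(N - 1):  # iterate the RDE (A.312), Q = 1
        P = F * P * F - (F * P * G) ** 2 / (G * P * G + R) + 1.0
    K = (G * P * G + R) ** -1 * G * P * F
    return abs(F - G * K)

print("N=1, P0=0   :", closed_loop(0.0, 1))    # 1.2   -> unstable
print("N=1, P0=100 :", closed_loop(100.0, 1))  # ~0.012 -> stable
```

With P_0 = 0 and N = 1 the gain is zero and the closed loop inherits the open loop pole at 1.2; a large P_0 pulls the closed loop map down to roughly F·R/(G²P_0 + R), well inside the unit circle.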
Consider the RDE (A.303),

P_{j+1} = F^T P_j F − F^T P_j G (G^T P_j G + R)^{−1} G^T P_j F + Q,  j = 0, 1, ..., N − 2    (A.312)

and define

Q_j = Q − (P_{j+1} − P_j)    (A.313)

Then the RDE (A.312) takes the form of an ARE,

P_j = F^T P_j F − F^T P_j G (G^T P_j G + R)^{−1} G^T P_j F + Q_j    (A.314)

which is called the Fake Algebraic Riccati Equation (FARE) [4]. From Theorem A.6, the stability property of the solution of the above FARE can be immediately established as follows.

Theorem A.7 (Bitmead et al. [4, pp. 87]) Consider the FARE (A.314) with the matrix Q_j defined by (A.313). If Q_j ≥ 0, R > 0, [F, G] is stabilizable and [F, Q_j^{1/2}] is detectable, then P_j is stabilizing, i.e.

F_j = F − G (G^T P_j G + R)^{−1} G^T P_j F    (A.315)

has all its eigenvalues strictly within the unit circle.

Clearly, if the conditions in Theorem A.7 are met for j = N − 1, then the receding horizon control law (A.311) will be stabilizing. However, further work needs to be done to relate the design parameters, i.e. the matrices P_0, Q, R in the performance index (A.302), to the conditions in Theorem A.7. The following results from [4] serve this purpose.

Lemma A.3 (Bitmead et al. [4, pp. 88]) Given two nonnegative definite symmetric matrices Q_1 and Q_2 satisfying Q_1 ≤ Q_2, then [F, Q_1^{1/2}] detectable implies [F, Q_2^{1/2}] detectable.

The following corollary [5], which is an immediate result of Lemma A.3, tells us that if the solution of the RDE is decreasing at time j then the closed loop state transition matrix (A.315) is stable.

Corollary A.1 (Bitmead et al. [5]) If, for the RDE with [F, G] stabilizable and [F, Q^{1/2}] detectable, P_j in (A.312) is non-increasing at j, i.e. P_{j+1} ≤ P_j, then F_j defined by (A.315) is stable.

Also from [5], we have the following theorem regarding the monotonicity properties of the solution of the RDE (A.312).

Theorem A.8 (Bitmead et al. [5]) If the nonnegative definite solution P_j of the RDE (A.312) is monotonically non-increasing at one time, i.e.
P_{j+1} ≤ P_j for some j, then P_j is monotonically non-increasing for all subsequent times, i.e. P_{j+k+1} ≤ P_{j+k} for all k ≥ 0.

The following result is immediate by combining Corollary A.1 and Theorem A.8.

Theorem A.9 (Bitmead et al. [4, pp. 90]) Consider the RDE (A.312). If

• [F, G] is stabilizable,
• [F, Q^{1/2}] is detectable,
• P_{j+1} ≤ P_j for some j,

then F_k given by (A.315) with P_k is stable for all k ≥ j.

As an immediate consequence of Theorem A.9, we see that if P_0 in the design of the receding horizon controller is selected in such a way that one iteration of the RDE results in P_1 ≤ P_0, then we have Q_0 = Q - (P_1 - P_0) ≥ Q and Q_j ≥ Q for any subsequent j ≥ 0; this implies that F_j given by (A.315) is stable for any j ≥ 0. A clever choice of P_0 which guarantees a monotonically non-increasing solution of the RDE is to let P_0 = ∞, as first proposed by Kwon and Pearson [42], albeit in a very different framework. The result can be summarized as follows:

Theorem A.10 (Kwon et al. [42] and Bitmead et al. [4, pp. 97]) Consider the system

x(k+1) = F x(k) + G u(k)    (A.316a)
y(k) = H x(k)    (A.316b)

and the associated receding horizon control problem, i.e. minimize the performance index

J(N, x(k)) = sum_{j=0}^{N-1} { x^T(k+j) Q x(k+j) + u^T(k+j) R u(k+j) }    (A.317)

subject to the final state constraint

x(k+N) = 0.    (A.318)

Assume Q ≥ 0, R > 0, F is nonsingular, [F, G] is controllable and [F, Q] is observable; then the optimal solution exists and stabilizes the system (A.316) whenever N ≥ n, where n is the dimension of system (A.316).

The nonsingularity condition on F was removed in a recent paper by Chisci and Mosca [10]. The following corollary is a natural consequence of Theorem A.10, using the argument given by Demircioglu et al. [16].
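The two stabilizing choices of P_0 discussed above can both be sketched numerically. First, taking P_0 as the cost of any stabilizing policy u = -K_0 x makes one RDE step a policy improvement, so P_1 ≤ P_0; Theorem A.8 then propagates the monotone non-increase and Theorem A.9 gives stability of every F_j. Second, a very large P_0 approximates the terminal constraint x(k+N) = 0 of Theorem A.10, producing a near-deadbeat gain once N ≥ n. The double-integrator plant, the gain K_0, and all numbers below are assumed for illustration; this is not an example from the thesis.

```python
# Numerical sketch of Theorems A.9 and A.10 on an assumed double-integrator
# plant. Part 1: P_0 = cost of a stabilizing policy => P_1 <= P_0, monotone
# non-increasing RDE, every F_j of (A.315) stable. Part 2: very large P_0
# approximates x(k+N) = 0, giving near-deadbeat closed-loop poles for N = n.
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])        # assumed plant, n = 2
G = np.array([[0.5], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

def rde_step(P):
    """One RDE iteration (A.312); also returns the closed loop F_j of (A.315)."""
    K = np.linalg.solve(G.T @ P @ G + R, G.T @ P @ F)
    Fj = F - G @ K
    return F.T @ P @ Fj + Q, Fj

# --- Theorem A.9: P_0 = cost-to-go of the stabilizing policy u = -K0 x ------
K0 = np.array([[0.5, 1.0]])                   # assumed stabilizing gain
Acl, W = F - G @ K0, Q + K0.T @ R @ K0
P = np.zeros((2, 2))
for _ in range(500):                          # fixed point of P = Acl' P Acl + W
    P = Acl.T @ P @ Acl + W
for _ in range(15):
    P_next, Fj = rde_step(P)
    assert np.all(np.linalg.eigvalsh(P - P_next) >= -1e-8)  # P_{j+1} <= P_j
    assert max(abs(np.linalg.eigvals(Fj))) < 1.0            # F_j stable
    P = P_next

# --- Theorem A.10: P_0 -> infinity mimics the constraint x(k+N) = 0 ---------
P = 1e8 * np.eye(2)                           # huge P_0 in place of P_0 = inf
for _ in range(2 - 1):                        # N = n = 2 iterations of the RDE
    P, _ = rde_step(P)
_, Fcl = rde_step(P)                          # gain built from P_{N-1}
print(max(abs(np.linalg.eigvals(Fcl))))       # near 0: near-deadbeat poles
```

The first RDE step minimizes over all gains the cost that K_0 merely achieves, which is exactly why P_1 ≤ P_0 holds for this choice of P_0; the second part realizes the Kwon and Pearson limit with N equal to the state dimension.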
Corollary A.2 For the system described by equation (A.316) with performance index

J(N, x(k)) = x^T(k+N) P_0 x(k+N) + sum_{j=0}^{N-1} { x^T(k+j) Q x(k+j) + u^T(k+j) R u(k+j) }    (A.319)

there exists a positive number γ such that for P_0 ≥ γI, the closed loop system under the control law obtained by minimizing (A.319) is stable.

Proof: The pole locations of the closed loop system under the optimal control law obtained by minimizing (A.319) are continuous functions of P_0. By increasing P_0, the closed loop poles can thus be made arbitrarily close to those of the limiting case P_0 = ∞, which is stable according to Theorem A.10. Hence there always exists a positive number γ such that for P_0 ≥ γI the closed loop system is stable.

Theorem A.10 is used to investigate the stability properties of SDGPC in Section 2.2.1.

References

[1] Aida, K. and T. Kitamori (1990). 'Design of a PI-type state feedback optimal servo system'. Int. J. Control, Vol. 52, No. 3.
[2] Al-Rahmani, H. M. and G. F. Franklin (1992). 'Multirate control: A new approach'. Automatica, Vol. 28, No. 1.
[3] Anderson, B. D. O. and J. B. Moore (1990). Optimal Control: Linear Quadratic Methods. Prentice Hall, Englewood Cliffs, New Jersey.
[4] Bitmead, R. R., M. Gevers and V. Wertz (1990). Adaptive Optimal Control: The Thinking Man's GPC. Prentice Hall.
[5] Bitmead, R. R., M. Gevers, I. R. Petersen and R. J. Kaye (1985). 'Monotonicity and stabilizability properties of solutions of the Riccati difference equation: Propositions, lemmas, theorems, fallacious conjectures and counterexamples'. Systems and Control Letters, Vol. 5.
[6] Bittanti, S., P. Colaneri and G. Guardabassi (1984). 'H-controllability and observability of linear periodic systems'. SIAM Journal on Control and Optimization.
[7] Boyd, S., L. El Ghaoui, E. Feron and V. Balakrishnan (June, 1994). Linear Matrix Inequalities in System and Control Theory. Volume 15 of Studies in Applied Mathematics. SIAM, Philadelphia, PA.
[8] Campo, P. J. and M.
Morari (1990). 'Robust control of processes subject to saturation nonlinearities'. Computers in Chemical Engineering, Vol. 14, No. 4/5.
[9] Chen, C. T. (1984). Linear System Theory and Design. Holt, Rinehart and Winston, New York.
[10] Chisci, L. and E. Mosca (September, 1993). Stabilizing predictive control: The singular transition matrix case. In 'Advances in Model-Based Predictive Control', Oxford, England.
[11] Clarke, D. W. (September, 1993). Advances in model-based predictive control. In 'Workshop on Model-Based Predictive Control, Oxford University, England'.
[12] Clarke, D. W. and R. Scattolini (July, 1991). 'Constrained receding-horizon predictive control'. IEE Proceedings, Vol. 138, No. 4.
[13] Clarke, D. W., C. Mohtadi and P. S. Tuffs (1987). 'Generalized predictive control - part I. The basic algorithm'. Automatica, Vol. 23, No. 2.
[14] Clarke, D. W., E. Mosca and R. Scattolini (December, 1991). Robustness of an adaptive predictive controller. In 'Proceedings of the 30th Conference on Decision and Control', Brighton, England.
[15] Cutler, C. R. and B. C. Ramaker (1980). Dynamic matrix control - a computer control algorithm, paper WP5-B. In 'JACC, San Francisco'.
[16] Demircioglu, H. and D. W. Clarke (July, 1992). 'CGPC with guaranteed stability properties'. IEE Proceedings D, Vol. 139, No. 4.
[17] Demircioglu, H. and D. W. Clarke (July, 1993). 'Generalised predictive control with end-point state weighting'. IEE Proceedings D, Vol. 140, No. 4.
[18] Demircioglu, H. and P. J. Gawthrop (1991). 'Continuous-time generalized predictive control (CGPC)'. Automatica, Vol. 27, No. 1.
[19] Demircioglu, H. and P. J. Gawthrop (1992). 'Multivariable continuous-time generalized predictive control (MCGPC)'. Automatica.
[20] Doyle, J. C., R. S. Smith and D. F. Enns (1987). Control of plants with input saturation nonlinearities. In '1987 ACC'.
[21] Dumont, G. A. (1992).
Fifteen years in the life of an adaptive controller. In 'IFAC Adaptive Systems in Control and Signal Processing', Grenoble, France.
[22] Dumont, G. A. and C. C. Zervos (1986). Adaptive controllers based on orthonormal series representation. In '2nd IFAC Workshop on Adaptive Control and Signal Processing', Lund, Sweden.
[23] Dumont, G. A., Y. Fu and G. Lu (September, 1993). Nonlinear adaptive generalized predictive control and its applications. In 'Workshop on Model-Based Predictive Control, Oxford University, England'.
[24] Elnaggar, A., G. Dumont and A. Elshafei (December, 1990). System identification and adaptive control based on a variable regression for systems having unknown delay. In 'Proceedings of the 29th Conference on Decision and Control', Honolulu, Hawaii.
[25] Eykhoff, P. (1974). System Identification. Wiley, New York.
[26] Fertik, H. A. and C. W. Ross (1967). 'Direct digital control algorithm with anti-windup feature'. ISA Transactions, 6(4):317-328.
[27] Finn, C. K., B. Wahlberg and B. E. Ydstie (1993). 'Constrained predictive control using orthogonal expansion'. AIChE Journal, Vol. 39, No. 11.
[28] Franklin, G. F., J. D. Powell and A. Emami-Naeini (1994). Feedback Control of Dynamic Systems. Addison-Wesley Publishing Company.
[29] Fu, Y. and G. A. Dumont (June, 1993). 'An optimal time scale for discrete Laguerre network'. IEEE Trans. on Auto. Control, AC-38, No. 6, pp. 934-938.
[30] Furutani, E., T. Hagiwara and M. Araki (December, 1994). Two-degree-of-freedom design method of state-predictive LQI systems. In 'Proceedings of the 33rd Conference on Decision and Control', Lake Buena Vista, FL.
[31] Garcia, C. E., D. M. Prett and M. Morari (1989). 'Model predictive control: Theory and practice - a survey'. Automatica.
[32] Gawthrop, P. J. (1987). Continuous-time Self-tuning Control, Volume I - Design. Research Studies Press, England.
[33] Goodwin, G. C. and D. Q. Mayne (1987).
'A parameter estimation perspective of continuous time model reference adaptive control'. Automatica, 23(1), 57-70.
[34] Hagiwara, T., T. Yamasaki and M. Araki (July, 1993a). Two-degree-of-freedom design method of LQI servo systems, part I: Disturbance rejection by constant feedback. In 'The 12th IFAC World Congress'.
[35] Hagiwara, T., T. Yamasaki and M. Araki (July, 1993b). Two-degree-of-freedom design method of LQI servo systems, part II: Disturbance rejection by dynamic feedback. In 'The 12th IFAC World Congress'.
[36] Hanus, R., M. Kinnaert and J. L. Henrotte (1987). 'Conditioning technique, a general anti-windup and bumpless transfer method'. Automatica, Vol. 23, 729-739.
[37] Hautus, M. L. J. (1969). 'Controllability and observability conditions of linear autonomous systems'. Indagationes Mathematicae, Vol. 72, pp. 443-448.
[38] Kalman, R. E., Y. C. Ho and K. S. Narendra (1963). Contributions to Differential Equations, Vol. I. Interscience, New York.
[39] Kothare, M., P. J. Campo and M. Morari (1993). A unified framework for the study of anti-windup designs. Technical report CIT-CDS 93-011, California Institute of Technology.
[40] Kothare, M., V. Balakrishnan and M. Morari (1995). Robust constrained model predictive control using linear matrix inequalities. Technical report, Chemical Engineering, 210-41, California Institute of Technology.
[41] Kwakernaak, H. and R. Sivan (1972). Linear Optimal Control Systems. Wiley-Interscience, New York.
[42] Kwon, W. H. and A. E. Pearson (1978). 'On feedback stabilization of time-varying discrete linear systems'. IEEE Trans. AC-23, (3), pp. 479-481.
[43] Levis, A. H., R. A. Schlueter and M. Athans (1971). 'On the behaviour of optimal linear sampled-data regulators'. Int. J. Control, Vol. 13, No. 2.
[44] Ljung, L. (1987). System Identification: Theory for the User. Prentice-Hall.
[45] Ljung, L. and T. Soderstrom (1983).
Theory and Practice of Recursive Identification. MIT Press, London.
[46] Lu, G. and G. A. Dumont (December, 1994). Sampled-data GPC with integral action: The state space approach. In 'Proceedings of the 33rd CDC', Lake Buena Vista, FL.
[47] Ma, C. C. H. (December, 1991). 'Unstabilizability of linear unstable systems with input limits'. Transactions of the ASME, Vol. 113.
[48] Makila, P. M. (1990). 'Laguerre series approximation of infinite dimensional systems'. Automatica, Vol. 26, No. 6.
[49] Makila, P. M. (1991). 'On identification of stable systems and optimal approximation'. Automatica, Vol. 27, No. 4.
[50] Middleton, R. H. and G. C. Goodwin (1990). Digital Control and Estimation: A Unified Approach. Prentice-Hall, Englewood Cliffs, NJ.
[51] Morari, M. (September, 1993). Model predictive control: Multivariable control technique of choice in the 1990s?. In 'Workshop on Model-Based Predictive Control, Oxford University, England'.
[52] Mosca, E. and J. Zhang (1992). 'Stable redesign of predictive control'. Automatica, Vol. 28, No. 6, pp. 1229-1233.
[53] Mosca, E., G. Zappa and J. M. Lemos (1989). 'Robustness of multipredictor adaptive regulators: MUSMAR'. Automatica, Vol. 25, No. 4.
[54] Nicolao, G. D. and R. Scattolini (September, 1993). Stability and output terminal constraints in predictive control. In 'Workshop on Model-Based Predictive Control'.
[55] Pappas, T., A. J. Laub and N. R. Sandell (1980). 'On the numerical solution of the discrete-time algebraic Riccati equation'. IEEE Trans. Auto. Control, AC-25.
[56] Parks, T. W. (1971). 'Choice of time scale in Laguerre approximations using signal measurements'. IEEE Trans. Auto. Control, AC-16.
[57] Peng, H. and M. Tomizuka (1991). Preview control for vehicle lateral guidance in highway. In 'Proceedings of the 1991 American Control Conference'.
[58] Peterka, V. (1984). 'Predictor based self-tuning control'. Automatica, Vol. 20.
[59] Power, H. M. and B. Porter (1970). 'Necessary and sufficient conditions for controllability of multivariable systems incorporating integral feedback'. Electron. Lett.
[60] Rawlings, J. and K. R. Muske (1993). 'The stability of constrained receding horizon control'. IEEE Trans. Auto. Contr., 38.
[61] Richalet, J., A. Rault, J. L. Testud and J. Papon (1978a). 'Model predictive heuristic control: Applications to industrial processes'. Automatica, Vol. 14.
[62] Richalet, J., A. Rault, J. L. Testud and J. Papon (1978b). 'Model predictive heuristic control: Applications to industrial processes'. Automatica, Vol. 14.
[63] Astrom, K. J. and B. Wittenmark (1984). Computer Controlled Systems: Theory and Design. Prentice Hall, Englewood Cliffs, NJ.
[64] Astrom, K. J. and B. Wittenmark (1989). Adaptive Control. Addison-Wesley Publishing Company.
[65] Astrom, K. J. and P. Eykhoff (1971). 'System identification - a survey'. Automatica, Vol. 7.
[66] Robinson, W. R. and A. C. Soudack (1970). 'A method for the identification of time-delays in linear systems'. IEEE Trans. Aut. Control.
[67] Sagara, S. and Zhen-Yu Zhao (1990). 'Numerical integration approach to on-line identification of continuous-time systems'. Automatica.
[68] Salgado, M. E., G. C. Goodwin and R. H. Middleton (1988). 'Modified least squares algorithm incorporating exponential resetting and forgetting'. Int. J. Control, Vol. 47, No. 2.
[69] Scokaert, P. O. M. and D. W. Clarke (1994). Stability and feasibility in constrained predictive control. In 'Advances in Model Based Predictive Control'. Oxford Science Publications.
[70] Sinha, N. (1972). 'Estimation of the transfer function of a continuous-time system from sampled data'. IEE Proceedings Part D, Vol. 119.
[71] Sinha, N. and S. Puthenpura (November, 1985). 'Choice of the sampling interval for the identification of continuous-time systems from samples of input/output data'.
IEE Proceedings Part D, Vol. 132(6).
[72] Soderstrom, T. and P. G. Stoica (1989). System Identification. Prentice-Hall, Hemel Hempstead, U.K.
[73] Soeterboek, R. (1992). Predictive Control - A Unified Approach. Prentice-Hall.
[74] Souza, C. D., M. R. Gevers and G. C. Goodwin (September, 1986). 'Riccati equations in optimal filtering of nonstabilizable systems having singular state transition matrices'. IEEE Trans. Auto. Contr., Vol. AC-31, No. 9.
[75] Sznaier, M. and F. Blanchini (1995). 'Robust control of constrained systems via convex optimization'. International Journal of Robust and Nonlinear Control, Vol. 5.
[76] Tomizuka, M. and D. E. Whitney (December, 1975). 'The discrete optimal finite preview control problem (why and how is future information important?)'. ASME Journal of Dynamic Systems, Measurement and Control, Vol. 97, No. 4.
[77] Tomizuka, M., D. Dornfeld, X. Q. Bian and H. C. Cai (March, 1984). 'Experimental evaluation of the preview servo scheme for a two-axis positioning system'. ASME Journal of Dynamic Systems, Measurement and Control, Vol. 106, No. 1.
[78] Unbehauen, H. and G. P. Rao (1987). Identification of Continuous Systems. North-Holland, Amsterdam.
[79] Unbehauen, H. and G. P. Rao (1990). 'Continuous-time approaches to system identification - a survey'. Automatica.
[80] Wahlberg, B. (May, 1991). 'System identification using Laguerre models'. IEEE Transactions on Automatic Control, Vol. 36, No. 5.
[81] Walgama, K. S. and J. Sternby (1990). 'Inherent observer property in a class of anti-windup compensators'. Int. J. Control, Vol. 52, No. 3, 705-724.
[82] Wiener, N. (1956). The Theory of Prediction. Modern Mathematics for Engineers. McGraw-Hill, New York.
[83] Xie, X. and R. J. Evans (1984). 'Discrete-time adaptive control for deterministic time-varying systems'. Automatica, Vol. 20, No. 3.
[84] Ydstie, B. (1984). Extended horizon adaptive control.
In 'Proceedings of the 9th IFAC World Congress', Budapest, Hungary.
[85] Young, P. C. (1981). 'Parameter estimation for continuous-time models - a survey'. Automatica, Vol. 17.
[86] Young, P. C. and A. Jakeman (1980). 'Refined instrumental variable methods of recursive time-series analysis, part III. Extensions'. Int. J. Control, Vol. 31.
[87] Zervos, C. C. and G. A. Dumont (1988). 'Deterministic adaptive control based on Laguerre series representation'. Int. J. Control, Vol. 48, No. 6.