ROBUST ADAPTIVE CONTROL

By Fu, Ye

B.A.Sc., Shanghai Jiao Tong University, PRC, 1984

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES, ELECTRICAL ENGINEERING

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA

December 1989

© Fu, Ye, 1989

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical Engineering
The University of British Columbia
Vancouver, Canada
Date: Jan. 5th, 1990

Acknowledgement

Four years ago, when I first stepped into UBC, I had no knowledge of adaptive control theory. In a talk with my thesis advisor, Prof. Guy A. Dumont, a simple problem got me interested: given a rational minimum phase continuous plant, when can the equivalent discrete plant be non-minimum phase? This problem also led me to a more general question: if the discrete plant is non-minimum phase, how can it be controlled using adaptive control? Through frequent talks with Prof. Dumont and reading many papers recommended by him, I eventually entered the area of robustness of adaptive control. Many questions came to my mind, such as the stability of adaptive control in the model mismatched case, the role of the filter in adaptive control, chaos in simple self-tuning control, predictive adaptive control, etc. Here I would like to express my special thanks to Prof. Guy A.
Dumont, for guiding my progress throughout these four years of research and for his continuous support and guidance. I would also like to thank Prof. Bohn, Prof. Davies, Prof. Lawrence, Prof. Soudack and Prof. Ward for the courses they gave, through which I widened my knowledge beyond adaptive control theory. I would also like to thank Prof. Ma for his critical but helpful discussion of the original version of section 3.4.1; through this discussion I benefited a great deal and corrected some mistakes of expression. I would also like to thank Prof. Lam, who patiently went through my thesis and gave some good suggestions. I would also like to thank the many friends studying with me for their time discussing any topic of interest, including many details in this thesis. Among them, I especially thank Dr. Natarajan, Mr. Bond and Mr. Kristinsson for their constant questions and suggestions. I thank all my teachers at UBC, at Shanghai Jiao Tong University, China, and also my teachers in my primary and middle schools in Shanghai, China. Special thanks to Prof. Astrom of the Lund Institute of Technology, Sweden, and Prof. Shah of the University of Alberta, for their advice when they visited UBC.

The research of this thesis was carried out at the Department of Electrical Engineering and at the Pulp and Paper Center, with support provided by the Pulp and Paper Research Institute of Canada.
Table of Contents

Acknowledgement
List of Tables
List of Figures
Abstract

1 Introduction
  1.1 Background
  1.2 The Research Topic
    1.2.1 Motivation
    1.2.2 Main Contribution of this Thesis
    1.2.3 Outline of the Thesis

2 Mathematical Preliminary
  2.1 Signal Processing Theory
    2.1.1 Sampling Process
    2.1.2 Frequency Domain Estimation via DFT
  2.2 On Conic Sector Stability Theory
    2.2.1 Definitions
    2.2.2 Stability Problem Definition
    2.2.3 Conic Relations and Properties
    2.2.4 Theorem on Boundedness
  2.3 On Averaging Theory
  2.4 On Curve Fitting

3 Adaptive Control Algorithm and Stability
  3.1 Structure of Adaptive Control System
  3.2 On Parameter Estimation
  3.3 Self-tuning Controller Design
    3.3.1 Pole-zero Placement Design
    3.3.2 Minimum Variance Control
    3.3.3 Other Algorithms
  3.4 Stability Condition for Adaptive Control System
    3.4.1 Main Stability Theorem
    3.4.2 Result from Using Conic Sector Criterion
    3.4.3 Using Averaging Method
  3.5 Simulations

4 Robust Adaptive Control: Slow Sampling
  4.1 Why Use Slow Sampling?
  4.2 Choice of Sampling for Model Matched Case Adaptive Control
    4.2.1 Definition and Background
    4.2.2 Condition for MP Discrete Plant
    4.2.3 Obtaining the Critical Sampling by Using a Relay
  4.3 Choice of Sampling for Model Mismatched Case Adaptive Control
    4.3.1 Uncertainty in Sampled System
    4.3.2 Stability Conditions for Adaptive Algorithm
    4.3.3 Approximate Way of Choosing Sampling
  4.4 Simulation on the Effect of Sampling
  4.5 Conclusion

5 Robust Adaptive Control: Slow Adaptation
  5.1 Introduction
  5.2 Slow Adaptation Algorithm
    5.2.1 Fixed Slow Adaptation Rate
    5.2.2 Unfixed Slow Adaptation Rate
    5.2.3 Checking Criterion for the Steady State
  5.3 Frequency Domain Estimation
  5.4 Filter Design
  5.5 Simulation Examples of Robust Adaptive Controller Design
  5.6 Conclusion

6 Robust Adaptive Control: Using Predictive Control
  6.1 Predictive Control Algorithm
    6.1.1 A Simple Predictive Control Algorithm
    6.1.2 Generalized Predictive Control
  6.2 Robustness of Predictive Control
    6.2.1 Illustration Examples
    6.2.2 Discussion
  6.3 Stability Analysis for a Simple Case
  6.4 Simulation Examples
  6.5 Conclusion

7 Conclusion
  7.1 Summary of the Work
  7.2 Future Research Directions
  7.3 Conclusion

Bibliography
A List of Abbreviations
B List of Terminology
C Simulation Program

List of Tables

3.1 Estimated parameters for different sampling frequencies
4.2 Parameters of discrete nominal model under different sampling in Example 4.2
4.3 Estimated a/b for various cases
4.4 Simulation conditions in Fig. 4.21
4.5 Simulation conditions in Fig. 4.22
5.6 Parameters of the discrete plant of Rohrs' plant under different sampling
5.7 Closed-loop poles of the adaptive system under different sampling
6.8 Simulation results for various sampling and extended horizons
6.9 Critical sampling for different horizons

List of Figures

2.1 Feedback Structure
3.2 Conventional Adaptive System Structure
3.3 (a) Estimated parameters in the persistent excitation case; (b) Estimated parameters when persistent excitation is not satisfied
3.4 Geometric explanation of r/n
3.5 Error Model
3.6 Nyquist curve of H_2 under different sampling
4.7 Nyquist path
4.8 Nyquist curve satisfying (4.71)
4.9 Nyquist curve not satisfying (4.71)
4.10 Closed loop with relay
4.11 Nyquist diagram for limit cycle
4.12 Nyquist curve of G* in the example for different sampling intervals T
4.13 Limit cycle obtained when putting G(s) of the example under relay control
4.14 Model error vs. sampling period
4.15 Choice of sampling according to the phase of the continuous plant
4.16 Obtaining the critical sampling for a minimum phase discrete plant of Rohrs' example by using a relay
4.17 Zeros of the discrete plant of Rohrs' example vs. sampling
4.18 (a) Finding the critical sampling for SPR H_2; (b) Nyquist curve of H_2 under different sampling
4.19 (a) Nyquist curve of H_2 for different sampling frequencies; (b) Nyquist curve of H_2 for different command frequencies
4.20 STC simulation under different sampling
4.21 STC simulation under noise or high frequency command
4.22 STC simulation when persistent excitation is not satisfied
5.23 Estimated parameters when simulating Rohrs' plant using RLS estimation, sampling T_s = 0.2 sec, command frequency 1.0 rad/sec
5.24 STC result for fixed rate slow adaptation
5.25 Unfixed rate slow adaptation STC result for Rohrs' plant when the sampling is very fast, T_s = 0.05 sec
5.26 Closed loop with filter
5.27 Bode plot of the compensation filter in Example 5.4
5.28 Robust Adaptive Controller Design Flow Diagram
5.29 Adaptive control result in Example 5.5, command cos(t), sampling T_s = 0.05 sec
5.30 Estimation result in Example 5.5
5.31 Adaptive control result in Example 5.5, command step level, sampling T_s = 0.05 sec
5.32 Adaptive control for an eighth order plant (Example 5.6), command step level, sampling T_s = 1.0 sec
5.33 Estimation result in Example 5.6
6.34 Error model for predictive adaptive control
6.35 Phase plot of H_2 for different algorithms
6.36 Simulation result in Example 6.6
6.37 Simulation result in Example 6.7
6.38 Oscillation obtained by using a relay for a time-varying plant
6.39 Frequency domain estimation result (Nyquist curve of the discrete plant)
6.40 Parameter estimation and controller parameters in Example 6.8
6.41 Simulation output for the predictive adaptive control algorithm applied to a time-varying plant, Example 6.8

Abstract

This thesis describes discrete robust adaptive control methods based on slow sampling and slow adaptation. For the stability analysis, we consider that the plant model order is not exactly known and assume that the estimation model order is lower than the plant model order. A stability condition is derived with a given upper limit for the adaptation gain, which is related to a strictly positive real operator. The relation between sampling and the stability condition is then discussed. For robust adaptive control design, we study slow adaptation and predictive control. For slow adaptation, the main idea is to use only good estimates and a compensation filter; some frequency domain information on the plant is necessary for this method. For predictive control, we discuss the relationship between the extended control horizon and the critical sampling. For a simple case, it is shown that a larger extended control horizon yields more robust adaptive control. The purpose of this thesis is to provide robust discrete adaptive controller design guidelines, especially for cases using a slow sampling frequency and a slow adaptation rate.
In practice, for various discrete adaptive control algorithms, slow sampling and a slow adaptation rate bring more robustness. The use of slow sampling and a slow adaptation rate is simple and economical; thus a careful choice of sampling rate and adaptation rate is highly recommended. This thesis provides guidelines for choosing a proper sampling rate and adaptation rate for robust discrete adaptive control.

Chapter 1

Introduction

1.1 Background

In classical control system design, the parameters of the system to be controlled are assumed known. Usually, the purpose of a control system is to obtain a desired plant output. For example, when tracking a given command input signal, the controller is designed such that the error between the plant output and the command input is minimized. When this command input signal is a known constant, the tracking problem becomes a regulation problem. There are many methods of classical control system design. All controller designs based on known plant parameters are valid only when the plant is linear time-invariant and its parameters are known accurately. The output of a control system will have large errors or even be unstable if there are errors in the representation of the real system. However, in practice, when the plant is not known, off-line parameter estimation can be used. Parameter estimation is based on a given model structure with unknown parameters. The estimation algorithm adjusts the parameters of the model in the direction that reduces the error between the model output and the real plant output. An adaptive control system combines on-line parameter estimation with on-line feedback control law synthesis, and is thus composed of a plant, a parameter estimator and a controller design algorithm. A simple block diagram showing the structure of such an adaptive controller can be found in Fig. 3.2, Chapter 3. For the on-line design
of the feedback controller, it is assumed that the estimated parameters are the true parameters of the plant. This scheme allows the controller to be changed accordingly when the process changes. However, the resulting algorithms are complex since the feedback laws become non-linear and time-varying.

Historically, research on adaptive control started in the 1950s and was motivated by the design of autopilots for high-performance aircraft. Such aircraft are required to operate over a wide range of speeds and altitudes. It was found that a classical controller design works well only in one operating condition; as the operating condition changes, difficulties arise. Thus a more sophisticated controller which can work well over a wide range of operating conditions is needed. This early work on adaptive control can be found in [23] and [29].

In the 1960s, there were many new contributions to control theory, among them state-space theory, stability theory, and stochastic control theory. There were also major developments in system identification and parameter estimation; see [4]. All these contributions were to be important for the later development of adaptive control.

Starting from the early '70s, the theory of adaptive control developed rapidly. A number of successful applications have also been reported in industry. These applications cover a wide range of areas such as aerospace, process control, ship steering, robotics, and other industrial control systems ([10], [15], [24], [27]). Research on adaptive control has focussed on two different approaches: Self-Tuning Regulators (STR), which originated from a stochastic regulation problem, and Model Reference Adaptive Controllers (MRAC), which originated from a deterministic servo problem. In spite of their different origins, objectives, and formulations, these two approaches are closely related. Both systems have two feedback loops.
The inner loop is an ordinary feedback loop with a process and regulator. The parameters of the regulator are adjusted by an outer loop. However, in MRAC the regulator parameters are updated directly, while in STC they are updated indirectly via parameter estimation and (or) design calculations. Many studies have been performed to improve the regulator design methods and parameter estimation techniques.

In the late 1970s and early 1980s, proofs for the stability of adaptive systems appeared, although these proofs rest on the strict assumption of matched plant order in parameter estimation, i.e., that the order of the plant is not higher than the order of the estimation model (see [16], [21]). Various controller design methods have been proposed, such as minimum variance control (MVC), moving-average control, pole-zero placement, LQG control and predictive control (see [7], [9], [11], [12], [38], [40]). According to the stability theory of adaptive control when matched plant order is assumed, if there is pole-zero cancellation in the controller design, the plant should be minimum phase (in the discrete domain) for stable self-tuning control (STC). There are two different ways to solve this problem if the plant is non-minimum phase. One is to modify the self-tuning controller design by keeping unstable zeros uncancelled through a factorization method, or by using another minimization index such as the generalized minimum variance algorithm ([8], [11]); in this case, more calculations are generally required because the implicit algorithm is not practical. The other way is to use a slow sampling rate, since the zeros of the discrete plant are affected by sampling. In [5], it is shown that for a continuous-time plant with stable poles and zeros, if the sampling frequency is slow enough, the sampled system is always minimum phase.
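The dependence of the sampling zeros on the sampling period can be illustrated numerically. The sketch below is not from the thesis: the plant G(s) = 1/(s+1)^3 and the use of scipy are illustrative assumptions. It discretizes a relative-degree-three plant with a zero-order hold at a fast and a slow sampling period and inspects the zeros of the resulting discrete plant:

```python
# Illustrative sketch: zeros of a ZOH-sampled plant vs. sampling period.
# The plant G(s) = 1/(s+1)^3 is an assumed example, not from the thesis.
import numpy as np
from scipy.signal import cont2discrete

def discrete_zeros(num, den, T):
    """ZOH-discretize the transfer function num/den and return its zeros."""
    num_d, den_d, _ = cont2discrete((num, den), T, method='zoh')
    b = num_d.flatten()
    # strip (near-)zero leading coefficients before root finding
    while abs(b[0]) < 1e-9 * np.abs(b).max():
        b = b[1:]
    return np.roots(b)

num, den = [1.0], [1.0, 3.0, 3.0, 1.0]   # G(s) = 1/(s+1)^3

for T in (0.1, 4.0):
    z = discrete_zeros(num, den, T)
    print(f"T = {T}: |zeros| = {np.abs(z)}")

# Fast sampling (T = 0.1): one zero lies outside the unit circle, near the
# limiting zero -2 - sqrt(3) of relative-degree-three plants, so the
# discrete plant is non-minimum phase. Slow sampling (T = 4.0): all zeros
# lie inside the unit circle.
```

This is exactly the behaviour exploited in the thesis: the continuous plant is minimum phase, yet fast sampling produces an unstable sampling zero, while slow sampling moves all discrete zeros inside the unit circle.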
In this thesis, we will develop a way of choosing the critical sampling frequency that ensures a minimum phase discrete plant for a given rational, stable and minimum phase continuous plant. If the sampling frequency is chosen to be slower than the critical sampling frequency, the discrete plant will be minimum phase.

However, in practice, most plants contain unmodelled high frequency poles and zeros. If the estimation model order is lower than the actual plant order, there will be a model mismatch. This model error may be insignificant as far as only open-loop behaviour is concerned, but in adaptive control systems the basic assumption in those early stability theorems is violated. Later, Rohrs et al. studied the case where the assumption of matched plant order is discarded (see [31], [32]). In Rohrs' work, serious problems were outlined: the adaptive control algorithm in the presence of unmodelled dynamics and of input or output noise is not always stable, even if the other conditions of those early stability theorems are satisfied. A different explanation for this phenomenon was given later in [2] and the appendix of [32]. Although Rohrs et al. did their simulations in the continuous time domain, the problem carries over to the discrete time domain. Further, it was also found in [33], [34] that if the sampling frequency is slow enough, the mismatch between the estimated plant and the real plant in the frequency band of interest is very small. In practice, an adaptive control algorithm is robust if the sampling is slow enough. It is now believed that robust adaptive control is possible in the model mismatched case, but extreme care must be taken when implementing the controller. Various design methods are available; however, theoretical stability results exist only for simple algorithms. The stability study of self-tuning control algorithms is difficult because it involves a non-linear system with time-varying parameters.
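The claim that slower sampling shrinks the model mismatch can be checked numerically. The sketch below is illustrative rather than from the thesis: it takes the standard Rohrs example, a nominal plant 2/(s+1) with unmodelled dynamics 229/(s^2 + 30s + 229), ZOH-discretizes both the full and the nominal plant, and compares their frequency responses over the band below the Nyquist frequency for a fast and a slow sampling period (the use of scipy is an assumption):

```python
# Illustrative sketch (assumed example, Rohrs' plant): frequency-domain
# mismatch between full and reduced-order discrete models vs. sampling period.
import numpy as np
from scipy.signal import cont2discrete, freqz

def zoh_response(num, den, T, w):
    """Frequency response of the ZOH-discretized plant at frequencies w (rad/s)."""
    num_d, den_d, _ = cont2discrete((num, den), T, method='zoh')
    _, h = freqz(num_d.flatten(), np.asarray(den_d).flatten(), worN=w * T)
    return h

# Full plant: 2/(s+1) * 229/(s^2 + 30 s + 229); nominal model: 2/(s+1).
num_full = [2.0 * 229.0]
den_full = np.polymul([1.0, 1.0], [1.0, 30.0, 229.0])
num_nom, den_nom = [2.0], [1.0, 1.0]

def mismatch(T):
    w = np.linspace(0.01, 0.95 * np.pi / T, 400)   # band below Nyquist
    h_full = zoh_response(num_full, den_full, T, w)
    h_nom = zoh_response(num_nom, den_nom, T, w)
    return np.max(np.abs(h_full - h_nom) / np.abs(h_nom))

print("worst relative mismatch, T = 0.02:", mismatch(0.02))
print("worst relative mismatch, T = 1.0 :", mismatch(1.0))
# The fast-sampled model sees the unmodelled resonance inside its band,
# so its worst-case relative error is larger than the slow-sampled one.
```

The qualitative outcome, under these assumed numbers, is the mechanism described above: with fast sampling the unmodelled resonance lies inside the band the estimator sees, while with slow sampling it is largely attenuated before the Nyquist frequency.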
Recently, some researchers have used the conic sector stability theory first proposed in [42] for non-linear systems to derive STC stability conditions under model mismatch (see [13], [30] and [35]). They successfully found sufficient stability conditions for STC. In [1], averaging analysis is also used to obtain similar stability conditions. These two different methods both indicate that one crucial condition for stable STC is that a certain operator be Strictly Positive Real (SPR). Also, in the parameter adaptation algorithms, the adaptation rate is assumed sufficiently small, although there is no indication of how small it has to be.

1.2 The Research Topic

1.2.1 Motivation

Currently, it is believed that adaptive control systems can be stabilized and give good performance if extreme care is taken. Some robust adaptive control algorithms have been reported: for example, filtering the data used for estimation so that the parameter updating is improved [39]; using a dead zone to switch the parameter estimation off if the input signal is not appropriate [9]; introducing an orthonormal series representation to model the plant dynamics and designing a controller based on this new model [43]; applying predictive control for controller design [12]. However, every method has its own limitations, and it is also observed that for discrete adaptive control, slower sampling brings more robust results in most algorithms. The relation between sampling frequency and the stability of discrete STC algorithms has been noted in [33], [34] and [13]. A general explanation for the robustness obtained by using slow sampling is that the mismatch between the model and the real plant decreases as the sampling frequency decreases. This conclusion has been verified through simulation results.
1.2.2 Main Contribution of this Thesis

In this thesis, we study various aspects of discrete time robust self-tuning controller design, with emphasis on the following:

• Derivation of a way to choose the critical sampling frequency for a rational, stable and inversely stable continuous plant such that the sampled plant is minimum phase. For the perfect match case, minimum phase behaviour of the discrete plant is crucial for some direct algorithms. If the sampling frequency is chosen to be slower than this critical sampling frequency, the discrete plant will be minimum phase. A practical way of choosing such a sampling rate is suggested. This is an extension of the study on zeros of sampled systems in [5].

• Derivation of a new self-tuning control stability condition under model mismatch. Most available STC stability conditions for the model mismatched case require the existence of a sufficiently small adaptation gain ([1], [13], [30], etc.). Here we derive an upper limit for this adaptation gain, which is directly related to the required SPR operator and the initial estimation error covariance setting.

• Some robust self-tuning controller design guidelines, which include using slow sampling, a slow adaptation rate, a filter and predictive control. Model mismatch is assumed. The slow adaptation rate and the predictive horizon are studied with respect to the sampling frequency for the continuous plant. We explain why practitioners attribute more robustness to slow adaptation and predictive control.

1.2.3 Outline of the Thesis

In chapter 2, we present some basic mathematical preliminaries in signal processing, conic sector stability theory, averaging theory and curve fitting. In chapter 3, we discuss some conventional estimation and controller design algorithms; a stability theorem is derived and compared with existing results. In chapter 4, we study the effect of slow sampling on the stability conditions.
Both the matched model order and the mismatched model order cases are included, and we propose how to choose the critical sampling frequency. Chapter 5 discusses the effect of slow adaptation. It can be shown that a proper filter will always yield a stable self-tuning control algorithm; however, the design of the filter requires frequency domain knowledge of the plant, so frequency domain estimation can be useful. In chapter 6, predictive control is applied; here the emphasis is on the relation between sampling frequency and prediction horizon. Chapter 7 contains conclusions and some future research directions.

Chapter 2

Mathematical Preliminary

2.1 Signal Processing Theory

2.1.1 Sampling Process

Suppose G(s) is the continuous transfer function of the plant. Denote by Z{G(s)} the Z-transform of G(s); then the discrete transfer function of the sampled plant with zero-order hold is:
$$H(q^{-1}) = Z\left\{\frac{1-e^{-sT}}{s}\,G(s)\right\} = (1-q^{-1})\,Z\left\{\frac{G(s)}{s}\right\}$$
Define:
$$G_0(s) = \frac{G(s)}{s}$$
then:
$$H(q^{-1}) = (1-q^{-1})\,Z\{G_0(s)\} \qquad (2.1)$$
H(q^{-1}) represents the sampled plant or discrete plant. Z{G(s)} is the Z-transform resulting from G(s), which also depends on the sampling period. The following lemma is important for understanding the sampling process; a similar result can be found in textbooks on digital control such as [17].

Lemma 2.1 If the sampling interval is T and ω_s = 2π/T, then the sampling process results in:
$$Z\{G_0(s)\} = \frac{1}{T}\sum_{n=-\infty}^{+\infty} G_0(s - jn\omega_s) \qquad (2.2)$$

[proof] Suppose g_0(t) is the impulse response corresponding to G_0(s) in the time domain.
The sampling process can be considered as a modulation process in which the pulse train
$$p(t) = \sum_{n=-\infty}^{+\infty}\delta(t - nT)$$
is multiplied by the continuous function g_0(t):
$$g_0^*(t) = p(t)\,g_0(t) \qquad (2.3)$$
Since we can express p(t) by a Fourier series as:
$$p(t) = \sum_{n=-\infty}^{+\infty} c_n e^{jn\omega_s t}$$
where the coefficients are c_n = 1/T, we have:
$$g_0^*(t) = \frac{1}{T}\sum_{n=-\infty}^{+\infty} g_0(t)\,e^{jn\omega_s t}$$
Using the Laplace transform:
$$G_0^*(s) = \int_0^{+\infty} g_0^*(t)\,e^{-st}\,dt
= \frac{1}{T}\sum_{n=-\infty}^{+\infty}\int_0^{+\infty} g_0(t)\,e^{-(s-jn\omega_s)t}\,dt
= \frac{1}{T}\sum_{n=-\infty}^{+\infty} G_0(s - jn\omega_s)$$
Also from (2.3):
$$g_0^*(t) = g_0(0)\delta(t) + g_0(T)\delta(t-T) + g_0(2T)\delta(t-2T) + \cdots$$
$$G_0^*(s) = g_0(0) + g_0(T)e^{-sT} + g_0(2T)e^{-2sT} + \cdots = \sum_{n=0}^{\infty} g_0(nT)\,e^{-nsT}$$
Let z = e^{sT}; then:
$$G_0(z) = \sum_{n=0}^{\infty} g_0(nT)\,z^{-n} = Z\{G_0(s)\} \qquad (2.4)$$
Comparing with the result above, the sampling process gives:
$$Z\{G_0(s)\} = \frac{1}{T}\sum_{n=-\infty}^{+\infty} G_0(s - jn\omega_s)$$
□

From Lemma 2.1, we can see that in the frequency domain, the high frequency components alias into the low frequency part after sampling.

2.1.2 Frequency Domain Estimation via DFT

We denote a discrete-time signal by x(n) = x(nT), and x_t denotes the sampled continuous-time signal, where t = nT, n is an integer and T is the sampling period; x_{t-1} denotes x[(n-1)T], etc. The Z-transform of x(n) on the unit circle is the Discrete-Time Fourier Transform (DTFT), defined as:
$$X(e^{j\omega T}) = \sum_{n=-\infty}^{+\infty} x(n)\,e^{-jn\omega T} \qquad (2.5)$$
We also define the N-point Discrete Fourier Transform (DFT) of x(n) at N points in the frequency domain, ω_k = kω_s/N, k = 0, 1, ..., N-1, where ω_s = 2π/T is the sampling frequency in rad/sec. Denoting W_N = e^{-j2π/N}, the DFT is:
$$X_N(\omega_k) = \sum_{n=0}^{N-1} x(n)\,W_N^{kn}, \qquad k = 0, 1, \cdots, N-1 \qquad (2.6)$$
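As a quick numerical check of definitions (2.5) and (2.6) (this sketch and its random test signal are illustrative, not from the thesis): for a signal of finite duration, with x(n) = 0 outside 0 ≤ n < N, the N-point DFT coincides with the DTFT evaluated at the grid frequencies ω_k = kω_s/N; with W_N = e^{-j2π/N}, the definition also matches the convention used by numpy.fft.fft:

```python
# Illustrative check: N-point DFT (2.6) vs. the DTFT (2.5) sampled at w_k.
import numpy as np

T = 0.5                                # sampling period (assumed value)
N = 16
rng = np.random.default_rng(0)
x = rng.standard_normal(N)             # finite duration: x(n) = 0 for n >= N

W = np.exp(-2j * np.pi / N)            # W_N in the text

# DFT by definition (2.6): X_N(w_k) = sum_n x(n) W_N^{k n}
X_def = np.array([sum(x[n] * W**(k * n) for n in range(N)) for k in range(N)])

# DTFT (2.5) evaluated at w_k = k * w_s / N, with w_s = 2*pi/T; the sum
# runs only over the support of x since x vanishes elsewhere.
ws = 2 * np.pi / T
wk = np.arange(N) * ws / N
n_idx = np.arange(N)
X_dtft = np.array([np.sum(x * np.exp(-1j * w * T * n_idx)) for w in wk])
# Since w_k * T * n = 2*pi*k*n/N, the two sums agree term by term.
```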
Further, the inverse N-point discrete Fourier transform of X_N(ω_k) is:
$$x(n) = \frac{1}{N}\sum_{k=0}^{N-1} X_N(\omega_k)\,W_N^{-kn}, \qquad n = 0, 1, \cdots, N-1$$
If x(n) is of finite duration, for example x(n) = 0 when n ≥ N, then the N-point DFT of x(n) and the DTFT of x(n) are equal at ω_k:
$$X_N(\omega_k) = X(e^{j\omega T})\big|_{\omega=\omega_k}, \qquad k = 0, 1, \cdots, N-1$$
The following lemma is similar to a result from [26]:

Lemma 2.2 Let y(m) = h(m) * u(m), where h(m) is an infinite length, causal impulse response and all poles of Z[h(m)] are in the open unit disk. We denote the DTFT of h(m) by H(e^{jωT}), and the N-point DFTs of u(m), y(m) ending with time index n by U_N^n(ω_k), Y_N^n(ω_k) respectively; then:
$$Y_N^n(\omega_k) = H(e^{j\omega_k T})\,U_N^n(\omega_k) + E_N^n(\omega_k), \qquad k = 0, 1, \cdots, N-1 \qquad (2.7)$$
where the discrete function E_N^n(ω_k) is given by:
$$E_N^n(\omega_k) = \sum_{p=1}^{\infty} h(p)\,W_N^{kp}\left(\sum_{m=n-N-p+1}^{n-N} u(m)W_N^{km} - \sum_{m=n-p+1}^{n} u(m)W_N^{km}\right)$$
and k = 0, 1, ..., N-1. The definition of Y_N^n(ω_k) above is:
$$Y_N^n(\omega_k) = \sum_{m=n-N+1}^{n} y(m)\,W_N^{km}, \qquad k = 0, 1, \cdots, N-1$$
which uses the most recent y(n); E_N^n(ω_k) and U_N^n(ω_k) are similarly defined.

[proof] Since the DTFT of the convolution gives
$$\sum_{m=-\infty}^{n-N} y(m)W_N^{km} + Y_N^n(\omega_k) + \sum_{m=n+1}^{+\infty} y(m)W_N^{km}
= H(e^{j\omega_k T})\left[\sum_{m=-\infty}^{n-N} u(m)W_N^{km} + U_N^n(\omega_k) + \sum_{m=n+1}^{+\infty} u(m)W_N^{km}\right]$$
we have:
$$E_N^n(\omega_k) = H(e^{j\omega_k T})\left[\sum_{m=-\infty}^{n-N} u(m)W_N^{km} + \sum_{m=n+1}^{+\infty} u(m)W_N^{km}\right] - \sum_{m=-\infty}^{n-N} y(m)W_N^{km} - \sum_{m=n+1}^{+\infty} y(m)W_N^{km}$$
For the first tail sum:
$$\sum_{m=-\infty}^{n-N} y(m)W_N^{km} = \sum_{m=-\infty}^{n-N}\sum_{p=0}^{\infty} h(p)\,u(m-p)\,W_N^{km}
= \sum_{p=0}^{\infty} h(p)W_N^{kp}\sum_{l=-\infty}^{n-N-p} u(l)W_N^{kl}
= \sum_{p=0}^{\infty} h(p)W_N^{kp}\left[\sum_{m=-\infty}^{n-N} u(m)W_N^{km} - \sum_{m=n-N-p+1}^{n-N} u(m)W_N^{km}\right]$$
where the term $\sum_{m=n-N-p+1}^{n-N} u(m)W_N^{km}$ vanishes when p = 0. Similarly:
$$\sum_{m=n+1}^{+\infty} y(m)W_N^{km} = \sum_{p=0}^{\infty} h(p)W_N^{kp}\left[\sum_{m=n+1}^{+\infty} u(m)W_N^{km} + \sum_{m=n-p+1}^{n} u(m)W_N^{km}\right]$$
Applying these two equations to the expression for E_N^n(ω_k) proves the lemma. □

The discrete transfer function of the plant G(s) with zero-order hold is:
$$H(e^{j\omega T}) = (1-e^{-j\omega T})\,Z\left\{\frac{G(s)}{s}\right\}\Big|_{z=e^{j\omega T}}$$
which is the DTFT of the sampled impulse response h(n) for z = e^{jωT}. We make the following remarks concerning H(e^{jωT}):

Remark 1:
The discrete plant H(e^{jωT}) at ω = ω_k, k = 0, 1, ..., N-1 can be approximated by:
$$H(e^{j\omega_k T}) \approx \frac{Y_N^n(\omega_k)}{U_N^n(\omega_k)} \qquad (2.8)$$
where Y_N^n(ω_k), U_N^n(ω_k) are the N-point DFTs of y_t, u_t, and k = 0, 1, ..., N-1.

Remark 2:

The estimation error in (2.8) is eliminated if u(nT) = 0, y(nT) = 0 for n > N. The error is small if u_t, y_t are close to finite length signals and N is large enough.

Remark 3:

For a linear system, if u(t) = u_1(t) + u_2(t) and y(t) = y_1(t) + y_2(t), where Y_1(ω), Y_2(ω), U_1(ω), U_2(ω) are the DFTs of y_1(t), y_2(t), u_1(t), u_2(t) respectively, and Y_1(ω) = H(e^{jωT})U_1(ω), then Y_2(ω) = H(e^{jωT})U_2(ω). In practice, if u_1(t), y_1(t) are known steady state input and output signals, u_2(t) is a disturbance and y_2(t) is the output resulting from the disturbance, then choosing u_2(t), y_2(t) of limited length, H(e^{jωT}) can be obtained from
$$H(e^{j\omega_k T}) = \frac{Y_2(\omega_k)}{U_2(\omega_k)}$$

2.2 On Conic Sector Stability Theory

2.2.1 Definitions

Extended Normed Linear Space

When discussing stability, unstable systems will be involved, so the function space must contain functions which "explode", i.e., which grow without bound as time increases. Such functions are not contained in the spaces commonly used in analysis, for example the L_p spaces. It is therefore necessary to construct a special space, called the extended space and denoted X_e. X_e contains both "well-behaved" and "exploding" functions, which are distinguished from each other by assigning finite norms to the former and infinite norms to the latter. Let T be a given subinterval of the reals, of the type [t_0, ∞) or (-∞, ∞), and let V be a given linear space. The following definitions are from [42]:

Definition Let x be any function mapping T into V, that is, x : T → V; let τ be any point in T; then the symbol x_τ denotes the truncated function x_τ : T → V, which assumes the values x_t^τ = x_t for t ≤ τ and x_t^τ = 0 elsewhere.

Definition A normed linear space X consisting of functions of the type x : T → V is assumed to satisfy:

1. If x ∈ X, then x_τ ∈ X for all τ ∈ T.
2. If x : T → V and x_τ ∈ X for all τ ∈ T, then:
   (a) ||x_τ|| is a nondecreasing function of τ ∈ T.
   (b) If lim_{τ→∞} ||x_τ|| exists, then x ∈ X and the limit equals ||x||.

Definition The extension of X, denoted X_e, is the space consisting of those functions whose truncations lie in X, that is, X_e = {x | x : T → V, and x_τ ∈ X for all τ ∈ T}. An extended norm, denoted ||x||_e, is assigned to each x ∈ X_e as follows: ||x||_e = ||x|| if x ∈ X, and ||x||_e = ∞ if x ∉ X.

Example 2.1: Consider x_t = e^{pt}, which is a function in L_2[0, ∞) when p < 0 and unbounded when
Definition For a normed linear space X consisting of functions of the type x : T —> V , 1. IfzeX then xT £ X for all r G T. 2. If x : r -» V, and if xT € X for all r € T, then: (a) | | x r | | is a nondecreasing function of r € T. (b) If l im | |x T | | exists, then x € X and the limit equals ||x||. Definition: The extension of X, denoted by X e , is the space consisting of those functions whose truncations lie in X, that is, Xe — {x\x : T —• V", and xT 6 X for all r (E T} . A n extended norm, denoted | |x | | e , is assigned to each x £ A ' e as follows: [|x||e = ||x|| if x (E A r , and | |s| | e = 00 if x $ X. Example 2.1: Consider xt = ept which is an function in L2[0,oo] when p < 0 and unbounded when Chapter 2. Mathematical Prehminary 15 p > 0. Now by using a truncated function x\ and construct a periodic signal xt based on x[ beyond [0,r], we can write its Fourier series expansion as: oo Xt = do + X / t a n cos{nvjt) + bns'm(nwt)] n=l where w — 2n/T,t 6 [0,r]. The coefficients in the Fourier expansion are e p T - 1 a 0 = - fTeptdt T JO pT 2 r „t , 2 P ( e p T - 1 ) an = - eF cos(nwt)dt = — r - r r bn = — ept sin(nwt)dt = — T Jo T 2nw{l - e^) -(p2 + n2w2) Notice that xt, x\ and xt have the same value in [0,r]. If p < 0, they are in JL 2 [ 0 , T] as T goes to infinity. For INI* = £l4 + \ ±K + *)]* = - U'[£ + - ±)\ thus, ^mK||' = J ^ | W = - ^ = ||x^ which is as expected. Here we have used the equation: ([44], p.526) Y 1 - —\^1±1 _ J _ i ~ n 2 + a 2 ~~ 2a V ™ - 1 ?r J If p > 0, xt is not in L 2 [ 0 , T ] but x\ still is for finite r. However, as r goes to oo, am bn will go to oo, thus both and will go to oo. • Chapter 2. Mathematical Preliminary 16 Input-Output Relation Definition (Relation) A relation H on Xe is any subset of the product space Xe x Xe. If (x,y) is any pair belonging to H then y will be said to be H—related to x; y will also be said to be an image of x under H . In other words, a relation is a set of pairs of functions in Xe. 
It will be convenient to refer to the first element in any pair as an input, and to the second element as an output. A relation can also be thought of as a mapping, which maps some inputs into outputs. D e f i n i t i o n (Domain and Range) If H is a relation on Xe, then the domain of H denoted J D O ( H ) , and the range of H denoted Ra(H) are the sets: • Do(H) = {x\x € Xe, and there exists y G Xe such that (x,y) 6 H } . • i?a(H) = {y\y £ Xe, and there exists such that '{x,y) G H } . If H is a relation on Xe, and if x is a given element of Xe, then the symbol H x denotes an image of x under H . Definition (Operator) A n operator H is a relation on Xe which satisfies two conditions: 1) Do(H) = A r e ; 2) H is single-valued. This means that the range of an operator is the whole extended space and the input-output relation of it is unique. Definition Chapter 2. Mathematical Prehminary 17 J? is the class of those relations H on Xe having the property that the zero element, denoted o, lies in Do(H) , and Ho = o. If H and K are relations in 3?, and c is a real constant, then the sum ( H + K ) , the product c H , and the composition product K H of K following H , are defined in the usual way, and are relation in 3?. The inverse of H in 3?, denoted by H _ 1 , always exists. The identity operator on Xe is denoted by I. 2.2.2 S t a b i l i t y P r o b l e m De f i n i t i o n In [42], a system is called stable if it is well behaved in two respects: (1). It is bounded, i.e., not explosive; (2). It is continuous, i.e., not critically sensitive to noise. Definition: A subset Y of Xe is bounded if there exists A > 0 such that, for all y € Y, \\y\\ < A. 
A relation H on X_e is bounded if the image under H of every bounded subset of X_e is a bounded subset of X_e. A relation H on X_e is continuous if H has the following property: given any x ∈ X and any Δ > 0, there exists δ > 0 such that, for all y ∈ X, if ||x − y|| < δ then ||Hx − Hy|| < Δ. A relation H on X_e is input-output stable if H is bounded and continuous.

Consider the feedback system shown in Fig. 2.1; the feedback equations are:

e_t = w_t - H_2 y_t
y_t = H_1 e_t

where H_1, H_2 are relations in ℜ. Our interest is to find conditions on H_1, H_2 which ensure that the relations E, Y are bounded. Here E = {(w, e) | (w, e) ∈ X_e × X_e, and there exist y, H_1e, H_2y, all in X_e, such that the above feedback equations hold}. Similarly, Y relates y to w.

Figure 2.1: Feedback Structure

2.2.3 Conic Relations and Properties

In this section, we give the definition of conic relations and discuss some of their properties, following [42].

Definition: A relation H in ℜ is interior conic if there are real constants r ≥ 0 and c for which the inequality

\|(Hx)_t - c\,x_t\| \le r\,\|x_t\|   [x ∈ Do(H); t ∈ T]   (2.9)

is satisfied. H is exterior conic if the inequality sign in (2.9) is reversed. Here the constant c is called the centre parameter of H, and r is called the radius parameter. Inequality (2.9) can be expressed in the form ||(Hx)_t − cx_t||² − r²||x_t||² ≤ 0. If norms are expressed in terms of inner products, then, after factoring, there is an equivalent inequality

\langle (Hx)_t - a\,x_t,\ (Hx)_t - b\,x_t \rangle \le 0   [x ∈ Do(H); t ∈ T]   (2.10)

where a = c − r and b = c + r. It will often be desirable to manipulate inequalities such as (2.10); thus we define the conic H to be inside the sector {a, b} if a ≤ b and (2.10) holds; H is outside the sector {a, b} if a ≤ b and (2.10) holds with the inequality sign reversed.

Definition: A relation H in ℜ is positive if

\langle x_t,\ (Hx)_t \rangle \ge 0   [x ∈ Do(H); t ∈ T]

A positive relation can be regarded as degenerately conic, with a sector from 0 to ∞.
Example 2.2: If a positive relation H is a linear and time-invariant operator, then

\langle x_t,\ (Hx)_t \rangle \ge 0 \quad [x \in Do(H);\ t \in T] \iff Re[H(jw)] \ge 0 \quad \forall w

[proof] For a linear and time-invariant operator H, we can express it by a transfer function H(s), where s denotes the derivative operator. Then, by Parseval's relation,

\langle x,\ Hx \rangle = \frac{1}{2\pi}\int_{-\infty}^{\infty} Re[H(jw)]\,|X(jw)|^2\,dw \ge 0

for all x_t, or X(jw). Thus the necessary and sufficient condition is Re[H(jw)] ≥ 0 for all w. □

Some Properties of Conic Relations

Some simple properties of conic relations are listed here. We assume the conic relations H, H_1 are as follows: H is inside the sector {a, b} with b > 0; H_1 is inside {a_1, b_1} with b_1 > 0. Suppose k > 0 is a constant. Then:

1. I is inside {1, 1}.
2. kH is inside {ka, kb}; −H is inside {−b, −a}.
3. (H + H_1) is inside {a + a_1, b + b_1}.
4. INVERSE RULE:
   - CASE 1: if a > 0 then H^{-1} is inside {1/b, 1/a}.
   - CASE 2: if a < 0 then H^{-1} is outside {1/b, 1/a}.
   - CASE 3: if a = 0 then (H^{-1} − (1/b)I) is positive.
5. Properties (2), (3), (4) remain valid with the terms "inside" and "outside" interchanged throughout.
6. g(H) ≤ max(|a|, |b|). Hence if H is in {−r, r} then g(H) ≤ r. Here g(H) is the gain of the relation H, defined by

g(H) = \sup \frac{\|(Hx)_t\|}{\|x_t\|}

where the supremum is taken over all x in Do(H) and all t in T for which ||x_t|| ≠ 0.

2.2.4 Theorem on Boundedness

Consider the feedback system of Fig. 2.1, and suppose that H_2 is confined to the sector {a, b}. It is desirable to find a condition on H_1 which will ensure the boundedness of the closed loop. The following theorem from [42] is outlined here; the proof is omitted.

Theorem 2.1 (Zames, [42]): In Fig. 2.1, let H_1 and H_2 be conic relations. Let Δ and δ be constants, of which one is strictly positive and one is zero. Suppose that:

1. H_2 is inside the sector {a + Δ, b − Δ}, where b > 0, and,

2.
H_1 satisfies one of the following conditions:

   - CASE 1: if a > 0, then H_1 is outside the sector \{-\frac{1}{a} - \delta,\ -\frac{1}{b} + \delta\};
   - CASE 2: if a < 0, then H_1 is inside the sector \{-\frac{1}{b} + \delta,\ -\frac{1}{a} - \delta\};
   - CASE 3: if a = 0, then H_1 + (\frac{1}{b} - \delta)I is positive; in addition, if Δ = 0 then g(H_1) < ∞.

Then the relations E, Y are bounded.

In the special case where H_1 is positive, i.e.,

\langle H_1 e,\ e \rangle \ge 0

the stability condition on a linear time-invariant H_2 is

Re\{H_2(jw)\} \ge v

where v is a sufficiently small positive real number. The famous Nyquist stability criterion is a special case of the conic sector stability theorem, in which H_2x = x (unity feedback) and H_1 is a linear time-invariant relation representing the open-loop transfer function of the plant.

2.3 On Averaging Theory

In [1] and [25], stability conditions of adaptive systems were discussed by using averaging theory. In their analysis, an adaptive algorithm can be expressed by the following time-varying equation:

\theta_{k+1} = (I - \epsilon R_k)\theta_k   (2.11)

where ε is a small positive constant representing the adaptation rate (see section 3.4.3). Here, we show the stability conditions for this equation by using the averaging method. The variations of θ in (2.11) are slow if ε is sufficiently small.

Let the state transition matrices of (2.11) be Φ(k, j), with Φ(j, j) = I, and denote F_i := Φ(k_i, k_{i−1}), θ_i := θ(k_i). Then,

\theta_i = F_i F_{i-1} \cdots F_{j+1}\,\theta_j, \quad j < i;\ j, i \in Z

Lemma 2.3: Uniform Contractions. If F_i is a sequence of uniform contractions for all i, that is, if there exists a positive constant α such that

|F_i| \le e^{-\alpha}, \quad \forall i \in Z

then the system (2.11) is exponentially stable. Conversely, if the system (2.11) is exponentially stable, then there exists a sequence of sample intervals for which F_i, i ∈ Z, is a sequence of uniform contractions.
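The averaging machinery of this section can be illustrated with a small simulation. In the sketch below (using a hypothetical period-2 sequence R_k and ε = 0.05, neither taken from the text), the sample average of R_k has its eigenvalues in the open right half-plane, and the iteration (2.11) is seen to decay:

```python
import numpy as np

# Sketch: simulate theta_{k+1} = (I - eps R_k) theta_k  (equation 2.11)
# for a hypothetical periodic R_k whose sample average has eigenvalues
# in the open right half-plane; the state should decay for small eps.
R_seq = [np.array([[2.0, 1.0], [0.0, 1.0]]),
         np.array([[0.0, -1.0], [0.0, 1.0]])]          # period 2
R_bar = sum(R_seq) / len(R_seq)                        # sample average
assert min(np.linalg.eigvals(R_bar).real) > 0          # averaging condition

eps = 0.05
theta = np.array([1.0, -1.0])
norms = [np.linalg.norm(theta)]
for k in range(400):
    theta = (np.eye(2) - eps * R_seq[k % 2]) @ theta
    norms.append(np.linalg.norm(theta))
print(norms[0], norms[-1])   # the final norm is far smaller than the initial one
```

Note that neither individual factor (I − εR_k) needs to be a contraction; it is the average R̄ that determines stability, which is the point of the averaging results that follow.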
Lemma 2.4: Averaging. Assume that R(k) is a sequence of bounded n × n matrices, with |R(k)| ≤ ρ_i on the i-th interval, and let the sample average over the interval [k_{i−1}, k_i) of length T_i = k_i − k_{i−1} be

\bar{R}_i = \frac{1}{T_i}\sum_{k=k_{i-1}}^{k_i - 1} R(k)

Then F_i is given by:

F_i = I - \epsilon T_i \bar{R}_i + (\epsilon T_i)^2 M_i   (2.12)

and, for ε satisfying |\epsilon| T_i \rho_i \le \frac{1}{2}, |M_i| \le 2\rho_i^2.

[proof] See [1].

Using Lemma 2.3 and Lemma 2.4, we can derive the following theorem:

Theorem 2.2: Stability via Sample Averages (Anderson et al. [1]). Consider (2.11) with R(k) a sequence of bounded n × n matrices. If, for a sequence of sample averages R̄_i, a constant positive definite matrix P = P^T > 0 can be found satisfying

P\bar{R}_i + \bar{R}_i^T P \ge I, \quad \forall i \in Z   (2.13)

then there exists an ε* > 0 such that the system (2.11) is exponentially stable for all ε ∈ (0, ε*).

The sufficient stability condition of Theorem 2.2 is a general signal-dependent positivity condition, which applies to large classes of system matrices R(k). For more restrictive classes of signals, much sharper stability conditions can be obtained.

Theorem 2.3: Stability (Anderson et al. [1]). For periodic R(k), there exists a positive constant ε* such that the system (2.11) is exponentially stable for all ε ∈ (0, ε*) if and only if

\min_i Re\,\lambda_i(\bar{R}) > 0   (2.14)

i.e., all eigenvalues of the average R̄ lie in the open right half-plane.

2.4 On Curve Fitting

Given a signal of known form f(k, p, t), where k is a set of known parameters, p is a set of unknown parameters, and t is the time series, the curve fitting problem is to find the set of parameters p̂ such that the curve-fitted signal is ŷ = f(k, p̂, t) and an error index J(ŷ, y) is minimized for the measured signal y_t. In practice, almost every signal can be expressed as a Fourier series, and the unknown parameters are usually the amplitude and phase of each Fourier series term. Here, we show how to curve fit a sine-wave signal with a single frequency. The multi-frequency cases are handled similarly, but the calculation is tedious. Consider a signal with the known form: \hat{y}_k = H_1 \sin(\omega_1 k T - A_1).
The error index to be minimized is:

J = \sum_{k=1}^{n}[y_k - \hat{y}_k]^2

To find H_1 and A_1 by curve fitting, we set ∂J/∂H_1 = 0 and ∂J/∂A_1 = 0:

\sum_{k=1}^{n}[y_k - H_1\sin(\omega_1 kT - A_1)]\sin(\omega_1 kT - A_1) = 0

\sum_{k=1}^{n}[y_k - H_1\sin(\omega_1 kT - A_1)]\,H_1\cos(\omega_1 kT - A_1) = 0

For large n and n\omega_1 T/2\pi close to an integer, we have

\sum_{k=1}^{n}\sin(\omega_1 kT - A_1)\cos(\omega_1 kT - A_1) \approx 0, \qquad \sum_{k=1}^{n}\sin^2(\omega_1 kT - A_1) \approx \frac{n}{2}

thus

\tan A_1 = -\frac{\sum_{k=1}^{n} y_k\cos(\omega_1 kT)}{\sum_{k=1}^{n} y_k\sin(\omega_1 kT)}   (2.15)

H_1 = \frac{2}{n}\sum_{k=1}^{n} y_k\sin(\omega_1 kT - A_1)   (2.16)

Chapter 3

Adaptive Control Algorithm and Stability

3.1 Structure of Adaptive Control System

The structure of an adaptive control system is shown in Fig. 3.2. It includes the plant, the parameter estimation and the controller. Usually, the plant output y_t is desired to be as close to the command input w_t as possible. Suppose the continuous-domain transfer function of the plant, without delay, is G(s); then the discrete plant obtained through a zero-order hold has the form:

H(q^{-1}) = (1 - q^{-1})\,\mathcal{Z}\left\{\frac{G(s)}{s}\right\} = \frac{q^{-1}B(q^{-1})}{A(q^{-1})}   (3.17)

where A(q^{-1}), B(q^{-1}) are polynomials in the shift operator (q^{-1}x_t = x_{t-1}). The parameter estimation measures the input-output data (u_t, y_t) and evaluates the parameters of the estimation model. The controller uses the estimates to produce the control input u_t of the plant. There are various methods of parameter estimation and controller design; the following two sections introduce some of them. The purpose of listing these estimation and controller design algorithms is to give a glimpse of the variety of adaptive control algorithms. Robust controller design will be discussed in later chapters.

3.2 On Parameter Estimation

In the adaptive control system of Fig. 3.2, the estimation is to find parameters for a given model such that the estimated model and the real plant have the best match
under the measured input u_t and output y_t.

Figure 3.2: Conventional Adaptive System Structure

Here, we shall represent the plant by an ARMA model. Other structures that can be used are: state space models, transfer function models with poles and zeros as parameters, orthonormal series representation models, etc. When using an ARMA model, the input-output relation of the sampled plant with zero-order hold is:

y_t = a_1 y_{t-1} + a_2 y_{t-2} + \cdots + a_n y_{t-n} + b_1 u_{t-d} + b_2 u_{t-d-1} + \cdots + b_n u_{t-d-n+1}   (3.18)

where the plant is linear time-invariant with order n and delay d. Let

A(q^{-1}) = 1 - a_1 q^{-1} - a_2 q^{-2} - \cdots - a_n q^{-n}
B(q^{-1}) = b_1 + b_2 q^{-1} + \cdots + b_n q^{-n+1}

then the transfer function of the discrete plant is:

H(q^{-1}) = \frac{q^{-d}B(q^{-1})}{A(q^{-1})}   (3.19)

Usually, we express this input-output relation in the form:

y_t = \phi_{t-1}^T \theta   (3.20)

where

\phi_{t-1} = [y_{t-1}, y_{t-2}, \cdots, y_{t-n}, u_{t-d}, u_{t-d-1}, \cdots, u_{t-d-n+1}]^T

and θ = [a_1, a_2, ⋯, a_n, b_1, b_2, ⋯, b_n]^T, which represents the true plant.

Recursive Least Squares Method

In the estimation process, φ_{t−1} and y_t are measurable; θ is to be estimated. Let θ̂_t be the estimate of θ at time t. The recursive least-squares parameter estimation is given by:

\hat{\theta}_t = \hat{\theta}_{t-1} + K_t e_t

K_t = \frac{P_{t-1}\phi_{t-1}}{\lambda + \phi_{t-1}^T P_{t-1}\phi_{t-1}}

P_t = \frac{1}{\lambda}\left[P_{t-1} - \frac{P_{t-1}\phi_{t-1}\phi_{t-1}^T P_{t-1}}{\lambda + \phi_{t-1}^T P_{t-1}\phi_{t-1}}\right]   (3.21)

where e_t = y_t − φ_{t−1}^T θ̂_{t−1} represents the prediction error, θ̂_0 is pre-specified, P_0 is a positive definite matrix, and 0 < λ ≤ 1 is a forgetting factor. In a simpler representation, we can rewrite (3.21) as:

\hat{\theta}_t = \hat{\theta}_{t-1} + P_t\phi_{t-1}e_t

P_t^{-1} = \lambda P_{t-1}^{-1} + \phi_{t-1}\phi_{t-1}^T   (3.22)

The index to be minimized is \sum_{k=1}^{t}\lambda^{t-k}e_k^2; thus when λ < 1, more weight is placed on the most recent errors, and the estimation algorithm is able to track time-varying parameters.

The recursive least-squares method works well when the output noise is white. However, in the case of coloured noise, an alternative method is required.
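The recursive least-squares update (3.21) can be sketched in a few lines. The example below identifies a hypothetical noise-free first-order plant y_t = 0.8 y_{t−1} + 0.5 u_{t−1} under a white-noise input (all numbers are illustrative assumptions, not from the text):

```python
import numpy as np

# A minimal sketch of the RLS update (3.21) with forgetting factor lam.
def rls_step(theta, P, phi, y, lam=0.98):
    """One recursive least-squares update."""
    e = y - phi @ theta                      # prediction error
    K = P @ phi / (lam + phi @ P @ phi)      # gain vector
    theta = theta + K * e
    P = (P - np.outer(K, phi @ P)) / lam     # covariance update
    return theta, P

rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5                    # hypothetical plant parameters
theta = np.zeros(2)
P = 100.0 * np.eye(2)
y_prev, u_prev = 0.0, 0.0
for t in range(500):
    u = rng.standard_normal()                # persistently exciting input
    y = a_true * y_prev + b_true * u_prev    # noise-free plant (3.18), n=1, d=1
    phi = np.array([y_prev, u_prev])
    theta, P = rls_step(theta, P, phi, y)
    y_prev, u_prev = y, u
print(theta)   # -> close to [0.8, 0.5]
```

With λ < 1 the covariance P stays bounded away from zero, which is what lets the estimator track time-varying parameters at the cost of extra noise sensitivity.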
One such alternative is the instrumental variable method. Its recursive form is:

\hat{\theta}_t = \hat{\theta}_{t-1} + K_t e_t

K_t = \frac{P_{t-1}z_{t-1}}{\lambda + \phi_{t-1}^T P_{t-1} z_{t-1}}

P_t = \frac{1}{\lambda}\left[P_{t-1} - \frac{P_{t-1}z_{t-1}\phi_{t-1}^T P_{t-1}}{\lambda + \phi_{t-1}^T P_{t-1} z_{t-1}}\right]

where the instrument vector is

z_t = [\bar{y}_{t-1}, \bar{y}_{t-2}, \cdots, \bar{y}_{t-n}, u_{t-1}, u_{t-2}, \cdots, u_{t-n}]^T, \qquad \bar{y}_t = z_t^T\hat{\theta}_{t-1}

By using an instrumental variable method, the estimates under coloured noise can be unbiased. Other types of improved least-squares estimation methods include generalized least squares, extended least squares (see [37]), etc.

Estimation Based on Gradient Algorithm

Some theoretical results on STC stability are based on a constant-gain estimation algorithm (for example, [1], [14], [30]), of the form:

\hat{\theta}_t = \hat{\theta}_{t-1} + \epsilon\,\phi_{t-1}e_t

where ε is a constant representing the adaptation gain. In practice, the use of a constant-gain algorithm is simple but not ideal: the adaptation is slow and inaccurate. Here we will use an interlaced version of the standard least-squares algorithm (3.21) with a small constant gain to slow down the convergence speed of the estimation. The interlaced version is necessary when an implicit self-tuning control algorithm is used (see Example 3.2). The form of the estimator is:

\hat{\theta}_t = \hat{\theta}_{t-d} + \epsilon P_t\phi_{t-d}e_t

P_t = P_{t-d} - \frac{\epsilon^2 P_{t-d}\phi_{t-d}\phi_{t-d}^T P_{t-d}}{\lambda + \epsilon^2\phi_{t-d}^T P_{t-d}\phi_{t-d}}   (3.24)

where d is the delay in the ARMA model (3.18). The convergence speed is affected by the adaptation gain ε and the delay d. This form is a gradient algorithm; the least-squares method is also a gradient algorithm, with faster convergence speed (ε = 1).

Properties of Estimation Algorithms

Parameter estimation is designed to minimize the error between the estimated model and the plant model, i.e., the prediction error. If the estimated model and the plant model have the same order, the parameters of the plant can be estimated accurately. However, in practice, we often have to face the fact that the plant structure is not exactly known. This is the model mismatch case; we also say that there exist unmodelled dynamics.
In the model mismatch case, the dimension of the estimated parameter vector θ̂_t is lower than that of the true plant parameter vector θ. The above estimation algorithms then have the following properties:

1. The estimation result depends on the frequency content of the command.
2. If the order of persistent excitation of the input signal is lower than the estimated model order, the estimation result will not be unique and can be significantly influenced by the presence of noise.

For property 1, consider a linear discrete plant with transfer function H(q^{-1}); the estimated model is Ĥ(q^{-1}). It can be shown from the minimization index that the estimation algorithm tends to make Ĥ(e^{-jw}) = H(e^{-jw}) at the frequencies w where the regression vector φ (the input and output of the plant) is rich in the frequency domain. Since there is model mismatch, H(q^{-1}) and Ĥ(q^{-1}) are not able to match at every frequency: only at the frequencies where the command is rich enough will the estimated model be accurate.

In property 2, the definition of persistent excitation is as follows:

Definition: Let the regression vector

\phi_t = [y_{t-1}, \cdots, y_{t-n}, u_{t-d}, \cdots, u_{t-d-n+1}]^T

contain 2n inputs and outputs of the plant (3.18). The input u_t is said to be persistently exciting of order 2n if, for some positive α_1, some integer S, and every integer j, there holds:

\sum_{k=j}^{j+S}\phi_k\phi_k^T \ge \alpha_1 I   (3.25)

A command signal is said to be persistently exciting of order 2n if (3.25) holds when u_t is excited by this command signal. A non-zero constant command is persistently exciting of order 1. A signal with m different sine-wave frequencies,

\sum_{i=1}^{m} a_i\sin(\omega_i t), \quad a_i\sin(\omega_i t) \ne 0,\ i = 1, \cdots, m

is exciting of order 2m. Let the excitation order of a deterministic input signal be n_p, and the number of parameters to be estimated be n. If n_p < n, then the estimated parameters form a vector in n-dimensional space with n − n_p degrees of freedom. In this case the input signal is not persistently exciting for the estimation model.

When the command signal is not persistently exciting, the noise will increase the excitation order. The frequency content of the noise signal is undesired and varying; thus the estimation result is not unique and is dominated by the noise in the n − n_p dimensional subspace. The following example demonstrates this point:
In this case the input signal is not persistently exciting for the estimation model. In case the command signal is not persistently exciting, the noise will increase the excitation order. The frequency component of the noise signal is undesired and vary-ing, thus the estimation result is not unique and dominated by the noise in the n — np dimensional space. The following example demonstrates this point: Chapter 3. Adaptive Control Algorithm and Stability 32 Example 3.1: Consider a continuous plant which was given by Rohrs: G(s) = 2 229 (3.26) s + 1 5 2 + 305 + 229 when sampling it by T, = 0.2sec, the discrete transfer function will be: (0.158 + 0.165Q-1 + 0.00708q- 2)g- 1 1 - 0.910O-1 + 0.0776o-2 - 0.00203a-3 Now we use first order estimation model, i.e. yt = ayt-! + bu t - i and all the initial regression vector values are zero. The forgetting factor in the esti-mation is 0.98. excitation order 2 which is a case of persistently exciting. We recorded first 500 samples as in Fig. 3.3(a), where the converged parameters are 1.63, 0.46, respectively. (2). Under the same output noise, we apply step input. The excitation order is np = 1. Theoretically, the estimated parameters are not unique because the persistent excitation is not satisfied [3]. However, the D C gain of the estimated model is unique.. This restricts the parameters d, 6 satisfying: 6 = 2.0(1 — a). The first 500 samples of estimation process is as in Fig. 3.3(b), where the estimated parameters are unfixed. Thus, the persistent excitation condition is important for useful estimation result under model mismatched case. 3.3 Self-tuning Controller Design 3.3.1 Pole-zero Placement Design Basic A l g o r i t h m (1). We apply input ut = cos(f), output noise is JV(0,0.02), the input signal has Chapter 3. Adaptive Control Algorithm and Stability t-, to -a CD J CO O.O l O . 2 0 . 3 0 . * 0 . 6 0 . 6 0 . 7 0 . 0 0 . 0 0 . 100. T i m e i n s e c o n d s Figure 3.3: (a). 
Estimated parameters in the persistent excitation case.

Figure 3.3: (b) Estimated parameters when persistent excitation is not satisfied.

For a discrete plant (3.18), the degree of A(q^{-1}) is n and the degree of B(q^{-1}) is n − 1. The input, output and command signals of the process are denoted u_t, y_t, and w_t, respectively. It is desired to find a controller such that the closed loop is stable and the transfer function from the command w_t to y_t is given by:

y_t = \frac{q^{-d}B_m(q^{-1})}{A_m(q^{-1})}\,w_t

where A_m, B_m are coprime and A_m is monic. The zeros of A_m are assumed to be inside the unit disk. A general linear regulator is given by:

R(q^{-1})u_t = T(q^{-1})w_t - S(q^{-1})y_t   (3.27)

The pole-zero placement algorithm is as follows (Astrom and Wittenmark, [7]):

Step 1: Solve the Diophantine equation

AR_1 + q^{-d}S = A_m A_0   (3.28)

with respect to R_1, S.

Step 2: Let

R = R_1 B   (3.29)
T = A_0 B_m   (3.30)

where A_0 is a polynomial used as an observer and all its zeros are in the restricted stability region. Two conditions concerning the degrees of A_m, B_m, A_0 are:

deg(A_m) - deg(B_m) \ge deg(A) - deg(B)
deg(A_0) \ge 2\,deg(A) - deg(A_m) - deg(B) - 1

Note that all zeros of A_0 and B are cancelled in the process; thus B is required to have stable zeros, i.e., the discrete plant must be minimum phase. It will be shown later that this is a strict requirement, especially in the case of fast sampling.

Implicit Algorithm

(3.28)-(3.30) is an explicit (indirect) algorithm: the controller parameters S, R, T are calculated explicitly from the estimation result and applied in (3.27) to obtain the control input u_t of the plant. For the implicit algorithm, applying (3.28) to the plant, we have

A_0 A_m y_t = S y_{t-d} + R u_{t-d}   (3.31)

(3.31) can be used as the estimation model, where A_0, A_m are known and R, S are to be estimated.
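Step 1 of the design reduces to a linear system in the coefficients of R_1 and S, obtained by matching powers of q^{-1} in (3.28). The sketch below solves this system for a hypothetical second-order A and desired A_mA_0; the Sylvester-type matrix construction is an assumption of this illustration, not taken from the text:

```python
import numpy as np

# Solve the Diophantine equation (3.28): A*R1 + q^{-d}*S = Am*A0,
# with deg R1 = d-1 and deg S = n-1. Polynomials in q^{-1} are stored
# as coefficient arrays, lowest power first.
def solve_diophantine(A, d, Ac):
    """Return (R1, S) such that A*R1 + q^{-d}*S = Ac."""
    n = len(A) - 1                 # degree of A
    nR, nS = d, n                  # numbers of unknown coefficients
    N = n + d                      # matched powers q^0 .. q^{-(n+d-1)}
    M = np.zeros((N, nR + nS))
    for j in range(nR):            # column for r_j: A shifted down by j
        M[j:j + n + 1, j] = A
    for j in range(nS):            # column for s_j: a 1 at power d + j
        M[d + j, nR + j] = 1.0
    c = np.zeros(N)
    c[:len(Ac)] = Ac
    z = np.linalg.solve(M, c)
    return z[:nR], z[nR:]

A  = np.array([1.0, -1.5, 0.7])   # hypothetical A = 1 - 1.5 q^-1 + 0.7 q^-2
Ac = np.array([1.0, -0.5])        # hypothetical Am*A0 = 1 - 0.5 q^-1
R1, S = solve_diophantine(A, d=1, Ac=Ac)
check = np.convolve(A, R1)        # verify A*R1 + q^{-1}*S == Ac
check[1:] += S                    # add q^{-1} * S  (d = 1 here)
print(R1, S, check)
```

The degree conditions quoted above are exactly what make this matrix square and generically invertible (A and q^{-d} coprime).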
Let R = r_0 + r_1 q^{-1} + \cdots + r_{n+d-2}q^{-n-d+2} = r_0 + q^{-1}R'. The control law is:

u_t = \frac{1}{r_0}\left(A_0 B_m w_t - S y_t - R' u_{t-1}\right)   (3.32)

where the estimated parameters are applied directly. The implicit algorithm is also called the direct algorithm.

Stability Conditions for the Model Order Matched Case

The pole-zero placement STC algorithm was proved to be stable under the following four assumptions in the early 1980's (see [21]):

1. the time delay d of the discrete plant is known;
2. an upper bound on the plant order is known;
3. B has all its zeros inside the unit disc;
4. the sign of b_1 = r_0 is known.

Example 3.2: Consider the plant

H(q^{-1}) = \frac{q^{-d}B(q^{-1})}{A(q^{-1})}   (3.33)

Choose A_m = 1, B_m = 1, and A_0 = 1. Then the Diophantine equation (3.28) becomes:

AR_1 + q^{-d}S = 1

and (3.30) becomes T = 1. Thus the estimation model is:

y_t = S y_{t-d} + R u_{t-d}   (3.34)

The orders of the polynomials R and S are n̄ + d − 2 and n̄ − 1 respectively, where n̄ is the assumed model order of the plant (n̄ ≤ n). Thus there is a total of 2n̄ + d − 1 parameters to be estimated. Let us express (3.34) as:

y_t = s_1 y_{t-d} + s_2 y_{t-d-1} + \cdots + r_1 u_{t-d} + r_2 u_{t-d-1} + \cdots

Then the explicit control law is:

u_t = \frac{1}{\hat{r}_1}\left(w_{t+d} - \sum_i \hat{s}_i\,y_{t-i+1} - \sum_{i\ge 2} \hat{r}_i\,u_{t-i+1}\right)   (3.35)

where r̂_i, ŝ_i are the estimated parameters and w_t is the command. □

Factorization in Pole-zero Placement

The above pole-zero placement cancels all the zeros of the discrete plant. When the plant is non-minimum phase, this algorithm cannot be applied, because unstable zeros would be cancelled. However, we can factorize B into two parts: B^+ (with all zeros inside the stable region) and B^- (with zeros outside the stable region), B = B^+B^-. Only B^+ is cancelled. Then B_m has to be of the form B_m = B'_m B^-. The design algorithm (3.28)-(3.30) becomes:

AR_1 + q^{-d}B^-S = A_m A_0   (3.36)

R = R_1 B^+   (3.37)

T = A_0 B'_m   (3.38)

The controller has the same form as (3.27), but we should notice that the degree of R_1 is higher (d + deg B^- − 1).
In this case, the implicit algorithm is difficult to design, for

A_m A_0 y_t = B^-S y_{t-d} + AR_1 y_t = B^-S y_{t-d} + R_1 B^+B^- u_{t-d} = B^-[S y_{t-d} + B^+R_1 u_{t-d}]   (3.39)

The practical difficulties of using the implicit algorithm are: (1) the order of B^+ is not known a priori; (2) B^- is difficult to factor out of the ARMA model.

3.3.2 Minimum Variance Control

Basic Algorithm

Consider the discrete plant with delay d and white noise e_t:

A(q^{-1})y_t = B(q^{-1})u_{t-d} + C(q^{-1})e_t   (3.40)

The optimal strategy for the minimum variance criterion V_t = E y_t^2 is:

u_t = -\frac{G(q^{-1})}{F(q^{-1})B(q^{-1})}\,y_t   (3.41)

where F(q^{-1}), G(q^{-1}) are determined from the Diophantine equation:

C(q^{-1}) = A(q^{-1})F(q^{-1}) + G(q^{-1})q^{-d}   (3.42)

Then the closed loop results in:

y_t = F(q^{-1})e_t   (3.43)

When using (3.41), it is necessary that:

1. B has all its zeros inside the unit disk; thus the discrete plant should be minimum phase.
2. C has all its zeros inside the unit circle. This is realizable by using the spectral factorization theorem (chapter 6 of [8]).

Implicit Algorithm

Applying both sides of (3.42) to y_t by using the plant (3.40) and ignoring the output noise, we obtain:

C(q^{-1})y_t = F(q^{-1})B(q^{-1})u_{t-d} + G(q^{-1})y_{t-d}   (3.44)

We can use (3.44) as the estimation model, and the estimated parameters are used directly in (3.41) for the control input calculation. This procedure is similar to the implicit pole-zero cancellation algorithm.

Factorization in MVC

In case B has unstable zeros, let B = B^+B^-, with B^+ and B^- defined as before. The control law is:

u_t = -\frac{G(q^{-1})}{F(q^{-1})B^+(q^{-1})}\,y_t   (3.45)

where the Diophantine equation is:

C(q^{-1}) = A(q^{-1})F(q^{-1}) + q^{-d}G(q^{-1})B^-(q^{-1})

The degree of F is d + deg(B^-) − 1, which is higher than the degree of F in (3.42). The closed loop has the same form as (3.43), but this is only a suboptimal strategy: the error variance is larger than the minimum variance of the all-zeros-cancelled case.
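The minimum-variance Diophantine equation (3.42) can be solved by carrying the long division of C by A to d terms, with G collecting the remainder. The sketch below uses hypothetical first-order A and C with d = 2 (all numbers illustrative):

```python
import numpy as np

# Solve C = A*F + q^{-d}*G (equation 3.42) by synthetic long division.
# Coefficient arrays are in powers of q^{-1}, lowest power first.
def mv_diophantine(A, C, d, nG):
    """Return (F, G) with deg F = d-1 and nG remainder coefficients."""
    A = np.asarray(A, float)
    rem = np.zeros(max(len(C), len(A) + d + nG))
    rem[:len(C)] = C
    F = np.zeros(d)
    for i in range(d):                     # divide C by A, d quotient terms
        F[i] = rem[i]
        rem[i:i + len(A)] -= F[i] * A      # subtract F_i * q^{-i} * A
    G = rem[d:d + nG]                      # remainder starts at power q^{-d}
    return F, G

A = [1.0, -0.9]       # hypothetical A = 1 - 0.9 q^-1
C = [1.0, 0.5]        # hypothetical C = 1 + 0.5 q^-1
F, G = mv_diophantine(A, C, d=2, nG=1)
recon = np.convolve(A, F)                  # verify C = A*F + q^{-2}*G
recon[2:2 + len(G)] += G
print(F, G, recon)   # F = [1.0, 1.4], G = [1.26], recon = C padded with 0
```

With these numbers the closed loop (3.43) is the moving average y_t = e_t + 1.4 e_{t−1}, so the first d impulse-response terms of C/A fix the achievable minimum variance.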
In chapter 5 of [9], this strategy is called moving average control, for the controlled output is a moving average process of order d + deg B^- − 1. For the implicit algorithm, we have

C y_t = F B u_{t-d} + G B^- y_{t-d} = B^-[F B^+ u_{t-d} + G y_{t-d}]

Here B^-, FB^+, and G have to be estimated, and we meet the same difficulty as in (3.39) for the pole-placement case.

3.3.3 Other Algorithms

For the plant (3.33), let

z_t = P(q^{-1})y_t + q^{-d}Q(q^{-1})u_t

where P, Q are weighting polynomials. Thus:

A z_t = (PB + QA)u_{t-d} = B'u_{t-d}

Let the controller be

w_{t+d} = S u_t + R z_t = \phi_t^T\theta

where R, S are to be estimated and

AS^* + q^{-d}R^*B' = C

Thus the closed-loop transfer function is C^{-1}B (see [14]).

Predictive Control

This is a robust controller design method. We will introduce the basic algorithm ([12], [40]) in Ch. 6, and discuss the relation between the control horizon and the sampling interval.

Dual Control

This is a more complicated control law which optimally adds caution and probing to the certainty equivalence control. It cannot be computed, except for very simple systems (see [21]).

3.4 Stability Condition for Adaptive Control System

The stability theory of self-tuning control algorithms was first developed in the late 1970's under the assumption that the estimation model order is not lower than the actual plant order. However, in practice, model mismatch can always exist. In this section, we derive stability conditions for STC under the estimation (3.24) and the pole placement of Example 3.2, without assuming that the estimation order is larger than or equal to the plant order. This result is an extension of the theoretical work in [1] and [13], and thus has similar conclusions. However, here, by using the signal expansion, if the operator H_2 is SPR, we are able to find an explicit upper limit on the adaptation gain ε which ensures stable STC. The work in [1] and [13] will also be shown in sections 3.4.2 and 3.4.3.
Compared with those earlier theoretical works, our result is the same except for the addition of an explicit expression for the upper limit of the adaptation gain, instead of only indicating that it is sufficiently small.

3.4.1 Main Stability Theorem

Theorem 3.1: Consider an adaptive system using the estimation algorithm (3.24) and the pole-zero cancellation control algorithm shown in Example 3.2. The delay d of the plant is assumed known, but there may be unmodelled dynamics (i.e., n̄ ≤ n in Example 3.2). Let the persistent excitation (PE) condition be satisfied for the estimation model, and let

C = BS^*q^{-d} + AR^*, \qquad H_2 = C^{-1}B

where R^*, S^* are polynomials with the same orders as R, S. If H_2 is Strictly Positive Real (SPR), then for

\epsilon \in \left(0,\ \inf_n \frac{2\eta_n}{1 + \lambda_1\eta_n^2}\right]

the adaptive system is stable. Here η_n is the distance between the Nyquist curve of H_2 and the imaginary axis, as shown in Fig. 3.4, and λ_1 is the maximum eigenvalue of the initial error covariance matrices P_i in the estimator (3.24), i = −d+1, ⋯, 0.

Figure 3.4: Geometric Explanation of η_n

Proof of the Theorem

The proof of the theorem can be divided into four parts. First, we establish an error model. Then we introduce normalization and discuss the properties of the estimation; these two steps are similar to [30] and [13]. The third step is to expand the truncated regression signal into a Fourier series and express the stability condition. Finally, we show that if H_2 is SPR, an upper limit on ε for a stable self-tuning control algorithm can be obtained.

1. Error Model

Consider the plant (3.33) and the expression for the polynomial C:

C y_t = BS^*y_{t-d} + BR^*u_{t-d} = B\,\phi_{t-d}^T\theta^*

where φ is the regression vector containing the input-output data, and θ^* is the vector containing the coefficients of R^*, S^*. Define the error e_t = y_t − w_t and θ̃_t = θ̂_t − θ^*.
The implicit pole-zero cancellation law is w_t = φ_{t−d}^T θ̂_{t−d}; thus

e_t = y_t - w_t = C^{-1}B\left[\phi_{t-d}^T(\hat{\theta}_{t-d} - \tilde{\theta}_{t-d})\right] - w_t = -H_2\,\phi_{t-d}^T\tilde{\theta}_{t-d} + (H_2 - 1)w_t

Denote ψ_t = φ_{t−d}^T θ̃_{t−d} and w'_t = (H_2 − 1)w_t. We have the error model:

e_t = -H_2\psi_t + w'_t   (3.46)

Here, if H_2 is stable and w_t is bounded, then w'_t is bounded. The block diagram is shown in Fig. 3.5.

Figure 3.5: Error Model

2. Normalization in the Estimation Algorithm

As in [30], we use the estimation algorithm (3.24) with a normalized regression vector:

\bar{\phi}_t = \rho_t^{-1/2}\phi_t   (3.47)

where the normalization factor is

\rho_t = \mu\rho_{t-1} + \max(|\phi_t|^2, \bar{\rho})   (3.48)

with ρ̄ > 0 and μ ∈ (0, 1). Thus,

|\bar{\phi}_t|^2 = \frac{|\phi_t|^2}{\mu\rho_{t-1} + \max(|\phi_t|^2, \bar{\rho})} \le 1

Note that e, ψ, P are also normalized:

\bar{e}_t = \rho_t^{-1/2}e_t, \qquad \bar{\psi}_t = \rho_t^{-1/2}\psi_t, \qquad \bar{P}_t = \rho_t P_t

Then the estimation algorithm (3.24) becomes:

\hat{\theta}_t = \hat{\theta}_{t-d} + \epsilon\bar{P}_t\bar{\phi}_{t-d}\bar{e}_t   (3.49)

\bar{P}_t = \bar{P}_{t-d} - \frac{\epsilon^2\bar{P}_{t-d}\bar{\phi}_{t-d}\bar{\phi}_{t-d}^T\bar{P}_{t-d}}{1 + \epsilon^2\bar{\phi}_{t-d}^T\bar{P}_{t-d}\bar{\phi}_{t-d}}   (3.50)

We can also rewrite (3.50) as:

\bar{P}_t^{-1} = \bar{P}_{t-d}^{-1} + \epsilon^2\bar{\phi}_{t-d}\bar{\phi}_{t-d}^T   (3.51)

thus λ_max(P̄_t) ≤ λ_max(P̄_{t−d}). Let λ_1 be the maximum eigenvalue of the initial error covariance matrices P̄_i, i = −d+1, ⋯, 0, which are all positive definite. Then

\bar{\phi}_{t-d}^T\bar{P}_{t-d}\bar{\phi}_{t-d} \le \lambda_1

Define V_t = θ̃_t^T P̄_t^{-1} θ̃_t. Then

V_t = V_{t-d} + \epsilon^2\bar{\psi}_t^2 + 2\epsilon\bar{\psi}_t\bar{e}_t + \epsilon^2\left[\bar{\phi}_{t-d}^T\bar{P}_t\bar{\phi}_{t-d}\right]\bar{e}_t^2   (3.52)

and therefore

\sum_{i=N-d+1}^{N}V_i - \sum_{i=-d+1}^{0}V_i = \sum_{t=1}^{N}\left(\epsilon^2\bar{\psi}_t^2 + 2\epsilon\bar{\psi}_t\bar{e}_t + \epsilon^2\sigma_t\bar{e}_t^2\right)   (3.53)

for any N (N can be ∞), where

\sigma_t = \bar{\phi}_{t-d}^T\bar{P}_t\bar{\phi}_{t-d} = \frac{\bar{\phi}_{t-d}^T\bar{P}_{t-d}\bar{\phi}_{t-d}}{1 + \epsilon^2\bar{\phi}_{t-d}^T\bar{P}_{t-d}\bar{\phi}_{t-d}} \in (0, \lambda_1)

Thus, since each V_i ≥ 0,

-\sum_{i=-d+1}^{0}V_i \le \sum_{t=1}^{N}\left[\epsilon^2\bar{\psi}_t^2 + 2\epsilon\bar{\psi}_t\bar{e}_t + \epsilon^2\lambda_1\bar{e}_t^2\right] \quad \forall N   (3.54)

3. Signal Expansion

Let ψ̃_t be the periodic signal based on ψ̄_t^τ on [0, τ], constructed as x̄_t was from x_t^τ in Example 2.1. Express ψ̃_t by its Fourier series:

\tilde{\psi}_t = a_0 + \sum_{n=1}^{\infty}[a_n\cos(nwt) + b_n\sin(nwt)]   (3.55)

where ψ̃_t, ψ̄_t^τ and ψ̄_t are equal on [0, τ], τ can go to ∞, a_n and b_n are functions of τ, and w = 2π/τ.
Notice that ψ̄_t is a discrete-time domain signal; thus in (3.55) the high-frequency terms with n = kτ, kτ + 1, ⋯, kτ + τ − 1 (k = 1, 2, ⋯) fold over onto the terms n = 0, 1, 2, ⋯, τ − 1. Thus (3.55) can be rewritten as:

\tilde{\psi}_t = \bar{a}_0 + \sum_{n=1}^{\tau-1}[\bar{a}_n\cos(nwt) + \bar{b}_n\sin(nwt)]   (3.56)

where

\bar{a}_n = a_n + a_{n+\tau} + a_{n+2\tau} + \cdots, \qquad \bar{b}_n = b_n + b_{n+\tau} + b_{n+2\tau} + \cdots

for n = 0, 1, 2, ⋯, τ − 1. Denote H_2(jnw) = H(nw)e^{j\varphi_n}. Then

H_2\tilde{\psi}_t = H(0)\bar{a}_0 + \sum_{n=1}^{\tau-1}H(nw)\left[\bar{a}_n\cos(nwt + \varphi_n) + \bar{b}_n\sin(nwt + \varphi_n)\right]

To calculate \sum_{t=1}^{\tau}\tilde{\psi}_t H_2\tilde{\psi}_t, we claim that, for w = 2π/τ and n = 1, ⋯, τ − 1,

\sum_{t=1}^{\tau}\cos(nwt + \varphi_n) = 0

Indeed, let S_n = \sum_{t=1}^{\tau}\cos(nwt + \varphi_n). Multiplying by \sin(n\pi/\tau) and using the product-to-sum formula, the sum telescopes:

S_n\sin\left(\frac{n\pi}{\tau}\right) = \frac{1}{2}\sum_{t=1}^{\tau}\left[\sin\left(\varphi_n + \frac{(2t+1)n\pi}{\tau}\right) - \sin\left(\varphi_n + \frac{(2t-1)n\pi}{\tau}\right)\right] = \frac{1}{2}\left[\sin\left(\varphi_n + 2n\pi + \frac{n\pi}{\tau}\right) - \sin\left(\varphi_n + \frac{n\pi}{\tau}\right)\right] = 0

For n = 1, 2, ⋯, τ − 1 we have \sin(n\pi/\tau) \ne 0; thus S_n = 0. Similarly,

\sum_{t=1}^{\tau}\sin(nwt + \varphi_n) = 0

Thus

\sum_{t=1}^{\tau}\tilde{\psi}_t H_2\tilde{\psi}_t = \sum_{t=1}^{\tau}\left\{H(0)\bar{a}_0^2 + \frac{1}{2}\sum_{n=1}^{\tau-1}H(nw)\cos(\varphi_n)(\bar{a}_n^2 + \bar{b}_n^2)\right\}

and similarly,

\sum_{t=1}^{\tau}(\tilde{\psi}_t)^2 = \sum_{t=1}^{\tau}\left[\bar{a}_0^2 + \frac{1}{2}\sum_{n=1}^{\tau-1}(\bar{a}_n^2 + \bar{b}_n^2)\right]

\sum_{t=1}^{\tau}(H_2\tilde{\psi}_t)^2 = \sum_{t=1}^{\tau}\left[H(0)^2\bar{a}_0^2 + \frac{1}{2}\sum_{n=1}^{\tau-1}H(nw)^2\cos^2(\varphi_n)(\bar{a}_n^2 + \bar{b}_n^2)\right]

Also let w̃'_t be the periodic signal based on the truncated w̄'_t on t ∈ [0, τ]:

\tilde{w}'_t = a'_0 + \sum_{n=1}^{\tau-1}[a'_n\cos(nwt) + b'_n\sin(nwt)]

Then

\sum_{t=1}^{\tau}\tilde{\psi}_t\tilde{w}'_t = \sum_{t=1}^{\tau}\left[\bar{a}_0 a'_0 + \frac{1}{2}\sum_{n=1}^{\tau-1}(\bar{a}_n a'_n + \bar{b}_n b'_n)\right]

We use the normalized version of (3.46), i.e., ē_t = −H_2ψ̄_t + w̄'_t in the extended space, with ψ̃_t = ψ̄_t and w̃'_t = w̄'_t on t ∈ [0, τ]. Define η_n = H(nw)\cos(\varphi_n), which is the real part of H_2(jnw).
The right side of (3.54) can be written as:

\sum_{t=1}^{\tau}\left[\epsilon^2\bar{\psi}_t^2 + 2\epsilon\bar{\psi}_t\bar{e}_t + \epsilon^2\lambda_1\bar{e}_t^2\right] =
\sum_{t=1}^{\tau}\left[(\epsilon^2 - 2\epsilon\eta_0 + \epsilon^2\lambda_1\eta_0^2)\bar{a}_0^2 + \frac{1}{2}\sum_{n=1}^{\tau-1}(\epsilon^2 - 2\epsilon\eta_n + \epsilon^2\lambda_1\eta_n^2)(\bar{a}_n^2 + \bar{b}_n^2)\right]
+ \sum_{t=1}^{\tau}\left[2(\epsilon - \epsilon^2\lambda_1\eta_0)\bar{a}_0 a'_0 + \sum_{n=1}^{\tau-1}(\epsilon - \epsilon^2\lambda_1\eta_n)(\bar{a}_n a'_n + \bar{b}_n b'_n)\right]
+ \epsilon^2\lambda_1\sum_{t=1}^{\tau}\left[(a'_0)^2 + \frac{1}{2}\sum_{n=1}^{\tau-1}\left((a'_n)^2 + (b'_n)^2\right)\right]

where we have used the expressions for \sum(\tilde{\psi}_t)^2, \sum(H_2\tilde{\psi}_t)^2, etc. Let

\alpha_n = \epsilon - 2\eta_n + \epsilon\lambda_1\eta_n^2   (3.57)

\beta_n = 1 - \epsilon\lambda_1\eta_n   (3.58)

for n = 0, 1, 2, ⋯, τ − 1. Then we have from (3.54):

-\sum_{i=-d+1}^{0}V_i \le \epsilon\sum_{t=1}^{\tau}\left[\alpha_0\bar{a}_0^2 + \frac{1}{2}\sum_{n=1}^{\tau-1}\alpha_n(\bar{a}_n^2 + \bar{b}_n^2)\right] + 2\epsilon\sum_{t=1}^{\tau}\left[\beta_0\bar{a}_0 a'_0 + \frac{1}{2}\sum_{n=1}^{\tau-1}\beta_n(\bar{a}_n a'_n + \bar{b}_n b'_n)\right] + \epsilon^2\lambda_1\sum_{t=1}^{\tau}\left[(a'_0)^2 + \frac{1}{2}\sum_{n=1}^{\tau-1}\left((a'_n)^2 + (b'_n)^2\right)\right]   (3.59)

4. Stability Condition

As shown in Fig. 3.4, η_n is the real part of H_2(jnw), which is always positive if H_2 is SPR. Under the SPR condition on H_2, we can choose a positive ε such that:

\epsilon \le \frac{2\eta_n}{1 + \lambda_1\eta_n^2}   (3.60)

so that in (3.57), α_n ≤ 0 for n = 0, 1, ⋯, τ − 1. Now set

\alpha = \max_n\{\alpha_n\}, \qquad \beta = \max_n\{|\beta_n|\}

Then from (3.59) we have

-\sum_{i=-d+1}^{0}V_i \le \epsilon\alpha\|\bar{\psi}^{\tau}\|^2 + 2\epsilon\beta\|\bar{\psi}^{\tau}\|\,\|\bar{w}'^{\tau}\| + \epsilon^2\lambda_1\|\bar{w}'^{\tau}\|^2   (3.61)

Notice that α is negative while ε, λ_1, β are all positive, and w'_t is bounded. If ψ_t were unbounded, ||ψ̄^τ|| would be unbounded as τ goes to ∞; then the first term in (3.61) would dominate and the right side would go to −∞ as τ → ∞, which cannot be true. Thus we have proved that ψ̄_t^τ and ψ̄_t are bounded. □

(3.60) can be conservatively written as:

\epsilon \le \frac{2\eta_{min}}{1 + \lambda_1\eta_{max}^2}

where λ_1 is given by the initialization of the estimator, and η_min and η_max are the minimum and maximum distances between the Nyquist curve of H_2 and the imaginary axis, as shown in Fig. 3.4.
Choosing ε

Define

f(ηn) = 2ηn/(1 + λ1ηn²)

We will show that the maximum ε satisfying (3.60) is either f(η_min) or f(η_max). Since

f'(ηn) = 2(1 − λ1ηn²)/(1 + λ1ηn²)²

there is only one local maximum of f in ηn ∈ (0, +∞), at ηn = 1/√λ1. We obtain:

1. if η_min < 1/√λ1, i.e. f'(η_min) > 0, then ε_max = min[f(η_min), f(η_max)];

2. if η_min > 1/√λ1, i.e. f'(η_min) < 0, then ε_max = f(η_max).

Generally, if λ1·η_min·η_max < 1 then ε_max = f(η_min); otherwise ε_max = f(η_max).

In summary, we list the conditions required for a stable adaptive control system as follows:

1. The regression vector satisfies the PE condition.
2. H2 is SPR.
3. ε is limited by the upper bound (3.60).

Notice that condition 2 implies that H2 is stable. The stability condition derived here is sufficient. In practice, even for an unstable plant, as long as the SPR H2 condition is satisfied and the system with the initial controller parameters is stable, the adaptive system will remain stable under the small adaptation gain derived in the theorem.

3.4.2 Result from Using Conic Sector Criterion

In the error model (3.46) (Fig. 3.5), the relation H2 defined as in Theorem 3.1 is a linear operator. Suppose the relation between ψt and et through parameter estimation is denoted by H1, i.e., ψt = H1et, where H1 is not necessarily linear. Assume V0 = 0, d = 1, N → ∞; then from (3.54):

⟨ψt − a1et, ψt − b1et⟩ ≥ 0

where

a1 = [−1 − √(1 − ελ1)]/ε, b1 = [−1 + √(1 − ελ1)]/ε

According to Theorem 2.1, stability requires H2 to lie in the cone

{ ε/(1 + √(1 − ελ1)), ε/(1 − √(1 − ελ1)) }

If ε is sufficiently small and λ1 → 0 (an additional assumption), it can be concluded that for SPR H2 the boundedness condition will be satisfied with ε, λ1 carefully chosen to be sufficiently small. In [13], a procedure for choosing ε, λ1 for a given H2 is suggested; however, no explicit expression for ε is given there.
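The case analysis above reduces to taking the smaller of f(η_min) and f(η_max); a sketch of the selection rule (the η values are illustrative, not taken from any figure):

```python
def f(eta, lam):
    """f(eta) = 2*eta/(1 + lam*eta**2): unimodal with its peak at eta = 1/sqrt(lam)."""
    return 2 * eta / (1 + lam * eta * eta)

def eps_max(eta_min, eta_max, lam):
    """Largest eps satisfying (3.60) for every eta in [eta_min, eta_max].
    Since f is unimodal, its minimum over the interval sits at an endpoint;
    f(eta_min) is the smaller endpoint value exactly when lam*eta_min*eta_max < 1."""
    if lam * eta_min * eta_max < 1:
        return f(eta_min, lam)
    return f(eta_max, lam)

lam = 1.0
assert eps_max(0.2, 0.9, lam) == min(f(0.2, lam), f(0.9, lam))   # lam*0.18 < 1
assert eps_max(1.5, 4.0, lam) == min(f(1.5, lam), f(4.0, lam))   # lam*6.0 > 1
```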
Values for ε, λ1 are based on the conic condition of H2, and they exist theoretically if H2 is SPR.

Model Matched Case

In the model matched case, the orders of R* and S* are n − 1. We can choose R* = B, S* = q(1 − A), so that H2 = 1. In this case the SPR condition is always satisfied. However, cancellation of unstable zeros is not allowed; thus stability requires B to have all its zeros stable, i.e., the discrete plant must be minimum phase. This agrees with the early stability theory in [16], [21].

3.4.3 Using Averaging Method

Consider d = 1. Applying (3.46) to the estimation (3.24), we obtain:

θ̃t = θ̃_{t−1} − εPt φ_{t−1} H2[φ_{t−1}ᵀ θ̃_{t−1}] + εPt φ_{t−1} w't   (3.63)

If Pt = I, i.e. the estimation uses a constant gain, and we define wt = 0, Rt = φ_{t−1} H2 φ_{t−1}ᵀ, and let ε be sufficiently small so that θ̃t changes much more slowly than φ_{t−1}, then (3.63) becomes:

θ̃_{t+1} = (I − εRt)θ̃t   (3.64)

which is the same expression as (2.11). If ε is sufficiently small and H2 is SPR, the adaptive system will be stable under a persistently exciting command.

3.5 Simulations

Example 3.3 For the continuous plant

G(s) = [2/(s + 1)]·[229/(s² + 30s + 229)]

assume T is the sampling period. The discrete plant is H(q⁻¹) = B(q⁻¹)/A(q⁻¹), whose coefficients can be calculated exactly². We use (3.21) for parameter estimation, and only a first order model is used. Denote by â, b̂ the estimated parameters.

1. Estimation and SPR Properties

For sampling periods 0.3 sec, 0.24 sec and 0.18 sec, we apply the command input cos t. This is a persistent excitation case. We let the simulation run for 500 sec. The estimated parameters are given in Table 3.1. Express H2 approximately by:

H2 = B/(âBq⁻¹ + b̂A)

²With A(q⁻¹) = 1 − a1q⁻¹ − a2q⁻² − a3q⁻³ and B(q⁻¹) = b1 + b2q⁻¹ + b3q⁻², then:
a1 = e^{−T} + 2e^{−15T}cos 2T
a2 = −2e^{−16T}cos 2T − e^{−30T}
a3 = e^{−31T}
b1 = 2 − 2.29e^{−T} + 0.29e^{−15T}(cos 2T + 3.5517 sin 2T)
b2 = 0.29e^{−T} − 4.29(1 − e^{−T})e^{−15T}cos 2T − 1.03(1 + e^{−T})e^{−15T}sin 2T − 0.29e^{−30T}
b3 = (2.29 − 2e^{−T})e^{−30T} − 0.29e^{−16T}(cos 2T − 3.5517 sin 2T)

T      | 0.3 sec | 0.24 sec | 0.18 sec
â      | 0.792   | 0.831    | 0.871
b̂      | 0.473   | 0.388    | 0.298

Table 3.1: Estimated parameters for different sampling frequencies

Figure 3.6: Nyquist curve of H2 under different sampling

The Nyquist curves of H2 obtained for the different sampling periods are shown in Fig. 3.6, where we can see that as the sampling frequency increases, H2 ceases to be SPR.

2. Self-tuning Control Result

When pole-zero cancellation is used, the adaptive control results are stable for Ts = 0.3, 0.24 sec. In Fig. 3.6, when Ts = 0.3 sec, η_max = 1.260 and η_min = 0.165; when Ts = 0.24 sec, η_max = 1.394 and η_min = 0.029. Here we choose λ1 = 1.0; thus ε_max according to (3.60) is 0.283 and 0.057 respectively. In the case Ts = 0.24 sec, the adaptation process is slow. However, this upper limit is only sufficient: we still obtain stable adaptive control output with ε = 1.0. For Ts = 0.18 sec, however, the adaptive control output is unstable.

In this chapter, we have introduced some adaptive control design methods and given the main stability theorem. Compared with the former proofs given in [1], [13], [30], an upper limit for the adaptation rate is given explicitly. The bound on ε is a very conservative one: we have not found any examples in simulation where stability could only be achieved with the small adaptation gain. In [28], an analysis of the relation between robustness and adaptation gain was given using small gain theory. Although the adaptation gain derived there is sufficient for stability, it has been indicated that to obtain the maximum robustness it is not necessary to have the smallest ε.
However, if ε is sufficiently small, the robustness will be non-zero. In summary, for a stable conventional pole-zero adaptive controller:

• In the model matched case, the discrete plant should be minimum phase.
• In general there is model mismatch; then the operator H2 has to be SPR.

Chapter 4

Robust Adaptive Control: Slow Sampling

4.1 Why Use Slow Sampling?

The robustness of adaptive control systems depends on: (1) the difference between the discrete plant order and the estimation model order; (2) the discrete plant parameters (coefficients of the model); (3) the estimation algorithm; (4) the adaptation algorithm. The sampling frequency obviously affects the parameters of the discrete plant. In [33] and [34], it was shown that with a sufficiently low sampling frequency the difference between the discrete plant and the model is small, and thus the adaptive system is robust. In general, when sampling a continuous-time system at a high sampling frequency, the discrete system will be close to the continuous system with a zero order hold. Slow sampling, however, will ignore the poles and zeros far beyond the corresponding Nyquist frequency. In this case the discrete plant behaves as a low order system, and thus the error due to model mismatch is small. In [5], it is shown that an inversely unstable discrete system is possible at a sufficiently high sampling frequency even if the continuous plant is minimum phase. With a low sampling frequency, however, we can always obtain a minimum phase discrete plant provided the continuous plant is rational, stable and inversely stable. In 4.2, we will present some results on the zeros of the sampled plant and propose a method to obtain the critical sampling for a minimum phase discrete plant. In that section, we restrict the continuous plant to be rational, stable and minimum phase. In chapter 3, it has been shown that an SPR condition is essential for a stable adaptive control algorithm.
We can also show that SPR H2 requires the discrete plant to be inversely stable; however, this is not a sufficient condition for stable STC. In 4.3, we give an expression for the model error in H2 and show an approximate way of choosing the critical sampling for SPR H2. A simulation example of these relations is given in 4.4. In the adaptive control algorithm used here, we assume that the least-squares estimation algorithm is used. The controller is a pole-zero cancellation scheme as in Example 3.2.

4.2 Choice of Sampling for Model Matched Case Adaptive Control

4.2.1 Definition and Background

A stable continuous-time system with a rational transfer function is minimum phase if it has only left-half plane zeros. Here, by extension, we shall define a stable discrete-time system as minimum phase if all its finite zeros are within the unit circle, i.e. it is inversely stable. In many self-tuning control schemes it is desirable that the plant be minimum phase (MP). An earlier discussion of the zeros of sampled systems shows that if there are more than two excess poles in the continuous transfer function of the plant, the sampled system will be non-minimum phase (NMP) if the sampling frequency is fast enough ([5]).

Theorem 4.1 (Astrom, Hagander and Sternby, [5]) Let G(s) be a rational function,

G(s) = k(s − z1)(s − z2)⋯(s − zm) / [(s − p1)(s − p2)⋯(s − pn)]   (4.65)

and H(q) the corresponding pulse transfer function (m < n). Then, as the sampling period Ts goes to 0, m zeros of H(q) go to 1 as e^{ziTs}, and the remaining n − m − 1 zeros of H(q) go to the zeros of B_{n−m}(q), where Bk(q) is the numerator polynomial of the pulse transfer function of G(s) = s^{−k}.
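The polynomials Bk(q) of Theorem 4.1 are the Euler–Frobenius polynomials, whose coefficients are the Eulerian numbers; a sketch of generating them by the standard Eulerian-number recursion (this identification is a well-known property of ZOH sampling of s^{−k}, stated here without proof):

```python
def euler_frobenius(k):
    """Coefficients of B_k(q) from highest to lowest power (the list is palindromic),
    via the Eulerian-number recursion <n,j> = (j+1)<n-1,j> + (n-j)<n-1,j-1>."""
    row = [1]                      # B_1(q) = 1
    for n in range(2, k + 1):
        prev = row
        row = []
        for j in range(n):
            left = (j + 1) * prev[j] if j < len(prev) else 0
            right = (n - j) * prev[j - 1] if 1 <= j <= len(prev) else 0
            row.append(left + right)
    return row

assert euler_frobenius(2) == [1, 1]                      # q + 1
assert euler_frobenius(3) == [1, 4, 1]                   # q^2 + 4q + 1
assert euler_frobenius(4) == [1, 11, 11, 1]
assert euler_frobenius(6) == [1, 57, 302, 302, 57, 1]
```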
For example:

B1(q) = 1 — no unstable zeros
B2(q) = q + 1 — unstable zero: −1
B3(q) = q² + 4q + 1 — unstable zero: −3.73
B4(q) = q³ + 11q² + 11q + 1 — unstable zeros: −1, −9.899
B5(q) = q⁴ + 26q³ + 66q² + 26q + 1 — unstable zeros: −2.322, −23.20
B6(q) = q⁵ + 57q⁴ + 302q³ + 302q² + 57q + 1 — unstable zeros: −1, −4.542, −51.22

It has also been shown that slow enough sampling will always result in an MP discrete plant. The following theorem is also proved in [5].

Theorem 4.2 Let G(s) in (4.65) be a strictly proper rational transfer function with G(0) ≠ 0 and Re(pi) < 0. Then all zeros of the pulse transfer function go to 0 as the sampling period Ts goes to infinity.

Here we will discuss how slow this sampling should be. First, we present the main result of [5]:

Theorem 4.3 Let G(s) as in (4.65) be a strictly proper, rational transfer function with:

1. Re(pi) < 0;
2. G(0) ≠ 0;
3. −π < arg{G(jw)} < 0, for 0 < w < ∞.

Then all the zeros of the corresponding pulse transfer function H(q⁻¹) are stable.

Condition 3 of this theorem is sufficient and very strict: it requires the continuous plant to have at most two excess poles. Moreover, the conditions are not related to the sampling frequency. We now define the critical sampling frequency wc as follows.

Definition The critical sampling frequency wc of a plant G(s) is the largest frequency such that for any sampling frequency ws < wc, the sampled system is MP.

For adaptive control using pole-zero cancellation in the model matched case, if the discrete plant is minimum phase, the adaptive system is stable. Thus, the critical sampling for a minimum phase discrete plant is very useful. In the following, we will show that the critical sampling frequency wc satisfies an equation whose solution can be obtained simply by using a relay; thus, the critical sampling frequency can be obtained without prior knowledge of the continuous plant.
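In 4.2.2 below, the critical frequency is shown to satisfy equation (4.73), Σ_{m≥0} ℑ{G[j(2m+1)wc/2]}/(2m+1) = 0. A sketch of solving this condition directly by bisection, using the Rohrs-type plant of Example 4.3 so the result can be compared with the relay measurement there (the truncation at 200 harmonics and the bisection bracket are arbitrary choices):

```python
import math

def G(s):
    # Rohrs' plant, as used later in Example 4.3 (eq. 4.93)
    return (2 / (s + 1)) * (229 / (s * s + 30 * s + 229))

def S(omega, terms=200):
    """Left side of (4.73) with wc/2 = omega: sum of Im G(j(2m+1)omega)/(2m+1)."""
    return sum(G(1j * (2 * m + 1) * omega).imag / (2 * m + 1) for m in range(terms))

def critical_period(lo=10.0, hi=20.0):
    """Bisect S for its zero crossing; returns T_c = 2*pi/wc = pi/omega."""
    assert S(lo) < 0 < S(hi)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if S(mid) < 0:
            lo = mid
        else:
            hi = mid
    return math.pi / (0.5 * (lo + hi))

Tc = critical_period()
# consistent with the relay measurement T_c = 0.199 sec in Example 4.3
assert 0.18 < Tc < 0.22
```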
4.2.2 Condition for MP Discrete Plant

Suppose G(s) is the continuous transfer function of a stable, proper, rational plant, with all poles and zeros in the left half plane. Then the sampled plant is:

H(q⁻¹) = (1 − q⁻¹)·Z{G(s)/s}   (4.66)

Define G0(s) = G(s)/s; then through the transform relation (4.66),

Z{G0(s)} = (1/T) Σ_{m=−∞}^{+∞} G0(s + jmws) ≜ (1/T)G*(s) = (1/T)G*z(q)   (4.67)

where ws is the sampling frequency and T (T > 0) is the sampling period. Thus the discrete plant is:

H(q⁻¹) = (1/T)(1 − q⁻¹)G*z(q)   (4.68)

In the frequency domain, the sampling process results in aliasing of the low frequency band by the high frequency components. It is obvious that if G(0) ≠ 0, the factor (1 − q⁻¹) in (4.68) will be cancelled. Thus the zeros of the sampled system are the zeros of G*(s) through the transformation (4.66). Before stating the main theorem, we introduce the following lemmas.

Lemma 4.1: If G(s) is stable, all poles of G*z(q) are stable except one on the unit circle.

[Proof] Denote by ℜ(x) the real part of x. After the Z-transform, a pole −pi of G0(s) with ℜ(pi) > 0 is transformed to the pole e^{−piT}, with |e^{−piT}| < 1. The pole s = 0 is transformed to the pole z = 1, which is on the unit circle. □

Lemma 4.2: If G(s) has n poles, then G*z(q) has n finite zeros for almost all T > 0.

[Proof] After the Z-transform, the discrete plant is:

H(q⁻¹) = B(q⁻¹)/A(q⁻¹) = (b1q⁻¹ + b2q⁻² + ⋯ + bnq⁻ⁿ)/(1 − a1q⁻¹ − ⋯ − anq⁻ⁿ)

If G(s) has no delay, b1 ≠ 0. Since

H(q⁻¹) = B(q⁻¹)/A(q⁻¹) = (1/T)(1 − q⁻¹)G*z(q)

we get

G*z(q) = T·q(b1q^{n−1} + b2q^{n−2} + ⋯ + bn) / [(q − 1)(qⁿ − a1q^{n−1} − ⋯ − an)]

Thus there are n finite zeros of G*z(q), at least one of which is q = 0. □

Lemma 4.3:

ℑ{G*(j ws/2)} = 0   (4.69)

where ws is the sampling frequency and ℑ(x) denotes the imaginary part of x.
[Proof]

G*(j ws/2) = Σ_{m=−∞}^{+∞} G0(j ws/2 + jmws) = Σ_{m=0}^{+∞} { G0[j(2m+1)ws/2] + G0[−j(2m+1)ws/2] }

For a real-coefficient transfer function F(s),

ℑ{F(jw) + F(−jw)} = 0   (4.70)

Recall that G0(s) is rational and has real coefficients; thus ℑ{G*(j ws/2)} = 0. □

This means that if we draw the Nyquist curve of G*(jw) with w from 0 to ws/2, the curve will end on the real axis. It is well known that the Nyquist curve of G*(jw) for w from −ws/2 to 0 is symmetric about the real axis, and that the Nyquist curve of G*(jw) repeats when the frequency goes beyond the range [−ws/2, ws/2].

Now we can state our main theorem on the MP discrete plant condition.

Theorem 4.4: Given a continuous, proper, stable and minimum phase rational transfer function G(s), define G*(s) as in (4.67). If for all w ∈ [0, ws/2]:

−90° > arg{G*(jw)} > −270°   (4.71)

then the sampled plant H(q⁻¹) is MP.

Figure 4.7: Nyquist path

[Proof] Consider the Nyquist path on the imaginary axis, with w from −ws/2 to ws/2, excluding the origin on the right side as shown in Fig. 4.7. A Nyquist curve which satisfies (4.71) does not encircle the origin, as shown in Fig. 4.8. Now extend the Nyquist path from w = −∞ to +∞, i.e. to the whole imaginary axis; the corresponding Nyquist curve simply repeats itself and does not encircle the origin. So the numbers of poles and zeros of G*(s) in the left half plane must be the same. According to Lemma 4.1, there are n stable poles of G*(s). Thus from Lemma 4.2, the n finite zeros of G*z(q) are stable and H(q⁻¹) is MP. □

It is obvious that if H(q⁻¹) is MP, then ℜ{G*(j ws/2)} < 0. Lemma 4.3 has indicated that G*(j ws/2) always lies on the real axis. If there is one unstable finite zero, the Nyquist curve will encircle the origin once as shown in Fig. 4.9, and G*(j ws/2) will lie on the positive part of the real axis. It is obvious that for the critical sampling frequency wc,
Figure 4.8: Nyquist curve satisfying (4.71)

G*(j wc/2) will be at the origin; thus we have:

Corollary: The critical sampling frequency wc satisfies:

ℜ{G*(j wc/2)} = 0   (4.72)

Since

G*(j wc/2) = Σ_{m=0}^{+∞} [ G(jwc(m + ½))/(jwc(m + ½)) + G(−jwc(m + ½))/(−jwc(m + ½)) ]

(4.72) is equivalent to:

ℜ Σ_{m=0}^{+∞} [ G(jwc(m + ½))/(jwc(m + ½)) + G(−jwc(m + ½))/(−jwc(m + ½)) ] = 0

Using the property (4.70), we obtain:

Σ_{m=0}^{+∞} ℑ{G[j(2m+1)wc/2]}/(2m+1) = 0   (4.73)

Figure 4.9: Nyquist curve not satisfying (4.71)

Notice that the critical sampling frequency satisfies (4.73), but not every sampling frequency satisfying (4.73) is critical.

4.2.3 Obtaining the Critical Sampling by Using a Relay

Consider a relay component in series with the plant, as shown in Fig. 4.10, where the command x(t) is a sine wave with frequency w and amplitude A:

x(t) = A sin(wt)   (4.74)

Then the input signal of the plant u(t) (which is also the output of the relay) is a square wave and can be expressed using a Fourier series:

u(t) = (4d/π) Σ_{k=0}^{∞} sin[(2k+1)wt]/(2k+1)   (4.75)

where d is the relay output amplitude.

Figure 4.10: Closed loop with relay

Suppose the continuous transfer function of the plant is G(s), and let ws = 2w. Then, defining

A_{2k+1} = |G[j(2k+1)w]|, θ_{2k+1} = arg{G[j(2k+1)w]}

the output signal is:

y(t) = (4d/π) Σ_{k=0}^{∞} A_{2k+1} sin[(2k+1)wt + θ_{2k+1}]/(2k+1)

At the specific time tN = 2πN/w, x(tN) = 0, and

y(tN) = (4d/π) Σ_{k=0}^{∞} A_{2k+1} sin(θ_{2k+1})/(2k+1) = (4d/π) Σ_{k=0}^{∞} ℑ{G[j(2k+1)w]}/(2k+1)   (4.76)

Thus we can obtain the value of ℜ{G*(jw)} by measuring y(tN); if the output is zero at tN, then the frequency wc = 2w satisfies equation (4.73).

Figure 4.11: Nyquist diagram for limit cycle

Further, by using the closed-loop structure of Fig. 4.10, a limit cycle will exist if there are more than two excess poles in G(s) (see [6]).
This can be seen by drawing the negative reciprocal describing function of the relay and the Nyquist curve of G(s), as shown in Fig. 4.11. When the oscillation occurs, the input of the relay is the negative of the plant output. This means that the input of the relay xt and the plant output yt reach zero simultaneously. Thus y(tN) = 0 in (4.76), and the oscillation frequency w is half of the critical sampling frequency wc, as in equation (4.73). This oscillation frequency can be obtained by measuring the oscillation period, using for example the method described in [6]. For a system with no more than two excess poles there may be no limit cycle; in this case, the sampled system is always MP. If there is more than one limit cycle, the slowest oscillation frequency is considered. Also in [6], it is mentioned that the configuration of Fig. 4.10 can be used to initialize adaptive controllers, in particular to obtain an estimate of the time delay. As shown here, this method can also be used to select the sampling period for stable, proper, rational minimum-phase continuous plants with the guarantee that the discrete plant is minimum phase, i.e. that its finite zeros are stable.

Example 4.1 Suppose

G(s) = 1/[(s + 1)³(s + 2)]

For this system, the relation between the sampling period T and the presence of unstable zeros is as follows:

T > 3.36 sec — inversely stable
3.36 sec > T > 0.74 sec — one unstable zero
0.74 sec > T — two unstable zeros

Fig. 4.12 shows the Nyquist plot of G*(s) for T = 4 (inversely stable), T = 2 (unstable zero at −3.0245) and T = 0.5 (two unstable zeros at −13.1968 and −1.3079).

Figure 4.12: Nyquist curve of G* in the example for different sampling intervals T
Figure 4.13: Limit cycle obtained when putting G(s) of the example under relay control

Fig. 4.13 shows the limit cycle obtained when the continuous-time plant G(s) is put under relay control. The period of the limit cycle is 6.72, which corresponds to twice the critical sampling interval, as described before.

In this section, a condition for a minimum phase discrete plant has been proposed. It is a weaker condition than that in (Astrom et al. 1984) and is related to the sampling frequency. The critical sampling frequency can be obtained by measuring the oscillation frequency in a closed-loop system with a relay; no prior knowledge of the continuous plant is required for this method. The sampling frequency for a model matched adaptive control system should be chosen slower than the critical sampling frequency.

4.3 Choice of Sampling for Model Mismatched Case Adaptive Control

This section is a further discussion of the effect of slow sampling on the robustness of adaptive control; model structure mismatch will be considered. In [33], [34], it was shown that slow sampling yields a small model structure error. Here we will relate the SPR operator H2 to the model error and show that if sampling is slow enough, the SPR H2 condition will be satisfied. As in chapter 3, H2 is defined as:

H2(q⁻¹) = B(q⁻¹) / [A(q⁻¹)R*(q⁻¹) + B(q⁻¹)S*(q⁻¹)q^{−d}]   (4.77)

where the discrete plant is as in (3.33) and R*, S* here denote the converged estimates. The converged estimates R*, S* can be obtained by using estimation only, as in Fig. 3.2 with the loop open. In the "Properties of Estimation Algorithm" part of section 3.2, it was mentioned that this estimate depends on the frequency components of the command input. The operator H2 is thus related to:

1. The discrete plant, which depends on:
— the continuous plant;
— the sampling period.

2.
The converged estimated parameters R*, S*, which depend on:
— the estimation model order;
— the excited command frequencies.

Here only the effect of the sampling frequency is studied.

4.3.1 Uncertainty in Sampled System

We first introduce the uncertainty in the sampled system ([33]) and study how the sampling frequency affects this model uncertainty. Consider the plant uncertainty represented by a multiplicative perturbation 1 + L(jw) of the nominal system Gn(jw), i.e. the plant can be expressed as:

G(jw) = Gn(jw)[1 + L(jw)]   (4.78)

Here L(jw) represents the model structure error of the continuous plant. It is assumed that |L(jw)| ≤ l(jw), where l(jw) could be known and increases as w increases. Suppose the sampling period is T; the sampled systems of G, Gn are:

H(jw) = (1/T) Σ_{n=−∞}^{+∞} G(jw + jnws)   (4.79)

Hn(jw) = (1/T) Σ_{n=−∞}^{+∞} Gn(jw + jnws)   (4.80)

where ws = 2π/T. If Ld(jw) is a discrete-time multiplicative perturbation representing the model structure error of the discrete plant, i.e.

H(jw) = Hn(jw)[1 + Ld(jw)]   (4.81)

we have the following theorem:

Theorem 4.5 (Rohrs, Stein, Astrom [33]) Given the notation defined in (4.78)–(4.81), and assuming that all infinite sums converge,

|Ld(jw)| ≤ Σ_{n=−∞}^{+∞} l(jw + jnws)|Gn(jw + jnws)| / |Σ_{n=−∞}^{+∞} Gn(jw + jnws)|   (4.82)

The proof of this theorem can be obtained by direct calculation from (4.78)–(4.81). The following example illustrates the relationship between sampling and uncertainty.

Example 4.2 Consider the continuous plant:

G(s) = [2/(s + 1)]·[229/(s² + 30s + 229)]

We use a first-order nominal model

Gn(s) = b/(s + a)

for parameter estimation. The sampling periods are 0.18 sec, 0.24 sec and 0.3 sec respectively. The command signal is cos(t). We express the error L(s) according to (4.78) as:

L(s) = [G(s) − Gn(s)]/Gn(s)

For the continuous plant, since the exciting frequency of the command is w1 = 1.0 rad/sec, we can find a = 0.7673, b = 1.7752 such that Gn(jw1) = G(jw1). For the sampled system, the first order discrete nominal model is:
For continuous plant, since the exciting frequency of the command is w\ = l.Orad/sec, we can find a = 0.7673, b = 1.7752 such that Gn(jwi) — G(jioj). For the sampled system, the first order discrete nominal model is: - l Hniq-1) cq 1 - dq-1 Note Hn(q~l) is also the sampled system resulting from Gn(s) but the expression of Ld(q~l) is complicated. The Table 4.2 shows the parameters of discrete nominal plant for exciting frequency of the command 1.0 rad/sec. and different sampling period T. The error \Ld(jw)\ for different cases is shown in Fig. 4.14. From this example, we can observe: Chapter 4. Robust Adaptive Control: Slow Sampling 71 o o T = 0.18 sec. continuous case T=0.24 sec. T = 0.3 sec. o.e 0 . 3 F r e q u e n c y : r a d / s e c . Figure 4.14: Model error via sampling periods • The model structure error is small near the exciting frequency; • The model structure error is large at high frequency; • When the sampling frequency is low, the model structure error is small and \Ld{jw)\ < l-^O' 1 0)! < KJW) a r e ^ r u e f ° r a u t n e cases, here L(jw) is the structure error for the continuous nominal model and Ld(jw) is for the discrete nominal model. c d r = 0.18sec. 0.8704 0.2980 T = 0.24sec. 0.8304 0.3880 T = 0.30sec. 0.7918 0.4337 Table 4.2: Parameters of discrete nominal model under different sampling in example Chapter 4. Robust Adaptive Control: Slow Sampling 72 In (4.82), l(jw) usually increases with increasing w. Gn(jw) usually has low pass characteristics. If an anti-aliasing filter is included in Gn(jw), and if the term n = 0 is the only significant term in the expression of Hn(jw), then we can clearly see that slower sampling will result in smaller model error in (4.82). O n smal l uncer ta inty error Let the discrete plant be and the nominal plant (usually with lower order than the plant) be *•<*-'> -1 Su\^d (4'84) Ld{q~1) is the multiplicative perturbation represents the model structure error as shown in (4.81). 
Using the discrete plant description, we obtain:

H2(q⁻¹) = B/(AR* + BS*q^{−d}) = Hq^{d}/(R* + S*H)   (4.85)

Here H2 also represents the closed-loop transfer function which uses the converged estimates R*, S* as controller parameters. Let

H2(q⁻¹) = H2n(q⁻¹)[1 + M*(q⁻¹)]   (4.86)

where

H2n(q⁻¹) = Hn q^{d}/(R* + S*Hn)   (4.87)

represents the nominal closed-loop system. Notice that H2n(q⁻¹) contains no unstructured uncertainty. From (4.85), (4.86) one obtains:

H2 = Hn(1 + Ld)q^{d} / [R* + S*Hn(1 + Ld)]

thus

1 + M* = H2/H2n = (1 + Ld)(R* + S*Hn) / [R* + S*Hn(1 + Ld)]

or

M* = Ld / [1 + (1 + Ld)(S*/R*)Hn]

Here the model structure error M* has the following properties:

1. As the discrete-time multiplicative perturbation Ld of the plant gets smaller, the multiplicative perturbation M* of the SPR operator gets smaller. Ld represents the model structure uncertainty of the sampled plant, which decreases as the sampling period gets longer.

2. As the term 1 + (S*/R*)Hn gets larger, M* gets smaller. If the nominal model is such that H2n(q⁻¹) = 1, then

1 + (S*/R*)Hn = Hn q^{d}/R*

Thus M* also depends on the nominal model.

Model Uncertainty and SPR Condition

In section 3.4, we have shown that SPR H2 is essential for a stable adaptive control algorithm. We now express H2 as a function of the model structure error. Suppose the converged estimates match the nominal model exactly, i.e. H2n = 1, so that AnR* = Bn(1 − S*q^{−d}); then, noting the expression of H2 in (4.77) and using (4.81),

AR*(1 + Ld) = B(1 − S*q^{−d})

thus

H2(q⁻¹) = B / [B(1 − S*q^{−d})/(1 + Ld) + BS*q^{−d}] = (1 + Ld)/(1 + LdS*q^{−d})   (4.88)

(4.88) shows the relation between the operator H2 and the model structure error Ld. The following results are of interest:

1. The operator H2 is a function of the model structure error of the discrete plant. If the error is small enough, H2(q⁻¹) is SPR. A special case is when there is no unstructured uncertainty, i.e. Ld = 0; then H2 = 1, which is always SPR.

2.
A sufficient condition for SPR H2 can be derived as:

sin⁻¹‖Ld(jw)‖∞ + sin⁻¹‖Ld(jw)S*(jw)‖∞ < 90°   (4.89)

when ‖Ld(jw)‖∞ < 1 and ‖Ld(jw)S*(jw)‖∞ < 1. Here S*(jw) depends on the converged estimates of the nominal model.

[Proof of (4.89)]: sin⁻¹‖Ld(jw)‖∞ bounds the maximum phase of [1 + Ld(jw)]; similarly, sin⁻¹‖Ld(jw)S*(jw)‖∞ bounds the maximum phase of [1 + LdS*q^{−d}]. For SPR H2, the phase of (4.88) must be limited to (−90°, 90°); thus (4.89) is sufficient.

4.3.2 Stability Conditions for Adaptive Algorithm

In Theorem 3.1, R*, S* do not have to be the converged estimates; they are in fact polynomials of fixed order. Here we simplify the expression of H2 by setting S* = 0, R* = 1, and denote the simplified H2 by H2s. If H2s is SPR, the adaptive system is stable. In the following, we discuss the choice of sampling frequency to ensure SPR H2s. For d = 1,

H2s(q⁻¹) = Hq^{d} = qH(q⁻¹) = (q − 1)Z{G(s)/s}

where H(q⁻¹) is the discrete transfer function of the continuous plant G(s). Since

phase(q − 1)|_{q=e^{jwT}} = 90° + wT/2

we have

phase H2s(e^{jwT}) = 90° + wT/2 + phase Σ_{n=−∞}^{+∞} G(jw + jnws)/(jw + jnws) = wT/2 + phase Σ_{n=−∞}^{+∞} G(jw + jnws)/(w + nws)

Thus H2s(q⁻¹) is SPR if

90° > wT/2 + phase Σ_{n=−∞}^{+∞} G(jw + jnws)/(w + nws) > −90°   (4.90)

Considering that most practical plants have phase lag, we need only discuss the case:

wT/2 + phase Σ_{n=−∞}^{+∞} G(jw + jnws)/(w + nws) > −90°   (4.91)

For a low-pass plant, we can neglect the effect of aliasing of the high frequency gain (or use an anti-aliasing filter as in [39]). Thus the problem is simplified to finding ws such that:

phase G(jw) > −90° − wT/2, for all w ∈ [0, ws/2]   (4.92)

where the sampling interval is T = 2π/ws.

4.3.3 Approximate Way of Choosing Sampling

Figure 4.15: Choice of sampling according to the phase of the continuous plant

Fig. 4.15 shows the phase of a continuous plant vs.
frequency. When choosing ws/2 = w1, the straight line l1 represents the phase −90° − wT/2, w ∈ [0, ws/2]. Since the phase plot is above the line l1, (4.92) is satisfied in this case. However, as ws/2 is increased to w3, (4.92) is no longer satisfied. The procedure for choosing the critical sampling according to the known phase response of the continuous plant is as follows:

1. Plot the phase-frequency relation of the continuous plant G(s), where the x-axis is the frequency in rad/sec and the y-axis is the phase of G(s);

2. Draw a line starting from (w = 0, phase = −90°) which is tangent to the phase-frequency plot;

3. Find the intersection of this tangent line with the horizontal line phase = −180°; the frequency wc corresponding to the intersection is half of the critical frequency required.

Remark 1: The critical frequency found in this way is only an approximate solution in which the effect of high frequencies is neglected. The aliasing of high frequencies may make the actual critical sampling frequency lower; thus a safety margin is necessary if no anti-aliasing filter is used.

Remark 2: The critical sampling for SPR H2s is always slower than (or equal to) the critical sampling for a minimum phase discrete plant as defined in section 4.2. Thus, to ensure H2s SPR, the discrete plant has to be minimum phase. This has also been mentioned in [13] for Rohrs' plant, but no analysis was given there; here we can conclude that this is generally true.

Remark 3: In special cases, no tangent line can be found, or the phase at the tangent point is less than −180°; then we can simply draw a straight line between the starting point and the intersection of the −180° horizontal line with the phase plot. In those cases, the critical sampling frequencies for the non-minimum phase discrete plant and for SPR H2s are the same.
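The three-step tangent construction above can be carried out numerically; a sketch using bisection on the tangency condition, with the phase function of the Rohrs-type plant of Example 4.3 and a central-difference approximation of the derivative (bracket endpoints are arbitrary choices):

```python
import math

def phase(w):
    # phase of G(jw) for G(s) = 2/(s+1) * 229/(s^2 + 30 s + 229), in radians
    return -math.atan(w) - math.atan2(30 * w, 229 - w * w)

def dphase(w, h=1e-6):
    return (phase(w + h) - phase(w - h)) / (2 * h)

def tangent_point(lo=1.0, hi=14.0):
    """Find w* where the line from (0, -90 deg) is tangent to the phase plot,
    i.e. phase(w*) + pi/2 = w* * phase'(w*)."""
    g = lambda w: phase(w) + math.pi / 2 - w * dphase(w)
    assert g(lo) > 0 > g(hi)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

ws = tangent_point()
slope = -dphase(ws)                 # slope of the tangent line
w_half = (math.pi / 2) / slope      # intersection with the -180 deg line
Ts = math.pi / w_half               # approximate critical sampling period
assert 14.0 < w_half < 15.5         # the thesis reads 14.89 rad/sec off the plot
assert 0.20 < Ts < 0.23             # approximately 0.211 sec
```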
4.4 Simulation on the Effect of Sampling

We use a simulation example to show that adaptive control is made robust by slow sampling, and to show how to choose the critical sampling frequency for stable adaptive control.

Example 4.3 (Simulation example of STC) Consider Rohrs' plant with continuous transfer function:

G(s) = [2/(s + 1)]·[229/(s² + 30s + 229)]   (4.93)

1. Critical sampling for a minimum phase discrete plant

To find the critical sampling for a minimum phase discrete plant, we apply a relay in series with the continuous plant as shown in Fig. 4.10 and observe the oscillation in the closed-loop system. The oscillating output is plotted in Fig. 4.16; we measure 5 periods of the oscillation at steady state. Denoting the oscillation period by Tos, we have:

5Tos = 1.99, or Tos = 0.398

According to section 4.2, the critical sampling period is:

Tc = Tos/2 = 0.199 sec

Figure 4.16: Obtaining the critical sampling for a minimum phase discrete plant of Rohrs' example by using a relay

Fig. 4.17 shows the two zeros of the sampled plant under different sampling periods. Here the slow sampling effect and the value of the critical sampling frequency are verified.

2. Critical sampling for SPR H2s

Obtaining the phase-frequency relation of G(s) as in Fig. 4.18(a), we draw a tangent line from (φ = −90°, w = 0.0) to the phase plot. The intersection of this tangent line with the horizontal line φ = −180° is at w = 14.89 rad/sec (this procedure can be programmed). Thus the approximate critical sampling period is:

Ts = π/14.89 = 0.211 sec

Fig. 4.18(b) shows the Nyquist plots of H2s(q⁻¹) under several sampling periods (Ts = 0.17, 0.20,
0.22, 0.25 sec).

Figure 4.17: Zeros of the discrete plant of Rohrs' example under various sampling periods (zeros of the sampled plant vs. sampling period in seconds)

Note the corresponding straight lines connecting (ω = 0, φ = -90°) and (ω = ω_s/2, φ = -180°) in Fig. 4.18(a): if H_2s is SPR, the straight line does not intersect the phase plot.

Now we assume that the operator H_2(q^-1) uses the converged estimated parameters of a first-order estimation model,

G(q^-1) = bq^-1/(1 - aq^-1)

and the command frequency is 0, 1.0, 3.0, 8.0 and 16.1 rad/sec, respectively. The converged estimates under the different sampling periods and command frequencies are given in Table 4.3:

frequency (rad/sec)        0.0      1.0      3.0      8.0      16.1
T_s = 0.17   a:          0.876    0.877    1.035    22.12    -0.903
             b:          0.248    0.283    0.310    3.868    -0.010
T_s = 0.20   a:          0.859    0.857    1.041   -14.99    -1.007
             b:          0.282    0.329    0.362   -2.507    -0.0005
T_s = 0.22   a:          0.848    0.844    1.045   -7.574    -1.219
             b:          0.304    0.359    0.396   -1.191    -0.019
T_s = 0.25   a:          0.833    0.824    1.051   -4.741    -2.427
             b:          0.334    0.403    0.446   -0.653    -0.213

Table 4.3: Estimated a, b for the various cases

We plot the Nyquist curve of H_2(q^-1) under ω = 1.0 rad/sec for the different sampling periods in Fig. 4.19(a), and the Nyquist plot under T_s = 0.25 sec for the different command frequencies in Fig. 4.19(b). We notice that the SPR property is affected not only by the sampling but also by the command frequency. In general, slow sampling and a low command frequency result in an SPR H_2.

3. Self-tuning control under different sampling

We simulated the self-tuning control under T_s = 0.25, 0.22, 0.20, 0.17 sec. There is no output noise. The command frequency is 1.0 rad/sec. The simulated output is shown in Fig. 4.20, where we can see that for T_s = 0.2 and 0.17 sec the adaptive control results are unstable, whereas for the slower sampling T_s = 0.22 and 0.25 sec a bounded-input bounded-output relation is observed.

4.
STC with white noise and high frequency noise

We simulate the adaptive control algorithm under output noise and a high frequency command input. Figs. 4.21(a)-(d) show the simulation results under sampling T_s = 0.25 sec. The simulation conditions are as shown in Table 4.4. Here we restricted the adaptation gain in estimation to be low (0.2). The STC with a low adaptation gain needs a long time to converge.

command frequency          1.0 rad/sec.    8.0 rad/sec.
no noise                   Fig. 4.21(a)    Fig. 4.21(b)
output noise N(0, 0.1)     Fig. 4.21(c)    Fig. 4.21(d)

Table 4.4: Simulation conditions in Fig. 4.21

noise type:                   0.1 sin(16.1t)    0.1 N(0,1)
adaptation gain ε = 1.0       Fig. 4.22(a)      Fig. 4.22(b)
adaptation gain ε = 0.2       Fig. 4.22(c)      Fig. 4.22(d)

Table 4.5: Simulation conditions in Fig. 4.22

We can observe from the simulation that bounded input and bounded output are ensured. But in the case of a high command frequency, the output performance is not good enough. Also notice that although H_2 is not SPR when the command frequency is 8.0 rad/sec, the adaptive process is still stable. This is because the SPR condition is only sufficient.

5. Case of persistent excitation not satisfied

Under the command w_t = constant + noise, a self-tuning control simulation result is shown in Figs. 4.22(a)-(d) under the different conditions of Table 4.5. The sampling period is T_s = 0.25 sec. Thus, for slow enough sampling, the stability of the adaptive control system is not critically affected by the presence of a small noise or the lack of persistent excitation.

The various simulation examples in this section show that:

1. Slow sampling brings a robust adaptive control result. For the pole-zero cancellation control algorithm and recursive least squares estimation, the critical sampling for the stable adaptive algorithm discussed in this chapter is verified.

2.
Not only the sampling, but also the command frequency affects the SPR condition. However, the stability condition derived is only sufficient: a bounded-input bounded-output relation can still be obtained in some cases even if the SPR condition is not satisfied.

3. Adaptive control systems can still be stable under small noise and the lack of persistent excitation.

4.5 Conclusion

In this chapter, we discussed the effect of a slow sampling frequency on the zeros of sampled systems and on the SPR condition. It was shown that when the sampling frequency is sufficiently low, a minimum phase discrete plant and an SPR operator H_2 are ensured. Expressions for the critical sampling frequencies were derived, and practical ways of finding them were discussed. Note that a minimum phase discrete plant is also necessary for SPR H_2; thus the critical sampling frequency for SPR H_2 is usually lower than the critical sampling frequency for a minimum phase discrete plant. In the model match case, the STC algorithm using simple pole-zero cancellation is stable if the discrete plant is minimum phase. In the model mismatch case, such an STC algorithm is stable if H_2 is SPR.

Figure 4.18: (a). Finding the critical sampling for SPR H_2s (phase in degrees vs. frequency in rad/sec; curves 1-4 correspond to T_s = 0.25, 0.20, 0.22, 0.17 sec; the -180° line is marked). (b). Nyquist curve of H_2s under different sampling periods.

Figure 4.19: (a). Nyquist curve of H_2 under different sampling frequencies. (b).
Nyquist curve of H_2 under different command frequencies.

Figure 4.20: STC simulation under different sampling periods (output vs. time in seconds; T_s = 0.17 sec shown).

Figure 4.21: STC simulation under noise or high frequency command (panels (a)-(d); output vs. time in seconds).

Figure 4.22: STC simulation when persistent excitation is not satisfied (panels (a)-(d); output vs. time in seconds).

Chapter 5

Robust Adaptive Control: Slow Adaptation

5.1 Introduction

In Chapter 4, we discussed the effect of slow sampling on the stability condition of an adaptive control algorithm. As shown in Fig. 3.2, a general adaptive control structure has two loops, one concerned with estimation and one concerned with controller design. Conventional adaptive control systems update the estimated parameters of the controller at every sampling interval, i.e., the two loops have the same sampling rate. For the direct pole-placement adaptive control of example 3.2, we recall that the control law is given by:

w_{t+d} = R(q^-1)y_t + S(q^-1)u_t

where the coefficients of the polynomials R(q^-1), S(q^-1) are the estimated parameters θ̂_t in the estimation algorithm (3.24). Stability is obtained if H_2(q^-1) is SPR, where:

H_2(q^-1) = C^-1(q^-1)B(q^-1)
C(q^-1) = S*(q^-1)A(q^-1) + R*(q^-1)B(q^-1)q^-1

Here again, S*, R* denote the converged S, R in estimation.
SPR H_2(q^-1) implies that C(q^-1) has stable zeros, but not conversely. In case there are no unmodelled dynamics, it was shown in Chapter 4 that if all poles of C(q^-1) are stable, then H_2(q^-1) = 1 and stability is ensured. However, in the model mismatch case, H_2(q^-1) can be non-SPR even though C(q^-1) has stable zeros.

Figure 5.23: Estimated parameters (a and b) when simulating Rohrs' plant using RLS estimation, where the sampling is T_s = 0.2 sec and the command frequency is 1.0 rad/sec (parameters vs. time in seconds).

Fig. 5.23 shows a parameter estimation result for Rohrs' plant (4.93), where T_s = 0.2 sec, the command frequency is 1.0 rad/sec, and a first-order model is used. The simulation example in Chapter 4 showed that for this case H_2(q^-1) is not SPR although C(q^-1) has stable zeros, and the simple pole placement adaptive control result is unstable. Observing the estimation process, we find that the estimated parameters have a large initial overshoot. Let â, b̂ denote the estimated parameters. The closed-loop stability of C(q^-1) then simply depends on the gain margin of B(q^-1)/A(q^-1) relative to the loop gain â/b̂; at the initial stage, â/b̂ is very large and the gain margin is less than one (unstable). An explanation for the instability is that at higher sampling rates there exists a large model mismatch ([33], [34]); the input-output data in the transient process may excite high frequencies that mislead the estimation. Use of such bad estimates will continue to excite the high frequency band, thus very likely resulting in an unstable system. However, we can expect that adaptation using only steady-state estimates brings more robustness. In this chapter, we will first discuss the effect of slow adaptation.
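The SPR property itself can be checked numerically by sampling the frequency response: H(q^-1) is SPR when Re H(e^{-jωT}) > 0 over [0, ω_s/2]. A small sketch with hypothetical first-order examples (not taken from the thesis):

```python
import numpy as np

def is_spr(num, den, n=2000):
    """Check Re H(e^{-j theta}) > 0 for H(q^-1) = num(q^-1)/den(q^-1), theta in [0, pi]."""
    qinv = np.exp(-1j * np.linspace(0.0, np.pi, n))   # q^-1 on the unit circle
    H = np.polyval(num[::-1], qinv) / np.polyval(den[::-1], qinv)
    return np.min(H.real) > 0.0

# H(q^-1) = 1/(1 - 0.5 q^-1) is SPR; (1 - 1.5 q^-1)/(1 - 0.5 q^-1) is not
# (its real part is already negative at theta = 0).
print(is_spr([1.0], [1.0, -0.5]), is_spr([1.0, -1.5], [1.0, -0.5]))  # True False
```

The coefficient lists are given in increasing powers of q^-1, so they are reversed before evaluation with `np.polyval`.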
The main idea is to avoid using bad estimation data. It is shown that the SPR H_2(q^-1) condition can be relaxed if converged estimates, obtained under a persistently exciting condition, are used at every adaptation. Then the only requirement for stability is that the polynomial C(q^-1) has stable zeros. We will develop a method to check this stability condition. Furthermore, we will also discuss how to design filters to improve robustness. Some simulation examples will also be presented.

5.2 Slow Adaptation Algorithm

5.2.1 Fixed Slow Adaptation Rate

In adaptive control, it is not necessary to adapt the controller at the same fast rate as the sampling rate of the plant. From Fig. 3.2, we can see that the normal adaptation rule (T_a = T), i.e.:

S(q^-1) = Ŝ(q^-1), R(q^-1) = R̂(q^-1)   (5.94)

at every sampling interval introduces an extra closed loop into the adaptive system. Here S, R represent the controller parameters and Ŝ, R̂ the estimated parameters. For slow adaptation, we can set the adaptation rate T_a much slower than the sampling rate T. For example, set T_a = NT, where N is a large integer; then the adaptation (5.94) is performed only once every NT period. It is assumed that N is large enough for the process to reach steady state. We call this the fixed slow adaptation rate.

Algorithm for Fixed Slow Adaptation

Suppose the command w_{k+1} is known at t = kT, and the latest adaptation occurred at t_1 = k_1 T (k_1 < k):

1. Obtain y_k, y_{k-1}, ..., u_{k-1}, ...;
2. Recursively calculate θ̂_k, i.e. R̂(q^-1), Ŝ(q^-1), using the estimation algorithm;
3. If k - k_1 ≠ N, go to step 5;
4. If k - k_1 = N, perform the adaptation (5.94) and reset k_1;
5. Perform the controller calculation and go back to step 1 for the next sampling interval.

At steady state, H_2 operates as H_2(e^{-jω_1 T}), where ω_1 is the dominant frequency.
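The fixed-rate algorithm above can be sketched as follows for a hypothetical first-order plant with RLS estimation and a simple pole-zero cancellation controller (all plant values, gains, and horizons are illustrative, not from the thesis):

```python
import numpy as np

# Hypothetical first-order plant y_t = a*y_{t-1} + b*u_{t-1}
a_true, b_true = 0.9, 0.5
N = 25                           # adaptation period: Ta = N*T

theta = np.array([0.0, 0.1])     # RLS estimates [a_hat, b_hat]
P = 100.0 * np.eye(2)            # RLS covariance
a_c, b_c = 0.0, 0.1              # controller parameters, updated only every N samples

y, u = np.zeros(400), np.zeros(400)
for k in range(1, 399):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1]
    # estimation loop: RLS update at every sampling interval (steps 1-2)
    phi = np.array([y[k - 1], u[k - 1]])
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + K * (y[k] - phi @ theta)
    P = P - np.outer(K, phi @ P)
    # controller loop: fixed slow adaptation, rule (5.94), every N samples (step 4)
    if k % N == 0:
        a_c, b_c = theta
    # simple pole-zero cancellation control toward the command w_{k+1} (step 5)
    w_next = np.sin(0.2 * (k + 1))
    u[k] = (w_next - a_c * y[k]) / b_c

print(theta)    # close to the true [0.9, 0.5]
```

Between adaptations the controller runs on frozen parameters while the estimator keeps refining θ̂; the transient estimates never reach the loop.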
Let H(q^-1) be the transfer function of the plant and Ĥ(q^-1) the estimation model; the estimation then results in:

Ĥ(e^{jω_1 T}) = H(e^{jω_1 T})   (5.95)

i.e.,

Ŝ(q^-1)/(1 - R̂(q^-1)q^-1) = B(q^-1)/A(q^-1)   at q^-1 = e^{-jω_1 T},

and hence

B(q^-1) = A(q^-1)S*(q^-1) + B(q^-1)R*(q^-1)q^-1 = C(q^-1)   at q^-1 = e^{-jω_1 T}

as Ŝ, R̂ converge to S*, R*. Thus, between two adaptations, under the conditions that Ŝ, R̂ converge to S*, R* and the new closed-loop characteristic polynomial

A(q^-1)S_r(q^-1) + B(q^-1)R_r(q^-1)q^-1   (5.96)

has no unstable zeros, we have H_2(e^{-jω_1 T}) = 1, which is SPR. In equation (5.96), {S_r(q^-1)}, {R_r(q^-1)} is a subset of {Ŝ(q^-1)}, {R̂(q^-1)}: S_r(q^-1) = Ŝ(q^-1) and R_r(q^-1) = R̂(q^-1) only when adaptation occurs; otherwise S_r, R_r keep their previous values.

In [1], it is indicated that if the signals in the regressor vector φ are restricted to a class on which the non-SPR transfer function H_2 behaves, on average, as an SPR transfer function, then the SPR condition over the whole frequency band can be relaxed. This is known as the average SPR criterion. Here we have shown that by using estimated results only in steady state, φ is restricted to signals of the same frequency as the command, on which H_2 = 1 is SPR. Though the SPR H_2 condition is relaxed, an additional condition is that (5.96) has all its zeros stable. Actually, SPR H_2 is the stricter condition: it implies that (5.96) has all stable zeros.

5.2.2 Unfixed Slow Adaptation Rate

In some cases, we cannot ensure that the transient process is over within the time period NT; in other words, we do not know how large N should be for a certain plant to reach steady state under a certain sampling frequency. A criterion for checking steady state is then necessary. In this case, the check is carried out periodically, every NT. Only if the process is verified to be in steady state does adaptation follow. The adaptation algorithm is:
φ_t = θ̂_t if the process is in steady state and t = kNT, where N is a given integer; φ_t = φ_{t-1} otherwise.   (5.97)

When φ_t = θ̂_t, the adaptation is done and the current estimated parameters are used in the controller, since a direct algorithm is used. This case is called the unfixed slow adaptation rate, since adaptation may not take place every T_a.

Algorithm for Unfixed Slow Adaptation

Suppose the command w_{k+1} is known at t = kT, and the latest adaptation occurred at t_1 = k_1 T (k_1 < k):

1. Obtain y_k, y_{k-1}, ..., u_{k-1}, ...;
2. Recursively calculate θ̂_k, i.e. R̂(q^-1), Ŝ(q^-1), using the estimation algorithm;
3. If k - k_1 ≠ N, go to step 6;
4. If k - k_1 = N, reset k_1 and check whether the process is in steady state; if not, do nothing and go to step 6;
5. If the process is in steady state, perform the adaptation (5.94);
6. Perform the controller calculation and go back to step 1 for the next sampling interval.

The initial state φ_0 must be such that (5.96) has all stable zeros. This is not difficult to realize. For example, if we know that the plant has only stable poles (a common situation), we can simply use an open-loop structure in the initial stage:

φ_0 = [1, 0, ..., 0]^T

The theoretical discussion of stability is the same as in the fixed slow adaptation rate case, but here the assumption that the process reaches steady state within NT is not necessary.

5.2.3 Checking Criterion for the Steady State

In the unfixed slow adaptation rate case, the adaptation is only carried out if we measure steady-state input-output data of the plant. The measurement is made at a fixed rate, every NT period. It becomes the fixed adaptation rate if the process is found to be in steady state at every measurement. But it is not straightforward to tell whether a process is in steady state right away after measuring the input-output.
We have to record a certain period of input/output data and compare it with the given characteristic of the command. The principal criterion used here is that the steady-state output/input of the plant has the same frequency components as the command input, provided that the plant is linear and the noise is insignificant. One method is to use curve fitting of the output (and/or input) data according to the mode of the command input signal, as described in section 2.4. For example, if the command is w_k = A sin(ω_1 kT - φ_1), we can expect the steady-state output to have the form

y_k = Ĥ_1 sin(ω_1 kT - Â_1)

provided the closed-loop characteristic equation (5.96) has stable zeros. The error index to be minimized is:

J = Σ_{k=1}^{n} [y_k - Ĥ_1 sin(ω_1 kT - Â_1)]²   (5.98)

If J is small compared with the output data {y_k}, we can consider the process to be in steady state.

Figure 5.24: STC result for fixed rate slow adaptation (simulated input and output; estimated and controller parameters; time in seconds)

By using curve fitting, Â_1, Ĥ_1 can be obtained through (see section 2.4):

Σ_{k=1}^{n} y_k cos(ω_1 kT) = -tan Â_1 · Σ_{k=1}^{n} y_k sin(ω_1 kT)   (5.99)

and

Ĥ_1 = (2/n) Σ_{k=1}^{n} y_k sin(ω_1 kT - Â_1)   (5.100)

Equations (5.99), (5.100) are used to calculate Â_1 and Ĥ_1, respectively, from the known data y_k, ω_1, and the sampling interval T. The steady state is checked by calculating J in (5.98).

Example 5.1 (STC simulation using slow adaptation) Consider Rohrs' plant with the continuous transfer function (4.93) of example 4.3. We use slow adaptation, and the sampling period is allowed to be T_s = 0.15 sec.
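The steady-state check of (5.98)-(5.100) can be sketched on synthetic data; the signals and the decision thresholds below are illustrative assumptions:

```python
import numpy as np

def steady_state_index(y, w1, T):
    """Fit y_k ~ H1*sin(w1*k*T - A1) via (5.99)-(5.100); return (H1, A1, J)."""
    k = np.arange(1, len(y) + 1)
    s, c = np.sin(w1 * k * T), np.cos(w1 * k * T)
    A1 = np.arctan2(-np.dot(y, c), np.dot(y, s))             # from (5.99)
    H1 = 2.0 / len(y) * np.dot(y, np.sin(w1 * k * T - A1))   # (5.100)
    J = np.sum((y - H1 * np.sin(w1 * k * T - A1)) ** 2)      # (5.98)
    return H1, A1, J

T, w1 = 0.1, 1.0
k = np.arange(1, 1001)
y_ss = 2.0 * np.sin(w1 * k * T - 0.5)            # pure steady-state response
y_tr = y_ss + 5.0 * np.exp(-0.1 * k * T)         # same, plus a decaying transient

H1, A1, J_ss = steady_state_index(y_ss, w1, T)
_, _, J_tr = steady_state_index(y_tr, w1, T)
print(J_ss / len(y_ss), J_tr / len(y_tr))        # small at steady state, large in transient
```

Comparing J per sample against a threshold then decides whether adaptation is allowed at the current check instant.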
The command input has frequency 1.0 rad/sec, and there is a small output noise N(0, 0.03). A first-order estimation model and pole-placement control are used. With the normal adaptation rate, the adaptive control algorithm is unstable. But here T_a = 10T_s, i.e., the adaptation rate of the controller is much slower than the sampling rate of the plant. The simulation result is shown in Fig. 5.24, where only the first 30 seconds are plotted. Initially, the plant is in open loop. The transient estimation result is not used by the controller. The controlled output is stable and follows the command input well. Although the discrete plant at this sampling rate is non-minimum phase, since a low-order estimation model is used, no unstable zeros are cancelled in the real process. In Fig. 5.24, the estimated parameters and the controller parameters are also plotted, where parameter 1 refers to the estimate of a and parameter 2 refers to the estimate of b in the first-order estimation model y_t = a y_{t-1} + b u_{t-1}.

5.3 Frequency Domain Estimation

Example 5.2: For the continuous plant of Example 5.1, the discrete transfer function of the plant has the form:

H(q^-1) = (b_0 + b_1 q^-1 + b_2 q^-2)q^-1 / (1 - (a_1 q^-1 + a_2 q^-2 + a_3 q^-3))

We use different sampling rates and obtain the parameters a_1, a_2, a_3, b_0, b_1, b_2 of the discrete transfer function, the crossover gain G_c of the discrete transfer function, the converged estimates a*, b* of a first-order model as in Example 5.1, and the gain margin K_f of the

           T_s = 0.3 sec    T_s = 0.15 sec    T_s = 0.05 sec
a_1        0.7592           1.0621            1.8912
a_2        -0.01371         -0.1844           -1.1173
a_3        0.00009142       0.009561          0.2122
b_0        1.0422           0.6017            0.1312
b_1        0.6445           0.8435            0.3607
b_2        0.009721         0.05864           0.06042
G_c        0.1936           0.1599            0.09931
a*         0.7918           0.8909            0.9580
b*         0.4734           0.2513            0.0841
K_f        3.09             1.76              0.884

Table 5.6: Parameters of the discrete plant of Rohrs' plant under different sampling

Sampling          poles
T_s = 0.30 sec    0.1253 ± j0.5699, -0.0141    stable
T_s = 0.15 sec
0.3875 ± j0.7130, -0.0328    stable
T_s = 0.05 sec    0.8227 ± j0.5954, 0.1734    unstable

Table 5.7: Closed-loop poles of the adaptive system under different sampling

closed-loop using a*, b* as controller parameters, all listed in Table 5.6 (K_f = ((a*/b*)G_c)^-1).

The slow adaptation adaptive control algorithm is stable under the sampling intervals 0.3 sec and 0.15 sec, but unstable under 0.05 sec. Thus there is an improvement of robustness by using slow adaptation, but this improvement is limited, as (5.96) can still have unstable zeros if the model mismatch is too large (such as when using T_s = 0.05 sec). Table 5.7 lists the poles of C(q^-1) when the converged parameter estimates are used for Rohrs' plant. The command frequency is 1.0 rad/sec.

However, with information about the frequency response of the plant (in this example, the crossover gain), we can predict whether the characteristic equation (5.96) will have unstable zeros after adaptation. FFT-based frequency domain estimation can be used to obtain this frequency domain information. In steady state, assuming the noise is small enough, the energy of the input-output data is concentrated at a few specific points in the frequency band. Using the FFT directly on these input-output data will therefore not give a good estimate at the other frequencies. Here we suggest that a perturbation is necessary to induce a transient, so that the input-output data are rich over the whole frequency band. For example, at steady state, an intentional perturbation is applied, resulting in process transients. Denote the FFT of the input and output signals resulting from this perturbation by U(ω), Y(ω); then the estimated frequency response of the plant Ĥ(e^{-jωT}) can be approximated by:

Ĥ(e^{-jωT}) ≈ Y(ω)/U(ω)

where ω = 0, ω_s/N, 2ω_s/N, ..., (N - 1)ω_s/N, and N is the number of samples in the FFT. The frequency resolution of the estimated plant response depends directly on N. More details about frequency domain estimation can be found in [26] or in section 2.1.2. Usually, we set the perturbation close to an impulse signal so that the estimation error is small. It is also found that a perturbation with larger amplitude results in more accurate estimates of Ĥ(e^{-jωT}), although the process is more affected. If there is a small white noise, the accuracy is not affected too much. Once the frequency response of the plant is obtained, we can search for the gain margin (say K_f) and check the stability of (5.96) before adaptation. If (a/b)G_c = K_f^-1 < 1, the adapted system will remain stable. However, if the gain margin K_f is not large enough, there is no adaptation; the performance of the output is then not adjusted, but the system remains stable.
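A sketch of the FFT-based estimate Ĥ = Y(ω)/U(ω) and of the crossover-gain search, using a hypothetical second-order discrete plant excited by an impulse-like perturbation (the plant and the sample count are illustrative assumptions):

```python
import numpy as np

# Hypothetical discrete plant y_t = 1.2 y_{t-1} - 0.5 y_{t-2} + 0.5 u_{t-2}
N = 256
u = np.zeros(N); u[0] = 1.0                    # impulse-like perturbation
y = np.zeros(N)
for t in range(2, N):
    y[t] = 1.2 * y[t - 1] - 0.5 * y[t - 2] + 0.5 * u[t - 2]

H_est = np.fft.fft(y) / np.fft.fft(u)          # H(e^{-jwT}) ~ Y(w)/U(w)

# Compare against the exact response on the same frequency grid
th = 2.0 * np.pi * np.arange(N) / N
H_true = 0.5 * np.exp(-2j * th) / (1 - 1.2 * np.exp(-1j * th) + 0.5 * np.exp(-2j * th))
err = np.max(np.abs(H_est - H_true))           # tiny: only truncation error remains

# Crossover gain Gc = |H| at the first frequency where the phase reaches -180 deg
phase = np.unwrap(np.angle(H_est[: N // 2]))
idx = int(np.argmax(phase <= -np.pi))
Gc = np.abs(H_est[idx])
print(err, Gc)
```

With Gc in hand, the pre-adaptation stability check reduces to comparing the candidate loop gain â/b̂ against 1/Gc, as in the text.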
More details about the frequency domain estimation can be found in [26] or in section 2.1.2. Usually, we set the perturbation close to an impulse signal so that the estimation error is small. It is also found that a perturbation with larger amplitude results in more accurate estimates of H(e~3wT), although the process will be more affected. In case there is a small white noise, the accuracy will not be affected too much. Once the frequency response of the plant is obtained, we can search for the gain margin (say Kf) and check the stability for (5.96) before adaptation. If a/(bKf) < 1, the adapted system will remain stable. However, if the gain margin Kf is not large enough, there will be no adaptation and the performance of the output is not adjusted but still stable. Chapter 5. Robust Adaptive Control: Slow Adaptation 100 Example 5.3 In self-tuning control algorithm for the Rohrs' plant as in example 5.1, pole placement algorithm is used. The estimation model order is 1 and the sampling interval is T, — 0.05aec We use unfixed slow adaptation rate (checking rate N = 63, i.e. Ta — 3.2sec.) and check stability condition based on frequency domain estimation. There will be no adaptation if the inverse of the gain margin of the adapted system Kj1 is larger than 0.9 (larger than 1.0 is unstable). In this simulation, the new closed-loop gain margain using new estimates was always found unsatisfactory, thus no adaptation took place. The system is not unstable since the initial controller parameters was chosen such that the closed-loop system is stable. However, the system's performance is not as desired. The command and output are as in Fig. 5.25. 5.4 F i l t e r Des ign To improve the performance of adaptive systems under large model mismatch, a filter F(q~1) = j could be introduced as in Fig. 5.26. This filter has the property: F{e-'"T)\ I y < 1 to ^ The phase shift of F(e~jWlT) is desired to be 0. If ux — 0, i.e. the command input is a step, then F is a low pass filter. 
Otherwise, F is a band-pass filter. The robustness of such a structure is improved because the gain margin of the closed-loop system is increased around the crossover frequency. To see the effect of the filter, note that the closed-loop characteristic equation (5.96) becomes:

A(q^-1)S(q^-1) + B(q^-1)F(q^-1)R(q^-1)q^-1   (5.101)

Figure 5.25: Unfixed rate slow adaptation STC result when the sampling is very fast, T_s = 0.05 sec, for Rohrs' plant (simulated input and output; estimated and controller parameters; time in seconds)

or, equivalently,

1 + H(q^-1)F(q^-1)R(q^-1)q^-1/S(q^-1) = 1 + G_f(q^-1)

Since F(q^-1) can be chosen small enough as ω → ω_c, where ω_c is the crossover frequency in the Nyquist curve of G_f(q^-1), (5.101) can always be stabilized if the frequency response of H(q^-1) is known. Otherwise, we can only say that the smaller |F(e^{jωT})| is at high frequencies, the more robust the closed loop is.

Example 5.4 (Band-pass Filter) For the continuous band-pass filter

F(s) = (ω_1/Q)s / (s² + (ω_1/Q)s + ω_1²)

Figure 5.26: Closed-loop with filter

the impulse-equivalent discrete form is given by:

F(q^-1) = (b_0 + b_1 q^-1)/D(q^-1),  D(q^-1) = 1 + a_1 q^-1 + a_2 q^-2

where

b_0 = ω_1/Q
b_1 = -e^{-aT}(ω_1/Q)[cos(bT) + (a/b)sin(bT)]
a_1 = -2e^{-aT}cos(bT)
a_2 = e^{-2aT}

and

a = ω_1/(2Q),  b = √(ω_1² - a²)

Figure 5.27: Bode plot of the compensation filter in Example 5.4 (magnitude vs. frequency in rad/sec)

For the sampling period T_s = 0.05 sec and ω_1 = 1.0 rad/sec, the Bode plot of F(q^-1) is as plotted in Fig. 5.27, where only the range logspace(-1, 1) is shown. Q is the quality factor of the band-pass filter; in Fig. 5.27, Q = 5.0 is used. For fast sampling frequencies, a higher Q is expected. By proper choice of a filter as in example 5.4, the STC system can be made very robust.
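The impulse-equivalent coefficients above can be computed and sanity-checked numerically. A sketch using the Example 5.4 values T = 0.05 sec, ω_1 = 1.0 rad/sec, Q = 5 (the probe frequencies are illustrative):

```python
import numpy as np

def bandpass_coeffs(w1, Q, T):
    """Impulse-equivalent discretization of F(s) = (w1/Q)s / (s^2 + (w1/Q)s + w1^2)."""
    a = w1 / (2.0 * Q)
    b = np.sqrt(w1**2 - a**2)
    b0 = w1 / Q
    b1 = -np.exp(-a * T) * (w1 / Q) * (np.cos(b * T) + (a / b) * np.sin(b * T))
    a1 = -2.0 * np.exp(-a * T) * np.cos(b * T)
    a2 = np.exp(-2.0 * a * T)
    return (b0, b1), (1.0, a1, a2)

def gain(num, den, w, T):
    """|F(e^{-jwT})| for the discrete filter."""
    qinv = np.exp(-1j * w * T)
    return abs((num[0] + num[1] * qinv) / (den[0] + den[1] * qinv + den[2] * qinv**2))

num, den = bandpass_coeffs(1.0, 5.0, 0.05)
g_center = gain(num, den, 1.0, 0.05)    # peak near the command frequency w1
g_low = gain(num, den, 0.3, 0.05)
g_high = gain(num, den, 3.0, 0.05)
print(g_center, g_low, g_high)          # band-pass shape: g_center >> g_low, g_high
```

Note that, as with any impulse-invariant discretization, the discrete gain carries a 1/T scale factor relative to the continuous filter, so in practice the filter would be normalized to unit gain at ω_1 before insertion into the loop.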
But this only works well when the command frequency is known. If the frequency of the command changes, the filter has to be retuned accordingly. In summary, the flow diagram of the adaptive control algorithm is as in Fig. 5.28. Some simulation examples will be presented in the next section.

Figure 5.28: Robust adaptive controller design flow diagram (blocks: Start, Estimation, Controller Design, adaptation check, Filter Design)

Figure 5.29: Adaptive control result in example 5.5; the command is cos(t), sampling T_s = 0.05 sec (command and output vs. time in seconds)

5.5 Simulation Examples of Robust Adaptive Controller Design

Example 5.5 (Simulation of robust STC design) Consider Rohrs' plant (4.93) as in example 5.1. A first-order estimation model and a pole-placement controller are used. It has been shown that for this conventional adaptive pole-placement algorithm, the discrete adaptive control system is unstable for T_s ≤ 0.20 sec. For slow adaptation, example 5.2 shows that when T_s = 0.05 sec the fixed rate slow adaptation scheme is unstable. Here, we use a digital band-pass filter as in example 5.4. The filter parameters are self-adjusted according to the estimated gain margin: if the new gain margin of the adapted system with the filter is found inadequate, Q is increased until it is adequate.

Fig. 5.29 shows the self-tuning control simulation result. In the first 20 seconds, the system is in open loop. The command is a sine wave with amplitude 1 and frequency 1.0 rad/sec. At t = 20 sec, a probing signal p(t) is added to the command, p(t) = 2e^{-5(t-20)}, t > 20.
This probing signal disappears after two seconds (its value is less than 10^-4 after t = 22 sec). During that transient, input and output data are recorded for frequency domain estimation, using the discrete Fourier transform. In Fig. 5.30(a),(b), we plot the estimated phase and gain; the estimated crossover gain of the plant is G_c = 0.101, which is close to the real value in example 5.2. The estimated parameters and the controller parameters are shown in Fig. 5.30(c),(d), where the checking rate for the steady state is 6.3 sec (every 126 samples). The first adaptation occurs at t = 32.7 sec, and the estimated data â = 0.960, b̂ = 0.088 are used. Without the band-pass digital filter, the closed loop would be unstable. However, the program calculates a proper filter parameter Q according to the estimated â, b̂, G_c: starting from an initial value of Q, if the gain margin of the open loop, including the estimated G_c of the plant, the controller and the filter, is less than one (or the desired value), Q is increased until the gain margin is large enough. For more detail about the design, refer to the simulation program in Appendix C. In this example, Q = 10.32 is used, so that the crossover gain of the compensated system is 0.8.

The command input changes its amplitude at t = 76 sec, and then changes back every 50 sec. The adapted output follows the command changes quite well.

For the same continuous plant, when the command is a step level, a low pass filter can be used. Fig. 5.31 shows a simulated result under a step level command input. The control strategy is the same as above. In the simulation, there is a small additive
Figure 5.30: Estimation result in example 5.5 ((a) estimated gain and (b) estimated phase vs. frequency from 0 to ω_s/2; (c),(d) estimated parameters vs. time in seconds)

output white noise N(0, 0.05). The low pass filter F(q^-1) is parameterized by a single quality parameter Q; in this case Q = 159. The checking rate for the steady state is every 80 samples (4 sec). The first adaptation occurs at t = 30.5 sec, where the estimated parameters â = 0.968, b̂ = 0.066 are used for the controller parameters. There is a large transient process after the adaptation. The second adaptation is at t = 42.5 sec, with estimated parameters 0.964 and 0.075, respectively. After that, the adaptation occurs every 4 sec, until at t = 76.6 sec the step level is changed to 0.5 and a large transient follows. The adaptive control result is stable, and the transient process can be improved by choosing a higher value of Q in the compensation filter.

Example 5.6 (STC of a high order plant) Given an eighth-order continuous system, we use a first-order model to estimate it and use the estimation result to design a pole-zero cancellation controller. From the Bode phase plot of the plant, we find that the critical sampling period for a minimum phase discrete plant and the approximate critical sampling period for SPR H_2(q^-1) are the same, i.e., T_c = 7.58 sec. The conventional adaptive pole placement algorithm is unstable if the sampling period T_s is shorter (this was checked by simulating the case T_s = 6.5 sec). Here, in the simulation, we use T_s = 1.0 sec. The adaptive control output is shown in Fig. 5.32. In the first 300 seconds, we do not apply any control. At t = 300 sec, there is a probing signal, and the frequency domain estimation is done at this stage.
Figure 5.31: Adaptive control result in example 5.5; command is a step level, sampling T_s = 0.05 sec (command and output vs. time in seconds)

The first adaptation occurs at t = 479 sec. The checking for the steady state is done every 50 seconds. The estimation result is shown in Fig. 5.33. For robust adaptive control, a digital low-pass filter of the form in example 5.5 is used, with the parameter Q = 159, calculated by the adaptive algorithm for a chosen crossover gain of 0.8 (i.e., a desired gain margin of 1.25). For faster sampling, a higher Q is expected. The adaptive control system can be stabilized if the frequency domain estimation result is good enough.

Figure 5.32: Adaptive control for an eighth-order plant (example 5.6); command is a step level, sampling T_s = 1.0 sec (output vs. time in seconds)

5.6 Conclusion

In this chapter, we discussed robust adaptive control algorithm design using a slow adaptation rate. It has been shown that when using only converged estimates, the SPR condition on the operator H_2 is relaxed and the adaptive control algorithm is more robust. However, a slow adaptation rate is not the only condition for stability: the closed-loop equation (5.96) has to be stable. It was shown in Example 5.2 that too fast a sampling frequency results in an unstable closed-loop system, and the adaptive control algorithm is then unstable. In this case, a compensation filter has to be used, the design of which is based on the frequency domain information of the discrete plant. The adaptive control algorithm is stable given a proper design of the filter.
Figure 5.33: Estimation result in example 5.6. (a) Estimated parameter 1; (b) estimated parameter 2 (time in seconds); (c) estimated gain (frequency from 0 to Ws/2); (d) estimated Nyquist curve (real axis).

Chapter 6

Robust Adaptive Control: Using Predictive Control

The predictive control algorithm is simple to use and has been shown through applications to be very robust. It is based on predicting the output of the plant several steps ahead, given assumptions about the future control actions. In practice, predictive control has been shown to be superior to many existing algorithms ([12]), although no complete theoretical analysis is available so far. In this chapter, we will show some simple relations between the sampling interval and the predictive control horizon, and give a design guideline on how to choose them in practice.

6.1 Predictive Control Algorithm

6.1.1 A simple predictive control algorithm

We first describe a simple predictive control algorithm, also called the receding-horizon controller (see [40]). This algorithm is based on an assumed model of the process and on an assumed form for the future control signals. The pole-zero placement scheme of example 3.2 can be interpreted as the simplest case of this receding-horizon controller. Consider the discrete process:

A(q^-1) y_t = B(q^-1) q^-d u_t    (6.102)

where d is the delay of the plant and there are no unstable zeros in A(q^-1). The degrees of the polynomials are:
deg(A) = n
deg(B) = n - 1

Introduce the identity:

1 = A(q^-1) F(q^-1) + q^-(d+h-1) G(q^-1)    (6.103)

The degrees of F(q^-1) and G(q^-1) in (6.103) are:

deg(F) = d + h - 2
deg(G) = n - 1

Here h >= 1 is the extended horizon. Set d_h = d + h - 1 (d_h >= d) and apply (6.103) to y_t; we have:

y_t = F(q^-1) A(q^-1) y_t + G(q^-1) y_{t-d_h} = F(q^-1) B(q^-1) u_{t-d} + G(q^-1) y_{t-d_h}    (6.104)

Introduce:

B(q^-1) F(q^-1) = R1(q^-1) + q^-h R2(q^-1)    (6.105)

where

deg(R1) = h - 1
deg(R2) = n + d - 3

The coefficients of R1(q^-1) are the first h terms of the pulse response of the open-loop system. Indeed,

B(q^-1) q^-d / A(q^-1) = R1(q^-1) q^-d + R2(q^-1) q^-(d+h) + ...    (6.106)

The degrees of the last two terms are equal to or higher than d + h. Equation (6.104) can be rewritten as:

y_t = R1(q^-1) u_{t-d} + R2(q^-1) u_{t-d-h} + G(q^-1) y_{t-d_h}

or

y_{t+d_h} = R1(q^-1) u_{t+h-1} + R2(q^-1) u_{t-1} + G(q^-1) y_t    (6.107)

By choosing:

u_t = u_{t+1} = ... = u_{t+h-1}    (6.108)

the control law which brings y_{t+d_h} to a desired value w_{t+d_h} is:

u_t = [w_{t+d_h} - R2(q^-1) u_{t-1} - G(q^-1) y_t] / R1(1)    (6.109)

This control signal is then applied to the process. At the next sampling instant, a new measurement is obtained and the control law (6.109) is used again. The characteristic polynomial of the closed-loop system under the predictive control law (6.109) is:

C(q^-1) = A(q^-1)[R1(1) + q^-1 R2(q^-1)] + G(q^-1) B(q^-1) q^-d
        = A(q^-1) R1(1) + q^(h-1) [B(q^-1) - A(q^-1) R1(q^-1)]

where we have used the identity:

B = B[A F + G q^-(d+h-1)] = A(B F) + G B q^-(d+h-1) = A R1 + A R2 q^-h + G B q^-(d+h-1)

so that

B - A R1 = A R2 q^-h + G B q^-(d+h-1)

or

A R2 q^-1 + G B q^-d = q^(h-1) [B - A R1]

Since R1 is the first h terms of B(q^-1)/A(q^-1), if the discrete plant is stable, i.e., there are no unstable zeros in A(q^-1), then

lim_{h -> infinity} C(q^-1) = A(q^-1) R1(1)

This means that as h goes to infinity, the closed-loop system under predictive control is always stable.
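The identity (6.103) and the split (6.105) are mechanical to compute: F is the first d + h - 1 coefficients of the long division of 1/A, G is the sign-flipped shifted remainder, and R1 collects the first h coefficients of BF. A minimal sketch, in which the third-order A and B are arbitrary illustrative coefficients, not taken from the thesis:

```python
# Sketch of identity (6.103): 1 = A(q^-1)F(q^-1) + q^-(d+h-1) G(q^-1),
# and of the split (6.105): B(q^-1)F(q^-1) = R1(q^-1) + q^-h R2(q^-1).
# Polynomials are coefficient lists in ascending powers of q^-1.

def poly_mul(p, q):
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def diophantine(A, d, h):
    """F = first d+h-1 terms of the series 1/A; G = the shifted remainder."""
    nF = d + h - 1
    F = [1.0]
    for k in range(1, nF):           # long-division recursion for 1/A
        F.append(-sum(A[j] * F[k - j] for j in range(1, min(k, len(A) - 1) + 1)))
    AF = poly_mul(A, F)              # equals 1 up to order q^-(nF-1)
    G = [-c for c in AF[nF:]]        # hence 1 - A*F = q^-nF * G
    return F, G

A = [1.0, -0.9, 0.2, -0.05]          # example A(q^-1), n = 3
B = [0.5, 0.25, 0.1]                 # example B(q^-1), deg(B) = n - 1
d, h = 2, 2
F, G = diophantine(A, d, h)          # deg(F) = d+h-2 = 2, deg(G) = n-1 = 2
BF = poly_mul(B, F)
R1, R2 = BF[:h], BF[h:]              # R1(1) = sum(R1): control-law denominator
```

For h = 1 the same construction reduces to the pole-zero cancellation case mentioned in the text.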
Example 6.1 Let (6.102) represent a third-order plant, where:

A(q^-1) = 1 - a1 q^-1 - a2 q^-2 - a3 q^-3
B(q^-1) = b0 + b1 q^-1 + b2 q^-2

Here d = 2, n = 3. Suppose the horizon is h = 2. Then:

deg(F) = 2 + 2 - 2 = 2
deg(G) = 3 - 1 = 2

The Diophantine equation to be solved in this case is:

1 = (1 - a1 q^-1 - a2 q^-2 - a3 q^-3)(f0 + f1 q^-1 + f2 q^-2) + q^-3 (g0 + g1 q^-1 + g2 q^-2)

Equation (6.104) is expressed as:

y_t = (1 - a1 q^-1 - a2 q^-2 - a3 q^-3)(f0 + f1 q^-1 + f2 q^-2) y_t + (g0 + g1 q^-1 + g2 q^-2) y_{t-3}
    = (f0 + f1 q^-1 + f2 q^-2)(b0 + b1 q^-1 + b2 q^-2) u_{t-2} + (g0 + g1 q^-1 + g2 q^-2) y_{t-3}

Now, in (6.105) we have:

deg(R1) = 1
deg(R2) = 2

and

(f0 + f1 q^-1 + f2 q^-2)(b0 + b1 q^-1 + b2 q^-2) = r10 + r11 q^-1 + q^-2 (r20 + r21 q^-1 + r22 q^-2)

The predictive control law is:

u_t = [w_{t+3} - g0 y_t - g1 y_{t-1} - g2 y_{t-2} - r20 u_{t-1} - r21 u_{t-2} - r22 u_{t-3}] / (r10 + r11)

For the special case h = 1, the simple predictive control algorithm with extended horizon reduces to a pole-zero cancellation scheme.

6.1.2 Generalized Predictive Control

A more general version of predictive control is the generalized predictive control of [12]. Suppose the process can be described by a CARIMA model:

A(q^-1) y_t = B(q^-1) q^-d u_t + e_t / Δ    (6.110)

where Δ = 1 - q^-1 is the difference operator. Consider the loss function to be minimized:

J = E{ Σ_{k=N1..N2} (y_{t+k} - w_{t+k})^2 + ρ Σ_{k=1..Nu} (Δu_{t+k-1})^2 }    (6.111)

where N1 is the minimum costing horizon, N2 is the maximum costing horizon, Nu is the control horizon, and ρ is a control weighting. Introduce the identity:

1 = A(q^-1) F_l(q^-1) Δ + q^-l G_l(q^-1)    (6.112)

where the degrees of F_l and G_l are l - 1 and n - 1, respectively. For l = 1, 2, ..., N, we can obtain F_1, ..., F_N and G_1, ..., G_N, and for every l we have:

y_{t+l} = F_l B Δu_{t+l-d} + G_l y_t + F_l e_{t+l}    (6.113)

Introduce the identity, for l = 1, ..., N:

B F_l = R1^(l) + q^-(l-d+1) R2^(l)
(6.114)

where

deg(R1^(l)) = l - d
deg(R2^(l)) = n + d - 3

Then:

y(t+l) = R1^(l) Δu(t+l-d) + ȳ_l(t) + F_l e(t+l)    (6.115)

where

ȳ_l(t) = R2^(l) Δu(t-1) + G_l y_t

Note that the parameters of R1^(l), R2^(l) and G_l are obtained in the same way as R1, R2 and G in section 6.1.1, for horizon l. Considering l = N1, N1+1, ..., N2, we have:

y = R Δu + ȳ + e    (6.116)

where

y = [y(t+N1), ..., y(t+N2)]^T
Δu = [Δu(t), ..., Δu(t+N2-d)]^T
ȳ = [ȳ_{N1}(t), ..., ȳ_{N2}(t)]^T
e = [F_{N1} e(t+N1), ..., F_{N2} e(t+N2)]^T

and R is an (N2 - N1 + 1) x (N2 - d + 1) matrix:

R = | r_{N1-d}    ...  r_0  0    ...  0 |
    | r_{N1-d+1}  ...  r_1  r_0  ...  0 |
    | ...                               |
    | r_{N2-d}    ...               r_0 |

The parameters r_0, r_1, ... form the polynomial:

r_0 + r_1 q^-1 + ... = R1^(l)(q^-1)

whose coefficients are the first terms of the pulse response of (B(q^-1)/A(q^-1)) q^-d. The term R1^(l) Δu(t+l-d) contains the terms Δu(t+l-d), ..., Δu(t). Let us assume:

Δu(t+k-1) = 0 for k > Nu

i.e., the control signal is assumed to be constant after Nu steps. Then the matrix R in (6.116) becomes (N2 - N1 + 1) x Nu:

R = | r_{N1-d}    ...  r_0              |
    | r_{N1-d+1}  ...  r_1              |
    | ...                               |
    | r_{N2-d}    ...  r_{N2-d-Nu+1}    |

and the control signal is:

Δu_t = [1 0 ... 0] (R^T R + ρI)^-1 R^T (w - ȳ)    (6.117)

where w = [w_{t+N1}, ..., w_{t+N2}]^T is the vector of future set points. When choosing N1 = d, R is a lower-triangular matrix of size (N2 - d + 1) x (N2 - d + 1). In [12] and [9], the case N1 = 1 is discussed. Note that r_k = 0 if k < 0; thus, for the case N1 < d, the first d - N1 rows are zero.

Example 6.2 Consider the case N1 = N2 = d + h - 1, Nu = 1 and Δ = 1 (i.e., using a CARMA model). The generalized predictive control algorithm is then similar to the simple predictive control with extended horizon. Let us consider the case d = 2, h = 2, so N2 = N1 = 3.
We can rewrite (6.115) as:

y(t+3) = R1^(3) u(t+1) + ȳ_3(t) + F_3 e(t+3)

where

deg(R1^(3)) = 3 - 2 = 1

Suppose:

R1^(3)(q^-1) = r10 + r11 q^-1

Then:

y(t+3) = r10 u(t+1) + r11 u(t) + ȳ_3(t) + F_3 e(t+3)

Here

ȳ_3(t) = R2^(3) u(t-1) + G_3 y(t)

It is not difficult to show that:

ȳ_3(t) = g0 y(t) + g1 y(t-1) + g2 y(t-2) + r20 u(t-1) + r21 u(t-2) + r22 u(t-3)

where g_i and r_{2i} (i = 0, 1, 2) are derived from the same equations as in example 6.1. Thus the predictive control law with the assumption u(t) = u(t+1) is:

u_t = [w(t+3) - ȳ_3(t)] / (r10 + r11)

which is similar to the form obtained in example 6.1.

6.2 Robustness of Predictive Control

6.2.1 Illustration examples

Before discussing robustness, we show an example of the relation between the sampling interval, the extended horizon and stability.

Example 6.3 (Simulation on sampling, extended horizon and stability) Consider Rohrs' example, a third-order continuous plant:

G(s) = [2 / (s + 1)] [229 / (s^2 + 30s + 229)]

We simulate predictive control using different extended horizons and sampling intervals. The estimation model is first order. Let h denote the horizon; two self-tuning predictive control algorithms are considered.

Algorithm I (indirect): The estimation model is

y_t = a y_{t-1} + b u_{t-1}

where a and b are the parameters to be estimated. The predictive control law is:

u_t = [w_{t+h} - a^h y_t] / [b (1 + a + ... + a^{h-1})]

In this algorithm, there are only two parameters to be estimated.

Algorithm II (direct): The estimation model is

y_t = ā y_{t-h} + b1 u_{t-1} + b2 u_{t-2} + ... + bh u_{t-h}

There are h + 1 parameters to be estimated. The direct control law is:

u_t = [w_{t+h} - ā y_t] / (b1 + b2 + ... + bh)

Table 6.8 summarizes the simulated stability of the two self-tuning algorithms for the sampling periods Ts = 0.3, 0.2, 0.15, 0.10, 0.06 and 0.03 sec and the control horizons h = 1, 2, 4 and 8. In the table, • denotes that both algorithms I and II are stable, o denotes that algorithm I is unstable but algorithm II is stable, and × denotes that both algorithms are unstable.

From Table 6.8, we find that algorithm II is more robust than algorithm I. However, if we use a slow adaptation rate as in Chapter 5, algorithms I and II show the same robustness. The superior robustness of algorithm II may be explained as follows. With a first-order estimation model, there is always some estimation error. Assume, for instance, a 3% estimation error. If the first-order model y_t = a y_{t-1} + b u_{t-1} is used and the converged value is a = 0.99, then a 3% error in algorithm I puts a in the range 0.9603 ~ 1.0197. If h = 10, a^h then lies in the range 0.6669 ~ 1.2154. In algorithm II, however, the directly estimated ā = a^h lies in the range 0.8773 ~ 0.9315. Thus the sensitivity of algorithm II is smaller. When slow adaptation is used, the estimation error is small enough in either algorithm, and the robustness is thus improved.

predictive horizon h | critical sampling Tc
8 | 0.051 sec
4 | 0.105 sec
2 | 0.22 sec
1 | 0.45 sec

Table 6.9: Critical sampling for different horizons

Example 6.4 (Critical sampling for extended horizon) Consider the third-order continuous plant:

G(s) = 1 / [(s + 2)^2 (s + 10)]

We simulated predictive control using algorithm II; the estimation uses a first-order model. Table 6.9 gives the critical sampling period for various values of h. If the sampling period is shorter than the critical sampling period, the system was found to be unstable in simulations.
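The sensitivity comparison between algorithms I and II given above can be reproduced numerically:

```python
# Numeric check of the sensitivity argument: with converged a = 0.99 and a
# 3% estimation error, algorithm I raises the error to the power h = 10,
# while algorithm II estimates a^h directly as one parameter.

a, err, h = 0.99, 0.03, 10

# Algorithm I (indirect): the error on a is amplified by the power h.
lo_I, hi_I = (a * (1 - err)) ** h, (a * (1 + err)) ** h

# Algorithm II (direct): a^h carries the 3% error without amplification.
lo_II, hi_II = a ** h * (1 - err), a ** h * (1 + err)

print(round(lo_I, 4), round(hi_I, 4))    # 0.6669 1.2154
print(round(lo_II, 4), round(hi_II, 4))  # 0.8773 0.9315
```

These are the ranges quoted in the discussion: the direct parameterization shrinks the uncertainty interval on a^h from about 0.55 wide to about 0.054 wide.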
From Table 6.9, we can see that

h Tc ≈ constant    (6.118)

This result is very useful. The case h = 1 is just pole-zero cancellation, for which the critical sampling was discussed in Chapter 4. Given a faster sampling rate, we can then choose the extended horizon h such that the predictive self-tuning algorithm is stable.

6.2.2 Discussion

Simplest case: We now show that (6.118) holds in the simplest case. Consider the first-order continuous plant:

Gc(s) = K / (s + p)

Then the discrete description is:

Gd(q^-1) = (K/p)(1 - e^{-pT}) q^-1 / (1 - e^{-pT} q^-1)

where T is the sampling interval.

Case 1: Predictive horizon h = N, sampling Ts = T. Using the first-order model:

y_t = a y_{t-1} + b u_{t-1} = a^N y_{t-N} + b u_{t-1} + a b u_{t-2} + ... + a^{N-1} b u_{t-N}

where

a = e^{-pT}
b = (K/p)(1 - e^{-pT})

Assuming u_t = u_{t+1} = ... = u_{t+N-1}, the control law is:

u_t = [w_{t+N} - a^N y_t] / [b (1 + a + ... + a^{N-1})]    (6.119)

Define:

ā = a^N = e^{-pNT}
b̄ = b (1 + a + ... + a^{N-1}) = (K/p)(1 - e^{-pT})(1 + e^{-pT} + ... + e^{-p(N-1)T}) = (K/p)(1 - e^{-pNT})

Then (6.119) becomes:

u_t = [w(t + NT) - ā y_t] / b̄    (6.120)

Case 2: Predictive horizon h = 1, sampling Ts = NT. We have the estimation model:

y_t = ā y_{t-1} + b̄ u_{t-1}    (6.121)

where

ā = a^N = e^{-pNT}
b̄ = (K/p)(1 - e^{-pNT})

The control law is simply:

u_t = [w(t + NT) - ā y_t] / b̄

which is the same as in case 1. Notice that, for different sampling periods, w(t + NT) has different expressions: in case 1 the sampling period is T, so w_{t+N} = w(t + NT), whereas in case 2 the sampling period is NT, so w_{t+1} = w(t + NT). Thus we have shown that the control laws for case 1 and case 2 are equivalent.

Discussion for higher-order plants: For a higher-order plant, no complete proof is available yet. However, the following remarks are helpful for understanding (6.118).
Consider a continuous plant which has n poles and m zeros:

Gc(s) = k (s + z1) ... (s + zm) / [(s + p1)(s + p2) ... (s + pn)]    (6.122)

The discrete equivalent of (6.122) in ARMA model form is:

y_t = a1 y_{t-1} + a2 y_{t-2} + ... + an y_{t-n} + b1 u_{t-1} + b2 u_{t-2} + ... + bn u_{t-n}    (6.123)

When a first-order model (6.121) with predictive control extended horizon h is used, the form used for estimation is:

y_t = a^h y_{t-h} + b u_{t-1} + ... + a^{h-1} b u_{t-h}

1. For the model above, the static gain is independent of the extended horizon h:

D.C. gain = (b + a b + ... + a^{h-1} b) / (1 - a^h) = b / (1 - a)

2. For different horizons h, the forms of the controller are different, and so are the contributions of the controller to the open-loop gain. For h = 1, this gain is simply 1/b. For h > 1, it is

1 / [b (1 + a + ... + a^{h-1})]

When the sampling period Ts is small, a is close to one and b is close to zero, so the open-loop gain of the controller for h = 1 is very large. This can result in an unstable closed-loop system. As h increases, however, the open-loop gain decreases; thus a small Ts requires a large h for stability.

3. If h > 1, the estimation model is

(b q^-1 + a b q^-2 + ... + a^{h-1} b q^-h) / (1 - a^h q^-h) = b q^-1 / (1 - a q^-1)

Theoretically, there is a cancellation; in practice, the cancellation will usually not happen. The model under the control-horizon assumption is:

b (1 + a + ... + a^{h-1}) q^-1 / (1 - a^h q^-h)

4. If slow adaptation as discussed in Chapter 5 is used, and the closed loop with every newly updated set of estimated parameters is stable, then the adaptive system is stable. If the estimation model is first order and the horizon is h, the closed-loop transfer function is:

B(q^-1) / [A(q^-1) b (1 + a + ... + a^{h-1}) + B(q^-1) a^h q^-h]

It is obvious that, as h increases with |a| < 1, this approaches a (scaled) open-loop plant.

Design guidelines for choosing h:

• For the continuous plant, find the critical sampling period Tc for the stable self-tuning
pole-zero cancellation algorithm, using for instance the relay method of Chapter 4.

• For the desired sampling period Ts, where Ts < Tc, the extended horizon h has to satisfy:

h ≥ [Tc / Ts] + 1    (6.124)

where [·] denotes the integer part. In practice, for a stable plant, a longer h brings more robustness; however, the process response becomes slower. Several simulation examples are given in section 6.4.

6.3 Stability Analysis for a Simple Case

We now use the stability theory of Chapter 3 to study the stability condition for a simple case of predictive control. Let the discrete plant be:

A(q^-1) y_t = B(q^-1) u_{t-1}    (6.125)

where A(q^-1) and B(q^-1) can be of any order. If we use a first-order model for parameter estimation and the converged parameters are a and b, the first-order model is:

y_t = a y_{t-1} + b u_{t-1}    (6.126)
    = a^2 y_{t-2} + a b u_{t-2} + b u_{t-1}    (6.127)

This is an indirect algorithm (algorithm I in example 6.3). We use (6.126) as the estimation model; the controller design is based on (6.127), where y_t is to be driven to the desired output w_t. Based on (6.127) and the desired output, we have:

w_{t+2} = â_t^2 y_t + â_t b̂_t u_t + b̂_t u_{t+1}

where â_t and b̂_t are the estimated parameters at time t. We use an extended horizon; assuming u_t = u_{t+1}, the control law is:

w_{t+2} = â_t^2 y_t + (â_t b̂_t + b̂_t) u_t    (6.128)

Define:

v_t = â_t y_t + b̂_t u_t = φ_t^T θ_t
v_t* = a y_t + b u_t = φ_t^T θ*

where a and b are the converged estimation parameters, θ_t = [â_t, b̂_t]^T, θ* = [a, b]^T and φ_t = [y_t, u_t]^T. We also define φ̄_t = [v_t, u_t]^T. Thus:

w_{t+2} = â_t [â_t y_t + b̂_t u_t] + b̂_t u_t = â_t v_t + b̂_t u_t = φ̄_t^T θ_t

Let:

C(q^-1) = (ab + b) A(q^-1) + a^2 B(q^-1) q^-1    (6.129)

Then:

C(q^-1) y_t = (ab + b) B(q^-1) u_{t-1} + a^2 B(q^-1) y_{t-1}
            = B(q^-1) [a (a y_{t-1} + b u_{t-1}) + b u_{t-1}]
            = B(q^-1) [a v*_{t-1} + b u_{t-1}]
            = B(q^-1) [a v_{t-1} + b u_{t-1} + a (v*_{t-1} - v_{t-1})]

For:

v*_{t-1} - v_{t-1} = -(θ_{t-1} - θ*)^T φ_{t-1} = -θ̃_{t-1}^T φ_{t-1}

where θ̃_t = θ_t - θ* and φ_t = [y_t, u_t]^T, it follows that

y_t = -C^-1(q^-1) B(q^-1) [θ̃_{t-1}^T φ̄_{t-1} + a θ̃_{t-1}^T φ_{t-1} - w_{t+1}]
Let:

φ̃_{t-1} = φ̄_{t-1} + a φ_{t-1}

Then we have the error model:

y_t = -C^-1(q^-1) B(q^-1) θ̃_{t-1}^T φ̃_{t-1} + C^-1(q^-1) B(q^-1) w_{t+1}    (6.130)

Observe the error expression:

ē_t = (1 + â_{t-1} - â_{t-1} q^-1) y_t - w_{t+1}
    = [y_t - â_{t-1} y_{t-1} - b̂_{t-1} u_{t-1}] + â_{t-1} [y_t - v_{t-1}]
    = (1 + â_{t-1}) [y_t - v_{t-1}]
    = (1 + â_{t-1}) e_t

where e_t is the error in the estimation:

e_t = y_t - (â_{t-1} y_{t-1} + b̂_{t-1} u_{t-1}) = ē_t / (1 + â_{t-1})    (6.131)

The error model is shown in Fig. 6.34. Following the same stability analysis procedure as in Chapter 3, if the term 1 + â_{t-1} is always positive, i.e. â_{t-1} > -1, then the stability condition for the predictive adaptive control system is that H2 be SPR. Note that the expression for H2 here is different, as we now proceed to show. Let:

C_r(q^-1) = 1 + â_{t-1} - â_{t-1} q^-1

which is a time-varying operator. Define the error as:

ē_t = C_r(q^-1) y_t - w_{t+1} = -C_r C^-1 B θ̃_{t-1}^T φ̃_{t-1} + (C_r C^-1 B - 1) w_{t+1}

Figure 6.34: Error model for predictive adaptive control (blocks H1 and H2 and the gain 1/(1 + â_{t-1})).

Thus:

H2(q^-1) = C_r(q^-1) C^-1(q^-1) B(q^-1)    (6.132)

or, more specifically,

H2 = C_r(q^-1) B(q^-1) / [(ab + b) A(q^-1) + a^2 B(q^-1) q^-1]

The condition for a stable adaptive algorithm is that H2 be SPR, i.e., that the phase of H2(q^-1) lie within [-90°, 90°]. The phase of C_r(q^-1) is:

phase C_r(e^{-jwT}) = tan^-1 [ a sin(wT) / (1 + a - a cos(wT)) ]

which lies in [0°, 90°] for positive a, where w ∈ [0, π/T]. Usually, the phase of B(q^-1)/A(q^-1) is less than zero; thus, when the effect of C_r is considered, it adds a positive amount to the negative phase of B(q^-1)/A(q^-1) and so relaxes the requirement on that phase. An example is given to illustrate this argument.

Example 6.5 Consider the continuous plant:

G(s) = [2 / (s + 1)] [229 / (s^2 + 30s + 229)]

We consider the case of sampling period Ts = 0.2 sec; the command input frequency is 1.0 rad/sec.
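The positive phase added by C_r can be evaluated directly; a small check using the converged estimate a = 0.9104 reported for this example:

```python
# Phase of Cr(q^-1) = 1 + a - a q^-1 on the unit circle, evaluated for the
# converged estimate a = 0.9104 and Ts = 0.2 sec of example 6.5.  The phase
# stays in [0, 90) degrees over w in [0, pi/T], as claimed in the text.
from math import atan2, sin, cos, pi, degrees

a, T = 0.9104, 0.2

def cr_phase_deg(w):
    """Phase of Cr(e^{-jwT}) in degrees."""
    re = 1 + a - a * cos(w * T)
    im = a * sin(w * T)
    return degrees(atan2(im, re))

phases = [cr_phase_deg(k * (pi / T) / 200.0) for k in range(201)]
print(min(phases) >= 0.0, max(phases) < 90.0)  # True True
```

For this a, the maximum added phase is arcsin(a / (1 + a)), a little under 30 degrees, which is the amount by which the SPR requirement on the phase of B/A is relaxed.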
We compare the phase of H2 for two different adaptive control algorithms. Fig. 6.35(a) is the phase of H2 when only pole-zero cancellation is used. As in the simplified case, we just need to check whether the phase of B(q^-1)/A(q^-1) is within (-90°, 90°); here B(q^-1)/A(q^-1) is the discrete plant without the delay due to the sampling and zero-order hold. For stability, the phase of B(q^-1)/A(q^-1) is restricted to the shaded area in Fig. 6.35(b). From the figure, we see that this stability condition is not satisfied. Fig. 6.35(c) is the phase plot of H2 when predictive adaptive control is used. Here the estimation gives the converged estimate a = 0.9104. The phase of C_r is positive and adds to the negative phase of B(q^-1)/A(q^-1); SPR H2 is satisfied in this case. Fig. 6.35(d) shows the region of the phase of B(q^-1)/A(q^-1) required for SPR H2. When predictive control is used, the admissible region is enlarged; thus the predictive control algorithm is more robust.

Figure 6.35: Phase plots of H2 for different algorithms.
• Fig. 6.35(a): Phase plot of H2 for conventional pole-zero cancellation adaptive control, Rohrs' plant, Ts = 0.20 sec; the command input frequency is 1.0 rad/sec.
• Fig. 6.35(b): Required region for the phase of B(q^-1)/A(q^-1) when pole-zero cancellation is used.
• Fig. 6.35(c): Phase plot of H2 for predictive adaptive control with h = 2, Rohrs' plant, Ts = 0.20 sec; the command input frequency is 1.0 rad/sec.
• Fig. 6.35(d): Required region for the phase of B(q^-1)/A(q^-1) when predictive adaptive control is used.

6.4 Simulation Examples

Figure 6.36: Simulation results in example 6.6.
• Fig. 6.36(a): Simulated output and command for the case Ts = 0.3 sec, h = 1;
• Fig. 6.36(b): Estimated parameters for Fig. 6.36(a);
• Fig. 6.36(c): Simulated output and command for the case Ts = 0.075 sec, h = 4;
• Fig. 6.36(d): Estimated parameters for Fig. 6.36(c).

Example 6.6 (Verification of (6.118) for Rohrs' plant) Consider the plant used in example 6.3. We simulate predictive self-tuning control for the cases Ts = 0.3 sec, h = 1 and Ts = 0.075 sec, h = 4. The command input is a step level, and there is a small output noise N(0, 0.01). The simulation results are shown in Fig. 6.36. The estimation uses the first-order model (6.121) and only two parameters are
The estimation uses a first-order model, i.e., by using the predictive control algorithm I in example 6.3, only two parameters are to be estimated. The fixed slow adaptation rate is set as T 0 = 20.0sec, (20 sampling periods). There is a small output noise JV(0,0.01) added to the output of the process. The command input and output are as shown in Fig. 6.37(a), the estimated parameters d, b and controller parameters a l s &i are as shown in Fig. 6.37(b), Fig. 6.37(c) respectively. And ai = a 6i = 6(l + d + d 2 H t-d7) At the end of simulation, we obtain d = 0.919, b = 0.166. The controller parameters are the dashed lines in Fig. 6.37(b),(c) where ax = 0.508, bx = 1.003. When conventional slow adaptation is used, i.e., Ta = T, or the extended horizon is h = 6, we observed unstable output in both cases. Chapter 6. Robust Adaptive Control: Using Predictive Control 134 f o CCS CO c d < c d F i g . 6 . 3 7 ( a ) C o m m a n d a n d O u t p u t 0 . 0 0 1 0 0 . 8 0 0 . 3 0 0 . - t O O . 6 0 0 . 6 0 0 . T O O . 8 0 0 . .. 0 0 0 . l O O O . F i g . 6 . 3 7 ( b ) F i g . 6 . 3 7 ( c ) o.oo zoo. -too. eoo. eoo. 1 0 0 0 . 0 . 0 0 2 0 0 . 4 0 0 . 8 0 0 . 8 0 0 . 1 0 0 0 . T i m e i n S e c o n d s Figure 6.37: Simulation result in example 6.7 • Fig. 6.37(a): Simulated output (dashed line) and command (solid line); • Fig. 6.37(b): Estimated parameter d (solid line) and controller parameter ax (dashed line); • Fig. 6.37(c): Estimated parameter b (solid line) and controller parameter 61 (dashed line). Chapter 6. Robust Adaptive Control: Using Predictive Control 135 cd j-t GO CO ->-> & -4-> o -a CD <d - .—< o CO O -0.5 --1.0 -60. 60. 100. 
Figure 6.38: Oscillation obtained by using a relay for a time-varying plant.

Example 6.8 (STC of a time-varying plant) We consider a fifth-order time-varying continuous plant with transfer function:

G(s) = b_t / [(s + 1)^3 (s + a_t)^2]    (6.133)

where

a_t = 3.5 + A_a sin(w_a t)
b_t = 1.0 + A_b sin(w_b t)

In the simulation, we set A_a = 1.0, A_b = 0.3, w_a = 0.05 and w_b = 0.02. To control this time-varying system, we first use a relay in closed loop with the plant to obtain the critical sampling period. The result is shown in Fig. 6.38, from which the average half-period of the oscillation is around 3.0 sec. Notice that a time-varying system produces a time-varying oscillation period; thus the worst case should be considered if the critical sampling for a non-minimum-phase discrete plant is to be found.

For the frequency domain estimation, Fig. 6.39 shows the Nyquist curve estimated using a 1024-point FFT. The high-frequency band is well estimated; this is generally true if the rate of change of the time-varying parameters is very slow.

Now we simulate predictive control of the plant with extended horizon h = 10 and sampling period Ts = 1.0 sec. The estimation model is first order, and the adaptation rate is Ta = 5 sec. A small noise N(0, 0.01) is added to the output. The estimated and controller parameters are shown in Fig. 6.40, where the dashed lines denote the controller parameters. For this non-persistent-excitation case, we use the exponential forgetting and resetting algorithm (EFRA) (see [36]) in the parameter estimation. The estimation form is:

θ_{t+1} = θ_t + α P_t φ_t e_t / (1 + φ_t^T P_t φ_t)

P_{t+1} = (1/λ)[P_t - α P_t φ_t φ_t^T P_t / (1 + φ_t^T P_t φ_t)] + β I - δ P_t^2

where the forgetting factor is λ = 0.9 and the constants are α = 0.5, β = 0.005 and δ = 0.005. The command and system output are shown in Fig. 6.41. The controlled output is desirable, but there is a large overshoot when the command changes rapidly.
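A sketch of the EFRA update above, applied to the two-parameter first-order model; the data-generating system and its numbers are illustrative, not the plant of this example:

```python
# EFRA sketch for estimating y_t = a y_{t-1} + b u_{t-1}, with the thesis
# constants lambda = 0.9, alpha = 0.5, beta = 0.005, delta = 0.005.  The
# synthetic first-order data (a = 0.9, b = 0.2) are illustrative only.
import numpy as np

def efra_step(theta, P, phi, y, lam=0.9, alpha=0.5, beta=0.005, delta=0.005):
    """One EFRA update: RLS with forgetting plus the beta*I - delta*P^2 terms."""
    e = y - phi @ theta                      # prediction error
    denom = 1.0 + phi @ P @ phi
    theta = theta + alpha * (P @ phi) * e / denom
    P = (P - alpha * np.outer(P @ phi, phi @ P) / denom) / lam \
        + beta * np.eye(len(theta)) - delta * P @ P
    return theta, P

rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.2
theta, P = np.zeros(2), 10.0 * np.eye(2)
y_prev, u_prev = 0.0, 0.0
for _ in range(500):
    u = rng.standard_normal()                # persistently exciting input
    y = a_true * y_prev + b_true * u_prev
    theta, P = efra_step(theta, P, np.array([y_prev, u_prev]), y)
    y_prev, u_prev = y, u
print(np.round(theta, 3))                    # close to [0.9, 0.2]
```

The beta*I term keeps the gain from vanishing and the -delta*P^2 term keeps P bounded, which is what makes the algorithm usable without persistent excitation.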
For a conventional pole-placement adaptive control algorithm, the loop is unstable if only a first-order estimation model is used.

Figure 6.39: Frequency domain estimation result (Nyquist curve of the discrete plant).

6.5 Conclusion

In this chapter, we have shown that adaptive predictive control is in general more robust than adaptive pole-zero cancellation control. When the simple receding-horizon controller [40] is used, an interesting relationship exists between the critical sampling period for stable STC and the extended control horizon. This relation is verified for a simple case, and controller design guidelines are given. Here, the critical sampling result of Chapter 4 is useful in choosing a proper extended horizon. We have also applied the SPR stability condition to a simple adaptive control case and shown that, for a stable plant, predictive adaptive control is more robust when the extended horizon is higher. Several simulation examples have been presented.

Figure 6.40: Parameter estimation and controller parameters in example 6.8.

Figure 6.41: Simulation output of the predictive adaptive control algorithm applied to a time-varying plant, example 6.8.

Chapter 7

Conclusion

7.1 Summary of the work

In this thesis, we have discussed various discrete-time robust adaptive controller design methods.
The sampling frequency plays an important role in the robustness of discrete-time adaptive control. For most adaptive control algorithms, slow sampling results in more robustness. This fact is verified either theoretically or through simulations in [33], [34], [13], [43], etc., where different algorithms are used. Here we have derived sampling conditions for stable pole-zero cancellation discrete adaptive control algorithms, and ways to find critical sampling frequencies have been proposed. Further, we have also studied robust adaptive controller design methods using slow adaptation and predictive control. In these design methods, extra effort is necessary to compensate for the use of faster sampling. All results are summarized as follows:

1. A stability condition for the conventional adaptive pole-zero cancellation algorithm under model mismatch is derived, and an upper limit for the adaptation gain is given. However, this upper limit is very conservative in practice. (Refer to section 3.4.)

2. The critical sampling frequency for a minimum-phase discrete plant can be found by using relay control. Here we suppose the continuous plant is rational, stable and inversely stable. Any sampling frequency lower than the critical sampling frequency will result in a minimum-phase discrete plant. Keeping the discrete plant minimum phase is essential for stable pole-zero placement adaptive control in the model-matched case. It can be shown that if there are no unstable poles or zeros in the discrete plant, conventional pole-zero placement adaptive control algorithms are stable. (Refer to section 4.2.)

3. For the model-mismatch case, however, a minimum-phase discrete plant is only a necessary condition. We have also studied the relationships between the model error and the sampling frequency, and between the model error and the SPR condition. An approximate way to find the critical sampling for a simplified stability condition is proposed.
Frequency domain information about the continuous plant is necessary for using this method. (Refer to section 4.3.)

4. The robustness of discrete-time adaptive control algorithms can be improved if the adaptation rate is much slower than the sampling rate. The slow adaptation rate should ensure that only good estimates are used for controller design. In that case, the SPR H2 condition can be relaxed. However, the robustness improvement is limited, as sampling too fast may still result in unstable output. This is because another condition for stability has to be satisfied: the closed loop resulting from the newly estimated parameters must be stable. (Refer to section 5.2.)

5. A robust adaptive control algorithm which always ensures stability is possible if we can obtain a good frequency domain estimate and design a proper filter. An example of how to use frequency domain estimation by Fourier transform, and how to design a low-pass or band-pass filter, is shown. If the frequency domain information is accurately known or estimated, the parameters of the filter can be self-adjusted by the control algorithm and stability can be ensured. (Refer to sections 5.3 and 5.4, figure 5.28, and the computer program in Appendix A.)

6. Predictive control is a frequently used adaptive control method. We studied the relationship between the extended control horizon and the sampling frequency for the critically stable adaptive control case. Since the pole-zero placement algorithm is a special case of the predictive control algorithm, applying this relation together with the known critical sampling for the conventional pole-placement adaptive control algorithm is useful, especially when a desired sampling rate is set. However, this relation has not been proved for general cases, although simulations show that it seems to hold. (Refer to section 6.2.)

In [39], it was written: "... adaptive control can be useful, but extreme care must be taken when implementing the controllers."
Here we have discussed the effect of slow sampling on the robustness of the conventional pole-placement algorithm, together with some methods to improve the robustness. Those methods are indeed useful when implementing adaptive controllers.

7.2 Future Research Directions

Continuing research on this topic can be categorized into theoretical issues and application issues.

For theoretical studies, current stability conditions are expressed in the time domain and they are usually very conservative. Is it possible to express the stability conditions in the frequency domain (as in a Bode plot)? The effect of sampling could then be interpreted as high frequency components aliasing into the low frequency part. Also, for predictive control algorithms, there are so far no good theoretical results on the stability conditions for general algorithms. For example, given an arbitrary extended horizon, how does one derive the stability condition? These questions remain to be answered.

On the application side, the robust adaptive control design guidelines in this thesis can be used in a knowledge-based commissioning and monitoring system. Hardware and software can be built for the robust adaptive controller using slow adaptation (chapter 5) or predictive control (chapter 6). Also, the principle of slow adaptation and slow sampling can be applied to most adaptive control algorithms.

7.3 Conclusion

It is the theme of this thesis that for a rational, stable and inversely stable continuous plant, the pole-placement adaptive controller design algorithm can always be made stable if the sampling frequency is slow enough.

Bibliography

[1] Anderson, B.D.O., R.R. Bitmead, C.R. Johnson, Jr., P.V. Kokotovic, R.L. Kosut, I.M.Y. Mareels, L. Praly, and B.D. Riedle, Stability of Adaptive Systems: Passivity and Averaging Analysis. Cambridge, MA: MIT Press, 1986.

[2] Astrom, K.J., "Analysis of Rohrs counterexample to adaptive control," Proc. 22nd IEEE Conf.
Decision Control, San Antonio, TX, Dec. 1983.

[3] Astrom, K.J., "Interactions between excitation and unmodelled dynamics in adaptive control," Proc. 23rd IEEE Conf. Decision Control, Las Vegas, NV, 1984, pp.1276-1281.

[4] Astrom, K.J., P. Eykhoff, "System identification - a survey," Automatica, Vol.7, pp.123-162, 1971.

[5] Astrom, K.J., P. Hagander and J. Sternby, "Zeros of sampled systems," Automatica, Vol.20, No.1, pp.31-38, 1984.

[6] Astrom, K.J., T. Hagglund, "Automatic tuning of simple regulators with specifications on phase and amplitude margins," Automatica, Vol.20, No.5, pp.645-651, 1984.

[7] Astrom, K.J., B. Wittenmark, "Self-tuning controllers based on pole-zero placement," IEE Proceedings, Vol.127, Pt.D, No.3, May 1980.

[8] Astrom, K.J., B. Wittenmark, Computer Controlled Systems, Prentice Hall Inc., Englewood Cliffs, N.J., 1984.

[9] Astrom, K.J., B. Wittenmark, Adaptive Control, Addison-Wesley Publishing Company, 1989.

[10] Belanger, P.R., G.A. Dumont, S. Gendron, "Practical experience with Kamyr digester level self-tuning control," Proc. 1984 American Control Conf., San Diego, CA, pp.48-53, 1984.

[11] Clarke, D.W., "Self-tuning control of nonminimum-phase systems," Automatica, Vol.20, No.5, pp.501-517, 1984.

[12] Clarke, D.W., C. Mohtadi, P.S. Tuffs, "Generalized predictive control - part I. The basic algorithm," Automatica, Vol.23, No.2, pp.137-148, 1987.

[13] Cluett, W.R., S.L. Shah, D.G. Fisher, "Robust design of adaptive control systems using conic sector theory," 2nd IFAC Workshop on Adaptive Systems, Lund, 1986.

[14] Cluett, W.R., S.L. Shah, D.G. Fisher, "Robustness analysis of discrete-time adaptive control systems using input-output stability theory: a tutorial," IEE Proceedings, Vol.135, Pt.D, No.2, March 1988, pp.133-141.

[15] Dumont, G.A., P.R. Belanger, "Self-tuning control of a titanium dioxide kiln," IEEE Trans. Automat. Contr., Vol. AC-23, pp.532-537, 1978.
[16] Egardt, B., Stability of Adaptive Controllers. New York: Springer-Verlag, 1979.

[17] Franklin, G.F., J.D. Powell, Digital Control of Dynamic Systems, Addison-Wesley Publishing Company, 1980.

[18] Fu, Y., G.A. Dumont, "Robust discrete STC design using intermittent adaptation," 1988 IFAC Workshop on Robust Adaptive Control, Newcastle, Australia, August 1988.

[19] Fu, Y., G.A. Dumont, "Choice of sampling to ensure minimum phase behaviour," IEEE Trans. Aut. Control, May 1989, pp.560-563.

[20] Fu, Y., G.A. Dumont, "A stability condition for STC with model order mismatch," accepted by the 28th IEEE Conference on Decision and Control, Tampa, Florida, December 13-15, 1989.

[21] Goodwin, G.C., P.J. Ramadge and P.E. Caines, "Discrete time multivariable adaptive control," IEEE Trans. Aut. Control, AC-25, 1980, pp.449-456.

[22] Goodwin, G.C., and K.S. Sin, Adaptive Filtering, Prediction and Control, Information and Systems Science Series. Englewood Cliffs, N.J.: Prentice-Hall, 1984.

[23] Gregory, P.C., Ed., Proc. Self Adaptive Flight Control Systems Symp. (WADC Tech. Rep. 59-49, Wright Air Development Center, Wright-Patterson Air Force Base, OH), 1959.

[24] Kallstrom, C.G., K.J. Astrom, N.E. Thorell, J. Eriksson, L. Sten, "Adaptive autopilots for tankers," Automatica, Vol.15, pp.241-254, 1979.

[25] Kosut, R.L., "Stability theory for adaptive systems: method of averaging and persistency of excitation," IEEE Transactions on Automatic Control, Vol. AC-32, No.1, pp.26-34, 1987.

[26] LaMaire, R.O., "Robust time and frequency domain estimation methods in adaptive control," Ph.D. thesis, M.I.T., Department of Electrical Engineering and Computer Science, May 1987.

[27] Leininger, G., S.P. Wang, "Self-tuning control of manipulator dynamics," presented at 1983 American Control Conf., San Francisco, CA, 1983.

[28] Ma, C.C.H.,
"Inherent robustness of discrete-time adaptive control systems," Control Theory and Advanced Technology, Sept. 1987, pp.197-216.

[29] Mishkin, E., E.J. Adkins, Adaptive Control Systems. New York, NY: McGraw-Hill, 1961.

[30] Ortega, R., L. Praly, I.D. Landau, "Robustness of discrete-time direct adaptive controllers," IEEE Trans. on Aut. Control, AC-30, Dec. 1985.

[31] Rohrs, C., "Adaptive control in the presence of unmodeled dynamics," Ph.D. dissertation, Dept. Elec. Eng. Comput. Sci., MIT, Aug. 1982.

[32] Rohrs, C., L. Valavani, M. Athans, G. Stein, "Robustness of continuous-time adaptive control algorithms in the presence of unmodeled dynamics," IEEE Trans. Aut. Control, AC-30, 1985, pp.881-889.

[33] Rohrs, C.E., G. Stein, K.J. Astrom, "Uncertainty in sampled systems," Proceedings American Control Conference, Boston, MA, June 1985.

[34] Rohrs, C.E., G. Stein, K.J. Astrom, "A practical robustness theory for adaptive control," Proceedings American Control Conference, Boston, MA, June 1985.

[35] Safonov, M.G., Stability Robustness of Multivariable Feedback Systems, Cambridge, MA: M.I.T. Press, 1980.

[36] Salgado, M.E., G.C. Goodwin, R.H. Middleton, "Modified least squares algorithm incorporating exponential resetting and forgetting," Int. J. Control, Vol.47, No.2, pp.477-491, 1988.

[37] Strejc, V., "Least squares parameter estimation," Automatica, Vol.16, pp.535-550, 1980.

[38] Wittenmark, B., K.J. Astrom, "Practical issues in the implementation of self-tuning control," Automatica, Vol.20, No.5, pp.595-605, 1984.

[39] Wittenmark, B., "On the role of filters in adaptive control," Technical Report EE8662, Dept. of Electrical and Computer Engineering, University of Newcastle, New South Wales, Australia, Dec. 1986.

[40] Ydstie, B.E., L.S. Kershenbaum, R.W.H. Sargent, "Theory and application of an extended horizon self-tuning controller," AIChE Journal, 31(11):1771, 1985.
[41] Ydstie, B.E., "Bifurcations and complex dynamics in adaptive control systems," Proceedings of the 25th CDC, Athens, Greece, December 1986, pp.2232-2236.

[42] Zames, G., "On the input-output stability of time-varying nonlinear feedback systems. Part I," IEEE Trans. Aut. Control, April 1966.

[43] Zervos, C.C., G.A. Dumont, "Deterministic adaptive control based on a Laguerre series representation," International Journal of Control, Vol.48, No.6, pp.2333-2359, 1988.

[44] Mathematics Manual (in Chinese), Peking, 1979.

Appendix A

List of Abbreviations

ARMA: Auto-Regressive and Moving-Average, defined in (3.18), p. 25
CARIMA: Controlled Auto-Regressive and Integrated Moving-Average
CARMA: Controlled Auto-Regressive and Moving-Average
D.C. gain: Direct Current gain, or zero frequency gain
DFT: Discrete Fourier Transform, defined in (2.6), p. 8
DTFT: Discrete-Time Fourier Transform, defined in (2.5), p. 8
EFRA: Exponential Forgetting and Resetting Algorithm, p. 134
FFT: Fast Fourier Transform
MP: Minimum Phase
MRAC: Model Reference Adaptive Controllers
MVC: Minimum Variance Control
NMP: Non-Minimum Phase
SPR: Strictly Positive Real
STC: Self-Tuning Control
STR: Self-Tuning Regulator

Appendix B

List of Terminology

Averaging: definition, p. 21; stability theory, pp. 22-23
Diophantine Equation: eq. (3.28), p. 32
Persistently exciting: definition, p. 29
Relation: definition, p. 14; conic relation, p. 16; positive relation, p. 17
Rohrs' plant: continuous plant, eq. (3.26), p. 30
Slow adaptation rate: fixed, p. 89; unfixed, p. 92

Appendix C

Simulation Program

1. The main program:

PROGRAM REAL PLANT SIMULATION
"The plant is: G(s) = [2/(s+1)] * [229/(s*s + 30*s + 229)]"
"Order of the estimator = 1."
"The direct (pole-zero placement) algorithm is used.
"(COMMAND) INPUT: AA*cos(wl*t)+DI*cos(w2*t)+NI*GAUSS(0,1) " OUTPUT: YY=Y+NO*GAUSS(0,1) +DO*cos(w3*t) "Subroutines used are: ASY,FILrSCAL,EST,FFT ************************************* REAL THETA (2) ,X(2) ,K(2) ,PEC (2, 2) , GAIN(1024), PHA (1024) REAL YT (2048) , UT(2048) , YF (2048) , UF (2048) ,H(2048) INTEGER NPARf I, J, M, NCON, COUN, NCOUf KM, IC, N, NN "NOTE: The maximum vaule of N is : 10 " INITIAL "SIMULATION CONDITION:" CONSTANT CINT=0.05, PRT=50.0, TF=199.5, TES=20.0 "PARAMETERS AND INITIAL VALUE FOR ESTIMATION" CONSTANT FORGET=1.0, NPAR=2, PERR=0.1 CONSTANT THETA(1) =0.0, THETA(2) =1.0 "INITIAL VALUES OF STATES" CONSTANT YM1=0.0, Y=0.0, YY=0.0, Z=0.0, E=0.0 CONSTANT UM2=0.0, UM1=0.0, U=0.0, RM1=0.0, R=0.0 CONSTANT W=0.0, ND=0.0 "INITIAL VALUE FOR CURVE FITTING AND FILTERING" CONSTANT VAR=0.05, TAl=1.0, TB1=1.0, KC=0.2, Q=2.0 "INITIAL VALUE FOR PARAMETERS IN FFT" CONSTANT FB=2.0, ALF=0.8, FW=-5.0, NN=7 152 "OTHER INITIAL VALUES" CONSTANT AA=1.0,DI=0.0,NI=0.0, DO=0. 0, NO=0.0 CONSTANT W1=0.0,W2=0.0,W3=0.0,C0F=1.0, TYPE=2.0,F=1.0 ALGORITHM IALG=5 MAXTERVAL MAXT=0.001 "ASSIGNED INITIAL VALUES" A1=THETA(1), Bl=THETA (2), NC0N=1, NCOU=Or rN=2*NN GAINY=0.0, GAINU=0.0, PHIY=0.0, PHIU=0.0 TS=TES+(FLOAT(N)-1.0) *CINT END $ "INITIAL" ************************************************ "INPUT TYPE,W1,W2,W3" READ (5, 20) TYPE, Wl, W2, W3 20. .FORMAT(4F15. 6) "INPUT AA,DIfNI,DO,NO" WRITE {6, 30) 30. . FORMAT (IX, 25HINPUT GAIN:AA,DI,NI,DO,NO) READ (5, 35) AA, DI, NI, DO, NO 35. .FORMAT(5F15. 6) IF(TYPE.NE.1.0) GO TO 60 "IN CASE OF NOADAPTIVE CONTROL, INPUT DESIGN PARAMETERS" WRITE (6, 40) 40..FORMAT(IX,21HINPUT ESTIMATED Al,Bl) READ(5,50)A1,B1 50. .FORMAT(2F12. 6) 60..CONTINUE WRITE (6, 64) 64..FORMAT(IX,1H ) "CALCULATE ACCURATE PARAMETERS" CALL CAL(CINT,Wl, TAl, TBI) WRITE (6, 65) TAl, TBI 65..FORMAT(IX,28HTHE CONVERGED PARAMETERS ARE,2F10.4) WRITE (6, 66) 66. 
.FORMAT(IX, 1H ) IF (Wl.EQ.O.O) GO TO 80 BEI=2.0*3.1415926/(CINT*W1) IF (BEI.GT.8.0) GO TO 80 WRITE (6, 70)BEI 70..FORMAT(IX, 13HWARNING: ONLY,F5.1,28H POINTS IN ONE SIGNAL PEROID) 80..CONTINUE 153 "CHOOSE ASYNCHRONOUS SAMPLING PERIOD COEFFICIENT" CALL PRINT3(1.0) READ(5,90) KM 90. . FORMAT (16) "CALCULATION ASYNCHRONOUS PERIOD" IF (Wl.EQ.O.O) GO TO 92 M=INT (FLOAT (KM) * (0.5+3.1416/ (CINT*W1) ) ) GO TO 99 "IN CASE ZERO FREQUENCY COMMAND, INPUT ASYNCHRONOUS SAMPLING PERIOD" 92. .WRITE (6, 94) 94..FORMAT(IX,38HSince wl=0.0, you name the value of N:) READ (5, 96) M 96. .FORMAT(16) n rr A=l.0/229.0, B=30.0/229.0 "INITIAL VALUS OF FILTER" KK=1.0, FA1=0.0, FA2=0.0, FB1=0.0, FB0=1.0 "INITIAL VECTOR VALUE IN ESTIMATION" DO 105 1=1,2, DO 105 J=l,2, PEC (I, J) =0.0 105. . CONTINUE DO 110 1=1,2, PEC (I, I) =1000.0, K(I)=0.0, X(I)=0.0 110..CONTINUE rr rr "LAST CHANCE TO CANCEL THE DESIGN" rr rr READ (5,120) IC 120. .FORMAT(16) IF(IC.NE.l) GO TO 10 WRITE (1,130) 130. .FORMAT(IX,3HB=[) WRITE (12,131) 131. . FORMAT (IX, 3HA=[) «*******************************************************************" J=0 "PROCESS AND ESTIMATION" "==================== " DYNAMIC "NOISE AND DISTURBANCE ADDED TO THE OUTPUT" YY=Y+NO*GAUSS(0.0,1.0)+DO*COS(W3*T) UM2=UM1, UM1=U X(2)=UM1, X(1)=YM1 154 NC0U=NC0U+1 IF(COUNM. 
NE.COUN) NCOV=0 COUNM=COUN "PARAMETER ESTIMATION" CALL SQRTFL (YY, X, PEC, K, THETA, NPAR, PERR, FORGET) ************************************************* YM1=YY, RM1=R "CORVER FITTING" IF(T.LT.TES) GO TO 302 IF(T.GT.TES) GO TO 302 IF(T.NE.TES) GO TO 301 NCOU=M GO TO 302 301..NCOU=0 302..CONTINUE CALL ASY1 (NCON, NCOU, M, VAR, YY, U, Wl, CINT, KC, GAINY, PHIY, GAINU, PHIU) IF(T.LE.TS) GO TO 310 IF(TYPE.EQ.1.0) GO TO 310 IF(NCON.NE.O) GO TO 310 ************ "ADAPTION" n it B1=THETA (2) A1=THETA(1) CALL FIL (FA1, FA2, FBI, FBO, KK, CINT, Q, Wl, F, ALF, GAIN, PHA, A1,B1, KC, N) ************ 310..CONTINUE "COMMAND INPUT" IF(T.LE.TS) GO TO 315 COUN=INT ( (T-TS) /PRT) IF(COUN.EQ.2.0*INT(COUN/2)) COF=1.0 IF(COUN.NE.2.0*INT(COUN/2)) COF=0.5 315..CONTINUE W=COF* (AA*COS (W1*T)) "NOISE AND DISTURBANCE ADDED TO THE COMMAND" W=W+DI*COS(W2*T)+NI*GAUSS(0.0,1.0) "SMALL PRETUBETION IS ADDED FOR FFT" . IF(T.LT.TES) GO TO 319 IF(T.GT.TS) GO TO 319 W=W+FB *EXP (FW* (T-TES) ) 319..CONTINUE "CONTROLLER" n II E=W-A1*YY R=E/B1 U=FA1*UM1-FA2*UM2+kk*FBO*R-kk*FB1*RM1 155 IF(T.LE.TS) U=R WRITE (1,320) T, W, YY, theta (1) , theta (2) ,A1,B1 320. . FORMAT (IX, 7F11. 5) 331. . CONTINUE n rr "obtaining steady state" IF(T.NE.TES) GO TO 335 AU=GAINU, AY=GAINY, PU=PHIU, PY=PHIY 335. .CONTINUE "DOING FFT IF REQUIRED" IF(T.LT.TES) GO TO 340 IF(T.GT.TS) GO TO 340 J=J+1 YT (2*J-1) =YY-AY*SIN (M*W1 *CINT-PY+W1 * (T-TES)) UT (2*J-1) =U-AU*SIN (M*ffl *CINT-PU+Wl * (T-TES+CINT)) FUT=AU*SIN(M*W1*CINT-PU+W1 *(T-TES) ) FYT=AY*SIN(M*W1*CINT-PY+W1*(T-TES) ) write (19,339)yt (2*j-l), ut (2*j-l),FYT, Y,FUT, UM1 339..format(lx,6fl2.5) 340..CONTINUE IF((1.5*abs(T-TS)).ge.cint) GO TO 390 "DOING ESTIMATION USING FFT" DO 342 1=1,N, YT(2*I)=0.0, UT(2*I)=0.0 342..CONTINUE DO 345 1=1, N, YF(I)=YT(I) , YF (I+N) =YT (I+N) UF (I) =UT (I) , UF (I+N) =UT (I+N) 345. 
.CONTINUE CALL FFT (YF,N, -1) CALL FFT(UF,N,-1) DO 348 1=1,N FA=UF (2*1-1) **2+UF(2*I) **2 H(2*I-1) = (YF (2*1-1) *UF (2*1-1) +YF(2*I) *UF (2*1)) /FA H(2*I) = (YF(2*I) *UF (2*1-1)-YF (2*1-1) *UF (2*1)) /FA 348. .CONTINUE DO 360 1=1,N GAIN (I) =SQRT (H (2*1-1) **2+H(2*I) **2) IF (H(2*1-1).EQ.O.O) GO TO 350 PHA (I) =ATAN (H (2*1) /H (2*1-1) ) IF(H(2*I-1) .LT.O.O) PHA(I) =PHA(I) -3.1415926 IF (PHA (I) .GT.O.O) PHA (I) =PHA (I)-6.283185 GO TO 355 350. .PHA (I) =-3.1415826/2.0 355..J=2*I-1 WRITE (12,356) I, GAIN (I) , PHA (I) ,H(J),H (2*1), YF (J), OF (J) 356. .FORMAT(IX, 16, 6F12.3) 360. .CONTINUE 156 "FIND KC" n rr PHI=2.0 DO 370 1=2,N IF ( (PHA (I) +3.1415926) * (PHA (1-1) +3.1415926) .LE.O.O) GO TO 380 370..CONTINUE 380..J-I-l CE1=ABS (PHA (J) +3.1415926) CE2=ABS (PHA (J+l) +3.1415926) CE=CE1+CE2, CE1=CE1/CE, CE2=CE2/CE KC=ABS (CE2*GAIN (J) /COS (PHA (J) ) +CE1 *GAIN (J+l) /COS (PHA (J+l) ) ) 390. .CONTINUE "PPROCESS OF THE PLANT" rr n DERIVATIVE Z=REALPL (1. 0r Uf 0. 0) Y=2. 0*CMPXPL(A,B,Z,0.0,0.0) END $ "DERIVATIVE" IT IT IF (ABS (Y) . GT. 9999. 9) TF=0. 02 TERMT(T.GT.TF) END $ "DYNAMIC" TERMINAL WRITE (1, 400) 400. .FORMAT(IX,2H];) WRITE (12, 401) 401.. FORMAT (IX, 2H];) END $ "OF TERMINAL" END $ "PROGRAM" 2. Subroutines in FORTRAN: (1). Subroutine to check steady state: SUBROUTINE ASY1 (NCON, NCOU, MAX, VAR, YY, UU, Wl, CINT, KC, GAIN, PHI, GAINU, PHIU) REAL VAR, YY, Y (500) , UU, U (500) , Wl, CINT, KC, Hi, TELTA1 DO 100,1=1,499, U (501-1) =U (500-1) 100 Y(501-1) =Y(500-1) U(1)=UU, Y(1)=YY IF(Wl.NE.O.O) GO TO 300 IF(NCOU.LT.MAX) GO TO 190 ARV=0.0 157 110 DO 110, J=1,MAX ARV=ARV+Y(J) AR V=AR V/FLO A T (MAX) SUM=0.0 DO 120, J=1,MAX 120 SUM=SUM+ (Y (J) -ARV) **2 SUM=SQRT (SUM/FLOAT (MAX)) IF(SVM.GE.abs(VAR*ARV)) GO TO 180 NCON=0, NCOU=0, GO TO 200 180 TYPE *,'The process is not in steady state!' 
WRITE (3,185) SUM, ARV 185 FORMAT(lX,6F12.4) NCOU=NCOU-INT(MAX/2) 190 NC0N=1 200 RETURN c 300 IF (NCOU.LT.MAX) GO TO 590 TWl=2*3.1415926535/Wl, PREAL=0.0, PIMAG=0.0 DO 310 1=1, MAX PREAL=PREAL+Y (MAX+1 -I) *COS (FLOAT (I) *W1 *CINT) PIMAG=PIMAG+Y (MAX+1 -I) *SIN (FLOAT (I) *Wl *CINT) 310 CONTINUE IF(PIMAG.EQ.O.O) GO TO 315 PHI=ATAN (-PREAL/PIMAG) GO TO 320 315 PHI=-3.1415926/2.0 320 CONTINUE C GAIN=0.0 DO 325 1=1, MAX GAIN=GAIN+Y (MAX+1 -I) *SIN (FLOAT (I) *W1*CINT-PHI) 325 CONTINUE GAIN=GAIN*2.0/FLOAT(MAX) IF(GAIN.GT.0.0) GO TO 330 GAIN=-GAIN, PHI=PHI+3.1415926 330 CONTINUE C PREALU=0.0, PIMAGU=0.0 DO 331 1=1, MAX PREALU=PREALU+ U (MAX+1 -I) *COS (FLOAT (I) *W1 *CINT) PIMAGU=PIMAGU+U (MAX+1 -I) *SIN (FLOAT (I) *W1 *CINT) 331 CONTINUE IF(PIMAGU.EQ.O.O) GO TO 332 PHIU=ATAN(-PREALU/PIMAGU) GO TO 333 332 PHIU=-3.1415926/2.0 333 CONTINUE 158 GAINU=0.0 DO 334 1=1,MAX GAINU=GAINU+U(MAX+1-I)*SIN(FLOAT(I) *W1*CINT-PHIU) 334 CONTINUE GAINU=GAINU*2.0/FLOAT (MAX) IF(GAINU.GT.O.O) GO TO 335 GAINU=-GAINU, PHIU=PHIU+3.1415926 335 CONTINUE HI=GAIN/GAINU TELTA1=PHI-PHIU IF(TELTAl.LT.O.O) TELTAl=TELTAl+6.28318 DEX=H1*SIN(Wl *CINT)/SIN(TELTA1-W1*CINT) C SUM=0.0 DO 350 1=1,MAX 350 SUM=SUM+ (Y (MAX+1 -I) -GAIN*SIN (FLOAT (I) *W1 *CINT-PHI)) **2 SUM=SQRT(SUM/FLOAT(MAX)) IF(SUM.GE.VAR*GAIN) GO TO 580 C IF(KC.GT.DEX) GO TO 570 400 NCON=0, NCOU=0 WRITE (2, *)SUM,H1, TELTA1, KC *DEX GO TO 600 570 CONTINUE TYPE *,'Basic stability equation does not fit1.' GO TO 588 580 CONTINUE TYPE *,/The process is not in steady state!' 588 WRITE(3,589)SUM,HI,TELTA1,KC*DEX,GAIN,PHI 589 FORMAT (IX, 6F12. 4) NCOU=NCOU-INT (MAX/2) 590 NC0N=1 600 RETURN END (2). Subroutine for f i l t e r design SUBROUTINE FIL (Al, A2,Bl,B0,KK, T,Q, Wl,FILTER, ALF, GAIN, PHA, TA, TB,KC,N) REAL GAIN (N) , PHA (N) ,PHI (2048) ,G (2048) REAL A1,A2,B1,B0, KK, T, Q, Wl, FILTER, ALF, KC IF(TA*KC/(TB).LE.1.0) GO TO 900 10 IF (Wl.EQ.O.O) GO TO 100 IF (FILTER.NE.1.0) GO TO 100 C CALCULATION OF BY-PASS FILTER C KK=T*W1/Q, A=-Wl/(2. 
0*Q) , B=0. 5*W1*SQRT (4. 0-1. 0/ (Q**2)) A1=2.0*EXP (A*T) *COS (B*T) , A2=EXP (2.0*A*T) B1=KK*EXP (A*T) * (COS (B*T) -A*SIN (B*T) /B) , BO=KK 159 ZR=(BO-B1*COS (W1*T)) , ZI=B1*SIN (W1*T) PR=1. O-Al *COS (Wl *T) +A2*C0S (2. 0*Wl *T) PI=A1*SIN(W1*T)-A2*SIN(2.0*W1*T) KK=SQRT(PR**2+PI**2)/SQRT(ZR**2+ZI**2) GO TO 102 c 100 CONTINUE C CALCULATION OF LOW PASS FILTER c A2=0.0, Bl=0.0f Al=1.0-1.0/Q, B0=1.0/Q, KK=1.0 102 CONTINUE WS=3.1415926/T DO 120 1=2,N W=FL0AT(I-1) *WS/FLOAT (N) ZR=kk*(B0-B1*COS(W*T)), ZI=kk*(Bl*SIN(W*T)) PR=1.O-Al*COS(W*T)+A2*C0S(2.0*W*T) PI=A1*SIN(W*T) -A2*SIN(2.0*W*T) G (I) = (GAIN (I) /SQRT (PR**2+PI**2) )*SQRT (ZR**2+ZI**2) PH1=ATAN(ZI/ZR) IF(ZR.LT.O.O) PHl=PHl-3.1415926 PH2=ATAN (PI/PR) IF (PR.LT.0.0) PH2=PH2+3.1415926 PHI (I) =PHA (I) +PH1-PH2 120 CONTINUE DO 150 I=lrN-l IF( ((PHI(I)+3.1415926)* (phi (i+1)+3.1415926)) .le. 0.0) GO TO 160 150 CONTINUE 160 j = i cel=abs(3.1415926+phi (j)), ce2=abs (3.1415926+phi (j+1)) ce=cel+ce2, cel=cel/ce, ce2=ce2/ce FKC=abs (ce2*G (J) /COS (phi (j)) +cel *g(j+1) /cos (phi (j+1))) write (6, *) rFILTER PARAMETERS:',fkc, j,Q IF(FKC.LT. (tb*alf/ta)) GO TO 200 0=1.2*Q GO TO 10 200 RETURN 900 WRITE (6,*)'NO FILTER REQUIRED' A1=0.0, A2=0.0, B1=0.0, B0=1.0, KK=1.0 RETURN END 160 (3). This is just a compare subroutine. To show runner the theoretically calculated estimated value and compare with the estimated parameters. It is not necessary to be included in real controller design. C AS, BS are calculated parameters. C c SUBROUTINE CAL (CINT, W1,AS, BS) REAL CINT, W1,AS,BS, Al,A2,A3,Bl, B2,B3,PBS,PBC,PAS,PAC REAL CT, ST, E1,E15,E30, H, DELTA,ANA,ANB, C1W, C2W, C3N, S1W, S2W, S3W C ======= ^==================================:=:=== CT=COS (2. 0 *CINT) , ST=SIN (2. 0 *CINT) E1=EXP (-CINT) , E15=EXP(-15.0*CINT) , E30=EXP(-30.0*CINT) c Al=El+2.0*E15*CT A2=-2.0*EXP(-16.0*CINT) *CT-E30 A3=EXP(-31. 0*CINT) Bl=2.0-2.29*El+0.29*E15*(CT+3.5517*ST) B2=0.29*El-4.29* (1.0-E1)*E15*CT-1.03*(1.O+El) *E15*ST-0.29*E30 B3=(2.29-2. 
0*E1) *E30-0.29*EXP (-16. 0*CINT) * (CT-3.5517*ST) C ^==^^^=^^===^===^^^=^-^=.==^^=^^====;-.;.==^:-:. IF(Wl.EQ.O.O) GO TO 19 C C CALCULATION OF GAIN AND PHASE C =- ===== = C1W=C0S (Wl*CINT) , C2W=C0S (2. 0*W1 *CINT) , C3W=C0S (3. 0*W1 *CINT) S1W=SIN(W1*CINT) , S2W=SIN(2.0*W1*CINT) , S3W=SIN (3. 0*W1*CINT) c PBC=B1+B2*C1 W+B3*C2W, PBS=B2*S1 W+B3*S2W PAC=1. O-Al *C1 W-A2*C2W-A3*C3W, PAS=A1 *S1 W+A2*S2W+A3 *S3W c _ H=SQRT((PBC**2+PBS**2) / (PAC**2+PAS**2)) ANB=ATAN (PBS/PBC) IF(PBC.LT.0.0) ANB=ANB+3.1415926535 ANA=ATAN (PAS/PAC) IF(PAC.LT.0.0) ANA=ANA+3.1415926535 DELTA=ANA+ANB C ========================================================= AS=SIN (DELTA) /SIN (DELTA+W1 *CINT) BS=H*SIN (Wl *CINT) /SIN (DELTA+Wl *CINT) GO TO 20 19 H=(Bl+B2+B3)/ (1.O-Al-A2-A3) DELTA= (Bl +2.0*B2)/ (Bl +B2+B3) + (Al+2.0*A2+3. 0*A3) /(1. O-Al-A2-A3) AS=DELTA/ (1.0+DELTA) , BS=H/ (1. O+DELTA) 20 CONTINUE C +++++++++++++++++++++++++++++++++++++++++ RETURN END 161 (4). Parameter estimation subroutine. C Here is an example of 2-parameter estimation subroutine. For the C higher order estimation, just change the definition in the begining. 
C Q********************************************************************* SUBROUTINE SQRTFL(Y,X,SQRTP,K,THETA,NPAR,PERR,FORGET) REAL X (2) , SQRTP (2, 2) ,K(2) , THETA (2) REAL F (2) , SIGSQ (3) , SIG (3) , H (2, 2) r SQRTPN (2,2) REAL Y,PERR,B C C C Calculation of estimation error C = -= C PERR=Y DO 1 1=1,NPAR 1 PERR=PERR-THETA (I) *X (I) C C C Estimation Calculation c =====^=^== ========= C DO 100 1=1, NPAR, F(I)=0.0 DO 100 J=1,I , F(I)=F(I)+SQRTP(J,I)*X(J) 100 CONTINUE C c C SIGSQ(1)=FORGET DO 120 1=1,NPAR, 120 CONTINUE N1=NPAR+1 DO 130 1=1, Nl, 130 CONTINUE c B=FORGET DO 170 1=1,NPAR 1 70 B=B+F (I) *F (I) DO 190 1=1, NPAR, K(I)=0.0 DO 180 J=I,NPAR 180 K(I)=K(I) +SQRTP (I, J) *F (J) /B 190 CONTINUE c DO 140 1=1,NPAR DO 135 J=1,NPAR IF (I.EQ.J) THEN H (I, J) =SIG (I) / (SIG (1+1)) ELSE IF (I.LT.J) THEN H (I, J) =-F (I) *F (J) / (SIG (J) *SIG (J+1)) ELSE H(I,J)=0.0 END IF 135 CONTINUE 140 CONTINUE SIGSQ(1+1)=SIG$Q (I)+F (I) *F(I) SIG (I) =SQRT (SIGSQ (I)) 162 DO 150 1=1,NPAR DO 145 J=1,NPAR, SQRTPN(I,J)=0.0 DO 142 KK=1,NPAR SQRTPN (I, J) =SQRTPN (I, J) +SQRTP (I, KK) *H (KK, J) 142 CONTINUE 145 CONTINUE 150 CONTINUE DO 160 1=1, NPAR DO 155 J=1,NPAR 155 SQRTP (I, J) =SQRTPN (I, J) /SQRT (FORGET) 160 CONTINUE C C Calculation of new estimated parameters c ===== ===== ===== C DO 200 1=1,NPAR 200 THETA (I) =THETA (I) +PERR*K (I) RETURN END (5). 
FFT subroutine:

SUBROUTINE FFT (DATA, NN, ISIGN)
DIMENSION DATA (256)
N=2*NN, J=1
DO 5 I=1,N,2
IF (I-J) 1,2,2
1 TEMPR=DATA(J), TEMPI=DATA(J+1)
DATA(J)=DATA(I), DATA(J+1)=DATA(I+1)
DATA(I)=TEMPR, DATA(I+1)=TEMPI
2 M=N/2
3 IF (J-M) 5,4,4
4 J=J-M, M=M/2
IF (M-2) 5,3,3
5 J=J+M
MMAX=2
6 IF (MMAX-N) 7,9,9
7 ISTEP=2*MMAX
DO 8 M=1,MMAX,2
THETA=3.1415926535*FLOAT(ISIGN*(M-1))/FLOAT(MMAX)
WR=COS(THETA), WI=SIN(THETA)
DO 8 I=M,N,ISTEP
J=I+MMAX
TEMPR=WR*DATA(J)-WI*DATA(J+1)
TEMPI=WR*DATA(J+1)+WI*DATA(J)
DATA(J)=DATA(I)-TEMPR, DATA(J+1)=DATA(I+1)-TEMPI
DATA(I)=DATA(I)+TEMPR, DATA(I+1)=DATA(I+1)+TEMPI
8 CONTINUE
MMAX=ISTEP
GO TO 6
9 RETURN
END
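As a cross-check on the listing above, the gain and phase correlation performed by subroutine ASY1 is easy to restate in a modern language. The sketch below is a hypothetical Python re-implementation, not part of the thesis code: the first-order test plant y[k] = 0.5*y[k-1] + u[k-1] is an assumption chosen so the result can be verified exactly, not the Rohrs plant used in the program.

```python
import math, cmath

def correlate_gain_phase(x, w):
    """Estimate A and phi in x[k] ~ A*sin(w*k + phi) by correlating
    the record with sin/cos at frequency w, as subroutine ASY1 does.
    Exact when the record spans an integer number of periods."""
    n = len(x)
    a = 2.0 / n * sum(v * math.sin(w * k) for k, v in enumerate(x))
    b = 2.0 / n * sum(v * math.cos(w * k) for k, v in enumerate(x))
    return math.hypot(a, b), math.atan2(b, a)

# Hypothetical test plant (an assumption for illustration):
# y[k] = 0.5*y[k-1] + u[k-1], driven by a unit sinusoid u[k] = sin(w*k).
N = 128                            # samples per period of the command
w = 2.0 * math.pi * 8 / N          # 8 full cycles per N samples
y, data = 0.0, []
for k in range(6 * N):             # run long enough for the transient to die
    data.append(y)                 # record y[k] ...
    y = 0.5 * y + math.sin(w * k)  # ... then form y[k+1]
gain, phase = correlate_gain_phase(data[-N:], w)   # use the last full period

# Exact discrete frequency response H(z) = z^-1 / (1 - 0.5*z^-1) at z = e^{jw}:
H_true = cmath.exp(-1j * w) / (1.0 - 0.5 * cmath.exp(-1j * w))
# gain should match abs(H_true) and phase should match cmath.phase(H_true)
```

Over an integer number of periods the correlation recovers the sinusoidal component exactly, which is why ASY1 waits for steady state before accepting an estimate; the main program builds the full frequency response the same way, as the ratio H = YF/UF of the FFTs of one record of output and input.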
Item Metadata
Title | Robust adaptive control |
Creator |
Fu, Ye |
Publisher | University of British Columbia |
Date Issued | 1989 |
Description | This thesis describes discrete robust adaptive control methods based on slow sampling and slow adaptation. For the stability analysis, we consider that the plant model order is not exactly known and assume that the estimation model order is lower than the plant model order. A stability condition is derived, with an upper limit for the adaptation gain that is related to a strictly positive real operator, and the relation between sampling and this stability condition is then discussed. For the robust adaptive controller design, we study slow adaptation and predictive control. For slow adaptation, the main idea is to use only good estimates together with a compensation filter; some frequency domain information on the plant is necessary for this method. For predictive control, we discuss the relationship between the extended control horizon and the critical sampling; for a simple case, it is shown that a larger extended control horizon gives more robust adaptive control. The purpose of this thesis is to provide design guidelines for robust discrete adaptive controllers, especially for the choice of slow sampling frequencies and slow adaptation rates. In practice, for various discrete adaptive control algorithms, slow sampling and a slow adaptation rate bring more robustness; since their use is simple and economical, a careful choice of sampling rate and adaptation rate is highly recommended, and this thesis provides such guidelines. |
Subject |
Adaptive control systems -- Design |
Genre |
Thesis/Dissertation |
Type |
Text |
Language | eng |
Date Available | 2011-01-11 |
Provider | Vancouver : University of British Columbia Library |
Rights | For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use. |
DOI | 10.14288/1.0098746 |
URI | http://hdl.handle.net/2429/30574 |
Degree |
Doctor of Philosophy - PhD |
Program |
Electrical and Computer Engineering |
Affiliation |
Applied Science, Faculty of Electrical and Computer Engineering, Department of |
Degree Grantor | University of British Columbia |
Campus |
UBCV |
Scholarly Level | Graduate |
AggregatedSourceRepository | DSpace |