UBC Theses and Dissertations

Robust Adaptive Control. Fu, Ye, 1989.
ROBUST ADAPTIVE CONTROL

By Fu, Ye
B.A.Sc., Shanghai Jiao Tong University, PRC, 1984

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES (ELECTRICAL ENGINEERING)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
December 1989
© Fu, Ye, 1989

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical Engineering
The University of British Columbia
Vancouver, Canada
Date: Jan. 5th, 1990

Acknowledgement

Four years ago, when I first stepped into UBC, I knew nothing about adaptive control theory. In a talk with my thesis advisor, Prof. Guy A. Dumont, a simple problem caught my interest: given a rational minimum phase continuous plant, when can the equivalent discrete plant be non-minimum phase? This problem also led me to a more general question: if the discrete plant is non-minimum phase, how can it be controlled using adaptive control? Through frequent talks with Prof. Dumont and reading the many papers he recommended, I eventually entered the area of robustness of adaptive control. Many questions came to my mind, such as the stability of adaptive control in the model mismatched case, the role of the filter in adaptive control, chaos in simple self-tuning control, predictive adaptive control, and so on.
Here I would like to express my special thanks to Prof. Guy A. Dumont for guiding my progress throughout these four years of research and for his continuous support, help and guidance. I would also like to thank Prof. Bohn, Prof. Davies, Prof. Lawrence, Prof. Soudack and Prof. Ward for the courses they gave, through which I widened my knowledge beyond adaptive control theory. I would also like to thank Prof. Ma for his critical but helpful discussion of the original version of Section 3.4.1; through this discussion I benefited a great deal and corrected some mistakes of expression. I would also like to thank Prof. Lam, who patiently went through my thesis and gave some good suggestions. I would also like to thank the many friends studying with me for the time they spent discussing any topic of interest, including many details in this thesis. Among them, I especially thank Dr. Natarajan, Mr. Bond and Mr. Kristinsson for their constant questions and suggestions. I thank all my teachers at UBC, at Shanghai Jiao Tong University, China, and also my teachers in my primary and middle schools in Shanghai, China. Special thanks to Prof. Astrom of the Lund Institute of Technology, Sweden, and Prof. Shah of the University of Alberta, for their advice when they visited UBC. The research of this thesis was carried out at the Department of Electrical Engineering and at the Pulp and Paper Centre, with support provided by the Pulp and Paper Research Institute of Canada.
Table of Contents

Acknowledgement
List of Tables
List of Figures
Abstract

1 Introduction
  1.1 Background
  1.2 The Research Topic
    1.2.1 Motivation
    1.2.2 Main Contribution of this Thesis
    1.2.3 Outline of the Thesis

2 Mathematical Preliminary
  2.1 Signal Processing Theory
    2.1.1 Sampling Process
    2.1.2 Frequency Domain Estimation via DFT
  2.2 On Conic Sector Stability Theory
    2.2.1 Definitions
    2.2.2 Stability Problem Definition
    2.2.3 Conic Relations and Properties
    2.2.4 Theorem on Boundedness
  2.3 On Averaging Theory
  2.4 On Curve Fitting

3 Adaptive Control Algorithm and Stability
  3.1 Structure of Adaptive Control System
  3.2 On Parameter Estimation
  3.3 Self-tuning Controller Design
    3.3.1 Pole-zero Placement Design
    3.3.2 Minimum Variance Control
    3.3.3 Other Algorithms
  3.4 Stability Condition for Adaptive Control System
    3.4.1 Main Stability Theorem
    3.4.2 Result from Using Conic Sector Criterion
    3.4.3 Using Averaging Method
  3.5 Simulations

4 Robust Adaptive Control: Slow Sampling
  4.1 Why Use Slow Sampling?
  4.2 Choice of Sampling for Model Matched Case Adaptive Control
    4.2.1 Definition and Background
    4.2.2 Condition for MP Discrete Plant
    4.2.3 Obtaining the Critical Sampling by Using a Relay
  4.3 Choice of Sampling for Model Mismatched Case Adaptive Control
    4.3.1 Uncertainty in Sampled System
    4.3.2 Stability Conditions for Adaptive Algorithm
    4.3.3 Approximate Way of Choosing Sampling
  4.4 Simulation on the Effect of Sampling
  4.5 Conclusion

5 Robust Adaptive Control: Slow Adaptation
  5.1 Introduction
  5.2 Slow Adaptation Algorithm
    5.2.1 Fixed Slow Adaptation Rate
    5.2.2 Unfixed Slow Adaptation Rate
    5.2.3 Checking Criterion for the Steady State
  5.3 Frequency Domain Estimation
  5.4 Filter Design
  5.5 Simulation Examples of Robust Adaptive Controller Design
  5.6 Conclusion

6 Robust Adaptive Control: Using Predictive Control
  6.1 Predictive Control Algorithm
    6.1.1 A Simple Predictive Control Algorithm
    6.1.2 Generalized Predictive Control
  6.2 Robustness of Predictive Control
    6.2.1 Illustration Examples
    6.2.2 Discussion
  6.3 Stability Analysis for a Simple Case
  6.4 Simulation Examples
  6.5 Conclusion

7 Conclusion
  7.1 Summary of the Work
  7.2 Future Research Directions
  7.3 Conclusion

Bibliography
A List of Abbreviations
B List of Terminology
C Simulation Program

List of Tables

3.1 Estimated parameters for different sampling frequencies
4.2 Parameters of the discrete nominal model under different sampling in Example 4.2
4.3 Estimated a/b for various cases
4.4 Simulation conditions in Fig. 4.21
4.5 Simulation conditions in Fig. 4.22
5.6 Parameters of the discrete Rohrs plant under different sampling
5.7 Closed-loop poles of the adaptive system under different sampling
6.8 Simulation results for various sampling and extended horizons
6.9 Critical sampling for different horizons

List of Figures

2.1 Feedback structure
3.2 Conventional adaptive system structure
3.3 (a) Estimated parameters in the persistent excitation case; (b) estimated parameters when persistent excitation is not satisfied
3.4 Geometric explanation of η
3.5 Error model
3.6 Nyquist curve of H2 under different sampling
4.7 Nyquist path
4.8 Nyquist curve satisfying (4.71)
4.9 Nyquist curve not satisfying (4.71)
4.10 Closed loop with relay
4.11 Nyquist diagram for limit cycle
4.12 Nyquist curve of G* in the example for different sampling intervals T
4.13 Limit cycle obtained when putting G(s) of the example under relay control
4.14 Model error via sampling periods
4.15 Choice of sampling according to the phase of the continuous plant
4.16 Obtaining the critical sampling for a minimum phase discrete plant of Rohrs' example by using a relay
4.17 Zeros of the discrete plant of Rohrs' example via various sampling
4.18 (a) Finding the critical sampling for SPR H2; (b) Nyquist curve of H2 under different sampling
4.19 (a) Nyquist curve of H2 via different sampling frequencies; (b) Nyquist curve of H2 via different command frequencies
4.20 STC simulation under different sampling
4.21 STC simulation under noise or high frequency command
4.22 STC simulation when persistent excitation is not satisfied
5.23 Estimated parameters when simulating Rohrs' plant using RLS estimation, where the sampling is Ts = 0.2 sec and the command frequency is 1.0 rad/sec
5.24 STC result for fixed rate slow adaptation
5.25 Unfixed rate slow adaptation STC result when the sampling is very fast, Ts = 0.05 sec, for Rohrs' plant
5.26 Closed loop with filter
5.27 Bode plot of the compensation filter in Example 5.4
5.28 Robust adaptive controller design flow diagram
5.29 Adaptive control result in Example 5.5; command is cos(t), sampling Ts = 0.05 sec
5.30 Estimation result in Example 5.5
5.31 Adaptive control result in Example 5.5; command is a step level
5.32 Adaptive control for an eighth order plant (Example 5.6); command is a step level, sampling Ts = 1.0 sec
5.33 Estimation result in Example 5.6
6.34 Error model for predictive adaptive control
6.35 Phase plot of H2 for different algorithms
6.36 Simulation result in Example 6.6
6.37 Simulation result in Example 6.7
6.38 Oscillation obtained by using a relay for a time-varying plant
6.39 Frequency domain estimation result (Nyquist curve of the discrete plant)
6.40 Parameter estimation and controller parameters in Example 6.8
6.41 Simulation output for the predictive adaptive control algorithm applied to a time-varying plant, Example 6.8

Abstract

This thesis describes discrete robust adaptive control methods based on slow sampling and slow adaptation. For the stability analysis, we consider that the plant model order is not exactly known and assume that the estimation model order is lower than the plant model order. A stability condition is derived with a given upper limit for the adaptation gain, which is related to a strictly positive real operator. A discussion of the relation between sampling and the stability condition is then given. For the robust adaptive control design, we study slow adaptation and predictive control. For slow adaptation, the main idea is to use only good estimates and to use a compensation filter; some frequency domain information on the plant is necessary for this method. For predictive control, we discuss the relationship between the extended control horizon and the critical sampling. For a simple case, it is shown that a larger extended control horizon yields more robust adaptive control.
The purpose of this thesis is to provide robust discrete adaptive controller design guidelines, especially for cases using a slow sampling frequency and a slow adaptation rate. In practice, for various discrete adaptive control algorithms, slow sampling and a slow adaptation rate bring more robustness. The use of slow sampling and a slow adaptation rate is simple and economical, thus a careful choice of sampling rate and adaptation rate is highly recommended. This thesis provides such guidelines for choosing proper sampling and adaptation rates for robust discrete adaptive control.

Chapter 1

Introduction

1.1 Background

In classical control system design, the parameters of the system to be controlled are assumed known. Usually, the purpose of a control system is to obtain a desired plant output. For example, when tracking a given command input signal, the controller is designed so that the error between the plant output and the command input is minimized. When this command input signal is a known constant, the tracking problem becomes a regulation problem. There are many methods of classical control system design. All controller designs based on known plant parameters are valid only when the plant is linear time-invariant and its parameters are known accurately. The output of a control system will have large errors, or the system may even be unstable, if there are errors in the representation of the real system. However, in practice, when the plant is not known, off-line parameter estimation can be used. Parameter estimation is based on a given model structure with unknown parameters. The estimation algorithm improves the parameters of the model along the direction that reduces the error between the model output and the real plant output. An adaptive control system combines on-line parameter estimation with on-line feedback control law synthesis, and is thus composed of a plant, a parameter estimator and a controller design algorithm.
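To make this structure concrete, here is a minimal self-tuning loop of the kind just described (a sketch of our own for illustration, not an algorithm taken from the thesis): recursive least squares estimates the parameters of an assumed first-order plant y(k+1) = a y(k) + b u(k), and a certainty-equivalence control law uses the current estimates to track a constant setpoint.

```python
# Minimal self-tuning control sketch: RLS estimation + certainty-equivalence control
# for the (assumed) first-order plant y(k+1) = a*y(k) + b*u(k).
a_true, b_true = 0.8, 0.5          # true plant parameters, unknown to the controller
theta = [0.5, 1.0]                 # initial estimates [a_hat, b_hat]
P = [[100.0, 0.0], [0.0, 100.0]]   # estimation covariance
y, r = 0.0, 1.0                    # plant output and constant setpoint

for k in range(200):
    a_hat, b_hat = theta
    u = (r - a_hat * y) / b_hat            # control: make the predicted y(k+1) equal r
    y_next = a_true * y + b_true * u       # true plant response
    phi = [y, u]                           # regressor
    # RLS update: K = P*phi / (1 + phi'*P*phi); theta += K * prediction error
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = 1.0 + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    K = [Pphi[0] / denom, Pphi[1] / denom]
    err = y_next - (theta[0] * phi[0] + theta[1] * phi[1])
    theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
    P = [[P[i][j] - K[i] * Pphi[j] for j in range(2)] for i in range(2)]
    y = y_next

print(abs(y - r) < 1e-2)   # True: the loop tracks the setpoint
```

Note that tracking is achieved even though, with a constant setpoint, the regressor is not persistently exciting and the estimates need not converge to the true (a, b); this distinction is discussed in Chapter 3.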
A simple block diagram showing the structure of such an adaptive controller can be found in Fig. 3.2, Chapter 3. For the on-line design of the feedback controller, it is assumed that the estimated parameters are the true parameters of the plant. This scheme allows the controller to change accordingly when the process changes. However, the resulting algorithms are complex, since the feedback laws become non-linear and time-varying.

Historically, research on adaptive control started in the 1950s and was motivated by the design of autopilots for high-performance aircraft. Such aircraft are required to operate over a wide range of speeds and altitudes. It was found that a classical controller design works well in only one operating condition; as the operating condition changes, difficulties arise. Thus a more sophisticated controller is needed, one that can work well over a wide range of operating conditions. This early work on adaptive control can be found in [23] and [29]. The 1960s brought many new contributions to control theory, among them state-space theory, stability theory and stochastic control theory. There were also major developments in system identification and parameter estimation; see [4]. All of these contributions were to be important for the later development of adaptive control. Starting from the early 1970s, the theory of adaptive control developed rapidly, and a number of successful applications have been reported in industry, covering a wide range of areas such as aerospace, process control, ship steering, robotics, and other industrial control systems ([10], [15], [24], [27]). Research on adaptive control has focussed on two different approaches: the Self-Tuning Regulator (STR), which originated from a stochastic regulation problem, and the Model Reference Adaptive Controller (MRAC), which originated from a deterministic servo problem. In spite of their different origins, objectives, and formulations, these two approaches are closely related. Both systems have two feedback loops. The inner loop is an ordinary feedback loop with a process and a regulator. The parameters of the regulator are adjusted by an outer loop. However, in MRAC, the regulator parameters are updated
In spite of their different origins, objectives, and formulations, these two approaches are closely related. Both systems have two feedback loops. The inner loop is an ordinary feedback loop with a process and regulator. The parameters of the regulator are adjusted by an outer loop. However, in M R A C , the regulator parameters are updated  Chapter 1. Introduction  3  directly while in S T C , they are updated indirectly via parameter estimation and (or) design calculations. Many studies have been performed to improve the regulator design methods and parameter estimation techniques. In the late 1970s and early 1980s, proofs for the stability of adaptive systems appeared, albeit these proofs are under strict assumption of matched plant order in parameter estimation, i.e., the order of the plant is not higher than the order of the estimation model (see [16], [21]). Various controller design methods have been proposed such as minimum variance control ( M V C ) , moving-average control, pole-zero placement, L Q G control and predictive control (see [7], [9], [11], [12], [38], [40]), etc. According to the stability theory of adaptive control when matched plant order is assumed, if there is pole-zero cancellation in the controller design, the plant should be minimum phase (in discrete domain) for stable self-tuning control (STC). There are two different ways to solve this problem if the plant is non-minimum phase. One is to modify the self-tuning controller design by keeping unstable zeros uncancelled through a factorization method or using another minimization index such as generalized minimum variance algorithm ([8], [11]). In this case, more calculations are generally required because the implicit algorithm is not practical. The other way is to use slow sampling rate since the zeros of the discrete plant are affected by sampling. 
In [5], it is shown that for a continuous-time plant with stable poles and zeros, if the sampling frequency is slow enough, the sampled system is always minimum phase. In this thesis, we will develop a way of choosing the critical sampling frequency that ensures a minimum phase discrete plant for a given rational, stable and minimum phase continuous plant. If the sampling frequency is chosen to be slower than this critical sampling frequency, the discrete plant will be minimum phase.

However, in practice, most plants contain unmodelled high frequency poles and zeros. If the estimation model order is lower than the actual plant order, there will be a model mismatch. This model error may be insignificant as far as only open-loop behaviour is concerned, but in adaptive control systems it violates the basic assumption of those early stability theorems. Later, Rohrs et al. studied the cases where the assumption of matched plant order is discarded (see [31], [32]). In Rohrs' work, serious problems were outlined: the adaptive control algorithm in the presence of unmodelled dynamics and of input or output noise is not always stable, even if the other conditions of those early stability theorems are satisfied. A different explanation for this phenomenon was given later in [2] and in the appendix of [32]. Although Rohrs et al. did their simulations in the continuous time domain, the problem carries over to the discrete time domain. Further, it was found in [33], [34] that if the sampling frequency is slow enough, the mismatch between the estimated plant and the real plant over the frequency band of interest is very small; in practice, an adaptive control algorithm is robust if the sampling is slow enough. It is now believed that robust adaptive control is possible in the model mismatched case, but extreme care must be taken when implementing the controller. Various design methods are available; however, theoretical stability results are available only for simple algorithms.
The stability study of self-tuning control algorithms is difficult because it involves a non-linear system with time-varying parameters. Recently, some researchers have used the conic sector stability theory first proposed in [42] for non-linear systems to derive STC stability conditions under model mismatch (see [13], [30] and [35]), and have successfully found sufficient stability conditions for STC. In [1], averaging analysis is also used to obtain similar stability conditions. These two different methods both indicate that one very crucial condition for stable STC is that a certain operator be Strictly Positive Real (SPR). Also, in the parameter adaptation algorithms, the adaptation rate is assumed sufficiently small, although there is no indication of how small it has to be.

1.2 The Research Topic

1.2.1 Motivation

Currently, it is believed that adaptive control systems can be stabilized and give good performance when extreme care is taken. Some robust adaptive control algorithms have been reported: for example, filtering the data used for estimation so that the parameter updating is improved [39]; using a dead zone to switch the parameter estimation off if the input signal is not appropriate [9]; introducing an orthonormal series representation to model the plant dynamics and designing a controller based on this new model [43]; and applying predictive control in the controller design [12]. However, every method has its own limitations, and it is also observed that for discrete adaptive control, slower sampling brings more robust results in most algorithms. The relation between the sampling frequency and the stability of discrete STC algorithms has been noted in [33], [34] and [13]. A general explanation for the robustness obtained by using slow sampling is that the mismatch between the model and the real plant decreases as the sampling frequency decreases. This conclusion has been verified through simulation results.
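The SPR condition mentioned above is easy to test numerically: for a stable discrete transfer function, SPR amounts to Re H(e^{jw}) > 0 for all w. The following sketch (our own illustration; the two example transfer functions are hypothetical) scans the unit circle:

```python
import cmath
import math

def min_real_on_unit_circle(num, den, n_grid=2000):
    """Minimum of Re H(e^{jw}) over a frequency grid, where H(z) = num(z)/den(z)
    and num, den are coefficient lists (highest power of z first)."""
    def horner(p, z):
        acc = 0j
        for c in p:
            acc = acc * z + c
        return acc
    worst = float("inf")
    for k in range(n_grid):
        # w in [0, pi] suffices for real coefficients (conjugate symmetry)
        z = cmath.exp(1j * math.pi * k / (n_grid - 1))
        worst = min(worst, (horner(num, z) / horner(den, z)).real)
    return worst

# H(z) = (z - 0.2)/(z - 0.5): stable and SPR
print(min_real_on_unit_circle([1.0, -0.2], [1.0, -0.5]) > 0)   # True
# H(z) = (z - 2)/(z - 0.5): stable but not SPR, since Re H(1) = -2
print(min_real_on_unit_circle([1.0, -2.0], [1.0, -0.5]) > 0)   # False
```

A grid test of this kind is only a sketch of the idea; a strict SPR verification would bound Re H between grid points or use an algebraic criterion.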
1.2.2 Main Contribution of this Thesis

In this thesis, we study various aspects of discrete time robust self-tuning controller design, with emphasis on the following:

• Derivation of a way to choose the critical sampling frequency for a rational, stable and inversely stable continuous plant such that the sampled plant is minimum phase. For the perfect match case, minimum phase behaviour of the discrete plant is crucial for some direct algorithms. If the sampling frequency is chosen to be slower than this critical sampling frequency, the discrete plant will be minimum phase. A practical way of choosing such a sampling frequency is suggested. This is an extension of the study on zeros of sampled systems in [5].

• Derivation of a new self-tuning control stability condition under model mismatch. Most available STC stability conditions for the model mismatched case require the existence of a sufficiently small adaptation gain ([1], [13], [30], etc.). Here we derive an upper limit for this adaptation gain, which is directly related to the required SPR operator and the initial setting of the estimation error covariance.

• Some robust self-tuning controller design guidelines, which include the use of slow sampling, a slow adaptation rate, filtering and predictive control. Model mismatch is assumed. The slow adaptation rate and the predictive horizon are studied with respect to the sampling frequency of the continuous plant. We explain why practitioners attribute more robustness to slow adaptation and predictive control.

1.2.3 Outline of the Thesis

In chapter 2, we present some basic mathematical preliminaries in signal processing, conic sector stability theory, averaging theory and curve fitting. In chapter 3, we discuss some conventional estimation and controller design algorithms. A stability theorem is derived and compared with existing results.
In chapter 4, we study the effect of slow sampling on the stability conditions. Both the matched model order and mismatched model order cases are included, and we propose how to choose the critical sampling frequency. Chapter 5 discusses the effect of slow adaptation. It can be shown that a proper filter will always yield a stable self-tuning control algorithm; however, the design of the filter requires frequency domain knowledge of the plant, and thus frequency domain estimation can be useful. In chapter 6, predictive control is applied; here the emphasis is on the relation between the sampling frequency and the prediction horizon. Chapter 7 contains conclusions and some future research directions.

Chapter 2

Mathematical Preliminary

2.1 Signal Processing Theory

2.1.1 Sampling Process

Suppose $G(s)$ is the continuous transfer function of the plant, and denote by $Z\{G(s)\}$ the Z-transform associated with $G(s)$. The discrete transfer function of the sampled plant with zero-order hold is:

$$H(q^{-1}) = Z\left\{\frac{1 - e^{-sT}}{s}\, G(s)\right\} = (1 - q^{-1})\, Z\left\{\frac{G(s)}{s}\right\}.$$

Define:

$$G_0(s) = \frac{G(s)}{s},$$

then:

$$H(q^{-1}) = (1 - q^{-1})\, Z\{G_0(s)\}, \qquad (2.1)$$

where $H(q^{-1})$ represents the sampled plant or discrete plant. $Z\{G(s)\}$ is the Z-transform resulting from $G(s)$, which also depends on the sampling period. The following lemma is important for understanding the sampling process; a similar result can be found in textbooks on digital control such as [17].

Lemma 2.1 If the sampling interval is $T$ and $\omega_s = \frac{2\pi}{T}$, then the sampling process results in:

$$Z\{G_0(s)\} = \frac{1}{T} \sum_{n=-\infty}^{+\infty} G_0(s - jn\omega_s) \qquad (2.2)$$
The sampling process can be considered as a modulation process +00  in which the particular pulse train p(t) =  8(t — nT) is multiplied by  nrr  — oo  the continuous function go(t)(2.3)  9*o(t) = p(t)g (t) 0  since we can express p(t) by Fourier series as: +00  E ^ c  i<0=  inw,t  where the coefficients:  Thus: j  +00  E  *o*(0 = 7p  SoO''™"'  Using Laplace transform:  /•+00 r+00  GS(«) -  / Jo r  = J  g*o{t)e-"  +oo  / o  1  1  +0°  r E go{t)e*'" e-' di T _ t  n=  +00 n= —00 +00  .  E  =  j; E  / G  l  co  +  0  0  goWe-l-^d  o ( 5 - JTWO.)  n= —00  Also from (2.3): 9o(t) = 9o{0)S{t) + g (T)8(t - T) + g (2T)S{t - 2T) + 0  0  Chapter 2. Mathematical Preliminary  10  —  G;(s) = €7o(0) + <7o(T)e- + g (2T)e- ' + • • • = £ So(nT)e sr  2  T  nsT  0  n=0  Let 2 = e* , then: r  G (^) = E 5 o ( n r ) z - ' = z{G'o(5)} i  0  (2.4)  n=0  Compare the result before, the sampling process is;  HGo(s)} = - £  G (s-jnw,) 0  n= —oo  • From Lemma 2.1, we can see that in the frequency domain, the high frequency components are aliasing to the low frequency part after sampling.  2.1.2  Frequency Domain Estimation via D F T  We denote a discrete-time signal by x(n) = x(nT) and x denotes the sampled continuoust  time signal where t = nT, n is an integer and T is the sampling period. x -i denotes t  a;[(n —1)T], etc. The Z-transform of x(n) on the unit circle is the Discrete-Time Fourier Transform ( D T F T ) and denned as: -jnwT  (2.5)  We also define the N-point Discrete Fourier Transform ( D F T ) of x(n) at N points in frequency domain, Wk = kw,/N,fc= 0,1, • • •, TV — 1, where w, = 2ir/T is the sampling frequency in (rad/sec). Denote W  = -W\  N  e  D F T is:  N-l  X {w )= N  k  Y ( )K ,k x  n=0  n  n  = 0,l,---,N - 1  (2.6)  11  Chapter 2. 
Further, the inverse N-point discrete Fourier transform of $X_N(w_k)$ is:

$$x(n) = \frac{1}{N} \sum_{k=0}^{N-1} X_N(w_k)\, W_N^{-kn}, \quad n = 0, 1, \cdots, N-1$$

If $x(n)$ is of finite duration, for example $x(n) = 0$ when $n \geq N$, then the N-point DFT of $x(n)$ and the DTFT of $x(n)$ are equal at $w_k$:

$$X_N(w_k) = X(e^{jwT})\big|_{w = w_k}, \quad k = 0, 1, \cdots, N-1$$

The following lemma is similar to a result from [26]:

Lemma 2.2 Let $y(m) = h(m) * u(m)$, where $h(m)$ is an infinite length, causal impulse response and all poles of $Z[h(m)]$ are in the open unit disk. We denote the DTFT of $h(m)$ by $H(e^{jwT})$, and the N-point DFTs of $u(m), y(m)$ ending with time index $n$ by $U_N^n(w_k)$, $Y_N^n(w_k)$, respectively. Then:

$$Y_N^n(w_k) = H(e^{jw_kT})\, U_N^n(w_k) + E_N^n(w_k), \quad k = 0, 1, \cdots, N-1$$

where the discrete function $E_N^n(w_k)$ is given by:

$$E_N^n(w_k) = \sum_{p=1}^{\infty} h(p)\, W_N^{kp} \left[ \sum_{m=n-N-p+1}^{n-N} u(m)\, W_N^{km} - \sum_{m=n-p+1}^{n} u(m)\, W_N^{km} \right]$$

and $k = 0, 1, \cdots, N-1$. The definition of $Y_N^n(w_k)$ above is:

$$Y_N^n(w_k) = \sum_{m=n-N+1}^{n} y(m)\, W_N^{km}, \quad k = 0, 1, \cdots, N-1$$

which uses the updated $y(n)$; $E_N^n(w_k)$ and $U_N^n(w_k)$ are similarly defined.

[proof] Since

$$Y_N^n(w_k) = \sum_{m=-\infty}^{n} y(m)\, W_N^{km} - \sum_{m=-\infty}^{n-N} y(m)\, W_N^{km}$$

and, for the doubly infinite sums, $\sum_{m=-\infty}^{\infty} y(m) W_N^{km} = H(e^{jw_kT}) \sum_{m=-\infty}^{\infty} u(m) W_N^{km}$, we have

$$Y_N^n(w_k) = H(e^{jw_kT})\left[\sum_{m=-\infty}^{n-N} u(m) W_N^{km} + U_N^n(w_k) + \sum_{m=n+1}^{\infty} u(m) W_N^{km}\right] - \sum_{m=-\infty}^{n-N} y(m) W_N^{km} - \sum_{m=n+1}^{\infty} y(m) W_N^{km} \qquad (2.7)$$
• The discrete transfer function of the plant G(s) with zero order holder is:  which is the D T F T of the sampled impulse response h(n) for z — e the following remarks concerning H{e^ ): wT  R e m a r k 1:  JtuT  . We will have  13  Chapter 2. Mathematical Prehminary  The discrete plant H(e ) at w — w , k = 0,1, • • •, N — 1 can be approxi3wT  k  mated by: ^  r  )  =  ffi  ( 2  '  8 )  where Y^(w ),U^(w ) is N-point D F T oiy ,u , and k = 0,1, • • •, N - 1. k  k  t  t  Remark 2: The estimation error in (2.8) can be eliminated if u(nT) — 0,y(nT) = 0 for n > N. The error is small if u ,yt are close to finite length signals and N t  is large enough.  Remark 3: For a linear system, if u(t) = Ux(t) + u (t), y(t) = y^t) + t/ (*)i }'i(w), 2  2  12(10), C/ (w),C/ (w) are D F T s of yi(t), y2{t),v-i(t),u (t) respectively, and 1  2  2  Yi(w) = H{e )U {w), then Y (w) = H(e^ )U (w). iwT  wT  1  2  2  In practice, if ui(t), yi(t) are known steady state input and output signals, u (t) 2  is a disturbance and y (t) is the output result from disturbance, choosing u (t),y (t) 2  2  having limited length, then H(e ) can be obtained by H(e " ) = jwT  jx  kT  2  ^ \.  Y  k  U2[Wk)  2.2 2.2.1  O n Conic Sector Stability Theory Definitions  Extended Normed Linear Space When discussing stability, unstable systems will be involved, thus the function space will contain functions which "explode", i.e., which grow without bound as time increases. Such functions are not contained in the spaces commonly used in analysis, for example.  14  Chapter 2. Mathematical PreEminary  in the L  p  spaces.  Thus it is necessary to construct a special space, which will be  called extanded space and denoted as X . X e  e  will contain both "well-behaved" and  "exploding" functions, which will be distinguished from each other by assigning finite norms to the former and infinite norms to the latter. Let T be a given subinterval of the reals, of the type [r. ,oo), or (—00,00). V is a 0  given linear space. 
Followings are some definitions from [42]: Definition Let x be any function mapping T into V , that is, a; : T —> V; let r be any point i n T; then the symbol x denotes the truncated function, x : T —• V , T  r  which assumes the values x[ = x for t < r and x\ = 0 elsewhere. t  Definition For a normed linear space X consisting of functions of the type x : T —> V , 1. IfzeX  then x £ X for all r G T. T  2. If x : r -» V, and if x € X for all r € T , then: T  (a) | | x | | is a nondecreasing function of r € T. r  (b) If l i m ||x || exists, then x € X and the limit equals ||x||. T  Definition: The extension of X, denoted by X , is the space consisting of those functions e  whose truncations lie i n X, that is, X  — {x\x : T —• V", and x 6 X for T  e  all r (E T } . A n extended norm, denoted ||x|| , is assigned to each x £ A ' as e  e  follows: [|x|| = ||x|| if x (E A , and ||s|| = 00 if x $ X. r  e  e  Example 2.1: Consider x = e which is an function in L [0,oo] when p < 0 and unbounded when pt  t  2  15  Chapter 2. Mathematical Prehminary  p > 0. Now by using a truncated function x\ and construct a periodic signal x based t  on x[ beyond [0,r], we can write its Fourier series expansion as: oo  Xt = do + X / t  { t)  a n cos  + b s'm(nwt)]  nvj  n  n=l  where w — 2n/T,t 6 [0,r]. The coefficients in the Fourier expansion are a = - f e dt T JO T  0  pt  e  pT  - 1  pT  2 r „ , 2 ( p a = e cos(nwt)dt = P  t  -1)  T  e  F  — r - r r  n  2nw{l - e^) b = — e sin(nwt)dt = — T Jo T-(p + n w ) pt  n  2  2  2  Notice that x , x\ and x have the same value in [0,r]. If p < 0, they are in J L [ 0 , T ] t  t  2  as T goes to infinity. For  INI* = £l4 + \ ±K  +  *)]* = - U'[£ +  - ±)\  thus,  ^mK||' = J ^ | W = - ^ = ||x^ which is as expected. Here we have used the equation: ([44], p.526) Y ~  - —\^1±1 _ J_i n + a ~~ 2a V ™ - 1 ?r J 1  2  2  If p > 0, x is not in L [ 0 , T ] but x\ still is for finite r. 
However, as $\tau$ goes to $\infty$, $a_n$ and $b_n$ go to $\infty$; thus both $\|\tilde x\|$ and $\|x_\tau\|$ go to $\infty$. $\square$

Input-Output Relations

Definition (Relation): A relation $H$ on $X_e$ is any subset of the product space $X_e \times X_e$. If $(x,y)$ is any pair belonging to $H$, then $y$ will be said to be $H$-related to $x$; $y$ will also be said to be an image of $x$ under $H$. In other words, a relation is a set of pairs of functions in $X_e$. It will be convenient to refer to the first element in any pair as an input, and to the second element as an output. A relation can also be thought of as a mapping, which maps some inputs into outputs.

Definition (Domain and Range): If $H$ is a relation on $X_e$, then the domain of $H$, denoted $Do(H)$, and the range of $H$, denoted $Ra(H)$, are the sets:
- $Do(H) = \{x \mid x \in X_e$, and there exists $y \in X_e$ such that $(x,y) \in H\}$.
- $Ra(H) = \{y \mid y \in X_e$, and there exists $x \in X_e$ such that $(x,y) \in H\}$.

If $H$ is a relation on $X_e$, and if $x$ is a given element of $X_e$, then the symbol $Hx$ denotes an image of $x$ under $H$.

Definition (Operator): An operator $H$ is a relation on $X_e$ which satisfies two conditions: 1) $Do(H) = X_e$; 2) $H$ is single-valued. This means that the domain of an operator is the whole extended space, and its input-output relation is unique.

Definition: $\Re$ is the class of those relations $H$ on $X_e$ having the property that the zero element, denoted $o$, lies in $Do(H)$, and $Ho = o$. If $H$ and $K$ are relations in $\Re$, and $c$ is a real constant, then the sum $(H+K)$, the product $cH$, and the composition product $KH$ of $K$ following $H$, are defined in the usual way, and are relations in $\Re$. The inverse of $H$ in $\Re$, denoted by $H^{-1}$, always exists. The identity operator on $X_e$ is denoted by $I$.

2.2.2 Stability Problem Definition

In [42], a system is called stable if it is well behaved in two respects: (1) it is bounded, i.e., not explosive; (2)
it is continuous, i.e., not critically sensitive to noise.

Definition: A subset $Y$ of $X_e$ is bounded if there exists $\Delta > 0$ such that, for all $y \in Y$, $\|y\| \le \Delta$. A relation $H$ on $X_e$ is bounded if the image under $H$ of every bounded subset of $X_e$ is a bounded subset of $X_e$. A relation $H$ on $X_e$ is continuous if $H$ has the following property: given any $x \in X$ and any $\Delta > 0$, there exists $\delta > 0$ such that, for all $y \in X$, if $\|x-y\| \le \delta$ then $\|Hx - Hy\| \le \Delta$. A relation $H$ on $X_e$ is input-output stable if $H$ is bounded and continuous.

Consider a feedback system as shown in Fig. 2.1; the feedback equations are
\[
e_t = w_t - (H_2y)_t
\]
\[
y_t = (H_1e)_t
\]
where $H_1$, $H_2$ are relations in $\Re$. Our interest is to find conditions on $H_1$, $H_2$ which ensure that the relations $E$, $Y$ are bounded. Here $E = \{(w,e) \mid (w,e) \in X_e \times X_e$ and there exist $y$, $H_1e$, $H_2y$, all in $X_e$, such that the above feedback equations hold$\}$; similarly, $Y$ relates $y$ to $w$.

[Figure 2.1: Feedback Structure]

2.2.3 Conic Relations and Properties

In this section, we give the definition of conic relations and discuss some of their properties, as in [42].

Definition: A relation $H$ in $\Re$ is interior conic if there are real constants $r > 0$ and $c$ for which the inequality
\[
\|(Hx)_t - cx_t\| \le r\|x_t\| \qquad [x \in Do(H);\ t \in T] \tag{2.9}
\]
is satisfied. $H$ is exterior conic if the inequality sign in (2.9) is reversed. Here the constant $c$ is called the centre parameter of $H$, and $r$ is called the radius parameter.

Inequality (2.9) can be expressed in the form $\|(Hx)_t - cx_t\|^2 - r^2\|x_t\|^2 \le 0$. If norms are expressed in terms of inner products then, after factoring, there is an equivalent inequality
\[
\langle (Hx)_t - ax_t,\ (Hx)_t - bx_t\rangle \le 0 \qquad [x \in Do(H);\ t \in T] \tag{2.10}
\]
where $a = c - r$ and $b = c + r$.
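As a numerical illustration of (2.10), the sector inequality can be checked directly on sampled signals for a memoryless nonlinearity. The choice of tanh below is an assumed example, not one from the text; tanh lies inside the sector $\{0, 1\}$ because $0 \le \tanh(x)/x \le 1$ for all $x$.

```python
import numpy as np

# Check the sector inequality (2.10), <(Hx)_t - a*x_t, (Hx)_t - b*x_t> <= 0,
# for the memoryless operator (Hx)(t) = tanh(x(t)).  The inner product is
# the usual sum over the sampled, truncated signal.
def inside_sector(Hx, x, a, b):
    return float(np.sum((Hx - a * x) * (Hx - b * x))) <= 1e-12

rng = np.random.default_rng(4)
x = 3.0 * rng.standard_normal(1000)

ok = inside_sector(np.tanh(x), x, a=0.0, b=1.0)    # tanh is inside {0, 1}
bad = inside_sector(np.tanh(x), x, a=0.9, b=1.0)   # {0.9, 1} is too narrow
```

For large $|x|$, $\tanh(x)$ falls well below $0.9x$, so the narrower sector fails while $\{0,1\}$ holds.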
It will often be desirable to manipulate inequalities such as (2.10); thus we define the conic $H$ to be inside the sector $\{a,b\}$ if $a \le b$ and (2.10) holds; $H$ is outside the sector $\{a,b\}$ if $a \le b$ and (2.10) holds with the inequality sign reversed.

Definition: A relation $H$ in $\Re$ is positive if
\[
\langle x_t, (Hx)_t\rangle \ge 0 \qquad [x \in Do(H);\ t \in T]
\]
A positive relation can be regarded as degenerately conic, with a sector from $0$ to $\infty$.

Example 2.2: If a positive relation $H$ is a linear and time-invariant operator, then
\[
\mathrm{Re}[H(j\omega)] \ge 0 \qquad \forall\omega
\]
[proof] For a linear time-invariant operator $H$, we can express it by a transfer function $H(s)$, where $s$ denotes the derivative operator. Then, by Parseval's theorem,
\[
\langle x_t, (Hx)_t\rangle = \frac{1}{2\pi}\int_{-\infty}^{\infty}\mathrm{Re}[H(j\omega)]\,|X_t(j\omega)|^2\,d\omega \ge 0
\]
for all $x_t$, or equivalently all $X_t(j\omega)$. Thus the necessary and sufficient condition is
\[
\mathrm{Re}[H(j\omega)] \ge 0 \qquad \forall\omega \qquad\square
\]

Some Properties of Conic Relations

Some simple properties of conic relations are listed. Here we assume the conic relations $H$, $H_1$ are such that $H$ is inside the sector $\{a,b\}$ with $b > 0$, and $H_1$ is inside $\{a_1,b_1\}$ with $b_1 > 0$. Suppose $k > 0$ is a constant; then:

1. $I$ is inside $\{1,1\}$.
2. $kH$ is inside $\{ka, kb\}$; $-H$ is inside $\{-b, -a\}$.
3. $(H + H_1)$ is inside $\{a+a_1,\ b+b_1\}$.
4. Inverse rule:
   - Case 1: if $a > 0$ then $H^{-1}$ is inside $\{1/b,\ 1/a\}$.
   - Case 2: if $a < 0$ then $H^{-1}$ is outside $\{1/b,\ 1/a\}$.
   - Case 3: if $a = 0$ then $(H^{-1} - (1/b)I)$ is positive.
5. Properties (2), (3), (4) remain valid with the terms "inside" and "outside" interchanged throughout.
6. $g(H) \le \max(|a|,|b|)$. Hence if $H$ is in $\{-r,r\}$ then $g(H) \le r$. Here $g(H)$ is the gain of the relation $H$, defined by
\[
g(H) = \sup\frac{\|(Hx)_t\|}{\|x_t\|}
\]
where the supremum is taken over all $x$ in $Do(H)$, all $Hx$ in $Ra(H)$, and all $t$ in $T$ for which $x_t \ne 0$.

2.2.4 Theorem on Boundedness

Consider the feedback system of Fig. 2.1, and suppose that $H_2$ is confined to the sector $\{a,b\}$. It is desirable to find a condition on $H_1$ which will ensure the boundedness of the closed loop.
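The condition $\mathrm{Re}[H(j\omega)] \ge 0$ of Example 2.2 can be probed numerically on a frequency grid. The two transfer functions below are illustrative assumptions, and a finite grid check is only a necessary screen, not a proof.

```python
import numpy as np

# Check Re[H(jw)] >= 0 on a frequency grid, the positivity condition of
# Example 2.2 for a linear time-invariant operator.  Polynomials are
# given in descending powers of s, the numpy.polyval convention.
def is_positive_real_on_grid(num, den, w_grid):
    jw = 1j * w_grid
    H = np.polyval(num, jw) / np.polyval(den, jw)
    return bool(np.all(H.real >= -1e-12))

w = np.logspace(-3, 3, 2000)

# H1(s) = (s + 2)/(s + 1): Re H1(jw) = (2 + w^2)/(1 + w^2) > 0 everywhere.
pr = is_positive_real_on_grid([1.0, 2.0], [1.0, 1.0], w)

# H2(s) = 1/(s^2 + s + 1) is not positive: Re H2(jw) < 0 for w > 1.
npr = is_positive_real_on_grid([1.0], [1.0, 1.0, 1.0], w)
```
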
The following theorem from [42] is outlined here; the proof of the theorem is omitted.

Theorem 2.1 (Zames, [42]): In Fig. 2.1, let $H_1$ and $H_2$ be conic relations. Let $\Delta$ and $\delta$ be constants, of which one is strictly positive and one is zero. Suppose that:

1. $H_2$ is inside the sector $\{a+\Delta,\ b-\Delta\}$, where $b > 0$, and
2. $H_1$ satisfies one of the following conditions:
   - Case 1: if $a > 0$ then $H_1$ is outside $\{-\frac{1}{a}-\delta,\ -\frac{1}{b}+\delta\}$;
   - Case 2: if $a < 0$ then $H_1$ is inside $\{-\frac{1}{b}+\delta,\ -\frac{1}{a}-\delta\}$;
   - Case 3: if $a = 0$ then $H_1 + (\frac{1}{b}-\delta)I$ is positive; in addition, if $\Delta = 0$ then $g(H_1) < \infty$.

Then the relations $E$, $Y$ are bounded.

In the special case when $H_1$ is positive, i.e.,
\[
\langle (H_1e)_t, e_t\rangle \ge 0
\]
the stability condition for a linear time-invariant $H_2$ is
\[
\mathrm{Re}\{H_2(j\omega)\} \ge \nu
\]
where $\nu$ is a sufficiently small positive real number.

The famous Nyquist stability criterion is a special case of the conic sector stability theorem, where $H_2x = x$ and $H_1$ is a linear time-invariant relation representing the open-loop transfer function of the plant.

2.3 On Averaging Theory

In [1] and [25], stability conditions of adaptive systems were discussed by using averaging theory. In these analyses, an adaptive algorithm can be expressed by the following time-varying equation:
\[
\tilde\theta_{k+1} = (I - \epsilon R_k)\tilde\theta_k \tag{2.11}
\]
where $\epsilon$ is a small positive constant representing the adaptation rate (see section 3.4.3). Here, we are going to show the stability conditions for this equation by using the averaging method.

The variations of $\tilde\theta$ in (2.11) are slow if $\epsilon$ is sufficiently small. In discrete time, the sample average over the interval $[k_{i-1}, k_i)$ of length $T_i = k_i - k_{i-1}$ is defined as
\[
\bar R_i = \frac{1}{T_i}\sum_{k=k_{i-1}}^{k_i-1}R(k)
\]
Let the state transition matrices of (2.11) be $F(k,j)$, with $F(j,j) = I$, and denote $F_i := F(k_i, k_{i-1})$, $\tilde\theta_i := \tilde\theta(k_i)$; then
\[
\tilde\theta_i = F_iF_{i-1}\cdots F_{j+1}\tilde\theta_j,\qquad j < i;\quad j,i \in Z
\]

Lemma 2.3 (Uniform Contractions): If $F_i$ is a
sequence of uniform contractions for all $i$, that is, if there exists a positive constant $\alpha$ such that
\[
|F_i| \le e^{-\alpha},\qquad \forall i \in Z
\]
then the system (2.11) is exponentially stable. Conversely, if the system (2.11) is exponentially stable, then there exists a sequence of sample intervals for which $F_i$, $i \in Z$, is a sequence of uniform contractions.

Lemma 2.4 (Averaging): Assume that $R(k)$ is a sequence of bounded $n \times n$ matrices, with $|R(k)| \le \rho_i$ on the $i$th interval, and let $\bar R_i$ be the sample average above. Then $F_i$ is given by
\[
F_i = I - \epsilon T_i\bar R_i + (\epsilon T_i)^2M_i \tag{2.12}
\]
and, for $\epsilon$ satisfying $|\epsilon|T_i\rho_i \le \frac{1}{2}$, $|M_i| \le 2\rho_i^2$.

[proof] See [1].

Using Lemma 2.3 and Lemma 2.4, we can derive the following theorem:

Theorem 2.2: Stability via Sample Averages (Anderson et al. [1])

Consider (2.11) with $R(k)$ a sequence of bounded $n \times n$ matrices. If for a sequence of sample averages $\bar R_i$ a constant positive definite matrix $P = P^T > 0$ can be found satisfying
\[
P\bar R_i + \bar R_i^TP \ge I,\qquad \forall i \in Z \tag{2.13}
\]
then there exists an $\epsilon^* > 0$ such that the system (2.11) is exponentially stable for all $\epsilon \in (0,\epsilon^*)$.

The sufficient stability condition of Theorem 2.2 is a general signal-dependent positivity condition, which applies to large classes of system matrices $R(k)$. For more restrictive classes of signals, much sharper stability conditions can be obtained.

Theorem 2.3: Stability (Anderson et al. [1])

For periodic $R(k)$, there exists a positive constant $\epsilon^*$ such that the system (2.11) is exponentially stable for all $\epsilon \in (0,\epsilon^*)$ if and only if
\[
\min_i\ \mathrm{Re}\,\lambda_i(\bar R) > 0 \tag{2.14}
\]

2.4 On Curve Fitting

Given a signal of known form $f(k,p,t)$, where $k$ is a set of known parameters, $p$ is a set of unknown parameters, and $t$ is the time series, the curve fitting problem is to find the set of parameters $p$ such that the fitted signal is $\hat y = f(k,p,t)$ and an error index $J(y,\hat y)$ is minimized for the measured signal $y_t$.
In practice, almost every signal can be expressed as a Fourier series. The unknown parameters are usually the amplitude and phase of each Fourier series term. Here, we show how to curve fit a sine-wave signal with only a single frequency. The multi-frequency cases are handled similarly, but the calculation is tedious.

Now we consider a signal with the known form $\hat y_k = H_1\sin(\omega_1kT - \Delta_1)$. The error index to be minimized is
\[
J = \sum_{k=1}^{n}[y_k - \hat y_k]^2
\]
To find $H_1$ and $\Delta_1$ by curve fitting, we set $\partial J/\partial H_1 = 0$ and $\partial J/\partial\Delta_1 = 0$:
\[
\sum_{k=1}^{n}[y_k - H_1\sin(\omega_1kT - \Delta_1)]\sin(\omega_1kT - \Delta_1) = 0
\]
\[
\sum_{k=1}^{n}[y_k - H_1\sin(\omega_1kT - \Delta_1)]H_1\cos(\omega_1kT - \Delta_1) = 0
\]
For large $n$ and $n\omega_1T$ close to an integer multiple of $2\pi$, we have
\[
\sum_{k=1}^{n}\sin(\omega_1kT - \Delta_1)\cos(\omega_1kT - \Delta_1) \approx 0,\qquad
\sum_{k=1}^{n}\sin^2(\omega_1kT - \Delta_1) \approx \frac{n}{2}
\]
thus
\[
\tan\Delta_1 = -\frac{\sum_{k=1}^{n}y_k\cos(\omega_1kT)}{\sum_{k=1}^{n}y_k\sin(\omega_1kT)} \tag{2.15}
\]
\[
H_1 = \frac{2}{n}\sum_{k=1}^{n}y_k\sin(\omega_1kT - \Delta_1) \tag{2.16}
\]

Chapter 3

Adaptive Control Algorithm and Stability

3.1 Structure of Adaptive Control System

The structure of an adaptive control system is as shown in Fig. 3.2. It includes the plant, the parameter estimation and the controller. Usually, the plant output $y_t$ is desired to be as close to the command input $w_t$ as possible. Suppose the continuous-domain transfer function of the plant is $G(s)$, without delay; then the discrete plant has the form
\[
H(q^{-1}) = (1-q^{-1})\,\mathcal{Z}\!\left\{\frac{G(s)}{s}\right\} = \frac{B(q^{-1})}{A(q^{-1})}
\]
where $A(q^{-1})$, $B(q^{-1})$ are polynomials in the backward shift operator $q^{-1}$ ($q^{-1}x_t = x_{t-1}$). The parameter estimation measures the input-output pair $(u_t, y_t)$ and evaluates the parameters of the estimation model. The controller uses the estimates to produce the control input $u_t$ of the plant.

There are various methods of parameter estimation and controller design. In the following two sections, we will introduce some of them.
The purpose of listing these estimation and controller design algorithms is to give a glimpse of the variety of adaptive control algorithms. Robust controller design will be discussed in later chapters.

3.2 On Parameter Estimation

In the adaptive control system of Fig. 3.2, the estimation is to find parameters for a given model such that the estimated model and the real plant have the best match under the measured input $u_t$ and output $y_t$. Here, we shall represent the plant by an ARMA model. Other structures that can be used are: state space models, transfer function models with poles and zeros as parameters, orthonormal series representation models, etc.

[Figure 3.2: Conventional Adaptive System Structure]

When using an ARMA model, the input-output relation of the sampled plant with zero-order hold is
\[
y_t = a_1y_{t-1} + a_2y_{t-2} + \cdots + a_ny_{t-n} + b_1u_{t-d} + b_2u_{t-d-1} + \cdots + b_nu_{t-d-n+1} \tag{3.18}
\]
where the plant is linear time-invariant with order $n$ and delay $d$. Let
\[
A(q^{-1}) = 1 - a_1q^{-1} - \cdots - a_nq^{-n}
\]
\[
B(q^{-1}) = b_1 + b_2q^{-1} + \cdots + b_nq^{-(n-1)}
\]
then the transfer function of the discrete plant is
\[
H(q^{-1}) = \frac{q^{-d}B(q^{-1})}{A(q^{-1})} \tag{3.19}
\]
Usually, we express this input-output relation in the form
\[
y_t = \phi_{t-1}^T\theta \tag{3.20}
\]
where
\[
\phi_{t-1} = [y_{t-1}, y_{t-2},\cdots,y_{t-n}, u_{t-d}, u_{t-d-1},\cdots,u_{t-d-n+1}]^T
\]
and
\[
\theta = [a_1, a_2,\cdots,a_n, b_1, b_2,\cdots,b_n]^T
\]
which represents the true plant.

Recursive Least Squares Method

In the estimation process, $\phi_{t-1}$ and $y_t$ are measurable; $\theta$ is to be estimated.
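Before turning to the recursive form, the batch least-squares estimate of $\theta$ in (3.20) can be sketched by stacking the regression vectors into a matrix. The second-order plant below is a hypothetical example, not one from the text.

```python
import numpy as np

# Batch least-squares fit of the ARMA regression (3.20), y_t = phi_{t-1}^T theta,
# for an assumed stable second-order plant with n = 2, d = 1:
#   y_t = 1.1 y_{t-1} - 0.3 y_{t-2} + 1.0 u_{t-1} + 0.5 u_{t-2}
rng = np.random.default_rng(3)
u = rng.standard_normal(300)
y = np.zeros(300)
for t in range(2, 300):
    y[t] = 1.1 * y[t - 1] - 0.3 * y[t - 2] + 1.0 * u[t - 1] + 0.5 * u[t - 2]

# Row t of Phi is phi_{t-1} = [y_{t-1}, y_{t-2}, u_{t-1}, u_{t-2}]
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
```

With noise-free data and a persistently exciting input, the batch solution recovers the true parameters exactly; the recursive form below computes the same quantity sequentially.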
Let $\hat\theta_t$ be the estimate of $\theta$ at time $t$; then the recursive least-squares parameter estimation is given by
\[
\hat\theta_t = \hat\theta_{t-1} + K_te_t
\]
\[
K_t = \frac{P_{t-1}\phi_{t-1}}{\lambda + \phi_{t-1}^TP_{t-1}\phi_{t-1}}
\]
\[
P_t = \frac{1}{\lambda}\Big[P_{t-1} - \frac{P_{t-1}\phi_{t-1}\phi_{t-1}^TP_{t-1}}{\lambda + \phi_{t-1}^TP_{t-1}\phi_{t-1}}\Big] \tag{3.21}
\]
where $e_t = y_t - \phi_{t-1}^T\hat\theta_{t-1}$ represents the prediction error, $\hat\theta_0$ is pre-specified, $P_0$ is a positive definite matrix, and $0 < \lambda \le 1$ is a forgetting factor. In a more compact representation, we can rewrite (3.21) as
\[
\hat\theta_t = \hat\theta_{t-1} + P_t\phi_{t-1}e_t,\qquad P_t^{-1} = \lambda P_{t-1}^{-1} + \phi_{t-1}\phi_{t-1}^T \tag{3.22}
\]
The index to be minimized is $\sum_{k=1}^{t}\lambda^{t-k}e_k^2$; thus when $\lambda < 1$, more weight is placed on the most recent errors, and the estimation algorithm is able to track time-varying parameters.

The recursive least-squares method works well when the output noise is white. In the case of coloured noise, an alternative method is required. One such alternative is the instrumental variable method, whose recursive form is
\[
\hat\theta_t = \hat\theta_{t-1} + K_te_t
\]
\[
K_t = \frac{P_{t-1}z_{t-1}}{\lambda + \phi_{t-1}^TP_{t-1}z_{t-1}}
\]
\[
P_t = \frac{1}{\lambda}\Big[P_{t-1} - \frac{P_{t-1}z_{t-1}\phi_{t-1}^TP_{t-1}}{\lambda + \phi_{t-1}^TP_{t-1}z_{t-1}}\Big]
\]
where
\[
z_t = [y^*_{t-1}, y^*_{t-2},\cdots,y^*_{t-n}, u_{t-1}, u_{t-2},\cdots,u_{t-n}]^T,\qquad y^*_t = z_t^T\hat\theta_{t-1}
\]
By using the instrumental variable method, the estimates under coloured noise can be made unbiased. Other types of improved least-squares estimation methods include generalized least squares, extended least squares (see [37]), etc.

Estimation Based on Gradient Algorithm

Some theoretical results on STC stability are based on a constant-gain estimation algorithm (for example, [1], [14], [30]). The estimation form is
\[
\hat\theta_t = \hat\theta_{t-1} + \epsilon\,\phi_{t-1}e_t \tag{3.23}
\]
where $\epsilon$ is a constant representing the adaptation gain. In practice, the use of a constant-gain algorithm is simple but not ideal: the adaptation is slow and inaccurate. Here we will use an interlaced version of the standard least-squares algorithm (3.21) with a small constant gain to slow down the convergence speed of the estimation. The interlaced version is necessary when an implicit self-tuning control algorithm is used
(see example 3.2). The form of the estimator is
\[
\hat\theta_t = \hat\theta_{t-d} + \epsilon P_t\phi_{t-d}e_t
\]
\[
P_t = \frac{1}{\lambda}\Big[P_{t-d} - \frac{\epsilon^2P_{t-d}\phi_{t-d}\phi_{t-d}^TP_{t-d}}{\lambda + \epsilon^2\phi_{t-d}^TP_{t-d}\phi_{t-d}}\Big] \tag{3.24}
\]
where $d$ is the delay in the ARMA model (3.18). The convergence speed is affected by the adaptation gain $\epsilon$ and the delay $d$. This form is a gradient algorithm, just as the least-squares method is a gradient algorithm with faster convergence speed ($\epsilon = 1$).

Properties of the Estimation Algorithm

Parameter estimation is designed to minimize the error between the estimated model and the plant model, i.e., the prediction error. In case the estimated model and the plant model have the same order, the parameters of the plant can be estimated accurately. In practice, however, we often have to face the fact that the plant structure is not exactly known. This is the model-mismatch case; we also say that there exist unmodelled dynamics.

In case there is model mismatch, the dimension of the estimated parameter vector $\hat\theta_t$ is lower than that of the true plant $\theta$. The above estimation algorithms then have the following properties:

1. The estimation result depends on the frequency content of the command.
2. If the order of persistent excitation of the input signal is lower than the estimated model order, the estimation result will not be unique and can be significantly influenced by the presence of noise.

For property 1, consider a linear discrete plant with transfer function $H(q^{-1})$; the estimated model is $\hat H(q^{-1})$. It can be shown from the minimization index that the estimation algorithm tends to produce $\hat H(e^{-j\omega}) = H(e^{-j\omega})$ at the frequencies $\omega$ where the regression vector $\phi$ (input and output of the plant) is rich in the frequency domain. Since there is model mismatch, $H(q^{-1})$ and $\hat H(q^{-1})$ are not able to match at every frequency. Only at the frequencies where the command is rich enough will the estimated model be accurate.
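A minimal sketch of the recursive least-squares update (3.21), applied to an assumed first-order plant with no unmodelled dynamics, so the estimates should converge to the true parameters:

```python
import numpy as np

# One step of RLS (3.21) with forgetting factor lam.
def rls_step(theta, P, phi, y, lam=0.98):
    denom = lam + phi @ P @ phi
    K = P @ phi / denom
    e = y - phi @ theta                      # prediction error
    theta = theta + K * e
    P = (P - np.outer(P @ phi, phi @ P) / denom) / lam
    return theta, P

# Hypothetical plant y_t = 0.8 y_{t-1} + 0.5 u_{t-1}, white-noise input.
a_true, b_true = 0.8, 0.5
rng = np.random.default_rng(1)
u = rng.standard_normal(400)
y = np.zeros(401)
for t in range(1, 401):
    y[t] = a_true * y[t - 1] + b_true * u[t - 1]

theta = np.zeros(2)
P = 100.0 * np.eye(2)
for t in range(1, 401):
    phi = np.array([y[t - 1], u[t - 1]])
    theta, P = rls_step(theta, P, phi, y[t])
```

Since the model order matches the plant and the input is persistently exciting, the estimate settles on the true parameters; under model mismatch or poor excitation it would not, as discussed below.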
For property 2, the definition of persistent excitation is as follows:

Definition: Let the regression vector
\[
\phi_t = [y_{t-1},\cdots,y_{t-n}, u_{t-d},\cdots,u_{t-d-n+1}]^T
\]
contain the $2n$ inputs and outputs of the plant (3.18). The input $u_t$ is said to be persistently exciting of order $2n$ if, for some positive $\alpha_1$, integer $S$, and every integer $j$, there holds
\[
\sum_{k=j}^{j+S}\phi_k\phi_k^T \ge \alpha_1I \tag{3.25}
\]

A command signal is said to be persistently exciting of order $2n$ if (3.25) holds when $u_t$ is excited by this command signal. A non-zero constant command is persistently exciting of order 1. A signal composed of $m$ different sine-wave frequencies, $\sum_{i=1}^{m}a_i\sin(\omega_it)$ with $a_i\sin(\omega_it) \ne 0$, $i = 1,\cdots,m$, is exciting of order $2m$. Let the excitation order of a deterministic input signal be $n_p$ and the number of parameters to be estimated be $n$. If $n_p < n$, then the estimated parameters form a vector in $n$-dimensional space with $n - n_p$ degrees of freedom. In this case, the input signal is not persistently exciting for the estimation model.

In case the command signal is not persistently exciting, the noise will increase the excitation order. The frequency content of the noise signal is undesired and varying; thus the estimation result is not unique and is dominated by the noise in the $n - n_p$ dimensional space. The following example demonstrates this point:

Example 3.1: Consider the continuous plant given by Rohrs:
\[
G(s) = \frac{2}{s+1}\cdot\frac{229}{s^2+30s+229} \tag{3.26}
\]
When sampled with $T_s = 0.2$ sec, the discrete transfer function is
\[
H(q^{-1}) = \frac{(0.158 + 0.165q^{-1} + 0.00708q^{-2})q^{-1}}{1 - 0.910q^{-1} + 0.0776q^{-2} - 0.00203q^{-3}}
\]
Now we use a first-order estimation model, i.e.,
\[
\hat y_t = \hat ay_{t-1} + \hat bu_{t-1}
\]
and all the initial regression vector values are zero. The forgetting factor in the estimation is 0.98. (1).
We apply the input $u_t = \cos(t)$; the output noise is $N(0, 0.02)$. The input signal has excitation order 2, which is a persistently exciting case. We recorded the first 500 samples as in Fig. 3.3(a), where the converged parameters are 1.63 and 0.46, respectively.

(2). Under the same output noise, we apply a step input. The excitation order is $n_p = 1$. Theoretically, the estimated parameters are not unique because persistent excitation is not satisfied [3]. However, the DC gain of the estimated model is unique; this restricts the parameters $\hat a$, $\hat b$ to satisfy $\hat b = 2.0(1-\hat a)$. The first 500 samples of the estimation process are shown in Fig. 3.3(b), where the estimated parameters do not settle.

Thus, the persistent excitation condition is important for a useful estimation result in the model-mismatched case.

[Figure 3.3: (a) Estimated parameters in the persistently exciting case. (b) Estimated parameters when persistent excitation is not satisfied.]

3.3 Self-tuning Controller Design

3.3.1 Pole-zero Placement Design

Basic Algorithm

For a discrete plant (3.18), the degree of $A(q^{-1})$ is $n$ and the degree of $B(q^{-1})$ is $n-1$. The input, output and command signals of the process are denoted $u_t$, $y_t$, and $w_t$, respectively. It is desired to find a controller such that the closed loop is stable and that the transfer function from the command $w_t$ to $y_t$ is given by
\[
H_m(q^{-1}) = \frac{B_m(q^{-1})}{A_m(q^{-1})}
\]
where $A_m$, $B_m$ are coprime and $A_m$ is monic. The zeros of $A_m$ are assumed to be inside the unit disk. A general linear regulator is given by
\[
R(q^{-1})u_t = T(q^{-1})w_t - S(q^{-1})y_t \tag{3.27}
\]
The pole-zero placement algorithm is as follows (Astrom and Wittenmark, [7]):

Step 1: Solve the Diophantine equation
\[
AR_1 + q^{-d}S = A_mA_0 \tag{3.28}
\]
with respect to $R_1$, $S$.

Step 2: Let
\[
R = R_1B \tag{3.29}
\]
\[
T = A_0B_m \tag{3.30}
\]
where $A_0$ is a polynomial used as an observer and all its zeros are in the restricted stability region. Here two conditions concerning the degrees of $A_m$, $B_m$, $A_0$ are the following:
\[
\deg(A_m) - \deg(B_m) \ge \deg(A) - \deg(B)
\]
\[
\deg(A_0) \ge 2\deg(A) - \deg(A_m) - \deg(B) - 1
\]
Note that all zeros of $A_0$ and $B$ are cancelled in the process; thus $B$ is required to have stable zeros, i.e., the discrete plant must be minimum phase. It will be shown later that this is a strict requirement, especially in the case of fast sampling.

Implicit Algorithm

(3.28)-(3.30) form an explicit (indirect) algorithm: the controller parameters $S$, $R$, $T$ are calculated explicitly according to the estimation result and applied in (3.27) to obtain the control input $u_t$ of the plant. For the implicit algorithm, applying (3.28) to the plant, we have
\[
A_0A_my_t = Sy_{t-d} + Ru_{t-d} \tag{3.31}
\]
(3.31) can be used as the estimation model, where $A_0$, $A_m$ are known and $R$, $S$ are to be estimated. Let $R = r_0 + r_1q^{-1} + \cdots + r_{n+d-2}q^{-(n+d-2)} = r_0 + R'q^{-1}$; the control law is
\[
u_t = \frac{1}{r_0}(A_0B_mw_{t+d} - Sy_t - R'u_{t-1}) \tag{3.32}
\]
where the estimated parameters are directly applied. The implicit algorithm is also called the direct algorithm.

Stability Conditions for the Model-Order Matched Case

The pole-zero placement STC algorithm was proved to be stable under the following four assumptions in the early 1980's (see [21]):

1. The time delay $d$ of the discrete plant is known;
2. Upper bounds on the plant order are known;
3. $B$ has all its zeros inside the unit disc;
4. The sign of $b_1 = r_0$ is known.
Example 3.2: Consider the plant
\[
H(q^{-1}) = \frac{B(q^{-1})q^{-d}}{A(q^{-1})} \tag{3.33}
\]
Choose $A_m = 1$, $B_m = 1$, and $A_0 = 1$. Then the Diophantine equation (3.28) becomes
\[
AR_1 + q^{-d}S = 1
\]
and (3.30) becomes $T = 1$. Thus the estimation model is
\[
y_t = Sy_{t-d} + Ru_{t-d} \tag{3.34}
\]
The orders of the polynomials $R$ and $S$ are $n+d-2$ and $n-1$ respectively, where $n$ is the assumed model order of the plant (which may be lower than the true order). Thus there is a total of $2n+d-1$ parameters to be estimated. Let us express (3.34) as
\[
y_t = s_1y_{t-d} + s_2y_{t-d-1} + \cdots + r_1u_{t-d} + r_2u_{t-d-1} + \cdots
\]
Then the explicit control law is
\[
u_t = \frac{1}{\hat r_1}(w_{t+d} - \hat s_1y_t - \hat s_2y_{t-1} - \cdots - \hat r_2u_{t-1} - \cdots) \tag{3.35}
\]
where $\hat r_i$, $\hat s_i$ are the estimated parameters and $w_t$ is the command. $\square$

Factorization in Pole-zero Placement

The above pole-zero placement cancels all the zeros of the discrete plant. When the plant is nonminimum phase, this algorithm cannot be applied, because unstable zeros would be cancelled. However, we can factorize $B$ into two parts: $B^+$ (with all zeros inside the stable region) and $B^-$ (with zeros outside the stable region), $B = B^+B^-$. Only $B^+$ is cancelled. Then $B_m$ has to be of the form $B_m = B_m'B^-$. The design algorithm (3.28)-(3.30) becomes
\[
AR_1 + q^{-d}B^-S = A_mA_0 \tag{3.36}
\]
\[
R = R_1B^+ \tag{3.37}
\]
\[
T = A_0B_m' \tag{3.38}
\]
The controller has the same form as (3.27), but here we should notice that the degree of $R_1$ is higher ($d + \deg B^- - 1$). In this case, the implicit algorithm is difficult to design, for
\[
A_mA_0y_t = B^-Sy_{t-d} + AR_1y_t = B^-Sy_{t-d} + R_1B^+B^-u_{t-d} = B^-[Sy_{t-d} + B^+R_1u_{t-d}] \tag{3.39}
\]
The practical difficulties of using the implicit algorithm are: (1) the order of $B^+$ is not known a priori; (2) $B^-$ is difficult to factorize from the ARMA model.
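The Diophantine step (3.28) of the explicit design reduces to a linear system in the coefficients of $R_1$ and $S$; a sketch by coefficient matching follows. The numerical polynomials are illustrative assumptions, and polynomials are held as coefficient arrays in ascending powers of $q^{-1}$.

```python
import numpy as np

# Solve A*R1 + q^{-d}*S = Ac  (the Diophantine equation (3.28)) by matching
# coefficients of q^{-1}.  A is monic of degree n, R1 is monic of degree d-1,
# S has degree n-1, and Ac = A_m*A_0 is padded to degree n+d-1.
def diophantine(a, ac, d):
    n = len(a) - 1
    N = n + d - 1                        # number of non-leading coefficients
    ac = np.concatenate([ac, np.zeros(N + 1 - len(ac))])
    M = np.zeros((N, N))
    rhs = ac[1:] - np.concatenate([a[1:], np.zeros(N - n)])  # move A*1 to RHS
    for col in range(d - 1):             # unknowns r_1 .. r_{d-1}
        for j in range(n + 1):
            row = col + j                # coefficient of q^{-(col+1+j)}
            if row < N:
                M[row, col] += a[j]
    for col in range(n):                 # unknowns s_0 .. s_{n-1}
        M[d + col - 1, d - 1 + col] += 1.0
    x = np.linalg.solve(M, rhs)
    return np.concatenate([[1.0], x[:d - 1]]), x[d - 1:]   # R1, S

# Example: A = 1 - 0.8 q^{-1}, delay d = 2, desired Ac = 1 - 0.5 q^{-1}.
r1, s = diophantine(np.array([1.0, -0.8]), np.array([1.0, -0.5]), 2)
```

For this example, matching coefficients by hand gives $R_1 = 1 + 0.3q^{-1}$ and $S = 0.24$, which the solver reproduces.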
3.3.2 Minimum Variance Control

Basic Algorithm

Consider the discrete plant with delay $d$ and white noise $e_t$:
\[
A(q^{-1})y_t = B(q^{-1})u_{t-d} + C(q^{-1})e_t \tag{3.40}
\]
The optimal strategy for the minimum variance criterion $V_t = Ey_t^2$ is
\[
u_t = -\frac{G(q^{-1})}{F(q^{-1})B(q^{-1})}y_t \tag{3.41}
\]
where $F(q^{-1})$, $G(q^{-1})$ are determined from the Diophantine equation
\[
C(q^{-1}) = A(q^{-1})F(q^{-1}) + G(q^{-1})q^{-d} \tag{3.42}
\]
Then the closed-loop transfer function results in
\[
y_t = F(q^{-1})e_t \tag{3.43}
\]
When using (3.41), it is necessary that:

1. $B$ has all its zeros inside the unit disk; thus the discrete plant should be minimum phase.
2. $C$ has all its zeros inside the unit circle. This is realizable by using the spectral factorization theorem (chapter 6 of [8]).

Implicit Algorithm

Applying both sides of (3.42) to $y_t$, using the plant (3.40) and ignoring the output noise, we obtain
\[
C(q^{-1})y_t = F(q^{-1})B(q^{-1})u_{t-d} + G(q^{-1})y_{t-d} \tag{3.44}
\]
We can use (3.44) as the estimation model, and the estimated parameters are directly used in (3.41) for the control input calculation. This procedure is similar to the implicit pole-zero cancellation algorithm.

Factorization in MVC

In case $B$ has unstable zeros, let $B = B^+B^-$, with $B^+$ and $B^-$ defined as before. The control law is
\[
u_t = -\frac{G(q^{-1})}{F(q^{-1})B^+(q^{-1})}y_t \tag{3.45}
\]
where the Diophantine equation is
\[
C(q^{-1}) = A(q^{-1})F(q^{-1}) + q^{-d}G(q^{-1})B^-(q^{-1})
\]
The degree of $F$ is $d + \deg(B^-) - 1$, which is higher than the degree of $F$ in (3.42). The closed loop has the same form as (3.43), but here it is only a suboptimal strategy: the error variance is larger than the minimum variance of the all-zeros-cancelled case. In chapter 5 of [9], this strategy is called moving average control, for the controlled output is a moving average process of order $d + \deg B^- - 1$. For the implicit algorithm, we have $Cy_t = FBu_{t-d}$
$+\ GB^-y_{t-d} = B^-[FB^+u_{t-d} + Gy_{t-d}]$. Here $B^-$, $FB^+$ and $G$ have to be estimated, and we meet the same difficulty as in (3.39) of the pole-placement case.

3.3.3 Other Algorithms

For the plant (3.33), let
\[
z_t = P(q^{-1})y_t + q^{-d}Q(q^{-1})u_t
\]
where $P$, $Q$ are weighting polynomials. Thus
\[
Az_t = (PB + QA)u_{t-d} = B'u_{t-d}
\]
Let the controller be
\[
w_{t+d} = Su_t + Rz_t = \phi_t^T\theta
\]
where $R$, $S$ are to be estimated and
\[
AS^* + q^{-d}R^*B' = C
\]
Thus the closed-loop transfer function is $C^{-1}B$ (see [14]).

Predictive Control

This is a robust controller design method. We will introduce the basic algorithm ([12], [40]) in Ch. 6, and discuss the relation between the control horizon and the sampling interval.

Dual Control

This is a more complicated control law which optimally adds caution and probing to the certainty equivalence control. It cannot be computed, except for very simple systems (see [21]).

3.4 Stability Condition for Adaptive Control System

The stability theory of the self-tuning control algorithm was first developed in the late 1970's under the assumption that the estimation model order is not lower than the actual plant order. However, in practice there can always exist model mismatch. In this section, we derive stability conditions for STC under the estimation algorithm (3.24) and the pole placement of example 3.2. We also drop the assumption that the estimation order is larger than or equal to the plant order. This result is an extension of the theoretical work in [1] and [13], and thus has similar conclusions. However, here, by using the signal expansion, if an operator $H_2$ is SPR, we are able to find an explicit upper limit on the adaptation gain $\epsilon$ such that stable STC is ensured. The work in [1] and [13] will also be shown in sections 3.4.2 and 3.4.3.
Compared with those earlier theoretical works, our result is the same, with the addition of an explicit expression for the upper limit of the adaptation gain instead of an indication that it is sufficiently small.

3.4.1 Main Stability Theorem

Theorem 3.1: Consider an adaptive system using the estimation algorithm (3.24) and the pole-zero cancellation control algorithm shown in example 3.2. The delay $d$ of the plant is assumed known, but there may be unmodelled dynamics (i.e., the assumed order is lower than the true order $n$ in example 3.2). Let the persistent excitation condition (PE) be satisfied for the estimation model, and let $C = BS^*q^{-d} + AR^*$ and $H_2 = C^{-1}B$, where $R^*$, $S^*$ are polynomials with the same orders as $R$, $S$. If $H_2$ is Strictly Positive Real (SPR), then for
\[
\epsilon \in \Big(0,\ \inf_n\frac{2\eta_n}{1+\lambda_1\eta_n^2}\Big]
\]
the adaptive system is stable. Here $\eta_n$ is the distance between the Nyquist curve of $H_2$ and the imaginary axis, as shown in Fig. 3.4, and $\lambda_1$ is the maximum eigenvalue of the initial error covariance matrices $P_i$ in the estimator (3.24), $i = -d+1,\cdots,0$.

[Figure 3.4: Geometric Explanation of $\eta_n$]

Proof of the Theorem

The proof of the theorem can be divided into four parts. First, we will establish an error model. Then we are going to introduce normalization and discuss the properties
Define: error e = y — w , 9 = 0 — 0*. The implicit t  t  t  t  t  pole-zero cancellation law is w = <f>J_ 9 _ , thus t  d  t  d  =  C- B[f _ {-~9 - + 9 _ )}-w  =  -H (j)J_ 9 _ + (H - l)w  l  t  d  2  t  d  t  d  d  t  2  d  t  t  Denote ij) — <f>f_ 9 _ , w' = (H — l)iw , we have the error model: t  d  t  d  t  t  2  e = t  -H TJ> 2  t  + w'  t  (3.46)  Here, if H is stable, and w is bounded, then w' is bounded. The block diagram is as 2  t  t  shown in Fig. 3.5. 2. Normalization in Estimation Algorithm As in [30], we use the estimation algorithm (3.24) with normalized regression vector: 4>t  =  P7 1/2<f>t  (3.47)  Chapter 3. Adaptive Control Algorithm and Stabihty  43  H  2  Estimation  Figure 3.5: Error Model where the normalization factor: p = p,p ^ +max(\(j) \ ,p) 2  t  t  t  (3.48)  and p > 0, p. G (0,1). Thus, \U =  ,  '  W  < i  .,  fip + max{\<pt\ , p) 2  t  Note, e,  P are also normalized by e< = p 7  -1/2 t  e  t  -1/2 /  = Pt Vt Pt = Ptp  t  Then the estimation algorithm (3.24) becomes: 0 =  + eP 4>t- e t  t  A = P  t  -  d  d  - f ^ f - f -  t  d  1 + z $[_ P - <bt-d 2  d  t  d  (3.49) (3.50)  44  Chapter 3. Adaptive Control Algorithm and StabiHty  We can also rewrite (3.50) as: Pr^Pr-d thus A  m o x  (P ) < A t  m a I  (3-51)  + e'h-dfl-i  ( P - d ) j set A be the maximum eigenvalue of the initial error t  a  covariance matrix P,-, i — —d + 1, • • •, 0, which are all positive. Then <J>t-dPt-d4>t-d < Ai Define: v  t  = efPr'Ot =  V -d + e $ + 2eTp e + e [$_ P tt-d}e 2  2  t  t  t  d  t  2  (3.52)  then J2 - E Vi = i=N-d+l i=-d+l t=l for any N (N can be oo). Where V  + 2eje + e a e ) 2  i  °~t  -  _  t  2  t  t  (3.53)  ¥t-d.Pt$t-d <K-dPt-d<j>t-d  l + e 4>J_ Pt-d$t-d 3  d  o-t € (0,Ai). Thus 0  ~  N  + 2 e ^ e + e Aie ]  E Vi<Yy$ i=-d+l t=l  2  t  2  ViV  (3.54)  3. Signal Expansion Let ipt be a periodic signal based on tpj in [0,r], similarly as x based on x[ as in t  example 2.2. 
Express $\bar\psi_t$ using a Fourier series:

$\bar\psi_t = a_0 + \sum_{n=1}^{\infty} a_n\cos(nwt) + \sum_{n=1}^{\infty} b_n\sin(nwt)$    (3.55)

where the periodic extension and the truncated $\bar\psi_t$ are equal on $[0,\tau]$ and $\tau$ can go to $\infty$. Here $a_n$, $b_n$ are functions of $\tau$ and $w = 2\pi/\tau$.

Notice that $\bar\psi_t$ is a discrete-time signal; thus in (3.55) the high-frequency terms with $n = k\tau, k\tau+1, \cdots, k\tau+\tau-1$ ($k = 1,2,\cdots$) fold over into the terms $n = 0,1,2,\cdots,\tau-1$. Thus (3.55) can be rewritten as:

$\bar\psi_t = \bar a_0 + \sum_{n=1}^{\tau-1}\bar a_n\cos(nwt) + \sum_{n=1}^{\tau-1}\bar b_n\sin(nwt)$    (3.56)

where

$\bar a_n = a_n + a_{n+\tau} + a_{n+2\tau} + \cdots, \qquad \bar b_n = b_n + b_{n+\tau} + b_{n+2\tau} + \cdots$

for $n = 0,1,2,\cdots,\tau-1$. Denote $H_2(jnw) = H(nw)e^{j\varphi_n}$; then

$H_2\bar\psi_t = H(0)\bar a_0 + \sum_{n=1}^{\tau-1}H(nw)\bar a_n\cos(nwt+\varphi_n) + \sum_{n=1}^{\tau-1}H(nw)\bar b_n\sin(nwt+\varphi_n)$

To calculate $\sum_t \bar\psi_t(H_2\bar\psi_t)$, we claim that, with $w = 2\pi/\tau$,

$S_n = \sum_{t=1}^{\tau}\cos(nwt+\varphi_n) = 0$

Indeed, multiplying by $2\sin(n\pi/\tau)$ and using $2\cos x\sin\delta = \sin(x+\delta)-\sin(x-\delta)$ with $\delta = nw/2 = n\pi/\tau$, the sum telescopes:

$2S_n\sin(\tfrac{n\pi}{\tau}) = \sum_{t=1}^{\tau}\big[\sin(nw(t+\tfrac12)+\varphi_n) - \sin(nw(t-\tfrac12)+\varphi_n)\big] = \sin(2n\pi + \tfrac{n\pi}{\tau}+\varphi_n) - \sin(\tfrac{n\pi}{\tau}+\varphi_n) = 0$

For $n = 1,2,\cdots,\tau-1$, $\sin(n\pi/\tau) \ne 0$; thus $S_n = 0$.
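This orthogonality claim (and the corresponding sine sum used next) is easy to confirm numerically; a quick sketch over one period, with an arbitrary period length and phase:

```python
import math

tau = 16                 # period length (number of samples)
w = 2 * math.pi / tau
phi = 0.7                # an arbitrary phase, standing in for phi_n

for n in range(1, tau):
    S_cos = sum(math.cos(n * w * t + phi) for t in range(1, tau + 1))
    S_sin = sum(math.sin(n * w * t + phi) for t in range(1, tau + 1))
    # both sums vanish for n = 1, ..., tau-1, as the telescoping argument shows
    assert abs(S_cos) < 1e-9 and abs(S_sin) < 1e-9
```

Only the folded frequencies $n = 0$ (the mean) survive summation over a full period, which is what makes the Parseval-type identities below work.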
and similarly

$\sum_{t=1}^{\tau}\sin(nwt+\varphi_n) = 0$

Thus

$\sum_{t=1}^{\tau}\bar\psi_t(H_2\bar\psi_t) = \tau\Big[H(0)\bar a_0^2 + \frac{1}{2}\sum_{n=1}^{\tau-1}H(nw)\cos(\varphi_n)(\bar a_n^2+\bar b_n^2)\Big]$

and similarly,

$\sum_{t=1}^{\tau}\bar\psi_t^2 = \tau\Big[\bar a_0^2 + \frac{1}{2}\sum_{n=1}^{\tau-1}(\bar a_n^2+\bar b_n^2)\Big]$

$\sum_{t=1}^{\tau}(H_2\bar\psi_t)^2 = \tau\Big[H(0)^2\bar a_0^2 + \frac{1}{2}\sum_{n=1}^{\tau-1}H(nw)^2\cos^2(\varphi_n)(\bar a_n^2+\bar b_n^2)\Big]$

Also let $\bar w'_t$ be the periodic signal based on the truncated $w'_t$ on $t \in [0,\tau]$,

$\bar w'_t = a'_0 + \sum_{n=1}^{\tau-1}[a'_n\cos(nwt) + b'_n\sin(nwt)]$

then

$\sum_{t=1}^{\tau}\bar w'_t\bar\psi_t = \tau\Big[\bar a_0 a'_0 + \frac{1}{2}\sum_{n=1}^{\tau-1}(\bar a_n a'_n + \bar b_n b'_n)\Big]$

We use the normalized version of (3.46), i.e. $\bar e_t = -H_2\bar\psi_t + \bar w'_t$, in the extended space, with $\bar\psi_t$, $\bar w'_t$ as above on $[0,\tau]$. Define $\eta_n = H(nw)\cos(\varphi_n)$, which is the real part of $H_2(jnw)$. The right side of (3.54) can then be written as:

$\sum_{t=1}^{\tau}[\epsilon^2\bar\psi_t^2 + 2\epsilon\bar\psi_t\bar e_t + \epsilon^2\lambda_1\bar e_t^2] = \tau\Big\{(\epsilon^2-2\epsilon\eta_0+\epsilon^2\lambda_1\eta_0^2)\bar a_0^2 + \frac{1}{2}\sum_{n=1}^{\tau-1}(\epsilon^2-2\epsilon\eta_n+\epsilon^2\lambda_1\eta_n^2)(\bar a_n^2+\bar b_n^2)\Big\}$
$\qquad + \tau\Big\{2(\epsilon-\epsilon^2\lambda_1\eta_0)\bar a_0 a'_0 + \sum_{n=1}^{\tau-1}(\epsilon-\epsilon^2\lambda_1\eta_n)(\bar a_n a'_n+\bar b_n b'_n)\Big\} + \epsilon^2\lambda_1\tau\Big\{(a'_0)^2 + \frac{1}{2}\sum_{n=1}^{\tau-1}((a'_n)^2+(b'_n)^2)\Big\}$

where we have used the expressions of $\sum\bar\psi_t(H_2\bar\psi_t)$, $\sum\bar\psi_t^2$, $\sum(H_2\bar\psi_t)^2$ and $\sum\bar w'_t\bar\psi_t$ above. Define

$\alpha_n = \epsilon - 2\eta_n + \epsilon\lambda_1\eta_n^2$    (3.57)

$\beta_n = 1 - \epsilon\lambda_1\eta_n$    (3.58)

for $n = 0,1,2,\cdots,\tau-1$; then we have from (3.54):

$-\sum_{i=-d+1}^{0}V_i \le \epsilon\tau\Big[\alpha_0\bar a_0^2 + \frac{1}{2}\sum_{n=1}^{\tau-1}\alpha_n(\bar a_n^2+\bar b_n^2)\Big] + 2\epsilon\tau\Big[\beta_0\bar a_0 a'_0 + \frac{1}{2}\sum_{n=1}^{\tau-1}\beta_n(\bar a_n a'_n+\bar b_n b'_n)\Big] + \epsilon^2\lambda_1\tau\Big[(a'_0)^2 + \frac{1}{2}\sum_{n=1}^{\tau-1}((a'_n)^2+(b'_n)^2)\Big]$    (3.59)

4. Stability Condition

As shown in Fig. 3.4, $\eta_n$ is the real part of $H_2(jnw)$, which is always positive if $H_2$ is SPR. Under the SPR $H_2$ condition, we can choose a positive $\epsilon$ such that:

$\epsilon \le \frac{2\eta_n}{1+\lambda_1\eta_n^2}, \quad n = 0,1,\cdots,\tau-1$    (3.60)

so that in (3.57), $\alpha_n < 0$, $n = 0,1,\cdots,\tau-1$.
Now set

$\bar\alpha = \max_n\{\alpha_n\}, \qquad \bar\beta = \max_n\{|\beta_n|\}$

Then from (3.59) we have

$-\sum_{i=-d+1}^{0}V_i \le \epsilon\bar\alpha\tau\Big[\bar a_0^2 + \frac{1}{2}\sum_{n=1}^{\tau-1}(\bar a_n^2+\bar b_n^2)\Big] + 2\epsilon\bar\beta\tau\Big[|\bar a_0 a'_0| + \frac{1}{2}\sum_{n=1}^{\tau-1}|\bar a_n a'_n+\bar b_n b'_n|\Big] + \epsilon^2\lambda_1\tau\Big[(a'_0)^2 + \frac{1}{2}\sum_{n=1}^{\tau-1}((a'_n)^2+(b'_n)^2)\Big]$

$= \epsilon\bar\alpha\|\bar\psi_\tau\|^2 + 2\epsilon\bar\beta(|\bar\psi_\tau|,|\bar w'_\tau|) + \epsilon^2\lambda_1\|\bar w'_\tau\|^2 \le \epsilon\bar\alpha\|\bar\psi_\tau\|^2 + 2\epsilon\bar\beta\|\bar\psi_\tau\|\,\|\bar w'_\tau\| + \epsilon^2\lambda_1\|\bar w'_\tau\|^2$    (3.61)

Notice that $\bar\alpha$ is negative, $\epsilon$, $\lambda_1$, $\bar\beta$ are all positive, and $\bar w'_t$ is bounded. If $\bar\psi_t$ is unbounded, we can always find unbounded $\|\bar\psi_\tau\|$ as $\tau$ goes to $\infty$; thus the first term in (3.61) is the dominant term and the right side will go to $-\infty$ as $\tau$ goes to $\infty$, which cannot be true. Thus we have proved that $\bar\psi_t$ and $\psi_t$ are bounded. □

(3.60) can be conservatively written as:

$\epsilon \le \frac{2\eta_{min}}{1+\lambda_1\eta_{max}^2}$

where $\lambda_1$ is given by the initialization of the estimation, and $\eta_{min}$ and $\eta_{max}$ are the minimum and maximum distance between the Nyquist curve of $H_2$ and the imaginary axis, as shown in Fig. 3.4.

Choosing $\epsilon$

Define

$f(\eta_n) = \frac{2\eta_n}{1+\lambda_1\eta_n^2}$

We will show that the maximum $\epsilon$ satisfying (3.60) is either $f(\eta_{min})$ or $f(\eta_{max})$. Since

$f'(\eta_n) = \frac{2(1-\lambda_1\eta_n^2)}{(1+\lambda_1\eta_n^2)^2}$

there is only one local maximum of $f$ on $\eta_n \in (0,+\infty)$, at $\eta_n = 1/\sqrt{\lambda_1}$. We obtain:

1. if $\eta_{max} < 1/\sqrt{\lambda_1}$, i.e. $f'(\eta_{max}) > 0$, then $\epsilon_{max} = f(\eta_{min})$;

2. if $\eta_{min} > 1/\sqrt{\lambda_1}$, i.e. $f'(\eta_{min}) < 0$, then $\epsilon_{max} = f(\eta_{max})$;

otherwise $\epsilon_{max} = \min[f(\eta_{min}),\,f(\eta_{max})]$.

Generally, since $f(\eta_{min}) \le f(\eta_{max})$ exactly when $\lambda_1\eta_{min}\eta_{max} \le 1$: if

$\lambda_1\eta_{min}\eta_{max} < 1$

then $\epsilon_{max} = f(\eta_{min})$; otherwise, $\epsilon_{max} = f(\eta_{max})$.

In summary, we list the conditions required for a stable adaptive control system as follows:

1. The regression vector satisfies the PE condition.

2. $H_2$ is SPR.

3. $\epsilon$ is limited by the upper bound (3.60).

Notice that condition 2 implies that $H_2$ is stable. The stability condition derived here is sufficient.
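The endpoint rule for choosing $\epsilon_{max}$ follows from the unimodality of $f$; a small sketch (the function name is illustrative, and the values $\eta_{min} = 0.165$, $\eta_{max} = 1.26$, $\lambda_1 = 1$ are borrowed from the simulation in section 3.5):

```python
def eps_max(eta_min, eta_max, lam1):
    """Largest adaptation gain satisfying (3.60) for all eta in [eta_min, eta_max]."""
    f = lambda eta: 2.0 * eta / (1.0 + lam1 * eta * eta)
    # f rises to its single maximum at 1/sqrt(lam1) and then falls, so its
    # minimum over an interval is attained at an endpoint; moreover
    # f(eta_min) <= f(eta_max) exactly when lam1*eta_min*eta_max <= 1.
    return f(eta_min) if lam1 * eta_min * eta_max < 1.0 else f(eta_max)

e = eps_max(0.165, 1.26, 1.0)   # lam1*eta_min*eta_max < 1 here, so e = f(eta_min)
# cross-check against a brute-force minimum of f over the interval
grid = [0.165 + k * (1.26 - 0.165) / 1000 for k in range(1001)]
assert abs(e - min(2 * x / (1 + x * x) for x in grid)) < 1e-6
```

The rule reduces the search over all excited frequencies to evaluating $f$ at the two extreme distances read off the Nyquist plot of $H_2$.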
In practice, even for an unstable plant, as long as the SPR $H_2$ condition is satisfied and the system with the initial controller parameters is stable, the adaptive system will remain stable under the small adaptation gain derived in the theorem.

3.4.2 Result from Using the Conic Sector Criterion

In the error model (3.46) (Fig. 3.5), the relation $H_2$ defined as in Theorem 3.1 is a linear operator. Suppose the relation between $\psi_t$ and $e_t$ through parameter estimation is denoted by $H_1$, i.e., $\psi_t = H_1 e_t$, where $H_1$ is not necessarily linear. Assume $V_0 = 0$, $d = 1$, $N \to \infty$; we have from (3.54):

$\sum_{t=1}^{\infty}[\epsilon^2\bar\psi_t^2 + 2\epsilon\bar\psi_t\bar e_t + \epsilon^2\lambda_1\bar e_t^2] \ge 0$

or

$\langle\psi_t - a_1 e_t,\ \psi_t - b_1 e_t\rangle \ge 0$

where

$a_1 = \frac{-1-\sqrt{1-\epsilon^2\lambda_1}}{\epsilon}, \qquad b_1 = \frac{-1+\sqrt{1-\epsilon^2\lambda_1}}{\epsilon}$

According to Theorem 2.1, stability requires $H_2$ to lie inside the cone

$\Big\{\frac{\epsilon}{1+\sqrt{1-\epsilon^2\lambda_1}},\ \frac{\epsilon}{1-\sqrt{1-\epsilon^2\lambda_1}}\Big\}$

If $\epsilon \to 0$ is sufficiently small and $\lambda_1 \to 0$ (an additional assumption), it can be concluded that for SPR $H_2$, the boundedness condition will be satisfied with $\epsilon$, $\lambda_1$ carefully chosen to be sufficiently small. In [13], a procedure for choosing $\epsilon$, $\lambda_1$ for a given $H_2$ is suggested. However, no explicit expression for $\epsilon$ is given there. Values for $\epsilon$, $\lambda_1$ are based on the conic condition of $H_2$, and they exist theoretically if $H_2$ is SPR.

Model Matched Case

In the model matched case, the orders of $R^*$ and $S^*$ are $n-1$. We can choose $R^* = B$, $S^* = q(1-A)$, so that $H_2 = 1$. In this case, the SPR condition is always satisfied. However, cancellation of unstable zeros is not allowed; thus stability requires $B$ to have all its zeros stable, i.e., the discrete plant must be minimum phase. This agrees with the early stability theory in [16], [21].

3.4.3 Using the Averaging Method

Consider $d = 1$; applying (3.46) to the estimation (3.24), we obtain:

$\tilde\theta_t = \tilde\theta_{t-1} - \epsilon P_t\phi_{t-1}H_2[\phi_{t-1}^T\tilde\theta_{t-1}] + \epsilon P_t\phi_{t-1}w'_t$    (3.63)

If $P = I$, i.e. the estimation uses a constant gain, assume $w_t = 0$ (so that $w'_t = 0$), and define $\bar R_t$
$= \phi_{t-1}H_2\phi_{t-1}^T$, and let $\epsilon$ be sufficiently small so that $\tilde\theta_t$ changes much more slowly than $\phi_{t-1}$; then (3.63) becomes:

$\tilde\theta_{t+1} = (I - \epsilon\bar R_t)\tilde\theta_t$    (3.64)

which is the same expression as (2.11). If $\epsilon$ is sufficiently small and $H_2$ is SPR, the adaptive system will be stable under a persistently exciting command.

3.5 Simulations

Example 3.3 For the continuous plant

$G(s) = \frac{2}{s+1}\cdot\frac{229}{s^2+30s+229}$

assume $T$ is the sampling period. The discrete plant is $H(q^{-1}) = q^{-1}B(q^{-1})/A(q^{-1})$, whose coefficients can be exactly calculated. We use (3.21) for parameter estimation, and only a first-order model is used. Denote $\hat a$, $\hat b$ the estimated parameters.

1. Estimation and SPR Properties

For sampling periods 0.3 sec, 0.24 sec and 0.18 sec, we apply the command input $\cos t$. This is a persistent excitation case. We let the simulation run for 500 sec. The estimated parameters are as in Table 3.1:

    T = 0.30 sec:  â = 0.792,  b̂ = 0.473
    T = 0.24 sec:  â = 0.831,  b̂ = 0.388
    T = 0.18 sec:  â = 0.871,  b̂ = 0.298

Table 3.1: Estimated parameters for different sampling frequencies

Express $H_2$ approximately by:

$H_2 = \frac{B}{\hat a B q^{-1} + \hat b A}$

Suppose $A(q^{-1}) = 1 - a_1q^{-1} - a_2q^{-2} - a_3q^{-3}$ and $B(q^{-1}) = b_1 + b_2q^{-1} + b_3q^{-2}$; then

$a_1 = e^{-T} + 2e^{-15T}\cos 2T$
$a_2 = -2e^{-16T}\cos 2T - e^{-30T}$
$a_3 = e^{-31T}$

$b_1 = 2 - 2.29e^{-T} + 0.29e^{-15T}(\cos 2T + 3.5517\sin 2T)$
$b_2 = 0.29e^{-T} - 4.29(1-e^{-T})e^{-15T}\cos 2T - 1.03(1+e^{-T})e^{-15T}\sin 2T - 0.29e^{-30T}$
$b_3 = (2.29 - 2e^{-T})e^{-30T} - 0.29e^{-16T}(\cos 2T - 3.5517\sin 2T)$

Figure 3.6: Nyquist curve of $H_2$ under different sampling

The Nyquist curves of $H_2$ obtained for the different sampling periods are shown in Fig. 3.6, where we can see that as the sampling frequency becomes faster, $H_2$ ceases to be SPR.

2.
Self-Tuning Control Results

When pole-zero cancellation is used, the adaptive control results are stable for $T_s = 0.3$ and $0.24$ sec. In Fig. 3.6, when $T_s = 0.3$ sec, $\eta_{max} = 1.260$ and $\eta_{min} = 0.165$; when $T_s = 0.24$ sec, $\eta_{max} = 1.394$ and $\eta_{min} = 0.029$. Here we choose $\lambda_1 = 1.0$; thus the values of $\epsilon_{max}$ according to (3.60) are 0.283 and 0.057 respectively. In the case $T_s = 0.24$ sec, the adaptation process is slow. However, this upper limit is only sufficient: we still get stable adaptive control output when $\epsilon = 1.0$. For $T_s = 0.18$ sec, however, the adaptive control output is unstable.

In this chapter, we have introduced some adaptive control design methods. The main stability theorem has been given. Compared with the earlier proofs given in [1], [13], [30], an upper limit for the adaptation rate is given explicitly. The bound on $\epsilon$ is a very conservative one: we have not found any examples in simulation where stability could only be achieved with the small adaptation gain. In [28], an analysis of the relation between robustness and adaptation gain was given using small gain theory. Although the adaptation gain derived there is sufficient for stability, it was indicated that to obtain maximum robustness it is not necessary to have the smallest $\epsilon$. However, if $\epsilon$ is sufficiently small, the robustness will be non-zero.

In summary, for a stable conventional pole-zero adaptive controller:

• For the model matched case, the discrete plant should be minimum phase.

• In general there is model mismatch; then the operator $H_2$ has to be SPR.

Chapter 4

Robust Adaptive Control: Slow Sampling

4.1 Why Use Slow Sampling?

The robustness of adaptive control systems depends on: (1) the difference between the discrete plant order and the estimation model order; (2) the discrete plant parameters (coefficients of the model); (3) the estimation algorithm; (4) the adaptation algorithm.
The sampling frequency obviously affects the parameters of the discrete plant. In [33] and [34], it was shown that with a sufficiently low sampling frequency, the difference between the discrete plant and the model is small, and thus the adaptive system is robust. In general, when sampling a continuous-time system, if the sampling frequency is high, the discrete system will be close to the continuous system with a zero-order hold. Slow sampling, however, effectively ignores the poles and zeros far beyond the corresponding Nyquist frequency. In this case, the discrete plant behaves as a low-order system, and thus the error due to model mismatch is small. In [5], it is indicated that an inversely unstable discrete system is possible with a sufficiently high sampling frequency even if the continuous plant is minimum phase. With a low sampling frequency, however, we can always obtain a minimum phase discrete plant provided the continuous plant is rational, stable and inversely stable. In 4.2, we will present some results on the zeros of sampled plants and propose a method to obtain the critical sampling for a minimum phase discrete plant. In this section, we will restrict the continuous plant to be rational, stable and minimum phase.

In chapter 3, it was shown that an SPR condition is essential for a stable adaptive control algorithm. We can also show that SPR $H_2$ requires the discrete plant to be inversely stable; however, this is not a sufficient condition for a stable STC. In 4.3, we will give an expression for the model error in $H_2$ and show an approximate way of choosing the critical sampling for SPR $H_2$. A simulation example of these relations is given in 4.4.

In the adaptive control algorithm used here, we will assume that a least-squares estimation algorithm is used. The controller is a pole-zero cancellation scheme as in example 3.2.
4.2 Choice of Sampling For Model Matched Case Adaptive Control

4.2.1 Definition and Background

A stable continuous-time system with a rational transfer function is minimum phase if it has only left-half plane zeros. Here, by extension, we shall define a stable discrete-time system as minimum phase if all its finite zeros are within the unit circle, i.e., if it is inversely stable. In many self-tuning control schemes, it is desirable that the plant be minimum phase (MP). An earlier discussion of the zeros of sampled systems shows that if there are more than two excess poles in the continuous transfer function of the plant, the sampled system will be non-minimum phase (NMP) if the sampling frequency is fast enough ([5]).

Theorem 4.1 (Åström, Hagander and Sternby, [5]) Let $G(s)$ be a rational function,

$G(s) = \frac{k(s-z_1)(s-z_2)\cdots(s-z_m)}{(s-p_1)(s-p_2)\cdots(s-p_n)}$    (4.65)

and $H(q)$ the corresponding pulse transfer function ($m < n$). Then, as the sampling period $T_s$ goes to 0, $m$ zeros of $H(q)$ go to 1 as $e^{z_iT_s}$, and the remaining $n-m-1$ zeros of $H(q)$ go to the zeros of $B_{n-m}(q)$,

where $B_k(q)$ is the numerator polynomial of the pulse transfer function of $G(s) = s^{-k}$. For example:

Numerator polynomial — Unstable zeros
$B_1(q) = 1$ — (none)
$B_2(q) = q + 1$ — $-1$
$B_3(q) = q^2 + 4q + 1$ — $-3.73$
$B_4(q) = q^3 + 11q^2 + 11q + 1$ — $-1$, $-9.899$
$B_5(q) = q^4 + 26q^3 + 66q^2 + 26q + 1$ — $-2.332$, $-23.20$
$B_6(q) = q^5 + 57q^4 + 302q^3 + 302q^2 + 57q + 1$ — $-1$, $-4.542$, $-51.22$

It has also been shown that slow enough sampling will always result in an MP discrete plant. The following theorem is also proved in [5].

Theorem 4.2 Let $G(s)$ in (4.65) be a strictly proper rational transfer function with $G(0) \ne 0$ and $Re(p_i) < 0$. Then all zeros of the pulse transfer function go to 0 as the sampling period $T_s$ goes to infinity.

Here we will discuss how slow this sampling should be.
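The numerator polynomials $B_k(q)$ tabulated under Theorem 4.1 can be generated for any $k$; a sketch (the finite-difference formula below, whose coefficients are the Eulerian numbers, reproduces the tabulated values):

```python
from math import comb

def B_coeffs(k):
    """Coefficients (highest power of q first) of B_k(q), the numerator of the
    pulse transfer function of G(s) = 1/s^k under zero-order-hold sampling:
    c_j = sum_i (-1)^i C(k+1, i) (j+1-i)^k, for j = 0..k-1 (Eulerian numbers)."""
    return [sum((-1) ** i * comb(k + 1, i) * (j + 1 - i) ** k for i in range(j + 1))
            for j in range(k)]

assert B_coeffs(2) == [1, 1]                       # q + 1
assert B_coeffs(3) == [1, 4, 1]                    # q^2 + 4q + 1
assert B_coeffs(4) == [1, 11, 11, 1]
assert B_coeffs(6) == [1, 57, 302, 302, 57, 1]
```

The unstable-zero column of the table is simply the set of roots of these polynomials outside the unit circle; e.g. $q^2+4q+1$ has roots $-2\pm\sqrt{3}$, i.e. $-3.73$ and $-0.27$.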
First, we present the main result in [5] as follows.

Theorem 4.3 Let $G(s)$ as in (4.65) be a strictly proper, rational transfer function with:

1. $Re(p_i) < 0$;
2. $G(0) \ne 0$;
3. $-\pi < \arg\{G(jw)\} < 0$, for $0 < w < \infty$.

Then all the zeros of the corresponding pulse transfer function $H(q^{-1})$ are stable.

Condition 3 of this theorem is sufficient and very strict: it requires the continuous plant to have at most two excess poles. Also, the conditions are not related to the sampling frequency. We now define the critical sampling frequency $w_c$ as follows.

Definition: The critical sampling frequency $w_c$ of a plant $G(s)$ is the largest frequency such that for any sampling frequency $w_s < w_c$, the sampled system is MP.

For adaptive control using pole-zero cancellation in the model matched case, the adaptive system is stable if the discrete plant is minimum phase. Thus, the critical sampling for a minimum phase discrete plant is very useful. In the following, we show that the critical sampling $w_c$ satisfies an equation whose solution can be obtained simply by using a relay. The critical sampling frequency can therefore be obtained without prior knowledge of the continuous plant.

4.2.2 Condition for an MP Discrete Plant

Suppose $G(s)$ is the continuous transfer function of a stable, proper, rational plant, all of whose poles and zeros are in the left half plane. Then the sampled plant is:

$H(q^{-1}) = (1-q^{-1})\,\mathcal{Z}\Big\{\frac{G(s)}{s}\Big\}$

Define:

$G_0(s) = \frac{G(s)}{s}$    (4.66)

Then, through the transform relation,

$\mathcal{Z}\{G_0(s)\} = \frac{1}{T}\sum_{m=-\infty}^{+\infty}G_0(s+jmw_s) \triangleq G^*(s)$    (4.67)

where $w_s$ is the sampling frequency and $T$ ($T > 0$) is the sampling period. Thus the discrete plant is:

$H(q^{-1}) = (1-q^{-1})G^*(q)$    (4.68)

In the frequency domain, the sampling process results in aliasing of the low frequency band by the high frequency components.
It is obvious that if $G(0) \ne 0$, the term $(1-q^{-1})$ in (4.68) will be cancelled, since the pole of $G_0(s)$ at $s = 0$ maps to a pole of $G^*(q)$ at $q = 1$. Thus the zeros of the sampled system will be the zeros of $G^*(s)$ through the transformation (4.67). Before we propose the main theorem, the following lemmas are introduced.

Lemma 4.1: If $G(s)$ is stable, all poles of $G^*(q)$ are stable except one on the unit circle.

[Proof] Let $\Re(x)$ denote the real part of $x$. Under the Z-transform, a pole of $G_0(s)$ at $s = -p_i$ with $\Re(p_i) > 0$ is transformed to the pole $e^{-p_iT}$, with $|e^{-p_iT}| < 1$. The pole $s = 0$ (introduced by $1/s$) is transformed to the pole $z = 1$, which is on the unit circle. □

Lemma 4.2: If $G(s)$ has $n$ poles, then $G^*(q)$ has $n$ finite zeros for almost all $T > 0$.

[Proof] After the Z-transform, the discrete plant is

$H(q^{-1}) = \frac{B(q^{-1})q^{-1}}{A(q^{-1})} = \frac{b_1q^{-1} + b_2q^{-2} + \cdots + b_nq^{-n}}{1 - a_1q^{-1} - \cdots - a_nq^{-n}}$

If $G(s)$ has no delay, $b_1 \ne 0$. From (4.68),

$G^*(q) = \frac{H(q^{-1})}{1-q^{-1}} = \frac{q\,(b_1q^{n-1} + b_2q^{n-2} + \cdots + b_n)}{(q-1)(q^n - a_1q^{n-1} - \cdots - a_n)}$

Thus there are $n$ finite zeros in $G^*(q)$, and at least one of them is $q = 0$. □

Lemma 4.3:

$\Im\{G^*(j\tfrac{w_s}{2})\} = 0$    (4.69)

where $w_s$ is the sampling frequency and $\Im(x)$ denotes the imaginary part of $x$.

[Proof]

$G^*(j\tfrac{w_s}{2}) = \frac{1}{T}\sum_{m=-\infty}^{+\infty}G_0(j\tfrac{w_s}{2}+jmw_s) = \frac{1}{T}\sum_{m=0}^{+\infty}\big\{G_0[j(2m+1)\tfrac{w_s}{2}] + G_0[-j(2m+1)\tfrac{w_s}{2}]\big\}$

For a real-coefficient transfer function $F(s)$,

$\Im\{F(jw) + F(-jw)\} = 0$    (4.70)

Recall that $G_0(s)$ is rational and has real coefficients; thus $\Im\{G^*(j\tfrac{w_s}{2})\} = 0$. □

This means that if we draw the Nyquist curve of $G^*(jw)$ with $w$ from 0 to $\tfrac{w_s}{2}$, the curve will end on the real axis. It is well known that the Nyquist curve of $G^*(jw)$ for $w$ from $-\tfrac{w_s}{2}$ to 0 is symmetric with respect to the real axis, and that the Nyquist curve of $G^*(jw)$ repeats when the frequency is beyond the range $[-\tfrac{w_s}{2}, \tfrac{w_s}{2}]$. Now we can state our main theorem on the MP discrete plant condition.
Theorem 4.4: Given a continuous, proper, stable and minimum phase rational transfer function $G(s)$, define $G^*(s)$ as in (4.67). If for all $w \in [0, \tfrac{w_s}{2}]$:

$-90^\circ > \arg\{G^*(jw)\} > -270^\circ$    (4.71)

then the sampled plant $H(q^{-1})$ is MP.

Figure 4.7: Nyquist path

[Proof] Consider the Nyquist path on the imaginary axis, with $w$ from $-\tfrac{w_s}{2}$ to $\tfrac{w_s}{2}$, excluding the origin on the right side as shown in Fig. 4.7. A Nyquist curve which satisfies (4.71) will not encircle the origin, as shown in Fig. 4.8. Now extend the Nyquist path from $w = -\infty$ to $+\infty$, i.e., to the whole imaginary axis; the corresponding Nyquist curve simply repeats itself and does not encircle the origin. So the numbers of poles and zeros of $G^*(s)$ in the left half plane must be the same. According to Lemma 4.1, there are $n$ stable poles of $G^*(s)$. Thus from Lemma 4.2, the $n$ finite zeros of $G^*(s)$ are stable and $H(q^{-1})$ is MP. □

Figure 4.8: Nyquist curve satisfying (4.71)

It is obvious that if $H(q^{-1})$ is MP, then $G^*(j\tfrac{w_s}{2}) < 0$. Lemma 4.3 has indicated that $G^*(\pm j\tfrac{w_s}{2})$ is always on the real axis. If there is one unstable finite zero, the Nyquist curve will encircle the origin once, as shown in Fig. 4.9, and $G^*(j\tfrac{w_s}{2})$ will be on the positive part of the real axis. It is obvious that for the critical sampling $w_c$, $G^*(j\tfrac{w_c}{2})$ will be at the origin; thus we have:

Corollary: The critical sampling frequency $w_c$ satisfies:

$\Re\{G^*(j\tfrac{w_c}{2})\} = 0$    (4.72)

For

$G^*(j\tfrac{w_c}{2}) = \frac{1}{T}\sum_{m=0}^{+\infty}\Big\{\frac{G(jw_c(m+\frac12))}{jw_c(m+\frac12)} + \frac{G(-jw_c(m+\frac12))}{-jw_c(m+\frac12)}\Big\}$

(4.72) is equivalent to:

$\sum_{m=0}^{+\infty}\Big[\frac{G(jw_c(m+\frac12))}{j(m+\frac12)} + \frac{G(-jw_c(m+\frac12))}{-j(m+\frac12)}\Big] = 0$

Using the property (4.70), we obtain:

$\sum_{m=0}^{+\infty}\frac{\Im\{G[j(2m+1)\frac{w_c}{2}]\}}{2m+1} = 0$    (4.73)
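Both Lemma 4.3 and the reduction of (4.72) to (4.73) can be checked numerically by truncating the aliased sum symmetrically; a sketch for the plant $G(s) = 1/[(s+1)^3(s+2)^2]$ used in Example 4.1 below (the truncation length $M$ is an arbitrary choice of this sketch):

```python
import math

def G(s):                            # plant of Example 4.1
    return 1.0 / ((s + 1) ** 3 * (s + 2) ** 2)

def G_star_half(ws, M=400):
    """G*(j ws/2) = (1/T) * sum_m G0(j ws/2 + j m ws), G0 = G/s, m = -M-1..M."""
    T = 2 * math.pi / ws
    tot = 0j
    for m in range(-M - 1, M + 1):   # symmetric truncation pairs m with -(m+1)
        s = 1j * (ws / 2 + m * ws)
        tot += G(s) / s
    return tot / T

ws = 1.8
gs = G_star_half(ws)
assert abs(gs.imag) < 1e-12          # Lemma 4.3: the value is real
# (4.73) form: Re G*(j ws/2) = (2/pi) * sum_m Im G[j(2m+1) ws/2] / (2m+1)
series = sum((G(1j * (2 * m + 1) * ws / 2)).imag / (2 * m + 1) for m in range(401))
assert abs(gs.real - 2 * series / math.pi) < 1e-9
```

Pairing the terms $m$ and $-(m+1)$ makes the imaginary parts cancel exactly, which is the content of Lemma 4.3; the same pairing turns (4.72) into the single series (4.73).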
Figure 4.9: Nyquist curve not satisfying (4.71)

Notice that the critical sampling frequency satisfies (4.73), but not every sampling frequency satisfying (4.73) is critical.

4.2.3 Obtaining the Critical Sampling by Using a Relay

Consider a relay component in series with the plant, as shown in Fig. 4.10. The command $x(t)$ is a sine wave with frequency $w$ and amplitude $A$:

$x(t) = A\sin(wt)$    (4.74)

Then the input signal of the plant $u(t)$ (which is also the output of the relay, of amplitude $d$) is a square wave and can be expressed using a Fourier series:

$u(t) = \frac{4d}{\pi}\sum_{k=0}^{\infty}\frac{\sin[(2k+1)wt]}{2k+1}$    (4.75)

Figure 4.10: Closed loop with relay

Suppose the continuous transfer function of the plant is $G(s)$. Let $w = \tfrac{w_c}{2}$ and define

$A_{2k+1} = \big|G[j(2k+1)\tfrac{w_c}{2}]\big|, \qquad \theta_{2k+1} = \arg\{G[j(2k+1)\tfrac{w_c}{2}]\}$

The output signal is then:

$y(t) = \frac{4d}{\pi}\sum_{k=0}^{\infty}\frac{A_{2k+1}}{2k+1}\sin\big[(2k+1)\tfrac{w_c}{2}t + \theta_{2k+1}\big]$

At the specific times $t_N = \tfrac{2\pi N}{w}$, $x(t_N) = 0$, and

$y(t_N) = \frac{4d}{\pi}\sum_{k=0}^{\infty}\frac{A_{2k+1}\sin\theta_{2k+1}}{2k+1} = \frac{4d}{\pi}\sum_{k=0}^{\infty}\frac{\Im\{G[j(2k+1)\frac{w_c}{2}]\}}{2k+1}$    (4.76)

Thus we can obtain the value of $\Re\{G^*(j\tfrac{w_c}{2})\}$ by measuring $y(t_N)$, and if the output is zero at $t_N$, then the frequency $w_c = 2w$ satisfies equation (4.73).

Figure 4.11: Nyquist diagram for limit cycle

Further, by using a closed-loop structure as in Fig. 4.10, a limit cycle will exist if there are more than two excess poles in $G(s)$ (see [6]). This can be seen by drawing the negative reciprocal describing function of the relay and the Nyquist curve of $G(s)$, as shown in Fig. 4.11. When the oscillation occurs, the input of the relay is the negative of the plant output. This means that the relay input $x_t$ and the plant output $y_t$ reach zero simultaneously. Thus $y_{t_N} = 0$ in (4.76), and the oscillation frequency $w_N$ is half of the critical sampling frequency, $\tfrac{w_c}{2}$, as in equation (4.73).
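When $G(s)$ is known, (4.73) can of course also be solved directly instead of via the relay experiment; a sketch using bisection for the plant of Example 4.1 below (the bracket [1.5, 2.5] rad/s and the series truncation are assumptions of this sketch):

```python
import math

def G(s):                             # plant of Example 4.1: 1/((s+1)^3 (s+2)^2)
    return 1.0 / ((s + 1) ** 3 * (s + 2) ** 2)

def f(wc, M=300):                     # left side of (4.73), truncated
    return sum((G(1j * (2 * m + 1) * wc / 2)).imag / (2 * m + 1) for m in range(M))

lo, hi = 1.5, 2.5                     # f changes sign on this bracket
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
wc = 0.5 * (lo + hi)
Tc = 2 * math.pi / wc                 # critical sampling period
print(Tc)                             # should come out near the 3.36 s of Example 4.1
```

Only the slowest such root is the critical one; as noted above, not every solution of (4.73) is critical, so the bracket must be chosen around the lowest-frequency sign change.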
This oscillation frequency can be obtained by measuring the oscillation period, using for example the method described in [6]. For a system with no more than two excess poles, there may be no limit cycle; in this case, the sampled system is always MP. If there is more than one limit cycle, the slowest oscillation frequency is considered. Also in [6], it is mentioned that the configuration of Fig. 4.10 can be used to initialize adaptive controllers, in particular to obtain an estimate of the time delay. As shown here, this method can also be used to select the sampling period for stable, proper, rational minimum-phase continuous plants with the guarantee that the discrete plant is minimum phase, i.e., that its finite zeros are stable.

Example 4.1 Suppose

$G(s) = \frac{1}{(s+1)^3(s+2)^2}$

For this system, the relation between the sampling period $T$ and the presence of unstable zeros is as follows:

    T > 3.36 sec:           inversely stable
    3.36 sec > T > 0.74 sec: one unstable zero
    0.74 sec > T:           two unstable zeros

Fig. 4.12 shows the Nyquist plot of $G^*(s)$ for $T = 4$ (inversely stable), $T = 2$ (unstable zero at $-3.0245$) and $T = 0.5$ (two unstable zeros at $-13.1968$ and $-1.3079$).

Figure 4.12: Nyquist curve of $G^*$ in the example for different sampling intervals $T$

Fig. 4.13 shows the limit cycle obtained when putting the continuous-time plant $G(s)$ under relay control. The period of the limit cycle is 6.72, which corresponds to twice the critical sampling interval, as described before.

Figure 4.13: Limit cycle obtained when putting $G(s)$ of the example under relay control

In this section, a condition for a minimum phase discrete plant has been proposed. It is a weaker condition than that in (Åström et al. 1984) and is sampling frequency related.
The critical sampling frequency can be obtained by measuring the oscillation frequency in a closed-loop system with a relay. No prior knowledge of the continuous plant is required for this method. The sampling frequency for a model matched adaptive control system shall be chosen slower than the critical sampling frequency.

Chapter 4. Robust Adaptive Control: Slow Sampling

4.3 Choice of Sampling For Model Mismatched Case Adaptive Control

This section is a further discussion of the effect of slow sampling on the robustness of adaptive control; model structure mismatch will now be considered. In [33], [34], it was shown that slow sampling brings a small model structure error. Here we will relate the SPR operator $H_2$ to the model error and show that if the sampling is slow enough, the SPR $H_2$ condition will be satisfied. As in chapter 3, $H_2$ is defined as:

$H_2(q^{-1}) = \frac{B(q^{-1})}{A(q^{-1})R^*(q^{-1}) + B(q^{-1})S^*(q^{-1})q^{-d}}$    (4.77)

where the discrete plant is as in (3.33) and $R^*$, $S^*$ here denote the converged estimates. The converged estimates $R^*$, $S^*$ can be obtained by running the estimation alone, as in Fig. 3.2, with the loop open. As mentioned under "Properties of Estimation Algorithm" in section 3.2, this estimate depends on the frequency components of the command input. The operator $H_2$ is therefore related to:

1. The discrete plant, which depends on:
   - the continuous plant;
   - the sampling period.

2. The converged estimated parameters $R^*$, $S^*$, which depend on:
   - the estimation model order;
   - the excited command frequencies.

Here only the effect of the sampling frequency is studied.

4.3.1 Uncertainty in the Sampled System

We first introduce the uncertainty in the sampled system ([33]) and study how the sampling frequency affects this model uncertainty.
Consider the plant uncertainty represented by a multiplicative perturbation $1 + L(jw)$ of the nominal system $G_n(jw)$, i.e., the plant can be expressed as:

$G(jw) = G_n(jw)[1 + L(jw)]$    (4.78)

Here $L(jw)$ represents the model structure error of the continuous plant. It is assumed that $|L(jw)| \le l(jw)$, where $l(jw)$ could be known and increases as $w$ increases. Suppose the sampling period is $T$; the sampled systems of $G$, $G_n$ are:

$H(jw) = \frac{1}{T}\sum_{n=-\infty}^{+\infty}G(jw+jnw_s)$    (4.79)

$H_n(jw) = \frac{1}{T}\sum_{n=-\infty}^{+\infty}G_n(jw+jnw_s)$    (4.80)

where $w_s = \tfrac{2\pi}{T}$. If $L_d(jw)$ is a discrete-time multiplicative perturbation which represents the model structure error of the discrete plant, i.e.

$H(jw) = H_n(jw)[1 + L_d(jw)]$    (4.81)

we have the following theorem:

Theorem 4.5 (Rohrs, Stein, Åström [33]) Given the notation defined in (4.78)–(4.81), and assuming that all infinite sums converge, then

$|L_d(jw)| \le \frac{\sum_{n=-\infty}^{+\infty} l(jw+jnw_s)\,|G_n(jw+jnw_s)|}{\big|\sum_{n=-\infty}^{+\infty} G_n(jw+jnw_s)\big|}$    (4.82)

The proof of this theorem can be obtained through direct calculation by applying (4.78)–(4.81). The following example illustrates the relationship between sampling and uncertainty.

Example 4.2 Consider the continuous plant:

$G(s) = \frac{2}{s+1}\cdot\frac{229}{s^2+30s+229}$

We use a first-order nominal model

$G_n(s) = \frac{b}{s+a}$

for parameter estimation. The sampling periods are 0.18 sec, 0.24 sec and 0.3 sec respectively. The command signal is $\cos t$. We express the error $L(s)$ according to (4.78) as:

$L(s) = \frac{G(s) - G_n(s)}{G_n(s)}$
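Matching the first-order nominal model to the plant at the excitation frequency reduces to two equations in $(a, b)$; a sketch for $w_1 = 1.0$ rad/s (the results can be compared with the values quoted just below):

```python
import cmath, math

def G(w):                       # G(jw) = 2/(jw+1) * 229/((jw)^2 + 30jw + 229)
    s = 1j * w
    return 2.0 / (s + 1) * 229.0 / (s * s + 30 * s + 229)

w1 = 1.0
g = G(w1)
# G_n(jw) = b/(jw + a) has phase -atan(w/a) and gain b/sqrt(w^2 + a^2)
a = w1 / math.tan(-cmath.phase(g))       # match the phase at w1
b = abs(g) * math.sqrt(w1 ** 2 + a ** 2) # then match the gain at w1
print(a, b)                              # approximately 0.7673 and 1.7752
```

This is a pointwise fit: $G_n(jw_1) = G(jw_1)$ exactly, while the mismatch $L(jw)$ grows away from $w_1$, which is what Fig. 4.14 displays.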
The Table 4.2 shows the parameters of discrete nominal plant l  for exciting frequency of the command 1.0 rad/sec. and different sampling period T. The error \L (jw)\ for different cases is shown in Fig. 4.14. d  From this example, we can observe:  Chapter 4. Robust Adaptive Control: Slow Sampling  71  T = 0.18 sec. continuous case  T=0.24 sec. T = 0.3 sec.  o  o.e  o 0.3  Frequency:  rad/sec.  Figure 4.14: Model error via sampling periods • The model structure error is small near the exciting frequency; • The model structure error is large at high frequency; • When the sampling frequency is low, the model structure error is small and \Ld{jw)\ < l-^O' )! < KJ ) 10  W  a  r  e  ^  r u e  f°  r a  u  t  n e  cases, here L(jw) is the structure  error for the continuous nominal model and Ld(jw) is for the discrete nominal model.  c r = 0.18sec. 0.8704 T = 0.24sec. 0.8304 T = 0.30sec. 0.7918  d 0.2980 0.3880 0.4337  Table 4.2: Parameters of discrete nominal model under different sampling in example  72  Chapter 4. Robust Adaptive Control: Slow Sampling  In (4.82), l(jw) usually increases with increasing w. G (jw) usually has low pass n  characteristics. If an anti-aliasing filter is included in G (jw), and if the term n = 0 n  is the only significant term in the expression of H (jw), then we can clearly see that n  slower sampling will result in smaller model error in (4.82). O n small uncertainty error Let the discrete plant be  and the nominal plant (usually with lower order than the plant) be  *•<*-'> -1  Su\^  '  d  (4 84)  Ld{q~ ) is the multiplicative perturbation represents the model structure error as shown 1  in (4.81). B y using the discrete plant description, we obtain:  H  Here  JFJT  2  M  )=  AR*  + BS* -* = R* + S*H  -  (4  q  85)  also represents the closed-loop transfer function which uses the converged  estimates R*, S* as controller parameters. Let H (q- ) = H2n(q- ){l + M^q- )} 1  1  1  2  (4.86)  where  represents the nominal closed-loop system. 
Notice that H (q~ ) contains no unstruc1  2n  tured uncertainty. One can from (4.85), (4.86) obtain:  73  Chapter 4. Robust Adaptive Control: Slow Sampling  H (l+L )q* _ R* + S'H {l + L ) n  2  d  n  d  H q* R* + S*H n  {  ]  n  thus, (l + L )(R* + S*H ) _ R* + S*H (l + L )  =  d  n  n  d  or M*  1 + (1 + LjjfBn Here the model structure error M* has the following properties: 1. As the discrete-time multiplicative perturbation of the plant L is smaller, d  then the multiplicative perturbation of the S P R operator M* is smaller. Here L  d  also represents the model structure uncertainty for the sampled  plant which is decreasing as the sampling period is longer. s* 2. As the term (1 +—H ) is larger, M* will be smaller. For the nominal model xt*  n  # 2 7 1 contains no structured uncertainty, R r2n{<l~ ) = 1 or 1  5* 1 +  _  H q* n  -R* ~-W Hn  Thus M* also depends on the nominal model. M o d e l uncertainty and S P R condition In section 3.4, we have shown that for a stable adaptive control algorithm, S P R H is 2  essential. We will express H  2  as a function of the model structure error. Notice the  expression of H in (4.77), from (4.81) we have: 2  A  -  1-S*q-*  {1+Ld)  74  Chapter 4. Robust Adaptive Control: Slow Sampling  or AR*{l + L ) = B{l-S*q- ) i  d  thus B  H2(q- ) = 1  1 - S*q~ + (1 + L )S*qd  d  d  l + L  (4.88)  d  1 + L S*q~  d  d  (4.88) shows the relation between the operator H  and the model structure error  2  L . The following results are of interest: d  1. The operator H  2  is a function of the model structure error of the discrete  plant. If the error is small enough, H (q~ ) is S P R . A special case is when 1  2  there is no unstructured uncertainty, i.e., L  = 0, then H  d  2  = 1 which is  always S P R . 2. A sufficient condition for S P R H  2  can be derived as: (4.89)  when I I A I O O I U < 1 and WL^S^jw)^  < 1.  Here S*(jw) depends on the converged estimates of the nominal model. 
[Proof of (4.89)]: sin^{-1} ||L_d(jw)||_∞ is the maximum phase of [1 + L_d(jw)]; similarly, sin^{-1} ||L_d(jw)S*(jw)||_∞ bounds the phase of the term [1 + L_d S* q^{-d}]. Since for SPR H_2 the phase must be limited to [-90°, 90°], (4.89) is sufficient.

4.3.2 Stability conditions for adaptive algorithm

In theorem 3.1, R*, S* do not have to be the converged estimates. They are in fact polynomials of fixed order.

Here we will simplify the expression of H_2 by setting S* = 0, R* = 1. We denote the simplified H_2 as H_{2s}. If H_{2s} is SPR, the adaptive system is sufficiently stable. In the following, we will discuss the choice of sampling frequency to ensure SPR H_{2s}. For

H_{2s}(q^{-1}) = qH(q^{-1}) = (q - 1) Z{G(s)/s}    (4.90)

where H(q^{-1}) is the discrete transfer function of the continuous plant G(s). As phase(q - 1)|_{q=e^{jwT}} = 90° + wT/2, we have

phase H_{2s}(e^{jwT}) = wT/2 + phase Σ_{n=-∞}^{∞} G(jw + jnw_s)/(w + nw_s)

Thus H_{2s}(q^{-1}) is SPR if

90° > wT/2 + phase Σ_{n=-∞}^{∞} G(jw + jnw_s)/(w + nw_s) > -90°

Here, considering that most practical plants have phase lag, we need only discuss the case of:

wT/2 + phase Σ_{n=-∞}^{∞} G(jw + jnw_s)/(w + nw_s) > -90°    (4.91)

For a low-pass plant, we can neglect the effect of aliasing of the high-frequency gain (or use an anti-aliasing filter as in [39]). Thus the problem is simplified to finding w_s such that:

phase G(jw) > -90° - wT/2,  for all w in [0, w_s/2]    (4.92)

where the sampling interval is T = 2π/w_s.

Figure 4.15: Choice of sampling according to the phase of the continuous plant (phase in degrees vs. frequency in rad/sec.)

4.3.3 Approximate way of choosing sampling

Fig. 4.15 shows the phase of a continuous plant vs. frequency.
When choosing w_s = w_1, the straight line l_1 represents the phase -90° - wT/2, w in [0, w_s/2]. Since the phase plot is above the line l_1, (4.92) is satisfied in this case. However, as w_s is increased to w_2, (4.92) will no longer be satisfied. The procedure for choosing the critical sampling according to the known phase response of the continuous plant is as follows:

1. Plot the phase-frequency relation of the continuous plant G(s), where the x-axis is the frequency in rad/sec. and the y-axis is the phase of G(s);

2. Draw a line starting from w = 0, phase = -90°, which is tangent to the phase-frequency plot;

3. Find the intersection of this tangent line with the horizontal line phase = -180°; the frequency w_c corresponding to the intersection is half of the critical sampling frequency required.

Remark 1: The critical frequency found in this way is only an approximate solution in which the effect of high frequencies is neglected. The aliasing of high frequencies may make the actual critical sampling frequency lower. Thus a safety margin is necessary if no anti-aliasing filter is used.

Remark 2: The critical sampling for SPR H_{2s} is always slower than (or equal to) the critical sampling for a minimum phase discrete plant as defined in section 4.2. Thus, to ensure H_{2s} SPR, the discrete plant has to be minimum phase. This has also been mentioned in [13] for Rohrs' plant, but no analysis was given. Here we can conclude that this is generally true.

Remark 3: For special cases where no tangent line can be found, or where the phase corresponding to the tangent point is less than -180°, we can simply draw a straight line between the starting point and the intersection of the -180° horizontal line with the phase plot. In those cases, the critical sampling frequencies for the non-minimum phase discrete plant and for SPR H_{2s} are the same.
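The three-step graphical procedure above can also be carried out numerically. The sketch below does so for the Rohrs plant of Example 4.3 (used here purely as an illustration): the tangent from (w = 0, phase = -90°) has slope equal to the minimum of [phase(w) + 90]/w, and its intersection with the -180° line gives w_c.

```python
import numpy as np

# Sketch of the three-step procedure, done numerically for the Rohrs plant
# of Example 4.3. The tangent from (w = 0, phase = -90 deg) to the phase
# curve has slope m = min_w [phase(w) + 90] / w (deg per rad/sec), and it
# meets the -180 deg line at w_c = 90 / |m|, half the critical frequency.

w = np.linspace(1e-3, 30.0, 300000)
s = 1j * w
G = 2.0 / (s + 1) * 229.0 / (s**2 + 30 * s + 229)
phase = np.degrees(np.unwrap(np.angle(G)))

m = np.min((phase + 90.0) / w)        # tangent slope, deg per (rad/sec)
w_c = 90.0 / abs(m)                   # intersection with -180 deg
T_crit = np.pi / w_c                  # approximate critical sampling period

print(w_c, T_crit)
```

The result is close to the values quoted later in Example 4.3 (w_c around 14.9 rad/sec., T around 0.21 sec.).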
4.4 Simulation on the Effect of Sampling

We will use a simulation example to show that adaptive control is made robust by using slow sampling, and show how to choose the critical sampling frequency for stable adaptive control.

Example 4.3: (Simulation example of STC)

Consider the Rohrs plant with continuous transfer function:

G(s) = [2/(s + 1)] · [229/(s² + 30s + 229)]    (4.93)

1. Critical sampling for minimum phase discrete plant

To find the critical sampling for a minimum phase discrete plant, we can apply a relay in series with the continuous plant as shown in Fig. 4.10. We will then observe an oscillation in the closed-loop system. We plot this oscillating output as in Fig. 4.16 and measure 5 periods of the oscillation at steady state. Denoting the oscillation period by T_os, we have:

5T_os = 1.99

or

T_os = 0.398

According to section 4.2, the critical sampling period is then T_os/2 = 0.199 sec.

Figure 4.16: Obtaining the critical sampling for a minimum phase discrete plant of Rohrs' example by using a relay (oscillating output vs. time in seconds)

Fig. 4.17 shows the two zeros of the sampled plant under different sampling periods. Here the slow sampling effect and the value of the critical sampling frequency are verified.

2. Critical sampling for SPR H_{2s}

If we obtain the phase-frequency relation of G(s) as in Fig. 4.18(a), we can draw a tangent line from φ = -90°, w = 0.0 to the phase plot. The intersection of this tangent line with the horizontal line φ = -180° is¹ at w = 14.89 rad/sec. Thus the approximate critical sampling period is:

T_s = π/14.89 ≈ 0.211 sec.

Fig. 4.18(b) shows the Nyquist plots of H_{2s}(q^{-1}) under several samplings (T_s = 0.17, 0.20,

¹This procedure can be programmed.
Figure 4.17: Zeros of the discrete plant of Rohrs' example vs. sampling period in seconds

0.22, 0.25 sec.). Note the corresponding straight lines connecting (w = 0, φ = -90°) and (w = w_s/2, φ = -180°) in Fig. 4.18(a): if H_{2s} is SPR, there is no intersection of the straight line with the phase plot.

Now we assume that the operator H_2(q^{-1}) uses the converged estimated parameters of a first order estimation model,

Ĝ(q^{-1}) = b q^{-1} / (1 - a q^{-1})

and that the command frequency is 0, 1.0, 3.0, 8.0 and 16.1 rad/sec., respectively. The converged estimates (a, b) under the different samplings and command frequencies are given in Table 4.3. We plot the Nyquist curve of H_2(q^{-1}) under w = 1.0 rad/sec. for the different samplings in Fig. 4.19(a), and the Nyquist plot under T_s = 0.25 sec. for the different command frequencies in Fig. 4.19(b).

command frequency (rad/sec.):  0.0 | 1.0 | 3.0 | 8.0 | 16.1
T_s = 0.17 sec.:  (0.876, 0.248) | (0.877, 0.283) | (1.035, 0.310) | (22.12, 3.868) | (-0.903, -0.010)
T_s = 0.20 sec.:  (0.859, 0.282) | (0.857, 0.329) | (1.041, 0.362) | (-14.99, -2.507) | (-1.007, -0.0005)
T_s = 0.22 sec.:  (0.848, 0.304) | (0.844, 0.359) | (1.045, 0.396) | (-7.574, -1.191) | (-1.219, -0.019)
T_s = 0.25 sec.:  (0.833, 0.334) | (0.824, 0.403) | (1.051, 0.446) | (-4.741, -0.653) | (-2.427, -0.213)

Table 4.3: Estimated (a, b) for the various cases

We notice that the SPR property is affected not only by the sampling, but also by the command frequency. In general, slow sampling and a low command frequency will result in SPR H_2.

3. Self-tuning control under different samplings

We simulated the self-tuning control under T_s = 0.25, 0.22, 0.20, 0.17 sec. There is no output noise. The command frequency is 1.0 rad/sec. The simulated output is shown in Fig. 4.20, where we can see that for T_s = 0.2 and 0.17 sec., the adaptive control results are unstable. However, for the slower samplings T_s = 0.22 and 0.25 sec., a bounded-input bounded-output relation is observed.

4.
STC with white noise and high-frequency noise

We simulate the adaptive control algorithm under output noise and a high-frequency command input. Figs. 4.21(a)-(d) show the simulation results under sampling T_s = 0.25 sec. The simulation conditions are as shown in Table 4.4.

command frequency:  1.0 rad/sec. | 8.0 rad/sec.
no noise:  Fig. 4.21(a) | Fig. 4.21(b)
output noise N(0, 0.1):  Fig. 4.21(c) | Fig. 4.21(d)

Table 4.4: Simulation conditions in Fig. 4.21

Here we restricted the adaptation gain in the estimation to be low enough (0.2). The STC with low adaptation gain needs a long time to converge. We can observe from the simulations that bounded input and bounded output are ensured. But for the case of a high command frequency, the output performance is not good enough. Also notice that although H_2 is not SPR when the command frequency is 8.0 rad/sec., the adaptive process is still stable. This is because the SPR condition is sufficient.

5. Case of persistent excitation not satisfied

Under the command:

w_t = constant + noise

self-tuning control simulation results are shown in Figs. 4.22(a)-(d) under the different conditions in Table 4.5. The sampling period is T_s = 0.25 sec.

noise type:  0.1 sin(16.1t) | 0.1 N(0, 1)
adaptation gain ε = 1.0:  Fig. 4.22(a) | Fig. 4.22(b)
adaptation gain ε = 0.2:  Fig. 4.22(c) | Fig. 4.22(d)

Table 4.5: Simulation conditions in Fig. 4.22

Thus, for a slow enough sampling frequency, the stability of the adaptive control system is not critically affected by the presence of a small noise and a lack of persistent excitation.

The various simulation examples in this section show that:

1. Slow sampling will bring a robust adaptive control result. For the pole-zero cancellation control algorithm with recursive least squares estimation, the critical sampling for a stable adaptive algorithm discussed in this chapter is verified.

2.
Not only the sampling but also the command frequency will affect the SPR condition. However, the stability condition derived is sufficient: a bounded-input bounded-output relation can still be obtained even if the SPR condition is not satisfied in some cases.

3. Adaptive control systems can still be stable under small noise and a lack of persistent excitation.

4.5 Conclusion

In this chapter, we discussed the effect of a slow sampling frequency on the zeros of sampled systems and on the SPR condition. It was shown that when the sampling frequency is sufficiently low, a minimum phase discrete plant and an SPR operator H_2 will be ensured. Expressions for the critical sampling frequencies were derived, and practical ways of finding them were also discussed. Note that a minimum phase discrete plant is also necessary for SPR H_2. Thus the critical sampling frequency for SPR H_2 is usually lower than the critical sampling frequency for a minimum phase discrete plant. For the model match case, the STC algorithm using simple pole-zero cancellation is stable if the discrete plant is minimum phase. In the model mismatch case, such an STC algorithm is stable if H_2 is SPR.

Figure 4.18: (a). Finding the critical sampling for SPR H_{2s}: phase (degrees) vs. frequency (rad/sec.), with the -180° limit and lines for T_s = 0.25, 0.22, 0.20, 0.17 sec.

Figure 4.18: (b). Nyquist curve of H_{2s} under different samplings (imaginary vs. real axis)

Figure 4.19: (a). Nyquist curve of H_2 via different sampling frequencies

Figure 4.19: (b).
Nyquist curve of H_2 via different command frequencies

Figure 4.20: STC simulation under different samplings (panel shown: T_s = 0.17 sec.; output vs. time in seconds)

Figure 4.21: STC simulation under noise or a high-frequency command

Figure 4.22: STC simulation when persistent excitation is not satisfied

Chapter 5

Robust Adaptive Control: Slow Adaptation

5.1 Introduction

In Chapter 4, we discussed the slow sampling effect on the stability condition of an adaptive control algorithm. As shown in Fig. 3.2, a general adaptive control structure has two loops, one concerned with estimation and one concerned with controller design. Conventional adaptive control systems update the estimated parameters of the controller at every sampling interval, i.e., the two loops have the same sampling rate.

For the direct pole-placement adaptive control as in example 3.2, we recall that the control law is given by:

w_{t+d} = R(q^{-1})y_t + S(q^{-1})u_t

where the coefficients of the polynomials S(q^{-1}), R(q^{-1}) are the estimated parameters θ_t in the estimation algorithm (3.24). Stability is obtained if H_2(q^{-1}) is SPR, where:

H_2(q^{-1}) = C^{-1}(q^{-1})B(q^{-1})

C(q^{-1}) = S*(q^{-1})A(q^{-1}) + R*(q^{-1})B(q^{-1})q^{-1}

Here again, we assume S*, R* denote the converged S, R in the estimation.
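Both conditions in this construction can be checked directly once the polynomials are known. The sketch below forms C(q^{-1}) = S*(q^{-1})A(q^{-1}) + R*(q^{-1})B(q^{-1})q^{-1} as in (5.96), checks its zeros, and tests the SPR property of H_2 = B/C on the unit circle; the first-order A, B, R*, S* here are illustrative picks, not the thesis example.

```python
import numpy as np

# Sketch: direct check of the two conditions above, assuming
# C(q^-1) = S*(q^-1) A(q^-1) + R*(q^-1) B(q^-1) q^-1 as in (5.96).
# Stable zeros of C are checked by rooting; SPR of H_2 = B/C by
# Re H_2 > 0 on the unit circle. A, B, R*, S* are illustrative.

A = np.array([1.0, -0.9])        # A(q^-1) = 1 - 0.9 q^-1
B = np.array([0.5, 0.2])         # B(q^-1) = 0.5 + 0.2 q^-1
Sstar = np.array([1.0])          # S*(q^-1) = 1
Rstar = np.array([0.6])          # R*(q^-1) = 0.6

C = np.concatenate((np.convolve(Sstar, A), [0.0]))
C[1:] += np.convolve(Rstar, B)   # add R* B shifted by q^-1

stable = np.all(np.abs(np.roots(C)) < 1.0)   # zeros of C inside unit circle

w = np.linspace(0.0, np.pi, 2000)
z1 = np.exp(-1j * w)                          # q^-1 on the unit circle
H2 = np.polyval(B[::-1], z1) / np.polyval(C[::-1], z1)
spr = np.all(H2.real > 0.0)

print(C, stable, spr)
```

For these polynomials C = 1 - 0.6 q^{-1} + 0.12 q^{-2}, whose zeros are complex with modulus well inside the unit circle, and H_2 turns out SPR as well.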
SPR H_2(q^{-1}) implies that C(q^{-1}) has stable zeros, but not conversely. In case there are no unmodelled dynamics, it has been shown in chapter 4 that if all zeros of C(q^{-1}) are stable, then H_2(q^{-1}) = 1 and stability is ensured. However, when we consider the case of model mismatch, H_2(q^{-1}) can be non-SPR even though C(q^{-1}) has stable zeros.

Figure 5.23: Estimated parameters (estimates of a and b, adaptation gain 0.3) when simulating Rohrs' plant using RLS estimation, where the sampling is T_s = 0.2 sec. and the command frequency is 1.0 rad/sec.

Fig. 5.23 shows a parameter estimation result for Rohrs' plant (4.93) where T_s = 0.2 sec., the command frequency is 1.0 rad/sec. and a first order model is used. It was shown in the simulation example in Chapter 4 that in this case H_2(q^{-1}) is not SPR, although C(q^{-1}) has stable zeros, and the simple pole placement adaptive control result is unstable. When observing the estimation process, we find that the estimated parameters have a large initial overshoot. Let â, b̂ denote the estimated parameters; the closed-loop stability of C(q^{-1}) simply depends on the gain margin obtained with â, b̂, but at the initial stage â/b̂ is very large and the gain margin is less than one (unstable).

An explanation for the instability is that at a higher sampling rate there exists a large model mismatch ([33], [34]); the input-output data in the transient process may excite high frequencies that mislead the estimation. Use of such bad estimates will continue to excite the high frequency band, very likely resulting in an unstable system. However, we can expect that adaptation using only steady-state estimates brings more robustness.
In this chapter, we will first discuss the effect of slow adaptation. The main idea is to avoid using bad estimation data. It is shown that the SPR H_2(q^{-1}) condition can be relaxed if the converged estimates under a persistently exciting condition are obtained and used at every adaptation. Then the only requirement for stability is that the polynomial C(q^{-1}) has stable zeros. We will develop a method to check this stability condition. Furthermore, we will also discuss how to design filters to improve the robustness. Some simulation examples will also be presented.

5.2 Slow Adaptation Algorithm

5.2.1 Fixed Slow Adaptation Rate

In adaptive control, it is not necessary to adapt the controller at the same fast rate as the sampling rate for the plant. From Fig. 3.2, we can see that the normal adaptation rule (T_a = T), i.e.:

S(q^{-1}) = Ŝ(q^{-1})
R(q^{-1}) = R̂(q^{-1})    (5.94)

at every sampling interval introduces an extra closed loop in the adaptive system. Here S, R represent the controller parameters and Ŝ, R̂ represent the estimated parameters. For slow adaptation, we can set the adaptation rate T_a much slower than the sampling rate T. For example, set T_a = NT, where N is a large integer; then the adaptation (5.94) is done only once every NT period. It is assumed that N is large enough for the process to reach steady state. We call this case the fixed slow adaptation rate.

Algorithm for Fixed Slow Adaptation

Suppose that the command w_{k+1} is known at t = kT, and the latest adaptation occurred at t_1 = k_1T (k_1 < k):

1. Obtain y_k, y_{k-1}, ..., u_{k-1}, ...;

2. Recursively calculate θ_k, i.e. R̂(q^{-1}), Ŝ(q^{-1}), using the estimation algorithm;

3. If k - k_1 ≠ N, go to step 5;

4. If k - k_1 = N, perform the adaptation (5.94) and reset a new k_1;

5. Perform the controller calculation and go back to step 1 for the next sampling interval.
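The steps above can be sketched in a few lines of simulation: RLS runs at every sample, but the controller coefficients are copied from the estimates only once every N samples. The first-order plant, the pole-zero cancellation control law, the gains and N below are all illustrative choices, not the thesis example.

```python
import numpy as np

# Sketch of the fixed-rate slow adaptation loop: RLS at every sample,
# controller copy of the estimates updated only every N samples (step 4).
# Plant, control law and N are illustrative choices.

a, b = 0.8, 0.5                      # plant y_t = a y_{t-1} + b u_{t-1}
theta = np.array([0.0, 0.1])         # RLS estimates [a_hat, b_hat]
P = np.eye(2) * 100.0
ac, bc = 0.0, 1.0                    # controller copy of the estimates
N = 20                               # adaptation every N samples (T_a = N T)

y = 0.0
errs = []
for k in range(600):
    w_next = np.sin(0.1 * (k + 1))   # command
    u = (w_next - ac * y) / bc       # pole-zero cancellation control law
    y_new = a * y + b * u            # plant step (noise-free sketch)
    phi = np.array([y, u])
    e = y_new - phi @ theta          # standard RLS update
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + K * e
    P = P - np.outer(K, phi @ P)
    if (k + 1) % N == 0:             # slow adaptation: copy the estimates
        ac, bc = theta[0], theta[1]
    errs.append(abs(y_new - w_next))
    y = y_new

print(theta, errs[-1])
```

After the first few adaptation instants the controller copy is already close to the true (a, b) and the output tracks the command; the point of the structure is that the large initial estimation transient never reaches the controller.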
At steady state, H_2 operates as H_2(e^{-jw_1T}), where w_1 is the dominant frequency. Let H(q^{-1}) = B(q^{-1})/A(q^{-1}) be the transfer function of the plant and Ĥ(q^{-1}) = Ŝ(q^{-1})/(1 - R̂(q^{-1})q^{-1}) be the estimation model. The estimation will then result in Ĥ(e^{-jw_1T}) = H(e^{-jw_1T}), i.e.,

Ŝ(q^{-1})/(1 - R̂(q^{-1})q^{-1}) = B(q^{-1})/A(q^{-1})  at q^{-1} = e^{-jw_1T}

so that

A(q^{-1})Ŝ(q^{-1}) + B(q^{-1})R̂(q^{-1})q^{-1} = B(q^{-1})  at q^{-1} = e^{-jw_1T}    (5.95)

as Ŝ, R̂ converge to S*, R*. Thus, between two adaptations, under the conditions that Ŝ, R̂ converge to S*, R* and that the new closed-loop characteristic polynomial:

A(q^{-1})S_r(q^{-1}) + B(q^{-1})R_r(q^{-1})q^{-1}    (5.96)

has no unstable zeros, we have

H_2(e^{-jw_1T}) = 1

which is SPR. In equation (5.96), {S_r(q^{-1})}, {R_r(q^{-1})} is a subset of {Ŝ(q^{-1})}, {R̂(q^{-1})}, and S_r(q^{-1}) = Ŝ(q^{-1}), R_r(q^{-1}) = R̂(q^{-1}) only when adaptation occurs; otherwise S_r, R_r keep the same values.

In [1], it is indicated that if the signals in the regressor vector φ are restricted to a class in which the non-SPR transfer function H_2 behaves, on average, as an SPR transfer function, then the SPR condition on the whole frequency band can be relaxed. This is known as the average SPR criterion. Here we have shown that, by using estimated results only at steady state, φ is restricted to signals of the same frequency as the command, at which H_2 = 1 is SPR.

Though the SPR H_2 condition is relaxed, an additional condition is that (5.96) has all zeros stable. Actually, SPR H_2 is a stricter condition, which implies that (5.96) has all stable zeros.

5.2.2 Unfixed Slow Adaptation Rate

In some cases, we cannot ensure that the transient process is over within the time period NT; in other words, we do not know how large N should be for a certain plant to reach steady state under a certain sampling frequency. Then a criterion for checking the steady state is necessary.
In this case, the checking is carried out periodically, every NT. Only if the process is verified to be in steady state will adaptation follow. The adaptation algorithm is:

φ_t = θ_t   if the process is in steady state and t = kNT, where N is a given integer
φ_t = φ_{t-1}   otherwise    (5.97)

where φ_t denotes the parameters used by the controller. When φ_t = θ_t, the adaptation is done and the current estimated parameters are used in the controller, since a direct algorithm is used. This case is called the unfixed slow adaptation rate, since adaptation may not take place every T_a.

Algorithm for Unfixed Slow Adaptation

Suppose the command w_{k+1} is known at t = kT, and the latest adaptation occurred at t_1 = k_1T (k_1 < k):

1. Obtain y_k, y_{k-1}, ..., u_{k-1}, ...;

2. Recursively calculate θ_k, i.e. R̂(q^{-1}), Ŝ(q^{-1}), using the estimation algorithm;

3. If k - k_1 ≠ N, go to step 6;

4. If k - k_1 = N, reset k_1 and check whether the process is in steady state; if not, do nothing and go to step 6;

5. If the process is in steady state, perform the adaptation (5.94);

6. Perform the controller calculation and go back to step 1 for the next sampling interval.

The initial state φ_0 must be such that (5.96) has all stable zeros. This is not difficult to realize. For example, if we know that the plant has only stable poles (a common situation), we can simply use an open-loop structure in the initial stage:

φ_0 = [1, 0, ..., 0]^T

The theoretical discussion on stability is the same as in the fixed slow adaptation rate case, but here the assumption that the process reaches steady state within NT is not necessary.

5.2.3 Checking Criterion for the Steady State

In the unfixed slow adaptation rate case, the adaptation is only carried out if we measure steady state input-output data of the plant. The measurement is set at a fixed rate of every NT period.
It becomes the fixed adaptation rate if the process is measured to be in steady state at every measurement. But it is not straightforward to tell whether a process is in steady state right after measuring the input-output; we have to record a certain period of input/output data and compare it with the given characteristics of the command. The principal criterion used here is that the steady state output/input of the plant has the same frequency components as the command input, provided that the plant is linear and the noise is insignificant. One method is to use curve fitting on the output (or/and input) data according to the mode of the command input signal, as described in section 2.4. For example, if the command is w_k = A sin(w_1kT - φ_1), we can expect the output at steady state to have the form y_k = H_1 sin(w_1kT - Δ_1), in case the closed-loop characteristic equation (5.96) has stable zeros. The error index to be minimized is:

J = Σ_{k=1}^{n} [y_k - H_1 sin(w_1kT - Δ_1)]²    (5.98)

If J is small compared with the output data {y_k}, we can consider that the process is in steady state.

Figure 5.24: STC result for fixed rate slow adaptation (left: simulated input and output; right: estimated and controller parameters; time in seconds)

By using curve fitting, Δ_1 and H_1 can be obtained through (see section 2.4):

[Σ_{k=1}^{n} y_k cos(w_1kT)] / [Σ_{k=1}^{n} y_k sin(w_1kT)] = tan Δ_1    (5.99)

and

H_1 = (2/n) Σ_{k=1}^{n} y_k sin(w_1kT - Δ_1)    (5.100)

Equations (5.99) and (5.100) are used to calculate Δ_1 and H_1, respectively, from the known data y_k, w_k and the sampling interval T. The steady state is checked by calculating J in (5.98).
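The fit (5.99)-(5.100) and the residual index (5.98) can be sketched as below. The test signals are illustrative; since (5.99) only determines the tangent of Δ_1, the sketch resolves the phase with arctan2, with the sign chosen so that the fitted model is y ≈ H_1 sin(w_1kT - Δ_1).

```python
import numpy as np

# Sketch of the steady-state check: fit the recorded output to a sinusoid
# at the command frequency per (5.99)-(5.100), then compute the residual
# index J of (5.98). Test signals below are illustrative choices.

def steady_state_index(y, w1, T):
    n = len(y)
    k = np.arange(1, n + 1)
    th = w1 * k * T
    # phase: arctan2 version of (5.99), fit y ~ H1 sin(w1 k T - delta)
    delta = np.arctan2(-np.sum(y * np.cos(th)), np.sum(y * np.sin(th)))
    H1 = 2.0 / n * np.sum(y * np.sin(th - delta))      # amplitude, (5.100)
    J = np.sum((y - H1 * np.sin(th - delta))**2)       # residual, (5.98)
    return H1, delta, J

w1, T = 1.0, 0.15
k = np.arange(1, 421)                        # about 10 full periods
y_ss = 1.3 * np.sin(w1 * k * T - 0.4)        # steady-state-like output
H1, delta, J = steady_state_index(y_ss, w1, T)

# same output with a decaying transient: J should be large, so no adaptation
_, _, J_tr = steady_state_index(y_ss + 2.0 * np.exp(-0.05 * k), w1, T)
print(H1, delta, J, J_tr)
```

On the clean record the fit recovers H_1 ≈ 1.3 and Δ_1 ≈ 0.4 with J near zero, while the record with a transient gives a large J, so the adaptation would be withheld.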
Example 5.1 (STC simulation using slow adaptation)

Consider the Rohrs plant with the continuous transfer function (4.93) of example 4.3. We use slow adaptation, and the sampling period is allowed to be T_s = 0.15 sec. The command input has frequency 1.0 rad/sec., and there is a small output noise N(0, 0.03). A first order estimation model and pole-placement control are used. With the normal adaptation rate, the adaptive control algorithm is unstable. But here T_a = 10T_s, i.e., the adaptation rate for the controller is much slower than the sampling rate of the plant.

The simulation result is shown in Fig. 5.24, where only the first 30 seconds are plotted. Initially, the plant is in open loop. The transient estimation result is not used by the controller. The controlled output is stable and follows the command input well. Although the discrete plant for this sampling is non-minimum phase, since a low order estimation model is used, no unstable zeros are cancelled in the real process. In Fig. 5.24, the estimated parameters and the controller parameters are also plotted, where parameter 1 refers to the estimate of a and parameter 2 refers to the estimate of b in the first order estimation model y_t = ay_{t-1} + bu_{t-1}.

5.3 Frequency Domain Estimation

Example 5.2: For the continuous plant in Example 5.1, the discrete transfer function of the plant has the form:

H(q^{-1}) = (b_0 + b_1q^{-1} + b_2q^{-2})q^{-1} / [1 - (a_1q^{-1} + a_2q^{-2} + a_3q^{-3})]

We use different sampling rates and obtain: the parameters a_1, a_2, a_3, b_0, b_1, b_2 of the discrete transfer function; the crossover gain G_c of the discrete transfer function; the converged estimates a*, b* of a first order model as in Example 5.1; and the gain margin K_f of the

T_s = 0.3 sec. | T_s = 0.15 sec. | T_s = 0.05 sec.
a_1:  0.7592 | 1.0621 | 1.8912
a_2:  -0.01371 | -0.1844 | -1.1173
a_3:  0.00009142 | 0.009561 | 0.2122
b_0:  1.0422 | 0.6017 | 0.1312
b_1:  0.6445 | 0.8435 | 0.3607
b_2:  0.009721 | 0.05864 | 0.06042
G_c:  0.1936 | 0.1599 | 0.09931
a*:  0.7918 | 0.8909 | 0.9580
b*:  0.4734 | 0.2513 | 0.0841
K_f:  3.09 | 1.76 | 0.884

Table 5.6: Parameters of the discrete plant of the Rohrs plant under different samplings

Sampling | closed-loop poles |
T_s = 0.30 sec. | 0.1253 ± j0.5699, -0.0141 | stable
T_s = 0.15 sec. | 0.3875 ± j0.7130, -0.0328 | stable
T_s = 0.05 sec. | 0.8227 ± j0.5954, 0.1734 | unstable

Table 5.7: Closed-loop poles of the adaptive system under different samplings

closed loop using a*, b* as controller parameters in Table 5.6 (K_f = (G_c · a*/b*)^{-1}).

The slow adaptation adaptive control algorithm is stable under the sampling intervals 0.3 sec. and 0.15 sec., but unstable under 0.05 sec. Thus there is an improvement of robustness by using slow adaptation, but this improvement is limited, as (5.96) can still have unstable zeros if the model mismatch is too large (such as when using T_s = 0.05 sec.). Table 5.7 lists the poles of C(q^{-1}) when the converged parameter estimates are used for the Rohrs plant. The command frequency is 1.0 rad/sec.

However, with information about the frequency response of the plant (in this example, the crossover gain), we can predict whether the characteristic equation (5.96) will have unstable zeros after adaptation. FFT-based frequency domain estimation can be used to obtain this frequency domain information.

At steady state, assuming the noise is small enough, the energy of the input-output data is concentrated at a few specific points of the frequency band. Using the FFT directly on these input-output data will therefore not give good estimates at the other frequencies. Here we suggest that a perturbation is necessary to induce a transient, so that the input-output data are rich over the whole frequency band.
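Both ingredients of this scheme can be sketched numerically. Part 1 below shows that an impulse-like perturbation makes the FFT ratio Y(w)/U(w) a usable frequency response estimate (an illustrative first-order plant is used, not the Rohrs plant). Part 2 applies the pre-adaptation check to the tabulated values of Table 5.6, using the relation K_f = (G_c · a*/b*)^{-1} as reconstructed above, which predicts exactly the stable/unstable split of Table 5.7.

```python
import numpy as np

# Part 1: perturbation-based frequency response estimate, H_est = Y/U,
# for the illustrative plant y_t = 0.8 y_{t-1} + 0.5 u_{t-1}.
N = 256
u = np.zeros(N); u[0] = 1.0                  # impulse-like perturbation
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1]

H_est = np.fft.fft(y) / np.fft.fft(u)        # estimate on the FFT grid
z = np.exp(2j * np.pi * np.arange(N) / N)    # z = e^{jwT}, w = k w_s / N
H_true = 0.5 / (z - 0.8)
fft_err = np.max(np.abs(H_est - H_true))

# Part 2: gating rule on Table 5.6 values (columns T_s = 0.3, 0.15, 0.05 s):
# adaptation allowed iff K_f = (G_c a*/b*)^(-1) > 1.
Gc    = [0.1936, 0.1599, 0.09931]
astar = [0.7918, 0.8909, 0.9580]
bstar = [0.4734, 0.2513, 0.0841]
Kf = [b / (a * g) for g, a, b in zip(Gc, astar, bstar)]
adapt_ok = [k > 1.0 for k in Kf]

print(fft_err, [round(k, 2) for k in Kf], adapt_ok)
```

The recomputed K_f values reproduce the tabulated 3.09, 1.76 and 0.884, and only the T_s = 0.05 sec. case fails the check, matching the unstable entry of Table 5.7.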
For example, at steady state an intentional perturbation is applied, resulting in process transients. Denote the FFTs of the input and output signals resulting from this perturbation by U(w), Y(w); the estimated frequency response of the plant can then be approximated by:

Ĥ(e^{-jwT}) = Y(w)/U(w)

where w = 0, w_s/N, ..., (N-1)w_s/N, and N is the number of samples in the FFT. The resolution of the estimated frequency response of the plant depends directly on N. More details about frequency domain estimation can be found in [26] or in section 2.1.2.

Usually, we set the perturbation close to an impulse signal so that the estimation error is small. It is also found that a perturbation with a larger amplitude results in more accurate estimates of Ĥ(e^{-jwT}), although the process is more affected. In case there is a small white noise, the accuracy is not affected too much.

Once the frequency response of the plant is obtained, we can search for the gain margin (say K_f) and check the stability of (5.96) before adaptation. If â/(b̂K_f) < 1, the adapted system will remain stable. However, if the gain margin K_f is not large enough, there is no adaptation; the performance of the output is then not adjusted, but the system remains stable.

Example 5.3

In the self-tuning control algorithm for the Rohrs plant, as in example 5.1, the pole placement algorithm is used. The estimation model order is 1 and the sampling interval is T_s = 0.05 sec. We use the unfixed slow adaptation rate (checking rate N = 63, i.e. T_a = 3.2 sec.) and check the stability condition based on the frequency domain estimation. There is no adaptation if the inverse of the gain margin of the adapted system, K_f^{-1}, is larger than 0.9 (larger than 1.0 is unstable). In this simulation, the new closed-loop gain margin using the new estimates was always found unsatisfactory, thus no adaptation took place.
The system is not unstable, since the initial controller parameters were chosen such that the closed-loop system is stable. However, the system's performance is not as desired. The command and output are shown in Fig. 5.25.

5.4 Filter Design

To improve the performance of adaptive systems under large model mismatch, a filter F(q^{-1}) can be introduced as in Fig. 5.26. This filter has the property:

|F(e^{-jwT})| ≈ 1 at w → w_1;  |F(e^{-jwT})| < 1 elsewhere

The phase shift of F(e^{-jw_1T}) is desired to be 0. If w_1 = 0, i.e. the command input is a step, then F is a low-pass filter; otherwise, F is a band-pass filter. The robustness of such a structure is improved because the gain margin of the closed-loop system is increased around the crossover frequency. To view the effect of the filter, note that the closed-loop characteristic equation (5.96) becomes:

A(q^{-1})Ŝ(q^{-1}) + B(q^{-1})F(q^{-1})R̂(q^{-1})q^{-1} = 0    (5.101)

Figure 5.25: Unfixed rate slow adaptation STC result when the sampling is very fast (T_s = 0.05 sec.) for the Rohrs plant (simulated input and output; estimated and controller parameters)

or

1 + H(q^{-1})F(q^{-1})R̂(q^{-1})q^{-1}/Ŝ(q^{-1}) = 1 + G_f(q^{-1}) = 0

Since F(q^{-1}) at w → w_c can be chosen small enough, where w_c is the crossover frequency in the Nyquist curve of G_f(q^{-1}), (5.101) can always be stabilized if the frequency response of H(q^{-1}) is known. Otherwise, we can only say that the smaller |F(e^{jwT})| is at high frequency, the more robust the closed loop is.

Example 5.4 (Band-pass Filter)

For a continuous band-pass filter,
Figure 5.26: Closed loop with filter (block diagram: filter F(q^{-1}), plant H(q^{-1}), controller G(q^{-1}), estimation)

F(s) = (w_1/Q)s / [s² + (w_1/Q)s + w_1²]

the impulse-invariant equivalent discrete form is given by:

F(q^{-1}) = (b_0 + b_1q^{-1}) / (1 + a_1q^{-1} + a_2q^{-2})

where

b_0 = w_1/Q
b_1 = -(w_1/Q) e^{-aT} [cos(bT) + (a/b) sin(bT)]
a_1 = -2 e^{-aT} cos(bT)
a_2 = e^{-2aT}

and

a = w_1/(2Q),  b = sqrt(w_1² - a²)

For the sampling period T_s = 0.05 sec. and w_1 = 1.0 rad/sec., the Bode plot of F(q^{-1}) is plotted in Fig. 5.27, where only the range logspace(-1,1) is shown. Q is the quality factor of the band-pass filter; in Fig. 5.27, Q = 5.0 is used. For a fast sampling frequency, a higher Q is expected.

Figure 5.27: Bode plot of the compensation filter in Example 5.4 (magnitude vs. frequency in rad/sec.)

By a proper choice of the filter as in example 5.4, the STC system can be made very robust, but this only works well in the case of a known command frequency. If the frequency of the command changes, the filter has to be tuned accordingly. In summary, the flow diagram of the adaptive control algorithm is shown in Fig. 5.28. Some simulation examples are presented in the next section.

Figure 5.28: Robust Adaptive Controller Design Flow Diagram

Figure 5.29: Adaptive control result in example 5.5; the command is cos(t) and the sampling is T_s = 0.05 sec. (command and output vs. time in seconds)

5.5 Simulation Examples of Robust Adaptive Controller Design

Example 5.5 (Simulation of robust STC design)

Consider the Rohrs plant (4.93) as in example 5.1. A first order estimation model and a pole-placement controller are used.
It has been shown that for this conventional adaptive pole-placement algorithm, the discrete adaptive control system is unstable for $T_s < 0.20$ sec. For slow adaptation, Example 5.2 shows that with $T_s = 0.05$ sec the fixed-rate slow adaptation scheme is unstable. Here, we use a digital band-pass filter as in Example 5.4. The filter parameters are self-adjustable according to the estimated gain margin: if the new gain margin of the adapted system with filter is found inadequate, $Q$ is increased until it is adequate.

Fig. 5.29 shows the self-tuning control simulation result. In the first 20 seconds, the system is in open loop. The command is a sine wave with amplitude 1 and frequency 1.0 rad/sec. At $t = 20$ sec, a probing signal $p(t)$ is added to the command, $p(t) = 2e^{-5(t-20)}$, $t \ge 20$. This probing signal disappears after two seconds (its value is less than $10^{-4}$ after $t = 22$ sec). During that transient, input and output data are recorded for frequency-domain estimation, for which we use the discrete Fourier transform.

In Fig. 5.30(a),(b) we plot the estimated phase and gain; the estimated cross-over gain of the plant $G_c$ is 0.101, which is close to the real value in Example 5.2. The estimated parameters and the controller parameters are shown in Fig. 5.30(c),(d), where the checking rate for steady state is 6.3 sec (every 126 samples). The first adaptation occurs at $t = 32.7$ sec, and the estimated data $\hat a = 0.960$, $\hat b = 0.088$ are used. Without the band-pass digital filter, the closed loop is unstable. However, the program can calculate a proper filter parameter $Q$ from the estimates $\hat a$, $\hat b$, $K_c$: starting from an initial value of $Q$, if the gain margin of the open loop comprising the estimated $G_c$ of the plant, the controller and the filter is less than one (or the desired value), $Q$ is increased until the gain margin is large enough.
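The $Q$-adjustment just described is a simple search loop. The gain-margin computation itself lives in the simulation program of Appendix C, which is not reproduced here, so the sketch below uses a hypothetical stand-in `gm` for it; only the search logic mirrors the text:

```python
def tune_Q(gain_margin, Q0=1.0, target=1.25, step=1.1, Q_max=1e4):
    """Increase the filter quality factor Q until the estimated gain
    margin of plant + controller + filter reaches the target value."""
    Q = Q0
    while gain_margin(Q) < target:
        Q *= step
        if Q > Q_max:
            raise RuntimeError("no admissible Q found")
    return Q

# Hypothetical stand-in for the Appendix C gain-margin estimate;
# any function increasing in Q serves for illustration.
gm = lambda Q: 0.8 * (1.0 + Q) ** 0.25
```

For example, `tune_Q(gm)` returns the first $Q$ on the search grid whose margin reaches 1.25, the desired gain margin used later in Example 5.6.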
For more detail about the design, refer to the simulation program in Appendix C. Here, $Q = 10.32$ is used so that the cross-over gain of the compensated system is 0.8. The command input changes its amplitude at $t = 76$ sec, and then changes back every 50 sec. The adapted output follows the command changes quite well.

For the same continuous plant, when the command is a step level, a low-pass filter can be used. Fig. 5.31 shows a simulated result under a step-level command input; the control strategy is the same as above. In the simulation, there is a small additive output white noise $N(0, 0.05)$.

Figure 5.30: Estimation result in Example 5.5 (panels: (a) estimated gain; (b) estimated phase, frequency from 0 to $\omega_s/2$; (c),(d) estimated parameters versus time).

The low-pass filter is of the first-order form

$$F(q^{-1}) = \frac{1}{1 + Q(1 - q^{-1})}$$

where the filter parameter in this case is $Q = 159$. The checking rate for steady state is every 80 samples (4 sec). The first adaptation occurs at $t = 30.5$ sec, where the estimated parameters $\hat a = 0.968$, $\hat b = 0.066$ are used for the controller parameters; a large transient follows this adaptation. The second adaptation is at $t = 42.5$ sec, with estimated parameters 0.964 and 0.075, respectively. After that, the adaptation occurs every 4 sec until, at $t = 76.6$ sec, the step level is changed to 0.5 and another large transient occurs. The adaptive control result is stable, and the transient behaviour can be improved by choosing a higher value of $Q$ in the compensation filter.
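Assuming the first-order low-pass form above (a plausible reading of the scanned formula, not a verified one), its frequency response is easy to evaluate: the DC gain is exactly 1, and the attenuation at a given frequency grows with $Q$, which is why a higher $Q$ improves the transient of the compensated loop:

```python
import cmath

def F_lowpass(w, T, Q):
    """Frequency response of F(q^-1) = 1 / (1 + Q (1 - q^-1)),
    evaluated at q^-1 = exp(-jwT).  The filter form is an assumption."""
    q_inv = cmath.exp(-1j * w * T)
    return 1.0 / (1.0 + Q * (1.0 - q_inv))
```

With $T_s = 0.05$ sec and $Q = 159$, the gain at 1 rad/sec is already about 0.12, while the DC gain stays exactly 1.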
Example 5.6 (STC of a high-order plant) Consider an eighth-order system, the continuous transfer function of which is:

We use a first-order model to estimate it and use the estimation result to design a pole-zero cancellation controller. From the Bode phase plot of the plant, we find that the critical sampling period for a minimum-phase discrete plant and the approximate critical sampling period for SPR $H_2(q^{-1})$ are the same, i.e., $T_c = 7.58$ sec. The conventional adaptive pole-placement algorithm is unstable if the sampling period $T_s$ is shorter (this was checked by simulating the case $T_s = 6.5$ sec). Here in the simulation, we use $T_s = 1.0$ sec.

Figure 5.31: Adaptive control result in Example 5.5, command is step level, sampling $T_s = 0.05$ sec.

The adaptive control output is shown in Fig. 5.32. In the first 300 seconds, no control is applied. At $t = 300$ sec, a probing signal is added, and the frequency-domain estimation is done at this stage. The
first adaptation occurs at $t = 479$ sec. The checking for steady state is done every 50 seconds. The estimation result is shown in Fig. 5.33. For robust adaptive control, a digital low-pass filter of the form in Example 5.5 is used, with parameter $Q = 159$, calculated by the adaptive algorithm since a cross-over gain of 0.8 is chosen (i.e., the desired gain margin is 1.25).

For faster sampling, a higher $Q$ is expected. The adaptive control system can be stabilized if the frequency-domain estimation result is good enough.

5.6 Conclusion

In this chapter, we discussed robust adaptive control design using a slow adaptation rate. It has been shown that when using only converged estimates, the SPR condition on the operator $H_2$ is relaxed and the adaptive control algorithm is more robust. However, a slow adaptation rate is not the only condition for stability: the closed-loop equation (5.96) has to be stable. It was shown in Example 5.2 that too fast a sampling frequency results in an unstable closed-loop system, and hence an unstable adaptive control algorithm. In this case, a compensation filter has to be used, designed from the frequency-domain information of the discrete plant. The adaptive control algorithm is stable given a proper design of the filter.

Figure 5.32: Adaptive control for an eighth-order plant (Example 5.6), command is step level, sampling $T_s = 1.0$ sec.

Figure 5.33: Estimation result in Example 5.6 (panels: (a),(b) estimated parameters versus time; (c) estimated gain, frequency from 0 to $\omega_s/2$; (d) Nyquist curve).

Chapter 6

Robust Adaptive Control: Using Predictive Control

Predictive control algorithms are simple to use and have been shown through applications to be very robust. They are based on predicting the output of the plant several steps ahead, given assumptions about future control actions. In practice, predictive control has been shown to be superior to many existing algorithms ([12]), although no complete theoretical analysis is available so far.
In this chapter, we will show some simple relations between the sampling interval and the predictive control horizon, and give a design guideline on how to choose them in practice.

6.1 Predictive Control Algorithm

6.1.1 A simple predictive control algorithm

We first describe a simple predictive control algorithm, also called the receding-horizon controller (see [40]). This algorithm is based on an assumed model of the process and on an assumed form for the future control signals. The pole-zero placement scheme of Example 3.2 can be interpreted as the simplest case of this receding-horizon controller. Consider the discrete process:

$$A(q^{-1})y_t = B(q^{-1})q^{-d}u_t \qquad (6.102)$$

where $d$ is the delay of the plant, and there are no unstable zeros in $A(q^{-1})$, i.e. the plant is stable. The degrees are

$$\deg(A) = n, \qquad \deg(B) = n - 1.$$

Introduce the identity:

$$1 = A(q^{-1})F(q^{-1}) + q^{-(d+h-1)}G(q^{-1}) \qquad (6.103)$$

The degrees of $F(q^{-1})$, $G(q^{-1})$ in (6.103) are

$$\deg(F) = d + h - 2, \qquad \deg(G) = n - 1.$$

Here $h \ge 1$ is the extended horizon. Set $d_h = d + h - 1$ ($d_h \ge d$) and apply (6.103) to $y_t$:

$$y_t = A(q^{-1})F(q^{-1})y_t + G(q^{-1})y_{t-d_h} = F(q^{-1})B(q^{-1})u_{t-d} + G(q^{-1})y_{t-d_h} \qquad (6.104)$$

Introduce:

$$B(q^{-1})F(q^{-1}) = R_1(q^{-1}) + q^{-h}R_2(q^{-1}) \qquad (6.105)$$

where

$$\deg(R_1) = h - 1, \qquad \deg(R_2) = n + d - 3.$$

The coefficients of $R_1(q^{-1})$ are the first $h$ terms of the pulse response of the open-loop system. Indeed,

$$q^{-d}\frac{B(q^{-1})}{A(q^{-1})} = R_1(q^{-1})q^{-d} + R_2(q^{-1})q^{-(d+h)} + \frac{G(q^{-1})B(q^{-1})}{A(q^{-1})}q^{-(2d+h-1)} \qquad (6.106)$$

The delays of the last two terms are equal to or larger than $d + h$. Equation (6.104) can be rewritten as:

$$y_t = R_1(q^{-1})u_{t-d} + R_2(q^{-1})u_{t-d-h} + G(q^{-1})y_{t-d_h}$$

or

$$y_{t+d_h} = R_1(q^{-1})u_{t+h-1} + R_2(q^{-1})u_{t-1} + G(q^{-1})y_t \qquad (6.107)$$

By choosing

$$u_t = u_{t+1} = \cdots = u_{t+h-1} \qquad (6.108)$$

the control law which brings $y_{t+d_h}$ to a desired value $w_{t+d_h}$ is:

$$u_t = \frac{w_{t+d_h} - G(q^{-1})y_t - R_2(q^{-1})u_{t-1}}{R_1(1)} \qquad (6.109)$$

This control signal is then applied to the process. At the next sampling instant, a new measurement is obtained and the control law (6.109) is used again.

The characteristic polynomial of the closed-loop system using the predictive control law (6.109) is:

$$C(q^{-1}) = A(q^{-1})\left[R_1(1) + q^{-1}R_2(q^{-1})\right] + G(q^{-1})B(q^{-1})q^{-d} = A(q^{-1})R_1(1) + q^{h-1}\left[B(q^{-1}) - A(q^{-1})R_1(q^{-1})\right]$$

where we have used the identity:

$$B = B\left[AF + Gq^{-(d+h-1)}\right] = A(BF) + GBq^{-(d+h-1)} = AR_1 + AR_2q^{-h} + GBq^{-(d+h-1)}$$

so that

$$B - AR_1 = AR_2q^{-h} + GBq^{-(d+h-1)}$$

or

$$AR_2q^{-1} + GBq^{-d} = q^{h-1}\left[B - AR_1\right].$$

Example 6.1 Let (6.102) represent a third-order plant, where:

$$A(q^{-1}) = 1 - a_1q^{-1} - a_2q^{-2} - a_3q^{-3}, \qquad B(q^{-1}) = b_0 + b_1q^{-1} + b_2q^{-2}$$

Here $d = 2$, $n = 3$. Suppose the horizon is $h = 2$. Then,

$$\deg(F) = 2 + 2 - 2 = 2, \qquad \deg(G) = 3 - 1 = 2.$$

The Diophantine equation to be solved in this case is:

$$1 = (1 - a_1q^{-1} - a_2q^{-2} - a_3q^{-3})(f_0 + f_1q^{-1} + f_2q^{-2}) + q^{-3}(g_0 + g_1q^{-1} + g_2q^{-2})$$

Equation (6.104) is expressed as:

$$y_t = (f_0 + f_1q^{-1} + f_2q^{-2})(b_0 + b_1q^{-1} + b_2q^{-2})u_{t-2} + (g_0 + g_1q^{-1} + g_2q^{-2})y_{t-3}$$

Now in (6.105), we have in this case:

$$\deg(R_1) = 1, \qquad \deg(R_2) = 2$$
Since $R_1$ contains the first $h$ terms of the pulse response of $q^{-d}B(q^{-1})/A(q^{-1})$, if the discrete plant is stable, i.e. there are no unstable zeros in $A(q^{-1})$, then

$$\lim_{h\to\infty} C(q^{-1}) = A(q^{-1})R_1(1)$$

This means that as $h$ goes to infinity, the closed-loop system under predictive control is always stable.

Continuing Example 6.1,

$$(f_0 + f_1q^{-1} + f_2q^{-2})(b_0 + b_1q^{-1} + b_2q^{-2}) = r_{10} + r_{11}q^{-1} + q^{-2}(r_{20} + r_{21}q^{-1} + r_{22}q^{-2})$$

The predictive control law is

$$u_t = \frac{w_{t+3} - g_0y_t - g_1y_{t-1} - g_2y_{t-2} - r_{20}u_{t-1} - r_{21}u_{t-2} - r_{22}u_{t-3}}{r_{10} + r_{11}}$$

In the special case $h = 1$, the simple predictive control algorithm with extended horizon becomes a pole-zero cancellation scheme.

6.1.2 Generalized Predictive Control

A more general version of predictive control is the generalized predictive control of [12]. Suppose the process can be described by a CARIMA model:

$$A(q^{-1})y_t = B(q^{-1})q^{-d}u_t + \frac{e_t}{\Delta} \qquad (6.110)$$

where $\Delta = 1 - q^{-1}$ is the difference operator. Consider the loss function to be minimized:

$$J = E\left\{\sum_{k=N_1}^{N_2}(y_{t+k} - w_{t+k})^2 + \rho\sum_{k=1}^{N_u}(\Delta u_{t+k-1})^2\right\} \qquad (6.111)$$

where:

$N_1$ is the minimum costing horizon;
$N_2$ is the maximum costing horizon;
$N_u$ is the control horizon;
$\rho$ is a control weighting.

Introduce the identity:

$$1 = A(q^{-1})F_l(q^{-1})\Delta + q^{-l}G_l(q^{-1}) \qquad (6.112)$$

where the degrees of $F_l$, $G_l$ are $l - 1$ and $n$, respectively. For $l = 1, 2, \ldots, N$, we can obtain $F_1, \ldots, F_N$; $G_1, \ldots, G_N$, and for every $l$ we have

$$y_{t+l} = F_lB\Delta u_{t+l-d} + G_ly_t + F_le_{t+l} \qquad (6.113)$$

Introduce the identity, for $l = 1, \ldots, N$:

$$BF_l = R_1^{(l)} + q^{-(l-d+1)}R_2^{(l)} \qquad (6.114)$$

where

$$\deg(R_1^{(l)}) = l - d, \qquad \deg(R_2^{(l)}) = n + d - 3.$$

Then,

$$y(t+l) = R_1^{(l)}\Delta u(t+l-d) + y_l(t) + F_le(t+l) \qquad (6.115)$$

where

$$y_l(t) = R_2^{(l)}\Delta u_{t-1} + G_ly_t$$

Note that the parameters of $R_1^{(l)}$, $R_2^{(l)}$, $G_l$ are obtained in the same way as $R_1$, $R_2$, $G$ in Section 6.1.1 for horizon $l$. Considering $l = N_1, N_1+1, \ldots, N_2$, we have:

$$\mathbf{y} = R\,\Delta\mathbf{u} + \bar{\mathbf{y}} + \mathbf{e} \qquad (6.116)$$

where

$$\mathbf{y} = [y(t+N_1), \ldots, y(t+N_2)]^T, \qquad \Delta\mathbf{u} = [\Delta u(t), \ldots, \Delta u(t+N_2-d)]^T$$

$$\bar{\mathbf{y}} = [y_{N_1}(t), \ldots, y_{N_2}(t)]^T, \qquad \mathbf{e} = [F_{N_1}e(t+N_1), \ldots, F_{N_2}e(t+N_2)]^T$$

$R$ is an $(N_2 - N_1 + 1) \times (N_2 - d + 1)$ matrix:

$$R = \begin{bmatrix} r_{N_1-d} & \cdots & r_0 & 0 & \cdots & 0 \\ r_{N_1-d+1} & \cdots & r_1 & r_0 & \cdots & 0 \\ \vdots & & & & \ddots & \vdots \\ r_{N_2-d} & \cdots & & & \cdots & r_0 \end{bmatrix}$$

The parameters $r_0, r_1, \ldots$ are the coefficients of the polynomial

$$R_1^{(l)}(q^{-1}) = r_0 + r_1q^{-1} + \cdots + r_{l-d}q^{-(l-d)}$$

which has degree $l - d$ and consists of the first terms of the pulse response of $q^{-d}B/A$. The term $R_1^{(l)}\Delta u(t+l-d)$ contains the terms $\Delta u(t+l-d), \ldots, \Delta u(t)$. Let us assume

$$\Delta u(t+k-1) = 0 \quad \text{for } k > N_u,$$

i.e. the control signal is assumed to be constant after $N_u$ steps. Then the matrix $R$ in (6.116) will be $(N_2 - N_1 + 1) \times N_u$:

$$R = \begin{bmatrix} r_{N_1-d} & \cdots & r_{N_1-d-N_u+1} \\ r_{N_1-d+1} & \cdots & \vdots \\ \vdots & & \vdots \\ r_{N_2-d} & \cdots & r_{N_2-d-N_u+1} \end{bmatrix}$$

and the control signal is:

$$\Delta u_t = [1\ 0\ \cdots\ 0]\left(R^TR + \rho I\right)^{-1}R^T(\mathbf{w} - \bar{\mathbf{y}}) \qquad (6.117)$$

where $\mathbf{w} = [w_{t+N_1}, \ldots, w_{t+N_2}]^T$. When choosing $N_1 = d$, $R$ will be a lower-triangular matrix of size $(N_2 - d + 1) \times (N_2 - d + 1)$. In [12] and [9], the case $N_1 = 1$ is discussed. Note that $r_k = 0$ if $k < 0$; thus, for the case $N_1 < d$, the first $d - N_1$ rows are zero.

Example 6.2 Consider the case $N_1 = N_2 = d + h - 1$, $N_u = 1$ and $\Delta = 1$ (using a CARMA model). The generalized predictive control algorithm is then similar to the simple predictive control using an extended horizon. Let us consider the case $d = 2$, $h = 2$, so $N_1 = N_2 = 3$. We can rewrite (6.115) as:

$$y(t+3) = R_1^{(3)}u(t+1) + y_3(t) + F_3e(t+3)$$

where $\deg(R_1^{(3)}) = 3 - 2 = 1$. Suppose

$$R_1^{(3)}(q^{-1}) = r_{10} + r_{11}q^{-1}$$

then

$$y(t+3) = r_{10}u(t+1) + r_{11}u(t) + y_3(t) + F_3e(t+3)$$

Here

$$y_3(t) = R_2^{(3)}u(t-1) + G_3y(t)$$

It is not difficult to show that:

$$y_3(t) = g_0y(t) + g_1y(t-1) + g_2y(t-2) + r_{20}u(t-1) + r_{21}u(t-2) + r_{22}u(t-3)$$

where $g_i$, $r_{2i}$ $(i = 0, 1, 2)$ are derived from the same equations as in Example 6.1.
Thus the predictive control with the assumption $u(t) = u(t+1)$ is:

$$u_t = \frac{w(t+3) - y_3(t)}{r_{10} + r_{11}}$$

which is similar to the form obtained in Example 6.1.

6.2 Robustness of Predictive Control

6.2.1 Illustration examples

Before discussing robustness, we show an example of the relation between the sampling interval, the extended horizon and stability.

Example 6.3 (Simulation on sampling, extended horizon and stability) Consider Rohrs' example, which is a third-order continuous plant:

$$G(s) = \frac{2}{s+1}\cdot\frac{229}{s^2 + 30s + 229}$$

We simulate predictive control using different extended horizons and sampling intervals. The estimation model is first order. Let $h$ denote the horizon; two self-tuning predictive control algorithms are considered:

Algorithm I (Indirect): The estimation model is

$$y_t = \hat ay_{t-1} + \hat bu_{t-1}$$

where $\hat a$, $\hat b$ are parameters to be estimated. The predictive control law is:

$$u_t = \frac{w_{t+h} - \hat a^hy_t}{\hat b(1 + \hat a + \cdots + \hat a^{h-1})}$$

In this algorithm, there are only two parameters to be estimated.

Algorithm II (Direct): The estimation model is

$$y_t = \hat ay_{t-h} + \hat b_1u_{t-1} + \hat b_2u_{t-2} + \cdots + \hat b_hu_{t-h}$$

There are $h + 1$ parameters to be estimated. The direct control law is:

$$u_t = \frac{w_{t+h} - \hat ay_t}{\hat b_1 + \hat b_2 + \cdots + \hat b_h}$$

Table 6.8 summarizes the simulated stability of these two self-tuning algorithms for the horizons $h = 1, 2, 4, 8$ and the sampling intervals $T_s = 0.3, 0.2, 0.15, 0.10, 0.06, 0.03$ sec. In the table, • indicates that both algorithms I and II are stable; ○ indicates that algorithm I is unstable but algorithm II is stable; × indicates that both algorithms are unstable.

Table 6.8: Simulation result for various sampling intervals and extended horizons.

From Table 6.8, we find that algorithm II is more robust than algorithm I. However, if we use a slow adaptation rate as in Chapter 5, algorithms I and II have the same robustness. The superior robustness of algorithm II may be explained as follows. With a first-order estimation model there always exists some estimation error. Assume for instance a 3% error in the estimates. If the first-order model $y_t = \hat ay_{t-1} + \hat bu_{t-1}$ is used with converged value $\hat a = 0.99$, a 3% error places $\hat a$ in the range 0.9603 to 1.0197. If $h = 10$, $\hat a^h$ then lies in the range 0.6669 to 1.2154. In algorithm II, however, the directly estimated $\hat a = a^h$ lies in the range 0.8773 to 0.9315. Thus the sensitivity of algorithm II is smaller. When slow adaptation is used, the estimation error is considered small enough, and the robustness is improved.

Example 6.4 (Critical sampling for extended horizon) Consider the third-order continuous plant:

$$G(s) = \frac{k}{(s+2)^2(s+10)}$$

We simulated predictive control using algorithm II, with a first-order estimation model. Table 6.9 gives the critical sampling period for various values of $h$. If the sampling period is shorter than the critical sampling period, the system was found to be unstable in simulations.

predictive horizon $h$    critical sampling $T_c$
8                         0.051 sec
4                         0.105 sec
2                         0.22 sec
1                         0.45 sec

Table 6.9: Critical sampling for different horizons.

From Table 6.9, we can see that

$$hT_c \approx \text{constant} \qquad (6.118)$$

This result is very useful. The case $h = 1$ is just pole-zero cancellation, for which the critical sampling was discussed in Chapter 4. Then, given a faster sampling, we can choose the extended horizon $h$ such that the predictive self-tuning algorithm is stable.

6.2.2 Discussion

Simplest cases: We are going to show that (6.118) is true for the simplest case.
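Before the derivation, the 3% sensitivity comparison of Example 6.3 can be reproduced numerically; a minimal sketch with the same numbers:

```python
a_true, err, h = 0.99, 0.03, 10

# Algorithm I (indirect): estimate a, then raise it to the power h,
# so a small relative error is amplified roughly h-fold.
lo, hi = a_true * (1 - err), a_true * (1 + err)    # 0.9603 .. 1.0197
ind_lo, ind_hi = lo ** h, hi ** h                  # ~0.6669 .. ~1.2154

# Algorithm II (direct): estimate d = a**h itself, so the 3% error
# stays a 3% error on the h-step-ahead predictor.
d = a_true ** h
dir_lo, dir_hi = d * (1 - err), d * (1 + err)      # ~0.8773 .. ~0.9315
```

The indirect range spans roughly 0.67 to 1.22 (it even crosses 1, i.e. an apparently unstable $h$-step predictor), while the direct range stays within 3% of the true $a^{10} \approx 0.904$.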
Consider the first-order continuous plant

$$G_c(s) = \frac{K}{s+p}$$

whose discrete description is

$$G_d(q^{-1}) = \frac{\frac{K}{p}(1 - e^{-pT})q^{-1}}{1 - e^{-pT}q^{-1}}$$

where $T$ is the sampling interval.

Case 1: predictive horizon $h = N$, sampling $T_s = T$. When using a first-order model $y_t = ay_{t-1} + bu_{t-1}$, we have

$$y_t = a^Ny_{t-N} + bu_{t-1} + abu_{t-2} + \cdots + a^{N-1}bu_{t-N}$$

where

$$a = e^{-pT}, \qquad b = \frac{K}{p}(1 - e^{-pT})$$

Assuming $u_t = u_{t+1} = \cdots = u_{t+N-1}$, the control law is

$$u_t = \frac{w_{t+N} - a^Ny_t}{b(1 + a + \cdots + a^{N-1})} \qquad (6.119)$$

Define:

$$\bar a = a^N = e^{-pNT}$$

$$\bar b = b(1 + a + \cdots + a^{N-1}) = \frac{K}{p}(1 - e^{-pT})(1 + e^{-pT} + \cdots + e^{-p(N-1)T}) = \frac{K}{p}(1 - e^{-pNT})$$

Then (6.119) is

$$u_t = \frac{w(t+NT) - \bar ay_t}{\bar b} \qquad (6.120)$$

Case 2: predictive horizon $h = 1$, sampling $T_s = NT$. We have the estimation model:

$$y_t = \bar ay_{t-1} + \bar bu_{t-1} \qquad (6.121)$$

where

$$\bar a = a^N = e^{-pNT}, \qquad \bar b = \frac{K}{p}(1 - e^{-pNT})$$

The control law is simply:

$$u_t = \frac{w(t+NT) - \bar ay_t}{\bar b}$$

which is the same as in Case 1. Notice that $w(t+NT)$ has different expressions under the two samplings: in Case 1 the sampling is $T$, so $w_{t+N} = w(t+NT)$; in Case 2 the sampling is $NT$, so $w_{t+1} = w(t+NT)$. Thus we have shown that the control laws for Case 1 and Case 2 are equivalent.

Discussion for higher-order plants

For a higher-order plant, no complete proof is available yet. However, the following remarks are helpful for understanding (6.118). Consider a continuous plant with $n$ poles and $m$ zeros:

$$G_c(s) = \frac{k(s+z_1)\cdots(s+z_m)}{(s+p_1)(s+p_2)\cdots(s+p_n)} \qquad (6.122)$$

The discrete equivalent of (6.122) in ARMA-model form is:

$$y_t = a_1y_{t-1} + a_2y_{t-2} + \cdots + a_ny_{t-n} + b_1u_{t-1} + b_2u_{t-2} + \cdots + b_nu_{t-n} \qquad (6.123)$$

When using a first-order model (6.121) with predictive control extended horizon $h$, the form used for estimation is:

$$y_t = a^hy_{t-h} + bu_{t-1} + abu_{t-2} + \cdots + a^{h-1}bu_{t-h}$$

1. For the model above, the static gain is independent of the extended horizon $h$:

$$\text{d.c. gain} = \frac{b + ab + \cdots + a^{h-1}b}{1 - a^h} = \frac{b}{1-a}$$

2. For different horizons $h$, the forms of the controller are different, and so are their contributions to the open-loop gain. For $h = 1$, this gain is simply $1/b$. For $h > 1$, this gain is

$$\frac{1}{b(1 + a + \cdots + a^{h-1})}$$

When the sampling period $T_s$ is small, $a$ is close to one and $b$ is close to zero, so the open-loop gain of the controller for $h = 1$ is very large, which can result in an unstable closed-loop system. As $h$ increases, this gain decreases; thus a small $T_s$ requires a large $h$ for stability.

3. If $h > 1$, the estimation model is

$$\frac{bq^{-1} + abq^{-2} + \cdots + a^{h-1}bq^{-h}}{1 - a^hq^{-h}} = \frac{bq^{-1}}{1 - aq^{-1}}$$

Theoretically, there is a cancellation; in practice, the cancellation will usually not occur. The model under the control-horizon assumption is:

$$\frac{b(1 + a + \cdots + a^{h-1})q^{-h}}{1 - a^hq^{-h}}$$

4. If slow adaptation as discussed in Chapter 5 is used, and the closed-loop system with each newly updated set of estimated parameters is stable, then the adaptive system is stable. With a first-order estimation model and horizon $h$, the closed-loop transfer function is:

$$\frac{B(q^{-1})}{A(q^{-1})\hat b(1 + \hat a + \cdots + \hat a^{h-1}) + \hat a^hB(q^{-1})q^{-1}}$$

As $h$ increases with $|\hat a| < 1$, the term $\hat a^h$ vanishes and the closed-loop poles approach those of the open-loop plant.

Design guidelines for choosing $h$

• For the continuous plant, find the critical sampling $T_c$ for the stable self-tuning pole-zero cancellation algorithm, using for instance the relay method of Chapter 4.
• For the desired sampling $T_s$, where $T_s < T_c$, the extended horizon $h$ has to satisfy:

$$h \ge \left[\frac{T_c}{T_s}\right] + 1 \qquad (6.124)$$

In practice, for a stable plant, a longer $h$ brings more robustness; however, the process response becomes slower. In Section 6.4, several simulation examples are given.

6.3 Stability Analysis for a Simple Case

We now use the stability theory of Chapter 3 to study the stability condition for a simple case of predictive control. Let the discrete plant be:

$$A(q^{-1})y_t = B(q^{-1})u_{t-1} \qquad (6.125)$$

where $A(q^{-1})$, $B(q^{-1})$ can be of any order. If we use a first-order model for parameter estimation, with converged parameters $a$ and $b$, the first-order model is:

$$y_t = ay_{t-1} + bu_{t-1} \qquad (6.126)$$

$$y_t = a^2y_{t-2} + abu_{t-2} + bu_{t-1} \qquad (6.127)$$

This is an indirect algorithm (algorithm I in Example 6.3). We use (6.126) as the estimation model and base the controller design on (6.127), where the desired output for $y_t$ is $w_t$. Based on (6.127) and the desired output, we have

$$w_{t+2} = \hat a_t^2y_t + \hat a_t\hat b_tu_t + \hat b_tu_{t+1}$$

where $\hat a_t$, $\hat b_t$ are the estimated parameters at time $t$. We use an extended horizon, assuming $u_t = u_{t+1}$; the control law is:

$$w_{t+2} = \hat a_t^2y_t + (\hat a_t\hat b_t + \hat b_t)u_t \qquad (6.128)$$

Define

$$v_t = ay_t + bu_t = \phi_t^T\theta^*, \qquad v_t^* = \hat a_ty_t + \hat b_tu_t = \phi_t^T\hat\theta_t$$

Here $a$, $b$ are the converged estimation parameters, $\hat\theta_t = [\hat a_t, \hat b_t]^T$, $\theta^* = [a, b]^T$ and $\phi_t = [y_t, u_t]^T$. We also define

$$\bar\phi_t = [v_t, u_t]^T$$

Thus,

$$w_{t+2} = \hat a_t[\hat a_ty_t + \hat b_tu_t] + \hat b_tu_t = \hat a_tv_t^* + \hat b_tu_t$$

Let

$$C(q^{-1}) = (ab + b)A(q^{-1}) + a^2B(q^{-1})q^{-1} \qquad (6.129)$$

Then

$$C(q^{-1})y_t = (ab+b)B(q^{-1})u_{t-1} + a^2B(q^{-1})y_{t-1} = B(q^{-1})\left[a(ay_{t-1} + bu_{t-1}) + bu_{t-1}\right] = B(q^{-1})\left[av_{t-1} + bu_{t-1}\right]$$

Since

$$v_{t-1}^* - v_{t-1} = (\hat\theta_{t-1} - \theta^*)^T\phi_{t-1} = \tilde\theta_{t-1}^T\phi_{t-1}, \qquad \tilde\theta_t = \hat\theta_t - \theta^*$$

and the control law at time $t-1$ gives $\hat a_{t-1}v_{t-1}^* + \hat b_{t-1}u_{t-1} = w_{t+1}$, we can write, neglecting the second-order term $\tilde a_{t-1}\tilde\theta_{t-1}^T\phi_{t-1}$,

$$av_{t-1} + bu_{t-1} = w_{t+1} - \tilde\theta_{t-1}^T\left(\bar\phi_{t-1} + a\phi_{t-1}\right)$$

Let $\tilde\phi_{t-1} = \bar\phi_{t-1} + a\phi_{t-1}$. Then we have the error model:

$$y_t = -C^{-1}(q^{-1})B(q^{-1})\tilde\theta_{t-1}^T\tilde\phi_{t-1} + C^{-1}(q^{-1})B(q^{-1})w_{t+1} \qquad (6.130)$$

Observing the error expression, and using $w_{t+1} = \hat a_{t-1}v_{t-1}^* + \hat b_{t-1}u_{t-1}$:

$$\bar e_t = (1 + \hat a_{t-1} - \hat a_{t-1}q^{-1})y_t - w_{t+1} = y_t - \hat a_{t-1}y_{t-1} - \hat b_{t-1}u_{t-1} + \hat a_{t-1}\left[y_t - v_{t-1}^*\right] = (1 + \hat a_{t-1})e_t$$

where $e_t$ is the error in the estimation,

$$e_t = y_t - (\hat a_{t-1}y_{t-1} + \hat b_{t-1}u_{t-1}) = \frac{1}{1 + \hat a_{t-1}}\,\bar e_t \qquad (6.131)$$

The error model is as in Fig. 6.34.

Figure 6.34: Error model for predictive adaptive control.

Following the same procedure as the stability analysis in Chapter 3, if the term $1 + \hat a_{t-1}$ is always positive, i.e. $\hat a_{t-1} > -1$, then the stability condition for the predictive adaptive control system is that $H_2$ be SPR. Note that the expression for $H_2$ here is different, as we now proceed to show. Let

$$C_r(q^{-1}) = 1 + \hat a_{t-1} - \hat a_{t-1}q^{-1}$$

which is a time-varying operator. Define the error as:

$$\tilde e_t = C_r(q^{-1})y_t - w_{t+1} = -C_rC^{-1}B\,\tilde\theta_{t-1}^T\tilde\phi_{t-1} + \left(C_rC^{-1}B - 1\right)w_{t+1}$$

Thus,

$$H_2(q^{-1}) = C_r(q^{-1})C^{-1}(q^{-1})B(q^{-1}) \qquad (6.132)$$

or, more specifically,

$$H_2(q^{-1}) = \frac{C_r(q^{-1})B(q^{-1})}{(ab + b)A(q^{-1}) + a^2B(q^{-1})q^{-1}}$$

The condition for a stable adaptive algorithm is that $H_2$ be SPR, i.e., the phase of $H_2(e^{-j\omega T})$ lies within $[-90°, 90°]$. However, the phase of $C_r(e^{-j\omega T})$ is

$$\text{phase}\,C_r(e^{-j\omega T}) = \tan^{-1}\frac{a\sin\omega T}{1 + a - a\cos\omega T}$$

which lies in $[0°, 90°]$ for positive $a$, where $\omega \in [0, \pi/T]$. Usually, the phase of $B(q^{-1})/A(q^{-1})$
is less than zero; thus, when the effect of $C_r$ is considered, it adds a positive amount to the negative phase of $B(q^{-1})/A(q^{-1})$ and so relaxes the requirement on that phase. An example is given to illustrate this argument.

Example 6.5 Consider the continuous plant

$$G(s) = \frac{2}{s+1}\cdot\frac{229}{s^2 + 30s + 229}$$

We consider the case of sampling period $T_s = 0.2$ sec, with command input frequency 1.0 rad/sec, and compare the phase of $H_2$ for two different adaptive control algorithms.

Fig. 6.35(a) shows the phase of $H_2$ when only pole-zero cancellation is used. As in the simplified case, we just need to check whether the phase of $B(q^{-1})/A(q^{-1})$ is within $(-90°, 90°)$. Here $B/A$ is the discrete plant not containing the delay due to the sampling and zero-order hold. For stability, the phase of $B/A$ is restricted to the shaded area in Fig. 6.35(b). From the figure, we see that the stability condition for $B/A$ is not satisfied.

Fig. 6.35(c) is the phase plot of $H_2$ when predictive adaptive control is used. Here the estimation gives the converged estimate $a = 0.9104$. The phase of $C_r$ is positive, which adds to the negative phase of $B/A$; SPR $H_2$ is satisfied in this case. Fig. 6.35(d) shows the area of the required phase of $B/A$ for SPR $H_2$: when predictive control is used, the admissible area is enlarged. Thus the predictive control algorithm is more robust.

6.4 Simulation Examples

Example 6.6 (Verification of (6.118) for Rohrs' plant) Consider the plant used in Example 6.3. We simulate predictive self-tuning control for the cases $T_s = 0.3$ sec, $h = 1$ and $T_s = 0.075$ sec, $h = 4$. The command input is a step level, and there is a small output noise $N(0, 0.01)$. The simulation results are shown in Fig. 6.36. The estimation uses the first-order model (6.121) and only two parameters are
Figure 6.35: Phase plot of $H_2$ for different algorithms.

• Fig. 6.35(a): Phase plot of $H_2$ for conventional pole-zero cancellation adaptive control, Rohrs' plant, $T_s = 0.20$ sec, command input frequency 1.0 rad/sec.
• Fig. 6.35(b): Required area for the phase plot of $B/A$ when pole-zero cancellation is used.
• Fig. 6.35(c): Phase plot of $H_2$ for predictive adaptive control with $h = 2$, Rohrs' plant, $T_s = 0.20$ sec, command input frequency 1.0 rad/sec.
• Fig. 6.35(d): Required area for the phase plot of $B/A$ when predictive adaptive control is used.

Figure 6.36: Simulation result in Example 6.6.

• Fig. 6.36(a): Simulated output and command for the case $T_s = 0.3$ sec, $h = 1$;
• Fig. 6.36(b): Estimated parameters for Fig. 6.36(a);
• Fig. 6.36(c): Simulated output and command for the case $T_s = 0.075$ sec, $h = 4$;
• Fig. 6.36(d): Estimated parameters for Fig. 6.36(c).
It is also interesting to notice that for the latter case, d = 0.813, b{l + d + d + d ) = 0.378, which is close to the estimated parameters for 4  2  3  the former case. Thus the control algorithm is virtually the same. Example 6.7 (Predictive Control for higher order system) Now consider an eighth-order plant:  As in example 5.6, the critical sampling for pole-placement adaptive control is around T, = 7.0sec We use the extended horizon h — 8, and the sampling period T, = l.Osec. The estimation uses a first-order model, i.e., by using the predictive control algorithm I in example 6.3, only two parameters are to be estimated. The fixed slow adaptation rate is set as T  0  = 20.0sec, (20 sampling periods). There is a small output noise  JV(0,0.01) added to the output of the process. The command input and output are as shown in Fig. 6.37(a), the estimated parameters d, b and controller parameters a  l s  &i  are as shown i n Fig. 6.37(b), Fig. 6.37(c) respectively. A n d  ai = a 6i  =  6(l + d + d H  t-d )  2  7  At the end of simulation, we obtain d = 0.919, b = 0.166. The controller parameters are the dashed lines in Fig. 6.37(b),(c) where a = 0.508, b = 1.003. x  x  When conventional slow adaptation is used, i.e., T = T, or the extended horizon is a  h = 6, we observed unstable output in both cases.  134  Chapter 6. Robust Adaptive Control: Using Predictive Control  Fig.6.37(a)  Command  a n d Output  f o  0.00  100.  800.  300.  -tOO.  600.  600.  8 0 0 . ..  000.  lOOO.  Fig.6.37(c)  Fig. 6.37(b)  CCS  TOO.  CO  cd <  cd  o.oo  zoo.  -too.  eoo.  eoo.  1000.  0.00  200.  400.  800.  800.  1000.  Time i n Seconds  Figure 6.37: Simulation result in example 6.7 • F i g . 6.37(a): Simulated output (dashed line) and command (solid line); • F i g . 6.37(b): Estimated parameter d (solid line) and controller parameter a (dashed line);  x  • F i g . 6.37(c): Estimated parameter b (solid line) and controller parameter 61 (dashed line).  Chapter 6. 
[Figure 6.38 appears here: relay-induced output oscillation versus time in seconds.]

Figure 6.38: Oscillation obtained by using relay for a time-varying plant

Example 6.8 (STC of a time-varying plant) We consider a fifth-order time-varying continuous plant, with transfer function:

    G(s) = b_t / [(s + 1)^3 (s + a_t)^2]                    (6.133)

where

    a_t = 3.5 + Δa sin(w_a t)
    b_t = 1.0 + Δb sin(w_b t)

In the simulation, we set Δa = 1.0, Δb = 0.3, w_a = 0.05, w_b = 0.02. To control this time-varying system, we first use a relay in closed loop with the plant to obtain the critical sampling. The result is as in Fig. 6.38.

From Fig. 6.38, the average half period of the oscillation is around 3.0 sec. Notice that the time-varying system will result in a time-varying oscillation period, thus the worst case should be considered if the critical sampling for a non-minimum phase discrete plant is to be found. For the frequency domain estimation result, Fig. 6.39 shows the estimated Nyquist curve obtained by using a 1024-point FFT. We can find that the high-frequency band is well estimated. This is generally true if the rate of change of the time-varying parameters is very slow. Now we simulate predictive control of the plant with the extended horizon h = 10 and sampling period T_s = 1.0 sec. The estimation model is a first-order model, and the adaptation rate is T_a = 5 sec. There is a small noise N(0, 0.01) added to the output. The estimated and controller parameters are as in Fig. 6.40, where the dashed lines denote the controller parameters. Here, for the non-persistent excitation case, we use the exponential forgetting and resetting algorithm (EFRA) (see [36]) in parameter estimation.
The estimation form is:

    θ_{t+1} = θ_t + α P_{t-1} φ_t e_t / (1 + φ_t' P_{t-1} φ_t)

    P_t = (1/λ) [ P_{t-1} - α P_{t-1} φ_t φ_t' P_{t-1} / (1 + φ_t' P_{t-1} φ_t) ] + β I - δ P_{t-1}^2

where the forgetting factor λ = 0.9 and the constants α = 0.5, β = 0.005, δ = 0.005.

The command and system output are shown in Fig. 6.41. The controlled output is desirable, but there is a large overshoot when the command has a rapid change. For a conventional pole-placement adaptive control algorithm, the loop is unstable if only a first-order estimation model is used.

[Figure 6.39 appears here.]

Figure 6.39: Frequency domain estimation result (Nyquist curve of the discrete plant)

6.5  Conclusion

In this chapter, we have shown that adaptive predictive control is in general more robust than adaptive pole-zero cancellation control. When using the simple receding horizon controller [40], an interesting relationship exists between the critical sampling for stable STC and the extended control horizon. This relation is verified for a simple case and controller design guidelines are given. Here, the result on critical sampling in Chapter 4 is useful in choosing a proper extended horizon. Also, we have applied the SPR stability condition to a simple adaptive control case and shown that predictive adaptive control is more robust when the extended horizon is higher for a stable plant. Several simulation examples have been presented.

[Figure 6.40 appears here: estimated and controller parameters versus time in seconds.]

Figure 6.40: Parameter estimation and controller parameters in Example 6.8
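A minimal numerical sketch of the EFRA recursion quoted in Example 6.8 may help fix ideas. The constants are those quoted from [36] above; the simulated plant, input, and initial covariance below are illustrative choices of ours, not taken from the thesis listing:

```python
import numpy as np

def efra_step(theta, P, phi, y, lam=0.9, alpha=0.5, beta=0.005, delta=0.005):
    """One EFRA update: exponential forgetting (lam) plus the beta*I resetting
    and delta*P^2 bounding terms that keep the covariance P well conditioned."""
    phi = phi.reshape(-1, 1)
    e = float(y - phi.T @ theta)               # prediction error
    denom = 1.0 + float(phi.T @ P @ phi)
    theta = theta + (alpha / denom) * (P @ phi) * e
    P = (P - (alpha / denom) * (P @ phi) @ (phi.T @ P)) / lam \
        + beta * np.eye(len(P)) - delta * (P @ P)
    return theta, P, e

# Identify the first-order model y[t+1] = d*y[t] + b*u[t] on a noise-free
# simulated plant (d = 0.8, b = 0.5 are illustrative values).
rng = np.random.default_rng(0)
theta, P, y = np.zeros((2, 1)), 10.0 * np.eye(2), 0.0
for t in range(500):
    u = float(rng.standard_normal())
    y_next = 0.8 * y + 0.5 * u
    theta, P, _ = efra_step(theta, P, np.array([y, u]), y_next)
    y = y_next
print(theta.ravel())   # estimates approach (d, b) = (0.8, 0.5)
```

Unlike plain exponential forgetting, the beta and delta terms keep P bounded away from both zero and infinity, which is what makes the scheme usable under the non-persistent excitation of Example 6.8.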
[Figure 6.41 appears here: command and controlled output versus time in seconds.]

Figure 6.41: Simulation output for the predictive adaptive control algorithm applied to a time-varying plant, Example 6.8

Chapter 7

Conclusion

7.1  Summary of the work

In this thesis, we have discussed various discrete-time robust adaptive controller design methods. The sampling frequency plays an important role in the robustness of discrete-time adaptive control. For most adaptive control algorithms, slow sampling will result in more robustness. This fact is verified either theoretically or through simulations in [33], [34], [13], [43] etc., where different algorithms are used. Here we have derived sampling conditions for stable pole-zero cancellation discrete adaptive control algorithms. Ways to find critical sampling frequencies are proposed. Further, we also studied robust adaptive controller design methods using slow adaptation and a predictive control algorithm. In those robust adaptive controller design methods, extra effort is necessary to compensate for using faster sampling. All results are summarized as follows:

1. A stability condition for the conventional adaptive pole-zero cancellation algorithm under model mismatch is derived. An upper limit for the adaptation gain is given. However, this upper limit is very conservative in practice. (Refer to section 3.4.)

2. The critical sampling frequency for a minimum phase discrete plant can be found by using relay control. Here we suppose the continuous plant is rational, stable and inversely stable. Any sampling frequency lower than the critical sampling frequency will result in a minimum phase discrete plant. Keeping the discrete plant minimum phase is essential for stable pole-zero placement adaptive control in the model matched case. It can be shown that if there are no unstable poles and zeros in the discrete plant, conventional pole-zero placement adaptive control algorithms are stable. (Refer to section 4.2.)

3.
However, for the model mismatch case, the minimum-phase discrete plant is only a necessary condition. We have also studied the relationships between the model error and the sampling frequency, and between the model error and the SPR condition. An approximate way to find the critical sampling for a simplified stability condition is proposed. Frequency domain information for the continuous plant is necessary for using this method. (Refer to section 4.3.)

4. The robustness of discrete-time adaptive control algorithms can be improved if the adaptation rate is much slower than the sampling rate. The slow adaptation rate should ensure that only good estimates are used for controller design. For that case, the SPR H_2 condition can be relaxed. However, the robustness improvement is limited, as sampling too fast may still result in unstable output. This is because another condition for stability has to be satisfied, i.e., the resulting closed loop using the new estimated parameters must be stable. (Refer to section 5.2.)

5. A robust adaptive control algorithm which always ensures stability is possible if we can obtain good frequency domain estimation and design a proper filter. An example of how to use frequency domain estimation by Fourier transform and how to design a low-pass or band-pass filter is shown. If the frequency domain information is accurately known or estimated, the parameters of the filter can be self-adjusted by the control algorithm and stability can be ensured. (Refer to sections 5.3, 5.4, figure 5.28 and the computer program Appendix A.)

6. Predictive control is a frequently used adaptive control method. We studied the relationship between the extended control horizon and the sampling frequency for the critically stable adaptive control case.
Since the pole-zero placement algorithm is a special case of the predictive control algorithm, applying this relation together with the known critical sampling for the conventional pole-placement adaptive control algorithm is useful, especially when a desired sampling rate is set. However, this relation has not been proved for general cases, although simulations show that it seems to hold. (Refer to section 6.2.)

In [39], it was written, "... adaptive control can be useful, but extreme care must be taken when implementing the controllers." Here we discussed the effect of slow sampling on the robustness of the conventional pole-placement algorithm and some methods to improve the robustness. Those methods are indeed useful when implementing adaptive controllers.

7.2  Future Research Directions

Continuing research under this topic can be categorized into theoretical issues and application issues.

For theoretical studies, current stability conditions are expressed in the time domain and they are usually very conservative. Is it possible to express the stability conditions in the frequency domain (like a Bode plot)? Then, the effect of sampling can be interpreted as the high frequency components aliasing into the low frequency part. Also, for predictive control algorithms, there are no good theoretical results on the stability conditions for general algorithms so far. For example, given an arbitrary extended horizon, how can the stability condition be derived? These questions remain to be answered.

On the application side, the robust adaptive control design guidelines in this thesis can be used in a knowledge-based commissioning and monitoring system. Hardware and software can be built for the robust adaptive controller using slow adaptation (chapter 5) or predictive control (chapter 6). Also, the principle of slow adaptation and slow sampling can be applied to most adaptive control algorithms.
7.3  Conclusion

It is the theme of this thesis that for a rational, stable and inversely stable continuous plant, the pole-placement adaptive controller design algorithm can always be stable if the sampling frequency is slow enough.

Bibliography

[1] Anderson, B.D.O., R.R. Bitmead, C.R. Johnson, Jr., P.V. Kokotovic, R.L. Kosut, I.M.Y. Mareels, L. Praly, and B.D. Riedle, Stability of Adaptive Systems: Passivity and Averaging Analysis. Cambridge, MA: MIT Press, 1986.

[2] Astrom, K.J., "Analysis of Rohrs counterexamples to adaptive control," Proc. 22nd IEEE Conf. Decision Control, San Antonio, TX, Dec. 1983.

[3] Astrom, K.J., "Interactions between excitation and unmodelled dynamics in adaptive control," Proc. 23rd IEEE Conf. Decision Control, Las Vegas, NV, 1984, pp. 1276-1281.

[4] Astrom, K.J., P. Eykhoff, "System identification - a survey," Automatica, Vol. 7, pp. 123-162, 1971.

[5] Astrom, K.J., P. Hagander and J. Sternby, "Zeros of sampled systems," Automatica, Vol. 20, No. 1, pp. 31-38, 1984.

[6] Astrom, K.J., T. Hagglund, "Automatic tuning of simple regulators with specifications on phase and amplitude margins," Automatica, Vol. 20, No. 5, pp. 645-651, 1984.

[7] Astrom, K.J., B. Wittenmark, "Self-tuning controllers based on pole-zero placement," IEE Proceedings, Vol. 127, Pt. D, No. 3, May 1980.

[8] Astrom, K.J., B. Wittenmark, Computer Controlled Systems, Prentice-Hall Inc., Englewood Cliffs, NJ, 1984.

[9] Astrom, K.J., B. Wittenmark, Adaptive Control, Addison-Wesley Publishing Company, 1989.

[10] Belanger, P.R., G.A. Dumont, S. Gendron, "Practical experience with Kamyr digester level self-tuning control," in Proc. 1984 American Control Conf., San Diego, CA, pp. 48-53, 1984.

[11] Clarke, D.W., "Self-tuning control of nonminimum-phase systems," Automatica, Vol. 20, No. 5, pp. 501-517, 1984.

[12] Clarke, D.W., C. Mohtadi, P.S.
Tuffs, "Generalized predictive control - Part I. The basic algorithm," Automatica, Vol. 23, No. 2, pp. 137-148, 1987.

[13] Cluett, W.R., S.L. Shah, D.G. Fisher, "Robust design of adaptive control systems using conic sector theory," 2nd IFAC Workshop on Adaptive Systems, Lund, 1986.

[14] Cluett, W.R., S.L. Shah, D.G. Fisher, "Robustness analysis of discrete-time adaptive control systems using input-output stability theory: a tutorial," IEE Proceedings, Vol. 135, Pt. D, No. 2, March 1988, pp. 133-141.

[15] Dumont, G.A., P.R. Belanger, "Self-tuning control of a titanium dioxide kiln," IEEE Trans. Automat. Contr., Vol. AC-23, pp. 532-537, 1978.

[16] Egardt, B., Stability of Adaptive Controllers. New York: Springer-Verlag, 1979.

[17] Franklin, G.F., J.D. Powell, Digital Control of Dynamic Systems, Addison-Wesley Publishing Company, 1980.

[18] Fu, Y., G.A. Dumont, "Robust discrete STC design using intermittent adaptation," 1988 IFAC Workshop on Robust Adaptive Control, Newcastle, Australia, August 1988.

[19] Fu, Y., G.A. Dumont, "Choice of sampling to ensure minimum phase behaviour," IEEE Trans. Aut. Control, May 1989, pp. 560-563.

[20] Fu, Y., G.A. Dumont, "A stability condition for STC with model order mismatch," accepted by the 28th IEEE Conference on Decision and Control, Tampa, Florida, December 13-15, 1989.

[21] Goodwin, G.C., P.J. Ramadge and P.E. Caines, "Discrete time multivariable adaptive control," IEEE Trans. Aut. Control, AC-25, 1980, pp. 449-456.

[22] Goodwin, G.C., and K.S. Sin, Adaptive Filtering, Prediction and Control, Information and Systems Science Series. Englewood Cliffs, NJ: Prentice-Hall, 1984.

[23] Gregory, P.C., Ed., Proc. Self Adaptive Flight Control Systems Symp., (WADC Tech. Rep. 59-49, Wright Air Development Center, Wright-Patterson Air Force Base, OH), 1959.

[24] Kallstrom, C.G., K.J. Astrom, N.E.
Thorell, J. Eriksson, L. Sten, "Adaptive autopilots for tankers," Automatica, Vol. 15, pp. 241-254, 1979.

[25] Kosut, R.L., "Stability theory for adaptive systems: method of averaging and persistency of excitation," IEEE Transactions on Automatic Control, Vol. AC-32, No. 1, pp. 26-34, 1987.

[26] LaMaire, R.O., "Robust time and frequency domain estimation methods in adaptive control," Ph.D. thesis, M.I.T., Department of Electrical Engineering and Computer Science, May 1987.

[27] Leininger, L., S.P. Wang, "Self-tuning control of manipulator dynamics," presented at 1983 American Control Conf., San Francisco, CA, 1983.

[28] Ma, C.C.H., "Inherent robustness of discrete-time adaptive control systems," Control Theory and Advanced Technology, Sept. 1987, pp. 197-216.

[29] Mishkin, E., E.J. Adkins, Adaptive Control Systems. New York, NY: McGraw-Hill, 1961.

[30] Ortega, R., L. Praly, I.D. Landau, "Robustness of discrete-time direct adaptive controllers," IEEE Trans. on Aut. Control, AC-30, Dec. 1985.

[31] Rohrs, C., "Adaptive control in the presence of unmodeled dynamics," Ph.D. dissertation, Dept. Elec. Eng. Comput. Sci., MIT, Aug. 1982.

[32] Rohrs, C., L. Valavani, M. Athans, G. Stein, "Robustness of continuous-time adaptive control algorithms in the presence of unmodeled dynamics," IEEE Trans. Aut. Control, AC-30, 1985, pp. 881-889.

[33] Rohrs, C.E., G. Stein, K.J. Astrom, "Uncertainty in sampled systems," Proceedings American Control Conference, Boston, MA, June 1985.

[34] Rohrs, C.E., G. Stein, K.J. Astrom, "A practical robustness theory for adaptive control," Proceedings American Control Conference, Boston, MA, June 1985.

[35] Safonov, M.G., Stability Robustness of Multivariable Feedback Systems, Cambridge, MA: M.I.T. Press, 1980.

[36] Salgado, M.E., G.C. Goodwin, R.H. Middleton, "Modified least squares algorithm incorporating exponential resetting and forgetting," Int.
J. Control, Vol. 47, No. 2, pp. 477-491, 1988.

[37] Strejc, V., "Least squares parameter estimation," Automatica, Vol. 16, pp. 535-550, 1980.

[38] Wittenmark, B., K.J. Astrom, "Practical issues in the implementation of self-tuning control," Automatica, Vol. 20, No. 5, pp. 595-605, 1984.

[39] Wittenmark, B., "On the role of filters in adaptive control," Technical Report EE8662, Dept. of Electrical and Computer Engineering, University of Newcastle, New South Wales, Australia, Dec. 1986.

[40] Ydstie, B.E., L.S. Kershenbaum, R.W.H. Sargent, "Theory and application of an extended horizon self-tuning controller," AIChE Journal, 31(11):1771, 1985.

[41] Ydstie, B.E., "Bifurcations and complex dynamics in adaptive control systems," Proceedings of 25th CDC, Athens, Greece, December 1986, pp. 2232-2236.

[42] Zames, G., "On the input-output stability of time-varying nonlinear feedback systems. Part I," IEEE Trans. Aut. Control, April 1966.

[43] Zervos, C.C., G.A. Dumont, "Deterministic adaptive control based on a Laguerre series representation," International Journal of Control, 1988, Vol. 48, No. 6, pp. 2333-2359.

[44] Mathematics Manual (in Chinese), 1979, Peking.

Appendix A

List of Abbreviations

ARMA        Auto-Regressive and Moving-Average, defined in (3.18), p. 25
CARIMA      Controlled Auto-Regressive and Integrated Moving-Average
CARMA       Controlled Auto-Regressive and Moving-Average
D.C. gain   Direct Current gain or zero frequency gain
DFT         Discrete Fourier Transform, defined in (2.6), p. 8
DTFT        Discrete-Time Fourier Transform, defined in (2.5), p. 8
EFRA        Exponential Forgetting and Resetting Algorithm, p. 134
FFT         Fast Fourier Transform
MP          Minimum Phase
MRAC        Model Reference Adaptive Controllers
MVC         Minimum Variance Control
NMP         Non-Minimum Phase
SPR         Strictly Positive Real
STC         Self-Tuning Control
STR         Self-Tuning Regulator

Appendix B

List of Terminology

Averaging               definition: p. 21
                        stability theory: pp.
22-23

Diophantine Equation    eq. (3.28), p. 32
Persistently exciting   definition: p. 29
Relation                definition: p. 14
                        conic relation: p. 16
                        positive relation: p. 17
Rohrs' plant            continuous plant eq. (3.26) on p. 30
Slow adaptation rate    fixed: p. 89
                        unfixed: p. 92

Appendix C

Simulation Program

1. The main program:

PROGRAM REAL PLANT SIMULATION
"The plant is:                           "
"               2         229            "
"    G(s) = ------- * --------------     "
"             s+1     s*s + 30s + 229    "
"Order of the estimator = 1."
"The direct (pole-zero placement) algorithm is used."
"(COMMAND) INPUT: AA*cos(w1*t)+DI*cos(w2*t)+NI*GAUSS(0,1)"
"          OUTPUT: YY=Y+NO*GAUSS(0,1)+DO*cos(w3*t)"
"Subroutines used are: ASY, FIL, SCAL, EST, FFT"
"*************************************"
REAL THETA(2), X(2), K(2), PEC(2,2), GAIN(1024), PHA(1024)
REAL YT(2048), UT(2048), YF(2048), UF(2048), H(2048)
INTEGER NPAR, I, J, M, NCON, COUN, NCOU, KM, IC, N, NN
"NOTE: The maximum value of NN is: 10"

INITIAL
"SIMULATION CONDITION:"
CONSTANT CINT=0.05, PRT=50.0, TF=199.5, TES=20.0

"PARAMETERS AND INITIAL VALUE FOR ESTIMATION"
CONSTANT FORGET=1.0, NPAR=2, PERR=0.1
CONSTANT THETA(1)=0.0, THETA(2)=1.0

"INITIAL VALUES OF STATES"
CONSTANT YM1=0.0, Y=0.0, YY=0.0, Z=0.0, E=0.0
CONSTANT UM2=0.0, UM1=0.0, U=0.0, RM1=0.0, R=0.0
CONSTANT W=0.0, ND=0.0

"INITIAL VALUE FOR CURVE FITTING AND FILTERING"
CONSTANT VAR=0.05, TA1=1.0, TB1=1.0, KC=0.2, Q=2.0

"INITIAL VALUE FOR PARAMETERS IN FFT"
CONSTANT FB=2.0, ALF=0.8, FW=-5.0, NN=7

"OTHER INITIAL VALUES"
CONSTANT AA=1.0, DI=0.0, NI=0.0, DO=0.0, NO=0.0
CONSTANT W1=0.0, W2=0.0, W3=0.0, COF=1.0, TYPE=2.0, F=1.0
ALGORITHM IALG=5
MAXTERVAL MAXT=0.001
"ASSIGNED INITIAL VALUES"
A1=THETA(1), B1=THETA(2), NCON=1, NCOU=0
N=2**NN
GAINY=0.0, GAINU=0.0, PHIY=0.0, PHIU=0.0
TS=TES+(FLOAT(N)-1.0)*CINT
END $ "INITIAL"

"INPUT TYPE,W1,W2,W3"
READ(5,20) TYPE, W1, W2, W3
20..FORMAT(4F15.6)
"INPUT AA,DI,NI,DO,NO"
WRITE(6,30)
30..
FORMAT (IX, 25HINPUT GAIN:AA,DI,NI,DO,NO) READ (5, 35) AA, DI, NI, DO, NO f  35. .FORMAT(5F15. 6) IF(TYPE.NE.1.0)  GO TO 60  "IN CASE OF NOADAPTIVE CONTROL, INPUT DESIGN PARAMETERS" WRITE (6, 40) 40..FORMAT(IX,21HINPUT ESTIMATED Al,Bl) READ(5,50)A1,B1 50. .FORMAT(2F12. 6) 60..CONTINUE WRITE (6, 64) 64..FORMAT(IX,1H  )  "CALCULATE ACCURATE PARAMETERS" CALL CAL(CINT,Wl, TAl, TBI) WRITE (6, 65) TAl, TBI 65..FORMAT(IX,28HTHE CONVERGED PARAMETERS WRITE (6, 66) 66. .FORMAT(IX, 1H )  ARE,2F10.4)  IF (Wl.EQ.O.O) GO TO 80 BEI=2.0*3.1415926/(CINT*W1) IF (BEI.GT.8.0) GO TO 80 WRITE (6, 70)BEI 70..FORMAT(IX, 13HWARNING: ONLY,F5.1,28H 80..CONTINUE  153  POINTS IN ONE SIGNAL PEROID)  "CHOOSE ASYNCHRONOUS SAMPLING PERIOD COEFFICIENT" CALL PRINT3(1.0) READ(5,90) KM 90. . FORMAT (16) "CALCULATION ASYNCHRONOUS PERIOD" IF (Wl.EQ.O.O) GO TO 92 M=INT (FLOAT (KM) * (0.5+3.1416/ (CINT*W1) ) ) GO TO 99 "IN CASE ZERO FREQUENCY COMMAND, INPUT ASYNCHRONOUS SAMPLING PERIOD" 92. .WRITE (6, 94) 94..FORMAT(IX,38HSince wl=0.0, you name the value of N:) READ (5, 96) M 96. .FORMAT(16) rr  n  A=l.0/229.0,  B=30.0/229.0  "INITIAL VALUS OF FILTER" KK=1.0, FA1=0.0, FA2=0.0, FB1=0.0, FB0=1.0 "INITIAL VECTOR VALUE IN ESTIMATION" DO 105 1=1,2, DO 105 J=l,2, PEC (I, J) =0.0 105. . CONTINUE DO 110 1=1,2, PEC (I, I) =1000.0, K(I)=0.0, 110..CONTINUE rr  X(I)=0.0  rr  "LAST CHANCE TO CANCEL THE DESIGN" rr  rr  READ (5,120) IC 120. .FORMAT(16) IF(IC.NE.l) GO TO 10 WRITE (1,130) 130. .FORMAT(IX,3HB=[) WRITE (12,131) 131. . FORMAT (IX, 3HA=[)  «*******************************************************************" J=0 "PROCESS AND ESTIMATION"  "==================== " DYNAMIC "NOISE AND DISTURBANCE ADDED TO THE OUTPUT" YY=Y+NO*GAUSS(0.0,1.0)+DO*COS(W3*T) UM2=UM1,  UM1=U  X(2)=UM1, X(1)=YM1  154  NC0U=NC0U+1 IF(COUNM. 
NE.COUN) NCOV=0 COUNM=COUN "PARAMETER ESTIMATION" CALL SQRTFL (YY, X, PEC, K, THETA, NPAR, PERR, FORGET) ************************************************* YM1=YY, RM1=R "CORVER FITTING" IF(T.LT.TES) GO TO 302 IF(T.GT.TES) GO TO 302 IF(T.NE.TES) GO TO 301 NCOU=M GO TO 302 301..NCOU=0 302..CONTINUE CALL ASY1 (NCON, NCOU, M, VAR, YY, U, Wl, CINT, KC, GAINY, PHIY, GAINU, PHIU) IF(T.LE.TS) GO TO 310 IF(TYPE.EQ.1.0) GO TO 310 IF(NCON.NE.O) GO TO 310 ************ "ADAPTION" n  it  B1=THETA (2) A1=THETA(1) CALL FIL (FA1, FA2, FBI, FBO, KK, CINT, Q, Wl, F, ALF, GAIN, PHA, A1,B1, KC, N) ************ 310..CONTINUE "COMMAND INPUT" IF(T.LE.TS) GO TO 315 COUN=INT ( (T-TS) /PRT) IF(COUN.EQ.2.0*INT(COUN/2)) COF=1.0 IF(COUN.NE.2.0*INT(COUN/2)) COF=0.5 315..CONTINUE W=COF* (AA*COS (W1*T)) "NOISE AND DISTURBANCE ADDED TO THE COMMAND" W=W+DI*COS(W2*T)+NI*GAUSS(0.0,1.0) "SMALL PRETUBETION IS ADDED FOR FFT" . IF(T.LT.TES) GO TO 319 IF(T.GT.TS) GO TO 319 W=W+FB *EXP (FW* (T-TES) ) 319..CONTINUE "CONTROLLER" n  II  E=W-A1*YY R=E/B1 U=FA1*UM1-FA2*UM2+kk*FBO*R-kk*FB1*RM1  155  IF(T.LE.TS)  U=R  WRITE (1,320) T, W, YY, theta (1) , theta (2) ,A1,B1 320. . FORMAT (IX, 7F11. 5) 331. . CONTINUE n  rr  "obtaining steady state" IF(T.NE.TES) GO TO 335 AU=GAINU, AY=GAINY, PU=PHIU, PY=PHIY 335. .CONTINUE "DOING FFT IF REQUIRED" IF(T.LT.TES) GO TO 340 IF(T.GT.TS) GO TO 340 J=J+1 YT (2*J-1) =YY-AY*SIN (M*W1 *CINT-PY+W1 * (T-TES)) UT (2*J-1) =U-AU*SIN (M*ffl *CINT-PU+Wl * (T-TES+CINT)) FUT=AU*SIN(M*W1*CINT-PU+W1 *(T-TES) ) FYT=AY*SIN(M*W1*CINT-PY+W1*(T-TES) ) write (19,339)yt (2*j-l), ut (2*j-l),FYT, Y,FUT, UM1 339..format(lx,6fl2.5) 340..CONTINUE IF((1.5*abs(T-TS)).ge.cint) GO TO 390 "DOING ESTIMATION USING FFT" DO 342 1=1,N, YT(2*I)=0.0, UT(2*I)=0.0 342..CONTINUE DO 345 1=1, N, YF(I)=YT(I) , YF (I+N) =YT (I+N) UF (I) =UT (I) , UF (I+N) =UT (I+N) 345. 
.CONTINUE CALL FFT (YF,N, -1) CALL FFT(UF,N,-1) DO 348 1=1,N FA=UF (2*1-1) **2+UF(2*I) **2 H(2*I-1) = (YF (2*1-1) *UF (2*1-1) +YF(2*I) *UF (2*1)) /FA H(2*I) = (YF(2*I) *UF (2*1-1)-YF (2*1-1) *UF (2*1)) /FA 348. .CONTINUE DO 360 1=1,N GAIN (I) =SQRT (H (2*1-1) **2+H(2*I) **2) IF (H(2*1-1).EQ.O.O) GO TO 350 PHA (I) =ATAN (H (2*1) /H (2*1-1) ) IF(H(2*I-1) .LT.O.O) PHA(I) =PHA(I) -3.1415926 IF (PHA (I) .GT.O.O) PHA (I) =PHA (I)-6.283185 GO TO 355 350. .PHA (I) =-3.1415826/2.0 355..J=2*I-1 WRITE (12,356) I, GAIN (I) , PHA (I) ,H(J),H (2*1), YF (J), OF (J) 356. .FORMAT(IX, 16, 6F12.3) 360. .CONTINUE  156  "FIND KC" n  rr  PHI=2.0 DO 370 1=2,N IF ( (PHA (I) +3.1415926) * (PHA (1-1) +3.1415926) .LE.O.O) GO TO 380 370..CONTINUE 380..J-I-l CE1=ABS (PHA (J) +3.1415926) CE2=ABS (PHA (J+l) +3.1415926) CE=CE1+CE2, CE1=CE1/CE, CE2=CE2/CE KC=ABS (CE2*GAIN (J) /COS (PHA (J) ) +CE1 *GAIN (J+l) /COS (PHA (J+l) ) ) 390. .CONTINUE "PPROCESS OF THE PLANT" rr  n  DERIVATIVE Z=REALPL (1. 0 U 0. 0) Y=2. 0*CMPXPL(A,B,Z,0.0,0.0) END $ "DERIVATIVE" r  f  IT  IT  IF (ABS (Y) . GT. 9999. 9) TF=0. 02 TERMT(T.GT.TF) END $ "DYNAMIC" TERMINAL WRITE (1, 400) 400. .FORMAT(IX,2H];) WRITE (12, 401) 401.. FORMAT (IX, 2H];) END $ "OF TERMINAL" END $ "PROGRAM"  2. Subroutines  in FORTRAN:  (1). Subroutine  to check steady  state:  SUBROUTINE ASY1 (NCON, NCOU, MAX, VAR, YY, UU, Wl, CINT, KC, GAIN, PHI, GAINU, PHIU) REAL VAR, YY, Y (500) , UU, U (500) , Wl, CINT, KC, Hi, TELTA1  100  DO 100,1=1,499, U (501-1) =U (500-1) Y(501-1) =Y(500-1) U(1)=UU, Y(1)=YY IF(Wl.NE.O.O) GO TO 300 IF(NCOU.LT.MAX) GO TO 190 ARV=0.0  157  110  120  DO 110, J=1,MAX ARV=ARV+Y(J) AR V=AR V/FLO A T (MAX) SUM=0.0 DO 120, J=1,MAX SUM=SUM+ (Y (J) -ARV) **2 SUM=SQRT (SUM/FLOAT (MAX)) IF(SVM.GE.abs(VAR*ARV)) GO TO 180 NCON=0, NCOU=0, GO TO 200  180 185  TYPE *,'The process is not in steady WRITE (3,185) SUM, ARV FORMAT(lX,6F12.4)  state!'  
NCOU=NCOU-INT(MAX/2) 190 200  NC0N=1 RETURN  c  300  310  315 320 C  325  330 C  331  332 333  IF (NCOU.LT.MAX) GO TO 590 TWl=2*3.1415926535/Wl, PREAL=0.0, PIMAG=0.0 DO 310 1=1, MAX PREAL=PREAL+Y (MAX+1 -I) *COS (FLOAT (I) *W1 *CINT) PIMAG=PIMAG+Y (MAX+1 -I) *SIN (FLOAT (I) *Wl *CINT) CONTINUE IF(PIMAG.EQ.O.O) GO TO 315 PHI=ATAN (-PREAL/PIMAG) GO TO 320 PHI=-3.1415926/2.0 CONTINUE GAIN=0.0 DO 325 1=1, MAX GAIN=GAIN+Y (MAX+1 -I) *SIN (FLOAT (I) *W1*CINT-PHI) CONTINUE GAIN=GAIN*2.0/FLOAT(MAX) IF(GAIN.GT.0.0) GO TO 330 GAIN=-GAIN, PHI=PHI+3.1415926 CONTINUE PREALU=0.0, PIMAGU=0.0 DO 331 1=1, MAX PREALU=PREALU+ U (MAX+1 -I) *COS (FLOAT (I) *W1 *CINT) PIMAGU=PIMAGU+U (MAX+1 -I) *SIN (FLOAT (I) *W1 *CINT) CONTINUE IF(PIMAGU.EQ.O.O) GO TO 332 PHIU=ATAN(-PREALU/PIMAGU) GO TO 333 PHIU=-3.1415926/2.0 CONTINUE  158  334  GAINU=0.0 DO 334 1=1,MAX GAINU=GAINU+U(MAX+1-I)*SIN(FLOAT(I) *W1*CINT-PHIU) CONTINUE GAINU=GAINU*2.0/FLOAT (MAX) IF(GAINU.GT.O.O)  335  GO TO 335  GAINU=-GAINU, PHIU=PHIU+3.1415926 CONTINUE HI=GAIN/GAINU TELTA1=PHI-PHIU IF(TELTAl.LT.O.O) TELTAl=TELTAl+6.28318 DEX=H1*SIN(Wl *CINT)/SIN(TELTA1-W1*CINT)  C  350  C 400  570  SUM=0.0 DO 350 1=1,MAX SUM=SUM+ (Y (MAX+1 -I) -GAIN*SIN (FLOAT (I) *W1 *CINT-PHI)) **2 SUM=SQRT(SUM/FLOAT(MAX)) IF(SUM.GE.VAR*GAIN) GO TO 580 IF(KC.GT.DEX) GO TO 570 NCON=0, NCOU=0 WRITE (2, *)SUM,H1, TELTA1, KC *DEX GO TO 600 CONTINUE TYPE *,'Basic stability equation does not fit .' GO TO 588 CONTINUE TYPE *, The process is not in steady state!' WRITE(3,589)SUM,HI,TELTA1,KC*DEX,GAIN,PHI FORMAT (IX, 6F12. 4) NCOU=NCOU-INT (MAX/2) NC0N=1 RETURN END 1  580  /  588 589 590 600  (2).  Subroutine  for f i l t e r  design  SUBROUTINE FIL (Al, A2,Bl,B0,KK, T,Q, Wl,FILTER, ALF, GAIN, PHA, TA, TB,KC,N) REAL GAIN (N) , PHA (N) ,PHI (2048) ,G (2048) REAL A1,A2,B1,B0, KK, T, Q, Wl, FILTER, ALF, KC IF(TA*KC/(TB).LE.1.0)  GO TO 900  10  IF (Wl.EQ.O.O) GO TO 100 IF (FILTER.NE.1.0) GO TO 100  C  CALCULATION OF BY-PASS FILTER  C  KK=T*W1/Q, A=-Wl/(2. 
0*Q) , B=0. 5*W1*SQRT (4. 0-1. 0/ (Q**2)) A1=2.0*EXP (A*T) *COS (B*T) , A2=EXP (2.0*A*T) B1=KK*EXP (A*T) * (COS (B*T) -A*SIN (B*T) /B) , BO=KK  159  ZR=(BO-B1*COS (W1*T)) , ZI=B1*SIN (W1*T) PR=1. O-Al *COS (Wl *T) +A2*C0S (2. 0*Wl *T) PI=A1*SIN(W1*T)-A2*SIN(2.0*W1*T) KK=SQRT(PR**2+PI**2)/SQRT(ZR**2+ZI**2) GO TO 102 c  100 C  CONTINUE CALCULATION OF LOW PASS FILTER  c  A2=0.0, Bl=0.0 Al=1.0-1.0/Q, B0=1.0/Q, KK=1.0 CONTINUE WS=3.1415926/T DO 120 1=2,N W=FL0AT(I-1) *WS/FLOAT (N) ZR=kk*(B0-B1*COS(W*T)), ZI=kk*(Bl*SIN(W*T)) PR=1.O-Al*COS(W*T)+A2*C0S(2.0*W*T) PI=A1*SIN(W*T) -A2*SIN(2.0*W*T) G (I) = (GAIN (I) /SQRT (PR**2+PI**2) )*SQRT (ZR**2+ZI**2) PH1=ATAN(ZI/ZR) IF(ZR.LT.O.O) PHl=PHl-3.1415926 PH2=ATAN (PI/PR) IF (PR.LT.0.0) PH2=PH2+3.1415926 PHI (I) =PHA (I) +PH1-PH2 CONTINUE f  102  120  DO 150 I=l N-l IF( ((PHI(I)+3.1415926)* CONTINUE r  150 160  (phi (i+1)+3.1415926)) .le. 0.0) GO TO 160  j=i  cel=abs(3.1415926+phi (j)), ce2=abs (3.1415926+phi (j+1)) ce=cel+ce2, cel=cel/ce, ce2=ce2/ce  FKC=abs (ce2*G (J) /COS (phi (j)) +cel *g(j+1) /cos (phi (j+1))) write (6, *) FILTER PARAMETERS:',fkc, j,Q IF(FKC.LT. (tb*alf/ta)) GO TO 200 0=1.2*Q GO TO 10 r  200 900  RETURN WRITE (6,*)'NO FILTER REQUIRED' A1=0.0, A2=0.0, B1=0.0, B0=1.0, KK=1.0 RETURN END  160  (3). This is just a compare subroutine. To show runner the theoretically calculated estimated value and compare with the estimated parameters. It is not necessary to be included in real controller design. C C  AS, BS are calculated  parameters.  c  C  SUBROUTINE CAL (CINT, W1,AS, BS) REAL CINT, W1,AS,BS, Al,A2,A3,Bl, B2,B3,PBS,PBC,PAS,PAC REAL CT, ST, E1,E15,E30, H, DELTA,ANA,ANB, C1W, C2W, C3N, S1W, S2W, S3W ======= ^==================================:=:=== CT=COS (2. 0 *CINT) , ST=SIN (2. 0 *CINT) E1=EXP (-CINT) , E15=EXP(-15.0*CINT) , E30=EXP(-30.0*CINT)  c  C  Al=El+2.0*E15*CT A2=-2.0*EXP(-16.0*CINT) *CT-E30 A3=EXP(-31. 
0*CINT) Bl=2.0-2.29*El+0.29*E15*(CT+3.5517*ST) B2=0.29*El-4.29* (1.0-E1)*E15*CT-1.03*(1.O+El) *E15*ST-0.29*E30 B3=(2.29-2. 0*E1) *E30-0.29*EXP (-16. 0*CINT) * (CT-3.5517*ST) ^==^^^=^^===^===^^^=^-^=.==^^=^^==== -. .==^:-:. IF(Wl.EQ.O.O) GO TO 19 ;  ;  C  C C  CALCULATION OF GAIN AND PHASE ====== = C1W=C0S (Wl*CINT) , C2W=C0S (2. 0*W1 *CINT) , C3W=C0S (3. 0*W1 *CINT) S1W=SIN(W1*CINT) , S2W=SIN(2.0*W1*CINT) , S3W=SIN (3. 0*W1*CINT)  c  c  PBC=B1+B2*C1 W+B3*C2W, PBS=B2*S1 W+B3*S2W PAC=1. O-Al *C1 W-A2*C2W-A3*C3W, PAS=A1 *S1 W+A2*S2W+A3 *S3W _  C  H=SQRT((PBC**2+PBS**2) / (PAC**2+PAS**2)) ANB=ATAN (PBS/PBC) IF(PBC.LT.0.0) ANB=ANB+3.1415926535 ANA=ATAN (PAS/PAC) IF(PAC.LT.0.0) ANA=ANA+3.1415926535 DELTA=ANA+ANB =========================================================  19  20 C  AS=SIN (DELTA) /SIN (DELTA+W1 *CINT) BS=H*SIN (Wl *CINT) /SIN (DELTA+Wl *CINT) GO TO 20 H=(Bl+B2+B3)/ (1.O-Al-A2-A3) DELTA= (Bl +2.0*B2)/ (Bl +B2+B3) + (Al+2.0*A2+3. 0*A3) /(1. O-Al-A2-A3) AS=DELTA/ (1.0+DELTA) , BS=H/ (1. O+DELTA) CONTINUE +++++++++++++++++++++++++++++++++++++++++ RETURN END  161  (4). Parameter  estimation  subroutine.  C Here is an example of 2-parameter estimation subroutine. For the C higher order estimation, just change the definition in the begining. 
C  Q*********************************************************************  SUBROUTINE SQRTFL(Y,X,SQRTP,K,THETA,NPAR,PERR,FORGET) REAL X (2) , SQRTP (2, 2) ,K(2) , THETA (2) REAL F (2) , SIGSQ (3) , SIG (3) , H (2, 2) SQRTPN (2,2) REAL Y,PERR,B r  C C C C  Calculation =  of estimation  error -=  C  1 C C C c  PERR=Y DO 1 1=1,NPAR PERR=PERR-THETA (I) *X (I)  Estimation  Calculation  =====^=^= ========== =  C  100 C  DO 100 1=1, NPAR, DO 100 J=1,I , CONTINUE  F(I)=0.0 F(I)=F(I)+SQRTP(J,I)*X(J)  c  C  120  130  SIGSQ(1)=FORGET DO 120 1=1,NPAR, SIGSQ(1+1)=SIG$Q (I)+F (I) *F(I) CONTINUE N1=NPAR+1 DO 130 1=1, Nl, SIG (I) =SQRT (SIGSQ (I)) CONTINUE  c  1 70  180 190  B=FORGET DO 170 1=1,NPAR B=B+F (I) *F (I) DO 190 1=1, NPAR, K(I)=0.0 DO 180 J=I,NPAR K(I)=K(I) +SQRTP (I, J) *F (J) /B CONTINUE  c  135 140  DO 140 1=1,NPAR DO 135 J=1,NPAR IF (I.EQ.J) THEN H (I, J) =SIG (I) / (SIG (1+1)) ELSE IF (I.LT.J) THEN H (I, J) =-F (I) *F (J) / (SIG (J) *SIG (J+1)) ELSE H(I,J)=0.0 END IF CONTINUE CONTINUE  162  142 145 150  155 160 C C c  DO 150 1=1,NPAR DO 145 J=1,NPAR, SQRTPN(I,J)=0.0 DO 142 KK=1,NPAR SQRTPN (I, J) =SQRTPN (I, J) +SQRTP (I, KK) *H (KK, J) CONTINUE CONTINUE CONTINUE DO 160 1=1, NPAR DO 155 J=1,NPAR SQRTP (I, J) =SQRTPN (I, J) /SQRT (FORGET) CONTINUE Calculation  =====  of new estimated  =====  parameters =====  C 200  (5).  
      DO 200 I=1,NPAR
  200 THETA(I)=THETA(I)+PERR*K(I)
      RETURN
      END

(5). FFT subroutine:

      SUBROUTINE FFT(DATA,NN,ISIGN)
      DIMENSION DATA(2048)
      N=2*NN
      J=1
      DO 5 I=1,N,2
      IF(I-J) 1,2,2
    1 TEMPR=DATA(J)
      TEMPI=DATA(J+1)
      DATA(J)=DATA(I)
      DATA(J+1)=DATA(I+1)
      DATA(I)=TEMPR
      DATA(I+1)=TEMPI
    2 M=N/2
    3 IF(J-M) 5,5,4
    4 J=J-M
      M=M/2
      IF(M-2) 5,3,3
    5 J=J+M
      MMAX=2
    6 IF(MMAX-N) 7,9,9
    7 ISTEP=2*MMAX
      DO 8 M=1,MMAX,2
      THETA=3.1415926535*FLOAT(ISIGN*(M-1))/FLOAT(MMAX)
      WR=COS(THETA)
      WI=SIN(THETA)
      DO 8 I=M,N,ISTEP
      J=I+MMAX
      TEMPR=WR*DATA(J)-WI*DATA(J+1)
      TEMPI=WR*DATA(J+1)+WI*DATA(J)
      DATA(J)=DATA(I)-TEMPR
      DATA(J+1)=DATA(I+1)-TEMPI
      DATA(I)=DATA(I)+TEMPR
    8 DATA(I+1)=DATA(I+1)+TEMPI
      MMAX=ISTEP
      GO TO 6
    9 RETURN
      END
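The main program builds its frequency-domain estimate by dividing the FFT of the output record by the FFT of the input record, bin by bin. The same computation can be sketched compactly in modern terms; the first-order test plant and the impulse excitation below are illustrative choices of ours, not from the thesis listing:

```python
import numpy as np

def etfe(u, y):
    """Empirical transfer function estimate H = Y/U from one i/o record."""
    U = np.fft.rfft(u)
    Y = np.fft.rfft(y)
    H = Y / U
    return np.abs(H), np.angle(H)    # gain and phase per frequency bin

# Example: first-order discrete plant y[t] = 0.9 y[t-1] + 0.1 u[t-1],
# excited by a unit impulse so the record is a clean impulse response.
N = 1024
u = np.zeros(N); u[0] = 1.0
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.9 * y[t - 1] + 0.1 * u[t - 1]
gain, phase = etfe(u, y)
print(round(gain[0], 3))   # d.c. gain: 0.1/(1 - 0.9) = 1.0
```

With a general (non-impulse) excitation, the same division gives the raw estimate whose accuracy depends on how well each bin is excited, which is why the thesis program removes the steady-state sinusoid from the records before transforming them.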