UBC Theses and Dissertations

Linear systems identification and optimization with application to adaptive control (1972)

Full Text

LINEAR SYSTEMS IDENTIFICATION AND OPTIMIZATION WITH APPLICATION TO ADAPTIVE CONTROL ADEL ABDEL-RAOUF HANAFY B. S c , U n i v e r s i t y of Cairo, Cairo, Egypt, 1965 A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY i n the Department of E l e c t r i c a l Engineering We accept t h i s thesis as conforming to the required standards THE UNIVERSITY OF BRITISH COLUMBIA July, 1972 In p r e s e n t i n g t h i s t h e s i s in p a r t i a l f u l f i l m e n t o f the requi rements f o r an advanced degree at the U n i v e r s i t y of B r i t i s h Columbia, I agree that the L i b r a r y shal1 make i t f r e e l y a v a i l a b l e f o r re ference and s tudy . I f u r t h e r agree t h a t permiss ion fo r e x t e n s i v e copying of t h i s t h e s i s f o r s c h o l a r l y purposes may be granted by the Head of my Department or by h i s r e p r e s e n t a t i v e s . It i s understood that copy ing o r p u b l i c a t i o n o f t h i s t h e s i s f o r f i n a n c i a l ga in s h a l l not be a l lowed wi thout my w r i t t e n p e r m i s s i o n . Depa rtment The U n i v e r s i t y o f B r i t i s h Columbia Vancouver 8, Canada Date 26-1-7 Z ABSTRACT This t h e s i s i s concerned with the problem of i d e n t i f y i n g and c o n t r o l l i n g l i n e a r continuous systems. Algorithms which are f e a s i b l e f o r real-time d i g i t a l computations are developed f o r the design of both the c o n t r o l l e r and the i d e n t i f i e r . The generalized equation e r r o r i s shown to be a p p l i c a b l e to a mean-square method of rapid d i g i t a l estimation of l i n e a r system para- meters. Due to the imposed structure of the estimator the manipulation of high order matrices i s avoided. Examples i l l u s t r a t e the e f f e c t i v e - ness of the estimator f o r a v a r i e t y of cases dealing with quantization noise as w e l l as measurements noise. In some cases, t h i s d i g i t a l i d e n t i - f i e r requires the computation of the generalized inverse of a matrix. A simple algorithm f o r computing the generalized inverse of a matrix i s developed. This algorithm eliminates the need f o r Gram-Schmidt ortho- g o n a l i z a t i o n and i t s associated ( i n the i n t e r e s t of accuracy) reortho- g o n a l i z a t i o n as new vectors are introduced. A two-stage estimator i s developed f o r estimating time-invariant and time-varying parameters i n l i n e a r systems. During the second stage, the time-varying parameters are considered as unknown control inputs to a l i n e a r subsystem of known dynamics. For the f i r s t stage, the d i g i t a l i d e n t i f i e r i s shown to be e f f e c t i v e i n i d e n t i f y i n g the time-invariant parameters. Numerous examples i l l u s t r a t e the effectiveness of the method. To design a feedback c o n t r o l l e r a method of successive approxi- mations f o r s o l v i n g the two point boundary value problem f o r optimum constant gain matrices i s developed. The method i s shown to be compta- t i o n a l l y equivalent to a deflected gradient method. Convergence can i i always be achieved by choice of a s c a l a r step-size parameter. An on- l i n e approximate method i s developed which appears s u i t a b l e f o r systems whose parameters must be i d e n t i f i e d . The two point boundary value problem i s replaced by an a l g e b r a i c problem, the s o l u t i o n of which gives a sub- optimal constant gains. 
The problem of t r a j e c t o r y s e n s i t i v i t y reduction by augmented state feedback i s shown to be well-posed i f constrained structure of the gain matrices i s taken. The simple structure of constant gain matrices i s considered. Based on the developed i d e n t i f i e r and c o n t r o l l e r , one p o s s i b l e strategy f o r optimal adaptive c o n t r o l which i s p a r t i c u l a r l y a t t r a c t i v e from an engineering point of view i s studied. The strategy i s to i d e n t i f y system parameters at the end of an observation i n t e r v a l and then to use the parameters to derive an "optimal" c o n t r o l f o r a sub- sequent c o n t r o l i n t e r v a l . Two examples are considered i n order to i l l u s t r a t e the e f f e c t i v e n e s s of t h i s strategy. i i i TABLE OF CONTENTS Page ABSTRACT i i TABLE OF CONTENTS i v LIST OF TABLES v i LIST OF FIGURES v i i ACKNOWLEDGEMENT ix 1. INTRODUCTION 1 2. RAPID DIGITAL IDENTIFICATION OF LINEAR SYSTEMS 2.1 Introduction 8 2.2 Problem Formulation 9 2.3 Recursive and Block Data Processing 14 2.4 Measurement Requirements 17 2.5 Applications 18 2.6 Comparison with a Limited Memory F i l t e r 27 2.7 Some St a t i s t i c a l Properties of the Digital Identifier... 30 2.7.1 Correlation of Noise Samples 30 2.7.2 The Error Covariance Matrix 36 2.7.3 Estimation of the Confidence Interval Length 37 2.8 Discussion 42 3. RECURSIVE COMPUTATION OF THE GENERALIZED INVERSE OF A MATRIX. 3.1 Introduction 44 3.2 Properties of the Pseudo-Inverse 44 3.3 Recursive Evaluation of R̂  46 3.4 Recursive Evaluation of A 1 49 3.5 Examples 50 4. TWO-STAGE ESTIMATION OF TIME-INVARIANT AND TIME-VARYING PARAMETERS IN LINEAR SYSTEMS 4.1 Introduction.. 56 4.2 Problem Formulation 57 4.3 Estimation of A v. Algorithm 1 59 4.4 Estimation of a Periodic \ Algorithm II 62 4.5 Estimation of a Periodic A v. Algorithm III 65 4.6 Two-Stage Estimation of Af and A v 67 4.7 Examples 69 4.7.1 Example 1 70 4.7.2 Example 2 71 Page 4.7.3 Example 3 71 4.7.4 Example 4 72 4.7.5 Example 5 73 4.7.6 Example 6 73 5. OPTIMUM CONSTANT FEEDBACK GAINS FOR LINEAR SYSTEMS AND THE PROBLEM OF TRAJECTORY SENSITIVITY REDUCTION 5.1 Introduction 76 5.2 Problem Formulation 78 5.3 U n i f i e d Treatment of D i f f e r e n t Formulations 82 5.4 Gradient Descent and Successive Approximations 84 5.5 Optimum Constant Gain: Trajectory S e n s i t i v i t y Reduction. 88 5.6 On-Line Computation of Gains 90 5.7 Examples 93 6. OPTIMAL ADAPTIVE CONTROL 6.1 Introduction 102 6.2 Optimum Control During the F i r s t and Subsequent Optimization Intervals 102 6.3 Examples 104 6.3.2 Example 1 105 6.3.3 Example 2 107 6.3.4 Discussion . 110 7. CONCLUSIONS 116 APPENDIX I 119 APPENDIX I I .' . 122 APPENDIX III.- 125 APPENDIX IV 127 APPENDIX V 130 REFERENCES • 132 v LIST OF TABLES Pagi Table (4.1) 75 Table (4.2) 75 Table (4.3) 75 Table (4.4) 75 LIST OF FIGURES Figure Page 1.1 The Proposed General Optimal Adaptive System 2 2.1 Example 1. Estimation of a^ with Quantization Noise 21 2.2 Example 1. Estimation of with Quantization Noise 21 2.3 Example 1. Estimation of a^ with Quantization and Unbiased Measurement Noise 22 2.4 Example 1. Estimation of a^ with Quantization and Unbiased Measurement Noise 22 2.5 Example 1. Estimation of a-̂  with Quantization and Biased Measurement Noise 23 2.6 Example 1. Estimation of a^ with Quantization and Biased Measurement Noise 23 2.7 Example 2. Estimation of a^ with Quantization noise 24 2.8 Example 2. Estimation of a^ with Quantization Noise 24 2.9 Example 2... 
Estimation of. b^ with. Quantization. Noise.. 25. 2.10 Example 2. Estimation of a^ with Quantization and Unbiased Measurement Noise 25 2.11 Example 2. Estimation of a^ with Quantization and Unbiased Measurement Noise 26 2.12 Example 2. Estimation of b^ with Quantization and Unbiased Measurement Noise 26 2.13 Example 2. Estimation of a., with Quantization and Unbiased Measurement Noise Using a Standard Limited Memory F i l t e r . . . 28 2.14 Example 2. Estimation of a~ with Quantization and'Unbiased' Measurement Noise Using a Standard Limited Memory F i l t e r . . . 28 2.15 Example 2. Estimation of b 1 with Quantization and Unbiased Measurement Noise Using a Standard Limited Memory F i l t e r . . . 29 4.1 The Recursive Observation-Estimation Intervals 64 5.1 Example 1. Control as a Function of Time For: (a) Lee's Method. (b) Constant Gain and Augmented States. (c) Constant Gain and Non-Augmented States 98 5.2 Example 1. Trajectory Senstivity Function z^ For: (a) Lee's Method. (b) Constant Gain and Augmented States, (c) Constant Gain and Non-Augmented States 99 Figure Page 5.3 Example 1. Trajectory Sensitivity Function z.^ for: (a) Lee's Method. (b) Constant Gain and Augmented States. (c) Constant Gain and Non-Augmented States 100 5.4 Example 2. Block Diagram of a Sychronous Generator Voltage Control System 101 6.1-a The I n i t i a l i z a t i o n Interval and the Operation Period 103 6.1-b The Successive Observation Intervals 103 6.1-c The Successive Optimization Intervals (X, = r) 103 6.1-d The Successive Optimization Intervals (I = 2r) 103 6.2 Example 1. Identification of Parameter a^ with Quantization., Noise 112 6.3 Example 1. Identification of Parameter a^ with Quantization Noise 112 6.4 Example 1. The Optimum Piecewise Constant Feedback Gain K̂ ... 113 6.5 Example 1. The Optimum Piecewise Constant Feedback Gain K̂ ... 113 6.6 Example 2. Identification of Parameter a with Quantization Noise 114 6.7 Example 2. Identification of Parameter a^ with Quantization Noise 114 6.8 Example 2. Identification of Parameter a^ with Quantization Noise 114 6.9 Example 2. The Optimum Piecewise Constant Feedback Gain K̂ ... 115 6.10 Example 2. The Optimum Piecewise Constant Feedback Gain l ^ . . . 6.11 Example 2. The- Optimum Piecewise Constant Feedback Gain K,.-..- 115 v i i i ACKNOWLEDGEMENT I am indebted to my supervisor, Dr. E. V. Bohn, for his continuous guidance and assistance throughout the period of this research. Thanks are due to Dr. V. G. Haussmann for his valuable comments on the draft. Reading the draft by Dr. M. S. Davies and Dr. G. F. Schrack is duely appreciated. 1 also wish to thank Miss Linda Morris for her patience i n typing the manuscript. The financial support by the National Research Council of Canada, grant 67-3134 (1968-1971), the Scholarship by the B. C. Hydro Company (1970), and the U. B. C. Graduate Scholarship (1971-1972) are appreciated. ix 1 1. INTRODUCTION An optimal adaptive control i s a control that w i l l cause a plant to operate in an "optimum" manner without a Priori knowledge about the constraints on either the magnitude or the rate of variation of the plant uncertainties. These uncertainties may be in the plant parameters, i n i t i a l conditions, and/or noisy measurements. One possible mathematical design approach for an adaptive control i s to use s t a t i s t i c a l methods. Plant uncertainties are then described by probability density functions. 
Such statistical methods are generally not computationally feasible even when a priori knowledge of the statistics of the plant uncertainties exists. In most practical situations, simplifying assumptions must be introduced and these detract considerably from the usefulness of a mathematically precise statistical approach. An alternative approach is the optimally sensitive control [1] where no a priori knowledge of statistical parameters is required. However, the range of plant parameter and/or initial condition variations is restricted. This thesis is concerned with the development of an optimally sensitive feedback control which does not require detailed statistical data for its realization. However, in order to increase the range of optimal control action, parameter estimation is used. Algorithms which are feasible for real-time digital computations are developed for the design of both the controller and the identifier. Figure (1.1) illustrates the general structure of the optimal adaptive control system which has been developed in this thesis. [FIG. (1.1) THE PROPOSED GENERAL OPTIMAL ADAPTIVE SYSTEM: block diagram showing the least-squares initial-condition estimator, filter, digital identifier, composite identifier, adjustable model, sensitivity model, feedback gains and plant.] The blocks drawn in dashed lines represent elements which may not be required in many practical applications. This optimal adaptive control system can generally be divided into two main parts: the estimator and the controller. A detailed discussion of the components which comprise these two parts, together with the thesis layout, is considered in the following two sections. 1.1 The Estimator The estimator is used to obtain a "best" estimate for the unknown plant parameters and initial conditions. It is also used to obtain a "best" estimate for the noisy and/or the unmeasured plant state variables. These estimates are needed in order to construct a reasonably accurate deterministic linear model of the plant. A deterministic linear model is necessary if use is to be made of the powerful tools that have been developed from optimal control theory. Linear continuous models are considered in this thesis. The choice of linear models is made because of their relative simplicity from the instrumentation and computation point of view. This simplicity is largely due to the high degree of development of the theory of linear systems. Even when the plant is nonlinear, however, the use of quasilinearization techniques can result in a linearized model of the plant. Over a fixed period of time, such a linear model is often adequate to describe the dynamic behaviour of a nonlinear plant. The estimator is composed of four major elements (see Figure (1.1)). These four elements are: a - Initial Conditions Estimator This estimator is a standard linear least-squares estimator which minimizes a quadratic error function based on a system of linear algebraic equations. It is necessary to obtain a good estimate of the plant initial conditions since the values of the controller constant feedback gains depend on initial conditions. This estimator is used whenever the plant states are contaminated by noise and/or some of these states are unmeasurable. A linear least-squares estimator has been chosen due to its simplicity. Also, no a priori knowledge of noise statistics is needed.
b - F i l t e r The f i l t e r gives a continuous estimation of the plant states. If some of the states are unmeasurable then a Luenberger observer i s a good choice to reconstruct the unavailable states. On the other hand a Kalman f i l t e r can be used for best estimation of the noisy states. c - Digital Identifier A continuous-time parameter identification method that has been extensively investigated i s the parameter-tracking technique [2,3,4], This is based on a steepest descent parameter adjustment. These parameter-tracking identifiers have been largely developed for use with analog computers. Practical on-line identification, however, requires the use of d i g i t a l computers and the discrete version of parameter-tracking introduces several d i f f i c u l t i e s . The optimum gain or optimum step-size depends on the input signals and i s d i f f i c u l t to determine on-line [5]. In Chapter 2 a d i g i t a l identifier i s developed which gives rapid and accurate estimation of the unknown, time-invariant parameters of the plant. The identifier uses data obtained from augmented input controls and output states which results in parallel data processing. 5 A discretized version of the continuous augmented model i s used. Digital simulation results have shown that the identifier developed in this thesis has very good rates of convergence. This d i g i t a l identifier requires the computation of a matrix inverse which may not exist in some cases. In such cases a generalized-inverse of a matrix is required. A number 7 of methods have been proposed for finding the generalized inverse of a matrix [6, 7, 8], These methods generally consist of some combination of a Gram-Schmidt orthogonalization process, the expansion of products of matrices, and the solution of sets of linear algebraic equations. In Chapter 3 an algorithm i s developed which replaces these operations and computes- the generalized inverse from simple recursive formulas. Inherent in the method is a simple check on the linear inde- pendency of the column vectors of the matrix. d - Composite Identifier A two-stage estimator i s developed in Chapter 4. In the f i r s t stage the d i g i t a l i d e n t i f i e r i s used to obtain an estimate for the time-invariant parameters. In the second stage the unknown time-varying parameters are considered as control inputs and a linear state regulator quadratic cost function dynamic optimization problem is formulated. The solution of the associated two-point boundary-value problem for the optimum control" results in an estimate for the time-varying parameters. This two-stage identification algorithm is suitable for the cases where only a few of the unknown parameters of the plant are time-varying. The use of the complicated techniques developed for the identification of time-varying parameters is not j u s t i f i e d for such cases. The usual approach taken to identify time-varying parameters is to consider them as unknown augmented states described by differential equations of known form and then use some form of state estimation [9]. Alternatively, integral transforms have been used [10] to extend the capability of parameter- tracking system to track time-varying parameters. The implementation d i f f i c u l t i e s for such techniques make their use questionable i f not completely unpractical. 1.2 The Controller In Chapter 5 the design of a controller with optimum constant feedback gains is discussed. 
A method of successive approximations is developed and i t is shown to be equivalent to a deflected gradient des- cent approach with a variable weighting matrix. This property i s utilized to develop an algorithm for off- l i n e design studies which results in rapid convergence. An approximate method i s developed to replace the nonlinear two-point boundary-value problem by an algebraic optimization problem. This method appears to be suitable for on-line computation of suboptimum constant feedback gains (see e in Figure (1.1)). Trajectory sensitivity function- minimization i s an important requirement... The i n - troduction of the sensitivity functions in the feedback structure of the adaptive controller makes i t possible to up-date the constant feedback gains less frequently and makes the plant controller less sensitive to modelling errors. The sensitivity functions are generated by the sensi- t i v i t y model shown by f in Figure (1.1). The parameters of the sensitivity model are up-dated each time a new estimate of the unknown plant parameters is made. In Chapter 6 a recursive observation-optimization strategy i s described. During a f i n i t e observation Interval, measurements of the state are taken and used to determine estimates of the plant parameters. The linear model parameters are adjusted according to these new estimates. The linear model is used to compute a control strategy which is applied in a subsequent control interval. This scheme seems particularly attractive for adaptive systems. In summary, the objective of this thesis i s to investigate practical methods for designing an optimal adaptive control system based on a reasonably large domain of plant uncertainties. Emphasis i s placed on techniques which can be implemented on real-time d i g i t a l process control computers. To reduce computational requirements, the control strategy should not require frequent up-dating. This objective has been accomplished by developing a rapid d i g i t a l identifier for continuous- time systems and by developing simple algorithms for the computation of optimally sensitive feedback gains. 8 2. RAPID DIGITAL IDENTIFICATION OF LINEAR SYSTEMS 2.1 Introduction D ig i ta l control of adaptive systems is feasible only i f parameter ident i f icat ion and evaluation of control laws can be per- formed in real time. To reduce the complexity of the computations, simplifying assumptions must be made. One feasible method i s to identify the system as an approximate l inear system within a given time interval and then use a l inear optimal control [5,11], For such a control policy to be effect ive in an adaptive system the ident i f icat ion should be rapid. A continuous-time parameter ident i f icat ion method that has been extensively investigated is the parameter tracking technique [2,3,4]. This is based on a steepest descent parameter adjustment of a quadratic error function or a Liapunov function. The error can be taken to be a response or a generalized equation error. The generalized error approach is attractive since i t results in an exact continuous-time steepest descent adjustment. Furthermore, by use of state variable f i l t e r s , i t allows considerable f l e x i b i l i t y in the generation of additional input-output signals and additional- equation errors. These signals play an important:- role- in the estimator that w i l l be discussed in this Chapter. They are considered to be the time response of augmented input controls and output states. 
These states can be chosen so that the quadratic error function has a unique minimum in parameter space which then allows very rapid ident i - f i ca t ion . Pract ica l on- l ine ident i f i cat ion , however, requires the use of d ig i ta l computers and the discrete version of parameter tracking introduces several d i f f i c u l t i e s . 9 The optimum gain or step-size depends on the input signals and i s d i f f i c u l t to determine on-line [5]. Furthermore, a steepest descent search for a minimum does not generally result in rapid identi- fication. A popular method for on-line identification by d i g i t a l computers i s the mean-square technique. In the linear system case a l l state variables must be measured and the estimated parameters are biased. Furthermore, i f rapid identification i s attempted, d i f f i c u l t i e s can arise with ill-conditioned matrices. A d i g i t a l estimation method i s developed here which can overcome these d i f f i c u l t i e s by use of augmented input controls and output states. Due to the imposed structure of the estimator the manipulation of high-order matrices, which i s generally the case in the mean-square approach, i s avoided. 2.2 Problem Formulation Consider a completely observable and controllable continuous- time linear system x = Ax + Bu + w (2.1) where x is an n-state vector, u i s an m-control vector and w i s additive plant noise. Let T be the sampling interval and let t Q , t^, t^, be a set of sampling instants. For the moment, to ease the notational burden, let z^(k) = z^(t^) be the sampled measurement at t^ where z ^ t ) = x(t) + v(t) (2.2) and where v(t) i s additive measurement noise. (The estimator to be developed does not require that a l l states be measured as implied by (2.2). The general measurement requirements are discussed in Section (2.4) Let Z i + l ( t ) - fo\^d'> U £ + l ( t ) = i \ ™ * > Wfc) = l \ ^ d ^ <2'3> The vectors z^, u^, (£ = 2), are defined as augmented states and aug- mented controls, respectively. It i s seen from (2.1), (2.2) and (2.3) that z„ = Az„ + Bu„ + w. % I I I (2.4) A A • where u^ = u, ŵ  = v - Av + w. Integrating (2.4) over the interval t k = x = t k + 1 and letting Z £(k) = z £(t k>, U £(k) - u £ ( t k ) , yields Z£(k+1) = AZ^(k) + BU£(k) + N £(k) (2.5) where A = I + TA, B = TB and V k ) " [ k + 1t 5( z £< T) " z £ < t k ) ) + B ( u J l ( T ) " u £ ( t k } ) + V T ) ] d T ( 2' 6 ) k The discrete set of equations forms the basis for a l l further discussion. To ease the notational burden a l l n(n+m) elements of A and B are consi- dered unknown. The problem is to estimate A and B from a set of measured Z^(j)» U-j^(j) and computed (from (2.3)) augmented states and controls. Let V k )- - z 4(k) A c = b' n (2...Z) where prime denotes transpose and where a^ and b^ are the j-th n and m row vectors of A and B respectively. These rows are ordered into an 2 (n + nm) vector c as shown. Equation (2.5) can then be written as z £(k+i) = e £ (k)c + N 4(k) (2.8) where 0 Yj(k) 0 •Yi(k) ( Y l ( k ) ) B n (2.9) (Due to the frequent occurrence of matrices of block-diagonal form i t i s convenient to introduce a special notation. The f i r s t subscript B is used to indicate that a block diagonal matrix is to be formed from the vector or matrix within the bracket. The second subscript n indicates 2 the number of blocks. Note that 9 (k) is an nx(n + nm) matrix). 
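As an illustration of (2.3), (2.8) and (2.9), the following sketch (Python/NumPy) generates the augmented states and controls by cumulative trapezoidal integration of uniformly sampled z1(t) and u(t) and assembles the block-diagonal matrix theta_l(k). The integration rule, array layout and function names are illustrative assumptions and not part of the development in the text.

```python
import numpy as np

def augmented_signals(z1, u, T, n_levels):
    """Approximate the augmented states and controls of (2.3) by repeated
    cumulative trapezoidal integration of the sampled z1(t) and u(t).
    z1: (K, n) sampled states, u: (K, m) sampled controls, T: sampling interval.
    Returns lists Z, U with Z[l-1][k] ~ z_l(t_k) and U[l-1][k] ~ u_l(t_k)."""
    def integrate(sig):
        out = np.zeros_like(sig)
        out[1:] = np.cumsum(0.5 * T * (sig[1:] + sig[:-1]), axis=0)
        return out
    Z, U = [np.asarray(z1, float)], [np.asarray(u, float)]
    for _ in range(1, n_levels):
        Z.append(integrate(Z[-1]))
        U.append(integrate(U[-1]))
    return Z, U

def theta_block(Z_l_k, U_l_k, n):
    """theta_l(k) of (2.9): n identical copies of the row vector
    Y_l(k)' = [Z_l(k)', U_l(k)'] placed block-diagonally, size n x n(n+m)."""
    y = np.concatenate([Z_l_k, U_l_k])[None, :]   # Y_l(k)' as a 1 x (n+m) row
    return np.kron(np.eye(n), y)
```

With this layout, the j-th row of Z_l(k+1) = theta_l(k)c pairs the j-th rows of A-bar and B-bar with the same data vector Y_l(k), which is the structure exploited throughout the chapter.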
JO Consider the sampling instants k-N+1, k, so that N blocks of data are available and let Z £ N̂ Z(k) Z (k-N+1) J Tc + N(k-l) N(k-N) (2.10) where N e(k-i) K • , 6(k) = , Z(k) e(k-N) n+m mm " z x(k) Zn+m<k> (2.11) The matrix 9(k) is n(n+m) x n(n+m) and the vector Z(k) is n(n+m) x 1. The matrix 6̂  i s of dimension nN(n+m) x n(n+m) as can be seen from (2.11). It i s seen from (2.7), (2.9) and (2.11) that the measured and computed augmented states and controls are used to form the matrix 8(k) and the vector Z(k) at the k-th sampling instant. These quantitites are evaluated for N sampling instants and are used to form the matrix 9̂  and the vector Z^. The problem is to determine an estimate of the parameter vector c given 9̂  and Z^, where these quantities are related by (2.10). The simplest approach is to define the optimum estimate to be the estimate which minimizes the quadratic form (Z^ - 8 ^ 0 )'(Z^ - Q^c). The solution i s the minimum mean-square (also called least-squares) estimate [6] c = PM6*Z.T N N N where h1 = < w (2.12) (2.13) Since the basic structure of a l l linear mean-square estimators i s the same i t i s of interest to compare the estimator (2.12) with pre- viously proposed estimators [5, 12-16]. Mayne has used the Kalman f i l t e r approach. This requires that (2.10) be interpreted as a measurement equation with 6̂  as a deterministic measurement matrix. This i s the case only i f there is no measurement noise and i f a l l states are measured. A further and very significant difference is in the definition of 0̂  and N(k). These investigators take 0 0 0 9 1(k-l) 01(k-n-m) , N(k) Nx(k-1) N (k-n-m) (2.14) (References [2, 13-15], furthermore, treat only the scalar measurement case. Their results do not directly apply to the above case). Since (see (2.5)) E[Z 1(j+l)N^(j)] = E[N 1(j)Nj L(j)] •* 0 (2.15) i t follows that the estimate i s biased. This has been pointed out pre- viously [13]. In order to eliminate the bias, sufficient time must elapse between samples. This can make rapid identification d i f f i c u l t . Even when the bias is small enough to give acceptable identification, further d i f f i c u l t i e s can arise with data in the form [2.14]. If a small sampling interval i s chosen to achieve rapid identification there may not be a significant difference between adjacent rows of 6(k) resulting in an ill-conditioned matrix. These d i f f i c u l t i e s tend to be avoided i f data in the form (2.10) i s used. Some insight into the estimation bias associated with (2.12) can be obtained by considering the noise samples N^(j), N^(k), (p = 1, ..., n + m; j ̂  k) , as independent. It follows from this assumption and (2.5) that E[Z 0(k)N'(k)] = 0, (Z, p = 1, .... n + m) (2.16) Consequently 6.. at sampling time t, i s independent of N (k) and the estimate _L ic p (2.12) i s unbiased when N = 1. From (2.6) i t i s seen that quantization noise is an important component of N^(k) making an analytic investigation into the dependency of the noise samples intractable. Some general observations can, however, be made. Successive integrations performed on w (T) not only reduce the effective amplitude of the noise but also reduces the correlation between noise samples N^(k), (£ ~ 1 , ..., n + m). The truth of this statement for a certain class of noise samples i s shown in Section (2.7). Consequently, for a given k, the quantization noise terms of (2.6) for different I tend to be uncorrelated. 
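Continuing the sketch above, the data blocks (2.10)-(2.11) can be stacked and the mean-square estimate (2.12) computed directly. A least-squares solve returns the same result as (2.12) in the observable case and the minimum-norm (generalized-inverse) solution otherwise, which anticipates (2.59); the indexing convention and names are again illustrative assumptions.

```python
import numpy as np

def block_ls_estimate(Z, U, n, m, k, N):
    """Stack theta_N and Z_N of (2.10)-(2.11) from the n+m augmented models
    over the sampling instants k-N, ..., k-1 and return the estimate of (2.12).
    Z, U: outputs of augmented_signals(...) with n_levels = n + m."""
    rows_theta, rows_z = [], []
    for j in range(k - N, k):                 # N data blocks
        for l in range(n + m):                # augmented models l = 1, ..., n+m
            y = np.concatenate([Z[l][j], U[l][j]])[None, :]   # Y_l(j)'
            rows_theta.append(np.kron(np.eye(n), y))          # theta_l(j)
            rows_z.append(Z[l][j + 1])        # left-hand side of (2.8)
    theta_N = np.vstack(rows_theta)           # nN(n+m) x n(n+m)
    Z_N = np.concatenate(rows_z)              # nN(n+m) vector
    c_hat, *_ = np.linalg.lstsq(theta_N, Z_N, rcond=None)
    AB = c_hat.reshape(n, n + m)              # row j = (a_j', b_j') of (A_bar, B_bar)
    return c_hat, AB
```

The continuous-time matrices then follow from the discretization (2.5), A-bar = I + TA and B-bar = TB, so that A is recovered as (A_bar - I)/T and B as B_bar/T.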
Con- sequently, l i t t l e i f any d i f f i c u l t y i s to be expected with estimation bias. A l l examples tried verified this expectation. The essential difference between the two estimators can be seen from (2.15) and (2.16). If £ = 1, sequential data processing must be used. If T is chosen large to overcome ill-conditioning then quantization noise can become excessive. The combination of heavy noise and estimation bias (2.15) can result in failure of the conventional estimator to give acceptable estimation. These d i f f i c u l t i e s tend to be avoided i f augmented states and augmented controls are chosen (£ = n + m) so that par a l l e l data processing can be used. Heuristic engineering justifications can also be given for the use of augmented states and controls in linear system parameter i d e n t i f i - cation. If combined parameter estimation and optimal control i s attempted signal-to-noise problems can arise [12], Optimal control attempts to keep state deviations from a zero reference state small which decreases the signal-to-noise ratio. Integrators are low-pass f i l t e r s which improve signal-to-noise ratio. The data used i s then less noisy. Such a situa- tion has been reported by Bakke and he suggests using a hybrid i d e n t i f i - cation method [16]. Basically his hybrid method i s to use data in the form (2.14) and then replace Y^, 6^ by X^y 82* Two different estimates are taken and a decision i s then, made on the best estimate. However, a f u l l set of augmented states i s not considered, nor i s the data combined in a mean-square sense. 2.3 Recursive and Block Data Processing The estimator (2.12) operates in a block data processing mode. A new estimate is made after every N additional data blocks. Other processing modes can be used such as the recursive method suggested by Kishi where old data blocks are deleted and new data blocks introduced. Let, 9N-R ZN-R - N > - X T (2.17) - -  ZN-R . z = ' N 9 z R R It follows from (2.13) that PN' = PN-R + 9R9R (2.18) Suppose that a l l quantities in (2.12) are known and i t i s desired to determine ^ from these quantities (This amount to determining the best estimate when R data blocks are deleted). Making use of (2.18) i t follows from ( 2 . 1 2 ) that 9N-RZN-R - ^ " W -N-R ( 2 " 1 9 ) By partitioning ( 2 . 1 2 ) in the manner indicated by ( 2 . 1 7 ) , i t is seen that ° N = W R + PN6N-RZN-R ( 2 ' 2 0 ) substituting (2.19) into (2.20) yields C V R " ( I " V R V ^ N " V R V ( 2'2 1> With the aid of the identities <X " PN 6R 6R )" l pN eR = PN 6R ( I " W i ^ <X ~ V R V " 1 = 1 + ( I ~ PN eR 0R ) _ l pN eR eR ( 2 ' 2 2 ) ( 2 . 2 1 ) can be reduced to the standard from for recursive estimation C VR = *N + PN 9R ( I " eR PN 9R ) _ 1 ( V N " ZR> ( 2 ' 2 3 ) applying the matrix inversion lemma to (2.18) yields PN-R = PN + V i ( I " W R ^ V N ( 2 ' 2 4 ) Equation (2.23) gives the best estimate i f the last R data blocks are deleted and (2.24) gives the associated P-matrix. If R new data blocks are sampled, a new Z and 6 D are available and new values of c^ and P^ can be computed. The equations giving these values are obtainable from (2.23) and (2.24) by interchanging N with N - R, replacing 6_ by -8_ and leaving 0' unchanged. With these changes (2.23) K K is. and (2.24) gives the Kalman f i l t e r algorithm (since the measurement matrix is a random variable the Kalman f i l t e r is not optimal). The estimators proposed in References [ 5 , 12-15] are based on a scalar measurement. 
The bracketed quantity i n (2.24) i s then a scalar and no matrix inversion is required. However, many scalar samples are required to estimate the parameters within a given confidence interval. This can result in slow identification. The matrix to be inverted in (2.24) is of dimension Rn(n + m) x Rn(n + m). This is a consequence of the use of a vector measurement of augmented states. In effect, rapid identification i s attempted by the parallel processing of augmented state samples. The price paid for parallel processing is in the high order matrix to be inverted i n the Kalman f i l t e r algorithm. An alternative recursive approach is therefore desirable. It follows from (2.9), and (2.14) that n+m k-1 SL=1 J=k-N where n+m k-1 QN = £ £ V j ) - Y i ( j ) ( 2 ' 2 6 ) 1=1 j=k-N consequently PN = % \ n (2'27) Equation (2.27) gives the interesting result that only an (n + m) x (n + m) matrix need be inverted to find V^. (In general, to determine 2 2 2 n + nm parameters, an (n + nm) x (n + nm) matrix has to be inverted). The structure of (2.27) allows a recursive algorithm to be developed which gives i - 1 1 terms of QN"''. Consider (2.26) and define Vn = Y„(k - N), S, = Q_T - V.V' S. = S. - V. V'., I I ' 1 N 1 1 j J-1 J j ' S = Q 1 (2.28) n+m N-1 applying the matrix inversion lemma yields s-1 = ( Q l ) - v^p- 1 - Q J 1 + Q" 1 v 1ci 1 + v ^ r ^ o " 1 (2.29) Note that the quantities i n brackets on the right hand side of (2.29) are scalars. The set of equations (2.29) define a recursive algorithm for finding given Repeating the algorithm R times allows P^^ to be expressed in terms of (see (2.27)). This procedure replaces (2.24) in the data deletion step. Substituting (2.18) into the f i r s t identity in (2.22) yields PN-R6R = W 1 " W P " ' <2'30> which allows (2.23) to be expressed in the form N̂-R " N̂ + P N - R 6 R ( V N " V . ( 2' 3 1 ) The data deletion i s performed using (2.29) and (2.31). No matrix i n - version i s required. Insertion of new data i s performed using (2.29) in reverse and by changing (2.31) to the appropriate form. 2.4 Measurement Requirements 1 Due to the use of state variable f i l t e r s not a l l states need be measured as implied by (2.2). Most systems of the form (2.1) can be decomposed into subsystems which have a phase variable structure. Suppose, for example, that a subsystem is represented by x 3 = a 3x 3 + a 2x 2 + a l X ] L + ... x2 = X3 x x = x 2 (2.32) To i d e n t i f y (2.32) i t i s s u f f i c i e n t to generate the complete states of one the augmented models. The f i r s t augmented model that can be generated completely i s the t h i r d model (Jl = 3) . This i s seen by taking Z3 = I t i s seen from (2.33) that only x^ need be measured. Any a d d i t i o n a l augmented model (l > 3) can be generated using (2.3) and (2.33). C x i d T i d x2 ^ ^ x„ dr, dx 0 o o 2 1 2 ft x„ dr, dx 0 o o 3 1 2 ^ fZ x d r 1 dx„ o o 1 1 2 •^t x 1 dx, o i l (2.33) 2.5 Applications Example 1. Consider the l i n e a r system x.̂  = a - | X 2 + a 2 x 1 + u, x^(o) X2 X l , x 2(°) 1.0 0.0 (2.34) where a^ = -2.0, a^ = -3.0, t Q = 0, t ^ = 16., and take I = 4 (see The fac t that only two elements need be i d e n t i f i e d r e s u l t s i n some s i m p l i f i c a t i o n . 
The f u l l augmented system i s of the form x l "X2 x l " " U l " x 3 X4 x3 " a l " U2 = • + X5 X6 x5 U3 _x8 X7 _U4 _ (2.35) where x^ = x^, x g = x5» x8 = x7' Equation (2.35) i s discretized and expressed in the form of (2.8) by appropriate definition of Z^(k), ^ ( k ) an& c ' = ( a p » The estimation of c i s performed by a block processing mode with R = 1. ( i f no old data i s deleted, the estimation algorithm i s essentially the Kalman f i l t e r algorithm). The following sample intervals and control are used: (1) T = 0.1, (2) T = 0.2, (3) T = 0.4, (4) T = 1.0 with the corresponding values of N: (1) N = 21, (2) N = 11, (3) N = 6, t (4) N = 3. The control u(t) = 0.1 + sin 3t + sin lOt i s taken. Figures (2.1) and (2.2) give the results with zero additive measurement noise (n^ = o = n 2) . However, discretization of (2.35) introduces quantization errors which can be considered as additive correlated quantization noise. It i s seen from the figures that identification i s rapid even for large quantization noise. Figure (2.3) and (2.4) i l l u s t r a t e the results for state dependent noise of the form x,(k) n.(k) = a. • (-i ) ng., ( i = 1, 2) (2.36) m where ng^ is a pseudo-random computer generated noise sequence which has a maximum amplitude of n . The choice a. = a„ = 0.1, E(ng.) =0. is m 1 2 l made. Figures (2.5) and (2.6) i l l u s t r a t e the results for the choice = 0.1, a 2 = 0.25, E (ng^) = 0.1, E (ng2) =0.14. It is seen from these results that the estimator i s insensitive to correlated quanti- zation noise, state dependent measurement noise and to bias in the measure ment noise. Example 2. Consider the linear system (See Reference [3]) X i = x 2 , x 3̂ (0) = 1.0 X2 = a l X l + a 2x 2 + bjU , X 2(0) = 0.0 where = --2.0, a2 = -1.0, b1 1.0, and take Z = 4. system i s of the form • X2 x l x2 u l X4 x 3 x4 " a l " U2 X6 = X5 x6 _ a2_ + U3 b l u X 8 , X7 X8 U - ^ U4 _ • • • (2.37) where x_ = x,, x r = x,, x_ = x n. 3 4 5 6' 7 8 The following sampling intervals are used: (1) T = 0.1, (2)- T =• 1.0-. The-cor-responding- values of N are: (1) (1) N = 21, (2) N = 3. The value of R is taken as R =1. The i n i t i a l parameter estimate is c = 0 and u(t) = 0.1 + sint + sin 3t. Figures (2.7), (2.8) and (2.9) give the results for quantization noise only. Figures (2.10), (2.11) and (2.12) give the results for quantization noise and additive state dependent noise where «1 = a2 = 0.1, E ^ g ^ =0, i = 1, 2. Comparing these time responses with those obtained in Reference [3] for the noise-free case, i t is evident that identification i s more rapid. The speed of response of parameter tracking systems seems to be adversly effected by the interaction between parameter generating subsystems which i s an inherent feature of the steepest descent search procedure used. Such subsystems are not required for the method developed here. Consequently, in the noise free case, one-step identification i s achieved. FIG. (2.2) EXAMPLE 1. ESTIMATION OF WITH QUANTIZATION NOISE 0.4 FIG. (2.4) EXAMPLE 1. ESTIMATION OF a2 WITH QUANTIZATION AND UNBIASED MEASUREMENT NOISE. 0.0 t 2 4 6 8 10 12 U 16 IS 20 FIG. (2.61 EXAMPLE!. ESTIMATION OF a2 WITH QUANTIZATION AND BIASED . MEASUREMENT NOISE^ 0.4r 1 1 1 1 1 1 2 3 4 5 5 7 0 9 70 ®-\ N \ ' V FIG.(2.7) EXAMPLE 2. ESTIMATION OF a, WITH QUANTIZATION NOISE. -i 1 1 1 1 1 1 i 1 1— 1 2 3 4 5 6 7 8 9 10 FIG. (2.8) EXAMPLE 2. ESTIMATION OF a2 WITH QUANTIZATION NOISE. 1.6 1.2 0.4 0.0 or* I 2 3 10 -0 .4L FIG. (2.9) EXAMPLE 2. ESTIMATION OF b. WITH QUANTIZATION NOISE. FIG. 
(2.10) EXAMPLE 2. ESTIMATION OF a, WITH QUANTIZATION AND UNBIASED MEASUREMENT NOISE. 04. 2.0 . -2.4 . FIG. (2.11) EXAMPLE 2. ESTIMATION OF WITH QUANTIZATION AND UNBIASED MEASUREMENT NOISE. 2.4 2.0 1-6 1.2 0.8 0.4 0.0 Or** w FIG. (2.12) EXAMPLE 2. ESTIMATION OF b, WITH QUANTIZATION AND UNBIASED MEASUREMENT NOISE. 27 2.6 Comparison with a Limited Memory F i l t e r In this section a comparison is made between the proposed d i g i t a l i d e n t i f i e r using augmented states (I > 1) and the standard limited memory identifier ( I = 1). The standard limited memory identi- f i e r (or f i l t e r ) i s of the type discussed in Section 2.3. The f i l t e r operates in a block data processing mode where R old data blocks are deleted and R new data blocks introduced resulting in a limited (finite) memory f i l t e r . Example 2 i s used to il l u s t r a t e the superiority of the augmented state identifier for various values of T and N (R = 1). Figures(2.13), (2.14) and (2.15) show the results for the identification of the parameters a^, a^ and b^, respectively. It is evident that the standard limited memory f i l t e r f a i l s to track the parameters for values T = 1., N = 3, R = l and exhibit poor tracking accuracy for values T = 0.1, N = 21, R = 1. The failure of the standard limited memory f i l t e r to give acceptable estimation of the parameters is due to the fact that i t i s not an optimal f i l t e r for the proposed identification problem. Consequently, excessive quantization noise and correlation between samples, as is the case here, can have a serious effect on i t s performance. In conclusion i t can be stated that the use of augmented states and controls results in a more rapid and accurate identification of continuous time linear system parameters than can be achieved by either parameter tracking systems or the standard limited memory f i l t e r . FIG. (2.13) EXAMPLE 2. ESTIMATION OF a, WITH QUANTIZATION AND UNBIASED MEASUREMENT NOISE USING A STANDARD LIMITED MEMORY FILTER. FIG (2.14) EXAMPLE 2. ESTIMATION OF o2 WITH QUANTIZATION AND UNBIASED MEASUREMENT NOISE USING A STANDARD LIMITED MEMORY FILTER. 29 28. 2.4 . 2.0. 16. FIG. (2.15) EXAMPLE 2. ESTIMATION OF b, WITH QUANTIZATION AND UNBIASED MEASUREMENT NOISE USING A STANDARD LIMITED MEMORY FILTER. 30 2.7 Some S t a t i s t i c a l Properties of the Digital Identifier In the previous section, the performance of the d i g i t a l identi- f i e r was evaluated by comparing estimated values with known values. In a practical situation, however, the parameters are not known and s t a t i s t i c a l means must be used to establish confidence in the estimates. In reference [5] Kishi made use of extensive studies made by Linnik [17] to determine confidence intervals for parameter estimates in the case of a simple single-input single-output system. The relations given by Linnik w i l l be used in this section to obtain approximate confidence limits for the parameter estimates of the d i g i t a l identifier. However, Linnik's results can be applied only i f the different components of the noise vector N (see (2.10)) can be represented by a s t a t i s t i c a l noise model. Such a task is not an easy one. However, i t w i l l be shown that under certain simplifying assumptions i t can be argued that the different noise com- ponents w i l l be uncorrelated. 2.7.1 Correlation of Noise Samples In Section (2.2), i t has been assumed that the noise samples of the different models are uncorrelated. 
Hence, equation (2.16) is assumed to be true. In the following, i t w i l l be shown that i f the noise samples are uncorrelated then the samples of the integrated noise are also un- correlated. For simplicity, we w i l l consider a single noise component in order to prove the validity of the previous statement. Consider the noise component n(t) which can be represented by the following s t a t i s t i c a l model E[n(t)] = 0 (2.38) E[n(t)n(x)] = a e - 0 1 ! ^ ' If a i s of i n f i n i t e value then n(t) w i l l be a type of white-noise com- ponent. (In the white-noise case the amplitude a w i l l be equal to i n - finity ) . For a physical realization of the noise component n(t) i t w i l l be assumed that a is a very large but f i n i t e quantity and that a' is a very large but f i n i t e quantity and that a is of f i n i t e value. Let N x(t) = n ( T 1 ) d x 1 (2.39) It follows from (2.38) and (2.39) that E[N 1(t)n( T)] = E[n(x)n(T 1)]dx 1 1 o 1 1 = a / t e - a l T - T l ' dx, (2.40) o 1 To evaluate the above integral we w i l l consider two cases: (i) Case 1, x ^ t E[N 1(t)n(x)] = a £ e ~ a ( T _ T l ) dx^ = a [ e-a(x-t) _ e-ax a ( i i ) Case 2, o sc x «_ t T - T U T / \ i r T -a(x-x.) , , ,t -a(x..-x) , E[N,(t)n(x)J = a J e 1 dr, + a f e 1 dx, 1 o 1 x 1 a -axr ax , a ax r -ax -at, = — e [e — 1J H e [e - e J a a a r o -ax -a(t-x), , 0 = — [ 2 - e - e ] (2.42) a 32 For fixed time t and a very large value of a, equations (2.41) and (2.42) can be approximated by f = 0 , x > t I (t)n(x)] J U ^ , 0 < x < E [ N l V W ' n " J ** (2.43) As seen from (2.43), the noise components n(t) and N^(t) become un- correlated as a -*- 0 0. Using equation (2.41) or (2.42) for the special case where x = t yields E[N 1(t)n(t)] = f [1 - e" a t] = — , for fixed t and a ->• °° The following non-dimensional ratio can be used as a measure of the degree of correlation of N^(t) and n(t): E. I M r(t)q(.t)] r = - r , for fixed t, a -> « (2.44) at Equation (2.44) shows that for fixed t and a large value of a the correlation between N^(t) and n(t) samples is much less than the correlation between the samples of n(t). This shows clearly the advantage of using the inte- grators in order to generate the augmented states and controls. It i s also interesting to study the degree of correlation between the samples of N x ( t ) . Equation (2.39) yields E[N 1(t)N 1(x)] = E[N 1(t)n(x 1)]dx 1 (2.45) As before, we have two cases: (i) Case 1, x > t E l N . W N . d ) ] = f]2 - e a T l - e ~ a ( T " T l ) ] dx, 1 1 o a 1 -ax, -a(x-x,) . . « [ 2 T l I - £ a 1 a a o a r o -ax , -at -a(x-t) = — j [2at +e +e - e -1] a 2at = , for fixed t and a -*• 0 0 (2.46) a ( i i ) Case 2, o < x < t E[N (t)N (x)] = / T E[N (x)n(x 1)] dx, + / E[N (x)n(x.)] dx, l i o l l l x l 1 1 fx a r o -ax -a(x-x 1) 1 , , = J — L2 - e 1 - e 1 ] dx, + o a 1 rt a r -a(x,-x) -ax, -, , / — Le 1 - e 1] dx, x a 1 a n ~ a x , ~ a t -a(t-x) = —[2ax + e +e - e -1] a 2ax = , for fixed t and a •> 0 0 (2.47) For the case x = t equation (2.46) or (2.47) yields E[N-2(t)] = ~ [at + e a t - 1] a = — t (2.48) a Equations (2.48) and (2.38) yield E[N 2(t)] —4 a l (2.49) tE[n^(t)] a Equations (2.46), (2.47), (2.48) and (2.49) show that the correlation between the samples of the noise component N^(t) vanishes as a -»• 0 0. Equation (2.49) shows that for fixed t and large value of a the corre- lation between the samples of N^(t) is much less than that between the samples of n(t). 
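The ratio (2.44) is easy to check numerically. The sketch below simulates noise with the autocorrelation of (2.38) by a first-order autoregressive recursion, forms N_1(t) by trapezoidal integration as in (2.39), and estimates E[N_1(t)n(t)]/(t E[n^2(t)]) by averaging over independent runs; the discretization and parameter values are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
a, alpha = 1.0, 20.0            # autocorrelation a * exp(-alpha*|tau|), alpha*t >> 1
dt, t_end, runs = 1e-3, 1.0, 2000
K = int(round(t_end / dt)) + 1
rho = np.exp(-alpha * dt)       # AR(1) step matching (2.38) at the sample times

n = np.empty((runs, K))
n[:, 0] = np.sqrt(a) * rng.normal(size=runs)
for k in range(1, K):
    n[:, k] = rho * n[:, k - 1] + np.sqrt(a * (1.0 - rho**2)) * rng.normal(size=runs)

# N_1(t) of (2.39) by cumulative trapezoidal integration along each run
N1 = np.concatenate([np.zeros((runs, 1)),
                     np.cumsum(0.5 * dt * (n[:, 1:] + n[:, :-1]), axis=1)], axis=1)

r_hat = np.mean(N1[:, -1] * n[:, -1]) / (t_end * np.mean(n[:, -1] ** 2))
print(r_hat, 1.0 / (alpha * t_end))   # both close to 0.05 for alpha*t = 20
```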
The same analysis can be directly extended to ^ ( t ) where N 2(t) = N 1(x 1) dx 1 (2.50) Since the same approach is to be used i t is sufficient to summarize the corresponding results E[N 2(t)N 1(t)] = ^ t 2 (2.51) 2 a t 3 E [ N 2 ( t ) ] ~ f o ~ ( 2 " 5 2 ) E f r - C t y ^ c t ) ] - , 9 = (2.53) t 3 E [ n 2 ( t ) ] E[N 2(t)] ~ 1 (2.54) t 4 E [ n 2 ( t ) ] 3 a t Equations (2.51), (2.52), (2.53) and (2.54) show that the same conclusions that have been reached for N^(t) hold for N 2 ( t ) . Suc- cessive integrations of s t a t i s t i c a l l y independent noise samples result in s t a t i s t i c a l l y independent noise components. Also, successive inte- grations of correlated noise samples result in less correlated noise components. The above simplified mathematical treatment shows that the independence property assumed by equation (2.16) is j u s t i f i a b l e for many practical situations where n(t) i s usually taken as white noise. Equation (2.16) can now be used to prove the unbiasness property of the di g i t a l i d e n t i f i e r . Equation (2.10) has the form Z N(k + 1) = 6 N(k)c + N(k) (2.55) Where the argument value denotes the sampling instant at which the various vectors and matrices are evaluated. Combining (2.55) and (2.12): E[c] = E[P N(k)9^(k)Z N(k + 1)] = E[P N(k)e^(k)(G N(k)c + N(k))] = c + E[P N(k)e^(k)N(k)] (2.56) The elements of P M(k) and 8 (k) depend on Z (k) and U (k), A = 1, n+m. Equation (2,55) shows that Z^(k) is only related to N(k - 1). Since N(k) is s t a t i s t i c a l l y independent of N(k - 1) as shown by the above analysis, then ^ ( k ) and 6jj(k) a r e not s t a t i s t i c a l l y dependent on N(k) and the following relations hold E[P N(k)8^(k)N(k)] = 0 (2.57) E[£] = c (2.58) Equation (2.58) shows that the estimate obtained by using the d i g i t a l identifier i s unbiased. The results given in figures (2.1)-(2.4) and (2.7)-(2.12) would not be obtainable i f (2.16) did not hold approximately. Equation (2.58) holds when 7^ = (9^9^) ^. This case has been referred to by Kishi as the observable case. The case where the inverse (9^9^) ̂ does not exist i s referred to as the non-observable case. In the non- observable case an estimate i s obtained by use of the equation c - 9* Z N ( 2 . 5 9 ) where is the generalized inverse of 6 ^ . That i s , e ^ e N x = X ( 2 . 6 0 ) It follows from ( 2 . 1 6 ) and ( 2 . 6 0 ) that the estimator ( 2 . 5 9 ) is unbiased. 2 . 7 . 2 The Error Covariance Matrix Let V = 6 Na - ^ ( 2 . 6 1 ) be the error vector representing the difference between a predicted measurement vector and the measurement vector. For the observable case the above equation can be written in the form (see ( 2 . 1 2 ) ) V = <Wi - I ) Z N ( 2 ' 6 2 ) Using equation ( 2 . 5 5 ) yields ( E N P N 6 N " I ) S ( 2 ' 6 3 ) For the non-observable case the expression for V i s v = ( e N e ^ - I ) N ( 2 . 6 4 ) where use i s made of the property N N N N The error convariance i s defined by 6*B = fl ( 2 . 6 5 ) Q w = E[W»] ( 2 . 6 6 ) Making use of equation (2.63) yields Q W = E [ ( W N " I ) M ' ( 9 N P N E N " I ) ] ( 2 * 6 7 ) For the non-observable case using (2.64) yields Q W = E [ ( 6 N 6 N " ^ ' ( V N " I ) ] ( 2 ' 6 8 ) 2.7.3 Estimation of the Confidence-Interval Length Only the observable case w i l l be studied in this subsection since for the non-observable case the length of the confidence interval w i l l depend on the degree of redundancy in the measurements which, would change from one system to the other. 
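Anticipating the result of this subsection, the interval computation itself is short. The sketch below assumes the usual least-squares form for the covariance of the estimate, namely (V'V/dof) P_N with dof = n(n+m)(N-1) as derived below, and obtains the tabulated t-value through SciPy; the function name and that covariance choice are assumptions made for illustration only.

```python
import numpy as np
from scipy import stats

def t_confidence_intervals(theta_N, Z_N, c_hat, prob=0.9):
    """Confidence interval of Section 2.7.3 for each component of c_hat
    (the flat n(n+m) parameter vector of (2.12)), observable case only."""
    n_par = theta_N.shape[1]                   # n(n + m) parameters
    dof = theta_N.shape[0] - n_par             # nN(n+m) - n(n+m) = n(n+m)(N-1)
    V = theta_N @ c_hat - Z_N                  # error vector, cf. (2.61)
    sigma2 = (V @ V) / dof                     # V'V spread over the degrees of freedom
    P_N = np.linalg.inv(theta_N.T @ theta_N)   # (2.13)
    gamma = stats.t.ppf(0.5 + 0.5 * prob, dof) # two-sided t-value for the chosen probability
    half = gamma * np.sqrt(sigma2 * np.diag(P_N))
    return c_hat - half, c_hat + half          # endpoints of the interval of width 2*Delta
```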
Equation (2.63) yields V = ( 6 N P N 6 N " I ) S ( 2 > 6 3 ) The fol*lowing relations' hold for the observable case:" R(8 N) = R(6^) = R(6^6N) = R(PN) = n(n + m) (2.69) where R(*) stands for the rank of the matrix written between the brackets. Let the matrix Y„ be defined as N Y N - P N 6 N ( 2 - 7 0 ) Using the known lemma R(AB) 4 min(R(A), R(B)) (2.71) where A and B are any arbitrary matrices, i t follows from (2.70) that, R < V = n ( n + m ) (2.72) Equation (2.70) yields 3N = ( 9 N 9 N ) Y N ( 2 ' 7 3 ) Hence R(8jJ) = n(n + m) < min(R(6^9N) , R(Y^)) < min(n(n.+m), R(Y )) (2.74) Hence R ( Y N ) > n(n + m) (2.75) It follows from (2.72) and (2.75) that R ( Y N ) = R(Pn<3N) = N < N + M> < 2' 7 6> Let the matrix be defined as N " H H W (2-77) Then WM9M = 9.. (2.78) N N N It follows from (2.78) that i s a projection matrix. Using (2.78) and (2.71) yield n(n + m) = R(8N) < min(R(WN) , R(e^) Hence. <=min(R(WN), n(n + m)) (2.79) R(WN) ^ n(n + m) (2.80) Applying (2.71) to (2.77) i t is seen that R(WN) 4 n(n + m) (2.81) It follows from (2.80) and (2.81) that R(WN) = n(n + m) (2.82) Considering now the computation of the number of degrees of freedom of the random vectors c and V, that i s (as defined by Linnik [17]) the number of s t a t i s t i c a l l y independent components of c and V. Starting by computing the degrees of freedom of the random vector c we w i l l make use of Theorem 2.3.1 given by Linnik [17] which states that i f the vector U is expressed as and the matrix A is of dimension r * £ with r < Z and R(A) = r then the random vector U i s of r degrees of freedom. From equation (2.12) £ = P M9'Z M (2 N N N Equation (2.76) yields Applying Theorem 2.3.1 given by Linnik [17] i t i s readly seen that the vector c i s a random vector of n(n + m) degrees of freedom. To compute the degrees of freedom of the noise error vector V, (2.63) is expressed in the form U = AX (2.83) R(P. T 0 ' ) = n(n + m) (2.84) V = (WM - I)N (2.85) where W is given by (2.77). Let 'N (2.86) The matrix is idempotent as can be seen from the following = W - 2W + I N N = I - N (2.87) Accordingly, there exists an orthogonal matrix 0^ such that (see reference [17]) where the matrix i s a diagonal one. The matrix $^ has the property that $N*N " V N = 1 ( 2 ' 8 9 ) The matrix is an idempotent matrix as can be seen from the following N N N N N N N N N N =z $'w $ = n (2.90) Hence: d 2 . = d.. i . e . , d . . =0 or 1 (2.91) n i i n The number of d ^ s equals to unity is given by the rank of the matrix W ,̂ that i s , by n(n + m). From (2.89) i t follows that R(I - WN) = R(^( I - WN)$N) = R d - y = n(n + m)(N - 1) (2.92) The f i n a l result follows from the fact that the total number of diagonal elements of is n(n + m)N and of these n(.n + m) are equal to unity (see (2.91)). Applying the above result given by (2.92) to (2.85) and making use of Theorem 2.3.1 [17], i t follows that the noise error vector V is a random vector of n(n + m)(N - 1) degrees of freedom. From the result reached in the previous statement i t is seen that V'V can be expressed in the form n(n+m)(N-1) V'V = I (2.93) i=l Let Q N = E[c c'] (2.94) cc With c is computed according to (2.12). The following results given by Kishi [5] and Linnik [17] are required. If £ and E,^ are s t a t i s t i c a l l y independent Gaussian random 2 2 — variables with distributed as % and having m degrees of freedom, then the t-distribution i s formed by the following ratio t = g (2.95) m Let (see Appendix IV) c. -c. K = 1 1 (2.96) / (Q ).. CC IX where (Q ).. 
is the i-th element of the main diagonal of the covariance cc 11 matrix Q and c. stands for the i-th element of the vector c. Also cc 1 let (see Appendix IV) m = n(n + m)(N - 1) .2 £q = V'V (2.97) Then using (2.96), (2.97) into (2.95) yields c. - c. 1 1 (2.98) "n(n + m)(N - 1) ^ ) > > ( v , v / n ( n + m ) ( N _ 1 } cc 1 1 42 Using the t -d is t r ibut ion i t i s possible to compute the length of the interval about the parameter c. which would include c. with a certain i 1 probabi l i ty . For example, let the assigned probabil ity be 0.9, then ^ n C n + DCN - 1)1 4 Y > = 0.9 (2.99) where P{*} stands for the probabil ity density of the expression between the brackets. The number y can be found from tabulated standard tables using the number n(n + m)(N - 1) as an index. Hence Ic. - c.I = Y / ( Q ).. , , VsYM TT (2.100) 1 l x1 1 cc i i n(n + m)(N - 1) If the augmented states and controls are not used the denominator under the square-root sign in (2.100) w i l l be n(N - n - m). Thus using the augmented states and controls decreases the estimation error | c\ - c^| and results in a better estimation accuracy. The range 2A of the confidence interval for a probabil ity of 0.9 i s given by 2A = [c. ± YAQ ) . . , . VZ TV (2.101) l ' % cc i i n(n + m)(N - 1) 2.8 Discussion For the observable case equation (2.12) i s used to estimate the unknown parameters. The matrix P^ is recursively computed without any need to invert a matrix. For the non-observable case the matrix P^ does not exist . Equation (2.59) i s to be used instead of (2.12) in order to estimate the unknown parameters. The generalized matrix inverse 6^ i s required. It i s desirable to compute Ĝj in a way similar to that developed in this Chapter for computing P^ in the observable case. This requires a computation algorithm to evaluate 0̂  recursively without any need for a matrix inversion. In the next chapter an algorithm w i l l be developed in order to compute 6* recursively and no matrix inversion w i l l be needed. This algorithm i s computationally simple and is suitable for real-time compu- tations . A simplified version of the algorithm that w i l l be developed i n the next chapter can be used in order to compute P^. This presents us with another method to compute P^. Due to the simplicity of this algorithm i t can be used for state estimation applications in cases where the plant is represented by a discrete model. Since the generalized inverse has numerous applications besides those which occur in state and parameter estimation, the important topic of recursive computation of a generalized matrix inverse i s dealt with in the next chapter in a general way. 3. RECURSIVE COMPUTATION OF THE GENERALIZED INVERSE OF A MATRIX 3.1 Introduction Because of i t s value in the solution of minimum norm problems in f i n i t e dimensional spaces, the generalized inverse (or pseudo-inverse) finds important applications in linear system theory. Applications of the generalized inverse to estimation and f i l t e r i n g problems are given in [18], [6], [19]. Applications to control problems are given in [20], [21]. A number of methods have been proposed for computing the generalized inverse A''" of a matrix A. The method proposed by Ben-Israel et. [22] a l . requires the formation of C = A*A, where A* is the conjugate transpose of A, and a series expansion of C. The method proposed by Pyle [7] requires a two-stage Gram-Schmidt orthogonalization process and subsequent solution of sets of linear equations. 
A simpler algorithm has been pro- posed by Rust et. a l . [8]. The algorithm is based on a two stage Gram- Schmidt orthogonalization process. Unlike the other two methods men- tioned above, i t does not require the generation of powers of matrices or the solution of sets of linear equations. However, by eliminating the Gram-Schmidt orthogonalization process, an even simpler recursive algorithm can be derived. 3.2 Properties of the Generalized Inverse Let A be an m x n complex matrix. The generalized inverse of A, as defined by Penrose [23], is the unique n x m atrix A* which satisfies the following relations: AA IA (AA1) * (A1 A)* = A = AA A XA (3.1) (3.2) (3.3) (3.4) Following Rust et. a l . , suppose that A can be partitioned into the form A = (R,S) (3,5) where R is an m x n^ matrix with l inear ly independent columns and S is an m x n 2 matrix whose columns are l inear ly dependent on the columns of R. The following relations hold: R IR = I R1 = (R*R) -1R* S -RU U = R S (I + UU*) 1 R I U*(I + uu*)"^ 1 (3.6) (3.7) (3.8) (3.9) (3.10) (In (3.6) and (3.10) I i s a unit n^ x n^ matrix). The generalized inverse is given in partit ioned form by (3.10) where ^ and U are given by (3.7) and (3.9), respectively. Rust et. a l . show that the inverses in (3.10) can be found by a two stage Gram-Schmidt orthogonalization process (the f i r s t stage generates R*)• The poss ib i l i t y of replacing both orthogonalization stages by a recursive algorithm is considered in the next two sections. 3.3 Recursive Evaluation of R The following relationships are required to develop the recursive algorithm. Taking the conjugate transpose of (3.1) and using (3.3) yields Let A*AA = A* A = (V,W) (3.11) (3.12) be a partitioning of A where V is composed of linearly independent columns. Equations (3.7) and (3.11) can be applied to V: v 1 = (V*V) - 1V* v * v v = V* (3.13) (3.14) From (3.1) and the partitioning (3.12) i t is seen that AA V = V (3.15) Taking the conjugate transpose of (3.15) and using (3.3) (3.13) and (3.14) yields V ^ 1 = (V*V) - 1V* = V 1 (3.16) Let r_., (j = 1, n^) denote the linearly independent columns of R. Consider the partitioning k+1 k+1 (3.17) where R, = r,, R = R, and where the b.'s are row vectors. From (3.17) 1 1 n 1 J i t follows that " W+i + rk+ibk+i (3.18) Applying (3.6) to yields = I (3.19) Applying (3.11) to 8̂ +1 1 1 1 the partitioned form (3.17) yields k+1 r* k+1 (3.20) Partitioning of (3.20) and solving for b^-fi gives -1 \ + i = <WW r k + i ( I " Vk+i> (3.21) Replacing A by and V by R^ in (3.16) gives ^W^+l " (3.22) Substituting (3.18) into (3.22) and using (3.19) yields w = V 1 - WW' (3.23) Substituting (3.23) into (3.21) gives W = w W 1 - \ v (3.24) where °k+l = ( r k + l r k + l " r k + l \ < r k + l ) _ 1 (3.25) is a scalar. The recursive algorithm for the computation of R is based on (3.17) (3.23) (3.24) and (3.25). It is desirable, however, to compute ,1 • the matrix product R̂ k̂. i n (3.24) from a recursive formula. Substituting (3.23) and (3.24) into (3.18) yields = Vk"+ v i ( i - \ ^ ) r k + i t + i ( i - v £ ( 3 - 2 6 ) Since (see (3.3)) (3.27) i t i s seen that the second matrix on the right-hand side of (3.26) i s the outer product of two vectors Summary of the Algorithm for Finding R"*" I n i t i a l i z e . Set k = 1, R][ = r , R* = (R*R ) - 1R* \ + i " ( rk+i rk+i ~ t+A^W - 1 bk+l = \ + l r k + l ( I " \RJ) Bk+1 " ^ ( I " V l W v A i = \*i+ v i ( I - A ^ w W 1 - <+1 3.. k. = k+1 k+1 k+1 4. 
If k < n 1 go to 2, The above algorithm replaces the f i r s t Gram-Schmidt orthogonali- zation stage of Rust's algorithm, which, in the interest of accuracy, reorthogonalizes each column after i t is f i r s t orthogonalized. An addi- tional advantage of the above algorithm is the manner in which A is partitioned into the form (3.5). In the Gram-Schmidt process, the de- 49 tection of a zero vector after orthogonalization (but before normali- zation) indicates that the vector being orthogonalized i s linearly dependent on the previous vectors. This vector i s consequently moved into S and the next column vector of A is chosen. In the algorithm presented here, linear dependency i s detected when CL ̂  (see (3.25)) i s K."t"-L zero. Since a^ +^ i s required i n (3.24) no additional computations are required to test for linear dependency. If 0^+1 ^ s z e r o » fc^e associated vector i s moved into S and the next column vector of A is chosen. 3.4 Recursive Evaluation of A Consider the partitioning \ + l . " ( R» Sk+l?' \ + l " k+1 k+1 (3.28) where S. = s n i s the f i r s t column vector of S and S = S. The generalized 1 1 n ? I z inverse R can be obtained by the recursive algorithm given in Section (3.3). Applying (3.10) to A^, i t is seen that J 1 C. = U*B. J 11 (3.29) (3.30) U. = (U]L, u 2, , u.) = R S. . J J (3.31) where D. = (I + U.U*) 1 11 (3.32) The matrix inversion lemma [9] \ l i - \ ' - D k \ + i ( I + " k V A ' W ^ A 1 (3.33) is used to develop a recursive formula for B^. Setting j = k, k + 1 in (3.29) and using (3.33) gives where Bk+i= ( I" ViVWk+i^k e k + 1 = .(i + ug + 1B kR u k + 1 ) " (3.34) (3.35) The recursive algorithm for the computation of A* given i s based on (3.28) (3.30) (3.31) (3.34) and (3.35). • Summary of the Algorithm for finding A* 1. I n i t i a l i z e . Set k = 1, B ]. = D'V = [I - (1 + u*u1)~1u1u*]R I - 1 V i • ( I - ek +i\ R \ + i t + i ) B k 3. 4. 5. k = k + 1 If k ^ n^, go to 2, C = (R IS )*B n 2 n 2 n 2 A 1 = A 1 nv. 3.5 Examples The f i r s t two examples are taken from the l i terature . This allows a comparison to be made showing the relat ive simplicity of the proposed algorithm. Example 1. This example is given in [22]. It i s required to compute the generalized inverse of A = It i s seen that (3.36) has the form (3.5) where 1 0 0 -1 -1 1 0 0 0 -1 1 0 0 0 -1 1 (3.36) R = 1 0 0 -1 1 0 0 -1 1 0 0 -1 s = (3.37) The algorithm of Section 3.3 is used to find R Step 1. I n i t i a l i z a t i o n . Step 2, R l " r l 1 -1 0 0 a2 = 3 R* = |(1, -1, 0, 0) b 0 = ±(1, 1, -2, 0) B, |(2, -1, -1, 0) R2 R2 1 3 2 -1 -1 0 -1 2 -1 0 -1 0 -1 0 2 0 0 0 Step 2. (repeated) b„ = f (1, 1, 1, -3) = 1 f3 -1 -1 - l ) " 4 [2 2 - 2 - 2 J R = R = 3 -1 -1 -1 2 2 - 2 - 2 1 1 1 - 3 Having found R*, the algorithm of Section (3.4) is used to evaluate A*. Since S consists of a single column vector s^, only the i n i t i a l i z a t i o n stage i s required. u i " R I S I = -1 -1 -1 Step 1. • i - i 3 - 3 - 1 1 1 3 - 3 - 1 - 1 1 3 - 3 Step 5 C l = U P l - | ( 3, 1, -1, -3) A 1 = 1 A l 8 3 -3 1 3 -1 1 -3 -1 -1 1 -3 -1 3 -3 1 3 (3.38) Example 2. This example is given in [7]. It is required to compute the generalized inverse of 1 0 1 1 0 1 - 1 0 1 1 0 1 (3.39) It i s seen that (3.39) has the form (3.5) where 53 1 0 1 l " R = 0 1 , s = -1 0 1 1 0 1 (3.40) The algorithm of Section (3.3) i s used to compute R*. Step 1. I n i t i a l i z a t i o n Step 2. 
R l " r l , RJ = \ (1, 0, 1) 0 U = 2 3 f (-1, 2, 1) B2 = R1 = f (2, -1, 1) 1 = 1 , 2 -1 1 2 3 K-l 2 V Having found R̂ , the algorithm of Section (3.4) is used to evaluate A*, Step 1. I n i t i a l i z a t i o n U = ( u p u 2 ) D"1 = I - u l ( l + u j u ^ u * 3 v l 2J 1 3 [0 1 IJ Step 2. B = - i - ( 3 ° 3 ) 2 15 v - l 5 h} Step 5. C2 = U* B2 - T5 (3 "o A2 " 15 3 -1 4 3 0 5 -5 0 3 4 -1 3 (3.41) Example 3. This example illustrates a case where A does not have the form (3.5) i n i t i a l l y . It i s required to compute the generalized inverse of A = 1 0 1 0 -1 1 0 0 0 -1 -1 0 0 0 0 1 (3.42) Note that the third column of A is the sum of the f i r s t two columns. Generally, the detection of linear dependency by simple observation is not possible. The algorithm of Section (3.3) is applied to (3.42). It i s seen from (3.36) that there i s no change in the computation given in Example 1 unt i l k = 2. At this stage the algorithm gives = 0, hence the third column of A is shifted into S and the last column of A is chosen. This gives = 1 and b 3 = (0, 0, 0, 1) 1 (2 -1 -1 0 3 3 1 1 -2 cr 2 - 1 - 1 0 1 1-2 0 0 0 0 3 The interchange of the last two columns of (3.42) is accomplished by a permutation matrix P given by 55 1 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0 (3.43) The algorithm of Section (3.4) is now used to find A*' where A = AP, A = AP (3.44) The generalized inverse A''" is then given by (see [8]) A 1 = PA1 (3.45) Step 1. I n i t i a l i z a t i o n u^ = R s^ = 1 -1 0 0 0 1 -1 0 0 0 0 3 Step 5. U1 B1 = I ( 1» °» _ 1 ' 0 ) A 1 1 3 -1 1 0 0 0 0 -1 0 0 3 -1 0 (3.46) The generalized- inverse A-̂  is- found- from (-3-. 45) and- is- A = -1 1 0 0 0 0 -1 0 1 0 0 3 (3.47) 4. TWO-STAGE ESTIMATION OF TIME-INVARIANT AND TIME-VARYING PARAMETERS IN LINEAR SYSTEMS 4.1 Introduction Because of i t s importance in system engineering, considerable effort has been devoted to the problem of estimating the parameters of systems whose dynamic behaviour can be described by differential equations. Two distinct methods for estimating time-invariant parameters have been brought to a high degree of refinement. The parameter tracking method is based on a steepest-descent parameter adjustment of a quadratic error function or a Liapunov function. The error can be taken to be a response error or a generalized equation error. Detailed descriptions of para- meter tracking systems are given by Lion [2], Pazera and Pottinger [3], Rose and Lona [4]. The second method for parameter estimation is based on optimum f i l t e r i n g . In i t s simplest form an optimum f i l t e r minimizes a mean-square error cost function subject to a prescribed dynamical constraint. Detailed descriptions of optimum f i l t e r s and an extensive bibliography are given by Sage and Melsa [9]. The estimation of time-varying parameters is generally a much more d i f f i c u l t problem. Parameter tracking systems and optimum f i l t e r s often give acceptable estimates provided that the parameters vary s u f f i - ciently slowly with respect to time. Lessing and Crane [10] have made use of integral transforms to extend the capability of parameter tracking systems to track time-varying parameters. Practical implementation of the integral transform approach requires the assumption that the parameters can be represented by polynomials of known order. The estimation of time-varying parameters by optimum f i l t e r s requires essentially the same assumption. 
It is assumed that the parameters are solutions of differential equations of known form with unknown time-invariant para- meters. The problem i s then reduced to the estimation of time-invariant parameters of a higher order augmented system. It i s the purpose of this Chapter to develop a computationally simple and accurate method for estimating time-varying parameters which is not restricted by a p r i o r i assumptions about the dynamical behaviour of the parameters. 4.2 Problem Formulation Consider the continuous-time linear system x = Ax + Bu, x (t ) = x (4.1) p p p o po where x i s an n x 1 dimensional state variable vector and u is an m x 1 P dimensional control vector which i s a known function of time. The system matrix A i s considered to be decomposable into a time-invariant component A^ and a time-varying component A^: A =• A f + A v (-4.2) The most general estimation problem associated with (4.1) is to estimate x , A and B given an observed continuous-time measurement vector defined P over an observation interval t 4 t <_ t^, and given complete s t a t i s t i c a l information about measurement noise and system noise. However, i f emphasis is placed on computationally simple methods then a less ambitious problem must be considered. Suppose that B i s known and that the state x^ is 58 measured during the observation interval. The problem to be considered is the estimation of A given x^ and given no information about noise s t a t i s t i c s . The estimation is performed sequentially in two stages. During the f i r s t stage an estimate of the time-invariant A^ is made given an estimate of A^. During the second stage a new estimate of A^ is made using the estimate of A^ found during the f i r s t stage. This section deals with the second stage of identification. Let z. = x - x, (4.3) 1 p 1 x, = A.x. + Bu, x.(t ) = x (4.4) 1 f 1 1 o po It follows from equations (4.1), (4.3) and (4.4) that z., = A.Z.. + A x 1 f 1 v p = A.z, .+ 6(x )a, z,(t ) = 0 (4.5) t l p — 1 o In equation (4.5), a i s a vector whose components are the time-varying elements of A . v The equation A x = 0(x )a (4.6) v p p - defines a matrix 0 whose elements are linear functions of x . Let P g = x p - x x h A g(t) - Tg(t-T) ( 4 > y ) In (4.7), x^ is defined by (4.4), x^ is assumed known (through measurements) and h. is an approximation to g, with the accuracy of the approximation depending on the fixed-time increment T. Consider the following quadratic cost index J = j f l f o [ | | g - ^ 1 l | 2 Q 1 + l l h - ^||" 2Q 2]dT (4.8) where z^ and are defined by (4.5) where the unknown vector a. be re- placed by an estimate a_. The weighting matrices and are taken to be positive definite. It follows from (4.3)-(4.8) that lim J = 0 pro- T->0 vided that the time-varying parameter vector c i is known exactly. Con- sequently, (4.5) and (4.8) can be used to formulate a problem in optimum estimation. In the problem formulation the unknown time-varying para- meter vector i s considered to be a control vector for the system des- cribed by (4.5). The optimum control minimizes J where g and h are known functions of time. 4.3 Estimation of A^. Algorithm I. The minimization of J (see (4.8)) subject to the dynamical con- straint given by (4.5) is a standard problem in optimal control. The Hamiltonian for this problem is (prime denotes transposition). 
H = p ' C A ^ + 6(x )a)- -|(g-z 1)'Q 1(g-z 1)- j(h-A f z±-Q (x ) a) ' Q 2(h-A f z.j-6 (x ) a) (4.9) The costate equation is p = -Hz = -A^p + Q 1(g-z 1) - A^Q 2(h-A f Z ; L - 6(x p)a), p(t f) = 0 (4.10) and the gradient condition i s 60 H = 9' (x )[p+ Q,(h - A,z - G(x )a)] = 0 (4.11) a p L r 1 p — Solving (4.11) for the optimum control a yields a = (9»(x )Q29(x )) V(xp)[p + Q2(h - A fz^ ] (4.12) Existence of an optimal control for the formulated problem can be viewed as a contro l lab i l i t y condition in a function space S of elements (g, h) , where h i s either equal to g or an approximation to g. Taking (4.8) to define the norm, then the system described by (4.5) is defined to be controllable in S i f a control ji exists such that the element (ẑ  - g, ẑ  - h) has minimum norm. Satisfaction of this con- t r o l l a b i l i t y condition requires the existence of the matrix inverse in (4.12). Equations (4.5), (4.10) and (4.12) constitutes a two-point boundary-value problem. Due to the l inear i ty of the equations the solution i s easi ly found using standard methods. Let L(x p) ^ (6'(xp)Q26(xp)) V(xp) (4.13) Substituting (4.12) into (4.5) and (4.10) i t is seen that where C. I 9(x )L - r ' 4 + r (4.14 C± = (I - 6(x ) LQ2) A f C 2 = - Q l + A ' Q ^ 9(xp)LQ2h Q 1 § - A^Q2(I - 0(x ) LQJh (4.15) 61 and where I is the unit matrix. The general solution of (4.14) has the form where P(t f) - *(t f, tQ) + p(t c) y2 A y = L y2 J A t f = / 1 iKt, T) r(x)dx fco (4.16) (4.17) and where i|;(t, x) is the state transition matrix for (4.14). It is seen from (4.16) that y can be found by solving (4.14) taking z^(t Q) = 0 = p(t Q) for the i n i t i a l values. The state transition matrix is found by solving (4.14) taking -Kt , t ) = I, r = 0. Substituting the boundary conditions into (4.16) and part i - tioning ^(t^,, t ) in the form i t i s seen that * ( t f , t Q ) = "*11 |*12 (4.18) *21 j*22 p(t Q) - .-1 -*22 y2 (4.19) The i n i t i a l costate given-by (4.19) in conjunction with (4.5), (4.10)' and (4.12) defines the optimal estimate of a. over the observation interval t Q <̂  t ^ t^. The computational procedure w i l l be referred to as algorithm I. Minimization of a cost index of the form given by (4.8) subject to the dynamical constraint of (4.4) has found extensive applications in state estimation (Sage [11]). There i s , however, a fundamental difference in the problem formulations. In the state estimation formulation, (4.5) and (4.8) are replaced by x = Ax + Bu, x (t ) = x (4.20) P P P o po ~J = 1 t / t f [ l l * p - * P M 2 q 1 + l l * p - % \ \ 2 ^ <4-21> The minimization of J subject to the dynamical constraint of (4.20) is a problem in optimum f i l t e r i n g . The problem can be considered as one of finding an approximation x to z so that the error (z - x ) has minimum P P P P norm subject to a cost penalty associated with the norm of the control effort. The solution of the two-point boundary-value problem defined by (4.20) and (4.21) can be obtained by invariant imbedding or by use of a Riccati transformation. Either approach results in the Kalman f i l t e r algorithm, for generating, the. optimal, estimate, x^ of the state. x p given. the measurements z over an observation interval t < T < t. Comparing p o = = the second terms in the integrands of (4.8) and (4.21) i t i s seen that there i s a fundamental difference between the state and parameter e s t i - mation problems. 
This difference manifests i t s e l f analytically in the controllability condition (the existence of L(x p), see (4.13)) which does not arise in the solution of the state estimation problem. 4.4 Estimation of a Periodic A . Algorithm II. , v S Several questions arise concerning the estimation accuracy of algorithm I. The f i r s t question concerns the effect of measurement noise on the optimal estimate. It i s seen from (4.7) that h, since i t approxi- mates g, may lose i t s significance i f the data representing g is noisy. To some extent this loss of significance can be taken into account by decreasing the absolute value of the elements of the weighting matrix Q 2 (see (4.8)). A second question arises concerning the satisfaction of the controllability condition. Algorithm I must be modified i f practical answers to the above questions are to be found. When no information i s available about noise s t a t i s t i c s there is no unique best method for for- mulating an estimation problem, since the choice of a cost index, such as the quadratic cost index given by (4.8), i s an arbitrary one. Con- sequently, estimation algorithms can only be judged on the basis of trade-off between computational complexity and estimation accuracy. With the above discussion in mind, an algorithm is developed for estimating time-varying periodic parameters. Let x 3 = z - x 2 (4.22) x 2 =-Afx-2 ••+ e(-x̂ );a., x- 2(t Q) = 0* * (4.2-3) It i s seen from (4.5) and (4.20) that X3 = A f X 3 + 9 ( x 2 + X3 )-' X 3 ( t o ) = 0 (4.24) Equation (4.23) i s taken to represent the dynamical constraint for the new problem formulation. It i s seen from (4.1) and (4.23) that the dynamics are noise-free. A cost index of the form given by (4.8) must be specified to complete the problem formulation which i s based on the following assumptions: (1). L(x^) is well-conditioned (see (4.13)). (2) an estimate I of a is available and an estimate x^ of x^ can be found by solving (4.24) using successive approximations. That i s , x^ is defined x 3 = A fx 3 + Q(x 2)a, x 3 ( t 0 ) = 0 • x 0 = A j L + 0(x.)a, x_(t ) = 0 (4.25) Z J. I. 1 — I o (3) The cost index i s (see (4.8) and (4.22)) 3 - X 2 H 2 q 1 + M H " X 3 " X 2 H \ ] D T (4.26) o Successive approximations converge i f the norm of a is sufficiently small. This restriction is imposed to achieve convergence in the two- stage estimation of A., and A . (If A., is known, then the f i r s t estimation f v f stage is unnecessary; (2.4) could then be used to define x^ and no norm constraints need be imposed on a). It is always possible to avoid i l l - conditioning of L( Xp) by modifying the values of i t s time-varying elements. The modifications, however, cannot be arbitrary. They must enter into the problem formulations in a mathematically consistent manner. In the present problem formulation i t i s seen from (4.22) and (4.26) that L(x^) is replaced by- L ( X j - ) . . Equations- (4. 23) , (4..25). and. (4..26). define-a--two- point boundary-value problem for the optimum estimate. be a fixed estimation interval. At the end t^ of an estimation interval a new measurement sequence i s taken over an observation interval t, < t < t f + HT. That i s , the next estimation interval is t + £T < t < Let T be the length of a sampling interval and let t,. - t = NT o (N + £)T (see Fig. (4.1)) . •OBSERVATION INTERVAL FOR NEW MEASUREMENTS 0IT NT(UN)T h H a = H ESTIMATION INTERVAL OPERATION PERIOD flG.(4.l) THE RECURSIVE OBSERVATION - ESTIMATION INTERVALS. In order to solve Eqn. 
(4.25) over this interval i t is necessary to extrapolate a^(t) . This can be done in a variety of ways. One method is to f i t a polynomial to sampled values j i ( t o + kT) , k = 0, N, and to use the polynomial approximation for extrapolation. ..Alternatively, , i f i t i s known that the length of the estimation interval NT is greater than the smallest period of the periodic components of a_, then the known periodicity of a_ can be used for extrapolation. The following is a summary of Algorithm II: 1. A measurement sequence g ( t Q + kT), h ( t Q + kT) and an estimate a_(tQ + kT) over an estimation interval k = 0, N are assumed known. Take a new measurement sequence over an observation interval k = N, N + I and define a_ over this interval by extrapolation. 2. Solve the two-point boundary-value problem associated with the new estimation interval for a new estimate (Equations (4.23)*, (4.25) and (4.26)). The two steps of the algorithm are repeated recursively. 4.5 Estimation of a Periodic A^. Algorithm III. Algorithm II makes recursive use of measurement data and pre- vaious estimates. It i s based on the solution of an optimization problem having.-the noise free, dynamics, of. (4.. 23). and a cost index given by (4.26) where x^ is generated in a smoothed form by integrating (4.25). Conse- quently, i t can be anticipated that algorithm II w i l l be superior to algorithm I i f measurement noise is present. There is no guarantee, however, that the controllability condition i s satisfied. That i s , L(x^) may be ill-conditioned. As mentioned in the previous section, one method for overcoming ill-conditioning i s to modify the time-varying elements of L by modifying the problem formulation. Let z^ be an estimate of z^ and let X2 = A f X 2 + 9 ( x l + V - ' x 2 ( t o J = ° (4.27) represent the dynamics associated with a third problem formulation. The cost index i s taken to have the quadratic form (see (4.8)). J = \ t / t f [||g - * 2 \ \ \ + ||h - x 2|| 2Q 2 ] d T (4.28) o Comparing (4.27), (4.28) with (4.5), (4.8) i t i s seen that z^ should be chosen so that x 2 approximates z^. (Ideally, in the noise-free case and when L(x ) is well-conditioned, then the choice z, = z, can be made. P 1 1 This results in x 2 = z^ and Algorithm III degenerates into Algorithm I). If L(x^) i s ill-conditioned Algorithm II w i l l f a i l . This ill-conditioning can be overcome in Algorithm III provided z^ i s chosen so that L(x^ + z^) is well-conditioned. A further constraint i s imposed on z^ by the require- ment that the two-stage estimation procedure converges. This requirement, which was discussed in the previous section, led to the use of successive approximations and the assumption that the norm of a_ must be sufficiently small. It i s evident that z^ i s constrained by several conflicting re- quirements. Some sort. of. mathematically tractable, approach must be taken, which accounts for the various trade-offs associated with ill-conditioning, dynamical approximations and two-stage convergence. Solving (4.5) by successive approximations using z^ = 0 as an i n i t i a l i z i n g function and an estimate a. gives x 2 - A fx 2 + 6( X L)I , x 2 ( t Q ) = 0 x„ = A.x„ + 9(x. + x 0)a, x 0 ( t ) = 0 (4.29) J 1 J 1 Z — J O 6; for the next two approximating functions. Let z^ be defined by z1 = ax 2 + 3x 3 (4.30) The general structure given by (4.30) allows the trade-offs mentioned above to be investigated as a function of two scalar parameters a and $. 
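As a computational aside, the two approximating passes of (4.29) and the combination (4.30) are straightforward to generate numerically. The sketch below uses a fixed-step Euler integration and assumes user-supplied routines theta (the matrix of (4.6)), a_hat and x1_of_t; these names and the integration scheme are illustrative choices made here, not part of the thesis.

```python
import numpy as np

def successive_passes(A_f, theta, a_hat, x1_of_t, t_grid, alpha=0.0, beta=1.0):
    """Sketch of (4.29)-(4.30): generate x2 and x3 by two successive-
    approximation passes and combine them as z1 = alpha*x2 + beta*x3.

    A_f     : (n, n) time-invariant part of the system matrix
    theta   : function x -> (n, p) matrix theta(x), cf. (4.6)
    a_hat   : (p,) current estimate of the time-varying parameter vector
    x1_of_t : function t -> (n,) solution of (4.4), assumed available
    t_grid  : increasing array of sample times starting at t0
    """
    n = A_f.shape[0]
    x2 = np.zeros((len(t_grid), n))
    x3 = np.zeros((len(t_grid), n))
    for k in range(len(t_grid) - 1):
        h = t_grid[k + 1] - t_grid[k]
        x1k = x1_of_t(t_grid[k])
        # first pass:  x2' = A_f x2 + theta(x1) a_hat,       x2(t0) = 0
        x2[k + 1] = x2[k] + h * (A_f @ x2[k] + theta(x1k) @ a_hat)
        # second pass: x3' = A_f x3 + theta(x1 + x2) a_hat,  x3(t0) = 0
        x3[k + 1] = x3[k] + h * (A_f @ x3[k] + theta(x1k + x2[k]) @ a_hat)
    return alpha * x2 + beta * x3     # z1 of (4.30)
```

Different (alpha, beta) pairs can then be compared without re-integrating the two passes.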
The choice a = 0, B = 1 results in z^ being defined by successive approxi- mations given by (4.29). Numerous algorithms for the solution of linear algebraic equations are given in [24]. Abramov's method for accelerated convergence results when a = 2, 3 = -1. (The j u s t i f i c a t i o n for using these algorithms is easily seen by the discretization and subsequent rearrangement of (4.5) into a linear algebraic equation). Algorithm III is based on the solution of the two-point boundary- value problem defined by (4.27), (4.28), (4.29) and (4.30). The scalar parameters a. and p can. be. determined by numerical. experimentation., or, they can be chosen on the basis of successive approximations (a = 0, 3 ='. 1) or Abramov's method (a = 2, 3 = -1). 4.6 TwO-Stage Estimation of A^ and A^. During the f i r s t stage an estimate of A^ i s found given an estimate of A^. Three different algorithms have been developed in the previous sections for the second stage during which an estimate of A^ is found given an estimate of A^. The two stages are performed sequentially. Together they comprise a composite estimator for A. For reasons of simplicity, the discussion is limited to the development of a f i r s t stage algorithm which i s associated with second stage Algorithm I. The modifi- cations required i f second stage Algorithm II or III is used are straight- forward. Equation (4.4) defines the dynamics associated with the f i r s t stage of estimation. If x^ were available through measurements, (4.4) could be discretized and estimated by use of standard minimum mean- square error estimation procedures. However, x^ is not directly available. Furthermore, a two-stage estimation is proposed and the interaction between stages must be accounted for i f a convergent sequence of e s t i - mates is to be obtained. Chapter 2 has suggested augmenting (4.4) by additional states and controls to reduce the effect of quantization noise which i s intro- duced by discretization. It i s shown that the use of augmented states and controls results in rapid convergence of the estimates. The augmented states and controls are defined by successive integrations of (4.4): y. = Ay. + B u. Jx fx i y i ( t o ) = i 1 I 0 , i > 1 (4.31) A ft A y i = o y i - l d T > y l = X l u. = •/"t u. ^dx, u, = u (4.32) l o i - l 1 For' composite estimation" of" A, successive - integrations has' the beneficial effect of reducing the influence of the time-varying components of A^ on the augmented states. The augmented system of equations (4.31) are dis- cretized and rearranged into a set of linear algebraic equations. The optimum estimate for A^ i f taken to be the solution with minimum mean- square error (For details see Chapter 2). In the case of composite estimation the above procedure must be sli g h t l y modified since x^ is not directly available. Consequently, x^ must be estimated from existing data. From (4.3) and (4.5) i t is seen that XJL = x p - (4.33) where z- = A cz, + 9(x )a, z.(t ) = 0 (4.34) 1 f l p —' 1 o In (4.34), and a. are estimates of   and a_ respectively. The f i r s t stage of estimation consists of a minimum mean-square error solution of the discretized equations (4.31) using (4.33) and (4.34) to define x^. The new estimate of   i s used in place of the old estimate provided that i t results in improved prediction of the state of the sys- tem. 
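Before the details of the first stage are developed, a bare skeleton of the sequential two-stage procedure may be helpful. In the sketch below, estimate_Af stands for the Chapter 2 identifier applied to the discretized, augmented equations and estimate_a for one of Algorithms I-III; both are assumed to be supplied by the user, and the simple change-based stopping test is an arbitrary choice made for this sketch.

```python
import numpy as np

def two_stage_estimate(x_p, u, A_f0, a0, estimate_Af, estimate_a,
                       max_sweeps=10, tol=1e-4):
    """Driver sketch for the composite two-stage estimator of Section 4.6.

    x_p, u      : recorded state and control histories over the observation interval
    A_f0, a0    : initial guesses for the time-invariant part and for the
                  time-varying parameter vector (a0 may be identically zero)
    estimate_Af : callable(x_p, u, a_hat) -> new A_f estimate  (first stage)
    estimate_a  : callable(x_p, u, A_f)   -> new a(t) estimate (second stage)
    """
    A_f, a_hat = A_f0, a0
    for _ in range(max_sweeps):
        A_f_new = estimate_Af(x_p, u, a_hat)     # stage 1: time-invariant part
        a_new = estimate_a(x_p, u, A_f_new)      # stage 2: time-varying part
        converged = (np.max(np.abs(A_f_new - A_f)) < tol and
                     np.max(np.abs(a_new - a_hat)) < tol)
        A_f, a_hat = A_f_new, a_new
        if converged:
            break
    return A_f, a_hat
```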
The following cost index can be used as a basis for making this decision: J l = I t / f ' l * l ' x p " X l " SiM2Q3d"T ( 4 ' 3 5 ) In (4.35), is a positive definite weighting matrix. The rate of- change of between estimation intervals i s useful in making decisions concerning the displacement length (£T) of the observation interval. The displacement can be increased when the rate of decrease of goes below a preset threshold. 4.7 Examples In a l l examples the following computational procedures were adopted. The two-point boundary-value problem was solved using a fourth- order Runge-Kutta method with an integration step-size of 0.025. The time-varying estimates a.(t) were generated in the computer by storing the values at grid points and taking a piecewise constant approximation 70 a(t)=a(t Q+kT), kT < t<(k + 1)T). In the case of algorithms II and III, extrapolation of _a was performed using a very simple constant approxi- mations, _a(t^ + kT) = a_(t^) , k = 0, , and the i n i t i a l i z i n g functions for successive approximations were taken to be identically zero. 4.7.1 Example 1. Consider a f i r s t order system x = -(1 + b sino)t)x + u (4.36) where the control u i s taken to be a step function of magnitude equal to 3 and x(0) = 0 with T = 0.025, I = 4, N = 80 (4.37) The length of the whole operation period for a l l the examples is taken to be 10 seconds. The length of the estimation interval, as seen from (4.37) is NT = 2 seconds and the same has been considered for a l l the subsequent examples. The maximum percentage error of the estimate of a(t) = b sin wt for different b and w values are given in Table (4.1). Table (4.2) shows the results when a(t) i s a square wave: f 0.5,. 0 < L< 0..5. a(t) =\ L-0.5, 0.5 4 t < 1.0 (4.38) In the case of algorithms II and III, two estimation intervals were re- quired to obtain the results given in the tables. Algorithm III was tried with successive approximations (a = 0, 6=1) and Abramov's method (a = 2, g = -1). 4.7.2 Example 2. Consider the second order system x^ = -(1 + b^sin io^t)x 2 + u^ x 2 = -(2 + t>2 sin o) 2t)x 1 - x 2 + u 2 (4.39) where i s taken to be a step function of magnitude equal to 2 and {t , 0 < t < 1 , t > _ 3 3 (4.40) In addition to (4.37) the following values are used: x^(0) = 0 = x 2 ( 0 ) , b^ = 0.4, b 2 = 0.6, cô  = 3, = 2. Table (4.3) gives the maximum per- centage estimation errors for the parameters a^(t) = b^ sin cô t and a 2(t) = b 2 sin o)2t, respectively. In the case of algorithms II and III, two estimation intervals were used. 4.7.3 Example 3. Consider the third order system x^ = (1 + b^ sin to^t) x 2 + u^ x 2 = -(2 + b 2 sin ui^t) x 3 - x 2 4- u 2 x 3 = -(3 + a 3 ( t ) ) x 1 - 2x 3 + u 3 (4.41) where the controls u^ and u 3 are taken to be step functions of magnitudes 0.1 and 0.5, respectively and where f t , t < " 2 = l l . « > 3 3 (4.42) The parameter ^ ( t ) i s periodic and given by a 3 ( t ) - 1 , 0 < t < 0.25 U l , 0.25 < t <_0.5 (4.43) In addition to (4.37) the following values are used: x^(0) = ̂ ( 0 ) = x 3(0) = 0, b± = 0.3, b 2 = 0.5, o>1 = 3., OJ2 = 2. Table (4.4) gives the maximum percentage estimation errors. In the case of algorithms II and III , four estimation intervals were used. 4.7.4 Example 4. The next two- examples i l l u s t r a t e the composite two-stage estimator. In both examples the first-stage of estimation was i n i t i a l i z e d by taking a. = 0 as the i n i t i a l estimate for c i and only one set of aug- mented states and controls was used ( i = 1, 2, see (4.31) and (4.32)). 
Consider the second order system x^ = (1 + b^ sin to^t)x 2 + u^ x 2 = (b-2- sin o)2t.)x1- + a2x2 + U2 (4.-44)- where the controls u^ and u 2 are taken to be step functions of magnitude equal to 2. and 3., respectively. In (4.37), £ is taken equal to two and the following values are used: x^(0) = = »̂ ̂ 1 = 0.3, t>2 = -0.5, co^ = 1, c o 2 = 2, a 3 = 1. After two estimation intervals the maximum per- centage estimation error for a^ was 1.2% and after ten estimation i n - tervals, the maximum percentage estimation errors for a 2 and a 3 were 1.8% 72 and 1.0%, respectively. 4.7.5 Example 5. Consider the second order system x^ = (1 + sin ui^t)x^ + X2 = a 2 X l + a3 X2 + U2 (4-45) where b^ = 0.4, a^ = -2, a^ = -1, cô  = 2 and where a l l remaining data i s the same as in Example 4. After the second estimation interval, the maximum percentage estimation errors for a^, a^, and a^ were 0.9%, 0.4% and 0.3%, respectively. 4.7.6 Example 6. In the previous examples the measurements were noise-free. Example 1 has been tried with zero mean gaussian measurement noise with unity standard deviation and the random noise generator has been randomly i n i t i a l i z e d . A peak r.m.s. signal to noise ratio of 10 was taken. The following values were chosen: b = 0.4, co = 1, a = 2, (3 = -1. The maximum percentage estimation errors for algorithms. I.,.. II and. III. were 6%., 3.2% and 2%, respectively. In the case of algorithms II and III, seven and four estimation intervals were used, respectively. The examples show that the proposed algorithms and the two- stage estimation method can give accurate estimates of linear system parameters. If the measurements are noise free, Algorithm I gives the most accurate estimates. In the case of measurement noise, Algorithms II and III give better estimates. In the above examples, no problem 11 was encountered with ill-conditioning in any of the above cases. 7 5 b 1 W 0.5 1.0 1 / 0.6 % 0.75% JT 3-0 10 % 15 % 2.0 0.7% 0.85% 777 °(=2, 3=-1 3.0 0.9 % 1.0% 777 3.0 0.93% 0.95% TABLE (4.1) TABLE (4.2) TABLE (4.3) TABLE (4.4) 5. OPTIMUM CONSTANT FEEDBACK GAINS FOR LINEAR SYSTEMS AND THE PROBLEM OF TRAJECTORY SENSITIVITY REDUCTION 5.1 Introduction The optimum solution of the linear state regulator problem with quadratic cost has assumed a central position in the development of modern control theory. The optimum gain, however, is in general time-varying and this has severely restricted practical applications. Practical considerations generally specify constant or piece-wise con- stant feedback gains. Considerable effort has gone into the development of design procedures for constant feedback gains and in the design of specific optimal controllers [25-29]. There are, however, computational as well as practical d i f f i - culties associated with optimum constant gains. Specifying a constant gain results in a constrained optimization problem and the solution of the associated two point boundary value problem is generally more d i f f i - cult. Furthermore, the optimum constant gain depends on the i n i t i a l conditions. One method that has been proposed to overcome these d i f f i - culties i s to introduce the trace of a cost matrix which then allows i n i - t i a l conditions to be eliminated in the problem formulation by arb i t r a r i l y assuming a uniform distribution of i n i t i a l states over a sphere [25-27]. 
From a design standpoint, however, i t is important to investi- gate the effect of i n i t i a l conditions on optimum feedback gains before computing average suboptimum gains. Furthermore, in many practical s i t u - ations, state trajectories are controlled, and the effect of i n i t i a l conitions on optimum gains can become important. This occurs, for example, in certain schemes for combined estimation and control. During a f i n i t e observation interval, measurements of the state are taken and used to determine parameter estimates of a linear model. The linear model is used to compute a control strategy which i s applied in a subsequent control interval [11], [5]. This scheme seems particularly attractive for adaptive systems. Its practical implementation, however, requires an on-line computational algorithm for determining optimal or suboptimal gains. Trajectory sensitivity minimization i s an important practical problem that has received considerable attention [30-37], A widely adopted method i s based on augmenting the state vector with the sensi- t i v i t y vector and formulating a linear state regulator problem for the augmented state. However, i f time-varying gains are permitted, the problem, i s ill-posed [35, 38]. A well-posed, traj.ectory. sensitivity mini- mization problem requires that the gain be constrained to be time-invariant. In general, the sensitivity vector is affected by i n i t i a l conditions. Consequently, i f feedback i s to minimize trajectory sensi- t i v i t y , the effect of i n i t i a l conditions on the optimum constant gain must be evaluated. It may be possible to choose a suboptimal average gain. A unified treatment of several problem formulations dealing with time-varying gains and constant gains is presented. Extensive" use is made of the known results for the linear state regulator problem to develop a simple algorithm which results in rapid convergence. An approximate method is developed to replace the nonlinear two point boundary value problem by an algebraic optimization problem. This method appears to be suitable for on-line computation of suboptimum constant feedback gains. 5.2 Problem Formulation Consider a linear time-varying system x = Ax + Bu, x(0) = X q (5.1) where the state x and the control u are n x 1 and m x 1 vectors, res- pectively. The optimal control which minimizes the quadratic cost index (prime denotes transposition) 1 t f J = ± / (x'Q-x + u'Ru)dt, (5.2) o 2 o 1 where i s positive semidefinite and R positive definite, i s given in feedback form by u = -R •''B'K.jX. (5.3) The time-varying gain matrix is found by solving a matrix Riccati di f f e r e n t i a l equation and has the property that = K|. To ease the notational burden, argument values of functions are omitted when they appear unessential and only one parameter q in the matrix A is considered to have a significant effect on sensitivity. Let z = represent the d q closed-loop trajectory sensitivity vector. A feedback structure of the form u = -R-1B'(K x + K 2z) (5.4) is postulated, where = K| and = K^. This specific structure is postulated because i t resembles the feedback structure of the optimal control resulting from solving the state linear regulator problem (see (5.3)). Substituting (5.4) into (5.1) and neglecting the second- s ' order sensitivity function —j yields 9 q z = 1̂  x + (A - F K j z , z(0) = 0 3q J. (5.5) where F = BR "*"B'. 
By defining an augmented vector y' = (x', z'), (5.1) and (5.5) can be combined into a composite vector differential equation y = (A - FK)y = Cy, y(0) = y r where (5.6) A = 3q 1 J F 0 0 0 - A , K = K l K2 L K2 K3 ' yo = x (5.7) Equation (5.6) represents (5.1) and (5.5) when = K^. This constraint i s relaxed in (5.6) i n order to obtain a unified treatment of the linear and nonlinear formulations. The design objective i s to choose gain matrices and so that (5.2) i s minimized subject to the additional constraint that the feedback (5.4) reduces, in some sense, the closed-loop trajectory sensitivity vector z. A mathematically tractable approach that has been widely adopted in sensitivity studies i s to neglect second-order sensitivity terms so that (5.6) i s valid and to introduce a cost index J ( K r K, t f) = j / f y'(Q + KFK)y dt (5.8) where Q is an augmented positive semidefinite weighting matrix [32-38], The optimum gain matrix minimizes (5.8) subject to the differential constraint (5.6). It w i l l be seen that the sensitivity problem formu- lated by (5.6) and (5.8) i s well-posed only i f the gain matrices are constant. The argument l i s t in the cost index (5.8) is used to distinguish between different problem formulations. Let I(K^, K, t^) and I(K^, K, t ^ ) , where i s arbitrary, represent the linear and nonlinear formulations, respectively (If (5.6) i s rewritten i n the form y = Ay + Fu, then for the linear formulation the matrices A and F are completely known and inde- pendent of K^. This results in linear dynamic constraints. For the nonlinear formulation the parametric matrix A depends on the unknown gain matrix K^. This results in nonlinear dynamic constraints). The Hamiltonian for either formulation i s H = p'Cy - |- y'(Q + KFK)y (5.9) where the costate vector p i s defined by p = -C»p + (Q + KFK)y, p(t f) = p f = 0 (5.10) Gradient matrices (see Appendix III) are used to derive the gradient condition for a minimum. In order to use gradient matrices a l l matrix elements must be independent and cannot be constrained by the symmetry condition K = K'. This is easily accomplished by taking K = K + K', where K is a matrix with arbitrary elements. In order to introduce a compact notation which proves useful in subsequent discussions, i t is assumed that F is time-invariant. The gradient condition for the linear formulation I(K^, K, t^) can then be expressed in the form (see Appendix V) 8J(K K,t ) t i — — = / J - _ - 5 - d t = $ KF + FK$ +F$ +<S> F = 0 3K o 3K yy yy py yp (5.11) where $ k y v » dt (5.12) yv o J 81 (Equation (5.11) remains valid in the general case of time-varying F provided that KF and F are moved under the respective integral signs. Keeping this in mind, i t is seen that the form (5.11) is actually not a restriction but rather a notational convenience). The gradient condition for the nonlinear formulation I(K^, K, t^) differs from (5.11) in an additional term which arises from the dependency of A on K^. Let p' = (p|, pp be a partitioning of the costate vector which corresponds to the partitioning of y and let K^ = L + L', where L i s arbitrary. The additional term which is associated with K^ i s a t f ~ f Tr(-FK l Zp!)dt = F$ + $ F 3L o 1 r2 p z zp„ (5.13) Introducing the augmented matrix E = F$ + $ F ' 0 P 2z Z p2 ! 0 1 0 (5.14) allows the gradient condition for the nonlinear formulation to be ex- pressed in the form 8K " = E + $ KF + FKo + F'f + $ F = 0 yy yy py yp (5.15) Minimizing J(K^, K, t^) is a constrained optimization problem. 
Conse- quently J(K*, K*, t f ) = J(K 1 } K, t f ) (5.16) where K* i s the optimum gain matrix for the nonlinear formulation. If K^ = K*, i t is seen from (5.16) that the minimum for the linear formulation 82 is achieved when K = K*. The condition E* = 0, which is satisfied when F$ + $ F = 0 (5.17) p 2z zp 2 is an optimality condition. It i s interesting to note that i f time- varying gains are permitted, the optimality condition (5.17) becomes a singularity condition which i s impossible to satisfy [32, 37]. This indicates that the sensitivity problem formulated with time-varying gains is ill-posed and cannot, in general, meet the design objectives. Equations (5.11) and (5.15) can be simplified and unified by introducing a matrix S defined by FS$ + $ SF = -F$ - $ F - E (5.18) yy yy py yp (Several methods are available for solving an equation of the form (5.18) for S. See Appendix II). Substituting (5.18) into (5.15) yields 9J(K K,t ) \ v = $ (K - S)F + F(K - S)$ =0 (5.19) 9K yy yy Equations (5.6), (5.10) and (5.19) define a nonlinear two- point boundary value problem which must be solved in order to determine the optimum gain matrix. 5.3 Unified Treatment of Different Formulations Because i t is a constrained optimization problem, the computation of optimum constant gain matrices is generally more d i f f i c u l t than the computation of unconstrained time-varying gains. It is only in the special case of time-invariant systems and i n f i n i t e tf that the problems are computationally equivalent. From an engineering point of view, however, constant gain matrices are easier to implement and therefore preferable, despite the possible computational d i f f i c u l t i e s . It i s evidently desirable to retain the comparative computational simplicity of the unconstrained time-varying gain case, as far as this i s possible. This requires developing a unified treatment for the various formulations and then making maximum use of the known results for the linear state regulator problem with time-varying gain and quadratic cost. Substituting the Riccati transformation p = -Sy (5.20) into (5.10) and using (5.6) yields ' S* + SC + C'S = -Q - KFK, S = 0 (5.21) Consider the general case of a time-varying system with time-varying gains. In the linear formulation I(K^, K, t^), E = 0. Substituting (5.20) into (5.18) yields t f / (F(S - S)yy' + yy'(S - S)F)dt = 0 (5.22) Equation (5.22) i s satisfied when S = S and (5.19) i s satisfied when K - S =• S. Substituting this- constraint into (5.21)- gives- the- standard matrix Riccati d i f f e r e n t i a l equation for the optimum time-varying gain. In the case of a constant gain matrix the above procedure must be modified. Heuristic considerations indicate that some sort of time average of S should result in a good suboptimum constant gain (For a time-invariant system, the steady-state solution of (5.21), taking K = S, often gives a good suboptimum result). The correct time average to use is given by (5.22). Replacing Sy by p, (5.22) can be solved for S by use of Kronecker products (See Appendix III). The vector representation (S) of the matrix S satisfies the linear equation (5.23) A method of successive approximations could be considered for solving the two-point boundary value problem. Using a nominal gain K = K̂ , (5.6) is integrated in the forward direction and (5.16) in the backward direction. Equation (5.23) can then be solved for S and (5.19) used to determine the gain increment. The updated gain i s and the procedure is repeated iteratively. 
While successive approximations' seems straightforward, they generally f a i l to converge in the case of nonlinear two-point boundary value problems. The special structure of the formulated problem, however, allows some insight to be obtained concerning the convergence of successive approximations. 5.4 Gradient Descent and Successive Approximations 3 J Let (6K) and (—) be the vector representations for a matrix v 3K v gain increment and the gradient matrix, respectively. The incremental cost associated with 6K is given by (5.24) (5.25) where (5.26) is positive semidefinite (see Appendix III). Consequently, i f 6K = w(S - K), 0 < w = 1 (5.27) then 5J = -w(S - K)'W(S - K) = 0 (5.28) The gain i s updated by \+l " \ + W (\ " V (5.29) where the subscript k i s used to indicate an iteration stage. For the problem formulation under discussion, successive approximations (5.24) is a special case (w = 1) of. gradient descent (5.29) with a weighting matrix (5.26). Consequently, the proposed successive approximations method i s equivalent to a deflected gradient method. In the case of quadratic cost indices i t is.known that some deflected gradient methods (such as conjugate gradients) result in significantly improved convergence rates. Having established the property that (5.24) is a deflected gradient descent, i t i s of interest to see whether i t has the properties of improved convergence, such as quadratic convergence. It i s known that successive approximations results in quadratic convergence in the case of unconstrained time-varying gains. Quadratic convergence can be proven by considering (5.21) for two successive iterations. Introducing k+1 S, , the linear matrix dif f e r e n t i a l equation for AS, can be solved using standard procedures, giving -AS k(0) o / f c f K + ± ( T , 0 ) ^ - - V ^ C t , 0)dx + Cf * k + l ( T > ° ) C ( \ + l ~ ~ W + ( \ " \ + l ) F " ( \ + l " S k ) ] W ( T ' ° ) d T (5.30) ' In (5.30) ^ ( t , x) i s the state-transition matrix of (5.6) at the k-th iteration stage. It i s easily shown that the cost index at stage k i s given by J , = — y'S.y . The incremental cost AJ, = J , , , - J , is found k 2 o k o k k+1 k from (5.30) and can be expressed in the form - A J k - T r t l y y C k K K ^ - \)H\+± ~ VI + T r K ^ ) ^ - K ^ ) ] (5.31) where - Cf [Wk+i(\+i - V F + F(\+i - VWk+i^ (5-32) and where the notation $ (k) is used to indicate that (5.12) i s evaluated yy at stage k. The gradient condition (5.19) can be obtained from (5.31) by noting that Tr[(A + A')B] = 0 for arbitrary symmetric matrix B only when A + A' =0. Equating (5.32) to zero yields (5.19). The remarkable properties of the case with unconstrained time- varying gain are seen from (5.32). By setting K k +^ = S k > (5.32) vanishes and the decrease in cost (5.31) i s quadratic [38]. This is accomplished by solving (5.21) .using successive approximations. The same quadratic convergence would result in the case of constant gain i f could be chosen to make (5.32) vanish. In principle this may be possible. However, y^-^ depends on K̂ -̂ * Consequently, any attempt to satisfy the condition ( ^ ) ^ = 0 would result in poor convergence. It i s reasonable to look for an approximation by replacing 87 y^ +^ in (5.32) by y^. Let (^r)^ represent the matrix arising from this substitution. It i s seen that <M>k • V k ) (V-l " V F + f ( \ + l " V $ y y ( k > ( 5 i 3 3 ) where S^ is defined by (5.22) evaluated at stage k. 
It follows from (5.33) that (^O^ = 0 i f i s determined by successive approximations (set w = 1 in (5.29)). Since (7^). i s an approximation to (7^), i t i s reasonable to anticipate that successive approximations results in rapid convergence. A rigorous proof of quadratic convergence would require shoxtfing that the second term of (5.31) is either never negative or, i f i t i s negative for a certain gain sequence, that i t s absolute value is less than a fixed fraction of the f i r s t term, which for Kjc+-^ f1 is always positive. From a practical point of view such a proof i s not essential. If the choice w = 1 should happen to result in divergence, then a choice w < 1 can be made which w i l l always give a convergent sequence (see (5.28)). The proposed computational algorithm is the following. Choose a nominal K = K^. Integrate (5.6) in the forward direction and (5.10) in the backward direction. Solve (5.18) for S^ and update the gain by use of (5.29). The rate of convergence i s controlled by w. Computationally, the proposed method i s similar to the conventional steepest descent procedure. The. essential, difference i s in the use. of a variable weighting matrix (5.26) to give a deflected gradient. The associated change in the gain matrix i s actually given by the simple successive approximations formula (5.29). Limited computational experience to date indicates that this variable gradient deflection allows a large step (w = 1) to be taken resulting in rapid convergence. Note that the linear and nonlinear formulations of the sensi- t i v i t y problem are treated in a unified manner. This arises from the fact that in both formulations a nominal gain K is chosen to perform the computations. The above algorithm is suitable for off-line design studies. Since the optimum constant gain depends on i n i t i a l conditions, there i s a problem of practical on-line implementation. A method for eliminating i n i t i a l conditions and deriving suboptimal feedback control laws is given in.[39]. A method which is suitable for combined on-line e s t i - mation and control is discussed in Section (5.6). 5.5 Optimum Constant Gain: Trajectory Sensitivity Reduction Because of i t s engineering importance, open-loop and closed-loop system sensitivities have been extensively investigated (see Reference [32] for a bibliography). However, for the closed-loop sensitivity formulation given by (5.6) and (5.8), a paradox arises i f unconstrained time-varying gains are permitted. This paradox has been discussed in the literature but i t s effect on sensitivity reduction does not seem to have been f u l l y appreciated [33, 37]. A simple example can be- used to illustrate- the paradox. Consider- x = -ax + u, x(0) = x (5.34) o z = -x - az - aKjZ, z(0) = 0 (5.35) u = -Hi K - K 2z = 0 (5.36) J = -| /°°(x2 + Q 2z 2 + Ru 2)dt (5.37) and suppose that and R are both chosen large compared to unity. This would seem a reasonable choice to make in order to keep both sensitivity and control effort small. The optimum time-varying gains, however, satisfy the constraint (5.36) and are i n f i n i t e . This can be seen by taking K -> + » i n (5.35). Equations (5.34)-(5.37) then yield. 1 aK x = 0 (5.38) K 2 = K 2 (5.39) (5.40) The optimum (infinite) gain can result in a very significant smaller cost than that associated with other suboptimal (but physically realizable) gains (see [37]). It i s evident, however, that minimization of (5.37) has not succeeded in the design objective, which i s to decrease system sensitivity. 
The system (5.34) with the optimum control (5.36) operates open-loop. The same d i f f i c u l t y i s experienced with a general n-th order system (see (5.1), (5.4) and (5.5)). The sensitivity problem defined by (5.6) and (5.8) i s consequently ill-posed i f unconstrained time-varying gains are permitted. From both a computational and an engineering point of view the simplest constraint i s that of time-invariant gains. It i s interesting to note that this choice was made by Kreindler in his design studies which dealt exclusively with linear time-invariant systems and t^ = + <» [3]. It seems natural to choose constant gains in this case. However, i f t^ i s f i n i t e and/or the system is time-varying, then the choice of constant gains i s not obvious. 90 5.6 On-Line Computation of Gains From an engineering point of view a very attractive feasible on-line adaptive control scheme is to identify a process as a linear time-invariant system of the form (5.1) over a f i n i t e observation interval and then apply a control of the form (5.3) over a subsequent f i n i t e control interval. In such a combined estimation and control scheme i t is often important to reduce the system sensitivity to incorrect parameter estimates. It i s , however, not generally computationally feasible to solve a nonlinear two-point boundary value problem on-line. Some form of suboptimal control must be introduced. It i s known that the steady-state solution of the matrix Riccati equation (take K = S in (5.21)) often results in a good suboptimal control. With this in mind let P = -My - Ny + e (5.41) where M and N are constant matrices. Substituting (5.41) into (5.10) and choosing M to be defined by MC + CM = -Q - K F K , (5.42) which is the steady - state form of (5.21) gives e + C e = -(NC + CN)y, e f = (M + N)y f (5.43) A For reasons of notational convenience the linear formulation I(K^, K, t^) is discussed. Substituting (5.41) into (5.11) yields F(S - M - N)$ + $ (S - M - N)F = -F$ - $ F (5.44) yy yy ey ye The decomposition (5.41) does not uniquely define N and e. Consider the possib i l i t y of defining N in terms of observational data so that the right hand side of (5.44) has a negligible effect on S. The updated gain, as given by (5.24), i s then S = K 2 = M + N (5.45) It i s seen from (5.43) that e i s "small" when N and e^ are "small". Besides the obvious trade-offs involved in keeping N and e^ simultaneously "small", there i s also the requirement for on-line computation using observational data. The following cost index accounts for the various factors which enter into an "optimal" decomposition (5.41): Je = ̂  / f (e + C'e)*W. (e + C'e)dt + ̂  e/W.e. (5.46) 2 o 1 2 r 2 f In (5.46) Ŵ  and are positive definite weighting matrices. Sustituting (5.43) into (5.46) i t i s seen that Je is an algebraic function of N, $ and y^y1^' Setting N = L + L', where L is arbitrary, allows gradient matrices to be used to determine the minimum. This yields 1̂ - = TC1 + CT + W0(M + N)i|) + iK(M + N)W_ = 0 (5.47) oL 2 t r 2 where T = W1S$ + $ SW, 1 yy yy 1 S = NC + C'N <J>f = y fy^ (5.48) Equation (5.47) can be solved for N by use of Kronecker products. It i s seen that (S) = G. (N) v 1 v ( T ) v = G 2(S) v where (TC* + CT) = G.(T) v 3 v [ G1 G2 G3 + G 4 ] ( N ) v = ~ G 4 ( M ) v ( 5 * 4 9 ) G 1 = C , © I + I © C ' =G 3 2 yy 1 1 yy G 4 = <|»f ® W2 + W2 O ^ f (5.50) Efficient algorithms are available for evaluating (5.50) [40]. The solution of (5.49) for N involves one matrix inversion. The system parameters... in. the. C. 
matrix- are- determined, by the identifier from, ob.- servational data. The matrices $ and i i , can be found by a forward yy f integration of (5.6). Alternatively, i t may be possible to use measured states or state estimates to evaluate these matrices. The gain compu- tation (5.45) i s then an algebraic problem and replaces the two-point boundary value problem. On-line use of (5.42), (5.49) and (5.45) to update the gain is computationally equivalent to solving these equations by successive approximations. The f i r s t iteration must result in an improved sub- optimum gain. It i s desirable, but no essential, that successive ite r a - tions result in a convergent suboptimum sequence. Since and W2 are arbitrary positive definite weighting matrices, i t is not possible to reach general conclusions concerning the convergence of successive approximations. It i s easy to show, how- ever, that given a nominal K, weighting matrices exist which result in one-step convergence. Suppose that the optimum gain K* is determined by off-line computations. Equation (5.42) can be solved for M. Let (see (5.45)) N = K* - M and take Ŵ  = I. Equation (5.47) is a Liapunov type equation which can be solved for a positive definite l ^ . With the weighting matrices so defined o there is one-step convergence to the optimum gain. It should therefore be possible to choose weighting, matrices so that the f i r s t iteration results in an improved suboptimum gain for small variations about nominal parameter values. If the system is time-invariant and i f N is small in the sense that terms involving products of the elements of N can be neglected, then i t is seen by sub- stituting (5.45) into (5.42) that M is the steady-state gain. In this case (5.42) can be solved for the steady-state gain and N evaluated by (5.44). 5.7 Examples To give a non-trivial i l l u s t r a t i o n of the successive approxi- mations as well as the approximation method discussed in Section (5.6) t^ should be chosen smaller than the settling time of the open-loop response for the nominal gain, and the nominal gain should dif f e r signi- ficantly from the optimum gain. These requirements are satisfied in the case of the second order system C l = " q X2 + U ' xl(°) = 1 , 0 (5.51) x 2 = x x , x 2(0) = -1.0 t J = \ f f (x 2 + 3x 2 + u 2)dt o I o 1 2. 94 by taking q = 1, tf = 2.0, and = (-10, 10). (The system i s an oscillator for u = 0 and is unstable for the chosen nominal gain). Successive approximations ((5.6), (5.10), (5.23) and (5.24)) required eight iterations to converge. No attempt was made to determine weighting matrices which would result in rapid convergence for the on-line approximation method ((5.42), (5.49) and (5.45)). An arbitrary choice = W2 = I was made. Six iterations were required. The results are the following: 1. Successive approximations K̂ ** = (-1.06, - .215), J ** = 1.518 2. On-line approximation method. K ** = (-1.713, - .952), J q * * = 1.565 (5.52) The minimum cost for unconstrained time-varying gain i s J * = 1.503. It i s seen that both methods give very good suboptimal constant gains. The increases are only 1% and 4% for case (1) and (2), respectively. To i l l u s t r a t e the sensitivity problem consider (5.51) to be augmented by (5.5) and take J = 5 ftf ( z 2 + z 2)dt + J (5.53) o 1 2 o A comparison i s made with Lee's et a l . method which uses a time-varying gain and the choice t^ = 10 is made. 
The following results are obtained (a) Lee's method [35] (augmented state feedback; t = t ) u = -2x± - 1.5x2 + .25z1 - .5z 2, J * = 1.94 (5.54) (b) Constant gain (augmented state feedback) u = - 6 . 8 x i - 6.6x 2 + 4.66Z;L + 20z 2 > J * = 1.70 (5.55) (c) Constant gain (Non-augmented state feedback) u = -1.812X - 1.22x2, J = 2.04 (5.56) Figures (5.1)-(5.3) i l l u s t r a t e the improvements that result by- use of constant gain and augmented state feedback. The magnitude of the control is smaller, the cost index is smaller and there is a significant reduction in trajectory sensitivity. That the design objective of tra- jectory sensitivity reduction is achieved can be seen by comparing the sensitivity functions of case (b) and case (c) (case (c) is based on (5.51) taking t f = 10). The optimality condition (5.17) was checked and gave (-.0002, .0004). It should be noted that Lee's method uses a time-varying gain which i s a suboptimum solution of an ill-posed sensitivity problem. (The gain values given in (5.54) are for t = t ). It is therefore not surprising that an optimum constant gain can give significantly better results. Note that increasing t^ from 2 to 10 and using sensitivity augmentation does not result in an appreciable increase in the cost index (compare (5.55) and (5.52). The convergence of successive approximation (5.29) was investi- gated by choosing K± = (-1.5, -1.0), K 2 = (-0.1, -0.5) (5.57) as the i n i t i a l gain. This choice is significantly different from the optimum gain (see (5.55)). Only five iterations were required to converge to the optimum. The following example is used to i l l us t ra te the on- l ine approxi- mation method when i t i s applied to a rea l i s t i c system of moderate complexity. Figure (5.4) i l lus t ra tes a block diagram for the feedback control of a power generator connected to an in f in i te bus [41]. The matrices in (5.1) are A = 0.0 1.0 0.0 0.0 -0.676 -.25 25.0 0.0 0.0 0.0 -2.0 2.0 0.0 -0.06 0.0 -2.0 , B = 0.0 0.0 0.0 2.0 (5.58) The components of the state vector represent the following variables; x^ = power angle deviation, x 2 = angular frequency deviation, x^ = mechanical power deviation, x̂  = governor position variat ion, u = governor posit ion control . The i n i t i a l state is taken to be x^ = (0. , 3.0, 2.0, 0.0) and in (5.2) and (5.46) the choice Q± = W = W2 = I and R = 1 i s made. The choice t^ = 1 sec. i s made to accentuate convergence problems, in case they should exist . The minimum cost using the optimum time-varying gain S(t) (see (5.21)) is J* = 33.215. Successive approximations for the optimum con- stant gain i s i n i t i a l i z e d using K = S(0). The results for the cases (a) K = M, (b) K = M + N, are as follows. k = 1 2 3 4 (a) J = 45.404 39.167 36.636 36.121 (b) J = 38.035 36.267 35.601 34.682 (a) u* = -.66x± - 1.22x2 -8.44x3 -3.35x4 (b) u* = -.069x -1.47x2 -10.67x3 -5.47x4 (5.59) (5.60) The suboptimal choice K = M in case (a) amounts to taking the solution of the i n f i n i t e time case for the constant gain. It is known that (5.42) converges quadratically to the optimum gain for the i n f i n i t e time case. The suboptimum choice K = M + N in case (b) is based on the approximation technique discussed in Section (5.6). It is seen to have a quadratic-type convergence and results in a smaller cost index than for case (a). In fact, the f i r s t iteration results i n a significant decrease. A one-step improvement of this kind could be of practical significance in an on-line application. 
FIG. (5.1) EXAMPLE 1. CONTROL AS A FUNCTION OF TIME FOR: (a) LEE'S METHOD, (b) CONSTANT GAIN AND AUGMENTED STATES, (c) CONSTANT GAIN AND NON-AUGMENTED STATES.
FIG. (5.2) EXAMPLE 1. TRAJECTORY SENSITIVITY FUNCTION z_1 FOR: (a) LEE'S METHOD, (b) CONSTANT GAIN AND AUGMENTED STATES, (c) CONSTANT GAIN AND NON-AUGMENTED STATES.
FIG. (5.3) EXAMPLE 1. TRAJECTORY SENSITIVITY FUNCTION z_2 FOR: (a) LEE'S METHOD, (b) CONSTANT GAIN AND AUGMENTED STATES, (c) CONSTANT GAIN AND NON-AUGMENTED STATES.
FIG. (5.4) EXAMPLE 2. BLOCK DIAGRAM OF A SYNCHRONOUS GENERATOR VOLTAGE CONTROL SYSTEM (SPEED GOVERNOR - TURBINE - GENERATOR).

6. OPTIMAL ADAPTIVE CONTROL

6.1 Introduction

In this chapter we will discuss one possible strategy for optimal adaptive control which is particularly attractive from an engineering point of view. The strategy is to identify system parameters at the end of an observation interval and then to use the parameters to derive an "optimal" control for a subsequent control interval. This chapter develops one version of this strategy which is based on the identifiers and optimum controls developed in Chapters 2 and 5.

6.2 Optimum Control During the First and Subsequent Optimization Intervals

The particular optimization technique to be used during the first optimization interval is of importance since the initial information about the system is usually insufficient to derive a good control. If parameter estimates are poor then the optimum constant gains may not be sufficiently close to the correct values. The problem of convergence can become serious during the first optimization interval. Hence, it is recommended to use the successive approximations method with a step size constant w < 1 (see equation (5.27)).

Figures (6.1-a, b, c, d) show the time intervals associated with a recursive observation-optimization strategy which would be suitable for the optimal adaptive systems. The system is considered to start operating at time t_o and to terminate its operation at time t_f. In Figures (6.1-a, d), T is the length of the sampling interval. The length of each observation interval is NT seconds and the start of the i-th observation interval is denoted by the instant t_i^o. The displacement between two successive observation intervals is rT seconds (see Fig. (6.1-b)), where r is an integer. The length of each optimization interval is T_c seconds and the start of the i-th optimization interval is denoted by the instant t_i^c (see Figs. (6.1-c, d)). The displacement between two successive optimization intervals is ℓT seconds. The parameter ℓ is a multiple of r, i.e., ℓ = mr. The value of m need not be the same for every two successive optimization intervals. The choice of the parameter m is based on the rate of variation of certain performance indices to be subsequently discussed (see Subsection (6.3.4)).

FIG. (6.1-a) THE INITIALIZATION INTERVAL AND THE OPERATION PERIOD.
FIG. (6.1-b) THE SUCCESSIVE OBSERVATION INTERVALS.
FIG. (6.1-c) THE SUCCESSIVE OPTIMIZATION INTERVALS (ℓ = r).
FIG. (6.1-d) THE SUCCESSIVE OPTIMIZATION INTERVALS (ℓ = 2r).

During the initialization interval (t_o - NT ≤ t < t_o) special input test signals can be used to drive the system.
Measurements of the input and output at successive sampling instants are used to determine initial estimates of the unknown parameters of the system. The choice of the initial conditions at the start of the initialization interval (t = t_o - NT) is completely arbitrary. At the end of each observation interval new estimates of the plant parameters are computed. At the beginning of each optimization interval new estimates of the "optimum" constant gains are made using the latest estimates of the plant unknown parameters. These new gains are used for the coming control interval of ℓT seconds.

6.3 Examples

In this section a linear, continuous system with time-invariant parameters is considered. Some of the system parameters are unknown and an optimal adaptive control is computed for the system. The recursive observation-optimization strategy described in the previous section is followed. The identification of the unknown parameters is performed using the digital identifier developed in Chapter 2. The numbers of augmented sets of states for Examples 1 and 2 are one and two, respectively. The test inputs used during the initialization interval are taken to be step functions with unity magnitude in both examples. The initial conditions at the start of the initialization interval are taken to be identically zero in both Examples 1 and 2.

It should be noted that use of a digital identifier results in quantization noise. Consequently, even though the examples treat deterministic cases, the system models are perturbed by quantization noise. This noise also affects the value of the optimum constant feedback gains which are computed using the identified parameters. It should be further noted that the optimum constant gains depend on the system state at the start of an optimization interval. The examples illustrate convergence of the proposed adaptive strategy in the presence of quantization noise.

6.3.2 Example 1

The system considered in this example is represented by the dynamic model

\dot{x}_1 = a_1 x_2 + u
\dot{x}_2 = -a_2 x_1                                                      (6.1)

where the nominal values for a_1 and a_2 are -1.0 and 1.0, respectively. The initial conditions are x_1(0) = x_2(0) = 1.0. The essential parameters of the recursive observation-optimization strategy (see Figs. (6.1-a, d)) are taken to be:

T = 0.2,  t_f = 10.,  t_o = 0.,  r = ℓ = 2,  N = 10,  T_c = NT = 2.0      (6.2)

It is required to compute the optimum constant feedback gains K_1 and K_2 such that the performance index

J_i = (1/2) ∫_{t_i^c}^{2 + t_i^c} (x_1^2 + x_2^2 + u^2) dt                (6.3)

is minimized using the new estimates of a_1 and a_2 which are obtained at the end of the i-th observation interval. The optimum constant gains are computed using the successive approximations method developed in Section (5.2) with w = 1. The optimum solution is assumed to be obtained when the inequality

|J_i^k - J_i^{k-1}| ≤ 10^{-5}                                             (6.4)

is satisfied, where J_i^k is the value of J_i at the end of the k-th iteration. The time t_i^c (see Figs. (6.1-c, d)) denotes the beginning of the i-th optimization interval. During the first optimization interval the initial guess for the feedback gains K_1 and K_2 is taken to be K_1 = K_2 = -10. For the subsequent optimization intervals, the initial guess for the i-th optimization interval is taken to be the optimum values obtained for the (i-1)-th optimization interval. The stopping rule (6.4) was satisfied after six iterations for the first optimization interval.
No more than two iterations were required to satisfy the stopping rule given by (6.4) for the subsequent optimization intervals. The performance index of the optimal adaptive system is taken to be

J_a* = Σ_{i=1}^{25} ΔJ_i*                                                 (6.5)

where the optimum value of J_i (see (6.3)) is denoted by J_i* and ΔJ_i* is the value of J_i* for an integration interval t_i^c ≤ t ≤ t_i^c + 0.4. Since the optimum constant gains computed at the start of the i-th optimization interval are only used for a period of ℓT seconds starting from the instant t_i^c, J_a* as given by (6.5) is the total operation cost. For the sake of comparison the optimum value of the performance index

J_o = (1/2) ∫_0^{10} (x_1^2 + x_2^2 + u^2) dt                             (6.6)

is computed using (6.1) with a_1 and a_2 being equal to their respective nominal values and with the control feedback gains taken to be the optimal time-varying gains. The optimum value of J_o is found to be 2.046 while the value of J_a* (see (6.5)) is equal to 2.202, i.e., the increase in the performance index value does not exceed 8%. Figures (6.2) and (6.3) show that the identification errors do not exceed 5%.

6.3.3 Example 2

The system considered in this example is represented by the dynamical model

\dot{x}_1 = a_1 x_1 + 2u
\dot{x}_2 = a_2 x_3 + 2u
\dot{x}_3 = a_3 x_2 - u                                                   (6.7)

where the nominal values for a_1, a_2, and a_3 are -1.0, 2.0, and -2.0, respectively. The initial conditions are x_1(0) = x_2(0) = x_3(0) = 1.0. The essential parameters of the recursive observation-optimization strategy are as given by (6.2). It is required to compute the optimum constant gains K_1, K_2 and K_3 such that the performance index

J_i = (1/2) ∫_{t_i^c}^{2 + t_i^c} (x_1^2 + x_2^2 + x_3^2 + u^2) dt        (6.8)

is minimized during the i-th optimization interval. The optimum solution is assumed to be obtained when the inequality (6.4) is satisfied.

In this example, using the successive approximations method with w = 1 (see equation (5.27)) during the first optimization interval results in a diverging sequence of iterations. Thus w is taken to be 0.1 and the initial guesses for the feedback gains are taken to be K_1 = -1, K_2 = -2, and K_3 = -3. Twenty iterations are required to satisfy the stopping rule (see (6.4)). For the subsequent optimization intervals, the initial guess for the i-th optimization interval is taken to be the optimum values obtained for the (i-1)-th optimization interval. No more than two iterations are required to satisfy (6.4) for the subsequent optimization intervals and w is taken equal to unity.

For the sake of comparison the optimum value of the performance index

J_o = (1/2) ∫_0^{10} (x_1^2 + x_2^2 + x_3^2 + u^2) dt                     (6.9)

is computed using (6.7) with a_1, a_2, and a_3 being equal to their respective nominal values and with time-varying control feedback gains. The optimum value, J_o*, of J_o was found to be 0.655 while the value of J_a* (see (6.5)) was equal to 0.705, that is, the increase in the performance index value does not exceed 7.5%. Figures (6.6), (6.7), and (6.8) show the identification results for the unknown parameters a_1, a_2, and a_3, respectively. The maximum identification errors do not exceed 5%. The rapid rate of identification is evident in both this example and the previous one since all the unknown parameters have been identified to a good degree of accuracy after only one observation interval. In Figures (6.4) and (6.5), the piecewise constant curves which are drawn as solid lines represent the optimum constant gains K_1* and K_2*, respectively.
The parameters a_1 and a_2 (see (6.1)) are updated at the end of each observation interval and a new estimate of the feedback gains is made at the start of the following optimization interval. Since the optimum values of the constant feedback gains depend on the plant state at the start of the optimization interval, these optimum values are not constant during the whole operation period (see Figure (6.1-a)). The curves drawn as dashed lines in Figures (6.4) and (6.5) represent the optimum time-varying feedback gains K_1 and K_2, respectively. These gains minimize the performance index given by equation (6.6) subject to the dynamical constraints given by (6.1) with a_1 and a_2 being equal to their respective nominal values.

Figures (6.9), (6.10), and (6.11) show the optimum feedback gains K_1*, K_2* and K_3*, respectively. The curves drawn as solid lines represent the optimum constant gains. The parameters a_1, a_2, and a_3 (see (6.7)) are updated at the end of each observation interval and a new estimate of the feedback gains is made at the start of the following optimization interval. The optimum constant gains are not constant along the whole operation period but, unlike the situation in Example 1, the feedback gains for Example 2 (see Figures (6.9), (6.10), (6.11)) change less frequently. The curves drawn as dashed lines in Figures (6.9), (6.10), and (6.11) represent the optimum time-varying feedback gains K_1, K_2, and K_3, respectively. These gains minimize the performance index given by equation (6.9) subject to the dynamical constraints given by (6.7) with a_1, a_2, and a_3 being equal to their respective nominal values.

The recursive observation-optimization strategy described in Section (6.2) is characterized by its flexibility. This flexibility is due to the fact that it is possible to adjust the displacement distance ℓT between the successive optimization intervals (see Figures (6.1-b, c, d)) according to the behaviour of the system under consideration. The shortest time interval during which the gains remain constant is equal to 2rT in Example 1 (see Figures (6.4), (6.5)) and equal to 4rT in Example 2 (see Figures (6.9), (6.10), (6.11)). Hence, using ℓ = 2r and 4r for Examples 1 and 2, respectively, results in a satisfactory performance with less computational effort. The value of ℓ used in both Examples 1 and 2 was ℓ = r.

In the following discussion two performance indices will be introduced. These performance indices can be used to decide whether or not it is necessary to update the identified parameters and/or the constant feedback gains at the end of each observation interval.

6.3.4 Discussion

The following performance index can be used to reach decisions concerning the necessity for updating model parameters:

J_I = (1/2) ∫_0^{NT} (x_p - x_m)' Q (x_p - x_m) dt                        (6.10)

In Eqn. (6.10), Q is a positive definite weighting matrix, x_p is the measured plant state during the latest observation interval, and x_m is the corresponding model state during the same interval using the latest identified values of the plant unknown parameters. The following performance index can be used together with J_I given by (6.10) to reach decisions concerning the necessity for updating feedback gains:

J_K = [x_p'(t_i^c) x_p(t_i^c) - x_p'(t_{i-1}^c) x_p(t_{i-1}^c)] / [x_p'(t_{i-1}^c) x_p(t_{i-1}^c)]     (6.11)

In Eqn. (6.11), x_p(t_i^c) is the plant state at the start of the i-th optimization interval (the present interval), and x_p(t_{i-1}^c) is the plant state at the start of the (i-1)-th optimization interval.
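As a rough illustration of how these two indices could be evaluated from sampled data, the Python sketch below assumes the plant and model trajectories over the latest observation interval are stored as arrays of shape (N+1, n) with sampling period T; the function names, the array layout and the trapezoidal quadrature are illustrative assumptions rather than part of the thesis formulation.

    import numpy as np

    def model_mismatch_index(x_p, x_m, Q, T):
        """J_I of (6.10): integrated weighted squared plant/model error over the
        observation interval, approximated with the trapezoidal rule."""
        err = x_p - x_m                                   # shape (N+1, n)
        q = np.einsum('ki,ij,kj->k', err, Q, err)         # (x_p - x_m)' Q (x_p - x_m) at each sample
        return 0.5 * T * (0.5 * q[0] + q[1:-1].sum() + 0.5 * q[-1])

    def gain_update_index(xp_now, xp_prev):
        """J_K of (6.11): relative change of the squared state norm between the
        starts of the present and the previous optimization intervals."""
        prev = float(xp_prev @ xp_prev)
        return (float(xp_now @ xp_now) - prev) / prev

    # decision rule: re-identify the parameters when J_I exceeds its bound,
    # recompute the constant gains when J_K exceeds its bound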
The assignment of values for the upper bounds of J_I and J_K, respectively, would depend upon the degree of familiarity with the particular system under investigation (as can be seen from Figures (6.4), (6.5), (6.9), (6.10), and (6.11)). Taking ℓT = 2rT would be a reasonable choice for the system considered in Example 1. For Example 2 taking ℓT = 4rT would be a reasonable choice. Parameter or gain updating would be performed whenever J_I or J_K exceeded their respective bounds.

FIG. (6.2) EXAMPLE 1. IDENTIFICATION OF PARAMETER a_1 WITH QUANTIZATION NOISE.
FIG. (6.3) EXAMPLE 1. IDENTIFICATION OF PARAMETER a_2 WITH QUANTIZATION NOISE.
FIG. (6.4) EXAMPLE 1. THE OPTIMUM PIECEWISE CONSTANT FEEDBACK GAIN K_1.
FIG. (6.5) EXAMPLE 1. THE OPTIMUM PIECEWISE CONSTANT FEEDBACK GAIN K_2.
FIG. (6.6) EXAMPLE 2. IDENTIFICATION OF PARAMETER a_1 WITH QUANTIZATION NOISE.
FIG. (6.7) EXAMPLE 2. IDENTIFICATION OF PARAMETER a_2 WITH QUANTIZATION NOISE.
FIG. (6.8) EXAMPLE 2. IDENTIFICATION OF PARAMETER a_3 WITH QUANTIZATION NOISE.
FIG. (6.9) EXAMPLE 2. THE OPTIMUM PIECEWISE CONSTANT FEEDBACK GAIN K_1.
FIG. (6.10) EXAMPLE 2. THE OPTIMUM PIECEWISE CONSTANT FEEDBACK GAIN K_2.
FIG. (6.11) EXAMPLE 2. THE OPTIMUM PIECEWISE CONSTANT FEEDBACK GAIN K_3.

7. CONCLUSIONS

In Chapter 2 the generalized equation error has been shown to be applicable to a mean-square method of rapid digital estimation of linear system parameters. Due to the imposed structure of the estimator the manipulation of high order matrices is avoided. The matrices used by the estimator have a block diagonal structure. Consequently only the lower order matrix within a block need be considered. It has been shown that the matrix can be recursively evaluated and that matrix inversions are not required. Examples illustrate the effectiveness of the estimator for a variety of cases dealing with quantization noise as well as measurements noise.

In Chapter 3 a simple algorithm for computing the generalized inverse of a matrix has been developed. The advantage of this algorithm over other methods is that it eliminates the need for Gram-Schmidt orthogonalization and its associated (in the interest of accuracy) reorthogonalization as new vectors are introduced. Inherent in the method is a very simple check on the linear dependence of the different matrix columns.

In Chapter 4 a two-stage method for estimating time-invariant and time-varying parameters in linear systems has been developed. During the second stage, the time-varying parameters are considered as unknown control inputs to a linear subsystem of known dynamics. This method for estimating time-varying parameters is computationally simple in that the estimates are obtained by the solution of a linear two-point boundary-value problem.
Standard methods of solution result in one-step convergence. A method of state and control augmentation in conjunction with discretization and mean-square error minimization has been shown to be effective for the first stage of estimation. This approach reduces undesirable coupling between the two stages which could result in convergence problems. Numerous examples illustrate the effectiveness of the method.

In Chapter 5 a method of successive approximations for solving the two point boundary-value problem for optimum constant gain matrices has been developed. The method is shown to be computationally equivalent to a deflected gradient method. Convergence can always be achieved by choice of a scalar step-size parameter. Rapid convergence is important in off-line design studies when the effect of various weighting matrices on optimal control performance is investigated. A close relationship between successive approximations and the solution technique for the linear time-invariant case with infinite t_f is exploited so that in many cases rapid convergence is achieved. The problem of trajectory sensitivity reduction by augmented state feedback is shown to be well-posed if gain matrices are constrained to be constant.

The problem of on-line implementation of a constant gain matrix whose elements depend on initial conditions is discussed. The simplest approach is to take a weighted average of the computed optimum gain matrices. This results in an average suboptimal linear feedback control law. A second approach is to introduce nonlinear feedback control [39]. An on-line approximate method is developed which appears suitable for systems whose parameters must be identified. The two point boundary-value problem is replaced by an algebraic problem, the solution of which gives a suboptimal constant gain. It does not appear feasible to give a formal proof of convergence or a proof of improved response. By proper choice of weighting matrices, however, it does seem possible to achieve an improvement with the first iteration. Examples and parameter values were chosen to deliberately accentuate situations where convergence problems could arise. No difficulties, however, were encountered and rapid convergence was achieved in all cases investigated.

A natural development of the present research is to augment the sensitivity vector with the sensitivity functions with respect to the initial conditions. This would reduce the sensitivity of the optimum constant gains with respect to the state of the plant at the start of the optimization interval. Accordingly, the optimum constant gains would be updated less frequently.

Further research is needed concerning the "best" choice of the constrained structure of the feedback gains. One possibility has been investigated in this thesis where the gains are taken to be time-invariant. A more general choice is to use specific time-varying gains. Let the feedback gains be constrained by the specific structure K(t) = FG(t), where the time-varying matrix G(t) is to be specified before starting the optimization process. The time-invariant gain matrix F is to be computed according to the algorithm developed in Chapter 5. The problem is how to choose G(t) systematically such that the resulting time-varying gain matrix K(t) is "optimum". A specific time-varying gain could yield better results concerning sensitivity function reduction.
APPENDIX I

INITIAL CONDITIONS LINEAR LEAST-SQUARES ESTIMATOR

The values of the optimum constant feedback gains as computed by the algorithm developed in Chapter 5 depend on the plant state at the start of the optimization interval. Accordingly, a good estimate of the plant initial conditions is required. In a noise free situation, it may be possible to obtain the initial conditions by measuring the state of the plant. However, if the measurements of the plant state are contaminated by noise then an initial conditions estimator is required. One possible choice of such an estimator is the linear least-squares one. This estimator does not require a priori knowledge about the noise statistics and is characterized by its simplicity.

Considering the i-th optimization interval (see Figures 6.1-a, b, c, d), the instant t_i^c indicates the start of this interval as well as the completion of the i-th observation interval. Let the plant be represented (for t_i^c - NT ≤ t ≤ t_i^c) by

\dot{x} = A_i x + B_i u                                                   (I.1)

where A_i and B_i are the identified values of the plant parametric matrices A and B, respectively. These values are obtained according to the measurements taken during the i-th observation interval. The second term on the right hand side of equation (I.1) can be expressed as

B_i u = -F_i K_i x                                                        (I.2)

where the definition of the matrix F_i is as given in Chapter 5 (F = B R^{-1} B') and K_i represents the piecewise constant optimum feedback gain matrix being used during the i-th observation interval. Substituting (I.2) into (I.1) yields

\dot{x} = (A_i - F_i K_i) x                                               (I.3)

The transition matrix ψ(t_i^c, τ) of the dynamic system represented by (I.3) can be obtained by solving the following matrix differential equation

dψ/dτ = -(A_i - F_i K_i)' ψ,     ψ(t_i^c, t_i^c) = I                      (I.4)

Let x_K represent the noisy measurements of the plant state at t = t_i^c - KT, and let x̂_K be an estimate of x_K defined by

x̂_K = ψ_K x̂(t_i^c)                                                        (I.5)

where x̂(t_i^c) is the estimate of x(t_i^c) which minimizes the quadratic error function

e = (1/2) (x_a - x̂_a)' (x_a - x̂_a)                                        (I.6)

where x_a is the vector obtained by stacking the measurements x_K, K = 1, ..., N, taken over the observation interval:

x_a = [x_1', x_2', ..., x_N']'                                            (I.7)

The corresponding stacked estimate x̂_a is given by

x̂_a = ψ_a x̂(t_i^c)                                                        (I.8)

where

ψ_a = [ψ_1', ψ_2', ..., ψ_N']'                                            (I.9)

The value of x̂(t_i^c) which minimizes the error function e given by (I.6) is known to be

x̂(t_i^c) = (ψ_a' ψ_a)^{-1} ψ_a' x_a                                       (I.10)

Equation (I.10) gives the best estimate of the plant state at t = t_i^c, which is the initial condition for the i-th optimization interval (the present interval). These initial conditions are required in order to compute new values for the optimum constant feedback gains.

APPENDIX II

ANALYTICAL SOLUTION FOR THE LIAPUNOV MATRIX EQUATION [42]

This algorithm is characterized by its simplicity since no matrix inversion is needed and the solution is obtained in a closed form and not in a numerical or iterative way. Consider the matrix equation

S'X + XS = -Q                                                             (II.1)

where S, X and Q are all of dimension (n x n). Let

S_k = Σ_{i=0}^{k} λ_i S^{k-i}                                             (II.2)

where the parameters λ_i (i = 0, ..., n-1) are the coefficients of the characteristic equation of the matrix S. These parameters can be computed easily using Faddeev's method:

λ_i = -(1/i) Tr[S S_{i-1}],   S_i = S S_{i-1} + λ_i I,   i = 1, ..., n
λ_0 = 1,   S_0 = I,   S_n = 0                                             (II.3)

The last relation is used as a check.
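The recursion (II.3) is easy to mechanize. The short Python sketch below (an illustration, using a made-up 2 x 2 test matrix) computes the coefficients λ_0, ..., λ_n and uses the relation S_n = 0 as the built-in check mentioned above.

    import numpy as np

    def faddeev_coefficients(S):
        """Coefficients of det(sI - S) = s^n + lam_1 s^(n-1) + ... + lam_n
        via the recursion (II.3)."""
        n = S.shape[0]
        lam = [1.0]                # lambda_0 = 1
        Sk = np.eye(n)             # S_0 = I
        for i in range(1, n + 1):
            SSk = S @ Sk
            li = -np.trace(SSk) / i
            lam.append(li)
            Sk = SSk + li * np.eye(n)
        assert np.allclose(Sk, 0.0), "S_n should vanish (consistency check)"
        return np.array(lam)       # [1, lambda_1, ..., lambda_n]

    S = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])
    print(faddeev_coefficients(S))

For this test matrix det(sI - S) = s^2 + 3s + 2, so the routine returns [1, 3, 2] and the check S_2 = 0 is satisfied exactly.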
We may write down the solution to equation (II.1) as

X = [1 / ((-1)^n R(S, -S))] Σ_{k=0}^{2n-2} Δ_k(S, -S) Σ_{ℓ=0}^{k} (-1)^ℓ S_ℓ' Q S_{k-ℓ}      (II.4)

where

R(S, -S) = (-1)^n 2^n H_n^2(S)                                            (II.5)

H_n(S) = det [ λ_1  λ_0  0    ...  0
               λ_3  λ_2  λ_1  λ_0  ...
               ...
               0    0    ...       λ_n ]                                  (II.6)

The determinant H_n(S) is known as the Hurwitz determinant, and

Δ_k(S, -S) = 0                        for k odd
Δ_k(S, -S) = 2^{n-1} H_n(S) G_k(S)    for k even                          (II.7)

where G_k(S), for k even, is the (k/2 + 1)-th cofactor of the first column of H_n(S). Hence equation (II.4) can be written in a simplified form as

X = [1 / (2 H_n(S))] Σ_{i=0}^{n-1} G_{2i}(S) Σ_{ℓ=0}^{2i} (-1)^ℓ S_ℓ' Q S_{2i-ℓ}             (II.8)

From (II.8) it is clear that the condition for a unique solution is H_n(S) ≠ 0, which is equivalent to the condition that the dynamical system (\dot{x} = Sx) is asymptotically stable. For the non-trivial case where the matrix Q is symmetrical, the term

Σ_{ℓ=0}^{2i} (-1)^ℓ S_ℓ' Q S_{2i-ℓ}                                       (II.9)

will be symmetrical, and since G_{2i}(S) and H_n(S) are scalars, X is also symmetrical and we need only solve for n(n + 1)/2 entries of X instead of n^2. Thus for this case we have

X_U = [1 / (2 H_n(S))] Σ_{i=0}^{n-1} G_{2i}(S) Σ_{ℓ=0}^{i} (-1)^ℓ (S_ℓ' Q S_{2i-ℓ})_U        (II.10)

where X_U is the upper triangle of X and (S_ℓ' Q S_{2i-ℓ})_U is the upper triangle of (S_ℓ' Q S_{2i-ℓ}) for the case i = ℓ, while for i ≠ ℓ it is a triangular matrix with its main diagonal equal to twice the main diagonal of (S_ℓ' Q S_{2i-ℓ}) and every element (i, j) equal to the sum of the corresponding element (i, j) in the upper triangle of (S_ℓ' Q S_{2i-ℓ}) plus the (i + 1, j - 1) element in the lower triangle.

APPENDIX III

The trace properties and gradient matrices used in deriving the gradient conditions are the following [43]:

Tr[AB] = Tr[BA],   Tr[AB'] = Tr[BA']
∂/∂K Tr[AKB] = A'B'
∂/∂K Tr[AK'B] = BA
∂/∂K Tr[AKBK] = A'K'B' + B'K'A'
∂/∂K Tr[AK'BK'] = BK'A + AK'B
∂/∂K Tr[AKBK'] = A'KB' + AKB                                              (III.1)

The following properties of Kronecker matrix products have been utilized in the algebraic manipulation of matrix equations [44]:

(AB)_v = (I ⊗ A)(B)_v = (B' ⊗ I)(A)_v
(A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD)
(A + B) ⊗ (C + D) = A ⊗ C + A ⊗ D + B ⊗ C + B ⊗ D                         (III.2)

If λ_i, i = 1, ..., n, and μ_j, j = 1, ..., n, are the eigenvalues of A and B, respectively, then λ_i μ_j are the eigenvalues of A ⊗ B. Because of the partitioning (5.7) of F̄, the eigenvalues of F̄ can be found from the eigenvalues of F. Consider the following eigenvalue equations:

F z_i = λ_i z_i
Φ_yy v_i = μ_i v_i                                                        (III.3)

It is seen from the defining equations for F and Φ_yy that (see (5.5) and (5.12))

(B' z_i)' R^{-1} (B' z_i) = λ_i z_i' z_i ≥ 0
∫_0^{t} (v_i' y)^2 dt = μ_i v_i' v_i ≥ 0                                  (III.4)

The matrix W is therefore positive semi-definite. Since (∂J_q/∂K)_v = W(S - K)_v, it follows that the equality sign in (5.28) holds only when the gradient condition is satisfied.

APPENDIX IV

The confidence interval length deduced in Subsection (2.7.3) (see (2.101)) is based on the assumptions that the random vector (ĉ - c) is normally distributed and that the random vector V is statistically independent of (ĉ - c) and is normally distributed. Also the random variable V'V is assumed to be distributed as χ^2. The above mentioned assumptions can only be justified if the matrix θ_N (see (2.11)) is deterministic.
In the following we will prove the above assumptions for the case where θ_N is a deterministic quantity. Using (2.12) and (2.38) yields

ĉ - c = P_N θ_N' Z_N - c = P_N θ_N' (θ_N c + N) - c = P_N θ_N' N          (IV.1)

where, for convenience, the random vector N is taken to be normally distributed with E[N] = 0 and E[N N'] = I. It follows directly from (IV.1) that (ĉ - c) is a normally distributed random vector. Using (2.63) and (IV.1) yields

E[(ĉ - c) V'] = E[P_N θ_N' N N' (θ_N P_N θ_N' - I)] = P_N θ_N' (θ_N P_N θ_N' - I) = 0        (IV.2)

Thus the random vectors (ĉ - c) and V are statistically independent. The vector V is normally distributed, as can directly be seen from (2.63), where

V = (θ_N P_N θ_N' - I) N = (W_N - I) N                                    (2.63)

Since θ_N and P_N are deterministic matrices, V is normally distributed as N is. Using (2.63), (2.64), and (2.65) yields

V'V = N' (W_N - I)' (W_N - I) N = N' (W_N - I)^2 N = N' (I - W_N) N       (IV.3)

Considering the orthogonal matrix Φ_N (see (2.66)), equation (IV.3) can be written as

V'V = N' Φ_N Φ_N' (I - W_N) Φ_N Φ_N' N = N' Φ_N (I - D_N) Φ_N' N = Σ y_j^2     (IV.4)

where y_j is a linear combination of all the elements of the random vector N (the matrix (I - D_N) is a diagonal one with the main diagonal elements being equal to 1 or 0 (see (2.69))). Thus the random vector y_N is distributed as the vector Φ_N' N. The vector Φ_N' N is distributed as the random vector N, which is a direct result of Fisher's theorem [17]. Accordingly, the random vector y_N is normally distributed like N and Σ y_j^2 is distributed as χ^2.

The above proofs are based on a deterministic θ_N. If θ_N is a random variable, it is no longer possible to prove that the random vectors (ĉ - c) and V are normally distributed, nor that V'V is distributed as χ^2. There appears to be no mathematically tractable approach to derive confidence intervals in the case where θ_N contains quantization noise. To retain the basic simplicity of Equation (2.101) the following procedure is suggested. In equations (IV.1-4), it is possible to replace Z_N(k) by Ẑ_N(k), where Ẑ_N(k) is obtained by use of the best estimate ĉ at stage k, that is by use of (see (2.10))

Ẑ_N = θ̂_N ĉ                                                               (IV.5)

The "smoothed" values θ̂_N of θ_N are used in equations (IV.1-4). Since the randomness in θ̂_N tends to be eliminated, it seems justifiable to consider θ̂_N as a deterministic quantity.

APPENDIX V

The gradient condition as given by equation (5.11) can be derived as follows. Let H denote the Hamiltonian; then using (5.6) and (5.9) yields

H = p'(A - F(K + K'))y - (1/2) y'(Q + (K + K') F (K + K')) y              (V.1)
  = p'Ay - p'FKy - p'FK'y - (1/2) y'Qy - (1/2) y'KFKy - (1/2) y'KFK'y
    - (1/2) y'K'FKy - (1/2) y'K'FK'y

Using (III.1) the Hamiltonian can be written as

H = p'Ay - Tr[FKyp'] - Tr[FK'yp'] - (1/2) y'Qy - (1/2) Tr[yy'KFK]
    - (1/2) Tr[yy'KFK'] - (1/2) Tr[FKyy'K'] - (1/2) Tr[yy'K'FK']          (V.2)

The following gradient matrix operations are required:

∂/∂K Tr[FKyp'] = F(py')
∂/∂K Tr[FK'yp'] = (yp')F
-(1/2) ∂/∂K Tr[yy'KFK] = -(1/2)(yy')K'F - (1/2)FK'(yy')
-(1/2) ∂/∂K Tr[yy'KFK'] = -(yy')KF
-(1/2) ∂/∂K Tr[FKyy'K'] = -FK(yy')
-(1/2) ∂/∂K Tr[yy'K'FK'] = -(1/2)FK'(yy') - (1/2)(yy')K'F                 (V.3)

The above operations allow ∂H/∂K to be written as

∂H/∂K = -F(py') - (yp')F - (1/2)(yy')K'F - (1/2)FK'(yy') - (yy')KF - FK(yy')
        - (1/2)FK'(yy') - (1/2)(yy')K'F
      = -F(py') - (yp')F - (yy')K'F - FK'(yy') - (yy')KF - FK(yy')
      = -F(py') - (yp')F - (yy')(K + K')F - F(K + K')(yy')
      = -F(py') - (yp')F - (yy')K̄F - FK̄(yy')                             (V.4)

where K̄ = K + K'.
From (V.4), and for time-invariant F and K, equation (V.4) gives

-∂J_q/∂K = F ∫_0^t py' dt + ∫_0^t yp' dt F + ∫_0^t yy' dt K̄ F + F K̄ ∫_0^t yy' dt = 0        (V.5)

Using the definition

Φ_yv = ∫_0^t y v' dt

equation (V.5) can be written as

-∂J_q/∂K = F Φ_py + Φ_yp F + Φ_yy K̄ F + F K̄ Φ_yy = 0                      (V.6)

This completes the proof of equation (5.11).

REFERENCES

1. P. V. Kokotović, J. B. Cruz, Jr., J. E. Heller and P. Sannuti, "Synthesis of Optimally Sensitive Systems", Proc. I.E.E.E., Vol. 56, No. 8, August 1968.
2. Lion, P., "Rapid Identification of Linear and Nonlinear Systems", Joint Automatic Control Conference, August 1966, pp. 605-615.
3. Pazdera, J., and Pottinger, H., "Linear System Identification Via Liapunov Design Techniques", Joint Automatic Control Conference, August 1969, pp. 795-801.
4. Rose, R., and Lona, G., "An Approximate Steepest Descent Method for Parameter Identification", Joint Automatic Control Conference, August 1969, pp. 483-487.
5. Kishi, F. H., "On-Line Computer Control Techniques and Their Application to Re-entry Aerospace Vehicle Control", in C. T. Leondes, ed., Advances in Control Systems Theory and Applications, Vol. 1, Academic Press, New York, 1964.
6. C. Price, "The Matrix Pseudo-Inverse and Minimal Variance Estimates", SIAM Rev. 6, 1964, pp. 115-120.
7. L. D. Pyle, "Generalized Inverse Computations Using the Gradient Projection Method", Journal of ACM, 11, 1964, pp. 422-428.
8. B. Rust, W. R. Burrus, C. Schneeberger, "A Simple Algorithm for Computing the Generalized Inverse of a Matrix", Journal of ACM, 9, 1966, pp. 381-387.
9. Sage, A., and Melsa, J., Systems Identification, Academic Press, 1971.
10. H. C. Lessing and D. F. Crane, "The Use of Integral Transforms in the Estimation of Time Variable Parameters", NASA Techn. Note, NASA TN D-5194, May 1969.
11. Sage, A. P., Optimum Systems Control, Prentice-Hall, Inc., 1968, pp. 507.
12. Mayne, D. Q., "Optimal Non-Stationary Estimation of the Parameters of a Linear System with Gaussian Inputs", Journal of Electronics and Control, Vol. 14, 1963, pp. 101-112.
13. Lee, R. C. K., Optimal Estimation and Control, M.I.T. Press, Cambridge, Mass., 1964.
14. Lee, R. C. K., and Ho, Y. C., "Identification of Linear Dynamic Systems", Information and Control, Vol. 8, No. 1, February 1965.
15. Leathrum, J., "On-Line Recursive Estimation", Joint Automatic Control Conference, August 1969, pp. 173-178.
16. Bakke, R., "Case Study in the Paper-Making Industry", Case Studies in System Control, 1968, pp. 113-192.
17. Y. V. Linnik, Method of Least Squares and the Principles of the Theory of Observation, Pergamon, New York, 1961.
18. M. Aoki, Optimization of Stochastic Systems, Academic Press, 1967, pp. 318-324.
19. B. J. Prochaska, "Applications of Generalized Inverses to Estimation in a Dynamic System", JACC, 1971, pp. 820-829.
20. D. Luenberger, Optimization by Vector Space Methods, John Wiley and Sons, 1969.
21. J. S. Meditch, "A Class of Suboptimal Linear Controls", Trans. on Automatic Control, July 1966, pp. 433-439.
22. A. Ben-Israel, A. Charnes, "Contributions to the Theory of Generalized Inverses", SIAM Rev. 11, 1963, pp. 667-699.
23. R. Penrose, "A Generalized Inverse for Matrices", Proc. Cambridge Philosophical Society, 51, 1955, pp. 406-413.
24. D. K. Faddeev and V. N. Faddeeva, Computational Methods of Linear Algebra, W. H. Freeman and Co., 1963.
25. D. L. Kleinman and M. Athans, "The Design of Suboptimal Linear Time-varying Systems", I.E.E.E. Trans., Automatic Control, Vol. AC-13, pp. 150-159, April 1968.
26. D. L. Kleinman, T. Fortmann and M. Athans, "On the Design of Linear Systems with Piecewise Constant Feedback Gains", I.E.E.E. Trans., Automatic Control, Vol. AC-13, pp. 354-361, August 1968.
27. W. Levine and M. Athans, "On the Determination of the Optimal Constant Output Feedback Gains for Linear Multivariable Systems", I.E.E.E. Trans., Automatic Control, Vol. AC-15, pp. 44-48, February 1970.
28. M. M. Newmann, "Specific Optimal Control of the Linear Regulator Using a Dynamical Controller Based on the Minimal-Order Luenberger Observer", Intern. J. Control, Vol. 12, No. 1, pp. 33-48, 1970.
29. R. L. Kosut, "Suboptimal Control of Linear Time-Invariant Systems Subject to Control Structure Constraints", Proc. JACC, pp. 820-828, 1970.
30. T. Hendericks and H. D'Angelo, "An Optimal Fixed Control Structure Design with Minimal Sensitivity for a Large Electric Booster", Proc. of the 5th Annual Allerton Conference, pp. 142-148, 1967.
31. E. Kreindler, "On Minimization of the Trajectory Sensitivity", Intern. J. Control, Vol. 8, No. 1, pp. 89-96, 1968.
32. E. Kreindler, "Synthesis of Flight Control Systems Subject to Vehicle Parameters Variations", Grumman Aircraft Engg. Corp., Bethpage, N.Y., Techn. Rept. AFFDL-TR-66-209, April 1967.
33. E. Kreindler, "Formulation of the Minimum Trajectory Sensitivity Problems", I.E.E.E. Trans., Automatic Control, Vol. AC-14, pp. 206-207, April 1969.
34. J. F. Cassidy Jr. and I. Lee, "On the Optimal Feedback Control of a Large Launch Vehicle to Reduce Trajectory Sensitivity", Proc. of 1967 JACC, pp. 587-595.
35. H. J. Dougherty, I. Lee and P. M. DeRusso, "Synthesis of Optimal Feedback Control Systems Subject to Parameter Variations", Proc. of 1967 JACC, pp. 125-133.
36. P. Sannuti and J. B. Cruz Jr., "A Note on Trajectory Sensitivity of Optimal Control Systems", I.E.E.E. Trans., Automatic Control, Vol. AC-13, pp. 111-112, February 1968.
37. A. J. Bradt, "Sensitivity Functions in the Design of Optimal Controllers", I.E.E.E. Trans., Automatic Control, Vol. AC-13, pp. 110-111, February 1968.
38. D. L. Kleinman, "On an Iterative Technique for Riccati Equation Computations", I.E.E.E. Trans., Automatic Control, Vol. AC-13, pp. 114-115, February 1968.
39. A. G. Longmuir and E. V. Bohn, "The Synthesis of Suboptimal Feedback Control Laws", I.E.E.E. Trans., Automatic Control, Vol. AC-12, pp. 755-758, December 1967.
40. C. F. Chen and L. S. Shieh, "A Note on Expanding PA + A'P = -Q", I.E.E.E. Trans., Automatic Control, Vol. AC-13, pp. 122-123, February 1968.
41. L. K. Kirchmayer, Economic Control of Interconnected Systems, New York, John Wiley and Sons, 1959.
42. P. Mueller, "Solution of the Matrix Equations AX + XB = -Q and S'X + XS = -Q", SIAM J. Appl. Math., Vol. 18, No. 3, pp. 682-687, May 1970.
43. M. Athans and F. C. Schweppe, "Gradient Matrices and Matrix Calculations", M.I.T. Lincoln Lab., Lexington, Mass., Techn. Note 1965-53, November 1965.
44. H. Neudecker, "A Note on Kronecker Matrix Products and Matrix Equation Systems", SIAM J. Appl. Math., Vol. 17, No. 3, pp. 603-606, May 1969.
