
UBC Theses and Dissertations

Linear systems identification algorithms for a minicomputer 1974

LINEAR SYSTEMS IDENTIFICATION ALGORITHMS FOR A MINICOMPUTER

by

HEINRICH ZUERCHER

Diploma in El. Eng., Federal Institute of Technology, Switzerland, 1972

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in the Department of Electrical Engineering

We accept this thesis as conforming to the required standard.

Research Supervisor
Members of Committee
Head of Department
Members of the Department of Electrical Engineering

THE UNIVERSITY OF BRITISH COLUMBIA
November, 1974

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my Department or by his representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical Engineering
The University of British Columbia
Vancouver 8, Canada

ABSTRACT

Two methods for identifying multiple-input multiple-output linear time-invariant discrete systems from input-output data are derived. The selector matrix principle of Gopinath is used, and two special classes of selector matrices that lead to canonical system models are introduced. Very general input sequences can be applied; however, a few restrictions exist for the initial system state. The input matrix is identified columnwise. Both methods are considerably better than Gopinath's method in terms of storage requirements and numerical accuracy. Tests on a large computer are performed, and one method is implemented on a minicomputer with good results.
TABLE OF CONTENTS

I. INTRODUCTION
II. GOPINATH'S METHOD USING SPECIAL SELECTOR MATRICES
  2.1 Problem Statement
  2.2 Two Special Classes of Selector Matrices
  2.3 The Algorithm with Selector Matrix Type 2
  2.4 The Algorithm with Selector Matrix Type 3
  2.5 Summary
III. MODIFIED VERSIONS OF GOPINATH'S METHOD
  3.1 Method A
  3.2 Method B
    3.2.1 A-Algorithm
    3.2.2 B-Algorithm
  3.3 Comparison with Gopinath's Method
IV. IDENTIFICATION OF REAL-TIME SYSTEMS WITH A MINICOMPUTER
  4.1 Experiment
  4.2 Minicomputer Software
  4.3 Sources of Error
  4.4 Storage Allocation and Time Requirements
V. EXPERIMENTS AND RESULTS
VI. CONCLUSIONS
REFERENCES
APPENDIX 1
APPENDIX 2
APPENDIX 3

LIST OF FIGURES

3.1 Simple input sequence
4.1 Scheme of identification experiment
4.2 Flow chart of identification program
4.3 Nonideal square wave
5.1 Input sequence for identification
5.2 Controlled and uncontrolled system output
5.3 Suboptimal system output behaviour
5.4 Suboptimal control signal

LIST OF TABLES

5.1 Models with different selector matrices

ACKNOWLEDGEMENT

I wish to thank Dr. E.V. Bohn, my supervisor, for his advice, guidance and interest during the research and writing of my thesis. I am thankful to Mr. Dave Holmes for his assistance with the hardware work. The financial support received in the form of U.B.C. Graduate Fellowships from 1972 to 1974 and under the National Research Council Grant 67-3134 is gratefully acknowledged.
I. INTRODUCTION

Aim of Thesis

Recent advances in the capabilities of process computers and corresponding digital interface equipment have made the application of "modern" control techniques much more feasible. "Modern" design methods for multi-input multi-output linear time-invariant systems require a state-space model of the system, and the problem of identifying such models from experimental data has therefore been the subject of intensive research in recent years. Several identification methods have been proposed and tested on large computers. They all require extensive data processing, which makes their implementation on minicomputers inefficient if not impossible. The aim of this thesis is to develop and test on-line identification algorithms that can be realized efficiently on minicomputers.

Background

It is essential for the practical usefulness of an identification method that the class of applicable input signals is not restricted to particular inputs such as impulse functions, step functions or white noise. The ideal case would certainly be identification from normal operating records. Another practically relevant requirement regards the minimality of the number of significant parameters of the model, and this requires the choice of a suitable canonical form, whenever this is possible. Moreover, the simplicity of the link between the selected state-space representation and the set of input-output equations used for the identification heavily conditions the required amount of computation.
The above problems have been solved for single-input single-output or multi-input single-output systems [9], but no unified approach for multi-input multi-output systems that satisfies all previous requirements can be found in the literature. So far a number of methods that allow the use of general input-output records have been suggested: Gopinath [1], Budin [2], Bonivento and Guidorzi [3], Ackermann [4], and Bingulac and Djorovic [8]. Gopinath introduces the principle of a data selector matrix which enables the identification method to be formulated concisely and in a manner suitable for digital computation. However, the models found generally have no canonical structure. The computational part of this procedure, especially the determination of a selector matrix, has been improved by Budin. Bonivento and Ackermann both introduce a particular canonical system model and then identify the reduced number of model parameters using the corresponding input-output equations. However, the selection and processing of input-output data is more complicated than with Gopinath's selector matrix. Bingulac's procedure leads to a model in Jordan canonical form and requires the determination of system eigenvalues, i.e. polynomial roots. This makes the treatment of noisy data very difficult, which is a serious drawback.

Outline of Work

An examination of Gopinath's method in Chapter II shows that a slightly more general definition of the selector matrix allows us to consider both Ackermann's and Bonivento's approaches as special cases of Gopinath's selector matrix formulation. The generality of Gopinath's procedure justifies its further investigation. Two modified versions of Gopinath's method are proposed in Chapter III and the implementation of one such
method on a minicomputer is described in Chapter IV. Finally, Chapter V presents experimental results and examples, whereas Chapter VI contains the conclusions.

II. GOPINATH'S METHOD USING SPECIAL SELECTOR MATRICES

2.1 Problem Statement

A linear time-invariant multiple-input multiple-output system can be represented by the state equations

$$x(k+1) = A\,x(k) + B\,u(k) \qquad (1)$$
$$y(k) = H\,x(k) \qquad (2)$$

where $x$ is an $n \times 1$ state vector, $y$ is an $\ell \times 1$ output vector, $u$ is an $m \times 1$ input vector, $A$ is an $n \times n$ state transition matrix, $B$ is an $n \times m$ input matrix and $H$ is an $\ell \times n$ output matrix. The system is thus specified by the triplet (A,B,H). (A,B,H) is assumed to be completely observable and controllable because otherwise it is impossible to identify the system from input-output data.

The problem is to find a model $(\hat A,\hat B,\hat H)$ of the same dimension as (A,B,H) from a sequence of inputs u(k) and outputs y(k) such that $(\hat A,\hat B,\hat H)$ simulates the input-output behaviour of (A,B,H). It is clear that the number of correct models $(\hat A,\hat B,\hat H)$ is infinite and that there are input sequences u(k) which will not be sufficient to uniquely specify any realization $(\hat A,\hat B,\hat H)$. The use of a selector matrix for the processing of data splits the problem into two parts:

1) The finding of a selector matrix. This is equivalent to determining the order and structure of $(\hat A,\hat B,\hat H)$.
2) The identification of all parameters in $(\hat A,\hat B,\hat H)$.

This thesis deals with an efficient on-line computer solution of problem part 2), and a selector matrix is, therefore, always assumed to be known. Two particular classes of selector matrices are introduced in the next section and then used throughout the thesis.
2.2 Two Special Classes of Selector Matrices

A more general class of selector matrices than the one introduced in [1] and used in [2] is defined below.

Definition 1: $S_s$ denotes the set of $n \times d$ matrices ($n \le d$) with the following properties:

1) $S = \{s_{ij}\}$ where $s_{ij} = 0$ or $1$ $\qquad(3)$
2) for every $i$, $s_{ij} = 1$ for one and only one $j$, say $j_i$ $\qquad(4)$
3) $j_i \ne j_k$ for $i \ne k$, $\ i,k = 1,2,\ldots,n$ $\qquad(5)$

This definition implies that when premultiplying a $d \times p$ matrix $A$ with $S$, the resulting $n \times p$ matrix $SA$ consists of $n$ of the rows of $A$, ordered in accordance with the unit row vectors of $S$. Note that with selector matrices as defined in [1] and [2] the $n$ rows in $SA$ are always ordered as they are in $A$. However, with selector matrices as defined in Definition 1, the $n$ rows of $SA$ can be ordered in any possible way. We can now define the following special class of selector matrices.

Definition 2: $S_{s2}$ denotes the set of $n \times d$ matrices with the following properties:

1) $S_{s2} \subset S_s$ $\qquad(6)$
2) $d$ is an integral multiple of $\ell$ $\qquad(7)$
3) $S' = [\,S_1'\ S_2'\ \cdots\ S_\ell'\,]$ $\qquad(8)$ where the $S_k = \{s_{ij}^{(k)}\}$ are $n_k \times d$ matrices with $j_i = k + (i-1)\ell$, $\ i = 1,\ldots,n_k$, $\ k = 1,\ldots,\ell$, and $j_i$ is defined as in (4)
4) $n_k \ge 1$ for $k = 1,\ldots,\ell$ $\qquad(9)$

An example of a selector matrix $S \in S_{s2}$ is

$$S = \begin{bmatrix} S_1 \\ S_2 \end{bmatrix}, \qquad
S_1 = \begin{bmatrix} 1&0&0&0&0&0 \\ 0&0&1&0&0&0 \\ 0&0&0&0&1&0 \end{bmatrix}, \qquad
S_2 = \begin{bmatrix} 0&1&0&0&0&0 \\ 0&0&0&1&0&0 \end{bmatrix}$$

where $n = 5$, $\ell = 2$, $d = 3\ell = 6$, $n_1 = 3$, $n_2 = 2$. The advantage gained from the application of a selector matrix of type 2 is demonstrated in Section 2.3.
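Definition 1 amounts to picking and reordering rows: premultiplying a $d \times p$ matrix $A$ by $S \in S_s$ yields the $n$ rows of $A$ indexed by $j_1, \ldots, j_n$, in that order. A minimal sketch (the function name and the 0-based indexing are illustrative, not from the thesis):

```python
def apply_selector(row_indices, A):
    """Premultiply A by the selector matrix whose i-th unit row vector
    has its 1 in column row_indices[i] (0-based): pick those rows of A,
    in that order."""
    return [A[j] for j in row_indices]

A = [[10, 11], [20, 21], [30, 31], [40, 41]]
# Selector with j_1 = 3, j_2 = 1 (1-based): row 3 of A first, then row 1,
# an ordering the narrower definition of [1], [2] could not produce.
print(apply_selector([2, 0], A))   # [[30, 31], [10, 11]]
```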
The use of a selector matrix of type 3, as defined below, will be shown in Section 2.4. However, it is not necessary for the understanding of the remainder of the thesis, except Section 2.4, to read Definition 3, which is obtained by slightly changing (8) and (9) in Definition 2.

Definition 3: $S_{s3}$ denotes the set of $n \times d$ matrices with the following properties:

1) $S_{s3} \subset S_s$ $\qquad(10)$
2) $d$ is an integral multiple of $\ell$ $\qquad(11)$
3) $S' = [\,S_1'\ \cdots\ S_h'\,]$ $\qquad(12)$ where the $S_k = \{s_{ij}^{(k)}\}$ are $n_k \times d$ matrices with $j_i = k + (i-1)\ell$, $\ i = 1,\ldots,n_k$, $\ k = 1,\ldots,h$, $\ h \le \ell$, and $j_i$ is defined as in (4)
4) $n_k \ge 1$ for $k = 1,\ldots,h$ $\qquad(13)$

An example of a selector matrix $S \in S_{s3}$ is

$$S = \begin{bmatrix}
1&0&0&0&0&0&0&0&0&0 \\
0&0&1&0&0&0&0&0&0&0 \\
0&0&0&0&1&0&0&0&0&0 \\
0&0&0&0&0&0&1&0&0&0 \\
0&0&0&0&0&0&0&0&1&0
\end{bmatrix}$$

where $n = 5$, $\ell = 2$, $d = 5\ell = 10$, $n_1 = 5$, $n_2 = 0$, $h = 1$.
The next step ^ * * i n the procedure i s the introduction of a basis f o r the model (A,B,H) such that ) H HA HA = I (16) n (14) and (16) imply d i r e c t l y that -1 A = T A T B = 1 B (17) H = H T -1 I t i s w e l l known that a state transformation of (A,B,H) with the transformation matrix T described i n (15) leads to a model (A,B,H) with the following observable canonical form: A = 0 1 0 0 ... 0 _ a 2 i — 0 ... 0 — 0 0 0 . . . 0 0 1 0 • • 0 1' — aZ2— 0 ... 0 0 .. . 0 — < H 2 — 0 ... 0 0 .. . 0 0 ... 0 0 ... 0 — 0.21— 0 1 0 n. n. n. B = r i o . . . { j ,' 0~] 0 0 j 1 0 . . . j | 0 ; 1 0...0 H Because of Theorem 1 we can without los s of generality continue the d e r i - vation of Gopinath's input output equation assuming that (A,B,H) s a t i s f i e s (18). Simple manipulation of (1), (2) and the equivalence of (A,B,H) and (A,B,H) y i e l d y(k) = H HA HA x(k) + R u(k) (19) where y'(k) = [y'(k) y'(k+l) ... y'(k+n*-l)] u'(k) = [u*(k) u'(k+l) ... u'(k+n*-l)] (20) 8 and R l = 0 ... HB 0 .. . •A. A A. A A HAB HB 0 .. HA n*-2 HB 0 (21) The m u l t i p l i c a t i o n of (19) by S, using the equations (16) and. (18), gives S y(k) = x(k) + u(k) (22) where 0 n-n^+l Jn-1 n-n 0+l Substituting x(k) and x(k+l) from (22) i n t o x(k+l) = A x(k) + B u(k) y i e l d s Gopinath's d i r e c t input-output r e l a t i o n S y (k+1) = [AR] S y (k) u(k) where R = -ASR1 + S HB 0... A /\ /s SS. /\ HAB HB 0 ... ~*n*—1«. HA B ... HB (24) (25) 9 ( 2 4 ) i s a n e x t e r n a l d e s c r i p t i o n i n t h e s e n s e t h a t t h e v a r i a b l e s u ( k ) , y ( k ) , c a n b e m e a s u r e d e x t e r n a l l y . E q u a t i o n ( 2 4 ) d o e s n o t i n v o l v e a s y s t e m s t a t e . W i t h r e l a t i o n ( 1 8 ) , ( 2 3 ) a n d ( 2 5 ) i t c a n b e v e r i f i e d t h a t t n „ n r [ A R ] = n „ 0 1 0 1 0 I o l :o ... o — a i l — j — 0.12— 0 ... Q |0 1 0 0 . . . 0 I 0 1 — a 2 1 — | ^ a 2 2 0 . . . 0 ]~0 . . . 0 : i o ... o i o .... o •OJI.I — l! 1 0 . . . 0 i : o ... 
o | — a . i f c — • ! 0 . . . 0 1 0 . . . 0 I — a 2 £ — JL.-A I T ~ T " , 0 1 0 1 0 . . . 1 1 ~ a n — o ... o .... 0'(| 0 • p . l - 0 . . . 0 . . . 0 0 • Pi-. 0 0 ( 2 6 ) A l l s i g n i f i c a n t p a r a m e t e r s o f [ A R ] a p p e a r i n t h e r o w s n ^ , h ^ + h ^ , . . . n w h e n a s e l e c t o r ; m a t r i x , o f . t y p e 2 i s u s e d . T h e r e f o r e t h e n u m b e r o f p a r a m e t e r s t h a t h a v e t o b e e s t i m a t e d i s r e d u c e d f r o m ( n + m n * ) n t o ( n + m n * ) £ . T h e d e t e r m i n a t i o n o f [ A R ] i s s t r a i g h t f o r w a r d . U s i n g ( 2 4 ) w e h a v e S [ y ( k + 1 ) . . . y ( k + n + m n * ) ] = [ A j R ] S y ( k ) . . . S y ( k + n + m n * - l ) T T ( k ) . . . u ( k + n + m n * - l ) w h i c h c a n b e w r i t t e n c o m p a c t l y a s S Y n ^ ( k + 1 ) = [ A R ] U n A ( k ) E q u a t i o n ( 2 6 ) a l l o w s u s t o i n t r o d u c e a r e d u c e d s e l e c t o r m a t r i x ( 2 7 ) ( 2 8 ) r e d " n ^ f n 2 - s n ^ + n 2 ~ + . S j i s H x ? n * £ r e d ( 2 9 ) w h e r e s . i s t h e i t h r o w o f S . l Substituting ^ r e ^ into (28) we get S , Y A(k+1) = red n* N ' 21 12 Un*<k> (30) which has the form S r e d Y n * ( k + 1 ) = [ 1 r e d R r e d ] U n * ( k ) (31) Note that (31) represents the input-output equations derived by Bonivento and Guirdozi i n [3]. Theorem 2 states conditions under which a unique so- l u t i o n f o r [AR] and t h e r e f o r e [A , R , ] can be found. red red Theorem 2: If (A,B,H) i s completely observable and c o n t r o l l a b l e and S e and u(k) are random variables with j o i n t n o n l a t t i c e d i s t r i b u t i o n , then U n A ( k ) i s "almost surely" nonsingular, i . e . there e x i s t s a unique s o l u t i o n for [AR] with p r o b a b i l i t y one. A. The proof of t h i s theorem i s given i n Appendix 1. Once' [AR] i s known, then B i s found as described i n [1]. B = R + AIL + ... A n* _ 1R . 
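For a concrete sense of equations (28) and (32), the following sketch simulates a hypothetical second-order single-input system already in the canonical form (18), so the selector matrix is the identity, solves (28) for $[\hat A\ R]$, and recovers $\hat B$ via (32). The system matrices and random input here are illustrative only, not taken from the thesis experiments:

```python
import numpy as np

# Hypothetical 2nd-order SISO system already in observable canonical
# form (18), so the selector matrix S is the identity (n = n* = 2, m = 1).
A = np.array([[0.0, 1.0],
              [-0.5, 0.3]])
B = np.array([[1.0],
              [0.5]])
H = np.array([[1.0, 0.0]])
n, nstar, m = 2, 2, 1

rng = np.random.default_rng(0)
u = rng.standard_normal(12)        # random input, in the spirit of Theorem 2
x = np.zeros(n)
y = []
for k in range(12):
    y.append((H @ x).item())
    x = A @ x + B[:, 0] * u[k]

# Data matrices of (27)-(28): n + m*nstar = 4 columns are needed.
cols = n + m * nstar
SY1 = np.array([[y[k + 1], y[k + 2]] for k in range(cols)]).T
U = np.array([[y[k], y[k + 1], u[k], u[k + 1]] for k in range(cols)]).T

AR = SY1 @ np.linalg.inv(U)        # eq. (28): [A R] = S Y(k+1) U^{-1}
A_hat, R = AR[:, :n], AR[:, n:]
R0, R1 = R[:, :m], R[:, m:]
B_hat = R0 + A_hat @ R1            # eq. (32) with n* = 2
```

Since the simulated data are noise-free, `A_hat` and `B_hat` reproduce `A` and `B` up to roundoff.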
where $R = [\,R_0\ R_1\ \cdots\ R_{n^*-1}\,]$ with each $R_i$ an $n \times m$ matrix (these partitions of $R$ are distinct from the matrix $R_1$ of (21)). $\qquad(33)$

Using equation (18), it can be verified that $\hat H$ is found in the following manner:

$$\hat H = \{h_{ij}\} \quad \text{with } h_{ij} = s_{ij} \text{ for } i = 1,\ldots,\ell \text{ and } j = 1,\ldots,n, \text{ where } S = \{s_{ij}\} \qquad (34)$$

2.4 The Algorithm with Selector Matrix Type 3

As in the preceding section, it can be shown that there always exists a selector matrix of type 3 such that equations (28) and (32) lead to a model in which $\hat A$ is in lower block triangular form: its $h \times h$ blocks have the same structure as in (18), but all blocks above the diagonal vanish, so the parameter rows $\alpha_{pq}$ appear only for $q \le p$; $\hat B$ is again a full matrix of free parameters. $\qquad(35)$

Note that in this particular case equation (31) represents the input-output representation derived by Ackermann in [4]. The number of significant parameters in $\hat A$ is reduced because $\hat A$ is in lower triangular form. However, the output matrix $\hat H$ contains more significant parameters: its first $h$ rows are unit rows as in (18), while rows $h+1, \ldots, \ell$ are full rows of free parameters. $\qquad(36)$

Equation (34) can still be used to find the first $h$ rows of the matrix $\hat H$. The finding of the bottom rows requires the solution of the following system of linear equations:

$$\begin{bmatrix} y_{h+1}(k) & \cdots & y_{h+1}(k+n-1) \\ \vdots & & \vdots \\ y_{\ell}(k) & \cdots & y_{\ell}(k+n-1) \end{bmatrix} = \begin{bmatrix} \hat h_{h+1} \\ \vdots \\ \hat h_{\ell} \end{bmatrix}[\,\hat x(k)\ \ \hat x(k+1)\ \cdots\ \hat x(k+n-1)\,] \qquad (37)$$

where $y_j(k)$ is the $j$th component of y(k) and $\hat h_j$ is the $j$th row of $\hat H$. The model states $\hat x(k), \hat x(k+1), \ldots$ can be found with equation (22) after $\hat A$ and $\hat B$ have been computed.

2.5 Summary

It has been shown that a selector matrix of type 2, in conjunction with equations (28), (32) and (34), leads directly to a canonical observable model $(\hat A,\hat B,\hat H)$ as in (18). It has also been proved that such a selector matrix exists for any observable and controllable system with $\rho[H] = \ell$ and that a unique solution of (28) is guaranteed for a random input sequence.
Similarly, a selector matrix of type 3 and the equations (28), (32), (34) and (37) yield a model as in (35) and (36). The proof that a selector matrix of type 3 always exists and that a unique solution of (28) and (37) is guaranteed for a random input sequence is analogous to the proof for selector matrix type 2.

III. MODIFIED VERSIONS OF GOPINATH'S METHOD

3.1 Method A

A reduction of the matrix dimensions in equation (28) is achieved by decomposing the identification experiment into $m$ intervals. A different input sequence is used during each interval, and the data for one interval are used to identify $\hat A$ and one column of $\hat B$. After interchanging columns in $R$ and components in $\tilde u(k)$, equation (24) can be split up in the following way:

$$S\tilde y(k+1) = \hat A\,S\tilde y(k) + \bar R_1\,\tilde u_1(k) + \cdots + \bar R_m\,\tilde u_m(k) \qquad (38)$$

where

$$\bar R_i = [\,r_i\ \ r_{i+m}\ \cdots\ r_{i+(n^*-1)m}\,] \text{ with } r_i \text{ the } i\text{th column of } R, \qquad \tilde u_i'(k) = [\,u_i(k)\ \ u_i(k+1)\ \cdots\ u_i(k+n^*-1)\,] \qquad (39)$$

and $u_i(k)$ is the $i$th component of u(k). Assuming that

$$u_j(k) = 0 \quad \text{for } j \ne i \qquad (40)$$

we get the following partial input-output equation analogous to (24):

$$S\tilde y(k+1) = [\,\hat A\ \ \bar R_i\,]\begin{bmatrix} S\tilde y(k) \\ \tilde u_i(k) \end{bmatrix} \quad \text{for } i = 1,\ldots,m \qquad (41)$$

The determination of $[\,\hat A\ \ \bar R_i\,]$ is straightforward. We have

$$S[\,\tilde y(k+1)\ \cdots\ \tilde y(k+n+n^*)\,] = [\,\hat A\ \ \bar R_i\,]\begin{bmatrix} S\tilde y(k) & \cdots & S\tilde y(k+n+n^*-1) \\ \tilde u_i(k) & \cdots & \tilde u_i(k+n+n^*-1) \end{bmatrix} \quad \text{for } i = 1,\ldots,m \qquad (42)$$

which can be written compactly as

$$S\,Y_{n^*}(k+1) = [\,\hat A\ \ \bar R_i\,]\,U_{i,n^*}(k) \quad \text{for } i = 1,\ldots,m \qquad (43)$$

A unique solution for $[\,\hat A\ \ \bar R_i\,]$ exists whenever $U_{i,n^*}(k)$ is nonsingular. We first assume that the one-input system $(A,b_i,H)$ is controllable, where $b_i$ is the $i$th column of $B$. Then Theorem 2 applies to $(A,b_i,H)$, i.e. a unique solution for $[\,\hat A\ \ \bar R_i\,]$ exists with probability one whenever the input $u_i(j)$ is random. Most likely, not all the $m$ systems $(A,b_i,H)$ will be controllable.
If $(A,b_i,H)$ is not controllable, general conditions that guarantee $U_{i,n^*}(k)$ to be nonsingular have not been found. However, whatever input sequence $u_i(j)$ is used, the starting condition

$$x(k) \ne 0, \ \text{and therefore } \hat x(k) \ne 0, \qquad (44)$$

is necessary for $U_{i,n^*}(k)$ to be nonsingular. The proof follows directly from an equation analogous to (4) in Appendix 1.

Once $[\,\hat A\ \ \bar R_i\,]$ is determined, the column $\hat b_i$ of $\hat B$ is found directly from

$$\hat b_i = \bar r_0 + \hat A\,\bar r_1 + \cdots + \hat A^{n^*-1}\,\bar r_{n^*-1} \qquad (45)$$

where $\bar R_i = [\,\bar r_0\ \ \bar r_1\ \cdots\ \bar r_{n^*-1}\,]$ with each $\bar r_j$ an $n \times 1$ vector. Relation (45) follows from (32) and (39).

3.2 Method B

Particular input sequences allow a modification of (A,B,H) into an augmented system $(\bar A,\bar H)$, eliminating the input matrix. The identification of the augmented system with Gopinath's method will give $\hat A$ directly and is simplified because the term $R$ in the input-output equation (24) disappears. $\hat B$ is then identified columnwise with a different algorithm which does not require any knowledge about $\hat A$.

3.2.1 A-Algorithm

For any input sequence of the type

$$u(k+1) = C\,u(k) \quad \text{for } k = 1, \ldots \qquad (46)$$

where $C$ is $m \times m$, the dynamic behaviour of system (1), (2) can be described by the following augmented state equations:

$$\bar x(k+1) = \bar A\,\bar x(k), \qquad \bar y(k) = \bar H\,\bar x(k) \qquad (47)$$

where $\bar x'(k) = [\,u'(k)\ \ x'(k)\,]$, $\ \bar y'(k) = [\,u'(k)\ \ y'(k)\,]$, and

$$\bar A = \begin{bmatrix} C & 0 \\ B & A \end{bmatrix}, \qquad \bar H = \begin{bmatrix} I_m & 0 \\ 0 & H \end{bmatrix} \qquad (48)$$

It is obvious that

$$\rho[\bar H] = m + \rho[H] \qquad (49)$$

We define the following augmented $(m+n) \times (m+\ell)n^*$ selector matrix:

$$\bar S = \begin{bmatrix} I_m & 0 & 0 & 0 & \cdots & 0 & 0 \\ 0 & S_1 & 0 & S_2 & \cdots & 0 & S_{n^*} \end{bmatrix} \qquad (50)$$

where $S = [\,S_1\ S_2\ \cdots\ S_{n^*}\,]$, with each $S_i$ an $n \times \ell$ matrix, is the selector matrix for the original system (A,B,H) as used in (15) and (24).

It is now easy to show that the selector matrix $\bar S$ can be used to identify an augmented model $(\hat{\bar A},\hat{\bar H})$ with Gopinath's method. Multiplying $\bar S$ with the observability matrix of $(\bar A,\bar H)$ and using (15), (48) and (50) gives

$$\bar S\begin{bmatrix} \bar H \\ \bar H\bar A \\ \vdots \\ \bar H\bar A^{n^*-1} \end{bmatrix} = \begin{bmatrix} I_m & 0 \\ X & T \end{bmatrix} = \bar T \qquad (51)$$

where $X$ denotes entries that need not be computed. Because $T$ is nonsingular, $\bar T$ must be nonsingular too, i.e. the augmented system $(\bar A,\bar H)$ as defined in (48) is observable. Furthermore, $\bar S$ can be used to identify a model $(\hat{\bar A},\hat{\bar H})$. Analogous to (17) we obtain

$$\hat{\bar A} = \bar T\,\bar A\,\bar T^{-1} \qquad (52)$$

which leads with (48) and (51) to

$$\hat{\bar A} = \begin{bmatrix} I_m & 0 \\ X & T \end{bmatrix}\begin{bmatrix} C & 0 \\ B & A \end{bmatrix}\begin{bmatrix} I_m & 0 \\ X & T \end{bmatrix}^{-1} = \begin{bmatrix} C & 0 \\ X & \hat A \end{bmatrix} \qquad (53)$$

where $\hat A$ is identical to $\hat A$ in (18) and (24). Because $(\bar A,\bar H)$ has no input matrix, equation (24) is reduced to

$$\bar S\,\tilde{\bar y}(k+1) = \hat{\bar A}\,\bar S\,\tilde{\bar y}(k) \qquad (54)$$

and equation (28) becomes

$$\bar S\,\bar Y_{n^*}(k+1) = \hat{\bar A}\,\bar S\,\bar Y_{n^*}(k) \qquad (55)$$

with $\bar Y_{n^*}(k) = [\,\tilde{\bar y}(k)\ \ \tilde{\bar y}(k+1)\ \cdots\ \tilde{\bar y}(k+n+m-1)\,]$, where $\tilde{\bar y}(k)$ is defined corresponding to (20) and (48). A unique solution for $\hat{\bar A}$ is found whenever

$$\bar S\,\bar Y_{n^*}(k) = \begin{bmatrix} u(k) & Cu(k) & \cdots & C^{n+m-1}u(k) \\ S\tilde y(k) & S\tilde y(k+1) & \cdots & S\tilde y(k+n+m-1) \end{bmatrix} \qquad (56)$$

is nonsingular.

Sufficient and practically relevant conditions for $C$, u(k) and the initial system state x(k) such that (56) is satisfied have not been found. However, it is necessary that the matrix $C$ be cyclic (definition in Appendix 1). This is no serious restriction: "The condition of being non-cyclic is caused by having two identical subsystems embedded in one system and yet completely decoupled from each other" [15]. In spite of an arbitrary choice of $C$ and u(k), in all numerical examples a unique $\hat A$ was always found, i.e. the algorithm is practical even without sufficient conditions for $C$, u(k) and x(k).

3.2.2 B-Algorithm

Simple manipulation of (1) and (2) gives

$$\tilde y(k+1) = \begin{bmatrix} \hat H\hat A \\ \hat H\hat A^2 \\ \vdots \\ \hat H\hat A^{n^*} \end{bmatrix}\hat x(k) + \begin{bmatrix} \hat H\hat B & & \\ \hat H\hat A\hat B & \hat H\hat B & \\ \vdots & & \ddots \\ \hat H\hat A^{n^*-1}\hat B & \cdots & \hat H\hat B \end{bmatrix}\tilde u(k) \qquad (57)$$

where $\tilde y(k)$ and $\tilde u(k)$ are defined in (20).
It is assumed now that (A,B,H) is initially in the steady state, in the sense that

$$x_s = A\,x_s + B\,u_s, \qquad y_s = H\,x_s \qquad (58)$$

Let

$$y_0(k) \triangleq y(k) - y_s, \qquad x_0(k) \triangleq \hat x(k) - \hat x_s, \qquad u_0(k) \triangleq u(k) - u_s \qquad (59)$$

Substituting $y$, $x$ and $u$ in (57) and using (58) gives

$$\begin{bmatrix} y_0(1) \\ y_0(2) \\ \vdots \\ y_0(n^*) \end{bmatrix} = \begin{bmatrix} \hat H\hat B & & \\ \hat H\hat A\hat B & \hat H\hat B & \\ \vdots & & \ddots \\ \hat H\hat A^{n^*-1}\hat B & \cdots & \hat H\hat B \end{bmatrix}\begin{bmatrix} u_0(0) \\ u_0(1) \\ \vdots \\ u_0(n^*-1) \end{bmatrix} \qquad (60)$$

To simplify the notation, y(k) and u(k) shall denote $y_0(k)$ and $u_0(k)$ in the rest of this section. In the case of $m$ different input sequences of length $n^*$, equation (60) can be written in the form

$$\begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_{n^*} \end{bmatrix} = \begin{bmatrix} \hat H\hat B & & \\ \hat H\hat A\hat B & \hat H\hat B & \\ \vdots & & \ddots \\ \hat H\hat A^{n^*-1}\hat B & \cdots & \hat H\hat B \end{bmatrix}\begin{bmatrix} U_0 \\ U_1 \\ \vdots \\ U_{n^*-1} \end{bmatrix} \qquad (61)$$

where

$$Y_k = [\,y_1(k)\ \ y_2(k)\ \cdots\ y_m(k)\,] \text{ is an } \ell \times m \text{ matrix} \qquad (62)$$
$$U_k = [\,u_1(k)\ \ u_2(k)\ \cdots\ u_m(k)\,] \text{ is an } m \times m \text{ matrix} \qquad (63)$$

$y_i(k)$, $k = 1, 2, \ldots, n^*$, represents the $i$th output sequence and $u_i(k)$, $k = 0, 1, \ldots, n^*-1$, represents the $i$th input sequence. The starting values $u_i(0)$ of the input sequences are chosen such that they span the whole input space, i.e. $U_0$ is nonsingular. $\qquad(64)$

With (61) we compute sequentially the following terms:

$$\hat H\hat B = Y_1\,U_0^{-1}$$
$$\hat H\hat A\hat B = (Y_2 - \hat H\hat B\,U_1)\,U_0^{-1}$$
$$\vdots$$
$$\hat H\hat A^{n^*-1}\hat B = (Y_{n^*} - \hat H\hat A^{n^*-2}\hat B\,U_1 - \cdots - \hat H\hat B\,U_{n^*-1})\,U_0^{-1} \qquad (65)$$

Recalling (16), one gets $\hat B$ directly by use of (65):

$$S\begin{bmatrix} \hat H\hat B \\ \hat H\hat A\hat B \\ \vdots \\ \hat H\hat A^{n^*-1}\hat B \end{bmatrix} = S\begin{bmatrix} \hat H \\ \hat H\hat A \\ \vdots \\ \hat H\hat A^{n^*-1} \end{bmatrix}\hat B = \hat B \qquad (66)$$

Equation (65) becomes particularly simple when $m$ step functions are used as input sequences such that

$$U_0 = U_1 = \cdots = U_{n^*-1} = U_s \qquad (67)$$

and

$$U_s = \mathrm{diag}(u_{s1},\ u_{s2},\ \ldots,\ u_{sm}) \qquad (68)$$

Substituting (67) and (68) into (65) gives

$$\hat H\hat B = Y_1\,U_s^{-1}, \qquad \hat H\hat A\hat B = (Y_2 - Y_1)\,U_s^{-1}, \ \ldots \qquad (69)$$

and for (66)

$$\hat B = S\begin{bmatrix} Y_1 \\ Y_2 - Y_1 \\ \vdots \\ Y_{n^*} - Y_{n^*-1} \end{bmatrix}U_s^{-1} \qquad (70)$$

Partitioning (70) into $m$ columns gives

$$\hat b_i = S\begin{bmatrix} y_i(1) \\ y_i(2) - y_i(1) \\ \vdots \\ y_i(n^*) - y_i(n^*-1) \end{bmatrix}u_{si}^{-1} \quad \text{for } i = 1,\ldots,m \qquad (71)$$

where $\hat b_i$ is the $i$th column of $\hat B$. Note that the matrix $\hat A$ need not be known in order to solve (65) and (66), or (70) and (71), for $\hat B$.
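The step-response recipe (71) can be sketched for the same kind of hypothetical canonical SISO model used earlier: S is the identity, the system starts at rest (the steady state for zero input), and a step of amplitude $u_s$ is applied at k = 0. All numerical values are illustrative:

```python
import numpy as np

# Hypothetical canonical SISO model (S = I); step input of amplitude u_s.
A = np.array([[0.0, 1.0], [-0.5, 0.3]])
B = np.array([[1.0], [0.5]])
H = np.array([[1.0, 0.0]])
nstar, u_s = 2, 2.0

x = np.zeros(2)                  # steady state for zero prior input
y = []
for k in range(nstar + 1):
    y.append((H @ x).item())     # record y(0), y(1), ..., y(n*)
    x = A @ x + B[:, 0] * u_s    # step applied from k = 0 on

# eq. (71): stack y(1), y(2)-y(1), ... and divide by the step amplitude.
b_hat = np.array([y[1], y[2] - y[1]]) / u_s
print(b_hat)                     # recovers the column of B: [1.0, 0.5]
```

Note that `A` enters only the simulation; the estimate itself uses output differences alone, as the text observes.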
3.3 Comparison with Gopinath's Method

Four aspects of identification that are important in practical applications are discussed for Gopinath's method and methods A and B:

1) order of input-output equations
2) input restrictions
3) number of experiments
4) initial conditions

1) The order of the systems of linear input-output equations (28), (43) and (55) varies from method to method. Low-order equations require less computer storage and time and give more accurate results (fewer numerical errors). Methods A and B are always better than Gopinath's method in this respect; method B is the best one in the case of few output components. The following maximum matrix dimensions are obtained in equations (28), (43) and (55) respectively, for a fourth-order system with a maximum of 4 inputs and 4 outputs:

max(n + mn*) = 20 for Gopinath's method
max(n + n*) = 8 for method A
max(n + m) = 8 for method B

2) Only Gopinath's method theoretically facilitates the identification of a model $(\hat A,\hat B,\hat H)$ from normal operating records, but better results will certainly be achieved with the application of chosen input signals. All three methods allow the use of simple input sequences like the square wave illustrated in Figure 3.1.

[Figure 3.1: Simple input sequence]

The following restrictions for the input sequences have to be considered, though: the lower $mn^*$ rows in $U_{n^*}(k)$ in equation (28) have to be linearly independent if we use Gopinath's method. The same is necessary for the lower $n^*$ rows in $U_{i,n^*}(k)$ in equation (43) for method A, whereas for method B equation (46) has to be satisfied. Tests showed that the occurrence of a singular data matrix from such restricted input sequences is unlikely (Experiments 1 and 2).
3) The number of necessary experiments is: 1 for Gopinath's method; 1, with m intervals, for method A; and m + 1 for method B.

4) No initial conditions for the system are required by Gopinath's method, whereas in case A equation (44) has to be satisfied. With method B the system has to be in the steady state when the identification of $\hat B$ is started. This restriction is balanced off by the advantage that the identification of $\hat A$ and $\hat B$ is decoupled in case B.

In conclusion, it may be said that there is a trade-off between experimental and computational requirements. We obtain small matrices and numerical decoupling but require several experiments for method B on the one hand, and obtain large matrices with only one experiment for Gopinath's method on the other hand.

IV. IDENTIFICATION OF REAL-TIME SYSTEMS WITH A MINICOMPUTER

4.1 Experiment

Identification method B has been implemented on a NOVA minicomputer in order to identify models of linear systems realized on the DONNER 3400 analog computer. The experimental setup is shown in Figure 4.1. The NOVA as the central unit supervises the identification experiment; in particular, it controls the A/D conversion and initiates and resets the input sequences. The computer accepts known system parameters (dimension, etc.) from the teletype and computes the model parameters from the measured data after all the samples have been taken. The control logic generates the desired input sequence at a rate given by the external signal generator. The state of the sampling logic tells the waiting computer when a sample can be taken and when the experiment can be started.

4.2
Software for Minicomputer

Because of the 4K memory limitation of the standard NOVA, emphasis was put on efficient storage use rather than fast data processing whenever these two goals were incompatible. On-line computation of $\hat A$ and $\hat B$ would have been possible but would not have resulted in storage savings, as all input-output data had to be stored anyway for teletype output and performance evaluation. It proved to be simpler to acquire all experimental data first and then compute $\hat A$ and $\hat B$ off-line. The program flow chart is shown in Figure 4.2. The experiment supervision and data acquisition are performed by the same subroutine in both case A and case B, even though they are represented by two different boxes in the flow chart.

[Figure 4.1: Scheme of identification experiment. The analog system's inputs and outputs pass through the A/D converter to the NOVA; control logic, stepping motor, sampling logic, clock and teletype complete the setup.]

[Figure 4.2: Flow chart of identification program. START; accept preliminary parameters from TTY (order, selection vector, etc.); branch to A or B; supervision of A-experiment, sampled data are stored, then least-squares estimation of A; or supervision of B-experiment, sampled data are stored, then computation of the average B; return to START.]

The computation of $\hat{\bar A}$ from equation (55) proved to be insufficient even for the small disturbances described in Section 4.3. In order to reduce the influence of measurement noise, more data have to be included in the parameter identification. Equation (55) can be generalized to

$$Z = \hat{\bar A}\,D + V \qquad (72)$$

where $Z$ and $D$ are $(n+m) \times d$ with $d = 1, 2, \ldots$ and $V$ is an array of noise terms. The least-squares estimate of $\hat{\bar A}$ is then

$$\hat{\bar A} = Z D'\,[\,D D'\,]^{-1} \quad \text{for } d > (m+n) \qquad (73)$$

The least-squares program described in [7] was used to solve equation (73) recursively. The algorithm starts out with $d = 1$ and then updates $\hat{\bar A}$ with each new data vector.
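A batch form of the least-squares estimate (73) can be sketched as follows. Here `M` stands in for the true augmented transition matrix, and the sizes and noise level are illustrative rather than those of the NOVA experiments (the thesis itself solves (73) recursively, per [7]):

```python
import numpy as np

# Batch least squares (73): Mhat = Z D' (D D')^{-1}, applied to noisy data (72).
rng = np.random.default_rng(1)
M = np.array([[0.9, 0.1], [-0.2, 0.8]])        # stand-in for the true matrix
d = 200                                        # d > (m+n) data columns
D = rng.standard_normal((2, d))                # data matrix
V = 1e-3 * rng.standard_normal((2, d))         # measurement noise
Z = M @ D + V                                  # eq. (72)
M_hat = Z @ D.T @ np.linalg.inv(D @ D.T)       # eq. (73)
err = np.max(np.abs(M_hat - M))
```

With this noise level the entries of `M_hat` match `M` to within about $10^{-3}$; shrinking the noise or growing `d` tightens the fit, which is the point of generalizing (55) to more than m+n data columns.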
New data vectors are checked for linear independence until the first unique solution (d = m+n) is reached. This check enables us to detect cases of singular or nearly singular data matrices. (The recursive least squares algorithm and the linear independence check are presented in Appendix 2.)

Matrix B is computed columnwise with formula (71); therefore only step inputs as given by (67) and (68) can be used. In the presence of measurement noise we have

b^_i = (1/u_si) [ y_i(1), y_i(2) - y_i(1), ..., y_i(n*) - y_i(n*-1) ]' = b_i + v   for i = 1, ..., m     (74)

where b^_i is the estimate of column b_i and v is a noise vector. If we assume that

E[b^_i - b_i] = E[v] = 0     (75)

and E[vv'] = R, then it is possible to reduce the covariance matrix R by averaging N independent estimates. We compute

b~_i = (1/N) SUM_{j=1}^{N} b^_ij     (76)

where b^_ij is the j-th estimate of b_i and

v_j = b^_ij - b_i,   E[v_j v_k'] = R for j = k, 0 for j /= k     (77)

and get

E[b~_i - b_i] = E[v~] = 0,   E[v~ v~'] = R/N     (78)

The program enables the user to repeat the identification of B several times, and it computes the average of all parameter estimates as a final result.

The program diverges in one point from the described theory. The selector matrix is replaced by the less general selection vector because it saves storage and computing time and is easier to initialize. Whereas selector matrices can select rows out of a data matrix ordered in any possible way, the selection vectors select the rows as they are ordered in the data matrix; e.g., s = [1 0 1 1] indicates that out of 4 rows numbers 1, 3 and 4 are selected in this sequence. The consequence of this is that the model (A, B, H) will not be in the canonical form described in (18).
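The averaging of (76)-(78) is plain sample averaging of repeated column estimates, which shrinks the error covariance from R to R/N. A minimal numerical sketch, where the column b_i, the noise level and the number of repetitions are assumed values for illustration only:

```python
import numpy as np

# Averaging N independent, unbiased estimates b^_ij = b_i + v_j of a
# column of B reduces the error covariance from R to R/N, eq. (76)-(78).
rng = np.random.default_rng(0)
b_true = np.array([1.0, -2.0, 0.5])   # hypothetical column b_i
sigma = 0.1                           # assumed std. dev. of each noise component
N = 200                               # assumed number of repeated experiments

estimates = b_true + sigma * rng.standard_normal((N, 3))  # the b^_ij of (74)
b_bar = estimates.mean(axis=0)                            # eq. (76)

print(np.abs(b_bar - b_true).max())   # error is O(sigma / sqrt(N))
```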
Examples 1 and 2 in Experiment 1 show, however, that the canonical form is easily obtained by interchanging rows and columns in A, rows in B and columns in H.

4.3 Sources of Error

The experimentally found model is inaccurate even for an exactly linear system because of noise-contaminated system outputs, quantization errors in the A/D conversion from continuous signals to 12-bit binary numbers, and computational errors (truncation, round-off). The relative parameter error caused by all the above effects was less than one percent in all experiments of Chapter 5. A further error source is nonideal input signals, as indicated by the dashed lines in Figure 4.3.

Figure 4.3. Ideal input step function and actual input with finite rise time (dashed).

The computation of model parameters is based on the assumption of ideal input step functions and sampling of input and output immediately after the steps. The measuring of an incorrect input amplitude can be prevented by delaying the sampling of the input by tau, and the influence of the nonideal shape on the output can be neglected as long as the condition tau << T is satisfied. A delay of the input sample was built into the program and set at tau = 0.0032 seconds for the experiments in Chapter V.

4.4
Storage Allocation and Time Requirements

The storage allocation in the program is such that at most fourth order systems with up to four inputs and four outputs can be identified. The 4000 available storage locations are apportioned as follows:

1215 for floating point arithmetic and operating software (binary loader etc.)
1255 for the complete identification package (method B)
1125 for data storage
405 empty
4000 total

It was calculated that about 4300 storage locations would be required for the identification of fifth order systems. An estimate of the storage needed by Gopinath's procedure for fourth order systems was carried out under the assumption that only the least squares algorithm of the complete program would be used. It was found that about 7500 storage locations would be required, i.e., twice as much as with method B or A. The recursive least squares algorithm used completes the cycle for one data vector of a fourth order system in about three seconds, which indicates the length of the shortest possible sampling interval for real-time identification of A.

V. EXPERIMENTS AND RESULTS

The purpose of the following experiments was to test the programs written for the proposed identification methods A and B and to study various properties of the algorithms. Linear systems were either realized on the analog computer DONNER 3400 or simulated on the IBM 370. Models identified on the large computer were tested by comparing their step response with that of the original system over the first 15 sampling intervals, whereas models computed on the NOVA with real system data were compared with models found from a disturbance-free system simulation on the IBM 370.

Experiment 1

The following fifth order system with three inputs and two outputs was simulated and identified on the IBM 370:

A (5 x 5),   B (5 x 3),   H (2 x 5)     (0)

The results achieved with four different selector matrices are shown in Table 5.1. Both identification methods were applied in all four cases, method A with an input sequence a, a, 0, 0, a, a, 0, 0, ... and method B with a step input. The equivalence of the models (1), (2), (3), (4) and system (0) was then verified by matching their step responses. The errors caused by round-off were less than 10^-6 for model parameters and step response, i.e., both methods A and B gave identical and correct models.

Table 5.1. The four selector matrices S_1, ..., S_4 with their corresponding selection vectors, and the identified models (A_i, B_i, H_i), i = 1, ..., 4, labelled (1)-(4).

The four models (1), (2), (3) and (4) demonstrate the relation between selector matrix and model structure. Selector matrices S_1 and S_3 are of type 2 and type 3, respectively, and the structures of models (1) and (3) verify equations (18) and (34). Selector matrix S_1 leads to two coupled subsystems of second and third order in A_1, but with S_3 the subsystems in A_3 are of fourth and first order.
Matrix A_3 is in lower triangular form because S_3 is of type 3. Model (4) demonstrates that the complete system (0) is observable through the first output component; consequently A_4 contains only one subsystem and the second row of H_4 becomes "general" according to (36). Models (1), (3) and (4) verify that the selector matrix principle allows one to choose the order and number of subsystems within certain limits.

The selector matrices S_1 and S_2 select the same rows in a different order, where S_1 is of type 2 and S_2 has a corresponding selection vector. The models (1) and (2) are hence very closely related. An exchange of rows and columns in A_1 after the scheme 2 -> 3, 3 -> 5, 4 -> 2, 5 -> 4 gives A_2. In B_1 only the rows and in H_1 only the columns have to be exchanged in order to get B_2 and H_2.

Experiment 2

The following continuous third order system with two inputs and two outputs was realized on the analog computer:

x. = [  0.0    1.0    0.0  ] x + [ 2.0  2.0 ] u
     [ -0.04  -0.1    0.0  ]     [ 0.1  0.0 ]
     [  0.0    0.0   -0.08 ]     [ 1.0  0.0 ]

y  = [ -1.0   1.0   1.0 ] x                        (5)
     [  1.0   0.0   0.0 ]

A sampling period T = 3 seconds, n* = 2 and the selection vector [1 1 1 0] were chosen for the identification of a discrete model. The input sequence for the A-identification is shown in Figure 5.1.

Figure 5.1. Input sequence for the A-identification: pulses of amplitude 0.5 and width 0.03 seconds, synchronized with the sampling pulses of period T.
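Because the identified discrete model is related to the zero-order-hold sampling of (5) by a similarity transformation, its eigenvalues must equal e^(lambda*T) for the eigenvalues lambda of the continuous A matrix. The following sketch checks this consistency for T = 3 s, quoting the A matrix identified on the IBM 370 in this experiment:

```python
import numpy as np

# Continuous-time A of system (5) and the discrete A identified on the
# IBM 370 (model (7)).  A similarity transform preserves eigenvalues, so
# eig(A_ident) should match exp(3 * eig(A_cont)) for T = 3 s.
A_cont = np.array([[ 0.00,  1.00,  0.00],
                   [-0.04, -0.10,  0.00],
                   [ 0.00,  0.00, -0.08]])
A_ident = np.array([[ 0.00000,  0.00000,  1.00000],
                    [ 0.73002,  0.70000, -0.92806],
                    [-0.58131,  0.24084,  1.52562]])

eig_sampled = np.exp(3.0 * np.linalg.eigvals(A_cont))
eig_model = np.linalg.eigvals(A_ident)

print(np.sort_complex(eig_sampled))
print(np.sort_complex(eig_model))   # agreement to roughly 1e-3
```

The check uses only eigenvalues, so it works even though the identified model is in selection-vector coordinates rather than the original state coordinates.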
Steps on input 1 and input 2, respectively, were used for the identification of columns 1 and 2 of B. The model computed on the NOVA with real data as described in Chapter 3 was

A = [  0.00000  -0.00000   1.00000 ]    B = [ -3.44633  -6.03377 ]
    [  0.72637   0.69602  -0.92843 ]        [  6.03390   5.68480 ]
    [ -0.58098   0.24383   1.53274 ]        [ -3.48022  -4.76172 ]

H = [ 1.0  0.0  0.0 ]                                              (6)
    [ 0.0  1.0  0.0 ]

System (5) was also simulated and identified on the IBM 370. The resulting model was

A = [  0.00000   0.00000   1.00000 ]    B = [ -3.47356  -5.98833 ]
    [  0.73002   0.70000  -0.92806 ]        [  6.06758   5.67127 ]
    [ -0.58131   0.24084   1.52562 ]        [ -3.46499  -4.74163 ]

H = [ 1.0  0.0  0.0 ]                                              (7)
    [ 0.0  1.0  0.0 ]

Figure 5.2 shows that the free motions (u = 0) of the analog system and the simulated system are nearly identical. The relative difference between (6) and (7) exceeds 1% for only one parameter in A. This discrepancy is caused by the difference between the analog system and its simulation as well as by the error sources mentioned in Section 4.3. Repetitions of the experiment with changed input amplitudes u_1 = u_2 = 0.35 or with exchanged u_1 and u_2 gave results of the same accuracy.

Experiments were also carried through under the assumption of incorrect system orders 4 and 2. The case n = 4, with real system data as well as simulated data, resulted in linearly dependent data vectors; consequently, no model parameters could be computed. Model parameters were always found in the case n = 2.
However, the parameters depended on the input amplitudes and the initial state of the system and were therefore meaningless.

Experiment 3

Model (6) found in Experiment 2 can be used for the design of a digital controller for the analog system (5). A suboptimal control

u(k) = -F x^(k)     (8)

was chosen to approximately minimize the performance function

J = x'(N) Q_0 x(N) + SUM_{k=0}^{N-1} [ x'(k) Q_1 x(k) + u'(k) Q_2 u(k) ]

with N = 15,

Q_0 = Q_1 = [  2  -1  -1 ]    and    Q_2 = [ 2   0 ]     (9)
            [ -1   1   1 ]                 [ 0  20 ]
            [ -1   1   1 ]

where x^(k) is the observed state of model (A, B, H).

Figure 5.2.

The formulas for the computation of F and the state observation are given in Appendix 3. The experimental scheme is shown in Figure 4.1, where the computer-controlled stepping motors, coupled with precision potentiometers, set the inputs according to (8) after each sampling pulse. F was computed on the NOVA in 30 seconds before the start of the control sequence. Figure 5.2 compares, for the initial state

x'(0) = [ 3.75  -1.0  6.25 ],    y'(0) = [ 1.5  3.75 ],

the output obtained with control (8) with the open-loop case, u(k) = 0 for all stages. The potentiometer control voltages are shown in Figure 5.4. Note the zero input at stage 0, which results from the fact that the first state observation was only possible at stage 1. Control (8) was also simulated on the IBM 370, and the obtained output data are compared with the real output in Figure 5.3.
The oscillation of the real response around the simulated undisturbed output has two main reasons: the input voltage can only be adjusted stepwise, 0.025 V per motor step, and the setting of the second potentiometer is always delayed until the first input is set. The delay in the case of a 0.5 volt step for input 1 is about 0.3 seconds. Further error sources are the inaccuracy of model (6), the ramp function (instead of a step function) produced by the stepping motor, and measurement noise.

Figure 5.4. System inputs: potentiometer voltages u_1 and u_2 (voltage per motor step = 0.025 volts).

VI. CONCLUSIONS

The purpose of this thesis is to identify multi-input multi-output linear systems on-line on a minicomputer. The selector matrix method of Gopinath is employed for the derivation of two new identification methods. Both methods are successfully tested with different input sequences for various systems on a large computer. The implementation of method B on a NOVA minicomputer for the identification of systems simulated on an analog computer gives an estimated 50% storage saving in comparison with Gopinath's method, and very accurate results. The main features of the two methods are:

1) Lower computational requirements (less storage, more accuracy) than Gopinath's method.

2) Simple input sequences can be used. The few existing restrictions prove to be insignificant for practical applications.

3) Each column of the input matrix B is identified from different data. In method B the identification of the state transition matrix A and the input matrix B are completely decoupled.
4) Some restrictions for the initial system state exist, but they can be satisfied quite easily.

5) The selector matrix has to be known in advance. Two classes of selector matrices that lead to canonical models are proposed and tested.

The main problem that remains to be solved is the extension of the proposed methods to the case of noise-contaminated data. Both methods, as well as Gopinath's procedure, lead to a system of linear equations as shown below:

D_1 = P D_2 + V     (1)

where D_1 and D_2 contain measured data, P is the array of unknown fixed parameters and V is an array of measurement noise terms. The classical method of least squares computes the estimate P^ such that the cost function

J = trace (D_1 - P^ D_2)(D_1 - P^ D_2)'     (2)

is minimized. However, even independent measurement errors (additive noise), i.e.,

y(k) = Hx(k) + n(k),

result in correlated residual vectors in (1). The reason for this is that any component of the output vector y(k) may appear in more than one column of D_1. As a consequence of correlated residual vectors the least squares estimate will be biased. This bias can be eliminated by the use of one of the many sophisticated estimation methods described in [9], [13] and [14].

Another remaining problem is the finding of the selector matrix. An efficient algorithm to determine selector matrices from noisy data would certainly make the proposed identification methods more useful. The question of "optimal" input sequences with respect to "good" parameter estimates is another problem that requires further investigation.

REFERENCES

[1] Gopinath, B., "On the Identification of Linear Time-Invariant Systems from Input-Output Data", Bell Syst. Tech. J., Vol. 48, pp. 1101-1113, May 1969.
[2] Budin, M.A., "Minimal Realization of Discrete Linear Systems from Input-Output Observations", IEEE Trans. Automat. Contr., Vol. AC-16, pp. 395-401, Oct. 1971.

[3] Bonivento, C., Guidorzi, R. and Marro, G., "Irreducible Canonical Realizations from External Data Sequences", Int. J. Control, Vol. 17, No. 3, pp. 553-563, 1973.

[4] Ackermann, J.E., "Die minimale Ein-Ausgangs-Beschreibung von Mehrgrossensystemen und ihre Bestimmung aus Ein-Ausgangs-Messungen", Regelungstechnik, Heft 5, pp. 203-206, 1971.

[5] Luenberger, D.G., "Observers for Multivariable Systems", IEEE Trans. Automat. Contr., Vol. AC-11, pp. 190-197, April 1966.

[6] Bucy, R.S., "Canonical Forms for Multivariable Systems", IEEE Trans. Automat. Contr., Vol. AC-13, pp. 567-569, Oct. 1968.

[7] Tapp, R.J., "A Parameter Estimation Algorithm for Small Digital Computers", Master's Thesis, Dept. of Electrical Engineering, UBC, July 1972.

[8] Bingulac, S.P. and Djorovic, M., "Estimation of State Variable Models Using Input-Output Data", IFAC Symp. on Estimation, TS-14, pp. 995-1004, June 1973.

[9] Astrom, K.J. and Eykhoff, P., "System Identification - A Survey", Automatica, Vol. 7, pp. 123-162, 1971.

[10] Ackermann, J.E., "Abtastregelung", Springer Verlag, New York, 1972.

[11] Kalman, R.E., "Mathematical Description of Linear Dynamical Systems", SIAM J. on Control, Vol. 1, No. 2, pp. 152-192, 1963.

[12] Kwakernaak, H. and Sivan, R., "Linear Optimal Control Systems", John Wiley and Sons, New York, 1972.

[13] Isermann, R., "Comparison of Six On-Line Identification and Parameter Estimation Methods", Automatica, Vol. 10, pp. 81-103, 1974.

[14] Saridis, G.N., "Comparison of On-Line Identification Algorithms", Automatica, Vol. 10, pp. 69-79, 1974.

[15] Gopinath, B., "On the Control of Linear Multiple Input-Output Systems", Bell Syst. Tech. J., Vol. 50, pp. 1063-1081, March 1971.

Proof of Theorem 1:
APPENDIX 1

Assuming that

H = [ I_l  0 ]     (1)

we get for (15) a matrix T of selected rows of

[ H; HA; ...; HA^(n*-1) ]     (2)

where SUM_{i=1}^{l} n_i = n and n* >= n_i >= 1 for i = 1, ..., l. Luenberger [5] has shown that for a completely observable system (A, B, H) with rank[H] = l and observability coefficient n* there exists a sequence n_1, n_2, ..., n_l such that T given by (15) is nonsingular. q.e.d.

Proof of Theorem 2:

Using the known relation

S y~(k) = S [ H; HA; ...; HA^(n*-1) ] x(k) + S R_1 u~(k) = T x(k) + S R_1 u~(k)     (3)

where R_1 is the block triangular matrix with block rows (0), (HB, 0), ..., (HA^(n*-2)B, ..., HB, 0), we get

[ T  S R_1 ] [ x(k) ... x(k+n+mn*-1) ]
[ 0    I   ] [ u(k) ... u(k+n+mn*-1) ]  =  Q P     (4)

The (n+mn*) x (n+mn*) matrix Q is nonsingular because T is nonsingular by (15), and therefore

rank[u~_{n,A}(k)] = rank[P]

The rest of the proof follows from Lemma 2 in Gopinath's paper [1].

Definition of a Cyclic Matrix: C is an m x m matrix and u(k) is m x 1; C is cyclic if there exists u(k) such that the matrix

Gamma = [ u(k)  Cu(k)  ...  C^(m-1)u(k) ]

is nonsingular.

APPENDIX 2

The Recursive Least Squares Algorithm of Tapp [7]

The relation between a collection of input-output data and model parameters was shown in (72) to be given by

Z = AD + V     (1)

where Z = [z_1 z_2 ... z_p] and D = [d_1 d_2 ... d_p] are s x p, A is an s x s matrix of unknown fixed parameters and V is an array of noise terms. The least squares estimate A_e of the parameters A minimizes the cost

J = trace (Z - A_e D)(Z - A_e D)'     (2)

and can be computed recursively as follows, with k = 1, 2, ...:
c_k = (I - Q_{k-1}) d_k     (3)

b_k = c_k (c_k' c_k)^-1     (4)

For k <= s:

Q_k = Q_{k-1} + c_k b_k',   Q_0 = 0 at k = 0     (5)

R_k = R_{k-1} - R_{k-1} d_k b_k' - b_k d_k' R_{k-1} + (1 + d_k' R_{k-1} d_k) b_k b_k',   R_0 = 0 at k = 0     (6)

A_{e,k} = A_{e,k-1} + b_k (z_k' - d_k' A_{e,k-1}),   A_{e,0} = 0 at k = 0     (7)

For k > s:

b_k = P_{k-1} d_k (1 + d_k' P_{k-1} d_k)^-1     (8)

P_k = P_{k-1} - b_k d_k' P_{k-1},   P_k = R_k at k = s     (9)

A_{e,k} = A_{e,k-1} + b_k (z_k' - d_k' A_{e,k-1}),   A_{e,k} from (7) at k = s     (10)

Whenever the first s data vectors d_1, d_2, ..., d_s are linearly dependent, equation (3) will give

c_k = (I - Q_{k-1}) d_k = 0

Since the recursive least squares procedure and condition (56) require that d_1, d_2, ..., d_s are linearly independent, the value of c_k for a new data vector is an extremely useful indicator of linear independence.

APPENDIX 3

Suboptimal Control:

The solution of the optimal control problem for the deterministic system

x(k+1) = A x(k) + B u(k)     (1)

with the criterion

J = x'(N) Q_0 x(N) + SUM_{k=0}^{N-1} [ x'(k) Q_1 x(k) + u'(k) Q_2 u(k) ]     (2)

is a time-dependent feedback

u(k) = -F(k) x(k)     (3)

where

F(k) = [Q_2 + B'S(k+1)B]^-1 B'S(k+1)A
S(k) = [A - BF(k)]' S(k+1) A + Q_1     (4)
S(N) = Q_0

The matrices Q_0 and Q_1 must be symmetric and nonnegative, and the matrix Q_2 positive definite. It is known that F(k) and S(k) asymptotically approach "steady state" values for decreasing k. The suboptimal control

u(k) = -F(0) x(k) = -F x(k)     (5)

is a good approximation of (3) because F(k) departs significantly from its asymptotic value only over the terminal stages of the process, when x(k) is already small. The larger N, the better the approximation will be. The relative increase of the cost J for suboptimal control of the system in Experiment 3 was computed.
It was less than 3% for N = 15.

Observer:

The particular structure of (A^, B^, H^) described in (18) gives a very simple state observer. At stage k + n* - 1 the past model state x^(k) is found from (22):

x^(k) = S y~(k) - S R_1 u~(k)

where u~(k) and y~(k) are defined in (20) and S R_1 has the form given in (23). By repeated use of x^(k+1) = A^ x^(k) + B^ u(k) and u(k), u(k+1), ..., u(k+n*-2) we get the present model state x^(k + n* - 1). Note that the first observation is possible at stage n* - 1 of the process; therefore an n* as small as possible should be chosen for the identification (see Experiment 1).
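The backward recursion (4) with F = F(0) can be sketched directly; the second-order plant and the weighting matrices below are illustrative assumptions, not the data of Experiment 3.

```python
import numpy as np

def suboptimal_gain(A, B, Q0, Q1, Q2, N):
    """Backward recursion (4): starting from S(N) = Q0, compute
    F(k) = [Q2 + B'S(k+1)B]^-1 B'S(k+1)A and
    S(k) = [A - B F(k)]' S(k+1) A + Q1, and return F = F(0)."""
    S = Q0.copy()
    F = None
    for _ in range(N):
        F = np.linalg.solve(Q2 + B.T @ S @ B, B.T @ S @ A)
        S = (A - B @ F).T @ S @ A + Q1
    return F

# usage: an assumed unstable second-order plant; for large N the gain
# approaches its steady-state value and A - B F becomes stable
A = np.array([[1.1, 0.2], [0.0, 0.9]])
B = np.array([[0.0], [1.0]])
F = suboptimal_gain(A, B, np.eye(2), np.eye(2), np.array([[1.0]]), N=50)
print(np.abs(np.linalg.eigvals(A - B @ F)).max())   # spectral radius < 1
```

As in Appendix 3, only F(0) is retained; the time-varying tail of F(k) matters only over the terminal stages, when the state is already small.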
