Open Collections: UBC Theses and Dissertations
Linear systems identification algorithms for a minicomputer. Zuercher, Heinrich. 1974-01-29.
LINEAR SYSTEMS IDENTIFICATION ALGORITHMS FOR A MINICOMPUTER

by

HEINRICH ZUERCHER

Diploma in El. Eng., Federal Institute of Technology, Switzerland, 1972

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in the Department of Electrical Engineering

We accept this thesis as conforming to the required standard.

Research Supervisor, Members of Committee, Head of Department, Members of the Department of Electrical Engineering

THE UNIVERSITY OF BRITISH COLUMBIA, November, 1974

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my Department or by his representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical Engineering, The University of British Columbia, Vancouver 8, Canada

ABSTRACT

Two methods for identifying multiple-input multiple-output linear time-invariant discrete systems from input-output data are derived. The selector matrix principle of Gopinath is used and two special classes of selector matrices that lead to canonical system models are introduced. Very general input sequences can be applied. However, a few restrictions exist for the initial system state. The input matrix is identified columnwise. Both methods are considerably better than Gopinath's method in terms of storage requirements and numerical accuracy. Tests on a large computer are performed. One method is implemented on a minicomputer with good results.

TABLE OF CONTENTS

I. INTRODUCTION
II. GOPINATH'S METHOD USING SPECIAL SELECTOR MATRICES
2.1 Problem Statement
2.2 Two Special Classes of Selector Matrices
2.3 The Algorithm with Selector Matrix Type 2
2.4 The Algorithm with Selector Matrix Type 3
2.5 Summary
III. MODIFIED VERSIONS OF GOPINATH'S METHOD
3.1 Method A
3.2 Method B
3.2.1 A-Algorithm
3.2.2 B-Algorithm
3.3 Comparison with Gopinath's Method
IV. IDENTIFICATION OF REAL-TIME SYSTEMS WITH A MINICOMPUTER
4.1 Experiment
4.2 Minicomputer Software
4.3 Sources of Error
4.4 Storage Allocation and Time Requirements
V. EXPERIMENTS AND RESULTS
VI. CONCLUSIONS
REFERENCES
APPENDIX 1
APPENDIX 2
APPENDIX 3

LIST OF FIGURES

3.1 Simple input sequence
4.1 Scheme of identification experiment
4.2 Flow chart of identification program
4.3 Nonideal square wave
5.1 Input sequence for identification
5.2 Controlled and uncontrolled system output
5.3 Suboptimal system output behaviour
5.4 Suboptimal control signal

LIST OF TABLES

5.1 Models with different selector matrices

ACKNOWLEDGEMENT

I wish to thank Dr. E.V. Bohn, my supervisor, for his advice, guidance and interest during the research and writing of my thesis. I am thankful to Mr. Dave Holmes for his assistance with the hardware work. The financial support received in the form of U.B.C. Graduate Fellowships from 1972 to 1974 and under the National Research Council Grant 67-3134 is gratefully acknowledged.

I. INTRODUCTION

Aim of Thesis

Recent advances in the capabilities of process computers and corresponding digital interface equipment have made the application of "modern" control techniques much more feasible. "Modern" design methods for multi-input multi-output linear time-invariant systems require a state-space model of the system, and the problem of identifying such models from experimental data has therefore been the subject of intensive research in recent years.
Several identification methods have been proposed and tested on large computers. They all require extensive data processing, which makes their implementation on minicomputers inefficient if not impossible. The aim of this thesis is to develop and test on-line identification algorithms that can be realized efficiently on minicomputers.

Background

It is essential for the practical usefulness of an identification method that the class of applicable input signals is not restricted to particular inputs such as impulse functions, step functions or white noise. The ideal case would certainly be identification from normal operating records. Another practically relevant requirement regards the minimality of the number of significant parameters of the model, and this requires the choice of a suitable canonical form whenever this is possible. Moreover, the simplicity of the link between the selected state-space representation and the set of input-output equations used for the identification heavily conditions the required amount of computation. The above problems have been solved for single-input single-output or multi-input single-output systems [9], but no unified approach for multi-input multi-output systems that satisfies all previous requirements can be found in the literature. So far a number of methods that allow the use of general input-output records have been suggested: Gopinath [1], Budin [2], Bonivento and Guidorzi [3], Ackermann [4] and Bingulac and Djorovic [8]. Gopinath introduces the principle of a data selector matrix which enables the identification method to be formulated concisely and in a manner suitable for digital computation. However, the models found generally have no canonical structure.
The computational part of this procedure, especially the determination of a selector matrix, has been improved by Budin. Bonivento and Ackermann both introduce a particular canonical system model and then identify the reduced number of model parameters using the corresponding input-output equations. However, the selection and processing of input-output data is more complicated than with Gopinath's selector matrix. Bingulac's procedure leads to a model in Jordan canonical form and requires the determination of system eigenvalues, i.e. polynomial roots. This makes the treatment of noisy data very difficult, which is a serious drawback.

Outline of Work

An examination of Gopinath's method in chapter II shows that a slightly more general definition of the selector matrix allows us to consider both Ackermann's and Bonivento's approach as special cases of Gopinath's selector matrix formulation. The generality of Gopinath's procedure justifies its further investigation. Two modified versions of Gopinath's method are proposed in chapter III and the implementation of one such method on a minicomputer is described in chapter IV. Finally chapter V presents experimental results and examples whereas chapter VI contains the conclusions.

II. GOPINATH'S METHOD USING SPECIAL SELECTOR MATRICES

2.1 Problem Statement

A linear time-invariant multiple-input multiple-output system can be represented by the state equations

x(k+1) = A x(k) + B u(k)   (1)
y(k) = H x(k)   (2)

where x is an n x 1 state vector, y is an ℓ x 1 output vector, u is an m x 1 input vector, A is an n x n state transition matrix, B is an n x m input matrix and H is an ℓ x n output matrix. The system is thus specified by the triplet (A,B,H). (A,B,H) is assumed to be completely observable and controllable because otherwise it is impossible to identify the system from input-output data.
The problem is to find a model (Â,B̂,Ĥ) of the same dimension as (A,B,H) from a sequence of inputs u(k) and outputs y(k) such that (Â,B̂,Ĥ) simulates the input-output behaviour of (A,B,H). It is clear that the number of correct models (Â,B̂,Ĥ) is infinite and that there are input sequences u(k) which will not be sufficient to uniquely specify any realization (Â,B̂,Ĥ). The use of a selector matrix for the processing of data splits the problem into two parts:

1) The finding of a selector matrix. This is equivalent to determining the order and structure of (Â,B̂,Ĥ).
2) The identification of all parameters in (Â,B̂,Ĥ).

This thesis deals with an efficient on-line computer solution of problem part 2) and a selector matrix is, therefore, always assumed to be known. Two particular classes of selector matrices are introduced in the next section and then used throughout the thesis.

2.2 Two Special Classes of Selector Matrices

A more general class of selector matrices than the one introduced in [1] and used in [2] is defined below.

Definition 1: S_s denotes the set of n x d matrices (n ≤ d) with the following properties:

1) S = {s_ij} where s_ij = 0 or 1   (3)
2) for every i, s_ij = 1 for one and only one j, say j_i   (4)
3) j_i ≠ j_k for i ≠ k, i, k = 1, 2, ..., n   (5)

This definition implies that when premultiplying a d x p matrix A with S the resulting n x p matrix SA consists of n of the rows of A ordered in accordance with the unit row vectors of S. Note that with selector matrices as defined in [1] and [2] the n rows in SA are always ordered as they are in A. However, with selector matrices as defined in Definition 1, the n rows of SA can be ordered in any possible way. We can now define the following special class of selector matrices.

Definition 2: S_s2 denotes the set of n x d matrices with the following properties:

1) S_s2 ⊂ S_s   (6)
2) d is an integral multiple of ℓ   (7)
3) S' = [S1' S2' ... Sℓ']   (8)
where S_k = {s_ij} are n_k x d matrices with j_i = k + (i-1)ℓ, i = 1, ...
n_k, k = 1, ..., ℓ, and j_i is defined as in (4)
4) n_k ≥ 1 for k = 1, ..., ℓ   (9)

An example of a selector matrix S ∈ S_s2 is S' = [S1' S2'] with

S1 = | 1 0 0 0 0 0 |
     | 0 0 1 0 0 0 |
     | 0 0 0 0 1 0 |

S2 = | 0 1 0 0 0 0 |
     | 0 0 0 1 0 0 |

where n = 5, ℓ = 2, d = 3ℓ = 6, n1 = 3 and n2 = 2.

The advantage gained from the application of a selector matrix of type 2 is demonstrated in section 2.3. The use of a selector matrix of type 3 as defined below will be shown in section 2.4. However, it is not necessary for the understanding of the remainder of the thesis, except section 2.4, to read Definition 3, which is obtained by slightly changing (8) and (9) in Definition 2.

Definition 3: S_s3 denotes the set of n x d matrices with the following properties:

1) S_s3 ⊂ S_s   (10)
2) d is an integral multiple of ℓ   (11)
3) S' = [S1' ... Sh']   (12)
where S_k = {s_ij} are n_k x d matrices with j_i = k + (i-1)ℓ, i = 1, ..., n_k, k = 1, ..., h, h ≤ ℓ, and j_i is defined as in (4)
4) n_k ≥ 1 for k = 1, ..., h   (13)

An example of a selector matrix S ∈ S_s3 is

S = | 1 0 0 0 0 0 0 0 0 0 |
    | 0 0 1 0 0 0 0 0 0 0 |
    | 0 0 0 0 1 0 0 0 0 0 |
    | 0 0 0 0 0 0 1 0 0 0 |
    | 0 0 0 0 0 0 0 0 1 0 |

where n = 5, ℓ = 2, d = 5ℓ = 10, n1 = 5, n2 = 0, h = 1.

2.3 The Algorithm with Selector Matrix Type 2

Gopinath's derivation starts out from the assumption that a selector matrix S exists such that

S | H          | = T   where T is nonsingular   (14)
  | HA         |
  | ...        |
  | HA^(n*-1)  |

where n* is the observability index of (A,B,H), n* ≤ n. Theorem 1 states that there always exists a selector matrix of type 2 that satisfies (14).

Theorem 1: If the system (A,B,H) is completely observable and the matrix H has rank ℓ, i.e. p[H] = ℓ, then there exists an S ∈ S_s2 of dimension n x ℓn* such that

S | H          | = T   where T is nonsingular   (15)
  | HA         |
  | ...        |
  | HA^(n*-1)  |

The proof for Theorem 1 is described in Appendix 1.
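Definition 2 can be made concrete with a short numerical sketch (an illustrative NumPy construction, not part of the thesis; the helper name `type2_selector` is ours). For the type-2 example above with n = 5, ℓ = 2 and d = 6, S picks rows 1, 3, 5 and then 2, 4 of any 6-row matrix.

```python
import numpy as np

def type2_selector(block_sizes, ell, d):
    """Build a selector matrix S in S_s2 (Definition 2), row by row.

    Block k (k = 0 .. ell-1) contributes block_sizes[k] unit rows; its ith
    row selects column j = k + i*ell (0-based), i.e. every ell-th column
    starting at column k -- the 1-based rule j_i = k + (i-1)*ell of the text.
    """
    rows = []
    for k in range(ell):
        for i in range(block_sizes[k]):
            e = np.zeros(d)
            e[k + i * ell] = 1.0
            rows.append(e)
    return np.array(rows)

# The type-2 example of the text: n = 5, ell = 2, d = 3*ell = 6,
# n1 = 3 (rows 1, 3, 5 selected), n2 = 2 (rows 2, 4 selected).
S = type2_selector([3, 2], 2, 6)
M = np.arange(24.0).reshape(6, 4)   # any 6 x p matrix
SM = S @ M                          # rows 1, 3, 5, 2, 4 of M (1-based)
```

Each row of S has exactly one 1 and distinct column indices, so S satisfies (3)-(5) as well.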
The next step in the procedure is the introduction of a basis for the model (Â,B̂,Ĥ) such that

S | Ĥ          | = I_n   (16)
  | ĤÂ         |
  | ...        |
  | ĤÂ^(n*-1)  |

(14) and (16) imply directly that

Â = T A T^(-1),  B̂ = T B,  Ĥ = H T^(-1)   (17)

It is well known that a state transformation of (A,B,H) with the transformation matrix T described in (15) leads to a model (Â,B̂,Ĥ) with the following observable canonical form: the rows of Â are unit row vectors (a shifted-identity structure) except the rows n1, n1+n2, ..., n, which carry the free parameter rows a_k1, a_k2, ..., a_kℓ; B̂ is a full n x m matrix; and Ĥ consists of ℓ unit row vectors.   (18)

Because of Theorem 1 we can, without loss of generality, continue the derivation of Gopinath's input-output equation assuming that (Â,B̂,Ĥ) satisfies (18). Simple manipulation of (1), (2) and the equivalence of (A,B,H) and (Â,B̂,Ĥ) yield

y̅(k) = | Ĥ          | x̂(k) + R1 u̅(k)   (19)
        | ĤÂ         |
        | ...        |
        | ĤÂ^(n*-1)  |

where

y̅'(k) = [y'(k) y'(k+1) ... y'(k+n*-1)]
u̅'(k) = [u'(k) u'(k+1) ... u'(k+n*-1)]   (20)

and

R1 = | 0                                |
     | ĤB̂          0                   |
     | ĤÂB̂         ĤB̂       0         |
     | ...                              |
     | ĤÂ^(n*-2)B̂  ...       ĤB̂    0 |   (21)

The multiplication of (19) by S, using equations (16) and (18), gives

S y̅(k) = x̂(k) + S R1 u̅(k)   (22)

where the structure of S R1 is given by (23). Substituting x̂(k) and x̂(k+1) from (22) into x̂(k+1) = Â x̂(k) + B̂ u(k) yields Gopinath's direct input-output relation

S y̅(k+1) = [Â R] | S y̅(k) |   (24)
                  | u̅(k)   |

where

R = -Â S R1 + S | ĤB̂          0             |   (25)
                | ĤÂB̂         ĤB̂    0      |
                | ...                        |
                | ĤÂ^(n*-1)B̂  ...     ĤB̂  |

(24) is an external description in the sense that the variables u(k), y̅(k) can be measured externally. Equation (24) does not involve a system state. With relations (18), (23) and (25) it can be verified that [Â R] has the structure shown in (26): apart from unit row vectors, all significant parameters of [Â R] appear in the rows n1, n1+n2, ..., n when a selector matrix of type 2 is used.
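The change of basis in (17) leaves the input-output behaviour untouched, which is what makes (Â,B̂,Ĥ) a valid model. A quick numerical check (NumPy, with an arbitrary small system and an arbitrary nonsingular T, not the thesis example) confirms that (A,B,H) and (TAT^(-1), TB, HT^(-1)) produce identical outputs for the same input sequence when the initial state is transformed accordingly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, ell = 3, 2, 2
A = 0.4 * rng.standard_normal((n, n))             # arbitrary system, not from the thesis
B = rng.standard_normal((n, m))
H = rng.standard_normal((ell, n))
T = rng.standard_normal((n, n)) + 3.0 * np.eye(n)  # a (generically) nonsingular T

A_hat = T @ A @ np.linalg.inv(T)                  # equations (17)
B_hat = T @ B
H_hat = H @ np.linalg.inv(T)

def simulate(A, B, H, x0, inputs):
    """Run (1)-(2) and return the output sequence y(0), y(1), ..."""
    x, ys = x0, []
    for u in inputs:
        ys.append(H @ x)
        x = A @ x + B @ u
    return np.array(ys)

inputs = rng.standard_normal((10, m))
x0 = rng.standard_normal(n)
y_sys = simulate(A, B, H, x0, inputs)
y_mod = simulate(A_hat, B_hat, H_hat, T @ x0, inputs)   # same input-output map
```

The transformed state x̂(k) = T x(k) satisfies x̂(k+1) = TAT^(-1) x̂(k) + TB u(k), so both simulations trace the same outputs.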
Therefore the number of parameters that have to be estimated is reduced from (n + mn*)n to (n + mn*)ℓ. The determination of [Â R] is straightforward. Using (24) we have

S [y̅(k+1) ... y̅(k+n+mn*)] = [Â R] | S y̅(k)  ...  S y̅(k+n+mn*-1) |   (27)
                                     | u̅(k)    ...  u̅(k+n+mn*-1)  |

which can be written compactly as

S Y_n*(k+1) = [Â R] U_n*(k)   (28)

Equation (26) allows us to introduce a reduced selector matrix

S_red = | s_n1       |   (29)
        | s_(n1+n2)  |
        | ...        |
        | s_n        |

where s_i is the ith row of S and S_red is ℓ x n*ℓ. Substituting S_red into (28) we get (30), which has the form

S_red Y_n*(k+1) = [Â_red R_red] U_n*(k)   (31)

Note that (31) represents the input-output equations derived by Bonivento and Guidorzi in [3]. Theorem 2 states conditions under which a unique solution for [Â R] and therefore [Â_red R_red] can be found.

Theorem 2: If (A,B,H) is completely observable and controllable, S ∈ S_s2, and the u(k) are random variables with joint nonlattice distribution, then U_n*(k) is "almost surely" nonsingular, i.e. there exists a unique solution for [Â R] with probability one.

The proof of this theorem is given in Appendix 1. Once [Â R] is known, B̂ is found as described in [1]:

B̂ = R_0 + Â R_1 + ... + Â^(n*-1) R_(n*-1)   (32)

where R = [R_0 R_1 ... R_(n*-1)] with each R_i an n x m matrix.   (33)

Using equation (18), it can be verified that Ĥ is found in the following manner:

Ĥ = {ĥ_ij} where ĥ_ij = s_ij for i = 1, ..., ℓ and j = 1, ..., n, with S = {s_ij}   (34)

2.4 The Algorithm with Selector Matrix Type 3

As in the preceding section it can be shown that there always exists a selector matrix of type 3 such that equations (28) and (32) lead to a matrix Â in lower block triangular form: unit row vectors everywhere except the rows n1, n1+n2, ..., n, which carry the parameter rows a_k1, ..., a_kk (with a_kj = 0 for j > k), together with a full matrix B̂.   (35)

Note that in this particular case equation (31) represents the input-output representation derived by Ackermann in [4].
The number of significant parameters in Â is reduced because Â is in lower triangular form. However, the output matrix Ĥ contains more significant parameters: the first h rows of Ĥ are unit row vectors, while rows h+1, ..., ℓ are general row vectors ĥ_(h+1)', ..., ĥ_ℓ'.   (36)

Equation (34) can still be used to find the first h rows of the matrix Ĥ. The finding of the bottom rows requires the solution of the following system of linear equations:

| y_(h+1)(k)  ...  y_(h+1)(k+n-1) |   | ĥ_(h+1)' |
| y_(h+2)(k)  ...  y_(h+2)(k+n-1) | = | ĥ_(h+2)' | [x̂(k) x̂(k+1) ... x̂(k+n-1)]   (37)
| ...                             |   | ...      |
| y_ℓ(k)      ...  y_ℓ(k+n-1)     |   | ĥ_ℓ'     |

where y_j(k) is the jth component of y(k). The model states x̂(k), x̂(k+1), ... can be found with equation (22) after Â and B̂ have been computed.

2.5 Summary

It has been shown that a selector matrix of type 2 in conjunction with equations (28), (32) and (34) leads directly to a canonical observable model (Â,B̂,Ĥ) as in (18). It has also been proved that such a selector matrix exists for any observable and controllable system with p[H] = ℓ and that a unique solution of (28) is guaranteed for a random input sequence. Similarly, a selector matrix of type 3 and the equations (28), (32), (34) and (37) yield a model as in (35) and (36). The proof that a selector matrix of type 3 always exists and that a unique solution of (28) and (37) is guaranteed for a random input sequence is analogous to the proof for selector matrix type 2.

III. MODIFIED VERSIONS OF GOPINATH'S METHOD

3.1 Method A

A reduction of the matrix dimensions in equation (28) is achieved by decomposing the identification experiment into m intervals. A different input sequence is used during each interval and the data for one interval are used to identify Â and one column of B̂. After interchanging columns in R and components in u̅(k), equation (24) can be split up in the following way:

S y̅(k+1) = Â S y̅(k) + R_1 u̅_1(k) + ... + R_m u̅_m(k)   (38)

where

R_i = [r_i  r_(i+m)  ...  r_(i+(n*-1)m)]  and r_i is the ith column of R
u̅_i'(k) = [u_i(k) u_i(k+1) ...
u_i(k+n*-1)]  where u_i(k) is the ith component of u(k)   (39)

Assuming that

u_j(k) = 0 for j ≠ i   (40)

we get the following partial input-output equation analogous to (24):

S y̅(k+1) = [Â R_i] | S y̅(k)  |   for i = 1, ..., m   (41)
                    | u̅_i(k)  |

The determination of [Â R_i] is straightforward. We have

S [y̅(k+1) ... y̅(k+n+n*)] = [Â R_i] | S y̅(k)   ...  S y̅(k+n+n*-1)  |   for i = 1, ..., m   (42)
                                     | u̅_i(k)   ...  u̅_i(k+n+n*-1)  |

which can be written compactly as

S Y_n*(k+1) = [Â R_i] U_(i,n*)(k)   for i = 1, ..., m   (43)

A unique solution for [Â R_i] exists whenever U_(i,n*)(k) is nonsingular. We first assume that the one-input system (A,b_i,H) is controllable, where b_i is the ith column of B. Then Theorem 2 applies to (A,b_i,H), i.e. a unique solution for [Â R_i] exists with probability one whenever the input u_i(j) is random. Most likely not all of the m systems (A,b_i,H) will be controllable. If (A,b_i,H) is not controllable, general conditions that guarantee U_(i,n*)(k) to be nonsingular have not been found. However, whatever input sequence u_i(j) is used, the starting condition

x(k) ≠ 0, and therefore x̂(k) ≠ 0   (44)

is necessary for U_(i,n*)(k) to be nonsingular. The proof follows directly from an equation analogous to (4) in Appendix 1. Once [Â R_i] is determined, the column b̂_i of B̂ is found directly from

b̂_i = r̅_0 + Â r̅_1 + ... + Â^(n*-1) r̅_(n*-1)   (45)

where R_i = [r̅_0 r̅_1 ... r̅_(n*-1)] with each r̅_j an n x 1 vector. Relation (45) follows from (32) and (39).

3.2 Method B

Particular input sequences allow a modification of (A,B,H) into an augmented system eliminating the input matrix. The identification of the augmented system with Gopinath's method will give Â directly and is simplified because the term R in the input-output equation (24) disappears. B̂ is then identified columnwise with a different algorithm which does not require any knowledge about Â.

3.2.1 A-Algorithm

For any input sequence of the type

u(k+1) = C u(k)   for k = 1, ...
(46)

where C is m x m, the dynamic behaviour of system (1), (2) can be described by the following augmented state equations

x̃(k+1) = Ã x̃(k)
ỹ(k) = H̃ x̃(k)   (47)

where

x̃'(k) = [u'(k) x'(k)]
ỹ'(k) = [u'(k) y'(k)]

and

Ã = | C  0 |     H̃ = | I_m  0 |   (48)
    | B  A |          | 0    H |

It is obvious that

p[H̃] = m + p[H]   (49)

We define the following augmented (m+n) x (m+ℓ)n* selector matrix

S̃ = | I_m  0  |  0  0  | ... |  0  0    |   (50)
    | 0    S1 |  0  S2 | ... |  0  S_n* |

where the 0 blocks in the bottom row are n x m and S = [S1 S2 ... S_n*], with each S_i an n x ℓ matrix, is the selector matrix for the original system (A,B,H) as used in (15) and (24).

It is now easy to show that the selector matrix S̃ can be used to identify an augmented model with Gopinath's method. Multiplying S̃ with the observability matrix of (Ã,H̃) and using (15), (48) and (50) gives

S̃ | H̃          | = | I_m  0 | = T̃   (51)
   | H̃Ã         |   | X    T |
   | ...         |
   | H̃Ã^(n*-1)  |

Because T is nonsingular, T̃ must be nonsingular too, i.e. the augmented system (Ã,H̃) as defined in (48) is observable. Furthermore S̃ can be used to identify a model of (Ã,H̃). Analogous to (17) we obtain the augmented model matrix

A* = T̃ Ã T̃^(-1)   (52)

which leads with (48) and (51) to

A* = | C  0 |   (53)
     | X  Â |

where Â is identical to Â in (18) and (24). Because (Ã,H̃) has no input matrix, equation (24) reduces to

S̃ ỹ̅(k+1) = A* S̃ ỹ̅(k)   (54)

where ỹ̅(k) is the augmented analogue of y̅(k) in (20), and equation (28) becomes

S̃ Ỹ_n*(k+1) = A* S̃ Ỹ_n*(k)   (55)

with Ỹ_n*(k) = [ỹ̅(k) ỹ̅(k+1) ... ỹ̅(k+n+m-1)]. A unique solution for A* is found whenever

S̃ Ỹ_n*(k) = | u(k)     C u(k)     ...  C^(n+m-1) u(k) |   (56)
             | S y̅(k)  S y̅(k+1)  ...  S y̅(k+n+m-1)  |

is nonsingular. Sufficient and practically relevant conditions for C, u(k) and the initial system state x(k) such that (56) is satisfied have not been found. However, it is necessary that matrix C be cyclic (definition in Appendix 1). This is no serious restriction.
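The augmentation in (47)-(48) can be sanity-checked numerically: with x̃'(k) = [u'(k) x'(k)], one step of the augmented system must reproduce u(k+1) = C u(k) together with x(k+1) = A x(k) + B u(k). A small NumPy sketch with arbitrary matrices (illustrative only, not the thesis experiment):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2
A = rng.standard_normal((n, n))   # arbitrary illustrative matrices
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, m))   # input-generating matrix of (46)

# Augmented state matrix of (48): the input is absorbed into the state.
A_tilde = np.block([[C, np.zeros((m, n))],
                    [B, A]])

u0 = rng.standard_normal(m)
x0 = rng.standard_normal(n)
z1 = A_tilde @ np.concatenate([u0, x0])   # one step of the augmented recursion
```

The first m components of z1 are C u(0) and the remaining n components are A x(0) + B u(0), exactly as (47) requires.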
"The condition of being non-cyclic is caused by having two identical subsystems embedded in one system and yet completely decoupled from each other"[15]. In spite of an arbitrary choice of C and u(k) in all numerical examples a unique A was always found, i.e. the al gorithm is practical even without sufficient conditions for C, u(k) and x(k). 3-2.2. B-Algorithm Simple manipulation of (1) and (2) gives y(k+l) = HA HA2 "n* HA x(k) + HB ^ ^ y\ w A HAB HB HA B HB u(k) (57) where y(k) and u(k) are defined in (20). •V A ^ It is assumed now that (A,B,H) is initially in the steady state, in the sense that Let x = Ax + Bu s s s y = Hx s yo(k) ^ y(k) - ys s\ A ^ ^ x (k) = x(k) - x o s uQ(k) ^ u(k) - us (58) (59) Substituting y,x and u in (57) and using (58) gives uQ(0) uQ(n*-l) "y0U>' _ HB A -N A A ^ HAB HB yQ(n*) HA B ... HB (60) To simplify the notation y(k), u(k) shall denote yQ(k) and uQ(k) in the rest of this section. In the case of m different input sequences of length n* equation (60) can be written in the form Y, 1 • • Y ^ n* HB A. A. S\ /\ /N HAB HB ~ ~n*—1^ HA B HB U .o U n*-l (61) where Yk = [yl(k) y2(k) ym(k) ] is an £:;<x m matrix and Uk = [u^(k) u2(k) ... um(k)] is an m x m matrix y^(k), k = 1, 2, ...n* represents the ith output sequence u^(k), k = 0,<i2, ...n*-l represents the ith input sequence The starting values ^(0) of the input sequences are chosen such that they span the whole input space, i.e. Uq is non-singular With (61) we compute sequentially the following terms -1 (62) (63) (64) -1 HB = Y, U 1 o /S /N /V. /s /\ HAB = (Y2 - HBl^) UQ HA XB = (Y - HA BU, - . . . -HAB U . 0-HBU. 
(65)

Recalling (16), one gets B̂ directly by use of (65):

S | ĤB̂          | = S | Ĥ          | B̂ = B̂   (66)
  | ĤÂB̂         |     | ĤÂ         |
  | ...          |     | ...        |
  | ĤÂ^(n*-1)B̂ |     | ĤÂ^(n*-1)  |

Equation (65) becomes particularly simple when m step functions are used as input sequences such that

U_0 = U_1 = ... = U_(n*-1) = U_s   (67)

and

U_s = diag(u_s1, u_s2, ..., u_sm)   (68)

Substituting (67) and (68) into (65) gives

ĤB̂ = Y_1 U_s^(-1)
ĤÂB̂ = (Y_2 - Y_1) U_s^(-1)   (69)
...

and for (66)

B̂ = S | Y_1             | U_s^(-1)   (70)
      | Y_2 - Y_1       |
      | ...             |
      | Y_n* - Y_(n*-1) |

Partitioning (70) into m columns gives

b̂_i = S | y_i(1)              | (1/u_si)   for i = 1, ..., m   (71)
        | y_i(2) - y_i(1)     |
        | ...                 |
        | y_i(n*) - y_i(n*-1) |

where b̂_i is the ith column of B̂. Note that the matrix Â need not be known in order to solve (65) and (66) or (70) and (71) for B̂.

3.3 Comparison with Gopinath's Method

Four aspects of identification that are important in practical applications are discussed for Gopinath's method and methods A and B:

1) order of the input-output equations
2) input restrictions
3) number of experiments
4) initial conditions

1) The order of the systems of linear input-output equations (28), (43) and (55) varies from method to method. Low order equations require less computer storage and time and give more accurate results (fewer numerical errors). Methods A and B are always better than Gopinath's method in this respect. Method B is the best one in the case of few output components. The following maximum matrix dimensions are obtained in equations (28), (43) and (55) respectively for a fourth order system with a maximum of 4 inputs and 4 outputs:

max (n + mn*) = 20 for Gopinath's method
max (n + n*) = 8 for method A
max (n + m) = 8 for method B

2) Only Gopinath's method facilitates, theoretically, the identification of a model (Â,B̂,Ĥ) from normal operating records, but better results will certainly be achieved with the application of chosen input signals. All three methods allow the use of simple input sequences like the square wave illustrated in Figure 3.1.

Figure 3.1. Simple input sequence.
The following restrictions on the input sequences have to be considered, though: the lower mn* rows in U_n*(k) in equation (28) have to be linearly independent if we use Gopinath's method. The same is necessary for the lower n* rows in U_(i,n*)(k) in equation (43) for method A, whereas for method B equation (46) has to be satisfied. Tests showed that the occurrence of a singular data matrix from such restricted input sequences is unlikely (Experiments 1 and 2).

3) The number of necessary experiments is: 1 for Gopinath's method, 1 with m intervals for method A, and m + 1 for method B.

4) No initial conditions for the system are required by Gopinath's method, whereas in case A equation (44) has to be satisfied. With method B the system has to be in the steady state when the identification of B̂ is started. This restriction is balanced off by the advantage that the identification of Â and B̂ is decoupled in case B.

In conclusion, it may be said that there is a trade-off between experimental and computational requirements. We obtain small matrices and numerical decoupling but require several experiments for method B on the one hand, and obtain large matrices with only one experiment for Gopinath's method on the other hand.

IV. IDENTIFICATION OF REAL-TIME SYSTEMS WITH A MINICOMPUTER

4.1 Experiment

Identification method B has been implemented on a NOVA minicomputer in order to identify models of linear systems realized on the DONNER 3400 analog computer. The experimental setup is shown in Figure 4.1. The NOVA as the central unit supervises the identification experiment; in particular it controls the A/D conversion and initiates and resets the input sequences. The computer accepts known system parameters (dimension, etc.) from the teletype and computes the model parameters from the measured data after all the samples have been taken. The control logic generates the desired input sequence at a rate given by the external signal generator.
The state of the sampling logic tells the waiting computer when a sample can be taken and when the experiment can be started.

4.2 Software for Minicomputer

Because of the 4K memory limitation of the standard NOVA, emphasis was put on efficient storage use rather than fast data processing whenever these two goals were incompatible. On-line computation of Â and B̂ would have been possible but would not have resulted in storage savings, as all input-output data had to be stored anyway for teletype output and performance evaluation. It proved to be simpler to acquire all experimental data first and then compute Â and B̂ off line. The program flow chart is shown in Figure 4.2. The experiment supervision and data acquisition are performed by the same subroutine in both case A and case B, even though they are represented by two different boxes in the flow chart.

Figure 4.1. Scheme of identification experiment (analog system, A/D converter, NOVA, control logic, stepping motor, sampling logic, clock, TTY).

Figure 4.2. Flow chart of identification program (accept preliminary parameters from TTY: order, selection vector etc.; supervision of A-experiment, sampled data are stored; least square estimation of Â; return to START; supervision of B-experiment, sampled data are stored; compute average B̂).

The computation of Â from equation (55) proved to be insufficient even for the small disturbances described in section 4.3. In order to reduce the influence of measurement noise, more data has to be included in the parameter identification. Equation (55) can be generalized to

Z = Â D + V   (72)

where Z and D are (n+m) x d with d = 1, 2, ... and V is an array of noise terms. The least squares estimate of Â is then

Â = Z D' [D D']^(-1)   for d > (m+n)   (73)

The least squares program described in [7] was used to solve equation (73) recursively. The algorithm starts out with d = 1 and then updates Â with each new data vector.
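The recursive solution of (73) can be sketched as follows (a generic recursive least squares update, not the exact routine of reference [7]; the helper name `rls_fit` and the large initial P, which plays the role of an uninformative prior, are ours):

```python
import numpy as np

def rls_fit(Z, D, p0=1e6):
    """Recursive least squares for A in Z = A D, one data vector at a time.

    Z is q x N (left-hand sides of (72)), D is p x N (data vectors).
    P tracks (D D')^{-1} through rank-one updates, so the explicit
    inversion in (73) is never formed.
    """
    q, p = Z.shape[0], D.shape[0]
    A_est = np.zeros((q, p))
    P = p0 * np.eye(p)                # large P: essentially no prior information
    for z, d in zip(Z.T, D.T):
        Pd = P @ d
        gain = Pd / (1.0 + d @ Pd)
        A_est += np.outer(z - A_est @ d, gain)   # correct by the residual
        P -= np.outer(gain, Pd)                  # rank-one downdate of P
    return A_est

rng = np.random.default_rng(2)
A_true = rng.standard_normal((3, 4))
D = rng.standard_normal((4, 20))
A_est = rls_fit(A_true @ D, D)        # noise-free data: estimate matches A_true
```

With noise-free data and d well beyond the first unique solution, the recursion converges to the batch solution (73).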
New data vectors are checked for linear independence until the first unique solution (d = m+n) is reached. This check enables us to detect cases of singular or nearly singular data matrices. (The recursive least squares algorithm and the linear independence check are presented in Appendix 2.)

Matrix B̂ is computed columnwise with formula (71); therefore only step inputs as given by (67) and (68) can be used. In the presence of measurement noise we have

S | y_i(1)              | (1/u_si) = b̂_i = b_i + v   for i = 1, ..., m   (74)
  | y_i(2) - y_i(1)     |
  | ...                 |
  | y_i(n*) - y_i(n*-1) |

where b̂_i is the estimate of column b_i and v is a noise vector. If we assume that

E[b̂_i - b_i] = E[v] = 0   (75)

and E[v v'] = R, then it is possible to reduce the covariance matrix R by averaging N independent estimates. We compute

b̄_i = (1/N) Σ_(j=1..N) b̂_ij   (76)

where b̂_ij is the jth estimate of b_i and v_j = b̂_ij - b_i, with

E[v_j v_k'] = R for j = k,  0 for j ≠ k   (77)

and get

E[b̄_i - b_i] = E[v̄] = 0,  E[v̄ v̄'] = R/N   (78)

The program enables the user to repeat the identification of B̂ several times and it computes the average of all parameter estimates as a final result.

The program diverges in one point from the described theory. The selector matrix is replaced by the less general selection vector because it saves storage and computing time and is easier to initialize. Whereas selector matrices can select rows out of a data matrix ordered in any possible way, the selection vector selects the rows as they are ordered in the data matrix; e.g. s = [1 0 1 1] indicates that out of 4 rows, numbers 1, 3 and 4 are selected in this sequence. The consequence of this is that model (Â,B̂,Ĥ) will not be in the canonical form described in (18). Examples 1 and 2 in Experiment 1 show, however, that the canonical form is easily obtained by interchanging rows and columns in Â, rows in B̂ and columns in Ĥ.
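The columnwise step-response computation of B̂ that the program carries out, i.e. (74) (equation (71) in the noise-free case), can be sketched numerically. This is an illustrative single-input example with arbitrary numbers, not one of the thesis experiments: n = 2, ℓ = 1, n* = 2, the system starts at rest (steady state with u_s = 0), and S can be taken as the 2 x 2 identity, so the recovered column is the model-basis column T b.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])      # arbitrary example system
b = np.array([1.0, 2.0])
H = np.array([[1.0, 0.0]])
T = np.vstack([H, H @ A])       # transformation of (15); the model column is T b

u_step = 2.0                    # step height u_s
x = np.zeros(2)                 # system at rest: steady state with u_s = 0
ys = []
for _ in range(2):
    x = A @ x + b * u_step      # step applied from k = 0 on
    ys.append((H @ x)[0])       # y(1), y(2)

# Equation (71): b_hat = S [y(1); y(2) - y(1)] / u_s, here with S = I
b_hat = np.array([ys[0], ys[1] - ys[0]]) / u_step
```

Since y(1) = H B u_s and y(2) - y(1) = H A B u_s for a system started at rest, the stacked differences divided by u_s give exactly [H; HA] b = T b; note that A itself never enters the computation.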
4.3 Sources of Error

The experimentally found model is inaccurate even for an exactly linear system because of noise-contaminated system outputs, quantization errors in the A/D conversion from continuous signals to 12-bit binary numbers, and computational errors (truncation, round-off). The relative parameter error caused by all the above effects was less than one percent in all experiments of chapter V. A further error source is nonideal input signals, as indicated by the dashed lines in Figure 4.3.

Figure 4.3. Nonideal square wave.

The computation of model parameters is based on the assumption of ideal input step functions and sampling of input and output immediately after the steps. The measuring of an incorrect input amplitude can be prevented by delaying the sampling of the input by T, and the influence of the nonideal shape on the output can be neglected as long as the condition τ << T is satisfied. A delay of the input sample was built into the program and set at T = 0.0032 seconds for the experiments in chapter V.

4.4 Storage Allocation and Time Requirements

The storage allocation in the program is such that at most fourth order systems with up to four inputs and four outputs can be identified. The 4000 available storage locations are apportioned as follows:

1215 for floating point arithmetic and operating software (binary loader etc.)
1255 for the complete identification package (method B)
1125 for data storage
405 empty
4000 total

It was calculated that about 4300 storage locations would be required for the identification of fifth order systems. An estimation of the storage needed by Gopinath's procedure for fourth order systems was carried out under the assumption that only the least squares algorithm of the complete program would be used. It was found that about 7500 storage locations would be required, i.e. twice as much as with method B or A.
The recursive least squares algorithm used completes the cycle for one data vector for a fourth order system in about three seconds, which indicates the length of the shortest possible sampling interval for real-time identification of A.

V. EXPERIMENTS AND RESULTS

The purpose of the following experiments was to test the programs written for the proposed identification methods A and B and to study various properties of the algorithms. Linear systems were either realized on the analog computer DONNER 3400 or simulated on the IBM 370. Models identified on the large computer were tested by comparing their step response with that of the original system over the first 15 sampling intervals, whereas models computed on the NOVA with real system data were compared with models found from a disturbance-free system simulation on the IBM 370.

Experiment 1

The following system was simulated and identified on the IBM 370:

[System (0): matrices A, B and H; the numerical entries are garbled in this copy.]     (0)

The results achieved with four different selector matrices are shown in Table 5.1. Both identification methods were applied in all four cases, method A with an input sequence a, a, 0, 0, a, a, 0, 0, ... and method B with a step input. The equivalence of the models (1), (2), (3), (4) and system (0) was then verified by matching their step responses. The errors caused by round-off were less than 10^-6 for model parameters and step response, i.e. both methods A and B gave identical and correct models. The selector matrices (or selection vectors) used and the resulting models are listed in Table 5.1.
[Table 5.1: The selector matrices S_1 to S_4 (or their corresponding selection vectors) and the identified models (A_i, B_i, H_i), i = 1, ..., 4, labelled (1) to (4); the matrix entries are garbled in this copy.]

The four models (1), (2), (3) and (4) demonstrate the relation between selector matrix and model structure. Selector matrices S_1 and S_3 are of type 2, and the structures of models (1) and (3) verify equations (18) and (34). Selector matrix S_1 leads to two coupled subsystems of second and third order in A_1, but with S_3 the subsystems in A_3 are of fourth and first order. Matrix A_4 is in lower triangular form because S_4 is of type 3. Model (4) demonstrates that the complete system (0) is observable through the first output component; consequently A_4 contains only one subsystem and the second row of H_4 becomes "general" according to (36). Models (1), (3) and (4) verify that the selector matrix principle allows one to choose the order and number of subsystems within certain limits.

The selector matrices S_1 and S_2 select the same rows in a different order, where S_1 is of type 2 and S_2 has a corresponding selection vector. Models (1) and (2) are hence very closely related. An exchange of rows and columns in A_1 after the scheme 2→3, 3→5, 4→2, 5→4 gives A_2. In B_1 only the rows and in H_1 only the columns have to be exchanged in order to get B_2 and H_2.

Experiment 2

The following continuous third order system with two inputs and two outputs was realized on the analog computer:

    ẋ = [  0.0    1.0    0.0  ]     [ 2.0  2.0 ]
        [ -0.04  -0.1    0.0  ] x + [ 0.1  0.0 ] u
        [  0.0    0.0   -0.08 ]     [ 1.0  0.0 ]

    y = [ -1.0  1.0  1.0 ] x     (5)
        [  1.0  0.0  0.0 ]

A sampling period T = 3 seconds, n* = 2 and the selection vector [1 1 1 0] were chosen for the identification of a discrete model. The input sequence for the A-identification is shown in Figure 5.1.

[Figure 5.1: Input sequences for the A-identification and the sampling pulses, versus t in seconds.]

Steps on inputs 1 and 2, respectively, were used for the identification of columns 1 and 2 of B. The model computed on the NOVA with real data as described in Chapter III was

    A = [  0.00000  -0.00000   1.00000 ]       [ -3.44633  -6.03377 ]
        [  0.72637   0.69602  -0.92843 ]   B = [  6.03390   5.68480 ]
        [ -0.58098   0.24383   1.53274 ]       [ -3.48022  -4.76172 ]

    H = [ 1.0  0.0  0.0 ]     (6)
        [ 0.0  1.0  0.0 ]

System (5) was also simulated and identified on the IBM 370. The resulting model was

    A = [  0.00000   0.00000   1.00000 ]       [ -3.47356  -5.98833 ]
        [  0.73002   0.70000  -0.92806 ]   B = [  6.06758   5.67127 ]
        [ -0.58131   0.24084   1.52562 ]       [ -3.46499  -4.74163 ]

    H = [ 1.0  0.0  0.0 ]     (7)
        [ 0.0  1.0  0.0 ]

Figure 5.2 shows that the free motions (u = 0) of the analog system and the simulated system are nearly identical. The relative difference between (6) and (7) exceeds 1% for only one parameter in A. This discrepancy is caused by the difference between the analog system and its simulation as well as by the error sources mentioned in Section 4.3. Repetitions of the experiment with changed input amplitudes u_1 = u_2 = 0.35 or with u_1 and u_2 exchanged gave results of the same accuracy.

Experiments were also carried out under the assumption of incorrect system orders 4 and 2. The case n = 4 with real system and simulated data resulted in linearly dependent data vectors; consequently, no model parameters could be computed. Model parameters were always found in the case n = 2. However, the parameters depended on the input amplitudes and the initial state of the system and were therefore meaningless.

Experiment 3

Model (6) found in Experiment 2 can be used for the design of a digital controller for the analog system (5).
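The step-response matching used to verify models in these experiments is straightforward to reproduce. A minimal sketch that simulates the identified model (6) for a unit step on input 1 from a zero initial state (a simplification: the actual experiments used the measured step amplitudes and compared against system (5)):

```python
import numpy as np

# Identified discrete model (6) from Experiment 2
A = np.array([[ 0.00000, -0.00000,  1.00000],
              [ 0.72637,  0.69602, -0.92843],
              [-0.58098,  0.24383,  1.53274]])
B = np.array([[-3.44633, -6.03377],
              [ 6.03390,  5.68480],
              [-3.48022, -4.76172]])
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

def step_response(A, B, H, column, n_steps=15):
    """Outputs y(1)..y(n_steps) for a unit step on one input, x(0) = 0."""
    n, m = B.shape
    x = np.zeros(n)
    u = np.zeros(m)
    u[column] = 1.0
    ys = []
    for _ in range(n_steps):
        x = A @ x + B @ u    # x(k+1) = A x(k) + B u(k)
        ys.append(H @ x)     # y(k+1) = H x(k+1)
    return np.array(ys)

y1 = step_response(A, B, H, column=0)   # response to a step on input 1
```

Running the same routine on model (7) and comparing the two output sequences over 15 intervals is exactly the kind of check described at the start of Chapter V.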
A suboptimal control

    u(k) = -F x̂(k)     (8)

was chosen to approximately minimize the performance function

    J = x'(N) Q_0 x(N) + Σ_{k=0}^{N-1} [ x'(k) Q_1 x(k) + u'(k) Q_2 u(k) ]     (9)

with N = 15,

    Q_0 = Q_1 = [  2  -1  -1 ]       Q_2 = [ 20   0 ]
                [ -1   1   1 ]             [  0  20 ]
                [ -1   1   1 ]

where x̂(k) is the observed state of model (Â, B̂, Ĥ).

[Figure 5.2: System outputs with control (8) and in the open-loop case u(k) = 0.]

The formulas for the computation of F and the state observation are given in Appendix 3. The experimental scheme is shown in Figure 4.1, where the computer-controlled stepping motors coupled with precision potentiometers set the inputs according to (8) after each sampling pulse. F was computed on the NOVA in 30 seconds before the start of the control sequence.

Figure 5.2 compares, for an initial state

    x'(0) = [3.75  -1.0  6.25],   y'(0) = [1.5  3.75],

the output obtained with control (8) with the open-loop case with control u(k) = 0 for all stages. The potentiometer control voltages are shown in Figure 5.4. Note the zero input at stage 0, resulting from the fact that the first state observation was only possible at stage 1.

Control (8) was also simulated on the IBM 370 and the obtained output data are compared with the real output in Figure 5.3. The oscillation of the real response around the simulated undisturbed output has two main reasons: the input voltage can only be adjusted stepwise, 0.025 V per motor step, and the setting of the second potentiometer is always delayed until the first input is set. The delay in case of a 0.5 volt step for input 1 is about 0.3 seconds. Further error sources are the inaccuracy of model (6), the ramp instead of step function produced by the stepping motor, and measurement noise.

[Figure 5.4: System inputs: potentiometer voltages (volts) versus t in seconds; voltage per motor step = 0.025 volts.]

VI. CONCLUSIONS

The purpose of this thesis is to identify multi-input multi-output linear systems on-line on a minicomputer.
The selector matrix method of Gopinath is employed for the derivation of two new identification methods. Both methods are successfully tested with different input sequences for various systems on a large computer. The implementation of method B on a NOVA minicomputer for the identification of systems simulated on an analog computer gives an estimated 50% storage saving in comparison with Gopinath's method and very accurate results. The main features of the two methods are:

1) Lower computational requirements (less storage, more accuracy) than Gopinath's method.

2) Simple input sequences can be used. The few existing restrictions prove to be insignificant for practical applications.

3) Each column of input matrix B is identified from different data. In method B the identification of the state transition matrix A and the input matrix B are completely decoupled.

4) Some restrictions on the initial system state exist, but they can be satisfied quite easily.

5) The selector matrix has to be known in advance. Two classes of selector matrices that lead to canonical models are proposed and tested.

The main problem that remains to be solved is the extension of the proposed methods to the case of noise-contaminated data. Both methods as well as Gopinath's procedure lead to a system of linear equations of the form

    D_1 = P D_2 + V     (1)

where D_1 and D_2 contain measured data, P is the array of unknown fixed parameters and V is an array of measurement noise terms. The classical method of least squares computes the estimate P̂ such that the cost function

    J = trace (D_1 - P D_2)(D_1 - P D_2)'     (2)

is minimized. However, even independent measurement errors (additive noise), i.e.

    y(k) = H x(k) + n(k),

result in correlated residual vectors in (1). The reason is that any component of the output vector y(k) may appear in more than one column of D_2. As a consequence of correlated residual vectors the least squares estimate will be biased.
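The bias mechanism described above is easy to demonstrate on a scalar toy example (not from the thesis): a first-order system driven by process excitation, with additive noise on the measured output. Because the noisy measurement appears as a regressor, ordinary least squares systematically underestimates the true parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
a_true = 0.9

# Simulate x(k+1) = a x(k) + w(k) and measure y(k) = x(k) + n(k).
x = np.zeros(5001)
for k in range(5000):
    x[k + 1] = a_true * x[k] + rng.normal()     # process excitation w(k)
y = x + 0.5 * rng.normal(size=x.size)           # additive measurement noise

# Ordinary least squares fit of y(k+1) = a y(k): the noise enters the
# regressor y(k), correlating it with the residual, so the estimate
# is biased toward zero rather than converging to a_true.
a_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
```

With more data the variance of a_hat shrinks but the bias remains; this is exactly why the sophisticated estimators surveyed in [9], [13] and [14] are needed in the noisy case.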
This bias can be eliminated by the use of one of the many sophisticated estimation methods described in [9], [13] and [14].

Another remaining problem is the determination of the selector matrix. An efficient algorithm to determine selector matrices from noisy data would certainly make the proposed identification methods more useful. The question of "optimal" input sequences with respect to "good" parameter estimates is another problem that requires further investigation.

REFERENCES

[1] Gopinath, B., "On the Identification of Linear Time-Invariant Systems from Input-Output Data", Bell Syst. Tech. J., Vol. 48, pp. 1101-1113, May 1969.

[2] Budin, M.A., "Minimal Realization of Discrete Linear Systems from Input-Output Observations", IEEE Trans. Automat. Contr., Vol. AC-16, pp. 395-401, Oct. 1971.

[3] Bonivento, C., Guidorzi, R. and Marro, G., "Irreducible Canonical Realizations from External Data Sequences", Int. J. Control, Vol. 17, No. 3, pp. 553-563, 1973.

[4] Ackermann, J.E., "Die minimale Ein-Ausgangs-Beschreibung von Mehrgrossensystemen und ihre Bestimmung aus Ein-Ausgangs-Messungen", Regelungstechnik, Heft 5, pp. 203-206, 1971.

[5] Luenberger, D.G., "Observers for Multivariable Systems", IEEE Trans. Automat. Contr., Vol. AC-11, pp. 190-197, April 1966.

[6] Bucy, R.S., "Canonical Forms for Multivariable Systems", IEEE Trans. Automat. Contr., Vol. AC-13, pp. 567-569, Oct. 1968.

[7] Tapp, R.J., "A Parameter Estimation Algorithm for Small Digital Computers", Master's Thesis, Dept. of Electrical Engineering, UBC, July 1972.

[8] Bingulac, S.P. and Djorovic, M., "Estimation of State Variable Models Using Input-Output Data", IFAC Symp. on Estimation, TS-14, pp. 995-1004, June 1973.

[9] Astrom, K.J. and Eykhoff, P., "System Identification - A Survey", Automatica, Vol. 7, pp. 123-162, 1971.

[10] Ackermann, J.E., "Abtastregelung", Springer Verlag, New York, 1972.

[11] Kalman, R.E., "Mathematical Description of Linear Dynamical Systems", SIAM J. on Control, Vol. 1, No.
2, pp. 152-192, 1963.

[12] Kwakernaak, H. and Sivan, R., "Linear Optimal Control Systems", John Wiley and Sons, New York, 1972.

[13] Isermann, R., "Comparison of Six On-Line Identification and Parameter Estimation Methods", Automatica, Vol. 10, pp. 81-103, 1974.

[14] Saridis, G.N., "Comparison of On-Line Identification Algorithms", Automatica, Vol. 10, pp. 69-79, 1974.

[15] Gopinath, B., "On the Control of Linear Multiple Input-Output Systems", Bell Syst. Tech. J., Vol. 50, pp. 1063-1081, March 1971.

APPENDIX 1

Proof of Theorem 1:

Writing the rows of H as

    H = [h_1  h_2 ... h_ℓ]'     (1)

we get for (15)

    T = [h_1'; h_1'A; ...; h_1'A^{n_1-1}; h_2'; ...; h_ℓ'A^{n_ℓ-1}]     (2)

where Σ_{i=1}^{ℓ} n_i = n and n* ≥ n_i ≥ 1 for i = 1, ..., ℓ. Luenberger [5] has shown that for a completely observable system (A,B,H) with ρ[H] = ℓ and observability coefficient n* there exists a sequence n_1, n_2, ..., n_ℓ such that T given by (15) is nonsingular. q.e.d.

Proof of Theorem 2:

Using the known relation

    S ȳ(k) = S [H; HA; ...; HA^{n*-1}] x(k) + S R_1 ū(k)
            = T x(k) + S R_1 ū(k)     (3)

we get

    [ S ȳ(k) ... S ȳ(k+n+mn*-1) ]   [ T  S R_1 ]   [ x(k) ... x(k+n+mn*-1) ]
    [ ū(k)   ... ū(k+n+mn*-1)   ] = [ 0    I   ] · [ ū(k) ... ū(k+n+mn*-1) ] = Q P.

The (n+mn*) × (n+mn*) matrix Q is nonsingular because T is nonsingular (15), and therefore

    ρ[U_{n,A}(k)] = ρ[P].

The rest of the proof follows from Lemma 2 in Gopinath's paper [1].

Definition of a Cyclic Matrix: C is an m × m matrix and u(k) is m × 1; C is cyclic if there exists u(k) such that the matrix

    Γ = [u(k)  Cu(k) ... C^{m-1}u(k)]     (4)

is nonsingular.

APPENDIX 2

The Recursive Least Squares Algorithm of Tapp [7]:

The relation between a collection of input-output data and model parameters was shown in (72) to be given by

    Z = A D + V     (1)

where Z = [z_1 z_2 ... z_p] and D = [d_1 d_2 ... d_p] are s × p, A is an s × s matrix of unknown fixed parameters and V is an array of noise terms.
The least squares estimate A_e of the parameters A minimizes the cost

    J = trace (Z - A_e D)(Z - A_e D)'     (2)

and can be computed recursively as follows, with k = 1, 2, ..., p:

    c_k = (I - Q̃_{k-1}) d_k     (3)

    b_k = c_k (c_k' c_k)^{-1},   k ≤ s     (4)

    Q̃_k = Q̃_{k-1} + c_k b_k',   Q̃_k = 0 at k = 0     (5)

    R̃_k = R̃_{k-1} - b_k d_k' R̃_{k-1} - R̃_{k-1} d_k b_k' + b_k (1 + d_k' R̃_{k-1} d_k) b_k',   R̃_k = 0 at k = 0     (6)

    A_{e,k} = A_{e,k-1} + (z_k - A_{e,k-1} d_k) b_k',   A_{e,k} = 0 at k = 0     (7)

    b_k = P_{k-1} d_k (1 + d_k' P_{k-1} d_k)^{-1},   k > s     (8)

    P_k = P_{k-1} - b_k d_k' P_{k-1},   P_k = R̃_k at k = s     (9)

    A_{e,k} = A_{e,k-1} + (z_k - A_{e,k-1} d_k) b_k',   A_{e,k} from (7) at k = s     (10)

Whenever the first s data vectors d_1, d_2, ..., d_s are linearly dependent, equation (3) will give

    c_k = (I - Q̃_{k-1}) d_k = 0.

Since the recursive least squares procedure and condition (56) require that d_1, d_2, ..., d_s are linearly independent, the value of c_k for a new data vector is an extremely useful indicator of linear independence.

APPENDIX 3

Suboptimal Control:

The solution of the optimal control problem for the deterministic system

    x(k+1) = A x(k) + B u(k)     (1)

with the criterion

    J = x'(N) Q_0 x(N) + Σ_{k=0}^{N-1} [ x'(k) Q_1 x(k) + u'(k) Q_2 u(k) ]     (2)

is a time dependent feedback

    u(k) = -F(k) x(k)     (3)

where

    F(k) = [Q_2 + B'S(k+1)B]^{-1} B'S(k+1)A
    S(k) = [A - BF(k)]' S(k+1) A + Q_1     (4)
    S(N) = Q_0.

The matrices Q_0 and Q_1 must be symmetric and nonnegative definite, and the matrix Q_2 positive definite. It is known that F(k) and S(k) asymptotically approach a "steady state" value for decreasing k. The suboptimal control

    u(k) = -F(0) x(k) = -F x(k)     (5)

is a good approximation of (3) because F(k) departs significantly from its asymptotic value only over the terminal stages of the process, when x(k) is already small. The larger N, the better the approximation. The relative increase of the cost J for suboptimal control of the system in Experiment 3 was computed; it was less than 3% for N = 15.

Observer:

The particular structure of (Â, B̂, Ĥ) described in (18) gives a very simple state observer. At stage k + n* -
1 the past model state x̂(k) is found from (22):

    x̂(k) = S ȳ(k) - S R_1 ū(k)

where ū(k) and ȳ(k) are defined in (20) and S R_1 has the form in (23). By repeated use of

    x̂(k+1) = Â x̂(k) + B̂ u(k)

and u(k), u(k+1), ..., u(k+n*-2) we get the present model state x̂(k+n*-1). Note that the first observation is possible at stage n* - 1 of the process; therefore an n* as small as possible should be chosen for the identification (see Experiment 1).
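The backward recursion (4) for the feedback gain is short enough to sketch directly. The matrices below are illustrative (not the thesis data); only the recursion itself follows Appendix 3:

```python
import numpy as np

def suboptimal_gain(A, B, Q0, Q1, Q2, N):
    """Iterate S(k), F(k) backward from S(N) = Q0 and return F(0)."""
    S = Q0.copy()
    F = None
    for _ in range(N):
        # F(k) = [Q2 + B'S(k+1)B]^{-1} B'S(k+1)A
        F = np.linalg.solve(Q2 + B.T @ S @ B, B.T @ S @ A)
        # S(k) = [A - B F(k)]' S(k+1) A + Q1
        S = (A - B @ F).T @ S @ A + Q1
    return F

# Hypothetical second-order single-input system for illustration
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
Q0 = Q1 = np.eye(2)
Q2 = np.array([[1.0]])

F = suboptimal_gain(A, B, Q0, Q1, Q2, N=50)   # steady-state gain F(0)
```

For N large enough F(0) is essentially the asymptotic gain, so the fixed feedback u(k) = -F x̂(k) stabilizes the closed loop, which is the property exploited in Experiment 3.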
