A PARAMETER-ESTIMATION ALGORITHM FOR SMALL DIGITAL COMPUTERS

ROBERT JAMES TAPP
B.Sc., University of Victoria, 1969

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in the Department of Electrical Engineering

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
July, 1972

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my Department or by his representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical Engineering
The University of British Columbia
Vancouver 8, Canada
Date: August 2, 1972

ABSTRACT

An algorithm is developed for performing parameter estimation on a small-size digital computer. First principles of matrix algebra are used to derive a sequential estimator which computes an estimate of a general parameter array A from an array of measurements Z = HA + V, where V is a matrix of zero-mean noise terms. At every stage a new row is adjoined to each of Z, H and V, and a new estimate of A is calculated recursively, with any one of three well-known filtering processes available from the same basic set of recursive equations: a least-squares filter to minimize J = trace (Z - HA)(Z - HA)', a maximum-likelihood filter to maximize $p_{Z|A}(Z|A)$, or a maximum-a-posteriori filter to maximize $p_{A|Z}(A|Z)$. Provision is made for starting the filter either with a-priori means and variances of the parameters or with a deterministic "minimum-norm" composition based on the first s measurement rows, s being the number of rows in the parameter array.

The algorithm is applied to the problem of identifying the parameters of a discrete model for a linear time-invariant control system directly from sequential observations of the inputs and outputs. Results from computer tests are used to demonstrate properties of the algorithm, and the important computer programs are included, along with suggestions for further applications.

TABLE OF CONTENTS

I. Introduction
II. Least-Squares Filtering
III. Maximum-Likelihood Filtering
IV. General Computational Algorithm
V. Identification of A Linear Stationary Process
VI. Examples
VII. Further Applications
    Rapid Identification
    Identification of Non-Linear Systems
    Time-Varying Parameters

REFERENCES

APPENDIX
    Program-Equivalents of Symbols Appearing in ...
    Memory-Allocation for Identification Programs
    Instructions for Using the Identification Programs
    Instructions for Writing I/O Subroutines
    Identification Programs for Nova Computer

LIST OF TABLES

I. Essential Differences of Least-Squares, Maximum-Likelihood and Bayesian M.A.P. Filters
II. Stage-wise Errors of Least-Squares Filter (Example 1)

LIST OF FIGURES

1. General Computational Algorithm for Estimation
2. System Identification Algorithm
3. Time-response of continuous system corresponding to Example 1
4. Plot of performance functions for example 2
5. Plot of estimation errors for example 3 (1)
6. Plot of estimation errors for example 3 (2)
7. Plot of estimation errors for example 3 (3)
8. Plot of errors for estimates in example 4

ACKNOWLEDGMENT

I should like to express my gratitude to the National Research Council for the financial support afforded me in the form of a Post-graduate Scholarship (1969-70) and a Post-graduate Bursary (1970-71), and also to the following individuals: my supervisor, Dr. E.V. Bohn, for his patient guidance; Dr. R. Donaldson for his helpful suggestions; Dr. Graham Qualtrough for his useful ideas; Mr. Dave Holmes for his assistance with the computer work; and my good friend and fellow student, Mr. John Wong, for his advice and encouragement.

R. Tapp, July, 1972

I. Introduction

When it is necessary to estimate important parameters of a system from measurements of system variables, the choice of an optimal mathematical procedure depends on the amount of statistical information available concerning the system and the measurement process. Unfortunately, not enough information is available in many practical situations to permit using well-known estimators like the Kalman filter, nor is it obvious how these procedures can be adapted for simpler problems. Kishi [8], Sage [13], Young and other authors have indicated how classical least-squares filtering can be useful because of its validity in the absence of statistical information and its similarities with more sophisticated methods, but very little has been written in the way of a unified and complete theory of practical least-squares filtering. Greville [2, 3] presents a derivation of least-squares curve fitting which is mathematically rigorous but unnecessarily complicated by the use of generalized-inverse theory and not directly applicable to the problem of parameter estimation. In an attempt to apply it to the estimation problem, Kishi [8] loses some of the mathematical rigour and neglects some important practical considerations.
Young and Sinha and Pille [15] have contributed accurate but very simplified descriptions of the method. There is considerable advantage to be gained by using a classical least-squares estimator as the basis for on-line filtering algorithms because it is straightforward to implement, valid under most conditions and easily modified for a-priori statistical information. It is the purpose of this thesis to develop a complete theory for least-squares filtering, leading to an algorithm that can be programmed on a small digital computer, and to considerations of how the algorithm can be extended for a number of practical situations. The mathematical approach used by Greville [2, 3] was chosen as the most suitable on which to base the derivations for general least-squares filtering equations, although his use of Penrose's pseudo-inverse theory [11, 12] has been abandoned in favour of a more straightforward approach which employs only first principles of matrix algebra. To include the statistical maximum-likelihood and Bayesian filters, some simple modifications of the equations are considered.

In this thesis all symbols representing vectors and matrices are underscored, with upper-case letters denoting matrices and lower-case letters denoting column-vectors wherever possible. A symbol followed by a prime indicates the transpose of the corresponding matrix or column-vector (example: A'). Where dimensions of a matrix or vector are given, they are enclosed in parentheses following the symbol (example: B(m x n)). The identity matrix is represented by the symbol I and matrix inverses are denoted by the superscript "-1". The symbol for the statistical expected-value operator is e.

II. Least-Squares Filtering

An arbitrary but very general representation of the relation between a collection of measurements of system variables and the basic parameters of the system is

$Z = HA + V$   (1)

where Z is an array containing all the measured data, A is the array of unknown fixed parameters, H is the matrix representing the defined relationship between the quantities measured and the parameters, and V is an array of measurement noise terms.

In a simple example of a body moving with a constant velocity v, it is desired to estimate the velocity and initial position $s_o$ of the body from measurements of its position s at known times t. The parameters $s_o$ and v are defined by the equation

$s = s_o + v\,t$

If the position is measured at times $t_1$, $t_2$ and $t_3$ and values $s_1$, $s_2$ and $s_3$ are obtained, then a representation corresponding to equation (1) would be

$\begin{bmatrix} s_1 \\ s_2 \\ s_3 \end{bmatrix} = \begin{bmatrix} 1 & t_1 \\ 1 & t_2 \\ 1 & t_3 \end{bmatrix}\begin{bmatrix} s_o \\ v \end{bmatrix} + \begin{bmatrix} n_1 \\ n_2 \\ n_3 \end{bmatrix}$

where $n_1$, $n_2$ and $n_3$ are measurement noise terms.
The classical method of least squares assumes that for zero-mean noise the estimate $\hat{A}$ of the parameter array A should result in a minimum of the sum of the squares of the elements of the matrix $(Z - H\hat{A})$. This corresponds to minimizing the cost function

$J = \tfrac{1}{2}\,\mathrm{trace}\,(Z - H\hat{A})(Z - H\hat{A})'$   (2)

If the rows of Z are labelled successively $z_1', z_2', z_3', \ldots$ and the rows of H are similarly labelled $h_1, h_2, h_3, \ldots$, then the cost function can be written

$J = \tfrac{1}{2}\sum_i (z_i' - h_i\hat{A})(z_i' - h_i\hat{A})'$   (3)

For a minimum the derivative with respect to $\hat{A}$ must be zero:

$H'Z = H'H\hat{A}$   (4)

If the number of rows in H is greater than or equal to the number of columns and the columns are linearly independent, then the column vector $Hu$, which is a linear combination of the columns of H, is non-zero for all non-zero u. Therefore $u'H'Hu$ is positive for all non-zero u, which means that $H'H$ is positive definite and hence non-singular. Equation (4) then gives the unique solution

$\hat{A} = (H'H)^{-1}H'Z$   (5)

If the number of rows in H is less than or equal to the number of columns and the rows are linearly independent, then the row vector $u'H$, which is a linear combination of the rows of H, is non-zero for all non-zero u. Thus $u'HH'u$ is positive for all non-zero u, and $HH'$ is positive definite and therefore nonsingular. Pre-multiplying both sides of (4) by H gives

$HH'Z = HH'H\hat{A}$, that is, $Z = H\hat{A}$   (6)

Except for the case where H is square, this equation does not have a unique solution; but although no unique solution can be defined on the basis of the least-squares criterion alone, it will nevertheless be desirable to define some arbitrary solution. The most logical choice is that least-squares solution which has a minimum "norm" and is found by minimizing the cost function

$J = \tfrac{1}{2}\,\mathrm{trace}\,(\hat{A}\hat{A}')$   (7)

subject to equation (6). Using Lagrange's method of undetermined multipliers, an augmented cost function is defined:

$J_a = \tfrac{1}{2}\,\mathrm{trace}\,(\hat{A}\hat{A}') + \mathrm{trace}\,X'(Z - H\hat{A})$   (8)

where X is the array of undetermined multipliers. Now

$\partial J_a/\partial\hat{A} = \hat{A} - H'X = 0$, so $\hat{A} = H'X$   (9)

Using (9) in (6),

$Z = HH'X$, so $X = (HH')^{-1}Z$   (10)

Using (10) in (9),

$\hat{A} = H'(HH')^{-1}Z$   (11)

This equation will define the least-squares estimate of A whenever the number of rows of H is less than or equal to its number of columns and the rows are linearly independent.

For the many applications where the observations are not available all at once but are received sequentially in time, it is desirable to have a recursive relation which will provide parameter estimates at every stage by updating prior estimates as each new set or block of data arrives. The addition of more data to the Z matrix will require that elements be added to the H matrix, and since the dimensions of H will be changing at every stage it is important to establish which of equations (5) and (11) should be used to determine the estimate at each stage.

If the parameter matrix A is to have fixed dimensions, labelled (s x r), then equation (1) shows that H must always have s columns and Z must always have r columns. Thus in this scheme the elements adjoined to the H and Z matrices at sequential stages must take the form of additional rows.
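The two batch solutions (5) and (11) can be illustrated with a short numerical sketch applied to the constant-velocity example above. The code is not from the thesis; the measurement times, true parameters and noise values are assumed purely for illustration.

```python
import numpy as np

# Constant-velocity example: s = s_o + v*t, measured at three times (assumed values).
t = np.array([1.0, 2.0, 3.0])
s0_true, v_true = 2.0, 0.5
noise = np.array([0.03, -0.01, 0.02])                 # the n_i terms
s_meas = s0_true + v_true * t + noise

H = np.column_stack([np.ones_like(t), t])             # rows h_i = [1, t_i]
Z = s_meas.reshape(-1, 1)                             # rows z_i'

# Overdetermined case, equation (5): A_hat = (H'H)^{-1} H'Z
A_hat = np.linalg.solve(H.T @ H, H.T @ Z)

# Underdetermined case, equation (11): minimum-norm solution from fewer rows than columns
H1, Z1 = H[:1, :], Z[:1, :]                           # keep only the first measurement row
A_min_norm = H1.T @ np.linalg.solve(H1 @ H1.T, Z1)

print("estimate of [s_o, v]:", A_hat.ravel())
print("minimum-norm estimate from one row:", A_min_norm.ravel())
```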
If q is the number of rows adjoined to each of Z and H at every estimation stage, then the total number of rows in each matrix is kq, where k is the number of the current estimation stage. To summarize the dimension labels, (1) can be re-written

$Z(kq\times r) = H(kq\times s)\,A(s\times r) + V(kq\times r)$   (12)

Now, using (5) and (11), the least-squares estimate for A at stage k is defined by

$\hat{A}_k = H_k'(H_kH_k')^{-1}Z_k$ ,  $kq \le s$   (13)

$\hat{A}_k = (H_k'H_k)^{-1}H_k'Z_k$ ,  $kq \ge s$   (14)

where $H_k$ and $Z_k$ are the matrices H and Z at stage k. If $q \ge s$ then (14) will apply for all values of k, but if $q < s$ then (13) will apply until k exceeds $s/q$, and (14) will apply for all further stages.

In designing a general recursive relation for (13) and (14), advantage can be taken of the fact that both solutions would apply for a stage k where kq = s, provided the rows of $H_k$ are linearly independent. $H_k$ would be square and nonsingular, and (13) and (14) would reduce to

$\hat{A}_k = H_k^{-1}Z_k$ ,  $kq = s$   (15)

Thus if the number of rows adjoined to $H_k$ at each stage (q) is a factor of its number of columns (s), then there will be a stage where kq = s such that both (13) and (14) are valid, and the final solution from the recursive form of (13) can be used as the starting value for the recursive form of (14).

To obtain the recursive forms for equations (13) and (14) it is convenient to introduce new symbols $G_k$ and $J_k$ defined by

$G_k = H_k'(H_kH_k')^{-1}$   (16)

$J_k = (H_k'H_k)^{-1}H_k'$   (17)

In the theory of generalized inverses, $G_k$ would be called the right generalized inverse or right pseudo-inverse of $H_k$, and $J_k$ would be called the left generalized inverse or left pseudo-inverse of $H_k$. The matrices $Z_k$, $H_k$, $G_k$ and $J_k$ are partitioned as follows:

$Z_k(kq\times r) = \begin{bmatrix} Z_{k-1}([k-1]q\times r) \\ Z_k^*(q\times r) \end{bmatrix}$   (18)

$H_k(kq\times s) = \begin{bmatrix} H_{k-1}([k-1]q\times s) \\ H_k^*(q\times s) \end{bmatrix}$   (19)

$G_k(s\times kq) = \begin{bmatrix} F_k(s\times[k-1]q) & E_k(s\times q) \end{bmatrix}$   (20)

$J_k(s\times kq) = \begin{bmatrix} D_k(s\times[k-1]q) & B_k(s\times q) \end{bmatrix}$   (21)

where $Z_k^*$ and $H_k^*$ are the blocks of q rows adjoined at stage k. Equation (13) can now be written

$\hat{A}_k = G_kZ_k = F_kZ_{k-1} + E_kZ_k^*$ ,  $kq \le s$   (22)

To solve for $E_k$ and $F_k$, define the matrix

$Q_k = G_kH_k = H_k'(H_kH_k')^{-1}H_k = F_kH_{k-1} + E_kH_k^*$   (23)

Post-multiplying by $H_k'$ gives

$H_k' = F_kH_{k-1}H_k' + E_kH_k^*H_k'$   (24)

Using (19), this can be written as the two equations

$H_{k-1}' = F_kH_{k-1}H_{k-1}' + E_kH_k^*H_{k-1}'$   (25)

$H_k^{*\prime} = F_kH_{k-1}H_k^{*\prime} + E_kH_k^*H_k^{*\prime}$   (26)

From (25),

$F_k = (I - E_kH_k^*)H_{k-1}'(H_{k-1}H_{k-1}')^{-1} = (I - E_kH_k^*)G_{k-1}$   (27)

Substituting this into (22) gives

$\hat{A}_k = \hat{A}_{k-1} - E_kH_k^*\hat{A}_{k-1} + E_kZ_k^* = \hat{A}_{k-1} + E_k(Z_k^* - H_k^*\hat{A}_{k-1})$ ,  $kq \le s$   (28)

Substituting (27) into (23) gives

$Q_k = Q_{k-1} + E_kH_k^*(I - Q_{k-1})$   (29)

and into (26) gives

$H_k^{*\prime} = Q_{k-1}H_k^{*\prime} - E_kH_k^*Q_{k-1}H_k^{*\prime} + E_kH_k^*H_k^{*\prime}$

$E_k = (I - Q_{k-1})H_k^{*\prime}\,[H_k^*(I - Q_{k-1})H_k^{*\prime}]^{-1}$   (30)

Equations (28), (29) and (30) constitute the recursive relation which corresponds to equation (13). It may be verified from these equations that the correct starting values for $\hat{A}$ and Q are zero, for then

$E_1 = H_1^{*\prime}(H_1^*H_1^{*\prime})^{-1}$,  $\hat{A}_1 = H_1^{*\prime}(H_1^*H_1^{*\prime})^{-1}Z_1^*$,  $Q_1 = H_1^{*\prime}(H_1^*H_1^{*\prime})^{-1}H_1^*$

which are consistent with the definitions of $\hat{A}_k$ and $Q_k$ in (13) and (23).

Using (18) and (21), equation (14) can be written

$\hat{A}_k = J_kZ_k = D_kZ_{k-1} + B_kZ_k^*$ ,  $kq \ge s$   (31)

To solve for $D_k$ and $B_k$, begin by forming the product $H_kJ_k$
using (17), (19) and (21):

$H_kJ_k = H_k(H_k'H_k)^{-1}H_k'$   (32)

Pre-multiplying by $H_k'$ gives $H_k'H_kJ_k = H_k'$, that is,

$\begin{bmatrix} H_{k-1}' & H_k^{*\prime} \end{bmatrix} = (H_{k-1}'H_{k-1} + H_k^{*\prime}H_k^*)\begin{bmatrix} D_k & B_k \end{bmatrix}$   (33)

Using (19), this can be written as the two equations

$H_{k-1}' = (H_{k-1}'H_{k-1} + H_k^{*\prime}H_k^*)\,D_k$   (34)

$H_k^{*\prime} = (H_{k-1}'H_{k-1} + H_k^{*\prime}H_k^*)\,B_k$   (35)

From (34),

$D_k = (H_{k-1}'H_{k-1} + H_k^{*\prime}H_k^*)^{-1}H_{k-1}'$   (36)

and from (35),

$B_k = (H_{k-1}'H_{k-1} + H_k^{*\prime}H_k^*)^{-1}H_k^{*\prime}$   (37)

If a new matrix is defined by

$P_k = (H_k'H_k)^{-1} = J_kJ_k'$   (38)

then (36) and (37) become $D_k = P_kH_{k-1}'$ and $B_k = P_kH_k^{*\prime}$, with $P_k^{-1} = P_{k-1}^{-1} + H_k^{*\prime}H_k^*$. Pre-multiplying the last relation by $P_k$, post-multiplying by $P_{k-1}$ and using $B_k = P_kH_k^{*\prime}$ gives $P_{k-1} = P_k + B_kH_k^*P_{k-1}$, so that

$P_k = P_{k-1} - B_kH_k^*P_{k-1}$   (43)

Substituting (43) into $D_k = P_kH_{k-1}'$ and $B_k = P_kH_k^{*\prime}$ gives

$D_k = P_{k-1}H_{k-1}' - B_kH_k^*P_{k-1}H_{k-1}'$   (44)

$B_k = P_{k-1}H_k^{*\prime} - B_kH_k^*P_{k-1}H_k^{*\prime}$, that is, $B_k = P_{k-1}H_k^{*\prime}(I + H_k^*P_{k-1}H_k^{*\prime})^{-1}$   (45)

Using (44) in (31),

$\hat{A}_k = P_{k-1}H_{k-1}'Z_{k-1} - B_kH_k^*P_{k-1}H_{k-1}'Z_{k-1} + B_kZ_k^*$ ,  $kq \ge s$

Since $\hat{A}_{k-1} = (H_{k-1}'H_{k-1})^{-1}H_{k-1}'Z_{k-1} = P_{k-1}H_{k-1}'Z_{k-1}$, the last equation becomes

$\hat{A}_k = \hat{A}_{k-1} + B_k(Z_k^* - H_k^*\hat{A}_{k-1})$ ,  $kq \ge s$   (46)

Equations (43), (45) and (46) provide the recursive relation corresponding to equation (14) and can be started by applying (14) and (38) directly to the first stage k such that $kq \ge s$, which will require inversion of at least an s x s matrix. Since matrix inversion requires fairly complex programming on a small computer, it is perhaps better to arrange that the starting value for (46) be taken from the last solution of (28) at a stage k where kq = s, as described earlier. Similarly a recursive relation can be found which will provide a starting value for $P_k$ in (43) when kq = s. From (38) the definition of $P_k$ is

$P_k = J_kJ_k'$

and from equations (16) and (17)

$J_k = G_k = H_k^{-1}$ ,  $kq = s$

Therefore

$P_k = G_kG_k'$ ,  $kq = s$   (47)

Thus at a stage k where kq = s it is possible to obtain the starting value for $P_k$ from a recursive relation for

$R_k = G_kG_k' = H_k'(H_kH_k')^{-1}(H_kH_k')^{-1}H_k$   (48)

Using (20) this can be written

$R_k = F_kF_k' + E_kE_k'$   (49)

From (27),

$F_k = (I - E_kH_k^*)H_{k-1}'(H_{k-1}H_{k-1}')^{-1}$   (50)

Substituting this into (49) gives

$R_k = (I - E_kH_k^*)H_{k-1}'(H_{k-1}H_{k-1}')^{-1}(H_{k-1}H_{k-1}')^{-1}H_{k-1}(I - E_kH_k^*)' + E_kE_k' = (I - E_kH_k^*)R_{k-1}(I - E_kH_k^*)' + E_kE_k'$

$R_k = R_{k-1} - E_kH_k^*R_{k-1} - R_{k-1}H_k^{*\prime}E_k' + E_kH_k^*R_{k-1}H_k^{*\prime}E_k' + E_kE_k'$   (51)

which is in a convenient form to be calculated in conjunction with (28), (29) and (30).

The final general algorithm for least-squares estimation of the parameter matrix A would therefore use equations (28), (29), (30) and (51) for all estimation stages k such that $kq \le s$, and for all subsequent stages would use equations (43), (45) and (46), beginning with the values of $\hat{A}_k$ and $R_k$ given by (28) and (51) at a stage k where kq = s.

The calculations involved in these equations are easily performed on a small computer, apart from the following inverses which appear in (30) and (45) respectively:

$[H_k^*(I - Q_{k-1})H_k^{*\prime}]^{-1}$   and   $(I + H_k^*P_{k-1}H_k^{*\prime})^{-1}$

As shown in (19) the dimension of $H_k^*$ is q x s, which indicates that both of the matrices being inverted above have dimension q x q, q being the number of rows adjoined to Z and H at each estimation stage. Thus by choosing q = 1, both inverses will involve scalars and the necessary computer programming will be vastly simplified. The number of rows adjoined at each estimation stage need have no effect on the number of rows adjoined at each measurement stage, because the measured rows can be stored and adjoined in the estimation algorithm one at a time.
Selecting q = 1 also has the advantage that q will always be a divisor of s, the number of columns in H, which is the requirement for proper linking of the two sets of equations as previously explained.

When q = 1, the matrices $Z_k^*$ and $H_k^*$ degenerate to row vectors and $E_k$ and $B_k$ degenerate to column vectors. For this reason it is desirable to change the notation and replace $Z_k^*$ by $z_k'$, $H_k^*$ by $h_k$, $E_k$ by $e_k$ and $B_k$ by $b_k$. Equation (30) now becomes

$e_k = (I - Q_{k-1})h_k'\,[h_k(I - Q_{k-1})h_k']^{-1}$   (52)

If the column vector $(I - Q_{k-1})h_k'$ in this equation is given the symbol $c_k$,

$c_k = (I - Q_{k-1})h_k'$   (53)

then from the definition of $Q_k$ in equation (23), which was

$Q_k = G_kH_k = H_k'(H_kH_k')^{-1}H_k$

it can be seen that

$c_k' = h_k(I - Q_{k-1})' = h_k(I - Q_{k-1})$   (54)

and

$c_k'c_k = h_k(I - Q_{k-1})(I - Q_{k-1})h_k' = h_k(I - Q_{k-1})h_k' = h_kc_k$   (55)

so that equation (52) can now be written

$e_k = c_k(h_kc_k)^{-1}$   (56)

and equation (29) now becomes

$Q_k = Q_{k-1} + e_kh_k(I - Q_{k-1}) = Q_{k-1} + e_kc_k'$   (57)

Following is a summary of the major equations and their starting values for the simplified algorithm where q = 1:

For $k \le s$:

$c_k = (I - Q_{k-1})h_k'$   (53)

$e_k = c_k(h_kc_k)^{-1}$   (56)

$Q_k = Q_{k-1} + e_kc_k'$ ,  $Q_k = 0$ at k = 0   (57)

$R_k = R_{k-1} - e_kh_kR_{k-1} - R_{k-1}h_k'e_k' + e_kh_kR_{k-1}h_k'e_k' + e_ke_k'$ ,  $R_k = 0$ at k = 0   (58)

$\hat{A}_k = \hat{A}_{k-1} + e_k(z_k' - h_k\hat{A}_{k-1})$ ,  $\hat{A}_k = 0$ at k = 0   (59)

For $k \ge s$:

$b_k = P_{k-1}h_k'(1 + h_kP_{k-1}h_k')^{-1}$   (60)

$P_k = P_{k-1} - b_kh_kP_{k-1}$ ,  $P_k = R_k$ at k = s   (61)

$\hat{A}_k = \hat{A}_{k-1} + b_k(z_k' - h_k\hat{A}_{k-1})$ ,  $\hat{A}_k = \hat{A}_k$ from (59) at k = s   (62)

It has already been shown that when the rows of the matrix $H_k$ are linearly independent, the product $H_kH_k'$ is nonsingular and the matrix $Q_k$ is a left identity for the matrix $H_k'$ because

$Q_kH_k' = H_k'(H_kH_k')^{-1}H_kH_k' = H_k'$

Similarly $Q_k$ is a left identity for any other matrix whose columns lie in the transposed row space of $H_k$. Thus if a new row $h_k$ is adjoined to the H matrix and is a linear combination of the previous k-1 rows, then the vector $h_k'$ will lie in the transposed row space of $H_{k-1}$ and equation (53) will give

$c_k = (I - Q_{k-1})h_k' = 0$   (63)

Since the recursive least-squares procedure requires that H have maximum rank at every stage, and in particular that the rows be linearly independent for the first s stages, $c_k$, being the first calculation involving a new row of H, is an extremely useful indicator of this condition. Depending on the process involved, a measurement which would make the rows of H linearly dependent can be rejected in favour of a new measurement, or the entire process can be re-started with a minimum of wasted time.

Although at this point all the essential equations for a least-squares filtering algorithm have been developed, a preliminary comparison with statistical methods will lead to minor improvements which make the algorithm much more useful. In the next chapter will be presented a derivation of the statistical maximum-likelihood filter which parallels that of the least-squares filter in this chapter. Chapter IV will then describe the complete mechanics of the final computational algorithm which was used in the research project outlined in Chapter V.
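The summary equations (53)-(62) translate directly into a short program. The sketch below is illustrative code only, not the author's Nova implementation; the variable names mirror the thesis symbols, and it assumes one measurement row is supplied per stage (q = 1).

```python
import numpy as np

def recursive_least_squares(rows_h, rows_z, s):
    """Recursive least-squares filter for Z = HA + V with one row adjoined per
    stage (q = 1), following the summary equations (53)-(62)."""
    r = np.reshape(rows_z[0], (1, -1)).shape[1]
    A = np.zeros((s, r))                 # A_hat = 0 at k = 0            (59)
    Q = np.zeros((s, s))                 # Q = 0 at k = 0                (57)
    R = np.zeros((s, s))                 # R = 0 at k = 0                (58)
    P = None
    for k, (h, z) in enumerate(zip(rows_h, rows_z), start=1):
        h = np.reshape(h, (1, s))
        z = np.reshape(z, (1, r))
        if k <= s:                                       # minimum-norm stages
            c = (np.eye(s) - Q) @ h.T                    # (53); c near zero flags linear dependence
            e = c / (h @ c).item()                       # (56)
            Q = Q + e @ c.T                              # (57)
            I_eh = np.eye(s) - e @ h
            R = I_eh @ R @ I_eh.T + e @ e.T              # (58), written in factored form
            A = A + e @ (z - h @ A)                      # (59)
            if k == s:
                P = R.copy()                             # P = R at k = s               (61)
        else:                                            # subsequent stages
            b = P @ h.T / (1.0 + h @ P @ h.T).item()     # (60)
            P = P - b @ h @ P                            # (61)
            A = A + b @ (z - h @ A)                      # (62)
    return A
```

Run on the rows built for the constant-velocity example of Chapter II, this routine should reproduce the batch estimate of equation (5) once k exceeds s.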
III. Maximum-Likelihood Filtering

A maximum-likelihood procedure gives the optimum minimum-variance parameter estimate when no a-priori statistical information is available concerning the parameters and the noise terms affecting the measurements are zero-mean independent white Gaussian random variables of known variance.

The development of the maximum-likelihood filtering equations in this chapter follows closely that of the least-squares filter in the previous chapter, in order that similarities between the two methods will be apparent. This should facilitate explanation of the general-purpose computational algorithm to be presented in Chapter IV.

As in Chapter II, equation (1) will be the arbitrary representation of the measurement process, where as before the matrix H is assumed to have maximum rank. The optimum estimate $\hat{A}$ of the parameters A is chosen so that the probability density of each measured quantity, conditional on $A = \hat{A}$, has a maximum at the observed value of the measured quantity. The probability density function involved is often given the name "likelihood function", and since the noise terms are statistically independent the likelihood function for a row i of measurements is the product of their individual likelihood functions, which are Gaussian:

$p_i = (2\pi)^{-r/2}\,s_i^{-r/2}\exp\!\left[-\tfrac{1}{2s_i}(z_i' - h_iA)(z_i' - h_iA)'\right]$   (64)

where r is the number of measurements in a row, or the number of columns in the measurement array, and it has been assumed that the noise on each measurement of a row has the same variance $s_i$. This latter assumption results in a great simplification to the derivation which follows and does not seriously limit the usefulness of the equations, because measurements having different noise variances can always be located in separate rows.

A product of the likelihood functions of all k rows gives the likelihood function for the entire measurement set at stage k:

$p_{Z|A} = (2\pi)^{-kr/2}(\det S)^{-r/2}\exp\!\left[-\tfrac{1}{2}\sum_i s_i^{-1}(z_i' - h_iA)(z_i' - h_iA)'\right]$   (65)

where

$S = \mathrm{diag}(s_1, s_2, \ldots, s_k)$ , a positive-definite matrix.

Maximizing this likelihood function is equivalent to maximizing its logarithm:
$\log p_{Z|A} = -\tfrac{1}{2}\sum_i s_i^{-1}(z_i' - h_iA)(z_i' - h_iA)' - \tfrac{kr}{2}\log(2\pi) - \tfrac{r}{2}\log(\det S)$   (66)

and a maximum results when the derivative with respect to A is zero:

$H'S^{-1}Z = H'S^{-1}HA$   (67)

When H has fewer rows than columns, $(HH')^{-1}$ exists and the last equation reduces to

$Z = HA$   (68)

This is identical to the least-squares result of equation (6), and the minimum-norm estimates for the two methods are the same:

$\hat{A} = H'(HH')^{-1}Z$   (69)

When H has more rows than columns, $H'H$ is positive definite and so is $H'S^{-1}H$. Thus the maximum-likelihood estimate from equation (67) is

$\hat{A} = (H'S^{-1}H)^{-1}H'S^{-1}Z$   (70)

which is the least-squares solution of (5) weighted by the inverse of the noise variance matrix S.

A recursive relation for equation (70), unlike the method of least squares, cannot theoretically be started using the minimum-norm result of (69) at stage k = s, because (69) does not contain the information regarding the noise variance for stages $k \le s$ that is required by (70). This problem will be discussed later. It is first necessary to obtain a recursive form for (70).

Following a procedure similar to that used for the recursive form of (5), the maximum-likelihood estimate at a stage k is written as

$\hat{A}_k = J_kZ_k$   (71)

where $J_k$ is now defined by

$J_k = (H_k'S_k^{-1}H_k)^{-1}H_k'S_k^{-1}$   (72)

To make the equations compatible with the algorithm derived near the end of Chapter II, one row only will be adjoined to each of $Z_k$ and $H_k$ at every stage, and the following partitionings are valid:

$Z_k(k\times r) = \begin{bmatrix} Z_{k-1}([k-1]\times r) \\ z_k'(1\times r) \end{bmatrix}$   (73)

$H_k(k\times s) = \begin{bmatrix} H_{k-1}([k-1]\times s) \\ h_k(1\times s) \end{bmatrix}$   (74)

$J_k(s\times k) = \begin{bmatrix} D_k(s\times[k-1]) & b_k(s\times 1) \end{bmatrix}$   (75)

$S_k^{-1}(k\times k) = \begin{bmatrix} S_{k-1}^{-1} & 0 \\ 0 & s_k^{-1} \end{bmatrix}$   (76)

Now

$\hat{A}_k = J_kZ_k = D_kZ_{k-1} + b_kz_k'$   (77)

Pre-multiplying $J_k$ by $H_k'S_k^{-1}H_k$ and using (72) gives

$(H_k'S_k^{-1}H_k)\begin{bmatrix} D_k & b_k \end{bmatrix} = \begin{bmatrix} H_{k-1}'S_{k-1}^{-1} & h_k's_k^{-1} \end{bmatrix}$   (78)

Using (74) and (76), this can be written as the two equations

$H_{k-1}'S_{k-1}^{-1} = (H_{k-1}'S_{k-1}^{-1}H_{k-1} + h_k's_k^{-1}h_k)\,D_k$   (79)

$h_k's_k^{-1} = (H_{k-1}'S_{k-1}^{-1}H_{k-1} + h_k's_k^{-1}h_k)\,b_k$   (80)

From (79),

$D_k = (H_{k-1}'S_{k-1}^{-1}H_{k-1} + h_k's_k^{-1}h_k)^{-1}H_{k-1}'S_{k-1}^{-1}$   (81)

From (80),

$b_k = (H_{k-1}'S_{k-1}^{-1}H_{k-1} + h_k's_k^{-1}h_k)^{-1}h_k's_k^{-1}$   (82)

Defining

$P_k = (H_k'S_k^{-1}H_k)^{-1} = (H_{k-1}'S_{k-1}^{-1}H_{k-1} + h_k's_k^{-1}h_k)^{-1}$   (83)

(81) and (82) become

$D_k = P_kH_{k-1}'S_{k-1}^{-1}$   (84)

$b_k = P_kh_k's_k^{-1}$   (85)

From (83),

$P_k^{-1} = P_{k-1}^{-1} + h_k's_k^{-1}h_k$   (86)

Pre-multiplying by $P_k$ and post-multiplying by $P_{k-1}$, and using (85), this becomes

$P_{k-1} = P_k + b_kh_kP_{k-1}$   (87)

$P_k = P_{k-1} - b_kh_kP_{k-1}$   (88)

and using this result in (84) gives

$D_k = P_{k-1}H_{k-1}'S_{k-1}^{-1} - b_kh_kP_{k-1}H_{k-1}'S_{k-1}^{-1}$   (89)

and in (85),

$b_k = P_{k-1}h_k's_k^{-1} - b_kh_kP_{k-1}h_k's_k^{-1}$ , so that $b_k = P_{k-1}h_k'(s_k + h_kP_{k-1}h_k')^{-1}$   (90)

Using (73), (75) and (89), equation (77) can be written

$\hat{A}_k = \hat{A}_{k-1} + b_k(z_k' - h_k\hat{A}_{k-1})$   (91)

Comparing equations (90), (88) and (91) with (60), (61) and (62) respectively, it is seen that the maximum-likelihood filtering equations are identical to the least-squares filtering equations except that the "1" in equation (60) has been replaced by the variance term $s_k$ in equation (90). In other words the maximum-likelihood filter degenerates to a least-squares filter if $s_k = 1$.
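The batch form (70) can be checked with a short sketch. The code below is illustrative only; H, the true parameters and the per-row noise variances are assumed values, not data from the thesis.

```python
import numpy as np

# Weighted batch estimate of equation (70): A_hat = (H' S^-1 H)^-1 H' S^-1 Z.
rng = np.random.default_rng(0)
H = rng.normal(size=(6, 3))
A_true = np.array([[1.0], [-0.5], [2.0]])
s = np.array([0.1, 0.1, 0.4, 0.4, 1.0, 1.0])            # noise variance of each row
Z = H @ A_true + rng.normal(scale=np.sqrt(s)).reshape(-1, 1)

S_inv = np.diag(1.0 / s)
A_ml = np.linalg.solve(H.T @ S_inv @ H, H.T @ S_inv @ Z)   # equation (70)
A_ls = np.linalg.solve(H.T @ H, H.T @ Z)                   # unweighted least squares, equation (5)
print("maximum-likelihood:", A_ml.ravel(), " least-squares:", A_ls.ravel())
```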
The starting values for $\hat{A}_k$ and $P_k$ can be obtained by a direct application of (70) and (83) to the first s measurement stages, which would require inversion of an s x s matrix. However, since starting values constitute a-priori known statistics of the parameters, it is instructive instead to compare the recursive maximum-likelihood filter with a similar filter that is based on such statistics. In the maximum-a-posteriori (MAP) filter, A has a normal or Gaussian probability distribution, $\hat{A}_o$ is its expected value and $P_o$ is a diagonal matrix such that the i-th element on its main diagonal is the variance of every parameter in the i-th row of A. To obtain this filter it is necessary to maximize the so-called a-posteriori density $p_{A|Z}$, which is related to the likelihood density $p_{Z|A}$ by the Bayes rule:

$p_{A|Z} = \dfrac{p_{Z|A}\,p_A}{p_Z}$   (92)

This is equivalent to finding a maximum of its logarithm:

$\log p_{A|Z} = \log p_{Z|A} + \log p_A - \log p_Z$   (93)

Differentiating with respect to A results in

$\dfrac{\partial}{\partial A}\log p_{A|Z} = H'S^{-1}(Z - HA) + \dfrac{\partial}{\partial A}\log p_A$   (94)

where $\partial(\log p_Z)/\partial A$ is zero because $p_Z$ is not a function of A. The a-priori density $p_A$ can be written

$p_A = (2\pi)^{-rs/2}(\det P_o)^{-r/2}\exp\!\left[-\tfrac{1}{2}\sum_{i,j} p_i^{-1}(A_{ij} - \hat{A}_{o,ij})^2\right]$   (95)

where $p_i$ is the i-th diagonal element of $P_o$, and then

$\dfrac{\partial}{\partial A}\log p_A = -P_o^{-1}(A - \hat{A}_o)$   (96)

The maximum a-posteriori density occurs when $\partial(\log p_{A|Z})/\partial A$ is zero:

$H'S^{-1}(Z - H\hat{A}) - P_o^{-1}(\hat{A} - \hat{A}_o) = 0$

$\hat{A} = P\,(H'S^{-1}Z + P_o^{-1}\hat{A}_o)$   (97)

where

$P = (H'S^{-1}H + P_o^{-1})^{-1}$   (98)

Comparison with equation (86) shows that the MAP estimate will in fact be generated by the recursive maximum-likelihood equations when $\hat{A}_o$ and $P_o$ are the expected value and variance, respectively, of the parameters. The resulting filter is actually a special case of the well-known discrete Kalman filter, but has limited application possibilities because of the need for accurate a-priori statistics of the parameters and because of the restriction that parameters in the same row of A must have the same variance. However, the fact that the maximum-likelihood estimate of (70) and (83) is generated by the same recursive relations as the MAP estimate of (97) and (98), and that the MAP estimate degenerates to the maximum-likelihood estimate as $P_o$ becomes infinite, indicates that the maximum-likelihood filter can be started with $\hat{A}_o$ equal to zero and $P_o$ a diagonal matrix with large elements on the main diagonal. In this way the recursive maximum-likelihood filter would presuppose A to have zero expected value and very large variance, which is consistent with a total lack of a-priori statistics.
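A small sketch of the batch MAP estimate of equations (97) and (98) makes the role of the prior explicit. All numerical values below are assumed for illustration, and the code is not from the thesis.

```python
import numpy as np

# Batch MAP estimate: A_hat = P (H'S^-1 Z + P_o^-1 A_o),  P = (H'S^-1 H + P_o^-1)^-1
rng = np.random.default_rng(3)
H = rng.normal(size=(5, 2))
A_true = np.array([[1.0], [2.0]])
s = 0.25 * np.ones(5)                        # same noise variance in every row here
Z = H @ A_true + rng.normal(scale=0.5, size=(5, 1))

S_inv = np.diag(1.0 / s)
A0 = np.array([[0.9], [2.1]])                # a-priori expected value of A
P0_inv = np.linalg.inv(np.diag([0.5, 0.5]))  # a-priori variances of the parameter rows

P = np.linalg.inv(H.T @ S_inv @ H + P0_inv)          # (98)
A_map = P @ (H.T @ S_inv @ Z + P0_inv @ A0)          # (97)

# Letting P_o grow without bound recovers the maximum-likelihood estimate of (70):
A_ml = np.linalg.solve(H.T @ S_inv @ H, H.T @ S_inv @ Z)
print("MAP:", A_map.ravel(), " ML:", A_ml.ravel())
```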
IV. General Computational Algorithm

It is now apparent that with very minor alterations the basic least-squares recursive equations of Chapter II (53-62) can perform either maximum-likelihood or Bayesian (maximum-a-posteriori) filtering. By substituting the noise variance in place of the "1" in equation (60) and replacing the minimum-norm equations (53-59) with initial values $\hat{A}_o$ zero and $P_o$ very large, a maximum-likelihood filter results. A Bayesian filter is produced by using the expected value and variance of A for $\hat{A}_o$ and $P_o$ respectively in the maximum-likelihood filter. The following table summarizes the differences:

TABLE I - Essential Differences of Least-Squares, Maximum-Likelihood and Bayesian MAP Filters

Filter               | s_k             | Initial values
Least-squares        | 1               | minimum-norm composition using equations (53-59)
Maximum-likelihood   | noise variance  | A_o = zero, P_o very large
Bayesian MAP         | noise variance  | A_o = expected value of A, P_o = variance of A

Figure 1 presents a general computational algorithm which allows for any of the combinations in the above table. It also allows for unclassified combinations such as one in which the noise variance term is "1" and the initial values are $\hat{A}_o$ zero and $P_o$ large. This is effectively a least-squares filter which is begun in the same way as the maximum-likelihood filter, eliminating the need for the "minimum-norm" equations (53) and (56-59). Tests of this filter are described in example 1 of Chapter VI.

FIGURE 1 - General Computational Algorithm for Estimation
[Flow chart. If starting values are supplied, get $P_o$ and $\hat{A}_o$ and on each cycle calculate $b_k$, $P_k$ from (60), (61), get the measurement $z_k$ and its variance $s_k$, and calculate the estimate $\hat{A}_k$ from (62). Otherwise initialize $Q_k$, $R_k$, $\hat{A}_k$ to zero and, for the first s cycles, define the measurement row $h_k$, calculate $c_k$ from (53), then $e_k$, $Q_k$, $R_k$ from (56), (57), (58), get the measurement $z_k$ and calculate the estimate from (59). Repeat until k exceeds $k_{MAX}$; if $c_k$ is near zero the process is re-initialized.]

In the general algorithm, linear dependence of the rows of H in the first s stages of the least-squares filter results in a value of $c_k$ which is near zero, re-initializing the entire process. How close to zero $c_k$ must come in order for this to occur is a difficult matter to define, and depends among other things on the precision of the calculations. While exact linear dependence would theoretically make $c_k$ exactly zero, the value determined by the computer will normally contain errors due to truncation and thus be slightly different from zero. In any event, cases of near linear dependence can produce inaccurate estimates, so it is probably best to require that $c_k$ remain reasonably large. This can be done by defining a threshold value and causing re-initialization if the magnitude of $c_k$ falls below this threshold during the first s stages.
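The table can be read directly as a parameterization of one update loop. The sketch below is illustrative code (it uses the approximate large-$P_o$ start discussed above rather than the minimum-norm equations); the function name and interface are assumptions, not the thesis's Nova program.

```python
import numpy as np

def sequential_estimate(rows_h, rows_z, s_noise, A0, P0):
    """One recursive update loop covering the three filters of Table I:
    least-squares      -> s_noise[k] = 1,  A0 = 0, P0 = large * I
    maximum-likelihood -> s_noise[k] = noise variance, A0 = 0, P0 = large * I
    Bayesian MAP       -> s_noise[k] = noise variance, A0 = prior mean, P0 = prior variances."""
    A = np.array(A0, dtype=float).copy()
    P = np.array(P0, dtype=float).copy()
    for h, z, sk in zip(rows_h, rows_z, s_noise):
        h = np.reshape(h, (1, -1))
        z = np.reshape(z, (1, -1))
        b = P @ h.T / (sk + h @ P @ h.T).item()   # (60) with the "1" replaced by s_k, i.e. (90)
        P = P - b @ h @ P                         # (61) / (88)
        A = A + b @ (z - h @ A)                   # (62) / (91)
    return A, P
```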
In choosing which of the various filtering procedures to use, it is important to know how the errors of the estimates are expected to compare. A useful matrix which gives an estimate of the error is the error covariance matrix, hereby defined as

$\mathrm{cov}(\hat{A}) = e\big[(A - \hat{A})(A - \hat{A})'\big]$   (99)

The trace of this matrix is equivalent to the expected value of the sum of the squares of the elements of the error matrix $(A - \hat{A})$. Using equations (5) and (1) it can be seen that the error covariance of the least-squares filter after s stages is given by

$\mathrm{cov}(\hat{A}_{LS}) = (H'H)^{-1}H'\,e(VV')\,H(H'H)^{-1}$   (100)

$e(VV')$ can readily be shown to be a diagonal matrix such that the i-th element on its main diagonal is the sum of the variances of the measurements in the i-th row of Z.

The least-squares estimate is always unbiased. That is,

$e(A - \hat{A}_{LS}) = 0$   (101)

The maximum-likelihood estimate (70) has an error covariance given by

$\mathrm{cov}(\hat{A}_{ML}) = (H'S^{-1}H)^{-1}H'S^{-1}\,e(VV')\,S^{-1}H(H'S^{-1}H)^{-1}$   (102)

Because the maximum-likelihood filter requires that measurements in the same row of Z have the same variance, and since there are r measurements in each row, it is apparent that

$e(VV') = r\,S$   (103)

where S is the measurement-noise variance matrix as defined in (65). Therefore, when all measurements in the same row of Z have the same variance, the error covariances of the least-squares and maximum-likelihood estimates become

$\mathrm{cov}(\hat{A}_{LS}) = r\,(H'H)^{-1}H'SH(H'H)^{-1}$   (104)

$\mathrm{cov}(\hat{A}_{ML}) = r\,(H'S^{-1}H)^{-1}$   (105)

The definition of the P-matrix for the maximum-likelihood filter as given in equation (83) shows that the error covariance of the maximum-likelihood filter is given simply by

$\mathrm{cov}(\hat{A}_{ML}) = r\,P$   (106)

Using the matrix inequality

$M'M \ge (N'M)'(N'N)^{-1}(N'M)$   (107)

(see Sage and Melsa [14], p. 246), where M and N are any two k x s matrices with $k \ge s$ and N of rank s, and making the substitutions

$M = S^{1/2}H(H'H)^{-1}$   (108)

$N = S^{-1/2}H$   (109)

it is easily shown that

$\mathrm{cov}(\hat{A}_{ML}) \le \mathrm{cov}(\hat{A}_{LS})$   (110)

That is, the maximum-likelihood filter, when applicable, gives an estimate which is as good as or better than that of the least-squares filter. Like the least-squares estimate, the maximum-likelihood estimate is unbiased:

$e(\hat{A}_{ML} - A) = 0$   (111)

The error covariance of the Bayesian MAP estimate, as defined by equation (97), is

$\mathrm{cov}(\hat{A}_{MAP}) = P\big[H'S^{-1}\,e(VV')\,S^{-1}H + P_o^{-1}\,e\big((A - \hat{A}_o)(A - \hat{A}_o)'\big)\,P_o^{-1}\big]P$   (112)

Since all measurements in the same row of Z must have the same noise variance, and the probability distributions of all parameters in the same row of A must have the same variance,

$e(VV') = r\,S$   (113)

$e\big((A - \hat{A}_o)(A - \hat{A}_o)'\big) = r\,P_o$   (114)

where r is the number of elements in each row of Z and A. The error covariance therefore becomes

$\mathrm{cov}(\hat{A}_{MAP}) = r\,P(H'S^{-1}H + P_o^{-1})P = r\,P$   (115)

which can be readily determined from the P-matrix. Comparing the values of P for the maximum-likelihood and MAP estimates, it is obvious that the MAP estimate, where applicable, has an error covariance which is less than or equal to that of either the maximum-likelihood or least-squares estimates. In fact, the MAP estimate, when valid, is known to have the least error covariance of any known estimate. Even when the restriction is removed that the noise and parameters be Gaussian, the MAP filter still provides the best estimate of all linear filters. The noise must still be random with zero mean and known variance, and the expected value and variance of the parameters must still be known. The filter is then usually called a linear-minimum-variance filter.
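The ordering (110) is easy to verify numerically. The sketch below uses assumed values for H and the row variances; it simply evaluates (104) and (105) and checks that their difference is positive semi-definite.

```python
import numpy as np

# Numerical illustration of equations (104), (105) and the inequality (110).
rng = np.random.default_rng(1)
H = rng.normal(size=(8, 3))
s = np.array([0.2, 0.2, 0.5, 0.5, 1.0, 1.0, 2.0, 2.0])   # variance of each measurement row
S = np.diag(s)
r = 1                                                     # one column in Z

HtH_inv = np.linalg.inv(H.T @ H)
cov_ls = r * HtH_inv @ H.T @ S @ H @ HtH_inv              # (104)
cov_ml = r * np.linalg.inv(H.T @ np.linalg.inv(S) @ H)    # (105)

# (110): all eigenvalues of the difference should be >= 0 up to round-off
print(np.linalg.eigvalsh(cov_ls - cov_ml))
```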
In addition to the fact that all parameters in the same row of A must have the same variance, the MAP estimate has another major disadvantage. If incorrect prior expected values and variances are used, the estimates will be biased, with the bias at a stage k given by

$e(\hat{A}_k - A) = P_k\big(H_k'S_k^{-1}H_k\,e(A) + P_o^{-1}\hat{A}_o - P_k^{-1}e(A)\big) = (I + P_oH_k'S_k^{-1}H_k)^{-1}\big(\hat{A}_o - e(A)\big)$   (116)

The bias is most noticeable for smaller values of k and decreases as k increases. It is also smaller for higher values of the initial variance $P_o$ and approaches zero as $P_o$ becomes infinite, the estimate then becoming a maximum-likelihood estimate.

In conclusion it may be said that among the three filters (least-squares, maximum-likelihood and Bayesian MAP), the more extensive the a-priori statistical knowledge of the parameters and measurement noise, the lower is the covariance of estimation error.

V. Identification of A Linear Stationary Process

The computational algorithm of the previous chapter can be used to estimate the parameters of a discrete model for a linear time-invariant process. If measurements of the system variables are available at uniformly-spaced intervals of time, it is possible to develop a model of the form

$x_k = \Phi x_{k-1} + \Lambda u_{k-1}$   (117)

where $x_k$ is an n-dimensional vector composed of the system outputs at stage k, $u_k$ is an m-dimensional vector composed of the system inputs at stage k, and $\Phi$ and $\Lambda$ are matrices composed of the constant parameters describing the process. $x_k$ is called the state of the system at stage k, $u_k$ the control, and $\Phi$ and $\Lambda$ the state-transition and state-driving matrices respectively. Transposing both sides of the last equation results in

$x_k' = x_{k-1}'\Phi' + u_{k-1}'\Lambda'$   (118)

Because of measurement noise, there will be differences between the observed values of the variables and their true values. Therefore it is convenient to distinguish the observed values with a superscribed bar as follows:

$\bar{x}_k = x_k + \xi_k$   (119)

$\bar{u}_k = u_k + \psi_k$   (120)
I f the components of the noise sequences and have zero means and are Gaussian, white and independent, then the least-squares f i l t e r of Chapter I I applies because the o v e r a l l observation noise v̂ . defined by (125) has zero-mean components. The maximum-likelihood and Bayesian f i l t e r s , however, are not s t r i c t l y v a l i d as they have been derived i n Chapter I I I , be- cause the components of v^ are not l i k e l y to be independent or white. None the l e s s i t would seem l o g i c a l that the h i e r - archy among the three f i l t e r s should s t i l l e x i s t because of the d i f f e r i n g degrees of a - p r i o r i information u t i l i z e d . Thus, although methods e x i s t by which the maximum-likelihood and Bayesian f i l t e r s can be made optimal (see, f o r example, Sage and Melsa ^14 J, Chapter 8), they involve such extensive com- p l i c a t i o n of the algorithm that i t i s convenient i n t h i s ap- p l i c a t i o n to merely ignore the f a c t that the noise components may be non-white or s t a t i s t i c a l l y dependent. The computational algorithm as applied to the system- i d e n t i f i c a t i o n problem i s represented i n the flow-chart on the following page. A l l of the experimental t e s t s described i n the next chapter were made using t h i s algorithm on e i t h e r the 40 FIGURE 2 - System I d e n t i f i c a t i o n Algorithm ^ BEGIN ^ GET SYSTEM DIMENSIONS •GET STARTING VALUES i =1 | YES^STARTINC VALUES 1 = 0 USER'S PRELIMINARY MEASUREMENT SUBROUTINE USER'S SUBROUTINE TO [GET STATE MEASUREMENT k = 1 CALCULATE b k ? P k USER'S SUBROUTINE TO GET STATE MEASUREMENT SUBROUTINE TO CALCULATE ESTIMATE USER'S SUBROUTINE TO OUTPUT ESTIMATE k = k + 1 A J » s — i — A,k? £ k s Q k j S k = o CALCULATE c. = (I — Q, . ) h, —k , — —K-l —K CALCULATE e_k, Q k, R k USER'S SUBROUTINE TO GET STATE MEASUREMENT SUBROUTINE TO CALCULATE ESTIMATE USER'S SUBROUTINE TO OUTPUT ESTIMATE 41 I . B c M , 360-67 or the Data General Corporation "Nova" d i g i t a l computer. In the appendix are included the complete programs for the Nova version of the algorithm. Comparison with the flow-chart of Chapter IV w i l l show that the i d e n t i f i c a t i o n algorithm i s basically the same except for the addition of certain specialized subroutines for handling the input and output data. These data subroutines can be changed to suit any particular application. There i s a preliminary measurement subroutine which i s provided in case there are any tasks as- sociated with the measurement process which must be performed before entering the iden t i f i c a t i o n cycle. For example, should i t be necessary to take samples of the system at a rate faster than the computation cycle v/ould allow, the measurements may a l l be made in advance and stored by this subroutine. Then on each cycle i s a subroutine to get the state measurement from the appropriate source and another to output the calculated estimate*. The algorithm was programmed on the system-360 to allow for more sophisticated analysis using a r t i f i c i a l models of known s t a t i s t i c s . In the Nova programs, a l l the i n i t i a l i z i n g procedures are controlled by the operator using the teletype keyboard i n a conversational manner. 
The computational algorithm as applied to the system-identification problem is represented in the flow-chart of Figure 2. All of the experimental tests described in the next chapter were made using this algorithm on either the IBM 360/67 or the Data General Corporation "Nova" digital computer. In the appendix are included the complete programs for the Nova version of the algorithm.

FIGURE 2 - System Identification Algorithm
[Flow chart: the general algorithm of Figure 1 with user-supplied data subroutines added. At the start, get the system dimensions and starting values and call the user's preliminary measurement subroutine; on each cycle, call the user's subroutine to get the state measurement, calculate the estimate (via $c_k$, $e_k$, $Q_k$, $R_k$ in the minimum-norm branch or $b_k$, $P_k$ in the other branch), and call the user's subroutine to output the estimate.]

Comparison with the flow-chart of Chapter IV will show that the identification algorithm is basically the same except for the addition of certain specialized subroutines for handling the input and output data. These data subroutines can be changed to suit any particular application. There is a preliminary measurement subroutine which is provided in case there are any tasks associated with the measurement process which must be performed before entering the identification cycle. For example, should it be necessary to take samples of the system at a rate faster than the computation cycle would allow, the measurements may all be made in advance and stored by this subroutine. Then on each cycle there is a subroutine to get the state measurement from the appropriate source and another to output the calculated estimate.

The algorithm was programmed on the System/360 to allow for more sophisticated analysis using artificial models of known statistics. In the Nova programs, all the initializing procedures are controlled by the operator using the teletype keyboard in a conversational manner. The system dimensions can be set to estimate any matrix A up to a dimension of 8 x 8, and the calculations are performed in floating-point arithmetic that is based on a 24-bit mantissa with sign and a 7-bit exponent, using the standard basic floating-point software provided with the computer.

Although in the flow-chart the k-counters are separate from the subroutines, in the Nova programs the job of counting stages has been left to the user-supplied subroutines. This allows for counting either at the point where data comes in or at the point of outputting the estimate, whichever is more suitable to the particular application. Also left to the user subroutines is the task of determining the sampling times, which, in the case where data is obtained from an actual system using an analogue-to-digital converter, could require an external real-time clock connected to the input/output bus of the computer. The programs are thus very versatile, with the permanent software performing identification only, and the user-supplied subroutines having full control over the rest of the process.

VI. Examples

All the examples described in this chapter were used to test the computer programs for the system-identification algorithm and to study various properties of the algorithm. Sequential values of the state were generated for the algorithm in each case using known values of $\Phi$ and $\Lambda$, and the estimates were then compared with these known values.

Example 1: The model parameters were

$\Phi = \begin{bmatrix} 0.995 & 0.5 & 0.0 \\ 0.0 & 1.0 & 0.5 \\ 0.0 & -1.13 & 0.9 \end{bmatrix}$ ,  $\Lambda = \begin{bmatrix} 0.0 \\ 0.0 \\ 1.25 \end{bmatrix}$

with initial state and control

$x_o = \begin{bmatrix} 0.0 \\ 1.5 \\ 3.95 \end{bmatrix}$ ,  $u_o = 1$

The control was left constant throughout the process, resulting in an open-loop response corresponding to the three curves of Figure 3. The curves are shown to be continuous because it has been assumed that in practice the measurements would result from uniform sampling of this continuous system. The simulation was performed using the Nova programs and there was no measurement noise added to the model.

Figure 3 - Time response of continuous system corresponding to Example 1.
[Plot of the three state components against time over roughly 17 sampling intervals.]

TABLE II - Stage-wise Errors of Least-Squares Filter (Example 1)
Figures in each column represent errors at successive stages (1 to 19) for each starting procedure: "minimum norm"; $P_o = 10\,I$; $P_o = 10^2\,I$; $P_o = 10^3\,I$; $P_o = 10^5\,I$; $P_o = 10^7\,I$; $P_o = 10^{16}\,I$.
[Numeric table body: the tabulated errors decrease with stage for every starting procedure, most rapidly for the "minimum norm" start and for the larger values of $P_o$.]
The r e s u l t i n g f i l t e r i s then a good approximation to the purely d e t e r m i n i s t i c l e a s t - squares f i l t e r of Chapter I I , with much l e s s computational requirements. However, the pure least-squares f i l t e r i s subject to minimum i n i t i a l b ias and with i t a better estimate r e s u l t s a f t e r fewer measurements* S p e c i f i c a l l y , the estimation e r r o r at the fo u r t h stage i n t h i s example was lowest with the pure least-squares f i l t e r , the estimates being .9949869 .5000086 - . 0 0 0 0 0 3 6 .0000224 .9999967 .5000123 -.0000045 - 1 . 1 3 0 0 0 9 .8999898 ^4 .0000246 - . 0 0 0 0 3 9 6 1.249999 47 Example 2: The model was 0o995 0. 5 0.0 0.0 <L = 0.0 1.0 0.5 A = 0.0 0.0 -1.13 -0.9 1.25 with i n i t i a l state and i n i t i a l c o n t r o l ~~o 0.0 1.5 -1.45 u = 1 o and no simulated measurement noise. In generating the remaining states, the c o n t r o l was l e f t equal to U q u n t i l stage 1, a f t e r which each state was deter- mined using a c o n t r o l chosen to minimize the estimated simple performance function J k + 1 = W + l E i ^ c + i + H k H 2 u k where and are weighting matrices chosen f o r s t a b i l i t y A purposes and x k + i defined by the equation k+1 k -k In t h i s example i t i s possible to have 48 Setting the d e r i v a t i v e of J k + 1 with respect to equal to zero gives A j A ^k ^k+1 + U k = 0 Figure 4 shows the v a r i a t i o n of the r e s u l t i n g performance function J ( t ) = X , X , + U , T , t , f t < t , , —k ~k k ~ l ' k — k+1 f o r the open-loop case where the con t r o l was l e f t equal to U Q f o r a l l stages and ,for two cases where .u^.was .calculated., beginning with u^, based on estimates' from the least-squares f i l t e r using d i f f e r e n t s t a r t i n g "procedures. A l l c a l c u l a t i o n s were performed on the Nova. This method might be us e f u l f o r simple combined i d e n t i - f i c a t i o n and c o n t r o l of an actual continuous system by c a l c u - l a t i n g a sub-optimal c o n t r o l based on the d i s c r e t e model. 4. _ J OPEN-LOOP CASE ' FOR • ALL k - _ J r CONTROL BASED ON SUB-OPTIMAL LEAST-SQUARES IDENTIFICATION J STARTING WITH £ = 0 AND P = 10xl CONTROL BASED ON LEAST- SQUARES IDENTIFICATION STARTING WITH EQUATIONS (53) AND (56 -59) ) 2 4 8 10 12 14 15 F i g u r e 4 - P l o t o f p e r f o r m a n c e f u n c t i o n s f o r example 2. J ( t ) = 4 ^ t u k - l ' < t < t k+1 50 Example 3: The model was OoO O o O 0 . 9 0 . 0 fL - 2 . 0 O o O 0 . 0 A = O . O O o O 0 o 7 0 . 0 0 . 0 with i n i t i a l state and con t r o l x = —o 2 . 7 10.0 4.9 u = 0.0 The simulation was performed on the system-360 with random -noise - o f normal -distri-bution --added—to-"the-state and -control measurements. The standard d e v i a t i o n of the noise was 0. 7 , the variance 0.49. The i d e n t i f i c a t i o n algorithm was used as a "best case" A A of the Bayesian I4AP f i l t e r , with ^ = j# and A^ = A. s^ at every stage was set equal to the noise variance, 0.49. While i n example 1 the i n i t i a l estimate was inaccurate and the measure- ments were exact, i n t h i s example the i n i t i a l estimates are exact and the measurements are noisy. Figure 5 shows the com- puted estimation e r r o r s as defined by ( 1 2 6 ) at each stage. As expected, the r e s u l t s are opposite to. those of example 1 , with a lower P now g i v i n g the better estimates because of increased weighting of A^. 
The results of Figure 6 were obtained with this same example and show the effect on the estimation error of using different multiples of the noise variance for s_k in the algorithm. When P_0 is large the effect is not noticeable, but when P_0 is small, increasingly higher multiples give increasingly better estimates in the stages following stage 4. A higher value of s_k provides decreased weighting of the noisy measurements and increased weighting of the good initial estimate; s_k has no noticeable effect on the estimates prior to stage 4.

Figure 7 was obtained by the same procedure, except that the minimum-norm composition was used at the start. As is the case when the filter is started with P_0 large, there was negligible difference in the errors when values of s_k ranging from 0.5 to 2 times the noise variance were used.

[Figure 7 - Plot of estimation errors for example 3. Minimum-norm start.]

Example 4: The model was

    Φ =  0.995   0.5    0.0        A =  0.0
         0.0     1.0    0.5             0.0
         0.0    -1.13   0.9             1.25

with initial state and control

    x_0 = [0.0  1.5  3.95]',    u_0 = 1

(see Figure 3 for response curves). The simulation was done on the System/360, introducing Gaussian noise of standard deviation 0.5 and variance 0.25 to the measurements; s_k in the algorithm was set equal to the variance at each stage. Figure 8 shows the computed estimation errors as defined by (126) at each stage for three different starting procedures: Φ̂_0 = Φ, Â_0 = A and P_0 = I for a "best-case" Bayesian MAP filter; a "minimum-norm" composition for a least-squares or maximum-likelihood filter; and Φ̂_0 = 0, Â_0 = 0 and P_0 equal to a large scalar multiple of I for an approximate least-squares or maximum-likelihood filter.

The results still support the hierarchy of filters developed in Chapter IV, despite the fact that the overall noise terms defined by (125) are not expected to be statistically independent or white, as discussed in Chapter V.

[Figure 8 - Plot of errors for estimates in example 4.]

VII. Further Applications

What has been presented in this thesis is an algorithm to estimate the parameter matrix A in the general measurement process of equation (1):

    Z = HA + V                    (1)

While the accompanying computer programs (see Appendix) have been written for the particular system-identification problem of Chapter V, it is a simple matter to adapt them for any measurement process defined by (1). Specifically, the identification problem of Chapter V requires that each row of Z (z_k') be taken from the state measurement which will comprise the next row of H (h_k'), and thus for the computer programs the vectors z and h can share the same storage locations. In other applications, separate sets of storage locations may be required. Apart from this, the program, when supplied sequentially (via the user's measurement subroutine) with rows of the Z and H arrays satisfying the relation of equation (1), will generate a sequential estimate of the parameter array A, subject to the following conditions developed in the previous chapters:

The elements of V must have zero expected values.
If all the elements of the same row of V (v_k') have the same probability distribution and the variance of that distribution is known, it should be supplied as the value of s_k corresponding to that row (maximum-likelihood filter). If all the elements of the same row of V do not have the same probability distribution, or if the variances of the distributions are not known, s_k should be set equal to 1 at every stage (least-squares filter).

If expected values for the parameters are known, they should be used as the elements of Â_0. If, in addition, the probability distributions of all parameters in the same row of A have the same variance and the variances of the distributions of all the parameters are known, then P_0 should be a diagonal matrix with the ith element on its main diagonal equal to the variance of the distribution of every parameter in the ith row of A. Otherwise P_0 should be a diagonal matrix with each element on its main diagonal set large enough to allow for any uncertainty in the corresponding row of Â_0.

If expected values for the parameters are not known, then no initial values should be supplied for Â_0 and P_0, but equations (53) and (56-59) should be used at the start. However, where the increased computational time required by these equations would be prohibitive, a good approximation can be achieved by using the initial values Â_0 = 0 and P_0 diagonal with large elements on the main diagonal.

Rapid Identification

A major difficulty with the method of system identification developed in Chapter V is that the estimated discrete model cannot be accurate unless the rate of sampling the state is high in relation to the rate at which the state varies. At the same time, such rapid sampling can lead to near linear-dependence in the state measurements and consequent ill-conditioning of the H matrix, which makes adequate identification impossible. Hanafy and Bohn [4] have suggested augmenting the state measurement at each sampling time with the measured outputs of integrators cascaded to the inputs and outputs of the continuous system. It is claimed that this additional data is effective in overcoming the problem of ill-conditioning. However, the usual treatment becomes cumbersome because the data and parameters must be structured into lengthy vectors in order to fit the form of conventional estimators, which are derived for a measurement process of the type

    z = Ha + v

z being the data vector, a the parameter vector, H the measurement matrix and v a vector of noise terms. For the identification problem this results in a large H matrix of block-diagonal form, containing many zeros.

The algorithm of this thesis can be used quite readily for identification with augmented state measurements. At each sampling time the inputs and outputs of the system are measured and stored, along with the outputs of the successive integrators. A state measurement is processed as one row in the estimation algorithm, followed by the integrator outputs as subsequent rows.
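This row-by-row processing can be sketched as follows (Python, for illustration only; it is not part of the thesis programs, and the arrays x_meas, u_meas, x_int1, u_int1, x_int2 and u_int2 are hypothetical stores of the sampled outputs, inputs and integrator outputs). Each sampling instant contributes one row for the sampled state and one further row for each level of integration, all handled by the same recursive update; the worked example that follows gives the underlying relations when each input and output is passed through two integrators.

    import numpy as np

    def update(A_hat, P, h, z, s_k=1.0):
        # One cycle of the row-recursive estimator (least-squares form when s_k = 1).
        c = P @ h
        b = c / (s_k + h @ c)
        return A_hat + np.outer(b, z - h @ A_hat), P - np.outer(c, b)

    def process_sample(A_hat, P, k, x_meas, u_meas, x_int1, u_int1, x_int2, u_int2):
        # One sampling instant: a state row plus one row per level of integration,
        # each pairing the quantities measured at t_k with those at t_{k-1}.
        rows = [
            (x_meas[k], np.concatenate([x_meas[k - 1], u_meas[k - 1]])),  # state row
            (x_int1[k], np.concatenate([x_int1[k - 1], u_int1[k - 1]])),  # first integrals
            (x_int2[k], np.concatenate([x_int2[k - 1], u_int2[k - 1]])),  # second integrals
        ]
        for z_row, h_row in rows:
            A_hat, P = update(A_hat, P, h_row, z_row)
        return A_hat, P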
Suppose, for example, that each of the system inputs and outputs is passed through two integrators. Evidently the integrals

    ∫_0^t x(t) dt    and    ∫_0^t ∫_0^t x(t) dt dt

will satisfy the same linear differential equation as does x(t), so uniform samples of their outputs should satisfy the same difference equation:

    x(t_k) ≅ [Φ  A] [ x(t_{k-1}) ]
                    [ u(t_{k-1}) ]

    ∫_0^{t_k} x(t) dt ≅ [Φ  A] [ ∫_0^{t_{k-1}} x(t) dt ]
                               [ ∫_0^{t_{k-1}} u(t) dt ]

    ∫_0^{t_k} ∫_0^t x(t) dt dt ≅ [Φ  A] [ ∫_0^{t_{k-1}} ∫_0^t x(t) dt dt ]
                                        [ ∫_0^{t_{k-1}} ∫_0^t u(t) dt dt ]

The beginning rows of data for the estimation algorithm would therefore be

    z_1' = x̄'(t_1),    h_1' = [ x̄'(t_0)   ū'(t_0) ]

with the sampled integrator outputs supplying the following rows in the same pattern, where the superscribed bars indicate that these are the observed values of the variables concerned.

With this procedure the state measurement defining z at a given stage does not immediately become the h vector for the following stage, as it does in the simple identification problem. Therefore separate sets of memory locations are needed for the z and h vectors, as mentioned at the beginning of this chapter.

Identification of Non-Linear Systems

Both the identification methods discussed thus far have assumed a linear model for the system being measured. However, it is equally possible, within the allowable forms of measurement processes, to assume certain non-linear models. Netravali and de Figueiredo [9] have discussed methods of obtaining regression functions for classes of discrete non-linear systems in which the evolution operators can be represented by algebraic or trigonometric polynomials. Although noise considerations are more involved, the computational requirements are not unlike those of the linear identification problem and are adaptable to the computer programs contained in this thesis.

As a very simple example, suppose it is desired to estimate a third-order non-linear algebraic model of the form

    x_{k+1} = a_0 + a_1 x_k + a_2 x_k^2 + a_3 x_k^3

from measurements of the scalar variable x_k, k = 0, 1, 2, ... . The estimation algorithm would begin with the following data:

    z_1 = x̄_1,    h_1' = [ 1   x̄_0   x̄_0^2   x̄_0^3 ]
    z_2 = x̄_2,    h_2' = [ 1   x̄_1   x̄_1^2   x̄_1^3 ]

where again the superscribed bar is used to denote measured values.
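For this cubic model the regression rows are simply powers of the measured scalar. A minimal sketch of feeding such rows to the estimator (Python, for illustration only; the sample values and names are hypothetical, and the update shown is the generic one-row form rather than the thesis program):

    import numpy as np

    def update(a_hat, P, h, z, s_k=1.0):
        # One cycle of the row-recursive estimator for a single-column parameter array.
        c = P @ h
        b = c / (s_k + h @ c)
        return a_hat + b * (z - h @ a_hat), P - np.outer(c, b)

    x_meas = np.array([0.30, 0.41, 0.52, 0.60, 0.66])   # hypothetical samples of x_k
    a_hat = np.zeros(4)                                  # estimates of [a0, a1, a2, a3]
    P = 1.0e6 * np.eye(4)                                # large P_0: deterministic start

    for k in range(len(x_meas) - 1):
        h_k = np.array([1.0, x_meas[k], x_meas[k]**2, x_meas[k]**3])
        z_k = x_meas[k + 1]
        a_hat, P = update(a_hat, P, h_k, z_k)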
It is useful to assume non-linear models in some cases involving linear systems where not all of the state variables are measured. For example, although Figure 3 in the previous chapter describes a linear system of 3 outputs and 1 input, a model for any one of the outputs, obtained from measurements of that output alone, would have to be non-linear. Of course, non-linear models are not always necessary to reduce the order of a linear system, because many linear systems can be realized in terms of reduced linear models. A further sophistication of the identification algorithm for linear systems could provide for appropriate selection of the measured variables to effect such a reduction.

Time-Varying Parameters

Time-varying parameters can be accommodated by modifying the algorithm so that prior estimates are updated at each stage to allow for expected time-variations during the measurement interval. That is, if the parameter array A is known to vary according to the difference equation

    A_{k+1} = Φ(k+1, k) A_k

then Â_{k-1} in the algorithm is replaced by its a-priori update:

    Â_k^- = Φ(k, k-1) Â_{k-1}

This is a much-used procedure and forms the basis of the Kalman filter for state estimation. Other methods are available if no model for the parameter variations is known (see, for example, Young [17]).

REFERENCES

[1] Flores, I., The Logic of Computer Arithmetic, Prentice-Hall, Englewood Cliffs, N.J., 1963.
[2] Greville, T.N.E., "The Pseudoinverse of a Rectangular or Singular Matrix and its Application to the Solution of Systems of Linear Equations", SIAM Review, Vol. 1, No. 1, January 1959.
[3] Greville, T.N.E., "Some Applications of the Pseudoinverse of a Matrix", SIAM Review, Vol. 2, No. 1, January 1960.
[4] Hanafy, A.A.R. and Bohn, E.V., "Rapid Digital Identification of Linear Systems", Joint Automatic Control Conference, St. Louis, Mo., August 1971.
[5] Ho, Y.C., "The Method of Least Squares and Optimal Filtering Theory", RAND Corp., Santa Monica, Calif., RM-3329-PR, October 1963.
[6] Kalman, R.E., "A New Approach to Linear Filtering and Prediction Problems", Trans. ASME, Series D, J. Basic Eng., Vol. 82, March 1960, pp. 35-45.
[7] Kalman, R.E. and Bertram, J.E., "Control System Analysis and Design Via the 'Second Method' of Liapunov, I. Continuous-Time Systems and II. Discrete-Time Systems", Trans. ASME, Series D, J. Basic Eng., Vol. 82, June 1960, pp. 371-400.
[8] Kishi, F.H., "On-Line Computer Control Techniques and Their Application to Re-entry Aerospace Vehicle Control", in Advances in Control Systems Theory and Applications, C.T. Leondes, ed., Vol. 1, Academic Press, New York, 1964.
[9] Netravali, A.N. and de Figueiredo, R.J.P., "On the Identification of Nonlinear Dynamical Systems", IEEE Trans. Automat. Contr., Vol. AC-16, No. 1, February 1971, pp. 28-36.
[10] Ogata, K., State-Space Analysis of Control Systems, Prentice-Hall, Englewood Cliffs, N.J., 1968.
[11] Penrose, R., "A Generalized Inverse for Matrices", Proc. Cambridge Philos. Soc., Vol. 51, 1955, pp. 406-413.
[12] Penrose, R., "On Best Approximate Solutions of Linear Matrix Equations", Proc. Cambridge Philos. Soc., Vol. 52, 1956.
[13] Sage, A.P., Optimum Systems Control, Prentice-Hall, Englewood Cliffs, N.J., 1968.
[14] Sage, A.P. and Melsa, J.L., Estimation Theory with Applications to Communications and Control, McGraw-Hill, New York, 1971.
[15] Sinha, N.K. and Pille, W., "Online System Identification Using Matrix Pseudoinverse", Electronics Letters, Vol. 6, No. 15, July 23, 1970, pp. 453-454.
[16] Van Trees, H.L., Detection, Estimation and Modulation Theory, Part I, John Wiley and Sons, New York, 1968.
[17] Young, P.C., "Applying Parameter Estimation to Dynamic Systems - Part I", Control Engineering, Vol. 16, No. 10, October 1969, pp. 119-125.

APPENDIX

Program-Equivalents of Symbols Appearing in the Text

    Program      Text
    A            Â_k
    B            b_k
    C            c_k
    CSQU         c_k'c_k
    CTHR         threshold for c_k'c_k
    H            h_k, z_k
    I            i
    J            j
    K0           k_MAX
    P            P_k
    Q            Q_k
    R            r, n
    RS           rs, n(m+n)
    S            s, m+n
    SS           s^2, (m+n)^2
    V            s_k

Memory-Allocation for Identification Programs

Labels apply to main-program assembly only.
Locations a v a i l - able f o r user-written programs are marked by a s t e r i s k s . Locations Label:Content 0000-0001 .0.0.02 0003 0004-0007 0010-0031 0040-0043 0044-0277 Use 0300 KEEP 0301 SAVE 0302 AMAT 0303 AMAT0 0304 BMAT 0305 BMAT0 0306 0307 ZERO - 0 0310 0 0311 ONE 1 0312 R 0313 S 0314 RS 0315 SS 0316 I 0317 J 0320 0321 K0 0322 L 0323 L0 0324 M 0325 N 0326 N0 0327 0330 A • 500 0331 P 700 0332 Q: 1100 0333 TEMPI: 1300 0334 TEMP 2 1500 0335 TEMP 3 1700 0336 H 2100 0337 C : 2120 s t a r t i n g address of main program required by f l o a t i n g - p t . i n t e r p r e t e r required by f l o a t i n g - p t . i n t e r p r e t e r 5K >pointers, i n d i c a t o r s } f l o a t i n g - p o i n t zero one no. of columns i n parameter array (r) number of rows i n parameter array (s) product r s product ss i n d i c a t o r counter •indicators and counters >indirect matrix addresses 70 0340 B: : 2140 0341 CSQU : 2160 0342 CTHR : 040420 0343 0 0344 2700 0345 3054 0346 v ' : 040420 0347 0 0350 MXADD: 2230 0351 MXSUB' 2251 035.2 ..MXMP.Y' .2.272 0353 MXDIV: 2355 0354 MXTR 2374 0355 DATRD' 2440 0356 DATPN' 2473 0357 DATRC 2533 0360 DIGIT 2572 0361 DATWR 2600 0362 WRITE 2646 0363 INIT 0364 MEAS 0365 DAT IN 0366 DTOUT 0367 0370 STR1 2170 0371 • STR2 2174 0372 STR3 2200 0373 STR4 2204 0374 STR5 . 2210 0375 • STR6 ' 2214 0376 STR7 2220 0377 STR8 2224 0400-0477 0500-2161 2162-2167 2170-2227 2230-2431 2432-2437 2440-2666 2667 2700-3403 BEGIN 3404-5577 5600-6577 ^•indirect matrix addresses "^threshold f o r £ k £ k , i n i t i a l l y 1 contains s t a r t i n g adr. of main prog, contains s t a r t i n g adr. of calc'ns ^measurement variance, i n i t i a l l y floaded as f l o a t i n g - p o i n t "1" >indirect subroutine addresses i n d i r e c t addresses f o r user's subroutines i n d i r e c t addresses f o r t e l e p r i n t e r message s t r i n g s f l o a t i n g - p t . i n t e r p r e t e r work area matrix storage area (see 330-341) ( t e l e p r i n t e r message s t r i n g s matrix a r i t h . subr. (see 350-354) -x I/O subroutines (see 355-362) main program basic f l o a t i n g - p t . i n t e r p r e t e r 71 Instructions f o r Using the I d e n t i f i c a t i o n Program-Package F i r s t load the program tapes i n the f o l l o w i n g order: 1. Nova Basic F l o a t i n g - P o i n t Interpreter 2. Data-supply subroutines (INIT, MEAS,DATIN, DTOUT) 3. I d e n t i f i c a t i o n program-package The program i s s e l f - s t a r t i n g and w i l l , b e g i n by p r i n t i n g c e r t a i n questions which are to be answered by typing numbers into the t e l e t y p e w r i t e r . Each number w i l l be required i n e i - ther f i x e d - or f l o a t i n g - p o i n t format. In the case of f i x e d - point only one decimal d i g i t w i l l be accepted, while f l o a t i n g - point format can be any s t r i n g of characters i n the following order: ...1. A .+ o r - s i g n ( o p t i o n a l - ) 2. A s t r i n g of decimal d i g i t s (optional) 3. A decimal point (optional) 4. A s t r i n g of decimal d i g i t s 5. The l e t t e r E, i f there i s to be an exponent 6. A + or - sign (optional) 7. One or two decimal d i g i t s denoting exponent (optional) 8. A "space" A character typed i n error can be deleted with a "rubout". Examples of allowed s t r i n g s are: 500_, +50_, +5.E2_, -2.05E-04_, + . 3054E-22__, - 2 E 0 3 _ , where _ denotes a "space". The questions printed and explanations of the required responses are as follows: 1. 
"R = " : A f i x e d - p o i n t integer from 1 to 8 equal- l i n g r , the number of columns i n the parameter array, or n, the number of system outputs. 2. "S = " : A f i x e d - p o i n t integer from 1 to 8 equal- l i n g s, the number of rows i n the parameter array, or m+m, where m i s the number of system inputs and n i s the number of 72 system outputs* 3. "SAMPLES?" : A number i n f l o a t i n g - p o i n t format equal to the number of state-samples to be taken. 4. "COPY? " : A fi x e d - p o i n t integer corresponding to one of the following i n s t r u c t i o n s regarding s t a r t i n g values: 0: No s t a r t i n g values are a v a i l a b l e f o r A and P_. 1: The s t a r t i n g values now i n memory are to be used. 2: Copy the s t a r t i n g values from memory onto paper tape. 3: Copy the s t a r t i n g values from memory onto the t e l e p r i n t e r . 4: Read the s t a r t i n g values from the tape i n the high-speed reader and enter them i n t o memory (tape must be one which has been produced by response 2). 5: Accept the s t a r t i n g values from the teletype keyboard. (Note that following t h i s response the program w i l l p r i n t "PARAMS '" a f t e r which the elements b"f the parameter matrix should be typed i n f l o a t i n g - p o i n t format, row by row. A carr i a g e - r e t u r n and l i n e - f e e d w i l l occur automatically a f t e r each element has been typed and an extra l i n e - f e e d w i l l occur at the end of each row. When a l l rows are f i n i s h e d , the pro- gram w i l l p r i n t "P-MATRIX" and the s t a r t i n g values of the IP-matrix elements should then be typed row by row i n f l o a t i n g - point format.) 6: Execute the user-written subroutine whose s t a r t - ing address i s found i n l o c a t i o n INIT = 363. 7: Accept new i n i t i a l values f o r CTHR and V from the teletype keyboard. (The program w i l l respond by p r i n t i n g "CTHR, V:" a f t e r which the values of CTHR and V should be typed i n f l o a t i n g - p o i n t format one a f t e r the other. ) 5. "READY? " : A fixe d - p o i n t integer corresponding to: 0: Return f o r another pass at question 4. 1: Proceed to execute the i d e n t i f i c a t i o n program. IMPORTANT:"The l a s t response to question 4 must be " 0 " or "1" 73 Instructions f o r Writing I/O Subroutines INIT: This subroutine i s c a l l e d i f a "6" i s typed i n response to the question "COPY?" and allows the user to supply s t a r t i n g values with h i s own subroutine. S t a r t i n g address should be stored i n l o c a t i o n 363. MEAS: This subroutine i s c a l l e d j u s t before the rec u r s i v e i d e n t i f i c a t i o n .process .is started and can be used f o r such tasks as rapid pre-measuring and st o r i n g of dajta. I t s s t a r t - ing address should be entered i n t o l o c a t i o n 364. DATIN: Cal l e d each time 'a new sample of the system outputs i s required. S t a r t i n g address should be loaded i n t o l o c a t i o n 365. DTOUT: Ca l l e d j u s t a f t e r a new estimate of the parameters has been c a l c u l a t e d and u s e f u l f o r outputting the parameter matrix. The s t a r t i n g address should be loaded i n t o l o c a t i o n 366. The model estimated by the.program .will be ..(.see Chapter V.) 
x k(n) = 0(nxn) x k _ x ( n ) + A(nxm) u k_ 1(m) = A ' (nx(n+m) ) h_k (n+m) where *k = The user' s data-supply subroutines should store measured v a l - ues of x_k and u_k i n the hr-vector l o c a t i o n s , which begin at the address found i n l o c a t i o n H= 336. x k i s stored f i r s t and then u_k, each element to be written i n 32-bit hexadecimal f l o a t i n g - point format occupying 2 consecutive l o c a t i o n s as provided by the Nova i n s t r u c t i o n FFLO. The maximum number of elements i n - k - l A = A 74 h i s 8, The estimated parameter array A w i l l be l e f t row by row s t a r t i n g at the address contained i n l o c a t i o n A = 330, each element i n f l o a t i n g - p o i n t format and occupying 2 consecutive locationso Output v i a the t e l e p r i n t e r or paper-tape punch can be achieved using the subroutines which are addressed i n d i - r e c t l y through l o c a t i o n s DATWR = 361 and DATPN = 356<> Values of the variance term s^ f o r a maximum-likelihood f i l t e r can be entered i n f l o a t i n g - p o i n t format i n t o l o c a t i o n s V= 346 and V+1 = 347. The t o t a l number of state samples measured or estimation cycles performed i s c o n t r o l l e d by the user ' s subroutines, using l o c a t i o n K = 320 as a counter. The i n i t i a l count, which i s typed i n response to the question "SAMPLES?", i s found i n l o c a t i o n K0 = 321. Dimension parameters typed i n response to "R = " and "S = " and the lo c a t i o n s where they are stored are: R = 312 r , n S = 313 s , m + n RS = 314 r s , n(m+n) SS =-315 s 2 , (m+n)2 Example: The following set of programs are examples of the sub- routines, MEAS, DATIN and DTOUT, required to make and store a r a p i d set of state-samples of a continuous system v i a an A/D converter with multiplexed inputs, and under con t r o l of an external real-time clock. The stored data i s to be proc- essed one row at a time by the i d e n t i f i c a t i o n program, a f t e r which the parameter estimate i s to be pr i n t e d , along with a 75 warning i n the case of a minimum-norm composition i f an i n - s u f f i c i e n t number of l i n e a r l y independent measurements were a v a i l a b l e f o r the parameters to be observable. ; DATA-SUPPLY SUBROUTINES FOR A/D CONVERTER R = 312 S = 313 I = ...3.1.6 J 317 K = . 320 K0 321 N 325 A = 330 H 336 BEGIN = 344 START 345 MXTR 354 DATWR = 361 WRITE = 362 STR5 374 cLOC 364 ME AS DAT IN DTOUT MAX: STORE: MEAS: SMPL: .LOC 3410 111116 123040 115105 101123 2040 3537 LDA 3, STA 3, LDA 3, STA 3, SUB 2, ADD 3, DSZ N JMP LDA R N K0 K 2 2 .-2 MAX SUBZ# 2, 3, SNC JMP @BEGIN LDA 3, STORE STA 3, 21 ; SUBO 2, 2 J LDA 3, R STA 3, N ; STRING 11: "INS MEAS" ; MAX NO OF STOR LOC AVAILABLE ; IND ADR FOR 1ST STOR LOC N R ; PRESET SAMPLE COUNTER ; CLEAR AC2 AC2 = KR AC3 = MAX SKIP IF KR NOT EXCEED MAX RESTART PRESET LOC POINTER AC 2 = 0 RESET MEASUREMENT COUNTER 76 CRRNT: DOAC 2, 44 NIOS 63 SKPDN 63 JMP .-1 NIOC 51 NIOS 51 SKPDN 51 JMP.o-l DIA 0 , 51 STA 0 , @21 NIOP 44 DSZ . 
N JMP CRRNT DSZ K JMP SMPL LDA 3, K0 STA 3, K ISZ K LDA 3, STORE STA 3, 21 JMP ©START ; SET MUX CHANNEL TO 0 ; ENABLE HARDWARE CLOCK WAIT FOR CLOCK CLEAR A/D START A/D WAIT FOR A/D GET RESULT STORE RESULT INC MUX CHANNEL SKIP IF DONE CURRENT .SAMPLE SKIP IF DONE ALL SAMPLES PRESET SAMPLE COUNTER DATIN: STA 3, RETURN DSZ K JMP .+2 JMP 'OUT LDA 2, H LDA 3, R STA 3, N CMPNT: SUB 0 , 0 LDA 1, @21 MOVL# 1, 1, SZC COM 0 , 0 STA 0 , 0 , 2 STA 1, 1, 2 FETR FFLO 0 , 2 FIC2 FEXT DSZ N JMP CMPNT JMP ©RETURN RETURN: 0 STR11: 3410 OUT: LDA 3 , I MOV 3 , 3, SZR JMP PRINT LDA 3, J SUBZL 1, 1 ADCZ# 1, 3, SNC JMP .+3 LDA 2, STR11 JSR @WRITE ; STORE RETURN ADR ; SKIP IF NO MORE DATA ; PRESET LOC POINTER ; AC3 = R PRESET MEAS COUNTER AC0 = 0 GET DATA WORD SKIP IF NON-NEGATIVE AC0 = 177777 ; STORE DATUM IN H ; CONVERT TO FP ; INC LOC POINTER ; SKIP IF HAVE ALL COMPS : RETURN SKIP IF I = 0 AC 3 = J AC1 = 1 SKIP IF J GREATER THAN 1 ; TYPE "INS MEAS" 77 PRINT: LDA 2, STR5 JSR ©WRITE LDA 0 , S LDA 1, R LDA 2, A JSR @ DATWR JMP ©BEGIN DTOUT: JMP. 0 , 3 o END ; TYPE "PARAMS PRINT PARAMETERS RESTART MAIN PROG RETURN INTERFACE FOR REAL-TIME CLOCK DS1 DS2 DS3 DS4 DS5 DS6 O n STRT 3 > CLOCK SELD 78 I d e n t i f i c a t i o n Programs f o r Nova Computer On the following pages are found the assembler l i s t i n g s of the basic i d e n t i f i c a t i o n programs f o r the Nova computer. J MATRIX ARITHMETIC SUBROUTINES ; REQUIRE BASIC FLOATING POINT INTERPRETER 79 000300 KEEP • = 300 000301 SAVE = 301 000302 AMAT 302 000303 AMAT0 = 303 000304 BMAT = 304 000305 BMAT0 = 305 000322 L 322 000323 L0 = 323 00032 4 M = 324 00032 5 N - 325 000326 N0 = 326 00030 7 • LOC 307 00 307 000000 2ER0: 0 00310 000000 0 000350 *LOC 350 00350 0022 30 MXADD 00351 002251 MXSUB 00352 00227 2 MXMPY 00353 002 35 5 MXDI V 00354 002 37 4 MXTR 002230 . LOC 2230 i ThESE-ARE THE PAGE-0 ; ADDRESSES IN WHICH THE f STARTING ADDRESSES OF i THE SUBROUTINES CAN BE ; FOUND. ; SUBR TO ADD TWO MATRICES (C = A + B) ENTER WITH: LOC N = NO. OF ELTS IN EACH MATRIX AC0 = ADR OF A AC1 = ADR OF B AC2 = ADR OF C 02230 0 40302 MXADD: STA 0, AMAT 02231 044304 STA 1* BMAT 02232 054301 STA 3* SAVE 02233 006004 XADD: FETR 02234 022302 FLDA 0* ©AMAT 02235 026 30 4 FLDA 1* 6BMAT 02236 123000 FADD I* 0 02237 041000 FSTA Q* 0* 2 02240 104000 FIC2 02241 100000 FEXT 022 42 010302 I Si£ AMAT 02243 010302 ISE AMAT 02244 010304 IS2 BMAT 02245 01030 4 IS2 BMAT 02246 01432 5 DSii N 02247 000764 JMP XADD 02250 002301 JMP ©SAVE ) PRESET A-MATRIX POINTER 1 PRESET B-MATRIX POINTER ; STORE RETURN ADR i ENTER FP MODE ; GET ELT OF A i GET ELT OF B ; ADD ELTS ; STORE RESULT INC i INC C-MATRIX POINTER I EXIT FP MODE ; INC A-MATRIX POINTER. I INC B-MATRIX POINTER I SKIP IF ALL ELTS ADDED ; ADD NEXT PAIR OF ELTS i RETURN I SUBR TO SUBTRACT ONE MATRIX FROM ANOTHER CC = A - ; ENTER WITH: LOC N = NO. OF ELTS IN EACH MATRIX AC0 AC1 AC2 » ADR OF A = ADR OF B = ADR OF C 80 02251 040302 MXSUB: 022 52 0 4430 4 02253 054301 02254 006004 XSUB: 02255 022302 02256 026304 02257 122400 02260 041000 02261 104000 02262 100000 02263 010302 02264 010302 02265 010304 02266 010304 02267 01432 5 02270 000764 02271 002301 STA &* AMAT STA 1* BMAT STA 3* SAVE FETR FLDA 0> ©AMAT FLDA \» 0BMAT FSUB 1* 0 FSTA 0* 0> 2 FIC2 FEXT IS 2 AMAT IS2 AMAT .IS2 BMAT IS2 BMAT . 
DS£ N JMP XSUB JMP ©SAVE J PRESET A-MATRIX POINTER ; PRESET B-MATRIX POINTER i STOKE RETURN ADR i ENTER FP MODE ; GET ELT OF A t GET ELT OF B ;'SUBTRACT ELT OF B FROM ELT OFA I STORE RESULT IN C t INC C-MATRIX POINTER i EXIT FP MODE ; INC A-MATRIX POINTER ; INC B-MATRIX POINTER ; SKIP IF DONE ; DO ANOTHER SUBTRACTION I RETURN i SUBR TO MULTIPLY TWO MATRICES (C = AB) ; ENTER WI TH s LOC L = NO. OF COLUMNS IN A/ROWS I ; LOC M = NO. OF ROUS IN A i LOC N NO. OF COLUMNS IN B i AC0 ADR OF A t AC1 = ADR OF B 1 AC2 . • -= ADR OF C 02272 0 40 302 MXMPY t STA QJ> AMAT 0227 3 0 4030 3 STA 0t AMAT0 I PRESET A-MATRIX POINTERS 02274 0 4430 4 STA 1*. BMAT 02275 0 4430 5 STA 1 * BMAT0 } PRESET B-MATRIX POINTERS 02276 054301 STA 3* SAVE t STORE RETURN ADR 02277 034322 LDA 3* L 02300 054323 STA 3* L0 i LOC L0 = NOo OF COLUMNS IN 02301 024325 LDA 1 » N 02302 0 44326 STA \> N0 i LOC N0 = NO. OF COLUMNS IN 02303 127000 ADD \* 1 i AC1 = T'.vICE NO. COLS IN B 0230 4 f.••(.'(. •/.,"(.: JMP XK PY + 2 02 305 C-34302 MRE'i : L DA 2> 02.306 054303 STA 3* AMAT0 BEGIN NEXT ROW OF A 02307 03*30 5 LDA 3* BMAT0 02310 0 5430 4 STA 3> BMAT GO TO FIRST COLUMN OF B 0231 1 034326 LDA 3* N0 02312 054325 STA 3* N RESET COLUMN-COUNTER 0231 3 000407 JMP XMPY 0231 4 034300 NRET: LDA 3* KEEP 0231 5 054304 STA 3* BMAT 023i6 010304 IS£ BMAT 02317 010304 IS2 BMAT j BEGIN NEXT COLUMN OF B 02320 034303 LDA 3* AMAT0 02321 054302 STA 3* AMAT i REPEAT SAME ROW OF A 02322 034323 XMPYS LDA 3* L0 02323 054322 STA 3* L i RESET PRODUCT-COUNTER 02324 034304 LDA 3> BMAT 02325 054300 STA 3J> KEEP i STORE COLUMN-POINTER 02 326 006004 FETR t ENTER FP MODE 02327 030307 F L D A 2* iiERO i iiERO CUMULATIVE SUM 02330 022302 LRET: F L D A 0* 0AMAT 1 GET ELT OF A 81 02331 026304 F L D A 1* @8MAT I GET ELT OF B 02332 120100 FMPY 1* 0 I MULTIPLY ELTS 02333 113000 F A D D 0* 2 i ADD PROD TO CUMULATIVE SUM 02334 100000 FEXt t EXIT FP MODE 02335 010302 I S Z AMAT 02336 0 10302 IS2 A M A T i MOVE ALONG ROW OF A 02337 034304 L D A 3* BMAT 02340 137000 ADD \» 3 02341 0 5 430 4 STA 3* BMAT • * MOVE DOWN COLUMN OF B 02342 006004 FETR t ENTER FP MODE 02343 0 14322 FDSZ L i SKIP IF ALL PRODUCTS DONE 023 44 000764 FJMP LRET i FORM NEXT PRODUCT 02345 051000 FSTA 2> 0* 2 % STORE RESULT IN C 02346 104000 FIC2 i MOVE ALONG ROW OF C "02347 100000 FEXT i E-X-I T FP -MODE 02350 01432 5 DSL' N t SKIP TF DONE ALL COLS OF B 02351 0007 43 JMP NRET 02352 014324 DS2 M t SKIP IF DONE ALL ROWS OF A 02353 000732 JMP MRET 02354 002 30 1 JMP G>SAVE i RETURN 82 I SUBR TO DIVIDE A MATRIX BY A.SCALAR <C = A/B) i ENTER WITH: LOC N = NO. OF ELTS IN A J AC0 ADR OF A ; AC1 ADR OF B ; AC2 = ADR OF C 02355 0 40302 MXDIVJ STA 0* AMAT PRESET A-MATRIX POINTER 02356 044304 STA 1* BMAT i PRESET B-MATRIX POINTER 02357 054301 STA 3* SAVE j STORE RETURN ADR 02360 006004 XDI V J FETR 1 ENTER FP MODE 02361 022302 FLDA 0J § AMAT • t GET ELT OF A 02362 026304 FLDA 1* 0 B M A T I GET ELT OF B 02363 120200 "F DT V 1* 0 i DIVIDE' ELT OF A 0236 4 0410 0 0 FSTA 0* 0J> 2 I STORE RESULT' IN C 02365 10 4 0 0 0 FIC2 % INC C-MATRIX POINTER 02366 100000 FEXT i EXIT FP MODE 02367 010302 ISE AMAT 02370 010302 IS2 AMAT i INC A-MATRIX POINTER 02371 014325 DS'Z N i SKIP IF ALL ELTS DIVIDED 02372 000766 JMP XDIV 02373 002301 JMP ©SAVE I RETURN J SUBR TO TRANSPOSE A SQUARE MATRIX (A = A * > i ENTER WITH! AC1 NO.. OF -ROW'S OR COLUMNS OF A AC2 ADR OF A 0237 4 0 4432 4 MXTR % STA \, M 02375 014324 DSif M ; LOC M = I LESS THAN NO. 
ROWS 02376 000402 JMP .*2 0237 7 001400 JMP 0* 3 02400 054301 STA 3* SAVE ; STORE RETURN ADR 02401- 127000 ADD 1» 1 ; AC I = TWICE NO. OF ROWS 02402 121400 INC 1 * 0 02403 101400 INC 0* 0 t AC0 = TWICE NO. ROWS + 2 0240 4 000403 JMP »+3 02 405 030303 T R R E T S LDA 2> AMAT0 02406 113000 ADD 0* 2 i MOVE DOWN DIAGONAL 02407 050303 STA 2* AMAT0 t STORE ELT POINTER 02410 050302 STA 2t AMAT i PRESET FIRST ELT POINTER 0241 1 034324 LDA 3* M 02412 054325 STA 3* N i PRESET COUNTER 02413 034302 XTR J LDA 3* AMAT 0241 4 137 000 ADD 1*3 02415 054302 STA 3* AMAT i SET FIRST ELT-POlNTER 02416 006004 FETR i ENTER FP MODE 02417 104000 FIC2 I SET SECOND ELT-POINTER 02420 026302 FLDA 1* 0AMAT 02421 031000 FLDA 2» 0* 2 I GET ELTS 02422 045000 FSTA 1* 0* 2 02423 052302 FSTA 2/ 6 A M A T i SWAP ELTS 02 42 4 100000 FEXT EXIT FP MODE 02 425 01432 5 DS£ N i SKIP IF DONE ROW 02426 000765 JMP XTR 02427 01432 4 DS2 M i SKIP IF DONE MATRIX 02430 000755 JMP TRRET JMP @ SAVE ' i RETURN o END 7 777 AMAT 000 302 AMAT0 000 303 BMAT 000304 BMAT0 000305 KEEP 000300 L 000322 L0 00032 3 LRET 002330 M 000324 MRET 00230 5 MXADD 002230 MXDI V 002355 MXMPY 002272 MXSUB 0022.5.1 MXTR 002374 N 00032 5 N0 000326 NRET 002314 SAVE 000301 TRRET 002 40 5 X ADD 002233 XDI V 002 360 XMPY 002322 XSUB 0022 5 4 XTR 002413 HERO 000307 ; INPUT-OUTPUT SUBROUTINES FOR TTY AND PTP 85 $ REQUIRE BASIC FLOATING-POINT INTERPRETER 000040 00040 002560 00041 002637 -LOC RECV TYPE 40 i THESE ARE THE SUBROUTINES i 'TO BE USED BY FP INSTRUCTIONS ; FDFC AND FFDC* RESPECTIVELY 00355 00356 00357 00360 00361 00362 000300 000301 00032 4 000325 000326 000355 002 440 002 47 3 002533 002 57 2 002600 002646 KEEP SAVE M N N0 •- 300 = 301 = 324 = 32 5 = 326 •LOC 355 DATRD DATPN DATRC DIGIT DAT WR WRITE i THESE-ARE THE -PAG-E--0 J ADDRESSES IN WHICH THE t STARTING ADDRESSES OF I THE SUBROUTINES CAN BE i FOUND. 002440 •LOC 2440 I SUBR TO STORE DATA FROM PAPER TAPE ; ENTER WITH: AC1 I AC2 NO- OF FLOATING-POINT DATA STARTING ADR OF STORAGE LOC 02440 127000 DATRD: ADD 1* 1 02 441 0 4432 5 STA 1* N 02442 050020 STA 2* 20 02 443 01 402 0 DS2 20 02444 054301 STA 3* SAVE 02445 060112 NIOS PTR 02446 063612 SKPDN PTR 02447 000777 JMP .-1 02450 060512 DIAS 0* PTR 02451 063612 SKPDN PTR 02452 000777 JMP .-1 02453 101005 MOV 0* 0* SNR 02454 000774 JMP .-4 02455 004405 JSR READ 02456 042020 STA 0* ©20 02457 014325 DSZ N 02460 000775 ' JMP .-3 02461 002301 5 JMP ©SAVE 02462 064512 READ: DIAS 1* PTR 02463 063612 SKPDN PTR 02464 000777 JMP .-1 02465 125300 • MOVS 1* 1 • 02466 060512 DIAS 0* PTR 02467 063612 SKPDN PTR 02470 000777 JMP .-1 02471 123000 ADD 1* 0 02472 001400 ' JMP 0* 3 ; DOUBLE AC1 t N = NO o OF DATA WORDS I SET LOC POINTER ; STORE RETURN ADR i READ A LINE FROM TAPE ; GET RESULT* READ AGAIN i SKIP IF RESULT N0N-2ER0 t GET DATA WORD J STORE DATA WORD ; SKIP IF ALL WORDS READ ; RETURN I GET RESULT* READ AGAIN i LEFT-JUSTIFY IN AC1 i GET RESULT* READ AGAIN I COMBINE HALVES i RETURN ; SUBR TO PUNCH DATA ON PAPER TAPE 86 1 ENTER WITH: AC1 i AC2 = NO. OF FLOATING-POINT DATA = STARTING ADR OF STORAGE LOC 0247 3 127000 DATPN: ADD 1* 1 ; DOUBLE AC1 02474 044325 STA 1* N ; N = NO. 
OF DATA WORDS 02475 050020 STA 2» 20 02476 014020 DS2 20 t SET LOC POINTER 02 47 7 '0 54301 STA 3* SAVE i STORE RETURN ADR 02500 102 400 SUB 0* 0 i 2ER0 AC0 02501 152 420 SUB2 2> 2 i 2ER0 AC2* SET CARRY 02502 004421 JSR PUNCH t PUNCH A 2ER0 02 503 151103 MOVL 2* 2* SNC i COUNT OF 17 -02-504 •00 0-7-7-6 JMP - 0-2 02505 100000 COM 0* 0 t SET AC0 02506 061113 DOAS 0* PTP i PUNCH A 377 02507 063613 SKPDN PTP 02510 00077 7 JMP »-l 0251 1 022020 LDA 0* @20 i GET DATA WORD 02512 004411 JSR PUNCH i PUNCH DATA WORD 0251 3 01432 5 DS2' N i SKIP.IF ALL WORDS PUNCHED 0251 4 00077 5 JMP .-3 02515 1 02400 SUB 0* 0 i 2ERQ AC0 0251 6 1 52 420 SUBE 2* 2 i 2ER0 AC2* SET CARRY 02517 00440 4 JSR PUNCH i PUNCH A 2ER0 02520 151103 MOVL 2* 2* SNC • * COUNT OF 17 02521 00077 6 JMP .-2 02522 002301 JMP QSAVE i RETURN 02523 105300 PUNCH % MOVS 0* 1 02524 065113 DOAS 1* PTP 02525 063613 SKPDN PTP 02526 000777 JMP .-1 02527 061113 DOAS 0* PTP 02530 063613 SKPDN PTP 02531 000777 JMP -.-1 02532 001400 JMP 0* 3 • I RIGHT-JUSTIFY FIRST HALF i PUNCH FIRST HALF i PUNCH SECOND HALF t RETURN I SUBR TO STORE DATA I ENTER WITH: AC0 J AC! i AC2 02533 04032 4 DATRC % STA 0* M 02534 044326 STA 1 * N0 02535 0 5 4301 STA 3* SAVE 02536 034326 NXTRW'8 LDA 3* N0 02537 054325 STA 3* N 02540 020503 LDA 0* LF 02541 004476 JSR TYPE 02542 020502 NXTEL? LDA 0* CR 02 543 004474 JSR TYPE 02544 020477 LDA 0* LF 02545 004472 JSR TYPE 02 546 006004 FETR 02547 124000 FDFC : 1 FROM KEYBOARD -• NO. OF ROWS OF FP DATA = NO. OF COLUMNS = STARTING ADR OF STORAGE LOC J M = NO. OF ROWS i N0 = NO. OF COLUMNS ; STORE RETURN ADR J N = NO. OF COLUMNS ; LINE-FEED i CARRIAGE-RETURN i LINE-FEED . J ENTER FLOATING-POINT MODE i ACCEPT DEC NO.* CONVERT 02550 02551 02552 02553 02554 02555 02556 02557 045000 10 4000 100000 014325 0007 6 6 014324 000760 002301 FSTA 1* 0* FIC2 FEXT DS2 N JMP NXTEL DSZ M JMP NXTRW JMP ©SAVE 87 I STORE HEXADECIMAL NO. I INC STORAGE-LOC POINTER ; EXIT FP MODE J! SKIP IF HAVE ALL ELTS OF ROW I SKIP IF HAVE ALL ROWS i RETURN 02560 054300 RECVJ ' STA 3* KEEP 02561 060110 NIOS TTI 02562 063610 SKPDN TTI 02563 000777 JMP .-1 02564 060410 DIA 0* TTI 02565 024404 LDA b MASK 02566 123400. AND \* 0 02567 004450 JSR TYPE 02570 002300 JMP ©KEEP 02571 000177 MASK I 177 t STORE RETURN ADR I ENABLE KEYBOARD I WAIT FOR CHARACTER i GET CHARACTER ; AC1 = 177 i MASK TO 7 BITS i ECHO CHARACTER t RETURN 02572 054301 02573 004765 02 57 4 02 4403 -02-57-5 123 400 02576 002 301 02577 000017 I SUBR TO ACCEPT A DIGIT FROM KEYBOARD i BINARY VALUE OF DIGIT IS LEFT IN AC0 DIGITS STA 3* SAVE , JSR RECV LDA 1* DTMSK -AND \* 0 JMP ©SAVE DTMSK s 17 i STORE RETURN ADR I RETURN DIGIT IN AC0 I AC1 = 17 •I -MASK TO 4 .BITS ; RETURN ; SUBR TO TYPE DATA ON TELEPRINTER 88 i ENTER WITH: AC0 = NO. OF ROWS OF FP DATA i AC1 NO. OF COLUMNS i AC2 = FIRST ADR WHERE DATA STOf 02600 0 4032 4 DATWR! STA 0* M t LOC M = NO. OF ROWS 02601 0 4432 6 STA 1J> N0 i N0 = NO. OF COLUMNS 02602 054301 STA 3* SAVE i STORE RETURN ADR 02603 125112 ROW: MOVLtf Is l» S2C I. SKIP IF 2 LINES TYPED 02604 000403 JMP o-f-3 i NO LINE-FEED 02605 020436 LDA 0* LF 02606 004431 JSR TYPE t LINE-FEED 02607 02 4427 LDA 1* COLS I AC1 = -4 026T0 0-3432*6 LDA 3* lN0 0261 1 054325 STA 3* N I N = NO. OF COLUMNS 02612 020 432 LINE; LDA 0* CR 02613 004 42 4 JSR TYPE t CARRIAGE-RETURN 0261 4 020 42 7 LDA 0* LF 0261 5 00 4422 JSR TYPE i LINE-FEED 02616 020427 ELTs LDA 0* SP 02617 00 4420 JSR TYPE i SPACE 02620 00 4417 JSR TYPE SPACE 02621 00600 4 FETR i ENTER FP MODE 02622 021000 FLDA . &» ids 2 i LOAD HEX FP NO. 
IN FAC0 02623 140000 FFDC 0 i TYPE NO. IN DEC FP 02624 10 4000 FIC2 • > INC STORAGE-LOC POINTER .02625 •,1.0 0.0.0.0 .FEXT * EXIT FP MODE 02626 014325 DS£ N SKIP IF DONE ALL ELTS OF 02627 000 40 4 JMP « + 4 02630 01 432 4 DS?£ M i SKIP IF DONE ALL ROWS 026 31 0007 52 JMP ROW 02632 002301 JMP §SAVE i RETURN 02633 12540 4 INC \* 1* S2R • i SKIP IF FINISHED LINE 02634 000762 JMP ELT 02635 000755 JMP LINE 02636 177774 COLS 5 - 4 02637 061 111 TYPE; DOAS 0* TTO i TYPE CHARACTER 02640 063611 SKPDN TTO 02641 000777 JMP .-1 026 42 001400 JMP 0* 3 i RETURN 02643 000012 LF: 12 02644 000015 CR: 15 02645 000040 SP: 40 02646 0 54301 02647 020775 02650 004767 02651 020772 02652 004765 02653 004764 02654 024762 02655 021000 i SUBR TO TYPE A STRING OF 8 CHARACTERS i ENTER WITH AC2 = STARTING ADR OF STRING WRITE: CHAR % STA 3* SAVE LDA Q> CR JSR TYPE LDA 0* LF JSR TYPE JSR TYPE LDA \» COLS LDA 0* 0* 2 i STORE RETURN ADR i CARRIAGE RETURN ; DOUBLE-SPACE i AC1 = -4 i GET WORD 026 56 101300 MOVS 0* 0 02657 1012O0 MOVK 0* 0 02660 004757 JSR TYPE 02661 021000 LDA 0* 0* 2 02662 0047 55 JSR TYPE 02663 151400 INC 2* 2 02664 125404 INC 1* 1* 02665 000770 JMP CHAR 02666 002301 JMP ©SAVE 007777 » END 7777 i SWAP HALVES I SHIFT RIGHT I PRINT 1ST CHARACTER t GET WORD AGAIN I PRINT 2ND CHARACTER I INC LOC POINTER ; SKIP IF DONE i RETURN CHAR 002655 COLS 002636 CR 002644 DATPN 002 47 3 DATRC 002533 DATRD 002 440 DATWR 002600 DIGIT 002572 DTMSK 002577 ELT 002616 KEEP 000300 LF 002643 LINE 002612 M 00032 4 MASK 002571 N 000 32 5 N0 0'00'32o NXTEL 002542 NX TRW 002536 PUNCH 002 52 3 READ 002 462 RECV 002560 ROW 002603 SAVE 000301 SP 002645 TYPE 002637 WRITE 002646 330 1 .MAIM, ; RECURSIVE LEAST-SQUARES IDENTIFICATION ; REQUIRES! ^ASIC EP INTERPRETER ; MATRIX ARITHMETIC SUBROUTINES ; I/O SUBROUTINES FOR TTY, TAPE ; DAT A,-SUPPLY PROGRAMMES ; THIS PROGRAMME USES THE MAXIMUM ' ; HO. OF SYMBOLS ALLOWED 3Y 'THE ; RELOCATA3LE ASSEMBLER. DO NOT ADD ANY MORE. 3333 13 0003 1 4 0303 15 30 03 16 0iV33 17 0 rj r: 3 21 300322 030324 333325 .'• o. "•. 7 , r -< u r.-1.- O ti/ 33035 1 033352 300333 303354 300355 i ' , o .> •> r. n » v n O i • i r i O J I 333364 333365 ^, r\ x r c •q S RS I J K 3 L M M MX ADR MXSU3 y. y V] p V MXDIV MXTR DATRD. DATPM DATRC DI GIT DATWR I M IT ME AS DAT I!-! DTOUT 3 12 3 13 3 1/1 3 15 3 1 6 3 17 32 1 322 324 325 3 52 35 I 3 52 353 3 54 355 3 5 6 357 3 6 3 351 3 62 3G3 3 64 3 65 366 0 0 0 0 2 0 0 2 3 4 4 .LOC 2 JMP 3 3 4 4 033037 M n • *y f ' - ^ /. r* '.*. K> »; v.' ( L> V..1 £. 3303 1 1 3031 1 2C0361 ONE ? .LOC 7 403 .LOC 3 1 1 1 ;. WORK AREA FOR F? KIT 0SS'333 .LOC 33 3 0333 3 ;7 <T; *•> C O -V «fi 7 <J At 533 0333 1 <-\ r.\ 7 ^ '-. t.- <L- o / i ' o . 700 30332 331 10:" O • 1 133 33333 32 133C: TEMP 1: 1333 3333 4 3 ?! 5 3 3 TEMP2: 1533 30335 .3-0 172 0 TEMP3: 17 r 33336 K: 2133 33337 3 02120 C: 2 1 23 033 4 0 3 32\&? P • 2 14 0. 0334 l 3321 S3 CSRU- 216." i. i.' r..' 0^:3. .LOG 3 42 ; MATRIX ADDRESSES 0032 00342 00343 003 4 4 0034? 00346 00347 33370 0037 1 00372 0037 3 0037 4 00375 00376 00377 MAIM 04 0 4 20 000000 00034 4 002700 00305 4 0404^0 000000 000370 00 2170 002174 0 02200 002204 f.;'' O 9 J rf 002214 0 0 2224 CTHR: STR 1 SIR 2 SIR 3 SIR 4 SIR 5 SIR 6 STR 7 0 40420 ry. .LOC 34 4 BEGIN' START 04 04 20 0 .LOC 370 2170 2174 2 20 0 2204 2210 22 14 2220 999 h t ADDRESSES OF STRINGS 002170 .LOC .2 170 • 02170 122040 1220 4 0 • 0217 1 075 042 • 075040 02172 f> /, r? f» /; P 02173 042040 0 4 22 4;; 02174 1 2 3 0 4 2 123040 • 02173 275::A0 075043 02176 0 4 2 0 4 2 if, /, n. 
rt /j 02177 2 4 2 2 4 2 043042 32223 123 12 1 12310 1 • 0220 1 115122 115120 0220 2 1 1-4 125 .1 1 4 1 0.5 02203 12307 7 123277 r> o o / 123 1 17 103 1 17 • 0??05 !20!3 1 12013 1 02205 27 7 24,;- 077042 02207 0 4 3 2 4 0 0 4 00/10 0221 0 120 121 120 10 1 • 0221 1 122121 12210 1 0 ? ? I o 115 123 1 15123 02213 0 422 4 0 04 0O40 02214 120255 120255 • 02213 115 12 1 11510 1 02216 124 122 124 122 02217 111132 111130 ['9 99 (/, 122135 122105 * 1 02221 10 1104 101 104 02222 131077 131377 02223 0 4 ^ 0 4 0 043040 0 2224 103124 103 124 » 02225 110 122 110 122 02226 0 5 4 2 4 0543 4 2 02227 126 27.2 126072 002702 .LOC 2723 « 1 02700 0.06005 BEGIN: FIMI 0270 1 032370 LDA 2, STR1 02702 2263 62 JSR 3VRITE • J 02703 006360 JSR 'DDI GIT • 02704 0 4 0312 STA 0 , R MESSAGE STRINGS IM ASC : STRING I : "R = ; STRING 2: "S ; STRING 3: "SAMPLES? t STRING 4: "CO.^Y? ; STRING 5: "PARA^S " STRING 6: "R-MATRI: ; STRING .7: "READY? { STRING 3: "CTHR, V: ; PROGRAMME "-EGIMS 0 0 0 3 , ( .270 3 0 2 7 0 6 0 2 7 0 7 0 2 7 10 0 2 7 1 1 0 2 7 12 0 2 7 13 0 2 7 14 0 2 7 15 G27 IS 0 2 7 17 0 2 7 2 3 0 2 7 2 1 2 2 7 2 2 0 2 7 2 3 0 272/! 0 2 7 2 5 0 2 7 2 6 0 2 7 2 7 2 2 7 3 0 0 2 7 3 1 0 2 7 3 2 0 2 7 3 3 0 2 7 3 4 0 2 7 3 5 0 2 7 3 5 0 2 7 3 7 K 2 7 4 K 0 2 7 4 1 0 27/! 2 0 2 7 4 3 0274/1 0 2 7 4 5 02745 0 2 7 / 7 0 27 5 0 0 2 7 5 1 0 2 7 5 2 K 2 7 5 3 0 2 7 5 ^ 0 2 7 5 5 0 2 7 5 6 0.275 7 0 2 7 6 0 0 2 7 6 I 0 2 7 6 2 0 2 7 63 0276/ - 0 2 7 6 5 0 2 7 6 6 0 2 7 6 7 0 2 7 7 0 0 2 7 7 1 0 2 7 7 2 B 2 7 7 3 0 2 7 7/J 0 2 7 7 5 0 2 7 7 6 0 2 7 7 7 MAIN 0 3 0 3 7 1 0 0 6 3 6 2 0 0 6 3 6 0 0 4 03 13 0 / 0 3 2 5 1 2 6 / 0 3 1 0 7 0 0 0 0 1 4 3 2 5 n r.\ in c C- LJ I / o 0 A A 3 1 5 0 3 43 12 0 5 43 25 1 26 4 0 0 1 0 7 0 0 0 0 1 4 3 2 5 0 0 0 7 7 6 0 4 4 3 1 A 0 3 0 3 7 2 '•• c y c o 1 0 2 5 2 0 1 2 6 5 2 0 0 3 0 3 3 3 0 0 6 0 0 4 0 7 5 0 0 0 100 00 0 0 3 5 0 0 1 93 0 5 4 1 0 3 0 3 7 3 0 0 6 3 6 2 0 0 6 3 6 0 0 4 0 3 16 10 100 5 0 004 7 6 0 4 0 3 17 0 1 4 3 1 7 r> r\ < \ i. n o 1 . «'.• i. - & 0 0 0 4 7 2 0 14.3 17 0 0 0 4 0 7 024.3 14 0 3 0 3 3 0 0 0 6 3 5 6 0 2 4 3 15 0 3 0 3 3 1 0 0 6 3 5 5 0 1 4 3 17 0004. 15 0 3 0 3 7 4 0 0 6 3 5 2 0 2 0 3 13 02/-3 12 0 3 0 3 3 0 0 0 6 3 6 1 0 3 0 3 7 5 0 0 6 3 6 2 0 2 0 3 13 0 2 4 3 13 COPY: OPT o p j o . 0 P T 3 ; J S P J S P STA ST A SUP ADO DSZ JMP STA LDA STA SUB ADD DSZ «j 11 < S T ^ LDA J S P SUP 7. SUPZ LDA JSR LDA FETP F F I X FEXT L D A STA LDA J S P J S P STA MOV JMP STA DSZ JMP JMP 0S7 JMP LDA LDA JSR 2 , S T P 2 f:>!>.! ? I T ~ 001 GIT 3 , M 2 1, 3 , 3 , 1, SS P M 1 1 _ 2 1 , PS 2 , S T P 3 - SVR ITE L 1 , 1 TEMP 1 o '-» f!>^ T 3 C o TEMP 1 3 , K 0 2 , S I P 4 PWRITE f-D I 01 T I ..., iJ, PEADY a I » , J J OPT 2 PEADY J 0PT3 PS A ©DATPN SOP 1 9 2, LDA 1, SS LDA J S P DSZ I W D t _ M ! i LDA J S P LDA LDA LDA J S P LDA J S P LDA LDA 2 , P 50ATPM J OPT A 2 , STP5 (SVP ITE 0 , S 1, R 2 , A ; CDATVR 2 , ' STP6 ©WRITE n. c c. , t> 1 , s ; PR INT "5 •= ? ; GET S j SS = SOU APE OF S ; LOC PS - PRODUCT OF P AND 3 ; PR INT " S A M P L E S ? " ; GET KO • KO = NO. OF RAP ID SAMPLES ; PR IMT " C O P Y ? " : GET I ; S K I P I F I - 0 ; MO ST ART I MO VALUES • SAME START ING VALUES ; P A P E R - T A P E COPY ; TELETYPE COPY 0004 .MAIM 03033 03033 1 0300 1 00636 1 03002 01/i317 OPT/;: 03003 000^07 03004 0243 14 03005 030330 03005 006355 .03007 024315 03010 030331 030 11 00S355 03012 0 M317 0?T5? 
030 13 0 0 0 4 15 03014 030374 03015 005362 030 1-6 -0203 13 03017 024312 .03020 030330 030 21 006357 03022 03037 5 03023 006362 03024 0203 13 03025 024313 03026 030331 03027 006357 03030 014317 0?T6: 0303 1 000402 03032 006363 . 03033 0 14317 0 P T 7 t 03034 0 0 0 4 | ! . 03035 030377 0 3 0' 3 6 0 6 3 6 2 03037 006004 03040 120000 03041 124000 03042 040342 03043 044346 03044 100000 0304.5 030376 HEADY! 03046 006362 03047 006360 03050 10 1005 0305 1 00067 1 03052 002364 03053 003260 - 94 LDA 2, P JSP •ODMWP. . DSZ. J e STARTING VALUES FROM JMP OPT5 LDA 1, PS LDA 2, A JSP © D A T R D LDA 1, SS LDA 2, P . * JSP ©DATRD- DSZ J • STARTING VALUES FROM JMP 0PT6 LDA 2, STR5 JSR ©WRITE • PR I NT "P ARAMS" IDA 0.., .S LDA 1, R LDA 2, A JSR ©DATRC LDA 2, STR6 JSR ©WRITE • PRINT "P-MATRIX" LDA 0, S LDA 1, S LDA. 2, ? JSR ©DATRC DSZ J • STARTING VALUES SU??L JMP .+2 . • S 3Y USER SURROUTI v •? JSR ©10 IT DSZ J • CHANGE CTHR AMD V JMP READY LDA 2, STR-- JSR ©WRITE PRINT- "CTHR , V: " EETR FDFC 0 • GET CTHR FDFC 1 • ; GET V FSTA 0, CTHR • STORE CTHR FSTA 1, V • STORE V FEXT LDA 2, STR7 JSR ©WRITE • PRINT "READY?" JSR © D I G I T MOV 3, 0, SMR JMP COPY JMP ©MEAS • * USER PROGRAMME SEQ2 0005 030 5 4 0 3055 03056 03057 03060 030 6 1 03062 03063 030 6 4 03065 03066 03067 03070 0307 1 03072 03073 0 307 4 0307 5 0307 6 03077 0 3 1 0 0 . 0310 1 03 104 03 105 03 106 03107 03 1 10 £3 1 1 1 0 3112 03 1 13 03114. 031 15 031 15 031 17 03 120 03 12 1 03 122 03123 03124 03 125 03 126 MAIM 95 006365 START • JSR ©DAT IN . « 0 3 43 16 LDA 3, I 175004 MOV 3, 3, SZR « 0 02774 JMP © S T A R T - 1 034313 LDA 3, S 0543 17 STA 3, J • 10 2400 SUP O 9 0 30333 LDA 2, TEMPI 034330 L D A 3, A 0 4 14^0 STA 0 , 0, 3 1754.00 IMC 3, 3 1724 14 SU3£ 3, 2, SZR O0O773 JMP .-3 020332 LOO?: LDA 0 0 « 024336 LDA 1 I H 050337 LDA ?., C 0343 13 LDA 3, S 054322 STA 3, L 054324 STA 3, M 0343 1 1 L D \ 3, ONE 05 4 3 25 STA 3, N 006332 JSR ©MXMPY 020335 L D A 0, H • 0 24337 LDA 1 , C 030337 L D A 2, C 0343 13 LDA 3, 5 034325 STA 3, N 00633 1 JSR ©'<*SU3 ;203 37 LDA 0, C • 024337 LDA 1,0 0 3 034 1 LDA 2, CSO.U 0 3 43 13 LD A 3, S 05 4 522 STA 3, L 034311 l?\ 3, ONE 0 5 4 3 24 . STA 3, M 0 54325 STA 3, N 006352 JSR ,j;yjv,v. p y O 0 C n r? /. r r T O 020342 FLDA 0, CTHR 026341 FLDA, 1 , ©C32U ^ 0 4 0 3 - D ; 1. OH ; 2. C .r H - QH ; 5. C \C C TOO SMALL? rSUB •JMP T X T .+3 1, FSLE 03127 000725 JMP s TART 03 150 100000 FEXT 0315 1 0:.' T, % 7 c r n 1 . LDA n. , c. 05152 024541 LDA 1 , C32U 03 133 050540 LDA c » 3 03 13 4 0343 15 LDA' 5 . s 03133 0 5 45 25 STA 3 » l " 03 156 006555 JSR © •IX DIV 03157 020555 LDA u 05140 0243 40 LDA 1 T> 05141 0 30555 LDA T 7 M D , J U •. 1 1 05142 034311 LDA 5 , ONE- 05 145 054322 STA 3 » L 05 144 0345 15 LDA 5 » 5 05145 054524 STA 3 • t i 05 146 054325 3 : 5. 3 r C/C *C ; 6. H3V 3036 . 
:•• A \ M 0 3 M 7 0 0 6 3 5 2 05 153 02035 1 03 15 1 024.335 03 !52 050534 03 153 3545 13 03 154 0 5 4 3 2 2 03 155 054324 03 155 054325 03 1 57 0 063 5 2 03 160 02035 1 0316 1 0:245 3 4 03 162 03033 1 03 163 0 3 4 3 15 03 164 0 54325 03 165 00635 1 03 166 3243 13 03 167 030354 03 170 006354 03 17 1 020354 03 172 3 24335 03 175 r> 7 7 7 ^ r.- O .,• O O J 03 174 0543 13 03175 3 5 4 3 2 2 03176 334324 03 177 354335 03220 3 3 6 3 5 2 0320 1 3 2 3 3 4 0 0 3 2 0 2 324342 03205 0.3.233-5 03204 3343 1 1 05205 2 5 4 5 2 2 03206 0345 15 03207 0 5 43 24 , 0 3210 054525 0321 1 217•• 6 3 5 9 0 3 2 1 2 0.2053 1 03213 024354 03214 03033 1 03215 0343 1 5 03216 05 4325 0 3217 006351 03220 020351 0 3221 024355 03222 35055 1 03225 0545 15 03224 05 4525 05223 <-• .'• n t, i:. O 0 -> *' 03226 223331 03227 024355 03230 333551 0325 1 0543 15 0 3 2 3 2 054325 05255 0 06353 03234 0 2 0 3 4 0 05255 024357 03236 333333 03237 0545 1 1 03240 3 5 4 7, 2 2 0324 1 0343 13 JSR 3XXMPY L D A D 1 L D A 1, T r v P 1 L D A 0 t TEXP2 L D A 3 , S STA 3, L STA 3 , M STA 3 , M :\ JSR : X P Y L D A > P L D A 1, TEXP2 L D A 2, p L D A 3 , S S STA 3 , V JSR 33 y. •.SL'R L D A •s L D A 0 - * TF.^1?2 J S R . T - : > . ~ > \ L D A f " » T E Y P 2 L D A 1, T E X  0 1 L D A 0 :- t T p v) 0 7 . L D A .3, S STA 3 , L S T A 3 , M STA 3 , M ' JSR m y-: M P Y L D A ' € » B L D A -> L D A 0 TEXP 1. L D A 3 , 0  M E STA 3 , L L D A 3 , S STA 3 , X STA 3 , ,M JSR ry*\ ;X pv L D A 0 , ' ? L D A 1, TEXP2 L D A 2 , P LDA . 3 , SS STA 3 , N JSR 3XX.SUB L D A " » P L D A TEXP 3 LDA. 2 , P L D A 3 , SS • STA 3 , ri J SR P X X A D D L D A u, ? LDA 1, T E X P l LDA 2 , 0 L D A 3 , SS STA 3, M JSR S'XX ADD LDA B LDA 1, C LDA 0 T E X P l L D A 3 , ONE STA 7 • » L LDA 3 , S 96 j .7, PH3' ; 8 . ? P H B ' ; 3. (3H ' P).(H3') ; 10. B B ' • 1 1 . (P - PHB ' ) - RH ' P • 12. ( P - P H B ' - 3 H ' P ) + B H ' P H B ' ! 3 . ?-(P-PHB ' - R H ' * P +BH *PHB ' )+B3 ' • 1/ p r » 007 . MAIN 054524 STA 5 , M 03243 054325 STA 3 , M 03 244 006552 JSR ©MXMPY £ 3 2 4 5 0 2 0 5 3 2 LDA 0 0 03246 024533 LDA 1 , TEMPT 0 324 7 0 3 0 3 5 2 LDA 2 , Q 0 3250 0 3 4 3 1 5 LDA 3 , SS 0325 1 054325 STA 3 , M 03252 0 0 6 3 5 0 JSR ©MX ADD 03253 004 465 JSR EST 03254 0 14317 DSZ J 0325 5 0 005 14 JMP LOO? 03256 0343 1 1 LDA 3 , 0 E 03257 0 5 4316 STA 3 , I 03260 02033 1 SEQ2: LDA 0 , P 0326 I 0 2 4 35-6 • LDA 1, H 03262 0 30555 LDA 2 TEM ? 1 0326 3 0 5 45 15 LDA i\ s 03264 0 5 4 3 2 2 STA 3 , L 03265 0 5 4 3 2 4 STA 3 , M 03266 0543 11 LDA 3 , ONE , 03267 054325 STA 3 , N 03270 0 0 6 3 5 2 JSR ©MX^ 0327 1 r- O n 7 , 7 c LDA 0 , H 03 272 024333 1. D A 1, TEMPI 0 3273 030334 LDA 2 , TEMP2 03274 0343 13 LDA 3 , S 0 3275 0 5 4322 STA 3 , L 05276 0345 1 i .LDA 3 , 0 N 7. 03277 054324 STA 3 , M 03500 05 4325 S T A 3 , M 0330 ! 0 06352 JSR 05502 006004 FET c : 03505 0203 46 FLDA 0 , V 05 50 44 025354 FLDA 1 , ©TEMP2 05505 107000 . F ADD 0 , 1 05306 04 555 4 FSTA 1, ©TEMP2 05507 i r> r* n. r? 1 * • ii- £ • L. XJ FEXT 035 10 0 20555 LDA 0 , TEMP 1 053 1 1 024354 LDA 1, TEMP2 0 5 5 1 2 0 5 0 5 4 0 LDA 2 , B 035 15 03 4 3 13 LDA 3 , S 055 14 £ 5 4 5 2 5 STA 3 , M 055 15 006355 JSR ©MXDIV 05516 0 20533 LDA 8 , TEMPI 055 17 0245 4 0 LDA 1, B 05520 030355 LDA 2 , TEMP5 05521 0545 1 I LDA 5 , ONE 05522 05 4522 STA 5 , L 05323 05 4 5 15 LDA 5 , S 05324 0 5 4 5 24 STA 5 , M 03525 054525 STA * M ^ > " 05326 006552 JSR p M X M p Y 35527 0 2055 1 LDA 0 , P 0 3350 024555 LDA 1, TEMP5 0533 1 05053 1 LDA 2 , P 05332 0345 15 LD A 3 , SS 03533 05 4325 STA 3 , '•! 0355 4 0 0 655 1 JSR ©MXSUB 97 15. 0 r Q +, 16. 
I I1 17. H *PH 13. V + H ' PH 1 9 . 3 = PH/CV+M ' PH ) 2 0 . ( P H ) B ' 2 1. P = P - P H 3 ' .MAI N 98 03335 00 4 4 03 JSR EST 03336 0 0 0 7 2 2 JMP SEQ2 03337 r> r> (* r\ n (f v.- Oiv o o z-' 0 03340 054777 E S T ; STA 5 , .-1 0334 1 LDA 0 , H 0334 2 .0 24 330 LDA 1 » A 0334 3 030334 LDA 2 TEMP2 0334 4 03 43 13 LDA 5» S 03345 05 4 322 STA 5 S L 33346 03 43 1 1 . LDA 5 , ONE 033 4 7 05 4324 STA 5 , M 03350 0343 12 LDA 3 , 0335 1 05 4 3 25 STA 3 t N 03352 0 0 6 3 5 2 J S P XMPY 03353 006365 ' JSR ©D ATI M 03354 020336 LDA - r?i H 03355 02433 4 LDA 1 ' TEMP 2 03356 030335 LDA o TEMP 3 03357 0343 12 LDA 3,' R 03360 0 5 4325 STA M .1 0336 1 006351 JSR ©MXSUP 03362 02034.;; LDA ?» 03363 024 33 5 LDA 1 \ TEMP 3 0336/J 030333 LDA 2 , TEMPI 0 3365 0343 11 . . LDA 3 , 0ME 03366 0 5 43 22 . STA 3 , L 03367 0 3 4 3 13 LDA 3, S 03370 054324 STA 3 , M 0337 1 0-345 12 LDA 3 . . p 0 3372 054325 STA 3 , N 03373 0 0 63 52 JSR ©M XMPY 0337 4 020330 LDA 0 , 033 7 5 0 24355 LDA 1 , TEMP 1 03375 03 0350 LDA 2 , A 03377 05 45 14 LDA 3 , RS 03 4 00 05 4525 STA 3 , 111 0340 1 0 0 6 5 5 0 JSR ©MX ADD 034 02 r "c 7 c c JSR ©DT0UT 034 03 002754 JMP P~ ST - 1 ; 2 2 . H VA ; 2 3 . Z ' - H ' A ; 2 4 . 3 (Z ' - H ' A) ; 2 5 . A= A+3 (Z ' -1! ' A) ; USER PR0GR AM MS VJ YJ C / v z.; .END 27 0 0 000D ' .MA IN A 000330 B 0 0 0 3 ^ 0 BEG I M 0 02700 C 000337 COPY 0 0 274 2 CSO'J 000341 CTHR 0 0 0 3 4 2 DAT IN 0 00305 DATPN r* n r\ x £\ c . t, <.:- <. o j o DATRC 000357 DATRD 000355 DATWR <• i. <.• •_) 1 DIGIT / r> r' i r r* i . -v >•• o O t ; DTOUT 000366 EST • 003342 H 000336 I 0003 ! 6 IN IT 0 0 0 3 6 3 J 0003 17 K0 L 0 0 0 3 2 2 LOOP 00307 1 M 000324 MEAS 0C0 354 MX ADD 0 0 0 3 5 0 MX DIV 030353 MXMPv 0 0 0 3 5 2 MXSUB 000351 XX TH 00 0-3-51 N 000325 ONE . 0003 1 1 OPT1 002751 0RT2 002754 OPT 3 002764 0PT4 0 0 3 0 0 2 OPT 5 0 030 12 OPT6 0 0 3 0 3 0 OPT 7 0 03033 o I 00033 1 0 0 0 0 3 3 2 R 00 0312 READY 003045 RS 0 003 U S 0 00313 SEQ1 003 13 1 SEQ2 0 0 3 2 6 0 SS 0003 15 START 003054 STR 1 0 0 0 3 7 0 STR2 00037 1 STR 3 0 0 0 3 7 2 STR 4 0003 73 STR5 000374 STRS 0 0 0 3 7 5 STR 7 0 00376 STR8 000377 TEMPI 0 00333 TEMP 2 000334 TEMP 3 000335 06 10 .MAIN 1 0 0 V 0003^6- WRITE 00036R