$$S_N = \sum_{t=m+1}^{N} a_t^2 \qquad (3.119)$$

$$\hat{\sigma}_a^2 = \frac{S_N}{N-m} \qquad (3.120)$$

$$\min_{\theta,\phi}\; S_N \qquad (3.121)$$

with m as the larger of the two integers p and q, that is, $m = \max(p, q)$. To solve our identification problem as before, we rewrite the optimization problem as below:

$$\min_{\theta,\phi}\; S_N = \min_{\theta}\;\Bigl[\min_{\phi}\; S_N\Bigr]$$

Minimization with respect to $\phi$ gives

$$\phi = \bigl[N_1^T\Gamma^T\Gamma N_1\bigr]^{-1} N_1^T\Gamma^T\bigl[\mathbf{n} + \Gamma N_2\theta\bigr] \qquad (3.143)$$

And we also need to know the following quantity:

$$\frac{\partial\phi}{\partial\theta_i} = \bigl[N_1^T\Gamma^T\Gamma N_1\bigr]^{-1}\Bigl( N_1^T\frac{\partial\Gamma^T}{\partial\theta_i}\bigl[\mathbf{n}+\Gamma N_2\theta\bigr] + N_1^T\Gamma^T\Bigl[\frac{\partial\Gamma}{\partial\theta_i}N_2\theta + \Gamma N_2 e_i\Bigr] - \Bigl[N_1^T\frac{\partial\Gamma^T}{\partial\theta_i}\Gamma N_1 + N_1^T\Gamma^T\frac{\partial\Gamma}{\partial\theta_i}N_1\Bigr]\bigl[N_1^T\Gamma^T\Gamma N_1\bigr]^{-1}N_1^T\Gamma^T\bigl[\mathbf{n}+\Gamma N_2\theta\bigr]\Bigr) \qquad (3.144)$$

Now if we want to solve the above set of equations by the Newton-Raphson method, we must take the derivatives of the individual functions $g_i$ with respect to the $\theta_i$. This will call for the second-order derivatives of

$$g_i = \Bigl[\frac{\partial\Gamma}{\partial\theta_i}N_2\theta + \Gamma N_2 e_i - \frac{\partial\Gamma}{\partial\theta_i}N_1\phi - \Gamma N_1\frac{\partial\phi}{\partial\theta_i}\Bigr]^T\bigl[\mathbf{n} + \Gamma N_2\theta - \Gamma N_1\phi\bigr]$$
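The closed-form $\phi$ of Equation (3.143) is an ordinary least-squares solution, which can be illustrated numerically. The following NumPy sketch (our own illustration on synthetic data; the thesis' own code is MATLAB, and all names here are ours) builds a unit upper-triangular $\Gamma$ from arbitrary $\gamma$ weights and checks that the residual of (3.143) is orthogonal to the columns of $\Gamma N_1$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, q = 20, 2, 2
n_vec = rng.normal(size=(N, 1))        # stands in for the vector n
N1 = rng.normal(size=(N, p))
N2 = rng.normal(size=(N, q))
theta = rng.normal(size=(q, 1))

# unit upper-triangular Gamma built from arbitrary gamma weights
gam = rng.normal(size=N - 1)
G = np.eye(N)
for k in range(1, N):
    G += np.diag(np.full(N - k, gam[k - 1]), k)

# Eq. (3.143): phi is the least-squares fit of (n + Gamma N2 theta) on Gamma N1
A = G @ N1
b = n_vec + G @ N2 @ theta
phi = np.linalg.solve(A.T @ A, A.T @ b)

# normal equations: the residual is orthogonal to the columns of Gamma N1
resid = b - A @ phi
print(np.abs(A.T @ resid).max())       # close to machine precision
```

The orthogonality printed above is exactly the first-order condition that the identification equations set to zero.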

$$\mathbf{n} = W_0\bigl[\mathbf{y} - \Psi U\omega\bigr] \qquad (3.155)$$

$$W_0 = \bigl[\,I_{N-r-f-m-1}\;\big|\;0_{N-r-f-m-1\times m}\,\bigr] \qquad (3.156),\,(3.157)$$

which means that premultiplication by the matrix $W_0$ deletes the last m rows. Now we can obtain similar expressions for the columns of the matrices $N_1$ and $N_2$. The first column of these matrices can be obtained by premultiplying the matrix $\mathbf{y}-\Psi U\omega$ by the matrix $W_1$, where

$$W_1 = \bigl[\,0_{N-r-f-m-1\times 1}\;\big|\;I_{N-r-f-m-1}\;\big|\;0_{N-r-f-m-1\times m-1}\,\bigr] \qquad (3.158)$$

which means the last column of zeros of the matrix $W_0$ is wrapped around and becomes the first column of zeros of the matrix $W_1$. Premultiplication by the matrix $W_1$ shifts the rows up one and deletes the last m rows. In general, the matrix $W_i$ shifts up i rows and deletes the last m rows. This matrix has i leading columns of zeros, followed by an identity matrix, and possibly some trailing columns of zeros. Now we are ready to write the expressions for the matrices $N_1$ and $N_2$:

$$N_1 = \bigl[\,W_1[\mathbf{y}-\Psi U\omega]\;\;W_2[\mathbf{y}-\Psi U\omega]\;\cdots\;W_p[\mathbf{y}-\Psi U\omega]\,\bigr],\qquad N_2 = \bigl[\,W_1[\mathbf{y}-\Psi U\omega]\;\cdots\;W_q[\mathbf{y}-\Psi U\omega]\,\bigr] \qquad (3.159)$$

As the dimensions of the matrices $N_1$ and $N_2$ change, the dimension of the matrix $\Gamma$ also changes. The change is not a change in theory: in the separate identification we redefined the number of observations used in identifying the ARMA time series, so in this combined identification we have to correct for it. The matrix $\Gamma$ is then defined as follows:

$$\Gamma = \begin{bmatrix} 1 & \gamma_1 & \gamma_2 & \cdots & \gamma_{N-r-f-m-2}\\ & 1 & \gamma_1 & \cdots & \\ & & \ddots & \ddots & \vdots\\ & & & 1 & \gamma_1\\ & & & & 1 \end{bmatrix} \qquad (3.160)$$

CHAPTER 3. IDENTIFICATION
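The action of the $W_i$ matrices can be sketched in a few lines. This NumPy fragment (our own illustration with made-up sizes; `W` is a hypothetical helper, and `v` stands in for the residual $\mathbf{y}-\Psi U\omega$) builds the selection matrices of (3.158) and stacks the shifted residuals into $N_1$ and $N_2$ as in (3.159):

```python
import numpy as np

L, m, p, q = 10, 3, 2, 3            # residual length; m = max(p, q)
rows = L - m
v = np.arange(float(L)).reshape(-1, 1)   # stands in for y - Psi U omega

def W(i):
    # W_i: i leading zero columns, an identity block, m - i trailing zero
    # columns; premultiplying shifts the vector up i rows and drops the
    # last m rows
    Wi = np.zeros((rows, L))
    Wi[:, i:i + rows] = np.eye(rows)
    return Wi

assert np.allclose(W(0) @ v, v[:rows])        # W_0 just deletes the last m rows
assert np.allclose(W(1) @ v, v[1:1 + rows])   # W_1 shifts up one row first

N1 = np.hstack([W(k) @ v for k in range(1, p + 1)])   # Eq. (3.159)
N2 = np.hstack([W(k) @ v for k in range(1, q + 1)])
print(N1.shape, N2.shape)   # (7, 2) (7, 3)
```

Column k of $N_1$ is thus the residual lagged by k observations, exactly what the appendix code builds with its `wi` matrices.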
The identification of the ARMA part is the following optimization problem:

$$\min_{\theta,\phi}\;\bigl[W_0[\mathbf{y}-\Psi U\omega] + \Gamma N_2\theta - \Gamma N_1\phi\bigr]^T\bigl[W_0[\mathbf{y}-\Psi U\omega] + \Gamma N_2\theta - \Gamma N_1\phi\bigr] \qquad (3.161)$$

and as before this optimization with respect to $\phi$ gives us

$$\phi = \bigl[N_1^T\Gamma^T\Gamma N_1\bigr]^{-1}N_1^T\Gamma^T\bigl[W_0[\mathbf{y}-\Psi U\omega] + \Gamma N_2\theta\bigr] \qquad (3.162)$$

From this equation, we can derive the result we obtained before with just the replacement of the vector $\mathbf{n}$ by the vector $W_0[\mathbf{y}-\Psi U\omega]$:

$$\bigl[I - \Gamma N_1\bigl[N_1^T\Gamma^T\Gamma N_1\bigr]^{-1}N_1^T\Gamma^T\bigr]\bigl[W_0[\mathbf{y}-\Psi U\omega] + \Gamma N_2\theta\bigr] \qquad (3.163)$$

The interaction of the two identification problems comes from the equation below:

$$g_{s+i} = \Bigl[\frac{\partial\Gamma}{\partial\theta_i}N_2\theta + \Gamma N_2 e_i - \frac{\partial\Gamma}{\partial\theta_i}N_1\phi - \Gamma N_1\frac{\partial\phi}{\partial\theta_i}\Bigr]^T\bigl[W_0[\mathbf{y}-\Psi U\omega] + \Gamma N_2\theta - \Gamma N_1\phi\bigr] = 0 \qquad (3.171)$$

The identification equations are similar to those of the split identification case. This means we should get the same results in both cases; the only difference is, of course, the path to the final results. The Newton-Raphson iteration for our problem now is

$$\begin{bmatrix}\delta\\\theta\end{bmatrix}_{k+1} = \begin{bmatrix}\delta\\\theta\end{bmatrix}_k - \begin{bmatrix} \dfrac{\partial g_1}{\partial\delta_1} & \cdots & \dfrac{\partial g_1}{\partial\delta_s} & \dfrac{\partial g_1}{\partial\theta_1} & \cdots & \dfrac{\partial g_1}{\partial\theta_q}\\ \vdots & & \vdots & \vdots & & \vdots\\ \dfrac{\partial g_{s+q}}{\partial\delta_1} & \cdots & \dfrac{\partial g_{s+q}}{\partial\delta_s} & \dfrac{\partial g_{s+q}}{\partial\theta_1} & \cdots & \dfrac{\partial g_{s+q}}{\partial\theta_q} \end{bmatrix}^{-1}\begin{bmatrix}g_1\\\vdots\\g_{s+q}\end{bmatrix} \qquad (3.172)$$

The Jacobian of the above equation requires the derivatives of the $g_i$ with respect to the two sets of independent variables, the $\delta_i$ and the $\theta_i$. The derivative of $g_i$, for $i \le s$, with respect to $\delta_j$ is still the same as in the case of split identification:

$$\frac{\partial g_i}{\partial\delta_j} = \Bigl[\frac{\partial^2\Psi}{\partial\delta_i\partial\delta_j}U\omega + \frac{\partial\Psi}{\partial\delta_i}U\frac{\partial\omega}{\partial\delta_j} + \frac{\partial\Psi}{\partial\delta_j}U\frac{\partial\omega}{\partial\delta_i} + \Psi U\frac{\partial^2\omega}{\partial\delta_i\partial\delta_j}\Bigr]^T\bigl[\mathbf{y}-\Psi U\omega\bigr] - \Bigl[\frac{\partial\Psi}{\partial\delta_i}U\omega + \Psi U\frac{\partial\omega}{\partial\delta_i}\Bigr]^T\Bigl[\frac{\partial\Psi}{\partial\delta_j}U\omega + \Psi U\frac{\partial\omega}{\partial\delta_j}\Bigr] \qquad (3.173)$$

The derivative of $g_{s+i}$ with respect to $\delta_j$ is lengthier, but actually not more complicated. This derivative is new for the case of combined identification:

$$\frac{\partial g_{s+i}}{\partial\delta_j} = \frac{\partial}{\partial\delta_j}\Bigl[\frac{\partial\Gamma}{\partial\theta_i}N_2\theta + \Gamma N_2 e_i - \frac{\partial\Gamma}{\partial\theta_i}N_1\phi - \Gamma N_1\frac{\partial\phi}{\partial\theta_i}\Bigr]^T\bigl[W_0[\mathbf{y}-\Psi U\omega]+\Gamma N_2\theta-\Gamma N_1\phi\bigr] \qquad (3.174)$$

$$\quad + \Bigl[\frac{\partial\Gamma}{\partial\theta_i}N_2\theta + \Gamma N_2 e_i - \frac{\partial\Gamma}{\partial\theta_i}N_1\phi - \Gamma N_1\frac{\partial\phi}{\partial\theta_i}\Bigr]^T\Bigl[-W_0\frac{\partial\Psi}{\partial\delta_j}U\omega - W_0\Psi U\frac{\partial\omega}{\partial\delta_j} + \Gamma\frac{\partial N_2}{\partial\delta_j}\theta - \Gamma\frac{\partial N_1}{\partial\delta_j}\phi - \Gamma N_1\frac{\partial\phi}{\partial\delta_j}\Bigr] \qquad (3.175)$$

with

$$\frac{\partial N_1}{\partial\delta_j} = -\Bigl[\,W_1\Bigl[\frac{\partial\Psi}{\partial\delta_j}U\omega + \Psi U\frac{\partial\omega}{\partial\delta_j}\Bigr]\;\cdots\;W_p\Bigl[\frac{\partial\Psi}{\partial\delta_j}U\omega + \Psi U\frac{\partial\omega}{\partial\delta_j}\Bigr]\,\Bigr] \qquad (3.176),\,(3.177)$$

and similarly for the matrix $N_2$.
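The derivative formula (3.144) can be checked numerically against a finite difference. The NumPy sketch below (our own verification on synthetic data; helper names such as `gamma_weights` and `phi_hat` are ours, and the $\gamma$-weight recursion mirrors the one used in the appendix code) compares the closed-form $\partial\phi/\partial\theta_i$ with a central difference:

```python
import numpy as np

def gamma_weights(theta, K):
    # gamma_k = theta_k (k <= q) + sum_{l=1}^{k-1} gamma_l * theta_{k-l}
    q = len(theta)
    g = np.zeros(K)
    for k in range(1, K + 1):
        acc = theta[k - 1] if k <= q else 0.0
        for l in range(1, k):
            if k - l <= q:
                acc += g[l - 1] * theta[k - l - 1]
        g[k - 1] = acc
    return g

def dgamma_weights(theta, gam, i, K):
    # d(gamma_k)/d(theta_i): the recursion differentiated term by term;
    # the k == i case is the gamma_0 = 1 contribution
    q = len(theta)
    dg = np.zeros(K)
    for k in range(1, K + 1):
        acc = 1.0 if k == i else 0.0
        for l in range(1, k):
            if k - l <= q:
                acc += dg[l - 1] * theta[k - l - 1]
            if l == k - i:
                acc += gam[l - 1]
        dg[k - 1] = acc
    return dg

def toeplitz_upper(w, n, diag):
    T = diag * np.eye(n)
    for k in range(1, n):
        T += np.diag(np.full(n - k, w[k - 1]), k)
    return T

def phi_hat(theta, nv, N1, N2):
    n = len(nv)
    G = toeplitz_upper(gamma_weights(theta, n - 1), n, 1.0)
    A, b = G @ N1, nv + G @ N2 @ theta.reshape(-1, 1)
    return G, A, b, np.linalg.solve(A.T @ A, A.T @ b)

rng = np.random.default_rng(3)
n, p, q, i = 15, 2, 2, 1
nv = rng.normal(size=(n, 1))
N1, N2 = rng.normal(size=(n, p)), rng.normal(size=(n, q))
theta = np.array([0.4, -0.3])

G, A, b, phi = phi_hat(theta, nv, N1, N2)
gam = gamma_weights(theta, n - 1)
dG = toeplitz_upper(dgamma_weights(theta, gam, i, n - 1), n, 0.0)
ei = np.zeros((q, 1)); ei[i - 1] = 1.0

# closed-form d(phi)/d(theta_i) as in Eq. (3.144)
M = A.T @ A
dphi = np.linalg.solve(M, N1.T @ dG.T @ b
                       + A.T @ (dG @ N2 @ theta.reshape(-1, 1) + G @ N2 @ ei)
                       - (N1.T @ dG.T @ G @ N1 + A.T @ dG @ N1) @ phi)

# central finite difference for comparison
h = 1e-6
e = np.zeros(q); e[i - 1] = h
fd = (phi_hat(theta + e, nv, N1, N2)[3] - phi_hat(theta - e, nv, N1, N2)[3]) / (2 * h)
print(np.abs(dphi - fd).max())   # small
```

Checks of this kind are a cheap safeguard before wiring analytic derivatives into the Newton-Raphson iteration.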
Now we will derive the equations for the derivatives of the $g_i$ with respect to $\theta_j$. For $i \le s$, the derivative with respect to $\theta_j$ is zero, because the function contains no $\theta_j$:

$$\frac{\partial g_i}{\partial\theta_j} = 0 \quad\text{for } i \le s \qquad (3.178)$$

3.5. IDENTIFICATION OF THE TRANSFER FUNCTION-ARIMA

For $i > s$, the derivative of $g_{s+i}$ with respect to $\theta_j$ is as below:

$$\frac{\partial g_{s+i}}{\partial\theta_j} = \Bigl[\frac{\partial^2\Gamma}{\partial\theta_i\partial\theta_j}N_2\theta + \frac{\partial\Gamma}{\partial\theta_i}N_2 e_j + \frac{\partial\Gamma}{\partial\theta_j}N_2 e_i - \frac{\partial^2\Gamma}{\partial\theta_i\partial\theta_j}N_1\phi - \frac{\partial\Gamma}{\partial\theta_i}N_1\frac{\partial\phi}{\partial\theta_j} - \frac{\partial\Gamma}{\partial\theta_j}N_1\frac{\partial\phi}{\partial\theta_i} - \Gamma N_1\frac{\partial^2\phi}{\partial\theta_i\partial\theta_j}\Bigr]^T\bigl[W_0[\mathbf{y}-\Psi U\omega]+\Gamma N_2\theta-\Gamma N_1\phi\bigr] \qquad (3.179)$$

$$\quad + \Bigl[\frac{\partial\Gamma}{\partial\theta_i}N_2\theta + \Gamma N_2 e_i - \frac{\partial\Gamma}{\partial\theta_i}N_1\phi - \Gamma N_1\frac{\partial\phi}{\partial\theta_i}\Bigr]^T\Bigl[\frac{\partial\Gamma}{\partial\theta_j}N_2\theta + \Gamma N_2 e_j - \frac{\partial\Gamma}{\partial\theta_j}N_1\phi - \Gamma N_1\frac{\partial\phi}{\partial\theta_j}\Bigr] \qquad (3.180)$$

The elements of the Hessian matrix require the first-order and also the second-order partial derivatives. We have given the first-order partial derivatives of $\omega$ and $\phi$; now we will give the second-order partial derivatives. We have, as before,

$$\frac{\partial^2\omega}{\partial\delta_i\partial\delta_j} = \bigl[U^T\Psi^T\Psi U\bigr]^{-1}\Bigl( U^T\frac{\partial^2\Psi^T}{\partial\delta_i\partial\delta_j}\mathbf{y} - \Bigl[U^T\frac{\partial^2\Psi^T}{\partial\delta_i\partial\delta_j}\Psi U + U^T\frac{\partial\Psi^T}{\partial\delta_i}\frac{\partial\Psi}{\partial\delta_j}U + U^T\frac{\partial\Psi^T}{\partial\delta_j}\frac{\partial\Psi}{\partial\delta_i}U + U^T\Psi^T\frac{\partial^2\Psi}{\partial\delta_i\partial\delta_j}U\Bigr]\omega - \Bigl[U^T\frac{\partial\Psi^T}{\partial\delta_i}\Psi U + U^T\Psi^T\frac{\partial\Psi}{\partial\delta_i}U\Bigr]\frac{\partial\omega}{\partial\delta_j} - \Bigl[U^T\frac{\partial\Psi^T}{\partial\delta_j}\Psi U + U^T\Psi^T\frac{\partial\Psi}{\partial\delta_j}U\Bigr]\frac{\partial\omega}{\partial\delta_i}\Bigr) \qquad (3.181)$$

But the derivatives of $\phi$ will change. We have

$$\frac{\partial^2\phi}{\partial\theta_i\partial\theta_j} = \bigl[N_1^T\Gamma^T\Gamma N_1\bigr]^{-1}\Bigl( N_1^T\frac{\partial^2\Gamma^T}{\partial\theta_i\partial\theta_j}\bigl[\mathbf{n}+\Gamma N_2\theta\bigr] + N_1^T\frac{\partial\Gamma^T}{\partial\theta_i}\Bigl[\frac{\partial\Gamma}{\partial\theta_j}N_2\theta+\Gamma N_2e_j\Bigr] + N_1^T\frac{\partial\Gamma^T}{\partial\theta_j}\Bigl[\frac{\partial\Gamma}{\partial\theta_i}N_2\theta+\Gamma N_2e_i\Bigr] + N_1^T\Gamma^T\Bigl[\frac{\partial^2\Gamma}{\partial\theta_i\partial\theta_j}N_2\theta+\frac{\partial\Gamma}{\partial\theta_i}N_2e_j+\frac{\partial\Gamma}{\partial\theta_j}N_2e_i\Bigr] - \Bigl[N_1^T\frac{\partial^2\Gamma^T}{\partial\theta_i\partial\theta_j}\Gamma N_1 + N_1^T\frac{\partial\Gamma^T}{\partial\theta_i}\frac{\partial\Gamma}{\partial\theta_j}N_1 + N_1^T\frac{\partial\Gamma^T}{\partial\theta_j}\frac{\partial\Gamma}{\partial\theta_i}N_1 + N_1^T\Gamma^T\frac{\partial^2\Gamma}{\partial\theta_i\partial\theta_j}N_1\Bigr]\phi - \Bigl[N_1^T\frac{\partial\Gamma^T}{\partial\theta_i}\Gamma N_1+N_1^T\Gamma^T\frac{\partial\Gamma}{\partial\theta_i}N_1\Bigr]\frac{\partial\phi}{\partial\theta_j} - \Bigl[N_1^T\frac{\partial\Gamma^T}{\partial\theta_j}\Gamma N_1+N_1^T\Gamma^T\frac{\partial\Gamma}{\partial\theta_j}N_1\Bigr]\frac{\partial\phi}{\partial\theta_i}\Bigr) \qquad (3.182)$$

The above equation is obtained by differentiating the expression for $\partial\phi/\partial\theta_i$ once more with respect to $\theta_j$.

$$\min_{u_t}\; E\bigl\{y_{t+f+1}^2 + \lambda u_t^2\bigr\},\qquad \lambda \ge 0 \qquad (4.18)$$

This is the optimal criterion, or performance index, of an LQG controller. Its physical meaning is the sum of the closed-loop output variance and the closed-loop input variance, with a weight $\lambda$ on the variance of the input variable. If $\lambda$ is zero, we have the case of unconstrained movement of the input variable. This is the case of minimum variance control: the controller moves the control element maximally to achieve minimum variance of the output variable. On the other hand, if $\lambda$ is very large, the criterion puts almost all the weight on the input variable. In this case, minimization of the performance index almost means minimization of the movement of the input variable. The result is a near-zero variance of the input variable under feedback: the controller practically does not move the control element at all, and the variance of the output variable stays at the highest value given by the disturbance.

4.3. THE LINEAR QUADRATIC GAUSSIAN CONTROLLER

In this case, the loop is practically open. In the usual case, $\lambda$ will take a value in the interval $[0, \infty)$ and the closed-loop variance of the input variable $\sigma_u^2$ will be lower than or equal to that of the minimum variance control case, $\sigma_{u,mv}^2$. Actually, Equation (4.18) should be modified for the following reason. Suppose that we want the closed-loop input variance equal to or smaller than $\sigma_u^2$; the optimization problem will be

$$\min_{u_t}\; E\bigl\{y_{t+f+1}^2\bigr\} \qquad (4.19)$$

with the open-loop control system

$$y_{t+f+1} = \frac{\omega(z^{-1})}{\delta(z^{-1})}u_t + \frac{\theta(z^{-1})}{\phi(z^{-1})}a_{t+f+1} \qquad (4.20)$$

subject to a constraint on the closed-loop input variance

$$E\bigl\{u_t^2\bigr\} \le \sigma_u^2 \qquad (4.21)$$

The Lagrangian of the above optimization problem is (Luenberger, D. (1969))

$$\min_{u_t}\; E\bigl\{y_{t+f+1}^2 + \lambda(u_t^2 - \sigma_u^2)\bigr\} \qquad (4.22)$$

not

$$\min_{u_t}\; E\bigl\{y_{t+f+1}^2 + \lambda u_t^2\bigr\} \qquad (4.23)$$

The solution of the above optimization problem must satisfy the Kuhn-Tucker theorem, i.e. we must have (Luenberger, D. (1969))

$$\lambda\bigl(E\{u_t^2\} - \sigma_u^2\bigr) = 0 \qquad (4.24)$$

In case it happens that $\sigma_{u,mv}^2$ is smaller than $\sigma_u^2$, $\lambda$ will be zero. On the contrary, $E\{u_t^2\} = \sigma_u^2$.
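The role of the weight $\lambda$ can be seen in a deliberately simplified one-step sketch (our own scalar illustration, not the dynamic derivation of this chapter): with $y = g\,u + d$ and a known disturbance $d$, minimizing $y^2 + \lambda u^2$ over $u$ gives $u^* = -g d/(g^2 + \lambda)$, so $\lambda = 0$ recovers minimum variance control while a large $\lambda$ barely moves the input:

```python
import numpy as np

# Toy one-step illustration of the weight lambda in Eq. (4.18):
# y = g*u + d, minimize y^2 + lambda*u^2 over u  ->  u* = -g*d/(g^2 + lambda)
g, d = 2.0, 1.0
for lam in [0.0, 1.0, 100.0]:
    u = -g * d / (g**2 + lam)
    y = g * u + d
    print(f"lambda={lam:6.1f}  u={u:+.4f}  y={y:+.4f}")
# lambda = 0   -> minimum variance: y is driven exactly to zero, largest |u|
# lambda large -> u is barely moved and y stays near the open-loop value d
```

The same tradeoff, with variances in place of squared values, is what the dynamic criterion (4.18) expresses.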
So in both cases, the Kuhn-Tucker theorem is satisfied. This Kuhn-Tucker condition can be obtained from Equation (4.22), not from Equation (4.18). Nevertheless, these two control criteria give the same controller, and the latter has established itself in the control literature; therefore we will refer to that optimal criterion when we mention the LQG controller. In Equation (4.21), the strict inequality sign (<) applies to the case where $\sigma_{u,mv}^2$ is smaller than $\sigma_u^2$, and the equality sign to the case where $\sigma_{u,mv}^2$ is greater than $\sigma_u^2$.

Statistically, Equation (4.22) should also be modified. This is because the optimization is taken at time t and $y_{t+f+1}$ is in the future of time t. At this time $y_t$ and $a_t$ are given,

CHAPTER 4. CONTROLLERS

so the expectation should be the conditional expectation given $y_t$ or $a_t$, which means we must have either

$$\min_{u_t}\; E\bigl\{y_{t+f+1}^2 + \lambda(u_t^2 - \sigma_u^2)\,\big|\,y_t\bigr\} \qquad (4.25)$$

or

$$\min_{u_t}\; E\bigl\{y_{t+f+1}^2 + \lambda(u_t^2 - \sigma_u^2)\,\big|\,a_t\bigr\} \qquad (4.26)$$

From the above equation, we can write

$$\min_{u_t}\; E\Bigl\{\Bigl(\frac{\omega(z^{-1})}{\delta(z^{-1})}u_t + \frac{\theta(z^{-1})}{\phi(z^{-1})}a_{t+f+1}\Bigr)^2 + \lambda(u_t^2 - \sigma_u^2)\,\Big|\,a_t\Bigr\} \qquad (4.27)$$

By using a Diophantine equation we had before, we can write the above equation as

$$\min_{u_t}\; E\Bigl\{\bigl(\psi(z^{-1})a_{t+f+1}\bigr)^2 + \Bigl(\frac{\omega(z^{-1})}{\delta(z^{-1})}u_t + \frac{\gamma(z^{-1})}{\phi(z^{-1})}a_t\Bigr)^2 + \lambda(u_t^2 - \sigma_u^2)\,\Big|\,a_t\Bigr\} \qquad (4.28)$$

or

$$\min_{u_t}\; \sigma_{y,mv}^2 + E\Bigl\{\Bigl(\frac{\omega(z^{-1})}{\delta(z^{-1})}u_t + \frac{\gamma(z^{-1})}{\phi(z^{-1})}a_t\Bigr)^2 + \lambda(u_t^2 - \sigma_u^2)\,\Big|\,a_t\Bigr\} \qquad (4.29)$$

Now the expected value of $a_{t-k}$ ($k \ge 0$) given $a_t$ is $a_{t-k}$ itself, and similarly for $u_{t-k}$ ($k \ge 0$), because $u_t$ is a linear combination of the $a_{t-k}$ ($k \ge 0$). This means the expectation operator vanishes and we are left with a quadratic equation in the variable $u_t$. In Vu, K. (1991), the author has obtained the solution of this problem as below:

$$u_t = -\frac{\delta(z^{-1})\gamma(z^{-1})}{\phi(z^{-1})\bigl[\omega(z^{-1}) + \frac{\lambda}{\omega_0}\delta(z^{-1})\bigr]}\,a_t \qquad (4.30)$$

and therefore the LQG controller is given as follows:

$$u_t = -\frac{\delta(z^{-1})\gamma(z^{-1})}{\omega(z^{-1})\phi(z^{-1})\psi(z^{-1}) + \frac{\lambda}{\omega_0}\theta(z^{-1})\delta(z^{-1})}\,y_t \qquad (4.31)$$

So compared to the minimum variance controller, the LQG controller has an additional term $\frac{\lambda}{\omega_0}\theta(z^{-1})\delta(z^{-1})$ in the denominator. Now if we put the above equation back into the Box-Jenkins model
equation, we will get an equation for the time series that the output variable will follow under feedback:

$$y_t = \frac{\omega(z^{-1})\phi(z^{-1})\psi(z^{-1}) + \frac{\lambda}{\omega_0}\delta(z^{-1})\theta(z^{-1})}{\phi(z^{-1})\bigl[\omega(z^{-1}) + \frac{\lambda}{\omega_0}\delta(z^{-1})\bigr]}\,a_t \qquad (4.32)$$

From the two equations for $u_t$ and $y_t$, we can find the following relationship between the variables $y_t$ and $u_t$ under feedback:

$$y_t = -\frac{\lambda}{\omega_0}u_{t-f-1} + \psi(z^{-1})a_t \qquad (4.33)$$

where $\psi(z^{-1})$ is given in the Diophantine equation mentioned previously. From this equation, since $u_{t-f-1}$ is uncorrelated with the terms of $\psi(z^{-1})a_t$, we can find the following relationship between the variances:

$$\sigma_y^2 = \sigma_{y,mv}^2 + \Bigl(\frac{\lambda}{\omega_0}\Bigr)^2\sigma_u^2$$

4.4. THE PID CONTROLLER

$$\frac{\partial\phi}{\partial\theta_i} = \bigl[N_1^T\Gamma^T\Gamma N_1\bigr]^{-1}\Bigl( N_1^T\frac{\partial\Gamma^T}{\partial\theta_i}\bigl[W_0[\mathbf{y}-\Psi U\omega]+\Gamma N_2\theta\bigr] + N_1^T\Gamma^T\Bigl[\frac{\partial\Gamma}{\partial\theta_i}N_2\theta + \Gamma N_2 e_i\Bigr] - \Bigl[N_1^T\frac{\partial\Gamma^T}{\partial\theta_i}\Gamma N_1 + N_1^T\Gamma^T\frac{\partial\Gamma}{\partial\theta_i}N_1\Bigr]\phi\Bigr) \qquad (3.185)$$

Our Newton-Raphson equation has become

$$\begin{bmatrix}\delta\\\theta\end{bmatrix}_{k+1} = \begin{bmatrix}\delta\\\theta\end{bmatrix}_k - \begin{bmatrix} \dfrac{\partial g_i}{\partial\delta_j} & 0\\[6pt] \dfrac{\partial g_{s+i}}{\partial\delta_j} & \dfrac{\partial g_{s+i}}{\partial\theta_j} \end{bmatrix}^{-1}\begin{bmatrix}g_1\\\vdots\\g_{s+q}\end{bmatrix} \qquad (3.186)$$

This should make the inversion of this matrix a little more accurate. The existence of the lower block makes the movements of the parameter vectors $\delta$ and $\theta$ dependent.

3.6 Identification of the Predictor Form

3.6.1 The Predictor Form

The Box-Jenkins model of a control system was first called the dynamic-stochastic model by a number of researchers. The dynamic part represents the dynamics of the process, and the stochastic part is the part whose values are not certain, hence the word stochastic. This part is represented by an ARIMA model. The model was then also called the transfer function-noise model or the transfer function-ARIMA model. Even though the names are different, the equations used are the same: the deviation output variable is the sum of two rational functions, one driven by the deviation input variable, the other driven by a white noise. This is one form of the Box-Jenkins model. The Box-Jenkins model has another form, called the predictor form. If the purpose of the identification is only to design a minimum
variance or an LQG controller, then it is better to use this form than the currently better-known and more popular one. We will derive this form in the following. From the usually known Box-Jenkins model

$$y_t = \frac{\omega(z^{-1})}{\delta(z^{-1})}u_{t-f-1} + \frac{\theta(z^{-1})}{\phi(z^{-1})}a_t \qquad (3.187)$$

we can derive an expression for $a_t$ as below:

$$a_t = \frac{\phi(z^{-1})}{\theta(z^{-1})}\Bigl[y_t - \frac{\omega(z^{-1})}{\delta(z^{-1})}u_{t-f-1}\Bigr] \qquad (3.188)$$

If we write the system model at time $t+f+1$, we have

$$y_{t+f+1} = \frac{\omega(z^{-1})}{\delta(z^{-1})}u_t + \frac{\theta(z^{-1})}{\phi(z^{-1})}a_{t+f+1} \qquad (3.189)$$

and with the Diophantine equation

$$\frac{\theta(z^{-1})}{\phi(z^{-1})} = \psi(z^{-1}) + z^{-f-1}\frac{\gamma(z^{-1})}{\phi(z^{-1})} \qquad (3.190)$$

we can write

$$y_{t+f+1} = \frac{\omega(z^{-1})}{\delta(z^{-1})}u_t + \frac{\gamma(z^{-1})}{\phi(z^{-1})}a_t + \psi(z^{-1})a_{t+f+1} \qquad (3.191)$$

Substituting Equation (3.188) for $a_t$ gives

$$y_{t+f+1} = \frac{\omega(z^{-1})}{\delta(z^{-1})}u_t + \frac{\gamma(z^{-1})}{\theta(z^{-1})}\Bigl[y_t - \frac{\omega(z^{-1})}{\delta(z^{-1})}u_{t-f-1}\Bigr] + \psi(z^{-1})a_{t+f+1} \qquad (3.192)$$

$$= \frac{\omega(z^{-1})\bigl[\theta(z^{-1}) - \gamma(z^{-1})z^{-f-1}\bigr]}{\delta(z^{-1})\theta(z^{-1})}u_t + \frac{\gamma(z^{-1})}{\theta(z^{-1})}y_t + \psi(z^{-1})a_{t+f+1} \qquad (3.193)$$

$$= \frac{\omega(z^{-1})\phi(z^{-1})\psi(z^{-1})}{\delta(z^{-1})\theta(z^{-1})}u_t + \frac{\gamma(z^{-1})}{\theta(z^{-1})}y_t + \psi(z^{-1})a_{t+f+1} \qquad (3.194)$$

This equation can be put into the following form: $\omega(z^{-1})\psi(z^{-1})\cdots$

APPENDIX B

        if (l>=k-s) sum=sum+dpsi(l,i)*del(k-l,1); end;
        if (l==k-i) sum=sum+psi(l,1); end;
      end;
      if (k==i) sum=sum+1; end;  % psi_0 = 1 term
      dpsi(k,i)=sum;
    end;
  end;
%
  for k=1:nob-f-r-1
    for l=1:nob-f-r-1
      if (l<=k) dipsi(k,l)=0;
      else dipsi(k,l)=dpsi(l-k,i); end;
    end;
  end;
  domg(:,i)=inv(u'*mpsi'*mpsi*u)*...
    (u'*dipsi'-(u'*dipsi'*mpsi*u+u'*mpsi'*dipsi*u)*...
    inv(u'*mpsi'*mpsi*u)*u'*mpsi')*y;
  g(i,1)=(dipsi*u*omg+mpsi*u*domg(:,i))'*(y-mpsi*u*omg);
end;
%
% Calculate the derivative matrix
%
for i=1:s
  for k=1:nob-f-r-1
    for l=1:nob-f-r-1
      if (l<=k) dipsi(k,l)=0;
      else dipsi(k,l)=dpsi(l-k,i); end;
    end;
  end;
%
% Calculate second derivatives
%
  for j=1:s
    for k=1:nob-f-r-2
      sum=0;
      for l=1:k-1
        if (l>=k-s) sum=sum+d2psi(l,1)*del(k-l,1); end;
        if (l==k-i) sum=sum+dpsi(l,j); end;
        if (l==k-j) sum=sum+dpsi(l,i); end;
      end;
      d2psi(k,1)=sum;
    end;
    for k=1:nob-f-r-1
      for l=1:nob-f-r-1
        if (l<=k) djpsi(k,l)=0; dijpsi(k,l)=0;
        else djpsi(k,l)=dpsi(l-k,j); dijpsi(k,l)=d2psi(l-k,1); end;
      end;
    end;
%
% Calculate the Jacobian derivative matrix
%
    d2omg=inv(u'*mpsi'*mpsi*u)*(u'*dijpsi'*y-...
      (u'*dijpsi'*mpsi*u+u'*dipsi'*djpsi*u+...
      u'*djpsi'*dipsi*u+u'*mpsi'*dijpsi*u)*omg-...
      (u'*dipsi'*mpsi*u+u'*mpsi'*dipsi*u)*domg(:,j)-...
      (u'*djpsi'*mpsi*u+u'*mpsi'*djpsi*u)*domg(:,i));
    dg(i,j)=(dijpsi*u*omg+dipsi*u*domg(:,j)+...
      djpsi*u*domg(:,i)+mpsi*u*d2omg)'*(y-mpsi*u*omg)-...
      (dipsi*u*omg+mpsi*u*domg(:,i))'*...
      (djpsi*u*omg+mpsi*u*domg(:,j));
  end;
end;
%
% Modified Newton-Raphson iteration
%
esp=g'*g/s;
if (esp<=lesp|iter==1)
  ldel=del; ddel=-dg\g; sdel=ddel; lesp=esp; iter=iter+1;
  iter
  esp
  del'
  omg'
else
  ddel=ddel/2;
  sqddel=ddel'*ddel/s;
  if (sqddel<=1.0e-10)
    rand('uniform');
    for i=1:s ddel(i,1)=rand*sdel(i,1); end;
  end;
  iter
  [lesp esp]
end;
ndel=ldel+ddel;
end;
%
% Get final results
%
omega=omg;
delta=del;
x=y-mpsi*u*omg;
varnt=x'*x/(nob-f-r-1);

function [phi,theta,varat]=arma_id(nt,p,q,nthe);
%
% Identification algorithm for an ARMA time series model.
%
% Calling sequence:
%
%   [phi,theta,varat]=arma_id(nt,p,q,nthe);
%
% Input variables:
%   nt   : The ARMA time series.
%   p    : The number of autoregressive parameters.
%   q    : The number of moving average parameters.
%   nthe : The initial estimate of the moving average parameter vector.
%
% Output variables:
%   phi   : The column vector of the autoregressive parameters.
%   theta : The column vector of the moving average parameters.
%   varat : Variance of the white noise.
%
% Written by Ky M. Vu on May 15th 1996
%
nob=length(nt);
m=max(p,q);
for i=1:nob-m
  n(i,1)=nt(nob+1-i,1);
  for j=1:p n1(i,j)=nt(nob+1-i-j,1); end;
  for j=1:q n2(i,j)=nt(nob+1-i-j,1); end;
end;
esp=1; lesp=1; iter=1;
%
% Start iterating
%
while (iter<100&esp>1.0e-7)
  the=nthe;
  gama(1,1)=the(1,1);
  for k=2:nob-m-1
    sum=0;
    for l=1:k-1
      if (l>=k-q) sum=sum+gama(l,1)*the(k-l,1); end;
    end;
    if (k<=q) gama(k,1)=sum+the(k,1);
    else gama(k,1)=sum; end;
  end;
%
% Calculate the optimal phi vector
%
  for i=1:nob-m
    for j=1:nob-m
      if (j==i) mgama(i,j)=1;
      elseif (j>i) mgama(i,j)=gama(j-i,1);
      else mgama(i,j)=0; end;
    end;
  end;
  ophi=inv(n1'*mgama'*mgama*n1)*n1'*mgama'*(n+mgama*n2*the);
%
% Calculate the identification equations
%
  for i=1:q
    for k=1:nob-m-1
      sum=0;
      for l=1:k-1
        if (l>=k-q) sum=sum+dgama(l,i)*the(k-l,1); end;
        if (l==k-i) sum=sum+gama(l,1); end;
      end;
      if (k==i) sum=sum+1; end;  % gama_0 = 1 term
      dgama(k,i)=sum;
    end;
    for k=1:nob-m
      for l=1:nob-m
        if (l<=k) digama(k,l)=0;
        else digama(k,l)=dgama(l-k,i); end;
      end;
    end;
    ei=zeros(q,1); ei(i,1)=1;
    dophi(:,i)=inv(n1'*mgama'*mgama*n1)*(n1'*digama'*(n+mgama*n2*the)+...
      n1'*mgama'*(digama*n2*the+mgama*n2*ei)-...
      (n1'*digama'*mgama*n1+n1'*mgama'*digama*n1)*...
      inv(n1'*mgama'*mgama*n1)*n1'*mgama'*(n+mgama*n2*the));
    h(i,1)=(digama*n2*the+mgama*n2*ei-...
      digama*n1*ophi-mgama*n1*dophi(:,i))'*...
      (n+mgama*n2*the-mgama*n1*ophi);
  end;
%
% Calculate the derivative matrix
%
  for i=1:q
    for k=1:nob-m
      for l=1:nob-m
        if (l<=k) digama(k,l)=0;
        else digama(k,l)=dgama(l-k,i); end;
      end;
    end;
%
% Calculate second derivatives
%
    for j=1:q
      for k=1:nob-m-1
        sum=0;
        for l=1:k-1
          if (l>=k-q) sum=sum+d2gama(l,1)*the(k-l,1); end;
          if (l==k-i) sum=sum+dgama(l,j); end;
          if (l==k-j) sum=sum+dgama(l,i); end;
        end;
        d2gama(k,1)=sum;
      end;
      for k=1:nob-m
        for l=1:nob-m
          if (l<=k) djgama(k,l)=0; dijgama(k,l)=0;
          else djgama(k,l)=dgama(l-k,j); dijgama(k,l)=d2gama(l-k,1); end;
        end;
      end;
      ei=zeros(q,1); ej=zeros(q,1); ei(i,1)=1; ej(j,1)=1;
%
% Calculate the Jacobian derivative matrix
%
      d2phi=inv(n1'*mgama'*mgama*n1)*(n1'*dijgama'*(n+mgama*n2*the)+...
        n1'*digama'*(djgama*n2*the+mgama*n2*ej)+...
        n1'*djgama'*(digama*n2*the+mgama*n2*ei)+...
        n1'*mgama'*(dijgama*n2*the+digama*n2*ej+djgama*n2*ei)-...
        (n1'*dijgama'*mgama*n1+n1'*digama'*djgama*n1+...
        n1'*djgama'*digama*n1+n1'*mgama'*dijgama*n1)*ophi-...
        (n1'*digama'*mgama*n1+n1'*mgama'*digama*n1)*dophi(:,j)-...
        (n1'*djgama'*mgama*n1+n1'*mgama'*djgama*n1)*dophi(:,i));
      dh(i,j)=(dijgama*n2*the+digama*n2*ej+djgama*n2*ei-...
        dijgama*n1*ophi-digama*n1*dophi(:,j)-...
        djgama*n1*dophi(:,i)-mgama*n1*d2phi)'*...
        (n+mgama*n2*the-mgama*n1*ophi)+...
        (digama*n2*the+mgama*n2*ei-...
        digama*n1*ophi-mgama*n1*dophi(:,i))'*...
        (djgama*n2*the+mgama*n2*ej-...
        djgama*n1*ophi-mgama*n1*dophi(:,j));
    end;
  end;
%
% Modified Newton-Raphson iteration
%
  esp=h'*h/q;
  if (esp<=lesp|iter==1)
    lthe=the; dthe=-dh\h; sthe=dthe; lesp=esp; iter=iter+1;
    iter
    esp'
    the'
    ophi'
    h'
  else
    dthe=dthe/2;
    sqdthe=dthe'*dthe/q;
    if (sqdthe<=1.0e-10)
      rand('uniform');
      for i=1:q dthe(i,1)=rand*sthe(i,1); end;
    end;
    iter
    [lesp esp]
  end;
  nthe=lthe+dthe;
end;
%
% Get final results
%
phi=ophi;
theta=the;
x=n+mgama*n2*the-mgama*n1*ophi;
varat=x'*x/(nob-m);

function [omega,delta,phi,theta,varat]=bj_id(yt,ut,f,r,s,p,q,nprm)
%
% Identification algorithm for a Box-Jenkins control model.
%
% Calling sequence:
%
%   [omega,delta,phi,theta,varat]=bj_id(yt,ut,f,r,s,p,q,nprm);
%
% Input variables:
%   yt   : The output variable time series.
%   ut   : The input variable time series.
%   f    : The pure delay of the B-J model. No delay f=0.
%   r    : The number of transmission zeros. No zero r=0.
%   s    : The order of the system.
%   p    : The number of autoregressive parameters.
%   q    : The number of moving average parameters.
%   nprm : The initial estimate of the time constant parameter vector
%          and the moving average parameter vector.
%
% Output variables:
%   omega : The column vector of the transmission zeros.
%   delta : The column vector of the poles.
%   phi   : The column vector of the autoregressive parameters.
%   theta : The column vector of the moving average parameters.
%   varat : Variance of the white noise.
%
% Written by Ky M. Vu on May 15th 1996
%
nob=length(yt);
for i=1:nob-r-f-1
  y(i,1)=yt(nob+1-i,1);
  for j=1:r+1 u(i,j)=ut(nob+1-f-i-j,1); end;
end;
m=max(p,q);
%
w0=[eye(nob-r-f-m-1) zeros(nob-r-f-m-1,m)];
%
esp=1; lesp=1; iter=1;
%
% Start iterating
%
while (iter<100&esp>1.0e-7)
  del=nprm(1:s,:);
  the=nprm(s+1:s+q,:);
  psi(1,1)=del(1,1);
  for k=2:nob-r-f-2
    sum=0;
    for l=1:k-1
      if (l>=k-s) sum=sum+psi(l,1)*del(k-l,1); end;
    end;
    if (k<=s) psi(k,1)=sum+del(k,1);
    else psi(k,1)=sum; end;
  end;
%
% Calculate the optimal omega vector
%
  for i=1:nob-r-f-1
    for j=1:nob-r-f-1
      if (j==i) mpsi(i,j)=1;
      elseif (j>i) mpsi(i,j)=psi(j-i,1);
      else mpsi(i,j)=0; end;
    end;
  end;
  omg=inv(u'*mpsi'*mpsi*u)*u'*mpsi'*y;
%
% and its derivatives w.r.t. the delta's
%
  for i=1:s
    for k=1:nob-f-r-2
      sum=0;
      for l=1:k-1
        if (l>=k-s) sum=sum+dpsi(l,i)*del(k-l,1); end;
        if (l==k-i) sum=sum+psi(l,1); end;
      end;
      if (k==i) sum=sum+1; end;  % psi_0 = 1 term
      dpsi(k,i)=sum;
    end;
    for k=1:nob-f-r-1
      for l=1:nob-f-r-1
        if (l<=k) dipsi(k,l)=0;
        else dipsi(k,l)=dpsi(l-k,i); end;
      end;
    end;
    domg(:,i)=inv(u'*mpsi'*mpsi*u)*(u'*dipsi'*y-...
      (u'*dipsi'*mpsi*u+u'*mpsi'*dipsi*u)*omg);
    g(i,1)=(dipsi*u*omg+mpsi*u*domg(:,i))'*(y-mpsi*u*omg);
  end;
%
% Now form the disturbance matrices
%
  for k=1:m
    wi=zeros(nob-r-f-m-1,nob-r-f-1);
    for i=1:nob-r-f-m-1
      for j=1:nob-r-f-1
        if (j==i+k) wi(i,j)=1; end;
      end;
    end;
    if (k<=p) n1(:,k)=wi*(y-mpsi*u*omg); end;
    if (k<=q) n2(:,k)=wi*(y-mpsi*u*omg); end;
  end;
%
  gamma(1,1)=the(1,1);
  for k=2:nob-r-f-m-2
    sum=0;
    for l=1:k-1
      if (l>=k-q) sum=sum+gamma(l,1)*the(k-l,1); end;
    end;
    if (k<=q) gamma(k,1)=sum+the(k,1);
    else gamma(k,1)=sum; end;
  end;
%
% Calculate the optimal phi vector
%
  for i=1:nob-r-f-m-1
    for j=1:nob-r-f-m-1
      if (j==i) mgamma(i,j)=1;
      elseif (j>i) mgamma(i,j)=gamma(j-i,1);
      else mgamma(i,j)=0; end;
    end;
  end;
  ophi=inv(n1'*mgamma'*mgamma*n1)*n1'*mgamma'*...
    (w0*(y-mpsi*u*omg)+mgamma*n2*the);
%
% and its first order derivatives w.r.t. delta
%
  for i=1:s
    for k=1:nob-r-f-1
      for l=1:nob-r-f-1
        if (l<=k) dipsi(k,l)=0;
        else dipsi(k,l)=dpsi(l-k,i); end;
      end;
    end;
    for j=1:m
      wi=zeros(nob-r-f-m-1,nob-r-f-1);
      for k=1:nob-r-f-m-1
        for l=1:nob-r-f-1
          if (l==k+j) wi(k,l)=1; end;
        end;
      end;
      if (j<=p) din1(:,j)=-wi*(dipsi*u*omg+mpsi*u*domg(:,i)); end;
      if (j<=q) din2(:,j)=-wi*(dipsi*u*omg+mpsi*u*domg(:,i)); end;
    end;
%
% The derivatives of phi w.r.t. delta
%
    dphidel(:,i)=inv(n1'*mgamma'*mgamma*n1)*...
      (din1'*mgamma'*(w0*(y-mpsi*u*omg)+mgamma*n2*the)+...
      n1'*mgamma'*(-w0*dipsi*u*omg-w0*mpsi*u*domg(:,i)+...
      mgamma*din2*the)-(din1'*mgamma'*...
      mgamma*n1+n1'*mgamma'*mgamma*din1)*ophi);
  end;
%
  for i=1:q
    for k=1:nob-r-f-m-2
      sum=0;
      for l=1:k-1
        if (l>=k-q) sum=sum+dgamma(l,i)*the(k-l,1); end;
        if (l==k-i) sum=sum+gamma(l,1); end;
      end;
      if (k==i) sum=sum+1; end;  % gamma_0 = 1 term
      dgamma(k,i)=sum;
    end;
    for k=1:nob-r-f-m-1
      for l=1:nob-r-f-m-1
        if (l<=k) digamma(k,l)=0;
        else digamma(k,l)=dgamma(l-k,i); end;
      end;
    end;
    ei=zeros(q,1); ei(i,1)=1;
%
% and w.r.t. theta
%
    dphithe(:,i)=inv(n1'*mgamma'*mgamma*n1)*...
      (n1'*digamma'*(w0*(y-mpsi*u*omg)+mgamma*n2*the)+...
      n1'*mgamma'*(digamma*n2*the+mgamma*n2*ei)-...
      (n1'*digamma'*mgamma*n1+n1'*mgamma'*digamma*n1)*ophi);
    g(i+s,1)=(digamma*n2*the+mgamma*n2*ei-...
      digamma*n1*ophi-mgamma*n1*dphithe(:,i))'*...
      (w0*(y-mpsi*u*omg)+mgamma*n2*the-mgamma*n1*ophi);
  end;
%
% Calculate the derivative matrix
%
  for i=1:s+q
    for j=1:s+q
      if (i<=s)
        if (j<=s)
          for k=1:nob-f-r-2
            sum=0;
            for l=1:k-1
              if (l>=k-s) sum=sum+d2psi(l,1)*del(k-l,1); end;
              if (l==k-i) sum=sum+dpsi(l,j); end;
              if (l==k-j) sum=sum+dpsi(l,i); end;
            end;
            d2psi(k,1)=sum;
          end;
          for k=1:nob-f-r-1
            for l=1:nob-f-r-1
              if (l<=k) dipsi(k,l)=0; djpsi(k,l)=0; dijpsi(k,l)=0;
              else dipsi(k,l)=dpsi(l-k,i); djpsi(k,l)=dpsi(l-k,j);
                dijpsi(k,l)=d2psi(l-k,1); end;
            end;
          end;
          domgdeldel=inv(u'*mpsi'*mpsi*u)*(u'*dijpsi'*y-...
            (u'*dijpsi'*mpsi*u+u'*dipsi'*djpsi*u+...
            u'*djpsi'*dipsi*u+u'*mpsi'*dijpsi*u)*omg-...
            (u'*dipsi'*mpsi*u+u'*mpsi'*dipsi*u)*domg(:,j)-...
            (u'*djpsi'*mpsi*u+u'*mpsi'*djpsi*u)*domg(:,i));
          dg(i,j)=(dijpsi*u*omg+dipsi*u*domg(:,j)+djpsi*u*domg(:,i)+...
            mpsi*u*domgdeldel)'*(y-mpsi*u*omg)-...
            (dipsi*u*omg+mpsi*u*domg(:,i))'*...
            (djpsi*u*omg+mpsi*u*domg(:,j));
        else
          dg(i,j)=0;
        end;
      else
        ii=i-s;
        if (j<=s)
          jj=j;
          for k=1:nob-r-f-1
            for l=1:nob-r-f-1
              if (l<=k) djpsi(k,l)=0;
              else djpsi(k,l)=dpsi(l-k,jj); end;
            end;
          end;
          for kk=1:m
            wi=zeros(nob-r-f-m-1,nob-r-f-1);
            for k=1:nob-r-f-m-1
              for l=1:nob-r-f-1
                if (l==k+kk) wi(k,l)=1; end;
              end;
            end;
            if (kk<=p) djn1(:,kk)=-wi*(djpsi*u*omg+mpsi*u*domg(:,jj)); end;
            if (kk<=q) djn2(:,kk)=-wi*(djpsi*u*omg+mpsi*u*domg(:,jj)); end;
          end;
        else
          jj=j-s;
          for k=1:nob-r-f-m-2
            sum=0;
            if (k