System Identification, Control Algorithms and Control Interval for the Box-Jenkins Dynamic Model Structure

by Ky Minh Vu, M.Eng., McMaster University, 1980

A Thesis submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy in The Faculty of Graduate Studies, Department of Chemical and Bio-Resource Engineering.

We accept this thesis as conforming to the required standard.

The University of British Columbia, June, 1997. © Ky Minh Vu, 1997

In presenting this thesis in partial fulfillment of the requirements for an advanced degree at the University of British Columbia, I agree that the library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of the Department of Chemical and Bio-Resource Engineering or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written consent or permission.

Department of Chemical and Bio-Resource Engineering, The University of British Columbia, Vancouver, B.C., Canada

Abstract

The Box-Jenkins model of a discrete control system has been studied. First the transfer function model was identified numerically via the derivatives of the variance of the disturbance; then the disturbance model was identified via the derivative of the variance of the generating white noise. Once the model had been identified, an approach to obtain the optimal gains of a discrete PID controller was suggested. In an adaptive environment, the recursive least determinant self-tuning controller was designed to calculate the best possible control action based only on the presumed orders of the controller and without knowledge of the delay. The problem of determining a possibly slower control interval of a control loop was solved by modelling a skipped ARIMA through matrix algebra and a robust numerical procedure.
Contents

Abstract
Contents
List of Figures
List of Tables
Acknowledgements

1 Introduction
  1.1 Introduction
  1.2 The Research Areas of the Thesis
  1.3 The Objectives of the Thesis
  1.4 The Contribution of the Thesis
  1.5 Outline of the Thesis

2 The Discrete Control System
  2.1 Introduction
  2.2 The Discrete Control System
  2.3 The System Models
    2.3.1 The Box-Jenkins Model
    2.3.2 The Åström Model
    2.3.3 The State-Space Model
    2.3.4 Model Advantages and Disadvantages
  2.4 Conclusion

3 Identification
  3.1 Introduction
  3.2 Identification of the Box-Jenkins Model
    3.2.1 Nonparametric Methods
    3.2.2 Parametric Methods
  3.3 Identification of the Transfer Function
    3.3.1 The Linear Least Squares Theory
    3.3.2 Identification Methods
    3.3.3 The Semi-Analytical Approach
  3.4 Identification of the ARIMA
    3.4.1 Methods of Identification for an ARMA
    3.4.2 The Semi-Analytical Approach
  3.5 Identification of the Transfer Function-ARIMA
  3.6 Identification of the Predictor Form
    3.6.1 The Predictor Form
    3.6.2 Closed-Loop Data
  3.7 Examples
  3.8 Conclusion

4 Controllers
  4.1 Introduction
  4.2 The Minimum Variance Controller
  4.3 The Linear Quadratic Gaussian Controller
  4.4 The PID Controller
    4.4.1 The Time Series Variance Formulae
    4.4.2 The Minimum Variance PID Controller
    4.4.3 The Linear Quadratic Gaussian PID Controller
    4.4.4 The Pole Placement PID Controller
    4.4.5 Examples
  4.5 The Self Tuning Controller
    4.5.1 The Recursive Least Squares Self Tuning Controller
    4.5.2 Convergence of the RLS Self-Tuning Algorithm
    4.5.3 The Recursive Least Determinant Self Tuning Controller
    4.5.4 Convergence of the RLD Self-Tuning Algorithm
    4.5.5 Effect of Model Mismatch
    4.5.6 Simulation Examples
  4.6 Conclusion

5 Control Interval
  5.1 Introduction
  5.2 The Sampling and Controlling Rates
    5.2.1 Sampling Too Slow
    5.2.2 Sampling Too Fast
  5.3 The Control Interval
    5.3.1 Literature Survey
    5.3.2 The Optimal Control Interval
    5.3.3 Examples
  5.4 Conclusion

6 Conclusion and Recommendations
  6.1 Conclusion
  6.2 Summary of the Thesis
  6.3 Recommendations
    6.3.1 The Nonlinear Stochastic Control System
    6.3.2 The Self Correcting Controller

Nomenclature
Appendices
  A. Mathematical Results
  B. Computer Programs
Bibliography

List of Figures

Figure 2.1. Conventional Block Diagram of a Feedback Control System
Figure 2.2. Modified Block Diagram of a Feedback Control System
Figure 3.1. Correlation Paths of a Feedback Control Loop
Figure 3.2. Input-Output Series of Identified System
Figure 3.3. Parameters of Compared Systems
Figure 3.4. PE Estimated Parameters of Compared Systems
Figure 3.5. SA Estimated Parameters of Compared Systems
Figure 3.6. Comparison of Estimated Disturbance Variances
Figure 4.1. Gain Estimation of a Delayed System
Figure 4.2. Gain Estimation of a Nonstationarily Disturbed System
Figure 4.3. Gain Estimation of a Nonminimum Phase System
Figure 4.4. Responses from PID Feedback
Figure 4.5. Block Diagram of a Self-Tuning Controller
Figure 4.6. Exponential Convergence of the RLD Self-Tuning Algorithm
Figure 4.7. Convergence of the RLD Self-Tuning Algorithm
Figure 4.8. Self-Tuning of a Correctly Estimated System
Figure 4.9. Self-Tuning of an Underestimated Order (n) System
Figure 4.10. Self-Tuning of an Underestimated Order (n) Underdamped System
Figure 4.11. Self-Tuning of an Underestimated Order (m) System
Figure 4.12. Self-Tuning of an Overestimated Order (n) System
Figure 4.13. Self-Tuning of an Overestimated Delay System
Figure 4.14. Self-Tuning of an Underestimated Delay System

List of Tables

Table 3.1. Statistics of the Generated White Noise
Table 3.2.
Statistics of the Obtained White Noise
Table 3.3. Model Comparison
Table 4.1. LQG PID Controller Design

Acknowledgements

The author would like to express his deepest and sincere appreciation to Dr. Patrick Tessier and Dr. Guy A. Dumont for their guidance, support and encouragement throughout the course of this research. He also wishes to thank Dr. Paul A. Watkinson and NSERC for financial support. He would like to express his sincere thanks to Ms. Rita Penco for her assistance in the PAPRICAN library and literature survey. He would also like to express his thanks to the system managers, Messrs. Rick Morrison, Reid Turner and Brian D. McMillan, for their help with the computer network. Last but not least he would like to thank countless friends and colleagues and the staff of PAPRICAN for making the period he spent at the Pulp and Paper Centre a pleasant time.

Chapter 1

Introduction

1.1 Introduction

Perhaps the two largest industries that employ a majority of chemical engineers are the oil and the pulp and paper industries. Recently, there have been some setbacks in both industries in Canada. The oil industry has slumped because cars are more energy efficient and houses are better insulated. It also faces the prospect of being displaced by other sources of energy, such as electric cars or a revival of the coal industry, in the next century. The pulp and paper industry in Canada has faced difficulties because of competition from developing countries and tougher environmental laws. Like the oil industry, it might also face less demand for its products in the developed countries. To make a profit or even to survive, these industries need to refine their technologies. Costs must be cut and products must be of the highest quality. Chemical plants and pulp and paper mills are required to run at optimal conditions and with the highest efficiency.
One of the areas that can improve quality, boost productivity and cut cost is process identification and control, and the technology of this area can be improved. In the following, we will discuss ways to improve the technology of identification and control.

1.2 The Research Areas of the Thesis

To build a controller for a process, we need to know the system dynamics. A good controller requires a good knowledge of the system in the control bandwidth, and a good knowledge of the system requires good techniques to identify it. From the time Gauss, K.F. (1809) introduced the method of linear least squares, this method has been the cornerstone of identification. However, it does not work well for stochastic control systems. A stochastic control system is a control system that is disturbed by random correlated sequences, and it has two parts. The dynamic part is governed by a linear relationship between the input and output variables; this relationship can be described by a rational transfer function. Similarly to the dynamic part, the stochastic part has a rational transfer function, with an uncorrelated sequence (white noise) as its input. This stochastic part disturbs the system from the steady state. It is known as an AutoRegressive Integrated Moving Average (ARIMA) time series. In this form, the stochastic control system is known as a Box-Jenkins model control system. Identification of a Box-Jenkins model control system means we have to obtain both the rational transfer function and the ARIMA for the system. The two well-known methods used so far to identify the Box-Jenkins model are the maximum likelihood and the prediction error methods (Åström, K.J. (1980)). The maximum likelihood method, because of its requirement of a normal distribution of the residuals, has fallen out of favour, leaving the prediction error method as the method of choice for identification of the Box-Jenkins model.
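The two-part structure described above can be made concrete with a small simulation. The sketch below is illustrative only: the function name, polynomial orders (first-order dynamics, ARMA(1,1) disturbance) and all coefficient values are invented for the example and are not taken from the thesis.

```python
import numpy as np

def simulate_box_jenkins(u, a, omega0=0.5, delta1=0.7, theta1=0.4, phi1=0.6, f=0):
    """Simulate a Box-Jenkins system with an illustrative structure:
         y_t = omega0/(1 - delta1*z^-1) * u_{t-f-1}          (dynamic part)
             + (1 + theta1*z^-1)/(1 - phi1*z^-1) * a_t        (stochastic part)
       u is the input record, a the white noise driving the disturbance."""
    N = len(u)
    x = np.zeros(N)  # deterministic (transfer function) part
    n = np.zeros(N)  # stochastic (ARMA) disturbance part
    for t in range(N):
        u_lag = u[t - f - 1] if t - f - 1 >= 0 else 0.0
        x_prev = x[t - 1] if t > 0 else 0.0
        n_prev = n[t - 1] if t > 0 else 0.0
        a_prev = a[t - 1] if t > 0 else 0.0
        x[t] = delta1 * x_prev + omega0 * u_lag
        n[t] = phi1 * n_prev + a[t] + theta1 * a_prev
    return x + n

rng = np.random.default_rng(0)
u = rng.standard_normal(300)        # input sequence
a = 0.1 * rng.standard_normal(300)  # white noise for the disturbance
y = simulate_box_jenkins(u, a)
```

The separation into `x` and `n` mirrors the model's separation of process dynamics and disturbance; setting `a` to zero leaves the pure deterministic response.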
If only the transfer function of the Box-Jenkins model is desired, the output error method is the method to use. In themselves, the prediction error and output error methods are not theoretically wrong or poor. But because the solution is usually obtained from a numerical procedure (Gauss-Newton), better algorithms should be used for a more accurate solution. Also, the prediction error method gives no information on the order of the system and the number of parameters. A two-stage least squares approach from the semi-analytical method introduced in this thesis will be able to determine the number of parameters. By setting the derivatives to zero, the semi-analytical method removes the linear parameters from the iteration equation and gains additional accuracy in the process. The PID is the most commonly used controller in the process industry. From the day of its introduction as an analogue controller to the recent self-contained self-tuning PID controller, a substantial literature has accumulated. From the earliest paper of Ziegler, J.G. and Nichols, N.B. (1942) to the very recent paper by Cluett, W.R. and Wang, L. (1996), research on the PID controller is still alive. This is not because the research of the past was poor, but because modern technology enables sophistication. New sets of tuning rules have been suggested for better closed-loop performance. However, a large part of the research on the PID controller has been on the analogue or continuous PID controller, in many cases assuming simple models of the control system. Fast digital control loops are now replacing the analogue control loops, and new technology for tuning digital PID controllers must be provided. The Box-Jenkins model is a very general model for describing a discrete (digital) control system, and techniques to design PID controllers for this model are desirable.
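For reference, a digital PID controller is usually written in the incremental (velocity) form. The sketch below uses the standard textbook backward-difference discretization of the continuous PID law; it is not the thesis's tuning procedure, and the gain and function names are ours.

```python
def pid_gains(Kc, Ti, Td, Ts):
    """Gains of the incremental digital PID law
         u_t = u_{t-1} + k1*e_t + k2*e_{t-1} + k3*e_{t-2},
       from the usual textbook discretization of Kc*(1 + 1/(Ti s) + Td s)
       with sampling interval Ts (backward differences)."""
    k1 = Kc * (1.0 + Ts / Ti + Td / Ts)
    k2 = -Kc * (1.0 + 2.0 * Td / Ts)
    k3 = Kc * Td / Ts
    return k1, k2, k3

def pid_step(u_prev, e0, e1, e2, gains):
    """One control move: e0 is the current error, e1 and e2 the two past errors."""
    k1, k2, k3 = gains
    return u_prev + k1 * e0 + k2 * e1 + k3 * e2
```

Designing the three gains (k1, k2, k3) directly against a Box-Jenkins model, rather than through (Kc, Ti, Td), is the viewpoint taken later in Chapter 4.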
The self-tuning controller can correct its gains to meet the control criterion of minimum variance, or minimum variance with a constraint on the variance of the input variable. Since the pioneering paper by Åström, K.J. and Wittenmark, B. (1973), various extensions have been proposed. For example, the work of Clarke, D.W. and Gawthrop, P.J. (1975) is one notable advance: a minimum variance self-tuning controller with a constraint on the input variance, i.e. a linear quadratic Gaussian self-tuning controller. The pole-zero placement self-tuning controller is another (Wellstead, P.E. et al. (1977)). However, the self-tuning controller can be further improved. A self-tuning controller requires three parameters: the delay of the system (f), the number of past control input variables it remembers (m) and the number of past controlled output variables it remembers (n). Of these parameters, m must always be greater than or equal to f, and m partly depends on f. So if m is chosen, f can be eliminated. Techniques to eliminate the need to know the delay of the system must be provided to enhance the self-tuning algorithm. The choice of the control interval has often been made by rule of thumb and experience. However, to determine whether a control loop can be controlled more slowly, the performance of the loop at the two control intervals must be compared, and the decision made upon these performances. Since the closed-loop performance can be determined just from the delay of the system and the model of the ARIMA disturbance, the problem becomes one of modelling a skipped ARIMA. MacGregor, J.F. (1976) obtained the autoregressive part of the skipped ARIMA by first obtaining the roots of the autoregressive part of the original series and raising these roots to the correct power, then transforming these roots back into the proper form for the autoregressive part of the skipped ARIMA.
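The root-raising construction just described can be sketched numerically. Assuming a skip interval s and an AR operator φ(B) = 1 - φ₁B - … - φₚBᵖ, the characteristic roots of the skipped series are the original characteristic roots raised to the power s; the coefficient values in the test are invented, and this shows only the idea the thesis sets out to improve.

```python
import numpy as np

def skipped_ar(phi, s):
    """AR coefficients of the series observed every s-th sample.
       phi = [phi1, ..., phip] of phi(B) = 1 - phi1*B - ... - phip*B^p.
       The characteristic roots are raised to the power s and the
       polynomial is re-expanded (MacGregor's root-raising idea)."""
    phi = np.asarray(phi, dtype=float)
    # characteristic polynomial: lam^p - phi1*lam^(p-1) - ... - phip
    lam = np.roots(np.concatenate(([1.0], -phi)))
    c = np.poly(lam ** s)          # monic expansion: [1, c1, ..., cp]
    return -c[1:].real             # AR coefficients of the skipped series
```

For an AR(1) with coefficient φ this reproduces the familiar result that skipping every s-th observation gives coefficient φˢ.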
This approach requires more labour, and some accuracy will be lost. The moving average part of the skipped ARIMA is obtained by solving a number of equations, which again requires more work. Matrix algebra to obtain the autoregressive parameters and a robust numerical algorithm to obtain the moving average parameters will reduce labour and increase accuracy.

1.3 The Objectives of the Thesis

The purpose of this thesis is to provide process control engineers with improved algorithms to design feedback controllers for a discrete stochastic control system. To be more specific, it provides an easy and efficient algorithm to identify a stochastic control system model. From this model, the thesis provides an easy way to design a PID controller. For adaptive systems, the thesis provides an improved self-tuning algorithm. It also improves an existing method to determine the control interval of a discrete control system.

1.4 The Contribution of the Thesis

In summary, the contributions of this thesis can be separated into three areas: identification, control algorithms and control interval. In identification, this thesis introduces formulae relating the parameters at the optimal conditions. These formulae not only reduce the dimension, or number of parameters, in the search for optimality but can also be used to test the system parameters obtained from other methods. The semi-analytical method in this thesis is more accurate than the prediction error method in the identification of the Box-Jenkins model of a control system, particularly when the number of parameters in the pole and moving average polynomials is smaller than those of the transmission zero and autoregressive polynomials. In control algorithms, this thesis suggests a numerical approach to obtaining the minimum variance and linear quadratic Gaussian gains for a PID controller. It introduces a different concept in self-tuning control, called recursive least determinant.
This concept eliminates the need to know the delay in the self-tuning algorithm. In the work on the control interval, this thesis improves an existing way to choose the control interval. With this improved approach, we can also solve similar problems in statistics, such as modelling a temporally aggregated ARIMA.

1.5 Outline of the Thesis

The thesis is organized as follows. In chapter 2, we briefly discuss a discrete stochastic control system and its model representations. In chapter 3, the identification method for a rational transfer function and an ARIMA is discussed. In chapter 4, we discuss stochastic controllers. In chapter 5, an approach to determine the control interval of a discrete control system is discussed. Chapter 6 concludes the contribution of the research project with some recommendations for future work. The necessary mathematical derivations and computer programs for the algorithms are included in the appendices of the thesis.

Chapter 2

The Discrete Control System

2.1 Introduction

In the early days of process control, most processes were under feedback with an analogue PID controller. Many control loops are still under this kind of control today; these are the low-level, independent and simple control loops. With the advent of the digital computer, the more important control loops came under digital feedback. The signals from these loops are digitized by a sampler and fed to the digital computer, normally called the process computer. This has many advantages. First, it may be economical, because one small re-entrant subroutine in the software can be used by a number of control loops. More importantly, the loops may be interactive; information stored in the computer can be shared by all the loops. This provides an opportunity for a process engineer to carry out more sophisticated control designs such as feedforward and feedback, ratio, and decoupling controls, etc.
Parallel to this, and of equal importance, is the optimization strategy in plants and mills. In most plants and mills in North America, the process computer has disappeared and been replaced by a DCS (Distributed Control System). A DCS usually includes many control loops, and each loop represents a discrete controlled process.

2.2 The Discrete Control System

A linear feedback discrete control system can be described by the block diagram in Figure 2.1, the conventional block diagram of a feedback discrete control system. In this block diagram, if the set point y_sp,t is constant and the disturbance n_t is not, we say the system is subject to stochastic disturbance and the controller is a regulator. This is the regulating case, because the controller regulates the system at a specific point, the set point. In the opposite case, when n_t = 0 and y_sp,t is not constant, the controller is a deterministic controller. This is the servomechanism or tracking case, because the controller is designed to track the set point. However, the difference between these two cases is minor and there is a duality between them (MacGregor, J.F. et al. (1984)). In the following, we will try to unify these two cases. We redraw the block diagram of Figure 2.1 and present it in the block diagram of Figure 2.2.

Figure 2.1. Conventional Block Diagram of a Feedback Control System.

Figure 2.2. Modified Block Diagram of a Feedback Control System.

From Figure 2.2, we can write

y_{t+f+1} = -y_{sp,t+f+1} + [ω(z^{-1})/δ(z^{-1})] u_t + [θ(z^{-1})/φ(z^{-1})] a_{t+f+1}    (2.2)
          = [ω(z^{-1})/δ(z^{-1})] u_t - y_{sp,t+f+1} + n_{t+f+1}    (2.3)
Now we consider the case where either y_{sp,t+f+1} or n_{t+f+1} must be zero. Then, as far as the controller is concerned, the two problems of tracking and regulating are the same if

y_{sp,t+f+1} = -n_{t+f+1}    (2.4)

The minus sign is due to the fact that the disturbances are opposite in nature. In the deterministic case, we assume the system is at steady state and then bring it to a new level. In the stochastic case, we assume the system has been disturbed and is at a level different from the desired level, and the purpose of the controller is to bring the system back to the desired level. As mentioned earlier, the tracking and regulating problems are equivalent if their disturbance models are equivalent. But with the regulating case, or stochastic control, we are equipped with more tools for analysis and design. Therefore, we will discuss only the stochastic control system in this thesis. Because of this duality, the word controller will occasionally be seen in place of the word regulator in this thesis.

2.3 The System Models

2.3.1 The Box-Jenkins Model

With y_{sp,t+f+1} = 0, Equation (2.2) gives us

y_{t+f+1} = [ω(z^{-1})/δ(z^{-1})] u_t + [θ(z^{-1})/φ(z^{-1})] a_{t+f+1}    (2.5)

In the above equation, the roots (in z^{-1}) of the polynomial ω(z^{-1}) are called the zeros of the system, while the roots of the polynomial δ(z^{-1}) are called the poles of the system. The polynomials ω(z^{-1}) and δ(z^{-1}) are assumed coprime, i.e. they do not have a common root. If all the roots of ω(z^{-1}) are outside the unit circle, then the system is said to be minimum phase. In the opposite case, if one of the roots is inside the unit circle, then the system is said to be non-minimum phase. Similarly, if one of the roots of δ(z^{-1}) is inside the unit circle, the system is said to be open-loop unstable. If one of the roots of φ(z^{-1}) is on the unit circle, the system is said to be disturbed by a nonstationary disturbance. All the roots of θ(z^{-1}) are forced to be outside the unit circle by the modelling.
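These root-location tests are easy to apply numerically. In the sketch below, a sketch of ours with invented coefficient values, each polynomial is given by its coefficients in ascending powers of z^{-1}, and the conditions hold when the corresponding root magnitudes exceed one.

```python
import numpy as np

def roots_in_zinv(coeffs):
    """Roots, in the variable z^-1, of c0 + c1*z^-1 + ... + cn*z^-n."""
    return np.roots(list(coeffs)[::-1])   # np.roots wants highest power first

def classify(omega, delta, phi, theta):
    """Apply the root-location tests for a Box-Jenkins model.
       Each argument lists coefficients in ascending powers of z^-1."""
    return {
        "minimum_phase":          bool(np.all(np.abs(roots_in_zinv(omega)) > 1)),
        "open_loop_stable":       bool(np.all(np.abs(roots_in_zinv(delta)) > 1)),
        "stationary_disturbance": bool(np.all(np.abs(roots_in_zinv(phi)) > 1 + 1e-9)),
        "invertible":             bool(np.all(np.abs(roots_in_zinv(theta)) > 1)),
    }
```

Note the convention: a zero of ω(z^{-1}) outside the unit circle in z^{-1} corresponds to a system zero inside the unit circle in z, hence minimum phase.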
Since the first coefficients in the polynomials δ(z^{-1}), θ(z^{-1}) and φ(z^{-1}) are unity, they are said to be monic.

2.3.2 The Åström Model

Another model, equally well-known, is the Åström model, described mathematically below:

α(z^{-1}) y_t = β(z^{-1}) u_{t-k} + γ(z^{-1}) a_t    (2.7)

This model is sometimes called the ARMAX model, because it is an ARMA time series with an eXogenous input u_t. The polynomials α(z^{-1}), β(z^{-1}) and γ(z^{-1}) are assumed to be coprime. α(z^{-1}) is monic, while β(z^{-1}) normally is not. If γ(z^{-1}) is monic, then the variance of a_t is normally not unity. If β(z^{-1}) has a root inside the unit circle, then the system is said to be non-minimum phase. If α(z^{-1}) does not have all its roots outside the unit circle, the system is said to be open-loop unstable. As in the case of the Box-Jenkins model, γ(z^{-1}) will have all its roots outside the unit circle from the modelling.

2.3.3 The State-Space Model

A stochastic control system also has a state-space representation. It can be written as below:

x_{t+1} = A x_t + b u_{t-f} + w_t    (2.8)
y_t = c^T x_t + v_t    (2.9)

This model can be put into the following predictive form:

x̂_{t+1|t} = A x̂_{t|t-1} + b u_{t-f} + k a_t    (2.10)
y_t = c^T x̂_{t|t-1} + a_t    (2.11)

In the above equations, the matrix A is called the state transition matrix, the vector b the input or control vector, and the vector c the output vector. The vector x_t is the vector of state variables. If the vector k is the steady-state Kalman gain vector, then x̂_{t+1|t} is the optimal one-step-ahead state estimator. If A does not have all its eigenvalues inside the unit circle, the system is open-loop unstable. The white noise w_t is the process noise and the white noise v_t is the measurement noise; the process noise and the measurement noise are not correlated. The white noise a_t is sometimes called the innovation sequence, because each a_t is an innovation.
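Run as a filter, the innovations form (2.10)-(2.11) produces the sequence a_t directly, acting as a one-step-ahead predictor. The scalar example below is a sketch of ours with invented dimensions and values.

```python
import numpy as np

def innovations_predict(A, b, c, k, u, y, f=0):
    """Run the innovations representation as a one-step-ahead predictor:
         yhat_t      = c^T xhat_{t|t-1},   a_t = y_t - yhat_t
         xhat_{t+1|t} = A xhat_{t|t-1} + b u_{t-f} + k a_t
       Returns the predictions yhat and the innovation sequence a."""
    xhat = np.zeros(A.shape[0])
    yhat = np.zeros(len(y))
    a = np.zeros(len(y))
    for t in range(len(y)):
        yhat[t] = c @ xhat
        a[t] = y[t] - yhat[t]                    # the innovation at time t
        u_lag = u[t - f] if t - f >= 0 else 0.0
        xhat = A @ xhat + b * u_lag + k * a[t]
    return yhat, a

# Illustrative scalar system (values invented)
A = np.array([[0.5]]); b = np.array([1.0])
c = np.array([1.0]);   k = np.array([0.5])
```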
For this reason, the second state-space representation is sometimes called an innovations (state-space) representation.

2.3.4 Model Advantages and Disadvantages

Now we will discuss the models' advantages as well as their disadvantages. The Box-Jenkins model is the more rational: it separates the process dynamics and the disturbance. The disturbance can be stochastic, for example an ARIMA time series; in the deterministic case, the disturbance can be a set point change. This model also uses the smallest number of parameters. Since the Box-Jenkins model uses rational transfer functions, it can be clumsy for identification and control of multivariable systems. The state-space model is very convenient for that case. The Åström model can be loosely considered the intermediate model when we convert the Box-Jenkins model to the state-space model. As an intermediate model, it does not present any difficulty in the multivariable case. In the single-input single-output case, it is easier to identify, but it does not give as clear an insight into the physical system as the Box-Jenkins model does. Since we can easily convert the Box-Jenkins model to the other models, we will choose this model in this thesis as the more general case.

2.4 Conclusion

The Box-Jenkins model is a well-known model for discrete control systems; however, due to its complexity, only a relatively small amount of research has used it. Progress in control theory, for example the self-tuning controller, has often used the Åström model. In multivariable control theory, the state-space model has reigned supreme. This thesis is an effort to make the Box-Jenkins model representation more popular. All the research work will use this model. This is not due to some special preference for this model by the author, but because the Box-Jenkins model can be considered the basic model from which the other models can be derived.
Once a model is chosen, we can proceed with control problems such as identification, controller design and choice of control interval.

Chapter 3

Identification

3.1 Introduction

Identification is the first step in the design of a controller, since we must know at least some information about a process before we design a controller for it. By assuming a linear input-output model, the word identification can be considered the same as parametric estimation: a procedure to obtain estimates of the parameters of the system from a record of the process. Identification dates back to the time of Gauss, K.F., when he introduced the method of least squares. In this chapter, we will examine some well-known identification methods and introduce our semi-analytical approach.

3.2 Identification of the Box-Jenkins Model

Identification can be classified into two categories: parametric and nonparametric. In a parametric approach, we obtain the values of the parameters according to some criterion; the number of parameters is assumed known a priori. In a nonparametric approach, not only the values of the parameters but also the number of parameters must be obtained; the model has not been parameterized. In the following, we will discuss a few well-known methods of both types.

3.2.1 Nonparametric Methods

There are a few nonparametric methods, and they will be discussed in the following. The simplest is the analysis of the transient of either the step or impulse response, but this method only works well for deterministic and low-order systems. The PRBS (Pseudo Random Binary Signal) gave an enormous boost to nonparametric impulse response estimation in the mid '60s (Wellstead, P. (1981)), but it later came under criticism because of possible damage inflicted on the hardware of the control system by the PRBS.
The frequency analysis method requires a long test time for the transient effect to die out, and the input signal must be a sinusoid. Theoretically, the discrete input variable u_t cannot exactly form a sinusoid, and hence the method is normally used on continuous systems. Frequency analysis can give information at a single sinusoid; for multiple frequencies, the tool is Fourier analysis. Perhaps the most complicated nonparametric method to use is the spectral analysis method. In this method, one gets a spectrum to examine with a window; the input signal does not have to be a sinusoid. Historically, frequency, Fourier and spectral analysis were statistical tools to look for harmonics in a time series. Their generalization to identification has not been very successful or popular. The nonparametric methods that have been used quite often are the transient analysis of step response data and the cross-correlation analysis of impulse response data. From step response data, the dead time, time constant and steady-state gain of a first-order system may be estimated. For more general models, cross-correlation analysis has been used with impulse response data for identification. Box, G.E.P. and Jenkins, G.M. (1970) used the cross-correlation function between the input and output variables to identify the transfer function from the impulse response weights. The authors also suggested prewhitening the input variable for a more accurate cross-correlation. This is preferred to the PRBS because, as mentioned above, the PRBS was criticized for possible damage to the system. Sinha, N.K. et al. (1978) suggested a similar cross-correlation analysis to calculate the Markov parameters of a multivariable system, which are the impulse response weights of a single-input single-output system, and manipulated the Hankel matrix to obtain the state-space model of the transfer function.
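For a prewhitened (near-white) input, the cross-correlation estimate of the impulse response weights reduces to dividing the input-output cross-covariance at each lag by the input variance. The sketch below is a minimal illustration with invented data and weights, not the Box-Jenkins procedure in full.

```python
import numpy as np

def impulse_weights_xcorr(u, y, nlags):
    """Impulse-response weights from cross-correlation, assuming the input
       has already been prewhitened (is close to white):
         beta_k ≈ cov(u_{t-k}, y_t) / var(u_t)."""
    u = np.asarray(u, float) - np.mean(u)
    y = np.asarray(y, float) - np.mean(y)
    var_u = np.mean(u * u)
    return np.array([np.mean(u[:len(u) - k] * y[k:]) / var_u
                     for k in range(nlags + 1)])
```

When the input is not white, the same numbers estimate a smeared version of the weights, which is why prewhitening matters.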
Usually, nonparametric methods are used to obtain the number of parameters and the initial estimates of the system parameters. These initial estimates are then used in a parametric method to produce finer estimates.

3.2.2 Parametric Methods

In a parametric method, we assume that the model has been parameterized and the total number of parameters is known. There are a number of these methods known in the control literature. We will list them here for a brief reference and will discuss them in more detail later. The oldest, the (linear) least squares method, was introduced by the German mathematician Gauss, K.F. (1809). The method can give the solution in a closed form, both recursively and nonrecursively. Like the method of linear least squares, the method of instrumental variables (Young, P. (1970)) can give the solution in a closed form recursively; however, it has the disadvantage of having to choose the instrumental variables. The more commonly used method is the method of maximum likelihood. This method has the disadvantage of not giving the solution in a closed form like the methods of least squares or instrumental variables; the solution is usually obtained from a numerical optimization procedure. One method which is very close to the maximum likelihood method is the prediction error method (Åström, K.J. (1980)). As of today, this is the most used method, and it is found in many identification software packages. Another method, less well-known than the prediction error method but usually implemented similarly, is the output error method. This method is useful in the identification of only the rational transfer function. Needless to say, the problem with the parametric methods is the knowledge of the number of parameters. We will discuss these well-known methods in more detail in the next section.
This is done to see their weaknesses when applied to identification of the Box-Jenkins model, and the reason for our research.

3.3 Identification of the Transfer Function

3.3.1 The Linear Least Squares Theory

The method of least squares is the oldest statistical method for parameter estimation. The method was introduced by Karl Friedrich Gauss in 1809 in his masterpiece "Theoria Motus Corporum Coelestium in Sectionibus Conicis Solem Ambientium" (Theory of the Motion of the Heavenly Bodies Revolving around the Sun in Conic Sections), where he discussed the determination of the elliptical orbit of a planetary body. To avoid confusion, the method is sometimes called linear least squares, because there is a linear combination of the estimated parameters. The method is also known as linear regression. We consider the following system model:

y_t = ω_0 u_{t-f-1} + ω_1 u_{t-f-2} + n_t    (3.1)

with its estimation problem:

Min_{ω_0, ω_1} S_N = Σ_{t=f+2}^{N} (y_t - ω_0 u_{t-f-1} - ω_1 u_{t-f-2})²    (3.2)

The variable y_t is the regressee or regressand, and the variables u_{t-f-1}, u_{t-f-2} are the regressors. If we take the derivatives of S_N with respect to ω_0 and ω_1 and set them to zero, we obtain

Σ_{t=f+2}^{N} (y_t - ω_0 u_{t-f-1} - ω_1 u_{t-f-2}) u_{t-f-1} = 0    (3.3)
Σ_{t=f+2}^{N} (y_t - ω_0 u_{t-f-1} - ω_1 u_{t-f-2}) u_{t-f-2} = 0    (3.4)

or

Σ_{t=f+2}^{N} n_t u_{t-f-1} = 0    (3.5)
Σ_{t=f+2}^{N} n_t u_{t-f-2} = 0    (3.6)

From the last two equations, we can write

[1/(N-f-1)] Σ_{t=f+2}^{N} u_{t-f-1} n_t = 0    (3.7)
[1/(N-f-1)] Σ_{t=f+2}^{N} u_{t-f-2} n_t = 0    (3.8)

Now, if we make the assumption that n_t has a constant zero mean and we can replace the average sums of the above equations by the expectation operator, then the statistical meaning of the above two equations is that the series u_t does not crosscorrelate with n_t at lags f+1 and f+2:

γ_un(f+1) = E{u_{t-f-1} n_t} = 0    (3.9)
γ_un(f+2) = E{u_{t-f-2} n_t} = 0    (3.10)

The method of instrumental variables, which we will discuss later, makes use of this property of the least squares method.
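The orthogonality conditions (3.5)-(3.6) can be checked numerically: fit the two-regressor model of Equation (3.1) by least squares and verify that the residuals are orthogonal to the regressors. The true parameter values 0.8 and 0.3 below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
N, f = 2000, 1
u = rng.standard_normal(N)
n = rng.standard_normal(N)          # only a constant zero mean is assumed of n_t
t = np.arange(f + 2, N)
X = np.column_stack([u[t - f - 1], u[t - f - 2]])   # regressors of Eq. (3.1)
y = X @ np.array([0.8, 0.3]) + n[t]

w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)       # minimizes S_N of Eq. (3.2)
resid = y - X @ w_hat
orth = X.T @ resid                  # should vanish, as in Eqs. (3.5)-(3.6)
```

The vector `orth` is zero to numerical precision regardless of the distribution of `n`, which is exactly the point of the derivation above.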
Note that we have not made any assumption about any statistical property of n_t besides its having a constant zero mean. It can autocorrelate to any finite lag and be normally or uniformly distributed. The residual of the output error method has this statistical property. Now if in Equation (3.1) we replace u_{t-f-2} by y_{t-f-1} as below,

    y_t = ω_0 u_{t-f-1} + ω̃_1 y_{t-f-1} + n_t                                        (3.11)

then we get the following equations:

    γ_un(f+1) = 0                                                                     (3.12)
    γ_yn(f+1) = 0                                                                     (3.13)

With the second equation above, we can no longer say that n_t can autocorrelate to any lag, because if it does it might crosscorrelate with y_t at lag f+1 and this equation will not be true. If n_t autocorrelates only to lag f, this equation will be true. The prediction error method makes use of this fact, and we can have predictors that predict y_t up to f+1 steps ahead. Now if n_t is white, then the above equations will be true. The maximum likelihood method requires this characteristic from the residual of the model. From the above discussion, we can see that all these methods have one statistical property in common, and that is that the residual does not crosscorrelate with the regressor variables. This property comes from the minimization of the sum of squares of the residuals.

3.3.2 Identification Methods

Now we will see if we can apply some of what we discussed above to identify our transfer function from the model

    y_t = (ω(z⁻¹)/δ(z⁻¹)) u_{t-f-1} + n_t                                             (3.14)

which is known as the output error model.

The Least Squares Method

One way to apply the (linear) least squares method to our problem is to perform a long division of the transfer function polynomials to get the following equation:

    y_t = (ω(z⁻¹)/δ(z⁻¹)) u_{t-f-1} + n_t ≈ Σ_{i=0}^{m} β_i u_{t-f-1-i} + n_t = x_tᵀβ + n_t   (3.15)

with x_t = [u_{t-f-1}, u_{t-f-2}, …, u_{t-f-1-m}]ᵀ, then obtain the optimal parameters β_i as below:
    β̂ = [ Σ u²_{t-f-1}        Σ u_{t-f-1}u_{t-f-2}  ⋯  Σ u_{t-f-1}u_{t-f-1-m} ]⁻¹ [ Σ y_t u_{t-f-1}     ]
        [                     Σ u²_{t-f-2}          ⋯         ⋮               ]    [ Σ y_t u_{t-f-2}     ]
        [      sym.                                 ⋱         ⋮               ]    [       ⋮             ]
        [                                               Σ u²_{t-f-1-m}        ]    [ Σ y_t u_{t-f-1-m}   ]   (3.16)–(3.20)

where all sums run from t = f+2 to N. In matrix form, the least squares solution to the above equation can be written as

    β̂ = [XᵀX]⁻¹ Xᵀy                                                                   (3.21)

where the matrix X and vector y are defined as

    X = [x_{f+2}ᵀ; x_{f+3}ᵀ; …; x_Nᵀ],   y = [y_{f+2}; y_{f+3}; …; y_N]                (3.22)

The parameters β_i are the impulse response weights or Markov parameters. The problem that we might have is that the division is usually long, and we have to invert a matrix of large dimension which is normally ill-conditioned and hence gives poor estimates of the parameters β_i. Even though retrieving the ω_i and δ_i from the β_i is possible, this process usually introduces error. Another way to apply least squares theory to our problem is to multiply both sides of Equation (3.15) by the polynomial δ(z⁻¹) to obtain the following equation:

    δ(z⁻¹) y_t = ω(z⁻¹) u_{t-f-1} + δ(z⁻¹) n_t                                        (3.23)

or

    y_t = Σ_{i=1}^{s} δ_i y_{t-i} + ω(z⁻¹) u_{t-f-1} + n_t − Σ_{i=1}^{s} δ_i n_{t-i}    (3.24)

Now, we do not know the n_{t-i}, i > 0, at the time of regressing. However, if we calculate the parameters recursively, then we can approximate the n_{t-i} using the estimated parameters from the last estimation. Even though this approach does not work all the time, it has been seen to work for some systems, and the approach goes by a number of names, such as pseudolinear regression, approximate maximum likelihood and the extended least squares method. In the above least squares estimation, if the disturbance n_t is white, then the estimation is always unbiased. If it is coloured, then it will depend on whether the loop is closed or not. If the data is open-loop, then u_t and n_t are not correlated, because they are disjoint, and the estimation will be unbiased. On the other hand, if the data is closed-loop, the estimation is usually biased. We will discuss more about closed-loop data in a later section.

The Instrumental Variables Method

The method of instrumental variables is relatively new. It was introduced by Peter Young in 1970.
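Before developing the instrumental variables method, the long-division least squares fit of Equation (3.21) can be sketched in miniature: estimating the first few Markov parameters of a first-order system. This is our own illustration with invented values; the small Gaussian-elimination helper is included only to keep the sketch self-contained.

```python
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting; fine for small, well-posed systems."""
    dim = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(dim):
        piv = max(range(k, dim), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, dim):
            fac = M[r][k] / M[k][k]
            for c in range(k, dim + 1):
                M[r][c] -= fac * M[k][c]
    out = [0.0] * dim
    for k in range(dim - 1, -1, -1):
        out[k] = (M[k][dim] - sum(M[k][c] * out[c] for c in range(k + 1, dim))) / M[k][k]
    return out

random.seed(1)
N, f, m = 2000, 0, 4
w0, d1 = 1.0, 0.5                      # y_t = w0 / (1 - d1 z^-1) u_{t-f-1} + n_t

u = [random.gauss(0.0, 1.0) for _ in range(N)]
x = [0.0] * N
for t in range(N):
    x[t] = (d1 * x[t - 1] if t > 0 else 0.0) \
         + (w0 * u[t - f - 1] if t - f - 1 >= 0 else 0.0)
y = [x[t] + random.gauss(0.0, 0.05) for t in range(N)]

# Regress y_t on m+1 lagged inputs: beta_i estimates the impulse weight w0*d1^i
rows = [[u[t - f - 1 - i] for i in range(m + 1)] for t in range(m + f + 1, N)]
ys = [y[t] for t in range(m + f + 1, N)]
XtX = [[sum(r[i] * r[j] for r in rows) for j in range(m + 1)] for i in range(m + 1)]
Xty = [sum(r[i] * yy for r, yy in zip(rows, ys)) for i in range(m + 1)]
beta = solve(XtX, Xty)
```

Here the recovered weights should approximate 1, 0.5, 0.25, and so on; with a long impulse response, many more taps would be needed, which is exactly where the ill-conditioning complained about above begins to bite.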
To apply the instrumental variables method to our estimation problem, we rewrite the model as

    y_t = Σ_{i=1}^{s} δ_i y_{t-i} + ω_0 u_{t-f-1} − Σ_{i=1}^{r} ω_i u_{t-f-1-i} + e_t = x_tᵀθ + e_t   (3.25)–(3.27)

Now if we regress y_t on x_t and obtain the optimal parameter vector θ̂ as

    θ̂ = [XᵀX]⁻¹ Xᵀy                                                                   (3.28)

then the parameters will be biased, because x_t is correlated with e_t. The problem can be seen as follows. Writing y_t = ỹ_t + e_t, where ỹ_t = x_tᵀθ, the matrix solution for the parameters θ of the above equation is given by

    θ̂ = [XᵀX]⁻¹ Xᵀy = [XᵀX]⁻¹ Xᵀỹ + [XᵀX]⁻¹ Xᵀe                                        (3.29)–(3.32)

where

    X = [x_{f+2}ᵀ; …; x_Nᵀ],  y = [y_{f+2}; …; y_N],  ỹ = [ỹ_{f+2}; …; ỹ_N],  e = [e_{f+2}; …; e_N]   (3.33)

The concept of the instrumental variables method is to choose a variable vector z_t in place of the variable vector x_t such that it does not correlate with the residual e_t, ie. such that the second term in Equation (3.32) becomes zero. The parameters will then be given by

    θ̂ = [ZᵀX]⁻¹ Zᵀy                                                                   (3.34)

with

    Z = [z_{f+2}ᵀ; z_{f+3}ᵀ; …; z_Nᵀ]                                                  (3.35)

The variable vector z_t is the vector of the instrumental variables. The problem with this method is the choice of the instrumental variables.

The Maximum Likelihood Method

The method of maximum likelihood is attributed to Fisher, R. A. (1956), but the idea was also known to Gauss, K. F. To use the method, we have to be able to express the probability density function involving the estimated parameters. From this probability density function, we can derive an expression for the likelihood function, then maximize it (hence the name maximum likelihood) to obtain the desired parameters.
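Before detailing the likelihood function, the instrumental variables estimator (3.34) can be sketched numerically. The first-order dynamics, the MA(1)-coloured noise, and the choice of delayed inputs as instruments are all our own invented illustration: delayed inputs correlate with the regressors but not with the residual.

```python
import random

random.seed(2)
N = 20000
d_true, w_true = 0.7, 1.0              # y_t = d*y_{t-1} + w*u_{t-1} + e_t

u = [random.gauss(0.0, 1.0) for _ in range(N)]
v = [random.gauss(0.0, 0.5) for _ in range(N)]
e = [v[t] + 0.8 * v[t - 1] if t > 0 else v[0] for t in range(N)]   # coloured noise
y = [0.0] * N
for t in range(1, N):
    y[t] = d_true * y[t - 1] + w_true * u[t - 1] + e[t]

def fit(inst):
    # theta = [Z^T X]^-1 Z^T y for a 2-parameter model, solved as a 2x2 system
    a11 = a12 = a21 = a22 = b1 = b2 = 0.0
    for t in range(2, N):
        z1, z2 = inst(t)
        x1, x2 = y[t - 1], u[t - 1]
        a11 += z1 * x1; a12 += z1 * x2; a21 += z2 * x1; a22 += z2 * x2
        b1 += z1 * y[t]; b2 += z2 * y[t]
    det = a11 * a22 - a12 * a21
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a21 * b1) / det)

d_ols, w_ols = fit(lambda t: (y[t - 1], u[t - 1]))   # Z = X: ordinary least squares
d_iv, w_iv = fit(lambda t: (u[t - 2], u[t - 1]))     # delayed input as instrument
```

Because y_{t-1} carries part of the coloured noise, the least squares estimate of δ is biased, while the instrumental variables estimate is not; the price, as the text notes, is having to pick the instruments.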
We have

    n_t = y_t − x_tᵀθ                                                                 (3.36)

Assuming a normal distribution for n_t with zero mean and variance σ_n², we have the probability density function of n_t as

    p(n_t) = (1/(√(2π) σ_n)) exp[−n_t²/(2σ_n²)]                                        (3.37)

If all the n_t are independent, the joint probability density function for all the n_t is given by

    p(n_t, n_{t+1}, …) = p(n_t) × p(n_{t+1}) × ⋯                                       (3.38)
                       = (1/(√(2π) σ_n))^N exp[−Σ n_t²/(2σ_n²)]                         (3.39)
                       = L(ω_i, δ_i)                                                   (3.40)

The above function is called the likelihood function. Since the distribution has a constant variance σ_n², as it must for a stationary series n_t, the likelihood function has its greatest value when the exponent term is maximum, ie. when

    S = Σ_{t=f+2}^{N} n_t²                                                             (3.41)

is minimum. The difficulty of applying the maximum likelihood method to our problem is that the sequence n_t is not independent; its values are usually correlated. Secondly, nothing can be known about the probability distribution of n_t before identification. Now if the residuals a_t can be assumed to be normally distributed, and since a_t is white, we can write

    p(a_t, a_{t+1}, …) = p(a_t) × p(a_{t+1}) × ⋯                                       (3.42)
                       = (1/(√(2π) σ_a))^N exp[−Σ a_t²/(2σ_a²)]                         (3.43)

Reasoning as before, we can say the right hand side of the above equation is maximum when

    S = Σ_t a_t² = Σ_t [ (φ(z⁻¹)/θ(z⁻¹)) (y_t − (ω(z⁻¹)/δ(z⁻¹)) u_{t-f-1}) ]²           (3.44)

is minimum. So the identification criterion for the Box-Jenkins model is

    Min_{ω,δ,φ,θ} Σ_t [ (φ(z⁻¹)/θ(z⁻¹)) (y_t − (ω(z⁻¹)/δ(z⁻¹)) u_{t-f-1}) ]²            (3.45)

But this means both the transfer function and the disturbance models are identified at the same time. It will be difficult to determine the orders of the polynomials ω(z⁻¹), δ(z⁻¹), φ(z⁻¹) and θ(z⁻¹).

The Prediction Error Method

In the paper by Astrom, K. J.
(1980), the residual a_t of the maximum likelihood method discussed above is interpreted as the prediction error, and we can write

    y_t = ŷ(t|t−1) + a_t                                                               (3.46)
        = E{y_t | t−1} + a_t                                                           (3.47)

Now if n_t in our model is white, then

    ŷ(t|t−1) = (ω(z⁻¹)/δ(z⁻¹)) u_{t-f-1}                                               (3.48)

If n_t is not white, which is the usual case, then by defining γ(z⁻¹) through

    θ(z⁻¹) = φ(z⁻¹) + z⁻¹ γ(z⁻¹)                                                       (3.49)

we can write

    y_t = (ω(z⁻¹)/δ(z⁻¹)) u_{t-f-1} + (θ(z⁻¹)/φ(z⁻¹)) a_t                               (3.50)
        = (ω(z⁻¹)/δ(z⁻¹)) u_{t-f-1} + (z⁻¹γ(z⁻¹)/θ(z⁻¹)) [ y_t − (ω(z⁻¹)/δ(z⁻¹)) u_{t-f-1} ] + a_t   (3.51)–(3.54)
        = ŷ(t|t−1) + a_t                                                               (3.55)

Like the method of maximum likelihood, it will be very difficult for us to determine the orders of the involved polynomials if we want to use the prediction error method. It cannot identify only the transfer function unless the disturbance n_t is white.

The Output Error Method

The output error method does not give us complicated polynomials to identify, because the equation it uses is the model of Equation (3.14) itself. The method does not require any special statistical property of n_t except that it has a constant zero mean and does not crosscorrelate with u_t. It should be the favorite method for us to identify the transfer function of the Box-Jenkins model. However, the output error method does have a weakness: it does not exploit the fact that the parameters in the polynomial ω(z⁻¹) are linear parameters and can be removed from the numerical identification algorithm for more accuracy. In practice, it has been implemented recursively (Dugard, L. and Landau, I. D. (1980)) or similarly to the prediction error method, as in the function oe of the MATLAB system identification toolbox. We can summarize the methods discussed above as follows. The (linear) least squares method has an accuracy problem.
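To make the output error idea concrete before summarizing: simulate the model output for candidate parameters and keep the pair that minimizes the output error sum of squares. The crude grid search below is our own illustration with invented values, not the thesis's algorithm (a systematic coarse search of this kind reappears later as an initializer for Newton-Raphson).

```python
import random

random.seed(3)
N, f = 1000, 1
d_true, w_true = 0.5, 1.0

u = [random.gauss(0.0, 1.0) for _ in range(N)]

def simulate(d1, w0):
    # Noise-free model output x_t = d1*x_{t-1} + w0*u_{t-f-1}
    xs = [0.0] * N
    for t in range(N):
        xs[t] = (d1 * xs[t - 1] if t > 0 else 0.0) \
              + (w0 * u[t - f - 1] if t - f - 1 >= 0 else 0.0)
    return xs

y = [xt + random.gauss(0.0, 0.05) for xt in simulate(d_true, w_true)]

best = None
for i in range(-19, 20):               # d1 grid over (-0.95, 0.95)
    for j in range(0, 41):             # w0 grid over [0, 2]
        d1, w0 = i * 0.05, j * 0.05
        xs = simulate(d1, w0)
        sse = sum((y[t] - xs[t]) ** 2 for t in range(N))
        if best is None or sse < best[0]:
            best = (sse, d1, w0)

_, d_hat, w_hat = best
```

The minimizer only requires n_t to have constant zero mean and be uncorrelated with u_t, exactly as the text states; no model of the disturbance is ever fitted.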
The instrumental variables method has the problem of choosing the instrumental variables. The methods of maximum likelihood and prediction error cannot identify only the rational transfer function. The output error method should be the method of choice, but it should be enhanced by removing the linear parameters from the numerical identification algorithm for more accuracy.

3.3.3 The Semi-Analytical Approach

As discussed above, all the methods considered have their roots in, or are related to, the method of least squares. To identify the Box-Jenkins model, we can have a two-stage (rational) least squares approach: we identify the rational transfer function by the least (sum of) squares of the disturbance series n_t, and identify the disturbance ARIMA model by the least squares of the white noise a_t. For lack of a good name, we will call our method the semi-analytical method, because half of the parameters can be obtained analytically. To identify the rational transfer function, we write the model as follows:

    y_t = (ω(z⁻¹)/δ(z⁻¹)) u_{t-f-1} + n_t                                              (3.56)

with

    ω(z⁻¹)/δ(z⁻¹) = (ω_0 − ω_1 z⁻¹ − ⋯ − ω_r z⁻ʳ)/(1 − δ_1 z⁻¹ − ⋯ − δ_s z⁻ˢ)           (3.57)

and the identification criterion can be written as

    Ŝ_N = Min_{δ,ω} S_N = Min_{δ,ω} Σ_{t=r+f+2}^{N} n_t²
        = Min_{δ,ω} Σ_{t=r+f+2}^{N} [ y_t − ((ω_0 − ω_1 z⁻¹ − ⋯ − ω_r z⁻ʳ)/(1 − δ_1 z⁻¹ − ⋯ − δ_s z⁻ˢ)) u_{t-f-1} ]²   (3.58)–(3.59)

with

    δ = [δ_1, δ_2, …, δ_s]ᵀ,   ω = [ω_0, −ω_1, …, −ω_r]ᵀ                                (3.60)

Now if we define

    x_t = y_t − n_t = ((ω_0 − ω_1 z⁻¹ − ⋯ − ω_r z⁻ʳ)/(1 − δ_1 z⁻¹ − ⋯ − δ_s z⁻ˢ)) u_{t-f-1}   (3.61)–(3.63)

then we can write

    x_N − δ_1 x_{N-1} − ⋯ − δ_s x_{N-s} = ω_0 u_{N-f-1} − ω_1 u_{N-f-2} − ⋯ − ω_r u_{N-f-r-1}   (3.64)
    ⋮                                                                                   (3.65)
    x_{r+f+2} − δ_1 x_{r+f+1} − ⋯ − δ_s x_{r+f+2-s} = ω_0 u_{r+1} − ω_1 u_r − ⋯ − ω_r u_1        (3.66)

Now if the system is completely at steady state before we collect data, all u_t for t ∈ (−∞, 0] will be zero. This means we have

    x_{r+f+1} = ((ω_0 − ω_1 z⁻¹ − ⋯ − ω_r z⁻ʳ)/(1 − δ_1 z⁻¹ − ⋯ − δ_s z⁻ˢ)) u_r = Σ_{i=0}^{∞} β_i u_{r-i} ≈ 0   (3.67)–(3.69)
and we can safely assume

    x_{r+f+1} = 0,  x_{r+f} = 0,  …,  x_{r+f+2-s} = 0                                   (3.70)–(3.72)

Then we can write the difference equations (3.64) to (3.66) compactly as

    Δ x = U ω                                                                           (3.73)

where Δ is the banded triangular matrix built from the δ parameters, x = [x_N, x_{N-1}, …, x_{r+f+2}]ᵀ, and U is the matrix of lagged inputs with rows [u_{t-f-1}, u_{t-f-2}, …, u_{t-f-r-1}] for t = N, …, r+f+2. Inverting Δ gives

    x = Δ⁻¹ U ω = Ψ U ω                                                                 (3.74)–(3.76)

where Ψ = Δ⁻¹ is the triangular Toeplitz matrix

    Ψ = [ 1
          ψ_1  1
          ψ_2  ψ_1  1
          ⋮                     ⋱
          ψ_{N-f-r-2}  ⋯  ψ_2  ψ_1  1 ]

of the impulse response weights ψ_i of 1/δ(z⁻¹). And with y defined as

    y = [y_N, y_{N-1}, …, y_{r+f+3}, y_{r+f+2}]ᵀ                                        (3.77)

we can write the sum of squares S_N as follows:

    S_N = [y − x]ᵀ[y − x]                                                               (3.78)
        = [y − ΨUω]ᵀ[y − ΨUω]                                                           (3.79)
        = yᵀy − 2ωᵀUᵀΨᵀy + ωᵀUᵀΨᵀΨUω                                                    (3.80)

The derivative of S_N with respect to ω is

    ∂S_N/∂ω = −2UᵀΨᵀy + 2UᵀΨᵀΨUω                                                        (3.81)

By setting this derivative to zero, we obtain the optimal parameter vector ω̂:

    ω̂ = [UᵀΨᵀΨU]⁻¹ UᵀΨᵀy                                                               (3.82)

By putting the optimal value ω̂ back into the expression for S_N, differentiating it with respect to the individual δ_i and setting the derivatives to zero, we obtain s equations to solve for the components of the parameter vector δ. We proceed this way because the derivatives of the parameters ψ_i do not give nice relationships with the vector δ. Taking the derivatives, dividing by −2 and setting the results to zero, we obtain the following equations:

    g_1 = [ (∂Ψ/∂δ_1) Uω̂ + ΨU (∂ω̂/∂δ_1) ]ᵀ [ y − Ψ̂Uω̂ ] = 0                              (3.83)
    g_2 = [ (∂Ψ/∂δ_2) Uω̂ + ΨU (∂ω̂/∂δ_2) ]ᵀ [ y − Ψ̂Uω̂ ] = 0                              (3.84)
    ⋮

where

    ∂Ψ/∂δ_i = [ 0
                ψ′_1  0
                ψ′_2  ψ′_1  0
                ⋮                 ⋱ ]                                                   (3.85)–(3.86)

and the prime denotes the derivative with respect to the parameter δ_i. We can derive the relationships between the ψ_i and the δ_i. However, this will require some work, not only because the relationships are expressed in determinants, but also because of the difficulty of differentiating these determinants when the derivatives are required. Since what we are interested in are the numerical values of the parameters ψ_i, we will introduce a fast recursive calculation of these parameters and their derivatives.
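The analytic half of the method, Equation (3.82), amounts to filtering and ordinary least squares: applying Ψ to the lagged inputs is just running them through 1/δ(z⁻¹). A sketch with r = 1, s = 1 and invented values (δ taken as known here, since the δ search is the numerical half):

```python
import random

random.seed(4)
N, f = 3000, 0
d1, w0, w1 = 0.6, 1.5, 0.4             # omega(z^-1) = w0 - w1 z^-1

u = [random.gauss(0.0, 1.0) for _ in range(N)]

# v_t = u_t / (1 - d1 z^-1): applying Psi is just this recursive filter
v = [0.0] * N
for t in range(N):
    v[t] = u[t] + (d1 * v[t - 1] if t > 0 else 0.0)

# True model output plus a white disturbance
y = [w0 * (v[t - f - 1] if t - f - 1 >= 0 else 0.0)
     - w1 * (v[t - f - 2] if t - f - 2 >= 0 else 0.0)
     + random.gauss(0.0, 0.1) for t in range(N)]

# With delta fixed, regress y on the two filtered regressors (a 2x2 solve)
s11 = s12 = s22 = b1 = b2 = 0.0
for t in range(f + 2, N):
    r1, r2 = v[t - f - 1], v[t - f - 2]
    s11 += r1 * r1; s12 += r1 * r2; s22 += r2 * r2
    b1 += r1 * y[t]; b2 += r2 * y[t]
det = s11 * s22 - s12 * s12
w0_hat = (s22 * b1 - s12 * b2) / det
w1_hat = -(s11 * b2 - s12 * b1) / det   # second fitted coefficient is -w1
```

This is why the method earns the name semi-analytical: for every candidate δ the ω parameters come out in one closed-form step, and only the δ parameters need iterative search.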
By the definition of the matrix Ψ, we have ΔΨ = I, ie.

    [ 1                  ] [ 1                  ]
    [ −δ_1  1            ] [ ψ_1  1             ]
    [ ⋮        ⋱         ] [ ψ_2  ψ_1  1        ] = I                                   (3.87)
    [ −δ_s ⋯ −δ_1  1     ] [ ⋮           ⋱      ]

By multiplying the first row of the first matrix with the kth column of the second matrix on the left hand side of the above equation, we get

    ψ_k = Σ_{l=0}^{k-1} ψ_l δ_{k-l} = δ_k + Σ_{l=1}^{k-1} ψ_l δ_{k-l}                    (3.88)–(3.89)

with ψ_0 = 1 and δ_k = 0 for k > s. Now differentiating both sides of the above equation with respect to δ_i, we obtain

    ∂ψ_k/∂δ_i = ∂δ_k/∂δ_i + Σ_{l=1}^{k-1} [ (∂ψ_l/∂δ_i) δ_{k-l} + ψ_l (∂δ_{k-l}/∂δ_i) ]   (3.90)

with

    ∂δ_k/∂δ_i = 1 if k = i,  0 otherwise                                                 (3.91)–(3.92)

To find the partial derivative of the vector ω̂ with respect to the individual parameters δ_i, we rewrite Equation (3.82) as follows:

    [UᵀΨ̂ᵀΨ̂U] ω̂ = UᵀΨ̂ᵀy                                                                 (3.93)

and take the derivatives of both sides with respect to δ_i. Doing this, we obtain

    Uᵀ(∂Ψ̂ᵀ/∂δ_i)Ψ̂Uω̂ + UᵀΨ̂ᵀ(∂Ψ̂/∂δ_i)Uω̂ + UᵀΨ̂ᵀΨ̂U(∂ω̂/∂δ_i) = Uᵀ(∂Ψ̂ᵀ/∂δ_i)y               (3.94)–(3.95)

and therefore

    ∂ω̂/∂δ_i = [UᵀΨ̂ᵀΨ̂U]⁻¹ [ Uᵀ(∂Ψ̂ᵀ/∂δ_i)y − Uᵀ(∂Ψ̂ᵀ/∂δ_i)Ψ̂Uω̂ − UᵀΨ̂ᵀ(∂Ψ̂/∂δ_i)Uω̂ ]        (3.96)–(3.97)

At this point we want to clarify a possible confusion in the use of some notation. In Equation (3.82), there are no circumflexes on top of the matrices Ψ, contrary to Equation (3.93). The circumflex sign ˆ is used to denote an optimal value, ie. the derivative of the minimized quantity with respect to some variable has been set to zero. In Equation (3.82), the derivative of S_N with respect to ω has been set to a zero vector, and hence we have the notation ω̂. This is true whether or not the derivatives of S_N with respect to the δ_i have been set to zero. Since in Equation (3.82) these derivatives have not been set to zero, we have the notation Ψ. However, in Equation (3.93), we were looking for the derivatives of ω̂ with respect to δ_i, which are needed in Equations (3.83) to (3.85). In these equations, the derivatives of S_N with respect to the δ_i have been set to zero, and hence the notation Ψ̂.
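The recursions (3.88) to (3.92) translate directly into code. For s = 1 the weights collapse to ψ_k = δ₁ᵏ, which gives a handy check; the finite-difference comparison at the end validates the analytic derivative. The helper names and test values below are our own.

```python
def psi_weights(delta, K):
    # psi_k = delta_k + sum_{l=1}^{k-1} psi_l * delta_{k-l}, with delta_k = 0 for k > s
    s = len(delta)
    d = lambda k: delta[k - 1] if 1 <= k <= s else 0.0
    psi = [1.0]                                    # psi_0 = 1
    for k in range(1, K + 1):
        psi.append(d(k) + sum(psi[l] * d(k - l) for l in range(1, k)))
    return psi

def psi_derivs(delta, K, i):
    # dpsi_k/ddelta_i via (3.90): product rule applied to the recursion above
    s = len(delta)
    d = lambda k: delta[k - 1] if 1 <= k <= s else 0.0
    dd = lambda k: 1.0 if k == i else 0.0          # (3.91)-(3.92)
    psi = psi_weights(delta, K)
    dpsi = [0.0]
    for k in range(1, K + 1):
        dpsi.append(dd(k) + sum(dpsi[l] * d(k - l) + psi[l] * dd(k - l)
                                for l in range(1, k)))
    return dpsi

psi = psi_weights([0.5], 6)            # expect 1, 0.5, 0.25, 0.125, ...
dpsi = psi_derivs([0.5, -0.2], 6, 1)

# Finite-difference check of the analytic derivative with respect to delta_1
eps = 1e-7
fd = [(a - b) / eps for a, b in zip(psi_weights([0.5 + eps, -0.2], 6),
                                    psi_weights([0.5, -0.2], 6))]
```

Each ψ_k and each of its derivatives costs only a short sum over earlier terms, which is exactly the fast recursive calculation promised in the text.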
We will keep this notation strategy throughout the rest of this chapter. We are now equipped with all the necessary tools to solve the equation

    g = [g_1, g_2, …, g_s]ᵀ = 0                                                         (3.98)

numerically. However, if we solve this equation by the method of Newton-Raphson, we need further differentiation. The equation for the next estimate of this method is

    [ δ_1 ]        [ δ_1 ]   [ ∂g_1/∂δ_1  ⋯  ∂g_1/∂δ_s ]⁻¹ [ g_1 ]
    [ ⋮   ]      = [ ⋮   ] − [ ⋮          ⋱  ⋮         ]    [ ⋮   ]                      (3.99)
    [ δ_s ] new    [ δ_s ]   [ ∂g_s/∂δ_1  ⋯  ∂g_s/∂δ_s ]    [ g_s ]

which means we need second order derivatives of the matrix Ψ̂ and the vector ω̂ with respect to the parameters δ_i. The second derivative of ψ_k can also be calculated recursively like the first one. We can do this by taking the derivative of the first derivative as follows:

    ∂²ψ_k/∂δ_i∂δ_j = Σ_{l=1}^{k-1} [ (∂²ψ_l/∂δ_i∂δ_j) δ_{k-l} + (∂ψ_l/∂δ_i)(∂δ_{k-l}/∂δ_j) + (∂ψ_l/∂δ_j)(∂δ_{k-l}/∂δ_i) ]   (3.100)–(3.101)

since ∂²δ_k/∂δ_i∂δ_j = 0. To find the second order partial derivative of the parameter vector ω̂ with respect to the individual parameters δ_i and δ_j, we start from the following equation:

    Uᵀ(∂Ψ̂ᵀ/∂δ_i)Ψ̂Uω̂ + UᵀΨ̂ᵀ(∂Ψ̂/∂δ_i)Uω̂ + UᵀΨ̂ᵀΨ̂U(∂ω̂/∂δ_i) = Uᵀ(∂Ψ̂ᵀ/∂δ_i)y               (3.102)

and differentiate both sides with respect to the parameter δ_j, which gives us

    ∂²ω̂/∂δ_i∂δ_j = [UᵀΨ̂ᵀΨ̂U]⁻¹ { Uᵀ(∂²Ψ̂ᵀ/∂δ_i∂δ_j)y
        − Uᵀ[ (∂²Ψ̂ᵀ/∂δ_i∂δ_j)Ψ̂ + (∂Ψ̂ᵀ/∂δ_i)(∂Ψ̂/∂δ_j) + (∂Ψ̂ᵀ/∂δ_j)(∂Ψ̂/∂δ_i) + Ψ̂ᵀ(∂²Ψ̂/∂δ_i∂δ_j) ]Uω̂
        − Uᵀ[ (∂Ψ̂ᵀ/∂δ_i)Ψ̂ + Ψ̂ᵀ(∂Ψ̂/∂δ_i) ]U(∂ω̂/∂δ_j)
        − Uᵀ[ (∂Ψ̂ᵀ/∂δ_j)Ψ̂ + Ψ̂ᵀ(∂Ψ̂/∂δ_j) ]U(∂ω̂/∂δ_i) }                                   (3.103)–(3.104)

With the above equations, we can calculate the individual elements of the Hessian of the Newton-Raphson equation.
The element at row i and column j of this matrix is given below:

    ∂g_i/∂δ_j = [ (∂²Ψ̂/∂δ_i∂δ_j)Uω̂ + (∂Ψ̂/∂δ_i)U(∂ω̂/∂δ_j) + (∂Ψ̂/∂δ_j)U(∂ω̂/∂δ_i) + Ψ̂U(∂²ω̂/∂δ_i∂δ_j) ]ᵀ [ y − Ψ̂Uω̂ ]
                − [ (∂Ψ̂/∂δ_i)Uω̂ + Ψ̂U(∂ω̂/∂δ_i) ]ᵀ [ (∂Ψ̂/∂δ_j)Uω̂ + Ψ̂U(∂ω̂/∂δ_j) ]          (3.105)

In the above discussion, we have obtained the equations for the Newton-Raphson method. This does not mean this is the only way to solve Equations (3.83) to (3.85); any other method can be tried. However, the Newton-Raphson method has its own advantages. It is quadratically convergent, and most of all, it is a very simple method known by many engineers. The only problem with this method is that it requires a starting point close to the solution. If we adopt a pseudo-recursive approach, we always have a starting point close to the solution. If this is not the choice, then we have to do a crude search for the minimum. This search must be systematic, and it is crude in the sense that we vary the values of the parameters δ_i with a coarse resolution, calculate the corresponding parameters ω_i and the sum of squares S_N. The values of δ_i that give the smallest sum of squares S_N are used as the initial estimates in the Newton-Raphson iteration. This is necessary even if one chooses not to use the Newton-Raphson method, because Equations (3.83) to (3.85) are not necessarily linear in the parameters δ_i. This means that the obtained solution might not be the global minimum solution; these equations give only an extremum solution. To be certain about the solution, we must have a crude estimate of it and of the final value Ŝ_N the algorithm gives. So by choosing the Newton-Raphson method and solving its initial estimate problem, we partly answer the question of global and local minima, which is the topic we discuss next.

Global and Local Minima

In 1974, Astrom, K. J. discussed the problem of global and local extrema in the identification of an ARIMA time series. In this paper, the author concluded that if the estimated model has the same number of parameters as the true model, there is only one unique global minimum.
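The search-then-iterate strategy just described is easy to demonstrate on a one-dimensional toy criterion with two local minima (our own function, standing in for S_N(δ); the grid spacing and iteration count are arbitrary choices):

```python
def S(x):   return (x * x - 1.0) ** 2 + 0.3 * x   # toy criterion with two minima
def g(x):   return 4.0 * x * (x * x - 1.0) + 0.3  # dS/dx, the "g" driven to zero
def dg(x):  return 12.0 * x * x - 4.0             # second derivative for the Newton step

# Crude systematic search with coarse resolution picks the starting point
grid = [-2.0 + 0.25 * k for k in range(17)]
x = min(grid, key=S)                              # lands in the global basin

# Newton-Raphson iterations: quadratically convergent near the solution
for _ in range(20):
    step = g(x) / dg(x)
    x -= step
    if abs(step) < 1e-12:
        break
```

Starting Newton from the other basin (x = 1.0) would converge to the shallow local minimum near x ≈ 0.96 instead, which is exactly why the coarse search is needed before the iteration.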
If the estimated model has fewer parameters, there can be several local minima. If the estimated model has more parameters, there will be many minima. However, the minima are on a manifold with the property that the polynomials have common roots. In 1975, Soderstrom, T. discussed the global and local minima in the identification of a rational transfer function. The discussion covered both the case where the disturbance n_t is white and the case where it is colored. The conclusion was similar to that of Astrom, K. J. for the ARIMA time series: there will be a unique global minimum if and only if we have more than or the same number of parameters.

Parameterization

Parameterization involves the determination of the number of parameters of the control system and the value of the delay, ie. we parameterize the model. Logically, we can say that if the estimated model, having the numbers of estimated parameters r, s and f, does not have enough parameters, then the variance of the disturbance n_t will be inflated by some amount caused by the missing parameters. So if we gradually increase r and s, or change f, and see no further drop in the variance of the disturbance n_t, we can claim we have reached the right number of parameters. If we have crossed this boundary, ie. if we overestimate, then the variance of n_t will tell us: the variance will no longer drop, the overestimated parameters will be almost zero, or there will be common roots in the estimated polynomials. Theoretically, the problem is easy. However, the estimation is usually imperfect, and it will give us inconclusive results from which we cannot draw an easy conclusion. In this case, we have to resort to a statistical test. The most well-known tool for this is Akaike's Information Criterion (AIC). It is given below:

    AIC = N log V_n + 2p                                                                (3.106)

where N is the number of observations, V_n is the variance of the disturbance n_t and p is the number of estimated parameters.
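Equation (3.106) in use (our own sketch): fit autoregressive models of increasing order to data simulated from an AR(2) and compare the criterion values. The simulation values and the simple least squares fit are invented for the illustration; N in the criterion is taken as the number of residuals, a common approximation.

```python
import math, random

def solve(A, b):
    # Gaussian elimination with partial pivoting for the small normal equations
    dim = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(dim):
        piv = max(range(k, dim), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, dim):
            fac = M[r][k] / M[k][k]
            for c in range(k, dim + 1):
                M[r][c] -= fac * M[k][c]
    out = [0.0] * dim
    for k in range(dim - 1, -1, -1):
        out[k] = (M[k][dim] - sum(M[k][c] * out[c] for c in range(k + 1, dim))) / M[k][k]
    return out

random.seed(5)
N = 2000
n = [0.0, 0.0]
for t in range(2, N):                  # AR(2): n_t = 0.7 n_{t-1} - 0.3 n_{t-2} + a_t
    n.append(0.7 * n[t - 1] - 0.3 * n[t - 2] + random.gauss(0.0, 1.0))

def aic(p):
    # Least squares AR(p) fit, residual variance V_n, then AIC = N log V_n + 2p
    t0 = max(p, 1)
    if p == 0:
        resid = n[t0:]
    else:
        rows = [[n[t - i] for i in range(1, p + 1)] for t in range(t0, N)]
        ys = [n[t] for t in range(t0, N)]
        XtX = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
        Xty = [sum(r[i] * yy for r, yy in zip(rows, ys)) for i in range(p)]
        phi = solve(XtX, Xty)
        resid = [yy - sum(c * r[i] for i, c in enumerate(phi))
                 for r, yy in zip(rows, ys)]
    Vn = sum(e * e for e in resid) / len(resid)
    return len(resid) * math.log(Vn) + 2 * p

scores = {p: aic(p) for p in range(5)}
```

Underfitting (p = 0 or 1) leaves a clearly inflated V_n, so the criterion drops sharply up to the true order; beyond it, the 2p penalty discourages further growth.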
The log function in Equation (3.106) tells us the criterion has its root in the maximum likelihood method. The prediction error method has its own criterion, called the FPE (Final Prediction Error) criterion:

    FPE = ((1 + p/N)/(1 − p/N)) V_n                                                     (3.107)

These criteria are given as benchmarks to discriminate between candidate models; we should choose the model with the lower criterion value. Our approach, which is similar to least squares estimation, will have a test of its own, an F-test. For this test, one can refer to Soderstrom, T. and Stoica, P. (1989).

Nonstationary Disturbance

The question of nonstationary disturbance must be answered, since we have suggested an identification scheme that identifies the transfer function and the disturbance separately. By nonstationary disturbance, it is meant that the autoregressive polynomial of the disturbance time series has one or more roots on the unit circle. Let us say it has d roots on the unit circle; then we can write

    y_t = (ω(z⁻¹)/δ(z⁻¹)) u_{t-f-1} + n_t                                               (3.108)
        ≈ Σ_{i=0}^{m} β_i u_{t-f-1-i} + (θ(z⁻¹)/φ(z⁻¹)) a_t                              (3.109)–(3.110)
        = Σ_{i=0}^{m} β_i u_{t-f-1-i} + (θ(z⁻¹)/((1 − z⁻¹)^d φ*(z⁻¹))) a_t               (3.111)

where the polynomial φ*(z⁻¹) has no roots on the unit circle. One requirement for the least squares estimation is that the residual (n_t in this case, not a_t) must have constant zero mean and unknown but constant variance. Because n_t is nonstationary, it does not have a constant variance. However, by multiplying both sides of the above equation by a factor of (1 − z⁻¹)^d, we have

    (1 − z⁻¹)^d y_t ≈ Σ_{i=0}^{m} β_i (1 − z⁻¹)^d u_{t-f-1-i} + (θ(z⁻¹)/φ*(z⁻¹)) a_t     (3.112)

or

    y*_t ≈ Σ_{i=0}^{m} β_i u*_{t-f-1-i} + (θ(z⁻¹)/φ*(z⁻¹)) a_t = (ω(z⁻¹)/δ(z⁻¹)) u*_{t-f-1} + n*_t   (3.113)–(3.114)

Now because the autoregressive polynomial of the time series n*_t has no roots on the unit circle, it is stationary and has a constant variance. The problem of nonstationary disturbance will disappear if we use y*_t instead of y_t and u*_{t-f-1} instead of u_{t-f-1} in our identification method. We get the same transfer function as in the original system.
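Differencing in code is one line per step. With a pure random-walk disturbance (d = 1), the differenced series recovers the generating white noise exactly; this toy check is our own, not an example from the thesis.

```python
import random

random.seed(6)
N = 500
a = [random.gauss(0.0, 1.0) for _ in range(N)]

# Nonstationary disturbance: n_t = n_{t-1} + a_t (one root on the unit circle)
n = [0.0] * N
for t in range(1, N):
    n[t] = n[t - 1] + a[t]

# First difference w_t = (1 - z^-1) n_t removes the unit root
w = [n[t] - n[t - 1] for t in range(1, N)]
```

The level series n_t wanders without a constant variance, while the differenced series w_t is just the white noise a_t again, so the stationarity requirement of the least squares step is restored.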
However, overdifferencing can cause difficulty in the estimation, because it increases the sample variance of the disturbance. The process of differencing always moves power from the long wavelength band to the short wavelength band, ie. to higher frequencies. This makes the data more sensitive to noise.

3.4 Identification of the ARIMA

The ARIMA time series is an important statistical tool for analyzing equally sampled series. It is called an ARIMA (AutoRegressive Integrated Moving Average) to distinguish it from another kind of time series, the harmonic time series, which has a discrete power spectrum and is composed of a number of sinusoids. The ARIMA time series has gained popularity over its harmonic counterpart due to its wide application in prediction and stochastic control. ARIMA time series modelling or identification is the art of obtaining a set of parameters that best portrays the behaviour of the process from which the time series was obtained. The ARIMA model can then be used for prediction and control. In practice, we model only an ARMA time series. If we encounter an ARIMA, we change it to an ARMA by a process called differencing, where each reading is changed by subtracting its previous reading. By identification of an ARMA n_t, where n_t is given as

    n_t = (θ(z⁻¹)/φ(z⁻¹)) a_t = ((1 − θ_1 z⁻¹ − ⋯ − θ_q z⁻q)/(1 − φ_1 z⁻¹ − ⋯ − φ_p z⁻p)) a_t   (3.115)–(3.116)

or

    n_t = φ_1 n_{t-1} + ⋯ + φ_p n_{t-p} + a_t − θ_1 a_{t-1} − ⋯ − θ_q a_{t-q}             (3.117)

we mean that we have to obtain the values of the parameters θ_i, φ_i and σ_a² (the variance of the white noise a_t) from a record of n_t or its equivalent statistics.

3.4.1 Methods of Identification for an ARMA

There are a number of identification methods listed in the literature. Mayne, D. Q. and Firoozan, F. (1982) used a three-stage linear least squares estimation method.
Even though the method is not complicated in practice, its theoretical formulation has such complications as p-consistency, which means the asymptotic bias tends to zero as the degree p of the autoregressive polynomial tends to infinity, and p-efficiency, which means the asymptotic efficiency approaches the theoretical maximum as p tends to infinity. Other contributions are the Corner Method of Beguin, J. M. et al. (1980) and the R and S Array Method of Gray, H. L. et al. (1978). But these methods are suited only for identification of the orders of the time series, and their estimates are used as initial estimates for other methods. The identification of an ARMA parallels the identification of a stochastic control system, because an ARMA is a linear stochastic control system with the input variable u_t = 0. This means methods like maximum likelihood and prediction error are also widely used. The prediction error method is the method of choice used by the MATLAB software in its identification package.

3.4.2 The Semi-Analytical Approach

Since an ARMA time series also has a rational form, our first reaction would be to formulate its identification problem similarly to the identification of a rational transfer function. Since the generating white noise a_t of the time series has the minimum variance property, the identification problem can be established as the following optimization problem, without any further statistical property of the white noise such as its distribution:

    Min_{θ,φ} E a_t²                                                                    (3.118)

In practice, with the assumption of ergodicity, which means we can replace the expectation operator by the average sum, the identification problem becomes equivalent to the following optimization problems:

    Min_{θ,φ} E a_t² ≡ Min_{θ,φ} (1/(N−m)) Σ_{t=m+1}^{N} a_t² ≡ (1/(N−m)) Min_{θ,φ} S_N   (3.119)–(3.121)

with m the larger of the two integers p and q.
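The quantity being minimized in (3.118) to (3.121) is easy to compute for any candidate (φ, θ): invert the ARMA filter to get the residuals and sum their squares. A sketch for a known ARMA(1,1) (our own invented values; the start-up value of the residual recursion is set to zero, and its effect dies out geometrically):

```python
import random

random.seed(7)
N = 400
phi1, th1 = 0.8, 0.4                   # n_t = phi1 n_{t-1} + a_t - th1 a_{t-1}

a = [random.gauss(0.0, 1.0) for _ in range(N)]
n = [a[0]]
for t in range(1, N):
    n.append(phi1 * n[t - 1] + a[t] - th1 * a[t - 1])

def residuals(p1, t1):
    # Invert the ARMA filter: a_t = n_t - p1 n_{t-1} + t1 a_{t-1}, with a_0 set to 0
    ah = [0.0]
    for t in range(1, N):
        ah.append(n[t] - p1 * n[t - 1] + t1 * ah[t - 1])
    return ah

a_hat = residuals(phi1, th1)           # start-up error decays like th1**t
sse_true = sum(x * x for x in a_hat[1:])
sse_off = sum(x * x for x in residuals(0.5, 0.1)[1:])
```

At the true parameters the recursion recovers the generating white noise (after the burn-in), so the sum of squares is smaller there than at mismatched parameters, which is what the optimization exploits.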
To solve our identification problem as before, we rewrite the optimization problem as

    Ŝ_N = Min_{θ,φ} S_N = Min_{θ,φ} Σ_{t=m+1}^{N} (n_t − n̂_t)²
        = Min_{θ,φ} Σ_{t=m+1}^{N} [ n_t − (((φ_1 − θ_1) + (φ_2 − θ_2) z⁻¹ + ⋯ + (φ_m − θ_m) z⁻ᵐ⁺¹)/(1 − θ_1 z⁻¹ − ⋯ − θ_q z⁻q)) n_{t-1} ]²   (3.122)–(3.124)

From the above equation, we can see that the time series can actually be written as follows:

    n_t = (((φ_1 − θ_1) + ⋯ + (φ_m − θ_m) z⁻ᵐ⁺¹)/(1 − θ_1 z⁻¹ − ⋯ − θ_q z⁻q)) n_{t-1} + a_t
        = n̂(t|t−1) + a_t                                                                (3.125)–(3.126)

The first term on the right hand side of the above equation is the one-step-ahead optimal predictor for n_t. If we treat n_t as y_t and n_{t-1} as u_{t-f-1}, then we have an identification problem very similar to the identification of a rational transfer function. Similarly to the case of modelling the transfer function, we can write

    Θ x = N_1 φ − N_2 θ                                                                 (3.127)

where Θ is the banded triangular matrix built from the θ parameters, x = [x_N, x_{N-1}, …, x_{m+1}]ᵀ with x_t = n_t − a_t, and N_1 and N_2 are the matrices of lagged values of n_t defined below. Therefore

    x = Θ⁻¹ [N_1 φ − N_2 θ] = Γ [N_1 φ − N_2 θ]                                          (3.128)–(3.130)

where Γ = Θ⁻¹ is the triangular Toeplitz matrix

    Γ = [ 1
          γ_1  1
          γ_2  γ_1  1
          ⋮                  ⋱
          γ_{N-m-1}  ⋯  γ_1  1 ]

of the impulse response weights γ_i of 1/(1 − θ_1 z⁻¹ − ⋯ − θ_q z⁻q), and

    N_1 = [ n_{N-1}  n_{N-2}  ⋯  n_{N-p}
            ⋮                     ⋮
            n_m      n_{m-1}  ⋯  n_{m-p+1} ],
    N_2 = [ n_{N-1}  n_{N-2}  ⋯  n_{N-q}
            ⋮                     ⋮
            n_m      n_{m-1}  ⋯  n_{m-q+1} ]                                             (3.131)

This case is more complicated than the case of the rational transfer function because of the existence of the matrix N_2. However, the approach to solving the problem is the same, and the case does not cause any further technical difficulty. As before, we define n as

    n = [n_N, n_{N-1}, …, n_{m+2}, n_{m+1}]ᵀ                                             (3.132)

then write the sum of squares S_N as follows:

    S_N = [n − x]ᵀ[n − x] = [n + ΓN_2θ − ΓN_1φ]ᵀ[n + ΓN_2θ − ΓN_1φ]                       (3.133)–(3.134)

The optimal parameter vector φ̂ can be obtained by differentiating the above equation with respect to φ and setting the derivative to zero.
Doing this, we obtain

    φ̂ = [N_1ᵀΓᵀΓN_1]⁻¹ N_1ᵀΓᵀ [n + ΓN_2θ]                                               (3.135)

Now if we take the derivative of S_N with respect to θ_i, we obtain

    ∂S_N/∂θ_i = 2 [ (∂Γ/∂θ_i)N_2θ + ΓN_2e_i − (∂Γ/∂θ_i)N_1φ̂ − ΓN_1(∂φ̂/∂θ_i) ]ᵀ [ n + ΓN_2θ − ΓN_1φ̂ ]   (3.136)–(3.137)

where e_i is a column vector which has a unity value at row i and zeros elsewhere. The problem of identification of an ARMA becomes the problem of solving the following set of equations numerically:

    h_1 = [ (∂Γ̂/∂θ_1)N_2θ + Γ̂N_2e_1 − (∂Γ̂/∂θ_1)N_1φ̂ − Γ̂N_1(∂φ̂/∂θ_1) ]ᵀ [ n + Γ̂N_2θ − Γ̂N_1φ̂ ] = 0   (3.138)
    ⋮                                                                                    (3.139)
    h_q = [ (∂Γ̂/∂θ_q)N_2θ + Γ̂N_2e_q − (∂Γ̂/∂θ_q)N_1φ̂ − Γ̂N_1(∂φ̂/∂θ_q) ]ᵀ [ n + Γ̂N_2θ − Γ̂N_1φ̂ ] = 0   (3.140)

The above equations call for the derivative of φ̂ with respect to θ_i. To obtain the expression for this derivative, we start with the equation

    [N_1ᵀΓ̂ᵀΓ̂N_1] φ̂ = N_1ᵀΓ̂ᵀ [n + Γ̂N_2θ]                                                 (3.141)

and differentiate both sides with respect to θ_i to obtain

    N_1ᵀ[ (∂Γ̂ᵀ/∂θ_i)Γ̂ + Γ̂ᵀ(∂Γ̂/∂θ_i) ]N_1φ̂ + N_1ᵀΓ̂ᵀΓ̂N_1(∂φ̂/∂θ_i)
        = N_1ᵀ(∂Γ̂ᵀ/∂θ_i)[n + Γ̂N_2θ] + N_1ᵀΓ̂ᵀ[ (∂Γ̂/∂θ_i)N_2θ + Γ̂N_2e_i ]                  (3.142)

By moving the term containing the parameter vector φ̂ to the right hand side of the above equation, we obtain the expression for the derivative vector ∂φ̂/∂θ_i as follows:

    ∂φ̂/∂θ_i = [N_1ᵀΓ̂ᵀΓ̂N_1]⁻¹ { N_1ᵀ(∂Γ̂ᵀ/∂θ_i)[n + Γ̂N_2θ] + N_1ᵀΓ̂ᵀ[ (∂Γ̂/∂θ_i)N_2θ + Γ̂N_2e_i ]
        − N_1ᵀ[ (∂Γ̂ᵀ/∂θ_i)Γ̂ + Γ̂ᵀ(∂Γ̂/∂θ_i) ]N_1φ̂ }                                        (3.143)

And we also need the following recursion for the derivatives of the weights γ_k, obtained as in the transfer function case:

    ∂γ_k/∂θ_i = ∂θ_k/∂θ_i + Σ_{l=1}^{k-1} [ (∂γ_l/∂θ_i) θ_{k-l} + γ_l (∂θ_{k-l}/∂θ_i) ]    (3.144)

Now if we want to solve the above set of equations by the method of Newton-Raphson, we must take the derivatives of the individual functions h_i with respect to the θ_j. This will call for the second order derivatives of φ̂ and Γ̂. Proceeding as in the section on identification of the transfer function, we can derive

    ∂²γ_k/∂θ_i∂θ_j = Σ_{l=1}^{k-1} [ (∂²γ_l/∂θ_i∂θ_j) θ_{k-l} + (∂γ_l/∂θ_i)(∂θ_{k-l}/∂θ_j) + (∂γ_l/∂θ_j)(∂θ_{k-l}/∂θ_i) ]   (3.145)

and, starting again from Equation (3.142),
we can differentiate both sides with respect to θ_j and solve for the second derivative of φ̂:

    ∂²φ̂/∂θ_i∂θ_j = [N_1ᵀΓ̂ᵀΓ̂N_1]⁻¹ { N_1ᵀ(∂²Γ̂ᵀ/∂θ_i∂θ_j)[n + Γ̂N_2θ]
        + N_1ᵀ(∂Γ̂ᵀ/∂θ_i)[ (∂Γ̂/∂θ_j)N_2θ + Γ̂N_2e_j ] + N_1ᵀ(∂Γ̂ᵀ/∂θ_j)[ (∂Γ̂/∂θ_i)N_2θ + Γ̂N_2e_i ]
        + N_1ᵀΓ̂ᵀ[ (∂²Γ̂/∂θ_i∂θ_j)N_2θ + (∂Γ̂/∂θ_i)N_2e_j + (∂Γ̂/∂θ_j)N_2e_i ]
        − N_1ᵀ[ (∂²Γ̂ᵀ/∂θ_i∂θ_j)Γ̂ + (∂Γ̂ᵀ/∂θ_i)(∂Γ̂/∂θ_j) + (∂Γ̂ᵀ/∂θ_j)(∂Γ̂/∂θ_i) + Γ̂ᵀ(∂²Γ̂/∂θ_i∂θ_j) ]N_1φ̂
        − N_1ᵀ[ (∂Γ̂ᵀ/∂θ_i)Γ̂ + Γ̂ᵀ(∂Γ̂/∂θ_i) ]N_1(∂φ̂/∂θ_j)
        − N_1ᵀ[ (∂Γ̂ᵀ/∂θ_j)Γ̂ + Γ̂ᵀ(∂Γ̂/∂θ_j) ]N_1(∂φ̂/∂θ_i) }                                (3.146)–(3.148)

The element at row i and column j of the Hessian in this case is given as

    ∂h_i/∂θ_j = [ (∂²Γ̂/∂θ_i∂θ_j)N_2θ + (∂Γ̂/∂θ_i)N_2e_j + (∂Γ̂/∂θ_j)N_2e_i − (∂²Γ̂/∂θ_i∂θ_j)N_1φ̂
                  − (∂Γ̂/∂θ_i)N_1(∂φ̂/∂θ_j) − (∂Γ̂/∂θ_j)N_1(∂φ̂/∂θ_i) − Γ̂N_1(∂²φ̂/∂θ_i∂θ_j) ]ᵀ [ n + Γ̂N_2θ − Γ̂N_1φ̂ ]
                + [ (∂Γ̂/∂θ_i)N_2θ + Γ̂N_2e_i − (∂Γ̂/∂θ_i)N_1φ̂ − Γ̂N_1(∂φ̂/∂θ_i) ]ᵀ [ (∂Γ̂/∂θ_j)N_2θ + Γ̂N_2e_j − (∂Γ̂/∂θ_j)N_1φ̂ − Γ̂N_1(∂φ̂/∂θ_j) ]   (3.149)

With the above equations, we can solve the identification problem of an ARMA time series by the Newton-Raphson method.

3.5 Identification of the Transfer Function-ARIMA

When the transfer function and the ARIMA disturbance models are identified separately, this is called a split identification strategy. When both models are identified at the same time, it is called a combined identification strategy. The advantage of a split identification strategy is the ease of determination of the number of system parameters. But of course the combined identification strategy also has its strengths: it is a one-shot attempt, and we do not have to regenerate the disturbance time series. Therefore, in this section, we will also present equations to identify both the transfer function and the ARIMA disturbance simultaneously. However, our approach to this problem is different from many existing approaches. In our approach, we determine the parameters from two minimized quantities: one is the sum of squares of the disturbance time series, and the other is the sum of squares of the white noise generating this disturbance.
This is called a 2-stage least squares method. This approach is quite common in system identification. The prediction error method minimizes just the sum of squares of the generating white noise and interprets these white noise values as the prediction errors. Our approach guarantees no crosscorrelation between the input variable and the disturbance time series. This comes from the minimization of the sum of squares of the disturbance time series.

First we will correct a problem created by the simultaneous identification approach. Since we still use the information from u₁ to u_{N−f−1}, the dimensions of the vector y and the matrix U are still the same. In fact, nothing changes in the identification of the rational transfer function. This is because the interaction passes down from the identification of the rational transfer function to the identification of the ARIMA. This means we still have these results:

    [∂Ψ/∂δᵢ Uω + ΨUω_δᵢ]ᵀ[y − ΨUω] = 0,   i = 1, ⋯, s    (3.150)
    ∂S/∂ω = −2UᵀΨᵀ[y − ΨUω]    (3.151)
    UᵀΨᵀ[y − ΨUω] = 0    (3.152)

with

    ω = [UᵀΨᵀΨU]⁻¹UᵀΨᵀy    (3.153)

However, the dimensions of the vector n and the matrices N₁ and N₂ should be changed. Since we use y_t from y_{r+f+m+2} to y_N, the elements of the matrices N₁ and N₂ now run from n_{N−1} down to n_{r+f+m+2−p} and n_{r+f+m+2−q} respectively. This means

    N₁ = | n_{N−1}      n_{N−2}      ⋯  n_{N−p}       |
         | ⋮            ⋮                ⋮             |
         | n_{r+f+m+2}  n_{r+f+m+1}  ⋯  n_{r+f+m+3−p} |
         | n_{r+f+m+1}  n_{r+f+m}    ⋯  n_{r+f+m+2−p} |

    N₂ = | n_{N−1}      n_{N−2}      ⋯  n_{N−q}       |
         | ⋮            ⋮                ⋮             |
         | n_{r+f+m+2}  n_{r+f+m+1}  ⋯  n_{r+f+m+3−q} |
         | n_{r+f+m+1}  n_{r+f+m}    ⋯  n_{r+f+m+2−q} |    (3.154)

and therefore

    n = [n_N  n_{N−1}  ⋯  n_{r+f+m+3}  n_{r+f+m+2}]ᵀ    (3.155)
    W₀ = [I_{N−r−f−m−1} | 0_{(N−r−f−m−1)×m}]    (3.156)
    n = W₀[y − ΨUω]    (3.157)

which means the premultiplication by the matrix W₀ will delete the last m rows. Now we can obtain similar expressions for the columns of the matrices N₁ and N₂.
The first column of these matrices can be obtained by premultiplying the matrix y − ΨUω with the matrix W₁, where

    W₁ = [0_{(N−r−f−m−1)×1} | I_{N−r−f−m−1} | 0_{(N−r−f−m−1)×(m−1)}]    (3.158)

which means the last column of zeros of the matrix W₀ is wrapped around and becomes the first column of zeros of the matrix W₁. The premultiplication by the matrix W₁ will shift the columns up one row and delete the last m rows. In general, the matrix Wᵢ will shift up i rows and delete the m last rows. This matrix will have i first columns of zeros followed by an identity matrix and possibly some columns of zeros after that. Now we are ready to write the expressions for the matrices N₁ and N₂:

    N₁ = [W₁[y − ΨUω]  W₂[y − ΨUω]  ⋯  W_p[y − ΨUω]]
    N₂ = [W₁[y − ΨUω]  W₂[y − ΨUω]  ⋯  W_q[y − ΨUω]]    (3.159)

As the dimension of the matrices N₁ and N₂ changes, the dimension of the matrix Γ also changes. The change is not a change in theory; rather, in separate identification we redefine the number of observations in the identification of the ARMA time series, so in this combined identification we have to correct for it. The matrix Γ is then defined as follows:

    Γ = | 1                                    |
        | γ₁             1                     |
        | γ₂             γ₁       1            |
        | ⋮              ⋮            ⋱        |
        | γ_{N−r−f−m−2}  ⋯    γ₂  γ₁   1      |    (3.160)

The identification of the ARMA is the following optimization problem:

    Min_{θ,φ} [W₀[y − ΨUω] + ΓN₂θ − ΓN₁φ]ᵀ[W₀[y − ΨUω] + ΓN₂θ − ΓN₁φ]    (3.161)

and as before this optimization with respect to φ gives us

    φ = [N₁ᵀΓᵀΓN₁]⁻¹N₁ᵀΓᵀ[W₀[y − ΨUω] + ΓN₂θ]    (3.162)

From this equation, we can derive the result we obtained before with just the replacement of the vector n by the vector W₀[y − ΨUω]:

    φ_θᵢ = [N₁ᵀΓᵀΓN₁]⁻¹{N₁ᵀ ∂Γᵀ/∂θᵢ [W₀[y − ΨUω] + ΓN₂θ] + N₁ᵀΓᵀ[∂Γ/∂θᵢ N₂θ + ΓN₂eᵢ]
           − [N₁ᵀ ∂Γᵀ/∂θᵢ ΓN₁ + N₁ᵀΓᵀ ∂Γ/∂θᵢ N₁][N₁ᵀΓᵀΓN₁]⁻¹N₁ᵀΓᵀ[W₀[y − ΨUω] + ΓN₂θ]}    (3.163)

The interaction of the two identification problems comes from the equation below:

    φ_δᵢ = [N₁ᵀΓᵀΓN₁]⁻¹{∂N₁ᵀ/∂δᵢ Γᵀ[W₀[y − ΨUω] + ΓN₂θ] + N₁ᵀΓᵀ[−W₀ ∂Ψ/∂δᵢ Uω − W₀ΨUω_δᵢ + Γ ∂N₂/∂δᵢ θ]
           − [∂N₁ᵀ/∂δᵢ ΓᵀΓN₁ + N₁ᵀΓᵀΓ ∂N₁/∂δᵢ]φ}    (3.164)

To obtain the parameter vector θ, we can differentiate the identification optimization criterion and obtain a result we had before:

    [∂Γ/∂θᵢ N₂θ + ΓN₂eᵢ − ∂Γ/∂θᵢ N₁φ − ΓN₁φ_θᵢ]ᵀ[W₀[y − ΨUω] + ΓN₂θ − ΓN₁φ] = 0,   i = 1, ⋯, q    (3.165)

With all the parameters defined, we can write our identification equations as:

    g₁ = [∂Ψ/∂δ₁ Uω + ΨUω_δ₁]ᵀ[y − ΨUω] = 0    (3.166)
    g₂ = [∂Ψ/∂δ₂ Uω + ΨUω_δ₂]ᵀ[y − ΨUω] = 0    (3.167)
        ⋮
    g_s = [∂Ψ/∂δ_s Uω + ΨUω_δs]ᵀ[y − ΨUω] = 0    (3.168)
    g_{s+1} = [∂Γ/∂θ₁ N₂θ + ΓN₂e₁ − ∂Γ/∂θ₁ N₁φ − ΓN₁φ_θ₁]ᵀ[W₀[y − ΨUω] + ΓN₂θ − ΓN₁φ] = 0    (3.169)
    g_{s+2} = [∂Γ/∂θ₂ N₂θ + ΓN₂e₂ − ∂Γ/∂θ₂ N₁φ − ΓN₁φ_θ₂]ᵀ[W₀[y − ΨUω] + ΓN₂θ − ΓN₁φ] = 0    (3.170)
        ⋮
    g_{s+q} = [∂Γ/∂θ_q N₂θ + ΓN₂e_q − ∂Γ/∂θ_q N₁φ − ΓN₁φ_θq]ᵀ[W₀[y − ΨUω] + ΓN₂θ − ΓN₁φ] = 0    (3.171)

The identification equations are similar to those of the split identification case. This means we should get the same results for both cases. The only difference is - of course - the path to the final results. The Newton-Raphson equation for our problem now is

    | δ₁  |          | δ₁  |       | ∂g₁/∂δ₁      ⋯  ∂g₁/∂δ_s      ∂g₁/∂θ₁      ⋯  ∂g₁/∂θ_q      |⁻¹ | g₁      |
    | ⋮   |          | ⋮   |       | ⋮                ⋮             ⋮                ⋮             |   | ⋮       |
    | δ_s |       =  | δ_s |    −  | ∂g_s/∂δ₁     ⋯  ∂g_s/∂δ_s     ∂g_s/∂θ₁     ⋯  ∂g_s/∂θ_q     |   | g_s     |
    | θ₁  |          | θ₁  |       | ∂g_{s+1}/∂δ₁ ⋯  ∂g_{s+1}/∂δ_s ∂g_{s+1}/∂θ₁ ⋯  ∂g_{s+1}/∂θ_q |   | g_{s+1} |
    | ⋮   |          | ⋮   |       | ⋮                ⋮             ⋮                ⋮             |   | ⋮       |
    | θ_q |_{k+1}    | θ_q |_k     | ∂g_{s+q}/∂δ₁ ⋯  ∂g_{s+q}/∂δ_s ∂g_{s+q}/∂θ₁ ⋯  ∂g_{s+q}/∂θ_q |_k | g_{s+q} |_k    (3.172)

The Hessian of the above equation requires the derivatives of the gᵢ with respect to the two sets of independent variables, the δⱼ and the θⱼ. The derivative of gᵢ for i ≤ s with respect to δⱼ is still the same as in the case of split identification:

    ∂gᵢ/∂δⱼ = [∂²Ψ/∂δᵢ∂δⱼ Uω + ∂Ψ/∂δᵢ Uω_δⱼ + ∂Ψ/∂δⱼ Uω_δᵢ + ΨU ∂²ω/∂δᵢ∂δⱼ]ᵀ[y − ΨUω]
            − [∂Ψ/∂δᵢ Uω + ΨUω_δᵢ]ᵀ[∂Ψ/∂δⱼ Uω + ΨUω_δⱼ]   for i ≤ s    (3.173)

The derivative of g_{s+i} is lengthier, but actually not more complicated. We will take the derivative of g_{s+i} with respect to δⱼ. The derivative is new for this case of combined identification.
    ∂g_{s+i}/∂δⱼ = [∂Γ/∂θᵢ ∂N₂/∂δⱼ θ + Γ ∂N₂/∂δⱼ eᵢ − ∂Γ/∂θᵢ ∂N₁/∂δⱼ φ − ∂Γ/∂θᵢ N₁φ_δⱼ − Γ ∂N₁/∂δⱼ φ_θᵢ − ΓN₁ ∂²φ/∂θᵢ∂δⱼ]ᵀ[W₀[y − ΨUω] + ΓN₂θ − ΓN₁φ]    (3.174)
                 + [∂Γ/∂θᵢ N₂θ + ΓN₂eᵢ − ∂Γ/∂θᵢ N₁φ − ΓN₁φ_θᵢ]ᵀ[−W₀ ∂Ψ/∂δⱼ Uω − W₀ΨUω_δⱼ + Γ ∂N₂/∂δⱼ θ − Γ ∂N₁/∂δⱼ φ − ΓN₁φ_δⱼ]    (3.175)

with

    ∂N₁/∂δᵢ = −[W₁[∂Ψ/∂δᵢ Uω + ΨUω_δᵢ]  W₂[∂Ψ/∂δᵢ Uω + ΨUω_δᵢ]  ⋯  W_p[∂Ψ/∂δᵢ Uω + ΨUω_δᵢ]]    (3.176, 3.177)

and similarly for the matrix N₂. Now we will derive the equations for the derivatives of the gᵢ with respect to θⱼ. For i ≤ s, the derivative with respect to θⱼ is zero, because the function contains no θⱼ:

    ∂gᵢ/∂θⱼ = 0   for i ≤ s    (3.178)

For i > s, the derivative of gᵢ with respect to θⱼ is as below:

    ∂g_{s+i}/∂θⱼ = [∂²Γ/∂θᵢ∂θⱼ N₂θ + ∂Γ/∂θᵢ N₂eⱼ + ∂Γ/∂θⱼ N₂eᵢ − ∂²Γ/∂θᵢ∂θⱼ N₁φ − ∂Γ/∂θᵢ N₁φ_θⱼ − ∂Γ/∂θⱼ N₁φ_θᵢ − ΓN₁ ∂²φ/∂θᵢ∂θⱼ]ᵀ[W₀[y − ΨUω] + ΓN₂θ − ΓN₁φ]    (3.179)
                 + [∂Γ/∂θᵢ N₂θ + ΓN₂eᵢ − ∂Γ/∂θᵢ N₁φ − ΓN₁φ_θᵢ]ᵀ[∂Γ/∂θⱼ N₂θ + ΓN₂eⱼ − ∂Γ/∂θⱼ N₁φ − ΓN₁φ_θⱼ]    (3.180)

The elements of the Hessian matrix require the first order partial derivatives and also the second order partial derivatives. We have given the first order partial derivatives of ω and φ. Now we will give the second order partial derivatives. We have as before

    ∂²ω/∂δᵢ∂δⱼ = [UᵀΨᵀΨU]⁻¹{Uᵀ ∂²Ψᵀ/∂δᵢ∂δⱼ [y − ΨUω] − Uᵀ ∂Ψᵀ/∂δᵢ [∂Ψ/∂δⱼ Uω + ΨUω_δⱼ] − Uᵀ ∂Ψᵀ/∂δⱼ [∂Ψ/∂δᵢ Uω + ΨUω_δᵢ]
           − UᵀΨᵀ[∂²Ψ/∂δᵢ∂δⱼ Uω + ∂Ψ/∂δᵢ Uω_δⱼ + ∂Ψ/∂δⱼ Uω_δᵢ]}    (3.181)

But the derivatives of φ will change. We have

    ∂²φ/∂θᵢ∂θⱼ = [N₁ᵀΓᵀΓN₁]⁻¹{N₁ᵀ ∂²Γᵀ/∂θᵢ∂θⱼ [W₀[y − ΨUω] + ΓN₂θ] + N₁ᵀ ∂Γᵀ/∂θᵢ [∂Γ/∂θⱼ N₂θ + ΓN₂eⱼ] + N₁ᵀ ∂Γᵀ/∂θⱼ [∂Γ/∂θᵢ N₂θ + ΓN₂eᵢ]
           + N₁ᵀΓᵀ[∂²Γ/∂θᵢ∂θⱼ N₂θ + ∂Γ/∂θᵢ N₂eⱼ + ∂Γ/∂θⱼ N₂eᵢ]
           − [N₁ᵀ ∂²Γᵀ/∂θᵢ∂θⱼ ΓN₁ + N₁ᵀ ∂Γᵀ/∂θᵢ ∂Γ/∂θⱼ N₁ + N₁ᵀ ∂Γᵀ/∂θⱼ ∂Γ/∂θᵢ N₁ + N₁ᵀΓᵀ ∂²Γ/∂θᵢ∂θⱼ N₁]φ
           − [N₁ᵀ ∂Γᵀ/∂θᵢ ΓN₁ + N₁ᵀΓᵀ ∂Γ/∂θᵢ N₁]φ_θⱼ − [N₁ᵀ ∂Γᵀ/∂θⱼ ΓN₁ + N₁ᵀΓᵀ ∂Γ/∂θⱼ N₁]φ_θᵢ}    (3.182)

The above equation is obtained by differentiating the expression for φ_θᵢ with respect to θⱼ. As we can see, the derivative is symmetric with respect to θᵢ and θⱼ. If we differentiate the expression for φ_θᵢ with respect to δⱼ, we get the last partial derivative required.
We start with the equation

    N₁ᵀΓᵀΓN₁ φ_θᵢ = N₁ᵀ ∂Γᵀ/∂θᵢ [W₀[y − ΨUω] + ΓN₂θ] + N₁ᵀΓᵀ[∂Γ/∂θᵢ N₂θ + ΓN₂eᵢ] − [N₁ᵀ ∂Γᵀ/∂θᵢ ΓN₁ + N₁ᵀΓᵀ ∂Γ/∂θᵢ N₁]φ    (3.183)

then differentiate both sides with respect to δⱼ to obtain

    [∂N₁ᵀ/∂δⱼ ΓᵀΓN₁ + N₁ᵀΓᵀΓ ∂N₁/∂δⱼ]φ_θᵢ + N₁ᵀΓᵀΓN₁ ∂²φ/∂δⱼ∂θᵢ
    = ∂N₁ᵀ/∂δⱼ ∂Γᵀ/∂θᵢ [W₀[y − ΨUω] + ΓN₂θ] + N₁ᵀ ∂Γᵀ/∂θᵢ [−W₀ ∂Ψ/∂δⱼ Uω − W₀ΨUω_δⱼ + Γ ∂N₂/∂δⱼ θ]
    + ∂N₁ᵀ/∂δⱼ Γᵀ[∂Γ/∂θᵢ N₂θ + ΓN₂eᵢ] + N₁ᵀΓᵀ[∂Γ/∂θᵢ ∂N₂/∂δⱼ θ + Γ ∂N₂/∂δⱼ eᵢ]
    − [∂N₁ᵀ/∂δⱼ ∂Γᵀ/∂θᵢ ΓN₁ + N₁ᵀ ∂Γᵀ/∂θᵢ Γ ∂N₁/∂δⱼ + ∂N₁ᵀ/∂δⱼ Γᵀ ∂Γ/∂θᵢ N₁ + N₁ᵀΓᵀ ∂Γ/∂θᵢ ∂N₁/∂δⱼ]φ
    − [N₁ᵀ ∂Γᵀ/∂θᵢ ΓN₁ + N₁ᵀΓᵀ ∂Γ/∂θᵢ N₁]φ_δⱼ    (3.184)

And so from this equation, we have

    ∂²φ/∂δⱼ∂θᵢ = [N₁ᵀΓᵀΓN₁]⁻¹{right hand side of (3.184) − [∂N₁ᵀ/∂δⱼ ΓᵀΓN₁ + N₁ᵀΓᵀΓ ∂N₁/∂δⱼ]φ_θᵢ}    (3.185)

Our Newton-Raphson equation has become

    | δ₁  |          | δ₁  |       | ∂g₁/∂δ₁      ⋯  ∂g₁/∂δ_s      0            ⋯  0            |⁻¹ | g₁      |
    | ⋮   |          | ⋮   |       | ⋮                ⋮             ⋮               ⋮            |   | ⋮       |
    | δ_s |       =  | δ_s |    −  | ∂g_s/∂δ₁     ⋯  ∂g_s/∂δ_s     0            ⋯  0            |   | g_s     |
    | θ₁  |          | θ₁  |       | ∂g_{s+1}/∂δ₁ ⋯  ∂g_{s+1}/∂δ_s ∂g_{s+1}/∂θ₁ ⋯  ∂g_{s+1}/∂θ_q |   | g_{s+1} |
    | ⋮   |          | ⋮   |       | ⋮                ⋮             ⋮               ⋮            |   | ⋮       |
    | θ_q |_{k+1}    | θ_q |_k     | ∂g_{s+q}/∂δ₁ ⋯  ∂g_{s+q}/∂δ_s ∂g_{s+q}/∂θ₁ ⋯  ∂g_{s+q}/∂θ_q |_k | g_{s+q} |_k    (3.186)

This block of zeros should make the inversion of this matrix a little bit more accurate. The existence of the lower block makes the movement of the parameter vectors δ and θ dependent.

3.6 Identification of the Predictor Form

3.6.1 The Predictor Form

The Box-Jenkins model of a control system was first called the dynamic-stochastic model by a number of researchers. The dynamic part represents the dynamics of the process, and the stochastic part is the part whose values are not certain - hence the word stochastic. This part is represented by an ARIMA. The model was later also called the transfer function-noise model or the transfer function-ARIMA model. Even though the names are different, the equations used are the same. The deviation output variable is the sum of two rational functions: one is driven by the deviation input variable, the other by a white noise. This is one form of the Box-Jenkins model. The Box-Jenkins model has another form called the predictor form. If the purpose of the identification is only to design a minimum variance or an LQG controller, then it is better to use this form than the currently known or more popular one.
We will derive this form in the following. From the usually known Box-Jenkins model

    y_t = ω(z⁻¹)/δ(z⁻¹) u_{t−f−1} + θ(z⁻¹)/φ(z⁻¹) a_t    (3.187)

we can derive an expression for a_t as below:

    a_t = φ(z⁻¹)/θ(z⁻¹) [y_t − ω(z⁻¹)/δ(z⁻¹) u_{t−f−1}]    (3.188)

If we write the system model at time t + f + 1, we have

    y_{t+f+1} = ω(z⁻¹)/δ(z⁻¹) u_t + θ(z⁻¹)/φ(z⁻¹) a_{t+f+1}    (3.189)

and with the Diophantine equation

    θ(z⁻¹)/φ(z⁻¹) = ψ(z⁻¹) + z^{−f−1} γ(z⁻¹)/φ(z⁻¹)    (3.190)

we can write

    y_{t+f+1} = ω(z⁻¹)/δ(z⁻¹) u_t + γ(z⁻¹)/φ(z⁻¹) a_t + ψ(z⁻¹)a_{t+f+1}    (3.191)
              = ω(z⁻¹)/δ(z⁻¹) u_t + γ(z⁻¹)/θ(z⁻¹) [y_t − ω(z⁻¹)/δ(z⁻¹) u_{t−f−1}] + ψ(z⁻¹)a_{t+f+1}    (3.192)
              = ω(z⁻¹)[θ(z⁻¹) − γ(z⁻¹)z^{−f−1}]/[δ(z⁻¹)θ(z⁻¹)] u_t + γ(z⁻¹)/θ(z⁻¹) y_t + ψ(z⁻¹)a_{t+f+1}    (3.193)
              = ω(z⁻¹)ψ(z⁻¹)φ(z⁻¹)/[δ(z⁻¹)θ(z⁻¹)] u_t + γ(z⁻¹)/θ(z⁻¹) y_t + ψ(z⁻¹)a_{t+f+1}    (3.194)

This equation can be put into the following form:

    y_{t+f+1} = [ω(z⁻¹)ψ(z⁻¹)φ(z⁻¹)u_t + δ(z⁻¹)γ(z⁻¹)y_t]/[δ(z⁻¹)θ(z⁻¹)] + ψ(z⁻¹)a_{t+f+1}    (3.195)

This form is called the predictor form of the Box-Jenkins model. It can also be called the controller form, because the numerator of the first term on the right hand side of the above equation is the equation of the minimum variance controller. With the above form, we can apply our technique of identification as in the case of a rational transfer function or an ARIMA with no more difficulty. The identification criterion will be the minimum sum of squares

    S_N = Σ_{t=1}^{N−f−1} [ψ(z⁻¹)a_{t+f+1}]²    (3.196)

This form provides a short cut to the design of a minimum variance or an LQG controller. This form is easier to identify, and statistical tests for model adequacy are easier and in fact inherent in the model. The value f and the residual ψ(z⁻¹)a_t must agree: the time series ψ(z⁻¹)a_t is a moving average of order f, and so it has autocovariances truncated at lag f. Now if we write the model as

    y_{t+f+1} = [a(z⁻¹)u_t + b(z⁻¹)y_t]/c(z⁻¹) + d(z⁻¹)a_{t+f+1}    (3.197)

then we can see the polynomials of the original form can be retrieved from those of the predictor form quite easily.
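Equation (3.190) is a polynomial long division: ψ(z⁻¹) collects the first f + 1 impulse-response weights of θ(z⁻¹)/φ(z⁻¹), and γ(z⁻¹)/φ(z⁻¹) is the tail that remains after the z^{−f−1} shift. The split can be sketched as follows (Python/NumPy rather than the thesis's MATLAB; the function name and the coefficient convention - ascending powers of z⁻¹ - are my own):

```python
import numpy as np

def diophantine(theta, phi, f):
    """Split theta/phi = psi + z^{-(f+1)} * gamma/phi with deg(psi) = f.

    theta, phi: coefficient lists in ascending powers of z^{-1}, phi[0] = 1.
    Returns (psi, gamma) as NumPy arrays.
    """
    n = f + 1
    theta = np.asarray(theta, dtype=float)
    phi = np.asarray(phi, dtype=float)
    # psi = first f+1 impulse-response weights h_k of theta/phi,
    # from the recursion sum_j phi_j h_{k-j} = theta_k
    psi = np.zeros(n)
    for k in range(n):
        tk = theta[k] if k < len(theta) else 0.0
        psi[k] = tk - sum(phi[j] * psi[k - j]
                          for j in range(1, min(k, len(phi) - 1) + 1))
    # gamma is the remainder theta - psi*phi, shifted left by f+1
    rem = np.zeros(max(len(theta), n + len(phi) - 1))
    rem[:len(theta)] += theta
    rem[:n + len(phi) - 1] -= np.convolve(psi, phi)
    gamma = rem[n:]
    return psi, gamma

psi, gamma = diophantine([1.0, 0.4], [1.0, -0.9], f=1)
# psi = [1.0, 1.3], gamma = [1.17]:
# (1 + 0.4 z^-1)/(1 - 0.9 z^-1) = (1 + 1.3 z^-1) + z^-2 * 1.17/(1 - 0.9 z^-1)
```

With ψ and γ in hand, the predictor-form polynomials of (3.195) follow by the multiplications a = ωψφ, b = δγ, c = δθ, d = ψ.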
From the following relationships:

    a(z⁻¹) = ω(z⁻¹)ψ(z⁻¹)φ(z⁻¹)    (3.198)
    b(z⁻¹) = δ(z⁻¹)γ(z⁻¹)    (3.199)
    c(z⁻¹) = δ(z⁻¹)θ(z⁻¹)    (3.200)
    d(z⁻¹) = ψ(z⁻¹)    (3.201)

and with a little bit of algebra, we can obtain the following equations:

    ω(z⁻¹)/δ(z⁻¹) = a(z⁻¹)/[c(z⁻¹) − b(z⁻¹)z^{−f−1}]    (3.202)
    θ(z⁻¹)/φ(z⁻¹) = c(z⁻¹)d(z⁻¹)/[c(z⁻¹) − b(z⁻¹)z^{−f−1}]    (3.203)

The numerators and denominators of the quantities on the right hand side of the above equations have common polynomials. But only when the identification is perfect can we have cancellations of these common polynomials. However, as discussed above, this is not necessary when we are interested in a minimum variance or an LQG controller. The polynomials a(z⁻¹), b(z⁻¹) and d(z⁻¹) are needed for the minimum variance controller. For an LQG controller, we need the additional polynomial c(z⁻¹). Now if we write the predictor form model at time t and consider the case of a control system with no delay, ie. f = 0, then we have

    y_t = [a(z⁻¹)u_{t−1} + b(z⁻¹)y_{t−1}]/c(z⁻¹) + a_t
        = ŷ(t|t−1) + a_t    (3.205)

The quantity ŷ(t|t−1) is the optimal (in the sense of minimum variance error) one step ahead predictor of y_t. This means the prediction error method identifies the Box-Jenkins model via the predictor form. The case f ≠ 0 can be established similarly:

    y_t = [a(z⁻¹)u_{t−f−1} + b(z⁻¹)y_{t−f−1}]/c(z⁻¹) + ψ(z⁻¹)a_t    (3.206)
        = ŷ(t|t−f−1) + ψ(z⁻¹)a_t    (3.207)

But in this case the optimal predictor ŷ(t|t−f−1) is f + 1 steps ahead. For less work in the derivation of the minimum variance or an LQG controller of a delayed system, the f + 1 step ahead optimal predictor ŷ(t|t−f−1) should be used rather than the one step ahead optimal predictor ŷ(t|t−1). This is because we will get the controller directly from identification.

3.6.2 Closed-Loop Data

The question of closed-loop data began with a paper by Akaike, H. (1967). In this paper, using a cross-spectral method, Akaike
showed that if closed-loop data is used, we might not get the transfer function of the process dynamics but the inverse transfer function of the controller. From Figure 3.1, we can see that the feedback input variable u_t correlates with the controlled output variable y_t via two paths. The forward path will give us the transfer function of the process dynamics. The backward path will give us the inverse transfer function of the controller.

Figure 3.1. Correlation Paths of a Feedback Control Loop. (Block diagram: set point and dither d_t enter the controller l₁(z⁻¹)/l₂(z⁻¹), whose output u_t drives the plant; the disturbance adds to the plant output y_t; the forward and backward correlation paths link u_t and y_t.)

The question now is whether our method is vulnerable to closed-loop data. If it is, and we carry out the same procedure as described in the section on identification of a rational transfer function, what will we get? And what about the method of prediction error? Is this method vulnerable to closed-loop data? Since our approach to identification is semi-analytical, it will help us in answering these questions quite easily. To answer these questions, we write our predictor form Box-Jenkins model as below:

    y_t = [ω(z⁻¹)ψ(z⁻¹)φ(z⁻¹)u_{t−f−1} + δ(z⁻¹)γ(z⁻¹)y_{t−f−1}]/[δ(z⁻¹)θ(z⁻¹)] + ψ(z⁻¹)a_t    (3.208)
        = ŷ₁(t|t−f−1) + ψ(z⁻¹)a_t    (3.209)
        = ŷ₁(t|t−f−1) + e_t    (3.210)

Now if the feedback controller is

    u_t = l₁(z⁻¹)/l₂(z⁻¹) y_t    (3.211)

then we can write

    y_t = [ω(z⁻¹)ψ(z⁻¹)φ(z⁻¹)l₁(z⁻¹) + δ(z⁻¹)γ(z⁻¹)l₂(z⁻¹)]/[δ(z⁻¹)θ(z⁻¹)l₁(z⁻¹)] u_{t−f−1} + e_t    (3.212, 3.213)
        = ŷ₂(t|t−f−1) + e_t    (3.214)

and

    y_t = [ω(z⁻¹)ψ(z⁻¹)φ(z⁻¹)l₁(z⁻¹) + δ(z⁻¹)γ(z⁻¹)l₂(z⁻¹)]/[δ(z⁻¹)θ(z⁻¹)l₂(z⁻¹)] y_{t−f−1} + e_t    (3.215)
        = ŷ₃(t|t−f−1) + e_t    (3.216)

In the above equations, we list 3 optimal predictors. They are the same in the sense that they give the same value. The predictor ŷ₁(t|t−f−1) uses both the input and output variables u_t and y_t. The predictor ŷ₂(t|t−f−1) uses only the input variable u_t, and the predictor ŷ₃(t|t−f−1) uses only the output variable y_t.
Both the predictors ŷ₂(t|t−f−1) and ŷ₃(t|t−f−1) use the controller in their prediction equations. With the above equations, we can now answer the questions we raised before. A least squares estimation forces no crosscorrelation between the regressor variables and the residual. This is the statistical meaning of orthogonality in geometry. The answer to the question of whether our method of identification of a rational transfer function is vulnerable to closed-loop data is affirmative. And if we carry out the same procedure as in the section on identification of a rational transfer function with closed-loop data, what we will get is neither the transfer function nor the inverse of the controller but the quantity

    [ω(z⁻¹)ψ(z⁻¹)φ(z⁻¹)l₁(z⁻¹) + δ(z⁻¹)γ(z⁻¹)l₂(z⁻¹)]/[δ(z⁻¹)θ(z⁻¹)l₁(z⁻¹)]    (3.217)

This comes from the optimal predictor ŷ₂(t|t−f−1). The question involving the prediction error method can be answered as follows. Ljung, L. et al. (1974) studied identifiability of closed-loop data and suggested that by shifting between different control laws, identifiability can be achieved as in open-loop data. The number of control laws or controllers must be greater than the ratio of the number of input variables to the number of output variables. This is for multivariable systems. For single input single output systems, we just need to shift between two control laws. The conclusion involving the prediction error method is confusing. In their own words, they concluded "Direct identification with a prediction error method can be used exactly as in the open-loop case; the fact that the system operates in closed-loop causes no extra difficulty". Unfortunately, this conclusion is confusing and normally causes errors for a user. If the optimal predictors ŷ₂(t|t−f−1) and ŷ₃(t|t−f−1) are used, then the conclusion is correct. The authors actually used the predictor ŷ₃(t|t−f−1) in their paper.
However, in practice the predictor ŷ₁(t|t−f−1) is normally used. Please see an example in Soderstrom, T. and Stoica, P. (1989). Using the predictors ŷ₂(t|t−f−1) and ŷ₃(t|t−f−1) makes the prediction error method no longer direct, because the knowledge of the controller is used. Using the predictor ŷ₁(t|t−f−1) makes the prediction error method vulnerable to closed-loop data, because there might be a linear relationship between the input and output variables in the predictor. And so, the prediction error method is vulnerable to closed-loop data. The method of maximum likelihood is also vulnerable to closed-loop data. The physical meaning of the problem is this: the data contain one additional linear relationship, and so alone they cannot give all the parameters but one less. The remaining one must be obtained from the linear relationship given by the controller. Using the optimal predictors ŷ₂(t|t−f−1) and ŷ₃(t|t−f−1) means we have used this linear relationship, and in doing so all the parameters become identifiable. We will explain in mathematical detail why this is so. We consider the identification of the polynomials a(z⁻¹), b(z⁻¹) and c(z⁻¹) in the following equation:

    y_{t+f+1} = [a(z⁻¹)u_t + b(z⁻¹)y_t]/c(z⁻¹) + d(z⁻¹)a_{t+f+1}    (3.218)

Now at the optimal condition, we have from Equation (3.82)

    β = [XᵀΨᵀΨX]⁻¹XᵀΨᵀy    (3.219)
      = L_I⁻¹XᵀΨᵀy    (3.220)

where

    Ψ = | 1                  |
        | −c₁   1            |
        | ⋮     ⋱    ⋱       |
        |       ⋯   −c₁   1  |    (3.221)

    X = | u_{N−f−1}  ⋯  u_{N−f−1−m}  y_{N−f−1}  ⋯  y_{N−f−1−n} |        | y_N     |
        | u_{N−f−2}  ⋯  u_{N−f−2−m}  y_{N−f−2}  ⋯  y_{N−f−2−n} |,  y =  | y_{N−1} |
        | ⋮                          ⋮                          |        | ⋮       |    (3.222)

If the matrix L_I is nonsingular, then the system is identifiable. This is because we can calculate all the parameters of the system. The matrix L_I can rightfully be called the identifiability matrix. If L_I is nonsingular, then X must be of full (column) rank, ie. there are no linear dependences among the columns of X. The question of identifiability becomes the question of linear independence of the columns of the matrix X.
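The linear-dependence argument can be checked numerically: under a pure feedback law, the input columns of X are exact linear combinations of the output columns, so X (and hence L_I) loses column rank, and a dither signal restores it. A toy illustration (Python; the feedback gain 1.5 and the dither level 0.1 are arbitrary numbers of my own, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=200)                 # stand-in output record

# Feedback with no dither: u_t = 1.5 * y_t -> the columns of X are dependent
u = 1.5 * y
X = np.column_stack([u, y])
print(np.linalg.matrix_rank(X))          # 1: X'X is singular, not identifiable

# The same feedback plus a dither signal breaks the dependence
u_dither = 1.5 * y + 0.1 * rng.normal(size=200)
X_dither = np.column_stack([u_dither, y])
print(np.linalg.matrix_rank(X_dither))   # 2: full column rank again
```

Shifting between two different control laws halfway through the record has the same effect: the two halves of X carry different dependences, so the stacked matrix regains full rank.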
Now a feedback control law will usually cause this dependence. The nonidentifiability of the system parameters, in the case where the data have been collected under a minimum variance feedback, has been established by Box, G. E. P. and MacGregor, J. F. (1976). Now if we shift between two control laws, then the top part of the matrix X corresponding to the first control law has a linear dependence of the columns which is different from that of the bottom part of the matrix X corresponding to the second control law. Overall, the columns of the matrix X become linearly independent, and so the matrix L_I becomes nonsingular. This is an ingenious idea of Ljung, L. et al. (1974). The addition of a dither signal suggested by Box, G. E. P. and MacGregor, J. F. (1974) also works, because the dither signal breaks the linear dependence of the columns caused by the controller. Now suppose that we use a feedback control law that remembers more past input variables or past output variables than those of a minimum variance control law; then the system is identifiable. This is because the additional input or output variable acts just like a dither signal, which breaks the linear dependence of the columns of the matrix X. On the contrary, if we use a feedback control law that remembers fewer past input and past output variables, then the system is not identifiable.

In 1981, Gevers, M. R. and Anderson, B. D. O. presented a new approach to the problem. They observed that under feedback the white noise drives both the input and output variable time series, and therefore these variables can be put in a vector form and treated as a vector time series. This vector form gives us a joint input-output model which contains both the controller and the transfer function of the process dynamics. This joint input-output model can be identified by a factorization of the joint spectral density matrix (Anderson, B. D. O. and Gevers, M. R. (1982)).
However, with no dither signal this method becomes an indirect method. We must point out here that there is one problem with closed-loop data, and that is that it often contains little variation. This makes identification difficult. The addition of a dither signal will help, but this too can come under criticism. This is because, if the magnitude of the dither signal is small, then the data are still essentially closed-loop; on the other hand, if the magnitude of the dither signal is moderate or large, then the situation is like open-loop data. Even open-loop data are not always good. This is the reason why at times experiment designs are needed to collect data. For optimal conditions to minimize the error in the identification of the transfer function, one can refer to the paper by Gevers, M. and Ljung, L. (1986). We now close this section by concluding that our method and the prediction error method are vulnerable to closed-loop data. If data must be collected under closed-loop conditions, then we can shift between two feedback control laws. These control laws should contrast strongly with one another. If perturbation is allowed, then the addition of a dither signal is better than shifting between control laws. In the case where the data are already available and no control law is known, we will use an indirect method via the calculation of the two optimal predictors ŷ₂(t|t−f−1) and ŷ₃(t|t−f−1). Note that this indirect approach just suggested is different from the indirect approach discussed in Soderstrom, T. et al. (1975).

3.7 Examples

In this section, we will consider a few examples to test for correctness of the equations. There are 3 programs written in MATLAB software. These programs are included in Appendix B. The program tf_id is the program used to identify the transfer function. The program armaid is the program used to identify the ARMA time series. The program bj_id is the program used to identify the combined model, ie. the transfer function and the ARMA time series at the same time. All these programs have been tested thoroughly for bugs. In this section, we will again test these programs and compare the results obtained from our method with the results obtained by the MATLAB software package. The model we generated is given below:

    y_t = ω(z⁻¹)/δ(z⁻¹) u_{t−1} + n_t    (3.223)
    ω(z⁻¹)/δ(z⁻¹) = (2.0 + 0.8z⁻¹)/(1 − 0.90z⁻¹ + 0.12z⁻²)    (3.224)
    n_t = (1 − 0.9z⁻¹)/(1 − 1.4z⁻¹ + 0.48z⁻²) a_t    (3.225)

with a_t as the normally distributed white noise with unit variance. The input signal u_t is also white, with a standard deviation of 5. The four time series y_t, u_t, n_t and a_t are plotted in Figure 3.2. With the combined identification strategy, we get the following results:

    ω_CB(z⁻¹) = 1.9632 + 0.8078z⁻¹    (3.226)
    δ_CB(z⁻¹) = 1 − 0.8916z⁻¹ + 0.2081z⁻²    (3.227)
    φ_CB(z⁻¹) = 1 − 1.1928z⁻¹ + 0.2943z⁻²    (3.228)
    θ_CB(z⁻¹) = 1 − 0.4735z⁻¹ + 0.1053z⁻²    (3.229)

The sample variance of the white noise is 1.1278 and its estimated value is 1.0975. With the split identification strategy, the program tf_id.m gives us the following model:

    ω_SP(z⁻¹) = 1.9632 + 0.8078z⁻¹    (3.230)
    δ_SP(z⁻¹) = 1 − 0.8916z⁻¹ + 0.2081z⁻²    (3.231)

The above polynomials are identically the same as those given by the combined identification strategy. The excellent agreement tells us that both the theory which results in the equations and the software are correct. The split identification of the ARMA time series gives us the following results:

    φ_SP(z⁻¹) = 1 − 1.0421z⁻¹ + 0.1482z⁻²    (3.232)
    θ_SP(z⁻¹) = 1 − 0.3368z⁻¹ + 0.1198z⁻²    (3.233)

and the estimated variance of the white noise is 1.1145. At first sight, this appears unexpected, because we would think the split identification strategy would give us a better result, ie. a smaller estimated white noise variance and closer values to the true parameters. To check this result, we identified the same ARMA time series, but this time via the program tf_id.m rather than the program armaid.m.
We got identical results. This assures us that all the software is correct. And the reason for the difference in the obtained results for the ARMA time series is the difference in the time series themselves. When we identified the ARMA time series in the split identification strategy, we identified the raw time series, ie. the one we generated. In the combined identification strategy, we identified another time series - the one that had been subtracted out of the output variable time series.

Figure 3.2. Input-Output Series of Identified System.

The above example proves to us that the theory is correct and the programs are bug-free. We actually need only one program - the program tf_id.m. This program is simpler and shorter than the other programs. The program bj_id.m, due to its length and number of equations, is less numerically stable than the other two programs. In fact, to use this program we have to set the initial estimates very close to the optimal values for the program to converge. The purpose of the above experiment is to test the equations and prove that we can obtain unbiased estimation of the rational transfer function when the disturbance is colored. This is important for both the output error method and our method. Now we will compare our results with the results given by the prediction error method.
For this system, the prediction error method in the MATLAB software gives us the following results:

    ω_PE(z⁻¹) = 2.0059 + 0.8227z⁻¹    (3.234)
    δ_PE(z⁻¹) = 1 − 0.8956z⁻¹ + 0.2039z⁻²    (3.235)
    φ_PE(z⁻¹) = 1 − 0.1759z⁻¹ + 0.1295z⁻²    (3.236)
    θ_PE(z⁻¹) = 1 − 0.8600z⁻¹ + 0.0339z⁻²    (3.237)

Comparing the results, we see that the prediction error method in the MATLAB software gives almost the same accuracy as our method for the parameters of the transfer function model, but the obtained parameters of the ARMA model by the prediction error method seem to be less accurate than those given by our method. The difference in the accuracies is not that large, since in this example our method iterates with half the total number of parameters, ie. it uses only the δs and θs in the Newton-Raphson equations, and there are exactly four of them (δ₁, δ₂, θ₁ and θ₂) out of a total of eight parameters. In cases where the number of iterating parameters, ie. the δs and θs, is much smaller than the total number of parameters, significant improvement in accuracy can be obtained by our method. This can be illustrated by the following example in the identification of an ARMA. One hundred observations of the following time series were generated:

    n_t = (1 − 0.4z⁻¹)/(1 − 1.9z⁻¹ + 1.18z⁻² − 0.24z⁻³) a_t    (3.238)

with the statistics of the white noise given by the following table:

    Table 3.1 Statistics of a_t
    Lag    Autocovariances
    0       9.0197
    1      -1.1262
    2      -0.6632
    3      -0.2408
    4      -0.8945
    5       0.1830
    6      -0.5839
    7       0.9126
    8      -0.8533
    9      -0.3383

Using these one hundred observations, our semi-analytical method gives us the following model:

    n_t = (1 − 0.4284z⁻¹)/(1 − 1.7647z⁻¹ + 1.0323z⁻² − 0.1891z⁻³) e_t    (3.239)

With this obtained model and the obtained white noise e_t, we have the following statistics table:

    Table 3.2 Statistics of e_t
    Lag    Autocovariances    Crosscovariances
    0       8.6053             8.6053
    1       0.0000            -0.0000
    2      -0.0212             0.0000
    3       0.1733             0.0001
    4      -0.4569            -0.0478
    5       0.3906             0.2432
    6      -0.2535            -0.1776
    7       0.9214            -0.1911
    8      -0.6256            -0.7396
    9      -0.2696             0.2348

The fact that the autocovariance at lag 1 and the crosscovariances at lags 1, 2 and 3 in the above table are zeros indicates that the identification is perfect. Beyond these lags the statistics are theoretically supposed to be zeros. In the table, they are not zeros, because they are the sample statistics. Since the identification results are good, we want to compare these results with what we obtain from the prediction error method. The comparison is given in the following table:

    Table 3.3 Model Comparison
    Parameter    True       PE (MATLAB)    Derivative
    φ₁            1.9000     2.1590         1.7647
    φ₂           -1.1800    -1.5742        -1.0323
    φ₃            0.2400     0.3949         0.1891
    θ₁            0.4000     0.8344         0.4284
    σ²_a          9.0197     8.9191         8.6053

From the above table, we can see that the prediction error method estimation is poorer than the estimation given by our method. With the estimated variance (8.9191) of the white noise smaller than its sample counterpart (9.0197), the method of prediction error is adequate, because it means the method indeed tries to look for the minimum variance of the white noise. However, because it has to iterate with 4 parameters, the result is poor. This can be seen as follows. The prediction error method uses the Gauss-Newton approach to calculate the values of the parameters for the next iterative stage, which means we have

    | φ₁ |          | φ₁ |       | x  x  x  x |⁻¹ | x |
    | φ₂ |       =  | φ₂ |    +  | x  x  x  x |   | x |
    | φ₃ |          | φ₃ |       | x  x  x  x |   | x |
    | θ₁ |_{t+1}    | θ₁ |_t     | x  x  x  x |_t | x |_t    (3.240)

where x is some number. Now the problem with the Gauss-Newton method and all the other quasi-Newton methods is the approximation of the inverse matrix (Hessian) on the right hand side of the above equation. This creates some loss of accuracy. The loss of accuracy is greater when the dimension of the matrix is larger. On the contrary, our method iterates with only one parameter, θ₁:

    [θ₁]_{t+1} = [θ₁]_t + [x]⁻¹[x]_t    (3.241)

As for the autoregressive parameters φᵢ, they iterate in harmony with the parameter θ₁, because they are calculated from the equation

    φ = [N₁ᵀΓᵀΓN₁]⁻¹N₁ᵀΓᵀ[n + ΓN₂θ]    (3.242)

This is the reason why, in this particular case, our method could deliver such an excellent result. The above example does not mean the prediction error method is a poor estimator. In fact, it is an acceptable estimator, as it has proved by obtaining a variance of 8.9191, which is smaller than the sample variance 9.0197 of the white noise.

One might argue that since most iterative numerical methods will have some approximation of the function to be minimized, the approximation of the Hessian by Gauss-Newton and quasi-Newton methods might not matter a great deal, because it might be absorbed by the approximation of the function. Newton's methods approximate the minimized function by a quadratic function. This is a valid argument. However, one cannot deny the fact that additional accuracy is gained by removing a number of parameters from the minimized function, as the above example has shown. In general, the identification of an ARIMA is more difficult than the identification of a rational transfer function, because of a small signal to noise ratio. Therefore, an accurate method is in demand. Since in an ARIMA we usually see more parameters φᵢ than parameters θᵢ, as in the above example, there should not be any argument that our method gives higher accuracy.
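The elimination of the linearly-entering parameters, as in Equation (3.242), is the same device used in separable (variable-projection) least squares: solve the linear parameters in closed form at every step and iterate only over the nonlinear ones, shrinking the matrix that must be inverted. A sketch on a hypothetical separable model y = a·e^(−bt) + noise (Python; the model and all numbers are my own, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 4.0, 60)
y = 2.5 * np.exp(-0.7 * t) + 0.01 * rng.normal(size=t.size)

def a_of_b(b):
    """The linear parameter has a closed form given b (cf. Equation (3.242))."""
    x = np.exp(-b * t)
    return float(x @ y) / float(x @ x)

def residual_ss(b):
    """Sum of squares after the linear parameter has been eliminated."""
    return float(np.sum((y - a_of_b(b) * np.exp(-b * t)) ** 2))

# The search now runs over the single nonlinear parameter b only
bs = np.linspace(0.1, 2.0, 2000)
b_hat = bs[np.argmin([residual_ss(b) for b in bs])]
a_hat = a_of_b(b_hat)
```

The fit recovers b ≈ 0.7 and a ≈ 2.5 while never inverting more than a scalar, which is the essence of the accuracy argument made above.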
This means our semi-analytical method should be used. In the identification of a rational transfer function, we usually have a decent signal to noise ratio and more parameters <5;s than parameters u;,s, a completely different scenario to the case of the A R I M A , so we would like to convince ourselves with many more examples proving a better accuracy of our method. To do this, we now go back to the identification of a rational transfer function. We again used the program tfJd.m and the M A T L A B system identification toolbox for a comparison. This toolbox has a function called oe which stands for output error. It must be mentioned here that even though the function is output error, the method described in the toolbox is referred to as the prediction error method. In our final comparison, we randomly created models and from these models we gen-erated 100 observation data and from these observations of yt and ut, we identified the parameters by both methods. The model has the following form: Vt = , . a " * - i + ( 3 - 2 4 3 ) 1 - bxz 1 - b2z i which is a very typical model one can encounter in practice. The observation data were generated by generating at and ut. Both of these series are white. The series at is normal and has a unit variance, but the series ut is uniformly distributed and has a standard deviation of 2.5. This was done so to give a decent signal to noise ratio. A total of more than 200 models was created. Like the above case of identification of an A R M A , the criterion we will use to judge the model is the variance of the disturbance white noise at. We call a2a the variance of the generated disturbance white noise, o~2 pE the variance of the disturbance white noise obtained by the prediction error method and crlSA the variance of the disturbance white noise obtained by our method or the semi-analytical method. 3.7. EXAMPLES 57 Figure 3.3. Parameters of Compared Systems. 58 CHAPTER 3. IDENTIFICATION Figure 3.4. 
PE Estimated Parameters of Compared Systems.

Figure 3.5. SA Estimated Parameters of Compared Systems.

[Figure 3.6 shows three panels plotted against trial number: \sigma_a^2 - \sigma_{a,PE}^2 (top), \sigma_a^2 - \sigma_{a,SA}^2 (middle) and \sigma_{a,PE}^2 - \sigma_{a,SA}^2 (bottom).]

Figure 3.6. Comparison of Estimated Disturbance Variances.

Figures 3.3, 3.4 and 3.5 in the previous pages show the parameters of the created and obtained models. We cannot draw any conclusion from these figures. However, from Figure 3.6 we can draw a very convincing conclusion about the accuracy of the two methods. From the top graph, we can see that the prediction error method is an adequate estimator, because \sigma_{a,PE}^2 is smaller than \sigma_a^2 most of the time: in only 21 of 199 trials does it fail to obtain a smaller variance than that of the generated disturbance. Our semi-analytical method fails only once, at trial 101, as can be seen from the middle graph of Figure 3.6. The bottom graph of this figure tells us that there are only two cases, at trials 7 and 50, in which the prediction error method gives better results than our method. The result of trial 140 tells us that once in a while the result from the prediction error method given by the MATLAB software package must be checked. So we can conclude that in general our method gives more accurate results than the methods of prediction and output error.

3.8 Conclusion

In this chapter, we have discussed the identification of the Box-Jenkins model. The Box-Jenkins model has been considered difficult to identify, but it is a superior model due to its parsimonious characteristic.
An accurate identification of the transfer function of the Box-Jenkins model is very important, because its inverse is the pole-zero cancellator needed in the design of many controllers. The method proposed in this thesis makes it easier to choose the initial values of the parameters and gives more accurate results than the methods of prediction error and output error. The method also unifies the identification of a rational transfer function and an ARIMA time series. It also opens up a discussion about the vulnerability to closed-loop data of the prediction error and maximum likelihood methods. By introducing a robust semi-analytical method to identify the Box-Jenkins model, this thesis makes a contribution not only to the control system literature but also to the time series literature.

Chapter 4

Controllers

4.1 Introduction

Controller design is the next step after system identification. As with identification, there are different design methods. But unlike identification, where the different methods seek the same solution, the different methods of controller design aim at obtaining different controllers with different characteristics. In this chapter, we will discuss a few well known controllers and introduce an improvement of the PID controller and the self tuning controller.

4.2 The Minimum Variance Controller

The minimum variance controller is the easiest to obtain, because it follows directly from the model of the system. Our work does not involve the minimum variance controller. However, knowledge of it will help in the understanding of the minimum variance self tuning controller. Therefore, we will briefly discuss this controller.
Derivation of the Controller

We consider the usual Box-Jenkins model

y_{t+f+1} = \frac{\omega(z^{-1})}{\delta(z^{-1})} u_t + \frac{\theta(z^{-1})}{\phi(z^{-1})} a_{t+f+1}    (4.1)

At time t we want to choose the control action u_t in the form of a linear combination of the past control actions and the past output variable values such that the output variable y_t has minimum variance. Let the controller be

u_t = \frac{l_1(z^{-1})}{l_2(z^{-1})} y_t    (4.2)

By replacing the controller in the control system equation we have

y_{t+f+1} = \frac{\omega(z^{-1}) l_1(z^{-1})}{\delta(z^{-1}) l_2(z^{-1})} y_t + \frac{\theta(z^{-1})}{\phi(z^{-1})} a_{t+f+1}    (4.3)

Therefore under feedback the system follows the time series

y_t = \frac{\theta(z^{-1})}{\phi(z^{-1}) \left[ 1 - \dfrac{\omega(z^{-1}) l_1(z^{-1})}{\delta(z^{-1}) l_2(z^{-1})} z^{-f-1} \right]} a_t    (4.4)

or

y_t = \frac{\theta(z^{-1}) \delta(z^{-1}) l_2(z^{-1})}{\phi(z^{-1}) \left[ \delta(z^{-1}) l_2(z^{-1}) - \omega(z^{-1}) l_1(z^{-1}) z^{-f-1} \right]} a_t    (4.5)

Now if we define the following Diophantine equation:

\frac{\theta(z^{-1})}{\phi(z^{-1})} = \psi(z^{-1}) + z^{-f-1} \frac{\gamma(z^{-1})}{\phi(z^{-1})}    (4.6)

with \psi(z^{-1}) of order f, then the output variable can be written as

y_{t+f+1} = \psi(z^{-1}) a_{t+f+1} + \frac{\gamma(z^{-1})}{\phi(z^{-1})} a_t + \frac{\omega(z^{-1})}{\delta(z^{-1})} u_t    (4.7)

From the above equation, we can see that the variance of y_{t+f+1} consists of two terms. The first term is independent of the controller, while the second term is dependent on it. The minimum variance controller sets the second term to zero, ie.

\frac{\omega(z^{-1})}{\delta(z^{-1})} u_t = -\frac{\gamma(z^{-1})}{\phi(z^{-1})} a_t    (4.8)

Since under this control law y_t = \psi(z^{-1}) a_t, we can replace a_t by y_t/\psi(z^{-1}) and obtain

\frac{l_1(z^{-1})}{l_2(z^{-1})} = -\frac{\delta(z^{-1}) \gamma(z^{-1})}{\omega(z^{-1}) \phi(z^{-1}) \psi(z^{-1})}    (4.9)

The minimum variance controller is hence given by

u_t = -\frac{\delta(z^{-1}) \gamma(z^{-1})}{\omega(z^{-1}) \phi(z^{-1}) \psi(z^{-1})} y_t    (4.10)

and under this control law the output variable follows a moving average series of order f

y_t = \psi(z^{-1}) a_t    (4.11)

Nonstationary Disturbance

In the case that the disturbance has d roots on the unit circle, they can be factored out as shown below

\phi(z^{-1}) = (1 - z^{-1})^d \phi^*(z^{-1})    (4.12)

where \phi^*(z^{-1}) has no roots on the unit circle. The minimum variance controller can be written as

u_t = -\frac{\delta(z^{-1}) \gamma(z^{-1})}{\omega(z^{-1}) (1 - z^{-1})^d \phi^*(z^{-1}) \psi(z^{-1})} y_t    (4.13)

which, for d = 1, is

u_t = -\frac{\delta(z^{-1}) \gamma(z^{-1})}{\omega(z^{-1}) \phi^*(z^{-1}) \psi(z^{-1})} (1 + z^{-1} + z^{-2} + \cdots) y_t    (4.14)

The controller is a linear combination of an infinite number of past output variable readings. Since it integrates all these readings into one quantity, the controller is said to have an integrator.
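The split in Equation (4.6) is ordinary polynomial long division carried to f + 1 terms. A minimal sketch (illustrative helper name; polynomials are plain coefficient lists in ascending powers of x = z^{-1}, so the sign conventions above are absorbed into the coefficients):

```python
def diophantine(theta, phi, f):
    """Split theta(x)/phi(x) = psi(x) + x**(f+1) * gamma(x)/phi(x) by long
    division of theta by phi carried to f+1 terms; psi has degree f.
    Coefficient lists are ascending powers of x = z^-1; phi[0] must be 1."""
    rem = list(theta) + [0.0] * max(0, f + len(phi) - len(theta) + 1)
    psi = []
    for k in range(f + 1):
        q = rem[k]                     # next impulse-response weight psi_k
        psi.append(q)
        for i in range(len(phi)):      # subtract q * phi shifted by k
            rem[k + i] -= q * phi[i]
    gamma = rem[f + 1:]
    while gamma and abs(gamma[-1]) < 1e-12:   # trim trailing zeros
        gamma.pop()
    return psi, gamma

psi, gamma = diophantine([1.0, -0.4], [1.0, -0.8], f=1)
```

For \theta(z^{-1}) = 1 - 0.4 z^{-1}, \phi(z^{-1}) = 1 - 0.8 z^{-1} and f = 1, this gives \psi = 1 + 0.4 z^{-1} and \gamma = 0.32, and the minimum variance controller then follows by assembling the polynomials of Equation (4.10).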
This is equivalent to the integral gain of the three mode PID controller. Usually, the controller is written as

\nabla u_t = -\frac{\delta(z^{-1}) \gamma(z^{-1})}{\omega(z^{-1}) \phi^*(z^{-1}) \psi(z^{-1})} y_t    (4.15)

where \nabla = (1 - z^{-1}) and the left hand side of the above equation has the physical meaning of the change in the position of the input variable when d = 1. The controller then has two effects: the integral effect, which demands the input variable to wander away from its steady state position to compensate for the nonstationary behaviour of the disturbance, and the proportional effect, which decouples the interaction between the current and past readings of the input and output variables and the disturbance.

Non-minimum Phase Systems

Recall that under a minimum variance control law, the input variable will follow the series

u_t = -\frac{\delta(z^{-1}) \gamma(z^{-1})}{\omega(z^{-1}) \phi(z^{-1})} a_t    (4.16)

For a stationary disturbance, the closed-loop input variable of a non-minimum phase system is nonstationary, because the polynomial \omega(z^{-1}) has roots inside the unit circle. This is not desirable, and therefore a minimum variance controller is not good for non-minimum phase systems. Also, non-minimum phase systems are extremely sensitive to model mismatch. Non-minimum phase systems are known to process engineers as systems with an inverse response. This inverse response might not be seen in a discrete control loop if we sample slowly enough, so this is one way to cope with non-minimum phase systems. The other way is to design a linear quadratic Gaussian controller. This is the topic of our next discussion.

4.3 The Linear Quadratic Gaussian Controller

A minimum variance controller is not always desirable. The minimum variance control law allows a high gain so that the controller can move maximally to give minimum variance of the output variable. A control law that produces as small a variance of the output variable as possible, but with a penalty on the variance of the input variable, is often desired.
A minimum variance control law is usually considered optimal, while a control law that limits the variance of the input variable is considered suboptimal. However, the minimum variance controller with a penalty on the variance of the input variable is occasionally called an optimal controller. The term optimal used in this context does not mean minimum variance; rather, it means the controller is derived from an optimal criterion. The criterion is quadratic in the input and output variables. The disturbance is driven by a white Gaussian (normal) noise. The controller and control system are both linear. All these attributes give the controller its name: the Linear Quadratic Gaussian (LQG) controller. Its formulation is given as follows.

The minimum variance controller can be formulated as the following optimization problem

\min_{u_t} E\{y_{t+f+1}^2\}    (4.17)

which is a special case of the following optimization problem

\min_{u_t} E\{y_{t+f+1}^2 + \lambda u_t^2\}, \quad \lambda \ge 0    (4.18)

This is the optimal criterion or performance index of an LQG controller. Its physical meaning is the sum of the closed-loop output and input variances, with a weight on the variance of the input variable. If \lambda is zero, we have the case of unconstrained movement of the input variable. This is the case of minimum variance control: the controller moves the control element maximally to achieve minimum variance of the output variable. On the other hand, if \lambda is very large, the criterion puts all the weight on the input variable. In this case, minimization of the performance index almost means minimization of the movement of the input variable. The result is a zero value for the variance of the input variable under feedback: the controller practically does not move the control element at all, and the variance of the output variable is at the highest value given by the disturbance. In this case, the loop is practically open.
In the usual case, \lambda will take a value in the interval [0, \infty), and the closed-loop variance of the input variable \sigma_u^2 will be lower than or equal to that of the minimum variance control case, \sigma_{u,mv}^2.

Actually, Equation (4.18) should be modified for the following reason. Suppose that we want the closed-loop input variable variance equal to or smaller than \sigma_u^2; the optimization problem will be

\min_{u_t} E\{y_{t+f+1}^2\}    (4.19)

with the open-loop control system

y_{t+f+1} = \frac{\omega(z^{-1})}{\delta(z^{-1})} u_t + \frac{\theta(z^{-1})}{\phi(z^{-1})} a_{t+f+1}    (4.20)

subject to a constraint on the closed-loop input variable variance

E\{u_t^2\} \le \sigma_u^2    (4.21)

The Lagrangian of the above optimization problem is (Luenberger, D. (1969))

\min_{u_t} E\{y_{t+f+1}^2 + \lambda (u_t^2 - \sigma_u^2)\}    (4.22)

not

\min_{u_t} E\{y_{t+f+1}^2 + \lambda u_t^2\}    (4.23)

The solution of the above optimization problem must satisfy the Kuhn-Tucker theorem, ie. we must have (Luenberger, D. (1969))

\lambda \left( E\{u_t^2\} - \sigma_u^2 \right) = 0    (4.24)

In case it happens that \sigma_{u,mv}^2 is smaller than \sigma_u^2, \lambda will be zero. Otherwise, E\{u_t^2\} = \sigma_u^2. So in both cases, the Kuhn-Tucker theorem is satisfied. This Kuhn-Tucker condition follows from Equation (4.22), not from Equation (4.18). Nevertheless, these two control criteria give the same controller, and the latter has established itself in the control literature; therefore we will refer to this optimal criterion when we mention the LQG controller. In Equation (4.21), the strict inequality (<) applies in the case where \sigma_{u,mv}^2 is smaller than \sigma_u^2, and equality applies in the case where \sigma_{u,mv}^2 is greater than \sigma_u^2.

Statistically, Equation (4.22) should also be modified. This is because the optimization is taken at time t and y_{t+f+1} is in the future of time t. At this time y_t and a_t are given,
so the expectation should be the conditional expectation given y_t or a_t, which means we must have either

\min_{u_t} E\{y_{t+f+1}^2 + \lambda (u_t^2 - \sigma_u^2) \mid y_t\}    (4.25)

or

\min_{u_t} E\{y_{t+f+1}^2 + \lambda (u_t^2 - \sigma_u^2) \mid a_t\}    (4.26)

From the above equation, we can write

\min_{u_t} E\left\{ \left( \frac{\omega(z^{-1})}{\delta(z^{-1})} u_t + \frac{\theta(z^{-1})}{\phi(z^{-1})} a_{t+f+1} \right)^2 + \lambda (u_t^2 - \sigma_u^2) \;\middle|\; a_t \right\}    (4.27)

By using the Diophantine equation we had before, we can write the above equation as

\min_{u_t} E\left\{ \left( \psi(z^{-1}) a_{t+f+1} \right)^2 + \left( \frac{\omega(z^{-1})}{\delta(z^{-1})} u_t + \frac{\gamma(z^{-1})}{\phi(z^{-1})} a_t \right)^2 + \lambda (u_t^2 - \sigma_u^2) \;\middle|\; a_t \right\}    (4.28)

or

\min_{u_t} \; \sigma_{mv}^2 + E\left\{ \left( \frac{\omega(z^{-1})}{\delta(z^{-1})} u_t + \frac{\gamma(z^{-1})}{\phi(z^{-1})} a_t \right)^2 + \lambda (u_t^2 - \sigma_u^2) \;\middle|\; a_t \right\}    (4.29)

Now the expectation of a_{t-k} (k \ge 0) given a_t is a_{t-k}, and similarly for u_{t-k} (k \ge 0), because u_t is a linear combination of a_{t-k} (k \ge 0). This means the expectation operator vanishes and we are left with a quadratic equation in the variable u_t. In Vu, K. (1991), the author has obtained the solution for this problem as below:

u_t = -\frac{\delta(z^{-1}) \gamma(z^{-1})}{\phi(z^{-1}) \left[ \omega(z^{-1}) + \dfrac{\lambda}{\omega_0} \delta(z^{-1}) \right]} a_t    (4.30)

and therefore the LQG controller is given as follows:

u_t = -\frac{\delta(z^{-1}) \gamma(z^{-1})}{\omega(z^{-1}) \phi(z^{-1}) \psi(z^{-1}) + \dfrac{\lambda}{\omega_0} \theta(z^{-1}) \delta(z^{-1})} y_t    (4.31)

So compared to the minimum variance controller, the LQG controller has the additional denominator term (\lambda/\omega_0) \theta(z^{-1}) \delta(z^{-1}). Now if we put the above equation back into the Box-Jenkins model equation, we will get an equation for the time series the output variable will follow under feedback:

y_t = \frac{\omega(z^{-1}) \phi(z^{-1}) \psi(z^{-1}) + \dfrac{\lambda}{\omega_0} \theta(z^{-1}) \delta(z^{-1})}{\phi(z^{-1}) \left[ \omega(z^{-1}) + \dfrac{\lambda}{\omega_0} \delta(z^{-1}) \right]} a_t    (4.32)

From the two equations for u_t and y_t, we can find the following relationship between the variables y_t and u_t under feedback

y_t = -\frac{\lambda}{\omega_0} u_{t-f-1} + \psi(z^{-1}) a_t    (4.33)

where \psi(z^{-1}) is given in the Diophantine equation mentioned previously. From this equation, we can find the following relationship between the variance \sigma_y^2 of y_t and the variance \sigma_u^2 of u_t under feedback:

\sigma_y^2 = \left( \frac{\lambda}{\omega_0} \right)^2 \sigma_u^2 + \sigma_{mv}^2    (4.34)

This relationship can be seen easily from the Clarke-Gawthrop self-tuning controller. We will mention this controller in a later section.
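The structure of the controller (4.31) can be checked by polynomial arithmetic: its denominator is the minimum variance denominator \omega\phi\psi plus the term (\lambda/\omega_0)\theta\delta, so setting \lambda = 0 must recover the minimum variance controller exactly. A minimal sketch (illustrative helper names; coefficient lists are ascending in powers of z^{-1}):

```python
def poly_mul(a, b):
    """Convolution of two coefficient lists (ascending powers of z^-1)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def lqg_denominator(omega, phi, psi, theta, delta, lam):
    """omega*phi*psi + (lam/omega_0)*theta*delta of Equation (4.31);
    omega[0] is the leading coefficient omega_0."""
    mv = poly_mul(poly_mul(omega, phi), psi)
    extra = poly_mul(theta, delta)
    n = max(len(mv), len(extra))
    mv += [0.0] * (n - len(mv))
    extra += [0.0] * (n - len(extra))
    return [m + (lam / omega[0]) * e for m, e in zip(mv, extra)]
```

Increasing \lambda detunes the controller away from minimum variance, trading a larger output variance for a smaller input variance as in Equation (4.34).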
From the above equation, we can write

\sigma_y^2 - \left( \frac{\lambda}{\omega_0} \right)^2 \sigma_u^2 = \sigma_{mv}^2    (4.35)

and see that if \lambda is much smaller than \omega_0, an increase in the output variance \sigma_y^2 will be compensated for by a huge reduction in the input variance \sigma_u^2. This is the attraction of constraining the input variance and of the LQG controller.

4.4 The PID Controller

In this section, we will discuss the stochastic PID controller, ie. the control system is disturbed by an ARIMA time series. There are actually two types of methods to obtain the gains for a PID controller: one is model-based and the other is empirical. Most of the empirical methods were suggested early on, and most of the model-based methods have been suggested recently. From the beginning of PID control, control engineers have worked on methods to obtain the optimal gains. The early work was by Ziegler, J. G. and Nichols, N. B. (1942), who obtained the gains from the closed-loop ultimate gain and frequency. When the method was suggested, this ultimate oscillation was obtained by on-line tuning with the loop closed. However, it can also be achieved with a relay feedback in parallel with the controller (Astrom, K. J. and Wittenmark, B. (1989)); the ultimate gain and frequency can then be calculated from the limit cycle produced by an ideal relay. In 1953, Cohen, G. H. and Coon, G. A. introduced the concept of the open-loop process reaction curve to approximate the control loop by a delayed first order system, and obtained the optimal gains for minimum offset, one-quarter decay ratio and minimum integral square error response to a disturbance change. A method that uses a concept similar to that of the Dahlin controller is the Lambda (\lambda) tuning method (Thomasson, F. Y. (1995)). In this method, the closed-loop transfer functions of simple processes are derived to be first order with unity gain, and the PID controller gains are expressed as functions of this closed-loop time constant \lambda.
Tuning means varying this \lambda constant until satisfactory performance is achieved. More recently, Rivera, D. E. et al. (1986) proposed tuning rules based on the Internal Model Control (IMC) design procedure. To use this method, we normally have to approximate the delay by a function (Pade) and reduce the dimension of the process. As closed-loop performance is a much discussed topic of process control, new tuning rules for desired closed-loop performance were suggested in Cluett, W. R. and Wang, L. (1996). These sets of rules are different for different processes. The rules are based on the closed-loop time constant and the ratio of the open-loop time constant to the dead time. In their book, Astrom, K. J. and Hagglund, T. (1988) discussed a number of methods to tune analogue PID controllers. These methods include: the Ziegler-Nichols methods (both step and frequency response), the modified Ziegler-Nichols method, the dominant pole method, the frequency domain design method and the pole placement method. The frequency response method of Ziegler-Nichols is the one we mentioned above; it is the more well-known one. The step response method gives recommended settings as functions of two parameters: the apparent dead time, and a constant which is the ratio of the product of the gain and the dead time to the time constant. The modified Ziegler-Nichols method generalizes the Ziegler-Nichols frequency response method by interpreting the latter as moving one point on the Nyquist curve to a desired position: it moves the point where the Nyquist curve intersects the negative real axis to a position with a phase advance of 25° at the ultimate frequency. The modified Ziegler-Nichols method suggests that other points of the Nyquist curve can be moved to other positions. The Ziegler-Nichols methods are based on the knowledge of one point on the Nyquist curve of the open-loop process dynamics.
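As a concrete illustration of the frequency response method just described, the classic Ziegler-Nichols settings follow directly from the ultimate gain K_u and ultimate period P_u measured at the stability limit (a sketch in parallel PID form; the constants are the standard published values, and the function name is illustrative):

```python
def ziegler_nichols_pid(Ku, Pu):
    """Classic Ziegler-Nichols (1942) frequency-response settings from the
    ultimate gain Ku and ultimate period Pu: Kp = 0.6 Ku, Ti = Pu/2, Td = Pu/8.
    Returned in parallel form (Kp, Ki, Kd) with Ki = Kp/Ti and Kd = Kp*Td."""
    Kp = 0.6 * Ku
    Ti = Pu / 2.0
    Td = Pu / 8.0
    return Kp, Kp / Ti, Kp * Td

gains = ziegler_nichols_pid(10.0, 2.0)
```

With K_u = 10 and P_u = 2 this gives K_p = 6, K_i = 6 and K_d = 1.5; the relay feedback experiment mentioned above supplies K_u and P_u from the limit cycle amplitude and period.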
In the dominant pole design, two points on the Nyquist curve are used. The points correspond to two dominant poles of the closed-loop system. The reason for two poles is that the poles can be a complex pair. The controller then needs at least two parameters (PI or PD) to move these two poles to the desired locations; for a PID controller, one parameter must be restricted. If several points on the Nyquist curve are known, then the frequency domain design method is the one suggested in the book. A design with a closed-loop frequency response of unit gain and a resonance peak M_p is the criterion of the method. There are M curves of the closed-loop transfer function, and the M_p value is the largest value of M on the Nyquist curve. The closed-loop performance is specified by the M_p value, which is chosen in the range 1.1-1.5. The pole placement method is used for lower order systems, so that the controller can place all the poles, not just the dominant ones. More recent work on analogue PID controllers by Zervos, C. et al. (1988) introduced the concept of orthonormal series to represent a system and from them tuned the controller gains. Perhaps the KLV method of Landau, I. D. and Voda, A. (1992) is the most systematic method to design a PID controller. The acronym KLV probably comes from the names Kessler, Landau and Voda, because the latter two used the former's Symmetrische Optimum (Kessler, C. (1958)) to design their auto-calibration method. The method claims to give better results than the Ziegler-Nichols frequency response method. The closed-loop characteristics such as response and overshoot can be calculated easily. It assures a good robustness margin and boasts a large field of applications. Works on multivariable analogue PID controllers use eigenvalue or pole assignment to place the closed-loop poles for stability (Park, H. and Seborg, D. E. (1974), Seraji, H. and Tarokh, M. (1977)).
A discrete version of the eigenvalue assignment method was studied by Stojic, M. R. and Petrovic, T. B. (1986), who used a grapho-analytical pole placement procedure to obtain the gains. Meo, J. A. C. et al. (1986) used projective control to obtain the gains. The adaptive or self tuning PID controller was discussed by Radke, F. and Isermann, R. (1987), who used two-stage least squares to estimate the parameters of a system and a numerical optimization procedure to estimate the controller gains. In 1975, MacGregor, J. F. et al. suggested a method to obtain the optimal gains by plotting contours of variances under feedback; the optimal gains are obtained by extrapolation from the contours. This is labor intensive, and the method does not work well for three-parameter PID controllers. In Vu, K. (1992), the author used an algorithm similar to one used to calculate the steady state Kalman filter to obtain the optimal gains. Even though the theory of the method is correct, unfortunately the suggested algorithm fails on systems with a delay or with a nonminimum phase. Since in this thesis we choose the Box-Jenkins model, we will improve the design of a PID controller for this model. In the following, we present our work on the PID controller.

4.4.1 The Time Series Variance Formulae

Since under feedback both the input variable and the output variable are time series driven by the same white noise, and our purpose in control design is to have minimum variance of the output variable, it is necessary to mention the variance formulae of a stationary time series. The time series variance formulae have been studied extensively by the author and the results have been published (Vu, K. (1988)). There are three ways to calculate the variance of a stationary time series. We will briefly present them here for quick reference.
For a stationary time series y_t:

y_t = \frac{1 - \theta_1 z^{-1} - \theta_2 z^{-2} - \cdots - \theta_n z^{-n}}{1 - \phi_1 z^{-1} - \phi_2 z^{-2} - \cdots - \phi_n z^{-n}} a_t    (4.36)

its variance can be calculated by one of the following methods.

The Residue Method (Jury, E. I. (1964))

The variance can be expressed as the contour integral

\sigma_y^2 = \frac{\sigma_a^2}{2\pi i} \oint_C \frac{\theta(z^{-1}) \theta(z)}{\phi(z^{-1}) \phi(z)} \frac{dz}{z} = \sigma_a^2 \sum_{\text{poles}} \text{Residues of } \frac{\theta(z^{-1}) \theta(z)}{z\, \phi(z^{-1}) \phi(z)}    (4.37-4.38)

and it can be evaluated as a ratio of two determinants,

\sigma_y^2 = \sigma_a^2 \frac{|\Omega_1|}{|\Omega_0|}    (4.39)

where \Omega_1 and \Omega_0 are the matrices of Equations (4.40)-(4.42), assembled from the coefficients \theta_i and \phi_i; \Omega_1 is identical to \Omega_0 except for its first column, which is built from the products of the \theta_i coefficients.

The Recursive Method (Astrom, K. J. (1970))

The variance can be calculated recursively. Starting from \phi^{(n)}(z^{-1}) = \phi(z^{-1}) and \theta^{(n)}(z^{-1}) = \theta(z^{-1}), a sequence of polynomial pairs of decreasing order k = n, n-1, \ldots, 1 is generated by the order-reduction recursions of Equations (4.44)-(4.52), with \alpha_k and \beta_k the normalized trailing coefficients \phi_k^{(k)}/\phi_0^{(k)} and \theta_k^{(k)}/\phi_0^{(k)}, and the variance accumulates as

I_k = (1 - \alpha_k^2) I_{k-1} + \beta_k^2    (4.43)

with \sigma_y^2 = I_n \sigma_a^2 (4.53). The sequence of calculations can be tabulated, each row holding the coefficients \theta_i^{(k)} and \phi_i^{(k)} of one order-reduction step.

The State Space Model Method (Vu, K. (1988))

The variance is given by a ratio of two scalars formed directly from the model coefficients,

\sigma_y^2 = \frac{u}{v} \sigma_a^2    (4.54)

where u and v are the scalar expressions of Equations (4.55)-(4.56): u collects the terms 1 + \sum_{i=1}^{n} (\phi_i - \theta_i)^2 - \sum_{i=1}^{n} \theta_i^2 together with a correction term built from the matrix of the \phi_i coefficients, and v collects 1 - \sum_{i=1}^{n} \phi_i^2 together with the corresponding correction term.
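Whichever closed-form route is taken, the result can be cross-checked numerically: writing y_t = \psi(z^{-1}) a_t with \psi = \theta/\phi, the variance is \sigma_a^2 \sum_k \psi_k^2, and truncating the sum gives an independent estimate. A sketch of this check (an illustrative helper, not one of the three formulae above; polynomials are passed as coefficient lists [c_1, c_2, ...] of 1 - c_1 z^{-1} - c_2 z^{-2} - \cdots):

```python
def arma_variance(theta, phi, n_terms=2000, sigma_a2=1.0):
    """Variance of y_t = theta(z^-1)/phi(z^-1) a_t by summing squared
    impulse-response weights psi_k, obtained from phi*psi = theta:
    psi_k = [k == 0] - theta_k + sum_j phi_j psi_{k-j}."""
    psi = []
    for k in range(n_terms):
        val = 1.0 if k == 0 else 0.0
        if 1 <= k <= len(theta):
            val -= theta[k - 1]
        for j, pj in enumerate(phi, start=1):
            if k - j >= 0:
                val += pj * psi[k - j]
        psi.append(val)
    return sigma_a2 * sum(p * p for p in psi)
```

For the AR(1) series with \phi_1 = 0.5 this returns 1/(1 - 0.25) \approx 1.3333, and for the MA(1) series with \theta_1 = 0.4 it returns 1.16, both matching the residue formula.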
4.4.2 The Minimum Variance PID Controller

Now we will discuss procedures to obtain the minimum variance PID controller gains for the Box-Jenkins stochastic control system. When the control interval of a control system coincides with its sampling interval, we can write a discrete PID controller as follows:

u_t = k_p y_t + k_i \sum^{t} y_i + k_d (y_t - y_{t-1})    (4.57)

or

u_t - u_{t-1} = (k_p + k_i + k_d) y_t + (-k_p - 2 k_d) y_{t-1} + k_d y_{t-2}    (4.58)
             = l_1 y_t + l_2 y_{t-1} + l_3 y_{t-2}    (4.59)
             = l(z^{-1}) y_t    (4.60)

with k_p as the proportional gain, k_i as the integral gain and k_d as the derivative gain. By replacing the input variable in the Box-Jenkins model of the control system, we can write the output variable as follows:

y_{t+f+1} = \frac{\omega(z^{-1})}{\delta(z^{-1})} \frac{l(z^{-1})}{1 - z^{-1}} y_t + \frac{\theta(z^{-1})}{\phi(z^{-1})} a_{t+f+1}    (4.62)

Moving the first term of the right hand side of the above equation to the left hand side and expressing y_{t+f+1} as a time series, we obtain

y_{t+f+1} = \frac{\delta(z^{-1}) \theta(z^{-1}) (1 - z^{-1})}{\phi(z^{-1}) \left[ \delta(z^{-1})(1 - z^{-1}) - \omega(z^{-1}) l(z^{-1}) z^{-f-1} \right]} a_{t+f+1}    (4.63)

If the polynomial \phi(z^{-1}) has a root on the unit circle, we can factor this root out and write

\phi(z^{-1}) = \phi^*(z^{-1})(1 - z^{-1})    (4.64)

then Equation (4.63) can be written as:

y_{t+f+1} = \frac{\delta(z^{-1}) \theta(z^{-1})}{\phi^*(z^{-1}) \left[ \delta(z^{-1})(1 - z^{-1}) - \omega(z^{-1}) l(z^{-1}) z^{-f-1} \right]} a_{t+f+1}    (4.65)

On the other hand, if \phi(z^{-1}) has no roots on the unit circle, we can divide both the numerator and denominator of the right hand side of Equation (4.63) by 1 - z^{-1} and obtain

y_{t+f+1} = \frac{\delta(z^{-1}) \theta(z^{-1})}{\phi(z^{-1}) \left[ \delta(z^{-1}) - \omega(z^{-1}) l^*(z^{-1}) z^{-f-1} \right]} a_{t+f+1}    (4.66)

with

l^*(z^{-1}) = \frac{l(z^{-1})}{1 - z^{-1}}    (4.67)

This is necessary for the time series y_t to be invertible. In both cases, under feedback the output variable will follow a time series of the form:

y_{t+f+1} = \frac{\alpha(z^{-1})}{\beta(z^{-1})} a_{t+f+1} = \frac{1 - \alpha_1 z^{-1} - \cdots - \alpha_m z^{-m}}{1 - \beta_1 z^{-1} - \cdots - \beta_n z^{-n}} a_{t+f+1}    (4.68-4.69)

Now we consider the following system:
y_t = \frac{\omega}{1 - \delta_1 z^{-1} - \delta_2 z^{-2}} u_{t-1} + \frac{1 - \theta z^{-1}}{1 - z^{-1}} a_t    (4.70)

Under PID feedback, the output variable time series is given by

y_t = \frac{(1 - \delta_1 z^{-1} - \delta_2 z^{-2})(1 - \theta z^{-1})}{(1 - \delta_1 z^{-1} - \delta_2 z^{-2})(1 - z^{-1}) - \omega z^{-1} (l_1 + l_2 z^{-1} + l_3 z^{-2})} a_t    (4.71)

= \frac{1 - (\delta_1 + \theta) z^{-1} - (\delta_2 - \delta_1 \theta) z^{-2} + \delta_2 \theta z^{-3}}{1 - (1 + \delta_1 + \omega l_1) z^{-1} - (\delta_2 - \delta_1 + \omega l_2) z^{-2} - (-\delta_2 + \omega l_3) z^{-3}} a_t    (4.72-4.73)

Since a time series will have the minimum variance when it is white, y_t will have the minimum variance when

\delta_1 + \theta = 1 + \delta_1 + \omega l_1    (4.74)
\delta_2 - \delta_1 \theta = \delta_2 - \delta_1 + \omega l_2    (4.75)
-\delta_2 \theta = -\delta_2 + \omega l_3    (4.76)

which means

l_1 = -\frac{1 - \theta}{\omega}    (4.77)
l_2 = \frac{(1 - \theta) \delta_1}{\omega}    (4.78)
l_3 = \frac{(1 - \theta) \delta_2}{\omega}    (4.79)

or

k_p = -\frac{(1 - \theta)(\delta_1 + 2 \delta_2)}{\omega}    (4.80)
k_i = -\frac{(1 - \theta)(1 - \delta_1 - \delta_2)}{\omega}    (4.81)
k_d = \frac{(1 - \theta) \delta_2}{\omega}    (4.82)

So for a system whose dynamics are of second order or lower, with no transmission zeros and no delay, and whose disturbance is an integrated moving average of first order, the minimum variance controller is a PID controller; the optimal controller gains are given by the above equations. In other cases, the optimal controller gains cannot be obtained directly from the parameters of the system, but only through a numerical procedure outlined in the following discussion.

In Equation (4.69), the controller gains appear only in the \beta(z^{-1}) polynomial, and only from coefficient \beta_{f+1} to coefficient \beta_n. In practice, m is usually smaller than n. In the following, for simplicity, we will assume that m equals n; we can always do that by adding trailing zeros to the polynomial with fewer coefficients. If we use the formula given by the method of residues to express the variance of the output variable under feedback, we obtain

\sigma_y^2 = \sigma_a^2 \frac{|\Omega_1|}{|\Omega_0|}    (4.83)

where \Omega_0 and \Omega_1 are the matrices of Equations (4.84)-(4.86), assembled from the closed-loop coefficients \alpha_i and \beta_i. The matrix \Omega_1 is identical to the matrix \Omega_0 except for its first column, which is given in
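The gain formulae (4.77)-(4.82) can be verified mechanically: with these gains, the numerator and denominator of Equation (4.72) coincide coefficient by coefficient, so under feedback y_t = a_t is white. A sketch with illustrative parameter values (not taken from the thesis examples):

```python
delta1, delta2, theta, omega = 0.6, -0.2, 0.4, 1.5   # illustrative values

# minimum variance PID gains, Equations (4.77)-(4.79)
l1 = -(1.0 - theta) / omega
l2 = (1.0 - theta) * delta1 / omega
l3 = (1.0 - theta) * delta2 / omega

# closed-loop numerator and denominator of Equation (4.72)
num = [1.0, -(delta1 + theta), -(delta2 - delta1 * theta), delta2 * theta]
den = [1.0, -(1.0 + delta1 + omega * l1),
       -(delta2 - delta1 + omega * l2), -(-delta2 + omega * l3)]

# equal polynomials imply y_t = a_t (white noise) under feedback
assert all(abs(n - d) < 1e-12 for n, d in zip(num, den))

# the corresponding PID gains of Equations (4.80)-(4.82)
kp = -l2 - 2.0 * l3
ki = l1 + l2 + l3
kd = l3
```

The last three lines invert the gain relation of Equation (4.58); for these parameter values, k_p = -0.08, k_i = -0.24 and k_d = -0.08.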
1 " 1 - f t - f t - f t ••• - f t " 1 - f t -ft 1 - f t - f t ••• - f t - i - f t " f t - f t 1 - f t - f t - f t ••• + 1 - f t -Pn-l - f t 1 ~Pn (4.84) - f t • • • - f t •• - f t (4.85) and 2 + 2 £ a 2 -2ai + 2 £ aiQjj+i -2ft - f t - 2 f t -ft - ft -2a3 + 2 £ a,-a,-+i - f t+ i - f t - ft+2 -2a„_i + 2 a i a „ — f t - 2 a „ 0 0 0 - 2 f t _ i ~Pn-2 — Pn — ft-j-1 1 . 0 - 2 f t - f t - i "f t - i - f t (4.86) The matrix Vti is identical to the matrix S7 0 except for the first column which is given in 78 CHAPTER 4. CONTROLLERS more details below 2 + 2 £ I U a\ -2a 1 + 2 E ^ i 1 a i ^ + i -2an_i + 2a x a n -2a„ (4.87) To obtain the optimal (minimum variance) controller gains, we can take the derivatives of the closed-loop variance a2 with respect to the controller gains and set them to zeros. In doing so, we obtain = 0 for J = l,--< 7 „ (4.88) (4.89) We can solve the above set of equations with a numerical procedure to obtain the optimal controller gains. Since the above set of equations involves determinants and the expression for a determinant has a lot of terms, we have to be careful when the dimension of the matrix is large. In this case, we can use the formula given by the state space model method. In fact, for more practical purpose, we should always use this formula, because it is a ratio of two differentiable scalars. The formula is: cr., = T r - i i + E L i ( " * - f l 0 2 - & 2 + 2*'r ft n - 1 i - E L i # - 2 E L i fikhw Efc=l PkPn+k-l 1 - 1 ft (4.90) = —o~„ (4.91) 4.4. THE PID CONTROLLER 79 where u and v are two functions of the controller gains and u = i + ] T K - f t ) 2 - f t 2 + 2£ Tr-k=l ft ft L ft-i J i - £ f t 2 - 2 Efe=i ftft+i nL=i ftft+^-i ftft ft ft ft-i Efc=j OfcCV/t+i - Q'/tft+i - a/c+ift E L l a t a « + i 5 : - ; - (XkPn+k-l ~ Qn+k-t/3k (4.92) (4.93) (4.94) r = i r ft ft • • ft" " 0 0 ... o ft ft • • 0 — ft o ... o . ft 0 • • o . 
ft-2 • • • ft o (4.95) If we take the derivative of the closed-loop variance a2y with respect to the controller gain Ij, we have 2V u'v — uv' (4.96) where u' and v' are the derivatives of u and v with respect to the controller gain Ij. For the controller gain Ij to be optimal in the sense of minimum variance, we must have 0 (4.97) which means u'v — uv' = 0 (4.98) 80 CHAPTER 4. CONTROLLERS and this equation can be used to obtain the controller gain lj. So the problem of designing a PID controller finally becomes the problem of solving the following set of equations numericallv: h = h = h = UV UV U[3V — UV = 0 = 0 = 0 (4.99) (4.100) (4.101) If the Newton-Raphson equation is used to obtain the gains, then we will iterate with the following equation: " l l ' " h' h h h • 1 h r dh dh dh dh dh2 dh dh k i dh dh3 dh dh I dh dh dh -I - 1 h h h (4.102) where dhj di3 d_ dh (uuv du dv d2u v H dljdh dh dlj du dv dlj dh d2v dljdk (4.103) (4.104) The detailed expressions for the derivatives are included in Appendix A. With the given derivatives, we can easily obtain the gains / j , j = 1, • • • 3 by the Newton-Raphson iteration. The PID controller gains are related to these gains as in the following equations: kp k{ kd 1 1 -1 0 0 0 0 -1 1 1 0 0 1 - 2 1 - 2 1 1 h h h h h h (4.105) (4.106) When the disturbance is stationary, there is no need for integral action, = 0, the relationship between these, two gain vectors is as follows: (4.107; kp " i r " 7* " '1 0 - 1 / * 2 4.4. THE PID CONTROLLER 81 4.4.3 The Linear Quadratic Gaussian PID Controller In this section, we will briefly discuss the minimum variance PID controller with a con-straint on the input variable variance, ie. a linear quadratic Gaussian PID controller. This extension is actually quite easy. The control criterion can be seen to be given as Min E{y2+f+1 + Xu2} (4.108) r for the case of a PD controller or MinE{y2+f+l + X(Vut)2} (4.109) for the case of a PID controller. 
The vectors l^* and l are the controller gain vectors as before. In the first case, we have a stationary disturbance and the controller is a PD controller. In the second case, the controller is a PID controller, because the disturbance is nonstationary; in this case, we constrain the variance of the differenced input variable \nabla u_t. From the control criteria, we can derive equations to solve for the optimal (but not necessarily minimum variance) controller gains as follows. For the PID controller case, we have

\nabla u_t = l(z^{-1}) y_t    (4.110)

So if the output variable y_t follows the time series

y_t = \frac{\alpha(z^{-1})}{\beta(z^{-1})} a_t    (4.111)

then the differenced input variable \nabla u_t will follow the time series:

\nabla u_t = \frac{\eta(z^{-1})}{\beta(z^{-1})} a_t    (4.112)

with

\eta(z^{-1}) = l(z^{-1}) \alpha(z^{-1})    (4.113)

The polynomial \eta(z^{-1}) is usually not monic. Now, by padding the polynomials \alpha(z^{-1}), \beta(z^{-1}) or \eta(z^{-1}), we can assume that all three polynomials have the same order n. We write the control criterion

\min_{l} E\{y_{t+f+1}^2 + \lambda (\nabla u_t)^2\}    (4.114)

as

\min_{l} \; \sigma_y^2 + \lambda \sigma_{\nabla u}^2    (4.115)

with \sigma_y^2 = (u/v) \sigma_a^2 as before, where u and v are the scalar functions of the controller gains given in Equations (4.116)-(4.117). Since the moving average polynomial \eta(z^{-1}) of the time series \nabla u_t is not monic, we have to use another formula for its variance,

\sigma_{\nabla u}^2 = \frac{w}{v} \sigma_a^2    (4.118-4.119)

where w is the scalar function of the controller gains given in Equations (4.120)-(4.121), assembled from the coefficients \eta_k and \beta_k in the same way that u is assembled from \alpha_k and \beta_k.
The denominator scalar v and the correction matrices of Equations (4.122)-(4.125) are the same as in the minimum variance case. Now if we define

\sigma^2 = \sigma_y^2 + \lambda \sigma_{\nabla u}^2 = \frac{u + \lambda w}{v} \sigma_a^2    (4.126-4.127)

then we can take the derivative of the quantity \sigma^2 with respect to the controller gain l_j:

\frac{\partial \sigma^2}{\partial l_j} = \left[ \frac{u' v - u v'}{v^2} + \lambda \frac{w' v - w v'}{v^2} \right] \sigma_a^2    (4.128)

where u', w' and v' are the derivatives of u, w and v with respect to the controller gain l_j. If the derivatives of \sigma^2 are set to zero, we obtain the equations to solve for the optimal controller gains:

h_1 = u'_{l_1} v - u v'_{l_1} + \lambda (w'_{l_1} v - w v'_{l_1}) = 0    (4.129)
h_2 = u'_{l_2} v - u v'_{l_2} + \lambda (w'_{l_2} v - w v'_{l_2}) = 0    (4.130)
h_3 = u'_{l_3} v - u v'_{l_3} + \lambda (w'_{l_3} v - w v'_{l_3}) = 0    (4.131)

With the above equations and a given value of \lambda, we can follow a similar approach as in the case of the minimum variance PID controller to obtain the optimal controller gains. If the method of Newton-Raphson is used then, as discussed before, we need further differentiation. In this case, we need

\frac{\partial h_j}{\partial l_i} = \left( \frac{\partial^2 u}{\partial l_j \partial l_i} v + \frac{\partial u}{\partial l_j} \frac{\partial v}{\partial l_i} - \frac{\partial u}{\partial l_i} \frac{\partial v}{\partial l_j} - u \frac{\partial^2 v}{\partial l_j \partial l_i} \right) + \lambda \left( \frac{\partial^2 w}{\partial l_j \partial l_i} v + \frac{\partial w}{\partial l_j} \frac{\partial v}{\partial l_i} - \frac{\partial w}{\partial l_i} \frac{\partial v}{\partial l_j} - w \frac{\partial^2 v}{\partial l_j \partial l_i} \right)    (4.132-4.133)

With the first and second order derivatives of the quantities u and v the same as in the case of the minimum variance PID controller, and the first and second order derivatives of w included in Appendix A, we can obtain the optimal controller gains via the Newton-Raphson iteration.

4.4.4 The Pole Placement PID Controller

The Newton-Raphson method has an advantage in that it is a method known to all engineers. Its distinctive feature is that when it converges, it converges very fast: it has quadratic convergence, compared to the linear or superlinear convergence of other methods.
The only problem is that the starting point must be close to the solution. Therefore, to use this method, we must either know a neighbourhood of the solution or modify the method so that it can be used from any point outside a close neighbourhood. Since modification of the method is not simple in the multidimensional Newton-Raphson case, it is better to locate a neighbourhood of the solution. This approach also mitigates the problem of local versus global minima, and therefore we present an approach to look for a neighbourhood of the optimal solution. If we just look for a neighbourhood of the optimal gains by varying the three values k_p, k_i and k_d, or the vector l, the problem is difficult because of the unbounded range of values they can assume. However, because the poles of the closed loop must be stable, we can instead search for a neighbourhood of the optimal controller gains through the stable poles the controller can assign. The pole assignment PID controller is not a new concept. It has been used to design PID controllers for continuous systems (Park, H. and Seborg, D. E. (1974); Seraji, H. and Tarokh, M. (1977)). The problem with the pole placement methodology is the physical meaning of the assigned poles. In this application, pole assignment will be used only to locate a neighbourhood of the optimal solution. Under feedback, the output variable y_t either follows the time series

y_t = δ(z^{-1})θ(z^{-1}) / [δ(z^{-1})φ(z^{-1}) − ω(z^{-1})φ(z^{-1})z^{-f-1}l*(z^{-1})] a_t    (4.134)
    = δ(z^{-1})θ(z^{-1}) / {φ(z^{-1})[δ(z^{-1}) − ω(z^{-1})z^{-f-1}l*(z^{-1})]} a_t    (4.135)
    = δ(z^{-1})θ(z^{-1}) / [φ(z^{-1})ϕ*(z^{-1})] a_t    (4.136)

or the time series

y_t = δ(z^{-1})θ(z^{-1})(1 − z^{-1}) / [δ(z^{-1})φ(z^{-1})(1 − z^{-1}) − ω(z^{-1})φ(z^{-1})z^{-f-1}l(z^{-1})] a_t    (4.137)
    = δ(z^{-1})θ(z^{-1})(1 − z^{-1}) / {φ(z^{-1})[δ(z^{-1})(1 − z^{-1}) − ω(z^{-1})z^{-f-1}l(z^{-1})]} a_t    (4.138)
    = δ(z^{-1})θ(z^{-1})(1 − z^{-1}) / [φ(z^{-1})ϕ(z^{-1})] a_t    (4.139)

The roots of the polynomial φ(z^{-1}), the autoregressive polynomial of the disturbance, correspond to the unassignable poles, and the roots of the polynomials ϕ(z^{-1}) or ϕ*(z^{-1}) correspond to the assignable poles, because they contain the controller gains.
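Because any search over the assignable poles (or any check of a candidate gain vector) must respect closed-loop stability, a quick discrete-time stability test is useful. The sketch below is a generic illustration, not taken from the thesis: it applies the Jury conditions to a second-order characteristic polynomial z² + a₁z + a₂.

```python
def stable_second_order(a1, a2):
    # Jury stability conditions for z^2 + a1*z + a2 = 0:
    # both roots lie strictly inside the unit circle iff
    #   |a2| < 1,  1 + a1 + a2 > 0,  1 - a1 + a2 > 0.
    return abs(a2) < 1 and (1 + a1 + a2) > 0 and (1 - a1 + a2) > 0

# Poles 0.5 and 0.3 give z^2 - 0.8z + 0.15: stable.
stable = stable_second_order(-0.8, 0.15)
# Poles 1.2 and 0.3 give z^2 - 1.5z + 0.36: unstable (a pole outside the circle).
unstable = stable_second_order(-1.5, 0.36)
```

Higher-order characteristic polynomials need the full Jury table, but assigning the independent poles directly inside the unit circle, as the text does, sidesteps most of this checking.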
It must be mentioned that among the assignable poles corresponding to the roots of the polynomial ϕ(z^{-1}) or ϕ*(z^{-1}), only 2 or 3 are independent: two for a PD controller and three for a PID controller. The rest depend on the values of the independent poles and on the coefficients of the polynomial ϕ(z^{-1}) or ϕ*(z^{-1}). Consider the case of the polynomial ϕ(z^{-1}):

ϕ(z^{-1}) = 1 − ϕ_1 z^{-1} − ϕ_2 z^{-2} − ...
          = δ(z^{-1})(1 − z^{-1}) − ω(z^{-1}) z^{-f-1} l(z^{-1})    (4.140)-(4.141)

From the above equations, we can write the coefficient vector φ = [ϕ_1, ϕ_2, ...]ᵀ as

φ = d + W l    (4.142)-(4.143)

where the vector d contains the coefficients of δ(z^{-1})(1 − z^{-1}) and the matrix W is the banded matrix of the coefficients of ω(z^{-1}), shifted down by the f + 1 delay periods (Equation (4.142)). Now suppose we have three assignable independent poles p_1, p_2 and p_3 and a number of dependent parameters x_i, and they form the polynomial product

ϕ(z^{-1}) = (1 − (p_1 + p_2 + p_3)z^{-1} + (p_1p_2 + p_1p_3 + p_2p_3)z^{-2} − p_1p_2p_3 z^{-3})(1 − x_1 z^{-1} − x_2 z^{-2} − ...)    (4.144)-(4.145)

Then from this equation, we can write

φ = p + P x    (4.146)-(4.147)

where p = [Σp_i ; −Σ_{i<j} p_i p_j ; Πp_i ; 0 ; ...]ᵀ and P is the banded matrix whose successive columns contain the coefficients [1, −Σp_i, Σ_{i<j} p_i p_j, −Πp_i] shifted down one row at a time (Equation (4.146)). The parameters x_i come from the number of delay periods f, the number of transmission zeros of the polynomial ω(z^{-1}), and from whether the degree of the polynomial δ(z^{-1})(1 − z^{-1}) is greater than that of the polynomial ω(z^{-1})z^{-f-1}l(z^{-1}). Given the poles p_i, we can calculate these parameters and the controller gain vector l as below. We can write the equation

d + W l = p + P x    (4.148)

or

[W  −P] [l ; x] = p − d    (4.149)

In case we are interested only in the controller gain vector l, we have the least squares solution

[l ; x] = [ WᵀW   −WᵀP ; −PᵀW   PᵀP ]⁻¹ [Wᵀ ; −Pᵀ] (p − d)    (4.150)

The case of the polynomial ϕ*(z^{-1}) can be solved similarly. In this case, we have

ϕ*(z^{-1}) = 1 − ϕ*_1 z^{-1} − ... = δ(z^{-1}) − ω(z^{-1}) z^{-f-1} l*(z^{-1})    (4.151)-(4.152)

and

φ* = d* + W l*    (4.153)-(4.154)

where d* now contains the coefficients of δ(z^{-1}) alone. In this case, the controller gain vector contains only the proportional and derivative gains.
This controller gain vector is given by

[l* ; x] = [ WᵀW   −WᵀP* ; −P*ᵀW   P*ᵀP* ]⁻¹ [Wᵀ ; −P*ᵀ] (p* − d*)    (4.155)

The vector p* and the matrix P* correspond to the case of only two gains. This means they are given by

p* = [p_1 + p_2 ; −p_1 p_2 ; 0 ; ...]ᵀ,  P* the banded matrix of the coefficients [1, −(p_1 + p_2), p_1 p_2] shifted down one row at a time    (4.156)

To look for a neighbourhood of the optimal solution, we can vary the assignable poles p_i over a coarse grid and compare the closed-loop variance of the output variable y_t, or the performance index σ²_y + λσ²_∇u (in the case of the LQG PID controller), until the best point is found. Then the corresponding controller gain vector given by Equation (4.150) or (4.155) can be used as the initial estimate in the Newton-Raphson iteration to obtain the finer optimal solution for the PID controller gains.

4.4.5 Examples

In this section, we consider a few examples to clarify the theory. A program written in the MATLAB language (lqg_pid.m), included in the appendix, was used to calculate the optimal controller gains for the examples in this section. The program has two stopping criteria: the number of iterations cannot exceed 50, and the functions h_i of Equations (4.129) to (4.131) cannot have absolute values greater than 1.0e-15. In the first example, we consider the following control system:

y_t = [0.75/(1 − 0.25z^{-1})] u_{t-2} + [1/(1 − 0.5z^{-1})] a_t    (4.157)

with a_t of unit variance. The minimum variance PID controller gains are required for this system. Under feedback the output variable follows the time series

y_t = δ(z^{-1})θ(z^{-1})(1 − z^{-1}) / [δ(z^{-1})φ(z^{-1})(1 − z^{-1}) − ω(z^{-1})φ(z^{-1})z^{-f-1}l(z^{-1})] a_t    (4.158)

which, because the disturbance of this system is stationary, reduces to the PD form with gain polynomial l*(z^{-1}) = l*_1 + l*_2 z^{-1}:

y_t = δ(z^{-1})θ(z^{-1}) / {φ(z^{-1})[δ(z^{-1}) − ω z^{-2} l*(z^{-1})]} a_t    (4.159)

with δ(z^{-1}) = 1 − 0.25z^{-1}, φ(z^{-1}) = 1 − 0.5z^{-1}, ω = 0.75, θ = 1 and f = 1, so that the characteristic polynomial is

φ(z^{-1})[δ(z^{-1}) − 0.75 z^{-2} l*(z^{-1})] = 1 − 0.75z^{-1} + (0.125 − 0.75l*_1)z^{-2} + (0.375l*_1 − 0.75l*_2)z^{-3} + 0.375l*_2 z^{-4}    (4.160)
The program lqg_pid.m used the initial estimates

l*_1 = −0.5    (4.161)
l*_2 = 0.0    (4.162)

and converged to the values

l*_1 = −0.303427    (4.163)
l*_2 = 0.031008    (4.164)

after 16 iterations. The closed-loop variance of the output variable with these controller gains is σ²_y = 1.2530. This is close enough to the possible minimum variance of 1.25 of the minimum variance controller. The optimal PID controller gains for the system are then:

k_p = −0.2724    (4.165)
k_i = 0    (4.166)
k_d = −0.0310    (4.167)

The results can be found graphically in Figure 4.1. In the second example, we consider the following control system:

y_t = [0.168/(1 − 0.908z^{-1})] u_{t-1} + [1/(1 − 1.3z^{-1} + 0.13z^{-2} + 0.17z^{-3})] a_t    (4.168)

with a_t having the variance σ²_a = 2.37. The system has no delay and the disturbance is nonstationary. The unassignable poles of the system are 0.5887 and −0.2887. With a pole resolution of 0.1, and assignable poles of values 0.8, −0.1 and −0.1, the closed-loop output variable has the smallest variance of 2.4415. This corresponds to the gain vector

l_1 = −7.7857    (4.169)
l_2 = 6.2976    (4.170)
l_3 = 0.0476    (4.171)

Iteration with the above initial estimates for the gain vector l converged, and the obtained gains were:

k_p = −7.3398    (4.172)
k_i = −1.2711    (4.173)
k_d = 0.9050    (4.174)

With these gains the closed-loop output variable variance is 2.3871, which is close to the possible minimum variance of σ²_MV = 2.37. Convergence is obtained in 14 iterations. The results can be found in Figure 4.2. In our third example, we consider a non-minimum phase system

y_t = [(0.25 − 0.3z^{-1})/(1 − 0.8z^{-1})] u_{t-1} + [(1 − 0.2z^{-1})/(1 − 1.3z^{-1} + 0.3z^{-2})] a_t    (4.175)

with the white noise variance σ²_a = 2.0.
With the initial estimate

l_1 = 2.0    (4.176)
l_2 = −1.5    (4.177)
l_3 = 0    (4.178)

the program lqg_pid.m achieved convergence in 14 iterations and obtained the final controller gains

k_p = 1.4337    (4.179)
k_i = 0.3378    (4.180)
k_d = −0.9064    (4.181)

The closed-loop variance given by this PID controller is 30.6067. An orderly search for minimum variance gave the smallest variance of 31.3089, obtained with the following PID controller gains:

k_p = 1.50    (4.182)
k_i = 0.30    (4.183)
k_d = 0.0    (4.184)

The results of the convergence can be found in Figure 4.3. In our final example, we consider the following deterministic second order model:

y_t = [0.25/(1 − 0.9z^{-1} + 0.2z^{-2})] u_{t-1}    (4.185)

To design a PID controller for this system to track step changes in the setpoint, we proceeded as follows. A step change function is equivalent to a random walk in the statistical literature, therefore we have the equivalent system

y_t = [0.25/(1 − 0.9z^{-1} + 0.2z^{-2})] u_{t-1} + [1/(1 − z^{-1})] a_t    (4.186)

For this system, the minimum variance controller is a PID controller. With δ(z^{-1}) = 1 + δ_1 z^{-1} + δ_2 z^{-2} (δ_1 = −0.9, δ_2 = 0.2) and ω_0 = 0.25, the optimal gains are:

k_p = (δ_1 + 2δ_2)/ω_0 = −2.0    (4.187)
k_i = −(1 + δ_1 + δ_2)/ω_0 = −1.2    (4.188)
k_d = −δ_2/ω_0 = −0.8    (4.189)

The controller gain vector l is given by lᵀ = [−4.0  3.6  −0.8]. To constrain the movement of the input variable, we designed LQG PID controllers. By using the program lqg_pid.m with the above value of the vector l for the initial controller gain vector, we obtained the following table:

Table 4.1  LQG PID Controller Design

  λ          0         0.0001    0.001     0.002
  k_p       −2.0000   −1.9751   −1.8576   −1.8189
  k_i       −1.2000   −1.1888   −1.1127   −1.0652
  k_d       −0.8000   −0.7722   −0.5505   −0.3862
  σ²_y       1.0000    1.0003    1.0163    1.0381
  σ²_∇u     29.6000   27.9238   18.8505   14.7595

In the above table, we see a gradual decrease in the magnitudes of the controller gains and in the variance of the differenced input variable ∇u_t as λ increases. This happens at the expense of an increase in the variance of the output variable y_t.
By increasing the constraint constant λ, we put more penalty on the movement of the control element. This results in a decrease in the variance of ∇u_t and an increase in the variance of y_t. However, the trade-off is always favourable: we normally get a large reduction in the variance of ∇u_t for only a slight increase in the variance of y_t. In the above table, at the value λ = 0.002 we get a reduction of 50% in the variance of ∇u_t, but an increase of only 3.8% in the variance of y_t. Since the controller was designed to track the setpoint, we would like to see how the system responds to setpoint changes. Since the above system consists of two first order systems in series, the responses to setpoint changes are fine: there are no overshoots in either the input or the output variable. The PID controller of this system needs no constraint on the movement of the input variable; λ can be set to zero. However, the following system:

y_t = [(0.25 + 0.07z^{-1})/(1 − 0.9z^{-1} + 0.2z^{-2})] u_{t-1}    (4.190)

with the minimum variance gains

k_p = −1.3881    (4.191)
k_i = −1.0072    (4.192)
k_d = −1.5256    (4.193)

and the variances σ²_y = 1.0127 and σ²_∇u = 36.5452, requires some constraint of the input variable movement. At the value λ = 0.003, the controller gains are:

k_p = −1.3855    (4.194)
k_i = −0.8408    (4.195)
k_d = −0.7386    (4.196)

and the variances are σ²_y = 1.0775 and σ²_∇u = 13.3724. At this value of λ, the variance of the output variable increases 6%, but the variance of the incremental input variable decreases 63%: a large gain in the trade-off of the variances. Figure 4.4 shows the responses of y_t and ∇u_t to a step change in the setpoint for two controllers: one with λ = 0.0 and the other with λ = 0.003. By constraining the movement of the input variable, the overshoot of the output variable under the minimum variance PID controller is eliminated.
Figure 4.1. Gain Estimation of a Delayed System (controller gains versus iteration number).

Figure 4.2. Gain Estimation of a Nonstationarily Disturbed System (controller gains versus iteration number).

Figure 4.3. Gain Estimation of a Nonminimum Phase System (controller gains versus iteration number).

Figure 4.4. Responses from PID Feedback (input variable u_t and output variable y_t).

4.5 The Self Tuning Controller

The self tuning controller is a relatively recent development in the control literature. Even though it has not been used as much as the PID controller, it is a well-known controller. Strictly speaking, a self tuning controller is considered different from an adaptive controller, because a self tuning controller is supposed to have unknown but constant gains, while an adaptive controller constantly changes its gains in an adaptive environment. However, if the adaptive environment changes very slowly, then we can use a self tuning controller. The self tuning controller will tune itself to its correct gains before the environment adapts to a new operating point, and will then tune itself again to the new gains corresponding to the new operating point. The self tuning controller can thus be loosely considered an adaptive controller. It is actually a matter of how fast the controller tunes to its correct gains and how slowly the environment changes its characteristics. In practice, we normally find a slowly changing environment, and we want our self tuning controller to tune to its correct gains as fast as possible. The self tuning controller is hence applicable.
4.5.1 The Recursive Least Squares Self Tuning Controller

The Minimum Variance Controller

The first pioneering paper on the self tuning controller was by Åström, K. J. and Wittenmark, B. (1973). In this paper, a minimum variance controller was successfully simulated, and the problem with non-minimum phase systems was also mentioned. In most research, an Åström model has been used for the self tuning controller. In this thesis, we will use the Box-Jenkins model for our self-tuning algorithms. Recall the controller form of the Box-Jenkins model control system:

θ(z^{-1})δ(z^{-1}) [y_{t+f+1} − ψ(z^{-1}) a_{t+f+1}] = ω(z^{-1})φ(z^{-1})ψ(z^{-1}) u_t + δ(z^{-1})γ(z^{-1}) y_t    (4.197)-(4.198)

Writing θ(z^{-1})δ(z^{-1}) = 1 + c_1 z^{-1} + ... + c_{q+s} z^{-(q+s)}, we can rewrite this equation as

y_{t+f+1} = ω(z^{-1})φ(z^{-1})ψ(z^{-1}) u_t + δ(z^{-1})γ(z^{-1}) y_t + ψ(z^{-1}) a_{t+f+1}
            − Σ_{i=1}^{q+s} c_i [y_{t+f+1-i} − ψ(z^{-1}) a_{t+f+1-i}]    (4.199)
          = x_tᵀ β + ψ(z^{-1}) a_{t+f+1} − Σ_{i=1}^{q+s} c_i [y_{t+f+1-i} − ψ(z^{-1}) a_{t+f+1-i}]    (4.200)

where

x_t = [u_t ... u_{t-m}  y_t ... y_{t-n}]ᵀ,  β = [β_0 ... β_{m+n+1}]ᵀ    (4.201)

If the last term in the above equation can be considered zero, which it should be if the feedback actions at previous control times set it to zero or very close to it, we have a least squares fit with y_{t+f+1} as the regressee and x_t as the vector of regressors. The self tuning controller does just that, and therefore a least squares fit will result in unbiased estimation of the controller's parameters. The self tuning algorithm requires the 3 parameters f, m and n. We can write the above equation as

y_{t+f+1} = x_tᵀ β + ε_{t+f+1}    (4.202)

and the en bloc least squares problem

Min_{β_N} Σ_{t=1}^{N-1} (y_{t+f+1} − x_tᵀ β_N)²    (4.203)

gives the parameters as

β_N = A_N⁻¹ b_N    (4.204)

with

A_N = (1/N) Σ_{t=1}^{N} x_t x_tᵀ
    = (1/N) [ Σ u_t u_t ... Σ u_t y_{t-n} ; ... ; Σ y_{t-n} u_t ... Σ y_{t-n} y_{t-n} ]    (4.205)
b_N = (1/N) Σ_{t=1}^{N} x_t y_{t+f+1} = (1/N) [ Σ y_{t+f+1} u_t ; ... ; Σ y_{t+f+1} y_{t-n} ]    (4.206)

Now since ε_{t+f+1} is correlated with the regressor vector x_t, the estimation will be biased. This means that if we collect a number of observations, obtain an estimate of the parameter vector β, and implement a fixed control law with this parameter vector, we will not have a minimum variance control strategy, because the obtained parameter vector is biased. The trick of the self tuning control algorithm is that we estimate the parameter vector recursively and then introduce feedback action to reduce the factor contributing to the bias, until the obtained parameter vector is correct. We will discuss and prove later that this happens when the matrix A_N is singular. The following algorithm can be used for a recursive least squares self tuning controller.

1. At the control time N, get the observed variable y_N and the vector

x_{N-f-1} = [u_{N-f-1} ... u_{N-f-m-1}  y_{N-f-1} ... y_{N-f-n-1}]    (4.207)

and treat these as y_{N+f+1} and x_N.

2. Get the parameter vector β_N from the following recursive least squares fit:

β_N = β_{N-1} + K_N [y_{N+f+1} − x_Nᵀ β_{N-1}]    (4.208)

where

K_N = P_{N-1} x_N / (1 + x_Nᵀ P_{N-1} x_N)    (4.209)

and

P_N = P_{N-1} − P_{N-1} x_N x_Nᵀ P_{N-1} / (1 + x_Nᵀ P_{N-1} x_N)    (4.210)

3. From the estimated parameter vector β_N, calculate the control action u_N from the equation

[u_N ... u_{N-m}  y_N ... y_{N-n}] β_N = 0    (4.211)

This algorithm differs from the original algorithm suggested in Åström, K. J. and Wittenmark, B. (1973) in that no parameters are fixed. Nevertheless, it has proved to be a stable algorithm and has been used in all the simulation runs described later. Figure 4.5 shows a block diagram for a self tuning controller.

Figure 4.5. Block Diagram of a Self Tuning Controller.

The LQG Controller

For a minimum variance self tuning controller, the controller parameters do not converge when the system has a non-minimum phase.
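The recursive least squares update of step 2 (Equations (4.208)-(4.210)) can be sketched in a few lines. The example below is a hypothetical noise-free regression, not a control loop; the helper `rls_update` and its data are illustrative only.

```python
def rls_update(beta, P, x, y):
    # One recursive least squares step:
    #   K    = P x / (1 + x' P x)
    #   beta <- beta + K (y - x' beta)
    #   P    <- P - (P x x' P) / (1 + x' P x)
    n = len(x)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = 1.0 + sum(x[i] * Px[i] for i in range(n))
    K = [Px[i] / denom for i in range(n)]
    err = y - sum(x[i] * beta[i] for i in range(n))
    beta = [beta[i] + K[i] * err for i in range(n)]
    P = [[P[i][j] - Px[i] * Px[j] / denom for j in range(n)] for i in range(n)]
    return beta, P

# Toy regression y = 2*x1 - 3*x2, recovered from noise-free observations.
beta = [0.0, 0.0]
P = [[1000.0, 0.0], [0.0, 1000.0]]  # large initial covariance
data = [([1.0, 0.5], 0.5), ([0.3, 1.0], -2.4),
        ([2.0, -1.0], 7.0), ([1.0, 1.0], -1.0)]
for x, y in data:
    beta, P = rls_update(beta, P, x, y)
# beta approaches [2, -3] as observations accumulate.
```

In the self tuning controller the regressee is y_{N+f+1}, the regressors are the lagged inputs and outputs of Equation (4.207), and the estimate feeds directly into the control law of step 3.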
Research on these non-minimum phase systems suggested a linear quadratic Gaussian (LQG) control law or a pole-zero placement control strategy. A pole-zero placement control strategy has the difficulty of solving a Diophantine equation (Clarke, D. W. (1984)). This difficulty becomes a numerical problem if overestimation of the polynomial orders exists (Allidina, A. Y. and Hughes, F. M. (1980)). Also, for practical purposes, the determination of the pole locations is difficult. Therefore an LQG control law is preferred. The problem with the existing approaches to obtaining the controller's parameters for an LQG self tuning controller is their complexity. An LQG control law usually requires a spectral factorization (Grimble, M. J. (1984)) or the solution of a Riccati equation (Åström, K. J. and Wittenmark, B. (1989)). Both algorithms require extensive computation. Since a self tuning mechanism requires this calculation at every control time, the burden on the control algorithm is obvious. In 1975, Clarke, D. W. and Gawthrop, P. J. proposed a very practical LQG self tuning algorithm. The algorithm is not very different from the minimum variance one. In the following, we will derive it for the Box-Jenkins model. For a non-minimum phase system, a minimum variance control law will have a nonstationary input variable u_t for a stationary output variable y_t, as can be seen from the controller

u_t = − δ(z^{-1})γ(z^{-1}) / [ω(z^{-1})φ(z^{-1})ψ(z^{-1})] y_t    (4.212)

To have a stationary input, we can use an LQG control law.
The LQG control criterion

Min E{y²_{t+f+1} + λ u²_t}    (4.213)

gives the following controller:

u_t = − δ(z^{-1})γ(z^{-1}) / [ω(z^{-1})φ(z^{-1})ψ(z^{-1}) + (λ/ω_0) θ(z^{-1})δ(z^{-1})] y_t    (4.214)

By adding the quantity (λ/ω_0) θ(z^{-1})δ(z^{-1}) u_t to both sides of Equation (4.198), we get the following equation:

θ(z^{-1})δ(z^{-1}) [y_{t+f+1} + (λ/ω_0) u_t − ψ(z^{-1}) a_{t+f+1}]
  = [ω(z^{-1})φ(z^{-1})ψ(z^{-1}) + (λ/ω_0) θ(z^{-1})δ(z^{-1})] u_t + δ(z^{-1})γ(z^{-1}) y_t    (4.215)

From the above equation, we can see that it is possible to design an LQG self tuning controller in a simple way. We can use the self tuning algorithm discussed previously for the minimum variance case with only one exception: we use y_N + (λ/ω_0) u_{N-f-1} instead of y_N in the recursive estimation of β_N. This is the Clarke-Gawthrop LQG self tuning controller for the Box-Jenkins model.

4.5.2 Convergence of the RLS Self-Tuning Algorithm

We will now investigate the parametric convergence of the RLS self tuning algorithm. The stability and convergence of the parameters of a self tuning controller have been studied by Ljung, L. (1977). That approach uses the theory of differential equations. In this section, we will discuss the parametric convergence via matrix theory. This will give us insight into the problem, so that we can present our innovative approach to the self tuning algorithm. But before doing that, let us introduce two matrix theorems that are useful to our analysis.

Theorem 4.1 When a real symmetric matrix B is added to a real symmetric matrix A as follows:

C = A + B    (4.216)

then the eigenvalues of C are those of A, in the same order, shifted by amounts that lie between the smallest and largest eigenvalues of B.

Proof. The theorem is proved via the minimax characterization of the eigenvalues in Wilkinson, J. H. (1965). Q.E.D.

The roles of the matrices A and B can be interchanged.
We can say that the eigenvalues of C are those of B, in the same order, shifted by amounts that lie between the smallest and largest eigenvalues of A. The above theorem gives the following special case:

Theorem 4.2 When a real symmetric matrix B of rank one is added to a real symmetric matrix A as follows:

C = A + B    (4.217)

then the eigenvalues of C are those of A, in the same order, each shifted by a positive fraction of the only nonzero eigenvalue of B. That means we can write

μ(C)_i = μ(A)_i + m_i λ(B),  0 ≤ m_i ≤ 1,  Σ_i m_i = 1    (4.218)

Proof. The theorem is proved in Wilkinson, J. H. (1965). Q.E.D.

Since the above theorems are important for the development of our Recursive Least Determinant self-tuning controller, we include these proofs in Appendix A. Now we will get back to the problem of convergence of the recursive least squares self-tuning algorithm. First, note that a recursive least squares estimation and a batch or non-recursive (also called en bloc) least squares estimation give the same values of the estimated parameters, if the recursion starts with initial values calculated from the first few observations of the data. We can then analyze the parametric convergence via the sequence of the matrices A_i and vectors b_i, because the parameters are given by their products, as described by Equation (4.204). The sequence of the matrix products

A_1⁻¹b_1, A_2⁻¹b_2, ..., A_{N-1}⁻¹b_{N-1}, A_N⁻¹b_N, ...    (4.219)

converges to a fixed vector when the A_i and b_i converge. The convergence of a matrix can be established through its eigenvalues. To study the eigenvalues of the matrix A_N, we can write it as

A_N = (1/N) [x_1x_1ᵀ + ... + x_Nx_Nᵀ]    (4.220)
    = (1/N) [(N−1)A_{N-1} + x_Nx_Nᵀ]    (4.221)
    = A_{N-1} − (A_{N-1} − x_Nx_Nᵀ)/N    (4.222)

The matrix A_N is always singular if N is smaller than m + n + 2.
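Theorem 4.2 is easy to check numerically. The sketch below is an illustrative calculation, not thesis code: a 2×2 symmetric matrix A receives a rank-one update xxᵀ, whose only nonzero eigenvalue is |x|²; each eigenvalue of A then shifts up by a nonnegative amount, and by the trace argument the shifts sum to |x|², i.e. the fractions m_i sum to one.

```python
import math

def eig2(a, b, c):
    # Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]],
    # returned in ascending order.
    mean = 0.5 * (a + c)
    rad = math.sqrt((0.5 * (a - c)) ** 2 + b * b)
    return mean - rad, mean + rad

A = (4.0, 1.0, 2.0)            # [[4, 1], [1, 2]]
x = (1.0, 0.5)
B = (x[0] * x[0], x[0] * x[1], x[1] * x[1])  # x x^T; nonzero eigenvalue = |x|^2
C = (A[0] + B[0], A[1] + B[1], A[2] + B[2])

la = eig2(*A)
lc = eig2(*C)
shifts = (lc[0] - la[0], lc[1] - la[1])
norm_x_sq = x[0] * x[0] + x[1] * x[1]
# Each shift is nonnegative and the shifts sum to |x|^2.
```

This is exactly the mechanism used below: adding the rank-one term x_N x_Nᵀ to (N−1)A_{N-1} can only move eigenvalues up, never down.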
If N is larger than m + n + 2, then A_N can be singular or nonsingular, depending on the values of the data. However, its eigenvalues are always real and non-negative: A_N is a positive semidefinite, or nonnegative definite, matrix. If the data is open loop, or nothing can be said about the relationship between the vector x_N and the matrix A_{N-1}, then nothing can be said about the eigenvalues of A_N relative to the corresponding eigenvalues of A_{N-1}. By the above theorems, the eigenvalues of A_N are those of A_{N-1} reduced by amounts that lie between the smallest and largest eigenvalues of the matrix (A_{N-1} − x_Nx_Nᵀ)/N, and these amounts can be negative as well as positive. In this case, we are not certain that the matrix A_N will approach singularity. But this is not what we are interested in. We are interested in the case of self tuning closed-loop control with the control strategy

x_1ᵀβ_1 = 0, x_2ᵀβ_2 = 0, ..., x_Nᵀβ_N = 0    (4.223)

where the parameter vectors β_i are given by Equation (4.204), and we want to prove that the matrix

A_N = (1/N) [x_1x_1ᵀ + x_2x_2ᵀ + ... + x_Nx_Nᵀ]    (4.224)

will approach a singular matrix. We consider the least squares optimization step of the self-tuning algorithm. From the equation

Min_{β_N} Σ_{t=1}^{N-1} (y_{t+f+1} − x_tᵀβ_N)²    (4.225)

we obtain

A_{N-1} β_N = b_{N-1}    (4.226)

With the control action we can write

x_N x_Nᵀ β_N = 0    (4.227)

and obtain

[(N−1)A_{N-1} + x_Nx_Nᵀ] β_N = (N−1) b_{N-1}    (4.228)

or

N A_N β_N = (N−1) b_{N-1}    (4.229)

The least squares optimization at the time N + 1 will give us

A_N β_{N+1} = b_N    (4.230)

or

N A_N β_{N+1} = N b_N    (4.231)

By subtracting the above two equations, we obtain

N A_N [β_{N+1} − β_N] = N b_N − (N−1) b_{N-1}    (4.232)

A_N [β_{N+1} − β_N] = (1/N) [N b_N − (N−1) b_{N-1}]    (4.233)
                    = (1/N) [y_{N+f+1}u_N ; ... ; y_{N+f+1}u_{N-m} ; y_{N+f+1}y_N ; ... ; y_{N+f+1}y_{N-n}]    (4.234)

For stationary time series u_t and y_t, the values of the y_{N-i} and u_{N-i} can be said to be bounded.
Now at large values of N, we have

A_N [β_{N+1} − β_N] = (1/N) x_N y_{N+f+1} → 0    (4.235)

For the above equation to hold, there are 3 cases:

1. The matrix A_N is nonsingular, and β_N and β_{N+1} are the same.
2. The matrix A_N is singular, and β_N and β_{N+1} are not the same.
3. The matrix A_N is singular, and β_N and β_{N+1} are the same.

Of the cases listed above, only the third happens in practice. Case 1 cannot happen, because when the vector β_N and the vector β_{N+1} are the same, the self-tuning control law approaches a fixed control law and the matrix A_N approaches singularity. In case 2, theoretically, we cannot estimate the parameter vector β_N; but for the sake of the discussion, let us say that we can. This is highly probable, because we estimate the parameter vector recursively. The parameter vectors β_N and β_{N+1} can then be different. But if we force the self-tuning algorithm to estimate only m + n + 1 parameters, then β_N and β_{N+1} can be the same. This is especially true if β_N and β_{N+1} are calculated recursively. This is the same as the third case: A_N is singular and β_N and β_{N+1} are the same. In other words, both A_N and β_N converge.

4.5.3 The Recursive Least Determinant Self Tuning Controller

In a previous section, we asserted that a self tuning controller requires the 3 parameters f, m and n. Of these parameters, the delay parameter f is the most important one. If we choose wrong values for m and n, the controller performance might not be good, but it is normally acceptable: we have a case of suboptimal control. However, if a wrong value for the delay parameter f is chosen, the controller performance is normally bad, because this means all the parameters of the controller are wrong. Also, the parameter m depends partly on the delay of the system. In many industries, the delay problem is a serious one. The pulp and paper industry is an example.
In a paper mill, the speed of the beta gauge scanner used to measure paper properties such as basis weight, moisture and caliper is normally constant, whereas the paper machine speed, which indicates how fast the paper sheet runs through the machine, can occasionally change. This means the delay of the control system can change. A similar problem exists in a pulp mill: in a bleaching tower, a phenomenon called channeling can also change the delay of the control system. This means that even if we choose the right value for the delay parameter at one time, at another time of operation of the plant the delay parameter might be wrong. Therefore, it is desirable to have an algorithm that eliminates the use of the delay parameter.

The Minimum Variance Controller

We have proved earlier that the matrix A_N approaches a singular matrix under the recursive least squares self-tuning algorithm. It is quite easy to prove that with a fixed minimum variance control law this matrix is singular. To do this, we multiply both sides of Equation (4.198) by u_t and take the summation of both sides:

θ(z^{-1})δ(z^{-1}) [Σ y_{t+f+1}u_t − ψ(z^{-1}) Σ a_{t+f+1}u_t]
  = [Σ u_tu_t ... Σ u_tu_{t-m}  Σ u_ty_t ... Σ u_ty_{t-n}] β    (4.236)-(4.238)

For a minimum variance control law, we have

Σ y_{t+f+1}u_t = 0    (4.239)
Σ a_{t+f+1}u_t = 0    (4.240)

which means

[Σ u_tu_t ... Σ u_tu_{t-m}  Σ u_ty_t ... Σ u_ty_{t-n}] β = 0    (4.241)

A characteristic of a minimum variance control law is

Σ y_{t+f+1}y_{t-k} = 0 for k ≥ 0
Σ y_{t+f+1}u_{t-k} = 0 for k ≥ 0

This comes from the fact that y_t is a moving average time series of order f. Since a_t is a white noise, we also have

Σ a_{t+f+1}y_{t-k} = 0 for k ≥ 0    (4.242)
Σ a_{t+f+1}u_{t-k} = 0 for k ≥ 0    (4.243)

With the above equations, and by multiplying both sides of Equation (4.198) with u_{t-k} and y_{t-k} and taking summations, we obtain the following equations:

[Σ u_{t-k}u_t ... Σ u_{t-k}u_{t-m}  Σ u_{t-k}y_t ... Σ u_{t-k}y_{t-n}] β = 0    (4.244)

[Σ y_{t-k}u_t ... Σ y_{t-k}u_{t-m}  Σ y_{t-k}y_t ... Σ y_{t-k}y_{t-n}] β = 0    (4.245)
For the u_{t-k}, we let k go from 0 to m, and for the y_{t-k}, we let k go from 0 to n; we obtain a set of m + n + 2 equations that can be ordered into the matrix equation

[ Σ u_tu_t      ...  Σ u_tu_{t-m}      Σ u_ty_t      ...  Σ u_ty_{t-n}     ]
[ ...                                                                      ]
[ Σ u_{t-m}u_t  ...  Σ u_{t-m}u_{t-m}  Σ u_{t-m}y_t  ...  Σ u_{t-m}y_{t-n} ]  β = 0    (4.246)
[ Σ y_tu_t      ...  Σ y_tu_{t-m}      Σ y_ty_t      ...  Σ y_ty_{t-n}     ]
[ ...                                                                      ]
[ Σ y_{t-n}u_t  ...  Σ y_{t-n}u_{t-m}  Σ y_{t-n}y_t  ...  Σ y_{t-n}y_{t-n} ]

From the above equation, we can conclude the singularity of the matrix A_N at the minimum variance feedback condition. An easier way to prove this fact is from the equation

s_t = θ(z^{-1})δ(z^{-1}) [y_{t+f+1} − ψ(z^{-1})a_{t+f+1}] = ω(z^{-1})φ(z^{-1})ψ(z^{-1})u_t + δ(z^{-1})γ(z^{-1})y_t    (4.247)

s_t = x_tᵀ β    (4.248)

We square both sides, take summations and divide by N, obtaining

(1/N) Σ s_t² = βᵀ A_N β    (4.249)

Now at the minimum variance feedback condition, s_t = 0, because y_t = ψ(z^{-1})a_t, and therefore

βᵀ A_N β = 0    (4.250)

From the above equation, we can say the matrix A_N is singular, with β an eigenvector associated with the zero eigenvalue. Since the matrix A_N is singular at the minimum variance feedback condition, we will attempt a new control strategy by calculating u_N at each control time from the equation

|A_N| = 0    (4.251)

The control action u_N is calculated from the singularity of the matrix A_N at each control time. The matrix A_N contains all the information necessary for us to calculate the control action u_N. Doing this, we do not need to know the delay f, and we can bypass the calculation stage of the estimated control parameter vector β_N. Even though we say we do not need to know the delay f, when we choose the parameter m we unconsciously choose it based partly on this knowledge. However, we do not use this information twice, and there is a great reward for not doing so. This is related to the topic of parameter mismatch, which we have discussed briefly before. This strategy leads to the solution of a quadratic equation in the variable u_N at each control time.
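When this quadratic has two real roots, the rule adopted in the text is to keep the root of smaller absolute value, since it implies less movement of the control element. A sketch of that selection follows; the coefficients here are illustrative numbers, not values computed from a real A_N matrix.

```python
import math

def control_action(a, b, c):
    # Solve a*u^2 + b*u + c = 0 for the control action. When two real
    # roots exist, keep the one of smaller absolute value (less control
    # effort). Returns None when no real root exists.
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    r1 = (-b + math.sqrt(disc)) / (2.0 * a)
    r2 = (-b - math.sqrt(disc)) / (2.0 * a)
    return r1 if abs(r1) <= abs(r2) else r2

# Illustrative coefficients: roots 2 and 3, so the smaller-magnitude root wins.
u = control_action(1.0, -5.0, 6.0)
# Negative discriminant: no real control action exists.
none_case = control_action(1.0, 0.0, 1.0)
```

The `None` branch corresponds to the solvability issue analyzed next: when A_{N-1} is nonsingular, the discriminant turns out negative and no real u_N can make A_N singular.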
This means there may be two solutions to this problem, and we have to devise a way to discard one of them. This problem is in fact not difficult: because we always want less movement of the control element, we can always choose the solution with the smaller absolute value, for smaller variance. A more difficult question, which we must settle, is whether there are always real solutions to this quadratic equation. The equation

|A_N| = 0    (4.252)

gives

| Σ u_tu_t       ...  Σ u_ty_{t-n}     |
| ...                                  |  = 0    (4.253)
| Σ y_{t-n}u_t   ...  Σ y_{t-n}y_{t-n} |

At the control time N, the variable y_N is available and we are required to calculate u_N. This variable is an unknown and appears in only the first row and first column of the matrix A_N. Designating by B the submatrix obtained by deleting that first row and column,

B = [ Σ u_{t-1}u_{t-1} ... Σ u_{t-1}u_{t-m}  Σ u_{t-1}y_t ... Σ u_{t-1}y_{t-n} ; ... ; Σ y_{t-n}u_{t-1} ... Σ y_{t-n}y_{t-n} ]    (4.254)

we can write the above equation as

| Σ u_tu_t  dᵀ |
| d         B  |  = 0    (4.255)

with the vector d defined as

d = [Σ_{t=1}^{N} u_{t-1}u_t ; ... ; Σ_{t=1}^{N} u_{t-m}u_t ; Σ_{t=1}^{N} y_tu_t ; ... ; Σ_{t=1}^{N} y_{t-n}u_t] = d_1 + d_2 u_N    (4.256)-(4.257)

where d_1 contains the sums up to time N − 1 and

d_2 = [u_{N-1} ... u_{N-m}  y_N ... y_{N-n}]ᵀ    (4.258)

Now, by applying a property of the bordered matrix determinant, we have the following equation:

(Σ_{t=1}^{N} u_tu_t − dᵀB⁻¹d) |B| = 0    (4.259)-(4.260)

Since the first factor is a scalar, our control problem gives us the equation

Σ_{t=1}^{N} u_tu_t − dᵀB⁻¹d = 0    (4.261)
With this notation, we can write the above equation as the bordered determinant
\[
\begin{vmatrix}\sum_{t=1}^{N} u_t u_t & \mathbf{d}^T\\ \mathbf{d} & B\end{vmatrix} = 0 \qquad (4.255)
\]
with the vector $\mathbf{d}$ defined as
\[
\mathbf{d} = \begin{bmatrix}\sum_{t=1}^{N} u_{t-1}u_t & \cdots & \sum_{t=1}^{N} u_{t-m}u_t & \sum_{t=1}^{N} y_t u_t & \cdots & \sum_{t=1}^{N} y_{t-n}u_t\end{bmatrix}^T = \mathbf{d}_1 + \mathbf{d}_2 u_N \qquad (4.256)
\]
where
\[
\mathbf{d}_1 = \begin{bmatrix}\sum_{t=1}^{N-1} u_{t-1}u_t & \cdots & \sum_{t=1}^{N-1} u_{t-m}u_t & \sum_{t=1}^{N-1} y_t u_t & \cdots & \sum_{t=1}^{N-1} y_{t-n}u_t\end{bmatrix}^T \qquad (4.257)
\]
\[
\mathbf{d}_2 = \begin{bmatrix}u_{N-1} & \cdots & u_{N-m} & y_N & \cdots & y_{N-n}\end{bmatrix}^T \qquad (4.258)
\]
Now, by applying a property of the bordered matrix determinant, we have
\[
\begin{vmatrix}\sum_{t=1}^{N} u_t u_t & \mathbf{d}^T\\ \mathbf{d} & B\end{vmatrix}
= \left(\sum_{t=1}^{N} u_t u_t - \mathbf{d}^T B^{-1}\mathbf{d}\right)|B| \qquad (4.259)
\]
Since the first factor of the above product is a scalar and $|B| \neq 0$, we can write
\[
\left(\sum_{t=1}^{N} u_t u_t - \mathbf{d}^T B^{-1}\mathbf{d}\right)|B| = 0 \qquad (4.260)
\]
and our control problem gives us the equation
\[
\sum_{t=1}^{N} u_t u_t - \mathbf{d}^T B^{-1}\mathbf{d} = 0 \qquad (4.261)
\]
From Equation (4.261), we have
\[
\sum_{t=1}^{N} u_t u_t - \left[\mathbf{d}_1 + \mathbf{d}_2 u_N\right]^T B^{-1}\left[\mathbf{d}_1 + \mathbf{d}_2 u_N\right] = 0 \qquad (4.262)
\]
or, noting that $\sum_{t=1}^{N} u_t^2 = \sum_{t=1}^{N-1} u_t^2 + u_N^2$,
\[
\left(1 - \mathbf{d}_2^T B^{-1}\mathbf{d}_2\right)u_N^2 - 2\,\mathbf{d}_1^T B^{-1}\mathbf{d}_2\, u_N + \sum_{t=1}^{N-1} u_t^2 - \mathbf{d}_1^T B^{-1}\mathbf{d}_1 = 0 \qquad (4.263)
\]
The above equation is a quadratic equation with $u_N$ as the unknown. If we write it in the form
\[
a u_N^2 + b u_N + c = 0 \qquad (4.264)
\]
then the coefficients are
\[
a = 1 - \mathbf{d}_2^T B^{-1}\mathbf{d}_2 \qquad (4.265)
\]
\[
b = -2\,\mathbf{d}_1^T B^{-1}\mathbf{d}_2 \qquad (4.266)
\]
\[
c = \sum_{t=1}^{N-1} u_t^2 - \mathbf{d}_1^T B^{-1}\mathbf{d}_1 \qquad (4.267)
\]
and the control action at time $N$ is given by
\[
u_N = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \qquad (4.268)
\]

Solvability of the Control Action

In this section, we will discuss the solvability of the control action $u_N$ at time $N$ when the output variable has the value $y_N$. We have voiced some concern that there might not be a real solution for $u_N$, since it is the unknown of a quadratic equation. We have the controller equation
\[
|A_N| = 0 \qquad (4.269)
\]
with
\[
|A_N| = \frac{1}{N^{m+n+2}}\left|\mathbf{x}_1\mathbf{x}_1^T + \mathbf{x}_2\mathbf{x}_2^T + \cdots + \mathbf{x}_N\mathbf{x}_N^T\right| \qquad (4.270)
\]
\[
\phantom{|A_N|} = \frac{1}{N^{m+n+2}}\left|(N-1)A_{N-1} + \mathbf{x}_N\mathbf{x}_N^T\right| \qquad (4.271)
\]
Now if the matrix $A_{N-1}$ is positive definite, then we definitely have no real solution for $u_N$, because the rank-one matrix $\mathbf{x}_N\mathbf{x}_N^T$ will either leave the smallest eigenvalue of the matrix $(N-1)A_{N-1}$ where it was or shift it by a positive amount. The matrix $A_N$ will never have a zero eigenvalue, which is to say that the determinant of $A_N$ will never be zero. Tests confirmed this assertion: there were no real solutions $u_N$ whenever the matrix $A_{N-1}$ was nonsingular. However, if $A_{N-1}$ is singular, then there is a real solution $u_N$ that makes $A_N$ singular. This is actually quite easy to prove. If $A_{N-1}$ is singular, then we have
\[
|A_{N-1}| = 0 \qquad (4.272)
\]
or
\[
(N-1)A_{N-1}\beta_N = 0 \qquad (4.273)
\]
where $\beta_N$ is the eigenvector corresponding to the zero eigenvalue.
Now we can always choose $u_N$ such that
\[
\mathbf{x}_N^T\beta_N = 0 \qquad (4.274)
\]
which, as before, will give us
\[
A_N\beta_N = 0 \qquad (4.275)
\]
But this means
\[
\beta_{N+1} = \beta_N \qquad (4.276)
\]
In many tests, whenever $A_{N-1}$ was singular, the two solutions $u_N$ coincided, and the parameter vector $\beta_{N+1}$ was the same as $\beta_N$. This means that the parameter $\beta_N$ will never diverge from its converged value. We mentioned earlier that we will not have a real solution $u_N$ making the matrix $A_N$ singular at time $N$ if the matrix $A_{N-1}$ is nonsingular. In the beginning of the control period, this is normally the case, so our attempt to establish the singularity of the matrix $A_N$ at every control time fails. What we do instead is to use the singularity of $A_\infty$ as the convergence criterion, and at time $N$ calculate $u_N$ from the matrix $A_N$ so that we come closer to this criterion. In this sense, our control strategy is a self-tuning one: we want the matrix $A_N$ to tune itself until it is singular. If the parameters $m$ and $n$ are chosen correctly, the strategy will self-tune into a minimum variance controller. At this criterion both the smallest eigenvalue and the determinant are zero, so we can take two approaches to the problem: either we calculate $u_N$ so that the smallest eigenvalue of $A_N$ is as small as possible, or we calculate $u_N$ so that the determinant of $A_N$ is as small as possible.

To take the smallest eigenvalue approach, we can construct an algorithm such that the smallest eigenvalue of the matrix $A_N$ is smaller than that of the matrix $A_{N-1}$; the strategy is for the smallest eigenvalue to approach zero. If we pursue this strategy, we can derive the algorithm from the following mathematical derivations.
From Equation (4.222), we can write:
\[
\frac{\theta^T A_N\theta}{\theta^T\theta}
= \frac{\theta^T A_{N-1}\theta}{\theta^T\theta}
- \frac{\theta^T\left(A_{N-1} - \mathbf{x}_N\mathbf{x}_N^T\right)\theta}{N\,\theta^T\theta} \qquad (4.277)
\]
Let $\theta_1$ be the vector for which the left hand side of the above equation is minimum; then we have
\[
\frac{\theta_1^T A_N\theta_1}{\theta_1^T\theta_1}
= \min_\theta\left[\frac{\theta^T A_{N-1}\theta}{\theta^T\theta}
- \frac{\theta^T\left(A_{N-1} - \mathbf{x}_N\mathbf{x}_N^T\right)\theta}{N\,\theta^T\theta}\right] \qquad (4.278)
\]
Now let $\theta_2$ be the minimizer of the first term on the right hand side,
\[
\frac{\theta_2^T A_{N-1}\theta_2}{\theta_2^T\theta_2} = \min_\theta \frac{\theta^T A_{N-1}\theta}{\theta^T\theta} \qquad (4.279)
\]
Then we have
\[
\min_\theta \frac{\theta^T A_N\theta}{\theta^T\theta}
\le \frac{\theta_2^T A_{N-1}\theta_2}{\theta_2^T\theta_2}
- \frac{\theta_2^T\left(A_{N-1} - \mathbf{x}_N\mathbf{x}_N^T\right)\theta_2}{N\,\theta_2^T\theta_2} \qquad (4.280)
\]
By the Courant-Fisher theorem, the left hand side of the above equation is the smallest eigenvalue of the matrix $A_N$, and similarly the first term on the right hand side is the smallest eigenvalue of the matrix $A_{N-1}$. So the above equation gives us
\[
\lambda_{\min}(A_N) \le \lambda_{\min}(A_{N-1})
- \frac{\theta_2^T\left(A_{N-1} - \mathbf{x}_N\mathbf{x}_N^T\right)\theta_2}{N\,\theta_2^T\theta_2} \qquad (4.281)
\]
For the smallest eigenvalue of the matrix $A_N$ to be smaller than that of the matrix $A_{N-1}$, we can use the following control algorithm:

- At the control time $N$, form the matrix $A_{N-1}$.
- Get the smallest eigenvalue of $A_{N-1}$ and its associated eigenvector.
- Make this eigenvector the control parameter vector at time $N$ ($\beta_N$), i.e. choose $u_N$ such that
\[
\mathbf{x}_N^T\beta_N = 0 \qquad (4.282)
\]

With this control algorithm, we have the following relationship between the smallest eigenvalues of the matrices $A_{N-1}$ and $A_N$:
\[
\lambda_{\min}(A_N) \le \frac{N-1}{N}\,\lambda_{\min}(A_{N-1}) \qquad (4.283)
\]
This algorithm, however, proved to be unstable under tests. This is because, even though the smallest eigenvalue of $A_N$ can be smaller than that of $A_{N-1}$, the determinant of $A_N$ can be greater than that of $A_{N-1}$.

Now if we take the determinant approach, then instead of looking for a solution $u_N$ of the equation
\[
a u_N^2 + b u_N + c = 0 \qquad (4.284)
\]
we will look for the solution of the following optimization problem:
\[
\min_{u_N}\; a u_N^2 + b u_N + c \qquad (4.285)
\]
The solution of the above optimization problem is given by
\[
u_N = -\frac{b}{2a} \qquad (4.286)
\]
\[
\phantom{u_N} = \frac{\mathbf{d}_1^T B^{-1}\mathbf{d}_2}{1 - \mathbf{d}_2^T B^{-1}\mathbf{d}_2} \qquad (4.287)
\]
which is the unique solution for which the determinant of the matrix $A_N$ attains its smallest value. The matrix $B$ will be nonsingular when $N$ is larger than $m + n + 1$.
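The determinant-minimizing control action of Equation (4.287) is easy to compute numerically. The sketch below is our illustration, not code from the thesis; the arrays `B`, `d1` and `d2` follow the definitions in Equations (4.254), (4.257) and (4.258).

```python
import numpy as np

def rld_control_action(B, d1, d2):
    """Determinant-minimizing control action of Eq. (4.287):
    u_N = d1^T B^{-1} d2 / (1 - d2^T B^{-1} d2),
    i.e. the vertex -b/(2a) of the quadratic a*u^2 + b*u + c (Eqs. 4.264-4.266).
    """
    Binv_d2 = np.linalg.solve(B, d2)   # B^{-1} d2, without forming B^{-1} explicitly
    a = 1.0 - d2 @ Binv_d2             # coefficient a, Eq. (4.265)
    b = -2.0 * (d1 @ Binv_d2)          # coefficient b, Eq. (4.266)
    return -b / (2.0 * a)              # Eq. (4.286)
```

Solving the linear system instead of inverting $B$ is a standard numerical precaution; the recursive inverse update used by the full algorithm is discussed below.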
This solution can be used whether the matrix $A_{N-1}$ is nonsingular or singular. It is one of a few ways to calculate $u_N$; in a later section, we will mention another.

The Control Algorithm

In this section, we will present our self-tuning control algorithm. We will call it the recursive least determinant (RLD) self tuning control algorithm, because we want the smallest determinant of the matrix $A_N$ at each control time and this determinant can be calculated recursively. The control action $u_N$ is calculated from the following optimization problem:
\[
\min_{u_N}\; |A_N| \qquad (4.288)
\]
which results in Equation (4.287). From this equation, we can present our RLD self-tuning control algorithm as follows:

- At the control time $N$, form the vector
\[
\mathbf{c}_N = \begin{bmatrix}u_{N-1} & \cdots & u_{N-m} & y_N & \cdots & y_{N-n}\end{bmatrix}^T \qquad (4.289)
\]
- Update the vector
\[
\mathbf{d}_N = \mathbf{d}_{N-1} + \mathbf{c}_{N-1}u_{N-1} \qquad (4.290,\ 4.291)
\]
- Update the inverse matrix
\[
B_N^{-1} = B_{N-1}^{-1} - \frac{B_{N-1}^{-1}\mathbf{c}_N\mathbf{c}_N^T B_{N-1}^{-1}}{1 + \mathbf{c}_N^T B_{N-1}^{-1}\mathbf{c}_N} \qquad (4.292)
\]
- Calculate the control action
\[
u_N = \frac{\mathbf{d}_N^T B_N^{-1}\mathbf{c}_N}{1 - \mathbf{c}_N^T B_N^{-1}\mathbf{c}_N} \qquad (4.293)
\]
- Send $u_N$ to the control element.

The vector $\mathbf{d}_N$ and matrix $B_N$ are given en bloc as follows:
\[
\mathbf{d}_N = \sum_{t=1}^{N-1}\mathbf{c}_t u_t \qquad (4.294)
\]
\[
B_N = \sum_{t=1}^{N}\mathbf{c}_t\mathbf{c}_t^T \quad\text{(symmetric)} \qquad (4.295,\ 4.296)
\]
It is not necessary to update $B_N$ in the control algorithm, only its inverse; the calculation of the inverse is done via the Sherman-Morrison formula. The above vector and matrix are given en bloc for ease of initialization. The RLD algorithm has been tested and proved to be an easy and robust self tuning control algorithm.

4.5.4 Convergence of the RLD Self-Tuning Algorithm

The Classification of Convergences

We have given a proof of convergence of the recursive least squares (RLS) self-tuning algorithm.
In this section, we will study the convergence of our RLD self-tuning algorithm. To prove convergence of the RLD algorithm at control time $N$, we have to establish that the determinant of $A_N$ is smaller than that of $A_{N-1}$. To do this, we first write their relationship:
\[
A_N = A_{N-1} - \frac{A_{N-1} - \mathbf{x}_N\mathbf{x}_N^T}{N} \qquad (4.297)
\]
\[
\phantom{A_N} = \frac{N-1}{N}A_{N-1} + \frac{\mathbf{x}_N\mathbf{x}_N^T}{N} \qquad (4.298)
\]
then take the determinant of both sides of the above equation to obtain
\[
|A_N| = \left(\frac{N-1}{N}\right)^{m+n+2}\left|A_{N-1} + \frac{\mathbf{x}_N\mathbf{x}_N^T}{N-1}\right| \qquad (4.299)
\]
In Vu, K. (1991), the author has proved the following equation:
\[
|C + \lambda D| = |C|\left[1 + \sum_{i=1}^{k} p_i(C^{-1}D)\,\lambda^i\right] \qquad (4.300)
\]
for any nonsingular matrix $C$ of dimension $k$. The coefficients $p_i(C^{-1}D)$ are the characteristic coefficients in the characteristic equation of the square matrix $C^{-1}D$. The first coefficient $p_1(C^{-1}D)$ is the sum of the eigenvalues of this matrix, also called its trace. The last coefficient $p_k(C^{-1}D)$ is the product of the eigenvalues, which is the determinant. The coefficients in between are all the possible sums of products of the eigenvalues. We do not have to worry about these coefficients or the determinant in our application. We can write
\[
\left|A_{N-1} + \frac{\mathbf{x}_N\mathbf{x}_N^T}{N-1}\right|
= |A_{N-1}|\left[1 + \sum_i p_i\!\left(\frac{A_{N-1}^{-1}\mathbf{x}_N\mathbf{x}_N^T}{N-1}\right)\right] \qquad (4.301)
\]
Since the matrix $A_{N-1}^{-1}\mathbf{x}_N\mathbf{x}_N^T$ is of rank one, it has only one nonzero eigenvalue, namely $\mathbf{x}_N^T A_{N-1}^{-1}\mathbf{x}_N$. This leads to the fact that all the coefficients in the above equation are zero except the first, because any product of the eigenvalues is zero; only the sum of the eigenvalues, the trace of the matrix, gives a nonzero value. The first coefficient is
\[
p_1\!\left(\frac{A_{N-1}^{-1}\mathbf{x}_N\mathbf{x}_N^T}{N-1}\right) = \frac{\mathbf{x}_N^T A_{N-1}^{-1}\mathbf{x}_N}{N-1} \qquad (4.302)
\]
This will give us
\[
\left|A_{N-1} + \frac{\mathbf{x}_N\mathbf{x}_N^T}{N-1}\right|
= |A_{N-1}|\left[1 + \frac{\mathbf{x}_N^T A_{N-1}^{-1}\mathbf{x}_N}{N-1}\right] \qquad (4.303)
\]
and we can write
\[
|A_N| = \left(\frac{N-1}{N}\right)^{m+n+2}|A_{N-1}|\left[1 + \frac{\mathbf{x}_N^T A_{N-1}^{-1}\mathbf{x}_N}{N-1}\right] \qquad (4.304)
\]
The minimization of the determinant $|A_N|$ leads to the minimization of $\mathbf{x}_N^T A_{N-1}^{-1}\mathbf{x}_N$, because only the vector $\mathbf{x}_N$ contains the control action $u_N$. This gives us another way to calculate $u_N$ for the RLD control algorithm. Now we write
\[
\mathbf{x}_N^T A_{N-1}^{-1}\mathbf{x}_N
= (N-1)\begin{bmatrix}u_N & \mathbf{c}_N^T\end{bmatrix}
\begin{bmatrix}\sum_{t=1}^{N-1} u_t^2 & \mathbf{d}_{N-1}^T\\ \mathbf{d}_{N-1} & B_{N-1}\end{bmatrix}^{-1}
\begin{bmatrix}u_N\\ \mathbf{c}_N\end{bmatrix} \qquad (4.305)
\]
or, after evaluating the inverse matrix by the block (Schur complement) formula,
\[
\frac{\mathbf{x}_N^T A_{N-1}^{-1}\mathbf{x}_N}{N-1}
= \frac{\left(u_N - \mathbf{c}_N^T B_{N-1}^{-1}\mathbf{d}_{N-1}\right)^2}{\sum_{t=1}^{N-1} u_t^2 - \mathbf{d}_{N-1}^T B_{N-1}^{-1}\mathbf{d}_{N-1}}
+ \mathbf{c}_N^T B_{N-1}^{-1}\mathbf{c}_N \qquad (4.306,\ 4.307)
\]
and the minimization of the above equation gives
\[
u_N = \mathbf{d}_{N-1}^T B_{N-1}^{-1}\mathbf{c}_N
= \mathbf{c}_N^T B_{N-1}^{-1}\mathbf{d}_{N-1}
= \boldsymbol{\zeta}_N^T\mathbf{c}_N \qquad (4.308,\ 4.309,\ 4.310)
\]
The above equation gives a shorter formula to calculate the control action $u_N$. Since the controller parameter vector $\boldsymbol{\zeta}_N = B_{N-1}^{-1}\mathbf{d}_{N-1}$ is given as the product of an inverse matrix and a column vector, it can be computed recursively, as in the recursive least squares self-tuning regulator. Note that this vector $\boldsymbol{\zeta}_N$ does not contain $y_N$. By putting this value of $u_N$ back into the quadratic expression, we obtain
\[
\min_{u_N}\; \mathbf{x}_N^T A_{N-1}^{-1}\mathbf{x}_N = (N-1)\,\mathbf{c}_N^T B_{N-1}^{-1}\mathbf{c}_N \qquad (4.311)
\]
and therefore the minimum value of $|A_N|$ is
\[
|A_N| = |A_{N-1}|\left(\frac{N-1}{N}\right)^{m+n+2}\left[1 + \mathbf{c}_N^T B_{N-1}^{-1}\mathbf{c}_N\right] \qquad (4.312)
\]
From Equation (4.292), we can write
\[
\mathbf{c}_N^T B_N^{-1}\mathbf{c}_N
= \mathbf{c}_N^T B_{N-1}^{-1}\mathbf{c}_N
- \frac{\left(\mathbf{c}_N^T B_{N-1}^{-1}\mathbf{c}_N\right)^2}{1 + \mathbf{c}_N^T B_{N-1}^{-1}\mathbf{c}_N}
= \frac{\mathbf{c}_N^T B_{N-1}^{-1}\mathbf{c}_N}{1 + \mathbf{c}_N^T B_{N-1}^{-1}\mathbf{c}_N} \qquad (4.313,\ 4.314)
\]
and therefore
\[
1 + \mathbf{c}_N^T B_{N-1}^{-1}\mathbf{c}_N = \frac{1}{1 - \mathbf{c}_N^T B_N^{-1}\mathbf{c}_N} \qquad (4.315)
\]
Convergence of the RLD algorithm at step $N$ implies
\[
\left(\frac{N-1}{N}\right)^{m+n+2}\frac{1}{1 - \mathbf{c}_N^T B_N^{-1}\mathbf{c}_N} < 1 \qquad (4.316)
\]
The first term of the left hand side of the above equation is smaller than unity, but the second term is larger than unity. Convergence at step $N$ means the first term has a
In general, convergence cannot be guaranteed for every step, because of the unknown value of yw in the vector of the above equation. This is true for all stochastic systems. However, we can sa.y if the value of yw is not exceptionally larger or smaller than its previous counterparts, then the algorithm will normally converge. Now consider the case when ./V is large and ut and yt are bounded, which is not true for a nonminimum phase system, the matrices N * and are essentially the same, because they are a crosscovariance matrix of the two time series ut and yt. N - 1 BAT (4.317) 7««(0) 7„„(m - 1) 7m, (1) luy{n - 1) 7uu(m - 1) 7uy(l) 7«u(0) luy{m) luy{m) 7w(0) luV{n-m) 7 r a (n) luy{n - 1) luy(n - m) lyy(n) 7 W ( 0 ) (4.318) where ^Uy{k) is the crosscovariance of the two time series ut and yt at lag k. This means cftB^c* ^ N ( 4 3 1 9 ) B ^ C A T With this result, Equation (4.316) will become ,N-1, N N-l < 1 (4.320) which is true and therefore we can say that for a nonminimum phase system, the R L D self-tuning algorithm will converge at every step or converge exponentially when is large. When is not large, Equation (4.319) is approximately true, and we can expect the algorithm to converge. In practice, we might find that the R L D algorithm does not converge at every step or does not converge exponentially but converges eventually. This means the determinant of |Aw | is zero at the end of the self-tuning period, but the algorithm has some tempo-rary divergences during this period. This is the case of eventual convergence (or simply convergence) and the self-tuning period consists of a number of stages. Each stage is a convergence in a number of steps. We will explain what we mean by this. Using the result of the above discussion, we can write ,N- 1, AAT = I A N-l N -)m + n + 2[l + cI,B A f £ J i V _ 1 C A f J (4.321) 118 CHAPTER 4. 
\[
|A_{N+1}| = |A_N|\left(\frac{N}{N+1}\right)^{m+n+2}\left[1 + \mathbf{c}_{N+1}^T B_N^{-1}\mathbf{c}_{N+1}\right] \qquad (4.322)
\]
\[
|A_{N+k}| = |A_{N+k-1}|\left(\frac{N+k-1}{N+k}\right)^{m+n+2}\left[1 + \mathbf{c}_{N+k}^T B_{N+k-1}^{-1}\mathbf{c}_{N+k}\right] \qquad (4.323)
\]
or, by combining these equations together, we get
\[
|A_{N+k}| = |A_{N-1}|\left(\frac{N-1}{N+k}\right)^{m+n+2}
\left[1 + \mathbf{c}_N^T B_{N-1}^{-1}\mathbf{c}_N\right]\times\cdots\times
\left[1 + \mathbf{c}_{N+k}^T B_{N+k-1}^{-1}\mathbf{c}_{N+k}\right] \qquad (4.324)
\]
or, using Equation (4.315),
\[
|A_{N+k}| = |A_{N-1}|\left(\frac{N-1}{N+k}\right)^{m+n+2}
\frac{1}{1 - \mathbf{c}_N^T B_N^{-1}\mathbf{c}_N}\times\cdots\times
\frac{1}{1 - \mathbf{c}_{N+k}^T B_{N+k}^{-1}\mathbf{c}_{N+k}} \qquad (4.325)
\]
If the algorithm converges in $k$ steps, we have
\[
|A_{N+k}| < |A_{N-1}| \qquad (4.326)
\]
and
\[
\left(\frac{N-1}{N+k}\right)^{m+n+2}
\frac{1}{1 - \mathbf{c}_N^T B_N^{-1}\mathbf{c}_N}\times\cdots\times
\frac{1}{1 - \mathbf{c}_{N+k}^T B_{N+k}^{-1}\mathbf{c}_{N+k}} < 1 \qquad (4.327)
\]

The Convergence Interval of $y_N$

We have said that the convergence of the RLD algorithm at step $N$ depends on the value of $y_N$. In this section, we will find the values of $y_N$ for which the RLD self-tuning algorithm converges at that step. This is useful because, if we do not want to allow a temporary divergence, we can fall back on the controller parameter vector $\boldsymbol{\zeta}_{N-1}$ of the previous control time $N-1$. From the equation
\[
\left(1 - \frac{1}{N}\right)^{m+n+2}\left[1 + \mathbf{c}_N^T B_{N-1}^{-1}\mathbf{c}_N\right] < 1 \qquad (4.328)
\]
we can write
\[
\left(1 - \frac{1}{N}\right)^{m+n+2}\left[1 + \mathbf{c}_N^T B_{N-1}^{-1}\mathbf{c}_N\right] - 1 < 0 \qquad (4.329)
\]
Let $y_1$ and $y_2$ be the solutions of the quadratic equation given by the left hand side of the above inequality, the only unknown in $\mathbf{c}_N$ being $y_N$, which enters quadratically. If $y_N$ is inside this interval, the RLD self-tuning algorithm converges at the control time $N$.

The Analysis of a Temporary Divergence

We have proved that the RLD self-tuning algorithm converges when $N$ is relatively large, and we have also obtained the convergence interval for the output variable $y_N$. In this section, we will analyze this convergence problem a little more, study the mechanism of a temporary divergence, and propose a solution for systems with a difficult convergence prospect. From the equation
\[
A_N = A_{N-1} - \frac{A_{N-1} - \mathbf{x}_N\mathbf{x}_N^T}{N} \qquad (4.330)
\]
\[
\phantom{A_N} = A_{N-1} - E_{N-1} \qquad (4.331)
\]
we can say that if the matrix $E_{N-1}$ is positive definite, then we have convergence at step $N$. This is because of the two theorems we presented earlier.
The eigenvalues of $A_N$ are those of $A_{N-1}$ shifted by negative amounts whose absolute values lie between the smallest and largest eigenvalues of the matrix $E_{N-1}$. If $E_{N-1}$ is positive definite, all its eigenvalues are positive, and this means the eigenvalues of $A_N$ are smaller than those of $A_{N-1}$ in the same order. This, of course, means convergence at step $N$, because a direct consequence is that the determinant of $A_N$ will be smaller than that of $A_{N-1}$. If the matrix $E_{N-1}$ has a negative eigenvalue, we cannot be sure of convergence or divergence; it can be either. By taking the trace of both sides of Equation (4.330), we can write
\[
\operatorname{tr}A_N = \operatorname{tr}A_{N-1} - \operatorname{tr}\frac{A_{N-1} - \mathbf{x}_N\mathbf{x}_N^T}{N} \qquad (4.332)
\]
\[
\phantom{\operatorname{tr}A_N} = \operatorname{tr}A_{N-1} - \frac{\operatorname{tr}A_{N-1} - \mathbf{x}_N^T\mathbf{x}_N}{N} \qquad (4.333)
\]
Now, we have
\[
\mathbf{x}_N^T\mathbf{x}_N = u_N^2 + u_{N-1}^2 + \cdots + y_N^2 + \cdots \qquad (4.334)
\]
If $y_N$ is so large that
\[
u_{N-1}^2 + \cdots + y_N^2 + \cdots > \operatorname{tr}A_{N-1} \qquad (4.335)
\]
then no real value of $u_N$ can change the fact that
\[
\operatorname{tr}A_N > \operatorname{tr}A_{N-1} \qquad (4.336)
\]
When this happens, not all the eigenvalues of $A_N$ can be smaller than those of $A_{N-1}$ in the same order. This creates the potential for a temporary divergence. To combat this problem, we want the magnitude of $y_N^2$ in Equation (4.334) to be small compared to the magnitude of $u_N^2$, so that a wide swing in the value of $y_N$ has little effect on the eigenvalues of $A_N$, because it can be absorbed by $u_N$: $u_N$ will reduce itself from its normal value and assume a smaller one, so that the eigenvalue shift caused by the matrix $\mathbf{x}_N\mathbf{x}_N^T$ is small and still leaves the matrix $E_{N-1}$ positive definite. This is a condition for convergence at step $N$. In this case, the value of $u_N$ determines the eigenvalues of $A_N$ and the convergence of the RLD algorithm. In simulation runs, when the value of $\omega_0$ in one simulation model was accidentally lowered to almost half its value, convergence happened in all the runs. Now, the values of $y_N$ and $u_N$ depend on the system.
However, we can always create an artificial value, say $u_N^*$, which is a multiple of $u_N$. We use the value of $u_N^*$ in the algorithm, but send the value of $u_N$ to the control element. With this practice, the convergence of the RLD self-tuning algorithm can be enhanced greatly.

4.5.5 Effect of Model Mismatch

Order Overestimation

We will assume that we have overestimated $m$ and $n$ by the same number of parameters $o$, i.e. we use $m + o$ and $n + o$. That means the minimum variance controller has the orders $m$ and $n$, and we design a self tuning controller with the orders $m + o$ and $n + o$. From the model of the system, we have the identity of Equation (4.247). (4.337) Now if we define $\varphi(z^{-1})$ as any polynomial of order $o$, then we can write
\[
\varphi(z^{-1})\theta(z^{-1})\delta(z^{-1})\left[y_{t+f+1} - \psi(z^{-1})a_{t+f+1}\right]
= \varphi(z^{-1})\left[\delta(z^{-1})\gamma(z^{-1})y_t + \omega(z^{-1})\psi(z^{-1})\phi(z^{-1})u_t\right] \qquad (4.338)
\]
The above equation tells us that the controller will tune itself to one of a family of controllers; the final value of the parameter vector will depend on the initial estimate. The important thing is that the controller has minimum variance performance, because we have
\[
y_{t+f+1} = \psi(z^{-1})a_{t+f+1} \qquad (4.339)
\]
Now consider the case where we design a self tuning controller with the orders $m + o$ and $n + o + l$. The equation that describes the system can be written as
\[
\varphi(z^{-1})\left[\delta(z^{-1})\gamma(z^{-1})y_t + \omega(z^{-1})\psi(z^{-1})\phi(z^{-1})u_t\right] + \vartheta(z^{-1})y_{t-n-o-1} \qquad (4.340)
\]
where $\vartheta(z^{-1})$ is a null polynomial of order $l$. The above equation tells us that the controller will tune itself to a number of controllers but will force the coefficients of the polynomial $\vartheta(z^{-1})$ to zero. From these results, we can draw the conclusion that if we overestimate only one order, then the extra parameters of the other order will be forced toward zero. From the above discussion, we can say that if we overestimate the orders of the self-tuning controller, both the RLS and RLD algorithms will perform optimally, as in the case where we have the correct orders.
Order Underestimation

In this case, minimum variance performance is not obtained, because the controller does not use enough past input and output values to decouple the interaction. This is similar to the case of a self-tuning PID controller applied to a system with a delay, or with orders higher than second order: the self-tuning algorithms will not obtain minimum variance performance. Test runs have shown, however, that both algorithms are stable in this case.

Wrong Delay Estimation

Since the delay parameter $f$ also affects the parameter $m$ of the controller, a wrong estimate of the delay carries a double penalty. Overestimating it is less damaging than underestimating it, because overestimating the controller orders is acceptable but underestimating them is not, as we have discussed before. However, the damaging effect falls only on the RLS self-tuning algorithm, because it explicitly uses the delay $f$ in the regressand $y_{t+f+1}$ of the algorithm. The RLD algorithm is immune to a wrong estimate of the delay. In all simulation test runs, whenever there was a wrong estimate of the delay, the RLS self-tuning algorithm diverged; the RLD algorithm was not affected by this problem. This is the strength of the RLD algorithm. We will show and discuss some of the simulation results in the next section.

4.5.6 Simulation Examples

In this section, we will test our recursive least determinant (RLD) self tuning control algorithm. To be able to draw a realistic conclusion about the algorithm, we will compare the results of the RLD self tuning control algorithm with those of the recursive least squares (RLS) self tuning control algorithm on the same system. For a fair comparison, the same system was run twice with the same disturbance and the same open-loop initialization period (the first 14 observations).
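A minimal sketch of such a run can be written as follows. This is our illustrative reconstruction, not code from the thesis: it uses the model of Equation (4.341), the shorter controller form $u_N = \mathbf{c}_N^T B_{N-1}^{-1}\mathbf{d}_{N-1}$ of Equation (4.310), an assumed random probing input during the 14-sample open-loop period, and an initialization of $B^{-1}$ to a large multiple of the identity, which is our choice.

```python
import numpy as np

def rld_run(T=60, m=2, n=1, seed=0):
    """Closed-loop sketch of the RLD self-tuner on
    y_t = u_{t-2}/(1 - 0.5 z^-1) + a_t/(1 - 0.6 z^-1)  (cf. Eq. 4.341).
    c_N = (u_{N-1}, ..., u_{N-m}, y_N, ..., y_{N-n}); u_N = c_N^T B_{N-1}^{-1} d_{N-1}.
    """
    rng = np.random.default_rng(seed)
    p = m + n + 1
    Binv = np.eye(p) * 100.0                             # assumed "large" initial inverse
    d = np.zeros(p)
    u = np.zeros(T)
    y = np.zeros(T)
    x = dist = 0.0
    c_prev = np.zeros(p)
    for t in range(T):
        x = 0.5 * x + (u[t - 2] if t >= 2 else 0.0)      # transfer-function part
        dist = 0.6 * dist + rng.standard_normal()        # AR(1) disturbance, unit white noise
        y[t] = x + dist
        c = np.array([u[t - k] if t >= k else 0.0 for k in range(1, m + 1)]
                     + [y[t - k] if t >= k else 0.0 for k in range(n + 1)])
        if t >= 1:
            d = d + c_prev * u[t - 1]                    # Eq. (4.290)
        if t < 14:
            u[t] = rng.standard_normal()                 # assumed open-loop probing signal
        else:
            u[t] = c @ (Binv @ d)                        # Eq. (4.310), with B_{N-1}^{-1}
        Bc = Binv @ c
        Binv = Binv - np.outer(Bc, Bc) / (1.0 + c @ Bc)  # Sherman-Morrison, Eq. (4.292)
        c_prev = c
    return y, u
```

The probing input matters: with no excitation during the open-loop period, the vector $\mathbf{d}_N$ of Equation (4.294) stays zero and the controller never produces a nonzero action.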
[Figure 4.6: Exponential Convergence of the RLD Self-Tuning Algorithm. Four panels, for both the RLS and RLD algorithms: closed-loop variances, determinants of the $A_N$'s, covariances, and eigenvalues of $A_N$.]

[Figure 4.7: Convergence of the RLD Self-Tuning Algorithm. Same four panels as Figure 4.6.]

Figures 4.6 and 4.7 summarize the results of two simulation runs. The model of the control system for these runs is
\[
y_t = \frac{1}{1 - 0.5z^{-1}}u_{t-2} + \frac{1}{1 - 0.6z^{-1}}a_t \qquad (4.341)
\]
with a white noise variance of unity. In Figure 4.6, we see a case of exponential convergence of the RLD algorithm. In Figure 4.7, the RLD algorithm also converges, but it has two temporary divergences, i.e. two increases in the determinant of $A_N$ instead of all decreases. Each of these figures contains four small graphs. The top left graph shows the closed-loop variances of the output variable $y_t$. The top right graph shows the determinants of the matrices $A_N$. Both the variances and the determinants are shown for both the RLD and RLS self tuning algorithms. To show the compatibility of the two algorithms, we plot in the bottom left graph five statistics:
\[
\hat\gamma_{yy}(k+f+1) = \frac{1}{N}\sum y_{t-k}\,y_{t+f+1} \qquad \text{for } k = 0, 1 \qquad (4.342)
\]
\[
\hat\gamma_{uy}(k+f+1) = \frac{1}{N}\sum u_{t-k}\,y_{t+f+1} \qquad \text{for } k = 0, 1, 2 \qquad (4.343)
\]
These are some of the autocovariances of $y_t$ and crosscovariances of $u_t$ and $y_t$. Astrom, K. J. and Wittenmark, B. (1973) mentioned in their paper that these statistics are supposed to be zero under a minimum variance control law. In Figures 4.6 and 4.7, we see that the statistics approach zero under the RLD algorithm.
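These diagnostic statistics are simple sample covariances that can be computed from logged data; a sketch (the function and array names are ours, not from the thesis):

```python
def sample_cross_cov(u, y, lag):
    """gamma_uy(lag) = (1/N) * sum over t of u[t-lag] * y[t], for t where both exist."""
    N = len(y)
    return sum(u[t - lag] * y[t] for t in range(lag, N)) / N

# Under a minimum variance control law, gamma_yy(k+f+1) for k = 0, 1 and
# gamma_uy(k+f+1) for k = 0, 1, 2 (Eqs. 4.342-4.343) should all be near zero;
# sample_cross_cov(y, y, lag) gives the autocovariance terms.
```

Plotting these five numbers over time, as in the bottom left panels, is a practical check of how close a tuner has come to minimum variance performance.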
The bottom right graph shows the eigenvalues of the matrix $A_N$; the RLD algorithm decreases all the eigenvalues of this matrix. In all the graphs of these figures, the horizontal lines mark the theoretical values. In the top left graph of Figure 4.6, the horizontal line has the value 1.04, which is the theoretical closed-loop variance calculated from the disturbance model and the delay. We see the dotted and dashed curves close to one another, which tells us the performances of both algorithms are good and almost the same. The dotted curve is the result of the RLS self tuning control algorithm, whereas the dashed curve is the result of the RLD self tuning control algorithm. These two curves coincide at the beginning of the simulation run but differ at the end. From observation t = 1 to observation t = 14 the loop is open, and therefore the variances increase; from observation t = 14 to observation t = 199 the loop is closed, and the variances decrease exponentially. From this graph, one could be misled into concluding that the RLS self tuning algorithm is the better algorithm, because it gives a smaller closed-loop variance and determinant of $A_N$; in fact, this is only the result of one particular run. The performance of the RLD algorithm is quite comparable with that of the RLS algorithm, as we shall see in Figure 4.8 later.

[Figure 4.8: Self-Tuning of a Correctly Estimated System. Four panels: closed-loop variances, determinants of the $A_N$'s, output variables, and input variables.]

In Figure 4.7, we have a similar situation: the determinants of the matrices $A_N$ increase while the loop is open but decrease once the loop is closed.
And as the closed-loop variances approach the theoretical value, the determinants of the $A_N$'s approach zero. Figure 4.7 might also mislead us into concluding that the RLS algorithm is more resistant to large disturbances, because the determinant of $A_N$ increases twice under the RLD algorithm but not under the RLS algorithm. Figure 4.8 shows that the RLS algorithm is also vulnerable to large disturbances. This figure shows the results of the self-tuning algorithms on the following control system:
\[
y_t = \frac{0.8}{1 - 0.5z^{-1}}u_{t-2} + \frac{1 - 0.4z^{-1}}{1 - 0.6z^{-1}}a_t \qquad (4.344)
\]
From this figure, we can conclude that the RLD algorithm gives performance comparable with that of the RLS algorithm, because not only the closed-loop variances and the determinants of the $A_N$'s but also the output and input variables of the two algorithms are almost the same. To study the effect of wrong estimates of the system parameters $f$, $m$ and $n$, we used the following model in our simulations:
\[
y_t = \frac{\omega_0 + \omega_1 z^{-1}}{(1 - \delta_1 z^{-1})(1 - \delta_2 z^{-1})}u_{t-f-1} + \frac{1 - 0.4z^{-1}}{1 - 0.6z^{-1}}a_t \qquad (4.345)
\]
and changed the values of $\omega_0$, $\omega_1$, $\delta_1$, $\delta_2$ and $f$ to fit the individual cases. For both self-tuning algorithms, we assumed the system has the model
\[
y_t = \frac{\omega_0}{1 - \delta_1 z^{-1}}u_{t-2} + \frac{1 - \theta z^{-1}}{1 - \phi z^{-1}}a_t \qquad (4.346)
\]
The minimum variance controller for this system can be derived to be
\[
u_t = -\frac{\phi(\phi - \theta)(1 - \delta_1 z^{-1})}{\omega_0(1 - \theta z^{-1})\left(1 + (\phi - \theta)z^{-1}\right)}\,y_t \qquad (4.347)
\]
This means we assumed $f = 1$, $m = 2$ and $n = 1$ for our controllers. In Figure 4.8, we have the case of a correctly estimated system, which means we set the parameters of the simulated model as follows:
\[
\omega_0 = 0.8,\quad \omega_1 = 0.0,\quad \delta_1 = 0.5,\quad \delta_2 = 0.0,\quad f = 1 \qquad (4.348\text{--}4.352)
\]
In Figure 4.9, we have a case of underestimation of $n$.
To achieve this, we set the above parameters as follows:
\[
\omega_0 = 0.8,\quad \omega_1 = 0.0,\quad \delta_1 = 0.5,\quad \delta_2 = 0.24,\quad f = 1 \qquad (4.353\text{--}4.357)
\]
The parameter $n$ of the minimum variance controller for this system must have the value $n = 2$, and we underestimated it with the value $n = 1$. Now if we set
\[
\omega_0 = 0.8,\quad \omega_1 = 0.0,\quad \delta_1 = 0.5,\quad \delta_2 = -0.24,\quad f = 1 \qquad (4.358\text{--}4.362)
\]
we have a case of underestimation of $n$ for an underdamped second order system. The simulation results for this system are shown in Figure 4.10. Similarly, in Figure 4.11 we have a case of underestimation of $m$. To achieve this, we set the above parameters as follows:
\[
\omega_0 = 0.8,\quad \omega_1 = 0.4,\quad \delta_1 = 0.5,\quad \delta_2 = 0.0,\quad f = 1 \qquad (4.363\text{--}4.367)
\]
The parameter $m$ of the minimum variance controller for this system must have the value $m = 3$, and we underestimated it with the value $m = 2$. For the case of overestimation, we actually do not have to worry very much, because as discussed above this case has minimum variance performance. To achieve the case of overestimation of $n$, we set the parameters as follows:
\[
\omega_0 = 0.8,\quad \omega_1 = 0.0,\quad \delta_1 = 0.0,\quad \delta_2 = 0.0,\quad f = 1 \qquad (4.368\text{--}4.372)
\]
This means the minimum variance controller for this system has $n = 0$, and we overestimated it with $n = 1$. Figure 4.12 shows the results of a typical run for this case. In all the cases of wrong order estimation above, both algorithms proved to be stable in all the simulation runs; Figures 4.9, 4.10, 4.11 and 4.12 show typical results. The simulation results support the assertion we made before: if we have wrong orders of the controller, both algorithms will be stable and perform satisfactorily. The same cannot be said when we have a wrong estimate of the delay for the RLS self-tuning algorithm. Figures 4.13 and 4.14 show the results of wrong estimation of the delay parameter $f$. To achieve wrong estimation of the delay, we set the parameters as follows:
\[
\omega_0 = 0.8,\quad \omega_1 = 0.0,\quad \delta_1 = 0.5,\quad \delta_2 = 0.0,\quad f = 0 \qquad (4.373\text{--}4.377)
\]
This is the case of overestimation of the delay: the system has no pure delay, but we gave the self-tuning algorithms the value $f = 1$. In this case, the RLD algorithm proved to be stable, but the RLS algorithm ran away in every simulation run. In Figure 4.13, we see that the dotted curves go out of the plot boundaries in all four small graphs, while the dashed curves stay inside. We also see in this figure that the determinant of $A_N$ under the RLD algorithm approaches zero, an indication of the immunity of the RLD algorithm to overestimation of the delay. The case of underestimation of the delay can be established similarly. We set the parameters as follows:
\[
\omega_0 = 0.8,\quad \omega_1 = 0.0,\quad \delta_1 = 0.5,\quad \delta_2 = 0.0,\quad f = 2 \qquad (4.378\text{--}4.382)
\]
As in the case of overestimation, the RLS self-tuning algorithm ran away, but the RLD algorithm was stable. The determinant of $A_N$ approaches zero under the RLD algorithm but goes out of bounds under the RLS algorithm. These results can be seen in Figure 4.14. We can therefore conclude that the RLD algorithm is immune to wrong estimation of the delay, whether it is underestimated or overestimated, but the RLS algorithm is not.

[Figure 4.9: Self-Tuning of an Underestimated Order (n) System. Four panels: closed-loop variances, determinants of the $A_N$'s, output variables, and input variables.]

[Figure 4.10: Self-Tuning of an Underestimated Order (n) Underdamped System.]
[Figure 4.11: Self-Tuning of an Underestimated Order (m) System.]

[Figure 4.12: Self-Tuning of an Overestimated Order (n) System.]

[Figure 4.13: Self-Tuning of an Overestimated Delay System.]

[Figure 4.14: Self-Tuning of an Underestimated Delay System.]

4.6 Conclusion

In this chapter, we have discussed two important controllers, the two popular controllers in industry: the PID controller and the self tuning controller. Both have been used in industry and both have problems. This thesis has improved these controllers in the sense that it introduced a method to calculate the optimal gains for the PID controller and suggested a new approach to the self tuning controller. The novel concept presented in this chapter is an attractive way to enhance the self tuning controller: it not only eliminates the requirement of knowing the delay of the system, but also allows the self tuning controller to correct its orders or structure.

Chapter 5

Control Interval

5.1 Introduction

One fundamental problem in process control is the determination of the control interval, or the sampling interval (rate), of a control loop. Sampling too fast places a burden on the process computer, while sampling too slowly degrades the controller performance, because we control less often.
The problem is often settled by rules of thumb: for example, 1 second for flow loops, 5 seconds for level or pressure loops and 20 seconds for temperature loops. These rules of thumb are based on the time constant of the dynamics of the system and do not take into account the effect of the stochastic part of the system. The stochastic part can be a loop of a different nature. In this chapter, we will improve an existing method to determine the optimal control interval.

5.2 The Sampling and Controlling Rates

Technically, we have two rates to consider. One is the sampling rate, which determines how often we should sample for data. The other is the control rate or control interval, which indicates how often we should control. Not only can the two be different, but the sampling rate can also be irregular. This situation has been mentioned in Lennartson, B. (1986). For simplicity of the discussion and practicality of the application, we assume the sampling rate is regular and the same as the controlling rate, hereafter called the control interval. The problem we mentioned above is the determination of this control interval.

In a digital process control system, a process computer has to process a number of control loops. Initially, these loops were sampled with the above-mentioned rules of thumb or at the fastest available control interval. With expansion, there are more loops, and the process computer might not have enough time to process all the control loops at the shorter control interval. Some of the loops must then be processed at a longer control interval. Since we control less often, the control performance is likely to be poorer. Therefore, an analysis of which loop should be controlled at a slower rate is necessary. It must be mentioned that the control loop in the analysis is known, i.e., its model at the shorter control interval is available.
Before analyzing the problem, we will briefly describe the effects of sampling slower and faster on the transfer function and disturbance models of the system. This will help us grasp the situation better and understand the problem thoroughly.

5.2.1 Sampling Too Slow

If we have a model at a faster sampling rate and we want the corresponding model at a slower sampling rate, what will this model look like? This is the case we want to discuss, and this question must be answered. In the following, we will give a logical answer first and some mathematical insight later.

Effect on the Transfer Function

Imagine that we have a system with no disturbance under open-loop conditions. Now introduce a step change in the input variable and observe the response of the output variable. If the system is open-loop stable, the response may oscillate around a particular value and eventually settle on it. If this occurs, the system is of second order or higher. If the response approaches the final value slowly without crossing it, the system is of first order. Higher order systems can have a similar response, but in this case they can be treated as a number of first order systems in series. If the response settles on the final value right away, the system is of zero order. From these facts, we can conclude that if we sample the system more slowly, the system will approach a zero order system.

Effect on the Disturbance

In theory, the observations of an ARIMA will autocorrelate to a very high lag, and the observations of an ARMA will autocorrelate to only a moderate lag. Beyond this lag, the autocorrelation coefficients will be zero. This means that if we sample much more slowly, an ARMA will approach a white noise exhibiting no serial correlation.

5.2.2 Sampling Too Fast

Theoretically, if we have a model at a slower rate, we cannot say much about its corresponding model at a faster rate, because of a problem called aliasing. Fortunately, this is
not the problem we wish to discuss. However, in the following, we will speculate on the effect of sampling faster on the models of the transfer function and the disturbance.

Effect on the Transfer Function

We consider the continuous-time system described by the following model:

dx(t)/dt = A* x(t) + b* u(t)    (5.1)
y(t) = c^T x(t)    (5.2)

If this system is discretized into an equivalent discrete-time system with fixed sampling interval Δt, then the discrete control model will be as follows (Kwakernaak, H. and Sivan, R. (1972)):

x_{t+1} = A x_t + b u_t    (5.3)
y_t = c^T x_t    (5.4)

with

A = e^{A* Δt}    (5.5)
b = (∫_0^{Δt} e^{A* t} dt) b*    (5.6)

Now consider the same continuous-time control system sampled with two different control intervals Δt_1 and Δt_2, and assume

Δt_2 = k Δt_1    (5.7)

with k an integer. Then we have the following relationship between the two state transition matrices:

A_2 = e^{A* Δt_2}    (5.8)
    = e^{A* k Δt_1}    (5.9)
    = (e^{A* Δt_1})^k    (5.10)
    = (A_1)^k    (5.11)

The poles of the discrete transfer function with the control interval Δt_i are the eigenvalues of the state transition matrix A_i. The above relationship says that the eigenvalues of the state transition matrix A_2 are the kth powers of the eigenvalues of the state transition matrix A_1. For an open-loop stable system, all the eigenvalues are inside the unit circle, so the powers of these eigenvalues are smaller than the eigenvalues themselves in absolute value. This verifies the fact we mentioned before: if we sample slower, the system will approach a zero order system with zero poles. Conversely, if we sample faster, the eigenvalues of the state transition matrix, or the poles of the system, will increase in absolute value and approach instability. The effect of sampling faster on the transmission zeros is similar, but a little more difficult to prove mathematically, because of the integral in the expression for b.
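The pole relationship (5.8)-(5.11) is easy to check numerically. The following is a minimal sketch, assuming NumPy is available; the matrix A_1 below is an arbitrary hypothetical stable example, not one taken from the thesis:

```python
import numpy as np

# Hypothetical stable discrete state transition matrix A1 at interval Dt1.
A1 = np.array([[0.9, 0.2],
               [0.0, 0.7]])
k = 3  # Dt2 = k * Dt1

# Sampling k times more slowly compounds the transition: A2 = A1^k.
A2 = np.linalg.matrix_power(A1, k)

eig1 = np.sort(np.linalg.eigvals(A1))
eig2 = np.sort(np.linalg.eigvals(A2))

# The poles at the slower rate are the k-th powers of the faster-rate poles,
# so for a stable system they shrink toward zero (toward a zero order system).
print(np.allclose(eig2, eig1 ** k))         # True
print(np.all(np.abs(eig2) < np.abs(eig1)))  # True
```

The same compounding argument read in reverse illustrates the claim about sampling faster: taking k-th roots of poles inside the unit circle moves them outward, toward the unit circle.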
However, we can say that if we sample slower, a number of transmission zeros will approach zero and, conversely, if we sample faster, the polynomial ω(z^{-1}) will become unstable. This is known as the nonminimum phase case: the process has an inverse response. It has been a practice in the process industry for engineers to sample slower to avoid the case of nonminimum phase.

Effect on the Disturbance

Mathematically, a rational transfer function is different from an ARIMA in the sense that the effect of the transfer function is continuous in nature, whereas an ARIMA is essentially discrete. However, the effect of sampling on the models is practically the same. If one samples too fast, the ARMA might approach nonstationarity. As for the moving average polynomial, we always get an invertible (stable) polynomial from modelling or identification.

5.3 The Control Interval

5.3.1 Literature Survey

The control interval has been studied by only a few individuals. Lennartson, B. (1986) studied this problem. However, his work is more a comparison of different control strategies than an in-depth study of the sampling or control interval. He mentioned that a sampling rate

h = 2π/(N ω_B)    (5.12)

was suggested in Åström, K. J. and Wittenmark, B. (1984). In this formula, ω_B is the closed-loop bandwidth and N is a number ranging from 6 to 10. In MacGregor, J. F. (1976), the control interval is determined by a comparison of the theoretical closed-loop performances at different sampling rates. Since the method of MacGregor, J. F. is closely related to our work and has more industrial appeal, we choose to improve this work. Since the closed-loop performance can be determined entirely from the ARIMA and the delay of the system, the determination of the optimal control interval can be based on the modelling of an ARIMA at different sampling rates, from the fastest to the slower ones. MacGregor, J.
F.'s approach solves for the roots of the autoregressive polynomial and raises these roots to a power to obtain the autoregressive parameters of the new ARMA. As for the moving average parameters, they have to be solved for from a number of identities. In this chapter, we will improve this approach by using matrix algebra to obtain the autoregressive parameters and a robust numerical algorithm to obtain the moving average parameters.

5.3.2 The Optimal Control Interval

The Parameters of a Skipped ARIMA

We have mentioned that the optimal control interval can be determined from the modelling of the ARIMA disturbance at different sampling rates. But the way to do this is not simply to collect the observations and run the identification algorithm a number of times. The proper way is to model the ARIMA at the fastest sampling rate. At this rate, we have high accuracy, because of the large number of observations. Then we determine the parametric relationship between the fastest sampling rate ARIMA and one at a slower sampling rate. The slower sampling rate ARIMA is normally called a skipped ARIMA, because a number of observations are actually skipped in the process of recording. In the following, we will determine this parametric relationship.

As mentioned in an earlier chapter, we normally model only an ARMA; therefore, we will continue to do so in this chapter. Suppose we are given an ARMA with the following model:

n_t = (1 - θ_1 z^{-1} - ··· - θ_q z^{-q}) / (1 - φ_1 z^{-1} - ··· - φ_p z^{-p}) a_t    (5.13)

The question is: if this ARMA is observed at a rate r times slower than the original rate (ΔT = r Δt), what will the parameters of the new ARMA be? The original series can be put into the following state space model form (MacGregor, J. F. (1973)):

x_{t+1} = A x_t + b a_{t+1}    (5.14)
n_t = c^T x_t    (5.15)

with

A = [ φ_1      1  0  ···  0
      φ_2      0  1  ···  0
       ···
      φ_{m-1}  0  0  ···  1
      φ_m      0  0  ···  0 ],   b = [ 1, -θ_1, ···, -θ_{m-1} ]^T    (5.16)

c^T = [ 1  0  0  ···  0 ]    (5.17)

where m = max[p, q], i.e.,
φ_{p+1} = ··· = φ_m = 0 if p < m, and θ_{q+1} = ··· = θ_m = 0 if q < m.

From the above equations, we obtain

x_{t+1} = A x_t + b a_{t+1}    (5.18)
x_{t+2} = A x_{t+1} + b a_{t+2}    (5.19)
        = A [A x_t + b a_{t+1}] + b a_{t+2}    (5.20)
        = A^2 x_t + Σ_{i=0}^{1} A^i b a_{t+2-i}    (5.21)
        ···
x_{t+r} = A^r x_t + Σ_{i=0}^{r-1} A^i b a_{t+r-i}    (5.24)

By moving the first term on the right hand side to the left hand side and denoting z^{-1} as the unit backward shift operator, we can write

x_{t+r} = [I - (A z^{-1})^r]^{-1} Σ_{i=0}^{r-1} A^i b a_{t+r-i}    (5.25)

and

n_t = c^T [I - (A z^{-1})^r]^{-1} Σ_{i=0}^{r-1} A^i b a_{t-i}    (5.26)

There are a few ways to avoid the inverse matrix in the above equation. The first way is to replace it by its relationship with the adjoint matrix and the determinant. The second way is to use the Cayley-Hamilton theorem. In Vu, K. (1990), the author introduced a third way to attack this problem. Following this approach, we start from the scalar identity

1/(1 - x) = 1 + x + x^2 + ···,   |x| < 1    (5.27)

then write

1/(1 - x) = 1 + x + x^2 + ··· + x^i + x^{i+1}/(1 - x)    (5.28)
[1 - x]^{-1} = 1 + x + x^2 + ··· + x^i + x^{i+1} [1 - x]^{-1}    (5.29)

and obtain the matrix version of the above equation as follows:

[I - (A z^{-1})^r]^{-1} = I + A^r z^{-r} + A^{2r} z^{-2r} + ··· + A^{ir} z^{-ir} + A^{(i+1)r} z^{-(i+1)r} [I - (A z^{-1})^r]^{-1}    (5.30)

for any non-negative integer i. The above equation can be verified by premultiplying and postmultiplying both sides by the matrix I - (A z^{-1})^r. If we multiply both sides of Equation (5.26) by α_0 = 1 and choose i in Equation (5.30) as k, we get

n_t = [c^T + c^T A^r z^{-r} + ··· + c^T A^{(k-1)r} z^{-(k-1)r} + c^T A^{kr} z^{-kr} [I - (A z^{-1})^r]^{-1}] Σ_{i=0}^{r-1} A^i b a_{t-i}    (5.31)

Similarly, we can multiply both sides of Equation (5.26) by α_1 z^{-r} and choose i in Equation (5.30) as k - 1 to get

α_1 z^{-r} n_t = [α_1 c^T z^{-r} + α_1 c^T A^r z^{-2r} + ··· + α_1 c^T A^{(k-2)r} z^{-(k-1)r} + α_1 c^T A^{(k-1)r} z^{-kr} [I - (A z^{-1})^r]^{-1}] Σ_{i=0}^{r-1} A^i b a_{t-i}    (5.32)

We can continue the sequence, each time multiplying Equation (5.26) by α_j z^{-jr} with increasing j and choosing i such that i = k - j in Equation (5.30).
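The r-step recursion (5.24) at the heart of this derivation can be checked by direct simulation. A minimal sketch, assuming NumPy; the matrices below are an illustrative companion form, not prescribed values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative state-space matrices of an ARMA (companion form, Equation (5.16)).
A = np.array([[1.2,   1.0, 0.0],
              [-0.47, 0.0, 1.0],
              [0.06,  0.0, 0.0]])
b = np.array([1.0, -0.8, 0.12])
r = 2

# Simulate x_{t+1} = A x_t + b a_{t+1} for r steps from a random start.
x = rng.standard_normal(3)
a = rng.standard_normal(r)          # a_{t+1}, ..., a_{t+r}
x_sim = x.copy()
for j in range(r):
    x_sim = A @ x_sim + b * a[j]

# Closed form (5.24): x_{t+r} = A^r x_t + sum_{i=0}^{r-1} A^i b a_{t+r-i}
x_formula = np.linalg.matrix_power(A, r) @ x
for i in range(r):
    x_formula += np.linalg.matrix_power(A, i) @ b * a[r - 1 - i]

print(np.allclose(x_sim, x_formula))  # True
```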
In general, we will get an equation

α_j z^{-jr} n_t = [α_j c^T z^{-jr} + ··· + α_j c^T A^{(k-j)r} z^{-kr} [I - (A z^{-1})^r]^{-1}] Σ_{i=0}^{r-1} A^i b a_{t-i}    (5.33)

The last one in the sequence must be

α_k z^{-kr} n_t = α_k c^T z^{-kr} [I - (A z^{-1})^r]^{-1} Σ_{i=0}^{r-1} A^i b a_{t-i}    (5.34)

Now if we define the following polynomial:

α(z^{-r}) = α_0 + α_1 z^{-r} + α_2 z^{-2r} + ··· + α_k z^{-kr},   α_0 = 1    (5.35)

then by adding up all the above equations of the sequence, we obtain

α(z^{-r}) n_t = [c^T + c^T A^r z^{-r} + ··· + c^T A^{(k-1)r} z^{-(k-1)r} + c^T A^{kr} z^{-kr} [I - (A z^{-1})^r]^{-1}
  + α_1 c^T z^{-r} + α_1 c^T A^r z^{-2r} + ··· + α_1 c^T A^{(k-2)r} z^{-(k-1)r} + α_1 c^T A^{(k-1)r} z^{-kr} [I - (A z^{-1})^r]^{-1}
  + α_2 c^T z^{-2r} + α_2 c^T A^r z^{-3r} + ··· + α_2 c^T A^{(k-3)r} z^{-(k-1)r} + α_2 c^T A^{(k-2)r} z^{-kr} [I - (A z^{-1})^r]^{-1}
  + ···
  + α_k c^T z^{-kr} [I - (A z^{-1})^r]^{-1}] Σ_{i=0}^{r-1} A^i b a_{t-i}    (5.36)

By collecting terms of the same powers in z, we can write the above equation as

α(z^{-r}) n_t = [c^T + (c^T A^r + α_1 c^T) z^{-r} + (c^T A^{2r} + α_1 c^T A^r + α_2 c^T) z^{-2r} + ···
  + (c^T A^{(k-1)r} + α_1 c^T A^{(k-2)r} + ··· + α_{k-1} c^T) z^{-(k-1)r}
  + (c^T A^{kr} + α_1 c^T A^{(k-1)r} + ··· + α_k c^T) [I - (A z^{-1})^r]^{-1} z^{-kr}] Σ_{i=0}^{r-1} A^i b a_{t-i}    (5.37)

If the coefficients α_i's are chosen such that

c^T A^{kr} + α_1 c^T A^{(k-1)r} + ··· + α_k c^T = 0    (5.38)

then the term with the power -kr in the z-transform operator of the previous equation will vanish. The equation will become

Σ_{i=0}^{k} α_i z^{-ir} n_t = Σ_{i=0}^{k-1} Σ_{j=0}^{i} α_j c^T A^{(i-j)r} z^{-ir} Σ_{l=0}^{r-1} A^l b a_{t-l}    (5.39)

As for the coefficients α_i's, we can determine them from Equation (5.38), which gives

[ α_1  α_2  ···  α_k ] [ c^T A^{(k-1)r}
                         c^T A^{(k-2)r}
                          ···
                         c^T ] = -c^T A^{kr}    (5.40)

Now by transposing the above equation, we get

[ c^T A^{(k-1)r} ]^T [ α_1 ]
[ c^T A^{(k-2)r} ]   [ α_2 ]
[      ···       ]   [ ··· ] = -[c^T A^{kr}]^T    (5.41)
[ c^T            ]   [ α_k ]
and by premultiplying both sides by the matrix

(M M^T)^{-1} M,   where   M = [ c^T A^{(k-1)r}
                                c^T A^{(k-2)r}
                                 ···
                                c^T A^r
                                c^T ]    (5.42)

we obtain

[ α_1 ]
[ α_2 ] = -(M M^T)^{-1} M [c^T A^{kr}]^T    (5.43)
[ ··· ]
[ α_k ]

The parameter k is chosen such that it is as large as possible, is in the range p ≤ k ≤ m, and makes the inverse matrix in (5.42) exist. Equation (5.26) can also be written as

n_t = c^T Adj[I - (A z^{-1})^r] / det[I - (A z^{-1})^r] Σ_{i=0}^{r-1} A^i b a_{t-i}    (5.44)

So if we premultiply α(z^{-r}) to both sides of Equation (5.26) to make its right hand side fractionless, then

α(z^{-r}) = det[I - (A z^{-1})^r]    (5.45)

and

c^T Adj[I - (A z^{-1})^r] = Σ_{i=0}^{k-1} Σ_{j=0}^{i} α_j c^T A^{(i-j)r} z^{-ir}    (5.46)

The relationship between the moving average parameters is more complicated and can be derived from the equations given below:

α(z^{-r}) n_t = α(z^{-r}) c^T [I - (A z^{-1})^r]^{-1} Σ_{i=0}^{r-1} A^i b a_{t-i}    (5.47)
  = Σ_{i=0}^{k-1} Σ_{j=0}^{i} α_j c^T A^{(i-j)r} z^{-ir} Σ_{l=0}^{r-1} A^l b a_{t-l}    (5.48)
  = Σ_{i=0}^{k-1} Σ_{j=0}^{i} Σ_{n=0}^{r-1} α_j c^T A^{r(i-j)+n} b a_{t-ri-n}    (5.49)
  = a_t - ψ_1 a_{t-1} - ··· - ψ_h a_{t-h}    (5.50)
  = ψ(z^{-1}) a_t    (5.51)
  = x_t    (5.52)

The left hand side of the above equation gives the autoregressive part of the skipped ARMA with the observing rate ΔT. The parameters ψ_i of the above equation are given by the following equation:

ψ_i = -Σ_{j=0}^{[i/r]} α_j c^T A^{i-jr} b    (5.53)

where the square brackets stand for the integer part. Now if we designate the skipped ARMA as

s_T = β(z^{-r}) / α(z^{-r}) e_T    (5.54)

and

(1 + α_1 z^{-r} + ··· + α_k z^{-kr}) s_T = (1 - β_1 z^{-r} - ··· - β_l z^{-lr}) e_T    (5.55)
  = w_T    (5.56)

then the orders of the polynomials β(z^{-r}) and α(z^{-r}) of s_T can be obtained as follows. We have mentioned that k is the order of α(z^{-r}). However, from the discussion on the effect of sampling on the transfer function and the results of MacGregor, J. F. (1976), the roots of the polynomial α(z^{-r}) are the roots of the polynomial φ(z^{-1}) raised to the rth power, so k must be equal to p.
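Equation (5.43) is straightforward to implement. The sketch below is a hypothetical NumPy rendering (not the thesis's software): it computes the α_i's for an illustrative polynomial φ(z^{-1}) = 1 - 1.2 z^{-1} + 0.47 z^{-2} - 0.06 z^{-3} with r = 2, and checks that the roots of α are the r-th powers of the roots of φ:

```python
import numpy as np

# Companion matrix A for phi(z^-1) = 1 - 1.2 z^-1 + 0.47 z^-2 - 0.06 z^-3
# (roots of phi: 0.5, 0.4, 0.3).
A = np.array([[1.2,   1.0, 0.0],
              [-0.47, 0.0, 1.0],
              [0.06,  0.0, 0.0]])
c = np.array([1.0, 0.0, 0.0])
r, k = 2, 3  # skip factor and alpha order (k = p here)

# Rows c^T A^{(k-1)r}, ..., c^T A^r, c^T  -- the matrix M of Equation (5.42).
M = np.vstack([c @ np.linalg.matrix_power(A, j * r) for j in range(k - 1, -1, -1)])
rhs = c @ np.linalg.matrix_power(A, k * r)

# Equation (5.43): alpha = -(M M^T)^{-1} M (c^T A^{kr})^T
alpha = -np.linalg.solve(M @ M.T, M @ rhs)

# The roots of alpha should be the r-th powers of the roots of phi:
# 0.25, 0.16 and 0.09.
roots = np.sort(np.roots(np.concatenate(([1.0], alpha))))
print(np.round(roots, 4))  # [0.09 0.16 0.25]
```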
If k is larger than p, then the last k - p coefficients will be zeros. As for the order l of the polynomial β(z^{-r}): MacGregor, J. F. (1976) and Anderson, O. D. (1975) gave l as

l = [p + (q - p)/r]    (5.57)

We can also give an estimate of l as follows. From Equation (5.50), x_t is a moving average of order h sampled at the sampling interval Δt, so if we sample x_t at the interval ΔT = r Δt to get the moving average time series w_T, then l — its order — is the integer part (quotient) of h/r, and h can be calculated as h = r - 1 + r(k - 1) from Equation (5.49). Therefore, we have

l = [h/r]    (5.58)

Note that in Equation (5.54) for the skipped ARMA s_T, the generating white noise is e_T, not a_T, i.e., not a nonskipped observation of the a_t's. The reason for this is that in the modelling of an ARMA, the white noise is usually considered a fictitious uncorrelated sequence with the smallest variance. And so when a skipped ARMA s_T is formed, it is not bound by the fact that it is driven by a_T.

Now we will obtain an expression for the parameters of the polynomial β(z^{-r}). From Equation (5.50), if we square both sides and take expectation, we obtain

E{α(z^{-r}) n_t α(z^{-r}) n_t} = (1 + Σ_{i=1}^{h} ψ_i^2) σ_a^2    (5.59)

Now if we carry out the same operation on the skipped ARMA s_T, we obtain

E{α(z^{-r}) s_T α(z^{-r}) s_T} = (1 + Σ_{i=1}^{l} β_i^2) σ_e^2    (5.60)

And since at a nonskipped position s_T = n_t, we have

E{α(z^{-r}) n_t α(z^{-r}) n_t} = E{α(z^{-r}) s_T α(z^{-r}) s_T}    (5.61)

This gives us

(1 + Σ_{i=1}^{h} ψ_i^2) σ_a^2 = (1 + Σ_{i=1}^{l} β_i^2) σ_e^2    (5.62)

Now instead of squaring both sides, we multiply both sides of Equation (5.50) by the quantity α(z^{-r}) n_{t-r}, then take expectation, to obtain

E{α(z^{-r}) n_t α(z^{-r}) n_{t-r}} = (-ψ_r + Σ_{i=1}^{h-r} ψ_i ψ_{i+r}) σ_a^2    (5.63)

and similarly

E{α(z^{-r}) s_T α(z^{-r}) s_{T-1}} = (-β_1 + Σ_{i=1}^{l-1} β_i β_{i+1}) σ_e^2    (5.64)

and we can write

(-ψ_r + Σ_{i=1}^{h-r} ψ_i ψ_{i+r}) σ_a^2 = (-β_1 + Σ_{i=1}^{l-1} β_i β_{i+1}) σ_e^2    (5.65)
In general, we will have

(1 + β_1^2 + β_2^2 + ··· + β_l^2) σ_e^2 = γ_w(0) = γ_x(0)    (5.66)
(-β_1 + β_1 β_2 + ··· + β_{l-1} β_l) σ_e^2 = γ_w(1) = γ_x(r)    (5.67)
(-β_2 + β_1 β_3 + ··· + β_{l-2} β_l) σ_e^2 = γ_w(2) = γ_x(2r)    (5.68)
···
(-β_{l-1} + β_1 β_l) σ_e^2 = γ_w(l-1) = γ_x((l-1)r)    (5.69)
-β_l σ_e^2 = γ_w(l) = γ_x(rl)    (5.70)

In the above set of equations, we have l + 1 equations and l + 1 parameters; therefore, we can solve for the parameters by substitution. However, this approach becomes more cumbersome as l gets larger, and so a numerical approach is preferred. Wilson, G. (1969) suggested an algorithm to obtain the parameters of a moving average time series numerically from its statistics. However, since it uses Newton's method, there is more computational burden, because the derivative has to be calculated. Furthermore, there is a matrix inversion in this method. An alternative approach is presented below. We write the moving average parameters β_i's as

β_1 = β_1 β_2 + ··· + β_{l-1} β_l - (γ_w(1)/γ_w(0)) (1 + β_1^2 + β_2^2 + ··· + β_l^2)    (5.71)
β_2 = β_1 β_3 + ··· + β_{l-2} β_l - (γ_w(2)/γ_w(0)) (1 + β_1^2 + β_2^2 + ··· + β_l^2)    (5.72)
···
β_{l-1} = β_1 β_l - (γ_w(l-1)/γ_w(0)) (1 + β_1^2 + β_2^2 + ··· + β_l^2)    (5.73)
β_l = -(γ_w(l)/γ_w(0)) (1 + β_1^2 + β_2^2 + ··· + β_l^2)    (5.74)

and put the above equations into the following matrix form:

β = [ β_2  β_3  ···  β_l  0
      β_3  β_4  ···  0    0
               ···
      β_l  0    ···       0
      0    0    ···       0 ] β - (1 + β^T β)/γ_w(0) [ γ_w(1)
                                                       γ_w(2)
                                                        ···
                                                       γ_w(l) ]    (5.75)

The above equation has the familiar form

x = f(x)    (5.76)

This equation has a specific name: it is called an iteration equation. Its name probably comes from the fact that the solution can be obtained numerically by the iteration

x_N = f(x_{N-1})    (5.77)

until convergence. It has been proved by Ostrowski, A. M. (1966) that if the absolute value of the derivative of f(x) at the solution is smaller than unity, the iteration always converges to this solution from the initial estimate x = 0.
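The fixed-point iteration just described can be sketched as follows. This is a hypothetical Python re-implementation of the idea (the thesis's own program, maJd.m, is in MATLAB and listed in Appendix B); the function name and the iteration count are assumptions:

```python
import numpy as np

def ma_from_autocov(gamma, n_iter=200):
    """Recover the invertible MA parameters beta_1..beta_l and the variance
    sigma_e^2 from the autocovariances gamma = [gamma_w(0), ..., gamma_w(l)]
    of the series w_T = (1 - beta_1 z^-1 - ... - beta_l z^-l) e_T,
    by direct iteration of the fixed-point equation starting from beta = 0."""
    gamma = np.asarray(gamma, dtype=float)
    l = len(gamma) - 1
    beta = np.zeros(l)
    for _ in range(n_iter):
        # B is the shifted-beta matrix: row j (0-based, lag j+1) holds
        # beta_{j+2}, ..., beta_l followed by zeros, so that
        # (B @ beta)[j] = sum_i beta_i * beta_{i+j+1}.
        B = np.zeros((l, l))
        for j in range(l):
            for i in range(l - j - 1):
                B[j, i] = beta[i + j + 1]
        beta = B @ beta - (1.0 + beta @ beta) / gamma[0] * gamma[1:]
    sigma2 = gamma[0] / (1.0 + beta @ beta)
    return beta, sigma2

# Autocovariances gamma_w(0..2) from the worked ARMA example of this chapter:
beta, sigma2 = ma_from_autocov([1.3266064, -0.4431464, 0.01128])
print(np.round(beta, 4), round(sigma2, 4))  # [ 0.3782 -0.0097] 1.1605
```

No derivative and no matrix inversion are needed, which is the advantage over Wilson's Newton-type algorithm.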
Since the right hand side of Equation (5.75) is quadratic in the parameters β_i's, its derivatives with respect to these parameters must be linear functions of them. In addition, the parameters must form an invertible polynomial. These facts suggest trying the iteration approach: define the parameter vector

β_N = [ β_1
        β_2
        ···
        β_l ]_N    (5.78)

as the moving average parameters valued at iteration N, then solve Equation (5.75) by iterating β_N as below:

β_N = [ β_2  β_3  ···  β_l  0
        β_3  ···  β_l  0    0
                ···
        β_l  0    ···       0
        0    0    ···       0 ]_{N-1} β_{N-1} - (1 + β_{N-1}^T β_{N-1})/γ_w(0) [ γ_w(1)
                                                                                 γ_w(2)
                                                                                  ···
                                                                                 γ_w(l) ]    (5.79)

with the initial estimate β_0 = 0, until convergence. It has been tested extensively with software that this algorithm always converges to the correct and unique invertible parameters of the moving average polynomial.

From the above discussion, we notice that it is not necessary to obtain the polynomial ψ(z^{-1}) in order to obtain the parameters β_i's. However, the availability of this information can be used as an additional check for errors in the procurement of the parameters β_i's. It must also be noted that the calculation of the parameters α_i's does not require knowledge of the parameters θ_i's. Only the calculation of the parameters β_i's requires this knowledge.

The Control Interval

Basically, the theory of MacGregor, J. F.'s method is as follows. We have a Box-Jenkins model at a control interval Δt:

y_t = ω(z^{-1})/δ(z^{-1}) u_{t-f-1} + θ(z^{-1})/φ(z^{-1}) a_t    (5.80)
    = ω(z^{-1})/δ(z^{-1}) u_{t-f-1} + n_t    (5.81)

The minimum variance closed-loop performance at this control interval can be given entirely from the polynomials θ(z^{-1}) and φ(z^{-1}) and the delay f. Let this variance be σ_y^2(t). Now, if we model the same system at the slower rate ΔT = r Δt, we get the model

y_T = ω*(z^{-r})/δ*(z^{-r}) u_{T-f/r-1} + β(z^{-r})/α(z^{-r}) e_T    (5.82)
    = ω*(z^{-r})/δ*(z^{-r}) u_{T-f/r-1} + n_T    (5.83)

with the minimum variance closed-loop performance σ_y^2(T). We can make the decision to choose the slower control interval ΔT by comparing σ_y^2(T) and σ_y^2(t).
The control loop can be sampled and controlled more slowly if

σ_y^2(T) ≈ σ_y^2(t)    (5.84)

The above equation says that there is not much degradation in the controller performance when the loop is controlled more slowly.

Other Applications

In the above discussion, we have introduced techniques to obtain expressions for the parameters of a skipped ARIMA in terms of the parameters of its original time series. This technique was applied to determine whether a control loop can be controlled more slowly. In this endeavour, we have spent a lot of effort to obtain the expression for the autoregressive parameters of the skipped ARIMA, Equation (5.43). The question now is whether what we got was worth the effort. In terms of labor and accuracy, the gain is probably not much, especially if the number of autoregressive parameters is small (< 5). The answer is that the technique can be carried over to other applications. The new expression for the time series n_t given by Equation (5.44) can be used as a stepping stone for other developments in the time series literature. This equation enables us to obtain the coefficients ψ_i's of the moving average time series x_t, Equation (5.53). These coefficients facilitate the calculation of the parameters β_i's. These coefficients were not obtained by MacGregor, J. F. (1976), whose work relied on a lemma in Telser, L. G. (1967)'s work. That work also did not give an expression for the coefficients ψ_i's in the general case. This leaves our work as the only contributor. As for other applications, we will consider two of them: one in statistics and one in engineering.
The aggregate ARIMA time series is one that is formed by the addition of a number of consecutive observations of another ARIMA, as described by the following equation:

N_T = Σ_{j=0}^{r-1} n_{t-j}    (5.85)
    = Σ_{j=0}^{r-1} z^{-j} n_t    (5.86)

The new ARIMA time series N_T has a physical meaning such as weekly or monthly data, instead of the daily or hourly data portrayed by the ARIMA time series n_t. Note that the new ARIMA N_T has a different sampling interval from that of the old ARIMA n_t. By replacing n_t in the above equation, we obtain

N_T = c^T Adj[I - (A z^{-1})^r] / det[I - (A z^{-1})^r] Σ_{i=0}^{r-1} Σ_{j=0}^{r-1} A^i b a_{t-i-j}    (5.87)

The above equation gives us a direct relationship between the autoregressive parameters of the two time series N_T and n_t. As for the moving average parameters, we can develop the right hand side of the above equation or obtain them from a relationship between the autocovariances of the two time series. The relationship between the autocovariances of an aggregate ARIMA and its disaggregate ARIMA is available in the time series literature (Wei, W. W. S. (1990)).

In the pulp and paper industry, there is one application that can use the technology we presented in this chapter. This application is very close to the aggregate ARIMA. The beta-gauge sensor measures the basis weight, moisture and caliper profile data while it moves across the width of a paper machine. These profile data have a very high resolution, and the profile is called the high resolution profile. This profile can have up to 600 points. In control, we do not need such a high resolution profile. This profile will be reduced to a low resolution profile for control. This low resolution profile contains the averages of the same number of consecutive points of the high resolution profile. There are a number of reasons for doing this. The first reason is to reduce sensor noise. The second reason is to make sure there is enough memory space for storage of these profiles.
The third reason is that we do not want to waste execution time on these profiles when they contain a lot of data. Averaging out this high resolution profile is the right approach, but we do not want to overdo it and lose good cross-machine variation. With our presented technology, we can model the high resolution profile as the ARIMA n_t and the low resolution profile as the ARIMA N_T and obtain the following relationship:

N_T = c^T Adj[I - (A z^{-1})^r] / det[I - (A z^{-1})^r] Σ_{i=0}^{r-1} Σ_{j=0}^{r-1} A^i b a_{t-i-j}    (5.88)

For a different r, we get a different ARIMA N_T and a different variance for this ARIMA. By comparing the variances of these ARIMAs N_T's, we can come up with a decision on how many points to take for the averaging of the high resolution profile.

5.3.3 Examples

In this section, we will present some examples to illustrate the presented theory. We consider the following time series:

n_t = (1 - 0.8 z^{-1} + 0.12 z^{-2}) / (1 - 1.2 z^{-1} + 0.47 z^{-2} - 0.06 z^{-3}) a_t    (5.89)

with a_t of unit variance. Now we want to find the parameters of the skipped ARMA with the skipping parameter r = 2. The original ARMA time series has the state space model form

x_{t+1} = A x_t + b a_{t+1}    (5.90)
n_t = c^T x_t    (5.91)

with

A = [  1.2   1  0
      -0.47  0  1
       0.06  0  0 ],   b = [  1
                             -0.8
                              0.12 ]    (5.92)

c^T = [ 1  0  0 ]    (5.93)

In our example, we have r = 2 and m = 3, and therefore k should be equal to 3. This means we have to obtain

A^2 = [  0.9700   1.2000   1.0000
        -0.5040  -0.4700   0
         0.0720   0.0600   0 ]    (5.94)

A^4 = [  0.4081   0.6600   0.9700
        -0.2520  -0.3839  -0.5040
         0.0396   0.0582   0.0720 ]    (5.95)

A^6 = [  0.1331   0.2377   0.4081
        -0.0872  -0.1522  -0.2520
         0.0143   0.0245   0.0396 ]    (5.96)

and we have

[ α_1 ]
[ α_2 ] = -(M M^T)^{-1} M [c^T A^6]^T,   where   M = [ c^T A^4
[ α_3 ]                                                c^T A^2
                                                       c^T ]    (5.97)

        = [ -0.5000
             0.0769
            -0.0036 ]    (5.98)

With these values of the α_i's, the roots of α(z^{-2}) are 0.25, 0.16 and 0.09. Comparing with the roots of φ(z^{-1}), which are 0.5, 0.4 and 0.3, we can claim that the autoregressive polynomial α(z^{-2}) is correct.
As for the order of the polynomial β(z^{-2}), we have h = 5 and hence l = 2. The coefficients ψ_i's of the moving average time series x_t in Equation (5.50) are given below:

ψ_1 = -c^T A b = -0.4000    (5.99)
ψ_2 = -c^T A^2 b - α_1 c^T b = 0.3700    (5.100)
ψ_3 = -c^T A^3 b - α_1 c^T A b = 0.1720    (5.101)
ψ_4 = -c^T A^4 b - α_1 c^T A^2 b - α_2 c^T b = -0.0084    (5.102)
ψ_5 = -c^T A^5 b - α_1 c^T A^3 b - α_2 c^T A b = -0.0072    (5.103)

This gives us

γ_w(0) = (1 + ψ_1^2 + ψ_2^2 + ψ_3^2 + ψ_4^2 + ψ_5^2) σ_a^2 = 1.3266064    (5.104)
γ_w(1) = (-ψ_2 + ψ_1 ψ_3 + ψ_2 ψ_4 + ψ_3 ψ_5) σ_a^2 = -0.4431464    (5.105)
γ_w(2) = (-ψ_4 + ψ_1 ψ_5) σ_a^2 = 0.01128    (5.106)

and, as described in the theory, we have

(1 + β_1^2 + β_2^2) σ_e^2 = γ_w(0)    (5.107)
(-β_1 + β_1 β_2) σ_e^2 = γ_w(1)    (5.108)
-β_2 σ_e^2 = γ_w(2)    (5.109)

Now if we solve these equations by substitution, we will obtain the following relationships:

β_2 = -γ_w(2)/σ_e^2    (5.110)
β_1 β_2 σ_e^2 = -β_1 γ_w(2)    (5.111)
-β_1 σ_e^2 - β_1 γ_w(2) = γ_w(1)    (5.112)
β_1 = -γ_w(1)/(σ_e^2 + γ_w(2))    (5.113)
β_1^2 = γ_w^2(1)/(σ_e^2 + γ_w(2))^2    (5.114)

and the final equation to solve for the variance σ_e^2 as

(σ_e^2)^4 + (2γ_w(2) - γ_w(0))(σ_e^2)^3 + (2γ_w^2(2) - 2γ_w(2)γ_w(0) + γ_w^2(1))(σ_e^2)^2 + (2γ_w^3(2) - γ_w(0)γ_w^2(2))σ_e^2 + γ_w^4(2) = 0    (5.115)

This is a quartic equation, and the solution for σ_e^2 must be positive. To solve this problem by our method, we wrote a program called maJd.m to calculate the parameters β_1, β_2 and the variance σ_e^2. This program is included in Appendix B. From this program, we obtain the following solutions:

β_1 = 0.3782    (5.116)
β_2 = -0.0097    (5.117)
σ_e^2 = 1.1605    (5.118)

With the above solution for σ_e^2 and the given autocovariances γ_w(i), i = 0, ···, 2, the left hand side of the above quartic equation gives a value of 3.694e-07. This is accurate enough, and we can say both methods give the same solution. The skipped ARMA s_T is hence given as follows:

s_T = (1 - 0.3782 z^{-r} + 0.0097 z^{-2r}) / (1 - 0.5 z^{-r} + 0.0769 z^{-2r} - 0.0036 z^{-3r}) e_T    (5.119)

As the final test of our theory, we will calculate and compare the autocovariances of both time series s_T and n_t. And if we have the following relationship:

[ γ_s(0)   ]   [ γ_n(0)    ]
[ γ_s(1)   ]   [ γ_n(r)    ]
[  ···     ] = [  ···      ]    (5.120)
[ γ_s(k+l) ]   [ γ_n(rk+rl)]
then we can say that the skipped ARMA s_T consists of the skipped observations of the original ARMA n_t. But first of all, we need the equation to calculate the autocovariances of an ARMA. Since the derivation of this equation is short, we will include it here.

The white noise a_t has the property that it is not autocorrelated, which means

E{a_t a_{t-j}} = 0   for j ≠ 0    (5.121)
             = σ_a^2   for j = 0    (5.122)

It also has the property that its future values are not cross-correlated with the values of the time series, which means

E{a_t n_{t-j}} = 0   for j > 0    (5.123)
E{n_t a_{t-j}} = γ_an(j)   for j ≥ 0    (5.124)

From the equation of the ARMA n_t time series

(1 - φ_1 z^{-1} - φ_2 z^{-2} - φ_3 z^{-3}) n_t = (1 - θ_1 z^{-1} - θ_2 z^{-2}) a_t    (5.125)

if we multiply both sides of Equation (5.125) by a_{t-i}, let i go from 0 to o > 3, then take expectation, we have

γ_an(0) = σ_a^2    (5.126)
γ_an(1) - φ_1 γ_an(0) = -θ_1 σ_a^2    (5.127)
γ_an(2) - φ_1 γ_an(1) - φ_2 γ_an(0) = -θ_2 σ_a^2    (5.128)
γ_an(3) - φ_1 γ_an(2) - φ_2 γ_an(1) - φ_3 γ_an(0) = 0    (5.129)
···
γ_an(o) - φ_1 γ_an(o-1) - φ_2 γ_an(o-2) - φ_3 γ_an(o-3) = 0

The above equations can be written in the following matrix form:

[  1                          ] [ γ_an(0) ]   [  σ_a^2      ]
[ -φ_1   1                    ] [ γ_an(1) ]   [ -θ_1 σ_a^2  ]
[ -φ_2  -φ_1   1              ] [ γ_an(2) ] = [ -θ_2 σ_a^2  ]    (5.130)
[ -φ_3  -φ_2  -φ_1   1        ] [ γ_an(3) ]   [  0          ]
[            ···              ] [  ···    ]   [  ···        ]
[       -φ_3  -φ_2  -φ_1   1  ] [ γ_an(o) ]   [  0          ]

This means we have

[ γ_an(0) ]   [  1                          ]^{-1} [  σ_a^2      ]
[ γ_an(1) ]   [ -φ_1   1                    ]      [ -θ_1 σ_a^2  ]
[ γ_an(2) ] = [ -φ_2  -φ_1   1              ]      [ -θ_2 σ_a^2  ]    (5.131)-(5.132)
[ γ_an(3) ]   [ -φ_3  -φ_2  -φ_1   1        ]      [  0          ]
[  ···    ]   [            ···              ]      [  ···        ]
[ γ_an(o) ]   [       -φ_3  -φ_2  -φ_1   1  ]      [  0          ]

Now if we multiply both sides of Equation (5.125) by n_{t-i}, take expectation, and let i go from 0 to o, we have

γ_n(0) - φ_1 γ_n(1) - φ_2 γ_n(2) - φ_3 γ_n(3) = γ_an(0) - θ_1 γ_an(1) - θ_2 γ_an(2)    (5.133)
-φ_1 γ_n(0) + (1 - φ_2) γ_n(1) - φ_3 γ_n(2) = -θ_1 γ_an(0) - θ_2 γ_an(1)    (5.134)
-φ_2 γ_n(0) - (φ_1 + φ_3) γ_n(1) + γ_n(2) = -θ_2 γ_an(0)    (5.135)
-φ_3 γ_n(0) - φ_2 γ_n(1) - φ_1 γ_n(2) + γ_n(3) = 0    (5.136)
···
-φ_3 γ_n(o-3) - φ_2 γ_n(o-2) - φ_1 γ_n(o-1) + γ_n(o) = 0    (5.137)

In matrix form, the above equations can be written as

[  1    -φ_1         -φ_2  -φ_3          ] [ γ_n(0) ]   [  1    -θ_1  -θ_2      ] [ γ_an(0) ]
[ -φ_1   1-φ_2       -φ_3                ] [ γ_n(1) ]   [ -θ_1  -θ_2            ] [ γ_an(1) ]
[ -φ_2  -(φ_1+φ_3)    1                  ] [ γ_n(2) ] = [ -θ_2                  ] [ γ_an(2) ]
[ -φ_3  -φ_2         -φ_1   1            ] [ γ_n(3) ]   [  0                    ] [ γ_an(3) ]
[               ···                      ] [  ···   ]   [        ···            ] [  ···    ]
[        -φ_3  -φ_2  -φ_1   1            ] [ γ_n(o) ]   [  0                    ] [ γ_an(o) ]
(5.138) and therefore we obtain the equation we are looking for as below 7n(0) " 1 -01 -02 -03 7n(l) -01 1 - 02 -03 0 7n(2) -02 -01 - 03 1 0 7n(3) -03 -02 -01 1 7n(o) . -03 -02 -01 1 — 1 1 -01 -02 1 -1 [ ^ 1 01 1 02 -01 1 03 -02 -01 1 0 -03 -02 -01 1 0 — 01 —02 -02 0 0 (5.139) A program was written in M A T L A B software to calculate the autocovariances of the two time series nt and Sj. This program which is called autocov.m is included in Appendix B. By using this program and choosing o equal 10 for the A R M A nt and o equal 5 for the 5.3. THE CONTROL INTERVAL 157 skipped A R M A , we obtain the autocovariances of the two A R M A s as follows: [ 7n(0) 1 " 1.1779 " 7n(l) 0.4557 7n(2) 0.1406 7n(3) 0.0252 7n(4) -0.0085 7n(5) = -0.0136 7n(6) -0.0108 7n(7) -0.0071 7n(8) -0.0043 7.(9) -0.0024 . 7n(10) . . -0.0013 . [ 7.(0) 1 " 1.1779 " 7.(1) 0.1406 7,(2) -0.0085 7.(3) -0.0108 7,(4) -0.0043 L 7,(5) J . -0.0013 . (5.140) Comparing the above listed autocovariances of the two time series nt and sj, we can claim that the identification of the skipped A R M A is correct. Now suppose that this A R M A nt disturbs a linear control system and the disturbed system can be written as below 5.0 1 - O . 8 2 - 1 +0.12z~ 2 , r i j i . Vt = ~, n „ g . - i " « - 3 + 1 _ l . 2 z - l + 0 , 4 7 ^ - 2 n n g - 3 a « ( 5 1 4 1 ) 1 - 0.452" - O.O62-3 The control loop currently has a control interval of 10 seconds and the pure transportation lag has 20 seconds. Now a control engineer wonders if it is feasible to control this loop at a control interval of 20 seconds. To answer the above question, we will make use of the above result. We can write 1 -0.8Z-1 +0.12z~2 nt = 1 - 1.2Z-1 +0.47^- 2 - 0 . 
0 6 z - 3 at (1 + OAz-1 +0.l3z~' + 0.0280 - 0.0371z -1 O.OO780-2 (5.142) z~3)at (5.143) 1 - 1.2Z-1 +0.47^-2 -O .O62- 3 and the variance of the output variable given by the minimum variance control law is a2y(t) = 1 + 0.42 + 0.132 (5.144) = 1.1769 (5.145) At the 20 second control interval, the disturbance A R M A nt will be skipped with a skipping parameter r = 2. By making use of the above result, we can say the disturbance at this control interval is ST which can be written as below: 1 - 0.3782z- r + O.OO970-2r 1 - 0 . 5 2 - r = (1 + 0.1218* + 0.07692-2'- - 0.0036z-3?-0.0063 - 0.00582" + + 0.0004z ~ - 2 r 1 - 0.52~ r + 0.07692 -2r 0.0036 ~ - 3 r (5.146) r2r)eT (5.147) 158 CHAPTER 5. CONTROL INTERVAL with <7g = 1.1605. At this control interval, the variance of the output variable given by the minimum variance controller is a2y(T) = 1.1605(1 + 0.12182) (5.148) = 1.1777 (5.149) We see that there is a slight increase in the output variable variance, but this increase is negligible. So we can conclude that it is feasible to control this control loop at the 20 second control interval. Now consider the case of the same linear system, but the disturbance time series this time is a moving average of the following form: nt = (1 -1.8Z'1 + 1.19z - 2 - 0.342z-3 + 0.03602~4K (5.150) with at of unit variance. In this case, the skipped A R M A is also a moving average. With the skipping parameter r = 2, this moving average time series will have second order. By using the program maid.m in Appendix B, we obtain this skipped A R M A as below: sT = (1 + 0.3588z~r + 0.0070z- 2 r )e r (5.151) with ex having a variance of a2 = 5.1156. 
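The autocovariance and skipping calculations above can be cross-checked with a short script. The following is a minimal Python/NumPy sketch (the thesis's own program autocov.m is in MATLAB; the truncation length m = 200 is an assumption chosen large enough for convergence). Instead of the matrix equation (5.139), it uses the equivalent ψ-weight expansion γ_n(k) = σ_a² Σ_i ψ_i ψ_{i+k}, and it confirms that the skipped series with r = 2 simply picks out every second autocovariance, γ_s(k) = γ_n(2k), as in (5.140).

```python
import numpy as np

# ARMA (5.142): (1 - 1.2 z^-1 + 0.47 z^-2 - 0.06 z^-3) n_t
#             = (1 - 0.8 z^-1 + 0.12 z^-2) a_t,  var(a_t) = 1
phi = np.array([1.2, -0.47, 0.06])   # phi_1, phi_2, phi_3
theta = np.array([0.8, -0.12])       # theta_1, theta_2

def psi_weights(phi, theta, m=200):
    """psi weights of the ARMA expansion n_t = sum_k psi_k a_{t-k}."""
    psi = np.zeros(m)
    psi[0] = 1.0
    for k in range(1, m):
        s = sum(phi[j] * psi[k - 1 - j] for j in range(len(phi)) if k - 1 - j >= 0)
        if k <= len(theta):
            s -= theta[k - 1]
        psi[k] = s
    return psi

psi = psi_weights(phi, theta)
gamma = lambda k: float(np.dot(psi[: len(psi) - k], psi[k:]))  # gamma_n(k), sigma_a^2 = 1

# Autocovariances, compare with (5.140)
print([round(gamma(k), 4) for k in range(4)])      # [1.1779, 0.4557, 0.1406, 0.0252]

# Skipping with r = 2: gamma_s(k) = gamma_n(2k)
print([round(gamma(2 * k), 4) for k in range(3)])  # [1.1779, 0.1406, -0.0085]

# Minimum variance at the fast interval, eqs (5.144)-(5.145)
print(round(1 + psi[1] ** 2 + psi[2] ** 2, 4))     # 1.1769
```

The same ψ weights give the minimum variance output directly, since the minimum variance controller cancels everything beyond the delay horizon.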
Now if the system is controlled at the interval of 10 seconds, then the output variable variance given by the minimum variance controller is

σ_y²(t) = 1 + 1.8² + 1.19²   (5.152)
        = 5.6561   (5.153)

And if the system is controlled at the interval of 20 seconds, then the output variable variance given by the minimum variance controller is

σ_y²(T) = 5.1156(1 + 0.3588²)   (5.154)
        = 5.7742   (5.155)

Now consider the same linear system but with the following second order autoregressive time series disturbance

n_t = 1/(1 − 1.5z⁻¹ + 0.56z⁻²) a_t   (5.156)

where a_t is of unit variance. This time series can be put into the following state-space model:

x_{t+1} = A x_t + b a_{t+1}   (5.157)
n_t = cᵀ x_t   (5.158)

with

A = [ 1.5  −0.56 ],  b = [ 1 ],  cᵀ = [ 1  0 ]   (5.159)
    [ 1     0    ]       [ 0 ]

The parameters of the skipped ARMA can be obtained as follows. By the Cayley-Hamilton theorem, A⁴ = φ₁′A² + φ₂′I, so, using the first columns of A⁴ and A²,

[ 1.69  1 ] [ φ₁′ ]   [ 1.5961 ]
[ 1.50  0 ] [ φ₂′ ] = [ 1.6950 ]   (5.160)-(5.162)

which gives

[ φ₁′ ]   [  1.1300 ]
[ φ₂′ ] = [ −0.3136 ]   (5.163)

so the autoregressive polynomial of the skipped ARMA is 1 − 1.13z⁻ʳ + 0.3136z⁻²ʳ. The moving average part w_T = s_T − 1.13 s_{T−1} + 0.3136 s_{T−2} = n_t − 1.13 n_{t−2} + 0.3136 n_{t−4} then has the autocovariances

γ_w(0) = 3.5636,  γ_w(1) = 0.56   (5.164)-(5.165)

Using the program ma_id.m, we obtain the following results:

w_T = (1 + 0.1612z⁻ʳ) e_T   (5.166)-(5.168)

with σ_e² = 3.4733. The skipped ARMA hence has the following model:

s_T = (1 + 0.1612z⁻ʳ)/(1 − 1.13z⁻ʳ + 0.3136z⁻²ʳ) e_T   (5.169)

This result has been checked by comparing the autocovariances of the two time series s_T and n_t. As before, we can compare the loop performances by checking the minimum variances given by the minimum variance controllers for both cases of 10 and 20 second control intervals. We can write

n_t = 1/(1 − 1.5z⁻¹ + 0.56z⁻²) a_t   (5.170)
    = (1 + 1.5z⁻¹ + 1.69z⁻²) a_t + z⁻³(1.6950 − 0.9464z⁻¹)/(1 − 1.5z⁻¹ + 0.56z⁻²) a_t   (5.171)

and

s_T = (1 + 0.1612z⁻ʳ)/(1 − 1.13z⁻ʳ + 0.3136z⁻²ʳ) e_T   (5.172)
    = (1 + 1.2912z⁻ʳ) e_T + z⁻²ʳ(1.1455 − 0.4049z⁻ʳ)/(1 − 1.13z⁻ʳ + 0.3136z⁻²ʳ) e_T   (5.173)

At the control interval of 10 seconds, the output variable variance given by the minimum variance controller is

σ_y²(t) = 1 + 1.5² + 1.69²   (5.174)
        = 6.1061   (5.175)

Now if the system is controlled at the interval of 20 seconds, then the output variable variance given by the minimum variance controller is

σ_y²(T) = 3.4733(1 + 1.2912²)   (5.176)
        = 9.2639   (5.177)

Comparing the degradations in feedback performance of the three cases above, we can summarize them as follows. In the first case, the increase in the output variable variance is less than 0.1%. In the second case, the increase is around 2%. In the third case, the increase in the variance is more than 50%, so it is not feasible to control more slowly. The difference between the third case and the other two is that in the third case the roots of the autoregressive polynomial are big (0.7 and 0.8). This means stronger serial correlation, so skipping even one observation means skipping too much correlation. The result is that the feedback controller corresponding to the skipped ARMA does not remove enough correlation from the disturbance and hence gives a higher variance for the output variable.

5.4 Conclusion

In this chapter, we have discussed the problem of determining the optimal control interval. The word optimal normally means smaller variance in stochastic control, and this criterion has spawned research on the topic. The work in this thesis on this topic is not original in the sense that it only improves an existing work. However, the method discussed in this chapter provides an elegant way to attack the problem. This elegant way not only solves the problem of the optimal control interval but also solves similar problems such as modelling an aggregate ARIMA.

Chapter 6
Conclusion and Recommendations

6.1 Conclusion

In this thesis, we have discussed a few fundamental questions concerning the application of the Box-Jenkins model to control problems. Even though the discussion was about the Box-Jenkins model, the theory can also be applied to the tracking control problem. The work in this thesis and its contributions can be separated into three areas. These areas contain questions a control engineer usually faces in his or her working environment. Is the control interval or frequency right? What is the model of the system? Is a PID controller sufficient? Is a constraint on the variance of the input variable necessary? Is self-tuning necessary? These questions were answered adequately in simple terms.

6.2 Summary of the Thesis

In summary, the thesis has made contributions in three areas: identification, control algorithms and control interval. The contributions are summarized as follows:

• Chapter 3 discusses identification. Identification of the rational transfer function is discussed in Section 3.3. The sum of squares of the disturbance is differentiated and the derivatives are set to zero. This gives r + s + 1 equations: s equations are used in the Newton-Raphson iteration equation to obtain the parameters of the pole polynomial; r + 1 equations are used to calculate the parameters of the transmission zero polynomial. The identification of an ARIMA is discussed in Section 3.4. There are two ways to identify it. The first way is to write it in the rational transfer function form and identify its parameters. In the second way, the sum of the squares of the white noise is differentiated and the derivatives are set to zero. This gives p + q equations: q equations are used in the Newton-Raphson iteration equation to obtain
the parameters of the moving average polynomial; p equations are used to calculate the parameters of the autoregressive polynomial. The combined identification is discussed in Section 3.5. There are a total of s + q equations in the Newton-Raphson iteration equation; r + p + 1 equations are used to calculate the parameters of the transmission zero and autoregressive polynomials.

• Chapter 4 discusses the controllers. The PID controller is discussed in Section 4.4. The minimum variance and LQG PID controller gains are obtained by taking the derivatives of the control criteria containing the variances of the input and output time series and setting them to zero. A Newton-Raphson equation is used to iterate from an initial estimate of the controller gains until convergence. The initial estimates are obtained by a crude but systematic way of varying the poles until the control criteria are achieved. The recursive least determinant self-tuning algorithm is discussed in Section 4.5. The algorithm tunes itself to the minimum variance condition by driving the determinant of a positive definite matrix to a minimum. This matrix contains only current and past input and output variable data.

• Chapter 5 discusses the control interval. The optimal control interval is discussed in Section 5.3. The question of whether a control loop can be controlled less frequently is answered by obtaining the parameters of a skipped ARIMA and computing the theoretical minimum variance under feedback. The control loop can be controlled less frequently if there is not much degradation in performance, i.e. no great increase in the variance of the output variable under feedback.

6.3 Recommendations

6.3.1 The Nonlinear Stochastic Control System

Linear control theory has been well developed and understood. However, many control systems, especially chemical engineering systems, are nonlinear in nature. So a nonlinear controller must be developed.
The problem with nonlinear control is that its theory is not unified. Unlike linear control theory, where the principle of superposition applies and the system can be sufficiently described by a rational function of polynomials, the nonlinear control system has not been well described even mathematically. For industrial applications, the following simple nonlinear model can be used:

y_t = Σ_{i=0}^{k} Σ_{j=0}^{l} ω_ij u^j_{t−f−i−1}   (6.1)

With an ARIMA disturbance, the nonlinear stochastic control system would have the following model:

y_t = Σ_{i=0}^{k} Σ_{j=0}^{l} ω_ij u^j_{t−f−i−1} + n_t   (6.2)

with n_t = θ(z⁻¹)/φ(z⁻¹) a_t. This means a nonlinear stochastic control system consists of two parts. The transfer function relates the input and output variables through a nonlinear relationship that can be described by a sum of truncated power series of past input variables. The disturbance can always be described by an ARIMA, which is a linear combination of an uncorrelated sequence. The concept of using power series to describe nonlinear systems is not a wild idea; it was proposed by Wadel, L. B. (1962). Identification of the above model can be done via linear least squares estimation. The minimum variance nonlinear stochastic controller can be obtained in a fashion similar to the minimum variance controller of a Box-Jenkins model. However, the control action of the nonlinear controller is not unique. But this can be solved easily: we can always set a strategy to obtain the control action; it must be real and closest to the previous control action.

6.3.2 The Self Correcting Controller

The problem with the recursive least squares minimum variance self tuning controller is the determination of the delay parameter f and the controller orders m, n. The recursive least determinant self tuning controller gives us an advantage, because we do not need to know the delay of the system. However, we still need to know the orders m, n of the controller. This is the second suggestion of this thesis.
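Since the truncated power-series model above is linear in the parameters ω_ij, its identification reduces to ordinary linear regression on powers of lagged inputs. The following Python sketch illustrates the idea on a small simulated example; the orders (k = 1, l = 2, f = 0), the coefficient values and the noise level are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative polynomial model: y_t = sum over lags i=0,1 and powers j=1,2
# of w_ij * u_{t-i-1}^j, plus white noise a_t
true_w = np.array([0.8, 0.3, -0.5, 0.1])   # (i=0,j=1), (i=0,j=2), (i=1,j=1), (i=1,j=2)
u = rng.uniform(-1, 1, 500)
a = 0.05 * rng.standard_normal(500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = (true_w[0] * u[t-1] + true_w[1] * u[t-1]**2
            + true_w[2] * u[t-2] + true_w[3] * u[t-2]**2 + a[t])

# Regressor matrix: one column per (lag i, power j) pair -> linear least squares
X = np.column_stack([u[1:-1], u[1:-1]**2, u[:-2], u[:-2]**2])
w_hat, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
print(np.round(w_hat, 2))   # close to [0.8, 0.3, -0.5, 0.1]
```

The nonlinearity lives entirely in the regressors, so the estimation itself stays linear, which is what makes the model attractive for industrial use.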
An intelligent scheme is needed to detect the wrong estimation of these two parameters. Once this scheme or algorithm is found, we can have what control engineers have been looking for for a long time: a (linear) controller that can correct itself. The problem of determining the right orders for the controller on line has been studied by a few authors. Kotzev, A. (1992) designed an algorithm called MOD (Model Order Determination) and applied it to an excavator. Kotzev's work, however, is inefficient in the sense that it did not use a general model like the Box-Jenkins one, and no useful conclusion can be drawn from the work. The analysis is mainly based on a cost function defined as the sum of squares of the errors, the error being the difference between the measured output variable y_{N+f+1} and its estimate x_Nᵀβ_N. In her own words, the conclusion of the study was: "for correct modeling, the cost function rises initially (when the error goes to zero), and then settles on a constant value. When under-modeled, the cost function's initial rise is steeper, going to much higher values and leading to instabilities. When over-modeled, the behavior is much more moderate, but the performance deteriorates and the system can go unstable." We are not going to dispute this conclusion except to say that it will not help us with our problem of determining the right orders for the controller. Kotzev's thesis also mentioned the work of Niu, S. et al. (1992), who introduced an algorithm called AUDI (Augmented Upper Diagonal Identification). This work is actually an expansion of Bierman's UD factorization algorithm. It is more on the side of identification than an ingenious way to determine the correct orders for the controller. However, since it is claimed that it can identify a number of different parameter vectors at the same time, the algorithm could be tried to determine the correct orders for the controller.
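The behaviour behind such order-determination schemes, a residual cost that falls sharply until the true order is reached and then levels off, can be illustrated with a simple off-line experiment. The Python sketch below is only an illustration of that idea using batch least squares on a simulated AR process; it is neither the MOD nor the AUDI algorithm, and the simulated system is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(2) process: y_t = 1.5 y_{t-1} - 0.56 y_{t-2} + a_t
n = 2000
y = np.zeros(n)
a = rng.standard_normal(n)
for t in range(2, n):
    y[t] = 1.5 * y[t-1] - 0.56 * y[t-2] + a[t]

def residual_variance(y, order):
    """Least-squares AR fit of the given order; returns the residual variance."""
    n = len(y)
    # column k holds the series at lag k+1
    X = np.column_stack([y[order - k - 1 : n - k - 1] for k in range(order)])
    target = y[order:]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ coef
    return resid.var()

# Residual variance drops until the true order (2) is reached, then levels off
rv = [residual_variance(y, m) for m in range(1, 6)]
print(np.round(rv, 3))
```

An on-line scheme would watch the same quantity recursively and flag an order change when the plateau shifts.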
The self-correcting controller concept has not been suggested before because of the weakness of the recursive least squares approach to the self-tuning algorithm. At each control interval, the algorithm has to store the last controller parameter vector β_{N−1} and the variance matrix P_{N−1}. If the parameter m or n changes, we have to start again, because we do not have these quantities with the new dimensions from the last control interval. Therefore, it is very cumbersome to administer changes. In a recursive least determinant self tuning environment, the story is different. What the control algorithm stores is only two vectors of values of u_t and y_t; therefore administration of changes in the parameters m or n is much easier. This fact makes the self correcting controller feasible.

Nomenclature

ω(z⁻¹)  Transmission zero polynomial
δ(z⁻¹)  Pole polynomial
θ(z⁻¹)  Moving average polynomial
φ(z⁻¹)  Autoregressive polynomial
r       Degree of the transmission zero polynomial
s       Degree of the pole polynomial
p       Degree of the autoregressive polynomial
q       Degree of the moving average polynomial
f       The pure delay of the process in the Box-Jenkins model
u_t     The input variable
y_t     The output variable
k_p     The proportional gain of a PID controller
k_i     The integral gain of a PID controller
k_d     The derivative gain of a PID controller
λ       The Lagrange multiplier or penalty constant in the control criterion
m       The number of past input variables the controller remembers
n       The number of past output variables the controller remembers

Greek Symbols
σ       The standard deviation
σ²      The variance

Mathematical Operators
E       Expectation operator
∇       Differencing operator
∂       Partial derivative operator
Min     Minimum value of a positive quantity
μ(·)    Eigenvalue of a matrix
tr      Trace of a matrix. Sum of the diagonal elements of a square matrix
| |     Determinant of a matrix. Product of the eigenvalues of a square matrix
Adj     The adjoint of a square matrix
z⁻¹     Backward shift operator

Superscripts
'       Derivative
T       Matrix transpose
−1      Matrix inverse

Subscripts
min     Minimum value
mv      Minimum variance
N       Sequence length or time N
t       Time t

Overstrikes
ˆ       Optimal or estimate

Acronyms
ARIMA   AutoRegressive Integrated Moving Average
ARMA    AutoRegressive Moving Average
ARMAX   AutoRegressive Moving Average with an eXogenous input
AUDI    Augmented Upper Diagonal Identification
DCS     Distributed Control System
IMC     Internal Model Control
LQG     Linear Quadratic Gaussian
MOD     Model Order Determination
PID     Proportional Integral Derivative
PI      Proportional Integral
PD      Proportional Derivative
PRBS    Pseudo Random Binary Signal
RLD     Recursive Least Determinant
RLS     Recursive Least Squares
UD      Upper Diagonal

Bold Face
Capital character   A matrix
Regular character   A vector

Appendices

A. Mathematical Results

In this appendix, we will present proofs of some results that have been used in the thesis but not proved there, because they might be obvious to some readers.

Recursive Least Squares

We consider the least squares problem

min_β Σ_t (y_t − x_tᵀβ)²   (A.1)

and its solution

β = [XᵀX]⁻¹Xᵀy   (A.2)

If we have N − 1 observations, we can subscript the parameter vector with N − 1 to indicate that it has been estimated with N − 1 observations, or that it has been estimated at time N − 1, and we write

β_{N−1} = [X_{N−1}ᵀX_{N−1}]⁻¹ X_{N−1}ᵀ y_{N−1}   (A.3)

where

X_{N−1} = [x_1ᵀ; x_2ᵀ; ⋮; x_{N−1}ᵀ],   y_{N−1} = [y_{f+2}; y_{f+3}; ⋮; y_{N+f}]   (A.4)

Now when observation N is available, we want to find the relationship between β_N and β_{N−1}. We can proceed as follows.
β_N = [X_NᵀX_N]⁻¹ X_Nᵀ y_N   (A.5)
    = [X_{N−1}ᵀX_{N−1} + x_N x_Nᵀ]⁻¹ [X_{N−1}ᵀ  x_N] [y_{N−1}; y_{N+f+1}]   (A.6)
    = [X_{N−1}ᵀX_{N−1} + x_N x_Nᵀ]⁻¹ [X_{N−1}ᵀ y_{N−1} + x_N y_{N+f+1}]   (A.7)

Using the Sherman-Morrison formula, we can write

P_N = [X_NᵀX_N]⁻¹ = [X_{N−1}ᵀX_{N−1} + x_N x_Nᵀ]⁻¹   (A.8)
    = [X_{N−1}ᵀX_{N−1}]⁻¹ − ([X_{N−1}ᵀX_{N−1}]⁻¹ x_N x_Nᵀ [X_{N−1}ᵀX_{N−1}]⁻¹) / (1 + x_Nᵀ [X_{N−1}ᵀX_{N−1}]⁻¹ x_N)   (A.9)
    = P_{N−1} − (P_{N−1} x_N x_Nᵀ P_{N−1}) / (1 + x_Nᵀ P_{N−1} x_N)   (A.10)

and therefore

β_N = [P_{N−1} − (P_{N−1} x_N x_Nᵀ P_{N−1}) / (1 + x_Nᵀ P_{N−1} x_N)] [X_{N−1}ᵀ y_{N−1} + x_N y_{N+f+1}]   (A.11)
    = β_{N−1} − (P_{N−1} x_N x_Nᵀ β_{N−1}) / (1 + x_Nᵀ P_{N−1} x_N)
      + P_{N−1} x_N y_{N+f+1} − (P_{N−1} x_N x_Nᵀ P_{N−1} x_N y_{N+f+1}) / (1 + x_Nᵀ P_{N−1} x_N)   (A.12)-(A.14)

Now if we define

K_N = P_{N−1} x_N / (1 + x_Nᵀ P_{N−1} x_N)   (A.15)-(A.16)

then the parameter vector β_N is given as

β_N = β_{N−1} − K_N x_Nᵀ β_{N−1} + K_N y_{N+f+1}   (A.17)-(A.18)
    = β_{N−1} + K_N [y_{N+f+1} − x_Nᵀ β_{N−1}]   (A.19)

The above equation gives the estimated parameter vector β_N in a recursive form. This form has many advantages. The first advantage is that the problem of a matrix inversion disappears. The second advantage is that the estimated parameter vector is given recursively, so we can see the effect of each new observation on it. This is very convenient for process control, because the parameter vector is updated in real time, and the new parameter vector is calculated with only a small amount of computation. In general, any estimation method which gives the estimated parameter vector as a product of a matrix inverse and a vector can give its solution in a recursive form.

Derivatives of u, v and w

In this section, we will give the detailed expressions for the derivatives of the quantities u, v and w used in the procurement of the PID controller gains.
Since the controller gain Ij appears in the coefficients f3ks, but not aks, the derivatives of u and v are given as below. u k=\ ft ft Lft-i +2? [T r ft 1 \ & 1 ft ft . ft-i. . ft-i. -2 2 fc=i E L i ftft+i Efc=i ftft+fc-, ftft - i - l ft ft . ft-£fe=i ftft+i E L l ftft+A;-; ftft [r -n' ft ft ft-i - 2 E L i ftft+i E J L i ftfti+/c_; ftft ^ T i - l (A.20) ft ft L ft-i (A.21) 172 APPENDIX A The derivatives of the individual components in the above equations are as follows: - E £ = i <Xk/3'k+i + ak+i/3'k E L l akfin+k-l + &n+k-lfik -<XlP'n - Un&'X (A.22) The derivative of an inverse of a square matrix is no difficult task for us. To find the expression for the derivative of an inverse, we write r _ 1 r = i and take the derivative of both sides of the above equation to obtain [r-1]'r + r - 1 r ' = o (A.23) (A.24) and therefore where [r-1]' = - r - 1 r ' r - 1 r = fi2 fis 03 04 fin 0 fin 0 0 0 0 fix 0 fin-2 ... o ... o fix u j (A.25) (A.26) Since we are going to take the second order derivatives, it is better for us to write the derivative of u with respect to /; in a clearer form. The derivative is given as below: du 9 ^ dft dh ™ _ i dpk+x d(3k E f c = 1 " " - f t - + a*+i-ai7 + an+k-i QX dh d(3n ' d/3x dh dh dh ft 02 LA n-l APPENDIX A 173 Efc=i akOSk+i - O f c f t + i - otk+iPk Y!k=l CtkCtn+k-l — Ctkf3n+k-l — CYn+k-lflk a-i_an - Ctifin - QLnPi YX=\ ak®k+i - ctkf3k+i - a>k+if3k +2 I Efcrri ctkan+k-i - ctkfln+k-i - Ctn+k-l/3k « l « n - CtlPn — « n f t dh 5ft ft ft ft-i J dh 5ft 5/ t d f t - i 5L (A.27) Before taking the second order derivative of u, we should remind ourselves that the controller gains exist only in the coefficients fts as linear functions. Therefore, a double differentiation on these coefficients will result in zeros. Secondly, the rules of differentiation of matrix and vector products are just the same as those for scalars. We have done this in the chapter of identification. 
We have d2u dljdk - 2 ™ _ i n 5 f t + 1 5ft dh dh 5ft +k-l dh + OLn+k-l 5ft dh 5ft , 5ft 5/, 5/, , „_! 5 f t + 1 , _ 5ft E Z = i « * - 5L 5/, df3n+k-l dh 5ft ' 5/, 5ft 5/4 + a , 5ft • dh r 1 ft ft L ft-i 5ft 5/, 5ft dlj 5ft - ! 5/, 174 APPENDIX A _ n _ ! dh+i d h + 2 E f c = l oik d h +k-i dh + Ctn+k-l dh dh dh , dh dh dlj dh h h 0 n - 1 + 2 + 2 E f c = l Ctk®n+k-l — Ctkh+k-l — Cin+k-lh a r a n - a>ih - a n h E f c = l QkO!k+l - otkh+i ~ otk+ih Y!k=l ak&n+k-l - ctkh+k-i - an+k-ih a x a n - a x h - ctnh E £ = i O f c O / c + i - akh+i - otk+ih E f c = i akan+k-i — ctkh+k-i — a n + k - i h a x a n - a ^ h - &nh dh+i . n d h r - i ^ r - i ^ E r - 1 dlj dk r-i^Er- 1—r- 1 3/,- 5/ j ft 0 2 L 0 n - l J 0 1 0 2 L 0 n - l aft dh dh-i dh dh - 2 E j t = i <*k d h +k-i dh + a n + k - i dh dh dh dh , dh olj dlj d h dh dh dh-i dh APPENDIX A 175 - 2 Efc=l CtkCtk+l - "fcft+l - Ofc+lft J2k=l akO!n+k-l — CtkPn+k-l — Ctn+k-lflk - i 5 £ dh r1 5 f t dh 5 f t dh dh (A.28) In a similar fashion, we can write the derivative of v with respect to /, as follows: dv dh - 2 £ f t ^ - 2 k=i dh R d(3k+l d(3n+k-l + ft+l 5ft dh dh n+A; —/ 5ft 5/,-5 f t , fl 5 / 3 : +2 ULri ftft+i EL=X ftft+/t-/ ftft Efc=i ftft+i EL=1 ftft+i-/ ftft 1 5L 5 / 4 ^ 5 / , ft ft L ft-i J r - i 5 f t a/,-5 f t dk 5f t - r 5L ft ft ft-i (A.29) 176 APPENDIX A and then take the derivative of this quantity with respect to Ij to obtain d2v dljdk h ®i dh ! 5 f t 5 f t + i 5 f t + 1 8f3k - T ' jk—1 f <~w ~~r~ dlj dh dlj dh . d/3k 5f t + f c _; d(3n+k-t 5ft dlj dh dlj dh + 2 „ n _ l a dPk+1 R 5ft Efc=i Pk—ET, r Pk+\ d^1mL + d%d£1 dlj dh dlj dh T E L i ft dh dftn+k-l dh + ft. +k-r dh 5ft dh o ^ n djh ' J l ^ h + ( J n d h „ „ _ ! 
a 5ft+i 5ft dh Pi ft ft-i E U I P> dftn+k-l 5ft dh + 2 h dh + Pn dh „ n _ i fl 5 f t + x 5ft z^fc=i P ^ - ^ r Pk+i 5ft 5L 5ft dh 5ft_i 5L 5/, E f c = 1 ft-^7— + 0 5/, 5ft 5/, ft^ + ft-^1 5L 5/, dh Pi ft ft-i i - i ft ft Lf t - i APPENDIX A 111 Efc=l ftft+1 E L l PkPn+k-l ftft r - i 5 r r - i a r r _ i dlj dh + 2 - 2 E L l ftft+i E l = l ftft+fc-/ ftft E L ! ftft+i Efe=l Pkfin+k-l ftft -1 a 5ft+i r - i — r - i — r - i dh dlj ft ft ft-i ft ft L ft-i aft dh 5ft dh 5 f t - i 50 E L i # EL I ft 50 5ft+fc_; 50 + ft+i 5ft + ft+A;-50 5ft ' 5 0 B 5f t 5ft ^ 5 0 + ^ 5 0 5ft 5/8 5 ^ 5/, 5 f t_ i 5/,-+ 2 E L l ftft+i El=i ftft+fc-. ftft r - i 5 T 5/, 5ft 5/2 5ft 5/, 5f t_ i 5/i (A.30) The first and second order derivatives of the quantity w are given below. dw = 7 r > „ — — 4- 7 N ,=1 ^ o 57?o L 0 A 5 r / f c 5 ^ 0 5j?o f l 0 5ft , 5/, 178 APPENDIX A +2 $ c) d T £ L i Q^VkVk+i - QJ-VkVoPk+i - Tjj-rik+iVoPk • 5 5 d Efc=l -^rVkVn+k-l — -QJ-VkVoPn+k-l ~ -^fVn+k-lVoh dh +2 9 9 R 9 R E L i yMk+i - ijkVoPk+i - Vk+ivoPk E L i VkVn+k-i - r/kr]of3n+k-i - Vn+k-iVoPk E L i VkVk+i - r}krjo/3k+1 - r^+ir/oft E L l VkVn+k-l - T]k1]of3n+k-l ~ Vn+k-lVoPk i - l ft ft ft-1 dh aft ft ft ft-i r-1 dh 5ft dh 5ft-j dh (A.31) Because the parameter rjks are linear functions of the controller gains, by taking the derivative of the above equation with respect to 0, we have d2w dljdk 57/o drjo A dr\k dr\k drj0 5ft dr]k dnk drj0 5ft %m~+ 2 £ 5l7( 50 " 2 50 & " 2 ? ? 0 50 } + 50 ( 5/~" 2 5/7^ ~ 2 7 7 0 a2 52 52 l T E L ! 
a T a T ^ ' / H l - aTar^VoPk+l ~ aT^Vk+iVoPk + 2 dLdh dljdh dljdh Ei=i 52 dljdh Vk^n+k-l ~^VkVoh+k-i ~ J-^Vn+k-tVofc 52 dljdh VlVn ~ 52 505/t-r/i??oft -52 dLdh ??n??oft -.-1 ft ft ft-] APPENDIX A 179 - 2 + 2 d d d E £ = i -QfVkVk+i - QJ-rikVoPk+i ~ QJ-Jlk+irioPk w 9 d ' 5 J2k=l QJ-VkVn+k-l - Q^VkVoPn+k-l ~ -Qj-Vn+k-lVoPk 9 9 R 9 R d d d YlZl -Qj-VkVk+i - gj-VkVoPk+i - -Tjj-Vk+lVoPk 5 d d J2k=l -Qi'VkVn+k-l — -Q^VkVoPn+k-l ~ -Q^Vn+k-lVoPk 9 9 R 9 R dh ft 0 2 0 n - l i - l 5 0 1 dh dh 50 n-l di3 5 5 E L i ^rVkVk+i - Q^-VkVoPk+i - Mryk+ivoPk dh 5 5 5 / , 5 Efc=i -^-VkVn+k-l — T^rVkVoPn+k-l ~ -Q^Vn+k-lVoPk dl dh 9 9 R 9 R VlVn ~ -KTVlVoPn ~ -^rVnVoPl dh dh dh dh 0 1 0 2 0 n - : + 2 + 2 Efe=i Wfc+i - VkVoPk+i ~ Vk+iVoPk Efe=l VkVn+k-l - VkVoPn+k-l ~ Vn+k-lVoPk VlVn ~ VlVoPn ~ rinVoPl Efc=l ?Mfc+l - VkVoPk+l ~ r/k+lTjofSk E L l VkVn+k-l - T]krioPn+k-l ~ Vn+k-lVofik VlVn - mVoPn - VnVoPl T dlj dh dh dlj 0 1 0 2 0 „ - l 0 1 0 2 L 0 n - l 180 APPENDIX A E L i VkVk+i - VkVoPk+i - Vk+iVoPk E L l VkVn+k-l - VkVoPn+k-l ~ r?n+fc-/?7oft -, T dh d d d n T E L i Qj-Vkm+i - Qj-Vk'ooPk+i - Qi-rik+i'noPk i d d d Efc=l -WrVkVn+k-l — -KTVkVoPn+k-l ~ ^-Vn+k-lVoPk dh dlj dh 9 9 R 9 R dlj dh dh dh dlj dlj dh dh 1 - 1 dh dh dh dh-i dh E L i VkVk+i - VkVoh+i - Vk+ivoh E L l VkVn+k-l - VkVoh+k-l - Vn+k-lVoh ViVn - ViVoh - VnVoh dij dh dh djk dh dh-i dh Eigenvalues of Sum of 2 Symmetric Matrices We consider the following matrix relation: C = A + B (A.32) (A.33) where A and B are symmetric of dimension n. Denote 7,-, a,- and h a s t h e corresponding eigenvalues of the matrices C , A and B arranged in a nonincreasing order for the three sets. 
By the minimax theorem, we have

γ_s = min_z max_x (xᵀCx)   (A.34)
xᵀx = 1;  z_iᵀx = 0  (i = 1, 2, …, s−1)   (A.35)

Hence, if we take any particular set of z_i, we have for all the corresponding x

γ_s ≤ max (xᵀCx) = max (xᵀAx + xᵀBx)   (A.36)

If R is the orthogonal matrix such that

RᵀAR = diag(α_i)   (A.37)

then if we take z_i = Re_i, the relations to be satisfied are, with y = Rᵀx,

z_iᵀx = e_iᵀy = 0  (i = 1, 2, …, s−1)   (A.38)

With this choice of z_i, the first (s − 1) components of y are zeros, and from Equation (A.36) we have

γ_s ≤ max (Σ_{i=s}^{n} α_i y_i² + xᵀBx)   (A.39)

However,

Σ_{i=s}^{n} α_i y_i² ≤ α_s   (A.40)

while

xᵀBx ≤ β₁   (A.41)

for any x; β₁ is the largest eigenvalue of the matrix B. Therefore, the expression in the brackets of Equation (A.39) is not greater than α_s + β₁. That means

γ_s ≤ α_s + β₁   (A.42)

Now applying the same result to the case of the matrix relation

A = C + (−B)   (A.43)

with the eigenvalues of −B in nonincreasing order as −β_n, −β_{n−1}, …, −β₁, we obtain

α_s ≤ γ_s + (−β_n)   (A.44)

or

γ_s ≥ α_s + β_n   (A.45)

The above eigenvalue relations state that when B is added to A, all the eigenvalues of the latter are shifted by an amount between the smallest and largest eigenvalues of the former.

Eigenvalues of Sum of a Symmetric Matrix and a Rank Unity Symmetric Matrix

Consider the following matrix relation:

C = A + B   (A.46)

where A and B are symmetric of dimension n and B is of rank unity. In this case, there exists an orthogonal matrix R such that

RᵀBR = [ ρ  0 ]   (A.47)
       [ 0  0 ]

where ρ is the unique nonzero eigenvalue of B. If we write

RᵀAR = [ α   aᵀ      ]   (A.48)
       [ a   A_{n−1} ]

then there is an orthogonal matrix S of order n − 1 such that

SᵀA_{n−1}S = diag(λ_i)   (A.49)

Now if we define Q by the relation

Q = R [ 1  0 ]   (A.50)
      [ 0  S ]

then Q is orthogonal and

Qᵀ(A + B)Q = [ α   bᵀ        ] + [ ρ  0 ]   (A.51)
             [ b   diag(λ_i) ]   [ 0  0 ]

where

b = Sᵀa   (A.52)
The eigenvalues of A and of A + B are therefore those of

[ α   bᵀ        ]   and   [ α+ρ  bᵀ        ]
[ b   diag(λ_i) ]         [ b    diag(λ_i) ]

Now if we denote these eigenvalues μ_i(A) and μ_i(A + B) in decreasing order, then they satisfy the following relation:

μ_i(A + B) = μ_i(A) + m_i ρ   (A.53)

where 0 ≤ m_i ≤ 1 and Σ m_i = 1.

B. Computer Programs

In this section, we will include the programs written to verify the theory presented in the thesis.

function [omega,delta,varnt]=tf_id(yt,ut,f,r,s,ndel);
%
% Identification algorithm for B-J transfer function model.
%
% Calling sequence:
%
% function [omega,delta,varnt]=tf_id(yt,ut,f,r,s,ndel);
%
% Input variables:
% yt   : The output variable column vector.
% ut   : The input variable column vector.
% f    : The pure delay of the process. No delay means f=0.
% r    : The number of transmission zeros.
% s    : The number of poles.
% ndel : The initial estimate of the poles parameter vector.
%
% Output variables:
% omega : The column vector of the process gain.
% delta : The column vector of the time constant.
% varnt : Variance of the disturbance time series.
%
% Written on May 15th 1996 by Ky M. Vu
%
nob=length(yt);
for i=1:nob-f-r-1
  y(i,1)=yt(nob+1-i,1);
  for j=1:r+1
    u(i,j)=ut(nob+1-f-i-j,1);
  end;
end;
esp=1; lesp=1; iter=1;
%
% Start iterating
% C a l c u l a t e the optimal omega vector f o r i = l : n o b - f - r - l f o r j = l : n o b - f - r - l i f (j==i) m p s i ( i , j ) = l ; e l s e i f (j>i) m p s i ( i , j ) = p s i ( j - i , l ) ; e l s e mpsi(i,j)=0; end; end; end; omg= inv(u'*mps i'*mps i *u)*u'*mps i ' * y ; y. % C a l c u l a t e the i d e n t i f i c a t i o n equations y. f o r i = l : s f o r k=l:nob-f-r-2 sum=0; i f (k<i) dpsi(k,i)=0; APPENDIX B e l s e i f (k==i) d p s i ( k , i ) = l ; e l s e f o r l = l : k - l i f (l>=k-s) sum=sum+dpsi(l,i)*del(k-l,l); end; i f (l==k-i) sum=sum+psi(1,1); end; end; dpsi(k,i)=sum; end; end; % f o r k = l : n o b - f - r - l f o r 1=1:nob-f-r-l i f (K=k) d i p s i ( k , l ) = 0 ; e l s e dipsi(k,1 )=dpsi ( 1-k,i); end; end; end; domg(:,i)=inv(u ,*mpsi ,*mpsi*u)*... (u'*dipsi'-(u'*dipsi'*mpsi*u+u'*mpsi'*dipsi*u)* inv(u ;*mpsi'*mpsi*u)*u'*mpsi')*y; g(i,l)=(dipsi*u*omg+mpsi*u*domg(:,i))'*(y-mpsi*u*omg); end; I % C a l c u l a t e the d e r i v a t i v e matrix f o r i = l : s f o r k = l : n o b - f - r - l f o r 1=1:nob-f-r-l i f (K=k) d i p s i ( k , l ) = 0 ; e l s e d i p s i ( k , l ) = d p s i ( l - k , i ) ; end; APPENDIX B end; end; C a l c u l a t e second d e r i v a t i v e s f o r j = l : s f o r k=l:nob-f-r - 2 sum=0; i f (k<i+j) d2ps i(k,1)=0 ; e l s e i f (k==i+j) d 2 p s i(k,l ) = 2 ; e l s e f o r l = l : k - l i f (l>=k-s) sum=sum+d2ps i (1,l)*del(k-l,1) ; end; i f (l==k-i) sum=sum+dpsi(l,j); end; i f (l==k-j) sum=sum+dpsi(l,i); end; end; d2ps i(k,l)=sum; end; end; f o r k = l : n o b - f - r - l f o r 1=1:nob-f-r-l i f (K=k) d j p s i ( k , l ) = 0 ; d i j p s i ( k , l ) = 0 ; e l s e d j p s i ( k , l ) = d p s i ( l - k , j ) ; d i j p s i ( k , l ) = d 2 p s i ( l - k , l ) ; end; end; end; C a l c u l a t e the Jacobi d e r i v a t i v e matrix APPENDIX B 187 % d2omg=inv(u'*mpsi'*mpsi*u)*(u'*dijpsi ;*y-... (u'*dijpsi'*mpsi*u+u'*dipsi'*djpsi*u+... u'*djpsi'*dipsi*u+u'*mpsi'*dijpsi*u)*omg-... 
(u' *dipsi'*mpsi*u+u'*mpsi'*dipsi*u)*domg(:,j)-... (u'*djpsi ;*mpsi*u+u )*mpsi'*djpsi*u)*domg(:,i)); dg(i,j)=(dijpsi*u*omg+dipsi*u*domg(:,j)+... djpsi*u*domg(:,i)+mpsi*u*d2omg)'*(y-mpsi*u*omg)-... (dipsi*u*omg+mpsi*u*domg(:,i))'*... (djpsi*u*omg+mpsi*u*domg(:,j)); end; end; % °/„ Modified Newton-Raphson i t e r a t i o n esp=g'*g/s; i f (esp<=lesp Iiter==l) l d e l = d e l ; ddel=-dg\g; sdel=ddel; lesp=esp; i t e r = i t e r + l ; i t e r esp d e l ' omg' el s e ddel=ddel / 2 ; sqddel=ddel ;*ddel/s; i f (sqddel<=1.0e-10) rand('uniform'); f o r i = l : s d d e l ( i J l ) = r a n d * s d e l ( i , l ) ; end; end; i t e r [lesp esp] end; ndel=ldel+ddel; 188 APPENDIX B end; I % Get f i n a l r e s u l t s I omega=omg; delta=del; x=y-mpsi*u*omg; varnt=x'*x/(nob-f-r-l); f u n c t i o n [phi,theta,varat]=arma_id(nt,p,q,nthe); y. % I d e n t i f i c a t i o n algorithm f o r an ARMA time s e r i e s model. % % C a l l i n g sequence: I % f u n c t i o n [phi,theta,varat]=arma_id(nt,p,q,nthe); y. % Input v a r i a b l e s : I nt y. P % q y. nthe The ARMA time s e r i e s . The number of autoregressive parameters. The number of moving average parameters. The i n i t i a l estimate of the moving average parameter vector. % % Output v a r i a b l e s : The column vector of the autoregressive parameters. The column vector of the moving average parameters. Variance of the white noise. X phi % t h e t a % varat y. y. Written by Ky M. Vu on May 15th 1996 y. 
nob=length(nt);
m=max(p,q);
for i=1:nob-m
  n(i,1)=nt(nob+1-i,1);
  for j=1:p
    n1(i,j)=nt(nob+1-i-j,1);
  end;
  for j=1:q
    n2(i,j)=nt(nob+1-i-j,1);
  end;
end;
esp=1; lesp=1; iter=1;
%
% Start iterating
%
while (iter<100&esp>1.0e-7)
  the=nthe;
  gama(1,1)=the(1,1);
  for k=2:nob-m-1
    sum=0;
    for l=1:k-1
      if (l>=k-q)
        sum=sum+gama(l,1)*the(k-l,1);
      end;
    end;
    if (k<=q)
      gama(k,1)=sum+the(k,1);
    else
      gama(k,1)=sum;
    end;
  end;
  %
  % Calculate the optimal phi vector
  %
  for i=1:nob-m
    for j=1:nob-m
      if (j==i)
        mgama(i,j)=1;
      elseif (j>i)
        mgama(i,j)=gama(j-i,1);
      else
        mgama(i,j)=0;
      end;
    end;
  end;
  ophi=inv(n1'*mgama'*mgama*n1)*n1'*mgama'*(n+mgama*n2*the);
  %
  % Calculate the identification equations
  %
  for i=1:q
    for k=1:nob-m-1
      sum=0;
      if (k<i)
        dgama(k,i)=0;
      elseif (k==i)
        dgama(k,i)=1;
      else
        for l=1:k-1
          if (l>=k-q)
            sum=sum+dgama(l,i)*the(k-l,1);
          end;
          if (l==k-i)
            sum=sum+gama(l,1);
          end;
        end;
        dgama(k,i)=sum;
      end;
    end;
    %
    for k=1:nob-m
      for l=1:nob-m
        if (l<=k)
          digama(k,l)=0;
        else
          digama(k,l)=dgama(l-k,i);
        end;
      end;
    end;
    ei=zeros(q,1);
    ei(i,1)=1;
    dophi(:,i)=inv(n1'*mgama'*mgama*n1)*(n1'*digama'*(n+mgama*n2*the)+...
      n1'*mgama'*(digama*n2*the+mgama*n2*ei)-...
      (n1'*digama'*mgama*n1+n1'*mgama'*digama*n1)*...
      inv(n1'*mgama'*mgama*n1)*n1'*mgama'*(n+mgama*n2*the));
    h(i,1)=(digama*n2*the+mgama*n2*ei-...
      digama*n1*ophi-mgama*n1*dophi(:,i))'*...
      (n+mgama*n2*the-mgama*n1*ophi);
  end;
  %
  % Calculate the derivative matrix
  %
  for i=1:q
    for k=1:nob-m
      for l=1:nob-m
        if (l<=k)
          digama(k,l)=0;
        else
          digama(k,l)=dgama(l-k,i);
        end;
      end;
    end;
    %
    % Calculate second derivatives
    %
    for j=1:q
      for k=1:nob-m-1
        sum=0;
        if (k<i+j)
          d2gama(k,1)=0;
        elseif (k==i+j)
          d2gama(k,1)=2;
        else
          for l=1:k-1
            if (l>=k-q)
              sum=sum+d2gama(l,1)*the(k-l,1);
            end;
            if (l==k-i)
              sum=sum+dgama(l,j);
            end;
            if (l==k-j)
              sum=sum+dgama(l,i);
            end;
          end;
          d2gama(k,1)=sum;
        end;
      end;
      for k=1:nob-m
        for l=1:nob-m
          if (l<=k)
            djgama(k,l)=0;
            dijgama(k,l)=0;
          else
            djgama(k,l)=dgama(l-k,j);
            dijgama(k,l)=d2gama(l-k,1);
          end;
        end;
      end;
      ei=zeros(q,1); ej=zeros(q,1);
      ei(i,1)=1; ej(j,1)=1;
      %
      % Calculate the Jacobi derivative matrix
      %
      d2phi=inv(n1'*mgama'*mgama*n1)*(n1'*dijgama'*(n+mgama*n2*the)+...
        n1'*digama'*(djgama*n2*the+mgama*n2*ej)+...
        n1'*djgama'*(digama*n2*the+mgama*n2*ei)+...
        n1'*mgama'*(dijgama*n2*the+digama*n2*ej+djgama*n2*ei)-...
        (n1'*dijgama'*mgama*n1+n1'*digama'*djgama*n1+...
        n1'*djgama'*digama*n1+n1'*mgama'*dijgama*n1)*ophi-...
        (n1'*djgama'*mgama*n1+n1'*mgama'*djgama*n1)*dophi(:,i)-...
        (n1'*digama'*mgama*n1+n1'*mgama'*digama*n1)*dophi(:,j));
      dh(i,j)=(dijgama*n2*the+digama*n2*ej+djgama*n2*ei-...
        dijgama*n1*ophi-digama*n1*dophi(:,j)-...
        djgama*n1*dophi(:,i)-mgama*n1*d2phi)'*...
        (n+mgama*n2*the-mgama*n1*ophi)+...
        (digama*n2*the+mgama*n2*ei-...
        digama*n1*ophi-mgama*n1*dophi(:,i))'*...
        (djgama*n2*the+mgama*n2*ej-...
        djgama*n1*ophi-mgama*n1*dophi(:,j));
    end;
  end;
  %
  % Modified Newton-Raphson iteration
  %
  esp=h'*h/q;
  if (esp<=lesp|iter==1)
    lthe=the;
    dthe=-dh\h;
    sthe=dthe;
    lesp=esp;
    iter=iter+1;
    iter
    esp'
    the'
    ophi'
    h'
  else
    dthe=dthe/2;
    sqdthe=dthe'*dthe/q;
    if (sqdthe<=1.0e-10)
      rand('uniform');
      for i=1:q
        dthe(i,1)=rand*sthe(i,1);
      end;
    end;
    iter
    [lesp esp]
  end;
  nthe=lthe+dthe;
end;
%
% Get final results
%
phi=ophi;
theta=the;
x=n+mgama*n2*the-mgama*n1*ophi;
varat=x'*x/(nob-m);

function [omega,delta,phi,theta,varat]=bj_id(yt,ut,f,r,s,p,q,nprm);
%
% Identification algorithm for a Box-Jenkins control model.
%
% Calling sequence:
%
% [omega,delta,phi,theta,varat]=bj_id(yt,ut,f,r,s,p,q,nprm);
%
% Input variables:
%
% yt   : The output variable time series.
% ut   : The input variable time series.
% f    : The pure delay of the B-J model. No delay f=0.
% r    : The number of transmission zeros. No zero r=0.
% s    : The order of the system.
% p    : The number of autoregressive parameters.
% q    : The number of moving average parameters.
% nprm : The initial estimate of the time constant parameter
%        vector and the moving average parameter vector.
%
% Output variables:
%
% omega : The column vector of the transmission zeros.
% delta : The column vector of the poles.
% phi   : The column vector of the autoregressive parameters.
% theta : The column vector of the moving average parameters.
% varat : Variance of the white noise.
%
% Written by Ky M. Vu on May 15th 1996
%
nob=length(yt);
for i=1:nob-r-f-1
  y(i,1)=yt(nob+1-i,1);
  for j=1:r+1
    u(i,j)=ut(nob+1-f-i-j,1);
  end;
end;
m=max(p,q);
%
w0=[eye(nob-r-f-m-1) zeros(nob-r-f-m-1,m)];
%
esp=1; lesp=1; iter=1;
%
% Start iterating
%
while (iter<100&esp>1.0e-7)
  del=nprm(1:s,:);
  the=nprm(s+1:s+q,:);
  psi(1,1)=del(1,1);
  for k=2:nob-r-f-2
    sum=0;
    for l=1:k-1
      if (l>=k-s)
        sum=sum+psi(l,1)*del(k-l,1);
      end;
    end;
    if (k<=s)
      psi(k,1)=sum+del(k,1);
    else
      psi(k,1)=sum;
    end;
  end;
  %
  % Calculate the optimal omega vector
  %
  for i=1:nob-r-f-1
    for j=1:nob-r-f-1
      if (j==i)
        mpsi(i,j)=1;
      elseif (j>i)
        mpsi(i,j)=psi(j-i,1);
      else
        mpsi(i,j)=0;
      end;
    end;
  end;
  omg=inv(u'*mpsi'*mpsi*u)*u'*mpsi'*y;
  %
  % and its derivatives w.r.t. the delta's
  %
  for i=1:s
    for k=1:nob-f-r-2
      sum=0;
      if (k<i)
        dpsi(k,i)=0;
      elseif (k==i)
        dpsi(k,i)=1;
      else
        for l=1:k-1
          if (l>=k-s)
            sum=sum+dpsi(l,i)*del(k-l,1);
          end;
          if (l==k-i)
            sum=sum+psi(l,1);
          end;
        end;
        dpsi(k,i)=sum;
      end;
    end;
    %
    for k=1:nob-f-r-1
      for l=1:nob-f-r-1
        if (l<=k)
          dipsi(k,l)=0;
        else
          dipsi(k,l)=dpsi(l-k,i);
        end;
      end;
    end;
    %
    domg(:,i)=inv(u'*mpsi'*mpsi*u)*(u'*dipsi'*y-...
      (u'*dipsi'*mpsi*u+u'*mpsi'*dipsi*u)*omg);
    g(i,1)=(dipsi*u*omg+mpsi*u*domg(:,i))'*(y-mpsi*u*omg);
  end;
  %
  % Now form the disturbance matrices
  %
  for k=1:m
    wi=zeros(nob-r-f-m-1,nob-r-f-1);
    for i=1:nob-r-f-m-1
      for j=1:nob-r-f-1
        if (j==i+k)
          wi(i,j)=1;
        end;
      end;
    end;
    if (k<=p)
      n1(:,k)=wi*(y-mpsi*u*omg);
    end;
    if (k<=q)
      n2(:,k)=wi*(y-mpsi*u*omg);
    end;
  end;
  %
  gamma(1,1)=the(1,1);
  for k=2:nob-r-f-m-2
    sum=0;
    for l=1:k-1
      if (l>=k-q)
        sum=sum+gamma(l,1)*the(k-l,1);
      end;
    end;
    if (k<=q)
      gamma(k,1)=sum+the(k,1);
    else
      gamma(k,1)=sum;
    end;
  end;
  %
  % Calculate the optimal phi vector
  %
  for i=1:nob-r-f-m-1
    for j=1:nob-r-f-m-1
      if (j==i)
        mgamma(i,j)=1;
      elseif (j>i)
        mgamma(i,j)=gamma(j-i,1);
      else
        mgamma(i,j)=0;
      end;
    end;
  end;
  %
  ophi=inv(n1'*mgamma'*mgamma*n1)*n1'*mgamma'*...
    (w0*(y-mpsi*u*omg)+mgamma*n2*the);
  %
  % and its first order derivatives w.r.t. delta.
  %
  for i=1:s
    for k=1:nob-r-f-1
      for l=1:nob-r-f-1
        if (l<=k)
          dipsi(k,l)=0;
        else
          dipsi(k,l)=dpsi(l-k,i);
        end;
      end;
    end;
    for j=1:m
      wi=zeros(nob-r-f-m-1,nob-r-f-1);
      for k=1:nob-r-f-m-1
        for l=1:nob-r-f-1
          if (l==k+j)
            wi(k,l)=1;
          end;
        end;
      end;
      if (j<=p)
        din1(:,j)=-wi*(dipsi*u*omg+mpsi*u*domg(:,i));
      end;
      if (j<=q)
        din2(:,j)=-wi*(dipsi*u*omg+mpsi*u*domg(:,i));
      end;
    end;
    %
    % The derivatives of phi w.r.t. delta
    %
    dphidel(:,i)=inv(n1'*mgamma'*mgamma*n1)*...
      (din1'*mgamma'*(w0*(y-mpsi*u*omg)+mgamma*n2*the)+...
      n1'*mgamma'*(-w0*dipsi*u*omg-w0*mpsi*u*domg(:,i)+...
      mgamma*din2*the)-(din1'*mgamma'*...
      mgamma*n1+n1'*mgamma'*mgamma*din1)*ophi);
  end;
  %
  for i=1:q
    for k=1:nob-r-f-m-2
      sum=0;
      if (k<i)
        dgamma(k,i)=0;
      elseif (k==i)
        dgamma(k,i)=1;
      else
        for l=1:k-1
          if (l>=k-q)
            sum=sum+dgamma(l,i)*the(k-l,1);
          end;
          if (l==k-i)
            sum=sum+gamma(l,1);
          end;
        end;
        dgamma(k,i)=sum;
      end;
    end;
    %
    for k=1:nob-r-f-m-1
      for l=1:nob-r-f-m-1
        if (l<=k)
          digamma(k,l)=0;
        else
          digamma(k,l)=dgamma(l-k,i);
        end;
      end;
    end;
    ei=zeros(q,1);
    ei(i,1)=1;
    %
    % and w.r.t. theta
    %
    dphithe(:,i)=inv(n1'*mgamma'*mgamma*n1)*...
      (n1'*digamma'*(w0*(y-mpsi*u*omg)+mgamma*n2*the)+...
      n1'*mgamma'*(digamma*n2*the+mgamma*n2*ei)-...
      (n1'*digamma'*mgamma*n1+n1'*mgamma'*digamma*n1)*ophi);
    g(i+s,1)=(digamma*n2*the+mgamma*n2*ei-...
      digamma*n1*ophi-mgamma*n1*dphithe(:,i))'*...
      (w0*(y-mpsi*u*omg)+mgamma*n2*the-mgamma*n1*ophi);
  end;
  %
  % Calculate the derivative matrix
  %
  for i=1:s+q
    for j=1:s+q
      %
      if (i<=s)
        if (j<=s)
          for k=1:nob-f-r-2
            sum=0;
            if (k<i+j)
              d2psi(k,1)=0;
            elseif (k==i+j)
              d2psi(k,1)=2;
            else
              for l=1:k-1
                if (l>=k-s)
                  sum=sum+d2psi(l,1)*del(k-l,1);
                end;
                if (l==k-i)
                  sum=sum+dpsi(l,j);
                end;
                if (l==k-j)
                  sum=sum+dpsi(l,i);
                end;
              end;
              d2psi(k,1)=sum;
            end;
          end;
          for k=1:nob-f-r-1
            for l=1:nob-f-r-1
              if (l<=k)
                dipsi(k,l)=0;
                djpsi(k,l)=0;
                dijpsi(k,l)=0;
              else
                dipsi(k,l)=dpsi(l-k,i);
                djpsi(k,l)=dpsi(l-k,j);
                dijpsi(k,l)=d2psi(l-k,1);
              end;
            end;
          end;
          domgdeldel=inv(u'*mpsi'*mpsi*u)*(u'*dijpsi'*y-...
            (u'*dijpsi'*mpsi*u+u'*dipsi'*djpsi*u+...
            u'*djpsi'*dipsi*u+u'*mpsi'*dijpsi*u)*omg-...
            (u'*dipsi'*mpsi*u+u'*mpsi'*dipsi*u)*domg(:,j)-...
            (u'*djpsi'*mpsi*u+u'*mpsi'*djpsi*u)*domg(:,i));
          dg(i,j)=(dijpsi*u*omg+dipsi*u*domg(:,j)+djpsi*u*domg(:,i)+...
            mpsi*u*domgdeldel)'*(y-mpsi*u*omg)-...
            (dipsi*u*omg+mpsi*u*domg(:,i))'*...
            (djpsi*u*omg+mpsi*u*domg(:,j));
        else
          dg(i,j)=0;
        end;
      else
        ii=i-s;
        if (j<=s)
          jj=j;
          for k=1:nob-r-f-1
            for l=1:nob-r-f-1
              if (l<=k)
                djpsi(k,l)=0;
              else
                djpsi(k,l)=dpsi(l-k,jj);
              end;
            end;
          end;
          for kk=1:m
            wi=zeros(nob-r-f-m-1,nob-r-f-1);
            for k=1:nob-r-f-m-1
              for l=1:nob-r-f-1
                if (l==k+kk)
                  wi(k,l)=1;
                end;
              end;
            end;
            if (kk<=p)
              djn1(:,kk)=-wi*(djpsi*u*omg+mpsi*u*domg(:,jj));
            end;
            if (kk<=q)
              djn2(:,kk)=-wi*(djpsi*u*omg+mpsi*u*domg(:,jj));
            end;
          end;
        else
          jj=j-s;
          for k=1:nob-r-f-m-2
            sum=0;
            if (k<ii+jj)
              d2gamma(k,1)=0;
            elseif (k==ii+jj)
              d2gamma(k,1)=2;
            else
              for l=1:k-1
                if (l>=k-q)
                  sum=sum+d2gamma(l,1)*the(k-l,1);
                end;
                if (l==k-ii)
                  sum=sum+dgamma(l,jj);
                end;
                if (l==k-jj)
                  sum=sum+dgamma(l,ii);
                end;
              end;
              d2gamma(k,1)=sum;
            end;
          end;
        end;
        for k=1:nob-r-f-m-1
          for l=1:nob-r-f-m-1
            if (l<=k)
              digamma(k,l)=0;
              if (j>s)
                djgamma(k,l)=0;
                dijgamma(k,l)=0;
              end;
            else
              digamma(k,l)=dgamma(l-k,ii);
              if (j>s)
                djgamma(k,l)=dgamma(l-k,jj);
                dijgamma(k,l)=d2gamma(l-k,1);
              end;
            end;
          end;
        end;
        ei=zeros(q,1); ej=zeros(q,1);
        ei(ii,1)=1; ej(jj,1)=1;
        if (j<=s)
          dphidelthe=inv(n1'*mgamma'*mgamma*n1)*(djn1'*digamma'*...
            (w0*(y-mpsi*u*omg)+mgamma*n2*the)+n1'*digamma'*...
            (-w0*djpsi*u*omg-w0*mpsi*u*domg(:,jj)+mgamma*djn2*the)+...
            djn1'*mgamma'*(digamma*n2*the+mgamma*n2*ei)+...
            n1'*mgamma'*(digamma*djn2*the+mgamma*djn2*ei)-...
            (djn1'*mgamma'*mgamma*n1+n1'*mgamma'*mgamma*djn1)*...
            dphithe(:,ii)-...
            (n1'*digamma'*mgamma*n1+n1'*mgamma'*digamma*n1)*...
            dphidel(:,jj)-...
            (djn1'*digamma'*mgamma*n1+n1'*digamma'*mgamma*djn1+...
            djn1'*mgamma'*digamma*n1+n1'*mgamma'*digamma*djn1)*ophi);
          %
          dg(i,j)=(digamma*djn2*the+mgamma*djn2*ei-...
            digamma*djn1*ophi-digamma*n1*dphidel(:,jj)-...
            mgamma*djn1*dphithe(:,ii)-mgamma*n1*dphidelthe)'*...
            (w0*(y-mpsi*u*omg)+mgamma*n2*the-mgamma*n1*ophi)+...
            (digamma*n2*the+mgamma*n2*ei-...
            digamma*n1*ophi-mgamma*n1*dphithe(:,ii))'*...
            (-w0*djpsi*u*omg-w0*mpsi*u*domg(:,jj)+...
            mgamma*djn2*the-mgamma*djn1*ophi-...
            mgamma*n1*dphidel(:,jj));
        else
          dphithethe=inv(n1'*mgamma'*mgamma*n1)*(n1'*dijgamma'*...
            (w0*(y-mpsi*u*omg)+mgamma*n2*the)+...
            n1'*digamma'*(djgamma*n2*the+mgamma*n2*ej)+...
            n1'*djgamma'*(digamma*n2*the+mgamma*n2*ei)+...
            n1'*mgamma'*(dijgamma*n2*the+digamma*n2*ej+...
            djgamma*n2*ei)-...
            (n1'*djgamma'*mgamma*n1+n1'*mgamma'*djgamma*n1)*...
            dphithe(:,ii)-...
            (n1'*digamma'*mgamma*n1+n1'*mgamma'*digamma*n1)*...
            dphithe(:,jj)-...
            (n1'*dijgamma'*mgamma*n1+n1'*digamma'*djgamma*n1+...
            n1'*djgamma'*digamma*n1+n1'*mgamma'*dijgamma*n1)*ophi);
          %
          dg(i,j)=(dijgamma*n2*the+digamma*n2*ej+djgamma*n2*ei-...
            dijgamma*n1*ophi-digamma*n1*dphithe(:,jj)-...
            djgamma*n1*dphithe(:,ii)-mgamma*n1*dphithethe)'*...
            (w0*(y-mpsi*u*omg)+mgamma*n2*the-mgamma*n1*ophi)+...
            (digamma*n2*the+mgamma*n2*ei-...
            digamma*n1*ophi-mgamma*n1*dphithe(:,ii))'*...
            (djgamma*n2*the+mgamma*n2*ej-...
            djgamma*n1*ophi-mgamma*n1*dphithe(:,jj));
        end;
      end;
    end;
  end;
  %
  % Modified Newton-Raphson iteration
  %
  esp=g'*g/(s+q);
  if (esp<=lesp|iter==1)
    lprm=nprm;
    dprm=-dg\g;
    sprm=dprm;
    lesp=esp;
    iter=iter+1;
    iter
    esp'
    nprm'
    g'
  else
    dprm=dprm/2;
    sqdprm=dprm'*dprm/(s+q);
    if (sqdprm<=1.0e-10)
      rand('uniform');
      for i=1:s+q
        dprm(i,1)=rand*sprm(i,1);
      end;
    end;
    iter
    [lesp esp]
  end;
  nprm=lprm+dprm;
end;
%
% Get final results
%
omega=omg;
delta=nprm(1:s,:);
phi=ophi;
theta=nprm(s+1:s+q,:);
at=w0*(y-mpsi*u*omg)+mgamma*n2*theta-mgamma*n1*phi;
varat=at'*at/(nob-r-f-m-1);

function [sigma2n]=armavar(theta,phi,sigma2a);
%
% Routine to calculate the variance for an ARMA time series.
% The routine can take a non-monic moving average polynomial.
%
% Calling sequence:
%
% [sigma2n]=armavar(theta,phi,sigma2a);
%
% Input arguments:
%
% theta   : The moving average polynomial.
%           [1 -theta1 -theta2 ... -thetaq];
% phi     : The autoregressive polynomial.
%           [1 -phi1 -phi2 ... -phip];
% sigma2a : The variance of the white noise.
%
% Output arguments:
%
% sigma2n : The variance of the time series.
%
% Author : K. Vu written on Nov. 13th 1993.
%
p=length(phi); q=length(theta);
p=p-1; q=q-1;
n=max(p,q);
r=abs(roots(phi'));
for i=1:p,
  if (r(i,1)>=1)
    error('The Time Series is Nonstationary');
  end;
end;
%
gamma0=zeros(n+1,n+1);
gamma1=zeros(n+1,n+1);
for i=1:n+1,
  for j=1:n+1,
    m=i+j-1;
    if (m<=p+1)
      gamma0(i,j)=phi(1,m);
    else
      gamma0(i,j)=0;
    end;
  end;
end;
for i=1:n+1,
  for j=i:n+1,
    m=j-i+1;
    if (m<=p+1)
      gamma0(i,j)=gamma0(i,j)+phi(1,m);
    end;
  end;
end;
for i=1:n+1,
  for j=2:n+1,
    gamma1(i,j)=gamma0(i,j);
  end;
end;
for i=1:n+1,
  for j=1:n+1,
    m=i-1+j;
    if (m>0&m<=q+1)
      gamma1(i,1)=gamma1(i,1)+theta(1,j)*theta(1,m);
    end;
  end;
  gamma1(i,1)=2*gamma1(i,1);
end;
%
% Get the result
%
sigma2n=det(gamma1)*sigma2a/det(gamma0);

function [c]=polymul(a,b);
%
% Function to multiply two polynomials.
%
% Calling sequence:
%
% [c]=polymul(a,b);
%
% Input arguments:
%
% a : Column of input polynomial coefficients.
% b : Column of input polynomial coefficients.
%
% Output argument:
%
% c : Column of resultant polynomial coefficients, c = a*b.
%
% Written by K. Vu on June 1st 1996.
%
[m,n]=size(a);
if (m==1)
  degreea=n;
else
  degreea=m;
end;
[m,n]=size(b);
if (m==1)
  degreeb=n;
else
  degreeb=m;
end;
degreec=degreea+degreeb-1;
for i=1:degreec
  c(i,1)=0;
  for j=1:i
    m=i+1-j;
    if (j<=degreea&m<=degreeb)
      c(i,1)=c(i,1)+a(j,1)*b(i+1-j,1);
    end;
  end;
end;

function [kc,vary,varu]=lqg_pid(omega,delta,theta,phi,f,vara,lambda,nl);
%
% Routine to calculate the LQG PID controller gains
% for a Box-Jenkins model control system.
%
% Calling sequence:
%
% [kc,vary,varu]=lqg_pid(omega,delta,theta,phi,f,vara,lambda,nl);
%
% Input parameters:
%
% omega  : The zero polynomial of the transfer function.
% delta  : The pole polynomial of the transfer function.
% theta  : The moving average polynomial of the disturbance.
% phi    : The autoregressive polynomial of the disturbance.
% f      : The delay of the system. No delay f=0.
% vara   : The variance of the white noise.
% lambda : The Lagrange multiplier. MV PID control, lambda=0.
% nl     : The estimated initial gain vector.
%
% Output parameters:
%
% kc   : The optimal controller gain matrix in the sequence:
%        kp, ki, kd.
% vary : The variance of the output variable.
% varu : The variance of the input variable.
%
% Author : K. Vu written on June 1st 1996.
%
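% Example (hypothetical arguments): with identified polynomials
% omega, delta, theta and phi, delay f=1 and white noise variance
% vara, the minimum variance PID gains follow from lambda=0:
%   [kc,vary,varu]=lqg_pid(omega,delta,theta,phi,1,vara,0,nl);
% where nl is an initial gain guess. A positive lambda trades a
% larger output variance for a smaller input variance.
%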
r=length(omega); r=r-1;
s=length(delta); s=s-1;
q=length(theta); q=q-1;
p=length(phi); p=p-1;
%
% Calculate the resultant polynomials.
%
[alpha]=polymul(delta,theta);
[gama]=polymul(delta,phi);
%
% Check if nonstationary disturbance.
%
if (abs(polyval(phi',1))<=1e-5)
  m=3;
  nonsta=1;
  %
  % Factor out the nonstationary polynomial
  %
  phim(1,1)=1;
  for i=2:p
    phim(i,1)=phim(i-1,1)+phi(i,1);
  end;
  p=p-1;
  if (r==0)
    psi=omega(1,1)*phim;
  else
    [psi]=polymul(omega,phim);
  end;
  g=[0 -1 -2;1 1 1;0 0 1];
else
  %
  % Case of stationary disturbance: only a PD controller.
  %
  m=2;
  nonsta=0;
  if (r==0)
    psi=omega(1,1)*phi;
  else
    [psi]=polymul(omega,phi);
  end;
  g=[1 1;0 -1];
end;
%
% Start the iteration.
%
esp=1; lesp=1; iter=1;
while (iter<50&esp>1e-15)
  %
  % Get the parameters of the feedback output variable time series.
  %
  l=nl;
  [betam]=polymul(l,psi);
  [eta]=polymul(l,alpha);
  na=length(betam);
  ng=length(gama);
  nb=max(ng,na+f+1);
  %
  for i=1:nb
    beta(i,1)=0;
    if (i<=ng)
      beta(i,1)=beta(i,1)+gama(i,1);
    end;
    if (i>f+1&i<=na+f+1)
      beta(i,1)=beta(i,1)-betam(i-f-1,1);
    end;
  end;
  na=length(alpha);
  nb=length(beta);
  nc=length(eta);
  [var1]=armavar(alpha',beta',vara);
  [var2]=armavar(eta',beta',vara);
  %
  if (nb<nc)
    beta(nb+1:nc,1)=zeros(nc-nb,1);
  end;
  alpha(2:na,1)=-alpha(2:na,1);
  beta(2:nb,1)=-beta(2:nb,1);
  eta(2:nc,1)=-eta(2:nc,1);
  n=max(nb,nc)-1;
  u=1;
  w=eta(1,1)^2;
  for k=2:n+1
    %
    if (k<=na)
      if (k<=nb)
        u=u+(alpha(k,1)-beta(k,1))^2-beta(k,1)^2;
      else
        u=u+alpha(k,1)^2;
      end;
    end;
    %
    if (k<=nc)
      if (k<=nb)
        sum=2*eta(1,1)*beta(k,1);
      else
        sum=0;
      end;
      w=w+eta(k,1)*(eta(k,1)-sum);
    end;
  end;
  %
  for i=1:n-1
    for j=1:n-1
      if (i+j<=n)
        mgama(i,j)=-beta(i+j+1,1);
      else
        mgama(i,j)=0;
      end;
      if (j==i)
        mgama(i,j)=mgama(i,j)+1;
      elseif (i>j)
        mgama(i,j)=mgama(i,j)-beta(i-j+1,1);
      end;
    end;
  end;
  %
  nd=length(psi);
  for i=1:m
    for j=1:n+1
      if (j>f+i&j<=nd+f+i)
        betap(j,i)=psi(j-f-i,1);
      else
        betap(j,i)=0;
      end;
    end;
    vbetap(:,i)=betap(2:n,i);
  end;
  %
  etap=zeros(n,m);
  for i=1:m
    etap(i:na+i-1,i)=alpha(1:na,1);
  end;
  %
  for k=1:m
    for i=1:n-1
      for j=1:n-1
        js=(k-1)*(n-1)+j;
        if (i+j<=n)
          gamap(i,js)=-betap(i+j+1,k);
        else
          gamap(i,js)=0;
        end;
        if (i>j)
          gamap(i,js)=gamap(i,js)-betap(i-j+1,k);
        end;
      end;
    end;
  end;
  %
  xip=zeros(n-1,m);
  zetap=zeros(n-1,m);
  mbetap=zeros(n-1,m);
  for i=1:n-1
    xi(i,1)=0;
    zeta(i,1)=0;
    mbeta(i,1)=0;
    for j=1:n-1
      %
      if (j<=na-1)
        if (i+j<=na-1)
          xi(i,1)=xi(i,1)+alpha(j+1,1)*alpha(j+i+1,1);
        end;
        if (i+j<=nb-1)
          xi(i,1)=xi(i,1)-alpha(j+1,1)*beta(j+i+1,1);
          for k=1:m
            xip(i,k)=xip(i,k)+alpha(j+1,1)*betap(j+i+1,k);
          end;
        end;
      end;
      if (j<=nc-1)
        if (i+j<=nc-1)
          zeta(i,1)=zeta(i,1)+eta(j+1,1)*eta(j+i+1,1);
          for k=1:m
            zetap(i,k)=zetap(i,k)+etap(j+1,k)*eta(j+i+1,1)+...
              eta(j+1,1)*etap(j+i+1,k);
          end;
        end;
        if (i+j<=nb-1)
          zeta(i,1)=zeta(i,1)-eta(j+1,1)*eta(1,1)*beta(j+i+1,1);
          for k=1:m
            zetap(i,k)=zetap(i,k)-etap(j+1,k)*eta(1,1)*beta(j+i+1,1)-...
              eta(j+1,1)*etap(1,k)*beta(j+i+1,1)-...
              eta(j+1,1)*eta(1,1)*betap(j+i+1,k);
          end;
        end;
      end;
      if (i+j<=na-1&j<=nb-1)
        xi(i,1)=xi(i,1)-alpha(j+i+1,1)*beta(j+1,1);
        for k=1:m
          xip(i,k)=xip(i,k)+alpha(j+i+1,1)*betap(j+1,k);
        end;
      end;
      if (i+j<=nc-1&j<=nb-1)
        zeta(i,1)=zeta(i,1)-eta(j+i+1,1)*eta(1,1)*beta(j+1,1);
        for k=1:m
          zetap(i,k)=zetap(i,k)-etap(j+i+1,k)*eta(1,1)*beta(j+1,1)-...
            eta(j+i+1,1)*etap(1,k)*beta(j+1,1)-...
            eta(j+i+1,1)*eta(1,1)*betap(j+1,k);
        end;
      end;
      %
      if (i+j<=n)
        mbeta(i,1)=mbeta(i,1)+beta(j+1,1)*beta(j+i+1,1);
        for k=1:m
          mbetap(i,k)=mbetap(i,k)+beta(j+1,1)*betap(j+i+1,k)+...
            betap(j+1,k)*beta(j+i+1,1);
        end;
      end;
    end;
  end;
  %
  vbeta=beta(2:n,1);
  invgama=inv(mgama);
  u=u+2*xi'*invgama*vbeta;
  w=w+2*zeta'*invgama*vbeta;
  v=2-beta'*beta-2*mbeta'*invgama*vbeta;
  %
  % Calculate the variances.
  %
  vary=(u/v)*vara;
  varu=(w/v)*vara;
  %[var1 vary var2 varu]
  %pause(5)
  %
  smbebep2=zeros(m,m);
  for i=1:m
    smalbep(i,1)=0;
    smetabep(i,1)=eta(1,1)*etap(1,i);
    smbebep(i,1)=0;
    for j=2:n+1
      if (j<=nb)
        if (j<=na)
          smalbep(i,1)=smalbep(i,1)+alpha(j,1)*betap(j,i);
        end;
        smbebep(i,1)=smbebep(i,1)+beta(j,1)*betap(j,i);
        for k=1:m
          smbebep2(i,k)=smbebep2(i,k)+betap(j,i)*betap(j,k);
        end;
      end;
      if (j<=nc)
        smetabep(i,1)=smetabep(i,1)+...
          etap(j,i)*(eta(j,1)-2*eta(1,1)*beta(j,1))+...
          eta(j,1)*(etap(j,i)-2*etap(1,i)*beta(j,1)-...
          2*eta(1,1)*betap(j,i));
      end;
    end;
  end;
  for i=1:m
    is=(i-1)*(n-1)+1;
    ie=i*(n-1);
    uprime(i,1)=-smalbep(i,1)-xip(:,i)'*invgama*vbeta...
      -xi'*invgama*gamap(:,is:ie)*invgama*vbeta...
      +xi'*invgama*vbetap(:,i);
    wprime(i,1)=smetabep(i,1)+zetap(:,i)'*invgama*vbeta...
      -zeta'*invgama*gamap(:,is:ie)*invgama*vbeta...
      +zeta'*invgama*vbetap(:,i);
    vprime(i,1)=-smbebep(i,1)-mbetap(:,i)'*invgama*vbeta...
      +mbeta'*invgama*gamap(:,is:ie)*invgama*vbeta...
      -mbeta'*invgama*vbetap(:,i);
    h(i,1)=(uprime(i,1)*v-u*vprime(i,1))+lambda*...
      (wprime(i,1)*v-w*vprime(i,1));
  end;
  for i=1:m
    for j=1:m
      is=(i-1)*(n-1)+1; ie=i*(n-1);
      js=(j-1)*(n-1)+1; je=j*(n-1);
      vbebep2=zeros(n-1,1);
      vzetap2=zeros(n-1,1);
      for k=1:n-1
        for kk=1:n-1
          if (k+kk<=nc-1)
            vzetap2(k,1)=vzetap2(k,1)+etap(kk+1,i)*etap(k+kk+1,j)+...
              etap(kk+1,j)*etap(k+kk+1,i)-...
              etap(k+kk+1,i)*etap(1,j)*beta(k+1,1)-...
              etap(k+kk+1,i)*eta(1,1)*betap(k+1,j)-...
              etap(k+kk+1,j)*etap(1,i)*beta(k+1,1)-...
              etap(k+kk+1,j)*eta(1,1)*betap(k+1,i)-...
              eta(k+kk+1,1)*etap(1,i)*betap(k+1,j)-...
              eta(k+kk+1,1)*etap(1,j)*betap(k+1,i);
          end;
          if (k+kk<=n)
            vbebep2(k,1)=vbebep2(k,1)+betap(kk+1,i)*betap(k+kk+1,j)+...
              betap(k+kk+1,i)*betap(kk+1,j);
          end;
          if (kk<=nc-1&k+kk<=nb-1)
            vzetap2(k,1)=vzetap2(k,1)-...
              etap(kk+1,i)*etap(1,j)*beta(k+kk+1,1)-...
              etap(kk+1,i)*eta(1,1)*betap(k+kk+1,j)-...
              etap(kk+1,j)*etap(1,i)*beta(k+kk+1,1)-...
              etap(kk+1,j)*eta(1,1)*betap(k+kk+1,i)-...
              eta(kk+1,1)*etap(1,i)*betap(k+kk+1,j)-...
              eta(kk+1,1)*etap(1,j)*betap(k+kk+1,i);
          end;
        end;
      end;
      sumetabe2=etap(1,i)*etap(1,j);
      for k=2:n+1
        if (k<=nc-1)
          sumetabe2=sumetabe2+etap(k,i)*(etap(k,j)-2*...
            etap(1,j)*beta(k,1)-2*eta(1,1)*betap(k,j))+...
            etap(k,j)*(etap(k,i)-2*etap(1,i)*beta(k,1)-2*...
            eta(1,1)*betap(k,i));
        end;
      end;
      u2prime(i,j)=xip(:,i)'*invgama*gamap(:,js:je)*invgama*vbeta...
        -xip(:,i)'*invgama*vbetap(:,j)...
        +xip(:,j)'*invgama*gamap(:,is:ie)*invgama*vbeta...
        +xi'*invgama*gamap(:,js:je)*invgama*gamap(:,is:ie)*...
        invgama*vbeta...
        +xi'*invgama*gamap(:,is:ie)*invgama*gamap(:,js:je)*...
        invgama*vbeta...
        -xi'*invgama*gamap(:,is:ie)*invgama*vbetap(:,j)...
        -xip(:,j)'*invgama*vbetap(:,i)...
        -xi'*invgama*gamap(:,js:je)*invgama*vbetap(:,i);
      %
      w2prime(i,j)=sumetabe2+vzetap2'*invgama*vbeta...
        -zetap(:,i)'*invgama*gamap(:,js:je)*invgama*vbeta...
        +zetap(:,i)'*invgama*vbetap(:,j)...
        -zetap(:,j)'*invgama*gamap(:,is:ie)*invgama*vbeta...
        +zeta'*invgama*gamap(:,js:je)*invgama*...
        gamap(:,is:ie)*invgama*vbeta...
        +zeta'*invgama*gamap(:,is:ie)*invgama*...
        gamap(:,js:je)*invgama*vbeta...
        -zeta'*invgama*gamap(:,is:ie)*invgama*vbetap(:,j)...
        +zetap(:,j)'*invgama*vbetap(:,i)...
        -zeta'*invgama*gamap(:,js:je)*invgama*vbetap(:,i);
      %
      v2prime(i,j)=-smbebep2(i,j)-vbebep2'*invgama*vbeta...
        +mbetap(:,i)'*invgama*gamap(:,js:je)*invgama*vbeta...
        -mbetap(:,i)'*invgama*vbetap(:,j)...
        +mbetap(:,j)'*invgama*gamap(:,is:ie)*invgama*vbeta...
        -mbeta'*invgama*gamap(:,js:je)*invgama*...
        gamap(:,is:ie)*invgama*vbeta...
        -mbeta'*invgama*gamap(:,is:ie)*invgama*...
        gamap(:,js:je)*invgama*vbeta...
        +mbeta'*invgama*gamap(:,is:ie)*invgama*vbetap(:,j)...
        -mbetap(:,j)'*invgama*vbetap(:,i)...
        +mbeta'*invgama*gamap(:,js:je)*invgama*vbetap(:,i);
      dh(i,j)=(u2prime(i,j)*v+uprime(i,1)*vprime(j,1)...
        -uprime(j,1)*vprime(i,1)-u*v2prime(i,j))+lambda*...
        (w2prime(i,j)*v+wprime(i,1)*vprime(j,1)...
        -wprime(j,1)*vprime(i,1)-w*v2prime(i,j));
    end;
  end;
  %
  % Modified Newton-Raphson iteration
  %
  esp=h'*h/m;
  if (esp<=lesp|iter==1)
    ll=l;
    dl=-dh\h;
    sl=dl;
    lesp=esp;
    kc(:,iter)=g*l;
    iter=iter+1;
    iter
    l'
    h'
    [lesp esp]
  else
    dl=dl/2;
    sqdl=dl'*dl/m;
    if (sqdl<=1.0e-9)
      rand('uniform');
      for i=1:m
        dl(i,1)=rand*sl(i,1);
      end;
    end;
    iter
    [lesp esp]
  end;
  nl=ll+dl;
end;

function [dn,bninv,cn,un]=stn_rld(dn_1,bn_1inv,cn_1,yn,un_1,m,n);
%
% Routine to calculate the control action for the Recursive
% Least Determinant (RLD) Self-Tuning Regulator.
%
% Calling sequence:
%
% [dn,bninv,cn,un]=stn_rld(dn_1,bn_1inv,cn_1,yn,un_1,m,n);
%
% Input variables:
%
% dn_1    : A column vector of dimension m+n+1.
% bn_1inv : An inverse matrix of dimension m+n+1.
% cn_1    : A column vector of past input and output
%           variables. Dimension m+n+1.
% yn      : The newly available output variable.
% un_1    : The last control action or input variable.
% m       : The number of past input variables the
%           controller remembers.
% n       : The number of past output variables the
%           controller remembers.
%
% Output variables:
%
% dn    : The updated dn column.
% bninv : The updated bninv matrix.
% cn    : The updated cn column.
% un    : The calculated control action.
%
% Written by K. Vu on September 16th 1996.
%
cn(1,1)=un_1;
cn(m+1,1)=yn;
if (m>=2)
  for i=2:m
    cn(i,1)=cn_1(i-1,1);
  end;
end;
if (n>=1)
  for i=2:n+1
    cn(i+m,1)=cn_1(i+m-1,1);
  end;
end;
bninv=bn_1inv-bn_1inv*cn*cn'*bn_1inv/(1+cn'*bn_1inv*cn);
un=dn_1'*bninv*cn/(1-cn'*bninv*cn);
dn=dn_1+cn*un;

function [pt,betat,ut]=stn_rls(pt_1,betat_1,yt,xt_f_1,xt);
%
% Routine to calculate the control action for the Recursive
% Least Squares (RLS) Self-Tuning Regulator.
%
% Calling sequence:
%
% [pt,betat,ut]=stn_rls(pt_1,betat_1,yt,xt_f_1,xt);
%
% Input variables:
%
% pt_1    : A positive definite matrix of dimension m+n+2.
% betat_1 : The parameter vector of dimension m+n+2.
% yt      : The newly available output variable.
% xt_f_1  : A vector of past input and output variables.
%           Dimension m+n+2.
% xt      : The above vector at time t. It contains
%           [0 ut_1 ... yt yt_1 ...]'.
%
% Output variables:
%
% pt    : The updated matrix of dimension m+n+2.
% betat : The updated parameter vector of dimension m+n+2.
% ut    : The calculated control action.
%
% Written by K. Vu on September 16th 1996.
%
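% Example (hypothetical loop): at every sample the regulator is
% updated with the new measurement and the control action applied:
%   [pt,betat,ut]=stn_rls(pt,betat,yt,xt_f_1,xt);
% A common start is a large pt, say 100*eye(m+n+2), and a zero
% parameter vector betat=zeros(m+n+2,1).
%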
l=length(betat_1);
kt=pt_1*xt_f_1/(1+xt_f_1'*pt_1*xt_f_1);
betat=betat_1+kt*(yt-xt_f_1'*betat_1);
ut=-betat(2:l,1)'*xt(2:l,1)/betat(1,1);
pt=pt_1-pt_1*xt_f_1*xt_f_1'*pt_1/(1+xt_f_1'*pt_1*xt_f_1);

function [theta,sigma2]=ma_id(gamma);
%
% Identification algorithm for a Moving Average
% time series given the autocovariances.
%
% Calling sequence:
%
% [theta,sigma2]=ma_id(gamma);
%
% Input variables:
%
% gamma : The autocovariances starting from lag 0.
%
% Output variables:
%
% theta  : The moving average parameter vector.
% sigma2 : The variance of the white noise.
%
% Written on Jan 15th 1997 by Ky M. Vu
%
q=length(gamma)-1;
b=zeros(q,q);
theta=zeros(q,1);
ntheta=zeros(q,1);
gm=gamma(2:q+1,1)/gamma(1,1);
err=1; iter=1;
while (iter<100&err>1e-11)
  for i=1:q-1
    for j=1:q-1
      m=i+j;
      if (m<=q)
        b(i,j)=theta(m,1);
      end;
    end;
  end;
  ntheta=b*theta-(1+theta'*theta)*gm;
  iter=iter+1;
  err=(ntheta-theta)'*(ntheta-theta)/q;
  theta=ntheta;
end;
sigma2=gamma(1,1)/(1+theta'*theta);

function [gamma]=autocov(phi,theta,sigma2,lag);
%
% Routine to calculate the autocovariances
% of an ARMA time series.
%
% Calling sequence:
%
% [gamma]=autocov(phi,theta,sigma2,lag);
%
% Input variables:
%
% phi    : The autoregressive parameter column vector.
% theta  : The moving average parameter column vector.
% sigma2 : The variance of the generating white noise.
% lag    : The last lag of the desired autocovariances.
%
% Output variables:
%
% gamma : The autocovariance vector starting from lag 0.
%
% Written on Jan 15th 1997 by Ky M. Vu
%
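% Example (hypothetical parameters): the autocovariances up to lag 2
% of an ARMA(1,1) model with phi1=0.5, theta1=0.3 and unit white
% noise variance would be obtained as
%   [gamma]=autocov([1 -0.5]',[1 -0.3]',1,2);
%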
p=length(phi)-1;
q=length(theta)-1;
aa=zeros(lag+1,lag+1);
bb=zeros(lag+1,lag+1);
cc=zeros(lag+1,lag+1);
d=zeros(lag+1,1);
%
for i=1:q+1
  d(i,1)=theta(i,1)*sigma2;
  for j=1:q+1
    m=i+j-1;
    if (m<=q+1)
      bb(i,j)=theta(m,1);
    end;
  end;
end;
for i=1:lag+1
  for j=1:i
    m=i-j+1;
    if (m<=p+1)
      aa(i,j)=phi(m,1);
      cc(i,j)=phi(m,1);
    end;
  end;
  for j=2:lag+1
    n=i+j-1;
    if (n<=p+1)
      aa(i,j)=aa(i,j)+phi(n,1);
    end;
  end;
end;
gamma=inv(aa)*bb*inv(cc)*d;

Bibliography

[1] Akaike, H. (1967) "Some Problems in the Application of the Cross-Spectral Method." Proceedings of an Advanced Seminar on Spectral Analysis of Time Series, B. Harris, Editor, Wiley, N.Y., N.Y., U.S.A.

[2] Akaike, H. (1974) "A New Look at the Statistical Model Identification." I.E.E.E. Trans. Automatic Control, AC-19, pp 716-722.

[3] Allidina, A. Y. and Hughes, F. M. (1980) "Generalised Self-Tuning Controller with Pole Assignment." IEE Proc., Vol. 127, Pt. D, No. 1, pp 13-18.

[4] Anderson, O. D. (1975) "On the Collection of Time Series Data." Op. Res. Quart., Vol. 26, pp 331-335.

[5] Anderson, B. D. O. and Gevers, M. R. (1982) "Identifiability of Linear Stochastic Systems Operating Under Linear Feedback." Automatica, Vol. 18, No. 2, pp 195-213.

[6] Astrom, K. J. (1970) Introduction to Stochastic Control Theory. Academic Press, N.Y., N.Y., U.S.A.

[7] Astrom, K. J. and Hagglund, T. (1988) Automatic Tuning of PID Controllers. ISA, Research Triangle Park, N.C.

[8] Astrom, K. J. and Wittenmark, B. (1973) "On Self-Tuning Regulators." Automatica, 9, pp 185-189.

[9] Astrom, K. J. and Wittenmark, B. (1980) "Self-Tuning Controllers Based on Pole-Zero Placement." IEE Proc., Part D, Vol. 127, No. 3, pp 120-130.

[10] Astrom, K. J. and Wittenmark, B. (1984) Computer Controlled Systems - Theory and Design. Prentice-Hall, Englewood Cliffs, N.J.

[11] Astrom, K. J. and Wittenmark, B. (1989) Adaptive Control. Addison-Wesley, Reading, Mass., U.S.A.