PARAMETER ESTIMATION APPLIED TO POWER SYSTEM MODELS SUBJECT TO RANDOM DISTURBANCES

by

HANS MULLER
Dipl. El.-Ing. ETH, Swiss Federal Institute of Technology, Zurich, 1973

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES, Department of Electrical Engineering

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
June, 1979
(c) Hans Muller, 1979

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my Department or by his representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Hans Muller
Department of Electrical Engineering
The University of British Columbia
2075 Wesbrook Place
Vancouver, Canada V6T 1W5
Date: 9/7/79

ABSTRACT

For control of an electric generator, it is desirable to approximate the remote part of the power system by a low-order model whose parameters can be estimated from measurements. The proposed model for the remote system consists of a large synchronous machine and a randomly varying load, which account for the dynamic behaviour and for the small fluctuations that are always present. The maximum likelihood algorithm is a good method for parameter estimation of a noisy system. It can even be used for estimation from measurements alone, without an applied disturbance, which could be of great advantage in a power system. The algorithm is derived in a form suitable for efficient digital processing of sampled measurements from a linear continuous system with physical parameters.
The performance of the algorithm is tested by computer simulation of a simplified model for several cases of deterministic and/or stochastic input. It is demonstrated that good estimates can be found from the output of a disturbed system when no intentional input is applied.

TABLE OF CONTENTS

Abstract ii
Acknowledgement v
Nomenclature vi

1. INTRODUCTION 1
1.1 Dynamic Power System Models 1
1.2 Parametric Models for Identification 2
1.3 Input Signal Requirements 3
1.4 Sources of Input Noise in Power Systems 6
1.5 Methods for Parameter Estimation 6

2. THE MAXIMUM LIKELIHOOD ALGORITHM FOR PARAMETER ESTIMATION 9
2.1 State Space Description and Likelihood Function 9
2.2 The Continuous-Discrete Kalman Filter 12
2.3 Steady State of the Kalman Filter 14
2.4 Maximum Likelihood Estimates 17
2.5 Least Squares Algorithms 19
2.6 Optimization Methods 21
2.7 Correlated Input Noise 24

3. DYNAMIC POWER SYSTEM MODEL 26
3.1 Structure of the Model 26
3.2 Derivation of the State Equations 28
3.3 Dynamic Equations for One Machine 29
3.4 Load Bus Equations 35
3.5 Simplified Dynamic Model 36
3.6 Steady State Values 37

4. IMPLEMENTATION OF ALGORITHM 39
4.1 Calculation of Likelihood Values 39
4.2 Optimization Procedure 40
4.3 Computation Efficiency 41
4.4 Simulation Programs 42

5. RESULTS 43
5.1 Parameters and Input Data 43
5.2 Output Data 44
5.3 Convergence of Parameter Estimates 50

6. CONCLUSIONS 58
7. BIBLIOGRAPHY 60

ACKNOWLEDGEMENT

I would like to thank my thesis supervisor, Dr. M. D. Wvong, as well as Dr. M. Davies, for their guidance, advice and encouragement during the research work and writing of this thesis. My thanks also to Ms. S. Y. Kim for typing the thesis. My stay in Canada was made possible by an award of the Canada Council. This grant and the financial support of the University of British Columbia are gratefully acknowledged.

NOMENCLATURE

General:
x (small letter)   column vector
x^T                row vector
Capital letter, e.g.
A                  matrix, or scalar quantity of the power system
A^T                transpose of matrix A
E[x], x̄            mean of random vector x
E[x·x^T]           covariance matrix of random vector x
δ(t)               Dirac delta
δ_kl               Kronecker delta
L, V               likelihood function (scalar)

Time:
t                  continuous time
t_k, k             discrete time, t_k = k·Δt

State space description:
x                  state vector
y                  (model) output vector
z                  (actually measured) output vector
u                  control input vector
w                  input noise vector
v                  measurement noise vector
α                  vector of unknown physical parameters
A, B, C, H         system matrices of continuous-time model
F, G, H            system matrices of discrete-time model
Q, R               noise covariance matrices

Kalman filter:
x(-), x(+)         extrapolated and updated state estimate
P(-), P(+)         covariance matrix of x(-), x(+) respectively
e                  innovations vector
P_e                innovations covariance matrix
K                  Kalman gains

Power system quantities (all quantities except δ are per unit):
E, V               voltage
I                  current
P, Q               real and reactive power
X                  reactance
X'                 transient reactance
T                  torque or time constant
T'                 transient time constant
M                  machine inertia
D                  machine damping
δ                  angle (in rad) between machine q-axis and synchronous frame of reference
a                  angle (in rad) between machine q-axis and bus voltage
Δω                 speed deviation from synchronous speed

Indices:
e                  electrical
m                  mechanical
d                  d-axis
q                  q-axis
i                  local generator i
j                  external machine j
t                  terminal of local generator
L                  load bus
FD                 voltage regulator
G                  speed governor

1. INTRODUCTION

1.1 Dynamic Power System Models

There is continued interest in improving dynamic and transient stability limits in electric power systems. Modern control theory offers new methods to achieve this goal; linear optimal control designs show great promise [1]. These techniques in turn require a good model for the power system. The interest here is in a model for a single generator, to be controlled, which is connected to a large system.
The classical approach is to model the generator as a machine connected to an infinite bus [2]. This implies that the machine under consideration has no influence upon the dynamic behaviour of the rest of the power system. This assumption is too restrictive, especially for a large generator. On the other hand, each machine in the entire interconnected system may be modelled individually. The resulting model is then much too large and must be simplified to become tractable.

An intermediate approach is to model the generator under consideration (the "internal" system) in some detail and postulate a simple model for the remaining "external" system. Parameters of this simple model are then identified from measurements taken at the "internal" system generator. This method has two advantages: first, only the dynamics of the external system that interact strongly with the internal system need be represented, and second, the parameters could be estimated by an on-line computer and the model thereby adapted to the actual state of the power network.

The proposed external system usually consists of one (or several) very large synchronous machines [3-7]. Yu et al. [3] used a small disturbing input signal and estimated the parameters from the system response. It was assumed that there were no disturbances in the external system. In the following, additional random fluctuations in the external system are considered. The problems treated are the modelling of these disturbances and the estimation of parameters in a noisy system.

1.2 Parametric Models for Identification

Basic properties of identification problems are treated by Åström and Eykhoff in [8]. In the following it is assumed that a mathematical model for a power system is derived from a priori knowledge, so that the identification problem is reduced to the estimation of several unknown parameters.
State-space models have been chosen for several reasons: they are easily obtained from differential equations, can describe multi-output systems easily, and form the basis for optimal control.

(a) Deterministic models

If all inputs and outputs of the system can be measured accurately, a general linear state space model is of the form:

x(k+1) = F(α)·x(k) + G(α)·u(k)    (1-1)
y(k) = H(α)·x(k)                  (1-2)

α is the vector of unknown parameters and shall not depend on time. The discrete-time form of the state equation is chosen because the input and output signals shall be sampled and processed by a discrete machine (computer). If a continuous-time model

x'(t) = A(α)·x(t) + B(α)·u(t)    (1-3)

is derived from differential equations, it can be exactly converted into form (1-1) by integration over one time step, provided that the input u(t) changes only at the discrete time points. This is not a severe restriction for a power system.

(b) Stochastic models

The system may have inputs (disturbances) which cannot be measured, and the measurements of the outputs may be inaccurate. A state space model for a system with additive noise is:

x(k+1) = F(α)·x(k) + G(α)·u(k) + M(α)·w(k)    (1-4)
y(k) = H(α)·x(k) + v(k)                        (1-5)

w(k) and v(k) are random variables of input and measurement noise respectively. Additional assumptions, like zero mean and independent sequences, are usually made. It should be noted that the model state and output are now random variables too, i.e., x(k) and y(k) are described by a probability density function.

1.3 Input Signal Requirements

Let x(0) = 0. Then identification is only possible if some minimal requirements on the input are met. For many estimators, a sufficient condition of "persistent excitation" was outlined by Åström in [8]. It may be interpreted to mean that the input signal must have some amplitude at every frequency in the band of interest.

Stochastic models have two types of input signals: the control u(k), which is deterministic, and the noise w(k), which is stochastic.

(a) Deterministic inputs

These include disturbances applied for the sake of estimation. Such disturbances are a compromise between what is desired for estimation purposes and what can be realized under physical and operational constraints. For estimation of power system dynamics, input signals can be applied at the governor or the voltage regulator. Since any interference with normal operation is undesirable, these signals should be small and of short duration. Yu et al. [3] proposed a step change of mechanical torque for a short time, the simplest signal with "persistent excitation".

(b) Noise inputs

Observed changes in the system under test which are due neither to applied inputs nor to parameter changes may be attributed to additional stochastic inputs w. These inputs may be considered a nuisance, since they make a correlation of applied input and measured output much more difficult. On the other hand, one might ask whether the noise alone disturbs the system enough that identification is possible from output measurements only. A white or coloured Gaussian noise sequence, for example, would meet the requirement of persistent excitation. The less that is known about these noise inputs, the more difficult system identification becomes, since not only the system parameters but also parameters of the postulated noise process have to be estimated. This is compensated by the advantage that the estimation process requires no interference with the normal operation of the system at all, an advantage that would count very high for power system identification. In fact, a few attempts have been made to estimate power system dynamics from normal operating data. Lindahl and Ljung [4] tried to fit several models by processing actual power system measurements and estimated parameters of a very simple 5th order model.
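As an aside on the discretization of section 1.2, the exact conversion from the continuous form (1-3) to the discrete form (1-1) over one sampling interval can be sketched numerically. This is an illustrative sketch (not from the thesis) assuming numpy; expm_series is a simple truncated-series stand-in for a library matrix exponential, and the example matrices are arbitrary.

```python
import numpy as np

def expm_series(M, terms=40):
    # Matrix exponential via truncated Taylor series -- adequate for the
    # small, well-scaled arguments (A * dt) used here.
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def discretize(A, B, dt):
    # F = exp(A*dt); G = A^-1 (F - I) B, valid when A is invertible and
    # u(t) is held constant over each sampling interval.
    F = expm_series(A * dt)
    G = np.linalg.solve(A, F - np.eye(A.shape[0])) @ B
    return F, G

# Example: a damped second-order system with one input (values arbitrary).
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
F, G = discretize(A, B, 0.1)
```

With u held constant over the interval, F·x(k) + G·u(k) reproduces the continuous trajectory exactly at the sampling instants, which is the sense in which the conversion is "exact".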
Lindahl and Ljung achieved acceptable estimates but expressed a desire to introduce additional intentional disturbances. Price et al. [7] tuned simulation programs to produce output signals that were very similar to actual field measurements. They concluded that parameter estimation from on-line measurements is possible, but that a choice of several model structures should be available, based upon knowledge from system planning data.

(c) Effects of noise and feedback

One has to be careful that the chosen input signal is not the result of a feedback. An example of what can happen is given in [8] for a transfer function model (1-6) in which the plant G(z) is driven through a feedback path H(z) and noise w enters the loop. An attempt to identify G(z) from the measured input and output would yield the estimate

Y(z)/U(z) = 1/H(z)

i.e., the inverse of the feedback path rather than the plant. For a power system model, the excitation voltage and the mechanical torque can be treated as input signals only if the feedback loops of the governor and voltage regulator are open.

1.4 Sources of Input Noise in Power Systems

Field measurements of terminal quantities on a generator (e.g. [7]) show fluctuating signals even if no intentional disturbance is applied. These fluctuations come from changes in the internal or external system, and the model of the entire system should take them into account. In this thesis, only fluctuations in the external system power demand will be considered. The simplest way to account for these demand changes is to model the active and reactive power demand as stochastic processes with the steady state power as mean value. Hence all load dynamics should be included in the one-machine model that describes the dynamics of the external system. Price et al. [5-7] introduced power demand as a stochastic input in their model, whereas Lindahl et al. [4] introduced a general noise vector w with unknown covariance. The easiest noise process from an estimation standpoint is Gaussian white noise, and this rather restrictive assumption is made for most of the following chapters.
How a somewhat more realistic noise sequence can be introduced in the power system model is shown in section 2-7.

1.5 Methods for Parameter Estimation

A variety of methods is available; the problem is to find one which yields an unbiased, reliable, stable estimate with reasonable computing effort.

(a) Parameter estimation as state estimation

For state estimation in linear systems, the Kalman filter is a very efficient and well proven estimator. The parameters α in Eq. (1-4) can be considered as system states which obey the equation α(k+1) = α(k). The parameter estimation problem can therefore be viewed as estimation of the states of the augmented model [9]:

[ x(k+1) ]   [ F(α(k))  0 ] [ x(k) ]
[ α(k+1) ] = [ 0        I ]·[ α(k) ]

This method has two drawbacks: the augmented system becomes nonlinear even if the original system was linear, and the order becomes higher too. It is not too well suited for estimation of unknown parameters [10], but it should be considered for parameter tracking if very good initial estimates are available.

(b) Least squares methods

Least squares methods involve minimization of a cost which is a quadratic function of some error. The error results from comparison of the model behaviour with the measurements on the real system.

The error is normally defined so that it is linear in the parameters; the cost function can then be minimized in one step. These so-called "equation error" methods yield efficient algorithms and are easily implemented as recursive estimators. Unfortunately, they can produce strongly biased estimates when noise is present in the system [8].

In "output error" methods, the error is the difference between measured output and model output. This error is not linear in the parameters, and nonlinear techniques are applied to optimize the cost function. This method was used by Yu et al. [3] for a deterministic system. It can be generalized to the case with noise and is related to the maximum likelihood method (see section 2-5).
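The bias of equation-error least squares under measurement noise can be seen in a small simulation (an illustrative sketch assuming numpy; the system and noise values are arbitrary, not from the thesis):

```python
import numpy as np

# Fit x(k+1) = a*x(k) to noisy measurements z(k) = x(k) + v(k) by
# one-step (equation-error) least squares.
rng = np.random.default_rng(0)
a_true = 0.9
N = 20000
x = np.zeros(N)
for k in range(N - 1):
    x[k + 1] = a_true * x[k] + rng.normal(0.0, 1.0)   # input noise w(k)
z = x + rng.normal(0.0, 1.0, N)                       # measurement noise v(k)

# Equation-error least squares applied directly to z is biased toward
# zero, because the regressor z(k) is itself corrupted by v(k).
a_hat = np.sum(z[1:] * z[:-1]) / np.sum(z[:-1] ** 2)
```

The same fit applied to the noise-free state sequence x recovers a_true closely; the bias comes entirely from the measurement noise entering the regressor, which is the effect cited from [8] above.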
(c) Maximum likelihood method

The maximum likelihood method is designed for stochastic system models where the outputs are random variables. It basically adjusts the model parameters in a way that maximizes the probability that the model delivers the measured output. It is reported to yield stable, unbiased estimates [8] and has been shown useful for power system estimation [4-7, 11]. The maximum likelihood method works for systems with several outputs and also for systems with only noise inputs. Although it is numerically expensive, it can be implemented on a modern large computer.

2. THE MAXIMUM LIKELIHOOD ALGORITHM FOR PARAMETER ESTIMATION

In this chapter, the maximum likelihood algorithm for parameter estimation is developed. The major component of the estimator is a linear optimal filter (Kalman filter). The filter presented is useful for digital processing of sampled measurements from a continuous process. In the case of a general white input noise, the model and filter should be rearranged with the filter gains as unknown stochastic parameters to ensure uniqueness. Alternatively, an input-noise model of low order may be preferable. A comparison with a least squares algorithm is made. Some methods for nonlinear function optimization are presented. The system model is extended to include correlated noise input.

2.1 State Space Description and Likelihood Function

Let the physical system be described by the following equations:

x'(t) = A(α)·x(t) + B(α)·u(t) + C(α)·w(t)    (2-1)
y(t_k) = H(α)·x(t_k) + v(t_k)                 (2-2)

with the vectors:
x: state
u: control input
y: model output
w: plant or input noise
v: measurement noise
α: unknown physical parameters

The mixed continuous (2-1) and discrete (2-2) form indicates that the model is derived from differential equations but measurements are sampled and processed in discrete time. w is a Gaussian white noise process with zero mean and constant covariance Q, i.e.:

E[w(t)] = 0,   E[w(t)·w^T(τ)] = Q·δ(t-τ)    (2-3)

v is a Gaussian white noise vector with zero mean and constant covariance R, i.e.:

E[v(k)] = 0,   E[v(k)·v^T(l)] = R·δ_kl    (2-4)

where the index k denotes time t_k. In addition to α, Q and R may be unknown too.

With these assumptions for v and w, the probability density function of the output pd(y(k)) can be calculated for all k as a function of α, Q and R from Eqs. (2-1, 2-2). The likelihood function is the value of the joint probability density for the actually measured output sequence,

pd_{y(N),y(N-1),...,y(1)}(z(N), z(N-1), ..., z(1) | α, Q, R)

where z(1) ... z(N) denote the available output measurements. The maximum likelihood method attempts to find those estimates of α, Q and R for which the likelihood function is maximized.

Calculation of the log likelihood function [10]

The joint probability density function is computed recursively by application of Bayes' rule. With the abbreviations

Y(k) = [y(k), y(k-1), ..., y(1)]  and  Z(k) = [z(k), z(k-1), ..., z(1)]    (2-5)

pd_Y(k)(Z(k) | α,Q,R) = pd_y(k)(z(k) | Z(k-1); α,Q,R) · pd_Y(k-1)(Z(k-1) | α,Q,R)    (2-6)

If L denotes the negative logarithm of the likelihood function, Eq. (2-6) becomes

-L(k; α,Q,R) = -L(k-1; α,Q,R) + ln pd_y(k)(z(k) | Z(k-1); α,Q,R)    (2-7)

With L(0) = 0 and the recursion carried out,

L(N; α,Q,R) = - Σ_{k=1}^{N} ln pd_y(k)(z(k) | Z(k-1); α,Q,R)    (2-8)

Minimizing L will maximize the likelihood, since the logarithm is a monotonic function. It remains to evaluate the conditional probability density of y(k). Because of the Gaussian assumption for the noises w and v, this density is also a normal distribution. The normal distribution for a scalar random variable is

pd_y(z) = (2π·σ²)^(-1/2) · exp(-e²/2σ²)    (2-9)

where e = z - ȳ, ȳ = E[y], σ² = E[(y-ȳ)²]. The corresponding multivariable normal distribution for a random vector is

pd_y(k)(z(k) | Z(k-1)) = [(2π)^m · det P_y(k|k-1)]^(-1/2) · exp[-(1/2)·e^T(k)·P_y^{-1}(k|k-1)·e(k)]    (2-10)

where
m = dimension of the vector y
e(k) = z(k) - ȳ(k|k-1)
ȳ(k|k-1) = E[y(k) | Z(k-1); α,Q,R]
P_y(k|k-1) = E[(y(k)-ȳ(k))·(y(k)-ȳ(k))^T | Z(k-1); α,Q,R]

The conditional mean and the covariance of y(k) are provided by an optimal linear filter (Kalman filter) that processes the measurements z(1) ... z(k-1) for the specified values of α, Q, R.

2.2 The Continuous-Discrete Kalman Filter

The Kalman filter for the model (2-1, 2-2) is a combination of the discrete and continuous filters described by Gelb [9].

(a) Propagation in the absence of measurements

Between t_{k-1} and t_k no measurements are available, and the state estimate x(t) and its error covariance P(t) are simply extrapolated:

x'(t) = A·x(t) + B·u(t)                (t_{k-1} < t < t_k)    (2-11)
P'(t) = A·P(t) + P(t)·A^T + C·Q·C^T    (t_{k-1} < t < t_k)    (2-12)

(b) Update with new measurement

At time t_k a new measurement z(t_k) is available and is used to update the state estimate and covariances. There is a discontinuity at this point, and the indices t_k- and t_k+ (or simply k- and k+) are used to indicate the quantities before and after the updating takes place.

y(k-) = H·x(k-)                             output estimate           (2-13)
e(k) = z(k) - y(k-)                         innovations               (2-14)
P_e(k) = H·P(k-)·H^T + R                    innovations covariance    (2-15)
x(k+) = x(k-) + K(k)·e(k)                   updated state estimate    (2-16)
P(k+) = [I - K(k)·H]·P(k-)                  updated error covariance  (2-17)
K(k) = P(k-)·H^T·[H·P(k-)·H^T + R]^{-1}     Kalman gains              (2-18)

y(k-) and P_e(k) are the required mean ȳ(k|k-1) and covariance P_y(k|k-1) in the likelihood formula (2-10). Substitution of (2-10) into (2-8) yields

L(N; α,Q,R) = (1/2)·Σ_{k=1}^{N} [m·ln 2π + ln det P_e(k) + e^T(k)·P_e^{-1}(k)·e(k)]    (2-19)

The calculation of the likelihood value can therefore be done by a slightly modified Kalman filter.

Solution of the differential equations (2-11, 2-12)

It is possible to integrate both x(t) and P(t) numerically, for example by the Runge-Kutta method, but the computational burden would be excessive. Since the model parameters are all time-invariant, Eqs.
(2-11) and (2-12) may be converted into difference equations. For

x'(t) = A·x(t) + B·u(t),    t_{k-1} < t < t_k,    x(t_{k-1}) given    (2-11)

the superposition integral is

x(t_k) = Φ(t_k - t_{k-1})·x(t_{k-1}) + ∫_{t_{k-1}}^{t_k} Φ(t_k - τ)·B·u(τ) dτ

with the transition matrix Φ(τ) = exp(A·τ). If the input u is allowed to change its value only at the time points t_k, k = 1 ... N (see section 1-2), then the constant term B·u(τ) = B·u(t_{k-1}) can be taken out of the integral. With

Δt = t_k - t_{k-1}    (2-20)

F(α) = Φ(Δt, α) = exp(A(α)·Δt)    (2-21)

G(α) = ∫_0^{Δt} exp[A·(Δt-τ)] dτ · B = A^{-1}(α)·[F(α) - I]·B(α)    (2-22)

the integrated version of (2-11) is

x(t_k-) = F(α)·x(t_{k-1}+) + G(α)·u(t_{k-1})    (2-23)

For the covariance equation (2-12), the same approach as for the matrix Riccati equation [9, p.136] is taken. For the equation P' = A·P + P·A^T + C·Q·C^T (2-12), the transformations X = P·Y and Y' = -A^T·Y define a linear system

d/dt [ Y ]   [ -A^T     0 ] [ Y ]
     [ X ] = [ C·Q·C^T  A ]·[ X ]

with the solution

[ Y(Δt) ]   [ F^{-T}  0 ] [ Y(0) ]
[ X(Δt) ] = [ Γ       F ]·[ X(0) ]

where

[ F^{-T}  0 ]            [ -A^T     0 ]
[ Γ       F ]  =  exp(Δt·[ C·Q·C^T  A ])    (2-24)

and for P

P(t_k-) = Γ·F^T + F·P(t_{k-1}+)·F^T    (2-25)

There are many methods to evaluate the matrix exponentials numerically; see section 4-3.

2.3 Steady State of the Kalman Filter

The calculation of one likelihood value L(N; α,Q,R) as developed in the previous sections involves two steps:

(1) For given values of α, Q, R and P(0) = E[x(0)·x^T(0)], the system matrices F(α), G(α), H(α), the Kalman gains K(t_k), k = 1...N, and the innovations covariances P_e(t_k), k = 1...N are calculated.

(2) The measurements u(0) ... u(N-1); z(1) ... z(N) are processed by the Kalman filter to yield the innovation sequence e(1) ... e(N) and L.

Since both the system dynamics F, G, H and the noise statistics Q, R are assumed independent of time, the Kalman filter will reach a statistical steady state, i.e., the matrices P(t_k+), P(t_k-), P_e(t_k), K(t_k) converge to constant matrices P(+), P(-), P_e, K.
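The block-exponential evaluation of (2-24) and the covariance step (2-25) can be sketched numerically. This is an illustrative sketch (not the thesis program) assuming numpy; expm_series is a truncated-series stand-in for a library matrix exponential, adequate for the well-scaled arguments used here, and the example matrices are arbitrary.

```python
import numpy as np

def expm_series(M, terms=60):
    # Matrix exponential via truncated Taylor series.
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def discretize_cov(A, CQC, dt):
    # Builds the block matrix of (2-24), exponentiates it, and extracts
    # F = exp(A*dt) and Gamma; Gamma*F^T is the constant noise term of the
    # discrete covariance recursion (2-25).
    n = A.shape[0]
    M = np.zeros((2 * n, 2 * n))
    M[:n, :n] = -A.T
    M[n:, :n] = CQC          # C*Q*C^T
    M[n:, n:] = A
    E = expm_series(M * dt)
    F = E[n:, n:]
    Gamma = E[n:, :n]
    Qd = Gamma @ F.T         # so that P(k-) = F*P(k-1+)*F^T + Qd
    return F, Qd

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
CQC = np.array([[0.0, 0.0], [0.0, 0.3]])
F, Qd = discretize_cov(A, CQC, 0.1)
```

Qd equals the integral of exp(A·τ)·C·Q·C^T·exp(A^T·τ) over one sampling interval, which is what the transformation behind (2-24) computes in closed form.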
The estimation algorithm requires relatively long data sequences (> 100 points); the time-dependent covariance and gain matrices can therefore be replaced by the steady-state matrices, with the additional advantage that the dependence upon the unknown initial covariance P(0) is removed. The steady state value could be obtained by solving the matrix equation P(t_k+) = P(t_{k-1}+). However, this system of nonlinear algebraic equations is difficult to solve. It is numerically more convenient to calculate the Kalman gains as shown in the first step above with an arbitrary matrix P(0) and use the values to which the gains converge. For the power system model considered in section 3-5 and the choice P(0) = 0, convergence to 5 digits was usually obtained in 5 to 10 steps, an indication that statistical steady state can safely be assumed.

Summary of Equations for the Likelihood Function

System model (process):
x'(t) = A(α)·x(t) + B(α)·u(t) + C(α)·w(t)    (2-1)
z(k) = H(α)·x(t_k) + v(k)                     (2-2)
E[w(t)·w^T(τ)] = Q·δ(t-τ)                     (2-3)
E[v(k)·v^T(l)] = R·δ_kl                       (2-4)

Kalman filter:
x(k-) = F·x(k-1+) + G·u(k-1)    (2-23)
e(k) = z(k) - H·x(k-)           (2-13/14)
x(k+) = x(k-) + K·e(k)          (2-16)

Likelihood function (constant term m·ln 2π neglected):
L(N; α,Q,R) = (1/2)·{N·ln det P_e + Σ_{k=1}^{N} e^T(k)·P_e^{-1}·e(k)}    (2-19)

Steady state matrices:
K = lim_{k→∞} K(k),  P_e = lim_{k→∞} P_e(k)  from
P(k-) = F·P(k-1+)·F^T + Γ·F^T;  P(0+) = 0    (2-25)
P_e(k) = H·P(k-)·H^T + R                     (2-15)
K(k) = P(k-)·H^T·P_e^{-1}(k)                 (2-18)
P(k+) = [I - K(k)·H]·P(k-)                   (2-17)

Remark: this form of the Kalman filter is valid also in the case of perfect measurements (R = 0), although numerical difficulties may arise when inverses are calculated.

Continuous-discrete relationship:
F = exp(Δt·A(α)),  G = A^{-1}·(F - I)·B    (2-21, 2-22)

[ F^{-T}  0 ]            [ -A^T     0 ]
[ Γ       F ]  =  exp(Δt·[ C·Q·C^T  A ])    (2-24)

By use of the identity x^T·A·y = trace(A·y·x^T), the time-independent matrix P_e^{-1} can be taken out of the summation in the likelihood formula.
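The two-step calculation of L described in this section can be sketched numerically. This is an illustrative sketch (not the thesis program) assuming numpy and a model already in the discrete form of the summary above; Qd stands for the discrete noise term Γ·F^T of (2-25), and no deterministic input is used, for brevity.

```python
import numpy as np

def steady_state_filter(F, H, Qd, R, iters=100):
    # Step (1): iterate (2-25), (2-15), (2-18), (2-17) from P(0+) = 0
    # until the gains settle (the thesis reports 5 to 10 steps).
    n = F.shape[0]
    P = np.zeros((n, n))
    for _ in range(iters):
        Pm = F @ P @ F.T + Qd               # P(k-)
        Pe = H @ Pm @ H.T + R               # innovations covariance (2-15)
        K = Pm @ H.T @ np.linalg.inv(Pe)    # Kalman gains (2-18)
        P = (np.eye(n) - K @ H) @ Pm        # P(k+) (2-17)
    return K, Pe

def likelihood(F, H, K, Pe, z_seq):
    # Step (2): filter the data with the constant K, Pe and evaluate the
    # steady-state form of (2-19) (constant m*ln(2*pi) term dropped).
    x = np.zeros(F.shape[0])
    Pe_inv = np.linalg.inv(Pe)
    quad = 0.0
    for z in z_seq:
        x = F @ x               # extrapolate (2-23), no input here
        e = z - H @ x           # innovations (2-13/14)
        x = x + K @ e           # update (2-16)
        quad += e @ Pe_inv @ e
    return 0.5 * (len(z_seq) * np.log(np.linalg.det(Pe)) + quad)
```

Evaluating this for trial parameter values and minimizing the returned L is the outer loop of the estimation procedure.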
The likelihood (2-19) can then be written as

L(N; α,Q,R) = (1/2)·{N·ln det P_e + trace(P_e^{-1}·Σ_{k=1}^{N} e(k)·e^T(k))}    (2-27)

2.4 Maximum Likelihood Estimates

The existence of optimal parameter estimates is now investigated.

Uniqueness of optimal parameters

The major drawback of state space models is their non-uniqueness; many combinations of the matrices A, B, C, H account for the same input-output sequence. For multivariable systems even canonical forms are not unique [11]. For a physically derived model with a small set of parameters α, this need not be a big problem. Care should be exercised in the choice of parameters, so that the transfer functions H·(sI-A)^{-1}·B and H·(sI-A)^{-1}·C are unique.

Both Åström [11] and Kashyap [12] point out that in general the noise statistics (Q and R) cannot be determined by the likelihood method. Different noise covariances may give the same values for K and P_e and therefore the same likelihood L. Åström suggests that L be minimized as a function of α, K and P_e.

The Innovations Representation [8, 11]

The equations of the Kalman filter and the system model can be combined into the following model:

x(t_k) = F(α)·x(t_{k-1}) + G(α)·u(t_{k-1}) + K'·e(t_{k-1})    (2-28)
z(t_k) = H(α)·x(t_k) + e(t_k)                                  (2-29)

where K' = F·K and e(t_k), k = 1...N, is a sequence of independent vectors with zero mean and covariance P_e. These equations can be solved for the innovations e if α, K', u and z are given. The likelihood function is

L(N; α,K',P_e) = (1/2)·{N·ln det P_e + trace(P_e^{-1}·Σ_{k=1}^{N} e(k)·e^T(k))}    (2-30)

The minimization with respect to P_e is done analytically. Setting the derivatives of L with respect to each element of P_e to zero gives the condition

P_e,opt = (1/N)·Σ_{k=1}^{N} e(k)·e^T(k)    (2-31)

The maximum likelihood algorithm is then restated as:

minimize V(N; α,K') = det Σ_{k=1}^{N} e(k)·e^T(k)    (2-32)
e(k) = z(k) - H(α)·x(k)                               (2-33)
x(k) = F(α)·x(k-1) + G(α)·u(k-1) + K'·e(k-1)          (2-34)
x(0) = 0,  e(0) = 0

This is very similar to a minimum variance estimator; the relationship is shown in the next section.
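The restated criterion (2-32)-(2-34) can be sketched as a cost function over the data (an illustrative sketch assuming numpy; K' is passed in as Kp, and the sequences are assumed aligned so that u is u(k-1) when z is z(k)):

```python
import numpy as np

def innovations_cost(F, G, H, Kp, u_seq, z_seq):
    # Runs the innovations model (2-33), (2-34) with x(0) = 0, e(0) = 0
    # and returns V = det( sum_k e(k) e(k)^T ) as in (2-32).
    x = np.zeros(F.shape[0])
    e = np.zeros(H.shape[0])
    S = np.zeros((H.shape[0], H.shape[0]))
    for u, z in zip(u_seq, z_seq):
        x = F @ x + G @ u + Kp @ e    # (2-34)
        e = z - H @ x                 # (2-33)
        S = S + np.outer(e, e)
    return np.linalg.det(S)
```

For a single output, det S reduces to the plain sum of squared innovations, which is the identity (2-38) exploited in section 2.5.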
Kashyap [12] provides a proof of convergence and the necessary conditions for estimation from an innovations representation. However, he uses a different type of input-output description to avoid the ambiguities of the state-space model. The conclusions of [12] can be interpreted in the following way. Under the assumptions of

- zero mean, Gaussian input noise with a certain covariance,
- zero mean, independent Gaussian measurement noise,
- a stable feedback system and linear optimal filter (i.e., eigenvalues of both F and F - K'·H inside the unit circle),

the maximum likelihood estimates of both the parameters in F and H and the filter gains K will converge to the true values as the number of samples N goes to infinity. (NB: this includes the case with no deterministic inputs.)

Choice of reference model

For the theoretical reasons mentioned, the innovations description should be used for estimation of a general model. This was done by Lindahl [4], who made no special assumptions about the manner in which the input noise enters the system. On the other hand, if there is a simple physical description of the source of the input noise, then the noise covariance Q may contain only a few unknown entries. The simple power system model of section 3-5 contains only one stochastic parameter (the load demand variance), but at least six Kalman gains. In this case the original function L(α,Q,R) should be optimized, since it involves fewer parameters. Price et al. [5] also chose this approach for their model.

2.5 Least Squares Algorithms

(a) Single output systems

For a physically derived model with measurement noise, the output error method is used. The cost function is

J(α) = Σ_{k=1}^{N} e²(k/α),   e(k/α) = z(k) - y(k/α)    (2-35)

The output error or residual e is the difference between the measured output z and the output y of a reference model. Nonlinear optimization methods are used to minimize J(α).
Convergence properties of the estimates depend on the reference model that provides the y(k). If the reference model is of the form

x(k) = F(α)·x(k-1) + G(α)·u(k-1)    (2-36)
y(k) = H(α)·x(k)                    (2-37)

which does not include any noise, but the measurements are created by a process disturbed by noise at the input (w(k)), then the residuals e(k) will be a correlated sequence. In this case the least squares estimate is biased [8]. For transfer function models with noise, the least squares method is generalized by pre-filtering the measurements u(k) and z(k) in such a way that the residuals e(k) become uncorrelated. For a state space model, this is accomplished by the Kalman filter. If the innovations model (2-33, 2-34) is used as reference instead of (2-36, 2-37), then the residuals are the uncorrelated innovations. In fact, for a single output system the likelihood function (2-30) reduces to

V(α,K) = det Σ_{k=1}^{N} e(k)·e^T(k) = Σ_{k=1}^{N} e²(k) = J(α,K)    (2-38)

In the case of a single output, the maximum likelihood estimate and the least squares estimate (for the innovations model) are accordingly identical. In addition, it is also the minimum variance estimate, since J(α,K) is an unbiased approximation of the variance of e.

(b) Multiple outputs

The least squares cost function for several outputs is

J(α) = Σ_{k=1}^{N} e^T(k/α)·W·e(k/α);   e(k/α) = z(k) - y(k/α)    (2-39)

with an arbitrary weighting matrix W. If the e(k) are independent Gaussian vectors (from an innovations model), then it can be shown that the optimal choice for W is the inverse of the covariance matrix, and for this choice the least squares estimate is equal to the minimum variance estimate with the cost function

J'(α) = trace Σ_{k=1}^{N} e(k)·e^T(k) = Σ_i Σ_{k=1}^{N} e_i²(k)    (2-40)

This is different from the likelihood function, which involves the determinant of the covariance of e. Kashyap [12] points out that the likelihood function is the correct criterion and that there are cases where estimation with Eq.
(2-40) does not yield the true parameters.

2.6 Optimization Methods

With a minimum for the likelihood function guaranteed and the necessary equations to compute function values available, the problem of how to find the minimum value still remains. A great number of nonlinear optimization methods exist, but it is not easy to find one that is both reliable and efficient. All the methods discussed here are described in Adby and Dempster [13]; some of them are available at UBC as subroutines, see Manual UBC NLP [14].

(a) Search methods

These methods attempt to find an optimum by using function evaluations only. Problems arising from computation of derivatives are avoided, but a large number of function evaluations are needed. A simple method is the nonlinear simplex technique; a more sophisticated one is Powell's method of conjugate directions.

(b) Gradient methods

Gradient methods are based on a first order Taylor expansion of the objective function,

V(α₀+Δα) = V(α₀) + g^T·Δα,   g = ∂V/∂α    (2-41)

They are normally superior to search methods for functions with continuous derivatives. Examples are steepest descent and conjugate gradient methods. A second order Taylor expansion involving the Hessian matrix H,

V(α₀+Δα) = V(α₀) + g^T·Δα + (1/2)·Δα^T·H·Δα    (2-42)

leads to Newton's method with the update formula

Δα = -H^{-1}·g    (2-43)

The direct use of Newton's method is limited, because the Hessian is usually very expensive to evaluate and the method is prone to divergence for initial values not near the optimum. But there are many approximations, known as quasi-Newton methods, that seek to approximate the Hessian by a positive-definite matrix using only derivatives of first order. For least squares problems the Gauss-Newton method is often applied, with several possible modifications. Another group consists of the variable metric methods, the best known of which is the DFP (Davidon-Fletcher-Powell) algorithm.
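On an exactly quadratic objective, the Newton update (2-43) reaches the minimum in a single step; a small illustration (assuming numpy; the matrix values are arbitrary, and Hm plays the role of the Hessian H):

```python
import numpy as np

Hm = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive-definite Hessian
b = np.array([1.0, -2.0])

def V(a):
    # V(a) = 0.5*a^T*Hm*a - b^T*a, so grad V = Hm*a - b
    return 0.5 * a @ Hm @ a - b @ a

def grad(a):
    return Hm @ a - b

a0 = np.array([10.0, -10.0])
a1 = a0 - np.linalg.solve(Hm, grad(a0))   # Delta-alpha = -H^-1 * g  (2-43)
```

For the likelihood functions of this chapter the Hessian is not available in closed form, which is why the quasi-Newton and search alternatives above matter in practice.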
Bard [15] compared several of the gradient methods on several static parameter estimation problems and found that the Gauss-Newton methods usually performed best.

(c) Evaluation of gradients

For the innovations model (2-31, 2-32) and the likelihood function

V'(α,K) = ½·ln V(α,K) = ½·ln det S ;  S = Σ_k e(k)·e^T(k)   (2-44)

the derivative with respect to the physical parameter α_i is given by

∂V'/∂α_i = Σ_k e^T(k)·S⁻¹·∂e(k)/∂α_i   (2-45)

and the sensitivity equations

∂e(k)/∂α_i = -∂H/∂α_i·x(k) - H·∂x(k)/∂α_i   (2-46)
∂x(k)/∂α_i = ∂F/∂α_i·x(k-1) + F·∂x(k-1)/∂α_i + ∂G/∂α_i·u(k-1)   (2-47)

The sensitivity matrices ∂F/∂α_i, ∂G/∂α_i are finally obtained from ∂A/∂α_i, ∂B/∂α_i. For a physical power system model as developed in Chapter 3, the analytical derivatives of the system matrices with respect to the parameters become very complicated. A computationally simpler approach is to perturb each parameter in turn by a small amount Δα_i and evaluate the gradient from the approximations

∂V/∂α_i ≈ [V(α+Δα_i) - V(α)]/Δα_i   (2-48)

Each perturbation Δα_i must be small enough that the linear approximation holds, but large enough that roundoff errors are kept small. A gradient evaluation requires as many function evaluations as there are unknown parameters.

Choice of an optimization method

The theoretical advantage of the gradient methods seems questionable in view of the computational effort required to compute the gradient. The main reason for this is that a model derived from physical principles tends to have only a small number of parameters, which enter the model in a complex way. The option to experiment with several methods on the actual model should be left open. At UBC this is easily possible by working with the Nonlinear Optimization Monitor [14], which offers a choice of optimization methods. From the limited experience gained with the model developed in section 3-5, it seems that Powell's method of conjugate directions performs best for this type of model.
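The perturbation scheme (2-48) is straightforward to sketch. The routine below is a generic illustration; the scaling of the step to the parameter size is an added assumption, not taken from the thesis:

```python
import numpy as np

def fd_gradient(V, a, rel_step=1e-6):
    # Forward-difference approximation of dV/da_i (Eq. 2-48).
    # One extra function evaluation per parameter; the step must balance
    # truncation error (too large) against roundoff error (too small).
    a = np.asarray(a, dtype=float)
    V0 = V(a)
    g = np.zeros_like(a)
    for i in range(a.size):
        da = rel_step * max(abs(a[i]), 1.0)   # assumed scaling of the step
        ap = a.copy()
        ap[i] += da
        g[i] = (V(ap) - V0) / da
    return g

# Example with a known gradient: V(a) = a1^2 + 3*a2
g = fd_gradient(lambda a: a[0]**2 + 3.0*a[1], [2.0, 5.0])
```

The cost of one gradient is one evaluation of V per parameter, which is why the thesis notes that gradient methods lose their theoretical advantage for this kind of model.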
2.7 Correlated Input Noise

The model proposed in section 2-1 may be inadequate for two reasons:

- The input noise w is not white but band limited, and would better be modelled as a first order Gauss-Markov process:

  dw(t)/dt = A_w·w(t) + s(t) ,  s(t) white noise   (2-49)

- The input noise w may influence the output directly, so that Eq. (2-2) is replaced by

  y(t) = H(α)·x(t) + D(α)·w(t) + v(t)   (2-50)

The resulting complications for the estimator can be resolved by including w in an augmented state vector [6]:

d/dt [x(t); w(t)] = [A(α) C(α); 0 A_w]·[x(t); w(t)] + [B(α); 0]·u(t) + [0; I]·s(t)   (2-51)

y(t_k) = [H(α) D(α)]·[x(t_k); w(t_k)] + v(t_k)   (2-52)

This augmented state space model has the same stochastic properties as the original one. The elements of the matrix A_w may be considered additional unknown parameters or may be assigned empirical values.

3. DYNAMIC POWER SYSTEM MODEL

A two machine equivalent with random load demand is used as a dynamic power system model. A general method to obtain a linearized state space model from a system of mixed algebraic and differential equations is presented and applied to the two-machine case. A simplified version of the model is used for testing the estimation algorithm.

3.1 Structure of the Model

The following model is used for parameter estimation:

Fig. 3.1 Structure of the model: local generator (i) connected through the network to the external system

- local generator (i): This is the generator for which a dynamic bus model shall be identified. The generator is described as a salient pole machine of 3rd order (state variables δ_i, ω_i, E'_qi) with an additional voltage regulator and governor, each of first order (state variables E_FDi, T_mi). All parameters are known. Measurements can be made at this local generator only. They include the terminal quantities V_t, P_t, Q_t and the rotor speed ω_i.
- external system: The traditional concept of an infinite bus is replaced by a simple external system consisting of:

  - a large external round-rotor synchronous machine (j) which represents the dynamics of the external system. A third order description includes the equations of motion and armature reaction (state variables δ_j, ω_j, E'_qj).

    unknown parameters:
    M_j - total external inertia
    D_j - total external damping
    X_j - steady-state reactance of machine j
    X'_j - transient reactance
    T'_doj - transient open circuit time constant

  - a bus load P_L + jQ_L. No dynamics are included for this load, but the load demand is assumed to vary randomly in the following way: P_L = P̄_L + ΔP_L, where P̄_L is the steady-state load and ΔP_L is a Gaussian white-noise process.

  - the network interconnection, which consists of the unknown transmission line reactance X_t between the machine terminal and the fictitious bus. Only algebraic equations are considered. These are obtained from phasor diagrams for synchronous speed.

Depending on the size of the load P_L, Q_L, the external machine operates as a motor or a generator. The sign convention for a generator is always employed.

3.2 Derivation of the State Equations

The following approach was chosen because it allows both simplification of the resulting model as well as generalization to more machines. The desired model is linear, valid for small transient deviations from steady state. It is obtained by a first order approximation of the nonlinear equations about the steady state. The steady state is evaluated separately; it depends upon the unknown parameters (see section 3-6). From the physical model, two types of equations are obtained:

(1) state equations of one machine with respect to the load bus (section 3-3): They can be written in the form dx/dt = f(x,u,r), where x are the states (δ, ω, E'_q, ...), u are the control inputs and r are intermediate variables of the load bus (δ_L, V_L).
The linearized one machine model developed in section 3-3 is very similar to the one of De Mello and Concordia [2], with two differences:

- the voltage V_L of the load bus is variable
- the machine angle δ_i is different from the angle between machine q-axis and bus voltage (α_i = δ_i - δ_L)

The output equations are of the form y = h(x,r,v), where v stands for the measurement noise. The linearized equations of all machines are combined and one machine angle can be eliminated.

(2) Algebraic equations for the connecting network (load bus, section 3-4): Energy balance conditions at the load bus are expressed by equations g(x,r,w) = 0, where w stands for the random loads P_L, Q_L.

In order to obtain a single state space model, the equations g = 0 are solved for the intermediate variables r and substituted into the state equations. In the nonlinear form, this constitutes a load-flow problem which can be solved by iteration only. However, in the linearized case Δg(Δx,Δr,Δw) = 0 can be solved directly by Gaussian elimination. For the very simple model of section 3-5 the substitution is done analytically.

3.3 Dynamic Equations for One Machine

- equations of motion

M·dΔω/dt + D·Δω = T_m - T_e   (3-1)
dδ/dt = ω_o·Δω   (3-2)

all quantities in per unit, except δ [rad] and ω_o = 120π rad/sec

M = J·ω_o²/S_base  inertia constant
D  damping constant
T_m  mechanical torque
T_e = V_d·I_d + V_q·I_q  electrical torque
Δω = (ω_rot - ω_o)/ω_o  speed deviation
δ  angle between rotor axis and synchronous frame of reference

- electrical equations

assumptions:
- 3 windings: d, q, f
- resistance in stator winding negligible
- flux changes in d and q-axis negligible
- speed deviation negligible

from Kimbark [16], p. 73:

ψ_d = M_f·I_f - L_d·I_d
ψ_q = -L_q·I_q
ψ_f = L_ff·I_f - (3/2)·M_f·I_d
E_f = R_f·I_f + dψ_f/dt

with X_d = ω_o·L_d , X_q = ω_o·L_q , T'_do = L_ff/R_f , X'_d = X_d - (3/2)·ω_o·M_f²/L_ff
and with E'_q proportional to the field flux linkage ψ_f and E_FD proportional to the field voltage E_f, the following equations are obtained:

T'_do·dE'_q/dt + E_q = E_FD   (3-3)
V_d = X_q·I_q   (3-4)
V_q = E_q - X_d·I_d = E'_q - X'_d·I_d   (3-5)

In the following a linearized model with the load bus as reference is developed.

Power transfer between two buses:

Fig. 3.2 Simplified phasor diagram

The simplified phasor diagram of Fig. 3.2 is used in the following. Equations for both machines (i) and (j) are obtained through the substitutions:

Machine (i): E = E_qi , V = V_L , X_d = X_di + X_t , X_q = X_qi + X_t , α = α_i = δ_i - δ_L , I_d = I_di , I_q = I_qi
Machine (j): E = E'_qj , V = V_L , X_d = X'_j , X_q = X_j , α = α_j = δ_j - δ_L , I_d = I_dj , I_q = I_qj

incoming power at the load bus V:

S = P + jQ = V·I* = (V_d + jV_q)·(I_d - jI_q)
P = V_d·I_d + V_q·I_q   (3-6)
Q = -V_d·I_q + V_q·I_d   (3-7)
V_d = V·sin α = X_q·I_q ,  V_q = V·cos α = E - X_d·I_d   (3-8)

Substitution of I_d and I_q yields:

P = V·E·sin α/X_d + ½·V²·sin 2α·(1/X_q - 1/X_d)   (3-9)
Q = V·E·cos α/X_d - V²·sin²α·(1/X_q - 1/X_d) - V²/X_d   (3-10)

For small variations ΔV, ΔE, Δα:

ΔP = ∂P/∂V·ΔV + ∂P/∂E·ΔE + ∂P/∂α·Δα   (3-11)

with

∂P/∂α = V·E·cos α/X_d + V²·cos 2α·(1/X_q - 1/X_d)
∂P/∂E = V·sin α/X_d
∂P/∂V = E·sin α/X_d + V·sin 2α·(1/X_q - 1/X_d)

where V, E, α on the right hand side are steady state values.

Equations for generator (i)

- electrical torque with the appropriate substitutions:

ΔT_ei ≈ ΔP_ei = K_1i·Δα_i + K_2i·ΔE'_qi + K_7i·ΔV_L   (3-12)

with

K_1i = ∂P_ei/∂α_i = V_L·E'_qi·cos α_i/(X'_di+X_t) + V_L²·cos 2α_i·(1/(X_qi+X_t) - 1/(X'_di+X_t))
K_2i = ∂P_ei/∂E'_qi = V_L·sin α_i/(X'_di+X_t)
K_7i = ∂P_ei/∂V_L = E'_qi·sin α_i/(X'_di+X_t) + V_L·sin 2α_i·(1/(X_qi+X_t) - 1/(X'_di+X_t))

K_1i and K_2i are equal to K1 and K2 in the paper of De Mello and Concordia [2].

- internal voltage E_qi:

ΔE_qi = ΔE'_qi + (X_di - X'_di)·ΔI_di = ΔE'_qi + (X_di - X'_di)/(X'_di+X_t)·(ΔE'_qi - Δ(V_L·cos α_i))

or

ΔE_qi = (1/K_3i)·ΔE'_qi + K_4i·Δα_i + K_8i·ΔV_L   (3-13)

with

K_3i = (X'_di+X_t)/(X_di+X_t)
K_4i = (X_di - X'_di)/(X'_di+X_t)·V_L·sin α_i
K_8i = -(X_di - X'_di)/(X'_di+X_t)·cos α_i

- terminal voltage V_t:

V_t² = V_dt² + V_qt² ,  ΔV_t = (V_dt·ΔV_dt + V_qt·ΔV_qt)/V_t
ΔV_dt = X_qi·ΔI_qi = X_qi/(X_qi+X_t)·Δ(V_L·sin α_i)
ΔV_qt = ΔE'_qi - X'_di·ΔI_di = X_t/(X'_di+X_t)·ΔE'_qi + X'_di/(X'_di+X_t)·Δ(V_L·cos α_i)

hence

ΔV_t = K_5i·Δα_i + K_6i·ΔE'_qi + K_9i·ΔV_L   (3-14)

with

K_5i = (V_dt/V_t)·X_qi/(X_qi+X_t)·V_L·cos α_i - (V_qt/V_t)·X'_di/(X'_di+X_t)·V_L·sin α_i
K_6i = (V_qt/V_t)·X_t/(X'_di+X_t)
K_9i = (V_dt/V_t)·X_qi/(X_qi+X_t)·sin α_i + (V_qt/V_t)·X'_di/(X'_di+X_t)·cos α_i

- terminal power

ΔP_t = ΔP_ei = ΔT_ei   (3-15)

Again, the constants K_3i through K_6i are identical with K3 through K6 in [2].

- voltage regulator

T_FD·dΔE_FDi/dt + ΔE_FDi = K_FD·(ΔV_ref - ΔV_t)   (3-16)

- governor and turbine

T_G·dΔT_mi/dt + ΔT_mi = K_G·(Δω_ref - Δω_i)   (3-17)

ΔV_ref and Δω_ref are input signals through which an intentional disturbance can be applied.

Summary of equations for machine (i)

dΔδ_i/dt = ω_o·Δω_i
M_i·dΔω_i/dt = ΔT_mi - ΔT_ei - D_i·Δω_i
T_G·dΔT_mi/dt = -ΔT_mi + K_G·(Δω_ref - Δω_i)
T'_doi·dΔE'_qi/dt = -ΔE_qi + ΔE_FDi
T_FD·dΔE_FDi/dt = -ΔE_FDi + K_FD·(ΔV_ref - ΔV_t)

where

ΔT_ei = K_1i·Δα_i + K_2i·ΔE'_qi + K_7i·ΔV_L
ΔE_qi = (1/K_3i)·ΔE'_qi + K_4i·Δα_i + K_8i·ΔV_L
ΔV_t = K_5i·Δα_i + K_6i·ΔE'_qi + K_9i·ΔV_L

Summary of equations for machine (j)

dΔδ_j/dt = ω_o·Δω_j
M_j·dΔω_j/dt = -D_j·Δω_j - ΔT_ej
T'_doj·dΔE'_qj/dt = -ΔE_qj
ΔT_ej = K_1j·Δα_j + K_2j·ΔE'_qj + K_7j·ΔV_L
ΔE_qj = (1/K_3j)·ΔE'_qj + K_4j·Δα_j + K_8j·ΔV_L

In the combined model for both machines, one machine angle can be eliminated, since only relative positions of the machine axes are of interest: Δδ = Δδ_i - Δδ_j. This reduces the order of the model by one.

3.4 Load Bus Equations

Three more equations are required to eliminate ΔV_L, Δα_i, Δα_j from the state equations. Two relations are from power continuity conditions:

P_L = P_Li + P_Lj ,  Q_L = Q_Li + Q_Lj

and one is an angular relationship

Δδ_i - Δδ_j = Δα_i - Δα_j = Δδ

hence

ΔP_L = K_1i·Δα_i + K_1j·Δα_j + K_2i·ΔE'_qi + K_2j·ΔE'_qj + (K_7i + K_7j)·ΔV_L
with a corresponding linearized expression for ΔQ_L, and

Δδ = Δα_i - Δα_j   (3-18)

The 3 equations can be solved for ΔV_L, Δα_i, Δα_j by Gaussian elimination. This is best done numerically. The expressions are substituted into the state equations of the machines, which in effect modifies the constants K_1i ... K_9j.

3.5 Simplified Dynamic Model

Since the estimation algorithm posed some problems and is expensive in computing time, a simplified model was used:

- the voltage change on the load bus ΔV_L is assumed to be negligible
- the load bus is assumed to keep the frequency of the external machine:

  Δα_i ≈ Δδ ,  Δα_j ≈ 0

Under these assumptions, only the equations of motion of machine (j) remain relevant, and its electrical torque can be expressed as

ΔT_ej = ΔP_L - ΔT_ei   (3-19)

State equations for the simplified model:

dΔδ/dt = ω_o·(Δω_i - Δω_j)   (3-20)
M_i·dΔω_i/dt = -D_i·Δω_i + ΔT_mi - ΔT_ei   (3-21)
M_j·dΔω_j/dt = -D_j·Δω_j - ΔT_ej   (3-22)
T_G·dΔT_mi/dt = -ΔT_mi + K_G·(Δω_ref - Δω_i)   (3-23)
T'_doi·dΔE'_qi/dt = ΔE_FDi - ΔE_qi   (3-24)
T_FD·dΔE_FDi/dt = -ΔE_FDi + K_FD·(ΔV_ref - ΔV_t)   (3-25)
ΔT_ei = K_1i·Δδ + K_2i·ΔE'_qi   (3-26)
ΔE_qi = K_4i·Δδ + (1/K_3i)·ΔE'_qi   (3-27)
ΔV_t = K_5i·Δδ + K_6i·ΔE'_qi   (3-28)
ΔT_ej = ΔP_L - ΔT_ei   (3-29)

This is a 6th order model with the 3 unknown parameters M_j, D_j, X_t and the unknown covariance of the random input ΔP_L. Measurable quantities for the estimation procedure are: speed Δω_i, terminal voltage ΔV_t, terminal power ΔP_t = ΔT_ei.

3.6 Steady State Values

The steady state is computed from the known system parameters (M_i, X_di, ...), estimates of the unknown parameters and the terminal steady state quantities V_t0, P_t0, Q_t0. The index 0 is omitted in the following.

Fig. 3.3 Steady state phasor diagram

The equations (3-9, 3-10) applied to the terminal of machine (i), with ψ_i the angle between the machine q-axis and the terminal voltage and E_qdi the voltage behind X_qi, are:

P_t = V_t·E_qdi·sin ψ_i/X_qi = V_t·E'_qi·sin ψ_i/X'_di + V_t²·sin ψ_i·cos ψ_i·(1/X_qi - 1/X'_di)
Q_t = V_t·E_qdi·cos ψ_i/X_qi - V_t²/X_qi

and
applied to the transmission reactance X_t between terminal and load bus, with ψ_t the angle of V_t ahead of V_L:

-P_t = V_t·V_L·sin(-ψ_t)/X_t ,  -Q_t = V_t·V_L·cos(-ψ_t)/X_t - V_t²/X_t

with the solutions

ψ_t = arctan( P_t·X_t/(V_t² - Q_t·X_t) )   (3-30)
V_L = (V_t² - Q_t·X_t)/(V_t·cos ψ_t)   (3-31)
ψ_i = arctan( P_t·X_qi/(V_t² + Q_t·X_qi) )   (3-32)
E'_qi = P_t·X'_di/(V_t·sin ψ_i) + (1 - X'_di/X_qi)·V_t·cos ψ_i   (3-33)

4. IMPLEMENTATION OF ALGORITHM

This chapter contains a short description of the computer implementation of the estimator and of some methods to reduce computing time.

4-1 Calculation of Likelihood Values

The calculation of one likelihood value proceeds in the following major steps:

1. arguments α, Q, R
2. calculation of the system matrices A(α), B(α), C(α) from the equations of the power system model (sections 3-5, 3-6)
3. conversion to the discrete-time system: A(α), B(α) → F(α), G(α) and C·Q·C^T → Q_D (series method, see section 4-3)
4. calculation of the steady-state Kalman gains K and the (expected) innovations covariance P_e (iterative equations, see section 2-3)
5. processing of the measurement arrays u, z with the Kalman filter and summation of the innovations (sample) covariance S (equations, see section 2-3)
6. likelihood value L(α, Q, R)

The first of the steps above contains the physical model; the following steps are independent of it. The computation sequence is simpler for the case without input noise, when there is no Kalman filter.

4-2 Optimization Procedure

The Monitor for nonlinear optimization available at UBC [14] was employed to find parameter estimates. It supports interactive processing and switching among different optimization methods. The following user routines were added:

FUNCTION XDFUNC - calculation of a likelihood value as described in section 4-1
SUBROUTINE INIT - an initialization routine that must be called first and asks for the necessary data

Most of the available algorithms in [14] were tried, but none of the methods based on gradient evaluation performed well. A possible cause is that the parameter steps taken for the difference approximation of the derivative were not suitable. Good results were achieved by the conjugate directions algorithm (routine POWEL).
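The steps of section 4-1 can be sketched for a single-output discrete-time system. The routine below is an illustrative reconstruction, not the thesis Fortran code: the steady-state gain is obtained here from a discrete Riccati solver rather than the iterative equations of section 2-3, and the matrices in the usage example are arbitrary:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def likelihood(F, G, H, Qd, R, u, z):
    # One likelihood value for a single-output system driven through a
    # steady-state Kalman filter (sketch of steps 4-6 in section 4-1).
    # Steady-state prediction covariance from the discrete Riccati equation:
    P = solve_discrete_are(F.T, H.T, Qd, np.atleast_2d(R))
    Pe = float(H @ P @ H.T + R)      # expected innovations covariance
    K = P @ H.T / Pe                 # steady-state Kalman gain
    x = np.zeros(F.shape[0])
    S = 0.0                          # sample innovations covariance (sum)
    for k in range(len(z)):
        e = z[k] - float(H @ x)      # innovation
        S += e * e
        x = F @ (x + (K * e).ravel()) + G.ravel() * u[k]
    return np.log(Pe) + S / Pe       # cost of the form ln Pe + sum e^2(k)/Pe

# Usage on an arbitrary first-order test system (illustration only)
rng = np.random.default_rng(0)
F = np.array([[0.9]]); G = np.array([[0.1]]); H = np.array([[1.0]])
Qd = np.array([[1e-4]]); R = 1e-6
u = np.zeros(100)
z = rng.normal(0.0, 0.01, size=100)
L = likelihood(F, G, H, Qd, R, u, z)
```

The cost returned matches the single-output likelihood form used in section 5-3; in the deterministic case the filter (and the Riccati solve) drop out and the loop reduces to a plain least squares sum.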
4-3 Computation Efficiency

The number of function evaluations is high for any optimization algorithm. Since a function evaluation involves the setting up and running of a linear filter, computation time is long even for a small system. A considerable reduction is achieved by using the following techniques.

(a) Filter equations and matrix operations

The mathematical operations of the filter are simple but highly repetitive. Since multiplications of a vector by a matrix are among the most frequent operations, all matrices are stored by rows, so that a_ij becomes A(J,I) in Fortran. In most subroutines, array elements are addressed by a single index only. A package of short matrix subroutines was written in Fortran, with the storage order of the elements chosen carefully. These matrix routines run several times faster than their more general counterparts in the system library.

(b) Conversion of continuous to discrete time

This can be done by numerical integration. If x(0) is chosen as the i-th unit vector, then integration of the system dx/dt = A·x over one time step Δt will yield the i-th column of the transition matrix F. The number of integration steps required is therefore equal to the order of the system. It was found that with the Runge-Kutta method, up to 80% of the total computing time was spent on this integration. A much faster procedure is obtained through approximation of the matrix exponential by its series:

F = exp(A·Δt) = I + Δt·A + (Δt²/2!)·A² + ...   (4-1)

In general this method may suffer from several numerical problems, such as roundoff errors and slow convergence. But for the model in section 3-5, approximation by the first eight terms is accurate to 6 digits and about ten times faster than numerical integration. The complete recursive algorithm for F, G and Q_D is obtained from the series expansion of Eqs. (2-21, 2-22, 2-24):

F(n) = F(n-1) + (1/n!)·Ā(n) ,  F(0) = I   (4-2)
G(n) = G(n-1) + (1/n!)·B̄(n) ,  G(0) = 0   (4-3)
Q_D(n) = Q_D(n-1) + (1/n!)·Γ̄(n) ,  Q_D(0) = 0   (4-4)
Ā(n) = Ā(n-1)·Ā ,  Ā(1) = Ā = A·Δt   (4-5)
B̄(n) = Ā·B̄(n-1) ,  B̄(1) = B̄ = B·Δt   (4-6)
Γ̄(n) = Ā·Γ̄(n-1) + Γ̄(n-1)·Ā^T ,  Γ̄(1) = Γ̄ = C·Q·C^T·Δt   (4-7)

(c) Achieved computing times

The evaluation of a likelihood value for a system of 6th order, with one input, one output and 100 measurement points, requires the following approximate CPU-times (on an Amdahl 470 computer):

24 msec for the deterministic case (no Kalman gain)
55 msec for the stochastic case

A typical estimation run with 6 iterations of the conjugate directions algorithm requires about 100 function evaluations or 5 sec CPU-time. This is still considerable for such a simple model. Further improvements are possible. Most of the calculations (especially the filter) could probably be done in single instead of double precision. Of most importance would be a faster, reliable method for computing the steady state Kalman gains.

4.4 Simulation Programs

All data used for the estimation process was obtained from a computer simulation with the same model. Many of the subroutines were the same for estimation and simulation. The simulation data is therefore ideal and fulfils all assumptions of structural and noise properties that the estimator presupposes.
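The series recursions (4-2)-(4-7) translate directly into code. The sketch below is not the thesis Fortran routine; the example matrices are arbitrary, and the eight-term truncation follows the observation above. The result can be checked against a library matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

def discretize(A, B, W, dt, nterms=8):
    # Series approximation of F = exp(A*dt), G = (int_0^dt exp(A*s) ds)*B and
    # the discrete noise covariance QD = int_0^dt exp(A*s)*W*exp(A'*s) ds,
    # following the recursions (4-2)-(4-7); W = C*Q*C'.
    n = A.shape[0]
    Ab = A * dt                        # A-bar
    An = Ab.copy()                     # A-bar(n)
    Bn = B * dt                        # B-bar(n)
    Gn = W * dt                        # Gamma-bar(n)
    F = np.eye(n) + An                 # terms n = 0 and n = 1
    G = Bn.copy()
    QD = Gn.copy()
    fact = 1.0
    for k in range(2, nterms + 1):
        fact *= k
        An = An @ Ab                   # (4-5)
        Bn = Ab @ Bn                   # (4-6)
        Gn = Ab @ Gn + Gn @ Ab.T       # (4-7)
        F += An / fact
        G += Bn / fact
        QD += Gn / fact
    return F, G, QD

# Usage on an arbitrary 2nd-order test system
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
W = np.array([[0.0, 0.0], [0.0, 1e-4]])
F, G, QD = discretize(A, B, W, 0.05)
```

For a stable system and a small step A·Δt, the truncation error falls off factorially, which is why eight terms already gave 6-digit accuracy for the thesis model.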
5. DATA AND RESULTS

The simulation data and the performance of the maximum likelihood estimation method are shown for the following cases:

Case 1 - deterministic input only
Case 2 - combined deterministic and stochastic inputs
Case 3 - stochastic input only
Case 4 - stochastic input and measurement noise

5.1 Parameters and Input Data

The following parameters were used in the simulation program:

- sampling time step Δt = 0.05 sec

(a) Deterministic parameters (all per unit)

- internal generator: M_i = 5, D_i = 10, T'_doi = 8, X_di = 1, X'_di = 0.3, X_qi = 0.6, K_FD = 50, K_G = 10, T_FD = 0.5
- external system (unknown parameters α):
  - transmission reactance: X_t = 0.5
  - unknown machine: M_j = 9, D_j = 26
- steady state values: V_t0 = 1.05, P_t0 = 0.9, Q_t0 = 0.3

(b) Deterministic input

The control input in cases 1 and 2 is a short pulse applied at the reference input of the governor/turbine loop.

Fig. 5.1 Deterministic input: pulse in Δω_ref (p.u.) versus t (sec)

(c) Stochastic input

A random disturbance ΔP_L was applied in cases 2 to 4. Only white noise with a variance Q = E[ΔP_L²] = 10⁻⁴ was considered. In discrete time this corresponds to a random sequence with a standard deviation of σ_m = √(Q·Δt) = 2.2·10⁻³ p.u. For cases 1 to 3, measurement noise is neglected. In case 4, measurement noise with covariance R = 10⁻⁹ (σ_v = 0.032·10⁻³) was added to the simulated output Δω_i.

5.2 Output Data

The simulated outputs were ΔP_t, ΔV_t and Δω_i. Each example consists of a sequence of 100 points. Some examples for each case are shown in Figs. 5.2 to 5.9. Table 5-1 contains for all simulated sequences the mean value m, the standard deviation σ, the maximum absolute amplitude |A| and the expected innovation standard deviation √P_e of a Kalman filter with the true parameters.

Table 5-1: Simulation Results

Case | Signal | m | σ | |A| | √P_e | Fig.
1 | ΔP_t | -11.3 | (23.9) | 94.1 | - | 5.2
1 | ΔV_t | 0.19 | (2.02) | 7.66 | - | 5.3
1 | Δω_i | -0.43 | (0.62) | 1.84 | - | 5.4
2 | ΔP_t | -8.78 | (30.3) | 85.7 | same as case 3 | 5.5
2 | ΔV_t | 0.31 | (4.10) | 11.2 | same as case 3 | -
2 | Δω_i | -0.33 | (0.83) | 2.46 | same as case 3 | 5.6
3 | ΔP_t | 2.53 | 21.8 | 42.7 | 3.33 | 5.7
3 | ΔV_t | 0.120 | 3.07 | 6.26 | 0.388 | -
3 | Δω_i | 0.099 | 0.445 | 0.881 | 0.0268 | 5.8
4 | Δω_i | 0.1011 | 0.447 | 0.884 | 0.0806 | 5.9

All values ×10⁻³ per unit.

5.3 Convergence of Parameter Estimates

Parameters were estimated from the simulated measurements for all four cases. In all examples the cost function was minimized by the conjugate directions algorithm (routine POWEL [14]). Only single output sequences were considered. This is sufficient in the cases without measurement noise, because one output contains all available information about the system. Estimation from multiple output sequences was not attempted, because numerical problems were encountered in the calculation of the Kalman gains. This should be further investigated for the case with measurement noise.

Table 5-2 contains a summary of the obtained optimal estimates for all examples. Figs. 5-10 to 5-18 show how the estimates converged and are followed by comments for the individual cases.
Table 5-2 Estimation Results

Fig. 5-10 Case 1 - deterministic input, output ΔP_t
Fig. 5-11 Case 2 - mixed input, output ΔP_t
Fig. 5-12 Case 2 - mixed input, output Δω_i
Fig. 5-13 Case 2 - mixed input, output Δω_i
Fig. 5-17 Case 4 - with measurement noise, output Δω_i
Fig. 5-18 Case 4 - with measurement noise, output Δω_i

(a) Case 1 - deterministic

In this special case there is no Kalman filter. The estimator is the ordinary least squares estimator with the cost function V = Σ_k e²(k). Rapid convergence to the true parameters was obtained for the ΔP_t (Fig. 5-10) and Δω_i measurements. The algorithm failed for the ΔV_t measurements, because the initial steps were too large.

(b) Case 2 - deterministic and stochastic input

The same cost function was used, but now the e(k) are the innovations of a Kalman filter.
For negligible measurement noise (R → 0), the Kalman gains depend on the parameters α only, and the expected innovations covariance P_e is proportional to Q. Q is therefore estimated from the sample covariance P_e = V/N. Speed of convergence is comparable to the deterministic case, but the estimates have some bias (Figs. 5-11, 5-12). The first step of the algorithm always seems to make M_j and D_j larger, even if they are already too large (Fig. 5-13). It seems preferable to start with estimates that are smaller than expected.

(c) Case 3 - stochastic input only

Estimation with the cost function V = Σ_k e²(k) did not bring satisfactory results. Slightly biased estimates of M_j and D_j were found if X_t was held constant at the true value 0.5. But this is only one local minimum of V, and lower minima can be found for quite different values of M_j, D_j and X_t.

Good results were obtained with the original likelihood cost function L = ln det P_e + Σ_k e²(k)/P_e, where P_e is the estimated covariance of the Kalman filter. Estimates from all of the three outputs converged (Figs. 5-14 to 5-16), with a small bias for M_j and X_t and a rather large bias for D_j.

(d) Case 4 - input and measurement noise

In the first example (Fig. 5-17) Q and R are held constant at the true values and only α is optimized. The results are similar to those in case 3. In the second example (Fig. 5-18) only R is fixed; α and Q are estimated simultaneously. After four iteration steps, the estimates seem to converge reasonably. However, in the next iteration step very different estimates were found, with a slightly lower function value. For this short measurement sequence, the algorithm does not converge to the true values.

6. CONCLUSIONS

The maximum likelihood method is a general method for estimation of parameters in a linear system disturbed by both input and measurement noise.
The stochastic parameters can be modelled either as unknown gains of a linear filter or as parameters of a low order model derived from physical knowledge. The second approach is given preference, and a low order model for the remote part of a power system is proposed that explains small fluctuations by varying load demand.

Estimation results were obtained for a simplified system from one output at a time. They show that the maximum likelihood method can in fact produce good estimates for a heavily disturbed system. Parameter estimates from a system with noise input only are also possible. However, convergence problems may arise for short measurement sequences. False convergence to a wrong set of parameters is possible, especially if the measurement noise is appreciable. This case would need further investigation with longer data sequences and a more realistic power system model.

Estimation of the dynamics of a real power system from measurements alone should be possible if

- accurate measurements of small fluctuations are available, and
- the dynamic behaviour and the source of fluctuations of the external system with respect to a generator can be represented with sufficient accuracy by a very low order model.

To answer these questions, actual measurements from a generator should be investigated.

7. BIBLIOGRAPHY

1. Y.N. Yu and H.A. Moussa, "Optimal stabilization of a multi-machine system", IEEE Trans. on PAS, vol. PAS-91, pp. 1174-1182 (May 1972).
2. F.P. DeMello and C. Concordia, "Concepts of synchronous machine stability as affected by excitation control", IEEE Trans. on PAS, vol. PAS-88, pp. 316-329 (April 1969).
3. Y.N. Yu, M.A. El-Sharkawi and M.D. Wvong, "Estimation of unknown large power system dynamics", IEEE Trans. on PAS, vol. PAS-98, pp. 279-289 (January 1979).
4. S. Lindahl and L. Ljung, "Estimation of power generator dynamics from normal operating data", 3rd IFAC Symposium, The Hague (June 1972), Part I, pp. 367-374.
5. W. Price, F.C.
Schweppe, E.M. Gulachenski and R.F. Silva, "Maximum likelihood identification of power system dynamic equivalents", Proceedings 1974 IEEE Conference on Decision and Control, pp. 579-586.
6. R.D. Masiello and F.C. Schweppe, "Multi-machine excitation stabilization via system identification", IEEE Trans. on PAS, vol. PAS-94, pp. 444-454 (March 1975).
7. W. Price, D.N. Ewart, E.M. Gulachenski and R.F. Silva, "Dynamic equivalents from on-line measurements", IEEE Trans. on PAS, vol. PAS-94, pp. 1349-1357 (July 1975).
8. K.J. Åström and P. Eykhoff, "System identification - a survey", Automatica, vol. 7 (1971), pp. 123-161.
9. A. Gelb (ed.), "Applied optimal estimation", The Analytic Sciences Corporation (MIT Press), 1974.
10. F.C. Schweppe, "Uncertain dynamic systems", Prentice-Hall, 1973.
11. K.J. Åström, "Modelling and identification of power system components", in E. Handschin (ed.), "Real time control of electric power systems", Elsevier Publishing Company, 1972.
12. R.L. Kashyap, "Maximum likelihood identification of stochastic linear systems", IEEE Trans. on AC, vol. AC-15, pp. 25-34 (Feb. 1970).
13. P.R. Adby and M.A.H. Dempster, "Introduction to optimization methods", Chapman and Hall mathematics series, 1974.
14. M. Patterson, "UBC NLP, Nonlinear function optimization", Computing Centre, Univ. of B.C., 1978.
15. Y. Bard, "Comparison of gradient methods for the solution of nonlinear parameter estimation problems", SIAM Journal on Numerical Analysis, vol. 7, pp. 157-186 (March 1970).
16. E.W. Kimbark, "Power system stability: Synchronous machines", Dover, 1968.