@prefix vivo: . @prefix edm: . @prefix ns0: . @prefix dcterms: . @prefix skos: . vivo:departmentOrSchool "Applied Science, Faculty of"@en, "Electrical and Computer Engineering, Department of"@en ; edm:dataProvider "DSpace"@en ; ns0:degreeCampus "UBCV"@en ; dcterms:creator "Muller, Hans"@en ; dcterms:issued "2010-03-05T22:06:22Z"@en, "1979"@en ; vivo:relatedDegree "Master of Applied Science - MASc"@en ; ns0:degreeGrantor "University of British Columbia"@en ; dcterms:description "For control of an electric generator, it is desirable to approximate the remote part of the power system by a low order model whose parameters can be estimated from measurements. The proposed model for the remote system consists of a large synchronous machine and a randomly varying load to account for the dynamic behaviour and for the small fluctuations that are always present. The maximum likelihood algorithm is a good method for parameter estimation of a noisy system. It can even be used for estimation from measurements alone, without applied disturbance, which could be of great advantage in a power system. The algorithm is derived in a form suitable for efficient digital processing of sampled measurements from a linear continuous system with physical parameters. The performance of the algorithm is tested by computer simulation of a simplified model for several cases of deterministic and/or stochastic input. It is demonstrated that good estimates can be found from the output of a disturbed system when no intentional input is applied."@en ; edm:aggregatedCHO "https://circle.library.ubc.ca/rest/handle/2429/21569?expand=metadata"@en ; skos:note "PARAMETER ESTIMATION APPLIED TO POWER SYSTEM MODELS SUBJECT TO RANDOM DISTURBANCES by HANS MULLER Dipl. El.-Ing. ETH, Swiss Federal Institute of Technology, Zurich, 1973 A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES Department of Electrical Engineering We accept this thesis as conforming to the required standard THE UNIVERSITY OF BRITISH COLUMBIA June, 1979 (c) Hans Muller, 1979 In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my Department or by his representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission. Hans Muller, Department of Electrical Engineering, The University of British Columbia, 2075 Wesbrook Place, Vancouver, Canada V6T 1W5. Date 9/7/79 ABSTRACT For control of an electric generator, it is desirable to approximate the remote part of the power system by a low order model whose parameters can be estimated from measurements. The proposed model for the remote system consists of a large synchronous machine and a randomly varying load to account for the dynamic behaviour and for the small fluctuations that are always present.
The maximum likelihood algorithm i s a good method for parameter estimation of a noisy system. It can even be used for estimation from measurements alone, without applied disturbance, which could be of great advantage in a power system. The algorithm i s derived in a form suitable for efficient d i g i t a l processing of sampled measurements from a linear con-tinuous system with physical parameters. The performance of the algorithm i s tested by computer simulation of a simplified model for several cases of deterministic and/or stochastic input. It is demonstrated, that good estimates can be found from the output of a disturbed system when no intentional input i s applied. i i TABLE OF CONTENTS Abstract i i Acknowledgement v Nomenclature v i 1 . INTRODUCTION 1 1.1 Dynamic Power System Models 1 1.2 Parametric Models for Identification 2 1.3 Input Signal Requirements 3 1.4 Sources of Input Noise in Power Systems 6 1.5 Methods for Parameter Estimation 6 2. THE MAXIMUM LIKELIHOOD ALGORITHM FOR PARAMETER ESTIMATION 9 2 . 1 State Space Description and Likelihood Function 9 2.2 The Continuous- Discrete Kalman F i l t e r - 12 2.3 Steady State of the Kalman F i l t e r 14 2.4 Maximum Likelihood Estimates 17 2.5 Least Squares Algorithms 19 2.6 Optimization Methods 21 2.7 Correlated Input Noise 24 3. DYNAMIC POWER SYSTEM MODEL 26 3 . 1 Structure of the Model 26 3.2 Derivation of the State Equations 28 3.3 Dynamic Equations for one Machine 29 3.4 Load Bus Equations 35 3.5 Simplified Dynamic Model 36 3.6 Steady State Values 37 i i i 4. IMPLEMENTATION OF ALGORITHM 39 4.1 Calculation of Likelihood Values 39 4.2 Optimization Procedure 40 4.3 Computation Efficiency 40 4.4 Simulation Programs 42 5. RESULTS 43 5.1 Parameters and Input Data 43 5.2 Output Data 44 5.3 Convergence of Parameter Estimates 50 6. CONCLUSIONS 58 7. BIBLIOGRAPHY 60 i v ACKNOWLEDGEMENT I would like to thank my thesis supervisor, Dr. M. D. Wvong, as well as Dr. M. Davies for their guidance, advice and encouragement during the research work and writing of this thesis. My thanks also to Ms. S. Y. Kim for typing the thesis. My stay in Canada was made possible by an award of the Canada Council. This grant and the financial support of the University of British Columbia are gratefully acknowledged. v ' NOMENCLATURE General: small letter, eg. x or x column- vector T x row- vector capital letter, eg. 
A matrix or scalar quantity of power system T A transpose of matrix A E[x], x mean of random vector x T E[x.x ] covariance matrix of random vector x 6(t) Dirac delta 6,, Kronecker delta k l L, V likelihood function (scalar) Time: t continuous time t^, k discrete time, t^ = k.At State space description: x state vector y (model) output vector z (actually measured) output vector u control input vector w input noise vector v measurement noise vector a vector of unknown physical parameters A, B, C, H system matrices of continuous- time model F, G, H system matrices of discrete- time model Q, R noise covariance matrices v i Kalman f i l t e r : x(-) , x(+) extrapolated and updated state estimate P(-) , P(+) covariance matrix of x(-), x(+) respectively e innovations vector P innovations covariance matrix e K Kalman gains Power system quantities: a l l quantities except <5 are per unit quantities E, V voltage I current P, Q real and reactive power X reactance X' transient reactance T torque or time constant T' transient time constant M machine inertia D macine damping 6 angle (in rad) between machine q- axis and synchronous frame of reference a angle (in rad) between machine q- axis and bus voltage Aw speed deviation from synchronous speed v i i Indices: e e l e c t r i c a l m mechanical d d- axis q q- axis i local generator i j external macine j t terminal of local generat L load bus FD voltage regulator G speed governor v i i i 1. I INTRODUCTION 1.1 Dynamic Power System Models There is continued interest to improve dynamic and transient st a b i l i t y limits in electric power systems. Modern control theory offers new methods to achieve this goal. Linear optimal control designs show great promise [1]. These techniques in turn require a good model for the power system. The interest here is a model for a single generator to be controlled which is connected to a large system. The class i c a l approach is to model the generator as a machine connected to an i n f i n i t e bus [ 2 ] , This implies that the machine under consideration has no influence upon the dynamic behaviour of the rest of the power system. This assumption is too re s t r i c t i v e , especially for a large generator. On the other hand, each machine in the entire interconnected system may be modelled individually. The resulting model is then much too large and must be simplified to become tractable. An intermediate approach is to model the generator under consid-eration (the \"internal\" system) in some detail and postulate a simple model for the remaining \"external\" system. Parameters of this simple model are then identified from measurements taken at the \"internal\" system generator. This method has two advantages: F i r s t , only the dynamics of the external system that interact strongly with the internal system heed be represented, and second, the parameters could be estimated by an on-line computer and the model therefore adapted to the actual state of the power network. 2. The proposed external system usually consists of one (or several) very large synchronous machines [ 3 - 7 ] . Yu, et a l . [ 3 ] used a small disturb-ing input signal and estimated the parameters from the system response. It was assumed that there were no disturbances in the external system. In the following, additional random fluctuations in the external system are considered. The problems treated are the modelling of these disturbances and estimating of parameters in a noisy system. 
1.2 Parametric Models for Identification o _ Basic properties of identification problems are treated by Astrom and Eykhoff in [8]. In the following i t i s assumed that a mathematical model for a power system i s derived from a p r i o r i knowledge and the identification problem is reduced to the estimation of several unknown parameters. State- space models have been chosen for several reasons: they are easily obtained from dif f e r e n t i a l equations, can describe multi-output systems easily, and form the basis for optimal control. (a) Deterministic models If a l l inputs and outputs of the system can be measured accurately, a general linear state space model is of the form: x(k+l) = F(a) . x(k) + G(a) . u(k) (1-1) y_(k) = H(a) . x (k) (1-2) a_ is the vector of unknown parameters and shall not depend on time. The discrete-time form of the state equation is chosen because the input and output signals shall be sampled and processed by a discrete 3. machine (computer). I f a continuous-time model x(t) = A(o).x(t) + B(a).u(t) (1-3) i s derived from d i f f e r e n t i a l equations, i t can be exactly converted i n t o form (1-1) by i n t e g r a t i o n over one time step, provided that the input u_(t) changes only at the d i s c r e t e time points. This i s not a severe r e s t r i c t i o n for a power system. (b) Stochastic models The system may have inputs (disturbances) which cannot be measured and the measurements of the outputs may be inaccurate. A state space model for a system with a d d i t i v e noise i s : v(k) and w(k) are random v a r i a b l e s of input and measurement noise r e s p e c t i v e l y . A d d i t i o n a l assumptions l i k e zero mean, independent sequences are usually made. It should be noted that the model state and output are now random var i a b l e s too, i . e . , x.(k) and y_(k) are described by a p r o b a b i l i t y density function. 1.3 Input Signal Requirements Let x(0)=0_. Then i d e n t i f i c a t i o n i s only p o s s i b l e i f some minimal requirements about the input are met. For many estimators, a s u f f i c i e n t condition of \"pe r s i s t e n t e x c i t a t i o n \" was o u t l i n e d by Astrom i n [8], I t may be interpreted that the input s i g n a l must have some amplitude f o r every frequency i n the band of i n t e r e s t . x(k+l) = F(o) . x(k) + G(a) . u(k) + M(a) . w(k) d-4) y_(k) = H(a). x(k) + v(k) d-5) o Stochastic models have two types of input signals, the control _u(k) which is deterministic and the noise w(k) which i s stochastic. (a) Deterministic inputs These include disturbances applied for the sake of estimation. Such disturbances are a compromise between what is desired for estimation purposes and what can be realized under physical and operational constraints. For estimation of power system dynamics input signals can be applied at the governor or the voltage regulator. Since any interference into normal opera-tion is undesirable, these signals should be small and of short duration. Yu, et a l . [3] proposed a step change of mechanical torque for a short time, the simplest signal with \"persistent excitation\". (b) Noise inputs Observed changes in the system under test, which are neither due to applied inputs nor to parameter changes may be attributed to additional stochastic inputs w. These inputs may be considered a nuisance since they make a correlation of applied input and measured output much more d i f f i c u l t . 
But on the other hand, one might ask i f the noise alone disturbs the system enough so that identification i s possible from output measurements only. A white or colored Gaussian noise sequence for example would meet the requirement of persistent excitation. The less i s known about these noise inputs, the system identification gets more d i f f i c u l t , since not only the system parameters , but also parameters of the postulated noise process have to be estimated. But this i s compensated by the advantage that the estimation process requires no interference of the normal operation of the system at a l l , an advantage that would count very high for power system identification. In fact, a few attempts have been made to estimate power system dynamics from normal operating data. Lindahl and Lyung [4] tried to f i t several models by processing actual power system measurements and estimated parameters of a very simple 5th order model. They achieved not unreasonable estimates but expressed a desire to introduce additional intentional disturb-ances. Price et a l . [7] tuned simulation programs to produce output signals that were very similar to actual f i e l d measurements. They concluded that parameter estimation from on-line measurements i s possible, but that a choice of several model structures should be available, based upon knowledge from system planning data. (c) Effects of noise and feedback One has to be careful in choosing an input signal that i t is not the result of a feedback. An example of what can happen i s given in [8] for a transfer function model U G(z) 1—6 noise w H(z) An attempt to identify G(z) from measurements would yield the estimate Y(z) _ 1 u(z) H(z) ' For a power system model the excitation voltage and the mechanical torque can be treated as input signals only i f the feedback loops of the governor and voltage regulator are open. 6. 1.4 Sources of Input Noise i n Power Systems F i e l d measurements of terminal q u a n t i t i e s on a generator (e.g. [7]) show f l u c t u a t i n g s i g n a l s even i f no i n t e n t i o n a l disturbance i s applied. These f l u c t u a t i o n s come from changes i n the i n t e r n a l or external system and the model of the e n t i r e system should take them into account. In t h i s t h e s i s , only f l u c t u a t i o n s i n the external system power demand w i l l be considered. The simplest way to account for these demand changes i s to model the a c t i v e and r e a c t i v e power demand as sto c h a s t i c processes with the steady state power as mean value. Hence a l l load dynamics should be included i n the one-machine model that describes the dynamics of the external system. P r i c e et a l . [5-7] introduced power demand as a s t o c h a s t i c input ) i n t h e i r model, whereas L i n d a h l } et al.[4] introduced a general noise vector w with unknown covariance. The e a s i e s t noise process from an estimation standpoint i s Gaussian white noise and t h i s rather r e s t r i c t i v e assumption i s made for most of the following chapters. How a somewhat more r e a l i s t i c noise sequence can be introduced i n the power system model i s shown i n section 2-7. 1.5 Methods for Parameter Estimation There are a v a r i e t y of methods a v a i l a b l e , the problem i s to f i n d one which y i e l d s an unbiased, r e l i a b l e , s table estimate with reasonable computing e f f o r t . 
(a) Parameter estimation as state estimation For state estimation i n l i n e a r systems, the Kalman f i l t e r i s a very e f f i c i e n t and w e l l proven estimator. The parameters a_ i n eq. (1-4) 7. can be considered as system states which obey the equation cx(k+l) = tx(k) . The parameter estimation problem can therefore be viewed as estimation of the states of the augmented model [9]: ' x(k+l)' \" F(a(k)) 0 \" ' 2L(k)' a(k+l) 0 I * a(k) This method has two drawbacks: The augmented system becomes nonlinear even i f the original system was linear and the order becomes higher too. It i s not too well suited for estimation of unknown parameters [10] , but i t should be considered for parameter tracking i f very good i n i t i a l estimates are available. (b) Least squares methods Least squares methods involve minimization of a cost which i s a quadratic function of some error. The error results from comparison of the model behaviour with the measurements on the real system. The error i s normally so defined that i t is linear in the para-meters. The cost function can be minimized in one step. These so called \"equation error\" methods yield efficient algorithms and are easily imple-mented as recursive estimators. Unfortunately, they can produce strongly biased estimates when noise i s present in the system [8]. In \"output error\" methods, the error i s the difference between measured output and model output. This error i s not linear in the parameters and nonlinear techniques are applied to optimize the cost function. This method was used by Yu, et a l . [3] for a deterministic system. It can be generalized for the case with noise and i s related to the maximum likelihood method (see section 2-5). (c) Maximum likelihood method The maximum likelihood method i s designed for stochastic system models where the outputs are random variables. It basically adjusts the model parameters in a way that maximizes the probability that the model delivers the measured output. It i s reported to yield stable, unbiased estimates [8] and has been shown useful for power system estimation [4-7, 11]. The maximum likelihood method works for systems with several outputs and also for systems with only noise inputs. Although i t i s numerically expensive, i t can be implemented on a modern large computer. 2. THE MAXIMUM LIKELIHOOD ALGORITHM FOR PARAMETER ESTIMATION In this chapter, the maximum likelihood algorithm for parameter estimation is developed. The major component of the estimator is a linear optimal f i l t e r (Kalman f i l t e r ) . The f i l t e r presented is useful for d i g i t a l processing of sampled measurements from a continuous process. In the case of a general white input noise, the model and f i l t e r should be rearranged with the f i l t e r gain as unknown stochastic parameters to insure uniqueness. Alternatively, a n input-noise model of low order may be preferable. A comparison with a least squares algorithm i s made. Some methods for nonlinear function optimization are presented. The system model is extended to include correlated noise input. 2.1 State Space Description and Likelihood Function Let the physical system be described by the following equations: x(t) = A(ct) . x(t) + B(a) . u(t) + C(a) . w ( t ) (2-1) y ( t k ) = H(a) . 
x(t k) + v ( t k ) (2-2) with the vectors: x: state u: control input y: model output w: plant or input noise v: measurement noise a: unknown physical parameters The mixed continuous (2-1) and discrete (2-2) form indicates that the model is derived from dif f e r e n t i a l equations but measurements are sampled and processed in discrete time, w is a Gaussian white noise process with 10. zero mean and constant covariance Q, i.e.: E[w(t)] = 0 E[w(t). W T ( T ) ] = Q . 6(t-r) (2-3) v i s a Gaussian white noise vector with zero mean and constant covariance R, i.e.: E[v(k)] = 0 E[v(k) . v T(*)] = R .6 k A (2-4) where the index k denotes time t. . k In addition to a , Q and R may be unknown too. With these assumptions for v and w, the probability density function of the output pr(y(k)) can be calculated for a l l k as a function of a , Q and R from Eqs. (2-1, 2-2). The likelihood function is the value of the joint probability density for the actually measured output sequence pd ' ,„ (z(N) , z(N--l) ,. y(N),y(N-l),...,y(l) . . . , z ( l ) | a,Q,R) where z(l) ....z(N) denote the available output measure-ments. The maximum likelihood method attempts to find those parameter t*. estimates a, Q, R for which the likelihood function is maximized. Calculation of the log likelihood function [10] The joint probability density function is computed recursively by application of Bayes' rule. With the abbreviation Y(k) = [y(k),y(k-l),...,y(l)] andZ(k) - [z(k),z(k-l),...,z(1)](2-5) pd Y ( k )(Z(k)|o,Q,R) = pd ( k )(z(k)|z(k-l);a,Q,R) . pd Y ( k_ 1 )(Z(k-l)|a,Q,R) (2-6) If L denotes the negative logarithm of the likelihood function, Eq. (2-6) becomes 11. -L(k;a,Q,R) = -L(k-l;a,Q,R) + In pd ( k )(z(k) jZ(k-l);a,Q,R) (2-7) For L(0)=0 and with the recursion carried out N L(N;a,Q,R) = - £ In pd (z(k)|Z(k-l);a,Q,R) (2-8) k=l y V ; Minimizing L w i l l maximize the likelihood, since the logarithm is a monotonic function. It remains to evaluate the conditional probability density of y(k). Because of the Gaussian assumption for the noises w and v, this density i s also a normal distribution. The normal distribution for a random variable i s P d y ( z ) = ( 2 . T 7 . C T 2 ) _ i . exp(-e 2/ 2a 2) (2-9) where e=z-y , y=E[y], a 2= E[(y-y) 2] The corresponding multivariable normal distribution for a random vector i s P d y ( k ) ( z ( k ) | z ( k - l ) ) =[(2.u) m. det P y ( k | k - l ) ] _ i . e x P [ - \\ e T(k) . P y - : L(k|k-l).e(k)] (2-10) where m = dimension of the vector y e(k) = z(k) - y(k|k-l) y(k|k-l) = E[y(k)| Z(k-l);a,Q,R] Py(k|k-1) = E[(y(k)-y(k))(y(k)-y(k)) T |z (k-1) ;a,Q,R] The conditional mean and the covariance of y(k) are provided by an optimal linear f i l t e r (Kalman f i l t e r ) that processes the measurements z(l) ... z(k-l) for the specified values of a, Q, R. 12. 2.2 The Continuous-Discrete Kalman F i l t e r The Kalman f i l t e r for the model (2-1, 2-2) i s a combination of the discrete and continuous f i l t e r s described by Gelb [9]. (a) Propagation in the absence of measurements Between t, .. and ti, no measurements are available and the state k-1 K-estimate x(t) and i t s error covariance P(t) are simply extrapolated: x (t) = A . x(t) + B . u(t) (t k_ 1 = • < v t k - i ) - * ( t i * - i ) + $ (t -x)« B-u(x)dx with the transi-tion matrix $(x) = exp(A'x). If the input u i s allowed to change i t s value only at the time points t^ , k=l ... N (see section 1-2), then the constant term B.u(x)=B.u(tk_^) can be taken out of the integral. 
With At = t ^ - t ^ (2-20) f A t -1 exp[A-(At-x)].dx-B = A (a)•[F(a)-I]-B(a) (2-22) 0 F(a) = *(At,a) = exp(A(a)- At) (2-21) G(a) = the integrated version of (2-11) is x(t-) = F(a).x(t+_ 1) + G(a).u(tHh_1) (2-23) For the coveriance equation (2-12) the same approach as for the Matrix • T Ricatti equation [9, p.136 ] is taken. For the equation P = A-P+P.A +CQC (2-12) the transformations X = P«y and u = -A «y define a linear system X with the solution r - A T o\" A • X r y(At)' X(At) (F'V 0 y (0) x (0) where 100 points), the time dependent covariance and gain matrices can therefore be replaced by the steady-state matrices, with the additional advantage, that dependence upon the unknown i n i t i a l covariance P(0) i s removed. The steady state value could be obtained by solving the matrix equation P(t^) = ^ ( ^ - i ^ * However, this system of nonlinear algebraic equations is d i f f i c u l t to solve. It i s numerically more convenient to calculate the Kalman gains as shown in the f i r s t step above with an arbi-trary matrix P(0) and use the values to which the gains converge. For the power system model considered i n section 3-5 and the choice P(0)=0, convergence to 5 digits was usually obtained in 5 to 10 steps, an indication that s t a t i s t i c a l steady-state can safely be assumed. Summary of Equations for the Likelihood Function System model (process): x(t) = A(a) • x(t)•+ B(a).u(t) + C(a)-w(t) (2-1) z(k) = H(a)-x(t k) + v(k) (2-2) E[w(t)-w T(x)] = Q • 6(t-x) (2-3) E[v(k)-vT(£)] = R • 6^ (2-4) Kalman F i l t e r : x(k-) > = F-x(k-l+) + G.u(k-1+) e(k) = z(k) - H-x(k-) x(k+) = x(k-) + K-e(k) (2-23) (2-13/14) (2-16) likelihood function (constant term mir neglected) : N L(N; a, Q,R) = ^ {N-£n det P + £ e T(k)-P 1 . e(k)} (2-19) k=l Steady state matrices: K= lim K(k) , k-x» P = lim P„(k) from e , e K P(k-) = F-P(k-1+).FT + T-F T ; P(0+) = 0 P e(k) = H-P(k-)-HT + R K(k) = P(k-)-H T. P e _ 1 ( k ) P(k+) = [I-K(k)-H].P(k-) (2-25) (2-15) (2-18) (2-17) Remark: This form of the Kalman f i l t e r i s valid also i n the case of perfect measurements (R=0), although numerical d i f f i c u l t i e s may arise when inverses are calculated. Continuous-discrete relationship: F = exp(At-A(a)) G = A _ 1.(F - I)-B (2-21, 2-22) (F\"V 0 \" = exp At • F -A 0-T C.Q.C A (2-24) By use of the identify T T x • A • y = trace (A* y • x ) the time independent matrix P £ can be taken out of the summation in the likelihood formula. Hence 1 N L(N;a,Q,R) = N-£n det P + trace(P £ e(k)-e (k)) (2-27) k=l 2.4 Maximum Likelihood Estimates The existence of optimal parameter estimates i s now investigated. Uniqueness of optimal parameters The major drawback of state space models i s their non-uniqueness; many combinations of the matrices A,B,C,H account for the same input-output sequence. For multivariable systems even canonical forms are not unique [11]. For a physically derived model with a small set of parameters a, this need not be a big problem. Care should be exercised in the choice of parameters, so that the transfer functions H- (sI-A) \"^B and H-(sI-A) ^\"C are unique. o _ Both Astrom [11] and Kashyap [12] point out, that i n general the noise st a t i s t i c s (Q and R) cannot be determined by the likelihood method. Different noise covariances may give the same values for K and P e and there-o fore the same likelihood L. Astrom suggests that L be minimized as a function of a, K and P . 
e The Innovations Representation [8,11] The equations of Kalman f i l t e r and system model can be combined to the following model: x(t~) = F(a).x(t~_ 1) + G(a)-u(t k_ 1) + K ' - e C t ^ ) (2-28) z(t k ) = H(a).£(tk) + e(t k) (2-29) where K'=F.K and e(t^) , k=l..N i s a sequence of independent vectors with zero mean and covariance P . These equations can be solved for the innova-e n tions e i f a, K', u and z are given. The likelihood function i s 1 -1 N T L(N;a,K;Pe)= ^ {N-ln det P + trace(P . £ e(k)-e (k))} (2-30) 6 6 k=l The minimization with respect to P g is done analytically. Setting the derivatives of L to each element of P £ to zero gives the condition N ?e opt = I I e(k)-e T(k) (2-31) k=l The maximum likelihood algorithm is then restated as: N minimize V(N;cc,K*) = det £ e(k)-e (k) (2-32) k=l e(k) = z(k) - H(a).x(k) (2-33) x(k) = F(a)-x(k-l) + G(a)-u(k-l) + K'e(k-l) (2-34) x(0) = 0, e(0) =0 This i s very similar to a minimum variance estimator; the relationship i s shown in the next section. Kashyap [12] provides a proof of convergence and the necessary conditions for estimation from an innovations representation. However, he uses a different type of input-output description to avoid the ambiguities of the state-space model. The conclusions of [12] can be interpreted i n the following way: Under the assumptions of - zero mean, Gaussian input noise with a certain covariance - zero mean, independent Gaussian measurement noise 19. - stable feedback system and linear optimal f i l t e r (i.e. eigenvalues of both F and F-K'H inside unit circle) the maximum likelihood estimates of both the parameters in F and H and the f i l t e r gains K w i l l converge to the true values as the number of samples N goes to in f i n i t y (NB: This includes the case with no deterministic inputs). Choise of reference model For the theoretical reasons mentioned, the innovations description should be used for estimation of a general model. This was done by Lindahl [4] , who made no special assumptions about the manner in which the input noise enters the system. On the other hand, i f there i s a simple physical description of the source of the input noise, then the noise covariance Q may contain only a few unknown entries. The simple power system model of section 3-5 contains only one stochastic parameter (the load demand variance), but at least six Kalman gains. In this case, the original function L(a,Q,R) should be optimized, since i t involves fewer parameters. Price et a l . [5] also choose this approach for their model. 2.5 Least Squares Algorithms (a) Single output systems For a physically derived model with measurement noise the output error method is used. The cost function i s N • J(a) = I e(k/a) , e(k/a) = z(k)-y(k/a) (2-35) k=l The output error or residual e i s the difference between the measured out-put z and the output y of a reference model. Nonlinear optimization methods are used to minimize J(a). Convergence properties of the estimates depend on the reference model that provides the y(k). If the reference model i s of the form x(k) = F(a)-x(k-l) + G(a)-u(k-l) (2-36) y(k) = H(a)-x(k) (2-37) which does not include any noise, but the measurements are created by a process disturbed by noise at the input (w(k)), then the residuals e(k) w i l l be a correlated sequence. In this case the least squares estimate i s biased [8]. 
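Before turning to that remedy, it is worth noting how cheaply the innovations-form criterion (2-32 to 2-34) can be evaluated once the predictor gain K' is treated as a free parameter: one forward pass over the data suffices. The following sketch is purely illustrative and is written in present-day array notation rather than the Fortran of the actual implementation; the matrices F, G, H, the gain K' and the data arrays u, z are placeholders to be supplied for the current parameter guess.

    import numpy as np

    def innovations_cost(F, G, H, Kp, u, z):
        # V(N; alpha, K') = det( sum_k e(k) e(k)^T ), Eqs. (2-32) to (2-34).
        # Kp plays the role of K' = F*K; u[k] is the input held over the
        # step ending at sample z[k].
        n = F.shape[0]
        m = len(z[0])
        x = np.zeros(n)                  # x_hat(0) = 0
        e = np.zeros(m)                  # e(0) = 0
        S = np.zeros((m, m))
        for uk, zk in zip(u, z):
            x = F @ x + G @ uk + Kp @ e  # one-step predictor, Eq. (2-34)
            e = zk - H @ x               # innovation, Eq. (2-33)
            S += np.outer(e, e)          # accumulate the sample covariance
        return np.linalg.det(S)          # criterion V of Eq. (2-32)

For a single output the determinant collapses to the ordinary sum of squared innovations, which is exactly the connection to least squares drawn later in this section.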
For transfer function models with noise, the least squares method is generalized by pre-filtering the measurements u(k) and z(k) in such a way that the residuals e(k) become uncorrelated. For a state space model, this is accomplished by the Kalman f i l t e r . If the innovations model (2-33, 2-34) is used as reference instead of (2-36,2-37), then the residuals are the uncorrelat innovations. In fact, for a single output system the likelihood function (2-30) is N N V(a,k) = det I e(k)e X(k) = £ e^(k)=J(a,K) (2-38) k=l k=l In the case of a single output, the maximum likelihood estimate and the least squares estimate (for the innovation model) are accordingly indentical. In addition, i t i s also the minimum variance estimate, since J(ot,K) i s an unbiased approximation of the variance of e. 21. (b) Multiple outputs The least squares cost function for several outputs i s N T J(a) = I e (k/a)-W.e(k/a) ; e(k/a)=z(k)-y(k/a) (2-39) k=l with an arbitrary weighting matrix W. If the e(k) are independent Gaussian vectors (from an innovations model) then i t can be shown that the optimal choice for W is the inverse of the covariance matrix and for this choice the least squares estimate is equal to the minimum variance estimate with the cost function N N J'(a) = trace £ e(k) o a l l quantities i n per unit, except 6[rad] and Wo=120TT rad/sec J- wo 2 M= — inertia constant base D damping constant T m mechanical torque T > P = V,• I, + V «I e l e c t r i c a l torque e ^ e d d q q ^ to -w rot o . . • «_ • Aco = speed deviation w o 6 angle between rotor axis and synchronous frame of reference - e l e c t r i c a l equations assumptions: - 3 windings: d,q,f - resistance in stator winding negligible - flux changes in d and q-axis negligible - speed deviation negligible from Kimbark [16], p. 73 *d = M f * T f \" L d ' X d yq q q . * f = L f f ' T f \" I M f ' x d d o q E f = Rf • I f + * with X, = a) • L J , X = w *L_ , T, = X' = X, - | - M \" d o d ' q o q ' do R,. d d 2 n f f f ' - wQ.Mf „ (o M q o f f q L f r f FD Rf f the following equations are obtained: Tdo • ^q + E q = EFD ' h <3\"3> V d = X q • I q (3-4) V = E - X,«I, = E' - X'-I, (3-5) q q d d q d d v In the following a linearized model with the load bus as reference i s developed. 31. Power transfer between two buses: Fig. 3.2 This simplified phasor diagram of Fig. 3*2 is used i n the following. Equations for both machines (i) and (j) are obtained through the substitu-tions : Machine ( i ) : E=E(\\ , V=VL , Xd=X^. + X t , Xq=Xq.+ X t , a=a.=6.-6L , I d = I d ± , I q=I Machine ( i ) : E=E1 . , V=VT , X =X! , X =X. qj L ' d 3 ' q 3 a=a.=6.-6T , I =1 ,. , I =1 . J J L d dj q qj incoming power at the load bus V: S = P+jQ = V- I = 35. where AT . = K,.•Aa. + K_.•AE'. + K_.•AVT e i l i i 2x qi 7x L AE . = r^=— • AE' . + K. .* Aa. + K 0 .\" AVT qx. qx 4x x 8x L AV. = Kc . •Aa. + K,.•AE'. + Krt..AV t 5x x 6x qx 9x L Summary of equations for machine (j) A6. = w„ . Aw. M. • Aw. = -D. • Aco. - AT . 3 1 3 3 e J T' .AE'. = -AE . doj qj qj AT . = K * Aa. + K • AE' . + K_ . • AV ej l j J 2j qj 7j L AE . = ^ — AE' . + K..•Aa. + Ka.* AV qj qj 4j j 8j L In the combined model for both machines, one machine angle can be eliminated, since only relative positions of the machine axes are of interest, A<5 = A<5_^-A6_., This reduces the order of the model by one. 3.4 Load Bus Equations Three more equations are required to eliminate AV^^\\a^,Aa2 from the state equations. Two relations are from power continuity conditions: PT = P. . + PT . QT = Q . + Q . 
L Lj Lx L Lj L i and one is an angular relationship Ac$..-A<50 =Aan -Aa„ = A6 36. hence K,.Aa. +K.,.Aa. +K-.AE1. +K„.AE'. + (K-. ,+K_ . )-* AVT lx x 1] ] 2x qi 2j qj 7x 7j L AQL = A6 = Aa^ - Ao^ (3-18) The 3 equations can be solved for AV^Ao^jAa by Gaussian elimination. This is best done numerically. The expressions are substituted in the state equations of the machines, which in effect w i l l modify the constants .-K^. 3.5 Simplified Dynamic Model Since the estimation algorithm posed some problems and is expensive in computing time, a simplified model was used: - the voltage change on the load bus AV is assumed to be negligible - the load bus i s assumed to keep the frequency of the external machine Aa ± ^ A6 Aa 2 ^ 0 Under these assumptions, only the equations of motion of the machine (j) re-main relevant and i t s e l e c t r i c a l torque can be expressed as AT . = APT - AT . ej L ex (3-19) State equations for simplified model: A6 = w ( A O V J - A W . ) o v 1 y M. • Aw-4. = -D4 . A all +AT . -AT . i 1 1 mx ex M.'AtD. = -D.-Au). -AT . 3 3 3 3 ej (3-20) (3-21) (3-22) 37, T. .AT . = -AT . + K^(Au> , - A O J , ) (3-23) G mi mi G r e f 1 T d o ' A E q i • A E F D \" A E q i ( 3 \" 2 4 ) T F D , A E F D i = - A E F D i + K F D ( A V r e f - A V (3~25) AT . = K -A6 + K *AE'. (3-26) e i l i 2 i q i AE . = K..A6 + AE'. (3-27) q i 4 1 K 3 i q i AV. = K C.A6 + K-.AE'. (3-28) t 5 i 6 i q i AT . = AP -AT . (3-29) ei L e i This i s a 6th order model with the 3 unknown parameters Mj , Dj, X t and the unknown covariance of the random input AP . Measurable q u a n t i t i e s for the estimation procedure are: speed ^ , terminal voltage AV t > terminal power AP t=AT e i. 3.6 Steady State Values The steady state is computed from the known system parameters (M^, ^ j i ' •••)» estimates of the unknown parameter and the terminal steady state quantities V Q, PtQ> Qj:0\" ^ e l ndex 0 i s omitted in the following. Fig. 3.3 Steady state phasor diagram are: The equations (3-9, 3-10) applied for the terminal of machine (i) P. = V .E .,..siniK t qdx Tx X . V .E' . .sinijj. o t qx l ^ di + V . siniJj. cosil;. t x i X . X« qi dx > V .E ...cosiji. V_ t qdi Yx t X . q i X and -P. = Vt.VL.sin(-i|»t) V t . V L . c o s ( - * t ) V t -with the solutions *V r t = arc tan (3-29) arc tan ^ (3-30) q i p . x' E' . = „ t . + V -cos*, qx V .sxnijj. t x t x. 1-X . q i V .sinijij. (3-32) (3-31) (3-33) 4. IMPLEMENTATION OF ALGORITHM This chapter contains a short description of the computer imple-mentation of the estimator and of some methods to reduce computing time. 4-1 Calculation of Likelihood Values The following flow-chart shows the major steps in the calculation of one likelihood value: arguments a, Q, R + calculation of system matrices A(a) , B(a), C(a) equations of power system model, sections 3-5, 3-6 4-conversion to discrete-time system A(a) ,B(a) F(a),G(a) C.Q.CT QD= T-F method see section 4-3 4-calculation of steady-state Kalman gains K and (expected) innovations covariance P e iterative equations, see section 2-3 + processing of the measurement arrays u, z with the Kalman f i l t e r and summation of the innovations (sample) covariance S equations, see section 2-3 4-likelihood value L(a, Q, R) The f i r s t of the steps above contains the physical model. The following steps are independent of i t . The computation sequence i s simpler for the case without input noise when there i s no Kalman f i l t e r . 40. 
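In present-day array notation the flow chart above corresponds to a routine of roughly the following shape. This is an illustrative sketch only, not the thesis routine XDFUNC: the continuous-time matrices A, B, C, H are assumed to be assembled elsewhere from the model equations of sections 3-5 and 3-6 for the current parameter guess, and the data arrays, the sampling step and the fixed number of gain iterations are placeholders.

    import numpy as np
    from scipy.linalg import expm

    def likelihood_value(A, B, C, H, Q, R, u, z, dt, gain_iters=20):
        # Q, R: input- and measurement-noise covariance matrices.
        n = A.shape[0]
        # conversion to the discrete-time system (Eqs. 2-21, 2-22);
        # the solve assumes A nonsingular, as Eq. (2-22) does
        F = expm(A * dt)
        G = np.linalg.solve(A, F - np.eye(n)) @ B
        # discrete input-noise covariance Gamma from the augmented exponential (Eq. 2-24)
        M = np.block([[-A, C @ Q @ C.T], [np.zeros((n, n)), A.T]])
        Gamma = F @ expm(M * dt)[:n, n:]
        # steady-state Kalman gains and innovations covariance (Eqs. 2-25, 2-15, 2-18, 2-17)
        P = np.zeros((n, n))                      # P(0+) = 0
        for _ in range(gain_iters):
            Pm = F @ P @ F.T + Gamma              # P(k-)
            Pe = H @ Pm @ H.T + R                 # expected innovations covariance
            K = Pm @ H.T @ np.linalg.inv(Pe)      # Kalman gain
            P = (np.eye(n) - K @ H) @ Pm          # P(k+)
        # filter the measurement arrays and accumulate the sample covariance S
        x = np.zeros(n)
        S = np.zeros_like(Pe)
        for uk, zk in zip(u, z):                  # u[k]: input over the step ending at z[k]
            x = F @ x + G @ uk                    # extrapolation (2-23)
            e = zk - H @ x                        # innovation (2-13/14)
            S += np.outer(e, e)
            x = x + K @ e                         # measurement update (2-16)
        # likelihood value, Eq. (2-27), constant factors dropped
        N = len(z)
        return N * np.log(np.linalg.det(Pe)) + np.trace(np.linalg.solve(Pe, S))

A nonlinear optimizer (section 4-2) calls such a routine once per trial parameter vector, which is why the computational savings discussed in section 4-3 matter.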
4-2 Optimization Procedure The Monitor for nonlinear optimization available at UBC [14] was employed to find parameter estimates. It supports interactive processing and switching among different optimization methods. The following user routines were added: FUNCTION XDFUNC - Calculation of a likelihood value as described in section 4-1 SUBROUTINE INIT - an i n i t i a l i z a t i o n routine that must be called f i r s t and asks for the necessary data. Most of the available algorithms in [14] were tried, but none of the methods based on gradient evaluation performed well. A possible cause is that the parameter steps taken for the difference approximation of the derivative were not suitable. Good results were achieved by the conjugate directions algorithm (routine POWEL). 4-3 Computation Efficiency The number of function evaluations i s high for any optimization algorithm. Since a function evaluation involves the setting up and running of a linear f i l t e r , computation time i s long even for a small system. A considerable reduction i s achieved by using the following techniques. (a) F i l t e r equations and matrix operations The mathematical operations of the f i l t e r are simple but highly repetitive. Since multiplications of a vector by a matrix are among the most frequent operations, a l l matrices are stored by rows, so that a. . becomes A(J,I) in Fortran. In most subroutines, array elements are addressed by a single index only. A package of short matrix subroutines was written in Fortran. The order in which the elements are stored is chosen carefully. These matrix routines run several times faster than their more general counterparts in the system library. (b) Conversion of continuous to discrete time This can be done by numerical integration. If x(0) is chosen as the i-th unit vector, then integration of the system x=Ax over one time step At w i l l yield the i-th column of the transmission matrix F. The number of integration steps required is therefore equal to the order of the system. It was found that by using the Runge-Kutta method, up to 80% of the total computing time was spent on this integration. A much faster procedure is obtained through approximation of the matrix exponential by i t s series: At -2 F = exp(A-At) = I + Af A+ jf~ A2+ ... (4-1) In general this method may suffer from several numerical problems, such as roundoff errors and slow convergence. But for the model in section 3-5, approximation by the f i r s t eight terms is accurate to 6 digits and about ten times faster than numerical integration. The complete recursive algorithm for F, G and T i s obtained from the series expansion of Eqs. (2-21, 2-22, 2-24): F(n) = F(n-l) + • A(n) F(0)=I (4-2) G(n) = G(n-l) + • B(n) G(0)=0 (4-3) T(n) = T(n-l) + ^ r - • T(n) T(0)=0 (4-4) n! 42. A(n) = A(n-l) • A A(1)=A=A-At (4-5) B(n) = B(n-l) • B B(l)=B=B\"At (4-6) T(n) =-T(n-l)-A T+A(n-1)-f T(l)=f=C.Q-C T.At (4-7) (c) Achieved computing times The evaluation of a l i k e l i h o o d value for a system of 6th order, with one input, one output and 100 measurement points requires the following approximate CPU-times ( f o r an Amdahl 470 computer): 24 msec for d e t e r m i n i s t i c case (no Kalman gain) 55 msec for the s t o c h a s t i c case A t y p i c a l estimation run with 6 i t e r a t i o n s of the conjugate d i r e c -tions algorithm requires about 100 function evaluations or 5 sec CPU-time. This i s s t i l l considerable for such a simple model. Further improvements are p o s s i b l e . 
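As an illustration of the series-based conversion of section 4-3(b), the following sketch builds F and G from the truncated expansion. It is a schematic reconstruction in present-day notation, not the thesis listing: the eight-term truncation is taken from the observation above, and the noise covariance Γ, which the thesis obtains from the analogous recursion (4-7), is omitted.

    import numpy as np

    def discretize_by_series(A, B, dt, terms=8):
        # Truncated-series version of F = exp(A*dt) and
        # G = (integral_0^dt exp(A*s) ds) * B, i.e. the quantities of
        # Eqs. (2-21) and (2-22), in the spirit of Eqs. (4-1) to (4-6).
        n = A.shape[0]
        F = np.eye(n)                    # running sum for F, n = 0 term
        S = np.eye(n) * dt               # running sum for the integral, n = 0 term is I*dt
        term = np.eye(n)                 # holds (A*dt)^k / k!
        for k in range(1, terms):
            term = term @ (A * dt) / k   # (A*dt)^k / k!
            F += term
            S += term * dt / (k + 1)     # A^k * dt^(k+1) / (k+1)!
        return F, S @ B

Like Eqs. (4-2) to (4-6), this avoids both numerical integration and an explicit inverse of A, at the price of possible round-off error and slow convergence when the norm of A times Δt is large.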
Among the further improvements, most of the calculations (especially the filter) could probably be done with single instead of double precision. Of most importance would be a faster, reliable method for computing the steady-state Kalman gains. 4.4 Simulation Programmes All data used for the estimation process was obtained from a computer simulation with the same model. Many of the subroutines were the same for estimation and simulation. The simulation data is therefore ideal and fulfils all assumptions of structural and noise properties that the estimator presupposes. 5. DATA AND RESULTS The simulation data and the performance of the maximum likelihood estimation method are shown for the following cases: Case 1 - deterministic input only; Case 2 - combined deterministic and stochastic inputs; Case 3 - stochastic input only; Case 4 - stochastic input and measurement noise. 5.1 Parameters and Input Data The following parameters were used in the simulation program: sampling time step Δt = 0.05 sec. (a) Deterministic parameters (all per unit) - internal generator: M_i = 5, D_i = 10, T'_doi = 8, X_di = 1., X'_di = 0.3, X_qi = 0.6, K_FD = 50, K_G = 10, and 0.5 for the remaining regulator/governor constant (its symbol is illegible in the scanned copy) - external system (unknown parameters α): transmission reactance X_t = 0.5; unknown machine M_j = 9, D_j = 26 - steady state values: V_t0 = 1.05, P_t0 = 0.9, Q_t0 = 0.3 (b) Deterministic input The control input in cases 1 and 2 is a short pulse applied at the reference input of the governor/turbine loop. [Fig. 5.1 Deterministic input: Δω_ref (p.u.) versus t (sec).] (c) Stochastic input A random disturbance ΔP_L was applied in cases 2 to 4. Only white noise with a variance Q = E[ΔP_L²] = 10^-4 was considered. In discrete time this corresponds to a random sequence with a standard deviation of √(Q·Δt) = 2.2·10^-3 p.u. For cases 1 to 3, measurement noise is neglected. In case 4, measurement noise with covariance R = 10^-9 (σ_v = 0.032·10^-3) was added to the simulated output Δω_i. 5.2 Output Data The simulated outputs were ΔP_t, ΔV_t and Δω_i. Each example consists of a sequence of 100 points. Some examples for each case are shown in Figs. 5.2 to 5.9. Table 5.1 contains, for all simulated sequences, the mean value m, the standard deviation σ, the maximum absolute amplitude |A| and the expected innovation standard deviation √P_e of a Kalman filter with the true parameters.
Table 5-1: Simulation Results (all values x 10^-3 per unit)
Case  Signal  m        σ        |A|     √P_e            Fig.
1     ΔP_t    -11.3    (23.9)   94.1    -               5.2
1     ΔV_t     0.19    (2.02)   7.66    -               5.3
1     Δω_i    -0.43    (0.62)   1.84    -               5.4
2     ΔP_t    -8.78    (30.3)   85.7    same as case 3  5.5
2     ΔV_t     0.31    (4.10)   11.2    same as case 3  -
2     Δω_i    -0.33    (0.83)   2.46    same as case 3  5.6
3     ΔP_t     2.53    21.8     42.7    3.33            5.7
3     ΔV_t     0.120   3.07     6.26    0.388           -
3     Δω_i     0.099   0.445    0.881   0.0268          5.8
4     Δω_i     0.1011  0.447    0.884   0.0806          5.9
5.3 Convergence of Parameter Estimates Parameters were estimated from the simulated measurements for all four cases. In all examples the cost function was minimized by the conjugate directions algorithm (routine POWEL [14]). Only single output sequences were considered. This is sufficient in the cases without measurement noise, because one output contains all available information about the system. Estimation from multiple output sequences was not attempted, because numerical problems were encountered in the calculation of the Kalman gains. This should be further investigated for the case with measurement noise. Table 5-2 contains a summary of the obtained optimal estimates for all examples.
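For a present-day reconstruction of these experiments, the conjugate-directions search can be driven, for example, by the Powell method in SciPy; the sketch below is a hypothetical stand-in for the UBC NLP monitor and routine POWEL, with illustrative tolerances, and with cost standing for whichever criterion (V or L) is used in the case at hand.

    import numpy as np
    from scipy.optimize import minimize

    def estimate_parameters(cost, theta0):
        # Conjugate-directions (Powell) minimization of a scalar cost such as
        # V(alpha, K') or L(alpha, Q, R); a present-day stand-in for routine POWEL.
        # Tolerances and evaluation budget are illustrative, not the thesis settings.
        result = minimize(cost, theta0, method="Powell",
                          options={"xtol": 1e-4, "ftol": 1e-4, "maxfev": 200})
        return result.x, result.fun

    # Toy self-check with a quadratic whose minimum sits at the true
    # external-system values M_j = 9, D_j = 26, X_t = 0.5 used in the simulations:
    theta_hat, _ = estimate_parameters(
        lambda th: float(np.sum((th - np.array([9.0, 26.0, 0.5])) ** 2)),
        theta0=np.array([5.0, 20.0, 0.3]))

The toy quadratic at the end merely checks the driver against the known true values; it does not reproduce any thesis result.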
Figs. 5-10 to 5-18 show how the estimates converged and are followed by comments for the individual cases. [Table 5-2: Estimation Results - the tabulated estimates are not recoverable from the scanned original; only the caption survives.] Fig. 5-10 Case 1 - deterministic, output ΔP_t. Fig. 5-11 Case 2 - mixed input, output ΔP_t. Fig. 5-12 Case 2 - mixed input, output Δω. Fig. 5-13 Case 2 - mixed input, output Δω. Fig. 5-17 Case 4 - with measurement noise, output Δω_i. Fig. 5-18 Case 4 - with measurement noise, output Δω_i. (a) Case 1 - deterministic In this special case there is no Kalman filter. The estimator is the ordinary least squares estimator with the cost function V = Σ_k e²(k). Rapid convergence to the true parameters was obtained for the ΔP_t (Fig. 5-10) and Δω_i measurements. The algorithm failed for the ΔV_t measurements, because the initial steps were too large. (b) Case 2 - deterministic and stochastic input The same cost function was used, but now the e(k) are the innovations of a Kalman filter. For negligible measurement noise (R→0), the Kalman gains depend on the parameters α only and the expected innovations covariance P_e is proportional to Q. Q is therefore estimated from the sample covariance P_e = V/N. Speed of convergence is comparable to the deterministic case, but the estimates have some bias (Figs. 5-11, 5-12). The first step of the algorithm always seems to make M_j and D_j larger, even if they are already too large (Fig. 5-13). It seems preferable to start with estimates that are smaller than expected. (c) Case 3 - stochastic input only Estimation with the cost function V = Σ e²(k) did not bring satisfactory results. Slightly biased estimates of M_j and D_j were found if X_t was held constant at the true value 0.5. But this is only one local minimum of V and lower minima can be found for quite different values of M_j, D_j and X_t. Good results were obtained with the original likelihood cost function L = ln det P_e + Σ e²(k)/P_e, where P_e is the estimated covariance of the Kalman filter. Estimates from all of the three outputs converged (Figs. 5-14 to 5-17), with a small bias for M_j and X_t and a rather large bias for D_j. (d) Case 4 - input and measurement noise In the first example (Fig. 5-17) Q and R are held constant at the true values and only α is optimized. The results are similar to those of case 3. In the second example (Fig. 5-18) only R is fixed; α and Q are estimated simultaneously. After four iteration steps, the estimates seem to converge reasonably. However, in the next iteration step very different estimates were found, for a slightly lower function value. For this short measurement sequence, the algorithm will not converge to the true values. 6. CONCLUSIONS The maximum likelihood method is a general method for estimation of parameters in a linear system disturbed by both input and measurement noise. The stochastic parameters can be modelled either as unknown gains of a linear filter or as parameters of a low order model derived from physical knowledge.
The second approach is given preference and a low order model for the remote part of a power system is proposed that explains small fluctua-tions by varying load demand. Estimation results were obtained for a simplified system from one output at a time. They show that the maximum likelihood method can in fact produce good estimates for a heavily disturbed system. Parameter estimates from a system with noise input only are also possible. However, convergence problems may arise for short measurement sequences. False convergence to a wrong set of parameters is possible, especially i f the measurement noise is appreciable. This case would need further investigation with longer data sequences and a more r e a l i s t i c power system model. Estimation of the dynamics of a real power system from measurements alone should be possible i f ... - accurate measurements of small fluctuations are available. - the dynamic behaviour and the source of fluctuations of the external system with respect to a generator can be represented with sufficient accuracy by a very low order model. To answer these q u e s t i o n s , a c t u a l measurements from a genera tor should be i n v e s t i g a t e d . 60. 7 BIBLIOGRAPHY 1. Y.N. Yu and H.A. Moussa, \"Optimal s t a b i l i z a t i o n of a multi-machine system\", IEEE Trans, on PAS, v o l . PAS-91, pp. 1174-82 (May 1972). 2. F.P. DeMello and C. Concordia, \"Concepts of synchronous machine s t a b i l i t y as affected by e x c i t a t i o n c o n t r o l \" , IEEE Trans, on PAS, v o l . PAS-88, pp. 316-329 ( A p r i l 1969). 3. Y.N. Yu, M.A. El-Sharkawi and M.D. Wvong, \"Estimation of unknown large power system dynamics\", IEEE Trans, on PAS, v o l . PAS-98, pp. 279-289 (January 1979). 4. S. Lindahl and L. Ljung, \"Estimation of power generator dynamics from normal operating data\", 3rd IFAC Symposium, The Hague (June 1972), Part I, pp. 367-374, Paper pp 2. 5. W. P r i c e , F.C. Schweppe, E.M. Gulachenski and R.F. S i l v a , \"Maximum l i k e l i h o o d i d e n t i f i c a t i o n of power system dynamic equivalents\", Proceed-ings 1974 IEEE Conference on Decision and Control, pp. 579-586. 6. R.D. Masiello and F.C. Schweppe, \"Multi-machine e x c i t a t i o n s t a b i l i z a t i o n v i a system i d e n t i f i c a t i o n \" , IEEE Trans, on PAS, v o l . PAS-94, pp. 444-454, (March 1975). 7. W. P r i c e , D.N. Ewart, E.M. Gulachenski and R.F. S i l v a , \"Dynamic equiva-lents from on-line measurements\", IEEE Trans, on PAS, v o l . PAS-94, pp. 1349-1357, (July 1975). o 8. K.J. Astrom and P. Eykhoff, \"System i d e n t i f i c a t i o n - a survey\", Auto-matica, v o l . 7 (1971), pp. 123-161. 9. A. Gelb (ed.), \"Applied optimal estimation\", The A n a l y t i c Sciences Corporation (MIT-Press), 1974, 74-1604. 10. F.C. Schweppe, \"Uncertain dynamic systems\", Prentice Hall, 1973, 72-11801. o _ 11. K.J. Astrom, \"Modelling and identification of power system components\" from E. Handschin (ed.), \"Real time control of electric power systems\" Elsevier Publishing Company, 1972.' 12. R.L. Kashyap, \"Maximum likelihood identification of stochastic linear systems\", IEEE Trans, on AC, vol. AC-15, pp. 25-34 (Feb. 1970). 13. P.R. Adby and M.A.H. Dempster, \"Introduction to optimization methods\", Chapman and Hall mathematic series, 1974, 74-4109. 14. M. Patterson, \"UBC-NLP, Nonlinear function optimization\", Computing Centre, Univ. of B.C., 1978. 15. Y. 
Bard, \"Comparison of gradient methods for the solution of nonlinear parameter estimation problems\", SIAM Journal Numer. Anal., vol. 7, pp. 157-186, (March 1970). 16. E.W. Kimbark, \"Power system s t a b i l i t y : Synchronous machines\", Dover, 1968, 68-12937. "@en ; edm:hasType "Thesis/Dissertation"@en ; edm:isShownAt "10.14288/1.0094649"@en ; dcterms:language "eng"@en ; ns0:degreeDiscipline "Electrical and Computer Engineering"@en ; edm:provider "Vancouver : University of British Columbia Library"@en ; dcterms:publisher "University of British Columbia"@en ; dcterms:rights "For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use."@en ; ns0:scholarLevel "Graduate"@en ; dcterms:title "Parameter estimation applied to power system models subject to random disturbances"@en ; dcterms:type "Text"@en ; ns0:identifierURI "http://hdl.handle.net/2429/21569"@en .