CONTROL AND ESTIMATION OF A CHAOTIC SYSTEM

By Jahangir Ghofraniha

B.Sc., Sharif University of Technology, Tehran, 1985

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES, ELECTRICAL ENGINEERING

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA

May 1990

© Jahangir Ghofraniha, 1990

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical Engineering, The University of British Columbia, Vancouver, Canada

Abstract

A class of deterministic nonlinear systems known as "chaotic" behaves similarly to noise-corrupted systems. As a specific example, the Duffing equation, a nonlinear oscillator representing the roll dynamics of a vessel, was chosen for the study. State estimation and control of such systems in the presence of measurement noise is the prime goal of this research. A nonlinear estimation technique suitable for chaotic systems was evaluated against conventional methods based on a linear equivalent model, and proved to be very efficient. A state feedback controller and a sliding mode controller were applied to the chaotic system, and both techniques provided satisfactory results. Investigating the persistence of chaotic behavior of the controlled system is a secondary goal. Simulation results showed that the chaotic behavior persisted in the case of the linear feedback controller, while in the case of the sliding mode controller the system did not exhibit any chaotic behavior.

Table of Contents

Abstract
List of Figures
Acknowledgement
1 Introduction
1.1 Background
1.2 Purpose of this Thesis
1.3 Motivation
1.4 Contribution of this research
1.5 Outline of the thesis
2 Chaotic dynamics
2.1 Local Bifurcation
2.1.1 Bifurcation Problem
2.1.2 Cyclic fold
2.1.3 Flip bifurcation (period-doubling)
2.2 Chaotic Map
2.3 Mathematical Model (Duffing Equation)
3 Sliding Mode Control
3.1 Background
3.2 Design Procedure
3.2.1 Sliding surface design
3.2.2 Controller design
3.3 Overview of the Boundary Layer Technique
3.4 Sliding mode control of the Duffing equation
4 State Estimation of the Duffing Equation
4.1 Background
4.2 Mean and Covariance Propagation
4.3 Kalman Filter Calculations
4.4 Calculation of a Set of Initial Conditions from a Given Covariance Matrix
4.5 State Estimation of the Duffing Equation
4.6 Equivalent Linear Noise Model for the Duffing Equation
4.7 State Estimation of the Duffing Equation Based on the Linear Model
4.8 Closed-Loop Control of the Duffing Equation
4.9 Simulation Results
5 Conclusions and Future Work
Bibliography

List of Figures

2.1 Invariant sets of $\dot{x} = Ax$
2.2 Wandering and nonwandering points
2.3 Phase portraits for Example 2.2
2.4 Fold bifurcation diagram
2.5 Jump phenomenon and the points A and B associated with the cyclic fold
2.6 Flip bifurcation diagram
2.7 Resonance curve for the nonlinear roll equation (from Bishop et al.)
2.8 Period-doubling cascade of the roll equation (from Virgin, 1987)
2.9 Schematic diagram of the nonlinear circuit (from Ueda, 1985)
2.10 Fold bifurcation for Ueda's model (from Bishop et al., 1988)
2.11 Sensitive dependence on initial conditions for the Duffing equation
2.12 Existence of a dense orbit in the Duffing equation
2.13 Power spectrum of the first state of the Duffing equation
2.14 Poincare map for Ueda's model (from Ueda, 1980a)
3.15 An alternative way to control a system via a line passing through the origin
3.16 Boundary layer and smoothed control law inside the boundary layer: a) boundary layer b) control law
3.17 Output response of the system in Example 3.2 in two cases: a) with boundary layer b) without boundary layer
3.18 Phase plane plot of Example 3.2: a) with boundary layer b) without boundary layer
3.19 Control input of the system in Example 3.2 in two cases: a) with boundary layer b) without boundary layer
3.20 Output response of the Duffing equation without boundary layer for three different initial conditions: a) I.C.=(3,4) b) I.C.=(-2.0,0) c) I.C.=(1,-3.0)
3.21 Output response of the Duffing equation with boundary layer for three different initial conditions: a) I.C.=(3,4) b) I.C.=(-2.0,0) c) I.C.=(1,-3.0)
3.22 Control input of the Duffing equation without boundary layer for three initial conditions: a) I.C.=(3,4) b) I.C.=(-2.0,0) c) I.C.=(1,-3.0)
3.23 Control input of the Duffing equation with boundary layer for three different initial conditions: a) I.C.=(3,4) b) I.C.=(-2.0,0) c) I.C.=(1,-3.0)
3.24 Output of the uncontrolled Duffing equation for three different initial conditions: a) I.C.=(3,4) b) I.C.=(-2.0,0) c) I.C.=(1,-3.0)
3.25 Phase plane plot of the Duffing equation without boundary layer control for three different initial conditions: a) I.C.=(3,4) b) I.C.=(-2.0,0) c) I.C.=(1,-3.0)
3.26 Phase plane plot of the Duffing equation with boundary layer control for three different initial conditions: a) I.C.=(3,4) b) I.C.=(-2.0,0) c) I.C.=(1,-3.0)
3.27 Output response of the Duffing equation (I.C.=(3,4)) without boundary layer for three different switching gains: a) K=7.7 b) K=20.0 c) K=50.0
3.28 Control input of the Duffing equation (I.C.=(3,4)) without boundary layer for three different switching gains: a) K=7.7 b) K=20.0 c) K=50.0
3.29 Phase plane plots of the Duffing equation (I.C.=(3,4)) without boundary layer for three different switching gains: a) K=7.7 b) K=20.0 c) K=50.0
4.30 Block diagram of the state estimation for a chaotic system
4.31 Autocorrelation of the state x1
4.32 Power spectral density of the first state
4.33 Approximate autocorrelation function for state x1
4.34 Block diagram of the estimation algorithm
4.35 Block diagram of the linear model with a high-gain feedback
4.36 Root locus of the compensated linear Duffing equation
4.37 Linear and nonlinear estimation of the output of the Duffing equation (x1) for three different initial covariance matrices
4.38 Linear and nonlinear estimation of the 2nd state of the Duffing equation (x2) for three different initial covariance matrices
4.39 Variance of the output error of the Duffing equation using both estimation schemes for three different initial covariance matrices
4.40 Variance of the error of the 2nd state of the Duffing equation using both estimation schemes for three different initial covariance matrices
4.41 Kalman gain (K11) for the Duffing equation using both estimation schemes for three different initial covariance matrices
4.42 Kalman gain (K22) for the Duffing equation using both estimation schemes for three different initial covariance matrices
4.43 Estimation error of the output of the Duffing equation (x1 - x̂1) using both estimation schemes for three different initial covariance matrices
4.44 Estimation error of the 2nd state of the Duffing equation (x2 - x̂2) using both estimation schemes for three different initial covariance matrices
4.45 Noisy measurement of the first state x1 of the Duffing equation with constant measurement noise level
4.46 Noisy measurement of the 2nd state x2 of the Duffing equation with constant measurement noise level
4.47 Estimation of the output of the Duffing equation (x1) using both estimation schemes for three different measurement noise levels
4.48 Estimation of the 2nd state of the Duffing equation (x2) using both estimation schemes for three different measurement noise levels
4.49 Variance of the error of the first state of the Duffing equation using both estimation schemes for three different measurement noise levels
4.50 Variance of the error of the 2nd state of the Duffing equation using both estimation schemes for three different measurement noise levels
4.51 Kalman gain k11 for the Duffing equation using both estimation schemes for three different measurement noise levels
4.52 Kalman gain k22 for the Duffing equation using both estimation schemes for three different measurement noise levels
4.53 Estimation error of the output of the Duffing equation using both estimation schemes for three different measurement noise levels
4.54 Estimation error of the 2nd state of the Duffing equation using both estimation schemes for three different measurement noise levels
4.55 Noisy measurement of the first state x1 of the Duffing equation with constant initial covariance matrix R11 = 1.0, R22 = 1.0
4.56 Noisy measurement of the 2nd state x2 of the Duffing equation with constant initial covariance matrix R11 = 1.0, R22 = 1.0
4.57 Controlled Duffing equation using sliding mode and PD control with noisy measurement
4.58 Sliding mode control of the Duffing equation using both estimation schemes
4.59 PD control of the Duffing equation using both estimation schemes
4.60 Variances of the states of the controlled Duffing equation using both controllers

Acknowledgement

I wish to thank my thesis supervisor, Prof. M.S. Davies, for his help and valuable guidance throughout my research. I would also like to thank all my friends in the control group and the staff at the UBC Pulp and Paper Center. Lastly, I am most grateful to my wonderful family, my wife, Bahar, and my daughters Zahra and Maryam, for their support and patience during the long process of this thesis.

Chapter 1
Introduction

1.1 Background

Newtonian dynamics represented a milestone in the history of the development of applied science, for this was the first time differential equations were used as a model for a physical phenomenon (Guckenheimer & Holmes, 1986). In spite of the simplicity of differential equations, the solutions to some of them are in general not easy and sometimes impossible to obtain in closed form. For nonlinear systems some approximate schemes, such as perturbation methods, have been developed, but they are usually applicable only to weakly nonlinear problems. Poincare, in the late nineteenth century, used analysis as a tool to solve differential equations and added geometrical concepts to develop a qualitative approach to the study. A class of nonlinear systems now called chaotic has its origins in the work of Poincare, but it was not until recently that the work of Lorenz and others drew the attention of scientists in other fields to this area. A number of researchers such as Kolmogorov (1957), Arnold (1963), and Siegel and Moser (1971) pursued this line of research in the area of celestial mechanics.
Others, such as Barenblatt, Iooss and Joseph (1983), used these ideas to analyze turbulent flows. Some electronic circuits exhibiting chaotic behavior were constructed in the mid-1980s; details of these circuits can be found in Newcomb & Sathyan (1983), Azzoz et al. (1983) and Rodrigues-Vazques et al. (1985) (Fowler, 1986). The conclusion drawn by all these researchers was that nonlinearities could be a source of random behavior in a system. In a related area, Prigogine and his co-workers, working on non-equilibrium thermodynamics and biological systems, showed that nonlinearities can also give rise to ordered structures; Prigogine identified these as dissipative structures (Prigogine, 1967).

In a purely chaotic system one can observe these two fundamental structures. Looking at the solutions of a chaotic system in phase space, it can be seen that all trajectories are attracted to a certain bounded region of phase space, but then move randomly inside this bounded region and never settle down to a specific point or an orbit. Another intuitive insight into chaotic systems can be obtained by considering the evolution of trajectories originating within a small volume identifying the initial condition region. In the case of a non-chaotic dissipative stable system, the volume occupied by the set of state vectors x(t) will shrink to the final equilibrium point. In the case of a dissipative chaotic system, the volume will not shrink to a final equilibrium point, but rather to a bounded region of state space called the attracting region. The initial condition volume, now decreased to a surface, folds and stretches while inside the attracting region. This folding of the trajectories causes a mixing of the states and a dispersion of them into a wider area compared with the volume of initial conditions at the beginning. For example, if two trajectories start from two adjacent initial points within the attracting region, the distance between the two trajectories will be greater than the initial distance when they return to the attracting region. This translates to a loss of information about the initial state, which indicates the element of random behavior in a chaotic system.

1.2 Purpose of this Thesis

The aim of this research was to investigate the control of a class of chaotic nonlinear systems in two cases. These systems are deterministic, but their behavior in the presence of initial condition uncertainties is similar to that of noise-corrupted systems. Because of this similarity, well-developed techniques of stochastic filtering seemed suitable for the purpose of state estimation. The two controllers tested were a lead-lag compensator designed using a linearized version of the system, and a sliding mode controller. Investigating the persistence of chaotic behavior under these controllers was the more specific goal of this research.

1.3 Motivation

Because of the close resemblance between chaotic and noise-corrupted systems, it is worth investigating the applicability of the well-developed techniques of stochastic filtering and control to this kind of problem. The possibility of a chaotic solution for a nonlinear system, coexisting with other solutions, makes this of special interest. In the case of sliding mode control, it was believed that, due to the strong robustness of this kind of controller in the presence of unmodelled dynamics and uncertainties, it was a good candidate for controlling such systems.
1.4 Contribution of this research

The main contribution of the research documented here is the application of new controllers to a chaotic system. The models used in (Fowler, 1986) are all non-driven Henon and Lorenz equations. In this research a driven nonlinear oscillator known as the Duffing equation is used. In addition to being used to describe biological systems, this model has a wide range of applications in mechanical and electrical engineering. Specifically, the Duffing equation can be used to model the roll dynamics of a vessel and the vibration of off-shore structures. Extensive work in this area has been done by Bishop and Virgin (1987, 1988). The application of the sliding mode controller to such models has not been investigated previously. The estimation technique explained in (Fowler, 1986) has been evaluated against a Kalman filter design based on the linear equivalent noise model.

1.5 Outline of the thesis

The thesis is organized as follows. Chapter 2 covers the dynamics of Duffing's equation, with the necessary background to define a chaotic system. The route to chaotic behavior (local bifurcations) is explained in this chapter in order to present an overall picture of the emergence of chaotic dynamics. Chapter 3 deals with the definition of sliding mode control and the design technique for the standard case and the boundary layer case; both schemes are applied to the Duffing equation. Chapter 4 explains the nonlinear estimation technique, which covers uncertainty propagation in chaotic systems and the calculation of the covariance needed in Kalman filter estimation. To evaluate the nonlinear estimation technique, a standard Kalman filter using a linear noise-equivalent model of the chaotic system is devised. The feedback loop is then closed and the result of the state estimation is fed into a compensator. The performance of both control schemes, the feedback and the sliding mode controller, is compared with respect to the persistence of chaotic behavior of the controlled system. In Chapter 5, conclusions and directions for future work are presented.

Chapter 2
Chaotic dynamics

This chapter will briefly present a formal definition of a chaotic system and show the characteristics of chaotic behavior in the Duffing equation. The models considered in modern control theory are normally deterministic, and in cases where a stochastic approach is adopted, there is still an underlying deterministic model. A deterministic model can be characterized as one in which a given set of initial conditions will repeatedly generate the same response. Chaotic systems are deterministic in the foregoing sense, but nonetheless stochastic in the practical sense that the slightest uncertainty in the system's initial conditions will grow as time progresses, leading to total uncertainty about the system's state. Trajectories of such systems resemble those of stochastic systems. In order to define these systems more formally we need to start with standard definitions from the theory of differential equations. We start with the definition of an autonomous system.

Definition 2.1 A system described by the differential equation
$\dot{x}(t) = f[t, x(t)]$   (2.1)
is said to be autonomous if $f(x,t)$ is independent of $t$, and is said to be nonautonomous otherwise. (Vidyasagar, 1978)

Definition 2.2 A vector $x_0 \in \Re^n$ is said to be an equilibrium point at time $t_0 \in \Re^+$ of equation (2.1) if
$f(t, x_0) = 0, \quad \forall t \ge t_0$   (2.2)
If $x_0$ is an equilibrium of (2.1) at time $t_0$, then it is an equilibrium point of (2.1) at all times $t \ge t_0$.

We shall be restricting our attention to autonomous systems. If the system is autonomous, $x_0$ is an equilibrium of (2.1) for all times; therefore $x_0$ is a constant solution of (2.1). Equilibrium points, however, are not the only constant solutions observed in differential equations, since the limit cycle, a steady closed oscillation, is also classified in this way.

Definition 2.3 A stable limit cycle is one that attracts all nearby motions. For a stable limit cycle, the origin (0,0) is unstable, so that trajectories of small amplitude move outwards, while at the same time trajectories of large amplitude move inwards. (Thompson & Stewart, 1986)

The analysis of limit cycles often makes use of the intersection of a transversal plane with the trajectories. This transversal plane is called a Poincare plane and the resulting map is called a Poincare map. With the help of a Poincare map, the dynamics of closed orbits can be reduced to that of equilibrium points. Furthermore, the dimension of the problem is reduced by one through the use of Poincare maps. In order to study the asymptotic behavior of the solutions of a dynamic system, in addition to defining singular points and limit cycles, the additional concepts of invariant sets and nonwandering sets are needed.

Definition 2.4 A set $S$ is called invariant if for $x_0 \in S$ the solution $\varphi_t \in S$ for all $t \in \Re$. It is clear that equilibrium points and limit cycles are invariant sets. (Guckenheimer & Holmes, 1986)

Figure 2.1: Invariant sets of $\dot{x} = Ax$

Example 2.1: Consider the linear system $\dot{x} = Ax$ with
$A = \begin{bmatrix} 1 & 1 \\ 0 & -2 \end{bmatrix}$
For the above system the equilibrium points and the invariant sets are:
Equilibrium point: $Ax = 0 \Rightarrow x = 0$
Eigenvalues: $1, -2$
Eigenvectors: $\mathrm{span}(1,0)$, $\mathrm{span}(1,-3)$
The eigenvectors in this example are the lines $y = 0$ and $y = -3x$. The line $y = 0$, associated with the eigenvalue $\lambda_1 = 1$, is called the unstable manifold $E^u$, and $y = -3x$, associated with the eigenvalue $\lambda_2 = -2$, is called the stable manifold $E^s$. The invariant sets in this example are the fixed point (0,0), $E^u$ and $E^s$. All three have the property that the trajectory stays within that particular solution if it starts from within that invariant set initially.

As seen in this example, the invariant sets of the system provide information describing the qualitative behavior of the solutions. Knowledge of the fixed point and the stable and unstable manifolds can give a good understanding of the solutions of the system. It is this characteristic of the invariant sets that makes them important tools in the study of nonlinear systems. A generalization of the concept of the invariant set is that of nonwandering sets. It is easier to understand the definition of nonwandering sets through that of wandering sets.

Figure 2.2: Wandering and nonwandering points

Definition 2.5 A point $x_0$ is wandering if there is an open neighborhood $U$ of $x_0$ and some $T > 0$ such that the solution satisfies $\varphi_t(U) \cap U = \emptyset$ for $t > T$; otherwise, $x_0$ is nonwandering. (Guckenheimer & Holmes, 1986)

2.1 Local Bifurcation

Now that the primary concepts needed to consider nonlinear dynamics have been presented, we next introduce local bifurcation theory. Systems of physical interest typically have variable parameters which appear in the defining systems of equations. As these parameters are varied, changes may occur in the qualitative structure of the solutions.
These changes are called bifurcations, and the corresponding parameter values are called bifurcation values. General bifurcation analysis is a complicated area; however, here we need only consider simple bifurcations of individual equilibria and periodic orbits, especially those observed in the Duffing equation that is considered extensively here. The analysis of bifurcations is generally performed by studying the responses near the degenerate (bifurcating) equilibrium point or closed orbit. When bifurcating solutions are found in a neighborhood of that limit set, these bifurcations are classified as local. When the changes associated with varying parameters are not limited to a neighborhood of a limit set, and instead are related to the whole flow of the system, they are called global bifurcations. (Guckenheimer & Holmes, 1986)

2.1.1 Bifurcation Problem

Important local bifurcations can be identified by an appropriate local simplification of the dynamics. Normally an approximation is made that takes into account the lowest-order component of the vector field near a point in phase space, as well as the lowest-order part of its dependence on the varying parameter. In other words, a low-order approximation is carried out near the point of interest in parameter-phase space. For dynamic systems described explicitly by differential equations such as
$\dot{x} = f(\mu, x)$   (2.3)
where the state $x$ and the parameter $\mu$ are vectors, the local bifurcations can be studied by computation of normal forms that embody the low-order approximations (Guckenheimer & Holmes, 1986). The equilibrium points are first located by finding states $x_0(\mu)$ such that
$f(\mu, x_0(\mu)) = 0$   (2.4)
For example, if $\mu$ represents a single scalar parameter, these $x_0(\mu)$ are paths of equilibrium points in control-phase space. Next, the linearization of the vector field near each $x_0$ is examined. Any value of $\mu_0$ for which the linearization has eigenvalues with zero real part is non-hyperbolic, i.e. a candidate for a bifurcation point. Finally, one must compute the terms of the nonlinear function $f(\cdot)$ that are of lowest order near $x_0(\mu)$ to obtain the local forms of bifurcation (Thompson & Stewart, 1986).

Example 2.2: Consider the following system:
$\dot{x} = f(\mu, x), \qquad x \in \Re^n,\ \mu \in \Re^k$   (2.5)
$f(\mu, x) = \mu x - x^3$
Here $D_x f = \mu - 3x^2$, and the only bifurcation point is $(x, \mu) = (0, 0)$. It is easy to check that the unique fixed point $x = 0$ existing for $\mu < 0$ is stable, that it becomes unstable for $\mu > 0$, and that the new bifurcating points at $x = \pm\sqrt{\mu}$ are stable. Figure 2.3 describes the stability of the fixed point $x = 0$ for different values of $\mu$ graphically (plot of $f(x)$ vs. $x$). For a fixed value $\mu = \bar{\mu}$, $f(\mu, x) = f_{\bar{\mu}}(x) = f(x)$. The intersection of $f(x)$ with the $x$ axis is an equilibrium point, since at that point $f(x) = 0$. Since $\dot{x} = f(x)$, the type of equilibrium point in one dimension is indicated by the sign of $f(x)$ between its intersections with the $x$ axis: whenever $f(x) > 0$ the arrow points towards the positive $x$ axis, and whenever $f(x) < 0$ the arrow points towards the negative $x$ axis. Thus a point with both arrows pointing towards a fixed point is a sink, and one with arrows pointing outwards from a fixed point is a source. In this manner, from Figure 2.3 it is clear that for $\mu < 0$ the equilibrium is a sink (stable), while for $\mu > 0$ there are three fixed points: two sinks (stable) and a source (unstable). A diagram showing the variation of $x_0$ (fixed point) with respect to $\mu$ is called a bifurcation diagram, and in Figure 2.3 a pitchfork bifurcation is shown.
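To make the stability classification in Example 2.2 concrete, the short sketch below (an added illustration, not part of the thesis) evaluates the derivative $D_x f = \mu - 3x^2$ at each equilibrium of $\dot{x} = \mu x - x^3$ for a few values of $\mu$; a negative value marks a sink and a positive value a source, reproducing the pitchfork pattern of Figure 2.3.

```python
import numpy as np

def equilibria(mu):
    """Fixed points of x_dot = mu*x - x**3."""
    if mu <= 0:
        return [0.0]
    return [0.0, np.sqrt(mu), -np.sqrt(mu)]

def classify(mu, x0):
    """The sign of D_x f = mu - 3 x^2 decides stability in one dimension."""
    slope = mu - 3.0 * x0**2
    return "sink (stable)" if slope < 0 else "source (unstable)"

for mu in (-1.0, 0.5, 2.0):
    for x0 in equilibria(mu):
        print(f"mu = {mu:+.1f}  x0 = {x0:+.3f}  ->  {classify(mu, x0)}")
```

For negative $\mu$ only the stable origin appears; once $\mu$ becomes positive the origin turns into a source and the two new equilibria at $\pm\sqrt{\mu}$ are sinks.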
Since the model used in this thesis is the Duffing equation, the two bifurcations associated with this equation, namely the cyclic fold and the flip, are considered next. These are the most common bifurcations encountered in limit cycles.

Figure 2.3: Phase portraits for Example 2.2 (for $\mu > 0$, $\mu < 0$ and $\mu = 0$)

2.1.2 Cyclic fold

The first case to be considered is the cyclic fold. Two limit cycles, one stable and the other unstable, coexist for certain values of the varying parameter and approach one another as the parameter is varied. At the bifurcation point they collide, and afterwards every trajectory leaves the immediate neighborhood. The cyclic fold is often associated with the so-called jump phenomenon, or hysteresis. In the jump phenomenon there exists a range of frequencies for which three output amplitudes are possible, as shown in Figure 2.5. The fold bifurcation in this case, however, takes place with the participation of one stable and one unstable solution, depending on which branch of the resonance curve the initial amplitude is found. As is usually the case for the study of limit cycles, the bifurcation of the fixed points of the Poincare map is considered. In this case the bifurcation diagram of the map looks like a fold (saddle-node) bifurcation. Figure 2.4 shows the bifurcation diagram for the Poincare map of such a bifurcation. (Thompson & Stewart, 1986)

Figure 2.4: Fold bifurcation diagram (stable equilibrium A, unstable equilibrium B, bifurcation point)

Figure 2.5: Jump phenomenon and the points A and B associated with the cyclic fold

2.1.3 Flip bifurcation (period-doubling)

In this type of bifurcation a stable limit cycle loses its stability, while another closed orbit is born whose period is twice the period of the original cycle. This bifurcation requires at least a three-dimensional phase space, and it can arise in both subcritical (catastrophic) and supercritical (subtle) forms. In physical examples a sequence of n period-doubling bifurcations may be observed, in which a stable limit cycle with period $2^n T$ is finally obtained. Figure 2.6 shows the flip bifurcation diagram. In this figure a steady solution $x_0$ becomes unstable and a period-2 orbit emerges at the bifurcation value $\lambda_1$, so that the system completes a cycle in double the original period. If the sequence is infinite, with a final accumulation point, the dynamics beyond that point are chaotic. The period-doubling sequence has been extensively studied by Feigenbaum (Feigenbaum, 1980). In his work he shows that a ratio formed from successive bifurcation values tends towards a limit known as the Feigenbaum number, the universal constant 4.669. The universality is in the sense that it is independent of the system undergoing the period-doubling cascade (Thompson & Stewart, 1986).

Figure 2.6: Flip bifurcation diagram

2.2 Chaotic Map

The complex phenomenon known as chaotic behavior that occurs after the period-doubling cascade is only partially understood. Although period-doubling is not the only route to chaotic dynamics, it is unique in that it is the only continuous transition leading to chaos. Before presenting the formal definition of chaotic behavior, some terms need to be defined.
Definition 2.6 A subset $U$ of $S$ is said to be dense in $S$ if for every point $y \in S$ we can find a point $x$ in $U$ which is arbitrarily close to $y$. (Devaney, 1989)

A good example of a dense subset is the set of rational numbers $Q$, which is dense in $\Re$.

Definition 2.7 A map $f : J \to J$ is said to be topologically transitive if for any pair of open sets $U, V \subset J$ there exists $k > 0$ such that $f^k(U) \cap V \neq \emptyset$, where $f^k(U)$ refers to the $k$th iterate of the map in the forward direction. Intuitively, under a topologically transitive map all points eventually move under repeated iteration from one arbitrarily small neighborhood to any other. Consequently, the dynamical system cannot be decomposed into disjoint open sets which are invariant under the map.

Definition 2.8 $f : J \to J$ has sensitive dependence on initial conditions if there exists $\delta > 0$ such that, for any $x \in J$ and any neighborhood $N$ of $x$, there exist $y \in N$ and $n > 0$ such that $|f^n(x) - f^n(y)| > \delta$. Intuitively, a map possesses sensitive dependence on initial conditions if there exist points arbitrarily close to $x$ which eventually separate from $x$ by at least $\delta$ under iteration of $f$. If a map possesses sensitive dependence on initial conditions, then for all practical purposes the dynamics of the map defy numerical computation: small errors in computation which are introduced by round-off may become magnified upon iteration. (Devaney, 1989)

Definition 2.9 Let $V$ be a set. $f : V \to V$ is said to be chaotic on $V$ if (Devaney, 1989)
• $f$ has sensitive dependence on initial conditions;
• $f$ is topologically transitive;
• periodic orbits are dense in $V$.

To summarize, a chaotic map possesses three ingredients: unpredictability, indecomposability, and an element of regularity. A chaotic system is unpredictable because of the sensitive dependence on initial conditions. It cannot be decomposed into two subsystems (two invariant open subsets). And, in the midst of this random behavior, we nevertheless have an element of regularity, namely the periodic points, which are dense.

2.3 Mathematical Model (Duffing Equation)

The model used in this thesis was originally adopted from a general nonlinear oscillator of the form:
$\ddot{x} + G(\dot{x}) + H(x) = F\cos\omega t$
$G(\dot{x}) = g_1\dot{x} + g_3\dot{x}^3$   (2.6)
$H(x) = \omega_0^2\, x\left(1 + \tfrac{z_3}{z_1}x^2 + \cdots + \tfrac{z_{2n-1}}{z_1}x^{2n-2}\right)$
where $\omega_0$ is the natural frequency, and $g_i$ and $z_i$ are coefficients representing the damping and restoring forces respectively. This model has been used to describe the roll dynamics of a ship in a regular sea state. It has been assumed that the motions are uncoupled and that the added mass and damping terms are independent of frequency. The coefficients $g_i$ and $z_i$ are obtained from roll decay tests in calm water, further details of which can be found in Marshfield and Wright (Marshfield & Wright, 1980). This model and its variations have been found to represent physical systems such as the periodically forced buckled beam (Holmes & Moon, 1983), the dynamics of a moored semi-submersible (Virgin & Bishop, 1988), and nonlinear electrical circuits (Ueda, 1980). The model in its general form, representing the roll dynamics of a vessel, has been extensively studied by Virgin and Bishop. As $\omega$, the forcing frequency, is varied, two bifurcations are observed: a cyclic fold and a period-doubling cascade leading to chaos. The standard resonance curve for this model has been reported in Bishop (Bishop, Leung, Virgin, 1986). Figure 2.7 shows values at which the fold bifurcation occurs. The resonance curve resembles that of a softening spring.
In this case the fold bifurcation leads to capsize. The period-doubling cascade resulting when a constant is added to the model, to represent the effect of cargo shift or steady winds on the ship, is shown in Figure 2.8. As mentioned earlier in the chapter, this type of analysis is carried out using the Poincare map of the system; Figure 2.8 is the Poincare map of the model.

Figure 2.7: Resonance curve for the nonlinear roll equation, relative roll angle (rad) vs. forcing frequency; numerical simulation compared with the perturbation solution (from Bishop et al.)

Figure 2.8: Period-doubling cascade of the roll equation (from Virgin, 1987)

A variation of the general model is that extensively studied by Ueda (Ueda, 1980a). This model involves the familiar case of the Duffing equation with linear damping and a cubic restoring force. The equation models a series-resonance circuit with nonlinear inductance (Figure 2.9). The circuit equations are written as
$n\frac{d\phi}{dt} + R\,i_R = E\sin\omega t$   (2.7)
$R\,i_R = \frac{1}{C}\int i_C\,dt, \qquad i = i_R + i_C$
where $n$ is the number of turns of the inductor coil and $\phi$ is the magnetic flux in the core.

Figure 2.9: Schematic diagram of the nonlinear circuit (from Ueda, 1985)

We consider the case in which the saturation curve of the core is expressed by
$i = a\phi^3$   (2.8)
The effect of hysteresis is neglected in the above equation. The dimensionless variable $x$ is defined by
$\phi = \Phi_n x$   (2.9)
where $\Phi_n$ is an appropriate base quantity of flux, defined by the relation
$n\omega^2\Phi_n = a\Phi_n^3$   (2.10)
Then, eliminating $i_R$ and $i_C$ in equation (2.7) and using equations (2.8), (2.9) and (2.10), we obtain the Duffing equation of the form (Ueda, 1985)
$\ddot{x} + k\dot{x} + x^3 = B\cos\tau, \qquad \tau = \omega t - \arctan k, \qquad k = \frac{1}{\omega C R}$   (2.11)
where $B$ is a dimensionless forcing amplitude proportional to $E$.

Figure 2.10: Fold bifurcation for Ueda's model, k = 0.2; digital and analogue simulations compared with Ueda's results (from Bishop et al., 1988)

As in the case of the general model, we will be observing the dynamics of equation (2.11) when the variable parameter is $\omega$, the forcing frequency. In a two-dimensional control space, where two parameters are varied at the same time, more complex phenomena can occur, for example a transversal homoclinic orbit (Guckenheimer & Holmes, 1986), but in a one-parameter family of solutions the two possible bifurcations are the fold and the flip. Figure 2.10 shows the results for k = 0.2, where the x-axis shows the value of $\omega B^{-1/3}$, a measure of the forcing-frequency/system-frequency ratio, and the y-axis is $B^{-1/3}$ multiplied by the amplitude, which gives an appropriate amplitude/static-response measure. The flip bifurcation for Ueda's model is shown in his 1980 paper (Ueda, 1980a), where the different cases are shown through phase plane solutions.

Figure 2.11: Sensitive dependence on initial conditions for the Duffing equation

A set of parameters for which chaotic solutions exist is k = 0.05 and B = 7.5. With this set of parameters one can identify the properties of a chaotic map defined earlier, namely
• many periodic orbits,
• a dense orbit,
• sensitive dependence on initial conditions.
Sensitive dependence on initial conditions can be demonstrated by the divergence of trajectories from adjacent starts.
Figure 2.11 shows this property; the two sets of initial conditions are (3,4) and (3.01,4.01). The dense orbit can be observed from phase plane solutions: although the actual phase space is three-dimensional, the projection of the solution onto a two-dimensional plane shows that the dense trajectory covers the attracting region. Figure 2.12 shows the dense orbit. The presence of many periodic orbits is demonstrated by the power spectrum of the solution, which shows a rich frequency content; Figure 2.13 shows the power spectrum for the set of parameters discussed previously. The Poincare map for Ueda's model is taken from (Ueda, 1980a); it shows the effect of folding and stretching of trajectories in the chaotic range.

Figure 2.12: Existence of a dense orbit in the Duffing equation
Figure 2.13: Power spectrum of the first state of the Duffing equation
Figure 2.14: Poincare map for Ueda's model (from Ueda, 1980a)

In this chapter a formal definition of a chaotic system was introduced and the model for the nonlinear oscillator was presented. In the following chapters the effect of control action on a chaotic system is investigated, and the Duffing equation in the form studied by Ueda is used as an example. The Duffing equation gives complex nonlinear oscillatory behavior from a relatively simple generating equation.
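As a quick numerical check of the sensitive dependence illustrated in Figure 2.11, the following sketch (added here for illustration; the integrator step size and simulation length are assumptions, not values from the thesis) integrates Ueda's equation $\ddot{x} + 0.05\dot{x} + x^3 = 7.5\cos t$ with a fixed-step fourth-order Runge-Kutta scheme from the two nearby starts (3, 4) and (3.01, 4.01) and prints how far apart the trajectories have drifted.

```python
import numpy as np

K, B = 0.05, 7.5  # damping and forcing amplitude used in the thesis

def duffing(t, s):
    """Ueda's Duffing oscillator written as two first-order equations."""
    x, v = s
    return np.array([v, -K * v - x**3 + B * np.cos(t)])

def rk4(f, s0, t_end, dt=0.001):
    """Fixed-step 4th-order Runge-Kutta integration, returning the final state."""
    s, t = np.array(s0, dtype=float), 0.0
    while t < t_end:
        k1 = f(t, s)
        k2 = f(t + dt / 2, s + dt / 2 * k1)
        k3 = f(t + dt / 2, s + dt / 2 * k2)
        k4 = f(t + dt, s + dt * k3)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return s

for t_end in (5.0, 20.0, 50.0):
    a = rk4(duffing, (3.0, 4.0), t_end)
    b = rk4(duffing, (3.01, 4.01), t_end)
    print(f"t = {t_end:5.1f}  separation = {np.linalg.norm(a - b):.4f}")
```

The initial separation of about 0.014 typically grows until it is comparable to the size of the attracting region, while both trajectories remain bounded; this is exactly the loss of initial-condition information described in Chapter 1.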
Chapter 3
Sliding Mode Control

3.1 Background

The main objective in a conventional control problem is often to guide the system trajectory (solution) to a final state. The system dynamics, represented by a differential equation, are manipulated through a function of a control variable, u, so as to achieve the control objectives. The control law f(u), normally a linear function of u, has a fixed structure, often a linear combination of states with fixed gain coefficients. Even if the coefficients of the control law are changed (adaptive control), the structure remains fixed. In variable structure control, changes can occur in the structure of the control law during transients in accordance with some preassigned rule. These changes occur at times determined by the current value of the error signal and its derivatives. This changing structure can bring great advantages by resolving the conflict between static accuracy (stability, noise immunity) and speed of response. For example, it is well known that for a type-0 system, if the control law employs integral control, the system has zero steady-state error in response to a step input. However, if the integral gain is high, overshoot will occur, increasing sharply as a function of the gain. On the other hand, the application of a proportional feedback control will result in steady-state error, forcing the control engineer to compromise between the two control strategies and accept a small overshoot and a reasonable speed of response. One approach to reconciling these two requirements makes use of the principle of variable structure. It is intuitively clear that if the control law during the first stage of the transient (while the error is large) is chosen as proportional control, but during the final stage (where the error is small) the control law is changed to integral control, a high-quality transient response can be obtained (Itkis, 1976).

Figure 3.15: An alternative way to control a system via a line passing through the origin

The phase plane provides a convenient view of this technique. The task of the controller is to bring the system trajectory, starting from a set of initial conditions, to the origin of a plane defined using the error and its derivative as coordinates. In addition, the controlled plant has to meet transient response specifications. A PD controller could be used for a type-0 second-order system to achieve the control objectives and would give a smooth phase plane trajectory. As an alternative, the two-stage trajectory starting from the same initial conditions and shown in Figure 3.15 can be followed. The form of the controller changes when the trajectory encounters the switching line that passes through the origin. The equation of the line $\sigma$ is given by:
$\sigma : \dot{x} + \lambda x = 0$   (3.12)
If the trajectory is moving along this line, the response of the system is forced to obey equation (3.12), regardless of the dynamics of the process. If a controller is devised that results in these new dynamics, then parameter variations and unmodelled dynamics in the original model will not influence the response. Similarly, disturbances will be counteracted, as the system is forced to stay on the line. The steady-state requirements are fulfilled since the trajectory converges to the origin, and the transient requirements can be incorporated into the equation of the line, usually referred to as the sliding line or switching line. It can easily be seen that the solution to equation (3.12) is a decaying exponential with time constant equal to $1/\lambda$. Overshoot requirements can be represented by limiting the distance between the origin and a line parallel with the y axis. In order to force the trajectory to follow this path in the phase plane an appropriate controller is used. An intuitive requirement is that the tangent to the trajectory is always forced to point towards the sliding line. The trajectory will then eventually reach the line and stay there for all $t > t_0$. If the trajectory crosses the sliding line, then the controller is changed so as to keep the trajectory pointing towards the line. This change in direction is characteristic of a variable structure system; sliding mode control is one type of variable structure system. The development of systems with such invariance properties as those of sliding mode was the original motivation for the subsequent development of the theory of variable structure systems (Itkis, 1976).

To formulate these ideas, consider the general system of differential equations
$\frac{dx_i}{dt} = f_i(x_1, \ldots, x_n, t) \qquad (i = 1, \ldots, n)$   (3.13)
Let us assume that the right-hand sides of these equations are discontinuous on a certain hypersurface $\sigma(x_1, \ldots, x_n) = 0$ in phase space, in such a way that the left- and right-hand limits of the functions $f_i(x_1, \ldots, x_n, t)$ $(i = 1, \ldots, n)$ as the trajectory approaches $\sigma = 0$ from either side exist:
$\lim_{\sigma\to 0^-} f_i(x_1, \ldots, x_n, t) = f_i^-(x_1, \ldots, x_n, t)$
$\lim_{\sigma\to 0^+} f_i(x_1, \ldots, x_n, t) = f_i^+(x_1, \ldots, x_n, t)$
where, in general, $f_i^-(x_1, \ldots, x_n, t) \neq f_i^+(x_1, \ldots, x_n, t)$ $(i = 1, \ldots, n)$.
The derivative of the function $\sigma$ along the trajectories of the system (3.13) is
$\frac{d\sigma}{dt} = \sum_{i=1}^{n}\frac{\partial\sigma}{\partial x_i}\,f_i = (f\cdot\mathrm{grad}\,\sigma)$   (3.14)
where $f$ is an n-vector with components $f_1, \ldots, f_n$ (the phase velocity vector), $\mathrm{grad}\,\sigma$ is the gradient of the hypersurface $\sigma = 0$, directed along the normal $N$ to the hypersurface, and $(f\cdot N)$ is the scalar product of the phase velocity with that normal. By equations (3.12), (3.13), (3.14), the following limits exist:
$\lim_{\sigma\to 0^-}\frac{d\sigma}{dt} = (f^-\cdot\mathrm{grad}\,\sigma)$   (3.15)
$\lim_{\sigma\to 0^+}\frac{d\sigma}{dt} = (f^+\cdot\mathrm{grad}\,\sigma)$   (3.16)
where $f^-, f^+$ are n-vectors with components $f_i^-(x_1,\ldots,x_n,t)$ and $f_i^+(x_1,\ldots,x_n,t)$ $(i = 1,\ldots,n)$ respectively. Wherever these one-sided derivatives drive the trajectory towards the hypersurface, that is, when (3.16) is negative and (3.15) is positive, the trajectory points towards the hypersurface and eventually stays in a close neighborhood of $\sigma = 0$. The existence condition for a sliding mode in a very close neighborhood of the sliding line therefore becomes
$\lim_{\sigma\to 0^+}\frac{d\sigma}{dt} < 0 < \lim_{\sigma\to 0^-}\frac{d\sigma}{dt}$   (3.17)
Equation (3.17) can also be written as follows:
$\lim_{\sigma\to 0}\,\sigma\frac{d\sigma}{dt} < 0 \qquad \text{or} \qquad \lim_{\sigma\to 0}\frac{d(\sigma^2)}{dt} < 0$   (3.18)
This form suggests choosing the Lyapunov function $V = [\sigma(x_1,\ldots,x_n)]^2$ for the system (3.13): $V$ is positive semidefinite and its derivative is negative semidefinite in the neighborhood of the hypersurface. If equation (3.18) is satisfied, the function $V$ is by definition a Lyapunov function for system (3.13) relative to the set of points $\sigma = 0$, and therefore the manifold $\sigma = 0$ is attractive for all sets of initial conditions.

3.2 Design Procedure

The design of a sliding mode controller involves two different tasks: the design of the sliding manifold and that of the controller. In what follows, a general method for the design of a sliding surface as an m-dimensional hypersurface is presented.

3.2.1 Sliding surface design

Since the mathematical model to be used here is the Duffing equation, a second-order system, the multivariable case will not be treated in detail. For a second-order system the choice of a switching surface is essentially that of placing the pole of the controlled system's transient response in a proper location. If the time constant of the response is to be, for example, 0.5 seconds, then the pole location is -2.0 and the sliding line equation is $\sigma = \dot{x} + 2.0x = 0$. Designing the sliding surface parameter in the second-order case is thus a simple procedure. In the case of higher-order systems the method is more involved. Sliding surface design in the MIMO case is based on the equivalent control principle of Utkin (Utkin, 1978). A detailed description of the method can be found in Utkin (Utkin, 1978) or in DeCarlo (DeCarlo et al., 1987). A brief overview of the method is as follows. The existence of a sliding mode implies
1) $\dot{\sigma}(x(t)) = 0$
2) $\sigma(x) = 0$ for all $t > t_0$.
From the chain rule, $\left[\frac{\partial\sigma}{\partial x}\right]\dot{x} = 0$. Substituting for $\dot{x}$ yields:
$\left[\frac{\partial\sigma}{\partial x}\right]\dot{x} = \left[\frac{\partial\sigma}{\partial x}\right]\left[f(t,x) + \Gamma(t,x)\,U_{eq}\right] = 0$   (3.19)
where it has been assumed that $f$ and $\Gamma$ take the general form of a nonlinear system that is linear in the control:
$\dot{x}(t) = f(t,x) + \Gamma(t,x)\,u(t)$   (3.20)
Equation (3.19) is solved for $U_{eq}$, known as the equivalent control:
$U_{eq} = -\left[\frac{\partial\sigma}{\partial x}\,\Gamma(t,x)\right]^{-1}\left[\frac{\partial\sigma}{\partial x}\right]f(t,x)$   (3.21)
Substituting $U_{eq}$ into equation (3.20), and assuming linear switching surfaces, the motion of the system on the switching line is governed by the following equation:
$\dot{x} = \left[I - \Gamma(t,x)\left[\Lambda\,\Gamma(t,x)\right]^{-1}\Lambda\right]f(t,x)$   (3.22)
where $\frac{\partial\sigma}{\partial x} = \Lambda$ is the set of sliding surface parameters. The sliding surface is defined as $\sigma = \Lambda x$. This equation can be used to design the switching surface for the multivariable case: first, equation (3.22) is formed by multiplying out the matrices inside the bracket; then the characteristic polynomial of equation (3.22) is formed, and this characteristic equation should correspond to the desired dynamics.
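As a concrete check of the equivalent-control recipe in equations (3.19) to (3.22), the following sketch (an added illustration using a simple double integrator, not a system treated in the thesis) computes $U_{eq}$ and the reduced sliding dynamics symbolically; on $\sigma = 0$ the motion collapses to $\dot{x}_1 = -\lambda x_1$, which is exactly the pole-placement reading of the sliding line given above.

```python
import sympy as sp

x1, x2, lam = sp.symbols("x1 x2 lambda", real=True)

# Double integrator x1' = x2, x2' = u, written as x' = f + Gamma*u.
f = sp.Matrix([x2, 0])
Gamma = sp.Matrix([0, 1])
Lam = sp.Matrix([[lam, 1]])          # sliding surface sigma = lam*x1 + x2

U_eq = -(Lam * Gamma).inv() * Lam * f                               # equation (3.21)
x_dot_sliding = (sp.eye(2) - Gamma * (Lam * Gamma).inv() * Lam) * f  # equation (3.22)

print("U_eq =", sp.simplify(U_eq[0]))              # -> -lambda*x2
print("x_dot on sigma = 0:", list(x_dot_sliding))  # -> [x2, -lambda*x2]
```

With $x_2 = -\lambda x_1$ on the surface, the reduced dynamics give the first-order decay whose time constant is set by the sliding-surface parameter, independently of the plant parameters.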
3.2.2 Controller design

The Lyapunov function $V = \frac{1}{2}\sigma^2$, with negative gradient, is chosen to make the sliding line attractive for a set of initial conditions. The controller is designed by tuning the gains so that $V > 0$ and $\dot{V} < 0$. $V$ is positive semidefinite, so the gradient condition becomes the controller design criterion. The controller structure is assumed to be state feedback with a signum function, as shown in equation (3.23). For a two-dimensional problem we have:
$U = (k_1 x_1 + k_2 x_2)\,\mathrm{sgn}(\sigma)$   (3.23)
where
$\mathrm{sgn}(\sigma) = \begin{cases} 1 & \text{if } \sigma > 0 \\ -1 & \text{if } \sigma < 0 \end{cases}$
The design procedure is demonstrated through an example.

Example 3.1: Consider the linear time-invariant second-order system shown in (3.24). The objective is to control the plant (bring the trajectory to the origin) using a sliding mode controller.
$\frac{dx_1}{dt} = x_2, \qquad \frac{dx_2}{dt} = -a_2 x_2 - a_1 x_1 + u$   (3.24)
The desired sliding surface is to be
$\sigma = \dot{x} + c_1 x = 0$
and the control structure will be a switching state feedback:
$u = -\alpha\, x_1\,\mathrm{sgn}(\sigma)$   (3.25)
Using the stability criterion ($\sigma\dot{\sigma} < 0$), and assuming the trajectory is in a neighborhood of the sliding line, leads to:
$\frac{d\sigma}{dt} = \frac{d(x_2 + c_1 x_1)}{dt} = -a_2 x_2 - a_1 x_1 + u + c_1 x_2 = -(a_2 - c_1)\,x_2 - (a_1 + \alpha\,\mathrm{sgn}(\sigma))\,x_1$   (3.26)
Since the trajectory is on the switching line $\sigma = 0$,
$x_2 = -c_1 x_1$   (3.27)
Substituting (3.27) into (3.26), in the limit we have
$\lim_{\sigma\to 0}\frac{d\sigma}{dt} = (a_2 c_1 - c_1^2 - a_1 - \alpha)\,x_1$   (3.28)
In order to fulfil the existence condition we should have
$\lim_{\sigma\to 0^+}\frac{d\sigma}{dt} < 0 \qquad \text{and} \qquad \lim_{\sigma\to 0^-}\frac{d\sigma}{dt} > 0$
This leads to the choice of the controller gain $\alpha$:
$\lim_{\sigma\to 0^+}\frac{d\sigma}{dt} = \begin{cases} (a_2 c_1 - c_1^2 - a_1 - \alpha)\,x_1 & \text{if } x_1 > 0 \\ (a_2 c_1 - c_1^2 - a_1 + \alpha)\,x_1 & \text{if } x_1 < 0 \end{cases}$   (3.29)
$\lim_{\sigma\to 0^-}\frac{d\sigma}{dt} = \begin{cases} (a_2 c_1 - c_1^2 - a_1 + \alpha)\,x_1 & \text{if } x_1 > 0 \\ (a_2 c_1 - c_1^2 - a_1 - \alpha)\,x_1 & \text{if } x_1 < 0 \end{cases}$   (3.30)
It follows from the above equations that if
$|\alpha| > a_2 c_1 - c_1^2 - a_1$   (3.31)
the sliding condition is fulfilled. Condition (3.31) is the necessary and sufficient condition for the existence of a sliding regime on $\sigma = 0$ (Itkis, 1976). If condition (3.31) is satisfied, the dynamics in the sliding mode are described by equation (3.27), i.e. $\dot{x}_1 = -c_1 x_1$, the solution of which is
$x_1(t) = x_1(t_0)\,e^{-c_1(t - t_0)}$   (3.32)
where $t_0$ is the time at which the trajectory reaches the sliding line. It is noted that, since the system is restricted to the sliding line dynamics, it is very robust to disturbances and unmodelled dynamics. But this robustness comes at the expense of switching of the controller. This feature of sliding mode leads to actuator chatter and can be unattractive for implementation purposes. Slotine (Slotine, 1983) developed a scheme to overcome this undesirable feature of sliding mode by using a boundary layer around the sliding line, within which the discontinuous law is replaced by a smooth continuous law. Slotine also proposed a technique to eliminate the reaching mode by using a time-varying sliding surface: the initial sliding surface passes through the initial condition and is changed in time to bring the solution to the desired final dynamics.
3.3 Overview of the Boundary Layer Technique

A brief overview of the design technique will be presented here. A more elaborate description of the methods can be found in Slotine (Slotine, 1983), Slotine and Sastry (Slotine & Sastry, 1983) and Slotine (Slotine, 1984). Consider the system:
$x^{(n)} = f(x;t) + u + d(t)$   (3.33)
where $X = [x, \dot{x}, \ldots, x^{(n-1)}]^T$ is the state and $u$ is the control input. In equation (3.33) the function $f$ (in general nonlinear) is not exactly known, but the extent of the imprecision, $\Delta f$, is bounded by a known continuous function of $x$ and $t$. Similarly, the disturbance $d(t)$ is unknown but bounded by a known continuous function of $x$ and $t$. The control problem is to force the state $x$ to track a specific target state, $x_d = [x_d, \dot{x}_d, \ldots, x_d^{(n-1)}]^T$, in the presence of model imprecision on $f(x;t)$ and of the disturbance $d(t)$. For this to be achievable using a finite control $u$, and in addition to fulfill the existence condition, we must assume
$\tilde{x}\,|_{t=0} = 0$
where $\tilde{x} = x - x_d = [\tilde{x}, \dot{\tilde{x}}, \ldots, \tilde{x}^{(n-1)}]^T$ is the tracking error vector. Let us define a time-varying sliding surface $\sigma(t)$ in the state space $\Re^n$ as
$\sigma(t) : \quad \sigma(x;t) = \left(\frac{d}{dt} + \lambda\right)^{n-1}\tilde{x} = 0$   (3.34)
In the above equation $\frac{d}{dt}$ is the differential operator, usually denoted by $D$ in differential equation theory, and $(\frac{d}{dt} + \lambda)^{n-1}$ denotes that operator applied $(n-1)$ times. Given the initial conditions for equation (3.33), the problem of tracking $x \equiv x_d$ is equivalent to that of remaining on the surface $\sigma(t)$ for $t > 0$. Now, a sufficient condition for such positive invariance of $\sigma(t)$ is to choose the control law $u$ of (3.33) so that
$\frac{d}{dt}\sigma^2(x;t) \le -\eta\,|\sigma(x;t)|$   (3.35)
where $\eta$ is a positive constant (Slotine & Sastry, 1983). Given the bounds on the uncertainties of $f(x;t)$ and the disturbance $d(t)$, obtaining such a control law is quite straightforward, as will be shown in an example. Moreover, satisfying equation (3.35) guarantees that if the initial condition is not exactly on the surface, the surface will nevertheless be reached in a finite time.

Figure 3.16: Boundary layer and smoothed control law inside the boundary layer: a) boundary layer b) control law

The method outlined above does not solve the chattering problem. This can be solved by introducing a boundary layer neighboring the switching surface. The control law outside the boundary layer is as before, therefore ensuring that the boundary layer is attractive, but interpolation takes place inside the boundary layer (Figure 3.16). One approach is to replace the term $\mathrm{sgn}(\sigma)$ by $\mathrm{sat}(\sigma/\delta)$, where $\sigma$ measures the algebraic distance from the sliding surface and $\delta$ is the thickness of the boundary layer. $\delta$ and $\epsilon$ (the error margin) are related by $\delta = \lambda^{n-1}\epsilon$, where $\lambda$ is the slope of the sliding line in the two-dimensional case (Slotine, 1984). Inside the boundary layer, the control law forces the local dynamics of $\sigma(t)$ (the algebraic distance from the sliding line) to be governed by a first-order differential equation, which can be interpreted as a low-pass filter. Equation (3.36) shows that the corner frequency of this filter is $\omega_c = \frac{k}{\delta}$. The elimination of chattering (a high-frequency signal) can thus be viewed as due to this low-pass filtering.
$u = -\hat{f}(x;t) - \sum_{p=1}^{n-1}\binom{n-1}{p}\lambda^p\,\tilde{x}^{(n-p)} - k(x;t)\,\mathrm{sat}\!\left(\frac{\sigma}{\delta}\right)$   (3.36)
where:
$\hat{f}(x;t)$ is the available information (estimate) of the model $f(x;t)$, and
$k(x;t)$ is the switching gain, to be determined from the uncertainty bounds of the system and the disturbance.
The corner frequency can be used to shape the boundary layer so as to eliminate any undesirable high frequencies such as measurement noise or quantization effects. The choice of the control law is now illustrated by a simple example; we use Example 3.1 to compare the results of the boundary layer method with those using classical sliding mode control.

Example 3.2: Consider the second-order system shown below. The response is to follow a unit step input with a time constant of 0.5 sec.
$\dot{x}_1 = x_2, \qquad \dot{x}_2 = a_1 x_2 - 4 x_1 + u$   (3.37)
$\tau = \frac{1}{\lambda} = 0.5\ \text{sec}, \qquad \lambda = 2.0$
Let us assume the parameter $a_1 = -3$ is estimated as $\hat{a}_1 = -2.5$, where $|\hat{a}_1 - a_1| \le \gamma$ with $\gamma = 1.0$, and $x_d = 1$. The task of the controller is to bring the system to the origin in the error phase plane. $\xi$ is a design parameter based on the uncertainties associated with the disturbance and the desired trajectory. The sliding surface is:
$\sigma(t) = \dot{\tilde{x}}_1 + \lambda\tilde{x}_1 = 0$
To satisfy a sliding condition of the form (3.35), it is sufficient to define $u$ as:
$u = -\hat{a}_1 x_2 + 4 x_1 - \lambda\dot{\tilde{x}}_1 - (\gamma|x_2| + \xi)\,\mathrm{sgn}(\sigma) = 2.5\,x_2 + 4\,x_1 - 2\dot{\tilde{x}}_1 - (|x_2| + \xi)\,\mathrm{sgn}(\sigma)$   (3.38)
where
$\mathrm{sgn}(\sigma) = \begin{cases} 1 & \text{if } \sigma > 0 \\ -1 & \text{if } \sigma < 0 \end{cases}$
Equation (3.38) has the same structure as equation (3.36), with the saturation function replaced by the signum function. Here, $d_1$ and $\vartheta$ are the upper bounds on the disturbance and the desired acceleration respectively. In this particular problem, since $d_1$ and $\vartheta$ are assumed to be zero, it is sufficient to choose $\xi > 0$; $\xi = 1$ for this particular design. For the sliding condition we have indeed
$\frac{1}{2}\frac{d\sigma^2}{dt} = \sigma\dot{\sigma} = \sigma(\ddot{\tilde{x}}_1 + \lambda\dot{\tilde{x}}_1)$
so that the control law leads to
$\frac{1}{2}\frac{d\sigma^2}{dt} = \sigma\left[-3x_2 - 4x_1 + 2.5x_2 + 4x_1 - 2.0\dot{x}_1 + 2.0\dot{x}_1\right] - |\sigma|\,(\gamma|x_2| + \xi) = \sigma(-0.5\,x_2) - |\sigma|\,(|x_2| + \xi)$   (3.39)
It is clear that equation (3.39) is always negative (since $|\sigma(-0.5x_2)| \le |\sigma|(|x_2| + \xi)$ and $-|\sigma|(|x_2| + \xi) < 0$), so the sliding condition is fulfilled. If this control law is applied, the chattering phenomenon in the control input is still observed. Now the boundary layer is introduced to see the effect of smoothing the control law. The boundary layer is defined as:
$B(t) = \{x,\ |\sigma(x;t)| \le \delta\}$   (3.40)
where $\delta$ is the boundary layer thickness. In this case the control law becomes:
$u_1 = -\hat{a}_1 x_2 + 4 x_1 - \lambda\dot{\tilde{x}}_1 - (\gamma|x_2| + \xi)\,\mathrm{sat}\!\left(\frac{\sigma}{\delta}\right)$   (3.41)
where
$\mathrm{sat}(\sigma) = \begin{cases} \mathrm{sgn}(\sigma) & \text{if } |\sigma| > 1 \\ \sigma & \text{if } |\sigma| \le 1 \end{cases}$
Now, since the control $u$ satisfies the sliding condition (3.35), the boundary layer is attractive, hence positively invariant. Thus, for trajectories starting inside the boundary layer we can write
$\dot{\sigma} = -k(x;t)\,\frac{\sigma}{\delta} + \Delta f(x;t)$   (3.42)
Although $k(x,t)$ and $\Delta f(x,t)$ in equation (3.42) are not in general constant, (3.42) can be thought of as a low-pass filter structure; when $k(x,t)$ and $\Delta f(x,t)$ are constant, equation (3.42) is indeed a low-pass filter for the local dynamics of $\sigma$. The results of these two different designs are shown in Figures 3.17 through 3.19. Figure 3.17 shows the output response of the system (3.37) using the two different controllers, Figure 3.18 shows the phase plane plots of the system for different initial conditions, and Figure 3.19 shows the corresponding control inputs for the two cases.
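The following sketch (an illustration added here; the integration step, simulation horizon and boundary layer thickness are assumptions, and the target is held at the unit step value 1 as in the example) simulates Example 3.2 with the true parameter $a_1 = -3$ and the estimate $-2.5$, once with the discontinuous law (3.38) and once with the boundary-layer law (3.41), so that the chattering and its removal can be reproduced numerically.

```python
import numpy as np

A1_TRUE, A1_HAT = -3.0, -2.5    # true and estimated plant coefficients
LAM, GAMMA, XI = 2.0, 1.0, 1.0  # sliding-line slope, uncertainty bound, margin
DELTA = 0.1                     # boundary layer thickness (assumed value)
XD = 1.0                        # unit step target

def control(x1, x2, smooth):
    """Sliding mode law (3.38); with smooth=True, the sat() version (3.41)."""
    e, de = x1 - XD, x2                       # tracking error and its derivative
    sigma = de + LAM * e
    switch = np.clip(sigma / DELTA, -1.0, 1.0) if smooth else np.sign(sigma)
    return -A1_HAT * x2 + 4.0 * x1 - LAM * de - (GAMMA * abs(x2) + XI) * switch

def simulate(smooth, dt=1e-3, t_end=6.0):
    x1, x2, out = 0.0, 0.0, []
    for k in range(int(t_end / dt)):
        u = control(x1, x2, smooth)
        # explicit Euler on the true plant x2' = a1*x2 - 4*x1 + u
        x1, x2 = x1 + dt * x2, x2 + dt * (A1_TRUE * x2 - 4.0 * x1 + u)
        out.append((k * dt, x1, u))
    return np.array(out)

for smooth in (False, True):
    t, x1, u = simulate(smooth).T
    label = "sat" if smooth else "sgn"
    print(label, " final x1 = %.3f" % x1[-1],
          " total control variation = %.1f" % np.abs(np.diff(u)).sum())
```

With the signum law the total control variation is dominated by switching at the integration rate, while the sat version settles to essentially the same output with a smooth input, mirroring the comparison in Figures 3.17 and 3.19.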
Figure 3.17: Output response of the system in Example 3.2 in two cases: a) with boundary layer b) without boundary layer
Figure 3.18: Phase plane plot of Example 3.2: a) with boundary layer b) without boundary layer
Figure 3.19: Control input of the system in Example 3.2 in two cases: a) with boundary layer b) without boundary layer

In the next section both schemes are applied to the chaotic model of this thesis, the Duffing equation. We would like to investigate the effect of the robustness properties of sliding mode on the uncertainty propagation of a chaotic system.

3.4 Sliding mode control of the Duffing equation

As mentioned in Chapter 2, the Duffing equation with the set of parameters shown in equation (3.43) has chaotic behavior:
$\ddot{x} + 0.05\dot{x} + x^3 = 7.5\cos t$   (3.43)
The forcing function is treated as a disturbance, and the task of the controller is to bring the state to the origin. If the model is used to describe the roll dynamics of a ship, then the control task is to nullify the effect of the wave disturbance and to bring the roll angle to zero degrees. In designing the sliding mode controller for this system it has been assumed that there are no uncertainties associated with the system parameters, and the controller has to deal only with the inherent uncertainty of the chaotic system. The state space representation of the system with control is:
$\dot{x}_1 = x_2, \qquad \dot{x}_2 = -0.05\,x_2 - x_1^3 + 7.5\cos t + u$   (3.44)
The sliding line is defined as:
$\sigma(t) = 0: \quad \dot{x}_1 + \lambda x_1 = 0, \qquad \lambda = 3.14$
Here the maximum value of the disturbance is 7.5, so
$|d(t)| \le d_x \quad \forall t > 0, \qquad d_x = 7.7$
is an upper bound for the disturbance function. For this system, based on equation (3.38), the switching gain is equal to $d_x$, since $\gamma = 0$ (no uncertainty about the system) and $\vartheta = 0$ (the upper bound on the desired acceleration); it is sufficient to take the control law as
$u = 0.05\dot{x}_1 + x_1^3 - 3.14\dot{x}_1 - d_x\,\mathrm{sat}(\sigma) = 0.05\dot{x}_1 + x_1^3 - 3.14\dot{x}_1 - 7.7\,\mathrm{sat}(\sigma)$   (3.45)
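A compact simulation of the law (3.45) is sketched below (an added illustration; the boundary layer thickness of 0.1, the step size and the horizon are assumptions, not thesis values). It applies the controller to the chaotic plant (3.44) from the initial condition (3, 4) and reports how quickly the roll angle x1 is driven towards the origin despite the 7.5 cos t disturbance.

```python
import numpy as np

LAM, K_SW, DELTA = 3.14, 7.7, 0.1   # sliding-line slope, switching gain, layer width

def sat(s):
    return np.clip(s, -1.0, 1.0)

def closed_loop(t, state):
    """Duffing plant (3.44) under the sliding mode law (3.45)."""
    x1, x2 = state
    sigma = x2 + LAM * x1
    u = 0.05 * x2 + x1**3 - LAM * x2 - K_SW * sat(sigma / DELTA)
    return np.array([x2, -0.05 * x2 - x1**3 + 7.5 * np.cos(t) + u])

def rk4_path(f, s0, t_end, dt=1e-3):
    s, t, path = np.array(s0, float), 0.0, []
    while t < t_end:
        k1 = f(t, s); k2 = f(t + dt/2, s + dt/2*k1)
        k3 = f(t + dt/2, s + dt/2*k2); k4 = f(t + dt, s + dt*k3)
        s = s + dt/6*(k1 + 2*k2 + 2*k3 + k4); t += dt
        path.append((t, s[0]))
    return np.array(path)

traj = rk4_path(closed_loop, (3.0, 4.0), 10.0)
for t_mark in (1.0, 3.0, 10.0):
    idx = np.searchsorted(traj[:, 0], t_mark) - 1
    print(f"t = {t_mark:4.1f}  x1 = {traj[idx, 1]:+.4f}")
```

Because the switching gain 7.7 exceeds the disturbance amplitude 7.5, the surface is reached and the output decays with the sliding-line time constant; inside the boundary layer a small residual oscillation of the order of the layer width remains, which is the price paid for a smooth control input.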
Sliding Mode Control 44 Figure 3.24: Output of the uncontrolled Duffing equation for three different initial conditions : a) I.C.=(3,4) b) I.C.=(-2.0,0) c) I.C.=(l,-3.0) Figure 3.25: Phase plane plot of the Duffing equation without boundary layer control for three different initial conditions : a) I.C.=(3,4) b) I.C.=(-2.0,0) c) I.C.=(l,-3.0) Figure 3.26: Phase plane plot of the Duffing equation with boundary layer control for three different initial conditions : a) I.C.=(3,4) b) I.C.=(-2.0,0) c) I.C.=(l,-3.0) Figure 3.27: Output response of the Duffing equation (I.C.=(3,4)) without boundary-layer for three different switching gains : a) K=7.7 b) K=20.0 c) K=50.0 Figure 3.28: Control input of the Duffing equation (I.C.=(3,4)) without boundary layer for three different switching gains : a) K=7.7 b) K=20.0 c) K=50.0 Figure 3.29: Phase plane plots of the Duffing equation (I.C.=(3,4)) without boundary layer for three different switching gains : a) K=7.7 b) K=20.0 c) K=50.0 Chapter 3. Sliding Mode Control 50 input and eliminates the chattering problem, however, in the presence of low frequency disturbances, the robustness property does not hold and the low frequency disturbance will dominate the response. Chapter 4 State Estimation of the Duffing Equation 4.1 Background The purpose of this chapter is to investigate different methods of state estimation for the Duffing equation while in a chaotic region and in the presence of measurement noise. The most common approach to this problem, typically when the system is hnear, is the Kalman filter. In the hnear case, for Gaussian disturbances, the Kalman filter will be the optimal estimator. In Kalman filter theory, the fundamental idea is to combine a noisy measurement of the state with a propagated state, each weighted appropriately through its corresponding uncertainty measure. In order to solve for Kalman gain ( the optimal weighting factor), the mean and the covariance of the state are calculated before measurement time. Thus a procedure for propagating the mean and the covariance up to the measurement time is needed. For the hnear case there is a closed form solution in terms of the transition matrix 3>. The transition matrix <& is defined as a function relating an initial state at t0 to a corresponding state at t > t0 in the absence of forcing function. For a system of the form : < i(t) F(t)x(t) + T(t)u(t) + w(t) H(t)x(t) + v(t) ( 4 . 4 6 ) where E[w(t)] 0, E[w(t)wT(t)] =Q(t)6(t-{) 51 Chapter 4. State Estimation of the Duffing Equation 52 E[v(t)] = 0, E[v(t)vT(t)) =R(t)S{t-t) The optimal state is propagated by : x(tr) = * ( * , . , * , • - l ) x ( t f - l ) + fU Ht,r)T(T)u(r)d(r) P(tr) = $(ti,U-l)P(tt_1)$T(ti,ti-l)+ I $(t,T)Q(T)*T(t,T)d(r) JU-\ the optimal state is updated by: ' K(U) = P(U) HT(U) [H(U) P(U) HT{U) + R(U)}-1 x(tf) = xW + KitAizitA-HitAxit;)} { P(t+) = PitD-KitAHitAPitr) The differential form of the covariance propagation equation is often easier to work with, especially numerically: P{t) = F{t) P(t) + P(t) FT{t) + Q(t) - K{t) H{t) P(t) (4.48) In the nonlinear case, determination of the mean and covariance at the output is much more involved, since there is no easy way to propagate the density function of the state. The standard solution in the nonlinear case is the Fokker-Planck equation(Haken, 1983, 1984). This equation does not directly propagate the covariance matrix, but rather its solution is the joint density function of the state variables. The general form of the Fokker-Planck equation is: ! = _ v ( , . 
m ) + l £ A , ^ ( , 4 9 ) where p = p(x,t) is the joint density function and the underlying system equations are: xi(t) = Fi(x,t) + wi(t) E[w(t)\ = 0, E[w(t) wT{t)] = Q(t) 8{t - /) Chapter 4. State Estimation of the Duffing Equation 53 E(') represents the expectation operator and D{j is the diffusion coefficient which cor-responds to the strength of the white Gaussian noise corrupting the system state. The boundary conditions are in general derived from the normalization requirements on the joint density function, viz. Jp(x1,x2,...,xn,t)dx1dx2...dxn =1 where the integration is carried out over the allowed range of the variable, usually — oo to -fee. This type of boundary condition is one reason the Fokker-Planck equation is so difficult to solve, even numerically, in most cases. Assuming that the Fokker-Planck equation can be solved for the joint density func-tion p, then the covariance matrix at time t is determined by first calculating E[xi Xj] = JJ ••• J XiXjp(x,t) dx\.. .dxn (4-51) then calculating E[xi] and -Efaij] as E[x{] = / / • • • / * . • P(X, 0 d*i • • • dxn (4.52) E[XJ] = / / • • • / p(X,t)dxx...dxn (4.53) From these, the covariance matrix elements can be determined from the definition: cov(xi Xj) = E[xi Xj] - E[xi] E[XJ] (4.54) In very few simple cases the Fokker-Planck equation can be solved using a separation of variables technique, or other approaches such as Green's function (Gustavson, 1980), but analytical solution is not possible in most cases. Numerical solutions for this problem are, too, a formidable task especially for high-order systems. This problem becomes compounded in the estimation case by the fact that, at measurement time, the covariance matrix changes in accordance with the covariance matrix equation. It would Chapter 4. State Estimation of the Duffing Equation 54 then be necessary to convert this updated covariance matrix back to joint density form for propagation by the Fokker-Planck equation. Another difficulty with the numerical solution of the Fokker-Planck equation is the convergence problem associated with the numerical solution of any partial differential equation (Press, 1988). Some work has been done in the general area of nonlinear filtering by Bucy and Senne (Bucy & Senne, 1971) to get around solving the Fokker-Planck equation in the continuous case and solving Baye's rule sequentially in the discrete case. Their method is mainly directed towards implementation of the solution of the nonlinear filtering problem on a digital computer. Details of their technique can be found in Bucy (Bucy, 1969) and Senne (Bucy & Senne, 1971). Application of this method to a radar tracking problem can be found in (Bowles & CarteUi, 1983). The method used in this thesis to propagate state uncertainty up to the measurement time has been originally developed by Fowler (Fowler, 1986). This method is especially suitable for a chaotic system where although there is no additive noise to the state, the presence of initial condition uncertainty influences the overall behaviour of the system, as if the state were being corrupted by noise. In this method the basic idea of employing noise covariance matrices to model system uncertainty has been used. This technique eliminates the need to solve the Fokker-Planck equation by instead propagating uncertainty with weighted sets of initial conditions, the weight assigned to each being proportional to its probability of occurrence. 
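The propagation step just described can be sketched compactly: replace the initial density by a finite set of weighted points, integrate every point through the deterministic dynamics, and recover the mean and covariance from the weights. The sketch below does this for the forced Duffing system; the 5x5 grid, the Gaussian weights, the integration call, and the 0.1 s propagation interval are illustrative choices, and the correction factor discussed later in this chapter is omitted.

```python
import numpy as np
from scipy.integrate import odeint

def duffing(x, t):
    # deterministic Duffing dynamics used throughout the thesis (no process noise)
    return [x[1], -0.05 * x[1] - x[0] ** 3 + 7.5 * np.cos(t)]

def propagate(mean, std, t_meas):
    """Propagate a 5x5 grid of weighted initial conditions (mean, mean +/- sigma, +/- 2 sigma)."""
    offsets = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
    w = np.exp(-0.5 * offsets ** 2)
    w /= w.sum()                                      # normalized Gaussian pdf values
    pts, wts = [], []
    for i in range(5):
        for j in range(5):                            # all combinations of the two states
            pts.append([mean[0] + offsets[i] * std[0], mean[1] + offsets[j] * std[1]])
            wts.append(w[i] * w[j])
    wts = np.array(wts)
    ends = np.array([odeint(duffing, p, [0.0, t_meas])[-1] for p in pts])
    mu = wts @ ends                                   # propagated mean
    d = ends - mu
    cov = (wts[:, None] * d).T @ d                    # propagated covariance (no correction factor)
    return mu, cov

mu, P = propagate(mean=[3.0, 4.0], std=[1.0, np.sqrt(1.3)], t_meas=0.1)
print(mu)
print(P)
```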
The method will give exact answers in the linear case (which of course is unnecessary); it will give approximate answers in the nonlinear case, but answers which can be made as accurate as desired, admittedly at the expense of increased computation time. The state estimation based on Fowler's method involves three major steps. Figure(4.30) shows the block diagram for these steps. The first step deals with propagating the discretized density function (Pdf) up to measurement time and calculating the mean and the covariance based on the propagated probability Chapter 4. State Estimation of the Duffing Equation 55 set of initial conditions (LC.) I .C. propagation up to measurement time Kalman Filter Calculations New set of I.C. from updated covariance matrix Figure 4.30: Block diagram of the state estimation for a chaotic system density function. The second step involves the standard Kalman filter calculations to get the updated mean and the covariance. It is important to note that if the updated covariance matrix were diagonal, then this step would be an easy task because the set of initial conditions corresponding to the updated covariance would be the mean plus the positive or the negative multiples of the standard deviation, depending, on the location of the point with respect to mean. In this case the standard deviation is simply the square root of the diagonal elements of the covariance matrix. But the covariance matrix is not in general diagonal, so the third step is needed to calculate a set of initial conditions corresponding to the updated mean and covariance. 4.2 Mean and Covariance Propagation This technique replaces the continuous density function p(X,t) with a suitable dis-crete density function. The advantage of this is that the probability of each possible initial condition vector is known, and since any given initial condition vector can be propagated through the system, the probability of all possible state vectors after time Chapter 4. State Estimation of the Duffing Equation 56 At is automatically known (the probability associated with any state vector at time to remains the same through time). With the propagated state vectors and their as-sociated probabilities, the probability density function may be reconstructed and any desired moment information after time At calculated. For a simple first order case with discrete density function as: (Fowler, 1987) X\ — —x\ *i,i(0) = 1 zi l 2(0) = 5 p(*i,i(0)) = 0.25 p(*i,a(0)) = 0.75 In the above formula the first subscript refers to the state and the second state refers to the sample taken from the distribution. For example, zi,i(0) indicates the first sample taken from the first state at time zero. Lower case p is the probability associated with that particular state. It can be easily verified that at t = 1 second *i,i(l) = 0-5, p(asi,i(0)) -0.25 as before *i,a(l) = 0-833, p(x1>2(0)) = 0.75 as before using the standard formula, the mean and the covariance can be readily computed. E[x] = X>i..-K*i.i)» E[x*\ = J2xlp(*u) i=l i=l Now consider the continuous density function at time t = 0. There are numerous ways to discretize this continuous probability density function. A simple way is to pick 2m+l values, X i , - m »i,-m+i • • • ^l.o^i.i • • • ^l.mj corresponding to the mean x(0) and meanim standard deviations. To each point assign a probability proportional to its probability density function. 
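As a quick check of the first-order example above, assuming its dynamics are ẋ1 = -x1² (which has the closed-form solution x1(t) = x1(0)/(1 + x1(0) t) and reproduces the quoted values 0.5 and 0.833 at t = 1), the propagated points and moments can be computed directly:

```python
# Two weighted initial conditions carried through x1' = -x1**2 (assumed form of the example)
x0 = [1.0, 5.0]
p  = [0.25, 0.75]                          # probabilities stay attached to their points

x1 = [x / (1.0 + x * 1.0) for x in x0]     # closed-form values at t = 1 s: 0.5 and 0.8333
mean = sum(pi * xi for pi, xi in zip(p, x1))
ex2  = sum(pi * xi * xi for pi, xi in zip(p, x1))
print(x1, mean, ex2 - mean ** 2)           # mean = 0.75, variance ~ 0.0208
```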
Since in the discrete case probability density function values correspond to the probabihty of the point, a reasonable choice for probabihty is the normalized density value. This means that the probability assigned to each point will be its probabihty density function divided by the sum of individual Pdf Chapter 4. State Estimation of the Duffing Equation 57 values. In this manner the condition EP(-^0 = 1 is verified. Next, propagate each of x these discrete values aJi,_m... xi,m forward to the desired time, recreate a continuous density function, and normalize it. The normalization process is to keep the area of the new probability density function curve equal to one, with a new p. =mean and points selected at equal distances from the mean (p. ± &), but with the same probability. With the propagated probability density function curve at t = 1, different moments such as covariance can be calculated (Fowler, 1987). For higher-dimensional systems, the same basic method applies, but it is more complicated. To understand the higher-dimensional version of the technique, consider first of all the definition of the covariance matrix elements, viz cov(xi Xj) = E[xi XJ] - E[xi] E[XJ] (4.55) Since any state variable in a higher-dimensional system can be affected by any other (in the general case), it is necessary to vary all of them simultaneously, i.e., to set up all possible combinations of initial conditions, associate probabilities with each combination, and propagate them all through the system. This can be most easily done by considering the possible initial conditions to be points in n-dimensional initial condition space. If we take the points approximating the probability density function surface to be m=5, this will yield : x~i — 2 c r , . . . , X{, , Xi + 2& for each state variable. Now the covariance can be calculated in two steps. First E[xi Xj] is calculated when i > j: 5 5 E[XiXj}= £ XXXjtP(*i±)P(Xjl3) (4-56) <*=1 4=i then the E[xi\ values are determined: = E E ^P(^«)p(x i 4) (4.57) « = 1 0=1 Chapter 4. State Estimation of the Duffing Equation 58 the covariance matrix elements are then determined from the equation(4.55). The first subscript in equations (4.56) and (4.57) refers to the state and the second subscript refers to the points approximating the probabihty density function curve( possible points in n-dimensional initial condition space). It should be noted that the foregoing method assumes a diagonal covariance ma-trix. This corresponds to the initial condition for the control problem of interest, but is not a restriction because a technique for generating a set of initial conditions for any desired covariance matrix will be given later. Indeed, to utilize the Kalman filter algorithm to combine propagated state values and new measurements it is necessary to have such a technique since after the first measurement updates of the state estimate, the covariance matrix in general will no longer be diagonal. For a two-dimensional case with m = 5, the initial condition space will be as follows. Mean values have been indi-cated by and x2, and standard deviations in each dimension by &i and &2 respectively. 
x\ — 2di,x2 — 2d2 Xi — di,x2 — 2d2 X\,x2 — 2o*2 xi -f &i,x2 — 2d2 xi -f 2dx,x2 — 2d2 xi — 2di,x2 — d2 xi — <ri, x2 — <f2 xi,x2 — d2 ii + o-'i, x2 — <f2 xi + 2<fi,x2 — &2 Xi — 2&i,x2 xi — oi,x2 xi,x2 xi + 0*1,52 xi+2&i,x2 Xi - 2&i,X2 + d2 Xi - di,x2 + <T2 * 1 , x2 + 02 Xl + <f\,x~2 + <T2 *1 + 2^1, X2 + 02 xi - 2di, x2 + 2o-'2 xi - di, x2 + 2<f2 xi, x2 + 2d2 xi 4- di, x2 + 2d2 xx + 2di, x2 + 2d2 The covariance matrix may now be calculated using equations(4.55), (4.56) and (4.57). Clearly, the larger the number of points, the more accurate the final result. A specific example for a hnear system has been given in Fowler (Fowler, 1986), where the tech-nique outlined above has been tested against direct covariance calculation using the covariance differential equation. It is shown in the numerical example that the initial condition propagation technique gives accurate results except for a constant correction factor. Determination of this correction factor is most easily carried out by calculating Chapter 4. State Estimation of the Duffing Equation 59 the initial covariance, a known value, based on the number of points in the initial con-dition space, following the method outlined above. Then the actual value of the initial covariance is divided by the calculated one to get the correction factor. This correction factor depends on the number of approximation points used. The correction factor is found to be 1.824361 for m=3 and 1.08 for m=5 and 1.004 for m=9. In order to get the correct covariance values, it is necessary to multiply all calculated covariances by the factor. In the linear case, since the Gaussian property is preserved in propagation of the equations, this method preserves all the information needed to reconstruct the Pdf of the output. But in the nonlinear case, in general an infinite number of points must be used for complete accuracy. However, if the standard deviations are not too large, this method will give reasonably accurate results. 4.3 Kalman Filter Calculations Using the technique described above it is possible to propagate mean and the covariance of a nonlinear system up to the measurement time. At the measurement time this information will be used to calculate the Kalman gain, the updated state estimate and the covariance. The standard Kalman filter equations are (Brown, 1985): < x(tf) = xW + KMlZM-HitiWiT)] (4-58) . Pitt) = P(t7)-K(U)H(U)P(t7) In above equations the "+" superscript is referring to updated variables and the "-" superscript to variables just before the measurement time. H(t) is the output matrix. In order to be able to complete the estimation loop we need to calculate a set of initial conditions corresponding to these updated mean and covariance values. This technique is explained in the next section. Chapter 4. State Estimation of the Duffing Equation 60 4.4 Calculation of a Set of Initial Conditions from a Given Covariance Matrix As mentioned earlier in case of diagonal covariance matrix, the problem of determining the set of initial conditions is quite trivial, since it is a straightforward extension of the one-dimensional case, with of values directly on the diagonal. The higher-dimensional case is more complex, because values for the initial conditions must be found which satisfy the covariance matrix denning equation, equation (4.55). 
This equation, and its particular implementation in the technique, is a nonlinear algebraic equation and therefore the problem of finding a set of initial condition becomes one of solving a set of simultaneous nonlinear equations. These nonlinear equations can be found by substituting equations (4.56) and (4.57) into equation (4.55). In equation (4.55) the left-hand-side is the desired covariance and the right-hand-side elements are the unkown initial condition points with the corresponding probabilities and known mean value. Fortunately, due to the symmetry of the covariance matrix, the number of unknowns is £^±D. To solve this nonlinear algebraic equation (4.55), we proceed as follows; start with variable a;,- (known mean value of the first state), and assign to it n unknowns such as S\i,8\2, - • • , oi„. Continue with variable x2, assigning to it n — 1 unknowns 2^1 j 822, • • • j 62n. Proceed in this way until state variable xn is reached, which has only one unknown associated with it, viz Sni. This will yield, of course, a total of "C"*1) unknowns. Next pick starting guesses for unknowns, and construct the initial condition space based on these trial values. After the set up of initial conditions, the algorithm used to solve for the correct values of Sij is a matrix version of Newton-Raphson method 6«.i = 8i~ [^P]_1 • /(*) (4-59) Chapter 4. State Estimation of the Duffing Equation 61 where df(Sj) dSi follows : f($i) = cov(Si) — desired covi is the Jacobian matrix and is evaluated numerically. This matrix is defined as • 0ft{6) dfi(S) dfi{S) as, df*(6) 8S2 ' dfi(S) " dSp as1 ds2 ' " esn dfn(6) dfn(S) . d6r d62 • ' d8n (4.60) Defining a norm as : norm = II** ~ (4.61) the recursive Newton-Raphson formula is applied until some desired degree of accuracy is achieved. After the solution converges, the variables Sij are used to determine a set of initial conditions. It should be noted that the solution for the initial conditions is not unique. Because of the disparity between the number of initial conditions and the number of independent elements in the covariance matrix, there is an infinite number of possible sets of initial conditions having the desired covariance. In the hnear case, all will propagate the same way, so it does not matter which set happens to be found by the program. In the nonlinear case, the propagated covariances will be nearly the same provided that the deviation from linearity is not too great. The above method has been tested on the hnear example used in section 4.2 and the results were very accurate (Fowler, 1987). Due to the parallel nature of the calculations involved, the algorithm can be implemented on a parallel processor., The details of the implementation of the uncertainty propagation technique can be found in Fowler (Fowler, 1986). Chapter 4. State Estimation of the Duffing Equation 62 4.5 State Estimation of the Duffing Equation As was earlier mentioned, the uncertainty propagation technique is specifically suitable for a chaotic system in which an uncertainty in initial state will grow with time. In this section the technique outlined above is applied to estimate the states of the Dufffing equation. Initial conditions are assumed to have a normal distribution. The measure-ment noise has been assumed to be Gaussian with a diagonal covariance matrix R(t). 
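The Newton–Raphson iteration of equations (4.59)–(4.61) in the previous section can be sketched generically as below. The residual function f(δ) (the covariance computed from a trial set of initial conditions minus the desired covariance) is left abstract, and the finite-difference step size, tolerance, and the small demonstration system are assumptions of this sketch.

```python
import numpy as np

def newton_raphson(f, delta0, tol=1e-8, max_iter=50, h=1e-6):
    """Solve f(delta) = 0 with a numerically evaluated Jacobian, as in eqs. (4.59)-(4.61)."""
    delta = np.asarray(delta0, dtype=float)
    for _ in range(max_iter):
        r = np.asarray(f(delta))
        J = np.zeros((r.size, delta.size))
        for j in range(delta.size):                 # finite-difference Jacobian, column by column
            d = delta.copy()
            d[j] += h
            J[:, j] = (np.asarray(f(d)) - r) / h
        step = np.linalg.solve(J, r)
        delta = delta - step                        # delta_{i+1} = delta_i - J^{-1} f(delta_i)
        if np.linalg.norm(step) < tol:              # convergence test on the norm of eq. (4.61)
            return delta
    return delta

# demonstration only: recover (x, y) from x*y = 2 and x + y = 3
print(newton_raphson(lambda d: np.array([d[0] * d[1] - 2.0, d[0] + d[1] - 3.0]), [0.5, 2.5]))
```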
A program has been written to first calculate the mean and the covariance up to the measurement time, then to do the Kalman filter calculations (to update the mean and covariance after the measurement) and finally to find a set of initial conditions given the updated covariance and the mean. As mentioned before, this last step is required due to nondiagonal covariance matrix after the measurement. The results for the state estimation of the Duffing equation for several initial covariances axe presented in the simulation results section. To assess the nonlinear estimation technique used in this thesis we will develope an approximate linear model for the original chaotic system from the statistical point of view. This means that a linear model will be developed with the same output statistical properties as that of the original chaotic system. The developement of the equivalent noise model for the chaotic system is presented in the next section. 4.6 Equivalent Linear Noise Model for Duffing Equation A chaotic system is a deterministic system which behaves in a similar manner to a stochastic system in the presence of initial condition uncertainties. The stochastic behavior is due to the complex dynamics of the system rather to any process noise. In this section an attempt is made to find a linear system with white Gaussian noise as its input which will have the same output statistical measures, specifically, the same Chapter 4. State Estimation of the Duffing Equation 63 3000. 2500. 2000. 1500. 1000. 500. 0. 000 -500. - 1000. - 1500. -2000. 0. Figure 4.31: Autocorrelation of the state X\ autocorrelation function. The first step in the modeling process is to find a reasonably simple function to represent the output autocorrelation of the chaotic system. The output autocorrelation function for the first state of the chaotic system is shown in Figure (4.31). The periodic component is present due to the forcing function in the Duffing equation being considered. In order to simplify the modeling, the first state of the original system is being considered. Next the power spectral density is found to show the contribution of different frequencies in the autocorrelation of the state. Figure(4.32) shows the power spectral density of aslP As it is clear from Figure(4.32) that there are two major frequencies which contribute to this signal. The first frequency is the input frequency of the original system and the second is due to chaotic behavior of the system. The function g(r) approximating the original autocorrelation of the first state is assumed to be the sum of two damped consines. g(r) = ifci e~mt coa(uxt) + k2 e~nt cos(w2t) (4.62) Figure(4.33) shows that with a proper adjustment of parameters, o(r) will give a good approximation of autocorrelation of Xi in the first 350 samples. The contribution of Chapter 4. State Estimation of the Duffing Equation 64 700000. 600000. -500000. -0 .00000 - 100000. 5. 10. 15. 20 . 25 . 30. 35. 40. 45. 50 . 55 . 60 . Figure 4.32: Power Spectral Density of the first state - 2000 . 0 .0 50. 100. 150. 200. 250. 300. 350. 400. 450. 500 . Figure 4.33: Approximate autocorrelation function for state xx Chapter 4. State Estimation of the Buffing Equation 65 second frequency is almost half of that of the input frequency, so « 2k2. The exponential variables m and n were found by evaluating the exponential decay in the autocorrelation funtion of Figure (4.33). 
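One way to obtain such a fit numerically is to estimate the sample autocorrelation of x1 from a long simulated trajectory and then least-squares fit the two damped cosines of equation (4.62) to it. The snippet below is a sketch of that idea only; the sampling interval, record length, lag range, and fitting routine are assumptions here, not the procedure used to obtain the parameter values quoted next.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

def duffing(x, t):
    return [x[1], -0.05 * x[1] - x[0] ** 3 + 7.5 * np.cos(t)]

dt = 0.05
t = np.arange(0.0, 500.0, dt)
x1 = odeint(duffing, [3.0, 4.0], t)[:, 0]          # chaotic response of the first state

def autocorr(x, max_lag):
    x = x - x.mean()
    return np.array([np.dot(x[:x.size - k], x[k:]) / x.size for k in range(max_lag)])

n_lags = 350                                       # first 350 samples, as in Figure (4.33)
r = autocorr(x1, n_lags)
lags = np.arange(n_lags) * dt

def g(tau, k1, m, w1, k2, n, w2):                  # the model of eq. (4.62)
    return k1 * np.exp(-m * tau) * np.cos(w1 * tau) + k2 * np.exp(-n * tau) * np.cos(w2 * tau)

p0 = [0.7 * r[0], 0.02, 1.0, 0.3 * r[0], 0.005, 2.5]   # rough starting guesses
params, _ = curve_fit(g, lags, r, p0=p0, maxfev=20000)
print(params)                                      # fitted (k1, m, w1, k2, n, w2)
```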
The parameters were found to be: fci = 1600 m = .022 ux = 1.00 k2 = 700 n = .005 u2 = 2.55 Knowing the output autocorrelation function, a hnear system is found to generate the required spectrum for a white noise input. The power spectral density of g(r) is: rr(ju) = fci [ 2 rj + 2~TT~ «1 + [ a • / \ ^2 + ~TV7~ «] (4-63) + (u> -f o»i )•* rn* + (u> — u}\Y n1 + (a; + u2y n* + (u> — w2y Assuming a continuous transfer function and letting jw = s , after some algebraic manipulation rr(s) can be written in factored form as: rr{s) = G{s) • G(-s) where „ , , _ (s3 + 1.59a2 + 6.913 + 6.51) (j\S) = o + 0.054a3 + 7.55a2 + .297a + 6.795 (-a 3 + 1.59a2 - 6.91a + 6.51) (4.64) _ G( s) 8 s4 _ o.054a3 + 7.55a2 - .297a + 6.795 The shaping filter, G(s) contains all the stable poles and zeros and G(—a) all the un-stable ones. This implies that if G(s) is driven by a white noise , then the output spectral density is given by equation(4.63). Clearly, having the same output autocorre-lation does not necessarily implies the same time history, however a simple application of filtering theory would start from such a white noise model. It is also true that development of the model is dependent on the number of samples used to calculate the autocorrelation of the original system. Using the hnear equivalent noise model, a comparison with state estimation based on this model may be made. Chapter 4. State Estimation of the Duffing Equation 66 4.7 State Estimation of Duffing Equation Based on Linear Model The standard Kalman filter calculations to estimate the states in the presence of mea-surement noise require the state equations of the noisy linear system. The linear system developed in section 4.6 is defined in observable cononical form as: where X(t) = F(t)X(t) + B{t)w(t) Y(t) = H(t)X(t) + D(t)w(t) (4.65) H 0 1 0 0 0 0 1 0 0 0 0 1 -6.795 -.297 -7.548 -0.054 1 0 0 0 0 1 0 0 where w = N(0,1) White Gaussian Noise B = D 8.0 12.29 -5.78 -42.77 0 8 The Kalman filter is very sensitive to the format of the state equations defined, i.e., the D matrix is assumed to be zero and in cases where a deterministic input is directly affecting the output (non-zero D) the input is directly incorporated in the estimation equation, but in this case the input is white noise which cannot be directly used in the estimation equation. In case of the linear model at hand, the state vector has to be Chapter 4. State Estimation of the Duffing Equation 67 augmented with the input as the new state. The augmented system will be as follows: F = - 1 a — 0 0 0 -6.795 0 1 0 0 .297 0 1 0 0 0 0 0 1 0 0 8 0 1 0 -7.548 0 0 0 0 0 1 0 -0.054 0 0 0 Ba = 8.0 12.29 -5.78 -42.77 0 The augmented system is now in the standard format to set up the Kalman filter equations. The system and the measurement equations are defined as below: Xa(t) = Fa(t)Xa(t) + Ba(t)w(t) Z{t) = Ha(t)X(t)+v(t) (4.66) where E[w(t)wT(t)] = Q(t)S{t-i) E[v(t)vT(t)] = R(t)6(t-t) E[v(t)w(t)] = 0 The Kalman filter equations are: P(t) = F(t)P(t) + P(t)FT(t) + B.(t)Q(t)BZ(t)-K(t)H(t)P(t) m = "° (4.67) K(t) = P(<)^r(0-R_1(<) X(t) = Fa(t)X(t) + K(t)(Z(t)-H(t)X(t)) Chapter 4. State Estimation of the Duffing Equation 68 from gain generator routine measurement o -) K + / J Gain generator routine P = F P + P F T - P H T R - ' H P + B Q B T PH R~ Figure 4.34: Block diagram of the estimation algorithm The covariance differential equation is the matrix Ricatti equation. The covariance equation is solved to generate the Kalman gain which is used to set up the state esti-mation equation. 
The solution of this latter equation gives the optimal state estimates. The whole process of state estimation can be summarized in Figure (4.34). In this figure Z is the nosiy measurement generated by the process output with additive noise. The error between the measurement and the estimated output y(H x) is weighted through a gain K. This part indicates the contribution of the measurement in state estima-tion. State estimate x is found by the addition of the measurement contribution to the previous estimate of state XQ. One important note is the fact that the measurement vector in simulation runs is provided by the output of the nonlinear chaotic system plus the measurement noise with the covariance of R(t). The covariance matrix R(t) is assumed to be diagonal. Results of the estimation scheme based on the approximate linear model are pre-sented in the simulation results section. In the results section runs for both of the estimation techniques have been carried out with different initial covariance matrices Chapter 4. State Estimation of the Duffing Equation 69 and constant measurement noise levels. The estimation techniques have also been eval-uated with different measurement noise levels. As a second method of assessing the estimation techniques outlined above, the estimation results are fed into a state feed-back controller. The details of the controller design are presented in the next section. 4.8 Closed Loop Control of the Duffing Equation The original nonlinear system was x(t) + 0.05 x + x3 = 7.5 cosT The first step in a hnear control design is to linearize the original nonlinear system near an operating point. In this case the operating point is the origin of the state space. The linearized model of the Duffing equation is obtained by removing the nonlinear term and results in the transfer function of the linearized model being given by: G(s) = , * G N (4.68) V ; S(J» + 0.05) V ' If the model represents the roll dynamics of a vessel, then the forcing function may be considered as a wave disturbance, so the control objective would be to suppress the oscillations caused by wave disturbance. This requirement translates to a high gain feedback. The closed loop transfer function is given by: C(s) _ G(s) D(s) l + KG(s) If K G(s) >^ 1 in steady state C(s) _ 1 D(s) K C(s) Figure (4.35) shows the closed loop system with high gain feedback. The ratio is JJ{s) Chapter 4. State Estimation of the Duffing Equation 70 D(s) R(s)=0 + + G(s) C(s) K Figure 4.35: Block diagram of the linear model with a high gain feedback to be small in steady state, which requires high value of K. A second control objective is to increase the relative stability of the system, since the poles of the linearized model of the Duffing equation He close to the imaginary axis. In order to improve the relative stability of the system, a left half plane zero, corresponding to PD or phase lead control is added. This requires the estimation of both states. The root locus after compensation with a zero is given in Figure (4.36). For a reasonable settling time, the location of zero was found to be at -5.0 and the gain to be 100.0 giving a continuous transfer function The main objective of the closed loop analysis of the approximate linear system was to assess the accuracy of the estimation techniques used. We expect to see a better closed loop performance when using a more accurate estimation technique. 
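Equations (4.67) amount to integrating the matrix Riccati equation alongside the state estimate and reading the gain off the covariance at each step. The sketch below illustrates this with the fourth-order shaping-filter matrices of section 4.6; the augmentation step that absorbs the direct noise feedthrough is omitted, and the Euler integration, step size, and placeholder measurement are assumptions of the sketch.

```python
import numpy as np

def kalman_step(xhat, P, z, F, B, H, Q, R, dt):
    """One Euler step of the continuous Kalman filter equations (4.67)."""
    K = P @ H.T @ np.linalg.inv(R)                      # K = P H^T R^-1
    Pdot = F @ P + P @ F.T + B @ Q @ B.T - K @ H @ P    # matrix Riccati equation
    xdot = F @ xhat + K @ (z - H @ xhat)                # state-estimate dynamics
    return xhat + dt * xdot, P + dt * Pdot

F = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [-6.795, -0.297, -7.548, -0.054]])
B = np.array([[8.0], [12.29], [-5.78], [-42.77]])
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])                    # both outputs are measured
Q = np.eye(1)                                           # unit-intensity white input noise
R = np.eye(2)                                           # R11 = R22 = 1

xhat, P, dt = np.zeros(4), np.eye(4), 1e-3
for k in range(20000):
    z = np.zeros(2)     # in the thesis simulations z is the noisy output of the chaotic plant
    xhat, P = kalman_step(xhat, P, z, F, B, H, Q, R, dt)
print(P @ H.T @ np.linalg.inv(R))                       # Kalman gain after 20 s
```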
The other objective in the linear control of the chaotic system was to investigate the persistence of chaotic behavior under control. This objective can be achieved by observing the norm of the covariance under both controllers. The results of the closed G(s) = s +5.0 s 2 + 100.05 s + 500 Chapter 4. State Estimation of the Duffing Equation 71 2 . 0 . - 2 . - 4 . - 6 . — . 1_ , L _ - 1 2 . - 1 0 . - 8 . - 6 . - 4 . - 2 . 0 . 0 Figure 4.36: Root locus of the compensated linear Duffing equation loop Duffing equation with the linear control are presented in the next section where a comparison of the control performance is made using both estimation methods under different conditions. 4.0 Simulation Results In this section the results for the nonlinear estimation method and the standard Kalman filter based on the approximate linear model are presented. Simulation runs have been carried out for three different initial covariance matrices. These covariance matrices are assumed to be diagonal with diagonal elements as: P n = 9.0 P 2 2 = 12.0, P n = 0.0 P 2 2 = 0.0, P n = 1.0 P 2 2 = 1.3 The runs for these three different initial covariance matrices, referred to in the figures, have been carried out in the above order. These values might not be very realistic, but they can assess the performance of the estimator in the presence of uncertainty in the initial states. The measurement noise matrix for these runs is constant with diagonal Chapter 4. State Estimation of the Duffing Equation 72 elements as Ru = 1, R22 = 1. In order to evaluate the estimator's performance, a second series of runs has been done with three different noise levels with diagonal elements as: Rn = 1 R22 = 1, Rn =4R22 = 4, Rn = .01 R22 = .04 and constant initial covariance matrix with Pn = 1.0 and P22 = 1.3 and off-diagonal elements of zero. These runs indicate that the nonlinear estimator has a higher accuracy and better performance. The norm of the estimation error in the case of nonlinear estimation is usually less than half of that of the linear estimator. As a second method of comparing the estimation techniques, the performance of the controlled plant is considered when the results of the state estimation are used in the controller. We expected to see a better closed loop performance when using a more accurate estimation technique. The results of both estimation techniques when used in the linear controller confirm those obtained in the estimation section. The hnear estimator causes the controller to perform poorly up to first 5 seconds of the response. In this period some oscillations in the same form as those observed in the open loop response of the system show up in the closed loop response. The effect of the measurement noise is also clearly observed in the output. Application of the nonlinear estimator provides a smooth performance of the controlled Duffing equation. In the case of the application of the sliding mode controller, the results were unexpected. The performance of the controlled system is worse than that obtained using the hnear estimation. The reason for this discrepancy appears to be, the way in which the controller is applied in the nonlinear estimation method. The nonlinear estimator makes use of initial condition propagation technique in which a set of initial conditions is propagated to the next time step. The sliding mode controller designed on the basis of the mean value of the Chapter 4. 
State Estimation of the Buffing Equation 73 state is capable of driving the midpoint(mean value) of the initial condition mesh to sliding line, but the other sets of initial conditions will not necessarily move towards the switching line, since the switching of the control law is based on the location of the trajectroy traced by the mean value. However, due to the averaging process in the nonlinear scheme all the trajectories will stay close to the sliding line. Due to this fact, the performance of the sliding mode controller with the nonlinear estimation is not as expected. The other objective in the control of the Duffing equation is to investigate the persistence of chaotic behavior under control. Plots of the variance of the states of the controlled Duffing equation with sliding mode control reveal that the chaotic behavior does not persist in this case. The norm of the variance of the system under linear control increases for some time indicating that the uncertainty is increasing at those instances. It is concluded that for a nonlinear system when the regulation starts from a point in the chaotic region of the unregulated system's state space, there is an increase in the covariance. The chaotic nature of the system remains under influence of a linear control for some time. The application of sliding mode to the chaotic model, however, resulted in non-chaotic response. Chapter 4. State Estimation of the Duffing Equation 74 8 8 00 (*) Figure 4.37: Lineax and nonlinear estimation of the output of the Duffing equation (si) for three different initial covariance matrices a)N.L.Est b)L.Est, )N.L.Est d)L.Est, e)N.L.Est f)L.Est Chapter 4. State Estimation of the DufRng Equation 75 Figure 4.38: Linear and nonlinear estimation of the 2nd state of the Duffing equation (z2) for three different initial covariance matrices a)N.L.Est b)L.Est, c)N.L.Est d)L.Est, e)N.L.Est f)L.Est Figure 4.39: Variance of the output error of the Duffing equation using both estimation schemes for three different initial covariance matrices a)N.L.Est b)L.Est, c)N.L.Est d)L.Est, e)N.L.Est f)L.Est Chapter 4. State Estimation of the Duffing Equation 77 <r (e) (f) Figure 4.40: Variance of the error of the 2nd state of the Duffing equation using both estimation schemes for three different initial covariance matrices a)N.L.Est b)L.Est, c)N.L.Est d)L.Est, e)N.L.Est d)L.Est Chapter 4. State Estimation of the Duffing Equation 78 6.40 8.00 4.00 5.00 0.00 A.O0 | ° ° L 2 - ° , 6 - ° W . 0 °oToO LOO Too 3^00 ToO S.OO T SEC T SEC 00 (0 Figure 4.41: Kalman gain {R~n) for the Duffing equation using both estimation schemes for three different initial covariance matrices a)N.L.Est b)L.Est, c)N.L.Est d)L.Est, e)N.L.Est f)L.Est Chapter 4. State Estimation of the Duffing Equation 79 8 Figure 4.42: Kalman gain (K22) for the Duffing equation using both estimation schemes for three different initial covariance matrices a)N.L.Est b)L.Est, c)N.L.Est d)L.Est, e)N.L.Est f)L.Est Chapter 4. State Estimation of the Duffing Equation 80 8 Figure 4.43: Estimation error of the output of the Duffing equation (xi - Xi) using both estimation schemes for three different initial covariance matrices a)N.L.Est b)L.Est, c)N.L.Est d)L.Est, e)N.L.Est f)L.Est (Note scale differences) Chapter 4. 
State Estimation of the Duffing Equation

Figure 4.44: Estimation error of the 2nd state of the Duffing equation (x2 - x̂2) using both estimation schemes for three different initial covariance matrices a) N.L.Est b) L.Est, c) N.L.Est d) L.Est, e) N.L.Est f) L.Est (Note scale differences)
Figure 4.45: Noisy measurement of the first state x1 of the Duffing equation with constant measurement noise level
Figure 4.46: Noisy measurement of the 2nd state x2 of the Duffing equation with constant measurement noise level
Figure 4.47: Estimation of the output of the Duffing equation (x1) using both estimation schemes for three different measurement noise levels a) N.L.Est b) L.Est, c) N.L.Est d) L.Est, e) N.L.Est f) L.Est
Figure 4.48: Estimation of the 2nd state of the Duffing equation (x2) using both estimation schemes for three different measurement noise levels a) N.L.Est b) L.Est, c) N.L.Est d) L.Est, e) N.L.Est f) L.Est
Figure 4.49: Variance of the error of the first state of the Duffing equation using both estimation schemes for three different measurement noise levels a) N.L.Est b) L.Est, c) N.L.Est d) L.Est, e) N.L.Est f) L.Est
Figure 4.50: Variance of the error of the 2nd state of the Duffing equation using both estimation schemes for three different measurement noise levels a) N.L.Est b) L.Est, c) N.L.Est d) L.Est, e) N.L.Est f) L.Est
Figure 4.51: Kalman gain k11 for the Duffing equation using both estimation schemes for three different measurement noise levels a) N.L.Est b) L.Est, c) N.L.Est d) L.Est, e) N.L.Est f) L.Est
Figure 4.52: Kalman gain k22 for the Duffing equation using both estimation schemes for three different measurement noise levels a) N.L.Est b) L.Est, c) N.L.Est d) L.Est, e) N.L.Est f) L.Est
Figure 4.53: Estimation error of the output of the Duffing equation using both estimation schemes for three different measurement noise levels a) N.L.Est b) L.Est, c) N.L.Est d) L.Est, e) N.L.Est f) L.Est (Note scale differences)
Figure 4.54: Estimation error of the 2nd state of the Duffing equation using both estimation schemes for three different measurement noise levels a) N.L.Est b) L.Est, c) N.L.Est d) L.Est, e) N.L.Est f) L.Est (Note scale differences)
Figure 4.55: Noisy measurement of the first state x1 of the Duffing equation with constant initial covariance matrix, R11 = 1.0, R22 = 1.0
Figure 4.56: Noisy measurement of the 2nd state x2 of the Duffing equation with constant initial covariance matrix, R11 = 1.0, R22 = 1.0
Figure 4.57: Controlled Duffing equation using sliding mode and PD control with noisy measurement. a) S.M. of x1 b) S.M. of x2 c) PD of x1 d) PD of x2 (Note scale differences)

Chapter 4.
State Estimation of the Duffing Equation 93 Figure 4.58: Sliding mode control of the Duffing equation using both estimation scheme. a)S.M.+NL.Est of xx b)S.M.+NL.Est of x2 c)S.M.+L.Est of xx d)S.M.+L.Est of x2 Chapter 4. State Estimation of the Duffing Equation 94 Figure 4.59: PD control of the Duffing equation using both estimation scheme. a)PD+NL.Est of xx b)PD+NL.Est of x2 c)PD+L.Est of xx d)PD+L.Est of x2 Chapter 4. State Estimation of the Duffing Equation 95 Figure 4.60: Variances of the states of the controlled Duffing equation using both controllers a)S.M. of xx b)S.M. of x2 c)PD of xx d)PD of x2 Chapter 5 Conclusions and the Future work In this thesis it was found that chaotic systems when controlled using the conventional methods of analysis based on the linearized version of the system may not perform well. For estimation and control purposes, however, some of the nonlinear techniques already available in modern control and estimation theory, with suitable modifications can be used. A system operating under external hnear control, may maintain its chaotic behavior. The estimation technique used was a modified Kalman filter in which the covariance projection was carried out based on an initial condition propagation technique outlined in Chapter 4. This nonlinear estimation scheme proved to be an effective method. The major reason in abandoning the conventional methods in estimation theory when dealing with chaotic system is the lack a reliable method to represent the statistical properties of a chaotic system. An attempt to find a noise equivalent model for the chaotic system based on the autocorelation of the output gave an acceptable result for the control purpose, but the nonlinear scheme performed better. The results of Kalman filter estimation based on the equivalent model were not as good as those in case of nonlinear scheme. In most cases the norm of the estimation error for the linear case was double the nonlinear case. Implementation of a state feedback controller was carried out using the results of the two types of state estimation. The estimation techniques used were assessed through the performance of the closed loop system and the persistence of chaotic behavior 96 Chapter 5. Conclusions and the Future work 97 under control action was also observed. The results confirmed the conclusions reached before, indicating that output would be better controlled when the nonlinear estimation technique was used. Although, the chaotic behavior will sometimes be maintained under hnear state feedback control, apphcation of a shding mode controller showed that the chaotic dynamics would not persist under this kind of control. A good noise-equivalent model for chaotic systems may not exist. Higher order moments than the first(mean) and second (covariance) to represent the output statis-tical properties might improve the approximation. The model used in this thesis is a deterministic model with no system noise, a step further would be to investigate the validity of the nonhnear estimation technique in the presence of system noise. The nonlinear estimation technique could be further evaluated through a comparison with the available methods in nonhnear filtering theory. Bibliography [1] Azzoz.et.al (1983) "Transition to Chaos in a Simple Nonlinear Circuit Driven by a Sinusoidal Voltage,'"IEEE trans., Circuit Syst, Vol cas-30, pp. 913-914 [2] Astrom, K.J. Wittenmark, B.J. (1984). Computer Controlled Systems, Printice-Hall Inc. Englewood Cliffs, N.J. [3] Bishop, S.R. Leung, L.M. 
Virgin, L.N. (1988) "On the Computation of Domain of Attraction During the Dynamic Modelling of Oscillating systems", Applied Math. Modeling, Vol.12, pp 503-515 [4]-Bishop, S.R. Leung, L.M. Virgin, L.N. (1987) "Predicting Incipient Jumps to Resonance of Compliant Marine Structures in an Evolving Sea-State", J. OMAE(ASME), Vol 3, pp 223-228 [5] Bowles, W.M. Cartelli, J.A. (1983)."Global Approximation for Nonlinear Filtering with Application to Spread Spectrum Ranging," In Control and Dynamic System, Vol.19, pp 297-367 [6] Brown, R.G. (1983). An Introduction to Random Signal Analysis and Kalman Filtering, John Wiley & Sons Inc., N.Y. [7] Bucy, R.S. (1969). "Bayes Theorem and Digital Realization for Nonlinear Filters." The Journal of the Astronautical Science, Vol.17, No.2, pp 80-94 [8] Bucy, R.S. Senne, K.D. (1971) "Digital Synthesis of Nonlinear Filters", Automat-ica, Vol 7, pp 287-298 98 Bibliography 99 [9] Decarlo, R.A. Zak, S.H. Matthews, G.P. (1987)."Variable Structure Control of Nonlinear Multivariable Systems: A Tutorial", Proceedings of IEEE, Vol. 76, No.3, pp 212-232 [10] Devaney, R.L. (1989). An Introduction to Chaotic Dynamical Systems, 2nd ed. Addison-Wesley, Redwood city, Calif. [11] Feigenbaum, M.J. (1980)."Universal Behavior in Nonhnear Systems", Los Alomos Sci.,1, pp 4-27 [12] Fowler, T.B. Jr. (1986).Stochastic Control of Chaotic Nonlinear Systems, Ph.D. Thesis, George Washington University, Washington D.C. [13] Fowler, T.B. Jr. (1987). "A Numerical Method for Propagation of Uncertainty in Nonhnear Systems", Int. J. General Systems, Vol.13, pp 265-280 [14] Guckenheimer, J. and Holmes, P. (1986). Nonlinear Oscillations, Dynamical Sys-tems, and Bifurcation of Vector Fields, Springer-Verlag, New York, Inc. [15] Gustavson, K. (1980). Partial Differential Equations and Hilbert space method, John Wiley & Sons, Inc., N.Y. [16] Haken, H. (1983). Advanced Synergetics, Berlin: Springer-Verlag [17] Haken, H. (1984). Synergetics, An Introduction, Berhn:Springer-Verlag [18] Holmes, J. Moon, F.C. (1983). "Strange Attractors and Chaos in Nonhnear Me-chanics," / . of Applied Mechanics, ASME, Vol.50, pp 1021-1032 [19] Itkis, U. (1976). Control Systems of Varialble Structure, Israel University Press, Keter Publishing House Jerusalem Ltd. Bibliography 100 [20] Kuo, B.C. (1982). Automatic Control Systems, Printice-Hall, Inc., Englewood Cliffs, N.J. [21] Marshfield, W.B. Wright, J.H.G. (1980) "Ship Roll Response and Capsize Behavior in Beam Seas", Trans, of the Royal Instute of Naval Architects (British), Vol.122, pp 129-150 [22] Newcomb, R. Sathyan, S. (1983) "An RC Op Amp Chaos generator," IEEE Trans, circuit and Systems, Vol.CAS-30(l), pp 54-56 [23] Press, W.H. (1988) Numerical Recipes in C : the Art of Scientific Computing, Cambridge University Press, N.Y. [24] Prigogine, I. (1967). "Thermodynamics of Irreversible process", third ed. Inter-science, N.Y. [25] Rodriguez-Vazquez, A. Huertas, J. Chua, L.O. (1983). " Chaos in Switched- ca-pacitor circuit," IEEE trans. Circuit and Systems, Vol.CAS-32(10) , pp 1083-1085 [26] Slotine, J.J. (1983) Tracking Control of Nonlinear Systems using Nonlinear Sliding surfaces, Ph.D. dissertation, Dep. Aero, and Astro., M.I.T., Cambridge, Ma. [27] Slotine, J.J. Sastry, S.S. (1983). "Tracking Control of Nonhnear Systems Using Shding Surfaces, With Apphcation to Robot Manipulator", Int. J. Control, Vol.38, No.2, pp 465-492 [28] Slotine, J.J. (1984). "Shding Controller Design for Nonhnear Systems", Int. J. Control, Vol.40, No.2, 421-434 [29] Thompson, J.M.T. 
Stewart, H.B. (1986). Nonlinear Dynamics and Chaos: Geo-metrical Methods for Enginneers and Scientist, John Wiley, Chichester. Bibliography 101 [30] Ueda, Y. (1980). "Explosion of Stange Attractors Exhibited by Duffings Equa-tion, in Nonlinear Dynamics", Helleman, R.H.D., ed. New York Acad. Sci, N.Y., Vol.357, pp 422-434 [31] Ueda, Y. (1980a). "Steady Motions Exhibited by Duffing Equation: A Picture Book of Regular and Chaotic Motions", In New Approaches to Nonlinear Problems in Dynamics, SIAM, Philadelphia, Holmes, J. ed. , pp 311-322. [32] Ueda, Y. (1985). "Random Phenomena Resulting from Nonlinearity in the System Described by Duffing Equation", Int. J. Non-linear Mechanics, Vol.5/6, pp 481-491 [33] Utkin, V.I. (1978). Sliding Modes and Their Application in Variable Structure Systems, MIR Publishers, Moscow. [34] Vidyasagar, M. (1978). Nonlinear System Analysis, Printice-Hall, Englewood Cliffs, N.J. [35] Virgin, L.N. (1987). "The Nonlinear Rolling Response of a Vessel Including Chaotic Motions Leading to Capsize in Regular Sea", Applied Ocean Research, Vol.9, No.2, pp 89-95 [36] Virgin, L.N. Bishop, S.R. (1988)."Complex Dynamics and Chaotic Response in the Time Domain Simulations of a Floating Structure", Ocean Eng.(British), Vol.15, No.l, pp 71-90 
