LINEAR EQUIVALENTS OF NONLINEAR SYSTEMS

By

WILFRED SEE FOON TSE

B.A.Sc., The University of British Columbia, 1979
B.Sc., The University of British Columbia, 1983

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES, Department of Mathematics

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA

April 1987

© Wilfred See Foon Tse, 1987

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Mathematics
The University of British Columbia
1956 Main Mall
Vancouver, Canada V6T 1Y3

Date: April 30, 1987

Abstract

Consider the following nonlinear system

$$\dot{x} = f(x) + \sum_{i=1}^{m} g_i(x)u_i, \qquad (1)$$

$$y = h(x), \qquad (2)$$

where $x \in \mathbb{R}^n$, $f, g_1, \dots, g_m$ are $C^\infty$ functions on $\mathbb{R}^n$, and $h$ is a $C^\infty$ function with values in $\mathbb{R}^p$, all defined on a neighborhood of 0. The problem of finding a necessary and sufficient condition under which system (1) can be transformed to a linear controllable system by a state coordinate change and feedback has been studied quite thoroughly. In this thesis, we first discuss a few different approaches to this problem, and eventually we show that the slightly different versions of the necessary and sufficient condition discovered are equivalent. Next we consider system (1) with all $u_i = 0$ together with system (2), and study the dual problem of transforming it to a linear observable system by a state and output coordinate change.
Finally, we consider briefly systems (1) and (2) with nonzero $u_i$ and study the problem of transforming them to a linear system that is both completely controllable and observable. Examples are given, and applications to local stabilization and estimation are discussed.

Table of contents

Abstract ii
Table of contents iii
List of figures iv
Acknowledgement v
Chapter 1. Introduction and background
§1.1 Introduction 1
§1.2 Preliminaries 9
Chapter 2. Linearization of nonlinear control systems
§2.1 Necessary and sufficient condition 14
§2.2 Example 40
§2.3 Local stabilization of a nonlinear system 45
§2.4 Local asymptotic stabilization problem 50
Chapter 3. Linearization of nonlinear systems with outputs
§3.1 Necessary and sufficient condition 53
§3.2 Example 68
§3.3 Local estimation of a nonlinear system 71
§3.4 Local asymptotic estimation problem 76
Chapter 4. Linearization of nonlinear control systems with outputs
§4.1 Necessary and sufficient condition 80
§4.2 Example 89
§4.3 Local asymptotic estimation and control 90
Bibliography 93

List of figures

Figure 1 92

Acknowledgement

I wish to thank Dr. U. Haussmann for his helpful comments, criticisms, and suggestions on this thesis, and for his patience in reviewing the draft. I am also grateful to the Mathematics Department for providing financial support during the preparation of this thesis.

CHAPTER 1
Introduction and background

§1.1 Introduction

Consider the linear controllable system of the form

$$\dot{x} = Ax + Bu \qquad (1.1)$$

where $A$ and $B$ are $n \times n$ and $n \times m$ constant matrices respectively, $x \in \mathbb{R}^n$, $u \in \mathbb{R}^m$, satisfying the controllability rank condition

$$\operatorname{rank}(B, AB, \dots, A^{n-1}B) = n.$$

We will denote this system by the pair $(A, B)$.
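As a computational aside (not part of the original thesis text), the rank condition above, together with the integers $r_i$ and the controllability indices $\kappa_j$ that Brunovsky's condition below is built on, can be checked numerically. A minimal sketch, with hypothetical matrices chosen only for illustration:

```python
import numpy as np

# Hedged sketch: check rank(B, AB, ..., A^{n-1} B) = n for a pair (A, B),
# and compute the integers r_i and the controllability indices kappa_j.
# The example matrices are hypothetical, chosen only for illustration.
def controllability_data(A, B):
    n = A.shape[0]
    blocks, ranks = [B], []
    for _ in range(n):
        ranks.append(np.linalg.matrix_rank(np.hstack(blocks)))
        blocks.append(A @ blocks[-1])
    # r_0 = rank B;  r_i = rank(B,...,A^i B) - rank(B,...,A^{i-1} B)
    r = [ranks[0]] + [int(ranks[i] - ranks[i - 1]) for i in range(1, n)]
    # kappa_j = number of r_i's that are >= j
    kappa = [sum(1 for ri in r if ri >= j) for j in range(1, int(r[0]) + 1)]
    return ranks[-1], r, kappa

# One integrator chain of length 2 plus one of length 1 (Brunovsky-like):
A = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
B = np.array([[0., 0.], [1., 0.], [0., 1.]])
rank_n, r, kappa = controllability_data(A, B)
print(rank_n, r, kappa)
```

For this example the sketch yields $r = (2, 1, 0)$ and $\kappa = (2, 1)$, consistent with the relations $\kappa_1 \ge \kappa_2 \ge \dots \ge 1$ and $\sum_j \kappa_j = n$ stated below.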
In the early seventies, some interest was shown in the problem of transforming (1.1) to another linear controllable system $(\bar{A}, \bar{B})$ by transformations of the form

(i) a nonsingular linear change of state coordinates: $x = P\bar{x}$,
(ii) a nonsingular linear change of input coordinates: $u = Qv$,
(iii) a linear state feedback: $u = Wx$,

so that system $(A, B)$ is transformed to system $(\bar{A}, \bar{B})$ in the following way:

$$(A, B) \to (\bar{A}, \bar{B}) = \big(P^{-1}(A + BW)P,\ P^{-1}BQ\big).$$

Brunovsky gave a necessary and sufficient condition under which such a transformation is possible. The condition is that $r_i = \bar{r}_i$ for $i = 0, \dots, n-1$, where

$$r_0 = \operatorname{rank} B,$$
$$r_i = \operatorname{rank}(B, AB, \dots, A^iB) - \operatorname{rank}(B, AB, \dots, A^{i-1}B), \quad 1 \le i \le n-1,$$

and the $\bar{r}_i$ are similarly defined for the system $(\bar{A}, \bar{B})$. He called the system $(\bar{A}, \bar{B})$ feedback equivalent, or F-equivalent, to the system $(A, B)$ if such a transformation exists, and showed that systems that are F-equivalent to each other are F-equivalent to a certain linear controllable system of the form

$$\dot{\xi} = \bar{A}\xi + \bar{B}v \qquad (1.2)$$

where

$$\bar{A} = \begin{pmatrix} A_1 & & 0 \\ & \ddots & \\ 0 & & A_{r_0} \end{pmatrix}, \qquad \bar{B} = \begin{pmatrix} b_1 & & 0 \\ & \ddots & \\ 0 & & b_{r_0} \end{pmatrix},$$

$$A_i = \begin{pmatrix} 0 & 1 & & 0 \\ & \ddots & \ddots & \\ & & 0 & 1 \\ 0 & & & 0 \end{pmatrix}, \qquad b_i = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix}, \qquad i = 1, \dots, r_0,$$

where $A_i$ and $b_i$ have dimensions $\kappa_i \times \kappa_i$ and $\kappa_i \times 1$ respectively. Moreover, the integers $\kappa_1, \dots, \kappa_m$ can be reordered so as to satisfy $0 < \kappa_i \le n$, $\kappa_1 \ge \kappa_2 \ge \dots \ge \kappa_{r_0} \ge 1$, $\kappa_i = 0$ for $i > r_0$, and $\sum_{i=1}^{m} \kappa_i = n$; they are generally called the controllability indices. They are related to the $r_j$'s in the following way: $\kappa_i$ is the number of $r_j$'s that are $\ge i$. System (1.2) is generally said to be in Brunovsky canonical form and will be referred to as system (BCF) for brevity.

Since then much work has been done to generalize the result to nonlinear systems; specifically, to find a necessary and sufficient condition under which a nonlinear control system can be transformed locally to the linear controllable system (BCF). The particular class of nonlinear control systems considered by many has the form

$$\dot{x} = f(x) + \sum_{i=1}^{m} g_i(x)u_i \qquad (1.3)$$

where $x \in \mathbb{R}^n$, $f, g_1, \dots, g_m$ are $C^\infty$ vector fields on some neighborhood of 0 in $\mathbb{R}^n$, and $f(0) = 0$. Such systems have both practical and theoretical significance, as many physical systems, especially those in engineering, can be described in this form.

Remark. There is no loss of generality in assuming that $f(0) = 0$, since if $f(x_0) = 0$, then letting $\bar{x} = x - x_0$, we may set $\bar{f}(\bar{x}) = f(\bar{x} + x_0)$ and $\bar{g}_i(\bar{x}) = g_i(\bar{x} + x_0)$. In the new variable $\bar{x}$, system (1.3) is written as

$$\dot{\bar{x}} = \bar{f}(\bar{x}) + \sum_{i=1}^{m} \bar{g}_i(\bar{x})u_i,$$

where now $\bar{f}(0) = 0$.

The class of transformations considered by most people is a modification of the one considered by Brunovsky; specifically, it consists of

(C1) a nonlinear change of state coordinates: $x = x(\xi)$ such that the map $x : V \subset \mathbb{R}^n \to U \subset \mathbb{R}^n$ is a local diffeomorphism at 0 mapping the origin to the origin, i.e., $x(0) = 0$;

(C2) a linear change of input coordinates:

$$u_i = \sum_{j=1}^{m} q_{ij}(x)v_j, \quad i = 1, \dots, m,$$

where the $q_{ij}$ are $C^\infty$ real-valued functions on $U$ such that the $m \times m$ matrix $Q(x) = (q_{ij}(x))$ is nonsingular on a neighborhood of 0, so that $g_j \to \sum_{i=1}^{m} g_iq_{ij}$;

(C3) a nonlinear state feedback: $u_i = w_i(x)$, $i = 1, \dots, m$, where the $w_i(x)$ are $C^\infty$ real-valued functions defined on $U$ with $w_i(0) = 0$ for $i = 1, \dots, m$, so that $f \to f + \sum_{i=1}^{m} g_iw_i$.

The family of all such transformations forms a group, which will be called $G_c$. We call two systems $G_c$-related if one can be transformed to the other by a transformation in $G_c$. It can be seen that this relation is an equivalence relation, i.e., it is reflexive, symmetric, and transitive; hence we will also call such systems $G_c$-equivalent for convenience. Of particular interest is the characterization of systems (1.3) that are locally $G_c$-equivalent to the linear controllable system (BCF). In [2], Brockett obtained some results for the single-input case. Jakubczyk and Respondek [9], and Hunt, Su, and Meyer [7] obtained similar results for the multi-input case.
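The effect of a transformation in $G_c$ can be seen on a toy example of my own (not from the thesis): for the hypothetical single-input system $\dot{x}_1 = x_2$, $\dot{x}_2 = x_1^2 + u$, the type (C3) feedback $u = w(x) + v$ with $w(x) = -x_1^2$ (identity state coordinates, $Q = 1$) already produces the Brunovsky form $\dot{x}_1 = x_2$, $\dot{x}_2 = v$. A sympy sketch verifying this cancellation:

```python
import sympy as sp

# Hypothetical illustration of a G_c transformation: for
#   xdot = f(x) + g(x) u,  f = (x2, x1^2), g = (0, 1),
# the feedback u = w(x) + v with w(x) = -x1^2 (type (C3); note w(0) = 0)
# yields the closed-loop vector field (x2, v), i.e. Brunovsky form.
x1, x2, v = sp.symbols('x1 x2 v')
f = sp.Matrix([x2, x1**2])
g = sp.Matrix([0, 1])
w = -x1**2

closed_loop = sp.simplify(f + g * (w + v))
print(closed_loop.T)
```

Here no change of state coordinates is needed; in general the nonlinear coordinate change (C1) does the bulk of the work, and the conditions of chapter 2 characterize when it exists.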
In [8], Hunt and Su used the global inverse function theorem to obtain conditions under which a global transformation exists for the single-input case.

As a dual to the above linearization problem for a nonlinear control system, the linearization of a nonlinear system with outputs has also been considered by some researchers. Such a system has the form

$$\dot{x} = f(x) \qquad (1.4)$$
$$y = h(x)$$

where $x \in \mathbb{R}^n$, $y \in \mathbb{R}^p$, $f(x)$ is a $C^\infty$ vector field on some neighborhood of 0 in $\mathbb{R}^n$, and $h(x)$ is a $C^\infty$ map from some neighborhood of 0 in $\mathbb{R}^n$ to $\mathbb{R}^p$ such that $h(0) = 0$; $x$ and $y$ are generally called the state and output of the system respectively. Now the group of transformations consists of:

(O1) a nonlinear change of state coordinates, as described earlier in (C1);

(O2) a nonlinear change of output coordinates: $y = y(\phi)$ such that the map $y : W \subset \mathbb{R}^p \to y(W) \subset \mathbb{R}^p$ is a local diffeomorphism on $W$ mapping the origin to the origin, i.e., $y(0) = 0$, where $W$ is some neighborhood of 0 in $\mathbb{R}^p$.

We will call this group of transformations $G_o$, and we similarly define $G_o$-related or $G_o$-equivalent systems. Analogous to the previous problem, we are interested in the characterization of systems (1.4) that are locally $G_o$-equivalent to a system in the observable form given below:

$$\dot{\xi} = A\xi + \alpha(\phi) \qquad (1.5)$$
$$\phi = C\xi$$

where $\xi \in \mathbb{R}^n$, $\phi \in \mathbb{R}^p$, $\alpha$ is a vector-valued function depending only on the output $\phi$, and $(C, A)$ is an observable pair, i.e.,

$$\operatorname{rank}\begin{pmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{pmatrix} = n.$$

Incidentally, if (1.4) is linear and observable, i.e., if it has the form

$$\dot{x} = Ax \qquad (1.6)$$
$$y = Cx$$

with $(C, A)$ an observable pair, then it is possible to transform (1.6) to (1.5), with $\alpha$ linear in $\phi$, by a linear change of state and output coordinates such that $\bar{A}$ is as before and

$$\bar{C} = \begin{pmatrix} C_1 & & 0 \\ & \ddots & \\ 0 & & C_{d_0} \end{pmatrix}, \qquad C_i = (1\ 0\ \cdots\ 0), \quad i = 1, \dots, d_0,$$

where $A_i$ and $C_i$ have dimensions $\mu_i \times \mu_i$ and $1 \times \mu_i$ respectively. Moreover, the integers $\mu_1, \dots, \mu_p$ can be reordered so as to satisfy $0 < \mu_i \le n$, $\mu_1 \ge \mu_2 \ge \dots \ge \mu_{d_0} \ge 1$, $\mu_i = 0$ for $i > d_0$, and $\sum_{i=1}^{p} \mu_i = n$; they are generally called the observability indices. It can be checked that $\mu_i$ is the number of $d_j$'s that are $\ge i$, where the $d_j$'s are defined as follows:

$$d_0 = \operatorname{rank} C, \qquad d_i = \operatorname{rank}\begin{pmatrix} C \\ CA \\ \vdots \\ CA^{i} \end{pmatrix} - \operatorname{rank}\begin{pmatrix} C \\ CA \\ \vdots \\ CA^{i-1} \end{pmatrix}, \quad 1 \le i \le n-1.$$

Indeed, if $(C, A)$ is an observable pair, then $(A^T, C^T)$ is a controllable pair. That means there exist matrices $P$, $W$, $Q$ such that

$$\bar{A} = P^{-1}(A^T + C^TW)P \quad \text{and} \quad \bar{B} = P^{-1}C^TQ,$$

or

$$A = (P^T)^{-1}\bar{A}^TP^T - W^TC \quad \text{and} \quad \bar{B}^T = Q^TC(P^T)^{-1}.$$

Let $x = S\xi$ and $y = H\phi$ be the state and output coordinate changes respectively. If $S = (P^T)^{-1}$ and $H = (Q^T)^{-1}$, then from (1.6) it follows that

$$\dot{\xi} = S^{-1}AS\xi = P^T\big((P^T)^{-1}\bar{A}^TP^T - W^TC\big)(P^T)^{-1}\xi = \big(\bar{A}^T - S^{-1}W^TCS\big)\xi = \big(\bar{A}^T - S^{-1}W^T(Q^T)^{-1}Q^TCS\big)\xi.$$

Also $y = H\phi = (Q^T)^{-1}\phi = CS\xi$, so $\phi = Q^TCS\xi = \bar{B}^T\xi$. Hence

$$\dot{\xi} = \bar{A}^T\xi + D\phi, \quad \text{where } D = -S^{-1}W^T(Q^T)^{-1}.$$

It is obvious that a permutation of the states will transform the system in the coordinates $\xi$ and $\phi$ into system (1.5). The relationship between the integers $\mu_i$ and $d_j$ is clear in view of our previous discussion of the control system (1.1).

System (1.5) will be referred to as the dual Brunovsky observer form, or system (DBOF) for brevity, if $(C, A) = (\bar{C}, \bar{A})$. We are particularly interested in finding necessary and sufficient conditions under which system (1.4) can be transformed to this form. In [13], Krener and Isidori gave a necessary and sufficient condition for this to be possible in the single-output case. Bestle and Zeitz [1] also studied this problem for the single-output case, without the output change of coordinates (O2). They gave a necessary condition that the map for the state coordinate change has to satisfy: it must solve a system of partial differential equations. They did not examine the solvability of these equations; however, they showed how to design a nonlinear observer based on the linear system in canonical form if a transformation exists.
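Dually to the controllability computation, the integers $d_i$ and the observability indices $\mu_i$ can be computed numerically. A minimal sketch of my own, with hypothetical matrices (the transposes of the earlier controllability example, so the duality $(C, A) \leftrightarrow (A^T, C^T)$ is visible):

```python
import numpy as np

# Hedged sketch (hypothetical matrices): compute the integers d_i and the
# observability indices mu_i for a pair (C, A) by stacking C, CA, CA^2, ...
# This mirrors the controllability computation under the duality
# (C, A) observable  <=>  (A^T, C^T) controllable.
def observability_data(A, C):
    n = A.shape[0]
    blocks, ranks = [C], []
    for _ in range(n):
        ranks.append(np.linalg.matrix_rank(np.vstack(blocks)))
        blocks.append(blocks[-1] @ A)
    # d_0 = rank C;  d_i = rank(C;...;CA^i) - rank(C;...;CA^{i-1})
    d = [ranks[0]] + [int(ranks[i] - ranks[i - 1]) for i in range(1, n)]
    # mu_i = number of d_j's that are >= i
    mu = [sum(1 for dj in d if dj >= i) for i in range(1, int(d[0]) + 1)]
    return ranks[-1], d, mu

A = np.array([[0., 0., 0.], [1., 0., 0.], [0., 0., 0.]])
C = np.array([[0., 1., 0.], [0., 0., 1.]])
rank_n, d, mu = observability_data(A, C)
print(rank_n, d, mu)
```

For this pair the sketch gives $d = (2, 1, 0)$ and $\mu = (2, 1)$, matching the indices of the dual controllable pair.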
Krener and Respondek [14] also studied this problem for the multi-output case.

In this thesis, we will study the above problems in some detail. In the next section, we give some definitions and background material developed in differential geometry. In chapter 2, we derive a necessary and sufficient condition for system (1.3) to be $G_c$-equivalent to system (BCF) and show that it is equivalent to those in the literature; thereafter, an example is given in section 2.2. In section 2.3, we determine a control law that stabilizes the nonlinear system for small deviations from the equilibrium, and an application is discussed in section 2.4. We derive a necessary and sufficient condition for system (1.4) to be $G_o$-equivalent to system (DBOF) in section 3.1, followed by an example in section 3.2. In section 3.3, we develop the theory for the local estimation of states, and we give the design of a local asymptotic observer in section 3.4. In chapter 4, we find a necessary and sufficient condition for the system

$$\dot{x} = f(x) + \sum_{i=1}^{m} g_i(x)u_i \qquad (1.7)$$
$$y = h(x)$$

to be $G_{co}$-equivalent to a linear system that is both controllable and observable, of the form

$$\dot{\xi} = \bar{A}\xi + \bar{B}v \qquad (1.8)$$
$$\phi = \bar{C}\xi$$

where $G_{co}$ is the group of transformations of the types (C1), (C2), (C3), and (O2). An example will be given and a regulator problem will be discussed. We will sometimes use the word 'linearizable' in place of '$G_c$-equivalent' or '$G_o$-equivalent' when the meaning is clear.

§1.2 Preliminaries

In this section, we introduce some definitions developed in differential geometry that are relevant to our later discussions. Specifically, Brockett's survey [3] on nonlinear systems and differential geometry, Hermann and Krener's paper [6] on nonlinear controllability and observability, and, in particular, Isidori's book [9] on nonlinear control systems provide background material for this section. For nonlinear system analysis, we need to define some terminology.
Given a $C^\infty$ manifold $M$, let $C(M)$ be the set of $C^\infty$ real-valued functions on $M$, $V(M)$ the set of $C^\infty$ vector fields on $M$, and $V^*(M)$ the set of covariant vector fields, or covector fields, on $M$. We define a $k$-dimensional $C^\infty$ distribution $\Delta$ on $M$ as a mapping assigning to each point $p$ of $M$ a $k$-dimensional subspace of the tangent space $T_pM$ to $M$ at $p$; it is a submodule of $V(M)$ over the ring $C(M)$. Furthermore, for each $p \in M$, there exist a neighborhood $U$ of $p$ and $k$ $C^\infty$ vector fields $v_1, \dots, v_k$ such that

$$\Delta(q) = \operatorname{span}\{v_i(q) : i = 1, \dots, k\}, \quad \forall q \in U.$$

Following the notation of Isidori [9], if $W = \{w_i \in V(M) : i \in I\}$ for some index set $I$, we denote the distribution generated by the elements of $W$ by $\operatorname{sp}\{w_i : i \in I\}$, which is the set of all linear combinations of vector fields in $W$ with coefficients in $C(M)$; it will also be denoted by $\bar{W}$. Pointwise, for each $p \in M$,

$$\bar{W}(p) = \operatorname{span}\{w_i(p) : i \in I\},$$

where $\operatorname{span}\{w_i(p) : i \in I\}$ denotes the set of all $\mathbb{R}$-linear combinations of the $w_i(p)$. Likewise, we define a $k$-dimensional $C^\infty$ codistribution $\Delta^*$ on $M$ as a mapping assigning to each point $p$ of $M$ a $k$-dimensional subspace of the cotangent space $T_p^*M$ to $M$ at $p$; it is a submodule of $V^*(M)$ over the ring $C(M)$. For each $p \in M$, there exist a neighborhood $U$ of $p$ and $k$ $C^\infty$ covector fields $v_1^*, \dots, v_k^*$ such that

$$\Delta^*(q) = \operatorname{span}\{v_i^*(q) : i = 1, \dots, k\}, \quad \forall q \in U.$$

If $W^* = \{w_i^* \in V^*(M) : i \in I\}$, then we define $\bar{W}^*$ similarly.

Definition: A distribution $\Delta$ (or codistribution) is said to be nonsingular on $U$ if $\dim \Delta(p)$ is constant for all $p \in U$, and is said to be nonsingular if it is nonsingular on $M$. If a point $p$ has such a neighborhood $U$ on which the distribution is nonsingular, then $p$ is called a regular point.

Definition: Given a distribution $\Delta$, we can define a codistribution, the annihilator of $\Delta$, by

$$\Delta^\perp : p \mapsto \{v^* \in T_p^*M : \langle v^*, v\rangle = 0,\ \forall v \in \Delta(p)\}.$$

Similarly, given a codistribution $\Delta^*$, we can define a distribution, the annihilator of $\Delta^*$, by

$$\Delta^{*\perp} : p \mapsto \{v \in T_pM : \langle v^*, v\rangle = 0,\ \forall v^* \in \Delta^*(p)\}.$$
Moreover, we have the properties

$$\dim \Delta + \dim \Delta^\perp = \dim M, \quad \text{and} \quad \dim \Delta^* + \dim \Delta^{*\perp} = \dim M.$$

We will assume $M = \mathbb{R}^n$ in the sequel. We define the Lie bracket $[\,\cdot\,,\cdot\,]$ of two vector fields $g_1, g_2 \in V(\mathbb{R}^n)$ as

$$[g_1, g_2] = \frac{\partial g_2}{\partial x}g_1 - \frac{\partial g_1}{\partial x}g_2,$$

which is a column-vector-valued function. The Lie bracket operation satisfies the following properties:

(i) it is skew-symmetric, i.e., $[g_1, g_2] = -[g_2, g_1]$;
(ii) it is bilinear over $\mathbb{R}$, i.e., if $a_1, a_2$ are real numbers and $g_1, g_2, g_3 \in V(\mathbb{R}^n)$, then
$$[a_1g_1 + a_2g_2, g_3] = a_1[g_1, g_3] + a_2[g_2, g_3];$$
(iii) it satisfies the Jacobi identity
$$[g_1, [g_2, g_3]] + [g_3, [g_1, g_2]] + [g_2, [g_3, g_1]] = 0.$$

If $\alpha, \beta \in C(\mathbb{R}^n)$ and $f, g \in V(\mathbb{R}^n)$, then

$$[\alpha f, \beta g] = \alpha \cdot \beta \cdot [f, g] + L_f(\beta) \cdot \alpha \cdot g - L_g(\alpha) \cdot \beta \cdot f,$$

where $\cdot$ indicates the ordinary product. The Lie derivative of a vector field $g$ with respect to $f$ is defined as $L_f(g) = [f, g]$, or $\operatorname{ad}_fg$, and inductively we define $\operatorname{ad}_f^kg = [f, \operatorname{ad}_f^{k-1}g]$ for $k \ge 1$, with $\operatorname{ad}_f^0g = g$. We see that $\operatorname{ad}_f^kg$ is again a vector field on $\mathbb{R}^n$ represented as a column vector. Note also that the Jacobi identity can be equivalently written as

$$L_{g_1}[g_2, g_3] = [L_{g_1}(g_2), g_3] + [g_2, L_{g_1}(g_3)].$$

If $h \in C(\mathbb{R}^n)$, then $dh$ is a covector field on $\mathbb{R}^n$ defined by

$$dh = \Big(\frac{\partial h}{\partial x_1}, \dots, \frac{\partial h}{\partial x_n}\Big),$$

which is a row-vector-valued function on $\mathbb{R}^n$, and the Lie derivative of $h$ with respect to a vector field $f \in V(\mathbb{R}^n)$ is

$$L_f(h) = \langle dh, f\rangle = \sum_{i=1}^{n} \frac{\partial h}{\partial x_i}f_i,$$

where $f_i$ is the $i$th component of $f$. Again, $L_f(h)$ is in $C(\mathbb{R}^n)$. If $w$ is a covector field, then the Lie derivative of $w$ with respect to $f$ is

$$L_f(w) = f^T\Big(\frac{\partial w^T}{\partial x}\Big)^T + w\,\frac{\partial f}{\partial x}.$$

In particular, if $w$ is an exact covector field, i.e., $w = dh$, then

$$L_f(dh) = f^T\,\frac{\partial^2 h}{\partial x^2} + dh\,\frac{\partial f}{\partial x},$$

which is a row-vector-valued function. Inductively, we define $L_f^k(dh) = L_fL_f^{k-1}(dh)$ for $k \ge 1$, with $L_f^0(dh) = dh$; $L_f^k(dh)$ is again in $V^*(\mathbb{R}^n)$. We also have the relation $L_f(dh) = dL_f(h)$, i.e., the operators $L_f$ and $d$ commute, and the Leibniz-type formula

$$L_f\langle dh, g\rangle = \langle L_f(dh), g\rangle + \langle dh, [f, g]\rangle,$$

where $h \in C(\mathbb{R}^n)$ and $f, g \in V(\mathbb{R}^n)$. If $\alpha, \beta \in C(\mathbb{R}^n)$, $f \in V(\mathbb{R}^n)$, and $w \in V^*(\mathbb{R}^n)$, then

$$L_{\alpha f}(\beta w) = \alpha \cdot \beta \cdot L_f(w) + \alpha \cdot L_f(\beta) \cdot w + \beta \cdot \langle w, f\rangle \cdot d\alpha,$$

and if $S$ is a set of vector fields or covector fields on $\mathbb{R}^n$ and $f \in V(\mathbb{R}^n)$, then $L_fS = \{L_fr : r \in S\}$.

Definition: Let $V = \{f_1(x), \dots, f_m(x)\}$ be a collection of $C^\infty$ vector fields on $\mathbb{R}^n$; $V$ is called involutive if there exist $\gamma_{ijk} \in C(\mathbb{R}^n)$ such that

$$[f_i, f_j] = \sum_{k=1}^{m} \gamma_{ijk} \cdot f_k, \quad 1 \le i, j \le m.$$

Definition: A distribution $\Delta$ is involutive if $[f, g] \in \Delta$ whenever $f, g \in \Delta$.

Remark. It is easy to see that if a finite collection of vector fields $V$ is involutive, then the distribution $\bar{V}$ generated by $V$ is an involutive distribution.

CHAPTER 2
Linearization of nonlinear control systems

§2.1 Necessary and sufficient condition

In this section we will find necessary and sufficient conditions for the nonlinear system

$$\dot{x} = f(x) + \sum_{i=1}^{m} g_i(x)u_i \qquad (2.1)$$

to be locally $G_c$-equivalent to the linear controllable system (BCF) around 0. We will examine previous work by Jakubczyk and Respondek [10], Isidori [9], and Hunt, Su, and Meyer [7]. They gave close but seemingly different conditions. We will show their proofs, or the ideas behind them, when necessary. Finally, we will prove that their results are equivalent.

First, we introduce some notation. Call $(w, Q)$ a feedback pair, where $w$ is an $m$-vector of $C^\infty$ real-valued functions and $Q$ is an $m \times m$ matrix of $C^\infty$ real-valued functions, all defined on a neighborhood $U$ of 0, if they satisfy the properties: (i) $w(0) = 0$; (ii) $Q(x)$ is nonsingular for all $x \in U$. We define

$$S_0 = \{g_1, \dots, g_m\}, \quad \text{and} \quad S_k = S_{k-1} \cup [f, S_{k-1}], \quad k \ge 1.$$

Similarly, we define $\tilde{S}_0 = \{\tilde{g}_1, \dots, \tilde{g}_m\}$ and $\tilde{S}_k = \tilde{S}_{k-1} \cup [\tilde{f}, \tilde{S}_{k-1}]$, where

$$\tilde{g}_j = \sum_{i=1}^{m} g_iq_{ij}, \qquad \tilde{f} = f + \sum_{i=1}^{m} g_iw_i,$$

and $(w, (q_{ij}))$ is a feedback pair. For simplicity, assume $g_1, \dots, g_m$ are linearly independent on a neighborhood of 0.

Proposition 2.1 The following statements are equivalent:

(A) System (2.1) is locally $G_c$-equivalent to the linear controllable system (BCF) around 0.
(B) There exist a feedback pair $(w, Q)$ and integers $\kappa_1 \ge \kappa_2 \ge \dots \ge \kappa_m \ge 1$ with $\sum_{i=1}^{m}\kappa_i = n$, such that the following conditions are satisfied on a neighborhood of 0:

(i) $[\operatorname{ad}_{\tilde{f}}^k\tilde{g}_j, \operatorname{ad}_{\tilde{f}}^l\tilde{g}_i] = 0$, for $1 \le i, j \le m$, $k = 0, \dots, \kappa_j - 1$, $l = 0, \dots, \kappa_i - 1$;
(ii) $\operatorname{ad}_{\tilde{f}}^{\kappa_j}\tilde{g}_j = 0$, for $j = 1, \dots, m$;
(iii) $\dim \operatorname{span}\{\operatorname{ad}_{\tilde{f}}^k\tilde{g}_j : j = 1, \dots, m,\ k = 0, \dots, \kappa_j - 1\} = n$.

(C) (cf. [10]) The following conditions are satisfied on a neighborhood of 0:

(i) the distribution $\bar{S}_j$ is involutive for $j = 0, \dots, n-1$;
(ii) $\bar{S}_j$ is nonsingular for $j = 0, \dots, n-1$;
(iii) $\dim \bar{S}_{n-1} = n$.

Remark. We will prove that (A) implies (B), (B) implies (C), and (C) implies (A). A direct proof that (B) implies (A) will be shown in the course of proving proposition 4.1. First, call $(x(\xi), w, Q)$ the linearizing triple if the map $x(\xi)$, corresponding to the state coordinate change, and the feedback $u = w(x) + Q(x)v$ transform system (2.1) into system (BCF).

Proof: (A) $\Rightarrow$ (B). Let $(x(\xi), w, Q)$ be the linearizing triple, with controllability indices $\kappa_1, \dots, \kappa_m$ satisfying $\kappa_1 \ge \kappa_2 \ge \dots \ge \kappa_m \ge 1$ and $\sum_{i=1}^{m}\kappa_i = n$. Differentiating $x(\xi)$ with respect to $t$,

$$\dot{x} = \frac{\partial x}{\partial \xi}\dot{\xi} = \frac{\partial x}{\partial \xi}(\bar{A}\xi + \bar{B}v) = f(x) + G(x)\big(w(x) + Q(x)v\big) = f(x) + G(x)w(x) + G(x)Q(x)v = \tilde{f}(x) + \tilde{G}(x)v, \qquad (2.2)$$

where $\tilde{f}(x) = f(x) + G(x)w(x)$ and $\tilde{G}(x) = G(x)Q(x)$. If (2.2) holds for arbitrary $v$, then it is necessary that

$$\frac{\partial x}{\partial \xi}\bar{A}\xi = \tilde{f}(x(\xi)), \qquad (2.3)$$
$$\frac{\partial x}{\partial \xi}\bar{B} = \tilde{G}(x(\xi)). \qquad (2.4)$$

Let $\sigma_0 = 0$ and $\sigma_j = \sum_{i=1}^{j}\kappa_i$ for $j \ge 1$, and relabel the coordinates by $\xi_{ik} = \xi_{\sigma_{i-1}+k}$, for $i = 1, \dots, m$, $k = 1, \dots, \kappa_i$. Then $\bar{A}\xi$ is the vector whose $(\sigma_{i-1}+k)$th component is $\xi_{i,k+1}$ for $k = 1, \dots, \kappa_i - 1$ and whose $\sigma_i$th component is 0, and $\bar{B} = (e_{\sigma_1}, \dots, e_{\sigma_m})$, where $e_j$ is the standard unit vector with a 1 in the $j$th position. By (2.4),

$$\frac{\partial x}{\partial \xi}\bar{B} = G(x(\xi))Q(x(\xi)) = \big(\tilde{g}_1(x(\xi)), \dots, \tilde{g}_m(x(\xi))\big),$$

and also $\frac{\partial x}{\partial \xi}\bar{B} = \big(\frac{\partial x}{\partial \xi_{1\kappa_1}}, \dots, \frac{\partial x}{\partial \xi_{m\kappa_m}}\big)$, so

$$\frac{\partial x}{\partial \xi_{i\kappa_i}} = \tilde{g}_i(x(\xi)), \quad i = 1, \dots, m. \qquad (2.5)$$

Differentiating (2.3) partially with respect to $\xi_{i,k+1}$, and noting that $\partial(\bar{A}\xi)/\partial\xi_{i,k+1}$ equals $e_{\sigma_{i-1}+k}$ for $k = 1, \dots, \kappa_i - 1$ and 0 for $k = 0$, we obtain

$$\frac{\partial x}{\partial \xi_{ik}} = \frac{\partial \tilde{f}}{\partial x}\frac{\partial x}{\partial \xi_{i,k+1}} - \frac{\partial}{\partial \xi}\Big(\frac{\partial x}{\partial \xi_{i,k+1}}\Big)\bar{A}\xi, \quad k = 1, \dots, \kappa_i - 1, \qquad (2.7)$$

and, for $k = 0$,

$$0 = \frac{\partial \tilde{f}}{\partial x}\frac{\partial x}{\partial \xi_{i1}} - \frac{\partial}{\partial \xi}\Big(\frac{\partial x}{\partial \xi_{i1}}\Big)\bar{A}\xi. \qquad (2.8)$$

Let $k = \kappa_i - 1$ in (2.7); then, since $\frac{\partial x}{\partial \xi_{i\kappa_i}} = \tilde{g}_i(x(\xi))$ by (2.5) and $\frac{\partial x}{\partial \xi}\bar{A}\xi = \tilde{f}$ by (2.3),

$$\frac{\partial x}{\partial \xi_{i,\kappa_i-1}} = \frac{\partial \tilde{f}}{\partial x}\tilde{g}_i - \frac{\partial \tilde{g}_i}{\partial x}\tilde{f} = -\operatorname{ad}_{\tilde{f}}\tilde{g}_i. \qquad (2.9)$$

Letting $k = \kappa_i - 2$ in (2.7) and using (2.9), and repeating the process for $k = \kappa_i - 3, \dots, 1$, we find

$$\frac{\partial x}{\partial \xi_{i,\kappa_i-k}} = (-1)^k\operatorname{ad}_{\tilde{f}}^k\tilde{g}_i, \quad k = 0, \dots, \kappa_i - 1. \qquad (2.10)$$

In particular, $\frac{\partial x}{\partial \xi_{i1}} = (-1)^{\kappa_i-1}\operatorname{ad}_{\tilde{f}}^{\kappa_i-1}\tilde{g}_i$. From (2.8),

$$0 = -\big[\tilde{f}, (-1)^{\kappa_i-1}\operatorname{ad}_{\tilde{f}}^{\kappa_i-1}\tilde{g}_i\big], \quad \text{or} \quad \operatorname{ad}_{\tilde{f}}^{\kappa_i}\tilde{g}_i = 0. \qquad (2.11)$$

Since the map $x(\xi)$ is assumed to be smooth, the vector fields $\partial x/\partial \xi_{ik}$ are coordinate vector fields of a single chart and therefore commute; by (2.10) this implies (i). Condition (ii) holds in view of (2.11), and (iii) is necessary since the vector fields $\{\operatorname{ad}_{\tilde{f}}^k\tilde{g}_j : j = 1, \dots, m,\ k = 0, \dots, \kappa_j - 1\}$ are, up to sign, the columns of the matrix $\frac{\partial x}{\partial \xi}$, and this matrix is required to be nonsingular on a neighborhood of 0.

To show (B) $\Rightarrow$ (C), we require the following lemma.

Lemma 2.2 Suppose either $\bar{S}_k$ or $\bar{\tilde{S}}_k$ is involutive for $0 \le k \le n-1$; then $\bar{S}_k = \bar{\tilde{S}}_k$ for $0 \le k \le n$. In addition, if either $\bar{S}_k$ or $\bar{\tilde{S}}_k$ is nonsingular on a neighborhood of 0, then both are.

Proof: Suppose $\bar{S}_k$ is involutive for $0 \le k \le n-1$. Let $G = (g_1, \dots, g_m)$, $\tilde{G} = (\tilde{g}_1, \dots, \tilde{g}_m)$, $Q = (q_{ij})$; then

$$\tilde{G} = GQ, \quad \text{i.e.,} \quad \tilde{g}_j = \sum_{i=1}^{m} g_iq_{ij}.$$

This implies $\bar{\tilde{S}}_0 \subset \bar{S}_0$. Since $Q$ is nonsingular on a neighborhood of 0, we can write $G = \tilde{G}\tilde{Q}$, where $\tilde{Q} = Q^{-1}$; it is obvious that the reverse inclusion holds. By induction, suppose $\bar{S}_k = \bar{\tilde{S}}_k$ for $0 \le k \le l < n$. If $v \in \bar{\tilde{S}}_{l+1}$, then

$$v = \sum_{i=1}^{m} \alpha_i\operatorname{ad}_{\tilde{f}}^{l+1}\tilde{g}_i + g^l,$$

where $g^l$ is a vector field in $\bar{\tilde{S}}_l$ and each $\alpha_i$ is a real-valued $C^\infty$ function defined on a neighborhood of 0. Also,

$$\operatorname{ad}_{\tilde{f}}^{l+1}\tilde{g}_i = \Big[f + \sum_{j=1}^{m} g_jw_j, \operatorname{ad}_{\tilde{f}}^{l}\tilde{g}_i\Big] = [f, \operatorname{ad}_{\tilde{f}}^{l}\tilde{g}_i] + \sum_{j=1}^{m}[g_jw_j, \operatorname{ad}_{\tilde{f}}^{l}\tilde{g}_i].$$

Since $\operatorname{ad}_{\tilde{f}}^{l}\tilde{g}_i \in \bar{\tilde{S}}_l$ and $\bar{\tilde{S}}_l = \bar{S}_l$, we have $[f, \operatorname{ad}_{\tilde{f}}^{l}\tilde{g}_i] \in \bar{S}_{l+1}$. Since $g_j \in \bar{S}_l$, $j = 1, \dots, m$, and $\bar{S}_l$ is involutive, $[g_jw_j, \operatorname{ad}_{\tilde{f}}^{l}\tilde{g}_i] \in \bar{S}_l$, so $\operatorname{ad}_{\tilde{f}}^{l+1}\tilde{g}_i \in \bar{S}_{l+1}$, $i = 1, \dots, m$. This shows that $v \in \bar{S}_{l+1}$, and $\bar{\tilde{S}}_{l+1} \subset \bar{S}_{l+1}$. To show the reverse inclusion, let $v \in \bar{S}_{l+1}$; then

$$v = \sum_{i=1}^{m} \gamma_i\operatorname{ad}_f^{l+1}g_i + g^l,$$

where $g^l \in \bar{S}_l$ and $\gamma_i \in C(U)$ for some neighborhood $U$ of 0. But

$$\operatorname{ad}_f^{l+1}g_i = [f, \operatorname{ad}_f^lg_i] = \Big[\tilde{f} - \sum_{j=1}^{m} g_jw_j, \operatorname{ad}_f^lg_i\Big] = [\tilde{f}, \operatorname{ad}_f^lg_i] - \sum_{j=1}^{m}[g_jw_j, \operatorname{ad}_f^lg_i].$$

A similar argument as before shows that $\operatorname{ad}_f^{l+1}g_i \in \bar{\tilde{S}}_{l+1}$, $i = 1, \dots, m$, so $v \in \bar{\tilde{S}}_{l+1}$, and $\bar{S}_{l+1} \subset \bar{\tilde{S}}_{l+1}$. Thus $\bar{S}_{l+1} = \bar{\tilde{S}}_{l+1}$, and $\bar{S}_k = \bar{\tilde{S}}_k$ for $0 \le k \le n$ by induction. If $\bar{\tilde{S}}_k$ is involutive for $0 \le k \le n-1$, the same result follows by a similar proof, and the nonsingularity claim is then immediate since the two distributions coincide. QED

Now we continue the proof that (B) implies (C) in proposition 2.1. Conditions (B-i) and (B-ii) imply that the distribution $\bar{\tilde{S}}_j$ is involutive for $j = 0, \dots, \kappa_1 - 1$, because the Lie bracket of any two vector fields in $\tilde{S}_j$ is zero. Hence, by lemma 2.2, $\bar{S}_j = \bar{\tilde{S}}_j$ and is involutive for $j = 0, \dots, \kappa_1 - 1$. Furthermore, condition (B-iii) says that the $n$ vector fields in $\tilde{S}_{\kappa_1-1}$ are linearly independent on a neighborhood of 0; hence $\bar{S}_j$ and $\bar{\tilde{S}}_j$ are nonsingular on a neighborhood of 0. This proves conditions (C-i) and (C-ii). Condition (C-iii) holds since $\bar{S}_{\kappa_1-1} = \bar{\tilde{S}}_{\kappa_1-1}$ and the latter has dimension $n$; clearly $\dim \bar{S}_{n-1} = n$, since $\bar{S}_i \subset \bar{S}_{i+1}$ for all $i \ge 0$ and $\kappa_1 \le n$.

(C) $\Rightarrow$ (A). The proof was provided by Jakubczyk and Respondek [10]. We have made few modifications, except for some changes of notation and some added details. The following lemma was proved in [10].

Lemma 2.3 Let $\bar{S}_0 \subset \bar{S}_1 \subset \dots \subset \bar{S}_k$ denote a sequence of involutive $C^\infty$ distributions on an $n$-manifold $M$, having dimensions $s_0 \le s_1 \le \dots \le s_k$ respectively. Then around any point $x_0 \in M$ there exists a coordinate system $(x, U)$ such that the integral manifolds of $\bar{S}_j$ around 0 are of the form

$$M_j = \{p \in U : x_i(p) = c_i,\ i = s_j + 1, \dots, n\}, \quad j = 0, \dots, k,$$

where the $c_i$ are constants.

Hypotheses (C-i) and (C-ii) imply those of lemma 2.3. Hence there exists a coordinate system such that the integral manifolds of $\bar{S}_j$ are as described in the lemma.
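Involutivity hypotheses of this kind can be tested symbolically on concrete examples. The following sketch (my own, with hypothetical vector fields on $\mathbb{R}^3$) checks whether the distribution spanned by two vector fields is involutive, by testing whether their Lie bracket raises the generic rank of the spanning set:

```python
import sympy as sp

# Hedged sketch (hypothetical vector fields): test involutivity of the
# distribution sp{g1, g2} on R^3 by checking whether the Lie bracket
# [g1, g2] = (dg2/dx) g1 - (dg1/dx) g2 stays in the span. Note that
# sympy's symbolic rank() is a generic rank, so this is a test at a
# generic point, not a pointwise verification.
x1, x2, x3 = sp.symbols('x1 x2 x3')
X = sp.Matrix([x1, x2, x3])

def bracket(f, g):
    return g.jacobian(X) * f - f.jacobian(X) * g

def is_involutive(g1, g2):
    br = bracket(g1, g2)
    return sp.Matrix.hstack(g1, g2, br).rank() == sp.Matrix.hstack(g1, g2).rank()

g1 = sp.Matrix([1, 0, 0])
g2 = sp.Matrix([0, 1, x1])   # [g1, g2] = (0, 0, 1), not in the span
h2 = sp.Matrix([0, 1, x2])   # [g1, h2] = 0, so sp{g1, h2} is involutive
print(is_involutive(g1, g2), is_involutive(g1, h2))
```

The first pair is the standard non-involutive example (its bracket generates a new direction), while the second pair commutes; this is exactly the dichotomy that conditions (C-i) and (D-i) below probe.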
Here $k$ is the smallest integer such that $\dim \bar{S}_k = n$ on a neighborhood of 0. In the new coordinates, vector fields in $\bar{S}_j$ have the last $(n - s_j)$ components equal to zero. For convenience, we use the same variable $x$ for the new coordinates and the same $f$ expressed in these new coordinates. We define $r_0 = s_0$ and $r_i = s_i - s_{i-1}$ for $i \ge 1$, and let

$$f = \begin{pmatrix} f^0 \\ f^1 \\ \vdots \\ f^k \end{pmatrix}, \qquad x = \begin{pmatrix} x^0 \\ x^1 \\ \vdots \\ x^k \end{pmatrix},$$

where $f^i$ and $x^i$ have $r_i$ components. In the new coordinates, system (2.1) has the form

$$\dot{x}^0 = f^0(x) + \sum_{i=1}^{m} g_i^0(x)u_i,$$
$$\dot{x}^1 = f^1(x), \qquad (2.12)$$
$$\vdots$$
$$\dot{x}^k = f^k(x),$$

where $f^i(0) = 0$ for $i = 0, \dots, k$. We will show that

$$\frac{\partial f^j}{\partial x^i} = 0, \quad i = 0, \dots, j-2, \qquad (2.13)$$

and

$$\operatorname{rank}\frac{\partial f^j}{\partial x^{j-1}} = r_j, \quad j = 1, \dots, k. \qquad (2.14)$$

Since $[f, g] = \frac{\partial g}{\partial x}f - \frac{\partial f}{\partial x}g$, write $g = (g^0, \dots, g^k)^T$ in blocks as above. For $j \ge 2$, if $g \in \bar{S}_{j-2}$, then $g^i = 0$ for $i \ge j-1$, and $[f, g] \in \bar{S}_{j-1}$, so the $j$th block of $[f, g]$ vanishes:

$$0 = \sum_{i=0}^{j-2}\frac{\partial g^j}{\partial x^i}f^i - \sum_{i=0}^{j-2}\frac{\partial f^j}{\partial x^i}g^i.$$

Since $g^j = 0$, the first sum is zero; as the blocks $g^i$, $i = 0, \dots, j-2$, can be chosen arbitrarily, this gives

$$\frac{\partial f^j}{\partial x^i} = 0, \quad i = 0, \dots, j-2.$$

For $j \ge 1$, if $g \in \bar{S}_{j-1}$, then $g^i = 0$ for $i \ge j$, and $[f, g] \in \bar{S}_j$, the $j$th block of $[f, g]$ being $-\frac{\partial f^j}{\partial x^{j-1}}g^{j-1}$. This implies $\operatorname{rank}\frac{\partial f^j}{\partial x^{j-1}} = r_j$, since there are $r_j$ vector fields in $\bar{S}_j$ that are linearly independent of the vector fields in $\bar{S}_{j-1}$.

We will show that system (2.12) can be transformed into the form

$$\dot{x}^0 = v, \qquad \dot{x}^i = \hat{x}^{i-1}, \quad i = 1, \dots, k, \qquad (2.15)$$

where $\hat{x}^{i-1}$ consists of the first $r_i$ variables in $x^{i-1}$. We see that system (2.15) is a permuted form of system (BCF). First, consider the transformation

$$\tilde{x}^{k-1} = \begin{pmatrix} f^k(x) \\ \hat{x}^{k-1} \end{pmatrix}, \qquad \tilde{x}^i = x^i, \quad i \ne k-1, \qquad (2.16)$$

where $\hat{x}^{k-1}$ is an $(r_{k-1} - r_k)$-vector consisting of variables in $x^{k-1}$ chosen so that, on a neighborhood of 0, the matrix $\frac{\partial \tilde{x}^{k-1}}{\partial x^{k-1}}$ is nonsingular. This is possible by property (2.14). Moreover, the transformation $x \mapsto \tilde{x}$ is a local diffeomorphism around 0, since the Jacobian matrix $\frac{\partial \tilde{x}}{\partial x}$ is nonsingular around 0. Furthermore, writing $\tilde{f}^{k-1}$ for the new dynamics of $\tilde{x}^{k-1}$,

$$\tilde{f}^{k-1} = \frac{\partial \tilde{x}^{k-1}}{\partial x^{k-1}}f^{k-1}, \qquad \text{so} \qquad \frac{\partial \tilde{f}^{k-1}}{\partial x^{k-2}} = \frac{\partial \tilde{x}^{k-1}}{\partial x^{k-1}}\frac{\partial f^{k-1}}{\partial x^{k-2}},$$

because $\tilde{x}^{k-1}$ given by (2.16) is independent of $x^{k-2}$, by (2.13). Since $\frac{\partial \tilde{x}^{k-1}}{\partial x^{k-1}}$ is an $(r_{k-1} \times r_{k-1})$ nonsingular matrix and $\frac{\partial f^{k-1}}{\partial x^{k-2}}$ has rank $r_{k-1}$, we have $\operatorname{rank}\frac{\partial \tilde{f}^{k-1}}{\partial x^{k-2}} = r_{k-1}$. Moreover, using (2.13), we find that $\frac{\partial \tilde{f}^{k-1}}{\partial x^i} = 0$ for $i = 0, \dots, k-3$. Thus properties (2.13) and (2.14) are preserved. Replacing $\tilde{x}$ by $x$ again, the system has the form

$$\dot{x}^0 = f^0(x) + G^0(x)u, \qquad \dot{x}^1 = f^1(x), \quad \dots, \quad \dot{x}^{k-1} = f^{k-1}(x), \qquad \dot{x}^k = x_1^{k-1},$$

where $x_1^{k-1}$ denotes the first $r_k$ components of $x^{k-1}$, and $G^0$ is an $m \times m$ matrix whose $i$th column is $g_i^0$. If we repeat this process for the remaining levels, the transformed system is

$$\dot{x}^0 = f^0(x) + G^0(x)u, \qquad \dot{x}^i = x_1^{i-1}, \quad i = 1, \dots, k,$$

where $x_1^i$ denotes the first $r_{i+1}$ components of $x^i$, for $i = 0, \dots, k-1$. Now let $u = (G^0)^{-1}(v - f^0)$; the first equation becomes $\dot{x}^0 = v$. A further state coordinate change $x \mapsto Px$, where $P$ is an appropriate permutation matrix, will transform this last system into the system (BCF) described earlier. This completes the proof of proposition 2.1. QED

Remark. Condition (B-i) can be simplified by the following lemma.

Lemma 2.4 Under hypothesis (B-ii), i.e., $\operatorname{ad}_{\tilde{f}}^{\kappa_i}\tilde{g}_i = 0$, $i = 1, \dots, m$, the following statements are equivalent:

(i) $[\operatorname{ad}_{\tilde{f}}^k\tilde{g}_j, \operatorname{ad}_{\tilde{f}}^l\tilde{g}_i] = 0$, $1 \le i, j \le m$, $k = 0, \dots, \kappa_j - 1$, $l = 0, \dots, \kappa_i - 1$.
(i') $[\operatorname{ad}_{\tilde{f}}^k\tilde{g}_j, \tilde{g}_i] = 0$, $1 \le i, j \le m$, $k = 0, \dots, \kappa_j - 1$.

Proof: That (i) implies (i') is clear, as condition (i') is the special case $l = 0$ of (i). For the converse, we prove (i) by induction on $l$; (i) is true for $l = 0$ by (i'). Assume (i) holds for some $l \le \kappa_i - 2$ and for $k = 0, \dots, \kappa_j - 1$. Then, using the Jacobi identity,

$$[\operatorname{ad}_{\tilde{f}}^k\tilde{g}_j, \operatorname{ad}_{\tilde{f}}^{l+1}\tilde{g}_i] = \big[\tilde{f}, [\operatorname{ad}_{\tilde{f}}^k\tilde{g}_j, \operatorname{ad}_{\tilde{f}}^l\tilde{g}_i]\big] - [\operatorname{ad}_{\tilde{f}}^{k+1}\tilde{g}_j, \operatorname{ad}_{\tilde{f}}^l\tilde{g}_i].$$

By assumption, the first term is zero, and if $k + 1 \le \kappa_j - 1$ or $k + 1 = \kappa_j$, then the second term is zero by the inductive hypothesis or by condition (ii) respectively. Hence (i) holds for $l + 1$, and therefore holds for $l = 0, \dots, \kappa_i - 1$ by induction. QED

Consequently, condition (B) can be replaced by condition (B') consisting of (i') in lemma 2.4, (B-ii), and (B-iii).

Corollary 2.5 Conditions (A), (B'), and (C) are equivalent.

Remark. Since $\bar{S}_j = \bar{\tilde{S}}_j$ for $j \ge 0$, we have $r_i = \dim \bar{S}_i - \dim \bar{S}_{i-1} = \dim \bar{\tilde{S}}_i - \dim \bar{\tilde{S}}_{i-1}$ for $i \ge 1$, and $r_0 = \dim \bar{S}_0 = \dim \bar{\tilde{S}}_0$.
It can be checked that the sets of integers $\{r_i\}_{i=0}^{k}$ and $\{\kappa_j\}_{j=1}^{m}$ are related as follows:

$$r_i = \text{the number of } \kappa_j\text{'s that are} \ge i+1, \qquad \kappa_j = \text{the number of } r_i\text{'s that are} \ge j. \qquad (2.17)$$

To show this, construct the array of vector fields in $\tilde{S}_{\kappa_1-1}$ shown below:

$$\begin{array}{cccc}
X_1^0 & X_2^0 & \cdots & X_m^0 \\
X_1^1 & X_2^1 & \cdots & X_m^1 \\
\vdots & \vdots & & \vdots \\
X_1^{\kappa_2-1} & X_2^{\kappa_2-1} & Y & Y \\
\vdots & Y & Y & Y \\
X_1^{\kappa_1-1} & Y & Y & Y
\end{array} \qquad (2.18)$$

where $X_i^j$ denotes $\operatorname{ad}_{\tilde{f}}^j\tilde{g}_i$ and $Y$ corresponds to vector fields that are linearly dependent on those to the left or above. The number of $X$ entries in the $i$th row is $r_{i-1}$, for $i \ge 1$, and the number of $X$ entries in the $j$th column is $\kappa_j$. It is easy to check that the $r_i$'s and $\kappa_j$'s are related as claimed. Moreover, the $r_i$'s satisfy

$$r_{\kappa_1-1} \ge 1, \qquad r_i = 0 \ \text{ for } i \ge \kappa_1, \qquad \sum_{i=0}^{\kappa_1-1} r_i = n.$$

Incidentally, Isidori, and Hunt, Su, and Meyer gave similar but weaker conditions. We will state and explain them below. First, Isidori's conditions [9, p. 237] are as follows:

(D) (i) the distribution $\bar{S}_i$ is involutive for all $i \ge 0$ such that $m_{k-i-1} \ne 0$, where the integers $m_0, \dots, m_k$ are defined by
$$m_0 = r_k, \quad m_0 + m_1 = r_{k-1}, \quad \dots, \quad m_0 + m_1 + \dots + m_k = r_0;$$
(ii) $\bar{S}_i$ is nonsingular on a neighborhood of 0 for all $i \ge 0$;
(iii) $\dim \bar{S}_k(0) = n$.

Next, Hunt, Su, and Meyer's conditions [7] are as follows:

(E) (i) the distribution $\bar{S}_{\kappa_i-2}$ is involutive for $i = 1, \dots, m$;
(ii) $\bar{S}_{\kappa_i-2} = \operatorname{sp}\{v : v \in \bar{S}_{\kappa_i-2} \cap S\}$, for $i = 1, \dots, m$, where $S = \{\operatorname{ad}_f^jg_i : i = 1, \dots, m,\ j = 0, \dots, \kappa_i-1\}$;
(iii) $\bar{S} = \operatorname{sp}\{v \in S\}$ spans an $n$-dimensional space on a neighborhood of 0.

Remark. A reordering of the vector fields $g_1, \dots, g_m$ may be required in condition (E). It can be seen that conditions (ii) and (iii) in (D) and (C) are the same and are equivalent to those in (E). We will show that (D-i) and (E-i) are also equivalent. The integers $m_i$ defined in (D) can be expressed as

$$m_i = r_{k-i} - r_{k-i+1}, \quad i = 1, \dots, k,$$

or

$$m_{k-l-1} = r_{l+1} - r_{l+2}, \quad l = -1, \dots, k-2,$$

and $m_0 = r_k$. So $m_{k-l-1} \ne 0$ iff $r_{l+1} - r_{l+2} > 0$.
If the $\kappa_i$ and $r_j$ are as defined in (2.17) and the vector fields are arranged as in (2.18), we see that in this case

$$\kappa_i = l + 2, \quad \text{for } i = r_{l+2} + 1, \dots, r_{l+1}, \qquad \text{or} \qquad l = \kappa_i - 2, \quad \text{for } i = r_{l+2} + 1, \dots, r_{l+1},$$

so the distributions $\bar{S}_l$ that are required to be involutive in (D-i) have the form $\bar{S}_{\kappa_i-2}$ for $i = 1, \dots, m$, which is condition (E-i). We note that in (E-i) the same distribution will be enumerated twice if two $\kappa_i$ are the same. Hence, Isidori's conditions are equivalent to those of Hunt, Su, and Meyer. The equivalence of conditions (C-i) and (D-i) is proved in the following lemma.

Lemma 2.6 If $\bar{S}_i$ is nonsingular on a neighborhood $U$ of 0 for $i = 0, \dots, \kappa_1 - 1$, then the following conditions are equivalent:

(C-i) $\bar{S}_i$ is involutive for $i = 0, \dots, \kappa_1 - 2$.
(D-i) $\bar{S}_{\kappa_i-2}$ is involutive for $i = 1, \dots, m$.

Proof: Clearly, (C-i) implies (D-i). To prove the converse, it suffices to consider the case with two distinct $\kappa_i$; hence assume that

$$\kappa_1 = \dots = \kappa_l > \kappa_{l+1} = \dots = \kappa_m,$$

for some $l$ between 1 and $m$. With no loss of generality, assume that the vector fields are arranged as in (2.18); by the nonsingularity assumption on $\bar{S}_i$ on $U$ for $i = 0, \dots, \kappa_1 - 1$, the $Y$'s correspond to the vector fields that are linearly dependent, on $U$, on those to the left and above. We can assume that $\kappa_m > 2$, since if $\kappa_m \le 2$ the second part of the following proof is adequate. So condition (D-i) says that $\bar{S}_{\kappa_m-2}$ and $\bar{S}_{\kappa_1-2}$ are involutive. Suppose $\bar{S}_{\kappa_m-3}$ is not involutive; then there exist vector fields $v_i, v_j \in \bar{S}_{\kappa_m-3}$ such that $[v_i, v_j] \notin \bar{S}_{\kappa_m-3}$. Since $\bar{S}_{\kappa_m-2}$ is involutive,

$$[v_i, v_j] = \sum_{p=1}^{m} \alpha_p\operatorname{ad}_f^{\kappa_m-2}g_p + g^{\kappa_m-3}, \qquad (2.19)$$

where $\alpha_p \in C(U)$, not all $\alpha_p$ equal to zero, and, as before, $g^i$ denotes a vector field in $\bar{S}_i$. Using the Jacobi identity,

$$[f, [v_i, v_j]] = [[f, v_i], v_j] + [v_i, [f, v_j]],$$

where $[f, v_i], [f, v_j] \in \bar{S}_{\kappa_m-2}$ since $v_i, v_j \in \bar{S}_{\kappa_m-3}$; we then have $[[f, v_i], v_j], [v_i, [f, v_j]] \in \bar{S}_{\kappa_m-2}$ by the involutivity of $\bar{S}_{\kappa_m-2}$. Hence

$$[f, [v_i, v_j]] \in \bar{S}_{\kappa_m-2}. \qquad (2.20)$$

But from (2.19),

$$[f, [v_i, v_j]] = \Big[f, \sum_{p=1}^{m}\alpha_p\operatorname{ad}_f^{\kappa_m-2}g_p + g^{\kappa_m-3}\Big] = \sum_{p=1}^{m}\alpha_p\operatorname{ad}_f^{\kappa_m-1}g_p + \sum_{p=1}^{m}L_f(\alpha_p)\operatorname{ad}_f^{\kappa_m-2}g_p + [f, g^{\kappa_m-3}].$$

Since at least one $\alpha_p$ is not zero and $\operatorname{ad}_f^{\kappa_m-1}g_1, \dots, \operatorname{ad}_f^{\kappa_m-1}g_m$ are linearly independent of the vector fields in $\bar{S}_{\kappa_m-2}$, we conclude $[f, [v_i, v_j]] \notin \bar{S}_{\kappa_m-2}$. This contradicts (2.20). Hence $\bar{S}_{\kappa_m-3}$ is involutive, and by induction $\bar{S}_i$ is involutive for all $i \le \kappa_m - 3$.

Now suppose $\kappa_1 - \kappa_m \ge 2$, so that $\kappa_1 - 3 \ge \kappa_m - 1$; otherwise, if $\kappa_1 - \kappa_m = 1$, then $\kappa_1 - 2 = \kappa_m - 1$ and $\bar{S}_{\kappa_1-2}$ is involutive by hypothesis, so no proof is needed. Suppose $\bar{S}_{\kappa_1-3}$ is not involutive; then there exist vector fields $v_i, v_j \in \bar{S}_{\kappa_1-3}$, not both of which lie in $\bar{S}_{\kappa_m-2}$, such that $[v_i, v_j] \notin \bar{S}_{\kappa_1-3}$. But $\bar{S}_{\kappa_1-2}$ is involutive by hypothesis, so

$$[v_i, v_j] = \sum_{p=1}^{l} \gamma_p\operatorname{ad}_f^{\kappa_1-2}g_p + g^{\kappa_1-3},$$

where $\gamma_p \in C(U)$ and not all $\gamma_p$ equal to zero. Taking the Lie derivative of $[v_i, v_j]$ with respect to $f$ as before, and using the fact that $\operatorname{ad}_f^{\kappa_1-1}g_1, \dots, \operatorname{ad}_f^{\kappa_1-1}g_l$ are linearly independent of the vector fields in $\bar{S}_{\kappa_1-2}$, we find that this leads to a contradiction. Hence $\bar{S}_{\kappa_1-3}$ is involutive, and, repeating the same argument for $\bar{S}_{\kappa_1-4}$ and so on, we find that $\bar{S}_i$ is involutive for $i = \kappa_m - 1, \dots, \kappa_1 - 3$. Hence $\bar{S}_i$ is involutive for $i = 0, \dots, \kappa_1 - 2$. QED

Remark. Looking at the proof of lemma 2.6, one might think that condition (D-i) can be replaced by the condition that $\bar{S}_{\kappa_1-2}$ be involutive. But this is not true, since involutivity of $\bar{S}_{\kappa_m-1}$ does not imply the involutivity of $\bar{S}_{\kappa_m-2}$. To show this, consider a contrapositive proof as before and assume that $\bar{S}_{\kappa_m-2}$ is not involutive, given that $\bar{S}_{\kappa_m-1}$ is. So there exist $v_i, v_j \in \bar{S}_{\kappa_m-2}$ such that

$$[v_i, v_j] = \sum_{p=1}^{m}\alpha_p\operatorname{ad}_f^{\kappa_m-1}g_p + g^{\kappa_m-2},$$

and not all $\alpha_p$ equal to zero. But if $\alpha_p = 0$ for $p = 1, \dots, l$, then

$$[v_i, v_j] = \sum_{p=l+1}^{m}\alpha_p\operatorname{ad}_f^{\kappa_m-1}g_p + g^{\kappa_m-2}.$$

Since $\operatorname{ad}_f^{\kappa_m}g_p \in \operatorname{sp}\{\operatorname{ad}_f^{\kappa_m}g_j : j = 1, \dots, l\} + \bar{S}_{\kappa_m-1}$ for $p = l+1, \dots, m$, if $\operatorname{sp}\{\operatorname{ad}_f^{\kappa_m}g_p : p = l+1, \dots, m\} \subset \bar{S}_{\kappa_m-1}$, then $[f, [v_i, v_j]] \in \bar{S}_{\kappa_m-1}$, and this will not lead to a contradiction.

In summary, we have shown that (B), (B'), (C), (D), and (E) are equivalent conditions under which the nonlinear system (2.1) can be linearized to a system in Brunovsky canonical form. Incidentally, the way (D) and (E) were proved is based on the study of the map $\xi = \xi(x)$, so that in the $\xi$ coordinates $\dot{\xi} = \bar{A}\xi + \bar{B}v$. Since this provides a different proof of conditions (C), (D), or (E), we will derive the necessary conditions that $\xi(x)$ has to satisfy. First, differentiating $\xi(x)$ with respect to $t$, we have

$$\frac{\partial \xi}{\partial x}\dot{x} = \bar{A}\xi + \bar{B}v, \quad \text{or} \quad \frac{\partial \xi}{\partial x}\Big(f + \sum_{i=1}^{m}g_iu_i\Big) = \bar{A}\xi + \bar{B}v. \qquad (2.21)$$

Recalling the form of $\bar{A}$ and $\bar{B}$, and since $\xi(x)$ does not depend on $u_i$, $i = 1, \dots, m$, we have

$$L_f(\xi_l) = \xi_{l+1}, \quad l = 1, \dots, \sigma_1 - 1, \sigma_1 + 1, \dots, \sigma_2 - 1, \sigma_2 + 1, \dots, n-1, \qquad (2.22)$$

and

$$\langle d\xi_{\sigma_j}, f\rangle + \sum_{i=1}^{m} u_i\langle d\xi_{\sigma_j}, g_i\rangle = v_j, \quad j = 1, \dots, m, \qquad (2.23)$$

$$\langle d\xi_l, g_i\rangle = 0, \quad i = 1, \dots, m, \qquad (2.24)$$

for $l = 1, \dots, \sigma_1 - 1, \sigma_1 + 1, \dots, \sigma_2 - 1, \sigma_2 + 1, \dots, n-1$. For convenience, we use double-subscript notation: for $i = 1, \dots, m$, let

$$\xi_{i\kappa_i} = \xi_{\sigma_i}, \quad \xi_{i,\kappa_i-1} = \xi_{\sigma_i-1}, \quad \dots, \quad \xi_{i1} = \xi_{\sigma_{i-1}+1};$$

then (2.22), (2.23), and (2.24) become

$$L_f^{k-1}(\xi_{j1}) = \xi_{jk}, \quad j = 1, \dots, m,\ k = 2, \dots, \kappa_j, \qquad (2.25)$$

$$\langle d\xi_{j\kappa_j}, f\rangle + \sum_{i=1}^{m} u_i\langle d\xi_{j\kappa_j}, g_i\rangle = v_j, \quad 1 \le i, j \le m, \qquad (2.26)$$

$$\langle d\xi_{jk}, g_i\rangle = 0, \quad 1 \le i, j \le m,\ k = 1, \dots, \kappa_j - 1. \qquad (2.27)$$

Substituting (2.25) into (2.26) and (2.27), we have

$$\langle dL_f^{\kappa_j-1}(\xi_{j1}), f\rangle + \sum_{i=1}^{m} u_i\langle dL_f^{\kappa_j-1}(\xi_{j1}), g_i\rangle = v_j, \quad 1 \le i, j \le m, \qquad (2.28)$$

and

$$\langle dL_f^k(\xi_{j1}), g_i\rangle = 0, \quad 1 \le i, j \le m,\ k = 0, \dots, \kappa_j - 2. \qquad (2.29)$$

Therefore, we need to find $m$ functions $\xi_{11}, \xi_{21}, \dots, \xi_{m1}$ such that (2.29) is satisfied. Moreover, the matrix $M^0 = (m_{ij}^0)$, where $m_{ij}^0 = \langle dL_f^{\kappa_i-1}(\xi_{i1}), g_j\rangle$, is required to be nonsingular on a neighborhood of 0 in order that $u_1, \dots, u_m$ be solvable from (2.28). These conditions are stated as (a) in the following lemma.

Lemma 2.7 The following statements are equivalent:

(a) (i) $\langle dL_f^k(\xi_{i1}), g_j\rangle = 0$, $1 \le i, j \le m$, $k = 0, \dots, \kappa_i - 2$.
But from (2.19),

    [f, [v_i, v_j]] = [f, Σ_{k=1}^m α_k ad_f^{κ_m − 2} g_k + g^{κ_m − 3}]
                    = Σ_{k=1}^m α_k ad_f^{κ_m − 1} g_k + Σ_{k=1}^m L_f(α_k) ad_f^{κ_m − 2} g_k + [f, g^{κ_m − 3}].

Since at least one α_k is not zero, and ad_f^{κ_m − 1} g_1, ..., ad_f^{κ_m − 1} g_m are linearly independent of vector fields in S_{κ_m − 2}, therefore

    [f, [v_i, v_j]] ∉ S_{κ_m − 2}.

This contradicts (2.20). Hence S_{κ_m − 3} is involutive. By induction, S_i is involutive for all i ≤ κ_m − 3.

Now suppose κ_1 − κ_m ≥ 2, so that κ_1 − 3 ≥ κ_m − 1; otherwise, if κ_1 − κ_m = 1, then κ_1 − 2 = κ_m − 1, and S_{κ_1 − 2} is involutive by hypothesis, hence no proof is needed. Suppose S_{κ_1 − 3} is not involutive; then there exist vector fields v_i, v_j ∈ S_{κ_1 − 3}, not both in S_{κ_m − 2}, such that [v_i, v_j] ∉ S_{κ_1 − 3}. But S_{κ_1 − 2} is involutive by hypothesis, so

    [v_i, v_j] = Σ_{k=1}^l γ_k ad_f^{κ_1 − 2} g_k + g^{κ_1 − 3},

where γ_k ∈ C^∞(U) and not all γ_k equal to zero. Taking the Lie derivative of [v_i, v_j] with respect to f as before and using the fact that ad_f^{κ_1 − 1} g_1, ..., ad_f^{κ_1 − 1} g_l are linearly independent of vector fields in S_{κ_1 − 2}, we find that this leads to a contradiction. Hence S_{κ_1 − 3} is involutive, and, if we repeat the same argument for S_{κ_1 − 4} and so on, we find that S_i is involutive for i = κ_m − 1, ..., κ_1 − 3. Hence S_i is involutive for i = 0, ..., κ_1 − 2. QED.

Remark. Looking at the proof of lemma 2.6, one might think that condition (D-i) can be replaced by the condition that S_{κ_1 − 2} alone be involutive. But this is not true, since involutivity of S_{κ_m − 1} does not imply the involutivity of S_{κ_m − 2}. To show this, consider a contrapositive argument as before and assume that S_{κ_m − 2} is not involutive while S_{κ_m − 1} is. So there exist v_i, v_j ∈ S_{κ_m − 2} such that

    [v_i, v_j] = Σ_{k=1}^m α_k ad_f^{κ_m − 1} g_k + g^{κ_m − 2},

and not all α_k equal to zero. But if α_k = 0 for k = 1, ..., l, then

    [f, [v_i, v_j]] = Σ_{k=l+1}^m α_k ad_f^{κ_m} g_k + (terms in S_{κ_m − 1}).

Since

    ad_f^{κ_m} g_k ∈ sp{ad_f^{κ_m} g_j : j = 1, ..., l} + S_{κ_m − 1}, for k = l + 1, ..., m,

if sp{ad_f^{κ_m} g_k : k = l + 1, ..., m} ⊂ S_{κ_m − 1}, then [f, [v_i, v_j]] ∈ S_{κ_m − 1}, and this will not lead to a contradiction.
In summary, we have shown that (B), (B'), (C), (D), and (E) are equivalent conditions under which the nonlinear system (2.1) can be linearized to a system in Brunovsky canonical form. Incidentally, the way that (D) and (E) were proved is based on the study of the map ξ = ξ(x), chosen so that in the ξ coordinates ξ̇ = Aξ + Bv. Since this provides a different proof of conditions (C), (D), or (E), we will derive the necessary conditions that ξ(x) has to satisfy. First, differentiating ξ(x) with respect to t, we have

    (∂ξ/∂x) ẋ = Aξ + Bv,

or

    (∂ξ/∂x) (f + Σ_{i=1}^m g_i u_i) = Aξ + Bv.        (2.21)

Recalling the form of A and B, and since ξ(x) does not depend on u_i, i = 1, ..., m, we have

    L_f(ξ_l) = ξ_{l+1}, for l = 1, ..., σ_1 − 1, σ_1 + 1, ..., σ_2 − 1, σ_2 + 1, ..., n − 1,        (2.22)

and

    ⟨dξ_{σ_j}, f⟩ + Σ_{i=1}^m u_i ⟨dξ_{σ_j}, g_i⟩ = v_j, for j = 1, ..., m,        (2.23)

    ⟨dξ_l, g_i⟩ = 0, for i = 1, ..., m,        (2.24)

and for l = 1, ..., σ_1 − 1, σ_1 + 1, ..., σ_2 − 1, σ_2 + 1, ..., n − 1.

For convenience, we use double subscript notation. For i = 1, ..., m, let

    ξ_{iκ_i} = ξ_{σ_i},  ξ_{iκ_i−1} = ξ_{σ_i−1},  ...,  ξ_{i1} = ξ_{σ_{i−1}+1};

then (2.22), (2.23), and (2.24) become

    L_f^{k−1}(ξ_{j1}) = ξ_{jk}, j = 1, ..., m, k = 2, ..., κ_j,        (2.25)

    ⟨dξ_{jκ_j}, f⟩ + Σ_{i=1}^m u_i ⟨dξ_{jκ_j}, g_i⟩ = v_j, 1 ≤ i, j ≤ m,        (2.26)

    ⟨dξ_{jk}, g_i⟩ = 0, 1 ≤ i, j ≤ m, k = 1, ..., κ_j − 1.        (2.27)

Substituting (2.25) into (2.26) and (2.27), we have

    ⟨dL_f^{κ_j − 1}(ξ_{j1}), f⟩ + Σ_{i=1}^m u_i ⟨dL_f^{κ_j − 1}(ξ_{j1}), g_i⟩ = v_j, 1 ≤ i, j ≤ m,        (2.28)

and

    ⟨dL_f^k(ξ_{j1}), g_i⟩ = 0, 1 ≤ i, j ≤ m, k = 0, ..., κ_j − 2.        (2.29)

Therefore, we need to find m functions ξ_{11}, ξ_{21}, ..., ξ_{m1} such that (2.29) is satisfied. Moreover, the matrix M° = (m°_{ij}), where m°_{ij} = ⟨dL_f^{κ_i − 1}(ξ_{i1}), g_j⟩, is required to be nonsingular on a neighborhood of 0 in order that u_1, ..., u_m be solvable from (2.28). These conditions are stated as (a) in the following lemma, in which we write ξ_i for ξ_{i1}.

Lemma 2.7 The following statements are equivalent:
(a) (i) ⟨dL_f^k(ξ_i), g_j⟩ = 0, 1 ≤ i, j ≤ m, k = 0, ..., κ_i − 2.
    (ii) the m × m matrix M° = (m°_{ij}) is nonsingular on a neighborhood U of 0, where m°_{ij} = ⟨dL_f^{κ_i − 1}(ξ_i), g_j⟩.
(b) (i) ⟨dL_f^l(ξ_i), ad_f^k g_j⟩ = 0, for 1 ≤ i, j ≤ m, and for l = 0, ..., κ_i − 2, k = 0, ..., κ_i − 2 − l.
    (ii) For l = 0, ..., κ_1 − 1, the r_l × m matrix M^l = (m^l_{ij}) has rank r_l on U, where m^l_{ij} = ⟨dL_f^{κ_i − l − 1}(ξ_i), ad_f^l g_j⟩.
Furthermore, M^l = (−1)^l M°_l, for l = 0, ..., κ_1 − 1, where M°_l is the first r_l rows of M°.

Proof: Clearly, (b) ⇒ (a). The converse can be proved by induction. By (a-i), (b-i) is true for l = 0. Assume (b-i) holds for l ≤ p ≤ κ_i − 2, k ≤ κ_i − 2 − p for some p, and for i = 1, ..., m; then

    ⟨dL_f^k(ξ_i), ad_f^{p+1} g_j⟩ = L_f⟨dL_f^k(ξ_i), ad_f^p g_j⟩ − ⟨dL_f^{k+1}(ξ_i), ad_f^p g_j⟩ = 0, for k ≤ κ_i − 2 − (p + 1).

This follows from the induction hypothesis, since k + 1 ≤ κ_i − 2 − p. Hence (b-i) holds for l ≤ κ_i − 2 by induction. Again, (b-ii) is proved by induction. Clearly (b-ii) is true for l = 0 by (a-ii). Assume (b-ii) holds for l ≤ p < κ_1 − 1 for some p, i.e.,

    M^l = (−1)^l M°_l, for l ≤ p < κ_1 − 1.

Then for i ≤ r_{p+1},

    m^{p+1}_{ij} = ⟨dL_f^{κ_i − p − 2}(ξ_i), ad_f^{p+1} g_j⟩ = L_f⟨dL_f^{κ_i − p − 2}(ξ_i), ad_f^p g_j⟩ − ⟨dL_f^{κ_i − p − 1}(ξ_i), ad_f^p g_j⟩.

By property (b-i), the first term is zero, since p ≤ κ_i − 2; therefore

    m^{p+1}_{ij} = −m^p_{ij}, for 1 ≤ i ≤ r_{p+1},

hence

    M^{p+1} = −(−1)^p M°_{p+1} = (−1)^{p+1} M°_{p+1}.

This proves (b-ii), that is, M^l = (−1)^l M°_l, for l = 0, ..., κ_1 − 1. Obviously M^l has full rank on U if M° is nonsingular on U. QED.

The necessity of conditions (C), (D), or (E) can be proved as follows. Let

    Y_l = ( dL_f^{κ_1 − l − 1}(ξ_1) ; ... ; dL_f^{κ_{r_l} − l − 1}(ξ_{r_l}) ), for l = 0, ..., κ_1 − 1,

the rows being stacked.

Remark. Y_l is an r_l × n matrix, where r_l is the number of κ_i ≥ l + 1. The rows in Y_0, ..., Y_{κ_1 − 1} are the rows of the matrix ∂ξ/∂x, so they are linearly independent on U. Let

    G = (g_1, ..., g_m),  ad_f^k G = (ad_f^k g_1, ..., ad_f^k g_m), for k = 0, ..., κ_1 − 1.

Then conditions (b-i) and (b-ii) in lemma 2.7 can be written as

    ( Y_0 ; Y_1 ; ... ; Y_{κ_1 − 1} ) ( G, ad_f G, ..., ad_f^{κ_1 − 1} G ) =

        [ M°_0    x     ...   x
          O     −M°_1   ...   x
          ...
          O       O     ...   (−1)^{κ_1 − 1} M°_{κ_1 − 1} ],        (2.30)

where M°_j is the r_j × m matrix consisting of the first r_j rows of M°. In particular, rank M°_j = r_j on U. Let

    G_j = ( G, ad_f G, ..., ad_f^j G ),

let M_j denote the upper-left block of the right-hand side of (2.30) formed by the first j + 1 block rows and block columns, and let

    Ŷ_j = ( Y_0 ; ... ; Y_j ),  Ỹ_j = ( Y_{j+1} ; ... ; Y_{κ_1 − 1} ), for j = 0, ..., κ_1 − 1.
Then from (2.30),

    Ŷ_j G_j = M_j,        (2.31)
    Ỹ_j G_j = 0, for j = 0, ..., κ_1 − 1.        (2.32)

Since the number of rows in Ŷ_j is Σ_{l=0}^j r_l = s_j, and the Y_l are linearly independent covector fields on U, we have on U

    rank Ŷ_j = s_j and rank Ỹ_j = n − s_j.

Since rank M_j = s_j, (2.31) implies rank G_j ≥ s_j, and (2.32) implies rank G_j ≤ n − (n − s_j) = s_j. Hence,

    rank G_j = s_j, for j = 0, ..., κ_1 − 1.

Since S_j = sp{g : g belongs to a column of G_j}, S_j is nonsingular on U with dimension equal to s_j. In particular,

    dim S_{κ_1 − 1} = s_{κ_1 − 1} = Σ_{i=0}^{κ_1 − 1} r_i = n.

This proves the necessity of conditions (ii) and (iii) in (C), (D), and (E). To prove the involutivity requirement, we use the following lemma [9, p.21].

Lemma 2.8 A nonsingular distribution S of dimension k is involutive iff S^⊥ is locally spanned by n − k exact one-forms.

From (2.32), for j = 0, ..., κ_1 − 1, S_j annihilates n − s_j linearly independent covector fields which are exact one-forms. Hence, by lemma 2.8, it is necessary that S_j be involutive for j = 0, ..., κ_1 − 1. This completes the proof of the necessity of condition (C), which is, by the previous proof, equivalent to (B), (D), and (E).

Likewise, the sufficiency of hypothesis (C) can be proved in a way different from that of Jakubczyk and Respondek. The idea is to show that there exist m functions ξ_1, ..., ξ_m such that the n covector fields dL_f^k(ξ_i), for i = 1, ..., m, k = 0, ..., κ_i − 1, are linearly independent on a neighborhood of 0 and satisfy condition (a) or (b) in lemma 2.7. Then the feedback pair (w, Q) is constructed so that in the coordinates ξ_{ik} defined by

    ξ_{ik} = L_f^{k−1}(ξ_i), for i = 1, ..., m, k = 1, ..., κ_i,

the system is in Brunovsky canonical form. The details of this proof are given in Isidori's book [9, p.237]. We observe that Hunt, Su, and Meyer [7] give a way to construct the functions ξ_1, ..., ξ_m.

Remark.
Frequently we need to solve a system of partial differential equations; the condition for the solvability of such a system is stated in the following proposition (Spivak [15], p.254).

Proposition 2.9 Let U × V ⊂ R^m × R^n be open, where U is a neighborhood of 0 ∈ R^m, and let f^j : U × V → R^n be C^∞ functions, j = 1, ..., m. Then for every x_0 ∈ V, there exists a unique solution x : W → V, defined in a neighborhood W of 0 ∈ R^m, satisfying

    x(0) = x_0,
    ∂x/∂y_j = f^j(y, x(y)), j = 1, ..., m, ∀ y ∈ W,        (2.33)

iff there is a neighborhood of (0, x_0) ∈ U × V on which

    ∂f^i/∂y_j + (∂f^i/∂x) f^j = ∂f^j/∂y_i + (∂f^j/∂x) f^i, 1 ≤ i, j ≤ m.        (2.34)

Remark. (2.34) is the condition obtained by setting the mixed partials of x equal and is called the integrability condition. As a special case, if f^j is independent of y for all j, then (2.34) becomes

    (∂f^i/∂x) f^j = (∂f^j/∂x) f^i, 1 ≤ i, j ≤ m,

that is,

    [f^i, f^j] = 0, 1 ≤ i, j ≤ m.

We will outline the procedure for solving system (2.33). First solve the ordinary differential equation

    dx^1(y_1)/dy_1 = f^1(y_1, 0, ..., 0, x^1(y_1)),
    x^1(0) = x_0.

Next solve the ordinary differential equation

    dx^2(y_1, y_2)/dy_2 = f^2(y_1, y_2, 0, ..., 0, x^2(y_1, y_2)),
    x^2(y_1, 0) = x^1(y_1),

and so on. The last ordinary differential equation to be solved is

    dx^m(y_1, ..., y_m)/dy_m = f^m(y_1, ..., y_m, x^m(y_1, ..., y_m)),
    x^m(y_1, ..., y_{m−1}, 0) = x^{m−1}(y_1, ..., y_{m−1}).

§2.2 Example

In this section we will exhibit an example illustrating the result of the previous section. In particular, we will use condition (B') to solve for the feedback pair (w, Q) after we check that hypothesis (C) is satisfied. Then the method outlined after proposition 2.9 is applied to solve a system of partial differential equations for the map x(ξ). We also use the fact that if g_1(x), ..., g_m(x) are linearly independent vector fields on some open set U and

    Σ_{i=1}^m c_i(x) g_i(x) = 0, ∀ x ∈ U,

then c_i(x) = 0, ∀ x ∈ U, i = 1, ..., m. To see this, suppose there is an x_1 ∈ U such that c_i(x_1) ≠ 0 for some i; then Σ_{i=1}^m c_i(x_1) g_i(x_1) = 0 would contradict the linear independence of g_1(x_1), ..., g_m(x_1).
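The sequential procedure outlined after Proposition 2.9 reduces (2.33) to m ordinary differential equations, solved one coordinate direction at a time. A minimal numerical sketch (ours, not from the thesis) for a hypothetical scalar system with f^1 = f^2 = x, so that [f^1, f^2] = 0 and the integrability condition (2.34) holds trivially, is the following; the exact solution is x(y_1, y_2) = x_0 · exp(y_1 + y_2).

```python
import math

def rk4(F, x0, t_end, steps=200):
    """Integrate dx/dt = F(t, x) from t = 0 to t_end (classical Runge-Kutta)."""
    x, t, h = x0, 0.0, t_end / steps
    for _ in range(steps):
        k1 = F(t, x)
        k2 = F(t + h / 2, x + h / 2 * k1)
        k3 = F(t + h / 2, x + h / 2 * k2)
        k4 = F(t + h, x + h * k3)
        x += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

# Hypothetical instance of (2.33) with m = 2, n = 1:
#   dx/dy1 = f1(y, x) = x,  dx/dy2 = f2(y, x) = x,  x(0) = 1.
def solve_at(y1, y2):
    # Step 1: integrate dx/dy1 = f1 along the y1-axis from x(0, 0) = 1.
    x1 = rk4(lambda t, x: x, 1.0, y1)
    # Step 2: integrate dx/dy2 = f2 starting from the step-1 value x(y1, 0).
    return rk4(lambda t, x: x, x1, y2)

x = solve_at(0.3, 0.5)
assert abs(x - math.exp(0.8)) < 1e-6   # agrees with exp(y1 + y2)
```

The same two-stage pattern, with the bracket conditions of hypothesis (C) playing the role of (2.34), is what is carried out symbolically for the map x(ξ) in the example below.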
Example: Consider system (2.1) with n = 3, m = 2, and f, g_1, g_2 given by (2.2.1). For simplicity, we first employ the transformation

    u = ( −tan x_1 ; 0 ) + ( sec x_1 , 0 ; 0 , e^{−x_2}/(x_3 + 1) ) v,        (2.2.2)

so that system (2.1) becomes

    ẋ = f̃(x) + g̃_1(x) v_1 + g̃_2(x) v_2,

where

    f̃(x) = ( √(1 − x_1²)(x_2 − ln(x_3 + 1)) ; 0 ; 0 ),  g̃_1 = ( 0 ; 1 ; 0 ),  g̃_2 = ( 0 ; 0 ; 1 ).        (2.2.3)

For convenience, we drop the ˜ sign and replace v by u. We find, after some computation,

    ad_f g_1 = ( −√(1 − x_1²) ; 0 ; 0 ),  ad_f g_2 = ( √(1 − x_1²)/(x_3 + 1) ; 0 ; 0 ),        (2.2.4)

so g_1, g_2, ad_f g_1 are linearly independent on U = {x : |x_1| < 1, x_3 > −1}. Also [g_1, g_2] = 0. Hence S_0 is involutive, and hypothesis (C) is satisfied with r_0 = 2, r_1 = 1, κ_1 = 2, κ_2 = 1.

Next we have to find linearly independent vector fields ĝ_1, ĝ_2, ad_f̂ ĝ_1 such that the Lie bracket between any two is zero and

    ad_f̂² ĝ_1 = 0,  ad_f̂ ĝ_2 = 0,

where f̂ = f + g_1 w_1 + g_2 w_2 and ĝ_i = g_1 q_{1i} + g_2 q_{2i}, i = 1, 2; cf. (B'). The aim is to find w and Q so that we can then find the linearizing transformation x(ξ). We can also assume that q_{21} = 0, since

    ad_f̂ ĝ_1 = [f̂, ĝ_1] = (ad_f g_1) q_{11} + (ad_f g_2) q_{21} + g^0,

where g^0 ∈ S_0, and since ad_f g_2 ∈ sp{g_1, g_2, ad_f g_1}. Now solve

    0 = [ĝ_1, ĝ_2] = [g_1 q_{11}, g_1 q_{12} + g_2 q_{22}].

Since g_1 and g_2 are linearly independent, setting their coefficients equal to zero yields, in particular,

    ∂q_{22}/∂x_2 = 0.

Assuming w_1 = w_2 = 0, then f̂ = f. Next solve

    0 = [f̂, ĝ_2] = [f, g_1 q_{12} + g_2 q_{22}] = (ad_f g_1) q_{12} + g_1 L_f(q_{12}) + (ad_f g_2) q_{22} + g_2 L_f(q_{22}).

Using (2.2.4) and setting the coefficients of ad_f g_1, g_1, g_2 equal to zero, we have

    q_{12} − q_{22}/(x_3 + 1) = 0,  L_f(q_{12}) = 0,  L_f(q_{22}) = 0,

or

    q_{22} = q_{12}(x_3 + 1),  L_f(q_{12}) = 0,  L_f(q_{22}) = 0.

Next we solve

    0 = [ĝ_1, ad_f̂ ĝ_1] = [g_1 q_{11}, (ad_f g_1) q_{11} + g_1 L_f(q_{11})].

Setting coefficients equal to zero as before, we obtain ∂q_{11}/∂x_2 = 0 and an equation whose last equality leads to ∂q_{11}/∂x_1 = 0. Similarly, we solve 0 = [ĝ_2, ad_f̂ ĝ_1] to get ∂q_{11}/∂x_2 = 0 and ∂q_{11}/∂x_3 = 0.
So q_{11} = constant; hence let q_{11} = 1. We also check that condition (B'-ii) is satisfied, i.e.,

    ad_f̂² ĝ_1 = ad_f² ĝ_1 = 0, and ad_f̂ ĝ_2 = 0.

Let q_{12} = 1; then q_{22} = x_3 + 1, and

    Q̂ = ( 1 , 1 ; 0 , x_3 + 1 ),

which is nonsingular on U. Combining with the transformation (2.2.2), we have

    u = ( −tan x_1 ; 0 ) + ( sec x_1 , 0 ; 0 , e^{−x_2}/(x_3 + 1) ) ( 1 , 1 ; 0 , x_3 + 1 ) v = w(x) + Q(x) v,

where

    w(x) = ( −tan x_1 ; 0 ), and Q(x) = ( sec x_1 , sec x_1 ; 0 , e^{−x_2} ).

Now we solve the system of partial differential equations (2.10) for the map x(ξ). For convenience, we drop the ˆ sign and use a single subscript for ξ. The system of equations to be solved is

    ∂x/∂ξ_1 = −ad_f g_1 = ( √(1 − x_1²) ; 0 ; 0 ),
    ∂x/∂ξ_2 = g_1 = ( 0 ; 1 ; 0 ),
    ∂x/∂ξ_3 = ĝ_2 = ( 0 ; 1 ; x_3 + 1 ).

Using the method outlined after proposition 2.9, we first solve

    dx(ξ_1)/dξ_1 = ( √(1 − x_1²) ; 0 ; 0 ),  x(0) = 0.

We get

    x(ξ_1) = ( sin ξ_1 ; 0 ; 0 ).

Next we solve

    dx(ξ_1, ξ_2)/dξ_2 = ( 0 ; 1 ; 0 ),  x(ξ_1, 0) = x(ξ_1).

We have

    x(ξ_1, ξ_2) = ( sin ξ_1 ; ξ_2 ; 0 ).

Finally we solve

    dx(ξ_1, ξ_2, ξ_3)/dξ_3 = ( 0 ; 1 ; x_3 + 1 ),  x(ξ_1, ξ_2, 0) = x(ξ_1, ξ_2).

We get the map

    x_1 = sin ξ_1,
    x_2 = ξ_2 + ξ_3,
    x_3 = e^{ξ_3} − 1.

§2.3 Local stabilization of a nonlinear system

For the nonlinear system (2.1), it may not be easy to determine whether an arbitrary point x ∈ R^n can be transferred to a given point under an admissible control; but under the assumption that the nonlinear system with f(x_0) = 0 is linearizable around x_0 to a linear controllable system, it can be seen that points in some neighborhood of x_0 can be transferred to x_0 in finite time under some control. We will elaborate on this point in this section. First we introduce some definitions.

Let Ω be the set of all admissible controls, which will be assumed to be unconstrained, so Ω = R^m. Let u_{[t_0, t_1]} ∈ Ω denote the control acting on the system from time t_0 to time t_1. Let ψ(t; x_0, t_0, u) denote the trajectory of the system at time t that originated from x_0 at time t_0 under the control u_{[t_0, t]}.
Let

    C(x_0) = {x : ∃ u ∈ Ω such that ψ(t_1; x, t_0, u) = x_0, for some t_1 ≥ t_0};

i.e., C(x_0) is the set of points that can be transferred to x_0 at some finite time t_1. If U is a neighborhood of x_0, we define

    C_U(x_0) = {x ∈ C(x_0) : ψ(t; x, t_0, u) ∈ U, t ∈ [t_0, t_1], u ∈ Ω}.

Definition: System (2.1) is said to be controllable at x_0 if C(x_0) = R^n, and it is said to be controllable if this is true for all x_0 ∈ R^n.

Definition: System (2.1) is said to be locally controllable at x_0 if for every neighborhood U of x_0, C_U(x_0) is also a neighborhood of x_0, and it is said to be locally controllable if this is true for all x_0 ∈ R^n.

Definition: System (2.1) is said to be weakly controllable at x_0 if there exists a neighborhood U of x_0 such that C(x_0) ⊃ U, and it is said to be weakly controllable if this is true for all x_0 ∈ R^n.

Definition: System (2.1) is said to be locally weakly controllable at x_0 if there exists a neighborhood U of x_0 such that for every neighborhood V of x_0 contained in U, C_V(x_0) is also a neighborhood of x_0, and it is said to be locally weakly controllable if this is true for all x_0 ∈ R^n.

Remark. These definitions resemble those given by Hermann and Krener [6]. The difference is that our set C(x_0) denotes a set of points that can be transferred to x_0, whereas the set A(x_0) defined in [6] denotes a set of points that can be transferred from x_0. Moreover, the definition of weak controllability has been modified so that it is analogous to that of weak observability given in the next chapter. Furthermore, it can be seen that the following implications hold:

    locally controllable ⇒ controllable ⇒ weakly controllable,
    locally controllable ⇒ locally weakly controllable ⇒ weakly controllable.

We will review some results from linear system theory. Consider the linear time-invariant system

    ẋ = Ax + Bu.        (1.1)

Lemma 2.10 System (1.1) is controllable iff the controllability rank condition holds.

The above lemma is well known and will not be proved here. As noted by Hermann and Krener in [6], the four concepts of controllability are equivalent for linear systems.
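The controllability rank condition of Lemma 2.10, rank(B, AB, ..., A^{n−1}B) = n, can be checked mechanically. A small Python sketch (ours; the helper names are illustrative), using Gaussian elimination for the rank:

```python
def mat_mul(A, B):
    """Product of two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def rank(M, tol=1e-9):
    """Numerical rank via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        if r == rows:
            break
        piv = max(range(r, rows), key=lambda i: abs(M[i][c]))
        if abs(M[piv][c]) < tol:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

def controllability_matrix(A, B):
    """Columns of (B, AB, ..., A^{n-1} B)."""
    n = len(A)
    blocks, P = [], [row[:] for row in B]
    for _ in range(n):
        blocks.append(P)
        P = mat_mul(A, P)
    # concatenate the n blocks column-wise
    return [sum((blk[i] for blk in blocks), []) for i in range(n)]

# Double integrator: controllable.
A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0], [1.0]]
assert rank(controllability_matrix(A, B)) == 2

# Decoupled second state that the input never reaches: not controllable.
A2 = [[1.0, 0.0], [0.0, 2.0]]
B2 = [[1.0], [0.0]]
assert rank(controllability_matrix(A2, B2)) == 1
```

By Lemma 2.10 the first pair is controllable and the second is not; by the remark above, for linear systems the same verdict applies to all four controllability concepts.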
But this is not the subject of our discussion, and we will limit ourselves to showing the equivalence of controllable and locally controllable systems, as in the next lemma.

Lemma 2.11 System (1.1) is locally controllable iff the controllability rank condition holds.

The proof is briefly shown below.

Proof: Necessity of the rank condition is clear, as local controllability implies controllability. For sufficiency, let Φ(t, t_0) be the state transition matrix of system (1.1), i.e.,

    dΦ(t, t_0)/dt = A Φ(t, t_0),  Φ(t_0, t_0) = I.

Let

    M(t, t_0) = ∫_{t_0}^t Φ(t, s) B B^T Φ^T(t, s) ds.

It is known that M(t, t_0) is nonsingular for t > t_0 iff the controllability rank condition holds. Let

    u(t) = −P(t; t_1, t_0) x_0,

where

    P(t; t_1, t_0) = B^T Φ^T(t_1, t) M^{−1}(t_1, t_0) Φ(t_1, t_0).

Then

    x(t) = Φ(t, t_0) x_0 + ∫_{t_0}^t Φ(t, s) B u(s) ds
         = Φ(t, t_0) x_0 − ∫_{t_0}^t Φ(t, s) B B^T Φ^T(t_1, s) ds · M^{−1}(t_1, t_0) Φ(t_1, t_0) x_0
         = K(t; t_1, t_0) x_0,

where

    K(t; t_1, t_0) = Φ(t, t_0) − ∫_{t_0}^t Φ(t, s) B B^T Φ^T(t_1, s) ds · M^{−1}(t_1, t_0) Φ(t_1, t_0).

It can be checked that x(t_1) = K(t_1; t_1, t_0) x_0 = 0. So the above control will transfer any point x_0 to the origin. Moreover,

    ‖x(t)‖ ≤ sup_{t_0 ≤ t ≤ t_1} ‖K(t; t_1, t_0)‖ ‖x_0‖ = k(t_1, t_0) ‖x_0‖.

So given any ε-neighborhood of 0, choose δ < ε/k(t_1, t_0); then ‖x_0‖ < δ implies ‖x(t)‖ < ε for t ∈ [t_0, t_1]. Therefore system (1.1) is locally controllable at 0 and, hence, at all x ∈ R^n, since the system is linear and time-invariant. QED.

Corollary 2.12 If the controllability rank condition holds, then system (1.1) is locally weakly controllable.

Proposition 2.13 If the nonlinear system (2.1) is locally linearizable to a linear controllable system at x_0, then system (2.1) is locally weakly controllable at x_0.

Remark. Proposition 2.13 is intuitively clear in view of lemma 2.11 or corollary 2.12. Nevertheless, we will prove it below.

Proof: Let (x, w, Q) be the linearizing triple that transforms (2.1) to (1.1). In particular, x : V ⊂ R^n → x(V) = U is a diffeomorphism from V onto U such that x_0 ∈ x(V).
Let ξ_0 = x^{−1}(x_0). Then, since x is continuous, for every neighborhood U_1 of x_0 contained in U, x^{−1}(U_1) = V_1 is a neighborhood of ξ_0 such that x(V_1) = U_1. But in the ξ coordinates around ξ_0, the system is linear and controllable, so there exists a neighborhood V_2 of ξ_0 contained in V_1 and a control

    v(t) = −P(t; t_1, t_0) ξ

such that any point ξ ∈ V_2 at time t_0 can be transferred to ξ_0 at time t_1 with the trajectory γ(t; ξ, t_0, v) ∈ V_1 for t ∈ [t_0, t_1]. This implies C_{V_1}(ξ_0) is a neighborhood of ξ_0. But for any v and any ξ ∈ V_1, if u = w + Qv and x = x(ξ), then the trajectory for the nonlinear system is

    ψ(t; x, t_0, u) = x(γ(t; ξ, t_0, v)), for t ∈ [t_0, t_1],

since (x(ξ), w, Q) is the linearizing triple. So ψ(t; x, t_0, u) ∈ x(V_1) = U_1. In particular, ψ(t_1; x, t_0, u) = x_0, since γ(t_1; ξ, t_0, v) = ξ_0 and x(ξ_0) = x_0. Moreover, since x is a local diffeomorphism, x(V_2) is a neighborhood of x_0. Hence, we conclude that every point in x(V_2) can be transferred to x_0 by some feedback control u with the trajectory lying inside U_1. This proves that C_{U_1}(x_0) is a neighborhood of x_0. QED.

§2.4 Local asymptotic stabilization problem

In this section we will consider the particular case that x_0 = 0 and f(0) = 0, i.e., the origin is an equilibrium point. From the result of the previous section, we see that if the nonlinear system (2.1) is linearizable around 0 to a linear controllable system and if the admissible control set is R^m, then there exists a control such that any point in a sufficiently small neighborhood of 0 can be transferred to the origin in finite time with the trajectory lying inside some given neighborhood of 0. But such a control is usually time dependent and is undesirable in practice. Furthermore, as far as stability of the system is concerned, it suffices to have a control that can transfer some initial state to a sufficiently small neighborhood of 0. For these reasons, we consider the following problem.
Local asymptotic stabilization problem: For the system (2.1), find a control law u and neighborhoods U, U_1 (if possible) of 0 with U_1 ⊂ U such that any initial state x_0 ∈ U_1 can be transferred to the origin asymptotically under this control law, i.e., x(t) → 0 as t → ∞. Moreover, the trajectory ψ(t; x_0, t_0, u) ∈ U for all t ≥ t_0.

Consider again the linear controllable system

    ξ̇ = Aξ + Bv.        (1.1)

If we feed the state back linearly into the input, i.e., if v = −Pξ where P is a constant matrix, then the closed-loop system becomes

    ξ̇ = (A − BP)ξ.

It is known from linear system theory that we can find a constant matrix P such that the characteristic values of (A − BP) can be arbitrarily assigned in the complex plane. The solution of the closed-loop system satisfies

    ‖ξ(t)‖ ≤ k e^{λt} ‖ξ_0‖,

where k is some positive constant and λ = max{Re λ_1, ..., Re λ_n}, with λ_1, ..., λ_n the characteristic values of (A − BP). As noted above, we can choose P so that λ is negative. Thus, given any ε-neighborhood V of 0, if ‖ξ_0‖ < ε/k, then ξ(t) ∈ V for all t ≥ 0. Moreover, ξ(t) approaches 0 as t approaches ∞.

If (x, w, Q) is the linearizing triple transforming system (2.1) to (1.1), and if V is sufficiently small so that the map x is a diffeomorphism on V, then clearly x(ξ(t)), for t ≥ 0, is the trajectory of the nonlinear system under the control

    u = w(x) − Q(x)Pξ = w(x) − Q(x)P x^{−1}(x),

with the property that x(ξ(t)) ∈ x(V) for t ≥ 0 and x(ξ(t)) → 0 as t → ∞. In particular, if the linear controllable system (1.1) is in Brunovsky canonical form, the constant matrix P can be easily determined so that the characteristic values of the closed-loop system are as desired.

In summary, the control law u = w(x) − Q(x)P x^{−1}(x) will be called the local asymptotic control law, since the linearization is valid only locally. Moreover, system (2.1), subjected to small deviations of the state from the equilibrium point 0, can be stabilized asymptotically.
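The remark about the Brunovsky form can be illustrated numerically. For a single Brunovsky chain with n = 2, the feedback gains in P are just the coefficients of the desired closed-loop characteristic polynomial; the sketch below (ours, with illustrative names) takes det(sI − (A − BP)) = (s + 1)(s + 2) = s² + 3s + 2 and checks that trajectories decay:

```python
def closed_loop(xi, P):
    """Brunovsky chain xi' = A xi + B v with linear feedback v = -P.xi:
    xi1' = xi2, xi2' = -(P[0]*xi1 + P[1]*xi2)."""
    return [xi[1], -(P[0] * xi[0] + P[1] * xi[1])]

def simulate(xi, P, t_end=10.0, steps=1000):
    """Integrate the closed-loop system with classical RK4."""
    h = t_end / steps
    for _ in range(steps):
        k1 = closed_loop(xi, P)
        k2 = closed_loop([x + h / 2 * k for x, k in zip(xi, k1)], P)
        k3 = closed_loop([x + h / 2 * k for x, k in zip(xi, k2)], P)
        k4 = closed_loop([x + h * k for x, k in zip(xi, k3)], P)
        xi = [x + h / 6 * (a + 2 * b + 2 * c + d)
              for x, a, b, c, d in zip(xi, k1, k2, k3, k4)]
    return xi

# Gains for characteristic polynomial s^2 + 3s + 2, eigenvalues -1 and -2.
P = [2.0, 3.0]
xi = simulate([1.0, -0.5], P)
assert all(abs(c) < 1e-3 for c in xi)   # the state has decayed toward 0
```

Pulled back through the linearizing triple, the same P gives the local asymptotic control law u = w(x) − Q(x)P x^{−1}(x) of this section.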
CHAPTER 3
Linearization of nonlinear systems with outputs

§3.1 Necessary and sufficient condition

In this section, we will consider the dual problem of the one considered in the previous chapter; that is, we want to find necessary and sufficient conditions under which the nonlinear uncontrolled system with output of the form

    ẋ = f(x),
    y = h(x),        (3.1)

where h(0) = 0, is locally G_0-equivalent to the linear observable system in dual Brunovsky observer form,

    ξ̇ = Aξ + a(φ),  φ = Cξ.        (3.2)

As mentioned in the introduction, Krener and Isidori [11] gave a necessary and sufficient condition under which this is possible by a state coordinate change for the single-output case. Isidori also gave a detailed proof in [9, p.248]. Bestle and Zeitz [1] studied the same problem and allowed f and h to be time dependent. They derived the necessary condition by partially differentiating the map x(ξ) with respect to the coordinate ξ, just as we did in the previous chapter. They obtained a system of partial differential equations which, using our notation and for the time-independent case, is equivalent to system (2.10), namely

    ∂x/∂ξ_k = (−1)^{n−k} ad_f^{n−k} g, for k = 1, ..., n.        (3.3)

Clearly, if g is known, then all n partial derivatives can be computed. They derived g by partially differentiating the output and found that

    g(x) = Φ^{−1}(x) (1, 0, ..., 0)^T,

where Φ(x) is the matrix whose rows are dh, N(dh), ..., N^{n−1}(dh), and the operator N is defined by

    N(dh) = ∂(dh)/∂t + (∂(dh)/∂x) f + (dh)(∂f/∂x),

which, for the time-independent case, is just the Lie derivative of the covector field dh along f. However, they did not examine the solvability of system (3.3). Krener and Respondek [14] also studied this problem for the multi-output case. It can be seen that their result (theorem 5.1) is equivalent to what we will derive in this section, although their theorem is stated in a more abstract way. Our derivation in the sequel was inspired by the papers mentioned above. For simplicity, assume ∂h/∂x has rank p, where p is the dimension of h. First, we introduce the following notation.
Let

    Ē_j = {dL_f^l(h_i) : i = 1, ..., p, l = 0, ..., j}, j = 0, ..., n − 1.

We define

    E_j = sp{τ : τ ∈ Ē_j};

that is, E_j is a C^∞ codistribution on R^n generated by Ē_j. Let e_j(x) = dim E_j(x), j = 0, ..., n − 1, and define

    d_0 = e_0,  d_j = e_j − e_{j−1}, for j = 1, ..., n − 1.

We also call (x, y) the linearizing pair if x and y are the maps that correspond to the state and output coordinate changes respectively.

Proposition 3.1 System (3.1) is locally G_0-equivalent to system (DBOF) at 0 iff the following conditions are satisfied on a neighborhood U of 0:
(i) there exist linearly independent vector fields g_1, ..., g_p such that, for 1 ≤ i, j ≤ p,
    (a) ⟨L_f^k(dh_i), g_j⟩ = 0, for k = 0, ..., μ_i − 2;
    (b) the p × p matrix N = (n_{ij}) is nonsingular, where n_{ij} = ⟨L_f^{μ_i − 1}(dh_i), g_j⟩;
    (c) [ad_f^l g_i, ad_f^k g_j] = 0, 1 ≤ i, j ≤ p, l = 0, ..., μ_i − 1, k = 0, ..., μ_j − 1,
    where μ_j = the number of d_i such that d_i ≥ j;
(ii) dim E_j = e_j = constant, j = 0, ..., n − 1;
(iii) dim E_{n−1} = n.

Proof: For necessity, assume the maps x = x(ξ) and y = y(φ) exist so that (3.1) is transformed to (3.2), with observability indices μ_1, ..., μ_p satisfying

    μ_1 ≥ μ_2 ≥ ... ≥ μ_p ≥ 1 and Σ_{i=1}^p μ_i = n.

Differentiating x(ξ) with respect to t, we have
' , M i - l , (3.10) and § § J ^ = (-!)* a d * (3.11) (3.10) can be written as ^ = (-!)«-* a d j T S i , for * = l , . . . , A i t - . . (3.12) F r o m the output y = h(x) and the map y = y((£), we have y(*(f l) =M*(fl)- (3.13) Differentiating (3.13) partially with respect to £, we have djid& _ dhdx 57 or ifc=i§§. (3.i4) Since where O consists of zero entries and the, ith block of the matrix has dimension p x ft - 1, then from (3.14) and (3.15), for i — 1,... ,p, dy _ dh dx d(f>i dx d£n ° = § i ^ iffc = 2,..., w or = f ^ ' f o r * = l . - . ^ , (3-16) where c* _ / 1 if * = y, Substituting (3.12) into (3.16), we get Ml6" = |S((-I) w"*» d/"*W). for A; = l , . . . , w . (3.17) So for 1 < t, j < p, ^ 1 * = (-l)w"*(d/ly)adJ'-S.-).- for fc = l,...,Mi. (3.18) Since the Jacobian matrix J^ f:(^ ) i s required to be nonsingular on a neighbor-hood of 0 G Rp, hence it is necessary that (dhj, adf~k g{) = 0, for 1 < i,j < p, k = 2,..., m, or (dhj,adjy,-) =0, for 1 < I , J < p, k = 0,...,m - 2, (3.19) 58 and the p x p matrix 7Y° = (ftyt) is nonsingular on a neighborhood of 0, where n% = {dhjMf1 9i)- (3-20) The above conditions are stated as (a') in the following lemma which can be considered as dual to lemma 2.7. Lemma 3.2 The following statements are equivalent: (a') (0 (dhj, ad*0t) =0, 1 < i,j < p, k = 0,... ,m — 2. (ii) the px p matrix N° = (n° t) is nonsingular on a neighborhood U of 0, where njt- = (dhj, a d ^ - 1 gt). P>') (i) (Llf(dhj),adkfgi) = 0, for 1 < i,j < p, and for / = 0,... ,/zt- — 2, A: = 0,..., fj,i — 2 — I. (ii) For / = 0,..., /zi — 1, the p x di matrix JV' = (nyt-) has rank d[ on £/, where nlji = (Llf(dhi)iadf~t~1-gi). Furthermore, Nl = (-lJ'JV,0, for / = 0,..., HI - 1, where is the first di columns of 7Y°. Proof : The equivalence of (a') and (b') is quite clear if we compare lemma 2.7 and lemma 3.2. We see that the roles of covector fields and vector fields are 59 interchanged. 
In particular, we notice the following correspondence: lemma 2.7 lemma 3.2 Lkf{dti) <—> ad^ t adyflry « • Ly(d/ly) To relate condition (a') to the one stated in proposition 3.1, we use the following lemma. Lemma 3.3 (a') or (b') is equivalent to (c') given below, (c') (i) {Llf (dhj),gi) = 0, 1 < t , j < p, 1 = 0,...,M- 2. (ii) the p x p matrix N = (nyT) is nonsingular on a neighborhood of 0, where riji = ( L ^ - 1 (dhj),gi). Proof : We show that (b') and (c') are equivalent. Clearly, (b'-i) => (c'-i) since the latter is a special case of the former by letting A; = 0. Next we show that (c'-i) (b'-i). For i = 1,...,p, let / = m — 2 in (c'-i), we have 0={L?~2 {dhjUi) = LfiLf'3 {dhj),9i) - (Lf-3 {dhj)Mfgi). The first term is zero by (c'-i), Repeating this process, we get the last equation (-I)*" 2 (dhj,<id?-2gi} = 0. We see that (Ly(<%),ad*5i) = 0, for / = 0,... ,m - 2, k = m - 2 - /. 60 If we repeat with / = m — 3,... ,0, we get condition (b'-i). To show the equivalence of (b'-ii) and (c'-ii), we note that 4 = (dhjMf~l9i) = Lfidhj-Mf^Qi) ~ <L/(<foy),adJi-2<K>. The first term is zero by (b'-i). Repeating this process, we have 4M - i r ~ML7~^<%), f f i ) , for l < t , j < p . Thus, the columns in N and N° are the same except for a possible sign change. This shows that (c'-ii) is equivalent to (a'-ii) which is equivalent to (b'-ii). QED. ll Returning now to the proof of necessity in proposition 3.1, let Zt = (adj1 9 l , . . . , adjT'- 1 gj), for I = 0,..., ^ - 1, where j = the number of /zt- such that m — I — 1 > 0. (3.21) Obviously, this number depends on /, so we will denote it as di, and, for the moment, ignore the previous definition; that is, dj = ey — ey_i . We will show that this is indeed true. The columns of Z\ are linearly independent since they are the columns of the Jacobian matrix ^ by (3.12). Hence rank Z\ = d\. Note also that the following relation between /zt- and dj holds. Hi — the number of dj such that dj > i. 
Let

    L_f^k(dh) = ( L_f^k(dh_1) ; ... ; L_f^k(dh_p) ),

the rows being stacked; then condition (b') in lemma 3.2 can be written as

    ( dh ; L_f(dh) ; ... ; L_f^{μ_1 − 1}(dh) ) ( Z_0, Z_1, ..., Z_{μ_1 − 1} ) =

        [ N°_0    O     ...   O
          x     −N°_1   ...   O
          ...
          x       x     ...   (−1)^{μ_1 − 1} N°_{μ_1 − 1} ],        (3.22)

where N°_l is as defined in lemma 3.2. If we let

    Ẑ_j = ( Z_0, ..., Z_j ),  Z̃_j = ( Z_{j+1}, ..., Z_{μ_1 − 1} ),

let Ê_j = ( dh ; L_f(dh) ; ... ; L_f^j(dh) ), and let N_j denote the upper-left block of the right-hand side of (3.22) formed by the first j + 1 block rows and block columns, then (3.22) can be written as

    Ê_j Ẑ_j = N_j, for j = 0, ..., μ_1 − 1,        (3.23)

and

    Ê_j Z̃_j = 0, for j = 0, ..., μ_1 − 2.        (3.24)

Since rank Ẑ_j = Σ_{l=0}^j d_l and rank N_j = Σ_{l=0}^j d_l, then (3.23) implies

    rank Ê_j ≥ Σ_{l=0}^j d_l.

Since rank Z̃_j = n − Σ_{l=0}^j d_l, then (3.24) implies

    rank Ê_j ≤ n − rank Z̃_j = Σ_{l=0}^j d_l.

Hence, we have

    rank Ê_j = Σ_{l=0}^j d_l.

Since the codistribution E_j = sp{τ : τ belongs to a row of Ê_j}, then

    rank Ê_j = dim E_j = e_j.

Therefore, we have

    e_j = Σ_{l=0}^j d_l, for j = 0, ..., μ_1 − 1.

Clearly, d_j = e_j − e_{j−1} as claimed. This also shows that the dimensions d_j and e_j are constant on a neighborhood of 0. Furthermore,

    dim E_{μ_1 − 1} = Σ_{l=0}^{μ_1 − 1} d_l = Σ_{i=1}^p μ_i = n.

This proves the necessity of (ii) and (iii) in proposition 3.1. From equation (3.10), we get the integrability condition (i-c) by setting the mixed partials of the map x(ξ) equal. Since (i-a) and (i-b) in proposition 3.1 are just (c'-i) and (c'-ii) respectively in lemma 3.3, the necessity of the hypotheses in proposition 3.1 is proved.

For sufficiency, assume that hypotheses (i), (ii), and (iii) in proposition 3.1 are satisfied. Hypothesis (iii) implies that there is a least integer k such that dim E_k(x) = n, ∀x ∈ U, and that

    Σ_{i=0}^k d_i = Σ_{i=1}^k (e_i − e_{i−1}) + e_0 = e_k = n.

Moreover, the μ_i's defined in hypothesis (i) imply the relation (3.21), and

    μ_1 ≥ μ_2 ≥ ... ≥ μ_p ≥ 1,  Σ_{i=1}^p μ_i = Σ_{i=0}^k d_i = n.

Clearly, k = μ_1 − 1. Now let the map x(ξ) be defined by

    ∂x/∂ξ_{ik} = (−1)^{μ_i − k} ad_f^{μ_i − k} g_i, for k = 1, ..., μ_i,        (3.25)

and x(0) = 0. (3.25) is solvable for x(ξ), since the integrability condition (i-c) is satisfied. Let ξ = ξ(x) = x^{−1}(x); then

    ξ̇ = (∂ξ/∂x) ẋ = (∂ξ/∂x) f(x(ξ)) =: f̃(ξ).

From (3.25),

    (∂ξ/∂x) ((−1)^{μ_i − k} ad_f^{μ_i − k} g_i) = ∂/∂ξ_{ik}, for k = 1, ...,
μ_i, where ∂/∂ξ_{ik} in local coordinates is the unit vector e_{ik}. Hence, for k = 1, ..., μ_i − 1, and using the fact that the Lie bracket of vector fields is invariant with respect to a change of state coordinates, we have

    [∂/∂ξ_{i,k+1}, f̃] = (−1)^{μ_i − k − 1} (∂ξ/∂x) [ad_f^{μ_i − k − 1} g_i, f](x(ξ))
                      = (−1)^{μ_i − k} (∂ξ/∂x) ad_f^{μ_i − k} g_i (x(ξ))
                      = ∂/∂ξ_{ik}.

This implies, in components, for i = 1, ..., p, k = 1, ..., μ_i − 1,

    ∂f̃_{jl}/∂ξ_{i,k+1} = { 1, if j = i and l = k;  0, otherwise. }

This shows that f̃_{ik}(ξ) has the form

    f̃_{ik}(ξ) = ξ_{i,k+1} + α_{ik}(ξ_{11}, ..., ξ_{p1}), for i = 1, ..., p, k = 1, ..., μ_i − 1.        (3.26)

To get the map for the output coordinate change, consider the output relation

    y = h(x).        (3.27)

Substituting the map x into (3.27), we have y(ξ) = h(x(ξ)). Differentiating y partially with respect to ξ_{ik}, we have

    ∂y/∂ξ_{ik} = (∂h/∂x)(∂x/∂ξ_{ik}).

Using (3.25), then for j = 1, ..., p,

    ∂y_j/∂ξ_{ik} = ⟨dh_j, (−1)^{μ_i − k} ad_f^{μ_i − k} g_i⟩
                 = { 0, if k = 2, ..., μ_i;  ⟨dh_j, (−1)^{μ_i − 1} ad_f^{μ_i − 1} g_i⟩, if k = 1. }

So y is independent of ξ_{ik} for i = 1, ..., p, k = 2, ..., μ_i. Letting φ_i = ξ_{i1}, then

    ∂y_j/∂φ_i = ⟨dh_j, (−1)^{μ_i − 1} ad_f^{μ_i − 1} g_i⟩, for 1 ≤ i, j ≤ p.

We notice that

    ∂y_j/∂φ_i = n_{ji},

where n_{ji} is defined in lemma 3.3, i.e., ∂y/∂φ = N. Since the matrix N = (n_{ji}) is nonsingular on some neighborhood of 0 by hypothesis, and x(0) = 0, φ(0) = 0, we see that the Jacobian matrix ∂y/∂φ is nonsingular for φ in some neighborhood of 0 ∈ R^p.

Now we show that x(ξ) is a local diffeomorphism around 0; that is, ∂x/∂ξ is nonsingular on some neighborhood of 0. We have shown that hypotheses (i-a) and (i-b) are equivalent to (b') in lemma 3.2, which can be expressed as equations (3.23) and (3.24). Letting j = μ_1 − 1 in (3.23), we have

    Ê_{μ_1 − 1} Ẑ_{μ_1 − 1} = N_{μ_1 − 1}.

Since

    rank Ê_{μ_1 − 1}(x) = e_{μ_1 − 1}(x) = n, ∀x ∈ U,

and

    rank N_{μ_1 − 1}(x) = Σ_{i=0}^{μ_1 − 1} d_i(x) = n, ∀x ∈ U,

then

    rank Ẑ_{μ_1 − 1}(x) = n, ∀x ∈ U.

But Ẑ_{μ_1 − 1} is an n × n matrix consisting of the columns of ∂x/∂ξ, so ∂x/∂ξ is nonsingular for all x ∈ U.
Replacing ξ_{i1} by φ_i in (3.26), we see that (3.26) is the required form to which system (3.1) is supposed to be transformed. We note that α(φ) can be computed from equation (3.4) or by solving (3.11). QED.

Remark. If f(0) = 0, then necessarily α(0) = 0, as seen from equation (3.4).

§3.2 Example

We will give an example which satisfies the hypotheses of proposition 3.1, and we will then find the transformations which map the nonlinear system (3.1) to the dual Brunovsky observer form. For convenience, we will drop the ~ sign for the maps and write x = x(ξ) and y = y(φ). Consider

x' = f(x),   y = h(x),

where

f(x) = ( sin x_2 (1 + x_1), sec x_2 (1 + x_1), cos x_4, x_3 )^T

and

h(x) = ( x_1, e^{x_4} − 1 )^T.

We find that

dh = [ 1, 0, 0, 0 ; 0, 0, 0, e^{x_4} ],
L_f(dh) = [ sin x_2, (1 + x_1) cos x_2, 0, 0 ; 0, 0, e^{x_4}, x_3 e^{x_4} ],

so

rank ( dh ; L_f(dh) ) = 4 on U,

where U = {x : x_1 > −1, −π/2 < x_2 < π/2}. So d_0 = d_1 = 2, μ_1 = μ_2 = 2, and (ii) and (iii) in proposition 3.1 are satisfied. Let

g_1(x) = (0, 1, 0, 0)^T a_1(x),   g_2(x) = (0, 0, 1, 0)^T a_2(x),

where a_1(x), a_2(x) ∈ C^∞(U) and are not equal to zero on U. We see that (i-a) and (i-b) in proposition 3.1 are satisfied. In particular, if a_1(x) = sec x_2 and a_2(x) = 1, then (i-c) is also satisfied. So

g_1(x) = (0, sec x_2, 0, 0)^T,   −ad_f g_1 = (1 + x_1, 0, 0, 0)^T,
g_2(x) = (0, 0, 1, 0)^T,   −ad_f g_2 = (0, 0, 0, 1)^T.

Solving system (3.10) with x(0) = 0, we find that

x_1 = e^{ξ_11} − 1,   (3.2.1)
x_2 = arcsin ξ_12,   (3.2.2)
x_3 = ξ_22,   (3.2.3)
x_4 = ξ_21.   (3.2.4)

Substituting this map into h(x), we find

y(φ) = ( e^{φ_1} − 1, e^{φ_2} − 1 )^T,

where φ_1 = ξ_11 and φ_2 = ξ_21. Now we find α(φ) from the maps x(ξ) and y(φ) by differentiating x and equating to f. From (3.2.1),

x_1' = e^{ξ_11} ξ_11' = (1 + x_1)(ξ_12 + α_11) = sin x_2 (1 + x_1) = (1 + x_1)(ξ_12 + α_11),

and since sin x_2 = ξ_12, α_11 = 0. From (3.2.2),

x_2' = sec x_2 · ξ_12' = sec x_2 · α_12 = sec x_2 (1 + x_1),

so α_12 = 1 + x_1 = e^{φ_1}. From (3.2.3),

x_3' = ξ_22' = α_22 = cos x_4 = cos φ_2.

Finally, from (3.2.4),

x_4' = ξ_21' = ξ_22 + α_21 = x_3 = ξ_22.

Hence α_11 = 0, α_12 = e^{φ_1}, α_21 = 0, and α_22 = cos φ_2.
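The computations in this example can be spot-checked numerically. The sketch below (an illustration added here, not part of the thesis) evaluates the coordinate change x(ξ) at a random point near 0 and verifies that ξ' = (∂x/∂ξ)^{−1} f(x(ξ)) has the dual Brunovsky observer form ξ_11' = ξ_12, ξ_12' = e^{φ_1}, ξ_21' = ξ_22, ξ_22' = cos φ_2, with φ_1 = ξ_11 and φ_2 = ξ_21; the Jacobian is approximated by central differences.

```python
import numpy as np

# State map x(xi) from the example, with coordinates ordered (xi11, xi12, xi21, xi22):
# x1 = e^{xi11} - 1, x2 = arcsin(xi12), x3 = xi22, x4 = xi21.
def x_of_xi(xi):
    xi11, xi12, xi21, xi22 = xi
    return np.array([np.exp(xi11) - 1.0, np.arcsin(xi12), xi22, xi21])

# Vector field f of the example.
def f(x):
    x1, x2, x3, x4 = x
    return np.array([np.sin(x2) * (1 + x1),
                     (1 + x1) / np.cos(x2),   # sec(x2)(1 + x1)
                     np.cos(x4),
                     x3])

def jacobian(fun, p, eps=1e-7):
    """Central-difference Jacobian of fun at p."""
    p = np.asarray(p, dtype=float)
    cols = []
    for i in range(p.size):
        d = np.zeros_like(p); d[i] = eps
        cols.append((fun(p + d) - fun(p - d)) / (2 * eps))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
xi = 0.1 * rng.standard_normal(4)                       # a point near 0 in U
xi_dot = np.linalg.solve(jacobian(x_of_xi, xi), f(x_of_xi(xi)))

phi1, phi2 = xi[0], xi[2]
expected = np.array([xi[1], np.exp(phi1), xi[3], np.cos(phi2)])
print(np.allclose(xi_dot, expected, atol=1e-6))          # True
```

Such a pointwise check does not prove the transformation correct, but it catches sign and indexing mistakes in the hand computation cheaply.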
§3.3 Local estimation of a nonlinear system

Consider the more general nonlinear system (1.7) with inputs and outputs. Clearly system (3.1) is a special case of (1.7) with g_i(x) = 0, i = 1, ..., m. Quite often the system state x is required as feedback for stabilization purposes; but in many systems the states may not be accessible or measurable. In such cases, it may be necessary to estimate the states from knowledge of the past output. This is called an observation problem (or a reconstruction problem by Kalman [12]). We will not discuss in depth the observability problem of a more general nonlinear system; rather, we will restrict attention to those systems that are locally linearizable to a linear observable system.

Just as a nonlinear system (2.1) can be locally stabilized under the assumption that it can be locally linearized to a linear controllable system, we expect that the states of the nonlinear system (3.1) can be locally estimated from knowledge of the outputs if it can be locally linearized to a linear observable system. This is the subject of this and the following section.

Analogous to the controllability concepts of the previous chapter, we will give some definitions on observability, which can be considered as dual to those on controllability. These definitions were given by Hermann and Krener [6]. Just as before, we denote by ψ(t; x_0, t_0, u) the state of the nonlinear system at t that originated from x_0 at t_0 for a control u[t_0, t], and by y(ψ(t; x_0, t_0, u)) the corresponding output at t. Let

I(x_0) = {x : y(ψ(t̄; x, t_0, u)) = y(ψ(t̄; x_0, t_0, u)), for all t, t̄ with t ≤ t̄ ≤ t_0, and all u[t, t_0] ∈ Ω},

i.e., I(x_0) is the set of points that are indistinguishable from x_0 at time t_0 from knowledge of the past outputs under all admissible controls.
If U is a neighborhood of x_0, then we define

I_U(x_0) = {x : x ∈ I(x_0), ψ(t̄; x, t_0, u) ∈ U, for all t ≤ t̄ ≤ t_0, and all u[t, t_0] ∈ Ω}.

Definition: System (1.7) is said to be observable at x_0 if I(x_0) = {x_0}, and it is said to be observable if this is true for all x_0 ∈ R^n.

Definition: System (1.7) is said to be locally observable at x_0 if for every neighborhood U of x_0, I_U(x_0) = {x_0}, and it is said to be locally observable if this is true for all x_0 ∈ R^n.

Definition: System (1.7) is said to be weakly observable at x_0 if there exists a neighborhood U of x_0 such that I(x_0) ∩ U = {x_0}, and it is said to be weakly observable if this is true for all x_0 ∈ R^n.

Definition: System (1.7) is said to be locally weakly observable at x_0 if there exists a neighborhood U of x_0 such that for every neighborhood V of x_0 contained in U, I_V(x_0) = {x_0}, and it is said to be locally weakly observable if this is true for all x_0 ∈ R^n.

It can be seen that the following implications hold: locally observable ⇒ observable ⇒ weakly observable, and locally observable ⇒ locally weakly observable ⇒ weakly observable.

We will also review some linear system theory. Consider system (1.5),

ξ' = Aξ + α(φ),   φ = Cξ,   (1.5)

where α(0) = 0. This system can be considered as a linear system driven by an input. Let γ(t; ξ_0, t_0) be the state of system (1.5) at time t that originated from ξ_0 at t_0, and let φ(γ(t; ξ_0, t_0)) be the corresponding output. The state and output equations of system (1.5) can be written as the following integral equations:

ξ(t) = Φ(t, t_0)ξ_0 + ∫_{t_0}^{t} Φ(t, s)α(φ) ds,   (3.28)

φ(t) = CΦ(t, t_0)ξ_0 + C ∫_{t_0}^{t} Φ(t, s)α(φ) ds.   (3.29)

Clearly, if ξ_0 = 0, then, since α(0) = 0, φ(t) = 0 for all t. Hence, for system (1.5), we can state our definition of unobservability as follows:

Definition: System (1.5) is unobservable at ξ_0 iff ξ_0 is indistinguishable from 0 for all past outputs, iff φ(γ(t; ξ_0, t_0)) = 0 for t ≤ t_0.

Recall from linear system theory the following lemma.

Lemma 3.4 System (1.5) is observable iff the observability rank condition holds.
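The rank condition of lemma 3.4 can be checked directly for a constant pair (C, A). The following numerical sketch (with illustrative matrices, not taken from the thesis) stacks C, CA, ..., CA^{n−1} and tests whether the resulting matrix has rank n:

```python
import numpy as np

def observability_matrix(A, C):
    """Stack C, CA, ..., CA^(n-1); the pair (C, A) is observable iff rank == n."""
    n = A.shape[0]
    blocks = [C @ np.linalg.matrix_power(A, k) for k in range(n)]
    return np.vstack(blocks)

# Illustrative single-output pair in dual Brunovsky observer form for n = 3:
# A shifts the coordinates and C reads the first one.
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
C = np.array([[1., 0., 0.]])

O = observability_matrix(A, C)
print(np.linalg.matrix_rank(O))  # 3: the observability rank condition holds
```

For this pair the stacked matrix is simply the identity, which is why the dual Brunovsky form is the canonical example of an observable pair.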
Lemma 3.5 System (1.5) is locally observable iff the observability rank condition holds.

We will sketch the proof below.

Proof: Consider the integral equations (3.28) and (3.29) for t ≤ t_0. From (3.29), we can write

z(t) = CΦ(t, t_0)ξ_0,   (3.30)

where z(t) = φ(t) − C ∫_{t_0}^{t} Φ(t, s)α(φ) ds. Multiply both sides of (3.30) by Φ^T(t, t_0)C^T and integrate to obtain

∫_{t_0}^{t} Φ^T(s, t_0) C^T z(s) ds = W(t, t_0) ξ_0,

where W(t, t_0) = ∫_{t_0}^{t} Φ^T(s, t_0) C^T C Φ(s, t_0) ds. From a known theorem, W(t, t_0) is nonsingular for t < t_0 iff the observability rank condition holds. In fact (Kalman [12]),

dim range W(t, t_0) = rank ( C ; CA ; ... ; CA^{n−1} )

for all t < t_0. So if the observability rank condition holds, then φ(t) = 0 for t ≤ t_0 would imply z(t) = 0 for t ≤ t_0, hence ξ_0 = 0. So the only point that is indistinguishable from zero is zero, i.e., I(ξ_0) = {ξ_0}. In particular, for any neighborhood U of ξ_0, and for all t ≤ t_0 such that γ(t̄; ξ_0, t_0) ∈ U for t ≤ t̄ ≤ t_0, I_U(ξ_0) = {ξ_0}. Conversely, if system (1.5) is locally observable, then it is observable. By lemma 3.4, the observability rank condition holds. Indeed, since z(t) is known from the output φ, we can determine ξ_0 from the past output in the above equality for any ξ_0 only if W(t, t_0) is nonsingular for t < t_0; otherwise, if W(t, t_0) is singular for some t < t_0, then it is singular for all t < t_0, and hence any point ξ_0 ∈ Ker W(t, t_0) is unobservable.

Corollary 3.6 If the observability rank condition holds, then system (1.5) is locally weakly observable.

Proposition 3.7 If the nonlinear system (3.1) is locally linearizable to the linear observable system (1.5) around x_0, then the nonlinear system is locally weakly observable at x_0.

Proof: Let (x, y) be the linearizing pair, given as

x : V ⊂ R^n → x(V) = U, ξ ↦ x,

and

y : W ⊂ R^p → y(W), φ ↦ y.

Since x is continuous, for every neighborhood U_1 of x_0 in U, x^{−1}(U_1) = V_1 is a neighborhood of ξ_0 such that x(ξ_0) = x_0. But in the ξ coordinates around ξ_0, the system is locally weakly observable.
This implies that for all t_0, there exists a t ≤ t_0 such that if

φ(γ(t̄; ξ, t_0)) = φ(γ(t̄; ξ_0, t_0)), for all t̄ ∈ [t, t_0],

and γ(t̄; ξ, t_0) ∈ V_1 for all t̄ ∈ [t, t_0], then ξ = ξ_0. This implies I_{V_1}(ξ_0) = {ξ_0}. Since x and y are local diffeomorphisms and hence one to one, and x(V_1) = U_1, the result follows by mapping trajectory and output in the (ξ, φ) coordinates into trajectory and output in the (x, y) coordinates, and we have I_{U_1}(x_0) = {x_0}. QED.

§3.4 Local asymptotic estimation problem

In this section we will discuss a model which will give an estimate of the state of a nonlinear system. For a linear system of the form

ξ' = Aξ + Bv,   φ = Cξ,   (3.31)

the system described by

ξ̂' = Fξ̂ + Kφ + Hv   (3.32)

is called an observer or asymptotic state estimator of system (3.31) if for any ξ̂(t_0), ξ̂(t) approaches ξ(t) as t approaches ∞. Linear system theory says that (3.32) is an observer for (3.31) iff the observability rank condition holds, and F = A − KC, H = B. Thus the observer has the form

ξ̂' = (A − KC)ξ̂ + Kφ + Bv,

and the estimation error e = ξ̂ − ξ is described by the dynamical equation

e' = ξ̂' − ξ' = (A − KC)e.

K can be chosen so that the characteristic values of (A − KC) lie in the left half complex plane iff (C, A) is an observable pair. In such a case, e(t) → 0 as t → ∞.

For the nonlinear system (3.1), we can state the local asymptotic estimation problem as follows:

Local asymptotic estimation problem: For the system (3.1), determine a local asymptotic observer and a neighborhood U of 0 (if possible) such that any state in U can be estimated asymptotically; that is, if x̂ is the estimate, then the estimation error e(t) = x̂(t) − x(t) → 0 as t → ∞.

We will consider only those nonlinear systems that can be transformed to a linear observable system around 0. Our observer design below is based on the linear system and is a slight modification of that given by Bestle and Zeitz [1].
We made the modification since we have allowed a change in the output coordinates and our system is multi-output. We observe also that Krener and Isidori [13], and Krener and Respondek [14], mention a similar design.

From the form of the linear observer, it is reasonable to consider a nonlinear observer of the form

x̂' = f(x̂) + K̃(φ − φ̂) = f(x̂) + K̃(Cξ − Cξ̂),   (3.33)

where ξ̂ is the estimate of the state of the linear system, x̂ = x(ξ̂) is the estimate of the actual state of the nonlinear system, and K̃ is an n × p matrix which is to be determined so that x̂(t) → x(t) as t → ∞. From equation (3.33), we have

x̂' = (∂x/∂ξ) ξ̂' = f(x̂) + K̃(Cξ − Cξ̂),

so that

ξ̂' = (∂x/∂ξ)^{−1} ( f(x̂) + K̃(Cξ − Cξ̂) ).

Using (3.4),

ξ̂' = Aξ̂ + α(φ̂) + K̄C(ξ − ξ̂),

where K̄ = (∂x/∂ξ)^{−1} K̃. Let e be the observer error in the new coordinates, i.e., e = ξ − ξ̂. So

e' = Aξ + α(φ) − ( Aξ̂ + α(φ̂) + K̄C(ξ − ξ̂) ) = (A − K̄C)e + α(φ) − α(φ̂).

Using the first order approximation for α(φ) − α(φ̂), we have

α(φ) − α(φ̂) = (∂α/∂φ)(φ − φ̂) + o(φ − φ̂) = (∂α/∂φ) C e + o(e),

where ∂α/∂φ is evaluated at φ̂. Hence,

e' = ( A − (K̄ − ∂α/∂φ) C ) e + o(e) = (A − LC) e + o(e),   (3.34)

where L = K̄ − ∂α/∂φ. The asymptotics of system (3.34) depend on the characteristic values of (A − LC). So if (C, A) is an observable pair, then we can choose L such that these characteristic values lie in the left half plane. In such a case, e(t) → 0 as t → ∞. Now K̃ in equation (3.33) can be determined as follows:

K̃ = (∂x/∂ξ) ( L + ∂α/∂φ ),

where ∂x/∂ξ is evaluated at ξ̂. Since the estimation error of the state of the nonlinear system is

x̂ − x = x(ξ̂) − x(ξ),

and x is continuous, it is clear that x̂ → x as t → ∞. Hence, the complete observer is

x̂' = f(x̂) + (∂x/∂ξ)(L + ∂α/∂φ) C (ξ − ξ̂)
  = f(x̂) + (∂x/∂ξ)(L + ∂α/∂φ) ( y^{−1}(y) − y^{−1}(ŷ) ).

This observer will be called the local asymptotic observer, since the linearization is only valid locally around 0. Note that if the pair (C, A) is in dual Brunovsky canonical form, i.e., (C, A) = (C̄, Ā), then the determination of the matrix L for given characteristic values of (A − LC) is relatively easy.
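That closing remark can be made concrete numerically. For a single-output pair in dual Brunovsky form, with A the shift matrix and C = e_1^T, one has det(sI − (A − LC)) = s^3 + l_1 s^2 + l_2 s + l_3, so the entries of L are exactly the coefficients of the desired characteristic polynomial. The sketch below (with illustrative poles chosen here, not taken from the thesis) verifies this for n = 3:

```python
import numpy as np

# Single-output dual Brunovsky pair for n = 3 (illustrative).
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
C = np.array([[1., 0., 0.]])

# Desired observer poles -1, -2, -3:
# (s+1)(s+2)(s+3) = s^3 + 6 s^2 + 11 s + 6, so L = (6, 11, 6)^T.
L = np.array([[6.], [11.], [6.]])

eigs = np.sort(np.linalg.eigvals(A - L @ C).real)
print(eigs)  # eigenvalues of A - LC are approximately -3, -2, -1
```

With these eigenvalues in the open left half plane, the error equation e' = (A − LC)e + o(e) of (3.34) is locally asymptotically stable, which is the property the observer design needs.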
CHAPTER 4
Linearization of nonlinear control systems with outputs

§4.1 Necessary and sufficient condition

We now consider the problem of finding the necessary and sufficient condition such that the nonlinear system of the form

x' = f(x) + Σ_{i=1}^{m} g_i(x) u_i,   y = h(x)   (4.1)

can be transformed locally to a linear system that is both completely controllable and observable, of the form

ξ' = Aξ + Bv,   φ = Cξ.   (4.2)

In particular, we are interested in the form such that (A, B) is in Brunovsky canonical form and (C, A) is in dual Brunovsky canonical form. The class of transformations consists of the types (C1), (C2), (C3), and (O2) discussed in the introduction. It is clear that we require f(0) = 0 and h(0) = 0 if the state and output coordinate transformations map the origin to the origin. For simplicity, we also assume g_1, ..., g_m are linearly independent and rank ∂h/∂x = p = m on some neighborhood of 0. As a consequence of the previous discussions, we have the following result:

Proposition 4.1 System (4.1) is locally Gco-equivalent to system (4.2) at 0 iff there exists a feedback pair (w, Q) such that the following conditions are satisfied on a neighborhood U of 0.

(i) For i = 1, ..., m, j = 1, ..., m,
(a) ⟨L_f̃^k(dh_i), g̃_j⟩ = 0, for k = 0, ..., κ_j − 2,
(b) the m × m matrix N = (n_ij) is nonsingular, where n_ij = ⟨L_f̃^{κ_j−1}(dh_i), g̃_j⟩,
(c) [ad_f̃^k g̃_j, g̃_i] = 0, 1 ≤ i, j ≤ m, k = 0, ..., κ_j − 1,
(d) ad_f̃^{κ_i} g̃_i = 0, 1 ≤ i ≤ m,
where κ_j = the number of r_i such that r_i ≥ j.

(ii) dim Ê_j = dim Ŝ_j = constant, j = 0, ..., n − 1.
(iii) dim Ê_{n−1} = dim Ŝ_{n−1} = n,

where Ẽ_j = {L_f̃^l(dh_i) : i = 1, ..., m, l = 0, ..., j}, j = 0, ..., n − 1, Ê_j = sp{τ : τ ∈ Ẽ_j}, and S̃_j, Ŝ_j, f̃, g̃_i and r_i are as defined in chapter 2.

Proof: Necessity of the hypotheses is clear if we replace u by w + Qv and carry out the steps used to derive the necessary conditions in prop. 2.1 and prop. 3.1, with f replaced by f̃ and g_i replaced by g̃_i.
The necessity of (ii) is also obvious, since the controllability indices and the observability indices defined by the pairs (A, B) and (C, A) respectively must agree.

For sufficiency, we proceed in a similar way as that used to prove proposition 3.1. Let the map x(ξ) be defined by

∂x/∂ξ_{ik} = (−1)^{κ_i−k} ad_f̃^{κ_i−k} g̃_i, for k = 1, ..., κ_i,   (4.3)

and x(0) = 0. (4.3) is solvable for x(ξ) by conditions (i-c) and (i-d). Let ξ = ξ(x) = x^{−1}(x); then

ξ' = (∂ξ/∂x)( f(x) + G(x)u )
  = (∂ξ/∂x)( f(x) + G(x)(w + Qv) )
  = (∂ξ/∂x)( f̃(x) + G̃(x)v )
  = f̄(ξ) + Σ_{i=1}^{m} ḡ_i(ξ) v_i,

where f̄(ξ) = (∂ξ/∂x) f̃(x(ξ)), ḡ_i(ξ) = (∂ξ/∂x) g̃_i(x(ξ)), and f̃(x) and g̃_i(x) are as defined before. Multiplying both sides of (4.3) by ∂ξ/∂x, we have

(∂ξ/∂x)(−1)^{κ_i−k} ad_f̃^{κ_i−k} g̃_i = ∂/∂ξ_{ik}, for k = 1, ..., κ_i.   (4.4)

Proceeding as in the proof of proposition 3.1, we have

f̄_{ik}(ξ) = ξ_{i,k+1} + α_{ik}(ξ_{11}, ..., ξ_{m1}), for i = 1, ..., m, k = 1, ..., κ_i − 1,
f̄_{iκ_i}(ξ) = α_{iκ_i}(ξ_{11}, ..., ξ_{m1}).   (4.5)

Letting k = κ_i in equation (4.4), we have

ḡ_i(ξ) = ∂/∂ξ_{iκ_i} = e_{iκ_i}, for i = 1, ..., m.

Moreover, for i = 1, ..., m,

∂f̄/∂ξ_{i1} = (∂ξ/∂x)(−1)^{κ_i} ad_f̃^{κ_i} g̃_i(x(ξ)) = 0,

where the last equality follows from condition (i-d). This shows that the α_{ik} can only be constants. But by the remark following the proof of proposition 3.1, α_{ik}(0) = 0. Hence α_{ik} = 0, for i = 1, ..., m, k = 1, ..., κ_i. Clearly, in the ξ coordinates the system has the form

ξ' = Aξ + Bv.

The rest of the proof is the same as that given in proposition 3.1. QED.

Remark. From the proof of the sufficiency of the hypotheses in proposition 4.1, we have also shown directly that hypothesis (B) in proposition 2.1 is sufficient for system (2.1) to be transformable to the system in Brunovsky canonical form. Note also that we have shown that S̃_j = S_j in lemma 2.2 under hypotheses (i-c) and (i-d), so we can replace S̃_j by S_j in hypotheses (ii) and (iii). Moreover, it is clear that S_j is required to be involutive for j = 0, ..., n − 1, and dim Ŝ_{n−1} = n on a neighborhood of 0. However, it is not yet clear that Ẽ_j = E_j, where E_j is defined with f in place of f̃, under the other hypotheses in the proposition.
But under a certain condition, we can show that this is true.

Let

Ẽ = {L_f̃^k(dh_i) : i = 1, ..., m, k = 0, ..., κ_i − 1}

be a set of n linearly independent covector fields. This is possible by hypotheses (ii) and (iii), perhaps after rearranging the entries of the vector valued function h(x). As before, we can form the array (2.18) of covector fields, with the entries given by the L_f̃^k(dh_i).

Lemma 4.2 Under the hypotheses of proposition 4.1 with

n_ij = δ_ij,   (4.6)

we have

⟨L_f̃^k(dh_i), g̃_j⟩ = 0, for k = 0, ..., κ_i − 2, j = 1, ..., m.   (4.7)

Proof: Obviously (4.7) holds for j = i by hypothesis (i-a). For j ≠ i, by hypothesis (4.6),

⟨L_f̃^{κ_j−1}(dh_i), g̃_j⟩ = 0.   (4.8)

Hence

⟨L_f̃^k(dh_i), g̃_j⟩ = 0, for k = 0, ..., κ_j − 1,   (4.9)

and we have

⟨L_f̃^k(dh_i), ad_f̃^l g̃_j⟩ = 0, for k + l ≤ κ_j − 1,   (4.10)

as seen in the proof of lemma 2.7 or lemma 3.3. Now

⟨L_f̃^{κ_j}(dh_i), g̃_j⟩ = L_f̃ ⟨L_f̃^{κ_j−1}(dh_i), g̃_j⟩ − ⟨L_f̃^{κ_j−1}(dh_i), ad_f̃ g̃_j⟩.

The first term on the right hand side of the above equation is zero by (4.10). Hence

⟨L_f̃^{κ_j}(dh_i), g̃_j⟩ = −⟨L_f̃^{κ_j−1}(dh_i), ad_f̃ g̃_j⟩.

Repeatedly applying the Leibniz type formula and (4.10), we find that

⟨L_f̃^{κ_j}(dh_i), g̃_j⟩ = (−1)^{κ_j} ⟨dh_i, ad_f̃^{κ_j} g̃_j⟩ = 0,

where the last equality follows from hypothesis (i-d). Now ⟨L_f̃^k(dh_i), g̃_j⟩ = 0 for k = 0, ..., κ_j. This leads to

⟨L_f̃^k(dh_i), ad_f̃^l g̃_j⟩ = 0, for k + l ≤ κ_j.

Clearly, continuing in a similar manner, we have

⟨L_f̃^k(dh_i), g̃_j⟩ = 0, for all k ≥ 0.   (4.11)

In particular, (4.11) holds for k = 0, ..., κ_i − 2. QED.

Lemma 4.3 Under the hypotheses of proposition 4.1 with condition (4.6),

L_f̃^k(dh_i) = L_f^k(dh_i), i = 1, ..., m, k = 0, ..., κ_i − 1.   (4.12)

Proof: By induction. For i = 1, ..., m, (4.12) holds for k = 0. If κ_i = 1, then no proof is needed. If κ_i ≥ 2, then assume (4.12) holds for some k ≤ κ_i − 2. Now, since f̃ = f + Σ_j g_j w_j,

L_f̃^{k+1}(dh_i) = L_f̃( L_f̃^k(dh_i) )
 = L_f( L_f̃^k(dh_i) ) + Σ_{j=1}^{m} L_{g_j w_j}( L_f̃^k(dh_i) )
 = L_f^{k+1}(dh_i) + Σ_{j=1}^{m} [ ( L_{g_j}( L_f̃^k(dh_i) ) ) w_j + ⟨L_f̃^k(dh_i), g_j⟩ dw_j ].

We have used the fact that L_g(dh) = d L_g(h), where g ∈ V(R^n) and h ∈ C(R^n).
By lemma 4.2, the second and third terms are equal to zero, since sp{g_1, ..., g_m} = sp{g̃_1, ..., g̃_m}. Hence

L_f̃^{k+1}(dh_i) = L_f^{k+1}(dh_i).

This proves (4.12) for k ≤ κ_i − 1. QED.

Proposition 4.4 Under the hypotheses of proposition 4.1 with condition (4.6), Ẽ_j = E_j for j = 0, ..., n − 1 iff

d(Q̄_i w) ∈ sp{τ : τ ∈ Ẽ_{κ_i−1}}, i = 1, ..., m,   (4.13)

where Q̄_i is the i-th row of Q̄ = Q^{−1}.

Proof: Let Ē = {L_f^k(dh_i) : i = 1, ..., m, k = 0, ..., κ_i − 1}. Clearly Ẽ_j ⊂ E_j for j = 0, ..., n − 1, since by (4.12) and the definition of Ẽ,

L_f̃^l(dh_i) ∈ sp{τ : τ ∈ Ẽ ∩ Ẽ_j} = sp{τ : τ ∈ Ē ∩ E_j}

for j ≥ 0, i = 1, ..., m. Since G̃ = GQ, then G = G̃Q̄ where Q̄ = Q^{−1}, i.e.,

g_j = Σ_{l=1}^{m} g̃_l q̄_{lj},

where q̄_{lj} is the lj-th entry of Q̄. So, for i = 1, ..., m,

L_f^{κ_i}(dh_i) = L_f( L_f^{κ_i−1}(dh_i) )
 = L_f̃( L_f̃^{κ_i−1}(dh_i) ) − Σ_{j=1}^{m} L_{g_j w_j}( L_f̃^{κ_i−1}(dh_i) )
 = L_f̃^{κ_i}(dh_i) − Σ_{j=1}^{m} Σ_{l=1}^{m} [ d( ⟨L_f̃^{κ_i−1}(dh_i), g̃_l q̄_{lj}⟩ ) w_j + ⟨L_f̃^{κ_i−1}(dh_i), g̃_l q̄_{lj}⟩ dw_j ]
 = L_f̃^{κ_i}(dh_i) − Σ_{j=1}^{m} [ d(q̄_{ij}) w_j + q̄_{ij} dw_j ]
 = L_f̃^{κ_i}(dh_i) − Σ_{j=1}^{m} d( q̄_{ij} w_j )
 = L_f̃^{κ_i}(dh_i) − d(Q̄_i w).

Hence L_f^{κ_i}(dh_i) ∈ sp{τ : τ ∈ Ẽ_{κ_i−1}} iff d(Q̄_i w) ∈ sp{τ : τ ∈ Ẽ_{κ_i−1}}. Noting the fact that if

L_f^k(dh_i) ∈ sp{ L_f^k(dh_1), ..., L_f^k(dh_{i−1}) } + Ê_{k−1},

then

L_f^{k+1}(dh_i) ∈ sp{ L_f^{k+1}(dh_1), ..., L_f^{k+1}(dh_{i−1}) } + Ê_k.

Therefore

E_j ⊂ Ẽ_j, j = 0, ..., n − 1, iff d(Q̄_i w) ∈ sp{τ : τ ∈ Ẽ_{κ_i−1}}, i = 1, ..., m,

and the result follows. QED.

Remark. Note that condition (4.6) is the one that requires that N = I; that is, no change in output coordinates other than a rearrangement of the entries of h(x). In summary, if the class of transformations consists only of (C1), (C2), (C3), and a permutation of the output coordinates, then it is necessary that n_ij = δ_ij in hypothesis (i-b) of proposition 4.1. Since condition (4.13) is not known to be necessary, all we can say about the dimensions of Ê_j (now built from f) and Ŝ_j (built from f and the g_i) is that, on a neighborhood of 0,

(ii') dim Ê_j ≥ dim Ŝ_j, j = 0, ..., n − 1,
(iii') dim Ê_{n−1} = dim Ŝ_{n−1} = n.

Remark. We see that conditions (ii'), (iii'), and the involutivity requirement of the distributions S_j for j = 0, ..., n − 1 can be checked from the known data of the system (4.1).
Also, the conditions that dim Ŝ_{n−1} = n and dim Ê_{n−1} = n are just the nonlinear analogues of the controllability rank condition and the observability rank condition respectively, discussed earlier for the linear system.

§4.2 Example

Consider system (4.1) with f, g_1, g_2 as in the example of chapter 2; that is,

f(x) = ( √(1 − x_1²)(x_2 − ln(x_3 + 1)), sin x_1, 0 )^T,
g_1(x) = ( 0, cos x_1, 0 )^T,   g_2(x) = ( 0, 0, (x_3 + 1)e^{x_2} )^T,

and

h(x) = ( x_1, ln(1 + x_3) )^T.

Recall we found that

w(x) = ( −tan x_1, 0 )^T,   Q(x) = [ sec x_1, 0 ; 0, e^{−x_2}/(x_3 + 1) ],

and

f̃(x) = ( √(1 − x_1²)(x_2 − ln(x_3 + 1)), 0, 0 )^T,   g̃_1(x) = (0, 1, 0)^T,   g̃_2(x) = (0, 0, 1)^T.

So

⟨dh, g̃_1⟩ = ( 0, 0 )^T,   ⟨dh, g̃_2⟩ = ( 0, 1/(1 + x_3) )^T,   ⟨L_f̃(dh), g̃_1⟩ = ( √(1 − x_1²), 0 )^T.

We see that the hypotheses in prop. 4.1 are satisfied on U with κ_1 = 2 and κ_2 = 1, where U = {x : |x_1| < 1, x_3 > −1}. It can be checked that the state and output coordinate changes are given by

§4.3 Local asymptotic estimation and control

Nonlinear systems of the form (4.1) that can be transformed to a linear system (4.2) which is completely controllable and observable may be rare, but from the discussions of the previous chapters, for such systems we expect that the state of the nonlinear system can be locally estimated and the system locally stabilized. Indeed, from the result of section 2.4, the local asymptotic control law is

u = w(x̂) − Q(x̂) P x^{−1}(x̂),   (4.14)

where P is chosen so that all the characteristic values of (A − BP) have negative real parts, and the estimated state x̂ is used for the synthesis of the control, since the state of the nonlinear system is unavailable by assumption. And from the result of section 3.4 and the form of the observer for the linear system with controls, it is clear that the nonlinear observer for the system (4.1) should have the form

x̂' = f(x̂) + G(x̂)u + K̃(φ − φ̂),   (4.15)

where u = w(x̂) + Q(x̂)v, x̂ = x(ξ̂), and (x, w, Q) is the linearizing triple. Hence

x̂' = f̃(x̂) + G̃(x̂)v + K̃(φ − φ̂),   (4.16)

where f̃(x) = f(x) + G(x)w(x) and G̃(x) = G(x)Q(x).
Furthermore, by the derivation in section 3.4 and the proof of proposition 4.1, we have

x̂' = (∂x/∂ξ) ξ̂' = f̃(x̂) + G̃(x̂)v + K̃(Cξ − Cξ̂),

so that

ξ̂' = (∂x/∂ξ)^{−1} ( f̃(x̂) + G̃(x̂)v + K̃(Cξ − Cξ̂) ) = Aξ̂ + Bv + K̄C(ξ − ξ̂),

where K̄ = (∂x/∂ξ)^{−1} K̃. Hence the observer error in the new coordinates, e = ξ − ξ̂, satisfies

e' = Aξ + Bv − ( Aξ̂ + Bv + K̄C(ξ − ξ̂) ) = (A − K̄C)e.

Since the pair (C, A) is observable, we can choose K̄ such that the characteristic polynomial of (A − K̄C) is as desired. In particular, if all the characteristic values of (A − K̄C) have negative real parts, then e(t) → 0 as t → ∞. In such a case, we will call (4.15) or (4.16) the local asymptotic observer for the system (4.1). For the system (4.1), we will define the local regulator as the system comprising the local asymptotic control law (4.14) and the local asymptotic observer (4.15). From a result called the separation principle in linear system theory (Kailath [11]), we know that a regulator for a linear system is stable if the linear system is both completely controllable and observable. Since the state and output of a nonlinear system can be mapped homeomorphically onto those of a linear system if the nonlinear system is linearizable, we readily see that the above controller and observer design serves the purpose of locally stabilizing the nonlinear system. The nonlinear system (4.1) and the local regulator are depicted in figure 1.

Fig. 1. Local regulator

Bibliography

[1] Bestle, D. and Zeitz, M., "Canonical form observer design for non-linear time-variable systems," Int. J. Control, vol. 38, no. 2, 1983, pp. 419-431.
[2] Brockett, R.W., "Feedback invariants for nonlinear systems," IFAC Congress, Helsinki, Finland, 1978, pp. 1115-1120.
[3] Brockett, R.W., "Nonlinear systems and differential geometry," Proceedings of the IEEE, vol. 64, no. 1, Jan. 1976, pp. 61-72.
[4] Brunovsky, P., "A classification of linear controllable systems," Kybernetika, vol. 6, no. 3, 1970, pp. 173-188.
[5] Hermann, R., "On the accessibility problem in control theory," Nonlinear Differential Equations and Nonlinear Mechanics, J.P. LaSalle and S. Lefschetz, eds., Academic Press, New York, 1963, pp. 325-332.
[6] Hermann, R. and Krener, A., "Nonlinear controllability and observability," IEEE Trans. on Automatic Control, vol. AC-22, no. 5, Oct. 1977, pp. 728-740.
[7] Hunt, L.R., Su, R. and Meyer, G., "Design for multi-input nonlinear systems," Differential Geometric Control Theory, R.W. Brockett et al., eds., 1982, pp. 268-298.
[8] Hunt, L.R., Su, R. and Meyer, G., "Global transformations of nonlinear systems," IEEE Trans. on Automatic Control, vol. AC-28, no. 1, Jan. 1983, pp. 24-30.
[9] Isidori, A., Nonlinear Control Systems: An Introduction, Springer-Verlag, Berlin, Heidelberg, 1985.
[10] Jakubczyk, B. and Respondek, W., "On linearization of control systems," Bull. Acad. Polon. Sci., Ser. Sci. Math. Astronom. Phys., 28, 1980, pp. 517-522.
[11] Kailath, T., Linear Systems, Prentice-Hall, Englewood Cliffs, N.J., 1980.
[12] Kalman, R.E., "Elementary control theory from the modern point of view," Topics in Mathematical System Theory, McGraw-Hill, 1969.
[13] Krener, A. and Isidori, A., "Linearization by output injection and nonlinear observers," Systems & Control Letters, 3, 1983, pp. 47-52.
[14] Krener, A. and Respondek, W., "Nonlinear observers with linearizable error dynamics," SIAM J. Control and Optimization, vol. 23, no. 2, March 1985, pp. 197-216.
[15] Kwakernaak, H. and Sivan, R., Linear Optimal Control Systems, John Wiley & Sons, 1972.
[16] Spivak, M., A Comprehensive Introduction to Differential Geometry, vol. I, Publish or Perish, Wilmington, 1970.