A HAMILTON - JACOBI APPROACH TO THE DIFFERENTIAL INCLUSION PROBLEM

by

Daniel C. Offin

B.Sc., University of British Columbia, 1975

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES, Department of Mathematics

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA

April, 1979

© Daniel C. Offin, 1979

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my Department or by his representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Mathematics
The University of British Columbia
2075 Wesbrook Place
Vancouver, Canada V6T 1W5

ABSTRACT

In the classical calculus of variations, the Hamilton - Jacobi theory leads, under general hypotheses, to sufficient conditions for a local minimum. The optimal control problem as well has its own Hamilton - Jacobi approach to sufficient conditions for optimality. In this thesis we extend this approach to the differential inclusion problem: a general, nonconvex, nondifferentiable control problem.
In particular, the familiar Hamilton - Jacobi equation is generalized and a corresponding necessary condition (Chapter 2) is obtained. The sufficiency condition (Chapter 3) is derived, and an example is presented where it is shown how this result may lead to considerable simplification. Finally, we show (Chapter 4) how the classical theory of canonical transformations may be brought to bear on certain Hamiltonian inclusions associated with the differential inclusion problem. Our main tool will be the generalized gradient, a set valued derivative for Lipschitz functions which reduces to the subdifferential of convex analysis in the convex case and to the familiar derivative in the C¹ case.

TABLE OF CONTENTS

Chapter 0 - Introduction  page 1
Chapter 1 - Preliminaries
  (A) Generalized Gradients  page 6
  (B) The Differential Inclusion Problem  page 9
Chapter 2 - Necessary Conditions  page 13
Chapter 3 - Sufficient Conditions  page 35
Chapter 4 - Canonical Transformations  page 47
Bibliography  page 62

Acknowledgements

I would like to express my appreciation to the University of British Columbia for their generous financial assistance and the use of their resources during my stay at UBC. Special thanks are due to Frank Clarke for proposing the topic and providing my initial impetus and a sense of direction. His criticisms and comments, made throughout the duration of the work, provided a constant source of stimulation. Many thanks to Colin Clark and Frank Clarke for reading the final manuscript. Finally, I would like to thank Patricia Bott, who carefully typed the manuscript and who generously added her enthusiasm to my depleted reserves.

0.
Introduction

The basic problem in the calculus of variations consists of minimizing the functional

  ∫₀¹ L(x(t), ẋ(t)) dt

over the functions x(·) in a specified class which satisfy the endpoint constraints x(0) = x₀ and x(1) = x₁. Under certain conditions it is possible to associate with the function L(x,v) the Hamiltonian H(·,·) defined by

  H(x,p) = <p,v> - L(x,v) ,

where v = v(x,p) is obtained by solving for v in the equation p = L_v(x,v). A well known necessary condition in order that the arc x(·) furnish a minimum for the problem is that the arc x(·), together with the arc p(·) defined by p(t) = L_v(x(t), ẋ(t)), satisfy Hamilton's canonical equations:

(0.1)  ẋ(t) = H_p(x(t), p(t)) ,  -ṗ(t) = H_x(x(t), p(t)) .

In the nineteenth century the central importance to this problem of a related first-order partial differential equation, the Hamilton - Jacobi equation defined below in (0.2), became apparent. In particular, the fundamental sufficiency theorem of the calculus of variations depends upon the existence of a solution W(t,x), in an appropriate domain, of the equation

(0.2)  ∂W(t,x)/∂t + H(x, ∂W(t,x)/∂x) = 0

(see H. Rund [1] for details). Furthermore, a complete solution of (0.2), depending on n independent parameters, W(t,x,X₁,...,X_n), may be used to generate a transformation of coordinates (x,p) → (X,P) for which the transformed version of equations (0.1) is

  Ẋ(t) = 0 ,  Ṗ(t) = 0 .

This transformation is a special case of a "canonical transformation"; see Courant and Hilbert [1] for details.
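As a concrete illustration (our own, not from the thesis), the relation H(x,p) = <p,v> - L(x,v), with v solved from p = L_v(x,v), can be checked numerically for the quadratic Lagrangian L(x,v) = v²/2, for which v = p and hence H(x,p) = p²/2. A minimal sketch using only the Python standard library:

```python
# Recover H(x, p) = max_v { p*v - L(x, v) } numerically for L(x, v) = v**2 / 2.
# Solving p = L_v(x, v) gives v = p, so the exact Hamiltonian is H(x, p) = p**2 / 2.

def L(x, v):
    return 0.5 * v * v

def H_numeric(x, p, vmax=10.0, n=20001):
    # brute-force the supremum of p*v - L(x, v) over a fine grid of velocities
    best = float("-inf")
    for i in range(n):
        v = -vmax + 2.0 * vmax * i / (n - 1)
        best = max(best, p * v - L(x, v))
    return best

for p in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert abs(H_numeric(0.0, p) - 0.5 * p * p) < 1e-3
```

The brute-force maximization stands in for the Legendre transform; for nonconvex L the same code computes the convexified Hamiltonian.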
The standard problem in optimal control theory consists of minimizing the functional

  ∫₀¹ L(x(t), u(t)) dt

subject to the trajectory x(·) satisfying

(0.3)  ẋ(t) = f(x(t), u(t))
(0.4)  u(t) ∈ U  almost everywhere,

where U is a compact set in R^m and x(0) = x₀, x(1) = x₁. Naturally associated with this problem is the value function, defined for (t,y) ∈ [0,1] × R^n by

  S(t,y) = infimum ∫ₜ¹ L(x(s), u(s)) ds ,

where the infimum is taken over the class of trajectories x: [t,1] → R^n which satisfy (0.3), (0.4) and the endpoint constraints x(t) = y, x(1) = x₁. This function, when differentiable, can be shown to be a solution of the following partial differential equation (see W.H. Fleming and R.W. Rishel [1]),

(0.5)  max { S_t(s,x(s)) + <S_x(s,x(s)), f(x(s),u)> - L(x(s),u) : u ∈ U } = 0 ,

which is, as we shall see, the natural analogue of the Hamilton - Jacobi equation for this problem. Related to equation (0.5) is a sufficient condition for optimality. Suppose there exists a solution S(t,x) of (0.5) such that for some control u*(t) with corresponding trajectory x*(t),

(0.6)  S_t(t,x*(t)) + <S_x(t,x*(t)), f(x*(t),u*(t))> - L(x*(t),u*(t)) = 0 ;

then u*(·) is an optimal control with x*(·) the optimal trajectory. To demonstrate this we rewrite equation (0.5) as follows:

  ∫₀¹ L(x(t),u(t)) dt ≥ ∫₀¹ { S_t(t,x(t)) + <S_x(t,x(t)), f(x(t),u(t))> } dt = S(1,x₁) - S(0,x₀) .

We need only notice that equation (0.6) implies that equality holds when x = x* and u = u*.
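For a toy instance of (0.5)-(0.6) (our own illustration, not from the thesis), take f(x,u) = u, L(x,u) = u², U = [-1,1], and target x(1) = a. One may check by hand that S(t,y) = -(a-y)²/(1-t) satisfies (0.5) wherever |a-y| ≤ 1-t, and that the constant control u* ≡ a from x₀ = 0 attains equality in (0.6), with total cost a² = S(1,a) - S(0,0). A sketch of the numerical check:

```python
# Verify (0.5) for S(t, y) = -(a - y)**2 / (1 - t), with f(x, u) = u,
# L(x, u) = u**2 and U = [-1, 1]: max over u of S_t + S_y*u - u**2 should vanish.
a = 0.5

def S_t(t, y):
    return -(a - y) ** 2 / (1.0 - t) ** 2

def S_y(t, y):
    return 2.0 * (a - y) / (1.0 - t)

def hj_residual(t, y, n=2001):
    # brute-force maximum over the compact control set U = [-1, 1]
    return max(S_t(t, y) + S_y(t, y) * u - u * u
               for u in (-1.0 + 2.0 * i / (n - 1) for i in range(n)))

for (t, y) in [(0.0, 0.0), (0.3, 0.2), (0.5, 0.4)]:
    assert abs(y - a) <= 1.0 - t            # region where the formula is valid
    assert abs(hj_residual(t, y)) < 1e-3

# the optimal control u* = a from x0 = 0 has cost a**2 = S(1, a) - S(0, 0)
assert abs(a * a - (0.0 - S_value)) < 1e-12 if False else True
cost, S00 = a * a, -(a ** 2)                 # S(1, a) = 0, S(0, 0) = -a**2
assert abs(cost - (0.0 - S00)) < 1e-12
```

Note the sign convention: this verification function S is the negative of the value function V used later in the thesis, matching the inequality ∫ L dt ≥ S(1,x₁) - S(0,x₀).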
An equivalent reformulation of the optimal control problem is as follows: minimize φ(x(1)) subject to

(0.7)  ẋ ∈ E(x)
(0.8)  x(0) = x₀ ,

where the multifunction E(·) is given by E(x) = f(x,U). When we consider a larger class of multifunctions, the differential inclusion problem, as the latter formulation is called, is more general than the optimal control problem. This can be seen for example from the case when the constraint set U is itself a multifunction dependent upon the state: U = U(x). In this case we simply define

  E(x) = f(x,U(x)) .

Following F.H. Clarke [1], we introduce the Hamiltonian H(·,·) for a multifunction E(·):

(0.9)  H(x,p) = max { <p,v> : v ∈ E(x) }

in the case in which we are minimizing a function φ(x(1)) of the endpoint. If we are minimizing an integral functional ∫₀¹ L(x,ẋ) dt, then we define

  H(x,p) = max { <p,v> - L(x,v) : v ∈ E(x) } .

For the optimal control problem, for example, E(x) = f(x,U) and

  H(x,p) = max { <p,f(x,u)> - L(x,u) : u ∈ U } ,

so that equation (0.5) may be written

  S_t(t,x) + max { <S_x(t,x), f(x,u)> - L(x,u) : u ∈ U } = 0 ,

or

  S_t(t,x) + H(x, S_x(t,x)) = 0 .

This formalism suggests that a Hamilton - Jacobi theory might be developed for the differential inclusion problem. However, the difficulties of applying Hamilton - Jacobi techniques in optimal control theory are compounded by the problem of characterizing the differential properties of the value function. W.H. Fleming and R.W. Rishel [1] show that under suitable smoothness assumptions, the value function is locally Lipschitz. In fact, as we shall see, for the more general differential inclusion problem there are examples to show that the value function is not necessarily differentiable.
To circumvent this difficulty we will develop a Hamilton - Jacobi theory for the differential inclusion problem using a calculus for nonsmooth functions introduced and developed by F.H. Clarke [3]. In so doing, the Hamilton - Jacobi equation for the differential inclusion problem will be stated in terms of "generalized gradients", set valued derivatives which reduce to the familiar derivative in the C¹ case and to the subdifferential of convex analysis in the convex case. In Chapter 2 we will derive the Hamilton - Jacobi equation from the properties of the value function and subsequently obtain a necessary condition for optimality. In Chapter 3 we develop the characteristic sufficiency condition of Hamilton - Jacobi theory, and finally in Chapter 4 we show that the classical theory of canonical transformations can be brought to bear upon the "Hamiltonian inclusions" that correspond to differential inclusion problems.

1. Preliminaries

We begin by recalling some elements of the calculus of generalized gradients and subsequently stating pertinent information and definitions concerning the differential inclusion problem.

(A) Generalized Gradients.

Throughout this paper we will be dealing with real valued functions f: R^n → R. The function f is said to be locally Lipschitz on R^n if given an arbitrary compact set K of R^n, there exists a constant k < ∞, perhaps depending on K, such that

  |f(x) - f(y)| ≤ k|y - x|

whenever x, y belong to K. A theorem of Rademacher (see H. Federer [1]) asserts that a locally Lipschitz function is differentiable everywhere except possibly on a set of n-dimensional Lebesgue measure 0. The convex hull of a set E contained in R^n is denoted by coE.
Definition: If f is locally Lipschitz on R^n, the generalized gradient of f at x, denoted ∂f(x), is given by

  co { lim ∇f(y_i) : ∇f(y_i) exists, lim ∇f(y_i) exists, and y_i → x as i → ∞ } .

The generalized gradient is an example of a set valued mapping. A mapping from R^n to the subsets of R^n, E: R^n → 2^{R^n}, is called a multifunction.

Definition: The multifunction E(·) is said to be upper semicontinuous provided the following holds: given any sequence of points x_n converging to x and a sequence of elements v_n of E(x_n) converging to a point v in R^n, then v belongs to E(x).

Some properties of the generalized gradient as a set valued mapping include:

(1.1)  ∂f(x) ≠ ∅ ;
(1.2)  ∂f(x) is convex and compact;
(1.3)  ∂f(·) is an upper semicontinuous multifunction.

We shall now state a proposition giving the relation between generalized gradients and the usual gradient. The proof is supplied in F.H. Clarke [2].

Proposition 1. The following are equivalent:

(1.4)  ∂f(x) = {ζ}, a singleton;
(1.5)  ∇f(x) exists, ∇f(x) = ζ, and ∇f is continuous at x relative to the set upon which it exists.

Example: Consider the locally Lipschitz function f: R → R defined by

  f(x) = x²   if x ≤ 1 ,
       = 1/x  if x > 1 .

We attempt to evaluate ∂f(1). We consider an arbitrary sequence x_n → 1 for which f'(x_n) exists for all n, and notice that the sequence f'(x_n) has at most two convergent subsequences, with limits 2 and -1 respectively.
Taking the convex hull, we find that ∂f(1) = [-1,2].

Definition: If f is locally Lipschitz on R^n, the generalized directional derivative of f at x in the direction v, denoted f°(x;v), is given by

  f°(x;v) = lim sup_{y→x; λ↓0} [ f(y + λv) - f(y) ] / λ .

The usual one-sided directional derivative of a function f: R^n → R, denoted f'(x;v), is given by

  f'(x;v) = lim_{λ↓0} [ f(x + λv) - f(x) ] / λ

whenever this limit exists. A locally Lipschitz function f: R^n → R is called regular at x in case f°(x;v) = f'(x;v) for every v in R^n.

Example: The function f(x) considered in the previous example is not regular. However, the function g(x) = -f(x) does have the property of regularity. To show this we observe that for y near 1 and λ > 0,

  [ g(y + λ) - g(y) ] / λ ≤ [ g(1 + λ) - g(1) ] / λ .

Therefore,

  g°(1;1) = lim sup_{y→1; λ↓0} [ g(y+λ) - g(y) ] / λ = lim_{λ↓0} [ g(1+λ) - g(1) ] / λ = g'(1;1) .

An identical argument shows that g°(1;-1) = g'(1;-1), and we may deduce that g is regular at 1. For other values of x, g is continuously differentiable and hence regular.

Proposition 2. f°(x;v) = maximum { <ζ,v> : ζ ∈ ∂f(x) } .

The proof may be found in F.H. Clarke [2]. From Proposition 2 we may deduce that

(1.6)  ∂f(x) = { ζ : <ζ,v> ≤ f°(x;v) for every v in R^n } .

The chain rule for composition of mappings, as well as the mean value property, have been adapted for generalized gradients.

Proposition 3. Let g be a continuously differentiable mapping g: R^n → R^m. Let h: R^m → R be Lipschitz. Then if f: R^n → R is given by f = h ∘ g, we have

  ∂f(x) ⊂ ∂h(g(x)) ∘ Dg(x) .

The proof may be found in F.H. Clarke [3].

Proposition 4. If x and y are distinct points of R^n, then there is a point z in the open line segment between x and y such that

  f(y) - f(x) ∈ <y - x, ∂f(z)> .

The proof may be found in G. Lebourg [1].
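The two examples above can be checked numerically (our own sketch, standard-library Python only): sampling derivatives of f near x = 1 recovers the endpoints -1 and 2 of ∂f(1), and a brute-force lim sup recovers f°(1;1) = 2 = max{ ζ·1 : ζ ∈ [-1,2] } in agreement with Proposition 2, while the one-sided derivative f'(1;1) = -1 shows f is not regular at 1.

```python
# f(x) = x**2 for x <= 1, 1/x for x > 1; Lipschitz near 1 but not C^1 there.

def f(x):
    return x * x if x <= 1.0 else 1.0 / x

def df(x):
    # classical derivative, defined everywhere except x = 1
    return 2.0 * x if x < 1.0 else -1.0 / (x * x)

# limits of derivatives along sequences x_n -> 1 from either side
left  = [df(1.0 - 10.0 ** (-k)) for k in range(3, 8)]   # tends to 2
right = [df(1.0 + 10.0 ** (-k)) for k in range(3, 8)]   # tends to -1
assert abs(left[-1] - 2.0) < 1e-4 and abs(right[-1] + 1.0) < 1e-4
# so the convex hull of the limit points is [-1, 2], the generalized gradient at 1

# generalized directional derivative f°(1; 1): sup of difference quotients
# (f(y + h) - f(y)) / h over y near 1 and small h > 0
quotients = [(f(y + h) - f(y)) / h
             for y in [1.0 - i * 1e-4 for i in range(100)]
             for h in (1e-6, 1e-5, 1e-4)]
assert abs(max(quotients) - 2.0) < 1e-2      # f°(1; 1) = 2

# one-sided derivative f'(1; 1) = lim (f(1 + h) - f(1)) / h = -1, so f is not regular
assert abs((f(1.0 + 1e-7) - f(1.0)) / 1e-7 + 1.0) < 1e-4
```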
(B) The Differential Inclusion Problem.

When considering the problem of existence of solutions for the optimal control problem, A. Filippov found it convenient to study the set valued mapping

  x → f(x,U) = { f(x,u) : u ∈ U }

and was able to show that if the trajectory x: [0,1] → R^n satisfied the relation (0.7), ẋ ∈ E(x), with E(x) defined as E(x) = f(x,U), then x(·) also satisfied the relations (0.3) and (0.4) for an appropriate u: ẋ(t) = f(x(t),u(t)) and u(t) ∈ U respectively. As mentioned in the introduction, we will undertake a study of the more general differential inclusion problem.

In general, we will consider a multifunction E(t,x), (t,x) ∈ [0,1] × R^n, possessing the following properties:

(1.7)  for each t in [0,1] and x in R^n, E(t,x) is a nonempty compact set.

(1.8)  for each x in R^n, the multifunction t → E(t,x) is measurable; that is,

  { t ∈ [0,1] : E(t,x) ∩ S ≠ ∅ }

is Lebesgue measurable for every closed set S.

(1.9)  E(t,x) is Lipschitz in x; that is, there exists a k ∈ L¹(0,1) with the following property: given any two points x₁, x₂ in R^n and v₁ in E(t,x₁), there is some v₂ in E(t,x₂) such that

  |v₁ - v₂| ≤ k(t)|x₁ - x₂| .

In the event that E is independent of t, the Lipschitz function k(t) will be replaced by a number k.

(1.10)  E(t,x) is integrably bounded; that is, there exists b ∈ L¹(0,1) such that for all t in [0,1], x in R^n and v in E(t,x),

  |v| ≤ b(t) .

Definition: An absolutely continuous mapping x: [0,1] → R^n is said to be a trajectory for the multifunction E(t,x) provided that

  ẋ(t) ∈ E(t,x(t))  for almost every t in [0,1] .
The differential inclusion problem can then be formulated: given a locally Lipschitz function φ: R^n → R and a multifunction satisfying properties (1.7) through (1.10), we attempt to minimize φ(x(1)) subject to

(1.11)  ẋ(t) ∈ E(t,x(t))  for almost every t in [0,1] ,
(1.12)  x(0) = x₀ .

Definition: A relaxed trajectory for the multifunction E(t,x) is an absolutely continuous mapping x: [0,1] → R^n which satisfies

  ẋ(t) ∈ coE(t,x(t))  a.e.

The relaxed differential inclusion problem consists of minimizing the functional φ(x(1)) subject to ẋ ∈ coE(t,x) and x(0) = x₀. If a multifunction is compact, convex, Lipschitz and integrably bounded, then its trajectories are sequentially compact in the topology of uniform convergence. The following proposition from F.H. Clarke [4] allows us to conclude that

(1.13)  inf { φ(x(1)) : ẋ ∈ E(t,x), x(0) = x₀ } = inf { φ(x(1)) : ẋ ∈ coE(t,x), x(0) = x₀ } .

Proposition 5. Let E(t,x) be measurable in t, Lipschitz in x and integrably bounded. Then for every relaxed trajectory y(·), given any positive δ there is a trajectory x(·) for E with x(0) = y(0) and

  ||x - y||_∞ < δ .

The proof is supplied in F.H. Clarke [4].

Definition: Given t ∈ [0,1] and y ∈ R^n, the value function for the differential inclusion problem, denoted V(t,y), is given by

  V(t,y) = inf φ(x(1))

where the infimum is taken over all trajectories of E which satisfy

(1.14)  x(t) = y .

Given that the multifunction E is integrably bounded, we may deduce that for any trajectory x(·) of E,

  |x(1) - x(0)| ≤ ||b||₁ ,

so that in particular V(t,·) is bounded from below on compact sets. The existence of a solution z(·) for the relaxed problem is ensured, and equation (1.13) tells us φ(z(1)) = V(t,y).
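Proposition 5 can be seen concretely in a simple example of our own (not from the thesis): take E ≡ {-1, 1}, so coE = [-1, 1]. The relaxed trajectory y ≡ 0 (velocity 0 ∈ coE) is not a trajectory for E, but a "chattering" sawtooth whose slope alternates between +1 and -1 on intervals of length δ stays uniformly within δ of it. A standard-library Python sketch:

```python
# Approximate the relaxed trajectory y(t) = 0 (slope 0 in coE = [-1, 1])
# by a genuine trajectory of E = {-1, 1}: alternate slopes +1 / -1 every delta.

def sawtooth(delta, n_steps=10000):
    dt = 1.0 / n_steps
    per = max(1, round(delta / dt))     # integration steps per constant-slope segment
    xs, x = [0.0], 0.0
    for i in range(n_steps):
        slope = 1.0 if (i // per) % 2 == 0 else -1.0   # slope always lies in E
        x += slope * dt
        xs.append(x)
    return xs

for delta in (0.1, 0.01, 0.001):
    # the chattering trajectory stays within delta of the relaxed one, uniformly
    assert max(abs(x) for x in sawtooth(delta)) <= delta + 1e-9
```

Letting δ → 0 realizes the infimum over relaxed trajectories as a limit of genuine ones, which is the content of (1.13).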
Definition: The Hamiltonian function, denoted H(t,x,p), is defined by

  H(t,x,p) = max { <p,v> : v ∈ E(t,x) } .

The assumptions (1.7) through (1.10) imply that H is locally Lipschitz in the (x,p) argument.

2. Necessary Conditions.

In Chapter 1 we introduced the value function,

  V(t,y) = inf { φ(x(1)) : ẋ ∈ E(x), x(t) = y } .

Our immediate goal will be to demonstrate that under suitable restrictions on the choice of multifunction E, the value function will be locally Lipschitz on [0,1] × R^n. Subsequently we will derive the generalized Hamilton - Jacobi equation and present a necessary condition associated with this equation. We shall assume that the multifunction E(t,x) satisfies conditions (1.7) through (1.10), and

(2.1)  E(t,x) is Lipschitz in t; that is, there exists a number k' < ∞ with the following property: given any two points t₁ and t₂ in [0,1] and v₁ in E(t₁,x), there is some v₂ in E(t₂,x) such that

  |v₁ - v₂| ≤ k'|t₁ - t₂| .

We will need the following definitions (see F.H. Clarke [4]):

Definition: The function ρ: [0,1] × R^n × R^n → [0,∞) is defined by

  ρ(t,x,v) = d[v, E(t,x)] .

Definition: If x(·) is an absolutely continuous arc defined on the interval [t,1] with t ∈ [0,1), we define

  d_t(x) = ∫ₜ¹ ρ(s, x(s), ẋ(s)) ds .

The following proposition is taken from the same source.

Proposition 1: There exists a positive number K with the following property: given any arc x(·) defined on [0,1], there exists a trajectory y(·) for E defined on [0,1] such that x(0) = y(0) and

  ||x - y||_∞ ≤ K d₀(x) .

Corollary 1: Proposition 1 remains valid if we are considering arcs defined on a smaller interval [t,1], where t belongs to [0,1).
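To make the deviation measure d_t concrete, here is a small sketch of our own (hypothetical data, standard-library Python): for the constant multifunction E ≡ {-1, 1}, the arc x(s) = s/2 has ẋ ≡ 1/2, so ρ(s, x(s), ẋ(s)) = d[1/2, {-1,1}] = 1/2 and d₀(x) = 1/2, quantifying how far the arc is from being a trajectory.

```python
# rho(t, x, v) = distance from v to E(t, x); here E(t, x) = {-1, 1} for all (t, x)

def rho(t, x, v):
    return min(abs(v - w) for w in (-1.0, 1.0))

def d0(xdot, n=1000):
    # d_0(x) = integral over [0, 1] of rho(s, x(s), x'(s)) ds, midpoint rule;
    # for this E only the derivative xdot of the arc matters
    return sum(rho((i + 0.5) / n, None, xdot((i + 0.5) / n)) for i in range(n)) / n

assert abs(d0(lambda s: 0.5) - 0.5) < 1e-9   # arc x(s) = s/2: d_0 = 1/2
assert abs(d0(lambda s: 1.0)) < 1e-9         # genuine trajectory x(s) = s: d_0 = 0
```

Proposition 1 then guarantees a genuine trajectory within K·d₀(x) of the arc in the supremum norm.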
Proof: Define the change of parameter θ_t: [t,1] → [0,1] by

  θ_t(s) = (s - t)/(1 - t) .

Given an arc x(·) defined on [t,1], we define the arc u(·) on [0,1] by

  u(ρ) = x(θ_t⁻¹(ρ)) = x((1-t)ρ + t) .

Set

  Ẽ(ρ,u) = (1-t) E((1-t)ρ + t, u)  and  ρ̃(ρ,u,v) = d[v, Ẽ(ρ,u)]

for (ρ,u,v) ∈ [0,1] × R^n × R^n. It follows that Ẽ has properties identical to E. Let v(·) be the trajectory for Ẽ, whose existence is asserted in Proposition 1, which satisfies

  ||v - u||_[0,1] ≤ K ∫₀¹ ρ̃(ρ, u(ρ), u̇(ρ)) dρ .

Define the arc y(·) over [t,1] by

  y(s) = v(θ_t(s)) .

Notice that

  ẏ(s) = (1/(1-t)) v̇(θ_t(s)) ∈ (1/(1-t)) Ẽ(θ_t(s), v(θ_t(s))) = E(s, y(s))

and

  ρ̃(ρ, u(ρ), u̇(ρ)) = d[(1-t)ẋ(s), (1-t)E(s,x(s))] = (1-t) d[ẋ(s), E(s,x(s))] ,

where s = (1-t)ρ + t. Therefore we may deduce that

  K ∫₀¹ ρ̃(ρ, u(ρ), u̇(ρ)) dρ = K ∫ₜ¹ ρ(s, x(s), ẋ(s)) ds

and

  ||y - x||_[t,1] = ||v - u||_[0,1] ≤ K ∫ₜ¹ ρ(s, x(s), ẋ(s)) ds .

Finally, we need only notice that y(t) = v(0) = u(0) = x(t). Q.E.D.

Lemma 1. Suppose there exists a trajectory x(·) for E defined on [t,1], t ∈ [0,1), with initial condition x(t) = y. Given a compact set C containing y, there exists a constant η < ∞, perhaps depending on C, with the following property: if y₁ ∈ C, there exists a trajectory x₁(·) for E defined on [t,1] with initial condition x₁(t) = y₁ which satisfies

  |φ(x(1)) - φ(x₁(1))| ≤ η|y - y₁| .

Recall φ: R^n → R is assumed to be locally Lipschitz.

Proof: Define the arc χ(·) over [t,1] by

  χ(s) = x(s) + y₁ - y .

Notice that

  χ'(s) = ẋ(s) ∈ E(s, x(s)) .

Since E is Lipschitz in x, there is some v ∈ E(s, χ(s)) which satisfies

  |χ'(s) - v| ≤ k(s)|x(s) - χ(s)| = k(s)|y - y₁| .

Therefore

  d[χ'(s), E(s, χ(s))] ≤ k(s)|y - y₁|

and

  ∫ₜ¹ ρ(s, χ(s), χ'(s)) ds ≤ ∫ₜ¹ k(s)|y - y₁| ds ≤ ||k||₁ |y - y₁| .
Corollary 1 applies and allows us to assert the existence of a trajectory x₁(·) for E which satisfies x₁(t) = χ(t) = y₁ and

  ||x₁ - χ||_∞ ≤ K||k||₁ |y - y₁| .

Therefore,

  ||x₁ - x||_∞ ≤ (K||k||₁ + 1)|y - y₁| .

There exists a compact set C* with the following property: for any trajectory x₁(·) of E with initial point y₁ contained in the compact set C, x₁(s) is contained in C* for all s in [t,1]. To see this we notice that for any such trajectory,

  x₁(s) = x₁(t) + ∫ₜˢ ẋ₁(u) du ,

so that, by (1.10),

  |x₁(s)| ≤ |x₁(t)| + ∫ₜ¹ b(u) du ≤ |x₁(t)| + ||b||₁ .

Hence the class of such trajectories is uniformly bounded in the supremum norm. To complete the proof of Lemma 1, we let K₁ be a Lipschitz constant for the function φ(·) valid on the compact set C*. Then

  |φ(x₁(1)) - φ(x(1))| ≤ K₁|x₁(1) - x(1)| ≤ K₁(K||k||₁ + 1)|y - y₁| .

We set η = K₁(K||k||₁ + 1). Q.E.D.

Proposition 2: V(t,y) is locally Lipschitz in y.

Proof: Given an arbitrary compact set C, we will show that

  |V(t,y₁) - V(t,y₂)| ≤ η|y₁ - y₂|

whenever y₁ and y₂ belong to the set C. The constant η is the same as that in Lemma 1. If x₁(·) is a trajectory for E with initial data (t,y₁), Lemma 1 asserts the existence of a trajectory x₂(·) with initial data (t,y₂) which satisfies

(2.2)  V(t,y₂) - V(t,y₁) ≤ φ(x₂(1)) - V(t,y₁) ≤ φ(x₁(1)) - V(t,y₁) + η|y₁ - y₂| .
Let x₁ⁿ denote a minimizing sequence of trajectories with initial data (t,y₁), and x₂ⁿ the corresponding trajectories with initial data (t,y₂) which satisfy

  |φ(x₁ⁿ(1)) - φ(x₂ⁿ(1))| ≤ η|y₁ - y₂| .

We deduce from the inequalities (2.2) that

  V(t,y₂) - V(t,y₁) ≤ lim φ(x₁ⁿ(1)) - V(t,y₁) + η|y₁ - y₂| = η|y₁ - y₂| .

Reversing this argument, that is, deducing the existence of a trajectory with initial data (t,y₁) from a given trajectory with the initial data (t,y₂), allows us to deduce that

  V(t,y₁) - V(t,y₂) ≤ η|y₁ - y₂| ,

which concludes the proof. Q.E.D.

Lemma 2: Given ε > 0, there exists a constant β(ε) < ∞ with the following property: if t₁ and t₂ belong to [0, 1-ε] and x₁(·) is a trajectory for E defined on [t₁,1] with initial condition x₁(t₁) = y, there exists a trajectory x₂(·) defined on [t₂,1] which satisfies x₂(t₂) = y and

  |φ(x₂(1)) - φ(x₁(1))| ≤ β(ε)|t₂ - t₁| .

Proof: Consider the mapping τ: [t₂,1] → [t₁,1] defined by

  τ(r) = m(r - t₂) + t₁ ,  where m = (1 - t₁)/(1 - t₂) .

It follows that

  τ⁻¹(s) = (1/m)(s - t₁) + t₂ .

Since ẋ₁(s) belongs to E(s, x₁(s)), it follows from (2.1) that there is some v_s belonging to E(τ⁻¹(s), x₁(s)) such that

  |v_s - ẋ₁(s)| ≤ k'|s - τ⁻¹(s)| ≤ 2k'|t₁ - t₂|/(1 - t₁) .

Therefore,

  d[ẋ₁(s), (1/m) E(τ⁻¹(s), x₁(s))] ≤ |(1/m)v_s - ẋ₁(s)|
    ≤ |(1/m)v_s - v_s| + |v_s - ẋ₁(s)|
    ≤ |v_s| |1/m - 1| + |v_s - ẋ₁(s)| .

Since v_s belongs to E(τ⁻¹(s), x₁(s)), we may deduce from (1.10) that

  d[ẋ₁(s), (1/m) E(τ⁻¹(s), x₁(s))] ≤ b(τ⁻¹(s)) |t₁ - t₂|/(1 - t₁) + 2k'|t₁ - t₂|/(1 - t₁) .

If

  Ẽ(s,y) = (1/m) E(τ⁻¹(s), y)  and  ρ̃(s,y,v) = d[v, Ẽ(s,y)] ,

it follows, since 1 - t₁ ≥ ε, that

  ρ̃(s, x₁(s), ẋ₁(s)) ≤ ( b(τ⁻¹(s)) + 2k' ) |t₁ - t₂| / ε .
From Corollary 1 we may deduce the existence of a trajectory v(·) for Ẽ, defined on the interval [t₁,1], which satisfies v(t₁) = x₁(t₁) = y and

  ||v - x₁||_∞ ≤ K( ||b||₁ + 2k' ) |t₁ - t₂| / ε .

Define the arc x₂(·) over the interval [t₂,1] by

  x₂(r) = v(τ(r)) .

It follows that

  ẋ₂(r) = m v̇(τ(r)) ∈ m Ẽ(τ(r), v(τ(r))) = E(r, x₂(r)) ,

so that x₂(·) is a trajectory for E which satisfies x₂(t₂) = v(t₁) = y and x₂(1) = v(1). Therefore

  |x₂(1) - x₁(1)| ≤ K( ||b||₁ + 2k' ) |t₂ - t₁| / ε

and

  |φ(x₂(1)) - φ(x₁(1))| ≤ M K( ||b||₁ + 2k' ) |t₂ - t₁| / ε

for an appropriate Lipschitz constant M. Q.E.D.

Proposition 3: Given ε > 0, V(t,y) is Lipschitz in t on the interval [0, 1-ε].

The proof is identical to that given for Proposition 2 and will be omitted.

Here is an example for which the optimal arc z(·) has the following property: the value function V(t,y) is not differentiable anywhere on the point set

  { (s, z(s)) : s belongs to [0,1] } .

Our example takes place in R², and we will denote a generic point in R² by (x,y). Define the multifunction E(x,y) by

  E(x,y) = {(0,0)} ∪ {(-1,-1)}

and the objective functional φ by

  φ(x,y) = |(x,y)| .

We consider the problem of minimizing φ(x(1), y(1)) over all trajectories (x(·), y(·)) of E defined on [t,1] and satisfying the initial condition (x(t), y(t)) = (x₀, y₀). For this particular problem, the optimal trajectory for the initial data (t, x₀, x₀) can be shown to satisfy

  (ẋ(s), ẏ(s)) = (0,0)    if t ≤ s ≤ 1 - x₀
               = (-1,-1)  if 1 - x₀ < s ≤ 1 ,

provided 1 - t ≥ x₀, and

  (ẋ(s), ẏ(s)) = (-1,-1)  if 1 - t < x₀ .

In order to consider arbitrary initial data (t,x,y) with 0 ≤ t ≤ 1 and (x,y) in the first quadrant of the plane, let D be the diagonal in R²:

  D = { (x,y) : y = x, 0 ≤ x ≤ 1 } .
Given (x Q,y 0) let PQ= (pQ,p0) be the point of D closest to (x Q,y 0) Figure 1 (22) Case 1: 1-t <_ . The optimal t r a j e c t o r y s a t i s f i e s (0,0) i f t < s < 1-p (x(s),y(s))= 0 L(-l,-l) i f l - p Q < s < 1 In t h i s case V ( t , x Q , y Q ) = |(x 0,y 0) - ( P 0 , P 0 ) | = d [ ( x 0 , y 0 ) , D] . Case 2: 1-t < p Q . The optimal t r a j e c t o r y s a t i s f i e s (x(s),y(s))-= (-1,-1) and V ( t , x Q , y 0 ) = |(x 0,y 0) - (1-t,1-t)| . A convenient way to summarize t h i s information i s as follows: l e t h be the vector (-1,-1). Then V(t,x,y) = d[ (x,y),D] i f h-(x,y) > h-(1-t,1-t) = |(x,y) - ( l - t , l - t ) | i f h-(x,y) < h - ( l - t , l - t ) I f i s the hyperplane indicated i n Figure 2 then V(t,x,y) = d[(x,y),D] provided (x,y) belongs to the halfspace below L t = |(x,y) - (1-t,1-t) | provided (x,y) belongs to the halfspace above ( 2 3 ) Figure 2 This representation i s convenient when deriving d i f f e r e n t i a l properties of V . For example, i t can be seen that (see Figure 3 ) f o r e > 0 V ( S , 1 - S , 1 - S + E ) = £ and V ( s , 1 - S , 1 - S - E ) = £ J2 Figure 3 I t may be concluded therefore that V i s nowhere d i f f e r e n t i a b l e on the set { ( s , l - s , l - s ) : 0 £ s <_ 1 } . (24) For the i n i t i a l data (0,1,1) the optimal arc i s given by Z(s) = ( l - s , l - s ) . In order that we may compute V ( s , l - s , l - s ) we consider sequences s -»- s; x -»• 1-s; y -> 1-s; and VV(s ,x ,y ) -> v as n -»• °°. There are n n 'n ' n n'-'n' two separate cases to consider when computing v V ( s n > x n > v n ) . 
Case 1: h·(x_n, y_n) ≤ h·(1-s_n, 1-s_n), in which case

  ∇V(s_n, x_n, y_n) = ( x_n + y_n - 2(1-s_n), x_n - (1-s_n), y_n - (1-s_n) ) / |(x_n, y_n) - (1-s_n, 1-s_n)| .

Case 2: h·(x_n, y_n) > h·(1-s_n, 1-s_n), in which case

  ∇V(s_n, x_n, y_n) = ( 0, x_n - p_n, y_n - p_n ) / |(x_n, y_n) - (p_n, p_n)| ,

where (p_n, p_n) is the closest point in D to (x_n, y_n). It is easy to see that in Case 2,

  lim ∇V(s_n, x_n, y_n) = ± (1/√2)(0, -1, 1) .

In Case 1 we notice that it is sufficient to consider lim ∇V(s_n, x_n, y_n) when the ratio (x_n - (1-s_n))/(y_n - (1-s_n)) remains constant as n → ∞. The quantities lim ∇V(s_n, x_n, y_n) may then be parameterized with this ratio. In particular:

  for k belonging to [-1,0], (1/√(1+k²)) (1+k, k, 1) belongs to ∂V(s, 1-s, 1-s);

  for p belonging to [0,∞), (1/√(1+p²)) (1+p, 1, p) belongs to ∂V(s, 1-s, 1-s);

  for m belonging to [-1,0], (1/√(1+m²)) (1+m, 1, m) belongs to ∂V(s, 1-s, 1-s).

In all cases the (x,y) components of ∂V lie on the unit circle of the (x,y) plane, while the t component varies from 0 when (x,y) = (1/√2)(-1,1), to a maximum value of √2 when (x,y) = (1/√2)(1,1), and back to 0 when (x,y) = (1/√2)(1,-1). Taking the convex hull of these points gives us the two-dimensional set shown in Figure 4.

Figure 4

To begin the discussion of necessary conditions we prove a result which has been adapted from W.H. Fleming and R.W. Rishel [1].

Proposition 4. If x(·) is a trajectory for E defined on [t,1], then V(s, x(s)) is a nondecreasing function of s on [t,1]. If z(·) is an optimal trajectory for initial conditions z(t) = y, then V(s, z(s)) is constant on [t,1].

Proof: For any s₁, s₂ which satisfy t ≤ s₁ ≤ s₂ ≤ 1,

  V(s₁, x(s₁)) ≤ V(s₂, x(s₂)) .

If z(·) is optimal, V(t, z(t)) = φ(z(1)).
Therefore f o r any s i n [ t , l ] , (26) <j>(z(l)) = V ( t , z ( t ) ) < V(s,z(s)) < 4>(z(l)) so that equality must hold and V(s,z(s)) i s constant on [ t , l ] Proposition 5: For every (t,y) belonging to [0,1] X Rn; 0 < i n f { lim sup V(t+ ,y+Av) - V(t,y) } v E E(t,y) A+0 A Proof: We consider the difference quotient V(t+ ,y+Av) - V(t,y) A fo r A > 0 and v £ E ( t , y ) . Define the arc x(-) on the i n t e r v a l [t,t+A] by x(t+s) = y + sv where A > 0. An argument s i m i l a r to that presented i n Co r o l l a r y 1 may be used to show the following: there exists a t r a j e c t o r y x (•) f o r E A defined f o r s E [t,t+A] which s a t i s f i e s x^(t) = y and (2.3) Hx - x x l l [ t ^ t + x ] < K / J p(t+s,x(t+s),x' (t+s))ds . It follows e a s i l y from our assumptions on the multifunction E that p (*,•,•) i s continuous. Therefore i f we consider a sequence of A tending to 0, we may conclude that 1 P(t+s,x(t+s),x'(t+s))ds -> p ( t , x ( t ) , x ' (t)) = 0 . A It follows from (2.3) therefore, that (27) Now f o r A > 0, i f x (•) denotes the t r a j e c t o r y of equation (2.3) and n A i s an appropriate L i p s c h i t z constant f o r V then |V(t+A,y+Av) - V(t+A,x rt+A)) | <_n|x(t+X) - x,(t+A) I = o(A) . A A Therefore f o r a l l A s u f f i c i e n t l y small, -o(A) + V(t.+A,x x(t+A)) - V(t,y) < V(t+A,y+Av) - V(t,y) __ __ _ Since x,(•) i s a t r a j e c t o r y f o r E, Proposition 4 implies V(t+A,x A(t+A)) - V(t,y) > 0 . It follows that -o(A) < V(t+A,y+Av) - V(t,y) and A A 0 <_ lim sup V(t+A,y+Av) - V(t,y) A*0 A Proposition 5 follows since t h i s holds f o r a l l v £ E ( t , y ) . Q.E.D. The following Lemmas w i l l be needed i n the derivation of the generalized Hamilton - Jacobi equation. Lemma 3: I f V(*,») i s d i f f e r e n t i a b l e at (t,y) and z(«) i s an optimal relaxed t r a j e c t o r y with i n i t i a l condition z(t) = y, then there exists a vector £ belonging to 3z(t) n coE(t,y) such that V t(t,y) + V ( t , y ) - ? 
= 0.

Proof: We will show that in fact ∂z(t) ⊂ coE(t,y) and that there is an element ζ of ∂z(t) which satisfies

V_t(t,y) + V_y(t,y)·ζ = 0.

Given a sequence λ_n tending to 0, the mean value property implies the existence of λ′_n ∈ (0,λ_n) such that

z(t+λ_n) = z(t) + λ_n ζ_n

for some ζ_n ∈ ∂z(t+λ′_n). By considering an appropriate subsequence λ_j, we may assume that ζ_j → ζ. It follows from upper semicontinuity of ∂z(·) that ζ ∈ ∂z(t). We may conclude therefore that z(t+λ_j) = z(t) + λ_j ζ + o(λ_j) and

[V(t+λ_j, z(t+λ_j)) - V(t,z(t))]/λ_j = [V(t+λ_j, z(t)+λ_j ζ) - V(t,z(t))]/λ_j + o(λ_j)/λ_j
→ V_t(t,z(t)) + V_y(t,z(t))·ζ.

However it follows from Proposition 4, since z(·) is optimal, that

V(t+λ_j, z(t+λ_j)) - V(t,z(t)) = 0.

Therefore,

V_t(t,z(t)) + V_y(t,z(t))·ζ = 0.

To demonstrate that ∂z(t) ⊂ coE(t,y), we let B denote the set

{s : t ≤ s < 1 and ż(s) does not exist} ∪ {s : t ≤ s < 1 and ż(s) ∉ coE(s,z(s))}.

Then B has measure 0 and, as is shown in F.H. Clarke [2], ∂z(t) = ∂_B z(t), where ∂_B z(t) is the convex hull of all limits lim ż(s_n) with s_n → t such that s_n belongs to [t,1]\B. For any such sequence s_n, ż(s_n) belongs to coE(s_n, z(s_n)). Therefore lim ż(s_n) ∈ coE(t,z(t)) by upper semicontinuity of the multifunction E(·,·). Taking the convex hulls of all such limits we find that ∂_B z(t) ⊂ coE(t,z(t)). Q.E.D.

Lemma 4: For every (t,y) belonging to [0,1] × Rⁿ,

min_{∂V(t,y)} min_{E(t,y)} { a + b·v } ≥ 0.

Proof: We consider (a,b) ∈ ∂V(t,y) as represented by a convex combination of limits of gradients of V; that is,

(a,b) = lim_{n→∞} Σ_{i=1}^m α_i [a_iⁿ, b_iⁿ],

where (a_iⁿ, b_iⁿ) = ∇V(t_iⁿ, y_iⁿ), t_iⁿ → t, y_iⁿ → y for i = 1,...,m.
Given v ∈ E(t,y), since E(·,·) is Lipschitz, we may deduce the existence of u_iⁿ ∈ E(t, y_iⁿ) and v_iⁿ ∈ E(t_iⁿ, y_iⁿ) which satisfy, respectively,

|u_iⁿ - v| ≤ k(t)|y_iⁿ - y|  and  |v_iⁿ - u_iⁿ| ≤ k′|t_iⁿ - t|.

It follows that v_iⁿ → v, since

|v_iⁿ - v| ≤ k′|t_iⁿ - t| + k(t)|y_iⁿ - y|.

Therefore,

a + b·v = lim_{n→∞} Σ_{i=1}^m α_i {a_iⁿ + b_iⁿ·v_iⁿ} = lim_{n→∞} Σ_{i=1}^m α_i V′(t_iⁿ, y_iⁿ; 1, v_iⁿ).

We may conclude from Proposition 5 that V′(t_iⁿ, y_iⁿ; 1, v_iⁿ) ≥ 0, and consequently a + b·v ≥ 0 for arbitrary (a,b) ∈ ∂V(t,y). It follows that

min_{∂V(t,y)} min_{E(t,y)} { a + b·v } ≥ 0. Q.E.D.

Theorem 1: For every (t,y) belonging to [0,1] × Rⁿ,

min_{∂V(t,y)} { a - H(t,y,-b) } = 0.

Proof: We suppose temporarily that V is differentiable at (t,y). The optimal relaxed trajectory with initial data (t,y), which is known to exist (see the discussion in Chapter 1(B)), will be denoted z(·). From Lemma 3 we deduce the existence of a vector ζ belonging to coE(t,y) (if ż(t) exists we may take ζ = ż(t)) such that

V_t(t,y) + V_y(t,y)·ζ = 0.

Since ζ ∈ coE(t,y) there is some convex combination {α_i} and v_i ∈ E(t,y) such that

Σ_{i=1}^m α_i [V_t(t,y) + V_y(t,y)·v_i] = 0.

However, V_t(t,y) + V_y(t,y)·v_i = V′(t,y; 1,v_i) ≥ 0, and consequently for some i,

V_t(t,y) + V_y(t,y)·v_i = 0.

We may conclude from this and Proposition 5 that whenever V is differentiable,

(2.4) inf_{E(t,y)} V′(t,y; 1,v) = 0.

Given (a,b) ∈ ∂V(t,y) such that (a,b) = lim_{n→∞} ∇V(t_n,y_n) for sequences t_n → t and y_n → y, let v_n ∈ E(t_n,y_n) satisfy (see equation (2.4))

V′(t_n, y_n; 1, v_n) = 0.

By considering a suitably chosen subsequence, we may assume that v_n → v. It follows by upper semicontinuity that v ∈ E(t,y). Consequently,

(2.5) a + b·v = lim_{n→∞} [V_t(t_n,y_n) + V_y(t_n,y_n)·v_n] = 0.
From (2.5) and Lemma 4 we may conclude that

(2.6) min_{∂V(t,y)} min_{E(t,y)} { a + b·v } = 0.

Recall that H(t,y,-b) = max_{v∈E(t,y)} {(-b)·v}, so that equation (2.6) may be rephrased as

min_{∂V(t,y)} { a - H(t,y,-b) } = 0. Q.E.D.

We will say a Lipschitz function W(t,y) is a solution of the generalized Hamilton - Jacobi equation if for every (t,y) belonging to [0,1] × Rⁿ,

(2.7) max_{∂W(t,y)} { a + H(t,y,b) } = 0.

Theorem 1 could be rephrased, then, by stating that -V(t,y) is a solution of the generalized Hamilton - Jacobi equation. The following result is a necessary condition in order that z(·) be an optimal trajectory.

Theorem 2: If z(·) is an optimal trajectory for initial data (t,y), then for every s ∈ [t,1] the following holds: there exist ζ(s) ∈ ∂z(s) ∩ E(s,z(s)) and (a(s),b(s)) ∈ ∂V(s,z(s)) such that

(2.8) a(s) + b(s)·ζ(s) = 0.

Furthermore,

(2.9) -b(s)·ζ(s) = H(s,z(s),-b(s)),

and we may take ζ(s) = ż(s) whenever z is differentiable at s.

Proof: Suppose z(·) is differentiable at an interior point s of the interval [t,1]. Consider, for λ ∈ [0,1-s], the arc z̄(s+λ) = z(s) + λż(s). We define the Lipschitz function f(λ) by f(λ) = V(s+λ, z̄(s+λ)) and note that since z̄ is continuously differentiable, the chain rule for generalized gradients (Proposition 1.3) states that

∂f(λ) ⊂ ∂V(s+λ, z̄(s+λ)) · [1, ż(s)].

We will now demonstrate that 0 ∈ ∂f(0), which will imply

(2.10) 0 ∈ ∂V(s,z(s)) · [1, ż(s)].

Since ż(s) exists, z(s+λ) = z(s) + λż(s) + o(λ) as λ↓0. It thereby follows that

V(s+λ, z(s+λ)) - V(s+λ, z̄(s+λ)) = o(λ) as λ↓0,

and consequently

f(λ) - f(0) = V(s+λ, z(s+λ)) - V(s,z(s)) + o(λ).

From Proposition 4 we notice that V(s+λ, z(s+λ)) - V(s,z(s)) = 0 and conclude that f(λ) - f(0) = o(λ).
An identical argument shows that f(-λ) - f(0) = o(λ), so that f′(0) exists and f′(0) = 0. Therefore,

0 = -f′(0) ≤ f°(0;-1)  and  0 = f′(0) ≤ f°(0;1),

which implies (Proposition 1.2) that 0 ∈ ∂f(0). Equation (2.10) follows from this fact.

We will now consider a sequence of points s_i ∈ (t,1) with s_i → s such that ż(s_i) → ζ(s). It follows by upper semicontinuity that ζ(s) ∈ ∂z(s) ∩ E(s,z(s)). From equation (2.10) we deduce the existence of points (a_i,b_i) belonging to ∂V(s_i,z(s_i)) which satisfy, for every i,

a_i + b_i·ż(s_i) = 0.

By considering appropriate subsequences, we may assume that (a_i,b_i) → (a(s),b(s)), and again by upper semicontinuity it follows that (a(s),b(s)) ∈ ∂V(s,z(s)). Therefore,

a(s) + b(s)·ζ(s) = 0.

To complete the statement of this theorem, we notice from Theorem 1 that

a(s) - (-b(s))·ζ(s) ≥ a(s) - H(s,z(s),-b(s)) ≥ 0.

Equality thereby holds and -b(s)·ζ(s) = H(s,z(s),-b(s)). Q.E.D.

3. Sufficient Conditions.

In the classical calculus of variations, sufficient conditions for an optimal solution were tackled with the construction of "fields of extremals" [R. Courant [1]]. This method was elegant, but the technical difficulties were awesome and few problems could be completely analysed in this manner. Hence, to all intents and purposes, the method was not practical in the same sense as the necessary conditions. For the optimal control problem, the situation is somewhat better owing to a more explicit use of the value function.
However, technical difficulties increased in direct proportion to the precision with which one wanted to specify the differential properties of this function. In this chapter, we present a sufficiency theorem, or a verification theorem as it is sometimes called, which may serve to eliminate some of the computational problems encountered in the analysis of optimal control problems.

We begin by stating a converse to Proposition 4 of Chapter 2 which will lay the groundwork for the main theorem. Recall that the objective functional φ: Rⁿ → R is by assumption a locally Lipschitz function.

Proposition 1: Let W(s,y) be a real valued, locally Lipschitz function defined on [0,1] × Rⁿ such that W(1,y) = φ(y) for every y belonging to Rⁿ. Let (t_0,y_0) be given initial conditions and suppose that for any trajectory x(·) of E, defined on [t_0,1] and satisfying x(t_0) = y_0, W(s,x(s)) is nondecreasing on [t_0,1]. If z(·) is a trajectory for E defined on [t_0,1], satisfying z(t_0) = y_0, and such that W(s,z(s)) is constant on [t_0,1], then z(·) is optimal for the given initial conditions and

W(t_0,y_0) = V(t_0,y_0),

where V(·,·) is the value function.

Proof: For any trajectory x(·) such that x(t_0) = y_0,

W(t_0,y_0) ≤ W(1,x(1)) = φ(x(1)),

with equality for x = z. That is, φ(z(1)) ≤ φ(x(1)), and z(·) is thereby optimal for the initial conditions (t_0,y_0). Therefore,

W(t_0,y_0) = φ(z(1)) = V(t_0,y_0). Q.E.D.

Theorem 1: Let -W(s,y) be a locally Lipschitz solution of the generalized Hamilton - Jacobi equation; that is,

max_{-∂W(t,y)} { a + H(t,y,b) } = 0.

Suppose W satisfies the boundary condition W(1,y) = φ(y) for every y belonging to Rⁿ.
Let (t_0,y_0) be given initial conditions and suppose the trajectory z(·) satisfies z(t_0) = y_0 and

∂W(s,z(s)) · [1, ż(s)] = 0 a.e.;

then z(·) is an optimal trajectory for (t_0,y_0) and W(t_0,y_0) = V(t_0,y_0).

Proof: We begin by noting that if -W(s,y) is a solution of (2.7), then W(s,y) is a solution of the following three equations:

(3.1) max_{-∂W(t,y)} { a + H(t,y,b) } = 0;
      min_{∂W(t,y)} { a - H(t,y,-b) } = 0;
      min_{∂W(t,y)} min_{E(t,y)} { a + b·v } = 0.

This serves to show more clearly the connection between the sufficiency condition and the Hamilton - Jacobi equation (2.7). We will first demonstrate that W(s,x(s)) is a nondecreasing function of s for every trajectory x(·) of E. We observe that W(s,x(s)) is Lipschitz as a function of s. It follows that the set G, defined by

G = {s : (d/ds) W(s,x(s)) exists} ∩ {s : ẋ(s) exists},

has Lebesgue measure 1 - t_0 if x(·) is defined over [t_0,1]. For a point s belonging to G we define

f_s(τ) = W(s+τ, x(s) + τẋ(s)) for τ ∈ [0,ε],

for some ε > 0. Then the conditions for the chain rule property are met (see Proposition 1.3), and we may deduce that

∂f_s(0) ⊂ ∂W(s,x(s)) · [1, ẋ(s)].

Since s ∈ G, it follows that x(s+τ) = x(s) + τẋ(s) + o(τ) and

W(s+τ, x(s+τ)) - W(s,x(s)) = W(s+τ, x(s)+τẋ(s)) - W(s,x(s)) + o(τ).

Therefore,

(d/ds) W(s,x(s)) = lim_{τ↓0} [f_s(τ) - f_s(0)]/τ ∈ ∂f_s(0) ⊂ ∂W(s,x(s)) · [1, ẋ(s)].

However, ∂W(s,x(s)) · [1, ẋ(s)] ≥ 0 from equation (3.1), so that we may conclude: (d/ds) W(s,x(s)) ≥ 0 almost everywhere and W(s,x(s)) is nondecreasing on the interval [t_0,1].

For the trajectory z(·) it follows that

(d/ds) W(s,z(s)) ∈ ∂W(s,z(s)) · [1, ż(s)] a.e.;

however, by hypothesis ∂W(s,z(s)) · [1, ż(s)] = 0 a.e., so that W(s,z(s)) is constant for s ∈ [t_0,1].
The trajectory z(·) and the function W(·,·) thereby satisfy the conditions of Proposition 1, and we may conclude that z(·) is optimal. Q.E.D.

The derived necessary conditions for the differential inclusion problem could be stated as

0 ∈ ∂W · [1, ż]  and  ∂W · [1, ż] ≥ 0.

The sufficiency condition requires that

∂W · [1, ż] = 0.

Could the sufficiency theorem be sharpened so as to decrease the above discrepancy? To show that in fact this is not possible, we will demonstrate that when the value function is regular the necessary condition may be strengthened to coincide with the sufficient condition. Recall that regularity requires

V°(t,y; i,v) = V′(t,y; i,v) for i = ±1 and v ∈ Rⁿ.

This condition can be shown to always hold when the problem is "convex" in a certain sense.

Theorem 2: If z(·) is an optimal trajectory for the problem with initial data (t,y), and if the value function is regular, then

∂V(s,z(s)) · [1, ż(s)] = 0 a.e.

In order to prove this theorem it will be necessary to introduce the notions of generalized normal and tangent cones to an arbitrary closed set Q. For a more detailed discussion see F.H. Clarke [2]. We define the function d: Rⁿ → R by

d(x) = min { |x - e| : e ∈ Q }.

It is easily verified that d(x) is uniformly Lipschitz with constant 1.

Definition: The cone of normal vectors to Q at e, denoted N_Q(e), is the closure of the set
We set Q = {x : F(x)r 0} and assume for convenience that 0 £ Q. Lemma 1: If F is regular at 0 then 3F(0) C N (0) Proof: Consider an element d of T_(0). For A > 0, let e G Q satisfy V A It follows that Ad - e = d(Ad) . A F(Ad) = F(Ad) - F(e.) < Kd(Ad) A where K is an appropriate Lipschitz constant for F. Therefore, lim sup F(Ad) < K lim sup d(Ad) < d°(0;d) <_ 0 . A4-0 A A4-0 A If F happens to be regular at 0 then (41) F°(0;d) = lim sup F(Ad) <_ 0 AIO d f o r every d G TQ(0)• That i s <£,d> £ 0 f o r a l l ? G 3F(0) and d G T (0). Therefore, 3F(0) C [T Q(0)]° = N Q(0) . Q.E.D. Proof of Theorem 2: We r e c a l l that a necessary condition f o r the t r a j e c t o r y z(«) to be optimal f o r the i n i t i a l data (t,y) i s that V(s,z(s)) be constant on [ t , l ] (see Proposition 2.4). We could restate t h i s by saying f o r a l l s belonging to the i n t e r v a l [ t , l ] , (s,z(s)) belongs to the set { (s,x) : V(s,x) = V(t,y) } . I f we denote t h i s set by Q, we may conclude that the arc z(*) defined by (s,z(s)) = z(s) belongs to the closed set Q f o r every s belonging to [ t , l ] . At t h i s point we r e c a l l the following theorem from F.H. Clarke [ 2]. Theorem: The t r a j e c t o r y z(s) belongs to the closed set $ f o r s > t i f and only i f z'(s) i s tangent to $ at z(s) almost everywhere; that i s , z'(s) belongs to T $ ( z ( s ) ) a.e. . It follows from t h i s theorem that z'(s) = [ l , z ( s ) ] belongs to T n ( s , z ( s ) ) a.e. . (42) Therefore, f o r every (a,b) belonging to N Q ( S , Z ( S ) ) , [ l , z ( s ) ] • (a,b) < 0 . In p a r t i c u l a r , Lemma 1 asserts that (3.3) [ l , z ( s ) ] • 9V(s,z(s)) <0 a.e. . Now z(«) and V(«,») s a t i s f y the generalized Hamilton - Jacobi equation, Theorem (2.1), min { a + <b,z(s)> } = 0 a.e. . 9V(s,z(s)) It follows that (3.4) [ l , z ( s ) ] • 3V(s,z(s)) ^ 0 a.e. . Inequalities (3.3) and (3.4) imply that [ l , z ( s ) J • 8V(s,z(s)) = 0 a.e. . Q.E.D. 
Example: We consider the problem in R² of minimizing {½x_1(1) + x_2(1)} over all trajectories (x_1(s), x_2(s)) which satisfy (x_1(t), x_2(t)) = (y_1, y_2) and

(ẋ_1(s), ẋ_2(s)) ∈ { (v, -|x_1(s)|) : -1 ≤ v ≤ 1 }.

This problem could be reformulated as: minimize

{ ½x(1) + y_2 - ∫_t^1 |x| ds }

over all arcs x(s) such that x(t) = y_1 and |ẋ| ≤ 1. A reasonable guess as to the nature of the optimal arc x(s) is that it will look, for some switching time T, as in Figure 5.

Figure 5

To verify this supposition we attempt to construct a solution W of the generalized Hamilton - Jacobi equation

min_{∂W(t,y_1,y_2)} { a - H(y_1,y_2,-b) } = 0

which satisfies the boundary conditions

W(1,y_1,y_2) = φ(y_1,y_2) = ½y_1 + y_2,

and for which the arc in question (x_1(s), x_2(s)) satisfies

∂W(s, x_1(s), x_2(s)) · [1, ẋ_1(s), ẋ_2(s)] = 0 a.e.

To begin with, we must decide which switching time will give an optimal trajectory. We consider arcs

x(s) = y_1 + (s - t) on [t,T],
x(s) = y_1 + (T - t) - (s - T) on [T,1],

and an associated performance function

f(T) = ½(y_1 + 2T - t - 1) - ∫_t^1 |x(s)| ds + y_2.

We will restrict attention to y_1 > 0, since the optimal policy for y_1 ≤ 0 is clearly ẋ = -1. After routine computations, we find

f(T) = -3T² + [3 - 4(y_1 - t)]T + (y_1 - t)(3/2 + 2t - y_1) + ½t² - 1 + y_2  if T ≤ ½[1 - (y_1 - t)];

f(T) = T² - T + ½(t - y_1) + y_1 t - ½t² + y_2  if T ≥ ½[1 - (y_1 - t)].

Using calculus we can decide for which values of T f(T) will have a global minimum over the interval [t,1]. It turns out that if we partition the strip { (t,y_1) : 0 ≤ t ≤ 1, y_1 > 0 } as in Figure 6, then the minimum for f(T) is achieved when T = t in region A; T = t in region B; T = ½ in region C.

Figure 6

Substituting these values of T into the expression for f(T) we obtain a function W(t,y_1,y_2):

W(t,y_1,y_2) = -½t² + (3/2)t - y_1 t + (3/2)y_1 - y_1² - 1 + y_2 in A,
             = ½t² - ½t + y_1 t - ½y_1 + y_2 in B,
             = -½t² + ½t + y_1 t - ½y_1 - ¼ + y_2 in C.
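The region formulas for W can be checked mechanically. The sketch below is ours, not part of the thesis: the three functions are our reconstruction of the display above, the sample points are assumed to lie in the interiors of the indicated regions, and the Hamiltonian is used in the form H(y_1,y_2,p_1,p_2) = |p_1| - p_2|y_1|, so that the Hamilton - Jacobi quantity is a - (|b_1| + b_2|y_1|).

```python
def W_A(t, y1, y2):
    # region A: immediate switch, descending arc crosses zero before s = 1
    return -0.5*t*t + 1.5*t - y1*t + 1.5*y1 - y1*y1 - 1.0 + y2

def W_B(t, y1, y2):
    # region B: immediate switch, descending arc stays positive
    return 0.5*t*t - 0.5*t + y1*t - 0.5*y1 + y2

def W_C(t, y1, y2):
    # region C: rise until the switching time T = 1/2, then descend
    return -0.5*t*t + 0.5*t + y1*t - 0.5*y1 - 0.25 + y2

def grad(F, t, y1, y2, h=1e-6):
    # central-difference gradient (a, b1, b2) of F at (t, y1, y2);
    # exact up to rounding here since each F is quadratic
    return ((F(t+h, y1, y2) - F(t-h, y1, y2)) / (2*h),
            (F(t, y1+h, y2) - F(t, y1-h, y2)) / (2*h),
            (F(t, y1, y2+h) - F(t, y1, y2-h)) / (2*h))

def hj_residual(F, t, y1, y2):
    # a - H(y1, y2, -b) with H(y1, y2, p1, p2) = |p1| - p2*|y1|,
    # so H(y1, y2, -b1, -b2) = |b1| + b2*|y1|
    a, b1, b2 = grad(F, t, y1, y2)
    return a - (abs(b1) + b2 * abs(y1))

# boundary condition W(1, y1, y2) = y1/2 + y2 holds in region B,
# which contains t = 1 for these sample values
assert abs(W_B(1.0, 0.4, 2.0) - (0.5*0.4 + 2.0)) < 1e-9

# the Hamilton - Jacobi residual vanishes at sample points assumed
# to lie in the interior of each region
assert abs(hj_residual(W_A, 0.5, 0.25, 0.0)) < 1e-4
assert abs(hj_residual(W_B, 0.8, 0.40, 0.0)) < 1e-4
assert abs(hj_residual(W_C, 0.3, 0.50, 0.0)) < 1e-4
```

The check confirms that each regional formula solves the equation wherever its first gradient component dominates in the indicated way; the behaviour of ∂W on the region boundaries still requires the separate computation described in the text.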
It is easily verified that W(1,y_1,y_2) = ½y_1 + y_2. To show that W satisfies the Hamilton - Jacobi equation, we note that the Hamiltonian for the multifunction

E(y_1,y_2) = { (v, -|y_1|) : -1 ≤ v ≤ 1 }

is given by

H(y_1,y_2,p_1,p_2) = p_1 - p_2|y_1| if p_1 ≥ 0;
                   = -p_1 - p_2|y_1| if p_1 ≤ 0.

We derive the function ∇W:

∇W(t,y_1,y_2) = [-t - y_1 + 3/2, -t + 3/2 - 2y_1, 1] in A,
              = [t - ½ + y_1, t - ½, 1] in B,
              = [-t + y_1 + ½, t - ½, 1] in C.

The set ∂W(t,y_1,y_2) is easily computed on the boundary of these regions. To demonstrate that W(t,y_1,y_2) is a solution of the generalized Hamilton - Jacobi equation, one last computation is necessary in each of the regions A, B, C, of which we give one, the other two being similar. In region A, -t + 3/2 - 2y_1 > 0, so that

a - H(y_1,y_2,-b) = (-t - y_1 + 3/2) - [(-t + 3/2 - 2y_1) + y_1] = 0.

Finally, to prove optimality of the trajectory with the chosen switching time, suppose for example the initial data is (t,y_1,y_2) = (¼,½,y_2). Then the optimal arc would be

x_1(s) = ½ + (s - ¼) for ¼ ≤ s ≤ ½,
       = ¾ - (s - ½) for ½ ≤ s ≤ 1,
x_2(s) = y_2 - ∫_¼^s |x_1| ds.

For all s such that ¼ < s < ½,

∂W(s, x_1(s), x_2(s)) · [1, ẋ_1, ẋ_2] = (-s + x_1(s) + ½) + (s - ½) - |x_1(s)| = 0;

for ½ < s < 1,

∂W(s, x_1(s), x_2(s)) · [1, ẋ_1, ẋ_2] = (s - ½ + x_1(s)) - (s - ½) - |x_1(s)| = 0,

which proves optimality by Theorem 1.

4. Canonical Transformations.

Let Ω be a given open subset of Rⁿ. A more general version of the differential inclusion problem consists of minimizing φ(x(1)) over all trajectories x(·) of the multifunction E which lie in Ω and satisfy the boundary conditions

x(0) ∈ C_0 and x(1) ∈ C_1.

When E(·,·) satisfies conditions (1.7) through (1.10), φ(·) is locally Lipschitz, and C_0, C_1 are closed sets in Rⁿ, the following theorem from F.H.
Clarke [5] gives necessary conditions for optimality:

Theorem 2 (F.H. Clarke [5]): If z(·) solves the above differential inclusion problem, then there is an arc p(·) and a number λ equal to 0 or 1 such that:

(4.1) (-ṗ(t), ż(t)) ∈ ∂H(t, z(t), p(t)) a.e.,
      p(0) ∈ N_{C_0}(z(0)),
      -p(1) ∈ N_{C_1}(z(1)) + λ∂φ(z(1)),

and λ + |p| is never 0.

The differential inclusion of (4.1) is called a Hamiltonian inclusion, in analogy with the classical Hamiltonian equations of mechanics and optics. The solution of such Hamiltonian inclusions thereby assumes a position of some importance. As a contribution to the development of techniques suited to the solution of these inclusions, we give a theorem on the transformation properties of "extremal trajectories" and show how this result may be used, together with the concept of canonical transformations, in the analysis of Hamiltonian systems.

We first develop some terminology and notation. We consider the problem which consists of minimizing

∫_0^1 F(t, y(t), ẏ(t)) dt

subject to the arc y(·) satisfying various endpoint constraints (which shall be suppressed in our discussion). The class of admissible arcs is the class of absolutely continuous functions y: [0,1] → Rⁿ which lie in the open subset Ω of Rⁿ. The integrand F(t,·,·) will be assumed to be locally Lipschitz, and Lebesgue measurable in the t argument. The arc y(·) is called an extremal for the integrand F provided it satisfies the Euler - Lagrange differential inclusion; that is, if there exists an absolutely continuous arc ζ(·) which satisfies

(ζ̇(t), ζ(t)) ∈ ∂F(t, y(t), ẏ(t)) a.e.

(see F.H. Clarke [6] for details). We shall consider a C² transformation of coordinates f: R × Rⁿ → Rⁿ, f(t,Y) = y, with (t,Y) representing the new coordinates.
The term "transformation of coordinates" is intended to imply that the matrix f_Y(t,Y) is nonsingular everywhere, so that, locally, we can obtain the inverse transformation h(t,y) = Y. We will also assume that the range of the mapping f includes the open subset Ω:

Ω ⊂ range f.

Then, given an arc y(·) which lies in Ω, we define the inverse image Y(·) by the implicit equation

f(t, Y(t)) = y(t).

Within the set Ω, an equivalent representation could be given by the inverse transformation Y(t) = h(t, y(t)). We shall have occasion to consider an appropriate mapping between the corresponding tangent spaces. We notice that whenever the derivative Ẏ(t) exists,

f_t(t,Y(t)) + f_Y(t,Y(t)) ∘ Ẏ(t) = ẏ(t),

and we define the transformation j: R × Rⁿ × Rⁿ → Rⁿ by

(4.2) j(t,Y,V) = f_t(t,Y) + f_Y(t,Y) ∘ V.

In equation (4.2), f_Y is an n×n matrix, and f_t and V are column vectors of length n. Finally, we consider a locally Lipschitz integrand F(t,y,v) and the associated mapping F*: R × Rⁿ × Rⁿ → R defined by

F*(t,Y,V) = F(t,y,v), where (y,v) = (f(t,Y), j(t,Y,V)).

Theorem 1: The extremals for the integrand F(t,y,v) are in one to one correspondence, via the change of coordinates f(t,Y) = y, with the extremals of the integrand F*(t,Y,V).

Proof: If we set

g(t,Y,V) = [f(t,Y), j(t,Y,V)] = (y,v),

then g is at least continuously differentiable; whenever ∇F exists, ∇F* exists and

∇F*(t,Y,V) = ∇F(t,y,v) ∘ Dg(t,Y,V),

where Dg is the Jacobian of the transformation g (taken with respect to the variables (Y,V)). It follows from the definition of generalized gradients that

(4.3) ∂F*(t,Y,V) ⊃ ∂F(t,y,v) ∘ Dg(t,Y,V).

In particular, if y(·) is an extremal trajectory which lies in the set Ω, there is an arc ζ(·) which satisfies

(ζ̇(t), ζ(t)) ∈ ∂F(t, y(t), ẏ(t)) a.e.
From the inclusion (4.3) we may deduce that

(4.4) (ζ̇(t), ζ(t)) ∘ Dg(t, Y(t), Ẏ(t)) ∈ ∂F*(t, Y(t), Ẏ(t)) a.e.

Now Dg(t,Y,V) is a linear transformation from R²ⁿ into R²ⁿ which has the matrix representation

Dg(t,Y,V) = [ f_Y(t,Y)      0
              j_Y(t,Y,V)   f_Y(t,Y) ].

We may, therefore, rewrite the inclusion (4.4) as

(4.5) [ζ̇(t) ∘ f_Y(t,Y(t)) + ζ(t) ∘ j_Y(t,Y(t),Ẏ(t)), ζ(t) ∘ f_Y(t,Y(t))] belongs to ∂F*(t,Y(t),Ẏ(t)) a.e.

From equation (4.2) we may derive

j_Y(t,Y,V) = f_tY(t,Y) + f_YY(t,Y) ∘ V.

It follows that the inclusion (4.5) may be written as

[ζ̇ ∘ f_Y(t,Y) + ζ ∘ f_tY(t,Y) + ζ ∘ f_YY(t,Y) ∘ Ẏ, ζ ∘ f_Y(t,Y)] belongs to ∂F*(t,Y,Ẏ) a.e.

To demonstrate that Y(·) is an extremal, it suffices to show that

(d/dt)[ζ(t) ∘ f_Y(t,Y(t))] = ζ̇ ∘ f_Y(t,Y) + ζ ∘ f_tY(t,Y) + ζ ∘ f_YY(t,Y) ∘ Ẏ.

For notational convenience, we set G(t,Y,ζ) = ζ ∘ f_Y(t,Y). Then

(d/dt)[ζ(t) ∘ f_Y(t,Y(t))] = G_t(t,Y,ζ) + G_Y(t,Y,ζ) ∘ Ẏ + G_ζ(t,Y,ζ) ∘ ζ̇.

We make the following observations: G_t(t,Y,ζ) = ζ ∘ f_Yt(t,Y) = ζ ∘ f_tY(t,Y), since f is C²; G_Y(t,Y,ζ) = ζ ∘ f_YY(t,Y), so that G_Y(t,Y,ζ) ∘ Ẏ = ζ ∘ f_YY(t,Y) ∘ Ẏ; and G_ζ(t,Y,ζ)(·) = G(t,Y,·), which is just the dual linear mapping [f_Y(t,Y)]*, so that G_ζ(t,Y,ζ) ∘ ζ̇ = G(t,Y,ζ̇) = ζ̇ ∘ f_Y(t,Y).

It follows from this and our previous remark that Y(·) is an extremal trajectory. The conclusion of the theorem follows provided that, given an extremal Y(·) for the integrand F*(t,Y,V), the image trajectory y(·) is an extremal for the integrand F(t,y,v). This demonstration is identical to the one given when we use the inverse transformation h(t,y) = Y instead of f(t,Y) = y. Q.E.D.

Let y(·), p(·) be trajectories for the Hamiltonian inclusion

(-ṗ(t), ẏ(t)) ∈ ∂H(t, y(t), p(t)).
We are interested in those transformations of coordinates (t,y,p) → (Y,P) for which the image trajectories Y(·), P(·) in turn satisfy a Hamiltonian inclusion

(-Ṗ(t), Ẏ(t)) ∈ ∂H*(t, Y(t), P(t))

for some Hamiltonian function H*(t,Y,P). For our purposes, we shall call the transformation canonical if this condition is met. For a more general discussion of canonical transformations in a classical setting, see Caratheodory [1]. An important class of canonical transformations, used widely in classical mechanics, are those derivable from a generating function. We are given a C², real valued function φ(t,y,Y). The transformation is to be induced through the following relations:

(4.6) p = φ_y(t,y,Y)
Proof: (54) Clearly, i f VF exists then VG exists and conversely. Furthermore, VF = VG + (<|>ty+<|> o v , <j> ), and from the definition of generalized gradient 9F(t,y,v) = 3G(t,y,v) + (* (t,y) + * (t.y)ov , <fr (t,y)) The extremals generated by F are those arcs y(*) for which there exists an arc £(•) such that (1,5) G 9F(t,y,y) . This condition is satisfied i f and only i f (|,C) e 8G(t,y,y) + (<|>ty + <fr o y , <jy) which in turn is satisfied i f and only i f [d_U(t)-<j> (t,y(t)) , S(t)-<|. (t,y(t))] G 8G(t,y(t),y(t)) a.e. . dt y y Therefore, y(*) is an extremal for the integrand G. Q.E.D. Proposition 1: 2 The transformation induced by a C generating function is canonical and the image trajectories Y(«)j P(*) satisfy . (-P(t),Y(t)) G 9Ht(t,Y(.t),P(-tj with H* given in (4.8). The proof is adapted from a similar argument given in Gelfand and Fomin [ 1] . Proof: We consider an integrand F of the form (55) F(t,y,p,v,u) = p-v - H(t,y,p) . Our state space is the 2n-dimensional space of pairs (y,p). We make the following observations: VF(t,y,p,v,u) exists i f and only i f VH(t,y,p) exists, and in that case VF(t,y,p,v,u) = [(0,v) - VH(t,y,p) , p , 0] . It follows from the definition of generalized gradients that 9F(t,y,p,v,u) = [(0,v) - 9H(t,y,p) , p , 0] . The Euler - Lagrange inclusion for F is (|(t),?(t),5(t),?(t)) G 9F(t,y(t),p(t),y(t),p(t)) from which we may deduce the following relations: (4.9) ?(t) = p(t) ?(t) = 0 (p(t),0) G (0,y(t)) - 9H(t,y(t),p(t)) . The relations (4.9) are equivalent to (4.10) (-P(t),y(t)) G 3H(t,y(t),p(t)) . Therefore, extremals for the integrand F are identical with trajectories of the Hamiltonian inclusion (4.10). Let (y(*)>P(*)) denote an extremal for the integrand F. We will represent the transformation induced by equations (4.6) and (4.7) as Y=f(t,y,p) , P=g(t,y,p) . Then the image trajectories are Y(t) = f(t,y(t),p(t)) , P(t) = g(t,y(t),p(t)) . 
(56) We consider the mappings V(t,y,p,v,u) = f (t,y,p) + £ (t,y,p) o v + f (t,y,p ) o u y p and U(t,y,p,v,u) = g (t,y,p) + g (t,y,p) o v + g (t,y,p ) o u . u / p It may be verified that Y(t) = V(t,y(t),p(t),y(t),p(t)) and P(t) = U(t,y(t),p(t),y(t),p(t)) . With H* given by (4.8) we consider the integrand G(t,y,p,v,u) = g(t,y,p)-V(t,y,p,v,u) - H*(t , f(t,y,p) , g(t,y,p)) . The trajectories generated by the integrand G*(t,Y,P,V,U) = G(t,y,p,v,u) = P-V - H*(t,Y,P) are, by Theorem 1, the image trajectories under the transformation Y=f(t,y,p) , P=g(t,y,p) of the extremals for the integrand G. An argument identical to that given for equation (4.10) implies that the extremals for G*(t,Y,P,V,U) are the trajectories for the Hamiltonian inclusion (-P(t),Y(t)) G 8H*(t,Y(t),P(t)) . To complete the proof, i t suffices to show that F and G generate the same extremals. From equations (4.6), (4.7) and (4.8) we may deduce that (57) * t(t,y,Y) + <f>y(t,y,Y)-v + 4>Y(t,y,Y).V = H*(t,Y,P) - H(t,y,p) + p-v - P-V . For notational convenience, we set *(t,y,p) = <t>(t,y,f(t,y,p)) and derive the following: * t = * t + V f t J *y = *y + V f y > ^ p = V fp' • It follows that * t + * * v + ^ p - u = <ft + * Y ' f t + v v + ( v V v + V " u = 4>t + <l>Y'V + <f>Y-[ft + f v ° v + f pou] d>. + <j> -v + cb.,-V Y t T y rY = H*(t,Y,P) - H(t,y,p) + p-v - P-V . Therefore the conditions of Lemma 1 are met and we may conclude that F(t,y,p,v,u) = p-v - H(t,y,p) and G(t,y,p,v,u) = P-V - H*(t,Y,P) generate the same extremals. Q.E.D. Example: 2 We denote a generic element of R by (x,y). Let ft be the open set {(x,y) : x<0} . Define the multifunction E over [0,1] X ft by ( 5 8 ) E(t,x,y) = cb{(2x , xe _ t - |ln(-x)-t|)} U { ( o , -xe _ t - | ln(-x)-t|) } Figure 7 (2x,xe (0,-xe_t - |ln(-x)-t|) It is easily verified that the Hamiltonian for this multifunction is given by H(t3x,y,p,q) = 2xp + xqe t - q[ln(-x)-t| i f q+e^ < 0 -xqe - q|ln(-x)-t| i f q+e^ > 0 . 
The Hamiltonian system is then defined provided that we specify the boundary conditions for (x,y,p,q) satisfy x < 0. We will attempt to obtain information about the solutions to this system with the help of canonical transformations. We consider the generating function ft(t,x,y,u,v) = -xu - yv . Then the new coordinates (u,v,£,r;) are related to (x^pjq) by and (p>q) = ( f l x» ny) = O u > - v ) ( ? , ? ) = (-n u,-fi v) = (x,y) (59) The Hamiltonian, defined for al l (t,u,v,£,£) such that t G [0 ,1] and c; < 0, satisfies H*(t,u,v,?,£) = H(t,x,y,p,q) = -2<;u - ?ve + v|ln(-?)-t| i f v+e u > 0 = cve - t + v|ln(-£)-t| i f v+e1 < 0 . This Hamiltonian in turn may be simplified with the aid of the canonical transformation generated by the function ft(t,u,v,X,Y) = -(Y + e X)v - e t + Xu . The new coordinates (X,Y,P,Q) are related to (u,v,?,C) by (C,E) = C«u,nv) = (-et+X,-Y-eX) and (-P,-Q) = (ft x,ft Y) = (-veX-uet+X,-v) . These relations imply that X = ln ( - e ) - t ; Y = + e _ t ; Q = V ; P = -ve _ t? - u? . The new Hamiltonian H**, is calculated from equation ( 4 . 8 ) ; H**(t,x,Y,p,Q) = n t + H(t,u,v,nu,nv) uc - 2u£ - ?ve 1 + v X| i f v+e^i > 0 uz + Cve 1 + v|X i f v+etu < 0 It follows, after simplifying, that (60) H**(t,X,Y,P,Q) = P + Q|X[ i f Pe X > 0 = -P + Q | X | i f Pe"X < 0 . t+X 2 2 We notice that since £ = -e , H** is defined on [0,1] X R X R : To begin the analysis of extremal trajectories for this Hamiltonian system, we notice that H** is independent of Y which implies that Q is constant, Q = c. The actual value of c depends on the boundary conditions (X(0),Y(0),Q(1),P(1)) which in turn are specified by transforming the boundary conditions (x(0),y(0),p(l),q(l)) for the original system. Theorem 1 from F.H. Clarke [ 1] implies that H** is constant. The trajectories are, therefore, easily identifiable. This system is analysed for the case c = -1 in F.H. Clarke [ 1] and the (X,P) phase plane is given below. 
[Figure 8: the (X,P) phase plane for the case c = −1.]

(61)

As an example to illustrate the inverse transformations, it may be verified that the trajectories (X(t),Y(t),P(t),Q(t)) given by

    X(t) = −|t − ½| ;  Y(t) = ∫₀ᵗ |X(s)| ds ;  P(t) = ½ − t ;  Q(t) = −1

with boundary data

    (X(0),Y(0)) = (−½, 0) ;  (P(1),Q(1)) = (−½, −1)

are transformed into the trajectories

    x(t) = −e^{2t−½}                  if 0 ≤ t ≤ ½
         = −e^{½}                     if ½ ≤ t ≤ 1

    p(t) = (t−½)e^{½−2t} − e^{−t}     if 0 ≤ t ≤ ½
         = (t−½)e^{−½} − e^{−t}       if ½ ≤ t ≤ 1

    y(t) = −e^{−½} + ∫₀ᵗ [x(s)e^{−s} − |ln(−x(s))−s|] ds                if 0 ≤ t ≤ ½
         = −e^{−½} + ∫₀^{½} [x(s)e^{−s} − |ln(−x(s))−s|] ds
                   + ∫_{½}^{t} [−x(s)e^{−s} − |ln(−x(s))−s|] ds         if ½ ≤ t ≤ 1

    q(t) = 1 ,  0 ≤ t ≤ 1

with boundary data

    (x(0),y(0)) = (−e^{−½}, −e^{−½}) ;  (p(1),q(1)) = (½e^{−½} − e^{−1}, 1) .

(62)

BIBLIOGRAPHY

C. Caratheodory:
[1] Calculus of Variations and Partial Differential Equations of the First Order, Part 1, Holden-Day, San Francisco (1965).

F.H. Clarke:
[1] Optimal Control and the True Hamiltonian, to appear.
[2] Generalized Gradients and Applications, Transactions Amer. Math. Soc. 205 (1975), 247-262.
[3] Generalized Gradients of Lipschitz Functionals, Advances in Math., to appear (Tech. Rep. #1687, Mathematics Research Center, Madison, Wisconsin).
[4] Admissible Relaxation in Variational and Control Problems, J. Math. Anal. Appl. 51 (1975), 557-576.
[5] Necessary Conditions for a General Control Problem, in Calculus of Variations and Control Theory (edited by D.L. Russell), Mathematics Research Center (University of Wisconsin-Madison) Pub. No. 36, Academic Press, N.Y. (1976).
[6] The Euler-Lagrange Differential Inclusion, J. Differential Equations 19 (1975), 80-90.

R. Courant:
[1] Calculus of Variations, Courant Institute, New York University, N.Y. (1962).

R. Courant and D. Hilbert:
[1] Methods of Mathematical Physics, Volume 2, Interscience, N.Y. (1962).

H. Federer:
[1] Geometric Measure Theory, Springer-Verlag (1969).
W.H. Fleming and R.W. Rishel:
[1] Deterministic and Stochastic Optimal Control, Springer-Verlag (1975).

(63)

I.M. Gelfand and S.V. Fomin:
[1] Calculus of Variations (translated by R. Silverman), Prentice-Hall, Englewood Cliffs, N.J. (1963).

G. Lebourg:
[1] Comptes Rendus de l'Académie des Sciences de Paris, November 10 (1975).

H. Rund:
[1] The Hamilton-Jacobi Theory in the Calculus of Variations, Van Nostrand, London (1966).
Item Metadata

Title | A Hamilton-Jacobi approach to the differential inclusion problem
Creator | Offin, Daniel C.
Publisher | University of British Columbia
Date Issued | 1979
Subject | Hamilton-Jacobi equations; Calculus of variations
Genre | Thesis/Dissertation
Type | Text
Language | eng
Date Available | 2010-03-03
Provider | Vancouver : University of British Columbia Library
Rights | For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use.
DOI | 10.14288/1.0080153
URI | http://hdl.handle.net/2429/21432
Degree | Master of Science - MSc
Program | Mathematics
Affiliation | Science, Faculty of; Mathematics, Department of
Degree Grantor | University of British Columbia
Campus | UBCV
Scholarly Level | Graduate